DPDK patches and discussions
* [PATCH 00/11] Unify the PMD coding style
@ 2023-10-07  2:33 Chaoyong He
  2023-10-07  2:33 ` [PATCH 01/11] net/nfp: explicitly compare to null and 0 Chaoyong He
                   ` (11 more replies)
  0 siblings, 12 replies; 40+ messages in thread
From: Chaoyong He @ 2023-10-07  2:33 UTC (permalink / raw)
  To: dev; +Cc: oss-drivers, Chaoyong He

This patch series aims to unify the coding style of the NFP PMD, making
the logic follow one set of rules so the driver is easier to understand
and extend.

Chaoyong He (11):
  net/nfp: explicitly compare to null and 0
  net/nfp: unify the indent coding style
  net/nfp: unify the type of integer variable
  net/nfp: standardize the local variable coding style
  net/nfp: adjust the log statement
  net/nfp: standardize the comment style
  net/nfp: standardize the blank character
  net/nfp: unify the guard line of header files
  net/nfp: rename some parameters and variables
  net/nfp: adjust logic to make it more readable
  net/nfp: refactor the meson build file

 drivers/net/nfp/flower/nfp_flower.c           |  15 +-
 drivers/net/nfp/flower/nfp_flower.h           |  34 +-
 drivers/net/nfp/flower/nfp_flower_cmsg.c      |  18 +-
 drivers/net/nfp/flower/nfp_flower_cmsg.h      |  62 +-
 drivers/net/nfp/flower/nfp_flower_ctrl.c      |  36 +-
 drivers/net/nfp/flower/nfp_flower_ctrl.h      |   6 +-
 .../net/nfp/flower/nfp_flower_representor.c   |  36 +-
 .../net/nfp/flower/nfp_flower_representor.h   |   8 +-
 drivers/net/nfp/meson.build                   |  23 +-
 drivers/net/nfp/nfd3/nfp_nfd3.h               |  39 +-
 drivers/net/nfp/nfd3/nfp_nfd3_dp.c            |  26 +-
 drivers/net/nfp/nfdk/nfp_nfdk.h               |  43 +-
 drivers/net/nfp/nfdk/nfp_nfdk_dp.c            |  12 +-
 drivers/net/nfp/nfp_common.c                  | 763 +++++++++---------
 drivers/net/nfp/nfp_common.h                  | 167 ++--
 drivers/net/nfp/nfp_cpp_bridge.c              | 135 ++--
 drivers/net/nfp/nfp_cpp_bridge.h              |   8 +-
 drivers/net/nfp/nfp_ctrl.h                    |  34 +-
 drivers/net/nfp/nfp_ethdev.c                  | 307 ++++---
 drivers/net/nfp/nfp_ethdev_vf.c               | 191 ++---
 drivers/net/nfp/nfp_flow.c                    | 229 +++---
 drivers/net/nfp/nfp_flow.h                    |  23 +-
 drivers/net/nfp/nfp_logs.h                    |   7 +-
 drivers/net/nfp/nfp_rxtx.c                    | 287 +++----
 drivers/net/nfp/nfp_rxtx.h                    |  36 +-
 25 files changed, 1235 insertions(+), 1310 deletions(-)

-- 
2.39.1



* [PATCH 01/11] net/nfp: explicitly compare to null and 0
  2023-10-07  2:33 [PATCH 00/11] Unify the PMD coding style Chaoyong He
@ 2023-10-07  2:33 ` Chaoyong He
  2023-10-07  2:33 ` [PATCH 02/11] net/nfp: unify the indent coding style Chaoyong He
                   ` (10 subsequent siblings)
  11 siblings, 0 replies; 40+ messages in thread
From: Chaoyong He @ 2023-10-07  2:33 UTC (permalink / raw)
  To: dev; +Cc: oss-drivers, Chaoyong He, Long Wu, Peng Zhang

To comply with the coding standard, make pointer variables explicitly
compare to 'NULL' and integer variables explicitly compare to '0'.
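
For illustration, a minimal sketch of the rule (the helper below is
hypothetical; the mbuf type, the free function and the RSS2 capability
flag are the real ones touched by the hunks that follow):

	/* Pointers compare against NULL and bit-mask results against 0,
	 * instead of relying on implicit boolean conversion.
	 */
	static bool
	nfp_example_rule(struct rte_mbuf *mbuf, uint32_t cap)
	{
		if (mbuf != NULL)	/* not: if (mbuf) */
			rte_pktmbuf_free_seg(mbuf);

		return (cap & NFP_NET_CFG_CTRL_RSS2) != 0;
	}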

Signed-off-by: Chaoyong He <chaoyong.he@corigine.com>
Reviewed-by: Long Wu <long.wu@corigine.com>
Reviewed-by: Peng Zhang <peng.zhang@corigine.com>
---
 drivers/net/nfp/flower/nfp_flower.c      |   6 +-
 drivers/net/nfp/flower/nfp_flower_ctrl.c |   6 +-
 drivers/net/nfp/nfp_common.c             | 144 +++++++++++------------
 drivers/net/nfp/nfp_cpp_bridge.c         |   2 +-
 drivers/net/nfp/nfp_ethdev.c             |  38 +++---
 drivers/net/nfp/nfp_ethdev_vf.c          |  14 +--
 drivers/net/nfp/nfp_flow.c               |  90 +++++++-------
 drivers/net/nfp/nfp_rxtx.c               |  28 ++---
 8 files changed, 165 insertions(+), 163 deletions(-)

diff --git a/drivers/net/nfp/flower/nfp_flower.c b/drivers/net/nfp/flower/nfp_flower.c
index 98e6f7f927..3ddaf0f28d 100644
--- a/drivers/net/nfp/flower/nfp_flower.c
+++ b/drivers/net/nfp/flower/nfp_flower.c
@@ -69,7 +69,7 @@ nfp_pf_repr_disable_queues(struct rte_eth_dev *dev)
 		new_ctrl &= ~NFP_NET_CFG_CTRL_RINGCFG;
 
 	/* If an error when reconfig we avoid to change hw state */
-	if (nfp_net_reconfig(hw, new_ctrl, update) < 0)
+	if (nfp_net_reconfig(hw, new_ctrl, update) != 0)
 		return;
 
 	hw->ctrl = new_ctrl;
@@ -100,7 +100,7 @@ nfp_flower_pf_start(struct rte_eth_dev *dev)
 
 	update |= NFP_NET_CFG_UPDATE_RSS;
 
-	if (hw->cap & NFP_NET_CFG_CTRL_RSS2)
+	if ((hw->cap & NFP_NET_CFG_CTRL_RSS2) != 0)
 		new_ctrl |= NFP_NET_CFG_CTRL_RSS2;
 	else
 		new_ctrl |= NFP_NET_CFG_CTRL_RSS;
@@ -110,7 +110,7 @@ nfp_flower_pf_start(struct rte_eth_dev *dev)
 
 	update |= NFP_NET_CFG_UPDATE_GEN | NFP_NET_CFG_UPDATE_RING;
 
-	if (hw->cap & NFP_NET_CFG_CTRL_RINGCFG)
+	if ((hw->cap & NFP_NET_CFG_CTRL_RINGCFG) != 0)
 		new_ctrl |= NFP_NET_CFG_CTRL_RINGCFG;
 
 	nn_cfg_writel(hw, NFP_NET_CFG_CTRL, new_ctrl);
diff --git a/drivers/net/nfp/flower/nfp_flower_ctrl.c b/drivers/net/nfp/flower/nfp_flower_ctrl.c
index c5282053cf..b564e7cd73 100644
--- a/drivers/net/nfp/flower/nfp_flower_ctrl.c
+++ b/drivers/net/nfp/flower/nfp_flower_ctrl.c
@@ -103,7 +103,7 @@ nfp_flower_ctrl_vnic_recv(void *rx_queue,
 		}
 
 		/* Filling the received mbuf with packet info */
-		if (hw->rx_offset)
+		if (hw->rx_offset != 0)
 			mb->data_off = RTE_PKTMBUF_HEADROOM + hw->rx_offset;
 		else
 			mb->data_off = RTE_PKTMBUF_HEADROOM + NFP_DESC_META_LEN(rxds);
@@ -195,7 +195,7 @@ nfp_flower_ctrl_vnic_nfd3_xmit(struct nfp_app_fw_flower *app_fw_flower,
 
 	lmbuf = &txq->txbufs[txq->wr_p].mbuf;
 	RTE_MBUF_PREFETCH_TO_FREE(*lmbuf);
-	if (*lmbuf)
+	if (*lmbuf != NULL)
 		rte_pktmbuf_free_seg(*lmbuf);
 
 	*lmbuf = mbuf;
@@ -337,7 +337,7 @@ nfp_flower_ctrl_vnic_nfdk_xmit(struct nfp_app_fw_flower *app_fw_flower,
 	}
 
 	txq->wr_p = D_IDX(txq, txq->wr_p + used_descs);
-	if (txq->wr_p % NFDK_TX_DESC_BLOCK_CNT)
+	if (txq->wr_p % NFDK_TX_DESC_BLOCK_CNT != 0)
 		txq->data_pending += mbuf->pkt_len;
 	else
 		txq->data_pending = 0;
diff --git a/drivers/net/nfp/nfp_common.c b/drivers/net/nfp/nfp_common.c
index 5683afc40a..36752583dd 100644
--- a/drivers/net/nfp/nfp_common.c
+++ b/drivers/net/nfp/nfp_common.c
@@ -221,7 +221,7 @@ __nfp_net_reconfig(struct nfp_net_hw *hw, uint32_t update)
 		new = nn_cfg_readl(hw, NFP_NET_CFG_UPDATE);
 		if (new == 0)
 			break;
-		if (new & NFP_NET_CFG_UPDATE_ERR) {
+		if ((new & NFP_NET_CFG_UPDATE_ERR) != 0) {
 			PMD_INIT_LOG(ERR, "Reconfig error: 0x%08x", new);
 			return -1;
 		}
@@ -390,18 +390,18 @@ nfp_net_configure(struct rte_eth_dev *dev)
 	rxmode = &dev_conf->rxmode;
 	txmode = &dev_conf->txmode;
 
-	if (rxmode->mq_mode & RTE_ETH_MQ_RX_RSS_FLAG)
+	if ((rxmode->mq_mode & RTE_ETH_MQ_RX_RSS_FLAG) != 0)
 		rxmode->offloads |= RTE_ETH_RX_OFFLOAD_RSS_HASH;
 
 	/* Checking TX mode */
-	if (txmode->mq_mode) {
+	if (txmode->mq_mode != RTE_ETH_MQ_TX_NONE) {
 		PMD_INIT_LOG(INFO, "TX mq_mode DCB and VMDq not supported");
 		return -EINVAL;
 	}
 
 	/* Checking RX mode */
-	if (rxmode->mq_mode & RTE_ETH_MQ_RX_RSS_FLAG &&
-	    !(hw->cap & NFP_NET_CFG_CTRL_RSS_ANY)) {
+	if ((rxmode->mq_mode & RTE_ETH_MQ_RX_RSS_FLAG) != 0 &&
+	    (hw->cap & NFP_NET_CFG_CTRL_RSS_ANY) == 0) {
 		PMD_INIT_LOG(INFO, "RSS not supported");
 		return -EINVAL;
 	}
@@ -493,11 +493,11 @@ nfp_net_disable_queues(struct rte_eth_dev *dev)
 	update = NFP_NET_CFG_UPDATE_GEN | NFP_NET_CFG_UPDATE_RING |
 		 NFP_NET_CFG_UPDATE_MSIX;
 
-	if (hw->cap & NFP_NET_CFG_CTRL_RINGCFG)
+	if ((hw->cap & NFP_NET_CFG_CTRL_RINGCFG) != 0)
 		new_ctrl &= ~NFP_NET_CFG_CTRL_RINGCFG;
 
 	/* If an error when reconfig we avoid to change hw state */
-	if (nfp_net_reconfig(hw, new_ctrl, update) < 0)
+	if (nfp_net_reconfig(hw, new_ctrl, update) != 0)
 		return;
 
 	hw->ctrl = new_ctrl;
@@ -537,8 +537,8 @@ nfp_net_set_mac_addr(struct rte_eth_dev *dev, struct rte_ether_addr *mac_addr)
 	uint32_t update, ctrl;
 
 	hw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private);
-	if ((hw->ctrl & NFP_NET_CFG_CTRL_ENABLE) &&
-	    !(hw->cap & NFP_NET_CFG_CTRL_LIVE_ADDR)) {
+	if ((hw->ctrl & NFP_NET_CFG_CTRL_ENABLE) != 0 &&
+	    (hw->cap & NFP_NET_CFG_CTRL_LIVE_ADDR) == 0) {
 		PMD_INIT_LOG(INFO, "MAC address unable to change when"
 				  " port enabled");
 		return -EBUSY;
@@ -550,10 +550,10 @@ nfp_net_set_mac_addr(struct rte_eth_dev *dev, struct rte_ether_addr *mac_addr)
 	/* Signal the NIC about the change */
 	update = NFP_NET_CFG_UPDATE_MACADDR;
 	ctrl = hw->ctrl;
-	if ((hw->ctrl & NFP_NET_CFG_CTRL_ENABLE) &&
-	    (hw->cap & NFP_NET_CFG_CTRL_LIVE_ADDR))
+	if ((hw->ctrl & NFP_NET_CFG_CTRL_ENABLE) != 0 &&
+	    (hw->cap & NFP_NET_CFG_CTRL_LIVE_ADDR) != 0)
 		ctrl |= NFP_NET_CFG_CTRL_LIVE_ADDR;
-	if (nfp_net_reconfig(hw, ctrl, update) < 0) {
+	if (nfp_net_reconfig(hw, ctrl, update) != 0) {
 		PMD_INIT_LOG(INFO, "MAC address update failed");
 		return -EIO;
 	}
@@ -568,7 +568,7 @@ nfp_configure_rx_interrupt(struct rte_eth_dev *dev,
 	int i;
 
 	if (rte_intr_vec_list_alloc(intr_handle, "intr_vec",
-				    dev->data->nb_rx_queues)) {
+				    dev->data->nb_rx_queues) != 0) {
 		PMD_INIT_LOG(ERR, "Failed to allocate %d rx_queues"
 			     " intr_vec", dev->data->nb_rx_queues);
 		return -ENOMEM;
@@ -580,7 +580,7 @@ nfp_configure_rx_interrupt(struct rte_eth_dev *dev,
 		PMD_INIT_LOG(INFO, "VF: enabling RX interrupt with UIO");
 		/* UIO just supports one queue and no LSC*/
 		nn_cfg_writeb(hw, NFP_NET_CFG_RXR_VEC(0), 0);
-		if (rte_intr_vec_list_index_set(intr_handle, 0, 0))
+		if (rte_intr_vec_list_index_set(intr_handle, 0, 0) != 0)
 			return -1;
 	} else {
 		PMD_INIT_LOG(INFO, "VF: enabling RX interrupt with VFIO");
@@ -591,7 +591,7 @@ nfp_configure_rx_interrupt(struct rte_eth_dev *dev,
 			*/
 			nn_cfg_writeb(hw, NFP_NET_CFG_RXR_VEC(i), i + 1);
 			if (rte_intr_vec_list_index_set(intr_handle, i,
-							       i + 1))
+							       i + 1) != 0)
 				return -1;
 			PMD_INIT_LOG(DEBUG, "intr_vec[%d]= %d", i,
 				rte_intr_vec_list_index_get(intr_handle,
@@ -619,53 +619,53 @@ nfp_check_offloads(struct rte_eth_dev *dev)
 	rxmode = &dev_conf->rxmode;
 	txmode = &dev_conf->txmode;
 
-	if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_IPV4_CKSUM) {
-		if (hw->cap & NFP_NET_CFG_CTRL_RXCSUM)
+	if ((rxmode->offloads & RTE_ETH_RX_OFFLOAD_IPV4_CKSUM) != 0) {
+		if ((hw->cap & NFP_NET_CFG_CTRL_RXCSUM) != 0)
 			ctrl |= NFP_NET_CFG_CTRL_RXCSUM;
 	}
 
-	if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP)
+	if ((rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP) != 0)
 		nfp_net_enbable_rxvlan_cap(hw, &ctrl);
 
-	if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_QINQ_STRIP) {
-		if (hw->cap & NFP_NET_CFG_CTRL_RXQINQ)
+	if ((rxmode->offloads & RTE_ETH_RX_OFFLOAD_QINQ_STRIP) != 0) {
+		if ((hw->cap & NFP_NET_CFG_CTRL_RXQINQ) != 0)
 			ctrl |= NFP_NET_CFG_CTRL_RXQINQ;
 	}
 
 	hw->mtu = dev->data->mtu;
 
-	if (txmode->offloads & RTE_ETH_TX_OFFLOAD_VLAN_INSERT) {
-		if (hw->cap & NFP_NET_CFG_CTRL_TXVLAN_V2)
+	if ((txmode->offloads & RTE_ETH_TX_OFFLOAD_VLAN_INSERT) != 0) {
+		if ((hw->cap & NFP_NET_CFG_CTRL_TXVLAN_V2) != 0)
 			ctrl |= NFP_NET_CFG_CTRL_TXVLAN_V2;
-		else if (hw->cap & NFP_NET_CFG_CTRL_TXVLAN)
+		else if ((hw->cap & NFP_NET_CFG_CTRL_TXVLAN) != 0)
 			ctrl |= NFP_NET_CFG_CTRL_TXVLAN;
 	}
 
 	/* L2 broadcast */
-	if (hw->cap & NFP_NET_CFG_CTRL_L2BC)
+	if ((hw->cap & NFP_NET_CFG_CTRL_L2BC) != 0)
 		ctrl |= NFP_NET_CFG_CTRL_L2BC;
 
 	/* L2 multicast */
-	if (hw->cap & NFP_NET_CFG_CTRL_L2MC)
+	if ((hw->cap & NFP_NET_CFG_CTRL_L2MC) != 0)
 		ctrl |= NFP_NET_CFG_CTRL_L2MC;
 
 	/* TX checksum offload */
-	if (txmode->offloads & RTE_ETH_TX_OFFLOAD_IPV4_CKSUM ||
-	    txmode->offloads & RTE_ETH_TX_OFFLOAD_UDP_CKSUM ||
-	    txmode->offloads & RTE_ETH_TX_OFFLOAD_TCP_CKSUM)
+	if ((txmode->offloads & RTE_ETH_TX_OFFLOAD_IPV4_CKSUM) != 0 ||
+	    (txmode->offloads & RTE_ETH_TX_OFFLOAD_UDP_CKSUM) != 0 ||
+	    (txmode->offloads & RTE_ETH_TX_OFFLOAD_TCP_CKSUM) != 0)
 		ctrl |= NFP_NET_CFG_CTRL_TXCSUM;
 
 	/* LSO offload */
-	if (txmode->offloads & RTE_ETH_TX_OFFLOAD_TCP_TSO ||
-	    txmode->offloads & RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO) {
-		if (hw->cap & NFP_NET_CFG_CTRL_LSO)
+	if ((txmode->offloads & RTE_ETH_TX_OFFLOAD_TCP_TSO) != 0 ||
+	    (txmode->offloads & RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO) != 0) {
+		if ((hw->cap & NFP_NET_CFG_CTRL_LSO) != 0)
 			ctrl |= NFP_NET_CFG_CTRL_LSO;
 		else
 			ctrl |= NFP_NET_CFG_CTRL_LSO2;
 	}
 
 	/* RX gather */
-	if (txmode->offloads & RTE_ETH_TX_OFFLOAD_MULTI_SEGS)
+	if ((txmode->offloads & RTE_ETH_TX_OFFLOAD_MULTI_SEGS) != 0)
 		ctrl |= NFP_NET_CFG_CTRL_GATHER;
 
 	return ctrl;
@@ -693,7 +693,7 @@ nfp_net_promisc_enable(struct rte_eth_dev *dev)
 		return -ENOTSUP;
 	}
 
-	if (hw->ctrl & NFP_NET_CFG_CTRL_PROMISC) {
+	if ((hw->ctrl & NFP_NET_CFG_CTRL_PROMISC) != 0) {
 		PMD_DRV_LOG(INFO, "Promiscuous mode already enabled");
 		return 0;
 	}
@@ -706,7 +706,7 @@ nfp_net_promisc_enable(struct rte_eth_dev *dev)
 	 * it can not fail ...
 	 */
 	ret = nfp_net_reconfig(hw, new_ctrl, update);
-	if (ret < 0)
+	if (ret != 0)
 		return ret;
 
 	hw->ctrl = new_ctrl;
@@ -736,7 +736,7 @@ nfp_net_promisc_disable(struct rte_eth_dev *dev)
 	 * assuming it can not fail ...
 	 */
 	ret = nfp_net_reconfig(hw, new_ctrl, update);
-	if (ret < 0)
+	if (ret != 0)
 		return ret;
 
 	hw->ctrl = new_ctrl;
@@ -770,7 +770,7 @@ nfp_net_link_update(struct rte_eth_dev *dev, __rte_unused int wait_to_complete)
 
 	memset(&link, 0, sizeof(struct rte_eth_link));
 
-	if (nn_link_status & NFP_NET_CFG_STS_LINK)
+	if ((nn_link_status & NFP_NET_CFG_STS_LINK) != 0)
 		link.link_status = RTE_ETH_LINK_UP;
 
 	link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
@@ -802,7 +802,7 @@ nfp_net_link_update(struct rte_eth_dev *dev, __rte_unused int wait_to_complete)
 
 	ret = rte_eth_linkstatus_set(dev, &link);
 	if (ret == 0) {
-		if (link.link_status)
+		if (link.link_status != 0)
 			PMD_DRV_LOG(INFO, "NIC Link is Up");
 		else
 			PMD_DRV_LOG(INFO, "NIC Link is Down");
@@ -907,7 +907,7 @@ nfp_net_stats_get(struct rte_eth_dev *dev, struct rte_eth_stats *stats)
 
 	nfp_dev_stats.imissed -= hw->eth_stats_base.imissed;
 
-	if (stats) {
+	if (stats != NULL) {
 		memcpy(stats, &nfp_dev_stats, sizeof(*stats));
 		return 0;
 	}
@@ -1229,32 +1229,32 @@ nfp_net_infos_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 	/* Next should change when PF support is implemented */
 	dev_info->max_mac_addrs = 1;
 
-	if (hw->cap & (NFP_NET_CFG_CTRL_RXVLAN | NFP_NET_CFG_CTRL_RXVLAN_V2))
+	if ((hw->cap & (NFP_NET_CFG_CTRL_RXVLAN | NFP_NET_CFG_CTRL_RXVLAN_V2)) != 0)
 		dev_info->rx_offload_capa = RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
 
-	if (hw->cap & NFP_NET_CFG_CTRL_RXQINQ)
+	if ((hw->cap & NFP_NET_CFG_CTRL_RXQINQ) != 0)
 		dev_info->rx_offload_capa |= RTE_ETH_RX_OFFLOAD_QINQ_STRIP;
 
-	if (hw->cap & NFP_NET_CFG_CTRL_RXCSUM)
+	if ((hw->cap & NFP_NET_CFG_CTRL_RXCSUM) != 0)
 		dev_info->rx_offload_capa |= RTE_ETH_RX_OFFLOAD_IPV4_CKSUM |
 					     RTE_ETH_RX_OFFLOAD_UDP_CKSUM |
 					     RTE_ETH_RX_OFFLOAD_TCP_CKSUM;
 
-	if (hw->cap & (NFP_NET_CFG_CTRL_TXVLAN | NFP_NET_CFG_CTRL_TXVLAN_V2))
+	if ((hw->cap & (NFP_NET_CFG_CTRL_TXVLAN | NFP_NET_CFG_CTRL_TXVLAN_V2)) != 0)
 		dev_info->tx_offload_capa = RTE_ETH_TX_OFFLOAD_VLAN_INSERT;
 
-	if (hw->cap & NFP_NET_CFG_CTRL_TXCSUM)
+	if ((hw->cap & NFP_NET_CFG_CTRL_TXCSUM) != 0)
 		dev_info->tx_offload_capa |= RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
 					     RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
 					     RTE_ETH_TX_OFFLOAD_TCP_CKSUM;
 
-	if (hw->cap & NFP_NET_CFG_CTRL_LSO_ANY) {
+	if ((hw->cap & NFP_NET_CFG_CTRL_LSO_ANY) != 0) {
 		dev_info->tx_offload_capa |= RTE_ETH_TX_OFFLOAD_TCP_TSO;
-		if (hw->cap & NFP_NET_CFG_CTRL_VXLAN)
+		if ((hw->cap & NFP_NET_CFG_CTRL_VXLAN) != 0)
 			dev_info->tx_offload_capa |= RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO;
 	}
 
-	if (hw->cap & NFP_NET_CFG_CTRL_GATHER)
+	if ((hw->cap & NFP_NET_CFG_CTRL_GATHER) != 0)
 		dev_info->tx_offload_capa |= RTE_ETH_TX_OFFLOAD_MULTI_SEGS;
 
 	cap_extend = nn_cfg_readl(hw, NFP_NET_CFG_CAP_WORD1);
@@ -1297,7 +1297,7 @@ nfp_net_infos_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 		.nb_mtu_seg_max = NFP_TX_MAX_MTU_SEG,
 	};
 
-	if (hw->cap & NFP_NET_CFG_CTRL_RSS_ANY) {
+	if ((hw->cap & NFP_NET_CFG_CTRL_RSS_ANY) != 0) {
 		dev_info->rx_offload_capa |= RTE_ETH_RX_OFFLOAD_RSS_HASH;
 
 		dev_info->flow_type_rss_offloads = RTE_ETH_RSS_IPV4 |
@@ -1431,7 +1431,7 @@ nfp_net_dev_link_status_print(struct rte_eth_dev *dev)
 	struct rte_eth_link link;
 
 	rte_eth_linkstatus_get(dev, &link);
-	if (link.link_status)
+	if (link.link_status != 0)
 		PMD_DRV_LOG(INFO, "Port %d: Link Up - speed %u Mbps - %s",
 			    dev->data->port_id, link.link_speed,
 			    link.link_duplex == RTE_ETH_LINK_FULL_DUPLEX
@@ -1462,7 +1462,7 @@ nfp_net_irq_unmask(struct rte_eth_dev *dev)
 	hw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private);
 	pci_dev = RTE_ETH_DEV_TO_PCI(dev);
 
-	if (hw->ctrl & NFP_NET_CFG_CTRL_MSIXAUTO) {
+	if ((hw->ctrl & NFP_NET_CFG_CTRL_MSIXAUTO) != 0) {
 		/* If MSI-X auto-masking is used, clear the entry */
 		rte_wmb();
 		rte_intr_ack(pci_dev->intr_handle);
@@ -1524,7 +1524,7 @@ nfp_net_dev_interrupt_handler(void *param)
 
 	if (rte_eal_alarm_set(timeout * 1000,
 			      nfp_net_dev_interrupt_delayed_handler,
-			      (void *)dev) < 0) {
+			      (void *)dev) != 0) {
 		PMD_INIT_LOG(ERR, "Error setting alarm");
 		/* Unmasking */
 		nfp_net_irq_unmask(dev);
@@ -1577,16 +1577,16 @@ nfp_net_vlan_offload_set(struct rte_eth_dev *dev, int mask)
 	nfp_net_enbable_rxvlan_cap(hw, &rxvlan_ctrl);
 
 	/* VLAN stripping setting */
-	if (mask & RTE_ETH_VLAN_STRIP_MASK) {
-		if (dev_conf->rxmode.offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP)
+	if ((mask & RTE_ETH_VLAN_STRIP_MASK) != 0) {
+		if ((dev_conf->rxmode.offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP) != 0)
 			new_ctrl |= rxvlan_ctrl;
 		else
 			new_ctrl &= ~rxvlan_ctrl;
 	}
 
 	/* QinQ stripping setting */
-	if (mask & RTE_ETH_QINQ_STRIP_MASK) {
-		if (dev_conf->rxmode.offloads & RTE_ETH_RX_OFFLOAD_QINQ_STRIP)
+	if ((mask & RTE_ETH_QINQ_STRIP_MASK) != 0) {
+		if ((dev_conf->rxmode.offloads & RTE_ETH_RX_OFFLOAD_QINQ_STRIP) != 0)
 			new_ctrl |= NFP_NET_CFG_CTRL_RXQINQ;
 		else
 			new_ctrl &= ~NFP_NET_CFG_CTRL_RXQINQ;
@@ -1674,7 +1674,7 @@ nfp_net_reta_update(struct rte_eth_dev *dev,
 
 	update = NFP_NET_CFG_UPDATE_RSS;
 
-	if (nfp_net_reconfig(hw, hw->ctrl, update) < 0)
+	if (nfp_net_reconfig(hw, hw->ctrl, update) != 0)
 		return -EIO;
 
 	return 0;
@@ -1748,28 +1748,28 @@ nfp_net_rss_hash_write(struct rte_eth_dev *dev,
 
 	rss_hf = rss_conf->rss_hf;
 
-	if (rss_hf & RTE_ETH_RSS_IPV4)
+	if ((rss_hf & RTE_ETH_RSS_IPV4) != 0)
 		cfg_rss_ctrl |= NFP_NET_CFG_RSS_IPV4;
 
-	if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_TCP)
+	if ((rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_TCP) != 0)
 		cfg_rss_ctrl |= NFP_NET_CFG_RSS_IPV4_TCP;
 
-	if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_UDP)
+	if ((rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_UDP) != 0)
 		cfg_rss_ctrl |= NFP_NET_CFG_RSS_IPV4_UDP;
 
-	if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_SCTP)
+	if ((rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_SCTP) != 0)
 		cfg_rss_ctrl |= NFP_NET_CFG_RSS_IPV4_SCTP;
 
-	if (rss_hf & RTE_ETH_RSS_IPV6)
+	if ((rss_hf & RTE_ETH_RSS_IPV6) != 0)
 		cfg_rss_ctrl |= NFP_NET_CFG_RSS_IPV6;
 
-	if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_TCP)
+	if ((rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_TCP) != 0)
 		cfg_rss_ctrl |= NFP_NET_CFG_RSS_IPV6_TCP;
 
-	if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_UDP)
+	if ((rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_UDP) != 0)
 		cfg_rss_ctrl |= NFP_NET_CFG_RSS_IPV6_UDP;
 
-	if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_SCTP)
+	if ((rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_SCTP) != 0)
 		cfg_rss_ctrl |= NFP_NET_CFG_RSS_IPV6_SCTP;
 
 	cfg_rss_ctrl |= NFP_NET_CFG_RSS_MASK;
@@ -1814,7 +1814,7 @@ nfp_net_rss_hash_update(struct rte_eth_dev *dev,
 
 	update = NFP_NET_CFG_UPDATE_RSS;
 
-	if (nfp_net_reconfig(hw, hw->ctrl, update) < 0)
+	if (nfp_net_reconfig(hw, hw->ctrl, update) != 0)
 		return -EIO;
 
 	return 0;
@@ -1838,28 +1838,28 @@ nfp_net_rss_hash_conf_get(struct rte_eth_dev *dev,
 	rss_hf = rss_conf->rss_hf;
 	cfg_rss_ctrl = nn_cfg_readl(hw, NFP_NET_CFG_RSS_CTRL);
 
-	if (cfg_rss_ctrl & NFP_NET_CFG_RSS_IPV4)
+	if ((cfg_rss_ctrl & NFP_NET_CFG_RSS_IPV4) != 0)
 		rss_hf |= RTE_ETH_RSS_IPV4;
 
-	if (cfg_rss_ctrl & NFP_NET_CFG_RSS_IPV4_TCP)
+	if ((cfg_rss_ctrl & NFP_NET_CFG_RSS_IPV4_TCP) != 0)
 		rss_hf |= RTE_ETH_RSS_NONFRAG_IPV4_TCP;
 
-	if (cfg_rss_ctrl & NFP_NET_CFG_RSS_IPV6_TCP)
+	if ((cfg_rss_ctrl & NFP_NET_CFG_RSS_IPV6_TCP) != 0)
 		rss_hf |= RTE_ETH_RSS_NONFRAG_IPV6_TCP;
 
-	if (cfg_rss_ctrl & NFP_NET_CFG_RSS_IPV4_UDP)
+	if ((cfg_rss_ctrl & NFP_NET_CFG_RSS_IPV4_UDP) != 0)
 		rss_hf |= RTE_ETH_RSS_NONFRAG_IPV4_UDP;
 
-	if (cfg_rss_ctrl & NFP_NET_CFG_RSS_IPV6_UDP)
+	if ((cfg_rss_ctrl & NFP_NET_CFG_RSS_IPV6_UDP) != 0)
 		rss_hf |= RTE_ETH_RSS_NONFRAG_IPV6_UDP;
 
-	if (cfg_rss_ctrl & NFP_NET_CFG_RSS_IPV6)
+	if ((cfg_rss_ctrl & NFP_NET_CFG_RSS_IPV6) != 0)
 		rss_hf |= RTE_ETH_RSS_IPV6;
 
-	if (cfg_rss_ctrl & NFP_NET_CFG_RSS_IPV4_SCTP)
+	if ((cfg_rss_ctrl & NFP_NET_CFG_RSS_IPV4_SCTP) != 0)
 		rss_hf |= RTE_ETH_RSS_NONFRAG_IPV4_SCTP;
 
-	if (cfg_rss_ctrl & NFP_NET_CFG_RSS_IPV6_SCTP)
+	if ((cfg_rss_ctrl & NFP_NET_CFG_RSS_IPV6_SCTP) != 0)
 		rss_hf |= RTE_ETH_RSS_NONFRAG_IPV6_SCTP;
 
 	/* Propagate current RSS hash functions to caller */
diff --git a/drivers/net/nfp/nfp_cpp_bridge.c b/drivers/net/nfp/nfp_cpp_bridge.c
index ed9a946b0c..34764a8a32 100644
--- a/drivers/net/nfp/nfp_cpp_bridge.c
+++ b/drivers/net/nfp/nfp_cpp_bridge.c
@@ -70,7 +70,7 @@ nfp_map_service(uint32_t service_id)
 	rte_service_runstate_set(service_id, 1);
 	rte_service_component_runstate_set(service_id, 1);
 	rte_service_lcore_start(slcore);
-	if (rte_service_may_be_active(slcore))
+	if (rte_service_may_be_active(slcore) != 0)
 		PMD_INIT_LOG(INFO, "The service %s is running", service_name);
 	else
 		PMD_INIT_LOG(ERR, "The service %s is not running", service_name);
diff --git a/drivers/net/nfp/nfp_ethdev.c b/drivers/net/nfp/nfp_ethdev.c
index ebc5538291..12feec8eb4 100644
--- a/drivers/net/nfp/nfp_ethdev.c
+++ b/drivers/net/nfp/nfp_ethdev.c
@@ -89,7 +89,7 @@ nfp_net_start(struct rte_eth_dev *dev)
 			}
 		}
 		intr_vector = dev->data->nb_rx_queues;
-		if (rte_intr_efd_enable(intr_handle, intr_vector))
+		if (rte_intr_efd_enable(intr_handle, intr_vector) != 0)
 			return -1;
 
 		nfp_configure_rx_interrupt(dev, intr_handle);
@@ -113,7 +113,7 @@ nfp_net_start(struct rte_eth_dev *dev)
 	dev_conf = &dev->data->dev_conf;
 	rxmode = &dev_conf->rxmode;
 
-	if (rxmode->mq_mode & RTE_ETH_MQ_RX_RSS) {
+	if ((rxmode->mq_mode & RTE_ETH_MQ_RX_RSS) != 0) {
 		nfp_net_rss_config_default(dev);
 		update |= NFP_NET_CFG_UPDATE_RSS;
 		new_ctrl |= nfp_net_cfg_ctrl_rss(hw->cap);
@@ -125,15 +125,15 @@ nfp_net_start(struct rte_eth_dev *dev)
 	update |= NFP_NET_CFG_UPDATE_GEN | NFP_NET_CFG_UPDATE_RING;
 
 	/* Enable vxlan */
-	if (hw->cap & NFP_NET_CFG_CTRL_VXLAN) {
+	if ((hw->cap & NFP_NET_CFG_CTRL_VXLAN) != 0) {
 		new_ctrl |= NFP_NET_CFG_CTRL_VXLAN;
 		update |= NFP_NET_CFG_UPDATE_VXLAN;
 	}
 
-	if (hw->cap & NFP_NET_CFG_CTRL_RINGCFG)
+	if ((hw->cap & NFP_NET_CFG_CTRL_RINGCFG) != 0)
 		new_ctrl |= NFP_NET_CFG_CTRL_RINGCFG;
 
-	if (nfp_net_reconfig(hw, new_ctrl, update) < 0)
+	if (nfp_net_reconfig(hw, new_ctrl, update) != 0)
 		return -EIO;
 
 	/* Enable packet type offload by extend ctrl word1. */
@@ -146,14 +146,14 @@ nfp_net_start(struct rte_eth_dev *dev)
 				| NFP_NET_CFG_CTRL_IPSEC_LM_LOOKUP;
 
 	update = NFP_NET_CFG_UPDATE_GEN;
-	if (nfp_net_ext_reconfig(hw, ctrl_extend, update) < 0)
+	if (nfp_net_ext_reconfig(hw, ctrl_extend, update) != 0)
 		return -EIO;
 
 	/*
 	 * Allocating rte mbufs for configured rx queues.
 	 * This requires queues being enabled before
 	 */
-	if (nfp_net_rx_freelist_setup(dev) < 0) {
+	if (nfp_net_rx_freelist_setup(dev) != 0) {
 		ret = -ENOMEM;
 		goto error;
 	}
@@ -298,7 +298,7 @@ nfp_net_close(struct rte_eth_dev *dev)
 
 	for (i = 0; i < app_fw_nic->total_phyports; i++) {
 		/* Check to see if ports are still in use */
-		if (app_fw_nic->ports[i])
+		if (app_fw_nic->ports[i] != NULL)
 			return 0;
 	}
 
@@ -598,7 +598,7 @@ nfp_net_init(struct rte_eth_dev *eth_dev)
 	hw->mtu = RTE_ETHER_MTU;
 
 	/* VLAN insertion is incompatible with LSOv2 */
-	if (hw->cap & NFP_NET_CFG_CTRL_LSO2)
+	if ((hw->cap & NFP_NET_CFG_CTRL_LSO2) != 0)
 		hw->cap &= ~NFP_NET_CFG_CTRL_TXVLAN;
 
 	nfp_net_log_device_information(hw);
@@ -618,7 +618,7 @@ nfp_net_init(struct rte_eth_dev *eth_dev)
 	nfp_net_write_mac(hw, &hw->mac_addr.addr_bytes[0]);
 
 	tmp_ether_addr = &hw->mac_addr;
-	if (!rte_is_valid_assigned_ether_addr(tmp_ether_addr)) {
+	if (rte_is_valid_assigned_ether_addr(tmp_ether_addr) == 0) {
 		PMD_INIT_LOG(INFO, "Using random mac address for port %d", port);
 		/* Using random mac addresses for VFs */
 		rte_eth_random_addr(&hw->mac_addr.addr_bytes[0]);
@@ -695,10 +695,11 @@ nfp_fw_upload(struct rte_pci_device *dev, struct nfp_nsp *nsp, char *card)
 	/* Finally try the card type and media */
 	snprintf(fw_name, sizeof(fw_name), "%s/%s", DEFAULT_FW_PATH, card);
 	PMD_DRV_LOG(DEBUG, "Trying with fw file: %s", fw_name);
-	if (rte_firmware_read(fw_name, &fw_buf, &fsize) < 0) {
-		PMD_DRV_LOG(INFO, "Firmware file %s not found.", fw_name);
-		return -ENOENT;
-	}
+	if (rte_firmware_read(fw_name, &fw_buf, &fsize) == 0)
+		goto load_fw;
+
+	PMD_DRV_LOG(ERR, "Can't find suitable firmware.");
+	return -ENOENT;
 
 load_fw:
 	PMD_DRV_LOG(INFO, "Firmware file found at %s with size: %zu",
@@ -727,7 +728,7 @@ nfp_fw_setup(struct rte_pci_device *dev,
 	if (nfp_fw_model == NULL)
 		nfp_fw_model = nfp_hwinfo_lookup(hwinfo, "assembly.partno");
 
-	if (nfp_fw_model) {
+	if (nfp_fw_model != NULL) {
 		PMD_DRV_LOG(INFO, "firmware model found: %s", nfp_fw_model);
 	} else {
 		PMD_DRV_LOG(ERR, "firmware model NOT found");
@@ -865,7 +866,7 @@ nfp_init_app_fw_nic(struct nfp_pf_dev *pf_dev,
 		 * nfp_net_init
 		 */
 		ret = nfp_net_init(eth_dev);
-		if (ret) {
+		if (ret != 0) {
 			ret = -ENODEV;
 			goto port_cleanup;
 		}
@@ -878,7 +879,8 @@ nfp_init_app_fw_nic(struct nfp_pf_dev *pf_dev,
 
 port_cleanup:
 	for (i = 0; i < app_fw_nic->total_phyports; i++) {
-		if (app_fw_nic->ports[i] && app_fw_nic->ports[i]->eth_dev) {
+		if (app_fw_nic->ports[i] != NULL &&
+				app_fw_nic->ports[i]->eth_dev != NULL) {
 			struct rte_eth_dev *tmp_dev;
 			tmp_dev = app_fw_nic->ports[i]->eth_dev;
 			nfp_ipsec_uninit(tmp_dev);
@@ -950,7 +952,7 @@ nfp_pf_init(struct rte_pci_device *pci_dev)
 		goto hwinfo_cleanup;
 	}
 
-	if (nfp_fw_setup(pci_dev, cpp, nfp_eth_table, hwinfo)) {
+	if (nfp_fw_setup(pci_dev, cpp, nfp_eth_table, hwinfo) != 0) {
 		PMD_INIT_LOG(ERR, "Error when uploading firmware");
 		ret = -EIO;
 		goto eth_table_cleanup;
diff --git a/drivers/net/nfp/nfp_ethdev_vf.c b/drivers/net/nfp/nfp_ethdev_vf.c
index 0c94fc51ad..c8d6b0461b 100644
--- a/drivers/net/nfp/nfp_ethdev_vf.c
+++ b/drivers/net/nfp/nfp_ethdev_vf.c
@@ -66,7 +66,7 @@ nfp_netvf_start(struct rte_eth_dev *dev)
 			}
 		}
 		intr_vector = dev->data->nb_rx_queues;
-		if (rte_intr_efd_enable(intr_handle, intr_vector))
+		if (rte_intr_efd_enable(intr_handle, intr_vector) != 0)
 			return -1;
 
 		nfp_configure_rx_interrupt(dev, intr_handle);
@@ -83,7 +83,7 @@ nfp_netvf_start(struct rte_eth_dev *dev)
 	dev_conf = &dev->data->dev_conf;
 	rxmode = &dev_conf->rxmode;
 
-	if (rxmode->mq_mode & RTE_ETH_MQ_RX_RSS) {
+	if ((rxmode->mq_mode & RTE_ETH_MQ_RX_RSS) != 0) {
 		nfp_net_rss_config_default(dev);
 		update |= NFP_NET_CFG_UPDATE_RSS;
 		new_ctrl |= nfp_net_cfg_ctrl_rss(hw->cap);
@@ -94,18 +94,18 @@ nfp_netvf_start(struct rte_eth_dev *dev)
 
 	update |= NFP_NET_CFG_UPDATE_GEN | NFP_NET_CFG_UPDATE_RING;
 
-	if (hw->cap & NFP_NET_CFG_CTRL_RINGCFG)
+	if ((hw->cap & NFP_NET_CFG_CTRL_RINGCFG) != 0)
 		new_ctrl |= NFP_NET_CFG_CTRL_RINGCFG;
 
 	nn_cfg_writel(hw, NFP_NET_CFG_CTRL, new_ctrl);
-	if (nfp_net_reconfig(hw, new_ctrl, update) < 0)
+	if (nfp_net_reconfig(hw, new_ctrl, update) != 0)
 		return -EIO;
 
 	/*
 	 * Allocating rte mbufs for configured rx queues.
 	 * This requires queues being enabled before
 	 */
-	if (nfp_net_rx_freelist_setup(dev) < 0) {
+	if (nfp_net_rx_freelist_setup(dev) != 0) {
 		ret = -ENOMEM;
 		goto error;
 	}
@@ -330,7 +330,7 @@ nfp_netvf_init(struct rte_eth_dev *eth_dev)
 	hw->mtu = RTE_ETHER_MTU;
 
 	/* VLAN insertion is incompatible with LSOv2 */
-	if (hw->cap & NFP_NET_CFG_CTRL_LSO2)
+	if ((hw->cap & NFP_NET_CFG_CTRL_LSO2) != 0)
 		hw->cap &= ~NFP_NET_CFG_CTRL_TXVLAN;
 
 	nfp_net_log_device_information(hw);
@@ -350,7 +350,7 @@ nfp_netvf_init(struct rte_eth_dev *eth_dev)
 	nfp_netvf_read_mac(hw);
 
 	tmp_ether_addr = &hw->mac_addr;
-	if (!rte_is_valid_assigned_ether_addr(tmp_ether_addr)) {
+	if (rte_is_valid_assigned_ether_addr(tmp_ether_addr) == 0) {
 		PMD_INIT_LOG(INFO, "Using random mac address for port %d",
 				   port);
 		/* Using random mac addresses for VFs */
diff --git a/drivers/net/nfp/nfp_flow.c b/drivers/net/nfp/nfp_flow.c
index aa286535f7..bdbc92180d 100644
--- a/drivers/net/nfp/nfp_flow.c
+++ b/drivers/net/nfp/nfp_flow.c
@@ -489,8 +489,8 @@ nfp_stats_id_free(struct nfp_flow_priv *priv, uint32_t ctx)
 
 	/* Check if buffer is full */
 	ring = &priv->stats_ids.free_list;
-	if (!CIRC_SPACE(ring->head, ring->tail, priv->stats_ring_size *
-			NFP_FL_STATS_ELEM_RS - NFP_FL_STATS_ELEM_RS + 1))
+	if (CIRC_SPACE(ring->head, ring->tail, priv->stats_ring_size *
+			NFP_FL_STATS_ELEM_RS - NFP_FL_STATS_ELEM_RS + 1) == 0)
 		return -ENOBUFS;
 
 	memcpy(&ring->buf[ring->head], &ctx, NFP_FL_STATS_ELEM_RS);
@@ -575,7 +575,7 @@ nfp_tun_add_ipv6_off(struct nfp_app_fw_flower *app_fw_flower,
 
 	rte_spinlock_lock(&priv->ipv6_off_lock);
 	LIST_FOREACH(entry, &priv->ipv6_off_list, next) {
-		if (!memcmp(entry->ipv6_addr, ipv6, sizeof(entry->ipv6_addr))) {
+		if (memcmp(entry->ipv6_addr, ipv6, sizeof(entry->ipv6_addr)) == 0) {
 			entry->ref_count++;
 			rte_spinlock_unlock(&priv->ipv6_off_lock);
 			return 0;
@@ -609,7 +609,7 @@ nfp_tun_del_ipv6_off(struct nfp_app_fw_flower *app_fw_flower,
 
 	rte_spinlock_lock(&priv->ipv6_off_lock);
 	LIST_FOREACH(entry, &priv->ipv6_off_list, next) {
-		if (!memcmp(entry->ipv6_addr, ipv6, sizeof(entry->ipv6_addr))) {
+		if (memcmp(entry->ipv6_addr, ipv6, sizeof(entry->ipv6_addr)) == 0) {
 			entry->ref_count--;
 			if (entry->ref_count == 0) {
 				LIST_REMOVE(entry, next);
@@ -639,14 +639,14 @@ nfp_tun_check_ip_off_del(struct nfp_flower_representor *repr,
 	struct nfp_flower_ext_meta *ext_meta = NULL;
 
 	meta_tci = (struct nfp_flower_meta_tci *)nfp_flow->payload.unmasked_data;
-	if (meta_tci->nfp_flow_key_layer & NFP_FLOWER_LAYER_EXT_META)
+	if ((meta_tci->nfp_flow_key_layer & NFP_FLOWER_LAYER_EXT_META) != 0)
 		ext_meta = (struct nfp_flower_ext_meta *)(meta_tci + 1);
 
 	if (ext_meta != NULL)
 		key_layer2 = rte_be_to_cpu_32(ext_meta->nfp_flow_key_layer2);
 
-	if (key_layer2 & NFP_FLOWER_LAYER2_TUN_IPV6) {
-		if (key_layer2 & NFP_FLOWER_LAYER2_GRE) {
+	if ((key_layer2 & NFP_FLOWER_LAYER2_TUN_IPV6) != 0) {
+		if ((key_layer2 & NFP_FLOWER_LAYER2_GRE) != 0) {
 			gre6 = (struct nfp_flower_ipv6_gre_tun *)(nfp_flow->payload.mask_data -
 					sizeof(struct nfp_flower_ipv6_gre_tun));
 			ret = nfp_tun_del_ipv6_off(repr->app_fw_flower, gre6->ipv6.ipv6_dst);
@@ -656,7 +656,7 @@ nfp_tun_check_ip_off_del(struct nfp_flower_representor *repr,
 			ret = nfp_tun_del_ipv6_off(repr->app_fw_flower, udp6->ipv6.ipv6_dst);
 		}
 	} else {
-		if (key_layer2 & NFP_FLOWER_LAYER2_GRE) {
+		if ((key_layer2 & NFP_FLOWER_LAYER2_GRE) != 0) {
 			gre4 = (struct nfp_flower_ipv4_gre_tun *)(nfp_flow->payload.mask_data -
 					sizeof(struct nfp_flower_ipv4_gre_tun));
 			ret = nfp_tun_del_ipv4_off(repr->app_fw_flower, gre4->ipv4.dst);
@@ -750,7 +750,7 @@ nfp_flow_compile_metadata(struct nfp_flow_priv *priv,
 	mbuf_off_mask  += sizeof(struct nfp_flower_meta_tci);
 
 	/* Populate Extended Metadata if required */
-	if (key_layer->key_layer & NFP_FLOWER_LAYER_EXT_META) {
+	if ((key_layer->key_layer & NFP_FLOWER_LAYER_EXT_META) != 0) {
 		nfp_flower_compile_ext_meta(mbuf_off_exact, key_layer);
 		nfp_flower_compile_ext_meta(mbuf_off_mask, key_layer);
 		mbuf_off_exact += sizeof(struct nfp_flower_ext_meta);
@@ -1035,7 +1035,7 @@ nfp_flow_key_layers_calculate_actions(const struct rte_flow_action actions[],
 			break;
 		case RTE_FLOW_ACTION_TYPE_SET_TTL:
 			PMD_DRV_LOG(DEBUG, "RTE_FLOW_ACTION_TYPE_SET_TTL detected");
-			if (key_ls->key_layer & NFP_FLOWER_LAYER_IPV4) {
+			if ((key_ls->key_layer & NFP_FLOWER_LAYER_IPV4) != 0) {
 				if (!ttl_tos_flag) {
 					key_ls->act_size +=
 						sizeof(struct nfp_fl_act_set_ip4_ttl_tos);
@@ -1130,15 +1130,15 @@ nfp_flow_is_tunnel(struct rte_flow *nfp_flow)
 	struct nfp_flower_meta_tci *meta_tci;
 
 	meta_tci = (struct nfp_flower_meta_tci *)nfp_flow->payload.unmasked_data;
-	if (meta_tci->nfp_flow_key_layer & NFP_FLOWER_LAYER_VXLAN)
+	if ((meta_tci->nfp_flow_key_layer & NFP_FLOWER_LAYER_VXLAN) != 0)
 		return true;
 
-	if (!(meta_tci->nfp_flow_key_layer & NFP_FLOWER_LAYER_EXT_META))
+	if ((meta_tci->nfp_flow_key_layer & NFP_FLOWER_LAYER_EXT_META) == 0)
 		return false;
 
 	ext_meta = (struct nfp_flower_ext_meta *)(meta_tci + 1);
 	key_layer2 = rte_be_to_cpu_32(ext_meta->nfp_flow_key_layer2);
-	if (key_layer2 & (NFP_FLOWER_LAYER2_GENEVE | NFP_FLOWER_LAYER2_GRE))
+	if ((key_layer2 & (NFP_FLOWER_LAYER2_GENEVE | NFP_FLOWER_LAYER2_GRE)) != 0)
 		return true;
 
 	return false;
@@ -1234,7 +1234,7 @@ nfp_flow_merge_ipv4(__rte_unused struct nfp_app_fw_flower *app_fw_flower,
 	spec = item->spec;
 	mask = item->mask ? item->mask : proc->mask_default;
 	meta_tci = (struct nfp_flower_meta_tci *)nfp_flow->payload.unmasked_data;
-	if (meta_tci->nfp_flow_key_layer & NFP_FLOWER_LAYER_EXT_META)
+	if ((meta_tci->nfp_flow_key_layer & NFP_FLOWER_LAYER_EXT_META) != 0)
 		ext_meta = (struct nfp_flower_ext_meta *)(meta_tci + 1);
 
 	if (is_outer_layer && nfp_flow_is_tunnel(nfp_flow)) {
@@ -1245,8 +1245,8 @@ nfp_flow_merge_ipv4(__rte_unused struct nfp_app_fw_flower *app_fw_flower,
 
 		hdr = is_mask ? &mask->hdr : &spec->hdr;
 
-		if (ext_meta && (rte_be_to_cpu_32(ext_meta->nfp_flow_key_layer2) &
-				NFP_FLOWER_LAYER2_GRE)) {
+		if (ext_meta != NULL && (rte_be_to_cpu_32(ext_meta->nfp_flow_key_layer2) &
+				NFP_FLOWER_LAYER2_GRE) != 0) {
 			ipv4_gre_tun = (struct nfp_flower_ipv4_gre_tun *)*mbuf_off;
 
 			ipv4_gre_tun->ip_ext.tos = hdr->type_of_service;
@@ -1271,7 +1271,7 @@ nfp_flow_merge_ipv4(__rte_unused struct nfp_app_fw_flower *app_fw_flower,
 		 * reserve space for L4 info.
 		 * rte_flow has ipv4 before L4 but NFP flower fw requires L4 before ipv4
 		 */
-		if (meta_tci->nfp_flow_key_layer & NFP_FLOWER_LAYER_TP)
+		if ((meta_tci->nfp_flow_key_layer & NFP_FLOWER_LAYER_TP) != 0)
 			*mbuf_off += sizeof(struct nfp_flower_tp_ports);
 
 		hdr = is_mask ? &mask->hdr : &spec->hdr;
@@ -1312,7 +1312,7 @@ nfp_flow_merge_ipv6(__rte_unused struct nfp_app_fw_flower *app_fw_flower,
 	spec = item->spec;
 	mask = item->mask ? item->mask : proc->mask_default;
 	meta_tci = (struct nfp_flower_meta_tci *)nfp_flow->payload.unmasked_data;
-	if (meta_tci->nfp_flow_key_layer & NFP_FLOWER_LAYER_EXT_META)
+	if ((meta_tci->nfp_flow_key_layer & NFP_FLOWER_LAYER_EXT_META) != 0)
 		ext_meta = (struct nfp_flower_ext_meta *)(meta_tci + 1);
 
 	if (is_outer_layer && nfp_flow_is_tunnel(nfp_flow)) {
@@ -1324,8 +1324,8 @@ nfp_flow_merge_ipv6(__rte_unused struct nfp_app_fw_flower *app_fw_flower,
 		hdr = is_mask ? &mask->hdr : &spec->hdr;
 
 		vtc_flow = rte_be_to_cpu_32(hdr->vtc_flow);
-		if (ext_meta && (rte_be_to_cpu_32(ext_meta->nfp_flow_key_layer2) &
-				NFP_FLOWER_LAYER2_GRE)) {
+		if (ext_meta != NULL && (rte_be_to_cpu_32(ext_meta->nfp_flow_key_layer2) &
+				NFP_FLOWER_LAYER2_GRE) != 0) {
 			ipv6_gre_tun = (struct nfp_flower_ipv6_gre_tun *)*mbuf_off;
 
 			ipv6_gre_tun->ip_ext.tos = vtc_flow >> RTE_IPV6_HDR_TC_SHIFT;
@@ -1354,7 +1354,7 @@ nfp_flow_merge_ipv6(__rte_unused struct nfp_app_fw_flower *app_fw_flower,
 		 * reserve space for L4 info.
 		 * rte_flow has ipv4 before L4 but NFP flower fw requires L4 before ipv6
 		 */
-		if (meta_tci->nfp_flow_key_layer & NFP_FLOWER_LAYER_TP)
+		if ((meta_tci->nfp_flow_key_layer & NFP_FLOWER_LAYER_TP) != 0)
 			*mbuf_off += sizeof(struct nfp_flower_tp_ports);
 
 		hdr = is_mask ? &mask->hdr : &spec->hdr;
@@ -1398,7 +1398,7 @@ nfp_flow_merge_tcp(__rte_unused struct nfp_app_fw_flower *app_fw_flower,
 	}
 
 	meta_tci = (struct nfp_flower_meta_tci *)nfp_flow->payload.unmasked_data;
-	if (meta_tci->nfp_flow_key_layer & NFP_FLOWER_LAYER_IPV4) {
+	if ((meta_tci->nfp_flow_key_layer & NFP_FLOWER_LAYER_IPV4) != 0) {
 		ipv4  = (struct nfp_flower_ipv4 *)
 			(*mbuf_off - sizeof(struct nfp_flower_ipv4));
 		ports = (struct nfp_flower_tp_ports *)
@@ -1421,7 +1421,7 @@ nfp_flow_merge_tcp(__rte_unused struct nfp_app_fw_flower *app_fw_flower,
 		tcp_flags       = spec->hdr.tcp_flags;
 	}
 
-	if (ipv4) {
+	if (ipv4 != NULL) {
 		if (tcp_flags & RTE_TCP_FIN_FLAG)
 			ipv4->ip_ext.flags |= NFP_FL_TCP_FLAG_FIN;
 		if (tcp_flags & RTE_TCP_SYN_FLAG)
@@ -1476,7 +1476,7 @@ nfp_flow_merge_udp(__rte_unused struct nfp_app_fw_flower *app_fw_flower,
 	}
 
 	meta_tci = (struct nfp_flower_meta_tci *)nfp_flow->payload.unmasked_data;
-	if (meta_tci->nfp_flow_key_layer & NFP_FLOWER_LAYER_IPV4) {
+	if ((meta_tci->nfp_flow_key_layer & NFP_FLOWER_LAYER_IPV4) != 0) {
 		ports_off = *mbuf_off - sizeof(struct nfp_flower_ipv4) -
 			sizeof(struct nfp_flower_tp_ports);
 	} else {/* IPv6 */
@@ -1519,7 +1519,7 @@ nfp_flow_merge_sctp(__rte_unused struct nfp_app_fw_flower *app_fw_flower,
 	}
 
 	meta_tci = (struct nfp_flower_meta_tci *)nfp_flow->payload.unmasked_data;
-	if (meta_tci->nfp_flow_key_layer & NFP_FLOWER_LAYER_IPV4) {
+	if ((meta_tci->nfp_flow_key_layer & NFP_FLOWER_LAYER_IPV4) != 0) {
 		ports_off = *mbuf_off - sizeof(struct nfp_flower_ipv4) -
 			sizeof(struct nfp_flower_tp_ports);
 	} else { /* IPv6 */
@@ -1559,7 +1559,7 @@ nfp_flow_merge_vxlan(struct nfp_app_fw_flower *app_fw_flower,
 	struct nfp_flower_ext_meta *ext_meta = NULL;
 
 	meta_tci = (struct nfp_flower_meta_tci *)nfp_flow->payload.unmasked_data;
-	if (meta_tci->nfp_flow_key_layer & NFP_FLOWER_LAYER_EXT_META)
+	if ((meta_tci->nfp_flow_key_layer & NFP_FLOWER_LAYER_EXT_META) != 0)
 		ext_meta = (struct nfp_flower_ext_meta *)(meta_tci + 1);
 
 	spec = item->spec;
@@ -1571,8 +1571,8 @@ nfp_flow_merge_vxlan(struct nfp_app_fw_flower *app_fw_flower,
 	mask = item->mask ? item->mask : proc->mask_default;
 	hdr = is_mask ? &mask->hdr : &spec->hdr;
 
-	if (ext_meta && (rte_be_to_cpu_32(ext_meta->nfp_flow_key_layer2) &
-			NFP_FLOWER_LAYER2_TUN_IPV6)) {
+	if (ext_meta != NULL && (rte_be_to_cpu_32(ext_meta->nfp_flow_key_layer2) &
+			NFP_FLOWER_LAYER2_TUN_IPV6) != 0) {
 		tun6 = (struct nfp_flower_ipv6_udp_tun *)*mbuf_off;
 		tun6->tun_id = hdr->vx_vni;
 		if (!is_mask)
@@ -1585,8 +1585,8 @@ nfp_flow_merge_vxlan(struct nfp_app_fw_flower *app_fw_flower,
 	}
 
 vxlan_end:
-	if (ext_meta && (rte_be_to_cpu_32(ext_meta->nfp_flow_key_layer2) &
-			NFP_FLOWER_LAYER2_TUN_IPV6))
+	if (ext_meta != NULL && (rte_be_to_cpu_32(ext_meta->nfp_flow_key_layer2) &
+			NFP_FLOWER_LAYER2_TUN_IPV6) != 0)
 		*mbuf_off += sizeof(struct nfp_flower_ipv6_udp_tun);
 	else
 		*mbuf_off += sizeof(struct nfp_flower_ipv4_udp_tun);
@@ -1613,7 +1613,7 @@ nfp_flow_merge_geneve(struct nfp_app_fw_flower *app_fw_flower,
 	struct nfp_flower_ext_meta *ext_meta = NULL;
 
 	meta_tci = (struct nfp_flower_meta_tci *)nfp_flow->payload.unmasked_data;
-	if (meta_tci->nfp_flow_key_layer & NFP_FLOWER_LAYER_EXT_META)
+	if ((meta_tci->nfp_flow_key_layer & NFP_FLOWER_LAYER_EXT_META) != 0)
 		ext_meta = (struct nfp_flower_ext_meta *)(meta_tci + 1);
 
 	spec = item->spec;
@@ -1625,8 +1625,8 @@ nfp_flow_merge_geneve(struct nfp_app_fw_flower *app_fw_flower,
 	mask = item->mask ? item->mask : proc->mask_default;
 	geneve = is_mask ? mask : spec;
 
-	if (ext_meta && (rte_be_to_cpu_32(ext_meta->nfp_flow_key_layer2) &
-			NFP_FLOWER_LAYER2_TUN_IPV6)) {
+	if (ext_meta != NULL && (rte_be_to_cpu_32(ext_meta->nfp_flow_key_layer2) &
+			NFP_FLOWER_LAYER2_TUN_IPV6) != 0) {
 		tun6 = (struct nfp_flower_ipv6_udp_tun *)*mbuf_off;
 		tun6->tun_id = rte_cpu_to_be_32((geneve->vni[0] << 16) |
 				(geneve->vni[1] << 8) | (geneve->vni[2]));
@@ -1641,8 +1641,8 @@ nfp_flow_merge_geneve(struct nfp_app_fw_flower *app_fw_flower,
 	}
 
 geneve_end:
-	if (ext_meta && (rte_be_to_cpu_32(ext_meta->nfp_flow_key_layer2) &
-			NFP_FLOWER_LAYER2_TUN_IPV6)) {
+	if (ext_meta != NULL && (rte_be_to_cpu_32(ext_meta->nfp_flow_key_layer2) &
+			NFP_FLOWER_LAYER2_TUN_IPV6) != 0) {
 		*mbuf_off += sizeof(struct nfp_flower_ipv6_udp_tun);
 	} else {
 		*mbuf_off += sizeof(struct nfp_flower_ipv4_udp_tun);
@@ -1669,8 +1669,8 @@ nfp_flow_merge_gre(__rte_unused struct nfp_app_fw_flower *app_fw_flower,
 	ext_meta = (struct nfp_flower_ext_meta *)(meta_tci + 1);
 
 	/* NVGRE is the only supported GRE tunnel type */
-	if (rte_be_to_cpu_32(ext_meta->nfp_flow_key_layer2) &
-			NFP_FLOWER_LAYER2_TUN_IPV6) {
+	if ((rte_be_to_cpu_32(ext_meta->nfp_flow_key_layer2) &
+			NFP_FLOWER_LAYER2_TUN_IPV6) != 0) {
 		tun6 = (struct nfp_flower_ipv6_gre_tun *)*mbuf_off;
 		if (is_mask)
 			tun6->ethertype = rte_cpu_to_be_16(~0);
@@ -1717,8 +1717,8 @@ nfp_flow_merge_gre_key(struct nfp_app_fw_flower *app_fw_flower,
 	mask = item->mask ? item->mask : proc->mask_default;
 	tun_key = is_mask ? *mask : *spec;
 
-	if (rte_be_to_cpu_32(ext_meta->nfp_flow_key_layer2) &
-			NFP_FLOWER_LAYER2_TUN_IPV6) {
+	if ((rte_be_to_cpu_32(ext_meta->nfp_flow_key_layer2) &
+			NFP_FLOWER_LAYER2_TUN_IPV6) != 0) {
 		tun6 = (struct nfp_flower_ipv6_gre_tun *)*mbuf_off;
 		tun6->tun_key = tun_key;
 		tun6->tun_flags = rte_cpu_to_be_16(NFP_FL_GRE_FLAG_KEY);
@@ -1733,8 +1733,8 @@ nfp_flow_merge_gre_key(struct nfp_app_fw_flower *app_fw_flower,
 	}
 
 gre_key_end:
-	if (rte_be_to_cpu_32(ext_meta->nfp_flow_key_layer2) &
-			NFP_FLOWER_LAYER2_TUN_IPV6)
+	if ((rte_be_to_cpu_32(ext_meta->nfp_flow_key_layer2) &
+			NFP_FLOWER_LAYER2_TUN_IPV6) != 0)
 		*mbuf_off += sizeof(struct nfp_flower_ipv6_gre_tun);
 	else
 		*mbuf_off += sizeof(struct nfp_flower_ipv4_gre_tun);
@@ -2079,7 +2079,7 @@ nfp_flow_compile_items(struct nfp_flower_representor *representor,
 			sizeof(struct nfp_flower_in_port);
 
 	meta_tci = (struct nfp_flower_meta_tci *)nfp_flow->payload.unmasked_data;
-	if (meta_tci->nfp_flow_key_layer & NFP_FLOWER_LAYER_EXT_META) {
+	if ((meta_tci->nfp_flow_key_layer & NFP_FLOWER_LAYER_EXT_META) != 0) {
 		mbuf_off_exact += sizeof(struct nfp_flower_ext_meta);
 		mbuf_off_mask += sizeof(struct nfp_flower_ext_meta);
 	}
@@ -2522,7 +2522,7 @@ nfp_flower_add_tun_neigh_v4_decap(struct nfp_app_fw_flower *app_fw_flower,
 	port = (struct nfp_flower_in_port *)(meta_tci + 1);
 	eth = (struct nfp_flower_mac_mpls *)(port + 1);
 
-	if (meta_tci->nfp_flow_key_layer & NFP_FLOWER_LAYER_TP)
+	if ((meta_tci->nfp_flow_key_layer & NFP_FLOWER_LAYER_TP) != 0)
 		ipv4 = (struct nfp_flower_ipv4 *)((char *)eth +
 				sizeof(struct nfp_flower_mac_mpls) +
 				sizeof(struct nfp_flower_tp_ports));
@@ -2649,7 +2649,7 @@ nfp_flower_add_tun_neigh_v6_decap(struct nfp_app_fw_flower *app_fw_flower,
 	port = (struct nfp_flower_in_port *)(meta_tci + 1);
 	eth = (struct nfp_flower_mac_mpls *)(port + 1);
 
-	if (meta_tci->nfp_flow_key_layer & NFP_FLOWER_LAYER_TP)
+	if ((meta_tci->nfp_flow_key_layer & NFP_FLOWER_LAYER_TP) != 0)
 		ipv6 = (struct nfp_flower_ipv6 *)((char *)eth +
 				sizeof(struct nfp_flower_mac_mpls) +
 				sizeof(struct nfp_flower_tp_ports));
@@ -3145,7 +3145,7 @@ nfp_flow_action_tunnel_decap(struct nfp_flower_representor *repr,
 	}
 
 	meta_tci = (struct nfp_flower_meta_tci *)nfp_flow->payload.unmasked_data;
-	if (meta_tci->nfp_flow_key_layer & NFP_FLOWER_LAYER_IPV4)
+	if ((meta_tci->nfp_flow_key_layer & NFP_FLOWER_LAYER_IPV4) != 0)
 		return nfp_flower_add_tun_neigh_v4_decap(app_fw_flower, nfp_flow_meta, nfp_flow);
 	else
 		return nfp_flower_add_tun_neigh_v6_decap(app_fw_flower, nfp_flow_meta, nfp_flow);
diff --git a/drivers/net/nfp/nfp_rxtx.c b/drivers/net/nfp/nfp_rxtx.c
index 66a5d6cb3a..4528417559 100644
--- a/drivers/net/nfp/nfp_rxtx.c
+++ b/drivers/net/nfp/nfp_rxtx.c
@@ -163,22 +163,22 @@ nfp_net_rx_cksum(struct nfp_net_rxq *rxq, struct nfp_net_rx_desc *rxd,
 {
 	struct nfp_net_hw *hw = rxq->hw;
 
-	if (!(hw->ctrl & NFP_NET_CFG_CTRL_RXCSUM))
+	if ((hw->ctrl & NFP_NET_CFG_CTRL_RXCSUM) == 0)
 		return;
 
 	/* If IPv4 and IP checksum error, fail */
-	if (unlikely((rxd->rxd.flags & PCIE_DESC_RX_IP4_CSUM) &&
-			!(rxd->rxd.flags & PCIE_DESC_RX_IP4_CSUM_OK)))
+	if (unlikely((rxd->rxd.flags & PCIE_DESC_RX_IP4_CSUM) != 0 &&
+			(rxd->rxd.flags & PCIE_DESC_RX_IP4_CSUM_OK) == 0))
 		mb->ol_flags |= RTE_MBUF_F_RX_IP_CKSUM_BAD;
 	else
 		mb->ol_flags |= RTE_MBUF_F_RX_IP_CKSUM_GOOD;
 
 	/* If neither UDP nor TCP return */
-	if (!(rxd->rxd.flags & PCIE_DESC_RX_TCP_CSUM) &&
-			!(rxd->rxd.flags & PCIE_DESC_RX_UDP_CSUM))
+	if ((rxd->rxd.flags & PCIE_DESC_RX_TCP_CSUM) == 0 &&
+			(rxd->rxd.flags & PCIE_DESC_RX_UDP_CSUM) == 0)
 		return;
 
-	if (likely(rxd->rxd.flags & PCIE_DESC_RX_L4_CSUM_OK))
+	if (likely(rxd->rxd.flags & PCIE_DESC_RX_L4_CSUM_OK) != 0)
 		mb->ol_flags |= RTE_MBUF_F_RX_L4_CKSUM_GOOD;
 	else
 		mb->ol_flags |= RTE_MBUF_F_RX_L4_CKSUM_BAD;
@@ -232,7 +232,7 @@ nfp_net_rx_freelist_setup(struct rte_eth_dev *dev)
 	int i;
 
 	for (i = 0; i < dev->data->nb_rx_queues; i++) {
-		if (nfp_net_rx_fill_freelist(dev->data->rx_queues[i]) < 0)
+		if (nfp_net_rx_fill_freelist(dev->data->rx_queues[i]) != 0)
 			return -1;
 	}
 	return 0;
@@ -387,7 +387,7 @@ nfp_net_parse_meta_vlan(const struct nfp_meta_parsed *meta,
 	 * to do anything.
 	 */
 	if ((hw->ctrl & NFP_NET_CFG_CTRL_RXVLAN_V2) != 0) {
-		if (meta->vlan_layer >= 1 && meta->vlan[0].offload != 0) {
+		if (meta->vlan_layer > 0 && meta->vlan[0].offload != 0) {
 			mb->vlan_tci = rte_cpu_to_le_32(meta->vlan[0].tci);
 			mb->ol_flags |= RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED;
 		}
@@ -771,7 +771,7 @@ nfp_net_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
 		}
 
 		/* Filling the received mbuf with packet info */
-		if (hw->rx_offset)
+		if (hw->rx_offset != 0)
 			mb->data_off = RTE_PKTMBUF_HEADROOM + hw->rx_offset;
 		else
 			mb->data_off = RTE_PKTMBUF_HEADROOM +
@@ -846,7 +846,7 @@ nfp_net_rx_queue_release_mbufs(struct nfp_net_rxq *rxq)
 		return;
 
 	for (i = 0; i < rxq->rx_count; i++) {
-		if (rxq->rxbufs[i].mbuf) {
+		if (rxq->rxbufs[i].mbuf != NULL) {
 			rte_pktmbuf_free_seg(rxq->rxbufs[i].mbuf);
 			rxq->rxbufs[i].mbuf = NULL;
 		}
@@ -858,7 +858,7 @@ nfp_net_rx_queue_release(struct rte_eth_dev *dev, uint16_t queue_idx)
 {
 	struct nfp_net_rxq *rxq = dev->data->rx_queues[queue_idx];
 
-	if (rxq) {
+	if (rxq != NULL) {
 		nfp_net_rx_queue_release_mbufs(rxq);
 		rte_eth_dma_zone_free(dev, "rx_ring", queue_idx);
 		rte_free(rxq->rxbufs);
@@ -906,7 +906,7 @@ nfp_net_rx_queue_setup(struct rte_eth_dev *dev,
 	 * Free memory prior to re-allocation if needed. This is the case after
 	 * calling nfp_net_stop
 	 */
-	if (dev->data->rx_queues[queue_idx]) {
+	if (dev->data->rx_queues[queue_idx] != NULL) {
 		nfp_net_rx_queue_release(dev, queue_idx);
 		dev->data->rx_queues[queue_idx] = NULL;
 	}
@@ -1037,7 +1037,7 @@ nfp_net_tx_queue_release_mbufs(struct nfp_net_txq *txq)
 		return;
 
 	for (i = 0; i < txq->tx_count; i++) {
-		if (txq->txbufs[i].mbuf) {
+		if (txq->txbufs[i].mbuf != NULL) {
 			rte_pktmbuf_free_seg(txq->txbufs[i].mbuf);
 			txq->txbufs[i].mbuf = NULL;
 		}
@@ -1049,7 +1049,7 @@ nfp_net_tx_queue_release(struct rte_eth_dev *dev, uint16_t queue_idx)
 {
 	struct nfp_net_txq *txq = dev->data->tx_queues[queue_idx];
 
-	if (txq) {
+	if (txq != NULL) {
 		nfp_net_tx_queue_release_mbufs(txq);
 		rte_eth_dma_zone_free(dev, "tx_ring", queue_idx);
 		rte_free(txq->txbufs);
-- 
2.39.1



* [PATCH 02/11] net/nfp: unify the indent coding style
  2023-10-07  2:33 [PATCH 00/11] Unify the PMD coding style Chaoyong He
  2023-10-07  2:33 ` [PATCH 01/11] net/nfp: explicitly compare to null and 0 Chaoyong He
@ 2023-10-07  2:33 ` Chaoyong He
  2023-10-07  2:33 ` [PATCH 03/11] net/nfp: unify the type of integer variable Chaoyong He
                   ` (9 subsequent siblings)
  11 siblings, 0 replies; 40+ messages in thread
From: Chaoyong He @ 2023-10-07  2:33 UTC (permalink / raw)
  To: dev; +Cc: oss-drivers, Chaoyong He, Long Wu, Peng Zhang

Each parameter of a function should occupy its own line, indented by two
TAB characters.
Any statement which spans multiple lines should indent the continuation
by two TAB characters.
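
For example, a hypothetical function following the target style (the
function name is made up for illustration; nfp_net_reconfig(), the hw
struct and the config flags are the real ones touched below):

	static int
	nfp_example_update(struct nfp_net_hw *hw,
			uint32_t ctrl,
			uint32_t update)
	{
		/* A multi-line condition indents its continuation two TABs. */
		if ((hw->ctrl & NFP_NET_CFG_CTRL_ENABLE) != 0 &&
				(hw->cap & NFP_NET_CFG_CTRL_LIVE_ADDR) != 0)
			return nfp_net_reconfig(hw, ctrl, update);

		return 0;
	}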

Signed-off-by: Chaoyong He <chaoyong.he@corigine.com>
Reviewed-by: Long Wu <long.wu@corigine.com>
Reviewed-by: Peng Zhang <peng.zhang@corigine.com>
---
 drivers/net/nfp/flower/nfp_flower.c           |   3 +-
 drivers/net/nfp/flower/nfp_flower_ctrl.c      |   7 +-
 .../net/nfp/flower/nfp_flower_representor.c   |   2 +-
 drivers/net/nfp/nfdk/nfp_nfdk.h               |   2 +-
 drivers/net/nfp/nfdk/nfp_nfdk_dp.c            |   4 +-
 drivers/net/nfp/nfp_common.c                  | 250 +++++++++---------
 drivers/net/nfp/nfp_common.h                  |  81 ++++--
 drivers/net/nfp/nfp_cpp_bridge.c              |  56 ++--
 drivers/net/nfp/nfp_ethdev.c                  |  82 +++---
 drivers/net/nfp/nfp_ethdev_vf.c               |  66 +++--
 drivers/net/nfp/nfp_flow.c                    |  36 +--
 drivers/net/nfp/nfp_rxtx.c                    |  86 +++---
 drivers/net/nfp/nfp_rxtx.h                    |  10 +-
 13 files changed, 357 insertions(+), 328 deletions(-)

diff --git a/drivers/net/nfp/flower/nfp_flower.c b/drivers/net/nfp/flower/nfp_flower.c
index 3ddaf0f28d..59717fa6b1 100644
--- a/drivers/net/nfp/flower/nfp_flower.c
+++ b/drivers/net/nfp/flower/nfp_flower.c
@@ -330,7 +330,8 @@ nfp_flower_pf_xmit_pkts(void *tx_queue,
 }
 
 static int
-nfp_flower_init_vnic_common(struct nfp_net_hw *hw, const char *vnic_type)
+nfp_flower_init_vnic_common(struct nfp_net_hw *hw,
+		const char *vnic_type)
 {
 	int err;
 	uint32_t start_q;
diff --git a/drivers/net/nfp/flower/nfp_flower_ctrl.c b/drivers/net/nfp/flower/nfp_flower_ctrl.c
index b564e7cd73..4967cc2375 100644
--- a/drivers/net/nfp/flower/nfp_flower_ctrl.c
+++ b/drivers/net/nfp/flower/nfp_flower_ctrl.c
@@ -64,9 +64,8 @@ nfp_flower_ctrl_vnic_recv(void *rx_queue,
 		 */
 		new_mb = rte_pktmbuf_alloc(rxq->mem_pool);
 		if (unlikely(new_mb == NULL)) {
-			PMD_RX_LOG(ERR,
-				"RX mbuf alloc failed port_id=%u queue_id=%hu",
-				rxq->port_id, rxq->qidx);
+			PMD_RX_LOG(ERR, "RX mbuf alloc failed port_id=%u queue_id=%hu",
+					rxq->port_id, rxq->qidx);
 			nfp_net_mbuf_alloc_failed(rxq);
 			break;
 		}
@@ -141,7 +140,7 @@ nfp_flower_ctrl_vnic_recv(void *rx_queue,
 	rte_wmb();
 	if (nb_hold >= rxq->rx_free_thresh) {
 		PMD_RX_LOG(DEBUG, "port=%hu queue=%hu nb_hold=%hu avail=%hu",
-			rxq->port_id, rxq->qidx, nb_hold, avail);
+				rxq->port_id, rxq->qidx, nb_hold, avail);
 		nfp_qcp_ptr_add(rxq->qcp_fl, NFP_QCP_WRITE_PTR, nb_hold);
 		nb_hold = 0;
 	}
diff --git a/drivers/net/nfp/flower/nfp_flower_representor.c b/drivers/net/nfp/flower/nfp_flower_representor.c
index 55ca3e6db0..01c2c5a517 100644
--- a/drivers/net/nfp/flower/nfp_flower_representor.c
+++ b/drivers/net/nfp/flower/nfp_flower_representor.c
@@ -826,7 +826,7 @@ nfp_flower_repr_alloc(struct nfp_app_fw_flower *app_fw_flower)
 		snprintf(flower_repr.name, sizeof(flower_repr.name),
 				"%s_repr_vf%d", pci_name, i);
 
-		 /* This will also allocate private memory for the device*/
+		/* This will also allocate private memory for the device*/
 		ret = rte_eth_dev_create(eth_dev->device, flower_repr.name,
 				sizeof(struct nfp_flower_representor),
 				NULL, NULL, nfp_flower_repr_init, &flower_repr);
diff --git a/drivers/net/nfp/nfdk/nfp_nfdk.h b/drivers/net/nfp/nfdk/nfp_nfdk.h
index 75ecb361ee..99675b6bd7 100644
--- a/drivers/net/nfp/nfdk/nfp_nfdk.h
+++ b/drivers/net/nfp/nfdk/nfp_nfdk.h
@@ -143,7 +143,7 @@ nfp_net_nfdk_free_tx_desc(struct nfp_net_txq *txq)
 		free_desc = txq->rd_p - txq->wr_p;
 
 	return (free_desc > NFDK_TX_DESC_STOP_CNT) ?
-		(free_desc - NFDK_TX_DESC_STOP_CNT) : 0;
+			(free_desc - NFDK_TX_DESC_STOP_CNT) : 0;
 }
 
 /*
diff --git a/drivers/net/nfp/nfdk/nfp_nfdk_dp.c b/drivers/net/nfp/nfdk/nfp_nfdk_dp.c
index d4bd5edb0a..2426ffb261 100644
--- a/drivers/net/nfp/nfdk/nfp_nfdk_dp.c
+++ b/drivers/net/nfp/nfdk/nfp_nfdk_dp.c
@@ -101,9 +101,7 @@ static inline uint16_t
 nfp_net_nfdk_headlen_to_segs(uint16_t headlen)
 {
 	/* First descriptor fits less data, so adjust for that */
-	return DIV_ROUND_UP(headlen +
-			NFDK_TX_MAX_DATA_PER_DESC -
-			NFDK_TX_MAX_DATA_PER_HEAD,
+	return DIV_ROUND_UP(headlen + NFDK_TX_MAX_DATA_PER_DESC - NFDK_TX_MAX_DATA_PER_HEAD,
 			NFDK_TX_MAX_DATA_PER_DESC);
 }
 
diff --git a/drivers/net/nfp/nfp_common.c b/drivers/net/nfp/nfp_common.c
index 36752583dd..9719a9212b 100644
--- a/drivers/net/nfp/nfp_common.c
+++ b/drivers/net/nfp/nfp_common.c
@@ -172,7 +172,8 @@ nfp_net_link_speed_rte2nfp(uint16_t speed)
 }
 
 static void
-nfp_net_notify_port_speed(struct nfp_net_hw *hw, struct rte_eth_link *link)
+nfp_net_notify_port_speed(struct nfp_net_hw *hw,
+		struct rte_eth_link *link)
 {
 	/**
 	 * Read the link status from NFP_NET_CFG_STS. If the link is down
@@ -188,21 +189,22 @@ nfp_net_notify_port_speed(struct nfp_net_hw *hw, struct rte_eth_link *link)
 	 * NFP_NET_CFG_STS_NSP_LINK_RATE.
 	 */
 	nn_cfg_writew(hw, NFP_NET_CFG_STS_NSP_LINK_RATE,
-		      nfp_net_link_speed_rte2nfp(link->link_speed));
+			nfp_net_link_speed_rte2nfp(link->link_speed));
 }
 
 /* The length of firmware version string */
 #define FW_VER_LEN        32
 
 static int
-__nfp_net_reconfig(struct nfp_net_hw *hw, uint32_t update)
+__nfp_net_reconfig(struct nfp_net_hw *hw,
+		uint32_t update)
 {
 	int cnt;
 	uint32_t new;
 	struct timespec wait;
 
 	PMD_DRV_LOG(DEBUG, "Writing to the configuration queue (%p)...",
-		    hw->qcp_cfg);
+			hw->qcp_cfg);
 
 	if (hw->qcp_cfg == NULL) {
 		PMD_INIT_LOG(ERR, "Bad configuration queue pointer");
@@ -227,7 +229,7 @@ __nfp_net_reconfig(struct nfp_net_hw *hw, uint32_t update)
 		}
 		if (cnt >= NFP_NET_POLL_TIMEOUT) {
 			PMD_INIT_LOG(ERR, "Reconfig timeout for 0x%08x after"
-					  " %dms", update, cnt);
+					" %dms", update, cnt);
 			return -EIO;
 		}
 		nanosleep(&wait, 0); /* waiting for a 1ms */
@@ -254,7 +256,9 @@ __nfp_net_reconfig(struct nfp_net_hw *hw, uint32_t update)
  *   - (EIO) if I/O err and fail to reconfigure the device.
  */
 int
-nfp_net_reconfig(struct nfp_net_hw *hw, uint32_t ctrl, uint32_t update)
+nfp_net_reconfig(struct nfp_net_hw *hw,
+		uint32_t ctrl,
+		uint32_t update)
 {
 	int ret;
 
@@ -296,7 +300,9 @@ nfp_net_reconfig(struct nfp_net_hw *hw, uint32_t ctrl, uint32_t update)
  *   - (EIO) if I/O err and fail to reconfigure the device.
  */
 int
-nfp_net_ext_reconfig(struct nfp_net_hw *hw, uint32_t ctrl_ext, uint32_t update)
+nfp_net_ext_reconfig(struct nfp_net_hw *hw,
+		uint32_t ctrl_ext,
+		uint32_t update)
 {
 	int ret;
 
@@ -401,7 +407,7 @@ nfp_net_configure(struct rte_eth_dev *dev)
 
 	/* Checking RX mode */
 	if ((rxmode->mq_mode & RTE_ETH_MQ_RX_RSS_FLAG) != 0 &&
-	    (hw->cap & NFP_NET_CFG_CTRL_RSS_ANY) == 0) {
+			(hw->cap & NFP_NET_CFG_CTRL_RSS_ANY) == 0) {
 		PMD_INIT_LOG(INFO, "RSS not supported");
 		return -EINVAL;
 	}
@@ -409,7 +415,7 @@ nfp_net_configure(struct rte_eth_dev *dev)
 	/* Checking MTU set */
 	if (rxmode->mtu > NFP_FRAME_SIZE_MAX) {
 		PMD_INIT_LOG(ERR, "MTU (%u) larger than NFP_FRAME_SIZE_MAX (%u) not supported",
-				    rxmode->mtu, NFP_FRAME_SIZE_MAX);
+				rxmode->mtu, NFP_FRAME_SIZE_MAX);
 		return -ERANGE;
 	}
 
@@ -446,7 +452,8 @@ nfp_net_log_device_information(const struct nfp_net_hw *hw)
 }
 
 static inline void
-nfp_net_enbable_rxvlan_cap(struct nfp_net_hw *hw, uint32_t *ctrl)
+nfp_net_enbable_rxvlan_cap(struct nfp_net_hw *hw,
+		uint32_t *ctrl)
 {
 	if ((hw->cap & NFP_NET_CFG_CTRL_RXVLAN_V2) != 0)
 		*ctrl |= NFP_NET_CFG_CTRL_RXVLAN_V2;
@@ -490,8 +497,9 @@ nfp_net_disable_queues(struct rte_eth_dev *dev)
 	nn_cfg_writeq(hw, NFP_NET_CFG_RXRS_ENABLE, 0);
 
 	new_ctrl = hw->ctrl & ~NFP_NET_CFG_CTRL_ENABLE;
-	update = NFP_NET_CFG_UPDATE_GEN | NFP_NET_CFG_UPDATE_RING |
-		 NFP_NET_CFG_UPDATE_MSIX;
+	update = NFP_NET_CFG_UPDATE_GEN |
+			NFP_NET_CFG_UPDATE_RING |
+			NFP_NET_CFG_UPDATE_MSIX;
 
 	if ((hw->cap & NFP_NET_CFG_CTRL_RINGCFG) != 0)
 		new_ctrl &= ~NFP_NET_CFG_CTRL_RINGCFG;
@@ -517,7 +525,8 @@ nfp_net_cfg_queue_setup(struct nfp_net_hw *hw)
 }
 
 void
-nfp_net_write_mac(struct nfp_net_hw *hw, uint8_t *mac)
+nfp_net_write_mac(struct nfp_net_hw *hw,
+		uint8_t *mac)
 {
 	uint32_t mac0 = *(uint32_t *)mac;
 	uint16_t mac1;
@@ -527,20 +536,21 @@ nfp_net_write_mac(struct nfp_net_hw *hw, uint8_t *mac)
 	mac += 4;
 	mac1 = *(uint16_t *)mac;
 	nn_writew(rte_cpu_to_be_16(mac1),
-		  hw->ctrl_bar + NFP_NET_CFG_MACADDR + 6);
+			hw->ctrl_bar + NFP_NET_CFG_MACADDR + 6);
 }
 
 int
-nfp_net_set_mac_addr(struct rte_eth_dev *dev, struct rte_ether_addr *mac_addr)
+nfp_net_set_mac_addr(struct rte_eth_dev *dev,
+		struct rte_ether_addr *mac_addr)
 {
 	struct nfp_net_hw *hw;
 	uint32_t update, ctrl;
 
 	hw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private);
 	if ((hw->ctrl & NFP_NET_CFG_CTRL_ENABLE) != 0 &&
-	    (hw->cap & NFP_NET_CFG_CTRL_LIVE_ADDR) == 0) {
+			(hw->cap & NFP_NET_CFG_CTRL_LIVE_ADDR) == 0) {
 		PMD_INIT_LOG(INFO, "MAC address unable to change when"
-				  " port enabled");
+				" port enabled");
 		return -EBUSY;
 	}
 
@@ -551,7 +561,7 @@ nfp_net_set_mac_addr(struct rte_eth_dev *dev, struct rte_ether_addr *mac_addr)
 	update = NFP_NET_CFG_UPDATE_MACADDR;
 	ctrl = hw->ctrl;
 	if ((hw->ctrl & NFP_NET_CFG_CTRL_ENABLE) != 0 &&
-	    (hw->cap & NFP_NET_CFG_CTRL_LIVE_ADDR) != 0)
+			(hw->cap & NFP_NET_CFG_CTRL_LIVE_ADDR) != 0)
 		ctrl |= NFP_NET_CFG_CTRL_LIVE_ADDR;
 	if (nfp_net_reconfig(hw, ctrl, update) != 0) {
 		PMD_INIT_LOG(INFO, "MAC address update failed");
@@ -562,15 +572,15 @@ nfp_net_set_mac_addr(struct rte_eth_dev *dev, struct rte_ether_addr *mac_addr)
 
 int
 nfp_configure_rx_interrupt(struct rte_eth_dev *dev,
-			   struct rte_intr_handle *intr_handle)
+		struct rte_intr_handle *intr_handle)
 {
 	struct nfp_net_hw *hw;
 	int i;
 
 	if (rte_intr_vec_list_alloc(intr_handle, "intr_vec",
-				    dev->data->nb_rx_queues) != 0) {
+				dev->data->nb_rx_queues) != 0) {
 		PMD_INIT_LOG(ERR, "Failed to allocate %d rx_queues"
-			     " intr_vec", dev->data->nb_rx_queues);
+				" intr_vec", dev->data->nb_rx_queues);
 		return -ENOMEM;
 	}
 
@@ -590,12 +600,10 @@ nfp_configure_rx_interrupt(struct rte_eth_dev *dev,
 			 * efd interrupts
 			*/
 			nn_cfg_writeb(hw, NFP_NET_CFG_RXR_VEC(i), i + 1);
-			if (rte_intr_vec_list_index_set(intr_handle, i,
-							       i + 1) != 0)
+			if (rte_intr_vec_list_index_set(intr_handle, i, i + 1) != 0)
 				return -1;
 			PMD_INIT_LOG(DEBUG, "intr_vec[%d]= %d", i,
-				rte_intr_vec_list_index_get(intr_handle,
-								   i));
+					rte_intr_vec_list_index_get(intr_handle, i));
 		}
 	}
 
@@ -651,13 +659,13 @@ nfp_check_offloads(struct rte_eth_dev *dev)
 
 	/* TX checksum offload */
 	if ((txmode->offloads & RTE_ETH_TX_OFFLOAD_IPV4_CKSUM) != 0 ||
-	    (txmode->offloads & RTE_ETH_TX_OFFLOAD_UDP_CKSUM) != 0 ||
-	    (txmode->offloads & RTE_ETH_TX_OFFLOAD_TCP_CKSUM) != 0)
+			(txmode->offloads & RTE_ETH_TX_OFFLOAD_UDP_CKSUM) != 0 ||
+			(txmode->offloads & RTE_ETH_TX_OFFLOAD_TCP_CKSUM) != 0)
 		ctrl |= NFP_NET_CFG_CTRL_TXCSUM;
 
 	/* LSO offload */
 	if ((txmode->offloads & RTE_ETH_TX_OFFLOAD_TCP_TSO) != 0 ||
-	    (txmode->offloads & RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO) != 0) {
+			(txmode->offloads & RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO) != 0) {
 		if ((hw->cap & NFP_NET_CFG_CTRL_LSO) != 0)
 			ctrl |= NFP_NET_CFG_CTRL_LSO;
 		else
@@ -751,7 +759,8 @@ nfp_net_promisc_disable(struct rte_eth_dev *dev)
  * status.
  */
 int
-nfp_net_link_update(struct rte_eth_dev *dev, __rte_unused int wait_to_complete)
+nfp_net_link_update(struct rte_eth_dev *dev,
+		__rte_unused int wait_to_complete)
 {
 	int ret;
 	uint32_t i;
@@ -820,7 +829,8 @@ nfp_net_link_update(struct rte_eth_dev *dev, __rte_unused int wait_to_complete)
 }
 
 int
-nfp_net_stats_get(struct rte_eth_dev *dev, struct rte_eth_stats *stats)
+nfp_net_stats_get(struct rte_eth_dev *dev,
+		struct rte_eth_stats *stats)
 {
 	int i;
 	struct nfp_net_hw *hw;
@@ -838,16 +848,16 @@ nfp_net_stats_get(struct rte_eth_dev *dev, struct rte_eth_stats *stats)
 			break;
 
 		nfp_dev_stats.q_ipackets[i] =
-			nn_cfg_readq(hw, NFP_NET_CFG_RXR_STATS(i));
+				nn_cfg_readq(hw, NFP_NET_CFG_RXR_STATS(i));
 
 		nfp_dev_stats.q_ipackets[i] -=
-			hw->eth_stats_base.q_ipackets[i];
+				hw->eth_stats_base.q_ipackets[i];
 
 		nfp_dev_stats.q_ibytes[i] =
-			nn_cfg_readq(hw, NFP_NET_CFG_RXR_STATS(i) + 0x8);
+				nn_cfg_readq(hw, NFP_NET_CFG_RXR_STATS(i) + 0x8);
 
 		nfp_dev_stats.q_ibytes[i] -=
-			hw->eth_stats_base.q_ibytes[i];
+				hw->eth_stats_base.q_ibytes[i];
 	}
 
 	/* reading per TX ring stats */
@@ -856,46 +866,42 @@ nfp_net_stats_get(struct rte_eth_dev *dev, struct rte_eth_stats *stats)
 			break;
 
 		nfp_dev_stats.q_opackets[i] =
-			nn_cfg_readq(hw, NFP_NET_CFG_TXR_STATS(i));
+				nn_cfg_readq(hw, NFP_NET_CFG_TXR_STATS(i));
 
-		nfp_dev_stats.q_opackets[i] -=
-			hw->eth_stats_base.q_opackets[i];
+		nfp_dev_stats.q_opackets[i] -= hw->eth_stats_base.q_opackets[i];
 
 		nfp_dev_stats.q_obytes[i] =
-			nn_cfg_readq(hw, NFP_NET_CFG_TXR_STATS(i) + 0x8);
+				nn_cfg_readq(hw, NFP_NET_CFG_TXR_STATS(i) + 0x8);
 
-		nfp_dev_stats.q_obytes[i] -=
-			hw->eth_stats_base.q_obytes[i];
+		nfp_dev_stats.q_obytes[i] -= hw->eth_stats_base.q_obytes[i];
 	}
 
-	nfp_dev_stats.ipackets =
-		nn_cfg_readq(hw, NFP_NET_CFG_STATS_RX_FRAMES);
+	nfp_dev_stats.ipackets = nn_cfg_readq(hw, NFP_NET_CFG_STATS_RX_FRAMES);
 
 	nfp_dev_stats.ipackets -= hw->eth_stats_base.ipackets;
 
-	nfp_dev_stats.ibytes =
-		nn_cfg_readq(hw, NFP_NET_CFG_STATS_RX_OCTETS);
+	nfp_dev_stats.ibytes = nn_cfg_readq(hw, NFP_NET_CFG_STATS_RX_OCTETS);
 
 	nfp_dev_stats.ibytes -= hw->eth_stats_base.ibytes;
 
 	nfp_dev_stats.opackets =
-		nn_cfg_readq(hw, NFP_NET_CFG_STATS_TX_FRAMES);
+			nn_cfg_readq(hw, NFP_NET_CFG_STATS_TX_FRAMES);
 
 	nfp_dev_stats.opackets -= hw->eth_stats_base.opackets;
 
 	nfp_dev_stats.obytes =
-		nn_cfg_readq(hw, NFP_NET_CFG_STATS_TX_OCTETS);
+			nn_cfg_readq(hw, NFP_NET_CFG_STATS_TX_OCTETS);
 
 	nfp_dev_stats.obytes -= hw->eth_stats_base.obytes;
 
 	/* reading general device stats */
 	nfp_dev_stats.ierrors =
-		nn_cfg_readq(hw, NFP_NET_CFG_STATS_RX_ERRORS);
+			nn_cfg_readq(hw, NFP_NET_CFG_STATS_RX_ERRORS);
 
 	nfp_dev_stats.ierrors -= hw->eth_stats_base.ierrors;
 
 	nfp_dev_stats.oerrors =
-		nn_cfg_readq(hw, NFP_NET_CFG_STATS_TX_ERRORS);
+			nn_cfg_readq(hw, NFP_NET_CFG_STATS_TX_ERRORS);
 
 	nfp_dev_stats.oerrors -= hw->eth_stats_base.oerrors;
 
@@ -903,7 +909,7 @@ nfp_net_stats_get(struct rte_eth_dev *dev, struct rte_eth_stats *stats)
 	nfp_dev_stats.rx_nombuf = dev->data->rx_mbuf_alloc_failed;
 
 	nfp_dev_stats.imissed =
-		nn_cfg_readq(hw, NFP_NET_CFG_STATS_RX_DISCARDS);
+			nn_cfg_readq(hw, NFP_NET_CFG_STATS_RX_DISCARDS);
 
 	nfp_dev_stats.imissed -= hw->eth_stats_base.imissed;
 
@@ -933,10 +939,10 @@ nfp_net_stats_reset(struct rte_eth_dev *dev)
 			break;
 
 		hw->eth_stats_base.q_ipackets[i] =
-			nn_cfg_readq(hw, NFP_NET_CFG_RXR_STATS(i));
+				nn_cfg_readq(hw, NFP_NET_CFG_RXR_STATS(i));
 
 		hw->eth_stats_base.q_ibytes[i] =
-			nn_cfg_readq(hw, NFP_NET_CFG_RXR_STATS(i) + 0x8);
+				nn_cfg_readq(hw, NFP_NET_CFG_RXR_STATS(i) + 0x8);
 	}
 
 	/* reading per TX ring stats */
@@ -945,36 +951,36 @@ nfp_net_stats_reset(struct rte_eth_dev *dev)
 			break;
 
 		hw->eth_stats_base.q_opackets[i] =
-			nn_cfg_readq(hw, NFP_NET_CFG_TXR_STATS(i));
+				nn_cfg_readq(hw, NFP_NET_CFG_TXR_STATS(i));
 
 		hw->eth_stats_base.q_obytes[i] =
-			nn_cfg_readq(hw, NFP_NET_CFG_TXR_STATS(i) + 0x8);
+				nn_cfg_readq(hw, NFP_NET_CFG_TXR_STATS(i) + 0x8);
 	}
 
 	hw->eth_stats_base.ipackets =
-		nn_cfg_readq(hw, NFP_NET_CFG_STATS_RX_FRAMES);
+			nn_cfg_readq(hw, NFP_NET_CFG_STATS_RX_FRAMES);
 
 	hw->eth_stats_base.ibytes =
-		nn_cfg_readq(hw, NFP_NET_CFG_STATS_RX_OCTETS);
+			nn_cfg_readq(hw, NFP_NET_CFG_STATS_RX_OCTETS);
 
 	hw->eth_stats_base.opackets =
-		nn_cfg_readq(hw, NFP_NET_CFG_STATS_TX_FRAMES);
+			nn_cfg_readq(hw, NFP_NET_CFG_STATS_TX_FRAMES);
 
 	hw->eth_stats_base.obytes =
-		nn_cfg_readq(hw, NFP_NET_CFG_STATS_TX_OCTETS);
+			nn_cfg_readq(hw, NFP_NET_CFG_STATS_TX_OCTETS);
 
 	/* reading general device stats */
 	hw->eth_stats_base.ierrors =
-		nn_cfg_readq(hw, NFP_NET_CFG_STATS_RX_ERRORS);
+			nn_cfg_readq(hw, NFP_NET_CFG_STATS_RX_ERRORS);
 
 	hw->eth_stats_base.oerrors =
-		nn_cfg_readq(hw, NFP_NET_CFG_STATS_TX_ERRORS);
+			nn_cfg_readq(hw, NFP_NET_CFG_STATS_TX_ERRORS);
 
 	/* RX ring mbuf allocation failures */
 	dev->data->rx_mbuf_alloc_failed = 0;
 
 	hw->eth_stats_base.imissed =
-		nn_cfg_readq(hw, NFP_NET_CFG_STATS_RX_DISCARDS);
+			nn_cfg_readq(hw, NFP_NET_CFG_STATS_RX_DISCARDS);
 
 	return 0;
 }
@@ -1237,16 +1243,16 @@ nfp_net_infos_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 
 	if ((hw->cap & NFP_NET_CFG_CTRL_RXCSUM) != 0)
 		dev_info->rx_offload_capa |= RTE_ETH_RX_OFFLOAD_IPV4_CKSUM |
-					     RTE_ETH_RX_OFFLOAD_UDP_CKSUM |
-					     RTE_ETH_RX_OFFLOAD_TCP_CKSUM;
+				RTE_ETH_RX_OFFLOAD_UDP_CKSUM |
+				RTE_ETH_RX_OFFLOAD_TCP_CKSUM;
 
 	if ((hw->cap & (NFP_NET_CFG_CTRL_TXVLAN | NFP_NET_CFG_CTRL_TXVLAN_V2)) != 0)
 		dev_info->tx_offload_capa = RTE_ETH_TX_OFFLOAD_VLAN_INSERT;
 
 	if ((hw->cap & NFP_NET_CFG_CTRL_TXCSUM) != 0)
 		dev_info->tx_offload_capa |= RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
-					     RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
-					     RTE_ETH_TX_OFFLOAD_TCP_CKSUM;
+				RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+				RTE_ETH_TX_OFFLOAD_TCP_CKSUM;
 
 	if ((hw->cap & NFP_NET_CFG_CTRL_LSO_ANY) != 0) {
 		dev_info->tx_offload_capa |= RTE_ETH_TX_OFFLOAD_TCP_TSO;
@@ -1301,21 +1307,24 @@ nfp_net_infos_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 		dev_info->rx_offload_capa |= RTE_ETH_RX_OFFLOAD_RSS_HASH;
 
 		dev_info->flow_type_rss_offloads = RTE_ETH_RSS_IPV4 |
-						   RTE_ETH_RSS_NONFRAG_IPV4_TCP |
-						   RTE_ETH_RSS_NONFRAG_IPV4_UDP |
-						   RTE_ETH_RSS_NONFRAG_IPV4_SCTP |
-						   RTE_ETH_RSS_IPV6 |
-						   RTE_ETH_RSS_NONFRAG_IPV6_TCP |
-						   RTE_ETH_RSS_NONFRAG_IPV6_UDP |
-						   RTE_ETH_RSS_NONFRAG_IPV6_SCTP;
+				RTE_ETH_RSS_NONFRAG_IPV4_TCP |
+				RTE_ETH_RSS_NONFRAG_IPV4_UDP |
+				RTE_ETH_RSS_NONFRAG_IPV4_SCTP |
+				RTE_ETH_RSS_IPV6 |
+				RTE_ETH_RSS_NONFRAG_IPV6_TCP |
+				RTE_ETH_RSS_NONFRAG_IPV6_UDP |
+				RTE_ETH_RSS_NONFRAG_IPV6_SCTP;
 
 		dev_info->reta_size = NFP_NET_CFG_RSS_ITBL_SZ;
 		dev_info->hash_key_size = NFP_NET_CFG_RSS_KEY_SZ;
 	}
 
-	dev_info->speed_capa = RTE_ETH_LINK_SPEED_1G | RTE_ETH_LINK_SPEED_10G |
-			       RTE_ETH_LINK_SPEED_25G | RTE_ETH_LINK_SPEED_40G |
-			       RTE_ETH_LINK_SPEED_50G | RTE_ETH_LINK_SPEED_100G;
+	dev_info->speed_capa = RTE_ETH_LINK_SPEED_1G |
+			RTE_ETH_LINK_SPEED_10G |
+			RTE_ETH_LINK_SPEED_25G |
+			RTE_ETH_LINK_SPEED_40G |
+			RTE_ETH_LINK_SPEED_50G |
+			RTE_ETH_LINK_SPEED_100G;
 
 	return 0;
 }
@@ -1384,7 +1393,8 @@ nfp_net_supported_ptypes_get(struct rte_eth_dev *dev)
 }
 
 int
-nfp_rx_queue_intr_enable(struct rte_eth_dev *dev, uint16_t queue_id)
+nfp_rx_queue_intr_enable(struct rte_eth_dev *dev,
+		uint16_t queue_id)
 {
 	struct rte_pci_device *pci_dev;
 	struct nfp_net_hw *hw;
@@ -1393,19 +1403,19 @@ nfp_rx_queue_intr_enable(struct rte_eth_dev *dev, uint16_t queue_id)
 	hw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private);
 	pci_dev = RTE_ETH_DEV_TO_PCI(dev);
 
-	if (rte_intr_type_get(pci_dev->intr_handle) !=
-							RTE_INTR_HANDLE_UIO)
+	if (rte_intr_type_get(pci_dev->intr_handle) != RTE_INTR_HANDLE_UIO)
 		base = 1;
 
 	/* Make sure all updates are written before un-masking */
 	rte_wmb();
 	nn_cfg_writeb(hw, NFP_NET_CFG_ICR(base + queue_id),
-		      NFP_NET_CFG_ICR_UNMASKED);
+			NFP_NET_CFG_ICR_UNMASKED);
 	return 0;
 }
 
 int
-nfp_rx_queue_intr_disable(struct rte_eth_dev *dev, uint16_t queue_id)
+nfp_rx_queue_intr_disable(struct rte_eth_dev *dev,
+		uint16_t queue_id)
 {
 	struct rte_pci_device *pci_dev;
 	struct nfp_net_hw *hw;
@@ -1414,8 +1424,7 @@ nfp_rx_queue_intr_disable(struct rte_eth_dev *dev, uint16_t queue_id)
 	hw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private);
 	pci_dev = RTE_ETH_DEV_TO_PCI(dev);
 
-	if (rte_intr_type_get(pci_dev->intr_handle) !=
-							RTE_INTR_HANDLE_UIO)
+	if (rte_intr_type_get(pci_dev->intr_handle) != RTE_INTR_HANDLE_UIO)
 		base = 1;
 
 	/* Make sure all updates are written before un-masking */
@@ -1433,16 +1442,15 @@ nfp_net_dev_link_status_print(struct rte_eth_dev *dev)
 	rte_eth_linkstatus_get(dev, &link);
 	if (link.link_status != 0)
 		PMD_DRV_LOG(INFO, "Port %d: Link Up - speed %u Mbps - %s",
-			    dev->data->port_id, link.link_speed,
-			    link.link_duplex == RTE_ETH_LINK_FULL_DUPLEX
-			    ? "full-duplex" : "half-duplex");
+				dev->data->port_id, link.link_speed,
+				link.link_duplex == RTE_ETH_LINK_FULL_DUPLEX ?
+				"full-duplex" : "half-duplex");
 	else
-		PMD_DRV_LOG(INFO, " Port %d: Link Down",
-			    dev->data->port_id);
+		PMD_DRV_LOG(INFO, " Port %d: Link Down", dev->data->port_id);
 
 	PMD_DRV_LOG(INFO, "PCI Address: " PCI_PRI_FMT,
-		    pci_dev->addr.domain, pci_dev->addr.bus,
-		    pci_dev->addr.devid, pci_dev->addr.function);
+			pci_dev->addr.domain, pci_dev->addr.bus,
+			pci_dev->addr.devid, pci_dev->addr.function);
 }
 
 /* Interrupt configuration and handling */
@@ -1470,7 +1478,7 @@ nfp_net_irq_unmask(struct rte_eth_dev *dev)
 		/* Make sure all updates are written before un-masking */
 		rte_wmb();
 		nn_cfg_writeb(hw, NFP_NET_CFG_ICR(NFP_NET_IRQ_LSC_IDX),
-			      NFP_NET_CFG_ICR_UNMASKED);
+				NFP_NET_CFG_ICR_UNMASKED);
 	}
 }
 
@@ -1523,8 +1531,8 @@ nfp_net_dev_interrupt_handler(void *param)
 	}
 
 	if (rte_eal_alarm_set(timeout * 1000,
-			      nfp_net_dev_interrupt_delayed_handler,
-			      (void *)dev) != 0) {
+			nfp_net_dev_interrupt_delayed_handler,
+			(void *)dev) != 0) {
 		PMD_INIT_LOG(ERR, "Error setting alarm");
 		/* Unmasking */
 		nfp_net_irq_unmask(dev);
@@ -1532,7 +1540,8 @@ nfp_net_dev_interrupt_handler(void *param)
 }
 
 int
-nfp_net_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
+nfp_net_dev_mtu_set(struct rte_eth_dev *dev,
+		uint16_t mtu)
 {
 	struct nfp_net_hw *hw;
 
@@ -1541,14 +1550,14 @@ nfp_net_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
 	/* mtu setting is forbidden if port is started */
 	if (dev->data->dev_started) {
 		PMD_DRV_LOG(ERR, "port %d must be stopped before configuration",
-			    dev->data->port_id);
+				dev->data->port_id);
 		return -EBUSY;
 	}
 
 	/* MTU larger than current mbufsize not supported */
 	if (mtu > hw->flbufsz) {
 		PMD_DRV_LOG(ERR, "MTU (%u) larger than current mbufsize (%u) not supported",
-			    mtu, hw->flbufsz);
+				mtu, hw->flbufsz);
 		return -ERANGE;
 	}
 
@@ -1561,7 +1570,8 @@ nfp_net_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
 }
 
 int
-nfp_net_vlan_offload_set(struct rte_eth_dev *dev, int mask)
+nfp_net_vlan_offload_set(struct rte_eth_dev *dev,
+		int mask)
 {
 	uint32_t new_ctrl, update;
 	struct nfp_net_hw *hw;
@@ -1606,8 +1616,8 @@ nfp_net_vlan_offload_set(struct rte_eth_dev *dev, int mask)
 
 static int
 nfp_net_rss_reta_write(struct rte_eth_dev *dev,
-		    struct rte_eth_rss_reta_entry64 *reta_conf,
-		    uint16_t reta_size)
+		struct rte_eth_rss_reta_entry64 *reta_conf,
+		uint16_t reta_size)
 {
 	uint32_t reta, mask;
 	int i, j;
@@ -1617,8 +1627,8 @@ nfp_net_rss_reta_write(struct rte_eth_dev *dev,
 
 	if (reta_size != NFP_NET_CFG_RSS_ITBL_SZ) {
 		PMD_DRV_LOG(ERR, "The size of hash lookup table configured "
-			"(%d) doesn't match the number hardware can supported "
-			"(%d)", reta_size, NFP_NET_CFG_RSS_ITBL_SZ);
+				"(%d) doesn't match the number hardware can support "
+				"(%d)", reta_size, NFP_NET_CFG_RSS_ITBL_SZ);
 		return -EINVAL;
 	}
 
@@ -1648,8 +1658,7 @@ nfp_net_rss_reta_write(struct rte_eth_dev *dev,
 				reta &= ~(0xFF << (8 * j));
 			reta |= reta_conf[idx].reta[shift + j] << (8 * j);
 		}
-		nn_cfg_writel(hw, NFP_NET_CFG_RSS_ITBL + (idx * 64) + shift,
-			      reta);
+		nn_cfg_writel(hw, NFP_NET_CFG_RSS_ITBL + (idx * 64) + shift, reta);
 	}
 	return 0;
 }
@@ -1657,8 +1666,8 @@ nfp_net_rss_reta_write(struct rte_eth_dev *dev,
 /* Update Redirection Table(RETA) of Receive Side Scaling of Ethernet device */
 int
 nfp_net_reta_update(struct rte_eth_dev *dev,
-		    struct rte_eth_rss_reta_entry64 *reta_conf,
-		    uint16_t reta_size)
+		struct rte_eth_rss_reta_entry64 *reta_conf,
+		uint16_t reta_size)
 {
 	struct nfp_net_hw *hw =
 		NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private);
@@ -1683,8 +1692,8 @@ nfp_net_reta_update(struct rte_eth_dev *dev,
  /* Query Redirection Table(RETA) of Receive Side Scaling of Ethernet device. */
 int
 nfp_net_reta_query(struct rte_eth_dev *dev,
-		   struct rte_eth_rss_reta_entry64 *reta_conf,
-		   uint16_t reta_size)
+		struct rte_eth_rss_reta_entry64 *reta_conf,
+		uint16_t reta_size)
 {
 	uint8_t i, j, mask;
 	int idx, shift;
@@ -1698,8 +1707,8 @@ nfp_net_reta_query(struct rte_eth_dev *dev,
 
 	if (reta_size != NFP_NET_CFG_RSS_ITBL_SZ) {
 		PMD_DRV_LOG(ERR, "The size of hash lookup table configured "
-			"(%d) doesn't match the number hardware can supported "
-			"(%d)", reta_size, NFP_NET_CFG_RSS_ITBL_SZ);
+				"(%d) doesn't match the number hardware can support "
+				"(%d)", reta_size, NFP_NET_CFG_RSS_ITBL_SZ);
 		return -EINVAL;
 	}
 
@@ -1716,13 +1725,12 @@ nfp_net_reta_query(struct rte_eth_dev *dev,
 		if (mask == 0)
 			continue;
 
-		reta = nn_cfg_readl(hw, NFP_NET_CFG_RSS_ITBL + (idx * 64) +
-				    shift);
+		reta = nn_cfg_readl(hw, NFP_NET_CFG_RSS_ITBL + (idx * 64) + shift);
 		for (j = 0; j < 4; j++) {
 			if ((mask & (0x1 << j)) == 0)
 				continue;
 			reta_conf[idx].reta[shift + j] =
-				(uint8_t)((reta >> (8 * j)) & 0xF);
+					(uint8_t)((reta >> (8 * j)) & 0xF);
 		}
 	}
 	return 0;
@@ -1730,7 +1738,7 @@ nfp_net_reta_query(struct rte_eth_dev *dev,
 
 static int
 nfp_net_rss_hash_write(struct rte_eth_dev *dev,
-			struct rte_eth_rss_conf *rss_conf)
+		struct rte_eth_rss_conf *rss_conf)
 {
 	struct nfp_net_hw *hw;
 	uint64_t rss_hf;
@@ -1786,7 +1794,7 @@ nfp_net_rss_hash_write(struct rte_eth_dev *dev,
 
 int
 nfp_net_rss_hash_update(struct rte_eth_dev *dev,
-			struct rte_eth_rss_conf *rss_conf)
+		struct rte_eth_rss_conf *rss_conf)
 {
 	uint32_t update;
 	uint64_t rss_hf;
@@ -1822,7 +1830,7 @@ nfp_net_rss_hash_update(struct rte_eth_dev *dev,
 
 int
 nfp_net_rss_hash_conf_get(struct rte_eth_dev *dev,
-			  struct rte_eth_rss_conf *rss_conf)
+		struct rte_eth_rss_conf *rss_conf)
 {
 	uint64_t rss_hf;
 	uint32_t cfg_rss_ctrl;
@@ -1888,7 +1896,7 @@ nfp_net_rss_config_default(struct rte_eth_dev *dev)
 	int i, j, ret;
 
 	PMD_DRV_LOG(INFO, "setting default RSS conf for %u queues",
-		rx_queues);
+			rx_queues);
 
 	nfp_reta_conf[0].mask = ~0x0;
 	nfp_reta_conf[1].mask = ~0x0;
@@ -1984,7 +1992,7 @@ nfp_net_set_vxlan_port(struct nfp_net_hw *hw,
 
 	for (i = 0; i < NFP_NET_N_VXLAN_PORTS; i += 2) {
 		nn_cfg_writel(hw, NFP_NET_CFG_VXLAN_PORT + i * sizeof(port),
-			(hw->vxlan_ports[i + 1] << 16) | hw->vxlan_ports[i]);
+				(hw->vxlan_ports[i + 1] << 16) | hw->vxlan_ports[i]);
 	}
 
 	rte_spinlock_lock(&hw->reconfig_lock);
@@ -2004,7 +2012,8 @@ nfp_net_set_vxlan_port(struct nfp_net_hw *hw,
  * than 40 bits
  */
 int
-nfp_net_check_dma_mask(struct nfp_net_hw *hw, char *name)
+nfp_net_check_dma_mask(struct nfp_net_hw *hw,
+		char *name)
 {
 	if (hw->ver.extend == NFP_NET_CFG_VERSION_DP_NFD3 &&
 			rte_mem_check_dma_mask(40) != 0) {
@@ -2052,7 +2061,8 @@ nfp_net_cfg_read_version(struct nfp_net_hw *hw)
 }
 
 static void
-nfp_net_get_nsp_info(struct nfp_net_hw *hw, char *nsp_version)
+nfp_net_get_nsp_info(struct nfp_net_hw *hw,
+		char *nsp_version)
 {
 	struct nfp_nsp *nsp;
 
@@ -2068,7 +2078,8 @@ nfp_net_get_nsp_info(struct nfp_net_hw *hw, char *nsp_version)
 }
 
 static void
-nfp_net_get_mip_name(struct nfp_net_hw *hw, char *mip_name)
+nfp_net_get_mip_name(struct nfp_net_hw *hw,
+		char *mip_name)
 {
 	struct nfp_mip *mip;
 
@@ -2082,7 +2093,8 @@ nfp_net_get_mip_name(struct nfp_net_hw *hw, char *mip_name)
 }
 
 static void
-nfp_net_get_app_name(struct nfp_net_hw *hw, char *app_name)
+nfp_net_get_app_name(struct nfp_net_hw *hw,
+		char *app_name)
 {
 	switch (hw->pf_dev->app_fw_id) {
 	case NFP_APP_FW_CORE_NIC:
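
As an aside for reviewers, the comparison convention enforced across the
hunks above can be reduced to a tiny stand-alone program. Everything in
it (EXAMPLE_CAP_RSS, example_check()) is made up for illustration and is
not driver API:

/*
 * Minimal sketch of the rule: pointers compare explicitly against NULL
 * and integer/bitmask tests compare explicitly against 0, instead of
 * bare truth tests such as if (!ptr) or if (cap & BIT).
 */
#include <stddef.h>
#include <stdint.h>

#define EXAMPLE_CAP_RSS 0x1u	/* hypothetical capability bit */

static int
example_check(const uint32_t *cap)
{
	if (cap == NULL)	/* rather than: if (!cap) */
		return -1;

	if ((*cap & EXAMPLE_CAP_RSS) != 0)	/* rather than: if (*cap & EXAMPLE_CAP_RSS) */
		return 1;

	return 0;
}

int
main(void)
{
	uint32_t cap = EXAMPLE_CAP_RSS;

	return example_check(&cap) == 1 ? 0 : 1;
}
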
diff --git a/drivers/net/nfp/nfp_common.h b/drivers/net/nfp/nfp_common.h
index bc3a948231..e4fd394868 100644
--- a/drivers/net/nfp/nfp_common.h
+++ b/drivers/net/nfp/nfp_common.h
@@ -180,37 +180,47 @@ struct nfp_net_adapter {
 	struct nfp_net_hw hw;
 };
 
-static inline uint8_t nn_readb(volatile const void *addr)
+static inline uint8_t
+nn_readb(volatile const void *addr)
 {
 	return rte_read8(addr);
 }
 
-static inline void nn_writeb(uint8_t val, volatile void *addr)
+static inline void
+nn_writeb(uint8_t val,
+		volatile void *addr)
 {
 	rte_write8(val, addr);
 }
 
-static inline uint32_t nn_readl(volatile const void *addr)
+static inline uint32_t
+nn_readl(volatile const void *addr)
 {
 	return rte_read32(addr);
 }
 
-static inline void nn_writel(uint32_t val, volatile void *addr)
+static inline void
+nn_writel(uint32_t val,
+		volatile void *addr)
 {
 	rte_write32(val, addr);
 }
 
-static inline uint16_t nn_readw(volatile const void *addr)
+static inline uint16_t
+nn_readw(volatile const void *addr)
 {
 	return rte_read16(addr);
 }
 
-static inline void nn_writew(uint16_t val, volatile void *addr)
+static inline void
+nn_writew(uint16_t val,
+		volatile void *addr)
 {
 	rte_write16(val, addr);
 }
 
-static inline uint64_t nn_readq(volatile void *addr)
+static inline uint64_t
+nn_readq(volatile void *addr)
 {
 	const volatile uint32_t *p = addr;
 	uint32_t low, high;
@@ -221,7 +231,9 @@ static inline uint64_t nn_readq(volatile void *addr)
 	return low + ((uint64_t)high << 32);
 }
 
-static inline void nn_writeq(uint64_t val, volatile void *addr)
+static inline void
+nn_writeq(uint64_t val,
+		volatile void *addr)
 {
 	nn_writel(val >> 32, (volatile char *)addr + 4);
 	nn_writel(val, addr);
@@ -232,49 +244,61 @@ static inline void nn_writeq(uint64_t val, volatile void *addr)
  * Performs any endian conversion necessary.
  */
 static inline uint8_t
-nn_cfg_readb(struct nfp_net_hw *hw, int off)
+nn_cfg_readb(struct nfp_net_hw *hw,
+		int off)
 {
 	return nn_readb(hw->ctrl_bar + off);
 }
 
 static inline void
-nn_cfg_writeb(struct nfp_net_hw *hw, int off, uint8_t val)
+nn_cfg_writeb(struct nfp_net_hw *hw,
+		int off,
+		uint8_t val)
 {
 	nn_writeb(val, hw->ctrl_bar + off);
 }
 
 static inline uint16_t
-nn_cfg_readw(struct nfp_net_hw *hw, int off)
+nn_cfg_readw(struct nfp_net_hw *hw,
+		int off)
 {
 	return rte_le_to_cpu_16(nn_readw(hw->ctrl_bar + off));
 }
 
 static inline void
-nn_cfg_writew(struct nfp_net_hw *hw, int off, uint16_t val)
+nn_cfg_writew(struct nfp_net_hw *hw,
+		int off,
+		uint16_t val)
 {
 	nn_writew(rte_cpu_to_le_16(val), hw->ctrl_bar + off);
 }
 
 static inline uint32_t
-nn_cfg_readl(struct nfp_net_hw *hw, int off)
+nn_cfg_readl(struct nfp_net_hw *hw,
+		int off)
 {
 	return rte_le_to_cpu_32(nn_readl(hw->ctrl_bar + off));
 }
 
 static inline void
-nn_cfg_writel(struct nfp_net_hw *hw, int off, uint32_t val)
+nn_cfg_writel(struct nfp_net_hw *hw,
+		int off,
+		uint32_t val)
 {
 	nn_writel(rte_cpu_to_le_32(val), hw->ctrl_bar + off);
 }
 
 static inline uint64_t
-nn_cfg_readq(struct nfp_net_hw *hw, int off)
+nn_cfg_readq(struct nfp_net_hw *hw,
+		int off)
 {
 	return rte_le_to_cpu_64(nn_readq(hw->ctrl_bar + off));
 }
 
 static inline void
-nn_cfg_writeq(struct nfp_net_hw *hw, int off, uint64_t val)
+nn_cfg_writeq(struct nfp_net_hw *hw,
+		int off,
+		uint64_t val)
 {
 	nn_writeq(rte_cpu_to_le_64(val), hw->ctrl_bar + off);
 }
@@ -286,7 +310,9 @@ nn_cfg_writeq(struct nfp_net_hw *hw, int off, uint64_t val)
  * @val: Value to add to the queue pointer
  */
 static inline void
-nfp_qcp_ptr_add(uint8_t *q, enum nfp_qcp_ptr ptr, uint32_t val)
+nfp_qcp_ptr_add(uint8_t *q,
+		enum nfp_qcp_ptr ptr,
+		uint32_t val)
 {
 	uint32_t off;
 
@@ -304,7 +330,8 @@ nfp_qcp_ptr_add(uint8_t *q, enum nfp_qcp_ptr ptr, uint32_t val)
  * @ptr: Read or Write pointer
  */
 static inline uint32_t
-nfp_qcp_read(uint8_t *q, enum nfp_qcp_ptr ptr)
+nfp_qcp_read(uint8_t *q,
+		enum nfp_qcp_ptr ptr)
 {
 	uint32_t off;
 	uint32_t val;
@@ -343,12 +370,12 @@ void nfp_net_params_setup(struct nfp_net_hw *hw);
 void nfp_net_write_mac(struct nfp_net_hw *hw, uint8_t *mac);
 int nfp_net_set_mac_addr(struct rte_eth_dev *dev, struct rte_ether_addr *mac_addr);
 int nfp_configure_rx_interrupt(struct rte_eth_dev *dev,
-			       struct rte_intr_handle *intr_handle);
+		struct rte_intr_handle *intr_handle);
 uint32_t nfp_check_offloads(struct rte_eth_dev *dev);
 int nfp_net_promisc_enable(struct rte_eth_dev *dev);
 int nfp_net_promisc_disable(struct rte_eth_dev *dev);
 int nfp_net_link_update(struct rte_eth_dev *dev,
-			__rte_unused int wait_to_complete);
+		__rte_unused int wait_to_complete);
 int nfp_net_stats_get(struct rte_eth_dev *dev, struct rte_eth_stats *stats);
 int nfp_net_stats_reset(struct rte_eth_dev *dev);
 uint32_t nfp_net_xstats_size(const struct rte_eth_dev *dev);
@@ -368,7 +395,7 @@ int nfp_net_xstats_get_by_id(struct rte_eth_dev *dev,
 		unsigned int n);
 int nfp_net_xstats_reset(struct rte_eth_dev *dev);
 int nfp_net_infos_get(struct rte_eth_dev *dev,
-		      struct rte_eth_dev_info *dev_info);
+		struct rte_eth_dev_info *dev_info);
 const uint32_t *nfp_net_supported_ptypes_get(struct rte_eth_dev *dev);
 int nfp_rx_queue_intr_enable(struct rte_eth_dev *dev, uint16_t queue_id);
 int nfp_rx_queue_intr_disable(struct rte_eth_dev *dev, uint16_t queue_id);
@@ -379,15 +406,15 @@ void nfp_net_dev_interrupt_delayed_handler(void *param);
 int nfp_net_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu);
 int nfp_net_vlan_offload_set(struct rte_eth_dev *dev, int mask);
 int nfp_net_reta_update(struct rte_eth_dev *dev,
-			struct rte_eth_rss_reta_entry64 *reta_conf,
-			uint16_t reta_size);
+		struct rte_eth_rss_reta_entry64 *reta_conf,
+		uint16_t reta_size);
 int nfp_net_reta_query(struct rte_eth_dev *dev,
-		       struct rte_eth_rss_reta_entry64 *reta_conf,
-		       uint16_t reta_size);
+		struct rte_eth_rss_reta_entry64 *reta_conf,
+		uint16_t reta_size);
 int nfp_net_rss_hash_update(struct rte_eth_dev *dev,
-			    struct rte_eth_rss_conf *rss_conf);
+		struct rte_eth_rss_conf *rss_conf);
 int nfp_net_rss_hash_conf_get(struct rte_eth_dev *dev,
-			      struct rte_eth_rss_conf *rss_conf);
+		struct rte_eth_rss_conf *rss_conf);
 int nfp_net_rss_config_default(struct rte_eth_dev *dev);
 void nfp_net_stop_rx_queue(struct rte_eth_dev *dev);
 void nfp_net_close_rx_queue(struct rte_eth_dev *dev);
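
The nn_readq()/nn_writeq() helpers above split one 64-bit register
access into two 32-bit halves; the diff shows nn_writeq() storing the
high word first and the low word last. A host-memory sketch of the same
technique follows (plain memory stands in for the ctrl BAR, and the
read order used here is an assumption since the hunk elides part of
nn_readq()'s body):

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

static uint64_t
demo_readq(const volatile void *addr)
{
	const volatile uint32_t *p = addr;
	uint32_t low, high;

	high = p[1];		/* upper 32 bits live at offset 4 */
	low = p[0];		/* lower 32 bits live at offset 0 */

	return low + ((uint64_t)high << 32);
}

static void
demo_writeq(uint64_t val, volatile void *addr)
{
	volatile uint32_t *p = addr;

	p[1] = val >> 32;	/* high word first, as nn_writeq() does */
	p[0] = (uint32_t)val;	/* low word last */
}

int
main(void)
{
	uint64_t reg = 0;

	demo_writeq(0x1122334455667788ULL, &reg);
	printf("0x%" PRIx64 "\n", demo_readq(&reg));

	return 0;
}
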
diff --git a/drivers/net/nfp/nfp_cpp_bridge.c b/drivers/net/nfp/nfp_cpp_bridge.c
index 34764a8a32..85a8bf9235 100644
--- a/drivers/net/nfp/nfp_cpp_bridge.c
+++ b/drivers/net/nfp/nfp_cpp_bridge.c
@@ -116,7 +116,8 @@ nfp_enable_cpp_service(struct nfp_pf_dev *pf_dev)
  * of CPP interface handler configured by the PMD setup.
  */
 static int
-nfp_cpp_bridge_serve_write(int sockfd, struct nfp_cpp *cpp)
+nfp_cpp_bridge_serve_write(int sockfd,
+		struct nfp_cpp *cpp)
 {
 	struct nfp_cpp_area *area;
 	off_t offset, nfp_offset;
@@ -126,7 +127,7 @@ nfp_cpp_bridge_serve_write(int sockfd, struct nfp_cpp *cpp)
 	int err = 0;
 
 	PMD_CPP_LOG(DEBUG, "%s: offset size %zu, count_size: %zu\n", __func__,
-		sizeof(off_t), sizeof(size_t));
+			sizeof(off_t), sizeof(size_t));
 
 	/* Reading the count param */
 	err = recv(sockfd, &count, sizeof(off_t), 0);
@@ -145,21 +146,21 @@ nfp_cpp_bridge_serve_write(int sockfd, struct nfp_cpp *cpp)
 	nfp_offset = offset & ((1ull << 40) - 1);
 
 	PMD_CPP_LOG(DEBUG, "%s: count %zu and offset %jd\n", __func__, count,
-		offset);
+			offset);
 	PMD_CPP_LOG(DEBUG, "%s: cpp_id %08x and nfp_offset %jd\n", __func__,
-		cpp_id, nfp_offset);
+			cpp_id, nfp_offset);
 
 	/* Adjust length if not aligned */
 	if (((nfp_offset + (off_t)count - 1) & ~(NFP_CPP_MEMIO_BOUNDARY - 1)) !=
-	    (nfp_offset & ~(NFP_CPP_MEMIO_BOUNDARY - 1))) {
+			(nfp_offset & ~(NFP_CPP_MEMIO_BOUNDARY - 1))) {
 		curlen = NFP_CPP_MEMIO_BOUNDARY -
-			(nfp_offset & (NFP_CPP_MEMIO_BOUNDARY - 1));
+				(nfp_offset & (NFP_CPP_MEMIO_BOUNDARY - 1));
 	}
 
 	while (count > 0) {
 		/* configure a CPP PCIe2CPP BAR for mapping the CPP target */
 		area = nfp_cpp_area_alloc_with_name(cpp, cpp_id, "nfp.cdev",
-						    nfp_offset, curlen);
+				nfp_offset, curlen);
 		if (area == NULL) {
 			PMD_CPP_LOG(ERR, "area alloc fail");
 			return -EIO;
@@ -179,12 +180,11 @@ nfp_cpp_bridge_serve_write(int sockfd, struct nfp_cpp *cpp)
 				len = sizeof(tmpbuf);
 
 			PMD_CPP_LOG(DEBUG, "%s: Receive %u of %zu\n", __func__,
-					   len, count);
+					len, count);
 			err = recv(sockfd, tmpbuf, len, MSG_WAITALL);
 			if (err != (int)len) {
-				PMD_CPP_LOG(ERR,
-					"error when receiving, %d of %zu",
-					err, count);
+				PMD_CPP_LOG(ERR, "error when receiving, %d of %zu",
+						err, count);
 				nfp_cpp_area_release(area);
 				nfp_cpp_area_free(area);
 				return -EIO;
@@ -204,7 +204,7 @@ nfp_cpp_bridge_serve_write(int sockfd, struct nfp_cpp *cpp)
 
 		count -= pos;
 		curlen = (count > NFP_CPP_MEMIO_BOUNDARY) ?
-			 NFP_CPP_MEMIO_BOUNDARY : count;
+				NFP_CPP_MEMIO_BOUNDARY : count;
 	}
 
 	return 0;
@@ -217,7 +217,8 @@ nfp_cpp_bridge_serve_write(int sockfd, struct nfp_cpp *cpp)
  * data is sent to the requester using the same socket.
  */
 static int
-nfp_cpp_bridge_serve_read(int sockfd, struct nfp_cpp *cpp)
+nfp_cpp_bridge_serve_read(int sockfd,
+		struct nfp_cpp *cpp)
 {
 	struct nfp_cpp_area *area;
 	off_t offset, nfp_offset;
@@ -227,7 +228,7 @@ nfp_cpp_bridge_serve_read(int sockfd, struct nfp_cpp *cpp)
 	int err = 0;
 
 	PMD_CPP_LOG(DEBUG, "%s: offset size %zu, count_size: %zu\n", __func__,
-		sizeof(off_t), sizeof(size_t));
+			sizeof(off_t), sizeof(size_t));
 
 	/* Reading the count param */
 	err = recv(sockfd, &count, sizeof(off_t), 0);
@@ -246,20 +247,20 @@ nfp_cpp_bridge_serve_read(int sockfd, struct nfp_cpp *cpp)
 	nfp_offset = offset & ((1ull << 40) - 1);
 
 	PMD_CPP_LOG(DEBUG, "%s: count %zu and offset %jd\n", __func__, count,
-			   offset);
+			offset);
 	PMD_CPP_LOG(DEBUG, "%s: cpp_id %08x and nfp_offset %jd\n", __func__,
-			   cpp_id, nfp_offset);
+			cpp_id, nfp_offset);
 
 	/* Adjust length if not aligned */
 	if (((nfp_offset + (off_t)count - 1) & ~(NFP_CPP_MEMIO_BOUNDARY - 1)) !=
-	    (nfp_offset & ~(NFP_CPP_MEMIO_BOUNDARY - 1))) {
+			(nfp_offset & ~(NFP_CPP_MEMIO_BOUNDARY - 1))) {
 		curlen = NFP_CPP_MEMIO_BOUNDARY -
-			(nfp_offset & (NFP_CPP_MEMIO_BOUNDARY - 1));
+				(nfp_offset & (NFP_CPP_MEMIO_BOUNDARY - 1));
 	}
 
 	while (count > 0) {
 		area = nfp_cpp_area_alloc_with_name(cpp, cpp_id, "nfp.cdev",
-						    nfp_offset, curlen);
+				nfp_offset, curlen);
 		if (area == NULL) {
 			PMD_CPP_LOG(ERR, "area alloc failed");
 			return -EIO;
@@ -285,13 +286,12 @@ nfp_cpp_bridge_serve_read(int sockfd, struct nfp_cpp *cpp)
 				return -EIO;
 			}
 			PMD_CPP_LOG(DEBUG, "%s: sending %u of %zu\n", __func__,
-					   len, count);
+					len, count);
 
 			err = send(sockfd, tmpbuf, len, 0);
 			if (err != (int)len) {
-				PMD_CPP_LOG(ERR,
-					"error when sending: %d of %zu",
-					err, count);
+				PMD_CPP_LOG(ERR, "error when sending: %d of %zu",
+						err, count);
 				nfp_cpp_area_release(area);
 				nfp_cpp_area_free(area);
 				return -EIO;
@@ -304,7 +304,7 @@ nfp_cpp_bridge_serve_read(int sockfd, struct nfp_cpp *cpp)
 
 		count -= pos;
 		curlen = (count > NFP_CPP_MEMIO_BOUNDARY) ?
-			NFP_CPP_MEMIO_BOUNDARY : count;
+				NFP_CPP_MEMIO_BOUNDARY : count;
 	}
 	return 0;
 }
@@ -316,7 +316,8 @@ nfp_cpp_bridge_serve_read(int sockfd, struct nfp_cpp *cpp)
  * does not require any CPP access at all.
  */
 static int
-nfp_cpp_bridge_serve_ioctl(int sockfd, struct nfp_cpp *cpp)
+nfp_cpp_bridge_serve_ioctl(int sockfd,
+		struct nfp_cpp *cpp)
 {
 	uint32_t cmd, ident_size, tmp;
 	int err;
@@ -395,7 +396,7 @@ nfp_cpp_bridge_service_func(void *args)
 	strcpy(address.sa_data, "/tmp/nfp_cpp");
 
 	ret = bind(sockfd, (const struct sockaddr *)&address,
-		   sizeof(struct sockaddr));
+			sizeof(struct sockaddr));
 	if (ret < 0) {
 		PMD_CPP_LOG(ERR, "bind error (%d). Service failed", errno);
 		close(sockfd);
@@ -426,8 +427,7 @@ nfp_cpp_bridge_service_func(void *args)
 		while (1) {
 			ret = recv(datafd, &op, 4, 0);
 			if (ret <= 0) {
-				PMD_CPP_LOG(DEBUG, "%s: socket close\n",
-						   __func__);
+				PMD_CPP_LOG(DEBUG, "%s: socket close\n", __func__);
 				break;
 			}
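
For context on the curlen handling in the serve_read/serve_write hunks
above: a CPP transfer is first clipped so it does not cross a
NFP_CPP_MEMIO_BOUNDARY window, then advances one boundary-sized chunk
at a time. The arithmetic in isolation, with an assumed 1 MiB boundary
for the demo:

#include <stdio.h>

#define MEMIO_BOUNDARY (1LL << 20)	/* assumed window size, demo only */

int
main(void)
{
	long long offset = MEMIO_BOUNDARY - 100;	/* 100 bytes before a boundary */
	long long count = 2 * MEMIO_BOUNDARY + 123;
	long long curlen = count;

	/* Clip the first chunk so it ends at the current window edge */
	if (((offset + count - 1) & ~(MEMIO_BOUNDARY - 1)) !=
			(offset & ~(MEMIO_BOUNDARY - 1)))
		curlen = MEMIO_BOUNDARY - (offset & (MEMIO_BOUNDARY - 1));

	while (count > 0) {
		printf("map window at %lld, length %lld\n", offset, curlen);
		offset += curlen;
		count -= curlen;
		curlen = (count > MEMIO_BOUNDARY) ? MEMIO_BOUNDARY : count;
	}

	return 0;
}
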
 
diff --git a/drivers/net/nfp/nfp_ethdev.c b/drivers/net/nfp/nfp_ethdev.c
index 12feec8eb4..65473d87e8 100644
--- a/drivers/net/nfp/nfp_ethdev.c
+++ b/drivers/net/nfp/nfp_ethdev.c
@@ -22,7 +22,8 @@
 #include "nfp_logs.h"
 
 static int
-nfp_net_pf_read_mac(struct nfp_app_fw_nic *app_fw_nic, int port)
+nfp_net_pf_read_mac(struct nfp_app_fw_nic *app_fw_nic,
+		int port)
 {
 	struct nfp_eth_table *nfp_eth_table;
 	struct nfp_net_hw *hw = NULL;
@@ -70,21 +71,20 @@ nfp_net_start(struct rte_eth_dev *dev)
 	if (dev->data->dev_conf.intr_conf.rxq != 0) {
 		if (app_fw_nic->multiport) {
 			PMD_INIT_LOG(ERR, "PMD rx interrupt is not supported "
-					  "with NFP multiport PF");
+					"with NFP multiport PF");
 				return -EINVAL;
 		}
-		if (rte_intr_type_get(intr_handle) ==
-						RTE_INTR_HANDLE_UIO) {
+		if (rte_intr_type_get(intr_handle) == RTE_INTR_HANDLE_UIO) {
 			/*
 			 * Better not to share LSC with RX interrupts.
 			 * Unregistering LSC interrupt handler
 			 */
 			rte_intr_callback_unregister(pci_dev->intr_handle,
-				nfp_net_dev_interrupt_handler, (void *)dev);
+					nfp_net_dev_interrupt_handler, (void *)dev);
 
 			if (dev->data->nb_rx_queues > 1) {
 				PMD_INIT_LOG(ERR, "PMD rx interrupt only "
-					     "supports 1 queue with UIO");
+						"supports 1 queue with UIO");
 				return -EIO;
 			}
 		}
@@ -162,8 +162,7 @@ nfp_net_start(struct rte_eth_dev *dev)
 		/* Configure the physical port up */
 		nfp_eth_set_configured(hw->cpp, hw->nfp_idx, 1);
 	else
-		nfp_eth_set_configured(dev->process_private,
-				       hw->nfp_idx, 1);
+		nfp_eth_set_configured(dev->process_private, hw->nfp_idx, 1);
 
 	hw->ctrl = new_ctrl;
 
@@ -209,8 +208,7 @@ nfp_net_stop(struct rte_eth_dev *dev)
 		/* Configure the physical port down */
 		nfp_eth_set_configured(hw->cpp, hw->nfp_idx, 0);
 	else
-		nfp_eth_set_configured(dev->process_private,
-				       hw->nfp_idx, 0);
+		nfp_eth_set_configured(dev->process_private, hw->nfp_idx, 0);
 
 	return 0;
 }
@@ -229,8 +227,7 @@ nfp_net_set_link_up(struct rte_eth_dev *dev)
 		/* Configure the physical port down */
 		return nfp_eth_set_configured(hw->cpp, hw->nfp_idx, 1);
 	else
-		return nfp_eth_set_configured(dev->process_private,
-					      hw->nfp_idx, 1);
+		return nfp_eth_set_configured(dev->process_private, hw->nfp_idx, 1);
 }
 
 /* Set the link down. */
@@ -247,8 +244,7 @@ nfp_net_set_link_down(struct rte_eth_dev *dev)
 		/* Configure the physical port down */
 		return nfp_eth_set_configured(hw->cpp, hw->nfp_idx, 0);
 	else
-		return nfp_eth_set_configured(dev->process_private,
-					      hw->nfp_idx, 0);
+		return nfp_eth_set_configured(dev->process_private, hw->nfp_idx, 0);
 }
 
 /* Reset and stop device. The device can not be restarted. */
@@ -287,8 +283,7 @@ nfp_net_close(struct rte_eth_dev *dev)
 	nfp_ipsec_uninit(dev);
 
 	/* Cancel possible impending LSC work here before releasing the port*/
-	rte_eal_alarm_cancel(nfp_net_dev_interrupt_delayed_handler,
-			     (void *)dev);
+	rte_eal_alarm_cancel(nfp_net_dev_interrupt_delayed_handler, (void *)dev);
 
 	/* Only free PF resources after all physical ports have been closed */
 	/* Mark this port as unused and free device priv resources*/
@@ -525,8 +520,7 @@ nfp_net_init(struct rte_eth_dev *eth_dev)
 
 	hw->ctrl_bar = pci_dev->mem_resource[0].addr;
 	if (hw->ctrl_bar == NULL) {
-		PMD_DRV_LOG(ERR,
-			"hw->ctrl_bar is NULL. BAR0 not configured");
+		PMD_DRV_LOG(ERR, "hw->ctrl_bar is NULL. BAR0 not configured");
 		return -ENODEV;
 	}
 
@@ -592,7 +586,7 @@ nfp_net_init(struct rte_eth_dev *eth_dev)
 	eth_dev->data->dev_private = hw;
 
 	PMD_INIT_LOG(DEBUG, "ctrl_bar: %p, tx_bar: %p, rx_bar: %p",
-		     hw->ctrl_bar, hw->tx_bar, hw->rx_bar);
+			hw->ctrl_bar, hw->tx_bar, hw->rx_bar);
 
 	nfp_net_cfg_queue_setup(hw);
 	hw->mtu = RTE_ETHER_MTU;
@@ -607,8 +601,7 @@ nfp_net_init(struct rte_eth_dev *eth_dev)
 	rte_spinlock_init(&hw->reconfig_lock);
 
 	/* Allocating memory for mac addr */
-	eth_dev->data->mac_addrs = rte_zmalloc("mac_addr",
-					       RTE_ETHER_ADDR_LEN, 0);
+	eth_dev->data->mac_addrs = rte_zmalloc("mac_addr", RTE_ETHER_ADDR_LEN, 0);
 	if (eth_dev->data->mac_addrs == NULL) {
 		PMD_INIT_LOG(ERR, "Failed to space for MAC address");
 		return -ENOMEM;
@@ -634,10 +627,10 @@ nfp_net_init(struct rte_eth_dev *eth_dev)
 	eth_dev->data->dev_flags |= RTE_ETH_DEV_AUTOFILL_QUEUE_XSTATS;
 
 	PMD_INIT_LOG(INFO, "port %d VendorID=0x%x DeviceID=0x%x "
-		     "mac=" RTE_ETHER_ADDR_PRT_FMT,
-		     eth_dev->data->port_id, pci_dev->id.vendor_id,
-		     pci_dev->id.device_id,
-		     RTE_ETHER_ADDR_BYTES(&hw->mac_addr));
+			"mac=" RTE_ETHER_ADDR_PRT_FMT,
+			eth_dev->data->port_id, pci_dev->id.vendor_id,
+			pci_dev->id.device_id,
+			RTE_ETHER_ADDR_BYTES(&hw->mac_addr));
 
 	/* Registering LSC interrupt handler */
 	rte_intr_callback_register(pci_dev->intr_handle,
@@ -653,7 +646,9 @@ nfp_net_init(struct rte_eth_dev *eth_dev)
 #define DEFAULT_FW_PATH       "/lib/firmware/netronome"
 
 static int
-nfp_fw_upload(struct rte_pci_device *dev, struct nfp_nsp *nsp, char *card)
+nfp_fw_upload(struct rte_pci_device *dev,
+		struct nfp_nsp *nsp,
+		char *card)
 {
 	struct nfp_cpp *cpp = nfp_nsp_cpp(nsp);
 	void *fw_buf;
@@ -675,11 +670,10 @@ nfp_fw_upload(struct rte_pci_device *dev, struct nfp_nsp *nsp, char *card)
 	/* First try to find a firmware image specific for this device */
 	snprintf(serial, sizeof(serial),
 			"serial-%02x-%02x-%02x-%02x-%02x-%02x-%02x-%02x",
-		cpp_serial[0], cpp_serial[1], cpp_serial[2], cpp_serial[3],
-		cpp_serial[4], cpp_serial[5], interface >> 8, interface & 0xff);
+			cpp_serial[0], cpp_serial[1], cpp_serial[2], cpp_serial[3],
+			cpp_serial[4], cpp_serial[5], interface >> 8, interface & 0xff);
 
-	snprintf(fw_name, sizeof(fw_name), "%s/%s.nffw", DEFAULT_FW_PATH,
-			serial);
+	snprintf(fw_name, sizeof(fw_name), "%s/%s.nffw", DEFAULT_FW_PATH, serial);
 
 	PMD_DRV_LOG(DEBUG, "Trying with fw file: %s", fw_name);
 	if (rte_firmware_read(fw_name, &fw_buf, &fsize) == 0)
@@ -703,7 +697,7 @@ nfp_fw_upload(struct rte_pci_device *dev, struct nfp_nsp *nsp, char *card)
 
 load_fw:
 	PMD_DRV_LOG(INFO, "Firmware file found at %s with size: %zu",
-		fw_name, fsize);
+			fw_name, fsize);
 	PMD_DRV_LOG(INFO, "Uploading the firmware ...");
 	nfp_nsp_load_fw(nsp, fw_buf, fsize);
 	PMD_DRV_LOG(INFO, "Done");
@@ -737,7 +731,7 @@ nfp_fw_setup(struct rte_pci_device *dev,
 
 	if (nfp_eth_table->count == 0 || nfp_eth_table->count > 8) {
 		PMD_DRV_LOG(ERR, "NFP ethernet table reports wrong ports: %u",
-			nfp_eth_table->count);
+				nfp_eth_table->count);
 		return -EIO;
 	}
 
@@ -829,7 +823,7 @@ nfp_init_app_fw_nic(struct nfp_pf_dev *pf_dev,
 	numa_node = rte_socket_id();
 	for (i = 0; i < app_fw_nic->total_phyports; i++) {
 		snprintf(port_name, sizeof(port_name), "%s_port%d",
-			 pf_dev->pci_dev->device.name, i);
+				pf_dev->pci_dev->device.name, i);
 
 		/* Allocate a eth_dev for this phyport */
 		eth_dev = rte_eth_dev_allocate(port_name);
@@ -839,8 +833,8 @@ nfp_init_app_fw_nic(struct nfp_pf_dev *pf_dev,
 		}
 
 		/* Allocate memory for this phyport */
-		eth_dev->data->dev_private =
-			rte_zmalloc_socket(port_name, sizeof(struct nfp_net_hw),
+		eth_dev->data->dev_private = rte_zmalloc_socket(port_name,
+				sizeof(struct nfp_net_hw),
 				RTE_CACHE_LINE_SIZE, numa_node);
 		if (eth_dev->data->dev_private == NULL) {
 			ret = -ENOMEM;
@@ -961,8 +955,7 @@ nfp_pf_init(struct rte_pci_device *pci_dev)
 	/* Now the symbol table should be there */
 	sym_tbl = nfp_rtsym_table_read(cpp);
 	if (sym_tbl == NULL) {
-		PMD_INIT_LOG(ERR, "Something is wrong with the firmware"
-				" symbol table");
+		PMD_INIT_LOG(ERR, "Something is wrong with the firmware symbol table");
 		ret = -EIO;
 		goto eth_table_cleanup;
 	}
@@ -1144,8 +1137,7 @@ nfp_pf_secondary_init(struct rte_pci_device *pci_dev)
 	 */
 	sym_tbl = nfp_rtsym_table_read(cpp);
 	if (sym_tbl == NULL) {
-		PMD_INIT_LOG(ERR, "Something is wrong with the firmware"
-				" symbol table");
+		PMD_INIT_LOG(ERR, "Something is wrong with the firmware symbol table");
 		return -EIO;
 	}
 
@@ -1198,27 +1190,27 @@ nfp_pf_pci_probe(struct rte_pci_driver *pci_drv __rte_unused,
 static const struct rte_pci_id pci_id_nfp_pf_net_map[] = {
 	{
 		RTE_PCI_DEVICE(PCI_VENDOR_ID_NETRONOME,
-			       PCI_DEVICE_ID_NFP3800_PF_NIC)
+				PCI_DEVICE_ID_NFP3800_PF_NIC)
 	},
 	{
 		RTE_PCI_DEVICE(PCI_VENDOR_ID_NETRONOME,
-			       PCI_DEVICE_ID_NFP4000_PF_NIC)
+				PCI_DEVICE_ID_NFP4000_PF_NIC)
 	},
 	{
 		RTE_PCI_DEVICE(PCI_VENDOR_ID_NETRONOME,
-			       PCI_DEVICE_ID_NFP6000_PF_NIC)
+				PCI_DEVICE_ID_NFP6000_PF_NIC)
 	},
 	{
 		RTE_PCI_DEVICE(PCI_VENDOR_ID_CORIGINE,
-			       PCI_DEVICE_ID_NFP3800_PF_NIC)
+				PCI_DEVICE_ID_NFP3800_PF_NIC)
 	},
 	{
 		RTE_PCI_DEVICE(PCI_VENDOR_ID_CORIGINE,
-			       PCI_DEVICE_ID_NFP4000_PF_NIC)
+				PCI_DEVICE_ID_NFP4000_PF_NIC)
 	},
 	{
 		RTE_PCI_DEVICE(PCI_VENDOR_ID_CORIGINE,
-			       PCI_DEVICE_ID_NFP6000_PF_NIC)
+				PCI_DEVICE_ID_NFP6000_PF_NIC)
 	},
 	{
 		.vendor_id = 0,
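
For reference, the firmware lookup in nfp_fw_upload() above first
builds a per-device name from the CPP serial and interface id. A
stand-alone sketch of just the formatting, with made-up serial bytes
and buffer sizes:

#include <stdint.h>
#include <stdio.h>

int
main(void)
{
	uint8_t serial[6] = { 0x00, 0x15, 0x4d, 0x12, 0x34, 0x56 };
	uint16_t interface = 0x0201;
	char serial_str[64];
	char fw_name[125];

	snprintf(serial_str, sizeof(serial_str),
			"serial-%02x-%02x-%02x-%02x-%02x-%02x-%02x-%02x",
			serial[0], serial[1], serial[2], serial[3],
			serial[4], serial[5], interface >> 8, interface & 0xff);

	snprintf(fw_name, sizeof(fw_name), "%s/%s.nffw",
			"/lib/firmware/netronome", serial_str);

	printf("%s\n", fw_name);

	return 0;
}
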
diff --git a/drivers/net/nfp/nfp_ethdev_vf.c b/drivers/net/nfp/nfp_ethdev_vf.c
index c8d6b0461b..ac6a10685d 100644
--- a/drivers/net/nfp/nfp_ethdev_vf.c
+++ b/drivers/net/nfp/nfp_ethdev_vf.c
@@ -50,18 +50,17 @@ nfp_netvf_start(struct rte_eth_dev *dev)
 
 	/* check and configure queue intr-vector mapping */
 	if (dev->data->dev_conf.intr_conf.rxq != 0) {
-		if (rte_intr_type_get(intr_handle) ==
-						RTE_INTR_HANDLE_UIO) {
+		if (rte_intr_type_get(intr_handle) == RTE_INTR_HANDLE_UIO) {
 			/*
 			 * Better not to share LSC with RX interrupts.
 			 * Unregistering LSC interrupt handler
 			 */
 			rte_intr_callback_unregister(pci_dev->intr_handle,
-				nfp_net_dev_interrupt_handler, (void *)dev);
+					nfp_net_dev_interrupt_handler, (void *)dev);
 
 			if (dev->data->nb_rx_queues > 1) {
 				PMD_INIT_LOG(ERR, "PMD rx interrupt only "
-					     "supports 1 queue with UIO");
+						"supports 1 queue with UIO");
 				return -EIO;
 			}
 		}
@@ -190,12 +189,10 @@ nfp_netvf_close(struct rte_eth_dev *dev)
 
 	/* unregister callback func from eal lib */
 	rte_intr_callback_unregister(pci_dev->intr_handle,
-				     nfp_net_dev_interrupt_handler,
-				     (void *)dev);
+			nfp_net_dev_interrupt_handler, (void *)dev);
 
 	/* Cancel possible impending LSC work here before releasing the port*/
-	rte_eal_alarm_cancel(nfp_net_dev_interrupt_delayed_handler,
-			     (void *)dev);
+	rte_eal_alarm_cancel(nfp_net_dev_interrupt_delayed_handler, (void *)dev);
 
 	/*
 	 * The ixgbe PMD disables the pcie master on the
@@ -282,8 +279,7 @@ nfp_netvf_init(struct rte_eth_dev *eth_dev)
 
 	hw->ctrl_bar = pci_dev->mem_resource[0].addr;
 	if (hw->ctrl_bar == NULL) {
-		PMD_DRV_LOG(ERR,
-			"hw->ctrl_bar is NULL. BAR0 not configured");
+		PMD_DRV_LOG(ERR, "hw->ctrl_bar is NULL. BAR0 not configured");
 		return -ENODEV;
 	}
 
@@ -301,8 +297,8 @@ nfp_netvf_init(struct rte_eth_dev *eth_dev)
 
 	rte_eth_copy_pci_info(eth_dev, pci_dev);
 
-	hw->eth_xstats_base = rte_malloc("rte_eth_xstat", sizeof(struct rte_eth_xstat) *
-			nfp_net_xstats_size(eth_dev), 0);
+	hw->eth_xstats_base = rte_malloc("rte_eth_xstat",
+			sizeof(struct rte_eth_xstat) * nfp_net_xstats_size(eth_dev), 0);
 	if (hw->eth_xstats_base == NULL) {
 		PMD_INIT_LOG(ERR, "no memory for xstats base values on device %s!",
 				pci_dev->device.name);
@@ -318,13 +314,11 @@ nfp_netvf_init(struct rte_eth_dev *eth_dev)
 	PMD_INIT_LOG(DEBUG, "tx_bar_off: 0x%" PRIx64 "", tx_bar_off);
 	PMD_INIT_LOG(DEBUG, "rx_bar_off: 0x%" PRIx64 "", rx_bar_off);
 
-	hw->tx_bar = (uint8_t *)pci_dev->mem_resource[2].addr +
-		     tx_bar_off;
-	hw->rx_bar = (uint8_t *)pci_dev->mem_resource[2].addr +
-		     rx_bar_off;
+	hw->tx_bar = (uint8_t *)pci_dev->mem_resource[2].addr + tx_bar_off;
+	hw->rx_bar = (uint8_t *)pci_dev->mem_resource[2].addr + rx_bar_off;
 
 	PMD_INIT_LOG(DEBUG, "ctrl_bar: %p, tx_bar: %p, rx_bar: %p",
-		     hw->ctrl_bar, hw->tx_bar, hw->rx_bar);
+			hw->ctrl_bar, hw->tx_bar, hw->rx_bar);
 
 	nfp_net_cfg_queue_setup(hw);
 	hw->mtu = RTE_ETHER_MTU;
@@ -339,8 +333,7 @@ nfp_netvf_init(struct rte_eth_dev *eth_dev)
 	rte_spinlock_init(&hw->reconfig_lock);
 
 	/* Allocating memory for mac addr */
-	eth_dev->data->mac_addrs = rte_zmalloc("mac_addr",
-					       RTE_ETHER_ADDR_LEN, 0);
+	eth_dev->data->mac_addrs = rte_zmalloc("mac_addr", RTE_ETHER_ADDR_LEN, 0);
 	if (eth_dev->data->mac_addrs == NULL) {
 		PMD_INIT_LOG(ERR, "Failed to space for MAC address");
 		err = -ENOMEM;
@@ -351,8 +344,7 @@ nfp_netvf_init(struct rte_eth_dev *eth_dev)
 
 	tmp_ether_addr = &hw->mac_addr;
 	if (rte_is_valid_assigned_ether_addr(tmp_ether_addr) == 0) {
-		PMD_INIT_LOG(INFO, "Using random mac address for port %d",
-				   port);
+		PMD_INIT_LOG(INFO, "Using random mac address for port %d", port);
 		/* Using random mac addresses for VFs */
 		rte_eth_random_addr(&hw->mac_addr.addr_bytes[0]);
 		nfp_net_write_mac(hw, &hw->mac_addr.addr_bytes[0]);
@@ -367,16 +359,15 @@ nfp_netvf_init(struct rte_eth_dev *eth_dev)
 	eth_dev->data->dev_flags |= RTE_ETH_DEV_AUTOFILL_QUEUE_XSTATS;
 
 	PMD_INIT_LOG(INFO, "port %d VendorID=0x%x DeviceID=0x%x "
-		     "mac=" RTE_ETHER_ADDR_PRT_FMT,
-		     eth_dev->data->port_id, pci_dev->id.vendor_id,
-		     pci_dev->id.device_id,
-		     RTE_ETHER_ADDR_BYTES(&hw->mac_addr));
+			"mac=" RTE_ETHER_ADDR_PRT_FMT,
+			eth_dev->data->port_id, pci_dev->id.vendor_id,
+			pci_dev->id.device_id,
+			RTE_ETHER_ADDR_BYTES(&hw->mac_addr));
 
 	if (rte_eal_process_type() == RTE_PROC_PRIMARY) {
 		/* Registering LSC interrupt handler */
 		rte_intr_callback_register(pci_dev->intr_handle,
-					   nfp_net_dev_interrupt_handler,
-					   (void *)eth_dev);
+				nfp_net_dev_interrupt_handler, (void *)eth_dev);
 		/* Telling the firmware about the LSC interrupt entry */
 		nn_cfg_writeb(hw, NFP_NET_CFG_LSC, NFP_NET_IRQ_LSC_IDX);
 		/* Recording current stats counters values */
@@ -394,39 +385,42 @@ nfp_netvf_init(struct rte_eth_dev *eth_dev)
 static const struct rte_pci_id pci_id_nfp_vf_net_map[] = {
 	{
 		RTE_PCI_DEVICE(PCI_VENDOR_ID_NETRONOME,
-			       PCI_DEVICE_ID_NFP3800_VF_NIC)
+				PCI_DEVICE_ID_NFP3800_VF_NIC)
 	},
 	{
 		RTE_PCI_DEVICE(PCI_VENDOR_ID_NETRONOME,
-			       PCI_DEVICE_ID_NFP6000_VF_NIC)
+				PCI_DEVICE_ID_NFP6000_VF_NIC)
 	},
 	{
 		RTE_PCI_DEVICE(PCI_VENDOR_ID_CORIGINE,
-			       PCI_DEVICE_ID_NFP3800_VF_NIC)
+				PCI_DEVICE_ID_NFP3800_VF_NIC)
 	},
 	{
 		RTE_PCI_DEVICE(PCI_VENDOR_ID_CORIGINE,
-			       PCI_DEVICE_ID_NFP6000_VF_NIC)
+				PCI_DEVICE_ID_NFP6000_VF_NIC)
 	},
 	{
 		.vendor_id = 0,
 	},
 };
 
-static int nfp_vf_pci_uninit(struct rte_eth_dev *eth_dev)
+static int
+nfp_vf_pci_uninit(struct rte_eth_dev *eth_dev)
 {
 	/* VF cleanup, just free private port data */
 	return nfp_netvf_close(eth_dev);
 }
 
-static int eth_nfp_vf_pci_probe(struct rte_pci_driver *pci_drv __rte_unused,
-	struct rte_pci_device *pci_dev)
+static int
+eth_nfp_vf_pci_probe(struct rte_pci_driver *pci_drv __rte_unused,
+		struct rte_pci_device *pci_dev)
 {
 	return rte_eth_dev_pci_generic_probe(pci_dev,
-		sizeof(struct nfp_net_adapter), nfp_netvf_init);
+			sizeof(struct nfp_net_adapter), nfp_netvf_init);
 }
 
-static int eth_nfp_vf_pci_remove(struct rte_pci_device *pci_dev)
+static int
+eth_nfp_vf_pci_remove(struct rte_pci_device *pci_dev)
 {
 	return rte_eth_dev_pci_generic_remove(pci_dev, nfp_vf_pci_uninit);
 }
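
The VF init path above falls back to a random MAC address when the one
read from the ctrl BAR is not a valid assigned address. A stand-alone
sketch of producing such an address (this mirrors what a locally
administered unicast MAC requires; rand() is demo-only and not
necessarily what rte_eth_random_addr() uses internally):

#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

int
main(void)
{
	uint8_t mac[6];
	int i;

	srand((unsigned int)time(NULL));
	for (i = 0; i < 6; i++)
		mac[i] = (uint8_t)(rand() & 0xff);

	mac[0] &= 0xfe;		/* clear the multicast (group) bit */
	mac[0] |= 0x02;		/* set the locally administered bit */

	printf("%02x:%02x:%02x:%02x:%02x:%02x\n",
			mac[0], mac[1], mac[2], mac[3], mac[4], mac[5]);

	return 0;
}
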
diff --git a/drivers/net/nfp/nfp_flow.c b/drivers/net/nfp/nfp_flow.c
index bdbc92180d..156b9599db 100644
--- a/drivers/net/nfp/nfp_flow.c
+++ b/drivers/net/nfp/nfp_flow.c
@@ -166,7 +166,8 @@ nfp_flow_dev_to_priv(struct rte_eth_dev *dev)
 }
 
 static int
-nfp_mask_id_alloc(struct nfp_flow_priv *priv, uint8_t *mask_id)
+nfp_mask_id_alloc(struct nfp_flow_priv *priv,
+		uint8_t *mask_id)
 {
 	uint8_t temp_id;
 	uint8_t freed_id;
@@ -198,7 +199,8 @@ nfp_mask_id_alloc(struct nfp_flow_priv *priv, uint8_t *mask_id)
 }
 
 static int
-nfp_mask_id_free(struct nfp_flow_priv *priv, uint8_t mask_id)
+nfp_mask_id_free(struct nfp_flow_priv *priv,
+		uint8_t mask_id)
 {
 	struct circ_buf *ring;
 
@@ -671,7 +673,8 @@ nfp_tun_check_ip_off_del(struct nfp_flower_representor *repr,
 }
 
 static void
-nfp_flower_compile_meta_tci(char *mbuf_off, struct nfp_fl_key_ls *key_layer)
+nfp_flower_compile_meta_tci(char *mbuf_off,
+		struct nfp_fl_key_ls *key_layer)
 {
 	struct nfp_flower_meta_tci *tci_meta;
 
@@ -682,7 +685,8 @@ nfp_flower_compile_meta_tci(char *mbuf_off, struct nfp_fl_key_ls *key_layer)
 }
 
 static void
-nfp_flower_update_meta_tci(char *exact, uint8_t mask_id)
+nfp_flower_update_meta_tci(char *exact,
+		uint8_t mask_id)
 {
 	struct nfp_flower_meta_tci *meta_tci;
 
@@ -691,7 +695,8 @@ nfp_flower_update_meta_tci(char *exact, uint8_t mask_id)
 }
 
 static void
-nfp_flower_compile_ext_meta(char *mbuf_off, struct nfp_fl_key_ls *key_layer)
+nfp_flower_compile_ext_meta(char *mbuf_off,
+		struct nfp_fl_key_ls *key_layer)
 {
 	struct nfp_flower_ext_meta *ext_meta;
 
@@ -1400,14 +1405,14 @@ nfp_flow_merge_tcp(__rte_unused struct nfp_app_fw_flower *app_fw_flower,
 	meta_tci = (struct nfp_flower_meta_tci *)nfp_flow->payload.unmasked_data;
 	if ((meta_tci->nfp_flow_key_layer & NFP_FLOWER_LAYER_IPV4) != 0) {
 		ipv4  = (struct nfp_flower_ipv4 *)
-			(*mbuf_off - sizeof(struct nfp_flower_ipv4));
+				(*mbuf_off - sizeof(struct nfp_flower_ipv4));
 		ports = (struct nfp_flower_tp_ports *)
-			((char *)ipv4 - sizeof(struct nfp_flower_tp_ports));
+				((char *)ipv4 - sizeof(struct nfp_flower_tp_ports));
 	} else { /* IPv6 */
 		ipv6  = (struct nfp_flower_ipv6 *)
-			(*mbuf_off - sizeof(struct nfp_flower_ipv6));
+				(*mbuf_off - sizeof(struct nfp_flower_ipv6));
 		ports = (struct nfp_flower_tp_ports *)
-			((char *)ipv6 - sizeof(struct nfp_flower_tp_ports));
+				((char *)ipv6 - sizeof(struct nfp_flower_tp_ports));
 	}
 
 	mask = item->mask ? item->mask : proc->mask_default;
@@ -1478,10 +1483,10 @@ nfp_flow_merge_udp(__rte_unused struct nfp_app_fw_flower *app_fw_flower,
 	meta_tci = (struct nfp_flower_meta_tci *)nfp_flow->payload.unmasked_data;
 	if ((meta_tci->nfp_flow_key_layer & NFP_FLOWER_LAYER_IPV4) != 0) {
 		ports_off = *mbuf_off - sizeof(struct nfp_flower_ipv4) -
-			sizeof(struct nfp_flower_tp_ports);
+				sizeof(struct nfp_flower_tp_ports);
 	} else {/* IPv6 */
 		ports_off = *mbuf_off - sizeof(struct nfp_flower_ipv6) -
-			sizeof(struct nfp_flower_tp_ports);
+				sizeof(struct nfp_flower_tp_ports);
 	}
 	ports = (struct nfp_flower_tp_ports *)ports_off;
 
@@ -1521,10 +1526,10 @@ nfp_flow_merge_sctp(__rte_unused struct nfp_app_fw_flower *app_fw_flower,
 	meta_tci = (struct nfp_flower_meta_tci *)nfp_flow->payload.unmasked_data;
 	if ((meta_tci->nfp_flow_key_layer & NFP_FLOWER_LAYER_IPV4) != 0) {
 		ports_off = *mbuf_off - sizeof(struct nfp_flower_ipv4) -
-			sizeof(struct nfp_flower_tp_ports);
+				sizeof(struct nfp_flower_tp_ports);
 	} else { /* IPv6 */
 		ports_off = *mbuf_off - sizeof(struct nfp_flower_ipv6) -
-			sizeof(struct nfp_flower_tp_ports);
+				sizeof(struct nfp_flower_tp_ports);
 	}
 	ports = (struct nfp_flower_tp_ports *)ports_off;
 
@@ -1915,9 +1920,8 @@ nfp_flow_item_check(const struct rte_flow_item *item,
 		return 0;
 	}
 
-	mask = item->mask ?
-		(const uint8_t *)item->mask :
-		(const uint8_t *)proc->mask_default;
+	mask = item->mask ? (const uint8_t *)item->mask :
+			(const uint8_t *)proc->mask_default;
 
 	/*
 	 * Single-pass check to make sure that:
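
The nfp_flow_merge_tcp() hunks above step backwards from the match
buffer's write cursor to reach layers emitted earlier. A simplified
stand-alone model of that layout arithmetic, using made-up stand-in
structures (demo_ipv4, demo_tp_ports):

#include <stdio.h>
#include <string.h>

struct demo_ipv4 { unsigned int src, dst; };
struct demo_tp_ports { unsigned short sport, dport; };

int
main(void)
{
	union { char bytes[64]; unsigned long long align; } buf;
	char *cursor = buf.bytes;
	struct demo_tp_ports tp = { 80, 443 };
	struct demo_ipv4 ip = { 1, 2 };
	struct demo_ipv4 *ip_back;
	struct demo_tp_ports *tp_back;

	/* Layers are packed back to back: ports, then IPv4, then cursor */
	memcpy(cursor, &tp, sizeof(tp));
	cursor += sizeof(tp);
	memcpy(cursor, &ip, sizeof(ip));
	cursor += sizeof(ip);

	/* Later code recovers both layers relative to the cursor */
	ip_back = (struct demo_ipv4 *)(cursor - sizeof(struct demo_ipv4));
	tp_back = (struct demo_tp_ports *)
			((char *)ip_back - sizeof(struct demo_tp_ports));

	printf("dst=%u dport=%hu\n", ip_back->dst, tp_back->dport);

	return 0;
}
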
diff --git a/drivers/net/nfp/nfp_rxtx.c b/drivers/net/nfp/nfp_rxtx.c
index 4528417559..7885166753 100644
--- a/drivers/net/nfp/nfp_rxtx.c
+++ b/drivers/net/nfp/nfp_rxtx.c
@@ -158,8 +158,9 @@ struct nfp_ptype_parsed {
 
 /* set mbuf checksum flags based on RX descriptor flags */
 void
-nfp_net_rx_cksum(struct nfp_net_rxq *rxq, struct nfp_net_rx_desc *rxd,
-		 struct rte_mbuf *mb)
+nfp_net_rx_cksum(struct nfp_net_rxq *rxq,
+		struct nfp_net_rx_desc *rxd,
+		struct rte_mbuf *mb)
 {
 	struct nfp_net_hw *hw = rxq->hw;
 
@@ -192,7 +193,7 @@ nfp_net_rx_fill_freelist(struct nfp_net_rxq *rxq)
 	unsigned int i;
 
 	PMD_RX_LOG(DEBUG, "Fill Rx Freelist for %u descriptors",
-		   rxq->rx_count);
+			rxq->rx_count);
 
 	for (i = 0; i < rxq->rx_count; i++) {
 		struct nfp_net_rx_desc *rxd;
@@ -218,8 +219,7 @@ nfp_net_rx_fill_freelist(struct nfp_net_rxq *rxq)
 	rte_wmb();
 
 	/* Not advertising the whole ring as the firmware gets confused if so */
-	PMD_RX_LOG(DEBUG, "Increment FL write pointer in %u",
-		   rxq->rx_count - 1);
+	PMD_RX_LOG(DEBUG, "Increment FL write pointer in %u", rxq->rx_count - 1);
 
 	nfp_qcp_ptr_add(rxq->qcp_fl, NFP_QCP_WRITE_PTR, rxq->rx_count - 1);
 
@@ -521,7 +521,8 @@ nfp_net_parse_meta(struct nfp_net_rx_desc *rxds,
  *   Mbuf to set the packet type.
  */
 static void
-nfp_net_set_ptype(const struct nfp_ptype_parsed *nfp_ptype, struct rte_mbuf *mb)
+nfp_net_set_ptype(const struct nfp_ptype_parsed *nfp_ptype,
+		struct rte_mbuf *mb)
 {
 	uint32_t mbuf_ptype = RTE_PTYPE_L2_ETHER;
 	uint8_t nfp_tunnel_ptype = nfp_ptype->tunnel_ptype;
@@ -678,7 +679,9 @@ nfp_net_parse_ptype(struct nfp_net_rx_desc *rxds,
  */
 
 uint16_t
-nfp_net_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
+nfp_net_recv_pkts(void *rx_queue,
+		struct rte_mbuf **rx_pkts,
+		uint16_t nb_pkts)
 {
 	struct nfp_net_rxq *rxq;
 	struct nfp_net_rx_desc *rxds;
@@ -728,8 +731,7 @@ nfp_net_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
 		 */
 		new_mb = rte_pktmbuf_alloc(rxq->mem_pool);
 		if (unlikely(new_mb == NULL)) {
-			PMD_RX_LOG(DEBUG,
-			"RX mbuf alloc failed port_id=%u queue_id=%hu",
+			PMD_RX_LOG(DEBUG, "RX mbuf alloc failed port_id=%u queue_id=%hu",
 					rxq->port_id, rxq->qidx);
 			nfp_net_mbuf_alloc_failed(rxq);
 			break;
@@ -743,29 +745,28 @@ nfp_net_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
 		rxb->mbuf = new_mb;
 
 		PMD_RX_LOG(DEBUG, "Packet len: %u, mbuf_size: %u",
-			   rxds->rxd.data_len, rxq->mbuf_size);
+				rxds->rxd.data_len, rxq->mbuf_size);
 
 		/* Size of this segment */
 		mb->data_len = rxds->rxd.data_len - NFP_DESC_META_LEN(rxds);
 		/* Size of the whole packet. We just support 1 segment */
 		mb->pkt_len = rxds->rxd.data_len - NFP_DESC_META_LEN(rxds);
 
-		if (unlikely((mb->data_len + hw->rx_offset) >
-			     rxq->mbuf_size)) {
+		if (unlikely((mb->data_len + hw->rx_offset) > rxq->mbuf_size)) {
 			/*
 			 * This should not happen and the user has the
 			 * responsibility of avoiding it. But we have
 			 * to give some info about the error
 			 */
 			PMD_RX_LOG(ERR,
-				"mbuf overflow likely due to the RX offset.\n"
-				"\t\tYour mbuf size should have extra space for"
-				" RX offset=%u bytes.\n"
-				"\t\tCurrently you just have %u bytes available"
-				" but the received packet is %u bytes long",
-				hw->rx_offset,
-				rxq->mbuf_size - hw->rx_offset,
-				mb->data_len);
+					"mbuf overflow likely due to the RX offset.\n"
+					"\t\tYour mbuf size should have extra space for"
+					" RX offset=%u bytes.\n"
+					"\t\tCurrently you just have %u bytes available"
+					" but the received packet is %u bytes long",
+					hw->rx_offset,
+					rxq->mbuf_size - hw->rx_offset,
+					mb->data_len);
 			rte_pktmbuf_free(mb);
 			break;
 		}
@@ -774,8 +775,7 @@ nfp_net_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
 		if (hw->rx_offset != 0)
 			mb->data_off = RTE_PKTMBUF_HEADROOM + hw->rx_offset;
 		else
-			mb->data_off = RTE_PKTMBUF_HEADROOM +
-				       NFP_DESC_META_LEN(rxds);
+			mb->data_off = RTE_PKTMBUF_HEADROOM + NFP_DESC_META_LEN(rxds);
 
 		/* No scatter mode supported */
 		mb->nb_segs = 1;
@@ -817,7 +817,7 @@ nfp_net_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
 		return nb_hold;
 
 	PMD_RX_LOG(DEBUG, "RX  port_id=%hu queue_id=%hu, %hu packets received",
-		   rxq->port_id, rxq->qidx, avail);
+			rxq->port_id, rxq->qidx, avail);
 
 	nb_hold += rxq->nb_rx_hold;
 
@@ -828,7 +828,7 @@ nfp_net_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
 	rte_wmb();
 	if (nb_hold > rxq->rx_free_thresh) {
 		PMD_RX_LOG(DEBUG, "port=%hu queue=%hu nb_hold=%hu avail=%hu",
-			   rxq->port_id, rxq->qidx, nb_hold, avail);
+				rxq->port_id, rxq->qidx, nb_hold, avail);
 		nfp_qcp_ptr_add(rxq->qcp_fl, NFP_QCP_WRITE_PTR, nb_hold);
 		nb_hold = 0;
 	}
@@ -854,7 +854,8 @@ nfp_net_rx_queue_release_mbufs(struct nfp_net_rxq *rxq)
 }
 
 void
-nfp_net_rx_queue_release(struct rte_eth_dev *dev, uint16_t queue_idx)
+nfp_net_rx_queue_release(struct rte_eth_dev *dev,
+		uint16_t queue_idx)
 {
 	struct nfp_net_rxq *rxq = dev->data->rx_queues[queue_idx];
 
@@ -876,10 +877,11 @@ nfp_net_reset_rx_queue(struct nfp_net_rxq *rxq)
 
 int
 nfp_net_rx_queue_setup(struct rte_eth_dev *dev,
-		       uint16_t queue_idx, uint16_t nb_desc,
-		       unsigned int socket_id,
-		       const struct rte_eth_rxconf *rx_conf,
-		       struct rte_mempool *mp)
+		uint16_t queue_idx,
+		uint16_t nb_desc,
+		unsigned int socket_id,
+		const struct rte_eth_rxconf *rx_conf,
+		struct rte_mempool *mp)
 {
 	uint16_t min_rx_desc;
 	uint16_t max_rx_desc;
@@ -897,7 +899,7 @@ nfp_net_rx_queue_setup(struct rte_eth_dev *dev,
 	/* Validating number of descriptors */
 	rx_desc_sz = nb_desc * sizeof(struct nfp_net_rx_desc);
 	if (rx_desc_sz % NFP_ALIGN_RING_DESC != 0 ||
-	    nb_desc > max_rx_desc || nb_desc < min_rx_desc) {
+			nb_desc > max_rx_desc || nb_desc < min_rx_desc) {
 		PMD_DRV_LOG(ERR, "Wrong nb_desc value");
 		return -EINVAL;
 	}
@@ -913,7 +915,7 @@ nfp_net_rx_queue_setup(struct rte_eth_dev *dev,
 
 	/* Allocating rx queue data structure */
 	rxq = rte_zmalloc_socket("ethdev RX queue", sizeof(struct nfp_net_rxq),
-				 RTE_CACHE_LINE_SIZE, socket_id);
+			RTE_CACHE_LINE_SIZE, socket_id);
 	if (rxq == NULL)
 		return -ENOMEM;
 
@@ -943,9 +945,8 @@ nfp_net_rx_queue_setup(struct rte_eth_dev *dev,
 	 * resizing in later calls to the queue setup function.
 	 */
 	tz = rte_eth_dma_zone_reserve(dev, "rx_ring", queue_idx,
-				   sizeof(struct nfp_net_rx_desc) *
-				   max_rx_desc, NFP_MEMZONE_ALIGN,
-				   socket_id);
+			sizeof(struct nfp_net_rx_desc) * max_rx_desc,
+			NFP_MEMZONE_ALIGN, socket_id);
 
 	if (tz == NULL) {
 		PMD_DRV_LOG(ERR, "Error allocating rx dma");
@@ -960,8 +961,8 @@ nfp_net_rx_queue_setup(struct rte_eth_dev *dev,
 
 	/* mbuf pointers array for referencing mbufs linked to RX descriptors */
 	rxq->rxbufs = rte_zmalloc_socket("rxq->rxbufs",
-					 sizeof(*rxq->rxbufs) * nb_desc,
-					 RTE_CACHE_LINE_SIZE, socket_id);
+			sizeof(*rxq->rxbufs) * nb_desc, RTE_CACHE_LINE_SIZE,
+			socket_id);
 	if (rxq->rxbufs == NULL) {
 		nfp_net_rx_queue_release(dev, queue_idx);
 		dev->data->rx_queues[queue_idx] = NULL;
@@ -969,7 +970,7 @@ nfp_net_rx_queue_setup(struct rte_eth_dev *dev,
 	}
 
 	PMD_RX_LOG(DEBUG, "rxbufs=%p hw_ring=%p dma_addr=0x%" PRIx64,
-		   rxq->rxbufs, rxq->rxds, (unsigned long)rxq->dma);
+			rxq->rxbufs, rxq->rxds, (unsigned long)rxq->dma);
 
 	nfp_net_reset_rx_queue(rxq);
 
@@ -998,15 +999,15 @@ nfp_net_tx_free_bufs(struct nfp_net_txq *txq)
 	int todo;
 
 	PMD_TX_LOG(DEBUG, "queue %hu. Check for descriptor with a complete"
-		   " status", txq->qidx);
+			" status", txq->qidx);
 
 	/* Work out how many packets have been sent */
 	qcp_rd_p = nfp_qcp_read(txq->qcp_q, NFP_QCP_READ_PTR);
 
 	if (qcp_rd_p == txq->rd_p) {
 		PMD_TX_LOG(DEBUG, "queue %hu: It seems harrier is not sending "
-			   "packets (%u, %u)", txq->qidx,
-			   qcp_rd_p, txq->rd_p);
+				"packets (%u, %u)", txq->qidx,
+				qcp_rd_p, txq->rd_p);
 		return 0;
 	}
 
@@ -1016,7 +1017,7 @@ nfp_net_tx_free_bufs(struct nfp_net_txq *txq)
 		todo = qcp_rd_p + txq->tx_count - txq->rd_p;
 
 	PMD_TX_LOG(DEBUG, "qcp_rd_p %u, txq->rd_p: %u, qcp->rd_p: %u",
-		   qcp_rd_p, txq->rd_p, txq->rd_p);
+			qcp_rd_p, txq->rd_p, txq->rd_p);
 
 	if (todo == 0)
 		return todo;
@@ -1045,7 +1046,8 @@ nfp_net_tx_queue_release_mbufs(struct nfp_net_txq *txq)
 }
 
 void
-nfp_net_tx_queue_release(struct rte_eth_dev *dev, uint16_t queue_idx)
+nfp_net_tx_queue_release(struct rte_eth_dev *dev,
+		uint16_t queue_idx)
 {
 	struct nfp_net_txq *txq = dev->data->tx_queues[queue_idx];
 
diff --git a/drivers/net/nfp/nfp_rxtx.h b/drivers/net/nfp/nfp_rxtx.h
index 3c7138f7d6..9a30ebd89e 100644
--- a/drivers/net/nfp/nfp_rxtx.h
+++ b/drivers/net/nfp/nfp_rxtx.h
@@ -234,17 +234,17 @@ nfp_net_mbuf_alloc_failed(struct nfp_net_rxq *rxq)
 }
 
 void nfp_net_rx_cksum(struct nfp_net_rxq *rxq, struct nfp_net_rx_desc *rxd,
-		 struct rte_mbuf *mb);
+		struct rte_mbuf *mb);
 int nfp_net_rx_freelist_setup(struct rte_eth_dev *dev);
 uint32_t nfp_net_rx_queue_count(void *rx_queue);
 uint16_t nfp_net_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
-				  uint16_t nb_pkts);
+		uint16_t nb_pkts);
 void nfp_net_rx_queue_release(struct rte_eth_dev *dev, uint16_t queue_idx);
 void nfp_net_reset_rx_queue(struct nfp_net_rxq *rxq);
 int nfp_net_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
-				  uint16_t nb_desc, unsigned int socket_id,
-				  const struct rte_eth_rxconf *rx_conf,
-				  struct rte_mempool *mp);
+		uint16_t nb_desc, unsigned int socket_id,
+		const struct rte_eth_rxconf *rx_conf,
+		struct rte_mempool *mp);
 void nfp_net_tx_queue_release(struct rte_eth_dev *dev, uint16_t queue_idx);
 void nfp_net_reset_tx_queue(struct nfp_net_txq *txq);
 
-- 
2.39.1


^ permalink raw reply	[flat|nested] 40+ messages in thread

* [PATCH 03/11] net/nfp: unify the type of integer variable
  2023-10-07  2:33 [PATCH 00/11] Unify the PMD coding style Chaoyong He
  2023-10-07  2:33 ` [PATCH 01/11] net/nfp: explicitly compare to null and 0 Chaoyong He
  2023-10-07  2:33 ` [PATCH 02/11] net/nfp: unify the indent coding style Chaoyong He
@ 2023-10-07  2:33 ` Chaoyong He
  2023-10-07  2:33 ` [PATCH 04/11] net/nfp: standard the local variable coding style Chaoyong He
                   ` (8 subsequent siblings)
  11 siblings, 0 replies; 40+ messages in thread
From: Chaoyong He @ 2023-10-07  2:33 UTC (permalink / raw)
  To: dev; +Cc: oss-drivers, Chaoyong He, Long Wu, Peng Zhang

Unify the types of the integer variables to the preferred DPDK style.
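
For illustration, a minimal sketch of the rule being applied (the
struct and function names below are invented for the example, they are
not from the driver): size each counter to the value it ranges over,
so a loop bounded by a uint16_t queue count uses a uint16_t counter
rather than a plain int.

    #include <stdint.h>

    struct example_dev {
        uint16_t nb_rx_queues;  /* Queue counts are 16-bit in ethdev */
    };

    /* Walk all Rx queues; the counter width matches the bound. */
    static void
    example_walk_queues(struct example_dev *dev)
    {
        uint16_t i;  /* Was 'int i' before this kind of cleanup */

        for (i = 0; i < dev->nb_rx_queues; i++) {
            /* Per-queue work goes here */
        }
    }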

Signed-off-by: Chaoyong He <chaoyong.he@corigine.com>
Reviewed-by: Long Wu <long.wu@corigine.com>
Reviewed-by: Peng Zhang <peng.zhang@corigine.com>
---
 drivers/net/nfp/flower/nfp_flower.c      |  2 +-
 drivers/net/nfp/flower/nfp_flower_cmsg.c | 16 +++++-----
 drivers/net/nfp/nfd3/nfp_nfd3_dp.c       |  6 ++--
 drivers/net/nfp/nfp_common.c             | 37 +++++++++++++-----------
 drivers/net/nfp/nfp_common.h             | 16 +++++-----
 drivers/net/nfp/nfp_ethdev.c             | 24 +++++++--------
 drivers/net/nfp/nfp_ethdev_vf.c          |  2 +-
 drivers/net/nfp/nfp_flow.c               |  4 +--
 drivers/net/nfp/nfp_rxtx.c               | 12 ++++----
 drivers/net/nfp/nfp_rxtx.h               |  2 +-
 10 files changed, 62 insertions(+), 59 deletions(-)

diff --git a/drivers/net/nfp/flower/nfp_flower.c b/drivers/net/nfp/flower/nfp_flower.c
index 59717fa6b1..bd961043b2 100644
--- a/drivers/net/nfp/flower/nfp_flower.c
+++ b/drivers/net/nfp/flower/nfp_flower.c
@@ -26,7 +26,7 @@ nfp_pf_repr_enable_queues(struct rte_eth_dev *dev)
 {
 	struct nfp_net_hw *hw;
 	uint64_t enabled_queues = 0;
-	int i;
+	uint16_t i;
 	struct nfp_flower_representor *repr;
 
 	repr = dev->data->dev_private;
diff --git a/drivers/net/nfp/flower/nfp_flower_cmsg.c b/drivers/net/nfp/flower/nfp_flower_cmsg.c
index 6b9532f5b6..5d6912b079 100644
--- a/drivers/net/nfp/flower/nfp_flower_cmsg.c
+++ b/drivers/net/nfp/flower/nfp_flower_cmsg.c
@@ -64,10 +64,10 @@ nfp_flower_cmsg_mac_repr_init(struct rte_mbuf *mbuf,
 
 static void
 nfp_flower_cmsg_mac_repr_fill(struct rte_mbuf *m,
-		unsigned int idx,
-		unsigned int nbi,
-		unsigned int nbi_port,
-		unsigned int phys_port)
+		uint8_t idx,
+		uint32_t nbi,
+		uint32_t nbi_port,
+		uint32_t phys_port)
 {
 	struct nfp_flower_cmsg_mac_repr *msg;
 
@@ -81,11 +81,11 @@ nfp_flower_cmsg_mac_repr_fill(struct rte_mbuf *m,
 int
 nfp_flower_cmsg_mac_repr(struct nfp_app_fw_flower *app_fw_flower)
 {
-	int i;
+	uint8_t i;
 	uint16_t cnt;
-	unsigned int nbi;
-	unsigned int nbi_port;
-	unsigned int phys_port;
+	uint32_t nbi;
+	uint32_t nbi_port;
+	uint32_t phys_port;
 	struct rte_mbuf *mbuf;
 	struct nfp_eth_table *nfp_eth_table;
 
diff --git a/drivers/net/nfp/nfd3/nfp_nfd3_dp.c b/drivers/net/nfp/nfd3/nfp_nfd3_dp.c
index 64928254d8..5a84629ed7 100644
--- a/drivers/net/nfp/nfd3/nfp_nfd3_dp.c
+++ b/drivers/net/nfp/nfd3/nfp_nfd3_dp.c
@@ -227,9 +227,9 @@ nfp_net_nfd3_xmit_pkts_common(void *tx_queue,
 		uint16_t nb_pkts,
 		bool repr_flag)
 {
-	int i;
-	int pkt_size;
-	int dma_size;
+	uint16_t i;
+	uint32_t pkt_size;
+	uint16_t dma_size;
 	uint8_t offset;
 	uint64_t dma_addr;
 	uint16_t free_descs;
diff --git a/drivers/net/nfp/nfp_common.c b/drivers/net/nfp/nfp_common.c
index 9719a9212b..cb2c2afbd7 100644
--- a/drivers/net/nfp/nfp_common.c
+++ b/drivers/net/nfp/nfp_common.c
@@ -199,7 +199,7 @@ static int
 __nfp_net_reconfig(struct nfp_net_hw *hw,
 		uint32_t update)
 {
-	int cnt;
+	uint32_t cnt;
 	uint32_t new;
 	struct timespec wait;
 
@@ -229,7 +229,7 @@ __nfp_net_reconfig(struct nfp_net_hw *hw,
 		}
 		if (cnt >= NFP_NET_POLL_TIMEOUT) {
 			PMD_INIT_LOG(ERR, "Reconfig timeout for 0x%08x after"
-					" %dms", update, cnt);
+					" %ums", update, cnt);
 			return -EIO;
 		}
 		nanosleep(&wait, 0); /* waiting for a 1ms */
@@ -466,7 +466,7 @@ nfp_net_enable_queues(struct rte_eth_dev *dev)
 {
 	struct nfp_net_hw *hw;
 	uint64_t enabled_queues = 0;
-	int i;
+	uint16_t i;
 
 	hw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private);
 
@@ -575,7 +575,7 @@ nfp_configure_rx_interrupt(struct rte_eth_dev *dev,
 		struct rte_intr_handle *intr_handle)
 {
 	struct nfp_net_hw *hw;
-	int i;
+	uint16_t i;
 
 	if (rte_intr_vec_list_alloc(intr_handle, "intr_vec",
 				dev->data->nb_rx_queues) != 0) {
@@ -832,7 +832,7 @@ int
 nfp_net_stats_get(struct rte_eth_dev *dev,
 		struct rte_eth_stats *stats)
 {
-	int i;
+	uint16_t i;
 	struct nfp_net_hw *hw;
 	struct rte_eth_stats nfp_dev_stats;
 
@@ -923,7 +923,7 @@ nfp_net_stats_get(struct rte_eth_dev *dev,
 int
 nfp_net_stats_reset(struct rte_eth_dev *dev)
 {
-	int i;
+	uint16_t i;
 	struct nfp_net_hw *hw;
 
 	hw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private);
@@ -1398,7 +1398,7 @@ nfp_rx_queue_intr_enable(struct rte_eth_dev *dev,
 {
 	struct rte_pci_device *pci_dev;
 	struct nfp_net_hw *hw;
-	int base = 0;
+	uint16_t base = 0;
 
 	hw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private);
 	pci_dev = RTE_ETH_DEV_TO_PCI(dev);
@@ -1419,7 +1419,7 @@ nfp_rx_queue_intr_disable(struct rte_eth_dev *dev,
 {
 	struct rte_pci_device *pci_dev;
 	struct nfp_net_hw *hw;
-	int base = 0;
+	uint16_t base = 0;
 
 	hw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private);
 	pci_dev = RTE_ETH_DEV_TO_PCI(dev);
@@ -1619,9 +1619,10 @@ nfp_net_rss_reta_write(struct rte_eth_dev *dev,
 		struct rte_eth_rss_reta_entry64 *reta_conf,
 		uint16_t reta_size)
 {
-	uint32_t reta, mask;
-	int i, j;
-	int idx, shift;
+	uint8_t mask;
+	uint32_t reta;
+	uint16_t i, j;
+	uint16_t idx, shift;
 	struct nfp_net_hw *hw =
 		NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private);
 
@@ -1695,8 +1696,9 @@ nfp_net_reta_query(struct rte_eth_dev *dev,
 		struct rte_eth_rss_reta_entry64 *reta_conf,
 		uint16_t reta_size)
 {
-	uint8_t i, j, mask;
-	int idx, shift;
+	uint16_t i, j;
+	uint8_t mask;
+	uint16_t idx, shift;
 	uint32_t reta;
 	struct nfp_net_hw *hw;
 
@@ -1720,7 +1722,7 @@ nfp_net_reta_query(struct rte_eth_dev *dev,
 		/* Handling 4 RSS entries per loop */
 		idx = i / RTE_ETH_RETA_GROUP_SIZE;
 		shift = i % RTE_ETH_RETA_GROUP_SIZE;
-		mask = (uint8_t)((reta_conf[idx].mask >> shift) & 0xF);
+		mask = (reta_conf[idx].mask >> shift) & 0xF;
 
 		if (mask == 0)
 			continue;
@@ -1744,7 +1746,7 @@ nfp_net_rss_hash_write(struct rte_eth_dev *dev,
 	uint64_t rss_hf;
 	uint32_t cfg_rss_ctrl = 0;
 	uint8_t key;
-	int i;
+	uint8_t i;
 
 	hw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private);
 
@@ -1835,7 +1837,7 @@ nfp_net_rss_hash_conf_get(struct rte_eth_dev *dev,
 	uint64_t rss_hf;
 	uint32_t cfg_rss_ctrl;
 	uint8_t key;
-	int i;
+	uint8_t i;
 	struct nfp_net_hw *hw;
 
 	hw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private);
@@ -1893,7 +1895,8 @@ nfp_net_rss_config_default(struct rte_eth_dev *dev)
 	struct rte_eth_rss_reta_entry64 nfp_reta_conf[2];
 	uint16_t rx_queues = dev->data->nb_rx_queues;
 	uint16_t queue;
-	int i, j, ret;
+	uint8_t i, j;
+	int ret;
 
 	PMD_DRV_LOG(INFO, "setting default RSS conf for %u queues",
 			rx_queues);
diff --git a/drivers/net/nfp/nfp_common.h b/drivers/net/nfp/nfp_common.h
index e4fd394868..71153ea25b 100644
--- a/drivers/net/nfp/nfp_common.h
+++ b/drivers/net/nfp/nfp_common.h
@@ -245,14 +245,14 @@ nn_writeq(uint64_t val,
  */
 static inline uint8_t
 nn_cfg_readb(struct nfp_net_hw *hw,
-		int off)
+		uint32_t off)
 {
 	return nn_readb(hw->ctrl_bar + off);
 }
 
 static inline void
 nn_cfg_writeb(struct nfp_net_hw *hw,
-		int off,
+		uint32_t off,
 		uint8_t val)
 {
 	nn_writeb(val, hw->ctrl_bar + off);
@@ -260,14 +260,14 @@ nn_cfg_writeb(struct nfp_net_hw *hw,
 
 static inline uint16_t
 nn_cfg_readw(struct nfp_net_hw *hw,
-		int off)
+		uint32_t off)
 {
 	return rte_le_to_cpu_16(nn_readw(hw->ctrl_bar + off));
 }
 
 static inline void
 nn_cfg_writew(struct nfp_net_hw *hw,
-		int off,
+		uint32_t off,
 		uint16_t val)
 {
 	nn_writew(rte_cpu_to_le_16(val), hw->ctrl_bar + off);
@@ -275,14 +275,14 @@ nn_cfg_writew(struct nfp_net_hw *hw,
 
 static inline uint32_t
 nn_cfg_readl(struct nfp_net_hw *hw,
-		int off)
+		uint32_t off)
 {
 	return rte_le_to_cpu_32(nn_readl(hw->ctrl_bar + off));
 }
 
 static inline void
 nn_cfg_writel(struct nfp_net_hw *hw,
-		int off,
+		uint32_t off,
 		uint32_t val)
 {
 	nn_writel(rte_cpu_to_le_32(val), hw->ctrl_bar + off);
@@ -290,14 +290,14 @@ nn_cfg_writel(struct nfp_net_hw *hw,
 
 static inline uint64_t
 nn_cfg_readq(struct nfp_net_hw *hw,
-		int off)
+		uint32_t off)
 {
 	return rte_le_to_cpu_64(nn_readq(hw->ctrl_bar + off));
 }
 
 static inline void
 nn_cfg_writeq(struct nfp_net_hw *hw,
-		int off,
+		uint32_t off,
 		uint64_t val)
 {
 	nn_writeq(rte_cpu_to_le_64(val), hw->ctrl_bar + off);
diff --git a/drivers/net/nfp/nfp_ethdev.c b/drivers/net/nfp/nfp_ethdev.c
index 65473d87e8..140d20dcf7 100644
--- a/drivers/net/nfp/nfp_ethdev.c
+++ b/drivers/net/nfp/nfp_ethdev.c
@@ -23,7 +23,7 @@
 
 static int
 nfp_net_pf_read_mac(struct nfp_app_fw_nic *app_fw_nic,
-		int port)
+		uint16_t port)
 {
 	struct nfp_eth_table *nfp_eth_table;
 	struct nfp_net_hw *hw = NULL;
@@ -255,7 +255,7 @@ nfp_net_close(struct rte_eth_dev *dev)
 	struct rte_pci_device *pci_dev;
 	struct nfp_pf_dev *pf_dev;
 	struct nfp_app_fw_nic *app_fw_nic;
-	int i;
+	uint8_t i;
 
 	if (rte_eal_process_type() != RTE_PROC_PRIMARY)
 		return 0;
@@ -487,7 +487,7 @@ nfp_net_init(struct rte_eth_dev *eth_dev)
 	struct rte_ether_addr *tmp_ether_addr;
 	uint64_t rx_base;
 	uint64_t tx_base;
-	int port = 0;
+	uint16_t port = 0;
 	int err;
 
 	PMD_INIT_FUNC_TRACE();
@@ -501,7 +501,7 @@ nfp_net_init(struct rte_eth_dev *eth_dev)
 	app_fw_nic = NFP_PRIV_TO_APP_FW_NIC(pf_dev->app_fw_priv);
 
 	port = ((struct nfp_net_hw *)eth_dev->data->dev_private)->idx;
-	if (port < 0 || port > 7) {
+	if (port > 7) {
 		PMD_DRV_LOG(ERR, "Port value is wrong");
 		return -ENODEV;
 	}
@@ -761,10 +761,10 @@ static int
 nfp_init_app_fw_nic(struct nfp_pf_dev *pf_dev,
 		const struct nfp_dev_info *dev_info)
 {
-	int i;
+	uint8_t i;
 	int ret;
 	int err = 0;
-	int total_vnics;
+	uint32_t total_vnics;
 	struct nfp_net_hw *hw;
 	unsigned int numa_node;
 	struct rte_eth_dev *eth_dev;
@@ -785,7 +785,7 @@ nfp_init_app_fw_nic(struct nfp_pf_dev *pf_dev,
 
 	/* Read the number of vNIC's created for the PF */
 	total_vnics = nfp_rtsym_read_le(pf_dev->sym_tbl, "nfd_cfg_pf0_num_ports", &err);
-	if (err != 0 || total_vnics <= 0 || total_vnics > 8) {
+	if (err != 0 || total_vnics == 0 || total_vnics > 8) {
 		PMD_INIT_LOG(ERR, "nfd_cfg_pf0_num_ports symbol with wrong value");
 		ret = -ENODEV;
 		goto app_cleanup;
@@ -795,7 +795,7 @@ nfp_init_app_fw_nic(struct nfp_pf_dev *pf_dev,
 	 * For coreNIC the number of vNICs exposed should be the same as the
 	 * number of physical ports
 	 */
-	if (total_vnics != (int)nfp_eth_table->count) {
+	if (total_vnics != nfp_eth_table->count) {
 		PMD_INIT_LOG(ERR, "Total physical ports do not match number of vNICs");
 		ret = -ENODEV;
 		goto app_cleanup;
@@ -1053,15 +1053,15 @@ nfp_secondary_init_app_fw_nic(struct rte_pci_device *pci_dev,
 		struct nfp_rtsym_table *sym_tbl,
 		struct nfp_cpp *cpp)
 {
-	int i;
+	uint32_t i;
 	int err = 0;
 	int ret = 0;
-	int total_vnics;
+	uint32_t total_vnics;
 	struct nfp_net_hw *hw;
 
 	/* Read the number of vNIC's created for the PF */
 	total_vnics = nfp_rtsym_read_le(sym_tbl, "nfd_cfg_pf0_num_ports", &err);
-	if (err != 0 || total_vnics <= 0 || total_vnics > 8) {
+	if (err != 0 || total_vnics == 0 || total_vnics > 8) {
 		PMD_INIT_LOG(ERR, "nfd_cfg_pf0_num_ports symbol with wrong value");
 		return -ENODEV;
 	}
@@ -1069,7 +1069,7 @@ nfp_secondary_init_app_fw_nic(struct rte_pci_device *pci_dev,
 	for (i = 0; i < total_vnics; i++) {
 		struct rte_eth_dev *eth_dev;
 		char port_name[RTE_ETH_NAME_MAX_LEN];
-		snprintf(port_name, sizeof(port_name), "%s_port%d",
+		snprintf(port_name, sizeof(port_name), "%s_port%u",
 				pci_dev->device.name, i);
 
 		PMD_INIT_LOG(DEBUG, "Secondary attaching to port %s", port_name);
diff --git a/drivers/net/nfp/nfp_ethdev_vf.c b/drivers/net/nfp/nfp_ethdev_vf.c
index ac6a10685d..892300a909 100644
--- a/drivers/net/nfp/nfp_ethdev_vf.c
+++ b/drivers/net/nfp/nfp_ethdev_vf.c
@@ -260,7 +260,7 @@ nfp_netvf_init(struct rte_eth_dev *eth_dev)
 
 	uint64_t tx_bar_off = 0, rx_bar_off = 0;
 	uint32_t start_q;
-	int port = 0;
+	uint16_t port = 0;
 	int err;
 	const struct nfp_dev_info *dev_info;
 
diff --git a/drivers/net/nfp/nfp_flow.c b/drivers/net/nfp/nfp_flow.c
index 156b9599db..a254d839ff 100644
--- a/drivers/net/nfp/nfp_flow.c
+++ b/drivers/net/nfp/nfp_flow.c
@@ -2001,7 +2001,7 @@ nfp_flow_compile_item_proc(struct nfp_flower_representor *repr,
 		char **mbuf_off_mask,
 		bool is_outer_layer)
 {
-	int i;
+	uint32_t i;
 	int ret = 0;
 	bool continue_flag = true;
 	const struct rte_flow_item *item;
@@ -2235,7 +2235,7 @@ nfp_flow_action_set_ipv6(char *act_data,
 		const struct rte_flow_action *action,
 		bool ip_src_flag)
 {
-	int i;
+	uint32_t i;
 	rte_be32_t tmp;
 	size_t act_size;
 	struct nfp_fl_act_set_ipv6_addr *set_ip;
diff --git a/drivers/net/nfp/nfp_rxtx.c b/drivers/net/nfp/nfp_rxtx.c
index 7885166753..8cbb9b74a2 100644
--- a/drivers/net/nfp/nfp_rxtx.c
+++ b/drivers/net/nfp/nfp_rxtx.c
@@ -190,7 +190,7 @@ nfp_net_rx_fill_freelist(struct nfp_net_rxq *rxq)
 {
 	struct nfp_net_dp_buf *rxe = rxq->rxbufs;
 	uint64_t dma_addr;
-	unsigned int i;
+	uint16_t i;
 
 	PMD_RX_LOG(DEBUG, "Fill Rx Freelist for %u descriptors",
 			rxq->rx_count);
@@ -229,7 +229,7 @@ nfp_net_rx_fill_freelist(struct nfp_net_rxq *rxq)
 int
 nfp_net_rx_freelist_setup(struct rte_eth_dev *dev)
 {
-	int i;
+	uint16_t i;
 
 	for (i = 0; i < dev->data->nb_rx_queues; i++) {
 		if (nfp_net_rx_fill_freelist(dev->data->rx_queues[i]) != 0)
@@ -840,7 +840,7 @@ nfp_net_recv_pkts(void *rx_queue,
 static void
 nfp_net_rx_queue_release_mbufs(struct nfp_net_rxq *rxq)
 {
-	unsigned int i;
+	uint16_t i;
 
 	if (rxq->rxbufs == NULL)
 		return;
@@ -992,11 +992,11 @@ nfp_net_rx_queue_setup(struct rte_eth_dev *dev,
  * @txq: TX queue to work with
  * Returns number of descriptors freed
  */
-int
+uint32_t
 nfp_net_tx_free_bufs(struct nfp_net_txq *txq)
 {
 	uint32_t qcp_rd_p;
-	int todo;
+	uint32_t todo;
 
 	PMD_TX_LOG(DEBUG, "queue %hu. Check for descriptor with a complete"
 			" status", txq->qidx);
@@ -1032,7 +1032,7 @@ nfp_net_tx_free_bufs(struct nfp_net_txq *txq)
 static void
 nfp_net_tx_queue_release_mbufs(struct nfp_net_txq *txq)
 {
-	unsigned int i;
+	uint32_t i;
 
 	if (txq->txbufs == NULL)
 		return;
diff --git a/drivers/net/nfp/nfp_rxtx.h b/drivers/net/nfp/nfp_rxtx.h
index 9a30ebd89e..98ef6c3d93 100644
--- a/drivers/net/nfp/nfp_rxtx.h
+++ b/drivers/net/nfp/nfp_rxtx.h
@@ -253,7 +253,7 @@ int nfp_net_tx_queue_setup(struct rte_eth_dev *dev,
 		uint16_t nb_desc,
 		unsigned int socket_id,
 		const struct rte_eth_txconf *tx_conf);
-int nfp_net_tx_free_bufs(struct nfp_net_txq *txq);
+uint32_t nfp_net_tx_free_bufs(struct nfp_net_txq *txq);
 void nfp_net_set_meta_vlan(struct nfp_net_meta_raw *meta_data,
 		struct rte_mbuf *pkt,
 		uint8_t layer);
-- 
2.39.1


^ permalink raw reply	[flat|nested] 40+ messages in thread

* [PATCH 04/11] net/nfp: standard the local variable coding style
  2023-10-07  2:33 [PATCH 00/11] Unify the PMD coding style Chaoyong He
                   ` (2 preceding siblings ...)
  2023-10-07  2:33 ` [PATCH 03/11] net/nfp: unify the type of integer variable Chaoyong He
@ 2023-10-07  2:33 ` Chaoyong He
  2023-10-07  2:33 ` [PATCH 05/11] net/nfp: adjust the log statement Chaoyong He
                   ` (7 subsequent siblings)
  11 siblings, 0 replies; 40+ messages in thread
From: Chaoyong He @ 2023-10-07  2:33 UTC (permalink / raw)
  To: dev; +Cc: oss-drivers, Chaoyong He, Long Wu, Peng Zhang

Each line should declare only one local variable, and the local
variables should follow a unified order (shorter declaration lines
first).
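
As an illustration, the before/after shape of the rule, condensed from
the nfp_net_rss_reta_write() hunk below:

    /* Before: several variables per line, no fixed ordering */
    uint8_t mask;
    uint32_t reta;
    uint16_t i, j;
    uint16_t idx, shift;

    /* After: one variable per line, shorter declarations first */
    uint16_t i;
    uint16_t j;
    uint16_t idx;
    uint8_t mask;
    uint32_t reta;
    uint16_t shift;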

Signed-off-by: Chaoyong He <chaoyong.he@corigine.com>
Reviewed-by: Long Wu <long.wu@corigine.com>
Reviewed-by: Peng Zhang <peng.zhang@corigine.com>
---
 drivers/net/nfp/flower/nfp_flower.c |  6 +-
 drivers/net/nfp/nfd3/nfp_nfd3_dp.c  |  4 +-
 drivers/net/nfp/nfp_common.c        | 97 ++++++++++++++++-------------
 drivers/net/nfp/nfp_common.h        |  3 +-
 drivers/net/nfp/nfp_cpp_bridge.c    | 39 ++++++++----
 drivers/net/nfp/nfp_ethdev.c        | 47 +++++++-------
 drivers/net/nfp/nfp_ethdev_vf.c     | 23 +++----
 drivers/net/nfp/nfp_flow.c          | 28 ++++-----
 drivers/net/nfp/nfp_rxtx.c          | 38 +++++------
 9 files changed, 154 insertions(+), 131 deletions(-)

diff --git a/drivers/net/nfp/flower/nfp_flower.c b/drivers/net/nfp/flower/nfp_flower.c
index bd961043b2..9000ee191c 100644
--- a/drivers/net/nfp/flower/nfp_flower.c
+++ b/drivers/net/nfp/flower/nfp_flower.c
@@ -24,9 +24,9 @@
 static void
 nfp_pf_repr_enable_queues(struct rte_eth_dev *dev)
 {
+	uint16_t i;
 	struct nfp_net_hw *hw;
 	uint64_t enabled_queues = 0;
-	uint16_t i;
 	struct nfp_flower_representor *repr;
 
 	repr = dev->data->dev_private;
@@ -50,9 +50,9 @@ nfp_pf_repr_enable_queues(struct rte_eth_dev *dev)
 static void
 nfp_pf_repr_disable_queues(struct rte_eth_dev *dev)
 {
-	struct nfp_net_hw *hw;
+	uint32_t update;
 	uint32_t new_ctrl;
-	uint32_t update = 0;
+	struct nfp_net_hw *hw;
 	struct nfp_flower_representor *repr;
 
 	repr = dev->data->dev_private;
diff --git a/drivers/net/nfp/nfd3/nfp_nfd3_dp.c b/drivers/net/nfp/nfd3/nfp_nfd3_dp.c
index 5a84629ed7..699f65ebef 100644
--- a/drivers/net/nfp/nfd3/nfp_nfd3_dp.c
+++ b/drivers/net/nfp/nfd3/nfp_nfd3_dp.c
@@ -228,13 +228,13 @@ nfp_net_nfd3_xmit_pkts_common(void *tx_queue,
 		bool repr_flag)
 {
 	uint16_t i;
+	uint8_t offset;
 	uint32_t pkt_size;
 	uint16_t dma_size;
-	uint8_t offset;
 	uint64_t dma_addr;
 	uint16_t free_descs;
-	uint16_t issued_descs;
 	struct rte_mbuf *pkt;
+	uint16_t issued_descs;
 	struct nfp_net_hw *hw;
 	struct rte_mbuf **lmbuf;
 	struct nfp_net_txq *txq;
diff --git a/drivers/net/nfp/nfp_common.c b/drivers/net/nfp/nfp_common.c
index cb2c2afbd7..18291a1cde 100644
--- a/drivers/net/nfp/nfp_common.c
+++ b/drivers/net/nfp/nfp_common.c
@@ -375,10 +375,10 @@ nfp_net_mbox_reconfig(struct nfp_net_hw *hw,
 int
 nfp_net_configure(struct rte_eth_dev *dev)
 {
+	struct nfp_net_hw *hw;
 	struct rte_eth_conf *dev_conf;
 	struct rte_eth_rxmode *rxmode;
 	struct rte_eth_txmode *txmode;
-	struct nfp_net_hw *hw;
 
 	hw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private);
 
@@ -464,9 +464,9 @@ nfp_net_enbable_rxvlan_cap(struct nfp_net_hw *hw,
 void
 nfp_net_enable_queues(struct rte_eth_dev *dev)
 {
+	uint16_t i;
 	struct nfp_net_hw *hw;
 	uint64_t enabled_queues = 0;
-	uint16_t i;
 
 	hw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private);
 
@@ -488,8 +488,9 @@ nfp_net_enable_queues(struct rte_eth_dev *dev)
 void
 nfp_net_disable_queues(struct rte_eth_dev *dev)
 {
+	uint32_t update;
+	uint32_t new_ctrl;
 	struct nfp_net_hw *hw;
-	uint32_t new_ctrl, update = 0;
 
 	hw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private);
 
@@ -528,9 +529,10 @@ void
 nfp_net_write_mac(struct nfp_net_hw *hw,
 		uint8_t *mac)
 {
-	uint32_t mac0 = *(uint32_t *)mac;
+	uint32_t mac0;
 	uint16_t mac1;
 
+	mac0 = *(uint32_t *)mac;
 	nn_writel(rte_cpu_to_be_32(mac0), hw->ctrl_bar + NFP_NET_CFG_MACADDR);
 
 	mac += 4;
@@ -543,8 +545,9 @@ int
 nfp_net_set_mac_addr(struct rte_eth_dev *dev,
 		struct rte_ether_addr *mac_addr)
 {
+	uint32_t ctrl;
+	uint32_t update;
 	struct nfp_net_hw *hw;
-	uint32_t update, ctrl;
 
 	hw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private);
 	if ((hw->ctrl & NFP_NET_CFG_CTRL_ENABLE) != 0 &&
@@ -574,8 +577,8 @@ int
 nfp_configure_rx_interrupt(struct rte_eth_dev *dev,
 		struct rte_intr_handle *intr_handle)
 {
-	struct nfp_net_hw *hw;
 	uint16_t i;
+	struct nfp_net_hw *hw;
 
 	if (rte_intr_vec_list_alloc(intr_handle, "intr_vec",
 				dev->data->nb_rx_queues) != 0) {
@@ -615,11 +618,11 @@ nfp_configure_rx_interrupt(struct rte_eth_dev *dev,
 uint32_t
 nfp_check_offloads(struct rte_eth_dev *dev)
 {
+	uint32_t ctrl = 0;
 	struct nfp_net_hw *hw;
 	struct rte_eth_conf *dev_conf;
 	struct rte_eth_rxmode *rxmode;
 	struct rte_eth_txmode *txmode;
-	uint32_t ctrl = 0;
 
 	hw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private);
 
@@ -682,9 +685,10 @@ nfp_check_offloads(struct rte_eth_dev *dev)
 int
 nfp_net_promisc_enable(struct rte_eth_dev *dev)
 {
-	uint32_t new_ctrl, update = 0;
-	struct nfp_net_hw *hw;
 	int ret;
+	uint32_t new_ctrl;
+	uint32_t update = 0;
+	struct nfp_net_hw *hw;
 	struct nfp_flower_representor *repr;
 
 	PMD_DRV_LOG(DEBUG, "Promiscuous mode enable");
@@ -725,9 +729,10 @@ nfp_net_promisc_enable(struct rte_eth_dev *dev)
 int
 nfp_net_promisc_disable(struct rte_eth_dev *dev)
 {
-	uint32_t new_ctrl, update = 0;
-	struct nfp_net_hw *hw;
 	int ret;
+	uint32_t new_ctrl;
+	uint32_t update = 0;
+	struct nfp_net_hw *hw;
 
 	hw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private);
 
@@ -764,8 +769,8 @@ nfp_net_link_update(struct rte_eth_dev *dev,
 {
 	int ret;
 	uint32_t i;
-	uint32_t nn_link_status;
 	struct nfp_net_hw *hw;
+	uint32_t nn_link_status;
 	struct rte_eth_link link;
 	struct nfp_eth_table *nfp_eth_table;
 
@@ -988,12 +993,13 @@ nfp_net_stats_reset(struct rte_eth_dev *dev)
 uint32_t
 nfp_net_xstats_size(const struct rte_eth_dev *dev)
 {
-	/* If the device is a VF, then there will be no MAC stats */
-	struct nfp_net_hw *hw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+	uint32_t count;
+	struct nfp_net_hw *hw;
 	const uint32_t size = RTE_DIM(nfp_net_xstats);
 
+	/* If the device is a VF, then there will be no MAC stats */
+	hw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private);
 	if (hw->mac_stats == NULL) {
-		uint32_t count;
 		for (count = 0; count < size; count++) {
 			if (nfp_net_xstats[count].group == NFP_XSTAT_GROUP_MAC)
 				break;
@@ -1396,9 +1402,9 @@ int
 nfp_rx_queue_intr_enable(struct rte_eth_dev *dev,
 		uint16_t queue_id)
 {
-	struct rte_pci_device *pci_dev;
-	struct nfp_net_hw *hw;
 	uint16_t base = 0;
+	struct nfp_net_hw *hw;
+	struct rte_pci_device *pci_dev;
 
 	hw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private);
 	pci_dev = RTE_ETH_DEV_TO_PCI(dev);
@@ -1417,9 +1423,9 @@ int
 nfp_rx_queue_intr_disable(struct rte_eth_dev *dev,
 		uint16_t queue_id)
 {
-	struct rte_pci_device *pci_dev;
-	struct nfp_net_hw *hw;
 	uint16_t base = 0;
+	struct nfp_net_hw *hw;
+	struct rte_pci_device *pci_dev;
 
 	hw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private);
 	pci_dev = RTE_ETH_DEV_TO_PCI(dev);
@@ -1436,8 +1442,8 @@ nfp_rx_queue_intr_disable(struct rte_eth_dev *dev,
 static void
 nfp_net_dev_link_status_print(struct rte_eth_dev *dev)
 {
-	struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
 	struct rte_eth_link link;
+	struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
 
 	rte_eth_linkstatus_get(dev, &link);
 	if (link.link_status != 0)
@@ -1573,16 +1579,16 @@ int
 nfp_net_vlan_offload_set(struct rte_eth_dev *dev,
 		int mask)
 {
-	uint32_t new_ctrl, update;
+	int ret;
+	uint32_t update;
+	uint32_t new_ctrl;
 	struct nfp_net_hw *hw;
+	uint32_t rxvlan_ctrl = 0;
 	struct rte_eth_conf *dev_conf;
-	uint32_t rxvlan_ctrl;
-	int ret;
 
 	hw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private);
 	dev_conf = &dev->data->dev_conf;
 	new_ctrl = hw->ctrl;
-	rxvlan_ctrl = 0;
 
 	nfp_net_enbable_rxvlan_cap(hw, &rxvlan_ctrl);
 
@@ -1619,12 +1625,15 @@ nfp_net_rss_reta_write(struct rte_eth_dev *dev,
 		struct rte_eth_rss_reta_entry64 *reta_conf,
 		uint16_t reta_size)
 {
+	uint16_t i;
+	uint16_t j;
+	uint16_t idx;
 	uint8_t mask;
 	uint32_t reta;
-	uint16_t i, j;
-	uint16_t idx, shift;
-	struct nfp_net_hw *hw =
-		NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+	uint16_t shift;
+	struct nfp_net_hw *hw;
+
+	hw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private);
 
 	if (reta_size != NFP_NET_CFG_RSS_ITBL_SZ) {
 		PMD_DRV_LOG(ERR, "The size of hash lookup table configured "
@@ -1670,11 +1679,11 @@ nfp_net_reta_update(struct rte_eth_dev *dev,
 		struct rte_eth_rss_reta_entry64 *reta_conf,
 		uint16_t reta_size)
 {
-	struct nfp_net_hw *hw =
-		NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private);
-	uint32_t update;
 	int ret;
+	uint32_t update;
+	struct nfp_net_hw *hw;
 
+	hw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private);
 	if ((hw->ctrl & NFP_NET_CFG_CTRL_RSS_ANY) == 0)
 		return -EINVAL;
 
@@ -1696,10 +1705,12 @@ nfp_net_reta_query(struct rte_eth_dev *dev,
 		struct rte_eth_rss_reta_entry64 *reta_conf,
 		uint16_t reta_size)
 {
-	uint16_t i, j;
+	uint16_t i;
+	uint16_t j;
+	uint16_t idx;
 	uint8_t mask;
-	uint16_t idx, shift;
 	uint32_t reta;
+	uint16_t shift;
 	struct nfp_net_hw *hw;
 
 	hw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private);
@@ -1742,11 +1753,11 @@ static int
 nfp_net_rss_hash_write(struct rte_eth_dev *dev,
 		struct rte_eth_rss_conf *rss_conf)
 {
-	struct nfp_net_hw *hw;
+	uint8_t i;
+	uint8_t key;
 	uint64_t rss_hf;
+	struct nfp_net_hw *hw;
 	uint32_t cfg_rss_ctrl = 0;
-	uint8_t key;
-	uint8_t i;
 
 	hw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private);
 
@@ -1834,10 +1845,10 @@ int
 nfp_net_rss_hash_conf_get(struct rte_eth_dev *dev,
 		struct rte_eth_rss_conf *rss_conf)
 {
+	uint8_t i;
+	uint8_t key;
 	uint64_t rss_hf;
 	uint32_t cfg_rss_ctrl;
-	uint8_t key;
-	uint8_t i;
 	struct nfp_net_hw *hw;
 
 	hw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private);
@@ -1890,13 +1901,14 @@ nfp_net_rss_hash_conf_get(struct rte_eth_dev *dev,
 int
 nfp_net_rss_config_default(struct rte_eth_dev *dev)
 {
+	int ret;
+	uint8_t i;
+	uint8_t j;
+	uint16_t queue = 0;
 	struct rte_eth_conf *dev_conf;
 	struct rte_eth_rss_conf rss_conf;
-	struct rte_eth_rss_reta_entry64 nfp_reta_conf[2];
 	uint16_t rx_queues = dev->data->nb_rx_queues;
-	uint16_t queue;
-	uint8_t i, j;
-	int ret;
+	struct rte_eth_rss_reta_entry64 nfp_reta_conf[2];
 
 	PMD_DRV_LOG(INFO, "setting default RSS conf for %u queues",
 			rx_queues);
@@ -1904,7 +1916,6 @@ nfp_net_rss_config_default(struct rte_eth_dev *dev)
 	nfp_reta_conf[0].mask = ~0x0;
 	nfp_reta_conf[1].mask = ~0x0;
 
-	queue = 0;
 	for (i = 0; i < 0x40; i += 8) {
 		for (j = i; j < (i + 8); j++) {
 			nfp_reta_conf[0].reta[j] = queue;
diff --git a/drivers/net/nfp/nfp_common.h b/drivers/net/nfp/nfp_common.h
index 71153ea25b..9cb889c4a6 100644
--- a/drivers/net/nfp/nfp_common.h
+++ b/drivers/net/nfp/nfp_common.h
@@ -222,8 +222,9 @@ nn_writew(uint16_t val,
 static inline uint64_t
 nn_readq(volatile void *addr)
 {
+	uint32_t low;
+	uint32_t high;
 	const volatile uint32_t *p = addr;
-	uint32_t low, high;
 
 	high = nn_readl((volatile const void *)(p + 1));
 	low = nn_readl((volatile const void *)p);
diff --git a/drivers/net/nfp/nfp_cpp_bridge.c b/drivers/net/nfp/nfp_cpp_bridge.c
index 85a8bf9235..727ec7a7b2 100644
--- a/drivers/net/nfp/nfp_cpp_bridge.c
+++ b/drivers/net/nfp/nfp_cpp_bridge.c
@@ -119,12 +119,16 @@ static int
 nfp_cpp_bridge_serve_write(int sockfd,
 		struct nfp_cpp *cpp)
 {
-	struct nfp_cpp_area *area;
-	off_t offset, nfp_offset;
-	uint32_t cpp_id, pos, len;
+	int err;
+	off_t offset;
+	uint32_t pos;
+	uint32_t len;
+	size_t count;
+	size_t curlen;
+	uint32_t cpp_id;
+	off_t nfp_offset;
 	uint32_t tmpbuf[16];
-	size_t count, curlen;
-	int err = 0;
+	struct nfp_cpp_area *area;
 
 	PMD_CPP_LOG(DEBUG, "%s: offset size %zu, count_size: %zu\n", __func__,
 			sizeof(off_t), sizeof(size_t));
@@ -220,12 +224,16 @@ static int
 nfp_cpp_bridge_serve_read(int sockfd,
 		struct nfp_cpp *cpp)
 {
-	struct nfp_cpp_area *area;
-	off_t offset, nfp_offset;
-	uint32_t cpp_id, pos, len;
+	int err;
+	off_t offset;
+	uint32_t pos;
+	uint32_t len;
+	size_t count;
+	size_t curlen;
+	uint32_t cpp_id;
+	off_t nfp_offset;
 	uint32_t tmpbuf[16];
-	size_t count, curlen;
-	int err = 0;
+	struct nfp_cpp_area *area;
 
 	PMD_CPP_LOG(DEBUG, "%s: offset size %zu, count_size: %zu\n", __func__,
 			sizeof(off_t), sizeof(size_t));
@@ -319,8 +327,10 @@ static int
 nfp_cpp_bridge_serve_ioctl(int sockfd,
 		struct nfp_cpp *cpp)
 {
-	uint32_t cmd, ident_size, tmp;
 	int err;
+	uint32_t cmd;
+	uint32_t tmp;
+	uint32_t ident_size;
 
 	/* Reading now the IOCTL command */
 	err = recv(sockfd, &cmd, 4, 0);
@@ -375,10 +385,13 @@ nfp_cpp_bridge_serve_ioctl(int sockfd,
 static int
 nfp_cpp_bridge_service_func(void *args)
 {
-	struct sockaddr address;
+	int op;
+	int ret;
+	int sockfd;
+	int datafd;
 	struct nfp_cpp *cpp;
+	struct sockaddr address;
 	struct nfp_pf_dev *pf_dev;
-	int sockfd, datafd, op, ret;
 	struct timeval timeout = {1, 0};
 
 	unlink("/tmp/nfp_cpp");
diff --git a/drivers/net/nfp/nfp_ethdev.c b/drivers/net/nfp/nfp_ethdev.c
index 140d20dcf7..7d149decfb 100644
--- a/drivers/net/nfp/nfp_ethdev.c
+++ b/drivers/net/nfp/nfp_ethdev.c
@@ -25,8 +25,8 @@ static int
 nfp_net_pf_read_mac(struct nfp_app_fw_nic *app_fw_nic,
 		uint16_t port)
 {
+	struct nfp_net_hw *hw;
 	struct nfp_eth_table *nfp_eth_table;
-	struct nfp_net_hw *hw = NULL;
 
 	/* Grab a pointer to the correct physical port */
 	hw = app_fw_nic->ports[port];
@@ -42,18 +42,19 @@ nfp_net_pf_read_mac(struct nfp_app_fw_nic *app_fw_nic,
 static int
 nfp_net_start(struct rte_eth_dev *dev)
 {
-	struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
-	struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
-	uint32_t new_ctrl, update = 0;
+	int ret;
+	uint32_t new_ctrl;
+	uint32_t update = 0;
 	uint32_t cap_extend;
-	uint32_t ctrl_extend = 0;
+	uint32_t intr_vector;
 	struct nfp_net_hw *hw;
+	uint32_t ctrl_extend = 0;
 	struct nfp_pf_dev *pf_dev;
-	struct nfp_app_fw_nic *app_fw_nic;
 	struct rte_eth_conf *dev_conf;
 	struct rte_eth_rxmode *rxmode;
-	uint32_t intr_vector;
-	int ret;
+	struct nfp_app_fw_nic *app_fw_nic;
+	struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
+	struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
 
 	hw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private);
 	pf_dev = NFP_NET_DEV_PRIVATE_TO_PF(dev->data->dev_private);
@@ -251,11 +252,11 @@ nfp_net_set_link_down(struct rte_eth_dev *dev)
 static int
 nfp_net_close(struct rte_eth_dev *dev)
 {
+	uint8_t i;
 	struct nfp_net_hw *hw;
-	struct rte_pci_device *pci_dev;
 	struct nfp_pf_dev *pf_dev;
+	struct rte_pci_device *pci_dev;
 	struct nfp_app_fw_nic *app_fw_nic;
-	uint8_t i;
 
 	if (rte_eal_process_type() != RTE_PROC_PRIMARY)
 		return 0;
@@ -480,15 +481,15 @@ nfp_net_ethdev_ops_mount(struct nfp_net_hw *hw,
 static int
 nfp_net_init(struct rte_eth_dev *eth_dev)
 {
-	struct rte_pci_device *pci_dev;
+	int err;
+	uint16_t port;
+	uint64_t rx_base;
+	uint64_t tx_base;
+	struct nfp_net_hw *hw;
 	struct nfp_pf_dev *pf_dev;
+	struct rte_pci_device *pci_dev;
 	struct nfp_app_fw_nic *app_fw_nic;
-	struct nfp_net_hw *hw;
 	struct rte_ether_addr *tmp_ether_addr;
-	uint64_t rx_base;
-	uint64_t tx_base;
-	uint16_t port = 0;
-	int err;
 
 	PMD_INIT_FUNC_TRACE();
 
@@ -650,14 +651,14 @@ nfp_fw_upload(struct rte_pci_device *dev,
 		struct nfp_nsp *nsp,
 		char *card)
 {
-	struct nfp_cpp *cpp = nfp_nsp_cpp(nsp);
 	void *fw_buf;
-	char fw_name[125];
-	char serial[40];
 	size_t fsize;
+	char serial[40];
+	char fw_name[125];
 	uint16_t interface;
 	uint32_t cpp_serial_len;
 	const uint8_t *cpp_serial;
+	struct nfp_cpp *cpp = nfp_nsp_cpp(nsp);
 
 	cpp_serial_len = nfp_cpp_serial(cpp, &cpp_serial);
 	if (cpp_serial_len != NFP_SERIAL_LEN)
@@ -713,10 +714,10 @@ nfp_fw_setup(struct rte_pci_device *dev,
 		struct nfp_eth_table *nfp_eth_table,
 		struct nfp_hwinfo *hwinfo)
 {
+	int err;
+	char card_desc[100];
 	struct nfp_nsp *nsp;
 	const char *nfp_fw_model;
-	char card_desc[100];
-	int err = 0;
 
 	nfp_fw_model = nfp_hwinfo_lookup(hwinfo, "nffw.partno");
 	if (nfp_fw_model == NULL)
@@ -897,9 +898,9 @@ nfp_pf_init(struct rte_pci_device *pci_dev)
 	uint64_t addr;
 	uint32_t cpp_id;
 	struct nfp_cpp *cpp;
-	enum nfp_app_fw_id app_fw_id;
 	struct nfp_pf_dev *pf_dev;
 	struct nfp_hwinfo *hwinfo;
+	enum nfp_app_fw_id app_fw_id;
 	char name[RTE_ETH_NAME_MAX_LEN];
 	struct nfp_rtsym_table *sym_tbl;
 	struct nfp_eth_table *nfp_eth_table;
@@ -1220,8 +1221,8 @@ static const struct rte_pci_id pci_id_nfp_pf_net_map[] = {
 static int
 nfp_pci_uninit(struct rte_eth_dev *eth_dev)
 {
-	struct rte_pci_device *pci_dev;
 	uint16_t port_id;
+	struct rte_pci_device *pci_dev;
 
 	pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
 
diff --git a/drivers/net/nfp/nfp_ethdev_vf.c b/drivers/net/nfp/nfp_ethdev_vf.c
index 892300a909..aaef6ea91a 100644
--- a/drivers/net/nfp/nfp_ethdev_vf.c
+++ b/drivers/net/nfp/nfp_ethdev_vf.c
@@ -29,14 +29,15 @@ nfp_netvf_read_mac(struct nfp_net_hw *hw)
 static int
 nfp_netvf_start(struct rte_eth_dev *dev)
 {
-	struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
-	struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
-	uint32_t new_ctrl, update = 0;
+	int ret;
+	uint32_t new_ctrl;
+	uint32_t update = 0;
+	uint32_t intr_vector;
 	struct nfp_net_hw *hw;
 	struct rte_eth_conf *dev_conf;
 	struct rte_eth_rxmode *rxmode;
-	uint32_t intr_vector;
-	int ret;
+	struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
+	struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
 
 	hw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private);
 
@@ -254,15 +255,15 @@ nfp_netvf_ethdev_ops_mount(struct nfp_net_hw *hw,
 static int
 nfp_netvf_init(struct rte_eth_dev *eth_dev)
 {
-	struct rte_pci_device *pci_dev;
-	struct nfp_net_hw *hw;
-	struct rte_ether_addr *tmp_ether_addr;
-
-	uint64_t tx_bar_off = 0, rx_bar_off = 0;
+	int err;
 	uint32_t start_q;
 	uint16_t port = 0;
-	int err;
+	struct nfp_net_hw *hw;
+	uint64_t tx_bar_off = 0;
+	uint64_t rx_bar_off = 0;
+	struct rte_pci_device *pci_dev;
 	const struct nfp_dev_info *dev_info;
+	struct rte_ether_addr *tmp_ether_addr;
 
 	PMD_INIT_FUNC_TRACE();
 
diff --git a/drivers/net/nfp/nfp_flow.c b/drivers/net/nfp/nfp_flow.c
index a254d839ff..476eb0c7f8 100644
--- a/drivers/net/nfp/nfp_flow.c
+++ b/drivers/net/nfp/nfp_flow.c
@@ -728,9 +728,9 @@ nfp_flow_compile_metadata(struct nfp_flow_priv *priv,
 		struct nfp_fl_key_ls *key_layer,
 		uint32_t stats_ctx)
 {
-	struct nfp_fl_rule_metadata *nfp_flow_meta;
-	char *mbuf_off_exact;
 	char *mbuf_off_mask;
+	char *mbuf_off_exact;
+	struct nfp_fl_rule_metadata *nfp_flow_meta;
 
 	/*
 	 * Convert to long words as firmware expects
@@ -941,9 +941,9 @@ nfp_flow_key_layers_calculate_actions(const struct rte_flow_action actions[],
 	int ret = 0;
 	bool meter_flag = false;
 	bool tc_hl_flag = false;
-	bool mac_set_flag = false;
 	bool ip_set_flag = false;
 	bool tp_set_flag = false;
+	bool mac_set_flag = false;
 	bool ttl_tos_flag = false;
 	const struct rte_flow_action *action;
 
@@ -3165,11 +3165,11 @@ nfp_flow_action_geneve_encap_v4(struct nfp_app_fw_flower *app_fw_flower,
 {
 	uint64_t tun_id;
 	const struct rte_ether_hdr *eth;
+	struct nfp_fl_act_pre_tun *pre_tun;
+	struct nfp_fl_act_set_tun *set_tun;
 	const struct rte_flow_item_udp *udp;
 	const struct rte_flow_item_ipv4 *ipv4;
 	const struct rte_flow_item_geneve *geneve;
-	struct nfp_fl_act_pre_tun *pre_tun;
-	struct nfp_fl_act_set_tun *set_tun;
 	size_t act_pre_size = sizeof(struct nfp_fl_act_pre_tun);
 	size_t act_set_size = sizeof(struct nfp_fl_act_set_tun);
 
@@ -3205,11 +3205,11 @@ nfp_flow_action_geneve_encap_v6(struct nfp_app_fw_flower *app_fw_flower,
 	uint8_t tos;
 	uint64_t tun_id;
 	const struct rte_ether_hdr *eth;
+	struct nfp_fl_act_pre_tun *pre_tun;
+	struct nfp_fl_act_set_tun *set_tun;
 	const struct rte_flow_item_udp *udp;
 	const struct rte_flow_item_ipv6 *ipv6;
 	const struct rte_flow_item_geneve *geneve;
-	struct nfp_fl_act_pre_tun *pre_tun;
-	struct nfp_fl_act_set_tun *set_tun;
 	size_t act_pre_size = sizeof(struct nfp_fl_act_pre_tun);
 	size_t act_set_size = sizeof(struct nfp_fl_act_set_tun);
 
@@ -3245,10 +3245,10 @@ nfp_flow_action_nvgre_encap_v4(struct nfp_app_fw_flower *app_fw_flower,
 {
 	uint64_t tun_id;
 	const struct rte_ether_hdr *eth;
-	const struct rte_flow_item_ipv4 *ipv4;
-	const struct rte_flow_item_gre *gre;
 	struct nfp_fl_act_pre_tun *pre_tun;
 	struct nfp_fl_act_set_tun *set_tun;
+	const struct rte_flow_item_gre *gre;
+	const struct rte_flow_item_ipv4 *ipv4;
 	size_t act_pre_size = sizeof(struct nfp_fl_act_pre_tun);
 	size_t act_set_size = sizeof(struct nfp_fl_act_set_tun);
 
@@ -3283,10 +3283,10 @@ nfp_flow_action_nvgre_encap_v6(struct nfp_app_fw_flower *app_fw_flower,
 	uint8_t tos;
 	uint64_t tun_id;
 	const struct rte_ether_hdr *eth;
-	const struct rte_flow_item_ipv6 *ipv6;
-	const struct rte_flow_item_gre *gre;
 	struct nfp_fl_act_pre_tun *pre_tun;
 	struct nfp_fl_act_set_tun *set_tun;
+	const struct rte_flow_item_gre *gre;
+	const struct rte_flow_item_ipv6 *ipv6;
 	size_t act_pre_size = sizeof(struct nfp_fl_act_pre_tun);
 	size_t act_set_size = sizeof(struct nfp_fl_act_set_tun);
 
@@ -3431,12 +3431,12 @@ nfp_flow_compile_action(struct nfp_flower_representor *representor,
 	uint32_t count;
 	char *position;
 	char *action_data;
-	bool ttl_tos_flag = false;
-	bool tc_hl_flag = false;
 	bool drop_flag = false;
+	bool tc_hl_flag = false;
 	bool ip_set_flag = false;
 	bool tp_set_flag = false;
 	bool mac_set_flag = false;
+	bool ttl_tos_flag = false;
 	uint32_t total_actions = 0;
 	const struct rte_flow_action *action;
 	struct nfp_flower_meta_tci *meta_tci;
@@ -4206,10 +4206,10 @@ nfp_flow_priv_init(struct nfp_pf_dev *pf_dev)
 	size_t stats_size;
 	uint64_t ctx_count;
 	uint64_t ctx_split;
+	struct nfp_flow_priv *priv;
 	char mask_name[RTE_HASH_NAMESIZE];
 	char flow_name[RTE_HASH_NAMESIZE];
 	char pretun_name[RTE_HASH_NAMESIZE];
-	struct nfp_flow_priv *priv;
 	struct nfp_app_fw_flower *app_fw_flower;
 	const char *pci_name = strchr(pf_dev->pci_dev->name, ':') + 1;
 
diff --git a/drivers/net/nfp/nfp_rxtx.c b/drivers/net/nfp/nfp_rxtx.c
index 8cbb9b74a2..db6122eac3 100644
--- a/drivers/net/nfp/nfp_rxtx.c
+++ b/drivers/net/nfp/nfp_rxtx.c
@@ -188,9 +188,9 @@ nfp_net_rx_cksum(struct nfp_net_rxq *rxq,
 static int
 nfp_net_rx_fill_freelist(struct nfp_net_rxq *rxq)
 {
-	struct nfp_net_dp_buf *rxe = rxq->rxbufs;
-	uint64_t dma_addr;
 	uint16_t i;
+	uint64_t dma_addr;
+	struct nfp_net_dp_buf *rxe = rxq->rxbufs;
 
 	PMD_RX_LOG(DEBUG, "Fill Rx Freelist for %u descriptors",
 			rxq->rx_count);
@@ -241,17 +241,15 @@ nfp_net_rx_freelist_setup(struct rte_eth_dev *dev)
 uint32_t
 nfp_net_rx_queue_count(void *rx_queue)
 {
+	uint32_t idx;
+	uint32_t count = 0;
 	struct nfp_net_rxq *rxq;
 	struct nfp_net_rx_desc *rxds;
-	uint32_t idx;
-	uint32_t count;
 
 	rxq = rx_queue;
 
 	idx = rxq->rd_p;
 
-	count = 0;
-
 	/*
 	 * Other PMDs are just checking the DD bit in intervals of 4
 	 * descriptors and counting all four if the first has the DD
@@ -282,9 +280,9 @@ nfp_net_parse_chained_meta(uint8_t *meta_base,
 		rte_be32_t meta_header,
 		struct nfp_meta_parsed *meta)
 {
-	uint8_t *meta_offset;
 	uint32_t meta_info;
 	uint32_t vlan_info;
+	uint8_t *meta_offset;
 
 	meta_info = rte_be_to_cpu_32(meta_header);
 	meta_offset = meta_base + 4;
@@ -683,15 +681,15 @@ nfp_net_recv_pkts(void *rx_queue,
 		struct rte_mbuf **rx_pkts,
 		uint16_t nb_pkts)
 {
-	struct nfp_net_rxq *rxq;
-	struct nfp_net_rx_desc *rxds;
-	struct nfp_net_dp_buf *rxb;
-	struct nfp_net_hw *hw;
+	uint64_t dma_addr;
+	uint16_t avail = 0;
 	struct rte_mbuf *mb;
+	uint16_t nb_hold = 0;
+	struct nfp_net_hw *hw;
 	struct rte_mbuf *new_mb;
-	uint16_t nb_hold;
-	uint64_t dma_addr;
-	uint16_t avail;
+	struct nfp_net_rxq *rxq;
+	struct nfp_net_dp_buf *rxb;
+	struct nfp_net_rx_desc *rxds;
 	uint16_t avail_multiplexed = 0;
 
 	rxq = rx_queue;
@@ -706,8 +704,6 @@ nfp_net_recv_pkts(void *rx_queue,
 
 	hw = rxq->hw;
 
-	avail = 0;
-	nb_hold = 0;
 	while (avail + avail_multiplexed < nb_pkts) {
 		rxb = &rxq->rxbufs[rxq->rd_p];
 		if (unlikely(rxb == NULL)) {
@@ -883,12 +879,12 @@ nfp_net_rx_queue_setup(struct rte_eth_dev *dev,
 		const struct rte_eth_rxconf *rx_conf,
 		struct rte_mempool *mp)
 {
+	uint32_t rx_desc_sz;
 	uint16_t min_rx_desc;
 	uint16_t max_rx_desc;
-	const struct rte_memzone *tz;
-	struct nfp_net_rxq *rxq;
 	struct nfp_net_hw *hw;
-	uint32_t rx_desc_sz;
+	struct nfp_net_rxq *rxq;
+	const struct rte_memzone *tz;
 
 	hw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private);
 
@@ -995,8 +991,8 @@ nfp_net_rx_queue_setup(struct rte_eth_dev *dev,
 uint32_t
 nfp_net_tx_free_bufs(struct nfp_net_txq *txq)
 {
-	uint32_t qcp_rd_p;
 	uint32_t todo;
+	uint32_t qcp_rd_p;
 
 	PMD_TX_LOG(DEBUG, "queue %hu. Check for descriptor with a complete"
 			" status", txq->qidx);
@@ -1072,8 +1068,8 @@ nfp_net_set_meta_vlan(struct nfp_net_meta_raw *meta_data,
 		struct rte_mbuf *pkt,
 		uint8_t layer)
 {
-	uint16_t vlan_tci;
 	uint16_t tpid;
+	uint16_t vlan_tci;
 
 	tpid = RTE_ETHER_TYPE_VLAN;
 	vlan_tci = pkt->vlan_tci;
-- 
2.39.1


^ permalink raw reply	[flat|nested] 40+ messages in thread

* [PATCH 05/11] net/nfp: adjust the log statement
  2023-10-07  2:33 [PATCH 00/11] Unify the PMD coding style Chaoyong He
                   ` (3 preceding siblings ...)
  2023-10-07  2:33 ` [PATCH 04/11] net/nfp: standard the local variable coding style Chaoyong He
@ 2023-10-07  2:33 ` Chaoyong He
  2023-10-07  2:33 ` [PATCH 06/11] net/nfp: standard the comment style Chaoyong He
                   ` (6 subsequent siblings)
  11 siblings, 0 replies; 40+ messages in thread
From: Chaoyong He @ 2023-10-07  2:33 UTC (permalink / raw)
  To: dev; +Cc: oss-drivers, Chaoyong He, Long Wu, Peng Zhang

Add log statements to the important control logic, and remove the
verbose info log statements.
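
For example, condensed from one of the nfp_common.c hunks below
(PMD_INIT_LOG and PMD_DRV_LOG are the driver's existing log macros),
an error path that was logged at INFO level through the init logger is
now reported at ERR level through the driver logger:

    /* Before: failure reported as INFO via the init logger */
    if ((hw->cap & NFP_NET_CFG_CTRL_PROMISC) == 0) {
        PMD_INIT_LOG(INFO, "Promiscuous mode not supported");
        return -ENOTSUP;
    }

    /* After: the same failure reported as ERR via the driver logger */
    if ((hw->cap & NFP_NET_CFG_CTRL_PROMISC) == 0) {
        PMD_DRV_LOG(ERR, "Promiscuous mode not supported");
        return -ENOTSUP;
    }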

Signed-off-by: Chaoyong He <chaoyong.he@corigine.com>
Reviewed-by: Long Wu <long.wu@corigine.com>
Reviewed-by: Peng Zhang <peng.zhang@corigine.com>
---
 drivers/net/nfp/flower/nfp_flower_ctrl.c      | 17 +++---
 .../net/nfp/flower/nfp_flower_representor.c   |  4 +-
 drivers/net/nfp/nfd3/nfp_nfd3_dp.c            |  2 -
 drivers/net/nfp/nfdk/nfp_nfdk_dp.c            |  2 -
 drivers/net/nfp/nfp_common.c                  | 59 ++++++++-----------
 drivers/net/nfp/nfp_cpp_bridge.c              | 28 ++++-----
 drivers/net/nfp/nfp_ethdev.c                  | 21 +------
 drivers/net/nfp/nfp_ethdev_vf.c               | 17 +-----
 drivers/net/nfp/nfp_logs.h                    |  1 -
 drivers/net/nfp/nfp_rxtx.c                    | 17 ++----
 10 files changed, 58 insertions(+), 110 deletions(-)

diff --git a/drivers/net/nfp/flower/nfp_flower_ctrl.c b/drivers/net/nfp/flower/nfp_flower_ctrl.c
index 4967cc2375..1f4c5fd7f9 100644
--- a/drivers/net/nfp/flower/nfp_flower_ctrl.c
+++ b/drivers/net/nfp/flower/nfp_flower_ctrl.c
@@ -88,15 +88,14 @@ nfp_flower_ctrl_vnic_recv(void *rx_queue,
 			 * responsibility of avoiding it. But we have
 			 * to give some info about the error
 			 */
-			PMD_RX_LOG(ERR,
-				"mbuf overflow likely due to the RX offset.\n"
-				"\t\tYour mbuf size should have extra space for"
-				" RX offset=%u bytes.\n"
-				"\t\tCurrently you just have %u bytes available"
-				" but the received packet is %u bytes long",
-				hw->rx_offset,
-				rxq->mbuf_size - hw->rx_offset,
-				mb->data_len);
+			PMD_RX_LOG(ERR, "mbuf overflow likely due to the RX offset.\n"
+					"\t\tYour mbuf size should have extra space for"
+					" RX offset=%u bytes.\n"
+					"\t\tCurrently you just have %u bytes available"
+					" but the received packet is %u bytes long",
+					hw->rx_offset,
+					rxq->mbuf_size - hw->rx_offset,
+					mb->data_len);
 			rte_pktmbuf_free(mb);
 			break;
 		}
diff --git a/drivers/net/nfp/flower/nfp_flower_representor.c b/drivers/net/nfp/flower/nfp_flower_representor.c
index 01c2c5a517..be0dfb2890 100644
--- a/drivers/net/nfp/flower/nfp_flower_representor.c
+++ b/drivers/net/nfp/flower/nfp_flower_representor.c
@@ -464,7 +464,7 @@ nfp_flower_repr_rx_burst(void *rx_queue,
 	total_dequeue = rte_ring_dequeue_burst(repr->ring, (void *)rx_pkts,
 			nb_pkts, &available);
 	if (total_dequeue != 0) {
-		PMD_RX_LOG(DEBUG, "Representor Rx burst for %s, port_id: 0x%x, "
+		PMD_RX_LOG(DEBUG, "Representor Rx burst for %s, port_id: %#x, "
 				"received: %u, available: %u", repr->name,
 				repr->port_id, total_dequeue, available);
 
@@ -510,7 +510,7 @@ nfp_flower_repr_tx_burst(void *tx_queue,
 	pf_tx_queue = dev->data->tx_queues[0];
 	sent = nfp_flower_pf_xmit_pkts(pf_tx_queue, tx_pkts, nb_pkts);
 	if (sent != 0) {
-		PMD_TX_LOG(DEBUG, "Representor Tx burst for %s, port_id: 0x%x transmitted: %u",
+		PMD_TX_LOG(DEBUG, "Representor Tx burst for %s, port_id: %#x transmitted: %hu",
 				repr->name, repr->port_id, sent);
 		repr->repr_stats.opackets += sent;
 	}
diff --git a/drivers/net/nfp/nfd3/nfp_nfd3_dp.c b/drivers/net/nfp/nfd3/nfp_nfd3_dp.c
index 699f65ebef..51755f4324 100644
--- a/drivers/net/nfp/nfd3/nfp_nfd3_dp.c
+++ b/drivers/net/nfp/nfd3/nfp_nfd3_dp.c
@@ -381,8 +381,6 @@ nfp_net_nfd3_tx_queue_setup(struct rte_eth_dev *dev,
 
 	hw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private);
 
-	PMD_INIT_FUNC_TRACE();
-
 	nfp_net_tx_desc_limits(hw, &min_tx_desc, &max_tx_desc);
 
 	/* Validating number of descriptors */
diff --git a/drivers/net/nfp/nfdk/nfp_nfdk_dp.c b/drivers/net/nfp/nfdk/nfp_nfdk_dp.c
index 2426ffb261..dae87ac6df 100644
--- a/drivers/net/nfp/nfdk/nfp_nfdk_dp.c
+++ b/drivers/net/nfp/nfdk/nfp_nfdk_dp.c
@@ -455,8 +455,6 @@ nfp_net_nfdk_tx_queue_setup(struct rte_eth_dev *dev,
 
 	hw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private);
 
-	PMD_INIT_FUNC_TRACE();
-
 	nfp_net_tx_desc_limits(hw, &min_tx_desc, &max_tx_desc);
 
 	/* Validating number of descriptors */
diff --git a/drivers/net/nfp/nfp_common.c b/drivers/net/nfp/nfp_common.c
index 18291a1cde..f48e1930dc 100644
--- a/drivers/net/nfp/nfp_common.c
+++ b/drivers/net/nfp/nfp_common.c
@@ -207,7 +207,7 @@ __nfp_net_reconfig(struct nfp_net_hw *hw,
 			hw->qcp_cfg);
 
 	if (hw->qcp_cfg == NULL) {
-		PMD_INIT_LOG(ERR, "Bad configuration queue pointer");
+		PMD_DRV_LOG(ERR, "Bad configuration queue pointer");
 		return -ENXIO;
 	}
 
@@ -224,15 +224,15 @@ __nfp_net_reconfig(struct nfp_net_hw *hw,
 		if (new == 0)
 			break;
 		if ((new & NFP_NET_CFG_UPDATE_ERR) != 0) {
-			PMD_INIT_LOG(ERR, "Reconfig error: 0x%08x", new);
+			PMD_DRV_LOG(ERR, "Reconfig error: %#08x", new);
 			return -1;
 		}
 		if (cnt >= NFP_NET_POLL_TIMEOUT) {
-			PMD_INIT_LOG(ERR, "Reconfig timeout for 0x%08x after"
-					" %ums", update, cnt);
+			PMD_DRV_LOG(ERR, "Reconfig timeout for %#08x after %u ms",
+					update, cnt);
 			return -EIO;
 		}
-		nanosleep(&wait, 0); /* waiting for a 1ms */
+		nanosleep(&wait, 0); /* Waiting for a 1ms */
 	}
 	PMD_DRV_LOG(DEBUG, "Ack DONE");
 	return 0;
@@ -390,8 +390,6 @@ nfp_net_configure(struct rte_eth_dev *dev)
 	 * called after that internal process
 	 */
 
-	PMD_INIT_LOG(DEBUG, "Configure");
-
 	dev_conf = &dev->data->dev_conf;
 	rxmode = &dev_conf->rxmode;
 	txmode = &dev_conf->txmode;
@@ -401,20 +399,20 @@ nfp_net_configure(struct rte_eth_dev *dev)
 
 	/* Checking TX mode */
 	if (txmode->mq_mode != RTE_ETH_MQ_TX_NONE) {
-		PMD_INIT_LOG(INFO, "TX mq_mode DCB and VMDq not supported");
+		PMD_DRV_LOG(ERR, "TX mq_mode DCB and VMDq not supported");
 		return -EINVAL;
 	}
 
 	/* Checking RX mode */
 	if ((rxmode->mq_mode & RTE_ETH_MQ_RX_RSS_FLAG) != 0 &&
 			(hw->cap & NFP_NET_CFG_CTRL_RSS_ANY) == 0) {
-		PMD_INIT_LOG(INFO, "RSS not supported");
+		PMD_DRV_LOG(ERR, "RSS not supported");
 		return -EINVAL;
 	}
 
 	/* Checking MTU set */
 	if (rxmode->mtu > NFP_FRAME_SIZE_MAX) {
-		PMD_INIT_LOG(ERR, "MTU (%u) larger than NFP_FRAME_SIZE_MAX (%u) not supported",
+		PMD_DRV_LOG(ERR, "MTU (%u) larger than NFP_FRAME_SIZE_MAX (%u)",
 				rxmode->mtu, NFP_FRAME_SIZE_MAX);
 		return -ERANGE;
 	}
@@ -552,8 +550,7 @@ nfp_net_set_mac_addr(struct rte_eth_dev *dev,
 	hw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private);
 	if ((hw->ctrl & NFP_NET_CFG_CTRL_ENABLE) != 0 &&
 			(hw->cap & NFP_NET_CFG_CTRL_LIVE_ADDR) == 0) {
-		PMD_INIT_LOG(INFO, "MAC address unable to change when"
-				" port enabled");
+		PMD_DRV_LOG(ERR, "MAC address unable to change when port enabled");
 		return -EBUSY;
 	}
 
@@ -567,7 +564,7 @@ nfp_net_set_mac_addr(struct rte_eth_dev *dev,
 			(hw->cap & NFP_NET_CFG_CTRL_LIVE_ADDR) != 0)
 		ctrl |= NFP_NET_CFG_CTRL_LIVE_ADDR;
 	if (nfp_net_reconfig(hw, ctrl, update) != 0) {
-		PMD_INIT_LOG(INFO, "MAC address update failed");
+		PMD_DRV_LOG(ERR, "MAC address update failed");
 		return -EIO;
 	}
 	return 0;
@@ -582,21 +579,21 @@ nfp_configure_rx_interrupt(struct rte_eth_dev *dev,
 
 	if (rte_intr_vec_list_alloc(intr_handle, "intr_vec",
 				dev->data->nb_rx_queues) != 0) {
-		PMD_INIT_LOG(ERR, "Failed to allocate %d rx_queues"
-				" intr_vec", dev->data->nb_rx_queues);
+		PMD_DRV_LOG(ERR, "Failed to allocate %d rx_queues intr_vec",
+				dev->data->nb_rx_queues);
 		return -ENOMEM;
 	}
 
 	hw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private);
 
 	if (rte_intr_type_get(intr_handle) == RTE_INTR_HANDLE_UIO) {
-		PMD_INIT_LOG(INFO, "VF: enabling RX interrupt with UIO");
+		PMD_DRV_LOG(INFO, "VF: enabling RX interrupt with UIO");
 		/* UIO just supports one queue and no LSC*/
 		nn_cfg_writeb(hw, NFP_NET_CFG_RXR_VEC(0), 0);
 		if (rte_intr_vec_list_index_set(intr_handle, 0, 0) != 0)
 			return -1;
 	} else {
-		PMD_INIT_LOG(INFO, "VF: enabling RX interrupt with VFIO");
+		PMD_DRV_LOG(INFO, "VF: enabling RX interrupt with VFIO");
 		for (i = 0; i < dev->data->nb_rx_queues; i++) {
 			/*
 			 * The first msix vector is reserved for non
@@ -605,8 +602,6 @@ nfp_configure_rx_interrupt(struct rte_eth_dev *dev,
 			nn_cfg_writeb(hw, NFP_NET_CFG_RXR_VEC(i), i + 1);
 			if (rte_intr_vec_list_index_set(intr_handle, i, i + 1) != 0)
 				return -1;
-			PMD_INIT_LOG(DEBUG, "intr_vec[%d]= %d", i,
-					rte_intr_vec_list_index_get(intr_handle, i));
 		}
 	}
 
@@ -691,8 +686,6 @@ nfp_net_promisc_enable(struct rte_eth_dev *dev)
 	struct nfp_net_hw *hw;
 	struct nfp_flower_representor *repr;
 
-	PMD_DRV_LOG(DEBUG, "Promiscuous mode enable");
-
 	if ((dev->data->dev_flags & RTE_ETH_DEV_REPRESENTOR) != 0) {
 		repr = dev->data->dev_private;
 		hw = repr->app_fw_flower->pf_hw;
@@ -701,7 +694,7 @@ nfp_net_promisc_enable(struct rte_eth_dev *dev)
 	}
 
 	if ((hw->cap & NFP_NET_CFG_CTRL_PROMISC) == 0) {
-		PMD_INIT_LOG(INFO, "Promiscuous mode not supported");
+		PMD_DRV_LOG(ERR, "Promiscuous mode not supported");
 		return -ENOTSUP;
 	}
 
@@ -774,9 +767,6 @@ nfp_net_link_update(struct rte_eth_dev *dev,
 	struct rte_eth_link link;
 	struct nfp_eth_table *nfp_eth_table;
 
-
-	PMD_DRV_LOG(DEBUG, "Link update");
-
 	hw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private);
 
 	/* Read link status */
@@ -1636,9 +1626,9 @@ nfp_net_rss_reta_write(struct rte_eth_dev *dev,
 	hw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private);
 
 	if (reta_size != NFP_NET_CFG_RSS_ITBL_SZ) {
-		PMD_DRV_LOG(ERR, "The size of hash lookup table configured "
-				"(%d) doesn't match the number hardware can supported "
-				"(%d)", reta_size, NFP_NET_CFG_RSS_ITBL_SZ);
+		PMD_DRV_LOG(ERR, "The size of hash lookup table configured (%hu)"
+				" doesn't match hardware can supported (%d)",
+				reta_size, NFP_NET_CFG_RSS_ITBL_SZ);
 		return -EINVAL;
 	}
 
@@ -1719,9 +1709,9 @@ nfp_net_reta_query(struct rte_eth_dev *dev,
 		return -EINVAL;
 
 	if (reta_size != NFP_NET_CFG_RSS_ITBL_SZ) {
-		PMD_DRV_LOG(ERR, "The size of hash lookup table configured "
-				"(%d) doesn't match the number hardware can supported "
-				"(%d)", reta_size, NFP_NET_CFG_RSS_ITBL_SZ);
+		PMD_DRV_LOG(ERR, "The size of hash lookup table configured (%d)"
+				" doesn't match hardware can supported (%d)",
+				reta_size, NFP_NET_CFG_RSS_ITBL_SZ);
 		return -EINVAL;
 	}
 
@@ -1827,7 +1817,7 @@ nfp_net_rss_hash_update(struct rte_eth_dev *dev,
 	}
 
 	if (rss_conf->rss_key_len > NFP_NET_CFG_RSS_KEY_SZ) {
-		PMD_DRV_LOG(ERR, "hash key too long");
+		PMD_DRV_LOG(ERR, "RSS hash key too long");
 		return -EINVAL;
 	}
 
@@ -1910,9 +1900,6 @@ nfp_net_rss_config_default(struct rte_eth_dev *dev)
 	uint16_t rx_queues = dev->data->nb_rx_queues;
 	struct rte_eth_rss_reta_entry64 nfp_reta_conf[2];
 
-	PMD_DRV_LOG(INFO, "setting default RSS conf for %u queues",
-			rx_queues);
-
 	nfp_reta_conf[0].mask = ~0x0;
 	nfp_reta_conf[1].mask = ~0x0;
 
@@ -1929,7 +1916,7 @@ nfp_net_rss_config_default(struct rte_eth_dev *dev)
 
 	dev_conf = &dev->data->dev_conf;
 	if (dev_conf == NULL) {
-		PMD_DRV_LOG(INFO, "wrong rss conf");
+		PMD_DRV_LOG(ERR, "Wrong rss conf");
 		return -EINVAL;
 	}
 	rss_conf = dev_conf->rx_adv_conf.rss_conf;
diff --git a/drivers/net/nfp/nfp_cpp_bridge.c b/drivers/net/nfp/nfp_cpp_bridge.c
index 727ec7a7b2..222cfdcbc3 100644
--- a/drivers/net/nfp/nfp_cpp_bridge.c
+++ b/drivers/net/nfp/nfp_cpp_bridge.c
@@ -130,7 +130,7 @@ nfp_cpp_bridge_serve_write(int sockfd,
 	uint32_t tmpbuf[16];
 	struct nfp_cpp_area *area;
 
-	PMD_CPP_LOG(DEBUG, "%s: offset size %zu, count_size: %zu\n", __func__,
+	PMD_CPP_LOG(DEBUG, "%s: offset size %zu, count_size: %zu", __func__,
 			sizeof(off_t), sizeof(size_t));
 
 	/* Reading the count param */
@@ -149,9 +149,9 @@ nfp_cpp_bridge_serve_write(int sockfd,
 	cpp_id = (offset >> 40) << 8;
 	nfp_offset = offset & ((1ull << 40) - 1);
 
-	PMD_CPP_LOG(DEBUG, "%s: count %zu and offset %jd\n", __func__, count,
+	PMD_CPP_LOG(DEBUG, "%s: count %zu and offset %jd", __func__, count,
 			offset);
-	PMD_CPP_LOG(DEBUG, "%s: cpp_id %08x and nfp_offset %jd\n", __func__,
+	PMD_CPP_LOG(DEBUG, "%s: cpp_id %08x and nfp_offset %jd", __func__,
 			cpp_id, nfp_offset);
 
 	/* Adjust length if not aligned */
@@ -162,7 +162,7 @@ nfp_cpp_bridge_serve_write(int sockfd,
 	}
 
 	while (count > 0) {
-		/* configure a CPP PCIe2CPP BAR for mapping the CPP target */
+		/* Configure a CPP PCIe2CPP BAR for mapping the CPP target */
 		area = nfp_cpp_area_alloc_with_name(cpp, cpp_id, "nfp.cdev",
 				nfp_offset, curlen);
 		if (area == NULL) {
@@ -170,7 +170,7 @@ nfp_cpp_bridge_serve_write(int sockfd,
 			return -EIO;
 		}
 
-		/* mapping the target */
+		/* Mapping the target */
 		err = nfp_cpp_area_acquire(area);
 		if (err < 0) {
 			PMD_CPP_LOG(ERR, "area acquire failed");
@@ -183,7 +183,7 @@ nfp_cpp_bridge_serve_write(int sockfd,
 			if (len > sizeof(tmpbuf))
 				len = sizeof(tmpbuf);
 
-			PMD_CPP_LOG(DEBUG, "%s: Receive %u of %zu\n", __func__,
+			PMD_CPP_LOG(DEBUG, "%s: Receive %u of %zu", __func__,
 					len, count);
 			err = recv(sockfd, tmpbuf, len, MSG_WAITALL);
 			if (err != (int)len) {
@@ -235,7 +235,7 @@ nfp_cpp_bridge_serve_read(int sockfd,
 	uint32_t tmpbuf[16];
 	struct nfp_cpp_area *area;
 
-	PMD_CPP_LOG(DEBUG, "%s: offset size %zu, count_size: %zu\n", __func__,
+	PMD_CPP_LOG(DEBUG, "%s: offset size %zu, count_size: %zu", __func__,
 			sizeof(off_t), sizeof(size_t));
 
 	/* Reading the count param */
@@ -254,9 +254,9 @@ nfp_cpp_bridge_serve_read(int sockfd,
 	cpp_id = (offset >> 40) << 8;
 	nfp_offset = offset & ((1ull << 40) - 1);
 
-	PMD_CPP_LOG(DEBUG, "%s: count %zu and offset %jd\n", __func__, count,
+	PMD_CPP_LOG(DEBUG, "%s: count %zu and offset %jd", __func__, count,
 			offset);
-	PMD_CPP_LOG(DEBUG, "%s: cpp_id %08x and nfp_offset %jd\n", __func__,
+	PMD_CPP_LOG(DEBUG, "%s: cpp_id %08x and nfp_offset %jd", __func__,
 			cpp_id, nfp_offset);
 
 	/* Adjust length if not aligned */
@@ -293,7 +293,7 @@ nfp_cpp_bridge_serve_read(int sockfd,
 				nfp_cpp_area_free(area);
 				return -EIO;
 			}
-			PMD_CPP_LOG(DEBUG, "%s: sending %u of %zu\n", __func__,
+			PMD_CPP_LOG(DEBUG, "%s: sending %u of %zu", __func__,
 					len, count);
 
 			err = send(sockfd, tmpbuf, len, 0);
@@ -353,7 +353,7 @@ nfp_cpp_bridge_serve_ioctl(int sockfd,
 
 	tmp = nfp_cpp_model(cpp);
 
-	PMD_CPP_LOG(DEBUG, "%s: sending NFP model %08x\n", __func__, tmp);
+	PMD_CPP_LOG(DEBUG, "%s: sending NFP model %08x", __func__, tmp);
 
 	err = send(sockfd, &tmp, 4, 0);
 	if (err != 4) {
@@ -363,7 +363,7 @@ nfp_cpp_bridge_serve_ioctl(int sockfd,
 
 	tmp = nfp_cpp_interface(cpp);
 
-	PMD_CPP_LOG(DEBUG, "%s: sending NFP interface %08x\n", __func__, tmp);
+	PMD_CPP_LOG(DEBUG, "%s: sending NFP interface %08x", __func__, tmp);
 
 	err = send(sockfd, &tmp, 4, 0);
 	if (err != 4) {
@@ -440,11 +440,11 @@ nfp_cpp_bridge_service_func(void *args)
 		while (1) {
 			ret = recv(datafd, &op, 4, 0);
 			if (ret <= 0) {
-				PMD_CPP_LOG(DEBUG, "%s: socket close\n", __func__);
+				PMD_CPP_LOG(DEBUG, "%s: socket close", __func__);
 				break;
 			}
 
-			PMD_CPP_LOG(DEBUG, "%s: getting op %u\n", __func__, op);
+			PMD_CPP_LOG(DEBUG, "%s: getting op %u", __func__, op);
 
 			if (op == NFP_BRIDGE_OP_READ)
 				nfp_cpp_bridge_serve_read(datafd, cpp);
diff --git a/drivers/net/nfp/nfp_ethdev.c b/drivers/net/nfp/nfp_ethdev.c
index 7d149decfb..72abc4c16e 100644
--- a/drivers/net/nfp/nfp_ethdev.c
+++ b/drivers/net/nfp/nfp_ethdev.c
@@ -60,8 +60,6 @@ nfp_net_start(struct rte_eth_dev *dev)
 	pf_dev = NFP_NET_DEV_PRIVATE_TO_PF(dev->data->dev_private);
 	app_fw_nic = NFP_PRIV_TO_APP_FW_NIC(pf_dev->app_fw_priv);
 
-	PMD_INIT_LOG(DEBUG, "Start");
-
 	/* Disabling queues just in case... */
 	nfp_net_disable_queues(dev);
 
@@ -194,8 +192,6 @@ nfp_net_stop(struct rte_eth_dev *dev)
 {
 	struct nfp_net_hw *hw;
 
-	PMD_INIT_LOG(DEBUG, "Stop");
-
 	hw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private);
 
 	nfp_net_disable_queues(dev);
@@ -220,8 +216,6 @@ nfp_net_set_link_up(struct rte_eth_dev *dev)
 {
 	struct nfp_net_hw *hw;
 
-	PMD_DRV_LOG(DEBUG, "Set link up");
-
 	hw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private);
 
 	if (rte_eal_process_type() == RTE_PROC_PRIMARY)
@@ -237,8 +231,6 @@ nfp_net_set_link_down(struct rte_eth_dev *dev)
 {
 	struct nfp_net_hw *hw;
 
-	PMD_DRV_LOG(DEBUG, "Set link down");
-
 	hw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private);
 
 	if (rte_eal_process_type() == RTE_PROC_PRIMARY)
@@ -261,8 +253,6 @@ nfp_net_close(struct rte_eth_dev *dev)
 	if (rte_eal_process_type() != RTE_PROC_PRIMARY)
 		return 0;
 
-	PMD_INIT_LOG(DEBUG, "Close");
-
 	pf_dev = NFP_NET_DEV_PRIVATE_TO_PF(dev->data->dev_private);
 	hw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private);
 	pci_dev = RTE_ETH_DEV_TO_PCI(dev);
@@ -491,8 +481,6 @@ nfp_net_init(struct rte_eth_dev *eth_dev)
 	struct nfp_app_fw_nic *app_fw_nic;
 	struct rte_ether_addr *tmp_ether_addr;
 
-	PMD_INIT_FUNC_TRACE();
-
 	pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
 
 	/* Use backpointer here to the PF of this eth_dev */
@@ -513,7 +501,7 @@ nfp_net_init(struct rte_eth_dev *eth_dev)
 	 */
 	hw = app_fw_nic->ports[port];
 
-	PMD_INIT_LOG(DEBUG, "Working with physical port number: %d, "
+	PMD_INIT_LOG(DEBUG, "Working with physical port number: %hu, "
 			"NFP internal port number: %d", port, hw->nfp_idx);
 
 	rte_eth_copy_pci_info(eth_dev, pci_dev);
@@ -579,9 +567,6 @@ nfp_net_init(struct rte_eth_dev *eth_dev)
 	tx_base = nn_cfg_readl(hw, NFP_NET_CFG_START_TXQ);
 	rx_base = nn_cfg_readl(hw, NFP_NET_CFG_START_RXQ);
 
-	PMD_INIT_LOG(DEBUG, "tx_base: 0x%" PRIx64 "", tx_base);
-	PMD_INIT_LOG(DEBUG, "rx_base: 0x%" PRIx64 "", rx_base);
-
 	hw->tx_bar = pf_dev->qc_bar + tx_base * NFP_QCP_QUEUE_ADDR_SZ;
 	hw->rx_bar = pf_dev->qc_bar + rx_base * NFP_QCP_QUEUE_ADDR_SZ;
 	eth_dev->data->dev_private = hw;
@@ -627,7 +612,7 @@ nfp_net_init(struct rte_eth_dev *eth_dev)
 
 	eth_dev->data->dev_flags |= RTE_ETH_DEV_AUTOFILL_QUEUE_XSTATS;
 
-	PMD_INIT_LOG(INFO, "port %d VendorID=0x%x DeviceID=0x%x "
+	PMD_INIT_LOG(INFO, "port %d VendorID=%#x DeviceID=%#x "
 			"mac=" RTE_ETHER_ADDR_PRT_FMT,
 			eth_dev->data->port_id, pci_dev->id.vendor_id,
 			pci_dev->id.device_id,
@@ -997,7 +982,7 @@ nfp_pf_init(struct rte_pci_device *pci_dev)
 		goto pf_cleanup;
 	}
 
-	PMD_INIT_LOG(DEBUG, "qc_bar address: 0x%p", pf_dev->qc_bar);
+	PMD_INIT_LOG(DEBUG, "qc_bar address: %p", pf_dev->qc_bar);
 
 	/*
 	 * PF initialization has been done at this point. Call app specific
diff --git a/drivers/net/nfp/nfp_ethdev_vf.c b/drivers/net/nfp/nfp_ethdev_vf.c
index aaef6ea91a..d3c3c9e953 100644
--- a/drivers/net/nfp/nfp_ethdev_vf.c
+++ b/drivers/net/nfp/nfp_ethdev_vf.c
@@ -41,8 +41,6 @@ nfp_netvf_start(struct rte_eth_dev *dev)
 
 	hw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private);
 
-	PMD_INIT_LOG(DEBUG, "Start");
-
 	/* Disabling queues just in case... */
 	nfp_net_disable_queues(dev);
 
@@ -136,8 +134,6 @@ nfp_netvf_start(struct rte_eth_dev *dev)
 static int
 nfp_netvf_stop(struct rte_eth_dev *dev)
 {
-	PMD_INIT_LOG(DEBUG, "Stop");
-
 	nfp_net_disable_queues(dev);
 
 	/* Clear queues */
@@ -170,8 +166,6 @@ nfp_netvf_close(struct rte_eth_dev *dev)
 	if (rte_eal_process_type() != RTE_PROC_PRIMARY)
 		return 0;
 
-	PMD_INIT_LOG(DEBUG, "Close");
-
 	pci_dev = RTE_ETH_DEV_TO_PCI(dev);
 
 	/*
@@ -265,8 +259,6 @@ nfp_netvf_init(struct rte_eth_dev *eth_dev)
 	const struct nfp_dev_info *dev_info;
 	struct rte_ether_addr *tmp_ether_addr;
 
-	PMD_INIT_FUNC_TRACE();
-
 	pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
 
 	dev_info = nfp_dev_info_get(pci_dev->id.device_id);
@@ -301,7 +293,7 @@ nfp_netvf_init(struct rte_eth_dev *eth_dev)
 	hw->eth_xstats_base = rte_malloc("rte_eth_xstat",
 			sizeof(struct rte_eth_xstat) * nfp_net_xstats_size(eth_dev), 0);
 	if (hw->eth_xstats_base == NULL) {
-		PMD_INIT_LOG(ERR, "no memory for xstats base values on device %s!",
+		PMD_INIT_LOG(ERR, "No memory for xstats base values on device %s!",
 				pci_dev->device.name);
 		return -ENOMEM;
 	}
@@ -312,9 +304,6 @@ nfp_netvf_init(struct rte_eth_dev *eth_dev)
 	start_q = nn_cfg_readl(hw, NFP_NET_CFG_START_RXQ);
 	rx_bar_off = nfp_qcp_queue_offset(dev_info, start_q);
 
-	PMD_INIT_LOG(DEBUG, "tx_bar_off: 0x%" PRIx64 "", tx_bar_off);
-	PMD_INIT_LOG(DEBUG, "rx_bar_off: 0x%" PRIx64 "", rx_bar_off);
-
 	hw->tx_bar = (uint8_t *)pci_dev->mem_resource[2].addr + tx_bar_off;
 	hw->rx_bar = (uint8_t *)pci_dev->mem_resource[2].addr + rx_bar_off;
 
@@ -345,7 +334,7 @@ nfp_netvf_init(struct rte_eth_dev *eth_dev)
 
 	tmp_ether_addr = &hw->mac_addr;
 	if (rte_is_valid_assigned_ether_addr(tmp_ether_addr) == 0) {
-		PMD_INIT_LOG(INFO, "Using random mac address for port %d", port);
+		PMD_INIT_LOG(INFO, "Using random mac address for port %hu", port);
 		/* Using random mac addresses for VFs */
 		rte_eth_random_addr(&hw->mac_addr.addr_bytes[0]);
 		nfp_net_write_mac(hw, &hw->mac_addr.addr_bytes[0]);
@@ -359,7 +348,7 @@ nfp_netvf_init(struct rte_eth_dev *eth_dev)
 
 	eth_dev->data->dev_flags |= RTE_ETH_DEV_AUTOFILL_QUEUE_XSTATS;
 
-	PMD_INIT_LOG(INFO, "port %d VendorID=0x%x DeviceID=0x%x "
+	PMD_INIT_LOG(INFO, "port %hu VendorID=%#x DeviceID=%#x "
 			"mac=" RTE_ETHER_ADDR_PRT_FMT,
 			eth_dev->data->port_id, pci_dev->id.vendor_id,
 			pci_dev->id.device_id,
diff --git a/drivers/net/nfp/nfp_logs.h b/drivers/net/nfp/nfp_logs.h
index 315a57811c..16ff61700b 100644
--- a/drivers/net/nfp/nfp_logs.h
+++ b/drivers/net/nfp/nfp_logs.h
@@ -12,7 +12,6 @@ extern int nfp_logtype_init;
 #define PMD_INIT_LOG(level, fmt, args...) \
 	rte_log(RTE_LOG_ ## level, nfp_logtype_init, \
 		"%s(): " fmt "\n", __func__, ## args)
-#define PMD_INIT_FUNC_TRACE() PMD_INIT_LOG(DEBUG, " >>")
 
 #ifdef RTE_ETHDEV_DEBUG_RX
 extern int nfp_logtype_rx;
diff --git a/drivers/net/nfp/nfp_rxtx.c b/drivers/net/nfp/nfp_rxtx.c
index db6122eac3..5bfdfd28b3 100644
--- a/drivers/net/nfp/nfp_rxtx.c
+++ b/drivers/net/nfp/nfp_rxtx.c
@@ -192,7 +192,7 @@ nfp_net_rx_fill_freelist(struct nfp_net_rxq *rxq)
 	uint64_t dma_addr;
 	struct nfp_net_dp_buf *rxe = rxq->rxbufs;
 
-	PMD_RX_LOG(DEBUG, "Fill Rx Freelist for %u descriptors",
+	PMD_RX_LOG(DEBUG, "Fill Rx Freelist for %hu descriptors",
 			rxq->rx_count);
 
 	for (i = 0; i < rxq->rx_count; i++) {
@@ -212,14 +212,13 @@ nfp_net_rx_fill_freelist(struct nfp_net_rxq *rxq)
 		rxd->fld.dma_addr_hi = (dma_addr >> 32) & 0xffff;
 		rxd->fld.dma_addr_lo = dma_addr & 0xffffffff;
 		rxe[i].mbuf = mbuf;
-		PMD_RX_LOG(DEBUG, "[%d]: %" PRIx64, i, dma_addr);
 	}
 
 	/* Make sure all writes are flushed before telling the hardware */
 	rte_wmb();
 
 	/* Not advertising the whole ring as the firmware gets confused if so */
-	PMD_RX_LOG(DEBUG, "Increment FL write pointer in %u", rxq->rx_count - 1);
+	PMD_RX_LOG(DEBUG, "Increment FL write pointer in %hu", rxq->rx_count - 1);
 
 	nfp_qcp_ptr_add(rxq->qcp_fl, NFP_QCP_WRITE_PTR, rxq->rx_count - 1);
 
@@ -432,7 +431,7 @@ nfp_net_parse_meta_qinq(const struct nfp_meta_parsed *meta,
 	if (meta->vlan[0].offload == 0)
 		mb->vlan_tci = rte_cpu_to_le_16(meta->vlan[0].tci);
 	mb->vlan_tci_outer = rte_cpu_to_le_16(meta->vlan[1].tci);
-	PMD_RX_LOG(DEBUG, "Received outer vlan is %u inter vlan is %u",
+	PMD_RX_LOG(DEBUG, "Received outer vlan TCI is %u, inner vlan TCI is %u",
 			mb->vlan_tci_outer, mb->vlan_tci);
 	mb->ol_flags |= RTE_MBUF_F_RX_QINQ | RTE_MBUF_F_RX_QINQ_STRIPPED;
 }
@@ -754,12 +753,11 @@ nfp_net_recv_pkts(void *rx_queue,
 			 * responsibility of avoiding it. But we have
 			 * to give some info about the error
 			 */
-			PMD_RX_LOG(ERR,
-					"mbuf overflow likely due to the RX offset.\n"
+			PMD_RX_LOG(ERR, "mbuf overflow likely due to the RX offset.\n"
 					"\t\tYour mbuf size should have extra space for"
 					" RX offset=%u bytes.\n"
 					"\t\tCurrently you just have %u bytes available"
-					" but the received packet is %u bytes long",
+					" but the received packet is %hu bytes long",
 					hw->rx_offset,
 					rxq->mbuf_size - hw->rx_offset,
 					mb->data_len);
@@ -888,8 +886,6 @@ nfp_net_rx_queue_setup(struct rte_eth_dev *dev,
 
 	hw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private);
 
-	PMD_INIT_FUNC_TRACE();
-
 	nfp_net_rx_desc_limits(hw, &min_rx_desc, &max_rx_desc);
 
 	/* Validating number of descriptors */
@@ -965,9 +961,6 @@ nfp_net_rx_queue_setup(struct rte_eth_dev *dev,
 		return -ENOMEM;
 	}
 
-	PMD_RX_LOG(DEBUG, "rxbufs=%p hw_ring=%p dma_addr=0x%" PRIx64,
-			rxq->rxbufs, rxq->rxds, (unsigned long)rxq->dma);
-
 	nfp_net_reset_rx_queue(rxq);
 
 	rxq->hw = hw;
-- 
2.39.1


^ permalink raw reply	[flat|nested] 40+ messages in thread

* [PATCH 06/11] net/nfp: standard the comment style
  2023-10-07  2:33 [PATCH 00/11] Unify the PMD coding style Chaoyong He
                   ` (4 preceding siblings ...)
  2023-10-07  2:33 ` [PATCH 05/11] net/nfp: adjust the log statement Chaoyong He
@ 2023-10-07  2:33 ` Chaoyong He
  2023-10-07  2:33 ` [PATCH 07/11] net/nfp: standard the blank character Chaoyong He
                   ` (5 subsequent siblings)
  11 siblings, 0 replies; 40+ messages in thread
From: Chaoyong He @ 2023-10-07  2:33 UTC (permalink / raw)
  To: dev; +Cc: oss-drivers, Chaoyong He, Long Wu, Peng Zhang

Follow the DPDK coding style and use the kdoc comment style.
Also delete some comments which are no longer valid and add some
comments to help understand the logic.
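
As a minimal illustration (hypothetical names, not code taken from the
driver), these are the conventions the series converges on: kdoc
"/** ... */" blocks with "@param" tags for function comments, trailing
"/**< ... */" markers for struct member comments, and plain "/* ... */"
for implementation notes:

/**
 * Check whether the example TX queue has room for one more packet.
 *
 * @param txq
 *   TX queue to check
 */
static inline bool example_txq_has_room(struct example_txq *txq);

struct example_txq {
	uint32_t wr_p;    /**< Host write pointer */
	uint32_t rd_p;    /**< Host read pointer */
};

/* Plain note inside a function body: wrap the pointer at ring end. */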

Signed-off-by: Chaoyong He <chaoyong.he@corigine.com>
Reviewed-by: Long Wu <long.wu@corigine.com>
Reviewed-by: Peng Zhang <peng.zhang@corigine.com>
---
 drivers/net/nfp/flower/nfp_flower.h           |  28 ++--
 drivers/net/nfp/flower/nfp_flower_cmsg.c      |   2 +-
 drivers/net/nfp/flower/nfp_flower_cmsg.h      |  56 +++----
 drivers/net/nfp/flower/nfp_flower_ctrl.c      |   6 +-
 .../net/nfp/flower/nfp_flower_representor.c   |  32 ++--
 .../net/nfp/flower/nfp_flower_representor.h   |   2 +-
 drivers/net/nfp/nfd3/nfp_nfd3.h               |  33 ++--
 drivers/net/nfp/nfd3/nfp_nfd3_dp.c            |  16 +-
 drivers/net/nfp/nfdk/nfp_nfdk.h               |  41 ++---
 drivers/net/nfp/nfdk/nfp_nfdk_dp.c            |   6 +-
 drivers/net/nfp/nfp_common.c                  | 142 ++++++++----------
 drivers/net/nfp/nfp_common.h                  |  59 ++++----
 drivers/net/nfp/nfp_cpp_bridge.c              |   2 -
 drivers/net/nfp/nfp_ctrl.h                    |  22 +--
 drivers/net/nfp/nfp_ethdev.c                  |  22 ++-
 drivers/net/nfp/nfp_ethdev_vf.c               |  11 +-
 drivers/net/nfp/nfp_flow.c                    |  44 +++---
 drivers/net/nfp/nfp_flow.h                    |  10 +-
 drivers/net/nfp/nfp_rxtx.c                    | 109 +++++---------
 drivers/net/nfp/nfp_rxtx.h                    |  18 +--
 20 files changed, 284 insertions(+), 377 deletions(-)

diff --git a/drivers/net/nfp/flower/nfp_flower.h b/drivers/net/nfp/flower/nfp_flower.h
index 244b6daa37..0b4e38cedd 100644
--- a/drivers/net/nfp/flower/nfp_flower.h
+++ b/drivers/net/nfp/flower/nfp_flower.h
@@ -53,49 +53,49 @@ struct nfp_flower_nfd_func {
 
 /* The flower application's private structure */
 struct nfp_app_fw_flower {
-	/* switch domain for this app */
+	/** Switch domain for this app */
 	uint16_t switch_domain_id;
 
-	/* Number of VF representors */
+	/** Number of VF representors */
 	uint8_t num_vf_reprs;
 
-	/* Number of phyport representors */
+	/** Number of phyport representors */
 	uint8_t num_phyport_reprs;
 
-	/* Pointer to the PF vNIC */
+	/** Pointer to the PF vNIC */
 	struct nfp_net_hw *pf_hw;
 
-	/* Pointer to a mempool for the ctrlvNIC */
+	/** Pointer to a mempool for the Ctrl vNIC */
 	struct rte_mempool *ctrl_pktmbuf_pool;
 
-	/* Pointer to the ctrl vNIC */
+	/** Pointer to the ctrl vNIC */
 	struct nfp_net_hw *ctrl_hw;
 
-	/* Ctrl vNIC Rx counter */
+	/** Ctrl vNIC Rx counter */
 	uint64_t ctrl_vnic_rx_count;
 
-	/* Ctrl vNIC Tx counter */
+	/** Ctrl vNIC Tx counter */
 	uint64_t ctrl_vnic_tx_count;
 
-	/* Array of phyport representors */
+	/** Array of phyport representors */
 	struct nfp_flower_representor *phy_reprs[MAX_FLOWER_PHYPORTS];
 
-	/* Array of VF representors */
+	/** Array of VF representors */
 	struct nfp_flower_representor *vf_reprs[MAX_FLOWER_VFS];
 
-	/* PF representor */
+	/** PF representor */
 	struct nfp_flower_representor *pf_repr;
 
-	/* service id of ctrl vnic service */
+	/** Service id of Ctrl vNIC service */
 	uint32_t ctrl_vnic_id;
 
-	/* Flower extra features */
+	/** Flower extra features */
 	uint64_t ext_features;
 
 	struct nfp_flow_priv *flow_priv;
 	struct nfp_mtr_priv *mtr_priv;
 
-	/* Function pointers for different NFD version */
+	/** Function pointers for different NFD version */
 	struct nfp_flower_nfd_func nfd_func;
 };
 
diff --git a/drivers/net/nfp/flower/nfp_flower_cmsg.c b/drivers/net/nfp/flower/nfp_flower_cmsg.c
index 5d6912b079..2ec9498d22 100644
--- a/drivers/net/nfp/flower/nfp_flower_cmsg.c
+++ b/drivers/net/nfp/flower/nfp_flower_cmsg.c
@@ -230,7 +230,7 @@ nfp_flower_cmsg_flow_add(struct nfp_app_fw_flower *app_fw_flower,
 		return -ENOMEM;
 	}
 
-	/* copy the flow to mbuf */
+	/* Copy the flow to mbuf */
 	nfp_flow_meta = flow->payload.meta;
 	msg_len = (nfp_flow_meta->key_len + nfp_flow_meta->mask_len +
 			nfp_flow_meta->act_len) << NFP_FL_LW_SIZ;
diff --git a/drivers/net/nfp/flower/nfp_flower_cmsg.h b/drivers/net/nfp/flower/nfp_flower_cmsg.h
index 9449760145..cb019171b6 100644
--- a/drivers/net/nfp/flower/nfp_flower_cmsg.h
+++ b/drivers/net/nfp/flower/nfp_flower_cmsg.h
@@ -348,7 +348,7 @@ struct nfp_flower_stats_frame {
 	rte_be64_t stats_cookie;
 };
 
-/**
+/*
  * See RFC 2698 for more details.
  * Word[0](Flag options):
  * [15] p(pps) 1 for pps, 0 for bps
@@ -378,40 +378,24 @@ struct nfp_cfg_head {
 	rte_be32_t profile_id;
 };
 
-/**
- * Struct nfp_profile_conf - profile config, offload to NIC
- * @head:        config head information
- * @bkt_tkn_p:   token bucket peak
- * @bkt_tkn_c:   token bucket committed
- * @pbs:         peak burst size
- * @cbs:         committed burst size
- * @pir:         peak information rate
- * @cir:         committed information rate
- */
+/* Profile config, offload to NIC */
 struct nfp_profile_conf {
-	struct nfp_cfg_head head;
-	rte_be32_t bkt_tkn_p;
-	rte_be32_t bkt_tkn_c;
-	rte_be32_t pbs;
-	rte_be32_t cbs;
-	rte_be32_t pir;
-	rte_be32_t cir;
-};
-
-/**
- * Struct nfp_mtr_stats_reply - meter stats, read from firmware
- * @head:          config head information
- * @pass_bytes:    count of passed bytes
- * @pass_pkts:     count of passed packets
- * @drop_bytes:    count of dropped bytes
- * @drop_pkts:     count of dropped packets
- */
+	struct nfp_cfg_head head;    /**< Config head information */
+	rte_be32_t bkt_tkn_p;        /**< Token bucket peak */
+	rte_be32_t bkt_tkn_c;        /**< Token bucket committed */
+	rte_be32_t pbs;              /**< Peak burst size */
+	rte_be32_t cbs;              /**< Committed burst size */
+	rte_be32_t pir;              /**< Peak information rate */
+	rte_be32_t cir;              /**< Committed information rate */
+};
+
+/* Meter stats, read from firmware */
 struct nfp_mtr_stats_reply {
-	struct nfp_cfg_head head;
-	rte_be64_t pass_bytes;
-	rte_be64_t pass_pkts;
-	rte_be64_t drop_bytes;
-	rte_be64_t drop_pkts;
+	struct nfp_cfg_head head;    /**< Config head information */
+	rte_be64_t pass_bytes;       /**< Count of passed bytes */
+	rte_be64_t pass_pkts;        /**< Count of passed packets */
+	rte_be64_t drop_bytes;       /**< Count of dropped bytes */
+	rte_be64_t drop_pkts;        /**< Count of dropped packets */
 };
 
 enum nfp_flower_cmsg_port_type {
@@ -851,7 +835,7 @@ struct nfp_fl_act_set_ipv6_addr {
 };
 
 /*
- * ipv6 tc hl fl
+ * IPv6 tc hl fl
  *    3                   2                   1
  *  1 0 9 8 7 6 5 4 3 2 1 0 9 8 7 6 5 4 3 2 1 0 9 8 7 6 5 4 3 2 1 0
  * +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
@@ -954,9 +938,9 @@ struct nfp_fl_act_set_tun {
 	uint8_t    tos;
 	rte_be16_t outer_vlan_tpid;
 	rte_be16_t outer_vlan_tci;
-	uint8_t    tun_len;      /* Only valid for NFP_FL_TUNNEL_GENEVE */
+	uint8_t    tun_len;      /**< Only valid for NFP_FL_TUNNEL_GENEVE */
 	uint8_t    reserved2;
-	rte_be16_t tun_proto;    /* Only valid for NFP_FL_TUNNEL_GENEVE */
+	rte_be16_t tun_proto;    /**< Only valid for NFP_FL_TUNNEL_GENEVE */
 } __rte_packed;
 
 /*
diff --git a/drivers/net/nfp/flower/nfp_flower_ctrl.c b/drivers/net/nfp/flower/nfp_flower_ctrl.c
index 1f4c5fd7f9..d27579d2d8 100644
--- a/drivers/net/nfp/flower/nfp_flower_ctrl.c
+++ b/drivers/net/nfp/flower/nfp_flower_ctrl.c
@@ -123,7 +123,7 @@ nfp_flower_ctrl_vnic_recv(void *rx_queue,
 		nb_hold++;
 
 		rxq->rd_p++;
-		if (unlikely(rxq->rd_p == rxq->rx_count)) /* wrapping?*/
+		if (unlikely(rxq->rd_p == rxq->rx_count)) /* Wrapping */
 			rxq->rd_p = 0;
 	}
 
@@ -206,7 +206,7 @@ nfp_flower_ctrl_vnic_nfd3_xmit(struct nfp_app_fw_flower *app_fw_flower,
 	txds->offset_eop = FLOWER_PKT_DATA_OFFSET | NFD3_DESC_TX_EOP;
 
 	txq->wr_p++;
-	if (unlikely(txq->wr_p == txq->tx_count)) /* wrapping?*/
+	if (unlikely(txq->wr_p == txq->tx_count)) /* Wrapping */
 		txq->wr_p = 0;
 
 	cnt++;
@@ -520,7 +520,7 @@ nfp_flower_ctrl_vnic_poll(struct nfp_app_fw_flower *app_fw_flower)
 	ctrl_hw = app_fw_flower->ctrl_hw;
 	ctrl_eth_dev = ctrl_hw->eth_dev;
 
-	/* ctrl vNIC only has a single Rx queue */
+	/* Ctrl vNIC only has a single Rx queue */
 	rxq = ctrl_eth_dev->data->rx_queues[0];
 
 	while (rte_service_runstate_get(app_fw_flower->ctrl_vnic_id) != 0) {
diff --git a/drivers/net/nfp/flower/nfp_flower_representor.c b/drivers/net/nfp/flower/nfp_flower_representor.c
index be0dfb2890..9a9a66e4b0 100644
--- a/drivers/net/nfp/flower/nfp_flower_representor.c
+++ b/drivers/net/nfp/flower/nfp_flower_representor.c
@@ -10,18 +10,12 @@
 #include "../nfp_logs.h"
 #include "../nfp_mtr.h"
 
-/*
- * enum nfp_repr_type - type of representor
- * @NFP_REPR_TYPE_PHYS_PORT:   external NIC port
- * @NFP_REPR_TYPE_PF:          physical function
- * @NFP_REPR_TYPE_VF:          virtual function
- * @NFP_REPR_TYPE_MAX:         number of representor types
- */
+/* Type of representor */
 enum nfp_repr_type {
-	NFP_REPR_TYPE_PHYS_PORT,
-	NFP_REPR_TYPE_PF,
-	NFP_REPR_TYPE_VF,
-	NFP_REPR_TYPE_MAX,
+	NFP_REPR_TYPE_PHYS_PORT,    /**< External NIC port */
+	NFP_REPR_TYPE_PF,           /**< Physical function */
+	NFP_REPR_TYPE_VF,           /**< Virtual function */
+	NFP_REPR_TYPE_MAX,          /**< Number of representor types */
 };
 
 static int
@@ -86,7 +80,7 @@ nfp_pf_repr_rx_queue_setup(struct rte_eth_dev *dev,
 	rxq->dma = (uint64_t)tz->iova;
 	rxq->rxds = tz->addr;
 
-	/* mbuf pointers array for referencing mbufs linked to RX descriptors */
+	/* Mbuf pointers array for referencing mbufs linked to RX descriptors */
 	rxq->rxbufs = rte_zmalloc_socket("rxq->rxbufs",
 			sizeof(*rxq->rxbufs) * nb_desc,
 			RTE_CACHE_LINE_SIZE, socket_id);
@@ -159,7 +153,7 @@ nfp_pf_repr_tx_queue_setup(struct rte_eth_dev *dev,
 	txq->tx_count = nb_desc;
 	txq->tx_free_thresh = tx_free_thresh;
 
-	/* queue mapping based on firmware configuration */
+	/* Queue mapping based on firmware configuration */
 	txq->qidx = queue_idx;
 	txq->tx_qcidx = queue_idx * hw->stride_tx;
 	txq->qcp_q = hw->tx_bar + NFP_QCP_QUEUE_OFF(txq->tx_qcidx);
@@ -170,7 +164,7 @@ nfp_pf_repr_tx_queue_setup(struct rte_eth_dev *dev,
 	txq->dma = (uint64_t)tz->iova;
 	txq->txds = tz->addr;
 
-	/* mbuf pointers array for referencing mbufs linked to TX descriptors */
+	/* Mbuf pointers array for referencing mbufs linked to TX descriptors */
 	txq->txbufs = rte_zmalloc_socket("txq->txbufs",
 			sizeof(*txq->txbufs) * nb_desc,
 			RTE_CACHE_LINE_SIZE, socket_id);
@@ -185,7 +179,7 @@ nfp_pf_repr_tx_queue_setup(struct rte_eth_dev *dev,
 
 	/*
 	 * Telling the HW about the physical address of the TX ring and number
-	 * of descriptors in log2 format
+	 * of descriptors in log2 format.
 	 */
 	nn_cfg_writeq(hw, NFP_NET_CFG_TXR_ADDR(queue_idx), txq->dma);
 	nn_cfg_writeb(hw, NFP_NET_CFG_TXR_SZ(queue_idx), rte_log2_u32(nb_desc));
@@ -603,7 +597,7 @@ nfp_flower_pf_repr_init(struct rte_eth_dev *eth_dev,
 	/* Memory has been allocated in the eth_dev_create() function */
 	repr = eth_dev->data->dev_private;
 
-	/* Copy data here from the input representor template*/
+	/* Copy data here from the input representor template */
 	repr->vf_id            = init_repr_data->vf_id;
 	repr->switch_domain_id = init_repr_data->switch_domain_id;
 	repr->repr_type        = init_repr_data->repr_type;
@@ -672,7 +666,7 @@ nfp_flower_repr_init(struct rte_eth_dev *eth_dev,
 		return -ENOMEM;
 	}
 
-	/* Copy data here from the input representor template*/
+	/* Copy data here from the input representor template */
 	repr->vf_id            = init_repr_data->vf_id;
 	repr->switch_domain_id = init_repr_data->switch_domain_id;
 	repr->port_id          = init_repr_data->port_id;
@@ -752,7 +746,7 @@ nfp_flower_repr_alloc(struct nfp_app_fw_flower *app_fw_flower)
 	nfp_eth_table = app_fw_flower->pf_hw->pf_dev->nfp_eth_table;
 	eth_dev = app_fw_flower->ctrl_hw->eth_dev;
 
-	/* Send a NFP_FLOWER_CMSG_TYPE_MAC_REPR cmsg to hardware*/
+	/* Send a NFP_FLOWER_CMSG_TYPE_MAC_REPR cmsg to hardware */
 	ret = nfp_flower_cmsg_mac_repr(app_fw_flower);
 	if (ret != 0) {
 		PMD_INIT_LOG(ERR, "Cloud not send mac repr cmsgs");
@@ -826,7 +820,7 @@ nfp_flower_repr_alloc(struct nfp_app_fw_flower *app_fw_flower)
 		snprintf(flower_repr.name, sizeof(flower_repr.name),
 				"%s_repr_vf%d", pci_name, i);
 
-		/* This will also allocate private memory for the device*/
+		/* This will also allocate private memory for the device */
 		ret = rte_eth_dev_create(eth_dev->device, flower_repr.name,
 				sizeof(struct nfp_flower_representor),
 				NULL, NULL, nfp_flower_repr_init, &flower_repr);
diff --git a/drivers/net/nfp/flower/nfp_flower_representor.h b/drivers/net/nfp/flower/nfp_flower_representor.h
index 5ac5e38186..eda19cbb16 100644
--- a/drivers/net/nfp/flower/nfp_flower_representor.h
+++ b/drivers/net/nfp/flower/nfp_flower_representor.h
@@ -13,7 +13,7 @@ struct nfp_flower_representor {
 	uint16_t switch_domain_id;
 	uint32_t repr_type;
 	uint32_t port_id;
-	uint32_t nfp_idx;    /* only valid for the repr of physical port */
+	uint32_t nfp_idx;    /**< Only valid for the repr of physical port */
 	char name[RTE_ETH_NAME_MAX_LEN];
 	struct rte_ether_addr mac_addr;
 	struct nfp_app_fw_flower *app_fw_flower;
diff --git a/drivers/net/nfp/nfd3/nfp_nfd3.h b/drivers/net/nfp/nfd3/nfp_nfd3.h
index 7c56ca4908..0b0ca361f4 100644
--- a/drivers/net/nfp/nfd3/nfp_nfd3.h
+++ b/drivers/net/nfp/nfd3/nfp_nfd3.h
@@ -17,24 +17,24 @@
 struct nfp_net_nfd3_tx_desc {
 	union {
 		struct {
-			uint8_t dma_addr_hi; /* High bits of host buf address */
-			uint16_t dma_len;    /* Length to DMA for this desc */
-			/* Offset in buf where pkt starts + highest bit is eop flag */
+			uint8_t dma_addr_hi; /**< High bits of host buf address */
+			uint16_t dma_len;    /**< Length to DMA for this desc */
+			/** Offset in buf where pkt starts + highest bit is eop flag */
 			uint8_t offset_eop;
-			uint32_t dma_addr_lo; /* Low 32bit of host buf addr */
+			uint32_t dma_addr_lo; /**< Low 32bit of host buf addr */
 
-			uint16_t mss;         /* MSS to be used for LSO */
-			uint8_t lso_hdrlen;   /* LSO, where the data starts */
-			uint8_t flags;        /* TX Flags, see @NFD3_DESC_TX_* */
+			uint16_t mss;         /**< MSS to be used for LSO */
+			uint8_t lso_hdrlen;   /**< LSO, where the data starts */
+			uint8_t flags;        /**< TX Flags, see @NFD3_DESC_TX_* */
 
 			union {
 				struct {
-					uint8_t l3_offset; /* L3 header offset */
-					uint8_t l4_offset; /* L4 header offset */
+					uint8_t l3_offset; /**< L3 header offset */
+					uint8_t l4_offset; /**< L4 header offset */
 				};
-				uint16_t vlan; /* VLAN tag to add if indicated */
+				uint16_t vlan; /**< VLAN tag to add if indicated */
 			};
-			uint16_t data_len;     /* Length of frame + meta data */
+			uint16_t data_len;     /**< Length of frame + meta data */
 		} __rte_packed;
 		uint32_t vals[4];
 	};
@@ -54,13 +54,14 @@ nfp_net_nfd3_free_tx_desc(struct nfp_net_txq *txq)
 	return (free_desc > 8) ? (free_desc - 8) : 0;
 }
 
-/*
- * nfp_net_nfd3_txq_full() - Check if the TX queue free descriptors
- * is below tx_free_threshold for firmware of nfd3
- *
- * @txq: TX queue to check
+/**
+ * Check if the number of TX queue free descriptors is below
+ * tx_free_threshold for the nfd3 firmware
  *
  * This function uses the host copy* of read/write pointers.
+ *
+ * @param txq
+ *   TX queue to check
  */
 static inline bool
 nfp_net_nfd3_txq_full(struct nfp_net_txq *txq)
diff --git a/drivers/net/nfp/nfd3/nfp_nfd3_dp.c b/drivers/net/nfp/nfd3/nfp_nfd3_dp.c
index 51755f4324..a26d4bf4c8 100644
--- a/drivers/net/nfp/nfd3/nfp_nfd3_dp.c
+++ b/drivers/net/nfp/nfd3/nfp_nfd3_dp.c
@@ -113,14 +113,12 @@ nfp_flower_nfd3_pkt_add_metadata(struct rte_mbuf *mbuf,
 }
 
 /*
- * nfp_net_nfd3_tx_vlan() - Set vlan info in the nfd3 tx desc
+ * Set vlan info in the nfd3 tx desc
  *
  * If enable NFP_NET_CFG_CTRL_TXVLAN_V2
- *	Vlan_info is stored in the meta and
- *	is handled in the nfp_net_nfd3_set_meta_vlan()
+ *   Vlan_info is stored in the meta and is handled in the @nfp_net_nfd3_set_meta_vlan()
  * else if enable NFP_NET_CFG_CTRL_TXVLAN
- *	Vlan_info is stored in the tx_desc and
- *	is handled in the nfp_net_nfd3_tx_vlan()
+ *   Vlan_info is stored in the tx_desc and is handled in the @nfp_net_nfd3_tx_vlan()
  */
 static inline void
 nfp_net_nfd3_tx_vlan(struct nfp_net_txq *txq,
@@ -299,7 +297,7 @@ nfp_net_nfd3_xmit_pkts_common(void *tx_queue,
 		nfp_net_nfd3_tx_vlan(txq, &txd, pkt);
 
 		/*
-		 * mbuf data_len is the data in one segment and pkt_len data
+		 * Mbuf data_len is the data in one segment and pkt_len data
 		 * in the whole packet. When the packet is just one segment,
 		 * then data_len = pkt_len
 		 */
@@ -330,7 +328,7 @@ nfp_net_nfd3_xmit_pkts_common(void *tx_queue,
 			free_descs--;
 
 			txq->wr_p++;
-			if (unlikely(txq->wr_p == txq->tx_count)) /* wrapping */
+			if (unlikely(txq->wr_p == txq->tx_count)) /* Wrapping */
 				txq->wr_p = 0;
 
 			pkt_size -= dma_size;
@@ -439,7 +437,7 @@ nfp_net_nfd3_tx_queue_setup(struct rte_eth_dev *dev,
 	txq->tx_count = nb_desc * NFD3_TX_DESC_PER_PKT;
 	txq->tx_free_thresh = tx_free_thresh;
 
-	/* queue mapping based on firmware configuration */
+	/* Queue mapping based on firmware configuration */
 	txq->qidx = queue_idx;
 	txq->tx_qcidx = queue_idx * hw->stride_tx;
 	txq->qcp_q = hw->tx_bar + NFP_QCP_QUEUE_OFF(txq->tx_qcidx);
@@ -449,7 +447,7 @@ nfp_net_nfd3_tx_queue_setup(struct rte_eth_dev *dev,
 	txq->dma = tz->iova;
 	txq->txds = tz->addr;
 
-	/* mbuf pointers array for referencing mbufs linked to TX descriptors */
+	/* Mbuf pointers array for referencing mbufs linked to TX descriptors */
 	txq->txbufs = rte_zmalloc_socket("txq->txbufs",
 			sizeof(*txq->txbufs) * txq->tx_count,
 			RTE_CACHE_LINE_SIZE, socket_id);
diff --git a/drivers/net/nfp/nfdk/nfp_nfdk.h b/drivers/net/nfp/nfdk/nfp_nfdk.h
index 99675b6bd7..04bd3c7600 100644
--- a/drivers/net/nfp/nfdk/nfp_nfdk.h
+++ b/drivers/net/nfp/nfdk/nfp_nfdk.h
@@ -75,7 +75,7 @@
  * dma_addr_hi - bits [47:32] of host memory address
  * dma_addr_lo - bits [31:0] of host memory address
  *
- * --> metadata descriptor
+ * --> Metadata descriptor
  * Bit     3 3 2 2 2 2 2 2 2 2 2 2 1 1 1 1 1 1 1 1 1 1 0 0 0 0 0 0 0 0 0 0
  * -----\  1 0 9 8 7 6 5 4 3 2 1 0 9 8 7 6 5 4 3 2 1 0 9 8 7 6 5 4 3 2 1 0
  * Word   +-------+-----------------------+---------------------+---+-----+
@@ -104,27 +104,27 @@
  */
 struct nfp_net_nfdk_tx_desc {
 	union {
-		/* Address descriptor */
+		/** Address descriptor */
 		struct {
-			uint16_t dma_addr_hi;  /* High bits of host buf address */
-			uint16_t dma_len_type; /* Length to DMA for this desc */
-			uint32_t dma_addr_lo;  /* Low 32bit of host buf addr */
+			uint16_t dma_addr_hi;  /**< High bits of host buf address */
+			uint16_t dma_len_type; /**< Length to DMA for this desc */
+			uint32_t dma_addr_lo;  /**< Low 32bit of host buf addr */
 		};
 
-		/* TSO descriptor */
+		/** TSO descriptor */
 		struct {
-			uint16_t mss;          /* MSS to be used for LSO */
-			uint8_t lso_hdrlen;    /* LSO, TCP payload offset */
-			uint8_t lso_totsegs;   /* LSO, total segments */
-			uint8_t l3_offset;     /* L3 header offset */
-			uint8_t l4_offset;     /* L4 header offset */
-			uint16_t lso_meta_res; /* Rsvd bits in TSO metadata */
+			uint16_t mss;          /**< MSS to be used for LSO */
+			uint8_t lso_hdrlen;    /**< LSO, TCP payload offset */
+			uint8_t lso_totsegs;   /**< LSO, total segments */
+			uint8_t l3_offset;     /**< L3 header offset */
+			uint8_t l4_offset;     /**< L4 header offset */
+			uint16_t lso_meta_res; /**< Rsvd bits in TSO metadata */
 		};
 
-		/* Metadata descriptor */
+		/** Metadata descriptor */
 		struct {
-			uint8_t flags;         /* TX Flags, see @NFDK_DESC_TX_* */
-			uint8_t reserved[7];   /* meta byte placeholder */
+			uint8_t flags;         /**< TX Flags, see @NFDK_DESC_TX_* */
+			uint8_t reserved[7];   /**< Meta byte placeholder */
 		};
 
 		uint32_t vals[2];
@@ -146,13 +146,14 @@ nfp_net_nfdk_free_tx_desc(struct nfp_net_txq *txq)
 			(free_desc - NFDK_TX_DESC_STOP_CNT) : 0;
 }
 
-/*
- * nfp_net_nfdk_txq_full() - Check if the TX queue free descriptors
- * is below tx_free_threshold for firmware of nfdk
- *
- * @txq: TX queue to check
+/**
+ * Check if the number of TX queue free descriptors is below
+ * tx_free_threshold for the nfdk firmware
  *
  * This function uses the host copy* of read/write pointers.
+ *
+ * @param txq
+ *   TX queue to check
  */
 static inline bool
 nfp_net_nfdk_txq_full(struct nfp_net_txq *txq)
diff --git a/drivers/net/nfp/nfdk/nfp_nfdk_dp.c b/drivers/net/nfp/nfdk/nfp_nfdk_dp.c
index dae87ac6df..0e1f72cee8 100644
--- a/drivers/net/nfp/nfdk/nfp_nfdk_dp.c
+++ b/drivers/net/nfp/nfdk/nfp_nfdk_dp.c
@@ -478,7 +478,7 @@ nfp_net_nfdk_tx_queue_setup(struct rte_eth_dev *dev,
 
 	/*
 	 * Free memory prior to re-allocation if needed. This is the case after
-	 * calling nfp_net_stop
+	 * calling nfp_net_stop()
 	 */
 	if (dev->data->tx_queues[queue_idx] != NULL) {
 		PMD_TX_LOG(DEBUG, "Freeing memory prior to re-allocation %d",
@@ -513,7 +513,7 @@ nfp_net_nfdk_tx_queue_setup(struct rte_eth_dev *dev,
 	txq->tx_count = nb_desc * NFDK_TX_DESC_PER_SIMPLE_PKT;
 	txq->tx_free_thresh = tx_free_thresh;
 
-	/* queue mapping based on firmware configuration */
+	/* Queue mapping based on firmware configuration */
 	txq->qidx = queue_idx;
 	txq->tx_qcidx = queue_idx * hw->stride_tx;
 	txq->qcp_q = hw->tx_bar + NFP_QCP_QUEUE_OFF(txq->tx_qcidx);
@@ -523,7 +523,7 @@ nfp_net_nfdk_tx_queue_setup(struct rte_eth_dev *dev,
 	txq->dma = tz->iova;
 	txq->ktxds = tz->addr;
 
-	/* mbuf pointers array for referencing mbufs linked to TX descriptors */
+	/* Mbuf pointers array for referencing mbufs linked to TX descriptors */
 	txq->txbufs = rte_zmalloc_socket("txq->txbufs",
 			sizeof(*txq->txbufs) * txq->tx_count,
 			RTE_CACHE_LINE_SIZE, socket_id);
diff --git a/drivers/net/nfp/nfp_common.c b/drivers/net/nfp/nfp_common.c
index f48e1930dc..ed3c5c15d2 100644
--- a/drivers/net/nfp/nfp_common.c
+++ b/drivers/net/nfp/nfp_common.c
@@ -55,7 +55,7 @@ struct nfp_xstat {
 }
 
 static const struct nfp_xstat nfp_net_xstats[] = {
-	/**
+	/*
 	 * Basic xstats available on both VF and PF.
 	 * Note that in case new statistics of group NFP_XSTAT_GROUP_NET
 	 * are added to this array, they must appear before any statistics
@@ -80,7 +80,7 @@ static const struct nfp_xstat nfp_net_xstats[] = {
 	NFP_XSTAT_NET("bpf_app2_bytes", APP2_BYTES),
 	NFP_XSTAT_NET("bpf_app3_pkts", APP3_FRAMES),
 	NFP_XSTAT_NET("bpf_app3_bytes", APP3_BYTES),
-	/**
+	/*
 	 * MAC xstats available only on PF. These statistics are not available for VFs as the
 	 * PF is not initialized when the VF is initialized as it is still bound to the kernel
 	 * driver. As such, the PMD cannot obtain a CPP handle and access the rtsym_table in order
@@ -175,7 +175,7 @@ static void
 nfp_net_notify_port_speed(struct nfp_net_hw *hw,
 		struct rte_eth_link *link)
 {
-	/**
+	/*
 	 * Read the link status from NFP_NET_CFG_STS. If the link is down
 	 * then write the link speed NFP_NET_CFG_STS_LINK_RATE_UNKNOWN to
 	 * NFP_NET_CFG_STS_NSP_LINK_RATE.
@@ -184,7 +184,7 @@ nfp_net_notify_port_speed(struct nfp_net_hw *hw,
 		nn_cfg_writew(hw, NFP_NET_CFG_STS_NSP_LINK_RATE, NFP_NET_CFG_STS_LINK_RATE_UNKNOWN);
 		return;
 	}
-	/**
+	/*
 	 * Link is up so write the link speed from the eth_table to
 	 * NFP_NET_CFG_STS_NSP_LINK_RATE.
 	 */
@@ -214,7 +214,7 @@ __nfp_net_reconfig(struct nfp_net_hw *hw,
 	nfp_qcp_ptr_add(hw->qcp_cfg, NFP_QCP_WRITE_PTR, 1);
 
 	wait.tv_sec = 0;
-	wait.tv_nsec = 1000000;
+	wait.tv_nsec = 1000000; /* 1ms */
 
 	PMD_DRV_LOG(DEBUG, "Polling for update ack...");
 
@@ -253,7 +253,7 @@ __nfp_net_reconfig(struct nfp_net_hw *hw,
  *
  * @return
  *   - (0) if OK to reconfigure the device.
- *   - (EIO) if I/O err and fail to reconfigure the device.
+ *   - (-EIO) if I/O err and fail to reconfigure the device.
  */
 int
 nfp_net_reconfig(struct nfp_net_hw *hw,
@@ -297,7 +297,7 @@ nfp_net_reconfig(struct nfp_net_hw *hw,
  *
  * @return
  *   - (0) if OK to reconfigure the device.
- *   - (EIO) if I/O err and fail to reconfigure the device.
+ *   - (-EIO) if I/O err and fail to reconfigure the device.
  */
 int
 nfp_net_ext_reconfig(struct nfp_net_hw *hw,
@@ -368,9 +368,15 @@ nfp_net_mbox_reconfig(struct nfp_net_hw *hw,
 }
 
 /*
- * Configure an Ethernet device. This function must be invoked first
- * before any other function in the Ethernet API. This function can
- * also be re-invoked when a device is in the stopped state.
+ * Configure an Ethernet device.
+ *
+ * This function must be invoked first before any other function in the Ethernet API.
+ * This function can also be re-invoked when a device is in the stopped state.
+ *
+ * A DPDK app sends info about how many queues to use and how those queues
+ * need to be configured. This is used by the DPDK core and it makes sure no
+ * more queues than those advertised by the driver are requested.
+ * This function is called after that internal process.
  */
 int
 nfp_net_configure(struct rte_eth_dev *dev)
@@ -382,14 +388,6 @@ nfp_net_configure(struct rte_eth_dev *dev)
 
 	hw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private);
 
-	/*
-	 * A DPDK app sends info about how many queues to use and how
-	 * those queues need to be configured. This is used by the
-	 * DPDK core and it makes sure no more queues than those
-	 * advertised by the driver are requested. This function is
-	 * called after that internal process
-	 */
-
 	dev_conf = &dev->data->dev_conf;
 	rxmode = &dev_conf->rxmode;
 	txmode = &dev_conf->txmode;
@@ -557,12 +555,12 @@ nfp_net_set_mac_addr(struct rte_eth_dev *dev,
 	/* Writing new MAC to the specific port BAR address */
 	nfp_net_write_mac(hw, (uint8_t *)mac_addr);
 
-	/* Signal the NIC about the change */
 	update = NFP_NET_CFG_UPDATE_MACADDR;
 	ctrl = hw->ctrl;
 	if ((hw->ctrl & NFP_NET_CFG_CTRL_ENABLE) != 0 &&
 			(hw->cap & NFP_NET_CFG_CTRL_LIVE_ADDR) != 0)
 		ctrl |= NFP_NET_CFG_CTRL_LIVE_ADDR;
+	/* Signal the NIC about the change */
 	if (nfp_net_reconfig(hw, ctrl, update) != 0) {
 		PMD_DRV_LOG(ERR, "MAC address update failed");
 		return -EIO;
@@ -706,10 +704,6 @@ nfp_net_promisc_enable(struct rte_eth_dev *dev)
 	new_ctrl = hw->ctrl | NFP_NET_CFG_CTRL_PROMISC;
 	update = NFP_NET_CFG_UPDATE_GEN;
 
-	/*
-	 * DPDK sets promiscuous mode on just after this call assuming
-	 * it can not fail ...
-	 */
 	ret = nfp_net_reconfig(hw, new_ctrl, update);
 	if (ret != 0)
 		return ret;
@@ -737,10 +731,6 @@ nfp_net_promisc_disable(struct rte_eth_dev *dev)
 	new_ctrl = hw->ctrl & ~NFP_NET_CFG_CTRL_PROMISC;
 	update = NFP_NET_CFG_UPDATE_GEN;
 
-	/*
-	 * DPDK sets promiscuous mode off just before this call
-	 * assuming it can not fail ...
-	 */
 	ret = nfp_net_reconfig(hw, new_ctrl, update);
 	if (ret != 0)
 		return ret;
@@ -751,7 +741,7 @@ nfp_net_promisc_disable(struct rte_eth_dev *dev)
 }
 
 /*
- * return 0 means link status changed, -1 means not changed
+ * Return 0 means link status changed, -1 means not changed
  *
  * Wait to complete is needed as it can take up to 9 seconds to get the Link
  * status.
@@ -793,7 +783,7 @@ nfp_net_link_update(struct rte_eth_dev *dev,
 				}
 			}
 		} else {
-			/**
+			/*
 			 * Shift and mask nn_link_status so that it is effectively the value
 			 * at offset NFP_NET_CFG_STS_NSP_LINK_RATE.
 			 */
@@ -812,7 +802,7 @@ nfp_net_link_update(struct rte_eth_dev *dev,
 			PMD_DRV_LOG(INFO, "NIC Link is Down");
 	}
 
-	/**
+	/*
 	 * Notify the port to update the speed value in the CTRL BAR from NSP.
 	 * Not applicable for VFs as the associated PF is still attached to the
 	 * kernel driver.
@@ -833,11 +823,9 @@ nfp_net_stats_get(struct rte_eth_dev *dev,
 
 	hw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private);
 
-	/* RTE_ETHDEV_QUEUE_STAT_CNTRS default value is 16 */
-
 	memset(&nfp_dev_stats, 0, sizeof(nfp_dev_stats));
 
-	/* reading per RX ring stats */
+	/* Reading per RX ring stats */
 	for (i = 0; i < dev->data->nb_rx_queues; i++) {
 		if (i == RTE_ETHDEV_QUEUE_STAT_CNTRS)
 			break;
@@ -855,7 +843,7 @@ nfp_net_stats_get(struct rte_eth_dev *dev,
 				hw->eth_stats_base.q_ibytes[i];
 	}
 
-	/* reading per TX ring stats */
+	/* Reading per TX ring stats */
 	for (i = 0; i < dev->data->nb_tx_queues; i++) {
 		if (i == RTE_ETHDEV_QUEUE_STAT_CNTRS)
 			break;
@@ -889,7 +877,7 @@ nfp_net_stats_get(struct rte_eth_dev *dev,
 
 	nfp_dev_stats.obytes -= hw->eth_stats_base.obytes;
 
-	/* reading general device stats */
+	/* Reading general device stats */
 	nfp_dev_stats.ierrors =
 			nn_cfg_readq(hw, NFP_NET_CFG_STATS_RX_ERRORS);
 
@@ -915,6 +903,10 @@ nfp_net_stats_get(struct rte_eth_dev *dev,
 	return -EINVAL;
 }
 
+/*
+ * hw->eth_stats_base records the per-counter starting point.
+ * Let's update it now.
+ */
 int
 nfp_net_stats_reset(struct rte_eth_dev *dev)
 {
@@ -923,12 +915,7 @@ nfp_net_stats_reset(struct rte_eth_dev *dev)
 
 	hw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private);
 
-	/*
-	 * hw->eth_stats_base records the per counter starting point.
-	 * Lets update it now
-	 */
-
-	/* reading per RX ring stats */
+	/* Reading per RX ring stats */
 	for (i = 0; i < dev->data->nb_rx_queues; i++) {
 		if (i == RTE_ETHDEV_QUEUE_STAT_CNTRS)
 			break;
@@ -940,7 +927,7 @@ nfp_net_stats_reset(struct rte_eth_dev *dev)
 				nn_cfg_readq(hw, NFP_NET_CFG_RXR_STATS(i) + 0x8);
 	}
 
-	/* reading per TX ring stats */
+	/* Reading per TX ring stats */
 	for (i = 0; i < dev->data->nb_tx_queues; i++) {
 		if (i == RTE_ETHDEV_QUEUE_STAT_CNTRS)
 			break;
@@ -964,7 +951,7 @@ nfp_net_stats_reset(struct rte_eth_dev *dev)
 	hw->eth_stats_base.obytes =
 			nn_cfg_readq(hw, NFP_NET_CFG_STATS_TX_OCTETS);
 
-	/* reading general device stats */
+	/* Reading general device stats */
 	hw->eth_stats_base.ierrors =
 			nn_cfg_readq(hw, NFP_NET_CFG_STATS_RX_ERRORS);
 
@@ -1032,7 +1019,7 @@ nfp_net_xstats_value(const struct rte_eth_dev *dev,
 	if (raw)
 		return value;
 
-	/**
+	/*
 	 * A baseline value of each statistic counter is recorded when stats are "reset".
 	 * Thus, the value returned by this function need to be decremented by this
 	 * baseline value. The result is the count of this statistic since the last time
@@ -1041,12 +1028,12 @@ nfp_net_xstats_value(const struct rte_eth_dev *dev,
 	return value - hw->eth_xstats_base[index].value;
 }
 
+/* NOTE: All callers ensure dev is always set. */
 int
 nfp_net_xstats_get_names(struct rte_eth_dev *dev,
 		struct rte_eth_xstat_name *xstats_names,
 		unsigned int size)
 {
-	/* NOTE: All callers ensure dev is always set. */
 	uint32_t id;
 	uint32_t nfp_size;
 	uint32_t read_size;
@@ -1066,12 +1053,12 @@ nfp_net_xstats_get_names(struct rte_eth_dev *dev,
 	return read_size;
 }
 
+/* NOTE: All callers ensure dev is always set. */
 int
 nfp_net_xstats_get(struct rte_eth_dev *dev,
 		struct rte_eth_xstat *xstats,
 		unsigned int n)
 {
-	/* NOTE: All callers ensure dev is always set. */
 	uint32_t id;
 	uint32_t nfp_size;
 	uint32_t read_size;
@@ -1092,16 +1079,16 @@ nfp_net_xstats_get(struct rte_eth_dev *dev,
 	return read_size;
 }
 
+/*
+ * NOTE: The only caller rte_eth_xstats_get_names_by_id() ensures dev,
+ * ids, xstats_names and size are valid, and non-NULL.
+ */
 int
 nfp_net_xstats_get_names_by_id(struct rte_eth_dev *dev,
 		const uint64_t *ids,
 		struct rte_eth_xstat_name *xstats_names,
 		unsigned int size)
 {
-	/**
-	 * NOTE: The only caller rte_eth_xstats_get_names_by_id() ensures dev,
-	 * ids, xstats_names and size are valid, and non-NULL.
-	 */
 	uint32_t i;
 	uint32_t read_size;
 
@@ -1123,16 +1110,16 @@ nfp_net_xstats_get_names_by_id(struct rte_eth_dev *dev,
 	return read_size;
 }
 
+/*
+ * NOTE: The only caller rte_eth_xstats_get_by_id() ensures dev,
+ * ids, values and n are valid, and non-NULL.
+ */
 int
 nfp_net_xstats_get_by_id(struct rte_eth_dev *dev,
 		const uint64_t *ids,
 		uint64_t *values,
 		unsigned int n)
 {
-	/**
-	 * NOTE: The only caller rte_eth_xstats_get_by_id() ensures dev,
-	 * ids, values and n are valid, and non-NULL.
-	 */
 	uint32_t i;
 	uint32_t read_size;
 
@@ -1167,10 +1154,7 @@ nfp_net_xstats_reset(struct rte_eth_dev *dev)
 		hw->eth_xstats_base[id].id = id;
 		hw->eth_xstats_base[id].value = nfp_net_xstats_value(dev, id, true);
 	}
-	/**
-	 * Successfully reset xstats, now call function to reset basic stats
-	 * return value is then based on the success of that function
-	 */
+	/* Successfully reset xstats, now call function to reset basic stats. */
 	return nfp_net_stats_reset(dev);
 }
 
@@ -1217,7 +1201,7 @@ nfp_net_infos_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 	dev_info->max_rx_queues = (uint16_t)hw->max_rx_queues;
 	dev_info->max_tx_queues = (uint16_t)hw->max_tx_queues;
 	dev_info->min_rx_bufsize = RTE_ETHER_MIN_MTU;
-	/*
+	/**
 	 * The maximum rx packet length (max_rx_pktlen) is set to the
 	 * maximum supported frame size that the NFP can handle. This
 	 * includes layer 2 headers, CRC and other metadata that can
@@ -1358,7 +1342,7 @@ nfp_net_common_init(struct rte_pci_device *pci_dev,
 
 	nfp_net_init_metadata_format(hw);
 
-	/* read the Rx offset configured from firmware */
+	/* Read the Rx offset configured from firmware */
 	if (hw->ver.major < 2)
 		hw->rx_offset = NFP_NET_RX_OFFSET;
 	else
@@ -1375,7 +1359,6 @@ const uint32_t *
 nfp_net_supported_ptypes_get(struct rte_eth_dev *dev)
 {
 	static const uint32_t ptypes[] = {
-		/* refers to nfp_net_set_hash() */
 		RTE_PTYPE_INNER_L3_IPV4,
 		RTE_PTYPE_INNER_L3_IPV6,
 		RTE_PTYPE_INNER_L3_IPV6_EXT,
@@ -1449,10 +1432,8 @@ nfp_net_dev_link_status_print(struct rte_eth_dev *dev)
 			pci_dev->addr.devid, pci_dev->addr.function);
 }
 
-/* Interrupt configuration and handling */
-
 /*
- * nfp_net_irq_unmask - Unmask an interrupt
+ * Unmask an interrupt
  *
  * If MSI-X auto-masking is enabled clear the mask bit, otherwise
  * clear the ICR for the entry.
@@ -1478,16 +1459,14 @@ nfp_net_irq_unmask(struct rte_eth_dev *dev)
 	}
 }
 
-/*
+/**
  * Interrupt handler which shall be registered for alarm callback for delayed
  * handling specific interrupt to wait for the stable nic state. As the NIC
  * interrupt state is not stable for nfp after link is just down, it needs
  * to wait 4 seconds to get the stable status.
  *
- * @param handle   Pointer to interrupt handle.
- * @param param    The address of parameter (struct rte_eth_dev *)
- *
- * @return  void
+ * @param param
+ *   The address of parameter (struct rte_eth_dev *)
  */
 void
 nfp_net_dev_interrupt_delayed_handler(void *param)
@@ -1516,13 +1495,12 @@ nfp_net_dev_interrupt_handler(void *param)
 
 	nfp_net_link_update(dev, 0);
 
-	/* likely to up */
+	/* Likely to come up */
 	if (link.link_status == 0) {
-		/* handle it 1 sec later, wait it being stable */
+		/* Handle it 1 sec later, waiting for it to become stable */
 		timeout = NFP_NET_LINK_UP_CHECK_TIMEOUT;
-		/* likely to down */
-	} else {
-		/* handle it 4 sec later, wait it being stable */
+	} else {  /* Likely to go down */
+		/* Handle it 4 sec later, waiting for it to become stable */
 		timeout = NFP_NET_LINK_DOWN_CHECK_TIMEOUT;
 	}
 
@@ -1543,7 +1521,7 @@ nfp_net_dev_mtu_set(struct rte_eth_dev *dev,
 
 	hw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private);
 
-	/* mtu setting is forbidden if port is started */
+	/* MTU setting is forbidden if port is started */
 	if (dev->data->dev_started) {
 		PMD_DRV_LOG(ERR, "port %d must be stopped before configuration",
 				dev->data->port_id);
@@ -1557,7 +1535,7 @@ nfp_net_dev_mtu_set(struct rte_eth_dev *dev,
 		return -ERANGE;
 	}
 
-	/* writing to configuration space */
+	/* Writing to configuration space */
 	nn_cfg_writel(hw, NFP_NET_CFG_MTU, mtu);
 
 	hw->mtu = mtu;
@@ -1653,8 +1631,8 @@ nfp_net_rss_reta_write(struct rte_eth_dev *dev,
 		for (j = 0; j < 4; j++) {
 			if ((mask & (0x1 << j)) == 0)
 				continue;
+			/* Clearing the entry bits */
 			if (mask != 0xF)
-				/* Clearing the entry bits */
 				reta &= ~(0xFF << (8 * j));
 			reta |= reta_conf[idx].reta[shift + j] << (8 * j);
 		}
@@ -1689,7 +1667,7 @@ nfp_net_reta_update(struct rte_eth_dev *dev,
 	return 0;
 }
 
- /* Query Redirection Table(RETA) of Receive Side Scaling of Ethernet device. */
+/* Query Redirection Table(RETA) of Receive Side Scaling of Ethernet device. */
 int
 nfp_net_reta_query(struct rte_eth_dev *dev,
 		struct rte_eth_rss_reta_entry64 *reta_conf,
@@ -1717,7 +1695,7 @@ nfp_net_reta_query(struct rte_eth_dev *dev,
 
 	/*
 	 * Reading Redirection Table. There are 128 8bit-entries which can be
-	 * manage as 32 32bit-entries
+	 * managed as 32 32bit-entries.
 	 */
 	for (i = 0; i < reta_size; i += 4) {
 		/* Handling 4 RSS entries per loop */
@@ -1751,7 +1729,7 @@ nfp_net_rss_hash_write(struct rte_eth_dev *dev,
 
 	hw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private);
 
-	/* Writing the key byte a byte */
+	/* Writing the key byte by byte */
 	for (i = 0; i < rss_conf->rss_key_len; i++) {
 		memcpy(&key, &rss_conf->rss_key[i], 1);
 		nn_cfg_writeb(hw, NFP_NET_CFG_RSS_KEY + i, key);
@@ -1786,7 +1764,7 @@ nfp_net_rss_hash_write(struct rte_eth_dev *dev,
 	cfg_rss_ctrl |= NFP_NET_CFG_RSS_MASK;
 	cfg_rss_ctrl |= NFP_NET_CFG_RSS_TOEPLITZ;
 
-	/* configuring where to apply the RSS hash */
+	/* Configuring where to apply the RSS hash */
 	nn_cfg_writel(hw, NFP_NET_CFG_RSS_CTRL, cfg_rss_ctrl);
 
 	/* Writing the key size */
@@ -1809,7 +1787,7 @@ nfp_net_rss_hash_update(struct rte_eth_dev *dev,
 
 	/* Checking if RSS is enabled */
 	if ((hw->ctrl & NFP_NET_CFG_CTRL_RSS_ANY) == 0) {
-		if (rss_hf != 0) { /* Enable RSS? */
+		if (rss_hf != 0) {
 			PMD_DRV_LOG(ERR, "RSS unsupported");
 			return -EINVAL;
 		}
diff --git a/drivers/net/nfp/nfp_common.h b/drivers/net/nfp/nfp_common.h
index 9cb889c4a6..b41d834165 100644
--- a/drivers/net/nfp/nfp_common.h
+++ b/drivers/net/nfp/nfp_common.h
@@ -53,7 +53,7 @@ enum nfp_app_fw_id {
 	NFP_APP_FW_FLOWER_NIC             = 0x3,
 };
 
-/* nfp_qcp_ptr - Read or Write Pointer of a queue */
+/* Read or Write Pointer of a queue */
 enum nfp_qcp_ptr {
 	NFP_QCP_READ_PTR = 0,
 	NFP_QCP_WRITE_PTR
@@ -72,15 +72,15 @@ struct nfp_net_tlv_caps {
 };
 
 struct nfp_pf_dev {
-	/* Backpointer to associated pci device */
+	/** Backpointer to associated pci device */
 	struct rte_pci_device *pci_dev;
 
 	enum nfp_app_fw_id app_fw_id;
 
-	/* Pointer to the app running on the PF */
+	/** Pointer to the app running on the PF */
 	void *app_fw_priv;
 
-	/* The eth table reported by firmware */
+	/** The eth table reported by firmware */
 	struct nfp_eth_table *nfp_eth_table;
 
 	uint8_t *ctrl_bar;
@@ -94,17 +94,17 @@ struct nfp_pf_dev {
 	struct nfp_hwinfo *hwinfo;
 	struct nfp_rtsym_table *sym_tbl;
 
-	/* service id of cpp bridge service */
+	/** Service id of cpp bridge service */
 	uint32_t cpp_bridge_id;
 };
 
 struct nfp_app_fw_nic {
-	/* Backpointer to the PF device */
+	/** Backpointer to the PF device */
 	struct nfp_pf_dev *pf_dev;
 
 	/*
-	 * Array of physical ports belonging to the this CoreNIC app
-	 * This is really a list of vNIC's. One for each physical port
+	 * Array of physical ports belonging to this CoreNIC app.
+	 * This is really a list of vNICs, one for each physical port.
 	 */
 	struct nfp_net_hw *ports[NFP_MAX_PHYPORTS];
 
@@ -113,13 +113,13 @@ struct nfp_app_fw_nic {
 };
 
 struct nfp_net_hw {
-	/* Backpointer to the PF this port belongs to */
+	/** Backpointer to the PF this port belongs to */
 	struct nfp_pf_dev *pf_dev;
 
-	/* Backpointer to the eth_dev of this port*/
+	/** Backpointer to the eth_dev of this port */
 	struct rte_eth_dev *eth_dev;
 
-	/* Info from the firmware */
+	/** Info from the firmware */
 	struct nfp_net_fw_ver ver;
 	uint32_t cap;
 	uint32_t max_mtu;
@@ -130,7 +130,7 @@ struct nfp_net_hw {
 	/** NFP ASIC params */
 	const struct nfp_dev_info *dev_info;
 
-	/* Current values for control */
+	/** Current values for control */
 	uint32_t ctrl;
 
 	uint8_t *ctrl_bar;
@@ -156,7 +156,7 @@ struct nfp_net_hw {
 
 	struct rte_ether_addr mac_addr;
 
-	/* Records starting point for counters */
+	/** Records starting point for counters */
 	struct rte_eth_stats eth_stats_base;
 	struct rte_eth_xstat *eth_xstats_base;
 
@@ -166,9 +166,9 @@ struct nfp_net_hw {
 	uint8_t *mac_stats_bar;
 	uint8_t *mac_stats;
 
-	/* Sequential physical port number, only valid for CoreNIC firmware */
+	/** Sequential physical port number, only valid for CoreNIC firmware */
 	uint8_t idx;
-	/* Internal port number as seen from NFP */
+	/** Internal port number as seen from NFP */
 	uint8_t nfp_idx;
 
 	struct nfp_net_tlv_caps tlv_caps;
@@ -240,10 +240,6 @@ nn_writeq(uint64_t val,
 	nn_writel(val, addr);
 }
 
-/*
- * Functions to read/write from/to Config BAR
- * Performs any endian conversion necessary.
- */
 static inline uint8_t
 nn_cfg_readb(struct nfp_net_hw *hw,
 		uint32_t off)
@@ -304,11 +300,15 @@ nn_cfg_writeq(struct nfp_net_hw *hw,
 	nn_writeq(rte_cpu_to_le_64(val), hw->ctrl_bar + off);
 }
 
-/*
- * nfp_qcp_ptr_add - Add the value to the selected pointer of a queue
- * @q: Base address for queue structure
- * @ptr: Add to the Read or Write pointer
- * @val: Value to add to the queue pointer
+/**
+ * Add the value to the selected pointer of a queue.
+ *
+ * @param q
+ *   Base address for queue structure
+ * @param ptr
+ *   Add to the read or write pointer
+ * @param val
+ *   Value to add to the queue pointer
  */
 static inline void
 nfp_qcp_ptr_add(uint8_t *q,
@@ -325,10 +325,13 @@ nfp_qcp_ptr_add(uint8_t *q,
 	nn_writel(rte_cpu_to_le_32(val), q + off);
 }
 
-/*
- * nfp_qcp_read - Read the current Read/Write pointer value for a queue
- * @q:  Base address for queue structure
- * @ptr: Read or Write pointer
+/**
+ * Read the current read/write pointer value for a queue.
+ *
+ * @param q
+ *   Base address for queue structure
+ * @param ptr
+ *   Read or Write pointer
  */
 static inline uint32_t
 nfp_qcp_read(uint8_t *q,
diff --git a/drivers/net/nfp/nfp_cpp_bridge.c b/drivers/net/nfp/nfp_cpp_bridge.c
index 222cfdcbc3..b5bfe17d0e 100644
--- a/drivers/net/nfp/nfp_cpp_bridge.c
+++ b/drivers/net/nfp/nfp_cpp_bridge.c
@@ -1,8 +1,6 @@
 /* SPDX-License-Identifier: BSD-3-Clause
  * Copyright (c) 2014-2021 Netronome Systems, Inc.
  * All rights reserved.
- *
- * Small portions derived from code Copyright(c) 2010-2015 Intel Corporation.
  */
 
 #include "nfp_cpp_bridge.h"
diff --git a/drivers/net/nfp/nfp_ctrl.h b/drivers/net/nfp/nfp_ctrl.h
index 55073c3cea..a13f95894a 100644
--- a/drivers/net/nfp/nfp_ctrl.h
+++ b/drivers/net/nfp/nfp_ctrl.h
@@ -20,7 +20,7 @@
 /* Offset in Freelist buffer where packet starts on RX */
 #define NFP_NET_RX_OFFSET               32
 
-/* working with metadata api (NFD version > 3.0) */
+/* Working with metadata api (NFD version > 3.0) */
 #define NFP_NET_META_FIELD_SIZE         4
 #define NFP_NET_META_FIELD_MASK ((1 << NFP_NET_META_FIELD_SIZE) - 1)
 #define NFP_NET_META_HEADER_SIZE        4
@@ -36,7 +36,7 @@
 						NFP_NET_META_VLAN_TPID_MASK)
 
 /* Prepend field types */
-#define NFP_NET_META_HASH               1 /* next field carries hash type */
+#define NFP_NET_META_HASH               1 /* Next field carries hash type */
 #define NFP_NET_META_VLAN               4
 #define NFP_NET_META_PORTID             5
 #define NFP_NET_META_IPSEC              9
@@ -205,7 +205,7 @@ struct nfp_net_fw_ver {
  * @NFP_NET_CFG_SPARE_ADDR:  DMA address for ME code to use (e.g. YDS-155 fix)
  */
 #define NFP_NET_CFG_SPARE_ADDR          0x0050
-/**
+/*
  * NFP6000/NFP4000 - Prepend configuration
  */
 #define NFP_NET_CFG_RX_OFFSET		0x0050
@@ -330,7 +330,7 @@ struct nfp_net_fw_ver {
 
 /*
  * General device stats (0x0d00 - 0x0d90)
- * all counters are 64bit.
+ * All counters are 64bit.
  */
 #define NFP_NET_CFG_STATS_BASE          0x0d00
 #define NFP_NET_CFG_STATS_RX_DISCARDS   (NFP_NET_CFG_STATS_BASE + 0x00)
@@ -364,7 +364,7 @@ struct nfp_net_fw_ver {
 
 /*
  * Per ring stats (0x1000 - 0x1800)
- * options, 64bit per entry
+ * Options, 64bit per entry
  * @NFP_NET_CFG_TXR_STATS:   TX ring statistics (Packet and Byte count)
  * @NFP_NET_CFG_RXR_STATS:   RX ring statistics (Packet and Byte count)
  */
@@ -375,9 +375,9 @@ struct nfp_net_fw_ver {
 #define NFP_NET_CFG_RXR_STATS(_x)       (NFP_NET_CFG_RXR_STATS_BASE + \
 					 ((_x) * 0x10))
 
-/**
+/*
  * Mac stats (0x0000 - 0x0200)
- * all counters are 64bit.
+ * All counters are 64bit.
  */
 #define NFP_MAC_STATS_BASE                0x0000
 #define NFP_MAC_STATS_SIZE                0x0200
@@ -558,9 +558,11 @@ struct nfp_net_fw_ver {
 
 int nfp_net_tlv_caps_parse(struct rte_eth_dev *dev);
 
-/*
- * nfp_net_cfg_ctrl_rss() - Get RSS flag based on firmware's capability
- * @hw_cap: The firmware's capabilities
+/**
+ * Get RSS flag based on firmware's capability
+ *
+ * @param hw_cap
+ *   The firmware's capabilities
  */
 static inline uint32_t
 nfp_net_cfg_ctrl_rss(uint32_t hw_cap)
diff --git a/drivers/net/nfp/nfp_ethdev.c b/drivers/net/nfp/nfp_ethdev.c
index 72abc4c16e..dece821e4a 100644
--- a/drivers/net/nfp/nfp_ethdev.c
+++ b/drivers/net/nfp/nfp_ethdev.c
@@ -66,7 +66,7 @@ nfp_net_start(struct rte_eth_dev *dev)
 	/* Enabling the required queues in the device */
 	nfp_net_enable_queues(dev);
 
-	/* check and configure queue intr-vector mapping */
+	/* Check and configure queue intr-vector mapping */
 	if (dev->data->dev_conf.intr_conf.rxq != 0) {
 		if (app_fw_nic->multiport) {
 			PMD_INIT_LOG(ERR, "PMD rx interrupt is not supported "
@@ -273,11 +273,11 @@ nfp_net_close(struct rte_eth_dev *dev)
 	/* Clear ipsec */
 	nfp_ipsec_uninit(dev);
 
-	/* Cancel possible impending LSC work here before releasing the port*/
+	/* Cancel possible impending LSC work here before releasing the port */
 	rte_eal_alarm_cancel(nfp_net_dev_interrupt_delayed_handler, (void *)dev);
 
 	/* Only free PF resources after all physical ports have been closed */
-	/* Mark this port as unused and free device priv resources*/
+	/* Mark this port as unused and free device priv resources */
 	nn_cfg_writeb(hw, NFP_NET_CFG_LSC, 0xff);
 	app_fw_nic->ports[hw->idx] = NULL;
 	rte_eth_dev_release_port(dev);
@@ -300,15 +300,10 @@ nfp_net_close(struct rte_eth_dev *dev)
 
 	rte_intr_disable(pci_dev->intr_handle);
 
-	/* unregister callback func from eal lib */
+	/* Unregister callback func from eal lib */
 	rte_intr_callback_unregister(pci_dev->intr_handle,
 			nfp_net_dev_interrupt_handler, (void *)dev);
 
-	/*
-	 * The ixgbe PMD disables the pcie master on the
-	 * device. The i40e does not...
-	 */
-
 	return 0;
 }
 
@@ -842,8 +837,9 @@ nfp_init_app_fw_nic(struct nfp_pf_dev *pf_dev,
 
 		eth_dev->device = &pf_dev->pci_dev->device;
 
-		/* ctrl/tx/rx BAR mappings and remaining init happens in
-		 * nfp_net_init
+		/*
+		 * Ctrl/tx/rx BAR mappings and remaining init happens in
+		 * @nfp_net_init()
 		 */
 		ret = nfp_net_init(eth_dev);
 		if (ret != 0) {
@@ -970,7 +966,7 @@ nfp_pf_init(struct rte_pci_device *pci_dev)
 	pf_dev->pci_dev = pci_dev;
 	pf_dev->nfp_eth_table = nfp_eth_table;
 
-	/* configure access to tx/rx vNIC BARs */
+	/* Configure access to tx/rx vNIC BARs */
 	addr = nfp_qcp_queue_offset(dev_info, 0);
 	cpp_id = NFP_CPP_ISLAND_ID(0, NFP_CPP_ACTION_RW, 0, 0);
 
@@ -1011,7 +1007,7 @@ nfp_pf_init(struct rte_pci_device *pci_dev)
 		goto hwqueues_cleanup;
 	}
 
-	/* register the CPP bridge service here for primary use */
+	/* Register the CPP bridge service here for primary use */
 	ret = nfp_enable_cpp_service(pf_dev);
 	if (ret != 0)
 		PMD_INIT_LOG(INFO, "Enable cpp service failed.");
diff --git a/drivers/net/nfp/nfp_ethdev_vf.c b/drivers/net/nfp/nfp_ethdev_vf.c
index d3c3c9e953..0a1eb04294 100644
--- a/drivers/net/nfp/nfp_ethdev_vf.c
+++ b/drivers/net/nfp/nfp_ethdev_vf.c
@@ -47,7 +47,7 @@ nfp_netvf_start(struct rte_eth_dev *dev)
 	/* Enabling the required queues in the device */
 	nfp_net_enable_queues(dev);
 
-	/* check and configure queue intr-vector mapping */
+	/* Check and configure queue intr-vector mapping */
 	if (dev->data->dev_conf.intr_conf.rxq != 0) {
 		if (rte_intr_type_get(intr_handle) == RTE_INTR_HANDLE_UIO) {
 			/*
@@ -182,18 +182,13 @@ nfp_netvf_close(struct rte_eth_dev *dev)
 
 	rte_intr_disable(pci_dev->intr_handle);
 
-	/* unregister callback func from eal lib */
+	/* Unregister callback func from eal lib */
 	rte_intr_callback_unregister(pci_dev->intr_handle,
 			nfp_net_dev_interrupt_handler, (void *)dev);
 
-	/* Cancel possible impending LSC work here before releasing the port*/
+	/* Cancel possible impending LSC work here before releasing the port */
 	rte_eal_alarm_cancel(nfp_net_dev_interrupt_delayed_handler, (void *)dev);
 
-	/*
-	 * The ixgbe PMD disables the pcie master on the
-	 * device. The i40e does not...
-	 */
-
 	return 0;
 }
 
diff --git a/drivers/net/nfp/nfp_flow.c b/drivers/net/nfp/nfp_flow.c
index 476eb0c7f8..7b1abe926e 100644
--- a/drivers/net/nfp/nfp_flow.c
+++ b/drivers/net/nfp/nfp_flow.c
@@ -118,21 +118,21 @@ struct vxlan_data {
 #define NVGRE_V4_LEN     (sizeof(struct rte_ether_hdr) + \
 				sizeof(struct rte_ipv4_hdr) + \
 				sizeof(struct rte_flow_item_gre) + \
-				sizeof(rte_be32_t))    /* gre key */
+				sizeof(rte_be32_t))    /* GRE key */
 #define NVGRE_V6_LEN     (sizeof(struct rte_ether_hdr) + \
 				sizeof(struct rte_ipv6_hdr) + \
 				sizeof(struct rte_flow_item_gre) + \
-				sizeof(rte_be32_t))    /* gre key */
+				sizeof(rte_be32_t))    /* GRE key */
 
 /* Process structure associated with a flow item */
 struct nfp_flow_item_proc {
-	/* Bit-mask for fields supported by this PMD. */
+	/** Bit-mask for fields supported by this PMD. */
 	const void *mask_support;
-	/* Bit-mask to use when @p item->mask is not provided. */
+	/** Bit-mask to use when @p item->mask is not provided. */
 	const void *mask_default;
-	/* Size in bytes for @p mask_support and @p mask_default. */
+	/** Size in bytes for @p mask_support and @p mask_default. */
 	const unsigned int mask_sz;
-	/* Merge a pattern item into a flow rule handle. */
+	/** Merge a pattern item into a flow rule handle. */
 	int (*merge)(struct nfp_app_fw_flower *app_fw_flower,
 			struct rte_flow *nfp_flow,
 			char **mbuf_off,
@@ -140,7 +140,7 @@ struct nfp_flow_item_proc {
 			const struct nfp_flow_item_proc *proc,
 			bool is_mask,
 			bool is_outer_layer);
-	/* List of possible subsequent items. */
+	/** List of possible subsequent items. */
 	const enum rte_flow_item_type *const next_item;
 };
 
@@ -318,14 +318,14 @@ nfp_check_mask_add(struct nfp_flow_priv *priv,
 
 	mask_entry = nfp_mask_table_search(priv, mask_data, mask_len);
 	if (mask_entry == NULL) {
-		/* mask entry does not exist, let's create one */
+		/* Mask entry does not exist, let's create one */
 		ret = nfp_mask_table_add(priv, mask_data, mask_len, mask_id);
 		if (ret != 0)
 			return false;
 
 		*meta_flags |= NFP_FL_META_FLAG_MANAGE_MASK;
 	} else {
-		/* mask entry already exist */
+		/* Mask entry already exists */
 		mask_entry->ref_cnt++;
 		*mask_id = mask_entry->mask_id;
 	}
@@ -785,7 +785,7 @@ nfp_flow_key_layers_calculate_items(const struct rte_flow_item items[],
 		case RTE_FLOW_ITEM_TYPE_ETH:
 			PMD_DRV_LOG(DEBUG, "RTE_FLOW_ITEM_TYPE_ETH detected");
 			/*
-			 * eth is set with no specific params.
+			 * Eth is set with no specific params.
 			 * NFP does not need this.
 			 */
 			if (item->spec == NULL)
@@ -1273,7 +1273,7 @@ nfp_flow_merge_ipv4(__rte_unused struct nfp_app_fw_flower *app_fw_flower,
 		}
 
 		/*
-		 * reserve space for L4 info.
+		 * Reserve space for L4 info.
 		 * rte_flow has ipv4 before L4 but NFP flower fw requires L4 before ipv4
 		 */
 		if ((meta_tci->nfp_flow_key_layer & NFP_FLOWER_LAYER_TP) != 0)
@@ -1356,7 +1356,7 @@ nfp_flow_merge_ipv6(__rte_unused struct nfp_app_fw_flower *app_fw_flower,
 		}
 
 		/*
-		 * reserve space for L4 info.
+		 * Reserve space for L4 info.
 		 * rte_flow has ipv4 before L4 but NFP flower fw requires L4 before ipv6
 		 */
 		if ((meta_tci->nfp_flow_key_layer & NFP_FLOWER_LAYER_TP) != 0)
@@ -3330,9 +3330,9 @@ nfp_flow_action_raw_encap(struct nfp_app_fw_flower *app_fw_flower,
 		return -EINVAL;
 	}
 
-	/* Pre_tunnel action must be the first on action list.
-	 * If other actions already exist, they need to be
-	 * pushed forward.
+	/**
+	 * Pre_tunnel action must be the first on action list.
+	 * If other actions already exist, they need to be pushed forward.
 	 */
 	act_len = act_data - actions;
 	if (act_len != 0) {
@@ -4290,7 +4290,7 @@ nfp_flow_priv_init(struct nfp_pf_dev *pf_dev)
 		goto free_mask_id;
 	}
 
-	/* flow stats */
+	/* Flow stats */
 	rte_spinlock_init(&priv->stats_lock);
 	stats_size = (ctx_count & NFP_FL_STAT_ID_STAT) |
 			((ctx_split - 1) & NFP_FL_STAT_ID_MU_NUM);
@@ -4304,7 +4304,7 @@ nfp_flow_priv_init(struct nfp_pf_dev *pf_dev)
 		goto free_stats_id;
 	}
 
-	/* mask table */
+	/* Mask table */
 	mask_hash_params.hash_func_init_val = priv->hash_seed;
 	priv->mask_table = rte_hash_create(&mask_hash_params);
 	if (priv->mask_table == NULL) {
@@ -4313,7 +4313,7 @@ nfp_flow_priv_init(struct nfp_pf_dev *pf_dev)
 		goto free_stats;
 	}
 
-	/* flow table */
+	/* Flow table */
 	flow_hash_params.hash_func_init_val = priv->hash_seed;
 	flow_hash_params.entries = ctx_count;
 	priv->flow_table = rte_hash_create(&flow_hash_params);
@@ -4323,7 +4323,7 @@ nfp_flow_priv_init(struct nfp_pf_dev *pf_dev)
 		goto free_mask_table;
 	}
 
-	/* pre tunnel table */
+	/* Pre tunnel table */
 	priv->pre_tun_cnt = 1;
 	pre_tun_hash_params.hash_func_init_val = priv->hash_seed;
 	priv->pre_tun_table = rte_hash_create(&pre_tun_hash_params);
@@ -4333,15 +4333,15 @@ nfp_flow_priv_init(struct nfp_pf_dev *pf_dev)
 		goto free_flow_table;
 	}
 
-	/* ipv4 off list */
+	/* IPv4 off list */
 	rte_spinlock_init(&priv->ipv4_off_lock);
 	LIST_INIT(&priv->ipv4_off_list);
 
-	/* ipv6 off list */
+	/* IPv6 off list */
 	rte_spinlock_init(&priv->ipv6_off_lock);
 	LIST_INIT(&priv->ipv6_off_list);
 
-	/* neighbor next list */
+	/* Neighbor next list */
 	LIST_INIT(&priv->nn_list);
 
 	return 0;
diff --git a/drivers/net/nfp/nfp_flow.h b/drivers/net/nfp/nfp_flow.h
index 7ce7f62453..68b6fb6abe 100644
--- a/drivers/net/nfp/nfp_flow.h
+++ b/drivers/net/nfp/nfp_flow.h
@@ -115,19 +115,19 @@ struct nfp_ipv6_addr_entry {
 struct nfp_flow_priv {
 	uint32_t hash_seed; /**< Hash seed for hash tables in this structure. */
 	uint64_t flower_version; /**< Flow version, always increase. */
-	/* mask hash table */
+	/* Mask hash table */
 	struct nfp_fl_mask_id mask_ids; /**< Entry for mask hash table */
 	struct rte_hash *mask_table; /**< Hash table to store mask ids. */
-	/* flow hash table */
+	/* Flow hash table */
 	struct rte_hash *flow_table; /**< Hash table to store flow rules. */
-	/* flow stats */
+	/* Flow stats */
 	uint32_t active_mem_unit; /**< The size of active mem units. */
 	uint32_t total_mem_units; /**< The size of total mem units. */
 	uint32_t stats_ring_size; /**< The size of stats id ring. */
 	struct nfp_fl_stats_id stats_ids; /**< The stats id ring. */
 	struct nfp_fl_stats *stats; /**< Store stats of flow. */
 	rte_spinlock_t stats_lock; /** < Lock the update of 'stats' field. */
-	/* pre tunnel rule */
+	/* Pre tunnel rule */
 	uint16_t pre_tun_cnt; /**< The size of pre tunnel rule */
 	uint8_t pre_tun_bitmap[NFP_TUN_PRE_TUN_RULE_LIMIT]; /**< Bitmap of pre tunnel rule */
 	struct rte_hash *pre_tun_table; /**< Hash table to store pre tunnel rule */
@@ -137,7 +137,7 @@ struct nfp_flow_priv {
 	/* IPv6 off */
 	LIST_HEAD(, nfp_ipv6_addr_entry) ipv6_off_list; /**< Store ipv6 off */
 	rte_spinlock_t ipv6_off_lock; /**< Lock the ipv6 off list */
-	/* neighbor next */
+	/* Neighbor next */
 	LIST_HEAD(, nfp_fl_tun)nn_list; /**< Store nn entry */
 };
 
diff --git a/drivers/net/nfp/nfp_rxtx.c b/drivers/net/nfp/nfp_rxtx.c
index 5bfdfd28b3..7b77351f1c 100644
--- a/drivers/net/nfp/nfp_rxtx.c
+++ b/drivers/net/nfp/nfp_rxtx.c
@@ -20,43 +20,22 @@
 /* Maximum number of supported VLANs in parsed form packet metadata. */
 #define NFP_META_MAX_VLANS       2
 
-/*
- * struct nfp_meta_parsed - Record metadata parsed from packet
- *
- * Parsed NFP packet metadata are recorded in this struct. The content is
- * read-only after it have been recorded during parsing by nfp_net_parse_meta().
- *
- * @port_id: Port id value
- * @sa_idx: IPsec SA index
- * @hash: RSS hash value
- * @hash_type: RSS hash type
- * @ipsec_type: IPsec type
- * @vlan_layer: The layers of VLAN info which are passed from nic.
- *              Only this number of entries of the @vlan array are valid.
- *
- * @vlan: Holds information parses from NFP_NET_META_VLAN. The inner most vlan
- *        starts at position 0 and only @vlan_layer entries contain valid
- *        information.
- *
- *        Currently only 2 layers of vlan are supported,
- *        vlan[0] - vlan strip info
- *        vlan[1] - qinq strip info
- *
- * @vlan.offload:  Flag indicates whether VLAN is offloaded
- * @vlan.tpid: Vlan TPID
- * @vlan.tci: Vlan TCI including PCP + Priority + VID
- */
+/* Record metadata parsed from packet */
 struct nfp_meta_parsed {
-	uint32_t port_id;
-	uint32_t sa_idx;
-	uint32_t hash;
-	uint8_t hash_type;
-	uint8_t ipsec_type;
-	uint8_t vlan_layer;
+	uint32_t port_id;         /**< Port id value */
+	uint32_t sa_idx;          /**< IPsec SA index */
+	uint32_t hash;            /**< RSS hash value */
+	uint8_t hash_type;        /**< RSS hash type */
+	uint8_t ipsec_type;       /**< IPsec type */
+	uint8_t vlan_layer;       /**< Number of valid entries in @vlan[] */
+	/**
+	 * Holds information parsed from NFP_NET_META_VLAN.
+	 * The innermost vlan starts at position 0.
+	 */
 	struct {
-		uint8_t offload;
-		uint8_t tpid;
-		uint16_t tci;
+		uint8_t offload;  /**< Flag indicates whether VLAN is offloaded */
+		uint8_t tpid;     /**< Vlan TPID */
+		uint16_t tci;     /**< Vlan TCI (PCP + Priority + VID) */
 	} vlan[NFP_META_MAX_VLANS];
 };
 
@@ -156,7 +135,7 @@ struct nfp_ptype_parsed {
 	uint8_t outer_l3_ptype; /**< Packet type of outer layer 3. */
 };
 
-/* set mbuf checksum flags based on RX descriptor flags */
+/* Set mbuf checksum flags based on RX descriptor flags */
 void
 nfp_net_rx_cksum(struct nfp_net_rxq *rxq,
 		struct nfp_net_rx_desc *rxd,
@@ -254,7 +233,7 @@ nfp_net_rx_queue_count(void *rx_queue)
 	 * descriptors and counting all four if the first has the DD
 	 * bit on. Of course, this is not accurate but can be good for
 	 * performance. But ideally that should be done in descriptors
-	 * chunks belonging to the same cache line
+	 * chunks belonging to the same cache line.
 	 */
 
 	while (count < rxq->rx_count) {
@@ -265,7 +244,7 @@ nfp_net_rx_queue_count(void *rx_queue)
 		count++;
 		idx++;
 
-		/* Wrapping? */
+		/* Wrapping */
 		if ((idx) == rxq->rx_count)
 			idx = 0;
 	}
@@ -273,7 +252,7 @@ nfp_net_rx_queue_count(void *rx_queue)
 	return count;
 }
 
-/* nfp_net_parse_chained_meta() - Parse the chained metadata from packet */
+/* Parse the chained metadata from packet */
 static bool
 nfp_net_parse_chained_meta(uint8_t *meta_base,
 		rte_be32_t meta_header,
@@ -320,12 +299,7 @@ nfp_net_parse_chained_meta(uint8_t *meta_base,
 	return true;
 }
 
-/*
- * nfp_net_parse_meta_hash() - Set mbuf hash data based on the metadata info
- *
- * The RSS hash and hash-type are prepended to the packet data.
- * Extract and decode it and set the mbuf fields.
- */
+/* Set mbuf hash data based on the metadata info */
 static void
 nfp_net_parse_meta_hash(const struct nfp_meta_parsed *meta,
 		struct nfp_net_rxq *rxq,
@@ -341,7 +315,7 @@ nfp_net_parse_meta_hash(const struct nfp_meta_parsed *meta,
 }
 
 /*
- * nfp_net_parse_single_meta() - Parse the single metadata
+ * Parse the single metadata
  *
  * The RSS hash and hash-type are prepended to the packet data.
  * Get it from metadata area.
@@ -355,12 +329,7 @@ nfp_net_parse_single_meta(uint8_t *meta_base,
 	meta->hash = rte_be_to_cpu_32(*(rte_be32_t *)(meta_base + 4));
 }
 
-/*
- * nfp_net_parse_meta_vlan() - Set mbuf vlan_strip data based on metadata info
- *
- * The VLAN info TPID and TCI are prepended to the packet data.
- * Extract and decode it and set the mbuf fields.
- */
+/* Set mbuf vlan_strip data based on metadata info */
 static void
 nfp_net_parse_meta_vlan(const struct nfp_meta_parsed *meta,
 		struct nfp_net_rx_desc *rxd,
@@ -369,19 +338,14 @@ nfp_net_parse_meta_vlan(const struct nfp_meta_parsed *meta,
 {
 	struct nfp_net_hw *hw = rxq->hw;
 
-	/* Skip if hardware don't support setting vlan. */
+	/* Skip if firmware doesn't support setting vlan. */
 	if ((hw->ctrl & (NFP_NET_CFG_CTRL_RXVLAN | NFP_NET_CFG_CTRL_RXVLAN_V2)) == 0)
 		return;
 
 	/*
-	 * The nic support the two way to send the VLAN info,
-	 * 1. According the metadata to send the VLAN info when NFP_NET_CFG_CTRL_RXVLAN_V2
-	 * is set
-	 * 2. According the descriptor to sned the VLAN info when NFP_NET_CFG_CTRL_RXVLAN
-	 * is set
-	 *
-	 * If the nic doesn't send the VLAN info, it is not necessary
-	 * to do anything.
+	 * The firmware supports two ways to send the VLAN info (in priority order):
+	 * 1. Using the metadata when NFP_NET_CFG_CTRL_RXVLAN_V2 is set,
+	 * 2. Using the descriptor when NFP_NET_CFG_CTRL_RXVLAN is set.
 	 */
 	if ((hw->ctrl & NFP_NET_CFG_CTRL_RXVLAN_V2) != 0) {
 		if (meta->vlan_layer > 0 && meta->vlan[0].offload != 0) {
@@ -397,7 +361,7 @@ nfp_net_parse_meta_vlan(const struct nfp_meta_parsed *meta,
 }
 
 /*
- * nfp_net_parse_meta_qinq() - Set mbuf qinq_strip data based on metadata info
+ * Set mbuf qinq_strip data based on metadata info
  *
  * The out VLAN tci are prepended to the packet data.
  * Extract and decode it and set the mbuf fields.
@@ -469,7 +433,7 @@ nfp_net_parse_meta_ipsec(struct nfp_meta_parsed *meta,
 	}
 }
 
-/* nfp_net_parse_meta() - Parse the metadata from packet */
+/* Parse the metadata from packet */
 static void
 nfp_net_parse_meta(struct nfp_net_rx_desc *rxds,
 		struct nfp_net_rxq *rxq,
@@ -672,7 +636,7 @@ nfp_net_parse_ptype(struct nfp_net_rx_desc *rxds,
  * doing now have any benefit at all. Again, tests with this change have not
  * shown any improvement. Also, rte_mempool_get_bulk returns all or nothing
  * so looking at the implications of this type of allocation should be studied
- * deeply
+ * deeply.
  */
 
 uint16_t
@@ -803,7 +767,7 @@ nfp_net_recv_pkts(void *rx_queue,
 		nb_hold++;
 
 		rxq->rd_p++;
-		if (unlikely(rxq->rd_p == rxq->rx_count)) /* wrapping?*/
+		if (unlikely(rxq->rd_p == rxq->rx_count)) /* Wrapping */
 			rxq->rd_p = 0;
 	}
 
@@ -951,7 +915,7 @@ nfp_net_rx_queue_setup(struct rte_eth_dev *dev,
 	rxq->dma = (uint64_t)tz->iova;
 	rxq->rxds = tz->addr;
 
-	/* mbuf pointers array for referencing mbufs linked to RX descriptors */
+	/* Mbuf pointers array for referencing mbufs linked to RX descriptors */
 	rxq->rxbufs = rte_zmalloc_socket("rxq->rxbufs",
 			sizeof(*rxq->rxbufs) * nb_desc, RTE_CACHE_LINE_SIZE,
 			socket_id);
@@ -975,11 +939,14 @@ nfp_net_rx_queue_setup(struct rte_eth_dev *dev,
 	return 0;
 }
 
-/*
- * nfp_net_tx_free_bufs - Check for descriptors with a complete
- * status
- * @txq: TX queue to work with
- * Returns number of descriptors freed
+/**
+ * Check for descriptors with a complete status
+ *
+ * @param txq
+ *   TX queue to work with
+ *
+ * @return
+ *   Number of descriptors freed
  */
 uint32_t
 nfp_net_tx_free_bufs(struct nfp_net_txq *txq)
diff --git a/drivers/net/nfp/nfp_rxtx.h b/drivers/net/nfp/nfp_rxtx.h
index 98ef6c3d93..899cc42c97 100644
--- a/drivers/net/nfp/nfp_rxtx.h
+++ b/drivers/net/nfp/nfp_rxtx.h
@@ -19,21 +19,11 @@
 /* Maximum number of NFP packet metadata fields. */
 #define NFP_META_MAX_FIELDS      8
 
-/*
- * struct nfp_net_meta_raw - Raw memory representation of packet metadata
- *
- * Describe the raw metadata format, useful when preparing metadata for a
- * transmission mbuf.
- *
- * @header: NFD3 or NFDk field type header (see format in nfp.rst)
- * @data: Array of each fields data member
- * @length: Keep track of number of valid fields in @header and data. Not part
- *          of the raw metadata.
- */
+/* Describe the raw metadata format. */
 struct nfp_net_meta_raw {
-	uint32_t header;
-	uint32_t data[NFP_META_MAX_FIELDS];
-	uint8_t length;
+	uint32_t header; /**< Field type header (see format in nfp.rst) */
+	uint32_t data[NFP_META_MAX_FIELDS]; /**< Array of each fields data member */
+	uint8_t length; /**< Number of valid fields in @header */
 };
 
 /* Descriptor alignment */
-- 
2.39.1


^ permalink raw reply	[flat|nested] 40+ messages in thread

* [PATCH 07/11] net/nfp: standard the blank character
  2023-10-07  2:33 [PATCH 00/11] Unify the PMD coding style Chaoyong He
                   ` (5 preceding siblings ...)
  2023-10-07  2:33 ` [PATCH 06/11] net/nfp: standard the comment style Chaoyong He
@ 2023-10-07  2:33 ` Chaoyong He
  2023-10-07  2:33 ` [PATCH 08/11] net/nfp: unify the guide line of header file Chaoyong He
                   ` (4 subsequent siblings)
  11 siblings, 0 replies; 40+ messages in thread
From: Chaoyong He @ 2023-10-07  2:33 UTC (permalink / raw)
  To: dev; +Cc: oss-drivers, Chaoyong He, Long Wu, Peng Zhang

Use space characters for alignment instead of TAB characters.
There should be exactly one blank line separating blocks of logic,
no more, no less.

Signed-off-by: Chaoyong He <chaoyong.he@corigine.com>
Reviewed-by: Long Wu <long.wu@corigine.com>
Reviewed-by: Peng Zhang <peng.zhang@corigine.com>
---
 drivers/net/nfp/nfp_common.c     | 39 +++++++++++----------
 drivers/net/nfp/nfp_common.h     |  6 ++--
 drivers/net/nfp/nfp_cpp_bridge.c |  5 +++
 drivers/net/nfp/nfp_ctrl.h       |  6 ++--
 drivers/net/nfp/nfp_ethdev.c     | 58 ++++++++++++++++----------------
 drivers/net/nfp/nfp_ethdev_vf.c  | 49 +++++++++++++--------------
 drivers/net/nfp/nfp_flow.c       | 27 +++++++++------
 drivers/net/nfp/nfp_flow.h       |  7 ++++
 drivers/net/nfp/nfp_rxtx.c       |  7 ++--
 9 files changed, 113 insertions(+), 91 deletions(-)
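
To make the blank-line rule concrete, here is a hedged sketch of the intended layout; all names are hypothetical and the code is illustrative only, not taken from the driver:

static int
example_dev_configure(struct example_dev *dev)
{
	int ret;

	/* First logic block: validate capabilities. */
	ret = example_check_caps(dev);
	if (ret != 0)
		return ret;

	/* Exactly one blank line separates this second block. */
	ret = example_apply_conf(dev);
	if (ret != 0)
		return ret;

	return 0;
}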

diff --git a/drivers/net/nfp/nfp_common.c b/drivers/net/nfp/nfp_common.c
index ed3c5c15d2..3409ee8cb8 100644
--- a/drivers/net/nfp/nfp_common.c
+++ b/drivers/net/nfp/nfp_common.c
@@ -36,6 +36,7 @@ enum nfp_xstat_group {
 	NFP_XSTAT_GROUP_NET,
 	NFP_XSTAT_GROUP_MAC
 };
+
 struct nfp_xstat {
 	char name[RTE_ETH_XSTATS_NAME_SIZE];
 	int offset;
@@ -184,6 +185,7 @@ nfp_net_notify_port_speed(struct nfp_net_hw *hw,
 		nn_cfg_writew(hw, NFP_NET_CFG_STS_NSP_LINK_RATE, NFP_NET_CFG_STS_LINK_RATE_UNKNOWN);
 		return;
 	}
+
 	/*
 	 * Link is up so write the link speed from the eth_table to
 	 * NFP_NET_CFG_STS_NSP_LINK_RATE.
@@ -223,17 +225,21 @@ __nfp_net_reconfig(struct nfp_net_hw *hw,
 		new = nn_cfg_readl(hw, NFP_NET_CFG_UPDATE);
 		if (new == 0)
 			break;
+
 		if ((new & NFP_NET_CFG_UPDATE_ERR) != 0) {
 			PMD_DRV_LOG(ERR, "Reconfig error: %#08x", new);
 			return -1;
 		}
+
 		if (cnt >= NFP_NET_POLL_TIMEOUT) {
 			PMD_DRV_LOG(ERR, "Reconfig timeout for %#08x after %u ms",
 					update, cnt);
 			return -EIO;
 		}
+
 		nanosleep(&wait, 0); /* Waiting for a 1ms */
 	}
+
 	PMD_DRV_LOG(DEBUG, "Ack DONE");
 	return 0;
 }
@@ -387,7 +393,6 @@ nfp_net_configure(struct rte_eth_dev *dev)
 	struct rte_eth_txmode *txmode;
 
 	hw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private);
-
 	dev_conf = &dev->data->dev_conf;
 	rxmode = &dev_conf->rxmode;
 	txmode = &dev_conf->txmode;
@@ -560,11 +565,13 @@ nfp_net_set_mac_addr(struct rte_eth_dev *dev,
 	if ((hw->ctrl & NFP_NET_CFG_CTRL_ENABLE) != 0 &&
 			(hw->cap & NFP_NET_CFG_CTRL_LIVE_ADDR) != 0)
 		ctrl |= NFP_NET_CFG_CTRL_LIVE_ADDR;
+
 	/* Signal the NIC about the change */
 	if (nfp_net_reconfig(hw, ctrl, update) != 0) {
 		PMD_DRV_LOG(ERR, "MAC address update failed");
 		return -EIO;
 	}
+
 	return 0;
 }
 
@@ -832,13 +839,11 @@ nfp_net_stats_get(struct rte_eth_dev *dev,
 
 		nfp_dev_stats.q_ipackets[i] =
 				nn_cfg_readq(hw, NFP_NET_CFG_RXR_STATS(i));
-
 		nfp_dev_stats.q_ipackets[i] -=
 				hw->eth_stats_base.q_ipackets[i];
 
 		nfp_dev_stats.q_ibytes[i] =
 				nn_cfg_readq(hw, NFP_NET_CFG_RXR_STATS(i) + 0x8);
-
 		nfp_dev_stats.q_ibytes[i] -=
 				hw->eth_stats_base.q_ibytes[i];
 	}
@@ -850,42 +855,34 @@ nfp_net_stats_get(struct rte_eth_dev *dev,
 
 		nfp_dev_stats.q_opackets[i] =
 				nn_cfg_readq(hw, NFP_NET_CFG_TXR_STATS(i));
-
 		nfp_dev_stats.q_opackets[i] -= hw->eth_stats_base.q_opackets[i];
 
 		nfp_dev_stats.q_obytes[i] =
 				nn_cfg_readq(hw, NFP_NET_CFG_TXR_STATS(i) + 0x8);
-
 		nfp_dev_stats.q_obytes[i] -= hw->eth_stats_base.q_obytes[i];
 	}
 
 	nfp_dev_stats.ipackets = nn_cfg_readq(hw, NFP_NET_CFG_STATS_RX_FRAMES);
-
 	nfp_dev_stats.ipackets -= hw->eth_stats_base.ipackets;
 
 	nfp_dev_stats.ibytes = nn_cfg_readq(hw, NFP_NET_CFG_STATS_RX_OCTETS);
-
 	nfp_dev_stats.ibytes -= hw->eth_stats_base.ibytes;
 
 	nfp_dev_stats.opackets =
 			nn_cfg_readq(hw, NFP_NET_CFG_STATS_TX_FRAMES);
-
 	nfp_dev_stats.opackets -= hw->eth_stats_base.opackets;
 
 	nfp_dev_stats.obytes =
 			nn_cfg_readq(hw, NFP_NET_CFG_STATS_TX_OCTETS);
-
 	nfp_dev_stats.obytes -= hw->eth_stats_base.obytes;
 
 	/* Reading general device stats */
 	nfp_dev_stats.ierrors =
 			nn_cfg_readq(hw, NFP_NET_CFG_STATS_RX_ERRORS);
-
 	nfp_dev_stats.ierrors -= hw->eth_stats_base.ierrors;
 
 	nfp_dev_stats.oerrors =
 			nn_cfg_readq(hw, NFP_NET_CFG_STATS_TX_ERRORS);
-
 	nfp_dev_stats.oerrors -= hw->eth_stats_base.oerrors;
 
 	/* RX ring mbuf allocation failures */
@@ -893,7 +890,6 @@ nfp_net_stats_get(struct rte_eth_dev *dev,
 
 	nfp_dev_stats.imissed =
 			nn_cfg_readq(hw, NFP_NET_CFG_STATS_RX_DISCARDS);
-
 	nfp_dev_stats.imissed -= hw->eth_stats_base.imissed;
 
 	if (stats != NULL) {
@@ -981,6 +977,7 @@ nfp_net_xstats_size(const struct rte_eth_dev *dev)
 			if (nfp_net_xstats[count].group == NFP_XSTAT_GROUP_MAC)
 				break;
 		}
+
 		return count;
 	}
 
@@ -1154,6 +1151,7 @@ nfp_net_xstats_reset(struct rte_eth_dev *dev)
 		hw->eth_xstats_base[id].id = id;
 		hw->eth_xstats_base[id].value = nfp_net_xstats_value(dev, id, true);
 	}
+
 	/* Successfully reset xstats, now call function to reset basic stats. */
 	return nfp_net_stats_reset(dev);
 }
@@ -1201,6 +1199,7 @@ nfp_net_infos_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 	dev_info->max_rx_queues = (uint16_t)hw->max_rx_queues;
 	dev_info->max_tx_queues = (uint16_t)hw->max_tx_queues;
 	dev_info->min_rx_bufsize = RTE_ETHER_MIN_MTU;
+
 	/**
 	 * The maximum rx packet length (max_rx_pktlen) is set to the
 	 * maximum supported frame size that the NFP can handle. This
@@ -1368,6 +1367,7 @@ nfp_net_supported_ptypes_get(struct rte_eth_dev *dev)
 
 	if (dev->rx_pkt_burst == nfp_net_recv_pkts)
 		return ptypes;
+
 	return NULL;
 }
 
@@ -1381,7 +1381,6 @@ nfp_rx_queue_intr_enable(struct rte_eth_dev *dev,
 
 	hw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private);
 	pci_dev = RTE_ETH_DEV_TO_PCI(dev);
-
 	if (rte_intr_type_get(pci_dev->intr_handle) != RTE_INTR_HANDLE_UIO)
 		base = 1;
 
@@ -1402,7 +1401,6 @@ nfp_rx_queue_intr_disable(struct rte_eth_dev *dev,
 
 	hw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private);
 	pci_dev = RTE_ETH_DEV_TO_PCI(dev);
-
 	if (rte_intr_type_get(pci_dev->intr_handle) != RTE_INTR_HANDLE_UIO)
 		base = 1;
 
@@ -1619,11 +1617,11 @@ nfp_net_rss_reta_write(struct rte_eth_dev *dev,
 		idx = i / RTE_ETH_RETA_GROUP_SIZE;
 		shift = i % RTE_ETH_RETA_GROUP_SIZE;
 		mask = (uint8_t)((reta_conf[idx].mask >> shift) & 0xF);
-
 		if (mask == 0)
 			continue;
 
 		reta = 0;
+
 		/* If all 4 entries were set, don't need read RETA register */
 		if (mask != 0xF)
 			reta = nn_cfg_readl(hw, NFP_NET_CFG_RSS_ITBL + i);
@@ -1631,13 +1629,17 @@ nfp_net_rss_reta_write(struct rte_eth_dev *dev,
 		for (j = 0; j < 4; j++) {
 			if ((mask & (0x1 << j)) == 0)
 				continue;
+
 			/* Clearing the entry bits */
 			if (mask != 0xF)
 				reta &= ~(0xFF << (8 * j));
+
 			reta |= reta_conf[idx].reta[shift + j] << (8 * j);
 		}
+
 		nn_cfg_writel(hw, NFP_NET_CFG_RSS_ITBL + (idx * 64) + shift, reta);
 	}
+
 	return 0;
 }
 
@@ -1682,7 +1684,6 @@ nfp_net_reta_query(struct rte_eth_dev *dev,
 	struct nfp_net_hw *hw;
 
 	hw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private);
-
 	if ((hw->ctrl & NFP_NET_CFG_CTRL_RSS_ANY) == 0)
 		return -EINVAL;
 
@@ -1710,10 +1711,12 @@ nfp_net_reta_query(struct rte_eth_dev *dev,
 		for (j = 0; j < 4; j++) {
 			if ((mask & (0x1 << j)) == 0)
 				continue;
+
 			reta_conf[idx].reta[shift + j] =
 					(uint8_t)((reta >> (8 * j)) & 0xF);
 		}
 	}
+
 	return 0;
 }
 
@@ -1791,6 +1794,7 @@ nfp_net_rss_hash_update(struct rte_eth_dev *dev,
 			PMD_DRV_LOG(ERR, "RSS unsupported");
 			return -EINVAL;
 		}
+
 		return 0; /* Nothing to do */
 	}
 
@@ -1888,6 +1892,7 @@ nfp_net_rss_config_default(struct rte_eth_dev *dev)
 			queue %= rx_queues;
 		}
 	}
+
 	ret = nfp_net_rss_reta_write(dev, nfp_reta_conf, 0x80);
 	if (ret != 0)
 		return ret;
@@ -1897,8 +1902,8 @@ nfp_net_rss_config_default(struct rte_eth_dev *dev)
 		PMD_DRV_LOG(ERR, "Wrong rss conf");
 		return -EINVAL;
 	}
-	rss_conf = dev_conf->rx_adv_conf.rss_conf;
 
+	rss_conf = dev_conf->rx_adv_conf.rss_conf;
 	ret = nfp_net_rss_hash_write(dev, &rss_conf);
 
 	return ret;
diff --git a/drivers/net/nfp/nfp_common.h b/drivers/net/nfp/nfp_common.h
index b41d834165..27dc2175e3 100644
--- a/drivers/net/nfp/nfp_common.h
+++ b/drivers/net/nfp/nfp_common.h
@@ -32,7 +32,7 @@
 #define DEFAULT_RX_HTHRESH      8
 #define DEFAULT_RX_WTHRESH      0
 
-#define DEFAULT_TX_RS_THRESH	32
+#define DEFAULT_TX_RS_THRESH    32
 #define DEFAULT_TX_FREE_THRESH  32
 #define DEFAULT_TX_PTHRESH      32
 #define DEFAULT_TX_HTHRESH      0
@@ -40,12 +40,12 @@
 #define DEFAULT_TX_RSBIT_THRESH 32
 
 /* Alignment for dma zones */
-#define NFP_MEMZONE_ALIGN	128
+#define NFP_MEMZONE_ALIGN       128
 
 #define NFP_QCP_QUEUE_ADDR_SZ   (0x800)
 
 /* Number of supported physical ports */
-#define NFP_MAX_PHYPORTS	12
+#define NFP_MAX_PHYPORTS        12
 
 /* Firmware application ID's */
 enum nfp_app_fw_id {
diff --git a/drivers/net/nfp/nfp_cpp_bridge.c b/drivers/net/nfp/nfp_cpp_bridge.c
index b5bfe17d0e..080070f58b 100644
--- a/drivers/net/nfp/nfp_cpp_bridge.c
+++ b/drivers/net/nfp/nfp_cpp_bridge.c
@@ -191,6 +191,7 @@ nfp_cpp_bridge_serve_write(int sockfd,
 				nfp_cpp_area_free(area);
 				return -EIO;
 			}
+
 			err = nfp_cpp_area_write(area, pos, tmpbuf, len);
 			if (err < 0) {
 				PMD_CPP_LOG(ERR, "nfp_cpp_area_write error");
@@ -312,6 +313,7 @@ nfp_cpp_bridge_serve_read(int sockfd,
 		curlen = (count > NFP_CPP_MEMIO_BOUNDARY) ?
 				NFP_CPP_MEMIO_BOUNDARY : count;
 	}
+
 	return 0;
 }
 
@@ -393,6 +395,7 @@ nfp_cpp_bridge_service_func(void *args)
 	struct timeval timeout = {1, 0};
 
 	unlink("/tmp/nfp_cpp");
+
 	sockfd = socket(AF_UNIX, SOCK_STREAM, 0);
 	if (sockfd < 0) {
 		PMD_CPP_LOG(ERR, "socket creation error. Service failed");
@@ -456,8 +459,10 @@ nfp_cpp_bridge_service_func(void *args)
 			if (op == 0)
 				break;
 		}
+
 		close(datafd);
 	}
+
 	close(sockfd);
 
 	return 0;
diff --git a/drivers/net/nfp/nfp_ctrl.h b/drivers/net/nfp/nfp_ctrl.h
index a13f95894a..ef8bf486cb 100644
--- a/drivers/net/nfp/nfp_ctrl.h
+++ b/drivers/net/nfp/nfp_ctrl.h
@@ -208,8 +208,8 @@ struct nfp_net_fw_ver {
 /*
  * NFP6000/NFP4000 - Prepend configuration
  */
-#define NFP_NET_CFG_RX_OFFSET		0x0050
-#define NFP_NET_CFG_RX_OFFSET_DYNAMIC		0	/* Prepend mode */
+#define NFP_NET_CFG_RX_OFFSET           0x0050
+#define NFP_NET_CFG_RX_OFFSET_DYNAMIC          0    /* Prepend mode */
 
 /* Start anchor of the TLV area */
 #define NFP_NET_CFG_TLV_BASE            0x0058
@@ -442,7 +442,7 @@ struct nfp_net_fw_ver {
 #define NFP_MAC_STATS_TX_PAUSE_FRAMES_CLASS6    (NFP_MAC_STATS_BASE + 0x1f0)
 #define NFP_MAC_STATS_TX_PAUSE_FRAMES_CLASS7    (NFP_MAC_STATS_BASE + 0x1f8)
 
-#define NFP_PF_CSR_SLICE_SIZE	(32 * 1024)
+#define NFP_PF_CSR_SLICE_SIZE    (32 * 1024)
 
 /*
  * General use mailbox area (0x1800 - 0x19ff)
diff --git a/drivers/net/nfp/nfp_ethdev.c b/drivers/net/nfp/nfp_ethdev.c
index dece821e4a..0493548c81 100644
--- a/drivers/net/nfp/nfp_ethdev.c
+++ b/drivers/net/nfp/nfp_ethdev.c
@@ -36,6 +36,7 @@ nfp_net_pf_read_mac(struct nfp_app_fw_nic *app_fw_nic,
 	rte_ether_addr_copy(&nfp_eth_table->ports[port].mac_addr, &hw->mac_addr);
 
 	free(nfp_eth_table);
+
 	return 0;
 }
 
@@ -73,6 +74,7 @@ nfp_net_start(struct rte_eth_dev *dev)
 					"with NFP multiport PF");
 				return -EINVAL;
 		}
+
 		if (rte_intr_type_get(intr_handle) == RTE_INTR_HANDLE_UIO) {
 			/*
 			 * Better not to share LSC with RX interrupts.
@@ -87,6 +89,7 @@ nfp_net_start(struct rte_eth_dev *dev)
 				return -EIO;
 			}
 		}
+
 		intr_vector = dev->data->nb_rx_queues;
 		if (rte_intr_efd_enable(intr_handle, intr_vector) != 0)
 			return -1;
@@ -198,7 +201,6 @@ nfp_net_stop(struct rte_eth_dev *dev)
 
 	/* Clear queues */
 	nfp_net_stop_tx_queue(dev);
-
 	nfp_net_stop_rx_queue(dev);
 
 	if (rte_eal_process_type() == RTE_PROC_PRIMARY)
@@ -262,12 +264,10 @@ nfp_net_close(struct rte_eth_dev *dev)
 	 * We assume that the DPDK application is stopping all the
 	 * threads/queues before calling the device close function.
 	 */
-
 	nfp_net_disable_queues(dev);
 
 	/* Clear queues */
 	nfp_net_close_tx_queue(dev);
-
 	nfp_net_close_rx_queue(dev);
 
 	/* Clear ipsec */
@@ -413,35 +413,35 @@ nfp_udp_tunnel_port_del(struct rte_eth_dev *dev,
 
 /* Initialise and register driver with DPDK Application */
 static const struct eth_dev_ops nfp_net_eth_dev_ops = {
-	.dev_configure		= nfp_net_configure,
-	.dev_start		= nfp_net_start,
-	.dev_stop		= nfp_net_stop,
-	.dev_set_link_up	= nfp_net_set_link_up,
-	.dev_set_link_down	= nfp_net_set_link_down,
-	.dev_close		= nfp_net_close,
-	.promiscuous_enable	= nfp_net_promisc_enable,
-	.promiscuous_disable	= nfp_net_promisc_disable,
-	.link_update		= nfp_net_link_update,
-	.stats_get		= nfp_net_stats_get,
-	.stats_reset		= nfp_net_stats_reset,
+	.dev_configure          = nfp_net_configure,
+	.dev_start              = nfp_net_start,
+	.dev_stop               = nfp_net_stop,
+	.dev_set_link_up        = nfp_net_set_link_up,
+	.dev_set_link_down      = nfp_net_set_link_down,
+	.dev_close              = nfp_net_close,
+	.promiscuous_enable     = nfp_net_promisc_enable,
+	.promiscuous_disable    = nfp_net_promisc_disable,
+	.link_update            = nfp_net_link_update,
+	.stats_get              = nfp_net_stats_get,
+	.stats_reset            = nfp_net_stats_reset,
 	.xstats_get             = nfp_net_xstats_get,
 	.xstats_reset           = nfp_net_xstats_reset,
 	.xstats_get_names       = nfp_net_xstats_get_names,
 	.xstats_get_by_id       = nfp_net_xstats_get_by_id,
 	.xstats_get_names_by_id = nfp_net_xstats_get_names_by_id,
-	.dev_infos_get		= nfp_net_infos_get,
+	.dev_infos_get          = nfp_net_infos_get,
 	.dev_supported_ptypes_get = nfp_net_supported_ptypes_get,
-	.mtu_set		= nfp_net_dev_mtu_set,
-	.mac_addr_set		= nfp_net_set_mac_addr,
-	.vlan_offload_set	= nfp_net_vlan_offload_set,
-	.reta_update		= nfp_net_reta_update,
-	.reta_query		= nfp_net_reta_query,
-	.rss_hash_update	= nfp_net_rss_hash_update,
-	.rss_hash_conf_get	= nfp_net_rss_hash_conf_get,
-	.rx_queue_setup		= nfp_net_rx_queue_setup,
-	.rx_queue_release	= nfp_net_rx_queue_release,
-	.tx_queue_setup		= nfp_net_tx_queue_setup,
-	.tx_queue_release	= nfp_net_tx_queue_release,
+	.mtu_set                = nfp_net_dev_mtu_set,
+	.mac_addr_set           = nfp_net_set_mac_addr,
+	.vlan_offload_set       = nfp_net_vlan_offload_set,
+	.reta_update            = nfp_net_reta_update,
+	.reta_query             = nfp_net_reta_query,
+	.rss_hash_update        = nfp_net_rss_hash_update,
+	.rss_hash_conf_get      = nfp_net_rss_hash_conf_get,
+	.rx_queue_setup         = nfp_net_rx_queue_setup,
+	.rx_queue_release       = nfp_net_rx_queue_release,
+	.tx_queue_setup         = nfp_net_tx_queue_setup,
+	.tx_queue_release       = nfp_net_tx_queue_release,
 	.rx_queue_intr_enable   = nfp_rx_queue_intr_enable,
 	.rx_queue_intr_disable  = nfp_rx_queue_intr_disable,
 	.udp_tunnel_port_add    = nfp_udp_tunnel_port_add,
@@ -501,7 +501,6 @@ nfp_net_init(struct rte_eth_dev *eth_dev)
 
 	rte_eth_copy_pci_info(eth_dev, pci_dev);
 
-
 	hw->ctrl_bar = pci_dev->mem_resource[0].addr;
 	if (hw->ctrl_bar == NULL) {
 		PMD_DRV_LOG(ERR, "hw->ctrl_bar is NULL. BAR0 not configured");
@@ -519,10 +518,12 @@ nfp_net_init(struct rte_eth_dev *eth_dev)
 			PMD_INIT_LOG(ERR, "nfp_rtsym_map fails for _mac_stats_bar");
 			return -EIO;
 		}
+
 		hw->mac_stats = hw->mac_stats_bar;
 	} else {
 		if (pf_dev->ctrl_bar == NULL)
 			return -ENODEV;
+
 		/* Use port offset in pf ctrl_bar for this ports control bar */
 		hw->ctrl_bar = pf_dev->ctrl_bar + (port * NFP_PF_CSR_SLICE_SIZE);
 		hw->mac_stats = app_fw_nic->ports[0]->mac_stats_bar + (port * NFP_MAC_STATS_SIZE);
@@ -557,7 +558,6 @@ nfp_net_init(struct rte_eth_dev *eth_dev)
 		return -ENOMEM;
 	}
 
-
 	/* Work out where in the BAR the queues start. */
 	tx_base = nn_cfg_readl(hw, NFP_NET_CFG_START_TXQ);
 	rx_base = nn_cfg_readl(hw, NFP_NET_CFG_START_RXQ);
@@ -653,12 +653,12 @@ nfp_fw_upload(struct rte_pci_device *dev,
 			"serial-%02x-%02x-%02x-%02x-%02x-%02x-%02x-%02x",
 			cpp_serial[0], cpp_serial[1], cpp_serial[2], cpp_serial[3],
 			cpp_serial[4], cpp_serial[5], interface >> 8, interface & 0xff);
-
 	snprintf(fw_name, sizeof(fw_name), "%s/%s.nffw", DEFAULT_FW_PATH, serial);
 
 	PMD_DRV_LOG(DEBUG, "Trying with fw file: %s", fw_name);
 	if (rte_firmware_read(fw_name, &fw_buf, &fsize) == 0)
 		goto load_fw;
+
 	/* Then try the PCI name */
 	snprintf(fw_name, sizeof(fw_name), "%s/pci-%s.nffw", DEFAULT_FW_PATH,
 			dev->name);
diff --git a/drivers/net/nfp/nfp_ethdev_vf.c b/drivers/net/nfp/nfp_ethdev_vf.c
index 0a1eb04294..8053808b02 100644
--- a/drivers/net/nfp/nfp_ethdev_vf.c
+++ b/drivers/net/nfp/nfp_ethdev_vf.c
@@ -63,6 +63,7 @@ nfp_netvf_start(struct rte_eth_dev *dev)
 				return -EIO;
 			}
 		}
+
 		intr_vector = dev->data->nb_rx_queues;
 		if (rte_intr_efd_enable(intr_handle, intr_vector) != 0)
 			return -1;
@@ -172,12 +173,10 @@ nfp_netvf_close(struct rte_eth_dev *dev)
 	 * We assume that the DPDK application is stopping all the
 	 * threads/queues before calling the device close function.
 	 */
-
 	nfp_net_disable_queues(dev);
 
 	/* Clear queues */
 	nfp_net_close_tx_queue(dev);
-
 	nfp_net_close_rx_queue(dev);
 
 	rte_intr_disable(pci_dev->intr_handle);
@@ -194,35 +193,35 @@ nfp_netvf_close(struct rte_eth_dev *dev)
 
 /* Initialise and register VF driver with DPDK Application */
 static const struct eth_dev_ops nfp_netvf_eth_dev_ops = {
-	.dev_configure		= nfp_net_configure,
-	.dev_start		= nfp_netvf_start,
-	.dev_stop		= nfp_netvf_stop,
-	.dev_set_link_up	= nfp_netvf_set_link_up,
-	.dev_set_link_down	= nfp_netvf_set_link_down,
-	.dev_close		= nfp_netvf_close,
-	.promiscuous_enable	= nfp_net_promisc_enable,
-	.promiscuous_disable	= nfp_net_promisc_disable,
-	.link_update		= nfp_net_link_update,
-	.stats_get		= nfp_net_stats_get,
-	.stats_reset		= nfp_net_stats_reset,
+	.dev_configure          = nfp_net_configure,
+	.dev_start              = nfp_netvf_start,
+	.dev_stop               = nfp_netvf_stop,
+	.dev_set_link_up        = nfp_netvf_set_link_up,
+	.dev_set_link_down      = nfp_netvf_set_link_down,
+	.dev_close              = nfp_netvf_close,
+	.promiscuous_enable     = nfp_net_promisc_enable,
+	.promiscuous_disable    = nfp_net_promisc_disable,
+	.link_update            = nfp_net_link_update,
+	.stats_get              = nfp_net_stats_get,
+	.stats_reset            = nfp_net_stats_reset,
 	.xstats_get             = nfp_net_xstats_get,
 	.xstats_reset           = nfp_net_xstats_reset,
 	.xstats_get_names       = nfp_net_xstats_get_names,
 	.xstats_get_by_id       = nfp_net_xstats_get_by_id,
 	.xstats_get_names_by_id = nfp_net_xstats_get_names_by_id,
-	.dev_infos_get		= nfp_net_infos_get,
+	.dev_infos_get          = nfp_net_infos_get,
 	.dev_supported_ptypes_get = nfp_net_supported_ptypes_get,
-	.mtu_set		= nfp_net_dev_mtu_set,
-	.mac_addr_set		= nfp_net_set_mac_addr,
-	.vlan_offload_set	= nfp_net_vlan_offload_set,
-	.reta_update		= nfp_net_reta_update,
-	.reta_query		= nfp_net_reta_query,
-	.rss_hash_update	= nfp_net_rss_hash_update,
-	.rss_hash_conf_get	= nfp_net_rss_hash_conf_get,
-	.rx_queue_setup		= nfp_net_rx_queue_setup,
-	.rx_queue_release	= nfp_net_rx_queue_release,
-	.tx_queue_setup		= nfp_net_tx_queue_setup,
-	.tx_queue_release	= nfp_net_tx_queue_release,
+	.mtu_set                = nfp_net_dev_mtu_set,
+	.mac_addr_set           = nfp_net_set_mac_addr,
+	.vlan_offload_set       = nfp_net_vlan_offload_set,
+	.reta_update            = nfp_net_reta_update,
+	.reta_query             = nfp_net_reta_query,
+	.rss_hash_update        = nfp_net_rss_hash_update,
+	.rss_hash_conf_get      = nfp_net_rss_hash_conf_get,
+	.rx_queue_setup         = nfp_net_rx_queue_setup,
+	.rx_queue_release       = nfp_net_rx_queue_release,
+	.tx_queue_setup         = nfp_net_tx_queue_setup,
+	.tx_queue_release       = nfp_net_tx_queue_release,
 	.rx_queue_intr_enable   = nfp_rx_queue_intr_enable,
 	.rx_queue_intr_disable  = nfp_rx_queue_intr_disable,
 };
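
The nfp_flow.c hunks below also add a space between a compound-literal type and its opening brace. A hedged sketch of that spacing rule on a hypothetical mask definition (illustrative only):

static const struct rte_flow_item_udp *example_udp_mask =
	&(const struct rte_flow_item_udp) {
		.hdr = {
			.src_port = RTE_BE16(0xffff),
			.dst_port = RTE_BE16(0xffff),
		},
	};
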
diff --git a/drivers/net/nfp/nfp_flow.c b/drivers/net/nfp/nfp_flow.c
index 7b1abe926e..a806cbfbeb 100644
--- a/drivers/net/nfp/nfp_flow.c
+++ b/drivers/net/nfp/nfp_flow.c
@@ -464,6 +464,7 @@ nfp_stats_id_alloc(struct nfp_flow_priv *priv, uint32_t *ctx)
 			priv->stats_ids.init_unallocated--;
 			priv->active_mem_unit = 0;
 		}
+
 		return 0;
 	}
 
@@ -590,6 +591,7 @@ nfp_tun_add_ipv6_off(struct nfp_app_fw_flower *app_fw_flower,
 		PMD_DRV_LOG(ERR, "Mem error when offloading IP6 address.");
 		return -ENOMEM;
 	}
+
 	memcpy(tmp_entry->ipv6_addr, ipv6, sizeof(tmp_entry->ipv6_addr));
 	tmp_entry->ref_count = 1;
 
@@ -1760,7 +1762,7 @@ static const struct nfp_flow_item_proc nfp_flow_item_proc_list[] = {
 		.next_item = NEXT_ITEM(RTE_FLOW_ITEM_TYPE_VLAN,
 			RTE_FLOW_ITEM_TYPE_IPV4,
 			RTE_FLOW_ITEM_TYPE_IPV6),
-		.mask_support = &(const struct rte_flow_item_eth){
+		.mask_support = &(const struct rte_flow_item_eth) {
 			.hdr = {
 				.dst_addr.addr_bytes = "\xff\xff\xff\xff\xff\xff",
 				.src_addr.addr_bytes = "\xff\xff\xff\xff\xff\xff",
@@ -1775,7 +1777,7 @@ static const struct nfp_flow_item_proc nfp_flow_item_proc_list[] = {
 	[RTE_FLOW_ITEM_TYPE_VLAN] = {
 		.next_item = NEXT_ITEM(RTE_FLOW_ITEM_TYPE_IPV4,
 			RTE_FLOW_ITEM_TYPE_IPV6),
-		.mask_support = &(const struct rte_flow_item_vlan){
+		.mask_support = &(const struct rte_flow_item_vlan) {
 			.hdr = {
 				.vlan_tci  = RTE_BE16(0xefff),
 				.eth_proto = RTE_BE16(0xffff),
@@ -1791,7 +1793,7 @@ static const struct nfp_flow_item_proc nfp_flow_item_proc_list[] = {
 			RTE_FLOW_ITEM_TYPE_UDP,
 			RTE_FLOW_ITEM_TYPE_SCTP,
 			RTE_FLOW_ITEM_TYPE_GRE),
-		.mask_support = &(const struct rte_flow_item_ipv4){
+		.mask_support = &(const struct rte_flow_item_ipv4) {
 			.hdr = {
 				.type_of_service = 0xff,
 				.fragment_offset = RTE_BE16(0xffff),
@@ -1810,7 +1812,7 @@ static const struct nfp_flow_item_proc nfp_flow_item_proc_list[] = {
 			RTE_FLOW_ITEM_TYPE_UDP,
 			RTE_FLOW_ITEM_TYPE_SCTP,
 			RTE_FLOW_ITEM_TYPE_GRE),
-		.mask_support = &(const struct rte_flow_item_ipv6){
+		.mask_support = &(const struct rte_flow_item_ipv6) {
 			.hdr = {
 				.vtc_flow   = RTE_BE32(0x0ff00000),
 				.proto      = 0xff,
@@ -1827,7 +1829,7 @@ static const struct nfp_flow_item_proc nfp_flow_item_proc_list[] = {
 		.merge = nfp_flow_merge_ipv6,
 	},
 	[RTE_FLOW_ITEM_TYPE_TCP] = {
-		.mask_support = &(const struct rte_flow_item_tcp){
+		.mask_support = &(const struct rte_flow_item_tcp) {
 			.hdr = {
 				.tcp_flags = 0xff,
 				.src_port  = RTE_BE16(0xffff),
@@ -1841,7 +1843,7 @@ static const struct nfp_flow_item_proc nfp_flow_item_proc_list[] = {
 	[RTE_FLOW_ITEM_TYPE_UDP] = {
 		.next_item = NEXT_ITEM(RTE_FLOW_ITEM_TYPE_VXLAN,
 			RTE_FLOW_ITEM_TYPE_GENEVE),
-		.mask_support = &(const struct rte_flow_item_udp){
+		.mask_support = &(const struct rte_flow_item_udp) {
 			.hdr = {
 				.src_port = RTE_BE16(0xffff),
 				.dst_port = RTE_BE16(0xffff),
@@ -1852,7 +1854,7 @@ static const struct nfp_flow_item_proc nfp_flow_item_proc_list[] = {
 		.merge = nfp_flow_merge_udp,
 	},
 	[RTE_FLOW_ITEM_TYPE_SCTP] = {
-		.mask_support = &(const struct rte_flow_item_sctp){
+		.mask_support = &(const struct rte_flow_item_sctp) {
 			.hdr = {
 				.src_port  = RTE_BE16(0xffff),
 				.dst_port  = RTE_BE16(0xffff),
@@ -1864,7 +1866,7 @@ static const struct nfp_flow_item_proc nfp_flow_item_proc_list[] = {
 	},
 	[RTE_FLOW_ITEM_TYPE_VXLAN] = {
 		.next_item = NEXT_ITEM(RTE_FLOW_ITEM_TYPE_ETH),
-		.mask_support = &(const struct rte_flow_item_vxlan){
+		.mask_support = &(const struct rte_flow_item_vxlan) {
 			.hdr = {
 				.vx_vni = RTE_BE32(0xffffff00),
 			},
@@ -1875,7 +1877,7 @@ static const struct nfp_flow_item_proc nfp_flow_item_proc_list[] = {
 	},
 	[RTE_FLOW_ITEM_TYPE_GENEVE] = {
 		.next_item = NEXT_ITEM(RTE_FLOW_ITEM_TYPE_ETH),
-		.mask_support = &(const struct rte_flow_item_geneve){
+		.mask_support = &(const struct rte_flow_item_geneve) {
 			.vni = "\xff\xff\xff",
 		},
 		.mask_default = &rte_flow_item_geneve_mask,
@@ -1884,7 +1886,7 @@ static const struct nfp_flow_item_proc nfp_flow_item_proc_list[] = {
 	},
 	[RTE_FLOW_ITEM_TYPE_GRE] = {
 		.next_item = NEXT_ITEM(RTE_FLOW_ITEM_TYPE_GRE_KEY),
-		.mask_support = &(const struct rte_flow_item_gre){
+		.mask_support = &(const struct rte_flow_item_gre) {
 			.c_rsvd0_ver = RTE_BE16(0xa000),
 			.protocol = RTE_BE16(0xffff),
 		},
@@ -1916,6 +1918,7 @@ nfp_flow_item_check(const struct rte_flow_item *item,
 					" without a corresponding 'spec'.");
 			return -EINVAL;
 		}
+
 		/* No spec, no mask, no problem. */
 		return 0;
 	}
@@ -2995,6 +2998,7 @@ nfp_pre_tun_table_check_add(struct nfp_flower_representor *repr,
 	for (i = 1; i < NFP_TUN_PRE_TUN_RULE_LIMIT; i++) {
 		if (priv->pre_tun_bitmap[i] == 0)
 			continue;
+
 		entry->mac_index = i;
 		find_entry = nfp_pre_tun_table_search(priv, (char *)entry, entry_size);
 		if (find_entry != NULL) {
@@ -3021,6 +3025,7 @@ nfp_pre_tun_table_check_add(struct nfp_flower_representor *repr,
 
 	*index = entry->mac_index;
 	priv->pre_tun_cnt++;
+
 	return 0;
 }
 
@@ -3055,12 +3060,14 @@ nfp_pre_tun_table_check_del(struct nfp_flower_representor *repr,
 	for (i = 1; i < NFP_TUN_PRE_TUN_RULE_LIMIT; i++) {
 		if (priv->pre_tun_bitmap[i] == 0)
 			continue;
+
 		entry->mac_index = i;
 		find_entry = nfp_pre_tun_table_search(priv, (char *)entry, entry_size);
 		if (find_entry != NULL) {
 			find_entry->ref_cnt--;
 			if (find_entry->ref_cnt != 0)
 				goto free_entry;
+
 			priv->pre_tun_bitmap[i] = 0;
 			break;
 		}
diff --git a/drivers/net/nfp/nfp_flow.h b/drivers/net/nfp/nfp_flow.h
index 68b6fb6abe..5a5b6a7d19 100644
--- a/drivers/net/nfp/nfp_flow.h
+++ b/drivers/net/nfp/nfp_flow.h
@@ -115,11 +115,14 @@ struct nfp_ipv6_addr_entry {
 struct nfp_flow_priv {
 	uint32_t hash_seed; /**< Hash seed for hash tables in this structure. */
 	uint64_t flower_version; /**< Flow version, always increase. */
+
 	/* Mask hash table */
 	struct nfp_fl_mask_id mask_ids; /**< Entry for mask hash table */
 	struct rte_hash *mask_table; /**< Hash table to store mask ids. */
+
 	/* Flow hash table */
 	struct rte_hash *flow_table; /**< Hash table to store flow rules. */
+
 	/* Flow stats */
 	uint32_t active_mem_unit; /**< The size of active mem units. */
 	uint32_t total_mem_units; /**< The size of total mem units. */
@@ -127,16 +130,20 @@ struct nfp_flow_priv {
 	struct nfp_fl_stats_id stats_ids; /**< The stats id ring. */
 	struct nfp_fl_stats *stats; /**< Store stats of flow. */
 	rte_spinlock_t stats_lock; /** < Lock the update of 'stats' field. */
+
 	/* Pre tunnel rule */
 	uint16_t pre_tun_cnt; /**< The size of pre tunnel rule */
 	uint8_t pre_tun_bitmap[NFP_TUN_PRE_TUN_RULE_LIMIT]; /**< Bitmap of pre tunnel rule */
 	struct rte_hash *pre_tun_table; /**< Hash table to store pre tunnel rule */
+
 	/* IPv4 off */
 	LIST_HEAD(, nfp_ipv4_addr_entry) ipv4_off_list; /**< Store ipv4 off */
 	rte_spinlock_t ipv4_off_lock; /**< Lock the ipv4 off list */
+
 	/* IPv6 off */
 	LIST_HEAD(, nfp_ipv6_addr_entry) ipv6_off_list; /**< Store ipv6 off */
 	rte_spinlock_t ipv6_off_lock; /**< Lock the ipv6 off list */
+
 	/* Neighbor next */
 	LIST_HEAD(, nfp_fl_tun)nn_list; /**< Store nn entry */
 };
diff --git a/drivers/net/nfp/nfp_rxtx.c b/drivers/net/nfp/nfp_rxtx.c
index 7b77351f1c..4632837c0e 100644
--- a/drivers/net/nfp/nfp_rxtx.c
+++ b/drivers/net/nfp/nfp_rxtx.c
@@ -190,6 +190,7 @@ nfp_net_rx_fill_freelist(struct nfp_net_rxq *rxq)
 		rxd->fld.dd = 0;
 		rxd->fld.dma_addr_hi = (dma_addr >> 32) & 0xffff;
 		rxd->fld.dma_addr_lo = dma_addr & 0xffffffff;
+
 		rxe[i].mbuf = mbuf;
 	}
 
@@ -213,6 +214,7 @@ nfp_net_rx_freelist_setup(struct rte_eth_dev *dev)
 		if (nfp_net_rx_fill_freelist(dev->data->rx_queues[i]) != 0)
 			return -1;
 	}
+
 	return 0;
 }
 
@@ -225,7 +227,6 @@ nfp_net_rx_queue_count(void *rx_queue)
 	struct nfp_net_rx_desc *rxds;
 
 	rxq = rx_queue;
-
 	idx = rxq->rd_p;
 
 	/*
@@ -235,7 +236,6 @@ nfp_net_rx_queue_count(void *rx_queue)
 	 * performance. But ideally that should be done in descriptors
 	 * chunks belonging to the same cache line.
 	 */
-
 	while (count < rxq->rx_count) {
 		rxds = &rxq->rxds[idx];
 		if ((rxds->rxd.meta_len_dd & PCIE_DESC_RX_DD) == 0)
@@ -394,6 +394,7 @@ nfp_net_parse_meta_qinq(const struct nfp_meta_parsed *meta,
 
 	if (meta->vlan[0].offload == 0)
 		mb->vlan_tci = rte_cpu_to_le_16(meta->vlan[0].tci);
+
 	mb->vlan_tci_outer = rte_cpu_to_le_16(meta->vlan[1].tci);
 	PMD_RX_LOG(DEBUG, "Received outer vlan TCI is %u inner vlan TCI is %u",
 			mb->vlan_tci_outer, mb->vlan_tci);
@@ -638,7 +639,6 @@ nfp_net_parse_ptype(struct nfp_net_rx_desc *rxds,
  * so looking at the implications of this type of allocation should be studied
  * deeply.
  */
-
 uint16_t
 nfp_net_recv_pkts(void *rx_queue,
 		struct rte_mbuf **rx_pkts,
@@ -903,7 +903,6 @@ nfp_net_rx_queue_setup(struct rte_eth_dev *dev,
 	tz = rte_eth_dma_zone_reserve(dev, "rx_ring", queue_idx,
 			sizeof(struct nfp_net_rx_desc) * max_rx_desc,
 			NFP_MEMZONE_ALIGN, socket_id);
-
 	if (tz == NULL) {
 		PMD_DRV_LOG(ERR, "Error allocating rx dma");
 		nfp_net_rx_queue_release(dev, queue_idx);
-- 
2.39.1


^ permalink raw reply	[flat|nested] 40+ messages in thread

* [PATCH 08/11] net/nfp: unify the guide line of header file
  2023-10-07  2:33 [PATCH 00/11] Unify the PMD coding style Chaoyong He
                   ` (6 preceding siblings ...)
  2023-10-07  2:33 ` [PATCH 07/11] net/nfp: standard the blank character Chaoyong He
@ 2023-10-07  2:33 ` Chaoyong He
  2023-10-07  2:33 ` [PATCH 09/11] net/nfp: rename some parameter and variable Chaoyong He
                   ` (3 subsequent siblings)
  11 siblings, 0 replies; 40+ messages in thread
From: Chaoyong He @ 2023-10-07  2:33 UTC (permalink / raw)
  To: dev; +Cc: oss-drivers, Chaoyong He, Long Wu, Peng Zhang

Unify the guard line of the header files; we choose the '__FOO_BAR_H__' style.

Signed-off-by: Chaoyong He <chaoyong.he@corigine.com>
Reviewed-by: Long Wu <long.wu@corigine.com>
Reviewed-by: Peng Zhang <peng.zhang@corigine.com>
---
 drivers/net/nfp/flower/nfp_flower.h             | 6 +++---
 drivers/net/nfp/flower/nfp_flower_cmsg.h        | 6 +++---
 drivers/net/nfp/flower/nfp_flower_ctrl.h        | 6 +++---
 drivers/net/nfp/flower/nfp_flower_representor.h | 6 +++---
 drivers/net/nfp/nfd3/nfp_nfd3.h                 | 6 +++---
 drivers/net/nfp/nfp_common.h                    | 6 +++---
 drivers/net/nfp/nfp_cpp_bridge.h                | 8 +++-----
 drivers/net/nfp/nfp_ctrl.h                      | 6 +++---
 drivers/net/nfp/nfp_flow.h                      | 6 +++---
 drivers/net/nfp/nfp_logs.h                      | 6 +++---
 drivers/net/nfp/nfp_rxtx.h                      | 6 +++---
 11 files changed, 33 insertions(+), 35 deletions(-)
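
A hedged sketch of the chosen guard style applied to a hypothetical new header (file and symbol names are illustrative only):

/* SPDX-License-Identifier: BSD-3-Clause */

#ifndef __NFP_EXAMPLE_H__
#define __NFP_EXAMPLE_H__

int nfp_example_init(void);

#endif /* __NFP_EXAMPLE_H__ */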

diff --git a/drivers/net/nfp/flower/nfp_flower.h b/drivers/net/nfp/flower/nfp_flower.h
index 0b4e38cedd..b7ea830209 100644
--- a/drivers/net/nfp/flower/nfp_flower.h
+++ b/drivers/net/nfp/flower/nfp_flower.h
@@ -3,8 +3,8 @@
  * All rights reserved.
  */
 
-#ifndef _NFP_FLOWER_H_
-#define _NFP_FLOWER_H_
+#ifndef __NFP_FLOWER_H__
+#define __NFP_FLOWER_H__
 
 #include "../nfp_common.h"
 
@@ -118,4 +118,4 @@ int nfp_flower_pf_stop(struct rte_eth_dev *dev);
 uint32_t nfp_flower_pkt_add_metadata(struct nfp_app_fw_flower *app_fw_flower,
 		struct rte_mbuf *mbuf, uint32_t port_id);
 
-#endif /* _NFP_FLOWER_H_ */
+#endif /* __NFP_FLOWER_H__ */
diff --git a/drivers/net/nfp/flower/nfp_flower_cmsg.h b/drivers/net/nfp/flower/nfp_flower_cmsg.h
index cb019171b6..c2938fb6f6 100644
--- a/drivers/net/nfp/flower/nfp_flower_cmsg.h
+++ b/drivers/net/nfp/flower/nfp_flower_cmsg.h
@@ -3,8 +3,8 @@
  * All rights reserved.
  */
 
-#ifndef _NFP_CMSG_H_
-#define _NFP_CMSG_H_
+#ifndef __NFP_CMSG_H__
+#define __NFP_CMSG_H__
 
 #include "../nfp_flow.h"
 #include "nfp_flower.h"
@@ -989,4 +989,4 @@ int nfp_flower_cmsg_qos_delete(struct nfp_app_fw_flower *app_fw_flower,
 int nfp_flower_cmsg_qos_stats(struct nfp_app_fw_flower *app_fw_flower,
 		struct nfp_cfg_head *head);
 
-#endif /* _NFP_CMSG_H_ */
+#endif /* __NFP_CMSG_H__ */
diff --git a/drivers/net/nfp/flower/nfp_flower_ctrl.h b/drivers/net/nfp/flower/nfp_flower_ctrl.h
index f73a024266..4c94d36847 100644
--- a/drivers/net/nfp/flower/nfp_flower_ctrl.h
+++ b/drivers/net/nfp/flower/nfp_flower_ctrl.h
@@ -3,8 +3,8 @@
  * All rights reserved.
  */
 
-#ifndef _NFP_FLOWER_CTRL_H_
-#define _NFP_FLOWER_CTRL_H_
+#ifndef __NFP_FLOWER_CTRL_H__
+#define __NFP_FLOWER_CTRL_H__
 
 #include "nfp_flower.h"
 
@@ -13,4 +13,4 @@ uint16_t nfp_flower_ctrl_vnic_xmit(struct nfp_app_fw_flower *app_fw_flower,
 		struct rte_mbuf *mbuf);
 void nfp_flower_ctrl_vnic_xmit_register(struct nfp_app_fw_flower *app_fw_flower);
 
-#endif /* _NFP_FLOWER_CTRL_H_ */
+#endif /* __NFP_FLOWER_CTRL_H__ */
diff --git a/drivers/net/nfp/flower/nfp_flower_representor.h b/drivers/net/nfp/flower/nfp_flower_representor.h
index eda19cbb16..bcb4c3cdb5 100644
--- a/drivers/net/nfp/flower/nfp_flower_representor.h
+++ b/drivers/net/nfp/flower/nfp_flower_representor.h
@@ -3,8 +3,8 @@
  * All rights reserved.
  */
 
-#ifndef _NFP_FLOWER_REPRESENTOR_H_
-#define _NFP_FLOWER_REPRESENTOR_H_
+#ifndef __NFP_FLOWER_REPRESENTOR_H__
+#define __NFP_FLOWER_REPRESENTOR_H__
 
 #include "nfp_flower.h"
 
@@ -24,4 +24,4 @@ struct nfp_flower_representor {
 
 int nfp_flower_repr_create(struct nfp_app_fw_flower *app_fw_flower);
 
-#endif /* _NFP_FLOWER_REPRESENTOR_H_ */
+#endif /* __NFP_FLOWER_REPRESENTOR_H__ */
diff --git a/drivers/net/nfp/nfd3/nfp_nfd3.h b/drivers/net/nfp/nfd3/nfp_nfd3.h
index 0b0ca361f4..3ba562cc3f 100644
--- a/drivers/net/nfp/nfd3/nfp_nfd3.h
+++ b/drivers/net/nfp/nfd3/nfp_nfd3.h
@@ -3,8 +3,8 @@
  * All rights reserved.
  */
 
-#ifndef _NFP_NFD3_H_
-#define _NFP_NFD3_H_
+#ifndef __NFP_NFD3_H__
+#define __NFP_NFD3_H__
 
 #include "../nfp_rxtx.h"
 
@@ -84,4 +84,4 @@ int nfp_net_nfd3_tx_queue_setup(struct rte_eth_dev *dev,
 		unsigned int socket_id,
 		const struct rte_eth_txconf *tx_conf);
 
-#endif /* _NFP_NFD3_H_ */
+#endif /* __NFP_NFD3_H__ */
diff --git a/drivers/net/nfp/nfp_common.h b/drivers/net/nfp/nfp_common.h
index 27dc2175e3..11eda70f1a 100644
--- a/drivers/net/nfp/nfp_common.h
+++ b/drivers/net/nfp/nfp_common.h
@@ -3,8 +3,8 @@
  * All rights reserved.
  */
 
-#ifndef _NFP_COMMON_H_
-#define _NFP_COMMON_H_
+#ifndef __NFP_COMMON_H__
+#define __NFP_COMMON_H__
 
 #include <bus_pci_driver.h>
 #include <ethdev_driver.h>
@@ -450,4 +450,4 @@ bool nfp_net_is_valid_nfd_version(struct nfp_net_fw_ver version);
 #define NFP_PRIV_TO_APP_FW_FLOWER(app_fw_priv)\
 	((struct nfp_app_fw_flower *)app_fw_priv)
 
-#endif /* _NFP_COMMON_H_ */
+#endif /* __NFP_COMMON_H__ */
diff --git a/drivers/net/nfp/nfp_cpp_bridge.h b/drivers/net/nfp/nfp_cpp_bridge.h
index e6a957a090..a1103e85e4 100644
--- a/drivers/net/nfp/nfp_cpp_bridge.h
+++ b/drivers/net/nfp/nfp_cpp_bridge.h
@@ -1,16 +1,14 @@
 /* SPDX-License-Identifier: BSD-3-Clause
  * Copyright (c) 2014-2021 Netronome Systems, Inc.
  * All rights reserved.
- *
- * Small portions derived from code Copyright(c) 2010-2015 Intel Corporation.
  */
 
-#ifndef _NFP_CPP_BRIDGE_H_
-#define _NFP_CPP_BRIDGE_H_
+#ifndef __NFP_CPP_BRIDGE_H__
+#define __NFP_CPP_BRIDGE_H__
 
 #include "nfp_common.h"
 
 int nfp_enable_cpp_service(struct nfp_pf_dev *pf_dev);
 int nfp_map_service(uint32_t service_id);
 
-#endif /* _NFP_CPP_BRIDGE_H_ */
+#endif /* __NFP_CPP_BRIDGE_H__ */
diff --git a/drivers/net/nfp/nfp_ctrl.h b/drivers/net/nfp/nfp_ctrl.h
index ef8bf486cb..71fe125420 100644
--- a/drivers/net/nfp/nfp_ctrl.h
+++ b/drivers/net/nfp/nfp_ctrl.h
@@ -3,8 +3,8 @@
  * All rights reserved.
  */
 
-#ifndef _NFP_CTRL_H_
-#define _NFP_CTRL_H_
+#ifndef __NFP_CTRL_H__
+#define __NFP_CTRL_H__
 
 #include <stdint.h>
 
@@ -573,4 +573,4 @@ nfp_net_cfg_ctrl_rss(uint32_t hw_cap)
 	return NFP_NET_CFG_CTRL_RSS;
 }
 
-#endif /* _NFP_CTRL_H_ */
+#endif /* __NFP_CTRL_H__ */
diff --git a/drivers/net/nfp/nfp_flow.h b/drivers/net/nfp/nfp_flow.h
index 5a5b6a7d19..d4bde0a294 100644
--- a/drivers/net/nfp/nfp_flow.h
+++ b/drivers/net/nfp/nfp_flow.h
@@ -3,8 +3,8 @@
  * All rights reserved.
  */
 
-#ifndef _NFP_FLOW_H_
-#define _NFP_FLOW_H_
+#ifndef __NFP_FLOW_H__
+#define __NFP_FLOW_H__
 
 #include "nfp_common.h"
 
@@ -164,4 +164,4 @@ int nfp_flow_priv_init(struct nfp_pf_dev *pf_dev);
 void nfp_flow_priv_uninit(struct nfp_pf_dev *pf_dev);
 int nfp_net_flow_ops_get(struct rte_eth_dev *dev, const struct rte_flow_ops **ops);
 
-#endif /* _NFP_FLOW_H_ */
+#endif /* __NFP_FLOW_H__ */
diff --git a/drivers/net/nfp/nfp_logs.h b/drivers/net/nfp/nfp_logs.h
index 16ff61700b..690adabffd 100644
--- a/drivers/net/nfp/nfp_logs.h
+++ b/drivers/net/nfp/nfp_logs.h
@@ -3,8 +3,8 @@
  * All rights reserved.
  */
 
-#ifndef _NFP_LOGS_H_
-#define _NFP_LOGS_H_
+#ifndef __NFP_LOGS_H__
+#define __NFP_LOGS_H__
 
 #include <rte_log.h>
 
@@ -41,4 +41,4 @@ extern int nfp_logtype_driver;
 	rte_log(RTE_LOG_ ## level, nfp_logtype_driver, \
 		"%s(): " fmt "\n", __func__, ## args)
 
-#endif /* _NFP_LOGS_H_ */
+#endif /* __NFP_LOGS_H__ */
diff --git a/drivers/net/nfp/nfp_rxtx.h b/drivers/net/nfp/nfp_rxtx.h
index 899cc42c97..956cc7a0d2 100644
--- a/drivers/net/nfp/nfp_rxtx.h
+++ b/drivers/net/nfp/nfp_rxtx.h
@@ -3,8 +3,8 @@
  * All rights reserved.
  */
 
-#ifndef _NFP_RXTX_H_
-#define _NFP_RXTX_H_
+#ifndef __NFP_RXTX_H__
+#define __NFP_RXTX_H__
 
 #include <ethdev_driver.h>
 
@@ -253,4 +253,4 @@ void nfp_net_set_meta_ipsec(struct nfp_net_meta_raw *meta_data,
 		uint8_t layer,
 		uint8_t ipsec_layer);
 
-#endif /* _NFP_RXTX_H_ */
+#endif /* __NFP_RXTX_H__ */
-- 
2.39.1


^ permalink raw reply	[flat|nested] 40+ messages in thread

* [PATCH 09/11] net/nfp: rename some parameter and variable
  2023-10-07  2:33 [PATCH 00/11] Unify the PMD coding style Chaoyong He
                   ` (7 preceding siblings ...)
  2023-10-07  2:33 ` [PATCH 08/11] net/nfp: unify the guide line of header file Chaoyong He
@ 2023-10-07  2:33 ` Chaoyong He
  2023-10-07  2:33 ` [PATCH 10/11] net/nfp: adjust logic to make it more readable Chaoyong He
                   ` (2 subsequent siblings)
  11 siblings, 0 replies; 40+ messages in thread
From: Chaoyong He @ 2023-10-07  2:33 UTC (permalink / raw)
  To: dev; +Cc: oss-drivers, Chaoyong He, Long Wu, Peng Zhang

Rename some parameters and variables to make the logic easier to
understand.
Also avoid mixing lowercase and uppercase in macro names.
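
As a standalone illustration of the renamed mask macro in use, a small
sketch follows; the macro value matches the hunk below, but the helper
and main() are simplified stand-ins, not driver code.

#include <stdint.h>
#include <stdio.h>

#define NFP_QCP_QUEUE_STS_LO_READPTR_MASK     (0x3ffff)

static uint32_t
read_ptr_from_status(uint32_t status)
{
	/* The all-uppercase mask keeps only the low 18 bits of the word. */
	return status & NFP_QCP_QUEUE_STS_LO_READPTR_MASK;
}

int
main(void)
{
	/* Prints 0x3ffff: every bit above bit 17 is masked away. */
	printf("0x%x\n", (unsigned int)read_ptr_from_status(0xffffffffu));
	return 0;
}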

Signed-off-by: Chaoyong He <chaoyong.he@corigine.com>
Reviewed-by: Long Wu <long.wu@corigine.com>
Reviewed-by: Peng Zhang <peng.zhang@corigine.com>
---
 drivers/net/nfp/nfp_common.h    | 20 ++++++++++----------
 drivers/net/nfp/nfp_ethdev_vf.c |  8 ++++----
 2 files changed, 14 insertions(+), 14 deletions(-)

diff --git a/drivers/net/nfp/nfp_common.h b/drivers/net/nfp/nfp_common.h
index 11eda70f1a..a5e20bc4a7 100644
--- a/drivers/net/nfp/nfp_common.h
+++ b/drivers/net/nfp/nfp_common.h
@@ -19,9 +19,9 @@
 #define NFP_QCP_QUEUE_ADD_RPTR                  0x0000
 #define NFP_QCP_QUEUE_ADD_WPTR                  0x0004
 #define NFP_QCP_QUEUE_STS_LO                    0x0008
-#define NFP_QCP_QUEUE_STS_LO_READPTR_mask     (0x3ffff)
+#define NFP_QCP_QUEUE_STS_LO_READPTR_MASK     (0x3ffff)
 #define NFP_QCP_QUEUE_STS_HI                    0x000c
-#define NFP_QCP_QUEUE_STS_HI_WRITEPTR_mask    (0x3ffff)
+#define NFP_QCP_QUEUE_STS_HI_WRITEPTR_MASK    (0x3ffff)
 
 /* Interrupt definitions */
 #define NFP_NET_IRQ_LSC_IDX             0
@@ -303,7 +303,7 @@ nn_cfg_writeq(struct nfp_net_hw *hw,
 /**
  * Add the value to the selected pointer of a queue.
  *
- * @param q
+ * @param queue
  *   Base address for queue structure
  * @param ptr
  *   Add to the read or write pointer
@@ -311,7 +311,7 @@ nn_cfg_writeq(struct nfp_net_hw *hw,
  *   Value to add to the queue pointer
  */
 static inline void
-nfp_qcp_ptr_add(uint8_t *q,
+nfp_qcp_ptr_add(uint8_t *queue,
 		enum nfp_qcp_ptr ptr,
 		uint32_t val)
 {
@@ -322,19 +322,19 @@ nfp_qcp_ptr_add(uint8_t *q,
 	else
 		off = NFP_QCP_QUEUE_ADD_WPTR;
 
-	nn_writel(rte_cpu_to_le_32(val), q + off);
+	nn_writel(rte_cpu_to_le_32(val), queue + off);
 }
 
 /**
  * Read the current read/write pointer value for a queue.
  *
- * @param q
+ * @param queue
  *   Base address for queue structure
  * @param ptr
  *   Read or Write pointer
  */
 static inline uint32_t
-nfp_qcp_read(uint8_t *q,
+nfp_qcp_read(uint8_t *queue,
 		enum nfp_qcp_ptr ptr)
 {
 	uint32_t off;
@@ -345,12 +345,12 @@ nfp_qcp_read(uint8_t *q,
 	else
 		off = NFP_QCP_QUEUE_STS_HI;
 
-	val = rte_cpu_to_le_32(nn_readl(q + off));
+	val = rte_cpu_to_le_32(nn_readl(queue + off));
 
 	if (ptr == NFP_QCP_READ_PTR)
-		return val & NFP_QCP_QUEUE_STS_LO_READPTR_mask;
+		return val & NFP_QCP_QUEUE_STS_LO_READPTR_MASK;
 	else
-		return val & NFP_QCP_QUEUE_STS_HI_WRITEPTR_mask;
+		return val & NFP_QCP_QUEUE_STS_HI_WRITEPTR_MASK;
 }
 
 static inline uint32_t
diff --git a/drivers/net/nfp/nfp_ethdev_vf.c b/drivers/net/nfp/nfp_ethdev_vf.c
index 8053808b02..af0689832a 100644
--- a/drivers/net/nfp/nfp_ethdev_vf.c
+++ b/drivers/net/nfp/nfp_ethdev_vf.c
@@ -396,7 +396,7 @@ nfp_vf_pci_uninit(struct rte_eth_dev *eth_dev)
 }
 
 static int
-eth_nfp_vf_pci_probe(struct rte_pci_driver *pci_drv __rte_unused,
+nfp_vf_pci_probe(struct rte_pci_driver *pci_drv __rte_unused,
 		struct rte_pci_device *pci_dev)
 {
 	return rte_eth_dev_pci_generic_probe(pci_dev,
@@ -404,7 +404,7 @@ eth_nfp_vf_pci_probe(struct rte_pci_driver *pci_drv __rte_unused,
 }
 
 static int
-eth_nfp_vf_pci_remove(struct rte_pci_device *pci_dev)
+nfp_vf_pci_remove(struct rte_pci_device *pci_dev)
 {
 	return rte_eth_dev_pci_generic_remove(pci_dev, nfp_vf_pci_uninit);
 }
@@ -412,8 +412,8 @@ eth_nfp_vf_pci_remove(struct rte_pci_device *pci_dev)
 static struct rte_pci_driver rte_nfp_net_vf_pmd = {
 	.id_table = pci_id_nfp_vf_net_map,
 	.drv_flags = RTE_PCI_DRV_NEED_MAPPING | RTE_PCI_DRV_INTR_LSC,
-	.probe = eth_nfp_vf_pci_probe,
-	.remove = eth_nfp_vf_pci_remove,
+	.probe = nfp_vf_pci_probe,
+	.remove = nfp_vf_pci_remove,
 };
 
 RTE_PMD_REGISTER_PCI(net_nfp_vf, rte_nfp_net_vf_pmd);
-- 
2.39.1


^ permalink raw reply	[flat|nested] 40+ messages in thread

* [PATCH 10/11] net/nfp: adjust logic to make it more readable
  2023-10-07  2:33 [PATCH 00/11] Unify the PMD coding style Chaoyong He
                   ` (8 preceding siblings ...)
  2023-10-07  2:33 ` [PATCH 09/11] net/nfp: rename some parameter and variable Chaoyong He
@ 2023-10-07  2:33 ` Chaoyong He
  2023-10-07  2:33 ` [PATCH 11/11] net/nfp: refact the meson build file Chaoyong He
  2023-10-12  1:26 ` [PATCH v2 00/11] Unify the PMD coding style Chaoyong He
  11 siblings, 0 replies; 40+ messages in thread
From: Chaoyong He @ 2023-10-07  2:33 UTC (permalink / raw)
  To: dev; +Cc: oss-drivers, Chaoyong He, Long Wu, Peng Zhang

Adjust some logic to make it easier to understand.
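
A recurring change here is moving a success path out of a trailing
condition into early-return form. A minimal sketch of the pattern,
assuming a hypothetical reconfig() stand-in for the real
nfp_net_reconfig():

#include <stdint.h>

/* Hypothetical stand-in: fails when asked to write an empty control word. */
static int
reconfig(uint32_t new_ctrl)
{
	return new_ctrl == 0 ? -1 : 0;
}

/* Before: the success path hides inside the condition. */
int
apply_ctrl_old(uint32_t *hw_ctrl, uint32_t new_ctrl)
{
	int ret;

	ret = reconfig(new_ctrl);
	if (ret == 0)
		*hw_ctrl = new_ctrl;

	return ret;
}

/* After: bail out on error first, so the main flow reads straight down. */
int
apply_ctrl_new(uint32_t *hw_ctrl, uint32_t new_ctrl)
{
	int ret;

	ret = reconfig(new_ctrl);
	if (ret != 0)
		return ret;

	*hw_ctrl = new_ctrl;

	return 0;
}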

Signed-off-by: Chaoyong He <chaoyong.he@corigine.com>
Reviewed-by: Long Wu <long.wu@corigine.com>
Reviewed-by: Peng Zhang <peng.zhang@corigine.com>
---
 drivers/net/nfp/nfp_common.c     | 83 +++++++++++++++++---------------
 drivers/net/nfp/nfp_cpp_bridge.c |  5 +-
 drivers/net/nfp/nfp_ctrl.h       |  2 -
 drivers/net/nfp/nfp_ethdev.c     | 23 ++++-----
 drivers/net/nfp/nfp_ethdev_vf.c  | 15 +++---
 drivers/net/nfp/nfp_rxtx.c       |  2 +-
 6 files changed, 61 insertions(+), 69 deletions(-)

diff --git a/drivers/net/nfp/nfp_common.c b/drivers/net/nfp/nfp_common.c
index 3409ee8cb8..f6cd506dd6 100644
--- a/drivers/net/nfp/nfp_common.c
+++ b/drivers/net/nfp/nfp_common.c
@@ -467,19 +467,19 @@ nfp_net_enable_queues(struct rte_eth_dev *dev)
 {
 	uint16_t i;
 	struct nfp_net_hw *hw;
-	uint64_t enabled_queues = 0;
+	uint64_t enabled_queues;
 
 	hw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private);
 
 	/* Enabling the required TX queues in the device */
+	enabled_queues = 0;
 	for (i = 0; i < dev->data->nb_tx_queues; i++)
 		enabled_queues |= (1 << i);
 
 	nn_cfg_writeq(hw, NFP_NET_CFG_TXRS_ENABLE, enabled_queues);
 
-	enabled_queues = 0;
-
 	/* Enabling the required RX queues in the device */
+	enabled_queues = 0;
 	for (i = 0; i < dev->data->nb_rx_queues; i++)
 		enabled_queues |= (1 << i);
 
@@ -619,33 +619,33 @@ uint32_t
 nfp_check_offloads(struct rte_eth_dev *dev)
 {
 	uint32_t ctrl = 0;
+	uint64_t rx_offload;
+	uint64_t tx_offload;
 	struct nfp_net_hw *hw;
 	struct rte_eth_conf *dev_conf;
-	struct rte_eth_rxmode *rxmode;
-	struct rte_eth_txmode *txmode;
 
 	hw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private);
 
 	dev_conf = &dev->data->dev_conf;
-	rxmode = &dev_conf->rxmode;
-	txmode = &dev_conf->txmode;
+	rx_offload = dev_conf->rxmode.offloads;
+	tx_offload = dev_conf->txmode.offloads;
 
-	if ((rxmode->offloads & RTE_ETH_RX_OFFLOAD_IPV4_CKSUM) != 0) {
+	if ((rx_offload & RTE_ETH_RX_OFFLOAD_IPV4_CKSUM) != 0) {
 		if ((hw->cap & NFP_NET_CFG_CTRL_RXCSUM) != 0)
 			ctrl |= NFP_NET_CFG_CTRL_RXCSUM;
 	}
 
-	if ((rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP) != 0)
+	if ((rx_offload & RTE_ETH_RX_OFFLOAD_VLAN_STRIP) != 0)
 		nfp_net_enbable_rxvlan_cap(hw, &ctrl);
 
-	if ((rxmode->offloads & RTE_ETH_RX_OFFLOAD_QINQ_STRIP) != 0) {
+	if ((rx_offload & RTE_ETH_RX_OFFLOAD_QINQ_STRIP) != 0) {
 		if ((hw->cap & NFP_NET_CFG_CTRL_RXQINQ) != 0)
 			ctrl |= NFP_NET_CFG_CTRL_RXQINQ;
 	}
 
 	hw->mtu = dev->data->mtu;
 
-	if ((txmode->offloads & RTE_ETH_TX_OFFLOAD_VLAN_INSERT) != 0) {
+	if ((tx_offload & RTE_ETH_TX_OFFLOAD_VLAN_INSERT) != 0) {
 		if ((hw->cap & NFP_NET_CFG_CTRL_TXVLAN_V2) != 0)
 			ctrl |= NFP_NET_CFG_CTRL_TXVLAN_V2;
 		else if ((hw->cap & NFP_NET_CFG_CTRL_TXVLAN) != 0)
@@ -661,14 +661,14 @@ nfp_check_offloads(struct rte_eth_dev *dev)
 		ctrl |= NFP_NET_CFG_CTRL_L2MC;
 
 	/* TX checksum offload */
-	if ((txmode->offloads & RTE_ETH_TX_OFFLOAD_IPV4_CKSUM) != 0 ||
-			(txmode->offloads & RTE_ETH_TX_OFFLOAD_UDP_CKSUM) != 0 ||
-			(txmode->offloads & RTE_ETH_TX_OFFLOAD_TCP_CKSUM) != 0)
+	if ((tx_offload & RTE_ETH_TX_OFFLOAD_IPV4_CKSUM) != 0 ||
+			(tx_offload & RTE_ETH_TX_OFFLOAD_UDP_CKSUM) != 0 ||
+			(tx_offload & RTE_ETH_TX_OFFLOAD_TCP_CKSUM) != 0)
 		ctrl |= NFP_NET_CFG_CTRL_TXCSUM;
 
 	/* LSO offload */
-	if ((txmode->offloads & RTE_ETH_TX_OFFLOAD_TCP_TSO) != 0 ||
-			(txmode->offloads & RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO) != 0) {
+	if ((tx_offload & RTE_ETH_TX_OFFLOAD_TCP_TSO) != 0 ||
+			(tx_offload & RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO) != 0) {
 		if ((hw->cap & NFP_NET_CFG_CTRL_LSO) != 0)
 			ctrl |= NFP_NET_CFG_CTRL_LSO;
 		else
@@ -676,7 +676,7 @@ nfp_check_offloads(struct rte_eth_dev *dev)
 	}
 
 	/* RX gather */
-	if ((txmode->offloads & RTE_ETH_TX_OFFLOAD_MULTI_SEGS) != 0)
+	if ((tx_offload & RTE_ETH_TX_OFFLOAD_MULTI_SEGS) != 0)
 		ctrl |= NFP_NET_CFG_CTRL_GATHER;
 
 	return ctrl;
@@ -766,11 +766,10 @@ nfp_net_link_update(struct rte_eth_dev *dev,
 
 	hw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private);
 
-	/* Read link status */
-	nn_link_status = nn_cfg_readw(hw, NFP_NET_CFG_STS);
-
 	memset(&link, 0, sizeof(struct rte_eth_link));
 
+	/* Read link status */
+	nn_link_status = nn_cfg_readw(hw, NFP_NET_CFG_STS);
 	if ((nn_link_status & NFP_NET_CFG_STS_LINK) != 0)
 		link.link_status = RTE_ETH_LINK_UP;
 
@@ -828,6 +827,9 @@ nfp_net_stats_get(struct rte_eth_dev *dev,
 	struct nfp_net_hw *hw;
 	struct rte_eth_stats nfp_dev_stats;
 
+	if (stats == NULL)
+		return -EINVAL;
+
 	hw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private);
 
 	memset(&nfp_dev_stats, 0, sizeof(nfp_dev_stats));
@@ -892,11 +894,8 @@ nfp_net_stats_get(struct rte_eth_dev *dev,
 			nn_cfg_readq(hw, NFP_NET_CFG_STATS_RX_DISCARDS);
 	nfp_dev_stats.imissed -= hw->eth_stats_base.imissed;
 
-	if (stats != NULL) {
-		memcpy(stats, &nfp_dev_stats, sizeof(*stats));
-		return 0;
-	}
-	return -EINVAL;
+	memcpy(stats, &nfp_dev_stats, sizeof(*stats));
+	return 0;
 }
 
 /*
@@ -1379,13 +1378,14 @@ nfp_rx_queue_intr_enable(struct rte_eth_dev *dev,
 	struct nfp_net_hw *hw;
 	struct rte_pci_device *pci_dev;
 
-	hw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private);
 	pci_dev = RTE_ETH_DEV_TO_PCI(dev);
 	if (rte_intr_type_get(pci_dev->intr_handle) != RTE_INTR_HANDLE_UIO)
 		base = 1;
 
 	/* Make sure all updates are written before un-masking */
 	rte_wmb();
+
+	hw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private);
 	nn_cfg_writeb(hw, NFP_NET_CFG_ICR(base + queue_id),
 			NFP_NET_CFG_ICR_UNMASKED);
 	return 0;
@@ -1399,14 +1399,16 @@ nfp_rx_queue_intr_disable(struct rte_eth_dev *dev,
 	struct nfp_net_hw *hw;
 	struct rte_pci_device *pci_dev;
 
-	hw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private);
 	pci_dev = RTE_ETH_DEV_TO_PCI(dev);
 	if (rte_intr_type_get(pci_dev->intr_handle) != RTE_INTR_HANDLE_UIO)
 		base = 1;
 
 	/* Make sure all updates are written before un-masking */
 	rte_wmb();
-	nn_cfg_writeb(hw, NFP_NET_CFG_ICR(base + queue_id), 0x1);
+
+	hw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+	nn_cfg_writeb(hw, NFP_NET_CFG_ICR(base + queue_id), NFP_NET_CFG_ICR_RXTX);
+
 	return 0;
 }
 
@@ -1445,13 +1447,13 @@ nfp_net_irq_unmask(struct rte_eth_dev *dev)
 	hw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private);
 	pci_dev = RTE_ETH_DEV_TO_PCI(dev);
 
+	/* Make sure all updates are written before un-masking */
+	rte_wmb();
+
 	if ((hw->ctrl & NFP_NET_CFG_CTRL_MSIXAUTO) != 0) {
 		/* If MSI-X auto-masking is used, clear the entry */
-		rte_wmb();
 		rte_intr_ack(pci_dev->intr_handle);
 	} else {
-		/* Make sure all updates are written before un-masking */
-		rte_wmb();
 		nn_cfg_writeb(hw, NFP_NET_CFG_ICR(NFP_NET_IRQ_LSC_IDX),
 				NFP_NET_CFG_ICR_UNMASKED);
 	}
@@ -1548,19 +1550,18 @@ nfp_net_vlan_offload_set(struct rte_eth_dev *dev,
 	int ret;
 	uint32_t update;
 	uint32_t new_ctrl;
+	uint64_t rx_offload;
 	struct nfp_net_hw *hw;
 	uint32_t rxvlan_ctrl = 0;
-	struct rte_eth_conf *dev_conf;
 
 	hw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private);
-	dev_conf = &dev->data->dev_conf;
+	rx_offload = dev->data->dev_conf.rxmode.offloads;
 	new_ctrl = hw->ctrl;
 
-	nfp_net_enbable_rxvlan_cap(hw, &rxvlan_ctrl);
-
 	/* VLAN stripping setting */
 	if ((mask & RTE_ETH_VLAN_STRIP_MASK) != 0) {
-		if ((dev_conf->rxmode.offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP) != 0)
+		nfp_net_enbable_rxvlan_cap(hw, &rxvlan_ctrl);
+		if ((rx_offload & RTE_ETH_RX_OFFLOAD_VLAN_STRIP) != 0)
 			new_ctrl |= rxvlan_ctrl;
 		else
 			new_ctrl &= ~rxvlan_ctrl;
@@ -1568,7 +1569,7 @@ nfp_net_vlan_offload_set(struct rte_eth_dev *dev,
 
 	/* QinQ stripping setting */
 	if ((mask & RTE_ETH_QINQ_STRIP_MASK) != 0) {
-		if ((dev_conf->rxmode.offloads & RTE_ETH_RX_OFFLOAD_QINQ_STRIP) != 0)
+		if ((rx_offload & RTE_ETH_RX_OFFLOAD_QINQ_STRIP) != 0)
 			new_ctrl |= NFP_NET_CFG_CTRL_RXQINQ;
 		else
 			new_ctrl &= ~NFP_NET_CFG_CTRL_RXQINQ;
@@ -1580,10 +1581,12 @@ nfp_net_vlan_offload_set(struct rte_eth_dev *dev,
 	update = NFP_NET_CFG_UPDATE_GEN;
 
 	ret = nfp_net_reconfig(hw, new_ctrl, update);
-	if (ret == 0)
-		hw->ctrl = new_ctrl;
+	if (ret != 0)
+		return ret;
 
-	return ret;
+	hw->ctrl = new_ctrl;
+
+	return 0;
 }
 
 static int
diff --git a/drivers/net/nfp/nfp_cpp_bridge.c b/drivers/net/nfp/nfp_cpp_bridge.c
index 080070f58b..f37de7060a 100644
--- a/drivers/net/nfp/nfp_cpp_bridge.c
+++ b/drivers/net/nfp/nfp_cpp_bridge.c
@@ -22,9 +22,6 @@
 #define NFP_IOCTL_CPP_IDENTIFICATION _IOW(NFP_IOCTL, 0x8f, uint32_t)
 
 /* Prototypes */
-static int nfp_cpp_bridge_serve_write(int sockfd, struct nfp_cpp *cpp);
-static int nfp_cpp_bridge_serve_read(int sockfd, struct nfp_cpp *cpp);
-static int nfp_cpp_bridge_serve_ioctl(int sockfd, struct nfp_cpp *cpp);
 static int nfp_cpp_bridge_service_func(void *args);
 
 int
@@ -438,7 +435,7 @@ nfp_cpp_bridge_service_func(void *args)
 			return -EIO;
 		}
 
-		while (1) {
+		for (;;) {
 			ret = recv(datafd, &op, 4, 0);
 			if (ret <= 0) {
 				PMD_CPP_LOG(DEBUG, "%s: socket close", __func__);
diff --git a/drivers/net/nfp/nfp_ctrl.h b/drivers/net/nfp/nfp_ctrl.h
index 71fe125420..1012b37b1f 100644
--- a/drivers/net/nfp/nfp_ctrl.h
+++ b/drivers/net/nfp/nfp_ctrl.h
@@ -442,8 +442,6 @@ struct nfp_net_fw_ver {
 #define NFP_MAC_STATS_TX_PAUSE_FRAMES_CLASS6    (NFP_MAC_STATS_BASE + 0x1f0)
 #define NFP_MAC_STATS_TX_PAUSE_FRAMES_CLASS7    (NFP_MAC_STATS_BASE + 0x1f8)
 
-#define NFP_PF_CSR_SLICE_SIZE    (32 * 1024)
-
 /*
  * General use mailbox area (0x1800 - 0x19ff)
  * 4B used for update command and 4B return code followed by
diff --git a/drivers/net/nfp/nfp_ethdev.c b/drivers/net/nfp/nfp_ethdev.c
index 0493548c81..362fd2b601 100644
--- a/drivers/net/nfp/nfp_ethdev.c
+++ b/drivers/net/nfp/nfp_ethdev.c
@@ -80,7 +80,7 @@ nfp_net_start(struct rte_eth_dev *dev)
 			 * Better not to share LSC with RX interrupts.
 			 * Unregistering LSC interrupt handler
 			 */
-			rte_intr_callback_unregister(pci_dev->intr_handle,
+			rte_intr_callback_unregister(intr_handle,
 					nfp_net_dev_interrupt_handler, (void *)dev);
 
 			if (dev->data->nb_rx_queues > 1) {
@@ -525,7 +525,7 @@ nfp_net_init(struct rte_eth_dev *eth_dev)
 			return -ENODEV;
 
 		/* Use port offset in pf ctrl_bar for this ports control bar */
-		hw->ctrl_bar = pf_dev->ctrl_bar + (port * NFP_PF_CSR_SLICE_SIZE);
+		hw->ctrl_bar = pf_dev->ctrl_bar + (port * NFP_NET_CFG_BAR_SZ);
 		hw->mac_stats = app_fw_nic->ports[0]->mac_stats_bar + (port * NFP_MAC_STATS_SIZE);
 	}
 
@@ -743,8 +743,7 @@ nfp_init_app_fw_nic(struct nfp_pf_dev *pf_dev,
 		const struct nfp_dev_info *dev_info)
 {
 	uint8_t i;
-	int ret;
-	int err = 0;
+	int ret = 0;
 	uint32_t total_vnics;
 	struct nfp_net_hw *hw;
 	unsigned int numa_node;
@@ -765,8 +764,8 @@ nfp_init_app_fw_nic(struct nfp_pf_dev *pf_dev,
 	pf_dev->app_fw_priv = app_fw_nic;
 
 	/* Read the number of vNIC's created for the PF */
-	total_vnics = nfp_rtsym_read_le(pf_dev->sym_tbl, "nfd_cfg_pf0_num_ports", &err);
-	if (err != 0 || total_vnics == 0 || total_vnics > 8) {
+	total_vnics = nfp_rtsym_read_le(pf_dev->sym_tbl, "nfd_cfg_pf0_num_ports", &ret);
+	if (ret != 0 || total_vnics == 0 || total_vnics > 8) {
 		PMD_INIT_LOG(ERR, "nfd_cfg_pf0_num_ports symbol with wrong value");
 		ret = -ENODEV;
 		goto app_cleanup;
@@ -874,8 +873,7 @@ nfp_init_app_fw_nic(struct nfp_pf_dev *pf_dev,
 static int
 nfp_pf_init(struct rte_pci_device *pci_dev)
 {
-	int ret;
-	int err = 0;
+	int ret = 0;
 	uint64_t addr;
 	uint32_t cpp_id;
 	struct nfp_cpp *cpp;
@@ -943,8 +941,8 @@ nfp_pf_init(struct rte_pci_device *pci_dev)
 	}
 
 	/* Read the app ID of the firmware loaded */
-	app_fw_id = nfp_rtsym_read_le(sym_tbl, "_pf0_net_app_id", &err);
-	if (err != 0) {
+	app_fw_id = nfp_rtsym_read_le(sym_tbl, "_pf0_net_app_id", &ret);
+	if (ret != 0) {
 		PMD_INIT_LOG(ERR, "Couldn't read app_fw_id from fw");
 		ret = -EIO;
 		goto sym_tbl_cleanup;
@@ -1080,7 +1078,6 @@ nfp_secondary_init_app_fw_nic(struct rte_pci_device *pci_dev,
 static int
 nfp_pf_secondary_init(struct rte_pci_device *pci_dev)
 {
-	int err = 0;
 	int ret = 0;
 	struct nfp_cpp *cpp;
 	enum nfp_app_fw_id app_fw_id;
@@ -1124,8 +1121,8 @@ nfp_pf_secondary_init(struct rte_pci_device *pci_dev)
 	}
 
 	/* Read the app ID of the firmware loaded */
-	app_fw_id = nfp_rtsym_read_le(sym_tbl, "_pf0_net_app_id", &err);
-	if (err != 0) {
+	app_fw_id = nfp_rtsym_read_le(sym_tbl, "_pf0_net_app_id", &ret);
+	if (ret != 0) {
 		PMD_INIT_LOG(ERR, "Couldn't read app_fw_id from fw");
 		goto sym_tbl_cleanup;
 	}
diff --git a/drivers/net/nfp/nfp_ethdev_vf.c b/drivers/net/nfp/nfp_ethdev_vf.c
index af0689832a..b6ebbc1ea5 100644
--- a/drivers/net/nfp/nfp_ethdev_vf.c
+++ b/drivers/net/nfp/nfp_ethdev_vf.c
@@ -39,8 +39,6 @@ nfp_netvf_start(struct rte_eth_dev *dev)
 	struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
 	struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
 
-	hw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private);
-
 	/* Disabling queues just in case... */
 	nfp_net_disable_queues(dev);
 
@@ -54,7 +52,7 @@ nfp_netvf_start(struct rte_eth_dev *dev)
 			 * Better not to share LSC with RX interrupts.
 			 * Unregistering LSC interrupt handler
 			 */
-			rte_intr_callback_unregister(pci_dev->intr_handle,
+			rte_intr_callback_unregister(intr_handle,
 					nfp_net_dev_interrupt_handler, (void *)dev);
 
 			if (dev->data->nb_rx_queues > 1) {
@@ -77,6 +75,7 @@ nfp_netvf_start(struct rte_eth_dev *dev)
 	new_ctrl = nfp_check_offloads(dev);
 
 	/* Writing configuration parameters in the device */
+	hw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private);
 	nfp_net_params_setup(hw);
 
 	dev_conf = &dev->data->dev_conf;
@@ -244,15 +243,15 @@ static int
 nfp_netvf_init(struct rte_eth_dev *eth_dev)
 {
 	int err;
+	uint16_t port;
 	uint32_t start_q;
-	uint16_t port = 0;
 	struct nfp_net_hw *hw;
 	uint64_t tx_bar_off = 0;
 	uint64_t rx_bar_off = 0;
 	struct rte_pci_device *pci_dev;
 	const struct nfp_dev_info *dev_info;
-	struct rte_ether_addr *tmp_ether_addr;
 
+	port = eth_dev->data->port_id;
 	pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
 
 	dev_info = nfp_dev_info_get(pci_dev->id.device_id);
@@ -325,9 +324,7 @@ nfp_netvf_init(struct rte_eth_dev *eth_dev)
 	}
 
 	nfp_netvf_read_mac(hw);
-
-	tmp_ether_addr = &hw->mac_addr;
-	if (rte_is_valid_assigned_ether_addr(tmp_ether_addr) == 0) {
+	if (rte_is_valid_assigned_ether_addr(&hw->mac_addr) == 0) {
 		PMD_INIT_LOG(INFO, "Using random mac address for port %hu", port);
 		/* Using random mac addresses for VFs */
 		rte_eth_random_addr(&hw->mac_addr.addr_bytes[0]);
@@ -344,7 +341,7 @@ nfp_netvf_init(struct rte_eth_dev *eth_dev)
 
 	PMD_INIT_LOG(INFO, "port %hu VendorID=%#x DeviceID=%#x "
 			"mac=" RTE_ETHER_ADDR_PRT_FMT,
-			eth_dev->data->port_id, pci_dev->id.vendor_id,
+			port, pci_dev->id.vendor_id,
 			pci_dev->id.device_id,
 			RTE_ETHER_ADDR_BYTES(&hw->mac_addr));
 
diff --git a/drivers/net/nfp/nfp_rxtx.c b/drivers/net/nfp/nfp_rxtx.c
index 4632837c0e..e11f617f9a 100644
--- a/drivers/net/nfp/nfp_rxtx.c
+++ b/drivers/net/nfp/nfp_rxtx.c
@@ -284,7 +284,7 @@ nfp_net_parse_chained_meta(uint8_t *meta_base,
 			meta->vlan[meta->vlan_layer].tci =
 					vlan_info & NFP_NET_META_VLAN_MASK;
 			meta->vlan[meta->vlan_layer].tpid = NFP_NET_META_TPID(vlan_info);
-			++meta->vlan_layer;
+			meta->vlan_layer++;
 			break;
 		case NFP_NET_META_IPSEC:
 			meta->sa_idx = rte_be_to_cpu_32(*(rte_be32_t *)meta_offset);
-- 
2.39.1


^ permalink raw reply	[flat|nested] 40+ messages in thread

* [PATCH 11/11] net/nfp: refact the meson build file
  2023-10-07  2:33 [PATCH 00/11] Unify the PMD coding style Chaoyong He
                   ` (9 preceding siblings ...)
  2023-10-07  2:33 ` [PATCH 10/11] net/nfp: adjust logic to make it more readable Chaoyong He
@ 2023-10-07  2:33 ` Chaoyong He
  2023-10-12  1:26 ` [PATCH v2 00/11] Unify the PMD coding style Chaoyong He
  11 siblings, 0 replies; 40+ messages in thread
From: Chaoyong He @ 2023-10-07  2:33 UTC (permalink / raw)
  To: dev; +Cc: oss-drivers, Chaoyong He, Long Wu, Peng Zhang

Make the source files follow alphabetical order.
Also update the copyright header line.

Signed-off-by: Chaoyong He <chaoyong.he@corigine.com>
Reviewed-by: Long Wu <long.wu@corigine.com>
Reviewed-by: Peng Zhang <peng.zhang@corigine.com>
---
 drivers/net/nfp/meson.build | 23 ++++++++++++-----------
 1 file changed, 12 insertions(+), 11 deletions(-)

diff --git a/drivers/net/nfp/meson.build b/drivers/net/nfp/meson.build
index 3912566134..fd3e88a207 100644
--- a/drivers/net/nfp/meson.build
+++ b/drivers/net/nfp/meson.build
@@ -1,10 +1,11 @@
 # SPDX-License-Identifier: BSD-3-Clause
-# Copyright(c) 2018 Intel Corporation
+# Copyright(c) 2018 Corigine, Inc.
 
 if not is_linux or not dpdk_conf.get('RTE_ARCH_64')
     build = false
     reason = 'only supported on 64-bit Linux'
 endif
+
 sources = files(
         'flower/nfp_flower.c',
         'flower/nfp_flower_cmsg.c',
@@ -12,30 +13,30 @@ sources = files(
         'flower/nfp_flower_representor.c',
         'nfd3/nfp_nfd3_dp.c',
         'nfdk/nfp_nfdk_dp.c',
-        'nfpcore/nfp_nsp.c',
         'nfpcore/nfp_cppcore.c',
-        'nfpcore/nfp_resource.c',
-        'nfpcore/nfp_mip.c',
-        'nfpcore/nfp_nffw.c',
-        'nfpcore/nfp_rtsym.c',
-        'nfpcore/nfp_nsp_cmds.c',
         'nfpcore/nfp_crc.c',
         'nfpcore/nfp_dev.c',
+        'nfpcore/nfp_hwinfo.c',
+        'nfpcore/nfp_mip.c',
         'nfpcore/nfp_mutex.c',
+        'nfpcore/nfp_nffw.c',
+        'nfpcore/nfp_nsp.c',
+        'nfpcore/nfp_nsp_cmds.c',
         'nfpcore/nfp_nsp_eth.c',
-        'nfpcore/nfp_hwinfo.c',
+        'nfpcore/nfp_resource.c',
+        'nfpcore/nfp_rtsym.c',
         'nfpcore/nfp_target.c',
         'nfpcore/nfp6000_pcie.c',
         'nfp_common.c',
-        'nfp_ctrl.c',
-        'nfp_rxtx.c',
         'nfp_cpp_bridge.c',
-        'nfp_ethdev_vf.c',
+        'nfp_ctrl.c',
         'nfp_ethdev.c',
+        'nfp_ethdev_vf.c',
         'nfp_flow.c',
         'nfp_ipsec.c',
         'nfp_logs.c',
         'nfp_mtr.c',
+        'nfp_rxtx.c',
 )
 
 deps += ['hash', 'security']
-- 
2.39.1


^ permalink raw reply	[flat|nested] 40+ messages in thread

* [PATCH v2 00/11] Unify the PMD coding style
  2023-10-07  2:33 [PATCH 00/11] Unify the PMD coding style Chaoyong He
                   ` (10 preceding siblings ...)
  2023-10-07  2:33 ` [PATCH 11/11] net/nfp: refact the meson build file Chaoyong He
@ 2023-10-12  1:26 ` Chaoyong He
  2023-10-12  1:26   ` [PATCH v2 01/11] net/nfp: explicitly compare to null and 0 Chaoyong He
                     ` (11 more replies)
  11 siblings, 12 replies; 40+ messages in thread
From: Chaoyong He @ 2023-10-12  1:26 UTC (permalink / raw)
  To: dev; +Cc: oss-drivers, Chaoyong He

This patch series aims to unify the coding style of the NFP PMD, making
the logic follow the same rules so it is easier to understand and
extend.
It also prepares for the upcoming vDPA PMD patch series.

---
v2:
* Add some missing modifications.
---

Chaoyong He (11):
  net/nfp: explicitly compare to null and 0
  net/nfp: unify the indent coding style
  net/nfp: unify the type of integer variable
  net/nfp: standard the local variable coding style
  net/nfp: adjust the log statement
  net/nfp: standard the comment style
  net/nfp: standard the blank character
  net/nfp: unify the guide line of header file
  net/nfp: rename some parameter and variable
  net/nfp: adjust logic to make it more readable
  net/nfp: refact the meson build file

 drivers/net/nfp/flower/nfp_conntrack.c        |   4 +-
 drivers/net/nfp/flower/nfp_flower.c           |  27 +-
 drivers/net/nfp/flower/nfp_flower.h           |  34 +-
 drivers/net/nfp/flower/nfp_flower_cmsg.c      |  18 +-
 drivers/net/nfp/flower/nfp_flower_cmsg.h      |  62 +-
 drivers/net/nfp/flower/nfp_flower_ctrl.c      |  46 +-
 drivers/net/nfp/flower/nfp_flower_ctrl.h      |   6 +-
 .../net/nfp/flower/nfp_flower_representor.c   |  46 +-
 .../net/nfp/flower/nfp_flower_representor.h   |   8 +-
 drivers/net/nfp/meson.build                   |  23 +-
 drivers/net/nfp/nfd3/nfp_nfd3.h               |  39 +-
 drivers/net/nfp/nfd3/nfp_nfd3_dp.c            |  34 +-
 drivers/net/nfp/nfdk/nfp_nfdk.h               |  49 +-
 drivers/net/nfp/nfdk/nfp_nfdk_dp.c            |  14 +-
 drivers/net/nfp/nfp_common.c                  | 775 +++++++++---------
 drivers/net/nfp/nfp_common.h                  | 169 ++--
 drivers/net/nfp/nfp_cpp_bridge.c              | 139 ++--
 drivers/net/nfp/nfp_cpp_bridge.h              |   8 +-
 drivers/net/nfp/nfp_ctrl.h                    |  46 +-
 drivers/net/nfp/nfp_ethdev.c                  | 325 ++++----
 drivers/net/nfp/nfp_ethdev_vf.c               | 195 ++---
 drivers/net/nfp/nfp_flow.c                    | 251 +++---
 drivers/net/nfp/nfp_flow.h                    |  23 +-
 drivers/net/nfp/nfp_ipsec.h                   |  12 +-
 drivers/net/nfp/nfp_logs.h                    |   7 +-
 drivers/net/nfp/nfp_rxtx.c                    | 303 +++----
 drivers/net/nfp/nfp_rxtx.h                    |  36 +-
 drivers/net/nfp/nfpcore/nfp_resource.h        |   2 +-
 28 files changed, 1313 insertions(+), 1388 deletions(-)

-- 
2.39.1


^ permalink raw reply	[flat|nested] 40+ messages in thread

* [PATCH v2 01/11] net/nfp: explicitly compare to null and 0
  2023-10-12  1:26 ` [PATCH v2 00/11] Unify the PMD coding style Chaoyong He
@ 2023-10-12  1:26   ` Chaoyong He
  2023-10-12  1:26   ` [PATCH v2 02/11] net/nfp: unify the indent coding style Chaoyong He
                     ` (10 subsequent siblings)
  11 siblings, 0 replies; 40+ messages in thread
From: Chaoyong He @ 2023-10-12  1:26 UTC (permalink / raw)
  To: dev; +Cc: oss-drivers, Chaoyong He, Long Wu, Peng Zhang

To comply with the coding standard, make pointer variables compare
explicitly to 'NULL' and integer variables compare explicitly to '0'.
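
A minimal standalone sketch of the convention (the struct, helper and
flag below are hypothetical stand-ins for illustration, not driver
code):

#include <stddef.h>
#include <stdint.h>

#define FLAG_ENABLED    (1u << 0)

struct entry {
	uint32_t flags;
};

static struct entry table[4];

/* Hypothetical lookup helper; returns NULL when the id is out of range. */
static struct entry *
lookup(uint32_t id)
{
	if (id >= 4)
		return NULL;

	return &table[id];
}

int
entry_is_enabled(uint32_t id)
{
	struct entry *entry = lookup(id);

	/*
	 * Pointers compare explicitly against NULL and bit tests against 0,
	 * rather than 'if (!entry)' or a bare 'if (entry->flags & ...)'.
	 */
	if (entry == NULL)
		return 0;

	return (entry->flags & FLAG_ENABLED) != 0;
}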

Signed-off-by: Chaoyong He <chaoyong.he@corigine.com>
Reviewed-by: Long Wu <long.wu@corigine.com>
Reviewed-by: Peng Zhang <peng.zhang@corigine.com>
---
 drivers/net/nfp/flower/nfp_flower.c      |   6 +-
 drivers/net/nfp/flower/nfp_flower_ctrl.c |   6 +-
 drivers/net/nfp/nfp_common.c             | 144 +++++++++++------------
 drivers/net/nfp/nfp_cpp_bridge.c         |   2 +-
 drivers/net/nfp/nfp_ethdev.c             |  38 +++---
 drivers/net/nfp/nfp_ethdev_vf.c          |  14 +--
 drivers/net/nfp/nfp_flow.c               |  90 +++++++-------
 drivers/net/nfp/nfp_rxtx.c               |  28 ++---
 8 files changed, 165 insertions(+), 163 deletions(-)

diff --git a/drivers/net/nfp/flower/nfp_flower.c b/drivers/net/nfp/flower/nfp_flower.c
index 98e6f7f927..3ddaf0f28d 100644
--- a/drivers/net/nfp/flower/nfp_flower.c
+++ b/drivers/net/nfp/flower/nfp_flower.c
@@ -69,7 +69,7 @@ nfp_pf_repr_disable_queues(struct rte_eth_dev *dev)
 		new_ctrl &= ~NFP_NET_CFG_CTRL_RINGCFG;
 
 	/* If an error when reconfig we avoid to change hw state */
-	if (nfp_net_reconfig(hw, new_ctrl, update) < 0)
+	if (nfp_net_reconfig(hw, new_ctrl, update) != 0)
 		return;
 
 	hw->ctrl = new_ctrl;
@@ -100,7 +100,7 @@ nfp_flower_pf_start(struct rte_eth_dev *dev)
 
 	update |= NFP_NET_CFG_UPDATE_RSS;
 
-	if (hw->cap & NFP_NET_CFG_CTRL_RSS2)
+	if ((hw->cap & NFP_NET_CFG_CTRL_RSS2) != 0)
 		new_ctrl |= NFP_NET_CFG_CTRL_RSS2;
 	else
 		new_ctrl |= NFP_NET_CFG_CTRL_RSS;
@@ -110,7 +110,7 @@ nfp_flower_pf_start(struct rte_eth_dev *dev)
 
 	update |= NFP_NET_CFG_UPDATE_GEN | NFP_NET_CFG_UPDATE_RING;
 
-	if (hw->cap & NFP_NET_CFG_CTRL_RINGCFG)
+	if ((hw->cap & NFP_NET_CFG_CTRL_RINGCFG) != 0)
 		new_ctrl |= NFP_NET_CFG_CTRL_RINGCFG;
 
 	nn_cfg_writel(hw, NFP_NET_CFG_CTRL, new_ctrl);
diff --git a/drivers/net/nfp/flower/nfp_flower_ctrl.c b/drivers/net/nfp/flower/nfp_flower_ctrl.c
index c5282053cf..b564e7cd73 100644
--- a/drivers/net/nfp/flower/nfp_flower_ctrl.c
+++ b/drivers/net/nfp/flower/nfp_flower_ctrl.c
@@ -103,7 +103,7 @@ nfp_flower_ctrl_vnic_recv(void *rx_queue,
 		}
 
 		/* Filling the received mbuf with packet info */
-		if (hw->rx_offset)
+		if (hw->rx_offset != 0)
 			mb->data_off = RTE_PKTMBUF_HEADROOM + hw->rx_offset;
 		else
 			mb->data_off = RTE_PKTMBUF_HEADROOM + NFP_DESC_META_LEN(rxds);
@@ -195,7 +195,7 @@ nfp_flower_ctrl_vnic_nfd3_xmit(struct nfp_app_fw_flower *app_fw_flower,
 
 	lmbuf = &txq->txbufs[txq->wr_p].mbuf;
 	RTE_MBUF_PREFETCH_TO_FREE(*lmbuf);
-	if (*lmbuf)
+	if (*lmbuf != NULL)
 		rte_pktmbuf_free_seg(*lmbuf);
 
 	*lmbuf = mbuf;
@@ -337,7 +337,7 @@ nfp_flower_ctrl_vnic_nfdk_xmit(struct nfp_app_fw_flower *app_fw_flower,
 	}
 
 	txq->wr_p = D_IDX(txq, txq->wr_p + used_descs);
-	if (txq->wr_p % NFDK_TX_DESC_BLOCK_CNT)
+	if (txq->wr_p % NFDK_TX_DESC_BLOCK_CNT != 0)
 		txq->data_pending += mbuf->pkt_len;
 	else
 		txq->data_pending = 0;
diff --git a/drivers/net/nfp/nfp_common.c b/drivers/net/nfp/nfp_common.c
index 5683afc40a..36752583dd 100644
--- a/drivers/net/nfp/nfp_common.c
+++ b/drivers/net/nfp/nfp_common.c
@@ -221,7 +221,7 @@ __nfp_net_reconfig(struct nfp_net_hw *hw, uint32_t update)
 		new = nn_cfg_readl(hw, NFP_NET_CFG_UPDATE);
 		if (new == 0)
 			break;
-		if (new & NFP_NET_CFG_UPDATE_ERR) {
+		if ((new & NFP_NET_CFG_UPDATE_ERR) != 0) {
 			PMD_INIT_LOG(ERR, "Reconfig error: 0x%08x", new);
 			return -1;
 		}
@@ -390,18 +390,18 @@ nfp_net_configure(struct rte_eth_dev *dev)
 	rxmode = &dev_conf->rxmode;
 	txmode = &dev_conf->txmode;
 
-	if (rxmode->mq_mode & RTE_ETH_MQ_RX_RSS_FLAG)
+	if ((rxmode->mq_mode & RTE_ETH_MQ_RX_RSS_FLAG) != 0)
 		rxmode->offloads |= RTE_ETH_RX_OFFLOAD_RSS_HASH;
 
 	/* Checking TX mode */
-	if (txmode->mq_mode) {
+	if (txmode->mq_mode != RTE_ETH_MQ_TX_NONE) {
 		PMD_INIT_LOG(INFO, "TX mq_mode DCB and VMDq not supported");
 		return -EINVAL;
 	}
 
 	/* Checking RX mode */
-	if (rxmode->mq_mode & RTE_ETH_MQ_RX_RSS_FLAG &&
-	    !(hw->cap & NFP_NET_CFG_CTRL_RSS_ANY)) {
+	if ((rxmode->mq_mode & RTE_ETH_MQ_RX_RSS_FLAG) != 0 &&
+	    (hw->cap & NFP_NET_CFG_CTRL_RSS_ANY) == 0) {
 		PMD_INIT_LOG(INFO, "RSS not supported");
 		return -EINVAL;
 	}
@@ -493,11 +493,11 @@ nfp_net_disable_queues(struct rte_eth_dev *dev)
 	update = NFP_NET_CFG_UPDATE_GEN | NFP_NET_CFG_UPDATE_RING |
 		 NFP_NET_CFG_UPDATE_MSIX;
 
-	if (hw->cap & NFP_NET_CFG_CTRL_RINGCFG)
+	if ((hw->cap & NFP_NET_CFG_CTRL_RINGCFG) != 0)
 		new_ctrl &= ~NFP_NET_CFG_CTRL_RINGCFG;
 
 	/* If an error when reconfig we avoid to change hw state */
-	if (nfp_net_reconfig(hw, new_ctrl, update) < 0)
+	if (nfp_net_reconfig(hw, new_ctrl, update) != 0)
 		return;
 
 	hw->ctrl = new_ctrl;
@@ -537,8 +537,8 @@ nfp_net_set_mac_addr(struct rte_eth_dev *dev, struct rte_ether_addr *mac_addr)
 	uint32_t update, ctrl;
 
 	hw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private);
-	if ((hw->ctrl & NFP_NET_CFG_CTRL_ENABLE) &&
-	    !(hw->cap & NFP_NET_CFG_CTRL_LIVE_ADDR)) {
+	if ((hw->ctrl & NFP_NET_CFG_CTRL_ENABLE) != 0 &&
+	    (hw->cap & NFP_NET_CFG_CTRL_LIVE_ADDR) == 0) {
 		PMD_INIT_LOG(INFO, "MAC address unable to change when"
 				  " port enabled");
 		return -EBUSY;
@@ -550,10 +550,10 @@ nfp_net_set_mac_addr(struct rte_eth_dev *dev, struct rte_ether_addr *mac_addr)
 	/* Signal the NIC about the change */
 	update = NFP_NET_CFG_UPDATE_MACADDR;
 	ctrl = hw->ctrl;
-	if ((hw->ctrl & NFP_NET_CFG_CTRL_ENABLE) &&
-	    (hw->cap & NFP_NET_CFG_CTRL_LIVE_ADDR))
+	if ((hw->ctrl & NFP_NET_CFG_CTRL_ENABLE) != 0 &&
+	    (hw->cap & NFP_NET_CFG_CTRL_LIVE_ADDR) != 0)
 		ctrl |= NFP_NET_CFG_CTRL_LIVE_ADDR;
-	if (nfp_net_reconfig(hw, ctrl, update) < 0) {
+	if (nfp_net_reconfig(hw, ctrl, update) != 0) {
 		PMD_INIT_LOG(INFO, "MAC address update failed");
 		return -EIO;
 	}
@@ -568,7 +568,7 @@ nfp_configure_rx_interrupt(struct rte_eth_dev *dev,
 	int i;
 
 	if (rte_intr_vec_list_alloc(intr_handle, "intr_vec",
-				    dev->data->nb_rx_queues)) {
+				    dev->data->nb_rx_queues) != 0) {
 		PMD_INIT_LOG(ERR, "Failed to allocate %d rx_queues"
 			     " intr_vec", dev->data->nb_rx_queues);
 		return -ENOMEM;
@@ -580,7 +580,7 @@ nfp_configure_rx_interrupt(struct rte_eth_dev *dev,
 		PMD_INIT_LOG(INFO, "VF: enabling RX interrupt with UIO");
 		/* UIO just supports one queue and no LSC*/
 		nn_cfg_writeb(hw, NFP_NET_CFG_RXR_VEC(0), 0);
-		if (rte_intr_vec_list_index_set(intr_handle, 0, 0))
+		if (rte_intr_vec_list_index_set(intr_handle, 0, 0) != 0)
 			return -1;
 	} else {
 		PMD_INIT_LOG(INFO, "VF: enabling RX interrupt with VFIO");
@@ -591,7 +591,7 @@ nfp_configure_rx_interrupt(struct rte_eth_dev *dev,
 			*/
 			nn_cfg_writeb(hw, NFP_NET_CFG_RXR_VEC(i), i + 1);
 			if (rte_intr_vec_list_index_set(intr_handle, i,
-							       i + 1))
+							       i + 1) != 0)
 				return -1;
 			PMD_INIT_LOG(DEBUG, "intr_vec[%d]= %d", i,
 				rte_intr_vec_list_index_get(intr_handle,
@@ -619,53 +619,53 @@ nfp_check_offloads(struct rte_eth_dev *dev)
 	rxmode = &dev_conf->rxmode;
 	txmode = &dev_conf->txmode;
 
-	if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_IPV4_CKSUM) {
-		if (hw->cap & NFP_NET_CFG_CTRL_RXCSUM)
+	if ((rxmode->offloads & RTE_ETH_RX_OFFLOAD_IPV4_CKSUM) != 0) {
+		if ((hw->cap & NFP_NET_CFG_CTRL_RXCSUM) != 0)
 			ctrl |= NFP_NET_CFG_CTRL_RXCSUM;
 	}
 
-	if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP)
+	if ((rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP) != 0)
 		nfp_net_enbable_rxvlan_cap(hw, &ctrl);
 
-	if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_QINQ_STRIP) {
-		if (hw->cap & NFP_NET_CFG_CTRL_RXQINQ)
+	if ((rxmode->offloads & RTE_ETH_RX_OFFLOAD_QINQ_STRIP) != 0) {
+		if ((hw->cap & NFP_NET_CFG_CTRL_RXQINQ) != 0)
 			ctrl |= NFP_NET_CFG_CTRL_RXQINQ;
 	}
 
 	hw->mtu = dev->data->mtu;
 
-	if (txmode->offloads & RTE_ETH_TX_OFFLOAD_VLAN_INSERT) {
-		if (hw->cap & NFP_NET_CFG_CTRL_TXVLAN_V2)
+	if ((txmode->offloads & RTE_ETH_TX_OFFLOAD_VLAN_INSERT) != 0) {
+		if ((hw->cap & NFP_NET_CFG_CTRL_TXVLAN_V2) != 0)
 			ctrl |= NFP_NET_CFG_CTRL_TXVLAN_V2;
-		else if (hw->cap & NFP_NET_CFG_CTRL_TXVLAN)
+		else if ((hw->cap & NFP_NET_CFG_CTRL_TXVLAN) != 0)
 			ctrl |= NFP_NET_CFG_CTRL_TXVLAN;
 	}
 
 	/* L2 broadcast */
-	if (hw->cap & NFP_NET_CFG_CTRL_L2BC)
+	if ((hw->cap & NFP_NET_CFG_CTRL_L2BC) != 0)
 		ctrl |= NFP_NET_CFG_CTRL_L2BC;
 
 	/* L2 multicast */
-	if (hw->cap & NFP_NET_CFG_CTRL_L2MC)
+	if ((hw->cap & NFP_NET_CFG_CTRL_L2MC) != 0)
 		ctrl |= NFP_NET_CFG_CTRL_L2MC;
 
 	/* TX checksum offload */
-	if (txmode->offloads & RTE_ETH_TX_OFFLOAD_IPV4_CKSUM ||
-	    txmode->offloads & RTE_ETH_TX_OFFLOAD_UDP_CKSUM ||
-	    txmode->offloads & RTE_ETH_TX_OFFLOAD_TCP_CKSUM)
+	if ((txmode->offloads & RTE_ETH_TX_OFFLOAD_IPV4_CKSUM) != 0 ||
+	    (txmode->offloads & RTE_ETH_TX_OFFLOAD_UDP_CKSUM) != 0 ||
+	    (txmode->offloads & RTE_ETH_TX_OFFLOAD_TCP_CKSUM) != 0)
 		ctrl |= NFP_NET_CFG_CTRL_TXCSUM;
 
 	/* LSO offload */
-	if (txmode->offloads & RTE_ETH_TX_OFFLOAD_TCP_TSO ||
-	    txmode->offloads & RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO) {
-		if (hw->cap & NFP_NET_CFG_CTRL_LSO)
+	if ((txmode->offloads & RTE_ETH_TX_OFFLOAD_TCP_TSO) != 0 ||
+	    (txmode->offloads & RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO) != 0) {
+		if ((hw->cap & NFP_NET_CFG_CTRL_LSO) != 0)
 			ctrl |= NFP_NET_CFG_CTRL_LSO;
 		else
 			ctrl |= NFP_NET_CFG_CTRL_LSO2;
 	}
 
 	/* RX gather */
-	if (txmode->offloads & RTE_ETH_TX_OFFLOAD_MULTI_SEGS)
+	if ((txmode->offloads & RTE_ETH_TX_OFFLOAD_MULTI_SEGS) != 0)
 		ctrl |= NFP_NET_CFG_CTRL_GATHER;
 
 	return ctrl;
@@ -693,7 +693,7 @@ nfp_net_promisc_enable(struct rte_eth_dev *dev)
 		return -ENOTSUP;
 	}
 
-	if (hw->ctrl & NFP_NET_CFG_CTRL_PROMISC) {
+	if ((hw->ctrl & NFP_NET_CFG_CTRL_PROMISC) != 0) {
 		PMD_DRV_LOG(INFO, "Promiscuous mode already enabled");
 		return 0;
 	}
@@ -706,7 +706,7 @@ nfp_net_promisc_enable(struct rte_eth_dev *dev)
 	 * it can not fail ...
 	 */
 	ret = nfp_net_reconfig(hw, new_ctrl, update);
-	if (ret < 0)
+	if (ret != 0)
 		return ret;
 
 	hw->ctrl = new_ctrl;
@@ -736,7 +736,7 @@ nfp_net_promisc_disable(struct rte_eth_dev *dev)
 	 * assuming it can not fail ...
 	 */
 	ret = nfp_net_reconfig(hw, new_ctrl, update);
-	if (ret < 0)
+	if (ret != 0)
 		return ret;
 
 	hw->ctrl = new_ctrl;
@@ -770,7 +770,7 @@ nfp_net_link_update(struct rte_eth_dev *dev, __rte_unused int wait_to_complete)
 
 	memset(&link, 0, sizeof(struct rte_eth_link));
 
-	if (nn_link_status & NFP_NET_CFG_STS_LINK)
+	if ((nn_link_status & NFP_NET_CFG_STS_LINK) != 0)
 		link.link_status = RTE_ETH_LINK_UP;
 
 	link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
@@ -802,7 +802,7 @@ nfp_net_link_update(struct rte_eth_dev *dev, __rte_unused int wait_to_complete)
 
 	ret = rte_eth_linkstatus_set(dev, &link);
 	if (ret == 0) {
-		if (link.link_status)
+		if (link.link_status != 0)
 			PMD_DRV_LOG(INFO, "NIC Link is Up");
 		else
 			PMD_DRV_LOG(INFO, "NIC Link is Down");
@@ -907,7 +907,7 @@ nfp_net_stats_get(struct rte_eth_dev *dev, struct rte_eth_stats *stats)
 
 	nfp_dev_stats.imissed -= hw->eth_stats_base.imissed;
 
-	if (stats) {
+	if (stats != NULL) {
 		memcpy(stats, &nfp_dev_stats, sizeof(*stats));
 		return 0;
 	}
@@ -1229,32 +1229,32 @@ nfp_net_infos_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 	/* Next should change when PF support is implemented */
 	dev_info->max_mac_addrs = 1;
 
-	if (hw->cap & (NFP_NET_CFG_CTRL_RXVLAN | NFP_NET_CFG_CTRL_RXVLAN_V2))
+	if ((hw->cap & (NFP_NET_CFG_CTRL_RXVLAN | NFP_NET_CFG_CTRL_RXVLAN_V2)) != 0)
 		dev_info->rx_offload_capa = RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
 
-	if (hw->cap & NFP_NET_CFG_CTRL_RXQINQ)
+	if ((hw->cap & NFP_NET_CFG_CTRL_RXQINQ) != 0)
 		dev_info->rx_offload_capa |= RTE_ETH_RX_OFFLOAD_QINQ_STRIP;
 
-	if (hw->cap & NFP_NET_CFG_CTRL_RXCSUM)
+	if ((hw->cap & NFP_NET_CFG_CTRL_RXCSUM) != 0)
 		dev_info->rx_offload_capa |= RTE_ETH_RX_OFFLOAD_IPV4_CKSUM |
 					     RTE_ETH_RX_OFFLOAD_UDP_CKSUM |
 					     RTE_ETH_RX_OFFLOAD_TCP_CKSUM;
 
-	if (hw->cap & (NFP_NET_CFG_CTRL_TXVLAN | NFP_NET_CFG_CTRL_TXVLAN_V2))
+	if ((hw->cap & (NFP_NET_CFG_CTRL_TXVLAN | NFP_NET_CFG_CTRL_TXVLAN_V2)) != 0)
 		dev_info->tx_offload_capa = RTE_ETH_TX_OFFLOAD_VLAN_INSERT;
 
-	if (hw->cap & NFP_NET_CFG_CTRL_TXCSUM)
+	if ((hw->cap & NFP_NET_CFG_CTRL_TXCSUM) != 0)
 		dev_info->tx_offload_capa |= RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
 					     RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
 					     RTE_ETH_TX_OFFLOAD_TCP_CKSUM;
 
-	if (hw->cap & NFP_NET_CFG_CTRL_LSO_ANY) {
+	if ((hw->cap & NFP_NET_CFG_CTRL_LSO_ANY) != 0) {
 		dev_info->tx_offload_capa |= RTE_ETH_TX_OFFLOAD_TCP_TSO;
-		if (hw->cap & NFP_NET_CFG_CTRL_VXLAN)
+		if ((hw->cap & NFP_NET_CFG_CTRL_VXLAN) != 0)
 			dev_info->tx_offload_capa |= RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO;
 	}
 
-	if (hw->cap & NFP_NET_CFG_CTRL_GATHER)
+	if ((hw->cap & NFP_NET_CFG_CTRL_GATHER) != 0)
 		dev_info->tx_offload_capa |= RTE_ETH_TX_OFFLOAD_MULTI_SEGS;
 
 	cap_extend = nn_cfg_readl(hw, NFP_NET_CFG_CAP_WORD1);
@@ -1297,7 +1297,7 @@ nfp_net_infos_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 		.nb_mtu_seg_max = NFP_TX_MAX_MTU_SEG,
 	};
 
-	if (hw->cap & NFP_NET_CFG_CTRL_RSS_ANY) {
+	if ((hw->cap & NFP_NET_CFG_CTRL_RSS_ANY) != 0) {
 		dev_info->rx_offload_capa |= RTE_ETH_RX_OFFLOAD_RSS_HASH;
 
 		dev_info->flow_type_rss_offloads = RTE_ETH_RSS_IPV4 |
@@ -1431,7 +1431,7 @@ nfp_net_dev_link_status_print(struct rte_eth_dev *dev)
 	struct rte_eth_link link;
 
 	rte_eth_linkstatus_get(dev, &link);
-	if (link.link_status)
+	if (link.link_status != 0)
 		PMD_DRV_LOG(INFO, "Port %d: Link Up - speed %u Mbps - %s",
 			    dev->data->port_id, link.link_speed,
 			    link.link_duplex == RTE_ETH_LINK_FULL_DUPLEX
@@ -1462,7 +1462,7 @@ nfp_net_irq_unmask(struct rte_eth_dev *dev)
 	hw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private);
 	pci_dev = RTE_ETH_DEV_TO_PCI(dev);
 
-	if (hw->ctrl & NFP_NET_CFG_CTRL_MSIXAUTO) {
+	if ((hw->ctrl & NFP_NET_CFG_CTRL_MSIXAUTO) != 0) {
 		/* If MSI-X auto-masking is used, clear the entry */
 		rte_wmb();
 		rte_intr_ack(pci_dev->intr_handle);
@@ -1524,7 +1524,7 @@ nfp_net_dev_interrupt_handler(void *param)
 
 	if (rte_eal_alarm_set(timeout * 1000,
 			      nfp_net_dev_interrupt_delayed_handler,
-			      (void *)dev) < 0) {
+			      (void *)dev) != 0) {
 		PMD_INIT_LOG(ERR, "Error setting alarm");
 		/* Unmasking */
 		nfp_net_irq_unmask(dev);
@@ -1577,16 +1577,16 @@ nfp_net_vlan_offload_set(struct rte_eth_dev *dev, int mask)
 	nfp_net_enbable_rxvlan_cap(hw, &rxvlan_ctrl);
 
 	/* VLAN stripping setting */
-	if (mask & RTE_ETH_VLAN_STRIP_MASK) {
-		if (dev_conf->rxmode.offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP)
+	if ((mask & RTE_ETH_VLAN_STRIP_MASK) != 0) {
+		if ((dev_conf->rxmode.offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP) != 0)
 			new_ctrl |= rxvlan_ctrl;
 		else
 			new_ctrl &= ~rxvlan_ctrl;
 	}
 
 	/* QinQ stripping setting */
-	if (mask & RTE_ETH_QINQ_STRIP_MASK) {
-		if (dev_conf->rxmode.offloads & RTE_ETH_RX_OFFLOAD_QINQ_STRIP)
+	if ((mask & RTE_ETH_QINQ_STRIP_MASK) != 0) {
+		if ((dev_conf->rxmode.offloads & RTE_ETH_RX_OFFLOAD_QINQ_STRIP) != 0)
 			new_ctrl |= NFP_NET_CFG_CTRL_RXQINQ;
 		else
 			new_ctrl &= ~NFP_NET_CFG_CTRL_RXQINQ;
@@ -1674,7 +1674,7 @@ nfp_net_reta_update(struct rte_eth_dev *dev,
 
 	update = NFP_NET_CFG_UPDATE_RSS;
 
-	if (nfp_net_reconfig(hw, hw->ctrl, update) < 0)
+	if (nfp_net_reconfig(hw, hw->ctrl, update) != 0)
 		return -EIO;
 
 	return 0;
@@ -1748,28 +1748,28 @@ nfp_net_rss_hash_write(struct rte_eth_dev *dev,
 
 	rss_hf = rss_conf->rss_hf;
 
-	if (rss_hf & RTE_ETH_RSS_IPV4)
+	if ((rss_hf & RTE_ETH_RSS_IPV4) != 0)
 		cfg_rss_ctrl |= NFP_NET_CFG_RSS_IPV4;
 
-	if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_TCP)
+	if ((rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_TCP) != 0)
 		cfg_rss_ctrl |= NFP_NET_CFG_RSS_IPV4_TCP;
 
-	if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_UDP)
+	if ((rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_UDP) != 0)
 		cfg_rss_ctrl |= NFP_NET_CFG_RSS_IPV4_UDP;
 
-	if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_SCTP)
+	if ((rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_SCTP) != 0)
 		cfg_rss_ctrl |= NFP_NET_CFG_RSS_IPV4_SCTP;
 
-	if (rss_hf & RTE_ETH_RSS_IPV6)
+	if ((rss_hf & RTE_ETH_RSS_IPV6) != 0)
 		cfg_rss_ctrl |= NFP_NET_CFG_RSS_IPV6;
 
-	if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_TCP)
+	if ((rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_TCP) != 0)
 		cfg_rss_ctrl |= NFP_NET_CFG_RSS_IPV6_TCP;
 
-	if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_UDP)
+	if ((rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_UDP) != 0)
 		cfg_rss_ctrl |= NFP_NET_CFG_RSS_IPV6_UDP;
 
-	if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_SCTP)
+	if ((rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_SCTP) != 0)
 		cfg_rss_ctrl |= NFP_NET_CFG_RSS_IPV6_SCTP;
 
 	cfg_rss_ctrl |= NFP_NET_CFG_RSS_MASK;
@@ -1814,7 +1814,7 @@ nfp_net_rss_hash_update(struct rte_eth_dev *dev,
 
 	update = NFP_NET_CFG_UPDATE_RSS;
 
-	if (nfp_net_reconfig(hw, hw->ctrl, update) < 0)
+	if (nfp_net_reconfig(hw, hw->ctrl, update) != 0)
 		return -EIO;
 
 	return 0;
@@ -1838,28 +1838,28 @@ nfp_net_rss_hash_conf_get(struct rte_eth_dev *dev,
 	rss_hf = rss_conf->rss_hf;
 	cfg_rss_ctrl = nn_cfg_readl(hw, NFP_NET_CFG_RSS_CTRL);
 
-	if (cfg_rss_ctrl & NFP_NET_CFG_RSS_IPV4)
+	if ((cfg_rss_ctrl & NFP_NET_CFG_RSS_IPV4) != 0)
 		rss_hf |= RTE_ETH_RSS_IPV4;
 
-	if (cfg_rss_ctrl & NFP_NET_CFG_RSS_IPV4_TCP)
+	if ((cfg_rss_ctrl & NFP_NET_CFG_RSS_IPV4_TCP) != 0)
 		rss_hf |= RTE_ETH_RSS_NONFRAG_IPV4_TCP;
 
-	if (cfg_rss_ctrl & NFP_NET_CFG_RSS_IPV6_TCP)
+	if ((cfg_rss_ctrl & NFP_NET_CFG_RSS_IPV6_TCP) != 0)
 		rss_hf |= RTE_ETH_RSS_NONFRAG_IPV6_TCP;
 
-	if (cfg_rss_ctrl & NFP_NET_CFG_RSS_IPV4_UDP)
+	if ((cfg_rss_ctrl & NFP_NET_CFG_RSS_IPV4_UDP) != 0)
 		rss_hf |= RTE_ETH_RSS_NONFRAG_IPV4_UDP;
 
-	if (cfg_rss_ctrl & NFP_NET_CFG_RSS_IPV6_UDP)
+	if ((cfg_rss_ctrl & NFP_NET_CFG_RSS_IPV6_UDP) != 0)
 		rss_hf |= RTE_ETH_RSS_NONFRAG_IPV6_UDP;
 
-	if (cfg_rss_ctrl & NFP_NET_CFG_RSS_IPV6)
+	if ((cfg_rss_ctrl & NFP_NET_CFG_RSS_IPV6) != 0)
 		rss_hf |= RTE_ETH_RSS_IPV6;
 
-	if (cfg_rss_ctrl & NFP_NET_CFG_RSS_IPV4_SCTP)
+	if ((cfg_rss_ctrl & NFP_NET_CFG_RSS_IPV4_SCTP) != 0)
 		rss_hf |= RTE_ETH_RSS_NONFRAG_IPV4_SCTP;
 
-	if (cfg_rss_ctrl & NFP_NET_CFG_RSS_IPV6_SCTP)
+	if ((cfg_rss_ctrl & NFP_NET_CFG_RSS_IPV6_SCTP) != 0)
 		rss_hf |= RTE_ETH_RSS_NONFRAG_IPV6_SCTP;
 
 	/* Propagate current RSS hash functions to caller */
diff --git a/drivers/net/nfp/nfp_cpp_bridge.c b/drivers/net/nfp/nfp_cpp_bridge.c
index ed9a946b0c..34764a8a32 100644
--- a/drivers/net/nfp/nfp_cpp_bridge.c
+++ b/drivers/net/nfp/nfp_cpp_bridge.c
@@ -70,7 +70,7 @@ nfp_map_service(uint32_t service_id)
 	rte_service_runstate_set(service_id, 1);
 	rte_service_component_runstate_set(service_id, 1);
 	rte_service_lcore_start(slcore);
-	if (rte_service_may_be_active(slcore))
+	if (rte_service_may_be_active(slcore) != 0)
 		PMD_INIT_LOG(INFO, "The service %s is running", service_name);
 	else
 		PMD_INIT_LOG(ERR, "The service %s is not running", service_name);
diff --git a/drivers/net/nfp/nfp_ethdev.c b/drivers/net/nfp/nfp_ethdev.c
index ebc5538291..12feec8eb4 100644
--- a/drivers/net/nfp/nfp_ethdev.c
+++ b/drivers/net/nfp/nfp_ethdev.c
@@ -89,7 +89,7 @@ nfp_net_start(struct rte_eth_dev *dev)
 			}
 		}
 		intr_vector = dev->data->nb_rx_queues;
-		if (rte_intr_efd_enable(intr_handle, intr_vector))
+		if (rte_intr_efd_enable(intr_handle, intr_vector) != 0)
 			return -1;
 
 		nfp_configure_rx_interrupt(dev, intr_handle);
@@ -113,7 +113,7 @@ nfp_net_start(struct rte_eth_dev *dev)
 	dev_conf = &dev->data->dev_conf;
 	rxmode = &dev_conf->rxmode;
 
-	if (rxmode->mq_mode & RTE_ETH_MQ_RX_RSS) {
+	if ((rxmode->mq_mode & RTE_ETH_MQ_RX_RSS) != 0) {
 		nfp_net_rss_config_default(dev);
 		update |= NFP_NET_CFG_UPDATE_RSS;
 		new_ctrl |= nfp_net_cfg_ctrl_rss(hw->cap);
@@ -125,15 +125,15 @@ nfp_net_start(struct rte_eth_dev *dev)
 	update |= NFP_NET_CFG_UPDATE_GEN | NFP_NET_CFG_UPDATE_RING;
 
 	/* Enable vxlan */
-	if (hw->cap & NFP_NET_CFG_CTRL_VXLAN) {
+	if ((hw->cap & NFP_NET_CFG_CTRL_VXLAN) != 0) {
 		new_ctrl |= NFP_NET_CFG_CTRL_VXLAN;
 		update |= NFP_NET_CFG_UPDATE_VXLAN;
 	}
 
-	if (hw->cap & NFP_NET_CFG_CTRL_RINGCFG)
+	if ((hw->cap & NFP_NET_CFG_CTRL_RINGCFG) != 0)
 		new_ctrl |= NFP_NET_CFG_CTRL_RINGCFG;
 
-	if (nfp_net_reconfig(hw, new_ctrl, update) < 0)
+	if (nfp_net_reconfig(hw, new_ctrl, update) != 0)
 		return -EIO;
 
 	/* Enable packet type offload by extend ctrl word1. */
@@ -146,14 +146,14 @@ nfp_net_start(struct rte_eth_dev *dev)
 				| NFP_NET_CFG_CTRL_IPSEC_LM_LOOKUP;
 
 	update = NFP_NET_CFG_UPDATE_GEN;
-	if (nfp_net_ext_reconfig(hw, ctrl_extend, update) < 0)
+	if (nfp_net_ext_reconfig(hw, ctrl_extend, update) != 0)
 		return -EIO;
 
 	/*
 	 * Allocating rte mbufs for configured rx queues.
 	 * This requires queues being enabled before
 	 */
-	if (nfp_net_rx_freelist_setup(dev) < 0) {
+	if (nfp_net_rx_freelist_setup(dev) != 0) {
 		ret = -ENOMEM;
 		goto error;
 	}
@@ -298,7 +298,7 @@ nfp_net_close(struct rte_eth_dev *dev)
 
 	for (i = 0; i < app_fw_nic->total_phyports; i++) {
 		/* Check to see if ports are still in use */
-		if (app_fw_nic->ports[i])
+		if (app_fw_nic->ports[i] != NULL)
 			return 0;
 	}
 
@@ -598,7 +598,7 @@ nfp_net_init(struct rte_eth_dev *eth_dev)
 	hw->mtu = RTE_ETHER_MTU;
 
 	/* VLAN insertion is incompatible with LSOv2 */
-	if (hw->cap & NFP_NET_CFG_CTRL_LSO2)
+	if ((hw->cap & NFP_NET_CFG_CTRL_LSO2) != 0)
 		hw->cap &= ~NFP_NET_CFG_CTRL_TXVLAN;
 
 	nfp_net_log_device_information(hw);
@@ -618,7 +618,7 @@ nfp_net_init(struct rte_eth_dev *eth_dev)
 	nfp_net_write_mac(hw, &hw->mac_addr.addr_bytes[0]);
 
 	tmp_ether_addr = &hw->mac_addr;
-	if (!rte_is_valid_assigned_ether_addr(tmp_ether_addr)) {
+	if (rte_is_valid_assigned_ether_addr(tmp_ether_addr) == 0) {
 		PMD_INIT_LOG(INFO, "Using random mac address for port %d", port);
 		/* Using random mac addresses for VFs */
 		rte_eth_random_addr(&hw->mac_addr.addr_bytes[0]);
@@ -695,10 +695,11 @@ nfp_fw_upload(struct rte_pci_device *dev, struct nfp_nsp *nsp, char *card)
 	/* Finally try the card type and media */
 	snprintf(fw_name, sizeof(fw_name), "%s/%s", DEFAULT_FW_PATH, card);
 	PMD_DRV_LOG(DEBUG, "Trying with fw file: %s", fw_name);
-	if (rte_firmware_read(fw_name, &fw_buf, &fsize) < 0) {
-		PMD_DRV_LOG(INFO, "Firmware file %s not found.", fw_name);
-		return -ENOENT;
-	}
+	if (rte_firmware_read(fw_name, &fw_buf, &fsize) == 0)
+		goto load_fw;
+
+	PMD_DRV_LOG(ERR, "Can't find suitable firmware.");
+	return -ENOENT;
 
 load_fw:
 	PMD_DRV_LOG(INFO, "Firmware file found at %s with size: %zu",
@@ -727,7 +728,7 @@ nfp_fw_setup(struct rte_pci_device *dev,
 	if (nfp_fw_model == NULL)
 		nfp_fw_model = nfp_hwinfo_lookup(hwinfo, "assembly.partno");
 
-	if (nfp_fw_model) {
+	if (nfp_fw_model != NULL) {
 		PMD_DRV_LOG(INFO, "firmware model found: %s", nfp_fw_model);
 	} else {
 		PMD_DRV_LOG(ERR, "firmware model NOT found");
@@ -865,7 +866,7 @@ nfp_init_app_fw_nic(struct nfp_pf_dev *pf_dev,
 		 * nfp_net_init
 		 */
 		ret = nfp_net_init(eth_dev);
-		if (ret) {
+		if (ret != 0) {
 			ret = -ENODEV;
 			goto port_cleanup;
 		}
@@ -878,7 +879,8 @@ nfp_init_app_fw_nic(struct nfp_pf_dev *pf_dev,
 
 port_cleanup:
 	for (i = 0; i < app_fw_nic->total_phyports; i++) {
-		if (app_fw_nic->ports[i] && app_fw_nic->ports[i]->eth_dev) {
+		if (app_fw_nic->ports[i] != NULL &&
+				app_fw_nic->ports[i]->eth_dev != NULL) {
 			struct rte_eth_dev *tmp_dev;
 			tmp_dev = app_fw_nic->ports[i]->eth_dev;
 			nfp_ipsec_uninit(tmp_dev);
@@ -950,7 +952,7 @@ nfp_pf_init(struct rte_pci_device *pci_dev)
 		goto hwinfo_cleanup;
 	}
 
-	if (nfp_fw_setup(pci_dev, cpp, nfp_eth_table, hwinfo)) {
+	if (nfp_fw_setup(pci_dev, cpp, nfp_eth_table, hwinfo) != 0) {
 		PMD_INIT_LOG(ERR, "Error when uploading firmware");
 		ret = -EIO;
 		goto eth_table_cleanup;
diff --git a/drivers/net/nfp/nfp_ethdev_vf.c b/drivers/net/nfp/nfp_ethdev_vf.c
index 0c94fc51ad..c8d6b0461b 100644
--- a/drivers/net/nfp/nfp_ethdev_vf.c
+++ b/drivers/net/nfp/nfp_ethdev_vf.c
@@ -66,7 +66,7 @@ nfp_netvf_start(struct rte_eth_dev *dev)
 			}
 		}
 		intr_vector = dev->data->nb_rx_queues;
-		if (rte_intr_efd_enable(intr_handle, intr_vector))
+		if (rte_intr_efd_enable(intr_handle, intr_vector) != 0)
 			return -1;
 
 		nfp_configure_rx_interrupt(dev, intr_handle);
@@ -83,7 +83,7 @@ nfp_netvf_start(struct rte_eth_dev *dev)
 	dev_conf = &dev->data->dev_conf;
 	rxmode = &dev_conf->rxmode;
 
-	if (rxmode->mq_mode & RTE_ETH_MQ_RX_RSS) {
+	if ((rxmode->mq_mode & RTE_ETH_MQ_RX_RSS) != 0) {
 		nfp_net_rss_config_default(dev);
 		update |= NFP_NET_CFG_UPDATE_RSS;
 		new_ctrl |= nfp_net_cfg_ctrl_rss(hw->cap);
@@ -94,18 +94,18 @@ nfp_netvf_start(struct rte_eth_dev *dev)
 
 	update |= NFP_NET_CFG_UPDATE_GEN | NFP_NET_CFG_UPDATE_RING;
 
-	if (hw->cap & NFP_NET_CFG_CTRL_RINGCFG)
+	if ((hw->cap & NFP_NET_CFG_CTRL_RINGCFG) != 0)
 		new_ctrl |= NFP_NET_CFG_CTRL_RINGCFG;
 
 	nn_cfg_writel(hw, NFP_NET_CFG_CTRL, new_ctrl);
-	if (nfp_net_reconfig(hw, new_ctrl, update) < 0)
+	if (nfp_net_reconfig(hw, new_ctrl, update) != 0)
 		return -EIO;
 
 	/*
 	 * Allocating rte mbufs for configured rx queues.
 	 * This requires queues being enabled before
 	 */
-	if (nfp_net_rx_freelist_setup(dev) < 0) {
+	if (nfp_net_rx_freelist_setup(dev) != 0) {
 		ret = -ENOMEM;
 		goto error;
 	}
@@ -330,7 +330,7 @@ nfp_netvf_init(struct rte_eth_dev *eth_dev)
 	hw->mtu = RTE_ETHER_MTU;
 
 	/* VLAN insertion is incompatible with LSOv2 */
-	if (hw->cap & NFP_NET_CFG_CTRL_LSO2)
+	if ((hw->cap & NFP_NET_CFG_CTRL_LSO2) != 0)
 		hw->cap &= ~NFP_NET_CFG_CTRL_TXVLAN;
 
 	nfp_net_log_device_information(hw);
@@ -350,7 +350,7 @@ nfp_netvf_init(struct rte_eth_dev *eth_dev)
 	nfp_netvf_read_mac(hw);
 
 	tmp_ether_addr = &hw->mac_addr;
-	if (!rte_is_valid_assigned_ether_addr(tmp_ether_addr)) {
+	if (rte_is_valid_assigned_ether_addr(tmp_ether_addr) == 0) {
 		PMD_INIT_LOG(INFO, "Using random mac address for port %d",
 				   port);
 		/* Using random mac addresses for VFs */
diff --git a/drivers/net/nfp/nfp_flow.c b/drivers/net/nfp/nfp_flow.c
index 020e31e9de..3ea6813d9a 100644
--- a/drivers/net/nfp/nfp_flow.c
+++ b/drivers/net/nfp/nfp_flow.c
@@ -521,8 +521,8 @@ nfp_stats_id_free(struct nfp_flow_priv *priv, uint32_t ctx)
 
 	/* Check if buffer is full */
 	ring = &priv->stats_ids.free_list;
-	if (!CIRC_SPACE(ring->head, ring->tail, priv->stats_ring_size *
-			NFP_FL_STATS_ELEM_RS - NFP_FL_STATS_ELEM_RS + 1))
+	if (CIRC_SPACE(ring->head, ring->tail, priv->stats_ring_size *
+			NFP_FL_STATS_ELEM_RS - NFP_FL_STATS_ELEM_RS + 1) == 0)
 		return -ENOBUFS;
 
 	memcpy(&ring->buf[ring->head], &ctx, NFP_FL_STATS_ELEM_RS);
@@ -607,7 +607,7 @@ nfp_tun_add_ipv6_off(struct nfp_app_fw_flower *app_fw_flower,
 
 	rte_spinlock_lock(&priv->ipv6_off_lock);
 	LIST_FOREACH(entry, &priv->ipv6_off_list, next) {
-		if (!memcmp(entry->ipv6_addr, ipv6, sizeof(entry->ipv6_addr))) {
+		if (memcmp(entry->ipv6_addr, ipv6, sizeof(entry->ipv6_addr)) == 0) {
 			entry->ref_count++;
 			rte_spinlock_unlock(&priv->ipv6_off_lock);
 			return 0;
@@ -641,7 +641,7 @@ nfp_tun_del_ipv6_off(struct nfp_app_fw_flower *app_fw_flower,
 
 	rte_spinlock_lock(&priv->ipv6_off_lock);
 	LIST_FOREACH(entry, &priv->ipv6_off_list, next) {
-		if (!memcmp(entry->ipv6_addr, ipv6, sizeof(entry->ipv6_addr))) {
+		if (memcmp(entry->ipv6_addr, ipv6, sizeof(entry->ipv6_addr)) == 0) {
 			entry->ref_count--;
 			if (entry->ref_count == 0) {
 				LIST_REMOVE(entry, next);
@@ -671,14 +671,14 @@ nfp_tun_check_ip_off_del(struct nfp_flower_representor *repr,
 	struct nfp_flower_ext_meta *ext_meta = NULL;
 
 	meta_tci = (struct nfp_flower_meta_tci *)nfp_flow->payload.unmasked_data;
-	if (meta_tci->nfp_flow_key_layer & NFP_FLOWER_LAYER_EXT_META)
+	if ((meta_tci->nfp_flow_key_layer & NFP_FLOWER_LAYER_EXT_META) != 0)
 		ext_meta = (struct nfp_flower_ext_meta *)(meta_tci + 1);
 
 	if (ext_meta != NULL)
 		key_layer2 = rte_be_to_cpu_32(ext_meta->nfp_flow_key_layer2);
 
-	if (key_layer2 & NFP_FLOWER_LAYER2_TUN_IPV6) {
-		if (key_layer2 & NFP_FLOWER_LAYER2_GRE) {
+	if ((key_layer2 & NFP_FLOWER_LAYER2_TUN_IPV6) != 0) {
+		if ((key_layer2 & NFP_FLOWER_LAYER2_GRE) != 0) {
 			gre6 = (struct nfp_flower_ipv6_gre_tun *)(nfp_flow->payload.mask_data -
 					sizeof(struct nfp_flower_ipv6_gre_tun));
 			ret = nfp_tun_del_ipv6_off(repr->app_fw_flower, gre6->ipv6.ipv6_dst);
@@ -688,7 +688,7 @@ nfp_tun_check_ip_off_del(struct nfp_flower_representor *repr,
 			ret = nfp_tun_del_ipv6_off(repr->app_fw_flower, udp6->ipv6.ipv6_dst);
 		}
 	} else {
-		if (key_layer2 & NFP_FLOWER_LAYER2_GRE) {
+		if ((key_layer2 & NFP_FLOWER_LAYER2_GRE) != 0) {
 			gre4 = (struct nfp_flower_ipv4_gre_tun *)(nfp_flow->payload.mask_data -
 					sizeof(struct nfp_flower_ipv4_gre_tun));
 			ret = nfp_tun_del_ipv4_off(repr->app_fw_flower, gre4->ipv4.dst);
@@ -783,7 +783,7 @@ nfp_flow_compile_metadata(struct nfp_flow_priv *priv,
 	mbuf_off_mask  += sizeof(struct nfp_flower_meta_tci);
 
 	/* Populate Extended Metadata if required */
-	if (key_layer->key_layer & NFP_FLOWER_LAYER_EXT_META) {
+	if ((key_layer->key_layer & NFP_FLOWER_LAYER_EXT_META) != 0) {
 		nfp_flower_compile_ext_meta(mbuf_off_exact, key_layer);
 		nfp_flower_compile_ext_meta(mbuf_off_mask, key_layer);
 		mbuf_off_exact += sizeof(struct nfp_flower_ext_meta);
@@ -1068,7 +1068,7 @@ nfp_flow_key_layers_calculate_actions(const struct rte_flow_action actions[],
 			break;
 		case RTE_FLOW_ACTION_TYPE_SET_TTL:
 			PMD_DRV_LOG(DEBUG, "RTE_FLOW_ACTION_TYPE_SET_TTL detected");
-			if (key_ls->key_layer & NFP_FLOWER_LAYER_IPV4) {
+			if ((key_ls->key_layer & NFP_FLOWER_LAYER_IPV4) != 0) {
 				if (!ttl_tos_flag) {
 					key_ls->act_size +=
 						sizeof(struct nfp_fl_act_set_ip4_ttl_tos);
@@ -1166,15 +1166,15 @@ nfp_flow_is_tunnel(struct rte_flow *nfp_flow)
 	struct nfp_flower_meta_tci *meta_tci;
 
 	meta_tci = (struct nfp_flower_meta_tci *)nfp_flow->payload.unmasked_data;
-	if (meta_tci->nfp_flow_key_layer & NFP_FLOWER_LAYER_VXLAN)
+	if ((meta_tci->nfp_flow_key_layer & NFP_FLOWER_LAYER_VXLAN) != 0)
 		return true;
 
-	if (!(meta_tci->nfp_flow_key_layer & NFP_FLOWER_LAYER_EXT_META))
+	if ((meta_tci->nfp_flow_key_layer & NFP_FLOWER_LAYER_EXT_META) == 0)
 		return false;
 
 	ext_meta = (struct nfp_flower_ext_meta *)(meta_tci + 1);
 	key_layer2 = rte_be_to_cpu_32(ext_meta->nfp_flow_key_layer2);
-	if (key_layer2 & (NFP_FLOWER_LAYER2_GENEVE | NFP_FLOWER_LAYER2_GRE))
+	if ((key_layer2 & (NFP_FLOWER_LAYER2_GENEVE | NFP_FLOWER_LAYER2_GRE)) != 0)
 		return true;
 
 	return false;
@@ -1270,7 +1270,7 @@ nfp_flow_merge_ipv4(__rte_unused struct nfp_app_fw_flower *app_fw_flower,
 	spec = item->spec;
 	mask = item->mask ? item->mask : proc->mask_default;
 	meta_tci = (struct nfp_flower_meta_tci *)nfp_flow->payload.unmasked_data;
-	if (meta_tci->nfp_flow_key_layer & NFP_FLOWER_LAYER_EXT_META)
+	if ((meta_tci->nfp_flow_key_layer & NFP_FLOWER_LAYER_EXT_META) != 0)
 		ext_meta = (struct nfp_flower_ext_meta *)(meta_tci + 1);
 
 	if (is_outer_layer && nfp_flow_is_tunnel(nfp_flow)) {
@@ -1281,8 +1281,8 @@ nfp_flow_merge_ipv4(__rte_unused struct nfp_app_fw_flower *app_fw_flower,
 
 		hdr = is_mask ? &mask->hdr : &spec->hdr;
 
-		if (ext_meta && (rte_be_to_cpu_32(ext_meta->nfp_flow_key_layer2) &
-				NFP_FLOWER_LAYER2_GRE)) {
+		if (ext_meta != NULL && (rte_be_to_cpu_32(ext_meta->nfp_flow_key_layer2) &
+				NFP_FLOWER_LAYER2_GRE) != 0) {
 			ipv4_gre_tun = (struct nfp_flower_ipv4_gre_tun *)*mbuf_off;
 
 			ipv4_gre_tun->ip_ext.tos = hdr->type_of_service;
@@ -1307,7 +1307,7 @@ nfp_flow_merge_ipv4(__rte_unused struct nfp_app_fw_flower *app_fw_flower,
 		 * reserve space for L4 info.
 		 * rte_flow has ipv4 before L4 but NFP flower fw requires L4 before ipv4
 		 */
-		if (meta_tci->nfp_flow_key_layer & NFP_FLOWER_LAYER_TP)
+		if ((meta_tci->nfp_flow_key_layer & NFP_FLOWER_LAYER_TP) != 0)
 			*mbuf_off += sizeof(struct nfp_flower_tp_ports);
 
 		hdr = is_mask ? &mask->hdr : &spec->hdr;
@@ -1348,7 +1348,7 @@ nfp_flow_merge_ipv6(__rte_unused struct nfp_app_fw_flower *app_fw_flower,
 	spec = item->spec;
 	mask = item->mask ? item->mask : proc->mask_default;
 	meta_tci = (struct nfp_flower_meta_tci *)nfp_flow->payload.unmasked_data;
-	if (meta_tci->nfp_flow_key_layer & NFP_FLOWER_LAYER_EXT_META)
+	if ((meta_tci->nfp_flow_key_layer & NFP_FLOWER_LAYER_EXT_META) != 0)
 		ext_meta = (struct nfp_flower_ext_meta *)(meta_tci + 1);
 
 	if (is_outer_layer && nfp_flow_is_tunnel(nfp_flow)) {
@@ -1360,8 +1360,8 @@ nfp_flow_merge_ipv6(__rte_unused struct nfp_app_fw_flower *app_fw_flower,
 		hdr = is_mask ? &mask->hdr : &spec->hdr;
 
 		vtc_flow = rte_be_to_cpu_32(hdr->vtc_flow);
-		if (ext_meta && (rte_be_to_cpu_32(ext_meta->nfp_flow_key_layer2) &
-				NFP_FLOWER_LAYER2_GRE)) {
+		if (ext_meta != NULL && (rte_be_to_cpu_32(ext_meta->nfp_flow_key_layer2) &
+				NFP_FLOWER_LAYER2_GRE) != 0) {
 			ipv6_gre_tun = (struct nfp_flower_ipv6_gre_tun *)*mbuf_off;
 
 			ipv6_gre_tun->ip_ext.tos = vtc_flow >> RTE_IPV6_HDR_TC_SHIFT;
@@ -1390,7 +1390,7 @@ nfp_flow_merge_ipv6(__rte_unused struct nfp_app_fw_flower *app_fw_flower,
 		 * reserve space for L4 info.
 		 * rte_flow has ipv4 before L4 but NFP flower fw requires L4 before ipv6
 		 */
-		if (meta_tci->nfp_flow_key_layer & NFP_FLOWER_LAYER_TP)
+		if ((meta_tci->nfp_flow_key_layer & NFP_FLOWER_LAYER_TP) != 0)
 			*mbuf_off += sizeof(struct nfp_flower_tp_ports);
 
 		hdr = is_mask ? &mask->hdr : &spec->hdr;
@@ -1434,7 +1434,7 @@ nfp_flow_merge_tcp(__rte_unused struct nfp_app_fw_flower *app_fw_flower,
 	}
 
 	meta_tci = (struct nfp_flower_meta_tci *)nfp_flow->payload.unmasked_data;
-	if (meta_tci->nfp_flow_key_layer & NFP_FLOWER_LAYER_IPV4) {
+	if ((meta_tci->nfp_flow_key_layer & NFP_FLOWER_LAYER_IPV4) != 0) {
 		ipv4  = (struct nfp_flower_ipv4 *)
 			(*mbuf_off - sizeof(struct nfp_flower_ipv4));
 		ports = (struct nfp_flower_tp_ports *)
@@ -1457,7 +1457,7 @@ nfp_flow_merge_tcp(__rte_unused struct nfp_app_fw_flower *app_fw_flower,
 		tcp_flags       = spec->hdr.tcp_flags;
 	}
 
-	if (ipv4) {
+	if (ipv4 != NULL) {
 		if (tcp_flags & RTE_TCP_FIN_FLAG)
 			ipv4->ip_ext.flags |= NFP_FL_TCP_FLAG_FIN;
 		if (tcp_flags & RTE_TCP_SYN_FLAG)
@@ -1512,7 +1512,7 @@ nfp_flow_merge_udp(__rte_unused struct nfp_app_fw_flower *app_fw_flower,
 	}
 
 	meta_tci = (struct nfp_flower_meta_tci *)nfp_flow->payload.unmasked_data;
-	if (meta_tci->nfp_flow_key_layer & NFP_FLOWER_LAYER_IPV4) {
+	if ((meta_tci->nfp_flow_key_layer & NFP_FLOWER_LAYER_IPV4) != 0) {
 		ports_off = *mbuf_off - sizeof(struct nfp_flower_ipv4) -
 			sizeof(struct nfp_flower_tp_ports);
 	} else {/* IPv6 */
@@ -1555,7 +1555,7 @@ nfp_flow_merge_sctp(__rte_unused struct nfp_app_fw_flower *app_fw_flower,
 	}
 
 	meta_tci = (struct nfp_flower_meta_tci *)nfp_flow->payload.unmasked_data;
-	if (meta_tci->nfp_flow_key_layer & NFP_FLOWER_LAYER_IPV4) {
+	if ((meta_tci->nfp_flow_key_layer & NFP_FLOWER_LAYER_IPV4) != 0) {
 		ports_off = *mbuf_off - sizeof(struct nfp_flower_ipv4) -
 			sizeof(struct nfp_flower_tp_ports);
 	} else { /* IPv6 */
@@ -1595,7 +1595,7 @@ nfp_flow_merge_vxlan(struct nfp_app_fw_flower *app_fw_flower,
 	struct nfp_flower_ext_meta *ext_meta = NULL;
 
 	meta_tci = (struct nfp_flower_meta_tci *)nfp_flow->payload.unmasked_data;
-	if (meta_tci->nfp_flow_key_layer & NFP_FLOWER_LAYER_EXT_META)
+	if ((meta_tci->nfp_flow_key_layer & NFP_FLOWER_LAYER_EXT_META) != 0)
 		ext_meta = (struct nfp_flower_ext_meta *)(meta_tci + 1);
 
 	spec = item->spec;
@@ -1607,8 +1607,8 @@ nfp_flow_merge_vxlan(struct nfp_app_fw_flower *app_fw_flower,
 	mask = item->mask ? item->mask : proc->mask_default;
 	hdr = is_mask ? &mask->hdr : &spec->hdr;
 
-	if (ext_meta && (rte_be_to_cpu_32(ext_meta->nfp_flow_key_layer2) &
-			NFP_FLOWER_LAYER2_TUN_IPV6)) {
+	if (ext_meta != NULL && (rte_be_to_cpu_32(ext_meta->nfp_flow_key_layer2) &
+			NFP_FLOWER_LAYER2_TUN_IPV6) != 0) {
 		tun6 = (struct nfp_flower_ipv6_udp_tun *)*mbuf_off;
 		tun6->tun_id = hdr->vx_vni;
 		if (!is_mask)
@@ -1621,8 +1621,8 @@ nfp_flow_merge_vxlan(struct nfp_app_fw_flower *app_fw_flower,
 	}
 
 vxlan_end:
-	if (ext_meta && (rte_be_to_cpu_32(ext_meta->nfp_flow_key_layer2) &
-			NFP_FLOWER_LAYER2_TUN_IPV6))
+	if (ext_meta != NULL && (rte_be_to_cpu_32(ext_meta->nfp_flow_key_layer2) &
+			NFP_FLOWER_LAYER2_TUN_IPV6) != 0)
 		*mbuf_off += sizeof(struct nfp_flower_ipv6_udp_tun);
 	else
 		*mbuf_off += sizeof(struct nfp_flower_ipv4_udp_tun);
@@ -1649,7 +1649,7 @@ nfp_flow_merge_geneve(struct nfp_app_fw_flower *app_fw_flower,
 	struct nfp_flower_ext_meta *ext_meta = NULL;
 
 	meta_tci = (struct nfp_flower_meta_tci *)nfp_flow->payload.unmasked_data;
-	if (meta_tci->nfp_flow_key_layer & NFP_FLOWER_LAYER_EXT_META)
+	if ((meta_tci->nfp_flow_key_layer & NFP_FLOWER_LAYER_EXT_META) != 0)
 		ext_meta = (struct nfp_flower_ext_meta *)(meta_tci + 1);
 
 	spec = item->spec;
@@ -1661,8 +1661,8 @@ nfp_flow_merge_geneve(struct nfp_app_fw_flower *app_fw_flower,
 	mask = item->mask ? item->mask : proc->mask_default;
 	geneve = is_mask ? mask : spec;
 
-	if (ext_meta && (rte_be_to_cpu_32(ext_meta->nfp_flow_key_layer2) &
-			NFP_FLOWER_LAYER2_TUN_IPV6)) {
+	if (ext_meta != NULL && (rte_be_to_cpu_32(ext_meta->nfp_flow_key_layer2) &
+			NFP_FLOWER_LAYER2_TUN_IPV6) != 0) {
 		tun6 = (struct nfp_flower_ipv6_udp_tun *)*mbuf_off;
 		tun6->tun_id = rte_cpu_to_be_32((geneve->vni[0] << 16) |
 				(geneve->vni[1] << 8) | (geneve->vni[2]));
@@ -1677,8 +1677,8 @@ nfp_flow_merge_geneve(struct nfp_app_fw_flower *app_fw_flower,
 	}
 
 geneve_end:
-	if (ext_meta && (rte_be_to_cpu_32(ext_meta->nfp_flow_key_layer2) &
-			NFP_FLOWER_LAYER2_TUN_IPV6)) {
+	if (ext_meta != NULL && (rte_be_to_cpu_32(ext_meta->nfp_flow_key_layer2) &
+			NFP_FLOWER_LAYER2_TUN_IPV6) != 0) {
 		*mbuf_off += sizeof(struct nfp_flower_ipv6_udp_tun);
 	} else {
 		*mbuf_off += sizeof(struct nfp_flower_ipv4_udp_tun);
@@ -1705,8 +1705,8 @@ nfp_flow_merge_gre(__rte_unused struct nfp_app_fw_flower *app_fw_flower,
 	ext_meta = (struct nfp_flower_ext_meta *)(meta_tci + 1);
 
 	/* NVGRE is the only supported GRE tunnel type */
-	if (rte_be_to_cpu_32(ext_meta->nfp_flow_key_layer2) &
-			NFP_FLOWER_LAYER2_TUN_IPV6) {
+	if ((rte_be_to_cpu_32(ext_meta->nfp_flow_key_layer2) &
+			NFP_FLOWER_LAYER2_TUN_IPV6) != 0) {
 		tun6 = (struct nfp_flower_ipv6_gre_tun *)*mbuf_off;
 		if (is_mask)
 			tun6->ethertype = rte_cpu_to_be_16(~0);
@@ -1753,8 +1753,8 @@ nfp_flow_merge_gre_key(struct nfp_app_fw_flower *app_fw_flower,
 	mask = item->mask ? item->mask : proc->mask_default;
 	tun_key = is_mask ? *mask : *spec;
 
-	if (rte_be_to_cpu_32(ext_meta->nfp_flow_key_layer2) &
-			NFP_FLOWER_LAYER2_TUN_IPV6) {
+	if ((rte_be_to_cpu_32(ext_meta->nfp_flow_key_layer2) &
+			NFP_FLOWER_LAYER2_TUN_IPV6) != 0) {
 		tun6 = (struct nfp_flower_ipv6_gre_tun *)*mbuf_off;
 		tun6->tun_key = tun_key;
 		tun6->tun_flags = rte_cpu_to_be_16(NFP_FL_GRE_FLAG_KEY);
@@ -1769,8 +1769,8 @@ nfp_flow_merge_gre_key(struct nfp_app_fw_flower *app_fw_flower,
 	}
 
 gre_key_end:
-	if (rte_be_to_cpu_32(ext_meta->nfp_flow_key_layer2) &
-			NFP_FLOWER_LAYER2_TUN_IPV6)
+	if ((rte_be_to_cpu_32(ext_meta->nfp_flow_key_layer2) &
+			NFP_FLOWER_LAYER2_TUN_IPV6) != 0)
 		*mbuf_off += sizeof(struct nfp_flower_ipv6_gre_tun);
 	else
 		*mbuf_off += sizeof(struct nfp_flower_ipv4_gre_tun);
@@ -2115,7 +2115,7 @@ nfp_flow_compile_items(struct nfp_flower_representor *representor,
 			sizeof(struct nfp_flower_in_port);
 
 	meta_tci = (struct nfp_flower_meta_tci *)nfp_flow->payload.unmasked_data;
-	if (meta_tci->nfp_flow_key_layer & NFP_FLOWER_LAYER_EXT_META) {
+	if ((meta_tci->nfp_flow_key_layer & NFP_FLOWER_LAYER_EXT_META) != 0) {
 		mbuf_off_exact += sizeof(struct nfp_flower_ext_meta);
 		mbuf_off_mask += sizeof(struct nfp_flower_ext_meta);
 	}
@@ -2558,7 +2558,7 @@ nfp_flower_add_tun_neigh_v4_decap(struct nfp_app_fw_flower *app_fw_flower,
 	port = (struct nfp_flower_in_port *)(meta_tci + 1);
 	eth = (struct nfp_flower_mac_mpls *)(port + 1);
 
-	if (meta_tci->nfp_flow_key_layer & NFP_FLOWER_LAYER_TP)
+	if ((meta_tci->nfp_flow_key_layer & NFP_FLOWER_LAYER_TP) != 0)
 		ipv4 = (struct nfp_flower_ipv4 *)((char *)eth +
 				sizeof(struct nfp_flower_mac_mpls) +
 				sizeof(struct nfp_flower_tp_ports));
@@ -2685,7 +2685,7 @@ nfp_flower_add_tun_neigh_v6_decap(struct nfp_app_fw_flower *app_fw_flower,
 	port = (struct nfp_flower_in_port *)(meta_tci + 1);
 	eth = (struct nfp_flower_mac_mpls *)(port + 1);
 
-	if (meta_tci->nfp_flow_key_layer & NFP_FLOWER_LAYER_TP)
+	if ((meta_tci->nfp_flow_key_layer & NFP_FLOWER_LAYER_TP) != 0)
 		ipv6 = (struct nfp_flower_ipv6 *)((char *)eth +
 				sizeof(struct nfp_flower_mac_mpls) +
 				sizeof(struct nfp_flower_tp_ports));
@@ -3181,7 +3181,7 @@ nfp_flow_action_tunnel_decap(struct nfp_flower_representor *repr,
 	}
 
 	meta_tci = (struct nfp_flower_meta_tci *)nfp_flow->payload.unmasked_data;
-	if (meta_tci->nfp_flow_key_layer & NFP_FLOWER_LAYER_IPV4)
+	if ((meta_tci->nfp_flow_key_layer & NFP_FLOWER_LAYER_IPV4) != 0)
 		return nfp_flower_add_tun_neigh_v4_decap(app_fw_flower, nfp_flow_meta, nfp_flow);
 	else
 		return nfp_flower_add_tun_neigh_v6_decap(app_fw_flower, nfp_flow_meta, nfp_flow);
diff --git a/drivers/net/nfp/nfp_rxtx.c b/drivers/net/nfp/nfp_rxtx.c
index 66a5d6cb3a..4528417559 100644
--- a/drivers/net/nfp/nfp_rxtx.c
+++ b/drivers/net/nfp/nfp_rxtx.c
@@ -163,22 +163,22 @@ nfp_net_rx_cksum(struct nfp_net_rxq *rxq, struct nfp_net_rx_desc *rxd,
 {
 	struct nfp_net_hw *hw = rxq->hw;
 
-	if (!(hw->ctrl & NFP_NET_CFG_CTRL_RXCSUM))
+	if ((hw->ctrl & NFP_NET_CFG_CTRL_RXCSUM) == 0)
 		return;
 
 	/* If IPv4 and IP checksum error, fail */
-	if (unlikely((rxd->rxd.flags & PCIE_DESC_RX_IP4_CSUM) &&
-			!(rxd->rxd.flags & PCIE_DESC_RX_IP4_CSUM_OK)))
+	if (unlikely((rxd->rxd.flags & PCIE_DESC_RX_IP4_CSUM) != 0 &&
+			(rxd->rxd.flags & PCIE_DESC_RX_IP4_CSUM_OK) == 0))
 		mb->ol_flags |= RTE_MBUF_F_RX_IP_CKSUM_BAD;
 	else
 		mb->ol_flags |= RTE_MBUF_F_RX_IP_CKSUM_GOOD;
 
 	/* If neither UDP nor TCP return */
-	if (!(rxd->rxd.flags & PCIE_DESC_RX_TCP_CSUM) &&
-			!(rxd->rxd.flags & PCIE_DESC_RX_UDP_CSUM))
+	if ((rxd->rxd.flags & PCIE_DESC_RX_TCP_CSUM) == 0 &&
+			(rxd->rxd.flags & PCIE_DESC_RX_UDP_CSUM) == 0)
 		return;
 
-	if (likely(rxd->rxd.flags & PCIE_DESC_RX_L4_CSUM_OK))
+	if (likely(rxd->rxd.flags & PCIE_DESC_RX_L4_CSUM_OK) != 0)
 		mb->ol_flags |= RTE_MBUF_F_RX_L4_CKSUM_GOOD;
 	else
 		mb->ol_flags |= RTE_MBUF_F_RX_L4_CKSUM_BAD;
@@ -232,7 +232,7 @@ nfp_net_rx_freelist_setup(struct rte_eth_dev *dev)
 	int i;
 
 	for (i = 0; i < dev->data->nb_rx_queues; i++) {
-		if (nfp_net_rx_fill_freelist(dev->data->rx_queues[i]) < 0)
+		if (nfp_net_rx_fill_freelist(dev->data->rx_queues[i]) != 0)
 			return -1;
 	}
 	return 0;
@@ -387,7 +387,7 @@ nfp_net_parse_meta_vlan(const struct nfp_meta_parsed *meta,
 	 * to do anything.
 	 */
 	if ((hw->ctrl & NFP_NET_CFG_CTRL_RXVLAN_V2) != 0) {
-		if (meta->vlan_layer >= 1 && meta->vlan[0].offload != 0) {
+		if (meta->vlan_layer > 0 && meta->vlan[0].offload != 0) {
 			mb->vlan_tci = rte_cpu_to_le_32(meta->vlan[0].tci);
 			mb->ol_flags |= RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED;
 		}
@@ -771,7 +771,7 @@ nfp_net_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
 		}
 
 		/* Filling the received mbuf with packet info */
-		if (hw->rx_offset)
+		if (hw->rx_offset != 0)
 			mb->data_off = RTE_PKTMBUF_HEADROOM + hw->rx_offset;
 		else
 			mb->data_off = RTE_PKTMBUF_HEADROOM +
@@ -846,7 +846,7 @@ nfp_net_rx_queue_release_mbufs(struct nfp_net_rxq *rxq)
 		return;
 
 	for (i = 0; i < rxq->rx_count; i++) {
-		if (rxq->rxbufs[i].mbuf) {
+		if (rxq->rxbufs[i].mbuf != NULL) {
 			rte_pktmbuf_free_seg(rxq->rxbufs[i].mbuf);
 			rxq->rxbufs[i].mbuf = NULL;
 		}
@@ -858,7 +858,7 @@ nfp_net_rx_queue_release(struct rte_eth_dev *dev, uint16_t queue_idx)
 {
 	struct nfp_net_rxq *rxq = dev->data->rx_queues[queue_idx];
 
-	if (rxq) {
+	if (rxq != NULL) {
 		nfp_net_rx_queue_release_mbufs(rxq);
 		rte_eth_dma_zone_free(dev, "rx_ring", queue_idx);
 		rte_free(rxq->rxbufs);
@@ -906,7 +906,7 @@ nfp_net_rx_queue_setup(struct rte_eth_dev *dev,
 	 * Free memory prior to re-allocation if needed. This is the case after
 	 * calling nfp_net_stop
 	 */
-	if (dev->data->rx_queues[queue_idx]) {
+	if (dev->data->rx_queues[queue_idx] != NULL) {
 		nfp_net_rx_queue_release(dev, queue_idx);
 		dev->data->rx_queues[queue_idx] = NULL;
 	}
@@ -1037,7 +1037,7 @@ nfp_net_tx_queue_release_mbufs(struct nfp_net_txq *txq)
 		return;
 
 	for (i = 0; i < txq->tx_count; i++) {
-		if (txq->txbufs[i].mbuf) {
+		if (txq->txbufs[i].mbuf != NULL) {
 			rte_pktmbuf_free_seg(txq->txbufs[i].mbuf);
 			txq->txbufs[i].mbuf = NULL;
 		}
@@ -1049,7 +1049,7 @@ nfp_net_tx_queue_release(struct rte_eth_dev *dev, uint16_t queue_idx)
 {
 	struct nfp_net_txq *txq = dev->data->tx_queues[queue_idx];
 
-	if (txq) {
+	if (txq != NULL) {
 		nfp_net_tx_queue_release_mbufs(txq);
 		rte_eth_dma_zone_free(dev, "tx_ring", queue_idx);
 		rte_free(txq->txbufs);
-- 
2.39.1


^ permalink raw reply	[flat|nested] 40+ messages in thread
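
The pattern applied throughout the hunks above can be condensed into a short
sketch; the function, macro, and parameter names below are hypothetical and
only illustrate the comparison style, they are not taken from the driver:

#include <stddef.h>
#include <stdint.h>

#define EXAMPLE_FLAG 0x1u

static void
example_handle(void)
{
}

static void
example_check(const void *ptr,
		uint32_t flags)
{
	/* Instead of the implicit form: if (!ptr || (flags & EXAMPLE_FLAG)) */
	if (ptr == NULL || (flags & EXAMPLE_FLAG) != 0)
		example_handle();
}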

* [PATCH v2 02/11] net/nfp: unify the indent coding style
  2023-10-12  1:26 ` [PATCH v2 00/11] Unify the PMD coding style Chaoyong He
  2023-10-12  1:26   ` [PATCH v2 01/11] net/nfp: explicitly compare to null and 0 Chaoyong He
@ 2023-10-12  1:26   ` Chaoyong He
  2023-10-12  1:26   ` [PATCH v2 03/11] net/nfp: unify the type of integer variable Chaoyong He
                     ` (9 subsequent siblings)
  11 siblings, 0 replies; 40+ messages in thread
From: Chaoyong He @ 2023-10-12  1:26 UTC (permalink / raw)
  To: dev; +Cc: oss-drivers, Chaoyong He, Long Wu, Peng Zhang

Each parameter of a function should occupy one line, indented by two TAB
characters.
All statements which span multiple lines should also be indented by two TAB
characters.
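
As an illustration, a minimal sketch of the intended layout follows; the
struct, macro, and function names are hypothetical, not taken from this
patch:

#include <errno.h>
#include <stdint.h>

#define EXAMPLE_CTRL_ENABLE  0x1u
#define EXAMPLE_UPDATE_RING  0x2u

struct example_hw {
	uint32_t cap;
};

static int
example_reconfig(struct example_hw *hw,
		uint32_t ctrl,
		uint32_t update)
{
	/* Continuation lines of a multi-line statement indent two TABs. */
	if ((hw->cap & EXAMPLE_CTRL_ENABLE) != 0 &&
			(ctrl & EXAMPLE_CTRL_ENABLE) != 0 &&
			(update & EXAMPLE_UPDATE_RING) != 0)
		return -EBUSY;

	return 0;
}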

Signed-off-by: Chaoyong He <chaoyong.he@corigine.com>
Reviewed-by: Long Wu <long.wu@corigine.com>
Reviewed-by: Peng Zhang <peng.zhang@corigine.com>
---
 drivers/net/nfp/flower/nfp_flower.c           |   5 +-
 drivers/net/nfp/flower/nfp_flower_ctrl.c      |   7 +-
 .../net/nfp/flower/nfp_flower_representor.c   |   2 +-
 drivers/net/nfp/nfdk/nfp_nfdk.h               |   2 +-
 drivers/net/nfp/nfdk/nfp_nfdk_dp.c            |   4 +-
 drivers/net/nfp/nfp_common.c                  | 250 +++++++++---------
 drivers/net/nfp/nfp_common.h                  |  81 ++++--
 drivers/net/nfp/nfp_cpp_bridge.c              |  56 ++--
 drivers/net/nfp/nfp_ethdev.c                  |  82 +++---
 drivers/net/nfp/nfp_ethdev_vf.c               |  66 +++--
 drivers/net/nfp/nfp_flow.c                    |  36 +--
 drivers/net/nfp/nfp_rxtx.c                    |  86 +++---
 drivers/net/nfp/nfp_rxtx.h                    |  10 +-
 13 files changed, 358 insertions(+), 329 deletions(-)

diff --git a/drivers/net/nfp/flower/nfp_flower.c b/drivers/net/nfp/flower/nfp_flower.c
index 3ddaf0f28d..3352693d71 100644
--- a/drivers/net/nfp/flower/nfp_flower.c
+++ b/drivers/net/nfp/flower/nfp_flower.c
@@ -63,7 +63,7 @@ nfp_pf_repr_disable_queues(struct rte_eth_dev *dev)
 
 	new_ctrl = hw->ctrl & ~NFP_NET_CFG_CTRL_ENABLE;
 	update = NFP_NET_CFG_UPDATE_GEN | NFP_NET_CFG_UPDATE_RING |
-		 NFP_NET_CFG_UPDATE_MSIX;
+			NFP_NET_CFG_UPDATE_MSIX;
 
 	if (hw->cap & NFP_NET_CFG_CTRL_RINGCFG)
 		new_ctrl &= ~NFP_NET_CFG_CTRL_RINGCFG;
@@ -330,7 +330,8 @@ nfp_flower_pf_xmit_pkts(void *tx_queue,
 }
 
 static int
-nfp_flower_init_vnic_common(struct nfp_net_hw *hw, const char *vnic_type)
+nfp_flower_init_vnic_common(struct nfp_net_hw *hw,
+		const char *vnic_type)
 {
 	int err;
 	uint32_t start_q;
diff --git a/drivers/net/nfp/flower/nfp_flower_ctrl.c b/drivers/net/nfp/flower/nfp_flower_ctrl.c
index b564e7cd73..4967cc2375 100644
--- a/drivers/net/nfp/flower/nfp_flower_ctrl.c
+++ b/drivers/net/nfp/flower/nfp_flower_ctrl.c
@@ -64,9 +64,8 @@ nfp_flower_ctrl_vnic_recv(void *rx_queue,
 		 */
 		new_mb = rte_pktmbuf_alloc(rxq->mem_pool);
 		if (unlikely(new_mb == NULL)) {
-			PMD_RX_LOG(ERR,
-				"RX mbuf alloc failed port_id=%u queue_id=%hu",
-				rxq->port_id, rxq->qidx);
+			PMD_RX_LOG(ERR, "RX mbuf alloc failed port_id=%u queue_id=%hu",
+					rxq->port_id, rxq->qidx);
 			nfp_net_mbuf_alloc_failed(rxq);
 			break;
 		}
@@ -141,7 +140,7 @@ nfp_flower_ctrl_vnic_recv(void *rx_queue,
 	rte_wmb();
 	if (nb_hold >= rxq->rx_free_thresh) {
 		PMD_RX_LOG(DEBUG, "port=%hu queue=%hu nb_hold=%hu avail=%hu",
-			rxq->port_id, rxq->qidx, nb_hold, avail);
+				rxq->port_id, rxq->qidx, nb_hold, avail);
 		nfp_qcp_ptr_add(rxq->qcp_fl, NFP_QCP_WRITE_PTR, nb_hold);
 		nb_hold = 0;
 	}
diff --git a/drivers/net/nfp/flower/nfp_flower_representor.c b/drivers/net/nfp/flower/nfp_flower_representor.c
index 55ca3e6db0..01c2c5a517 100644
--- a/drivers/net/nfp/flower/nfp_flower_representor.c
+++ b/drivers/net/nfp/flower/nfp_flower_representor.c
@@ -826,7 +826,7 @@ nfp_flower_repr_alloc(struct nfp_app_fw_flower *app_fw_flower)
 		snprintf(flower_repr.name, sizeof(flower_repr.name),
 				"%s_repr_vf%d", pci_name, i);
 
-		 /* This will also allocate private memory for the device*/
+		/* This will also allocate private memory for the device*/
 		ret = rte_eth_dev_create(eth_dev->device, flower_repr.name,
 				sizeof(struct nfp_flower_representor),
 				NULL, NULL, nfp_flower_repr_init, &flower_repr);
diff --git a/drivers/net/nfp/nfdk/nfp_nfdk.h b/drivers/net/nfp/nfdk/nfp_nfdk.h
index 75ecb361ee..99675b6bd7 100644
--- a/drivers/net/nfp/nfdk/nfp_nfdk.h
+++ b/drivers/net/nfp/nfdk/nfp_nfdk.h
@@ -143,7 +143,7 @@ nfp_net_nfdk_free_tx_desc(struct nfp_net_txq *txq)
 		free_desc = txq->rd_p - txq->wr_p;
 
 	return (free_desc > NFDK_TX_DESC_STOP_CNT) ?
-		(free_desc - NFDK_TX_DESC_STOP_CNT) : 0;
+			(free_desc - NFDK_TX_DESC_STOP_CNT) : 0;
 }
 
 /*
diff --git a/drivers/net/nfp/nfdk/nfp_nfdk_dp.c b/drivers/net/nfp/nfdk/nfp_nfdk_dp.c
index d4bd5edb0a..2426ffb261 100644
--- a/drivers/net/nfp/nfdk/nfp_nfdk_dp.c
+++ b/drivers/net/nfp/nfdk/nfp_nfdk_dp.c
@@ -101,9 +101,7 @@ static inline uint16_t
 nfp_net_nfdk_headlen_to_segs(uint16_t headlen)
 {
 	/* First descriptor fits less data, so adjust for that */
-	return DIV_ROUND_UP(headlen +
-			NFDK_TX_MAX_DATA_PER_DESC -
-			NFDK_TX_MAX_DATA_PER_HEAD,
+	return DIV_ROUND_UP(headlen + NFDK_TX_MAX_DATA_PER_DESC - NFDK_TX_MAX_DATA_PER_HEAD,
 			NFDK_TX_MAX_DATA_PER_DESC);
 }
 
diff --git a/drivers/net/nfp/nfp_common.c b/drivers/net/nfp/nfp_common.c
index 36752583dd..9719a9212b 100644
--- a/drivers/net/nfp/nfp_common.c
+++ b/drivers/net/nfp/nfp_common.c
@@ -172,7 +172,8 @@ nfp_net_link_speed_rte2nfp(uint16_t speed)
 }
 
 static void
-nfp_net_notify_port_speed(struct nfp_net_hw *hw, struct rte_eth_link *link)
+nfp_net_notify_port_speed(struct nfp_net_hw *hw,
+		struct rte_eth_link *link)
 {
 	/**
 	 * Read the link status from NFP_NET_CFG_STS. If the link is down
@@ -188,21 +189,22 @@ nfp_net_notify_port_speed(struct nfp_net_hw *hw, struct rte_eth_link *link)
 	 * NFP_NET_CFG_STS_NSP_LINK_RATE.
 	 */
 	nn_cfg_writew(hw, NFP_NET_CFG_STS_NSP_LINK_RATE,
-		      nfp_net_link_speed_rte2nfp(link->link_speed));
+			nfp_net_link_speed_rte2nfp(link->link_speed));
 }
 
 /* The length of firmware version string */
 #define FW_VER_LEN        32
 
 static int
-__nfp_net_reconfig(struct nfp_net_hw *hw, uint32_t update)
+__nfp_net_reconfig(struct nfp_net_hw *hw,
+		uint32_t update)
 {
 	int cnt;
 	uint32_t new;
 	struct timespec wait;
 
 	PMD_DRV_LOG(DEBUG, "Writing to the configuration queue (%p)...",
-		    hw->qcp_cfg);
+			hw->qcp_cfg);
 
 	if (hw->qcp_cfg == NULL) {
 		PMD_INIT_LOG(ERR, "Bad configuration queue pointer");
@@ -227,7 +229,7 @@ __nfp_net_reconfig(struct nfp_net_hw *hw, uint32_t update)
 		}
 		if (cnt >= NFP_NET_POLL_TIMEOUT) {
 			PMD_INIT_LOG(ERR, "Reconfig timeout for 0x%08x after"
-					  " %dms", update, cnt);
+					" %dms", update, cnt);
 			return -EIO;
 		}
 		nanosleep(&wait, 0); /* waiting for a 1ms */
@@ -254,7 +256,9 @@ __nfp_net_reconfig(struct nfp_net_hw *hw, uint32_t update)
  *   - (EIO) if I/O err and fail to reconfigure the device.
  */
 int
-nfp_net_reconfig(struct nfp_net_hw *hw, uint32_t ctrl, uint32_t update)
+nfp_net_reconfig(struct nfp_net_hw *hw,
+		uint32_t ctrl,
+		uint32_t update)
 {
 	int ret;
 
@@ -296,7 +300,9 @@ nfp_net_reconfig(struct nfp_net_hw *hw, uint32_t ctrl, uint32_t update)
  *   - (EIO) if I/O err and fail to reconfigure the device.
  */
 int
-nfp_net_ext_reconfig(struct nfp_net_hw *hw, uint32_t ctrl_ext, uint32_t update)
+nfp_net_ext_reconfig(struct nfp_net_hw *hw,
+		uint32_t ctrl_ext,
+		uint32_t update)
 {
 	int ret;
 
@@ -401,7 +407,7 @@ nfp_net_configure(struct rte_eth_dev *dev)
 
 	/* Checking RX mode */
 	if ((rxmode->mq_mode & RTE_ETH_MQ_RX_RSS_FLAG) != 0 &&
-	    (hw->cap & NFP_NET_CFG_CTRL_RSS_ANY) == 0) {
+			(hw->cap & NFP_NET_CFG_CTRL_RSS_ANY) == 0) {
 		PMD_INIT_LOG(INFO, "RSS not supported");
 		return -EINVAL;
 	}
@@ -409,7 +415,7 @@ nfp_net_configure(struct rte_eth_dev *dev)
 	/* Checking MTU set */
 	if (rxmode->mtu > NFP_FRAME_SIZE_MAX) {
 		PMD_INIT_LOG(ERR, "MTU (%u) larger than NFP_FRAME_SIZE_MAX (%u) not supported",
-				    rxmode->mtu, NFP_FRAME_SIZE_MAX);
+				rxmode->mtu, NFP_FRAME_SIZE_MAX);
 		return -ERANGE;
 	}
 
@@ -446,7 +452,8 @@ nfp_net_log_device_information(const struct nfp_net_hw *hw)
 }
 
 static inline void
-nfp_net_enbable_rxvlan_cap(struct nfp_net_hw *hw, uint32_t *ctrl)
+nfp_net_enbable_rxvlan_cap(struct nfp_net_hw *hw,
+		uint32_t *ctrl)
 {
 	if ((hw->cap & NFP_NET_CFG_CTRL_RXVLAN_V2) != 0)
 		*ctrl |= NFP_NET_CFG_CTRL_RXVLAN_V2;
@@ -490,8 +497,9 @@ nfp_net_disable_queues(struct rte_eth_dev *dev)
 	nn_cfg_writeq(hw, NFP_NET_CFG_RXRS_ENABLE, 0);
 
 	new_ctrl = hw->ctrl & ~NFP_NET_CFG_CTRL_ENABLE;
-	update = NFP_NET_CFG_UPDATE_GEN | NFP_NET_CFG_UPDATE_RING |
-		 NFP_NET_CFG_UPDATE_MSIX;
+	update = NFP_NET_CFG_UPDATE_GEN |
+			NFP_NET_CFG_UPDATE_RING |
+			NFP_NET_CFG_UPDATE_MSIX;
 
 	if ((hw->cap & NFP_NET_CFG_CTRL_RINGCFG) != 0)
 		new_ctrl &= ~NFP_NET_CFG_CTRL_RINGCFG;
@@ -517,7 +525,8 @@ nfp_net_cfg_queue_setup(struct nfp_net_hw *hw)
 }
 
 void
-nfp_net_write_mac(struct nfp_net_hw *hw, uint8_t *mac)
+nfp_net_write_mac(struct nfp_net_hw *hw,
+		uint8_t *mac)
 {
 	uint32_t mac0 = *(uint32_t *)mac;
 	uint16_t mac1;
@@ -527,20 +536,21 @@ nfp_net_write_mac(struct nfp_net_hw *hw, uint8_t *mac)
 	mac += 4;
 	mac1 = *(uint16_t *)mac;
 	nn_writew(rte_cpu_to_be_16(mac1),
-		  hw->ctrl_bar + NFP_NET_CFG_MACADDR + 6);
+			hw->ctrl_bar + NFP_NET_CFG_MACADDR + 6);
 }
 
 int
-nfp_net_set_mac_addr(struct rte_eth_dev *dev, struct rte_ether_addr *mac_addr)
+nfp_net_set_mac_addr(struct rte_eth_dev *dev,
+		struct rte_ether_addr *mac_addr)
 {
 	struct nfp_net_hw *hw;
 	uint32_t update, ctrl;
 
 	hw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private);
 	if ((hw->ctrl & NFP_NET_CFG_CTRL_ENABLE) != 0 &&
-	    (hw->cap & NFP_NET_CFG_CTRL_LIVE_ADDR) == 0) {
+			(hw->cap & NFP_NET_CFG_CTRL_LIVE_ADDR) == 0) {
 		PMD_INIT_LOG(INFO, "MAC address unable to change when"
-				  " port enabled");
+				" port enabled");
 		return -EBUSY;
 	}
 
@@ -551,7 +561,7 @@ nfp_net_set_mac_addr(struct rte_eth_dev *dev, struct rte_ether_addr *mac_addr)
 	update = NFP_NET_CFG_UPDATE_MACADDR;
 	ctrl = hw->ctrl;
 	if ((hw->ctrl & NFP_NET_CFG_CTRL_ENABLE) != 0 &&
-	    (hw->cap & NFP_NET_CFG_CTRL_LIVE_ADDR) != 0)
+			(hw->cap & NFP_NET_CFG_CTRL_LIVE_ADDR) != 0)
 		ctrl |= NFP_NET_CFG_CTRL_LIVE_ADDR;
 	if (nfp_net_reconfig(hw, ctrl, update) != 0) {
 		PMD_INIT_LOG(INFO, "MAC address update failed");
@@ -562,15 +572,15 @@ nfp_net_set_mac_addr(struct rte_eth_dev *dev, struct rte_ether_addr *mac_addr)
 
 int
 nfp_configure_rx_interrupt(struct rte_eth_dev *dev,
-			   struct rte_intr_handle *intr_handle)
+		struct rte_intr_handle *intr_handle)
 {
 	struct nfp_net_hw *hw;
 	int i;
 
 	if (rte_intr_vec_list_alloc(intr_handle, "intr_vec",
-				    dev->data->nb_rx_queues) != 0) {
+				dev->data->nb_rx_queues) != 0) {
 		PMD_INIT_LOG(ERR, "Failed to allocate %d rx_queues"
-			     " intr_vec", dev->data->nb_rx_queues);
+				" intr_vec", dev->data->nb_rx_queues);
 		return -ENOMEM;
 	}
 
@@ -590,12 +600,10 @@ nfp_configure_rx_interrupt(struct rte_eth_dev *dev,
 			 * efd interrupts
 			*/
 			nn_cfg_writeb(hw, NFP_NET_CFG_RXR_VEC(i), i + 1);
-			if (rte_intr_vec_list_index_set(intr_handle, i,
-							       i + 1) != 0)
+			if (rte_intr_vec_list_index_set(intr_handle, i, i + 1) != 0)
 				return -1;
 			PMD_INIT_LOG(DEBUG, "intr_vec[%d]= %d", i,
-				rte_intr_vec_list_index_get(intr_handle,
-								   i));
+					rte_intr_vec_list_index_get(intr_handle, i));
 		}
 	}
 
@@ -651,13 +659,13 @@ nfp_check_offloads(struct rte_eth_dev *dev)
 
 	/* TX checksum offload */
 	if ((txmode->offloads & RTE_ETH_TX_OFFLOAD_IPV4_CKSUM) != 0 ||
-	    (txmode->offloads & RTE_ETH_TX_OFFLOAD_UDP_CKSUM) != 0 ||
-	    (txmode->offloads & RTE_ETH_TX_OFFLOAD_TCP_CKSUM) != 0)
+			(txmode->offloads & RTE_ETH_TX_OFFLOAD_UDP_CKSUM) != 0 ||
+			(txmode->offloads & RTE_ETH_TX_OFFLOAD_TCP_CKSUM) != 0)
 		ctrl |= NFP_NET_CFG_CTRL_TXCSUM;
 
 	/* LSO offload */
 	if ((txmode->offloads & RTE_ETH_TX_OFFLOAD_TCP_TSO) != 0 ||
-	    (txmode->offloads & RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO) != 0) {
+			(txmode->offloads & RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO) != 0) {
 		if ((hw->cap & NFP_NET_CFG_CTRL_LSO) != 0)
 			ctrl |= NFP_NET_CFG_CTRL_LSO;
 		else
@@ -751,7 +759,8 @@ nfp_net_promisc_disable(struct rte_eth_dev *dev)
  * status.
  */
 int
-nfp_net_link_update(struct rte_eth_dev *dev, __rte_unused int wait_to_complete)
+nfp_net_link_update(struct rte_eth_dev *dev,
+		__rte_unused int wait_to_complete)
 {
 	int ret;
 	uint32_t i;
@@ -820,7 +829,8 @@ nfp_net_link_update(struct rte_eth_dev *dev, __rte_unused int wait_to_complete)
 }
 
 int
-nfp_net_stats_get(struct rte_eth_dev *dev, struct rte_eth_stats *stats)
+nfp_net_stats_get(struct rte_eth_dev *dev,
+		struct rte_eth_stats *stats)
 {
 	int i;
 	struct nfp_net_hw *hw;
@@ -838,16 +848,16 @@ nfp_net_stats_get(struct rte_eth_dev *dev, struct rte_eth_stats *stats)
 			break;
 
 		nfp_dev_stats.q_ipackets[i] =
-			nn_cfg_readq(hw, NFP_NET_CFG_RXR_STATS(i));
+				nn_cfg_readq(hw, NFP_NET_CFG_RXR_STATS(i));
 
 		nfp_dev_stats.q_ipackets[i] -=
-			hw->eth_stats_base.q_ipackets[i];
+				hw->eth_stats_base.q_ipackets[i];
 
 		nfp_dev_stats.q_ibytes[i] =
-			nn_cfg_readq(hw, NFP_NET_CFG_RXR_STATS(i) + 0x8);
+				nn_cfg_readq(hw, NFP_NET_CFG_RXR_STATS(i) + 0x8);
 
 		nfp_dev_stats.q_ibytes[i] -=
-			hw->eth_stats_base.q_ibytes[i];
+				hw->eth_stats_base.q_ibytes[i];
 	}
 
 	/* reading per TX ring stats */
@@ -856,46 +866,42 @@ nfp_net_stats_get(struct rte_eth_dev *dev, struct rte_eth_stats *stats)
 			break;
 
 		nfp_dev_stats.q_opackets[i] =
-			nn_cfg_readq(hw, NFP_NET_CFG_TXR_STATS(i));
+				nn_cfg_readq(hw, NFP_NET_CFG_TXR_STATS(i));
 
-		nfp_dev_stats.q_opackets[i] -=
-			hw->eth_stats_base.q_opackets[i];
+		nfp_dev_stats.q_opackets[i] -= hw->eth_stats_base.q_opackets[i];
 
 		nfp_dev_stats.q_obytes[i] =
-			nn_cfg_readq(hw, NFP_NET_CFG_TXR_STATS(i) + 0x8);
+				nn_cfg_readq(hw, NFP_NET_CFG_TXR_STATS(i) + 0x8);
 
-		nfp_dev_stats.q_obytes[i] -=
-			hw->eth_stats_base.q_obytes[i];
+		nfp_dev_stats.q_obytes[i] -= hw->eth_stats_base.q_obytes[i];
 	}
 
-	nfp_dev_stats.ipackets =
-		nn_cfg_readq(hw, NFP_NET_CFG_STATS_RX_FRAMES);
+	nfp_dev_stats.ipackets = nn_cfg_readq(hw, NFP_NET_CFG_STATS_RX_FRAMES);
 
 	nfp_dev_stats.ipackets -= hw->eth_stats_base.ipackets;
 
-	nfp_dev_stats.ibytes =
-		nn_cfg_readq(hw, NFP_NET_CFG_STATS_RX_OCTETS);
+	nfp_dev_stats.ibytes = nn_cfg_readq(hw, NFP_NET_CFG_STATS_RX_OCTETS);
 
 	nfp_dev_stats.ibytes -= hw->eth_stats_base.ibytes;
 
 	nfp_dev_stats.opackets =
-		nn_cfg_readq(hw, NFP_NET_CFG_STATS_TX_FRAMES);
+			nn_cfg_readq(hw, NFP_NET_CFG_STATS_TX_FRAMES);
 
 	nfp_dev_stats.opackets -= hw->eth_stats_base.opackets;
 
 	nfp_dev_stats.obytes =
-		nn_cfg_readq(hw, NFP_NET_CFG_STATS_TX_OCTETS);
+			nn_cfg_readq(hw, NFP_NET_CFG_STATS_TX_OCTETS);
 
 	nfp_dev_stats.obytes -= hw->eth_stats_base.obytes;
 
 	/* reading general device stats */
 	nfp_dev_stats.ierrors =
-		nn_cfg_readq(hw, NFP_NET_CFG_STATS_RX_ERRORS);
+			nn_cfg_readq(hw, NFP_NET_CFG_STATS_RX_ERRORS);
 
 	nfp_dev_stats.ierrors -= hw->eth_stats_base.ierrors;
 
 	nfp_dev_stats.oerrors =
-		nn_cfg_readq(hw, NFP_NET_CFG_STATS_TX_ERRORS);
+			nn_cfg_readq(hw, NFP_NET_CFG_STATS_TX_ERRORS);
 
 	nfp_dev_stats.oerrors -= hw->eth_stats_base.oerrors;
 
@@ -903,7 +909,7 @@ nfp_net_stats_get(struct rte_eth_dev *dev, struct rte_eth_stats *stats)
 	nfp_dev_stats.rx_nombuf = dev->data->rx_mbuf_alloc_failed;
 
 	nfp_dev_stats.imissed =
-		nn_cfg_readq(hw, NFP_NET_CFG_STATS_RX_DISCARDS);
+			nn_cfg_readq(hw, NFP_NET_CFG_STATS_RX_DISCARDS);
 
 	nfp_dev_stats.imissed -= hw->eth_stats_base.imissed;
 
@@ -933,10 +939,10 @@ nfp_net_stats_reset(struct rte_eth_dev *dev)
 			break;
 
 		hw->eth_stats_base.q_ipackets[i] =
-			nn_cfg_readq(hw, NFP_NET_CFG_RXR_STATS(i));
+				nn_cfg_readq(hw, NFP_NET_CFG_RXR_STATS(i));
 
 		hw->eth_stats_base.q_ibytes[i] =
-			nn_cfg_readq(hw, NFP_NET_CFG_RXR_STATS(i) + 0x8);
+				nn_cfg_readq(hw, NFP_NET_CFG_RXR_STATS(i) + 0x8);
 	}
 
 	/* reading per TX ring stats */
@@ -945,36 +951,36 @@ nfp_net_stats_reset(struct rte_eth_dev *dev)
 			break;
 
 		hw->eth_stats_base.q_opackets[i] =
-			nn_cfg_readq(hw, NFP_NET_CFG_TXR_STATS(i));
+				nn_cfg_readq(hw, NFP_NET_CFG_TXR_STATS(i));
 
 		hw->eth_stats_base.q_obytes[i] =
-			nn_cfg_readq(hw, NFP_NET_CFG_TXR_STATS(i) + 0x8);
+				nn_cfg_readq(hw, NFP_NET_CFG_TXR_STATS(i) + 0x8);
 	}
 
 	hw->eth_stats_base.ipackets =
-		nn_cfg_readq(hw, NFP_NET_CFG_STATS_RX_FRAMES);
+			nn_cfg_readq(hw, NFP_NET_CFG_STATS_RX_FRAMES);
 
 	hw->eth_stats_base.ibytes =
-		nn_cfg_readq(hw, NFP_NET_CFG_STATS_RX_OCTETS);
+			nn_cfg_readq(hw, NFP_NET_CFG_STATS_RX_OCTETS);
 
 	hw->eth_stats_base.opackets =
-		nn_cfg_readq(hw, NFP_NET_CFG_STATS_TX_FRAMES);
+			nn_cfg_readq(hw, NFP_NET_CFG_STATS_TX_FRAMES);
 
 	hw->eth_stats_base.obytes =
-		nn_cfg_readq(hw, NFP_NET_CFG_STATS_TX_OCTETS);
+			nn_cfg_readq(hw, NFP_NET_CFG_STATS_TX_OCTETS);
 
 	/* reading general device stats */
 	hw->eth_stats_base.ierrors =
-		nn_cfg_readq(hw, NFP_NET_CFG_STATS_RX_ERRORS);
+			nn_cfg_readq(hw, NFP_NET_CFG_STATS_RX_ERRORS);
 
 	hw->eth_stats_base.oerrors =
-		nn_cfg_readq(hw, NFP_NET_CFG_STATS_TX_ERRORS);
+			nn_cfg_readq(hw, NFP_NET_CFG_STATS_TX_ERRORS);
 
 	/* RX ring mbuf allocation failures */
 	dev->data->rx_mbuf_alloc_failed = 0;
 
 	hw->eth_stats_base.imissed =
-		nn_cfg_readq(hw, NFP_NET_CFG_STATS_RX_DISCARDS);
+			nn_cfg_readq(hw, NFP_NET_CFG_STATS_RX_DISCARDS);
 
 	return 0;
 }
@@ -1237,16 +1243,16 @@ nfp_net_infos_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 
 	if ((hw->cap & NFP_NET_CFG_CTRL_RXCSUM) != 0)
 		dev_info->rx_offload_capa |= RTE_ETH_RX_OFFLOAD_IPV4_CKSUM |
-					     RTE_ETH_RX_OFFLOAD_UDP_CKSUM |
-					     RTE_ETH_RX_OFFLOAD_TCP_CKSUM;
+				RTE_ETH_RX_OFFLOAD_UDP_CKSUM |
+				RTE_ETH_RX_OFFLOAD_TCP_CKSUM;
 
 	if ((hw->cap & (NFP_NET_CFG_CTRL_TXVLAN | NFP_NET_CFG_CTRL_TXVLAN_V2)) != 0)
 		dev_info->tx_offload_capa = RTE_ETH_TX_OFFLOAD_VLAN_INSERT;
 
 	if ((hw->cap & NFP_NET_CFG_CTRL_TXCSUM) != 0)
 		dev_info->tx_offload_capa |= RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
-					     RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
-					     RTE_ETH_TX_OFFLOAD_TCP_CKSUM;
+				RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+				RTE_ETH_TX_OFFLOAD_TCP_CKSUM;
 
 	if ((hw->cap & NFP_NET_CFG_CTRL_LSO_ANY) != 0) {
 		dev_info->tx_offload_capa |= RTE_ETH_TX_OFFLOAD_TCP_TSO;
@@ -1301,21 +1307,24 @@ nfp_net_infos_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 		dev_info->rx_offload_capa |= RTE_ETH_RX_OFFLOAD_RSS_HASH;
 
 		dev_info->flow_type_rss_offloads = RTE_ETH_RSS_IPV4 |
-						   RTE_ETH_RSS_NONFRAG_IPV4_TCP |
-						   RTE_ETH_RSS_NONFRAG_IPV4_UDP |
-						   RTE_ETH_RSS_NONFRAG_IPV4_SCTP |
-						   RTE_ETH_RSS_IPV6 |
-						   RTE_ETH_RSS_NONFRAG_IPV6_TCP |
-						   RTE_ETH_RSS_NONFRAG_IPV6_UDP |
-						   RTE_ETH_RSS_NONFRAG_IPV6_SCTP;
+				RTE_ETH_RSS_NONFRAG_IPV4_TCP |
+				RTE_ETH_RSS_NONFRAG_IPV4_UDP |
+				RTE_ETH_RSS_NONFRAG_IPV4_SCTP |
+				RTE_ETH_RSS_IPV6 |
+				RTE_ETH_RSS_NONFRAG_IPV6_TCP |
+				RTE_ETH_RSS_NONFRAG_IPV6_UDP |
+				RTE_ETH_RSS_NONFRAG_IPV6_SCTP;
 
 		dev_info->reta_size = NFP_NET_CFG_RSS_ITBL_SZ;
 		dev_info->hash_key_size = NFP_NET_CFG_RSS_KEY_SZ;
 	}
 
-	dev_info->speed_capa = RTE_ETH_LINK_SPEED_1G | RTE_ETH_LINK_SPEED_10G |
-			       RTE_ETH_LINK_SPEED_25G | RTE_ETH_LINK_SPEED_40G |
-			       RTE_ETH_LINK_SPEED_50G | RTE_ETH_LINK_SPEED_100G;
+	dev_info->speed_capa = RTE_ETH_LINK_SPEED_1G |
+			RTE_ETH_LINK_SPEED_10G |
+			RTE_ETH_LINK_SPEED_25G |
+			RTE_ETH_LINK_SPEED_40G |
+			RTE_ETH_LINK_SPEED_50G |
+			RTE_ETH_LINK_SPEED_100G;
 
 	return 0;
 }
@@ -1384,7 +1393,8 @@ nfp_net_supported_ptypes_get(struct rte_eth_dev *dev)
 }
 
 int
-nfp_rx_queue_intr_enable(struct rte_eth_dev *dev, uint16_t queue_id)
+nfp_rx_queue_intr_enable(struct rte_eth_dev *dev,
+		uint16_t queue_id)
 {
 	struct rte_pci_device *pci_dev;
 	struct nfp_net_hw *hw;
@@ -1393,19 +1403,19 @@ nfp_rx_queue_intr_enable(struct rte_eth_dev *dev, uint16_t queue_id)
 	hw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private);
 	pci_dev = RTE_ETH_DEV_TO_PCI(dev);
 
-	if (rte_intr_type_get(pci_dev->intr_handle) !=
-							RTE_INTR_HANDLE_UIO)
+	if (rte_intr_type_get(pci_dev->intr_handle) != RTE_INTR_HANDLE_UIO)
 		base = 1;
 
 	/* Make sure all updates are written before un-masking */
 	rte_wmb();
 	nn_cfg_writeb(hw, NFP_NET_CFG_ICR(base + queue_id),
-		      NFP_NET_CFG_ICR_UNMASKED);
+			NFP_NET_CFG_ICR_UNMASKED);
 	return 0;
 }
 
 int
-nfp_rx_queue_intr_disable(struct rte_eth_dev *dev, uint16_t queue_id)
+nfp_rx_queue_intr_disable(struct rte_eth_dev *dev,
+		uint16_t queue_id)
 {
 	struct rte_pci_device *pci_dev;
 	struct nfp_net_hw *hw;
@@ -1414,8 +1424,7 @@ nfp_rx_queue_intr_disable(struct rte_eth_dev *dev, uint16_t queue_id)
 	hw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private);
 	pci_dev = RTE_ETH_DEV_TO_PCI(dev);
 
-	if (rte_intr_type_get(pci_dev->intr_handle) !=
-							RTE_INTR_HANDLE_UIO)
+	if (rte_intr_type_get(pci_dev->intr_handle) != RTE_INTR_HANDLE_UIO)
 		base = 1;
 
 	/* Make sure all updates are written before un-masking */
@@ -1433,16 +1442,15 @@ nfp_net_dev_link_status_print(struct rte_eth_dev *dev)
 	rte_eth_linkstatus_get(dev, &link);
 	if (link.link_status != 0)
 		PMD_DRV_LOG(INFO, "Port %d: Link Up - speed %u Mbps - %s",
-			    dev->data->port_id, link.link_speed,
-			    link.link_duplex == RTE_ETH_LINK_FULL_DUPLEX
-			    ? "full-duplex" : "half-duplex");
+				dev->data->port_id, link.link_speed,
+				link.link_duplex == RTE_ETH_LINK_FULL_DUPLEX ?
+				"full-duplex" : "half-duplex");
 	else
-		PMD_DRV_LOG(INFO, " Port %d: Link Down",
-			    dev->data->port_id);
+		PMD_DRV_LOG(INFO, " Port %d: Link Down", dev->data->port_id);
 
 	PMD_DRV_LOG(INFO, "PCI Address: " PCI_PRI_FMT,
-		    pci_dev->addr.domain, pci_dev->addr.bus,
-		    pci_dev->addr.devid, pci_dev->addr.function);
+			pci_dev->addr.domain, pci_dev->addr.bus,
+			pci_dev->addr.devid, pci_dev->addr.function);
 }
 
 /* Interrupt configuration and handling */
@@ -1470,7 +1478,7 @@ nfp_net_irq_unmask(struct rte_eth_dev *dev)
 		/* Make sure all updates are written before un-masking */
 		rte_wmb();
 		nn_cfg_writeb(hw, NFP_NET_CFG_ICR(NFP_NET_IRQ_LSC_IDX),
-			      NFP_NET_CFG_ICR_UNMASKED);
+				NFP_NET_CFG_ICR_UNMASKED);
 	}
 }
 
@@ -1523,8 +1531,8 @@ nfp_net_dev_interrupt_handler(void *param)
 	}
 
 	if (rte_eal_alarm_set(timeout * 1000,
-			      nfp_net_dev_interrupt_delayed_handler,
-			      (void *)dev) != 0) {
+			nfp_net_dev_interrupt_delayed_handler,
+			(void *)dev) != 0) {
 		PMD_INIT_LOG(ERR, "Error setting alarm");
 		/* Unmasking */
 		nfp_net_irq_unmask(dev);
@@ -1532,7 +1540,8 @@ nfp_net_dev_interrupt_handler(void *param)
 }
 
 int
-nfp_net_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
+nfp_net_dev_mtu_set(struct rte_eth_dev *dev,
+		uint16_t mtu)
 {
 	struct nfp_net_hw *hw;
 
@@ -1541,14 +1550,14 @@ nfp_net_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
 	/* mtu setting is forbidden if port is started */
 	if (dev->data->dev_started) {
 		PMD_DRV_LOG(ERR, "port %d must be stopped before configuration",
-			    dev->data->port_id);
+				dev->data->port_id);
 		return -EBUSY;
 	}
 
 	/* MTU larger than current mbufsize not supported */
 	if (mtu > hw->flbufsz) {
 		PMD_DRV_LOG(ERR, "MTU (%u) larger than current mbufsize (%u) not supported",
-			    mtu, hw->flbufsz);
+				mtu, hw->flbufsz);
 		return -ERANGE;
 	}
 
@@ -1561,7 +1570,8 @@ nfp_net_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
 }
 
 int
-nfp_net_vlan_offload_set(struct rte_eth_dev *dev, int mask)
+nfp_net_vlan_offload_set(struct rte_eth_dev *dev,
+		int mask)
 {
 	uint32_t new_ctrl, update;
 	struct nfp_net_hw *hw;
@@ -1606,8 +1616,8 @@ nfp_net_vlan_offload_set(struct rte_eth_dev *dev, int mask)
 
 static int
 nfp_net_rss_reta_write(struct rte_eth_dev *dev,
-		    struct rte_eth_rss_reta_entry64 *reta_conf,
-		    uint16_t reta_size)
+		struct rte_eth_rss_reta_entry64 *reta_conf,
+		uint16_t reta_size)
 {
 	uint32_t reta, mask;
 	int i, j;
@@ -1617,8 +1627,8 @@ nfp_net_rss_reta_write(struct rte_eth_dev *dev,
 
 	if (reta_size != NFP_NET_CFG_RSS_ITBL_SZ) {
 		PMD_DRV_LOG(ERR, "The size of hash lookup table configured "
-			"(%d) doesn't match the number hardware can supported "
-			"(%d)", reta_size, NFP_NET_CFG_RSS_ITBL_SZ);
+				"(%d) doesn't match the number hardware can supported "
+				"(%d)", reta_size, NFP_NET_CFG_RSS_ITBL_SZ);
 		return -EINVAL;
 	}
 
@@ -1648,8 +1658,7 @@ nfp_net_rss_reta_write(struct rte_eth_dev *dev,
 				reta &= ~(0xFF << (8 * j));
 			reta |= reta_conf[idx].reta[shift + j] << (8 * j);
 		}
-		nn_cfg_writel(hw, NFP_NET_CFG_RSS_ITBL + (idx * 64) + shift,
-			      reta);
+		nn_cfg_writel(hw, NFP_NET_CFG_RSS_ITBL + (idx * 64) + shift, reta);
 	}
 	return 0;
 }
@@ -1657,8 +1666,8 @@ nfp_net_rss_reta_write(struct rte_eth_dev *dev,
 /* Update Redirection Table(RETA) of Receive Side Scaling of Ethernet device */
 int
 nfp_net_reta_update(struct rte_eth_dev *dev,
-		    struct rte_eth_rss_reta_entry64 *reta_conf,
-		    uint16_t reta_size)
+		struct rte_eth_rss_reta_entry64 *reta_conf,
+		uint16_t reta_size)
 {
 	struct nfp_net_hw *hw =
 		NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private);
@@ -1683,8 +1692,8 @@ nfp_net_reta_update(struct rte_eth_dev *dev,
  /* Query Redirection Table(RETA) of Receive Side Scaling of Ethernet device. */
 int
 nfp_net_reta_query(struct rte_eth_dev *dev,
-		   struct rte_eth_rss_reta_entry64 *reta_conf,
-		   uint16_t reta_size)
+		struct rte_eth_rss_reta_entry64 *reta_conf,
+		uint16_t reta_size)
 {
 	uint8_t i, j, mask;
 	int idx, shift;
@@ -1698,8 +1707,8 @@ nfp_net_reta_query(struct rte_eth_dev *dev,
 
 	if (reta_size != NFP_NET_CFG_RSS_ITBL_SZ) {
 		PMD_DRV_LOG(ERR, "The size of hash lookup table configured "
-			"(%d) doesn't match the number hardware can supported "
-			"(%d)", reta_size, NFP_NET_CFG_RSS_ITBL_SZ);
+				"(%d) doesn't match the number hardware can supported "
+				"(%d)", reta_size, NFP_NET_CFG_RSS_ITBL_SZ);
 		return -EINVAL;
 	}
 
@@ -1716,13 +1725,12 @@ nfp_net_reta_query(struct rte_eth_dev *dev,
 		if (mask == 0)
 			continue;
 
-		reta = nn_cfg_readl(hw, NFP_NET_CFG_RSS_ITBL + (idx * 64) +
-				    shift);
+		reta = nn_cfg_readl(hw, NFP_NET_CFG_RSS_ITBL + (idx * 64) + shift);
 		for (j = 0; j < 4; j++) {
 			if ((mask & (0x1 << j)) == 0)
 				continue;
 			reta_conf[idx].reta[shift + j] =
-				(uint8_t)((reta >> (8 * j)) & 0xF);
+					(uint8_t)((reta >> (8 * j)) & 0xF);
 		}
 	}
 	return 0;
@@ -1730,7 +1738,7 @@ nfp_net_reta_query(struct rte_eth_dev *dev,
 
 static int
 nfp_net_rss_hash_write(struct rte_eth_dev *dev,
-			struct rte_eth_rss_conf *rss_conf)
+		struct rte_eth_rss_conf *rss_conf)
 {
 	struct nfp_net_hw *hw;
 	uint64_t rss_hf;
@@ -1786,7 +1794,7 @@ nfp_net_rss_hash_write(struct rte_eth_dev *dev,
 
 int
 nfp_net_rss_hash_update(struct rte_eth_dev *dev,
-			struct rte_eth_rss_conf *rss_conf)
+		struct rte_eth_rss_conf *rss_conf)
 {
 	uint32_t update;
 	uint64_t rss_hf;
@@ -1822,7 +1830,7 @@ nfp_net_rss_hash_update(struct rte_eth_dev *dev,
 
 int
 nfp_net_rss_hash_conf_get(struct rte_eth_dev *dev,
-			  struct rte_eth_rss_conf *rss_conf)
+		struct rte_eth_rss_conf *rss_conf)
 {
 	uint64_t rss_hf;
 	uint32_t cfg_rss_ctrl;
@@ -1888,7 +1896,7 @@ nfp_net_rss_config_default(struct rte_eth_dev *dev)
 	int i, j, ret;
 
 	PMD_DRV_LOG(INFO, "setting default RSS conf for %u queues",
-		rx_queues);
+			rx_queues);
 
 	nfp_reta_conf[0].mask = ~0x0;
 	nfp_reta_conf[1].mask = ~0x0;
@@ -1984,7 +1992,7 @@ nfp_net_set_vxlan_port(struct nfp_net_hw *hw,
 
 	for (i = 0; i < NFP_NET_N_VXLAN_PORTS; i += 2) {
 		nn_cfg_writel(hw, NFP_NET_CFG_VXLAN_PORT + i * sizeof(port),
-			(hw->vxlan_ports[i + 1] << 16) | hw->vxlan_ports[i]);
+				(hw->vxlan_ports[i + 1] << 16) | hw->vxlan_ports[i]);
 	}
 
 	rte_spinlock_lock(&hw->reconfig_lock);
@@ -2004,7 +2012,8 @@ nfp_net_set_vxlan_port(struct nfp_net_hw *hw,
  * than 40 bits
  */
 int
-nfp_net_check_dma_mask(struct nfp_net_hw *hw, char *name)
+nfp_net_check_dma_mask(struct nfp_net_hw *hw,
+		char *name)
 {
 	if (hw->ver.extend == NFP_NET_CFG_VERSION_DP_NFD3 &&
 			rte_mem_check_dma_mask(40) != 0) {
@@ -2052,7 +2061,8 @@ nfp_net_cfg_read_version(struct nfp_net_hw *hw)
 }
 
 static void
-nfp_net_get_nsp_info(struct nfp_net_hw *hw, char *nsp_version)
+nfp_net_get_nsp_info(struct nfp_net_hw *hw,
+		char *nsp_version)
 {
 	struct nfp_nsp *nsp;
 
@@ -2068,7 +2078,8 @@ nfp_net_get_nsp_info(struct nfp_net_hw *hw, char *nsp_version)
 }
 
 static void
-nfp_net_get_mip_name(struct nfp_net_hw *hw, char *mip_name)
+nfp_net_get_mip_name(struct nfp_net_hw *hw,
+		char *mip_name)
 {
 	struct nfp_mip *mip;
 
@@ -2082,7 +2093,8 @@ nfp_net_get_mip_name(struct nfp_net_hw *hw, char *mip_name)
 }
 
 static void
-nfp_net_get_app_name(struct nfp_net_hw *hw, char *app_name)
+nfp_net_get_app_name(struct nfp_net_hw *hw,
+		char *app_name)
 {
 	switch (hw->pf_dev->app_fw_id) {
 	case NFP_APP_FW_CORE_NIC:
diff --git a/drivers/net/nfp/nfp_common.h b/drivers/net/nfp/nfp_common.h
index bc3a948231..e4fd394868 100644
--- a/drivers/net/nfp/nfp_common.h
+++ b/drivers/net/nfp/nfp_common.h
@@ -180,37 +180,47 @@ struct nfp_net_adapter {
 	struct nfp_net_hw hw;
 };
 
-static inline uint8_t nn_readb(volatile const void *addr)
+static inline uint8_t
+nn_readb(volatile const void *addr)
 {
 	return rte_read8(addr);
 }
 
-static inline void nn_writeb(uint8_t val, volatile void *addr)
+static inline void
+nn_writeb(uint8_t val,
+		volatile void *addr)
 {
 	rte_write8(val, addr);
 }
 
-static inline uint32_t nn_readl(volatile const void *addr)
+static inline uint32_t
+nn_readl(volatile const void *addr)
 {
 	return rte_read32(addr);
 }
 
-static inline void nn_writel(uint32_t val, volatile void *addr)
+static inline void
+nn_writel(uint32_t val,
+		volatile void *addr)
 {
 	rte_write32(val, addr);
 }
 
-static inline uint16_t nn_readw(volatile const void *addr)
+static inline uint16_t
+nn_readw(volatile const void *addr)
 {
 	return rte_read16(addr);
 }
 
-static inline void nn_writew(uint16_t val, volatile void *addr)
+static inline void
+nn_writew(uint16_t val,
+		volatile void *addr)
 {
 	rte_write16(val, addr);
 }
 
-static inline uint64_t nn_readq(volatile void *addr)
+static inline uint64_t
+nn_readq(volatile void *addr)
 {
 	const volatile uint32_t *p = addr;
 	uint32_t low, high;
@@ -221,7 +231,9 @@ static inline uint64_t nn_readq(volatile void *addr)
 	return low + ((uint64_t)high << 32);
 }
 
-static inline void nn_writeq(uint64_t val, volatile void *addr)
+static inline void
+nn_writeq(uint64_t val,
+		volatile void *addr)
 {
 	nn_writel(val >> 32, (volatile char *)addr + 4);
 	nn_writel(val, addr);
@@ -232,49 +244,61 @@ static inline void nn_writeq(uint64_t val, volatile void *addr)
  * Performs any endian conversion necessary.
  */
 static inline uint8_t
-nn_cfg_readb(struct nfp_net_hw *hw, int off)
+nn_cfg_readb(struct nfp_net_hw *hw,
+		int off)
 {
 	return nn_readb(hw->ctrl_bar + off);
 }
 
 static inline void
-nn_cfg_writeb(struct nfp_net_hw *hw, int off, uint8_t val)
+nn_cfg_writeb(struct nfp_net_hw *hw,
+		int off,
+		uint8_t val)
 {
 	nn_writeb(val, hw->ctrl_bar + off);
 }
 
 static inline uint16_t
-nn_cfg_readw(struct nfp_net_hw *hw, int off)
+nn_cfg_readw(struct nfp_net_hw *hw,
+		int off)
 {
 	return rte_le_to_cpu_16(nn_readw(hw->ctrl_bar + off));
 }
 
 static inline void
-nn_cfg_writew(struct nfp_net_hw *hw, int off, uint16_t val)
+nn_cfg_writew(struct nfp_net_hw *hw,
+		int off,
+		uint16_t val)
 {
 	nn_writew(rte_cpu_to_le_16(val), hw->ctrl_bar + off);
 }
 
 static inline uint32_t
-nn_cfg_readl(struct nfp_net_hw *hw, int off)
+nn_cfg_readl(struct nfp_net_hw *hw,
+		int off)
 {
 	return rte_le_to_cpu_32(nn_readl(hw->ctrl_bar + off));
 }
 
 static inline void
-nn_cfg_writel(struct nfp_net_hw *hw, int off, uint32_t val)
+nn_cfg_writel(struct nfp_net_hw *hw,
+		int off,
+		uint32_t val)
 {
 	nn_writel(rte_cpu_to_le_32(val), hw->ctrl_bar + off);
 }
 
 static inline uint64_t
-nn_cfg_readq(struct nfp_net_hw *hw, int off)
+nn_cfg_readq(struct nfp_net_hw *hw,
+		int off)
 {
 	return rte_le_to_cpu_64(nn_readq(hw->ctrl_bar + off));
 }
 
 static inline void
-nn_cfg_writeq(struct nfp_net_hw *hw, int off, uint64_t val)
+nn_cfg_writeq(struct nfp_net_hw *hw,
+		int off,
+		uint64_t val)
 {
 	nn_writeq(rte_cpu_to_le_64(val), hw->ctrl_bar + off);
 }
@@ -286,7 +310,9 @@ nn_cfg_writeq(struct nfp_net_hw *hw, int off, uint64_t val)
  * @val: Value to add to the queue pointer
  */
 static inline void
-nfp_qcp_ptr_add(uint8_t *q, enum nfp_qcp_ptr ptr, uint32_t val)
+nfp_qcp_ptr_add(uint8_t *q,
+		enum nfp_qcp_ptr ptr,
+		uint32_t val)
 {
 	uint32_t off;
 
@@ -304,7 +330,8 @@ nfp_qcp_ptr_add(uint8_t *q, enum nfp_qcp_ptr ptr, uint32_t val)
  * @ptr: Read or Write pointer
  */
 static inline uint32_t
-nfp_qcp_read(uint8_t *q, enum nfp_qcp_ptr ptr)
+nfp_qcp_read(uint8_t *q,
+		enum nfp_qcp_ptr ptr)
 {
 	uint32_t off;
 	uint32_t val;
@@ -343,12 +370,12 @@ void nfp_net_params_setup(struct nfp_net_hw *hw);
 void nfp_net_write_mac(struct nfp_net_hw *hw, uint8_t *mac);
 int nfp_net_set_mac_addr(struct rte_eth_dev *dev, struct rte_ether_addr *mac_addr);
 int nfp_configure_rx_interrupt(struct rte_eth_dev *dev,
-			       struct rte_intr_handle *intr_handle);
+		struct rte_intr_handle *intr_handle);
 uint32_t nfp_check_offloads(struct rte_eth_dev *dev);
 int nfp_net_promisc_enable(struct rte_eth_dev *dev);
 int nfp_net_promisc_disable(struct rte_eth_dev *dev);
 int nfp_net_link_update(struct rte_eth_dev *dev,
-			__rte_unused int wait_to_complete);
+		__rte_unused int wait_to_complete);
 int nfp_net_stats_get(struct rte_eth_dev *dev, struct rte_eth_stats *stats);
 int nfp_net_stats_reset(struct rte_eth_dev *dev);
 uint32_t nfp_net_xstats_size(const struct rte_eth_dev *dev);
@@ -368,7 +395,7 @@ int nfp_net_xstats_get_by_id(struct rte_eth_dev *dev,
 		unsigned int n);
 int nfp_net_xstats_reset(struct rte_eth_dev *dev);
 int nfp_net_infos_get(struct rte_eth_dev *dev,
-		      struct rte_eth_dev_info *dev_info);
+		struct rte_eth_dev_info *dev_info);
 const uint32_t *nfp_net_supported_ptypes_get(struct rte_eth_dev *dev);
 int nfp_rx_queue_intr_enable(struct rte_eth_dev *dev, uint16_t queue_id);
 int nfp_rx_queue_intr_disable(struct rte_eth_dev *dev, uint16_t queue_id);
@@ -379,15 +406,15 @@ void nfp_net_dev_interrupt_delayed_handler(void *param);
 int nfp_net_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu);
 int nfp_net_vlan_offload_set(struct rte_eth_dev *dev, int mask);
 int nfp_net_reta_update(struct rte_eth_dev *dev,
-			struct rte_eth_rss_reta_entry64 *reta_conf,
-			uint16_t reta_size);
+		struct rte_eth_rss_reta_entry64 *reta_conf,
+		uint16_t reta_size);
 int nfp_net_reta_query(struct rte_eth_dev *dev,
-		       struct rte_eth_rss_reta_entry64 *reta_conf,
-		       uint16_t reta_size);
+		struct rte_eth_rss_reta_entry64 *reta_conf,
+		uint16_t reta_size);
 int nfp_net_rss_hash_update(struct rte_eth_dev *dev,
-			    struct rte_eth_rss_conf *rss_conf);
+		struct rte_eth_rss_conf *rss_conf);
 int nfp_net_rss_hash_conf_get(struct rte_eth_dev *dev,
-			      struct rte_eth_rss_conf *rss_conf);
+		struct rte_eth_rss_conf *rss_conf);
 int nfp_net_rss_config_default(struct rte_eth_dev *dev);
 void nfp_net_stop_rx_queue(struct rte_eth_dev *dev);
 void nfp_net_close_rx_queue(struct rte_eth_dev *dev);
diff --git a/drivers/net/nfp/nfp_cpp_bridge.c b/drivers/net/nfp/nfp_cpp_bridge.c
index 34764a8a32..85a8bf9235 100644
--- a/drivers/net/nfp/nfp_cpp_bridge.c
+++ b/drivers/net/nfp/nfp_cpp_bridge.c
@@ -116,7 +116,8 @@ nfp_enable_cpp_service(struct nfp_pf_dev *pf_dev)
  * of CPP interface handler configured by the PMD setup.
  */
 static int
-nfp_cpp_bridge_serve_write(int sockfd, struct nfp_cpp *cpp)
+nfp_cpp_bridge_serve_write(int sockfd,
+		struct nfp_cpp *cpp)
 {
 	struct nfp_cpp_area *area;
 	off_t offset, nfp_offset;
@@ -126,7 +127,7 @@ nfp_cpp_bridge_serve_write(int sockfd, struct nfp_cpp *cpp)
 	int err = 0;
 
 	PMD_CPP_LOG(DEBUG, "%s: offset size %zu, count_size: %zu\n", __func__,
-		sizeof(off_t), sizeof(size_t));
+			sizeof(off_t), sizeof(size_t));
 
 	/* Reading the count param */
 	err = recv(sockfd, &count, sizeof(off_t), 0);
@@ -145,21 +146,21 @@ nfp_cpp_bridge_serve_write(int sockfd, struct nfp_cpp *cpp)
 	nfp_offset = offset & ((1ull << 40) - 1);
 
 	PMD_CPP_LOG(DEBUG, "%s: count %zu and offset %jd\n", __func__, count,
-		offset);
+			offset);
 	PMD_CPP_LOG(DEBUG, "%s: cpp_id %08x and nfp_offset %jd\n", __func__,
-		cpp_id, nfp_offset);
+			cpp_id, nfp_offset);
 
 	/* Adjust length if not aligned */
 	if (((nfp_offset + (off_t)count - 1) & ~(NFP_CPP_MEMIO_BOUNDARY - 1)) !=
-	    (nfp_offset & ~(NFP_CPP_MEMIO_BOUNDARY - 1))) {
+			(nfp_offset & ~(NFP_CPP_MEMIO_BOUNDARY - 1))) {
 		curlen = NFP_CPP_MEMIO_BOUNDARY -
-			(nfp_offset & (NFP_CPP_MEMIO_BOUNDARY - 1));
+				(nfp_offset & (NFP_CPP_MEMIO_BOUNDARY - 1));
 	}
 
 	while (count > 0) {
 		/* configure a CPP PCIe2CPP BAR for mapping the CPP target */
 		area = nfp_cpp_area_alloc_with_name(cpp, cpp_id, "nfp.cdev",
-						    nfp_offset, curlen);
+				nfp_offset, curlen);
 		if (area == NULL) {
 			PMD_CPP_LOG(ERR, "area alloc fail");
 			return -EIO;
@@ -179,12 +180,11 @@ nfp_cpp_bridge_serve_write(int sockfd, struct nfp_cpp *cpp)
 				len = sizeof(tmpbuf);
 
 			PMD_CPP_LOG(DEBUG, "%s: Receive %u of %zu\n", __func__,
-					   len, count);
+					len, count);
 			err = recv(sockfd, tmpbuf, len, MSG_WAITALL);
 			if (err != (int)len) {
-				PMD_CPP_LOG(ERR,
-					"error when receiving, %d of %zu",
-					err, count);
+				PMD_CPP_LOG(ERR, "error when receiving, %d of %zu",
+						err, count);
 				nfp_cpp_area_release(area);
 				nfp_cpp_area_free(area);
 				return -EIO;
@@ -204,7 +204,7 @@ nfp_cpp_bridge_serve_write(int sockfd, struct nfp_cpp *cpp)
 
 		count -= pos;
 		curlen = (count > NFP_CPP_MEMIO_BOUNDARY) ?
-			 NFP_CPP_MEMIO_BOUNDARY : count;
+				NFP_CPP_MEMIO_BOUNDARY : count;
 	}
 
 	return 0;
@@ -217,7 +217,8 @@ nfp_cpp_bridge_serve_write(int sockfd, struct nfp_cpp *cpp)
  * data is sent to the requester using the same socket.
  */
 static int
-nfp_cpp_bridge_serve_read(int sockfd, struct nfp_cpp *cpp)
+nfp_cpp_bridge_serve_read(int sockfd,
+		struct nfp_cpp *cpp)
 {
 	struct nfp_cpp_area *area;
 	off_t offset, nfp_offset;
@@ -227,7 +228,7 @@ nfp_cpp_bridge_serve_read(int sockfd, struct nfp_cpp *cpp)
 	int err = 0;
 
 	PMD_CPP_LOG(DEBUG, "%s: offset size %zu, count_size: %zu\n", __func__,
-		sizeof(off_t), sizeof(size_t));
+			sizeof(off_t), sizeof(size_t));
 
 	/* Reading the count param */
 	err = recv(sockfd, &count, sizeof(off_t), 0);
@@ -246,20 +247,20 @@ nfp_cpp_bridge_serve_read(int sockfd, struct nfp_cpp *cpp)
 	nfp_offset = offset & ((1ull << 40) - 1);
 
 	PMD_CPP_LOG(DEBUG, "%s: count %zu and offset %jd\n", __func__, count,
-			   offset);
+			offset);
 	PMD_CPP_LOG(DEBUG, "%s: cpp_id %08x and nfp_offset %jd\n", __func__,
-			   cpp_id, nfp_offset);
+			cpp_id, nfp_offset);
 
 	/* Adjust length if not aligned */
 	if (((nfp_offset + (off_t)count - 1) & ~(NFP_CPP_MEMIO_BOUNDARY - 1)) !=
-	    (nfp_offset & ~(NFP_CPP_MEMIO_BOUNDARY - 1))) {
+			(nfp_offset & ~(NFP_CPP_MEMIO_BOUNDARY - 1))) {
 		curlen = NFP_CPP_MEMIO_BOUNDARY -
-			(nfp_offset & (NFP_CPP_MEMIO_BOUNDARY - 1));
+				(nfp_offset & (NFP_CPP_MEMIO_BOUNDARY - 1));
 	}
 
 	while (count > 0) {
 		area = nfp_cpp_area_alloc_with_name(cpp, cpp_id, "nfp.cdev",
-						    nfp_offset, curlen);
+				nfp_offset, curlen);
 		if (area == NULL) {
 			PMD_CPP_LOG(ERR, "area alloc failed");
 			return -EIO;
@@ -285,13 +286,12 @@ nfp_cpp_bridge_serve_read(int sockfd, struct nfp_cpp *cpp)
 				return -EIO;
 			}
 			PMD_CPP_LOG(DEBUG, "%s: sending %u of %zu\n", __func__,
-					   len, count);
+					len, count);
 
 			err = send(sockfd, tmpbuf, len, 0);
 			if (err != (int)len) {
-				PMD_CPP_LOG(ERR,
-					"error when sending: %d of %zu",
-					err, count);
+				PMD_CPP_LOG(ERR, "error when sending: %d of %zu",
+						err, count);
 				nfp_cpp_area_release(area);
 				nfp_cpp_area_free(area);
 				return -EIO;
@@ -304,7 +304,7 @@ nfp_cpp_bridge_serve_read(int sockfd, struct nfp_cpp *cpp)
 
 		count -= pos;
 		curlen = (count > NFP_CPP_MEMIO_BOUNDARY) ?
-			NFP_CPP_MEMIO_BOUNDARY : count;
+				NFP_CPP_MEMIO_BOUNDARY : count;
 	}
 	return 0;
 }
@@ -316,7 +316,8 @@ nfp_cpp_bridge_serve_read(int sockfd, struct nfp_cpp *cpp)
  * does not require any CPP access at all.
  */
 static int
-nfp_cpp_bridge_serve_ioctl(int sockfd, struct nfp_cpp *cpp)
+nfp_cpp_bridge_serve_ioctl(int sockfd,
+		struct nfp_cpp *cpp)
 {
 	uint32_t cmd, ident_size, tmp;
 	int err;
@@ -395,7 +396,7 @@ nfp_cpp_bridge_service_func(void *args)
 	strcpy(address.sa_data, "/tmp/nfp_cpp");
 
 	ret = bind(sockfd, (const struct sockaddr *)&address,
-		   sizeof(struct sockaddr));
+			sizeof(struct sockaddr));
 	if (ret < 0) {
 		PMD_CPP_LOG(ERR, "bind error (%d). Service failed", errno);
 		close(sockfd);
@@ -426,8 +427,7 @@ nfp_cpp_bridge_service_func(void *args)
 		while (1) {
 			ret = recv(datafd, &op, 4, 0);
 			if (ret <= 0) {
-				PMD_CPP_LOG(DEBUG, "%s: socket close\n",
-						   __func__);
+				PMD_CPP_LOG(DEBUG, "%s: socket close\n", __func__);
 				break;
 			}
 
diff --git a/drivers/net/nfp/nfp_ethdev.c b/drivers/net/nfp/nfp_ethdev.c
index 12feec8eb4..65473d87e8 100644
--- a/drivers/net/nfp/nfp_ethdev.c
+++ b/drivers/net/nfp/nfp_ethdev.c
@@ -22,7 +22,8 @@
 #include "nfp_logs.h"
 
 static int
-nfp_net_pf_read_mac(struct nfp_app_fw_nic *app_fw_nic, int port)
+nfp_net_pf_read_mac(struct nfp_app_fw_nic *app_fw_nic,
+		int port)
 {
 	struct nfp_eth_table *nfp_eth_table;
 	struct nfp_net_hw *hw = NULL;
@@ -70,21 +71,20 @@ nfp_net_start(struct rte_eth_dev *dev)
 	if (dev->data->dev_conf.intr_conf.rxq != 0) {
 		if (app_fw_nic->multiport) {
 			PMD_INIT_LOG(ERR, "PMD rx interrupt is not supported "
-					  "with NFP multiport PF");
+					"with NFP multiport PF");
 				return -EINVAL;
 		}
-		if (rte_intr_type_get(intr_handle) ==
-						RTE_INTR_HANDLE_UIO) {
+		if (rte_intr_type_get(intr_handle) == RTE_INTR_HANDLE_UIO) {
 			/*
 			 * Better not to share LSC with RX interrupts.
 			 * Unregistering LSC interrupt handler
 			 */
 			rte_intr_callback_unregister(pci_dev->intr_handle,
-				nfp_net_dev_interrupt_handler, (void *)dev);
+					nfp_net_dev_interrupt_handler, (void *)dev);
 
 			if (dev->data->nb_rx_queues > 1) {
 				PMD_INIT_LOG(ERR, "PMD rx interrupt only "
-					     "supports 1 queue with UIO");
+						"supports 1 queue with UIO");
 				return -EIO;
 			}
 		}
@@ -162,8 +162,7 @@ nfp_net_start(struct rte_eth_dev *dev)
 		/* Configure the physical port up */
 		nfp_eth_set_configured(hw->cpp, hw->nfp_idx, 1);
 	else
-		nfp_eth_set_configured(dev->process_private,
-				       hw->nfp_idx, 1);
+		nfp_eth_set_configured(dev->process_private, hw->nfp_idx, 1);
 
 	hw->ctrl = new_ctrl;
 
@@ -209,8 +208,7 @@ nfp_net_stop(struct rte_eth_dev *dev)
 		/* Configure the physical port down */
 		nfp_eth_set_configured(hw->cpp, hw->nfp_idx, 0);
 	else
-		nfp_eth_set_configured(dev->process_private,
-				       hw->nfp_idx, 0);
+		nfp_eth_set_configured(dev->process_private, hw->nfp_idx, 0);
 
 	return 0;
 }
@@ -229,8 +227,7 @@ nfp_net_set_link_up(struct rte_eth_dev *dev)
 		/* Configure the physical port down */
 		return nfp_eth_set_configured(hw->cpp, hw->nfp_idx, 1);
 	else
-		return nfp_eth_set_configured(dev->process_private,
-					      hw->nfp_idx, 1);
+		return nfp_eth_set_configured(dev->process_private, hw->nfp_idx, 1);
 }
 
 /* Set the link down. */
@@ -247,8 +244,7 @@ nfp_net_set_link_down(struct rte_eth_dev *dev)
 		/* Configure the physical port down */
 		return nfp_eth_set_configured(hw->cpp, hw->nfp_idx, 0);
 	else
-		return nfp_eth_set_configured(dev->process_private,
-					      hw->nfp_idx, 0);
+		return nfp_eth_set_configured(dev->process_private, hw->nfp_idx, 0);
 }
 
 /* Reset and stop device. The device can not be restarted. */
@@ -287,8 +283,7 @@ nfp_net_close(struct rte_eth_dev *dev)
 	nfp_ipsec_uninit(dev);
 
 	/* Cancel possible impending LSC work here before releasing the port*/
-	rte_eal_alarm_cancel(nfp_net_dev_interrupt_delayed_handler,
-			     (void *)dev);
+	rte_eal_alarm_cancel(nfp_net_dev_interrupt_delayed_handler, (void *)dev);
 
 	/* Only free PF resources after all physical ports have been closed */
 	/* Mark this port as unused and free device priv resources*/
@@ -525,8 +520,7 @@ nfp_net_init(struct rte_eth_dev *eth_dev)
 
 	hw->ctrl_bar = pci_dev->mem_resource[0].addr;
 	if (hw->ctrl_bar == NULL) {
-		PMD_DRV_LOG(ERR,
-			"hw->ctrl_bar is NULL. BAR0 not configured");
+		PMD_DRV_LOG(ERR, "hw->ctrl_bar is NULL. BAR0 not configured");
 		return -ENODEV;
 	}
 
@@ -592,7 +586,7 @@ nfp_net_init(struct rte_eth_dev *eth_dev)
 	eth_dev->data->dev_private = hw;
 
 	PMD_INIT_LOG(DEBUG, "ctrl_bar: %p, tx_bar: %p, rx_bar: %p",
-		     hw->ctrl_bar, hw->tx_bar, hw->rx_bar);
+			hw->ctrl_bar, hw->tx_bar, hw->rx_bar);
 
 	nfp_net_cfg_queue_setup(hw);
 	hw->mtu = RTE_ETHER_MTU;
@@ -607,8 +601,7 @@ nfp_net_init(struct rte_eth_dev *eth_dev)
 	rte_spinlock_init(&hw->reconfig_lock);
 
 	/* Allocating memory for mac addr */
-	eth_dev->data->mac_addrs = rte_zmalloc("mac_addr",
-					       RTE_ETHER_ADDR_LEN, 0);
+	eth_dev->data->mac_addrs = rte_zmalloc("mac_addr", RTE_ETHER_ADDR_LEN, 0);
 	if (eth_dev->data->mac_addrs == NULL) {
 		PMD_INIT_LOG(ERR, "Failed to space for MAC address");
 		return -ENOMEM;
@@ -634,10 +627,10 @@ nfp_net_init(struct rte_eth_dev *eth_dev)
 	eth_dev->data->dev_flags |= RTE_ETH_DEV_AUTOFILL_QUEUE_XSTATS;
 
 	PMD_INIT_LOG(INFO, "port %d VendorID=0x%x DeviceID=0x%x "
-		     "mac=" RTE_ETHER_ADDR_PRT_FMT,
-		     eth_dev->data->port_id, pci_dev->id.vendor_id,
-		     pci_dev->id.device_id,
-		     RTE_ETHER_ADDR_BYTES(&hw->mac_addr));
+			"mac=" RTE_ETHER_ADDR_PRT_FMT,
+			eth_dev->data->port_id, pci_dev->id.vendor_id,
+			pci_dev->id.device_id,
+			RTE_ETHER_ADDR_BYTES(&hw->mac_addr));
 
 	/* Registering LSC interrupt handler */
 	rte_intr_callback_register(pci_dev->intr_handle,
@@ -653,7 +646,9 @@ nfp_net_init(struct rte_eth_dev *eth_dev)
 #define DEFAULT_FW_PATH       "/lib/firmware/netronome"
 
 static int
-nfp_fw_upload(struct rte_pci_device *dev, struct nfp_nsp *nsp, char *card)
+nfp_fw_upload(struct rte_pci_device *dev,
+		struct nfp_nsp *nsp,
+		char *card)
 {
 	struct nfp_cpp *cpp = nfp_nsp_cpp(nsp);
 	void *fw_buf;
@@ -675,11 +670,10 @@ nfp_fw_upload(struct rte_pci_device *dev, struct nfp_nsp *nsp, char *card)
 	/* First try to find a firmware image specific for this device */
 	snprintf(serial, sizeof(serial),
 			"serial-%02x-%02x-%02x-%02x-%02x-%02x-%02x-%02x",
-		cpp_serial[0], cpp_serial[1], cpp_serial[2], cpp_serial[3],
-		cpp_serial[4], cpp_serial[5], interface >> 8, interface & 0xff);
+			cpp_serial[0], cpp_serial[1], cpp_serial[2], cpp_serial[3],
+			cpp_serial[4], cpp_serial[5], interface >> 8, interface & 0xff);
 
-	snprintf(fw_name, sizeof(fw_name), "%s/%s.nffw", DEFAULT_FW_PATH,
-			serial);
+	snprintf(fw_name, sizeof(fw_name), "%s/%s.nffw", DEFAULT_FW_PATH, serial);
 
 	PMD_DRV_LOG(DEBUG, "Trying with fw file: %s", fw_name);
 	if (rte_firmware_read(fw_name, &fw_buf, &fsize) == 0)
@@ -703,7 +697,7 @@ nfp_fw_upload(struct rte_pci_device *dev, struct nfp_nsp *nsp, char *card)
 
 load_fw:
 	PMD_DRV_LOG(INFO, "Firmware file found at %s with size: %zu",
-		fw_name, fsize);
+			fw_name, fsize);
 	PMD_DRV_LOG(INFO, "Uploading the firmware ...");
 	nfp_nsp_load_fw(nsp, fw_buf, fsize);
 	PMD_DRV_LOG(INFO, "Done");
@@ -737,7 +731,7 @@ nfp_fw_setup(struct rte_pci_device *dev,
 
 	if (nfp_eth_table->count == 0 || nfp_eth_table->count > 8) {
 		PMD_DRV_LOG(ERR, "NFP ethernet table reports wrong ports: %u",
-			nfp_eth_table->count);
+				nfp_eth_table->count);
 		return -EIO;
 	}
 
@@ -829,7 +823,7 @@ nfp_init_app_fw_nic(struct nfp_pf_dev *pf_dev,
 	numa_node = rte_socket_id();
 	for (i = 0; i < app_fw_nic->total_phyports; i++) {
 		snprintf(port_name, sizeof(port_name), "%s_port%d",
-			 pf_dev->pci_dev->device.name, i);
+				pf_dev->pci_dev->device.name, i);
 
 		/* Allocate a eth_dev for this phyport */
 		eth_dev = rte_eth_dev_allocate(port_name);
@@ -839,8 +833,8 @@ nfp_init_app_fw_nic(struct nfp_pf_dev *pf_dev,
 		}
 
 		/* Allocate memory for this phyport */
-		eth_dev->data->dev_private =
-			rte_zmalloc_socket(port_name, sizeof(struct nfp_net_hw),
+		eth_dev->data->dev_private = rte_zmalloc_socket(port_name,
+				sizeof(struct nfp_net_hw),
 				RTE_CACHE_LINE_SIZE, numa_node);
 		if (eth_dev->data->dev_private == NULL) {
 			ret = -ENOMEM;
@@ -961,8 +955,7 @@ nfp_pf_init(struct rte_pci_device *pci_dev)
 	/* Now the symbol table should be there */
 	sym_tbl = nfp_rtsym_table_read(cpp);
 	if (sym_tbl == NULL) {
-		PMD_INIT_LOG(ERR, "Something is wrong with the firmware"
-				" symbol table");
+		PMD_INIT_LOG(ERR, "Something is wrong with the firmware symbol table");
 		ret = -EIO;
 		goto eth_table_cleanup;
 	}
@@ -1144,8 +1137,7 @@ nfp_pf_secondary_init(struct rte_pci_device *pci_dev)
 	 */
 	sym_tbl = nfp_rtsym_table_read(cpp);
 	if (sym_tbl == NULL) {
-		PMD_INIT_LOG(ERR, "Something is wrong with the firmware"
-				" symbol table");
+		PMD_INIT_LOG(ERR, "Something is wrong with the firmware symbol table");
 		return -EIO;
 	}
 
@@ -1198,27 +1190,27 @@ nfp_pf_pci_probe(struct rte_pci_driver *pci_drv __rte_unused,
 static const struct rte_pci_id pci_id_nfp_pf_net_map[] = {
 	{
 		RTE_PCI_DEVICE(PCI_VENDOR_ID_NETRONOME,
-			       PCI_DEVICE_ID_NFP3800_PF_NIC)
+				PCI_DEVICE_ID_NFP3800_PF_NIC)
 	},
 	{
 		RTE_PCI_DEVICE(PCI_VENDOR_ID_NETRONOME,
-			       PCI_DEVICE_ID_NFP4000_PF_NIC)
+				PCI_DEVICE_ID_NFP4000_PF_NIC)
 	},
 	{
 		RTE_PCI_DEVICE(PCI_VENDOR_ID_NETRONOME,
-			       PCI_DEVICE_ID_NFP6000_PF_NIC)
+				PCI_DEVICE_ID_NFP6000_PF_NIC)
 	},
 	{
 		RTE_PCI_DEVICE(PCI_VENDOR_ID_CORIGINE,
-			       PCI_DEVICE_ID_NFP3800_PF_NIC)
+				PCI_DEVICE_ID_NFP3800_PF_NIC)
 	},
 	{
 		RTE_PCI_DEVICE(PCI_VENDOR_ID_CORIGINE,
-			       PCI_DEVICE_ID_NFP4000_PF_NIC)
+				PCI_DEVICE_ID_NFP4000_PF_NIC)
 	},
 	{
 		RTE_PCI_DEVICE(PCI_VENDOR_ID_CORIGINE,
-			       PCI_DEVICE_ID_NFP6000_PF_NIC)
+				PCI_DEVICE_ID_NFP6000_PF_NIC)
 	},
 	{
 		.vendor_id = 0,
diff --git a/drivers/net/nfp/nfp_ethdev_vf.c b/drivers/net/nfp/nfp_ethdev_vf.c
index c8d6b0461b..ac6a10685d 100644
--- a/drivers/net/nfp/nfp_ethdev_vf.c
+++ b/drivers/net/nfp/nfp_ethdev_vf.c
@@ -50,18 +50,17 @@ nfp_netvf_start(struct rte_eth_dev *dev)
 
 	/* check and configure queue intr-vector mapping */
 	if (dev->data->dev_conf.intr_conf.rxq != 0) {
-		if (rte_intr_type_get(intr_handle) ==
-						RTE_INTR_HANDLE_UIO) {
+		if (rte_intr_type_get(intr_handle) == RTE_INTR_HANDLE_UIO) {
 			/*
 			 * Better not to share LSC with RX interrupts.
 			 * Unregistering LSC interrupt handler
 			 */
 			rte_intr_callback_unregister(pci_dev->intr_handle,
-				nfp_net_dev_interrupt_handler, (void *)dev);
+					nfp_net_dev_interrupt_handler, (void *)dev);
 
 			if (dev->data->nb_rx_queues > 1) {
 				PMD_INIT_LOG(ERR, "PMD rx interrupt only "
-					     "supports 1 queue with UIO");
+						"supports 1 queue with UIO");
 				return -EIO;
 			}
 		}
@@ -190,12 +189,10 @@ nfp_netvf_close(struct rte_eth_dev *dev)
 
 	/* unregister callback func from eal lib */
 	rte_intr_callback_unregister(pci_dev->intr_handle,
-				     nfp_net_dev_interrupt_handler,
-				     (void *)dev);
+			nfp_net_dev_interrupt_handler, (void *)dev);
 
 	/* Cancel possible impending LSC work here before releasing the port*/
-	rte_eal_alarm_cancel(nfp_net_dev_interrupt_delayed_handler,
-			     (void *)dev);
+	rte_eal_alarm_cancel(nfp_net_dev_interrupt_delayed_handler, (void *)dev);
 
 	/*
 	 * The ixgbe PMD disables the pcie master on the
@@ -282,8 +279,7 @@ nfp_netvf_init(struct rte_eth_dev *eth_dev)
 
 	hw->ctrl_bar = pci_dev->mem_resource[0].addr;
 	if (hw->ctrl_bar == NULL) {
-		PMD_DRV_LOG(ERR,
-			"hw->ctrl_bar is NULL. BAR0 not configured");
+		PMD_DRV_LOG(ERR, "hw->ctrl_bar is NULL. BAR0 not configured");
 		return -ENODEV;
 	}
 
@@ -301,8 +297,8 @@ nfp_netvf_init(struct rte_eth_dev *eth_dev)
 
 	rte_eth_copy_pci_info(eth_dev, pci_dev);
 
-	hw->eth_xstats_base = rte_malloc("rte_eth_xstat", sizeof(struct rte_eth_xstat) *
-			nfp_net_xstats_size(eth_dev), 0);
+	hw->eth_xstats_base = rte_malloc("rte_eth_xstat",
+			sizeof(struct rte_eth_xstat) * nfp_net_xstats_size(eth_dev), 0);
 	if (hw->eth_xstats_base == NULL) {
 		PMD_INIT_LOG(ERR, "no memory for xstats base values on device %s!",
 				pci_dev->device.name);
@@ -318,13 +314,11 @@ nfp_netvf_init(struct rte_eth_dev *eth_dev)
 	PMD_INIT_LOG(DEBUG, "tx_bar_off: 0x%" PRIx64 "", tx_bar_off);
 	PMD_INIT_LOG(DEBUG, "rx_bar_off: 0x%" PRIx64 "", rx_bar_off);
 
-	hw->tx_bar = (uint8_t *)pci_dev->mem_resource[2].addr +
-		     tx_bar_off;
-	hw->rx_bar = (uint8_t *)pci_dev->mem_resource[2].addr +
-		     rx_bar_off;
+	hw->tx_bar = (uint8_t *)pci_dev->mem_resource[2].addr + tx_bar_off;
+	hw->rx_bar = (uint8_t *)pci_dev->mem_resource[2].addr + rx_bar_off;
 
 	PMD_INIT_LOG(DEBUG, "ctrl_bar: %p, tx_bar: %p, rx_bar: %p",
-		     hw->ctrl_bar, hw->tx_bar, hw->rx_bar);
+			hw->ctrl_bar, hw->tx_bar, hw->rx_bar);
 
 	nfp_net_cfg_queue_setup(hw);
 	hw->mtu = RTE_ETHER_MTU;
@@ -339,8 +333,7 @@ nfp_netvf_init(struct rte_eth_dev *eth_dev)
 	rte_spinlock_init(&hw->reconfig_lock);
 
 	/* Allocating memory for mac addr */
-	eth_dev->data->mac_addrs = rte_zmalloc("mac_addr",
-					       RTE_ETHER_ADDR_LEN, 0);
+	eth_dev->data->mac_addrs = rte_zmalloc("mac_addr", RTE_ETHER_ADDR_LEN, 0);
 	if (eth_dev->data->mac_addrs == NULL) {
 		PMD_INIT_LOG(ERR, "Failed to space for MAC address");
 		err = -ENOMEM;
@@ -351,8 +344,7 @@ nfp_netvf_init(struct rte_eth_dev *eth_dev)
 
 	tmp_ether_addr = &hw->mac_addr;
 	if (rte_is_valid_assigned_ether_addr(tmp_ether_addr) == 0) {
-		PMD_INIT_LOG(INFO, "Using random mac address for port %d",
-				   port);
+		PMD_INIT_LOG(INFO, "Using random mac address for port %d", port);
 		/* Using random mac addresses for VFs */
 		rte_eth_random_addr(&hw->mac_addr.addr_bytes[0]);
 		nfp_net_write_mac(hw, &hw->mac_addr.addr_bytes[0]);
@@ -367,16 +359,15 @@ nfp_netvf_init(struct rte_eth_dev *eth_dev)
 	eth_dev->data->dev_flags |= RTE_ETH_DEV_AUTOFILL_QUEUE_XSTATS;
 
 	PMD_INIT_LOG(INFO, "port %d VendorID=0x%x DeviceID=0x%x "
-		     "mac=" RTE_ETHER_ADDR_PRT_FMT,
-		     eth_dev->data->port_id, pci_dev->id.vendor_id,
-		     pci_dev->id.device_id,
-		     RTE_ETHER_ADDR_BYTES(&hw->mac_addr));
+			"mac=" RTE_ETHER_ADDR_PRT_FMT,
+			eth_dev->data->port_id, pci_dev->id.vendor_id,
+			pci_dev->id.device_id,
+			RTE_ETHER_ADDR_BYTES(&hw->mac_addr));
 
 	if (rte_eal_process_type() == RTE_PROC_PRIMARY) {
 		/* Registering LSC interrupt handler */
 		rte_intr_callback_register(pci_dev->intr_handle,
-					   nfp_net_dev_interrupt_handler,
-					   (void *)eth_dev);
+				nfp_net_dev_interrupt_handler, (void *)eth_dev);
 		/* Telling the firmware about the LSC interrupt entry */
 		nn_cfg_writeb(hw, NFP_NET_CFG_LSC, NFP_NET_IRQ_LSC_IDX);
 		/* Recording current stats counters values */
@@ -394,39 +385,42 @@ nfp_netvf_init(struct rte_eth_dev *eth_dev)
 static const struct rte_pci_id pci_id_nfp_vf_net_map[] = {
 	{
 		RTE_PCI_DEVICE(PCI_VENDOR_ID_NETRONOME,
-			       PCI_DEVICE_ID_NFP3800_VF_NIC)
+				PCI_DEVICE_ID_NFP3800_VF_NIC)
 	},
 	{
 		RTE_PCI_DEVICE(PCI_VENDOR_ID_NETRONOME,
-			       PCI_DEVICE_ID_NFP6000_VF_NIC)
+				PCI_DEVICE_ID_NFP6000_VF_NIC)
 	},
 	{
 		RTE_PCI_DEVICE(PCI_VENDOR_ID_CORIGINE,
-			       PCI_DEVICE_ID_NFP3800_VF_NIC)
+				PCI_DEVICE_ID_NFP3800_VF_NIC)
 	},
 	{
 		RTE_PCI_DEVICE(PCI_VENDOR_ID_CORIGINE,
-			       PCI_DEVICE_ID_NFP6000_VF_NIC)
+				PCI_DEVICE_ID_NFP6000_VF_NIC)
 	},
 	{
 		.vendor_id = 0,
 	},
 };
 
-static int nfp_vf_pci_uninit(struct rte_eth_dev *eth_dev)
+static int
+nfp_vf_pci_uninit(struct rte_eth_dev *eth_dev)
 {
 	/* VF cleanup, just free private port data */
 	return nfp_netvf_close(eth_dev);
 }
 
-static int eth_nfp_vf_pci_probe(struct rte_pci_driver *pci_drv __rte_unused,
-	struct rte_pci_device *pci_dev)
+static int
+eth_nfp_vf_pci_probe(struct rte_pci_driver *pci_drv __rte_unused,
+		struct rte_pci_device *pci_dev)
 {
 	return rte_eth_dev_pci_generic_probe(pci_dev,
-		sizeof(struct nfp_net_adapter), nfp_netvf_init);
+			sizeof(struct nfp_net_adapter), nfp_netvf_init);
 }
 
-static int eth_nfp_vf_pci_remove(struct rte_pci_device *pci_dev)
+static int
+eth_nfp_vf_pci_remove(struct rte_pci_device *pci_dev)
 {
 	return rte_eth_dev_pci_generic_remove(pci_dev, nfp_vf_pci_uninit);
 }
diff --git a/drivers/net/nfp/nfp_flow.c b/drivers/net/nfp/nfp_flow.c
index 3ea6813d9a..6d9a1c249f 100644
--- a/drivers/net/nfp/nfp_flow.c
+++ b/drivers/net/nfp/nfp_flow.c
@@ -156,7 +156,8 @@ nfp_flow_dev_to_priv(struct rte_eth_dev *dev)
 }
 
 static int
-nfp_mask_id_alloc(struct nfp_flow_priv *priv, uint8_t *mask_id)
+nfp_mask_id_alloc(struct nfp_flow_priv *priv,
+		uint8_t *mask_id)
 {
 	uint8_t temp_id;
 	uint8_t freed_id;
@@ -188,7 +189,8 @@ nfp_mask_id_alloc(struct nfp_flow_priv *priv, uint8_t *mask_id)
 }
 
 static int
-nfp_mask_id_free(struct nfp_flow_priv *priv, uint8_t mask_id)
+nfp_mask_id_free(struct nfp_flow_priv *priv,
+		uint8_t mask_id)
 {
 	struct circ_buf *ring;
 
@@ -703,7 +705,8 @@ nfp_tun_check_ip_off_del(struct nfp_flower_representor *repr,
 }
 
 static void
-nfp_flower_compile_meta_tci(char *mbuf_off, struct nfp_fl_key_ls *key_layer)
+nfp_flower_compile_meta_tci(char *mbuf_off,
+		struct nfp_fl_key_ls *key_layer)
 {
 	struct nfp_flower_meta_tci *tci_meta;
 
@@ -714,7 +717,8 @@ nfp_flower_compile_meta_tci(char *mbuf_off, struct nfp_fl_key_ls *key_layer)
 }
 
 static void
-nfp_flower_update_meta_tci(char *exact, uint8_t mask_id)
+nfp_flower_update_meta_tci(char *exact,
+		uint8_t mask_id)
 {
 	struct nfp_flower_meta_tci *meta_tci;
 
@@ -723,7 +727,8 @@ nfp_flower_update_meta_tci(char *exact, uint8_t mask_id)
 }
 
 static void
-nfp_flower_compile_ext_meta(char *mbuf_off, struct nfp_fl_key_ls *key_layer)
+nfp_flower_compile_ext_meta(char *mbuf_off,
+		struct nfp_fl_key_ls *key_layer)
 {
 	struct nfp_flower_ext_meta *ext_meta;
 
@@ -1436,14 +1441,14 @@ nfp_flow_merge_tcp(__rte_unused struct nfp_app_fw_flower *app_fw_flower,
 	meta_tci = (struct nfp_flower_meta_tci *)nfp_flow->payload.unmasked_data;
 	if ((meta_tci->nfp_flow_key_layer & NFP_FLOWER_LAYER_IPV4) != 0) {
 		ipv4  = (struct nfp_flower_ipv4 *)
-			(*mbuf_off - sizeof(struct nfp_flower_ipv4));
+				(*mbuf_off - sizeof(struct nfp_flower_ipv4));
 		ports = (struct nfp_flower_tp_ports *)
-			((char *)ipv4 - sizeof(struct nfp_flower_tp_ports));
+				((char *)ipv4 - sizeof(struct nfp_flower_tp_ports));
 	} else { /* IPv6 */
 		ipv6  = (struct nfp_flower_ipv6 *)
-			(*mbuf_off - sizeof(struct nfp_flower_ipv6));
+				(*mbuf_off - sizeof(struct nfp_flower_ipv6));
 		ports = (struct nfp_flower_tp_ports *)
-			((char *)ipv6 - sizeof(struct nfp_flower_tp_ports));
+				((char *)ipv6 - sizeof(struct nfp_flower_tp_ports));
 	}
 
 	mask = item->mask ? item->mask : proc->mask_default;
@@ -1514,10 +1519,10 @@ nfp_flow_merge_udp(__rte_unused struct nfp_app_fw_flower *app_fw_flower,
 	meta_tci = (struct nfp_flower_meta_tci *)nfp_flow->payload.unmasked_data;
 	if ((meta_tci->nfp_flow_key_layer & NFP_FLOWER_LAYER_IPV4) != 0) {
 		ports_off = *mbuf_off - sizeof(struct nfp_flower_ipv4) -
-			sizeof(struct nfp_flower_tp_ports);
+				sizeof(struct nfp_flower_tp_ports);
 	} else {/* IPv6 */
 		ports_off = *mbuf_off - sizeof(struct nfp_flower_ipv6) -
-			sizeof(struct nfp_flower_tp_ports);
+				sizeof(struct nfp_flower_tp_ports);
 	}
 	ports = (struct nfp_flower_tp_ports *)ports_off;
 
@@ -1557,10 +1562,10 @@ nfp_flow_merge_sctp(__rte_unused struct nfp_app_fw_flower *app_fw_flower,
 	meta_tci = (struct nfp_flower_meta_tci *)nfp_flow->payload.unmasked_data;
 	if ((meta_tci->nfp_flow_key_layer & NFP_FLOWER_LAYER_IPV4) != 0) {
 		ports_off = *mbuf_off - sizeof(struct nfp_flower_ipv4) -
-			sizeof(struct nfp_flower_tp_ports);
+				sizeof(struct nfp_flower_tp_ports);
 	} else { /* IPv6 */
 		ports_off = *mbuf_off - sizeof(struct nfp_flower_ipv6) -
-			sizeof(struct nfp_flower_tp_ports);
+				sizeof(struct nfp_flower_tp_ports);
 	}
 	ports = (struct nfp_flower_tp_ports *)ports_off;
 
@@ -1951,9 +1956,8 @@ nfp_flow_item_check(const struct rte_flow_item *item,
 		return 0;
 	}
 
-	mask = item->mask ?
-		(const uint8_t *)item->mask :
-		(const uint8_t *)proc->mask_default;
+	mask = item->mask ? (const uint8_t *)item->mask :
+			(const uint8_t *)proc->mask_default;
 
 	/*
 	 * Single-pass check to make sure that:
diff --git a/drivers/net/nfp/nfp_rxtx.c b/drivers/net/nfp/nfp_rxtx.c
index 4528417559..7885166753 100644
--- a/drivers/net/nfp/nfp_rxtx.c
+++ b/drivers/net/nfp/nfp_rxtx.c
@@ -158,8 +158,9 @@ struct nfp_ptype_parsed {
 
 /* set mbuf checksum flags based on RX descriptor flags */
 void
-nfp_net_rx_cksum(struct nfp_net_rxq *rxq, struct nfp_net_rx_desc *rxd,
-		 struct rte_mbuf *mb)
+nfp_net_rx_cksum(struct nfp_net_rxq *rxq,
+		struct nfp_net_rx_desc *rxd,
+		struct rte_mbuf *mb)
 {
 	struct nfp_net_hw *hw = rxq->hw;
 
@@ -192,7 +193,7 @@ nfp_net_rx_fill_freelist(struct nfp_net_rxq *rxq)
 	unsigned int i;
 
 	PMD_RX_LOG(DEBUG, "Fill Rx Freelist for %u descriptors",
-		   rxq->rx_count);
+			rxq->rx_count);
 
 	for (i = 0; i < rxq->rx_count; i++) {
 		struct nfp_net_rx_desc *rxd;
@@ -218,8 +219,7 @@ nfp_net_rx_fill_freelist(struct nfp_net_rxq *rxq)
 	rte_wmb();
 
 	/* Not advertising the whole ring as the firmware gets confused if so */
-	PMD_RX_LOG(DEBUG, "Increment FL write pointer in %u",
-		   rxq->rx_count - 1);
+	PMD_RX_LOG(DEBUG, "Increment FL write pointer in %u", rxq->rx_count - 1);
 
 	nfp_qcp_ptr_add(rxq->qcp_fl, NFP_QCP_WRITE_PTR, rxq->rx_count - 1);
 
@@ -521,7 +521,8 @@ nfp_net_parse_meta(struct nfp_net_rx_desc *rxds,
  *   Mbuf to set the packet type.
  */
 static void
-nfp_net_set_ptype(const struct nfp_ptype_parsed *nfp_ptype, struct rte_mbuf *mb)
+nfp_net_set_ptype(const struct nfp_ptype_parsed *nfp_ptype,
+		struct rte_mbuf *mb)
 {
 	uint32_t mbuf_ptype = RTE_PTYPE_L2_ETHER;
 	uint8_t nfp_tunnel_ptype = nfp_ptype->tunnel_ptype;
@@ -678,7 +679,9 @@ nfp_net_parse_ptype(struct nfp_net_rx_desc *rxds,
  */
 
 uint16_t
-nfp_net_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
+nfp_net_recv_pkts(void *rx_queue,
+		struct rte_mbuf **rx_pkts,
+		uint16_t nb_pkts)
 {
 	struct nfp_net_rxq *rxq;
 	struct nfp_net_rx_desc *rxds;
@@ -728,8 +731,7 @@ nfp_net_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
 		 */
 		new_mb = rte_pktmbuf_alloc(rxq->mem_pool);
 		if (unlikely(new_mb == NULL)) {
-			PMD_RX_LOG(DEBUG,
-			"RX mbuf alloc failed port_id=%u queue_id=%hu",
+			PMD_RX_LOG(DEBUG, "RX mbuf alloc failed port_id=%u queue_id=%hu",
 					rxq->port_id, rxq->qidx);
 			nfp_net_mbuf_alloc_failed(rxq);
 			break;
@@ -743,29 +745,28 @@ nfp_net_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
 		rxb->mbuf = new_mb;
 
 		PMD_RX_LOG(DEBUG, "Packet len: %u, mbuf_size: %u",
-			   rxds->rxd.data_len, rxq->mbuf_size);
+				rxds->rxd.data_len, rxq->mbuf_size);
 
 		/* Size of this segment */
 		mb->data_len = rxds->rxd.data_len - NFP_DESC_META_LEN(rxds);
 		/* Size of the whole packet. We just support 1 segment */
 		mb->pkt_len = rxds->rxd.data_len - NFP_DESC_META_LEN(rxds);
 
-		if (unlikely((mb->data_len + hw->rx_offset) >
-			     rxq->mbuf_size)) {
+		if (unlikely((mb->data_len + hw->rx_offset) > rxq->mbuf_size)) {
 			/*
 			 * This should not happen and the user has the
 			 * responsibility of avoiding it. But we have
 			 * to give some info about the error
 			 */
 			PMD_RX_LOG(ERR,
-				"mbuf overflow likely due to the RX offset.\n"
-				"\t\tYour mbuf size should have extra space for"
-				" RX offset=%u bytes.\n"
-				"\t\tCurrently you just have %u bytes available"
-				" but the received packet is %u bytes long",
-				hw->rx_offset,
-				rxq->mbuf_size - hw->rx_offset,
-				mb->data_len);
+					"mbuf overflow likely due to the RX offset.\n"
+					"\t\tYour mbuf size should have extra space for"
+					" RX offset=%u bytes.\n"
+					"\t\tCurrently you just have %u bytes available"
+					" but the received packet is %u bytes long",
+					hw->rx_offset,
+					rxq->mbuf_size - hw->rx_offset,
+					mb->data_len);
 			rte_pktmbuf_free(mb);
 			break;
 		}
@@ -774,8 +775,7 @@ nfp_net_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
 		if (hw->rx_offset != 0)
 			mb->data_off = RTE_PKTMBUF_HEADROOM + hw->rx_offset;
 		else
-			mb->data_off = RTE_PKTMBUF_HEADROOM +
-				       NFP_DESC_META_LEN(rxds);
+			mb->data_off = RTE_PKTMBUF_HEADROOM + NFP_DESC_META_LEN(rxds);
 
 		/* No scatter mode supported */
 		mb->nb_segs = 1;
@@ -817,7 +817,7 @@ nfp_net_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
 		return nb_hold;
 
 	PMD_RX_LOG(DEBUG, "RX  port_id=%hu queue_id=%hu, %hu packets received",
-		   rxq->port_id, rxq->qidx, avail);
+			rxq->port_id, rxq->qidx, avail);
 
 	nb_hold += rxq->nb_rx_hold;
 
@@ -828,7 +828,7 @@ nfp_net_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
 	rte_wmb();
 	if (nb_hold > rxq->rx_free_thresh) {
 		PMD_RX_LOG(DEBUG, "port=%hu queue=%hu nb_hold=%hu avail=%hu",
-			   rxq->port_id, rxq->qidx, nb_hold, avail);
+				rxq->port_id, rxq->qidx, nb_hold, avail);
 		nfp_qcp_ptr_add(rxq->qcp_fl, NFP_QCP_WRITE_PTR, nb_hold);
 		nb_hold = 0;
 	}
@@ -854,7 +854,8 @@ nfp_net_rx_queue_release_mbufs(struct nfp_net_rxq *rxq)
 }
 
 void
-nfp_net_rx_queue_release(struct rte_eth_dev *dev, uint16_t queue_idx)
+nfp_net_rx_queue_release(struct rte_eth_dev *dev,
+		uint16_t queue_idx)
 {
 	struct nfp_net_rxq *rxq = dev->data->rx_queues[queue_idx];
 
@@ -876,10 +877,11 @@ nfp_net_reset_rx_queue(struct nfp_net_rxq *rxq)
 
 int
 nfp_net_rx_queue_setup(struct rte_eth_dev *dev,
-		       uint16_t queue_idx, uint16_t nb_desc,
-		       unsigned int socket_id,
-		       const struct rte_eth_rxconf *rx_conf,
-		       struct rte_mempool *mp)
+		uint16_t queue_idx,
+		uint16_t nb_desc,
+		unsigned int socket_id,
+		const struct rte_eth_rxconf *rx_conf,
+		struct rte_mempool *mp)
 {
 	uint16_t min_rx_desc;
 	uint16_t max_rx_desc;
@@ -897,7 +899,7 @@ nfp_net_rx_queue_setup(struct rte_eth_dev *dev,
 	/* Validating number of descriptors */
 	rx_desc_sz = nb_desc * sizeof(struct nfp_net_rx_desc);
 	if (rx_desc_sz % NFP_ALIGN_RING_DESC != 0 ||
-	    nb_desc > max_rx_desc || nb_desc < min_rx_desc) {
+			nb_desc > max_rx_desc || nb_desc < min_rx_desc) {
 		PMD_DRV_LOG(ERR, "Wrong nb_desc value");
 		return -EINVAL;
 	}
@@ -913,7 +915,7 @@ nfp_net_rx_queue_setup(struct rte_eth_dev *dev,
 
 	/* Allocating rx queue data structure */
 	rxq = rte_zmalloc_socket("ethdev RX queue", sizeof(struct nfp_net_rxq),
-				 RTE_CACHE_LINE_SIZE, socket_id);
+			RTE_CACHE_LINE_SIZE, socket_id);
 	if (rxq == NULL)
 		return -ENOMEM;
 
@@ -943,9 +945,8 @@ nfp_net_rx_queue_setup(struct rte_eth_dev *dev,
 	 * resizing in later calls to the queue setup function.
 	 */
 	tz = rte_eth_dma_zone_reserve(dev, "rx_ring", queue_idx,
-				   sizeof(struct nfp_net_rx_desc) *
-				   max_rx_desc, NFP_MEMZONE_ALIGN,
-				   socket_id);
+			sizeof(struct nfp_net_rx_desc) * max_rx_desc,
+			NFP_MEMZONE_ALIGN, socket_id);
 
 	if (tz == NULL) {
 		PMD_DRV_LOG(ERR, "Error allocating rx dma");
@@ -960,8 +961,8 @@ nfp_net_rx_queue_setup(struct rte_eth_dev *dev,
 
 	/* mbuf pointers array for referencing mbufs linked to RX descriptors */
 	rxq->rxbufs = rte_zmalloc_socket("rxq->rxbufs",
-					 sizeof(*rxq->rxbufs) * nb_desc,
-					 RTE_CACHE_LINE_SIZE, socket_id);
+			sizeof(*rxq->rxbufs) * nb_desc, RTE_CACHE_LINE_SIZE,
+			socket_id);
 	if (rxq->rxbufs == NULL) {
 		nfp_net_rx_queue_release(dev, queue_idx);
 		dev->data->rx_queues[queue_idx] = NULL;
@@ -969,7 +970,7 @@ nfp_net_rx_queue_setup(struct rte_eth_dev *dev,
 	}
 
 	PMD_RX_LOG(DEBUG, "rxbufs=%p hw_ring=%p dma_addr=0x%" PRIx64,
-		   rxq->rxbufs, rxq->rxds, (unsigned long)rxq->dma);
+			rxq->rxbufs, rxq->rxds, (unsigned long)rxq->dma);
 
 	nfp_net_reset_rx_queue(rxq);
 
@@ -998,15 +999,15 @@ nfp_net_tx_free_bufs(struct nfp_net_txq *txq)
 	int todo;
 
 	PMD_TX_LOG(DEBUG, "queue %hu. Check for descriptor with a complete"
-		   " status", txq->qidx);
+			" status", txq->qidx);
 
 	/* Work out how many packets have been sent */
 	qcp_rd_p = nfp_qcp_read(txq->qcp_q, NFP_QCP_READ_PTR);
 
 	if (qcp_rd_p == txq->rd_p) {
 		PMD_TX_LOG(DEBUG, "queue %hu: It seems harrier is not sending "
-			   "packets (%u, %u)", txq->qidx,
-			   qcp_rd_p, txq->rd_p);
+				"packets (%u, %u)", txq->qidx,
+				qcp_rd_p, txq->rd_p);
 		return 0;
 	}
 
@@ -1016,7 +1017,7 @@ nfp_net_tx_free_bufs(struct nfp_net_txq *txq)
 		todo = qcp_rd_p + txq->tx_count - txq->rd_p;
 
 	PMD_TX_LOG(DEBUG, "qcp_rd_p %u, txq->rd_p: %u, qcp->rd_p: %u",
-		   qcp_rd_p, txq->rd_p, txq->rd_p);
+			qcp_rd_p, txq->rd_p, txq->rd_p);
 
 	if (todo == 0)
 		return todo;
@@ -1045,7 +1046,8 @@ nfp_net_tx_queue_release_mbufs(struct nfp_net_txq *txq)
 }
 
 void
-nfp_net_tx_queue_release(struct rte_eth_dev *dev, uint16_t queue_idx)
+nfp_net_tx_queue_release(struct rte_eth_dev *dev,
+		uint16_t queue_idx)
 {
 	struct nfp_net_txq *txq = dev->data->tx_queues[queue_idx];
 
diff --git a/drivers/net/nfp/nfp_rxtx.h b/drivers/net/nfp/nfp_rxtx.h
index 3c7138f7d6..9a30ebd89e 100644
--- a/drivers/net/nfp/nfp_rxtx.h
+++ b/drivers/net/nfp/nfp_rxtx.h
@@ -234,17 +234,17 @@ nfp_net_mbuf_alloc_failed(struct nfp_net_rxq *rxq)
 }
 
 void nfp_net_rx_cksum(struct nfp_net_rxq *rxq, struct nfp_net_rx_desc *rxd,
-		 struct rte_mbuf *mb);
+		struct rte_mbuf *mb);
 int nfp_net_rx_freelist_setup(struct rte_eth_dev *dev);
 uint32_t nfp_net_rx_queue_count(void *rx_queue);
 uint16_t nfp_net_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
-				  uint16_t nb_pkts);
+		uint16_t nb_pkts);
 void nfp_net_rx_queue_release(struct rte_eth_dev *dev, uint16_t queue_idx);
 void nfp_net_reset_rx_queue(struct nfp_net_rxq *rxq);
 int nfp_net_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
-				  uint16_t nb_desc, unsigned int socket_id,
-				  const struct rte_eth_rxconf *rx_conf,
-				  struct rte_mempool *mp);
+		uint16_t nb_desc, unsigned int socket_id,
+		const struct rte_eth_rxconf *rx_conf,
+		struct rte_mempool *mp);
 void nfp_net_tx_queue_release(struct rte_eth_dev *dev, uint16_t queue_idx);
 void nfp_net_reset_tx_queue(struct nfp_net_txq *txq);
 
-- 
2.39.1


^ permalink raw reply	[flat|nested] 40+ messages in thread

* [PATCH v2 03/11] net/nfp: unify the type of integer variable
  2023-10-12  1:26 ` [PATCH v2 00/11] Unify the PMD coding style Chaoyong He
  2023-10-12  1:26   ` [PATCH v2 01/11] net/nfp: explicitly compare to null and 0 Chaoyong He
  2023-10-12  1:26   ` [PATCH v2 02/11] net/nfp: unify the indent coding style Chaoyong He
@ 2023-10-12  1:26   ` Chaoyong He
  2023-10-12  1:26   ` [PATCH v2 04/11] net/nfp: standard the local variable coding style Chaoyong He
                     ` (8 subsequent siblings)
  11 siblings, 0 replies; 40+ messages in thread
From: Chaoyong He @ 2023-10-12  1:26 UTC (permalink / raw)
  To: dev; +Cc: oss-drivers, Chaoyong He, Long Wu, Peng Zhang

Unify the type of integer variables to the DPDK preferred style.
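
As a minimal sketch of the rule (illustrative only, not code from this
patch): a loop counter bounded by a uint16_t queue count should itself
be declared with the matching fixed-width type instead of a plain 'int',
for example:

	#include <stdint.h>

	/* Hypothetical helper; names are illustrative, not from the driver. */
	static uint64_t
	example_enabled_queues(uint16_t nb_queues)
	{
		uint16_t i;	/* previously a plain 'int' counter */
		uint64_t enabled_queues = 0;

		for (i = 0; i < nb_queues; i++)
			enabled_queues |= (1ULL << i);

		return enabled_queues;
	}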

Signed-off-by: Chaoyong He <chaoyong.he@corigine.com>
Reviewed-by: Long Wu <long.wu@corigine.com>
Reviewed-by: Peng Zhang <peng.zhang@corigine.com>
---
 drivers/net/nfp/flower/nfp_flower.c      |  2 +-
 drivers/net/nfp/flower/nfp_flower_cmsg.c | 16 +++++-----
 drivers/net/nfp/nfd3/nfp_nfd3_dp.c       |  6 ++--
 drivers/net/nfp/nfp_common.c             | 37 +++++++++++++-----------
 drivers/net/nfp/nfp_common.h             | 16 +++++-----
 drivers/net/nfp/nfp_ethdev.c             | 24 +++++++--------
 drivers/net/nfp/nfp_ethdev_vf.c          |  2 +-
 drivers/net/nfp/nfp_flow.c               |  8 ++---
 drivers/net/nfp/nfp_rxtx.c               | 12 ++++----
 drivers/net/nfp/nfp_rxtx.h               |  2 +-
 10 files changed, 64 insertions(+), 61 deletions(-)

diff --git a/drivers/net/nfp/flower/nfp_flower.c b/drivers/net/nfp/flower/nfp_flower.c
index 3352693d71..7dd1423aaf 100644
--- a/drivers/net/nfp/flower/nfp_flower.c
+++ b/drivers/net/nfp/flower/nfp_flower.c
@@ -26,7 +26,7 @@ nfp_pf_repr_enable_queues(struct rte_eth_dev *dev)
 {
 	struct nfp_net_hw *hw;
 	uint64_t enabled_queues = 0;
-	int i;
+	uint16_t i;
 	struct nfp_flower_representor *repr;
 
 	repr = dev->data->dev_private;
diff --git a/drivers/net/nfp/flower/nfp_flower_cmsg.c b/drivers/net/nfp/flower/nfp_flower_cmsg.c
index 6b9532f5b6..5d6912b079 100644
--- a/drivers/net/nfp/flower/nfp_flower_cmsg.c
+++ b/drivers/net/nfp/flower/nfp_flower_cmsg.c
@@ -64,10 +64,10 @@ nfp_flower_cmsg_mac_repr_init(struct rte_mbuf *mbuf,
 
 static void
 nfp_flower_cmsg_mac_repr_fill(struct rte_mbuf *m,
-		unsigned int idx,
-		unsigned int nbi,
-		unsigned int nbi_port,
-		unsigned int phys_port)
+		uint8_t idx,
+		uint32_t nbi,
+		uint32_t nbi_port,
+		uint32_t phys_port)
 {
 	struct nfp_flower_cmsg_mac_repr *msg;
 
@@ -81,11 +81,11 @@ nfp_flower_cmsg_mac_repr_fill(struct rte_mbuf *m,
 int
 nfp_flower_cmsg_mac_repr(struct nfp_app_fw_flower *app_fw_flower)
 {
-	int i;
+	uint8_t i;
 	uint16_t cnt;
-	unsigned int nbi;
-	unsigned int nbi_port;
-	unsigned int phys_port;
+	uint32_t nbi;
+	uint32_t nbi_port;
+	uint32_t phys_port;
 	struct rte_mbuf *mbuf;
 	struct nfp_eth_table *nfp_eth_table;
 
diff --git a/drivers/net/nfp/nfd3/nfp_nfd3_dp.c b/drivers/net/nfp/nfd3/nfp_nfd3_dp.c
index 64928254d8..5a84629ed7 100644
--- a/drivers/net/nfp/nfd3/nfp_nfd3_dp.c
+++ b/drivers/net/nfp/nfd3/nfp_nfd3_dp.c
@@ -227,9 +227,9 @@ nfp_net_nfd3_xmit_pkts_common(void *tx_queue,
 		uint16_t nb_pkts,
 		bool repr_flag)
 {
-	int i;
-	int pkt_size;
-	int dma_size;
+	uint16_t i;
+	uint32_t pkt_size;
+	uint16_t dma_size;
 	uint8_t offset;
 	uint64_t dma_addr;
 	uint16_t free_descs;
diff --git a/drivers/net/nfp/nfp_common.c b/drivers/net/nfp/nfp_common.c
index 9719a9212b..cb2c2afbd7 100644
--- a/drivers/net/nfp/nfp_common.c
+++ b/drivers/net/nfp/nfp_common.c
@@ -199,7 +199,7 @@ static int
 __nfp_net_reconfig(struct nfp_net_hw *hw,
 		uint32_t update)
 {
-	int cnt;
+	uint32_t cnt;
 	uint32_t new;
 	struct timespec wait;
 
@@ -229,7 +229,7 @@ __nfp_net_reconfig(struct nfp_net_hw *hw,
 		}
 		if (cnt >= NFP_NET_POLL_TIMEOUT) {
 			PMD_INIT_LOG(ERR, "Reconfig timeout for 0x%08x after"
-					" %dms", update, cnt);
+					" %ums", update, cnt);
 			return -EIO;
 		}
 		nanosleep(&wait, 0); /* waiting for a 1ms */
@@ -466,7 +466,7 @@ nfp_net_enable_queues(struct rte_eth_dev *dev)
 {
 	struct nfp_net_hw *hw;
 	uint64_t enabled_queues = 0;
-	int i;
+	uint16_t i;
 
 	hw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private);
 
@@ -575,7 +575,7 @@ nfp_configure_rx_interrupt(struct rte_eth_dev *dev,
 		struct rte_intr_handle *intr_handle)
 {
 	struct nfp_net_hw *hw;
-	int i;
+	uint16_t i;
 
 	if (rte_intr_vec_list_alloc(intr_handle, "intr_vec",
 				dev->data->nb_rx_queues) != 0) {
@@ -832,7 +832,7 @@ int
 nfp_net_stats_get(struct rte_eth_dev *dev,
 		struct rte_eth_stats *stats)
 {
-	int i;
+	uint16_t i;
 	struct nfp_net_hw *hw;
 	struct rte_eth_stats nfp_dev_stats;
 
@@ -923,7 +923,7 @@ nfp_net_stats_get(struct rte_eth_dev *dev,
 int
 nfp_net_stats_reset(struct rte_eth_dev *dev)
 {
-	int i;
+	uint16_t i;
 	struct nfp_net_hw *hw;
 
 	hw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private);
@@ -1398,7 +1398,7 @@ nfp_rx_queue_intr_enable(struct rte_eth_dev *dev,
 {
 	struct rte_pci_device *pci_dev;
 	struct nfp_net_hw *hw;
-	int base = 0;
+	uint16_t base = 0;
 
 	hw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private);
 	pci_dev = RTE_ETH_DEV_TO_PCI(dev);
@@ -1419,7 +1419,7 @@ nfp_rx_queue_intr_disable(struct rte_eth_dev *dev,
 {
 	struct rte_pci_device *pci_dev;
 	struct nfp_net_hw *hw;
-	int base = 0;
+	uint16_t base = 0;
 
 	hw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private);
 	pci_dev = RTE_ETH_DEV_TO_PCI(dev);
@@ -1619,9 +1619,10 @@ nfp_net_rss_reta_write(struct rte_eth_dev *dev,
 		struct rte_eth_rss_reta_entry64 *reta_conf,
 		uint16_t reta_size)
 {
-	uint32_t reta, mask;
-	int i, j;
-	int idx, shift;
+	uint8_t mask;
+	uint32_t reta;
+	uint16_t i, j;
+	uint16_t idx, shift;
 	struct nfp_net_hw *hw =
 		NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private);
 
@@ -1695,8 +1696,9 @@ nfp_net_reta_query(struct rte_eth_dev *dev,
 		struct rte_eth_rss_reta_entry64 *reta_conf,
 		uint16_t reta_size)
 {
-	uint8_t i, j, mask;
-	int idx, shift;
+	uint16_t i, j;
+	uint8_t mask;
+	uint16_t idx, shift;
 	uint32_t reta;
 	struct nfp_net_hw *hw;
 
@@ -1720,7 +1722,7 @@ nfp_net_reta_query(struct rte_eth_dev *dev,
 		/* Handling 4 RSS entries per loop */
 		idx = i / RTE_ETH_RETA_GROUP_SIZE;
 		shift = i % RTE_ETH_RETA_GROUP_SIZE;
-		mask = (uint8_t)((reta_conf[idx].mask >> shift) & 0xF);
+		mask = (reta_conf[idx].mask >> shift) & 0xF;
 
 		if (mask == 0)
 			continue;
@@ -1744,7 +1746,7 @@ nfp_net_rss_hash_write(struct rte_eth_dev *dev,
 	uint64_t rss_hf;
 	uint32_t cfg_rss_ctrl = 0;
 	uint8_t key;
-	int i;
+	uint8_t i;
 
 	hw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private);
 
@@ -1835,7 +1837,7 @@ nfp_net_rss_hash_conf_get(struct rte_eth_dev *dev,
 	uint64_t rss_hf;
 	uint32_t cfg_rss_ctrl;
 	uint8_t key;
-	int i;
+	uint8_t i;
 	struct nfp_net_hw *hw;
 
 	hw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private);
@@ -1893,7 +1895,8 @@ nfp_net_rss_config_default(struct rte_eth_dev *dev)
 	struct rte_eth_rss_reta_entry64 nfp_reta_conf[2];
 	uint16_t rx_queues = dev->data->nb_rx_queues;
 	uint16_t queue;
-	int i, j, ret;
+	uint8_t i, j;
+	int ret;
 
 	PMD_DRV_LOG(INFO, "setting default RSS conf for %u queues",
 			rx_queues);
diff --git a/drivers/net/nfp/nfp_common.h b/drivers/net/nfp/nfp_common.h
index e4fd394868..71153ea25b 100644
--- a/drivers/net/nfp/nfp_common.h
+++ b/drivers/net/nfp/nfp_common.h
@@ -245,14 +245,14 @@ nn_writeq(uint64_t val,
  */
 static inline uint8_t
 nn_cfg_readb(struct nfp_net_hw *hw,
-		int off)
+		uint32_t off)
 {
 	return nn_readb(hw->ctrl_bar + off);
 }
 
 static inline void
 nn_cfg_writeb(struct nfp_net_hw *hw,
-		int off,
+		uint32_t off,
 		uint8_t val)
 {
 	nn_writeb(val, hw->ctrl_bar + off);
@@ -260,14 +260,14 @@ nn_cfg_writeb(struct nfp_net_hw *hw,
 
 static inline uint16_t
 nn_cfg_readw(struct nfp_net_hw *hw,
-		int off)
+		uint32_t off)
 {
 	return rte_le_to_cpu_16(nn_readw(hw->ctrl_bar + off));
 }
 
 static inline void
 nn_cfg_writew(struct nfp_net_hw *hw,
-		int off,
+		uint32_t off,
 		uint16_t val)
 {
 	nn_writew(rte_cpu_to_le_16(val), hw->ctrl_bar + off);
@@ -275,14 +275,14 @@ nn_cfg_writew(struct nfp_net_hw *hw,
 
 static inline uint32_t
 nn_cfg_readl(struct nfp_net_hw *hw,
-		int off)
+		uint32_t off)
 {
 	return rte_le_to_cpu_32(nn_readl(hw->ctrl_bar + off));
 }
 
 static inline void
 nn_cfg_writel(struct nfp_net_hw *hw,
-		int off,
+		uint32_t off,
 		uint32_t val)
 {
 	nn_writel(rte_cpu_to_le_32(val), hw->ctrl_bar + off);
@@ -290,14 +290,14 @@ nn_cfg_writel(struct nfp_net_hw *hw,
 
 static inline uint64_t
 nn_cfg_readq(struct nfp_net_hw *hw,
-		int off)
+		uint32_t off)
 {
 	return rte_le_to_cpu_64(nn_readq(hw->ctrl_bar + off));
 }
 
 static inline void
 nn_cfg_writeq(struct nfp_net_hw *hw,
-		int off,
+		uint32_t off,
 		uint64_t val)
 {
 	nn_writeq(rte_cpu_to_le_64(val), hw->ctrl_bar + off);
diff --git a/drivers/net/nfp/nfp_ethdev.c b/drivers/net/nfp/nfp_ethdev.c
index 65473d87e8..140d20dcf7 100644
--- a/drivers/net/nfp/nfp_ethdev.c
+++ b/drivers/net/nfp/nfp_ethdev.c
@@ -23,7 +23,7 @@
 
 static int
 nfp_net_pf_read_mac(struct nfp_app_fw_nic *app_fw_nic,
-		int port)
+		uint16_t port)
 {
 	struct nfp_eth_table *nfp_eth_table;
 	struct nfp_net_hw *hw = NULL;
@@ -255,7 +255,7 @@ nfp_net_close(struct rte_eth_dev *dev)
 	struct rte_pci_device *pci_dev;
 	struct nfp_pf_dev *pf_dev;
 	struct nfp_app_fw_nic *app_fw_nic;
-	int i;
+	uint8_t i;
 
 	if (rte_eal_process_type() != RTE_PROC_PRIMARY)
 		return 0;
@@ -487,7 +487,7 @@ nfp_net_init(struct rte_eth_dev *eth_dev)
 	struct rte_ether_addr *tmp_ether_addr;
 	uint64_t rx_base;
 	uint64_t tx_base;
-	int port = 0;
+	uint16_t port = 0;
 	int err;
 
 	PMD_INIT_FUNC_TRACE();
@@ -501,7 +501,7 @@ nfp_net_init(struct rte_eth_dev *eth_dev)
 	app_fw_nic = NFP_PRIV_TO_APP_FW_NIC(pf_dev->app_fw_priv);
 
 	port = ((struct nfp_net_hw *)eth_dev->data->dev_private)->idx;
-	if (port < 0 || port > 7) {
+	if (port > 7) {
 		PMD_DRV_LOG(ERR, "Port value is wrong");
 		return -ENODEV;
 	}
@@ -761,10 +761,10 @@ static int
 nfp_init_app_fw_nic(struct nfp_pf_dev *pf_dev,
 		const struct nfp_dev_info *dev_info)
 {
-	int i;
+	uint8_t i;
 	int ret;
 	int err = 0;
-	int total_vnics;
+	uint32_t total_vnics;
 	struct nfp_net_hw *hw;
 	unsigned int numa_node;
 	struct rte_eth_dev *eth_dev;
@@ -785,7 +785,7 @@ nfp_init_app_fw_nic(struct nfp_pf_dev *pf_dev,
 
 	/* Read the number of vNIC's created for the PF */
 	total_vnics = nfp_rtsym_read_le(pf_dev->sym_tbl, "nfd_cfg_pf0_num_ports", &err);
-	if (err != 0 || total_vnics <= 0 || total_vnics > 8) {
+	if (err != 0 || total_vnics == 0 || total_vnics > 8) {
 		PMD_INIT_LOG(ERR, "nfd_cfg_pf0_num_ports symbol with wrong value");
 		ret = -ENODEV;
 		goto app_cleanup;
@@ -795,7 +795,7 @@ nfp_init_app_fw_nic(struct nfp_pf_dev *pf_dev,
 	 * For coreNIC the number of vNICs exposed should be the same as the
 	 * number of physical ports
 	 */
-	if (total_vnics != (int)nfp_eth_table->count) {
+	if (total_vnics != nfp_eth_table->count) {
 		PMD_INIT_LOG(ERR, "Total physical ports do not match number of vNICs");
 		ret = -ENODEV;
 		goto app_cleanup;
@@ -1053,15 +1053,15 @@ nfp_secondary_init_app_fw_nic(struct rte_pci_device *pci_dev,
 		struct nfp_rtsym_table *sym_tbl,
 		struct nfp_cpp *cpp)
 {
-	int i;
+	uint32_t i;
 	int err = 0;
 	int ret = 0;
-	int total_vnics;
+	uint32_t total_vnics;
 	struct nfp_net_hw *hw;
 
 	/* Read the number of vNIC's created for the PF */
 	total_vnics = nfp_rtsym_read_le(sym_tbl, "nfd_cfg_pf0_num_ports", &err);
-	if (err != 0 || total_vnics <= 0 || total_vnics > 8) {
+	if (err != 0 || total_vnics == 0 || total_vnics > 8) {
 		PMD_INIT_LOG(ERR, "nfd_cfg_pf0_num_ports symbol with wrong value");
 		return -ENODEV;
 	}
@@ -1069,7 +1069,7 @@ nfp_secondary_init_app_fw_nic(struct rte_pci_device *pci_dev,
 	for (i = 0; i < total_vnics; i++) {
 		struct rte_eth_dev *eth_dev;
 		char port_name[RTE_ETH_NAME_MAX_LEN];
-		snprintf(port_name, sizeof(port_name), "%s_port%d",
+		snprintf(port_name, sizeof(port_name), "%s_port%u",
 				pci_dev->device.name, i);
 
 		PMD_INIT_LOG(DEBUG, "Secondary attaching to port %s", port_name);
diff --git a/drivers/net/nfp/nfp_ethdev_vf.c b/drivers/net/nfp/nfp_ethdev_vf.c
index ac6a10685d..892300a909 100644
--- a/drivers/net/nfp/nfp_ethdev_vf.c
+++ b/drivers/net/nfp/nfp_ethdev_vf.c
@@ -260,7 +260,7 @@ nfp_netvf_init(struct rte_eth_dev *eth_dev)
 
 	uint64_t tx_bar_off = 0, rx_bar_off = 0;
 	uint32_t start_q;
-	int port = 0;
+	uint16_t port = 0;
 	int err;
 	const struct nfp_dev_info *dev_info;
 
diff --git a/drivers/net/nfp/nfp_flow.c b/drivers/net/nfp/nfp_flow.c
index 6d9a1c249f..4c9904e36c 100644
--- a/drivers/net/nfp/nfp_flow.c
+++ b/drivers/net/nfp/nfp_flow.c
@@ -121,7 +121,7 @@ struct nfp_flow_item_proc {
 	/* Bit-mask to use when @p item->mask is not provided. */
 	const void *mask_default;
 	/* Size in bytes for @p mask_support and @p mask_default. */
-	const unsigned int mask_sz;
+	const size_t mask_sz;
 	/* Merge a pattern item into a flow rule handle. */
 	int (*merge)(struct nfp_app_fw_flower *app_fw_flower,
 			struct rte_flow *nfp_flow,
@@ -1941,8 +1941,8 @@ static int
 nfp_flow_item_check(const struct rte_flow_item *item,
 		const struct nfp_flow_item_proc *proc)
 {
+	size_t i;
 	int ret = 0;
-	unsigned int i;
 	const uint8_t *mask;
 
 	/* item->last and item->mask cannot exist without item->spec. */
@@ -2037,7 +2037,7 @@ nfp_flow_compile_item_proc(struct nfp_flower_representor *repr,
 		char **mbuf_off_mask,
 		bool is_outer_layer)
 {
-	int i;
+	uint32_t i;
 	int ret = 0;
 	bool continue_flag = true;
 	const struct rte_flow_item *item;
@@ -2271,7 +2271,7 @@ nfp_flow_action_set_ipv6(char *act_data,
 		const struct rte_flow_action *action,
 		bool ip_src_flag)
 {
-	int i;
+	uint32_t i;
 	rte_be32_t tmp;
 	size_t act_size;
 	struct nfp_fl_act_set_ipv6_addr *set_ip;
diff --git a/drivers/net/nfp/nfp_rxtx.c b/drivers/net/nfp/nfp_rxtx.c
index 7885166753..8cbb9b74a2 100644
--- a/drivers/net/nfp/nfp_rxtx.c
+++ b/drivers/net/nfp/nfp_rxtx.c
@@ -190,7 +190,7 @@ nfp_net_rx_fill_freelist(struct nfp_net_rxq *rxq)
 {
 	struct nfp_net_dp_buf *rxe = rxq->rxbufs;
 	uint64_t dma_addr;
-	unsigned int i;
+	uint16_t i;
 
 	PMD_RX_LOG(DEBUG, "Fill Rx Freelist for %u descriptors",
 			rxq->rx_count);
@@ -229,7 +229,7 @@ nfp_net_rx_fill_freelist(struct nfp_net_rxq *rxq)
 int
 nfp_net_rx_freelist_setup(struct rte_eth_dev *dev)
 {
-	int i;
+	uint16_t i;
 
 	for (i = 0; i < dev->data->nb_rx_queues; i++) {
 		if (nfp_net_rx_fill_freelist(dev->data->rx_queues[i]) != 0)
@@ -840,7 +840,7 @@ nfp_net_recv_pkts(void *rx_queue,
 static void
 nfp_net_rx_queue_release_mbufs(struct nfp_net_rxq *rxq)
 {
-	unsigned int i;
+	uint16_t i;
 
 	if (rxq->rxbufs == NULL)
 		return;
@@ -992,11 +992,11 @@ nfp_net_rx_queue_setup(struct rte_eth_dev *dev,
  * @txq: TX queue to work with
  * Returns number of descriptors freed
  */
-int
+uint32_t
 nfp_net_tx_free_bufs(struct nfp_net_txq *txq)
 {
 	uint32_t qcp_rd_p;
-	int todo;
+	uint32_t todo;
 
 	PMD_TX_LOG(DEBUG, "queue %hu. Check for descriptor with a complete"
 			" status", txq->qidx);
@@ -1032,7 +1032,7 @@ nfp_net_tx_free_bufs(struct nfp_net_txq *txq)
 static void
 nfp_net_tx_queue_release_mbufs(struct nfp_net_txq *txq)
 {
-	unsigned int i;
+	uint32_t i;
 
 	if (txq->txbufs == NULL)
 		return;
diff --git a/drivers/net/nfp/nfp_rxtx.h b/drivers/net/nfp/nfp_rxtx.h
index 9a30ebd89e..98ef6c3d93 100644
--- a/drivers/net/nfp/nfp_rxtx.h
+++ b/drivers/net/nfp/nfp_rxtx.h
@@ -253,7 +253,7 @@ int nfp_net_tx_queue_setup(struct rte_eth_dev *dev,
 		uint16_t nb_desc,
 		unsigned int socket_id,
 		const struct rte_eth_txconf *tx_conf);
-int nfp_net_tx_free_bufs(struct nfp_net_txq *txq);
+uint32_t nfp_net_tx_free_bufs(struct nfp_net_txq *txq);
 void nfp_net_set_meta_vlan(struct nfp_net_meta_raw *meta_data,
 		struct rte_mbuf *pkt,
 		uint8_t layer);
-- 
2.39.1


^ permalink raw reply	[flat|nested] 40+ messages in thread

* [PATCH v2 04/11] net/nfp: standard the local variable coding style
  2023-10-12  1:26 ` [PATCH v2 00/11] Unify the PMD coding style Chaoyong He
                     ` (2 preceding siblings ...)
  2023-10-12  1:26   ` [PATCH v2 03/11] net/nfp: unify the type of integer variable Chaoyong He
@ 2023-10-12  1:26   ` Chaoyong He
  2023-10-12  1:26   ` [PATCH v2 05/11] net/nfp: adjust the log statement Chaoyong He
                     ` (7 subsequent siblings)
  11 siblings, 0 replies; 40+ messages in thread
From: Chaoyong He @ 2023-10-12  1:26 UTC (permalink / raw)
  To: dev; +Cc: oss-drivers, Chaoyong He, Long Wu, Peng Zhang

Only one local variable should be declared on each line, and the local
variables should follow a unified declaration order.
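
As a rough before/after sketch (a hypothetical fragment; the struct name
is a placeholder, and the exact ordering rule is an assumption read from
the hunks below, which mostly place simple scalars before pointer
variables):

	#include <stdint.h>
	#include <stddef.h>

	struct example_hw { int unused; };	/* placeholder type */

	static void
	example_locals(void)
	{
		/* Before: several variables per line, mixed ordering:
		 *	struct example_hw *hw;
		 *	uint32_t new_ctrl, update = 0;
		 */

		/* After: one declaration per line, scalars before pointers. */
		uint32_t update = 0;
		uint32_t new_ctrl = 0;
		struct example_hw *hw = NULL;

		(void)update;
		(void)new_ctrl;
		(void)hw;
	}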

Signed-off-by: Chaoyong He <chaoyong.he@corigine.com>
Reviewed-by: Long Wu <long.wu@corigine.com>
Reviewed-by: Peng Zhang <peng.zhang@corigine.com>
---
 drivers/net/nfp/flower/nfp_flower.c |  6 +-
 drivers/net/nfp/nfd3/nfp_nfd3_dp.c  |  4 +-
 drivers/net/nfp/nfp_common.c        | 97 ++++++++++++++++-------------
 drivers/net/nfp/nfp_common.h        |  3 +-
 drivers/net/nfp/nfp_cpp_bridge.c    | 39 ++++++++----
 drivers/net/nfp/nfp_ethdev.c        | 47 +++++++-------
 drivers/net/nfp/nfp_ethdev_vf.c     | 23 +++----
 drivers/net/nfp/nfp_flow.c          | 28 ++++-----
 drivers/net/nfp/nfp_rxtx.c          | 38 +++++------
 9 files changed, 154 insertions(+), 131 deletions(-)

diff --git a/drivers/net/nfp/flower/nfp_flower.c b/drivers/net/nfp/flower/nfp_flower.c
index 7dd1423aaf..7a4e671178 100644
--- a/drivers/net/nfp/flower/nfp_flower.c
+++ b/drivers/net/nfp/flower/nfp_flower.c
@@ -24,9 +24,9 @@
 static void
 nfp_pf_repr_enable_queues(struct rte_eth_dev *dev)
 {
+	uint16_t i;
 	struct nfp_net_hw *hw;
 	uint64_t enabled_queues = 0;
-	uint16_t i;
 	struct nfp_flower_representor *repr;
 
 	repr = dev->data->dev_private;
@@ -50,9 +50,9 @@ nfp_pf_repr_enable_queues(struct rte_eth_dev *dev)
 static void
 nfp_pf_repr_disable_queues(struct rte_eth_dev *dev)
 {
-	struct nfp_net_hw *hw;
+	uint32_t update;
 	uint32_t new_ctrl;
-	uint32_t update = 0;
+	struct nfp_net_hw *hw;
 	struct nfp_flower_representor *repr;
 
 	repr = dev->data->dev_private;
diff --git a/drivers/net/nfp/nfd3/nfp_nfd3_dp.c b/drivers/net/nfp/nfd3/nfp_nfd3_dp.c
index 5a84629ed7..699f65ebef 100644
--- a/drivers/net/nfp/nfd3/nfp_nfd3_dp.c
+++ b/drivers/net/nfp/nfd3/nfp_nfd3_dp.c
@@ -228,13 +228,13 @@ nfp_net_nfd3_xmit_pkts_common(void *tx_queue,
 		bool repr_flag)
 {
 	uint16_t i;
+	uint8_t offset;
 	uint32_t pkt_size;
 	uint16_t dma_size;
-	uint8_t offset;
 	uint64_t dma_addr;
 	uint16_t free_descs;
-	uint16_t issued_descs;
 	struct rte_mbuf *pkt;
+	uint16_t issued_descs;
 	struct nfp_net_hw *hw;
 	struct rte_mbuf **lmbuf;
 	struct nfp_net_txq *txq;
diff --git a/drivers/net/nfp/nfp_common.c b/drivers/net/nfp/nfp_common.c
index cb2c2afbd7..18291a1cde 100644
--- a/drivers/net/nfp/nfp_common.c
+++ b/drivers/net/nfp/nfp_common.c
@@ -375,10 +375,10 @@ nfp_net_mbox_reconfig(struct nfp_net_hw *hw,
 int
 nfp_net_configure(struct rte_eth_dev *dev)
 {
+	struct nfp_net_hw *hw;
 	struct rte_eth_conf *dev_conf;
 	struct rte_eth_rxmode *rxmode;
 	struct rte_eth_txmode *txmode;
-	struct nfp_net_hw *hw;
 
 	hw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private);
 
@@ -464,9 +464,9 @@ nfp_net_enbable_rxvlan_cap(struct nfp_net_hw *hw,
 void
 nfp_net_enable_queues(struct rte_eth_dev *dev)
 {
+	uint16_t i;
 	struct nfp_net_hw *hw;
 	uint64_t enabled_queues = 0;
-	uint16_t i;
 
 	hw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private);
 
@@ -488,8 +488,9 @@ nfp_net_enable_queues(struct rte_eth_dev *dev)
 void
 nfp_net_disable_queues(struct rte_eth_dev *dev)
 {
+	uint32_t update;
+	uint32_t new_ctrl;
 	struct nfp_net_hw *hw;
-	uint32_t new_ctrl, update = 0;
 
 	hw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private);
 
@@ -528,9 +529,10 @@ void
 nfp_net_write_mac(struct nfp_net_hw *hw,
 		uint8_t *mac)
 {
-	uint32_t mac0 = *(uint32_t *)mac;
+	uint32_t mac0;
 	uint16_t mac1;
 
+	mac0 = *(uint32_t *)mac;
 	nn_writel(rte_cpu_to_be_32(mac0), hw->ctrl_bar + NFP_NET_CFG_MACADDR);
 
 	mac += 4;
@@ -543,8 +545,9 @@ int
 nfp_net_set_mac_addr(struct rte_eth_dev *dev,
 		struct rte_ether_addr *mac_addr)
 {
+	uint32_t ctrl;
+	uint32_t update;
 	struct nfp_net_hw *hw;
-	uint32_t update, ctrl;
 
 	hw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private);
 	if ((hw->ctrl & NFP_NET_CFG_CTRL_ENABLE) != 0 &&
@@ -574,8 +577,8 @@ int
 nfp_configure_rx_interrupt(struct rte_eth_dev *dev,
 		struct rte_intr_handle *intr_handle)
 {
-	struct nfp_net_hw *hw;
 	uint16_t i;
+	struct nfp_net_hw *hw;
 
 	if (rte_intr_vec_list_alloc(intr_handle, "intr_vec",
 				dev->data->nb_rx_queues) != 0) {
@@ -615,11 +618,11 @@ nfp_configure_rx_interrupt(struct rte_eth_dev *dev,
 uint32_t
 nfp_check_offloads(struct rte_eth_dev *dev)
 {
+	uint32_t ctrl = 0;
 	struct nfp_net_hw *hw;
 	struct rte_eth_conf *dev_conf;
 	struct rte_eth_rxmode *rxmode;
 	struct rte_eth_txmode *txmode;
-	uint32_t ctrl = 0;
 
 	hw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private);
 
@@ -682,9 +685,10 @@ nfp_check_offloads(struct rte_eth_dev *dev)
 int
 nfp_net_promisc_enable(struct rte_eth_dev *dev)
 {
-	uint32_t new_ctrl, update = 0;
-	struct nfp_net_hw *hw;
 	int ret;
+	uint32_t new_ctrl;
+	uint32_t update = 0;
+	struct nfp_net_hw *hw;
 	struct nfp_flower_representor *repr;
 
 	PMD_DRV_LOG(DEBUG, "Promiscuous mode enable");
@@ -725,9 +729,10 @@ nfp_net_promisc_enable(struct rte_eth_dev *dev)
 int
 nfp_net_promisc_disable(struct rte_eth_dev *dev)
 {
-	uint32_t new_ctrl, update = 0;
-	struct nfp_net_hw *hw;
 	int ret;
+	uint32_t new_ctrl;
+	uint32_t update = 0;
+	struct nfp_net_hw *hw;
 
 	hw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private);
 
@@ -764,8 +769,8 @@ nfp_net_link_update(struct rte_eth_dev *dev,
 {
 	int ret;
 	uint32_t i;
-	uint32_t nn_link_status;
 	struct nfp_net_hw *hw;
+	uint32_t nn_link_status;
 	struct rte_eth_link link;
 	struct nfp_eth_table *nfp_eth_table;
 
@@ -988,12 +993,13 @@ nfp_net_stats_reset(struct rte_eth_dev *dev)
 uint32_t
 nfp_net_xstats_size(const struct rte_eth_dev *dev)
 {
-	/* If the device is a VF, then there will be no MAC stats */
-	struct nfp_net_hw *hw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+	uint32_t count;
+	struct nfp_net_hw *hw;
 	const uint32_t size = RTE_DIM(nfp_net_xstats);
 
+	/* If the device is a VF, then there will be no MAC stats */
+	hw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private);
 	if (hw->mac_stats == NULL) {
-		uint32_t count;
 		for (count = 0; count < size; count++) {
 			if (nfp_net_xstats[count].group == NFP_XSTAT_GROUP_MAC)
 				break;
@@ -1396,9 +1402,9 @@ int
 nfp_rx_queue_intr_enable(struct rte_eth_dev *dev,
 		uint16_t queue_id)
 {
-	struct rte_pci_device *pci_dev;
-	struct nfp_net_hw *hw;
 	uint16_t base = 0;
+	struct nfp_net_hw *hw;
+	struct rte_pci_device *pci_dev;
 
 	hw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private);
 	pci_dev = RTE_ETH_DEV_TO_PCI(dev);
@@ -1417,9 +1423,9 @@ int
 nfp_rx_queue_intr_disable(struct rte_eth_dev *dev,
 		uint16_t queue_id)
 {
-	struct rte_pci_device *pci_dev;
-	struct nfp_net_hw *hw;
 	uint16_t base = 0;
+	struct nfp_net_hw *hw;
+	struct rte_pci_device *pci_dev;
 
 	hw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private);
 	pci_dev = RTE_ETH_DEV_TO_PCI(dev);
@@ -1436,8 +1442,8 @@ nfp_rx_queue_intr_disable(struct rte_eth_dev *dev,
 static void
 nfp_net_dev_link_status_print(struct rte_eth_dev *dev)
 {
-	struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
 	struct rte_eth_link link;
+	struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
 
 	rte_eth_linkstatus_get(dev, &link);
 	if (link.link_status != 0)
@@ -1573,16 +1579,16 @@ int
 nfp_net_vlan_offload_set(struct rte_eth_dev *dev,
 		int mask)
 {
-	uint32_t new_ctrl, update;
+	int ret;
+	uint32_t update;
+	uint32_t new_ctrl;
 	struct nfp_net_hw *hw;
+	uint32_t rxvlan_ctrl = 0;
 	struct rte_eth_conf *dev_conf;
-	uint32_t rxvlan_ctrl;
-	int ret;
 
 	hw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private);
 	dev_conf = &dev->data->dev_conf;
 	new_ctrl = hw->ctrl;
-	rxvlan_ctrl = 0;
 
 	nfp_net_enbable_rxvlan_cap(hw, &rxvlan_ctrl);
 
@@ -1619,12 +1625,15 @@ nfp_net_rss_reta_write(struct rte_eth_dev *dev,
 		struct rte_eth_rss_reta_entry64 *reta_conf,
 		uint16_t reta_size)
 {
+	uint16_t i;
+	uint16_t j;
+	uint16_t idx;
 	uint8_t mask;
 	uint32_t reta;
-	uint16_t i, j;
-	uint16_t idx, shift;
-	struct nfp_net_hw *hw =
-		NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+	uint16_t shift;
+	struct nfp_net_hw *hw;
+
+	hw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private);
 
 	if (reta_size != NFP_NET_CFG_RSS_ITBL_SZ) {
 		PMD_DRV_LOG(ERR, "The size of hash lookup table configured "
@@ -1670,11 +1679,11 @@ nfp_net_reta_update(struct rte_eth_dev *dev,
 		struct rte_eth_rss_reta_entry64 *reta_conf,
 		uint16_t reta_size)
 {
-	struct nfp_net_hw *hw =
-		NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private);
-	uint32_t update;
 	int ret;
+	uint32_t update;
+	struct nfp_net_hw *hw;
 
+	hw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private);
 	if ((hw->ctrl & NFP_NET_CFG_CTRL_RSS_ANY) == 0)
 		return -EINVAL;
 
@@ -1696,10 +1705,12 @@ nfp_net_reta_query(struct rte_eth_dev *dev,
 		struct rte_eth_rss_reta_entry64 *reta_conf,
 		uint16_t reta_size)
 {
-	uint16_t i, j;
+	uint16_t i;
+	uint16_t j;
+	uint16_t idx;
 	uint8_t mask;
-	uint16_t idx, shift;
 	uint32_t reta;
+	uint16_t shift;
 	struct nfp_net_hw *hw;
 
 	hw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private);
@@ -1742,11 +1753,11 @@ static int
 nfp_net_rss_hash_write(struct rte_eth_dev *dev,
 		struct rte_eth_rss_conf *rss_conf)
 {
-	struct nfp_net_hw *hw;
+	uint8_t i;
+	uint8_t key;
 	uint64_t rss_hf;
+	struct nfp_net_hw *hw;
 	uint32_t cfg_rss_ctrl = 0;
-	uint8_t key;
-	uint8_t i;
 
 	hw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private);
 
@@ -1834,10 +1845,10 @@ int
 nfp_net_rss_hash_conf_get(struct rte_eth_dev *dev,
 		struct rte_eth_rss_conf *rss_conf)
 {
+	uint8_t i;
+	uint8_t key;
 	uint64_t rss_hf;
 	uint32_t cfg_rss_ctrl;
-	uint8_t key;
-	uint8_t i;
 	struct nfp_net_hw *hw;
 
 	hw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private);
@@ -1890,13 +1901,14 @@ nfp_net_rss_hash_conf_get(struct rte_eth_dev *dev,
 int
 nfp_net_rss_config_default(struct rte_eth_dev *dev)
 {
+	int ret;
+	uint8_t i;
+	uint8_t j;
+	uint16_t queue = 0;
 	struct rte_eth_conf *dev_conf;
 	struct rte_eth_rss_conf rss_conf;
-	struct rte_eth_rss_reta_entry64 nfp_reta_conf[2];
 	uint16_t rx_queues = dev->data->nb_rx_queues;
-	uint16_t queue;
-	uint8_t i, j;
-	int ret;
+	struct rte_eth_rss_reta_entry64 nfp_reta_conf[2];
 
 	PMD_DRV_LOG(INFO, "setting default RSS conf for %u queues",
 			rx_queues);
@@ -1904,7 +1916,6 @@ nfp_net_rss_config_default(struct rte_eth_dev *dev)
 	nfp_reta_conf[0].mask = ~0x0;
 	nfp_reta_conf[1].mask = ~0x0;
 
-	queue = 0;
 	for (i = 0; i < 0x40; i += 8) {
 		for (j = i; j < (i + 8); j++) {
 			nfp_reta_conf[0].reta[j] = queue;
diff --git a/drivers/net/nfp/nfp_common.h b/drivers/net/nfp/nfp_common.h
index 71153ea25b..9cb889c4a6 100644
--- a/drivers/net/nfp/nfp_common.h
+++ b/drivers/net/nfp/nfp_common.h
@@ -222,8 +222,9 @@ nn_writew(uint16_t val,
 static inline uint64_t
 nn_readq(volatile void *addr)
 {
+	uint32_t low;
+	uint32_t high;
 	const volatile uint32_t *p = addr;
-	uint32_t low, high;
 
 	high = nn_readl((volatile const void *)(p + 1));
 	low = nn_readl((volatile const void *)p);
diff --git a/drivers/net/nfp/nfp_cpp_bridge.c b/drivers/net/nfp/nfp_cpp_bridge.c
index 85a8bf9235..727ec7a7b2 100644
--- a/drivers/net/nfp/nfp_cpp_bridge.c
+++ b/drivers/net/nfp/nfp_cpp_bridge.c
@@ -119,12 +119,16 @@ static int
 nfp_cpp_bridge_serve_write(int sockfd,
 		struct nfp_cpp *cpp)
 {
-	struct nfp_cpp_area *area;
-	off_t offset, nfp_offset;
-	uint32_t cpp_id, pos, len;
+	int err;
+	off_t offset;
+	uint32_t pos;
+	uint32_t len;
+	size_t count;
+	size_t curlen;
+	uint32_t cpp_id;
+	off_t nfp_offset;
 	uint32_t tmpbuf[16];
-	size_t count, curlen;
-	int err = 0;
+	struct nfp_cpp_area *area;
 
 	PMD_CPP_LOG(DEBUG, "%s: offset size %zu, count_size: %zu\n", __func__,
 			sizeof(off_t), sizeof(size_t));
@@ -220,12 +224,16 @@ static int
 nfp_cpp_bridge_serve_read(int sockfd,
 		struct nfp_cpp *cpp)
 {
-	struct nfp_cpp_area *area;
-	off_t offset, nfp_offset;
-	uint32_t cpp_id, pos, len;
+	int err;
+	off_t offset;
+	uint32_t pos;
+	uint32_t len;
+	size_t count;
+	size_t curlen;
+	uint32_t cpp_id;
+	off_t nfp_offset;
 	uint32_t tmpbuf[16];
-	size_t count, curlen;
-	int err = 0;
+	struct nfp_cpp_area *area;
 
 	PMD_CPP_LOG(DEBUG, "%s: offset size %zu, count_size: %zu\n", __func__,
 			sizeof(off_t), sizeof(size_t));
@@ -319,8 +327,10 @@ static int
 nfp_cpp_bridge_serve_ioctl(int sockfd,
 		struct nfp_cpp *cpp)
 {
-	uint32_t cmd, ident_size, tmp;
 	int err;
+	uint32_t cmd;
+	uint32_t tmp;
+	uint32_t ident_size;
 
 	/* Reading now the IOCTL command */
 	err = recv(sockfd, &cmd, 4, 0);
@@ -375,10 +385,13 @@ nfp_cpp_bridge_serve_ioctl(int sockfd,
 static int
 nfp_cpp_bridge_service_func(void *args)
 {
-	struct sockaddr address;
+	int op;
+	int ret;
+	int sockfd;
+	int datafd;
 	struct nfp_cpp *cpp;
+	struct sockaddr address;
 	struct nfp_pf_dev *pf_dev;
-	int sockfd, datafd, op, ret;
 	struct timeval timeout = {1, 0};
 
 	unlink("/tmp/nfp_cpp");
diff --git a/drivers/net/nfp/nfp_ethdev.c b/drivers/net/nfp/nfp_ethdev.c
index 140d20dcf7..7d149decfb 100644
--- a/drivers/net/nfp/nfp_ethdev.c
+++ b/drivers/net/nfp/nfp_ethdev.c
@@ -25,8 +25,8 @@ static int
 nfp_net_pf_read_mac(struct nfp_app_fw_nic *app_fw_nic,
 		uint16_t port)
 {
+	struct nfp_net_hw *hw;
 	struct nfp_eth_table *nfp_eth_table;
-	struct nfp_net_hw *hw = NULL;
 
 	/* Grab a pointer to the correct physical port */
 	hw = app_fw_nic->ports[port];
@@ -42,18 +42,19 @@ nfp_net_pf_read_mac(struct nfp_app_fw_nic *app_fw_nic,
 static int
 nfp_net_start(struct rte_eth_dev *dev)
 {
-	struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
-	struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
-	uint32_t new_ctrl, update = 0;
+	int ret;
+	uint32_t new_ctrl;
+	uint32_t update = 0;
 	uint32_t cap_extend;
-	uint32_t ctrl_extend = 0;
+	uint32_t intr_vector;
 	struct nfp_net_hw *hw;
+	uint32_t ctrl_extend = 0;
 	struct nfp_pf_dev *pf_dev;
-	struct nfp_app_fw_nic *app_fw_nic;
 	struct rte_eth_conf *dev_conf;
 	struct rte_eth_rxmode *rxmode;
-	uint32_t intr_vector;
-	int ret;
+	struct nfp_app_fw_nic *app_fw_nic;
+	struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
+	struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
 
 	hw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private);
 	pf_dev = NFP_NET_DEV_PRIVATE_TO_PF(dev->data->dev_private);
@@ -251,11 +252,11 @@ nfp_net_set_link_down(struct rte_eth_dev *dev)
 static int
 nfp_net_close(struct rte_eth_dev *dev)
 {
+	uint8_t i;
 	struct nfp_net_hw *hw;
-	struct rte_pci_device *pci_dev;
 	struct nfp_pf_dev *pf_dev;
+	struct rte_pci_device *pci_dev;
 	struct nfp_app_fw_nic *app_fw_nic;
-	uint8_t i;
 
 	if (rte_eal_process_type() != RTE_PROC_PRIMARY)
 		return 0;
@@ -480,15 +481,15 @@ nfp_net_ethdev_ops_mount(struct nfp_net_hw *hw,
 static int
 nfp_net_init(struct rte_eth_dev *eth_dev)
 {
-	struct rte_pci_device *pci_dev;
+	int err;
+	uint16_t port;
+	uint64_t rx_base;
+	uint64_t tx_base;
+	struct nfp_net_hw *hw;
 	struct nfp_pf_dev *pf_dev;
+	struct rte_pci_device *pci_dev;
 	struct nfp_app_fw_nic *app_fw_nic;
-	struct nfp_net_hw *hw;
 	struct rte_ether_addr *tmp_ether_addr;
-	uint64_t rx_base;
-	uint64_t tx_base;
-	uint16_t port = 0;
-	int err;
 
 	PMD_INIT_FUNC_TRACE();
 
@@ -650,14 +651,14 @@ nfp_fw_upload(struct rte_pci_device *dev,
 		struct nfp_nsp *nsp,
 		char *card)
 {
-	struct nfp_cpp *cpp = nfp_nsp_cpp(nsp);
 	void *fw_buf;
-	char fw_name[125];
-	char serial[40];
 	size_t fsize;
+	char serial[40];
+	char fw_name[125];
 	uint16_t interface;
 	uint32_t cpp_serial_len;
 	const uint8_t *cpp_serial;
+	struct nfp_cpp *cpp = nfp_nsp_cpp(nsp);
 
 	cpp_serial_len = nfp_cpp_serial(cpp, &cpp_serial);
 	if (cpp_serial_len != NFP_SERIAL_LEN)
@@ -713,10 +714,10 @@ nfp_fw_setup(struct rte_pci_device *dev,
 		struct nfp_eth_table *nfp_eth_table,
 		struct nfp_hwinfo *hwinfo)
 {
+	int err;
+	char card_desc[100];
 	struct nfp_nsp *nsp;
 	const char *nfp_fw_model;
-	char card_desc[100];
-	int err = 0;
 
 	nfp_fw_model = nfp_hwinfo_lookup(hwinfo, "nffw.partno");
 	if (nfp_fw_model == NULL)
@@ -897,9 +898,9 @@ nfp_pf_init(struct rte_pci_device *pci_dev)
 	uint64_t addr;
 	uint32_t cpp_id;
 	struct nfp_cpp *cpp;
-	enum nfp_app_fw_id app_fw_id;
 	struct nfp_pf_dev *pf_dev;
 	struct nfp_hwinfo *hwinfo;
+	enum nfp_app_fw_id app_fw_id;
 	char name[RTE_ETH_NAME_MAX_LEN];
 	struct nfp_rtsym_table *sym_tbl;
 	struct nfp_eth_table *nfp_eth_table;
@@ -1220,8 +1221,8 @@ static const struct rte_pci_id pci_id_nfp_pf_net_map[] = {
 static int
 nfp_pci_uninit(struct rte_eth_dev *eth_dev)
 {
-	struct rte_pci_device *pci_dev;
 	uint16_t port_id;
+	struct rte_pci_device *pci_dev;
 
 	pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
 
diff --git a/drivers/net/nfp/nfp_ethdev_vf.c b/drivers/net/nfp/nfp_ethdev_vf.c
index 892300a909..aaef6ea91a 100644
--- a/drivers/net/nfp/nfp_ethdev_vf.c
+++ b/drivers/net/nfp/nfp_ethdev_vf.c
@@ -29,14 +29,15 @@ nfp_netvf_read_mac(struct nfp_net_hw *hw)
 static int
 nfp_netvf_start(struct rte_eth_dev *dev)
 {
-	struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
-	struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
-	uint32_t new_ctrl, update = 0;
+	int ret;
+	uint32_t new_ctrl;
+	uint32_t update = 0;
+	uint32_t intr_vector;
 	struct nfp_net_hw *hw;
 	struct rte_eth_conf *dev_conf;
 	struct rte_eth_rxmode *rxmode;
-	uint32_t intr_vector;
-	int ret;
+	struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
+	struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
 
 	hw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private);
 
@@ -254,15 +255,15 @@ nfp_netvf_ethdev_ops_mount(struct nfp_net_hw *hw,
 static int
 nfp_netvf_init(struct rte_eth_dev *eth_dev)
 {
-	struct rte_pci_device *pci_dev;
-	struct nfp_net_hw *hw;
-	struct rte_ether_addr *tmp_ether_addr;
-
-	uint64_t tx_bar_off = 0, rx_bar_off = 0;
+	int err;
 	uint32_t start_q;
 	uint16_t port = 0;
-	int err;
+	struct nfp_net_hw *hw;
+	uint64_t tx_bar_off = 0;
+	uint64_t rx_bar_off = 0;
+	struct rte_pci_device *pci_dev;
 	const struct nfp_dev_info *dev_info;
+	struct rte_ether_addr *tmp_ether_addr;
 
 	PMD_INIT_FUNC_TRACE();
 
diff --git a/drivers/net/nfp/nfp_flow.c b/drivers/net/nfp/nfp_flow.c
index 4c9904e36c..84b48daf85 100644
--- a/drivers/net/nfp/nfp_flow.c
+++ b/drivers/net/nfp/nfp_flow.c
@@ -761,9 +761,9 @@ nfp_flow_compile_metadata(struct nfp_flow_priv *priv,
 		uint32_t stats_ctx,
 		uint64_t cookie)
 {
-	struct nfp_fl_rule_metadata *nfp_flow_meta;
-	char *mbuf_off_exact;
 	char *mbuf_off_mask;
+	char *mbuf_off_exact;
+	struct nfp_fl_rule_metadata *nfp_flow_meta;
 
 	/*
 	 * Convert to long words as firmware expects
@@ -974,9 +974,9 @@ nfp_flow_key_layers_calculate_actions(const struct rte_flow_action actions[],
 	int ret = 0;
 	bool meter_flag = false;
 	bool tc_hl_flag = false;
-	bool mac_set_flag = false;
 	bool ip_set_flag = false;
 	bool tp_set_flag = false;
+	bool mac_set_flag = false;
 	bool ttl_tos_flag = false;
 	const struct rte_flow_action *action;
 
@@ -3201,11 +3201,11 @@ nfp_flow_action_geneve_encap_v4(struct nfp_app_fw_flower *app_fw_flower,
 {
 	uint64_t tun_id;
 	const struct rte_ether_hdr *eth;
+	struct nfp_fl_act_pre_tun *pre_tun;
+	struct nfp_fl_act_set_tun *set_tun;
 	const struct rte_flow_item_udp *udp;
 	const struct rte_flow_item_ipv4 *ipv4;
 	const struct rte_flow_item_geneve *geneve;
-	struct nfp_fl_act_pre_tun *pre_tun;
-	struct nfp_fl_act_set_tun *set_tun;
 	size_t act_pre_size = sizeof(struct nfp_fl_act_pre_tun);
 	size_t act_set_size = sizeof(struct nfp_fl_act_set_tun);
 
@@ -3241,11 +3241,11 @@ nfp_flow_action_geneve_encap_v6(struct nfp_app_fw_flower *app_fw_flower,
 	uint8_t tos;
 	uint64_t tun_id;
 	const struct rte_ether_hdr *eth;
+	struct nfp_fl_act_pre_tun *pre_tun;
+	struct nfp_fl_act_set_tun *set_tun;
 	const struct rte_flow_item_udp *udp;
 	const struct rte_flow_item_ipv6 *ipv6;
 	const struct rte_flow_item_geneve *geneve;
-	struct nfp_fl_act_pre_tun *pre_tun;
-	struct nfp_fl_act_set_tun *set_tun;
 	size_t act_pre_size = sizeof(struct nfp_fl_act_pre_tun);
 	size_t act_set_size = sizeof(struct nfp_fl_act_set_tun);
 
@@ -3281,10 +3281,10 @@ nfp_flow_action_nvgre_encap_v4(struct nfp_app_fw_flower *app_fw_flower,
 {
 	uint64_t tun_id;
 	const struct rte_ether_hdr *eth;
-	const struct rte_flow_item_ipv4 *ipv4;
-	const struct rte_flow_item_gre *gre;
 	struct nfp_fl_act_pre_tun *pre_tun;
 	struct nfp_fl_act_set_tun *set_tun;
+	const struct rte_flow_item_gre *gre;
+	const struct rte_flow_item_ipv4 *ipv4;
 	size_t act_pre_size = sizeof(struct nfp_fl_act_pre_tun);
 	size_t act_set_size = sizeof(struct nfp_fl_act_set_tun);
 
@@ -3319,10 +3319,10 @@ nfp_flow_action_nvgre_encap_v6(struct nfp_app_fw_flower *app_fw_flower,
 	uint8_t tos;
 	uint64_t tun_id;
 	const struct rte_ether_hdr *eth;
-	const struct rte_flow_item_ipv6 *ipv6;
-	const struct rte_flow_item_gre *gre;
 	struct nfp_fl_act_pre_tun *pre_tun;
 	struct nfp_fl_act_set_tun *set_tun;
+	const struct rte_flow_item_gre *gre;
+	const struct rte_flow_item_ipv6 *ipv6;
 	size_t act_pre_size = sizeof(struct nfp_fl_act_pre_tun);
 	size_t act_set_size = sizeof(struct nfp_fl_act_set_tun);
 
@@ -3467,12 +3467,12 @@ nfp_flow_compile_action(struct nfp_flower_representor *representor,
 	uint32_t count;
 	char *position;
 	char *action_data;
-	bool ttl_tos_flag = false;
-	bool tc_hl_flag = false;
 	bool drop_flag = false;
+	bool tc_hl_flag = false;
 	bool ip_set_flag = false;
 	bool tp_set_flag = false;
 	bool mac_set_flag = false;
+	bool ttl_tos_flag = false;
 	uint32_t total_actions = 0;
 	const struct rte_flow_action *action;
 	struct nfp_flower_meta_tci *meta_tci;
@@ -4283,10 +4283,10 @@ nfp_flow_priv_init(struct nfp_pf_dev *pf_dev)
 	size_t stats_size;
 	uint64_t ctx_count;
 	uint64_t ctx_split;
+	struct nfp_flow_priv *priv;
 	char mask_name[RTE_HASH_NAMESIZE];
 	char flow_name[RTE_HASH_NAMESIZE];
 	char pretun_name[RTE_HASH_NAMESIZE];
-	struct nfp_flow_priv *priv;
 	struct nfp_app_fw_flower *app_fw_flower;
 	const char *pci_name = strchr(pf_dev->pci_dev->name, ':') + 1;
 
diff --git a/drivers/net/nfp/nfp_rxtx.c b/drivers/net/nfp/nfp_rxtx.c
index 8cbb9b74a2..db6122eac3 100644
--- a/drivers/net/nfp/nfp_rxtx.c
+++ b/drivers/net/nfp/nfp_rxtx.c
@@ -188,9 +188,9 @@ nfp_net_rx_cksum(struct nfp_net_rxq *rxq,
 static int
 nfp_net_rx_fill_freelist(struct nfp_net_rxq *rxq)
 {
-	struct nfp_net_dp_buf *rxe = rxq->rxbufs;
-	uint64_t dma_addr;
 	uint16_t i;
+	uint64_t dma_addr;
+	struct nfp_net_dp_buf *rxe = rxq->rxbufs;
 
 	PMD_RX_LOG(DEBUG, "Fill Rx Freelist for %u descriptors",
 			rxq->rx_count);
@@ -241,17 +241,15 @@ nfp_net_rx_freelist_setup(struct rte_eth_dev *dev)
 uint32_t
 nfp_net_rx_queue_count(void *rx_queue)
 {
+	uint32_t idx;
+	uint32_t count = 0;
 	struct nfp_net_rxq *rxq;
 	struct nfp_net_rx_desc *rxds;
-	uint32_t idx;
-	uint32_t count;
 
 	rxq = rx_queue;
 
 	idx = rxq->rd_p;
 
-	count = 0;
-
 	/*
 	 * Other PMDs are just checking the DD bit in intervals of 4
 	 * descriptors and counting all four if the first has the DD
@@ -282,9 +280,9 @@ nfp_net_parse_chained_meta(uint8_t *meta_base,
 		rte_be32_t meta_header,
 		struct nfp_meta_parsed *meta)
 {
-	uint8_t *meta_offset;
 	uint32_t meta_info;
 	uint32_t vlan_info;
+	uint8_t *meta_offset;
 
 	meta_info = rte_be_to_cpu_32(meta_header);
 	meta_offset = meta_base + 4;
@@ -683,15 +681,15 @@ nfp_net_recv_pkts(void *rx_queue,
 		struct rte_mbuf **rx_pkts,
 		uint16_t nb_pkts)
 {
-	struct nfp_net_rxq *rxq;
-	struct nfp_net_rx_desc *rxds;
-	struct nfp_net_dp_buf *rxb;
-	struct nfp_net_hw *hw;
+	uint64_t dma_addr;
+	uint16_t avail = 0;
 	struct rte_mbuf *mb;
+	uint16_t nb_hold = 0;
+	struct nfp_net_hw *hw;
 	struct rte_mbuf *new_mb;
-	uint16_t nb_hold;
-	uint64_t dma_addr;
-	uint16_t avail;
+	struct nfp_net_rxq *rxq;
+	struct nfp_net_dp_buf *rxb;
+	struct nfp_net_rx_desc *rxds;
 	uint16_t avail_multiplexed = 0;
 
 	rxq = rx_queue;
@@ -706,8 +704,6 @@ nfp_net_recv_pkts(void *rx_queue,
 
 	hw = rxq->hw;
 
-	avail = 0;
-	nb_hold = 0;
 	while (avail + avail_multiplexed < nb_pkts) {
 		rxb = &rxq->rxbufs[rxq->rd_p];
 		if (unlikely(rxb == NULL)) {
@@ -883,12 +879,12 @@ nfp_net_rx_queue_setup(struct rte_eth_dev *dev,
 		const struct rte_eth_rxconf *rx_conf,
 		struct rte_mempool *mp)
 {
+	uint32_t rx_desc_sz;
 	uint16_t min_rx_desc;
 	uint16_t max_rx_desc;
-	const struct rte_memzone *tz;
-	struct nfp_net_rxq *rxq;
 	struct nfp_net_hw *hw;
-	uint32_t rx_desc_sz;
+	struct nfp_net_rxq *rxq;
+	const struct rte_memzone *tz;
 
 	hw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private);
 
@@ -995,8 +991,8 @@ nfp_net_rx_queue_setup(struct rte_eth_dev *dev,
 uint32_t
 nfp_net_tx_free_bufs(struct nfp_net_txq *txq)
 {
-	uint32_t qcp_rd_p;
 	uint32_t todo;
+	uint32_t qcp_rd_p;
 
 	PMD_TX_LOG(DEBUG, "queue %hu. Check for descriptor with a complete"
 			" status", txq->qidx);
@@ -1072,8 +1068,8 @@ nfp_net_set_meta_vlan(struct nfp_net_meta_raw *meta_data,
 		struct rte_mbuf *pkt,
 		uint8_t layer)
 {
-	uint16_t vlan_tci;
 	uint16_t tpid;
+	uint16_t vlan_tci;
 
 	tpid = RTE_ETHER_TYPE_VLAN;
 	vlan_tci = pkt->vlan_tci;
-- 
2.39.1


^ permalink raw reply	[flat|nested] 40+ messages in thread

* [PATCH v2 05/11] net/nfp: adjust the log statement
  2023-10-12  1:26 ` [PATCH v2 00/11] Unify the PMD coding style Chaoyong He
                     ` (3 preceding siblings ...)
  2023-10-12  1:26   ` [PATCH v2 04/11] net/nfp: standard the local variable coding style Chaoyong He
@ 2023-10-12  1:26   ` Chaoyong He
  2023-10-12  1:38     ` Stephen Hemminger
  2023-10-12  1:26   ` [PATCH v2 06/11] net/nfp: standard the comment style Chaoyong He
                     ` (6 subsequent siblings)
  11 siblings, 1 reply; 40+ messages in thread
From: Chaoyong He @ 2023-10-12  1:26 UTC (permalink / raw)
  To: dev; +Cc: oss-drivers, Chaoyong He, Long Wu, Peng Zhang

Add log statements to the important control logic, and remove verbose
info log statements.
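
For example (an illustrative sketch mirroring the nfp_net_set_mac_addr()
hunk below, not new code), a failure on the control path is now reported
at ERR level through PMD_DRV_LOG() instead of an INIT-time INFO message:

	if (nfp_net_reconfig(hw, ctrl, update) != 0) {
		PMD_DRV_LOG(ERR, "MAC address update failed");
		return -EIO;
	}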

Signed-off-by: Chaoyong He <chaoyong.he@corigine.com>
Reviewed-by: Long Wu <long.wu@corigine.com>
Reviewed-by: Peng Zhang <peng.zhang@corigine.com>
---
 drivers/net/nfp/flower/nfp_flower_ctrl.c      | 17 +++---
 .../net/nfp/flower/nfp_flower_representor.c   |  4 +-
 drivers/net/nfp/nfd3/nfp_nfd3_dp.c            |  2 -
 drivers/net/nfp/nfdk/nfp_nfdk_dp.c            |  2 -
 drivers/net/nfp/nfp_common.c                  | 59 ++++++++-----------
 drivers/net/nfp/nfp_cpp_bridge.c              | 28 ++++-----
 drivers/net/nfp/nfp_ethdev.c                  | 21 +------
 drivers/net/nfp/nfp_ethdev_vf.c               | 17 +-----
 drivers/net/nfp/nfp_logs.h                    |  1 -
 drivers/net/nfp/nfp_rxtx.c                    | 17 ++----
 10 files changed, 58 insertions(+), 110 deletions(-)

diff --git a/drivers/net/nfp/flower/nfp_flower_ctrl.c b/drivers/net/nfp/flower/nfp_flower_ctrl.c
index 4967cc2375..1f4c5fd7f9 100644
--- a/drivers/net/nfp/flower/nfp_flower_ctrl.c
+++ b/drivers/net/nfp/flower/nfp_flower_ctrl.c
@@ -88,15 +88,14 @@ nfp_flower_ctrl_vnic_recv(void *rx_queue,
 			 * responsibility of avoiding it. But we have
 			 * to give some info about the error
 			 */
-			PMD_RX_LOG(ERR,
-				"mbuf overflow likely due to the RX offset.\n"
-				"\t\tYour mbuf size should have extra space for"
-				" RX offset=%u bytes.\n"
-				"\t\tCurrently you just have %u bytes available"
-				" but the received packet is %u bytes long",
-				hw->rx_offset,
-				rxq->mbuf_size - hw->rx_offset,
-				mb->data_len);
+			PMD_RX_LOG(ERR, "mbuf overflow likely due to the RX offset.\n"
+					"\t\tYour mbuf size should have extra space for"
+					" RX offset=%u bytes.\n"
+					"\t\tCurrently you just have %u bytes available"
+					" but the received packet is %u bytes long",
+					hw->rx_offset,
+					rxq->mbuf_size - hw->rx_offset,
+					mb->data_len);
 			rte_pktmbuf_free(mb);
 			break;
 		}
diff --git a/drivers/net/nfp/flower/nfp_flower_representor.c b/drivers/net/nfp/flower/nfp_flower_representor.c
index 01c2c5a517..be0dfb2890 100644
--- a/drivers/net/nfp/flower/nfp_flower_representor.c
+++ b/drivers/net/nfp/flower/nfp_flower_representor.c
@@ -464,7 +464,7 @@ nfp_flower_repr_rx_burst(void *rx_queue,
 	total_dequeue = rte_ring_dequeue_burst(repr->ring, (void *)rx_pkts,
 			nb_pkts, &available);
 	if (total_dequeue != 0) {
-		PMD_RX_LOG(DEBUG, "Representor Rx burst for %s, port_id: 0x%x, "
+		PMD_RX_LOG(DEBUG, "Representor Rx burst for %s, port_id: %#x, "
 				"received: %u, available: %u", repr->name,
 				repr->port_id, total_dequeue, available);
 
@@ -510,7 +510,7 @@ nfp_flower_repr_tx_burst(void *tx_queue,
 	pf_tx_queue = dev->data->tx_queues[0];
 	sent = nfp_flower_pf_xmit_pkts(pf_tx_queue, tx_pkts, nb_pkts);
 	if (sent != 0) {
-		PMD_TX_LOG(DEBUG, "Representor Tx burst for %s, port_id: 0x%x transmitted: %u",
+		PMD_TX_LOG(DEBUG, "Representor Tx burst for %s, port_id: %#x transmitted: %hu",
 				repr->name, repr->port_id, sent);
 		repr->repr_stats.opackets += sent;
 	}
diff --git a/drivers/net/nfp/nfd3/nfp_nfd3_dp.c b/drivers/net/nfp/nfd3/nfp_nfd3_dp.c
index 699f65ebef..51755f4324 100644
--- a/drivers/net/nfp/nfd3/nfp_nfd3_dp.c
+++ b/drivers/net/nfp/nfd3/nfp_nfd3_dp.c
@@ -381,8 +381,6 @@ nfp_net_nfd3_tx_queue_setup(struct rte_eth_dev *dev,
 
 	hw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private);
 
-	PMD_INIT_FUNC_TRACE();
-
 	nfp_net_tx_desc_limits(hw, &min_tx_desc, &max_tx_desc);
 
 	/* Validating number of descriptors */
diff --git a/drivers/net/nfp/nfdk/nfp_nfdk_dp.c b/drivers/net/nfp/nfdk/nfp_nfdk_dp.c
index 2426ffb261..dae87ac6df 100644
--- a/drivers/net/nfp/nfdk/nfp_nfdk_dp.c
+++ b/drivers/net/nfp/nfdk/nfp_nfdk_dp.c
@@ -455,8 +455,6 @@ nfp_net_nfdk_tx_queue_setup(struct rte_eth_dev *dev,
 
 	hw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private);
 
-	PMD_INIT_FUNC_TRACE();
-
 	nfp_net_tx_desc_limits(hw, &min_tx_desc, &max_tx_desc);
 
 	/* Validating number of descriptors */
diff --git a/drivers/net/nfp/nfp_common.c b/drivers/net/nfp/nfp_common.c
index 18291a1cde..f48e1930dc 100644
--- a/drivers/net/nfp/nfp_common.c
+++ b/drivers/net/nfp/nfp_common.c
@@ -207,7 +207,7 @@ __nfp_net_reconfig(struct nfp_net_hw *hw,
 			hw->qcp_cfg);
 
 	if (hw->qcp_cfg == NULL) {
-		PMD_INIT_LOG(ERR, "Bad configuration queue pointer");
+		PMD_DRV_LOG(ERR, "Bad configuration queue pointer");
 		return -ENXIO;
 	}
 
@@ -224,15 +224,15 @@ __nfp_net_reconfig(struct nfp_net_hw *hw,
 		if (new == 0)
 			break;
 		if ((new & NFP_NET_CFG_UPDATE_ERR) != 0) {
-			PMD_INIT_LOG(ERR, "Reconfig error: 0x%08x", new);
+			PMD_DRV_LOG(ERR, "Reconfig error: %#08x", new);
 			return -1;
 		}
 		if (cnt >= NFP_NET_POLL_TIMEOUT) {
-			PMD_INIT_LOG(ERR, "Reconfig timeout for 0x%08x after"
-					" %ums", update, cnt);
+			PMD_DRV_LOG(ERR, "Reconfig timeout for %#08x after %u ms",
+					update, cnt);
 			return -EIO;
 		}
-		nanosleep(&wait, 0); /* waiting for a 1ms */
+		nanosleep(&wait, 0); /* Waiting for 1ms */
 	}
 	PMD_DRV_LOG(DEBUG, "Ack DONE");
 	return 0;
@@ -390,8 +390,6 @@ nfp_net_configure(struct rte_eth_dev *dev)
 	 * called after that internal process
 	 */
 
-	PMD_INIT_LOG(DEBUG, "Configure");
-
 	dev_conf = &dev->data->dev_conf;
 	rxmode = &dev_conf->rxmode;
 	txmode = &dev_conf->txmode;
@@ -401,20 +399,20 @@ nfp_net_configure(struct rte_eth_dev *dev)
 
 	/* Checking TX mode */
 	if (txmode->mq_mode != RTE_ETH_MQ_TX_NONE) {
-		PMD_INIT_LOG(INFO, "TX mq_mode DCB and VMDq not supported");
+		PMD_DRV_LOG(ERR, "TX mq_mode DCB and VMDq not supported");
 		return -EINVAL;
 	}
 
 	/* Checking RX mode */
 	if ((rxmode->mq_mode & RTE_ETH_MQ_RX_RSS_FLAG) != 0 &&
 			(hw->cap & NFP_NET_CFG_CTRL_RSS_ANY) == 0) {
-		PMD_INIT_LOG(INFO, "RSS not supported");
+		PMD_DRV_LOG(ERR, "RSS not supported");
 		return -EINVAL;
 	}
 
 	/* Checking MTU set */
 	if (rxmode->mtu > NFP_FRAME_SIZE_MAX) {
-		PMD_INIT_LOG(ERR, "MTU (%u) larger than NFP_FRAME_SIZE_MAX (%u) not supported",
+		PMD_DRV_LOG(ERR, "MTU (%u) larger than NFP_FRAME_SIZE_MAX (%u)",
 				rxmode->mtu, NFP_FRAME_SIZE_MAX);
 		return -ERANGE;
 	}
@@ -552,8 +550,7 @@ nfp_net_set_mac_addr(struct rte_eth_dev *dev,
 	hw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private);
 	if ((hw->ctrl & NFP_NET_CFG_CTRL_ENABLE) != 0 &&
 			(hw->cap & NFP_NET_CFG_CTRL_LIVE_ADDR) == 0) {
-		PMD_INIT_LOG(INFO, "MAC address unable to change when"
-				" port enabled");
+		PMD_DRV_LOG(ERR, "MAC address unable to change when port enabled");
 		return -EBUSY;
 	}
 
@@ -567,7 +564,7 @@ nfp_net_set_mac_addr(struct rte_eth_dev *dev,
 			(hw->cap & NFP_NET_CFG_CTRL_LIVE_ADDR) != 0)
 		ctrl |= NFP_NET_CFG_CTRL_LIVE_ADDR;
 	if (nfp_net_reconfig(hw, ctrl, update) != 0) {
-		PMD_INIT_LOG(INFO, "MAC address update failed");
+		PMD_DRV_LOG(ERR, "MAC address update failed");
 		return -EIO;
 	}
 	return 0;
@@ -582,21 +579,21 @@ nfp_configure_rx_interrupt(struct rte_eth_dev *dev,
 
 	if (rte_intr_vec_list_alloc(intr_handle, "intr_vec",
 				dev->data->nb_rx_queues) != 0) {
-		PMD_INIT_LOG(ERR, "Failed to allocate %d rx_queues"
-				" intr_vec", dev->data->nb_rx_queues);
+		PMD_DRV_LOG(ERR, "Failed to allocate %d rx_queues intr_vec",
+				dev->data->nb_rx_queues);
 		return -ENOMEM;
 	}
 
 	hw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private);
 
 	if (rte_intr_type_get(intr_handle) == RTE_INTR_HANDLE_UIO) {
-		PMD_INIT_LOG(INFO, "VF: enabling RX interrupt with UIO");
+		PMD_DRV_LOG(INFO, "VF: enabling RX interrupt with UIO");
 		/* UIO just supports one queue and no LSC*/
 		nn_cfg_writeb(hw, NFP_NET_CFG_RXR_VEC(0), 0);
 		if (rte_intr_vec_list_index_set(intr_handle, 0, 0) != 0)
 			return -1;
 	} else {
-		PMD_INIT_LOG(INFO, "VF: enabling RX interrupt with VFIO");
+		PMD_DRV_LOG(INFO, "VF: enabling RX interrupt with VFIO");
 		for (i = 0; i < dev->data->nb_rx_queues; i++) {
 			/*
 			 * The first msix vector is reserved for non
@@ -605,8 +602,6 @@ nfp_configure_rx_interrupt(struct rte_eth_dev *dev,
 			nn_cfg_writeb(hw, NFP_NET_CFG_RXR_VEC(i), i + 1);
 			if (rte_intr_vec_list_index_set(intr_handle, i, i + 1) != 0)
 				return -1;
-			PMD_INIT_LOG(DEBUG, "intr_vec[%d]= %d", i,
-					rte_intr_vec_list_index_get(intr_handle, i));
 		}
 	}
 
@@ -691,8 +686,6 @@ nfp_net_promisc_enable(struct rte_eth_dev *dev)
 	struct nfp_net_hw *hw;
 	struct nfp_flower_representor *repr;
 
-	PMD_DRV_LOG(DEBUG, "Promiscuous mode enable");
-
 	if ((dev->data->dev_flags & RTE_ETH_DEV_REPRESENTOR) != 0) {
 		repr = dev->data->dev_private;
 		hw = repr->app_fw_flower->pf_hw;
@@ -701,7 +694,7 @@ nfp_net_promisc_enable(struct rte_eth_dev *dev)
 	}
 
 	if ((hw->cap & NFP_NET_CFG_CTRL_PROMISC) == 0) {
-		PMD_INIT_LOG(INFO, "Promiscuous mode not supported");
+		PMD_DRV_LOG(ERR, "Promiscuous mode not supported");
 		return -ENOTSUP;
 	}
 
@@ -774,9 +767,6 @@ nfp_net_link_update(struct rte_eth_dev *dev,
 	struct rte_eth_link link;
 	struct nfp_eth_table *nfp_eth_table;
 
-
-	PMD_DRV_LOG(DEBUG, "Link update");
-
 	hw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private);
 
 	/* Read link status */
@@ -1636,9 +1626,9 @@ nfp_net_rss_reta_write(struct rte_eth_dev *dev,
 	hw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private);
 
 	if (reta_size != NFP_NET_CFG_RSS_ITBL_SZ) {
-		PMD_DRV_LOG(ERR, "The size of hash lookup table configured "
-				"(%d) doesn't match the number hardware can supported "
-				"(%d)", reta_size, NFP_NET_CFG_RSS_ITBL_SZ);
+		PMD_DRV_LOG(ERR, "The size of hash lookup table configured (%hu)"
+				" doesn't match hardware can supported (%d)",
+				reta_size, NFP_NET_CFG_RSS_ITBL_SZ);
 		return -EINVAL;
 	}
 
@@ -1719,9 +1709,9 @@ nfp_net_reta_query(struct rte_eth_dev *dev,
 		return -EINVAL;
 
 	if (reta_size != NFP_NET_CFG_RSS_ITBL_SZ) {
-		PMD_DRV_LOG(ERR, "The size of hash lookup table configured "
-				"(%d) doesn't match the number hardware can supported "
-				"(%d)", reta_size, NFP_NET_CFG_RSS_ITBL_SZ);
+		PMD_DRV_LOG(ERR, "The size of hash lookup table configured (%d)"
+				" doesn't match hardware can supported (%d)",
+				reta_size, NFP_NET_CFG_RSS_ITBL_SZ);
 		return -EINVAL;
 	}
 
@@ -1827,7 +1817,7 @@ nfp_net_rss_hash_update(struct rte_eth_dev *dev,
 	}
 
 	if (rss_conf->rss_key_len > NFP_NET_CFG_RSS_KEY_SZ) {
-		PMD_DRV_LOG(ERR, "hash key too long");
+		PMD_DRV_LOG(ERR, "RSS hash key too long");
 		return -EINVAL;
 	}
 
@@ -1910,9 +1900,6 @@ nfp_net_rss_config_default(struct rte_eth_dev *dev)
 	uint16_t rx_queues = dev->data->nb_rx_queues;
 	struct rte_eth_rss_reta_entry64 nfp_reta_conf[2];
 
-	PMD_DRV_LOG(INFO, "setting default RSS conf for %u queues",
-			rx_queues);
-
 	nfp_reta_conf[0].mask = ~0x0;
 	nfp_reta_conf[1].mask = ~0x0;
 
@@ -1929,7 +1916,7 @@ nfp_net_rss_config_default(struct rte_eth_dev *dev)
 
 	dev_conf = &dev->data->dev_conf;
 	if (dev_conf == NULL) {
-		PMD_DRV_LOG(INFO, "wrong rss conf");
+		PMD_DRV_LOG(ERR, "Wrong rss conf");
 		return -EINVAL;
 	}
 	rss_conf = dev_conf->rx_adv_conf.rss_conf;
diff --git a/drivers/net/nfp/nfp_cpp_bridge.c b/drivers/net/nfp/nfp_cpp_bridge.c
index 727ec7a7b2..222cfdcbc3 100644
--- a/drivers/net/nfp/nfp_cpp_bridge.c
+++ b/drivers/net/nfp/nfp_cpp_bridge.c
@@ -130,7 +130,7 @@ nfp_cpp_bridge_serve_write(int sockfd,
 	uint32_t tmpbuf[16];
 	struct nfp_cpp_area *area;
 
-	PMD_CPP_LOG(DEBUG, "%s: offset size %zu, count_size: %zu\n", __func__,
+	PMD_CPP_LOG(DEBUG, "%s: offset size %zu, count_size: %zu", __func__,
 			sizeof(off_t), sizeof(size_t));
 
 	/* Reading the count param */
@@ -149,9 +149,9 @@ nfp_cpp_bridge_serve_write(int sockfd,
 	cpp_id = (offset >> 40) << 8;
 	nfp_offset = offset & ((1ull << 40) - 1);
 
-	PMD_CPP_LOG(DEBUG, "%s: count %zu and offset %jd\n", __func__, count,
+	PMD_CPP_LOG(DEBUG, "%s: count %zu and offset %jd", __func__, count,
 			offset);
-	PMD_CPP_LOG(DEBUG, "%s: cpp_id %08x and nfp_offset %jd\n", __func__,
+	PMD_CPP_LOG(DEBUG, "%s: cpp_id %08x and nfp_offset %jd", __func__,
 			cpp_id, nfp_offset);
 
 	/* Adjust length if not aligned */
@@ -162,7 +162,7 @@ nfp_cpp_bridge_serve_write(int sockfd,
 	}
 
 	while (count > 0) {
-		/* configure a CPP PCIe2CPP BAR for mapping the CPP target */
+		/* Configure a CPP PCIe2CPP BAR for mapping the CPP target */
 		area = nfp_cpp_area_alloc_with_name(cpp, cpp_id, "nfp.cdev",
 				nfp_offset, curlen);
 		if (area == NULL) {
@@ -170,7 +170,7 @@ nfp_cpp_bridge_serve_write(int sockfd,
 			return -EIO;
 		}
 
-		/* mapping the target */
+		/* Mapping the target */
 		err = nfp_cpp_area_acquire(area);
 		if (err < 0) {
 			PMD_CPP_LOG(ERR, "area acquire failed");
@@ -183,7 +183,7 @@ nfp_cpp_bridge_serve_write(int sockfd,
 			if (len > sizeof(tmpbuf))
 				len = sizeof(tmpbuf);
 
-			PMD_CPP_LOG(DEBUG, "%s: Receive %u of %zu\n", __func__,
+			PMD_CPP_LOG(DEBUG, "%s: Receive %u of %zu", __func__,
 					len, count);
 			err = recv(sockfd, tmpbuf, len, MSG_WAITALL);
 			if (err != (int)len) {
@@ -235,7 +235,7 @@ nfp_cpp_bridge_serve_read(int sockfd,
 	uint32_t tmpbuf[16];
 	struct nfp_cpp_area *area;
 
-	PMD_CPP_LOG(DEBUG, "%s: offset size %zu, count_size: %zu\n", __func__,
+	PMD_CPP_LOG(DEBUG, "%s: offset size %zu, count_size: %zu", __func__,
 			sizeof(off_t), sizeof(size_t));
 
 	/* Reading the count param */
@@ -254,9 +254,9 @@ nfp_cpp_bridge_serve_read(int sockfd,
 	cpp_id = (offset >> 40) << 8;
 	nfp_offset = offset & ((1ull << 40) - 1);
 
-	PMD_CPP_LOG(DEBUG, "%s: count %zu and offset %jd\n", __func__, count,
+	PMD_CPP_LOG(DEBUG, "%s: count %zu and offset %jd", __func__, count,
 			offset);
-	PMD_CPP_LOG(DEBUG, "%s: cpp_id %08x and nfp_offset %jd\n", __func__,
+	PMD_CPP_LOG(DEBUG, "%s: cpp_id %08x and nfp_offset %jd", __func__,
 			cpp_id, nfp_offset);
 
 	/* Adjust length if not aligned */
@@ -293,7 +293,7 @@ nfp_cpp_bridge_serve_read(int sockfd,
 				nfp_cpp_area_free(area);
 				return -EIO;
 			}
-			PMD_CPP_LOG(DEBUG, "%s: sending %u of %zu\n", __func__,
+			PMD_CPP_LOG(DEBUG, "%s: sending %u of %zu", __func__,
 					len, count);
 
 			err = send(sockfd, tmpbuf, len, 0);
@@ -353,7 +353,7 @@ nfp_cpp_bridge_serve_ioctl(int sockfd,
 
 	tmp = nfp_cpp_model(cpp);
 
-	PMD_CPP_LOG(DEBUG, "%s: sending NFP model %08x\n", __func__, tmp);
+	PMD_CPP_LOG(DEBUG, "%s: sending NFP model %08x", __func__, tmp);
 
 	err = send(sockfd, &tmp, 4, 0);
 	if (err != 4) {
@@ -363,7 +363,7 @@ nfp_cpp_bridge_serve_ioctl(int sockfd,
 
 	tmp = nfp_cpp_interface(cpp);
 
-	PMD_CPP_LOG(DEBUG, "%s: sending NFP interface %08x\n", __func__, tmp);
+	PMD_CPP_LOG(DEBUG, "%s: sending NFP interface %08x", __func__, tmp);
 
 	err = send(sockfd, &tmp, 4, 0);
 	if (err != 4) {
@@ -440,11 +440,11 @@ nfp_cpp_bridge_service_func(void *args)
 		while (1) {
 			ret = recv(datafd, &op, 4, 0);
 			if (ret <= 0) {
-				PMD_CPP_LOG(DEBUG, "%s: socket close\n", __func__);
+				PMD_CPP_LOG(DEBUG, "%s: socket close", __func__);
 				break;
 			}
 
-			PMD_CPP_LOG(DEBUG, "%s: getting op %u\n", __func__, op);
+			PMD_CPP_LOG(DEBUG, "%s: getting op %u", __func__, op);
 
 			if (op == NFP_BRIDGE_OP_READ)
 				nfp_cpp_bridge_serve_read(datafd, cpp);
diff --git a/drivers/net/nfp/nfp_ethdev.c b/drivers/net/nfp/nfp_ethdev.c
index 7d149decfb..72abc4c16e 100644
--- a/drivers/net/nfp/nfp_ethdev.c
+++ b/drivers/net/nfp/nfp_ethdev.c
@@ -60,8 +60,6 @@ nfp_net_start(struct rte_eth_dev *dev)
 	pf_dev = NFP_NET_DEV_PRIVATE_TO_PF(dev->data->dev_private);
 	app_fw_nic = NFP_PRIV_TO_APP_FW_NIC(pf_dev->app_fw_priv);
 
-	PMD_INIT_LOG(DEBUG, "Start");
-
 	/* Disabling queues just in case... */
 	nfp_net_disable_queues(dev);
 
@@ -194,8 +192,6 @@ nfp_net_stop(struct rte_eth_dev *dev)
 {
 	struct nfp_net_hw *hw;
 
-	PMD_INIT_LOG(DEBUG, "Stop");
-
 	hw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private);
 
 	nfp_net_disable_queues(dev);
@@ -220,8 +216,6 @@ nfp_net_set_link_up(struct rte_eth_dev *dev)
 {
 	struct nfp_net_hw *hw;
 
-	PMD_DRV_LOG(DEBUG, "Set link up");
-
 	hw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private);
 
 	if (rte_eal_process_type() == RTE_PROC_PRIMARY)
@@ -237,8 +231,6 @@ nfp_net_set_link_down(struct rte_eth_dev *dev)
 {
 	struct nfp_net_hw *hw;
 
-	PMD_DRV_LOG(DEBUG, "Set link down");
-
 	hw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private);
 
 	if (rte_eal_process_type() == RTE_PROC_PRIMARY)
@@ -261,8 +253,6 @@ nfp_net_close(struct rte_eth_dev *dev)
 	if (rte_eal_process_type() != RTE_PROC_PRIMARY)
 		return 0;
 
-	PMD_INIT_LOG(DEBUG, "Close");
-
 	pf_dev = NFP_NET_DEV_PRIVATE_TO_PF(dev->data->dev_private);
 	hw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private);
 	pci_dev = RTE_ETH_DEV_TO_PCI(dev);
@@ -491,8 +481,6 @@ nfp_net_init(struct rte_eth_dev *eth_dev)
 	struct nfp_app_fw_nic *app_fw_nic;
 	struct rte_ether_addr *tmp_ether_addr;
 
-	PMD_INIT_FUNC_TRACE();
-
 	pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
 
 	/* Use backpointer here to the PF of this eth_dev */
@@ -513,7 +501,7 @@ nfp_net_init(struct rte_eth_dev *eth_dev)
 	 */
 	hw = app_fw_nic->ports[port];
 
-	PMD_INIT_LOG(DEBUG, "Working with physical port number: %d, "
+	PMD_INIT_LOG(DEBUG, "Working with physical port number: %hu, "
 			"NFP internal port number: %d", port, hw->nfp_idx);
 
 	rte_eth_copy_pci_info(eth_dev, pci_dev);
@@ -579,9 +567,6 @@ nfp_net_init(struct rte_eth_dev *eth_dev)
 	tx_base = nn_cfg_readl(hw, NFP_NET_CFG_START_TXQ);
 	rx_base = nn_cfg_readl(hw, NFP_NET_CFG_START_RXQ);
 
-	PMD_INIT_LOG(DEBUG, "tx_base: 0x%" PRIx64 "", tx_base);
-	PMD_INIT_LOG(DEBUG, "rx_base: 0x%" PRIx64 "", rx_base);
-
 	hw->tx_bar = pf_dev->qc_bar + tx_base * NFP_QCP_QUEUE_ADDR_SZ;
 	hw->rx_bar = pf_dev->qc_bar + rx_base * NFP_QCP_QUEUE_ADDR_SZ;
 	eth_dev->data->dev_private = hw;
@@ -627,7 +612,7 @@ nfp_net_init(struct rte_eth_dev *eth_dev)
 
 	eth_dev->data->dev_flags |= RTE_ETH_DEV_AUTOFILL_QUEUE_XSTATS;
 
-	PMD_INIT_LOG(INFO, "port %d VendorID=0x%x DeviceID=0x%x "
+	PMD_INIT_LOG(INFO, "port %d VendorID=%#x DeviceID=%#x "
 			"mac=" RTE_ETHER_ADDR_PRT_FMT,
 			eth_dev->data->port_id, pci_dev->id.vendor_id,
 			pci_dev->id.device_id,
@@ -997,7 +982,7 @@ nfp_pf_init(struct rte_pci_device *pci_dev)
 		goto pf_cleanup;
 	}
 
-	PMD_INIT_LOG(DEBUG, "qc_bar address: 0x%p", pf_dev->qc_bar);
+	PMD_INIT_LOG(DEBUG, "qc_bar address: %p", pf_dev->qc_bar);
 
 	/*
 	 * PF initialization has been done at this point. Call app specific
diff --git a/drivers/net/nfp/nfp_ethdev_vf.c b/drivers/net/nfp/nfp_ethdev_vf.c
index aaef6ea91a..d3c3c9e953 100644
--- a/drivers/net/nfp/nfp_ethdev_vf.c
+++ b/drivers/net/nfp/nfp_ethdev_vf.c
@@ -41,8 +41,6 @@ nfp_netvf_start(struct rte_eth_dev *dev)
 
 	hw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private);
 
-	PMD_INIT_LOG(DEBUG, "Start");
-
 	/* Disabling queues just in case... */
 	nfp_net_disable_queues(dev);
 
@@ -136,8 +134,6 @@ nfp_netvf_start(struct rte_eth_dev *dev)
 static int
 nfp_netvf_stop(struct rte_eth_dev *dev)
 {
-	PMD_INIT_LOG(DEBUG, "Stop");
-
 	nfp_net_disable_queues(dev);
 
 	/* Clear queues */
@@ -170,8 +166,6 @@ nfp_netvf_close(struct rte_eth_dev *dev)
 	if (rte_eal_process_type() != RTE_PROC_PRIMARY)
 		return 0;
 
-	PMD_INIT_LOG(DEBUG, "Close");
-
 	pci_dev = RTE_ETH_DEV_TO_PCI(dev);
 
 	/*
@@ -265,8 +259,6 @@ nfp_netvf_init(struct rte_eth_dev *eth_dev)
 	const struct nfp_dev_info *dev_info;
 	struct rte_ether_addr *tmp_ether_addr;
 
-	PMD_INIT_FUNC_TRACE();
-
 	pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
 
 	dev_info = nfp_dev_info_get(pci_dev->id.device_id);
@@ -301,7 +293,7 @@ nfp_netvf_init(struct rte_eth_dev *eth_dev)
 	hw->eth_xstats_base = rte_malloc("rte_eth_xstat",
 			sizeof(struct rte_eth_xstat) * nfp_net_xstats_size(eth_dev), 0);
 	if (hw->eth_xstats_base == NULL) {
-		PMD_INIT_LOG(ERR, "no memory for xstats base values on device %s!",
+		PMD_INIT_LOG(ERR, "No memory for xstats base values on device %s!",
 				pci_dev->device.name);
 		return -ENOMEM;
 	}
@@ -312,9 +304,6 @@ nfp_netvf_init(struct rte_eth_dev *eth_dev)
 	start_q = nn_cfg_readl(hw, NFP_NET_CFG_START_RXQ);
 	rx_bar_off = nfp_qcp_queue_offset(dev_info, start_q);
 
-	PMD_INIT_LOG(DEBUG, "tx_bar_off: 0x%" PRIx64 "", tx_bar_off);
-	PMD_INIT_LOG(DEBUG, "rx_bar_off: 0x%" PRIx64 "", rx_bar_off);
-
 	hw->tx_bar = (uint8_t *)pci_dev->mem_resource[2].addr + tx_bar_off;
 	hw->rx_bar = (uint8_t *)pci_dev->mem_resource[2].addr + rx_bar_off;
 
@@ -345,7 +334,7 @@ nfp_netvf_init(struct rte_eth_dev *eth_dev)
 
 	tmp_ether_addr = &hw->mac_addr;
 	if (rte_is_valid_assigned_ether_addr(tmp_ether_addr) == 0) {
-		PMD_INIT_LOG(INFO, "Using random mac address for port %d", port);
+		PMD_INIT_LOG(INFO, "Using random mac address for port %hu", port);
 		/* Using random mac addresses for VFs */
 		rte_eth_random_addr(&hw->mac_addr.addr_bytes[0]);
 		nfp_net_write_mac(hw, &hw->mac_addr.addr_bytes[0]);
@@ -359,7 +348,7 @@ nfp_netvf_init(struct rte_eth_dev *eth_dev)
 
 	eth_dev->data->dev_flags |= RTE_ETH_DEV_AUTOFILL_QUEUE_XSTATS;
 
-	PMD_INIT_LOG(INFO, "port %d VendorID=0x%x DeviceID=0x%x "
+	PMD_INIT_LOG(INFO, "port %hu VendorID=%#x DeviceID=%#x "
 			"mac=" RTE_ETHER_ADDR_PRT_FMT,
 			eth_dev->data->port_id, pci_dev->id.vendor_id,
 			pci_dev->id.device_id,
diff --git a/drivers/net/nfp/nfp_logs.h b/drivers/net/nfp/nfp_logs.h
index 315a57811c..16ff61700b 100644
--- a/drivers/net/nfp/nfp_logs.h
+++ b/drivers/net/nfp/nfp_logs.h
@@ -12,7 +12,6 @@ extern int nfp_logtype_init;
 #define PMD_INIT_LOG(level, fmt, args...) \
 	rte_log(RTE_LOG_ ## level, nfp_logtype_init, \
 		"%s(): " fmt "\n", __func__, ## args)
-#define PMD_INIT_FUNC_TRACE() PMD_INIT_LOG(DEBUG, " >>")
 
 #ifdef RTE_ETHDEV_DEBUG_RX
 extern int nfp_logtype_rx;
diff --git a/drivers/net/nfp/nfp_rxtx.c b/drivers/net/nfp/nfp_rxtx.c
index db6122eac3..5bfdfd28b3 100644
--- a/drivers/net/nfp/nfp_rxtx.c
+++ b/drivers/net/nfp/nfp_rxtx.c
@@ -192,7 +192,7 @@ nfp_net_rx_fill_freelist(struct nfp_net_rxq *rxq)
 	uint64_t dma_addr;
 	struct nfp_net_dp_buf *rxe = rxq->rxbufs;
 
-	PMD_RX_LOG(DEBUG, "Fill Rx Freelist for %u descriptors",
+	PMD_RX_LOG(DEBUG, "Fill Rx Freelist for %hu descriptors",
 			rxq->rx_count);
 
 	for (i = 0; i < rxq->rx_count; i++) {
@@ -212,14 +212,13 @@ nfp_net_rx_fill_freelist(struct nfp_net_rxq *rxq)
 		rxd->fld.dma_addr_hi = (dma_addr >> 32) & 0xffff;
 		rxd->fld.dma_addr_lo = dma_addr & 0xffffffff;
 		rxe[i].mbuf = mbuf;
-		PMD_RX_LOG(DEBUG, "[%d]: %" PRIx64, i, dma_addr);
 	}
 
 	/* Make sure all writes are flushed before telling the hardware */
 	rte_wmb();
 
 	/* Not advertising the whole ring as the firmware gets confused if so */
-	PMD_RX_LOG(DEBUG, "Increment FL write pointer in %u", rxq->rx_count - 1);
+	PMD_RX_LOG(DEBUG, "Increment FL write pointer in %hu", rxq->rx_count - 1);
 
 	nfp_qcp_ptr_add(rxq->qcp_fl, NFP_QCP_WRITE_PTR, rxq->rx_count - 1);
 
@@ -432,7 +431,7 @@ nfp_net_parse_meta_qinq(const struct nfp_meta_parsed *meta,
 	if (meta->vlan[0].offload == 0)
 		mb->vlan_tci = rte_cpu_to_le_16(meta->vlan[0].tci);
 	mb->vlan_tci_outer = rte_cpu_to_le_16(meta->vlan[1].tci);
-	PMD_RX_LOG(DEBUG, "Received outer vlan is %u inter vlan is %u",
+	PMD_RX_LOG(DEBUG, "Received outer vlan TCI is %u inner vlan TCI is %u",
 			mb->vlan_tci_outer, mb->vlan_tci);
 	mb->ol_flags |= RTE_MBUF_F_RX_QINQ | RTE_MBUF_F_RX_QINQ_STRIPPED;
 }
@@ -754,12 +753,11 @@ nfp_net_recv_pkts(void *rx_queue,
 			 * responsibility of avoiding it. But we have
 			 * to give some info about the error
 			 */
-			PMD_RX_LOG(ERR,
-					"mbuf overflow likely due to the RX offset.\n"
+			PMD_RX_LOG(ERR, "mbuf overflow likely due to the RX offset.\n"
 					"\t\tYour mbuf size should have extra space for"
 					" RX offset=%u bytes.\n"
 					"\t\tCurrently you just have %u bytes available"
-					" but the received packet is %u bytes long",
+					" but the received packet is %hu bytes long",
 					hw->rx_offset,
 					rxq->mbuf_size - hw->rx_offset,
 					mb->data_len);
@@ -888,8 +886,6 @@ nfp_net_rx_queue_setup(struct rte_eth_dev *dev,
 
 	hw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private);
 
-	PMD_INIT_FUNC_TRACE();
-
 	nfp_net_rx_desc_limits(hw, &min_rx_desc, &max_rx_desc);
 
 	/* Validating number of descriptors */
@@ -965,9 +961,6 @@ nfp_net_rx_queue_setup(struct rte_eth_dev *dev,
 		return -ENOMEM;
 	}
 
-	PMD_RX_LOG(DEBUG, "rxbufs=%p hw_ring=%p dma_addr=0x%" PRIx64,
-			rxq->rxbufs, rxq->rxds, (unsigned long)rxq->dma);
-
 	nfp_net_reset_rx_queue(rxq);
 
 	rxq->hw = hw;
-- 
2.39.1


^ permalink raw reply	[flat|nested] 40+ messages in thread

* [PATCH v2 06/11] net/nfp: standard the comment style
  2023-10-12  1:26 ` [PATCH v2 00/11] Unify the PMD coding style Chaoyong He
                     ` (4 preceding siblings ...)
  2023-10-12  1:26   ` [PATCH v2 05/11] net/nfp: adjust the log statement Chaoyong He
@ 2023-10-12  1:26   ` Chaoyong He
  2023-10-12  1:27   ` [PATCH v2 07/11] net/nfp: standard the blank character Chaoyong He
                     ` (5 subsequent siblings)
  11 siblings, 0 replies; 40+ messages in thread
From: Chaoyong He @ 2023-10-12  1:26 UTC (permalink / raw)
  To: dev; +Cc: oss-drivers, Chaoyong He, Long Wu, Peng Zhang

Follow the DPDK coding style and use the kdoc comment style.
Also delete some comments which are no longer valid and add some
comments to help understand the logic.
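
For example (an illustrative sketch of the target style, mirroring the
hunks below rather than introducing new code), struct fields carry kdoc
trailing comments and standalone field descriptions use the two-star
kdoc opener:

	/** Pointer to the PF vNIC */
	struct nfp_net_hw *pf_hw;

	rte_be32_t pbs;    /**< Peak burst size */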

Signed-off-by: Chaoyong He <chaoyong.he@corigine.com>
Reviewed-by: Long Wu <long.wu@corigine.com>
Reviewed-by: Peng Zhang <peng.zhang@corigine.com>
---
 drivers/net/nfp/flower/nfp_conntrack.c        |   4 +-
 drivers/net/nfp/flower/nfp_flower.c           |  10 +-
 drivers/net/nfp/flower/nfp_flower.h           |  28 ++--
 drivers/net/nfp/flower/nfp_flower_cmsg.c      |   2 +-
 drivers/net/nfp/flower/nfp_flower_cmsg.h      |  56 +++----
 drivers/net/nfp/flower/nfp_flower_ctrl.c      |  16 +-
 .../net/nfp/flower/nfp_flower_representor.c   |  42 +++--
 .../net/nfp/flower/nfp_flower_representor.h   |   2 +-
 drivers/net/nfp/nfd3/nfp_nfd3.h               |  33 ++--
 drivers/net/nfp/nfd3/nfp_nfd3_dp.c            |  24 ++-
 drivers/net/nfp/nfdk/nfp_nfdk.h               |  41 ++---
 drivers/net/nfp/nfdk/nfp_nfdk_dp.c            |   8 +-
 drivers/net/nfp/nfp_common.c                  | 152 ++++++++----------
 drivers/net/nfp/nfp_common.h                  |  61 +++----
 drivers/net/nfp/nfp_cpp_bridge.c              |   6 +-
 drivers/net/nfp/nfp_ctrl.h                    |  34 ++--
 drivers/net/nfp/nfp_ethdev.c                  |  40 +++--
 drivers/net/nfp/nfp_ethdev_vf.c               |  15 +-
 drivers/net/nfp/nfp_flow.c                    |  62 +++----
 drivers/net/nfp/nfp_flow.h                    |  10 +-
 drivers/net/nfp/nfp_ipsec.h                   |  12 +-
 drivers/net/nfp/nfp_rxtx.c                    | 125 ++++++--------
 drivers/net/nfp/nfp_rxtx.h                    |  18 +--
 23 files changed, 354 insertions(+), 447 deletions(-)

diff --git a/drivers/net/nfp/flower/nfp_conntrack.c b/drivers/net/nfp/flower/nfp_conntrack.c
index 7b84b12546..f89003be8b 100644
--- a/drivers/net/nfp/flower/nfp_conntrack.c
+++ b/drivers/net/nfp/flower/nfp_conntrack.c
@@ -667,8 +667,8 @@ nfp_ct_flow_entry_get(struct nfp_ct_zone_entry *ze,
 {
 	bool ret;
 	uint8_t loop;
-	uint8_t item_cnt = 1;      /* the RTE_FLOW_ITEM_TYPE_END */
-	uint8_t action_cnt = 1;    /* the RTE_FLOW_ACTION_TYPE_END */
+	uint8_t item_cnt = 1;      /* The RTE_FLOW_ITEM_TYPE_END */
+	uint8_t action_cnt = 1;    /* The RTE_FLOW_ACTION_TYPE_END */
 	struct nfp_flow_priv *priv;
 	struct nfp_ct_map_entry *me;
 	struct nfp_ct_flow_entry *fe;
diff --git a/drivers/net/nfp/flower/nfp_flower.c b/drivers/net/nfp/flower/nfp_flower.c
index 7a4e671178..4453ae7b5e 100644
--- a/drivers/net/nfp/flower/nfp_flower.c
+++ b/drivers/net/nfp/flower/nfp_flower.c
@@ -208,7 +208,7 @@ nfp_flower_pf_close(struct rte_eth_dev *dev)
 		nfp_net_reset_rx_queue(this_rx_q);
 	}
 
-	/* Cancel possible impending LSC work here before releasing the port*/
+	/* Cancel possible impending LSC work here before releasing the port */
 	rte_eal_alarm_cancel(nfp_net_dev_interrupt_delayed_handler, (void *)dev);
 
 	nn_cfg_writeb(hw, NFP_NET_CFG_LSC, 0xff);
@@ -488,7 +488,7 @@ nfp_flower_init_ctrl_vnic(struct nfp_net_hw *hw)
 
 		/*
 		 * Tracking mbuf size for detecting a potential mbuf overflow due to
-		 * RX offset
+		 * RX offset.
 		 */
 		rxq->mem_pool = mp;
 		rxq->mbuf_size = rxq->mem_pool->elt_size;
@@ -535,7 +535,7 @@ nfp_flower_init_ctrl_vnic(struct nfp_net_hw *hw)
 
 		/*
 		 * Telling the HW about the physical address of the RX ring and number
-		 * of descriptors in log2 format
+		 * of descriptors in log2 format.
 		 */
 		nn_cfg_writeq(hw, NFP_NET_CFG_RXR_ADDR(i), rxq->dma);
 		nn_cfg_writeb(hw, NFP_NET_CFG_RXR_SZ(i), rte_log2_u32(CTRL_VNIC_NB_DESC));
@@ -600,7 +600,7 @@ nfp_flower_init_ctrl_vnic(struct nfp_net_hw *hw)
 
 		/*
 		 * Telling the HW about the physical address of the TX ring and number
-		 * of descriptors in log2 format
+		 * of descriptors in log2 format.
 		 */
 		nn_cfg_writeq(hw, NFP_NET_CFG_TXR_ADDR(i), txq->dma);
 		nn_cfg_writeb(hw, NFP_NET_CFG_TXR_SZ(i), rte_log2_u32(CTRL_VNIC_NB_DESC));
@@ -758,7 +758,7 @@ nfp_flower_enable_services(struct nfp_app_fw_flower *app_fw_flower)
 	app_fw_flower->ctrl_vnic_id = service_id;
 	PMD_INIT_LOG(INFO, "%s registered", flower_service.name);
 
-	/* Map them to available service cores*/
+	/* Map them to available service cores */
 	ret = nfp_map_service(service_id);
 	if (ret != 0) {
 		PMD_INIT_LOG(ERR, "Could not map %s", flower_service.name);
diff --git a/drivers/net/nfp/flower/nfp_flower.h b/drivers/net/nfp/flower/nfp_flower.h
index 244b6daa37..0b4e38cedd 100644
--- a/drivers/net/nfp/flower/nfp_flower.h
+++ b/drivers/net/nfp/flower/nfp_flower.h
@@ -53,49 +53,49 @@ struct nfp_flower_nfd_func {
 
 /* The flower application's private structure */
 struct nfp_app_fw_flower {
-	/* switch domain for this app */
+	/** Switch domain for this app */
 	uint16_t switch_domain_id;
 
-	/* Number of VF representors */
+	/** Number of VF representors */
 	uint8_t num_vf_reprs;
 
-	/* Number of phyport representors */
+	/** Number of phyport representors */
 	uint8_t num_phyport_reprs;
 
-	/* Pointer to the PF vNIC */
+	/** Pointer to the PF vNIC */
 	struct nfp_net_hw *pf_hw;
 
-	/* Pointer to a mempool for the ctrlvNIC */
+	/** Pointer to a mempool for the Ctrl vNIC */
 	struct rte_mempool *ctrl_pktmbuf_pool;
 
-	/* Pointer to the ctrl vNIC */
+	/** Pointer to the Ctrl vNIC */
 	struct nfp_net_hw *ctrl_hw;
 
-	/* Ctrl vNIC Rx counter */
+	/** Ctrl vNIC Rx counter */
 	uint64_t ctrl_vnic_rx_count;
 
-	/* Ctrl vNIC Tx counter */
+	/** Ctrl vNIC Tx counter */
 	uint64_t ctrl_vnic_tx_count;
 
-	/* Array of phyport representors */
+	/** Array of phyport representors */
 	struct nfp_flower_representor *phy_reprs[MAX_FLOWER_PHYPORTS];
 
-	/* Array of VF representors */
+	/** Array of VF representors */
 	struct nfp_flower_representor *vf_reprs[MAX_FLOWER_VFS];
 
-	/* PF representor */
+	/** PF representor */
 	struct nfp_flower_representor *pf_repr;
 
-	/* service id of ctrl vnic service */
+	/** Service id of Ctrl vNIC service */
 	uint32_t ctrl_vnic_id;
 
-	/* Flower extra features */
+	/** Flower extra features */
 	uint64_t ext_features;
 
 	struct nfp_flow_priv *flow_priv;
 	struct nfp_mtr_priv *mtr_priv;
 
-	/* Function pointers for different NFD version */
+	/** Function pointers for different NFD version */
 	struct nfp_flower_nfd_func nfd_func;
 };
 
diff --git a/drivers/net/nfp/flower/nfp_flower_cmsg.c b/drivers/net/nfp/flower/nfp_flower_cmsg.c
index 5d6912b079..2ec9498d22 100644
--- a/drivers/net/nfp/flower/nfp_flower_cmsg.c
+++ b/drivers/net/nfp/flower/nfp_flower_cmsg.c
@@ -230,7 +230,7 @@ nfp_flower_cmsg_flow_add(struct nfp_app_fw_flower *app_fw_flower,
 		return -ENOMEM;
 	}
 
-	/* copy the flow to mbuf */
+	/* Copy the flow to mbuf */
 	nfp_flow_meta = flow->payload.meta;
 	msg_len = (nfp_flow_meta->key_len + nfp_flow_meta->mask_len +
 			nfp_flow_meta->act_len) << NFP_FL_LW_SIZ;
diff --git a/drivers/net/nfp/flower/nfp_flower_cmsg.h b/drivers/net/nfp/flower/nfp_flower_cmsg.h
index 9449760145..cb019171b6 100644
--- a/drivers/net/nfp/flower/nfp_flower_cmsg.h
+++ b/drivers/net/nfp/flower/nfp_flower_cmsg.h
@@ -348,7 +348,7 @@ struct nfp_flower_stats_frame {
 	rte_be64_t stats_cookie;
 };
 
-/**
+/*
  * See RFC 2698 for more details.
  * Word[0](Flag options):
  * [15] p(pps) 1 for pps, 0 for bps
@@ -378,40 +378,24 @@ struct nfp_cfg_head {
 	rte_be32_t profile_id;
 };
 
-/**
- * Struct nfp_profile_conf - profile config, offload to NIC
- * @head:        config head information
- * @bkt_tkn_p:   token bucket peak
- * @bkt_tkn_c:   token bucket committed
- * @pbs:         peak burst size
- * @cbs:         committed burst size
- * @pir:         peak information rate
- * @cir:         committed information rate
- */
+/* Profile config, offload to NIC */
 struct nfp_profile_conf {
-	struct nfp_cfg_head head;
-	rte_be32_t bkt_tkn_p;
-	rte_be32_t bkt_tkn_c;
-	rte_be32_t pbs;
-	rte_be32_t cbs;
-	rte_be32_t pir;
-	rte_be32_t cir;
-};
-
-/**
- * Struct nfp_mtr_stats_reply - meter stats, read from firmware
- * @head:          config head information
- * @pass_bytes:    count of passed bytes
- * @pass_pkts:     count of passed packets
- * @drop_bytes:    count of dropped bytes
- * @drop_pkts:     count of dropped packets
- */
+	struct nfp_cfg_head head;    /**< Config head information */
+	rte_be32_t bkt_tkn_p;        /**< Token bucket peak */
+	rte_be32_t bkt_tkn_c;        /**< Token bucket committed */
+	rte_be32_t pbs;              /**< Peak burst size */
+	rte_be32_t cbs;              /**< Committed burst size */
+	rte_be32_t pir;              /**< Peak information rate */
+	rte_be32_t cir;              /**< Committed information rate */
+};
+
+/* Meter stats, read from firmware */
 struct nfp_mtr_stats_reply {
-	struct nfp_cfg_head head;
-	rte_be64_t pass_bytes;
-	rte_be64_t pass_pkts;
-	rte_be64_t drop_bytes;
-	rte_be64_t drop_pkts;
+	struct nfp_cfg_head head;    /**< Config head information */
+	rte_be64_t pass_bytes;       /**< Count of passed bytes */
+	rte_be64_t pass_pkts;        /**< Count of passed packets */
+	rte_be64_t drop_bytes;       /**< Count of dropped bytes */
+	rte_be64_t drop_pkts;        /**< Count of dropped packets */
 };
 
 enum nfp_flower_cmsg_port_type {
@@ -851,7 +835,7 @@ struct nfp_fl_act_set_ipv6_addr {
 };
 
 /*
- * ipv6 tc hl fl
+ * IPv6 tc hl fl
  *    3                   2                   1
  *  1 0 9 8 7 6 5 4 3 2 1 0 9 8 7 6 5 4 3 2 1 0 9 8 7 6 5 4 3 2 1 0
  * +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
@@ -954,9 +938,9 @@ struct nfp_fl_act_set_tun {
 	uint8_t    tos;
 	rte_be16_t outer_vlan_tpid;
 	rte_be16_t outer_vlan_tci;
-	uint8_t    tun_len;      /* Only valid for NFP_FL_TUNNEL_GENEVE */
+	uint8_t    tun_len;      /**< Only valid for NFP_FL_TUNNEL_GENEVE */
 	uint8_t    reserved2;
-	rte_be16_t tun_proto;    /* Only valid for NFP_FL_TUNNEL_GENEVE */
+	rte_be16_t tun_proto;    /**< Only valid for NFP_FL_TUNNEL_GENEVE */
 } __rte_packed;
 
 /*
diff --git a/drivers/net/nfp/flower/nfp_flower_ctrl.c b/drivers/net/nfp/flower/nfp_flower_ctrl.c
index 1f4c5fd7f9..15d27f2ac7 100644
--- a/drivers/net/nfp/flower/nfp_flower_ctrl.c
+++ b/drivers/net/nfp/flower/nfp_flower_ctrl.c
@@ -34,7 +34,7 @@ nfp_flower_ctrl_vnic_recv(void *rx_queue,
 	if (unlikely(rxq == NULL)) {
 		/*
 		 * DPDK just checks the queue is lower than max queues
-		 * enabled. But the queue needs to be configured
+		 * enabled. But the queue needs to be configured.
 		 */
 		PMD_RX_LOG(ERR, "RX Bad queue");
 		return 0;
@@ -60,7 +60,7 @@ nfp_flower_ctrl_vnic_recv(void *rx_queue,
 
 		/*
 		 * We got a packet. Let's alloc a new mbuf for refilling the
-		 * free descriptor ring as soon as possible
+		 * free descriptor ring as soon as possible.
 		 */
 		new_mb = rte_pktmbuf_alloc(rxq->mem_pool);
 		if (unlikely(new_mb == NULL)) {
@@ -72,7 +72,7 @@ nfp_flower_ctrl_vnic_recv(void *rx_queue,
 
 		/*
 		 * Grab the mbuf and refill the descriptor with the
-		 * previously allocated mbuf
+		 * previously allocated mbuf.
 		 */
 		mb = rxb->mbuf;
 		rxb->mbuf = new_mb;
@@ -86,7 +86,7 @@ nfp_flower_ctrl_vnic_recv(void *rx_queue,
 			/*
 			 * This should not happen and the user has the
 			 * responsibility of avoiding it. But we have
-			 * to give some info about the error
+			 * to give some info about the error.
 			 */
 			PMD_RX_LOG(ERR, "mbuf overflow likely due to the RX offset.\n"
 					"\t\tYour mbuf size should have extra space for"
@@ -123,7 +123,7 @@ nfp_flower_ctrl_vnic_recv(void *rx_queue,
 		nb_hold++;
 
 		rxq->rd_p++;
-		if (unlikely(rxq->rd_p == rxq->rx_count)) /* wrapping?*/
+		if (unlikely(rxq->rd_p == rxq->rx_count)) /* Wrapping */
 			rxq->rd_p = 0;
 	}
 
@@ -170,7 +170,7 @@ nfp_flower_ctrl_vnic_nfd3_xmit(struct nfp_app_fw_flower *app_fw_flower,
 	if (unlikely(txq == NULL)) {
 		/*
 		 * DPDK just checks the queue is lower than max queues
-		 * enabled. But the queue needs to be configured
+		 * enabled. But the queue needs to be configured.
 		 */
 		PMD_TX_LOG(ERR, "ctrl dev TX Bad queue");
 		goto xmit_end;
@@ -206,7 +206,7 @@ nfp_flower_ctrl_vnic_nfd3_xmit(struct nfp_app_fw_flower *app_fw_flower,
 	txds->offset_eop = FLOWER_PKT_DATA_OFFSET | NFD3_DESC_TX_EOP;
 
 	txq->wr_p++;
-	if (unlikely(txq->wr_p == txq->tx_count)) /* wrapping?*/
+	if (unlikely(txq->wr_p == txq->tx_count)) /* Wrapping */
 		txq->wr_p = 0;
 
 	cnt++;
@@ -520,7 +520,7 @@ nfp_flower_ctrl_vnic_poll(struct nfp_app_fw_flower *app_fw_flower)
 	ctrl_hw = app_fw_flower->ctrl_hw;
 	ctrl_eth_dev = ctrl_hw->eth_dev;
 
-	/* ctrl vNIC only has a single Rx queue */
+	/* Ctrl vNIC only has a single Rx queue */
 	rxq = ctrl_eth_dev->data->rx_queues[0];
 
 	while (rte_service_runstate_get(app_fw_flower->ctrl_vnic_id) != 0) {
diff --git a/drivers/net/nfp/flower/nfp_flower_representor.c b/drivers/net/nfp/flower/nfp_flower_representor.c
index be0dfb2890..e023a7d8dc 100644
--- a/drivers/net/nfp/flower/nfp_flower_representor.c
+++ b/drivers/net/nfp/flower/nfp_flower_representor.c
@@ -10,18 +10,12 @@
 #include "../nfp_logs.h"
 #include "../nfp_mtr.h"
 
-/*
- * enum nfp_repr_type - type of representor
- * @NFP_REPR_TYPE_PHYS_PORT:   external NIC port
- * @NFP_REPR_TYPE_PF:          physical function
- * @NFP_REPR_TYPE_VF:          virtual function
- * @NFP_REPR_TYPE_MAX:         number of representor types
- */
+/* Type of representor */
 enum nfp_repr_type {
-	NFP_REPR_TYPE_PHYS_PORT,
-	NFP_REPR_TYPE_PF,
-	NFP_REPR_TYPE_VF,
-	NFP_REPR_TYPE_MAX,
+	NFP_REPR_TYPE_PHYS_PORT,    /**< External NIC port */
+	NFP_REPR_TYPE_PF,           /**< Physical function */
+	NFP_REPR_TYPE_VF,           /**< Virtual function */
+	NFP_REPR_TYPE_MAX,          /**< Number of representor types */
 };
 
 static int
@@ -55,7 +49,7 @@ nfp_pf_repr_rx_queue_setup(struct rte_eth_dev *dev,
 
 	/*
 	 * Tracking mbuf size for detecting a potential mbuf overflow due to
-	 * RX offset
+	 * RX offset.
 	 */
 	rxq->mem_pool = mp;
 	rxq->mbuf_size = rxq->mem_pool->elt_size;
@@ -86,7 +80,7 @@ nfp_pf_repr_rx_queue_setup(struct rte_eth_dev *dev,
 	rxq->dma = (uint64_t)tz->iova;
 	rxq->rxds = tz->addr;
 
-	/* mbuf pointers array for referencing mbufs linked to RX descriptors */
+	/* Mbuf pointers array for referencing mbufs linked to RX descriptors */
 	rxq->rxbufs = rte_zmalloc_socket("rxq->rxbufs",
 			sizeof(*rxq->rxbufs) * nb_desc,
 			RTE_CACHE_LINE_SIZE, socket_id);
@@ -101,7 +95,7 @@ nfp_pf_repr_rx_queue_setup(struct rte_eth_dev *dev,
 
 	/*
 	 * Telling the HW about the physical address of the RX ring and number
-	 * of descriptors in log2 format
+	 * of descriptors in log2 format.
 	 */
 	nn_cfg_writeq(hw, NFP_NET_CFG_RXR_ADDR(queue_idx), rxq->dma);
 	nn_cfg_writeb(hw, NFP_NET_CFG_RXR_SZ(queue_idx), rte_log2_u32(nb_desc));
@@ -159,7 +153,7 @@ nfp_pf_repr_tx_queue_setup(struct rte_eth_dev *dev,
 	txq->tx_count = nb_desc;
 	txq->tx_free_thresh = tx_free_thresh;
 
-	/* queue mapping based on firmware configuration */
+	/* Queue mapping based on firmware configuration */
 	txq->qidx = queue_idx;
 	txq->tx_qcidx = queue_idx * hw->stride_tx;
 	txq->qcp_q = hw->tx_bar + NFP_QCP_QUEUE_OFF(txq->tx_qcidx);
@@ -170,7 +164,7 @@ nfp_pf_repr_tx_queue_setup(struct rte_eth_dev *dev,
 	txq->dma = (uint64_t)tz->iova;
 	txq->txds = tz->addr;
 
-	/* mbuf pointers array for referencing mbufs linked to TX descriptors */
+	/* Mbuf pointers array for referencing mbufs linked to TX descriptors */
 	txq->txbufs = rte_zmalloc_socket("txq->txbufs",
 			sizeof(*txq->txbufs) * nb_desc,
 			RTE_CACHE_LINE_SIZE, socket_id);
@@ -185,7 +179,7 @@ nfp_pf_repr_tx_queue_setup(struct rte_eth_dev *dev,
 
 	/*
 	 * Telling the HW about the physical address of the TX ring and number
-	 * of descriptors in log2 format
+	 * of descriptors in log2 format.
 	 */
 	nn_cfg_writeq(hw, NFP_NET_CFG_TXR_ADDR(queue_idx), txq->dma);
 	nn_cfg_writeb(hw, NFP_NET_CFG_TXR_SZ(queue_idx), rte_log2_u32(nb_desc));
@@ -603,7 +597,7 @@ nfp_flower_pf_repr_init(struct rte_eth_dev *eth_dev,
 	/* Memory has been allocated in the eth_dev_create() function */
 	repr = eth_dev->data->dev_private;
 
-	/* Copy data here from the input representor template*/
+	/* Copy data here from the input representor template */
 	repr->vf_id            = init_repr_data->vf_id;
 	repr->switch_domain_id = init_repr_data->switch_domain_id;
 	repr->repr_type        = init_repr_data->repr_type;
@@ -672,7 +666,7 @@ nfp_flower_repr_init(struct rte_eth_dev *eth_dev,
 		return -ENOMEM;
 	}
 
-	/* Copy data here from the input representor template*/
+	/* Copy data here from the input representor template */
 	repr->vf_id            = init_repr_data->vf_id;
 	repr->switch_domain_id = init_repr_data->switch_domain_id;
 	repr->port_id          = init_repr_data->port_id;
@@ -752,7 +746,7 @@ nfp_flower_repr_alloc(struct nfp_app_fw_flower *app_fw_flower)
 	nfp_eth_table = app_fw_flower->pf_hw->pf_dev->nfp_eth_table;
 	eth_dev = app_fw_flower->ctrl_hw->eth_dev;
 
-	/* Send a NFP_FLOWER_CMSG_TYPE_MAC_REPR cmsg to hardware*/
+	/* Send a NFP_FLOWER_CMSG_TYPE_MAC_REPR cmsg to hardware */
 	ret = nfp_flower_cmsg_mac_repr(app_fw_flower);
 	if (ret != 0) {
 		PMD_INIT_LOG(ERR, "Cloud not send mac repr cmsgs");
@@ -795,8 +789,8 @@ nfp_flower_repr_alloc(struct nfp_app_fw_flower *app_fw_flower)
 				"%s_repr_p%d", pci_name, i);
 
 		/*
-		 * Create a eth_dev for this representor
-		 * This will also allocate private memory for the device
+		 * Create an eth_dev for this representor.
+		 * This will also allocate private memory for the device.
 		 */
 		ret = rte_eth_dev_create(eth_dev->device, flower_repr.name,
 				sizeof(struct nfp_flower_representor),
@@ -812,7 +806,7 @@ nfp_flower_repr_alloc(struct nfp_app_fw_flower *app_fw_flower)
 
 	/*
 	 * Now allocate eth_dev's for VF representors.
-	 * Also send reify messages
+	 * Also send reify messages.
 	 */
 	for (i = 0; i < app_fw_flower->num_vf_reprs; i++) {
 		flower_repr.repr_type = NFP_REPR_TYPE_VF;
@@ -826,7 +820,7 @@ nfp_flower_repr_alloc(struct nfp_app_fw_flower *app_fw_flower)
 		snprintf(flower_repr.name, sizeof(flower_repr.name),
 				"%s_repr_vf%d", pci_name, i);
 
-		/* This will also allocate private memory for the device*/
+		/* This will also allocate private memory for the device */
 		ret = rte_eth_dev_create(eth_dev->device, flower_repr.name,
 				sizeof(struct nfp_flower_representor),
 				NULL, NULL, nfp_flower_repr_init, &flower_repr);
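
For reference, the comment conventions this series converges on look like the
following minimal sketch (the enum and its names here are illustrative, not
part of the driver): struct fields, enum entries and macros get doxygen
/** ... */ or trailing /**< ... */ comments, while comments inside function
bodies stay plain /* ... */.

	/** Type of example widget */
	enum example_widget_type {
		EXAMPLE_WIDGET_A,      /**< First variant */
		EXAMPLE_WIDGET_B,      /**< Second variant */
		EXAMPLE_WIDGET_MAX,    /**< Number of widget types */
	};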
diff --git a/drivers/net/nfp/flower/nfp_flower_representor.h b/drivers/net/nfp/flower/nfp_flower_representor.h
index 5ac5e38186..eda19cbb16 100644
--- a/drivers/net/nfp/flower/nfp_flower_representor.h
+++ b/drivers/net/nfp/flower/nfp_flower_representor.h
@@ -13,7 +13,7 @@ struct nfp_flower_representor {
 	uint16_t switch_domain_id;
 	uint32_t repr_type;
 	uint32_t port_id;
-	uint32_t nfp_idx;    /* only valid for the repr of physical port */
+	uint32_t nfp_idx;    /**< Only valid for the repr of physical port */
 	char name[RTE_ETH_NAME_MAX_LEN];
 	struct rte_ether_addr mac_addr;
 	struct nfp_app_fw_flower *app_fw_flower;
diff --git a/drivers/net/nfp/nfd3/nfp_nfd3.h b/drivers/net/nfp/nfd3/nfp_nfd3.h
index 7c56ca4908..0b0ca361f4 100644
--- a/drivers/net/nfp/nfd3/nfp_nfd3.h
+++ b/drivers/net/nfp/nfd3/nfp_nfd3.h
@@ -17,24 +17,24 @@
 struct nfp_net_nfd3_tx_desc {
 	union {
 		struct {
-			uint8_t dma_addr_hi; /* High bits of host buf address */
-			uint16_t dma_len;    /* Length to DMA for this desc */
-			/* Offset in buf where pkt starts + highest bit is eop flag */
+			uint8_t dma_addr_hi; /**< High bits of host buf address */
+			uint16_t dma_len;    /**< Length to DMA for this desc */
+			/** Offset in buf where pkt starts + highest bit is eop flag */
 			uint8_t offset_eop;
-			uint32_t dma_addr_lo; /* Low 32bit of host buf addr */
+			uint32_t dma_addr_lo; /**< Low 32bit of host buf addr */
 
-			uint16_t mss;         /* MSS to be used for LSO */
-			uint8_t lso_hdrlen;   /* LSO, where the data starts */
-			uint8_t flags;        /* TX Flags, see @NFD3_DESC_TX_* */
+			uint16_t mss;         /**< MSS to be used for LSO */
+			uint8_t lso_hdrlen;   /**< LSO, where the data starts */
+			uint8_t flags;        /**< TX Flags, see @NFD3_DESC_TX_* */
 
 			union {
 				struct {
-					uint8_t l3_offset; /* L3 header offset */
-					uint8_t l4_offset; /* L4 header offset */
+					uint8_t l3_offset; /**< L3 header offset */
+					uint8_t l4_offset; /**< L4 header offset */
 				};
-				uint16_t vlan; /* VLAN tag to add if indicated */
+				uint16_t vlan; /**< VLAN tag to add if indicated */
 			};
-			uint16_t data_len;     /* Length of frame + meta data */
+			uint16_t data_len;     /**< Length of frame + meta data */
 		} __rte_packed;
 		uint32_t vals[4];
 	};
@@ -54,13 +54,14 @@ nfp_net_nfd3_free_tx_desc(struct nfp_net_txq *txq)
 	return (free_desc > 8) ? (free_desc - 8) : 0;
 }
 
-/*
- * nfp_net_nfd3_txq_full() - Check if the TX queue free descriptors
- * is below tx_free_threshold for firmware of nfd3
- *
- * @txq: TX queue to check
+/**
+ * Check if the number of free TX queue descriptors is below
+ * tx_free_threshold for the nfd3 firmware
  *
  * This function uses the host copy* of read/write pointers.
+ *
+ * @param txq
+ *   TX queue to check
  */
 static inline bool
 nfp_net_nfd3_txq_full(struct nfp_net_txq *txq)
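
Both txq_full helpers documented in this series reduce to the same pointer
arithmetic on the host-side ring copies. A minimal sketch of that check,
assuming the wr_p/rd_p/tx_count/tx_free_thresh fields visible in these hunks
(illustrative, not the driver's exact code):

	static inline bool
	example_txq_full(struct nfp_net_txq *txq)
	{
		uint32_t free_desc;

		/* Distance between write and read pointers on the host copy */
		if (txq->wr_p >= txq->rd_p)
			free_desc = txq->tx_count - (txq->wr_p - txq->rd_p);
		else
			free_desc = txq->rd_p - txq->wr_p;

		/* Full once the free count drops below the threshold */
		return free_desc < txq->tx_free_thresh;
	}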
diff --git a/drivers/net/nfp/nfd3/nfp_nfd3_dp.c b/drivers/net/nfp/nfd3/nfp_nfd3_dp.c
index 51755f4324..4df2c5d4d2 100644
--- a/drivers/net/nfp/nfd3/nfp_nfd3_dp.c
+++ b/drivers/net/nfp/nfd3/nfp_nfd3_dp.c
@@ -113,14 +113,12 @@ nfp_flower_nfd3_pkt_add_metadata(struct rte_mbuf *mbuf,
 }
 
 /*
- * nfp_net_nfd3_tx_vlan() - Set vlan info in the nfd3 tx desc
+ * Set vlan info in the nfd3 tx desc
  *
  * If enable NFP_NET_CFG_CTRL_TXVLAN_V2
- *	Vlan_info is stored in the meta and
- *	is handled in the nfp_net_nfd3_set_meta_vlan()
+ *   Vlan_info is stored in the meta and is handled in the @nfp_net_nfd3_set_meta_vlan()
  * else if enable NFP_NET_CFG_CTRL_TXVLAN
- *	Vlan_info is stored in the tx_desc and
- *	is handled in the nfp_net_nfd3_tx_vlan()
+ *   Vlan_info is stored in the tx_desc and is handled in the @nfp_net_nfd3_tx_vlan()
  */
 static inline void
 nfp_net_nfd3_tx_vlan(struct nfp_net_txq *txq,
@@ -299,9 +297,9 @@ nfp_net_nfd3_xmit_pkts_common(void *tx_queue,
 		nfp_net_nfd3_tx_vlan(txq, &txd, pkt);
 
 		/*
-		 * mbuf data_len is the data in one segment and pkt_len data
+		 * Mbuf data_len is the data in one segment and pkt_len data
 		 * in the whole packet. When the packet is just one segment,
-		 * then data_len = pkt_len
+		 * then data_len = pkt_len.
 		 */
 		pkt_size = pkt->pkt_len;
 
@@ -315,7 +313,7 @@ nfp_net_nfd3_xmit_pkts_common(void *tx_queue,
 
 			/*
 			 * Linking mbuf with descriptor for being released
-			 * next time descriptor is used
+			 * next time descriptor is used.
 			 */
 			*lmbuf = pkt;
 
@@ -330,14 +328,14 @@ nfp_net_nfd3_xmit_pkts_common(void *tx_queue,
 			free_descs--;
 
 			txq->wr_p++;
-			if (unlikely(txq->wr_p == txq->tx_count)) /* wrapping */
+			if (unlikely(txq->wr_p == txq->tx_count)) /* Wrapping */
 				txq->wr_p = 0;
 
 			pkt_size -= dma_size;
 
 			/*
 			 * Making the EOP, packets with just one segment
-			 * the priority
+			 * the priority.
 			 */
 			if (likely(pkt_size == 0))
 				txds->offset_eop = NFD3_DESC_TX_EOP;
@@ -439,7 +437,7 @@ nfp_net_nfd3_tx_queue_setup(struct rte_eth_dev *dev,
 	txq->tx_count = nb_desc * NFD3_TX_DESC_PER_PKT;
 	txq->tx_free_thresh = tx_free_thresh;
 
-	/* queue mapping based on firmware configuration */
+	/* Queue mapping based on firmware configuration */
 	txq->qidx = queue_idx;
 	txq->tx_qcidx = queue_idx * hw->stride_tx;
 	txq->qcp_q = hw->tx_bar + NFP_QCP_QUEUE_OFF(txq->tx_qcidx);
@@ -449,7 +447,7 @@ nfp_net_nfd3_tx_queue_setup(struct rte_eth_dev *dev,
 	txq->dma = tz->iova;
 	txq->txds = tz->addr;
 
-	/* mbuf pointers array for referencing mbufs linked to TX descriptors */
+	/* Mbuf pointers array for referencing mbufs linked to TX descriptors */
 	txq->txbufs = rte_zmalloc_socket("txq->txbufs",
 			sizeof(*txq->txbufs) * txq->tx_count,
 			RTE_CACHE_LINE_SIZE, socket_id);
@@ -465,7 +463,7 @@ nfp_net_nfd3_tx_queue_setup(struct rte_eth_dev *dev,
 
 	/*
 	 * Telling the HW about the physical address of the TX ring and number
-	 * of descriptors in log2 format
+	 * of descriptors in log2 format.
 	 */
 	nn_cfg_writeq(hw, NFP_NET_CFG_TXR_ADDR(queue_idx), txq->dma);
 	nn_cfg_writeb(hw, NFP_NET_CFG_TXR_SZ(queue_idx), rte_log2_u32(txq->tx_count));
diff --git a/drivers/net/nfp/nfdk/nfp_nfdk.h b/drivers/net/nfp/nfdk/nfp_nfdk.h
index 99675b6bd7..04bd3c7600 100644
--- a/drivers/net/nfp/nfdk/nfp_nfdk.h
+++ b/drivers/net/nfp/nfdk/nfp_nfdk.h
@@ -75,7 +75,7 @@
  * dma_addr_hi - bits [47:32] of host memory address
  * dma_addr_lo - bits [31:0] of host memory address
  *
- * --> metadata descriptor
+ * --> Metadata descriptor
  * Bit     3 3 2 2 2 2 2 2 2 2 2 2 1 1 1 1 1 1 1 1 1 1 0 0 0 0 0 0 0 0 0 0
  * -----\  1 0 9 8 7 6 5 4 3 2 1 0 9 8 7 6 5 4 3 2 1 0 9 8 7 6 5 4 3 2 1 0
  * Word   +-------+-----------------------+---------------------+---+-----+
@@ -104,27 +104,27 @@
  */
 struct nfp_net_nfdk_tx_desc {
 	union {
-		/* Address descriptor */
+		/** Address descriptor */
 		struct {
-			uint16_t dma_addr_hi;  /* High bits of host buf address */
-			uint16_t dma_len_type; /* Length to DMA for this desc */
-			uint32_t dma_addr_lo;  /* Low 32bit of host buf addr */
+			uint16_t dma_addr_hi;  /**< High bits of host buf address */
+			uint16_t dma_len_type; /**< Length to DMA for this desc */
+			uint32_t dma_addr_lo;  /**< Low 32bit of host buf addr */
 		};
 
-		/* TSO descriptor */
+		/** TSO descriptor */
 		struct {
-			uint16_t mss;          /* MSS to be used for LSO */
-			uint8_t lso_hdrlen;    /* LSO, TCP payload offset */
-			uint8_t lso_totsegs;   /* LSO, total segments */
-			uint8_t l3_offset;     /* L3 header offset */
-			uint8_t l4_offset;     /* L4 header offset */
-			uint16_t lso_meta_res; /* Rsvd bits in TSO metadata */
+			uint16_t mss;          /**< MSS to be used for LSO */
+			uint8_t lso_hdrlen;    /**< LSO, TCP payload offset */
+			uint8_t lso_totsegs;   /**< LSO, total segments */
+			uint8_t l3_offset;     /**< L3 header offset */
+			uint8_t l4_offset;     /**< L4 header offset */
+			uint16_t lso_meta_res; /**< Rsvd bits in TSO metadata */
 		};
 
-		/* Metadata descriptor */
+		/** Metadata descriptor */
 		struct {
-			uint8_t flags;         /* TX Flags, see @NFDK_DESC_TX_* */
-			uint8_t reserved[7];   /* meta byte placeholder */
+			uint8_t flags;         /**< TX Flags, see @NFDK_DESC_TX_* */
+			uint8_t reserved[7];   /**< Meta byte placeholder */
 		};
 
 		uint32_t vals[2];
@@ -146,13 +146,14 @@ nfp_net_nfdk_free_tx_desc(struct nfp_net_txq *txq)
 			(free_desc - NFDK_TX_DESC_STOP_CNT) : 0;
 }
 
-/*
- * nfp_net_nfdk_txq_full() - Check if the TX queue free descriptors
- * is below tx_free_threshold for firmware of nfdk
- *
- * @txq: TX queue to check
+/**
+ * Check if the number of free TX queue descriptors is below
+ * tx_free_threshold for the nfdk firmware
  *
  * This function uses the host copy* of read/write pointers.
+ *
+ * @param txq
+ *   TX queue to check
  */
 static inline bool
 nfp_net_nfdk_txq_full(struct nfp_net_txq *txq)
diff --git a/drivers/net/nfp/nfdk/nfp_nfdk_dp.c b/drivers/net/nfp/nfdk/nfp_nfdk_dp.c
index dae87ac6df..1289ba1d65 100644
--- a/drivers/net/nfp/nfdk/nfp_nfdk_dp.c
+++ b/drivers/net/nfp/nfdk/nfp_nfdk_dp.c
@@ -478,7 +478,7 @@ nfp_net_nfdk_tx_queue_setup(struct rte_eth_dev *dev,
 
 	/*
 	 * Free memory prior to re-allocation if needed. This is the case after
-	 * calling nfp_net_stop
+	 * calling nfp_net_stop().
 	 */
 	if (dev->data->tx_queues[queue_idx] != NULL) {
 		PMD_TX_LOG(DEBUG, "Freeing memory prior to re-allocation %d",
@@ -513,7 +513,7 @@ nfp_net_nfdk_tx_queue_setup(struct rte_eth_dev *dev,
 	txq->tx_count = nb_desc * NFDK_TX_DESC_PER_SIMPLE_PKT;
 	txq->tx_free_thresh = tx_free_thresh;
 
-	/* queue mapping based on firmware configuration */
+	/* Queue mapping based on firmware configuration */
 	txq->qidx = queue_idx;
 	txq->tx_qcidx = queue_idx * hw->stride_tx;
 	txq->qcp_q = hw->tx_bar + NFP_QCP_QUEUE_OFF(txq->tx_qcidx);
@@ -523,7 +523,7 @@ nfp_net_nfdk_tx_queue_setup(struct rte_eth_dev *dev,
 	txq->dma = tz->iova;
 	txq->ktxds = tz->addr;
 
-	/* mbuf pointers array for referencing mbufs linked to TX descriptors */
+	/* Mbuf pointers array for referencing mbufs linked to TX descriptors */
 	txq->txbufs = rte_zmalloc_socket("txq->txbufs",
 			sizeof(*txq->txbufs) * txq->tx_count,
 			RTE_CACHE_LINE_SIZE, socket_id);
@@ -539,7 +539,7 @@ nfp_net_nfdk_tx_queue_setup(struct rte_eth_dev *dev,
 
 	/*
 	 * Telling the HW about the physical address of the TX ring and number
-	 * of descriptors in log2 format
+	 * of descriptors in log2 format.
 	 */
 	nn_cfg_writeq(hw, NFP_NET_CFG_TXR_ADDR(queue_idx), txq->dma);
 	nn_cfg_writeb(hw, NFP_NET_CFG_TXR_SZ(queue_idx), rte_log2_u32(txq->tx_count));
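
The "log2 format" mentioned in the ring-setup hunks means the hardware is
given the ring size as an exponent rather than a count: a queue created with
1024 descriptors has rte_log2_u32(1024) = 10 written to the size register,
from which the firmware reconstructs 1 << 10 entries. Illustrative call,
mirroring the pattern above:

	nn_cfg_writeb(hw, NFP_NET_CFG_TXR_SZ(queue_idx), rte_log2_u32(1024)); /* Writes 10 */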
diff --git a/drivers/net/nfp/nfp_common.c b/drivers/net/nfp/nfp_common.c
index f48e1930dc..130f004b4d 100644
--- a/drivers/net/nfp/nfp_common.c
+++ b/drivers/net/nfp/nfp_common.c
@@ -55,7 +55,7 @@ struct nfp_xstat {
 }
 
 static const struct nfp_xstat nfp_net_xstats[] = {
-	/**
+	/*
 	 * Basic xstats available on both VF and PF.
 	 * Note that in case new statistics of group NFP_XSTAT_GROUP_NET
 	 * are added to this array, they must appear before any statistics
@@ -80,7 +80,7 @@ static const struct nfp_xstat nfp_net_xstats[] = {
 	NFP_XSTAT_NET("bpf_app2_bytes", APP2_BYTES),
 	NFP_XSTAT_NET("bpf_app3_pkts", APP3_FRAMES),
 	NFP_XSTAT_NET("bpf_app3_bytes", APP3_BYTES),
-	/**
+	/*
 	 * MAC xstats available only on PF. These statistics are not available for VFs as the
 	 * PF is not initialized when the VF is initialized as it is still bound to the kernel
 	 * driver. As such, the PMD cannot obtain a CPP handle and access the rtsym_table in order
@@ -175,7 +175,7 @@ static void
 nfp_net_notify_port_speed(struct nfp_net_hw *hw,
 		struct rte_eth_link *link)
 {
-	/**
+	/*
 	 * Read the link status from NFP_NET_CFG_STS. If the link is down
 	 * then write the link speed NFP_NET_CFG_STS_LINK_RATE_UNKNOWN to
 	 * NFP_NET_CFG_STS_NSP_LINK_RATE.
@@ -184,7 +184,7 @@ nfp_net_notify_port_speed(struct nfp_net_hw *hw,
 		nn_cfg_writew(hw, NFP_NET_CFG_STS_NSP_LINK_RATE, NFP_NET_CFG_STS_LINK_RATE_UNKNOWN);
 		return;
 	}
-	/**
+	/*
 	 * Link is up so write the link speed from the eth_table to
 	 * NFP_NET_CFG_STS_NSP_LINK_RATE.
 	 */
@@ -214,7 +214,7 @@ __nfp_net_reconfig(struct nfp_net_hw *hw,
 	nfp_qcp_ptr_add(hw->qcp_cfg, NFP_QCP_WRITE_PTR, 1);
 
 	wait.tv_sec = 0;
-	wait.tv_nsec = 1000000;
+	wait.tv_nsec = 1000000; /* 1ms */
 
 	PMD_DRV_LOG(DEBUG, "Polling for update ack...");
 
@@ -253,7 +253,7 @@ __nfp_net_reconfig(struct nfp_net_hw *hw,
  *
  * @return
  *   - (0) if OK to reconfigure the device.
- *   - (EIO) if I/O err and fail to reconfigure the device.
+ *   - (-EIO) if an I/O error occurs and the device fails to reconfigure.
  */
 int
 nfp_net_reconfig(struct nfp_net_hw *hw,
@@ -297,7 +297,7 @@ nfp_net_reconfig(struct nfp_net_hw *hw,
  *
  * @return
  *   - (0) if OK to reconfigure the device.
- *   - (EIO) if I/O err and fail to reconfigure the device.
+ *   - (-EIO) if an I/O error occurs and the device fails to reconfigure.
  */
 int
 nfp_net_ext_reconfig(struct nfp_net_hw *hw,
@@ -368,9 +368,15 @@ nfp_net_mbox_reconfig(struct nfp_net_hw *hw,
 }
 
 /*
- * Configure an Ethernet device. This function must be invoked first
- * before any other function in the Ethernet API. This function can
- * also be re-invoked when a device is in the stopped state.
+ * Configure an Ethernet device.
+ *
+ * This function must be invoked first before any other function in the Ethernet API.
+ * This function can also be re-invoked when a device is in the stopped state.
+ *
+ * A DPDK app sends info about how many queues to use and how those queues
+ * need to be configured. This is used by the DPDK core and it makes sure no
+ * more queues than those advertised by the driver are requested.
+ * This function is called after that internal process.
  */
 int
 nfp_net_configure(struct rte_eth_dev *dev)
@@ -382,14 +388,6 @@ nfp_net_configure(struct rte_eth_dev *dev)
 
 	hw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private);
 
-	/*
-	 * A DPDK app sends info about how many queues to use and how
-	 * those queues need to be configured. This is used by the
-	 * DPDK core and it makes sure no more queues than those
-	 * advertised by the driver are requested. This function is
-	 * called after that internal process
-	 */
-
 	dev_conf = &dev->data->dev_conf;
 	rxmode = &dev_conf->rxmode;
 	txmode = &dev_conf->txmode;
@@ -557,12 +555,12 @@ nfp_net_set_mac_addr(struct rte_eth_dev *dev,
 	/* Writing new MAC to the specific port BAR address */
 	nfp_net_write_mac(hw, (uint8_t *)mac_addr);
 
-	/* Signal the NIC about the change */
 	update = NFP_NET_CFG_UPDATE_MACADDR;
 	ctrl = hw->ctrl;
 	if ((hw->ctrl & NFP_NET_CFG_CTRL_ENABLE) != 0 &&
 			(hw->cap & NFP_NET_CFG_CTRL_LIVE_ADDR) != 0)
 		ctrl |= NFP_NET_CFG_CTRL_LIVE_ADDR;
+	/* Signal the NIC about the change */
 	if (nfp_net_reconfig(hw, ctrl, update) != 0) {
 		PMD_DRV_LOG(ERR, "MAC address update failed");
 		return -EIO;
@@ -588,7 +586,7 @@ nfp_configure_rx_interrupt(struct rte_eth_dev *dev,
 
 	if (rte_intr_type_get(intr_handle) == RTE_INTR_HANDLE_UIO) {
 		PMD_DRV_LOG(INFO, "VF: enabling RX interrupt with UIO");
-		/* UIO just supports one queue and no LSC*/
+		/* UIO just supports one queue and no LSC */
 		nn_cfg_writeb(hw, NFP_NET_CFG_RXR_VEC(0), 0);
 		if (rte_intr_vec_list_index_set(intr_handle, 0, 0) != 0)
 			return -1;
@@ -597,8 +595,8 @@ nfp_configure_rx_interrupt(struct rte_eth_dev *dev,
 		for (i = 0; i < dev->data->nb_rx_queues; i++) {
 			/*
 			 * The first msix vector is reserved for non
-			 * efd interrupts
-			*/
+			 * efd interrupts.
+			 */
 			nn_cfg_writeb(hw, NFP_NET_CFG_RXR_VEC(i), i + 1);
 			if (rte_intr_vec_list_index_set(intr_handle, i, i + 1) != 0)
 				return -1;
@@ -706,10 +704,6 @@ nfp_net_promisc_enable(struct rte_eth_dev *dev)
 	new_ctrl = hw->ctrl | NFP_NET_CFG_CTRL_PROMISC;
 	update = NFP_NET_CFG_UPDATE_GEN;
 
-	/*
-	 * DPDK sets promiscuous mode on just after this call assuming
-	 * it can not fail ...
-	 */
 	ret = nfp_net_reconfig(hw, new_ctrl, update);
 	if (ret != 0)
 		return ret;
@@ -737,10 +731,6 @@ nfp_net_promisc_disable(struct rte_eth_dev *dev)
 	new_ctrl = hw->ctrl & ~NFP_NET_CFG_CTRL_PROMISC;
 	update = NFP_NET_CFG_UPDATE_GEN;
 
-	/*
-	 * DPDK sets promiscuous mode off just before this call
-	 * assuming it can not fail ...
-	 */
 	ret = nfp_net_reconfig(hw, new_ctrl, update);
 	if (ret != 0)
 		return ret;
@@ -751,7 +741,7 @@ nfp_net_promisc_disable(struct rte_eth_dev *dev)
 }
 
 /*
- * return 0 means link status changed, -1 means not changed
+ * Returns 0 if the link status changed, -1 if it has not changed.
  *
  * Wait to complete is needed as it can take up to 9 seconds to get the Link
  * status.
@@ -793,7 +783,7 @@ nfp_net_link_update(struct rte_eth_dev *dev,
 				}
 			}
 		} else {
-			/**
+			/*
 			 * Shift and mask nn_link_status so that it is effectively the value
 			 * at offset NFP_NET_CFG_STS_NSP_LINK_RATE.
 			 */
@@ -812,7 +802,7 @@ nfp_net_link_update(struct rte_eth_dev *dev,
 			PMD_DRV_LOG(INFO, "NIC Link is Down");
 	}
 
-	/**
+	/*
 	 * Notify the port to update the speed value in the CTRL BAR from NSP.
 	 * Not applicable for VFs as the associated PF is still attached to the
 	 * kernel driver.
@@ -833,11 +823,9 @@ nfp_net_stats_get(struct rte_eth_dev *dev,
 
 	hw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private);
 
-	/* RTE_ETHDEV_QUEUE_STAT_CNTRS default value is 16 */
-
 	memset(&nfp_dev_stats, 0, sizeof(nfp_dev_stats));
 
-	/* reading per RX ring stats */
+	/* Reading per RX ring stats */
 	for (i = 0; i < dev->data->nb_rx_queues; i++) {
 		if (i == RTE_ETHDEV_QUEUE_STAT_CNTRS)
 			break;
@@ -855,7 +843,7 @@ nfp_net_stats_get(struct rte_eth_dev *dev,
 				hw->eth_stats_base.q_ibytes[i];
 	}
 
-	/* reading per TX ring stats */
+	/* Reading per TX ring stats */
 	for (i = 0; i < dev->data->nb_tx_queues; i++) {
 		if (i == RTE_ETHDEV_QUEUE_STAT_CNTRS)
 			break;
@@ -889,7 +877,7 @@ nfp_net_stats_get(struct rte_eth_dev *dev,
 
 	nfp_dev_stats.obytes -= hw->eth_stats_base.obytes;
 
-	/* reading general device stats */
+	/* Reading general device stats */
 	nfp_dev_stats.ierrors =
 			nn_cfg_readq(hw, NFP_NET_CFG_STATS_RX_ERRORS);
 
@@ -915,6 +903,10 @@ nfp_net_stats_get(struct rte_eth_dev *dev,
 	return -EINVAL;
 }
 
+/*
+ * hw->eth_stats_base records the per-counter starting point.
+ * Let's update it now.
+ */
 int
 nfp_net_stats_reset(struct rte_eth_dev *dev)
 {
@@ -923,12 +915,7 @@ nfp_net_stats_reset(struct rte_eth_dev *dev)
 
 	hw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private);
 
-	/*
-	 * hw->eth_stats_base records the per counter starting point.
-	 * Lets update it now
-	 */
-
-	/* reading per RX ring stats */
+	/* Reading per RX ring stats */
 	for (i = 0; i < dev->data->nb_rx_queues; i++) {
 		if (i == RTE_ETHDEV_QUEUE_STAT_CNTRS)
 			break;
@@ -940,7 +927,7 @@ nfp_net_stats_reset(struct rte_eth_dev *dev)
 				nn_cfg_readq(hw, NFP_NET_CFG_RXR_STATS(i) + 0x8);
 	}
 
-	/* reading per TX ring stats */
+	/* Reading per TX ring stats */
 	for (i = 0; i < dev->data->nb_tx_queues; i++) {
 		if (i == RTE_ETHDEV_QUEUE_STAT_CNTRS)
 			break;
@@ -964,7 +951,7 @@ nfp_net_stats_reset(struct rte_eth_dev *dev)
 	hw->eth_stats_base.obytes =
 			nn_cfg_readq(hw, NFP_NET_CFG_STATS_TX_OCTETS);
 
-	/* reading general device stats */
+	/* Reading general device stats */
 	hw->eth_stats_base.ierrors =
 			nn_cfg_readq(hw, NFP_NET_CFG_STATS_RX_ERRORS);
 
@@ -1032,7 +1019,7 @@ nfp_net_xstats_value(const struct rte_eth_dev *dev,
 	if (raw)
 		return value;
 
-	/**
+	/*
 	 * A baseline value of each statistic counter is recorded when stats are "reset".
 	 * Thus, the value returned by this function need to be decremented by this
 	 * baseline value. The result is the count of this statistic since the last time
@@ -1041,12 +1028,12 @@ nfp_net_xstats_value(const struct rte_eth_dev *dev,
 	return value - hw->eth_xstats_base[index].value;
 }
 
+/* NOTE: All callers ensure dev is always set. */
 int
 nfp_net_xstats_get_names(struct rte_eth_dev *dev,
 		struct rte_eth_xstat_name *xstats_names,
 		unsigned int size)
 {
-	/* NOTE: All callers ensure dev is always set. */
 	uint32_t id;
 	uint32_t nfp_size;
 	uint32_t read_size;
@@ -1066,12 +1053,12 @@ nfp_net_xstats_get_names(struct rte_eth_dev *dev,
 	return read_size;
 }
 
+/* NOTE: All callers ensure dev is always set. */
 int
 nfp_net_xstats_get(struct rte_eth_dev *dev,
 		struct rte_eth_xstat *xstats,
 		unsigned int n)
 {
-	/* NOTE: All callers ensure dev is always set. */
 	uint32_t id;
 	uint32_t nfp_size;
 	uint32_t read_size;
@@ -1092,16 +1079,16 @@ nfp_net_xstats_get(struct rte_eth_dev *dev,
 	return read_size;
 }
 
+/*
+ * NOTE: The only caller rte_eth_xstats_get_names_by_id() ensures dev,
+ * ids, xstats_names and size are valid, and non-NULL.
+ */
 int
 nfp_net_xstats_get_names_by_id(struct rte_eth_dev *dev,
 		const uint64_t *ids,
 		struct rte_eth_xstat_name *xstats_names,
 		unsigned int size)
 {
-	/**
-	 * NOTE: The only caller rte_eth_xstats_get_names_by_id() ensures dev,
-	 * ids, xstats_names and size are valid, and non-NULL.
-	 */
 	uint32_t i;
 	uint32_t read_size;
 
@@ -1123,16 +1110,16 @@ nfp_net_xstats_get_names_by_id(struct rte_eth_dev *dev,
 	return read_size;
 }
 
+/*
+ * NOTE: The only caller rte_eth_xstats_get_by_id() ensures dev,
+ * ids, values and n are valid, and non-NULL.
+ */
 int
 nfp_net_xstats_get_by_id(struct rte_eth_dev *dev,
 		const uint64_t *ids,
 		uint64_t *values,
 		unsigned int n)
 {
-	/**
-	 * NOTE: The only caller rte_eth_xstats_get_by_id() ensures dev,
-	 * ids, values and n are valid, and non-NULL.
-	 */
 	uint32_t i;
 	uint32_t read_size;
 
@@ -1167,10 +1154,7 @@ nfp_net_xstats_reset(struct rte_eth_dev *dev)
 		hw->eth_xstats_base[id].id = id;
 		hw->eth_xstats_base[id].value = nfp_net_xstats_value(dev, id, true);
 	}
-	/**
-	 * Successfully reset xstats, now call function to reset basic stats
-	 * return value is then based on the success of that function
-	 */
+	/* Successfully reset xstats, now call function to reset basic stats. */
 	return nfp_net_stats_reset(dev);
 }
 
@@ -1217,7 +1201,7 @@ nfp_net_infos_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 	dev_info->max_rx_queues = (uint16_t)hw->max_rx_queues;
 	dev_info->max_tx_queues = (uint16_t)hw->max_tx_queues;
 	dev_info->min_rx_bufsize = RTE_ETHER_MIN_MTU;
-	/*
+	/**
 	 * The maximum rx packet length (max_rx_pktlen) is set to the
 	 * maximum supported frame size that the NFP can handle. This
 	 * includes layer 2 headers, CRC and other metadata that can
@@ -1358,7 +1342,7 @@ nfp_net_common_init(struct rte_pci_device *pci_dev,
 
 	nfp_net_init_metadata_format(hw);
 
-	/* read the Rx offset configured from firmware */
+	/* Read the Rx offset configured from firmware */
 	if (hw->ver.major < 2)
 		hw->rx_offset = NFP_NET_RX_OFFSET;
 	else
@@ -1375,7 +1359,6 @@ const uint32_t *
 nfp_net_supported_ptypes_get(struct rte_eth_dev *dev)
 {
 	static const uint32_t ptypes[] = {
-		/* refers to nfp_net_set_hash() */
 		RTE_PTYPE_INNER_L3_IPV4,
 		RTE_PTYPE_INNER_L3_IPV6,
 		RTE_PTYPE_INNER_L3_IPV6_EXT,
@@ -1449,10 +1432,8 @@ nfp_net_dev_link_status_print(struct rte_eth_dev *dev)
 			pci_dev->addr.devid, pci_dev->addr.function);
 }
 
-/* Interrupt configuration and handling */
-
 /*
- * nfp_net_irq_unmask - Unmask an interrupt
+ * Unmask an interrupt
  *
  * If MSI-X auto-masking is enabled clear the mask bit, otherwise
  * clear the ICR for the entry.
@@ -1478,16 +1459,14 @@ nfp_net_irq_unmask(struct rte_eth_dev *dev)
 	}
 }
 
-/*
+/**
  * Interrupt handler which shall be registered for alarm callback for delayed
  * handling specific interrupt to wait for the stable nic state. As the NIC
  * interrupt state is not stable for nfp after link is just down, it needs
  * to wait 4 seconds to get the stable status.
  *
- * @param handle   Pointer to interrupt handle.
- * @param param    The address of parameter (struct rte_eth_dev *)
- *
- * @return  void
+ * @param param
+ *   The address of parameter (struct rte_eth_dev *)
  */
 void
 nfp_net_dev_interrupt_delayed_handler(void *param)
@@ -1516,13 +1495,12 @@ nfp_net_dev_interrupt_handler(void *param)
 
 	nfp_net_link_update(dev, 0);
 
-	/* likely to up */
+	/* Likely to come up */
 	if (link.link_status == 0) {
-		/* handle it 1 sec later, wait it being stable */
+		/* Handle it 1 sec later, waiting for it to stabilize */
 		timeout = NFP_NET_LINK_UP_CHECK_TIMEOUT;
-		/* likely to down */
-	} else {
-		/* handle it 4 sec later, wait it being stable */
+	} else {  /* Likely to go down */
+		/* Handle it 4 sec later, waiting for it to stabilize */
 		timeout = NFP_NET_LINK_DOWN_CHECK_TIMEOUT;
 	}
 
@@ -1543,7 +1521,7 @@ nfp_net_dev_mtu_set(struct rte_eth_dev *dev,
 
 	hw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private);
 
-	/* mtu setting is forbidden if port is started */
+	/* MTU setting is forbidden if port is started */
 	if (dev->data->dev_started) {
 		PMD_DRV_LOG(ERR, "port %d must be stopped before configuration",
 				dev->data->port_id);
@@ -1557,7 +1535,7 @@ nfp_net_dev_mtu_set(struct rte_eth_dev *dev,
 		return -ERANGE;
 	}
 
-	/* writing to configuration space */
+	/* Writing to configuration space */
 	nn_cfg_writel(hw, NFP_NET_CFG_MTU, mtu);
 
 	hw->mtu = mtu;
@@ -1634,7 +1612,7 @@ nfp_net_rss_reta_write(struct rte_eth_dev *dev,
 
 	/*
 	 * Update Redirection Table. There are 128 8bit-entries which can be
-	 * manage as 32 32bit-entries
+	 * managed as 32 32bit-entries.
 	 */
 	for (i = 0; i < reta_size; i += 4) {
 		/* Handling 4 RSS entries per loop */
@@ -1653,8 +1631,8 @@ nfp_net_rss_reta_write(struct rte_eth_dev *dev,
 		for (j = 0; j < 4; j++) {
 			if ((mask & (0x1 << j)) == 0)
 				continue;
+			/* Clearing the entry bits */
 			if (mask != 0xF)
-				/* Clearing the entry bits */
 				reta &= ~(0xFF << (8 * j));
 			reta |= reta_conf[idx].reta[shift + j] << (8 * j);
 		}
@@ -1689,7 +1667,7 @@ nfp_net_reta_update(struct rte_eth_dev *dev,
 	return 0;
 }
 
- /* Query Redirection Table(RETA) of Receive Side Scaling of Ethernet device. */
+/* Query Redirection Table(RETA) of Receive Side Scaling of Ethernet device. */
 int
 nfp_net_reta_query(struct rte_eth_dev *dev,
 		struct rte_eth_rss_reta_entry64 *reta_conf,
@@ -1717,7 +1695,7 @@ nfp_net_reta_query(struct rte_eth_dev *dev,
 
 	/*
 	 * Reading Redirection Table. There are 128 8bit-entries which can be
-	 * manage as 32 32bit-entries
+	 * managed as 32 32bit-entries.
 	 */
 	for (i = 0; i < reta_size; i += 4) {
 		/* Handling 4 RSS entries per loop */
@@ -1751,7 +1729,7 @@ nfp_net_rss_hash_write(struct rte_eth_dev *dev,
 
 	hw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private);
 
-	/* Writing the key byte a byte */
+	/* Writing the key byte by byte */
 	for (i = 0; i < rss_conf->rss_key_len; i++) {
 		memcpy(&key, &rss_conf->rss_key[i], 1);
 		nn_cfg_writeb(hw, NFP_NET_CFG_RSS_KEY + i, key);
@@ -1786,7 +1764,7 @@ nfp_net_rss_hash_write(struct rte_eth_dev *dev,
 	cfg_rss_ctrl |= NFP_NET_CFG_RSS_MASK;
 	cfg_rss_ctrl |= NFP_NET_CFG_RSS_TOEPLITZ;
 
-	/* configuring where to apply the RSS hash */
+	/* Configuring where to apply the RSS hash */
 	nn_cfg_writel(hw, NFP_NET_CFG_RSS_CTRL, cfg_rss_ctrl);
 
 	/* Writing the key size */
@@ -1809,7 +1787,7 @@ nfp_net_rss_hash_update(struct rte_eth_dev *dev,
 
 	/* Checking if RSS is enabled */
 	if ((hw->ctrl & NFP_NET_CFG_CTRL_RSS_ANY) == 0) {
-		if (rss_hf != 0) { /* Enable RSS? */
+		if (rss_hf != 0) {
 			PMD_DRV_LOG(ERR, "RSS unsupported");
 			return -EINVAL;
 		}
@@ -2010,7 +1988,7 @@ nfp_net_set_vxlan_port(struct nfp_net_hw *hw,
 
 /*
  * The firmware with NFD3 can not handle DMA address requiring more
- * than 40 bits
+ * than 40 bits.
  */
 int
 nfp_net_check_dma_mask(struct nfp_net_hw *hw,
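
The redirection-table comments above ("128 8bit-entries which can be managed
as 32 32bit-entries") describe plain byte packing: entry j of a group of four
occupies bits [8 * j + 7 : 8 * j] of one 32-bit register word. A minimal
sketch of that packing (the entry values are hypothetical):

	uint32_t reta = 0;
	uint8_t entry[4] = {3, 1, 0, 2};    /* Four example queue indices */
	int j;

	for (j = 0; j < 4; j++)
		reta |= (uint32_t)entry[j] << (8 * j);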
diff --git a/drivers/net/nfp/nfp_common.h b/drivers/net/nfp/nfp_common.h
index 9cb889c4a6..6a36e2b04c 100644
--- a/drivers/net/nfp/nfp_common.h
+++ b/drivers/net/nfp/nfp_common.h
@@ -53,7 +53,7 @@ enum nfp_app_fw_id {
 	NFP_APP_FW_FLOWER_NIC             = 0x3,
 };
 
-/* nfp_qcp_ptr - Read or Write Pointer of a queue */
+/* Read or Write Pointer of a queue */
 enum nfp_qcp_ptr {
 	NFP_QCP_READ_PTR = 0,
 	NFP_QCP_WRITE_PTR
@@ -72,15 +72,15 @@ struct nfp_net_tlv_caps {
 };
 
 struct nfp_pf_dev {
-	/* Backpointer to associated pci device */
+	/** Backpointer to associated pci device */
 	struct rte_pci_device *pci_dev;
 
 	enum nfp_app_fw_id app_fw_id;
 
-	/* Pointer to the app running on the PF */
+	/** Pointer to the app running on the PF */
 	void *app_fw_priv;
 
-	/* The eth table reported by firmware */
+	/** The eth table reported by firmware */
 	struct nfp_eth_table *nfp_eth_table;
 
 	uint8_t *ctrl_bar;
@@ -94,17 +94,17 @@ struct nfp_pf_dev {
 	struct nfp_hwinfo *hwinfo;
 	struct nfp_rtsym_table *sym_tbl;
 
-	/* service id of cpp bridge service */
+	/** Service id of cpp bridge service */
 	uint32_t cpp_bridge_id;
 };
 
 struct nfp_app_fw_nic {
-	/* Backpointer to the PF device */
+	/** Backpointer to the PF device */
 	struct nfp_pf_dev *pf_dev;
 
-	/*
-	 * Array of physical ports belonging to the this CoreNIC app
-	 * This is really a list of vNIC's. One for each physical port
+	/**
+	 * Array of physical ports belonging to this CoreNIC app.
+	 * This is really a list of vNICs, one for each physical port.
 	 */
 	struct nfp_net_hw *ports[NFP_MAX_PHYPORTS];
 
@@ -113,13 +113,13 @@ struct nfp_app_fw_nic {
 };
 
 struct nfp_net_hw {
-	/* Backpointer to the PF this port belongs to */
+	/** Backpointer to the PF this port belongs to */
 	struct nfp_pf_dev *pf_dev;
 
-	/* Backpointer to the eth_dev of this port*/
+	/** Backpointer to the eth_dev of this port */
 	struct rte_eth_dev *eth_dev;
 
-	/* Info from the firmware */
+	/** Info from the firmware */
 	struct nfp_net_fw_ver ver;
 	uint32_t cap;
 	uint32_t max_mtu;
@@ -130,7 +130,7 @@ struct nfp_net_hw {
 	/** NFP ASIC params */
 	const struct nfp_dev_info *dev_info;
 
-	/* Current values for control */
+	/** Current values for control */
 	uint32_t ctrl;
 
 	uint8_t *ctrl_bar;
@@ -156,7 +156,7 @@ struct nfp_net_hw {
 
 	struct rte_ether_addr mac_addr;
 
-	/* Records starting point for counters */
+	/** Records starting point for counters */
 	struct rte_eth_stats eth_stats_base;
 	struct rte_eth_xstat *eth_xstats_base;
 
@@ -166,9 +166,9 @@ struct nfp_net_hw {
 	uint8_t *mac_stats_bar;
 	uint8_t *mac_stats;
 
-	/* Sequential physical port number, only valid for CoreNIC firmware */
+	/** Sequential physical port number, only valid for CoreNIC firmware */
 	uint8_t idx;
-	/* Internal port number as seen from NFP */
+	/** Internal port number as seen from NFP */
 	uint8_t nfp_idx;
 
 	struct nfp_net_tlv_caps tlv_caps;
@@ -240,10 +240,6 @@ nn_writeq(uint64_t val,
 	nn_writel(val, addr);
 }
 
-/*
- * Functions to read/write from/to Config BAR
- * Performs any endian conversion necessary.
- */
 static inline uint8_t
 nn_cfg_readb(struct nfp_net_hw *hw,
 		uint32_t off)
@@ -304,11 +300,15 @@ nn_cfg_writeq(struct nfp_net_hw *hw,
 	nn_writeq(rte_cpu_to_le_64(val), hw->ctrl_bar + off);
 }
 
-/*
- * nfp_qcp_ptr_add - Add the value to the selected pointer of a queue
- * @q: Base address for queue structure
- * @ptr: Add to the Read or Write pointer
- * @val: Value to add to the queue pointer
+/**
+ * Add the value to the selected pointer of a queue.
+ *
+ * @param q
+ *   Base address for queue structure
+ * @param ptr
+ *   Add to the read or write pointer
+ * @param val
+ *   Value to add to the queue pointer
  */
 static inline void
 nfp_qcp_ptr_add(uint8_t *q,
@@ -325,10 +325,13 @@ nfp_qcp_ptr_add(uint8_t *q,
 	nn_writel(rte_cpu_to_le_32(val), q + off);
 }
 
-/*
- * nfp_qcp_read - Read the current Read/Write pointer value for a queue
- * @q:  Base address for queue structure
- * @ptr: Read or Write pointer
+/**
+ * Read the current read/write pointer value for a queue.
+ *
+ * @param q
+ *   Base address for queue structure
+ * @param ptr
+ *   Read or Write pointer
  */
 static inline uint32_t
 nfp_qcp_read(uint8_t *q,
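
The queue-pointer helpers documented above share one pattern: select a
register offset from the read/write selector, then perform a single 32-bit
access. A sketch of the add helper under that assumption; the
NFP_QCP_QUEUE_ADD_RPTR/NFP_QCP_QUEUE_ADD_WPTR offsets are assumed to come
from the driver's headers (illustrative only):

	static inline void
	example_qcp_ptr_add(uint8_t *q, enum nfp_qcp_ptr ptr, uint32_t val)
	{
		uint32_t off;

		/* Pick the add register matching the requested pointer */
		if (ptr == NFP_QCP_READ_PTR)
			off = NFP_QCP_QUEUE_ADD_RPTR;
		else
			off = NFP_QCP_QUEUE_ADD_WPTR;

		nn_writel(rte_cpu_to_le_32(val), q + off);
	}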
diff --git a/drivers/net/nfp/nfp_cpp_bridge.c b/drivers/net/nfp/nfp_cpp_bridge.c
index 222cfdcbc3..8f5271cde9 100644
--- a/drivers/net/nfp/nfp_cpp_bridge.c
+++ b/drivers/net/nfp/nfp_cpp_bridge.c
@@ -1,8 +1,6 @@
 /* SPDX-License-Identifier: BSD-3-Clause
  * Copyright (c) 2014-2021 Netronome Systems, Inc.
  * All rights reserved.
- *
- * Small portions derived from code Copyright(c) 2010-2015 Intel Corporation.
  */
 
 #include "nfp_cpp_bridge.h"
@@ -48,7 +46,7 @@ nfp_map_service(uint32_t service_id)
 
 	/*
 	 * Find a service core with the least number of services already
-	 * registered to it
+	 * registered to it.
 	 */
 	while (slcore_count--) {
 		service_count = rte_service_lcore_count_services(slcore_array[slcore_count]);
@@ -100,7 +98,7 @@ nfp_enable_cpp_service(struct nfp_pf_dev *pf_dev)
 	pf_dev->cpp_bridge_id = service_id;
 	PMD_INIT_LOG(INFO, "NFP cpp service registered");
 
-	/* Map it to available service core*/
+	/* Map it to available service core */
 	ret = nfp_map_service(service_id);
 	if (ret != 0) {
 		PMD_INIT_LOG(DEBUG, "Could not map nfp cpp service");
diff --git a/drivers/net/nfp/nfp_ctrl.h b/drivers/net/nfp/nfp_ctrl.h
index 55073c3cea..cd0a2f92a8 100644
--- a/drivers/net/nfp/nfp_ctrl.h
+++ b/drivers/net/nfp/nfp_ctrl.h
@@ -20,7 +20,7 @@
 /* Offset in Freelist buffer where packet starts on RX */
 #define NFP_NET_RX_OFFSET               32
 
-/* working with metadata api (NFD version > 3.0) */
+/* Working with metadata API (NFD version > 3.0) */
 #define NFP_NET_META_FIELD_SIZE         4
 #define NFP_NET_META_FIELD_MASK ((1 << NFP_NET_META_FIELD_SIZE) - 1)
 #define NFP_NET_META_HEADER_SIZE        4
@@ -36,14 +36,14 @@
 						NFP_NET_META_VLAN_TPID_MASK)
 
 /* Prepend field types */
-#define NFP_NET_META_HASH               1 /* next field carries hash type */
+#define NFP_NET_META_HASH               1 /* Next field carries hash type */
 #define NFP_NET_META_VLAN               4
 #define NFP_NET_META_PORTID             5
 #define NFP_NET_META_IPSEC              9
 
 #define NFP_META_PORT_ID_CTRL           ~0U
 
-/* Hash type pre-pended when a RSS hash was computed */
+/* Hash type prepended when an RSS hash was computed */
 #define NFP_NET_RSS_NONE                0
 #define NFP_NET_RSS_IPV4                1
 #define NFP_NET_RSS_IPV6                2
@@ -102,7 +102,7 @@
 #define   NFP_NET_CFG_CTRL_IRQMOD         (0x1 << 18) /* Interrupt moderation */
 #define   NFP_NET_CFG_CTRL_RINGPRIO       (0x1 << 19) /* Ring priorities */
 #define   NFP_NET_CFG_CTRL_MSIXAUTO       (0x1 << 20) /* MSI-X auto-masking */
-#define   NFP_NET_CFG_CTRL_TXRWB          (0x1 << 21) /* Write-back of TX ring*/
+#define   NFP_NET_CFG_CTRL_TXRWB          (0x1 << 21) /* Write-back of TX ring */
 #define   NFP_NET_CFG_CTRL_L2SWITCH       (0x1 << 22) /* L2 Switch */
 #define   NFP_NET_CFG_CTRL_TXVLAN_V2      (0x1 << 23) /* Enable VLAN insert with metadata */
 #define   NFP_NET_CFG_CTRL_VXLAN          (0x1 << 24) /* Enable VXLAN */
@@ -111,7 +111,7 @@
 #define   NFP_NET_CFG_CTRL_LSO2           (0x1 << 28) /* LSO/TSO (version 2) */
 #define   NFP_NET_CFG_CTRL_RSS2           (0x1 << 29) /* RSS (version 2) */
 #define   NFP_NET_CFG_CTRL_CSUM_COMPLETE  (0x1 << 30) /* Checksum complete */
-#define   NFP_NET_CFG_CTRL_LIVE_ADDR      (0x1U << 31)/* live MAC addr change */
+#define   NFP_NET_CFG_CTRL_LIVE_ADDR      (0x1U << 31) /* Live MAC addr change */
 #define NFP_NET_CFG_UPDATE              0x0004
 #define   NFP_NET_CFG_UPDATE_GEN          (0x1 <<  0) /* General update */
 #define   NFP_NET_CFG_UPDATE_RING         (0x1 <<  1) /* Ring config change */
@@ -124,7 +124,7 @@
 #define   NFP_NET_CFG_UPDATE_IRQMOD       (0x1 <<  8) /* IRQ mod change */
 #define   NFP_NET_CFG_UPDATE_VXLAN        (0x1 <<  9) /* VXLAN port change */
 #define   NFP_NET_CFG_UPDATE_MACADDR      (0x1 << 11) /* MAC address change */
-#define   NFP_NET_CFG_UPDATE_MBOX         (0x1 << 12) /**< Mailbox update */
+#define   NFP_NET_CFG_UPDATE_MBOX         (0x1 << 12) /* Mailbox update */
 #define   NFP_NET_CFG_UPDATE_ERR          (0x1U << 31) /* A error occurred */
 #define NFP_NET_CFG_TXRS_ENABLE         0x0008
 #define NFP_NET_CFG_RXRS_ENABLE         0x0010
@@ -205,7 +205,7 @@ struct nfp_net_fw_ver {
  * @NFP_NET_CFG_SPARE_ADDR:  DMA address for ME code to use (e.g. YDS-155 fix)
  */
 #define NFP_NET_CFG_SPARE_ADDR          0x0050
-/**
+/*
  * NFP6000/NFP4000 - Prepend configuration
  */
 #define NFP_NET_CFG_RX_OFFSET		0x0050
@@ -280,7 +280,7 @@ struct nfp_net_fw_ver {
  * @NFP_NET_CFG_TXR_BASE:    Base offset for TX ring configuration
  * @NFP_NET_CFG_TXR_ADDR:    Per TX ring DMA address (8B entries)
  * @NFP_NET_CFG_TXR_WB_ADDR: Per TX ring write back DMA address (8B entries)
- * @NFP_NET_CFG_TXR_SZ:      Per TX ring ring size (1B entries)
+ * @NFP_NET_CFG_TXR_SZ:      Per TX ring size (1B entries)
  * @NFP_NET_CFG_TXR_VEC:     Per TX ring MSI-X table entry (1B entries)
  * @NFP_NET_CFG_TXR_PRIO:    Per TX ring priority (1B entries)
  * @NFP_NET_CFG_TXR_IRQ_MOD: Per TX ring interrupt moderation (4B entries)
@@ -299,7 +299,7 @@ struct nfp_net_fw_ver {
  * RX ring configuration (0x0800 - 0x0c00)
  * @NFP_NET_CFG_RXR_BASE:    Base offset for RX ring configuration
  * @NFP_NET_CFG_RXR_ADDR:    Per TX ring DMA address (8B entries)
- * @NFP_NET_CFG_RXR_SZ:      Per TX ring ring size (1B entries)
+ * @NFP_NET_CFG_RXR_SZ:      Per RX ring size (1B entries)
  * @NFP_NET_CFG_RXR_VEC:     Per TX ring MSI-X table entry (1B entries)
  * @NFP_NET_CFG_RXR_PRIO:    Per TX ring priority (1B entries)
  * @NFP_NET_CFG_RXR_IRQ_MOD: Per TX ring interrupt moderation (4B entries)
@@ -330,7 +330,7 @@ struct nfp_net_fw_ver {
 
 /*
  * General device stats (0x0d00 - 0x0d90)
- * all counters are 64bit.
+ * All counters are 64bit.
  */
 #define NFP_NET_CFG_STATS_BASE          0x0d00
 #define NFP_NET_CFG_STATS_RX_DISCARDS   (NFP_NET_CFG_STATS_BASE + 0x00)
@@ -364,7 +364,7 @@ struct nfp_net_fw_ver {
 
 /*
  * Per ring stats (0x1000 - 0x1800)
- * options, 64bit per entry
+ * Options, 64bit per entry
  * @NFP_NET_CFG_TXR_STATS:   TX ring statistics (Packet and Byte count)
  * @NFP_NET_CFG_RXR_STATS:   RX ring statistics (Packet and Byte count)
  */
@@ -375,9 +375,9 @@ struct nfp_net_fw_ver {
 #define NFP_NET_CFG_RXR_STATS(_x)       (NFP_NET_CFG_RXR_STATS_BASE + \
 					 ((_x) * 0x10))
 
-/**
+/*
  * Mac stats (0x0000 - 0x0200)
- * all counters are 64bit.
+ * All counters are 64bit.
  */
 #define NFP_MAC_STATS_BASE                0x0000
 #define NFP_MAC_STATS_SIZE                0x0200
@@ -558,9 +558,11 @@ struct nfp_net_fw_ver {
 
 int nfp_net_tlv_caps_parse(struct rte_eth_dev *dev);
 
-/*
- * nfp_net_cfg_ctrl_rss() - Get RSS flag based on firmware's capability
- * @hw_cap: The firmware's capabilities
+/**
+ * Get RSS flag based on firmware's capability
+ *
+ * @param hw_cap
+ *   The firmware's capabilities
  */
 static inline uint32_t
 nfp_net_cfg_ctrl_rss(uint32_t hw_cap)
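
The RSS helper documented at the end of this header amounts to a capability
check: prefer the version-2 RSS control bit when the firmware advertises it,
otherwise fall back to version 1. A minimal sketch under that assumption
(the authoritative body lives in the driver):

	static inline uint32_t
	example_cfg_ctrl_rss(uint32_t hw_cap)
	{
		if ((hw_cap & NFP_NET_CFG_CTRL_RSS2) != 0)
			return NFP_NET_CFG_CTRL_RSS2;

		return NFP_NET_CFG_CTRL_RSS;
	}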
diff --git a/drivers/net/nfp/nfp_ethdev.c b/drivers/net/nfp/nfp_ethdev.c
index 72abc4c16e..1651ac2455 100644
--- a/drivers/net/nfp/nfp_ethdev.c
+++ b/drivers/net/nfp/nfp_ethdev.c
@@ -66,7 +66,7 @@ nfp_net_start(struct rte_eth_dev *dev)
 	/* Enabling the required queues in the device */
 	nfp_net_enable_queues(dev);
 
-	/* check and configure queue intr-vector mapping */
+	/* Check and configure queue intr-vector mapping */
 	if (dev->data->dev_conf.intr_conf.rxq != 0) {
 		if (app_fw_nic->multiport) {
 			PMD_INIT_LOG(ERR, "PMD rx interrupt is not supported "
@@ -76,7 +76,7 @@ nfp_net_start(struct rte_eth_dev *dev)
 		if (rte_intr_type_get(intr_handle) == RTE_INTR_HANDLE_UIO) {
 			/*
 			 * Better not to share LSC with RX interrupts.
-			 * Unregistering LSC interrupt handler
+			 * Unregistering LSC interrupt handler.
 			 */
 			rte_intr_callback_unregister(pci_dev->intr_handle,
 					nfp_net_dev_interrupt_handler, (void *)dev);
@@ -150,7 +150,7 @@ nfp_net_start(struct rte_eth_dev *dev)
 
 	/*
 	 * Allocating rte mbufs for configured rx queues.
-	 * This requires queues being enabled before
+	 * This requires the queues to be enabled beforehand.
 	 */
 	if (nfp_net_rx_freelist_setup(dev) != 0) {
 		ret = -ENOMEM;
@@ -273,11 +273,11 @@ nfp_net_close(struct rte_eth_dev *dev)
 	/* Clear ipsec */
 	nfp_ipsec_uninit(dev);
 
-	/* Cancel possible impending LSC work here before releasing the port*/
+	/* Cancel possible impending LSC work here before releasing the port */
 	rte_eal_alarm_cancel(nfp_net_dev_interrupt_delayed_handler, (void *)dev);
 
 	/* Only free PF resources after all physical ports have been closed */
-	/* Mark this port as unused and free device priv resources*/
+	/* Mark this port as unused and free device priv resources */
 	nn_cfg_writeb(hw, NFP_NET_CFG_LSC, 0xff);
 	app_fw_nic->ports[hw->idx] = NULL;
 	rte_eth_dev_release_port(dev);
@@ -300,15 +300,10 @@ nfp_net_close(struct rte_eth_dev *dev)
 
 	rte_intr_disable(pci_dev->intr_handle);
 
-	/* unregister callback func from eal lib */
+	/* Unregister callback func from eal lib */
 	rte_intr_callback_unregister(pci_dev->intr_handle,
 			nfp_net_dev_interrupt_handler, (void *)dev);
 
-	/*
-	 * The ixgbe PMD disables the pcie master on the
-	 * device. The i40e does not...
-	 */
-
 	return 0;
 }
 
@@ -497,7 +492,7 @@ nfp_net_init(struct rte_eth_dev *eth_dev)
 
 	/*
 	 * Use PF array of physical ports to get pointer to
-	 * this specific port
+	 * this specific port.
 	 */
 	hw = app_fw_nic->ports[port];
 
@@ -779,7 +774,7 @@ nfp_init_app_fw_nic(struct nfp_pf_dev *pf_dev,
 
 	/*
 	 * For coreNIC the number of vNICs exposed should be the same as the
-	 * number of physical ports
+	 * number of physical ports.
 	 */
 	if (total_vnics != nfp_eth_table->count) {
 		PMD_INIT_LOG(ERR, "Total physical ports do not match number of vNICs");
@@ -787,7 +782,7 @@ nfp_init_app_fw_nic(struct nfp_pf_dev *pf_dev,
 		goto app_cleanup;
 	}
 
-	/* Populate coreNIC app properties*/
+	/* Populate coreNIC app properties */
 	app_fw_nic->total_phyports = total_vnics;
 	app_fw_nic->pf_dev = pf_dev;
 	if (total_vnics > 1)
@@ -842,8 +837,9 @@ nfp_init_app_fw_nic(struct nfp_pf_dev *pf_dev,
 
 		eth_dev->device = &pf_dev->pci_dev->device;
 
-		/* ctrl/tx/rx BAR mappings and remaining init happens in
-		 * nfp_net_init
+		/*
+		 * Ctrl/tx/rx BAR mappings and remaining init happens in
+		 * @nfp_net_init()
 		 */
 		ret = nfp_net_init(eth_dev);
 		if (ret != 0) {
@@ -970,7 +966,7 @@ nfp_pf_init(struct rte_pci_device *pci_dev)
 	pf_dev->pci_dev = pci_dev;
 	pf_dev->nfp_eth_table = nfp_eth_table;
 
-	/* configure access to tx/rx vNIC BARs */
+	/* Configure access to tx/rx vNIC BARs */
 	addr = nfp_qcp_queue_offset(dev_info, 0);
 	cpp_id = NFP_CPP_ISLAND_ID(0, NFP_CPP_ACTION_RW, 0, 0);
 
@@ -986,7 +982,7 @@ nfp_pf_init(struct rte_pci_device *pci_dev)
 
 	/*
 	 * PF initialization has been done at this point. Call app specific
-	 * init code now
+	 * init code now.
 	 */
 	switch (pf_dev->app_fw_id) {
 	case NFP_APP_FW_CORE_NIC:
@@ -1011,7 +1007,7 @@ nfp_pf_init(struct rte_pci_device *pci_dev)
 		goto hwqueues_cleanup;
 	}
 
-	/* register the CPP bridge service here for primary use */
+	/* Register the CPP bridge service here for primary use */
 	ret = nfp_enable_cpp_service(pf_dev);
 	if (ret != 0)
 		PMD_INIT_LOG(INFO, "Enable cpp service failed.");
@@ -1079,7 +1075,7 @@ nfp_secondary_init_app_fw_nic(struct rte_pci_device *pci_dev,
 /*
  * When attaching to the NFP4000/6000 PF on a secondary process there
  * is no need to initialise the PF again. Only minimal work is required
- * here
+ * here.
  */
 static int
 nfp_pf_secondary_init(struct rte_pci_device *pci_dev)
@@ -1119,7 +1115,7 @@ nfp_pf_secondary_init(struct rte_pci_device *pci_dev)
 
 	/*
 	 * We don't have access to the PF created in the primary process
-	 * here so we have to read the number of ports from firmware
+	 * here so we have to read the number of ports from firmware.
 	 */
 	sym_tbl = nfp_rtsym_table_read(cpp);
 	if (sym_tbl == NULL) {
@@ -1216,7 +1212,7 @@ nfp_pci_uninit(struct rte_eth_dev *eth_dev)
 		rte_eth_dev_close(port_id);
 	/*
 	 * Ports can be closed and freed but hotplugging is not
-	 * currently supported
+	 * currently supported.
 	 */
 	return -ENOTSUP;
 }
diff --git a/drivers/net/nfp/nfp_ethdev_vf.c b/drivers/net/nfp/nfp_ethdev_vf.c
index d3c3c9e953..c9e72dd953 100644
--- a/drivers/net/nfp/nfp_ethdev_vf.c
+++ b/drivers/net/nfp/nfp_ethdev_vf.c
@@ -47,12 +47,12 @@ nfp_netvf_start(struct rte_eth_dev *dev)
 	/* Enabling the required queues in the device */
 	nfp_net_enable_queues(dev);
 
-	/* check and configure queue intr-vector mapping */
+	/* Check and configure queue intr-vector mapping */
 	if (dev->data->dev_conf.intr_conf.rxq != 0) {
 		if (rte_intr_type_get(intr_handle) == RTE_INTR_HANDLE_UIO) {
 			/*
 			 * Better not to share LSC with RX interrupts.
-			 * Unregistering LSC interrupt handler
+			 * Unregistering LSC interrupt handler.
 			 */
 			rte_intr_callback_unregister(pci_dev->intr_handle,
 					nfp_net_dev_interrupt_handler, (void *)dev);
@@ -101,7 +101,7 @@ nfp_netvf_start(struct rte_eth_dev *dev)
 
 	/*
 	 * Allocating rte mbufs for configured rx queues.
-	 * This requires queues being enabled before
+	 * This requires the queues to be enabled beforehand.
 	 */
 	if (nfp_net_rx_freelist_setup(dev) != 0) {
 		ret = -ENOMEM;
@@ -182,18 +182,13 @@ nfp_netvf_close(struct rte_eth_dev *dev)
 
 	rte_intr_disable(pci_dev->intr_handle);
 
-	/* unregister callback func from eal lib */
+	/* Unregister callback func from eal lib */
 	rte_intr_callback_unregister(pci_dev->intr_handle,
 			nfp_net_dev_interrupt_handler, (void *)dev);
 
-	/* Cancel possible impending LSC work here before releasing the port*/
+	/* Cancel possible impending LSC work here before releasing the port */
 	rte_eal_alarm_cancel(nfp_net_dev_interrupt_delayed_handler, (void *)dev);
 
-	/*
-	 * The ixgbe PMD disables the pcie master on the
-	 * device. The i40e does not...
-	 */
-
 	return 0;
 }
 
diff --git a/drivers/net/nfp/nfp_flow.c b/drivers/net/nfp/nfp_flow.c
index 84b48daf85..fbcdb3d19e 100644
--- a/drivers/net/nfp/nfp_flow.c
+++ b/drivers/net/nfp/nfp_flow.c
@@ -108,21 +108,21 @@
 #define NVGRE_V4_LEN     (sizeof(struct rte_ether_hdr) + \
 				sizeof(struct rte_ipv4_hdr) + \
 				sizeof(struct rte_flow_item_gre) + \
-				sizeof(rte_be32_t))    /* gre key */
+				sizeof(rte_be32_t))    /* GRE key */
 #define NVGRE_V6_LEN     (sizeof(struct rte_ether_hdr) + \
 				sizeof(struct rte_ipv6_hdr) + \
 				sizeof(struct rte_flow_item_gre) + \
-				sizeof(rte_be32_t))    /* gre key */
+				sizeof(rte_be32_t))    /* GRE key */
 
 /* Process structure associated with a flow item */
 struct nfp_flow_item_proc {
-	/* Bit-mask for fields supported by this PMD. */
+	/** Bit-mask for fields supported by this PMD. */
 	const void *mask_support;
-	/* Bit-mask to use when @p item->mask is not provided. */
+	/** Bit-mask to use when @p item->mask is not provided. */
 	const void *mask_default;
-	/* Size in bytes for @p mask_support and @p mask_default. */
+	/** Size in bytes for @p mask_support and @p mask_default. */
 	const size_t mask_sz;
-	/* Merge a pattern item into a flow rule handle. */
+	/** Merge a pattern item into a flow rule handle. */
 	int (*merge)(struct nfp_app_fw_flower *app_fw_flower,
 			struct rte_flow *nfp_flow,
 			char **mbuf_off,
@@ -130,7 +130,7 @@ struct nfp_flow_item_proc {
 			const struct nfp_flow_item_proc *proc,
 			bool is_mask,
 			bool is_outer_layer);
-	/* List of possible subsequent items. */
+	/** List of possible subsequent items. */
 	const enum rte_flow_item_type *const next_item;
 };
 
@@ -308,12 +308,12 @@ nfp_check_mask_add(struct nfp_flow_priv *priv,
 
 	mask_entry = nfp_mask_table_search(priv, mask_data, mask_len);
 	if (mask_entry == NULL) {
-		/* mask entry does not exist, let's create one */
+		/* Mask entry does not exist, let's create one */
 		ret = nfp_mask_table_add(priv, mask_data, mask_len, mask_id);
 		if (ret != 0)
 			return false;
 	} else {
-		/* mask entry already exist */
+		/* Mask entry already exists */
 		mask_entry->ref_cnt++;
 		*mask_id = mask_entry->mask_id;
 	}
@@ -818,7 +818,7 @@ nfp_flow_key_layers_calculate_items(const struct rte_flow_item items[],
 		case RTE_FLOW_ITEM_TYPE_ETH:
 			PMD_DRV_LOG(DEBUG, "RTE_FLOW_ITEM_TYPE_ETH detected");
 			/*
-			 * eth is set with no specific params.
+			 * Eth is set with no specific params.
 			 * NFP does not need this.
 			 */
 			if (item->spec == NULL)
@@ -879,7 +879,7 @@ nfp_flow_key_layers_calculate_items(const struct rte_flow_item items[],
 				key_ls->key_size += sizeof(struct nfp_flower_ipv4_udp_tun);
 				/*
 				 * The outer l3 layer information is
-				 * in `struct nfp_flower_ipv4_udp_tun`
+				 * in `struct nfp_flower_ipv4_udp_tun`.
 				 */
 				key_ls->key_size -= sizeof(struct nfp_flower_ipv4);
 			} else if (outer_ip6_flag) {
@@ -889,7 +889,7 @@ nfp_flow_key_layers_calculate_items(const struct rte_flow_item items[],
 				key_ls->key_size += sizeof(struct nfp_flower_ipv6_udp_tun);
 				/*
 				 * The outer l3 layer information is
-				 * in `struct nfp_flower_ipv6_udp_tun`
+				 * in `struct nfp_flower_ipv6_udp_tun`.
 				 */
 				key_ls->key_size -= sizeof(struct nfp_flower_ipv6);
 			} else {
@@ -910,7 +910,7 @@ nfp_flow_key_layers_calculate_items(const struct rte_flow_item items[],
 				key_ls->key_size += sizeof(struct nfp_flower_ipv4_udp_tun);
 				/*
 				 * The outer l3 layer information is
-				 * in `struct nfp_flower_ipv4_udp_tun`
+				 * in `struct nfp_flower_ipv4_udp_tun`.
 				 */
 				key_ls->key_size -= sizeof(struct nfp_flower_ipv4);
 			} else if (outer_ip6_flag) {
@@ -918,7 +918,7 @@ nfp_flow_key_layers_calculate_items(const struct rte_flow_item items[],
 				key_ls->key_size += sizeof(struct nfp_flower_ipv6_udp_tun);
 				/*
 				 * The outer l3 layer information is
-				 * in `struct nfp_flower_ipv6_udp_tun`
+				 * in `struct nfp_flower_ipv6_udp_tun`.
 				 */
 				key_ls->key_size -= sizeof(struct nfp_flower_ipv6);
 			} else {
@@ -939,7 +939,7 @@ nfp_flow_key_layers_calculate_items(const struct rte_flow_item items[],
 				key_ls->key_size += sizeof(struct nfp_flower_ipv4_gre_tun);
 				/*
 				 * The outer l3 layer information is
-				 * in `struct nfp_flower_ipv4_gre_tun`
+				 * in `struct nfp_flower_ipv4_gre_tun`.
 				 */
 				key_ls->key_size -= sizeof(struct nfp_flower_ipv4);
 			} else if (outer_ip6_flag) {
@@ -947,7 +947,7 @@ nfp_flow_key_layers_calculate_items(const struct rte_flow_item items[],
 				key_ls->key_size += sizeof(struct nfp_flower_ipv6_gre_tun);
 				/*
 				 * The outer l3 layer information is
-				 * in `struct nfp_flower_ipv6_gre_tun`
+				 * in `struct nfp_flower_ipv6_gre_tun`.
 				 */
 				key_ls->key_size -= sizeof(struct nfp_flower_ipv6);
 			} else {
@@ -1309,8 +1309,8 @@ nfp_flow_merge_ipv4(__rte_unused struct nfp_app_fw_flower *app_fw_flower,
 		}
 
 		/*
-		 * reserve space for L4 info.
-		 * rte_flow has ipv4 before L4 but NFP flower fw requires L4 before ipv4
+		 * Reserve space for L4 info.
+		 * rte_flow has ipv4 before L4 but NFP flower fw requires L4 before ipv4.
 		 */
 		if ((meta_tci->nfp_flow_key_layer & NFP_FLOWER_LAYER_TP) != 0)
 			*mbuf_off += sizeof(struct nfp_flower_tp_ports);
@@ -1392,8 +1392,8 @@ nfp_flow_merge_ipv6(__rte_unused struct nfp_app_fw_flower *app_fw_flower,
 		}
 
 		/*
-		 * reserve space for L4 info.
-		 * rte_flow has ipv4 before L4 but NFP flower fw requires L4 before ipv6
+		 * Reserve space for L4 info.
+		 * rte_flow has ipv6 before L4 but NFP flower fw requires L4 before ipv6.
 		 */
 		if ((meta_tci->nfp_flow_key_layer & NFP_FLOWER_LAYER_TP) != 0)
 			*mbuf_off += sizeof(struct nfp_flower_tp_ports);
@@ -2127,7 +2127,7 @@ nfp_flow_compile_items(struct nfp_flower_representor *representor,
 	if (nfp_flow_tcp_flag_check(items))
 		nfp_flow->tcp_flag = true;
 
-	/* Check if this is a tunnel flow and get the inner item*/
+	/* Check if this is a tunnel flow and get the inner item */
 	is_tun_flow = nfp_flow_inner_item_get(items, &loop_item);
 	if (is_tun_flow)
 		is_outer_layer = false;
@@ -3366,9 +3366,9 @@ nfp_flow_action_raw_encap(struct nfp_app_fw_flower *app_fw_flower,
 		return -EINVAL;
 	}
 
-	/* Pre_tunnel action must be the first on action list.
-	 * If other actions already exist, they need to be
-	 * pushed forward.
+	/*
+	 * Pre_tunnel action must be the first on action list.
+	 * If other actions already exist, they need to be pushed forward.
 	 */
 	act_len = act_data - actions;
 	if (act_len != 0) {
@@ -4384,7 +4384,7 @@ nfp_flow_priv_init(struct nfp_pf_dev *pf_dev)
 		goto free_mask_id;
 	}
 
-	/* flow stats */
+	/* Flow stats */
 	rte_spinlock_init(&priv->stats_lock);
 	stats_size = (ctx_count & NFP_FL_STAT_ID_STAT) |
 			((ctx_split - 1) & NFP_FL_STAT_ID_MU_NUM);
@@ -4398,7 +4398,7 @@ nfp_flow_priv_init(struct nfp_pf_dev *pf_dev)
 		goto free_stats_id;
 	}
 
-	/* mask table */
+	/* Mask table */
 	mask_hash_params.hash_func_init_val = priv->hash_seed;
 	priv->mask_table = rte_hash_create(&mask_hash_params);
 	if (priv->mask_table == NULL) {
@@ -4407,7 +4407,7 @@ nfp_flow_priv_init(struct nfp_pf_dev *pf_dev)
 		goto free_stats;
 	}
 
-	/* flow table */
+	/* Flow table */
 	flow_hash_params.hash_func_init_val = priv->hash_seed;
 	flow_hash_params.entries = ctx_count;
 	priv->flow_table = rte_hash_create(&flow_hash_params);
@@ -4417,7 +4417,7 @@ nfp_flow_priv_init(struct nfp_pf_dev *pf_dev)
 		goto free_mask_table;
 	}
 
-	/* pre tunnel table */
+	/* Pre tunnel table */
 	priv->pre_tun_cnt = 1;
 	pre_tun_hash_params.hash_func_init_val = priv->hash_seed;
 	priv->pre_tun_table = rte_hash_create(&pre_tun_hash_params);
@@ -4446,15 +4446,15 @@ nfp_flow_priv_init(struct nfp_pf_dev *pf_dev)
 		goto free_ct_zone_table;
 	}
 
-	/* ipv4 off list */
+	/* IPv4 off list */
 	rte_spinlock_init(&priv->ipv4_off_lock);
 	LIST_INIT(&priv->ipv4_off_list);
 
-	/* ipv6 off list */
+	/* IPv6 off list */
 	rte_spinlock_init(&priv->ipv6_off_lock);
 	LIST_INIT(&priv->ipv6_off_list);
 
-	/* neighbor next list */
+	/* Neighbor next list */
 	LIST_INIT(&priv->nn_list);
 
 	return 0;
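
The mask handling shown in nfp_check_mask_add() above is an insert-or-refcount
pattern: the first flow using a given mask allocates a table entry and a mask
id, and later flows sharing the same mask reuse the id and bump its reference
count. Condensed from the hunk (field and helper names follow the driver;
illustrative only):

	mask_entry = nfp_mask_table_search(priv, mask_data, mask_len);
	if (mask_entry == NULL) {
		/* First user of this mask: create an entry and a new id */
		if (nfp_mask_table_add(priv, mask_data, mask_len, mask_id) != 0)
			return false;
	} else {
		/* Mask already known: share its id and bump the refcount */
		mask_entry->ref_cnt++;
		*mask_id = mask_entry->mask_id;
	}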
diff --git a/drivers/net/nfp/nfp_flow.h b/drivers/net/nfp/nfp_flow.h
index ed06eca371..ab38dbe1f4 100644
--- a/drivers/net/nfp/nfp_flow.h
+++ b/drivers/net/nfp/nfp_flow.h
@@ -126,19 +126,19 @@ struct nfp_ipv6_addr_entry {
 struct nfp_flow_priv {
 	uint32_t hash_seed; /**< Hash seed for hash tables in this structure. */
 	uint64_t flower_version; /**< Flow version, always increase. */
-	/* mask hash table */
+	/* Mask hash table */
 	struct nfp_fl_mask_id mask_ids; /**< Entry for mask hash table */
 	struct rte_hash *mask_table; /**< Hash table to store mask ids. */
-	/* flow hash table */
+	/* Flow hash table */
 	struct rte_hash *flow_table; /**< Hash table to store flow rules. */
-	/* flow stats */
+	/* Flow stats */
 	uint32_t active_mem_unit; /**< The size of active mem units. */
 	uint32_t total_mem_units; /**< The size of total mem units. */
 	uint32_t stats_ring_size; /**< The size of stats id ring. */
 	struct nfp_fl_stats_id stats_ids; /**< The stats id ring. */
 	struct nfp_fl_stats *stats; /**< Store stats of flow. */
 	rte_spinlock_t stats_lock; /** < Lock the update of 'stats' field. */
-	/* pre tunnel rule */
+	/* Pre tunnel rule */
 	uint16_t pre_tun_cnt; /**< The size of pre tunnel rule */
 	uint8_t pre_tun_bitmap[NFP_TUN_PRE_TUN_RULE_LIMIT]; /**< Bitmap of pre tunnel rule */
 	struct rte_hash *pre_tun_table; /**< Hash table to store pre tunnel rule */
@@ -148,7 +148,7 @@ struct nfp_flow_priv {
 	/* IPv6 off */
 	LIST_HEAD(, nfp_ipv6_addr_entry) ipv6_off_list; /**< Store ipv6 off */
 	rte_spinlock_t ipv6_off_lock; /**< Lock the ipv6 off list */
-	/* neighbor next */
+	/* Neighbor next */
 	LIST_HEAD(, nfp_fl_tun)nn_list; /**< Store nn entry */
 	/* Conntrack */
 	struct rte_hash *ct_zone_table; /**< Hash table to store ct zone entry */
diff --git a/drivers/net/nfp/nfp_ipsec.h b/drivers/net/nfp/nfp_ipsec.h
index aaebb80fe1..d7a729398a 100644
--- a/drivers/net/nfp/nfp_ipsec.h
+++ b/drivers/net/nfp/nfp_ipsec.h
@@ -82,7 +82,7 @@ struct ipsec_discard_stats {
 	uint32_t discards_alignment;             /**< Alignment error */
 	uint32_t discards_hard_bytelimit;        /**< Hard byte Count limit */
 	uint32_t discards_seq_num_wrap;          /**< Sequ Number wrap */
-	uint32_t discards_pmtu_exceeded;         /**< PMTU Limit exceeded*/
+	uint32_t discards_pmtu_exceeded;         /**< PMTU Limit exceeded */
 	uint32_t discards_arw_old_seq;           /**< Anti-Replay seq small */
 	uint32_t discards_arw_replay;            /**< Anti-Replay seq rcvd */
 	uint32_t discards_ctrl_word;             /**< Bad SA Control word */
@@ -99,16 +99,16 @@ struct ipsec_discard_stats {
 
 struct ipsec_get_sa_stats {
 	uint32_t seq_lo;                         /**< Sequence Number (low 32bits) */
-	uint32_t seq_high;                       /**< Sequence Number (high 32bits)*/
+	uint32_t seq_high;                       /**< Sequence Number (high 32bits) */
 	uint32_t arw_counter_lo;                 /**< Anti-replay wndw cntr */
 	uint32_t arw_counter_high;               /**< Anti-replay wndw cntr */
 	uint32_t arw_bitmap_lo;                  /**< Anti-replay wndw bitmap */
 	uint32_t arw_bitmap_high;                /**< Anti-replay wndw bitmap */
 	uint32_t spare:1;
-	uint32_t soft_byte_exceeded :1;          /**< Soft lifetime byte cnt exceeded*/
-	uint32_t hard_byte_exceeded :1;          /**< Hard lifetime byte cnt exceeded*/
-	uint32_t soft_time_exceeded :1;          /**< Soft lifetime time limit exceeded*/
-	uint32_t hard_time_exceeded :1;          /**< Hard lifetime time limit exceeded*/
+	uint32_t soft_byte_exceeded :1;          /**< Soft lifetime byte cnt exceeded */
+	uint32_t hard_byte_exceeded :1;          /**< Hard lifetime byte cnt exceeded */
+	uint32_t soft_time_exceeded :1;          /**< Soft lifetime time limit exceeded */
+	uint32_t hard_time_exceeded :1;          /**< Hard lifetime time limit exceeded */
 	uint32_t spare1:27;
 	uint32_t lifetime_byte_count;
 	uint32_t pkt_count;
diff --git a/drivers/net/nfp/nfp_rxtx.c b/drivers/net/nfp/nfp_rxtx.c
index 5bfdfd28b3..d506682b56 100644
--- a/drivers/net/nfp/nfp_rxtx.c
+++ b/drivers/net/nfp/nfp_rxtx.c
@@ -20,43 +20,22 @@
 /* Maximum number of supported VLANs in parsed form packet metadata. */
 #define NFP_META_MAX_VLANS       2
 
-/*
- * struct nfp_meta_parsed - Record metadata parsed from packet
- *
- * Parsed NFP packet metadata are recorded in this struct. The content is
- * read-only after it have been recorded during parsing by nfp_net_parse_meta().
- *
- * @port_id: Port id value
- * @sa_idx: IPsec SA index
- * @hash: RSS hash value
- * @hash_type: RSS hash type
- * @ipsec_type: IPsec type
- * @vlan_layer: The layers of VLAN info which are passed from nic.
- *              Only this number of entries of the @vlan array are valid.
- *
- * @vlan: Holds information parses from NFP_NET_META_VLAN. The inner most vlan
- *        starts at position 0 and only @vlan_layer entries contain valid
- *        information.
- *
- *        Currently only 2 layers of vlan are supported,
- *        vlan[0] - vlan strip info
- *        vlan[1] - qinq strip info
- *
- * @vlan.offload:  Flag indicates whether VLAN is offloaded
- * @vlan.tpid: Vlan TPID
- * @vlan.tci: Vlan TCI including PCP + Priority + VID
- */
+/* Record metadata parsed from packet */
 struct nfp_meta_parsed {
-	uint32_t port_id;
-	uint32_t sa_idx;
-	uint32_t hash;
-	uint8_t hash_type;
-	uint8_t ipsec_type;
-	uint8_t vlan_layer;
+	uint32_t port_id;         /**< Port id value */
+	uint32_t sa_idx;          /**< IPsec SA index */
+	uint32_t hash;            /**< RSS hash value */
+	uint8_t hash_type;        /**< RSS hash type */
+	uint8_t ipsec_type;       /**< IPsec type */
+	uint8_t vlan_layer;       /**< The number of valid entries in @vlan[] */
+	/**
+	 * Holds information parsed from NFP_NET_META_VLAN.
+	 * The innermost vlan starts at position 0.
+	 */
 	struct {
-		uint8_t offload;
-		uint8_t tpid;
-		uint16_t tci;
+		uint8_t offload;  /**< Flag indicates whether VLAN is offloaded */
+		uint8_t tpid;     /**< Vlan TPID */
+		uint16_t tci;     /**< Vlan TCI (PCP + Priority + VID) */
 	} vlan[NFP_META_MAX_VLANS];
 };
 
@@ -156,7 +135,7 @@ struct nfp_ptype_parsed {
 	uint8_t outer_l3_ptype; /**< Packet type of outer layer 3. */
 };
 
-/* set mbuf checksum flags based on RX descriptor flags */
+/* Set mbuf checksum flags based on RX descriptor flags */
 void
 nfp_net_rx_cksum(struct nfp_net_rxq *rxq,
 		struct nfp_net_rx_desc *rxd,
@@ -254,7 +233,7 @@ nfp_net_rx_queue_count(void *rx_queue)
 	 * descriptors and counting all four if the first has the DD
 	 * bit on. Of course, this is not accurate but can be good for
 	 * performance. But ideally that should be done in descriptors
-	 * chunks belonging to the same cache line
+	 * chunks belonging to the same cache line.
 	 */
 
 	while (count < rxq->rx_count) {
@@ -265,7 +244,7 @@ nfp_net_rx_queue_count(void *rx_queue)
 		count++;
 		idx++;
 
-		/* Wrapping? */
+		/* Wrapping */
 		if ((idx) == rxq->rx_count)
 			idx = 0;
 	}
@@ -273,7 +252,7 @@ nfp_net_rx_queue_count(void *rx_queue)
 	return count;
 }
 
-/* nfp_net_parse_chained_meta() - Parse the chained metadata from packet */
+/* Parse the chained metadata from packet */
 static bool
 nfp_net_parse_chained_meta(uint8_t *meta_base,
 		rte_be32_t meta_header,
@@ -320,12 +299,7 @@ nfp_net_parse_chained_meta(uint8_t *meta_base,
 	return true;
 }
 
-/*
- * nfp_net_parse_meta_hash() - Set mbuf hash data based on the metadata info
- *
- * The RSS hash and hash-type are prepended to the packet data.
- * Extract and decode it and set the mbuf fields.
- */
+/* Set mbuf hash data based on the metadata info */
 static void
 nfp_net_parse_meta_hash(const struct nfp_meta_parsed *meta,
 		struct nfp_net_rxq *rxq,
@@ -341,7 +315,7 @@ nfp_net_parse_meta_hash(const struct nfp_meta_parsed *meta,
 }
 
 /*
- * nfp_net_parse_single_meta() - Parse the single metadata
+ * Parse the single metadata
  *
  * The RSS hash and hash-type are prepended to the packet data.
  * Get it from metadata area.
@@ -355,12 +329,7 @@ nfp_net_parse_single_meta(uint8_t *meta_base,
 	meta->hash = rte_be_to_cpu_32(*(rte_be32_t *)(meta_base + 4));
 }
 
-/*
- * nfp_net_parse_meta_vlan() - Set mbuf vlan_strip data based on metadata info
- *
- * The VLAN info TPID and TCI are prepended to the packet data.
- * Extract and decode it and set the mbuf fields.
- */
+/* Set mbuf vlan_strip data based on metadata info */
 static void
 nfp_net_parse_meta_vlan(const struct nfp_meta_parsed *meta,
 		struct nfp_net_rx_desc *rxd,
@@ -369,19 +338,14 @@ nfp_net_parse_meta_vlan(const struct nfp_meta_parsed *meta,
 {
 	struct nfp_net_hw *hw = rxq->hw;
 
-	/* Skip if hardware don't support setting vlan. */
+	/* Skip if firmware doesn't support setting vlan. */
 	if ((hw->ctrl & (NFP_NET_CFG_CTRL_RXVLAN | NFP_NET_CFG_CTRL_RXVLAN_V2)) == 0)
 		return;
 
 	/*
-	 * The nic support the two way to send the VLAN info,
-	 * 1. According the metadata to send the VLAN info when NFP_NET_CFG_CTRL_RXVLAN_V2
-	 * is set
-	 * 2. According the descriptor to sned the VLAN info when NFP_NET_CFG_CTRL_RXVLAN
-	 * is set
-	 *
-	 * If the nic doesn't send the VLAN info, it is not necessary
-	 * to do anything.
+	 * The firmware supports two ways to send the VLAN info (with priority):
+	 * 1. Using the metadata when NFP_NET_CFG_CTRL_RXVLAN_V2 is set,
+	 * 2. Using the descriptor when NFP_NET_CFG_CTRL_RXVLAN is set.
 	 */
 	if ((hw->ctrl & NFP_NET_CFG_CTRL_RXVLAN_V2) != 0) {
 		if (meta->vlan_layer > 0 && meta->vlan[0].offload != 0) {
@@ -397,7 +361,7 @@ nfp_net_parse_meta_vlan(const struct nfp_meta_parsed *meta,
 }
 
 /*
- * nfp_net_parse_meta_qinq() - Set mbuf qinq_strip data based on metadata info
+ * Set mbuf qinq_strip data based on metadata info
  *
  * The out VLAN tci are prepended to the packet data.
  * Extract and decode it and set the mbuf fields.
@@ -469,7 +433,7 @@ nfp_net_parse_meta_ipsec(struct nfp_meta_parsed *meta,
 	}
 }
 
-/* nfp_net_parse_meta() - Parse the metadata from packet */
+/* Parse the metadata from packet */
 static void
 nfp_net_parse_meta(struct nfp_net_rx_desc *rxds,
 		struct nfp_net_rxq *rxq,
@@ -672,7 +636,7 @@ nfp_net_parse_ptype(struct nfp_net_rx_desc *rxds,
  * doing now have any benefit at all. Again, tests with this change have not
  * shown any improvement. Also, rte_mempool_get_bulk returns all or nothing
  * so looking at the implications of this type of allocation should be studied
- * deeply
+ * deeply.
  */
 
 uint16_t
@@ -695,7 +659,7 @@ nfp_net_recv_pkts(void *rx_queue,
 	if (unlikely(rxq == NULL)) {
 		/*
 		 * DPDK just checks the queue is lower than max queues
-		 * enabled. But the queue needs to be configured
+		 * enabled. But the queue needs to be configured.
 		 */
 		PMD_RX_LOG(ERR, "RX Bad queue");
 		return 0;
@@ -722,7 +686,7 @@ nfp_net_recv_pkts(void *rx_queue,
 
 		/*
 		 * We got a packet. Let's alloc a new mbuf for refilling the
-		 * free descriptor ring as soon as possible
+		 * free descriptor ring as soon as possible.
 		 */
 		new_mb = rte_pktmbuf_alloc(rxq->mem_pool);
 		if (unlikely(new_mb == NULL)) {
@@ -734,7 +698,7 @@ nfp_net_recv_pkts(void *rx_queue,
 
 		/*
 		 * Grab the mbuf and refill the descriptor with the
-		 * previously allocated mbuf
+		 * previously allocated mbuf.
 		 */
 		mb = rxb->mbuf;
 		rxb->mbuf = new_mb;
@@ -751,7 +715,7 @@ nfp_net_recv_pkts(void *rx_queue,
 			/*
 			 * This should not happen and the user has the
 			 * responsibility of avoiding it. But we have
-			 * to give some info about the error
+			 * to give some info about the error.
 			 */
 			PMD_RX_LOG(ERR, "mbuf overflow likely due to the RX offset.\n"
 					"\t\tYour mbuf size should have extra space for"
@@ -803,7 +767,7 @@ nfp_net_recv_pkts(void *rx_queue,
 		nb_hold++;
 
 		rxq->rd_p++;
-		if (unlikely(rxq->rd_p == rxq->rx_count)) /* wrapping?*/
+		if (unlikely(rxq->rd_p == rxq->rx_count)) /* Wrapping */
 			rxq->rd_p = 0;
 	}
 
@@ -817,7 +781,7 @@ nfp_net_recv_pkts(void *rx_queue,
 
 	/*
 	 * FL descriptors needs to be written before incrementing the
-	 * FL queue WR pointer
+	 * FL queue WR pointer.
 	 */
 	rte_wmb();
 	if (nb_hold > rxq->rx_free_thresh) {
@@ -898,7 +862,7 @@ nfp_net_rx_queue_setup(struct rte_eth_dev *dev,
 
 	/*
 	 * Free memory prior to re-allocation if needed. This is the case after
-	 * calling nfp_net_stop
+	 * calling @nfp_net_stop().
 	 */
 	if (dev->data->rx_queues[queue_idx] != NULL) {
 		nfp_net_rx_queue_release(dev, queue_idx);
@@ -920,7 +884,7 @@ nfp_net_rx_queue_setup(struct rte_eth_dev *dev,
 
 	/*
 	 * Tracking mbuf size for detecting a potential mbuf overflow due to
-	 * RX offset
+	 * RX offset.
 	 */
 	rxq->mem_pool = mp;
 	rxq->mbuf_size = rxq->mem_pool->elt_size;
@@ -951,7 +915,7 @@ nfp_net_rx_queue_setup(struct rte_eth_dev *dev,
 	rxq->dma = (uint64_t)tz->iova;
 	rxq->rxds = tz->addr;
 
-	/* mbuf pointers array for referencing mbufs linked to RX descriptors */
+	/* Mbuf pointers array for referencing mbufs linked to RX descriptors */
 	rxq->rxbufs = rte_zmalloc_socket("rxq->rxbufs",
 			sizeof(*rxq->rxbufs) * nb_desc, RTE_CACHE_LINE_SIZE,
 			socket_id);
@@ -967,7 +931,7 @@ nfp_net_rx_queue_setup(struct rte_eth_dev *dev,
 
 	/*
 	 * Telling the HW about the physical address of the RX ring and number
-	 * of descriptors in log2 format
+	 * of descriptors in log2 format.
 	 */
 	nn_cfg_writeq(hw, NFP_NET_CFG_RXR_ADDR(queue_idx), rxq->dma);
 	nn_cfg_writeb(hw, NFP_NET_CFG_RXR_SZ(queue_idx), rte_log2_u32(nb_desc));
@@ -975,11 +939,14 @@ nfp_net_rx_queue_setup(struct rte_eth_dev *dev,
 	return 0;
 }
 
-/*
- * nfp_net_tx_free_bufs - Check for descriptors with a complete
- * status
- * @txq: TX queue to work with
- * Returns number of descriptors freed
+/**
+ * Check for descriptors with a complete status
+ *
+ * @param txq
+ *   TX queue to work with
+ *
+ * @return
+ *   Number of descriptors freed
  */
 uint32_t
 nfp_net_tx_free_bufs(struct nfp_net_txq *txq)
diff --git a/drivers/net/nfp/nfp_rxtx.h b/drivers/net/nfp/nfp_rxtx.h
index 98ef6c3d93..899cc42c97 100644
--- a/drivers/net/nfp/nfp_rxtx.h
+++ b/drivers/net/nfp/nfp_rxtx.h
@@ -19,21 +19,11 @@
 /* Maximum number of NFP packet metadata fields. */
 #define NFP_META_MAX_FIELDS      8
 
-/*
- * struct nfp_net_meta_raw - Raw memory representation of packet metadata
- *
- * Describe the raw metadata format, useful when preparing metadata for a
- * transmission mbuf.
- *
- * @header: NFD3 or NFDk field type header (see format in nfp.rst)
- * @data: Array of each fields data member
- * @length: Keep track of number of valid fields in @header and data. Not part
- *          of the raw metadata.
- */
+/* Describe the raw metadata format. */
 struct nfp_net_meta_raw {
-	uint32_t header;
-	uint32_t data[NFP_META_MAX_FIELDS];
-	uint8_t length;
+	uint32_t header; /**< Field type header (see format in nfp.rst) */
+	uint32_t data[NFP_META_MAX_FIELDS]; /**< Array of each fields data member */
+	uint8_t length; /**< Number of valid fields in @header */
 };
 
 /* Descriptor alignment */
-- 
2.39.1


^ permalink raw reply	[flat|nested] 40+ messages in thread
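
The comment conventions applied in the patch above can be summarized in a
short sketch (the struct and field below are made up for illustration, they
are not part of the driver):

	#include <stdint.h>

	/* Single-line comments start with a capital letter. */

	/*
	 * Multi-line comments place the opening and closing markers on
	 * their own lines.
	 */

	struct example_meta {
		uint32_t hash; /**< Trailing Doxygen comments use this form. */
	};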

* [PATCH v2 07/11] net/nfp: standard the blank character
  2023-10-12  1:26 ` [PATCH v2 00/11] Unify the PMD coding style Chaoyong He
                     ` (5 preceding siblings ...)
  2023-10-12  1:26   ` [PATCH v2 06/11] net/nfp: standard the comment style Chaoyong He
@ 2023-10-12  1:27   ` Chaoyong He
  2023-10-12  1:27   ` [PATCH v2 08/11] net/nfp: unify the guide line of header file Chaoyong He
                     ` (4 subsequent siblings)
  11 siblings, 0 replies; 40+ messages in thread
From: Chaoyong He @ 2023-10-12  1:27 UTC (permalink / raw)
  To: dev; +Cc: oss-drivers, Chaoyong He, Long Wu, Peng Zhang

Use space characters for alignment instead of TAB characters.
There should be exactly one blank line between blocks of logic, no more and
no less.
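
For example, a minimal sketch of the intended style (all names below are
illustrative, none of them belong to the driver):

	#include <stdint.h>

	/* Space characters align the values, no TAB after the macro name. */
	#define EXAMPLE_MEMZONE_ALIGN   128
	#define EXAMPLE_MAX_PHYPORTS    12

	static int
	example_sum(const uint8_t *vals, int num)
	{
		int i;
		int sum = 0;

		for (i = 0; i < num; i++)
			sum += vals[i];

		/* Exactly one blank line separates the blocks of logic. */
		if (sum > EXAMPLE_MEMZONE_ALIGN)
			sum = EXAMPLE_MEMZONE_ALIGN;

		return sum;
	}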

Signed-off-by: Chaoyong He <chaoyong.he@corigine.com>
Reviewed-by: Long Wu <long.wu@corigine.com>
Reviewed-by: Peng Zhang <peng.zhang@corigine.com>
---
 drivers/net/nfp/nfp_common.c           | 39 +++++++++--------
 drivers/net/nfp/nfp_common.h           |  6 +--
 drivers/net/nfp/nfp_cpp_bridge.c       |  5 +++
 drivers/net/nfp/nfp_ctrl.h             |  6 +--
 drivers/net/nfp/nfp_ethdev.c           | 58 +++++++++++++-------------
 drivers/net/nfp/nfp_ethdev_vf.c        | 49 +++++++++++-----------
 drivers/net/nfp/nfp_flow.c             | 27 +++++++-----
 drivers/net/nfp/nfp_flow.h             |  7 ++++
 drivers/net/nfp/nfp_rxtx.c             |  7 ++--
 drivers/net/nfp/nfpcore/nfp_resource.h |  2 +-
 10 files changed, 114 insertions(+), 92 deletions(-)

diff --git a/drivers/net/nfp/nfp_common.c b/drivers/net/nfp/nfp_common.c
index 130f004b4d..a102c6f272 100644
--- a/drivers/net/nfp/nfp_common.c
+++ b/drivers/net/nfp/nfp_common.c
@@ -36,6 +36,7 @@ enum nfp_xstat_group {
 	NFP_XSTAT_GROUP_NET,
 	NFP_XSTAT_GROUP_MAC
 };
+
 struct nfp_xstat {
 	char name[RTE_ETH_XSTATS_NAME_SIZE];
 	int offset;
@@ -184,6 +185,7 @@ nfp_net_notify_port_speed(struct nfp_net_hw *hw,
 		nn_cfg_writew(hw, NFP_NET_CFG_STS_NSP_LINK_RATE, NFP_NET_CFG_STS_LINK_RATE_UNKNOWN);
 		return;
 	}
+
 	/*
 	 * Link is up so write the link speed from the eth_table to
 	 * NFP_NET_CFG_STS_NSP_LINK_RATE.
@@ -223,17 +225,21 @@ __nfp_net_reconfig(struct nfp_net_hw *hw,
 		new = nn_cfg_readl(hw, NFP_NET_CFG_UPDATE);
 		if (new == 0)
 			break;
+
 		if ((new & NFP_NET_CFG_UPDATE_ERR) != 0) {
 			PMD_DRV_LOG(ERR, "Reconfig error: %#08x", new);
 			return -1;
 		}
+
 		if (cnt >= NFP_NET_POLL_TIMEOUT) {
 			PMD_DRV_LOG(ERR, "Reconfig timeout for %#08x after %u ms",
 					update, cnt);
 			return -EIO;
 		}
+
 		nanosleep(&wait, 0); /* Waiting for a 1ms */
 	}
+
 	PMD_DRV_LOG(DEBUG, "Ack DONE");
 	return 0;
 }
@@ -387,7 +393,6 @@ nfp_net_configure(struct rte_eth_dev *dev)
 	struct rte_eth_txmode *txmode;
 
 	hw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private);
-
 	dev_conf = &dev->data->dev_conf;
 	rxmode = &dev_conf->rxmode;
 	txmode = &dev_conf->txmode;
@@ -560,11 +565,13 @@ nfp_net_set_mac_addr(struct rte_eth_dev *dev,
 	if ((hw->ctrl & NFP_NET_CFG_CTRL_ENABLE) != 0 &&
 			(hw->cap & NFP_NET_CFG_CTRL_LIVE_ADDR) != 0)
 		ctrl |= NFP_NET_CFG_CTRL_LIVE_ADDR;
+
 	/* Signal the NIC about the change */
 	if (nfp_net_reconfig(hw, ctrl, update) != 0) {
 		PMD_DRV_LOG(ERR, "MAC address update failed");
 		return -EIO;
 	}
+
 	return 0;
 }
 
@@ -832,13 +839,11 @@ nfp_net_stats_get(struct rte_eth_dev *dev,
 
 		nfp_dev_stats.q_ipackets[i] =
 				nn_cfg_readq(hw, NFP_NET_CFG_RXR_STATS(i));
-
 		nfp_dev_stats.q_ipackets[i] -=
 				hw->eth_stats_base.q_ipackets[i];
 
 		nfp_dev_stats.q_ibytes[i] =
 				nn_cfg_readq(hw, NFP_NET_CFG_RXR_STATS(i) + 0x8);
-
 		nfp_dev_stats.q_ibytes[i] -=
 				hw->eth_stats_base.q_ibytes[i];
 	}
@@ -850,42 +855,34 @@ nfp_net_stats_get(struct rte_eth_dev *dev,
 
 		nfp_dev_stats.q_opackets[i] =
 				nn_cfg_readq(hw, NFP_NET_CFG_TXR_STATS(i));
-
 		nfp_dev_stats.q_opackets[i] -= hw->eth_stats_base.q_opackets[i];
 
 		nfp_dev_stats.q_obytes[i] =
 				nn_cfg_readq(hw, NFP_NET_CFG_TXR_STATS(i) + 0x8);
-
 		nfp_dev_stats.q_obytes[i] -= hw->eth_stats_base.q_obytes[i];
 	}
 
 	nfp_dev_stats.ipackets = nn_cfg_readq(hw, NFP_NET_CFG_STATS_RX_FRAMES);
-
 	nfp_dev_stats.ipackets -= hw->eth_stats_base.ipackets;
 
 	nfp_dev_stats.ibytes = nn_cfg_readq(hw, NFP_NET_CFG_STATS_RX_OCTETS);
-
 	nfp_dev_stats.ibytes -= hw->eth_stats_base.ibytes;
 
 	nfp_dev_stats.opackets =
 			nn_cfg_readq(hw, NFP_NET_CFG_STATS_TX_FRAMES);
-
 	nfp_dev_stats.opackets -= hw->eth_stats_base.opackets;
 
 	nfp_dev_stats.obytes =
 			nn_cfg_readq(hw, NFP_NET_CFG_STATS_TX_OCTETS);
-
 	nfp_dev_stats.obytes -= hw->eth_stats_base.obytes;
 
 	/* Reading general device stats */
 	nfp_dev_stats.ierrors =
 			nn_cfg_readq(hw, NFP_NET_CFG_STATS_RX_ERRORS);
-
 	nfp_dev_stats.ierrors -= hw->eth_stats_base.ierrors;
 
 	nfp_dev_stats.oerrors =
 			nn_cfg_readq(hw, NFP_NET_CFG_STATS_TX_ERRORS);
-
 	nfp_dev_stats.oerrors -= hw->eth_stats_base.oerrors;
 
 	/* RX ring mbuf allocation failures */
@@ -893,7 +890,6 @@ nfp_net_stats_get(struct rte_eth_dev *dev,
 
 	nfp_dev_stats.imissed =
 			nn_cfg_readq(hw, NFP_NET_CFG_STATS_RX_DISCARDS);
-
 	nfp_dev_stats.imissed -= hw->eth_stats_base.imissed;
 
 	if (stats != NULL) {
@@ -981,6 +977,7 @@ nfp_net_xstats_size(const struct rte_eth_dev *dev)
 			if (nfp_net_xstats[count].group == NFP_XSTAT_GROUP_MAC)
 				break;
 		}
+
 		return count;
 	}
 
@@ -1154,6 +1151,7 @@ nfp_net_xstats_reset(struct rte_eth_dev *dev)
 		hw->eth_xstats_base[id].id = id;
 		hw->eth_xstats_base[id].value = nfp_net_xstats_value(dev, id, true);
 	}
+
 	/* Successfully reset xstats, now call function to reset basic stats. */
 	return nfp_net_stats_reset(dev);
 }
@@ -1201,6 +1199,7 @@ nfp_net_infos_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 	dev_info->max_rx_queues = (uint16_t)hw->max_rx_queues;
 	dev_info->max_tx_queues = (uint16_t)hw->max_tx_queues;
 	dev_info->min_rx_bufsize = RTE_ETHER_MIN_MTU;
+
 	/**
 	 * The maximum rx packet length (max_rx_pktlen) is set to the
 	 * maximum supported frame size that the NFP can handle. This
@@ -1368,6 +1367,7 @@ nfp_net_supported_ptypes_get(struct rte_eth_dev *dev)
 
 	if (dev->rx_pkt_burst == nfp_net_recv_pkts)
 		return ptypes;
+
 	return NULL;
 }
 
@@ -1381,7 +1381,6 @@ nfp_rx_queue_intr_enable(struct rte_eth_dev *dev,
 
 	hw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private);
 	pci_dev = RTE_ETH_DEV_TO_PCI(dev);
-
 	if (rte_intr_type_get(pci_dev->intr_handle) != RTE_INTR_HANDLE_UIO)
 		base = 1;
 
@@ -1402,7 +1401,6 @@ nfp_rx_queue_intr_disable(struct rte_eth_dev *dev,
 
 	hw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private);
 	pci_dev = RTE_ETH_DEV_TO_PCI(dev);
-
 	if (rte_intr_type_get(pci_dev->intr_handle) != RTE_INTR_HANDLE_UIO)
 		base = 1;
 
@@ -1619,11 +1617,11 @@ nfp_net_rss_reta_write(struct rte_eth_dev *dev,
 		idx = i / RTE_ETH_RETA_GROUP_SIZE;
 		shift = i % RTE_ETH_RETA_GROUP_SIZE;
 		mask = (uint8_t)((reta_conf[idx].mask >> shift) & 0xF);
-
 		if (mask == 0)
 			continue;
 
 		reta = 0;
+
 		/* If all 4 entries were set, don't need read RETA register */
 		if (mask != 0xF)
 			reta = nn_cfg_readl(hw, NFP_NET_CFG_RSS_ITBL + i);
@@ -1631,13 +1629,17 @@ nfp_net_rss_reta_write(struct rte_eth_dev *dev,
 		for (j = 0; j < 4; j++) {
 			if ((mask & (0x1 << j)) == 0)
 				continue;
+
 			/* Clearing the entry bits */
 			if (mask != 0xF)
 				reta &= ~(0xFF << (8 * j));
+
 			reta |= reta_conf[idx].reta[shift + j] << (8 * j);
 		}
+
 		nn_cfg_writel(hw, NFP_NET_CFG_RSS_ITBL + (idx * 64) + shift, reta);
 	}
+
 	return 0;
 }
 
@@ -1682,7 +1684,6 @@ nfp_net_reta_query(struct rte_eth_dev *dev,
 	struct nfp_net_hw *hw;
 
 	hw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private);
-
 	if ((hw->ctrl & NFP_NET_CFG_CTRL_RSS_ANY) == 0)
 		return -EINVAL;
 
@@ -1710,10 +1711,12 @@ nfp_net_reta_query(struct rte_eth_dev *dev,
 		for (j = 0; j < 4; j++) {
 			if ((mask & (0x1 << j)) == 0)
 				continue;
+
 			reta_conf[idx].reta[shift + j] =
 					(uint8_t)((reta >> (8 * j)) & 0xF);
 		}
 	}
+
 	return 0;
 }
 
@@ -1791,6 +1794,7 @@ nfp_net_rss_hash_update(struct rte_eth_dev *dev,
 			PMD_DRV_LOG(ERR, "RSS unsupported");
 			return -EINVAL;
 		}
+
 		return 0; /* Nothing to do */
 	}
 
@@ -1888,6 +1892,7 @@ nfp_net_rss_config_default(struct rte_eth_dev *dev)
 			queue %= rx_queues;
 		}
 	}
+
 	ret = nfp_net_rss_reta_write(dev, nfp_reta_conf, 0x80);
 	if (ret != 0)
 		return ret;
@@ -1897,8 +1902,8 @@ nfp_net_rss_config_default(struct rte_eth_dev *dev)
 		PMD_DRV_LOG(ERR, "Wrong rss conf");
 		return -EINVAL;
 	}
-	rss_conf = dev_conf->rx_adv_conf.rss_conf;
 
+	rss_conf = dev_conf->rx_adv_conf.rss_conf;
 	ret = nfp_net_rss_hash_write(dev, &rss_conf);
 
 	return ret;
diff --git a/drivers/net/nfp/nfp_common.h b/drivers/net/nfp/nfp_common.h
index 6a36e2b04c..5439865c5e 100644
--- a/drivers/net/nfp/nfp_common.h
+++ b/drivers/net/nfp/nfp_common.h
@@ -32,7 +32,7 @@
 #define DEFAULT_RX_HTHRESH      8
 #define DEFAULT_RX_WTHRESH      0
 
-#define DEFAULT_TX_RS_THRESH	32
+#define DEFAULT_TX_RS_THRESH    32
 #define DEFAULT_TX_FREE_THRESH  32
 #define DEFAULT_TX_PTHRESH      32
 #define DEFAULT_TX_HTHRESH      0
@@ -40,12 +40,12 @@
 #define DEFAULT_TX_RSBIT_THRESH 32
 
 /* Alignment for dma zones */
-#define NFP_MEMZONE_ALIGN	128
+#define NFP_MEMZONE_ALIGN       128
 
 #define NFP_QCP_QUEUE_ADDR_SZ   (0x800)
 
 /* Number of supported physical ports */
-#define NFP_MAX_PHYPORTS	12
+#define NFP_MAX_PHYPORTS        12
 
 /* Firmware application ID's */
 enum nfp_app_fw_id {
diff --git a/drivers/net/nfp/nfp_cpp_bridge.c b/drivers/net/nfp/nfp_cpp_bridge.c
index 8f5271cde9..bb2a6fdcda 100644
--- a/drivers/net/nfp/nfp_cpp_bridge.c
+++ b/drivers/net/nfp/nfp_cpp_bridge.c
@@ -191,6 +191,7 @@ nfp_cpp_bridge_serve_write(int sockfd,
 				nfp_cpp_area_free(area);
 				return -EIO;
 			}
+
 			err = nfp_cpp_area_write(area, pos, tmpbuf, len);
 			if (err < 0) {
 				PMD_CPP_LOG(ERR, "nfp_cpp_area_write error");
@@ -312,6 +313,7 @@ nfp_cpp_bridge_serve_read(int sockfd,
 		curlen = (count > NFP_CPP_MEMIO_BOUNDARY) ?
 				NFP_CPP_MEMIO_BOUNDARY : count;
 	}
+
 	return 0;
 }
 
@@ -393,6 +395,7 @@ nfp_cpp_bridge_service_func(void *args)
 	struct timeval timeout = {1, 0};
 
 	unlink("/tmp/nfp_cpp");
+
 	sockfd = socket(AF_UNIX, SOCK_STREAM, 0);
 	if (sockfd < 0) {
 		PMD_CPP_LOG(ERR, "socket creation error. Service failed");
@@ -456,8 +459,10 @@ nfp_cpp_bridge_service_func(void *args)
 			if (op == 0)
 				break;
 		}
+
 		close(datafd);
 	}
+
 	close(sockfd);
 
 	return 0;
diff --git a/drivers/net/nfp/nfp_ctrl.h b/drivers/net/nfp/nfp_ctrl.h
index cd0a2f92a8..5cc83ff3e6 100644
--- a/drivers/net/nfp/nfp_ctrl.h
+++ b/drivers/net/nfp/nfp_ctrl.h
@@ -208,8 +208,8 @@ struct nfp_net_fw_ver {
 /*
  * NFP6000/NFP4000 - Prepend configuration
  */
-#define NFP_NET_CFG_RX_OFFSET		0x0050
-#define NFP_NET_CFG_RX_OFFSET_DYNAMIC		0	/* Prepend mode */
+#define NFP_NET_CFG_RX_OFFSET           0x0050
+#define NFP_NET_CFG_RX_OFFSET_DYNAMIC          0    /* Prepend mode */
 
 /* Start anchor of the TLV area */
 #define NFP_NET_CFG_TLV_BASE            0x0058
@@ -442,7 +442,7 @@ struct nfp_net_fw_ver {
 #define NFP_MAC_STATS_TX_PAUSE_FRAMES_CLASS6    (NFP_MAC_STATS_BASE + 0x1f0)
 #define NFP_MAC_STATS_TX_PAUSE_FRAMES_CLASS7    (NFP_MAC_STATS_BASE + 0x1f8)
 
-#define NFP_PF_CSR_SLICE_SIZE	(32 * 1024)
+#define NFP_PF_CSR_SLICE_SIZE    (32 * 1024)
 
 /*
  * General use mailbox area (0x1800 - 0x19ff)
diff --git a/drivers/net/nfp/nfp_ethdev.c b/drivers/net/nfp/nfp_ethdev.c
index 1651ac2455..b65c2c1fe0 100644
--- a/drivers/net/nfp/nfp_ethdev.c
+++ b/drivers/net/nfp/nfp_ethdev.c
@@ -36,6 +36,7 @@ nfp_net_pf_read_mac(struct nfp_app_fw_nic *app_fw_nic,
 	rte_ether_addr_copy(&nfp_eth_table->ports[port].mac_addr, &hw->mac_addr);
 
 	free(nfp_eth_table);
+
 	return 0;
 }
 
@@ -73,6 +74,7 @@ nfp_net_start(struct rte_eth_dev *dev)
 					"with NFP multiport PF");
 				return -EINVAL;
 		}
+
 		if (rte_intr_type_get(intr_handle) == RTE_INTR_HANDLE_UIO) {
 			/*
 			 * Better not to share LSC with RX interrupts.
@@ -87,6 +89,7 @@ nfp_net_start(struct rte_eth_dev *dev)
 				return -EIO;
 			}
 		}
+
 		intr_vector = dev->data->nb_rx_queues;
 		if (rte_intr_efd_enable(intr_handle, intr_vector) != 0)
 			return -1;
@@ -198,7 +201,6 @@ nfp_net_stop(struct rte_eth_dev *dev)
 
 	/* Clear queues */
 	nfp_net_stop_tx_queue(dev);
-
 	nfp_net_stop_rx_queue(dev);
 
 	if (rte_eal_process_type() == RTE_PROC_PRIMARY)
@@ -262,12 +264,10 @@ nfp_net_close(struct rte_eth_dev *dev)
 	 * We assume that the DPDK application is stopping all the
 	 * threads/queues before calling the device close function.
 	 */
-
 	nfp_net_disable_queues(dev);
 
 	/* Clear queues */
 	nfp_net_close_tx_queue(dev);
-
 	nfp_net_close_rx_queue(dev);
 
 	/* Clear ipsec */
@@ -413,35 +413,35 @@ nfp_udp_tunnel_port_del(struct rte_eth_dev *dev,
 
 /* Initialise and register driver with DPDK Application */
 static const struct eth_dev_ops nfp_net_eth_dev_ops = {
-	.dev_configure		= nfp_net_configure,
-	.dev_start		= nfp_net_start,
-	.dev_stop		= nfp_net_stop,
-	.dev_set_link_up	= nfp_net_set_link_up,
-	.dev_set_link_down	= nfp_net_set_link_down,
-	.dev_close		= nfp_net_close,
-	.promiscuous_enable	= nfp_net_promisc_enable,
-	.promiscuous_disable	= nfp_net_promisc_disable,
-	.link_update		= nfp_net_link_update,
-	.stats_get		= nfp_net_stats_get,
-	.stats_reset		= nfp_net_stats_reset,
+	.dev_configure          = nfp_net_configure,
+	.dev_start              = nfp_net_start,
+	.dev_stop               = nfp_net_stop,
+	.dev_set_link_up        = nfp_net_set_link_up,
+	.dev_set_link_down      = nfp_net_set_link_down,
+	.dev_close              = nfp_net_close,
+	.promiscuous_enable     = nfp_net_promisc_enable,
+	.promiscuous_disable    = nfp_net_promisc_disable,
+	.link_update            = nfp_net_link_update,
+	.stats_get              = nfp_net_stats_get,
+	.stats_reset            = nfp_net_stats_reset,
 	.xstats_get             = nfp_net_xstats_get,
 	.xstats_reset           = nfp_net_xstats_reset,
 	.xstats_get_names       = nfp_net_xstats_get_names,
 	.xstats_get_by_id       = nfp_net_xstats_get_by_id,
 	.xstats_get_names_by_id = nfp_net_xstats_get_names_by_id,
-	.dev_infos_get		= nfp_net_infos_get,
+	.dev_infos_get          = nfp_net_infos_get,
 	.dev_supported_ptypes_get = nfp_net_supported_ptypes_get,
-	.mtu_set		= nfp_net_dev_mtu_set,
-	.mac_addr_set		= nfp_net_set_mac_addr,
-	.vlan_offload_set	= nfp_net_vlan_offload_set,
-	.reta_update		= nfp_net_reta_update,
-	.reta_query		= nfp_net_reta_query,
-	.rss_hash_update	= nfp_net_rss_hash_update,
-	.rss_hash_conf_get	= nfp_net_rss_hash_conf_get,
-	.rx_queue_setup		= nfp_net_rx_queue_setup,
-	.rx_queue_release	= nfp_net_rx_queue_release,
-	.tx_queue_setup		= nfp_net_tx_queue_setup,
-	.tx_queue_release	= nfp_net_tx_queue_release,
+	.mtu_set                = nfp_net_dev_mtu_set,
+	.mac_addr_set           = nfp_net_set_mac_addr,
+	.vlan_offload_set       = nfp_net_vlan_offload_set,
+	.reta_update            = nfp_net_reta_update,
+	.reta_query             = nfp_net_reta_query,
+	.rss_hash_update        = nfp_net_rss_hash_update,
+	.rss_hash_conf_get      = nfp_net_rss_hash_conf_get,
+	.rx_queue_setup         = nfp_net_rx_queue_setup,
+	.rx_queue_release       = nfp_net_rx_queue_release,
+	.tx_queue_setup         = nfp_net_tx_queue_setup,
+	.tx_queue_release       = nfp_net_tx_queue_release,
 	.rx_queue_intr_enable   = nfp_rx_queue_intr_enable,
 	.rx_queue_intr_disable  = nfp_rx_queue_intr_disable,
 	.udp_tunnel_port_add    = nfp_udp_tunnel_port_add,
@@ -501,7 +501,6 @@ nfp_net_init(struct rte_eth_dev *eth_dev)
 
 	rte_eth_copy_pci_info(eth_dev, pci_dev);
 
-
 	hw->ctrl_bar = pci_dev->mem_resource[0].addr;
 	if (hw->ctrl_bar == NULL) {
 		PMD_DRV_LOG(ERR, "hw->ctrl_bar is NULL. BAR0 not configured");
@@ -519,10 +518,12 @@ nfp_net_init(struct rte_eth_dev *eth_dev)
 			PMD_INIT_LOG(ERR, "nfp_rtsym_map fails for _mac_stats_bar");
 			return -EIO;
 		}
+
 		hw->mac_stats = hw->mac_stats_bar;
 	} else {
 		if (pf_dev->ctrl_bar == NULL)
 			return -ENODEV;
+
 		/* Use port offset in pf ctrl_bar for this ports control bar */
 		hw->ctrl_bar = pf_dev->ctrl_bar + (port * NFP_PF_CSR_SLICE_SIZE);
 		hw->mac_stats = app_fw_nic->ports[0]->mac_stats_bar + (port * NFP_MAC_STATS_SIZE);
@@ -557,7 +558,6 @@ nfp_net_init(struct rte_eth_dev *eth_dev)
 		return -ENOMEM;
 	}
 
-
 	/* Work out where in the BAR the queues start. */
 	tx_base = nn_cfg_readl(hw, NFP_NET_CFG_START_TXQ);
 	rx_base = nn_cfg_readl(hw, NFP_NET_CFG_START_RXQ);
@@ -653,12 +653,12 @@ nfp_fw_upload(struct rte_pci_device *dev,
 			"serial-%02x-%02x-%02x-%02x-%02x-%02x-%02x-%02x",
 			cpp_serial[0], cpp_serial[1], cpp_serial[2], cpp_serial[3],
 			cpp_serial[4], cpp_serial[5], interface >> 8, interface & 0xff);
-
 	snprintf(fw_name, sizeof(fw_name), "%s/%s.nffw", DEFAULT_FW_PATH, serial);
 
 	PMD_DRV_LOG(DEBUG, "Trying with fw file: %s", fw_name);
 	if (rte_firmware_read(fw_name, &fw_buf, &fsize) == 0)
 		goto load_fw;
+
 	/* Then try the PCI name */
 	snprintf(fw_name, sizeof(fw_name), "%s/pci-%s.nffw", DEFAULT_FW_PATH,
 			dev->name);
diff --git a/drivers/net/nfp/nfp_ethdev_vf.c b/drivers/net/nfp/nfp_ethdev_vf.c
index c9e72dd953..7096695de6 100644
--- a/drivers/net/nfp/nfp_ethdev_vf.c
+++ b/drivers/net/nfp/nfp_ethdev_vf.c
@@ -63,6 +63,7 @@ nfp_netvf_start(struct rte_eth_dev *dev)
 				return -EIO;
 			}
 		}
+
 		intr_vector = dev->data->nb_rx_queues;
 		if (rte_intr_efd_enable(intr_handle, intr_vector) != 0)
 			return -1;
@@ -172,12 +173,10 @@ nfp_netvf_close(struct rte_eth_dev *dev)
 	 * We assume that the DPDK application is stopping all the
 	 * threads/queues before calling the device close function.
 	 */
-
 	nfp_net_disable_queues(dev);
 
 	/* Clear queues */
 	nfp_net_close_tx_queue(dev);
-
 	nfp_net_close_rx_queue(dev);
 
 	rte_intr_disable(pci_dev->intr_handle);
@@ -194,35 +193,35 @@ nfp_netvf_close(struct rte_eth_dev *dev)
 
 /* Initialise and register VF driver with DPDK Application */
 static const struct eth_dev_ops nfp_netvf_eth_dev_ops = {
-	.dev_configure		= nfp_net_configure,
-	.dev_start		= nfp_netvf_start,
-	.dev_stop		= nfp_netvf_stop,
-	.dev_set_link_up	= nfp_netvf_set_link_up,
-	.dev_set_link_down	= nfp_netvf_set_link_down,
-	.dev_close		= nfp_netvf_close,
-	.promiscuous_enable	= nfp_net_promisc_enable,
-	.promiscuous_disable	= nfp_net_promisc_disable,
-	.link_update		= nfp_net_link_update,
-	.stats_get		= nfp_net_stats_get,
-	.stats_reset		= nfp_net_stats_reset,
+	.dev_configure          = nfp_net_configure,
+	.dev_start              = nfp_netvf_start,
+	.dev_stop               = nfp_netvf_stop,
+	.dev_set_link_up        = nfp_netvf_set_link_up,
+	.dev_set_link_down      = nfp_netvf_set_link_down,
+	.dev_close              = nfp_netvf_close,
+	.promiscuous_enable     = nfp_net_promisc_enable,
+	.promiscuous_disable    = nfp_net_promisc_disable,
+	.link_update            = nfp_net_link_update,
+	.stats_get              = nfp_net_stats_get,
+	.stats_reset            = nfp_net_stats_reset,
 	.xstats_get             = nfp_net_xstats_get,
 	.xstats_reset           = nfp_net_xstats_reset,
 	.xstats_get_names       = nfp_net_xstats_get_names,
 	.xstats_get_by_id       = nfp_net_xstats_get_by_id,
 	.xstats_get_names_by_id = nfp_net_xstats_get_names_by_id,
-	.dev_infos_get		= nfp_net_infos_get,
+	.dev_infos_get          = nfp_net_infos_get,
 	.dev_supported_ptypes_get = nfp_net_supported_ptypes_get,
-	.mtu_set		= nfp_net_dev_mtu_set,
-	.mac_addr_set		= nfp_net_set_mac_addr,
-	.vlan_offload_set	= nfp_net_vlan_offload_set,
-	.reta_update		= nfp_net_reta_update,
-	.reta_query		= nfp_net_reta_query,
-	.rss_hash_update	= nfp_net_rss_hash_update,
-	.rss_hash_conf_get	= nfp_net_rss_hash_conf_get,
-	.rx_queue_setup		= nfp_net_rx_queue_setup,
-	.rx_queue_release	= nfp_net_rx_queue_release,
-	.tx_queue_setup		= nfp_net_tx_queue_setup,
-	.tx_queue_release	= nfp_net_tx_queue_release,
+	.mtu_set                = nfp_net_dev_mtu_set,
+	.mac_addr_set           = nfp_net_set_mac_addr,
+	.vlan_offload_set       = nfp_net_vlan_offload_set,
+	.reta_update            = nfp_net_reta_update,
+	.reta_query             = nfp_net_reta_query,
+	.rss_hash_update        = nfp_net_rss_hash_update,
+	.rss_hash_conf_get      = nfp_net_rss_hash_conf_get,
+	.rx_queue_setup         = nfp_net_rx_queue_setup,
+	.rx_queue_release       = nfp_net_rx_queue_release,
+	.tx_queue_setup         = nfp_net_tx_queue_setup,
+	.tx_queue_release       = nfp_net_tx_queue_release,
 	.rx_queue_intr_enable   = nfp_rx_queue_intr_enable,
 	.rx_queue_intr_disable  = nfp_rx_queue_intr_disable,
 };
diff --git a/drivers/net/nfp/nfp_flow.c b/drivers/net/nfp/nfp_flow.c
index fbcdb3d19e..1bf31146fc 100644
--- a/drivers/net/nfp/nfp_flow.c
+++ b/drivers/net/nfp/nfp_flow.c
@@ -496,6 +496,7 @@ nfp_stats_id_alloc(struct nfp_flow_priv *priv, uint32_t *ctx)
 			priv->stats_ids.init_unallocated--;
 			priv->active_mem_unit = 0;
 		}
+
 		return 0;
 	}
 
@@ -622,6 +623,7 @@ nfp_tun_add_ipv6_off(struct nfp_app_fw_flower *app_fw_flower,
 		PMD_DRV_LOG(ERR, "Mem error when offloading IP6 address.");
 		return -ENOMEM;
 	}
+
 	memcpy(tmp_entry->ipv6_addr, ipv6, sizeof(tmp_entry->ipv6_addr));
 	tmp_entry->ref_count = 1;
 
@@ -1796,7 +1798,7 @@ static const struct nfp_flow_item_proc nfp_flow_item_proc_list[] = {
 		.next_item = NEXT_ITEM(RTE_FLOW_ITEM_TYPE_VLAN,
 			RTE_FLOW_ITEM_TYPE_IPV4,
 			RTE_FLOW_ITEM_TYPE_IPV6),
-		.mask_support = &(const struct rte_flow_item_eth){
+		.mask_support = &(const struct rte_flow_item_eth) {
 			.hdr = {
 				.dst_addr.addr_bytes = "\xff\xff\xff\xff\xff\xff",
 				.src_addr.addr_bytes = "\xff\xff\xff\xff\xff\xff",
@@ -1811,7 +1813,7 @@ static const struct nfp_flow_item_proc nfp_flow_item_proc_list[] = {
 	[RTE_FLOW_ITEM_TYPE_VLAN] = {
 		.next_item = NEXT_ITEM(RTE_FLOW_ITEM_TYPE_IPV4,
 			RTE_FLOW_ITEM_TYPE_IPV6),
-		.mask_support = &(const struct rte_flow_item_vlan){
+		.mask_support = &(const struct rte_flow_item_vlan) {
 			.hdr = {
 				.vlan_tci  = RTE_BE16(0xefff),
 				.eth_proto = RTE_BE16(0xffff),
@@ -1827,7 +1829,7 @@ static const struct nfp_flow_item_proc nfp_flow_item_proc_list[] = {
 			RTE_FLOW_ITEM_TYPE_UDP,
 			RTE_FLOW_ITEM_TYPE_SCTP,
 			RTE_FLOW_ITEM_TYPE_GRE),
-		.mask_support = &(const struct rte_flow_item_ipv4){
+		.mask_support = &(const struct rte_flow_item_ipv4) {
 			.hdr = {
 				.type_of_service = 0xff,
 				.fragment_offset = RTE_BE16(0xffff),
@@ -1846,7 +1848,7 @@ static const struct nfp_flow_item_proc nfp_flow_item_proc_list[] = {
 			RTE_FLOW_ITEM_TYPE_UDP,
 			RTE_FLOW_ITEM_TYPE_SCTP,
 			RTE_FLOW_ITEM_TYPE_GRE),
-		.mask_support = &(const struct rte_flow_item_ipv6){
+		.mask_support = &(const struct rte_flow_item_ipv6) {
 			.hdr = {
 				.vtc_flow   = RTE_BE32(0x0ff00000),
 				.proto      = 0xff,
@@ -1863,7 +1865,7 @@ static const struct nfp_flow_item_proc nfp_flow_item_proc_list[] = {
 		.merge = nfp_flow_merge_ipv6,
 	},
 	[RTE_FLOW_ITEM_TYPE_TCP] = {
-		.mask_support = &(const struct rte_flow_item_tcp){
+		.mask_support = &(const struct rte_flow_item_tcp) {
 			.hdr = {
 				.tcp_flags = 0xff,
 				.src_port  = RTE_BE16(0xffff),
@@ -1877,7 +1879,7 @@ static const struct nfp_flow_item_proc nfp_flow_item_proc_list[] = {
 	[RTE_FLOW_ITEM_TYPE_UDP] = {
 		.next_item = NEXT_ITEM(RTE_FLOW_ITEM_TYPE_VXLAN,
 			RTE_FLOW_ITEM_TYPE_GENEVE),
-		.mask_support = &(const struct rte_flow_item_udp){
+		.mask_support = &(const struct rte_flow_item_udp) {
 			.hdr = {
 				.src_port = RTE_BE16(0xffff),
 				.dst_port = RTE_BE16(0xffff),
@@ -1888,7 +1890,7 @@ static const struct nfp_flow_item_proc nfp_flow_item_proc_list[] = {
 		.merge = nfp_flow_merge_udp,
 	},
 	[RTE_FLOW_ITEM_TYPE_SCTP] = {
-		.mask_support = &(const struct rte_flow_item_sctp){
+		.mask_support = &(const struct rte_flow_item_sctp) {
 			.hdr = {
 				.src_port  = RTE_BE16(0xffff),
 				.dst_port  = RTE_BE16(0xffff),
@@ -1900,7 +1902,7 @@ static const struct nfp_flow_item_proc nfp_flow_item_proc_list[] = {
 	},
 	[RTE_FLOW_ITEM_TYPE_VXLAN] = {
 		.next_item = NEXT_ITEM(RTE_FLOW_ITEM_TYPE_ETH),
-		.mask_support = &(const struct rte_flow_item_vxlan){
+		.mask_support = &(const struct rte_flow_item_vxlan) {
 			.hdr = {
 				.vx_vni = RTE_BE32(0xffffff00),
 			},
@@ -1911,7 +1913,7 @@ static const struct nfp_flow_item_proc nfp_flow_item_proc_list[] = {
 	},
 	[RTE_FLOW_ITEM_TYPE_GENEVE] = {
 		.next_item = NEXT_ITEM(RTE_FLOW_ITEM_TYPE_ETH),
-		.mask_support = &(const struct rte_flow_item_geneve){
+		.mask_support = &(const struct rte_flow_item_geneve) {
 			.vni = "\xff\xff\xff",
 		},
 		.mask_default = &rte_flow_item_geneve_mask,
@@ -1920,7 +1922,7 @@ static const struct nfp_flow_item_proc nfp_flow_item_proc_list[] = {
 	},
 	[RTE_FLOW_ITEM_TYPE_GRE] = {
 		.next_item = NEXT_ITEM(RTE_FLOW_ITEM_TYPE_GRE_KEY),
-		.mask_support = &(const struct rte_flow_item_gre){
+		.mask_support = &(const struct rte_flow_item_gre) {
 			.c_rsvd0_ver = RTE_BE16(0xa000),
 			.protocol = RTE_BE16(0xffff),
 		},
@@ -1952,6 +1954,7 @@ nfp_flow_item_check(const struct rte_flow_item *item,
 					" without a corresponding 'spec'.");
 			return -EINVAL;
 		}
+
 		/* No spec, no mask, no problem. */
 		return 0;
 	}
@@ -3031,6 +3034,7 @@ nfp_pre_tun_table_check_add(struct nfp_flower_representor *repr,
 	for (i = 1; i < NFP_TUN_PRE_TUN_RULE_LIMIT; i++) {
 		if (priv->pre_tun_bitmap[i] == 0)
 			continue;
+
 		entry->mac_index = i;
 		find_entry = nfp_pre_tun_table_search(priv, (char *)entry, entry_size);
 		if (find_entry != NULL) {
@@ -3057,6 +3061,7 @@ nfp_pre_tun_table_check_add(struct nfp_flower_representor *repr,
 
 	*index = entry->mac_index;
 	priv->pre_tun_cnt++;
+
 	return 0;
 }
 
@@ -3091,12 +3096,14 @@ nfp_pre_tun_table_check_del(struct nfp_flower_representor *repr,
 	for (i = 1; i < NFP_TUN_PRE_TUN_RULE_LIMIT; i++) {
 		if (priv->pre_tun_bitmap[i] == 0)
 			continue;
+
 		entry->mac_index = i;
 		find_entry = nfp_pre_tun_table_search(priv, (char *)entry, entry_size);
 		if (find_entry != NULL) {
 			find_entry->ref_cnt--;
 			if (find_entry->ref_cnt != 0)
 				goto free_entry;
+
 			priv->pre_tun_bitmap[i] = 0;
 			break;
 		}
diff --git a/drivers/net/nfp/nfp_flow.h b/drivers/net/nfp/nfp_flow.h
index ab38dbe1f4..991629e6ed 100644
--- a/drivers/net/nfp/nfp_flow.h
+++ b/drivers/net/nfp/nfp_flow.h
@@ -126,11 +126,14 @@ struct nfp_ipv6_addr_entry {
 struct nfp_flow_priv {
 	uint32_t hash_seed; /**< Hash seed for hash tables in this structure. */
 	uint64_t flower_version; /**< Flow version, always increase. */
+
 	/* Mask hash table */
 	struct nfp_fl_mask_id mask_ids; /**< Entry for mask hash table */
 	struct rte_hash *mask_table; /**< Hash table to store mask ids. */
+
 	/* Flow hash table */
 	struct rte_hash *flow_table; /**< Hash table to store flow rules. */
+
 	/* Flow stats */
 	uint32_t active_mem_unit; /**< The size of active mem units. */
 	uint32_t total_mem_units; /**< The size of total mem units. */
@@ -138,16 +141,20 @@ struct nfp_flow_priv {
 	struct nfp_fl_stats_id stats_ids; /**< The stats id ring. */
 	struct nfp_fl_stats *stats; /**< Store stats of flow. */
 	rte_spinlock_t stats_lock; /** < Lock the update of 'stats' field. */
+
 	/* Pre tunnel rule */
 	uint16_t pre_tun_cnt; /**< The size of pre tunnel rule */
 	uint8_t pre_tun_bitmap[NFP_TUN_PRE_TUN_RULE_LIMIT]; /**< Bitmap of pre tunnel rule */
 	struct rte_hash *pre_tun_table; /**< Hash table to store pre tunnel rule */
+
 	/* IPv4 off */
 	LIST_HEAD(, nfp_ipv4_addr_entry) ipv4_off_list; /**< Store ipv4 off */
 	rte_spinlock_t ipv4_off_lock; /**< Lock the ipv4 off list */
+
 	/* IPv6 off */
 	LIST_HEAD(, nfp_ipv6_addr_entry) ipv6_off_list; /**< Store ipv6 off */
 	rte_spinlock_t ipv6_off_lock; /**< Lock the ipv6 off list */
+
 	/* Neighbor next */
 	LIST_HEAD(, nfp_fl_tun)nn_list; /**< Store nn entry */
 	/* Conntrack */
diff --git a/drivers/net/nfp/nfp_rxtx.c b/drivers/net/nfp/nfp_rxtx.c
index d506682b56..e284a67d7c 100644
--- a/drivers/net/nfp/nfp_rxtx.c
+++ b/drivers/net/nfp/nfp_rxtx.c
@@ -190,6 +190,7 @@ nfp_net_rx_fill_freelist(struct nfp_net_rxq *rxq)
 		rxd->fld.dd = 0;
 		rxd->fld.dma_addr_hi = (dma_addr >> 32) & 0xffff;
 		rxd->fld.dma_addr_lo = dma_addr & 0xffffffff;
+
 		rxe[i].mbuf = mbuf;
 	}
 
@@ -213,6 +214,7 @@ nfp_net_rx_freelist_setup(struct rte_eth_dev *dev)
 		if (nfp_net_rx_fill_freelist(dev->data->rx_queues[i]) != 0)
 			return -1;
 	}
+
 	return 0;
 }
 
@@ -225,7 +227,6 @@ nfp_net_rx_queue_count(void *rx_queue)
 	struct nfp_net_rx_desc *rxds;
 
 	rxq = rx_queue;
-
 	idx = rxq->rd_p;
 
 	/*
@@ -235,7 +236,6 @@ nfp_net_rx_queue_count(void *rx_queue)
 	 * performance. But ideally that should be done in descriptors
 	 * chunks belonging to the same cache line.
 	 */
-
 	while (count < rxq->rx_count) {
 		rxds = &rxq->rxds[idx];
 		if ((rxds->rxd.meta_len_dd & PCIE_DESC_RX_DD) == 0)
@@ -394,6 +394,7 @@ nfp_net_parse_meta_qinq(const struct nfp_meta_parsed *meta,
 
 	if (meta->vlan[0].offload == 0)
 		mb->vlan_tci = rte_cpu_to_le_16(meta->vlan[0].tci);
+
 	mb->vlan_tci_outer = rte_cpu_to_le_16(meta->vlan[1].tci);
 	PMD_RX_LOG(DEBUG, "Received outer vlan TCI is %u inner vlan TCI is %u",
 			mb->vlan_tci_outer, mb->vlan_tci);
@@ -638,7 +639,6 @@ nfp_net_parse_ptype(struct nfp_net_rx_desc *rxds,
  * so looking at the implications of this type of allocation should be studied
  * deeply.
  */
-
 uint16_t
 nfp_net_recv_pkts(void *rx_queue,
 		struct rte_mbuf **rx_pkts,
@@ -903,7 +903,6 @@ nfp_net_rx_queue_setup(struct rte_eth_dev *dev,
 	tz = rte_eth_dma_zone_reserve(dev, "rx_ring", queue_idx,
 			sizeof(struct nfp_net_rx_desc) * max_rx_desc,
 			NFP_MEMZONE_ALIGN, socket_id);
-
 	if (tz == NULL) {
 		PMD_DRV_LOG(ERR, "Error allocating rx dma");
 		nfp_net_rx_queue_release(dev, queue_idx);
diff --git a/drivers/net/nfp/nfpcore/nfp_resource.h b/drivers/net/nfp/nfpcore/nfp_resource.h
index 18196d273c..f49c99e462 100644
--- a/drivers/net/nfp/nfpcore/nfp_resource.h
+++ b/drivers/net/nfp/nfpcore/nfp_resource.h
@@ -15,7 +15,7 @@
 #define NFP_RESOURCE_NFP_HWINFO         "nfp.info"
 
 /* Service Processor */
-#define NFP_RESOURCE_NSP		"nfp.sp"
+#define NFP_RESOURCE_NSP                "nfp.sp"
 
 /* Opaque handle to a NFP Resource */
 struct nfp_resource;
-- 
2.39.1


^ permalink raw reply	[flat|nested] 40+ messages in thread

* [PATCH v2 08/11] net/nfp: unify the guide line of header file
  2023-10-12  1:26 ` [PATCH v2 00/11] Unify the PMD coding style Chaoyong He
                     ` (6 preceding siblings ...)
  2023-10-12  1:27   ` [PATCH v2 07/11] net/nfp: standard the blank character Chaoyong He
@ 2023-10-12  1:27   ` Chaoyong He
  2023-10-12  1:27   ` [PATCH v2 09/11] net/nfp: rename some parameter and variable Chaoyong He
                     ` (3 subsequent siblings)
  11 siblings, 0 replies; 40+ messages in thread
From: Chaoyong He @ 2023-10-12  1:27 UTC (permalink / raw)
  To: dev; +Cc: oss-drivers, Chaoyong He, Long Wu, Peng Zhang

Unify the guide line of the header files; we choose the
'__FOO_BAR_H__' style.
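
For example, a new header following this style would look like (a
hypothetical file, not one of the headers touched by this patch):

	#ifndef __NFP_EXAMPLE_H__
	#define __NFP_EXAMPLE_H__

	int nfp_example_init(void);

	#endif /* __NFP_EXAMPLE_H__ */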

Signed-off-by: Chaoyong He <chaoyong.he@corigine.com>
Reviewed-by: Long Wu <long.wu@corigine.com>
Reviewed-by: Peng Zhang <peng.zhang@corigine.com>
---
 drivers/net/nfp/flower/nfp_flower.h             | 6 +++---
 drivers/net/nfp/flower/nfp_flower_cmsg.h        | 6 +++---
 drivers/net/nfp/flower/nfp_flower_ctrl.h        | 6 +++---
 drivers/net/nfp/flower/nfp_flower_representor.h | 6 +++---
 drivers/net/nfp/nfd3/nfp_nfd3.h                 | 6 +++---
 drivers/net/nfp/nfdk/nfp_nfdk.h                 | 6 +++---
 drivers/net/nfp/nfp_common.h                    | 6 +++---
 drivers/net/nfp/nfp_cpp_bridge.h                | 8 +++-----
 drivers/net/nfp/nfp_ctrl.h                      | 6 +++---
 drivers/net/nfp/nfp_flow.h                      | 6 +++---
 drivers/net/nfp/nfp_logs.h                      | 6 +++---
 drivers/net/nfp/nfp_rxtx.h                      | 6 +++---
 12 files changed, 36 insertions(+), 38 deletions(-)

diff --git a/drivers/net/nfp/flower/nfp_flower.h b/drivers/net/nfp/flower/nfp_flower.h
index 0b4e38cedd..b7ea830209 100644
--- a/drivers/net/nfp/flower/nfp_flower.h
+++ b/drivers/net/nfp/flower/nfp_flower.h
@@ -3,8 +3,8 @@
  * All rights reserved.
  */
 
-#ifndef _NFP_FLOWER_H_
-#define _NFP_FLOWER_H_
+#ifndef __NFP_FLOWER_H__
+#define __NFP_FLOWER_H__
 
 #include "../nfp_common.h"
 
@@ -118,4 +118,4 @@ int nfp_flower_pf_stop(struct rte_eth_dev *dev);
 uint32_t nfp_flower_pkt_add_metadata(struct nfp_app_fw_flower *app_fw_flower,
 		struct rte_mbuf *mbuf, uint32_t port_id);
 
-#endif /* _NFP_FLOWER_H_ */
+#endif /* __NFP_FLOWER_H__ */
diff --git a/drivers/net/nfp/flower/nfp_flower_cmsg.h b/drivers/net/nfp/flower/nfp_flower_cmsg.h
index cb019171b6..c2938fb6f6 100644
--- a/drivers/net/nfp/flower/nfp_flower_cmsg.h
+++ b/drivers/net/nfp/flower/nfp_flower_cmsg.h
@@ -3,8 +3,8 @@
  * All rights reserved.
  */
 
-#ifndef _NFP_CMSG_H_
-#define _NFP_CMSG_H_
+#ifndef __NFP_CMSG_H__
+#define __NFP_CMSG_H__
 
 #include "../nfp_flow.h"
 #include "nfp_flower.h"
@@ -989,4 +989,4 @@ int nfp_flower_cmsg_qos_delete(struct nfp_app_fw_flower *app_fw_flower,
 int nfp_flower_cmsg_qos_stats(struct nfp_app_fw_flower *app_fw_flower,
 		struct nfp_cfg_head *head);
 
-#endif /* _NFP_CMSG_H_ */
+#endif /* __NFP_CMSG_H__ */
diff --git a/drivers/net/nfp/flower/nfp_flower_ctrl.h b/drivers/net/nfp/flower/nfp_flower_ctrl.h
index f73a024266..4c94d36847 100644
--- a/drivers/net/nfp/flower/nfp_flower_ctrl.h
+++ b/drivers/net/nfp/flower/nfp_flower_ctrl.h
@@ -3,8 +3,8 @@
  * All rights reserved.
  */
 
-#ifndef _NFP_FLOWER_CTRL_H_
-#define _NFP_FLOWER_CTRL_H_
+#ifndef __NFP_FLOWER_CTRL_H__
+#define __NFP_FLOWER_CTRL_H__
 
 #include "nfp_flower.h"
 
@@ -13,4 +13,4 @@ uint16_t nfp_flower_ctrl_vnic_xmit(struct nfp_app_fw_flower *app_fw_flower,
 		struct rte_mbuf *mbuf);
 void nfp_flower_ctrl_vnic_xmit_register(struct nfp_app_fw_flower *app_fw_flower);
 
-#endif /* _NFP_FLOWER_CTRL_H_ */
+#endif /* __NFP_FLOWER_CTRL_H__ */
diff --git a/drivers/net/nfp/flower/nfp_flower_representor.h b/drivers/net/nfp/flower/nfp_flower_representor.h
index eda19cbb16..bcb4c3cdb5 100644
--- a/drivers/net/nfp/flower/nfp_flower_representor.h
+++ b/drivers/net/nfp/flower/nfp_flower_representor.h
@@ -3,8 +3,8 @@
  * All rights reserved.
  */
 
-#ifndef _NFP_FLOWER_REPRESENTOR_H_
-#define _NFP_FLOWER_REPRESENTOR_H_
+#ifndef __NFP_FLOWER_REPRESENTOR_H__
+#define __NFP_FLOWER_REPRESENTOR_H__
 
 #include "nfp_flower.h"
 
@@ -24,4 +24,4 @@ struct nfp_flower_representor {
 
 int nfp_flower_repr_create(struct nfp_app_fw_flower *app_fw_flower);
 
-#endif /* _NFP_FLOWER_REPRESENTOR_H_ */
+#endif /* __NFP_FLOWER_REPRESENTOR_H__ */
diff --git a/drivers/net/nfp/nfd3/nfp_nfd3.h b/drivers/net/nfp/nfd3/nfp_nfd3.h
index 0b0ca361f4..3ba562cc3f 100644
--- a/drivers/net/nfp/nfd3/nfp_nfd3.h
+++ b/drivers/net/nfp/nfd3/nfp_nfd3.h
@@ -3,8 +3,8 @@
  * All rights reserved.
  */
 
-#ifndef _NFP_NFD3_H_
-#define _NFP_NFD3_H_
+#ifndef __NFP_NFD3_H__
+#define __NFP_NFD3_H__
 
 #include "../nfp_rxtx.h"
 
@@ -84,4 +84,4 @@ int nfp_net_nfd3_tx_queue_setup(struct rte_eth_dev *dev,
 		unsigned int socket_id,
 		const struct rte_eth_txconf *tx_conf);
 
-#endif /* _NFP_NFD3_H_ */
+#endif /* __NFP_NFD3_H__ */
diff --git a/drivers/net/nfp/nfdk/nfp_nfdk.h b/drivers/net/nfp/nfdk/nfp_nfdk.h
index 04bd3c7600..2767fd51cd 100644
--- a/drivers/net/nfp/nfdk/nfp_nfdk.h
+++ b/drivers/net/nfp/nfdk/nfp_nfdk.h
@@ -3,8 +3,8 @@
  * All rights reserved.
  */
 
-#ifndef _NFP_NFDK_H_
-#define _NFP_NFDK_H_
+#ifndef __NFP_NFDK_H__
+#define __NFP_NFDK_H__
 
 #include "../nfp_rxtx.h"
 
@@ -178,4 +178,4 @@ int nfp_net_nfdk_tx_queue_setup(struct rte_eth_dev *dev,
 int nfp_net_nfdk_tx_maybe_close_block(struct nfp_net_txq *txq,
 		struct rte_mbuf *pkt);
 
-#endif /* _NFP_NFDK_H_ */
+#endif /* __NFP_NFDK_H__ */
diff --git a/drivers/net/nfp/nfp_common.h b/drivers/net/nfp/nfp_common.h
index 5439865c5e..cd0ca50c6b 100644
--- a/drivers/net/nfp/nfp_common.h
+++ b/drivers/net/nfp/nfp_common.h
@@ -3,8 +3,8 @@
  * All rights reserved.
  */
 
-#ifndef _NFP_COMMON_H_
-#define _NFP_COMMON_H_
+#ifndef __NFP_COMMON_H__
+#define __NFP_COMMON_H__
 
 #include <bus_pci_driver.h>
 #include <ethdev_driver.h>
@@ -450,4 +450,4 @@ bool nfp_net_is_valid_nfd_version(struct nfp_net_fw_ver version);
 #define NFP_PRIV_TO_APP_FW_FLOWER(app_fw_priv)\
 	((struct nfp_app_fw_flower *)app_fw_priv)
 
-#endif /* _NFP_COMMON_H_ */
+#endif /* __NFP_COMMON_H__ */
diff --git a/drivers/net/nfp/nfp_cpp_bridge.h b/drivers/net/nfp/nfp_cpp_bridge.h
index e6a957a090..a1103e85e4 100644
--- a/drivers/net/nfp/nfp_cpp_bridge.h
+++ b/drivers/net/nfp/nfp_cpp_bridge.h
@@ -1,16 +1,14 @@
 /* SPDX-License-Identifier: BSD-3-Clause
  * Copyright (c) 2014-2021 Netronome Systems, Inc.
  * All rights reserved.
- *
- * Small portions derived from code Copyright(c) 2010-2015 Intel Corporation.
  */
 
-#ifndef _NFP_CPP_BRIDGE_H_
-#define _NFP_CPP_BRIDGE_H_
+#ifndef __NFP_CPP_BRIDGE_H__
+#define __NFP_CPP_BRIDGE_H__
 
 #include "nfp_common.h"
 
 int nfp_enable_cpp_service(struct nfp_pf_dev *pf_dev);
 int nfp_map_service(uint32_t service_id);
 
-#endif /* _NFP_CPP_BRIDGE_H_ */
+#endif /* __NFP_CPP_BRIDGE_H__ */
diff --git a/drivers/net/nfp/nfp_ctrl.h b/drivers/net/nfp/nfp_ctrl.h
index 5cc83ff3e6..5c2065a537 100644
--- a/drivers/net/nfp/nfp_ctrl.h
+++ b/drivers/net/nfp/nfp_ctrl.h
@@ -3,8 +3,8 @@
  * All rights reserved.
  */
 
-#ifndef _NFP_CTRL_H_
-#define _NFP_CTRL_H_
+#ifndef __NFP_CTRL_H__
+#define __NFP_CTRL_H__
 
 #include <stdint.h>
 
@@ -573,4 +573,4 @@ nfp_net_cfg_ctrl_rss(uint32_t hw_cap)
 	return NFP_NET_CFG_CTRL_RSS;
 }
 
-#endif /* _NFP_CTRL_H_ */
+#endif /* __NFP_CTRL_H__ */
diff --git a/drivers/net/nfp/nfp_flow.h b/drivers/net/nfp/nfp_flow.h
index 991629e6ed..aeb24458f3 100644
--- a/drivers/net/nfp/nfp_flow.h
+++ b/drivers/net/nfp/nfp_flow.h
@@ -3,8 +3,8 @@
  * All rights reserved.
  */
 
-#ifndef _NFP_FLOW_H_
-#define _NFP_FLOW_H_
+#ifndef __NFP_FLOW_H__
+#define __NFP_FLOW_H__
 
 #include "nfp_common.h"
 
@@ -202,4 +202,4 @@ int nfp_flow_destroy(struct rte_eth_dev *dev,
 		struct rte_flow *nfp_flow,
 		struct rte_flow_error *error);
 
-#endif /* _NFP_FLOW_H_ */
+#endif /* __NFP_FLOW_H__ */
diff --git a/drivers/net/nfp/nfp_logs.h b/drivers/net/nfp/nfp_logs.h
index 16ff61700b..690adabffd 100644
--- a/drivers/net/nfp/nfp_logs.h
+++ b/drivers/net/nfp/nfp_logs.h
@@ -3,8 +3,8 @@
  * All rights reserved.
  */
 
-#ifndef _NFP_LOGS_H_
-#define _NFP_LOGS_H_
+#ifndef __NFP_LOGS_H__
+#define __NFP_LOGS_H__
 
 #include <rte_log.h>
 
@@ -41,4 +41,4 @@ extern int nfp_logtype_driver;
 	rte_log(RTE_LOG_ ## level, nfp_logtype_driver, \
 		"%s(): " fmt "\n", __func__, ## args)
 
-#endif /* _NFP_LOGS_H_ */
+#endif /* __NFP_LOGS_H__ */
diff --git a/drivers/net/nfp/nfp_rxtx.h b/drivers/net/nfp/nfp_rxtx.h
index 899cc42c97..956cc7a0d2 100644
--- a/drivers/net/nfp/nfp_rxtx.h
+++ b/drivers/net/nfp/nfp_rxtx.h
@@ -3,8 +3,8 @@
  * All rights reserved.
  */
 
-#ifndef _NFP_RXTX_H_
-#define _NFP_RXTX_H_
+#ifndef __NFP_RXTX_H__
+#define __NFP_RXTX_H__
 
 #include <ethdev_driver.h>
 
@@ -253,4 +253,4 @@ void nfp_net_set_meta_ipsec(struct nfp_net_meta_raw *meta_data,
 		uint8_t layer,
 		uint8_t ipsec_layer);
 
-#endif /* _NFP_RXTX_H_ */
+#endif /* __NFP_RXTX_H__ */
-- 
2.39.1


^ permalink raw reply	[flat|nested] 40+ messages in thread

* [PATCH v2 09/11] net/nfp: rename some parameter and variable
  2023-10-12  1:26 ` [PATCH v2 00/11] Unify the PMD coding style Chaoyong He
                     ` (7 preceding siblings ...)
  2023-10-12  1:27   ` [PATCH v2 08/11] net/nfp: unify the guide line of header file Chaoyong He
@ 2023-10-12  1:27   ` Chaoyong He
  2023-10-12  1:27   ` [PATCH v2 10/11] net/nfp: adjust logic to make it more readable Chaoyong He
                     ` (2 subsequent siblings)
  11 siblings, 0 replies; 40+ messages in thread
From: Chaoyong He @ 2023-10-12  1:27 UTC (permalink / raw)
  To: dev; +Cc: oss-drivers, Chaoyong He, Long Wu, Peng Zhang

Rename some parameters and variables to make the logic easier to
understand.
Also avoid mixing lowercase and uppercase in macro names.
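
For example, the macro naming rule works like this (hypothetical macros,
not the driver's own):

	/* Bad: mixed lowercase and uppercase in a macro name. */
	#define EXAMPLE_QUEUE_STS_READPTR_mask    (0x3ffff)

	/* Good: macro names use uppercase characters only. */
	#define EXAMPLE_QUEUE_STS_READPTR_MASK    (0x3ffff)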

Signed-off-by: Chaoyong He <chaoyong.he@corigine.com>
Reviewed-by: Long Wu <long.wu@corigine.com>
Reviewed-by: Peng Zhang <peng.zhang@corigine.com>
---
 drivers/net/nfp/nfp_common.h    | 20 ++++++++++----------
 drivers/net/nfp/nfp_ethdev_vf.c |  8 ++++----
 2 files changed, 14 insertions(+), 14 deletions(-)

diff --git a/drivers/net/nfp/nfp_common.h b/drivers/net/nfp/nfp_common.h
index cd0ca50c6b..aad3c29ba8 100644
--- a/drivers/net/nfp/nfp_common.h
+++ b/drivers/net/nfp/nfp_common.h
@@ -19,9 +19,9 @@
 #define NFP_QCP_QUEUE_ADD_RPTR                  0x0000
 #define NFP_QCP_QUEUE_ADD_WPTR                  0x0004
 #define NFP_QCP_QUEUE_STS_LO                    0x0008
-#define NFP_QCP_QUEUE_STS_LO_READPTR_mask     (0x3ffff)
+#define NFP_QCP_QUEUE_STS_LO_READPTR_MASK     (0x3ffff)
 #define NFP_QCP_QUEUE_STS_HI                    0x000c
-#define NFP_QCP_QUEUE_STS_HI_WRITEPTR_mask    (0x3ffff)
+#define NFP_QCP_QUEUE_STS_HI_WRITEPTR_MASK    (0x3ffff)
 
 /* Interrupt definitions */
 #define NFP_NET_IRQ_LSC_IDX             0
@@ -303,7 +303,7 @@ nn_cfg_writeq(struct nfp_net_hw *hw,
 /**
  * Add the value to the selected pointer of a queue.
  *
- * @param q
+ * @param queue
  *   Base address for queue structure
  * @param ptr
  *   Add to the read or write pointer
@@ -311,7 +311,7 @@ nn_cfg_writeq(struct nfp_net_hw *hw,
  *   Value to add to the queue pointer
  */
 static inline void
-nfp_qcp_ptr_add(uint8_t *q,
+nfp_qcp_ptr_add(uint8_t *queue,
 		enum nfp_qcp_ptr ptr,
 		uint32_t val)
 {
@@ -322,19 +322,19 @@ nfp_qcp_ptr_add(uint8_t *q,
 	else
 		off = NFP_QCP_QUEUE_ADD_WPTR;
 
-	nn_writel(rte_cpu_to_le_32(val), q + off);
+	nn_writel(rte_cpu_to_le_32(val), queue + off);
 }
 
 /**
  * Read the current read/write pointer value for a queue.
  *
- * @param q
+ * @param queue
  *   Base address for queue structure
  * @param ptr
  *   Read or Write pointer
  */
 static inline uint32_t
-nfp_qcp_read(uint8_t *q,
+nfp_qcp_read(uint8_t *queue,
 		enum nfp_qcp_ptr ptr)
 {
 	uint32_t off;
@@ -345,12 +345,12 @@ nfp_qcp_read(uint8_t *q,
 	else
 		off = NFP_QCP_QUEUE_STS_HI;
 
-	val = rte_cpu_to_le_32(nn_readl(q + off));
+	val = rte_cpu_to_le_32(nn_readl(queue + off));
 
 	if (ptr == NFP_QCP_READ_PTR)
-		return val & NFP_QCP_QUEUE_STS_LO_READPTR_mask;
+		return val & NFP_QCP_QUEUE_STS_LO_READPTR_MASK;
 	else
-		return val & NFP_QCP_QUEUE_STS_HI_WRITEPTR_mask;
+		return val & NFP_QCP_QUEUE_STS_HI_WRITEPTR_MASK;
 }
 
 static inline uint32_t
diff --git a/drivers/net/nfp/nfp_ethdev_vf.c b/drivers/net/nfp/nfp_ethdev_vf.c
index 7096695de6..7fb7b3efc5 100644
--- a/drivers/net/nfp/nfp_ethdev_vf.c
+++ b/drivers/net/nfp/nfp_ethdev_vf.c
@@ -396,7 +396,7 @@ nfp_vf_pci_uninit(struct rte_eth_dev *eth_dev)
 }
 
 static int
-eth_nfp_vf_pci_probe(struct rte_pci_driver *pci_drv __rte_unused,
+nfp_vf_pci_probe(struct rte_pci_driver *pci_drv __rte_unused,
 		struct rte_pci_device *pci_dev)
 {
 	return rte_eth_dev_pci_generic_probe(pci_dev,
@@ -404,7 +404,7 @@ eth_nfp_vf_pci_probe(struct rte_pci_driver *pci_drv __rte_unused,
 }
 
 static int
-eth_nfp_vf_pci_remove(struct rte_pci_device *pci_dev)
+nfp_vf_pci_remove(struct rte_pci_device *pci_dev)
 {
 	return rte_eth_dev_pci_generic_remove(pci_dev, nfp_vf_pci_uninit);
 }
@@ -412,8 +412,8 @@ eth_nfp_vf_pci_remove(struct rte_pci_device *pci_dev)
 static struct rte_pci_driver rte_nfp_net_vf_pmd = {
 	.id_table = pci_id_nfp_vf_net_map,
 	.drv_flags = RTE_PCI_DRV_NEED_MAPPING | RTE_PCI_DRV_INTR_LSC,
-	.probe = eth_nfp_vf_pci_probe,
-	.remove = eth_nfp_vf_pci_remove,
+	.probe = nfp_vf_pci_probe,
+	.remove = nfp_vf_pci_remove,
 };
 
 RTE_PMD_REGISTER_PCI(net_nfp_vf, rte_nfp_net_vf_pmd);
-- 
2.39.1
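
For illustration, a minimal usage sketch of the renamed helpers above (not
part of the patch; 'NFP_QCP_WRITE_PTR', the 'txq->qcp_q' field and the 'todo'
descriptor count are assumptions based on the driver's conventions):

	/* Sketch: bump the queue write pointer by 'todo' descriptors,
	 * then read back the masked write pointer from the status word.
	 */
	uint8_t *queue = txq->qcp_q;
	uint32_t wr_ptr;

	nfp_qcp_ptr_add(queue, NFP_QCP_WRITE_PTR, todo);
	wr_ptr = nfp_qcp_read(queue, NFP_QCP_WRITE_PTR);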


^ permalink raw reply	[flat|nested] 40+ messages in thread

* [PATCH v2 10/11] net/nfp: adjust logic to make it more readable
  2023-10-12  1:26 ` [PATCH v2 00/11] Unify the PMD coding style Chaoyong He
                     ` (8 preceding siblings ...)
  2023-10-12  1:27   ` [PATCH v2 09/11] net/nfp: rename some parameter and variable Chaoyong He
@ 2023-10-12  1:27   ` Chaoyong He
  2023-10-12  1:27   ` [PATCH v2 11/11] net/nfp: refact the meson build file Chaoyong He
  2023-10-13  6:06   ` [PATCH v3 00/11] Unify the PMD coding style Chaoyong He
  11 siblings, 0 replies; 40+ messages in thread
From: Chaoyong He @ 2023-10-12  1:27 UTC (permalink / raw)
  To: dev; +Cc: oss-drivers, Chaoyong He, Long Wu, Peng Zhang

Adjust some logic to make it easier to understand.

Signed-off-by: Chaoyong He <chaoyong.he@corigine.com>
Reviewed-by: Long Wu <long.wu@corigine.com>
Reviewed-by: Peng Zhang <peng.zhang@corigine.com>
---
 drivers/net/nfp/nfp_common.c     | 87 +++++++++++++++++---------------
 drivers/net/nfp/nfp_cpp_bridge.c |  5 +-
 drivers/net/nfp/nfp_ctrl.h       |  2 -
 drivers/net/nfp/nfp_ethdev.c     | 23 ++++-----
 drivers/net/nfp/nfp_ethdev_vf.c  | 15 +++---
 drivers/net/nfp/nfp_rxtx.c       |  2 +-
 6 files changed, 63 insertions(+), 71 deletions(-)

diff --git a/drivers/net/nfp/nfp_common.c b/drivers/net/nfp/nfp_common.c
index a102c6f272..2d834b29d9 100644
--- a/drivers/net/nfp/nfp_common.c
+++ b/drivers/net/nfp/nfp_common.c
@@ -453,7 +453,7 @@ nfp_net_log_device_information(const struct nfp_net_hw *hw)
 }
 
 static inline void
-nfp_net_enbable_rxvlan_cap(struct nfp_net_hw *hw,
+nfp_net_enable_rxvlan_cap(struct nfp_net_hw *hw,
 		uint32_t *ctrl)
 {
 	if ((hw->cap & NFP_NET_CFG_CTRL_RXVLAN_V2) != 0)
@@ -467,19 +467,19 @@ nfp_net_enable_queues(struct rte_eth_dev *dev)
 {
 	uint16_t i;
 	struct nfp_net_hw *hw;
-	uint64_t enabled_queues = 0;
+	uint64_t enabled_queues;
 
 	hw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private);
 
 	/* Enabling the required TX queues in the device */
+	enabled_queues = 0;
 	for (i = 0; i < dev->data->nb_tx_queues; i++)
 		enabled_queues |= (1 << i);
 
 	nn_cfg_writeq(hw, NFP_NET_CFG_TXRS_ENABLE, enabled_queues);
 
-	enabled_queues = 0;
-
 	/* Enabling the required RX queues in the device */
+	enabled_queues = 0;
 	for (i = 0; i < dev->data->nb_rx_queues; i++)
 		enabled_queues |= (1 << i);
 
@@ -619,33 +619,33 @@ uint32_t
 nfp_check_offloads(struct rte_eth_dev *dev)
 {
 	uint32_t ctrl = 0;
+	uint64_t rx_offload;
+	uint64_t tx_offload;
 	struct nfp_net_hw *hw;
 	struct rte_eth_conf *dev_conf;
-	struct rte_eth_rxmode *rxmode;
-	struct rte_eth_txmode *txmode;
 
 	hw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private);
 
 	dev_conf = &dev->data->dev_conf;
-	rxmode = &dev_conf->rxmode;
-	txmode = &dev_conf->txmode;
+	rx_offload = dev_conf->rxmode.offloads;
+	tx_offload = dev_conf->txmode.offloads;
 
-	if ((rxmode->offloads & RTE_ETH_RX_OFFLOAD_IPV4_CKSUM) != 0) {
+	if ((rx_offload & RTE_ETH_RX_OFFLOAD_IPV4_CKSUM) != 0) {
 		if ((hw->cap & NFP_NET_CFG_CTRL_RXCSUM) != 0)
 			ctrl |= NFP_NET_CFG_CTRL_RXCSUM;
 	}
 
-	if ((rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP) != 0)
-		nfp_net_enbable_rxvlan_cap(hw, &ctrl);
+	if ((rx_offload & RTE_ETH_RX_OFFLOAD_VLAN_STRIP) != 0)
+		nfp_net_enable_rxvlan_cap(hw, &ctrl);
 
-	if ((rxmode->offloads & RTE_ETH_RX_OFFLOAD_QINQ_STRIP) != 0) {
+	if ((rx_offload & RTE_ETH_RX_OFFLOAD_QINQ_STRIP) != 0) {
 		if ((hw->cap & NFP_NET_CFG_CTRL_RXQINQ) != 0)
 			ctrl |= NFP_NET_CFG_CTRL_RXQINQ;
 	}
 
 	hw->mtu = dev->data->mtu;
 
-	if ((txmode->offloads & RTE_ETH_TX_OFFLOAD_VLAN_INSERT) != 0) {
+	if ((tx_offload & RTE_ETH_TX_OFFLOAD_VLAN_INSERT) != 0) {
 		if ((hw->cap & NFP_NET_CFG_CTRL_TXVLAN_V2) != 0)
 			ctrl |= NFP_NET_CFG_CTRL_TXVLAN_V2;
 		else if ((hw->cap & NFP_NET_CFG_CTRL_TXVLAN) != 0)
@@ -661,14 +661,14 @@ nfp_check_offloads(struct rte_eth_dev *dev)
 		ctrl |= NFP_NET_CFG_CTRL_L2MC;
 
 	/* TX checksum offload */
-	if ((txmode->offloads & RTE_ETH_TX_OFFLOAD_IPV4_CKSUM) != 0 ||
-			(txmode->offloads & RTE_ETH_TX_OFFLOAD_UDP_CKSUM) != 0 ||
-			(txmode->offloads & RTE_ETH_TX_OFFLOAD_TCP_CKSUM) != 0)
+	if ((tx_offload & RTE_ETH_TX_OFFLOAD_IPV4_CKSUM) != 0 ||
+			(tx_offload & RTE_ETH_TX_OFFLOAD_UDP_CKSUM) != 0 ||
+			(tx_offload & RTE_ETH_TX_OFFLOAD_TCP_CKSUM) != 0)
 		ctrl |= NFP_NET_CFG_CTRL_TXCSUM;
 
 	/* LSO offload */
-	if ((txmode->offloads & RTE_ETH_TX_OFFLOAD_TCP_TSO) != 0 ||
-			(txmode->offloads & RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO) != 0) {
+	if ((tx_offload & RTE_ETH_TX_OFFLOAD_TCP_TSO) != 0 ||
+			(tx_offload & RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO) != 0) {
 		if ((hw->cap & NFP_NET_CFG_CTRL_LSO) != 0)
 			ctrl |= NFP_NET_CFG_CTRL_LSO;
 		else
@@ -676,7 +676,7 @@ nfp_check_offloads(struct rte_eth_dev *dev)
 	}
 
 	/* RX gather */
-	if ((txmode->offloads & RTE_ETH_TX_OFFLOAD_MULTI_SEGS) != 0)
+	if ((tx_offload & RTE_ETH_TX_OFFLOAD_MULTI_SEGS) != 0)
 		ctrl |= NFP_NET_CFG_CTRL_GATHER;
 
 	return ctrl;
@@ -766,11 +766,10 @@ nfp_net_link_update(struct rte_eth_dev *dev,
 
 	hw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private);
 
-	/* Read link status */
-	nn_link_status = nn_cfg_readw(hw, NFP_NET_CFG_STS);
-
 	memset(&link, 0, sizeof(struct rte_eth_link));
 
+	/* Read link status */
+	nn_link_status = nn_cfg_readw(hw, NFP_NET_CFG_STS);
 	if ((nn_link_status & NFP_NET_CFG_STS_LINK) != 0)
 		link.link_status = RTE_ETH_LINK_UP;
 
@@ -828,6 +827,9 @@ nfp_net_stats_get(struct rte_eth_dev *dev,
 	struct nfp_net_hw *hw;
 	struct rte_eth_stats nfp_dev_stats;
 
+	if (stats == NULL)
+		return -EINVAL;
+
 	hw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private);
 
 	memset(&nfp_dev_stats, 0, sizeof(nfp_dev_stats));
@@ -892,11 +894,8 @@ nfp_net_stats_get(struct rte_eth_dev *dev,
 			nn_cfg_readq(hw, NFP_NET_CFG_STATS_RX_DISCARDS);
 	nfp_dev_stats.imissed -= hw->eth_stats_base.imissed;
 
-	if (stats != NULL) {
-		memcpy(stats, &nfp_dev_stats, sizeof(*stats));
-		return 0;
-	}
-	return -EINVAL;
+	memcpy(stats, &nfp_dev_stats, sizeof(*stats));
+	return 0;
 }
 
 /*
@@ -1379,13 +1378,14 @@ nfp_rx_queue_intr_enable(struct rte_eth_dev *dev,
 	struct nfp_net_hw *hw;
 	struct rte_pci_device *pci_dev;
 
-	hw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private);
 	pci_dev = RTE_ETH_DEV_TO_PCI(dev);
 	if (rte_intr_type_get(pci_dev->intr_handle) != RTE_INTR_HANDLE_UIO)
 		base = 1;
 
 	/* Make sure all updates are written before un-masking */
 	rte_wmb();
+
+	hw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private);
 	nn_cfg_writeb(hw, NFP_NET_CFG_ICR(base + queue_id),
 			NFP_NET_CFG_ICR_UNMASKED);
 	return 0;
@@ -1399,14 +1399,16 @@ nfp_rx_queue_intr_disable(struct rte_eth_dev *dev,
 	struct nfp_net_hw *hw;
 	struct rte_pci_device *pci_dev;
 
-	hw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private);
 	pci_dev = RTE_ETH_DEV_TO_PCI(dev);
 	if (rte_intr_type_get(pci_dev->intr_handle) != RTE_INTR_HANDLE_UIO)
 		base = 1;
 
 	/* Make sure all updates are written before un-masking */
 	rte_wmb();
-	nn_cfg_writeb(hw, NFP_NET_CFG_ICR(base + queue_id), 0x1);
+
+	hw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+	nn_cfg_writeb(hw, NFP_NET_CFG_ICR(base + queue_id), NFP_NET_CFG_ICR_RXTX);
+
 	return 0;
 }
 
@@ -1445,13 +1447,13 @@ nfp_net_irq_unmask(struct rte_eth_dev *dev)
 	hw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private);
 	pci_dev = RTE_ETH_DEV_TO_PCI(dev);
 
+	/* Make sure all updates are written before un-masking */
+	rte_wmb();
+
 	if ((hw->ctrl & NFP_NET_CFG_CTRL_MSIXAUTO) != 0) {
 		/* If MSI-X auto-masking is used, clear the entry */
-		rte_wmb();
 		rte_intr_ack(pci_dev->intr_handle);
 	} else {
-		/* Make sure all updates are written before un-masking */
-		rte_wmb();
 		nn_cfg_writeb(hw, NFP_NET_CFG_ICR(NFP_NET_IRQ_LSC_IDX),
 				NFP_NET_CFG_ICR_UNMASKED);
 	}
@@ -1548,19 +1550,18 @@ nfp_net_vlan_offload_set(struct rte_eth_dev *dev,
 	int ret;
 	uint32_t update;
 	uint32_t new_ctrl;
+	uint64_t rx_offload;
 	struct nfp_net_hw *hw;
 	uint32_t rxvlan_ctrl = 0;
-	struct rte_eth_conf *dev_conf;
 
 	hw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private);
-	dev_conf = &dev->data->dev_conf;
+	rx_offload = dev->data->dev_conf.rxmode.offloads;
 	new_ctrl = hw->ctrl;
 
-	nfp_net_enbable_rxvlan_cap(hw, &rxvlan_ctrl);
-
 	/* VLAN stripping setting */
 	if ((mask & RTE_ETH_VLAN_STRIP_MASK) != 0) {
-		if ((dev_conf->rxmode.offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP) != 0)
+		nfp_net_enable_rxvlan_cap(hw, &rxvlan_ctrl);
+		if ((rx_offload & RTE_ETH_RX_OFFLOAD_VLAN_STRIP) != 0)
 			new_ctrl |= rxvlan_ctrl;
 		else
 			new_ctrl &= ~rxvlan_ctrl;
@@ -1568,7 +1569,7 @@ nfp_net_vlan_offload_set(struct rte_eth_dev *dev,
 
 	/* QinQ stripping setting */
 	if ((mask & RTE_ETH_QINQ_STRIP_MASK) != 0) {
-		if ((dev_conf->rxmode.offloads & RTE_ETH_RX_OFFLOAD_QINQ_STRIP) != 0)
+		if ((rx_offload & RTE_ETH_RX_OFFLOAD_QINQ_STRIP) != 0)
 			new_ctrl |= NFP_NET_CFG_CTRL_RXQINQ;
 		else
 			new_ctrl &= ~NFP_NET_CFG_CTRL_RXQINQ;
@@ -1580,10 +1581,12 @@ nfp_net_vlan_offload_set(struct rte_eth_dev *dev,
 	update = NFP_NET_CFG_UPDATE_GEN;
 
 	ret = nfp_net_reconfig(hw, new_ctrl, update);
-	if (ret == 0)
-		hw->ctrl = new_ctrl;
+	if (ret != 0)
+		return ret;
 
-	return ret;
+	hw->ctrl = new_ctrl;
+
+	return 0;
 }
 
 static int
diff --git a/drivers/net/nfp/nfp_cpp_bridge.c b/drivers/net/nfp/nfp_cpp_bridge.c
index bb2a6fdcda..36dcdca9de 100644
--- a/drivers/net/nfp/nfp_cpp_bridge.c
+++ b/drivers/net/nfp/nfp_cpp_bridge.c
@@ -22,9 +22,6 @@
 #define NFP_IOCTL_CPP_IDENTIFICATION _IOW(NFP_IOCTL, 0x8f, uint32_t)
 
 /* Prototypes */
-static int nfp_cpp_bridge_serve_write(int sockfd, struct nfp_cpp *cpp);
-static int nfp_cpp_bridge_serve_read(int sockfd, struct nfp_cpp *cpp);
-static int nfp_cpp_bridge_serve_ioctl(int sockfd, struct nfp_cpp *cpp);
 static int nfp_cpp_bridge_service_func(void *args);
 
 int
@@ -438,7 +435,7 @@ nfp_cpp_bridge_service_func(void *args)
 			return -EIO;
 		}
 
-		while (1) {
+		for (;;) {
 			ret = recv(datafd, &op, 4, 0);
 			if (ret <= 0) {
 				PMD_CPP_LOG(DEBUG, "%s: socket close", __func__);
diff --git a/drivers/net/nfp/nfp_ctrl.h b/drivers/net/nfp/nfp_ctrl.h
index 5c2065a537..9ec51e0a25 100644
--- a/drivers/net/nfp/nfp_ctrl.h
+++ b/drivers/net/nfp/nfp_ctrl.h
@@ -442,8 +442,6 @@ struct nfp_net_fw_ver {
 #define NFP_MAC_STATS_TX_PAUSE_FRAMES_CLASS6    (NFP_MAC_STATS_BASE + 0x1f0)
 #define NFP_MAC_STATS_TX_PAUSE_FRAMES_CLASS7    (NFP_MAC_STATS_BASE + 0x1f8)
 
-#define NFP_PF_CSR_SLICE_SIZE    (32 * 1024)
-
 /*
  * General use mailbox area (0x1800 - 0x19ff)
  * 4B used for update command and 4B return code followed by
diff --git a/drivers/net/nfp/nfp_ethdev.c b/drivers/net/nfp/nfp_ethdev.c
index b65c2c1fe0..c550c12e01 100644
--- a/drivers/net/nfp/nfp_ethdev.c
+++ b/drivers/net/nfp/nfp_ethdev.c
@@ -80,7 +80,7 @@ nfp_net_start(struct rte_eth_dev *dev)
 			 * Better not to share LSC with RX interrupts.
 			 * Unregistering LSC interrupt handler.
 			 */
-			rte_intr_callback_unregister(pci_dev->intr_handle,
+			rte_intr_callback_unregister(intr_handle,
 					nfp_net_dev_interrupt_handler, (void *)dev);
 
 			if (dev->data->nb_rx_queues > 1) {
@@ -525,7 +525,7 @@ nfp_net_init(struct rte_eth_dev *eth_dev)
 			return -ENODEV;
 
 		/* Use port offset in pf ctrl_bar for this ports control bar */
-		hw->ctrl_bar = pf_dev->ctrl_bar + (port * NFP_PF_CSR_SLICE_SIZE);
+		hw->ctrl_bar = pf_dev->ctrl_bar + (port * NFP_NET_CFG_BAR_SZ);
 		hw->mac_stats = app_fw_nic->ports[0]->mac_stats_bar + (port * NFP_MAC_STATS_SIZE);
 	}
 
@@ -743,8 +743,7 @@ nfp_init_app_fw_nic(struct nfp_pf_dev *pf_dev,
 		const struct nfp_dev_info *dev_info)
 {
 	uint8_t i;
-	int ret;
-	int err = 0;
+	int ret = 0;
 	uint32_t total_vnics;
 	struct nfp_net_hw *hw;
 	unsigned int numa_node;
@@ -765,8 +764,8 @@ nfp_init_app_fw_nic(struct nfp_pf_dev *pf_dev,
 	pf_dev->app_fw_priv = app_fw_nic;
 
 	/* Read the number of vNIC's created for the PF */
-	total_vnics = nfp_rtsym_read_le(pf_dev->sym_tbl, "nfd_cfg_pf0_num_ports", &err);
-	if (err != 0 || total_vnics == 0 || total_vnics > 8) {
+	total_vnics = nfp_rtsym_read_le(pf_dev->sym_tbl, "nfd_cfg_pf0_num_ports", &ret);
+	if (ret != 0 || total_vnics == 0 || total_vnics > 8) {
 		PMD_INIT_LOG(ERR, "nfd_cfg_pf0_num_ports symbol with wrong value");
 		ret = -ENODEV;
 		goto app_cleanup;
@@ -874,8 +873,7 @@ nfp_init_app_fw_nic(struct nfp_pf_dev *pf_dev,
 static int
 nfp_pf_init(struct rte_pci_device *pci_dev)
 {
-	int ret;
-	int err = 0;
+	int ret = 0;
 	uint64_t addr;
 	uint32_t cpp_id;
 	struct nfp_cpp *cpp;
@@ -943,8 +941,8 @@ nfp_pf_init(struct rte_pci_device *pci_dev)
 	}
 
 	/* Read the app ID of the firmware loaded */
-	app_fw_id = nfp_rtsym_read_le(sym_tbl, "_pf0_net_app_id", &err);
-	if (err != 0) {
+	app_fw_id = nfp_rtsym_read_le(sym_tbl, "_pf0_net_app_id", &ret);
+	if (ret != 0) {
 		PMD_INIT_LOG(ERR, "Couldn't read app_fw_id from fw");
 		ret = -EIO;
 		goto sym_tbl_cleanup;
@@ -1080,7 +1078,6 @@ nfp_secondary_init_app_fw_nic(struct rte_pci_device *pci_dev,
 static int
 nfp_pf_secondary_init(struct rte_pci_device *pci_dev)
 {
-	int err = 0;
 	int ret = 0;
 	struct nfp_cpp *cpp;
 	enum nfp_app_fw_id app_fw_id;
@@ -1124,8 +1121,8 @@ nfp_pf_secondary_init(struct rte_pci_device *pci_dev)
 	}
 
 	/* Read the app ID of the firmware loaded */
-	app_fw_id = nfp_rtsym_read_le(sym_tbl, "_pf0_net_app_id", &err);
-	if (err != 0) {
+	app_fw_id = nfp_rtsym_read_le(sym_tbl, "_pf0_net_app_id", &ret);
+	if (ret != 0) {
 		PMD_INIT_LOG(ERR, "Couldn't read app_fw_id from fw");
 		goto sym_tbl_cleanup;
 	}
diff --git a/drivers/net/nfp/nfp_ethdev_vf.c b/drivers/net/nfp/nfp_ethdev_vf.c
index 7fb7b3efc5..ac6e67efc6 100644
--- a/drivers/net/nfp/nfp_ethdev_vf.c
+++ b/drivers/net/nfp/nfp_ethdev_vf.c
@@ -39,8 +39,6 @@ nfp_netvf_start(struct rte_eth_dev *dev)
 	struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
 	struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
 
-	hw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private);
-
 	/* Disabling queues just in case... */
 	nfp_net_disable_queues(dev);
 
@@ -54,7 +52,7 @@ nfp_netvf_start(struct rte_eth_dev *dev)
 			 * Better not to share LSC with RX interrupts.
 			 * Unregistering LSC interrupt handler.
 			 */
-			rte_intr_callback_unregister(pci_dev->intr_handle,
+			rte_intr_callback_unregister(intr_handle,
 					nfp_net_dev_interrupt_handler, (void *)dev);
 
 			if (dev->data->nb_rx_queues > 1) {
@@ -77,6 +75,7 @@ nfp_netvf_start(struct rte_eth_dev *dev)
 	new_ctrl = nfp_check_offloads(dev);
 
 	/* Writing configuration parameters in the device */
+	hw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private);
 	nfp_net_params_setup(hw);
 
 	dev_conf = &dev->data->dev_conf;
@@ -244,15 +243,15 @@ static int
 nfp_netvf_init(struct rte_eth_dev *eth_dev)
 {
 	int err;
+	uint16_t port;
 	uint32_t start_q;
-	uint16_t port = 0;
 	struct nfp_net_hw *hw;
 	uint64_t tx_bar_off = 0;
 	uint64_t rx_bar_off = 0;
 	struct rte_pci_device *pci_dev;
 	const struct nfp_dev_info *dev_info;
-	struct rte_ether_addr *tmp_ether_addr;
 
+	port = eth_dev->data->port_id;
 	pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
 
 	dev_info = nfp_dev_info_get(pci_dev->id.device_id);
@@ -325,9 +324,7 @@ nfp_netvf_init(struct rte_eth_dev *eth_dev)
 	}
 
 	nfp_netvf_read_mac(hw);
-
-	tmp_ether_addr = &hw->mac_addr;
-	if (rte_is_valid_assigned_ether_addr(tmp_ether_addr) == 0) {
+	if (rte_is_valid_assigned_ether_addr(&hw->mac_addr) == 0) {
 		PMD_INIT_LOG(INFO, "Using random mac address for port %hu", port);
 		/* Using random mac addresses for VFs */
 		rte_eth_random_addr(&hw->mac_addr.addr_bytes[0]);
@@ -344,7 +341,7 @@ nfp_netvf_init(struct rte_eth_dev *eth_dev)
 
 	PMD_INIT_LOG(INFO, "port %hu VendorID=%#x DeviceID=%#x "
 			"mac=" RTE_ETHER_ADDR_PRT_FMT,
-			eth_dev->data->port_id, pci_dev->id.vendor_id,
+			port, pci_dev->id.vendor_id,
 			pci_dev->id.device_id,
 			RTE_ETHER_ADDR_BYTES(&hw->mac_addr));
 
diff --git a/drivers/net/nfp/nfp_rxtx.c b/drivers/net/nfp/nfp_rxtx.c
index e284a67d7c..6fcdcb0be7 100644
--- a/drivers/net/nfp/nfp_rxtx.c
+++ b/drivers/net/nfp/nfp_rxtx.c
@@ -284,7 +284,7 @@ nfp_net_parse_chained_meta(uint8_t *meta_base,
 			meta->vlan[meta->vlan_layer].tci =
 					vlan_info & NFP_NET_META_VLAN_MASK;
 			meta->vlan[meta->vlan_layer].tpid = NFP_NET_META_TPID(vlan_info);
-			++meta->vlan_layer;
+			meta->vlan_layer++;
 			break;
 		case NFP_NET_META_IPSEC:
 			meta->sa_idx = rte_be_to_cpu_32(*(rte_be32_t *)meta_offset);
-- 
2.39.1


^ permalink raw reply	[flat|nested] 40+ messages in thread

* [PATCH v2 11/11] net/nfp: refact the meson build file
  2023-10-12  1:26 ` [PATCH v2 00/11] Unify the PMD coding style Chaoyong He
                     ` (9 preceding siblings ...)
  2023-10-12  1:27   ` [PATCH v2 10/11] net/nfp: adjust logic to make it more readable Chaoyong He
@ 2023-10-12  1:27   ` Chaoyong He
  2023-10-13  6:06   ` [PATCH v3 00/11] Unify the PMD coding style Chaoyong He
  11 siblings, 0 replies; 40+ messages in thread
From: Chaoyong He @ 2023-10-12  1:27 UTC (permalink / raw)
  To: dev; +Cc: oss-drivers, Chaoyong He, Long Wu, Peng Zhang

Make the source files follow alphabetical order.
Also update the copyright header line.

Signed-off-by: Chaoyong He <chaoyong.he@corigine.com>
Reviewed-by: Long Wu <long.wu@corigine.com>
Reviewed-by: Peng Zhang <peng.zhang@corigine.com>
---
 drivers/net/nfp/meson.build | 23 ++++++++++++-----------
 1 file changed, 12 insertions(+), 11 deletions(-)

diff --git a/drivers/net/nfp/meson.build b/drivers/net/nfp/meson.build
index 7627c3e3f1..40e9ef8524 100644
--- a/drivers/net/nfp/meson.build
+++ b/drivers/net/nfp/meson.build
@@ -1,10 +1,11 @@
 # SPDX-License-Identifier: BSD-3-Clause
-# Copyright(c) 2018 Intel Corporation
+# Copyright(c) 2018 Corigine, Inc.
 
 if not is_linux or not dpdk_conf.get('RTE_ARCH_64')
     build = false
     reason = 'only supported on 64-bit Linux'
 endif
+
 sources = files(
         'flower/nfp_conntrack.c',
         'flower/nfp_flower.c',
@@ -13,30 +14,30 @@ sources = files(
         'flower/nfp_flower_representor.c',
         'nfd3/nfp_nfd3_dp.c',
         'nfdk/nfp_nfdk_dp.c',
-        'nfpcore/nfp_nsp.c',
         'nfpcore/nfp_cppcore.c',
-        'nfpcore/nfp_resource.c',
-        'nfpcore/nfp_mip.c',
-        'nfpcore/nfp_nffw.c',
-        'nfpcore/nfp_rtsym.c',
-        'nfpcore/nfp_nsp_cmds.c',
         'nfpcore/nfp_crc.c',
         'nfpcore/nfp_dev.c',
+        'nfpcore/nfp_hwinfo.c',
+        'nfpcore/nfp_mip.c',
         'nfpcore/nfp_mutex.c',
+        'nfpcore/nfp_nffw.c',
+        'nfpcore/nfp_nsp.c',
+        'nfpcore/nfp_nsp_cmds.c',
         'nfpcore/nfp_nsp_eth.c',
-        'nfpcore/nfp_hwinfo.c',
+        'nfpcore/nfp_resource.c',
+        'nfpcore/nfp_rtsym.c',
         'nfpcore/nfp_target.c',
         'nfpcore/nfp6000_pcie.c',
         'nfp_common.c',
-        'nfp_ctrl.c',
-        'nfp_rxtx.c',
         'nfp_cpp_bridge.c',
-        'nfp_ethdev_vf.c',
+        'nfp_ctrl.c',
         'nfp_ethdev.c',
+        'nfp_ethdev_vf.c',
         'nfp_flow.c',
         'nfp_ipsec.c',
         'nfp_logs.c',
         'nfp_mtr.c',
+        'nfp_rxtx.c',
 )
 
 deps += ['hash', 'security']
-- 
2.39.1


^ permalink raw reply	[flat|nested] 40+ messages in thread

* Re: [PATCH v2 05/11] net/nfp: adjust the log statement
  2023-10-12  1:26   ` [PATCH v2 05/11] net/nfp: adjust the log statement Chaoyong He
@ 2023-10-12  1:38     ` Stephen Hemminger
  2023-10-12  1:40       ` Chaoyong He
  0 siblings, 1 reply; 40+ messages in thread
From: Stephen Hemminger @ 2023-10-12  1:38 UTC (permalink / raw)
  To: Chaoyong He; +Cc: dev, oss-drivers, Long Wu, Peng Zhang

On Thu, 12 Oct 2023 09:26:58 +0800
Chaoyong He <chaoyong.he@corigine.com> wrote:

> +			PMD_RX_LOG(ERR, "mbuf overflow likely due to the RX offset.\n"
> +					"\t\tYour mbuf size should have extra space for"
> +					" RX offset=%u bytes.\n"
> +					"\t\tCurrently you just have %u bytes available"
> +					" but the received packet is %u bytes long",
> +					hw->rx_offset,
> +					rxq->mbuf_size - hw->rx_offset,
> +					mb->data_len);

Multi-line log messages with tabs look good on the command line (for developers)
but don't work well when the application is run as a service and logging goes to syslog.
Syslog doesn't like messages with newlines in them.
The message is way too long. Make it shorter, and put any other notes in a comment please.
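
For illustration, a minimal single-line rework of the quoted message (a sketch
only, reusing the fields from the patch above; the exact wording chosen for the
next revision may differ):

	/* Sketch: one line, no tabs or newlines, safe for syslog */
	PMD_RX_LOG(ERR, "mbuf overflow: RX offset=%u, available=%u, packet=%u",
			hw->rx_offset,
			rxq->mbuf_size - hw->rx_offset,
			mb->data_len);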

^ permalink raw reply	[flat|nested] 40+ messages in thread

* RE: [PATCH v2 05/11] net/nfp: adjust the log statement
  2023-10-12  1:38     ` Stephen Hemminger
@ 2023-10-12  1:40       ` Chaoyong He
  0 siblings, 0 replies; 40+ messages in thread
From: Chaoyong He @ 2023-10-12  1:40 UTC (permalink / raw)
  To: Stephen Hemminger; +Cc: dev, oss-drivers, Long Wu, Nole Zhang



> -----Original Message-----
> From: Stephen Hemminger <stephen@networkplumber.org>
> Sent: Thursday, October 12, 2023 9:39 AM
> To: Chaoyong He <chaoyong.he@corigine.com>
> Cc: dev@dpdk.org; oss-drivers <oss-drivers@corigine.com>; Long Wu
> <Long.Wu@nephogine.com>; Nole Zhang <peng.zhang@corigine.com>
> Subject: Re: [PATCH v2 05/11] net/nfp: adjust the log statement
> 
> On Thu, 12 Oct 2023 09:26:58 +0800
> Chaoyong He <chaoyong.he@corigine.com> wrote:
> 
> > +			PMD_RX_LOG(ERR, "mbuf overflow likely due to the RX offset.\n"
> > +					"\t\tYour mbuf size should have extra space for"
> > +					" RX offset=%u bytes.\n"
> > +					"\t\tCurrently you just have %u bytes available"
> > +					" but the received packet is %u bytes long",
> > +					hw->rx_offset,
> > +					rxq->mbuf_size - hw->rx_offset,
> > +					mb->data_len);
> 
> Multi-line log messages with tabs look good on the command line (for
> developers) but don't work well when the application is run as a service and
> logging goes to syslog.
> Syslog doesn't like messages with newlines in them.
> The message is way too long. Make it shorter, and put any other notes in a
> comment please.

Thanks, got it.
I will revise as you said in the next version.

^ permalink raw reply	[flat|nested] 40+ messages in thread

* [PATCH v3 00/11] Unify the PMD coding style
  2023-10-12  1:26 ` [PATCH v2 00/11] Unify the PMD coding style Chaoyong He
                     ` (10 preceding siblings ...)
  2023-10-12  1:27   ` [PATCH v2 11/11] net/nfp: refact the meson build file Chaoyong He
@ 2023-10-13  6:06   ` Chaoyong He
  2023-10-13  6:06     ` [PATCH v3 01/11] net/nfp: explicitly compare to null and 0 Chaoyong He
                       ` (11 more replies)
  11 siblings, 12 replies; 40+ messages in thread
From: Chaoyong He @ 2023-10-13  6:06 UTC (permalink / raw)
  To: dev; +Cc: oss-drivers, Chaoyong He

This patch series aims to unify the coding style of the NFP PMD, making
the logic follow the same rules so that it is easier to understand and
extend.
It also prepares for the upcoming vDPA PMD patch series.

---
v2:
* Add some missing modifications.
v3:
* Remove the '\t' characters in the log statements, as advised by the
  reviewer.
---

Chaoyong He (11):
  net/nfp: explicitly compare to null and 0
  net/nfp: unify the indent coding style
  net/nfp: unify the type of integer variable
  net/nfp: standard the local variable coding style
  net/nfp: adjust the log statement
  net/nfp: standard the comment style
  net/nfp: standard the blank character
  net/nfp: unify the guide line of header file
  net/nfp: rename some parameter and variable
  net/nfp: adjust logic to make it more readable
  net/nfp: refact the meson build file

 drivers/net/nfp/flower/nfp_conntrack.c        |   4 +-
 drivers/net/nfp/flower/nfp_flower.c           |  27 +-
 drivers/net/nfp/flower/nfp_flower.h           |  34 +-
 drivers/net/nfp/flower/nfp_flower_cmsg.c      |  18 +-
 drivers/net/nfp/flower/nfp_flower_cmsg.h      |  62 +-
 drivers/net/nfp/flower/nfp_flower_ctrl.c      |  39 +-
 drivers/net/nfp/flower/nfp_flower_ctrl.h      |   6 +-
 .../net/nfp/flower/nfp_flower_representor.c   |  46 +-
 .../net/nfp/flower/nfp_flower_representor.h   |   8 +-
 drivers/net/nfp/meson.build                   |  23 +-
 drivers/net/nfp/nfd3/nfp_nfd3.h               |  39 +-
 drivers/net/nfp/nfd3/nfp_nfd3_dp.c            |  34 +-
 drivers/net/nfp/nfdk/nfp_nfdk.h               |  49 +-
 drivers/net/nfp/nfdk/nfp_nfdk_dp.c            |  14 +-
 drivers/net/nfp/nfp_common.c                  | 775 +++++++++---------
 drivers/net/nfp/nfp_common.h                  | 169 ++--
 drivers/net/nfp/nfp_cpp_bridge.c              | 139 ++--
 drivers/net/nfp/nfp_cpp_bridge.h              |   8 +-
 drivers/net/nfp/nfp_ctrl.h                    |  46 +-
 drivers/net/nfp/nfp_ethdev.c                  | 325 ++++----
 drivers/net/nfp/nfp_ethdev_vf.c               | 195 ++---
 drivers/net/nfp/nfp_flow.c                    | 251 +++---
 drivers/net/nfp/nfp_flow.h                    |  23 +-
 drivers/net/nfp/nfp_ipsec.h                   |  12 +-
 drivers/net/nfp/nfp_logs.h                    |   7 +-
 drivers/net/nfp/nfp_rxtx.c                    | 296 +++----
 drivers/net/nfp/nfp_rxtx.h                    |  36 +-
 drivers/net/nfp/nfpcore/nfp_resource.h        |   2 +-
 28 files changed, 1299 insertions(+), 1388 deletions(-)

-- 
2.39.1


^ permalink raw reply	[flat|nested] 40+ messages in thread

* [PATCH v3 01/11] net/nfp: explicitly compare to null and 0
  2023-10-13  6:06   ` [PATCH v3 00/11] Unify the PMD coding style Chaoyong He
@ 2023-10-13  6:06     ` Chaoyong He
  2023-10-13  6:06     ` [PATCH v3 02/11] net/nfp: unify the indent coding style Chaoyong He
                       ` (10 subsequent siblings)
  11 siblings, 0 replies; 40+ messages in thread
From: Chaoyong He @ 2023-10-13  6:06 UTC (permalink / raw)
  To: dev; +Cc: oss-drivers, Chaoyong He, Long Wu, Peng Zhang

To comply with the coding standard, make pointer variables
explicitly compare to 'NULL' and integer variables explicitly
compare to '0'.

Signed-off-by: Chaoyong He <chaoyong.he@corigine.com>
Reviewed-by: Long Wu <long.wu@corigine.com>
Reviewed-by: Peng Zhang <peng.zhang@corigine.com>
---
 drivers/net/nfp/flower/nfp_flower.c      |   6 +-
 drivers/net/nfp/flower/nfp_flower_ctrl.c |   6 +-
 drivers/net/nfp/nfp_common.c             | 144 +++++++++++------------
 drivers/net/nfp/nfp_cpp_bridge.c         |   2 +-
 drivers/net/nfp/nfp_ethdev.c             |  38 +++---
 drivers/net/nfp/nfp_ethdev_vf.c          |  14 +--
 drivers/net/nfp/nfp_flow.c               |  90 +++++++-------
 drivers/net/nfp/nfp_rxtx.c               |  28 ++---
 8 files changed, 165 insertions(+), 163 deletions(-)

diff --git a/drivers/net/nfp/flower/nfp_flower.c b/drivers/net/nfp/flower/nfp_flower.c
index 98e6f7f927..3ddaf0f28d 100644
--- a/drivers/net/nfp/flower/nfp_flower.c
+++ b/drivers/net/nfp/flower/nfp_flower.c
@@ -69,7 +69,7 @@ nfp_pf_repr_disable_queues(struct rte_eth_dev *dev)
 		new_ctrl &= ~NFP_NET_CFG_CTRL_RINGCFG;
 
 	/* If an error when reconfig we avoid to change hw state */
-	if (nfp_net_reconfig(hw, new_ctrl, update) < 0)
+	if (nfp_net_reconfig(hw, new_ctrl, update) != 0)
 		return;
 
 	hw->ctrl = new_ctrl;
@@ -100,7 +100,7 @@ nfp_flower_pf_start(struct rte_eth_dev *dev)
 
 	update |= NFP_NET_CFG_UPDATE_RSS;
 
-	if (hw->cap & NFP_NET_CFG_CTRL_RSS2)
+	if ((hw->cap & NFP_NET_CFG_CTRL_RSS2) != 0)
 		new_ctrl |= NFP_NET_CFG_CTRL_RSS2;
 	else
 		new_ctrl |= NFP_NET_CFG_CTRL_RSS;
@@ -110,7 +110,7 @@ nfp_flower_pf_start(struct rte_eth_dev *dev)
 
 	update |= NFP_NET_CFG_UPDATE_GEN | NFP_NET_CFG_UPDATE_RING;
 
-	if (hw->cap & NFP_NET_CFG_CTRL_RINGCFG)
+	if ((hw->cap & NFP_NET_CFG_CTRL_RINGCFG) != 0)
 		new_ctrl |= NFP_NET_CFG_CTRL_RINGCFG;
 
 	nn_cfg_writel(hw, NFP_NET_CFG_CTRL, new_ctrl);
diff --git a/drivers/net/nfp/flower/nfp_flower_ctrl.c b/drivers/net/nfp/flower/nfp_flower_ctrl.c
index c5282053cf..b564e7cd73 100644
--- a/drivers/net/nfp/flower/nfp_flower_ctrl.c
+++ b/drivers/net/nfp/flower/nfp_flower_ctrl.c
@@ -103,7 +103,7 @@ nfp_flower_ctrl_vnic_recv(void *rx_queue,
 		}
 
 		/* Filling the received mbuf with packet info */
-		if (hw->rx_offset)
+		if (hw->rx_offset != 0)
 			mb->data_off = RTE_PKTMBUF_HEADROOM + hw->rx_offset;
 		else
 			mb->data_off = RTE_PKTMBUF_HEADROOM + NFP_DESC_META_LEN(rxds);
@@ -195,7 +195,7 @@ nfp_flower_ctrl_vnic_nfd3_xmit(struct nfp_app_fw_flower *app_fw_flower,
 
 	lmbuf = &txq->txbufs[txq->wr_p].mbuf;
 	RTE_MBUF_PREFETCH_TO_FREE(*lmbuf);
-	if (*lmbuf)
+	if (*lmbuf != NULL)
 		rte_pktmbuf_free_seg(*lmbuf);
 
 	*lmbuf = mbuf;
@@ -337,7 +337,7 @@ nfp_flower_ctrl_vnic_nfdk_xmit(struct nfp_app_fw_flower *app_fw_flower,
 	}
 
 	txq->wr_p = D_IDX(txq, txq->wr_p + used_descs);
-	if (txq->wr_p % NFDK_TX_DESC_BLOCK_CNT)
+	if (txq->wr_p % NFDK_TX_DESC_BLOCK_CNT != 0)
 		txq->data_pending += mbuf->pkt_len;
 	else
 		txq->data_pending = 0;
diff --git a/drivers/net/nfp/nfp_common.c b/drivers/net/nfp/nfp_common.c
index 5683afc40a..36752583dd 100644
--- a/drivers/net/nfp/nfp_common.c
+++ b/drivers/net/nfp/nfp_common.c
@@ -221,7 +221,7 @@ __nfp_net_reconfig(struct nfp_net_hw *hw, uint32_t update)
 		new = nn_cfg_readl(hw, NFP_NET_CFG_UPDATE);
 		if (new == 0)
 			break;
-		if (new & NFP_NET_CFG_UPDATE_ERR) {
+		if ((new & NFP_NET_CFG_UPDATE_ERR) != 0) {
 			PMD_INIT_LOG(ERR, "Reconfig error: 0x%08x", new);
 			return -1;
 		}
@@ -390,18 +390,18 @@ nfp_net_configure(struct rte_eth_dev *dev)
 	rxmode = &dev_conf->rxmode;
 	txmode = &dev_conf->txmode;
 
-	if (rxmode->mq_mode & RTE_ETH_MQ_RX_RSS_FLAG)
+	if ((rxmode->mq_mode & RTE_ETH_MQ_RX_RSS_FLAG) != 0)
 		rxmode->offloads |= RTE_ETH_RX_OFFLOAD_RSS_HASH;
 
 	/* Checking TX mode */
-	if (txmode->mq_mode) {
+	if (txmode->mq_mode != RTE_ETH_MQ_TX_NONE) {
 		PMD_INIT_LOG(INFO, "TX mq_mode DCB and VMDq not supported");
 		return -EINVAL;
 	}
 
 	/* Checking RX mode */
-	if (rxmode->mq_mode & RTE_ETH_MQ_RX_RSS_FLAG &&
-	    !(hw->cap & NFP_NET_CFG_CTRL_RSS_ANY)) {
+	if ((rxmode->mq_mode & RTE_ETH_MQ_RX_RSS_FLAG) != 0 &&
+	    (hw->cap & NFP_NET_CFG_CTRL_RSS_ANY) == 0) {
 		PMD_INIT_LOG(INFO, "RSS not supported");
 		return -EINVAL;
 	}
@@ -493,11 +493,11 @@ nfp_net_disable_queues(struct rte_eth_dev *dev)
 	update = NFP_NET_CFG_UPDATE_GEN | NFP_NET_CFG_UPDATE_RING |
 		 NFP_NET_CFG_UPDATE_MSIX;
 
-	if (hw->cap & NFP_NET_CFG_CTRL_RINGCFG)
+	if ((hw->cap & NFP_NET_CFG_CTRL_RINGCFG) != 0)
 		new_ctrl &= ~NFP_NET_CFG_CTRL_RINGCFG;
 
 	/* If an error when reconfig we avoid to change hw state */
-	if (nfp_net_reconfig(hw, new_ctrl, update) < 0)
+	if (nfp_net_reconfig(hw, new_ctrl, update) != 0)
 		return;
 
 	hw->ctrl = new_ctrl;
@@ -537,8 +537,8 @@ nfp_net_set_mac_addr(struct rte_eth_dev *dev, struct rte_ether_addr *mac_addr)
 	uint32_t update, ctrl;
 
 	hw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private);
-	if ((hw->ctrl & NFP_NET_CFG_CTRL_ENABLE) &&
-	    !(hw->cap & NFP_NET_CFG_CTRL_LIVE_ADDR)) {
+	if ((hw->ctrl & NFP_NET_CFG_CTRL_ENABLE) != 0 &&
+	    (hw->cap & NFP_NET_CFG_CTRL_LIVE_ADDR) == 0) {
 		PMD_INIT_LOG(INFO, "MAC address unable to change when"
 				  " port enabled");
 		return -EBUSY;
@@ -550,10 +550,10 @@ nfp_net_set_mac_addr(struct rte_eth_dev *dev, struct rte_ether_addr *mac_addr)
 	/* Signal the NIC about the change */
 	update = NFP_NET_CFG_UPDATE_MACADDR;
 	ctrl = hw->ctrl;
-	if ((hw->ctrl & NFP_NET_CFG_CTRL_ENABLE) &&
-	    (hw->cap & NFP_NET_CFG_CTRL_LIVE_ADDR))
+	if ((hw->ctrl & NFP_NET_CFG_CTRL_ENABLE) != 0 &&
+	    (hw->cap & NFP_NET_CFG_CTRL_LIVE_ADDR) != 0)
 		ctrl |= NFP_NET_CFG_CTRL_LIVE_ADDR;
-	if (nfp_net_reconfig(hw, ctrl, update) < 0) {
+	if (nfp_net_reconfig(hw, ctrl, update) != 0) {
 		PMD_INIT_LOG(INFO, "MAC address update failed");
 		return -EIO;
 	}
@@ -568,7 +568,7 @@ nfp_configure_rx_interrupt(struct rte_eth_dev *dev,
 	int i;
 
 	if (rte_intr_vec_list_alloc(intr_handle, "intr_vec",
-				    dev->data->nb_rx_queues)) {
+				    dev->data->nb_rx_queues) != 0) {
 		PMD_INIT_LOG(ERR, "Failed to allocate %d rx_queues"
 			     " intr_vec", dev->data->nb_rx_queues);
 		return -ENOMEM;
@@ -580,7 +580,7 @@ nfp_configure_rx_interrupt(struct rte_eth_dev *dev,
 		PMD_INIT_LOG(INFO, "VF: enabling RX interrupt with UIO");
 		/* UIO just supports one queue and no LSC*/
 		nn_cfg_writeb(hw, NFP_NET_CFG_RXR_VEC(0), 0);
-		if (rte_intr_vec_list_index_set(intr_handle, 0, 0))
+		if (rte_intr_vec_list_index_set(intr_handle, 0, 0) != 0)
 			return -1;
 	} else {
 		PMD_INIT_LOG(INFO, "VF: enabling RX interrupt with VFIO");
@@ -591,7 +591,7 @@ nfp_configure_rx_interrupt(struct rte_eth_dev *dev,
 			*/
 			nn_cfg_writeb(hw, NFP_NET_CFG_RXR_VEC(i), i + 1);
 			if (rte_intr_vec_list_index_set(intr_handle, i,
-							       i + 1))
+							       i + 1) != 0)
 				return -1;
 			PMD_INIT_LOG(DEBUG, "intr_vec[%d]= %d", i,
 				rte_intr_vec_list_index_get(intr_handle,
@@ -619,53 +619,53 @@ nfp_check_offloads(struct rte_eth_dev *dev)
 	rxmode = &dev_conf->rxmode;
 	txmode = &dev_conf->txmode;
 
-	if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_IPV4_CKSUM) {
-		if (hw->cap & NFP_NET_CFG_CTRL_RXCSUM)
+	if ((rxmode->offloads & RTE_ETH_RX_OFFLOAD_IPV4_CKSUM) != 0) {
+		if ((hw->cap & NFP_NET_CFG_CTRL_RXCSUM) != 0)
 			ctrl |= NFP_NET_CFG_CTRL_RXCSUM;
 	}
 
-	if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP)
+	if ((rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP) != 0)
 		nfp_net_enbable_rxvlan_cap(hw, &ctrl);
 
-	if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_QINQ_STRIP) {
-		if (hw->cap & NFP_NET_CFG_CTRL_RXQINQ)
+	if ((rxmode->offloads & RTE_ETH_RX_OFFLOAD_QINQ_STRIP) != 0) {
+		if ((hw->cap & NFP_NET_CFG_CTRL_RXQINQ) != 0)
 			ctrl |= NFP_NET_CFG_CTRL_RXQINQ;
 	}
 
 	hw->mtu = dev->data->mtu;
 
-	if (txmode->offloads & RTE_ETH_TX_OFFLOAD_VLAN_INSERT) {
-		if (hw->cap & NFP_NET_CFG_CTRL_TXVLAN_V2)
+	if ((txmode->offloads & RTE_ETH_TX_OFFLOAD_VLAN_INSERT) != 0) {
+		if ((hw->cap & NFP_NET_CFG_CTRL_TXVLAN_V2) != 0)
 			ctrl |= NFP_NET_CFG_CTRL_TXVLAN_V2;
-		else if (hw->cap & NFP_NET_CFG_CTRL_TXVLAN)
+		else if ((hw->cap & NFP_NET_CFG_CTRL_TXVLAN) != 0)
 			ctrl |= NFP_NET_CFG_CTRL_TXVLAN;
 	}
 
 	/* L2 broadcast */
-	if (hw->cap & NFP_NET_CFG_CTRL_L2BC)
+	if ((hw->cap & NFP_NET_CFG_CTRL_L2BC) != 0)
 		ctrl |= NFP_NET_CFG_CTRL_L2BC;
 
 	/* L2 multicast */
-	if (hw->cap & NFP_NET_CFG_CTRL_L2MC)
+	if ((hw->cap & NFP_NET_CFG_CTRL_L2MC) != 0)
 		ctrl |= NFP_NET_CFG_CTRL_L2MC;
 
 	/* TX checksum offload */
-	if (txmode->offloads & RTE_ETH_TX_OFFLOAD_IPV4_CKSUM ||
-	    txmode->offloads & RTE_ETH_TX_OFFLOAD_UDP_CKSUM ||
-	    txmode->offloads & RTE_ETH_TX_OFFLOAD_TCP_CKSUM)
+	if ((txmode->offloads & RTE_ETH_TX_OFFLOAD_IPV4_CKSUM) != 0 ||
+	    (txmode->offloads & RTE_ETH_TX_OFFLOAD_UDP_CKSUM) != 0 ||
+	    (txmode->offloads & RTE_ETH_TX_OFFLOAD_TCP_CKSUM) != 0)
 		ctrl |= NFP_NET_CFG_CTRL_TXCSUM;
 
 	/* LSO offload */
-	if (txmode->offloads & RTE_ETH_TX_OFFLOAD_TCP_TSO ||
-	    txmode->offloads & RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO) {
-		if (hw->cap & NFP_NET_CFG_CTRL_LSO)
+	if ((txmode->offloads & RTE_ETH_TX_OFFLOAD_TCP_TSO) != 0 ||
+	    (txmode->offloads & RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO) != 0) {
+		if ((hw->cap & NFP_NET_CFG_CTRL_LSO) != 0)
 			ctrl |= NFP_NET_CFG_CTRL_LSO;
 		else
 			ctrl |= NFP_NET_CFG_CTRL_LSO2;
 	}
 
 	/* RX gather */
-	if (txmode->offloads & RTE_ETH_TX_OFFLOAD_MULTI_SEGS)
+	if ((txmode->offloads & RTE_ETH_TX_OFFLOAD_MULTI_SEGS) != 0)
 		ctrl |= NFP_NET_CFG_CTRL_GATHER;
 
 	return ctrl;
@@ -693,7 +693,7 @@ nfp_net_promisc_enable(struct rte_eth_dev *dev)
 		return -ENOTSUP;
 	}
 
-	if (hw->ctrl & NFP_NET_CFG_CTRL_PROMISC) {
+	if ((hw->ctrl & NFP_NET_CFG_CTRL_PROMISC) != 0) {
 		PMD_DRV_LOG(INFO, "Promiscuous mode already enabled");
 		return 0;
 	}
@@ -706,7 +706,7 @@ nfp_net_promisc_enable(struct rte_eth_dev *dev)
 	 * it can not fail ...
 	 */
 	ret = nfp_net_reconfig(hw, new_ctrl, update);
-	if (ret < 0)
+	if (ret != 0)
 		return ret;
 
 	hw->ctrl = new_ctrl;
@@ -736,7 +736,7 @@ nfp_net_promisc_disable(struct rte_eth_dev *dev)
 	 * assuming it can not fail ...
 	 */
 	ret = nfp_net_reconfig(hw, new_ctrl, update);
-	if (ret < 0)
+	if (ret != 0)
 		return ret;
 
 	hw->ctrl = new_ctrl;
@@ -770,7 +770,7 @@ nfp_net_link_update(struct rte_eth_dev *dev, __rte_unused int wait_to_complete)
 
 	memset(&link, 0, sizeof(struct rte_eth_link));
 
-	if (nn_link_status & NFP_NET_CFG_STS_LINK)
+	if ((nn_link_status & NFP_NET_CFG_STS_LINK) != 0)
 		link.link_status = RTE_ETH_LINK_UP;
 
 	link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
@@ -802,7 +802,7 @@ nfp_net_link_update(struct rte_eth_dev *dev, __rte_unused int wait_to_complete)
 
 	ret = rte_eth_linkstatus_set(dev, &link);
 	if (ret == 0) {
-		if (link.link_status)
+		if (link.link_status != 0)
 			PMD_DRV_LOG(INFO, "NIC Link is Up");
 		else
 			PMD_DRV_LOG(INFO, "NIC Link is Down");
@@ -907,7 +907,7 @@ nfp_net_stats_get(struct rte_eth_dev *dev, struct rte_eth_stats *stats)
 
 	nfp_dev_stats.imissed -= hw->eth_stats_base.imissed;
 
-	if (stats) {
+	if (stats != NULL) {
 		memcpy(stats, &nfp_dev_stats, sizeof(*stats));
 		return 0;
 	}
@@ -1229,32 +1229,32 @@ nfp_net_infos_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 	/* Next should change when PF support is implemented */
 	dev_info->max_mac_addrs = 1;
 
-	if (hw->cap & (NFP_NET_CFG_CTRL_RXVLAN | NFP_NET_CFG_CTRL_RXVLAN_V2))
+	if ((hw->cap & (NFP_NET_CFG_CTRL_RXVLAN | NFP_NET_CFG_CTRL_RXVLAN_V2)) != 0)
 		dev_info->rx_offload_capa = RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
 
-	if (hw->cap & NFP_NET_CFG_CTRL_RXQINQ)
+	if ((hw->cap & NFP_NET_CFG_CTRL_RXQINQ) != 0)
 		dev_info->rx_offload_capa |= RTE_ETH_RX_OFFLOAD_QINQ_STRIP;
 
-	if (hw->cap & NFP_NET_CFG_CTRL_RXCSUM)
+	if ((hw->cap & NFP_NET_CFG_CTRL_RXCSUM) != 0)
 		dev_info->rx_offload_capa |= RTE_ETH_RX_OFFLOAD_IPV4_CKSUM |
 					     RTE_ETH_RX_OFFLOAD_UDP_CKSUM |
 					     RTE_ETH_RX_OFFLOAD_TCP_CKSUM;
 
-	if (hw->cap & (NFP_NET_CFG_CTRL_TXVLAN | NFP_NET_CFG_CTRL_TXVLAN_V2))
+	if ((hw->cap & (NFP_NET_CFG_CTRL_TXVLAN | NFP_NET_CFG_CTRL_TXVLAN_V2)) != 0)
 		dev_info->tx_offload_capa = RTE_ETH_TX_OFFLOAD_VLAN_INSERT;
 
-	if (hw->cap & NFP_NET_CFG_CTRL_TXCSUM)
+	if ((hw->cap & NFP_NET_CFG_CTRL_TXCSUM) != 0)
 		dev_info->tx_offload_capa |= RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
 					     RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
 					     RTE_ETH_TX_OFFLOAD_TCP_CKSUM;
 
-	if (hw->cap & NFP_NET_CFG_CTRL_LSO_ANY) {
+	if ((hw->cap & NFP_NET_CFG_CTRL_LSO_ANY) != 0) {
 		dev_info->tx_offload_capa |= RTE_ETH_TX_OFFLOAD_TCP_TSO;
-		if (hw->cap & NFP_NET_CFG_CTRL_VXLAN)
+		if ((hw->cap & NFP_NET_CFG_CTRL_VXLAN) != 0)
 			dev_info->tx_offload_capa |= RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO;
 	}
 
-	if (hw->cap & NFP_NET_CFG_CTRL_GATHER)
+	if ((hw->cap & NFP_NET_CFG_CTRL_GATHER) != 0)
 		dev_info->tx_offload_capa |= RTE_ETH_TX_OFFLOAD_MULTI_SEGS;
 
 	cap_extend = nn_cfg_readl(hw, NFP_NET_CFG_CAP_WORD1);
@@ -1297,7 +1297,7 @@ nfp_net_infos_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 		.nb_mtu_seg_max = NFP_TX_MAX_MTU_SEG,
 	};
 
-	if (hw->cap & NFP_NET_CFG_CTRL_RSS_ANY) {
+	if ((hw->cap & NFP_NET_CFG_CTRL_RSS_ANY) != 0) {
 		dev_info->rx_offload_capa |= RTE_ETH_RX_OFFLOAD_RSS_HASH;
 
 		dev_info->flow_type_rss_offloads = RTE_ETH_RSS_IPV4 |
@@ -1431,7 +1431,7 @@ nfp_net_dev_link_status_print(struct rte_eth_dev *dev)
 	struct rte_eth_link link;
 
 	rte_eth_linkstatus_get(dev, &link);
-	if (link.link_status)
+	if (link.link_status != 0)
 		PMD_DRV_LOG(INFO, "Port %d: Link Up - speed %u Mbps - %s",
 			    dev->data->port_id, link.link_speed,
 			    link.link_duplex == RTE_ETH_LINK_FULL_DUPLEX
@@ -1462,7 +1462,7 @@ nfp_net_irq_unmask(struct rte_eth_dev *dev)
 	hw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private);
 	pci_dev = RTE_ETH_DEV_TO_PCI(dev);
 
-	if (hw->ctrl & NFP_NET_CFG_CTRL_MSIXAUTO) {
+	if ((hw->ctrl & NFP_NET_CFG_CTRL_MSIXAUTO) != 0) {
 		/* If MSI-X auto-masking is used, clear the entry */
 		rte_wmb();
 		rte_intr_ack(pci_dev->intr_handle);
@@ -1524,7 +1524,7 @@ nfp_net_dev_interrupt_handler(void *param)
 
 	if (rte_eal_alarm_set(timeout * 1000,
 			      nfp_net_dev_interrupt_delayed_handler,
-			      (void *)dev) < 0) {
+			      (void *)dev) != 0) {
 		PMD_INIT_LOG(ERR, "Error setting alarm");
 		/* Unmasking */
 		nfp_net_irq_unmask(dev);
@@ -1577,16 +1577,16 @@ nfp_net_vlan_offload_set(struct rte_eth_dev *dev, int mask)
 	nfp_net_enbable_rxvlan_cap(hw, &rxvlan_ctrl);
 
 	/* VLAN stripping setting */
-	if (mask & RTE_ETH_VLAN_STRIP_MASK) {
-		if (dev_conf->rxmode.offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP)
+	if ((mask & RTE_ETH_VLAN_STRIP_MASK) != 0) {
+		if ((dev_conf->rxmode.offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP) != 0)
 			new_ctrl |= rxvlan_ctrl;
 		else
 			new_ctrl &= ~rxvlan_ctrl;
 	}
 
 	/* QinQ stripping setting */
-	if (mask & RTE_ETH_QINQ_STRIP_MASK) {
-		if (dev_conf->rxmode.offloads & RTE_ETH_RX_OFFLOAD_QINQ_STRIP)
+	if ((mask & RTE_ETH_QINQ_STRIP_MASK) != 0) {
+		if ((dev_conf->rxmode.offloads & RTE_ETH_RX_OFFLOAD_QINQ_STRIP) != 0)
 			new_ctrl |= NFP_NET_CFG_CTRL_RXQINQ;
 		else
 			new_ctrl &= ~NFP_NET_CFG_CTRL_RXQINQ;
@@ -1674,7 +1674,7 @@ nfp_net_reta_update(struct rte_eth_dev *dev,
 
 	update = NFP_NET_CFG_UPDATE_RSS;
 
-	if (nfp_net_reconfig(hw, hw->ctrl, update) < 0)
+	if (nfp_net_reconfig(hw, hw->ctrl, update) != 0)
 		return -EIO;
 
 	return 0;
@@ -1748,28 +1748,28 @@ nfp_net_rss_hash_write(struct rte_eth_dev *dev,
 
 	rss_hf = rss_conf->rss_hf;
 
-	if (rss_hf & RTE_ETH_RSS_IPV4)
+	if ((rss_hf & RTE_ETH_RSS_IPV4) != 0)
 		cfg_rss_ctrl |= NFP_NET_CFG_RSS_IPV4;
 
-	if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_TCP)
+	if ((rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_TCP) != 0)
 		cfg_rss_ctrl |= NFP_NET_CFG_RSS_IPV4_TCP;
 
-	if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_UDP)
+	if ((rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_UDP) != 0)
 		cfg_rss_ctrl |= NFP_NET_CFG_RSS_IPV4_UDP;
 
-	if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_SCTP)
+	if ((rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_SCTP) != 0)
 		cfg_rss_ctrl |= NFP_NET_CFG_RSS_IPV4_SCTP;
 
-	if (rss_hf & RTE_ETH_RSS_IPV6)
+	if ((rss_hf & RTE_ETH_RSS_IPV6) != 0)
 		cfg_rss_ctrl |= NFP_NET_CFG_RSS_IPV6;
 
-	if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_TCP)
+	if ((rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_TCP) != 0)
 		cfg_rss_ctrl |= NFP_NET_CFG_RSS_IPV6_TCP;
 
-	if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_UDP)
+	if ((rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_UDP) != 0)
 		cfg_rss_ctrl |= NFP_NET_CFG_RSS_IPV6_UDP;
 
-	if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_SCTP)
+	if ((rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_SCTP) != 0)
 		cfg_rss_ctrl |= NFP_NET_CFG_RSS_IPV6_SCTP;
 
 	cfg_rss_ctrl |= NFP_NET_CFG_RSS_MASK;
@@ -1814,7 +1814,7 @@ nfp_net_rss_hash_update(struct rte_eth_dev *dev,
 
 	update = NFP_NET_CFG_UPDATE_RSS;
 
-	if (nfp_net_reconfig(hw, hw->ctrl, update) < 0)
+	if (nfp_net_reconfig(hw, hw->ctrl, update) != 0)
 		return -EIO;
 
 	return 0;
@@ -1838,28 +1838,28 @@ nfp_net_rss_hash_conf_get(struct rte_eth_dev *dev,
 	rss_hf = rss_conf->rss_hf;
 	cfg_rss_ctrl = nn_cfg_readl(hw, NFP_NET_CFG_RSS_CTRL);
 
-	if (cfg_rss_ctrl & NFP_NET_CFG_RSS_IPV4)
+	if ((cfg_rss_ctrl & NFP_NET_CFG_RSS_IPV4) != 0)
 		rss_hf |= RTE_ETH_RSS_IPV4;
 
-	if (cfg_rss_ctrl & NFP_NET_CFG_RSS_IPV4_TCP)
+	if ((cfg_rss_ctrl & NFP_NET_CFG_RSS_IPV4_TCP) != 0)
 		rss_hf |= RTE_ETH_RSS_NONFRAG_IPV4_TCP;
 
-	if (cfg_rss_ctrl & NFP_NET_CFG_RSS_IPV6_TCP)
+	if ((cfg_rss_ctrl & NFP_NET_CFG_RSS_IPV6_TCP) != 0)
 		rss_hf |= RTE_ETH_RSS_NONFRAG_IPV6_TCP;
 
-	if (cfg_rss_ctrl & NFP_NET_CFG_RSS_IPV4_UDP)
+	if ((cfg_rss_ctrl & NFP_NET_CFG_RSS_IPV4_UDP) != 0)
 		rss_hf |= RTE_ETH_RSS_NONFRAG_IPV4_UDP;
 
-	if (cfg_rss_ctrl & NFP_NET_CFG_RSS_IPV6_UDP)
+	if ((cfg_rss_ctrl & NFP_NET_CFG_RSS_IPV6_UDP) != 0)
 		rss_hf |= RTE_ETH_RSS_NONFRAG_IPV6_UDP;
 
-	if (cfg_rss_ctrl & NFP_NET_CFG_RSS_IPV6)
+	if ((cfg_rss_ctrl & NFP_NET_CFG_RSS_IPV6) != 0)
 		rss_hf |= RTE_ETH_RSS_IPV6;
 
-	if (cfg_rss_ctrl & NFP_NET_CFG_RSS_IPV4_SCTP)
+	if ((cfg_rss_ctrl & NFP_NET_CFG_RSS_IPV4_SCTP) != 0)
 		rss_hf |= RTE_ETH_RSS_NONFRAG_IPV4_SCTP;
 
-	if (cfg_rss_ctrl & NFP_NET_CFG_RSS_IPV6_SCTP)
+	if ((cfg_rss_ctrl & NFP_NET_CFG_RSS_IPV6_SCTP) != 0)
 		rss_hf |= RTE_ETH_RSS_NONFRAG_IPV6_SCTP;
 
 	/* Propagate current RSS hash functions to caller */
diff --git a/drivers/net/nfp/nfp_cpp_bridge.c b/drivers/net/nfp/nfp_cpp_bridge.c
index ed9a946b0c..34764a8a32 100644
--- a/drivers/net/nfp/nfp_cpp_bridge.c
+++ b/drivers/net/nfp/nfp_cpp_bridge.c
@@ -70,7 +70,7 @@ nfp_map_service(uint32_t service_id)
 	rte_service_runstate_set(service_id, 1);
 	rte_service_component_runstate_set(service_id, 1);
 	rte_service_lcore_start(slcore);
-	if (rte_service_may_be_active(slcore))
+	if (rte_service_may_be_active(slcore) != 0)
 		PMD_INIT_LOG(INFO, "The service %s is running", service_name);
 	else
 		PMD_INIT_LOG(ERR, "The service %s is not running", service_name);
diff --git a/drivers/net/nfp/nfp_ethdev.c b/drivers/net/nfp/nfp_ethdev.c
index ebc5538291..12feec8eb4 100644
--- a/drivers/net/nfp/nfp_ethdev.c
+++ b/drivers/net/nfp/nfp_ethdev.c
@@ -89,7 +89,7 @@ nfp_net_start(struct rte_eth_dev *dev)
 			}
 		}
 		intr_vector = dev->data->nb_rx_queues;
-		if (rte_intr_efd_enable(intr_handle, intr_vector))
+		if (rte_intr_efd_enable(intr_handle, intr_vector) != 0)
 			return -1;
 
 		nfp_configure_rx_interrupt(dev, intr_handle);
@@ -113,7 +113,7 @@ nfp_net_start(struct rte_eth_dev *dev)
 	dev_conf = &dev->data->dev_conf;
 	rxmode = &dev_conf->rxmode;
 
-	if (rxmode->mq_mode & RTE_ETH_MQ_RX_RSS) {
+	if ((rxmode->mq_mode & RTE_ETH_MQ_RX_RSS) != 0) {
 		nfp_net_rss_config_default(dev);
 		update |= NFP_NET_CFG_UPDATE_RSS;
 		new_ctrl |= nfp_net_cfg_ctrl_rss(hw->cap);
@@ -125,15 +125,15 @@ nfp_net_start(struct rte_eth_dev *dev)
 	update |= NFP_NET_CFG_UPDATE_GEN | NFP_NET_CFG_UPDATE_RING;
 
 	/* Enable vxlan */
-	if (hw->cap & NFP_NET_CFG_CTRL_VXLAN) {
+	if ((hw->cap & NFP_NET_CFG_CTRL_VXLAN) != 0) {
 		new_ctrl |= NFP_NET_CFG_CTRL_VXLAN;
 		update |= NFP_NET_CFG_UPDATE_VXLAN;
 	}
 
-	if (hw->cap & NFP_NET_CFG_CTRL_RINGCFG)
+	if ((hw->cap & NFP_NET_CFG_CTRL_RINGCFG) != 0)
 		new_ctrl |= NFP_NET_CFG_CTRL_RINGCFG;
 
-	if (nfp_net_reconfig(hw, new_ctrl, update) < 0)
+	if (nfp_net_reconfig(hw, new_ctrl, update) != 0)
 		return -EIO;
 
 	/* Enable packet type offload by extend ctrl word1. */
@@ -146,14 +146,14 @@ nfp_net_start(struct rte_eth_dev *dev)
 				| NFP_NET_CFG_CTRL_IPSEC_LM_LOOKUP;
 
 	update = NFP_NET_CFG_UPDATE_GEN;
-	if (nfp_net_ext_reconfig(hw, ctrl_extend, update) < 0)
+	if (nfp_net_ext_reconfig(hw, ctrl_extend, update) != 0)
 		return -EIO;
 
 	/*
 	 * Allocating rte mbufs for configured rx queues.
 	 * This requires queues being enabled before
 	 */
-	if (nfp_net_rx_freelist_setup(dev) < 0) {
+	if (nfp_net_rx_freelist_setup(dev) != 0) {
 		ret = -ENOMEM;
 		goto error;
 	}
@@ -298,7 +298,7 @@ nfp_net_close(struct rte_eth_dev *dev)
 
 	for (i = 0; i < app_fw_nic->total_phyports; i++) {
 		/* Check to see if ports are still in use */
-		if (app_fw_nic->ports[i])
+		if (app_fw_nic->ports[i] != NULL)
 			return 0;
 	}
 
@@ -598,7 +598,7 @@ nfp_net_init(struct rte_eth_dev *eth_dev)
 	hw->mtu = RTE_ETHER_MTU;
 
 	/* VLAN insertion is incompatible with LSOv2 */
-	if (hw->cap & NFP_NET_CFG_CTRL_LSO2)
+	if ((hw->cap & NFP_NET_CFG_CTRL_LSO2) != 0)
 		hw->cap &= ~NFP_NET_CFG_CTRL_TXVLAN;
 
 	nfp_net_log_device_information(hw);
@@ -618,7 +618,7 @@ nfp_net_init(struct rte_eth_dev *eth_dev)
 	nfp_net_write_mac(hw, &hw->mac_addr.addr_bytes[0]);
 
 	tmp_ether_addr = &hw->mac_addr;
-	if (!rte_is_valid_assigned_ether_addr(tmp_ether_addr)) {
+	if (rte_is_valid_assigned_ether_addr(tmp_ether_addr) == 0) {
 		PMD_INIT_LOG(INFO, "Using random mac address for port %d", port);
 		/* Using random mac addresses for VFs */
 		rte_eth_random_addr(&hw->mac_addr.addr_bytes[0]);
@@ -695,10 +695,11 @@ nfp_fw_upload(struct rte_pci_device *dev, struct nfp_nsp *nsp, char *card)
 	/* Finally try the card type and media */
 	snprintf(fw_name, sizeof(fw_name), "%s/%s", DEFAULT_FW_PATH, card);
 	PMD_DRV_LOG(DEBUG, "Trying with fw file: %s", fw_name);
-	if (rte_firmware_read(fw_name, &fw_buf, &fsize) < 0) {
-		PMD_DRV_LOG(INFO, "Firmware file %s not found.", fw_name);
-		return -ENOENT;
-	}
+	if (rte_firmware_read(fw_name, &fw_buf, &fsize) == 0)
+		goto load_fw;
+
+	PMD_DRV_LOG(ERR, "Can't find suitable firmware.");
+	return -ENOENT;
 
 load_fw:
 	PMD_DRV_LOG(INFO, "Firmware file found at %s with size: %zu",
@@ -727,7 +728,7 @@ nfp_fw_setup(struct rte_pci_device *dev,
 	if (nfp_fw_model == NULL)
 		nfp_fw_model = nfp_hwinfo_lookup(hwinfo, "assembly.partno");
 
-	if (nfp_fw_model) {
+	if (nfp_fw_model != NULL) {
 		PMD_DRV_LOG(INFO, "firmware model found: %s", nfp_fw_model);
 	} else {
 		PMD_DRV_LOG(ERR, "firmware model NOT found");
@@ -865,7 +866,7 @@ nfp_init_app_fw_nic(struct nfp_pf_dev *pf_dev,
 		 * nfp_net_init
 		 */
 		ret = nfp_net_init(eth_dev);
-		if (ret) {
+		if (ret != 0) {
 			ret = -ENODEV;
 			goto port_cleanup;
 		}
@@ -878,7 +879,8 @@ nfp_init_app_fw_nic(struct nfp_pf_dev *pf_dev,
 
 port_cleanup:
 	for (i = 0; i < app_fw_nic->total_phyports; i++) {
-		if (app_fw_nic->ports[i] && app_fw_nic->ports[i]->eth_dev) {
+		if (app_fw_nic->ports[i] != NULL &&
+				app_fw_nic->ports[i]->eth_dev != NULL) {
 			struct rte_eth_dev *tmp_dev;
 			tmp_dev = app_fw_nic->ports[i]->eth_dev;
 			nfp_ipsec_uninit(tmp_dev);
@@ -950,7 +952,7 @@ nfp_pf_init(struct rte_pci_device *pci_dev)
 		goto hwinfo_cleanup;
 	}
 
-	if (nfp_fw_setup(pci_dev, cpp, nfp_eth_table, hwinfo)) {
+	if (nfp_fw_setup(pci_dev, cpp, nfp_eth_table, hwinfo) != 0) {
 		PMD_INIT_LOG(ERR, "Error when uploading firmware");
 		ret = -EIO;
 		goto eth_table_cleanup;
diff --git a/drivers/net/nfp/nfp_ethdev_vf.c b/drivers/net/nfp/nfp_ethdev_vf.c
index 0c94fc51ad..c8d6b0461b 100644
--- a/drivers/net/nfp/nfp_ethdev_vf.c
+++ b/drivers/net/nfp/nfp_ethdev_vf.c
@@ -66,7 +66,7 @@ nfp_netvf_start(struct rte_eth_dev *dev)
 			}
 		}
 		intr_vector = dev->data->nb_rx_queues;
-		if (rte_intr_efd_enable(intr_handle, intr_vector))
+		if (rte_intr_efd_enable(intr_handle, intr_vector) != 0)
 			return -1;
 
 		nfp_configure_rx_interrupt(dev, intr_handle);
@@ -83,7 +83,7 @@ nfp_netvf_start(struct rte_eth_dev *dev)
 	dev_conf = &dev->data->dev_conf;
 	rxmode = &dev_conf->rxmode;
 
-	if (rxmode->mq_mode & RTE_ETH_MQ_RX_RSS) {
+	if ((rxmode->mq_mode & RTE_ETH_MQ_RX_RSS) != 0) {
 		nfp_net_rss_config_default(dev);
 		update |= NFP_NET_CFG_UPDATE_RSS;
 		new_ctrl |= nfp_net_cfg_ctrl_rss(hw->cap);
@@ -94,18 +94,18 @@ nfp_netvf_start(struct rte_eth_dev *dev)
 
 	update |= NFP_NET_CFG_UPDATE_GEN | NFP_NET_CFG_UPDATE_RING;
 
-	if (hw->cap & NFP_NET_CFG_CTRL_RINGCFG)
+	if ((hw->cap & NFP_NET_CFG_CTRL_RINGCFG) != 0)
 		new_ctrl |= NFP_NET_CFG_CTRL_RINGCFG;
 
 	nn_cfg_writel(hw, NFP_NET_CFG_CTRL, new_ctrl);
-	if (nfp_net_reconfig(hw, new_ctrl, update) < 0)
+	if (nfp_net_reconfig(hw, new_ctrl, update) != 0)
 		return -EIO;
 
 	/*
 	 * Allocating rte mbufs for configured rx queues.
 	 * This requires queues being enabled before
 	 */
-	if (nfp_net_rx_freelist_setup(dev) < 0) {
+	if (nfp_net_rx_freelist_setup(dev) != 0) {
 		ret = -ENOMEM;
 		goto error;
 	}
@@ -330,7 +330,7 @@ nfp_netvf_init(struct rte_eth_dev *eth_dev)
 	hw->mtu = RTE_ETHER_MTU;
 
 	/* VLAN insertion is incompatible with LSOv2 */
-	if (hw->cap & NFP_NET_CFG_CTRL_LSO2)
+	if ((hw->cap & NFP_NET_CFG_CTRL_LSO2) != 0)
 		hw->cap &= ~NFP_NET_CFG_CTRL_TXVLAN;
 
 	nfp_net_log_device_information(hw);
@@ -350,7 +350,7 @@ nfp_netvf_init(struct rte_eth_dev *eth_dev)
 	nfp_netvf_read_mac(hw);
 
 	tmp_ether_addr = &hw->mac_addr;
-	if (!rte_is_valid_assigned_ether_addr(tmp_ether_addr)) {
+	if (rte_is_valid_assigned_ether_addr(tmp_ether_addr) == 0) {
 		PMD_INIT_LOG(INFO, "Using random mac address for port %d",
 				   port);
 		/* Using random mac addresses for VFs */
diff --git a/drivers/net/nfp/nfp_flow.c b/drivers/net/nfp/nfp_flow.c
index 020e31e9de..3ea6813d9a 100644
--- a/drivers/net/nfp/nfp_flow.c
+++ b/drivers/net/nfp/nfp_flow.c
@@ -521,8 +521,8 @@ nfp_stats_id_free(struct nfp_flow_priv *priv, uint32_t ctx)
 
 	/* Check if buffer is full */
 	ring = &priv->stats_ids.free_list;
-	if (!CIRC_SPACE(ring->head, ring->tail, priv->stats_ring_size *
-			NFP_FL_STATS_ELEM_RS - NFP_FL_STATS_ELEM_RS + 1))
+	if (CIRC_SPACE(ring->head, ring->tail, priv->stats_ring_size *
+			NFP_FL_STATS_ELEM_RS - NFP_FL_STATS_ELEM_RS + 1) == 0)
 		return -ENOBUFS;
 
 	memcpy(&ring->buf[ring->head], &ctx, NFP_FL_STATS_ELEM_RS);
@@ -607,7 +607,7 @@ nfp_tun_add_ipv6_off(struct nfp_app_fw_flower *app_fw_flower,
 
 	rte_spinlock_lock(&priv->ipv6_off_lock);
 	LIST_FOREACH(entry, &priv->ipv6_off_list, next) {
-		if (!memcmp(entry->ipv6_addr, ipv6, sizeof(entry->ipv6_addr))) {
+		if (memcmp(entry->ipv6_addr, ipv6, sizeof(entry->ipv6_addr)) == 0) {
 			entry->ref_count++;
 			rte_spinlock_unlock(&priv->ipv6_off_lock);
 			return 0;
@@ -641,7 +641,7 @@ nfp_tun_del_ipv6_off(struct nfp_app_fw_flower *app_fw_flower,
 
 	rte_spinlock_lock(&priv->ipv6_off_lock);
 	LIST_FOREACH(entry, &priv->ipv6_off_list, next) {
-		if (!memcmp(entry->ipv6_addr, ipv6, sizeof(entry->ipv6_addr))) {
+		if (memcmp(entry->ipv6_addr, ipv6, sizeof(entry->ipv6_addr)) == 0) {
 			entry->ref_count--;
 			if (entry->ref_count == 0) {
 				LIST_REMOVE(entry, next);
@@ -671,14 +671,14 @@ nfp_tun_check_ip_off_del(struct nfp_flower_representor *repr,
 	struct nfp_flower_ext_meta *ext_meta = NULL;
 
 	meta_tci = (struct nfp_flower_meta_tci *)nfp_flow->payload.unmasked_data;
-	if (meta_tci->nfp_flow_key_layer & NFP_FLOWER_LAYER_EXT_META)
+	if ((meta_tci->nfp_flow_key_layer & NFP_FLOWER_LAYER_EXT_META) != 0)
 		ext_meta = (struct nfp_flower_ext_meta *)(meta_tci + 1);
 
 	if (ext_meta != NULL)
 		key_layer2 = rte_be_to_cpu_32(ext_meta->nfp_flow_key_layer2);
 
-	if (key_layer2 & NFP_FLOWER_LAYER2_TUN_IPV6) {
-		if (key_layer2 & NFP_FLOWER_LAYER2_GRE) {
+	if ((key_layer2 & NFP_FLOWER_LAYER2_TUN_IPV6) != 0) {
+		if ((key_layer2 & NFP_FLOWER_LAYER2_GRE) != 0) {
 			gre6 = (struct nfp_flower_ipv6_gre_tun *)(nfp_flow->payload.mask_data -
 					sizeof(struct nfp_flower_ipv6_gre_tun));
 			ret = nfp_tun_del_ipv6_off(repr->app_fw_flower, gre6->ipv6.ipv6_dst);
@@ -688,7 +688,7 @@ nfp_tun_check_ip_off_del(struct nfp_flower_representor *repr,
 			ret = nfp_tun_del_ipv6_off(repr->app_fw_flower, udp6->ipv6.ipv6_dst);
 		}
 	} else {
-		if (key_layer2 & NFP_FLOWER_LAYER2_GRE) {
+		if ((key_layer2 & NFP_FLOWER_LAYER2_GRE) != 0) {
 			gre4 = (struct nfp_flower_ipv4_gre_tun *)(nfp_flow->payload.mask_data -
 					sizeof(struct nfp_flower_ipv4_gre_tun));
 			ret = nfp_tun_del_ipv4_off(repr->app_fw_flower, gre4->ipv4.dst);
@@ -783,7 +783,7 @@ nfp_flow_compile_metadata(struct nfp_flow_priv *priv,
 	mbuf_off_mask  += sizeof(struct nfp_flower_meta_tci);
 
 	/* Populate Extended Metadata if required */
-	if (key_layer->key_layer & NFP_FLOWER_LAYER_EXT_META) {
+	if ((key_layer->key_layer & NFP_FLOWER_LAYER_EXT_META) != 0) {
 		nfp_flower_compile_ext_meta(mbuf_off_exact, key_layer);
 		nfp_flower_compile_ext_meta(mbuf_off_mask, key_layer);
 		mbuf_off_exact += sizeof(struct nfp_flower_ext_meta);
@@ -1068,7 +1068,7 @@ nfp_flow_key_layers_calculate_actions(const struct rte_flow_action actions[],
 			break;
 		case RTE_FLOW_ACTION_TYPE_SET_TTL:
 			PMD_DRV_LOG(DEBUG, "RTE_FLOW_ACTION_TYPE_SET_TTL detected");
-			if (key_ls->key_layer & NFP_FLOWER_LAYER_IPV4) {
+			if ((key_ls->key_layer & NFP_FLOWER_LAYER_IPV4) != 0) {
 				if (!ttl_tos_flag) {
 					key_ls->act_size +=
 						sizeof(struct nfp_fl_act_set_ip4_ttl_tos);
@@ -1166,15 +1166,15 @@ nfp_flow_is_tunnel(struct rte_flow *nfp_flow)
 	struct nfp_flower_meta_tci *meta_tci;
 
 	meta_tci = (struct nfp_flower_meta_tci *)nfp_flow->payload.unmasked_data;
-	if (meta_tci->nfp_flow_key_layer & NFP_FLOWER_LAYER_VXLAN)
+	if ((meta_tci->nfp_flow_key_layer & NFP_FLOWER_LAYER_VXLAN) != 0)
 		return true;
 
-	if (!(meta_tci->nfp_flow_key_layer & NFP_FLOWER_LAYER_EXT_META))
+	if ((meta_tci->nfp_flow_key_layer & NFP_FLOWER_LAYER_EXT_META) == 0)
 		return false;
 
 	ext_meta = (struct nfp_flower_ext_meta *)(meta_tci + 1);
 	key_layer2 = rte_be_to_cpu_32(ext_meta->nfp_flow_key_layer2);
-	if (key_layer2 & (NFP_FLOWER_LAYER2_GENEVE | NFP_FLOWER_LAYER2_GRE))
+	if ((key_layer2 & (NFP_FLOWER_LAYER2_GENEVE | NFP_FLOWER_LAYER2_GRE)) != 0)
 		return true;
 
 	return false;
@@ -1270,7 +1270,7 @@ nfp_flow_merge_ipv4(__rte_unused struct nfp_app_fw_flower *app_fw_flower,
 	spec = item->spec;
 	mask = item->mask ? item->mask : proc->mask_default;
 	meta_tci = (struct nfp_flower_meta_tci *)nfp_flow->payload.unmasked_data;
-	if (meta_tci->nfp_flow_key_layer & NFP_FLOWER_LAYER_EXT_META)
+	if ((meta_tci->nfp_flow_key_layer & NFP_FLOWER_LAYER_EXT_META) != 0)
 		ext_meta = (struct nfp_flower_ext_meta *)(meta_tci + 1);
 
 	if (is_outer_layer && nfp_flow_is_tunnel(nfp_flow)) {
@@ -1281,8 +1281,8 @@ nfp_flow_merge_ipv4(__rte_unused struct nfp_app_fw_flower *app_fw_flower,
 
 		hdr = is_mask ? &mask->hdr : &spec->hdr;
 
-		if (ext_meta && (rte_be_to_cpu_32(ext_meta->nfp_flow_key_layer2) &
-				NFP_FLOWER_LAYER2_GRE)) {
+		if (ext_meta != NULL && (rte_be_to_cpu_32(ext_meta->nfp_flow_key_layer2) &
+				NFP_FLOWER_LAYER2_GRE) != 0) {
 			ipv4_gre_tun = (struct nfp_flower_ipv4_gre_tun *)*mbuf_off;
 
 			ipv4_gre_tun->ip_ext.tos = hdr->type_of_service;
@@ -1307,7 +1307,7 @@ nfp_flow_merge_ipv4(__rte_unused struct nfp_app_fw_flower *app_fw_flower,
 		 * reserve space for L4 info.
 		 * rte_flow has ipv4 before L4 but NFP flower fw requires L4 before ipv4
 		 */
-		if (meta_tci->nfp_flow_key_layer & NFP_FLOWER_LAYER_TP)
+		if ((meta_tci->nfp_flow_key_layer & NFP_FLOWER_LAYER_TP) != 0)
 			*mbuf_off += sizeof(struct nfp_flower_tp_ports);
 
 		hdr = is_mask ? &mask->hdr : &spec->hdr;
@@ -1348,7 +1348,7 @@ nfp_flow_merge_ipv6(__rte_unused struct nfp_app_fw_flower *app_fw_flower,
 	spec = item->spec;
 	mask = item->mask ? item->mask : proc->mask_default;
 	meta_tci = (struct nfp_flower_meta_tci *)nfp_flow->payload.unmasked_data;
-	if (meta_tci->nfp_flow_key_layer & NFP_FLOWER_LAYER_EXT_META)
+	if ((meta_tci->nfp_flow_key_layer & NFP_FLOWER_LAYER_EXT_META) != 0)
 		ext_meta = (struct nfp_flower_ext_meta *)(meta_tci + 1);
 
 	if (is_outer_layer && nfp_flow_is_tunnel(nfp_flow)) {
@@ -1360,8 +1360,8 @@ nfp_flow_merge_ipv6(__rte_unused struct nfp_app_fw_flower *app_fw_flower,
 		hdr = is_mask ? &mask->hdr : &spec->hdr;
 
 		vtc_flow = rte_be_to_cpu_32(hdr->vtc_flow);
-		if (ext_meta && (rte_be_to_cpu_32(ext_meta->nfp_flow_key_layer2) &
-				NFP_FLOWER_LAYER2_GRE)) {
+		if (ext_meta != NULL && (rte_be_to_cpu_32(ext_meta->nfp_flow_key_layer2) &
+				NFP_FLOWER_LAYER2_GRE) != 0) {
 			ipv6_gre_tun = (struct nfp_flower_ipv6_gre_tun *)*mbuf_off;
 
 			ipv6_gre_tun->ip_ext.tos = vtc_flow >> RTE_IPV6_HDR_TC_SHIFT;
@@ -1390,7 +1390,7 @@ nfp_flow_merge_ipv6(__rte_unused struct nfp_app_fw_flower *app_fw_flower,
 		 * reserve space for L4 info.
 		 * rte_flow has ipv4 before L4 but NFP flower fw requires L4 before ipv6
 		 */
-		if (meta_tci->nfp_flow_key_layer & NFP_FLOWER_LAYER_TP)
+		if ((meta_tci->nfp_flow_key_layer & NFP_FLOWER_LAYER_TP) != 0)
 			*mbuf_off += sizeof(struct nfp_flower_tp_ports);
 
 		hdr = is_mask ? &mask->hdr : &spec->hdr;
@@ -1434,7 +1434,7 @@ nfp_flow_merge_tcp(__rte_unused struct nfp_app_fw_flower *app_fw_flower,
 	}
 
 	meta_tci = (struct nfp_flower_meta_tci *)nfp_flow->payload.unmasked_data;
-	if (meta_tci->nfp_flow_key_layer & NFP_FLOWER_LAYER_IPV4) {
+	if ((meta_tci->nfp_flow_key_layer & NFP_FLOWER_LAYER_IPV4) != 0) {
 		ipv4  = (struct nfp_flower_ipv4 *)
 			(*mbuf_off - sizeof(struct nfp_flower_ipv4));
 		ports = (struct nfp_flower_tp_ports *)
@@ -1457,7 +1457,7 @@ nfp_flow_merge_tcp(__rte_unused struct nfp_app_fw_flower *app_fw_flower,
 		tcp_flags       = spec->hdr.tcp_flags;
 	}
 
-	if (ipv4) {
+	if (ipv4 != NULL) {
 		if (tcp_flags & RTE_TCP_FIN_FLAG)
 			ipv4->ip_ext.flags |= NFP_FL_TCP_FLAG_FIN;
 		if (tcp_flags & RTE_TCP_SYN_FLAG)
@@ -1512,7 +1512,7 @@ nfp_flow_merge_udp(__rte_unused struct nfp_app_fw_flower *app_fw_flower,
 	}
 
 	meta_tci = (struct nfp_flower_meta_tci *)nfp_flow->payload.unmasked_data;
-	if (meta_tci->nfp_flow_key_layer & NFP_FLOWER_LAYER_IPV4) {
+	if ((meta_tci->nfp_flow_key_layer & NFP_FLOWER_LAYER_IPV4) != 0) {
 		ports_off = *mbuf_off - sizeof(struct nfp_flower_ipv4) -
 			sizeof(struct nfp_flower_tp_ports);
 	} else {/* IPv6 */
@@ -1555,7 +1555,7 @@ nfp_flow_merge_sctp(__rte_unused struct nfp_app_fw_flower *app_fw_flower,
 	}
 
 	meta_tci = (struct nfp_flower_meta_tci *)nfp_flow->payload.unmasked_data;
-	if (meta_tci->nfp_flow_key_layer & NFP_FLOWER_LAYER_IPV4) {
+	if ((meta_tci->nfp_flow_key_layer & NFP_FLOWER_LAYER_IPV4) != 0) {
 		ports_off = *mbuf_off - sizeof(struct nfp_flower_ipv4) -
 			sizeof(struct nfp_flower_tp_ports);
 	} else { /* IPv6 */
@@ -1595,7 +1595,7 @@ nfp_flow_merge_vxlan(struct nfp_app_fw_flower *app_fw_flower,
 	struct nfp_flower_ext_meta *ext_meta = NULL;
 
 	meta_tci = (struct nfp_flower_meta_tci *)nfp_flow->payload.unmasked_data;
-	if (meta_tci->nfp_flow_key_layer & NFP_FLOWER_LAYER_EXT_META)
+	if ((meta_tci->nfp_flow_key_layer & NFP_FLOWER_LAYER_EXT_META) != 0)
 		ext_meta = (struct nfp_flower_ext_meta *)(meta_tci + 1);
 
 	spec = item->spec;
@@ -1607,8 +1607,8 @@ nfp_flow_merge_vxlan(struct nfp_app_fw_flower *app_fw_flower,
 	mask = item->mask ? item->mask : proc->mask_default;
 	hdr = is_mask ? &mask->hdr : &spec->hdr;
 
-	if (ext_meta && (rte_be_to_cpu_32(ext_meta->nfp_flow_key_layer2) &
-			NFP_FLOWER_LAYER2_TUN_IPV6)) {
+	if (ext_meta != NULL && (rte_be_to_cpu_32(ext_meta->nfp_flow_key_layer2) &
+			NFP_FLOWER_LAYER2_TUN_IPV6) != 0) {
 		tun6 = (struct nfp_flower_ipv6_udp_tun *)*mbuf_off;
 		tun6->tun_id = hdr->vx_vni;
 		if (!is_mask)
@@ -1621,8 +1621,8 @@ nfp_flow_merge_vxlan(struct nfp_app_fw_flower *app_fw_flower,
 	}
 
 vxlan_end:
-	if (ext_meta && (rte_be_to_cpu_32(ext_meta->nfp_flow_key_layer2) &
-			NFP_FLOWER_LAYER2_TUN_IPV6))
+	if (ext_meta != NULL && (rte_be_to_cpu_32(ext_meta->nfp_flow_key_layer2) &
+			NFP_FLOWER_LAYER2_TUN_IPV6) != 0)
 		*mbuf_off += sizeof(struct nfp_flower_ipv6_udp_tun);
 	else
 		*mbuf_off += sizeof(struct nfp_flower_ipv4_udp_tun);
@@ -1649,7 +1649,7 @@ nfp_flow_merge_geneve(struct nfp_app_fw_flower *app_fw_flower,
 	struct nfp_flower_ext_meta *ext_meta = NULL;
 
 	meta_tci = (struct nfp_flower_meta_tci *)nfp_flow->payload.unmasked_data;
-	if (meta_tci->nfp_flow_key_layer & NFP_FLOWER_LAYER_EXT_META)
+	if ((meta_tci->nfp_flow_key_layer & NFP_FLOWER_LAYER_EXT_META) != 0)
 		ext_meta = (struct nfp_flower_ext_meta *)(meta_tci + 1);
 
 	spec = item->spec;
@@ -1661,8 +1661,8 @@ nfp_flow_merge_geneve(struct nfp_app_fw_flower *app_fw_flower,
 	mask = item->mask ? item->mask : proc->mask_default;
 	geneve = is_mask ? mask : spec;
 
-	if (ext_meta && (rte_be_to_cpu_32(ext_meta->nfp_flow_key_layer2) &
-			NFP_FLOWER_LAYER2_TUN_IPV6)) {
+	if (ext_meta != NULL && (rte_be_to_cpu_32(ext_meta->nfp_flow_key_layer2) &
+			NFP_FLOWER_LAYER2_TUN_IPV6) != 0) {
 		tun6 = (struct nfp_flower_ipv6_udp_tun *)*mbuf_off;
 		tun6->tun_id = rte_cpu_to_be_32((geneve->vni[0] << 16) |
 				(geneve->vni[1] << 8) | (geneve->vni[2]));
@@ -1677,8 +1677,8 @@ nfp_flow_merge_geneve(struct nfp_app_fw_flower *app_fw_flower,
 	}
 
 geneve_end:
-	if (ext_meta && (rte_be_to_cpu_32(ext_meta->nfp_flow_key_layer2) &
-			NFP_FLOWER_LAYER2_TUN_IPV6)) {
+	if (ext_meta != NULL && (rte_be_to_cpu_32(ext_meta->nfp_flow_key_layer2) &
+			NFP_FLOWER_LAYER2_TUN_IPV6) != 0) {
 		*mbuf_off += sizeof(struct nfp_flower_ipv6_udp_tun);
 	} else {
 		*mbuf_off += sizeof(struct nfp_flower_ipv4_udp_tun);
@@ -1705,8 +1705,8 @@ nfp_flow_merge_gre(__rte_unused struct nfp_app_fw_flower *app_fw_flower,
 	ext_meta = (struct nfp_flower_ext_meta *)(meta_tci + 1);
 
 	/* NVGRE is the only supported GRE tunnel type */
-	if (rte_be_to_cpu_32(ext_meta->nfp_flow_key_layer2) &
-			NFP_FLOWER_LAYER2_TUN_IPV6) {
+	if ((rte_be_to_cpu_32(ext_meta->nfp_flow_key_layer2) &
+			NFP_FLOWER_LAYER2_TUN_IPV6) != 0) {
 		tun6 = (struct nfp_flower_ipv6_gre_tun *)*mbuf_off;
 		if (is_mask)
 			tun6->ethertype = rte_cpu_to_be_16(~0);
@@ -1753,8 +1753,8 @@ nfp_flow_merge_gre_key(struct nfp_app_fw_flower *app_fw_flower,
 	mask = item->mask ? item->mask : proc->mask_default;
 	tun_key = is_mask ? *mask : *spec;
 
-	if (rte_be_to_cpu_32(ext_meta->nfp_flow_key_layer2) &
-			NFP_FLOWER_LAYER2_TUN_IPV6) {
+	if ((rte_be_to_cpu_32(ext_meta->nfp_flow_key_layer2) &
+			NFP_FLOWER_LAYER2_TUN_IPV6) != 0) {
 		tun6 = (struct nfp_flower_ipv6_gre_tun *)*mbuf_off;
 		tun6->tun_key = tun_key;
 		tun6->tun_flags = rte_cpu_to_be_16(NFP_FL_GRE_FLAG_KEY);
@@ -1769,8 +1769,8 @@ nfp_flow_merge_gre_key(struct nfp_app_fw_flower *app_fw_flower,
 	}
 
 gre_key_end:
-	if (rte_be_to_cpu_32(ext_meta->nfp_flow_key_layer2) &
-			NFP_FLOWER_LAYER2_TUN_IPV6)
+	if ((rte_be_to_cpu_32(ext_meta->nfp_flow_key_layer2) &
+			NFP_FLOWER_LAYER2_TUN_IPV6) != 0)
 		*mbuf_off += sizeof(struct nfp_flower_ipv6_gre_tun);
 	else
 		*mbuf_off += sizeof(struct nfp_flower_ipv4_gre_tun);
@@ -2115,7 +2115,7 @@ nfp_flow_compile_items(struct nfp_flower_representor *representor,
 			sizeof(struct nfp_flower_in_port);
 
 	meta_tci = (struct nfp_flower_meta_tci *)nfp_flow->payload.unmasked_data;
-	if (meta_tci->nfp_flow_key_layer & NFP_FLOWER_LAYER_EXT_META) {
+	if ((meta_tci->nfp_flow_key_layer & NFP_FLOWER_LAYER_EXT_META) != 0) {
 		mbuf_off_exact += sizeof(struct nfp_flower_ext_meta);
 		mbuf_off_mask += sizeof(struct nfp_flower_ext_meta);
 	}
@@ -2558,7 +2558,7 @@ nfp_flower_add_tun_neigh_v4_decap(struct nfp_app_fw_flower *app_fw_flower,
 	port = (struct nfp_flower_in_port *)(meta_tci + 1);
 	eth = (struct nfp_flower_mac_mpls *)(port + 1);
 
-	if (meta_tci->nfp_flow_key_layer & NFP_FLOWER_LAYER_TP)
+	if ((meta_tci->nfp_flow_key_layer & NFP_FLOWER_LAYER_TP) != 0)
 		ipv4 = (struct nfp_flower_ipv4 *)((char *)eth +
 				sizeof(struct nfp_flower_mac_mpls) +
 				sizeof(struct nfp_flower_tp_ports));
@@ -2685,7 +2685,7 @@ nfp_flower_add_tun_neigh_v6_decap(struct nfp_app_fw_flower *app_fw_flower,
 	port = (struct nfp_flower_in_port *)(meta_tci + 1);
 	eth = (struct nfp_flower_mac_mpls *)(port + 1);
 
-	if (meta_tci->nfp_flow_key_layer & NFP_FLOWER_LAYER_TP)
+	if ((meta_tci->nfp_flow_key_layer & NFP_FLOWER_LAYER_TP) != 0)
 		ipv6 = (struct nfp_flower_ipv6 *)((char *)eth +
 				sizeof(struct nfp_flower_mac_mpls) +
 				sizeof(struct nfp_flower_tp_ports));
@@ -3181,7 +3181,7 @@ nfp_flow_action_tunnel_decap(struct nfp_flower_representor *repr,
 	}
 
 	meta_tci = (struct nfp_flower_meta_tci *)nfp_flow->payload.unmasked_data;
-	if (meta_tci->nfp_flow_key_layer & NFP_FLOWER_LAYER_IPV4)
+	if ((meta_tci->nfp_flow_key_layer & NFP_FLOWER_LAYER_IPV4) != 0)
 		return nfp_flower_add_tun_neigh_v4_decap(app_fw_flower, nfp_flow_meta, nfp_flow);
 	else
 		return nfp_flower_add_tun_neigh_v6_decap(app_fw_flower, nfp_flow_meta, nfp_flow);
diff --git a/drivers/net/nfp/nfp_rxtx.c b/drivers/net/nfp/nfp_rxtx.c
index 66a5d6cb3a..4528417559 100644
--- a/drivers/net/nfp/nfp_rxtx.c
+++ b/drivers/net/nfp/nfp_rxtx.c
@@ -163,22 +163,22 @@ nfp_net_rx_cksum(struct nfp_net_rxq *rxq, struct nfp_net_rx_desc *rxd,
 {
 	struct nfp_net_hw *hw = rxq->hw;
 
-	if (!(hw->ctrl & NFP_NET_CFG_CTRL_RXCSUM))
+	if ((hw->ctrl & NFP_NET_CFG_CTRL_RXCSUM) == 0)
 		return;
 
 	/* If IPv4 and IP checksum error, fail */
-	if (unlikely((rxd->rxd.flags & PCIE_DESC_RX_IP4_CSUM) &&
-			!(rxd->rxd.flags & PCIE_DESC_RX_IP4_CSUM_OK)))
+	if (unlikely((rxd->rxd.flags & PCIE_DESC_RX_IP4_CSUM) != 0 &&
+			(rxd->rxd.flags & PCIE_DESC_RX_IP4_CSUM_OK) == 0))
 		mb->ol_flags |= RTE_MBUF_F_RX_IP_CKSUM_BAD;
 	else
 		mb->ol_flags |= RTE_MBUF_F_RX_IP_CKSUM_GOOD;
 
 	/* If neither UDP nor TCP return */
-	if (!(rxd->rxd.flags & PCIE_DESC_RX_TCP_CSUM) &&
-			!(rxd->rxd.flags & PCIE_DESC_RX_UDP_CSUM))
+	if ((rxd->rxd.flags & PCIE_DESC_RX_TCP_CSUM) == 0 &&
+			(rxd->rxd.flags & PCIE_DESC_RX_UDP_CSUM) == 0)
 		return;
 
-	if (likely(rxd->rxd.flags & PCIE_DESC_RX_L4_CSUM_OK))
+	if (likely(rxd->rxd.flags & PCIE_DESC_RX_L4_CSUM_OK) != 0)
 		mb->ol_flags |= RTE_MBUF_F_RX_L4_CKSUM_GOOD;
 	else
 		mb->ol_flags |= RTE_MBUF_F_RX_L4_CKSUM_BAD;
@@ -232,7 +232,7 @@ nfp_net_rx_freelist_setup(struct rte_eth_dev *dev)
 	int i;
 
 	for (i = 0; i < dev->data->nb_rx_queues; i++) {
-		if (nfp_net_rx_fill_freelist(dev->data->rx_queues[i]) < 0)
+		if (nfp_net_rx_fill_freelist(dev->data->rx_queues[i]) != 0)
 			return -1;
 	}
 	return 0;
@@ -387,7 +387,7 @@ nfp_net_parse_meta_vlan(const struct nfp_meta_parsed *meta,
 	 * to do anything.
 	 */
 	if ((hw->ctrl & NFP_NET_CFG_CTRL_RXVLAN_V2) != 0) {
-		if (meta->vlan_layer >= 1 && meta->vlan[0].offload != 0) {
+		if (meta->vlan_layer > 0 && meta->vlan[0].offload != 0) {
 			mb->vlan_tci = rte_cpu_to_le_32(meta->vlan[0].tci);
 			mb->ol_flags |= RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED;
 		}
@@ -771,7 +771,7 @@ nfp_net_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
 		}
 
 		/* Filling the received mbuf with packet info */
-		if (hw->rx_offset)
+		if (hw->rx_offset != 0)
 			mb->data_off = RTE_PKTMBUF_HEADROOM + hw->rx_offset;
 		else
 			mb->data_off = RTE_PKTMBUF_HEADROOM +
@@ -846,7 +846,7 @@ nfp_net_rx_queue_release_mbufs(struct nfp_net_rxq *rxq)
 		return;
 
 	for (i = 0; i < rxq->rx_count; i++) {
-		if (rxq->rxbufs[i].mbuf) {
+		if (rxq->rxbufs[i].mbuf != NULL) {
 			rte_pktmbuf_free_seg(rxq->rxbufs[i].mbuf);
 			rxq->rxbufs[i].mbuf = NULL;
 		}
@@ -858,7 +858,7 @@ nfp_net_rx_queue_release(struct rte_eth_dev *dev, uint16_t queue_idx)
 {
 	struct nfp_net_rxq *rxq = dev->data->rx_queues[queue_idx];
 
-	if (rxq) {
+	if (rxq != NULL) {
 		nfp_net_rx_queue_release_mbufs(rxq);
 		rte_eth_dma_zone_free(dev, "rx_ring", queue_idx);
 		rte_free(rxq->rxbufs);
@@ -906,7 +906,7 @@ nfp_net_rx_queue_setup(struct rte_eth_dev *dev,
 	 * Free memory prior to re-allocation if needed. This is the case after
 	 * calling nfp_net_stop
 	 */
-	if (dev->data->rx_queues[queue_idx]) {
+	if (dev->data->rx_queues[queue_idx] != NULL) {
 		nfp_net_rx_queue_release(dev, queue_idx);
 		dev->data->rx_queues[queue_idx] = NULL;
 	}
@@ -1037,7 +1037,7 @@ nfp_net_tx_queue_release_mbufs(struct nfp_net_txq *txq)
 		return;
 
 	for (i = 0; i < txq->tx_count; i++) {
-		if (txq->txbufs[i].mbuf) {
+		if (txq->txbufs[i].mbuf != NULL) {
 			rte_pktmbuf_free_seg(txq->txbufs[i].mbuf);
 			txq->txbufs[i].mbuf = NULL;
 		}
@@ -1049,7 +1049,7 @@ nfp_net_tx_queue_release(struct rte_eth_dev *dev, uint16_t queue_idx)
 {
 	struct nfp_net_txq *txq = dev->data->tx_queues[queue_idx];
 
-	if (txq) {
+	if (txq != NULL) {
 		nfp_net_tx_queue_release_mbufs(txq);
 		rte_eth_dma_zone_free(dev, "tx_ring", queue_idx);
 		rte_free(txq->txbufs);
-- 
2.39.1


^ permalink raw reply	[flat|nested] 40+ messages in thread

* [PATCH v3 02/11] net/nfp: unify the indent coding style
  2023-10-13  6:06   ` [PATCH v3 00/11] Unify the PMD coding style Chaoyong He
  2023-10-13  6:06     ` [PATCH v3 01/11] net/nfp: explicitly compare to null and 0 Chaoyong He
@ 2023-10-13  6:06     ` Chaoyong He
  2023-10-13  6:06     ` [PATCH v3 03/11] net/nfp: unify the type of integer variable Chaoyong He
                       ` (9 subsequent siblings)
  11 siblings, 0 replies; 40+ messages in thread
From: Chaoyong He @ 2023-10-13  6:06 UTC (permalink / raw)
  To: dev; +Cc: oss-drivers, Chaoyong He, Long Wu, Peng Zhang

Each parameter of a function should occupy its own line, indented by
two TAB characters.
All statements which span multiple lines should also be indented by
two TAB characters.
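
As a minimal sketch of both rules (the function, parameter, and flag
names below are hypothetical, for illustration only, and assume the
usual <rte_ethdev.h> and <errno.h> headers):

#include <errno.h>

#include <rte_ethdev.h>

#define EXAMPLE_FLAG_A 0x1
#define EXAMPLE_FLAG_B 0x2

static int
nfp_example_configure(__rte_unused struct rte_eth_dev *dev,
		__rte_unused uint16_t queue_id,
		uint32_t flags)
{
	/* Multi-line statement: continuation line indented two TABs. */
	if ((flags & EXAMPLE_FLAG_A) != 0 &&
			(flags & EXAMPLE_FLAG_B) == 0)
		return -EINVAL;

	return 0;
}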

Signed-off-by: Chaoyong He <chaoyong.he@corigine.com>
Reviewed-by: Long Wu <long.wu@corigine.com>
Reviewed-by: Peng Zhang <peng.zhang@corigine.com>
---
 drivers/net/nfp/flower/nfp_flower.c           |   5 +-
 drivers/net/nfp/flower/nfp_flower_ctrl.c      |   7 +-
 .../net/nfp/flower/nfp_flower_representor.c   |   2 +-
 drivers/net/nfp/nfdk/nfp_nfdk.h               |   2 +-
 drivers/net/nfp/nfdk/nfp_nfdk_dp.c            |   4 +-
 drivers/net/nfp/nfp_common.c                  | 250 +++++++++---------
 drivers/net/nfp/nfp_common.h                  |  81 ++++--
 drivers/net/nfp/nfp_cpp_bridge.c              |  56 ++--
 drivers/net/nfp/nfp_ethdev.c                  |  82 +++---
 drivers/net/nfp/nfp_ethdev_vf.c               |  66 +++--
 drivers/net/nfp/nfp_flow.c                    |  36 +--
 drivers/net/nfp/nfp_rxtx.c                    |  86 +++---
 drivers/net/nfp/nfp_rxtx.h                    |  10 +-
 13 files changed, 358 insertions(+), 329 deletions(-)

diff --git a/drivers/net/nfp/flower/nfp_flower.c b/drivers/net/nfp/flower/nfp_flower.c
index 3ddaf0f28d..3352693d71 100644
--- a/drivers/net/nfp/flower/nfp_flower.c
+++ b/drivers/net/nfp/flower/nfp_flower.c
@@ -63,7 +63,7 @@ nfp_pf_repr_disable_queues(struct rte_eth_dev *dev)
 
 	new_ctrl = hw->ctrl & ~NFP_NET_CFG_CTRL_ENABLE;
 	update = NFP_NET_CFG_UPDATE_GEN | NFP_NET_CFG_UPDATE_RING |
-		 NFP_NET_CFG_UPDATE_MSIX;
+			NFP_NET_CFG_UPDATE_MSIX;
 
 	if (hw->cap & NFP_NET_CFG_CTRL_RINGCFG)
 		new_ctrl &= ~NFP_NET_CFG_CTRL_RINGCFG;
@@ -330,7 +330,8 @@ nfp_flower_pf_xmit_pkts(void *tx_queue,
 }
 
 static int
-nfp_flower_init_vnic_common(struct nfp_net_hw *hw, const char *vnic_type)
+nfp_flower_init_vnic_common(struct nfp_net_hw *hw,
+		const char *vnic_type)
 {
 	int err;
 	uint32_t start_q;
diff --git a/drivers/net/nfp/flower/nfp_flower_ctrl.c b/drivers/net/nfp/flower/nfp_flower_ctrl.c
index b564e7cd73..4967cc2375 100644
--- a/drivers/net/nfp/flower/nfp_flower_ctrl.c
+++ b/drivers/net/nfp/flower/nfp_flower_ctrl.c
@@ -64,9 +64,8 @@ nfp_flower_ctrl_vnic_recv(void *rx_queue,
 		 */
 		new_mb = rte_pktmbuf_alloc(rxq->mem_pool);
 		if (unlikely(new_mb == NULL)) {
-			PMD_RX_LOG(ERR,
-				"RX mbuf alloc failed port_id=%u queue_id=%hu",
-				rxq->port_id, rxq->qidx);
+			PMD_RX_LOG(ERR, "RX mbuf alloc failed port_id=%u queue_id=%hu",
+					rxq->port_id, rxq->qidx);
 			nfp_net_mbuf_alloc_failed(rxq);
 			break;
 		}
@@ -141,7 +140,7 @@ nfp_flower_ctrl_vnic_recv(void *rx_queue,
 	rte_wmb();
 	if (nb_hold >= rxq->rx_free_thresh) {
 		PMD_RX_LOG(DEBUG, "port=%hu queue=%hu nb_hold=%hu avail=%hu",
-			rxq->port_id, rxq->qidx, nb_hold, avail);
+				rxq->port_id, rxq->qidx, nb_hold, avail);
 		nfp_qcp_ptr_add(rxq->qcp_fl, NFP_QCP_WRITE_PTR, nb_hold);
 		nb_hold = 0;
 	}
diff --git a/drivers/net/nfp/flower/nfp_flower_representor.c b/drivers/net/nfp/flower/nfp_flower_representor.c
index e4c5d765e7..013ecbc998 100644
--- a/drivers/net/nfp/flower/nfp_flower_representor.c
+++ b/drivers/net/nfp/flower/nfp_flower_representor.c
@@ -830,7 +830,7 @@ nfp_flower_repr_alloc(struct nfp_app_fw_flower *app_fw_flower)
 		snprintf(flower_repr.name, sizeof(flower_repr.name),
 				"%s_repr_vf%d", pci_name, i);
 
-		 /* This will also allocate private memory for the device*/
+		/* This will also allocate private memory for the device*/
 		ret = rte_eth_dev_create(eth_dev->device, flower_repr.name,
 				sizeof(struct nfp_flower_representor),
 				NULL, NULL, nfp_flower_repr_init, &flower_repr);
diff --git a/drivers/net/nfp/nfdk/nfp_nfdk.h b/drivers/net/nfp/nfdk/nfp_nfdk.h
index 75ecb361ee..99675b6bd7 100644
--- a/drivers/net/nfp/nfdk/nfp_nfdk.h
+++ b/drivers/net/nfp/nfdk/nfp_nfdk.h
@@ -143,7 +143,7 @@ nfp_net_nfdk_free_tx_desc(struct nfp_net_txq *txq)
 		free_desc = txq->rd_p - txq->wr_p;
 
 	return (free_desc > NFDK_TX_DESC_STOP_CNT) ?
-		(free_desc - NFDK_TX_DESC_STOP_CNT) : 0;
+			(free_desc - NFDK_TX_DESC_STOP_CNT) : 0;
 }
 
 /*
diff --git a/drivers/net/nfp/nfdk/nfp_nfdk_dp.c b/drivers/net/nfp/nfdk/nfp_nfdk_dp.c
index d4bd5edb0a..2426ffb261 100644
--- a/drivers/net/nfp/nfdk/nfp_nfdk_dp.c
+++ b/drivers/net/nfp/nfdk/nfp_nfdk_dp.c
@@ -101,9 +101,7 @@ static inline uint16_t
 nfp_net_nfdk_headlen_to_segs(uint16_t headlen)
 {
 	/* First descriptor fits less data, so adjust for that */
-	return DIV_ROUND_UP(headlen +
-			NFDK_TX_MAX_DATA_PER_DESC -
-			NFDK_TX_MAX_DATA_PER_HEAD,
+	return DIV_ROUND_UP(headlen + NFDK_TX_MAX_DATA_PER_DESC - NFDK_TX_MAX_DATA_PER_HEAD,
 			NFDK_TX_MAX_DATA_PER_DESC);
 }
 
diff --git a/drivers/net/nfp/nfp_common.c b/drivers/net/nfp/nfp_common.c
index 36752583dd..9719a9212b 100644
--- a/drivers/net/nfp/nfp_common.c
+++ b/drivers/net/nfp/nfp_common.c
@@ -172,7 +172,8 @@ nfp_net_link_speed_rte2nfp(uint16_t speed)
 }
 
 static void
-nfp_net_notify_port_speed(struct nfp_net_hw *hw, struct rte_eth_link *link)
+nfp_net_notify_port_speed(struct nfp_net_hw *hw,
+		struct rte_eth_link *link)
 {
 	/**
 	 * Read the link status from NFP_NET_CFG_STS. If the link is down
@@ -188,21 +189,22 @@ nfp_net_notify_port_speed(struct nfp_net_hw *hw, struct rte_eth_link *link)
 	 * NFP_NET_CFG_STS_NSP_LINK_RATE.
 	 */
 	nn_cfg_writew(hw, NFP_NET_CFG_STS_NSP_LINK_RATE,
-		      nfp_net_link_speed_rte2nfp(link->link_speed));
+			nfp_net_link_speed_rte2nfp(link->link_speed));
 }
 
 /* The length of firmware version string */
 #define FW_VER_LEN        32
 
 static int
-__nfp_net_reconfig(struct nfp_net_hw *hw, uint32_t update)
+__nfp_net_reconfig(struct nfp_net_hw *hw,
+		uint32_t update)
 {
 	int cnt;
 	uint32_t new;
 	struct timespec wait;
 
 	PMD_DRV_LOG(DEBUG, "Writing to the configuration queue (%p)...",
-		    hw->qcp_cfg);
+			hw->qcp_cfg);
 
 	if (hw->qcp_cfg == NULL) {
 		PMD_INIT_LOG(ERR, "Bad configuration queue pointer");
@@ -227,7 +229,7 @@ __nfp_net_reconfig(struct nfp_net_hw *hw, uint32_t update)
 		}
 		if (cnt >= NFP_NET_POLL_TIMEOUT) {
 			PMD_INIT_LOG(ERR, "Reconfig timeout for 0x%08x after"
-					  " %dms", update, cnt);
+					" %dms", update, cnt);
 			return -EIO;
 		}
 		nanosleep(&wait, 0); /* waiting for a 1ms */
@@ -254,7 +256,9 @@ __nfp_net_reconfig(struct nfp_net_hw *hw, uint32_t update)
  *   - (EIO) if I/O err and fail to reconfigure the device.
  */
 int
-nfp_net_reconfig(struct nfp_net_hw *hw, uint32_t ctrl, uint32_t update)
+nfp_net_reconfig(struct nfp_net_hw *hw,
+		uint32_t ctrl,
+		uint32_t update)
 {
 	int ret;
 
@@ -296,7 +300,9 @@ nfp_net_reconfig(struct nfp_net_hw *hw, uint32_t ctrl, uint32_t update)
  *   - (EIO) if I/O err and fail to reconfigure the device.
  */
 int
-nfp_net_ext_reconfig(struct nfp_net_hw *hw, uint32_t ctrl_ext, uint32_t update)
+nfp_net_ext_reconfig(struct nfp_net_hw *hw,
+		uint32_t ctrl_ext,
+		uint32_t update)
 {
 	int ret;
 
@@ -401,7 +407,7 @@ nfp_net_configure(struct rte_eth_dev *dev)
 
 	/* Checking RX mode */
 	if ((rxmode->mq_mode & RTE_ETH_MQ_RX_RSS_FLAG) != 0 &&
-	    (hw->cap & NFP_NET_CFG_CTRL_RSS_ANY) == 0) {
+			(hw->cap & NFP_NET_CFG_CTRL_RSS_ANY) == 0) {
 		PMD_INIT_LOG(INFO, "RSS not supported");
 		return -EINVAL;
 	}
@@ -409,7 +415,7 @@ nfp_net_configure(struct rte_eth_dev *dev)
 	/* Checking MTU set */
 	if (rxmode->mtu > NFP_FRAME_SIZE_MAX) {
 		PMD_INIT_LOG(ERR, "MTU (%u) larger than NFP_FRAME_SIZE_MAX (%u) not supported",
-				    rxmode->mtu, NFP_FRAME_SIZE_MAX);
+				rxmode->mtu, NFP_FRAME_SIZE_MAX);
 		return -ERANGE;
 	}
 
@@ -446,7 +452,8 @@ nfp_net_log_device_information(const struct nfp_net_hw *hw)
 }
 
 static inline void
-nfp_net_enbable_rxvlan_cap(struct nfp_net_hw *hw, uint32_t *ctrl)
+nfp_net_enbable_rxvlan_cap(struct nfp_net_hw *hw,
+		uint32_t *ctrl)
 {
 	if ((hw->cap & NFP_NET_CFG_CTRL_RXVLAN_V2) != 0)
 		*ctrl |= NFP_NET_CFG_CTRL_RXVLAN_V2;
@@ -490,8 +497,9 @@ nfp_net_disable_queues(struct rte_eth_dev *dev)
 	nn_cfg_writeq(hw, NFP_NET_CFG_RXRS_ENABLE, 0);
 
 	new_ctrl = hw->ctrl & ~NFP_NET_CFG_CTRL_ENABLE;
-	update = NFP_NET_CFG_UPDATE_GEN | NFP_NET_CFG_UPDATE_RING |
-		 NFP_NET_CFG_UPDATE_MSIX;
+	update = NFP_NET_CFG_UPDATE_GEN |
+			NFP_NET_CFG_UPDATE_RING |
+			NFP_NET_CFG_UPDATE_MSIX;
 
 	if ((hw->cap & NFP_NET_CFG_CTRL_RINGCFG) != 0)
 		new_ctrl &= ~NFP_NET_CFG_CTRL_RINGCFG;
@@ -517,7 +525,8 @@ nfp_net_cfg_queue_setup(struct nfp_net_hw *hw)
 }
 
 void
-nfp_net_write_mac(struct nfp_net_hw *hw, uint8_t *mac)
+nfp_net_write_mac(struct nfp_net_hw *hw,
+		uint8_t *mac)
 {
 	uint32_t mac0 = *(uint32_t *)mac;
 	uint16_t mac1;
@@ -527,20 +536,21 @@ nfp_net_write_mac(struct nfp_net_hw *hw, uint8_t *mac)
 	mac += 4;
 	mac1 = *(uint16_t *)mac;
 	nn_writew(rte_cpu_to_be_16(mac1),
-		  hw->ctrl_bar + NFP_NET_CFG_MACADDR + 6);
+			hw->ctrl_bar + NFP_NET_CFG_MACADDR + 6);
 }
 
 int
-nfp_net_set_mac_addr(struct rte_eth_dev *dev, struct rte_ether_addr *mac_addr)
+nfp_net_set_mac_addr(struct rte_eth_dev *dev,
+		struct rte_ether_addr *mac_addr)
 {
 	struct nfp_net_hw *hw;
 	uint32_t update, ctrl;
 
 	hw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private);
 	if ((hw->ctrl & NFP_NET_CFG_CTRL_ENABLE) != 0 &&
-	    (hw->cap & NFP_NET_CFG_CTRL_LIVE_ADDR) == 0) {
+			(hw->cap & NFP_NET_CFG_CTRL_LIVE_ADDR) == 0) {
 		PMD_INIT_LOG(INFO, "MAC address unable to change when"
-				  " port enabled");
+				" port enabled");
 		return -EBUSY;
 	}
 
@@ -551,7 +561,7 @@ nfp_net_set_mac_addr(struct rte_eth_dev *dev, struct rte_ether_addr *mac_addr)
 	update = NFP_NET_CFG_UPDATE_MACADDR;
 	ctrl = hw->ctrl;
 	if ((hw->ctrl & NFP_NET_CFG_CTRL_ENABLE) != 0 &&
-	    (hw->cap & NFP_NET_CFG_CTRL_LIVE_ADDR) != 0)
+			(hw->cap & NFP_NET_CFG_CTRL_LIVE_ADDR) != 0)
 		ctrl |= NFP_NET_CFG_CTRL_LIVE_ADDR;
 	if (nfp_net_reconfig(hw, ctrl, update) != 0) {
 		PMD_INIT_LOG(INFO, "MAC address update failed");
@@ -562,15 +572,15 @@ nfp_net_set_mac_addr(struct rte_eth_dev *dev, struct rte_ether_addr *mac_addr)
 
 int
 nfp_configure_rx_interrupt(struct rte_eth_dev *dev,
-			   struct rte_intr_handle *intr_handle)
+		struct rte_intr_handle *intr_handle)
 {
 	struct nfp_net_hw *hw;
 	int i;
 
 	if (rte_intr_vec_list_alloc(intr_handle, "intr_vec",
-				    dev->data->nb_rx_queues) != 0) {
+				dev->data->nb_rx_queues) != 0) {
 		PMD_INIT_LOG(ERR, "Failed to allocate %d rx_queues"
-			     " intr_vec", dev->data->nb_rx_queues);
+				" intr_vec", dev->data->nb_rx_queues);
 		return -ENOMEM;
 	}
 
@@ -590,12 +600,10 @@ nfp_configure_rx_interrupt(struct rte_eth_dev *dev,
 			 * efd interrupts
 			*/
 			nn_cfg_writeb(hw, NFP_NET_CFG_RXR_VEC(i), i + 1);
-			if (rte_intr_vec_list_index_set(intr_handle, i,
-							       i + 1) != 0)
+			if (rte_intr_vec_list_index_set(intr_handle, i, i + 1) != 0)
 				return -1;
 			PMD_INIT_LOG(DEBUG, "intr_vec[%d]= %d", i,
-				rte_intr_vec_list_index_get(intr_handle,
-								   i));
+					rte_intr_vec_list_index_get(intr_handle, i));
 		}
 	}
 
@@ -651,13 +659,13 @@ nfp_check_offloads(struct rte_eth_dev *dev)
 
 	/* TX checksum offload */
 	if ((txmode->offloads & RTE_ETH_TX_OFFLOAD_IPV4_CKSUM) != 0 ||
-	    (txmode->offloads & RTE_ETH_TX_OFFLOAD_UDP_CKSUM) != 0 ||
-	    (txmode->offloads & RTE_ETH_TX_OFFLOAD_TCP_CKSUM) != 0)
+			(txmode->offloads & RTE_ETH_TX_OFFLOAD_UDP_CKSUM) != 0 ||
+			(txmode->offloads & RTE_ETH_TX_OFFLOAD_TCP_CKSUM) != 0)
 		ctrl |= NFP_NET_CFG_CTRL_TXCSUM;
 
 	/* LSO offload */
 	if ((txmode->offloads & RTE_ETH_TX_OFFLOAD_TCP_TSO) != 0 ||
-	    (txmode->offloads & RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO) != 0) {
+			(txmode->offloads & RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO) != 0) {
 		if ((hw->cap & NFP_NET_CFG_CTRL_LSO) != 0)
 			ctrl |= NFP_NET_CFG_CTRL_LSO;
 		else
@@ -751,7 +759,8 @@ nfp_net_promisc_disable(struct rte_eth_dev *dev)
  * status.
  */
 int
-nfp_net_link_update(struct rte_eth_dev *dev, __rte_unused int wait_to_complete)
+nfp_net_link_update(struct rte_eth_dev *dev,
+		__rte_unused int wait_to_complete)
 {
 	int ret;
 	uint32_t i;
@@ -820,7 +829,8 @@ nfp_net_link_update(struct rte_eth_dev *dev, __rte_unused int wait_to_complete)
 }
 
 int
-nfp_net_stats_get(struct rte_eth_dev *dev, struct rte_eth_stats *stats)
+nfp_net_stats_get(struct rte_eth_dev *dev,
+		struct rte_eth_stats *stats)
 {
 	int i;
 	struct nfp_net_hw *hw;
@@ -838,16 +848,16 @@ nfp_net_stats_get(struct rte_eth_dev *dev, struct rte_eth_stats *stats)
 			break;
 
 		nfp_dev_stats.q_ipackets[i] =
-			nn_cfg_readq(hw, NFP_NET_CFG_RXR_STATS(i));
+				nn_cfg_readq(hw, NFP_NET_CFG_RXR_STATS(i));
 
 		nfp_dev_stats.q_ipackets[i] -=
-			hw->eth_stats_base.q_ipackets[i];
+				hw->eth_stats_base.q_ipackets[i];
 
 		nfp_dev_stats.q_ibytes[i] =
-			nn_cfg_readq(hw, NFP_NET_CFG_RXR_STATS(i) + 0x8);
+				nn_cfg_readq(hw, NFP_NET_CFG_RXR_STATS(i) + 0x8);
 
 		nfp_dev_stats.q_ibytes[i] -=
-			hw->eth_stats_base.q_ibytes[i];
+				hw->eth_stats_base.q_ibytes[i];
 	}
 
 	/* reading per TX ring stats */
@@ -856,46 +866,42 @@ nfp_net_stats_get(struct rte_eth_dev *dev, struct rte_eth_stats *stats)
 			break;
 
 		nfp_dev_stats.q_opackets[i] =
-			nn_cfg_readq(hw, NFP_NET_CFG_TXR_STATS(i));
+				nn_cfg_readq(hw, NFP_NET_CFG_TXR_STATS(i));
 
-		nfp_dev_stats.q_opackets[i] -=
-			hw->eth_stats_base.q_opackets[i];
+		nfp_dev_stats.q_opackets[i] -= hw->eth_stats_base.q_opackets[i];
 
 		nfp_dev_stats.q_obytes[i] =
-			nn_cfg_readq(hw, NFP_NET_CFG_TXR_STATS(i) + 0x8);
+				nn_cfg_readq(hw, NFP_NET_CFG_TXR_STATS(i) + 0x8);
 
-		nfp_dev_stats.q_obytes[i] -=
-			hw->eth_stats_base.q_obytes[i];
+		nfp_dev_stats.q_obytes[i] -= hw->eth_stats_base.q_obytes[i];
 	}
 
-	nfp_dev_stats.ipackets =
-		nn_cfg_readq(hw, NFP_NET_CFG_STATS_RX_FRAMES);
+	nfp_dev_stats.ipackets = nn_cfg_readq(hw, NFP_NET_CFG_STATS_RX_FRAMES);
 
 	nfp_dev_stats.ipackets -= hw->eth_stats_base.ipackets;
 
-	nfp_dev_stats.ibytes =
-		nn_cfg_readq(hw, NFP_NET_CFG_STATS_RX_OCTETS);
+	nfp_dev_stats.ibytes = nn_cfg_readq(hw, NFP_NET_CFG_STATS_RX_OCTETS);
 
 	nfp_dev_stats.ibytes -= hw->eth_stats_base.ibytes;
 
 	nfp_dev_stats.opackets =
-		nn_cfg_readq(hw, NFP_NET_CFG_STATS_TX_FRAMES);
+			nn_cfg_readq(hw, NFP_NET_CFG_STATS_TX_FRAMES);
 
 	nfp_dev_stats.opackets -= hw->eth_stats_base.opackets;
 
 	nfp_dev_stats.obytes =
-		nn_cfg_readq(hw, NFP_NET_CFG_STATS_TX_OCTETS);
+			nn_cfg_readq(hw, NFP_NET_CFG_STATS_TX_OCTETS);
 
 	nfp_dev_stats.obytes -= hw->eth_stats_base.obytes;
 
 	/* reading general device stats */
 	nfp_dev_stats.ierrors =
-		nn_cfg_readq(hw, NFP_NET_CFG_STATS_RX_ERRORS);
+			nn_cfg_readq(hw, NFP_NET_CFG_STATS_RX_ERRORS);
 
 	nfp_dev_stats.ierrors -= hw->eth_stats_base.ierrors;
 
 	nfp_dev_stats.oerrors =
-		nn_cfg_readq(hw, NFP_NET_CFG_STATS_TX_ERRORS);
+			nn_cfg_readq(hw, NFP_NET_CFG_STATS_TX_ERRORS);
 
 	nfp_dev_stats.oerrors -= hw->eth_stats_base.oerrors;
 
@@ -903,7 +909,7 @@ nfp_net_stats_get(struct rte_eth_dev *dev, struct rte_eth_stats *stats)
 	nfp_dev_stats.rx_nombuf = dev->data->rx_mbuf_alloc_failed;
 
 	nfp_dev_stats.imissed =
-		nn_cfg_readq(hw, NFP_NET_CFG_STATS_RX_DISCARDS);
+			nn_cfg_readq(hw, NFP_NET_CFG_STATS_RX_DISCARDS);
 
 	nfp_dev_stats.imissed -= hw->eth_stats_base.imissed;
 
@@ -933,10 +939,10 @@ nfp_net_stats_reset(struct rte_eth_dev *dev)
 			break;
 
 		hw->eth_stats_base.q_ipackets[i] =
-			nn_cfg_readq(hw, NFP_NET_CFG_RXR_STATS(i));
+				nn_cfg_readq(hw, NFP_NET_CFG_RXR_STATS(i));
 
 		hw->eth_stats_base.q_ibytes[i] =
-			nn_cfg_readq(hw, NFP_NET_CFG_RXR_STATS(i) + 0x8);
+				nn_cfg_readq(hw, NFP_NET_CFG_RXR_STATS(i) + 0x8);
 	}
 
 	/* reading per TX ring stats */
@@ -945,36 +951,36 @@ nfp_net_stats_reset(struct rte_eth_dev *dev)
 			break;
 
 		hw->eth_stats_base.q_opackets[i] =
-			nn_cfg_readq(hw, NFP_NET_CFG_TXR_STATS(i));
+				nn_cfg_readq(hw, NFP_NET_CFG_TXR_STATS(i));
 
 		hw->eth_stats_base.q_obytes[i] =
-			nn_cfg_readq(hw, NFP_NET_CFG_TXR_STATS(i) + 0x8);
+				nn_cfg_readq(hw, NFP_NET_CFG_TXR_STATS(i) + 0x8);
 	}
 
 	hw->eth_stats_base.ipackets =
-		nn_cfg_readq(hw, NFP_NET_CFG_STATS_RX_FRAMES);
+			nn_cfg_readq(hw, NFP_NET_CFG_STATS_RX_FRAMES);
 
 	hw->eth_stats_base.ibytes =
-		nn_cfg_readq(hw, NFP_NET_CFG_STATS_RX_OCTETS);
+			nn_cfg_readq(hw, NFP_NET_CFG_STATS_RX_OCTETS);
 
 	hw->eth_stats_base.opackets =
-		nn_cfg_readq(hw, NFP_NET_CFG_STATS_TX_FRAMES);
+			nn_cfg_readq(hw, NFP_NET_CFG_STATS_TX_FRAMES);
 
 	hw->eth_stats_base.obytes =
-		nn_cfg_readq(hw, NFP_NET_CFG_STATS_TX_OCTETS);
+			nn_cfg_readq(hw, NFP_NET_CFG_STATS_TX_OCTETS);
 
 	/* reading general device stats */
 	hw->eth_stats_base.ierrors =
-		nn_cfg_readq(hw, NFP_NET_CFG_STATS_RX_ERRORS);
+			nn_cfg_readq(hw, NFP_NET_CFG_STATS_RX_ERRORS);
 
 	hw->eth_stats_base.oerrors =
-		nn_cfg_readq(hw, NFP_NET_CFG_STATS_TX_ERRORS);
+			nn_cfg_readq(hw, NFP_NET_CFG_STATS_TX_ERRORS);
 
 	/* RX ring mbuf allocation failures */
 	dev->data->rx_mbuf_alloc_failed = 0;
 
 	hw->eth_stats_base.imissed =
-		nn_cfg_readq(hw, NFP_NET_CFG_STATS_RX_DISCARDS);
+			nn_cfg_readq(hw, NFP_NET_CFG_STATS_RX_DISCARDS);
 
 	return 0;
 }
@@ -1237,16 +1243,16 @@ nfp_net_infos_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 
 	if ((hw->cap & NFP_NET_CFG_CTRL_RXCSUM) != 0)
 		dev_info->rx_offload_capa |= RTE_ETH_RX_OFFLOAD_IPV4_CKSUM |
-					     RTE_ETH_RX_OFFLOAD_UDP_CKSUM |
-					     RTE_ETH_RX_OFFLOAD_TCP_CKSUM;
+				RTE_ETH_RX_OFFLOAD_UDP_CKSUM |
+				RTE_ETH_RX_OFFLOAD_TCP_CKSUM;
 
 	if ((hw->cap & (NFP_NET_CFG_CTRL_TXVLAN | NFP_NET_CFG_CTRL_TXVLAN_V2)) != 0)
 		dev_info->tx_offload_capa = RTE_ETH_TX_OFFLOAD_VLAN_INSERT;
 
 	if ((hw->cap & NFP_NET_CFG_CTRL_TXCSUM) != 0)
 		dev_info->tx_offload_capa |= RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
-					     RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
-					     RTE_ETH_TX_OFFLOAD_TCP_CKSUM;
+				RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+				RTE_ETH_TX_OFFLOAD_TCP_CKSUM;
 
 	if ((hw->cap & NFP_NET_CFG_CTRL_LSO_ANY) != 0) {
 		dev_info->tx_offload_capa |= RTE_ETH_TX_OFFLOAD_TCP_TSO;
@@ -1301,21 +1307,24 @@ nfp_net_infos_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 		dev_info->rx_offload_capa |= RTE_ETH_RX_OFFLOAD_RSS_HASH;
 
 		dev_info->flow_type_rss_offloads = RTE_ETH_RSS_IPV4 |
-						   RTE_ETH_RSS_NONFRAG_IPV4_TCP |
-						   RTE_ETH_RSS_NONFRAG_IPV4_UDP |
-						   RTE_ETH_RSS_NONFRAG_IPV4_SCTP |
-						   RTE_ETH_RSS_IPV6 |
-						   RTE_ETH_RSS_NONFRAG_IPV6_TCP |
-						   RTE_ETH_RSS_NONFRAG_IPV6_UDP |
-						   RTE_ETH_RSS_NONFRAG_IPV6_SCTP;
+				RTE_ETH_RSS_NONFRAG_IPV4_TCP |
+				RTE_ETH_RSS_NONFRAG_IPV4_UDP |
+				RTE_ETH_RSS_NONFRAG_IPV4_SCTP |
+				RTE_ETH_RSS_IPV6 |
+				RTE_ETH_RSS_NONFRAG_IPV6_TCP |
+				RTE_ETH_RSS_NONFRAG_IPV6_UDP |
+				RTE_ETH_RSS_NONFRAG_IPV6_SCTP;
 
 		dev_info->reta_size = NFP_NET_CFG_RSS_ITBL_SZ;
 		dev_info->hash_key_size = NFP_NET_CFG_RSS_KEY_SZ;
 	}
 
-	dev_info->speed_capa = RTE_ETH_LINK_SPEED_1G | RTE_ETH_LINK_SPEED_10G |
-			       RTE_ETH_LINK_SPEED_25G | RTE_ETH_LINK_SPEED_40G |
-			       RTE_ETH_LINK_SPEED_50G | RTE_ETH_LINK_SPEED_100G;
+	dev_info->speed_capa = RTE_ETH_LINK_SPEED_1G |
+			RTE_ETH_LINK_SPEED_10G |
+			RTE_ETH_LINK_SPEED_25G |
+			RTE_ETH_LINK_SPEED_40G |
+			RTE_ETH_LINK_SPEED_50G |
+			RTE_ETH_LINK_SPEED_100G;
 
 	return 0;
 }
@@ -1384,7 +1393,8 @@ nfp_net_supported_ptypes_get(struct rte_eth_dev *dev)
 }
 
 int
-nfp_rx_queue_intr_enable(struct rte_eth_dev *dev, uint16_t queue_id)
+nfp_rx_queue_intr_enable(struct rte_eth_dev *dev,
+		uint16_t queue_id)
 {
 	struct rte_pci_device *pci_dev;
 	struct nfp_net_hw *hw;
@@ -1393,19 +1403,19 @@ nfp_rx_queue_intr_enable(struct rte_eth_dev *dev, uint16_t queue_id)
 	hw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private);
 	pci_dev = RTE_ETH_DEV_TO_PCI(dev);
 
-	if (rte_intr_type_get(pci_dev->intr_handle) !=
-							RTE_INTR_HANDLE_UIO)
+	if (rte_intr_type_get(pci_dev->intr_handle) != RTE_INTR_HANDLE_UIO)
 		base = 1;
 
 	/* Make sure all updates are written before un-masking */
 	rte_wmb();
 	nn_cfg_writeb(hw, NFP_NET_CFG_ICR(base + queue_id),
-		      NFP_NET_CFG_ICR_UNMASKED);
+			NFP_NET_CFG_ICR_UNMASKED);
 	return 0;
 }
 
 int
-nfp_rx_queue_intr_disable(struct rte_eth_dev *dev, uint16_t queue_id)
+nfp_rx_queue_intr_disable(struct rte_eth_dev *dev,
+		uint16_t queue_id)
 {
 	struct rte_pci_device *pci_dev;
 	struct nfp_net_hw *hw;
@@ -1414,8 +1424,7 @@ nfp_rx_queue_intr_disable(struct rte_eth_dev *dev, uint16_t queue_id)
 	hw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private);
 	pci_dev = RTE_ETH_DEV_TO_PCI(dev);
 
-	if (rte_intr_type_get(pci_dev->intr_handle) !=
-							RTE_INTR_HANDLE_UIO)
+	if (rte_intr_type_get(pci_dev->intr_handle) != RTE_INTR_HANDLE_UIO)
 		base = 1;
 
 	/* Make sure all updates are written before un-masking */
@@ -1433,16 +1442,15 @@ nfp_net_dev_link_status_print(struct rte_eth_dev *dev)
 	rte_eth_linkstatus_get(dev, &link);
 	if (link.link_status != 0)
 		PMD_DRV_LOG(INFO, "Port %d: Link Up - speed %u Mbps - %s",
-			    dev->data->port_id, link.link_speed,
-			    link.link_duplex == RTE_ETH_LINK_FULL_DUPLEX
-			    ? "full-duplex" : "half-duplex");
+				dev->data->port_id, link.link_speed,
+				link.link_duplex == RTE_ETH_LINK_FULL_DUPLEX ?
+				"full-duplex" : "half-duplex");
 	else
-		PMD_DRV_LOG(INFO, " Port %d: Link Down",
-			    dev->data->port_id);
+		PMD_DRV_LOG(INFO, " Port %d: Link Down", dev->data->port_id);
 
 	PMD_DRV_LOG(INFO, "PCI Address: " PCI_PRI_FMT,
-		    pci_dev->addr.domain, pci_dev->addr.bus,
-		    pci_dev->addr.devid, pci_dev->addr.function);
+			pci_dev->addr.domain, pci_dev->addr.bus,
+			pci_dev->addr.devid, pci_dev->addr.function);
 }
 
 /* Interrupt configuration and handling */
@@ -1470,7 +1478,7 @@ nfp_net_irq_unmask(struct rte_eth_dev *dev)
 		/* Make sure all updates are written before un-masking */
 		rte_wmb();
 		nn_cfg_writeb(hw, NFP_NET_CFG_ICR(NFP_NET_IRQ_LSC_IDX),
-			      NFP_NET_CFG_ICR_UNMASKED);
+				NFP_NET_CFG_ICR_UNMASKED);
 	}
 }
 
@@ -1523,8 +1531,8 @@ nfp_net_dev_interrupt_handler(void *param)
 	}
 
 	if (rte_eal_alarm_set(timeout * 1000,
-			      nfp_net_dev_interrupt_delayed_handler,
-			      (void *)dev) != 0) {
+			nfp_net_dev_interrupt_delayed_handler,
+			(void *)dev) != 0) {
 		PMD_INIT_LOG(ERR, "Error setting alarm");
 		/* Unmasking */
 		nfp_net_irq_unmask(dev);
@@ -1532,7 +1540,8 @@ nfp_net_dev_interrupt_handler(void *param)
 }
 
 int
-nfp_net_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
+nfp_net_dev_mtu_set(struct rte_eth_dev *dev,
+		uint16_t mtu)
 {
 	struct nfp_net_hw *hw;
 
@@ -1541,14 +1550,14 @@ nfp_net_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
 	/* mtu setting is forbidden if port is started */
 	if (dev->data->dev_started) {
 		PMD_DRV_LOG(ERR, "port %d must be stopped before configuration",
-			    dev->data->port_id);
+				dev->data->port_id);
 		return -EBUSY;
 	}
 
 	/* MTU larger than current mbufsize not supported */
 	if (mtu > hw->flbufsz) {
 		PMD_DRV_LOG(ERR, "MTU (%u) larger than current mbufsize (%u) not supported",
-			    mtu, hw->flbufsz);
+				mtu, hw->flbufsz);
 		return -ERANGE;
 	}
 
@@ -1561,7 +1570,8 @@ nfp_net_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
 }
 
 int
-nfp_net_vlan_offload_set(struct rte_eth_dev *dev, int mask)
+nfp_net_vlan_offload_set(struct rte_eth_dev *dev,
+		int mask)
 {
 	uint32_t new_ctrl, update;
 	struct nfp_net_hw *hw;
@@ -1606,8 +1616,8 @@ nfp_net_vlan_offload_set(struct rte_eth_dev *dev, int mask)
 
 static int
 nfp_net_rss_reta_write(struct rte_eth_dev *dev,
-		    struct rte_eth_rss_reta_entry64 *reta_conf,
-		    uint16_t reta_size)
+		struct rte_eth_rss_reta_entry64 *reta_conf,
+		uint16_t reta_size)
 {
 	uint32_t reta, mask;
 	int i, j;
@@ -1617,8 +1627,8 @@ nfp_net_rss_reta_write(struct rte_eth_dev *dev,
 
 	if (reta_size != NFP_NET_CFG_RSS_ITBL_SZ) {
 		PMD_DRV_LOG(ERR, "The size of hash lookup table configured "
-			"(%d) doesn't match the number hardware can supported "
-			"(%d)", reta_size, NFP_NET_CFG_RSS_ITBL_SZ);
+				"(%d) doesn't match the number hardware can supported "
+				"(%d)", reta_size, NFP_NET_CFG_RSS_ITBL_SZ);
 		return -EINVAL;
 	}
 
@@ -1648,8 +1658,7 @@ nfp_net_rss_reta_write(struct rte_eth_dev *dev,
 				reta &= ~(0xFF << (8 * j));
 			reta |= reta_conf[idx].reta[shift + j] << (8 * j);
 		}
-		nn_cfg_writel(hw, NFP_NET_CFG_RSS_ITBL + (idx * 64) + shift,
-			      reta);
+		nn_cfg_writel(hw, NFP_NET_CFG_RSS_ITBL + (idx * 64) + shift, reta);
 	}
 	return 0;
 }
@@ -1657,8 +1666,8 @@ nfp_net_rss_reta_write(struct rte_eth_dev *dev,
 /* Update Redirection Table(RETA) of Receive Side Scaling of Ethernet device */
 int
 nfp_net_reta_update(struct rte_eth_dev *dev,
-		    struct rte_eth_rss_reta_entry64 *reta_conf,
-		    uint16_t reta_size)
+		struct rte_eth_rss_reta_entry64 *reta_conf,
+		uint16_t reta_size)
 {
 	struct nfp_net_hw *hw =
 		NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private);
@@ -1683,8 +1692,8 @@ nfp_net_reta_update(struct rte_eth_dev *dev,
  /* Query Redirection Table(RETA) of Receive Side Scaling of Ethernet device. */
 int
 nfp_net_reta_query(struct rte_eth_dev *dev,
-		   struct rte_eth_rss_reta_entry64 *reta_conf,
-		   uint16_t reta_size)
+		struct rte_eth_rss_reta_entry64 *reta_conf,
+		uint16_t reta_size)
 {
 	uint8_t i, j, mask;
 	int idx, shift;
@@ -1698,8 +1707,8 @@ nfp_net_reta_query(struct rte_eth_dev *dev,
 
 	if (reta_size != NFP_NET_CFG_RSS_ITBL_SZ) {
 		PMD_DRV_LOG(ERR, "The size of hash lookup table configured "
-			"(%d) doesn't match the number hardware can supported "
-			"(%d)", reta_size, NFP_NET_CFG_RSS_ITBL_SZ);
+				"(%d) doesn't match the number hardware can supported "
+				"(%d)", reta_size, NFP_NET_CFG_RSS_ITBL_SZ);
 		return -EINVAL;
 	}
 
@@ -1716,13 +1725,12 @@ nfp_net_reta_query(struct rte_eth_dev *dev,
 		if (mask == 0)
 			continue;
 
-		reta = nn_cfg_readl(hw, NFP_NET_CFG_RSS_ITBL + (idx * 64) +
-				    shift);
+		reta = nn_cfg_readl(hw, NFP_NET_CFG_RSS_ITBL + (idx * 64) + shift);
 		for (j = 0; j < 4; j++) {
 			if ((mask & (0x1 << j)) == 0)
 				continue;
 			reta_conf[idx].reta[shift + j] =
-				(uint8_t)((reta >> (8 * j)) & 0xF);
+					(uint8_t)((reta >> (8 * j)) & 0xF);
 		}
 	}
 	return 0;
@@ -1730,7 +1738,7 @@ nfp_net_reta_query(struct rte_eth_dev *dev,
 
 static int
 nfp_net_rss_hash_write(struct rte_eth_dev *dev,
-			struct rte_eth_rss_conf *rss_conf)
+		struct rte_eth_rss_conf *rss_conf)
 {
 	struct nfp_net_hw *hw;
 	uint64_t rss_hf;
@@ -1786,7 +1794,7 @@ nfp_net_rss_hash_write(struct rte_eth_dev *dev,
 
 int
 nfp_net_rss_hash_update(struct rte_eth_dev *dev,
-			struct rte_eth_rss_conf *rss_conf)
+		struct rte_eth_rss_conf *rss_conf)
 {
 	uint32_t update;
 	uint64_t rss_hf;
@@ -1822,7 +1830,7 @@ nfp_net_rss_hash_update(struct rte_eth_dev *dev,
 
 int
 nfp_net_rss_hash_conf_get(struct rte_eth_dev *dev,
-			  struct rte_eth_rss_conf *rss_conf)
+		struct rte_eth_rss_conf *rss_conf)
 {
 	uint64_t rss_hf;
 	uint32_t cfg_rss_ctrl;
@@ -1888,7 +1896,7 @@ nfp_net_rss_config_default(struct rte_eth_dev *dev)
 	int i, j, ret;
 
 	PMD_DRV_LOG(INFO, "setting default RSS conf for %u queues",
-		rx_queues);
+			rx_queues);
 
 	nfp_reta_conf[0].mask = ~0x0;
 	nfp_reta_conf[1].mask = ~0x0;
@@ -1984,7 +1992,7 @@ nfp_net_set_vxlan_port(struct nfp_net_hw *hw,
 
 	for (i = 0; i < NFP_NET_N_VXLAN_PORTS; i += 2) {
 		nn_cfg_writel(hw, NFP_NET_CFG_VXLAN_PORT + i * sizeof(port),
-			(hw->vxlan_ports[i + 1] << 16) | hw->vxlan_ports[i]);
+				(hw->vxlan_ports[i + 1] << 16) | hw->vxlan_ports[i]);
 	}
 
 	rte_spinlock_lock(&hw->reconfig_lock);
@@ -2004,7 +2012,8 @@ nfp_net_set_vxlan_port(struct nfp_net_hw *hw,
  * than 40 bits
  */
 int
-nfp_net_check_dma_mask(struct nfp_net_hw *hw, char *name)
+nfp_net_check_dma_mask(struct nfp_net_hw *hw,
+		char *name)
 {
 	if (hw->ver.extend == NFP_NET_CFG_VERSION_DP_NFD3 &&
 			rte_mem_check_dma_mask(40) != 0) {
@@ -2052,7 +2061,8 @@ nfp_net_cfg_read_version(struct nfp_net_hw *hw)
 }
 
 static void
-nfp_net_get_nsp_info(struct nfp_net_hw *hw, char *nsp_version)
+nfp_net_get_nsp_info(struct nfp_net_hw *hw,
+		char *nsp_version)
 {
 	struct nfp_nsp *nsp;
 
@@ -2068,7 +2078,8 @@ nfp_net_get_nsp_info(struct nfp_net_hw *hw, char *nsp_version)
 }
 
 static void
-nfp_net_get_mip_name(struct nfp_net_hw *hw, char *mip_name)
+nfp_net_get_mip_name(struct nfp_net_hw *hw,
+		char *mip_name)
 {
 	struct nfp_mip *mip;
 
@@ -2082,7 +2093,8 @@ nfp_net_get_mip_name(struct nfp_net_hw *hw, char *mip_name)
 }
 
 static void
-nfp_net_get_app_name(struct nfp_net_hw *hw, char *app_name)
+nfp_net_get_app_name(struct nfp_net_hw *hw,
+		char *app_name)
 {
 	switch (hw->pf_dev->app_fw_id) {
 	case NFP_APP_FW_CORE_NIC:
diff --git a/drivers/net/nfp/nfp_common.h b/drivers/net/nfp/nfp_common.h
index bc3a948231..e4fd394868 100644
--- a/drivers/net/nfp/nfp_common.h
+++ b/drivers/net/nfp/nfp_common.h
@@ -180,37 +180,47 @@ struct nfp_net_adapter {
 	struct nfp_net_hw hw;
 };
 
-static inline uint8_t nn_readb(volatile const void *addr)
+static inline uint8_t
+nn_readb(volatile const void *addr)
 {
 	return rte_read8(addr);
 }
 
-static inline void nn_writeb(uint8_t val, volatile void *addr)
+static inline void
+nn_writeb(uint8_t val,
+		volatile void *addr)
 {
 	rte_write8(val, addr);
 }
 
-static inline uint32_t nn_readl(volatile const void *addr)
+static inline uint32_t
+nn_readl(volatile const void *addr)
 {
 	return rte_read32(addr);
 }
 
-static inline void nn_writel(uint32_t val, volatile void *addr)
+static inline void
+nn_writel(uint32_t val,
+		volatile void *addr)
 {
 	rte_write32(val, addr);
 }
 
-static inline uint16_t nn_readw(volatile const void *addr)
+static inline uint16_t
+nn_readw(volatile const void *addr)
 {
 	return rte_read16(addr);
 }
 
-static inline void nn_writew(uint16_t val, volatile void *addr)
+static inline void
+nn_writew(uint16_t val,
+		volatile void *addr)
 {
 	rte_write16(val, addr);
 }
 
-static inline uint64_t nn_readq(volatile void *addr)
+static inline uint64_t
+nn_readq(volatile void *addr)
 {
 	const volatile uint32_t *p = addr;
 	uint32_t low, high;
@@ -221,7 +231,9 @@ static inline uint64_t nn_readq(volatile void *addr)
 	return low + ((uint64_t)high << 32);
 }
 
-static inline void nn_writeq(uint64_t val, volatile void *addr)
+static inline void
+nn_writeq(uint64_t val,
+		volatile void *addr)
 {
 	nn_writel(val >> 32, (volatile char *)addr + 4);
 	nn_writel(val, addr);
@@ -232,49 +244,61 @@ static inline void nn_writeq(uint64_t val, volatile void *addr)
  * Performs any endian conversion necessary.
  */
 static inline uint8_t
-nn_cfg_readb(struct nfp_net_hw *hw, int off)
+nn_cfg_readb(struct nfp_net_hw *hw,
+		int off)
 {
 	return nn_readb(hw->ctrl_bar + off);
 }
 
 static inline void
-nn_cfg_writeb(struct nfp_net_hw *hw, int off, uint8_t val)
+nn_cfg_writeb(struct nfp_net_hw *hw,
+		int off,
+		uint8_t val)
 {
 	nn_writeb(val, hw->ctrl_bar + off);
 }
 
 static inline uint16_t
-nn_cfg_readw(struct nfp_net_hw *hw, int off)
+nn_cfg_readw(struct nfp_net_hw *hw,
+		int off)
 {
 	return rte_le_to_cpu_16(nn_readw(hw->ctrl_bar + off));
 }
 
 static inline void
-nn_cfg_writew(struct nfp_net_hw *hw, int off, uint16_t val)
+nn_cfg_writew(struct nfp_net_hw *hw,
+		int off,
+		uint16_t val)
 {
 	nn_writew(rte_cpu_to_le_16(val), hw->ctrl_bar + off);
 }
 
 static inline uint32_t
-nn_cfg_readl(struct nfp_net_hw *hw, int off)
+nn_cfg_readl(struct nfp_net_hw *hw,
+		int off)
 {
 	return rte_le_to_cpu_32(nn_readl(hw->ctrl_bar + off));
 }
 
 static inline void
-nn_cfg_writel(struct nfp_net_hw *hw, int off, uint32_t val)
+nn_cfg_writel(struct nfp_net_hw *hw,
+		int off,
+		uint32_t val)
 {
 	nn_writel(rte_cpu_to_le_32(val), hw->ctrl_bar + off);
 }
 
 static inline uint64_t
-nn_cfg_readq(struct nfp_net_hw *hw, int off)
+nn_cfg_readq(struct nfp_net_hw *hw,
+		int off)
 {
 	return rte_le_to_cpu_64(nn_readq(hw->ctrl_bar + off));
 }
 
 static inline void
-nn_cfg_writeq(struct nfp_net_hw *hw, int off, uint64_t val)
+nn_cfg_writeq(struct nfp_net_hw *hw,
+		int off,
+		uint64_t val)
 {
 	nn_writeq(rte_cpu_to_le_64(val), hw->ctrl_bar + off);
 }
@@ -286,7 +310,9 @@ nn_cfg_writeq(struct nfp_net_hw *hw, int off, uint64_t val)
  * @val: Value to add to the queue pointer
  */
 static inline void
-nfp_qcp_ptr_add(uint8_t *q, enum nfp_qcp_ptr ptr, uint32_t val)
+nfp_qcp_ptr_add(uint8_t *q,
+		enum nfp_qcp_ptr ptr,
+		uint32_t val)
 {
 	uint32_t off;
 
@@ -304,7 +330,8 @@ nfp_qcp_ptr_add(uint8_t *q, enum nfp_qcp_ptr ptr, uint32_t val)
  * @ptr: Read or Write pointer
  */
 static inline uint32_t
-nfp_qcp_read(uint8_t *q, enum nfp_qcp_ptr ptr)
+nfp_qcp_read(uint8_t *q,
+		enum nfp_qcp_ptr ptr)
 {
 	uint32_t off;
 	uint32_t val;
@@ -343,12 +370,12 @@ void nfp_net_params_setup(struct nfp_net_hw *hw);
 void nfp_net_write_mac(struct nfp_net_hw *hw, uint8_t *mac);
 int nfp_net_set_mac_addr(struct rte_eth_dev *dev, struct rte_ether_addr *mac_addr);
 int nfp_configure_rx_interrupt(struct rte_eth_dev *dev,
-			       struct rte_intr_handle *intr_handle);
+		struct rte_intr_handle *intr_handle);
 uint32_t nfp_check_offloads(struct rte_eth_dev *dev);
 int nfp_net_promisc_enable(struct rte_eth_dev *dev);
 int nfp_net_promisc_disable(struct rte_eth_dev *dev);
 int nfp_net_link_update(struct rte_eth_dev *dev,
-			__rte_unused int wait_to_complete);
+		__rte_unused int wait_to_complete);
 int nfp_net_stats_get(struct rte_eth_dev *dev, struct rte_eth_stats *stats);
 int nfp_net_stats_reset(struct rte_eth_dev *dev);
 uint32_t nfp_net_xstats_size(const struct rte_eth_dev *dev);
@@ -368,7 +395,7 @@ int nfp_net_xstats_get_by_id(struct rte_eth_dev *dev,
 		unsigned int n);
 int nfp_net_xstats_reset(struct rte_eth_dev *dev);
 int nfp_net_infos_get(struct rte_eth_dev *dev,
-		      struct rte_eth_dev_info *dev_info);
+		struct rte_eth_dev_info *dev_info);
 const uint32_t *nfp_net_supported_ptypes_get(struct rte_eth_dev *dev);
 int nfp_rx_queue_intr_enable(struct rte_eth_dev *dev, uint16_t queue_id);
 int nfp_rx_queue_intr_disable(struct rte_eth_dev *dev, uint16_t queue_id);
@@ -379,15 +406,15 @@ void nfp_net_dev_interrupt_delayed_handler(void *param);
 int nfp_net_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu);
 int nfp_net_vlan_offload_set(struct rte_eth_dev *dev, int mask);
 int nfp_net_reta_update(struct rte_eth_dev *dev,
-			struct rte_eth_rss_reta_entry64 *reta_conf,
-			uint16_t reta_size);
+		struct rte_eth_rss_reta_entry64 *reta_conf,
+		uint16_t reta_size);
 int nfp_net_reta_query(struct rte_eth_dev *dev,
-		       struct rte_eth_rss_reta_entry64 *reta_conf,
-		       uint16_t reta_size);
+		struct rte_eth_rss_reta_entry64 *reta_conf,
+		uint16_t reta_size);
 int nfp_net_rss_hash_update(struct rte_eth_dev *dev,
-			    struct rte_eth_rss_conf *rss_conf);
+		struct rte_eth_rss_conf *rss_conf);
 int nfp_net_rss_hash_conf_get(struct rte_eth_dev *dev,
-			      struct rte_eth_rss_conf *rss_conf);
+		struct rte_eth_rss_conf *rss_conf);
 int nfp_net_rss_config_default(struct rte_eth_dev *dev);
 void nfp_net_stop_rx_queue(struct rte_eth_dev *dev);
 void nfp_net_close_rx_queue(struct rte_eth_dev *dev);
diff --git a/drivers/net/nfp/nfp_cpp_bridge.c b/drivers/net/nfp/nfp_cpp_bridge.c
index 34764a8a32..85a8bf9235 100644
--- a/drivers/net/nfp/nfp_cpp_bridge.c
+++ b/drivers/net/nfp/nfp_cpp_bridge.c
@@ -116,7 +116,8 @@ nfp_enable_cpp_service(struct nfp_pf_dev *pf_dev)
  * of CPP interface handler configured by the PMD setup.
  */
 static int
-nfp_cpp_bridge_serve_write(int sockfd, struct nfp_cpp *cpp)
+nfp_cpp_bridge_serve_write(int sockfd,
+		struct nfp_cpp *cpp)
 {
 	struct nfp_cpp_area *area;
 	off_t offset, nfp_offset;
@@ -126,7 +127,7 @@ nfp_cpp_bridge_serve_write(int sockfd, struct nfp_cpp *cpp)
 	int err = 0;
 
 	PMD_CPP_LOG(DEBUG, "%s: offset size %zu, count_size: %zu\n", __func__,
-		sizeof(off_t), sizeof(size_t));
+			sizeof(off_t), sizeof(size_t));
 
 	/* Reading the count param */
 	err = recv(sockfd, &count, sizeof(off_t), 0);
@@ -145,21 +146,21 @@ nfp_cpp_bridge_serve_write(int sockfd, struct nfp_cpp *cpp)
 	nfp_offset = offset & ((1ull << 40) - 1);
 
 	PMD_CPP_LOG(DEBUG, "%s: count %zu and offset %jd\n", __func__, count,
-		offset);
+			offset);
 	PMD_CPP_LOG(DEBUG, "%s: cpp_id %08x and nfp_offset %jd\n", __func__,
-		cpp_id, nfp_offset);
+			cpp_id, nfp_offset);
 
 	/* Adjust length if not aligned */
 	if (((nfp_offset + (off_t)count - 1) & ~(NFP_CPP_MEMIO_BOUNDARY - 1)) !=
-	    (nfp_offset & ~(NFP_CPP_MEMIO_BOUNDARY - 1))) {
+			(nfp_offset & ~(NFP_CPP_MEMIO_BOUNDARY - 1))) {
 		curlen = NFP_CPP_MEMIO_BOUNDARY -
-			(nfp_offset & (NFP_CPP_MEMIO_BOUNDARY - 1));
+				(nfp_offset & (NFP_CPP_MEMIO_BOUNDARY - 1));
 	}
 
 	while (count > 0) {
 		/* configure a CPP PCIe2CPP BAR for mapping the CPP target */
 		area = nfp_cpp_area_alloc_with_name(cpp, cpp_id, "nfp.cdev",
-						    nfp_offset, curlen);
+				nfp_offset, curlen);
 		if (area == NULL) {
 			PMD_CPP_LOG(ERR, "area alloc fail");
 			return -EIO;
@@ -179,12 +180,11 @@ nfp_cpp_bridge_serve_write(int sockfd, struct nfp_cpp *cpp)
 				len = sizeof(tmpbuf);
 
 			PMD_CPP_LOG(DEBUG, "%s: Receive %u of %zu\n", __func__,
-					   len, count);
+					len, count);
 			err = recv(sockfd, tmpbuf, len, MSG_WAITALL);
 			if (err != (int)len) {
-				PMD_CPP_LOG(ERR,
-					"error when receiving, %d of %zu",
-					err, count);
+				PMD_CPP_LOG(ERR, "error when receiving, %d of %zu",
+						err, count);
 				nfp_cpp_area_release(area);
 				nfp_cpp_area_free(area);
 				return -EIO;
@@ -204,7 +204,7 @@ nfp_cpp_bridge_serve_write(int sockfd, struct nfp_cpp *cpp)
 
 		count -= pos;
 		curlen = (count > NFP_CPP_MEMIO_BOUNDARY) ?
-			 NFP_CPP_MEMIO_BOUNDARY : count;
+				NFP_CPP_MEMIO_BOUNDARY : count;
 	}
 
 	return 0;
@@ -217,7 +217,8 @@ nfp_cpp_bridge_serve_write(int sockfd, struct nfp_cpp *cpp)
  * data is sent to the requester using the same socket.
  */
 static int
-nfp_cpp_bridge_serve_read(int sockfd, struct nfp_cpp *cpp)
+nfp_cpp_bridge_serve_read(int sockfd,
+		struct nfp_cpp *cpp)
 {
 	struct nfp_cpp_area *area;
 	off_t offset, nfp_offset;
@@ -227,7 +228,7 @@ nfp_cpp_bridge_serve_read(int sockfd, struct nfp_cpp *cpp)
 	int err = 0;
 
 	PMD_CPP_LOG(DEBUG, "%s: offset size %zu, count_size: %zu\n", __func__,
-		sizeof(off_t), sizeof(size_t));
+			sizeof(off_t), sizeof(size_t));
 
 	/* Reading the count param */
 	err = recv(sockfd, &count, sizeof(off_t), 0);
@@ -246,20 +247,20 @@ nfp_cpp_bridge_serve_read(int sockfd, struct nfp_cpp *cpp)
 	nfp_offset = offset & ((1ull << 40) - 1);
 
 	PMD_CPP_LOG(DEBUG, "%s: count %zu and offset %jd\n", __func__, count,
-			   offset);
+			offset);
 	PMD_CPP_LOG(DEBUG, "%s: cpp_id %08x and nfp_offset %jd\n", __func__,
-			   cpp_id, nfp_offset);
+			cpp_id, nfp_offset);
 
 	/* Adjust length if not aligned */
 	if (((nfp_offset + (off_t)count - 1) & ~(NFP_CPP_MEMIO_BOUNDARY - 1)) !=
-	    (nfp_offset & ~(NFP_CPP_MEMIO_BOUNDARY - 1))) {
+			(nfp_offset & ~(NFP_CPP_MEMIO_BOUNDARY - 1))) {
 		curlen = NFP_CPP_MEMIO_BOUNDARY -
-			(nfp_offset & (NFP_CPP_MEMIO_BOUNDARY - 1));
+				(nfp_offset & (NFP_CPP_MEMIO_BOUNDARY - 1));
 	}
 
 	while (count > 0) {
 		area = nfp_cpp_area_alloc_with_name(cpp, cpp_id, "nfp.cdev",
-						    nfp_offset, curlen);
+				nfp_offset, curlen);
 		if (area == NULL) {
 			PMD_CPP_LOG(ERR, "area alloc failed");
 			return -EIO;
@@ -285,13 +286,12 @@ nfp_cpp_bridge_serve_read(int sockfd, struct nfp_cpp *cpp)
 				return -EIO;
 			}
 			PMD_CPP_LOG(DEBUG, "%s: sending %u of %zu\n", __func__,
-					   len, count);
+					len, count);
 
 			err = send(sockfd, tmpbuf, len, 0);
 			if (err != (int)len) {
-				PMD_CPP_LOG(ERR,
-					"error when sending: %d of %zu",
-					err, count);
+				PMD_CPP_LOG(ERR, "error when sending: %d of %zu",
+						err, count);
 				nfp_cpp_area_release(area);
 				nfp_cpp_area_free(area);
 				return -EIO;
@@ -304,7 +304,7 @@ nfp_cpp_bridge_serve_read(int sockfd, struct nfp_cpp *cpp)
 
 		count -= pos;
 		curlen = (count > NFP_CPP_MEMIO_BOUNDARY) ?
-			NFP_CPP_MEMIO_BOUNDARY : count;
+				NFP_CPP_MEMIO_BOUNDARY : count;
 	}
 	return 0;
 }
@@ -316,7 +316,8 @@ nfp_cpp_bridge_serve_read(int sockfd, struct nfp_cpp *cpp)
  * does not require any CPP access at all.
  */
 static int
-nfp_cpp_bridge_serve_ioctl(int sockfd, struct nfp_cpp *cpp)
+nfp_cpp_bridge_serve_ioctl(int sockfd,
+		struct nfp_cpp *cpp)
 {
 	uint32_t cmd, ident_size, tmp;
 	int err;
@@ -395,7 +396,7 @@ nfp_cpp_bridge_service_func(void *args)
 	strcpy(address.sa_data, "/tmp/nfp_cpp");
 
 	ret = bind(sockfd, (const struct sockaddr *)&address,
-		   sizeof(struct sockaddr));
+			sizeof(struct sockaddr));
 	if (ret < 0) {
 		PMD_CPP_LOG(ERR, "bind error (%d). Service failed", errno);
 		close(sockfd);
@@ -426,8 +427,7 @@ nfp_cpp_bridge_service_func(void *args)
 		while (1) {
 			ret = recv(datafd, &op, 4, 0);
 			if (ret <= 0) {
-				PMD_CPP_LOG(DEBUG, "%s: socket close\n",
-						   __func__);
+				PMD_CPP_LOG(DEBUG, "%s: socket close\n", __func__);
 				break;
 			}
 
diff --git a/drivers/net/nfp/nfp_ethdev.c b/drivers/net/nfp/nfp_ethdev.c
index 12feec8eb4..65473d87e8 100644
--- a/drivers/net/nfp/nfp_ethdev.c
+++ b/drivers/net/nfp/nfp_ethdev.c
@@ -22,7 +22,8 @@
 #include "nfp_logs.h"
 
 static int
-nfp_net_pf_read_mac(struct nfp_app_fw_nic *app_fw_nic, int port)
+nfp_net_pf_read_mac(struct nfp_app_fw_nic *app_fw_nic,
+		int port)
 {
 	struct nfp_eth_table *nfp_eth_table;
 	struct nfp_net_hw *hw = NULL;
@@ -70,21 +71,20 @@ nfp_net_start(struct rte_eth_dev *dev)
 	if (dev->data->dev_conf.intr_conf.rxq != 0) {
 		if (app_fw_nic->multiport) {
 			PMD_INIT_LOG(ERR, "PMD rx interrupt is not supported "
-					  "with NFP multiport PF");
+					"with NFP multiport PF");
 				return -EINVAL;
 		}
-		if (rte_intr_type_get(intr_handle) ==
-						RTE_INTR_HANDLE_UIO) {
+		if (rte_intr_type_get(intr_handle) == RTE_INTR_HANDLE_UIO) {
 			/*
 			 * Better not to share LSC with RX interrupts.
 			 * Unregistering LSC interrupt handler
 			 */
 			rte_intr_callback_unregister(pci_dev->intr_handle,
-				nfp_net_dev_interrupt_handler, (void *)dev);
+					nfp_net_dev_interrupt_handler, (void *)dev);
 
 			if (dev->data->nb_rx_queues > 1) {
 				PMD_INIT_LOG(ERR, "PMD rx interrupt only "
-					     "supports 1 queue with UIO");
+						"supports 1 queue with UIO");
 				return -EIO;
 			}
 		}
@@ -162,8 +162,7 @@ nfp_net_start(struct rte_eth_dev *dev)
 		/* Configure the physical port up */
 		nfp_eth_set_configured(hw->cpp, hw->nfp_idx, 1);
 	else
-		nfp_eth_set_configured(dev->process_private,
-				       hw->nfp_idx, 1);
+		nfp_eth_set_configured(dev->process_private, hw->nfp_idx, 1);
 
 	hw->ctrl = new_ctrl;
 
@@ -209,8 +208,7 @@ nfp_net_stop(struct rte_eth_dev *dev)
 		/* Configure the physical port down */
 		nfp_eth_set_configured(hw->cpp, hw->nfp_idx, 0);
 	else
-		nfp_eth_set_configured(dev->process_private,
-				       hw->nfp_idx, 0);
+		nfp_eth_set_configured(dev->process_private, hw->nfp_idx, 0);
 
 	return 0;
 }
@@ -229,8 +227,7 @@ nfp_net_set_link_up(struct rte_eth_dev *dev)
 		/* Configure the physical port down */
 		return nfp_eth_set_configured(hw->cpp, hw->nfp_idx, 1);
 	else
-		return nfp_eth_set_configured(dev->process_private,
-					      hw->nfp_idx, 1);
+		return nfp_eth_set_configured(dev->process_private, hw->nfp_idx, 1);
 }
 
 /* Set the link down. */
@@ -247,8 +244,7 @@ nfp_net_set_link_down(struct rte_eth_dev *dev)
 		/* Configure the physical port down */
 		return nfp_eth_set_configured(hw->cpp, hw->nfp_idx, 0);
 	else
-		return nfp_eth_set_configured(dev->process_private,
-					      hw->nfp_idx, 0);
+		return nfp_eth_set_configured(dev->process_private, hw->nfp_idx, 0);
 }
 
 /* Reset and stop device. The device can not be restarted. */
@@ -287,8 +283,7 @@ nfp_net_close(struct rte_eth_dev *dev)
 	nfp_ipsec_uninit(dev);
 
 	/* Cancel possible impending LSC work here before releasing the port*/
-	rte_eal_alarm_cancel(nfp_net_dev_interrupt_delayed_handler,
-			     (void *)dev);
+	rte_eal_alarm_cancel(nfp_net_dev_interrupt_delayed_handler, (void *)dev);
 
 	/* Only free PF resources after all physical ports have been closed */
 	/* Mark this port as unused and free device priv resources*/
@@ -525,8 +520,7 @@ nfp_net_init(struct rte_eth_dev *eth_dev)
 
 	hw->ctrl_bar = pci_dev->mem_resource[0].addr;
 	if (hw->ctrl_bar == NULL) {
-		PMD_DRV_LOG(ERR,
-			"hw->ctrl_bar is NULL. BAR0 not configured");
+		PMD_DRV_LOG(ERR, "hw->ctrl_bar is NULL. BAR0 not configured");
 		return -ENODEV;
 	}
 
@@ -592,7 +586,7 @@ nfp_net_init(struct rte_eth_dev *eth_dev)
 	eth_dev->data->dev_private = hw;
 
 	PMD_INIT_LOG(DEBUG, "ctrl_bar: %p, tx_bar: %p, rx_bar: %p",
-		     hw->ctrl_bar, hw->tx_bar, hw->rx_bar);
+			hw->ctrl_bar, hw->tx_bar, hw->rx_bar);
 
 	nfp_net_cfg_queue_setup(hw);
 	hw->mtu = RTE_ETHER_MTU;
@@ -607,8 +601,7 @@ nfp_net_init(struct rte_eth_dev *eth_dev)
 	rte_spinlock_init(&hw->reconfig_lock);
 
 	/* Allocating memory for mac addr */
-	eth_dev->data->mac_addrs = rte_zmalloc("mac_addr",
-					       RTE_ETHER_ADDR_LEN, 0);
+	eth_dev->data->mac_addrs = rte_zmalloc("mac_addr", RTE_ETHER_ADDR_LEN, 0);
 	if (eth_dev->data->mac_addrs == NULL) {
 		PMD_INIT_LOG(ERR, "Failed to space for MAC address");
 		return -ENOMEM;
@@ -634,10 +627,10 @@ nfp_net_init(struct rte_eth_dev *eth_dev)
 	eth_dev->data->dev_flags |= RTE_ETH_DEV_AUTOFILL_QUEUE_XSTATS;
 
 	PMD_INIT_LOG(INFO, "port %d VendorID=0x%x DeviceID=0x%x "
-		     "mac=" RTE_ETHER_ADDR_PRT_FMT,
-		     eth_dev->data->port_id, pci_dev->id.vendor_id,
-		     pci_dev->id.device_id,
-		     RTE_ETHER_ADDR_BYTES(&hw->mac_addr));
+			"mac=" RTE_ETHER_ADDR_PRT_FMT,
+			eth_dev->data->port_id, pci_dev->id.vendor_id,
+			pci_dev->id.device_id,
+			RTE_ETHER_ADDR_BYTES(&hw->mac_addr));
 
 	/* Registering LSC interrupt handler */
 	rte_intr_callback_register(pci_dev->intr_handle,
@@ -653,7 +646,9 @@ nfp_net_init(struct rte_eth_dev *eth_dev)
 #define DEFAULT_FW_PATH       "/lib/firmware/netronome"
 
 static int
-nfp_fw_upload(struct rte_pci_device *dev, struct nfp_nsp *nsp, char *card)
+nfp_fw_upload(struct rte_pci_device *dev,
+		struct nfp_nsp *nsp,
+		char *card)
 {
 	struct nfp_cpp *cpp = nfp_nsp_cpp(nsp);
 	void *fw_buf;
@@ -675,11 +670,10 @@ nfp_fw_upload(struct rte_pci_device *dev, struct nfp_nsp *nsp, char *card)
 	/* First try to find a firmware image specific for this device */
 	snprintf(serial, sizeof(serial),
 			"serial-%02x-%02x-%02x-%02x-%02x-%02x-%02x-%02x",
-		cpp_serial[0], cpp_serial[1], cpp_serial[2], cpp_serial[3],
-		cpp_serial[4], cpp_serial[5], interface >> 8, interface & 0xff);
+			cpp_serial[0], cpp_serial[1], cpp_serial[2], cpp_serial[3],
+			cpp_serial[4], cpp_serial[5], interface >> 8, interface & 0xff);
 
-	snprintf(fw_name, sizeof(fw_name), "%s/%s.nffw", DEFAULT_FW_PATH,
-			serial);
+	snprintf(fw_name, sizeof(fw_name), "%s/%s.nffw", DEFAULT_FW_PATH, serial);
 
 	PMD_DRV_LOG(DEBUG, "Trying with fw file: %s", fw_name);
 	if (rte_firmware_read(fw_name, &fw_buf, &fsize) == 0)
@@ -703,7 +697,7 @@ nfp_fw_upload(struct rte_pci_device *dev, struct nfp_nsp *nsp, char *card)
 
 load_fw:
 	PMD_DRV_LOG(INFO, "Firmware file found at %s with size: %zu",
-		fw_name, fsize);
+			fw_name, fsize);
 	PMD_DRV_LOG(INFO, "Uploading the firmware ...");
 	nfp_nsp_load_fw(nsp, fw_buf, fsize);
 	PMD_DRV_LOG(INFO, "Done");
@@ -737,7 +731,7 @@ nfp_fw_setup(struct rte_pci_device *dev,
 
 	if (nfp_eth_table->count == 0 || nfp_eth_table->count > 8) {
 		PMD_DRV_LOG(ERR, "NFP ethernet table reports wrong ports: %u",
-			nfp_eth_table->count);
+				nfp_eth_table->count);
 		return -EIO;
 	}
 
@@ -829,7 +823,7 @@ nfp_init_app_fw_nic(struct nfp_pf_dev *pf_dev,
 	numa_node = rte_socket_id();
 	for (i = 0; i < app_fw_nic->total_phyports; i++) {
 		snprintf(port_name, sizeof(port_name), "%s_port%d",
-			 pf_dev->pci_dev->device.name, i);
+				pf_dev->pci_dev->device.name, i);
 
 		/* Allocate a eth_dev for this phyport */
 		eth_dev = rte_eth_dev_allocate(port_name);
@@ -839,8 +833,8 @@ nfp_init_app_fw_nic(struct nfp_pf_dev *pf_dev,
 		}
 
 		/* Allocate memory for this phyport */
-		eth_dev->data->dev_private =
-			rte_zmalloc_socket(port_name, sizeof(struct nfp_net_hw),
+		eth_dev->data->dev_private = rte_zmalloc_socket(port_name,
+				sizeof(struct nfp_net_hw),
 				RTE_CACHE_LINE_SIZE, numa_node);
 		if (eth_dev->data->dev_private == NULL) {
 			ret = -ENOMEM;
@@ -961,8 +955,7 @@ nfp_pf_init(struct rte_pci_device *pci_dev)
 	/* Now the symbol table should be there */
 	sym_tbl = nfp_rtsym_table_read(cpp);
 	if (sym_tbl == NULL) {
-		PMD_INIT_LOG(ERR, "Something is wrong with the firmware"
-				" symbol table");
+		PMD_INIT_LOG(ERR, "Something is wrong with the firmware symbol table");
 		ret = -EIO;
 		goto eth_table_cleanup;
 	}
@@ -1144,8 +1137,7 @@ nfp_pf_secondary_init(struct rte_pci_device *pci_dev)
 	 */
 	sym_tbl = nfp_rtsym_table_read(cpp);
 	if (sym_tbl == NULL) {
-		PMD_INIT_LOG(ERR, "Something is wrong with the firmware"
-				" symbol table");
+		PMD_INIT_LOG(ERR, "Something is wrong with the firmware symbol table");
 		return -EIO;
 	}
 
@@ -1198,27 +1190,27 @@ nfp_pf_pci_probe(struct rte_pci_driver *pci_drv __rte_unused,
 static const struct rte_pci_id pci_id_nfp_pf_net_map[] = {
 	{
 		RTE_PCI_DEVICE(PCI_VENDOR_ID_NETRONOME,
-			       PCI_DEVICE_ID_NFP3800_PF_NIC)
+				PCI_DEVICE_ID_NFP3800_PF_NIC)
 	},
 	{
 		RTE_PCI_DEVICE(PCI_VENDOR_ID_NETRONOME,
-			       PCI_DEVICE_ID_NFP4000_PF_NIC)
+				PCI_DEVICE_ID_NFP4000_PF_NIC)
 	},
 	{
 		RTE_PCI_DEVICE(PCI_VENDOR_ID_NETRONOME,
-			       PCI_DEVICE_ID_NFP6000_PF_NIC)
+				PCI_DEVICE_ID_NFP6000_PF_NIC)
 	},
 	{
 		RTE_PCI_DEVICE(PCI_VENDOR_ID_CORIGINE,
-			       PCI_DEVICE_ID_NFP3800_PF_NIC)
+				PCI_DEVICE_ID_NFP3800_PF_NIC)
 	},
 	{
 		RTE_PCI_DEVICE(PCI_VENDOR_ID_CORIGINE,
-			       PCI_DEVICE_ID_NFP4000_PF_NIC)
+				PCI_DEVICE_ID_NFP4000_PF_NIC)
 	},
 	{
 		RTE_PCI_DEVICE(PCI_VENDOR_ID_CORIGINE,
-			       PCI_DEVICE_ID_NFP6000_PF_NIC)
+				PCI_DEVICE_ID_NFP6000_PF_NIC)
 	},
 	{
 		.vendor_id = 0,
diff --git a/drivers/net/nfp/nfp_ethdev_vf.c b/drivers/net/nfp/nfp_ethdev_vf.c
index c8d6b0461b..ac6a10685d 100644
--- a/drivers/net/nfp/nfp_ethdev_vf.c
+++ b/drivers/net/nfp/nfp_ethdev_vf.c
@@ -50,18 +50,17 @@ nfp_netvf_start(struct rte_eth_dev *dev)
 
 	/* check and configure queue intr-vector mapping */
 	if (dev->data->dev_conf.intr_conf.rxq != 0) {
-		if (rte_intr_type_get(intr_handle) ==
-						RTE_INTR_HANDLE_UIO) {
+		if (rte_intr_type_get(intr_handle) == RTE_INTR_HANDLE_UIO) {
 			/*
 			 * Better not to share LSC with RX interrupts.
 			 * Unregistering LSC interrupt handler
 			 */
 			rte_intr_callback_unregister(pci_dev->intr_handle,
-				nfp_net_dev_interrupt_handler, (void *)dev);
+					nfp_net_dev_interrupt_handler, (void *)dev);
 
 			if (dev->data->nb_rx_queues > 1) {
 				PMD_INIT_LOG(ERR, "PMD rx interrupt only "
-					     "supports 1 queue with UIO");
+						"supports 1 queue with UIO");
 				return -EIO;
 			}
 		}
@@ -190,12 +189,10 @@ nfp_netvf_close(struct rte_eth_dev *dev)
 
 	/* unregister callback func from eal lib */
 	rte_intr_callback_unregister(pci_dev->intr_handle,
-				     nfp_net_dev_interrupt_handler,
-				     (void *)dev);
+			nfp_net_dev_interrupt_handler, (void *)dev);
 
 	/* Cancel possible impending LSC work here before releasing the port*/
-	rte_eal_alarm_cancel(nfp_net_dev_interrupt_delayed_handler,
-			     (void *)dev);
+	rte_eal_alarm_cancel(nfp_net_dev_interrupt_delayed_handler, (void *)dev);
 
 	/*
 	 * The ixgbe PMD disables the pcie master on the
@@ -282,8 +279,7 @@ nfp_netvf_init(struct rte_eth_dev *eth_dev)
 
 	hw->ctrl_bar = pci_dev->mem_resource[0].addr;
 	if (hw->ctrl_bar == NULL) {
-		PMD_DRV_LOG(ERR,
-			"hw->ctrl_bar is NULL. BAR0 not configured");
+		PMD_DRV_LOG(ERR, "hw->ctrl_bar is NULL. BAR0 not configured");
 		return -ENODEV;
 	}
 
@@ -301,8 +297,8 @@ nfp_netvf_init(struct rte_eth_dev *eth_dev)
 
 	rte_eth_copy_pci_info(eth_dev, pci_dev);
 
-	hw->eth_xstats_base = rte_malloc("rte_eth_xstat", sizeof(struct rte_eth_xstat) *
-			nfp_net_xstats_size(eth_dev), 0);
+	hw->eth_xstats_base = rte_malloc("rte_eth_xstat",
+			sizeof(struct rte_eth_xstat) * nfp_net_xstats_size(eth_dev), 0);
 	if (hw->eth_xstats_base == NULL) {
 		PMD_INIT_LOG(ERR, "no memory for xstats base values on device %s!",
 				pci_dev->device.name);
@@ -318,13 +314,11 @@ nfp_netvf_init(struct rte_eth_dev *eth_dev)
 	PMD_INIT_LOG(DEBUG, "tx_bar_off: 0x%" PRIx64 "", tx_bar_off);
 	PMD_INIT_LOG(DEBUG, "rx_bar_off: 0x%" PRIx64 "", rx_bar_off);
 
-	hw->tx_bar = (uint8_t *)pci_dev->mem_resource[2].addr +
-		     tx_bar_off;
-	hw->rx_bar = (uint8_t *)pci_dev->mem_resource[2].addr +
-		     rx_bar_off;
+	hw->tx_bar = (uint8_t *)pci_dev->mem_resource[2].addr + tx_bar_off;
+	hw->rx_bar = (uint8_t *)pci_dev->mem_resource[2].addr + rx_bar_off;
 
 	PMD_INIT_LOG(DEBUG, "ctrl_bar: %p, tx_bar: %p, rx_bar: %p",
-		     hw->ctrl_bar, hw->tx_bar, hw->rx_bar);
+			hw->ctrl_bar, hw->tx_bar, hw->rx_bar);
 
 	nfp_net_cfg_queue_setup(hw);
 	hw->mtu = RTE_ETHER_MTU;
@@ -339,8 +333,7 @@ nfp_netvf_init(struct rte_eth_dev *eth_dev)
 	rte_spinlock_init(&hw->reconfig_lock);
 
 	/* Allocating memory for mac addr */
-	eth_dev->data->mac_addrs = rte_zmalloc("mac_addr",
-					       RTE_ETHER_ADDR_LEN, 0);
+	eth_dev->data->mac_addrs = rte_zmalloc("mac_addr", RTE_ETHER_ADDR_LEN, 0);
 	if (eth_dev->data->mac_addrs == NULL) {
 		PMD_INIT_LOG(ERR, "Failed to space for MAC address");
 		err = -ENOMEM;
@@ -351,8 +344,7 @@ nfp_netvf_init(struct rte_eth_dev *eth_dev)
 
 	tmp_ether_addr = &hw->mac_addr;
 	if (rte_is_valid_assigned_ether_addr(tmp_ether_addr) == 0) {
-		PMD_INIT_LOG(INFO, "Using random mac address for port %d",
-				   port);
+		PMD_INIT_LOG(INFO, "Using random mac address for port %d", port);
 		/* Using random mac addresses for VFs */
 		rte_eth_random_addr(&hw->mac_addr.addr_bytes[0]);
 		nfp_net_write_mac(hw, &hw->mac_addr.addr_bytes[0]);
@@ -367,16 +359,15 @@ nfp_netvf_init(struct rte_eth_dev *eth_dev)
 	eth_dev->data->dev_flags |= RTE_ETH_DEV_AUTOFILL_QUEUE_XSTATS;
 
 	PMD_INIT_LOG(INFO, "port %d VendorID=0x%x DeviceID=0x%x "
-		     "mac=" RTE_ETHER_ADDR_PRT_FMT,
-		     eth_dev->data->port_id, pci_dev->id.vendor_id,
-		     pci_dev->id.device_id,
-		     RTE_ETHER_ADDR_BYTES(&hw->mac_addr));
+			"mac=" RTE_ETHER_ADDR_PRT_FMT,
+			eth_dev->data->port_id, pci_dev->id.vendor_id,
+			pci_dev->id.device_id,
+			RTE_ETHER_ADDR_BYTES(&hw->mac_addr));
 
 	if (rte_eal_process_type() == RTE_PROC_PRIMARY) {
 		/* Registering LSC interrupt handler */
 		rte_intr_callback_register(pci_dev->intr_handle,
-					   nfp_net_dev_interrupt_handler,
-					   (void *)eth_dev);
+				nfp_net_dev_interrupt_handler, (void *)eth_dev);
 		/* Telling the firmware about the LSC interrupt entry */
 		nn_cfg_writeb(hw, NFP_NET_CFG_LSC, NFP_NET_IRQ_LSC_IDX);
 		/* Recording current stats counters values */
@@ -394,39 +385,42 @@ nfp_netvf_init(struct rte_eth_dev *eth_dev)
 static const struct rte_pci_id pci_id_nfp_vf_net_map[] = {
 	{
 		RTE_PCI_DEVICE(PCI_VENDOR_ID_NETRONOME,
-			       PCI_DEVICE_ID_NFP3800_VF_NIC)
+				PCI_DEVICE_ID_NFP3800_VF_NIC)
 	},
 	{
 		RTE_PCI_DEVICE(PCI_VENDOR_ID_NETRONOME,
-			       PCI_DEVICE_ID_NFP6000_VF_NIC)
+				PCI_DEVICE_ID_NFP6000_VF_NIC)
 	},
 	{
 		RTE_PCI_DEVICE(PCI_VENDOR_ID_CORIGINE,
-			       PCI_DEVICE_ID_NFP3800_VF_NIC)
+				PCI_DEVICE_ID_NFP3800_VF_NIC)
 	},
 	{
 		RTE_PCI_DEVICE(PCI_VENDOR_ID_CORIGINE,
-			       PCI_DEVICE_ID_NFP6000_VF_NIC)
+				PCI_DEVICE_ID_NFP6000_VF_NIC)
 	},
 	{
 		.vendor_id = 0,
 	},
 };
 
-static int nfp_vf_pci_uninit(struct rte_eth_dev *eth_dev)
+static int
+nfp_vf_pci_uninit(struct rte_eth_dev *eth_dev)
 {
 	/* VF cleanup, just free private port data */
 	return nfp_netvf_close(eth_dev);
 }
 
-static int eth_nfp_vf_pci_probe(struct rte_pci_driver *pci_drv __rte_unused,
-	struct rte_pci_device *pci_dev)
+static int
+eth_nfp_vf_pci_probe(struct rte_pci_driver *pci_drv __rte_unused,
+		struct rte_pci_device *pci_dev)
 {
 	return rte_eth_dev_pci_generic_probe(pci_dev,
-		sizeof(struct nfp_net_adapter), nfp_netvf_init);
+			sizeof(struct nfp_net_adapter), nfp_netvf_init);
 }
 
-static int eth_nfp_vf_pci_remove(struct rte_pci_device *pci_dev)
+static int
+eth_nfp_vf_pci_remove(struct rte_pci_device *pci_dev)
 {
 	return rte_eth_dev_pci_generic_remove(pci_dev, nfp_vf_pci_uninit);
 }
diff --git a/drivers/net/nfp/nfp_flow.c b/drivers/net/nfp/nfp_flow.c
index 3ea6813d9a..6d9a1c249f 100644
--- a/drivers/net/nfp/nfp_flow.c
+++ b/drivers/net/nfp/nfp_flow.c
@@ -156,7 +156,8 @@ nfp_flow_dev_to_priv(struct rte_eth_dev *dev)
 }
 
 static int
-nfp_mask_id_alloc(struct nfp_flow_priv *priv, uint8_t *mask_id)
+nfp_mask_id_alloc(struct nfp_flow_priv *priv,
+		uint8_t *mask_id)
 {
 	uint8_t temp_id;
 	uint8_t freed_id;
@@ -188,7 +189,8 @@ nfp_mask_id_alloc(struct nfp_flow_priv *priv, uint8_t *mask_id)
 }
 
 static int
-nfp_mask_id_free(struct nfp_flow_priv *priv, uint8_t mask_id)
+nfp_mask_id_free(struct nfp_flow_priv *priv,
+		uint8_t mask_id)
 {
 	struct circ_buf *ring;
 
@@ -703,7 +705,8 @@ nfp_tun_check_ip_off_del(struct nfp_flower_representor *repr,
 }
 
 static void
-nfp_flower_compile_meta_tci(char *mbuf_off, struct nfp_fl_key_ls *key_layer)
+nfp_flower_compile_meta_tci(char *mbuf_off,
+		struct nfp_fl_key_ls *key_layer)
 {
 	struct nfp_flower_meta_tci *tci_meta;
 
@@ -714,7 +717,8 @@ nfp_flower_compile_meta_tci(char *mbuf_off, struct nfp_fl_key_ls *key_layer)
 }
 
 static void
-nfp_flower_update_meta_tci(char *exact, uint8_t mask_id)
+nfp_flower_update_meta_tci(char *exact,
+		uint8_t mask_id)
 {
 	struct nfp_flower_meta_tci *meta_tci;
 
@@ -723,7 +727,8 @@ nfp_flower_update_meta_tci(char *exact, uint8_t mask_id)
 }
 
 static void
-nfp_flower_compile_ext_meta(char *mbuf_off, struct nfp_fl_key_ls *key_layer)
+nfp_flower_compile_ext_meta(char *mbuf_off,
+		struct nfp_fl_key_ls *key_layer)
 {
 	struct nfp_flower_ext_meta *ext_meta;
 
@@ -1436,14 +1441,14 @@ nfp_flow_merge_tcp(__rte_unused struct nfp_app_fw_flower *app_fw_flower,
 	meta_tci = (struct nfp_flower_meta_tci *)nfp_flow->payload.unmasked_data;
 	if ((meta_tci->nfp_flow_key_layer & NFP_FLOWER_LAYER_IPV4) != 0) {
 		ipv4  = (struct nfp_flower_ipv4 *)
-			(*mbuf_off - sizeof(struct nfp_flower_ipv4));
+				(*mbuf_off - sizeof(struct nfp_flower_ipv4));
 		ports = (struct nfp_flower_tp_ports *)
-			((char *)ipv4 - sizeof(struct nfp_flower_tp_ports));
+				((char *)ipv4 - sizeof(struct nfp_flower_tp_ports));
 	} else { /* IPv6 */
 		ipv6  = (struct nfp_flower_ipv6 *)
-			(*mbuf_off - sizeof(struct nfp_flower_ipv6));
+				(*mbuf_off - sizeof(struct nfp_flower_ipv6));
 		ports = (struct nfp_flower_tp_ports *)
-			((char *)ipv6 - sizeof(struct nfp_flower_tp_ports));
+				((char *)ipv6 - sizeof(struct nfp_flower_tp_ports));
 	}
 
 	mask = item->mask ? item->mask : proc->mask_default;
@@ -1514,10 +1519,10 @@ nfp_flow_merge_udp(__rte_unused struct nfp_app_fw_flower *app_fw_flower,
 	meta_tci = (struct nfp_flower_meta_tci *)nfp_flow->payload.unmasked_data;
 	if ((meta_tci->nfp_flow_key_layer & NFP_FLOWER_LAYER_IPV4) != 0) {
 		ports_off = *mbuf_off - sizeof(struct nfp_flower_ipv4) -
-			sizeof(struct nfp_flower_tp_ports);
+				sizeof(struct nfp_flower_tp_ports);
 	} else {/* IPv6 */
 		ports_off = *mbuf_off - sizeof(struct nfp_flower_ipv6) -
-			sizeof(struct nfp_flower_tp_ports);
+				sizeof(struct nfp_flower_tp_ports);
 	}
 	ports = (struct nfp_flower_tp_ports *)ports_off;
 
@@ -1557,10 +1562,10 @@ nfp_flow_merge_sctp(__rte_unused struct nfp_app_fw_flower *app_fw_flower,
 	meta_tci = (struct nfp_flower_meta_tci *)nfp_flow->payload.unmasked_data;
 	if ((meta_tci->nfp_flow_key_layer & NFP_FLOWER_LAYER_IPV4) != 0) {
 		ports_off = *mbuf_off - sizeof(struct nfp_flower_ipv4) -
-			sizeof(struct nfp_flower_tp_ports);
+				sizeof(struct nfp_flower_tp_ports);
 	} else { /* IPv6 */
 		ports_off = *mbuf_off - sizeof(struct nfp_flower_ipv6) -
-			sizeof(struct nfp_flower_tp_ports);
+				sizeof(struct nfp_flower_tp_ports);
 	}
 	ports = (struct nfp_flower_tp_ports *)ports_off;
 
@@ -1951,9 +1956,8 @@ nfp_flow_item_check(const struct rte_flow_item *item,
 		return 0;
 	}
 
-	mask = item->mask ?
-		(const uint8_t *)item->mask :
-		(const uint8_t *)proc->mask_default;
+	mask = item->mask ? (const uint8_t *)item->mask :
+			(const uint8_t *)proc->mask_default;
 
 	/*
 	 * Single-pass check to make sure that:
diff --git a/drivers/net/nfp/nfp_rxtx.c b/drivers/net/nfp/nfp_rxtx.c
index 4528417559..7885166753 100644
--- a/drivers/net/nfp/nfp_rxtx.c
+++ b/drivers/net/nfp/nfp_rxtx.c
@@ -158,8 +158,9 @@ struct nfp_ptype_parsed {
 
 /* set mbuf checksum flags based on RX descriptor flags */
 void
-nfp_net_rx_cksum(struct nfp_net_rxq *rxq, struct nfp_net_rx_desc *rxd,
-		 struct rte_mbuf *mb)
+nfp_net_rx_cksum(struct nfp_net_rxq *rxq,
+		struct nfp_net_rx_desc *rxd,
+		struct rte_mbuf *mb)
 {
 	struct nfp_net_hw *hw = rxq->hw;
 
@@ -192,7 +193,7 @@ nfp_net_rx_fill_freelist(struct nfp_net_rxq *rxq)
 	unsigned int i;
 
 	PMD_RX_LOG(DEBUG, "Fill Rx Freelist for %u descriptors",
-		   rxq->rx_count);
+			rxq->rx_count);
 
 	for (i = 0; i < rxq->rx_count; i++) {
 		struct nfp_net_rx_desc *rxd;
@@ -218,8 +219,7 @@ nfp_net_rx_fill_freelist(struct nfp_net_rxq *rxq)
 	rte_wmb();
 
 	/* Not advertising the whole ring as the firmware gets confused if so */
-	PMD_RX_LOG(DEBUG, "Increment FL write pointer in %u",
-		   rxq->rx_count - 1);
+	PMD_RX_LOG(DEBUG, "Increment FL write pointer in %u", rxq->rx_count - 1);
 
 	nfp_qcp_ptr_add(rxq->qcp_fl, NFP_QCP_WRITE_PTR, rxq->rx_count - 1);
 
@@ -521,7 +521,8 @@ nfp_net_parse_meta(struct nfp_net_rx_desc *rxds,
  *   Mbuf to set the packet type.
  */
 static void
-nfp_net_set_ptype(const struct nfp_ptype_parsed *nfp_ptype, struct rte_mbuf *mb)
+nfp_net_set_ptype(const struct nfp_ptype_parsed *nfp_ptype,
+		struct rte_mbuf *mb)
 {
 	uint32_t mbuf_ptype = RTE_PTYPE_L2_ETHER;
 	uint8_t nfp_tunnel_ptype = nfp_ptype->tunnel_ptype;
@@ -678,7 +679,9 @@ nfp_net_parse_ptype(struct nfp_net_rx_desc *rxds,
  */
 
 uint16_t
-nfp_net_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
+nfp_net_recv_pkts(void *rx_queue,
+		struct rte_mbuf **rx_pkts,
+		uint16_t nb_pkts)
 {
 	struct nfp_net_rxq *rxq;
 	struct nfp_net_rx_desc *rxds;
@@ -728,8 +731,7 @@ nfp_net_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
 		 */
 		new_mb = rte_pktmbuf_alloc(rxq->mem_pool);
 		if (unlikely(new_mb == NULL)) {
-			PMD_RX_LOG(DEBUG,
-			"RX mbuf alloc failed port_id=%u queue_id=%hu",
+			PMD_RX_LOG(DEBUG, "RX mbuf alloc failed port_id=%u queue_id=%hu",
 					rxq->port_id, rxq->qidx);
 			nfp_net_mbuf_alloc_failed(rxq);
 			break;
@@ -743,29 +745,28 @@ nfp_net_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
 		rxb->mbuf = new_mb;
 
 		PMD_RX_LOG(DEBUG, "Packet len: %u, mbuf_size: %u",
-			   rxds->rxd.data_len, rxq->mbuf_size);
+				rxds->rxd.data_len, rxq->mbuf_size);
 
 		/* Size of this segment */
 		mb->data_len = rxds->rxd.data_len - NFP_DESC_META_LEN(rxds);
 		/* Size of the whole packet. We just support 1 segment */
 		mb->pkt_len = rxds->rxd.data_len - NFP_DESC_META_LEN(rxds);
 
-		if (unlikely((mb->data_len + hw->rx_offset) >
-			     rxq->mbuf_size)) {
+		if (unlikely((mb->data_len + hw->rx_offset) > rxq->mbuf_size)) {
 			/*
 			 * This should not happen and the user has the
 			 * responsibility of avoiding it. But we have
 			 * to give some info about the error
 			 */
 			PMD_RX_LOG(ERR,
-				"mbuf overflow likely due to the RX offset.\n"
-				"\t\tYour mbuf size should have extra space for"
-				" RX offset=%u bytes.\n"
-				"\t\tCurrently you just have %u bytes available"
-				" but the received packet is %u bytes long",
-				hw->rx_offset,
-				rxq->mbuf_size - hw->rx_offset,
-				mb->data_len);
+					"mbuf overflow likely due to the RX offset.\n"
+					"\t\tYour mbuf size should have extra space for"
+					" RX offset=%u bytes.\n"
+					"\t\tCurrently you just have %u bytes available"
+					" but the received packet is %u bytes long",
+					hw->rx_offset,
+					rxq->mbuf_size - hw->rx_offset,
+					mb->data_len);
 			rte_pktmbuf_free(mb);
 			break;
 		}
@@ -774,8 +775,7 @@ nfp_net_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
 		if (hw->rx_offset != 0)
 			mb->data_off = RTE_PKTMBUF_HEADROOM + hw->rx_offset;
 		else
-			mb->data_off = RTE_PKTMBUF_HEADROOM +
-				       NFP_DESC_META_LEN(rxds);
+			mb->data_off = RTE_PKTMBUF_HEADROOM + NFP_DESC_META_LEN(rxds);
 
 		/* No scatter mode supported */
 		mb->nb_segs = 1;
@@ -817,7 +817,7 @@ nfp_net_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
 		return nb_hold;
 
 	PMD_RX_LOG(DEBUG, "RX  port_id=%hu queue_id=%hu, %hu packets received",
-		   rxq->port_id, rxq->qidx, avail);
+			rxq->port_id, rxq->qidx, avail);
 
 	nb_hold += rxq->nb_rx_hold;
 
@@ -828,7 +828,7 @@ nfp_net_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
 	rte_wmb();
 	if (nb_hold > rxq->rx_free_thresh) {
 		PMD_RX_LOG(DEBUG, "port=%hu queue=%hu nb_hold=%hu avail=%hu",
-			   rxq->port_id, rxq->qidx, nb_hold, avail);
+				rxq->port_id, rxq->qidx, nb_hold, avail);
 		nfp_qcp_ptr_add(rxq->qcp_fl, NFP_QCP_WRITE_PTR, nb_hold);
 		nb_hold = 0;
 	}
@@ -854,7 +854,8 @@ nfp_net_rx_queue_release_mbufs(struct nfp_net_rxq *rxq)
 }
 
 void
-nfp_net_rx_queue_release(struct rte_eth_dev *dev, uint16_t queue_idx)
+nfp_net_rx_queue_release(struct rte_eth_dev *dev,
+		uint16_t queue_idx)
 {
 	struct nfp_net_rxq *rxq = dev->data->rx_queues[queue_idx];
 
@@ -876,10 +877,11 @@ nfp_net_reset_rx_queue(struct nfp_net_rxq *rxq)
 
 int
 nfp_net_rx_queue_setup(struct rte_eth_dev *dev,
-		       uint16_t queue_idx, uint16_t nb_desc,
-		       unsigned int socket_id,
-		       const struct rte_eth_rxconf *rx_conf,
-		       struct rte_mempool *mp)
+		uint16_t queue_idx,
+		uint16_t nb_desc,
+		unsigned int socket_id,
+		const struct rte_eth_rxconf *rx_conf,
+		struct rte_mempool *mp)
 {
 	uint16_t min_rx_desc;
 	uint16_t max_rx_desc;
@@ -897,7 +899,7 @@ nfp_net_rx_queue_setup(struct rte_eth_dev *dev,
 	/* Validating number of descriptors */
 	rx_desc_sz = nb_desc * sizeof(struct nfp_net_rx_desc);
 	if (rx_desc_sz % NFP_ALIGN_RING_DESC != 0 ||
-	    nb_desc > max_rx_desc || nb_desc < min_rx_desc) {
+			nb_desc > max_rx_desc || nb_desc < min_rx_desc) {
 		PMD_DRV_LOG(ERR, "Wrong nb_desc value");
 		return -EINVAL;
 	}
@@ -913,7 +915,7 @@ nfp_net_rx_queue_setup(struct rte_eth_dev *dev,
 
 	/* Allocating rx queue data structure */
 	rxq = rte_zmalloc_socket("ethdev RX queue", sizeof(struct nfp_net_rxq),
-				 RTE_CACHE_LINE_SIZE, socket_id);
+			RTE_CACHE_LINE_SIZE, socket_id);
 	if (rxq == NULL)
 		return -ENOMEM;
 
@@ -943,9 +945,8 @@ nfp_net_rx_queue_setup(struct rte_eth_dev *dev,
 	 * resizing in later calls to the queue setup function.
 	 */
 	tz = rte_eth_dma_zone_reserve(dev, "rx_ring", queue_idx,
-				   sizeof(struct nfp_net_rx_desc) *
-				   max_rx_desc, NFP_MEMZONE_ALIGN,
-				   socket_id);
+			sizeof(struct nfp_net_rx_desc) * max_rx_desc,
+			NFP_MEMZONE_ALIGN, socket_id);
 
 	if (tz == NULL) {
 		PMD_DRV_LOG(ERR, "Error allocating rx dma");
@@ -960,8 +961,8 @@ nfp_net_rx_queue_setup(struct rte_eth_dev *dev,
 
 	/* mbuf pointers array for referencing mbufs linked to RX descriptors */
 	rxq->rxbufs = rte_zmalloc_socket("rxq->rxbufs",
-					 sizeof(*rxq->rxbufs) * nb_desc,
-					 RTE_CACHE_LINE_SIZE, socket_id);
+			sizeof(*rxq->rxbufs) * nb_desc, RTE_CACHE_LINE_SIZE,
+			socket_id);
 	if (rxq->rxbufs == NULL) {
 		nfp_net_rx_queue_release(dev, queue_idx);
 		dev->data->rx_queues[queue_idx] = NULL;
@@ -969,7 +970,7 @@ nfp_net_rx_queue_setup(struct rte_eth_dev *dev,
 	}
 
 	PMD_RX_LOG(DEBUG, "rxbufs=%p hw_ring=%p dma_addr=0x%" PRIx64,
-		   rxq->rxbufs, rxq->rxds, (unsigned long)rxq->dma);
+			rxq->rxbufs, rxq->rxds, (unsigned long)rxq->dma);
 
 	nfp_net_reset_rx_queue(rxq);
 
@@ -998,15 +999,15 @@ nfp_net_tx_free_bufs(struct nfp_net_txq *txq)
 	int todo;
 
 	PMD_TX_LOG(DEBUG, "queue %hu. Check for descriptor with a complete"
-		   " status", txq->qidx);
+			" status", txq->qidx);
 
 	/* Work out how many packets have been sent */
 	qcp_rd_p = nfp_qcp_read(txq->qcp_q, NFP_QCP_READ_PTR);
 
 	if (qcp_rd_p == txq->rd_p) {
 		PMD_TX_LOG(DEBUG, "queue %hu: It seems harrier is not sending "
-			   "packets (%u, %u)", txq->qidx,
-			   qcp_rd_p, txq->rd_p);
+				"packets (%u, %u)", txq->qidx,
+				qcp_rd_p, txq->rd_p);
 		return 0;
 	}
 
@@ -1016,7 +1017,7 @@ nfp_net_tx_free_bufs(struct nfp_net_txq *txq)
 		todo = qcp_rd_p + txq->tx_count - txq->rd_p;
 
 	PMD_TX_LOG(DEBUG, "qcp_rd_p %u, txq->rd_p: %u, qcp->rd_p: %u",
-		   qcp_rd_p, txq->rd_p, txq->rd_p);
+			qcp_rd_p, txq->rd_p, txq->rd_p);
 
 	if (todo == 0)
 		return todo;
@@ -1045,7 +1046,8 @@ nfp_net_tx_queue_release_mbufs(struct nfp_net_txq *txq)
 }
 
 void
-nfp_net_tx_queue_release(struct rte_eth_dev *dev, uint16_t queue_idx)
+nfp_net_tx_queue_release(struct rte_eth_dev *dev,
+		uint16_t queue_idx)
 {
 	struct nfp_net_txq *txq = dev->data->tx_queues[queue_idx];
 
diff --git a/drivers/net/nfp/nfp_rxtx.h b/drivers/net/nfp/nfp_rxtx.h
index 3c7138f7d6..9a30ebd89e 100644
--- a/drivers/net/nfp/nfp_rxtx.h
+++ b/drivers/net/nfp/nfp_rxtx.h
@@ -234,17 +234,17 @@ nfp_net_mbuf_alloc_failed(struct nfp_net_rxq *rxq)
 }
 
 void nfp_net_rx_cksum(struct nfp_net_rxq *rxq, struct nfp_net_rx_desc *rxd,
-		 struct rte_mbuf *mb);
+		struct rte_mbuf *mb);
 int nfp_net_rx_freelist_setup(struct rte_eth_dev *dev);
 uint32_t nfp_net_rx_queue_count(void *rx_queue);
 uint16_t nfp_net_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
-				  uint16_t nb_pkts);
+		uint16_t nb_pkts);
 void nfp_net_rx_queue_release(struct rte_eth_dev *dev, uint16_t queue_idx);
 void nfp_net_reset_rx_queue(struct nfp_net_rxq *rxq);
 int nfp_net_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
-				  uint16_t nb_desc, unsigned int socket_id,
-				  const struct rte_eth_rxconf *rx_conf,
-				  struct rte_mempool *mp);
+		uint16_t nb_desc, unsigned int socket_id,
+		const struct rte_eth_rxconf *rx_conf,
+		struct rte_mempool *mp);
 void nfp_net_tx_queue_release(struct rte_eth_dev *dev, uint16_t queue_idx);
 void nfp_net_reset_tx_queue(struct nfp_net_txq *txq);
 
-- 
2.39.1


^ permalink raw reply	[flat|nested] 40+ messages in thread

* [PATCH v3 03/11] net/nfp: unify the type of integer variable
  2023-10-13  6:06   ` [PATCH v3 00/11] Unify the PMD coding style Chaoyong He
  2023-10-13  6:06     ` [PATCH v3 01/11] net/nfp: explicitly compare to null and 0 Chaoyong He
  2023-10-13  6:06     ` [PATCH v3 02/11] net/nfp: unify the indent coding style Chaoyong He
@ 2023-10-13  6:06     ` Chaoyong He
  2023-10-13  6:06     ` [PATCH v3 04/11] net/nfp: standard the local variable coding style Chaoyong He
                       ` (8 subsequent siblings)
  11 siblings, 0 replies; 40+ messages in thread
From: Chaoyong He @ 2023-10-13  6:06 UTC (permalink / raw)
  To: dev; +Cc: oss-drivers, Chaoyong He, Long Wu, Peng Zhang

Unify the types of the integer variables to the DPDK preferred style.
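
A minimal sketch of the convention (hypothetical helper and variable
names, not taken from the patch): loop counters and byte counts use
fixed-width unsigned or size types instead of plain 'int':

#include <stdint.h>
#include <stddef.h>

/* Sketch only: counters sized to their actual range, not 'int'. */
static uint32_t
count_nonzero(const uint8_t *buf,
		size_t len)
{
	size_t i;          /* was: int i */
	uint32_t cnt = 0;  /* was: int cnt */

	for (i = 0; i < len; i++) {
		if (buf[i] != 0)
			cnt++;
	}

	return cnt;
}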

Signed-off-by: Chaoyong He <chaoyong.he@corigine.com>
Reviewed-by: Long Wu <long.wu@corigine.com>
Reviewed-by: Peng Zhang <peng.zhang@corigine.com>
---
 drivers/net/nfp/flower/nfp_flower.c      |  2 +-
 drivers/net/nfp/flower/nfp_flower_cmsg.c | 16 +++++-----
 drivers/net/nfp/nfd3/nfp_nfd3_dp.c       |  6 ++--
 drivers/net/nfp/nfp_common.c             | 37 +++++++++++++-----------
 drivers/net/nfp/nfp_common.h             | 16 +++++-----
 drivers/net/nfp/nfp_ethdev.c             | 24 +++++++--------
 drivers/net/nfp/nfp_ethdev_vf.c          |  2 +-
 drivers/net/nfp/nfp_flow.c               |  8 ++---
 drivers/net/nfp/nfp_rxtx.c               | 12 ++++----
 drivers/net/nfp/nfp_rxtx.h               |  2 +-
 10 files changed, 64 insertions(+), 61 deletions(-)

diff --git a/drivers/net/nfp/flower/nfp_flower.c b/drivers/net/nfp/flower/nfp_flower.c
index 3352693d71..7dd1423aaf 100644
--- a/drivers/net/nfp/flower/nfp_flower.c
+++ b/drivers/net/nfp/flower/nfp_flower.c
@@ -26,7 +26,7 @@ nfp_pf_repr_enable_queues(struct rte_eth_dev *dev)
 {
 	struct nfp_net_hw *hw;
 	uint64_t enabled_queues = 0;
-	int i;
+	uint16_t i;
 	struct nfp_flower_representor *repr;
 
 	repr = dev->data->dev_private;
diff --git a/drivers/net/nfp/flower/nfp_flower_cmsg.c b/drivers/net/nfp/flower/nfp_flower_cmsg.c
index 6b9532f5b6..5d6912b079 100644
--- a/drivers/net/nfp/flower/nfp_flower_cmsg.c
+++ b/drivers/net/nfp/flower/nfp_flower_cmsg.c
@@ -64,10 +64,10 @@ nfp_flower_cmsg_mac_repr_init(struct rte_mbuf *mbuf,
 
 static void
 nfp_flower_cmsg_mac_repr_fill(struct rte_mbuf *m,
-		unsigned int idx,
-		unsigned int nbi,
-		unsigned int nbi_port,
-		unsigned int phys_port)
+		uint8_t idx,
+		uint32_t nbi,
+		uint32_t nbi_port,
+		uint32_t phys_port)
 {
 	struct nfp_flower_cmsg_mac_repr *msg;
 
@@ -81,11 +81,11 @@ nfp_flower_cmsg_mac_repr_fill(struct rte_mbuf *m,
 int
 nfp_flower_cmsg_mac_repr(struct nfp_app_fw_flower *app_fw_flower)
 {
-	int i;
+	uint8_t i;
 	uint16_t cnt;
-	unsigned int nbi;
-	unsigned int nbi_port;
-	unsigned int phys_port;
+	uint32_t nbi;
+	uint32_t nbi_port;
+	uint32_t phys_port;
 	struct rte_mbuf *mbuf;
 	struct nfp_eth_table *nfp_eth_table;
 
diff --git a/drivers/net/nfp/nfd3/nfp_nfd3_dp.c b/drivers/net/nfp/nfd3/nfp_nfd3_dp.c
index 64928254d8..5a84629ed7 100644
--- a/drivers/net/nfp/nfd3/nfp_nfd3_dp.c
+++ b/drivers/net/nfp/nfd3/nfp_nfd3_dp.c
@@ -227,9 +227,9 @@ nfp_net_nfd3_xmit_pkts_common(void *tx_queue,
 		uint16_t nb_pkts,
 		bool repr_flag)
 {
-	int i;
-	int pkt_size;
-	int dma_size;
+	uint16_t i;
+	uint32_t pkt_size;
+	uint16_t dma_size;
 	uint8_t offset;
 	uint64_t dma_addr;
 	uint16_t free_descs;
diff --git a/drivers/net/nfp/nfp_common.c b/drivers/net/nfp/nfp_common.c
index 9719a9212b..cb2c2afbd7 100644
--- a/drivers/net/nfp/nfp_common.c
+++ b/drivers/net/nfp/nfp_common.c
@@ -199,7 +199,7 @@ static int
 __nfp_net_reconfig(struct nfp_net_hw *hw,
 		uint32_t update)
 {
-	int cnt;
+	uint32_t cnt;
 	uint32_t new;
 	struct timespec wait;
 
@@ -229,7 +229,7 @@ __nfp_net_reconfig(struct nfp_net_hw *hw,
 		}
 		if (cnt >= NFP_NET_POLL_TIMEOUT) {
 			PMD_INIT_LOG(ERR, "Reconfig timeout for 0x%08x after"
-					" %dms", update, cnt);
+					" %ums", update, cnt);
 			return -EIO;
 		}
 		nanosleep(&wait, 0); /* waiting for a 1ms */
@@ -466,7 +466,7 @@ nfp_net_enable_queues(struct rte_eth_dev *dev)
 {
 	struct nfp_net_hw *hw;
 	uint64_t enabled_queues = 0;
-	int i;
+	uint16_t i;
 
 	hw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private);
 
@@ -575,7 +575,7 @@ nfp_configure_rx_interrupt(struct rte_eth_dev *dev,
 		struct rte_intr_handle *intr_handle)
 {
 	struct nfp_net_hw *hw;
-	int i;
+	uint16_t i;
 
 	if (rte_intr_vec_list_alloc(intr_handle, "intr_vec",
 				dev->data->nb_rx_queues) != 0) {
@@ -832,7 +832,7 @@ int
 nfp_net_stats_get(struct rte_eth_dev *dev,
 		struct rte_eth_stats *stats)
 {
-	int i;
+	uint16_t i;
 	struct nfp_net_hw *hw;
 	struct rte_eth_stats nfp_dev_stats;
 
@@ -923,7 +923,7 @@ nfp_net_stats_get(struct rte_eth_dev *dev,
 int
 nfp_net_stats_reset(struct rte_eth_dev *dev)
 {
-	int i;
+	uint16_t i;
 	struct nfp_net_hw *hw;
 
 	hw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private);
@@ -1398,7 +1398,7 @@ nfp_rx_queue_intr_enable(struct rte_eth_dev *dev,
 {
 	struct rte_pci_device *pci_dev;
 	struct nfp_net_hw *hw;
-	int base = 0;
+	uint16_t base = 0;
 
 	hw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private);
 	pci_dev = RTE_ETH_DEV_TO_PCI(dev);
@@ -1419,7 +1419,7 @@ nfp_rx_queue_intr_disable(struct rte_eth_dev *dev,
 {
 	struct rte_pci_device *pci_dev;
 	struct nfp_net_hw *hw;
-	int base = 0;
+	uint16_t base = 0;
 
 	hw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private);
 	pci_dev = RTE_ETH_DEV_TO_PCI(dev);
@@ -1619,9 +1619,10 @@ nfp_net_rss_reta_write(struct rte_eth_dev *dev,
 		struct rte_eth_rss_reta_entry64 *reta_conf,
 		uint16_t reta_size)
 {
-	uint32_t reta, mask;
-	int i, j;
-	int idx, shift;
+	uint8_t mask;
+	uint32_t reta;
+	uint16_t i, j;
+	uint16_t idx, shift;
 	struct nfp_net_hw *hw =
 		NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private);
 
@@ -1695,8 +1696,9 @@ nfp_net_reta_query(struct rte_eth_dev *dev,
 		struct rte_eth_rss_reta_entry64 *reta_conf,
 		uint16_t reta_size)
 {
-	uint8_t i, j, mask;
-	int idx, shift;
+	uint16_t i, j;
+	uint8_t mask;
+	uint16_t idx, shift;
 	uint32_t reta;
 	struct nfp_net_hw *hw;
 
@@ -1720,7 +1722,7 @@ nfp_net_reta_query(struct rte_eth_dev *dev,
 		/* Handling 4 RSS entries per loop */
 		idx = i / RTE_ETH_RETA_GROUP_SIZE;
 		shift = i % RTE_ETH_RETA_GROUP_SIZE;
-		mask = (uint8_t)((reta_conf[idx].mask >> shift) & 0xF);
+		mask = (reta_conf[idx].mask >> shift) & 0xF;
 
 		if (mask == 0)
 			continue;
@@ -1744,7 +1746,7 @@ nfp_net_rss_hash_write(struct rte_eth_dev *dev,
 	uint64_t rss_hf;
 	uint32_t cfg_rss_ctrl = 0;
 	uint8_t key;
-	int i;
+	uint8_t i;
 
 	hw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private);
 
@@ -1835,7 +1837,7 @@ nfp_net_rss_hash_conf_get(struct rte_eth_dev *dev,
 	uint64_t rss_hf;
 	uint32_t cfg_rss_ctrl;
 	uint8_t key;
-	int i;
+	uint8_t i;
 	struct nfp_net_hw *hw;
 
 	hw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private);
@@ -1893,7 +1895,8 @@ nfp_net_rss_config_default(struct rte_eth_dev *dev)
 	struct rte_eth_rss_reta_entry64 nfp_reta_conf[2];
 	uint16_t rx_queues = dev->data->nb_rx_queues;
 	uint16_t queue;
-	int i, j, ret;
+	uint8_t i, j;
+	int ret;
 
 	PMD_DRV_LOG(INFO, "setting default RSS conf for %u queues",
 			rx_queues);
diff --git a/drivers/net/nfp/nfp_common.h b/drivers/net/nfp/nfp_common.h
index e4fd394868..71153ea25b 100644
--- a/drivers/net/nfp/nfp_common.h
+++ b/drivers/net/nfp/nfp_common.h
@@ -245,14 +245,14 @@ nn_writeq(uint64_t val,
  */
 static inline uint8_t
 nn_cfg_readb(struct nfp_net_hw *hw,
-		int off)
+		uint32_t off)
 {
 	return nn_readb(hw->ctrl_bar + off);
 }
 
 static inline void
 nn_cfg_writeb(struct nfp_net_hw *hw,
-		int off,
+		uint32_t off,
 		uint8_t val)
 {
 	nn_writeb(val, hw->ctrl_bar + off);
@@ -260,14 +260,14 @@ nn_cfg_writeb(struct nfp_net_hw *hw,
 
 static inline uint16_t
 nn_cfg_readw(struct nfp_net_hw *hw,
-		int off)
+		uint32_t off)
 {
 	return rte_le_to_cpu_16(nn_readw(hw->ctrl_bar + off));
 }
 
 static inline void
 nn_cfg_writew(struct nfp_net_hw *hw,
-		int off,
+		uint32_t off,
 		uint16_t val)
 {
 	nn_writew(rte_cpu_to_le_16(val), hw->ctrl_bar + off);
@@ -275,14 +275,14 @@ nn_cfg_writew(struct nfp_net_hw *hw,
 
 static inline uint32_t
 nn_cfg_readl(struct nfp_net_hw *hw,
-		int off)
+		uint32_t off)
 {
 	return rte_le_to_cpu_32(nn_readl(hw->ctrl_bar + off));
 }
 
 static inline void
 nn_cfg_writel(struct nfp_net_hw *hw,
-		int off,
+		uint32_t off,
 		uint32_t val)
 {
 	nn_writel(rte_cpu_to_le_32(val), hw->ctrl_bar + off);
@@ -290,14 +290,14 @@ nn_cfg_writel(struct nfp_net_hw *hw,
 
 static inline uint64_t
 nn_cfg_readq(struct nfp_net_hw *hw,
-		int off)
+		uint32_t off)
 {
 	return rte_le_to_cpu_64(nn_readq(hw->ctrl_bar + off));
 }
 
 static inline void
 nn_cfg_writeq(struct nfp_net_hw *hw,
-		int off,
+		uint32_t off,
 		uint64_t val)
 {
 	nn_writeq(rte_cpu_to_le_64(val), hw->ctrl_bar + off);
diff --git a/drivers/net/nfp/nfp_ethdev.c b/drivers/net/nfp/nfp_ethdev.c
index 65473d87e8..140d20dcf7 100644
--- a/drivers/net/nfp/nfp_ethdev.c
+++ b/drivers/net/nfp/nfp_ethdev.c
@@ -23,7 +23,7 @@
 
 static int
 nfp_net_pf_read_mac(struct nfp_app_fw_nic *app_fw_nic,
-		int port)
+		uint16_t port)
 {
 	struct nfp_eth_table *nfp_eth_table;
 	struct nfp_net_hw *hw = NULL;
@@ -255,7 +255,7 @@ nfp_net_close(struct rte_eth_dev *dev)
 	struct rte_pci_device *pci_dev;
 	struct nfp_pf_dev *pf_dev;
 	struct nfp_app_fw_nic *app_fw_nic;
-	int i;
+	uint8_t i;
 
 	if (rte_eal_process_type() != RTE_PROC_PRIMARY)
 		return 0;
@@ -487,7 +487,7 @@ nfp_net_init(struct rte_eth_dev *eth_dev)
 	struct rte_ether_addr *tmp_ether_addr;
 	uint64_t rx_base;
 	uint64_t tx_base;
-	int port = 0;
+	uint16_t port = 0;
 	int err;
 
 	PMD_INIT_FUNC_TRACE();
@@ -501,7 +501,7 @@ nfp_net_init(struct rte_eth_dev *eth_dev)
 	app_fw_nic = NFP_PRIV_TO_APP_FW_NIC(pf_dev->app_fw_priv);
 
 	port = ((struct nfp_net_hw *)eth_dev->data->dev_private)->idx;
-	if (port < 0 || port > 7) {
+	if (port > 7) {
 		PMD_DRV_LOG(ERR, "Port value is wrong");
 		return -ENODEV;
 	}
@@ -761,10 +761,10 @@ static int
 nfp_init_app_fw_nic(struct nfp_pf_dev *pf_dev,
 		const struct nfp_dev_info *dev_info)
 {
-	int i;
+	uint8_t i;
 	int ret;
 	int err = 0;
-	int total_vnics;
+	uint32_t total_vnics;
 	struct nfp_net_hw *hw;
 	unsigned int numa_node;
 	struct rte_eth_dev *eth_dev;
@@ -785,7 +785,7 @@ nfp_init_app_fw_nic(struct nfp_pf_dev *pf_dev,
 
 	/* Read the number of vNIC's created for the PF */
 	total_vnics = nfp_rtsym_read_le(pf_dev->sym_tbl, "nfd_cfg_pf0_num_ports", &err);
-	if (err != 0 || total_vnics <= 0 || total_vnics > 8) {
+	if (err != 0 || total_vnics == 0 || total_vnics > 8) {
 		PMD_INIT_LOG(ERR, "nfd_cfg_pf0_num_ports symbol with wrong value");
 		ret = -ENODEV;
 		goto app_cleanup;
@@ -795,7 +795,7 @@ nfp_init_app_fw_nic(struct nfp_pf_dev *pf_dev,
 	 * For coreNIC the number of vNICs exposed should be the same as the
 	 * number of physical ports
 	 */
-	if (total_vnics != (int)nfp_eth_table->count) {
+	if (total_vnics != nfp_eth_table->count) {
 		PMD_INIT_LOG(ERR, "Total physical ports do not match number of vNICs");
 		ret = -ENODEV;
 		goto app_cleanup;
@@ -1053,15 +1053,15 @@ nfp_secondary_init_app_fw_nic(struct rte_pci_device *pci_dev,
 		struct nfp_rtsym_table *sym_tbl,
 		struct nfp_cpp *cpp)
 {
-	int i;
+	uint32_t i;
 	int err = 0;
 	int ret = 0;
-	int total_vnics;
+	uint32_t total_vnics;
 	struct nfp_net_hw *hw;
 
 	/* Read the number of vNIC's created for the PF */
 	total_vnics = nfp_rtsym_read_le(sym_tbl, "nfd_cfg_pf0_num_ports", &err);
-	if (err != 0 || total_vnics <= 0 || total_vnics > 8) {
+	if (err != 0 || total_vnics == 0 || total_vnics > 8) {
 		PMD_INIT_LOG(ERR, "nfd_cfg_pf0_num_ports symbol with wrong value");
 		return -ENODEV;
 	}
@@ -1069,7 +1069,7 @@ nfp_secondary_init_app_fw_nic(struct rte_pci_device *pci_dev,
 	for (i = 0; i < total_vnics; i++) {
 		struct rte_eth_dev *eth_dev;
 		char port_name[RTE_ETH_NAME_MAX_LEN];
-		snprintf(port_name, sizeof(port_name), "%s_port%d",
+		snprintf(port_name, sizeof(port_name), "%s_port%u",
 				pci_dev->device.name, i);
 
 		PMD_INIT_LOG(DEBUG, "Secondary attaching to port %s", port_name);
diff --git a/drivers/net/nfp/nfp_ethdev_vf.c b/drivers/net/nfp/nfp_ethdev_vf.c
index ac6a10685d..892300a909 100644
--- a/drivers/net/nfp/nfp_ethdev_vf.c
+++ b/drivers/net/nfp/nfp_ethdev_vf.c
@@ -260,7 +260,7 @@ nfp_netvf_init(struct rte_eth_dev *eth_dev)
 
 	uint64_t tx_bar_off = 0, rx_bar_off = 0;
 	uint32_t start_q;
-	int port = 0;
+	uint16_t port = 0;
 	int err;
 	const struct nfp_dev_info *dev_info;
 
diff --git a/drivers/net/nfp/nfp_flow.c b/drivers/net/nfp/nfp_flow.c
index 6d9a1c249f..4c9904e36c 100644
--- a/drivers/net/nfp/nfp_flow.c
+++ b/drivers/net/nfp/nfp_flow.c
@@ -121,7 +121,7 @@ struct nfp_flow_item_proc {
 	/* Bit-mask to use when @p item->mask is not provided. */
 	const void *mask_default;
 	/* Size in bytes for @p mask_support and @p mask_default. */
-	const unsigned int mask_sz;
+	const size_t mask_sz;
 	/* Merge a pattern item into a flow rule handle. */
 	int (*merge)(struct nfp_app_fw_flower *app_fw_flower,
 			struct rte_flow *nfp_flow,
@@ -1941,8 +1941,8 @@ static int
 nfp_flow_item_check(const struct rte_flow_item *item,
 		const struct nfp_flow_item_proc *proc)
 {
+	size_t i;
 	int ret = 0;
-	unsigned int i;
 	const uint8_t *mask;
 
 	/* item->last and item->mask cannot exist without item->spec. */
@@ -2037,7 +2037,7 @@ nfp_flow_compile_item_proc(struct nfp_flower_representor *repr,
 		char **mbuf_off_mask,
 		bool is_outer_layer)
 {
-	int i;
+	uint32_t i;
 	int ret = 0;
 	bool continue_flag = true;
 	const struct rte_flow_item *item;
@@ -2271,7 +2271,7 @@ nfp_flow_action_set_ipv6(char *act_data,
 		const struct rte_flow_action *action,
 		bool ip_src_flag)
 {
-	int i;
+	uint32_t i;
 	rte_be32_t tmp;
 	size_t act_size;
 	struct nfp_fl_act_set_ipv6_addr *set_ip;
diff --git a/drivers/net/nfp/nfp_rxtx.c b/drivers/net/nfp/nfp_rxtx.c
index 7885166753..8cbb9b74a2 100644
--- a/drivers/net/nfp/nfp_rxtx.c
+++ b/drivers/net/nfp/nfp_rxtx.c
@@ -190,7 +190,7 @@ nfp_net_rx_fill_freelist(struct nfp_net_rxq *rxq)
 {
 	struct nfp_net_dp_buf *rxe = rxq->rxbufs;
 	uint64_t dma_addr;
-	unsigned int i;
+	uint16_t i;
 
 	PMD_RX_LOG(DEBUG, "Fill Rx Freelist for %u descriptors",
 			rxq->rx_count);
@@ -229,7 +229,7 @@ nfp_net_rx_fill_freelist(struct nfp_net_rxq *rxq)
 int
 nfp_net_rx_freelist_setup(struct rte_eth_dev *dev)
 {
-	int i;
+	uint16_t i;
 
 	for (i = 0; i < dev->data->nb_rx_queues; i++) {
 		if (nfp_net_rx_fill_freelist(dev->data->rx_queues[i]) != 0)
@@ -840,7 +840,7 @@ nfp_net_recv_pkts(void *rx_queue,
 static void
 nfp_net_rx_queue_release_mbufs(struct nfp_net_rxq *rxq)
 {
-	unsigned int i;
+	uint16_t i;
 
 	if (rxq->rxbufs == NULL)
 		return;
@@ -992,11 +992,11 @@ nfp_net_rx_queue_setup(struct rte_eth_dev *dev,
  * @txq: TX queue to work with
  * Returns number of descriptors freed
  */
-int
+uint32_t
 nfp_net_tx_free_bufs(struct nfp_net_txq *txq)
 {
 	uint32_t qcp_rd_p;
-	int todo;
+	uint32_t todo;
 
 	PMD_TX_LOG(DEBUG, "queue %hu. Check for descriptor with a complete"
 			" status", txq->qidx);
@@ -1032,7 +1032,7 @@ nfp_net_tx_free_bufs(struct nfp_net_txq *txq)
 static void
 nfp_net_tx_queue_release_mbufs(struct nfp_net_txq *txq)
 {
-	unsigned int i;
+	uint32_t i;
 
 	if (txq->txbufs == NULL)
 		return;
diff --git a/drivers/net/nfp/nfp_rxtx.h b/drivers/net/nfp/nfp_rxtx.h
index 9a30ebd89e..98ef6c3d93 100644
--- a/drivers/net/nfp/nfp_rxtx.h
+++ b/drivers/net/nfp/nfp_rxtx.h
@@ -253,7 +253,7 @@ int nfp_net_tx_queue_setup(struct rte_eth_dev *dev,
 		uint16_t nb_desc,
 		unsigned int socket_id,
 		const struct rte_eth_txconf *tx_conf);
-int nfp_net_tx_free_bufs(struct nfp_net_txq *txq);
+uint32_t nfp_net_tx_free_bufs(struct nfp_net_txq *txq);
 void nfp_net_set_meta_vlan(struct nfp_net_meta_raw *meta_data,
 		struct rte_mbuf *pkt,
 		uint8_t layer);
-- 
2.39.1


^ permalink raw reply	[flat|nested] 40+ messages in thread

* [PATCH v3 04/11] net/nfp: standard the local variable coding style
  2023-10-13  6:06   ` [PATCH v3 00/11] Unify the PMD coding style Chaoyong He
                       ` (2 preceding siblings ...)
  2023-10-13  6:06     ` [PATCH v3 03/11] net/nfp: unify the type of integer variable Chaoyong He
@ 2023-10-13  6:06     ` Chaoyong He
  2023-10-13  6:06     ` [PATCH v3 05/11] net/nfp: adjust the log statement Chaoyong He
                       ` (7 subsequent siblings)
  11 siblings, 0 replies; 40+ messages in thread
From: Chaoyong He @ 2023-10-13  6:06 UTC (permalink / raw)
  To: dev; +Cc: oss-drivers, Chaoyong He, Long Wu, Peng Zhang

Declare only one local variable on each line, and keep the local
variables in a unified order.
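
A minimal sketch of the rule (hypothetical function and names, not
taken from the patch): each local gets its own declaration line,
ordered here from the shortest declaration to the longest, matching
the pattern visible in the hunks below:

#include <stdint.h>

/* Sketch only: one local per line, shortest declaration first. */
static uint32_t
sum_u16(const uint16_t *vals,
		uint16_t nb_vals)
{
	uint16_t i;
	uint32_t sum = 0;
	const uint16_t *cur = vals;

	for (i = 0; i < nb_vals; i++)
		sum += cur[i];

	return sum;
}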

Signed-off-by: Chaoyong He <chaoyong.he@corigine.com>
Reviewed-by: Long Wu <long.wu@corigine.com>
Reviewed-by: Peng Zhang <peng.zhang@corigine.com>
---
 drivers/net/nfp/flower/nfp_flower.c |  6 +-
 drivers/net/nfp/nfd3/nfp_nfd3_dp.c  |  4 +-
 drivers/net/nfp/nfp_common.c        | 97 ++++++++++++++++-------------
 drivers/net/nfp/nfp_common.h        |  3 +-
 drivers/net/nfp/nfp_cpp_bridge.c    | 39 ++++++++----
 drivers/net/nfp/nfp_ethdev.c        | 47 +++++++-------
 drivers/net/nfp/nfp_ethdev_vf.c     | 23 +++----
 drivers/net/nfp/nfp_flow.c          | 28 ++++-----
 drivers/net/nfp/nfp_rxtx.c          | 38 +++++------
 9 files changed, 154 insertions(+), 131 deletions(-)

diff --git a/drivers/net/nfp/flower/nfp_flower.c b/drivers/net/nfp/flower/nfp_flower.c
index 7dd1423aaf..7a4e671178 100644
--- a/drivers/net/nfp/flower/nfp_flower.c
+++ b/drivers/net/nfp/flower/nfp_flower.c
@@ -24,9 +24,9 @@
 static void
 nfp_pf_repr_enable_queues(struct rte_eth_dev *dev)
 {
+	uint16_t i;
 	struct nfp_net_hw *hw;
 	uint64_t enabled_queues = 0;
-	uint16_t i;
 	struct nfp_flower_representor *repr;
 
 	repr = dev->data->dev_private;
@@ -50,9 +50,9 @@ nfp_pf_repr_enable_queues(struct rte_eth_dev *dev)
 static void
 nfp_pf_repr_disable_queues(struct rte_eth_dev *dev)
 {
-	struct nfp_net_hw *hw;
+	uint32_t update;
 	uint32_t new_ctrl;
-	uint32_t update = 0;
+	struct nfp_net_hw *hw;
 	struct nfp_flower_representor *repr;
 
 	repr = dev->data->dev_private;
diff --git a/drivers/net/nfp/nfd3/nfp_nfd3_dp.c b/drivers/net/nfp/nfd3/nfp_nfd3_dp.c
index 5a84629ed7..699f65ebef 100644
--- a/drivers/net/nfp/nfd3/nfp_nfd3_dp.c
+++ b/drivers/net/nfp/nfd3/nfp_nfd3_dp.c
@@ -228,13 +228,13 @@ nfp_net_nfd3_xmit_pkts_common(void *tx_queue,
 		bool repr_flag)
 {
 	uint16_t i;
+	uint8_t offset;
 	uint32_t pkt_size;
 	uint16_t dma_size;
-	uint8_t offset;
 	uint64_t dma_addr;
 	uint16_t free_descs;
-	uint16_t issued_descs;
 	struct rte_mbuf *pkt;
+	uint16_t issued_descs;
 	struct nfp_net_hw *hw;
 	struct rte_mbuf **lmbuf;
 	struct nfp_net_txq *txq;
diff --git a/drivers/net/nfp/nfp_common.c b/drivers/net/nfp/nfp_common.c
index cb2c2afbd7..18291a1cde 100644
--- a/drivers/net/nfp/nfp_common.c
+++ b/drivers/net/nfp/nfp_common.c
@@ -375,10 +375,10 @@ nfp_net_mbox_reconfig(struct nfp_net_hw *hw,
 int
 nfp_net_configure(struct rte_eth_dev *dev)
 {
+	struct nfp_net_hw *hw;
 	struct rte_eth_conf *dev_conf;
 	struct rte_eth_rxmode *rxmode;
 	struct rte_eth_txmode *txmode;
-	struct nfp_net_hw *hw;
 
 	hw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private);
 
@@ -464,9 +464,9 @@ nfp_net_enbable_rxvlan_cap(struct nfp_net_hw *hw,
 void
 nfp_net_enable_queues(struct rte_eth_dev *dev)
 {
+	uint16_t i;
 	struct nfp_net_hw *hw;
 	uint64_t enabled_queues = 0;
-	uint16_t i;
 
 	hw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private);
 
@@ -488,8 +488,9 @@ nfp_net_enable_queues(struct rte_eth_dev *dev)
 void
 nfp_net_disable_queues(struct rte_eth_dev *dev)
 {
+	uint32_t update;
+	uint32_t new_ctrl;
 	struct nfp_net_hw *hw;
-	uint32_t new_ctrl, update = 0;
 
 	hw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private);
 
@@ -528,9 +529,10 @@ void
 nfp_net_write_mac(struct nfp_net_hw *hw,
 		uint8_t *mac)
 {
-	uint32_t mac0 = *(uint32_t *)mac;
+	uint32_t mac0;
 	uint16_t mac1;
 
+	mac0 = *(uint32_t *)mac;
 	nn_writel(rte_cpu_to_be_32(mac0), hw->ctrl_bar + NFP_NET_CFG_MACADDR);
 
 	mac += 4;
@@ -543,8 +545,9 @@ int
 nfp_net_set_mac_addr(struct rte_eth_dev *dev,
 		struct rte_ether_addr *mac_addr)
 {
+	uint32_t ctrl;
+	uint32_t update;
 	struct nfp_net_hw *hw;
-	uint32_t update, ctrl;
 
 	hw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private);
 	if ((hw->ctrl & NFP_NET_CFG_CTRL_ENABLE) != 0 &&
@@ -574,8 +577,8 @@ int
 nfp_configure_rx_interrupt(struct rte_eth_dev *dev,
 		struct rte_intr_handle *intr_handle)
 {
-	struct nfp_net_hw *hw;
 	uint16_t i;
+	struct nfp_net_hw *hw;
 
 	if (rte_intr_vec_list_alloc(intr_handle, "intr_vec",
 				dev->data->nb_rx_queues) != 0) {
@@ -615,11 +618,11 @@ nfp_configure_rx_interrupt(struct rte_eth_dev *dev,
 uint32_t
 nfp_check_offloads(struct rte_eth_dev *dev)
 {
+	uint32_t ctrl = 0;
 	struct nfp_net_hw *hw;
 	struct rte_eth_conf *dev_conf;
 	struct rte_eth_rxmode *rxmode;
 	struct rte_eth_txmode *txmode;
-	uint32_t ctrl = 0;
 
 	hw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private);
 
@@ -682,9 +685,10 @@ nfp_check_offloads(struct rte_eth_dev *dev)
 int
 nfp_net_promisc_enable(struct rte_eth_dev *dev)
 {
-	uint32_t new_ctrl, update = 0;
-	struct nfp_net_hw *hw;
 	int ret;
+	uint32_t new_ctrl;
+	uint32_t update = 0;
+	struct nfp_net_hw *hw;
 	struct nfp_flower_representor *repr;
 
 	PMD_DRV_LOG(DEBUG, "Promiscuous mode enable");
@@ -725,9 +729,10 @@ nfp_net_promisc_enable(struct rte_eth_dev *dev)
 int
 nfp_net_promisc_disable(struct rte_eth_dev *dev)
 {
-	uint32_t new_ctrl, update = 0;
-	struct nfp_net_hw *hw;
 	int ret;
+	uint32_t new_ctrl;
+	uint32_t update = 0;
+	struct nfp_net_hw *hw;
 
 	hw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private);
 
@@ -764,8 +769,8 @@ nfp_net_link_update(struct rte_eth_dev *dev,
 {
 	int ret;
 	uint32_t i;
-	uint32_t nn_link_status;
 	struct nfp_net_hw *hw;
+	uint32_t nn_link_status;
 	struct rte_eth_link link;
 	struct nfp_eth_table *nfp_eth_table;
 
@@ -988,12 +993,13 @@ nfp_net_stats_reset(struct rte_eth_dev *dev)
 uint32_t
 nfp_net_xstats_size(const struct rte_eth_dev *dev)
 {
-	/* If the device is a VF, then there will be no MAC stats */
-	struct nfp_net_hw *hw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+	uint32_t count;
+	struct nfp_net_hw *hw;
 	const uint32_t size = RTE_DIM(nfp_net_xstats);
 
+	/* If the device is a VF, then there will be no MAC stats */
+	hw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private);
 	if (hw->mac_stats == NULL) {
-		uint32_t count;
 		for (count = 0; count < size; count++) {
 			if (nfp_net_xstats[count].group == NFP_XSTAT_GROUP_MAC)
 				break;
@@ -1396,9 +1402,9 @@ int
 nfp_rx_queue_intr_enable(struct rte_eth_dev *dev,
 		uint16_t queue_id)
 {
-	struct rte_pci_device *pci_dev;
-	struct nfp_net_hw *hw;
 	uint16_t base = 0;
+	struct nfp_net_hw *hw;
+	struct rte_pci_device *pci_dev;
 
 	hw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private);
 	pci_dev = RTE_ETH_DEV_TO_PCI(dev);
@@ -1417,9 +1423,9 @@ int
 nfp_rx_queue_intr_disable(struct rte_eth_dev *dev,
 		uint16_t queue_id)
 {
-	struct rte_pci_device *pci_dev;
-	struct nfp_net_hw *hw;
 	uint16_t base = 0;
+	struct nfp_net_hw *hw;
+	struct rte_pci_device *pci_dev;
 
 	hw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private);
 	pci_dev = RTE_ETH_DEV_TO_PCI(dev);
@@ -1436,8 +1442,8 @@ nfp_rx_queue_intr_disable(struct rte_eth_dev *dev,
 static void
 nfp_net_dev_link_status_print(struct rte_eth_dev *dev)
 {
-	struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
 	struct rte_eth_link link;
+	struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
 
 	rte_eth_linkstatus_get(dev, &link);
 	if (link.link_status != 0)
@@ -1573,16 +1579,16 @@ int
 nfp_net_vlan_offload_set(struct rte_eth_dev *dev,
 		int mask)
 {
-	uint32_t new_ctrl, update;
+	int ret;
+	uint32_t update;
+	uint32_t new_ctrl;
 	struct nfp_net_hw *hw;
+	uint32_t rxvlan_ctrl = 0;
 	struct rte_eth_conf *dev_conf;
-	uint32_t rxvlan_ctrl;
-	int ret;
 
 	hw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private);
 	dev_conf = &dev->data->dev_conf;
 	new_ctrl = hw->ctrl;
-	rxvlan_ctrl = 0;
 
 	nfp_net_enbable_rxvlan_cap(hw, &rxvlan_ctrl);
 
@@ -1619,12 +1625,15 @@ nfp_net_rss_reta_write(struct rte_eth_dev *dev,
 		struct rte_eth_rss_reta_entry64 *reta_conf,
 		uint16_t reta_size)
 {
+	uint16_t i;
+	uint16_t j;
+	uint16_t idx;
 	uint8_t mask;
 	uint32_t reta;
-	uint16_t i, j;
-	uint16_t idx, shift;
-	struct nfp_net_hw *hw =
-		NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+	uint16_t shift;
+	struct nfp_net_hw *hw;
+
+	hw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private);
 
 	if (reta_size != NFP_NET_CFG_RSS_ITBL_SZ) {
 		PMD_DRV_LOG(ERR, "The size of hash lookup table configured "
@@ -1670,11 +1679,11 @@ nfp_net_reta_update(struct rte_eth_dev *dev,
 		struct rte_eth_rss_reta_entry64 *reta_conf,
 		uint16_t reta_size)
 {
-	struct nfp_net_hw *hw =
-		NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private);
-	uint32_t update;
 	int ret;
+	uint32_t update;
+	struct nfp_net_hw *hw;
 
+	hw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private);
 	if ((hw->ctrl & NFP_NET_CFG_CTRL_RSS_ANY) == 0)
 		return -EINVAL;
 
@@ -1696,10 +1705,12 @@ nfp_net_reta_query(struct rte_eth_dev *dev,
 		struct rte_eth_rss_reta_entry64 *reta_conf,
 		uint16_t reta_size)
 {
-	uint16_t i, j;
+	uint16_t i;
+	uint16_t j;
+	uint16_t idx;
 	uint8_t mask;
-	uint16_t idx, shift;
 	uint32_t reta;
+	uint16_t shift;
 	struct nfp_net_hw *hw;
 
 	hw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private);
@@ -1742,11 +1753,11 @@ static int
 nfp_net_rss_hash_write(struct rte_eth_dev *dev,
 		struct rte_eth_rss_conf *rss_conf)
 {
-	struct nfp_net_hw *hw;
+	uint8_t i;
+	uint8_t key;
 	uint64_t rss_hf;
+	struct nfp_net_hw *hw;
 	uint32_t cfg_rss_ctrl = 0;
-	uint8_t key;
-	uint8_t i;
 
 	hw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private);
 
@@ -1834,10 +1845,10 @@ int
 nfp_net_rss_hash_conf_get(struct rte_eth_dev *dev,
 		struct rte_eth_rss_conf *rss_conf)
 {
+	uint8_t i;
+	uint8_t key;
 	uint64_t rss_hf;
 	uint32_t cfg_rss_ctrl;
-	uint8_t key;
-	uint8_t i;
 	struct nfp_net_hw *hw;
 
 	hw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private);
@@ -1890,13 +1901,14 @@ nfp_net_rss_hash_conf_get(struct rte_eth_dev *dev,
 int
 nfp_net_rss_config_default(struct rte_eth_dev *dev)
 {
+	int ret;
+	uint8_t i;
+	uint8_t j;
+	uint16_t queue = 0;
 	struct rte_eth_conf *dev_conf;
 	struct rte_eth_rss_conf rss_conf;
-	struct rte_eth_rss_reta_entry64 nfp_reta_conf[2];
 	uint16_t rx_queues = dev->data->nb_rx_queues;
-	uint16_t queue;
-	uint8_t i, j;
-	int ret;
+	struct rte_eth_rss_reta_entry64 nfp_reta_conf[2];
 
 	PMD_DRV_LOG(INFO, "setting default RSS conf for %u queues",
 			rx_queues);
@@ -1904,7 +1916,6 @@ nfp_net_rss_config_default(struct rte_eth_dev *dev)
 	nfp_reta_conf[0].mask = ~0x0;
 	nfp_reta_conf[1].mask = ~0x0;
 
-	queue = 0;
 	for (i = 0; i < 0x40; i += 8) {
 		for (j = i; j < (i + 8); j++) {
 			nfp_reta_conf[0].reta[j] = queue;
diff --git a/drivers/net/nfp/nfp_common.h b/drivers/net/nfp/nfp_common.h
index 71153ea25b..9cb889c4a6 100644
--- a/drivers/net/nfp/nfp_common.h
+++ b/drivers/net/nfp/nfp_common.h
@@ -222,8 +222,9 @@ nn_writew(uint16_t val,
 static inline uint64_t
 nn_readq(volatile void *addr)
 {
+	uint32_t low;
+	uint32_t high;
 	const volatile uint32_t *p = addr;
-	uint32_t low, high;
 
 	high = nn_readl((volatile const void *)(p + 1));
 	low = nn_readl((volatile const void *)p);
diff --git a/drivers/net/nfp/nfp_cpp_bridge.c b/drivers/net/nfp/nfp_cpp_bridge.c
index 85a8bf9235..727ec7a7b2 100644
--- a/drivers/net/nfp/nfp_cpp_bridge.c
+++ b/drivers/net/nfp/nfp_cpp_bridge.c
@@ -119,12 +119,16 @@ static int
 nfp_cpp_bridge_serve_write(int sockfd,
 		struct nfp_cpp *cpp)
 {
-	struct nfp_cpp_area *area;
-	off_t offset, nfp_offset;
-	uint32_t cpp_id, pos, len;
+	int err;
+	off_t offset;
+	uint32_t pos;
+	uint32_t len;
+	size_t count;
+	size_t curlen;
+	uint32_t cpp_id;
+	off_t nfp_offset;
 	uint32_t tmpbuf[16];
-	size_t count, curlen;
-	int err = 0;
+	struct nfp_cpp_area *area;
 
 	PMD_CPP_LOG(DEBUG, "%s: offset size %zu, count_size: %zu\n", __func__,
 			sizeof(off_t), sizeof(size_t));
@@ -220,12 +224,16 @@ static int
 nfp_cpp_bridge_serve_read(int sockfd,
 		struct nfp_cpp *cpp)
 {
-	struct nfp_cpp_area *area;
-	off_t offset, nfp_offset;
-	uint32_t cpp_id, pos, len;
+	int err;
+	off_t offset;
+	uint32_t pos;
+	uint32_t len;
+	size_t count;
+	size_t curlen;
+	uint32_t cpp_id;
+	off_t nfp_offset;
 	uint32_t tmpbuf[16];
-	size_t count, curlen;
-	int err = 0;
+	struct nfp_cpp_area *area;
 
 	PMD_CPP_LOG(DEBUG, "%s: offset size %zu, count_size: %zu\n", __func__,
 			sizeof(off_t), sizeof(size_t));
@@ -319,8 +327,10 @@ static int
 nfp_cpp_bridge_serve_ioctl(int sockfd,
 		struct nfp_cpp *cpp)
 {
-	uint32_t cmd, ident_size, tmp;
 	int err;
+	uint32_t cmd;
+	uint32_t tmp;
+	uint32_t ident_size;
 
 	/* Reading now the IOCTL command */
 	err = recv(sockfd, &cmd, 4, 0);
@@ -375,10 +385,13 @@ nfp_cpp_bridge_serve_ioctl(int sockfd,
 static int
 nfp_cpp_bridge_service_func(void *args)
 {
-	struct sockaddr address;
+	int op;
+	int ret;
+	int sockfd;
+	int datafd;
 	struct nfp_cpp *cpp;
+	struct sockaddr address;
 	struct nfp_pf_dev *pf_dev;
-	int sockfd, datafd, op, ret;
 	struct timeval timeout = {1, 0};
 
 	unlink("/tmp/nfp_cpp");
diff --git a/drivers/net/nfp/nfp_ethdev.c b/drivers/net/nfp/nfp_ethdev.c
index 140d20dcf7..7d149decfb 100644
--- a/drivers/net/nfp/nfp_ethdev.c
+++ b/drivers/net/nfp/nfp_ethdev.c
@@ -25,8 +25,8 @@ static int
 nfp_net_pf_read_mac(struct nfp_app_fw_nic *app_fw_nic,
 		uint16_t port)
 {
+	struct nfp_net_hw *hw;
 	struct nfp_eth_table *nfp_eth_table;
-	struct nfp_net_hw *hw = NULL;
 
 	/* Grab a pointer to the correct physical port */
 	hw = app_fw_nic->ports[port];
@@ -42,18 +42,19 @@ nfp_net_pf_read_mac(struct nfp_app_fw_nic *app_fw_nic,
 static int
 nfp_net_start(struct rte_eth_dev *dev)
 {
-	struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
-	struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
-	uint32_t new_ctrl, update = 0;
+	int ret;
+	uint32_t new_ctrl;
+	uint32_t update = 0;
 	uint32_t cap_extend;
-	uint32_t ctrl_extend = 0;
+	uint32_t intr_vector;
 	struct nfp_net_hw *hw;
+	uint32_t ctrl_extend = 0;
 	struct nfp_pf_dev *pf_dev;
-	struct nfp_app_fw_nic *app_fw_nic;
 	struct rte_eth_conf *dev_conf;
 	struct rte_eth_rxmode *rxmode;
-	uint32_t intr_vector;
-	int ret;
+	struct nfp_app_fw_nic *app_fw_nic;
+	struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
+	struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
 
 	hw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private);
 	pf_dev = NFP_NET_DEV_PRIVATE_TO_PF(dev->data->dev_private);
@@ -251,11 +252,11 @@ nfp_net_set_link_down(struct rte_eth_dev *dev)
 static int
 nfp_net_close(struct rte_eth_dev *dev)
 {
+	uint8_t i;
 	struct nfp_net_hw *hw;
-	struct rte_pci_device *pci_dev;
 	struct nfp_pf_dev *pf_dev;
+	struct rte_pci_device *pci_dev;
 	struct nfp_app_fw_nic *app_fw_nic;
-	uint8_t i;
 
 	if (rte_eal_process_type() != RTE_PROC_PRIMARY)
 		return 0;
@@ -480,15 +481,15 @@ nfp_net_ethdev_ops_mount(struct nfp_net_hw *hw,
 static int
 nfp_net_init(struct rte_eth_dev *eth_dev)
 {
-	struct rte_pci_device *pci_dev;
+	int err;
+	uint16_t port;
+	uint64_t rx_base;
+	uint64_t tx_base;
+	struct nfp_net_hw *hw;
 	struct nfp_pf_dev *pf_dev;
+	struct rte_pci_device *pci_dev;
 	struct nfp_app_fw_nic *app_fw_nic;
-	struct nfp_net_hw *hw;
 	struct rte_ether_addr *tmp_ether_addr;
-	uint64_t rx_base;
-	uint64_t tx_base;
-	uint16_t port = 0;
-	int err;
 
 	PMD_INIT_FUNC_TRACE();
 
@@ -650,14 +651,14 @@ nfp_fw_upload(struct rte_pci_device *dev,
 		struct nfp_nsp *nsp,
 		char *card)
 {
-	struct nfp_cpp *cpp = nfp_nsp_cpp(nsp);
 	void *fw_buf;
-	char fw_name[125];
-	char serial[40];
 	size_t fsize;
+	char serial[40];
+	char fw_name[125];
 	uint16_t interface;
 	uint32_t cpp_serial_len;
 	const uint8_t *cpp_serial;
+	struct nfp_cpp *cpp = nfp_nsp_cpp(nsp);
 
 	cpp_serial_len = nfp_cpp_serial(cpp, &cpp_serial);
 	if (cpp_serial_len != NFP_SERIAL_LEN)
@@ -713,10 +714,10 @@ nfp_fw_setup(struct rte_pci_device *dev,
 		struct nfp_eth_table *nfp_eth_table,
 		struct nfp_hwinfo *hwinfo)
 {
+	int err;
+	char card_desc[100];
 	struct nfp_nsp *nsp;
 	const char *nfp_fw_model;
-	char card_desc[100];
-	int err = 0;
 
 	nfp_fw_model = nfp_hwinfo_lookup(hwinfo, "nffw.partno");
 	if (nfp_fw_model == NULL)
@@ -897,9 +898,9 @@ nfp_pf_init(struct rte_pci_device *pci_dev)
 	uint64_t addr;
 	uint32_t cpp_id;
 	struct nfp_cpp *cpp;
-	enum nfp_app_fw_id app_fw_id;
 	struct nfp_pf_dev *pf_dev;
 	struct nfp_hwinfo *hwinfo;
+	enum nfp_app_fw_id app_fw_id;
 	char name[RTE_ETH_NAME_MAX_LEN];
 	struct nfp_rtsym_table *sym_tbl;
 	struct nfp_eth_table *nfp_eth_table;
@@ -1220,8 +1221,8 @@ static const struct rte_pci_id pci_id_nfp_pf_net_map[] = {
 static int
 nfp_pci_uninit(struct rte_eth_dev *eth_dev)
 {
-	struct rte_pci_device *pci_dev;
 	uint16_t port_id;
+	struct rte_pci_device *pci_dev;
 
 	pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
 
diff --git a/drivers/net/nfp/nfp_ethdev_vf.c b/drivers/net/nfp/nfp_ethdev_vf.c
index 892300a909..aaef6ea91a 100644
--- a/drivers/net/nfp/nfp_ethdev_vf.c
+++ b/drivers/net/nfp/nfp_ethdev_vf.c
@@ -29,14 +29,15 @@ nfp_netvf_read_mac(struct nfp_net_hw *hw)
 static int
 nfp_netvf_start(struct rte_eth_dev *dev)
 {
-	struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
-	struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
-	uint32_t new_ctrl, update = 0;
+	int ret;
+	uint32_t new_ctrl;
+	uint32_t update = 0;
+	uint32_t intr_vector;
 	struct nfp_net_hw *hw;
 	struct rte_eth_conf *dev_conf;
 	struct rte_eth_rxmode *rxmode;
-	uint32_t intr_vector;
-	int ret;
+	struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
+	struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
 
 	hw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private);
 
@@ -254,15 +255,15 @@ nfp_netvf_ethdev_ops_mount(struct nfp_net_hw *hw,
 static int
 nfp_netvf_init(struct rte_eth_dev *eth_dev)
 {
-	struct rte_pci_device *pci_dev;
-	struct nfp_net_hw *hw;
-	struct rte_ether_addr *tmp_ether_addr;
-
-	uint64_t tx_bar_off = 0, rx_bar_off = 0;
+	int err;
 	uint32_t start_q;
 	uint16_t port = 0;
-	int err;
+	struct nfp_net_hw *hw;
+	uint64_t tx_bar_off = 0;
+	uint64_t rx_bar_off = 0;
+	struct rte_pci_device *pci_dev;
 	const struct nfp_dev_info *dev_info;
+	struct rte_ether_addr *tmp_ether_addr;
 
 	PMD_INIT_FUNC_TRACE();
 
diff --git a/drivers/net/nfp/nfp_flow.c b/drivers/net/nfp/nfp_flow.c
index 4c9904e36c..84b48daf85 100644
--- a/drivers/net/nfp/nfp_flow.c
+++ b/drivers/net/nfp/nfp_flow.c
@@ -761,9 +761,9 @@ nfp_flow_compile_metadata(struct nfp_flow_priv *priv,
 		uint32_t stats_ctx,
 		uint64_t cookie)
 {
-	struct nfp_fl_rule_metadata *nfp_flow_meta;
-	char *mbuf_off_exact;
 	char *mbuf_off_mask;
+	char *mbuf_off_exact;
+	struct nfp_fl_rule_metadata *nfp_flow_meta;
 
 	/*
 	 * Convert to long words as firmware expects
@@ -974,9 +974,9 @@ nfp_flow_key_layers_calculate_actions(const struct rte_flow_action actions[],
 	int ret = 0;
 	bool meter_flag = false;
 	bool tc_hl_flag = false;
-	bool mac_set_flag = false;
 	bool ip_set_flag = false;
 	bool tp_set_flag = false;
+	bool mac_set_flag = false;
 	bool ttl_tos_flag = false;
 	const struct rte_flow_action *action;
 
@@ -3201,11 +3201,11 @@ nfp_flow_action_geneve_encap_v4(struct nfp_app_fw_flower *app_fw_flower,
 {
 	uint64_t tun_id;
 	const struct rte_ether_hdr *eth;
+	struct nfp_fl_act_pre_tun *pre_tun;
+	struct nfp_fl_act_set_tun *set_tun;
 	const struct rte_flow_item_udp *udp;
 	const struct rte_flow_item_ipv4 *ipv4;
 	const struct rte_flow_item_geneve *geneve;
-	struct nfp_fl_act_pre_tun *pre_tun;
-	struct nfp_fl_act_set_tun *set_tun;
 	size_t act_pre_size = sizeof(struct nfp_fl_act_pre_tun);
 	size_t act_set_size = sizeof(struct nfp_fl_act_set_tun);
 
@@ -3241,11 +3241,11 @@ nfp_flow_action_geneve_encap_v6(struct nfp_app_fw_flower *app_fw_flower,
 	uint8_t tos;
 	uint64_t tun_id;
 	const struct rte_ether_hdr *eth;
+	struct nfp_fl_act_pre_tun *pre_tun;
+	struct nfp_fl_act_set_tun *set_tun;
 	const struct rte_flow_item_udp *udp;
 	const struct rte_flow_item_ipv6 *ipv6;
 	const struct rte_flow_item_geneve *geneve;
-	struct nfp_fl_act_pre_tun *pre_tun;
-	struct nfp_fl_act_set_tun *set_tun;
 	size_t act_pre_size = sizeof(struct nfp_fl_act_pre_tun);
 	size_t act_set_size = sizeof(struct nfp_fl_act_set_tun);
 
@@ -3281,10 +3281,10 @@ nfp_flow_action_nvgre_encap_v4(struct nfp_app_fw_flower *app_fw_flower,
 {
 	uint64_t tun_id;
 	const struct rte_ether_hdr *eth;
-	const struct rte_flow_item_ipv4 *ipv4;
-	const struct rte_flow_item_gre *gre;
 	struct nfp_fl_act_pre_tun *pre_tun;
 	struct nfp_fl_act_set_tun *set_tun;
+	const struct rte_flow_item_gre *gre;
+	const struct rte_flow_item_ipv4 *ipv4;
 	size_t act_pre_size = sizeof(struct nfp_fl_act_pre_tun);
 	size_t act_set_size = sizeof(struct nfp_fl_act_set_tun);
 
@@ -3319,10 +3319,10 @@ nfp_flow_action_nvgre_encap_v6(struct nfp_app_fw_flower *app_fw_flower,
 	uint8_t tos;
 	uint64_t tun_id;
 	const struct rte_ether_hdr *eth;
-	const struct rte_flow_item_ipv6 *ipv6;
-	const struct rte_flow_item_gre *gre;
 	struct nfp_fl_act_pre_tun *pre_tun;
 	struct nfp_fl_act_set_tun *set_tun;
+	const struct rte_flow_item_gre *gre;
+	const struct rte_flow_item_ipv6 *ipv6;
 	size_t act_pre_size = sizeof(struct nfp_fl_act_pre_tun);
 	size_t act_set_size = sizeof(struct nfp_fl_act_set_tun);
 
@@ -3467,12 +3467,12 @@ nfp_flow_compile_action(struct nfp_flower_representor *representor,
 	uint32_t count;
 	char *position;
 	char *action_data;
-	bool ttl_tos_flag = false;
-	bool tc_hl_flag = false;
 	bool drop_flag = false;
+	bool tc_hl_flag = false;
 	bool ip_set_flag = false;
 	bool tp_set_flag = false;
 	bool mac_set_flag = false;
+	bool ttl_tos_flag = false;
 	uint32_t total_actions = 0;
 	const struct rte_flow_action *action;
 	struct nfp_flower_meta_tci *meta_tci;
@@ -4283,10 +4283,10 @@ nfp_flow_priv_init(struct nfp_pf_dev *pf_dev)
 	size_t stats_size;
 	uint64_t ctx_count;
 	uint64_t ctx_split;
+	struct nfp_flow_priv *priv;
 	char mask_name[RTE_HASH_NAMESIZE];
 	char flow_name[RTE_HASH_NAMESIZE];
 	char pretun_name[RTE_HASH_NAMESIZE];
-	struct nfp_flow_priv *priv;
 	struct nfp_app_fw_flower *app_fw_flower;
 	const char *pci_name = strchr(pf_dev->pci_dev->name, ':') + 1;
 
diff --git a/drivers/net/nfp/nfp_rxtx.c b/drivers/net/nfp/nfp_rxtx.c
index 8cbb9b74a2..db6122eac3 100644
--- a/drivers/net/nfp/nfp_rxtx.c
+++ b/drivers/net/nfp/nfp_rxtx.c
@@ -188,9 +188,9 @@ nfp_net_rx_cksum(struct nfp_net_rxq *rxq,
 static int
 nfp_net_rx_fill_freelist(struct nfp_net_rxq *rxq)
 {
-	struct nfp_net_dp_buf *rxe = rxq->rxbufs;
-	uint64_t dma_addr;
 	uint16_t i;
+	uint64_t dma_addr;
+	struct nfp_net_dp_buf *rxe = rxq->rxbufs;
 
 	PMD_RX_LOG(DEBUG, "Fill Rx Freelist for %u descriptors",
 			rxq->rx_count);
@@ -241,17 +241,15 @@ nfp_net_rx_freelist_setup(struct rte_eth_dev *dev)
 uint32_t
 nfp_net_rx_queue_count(void *rx_queue)
 {
+	uint32_t idx;
+	uint32_t count = 0;
 	struct nfp_net_rxq *rxq;
 	struct nfp_net_rx_desc *rxds;
-	uint32_t idx;
-	uint32_t count;
 
 	rxq = rx_queue;
 
 	idx = rxq->rd_p;
 
-	count = 0;
-
 	/*
 	 * Other PMDs are just checking the DD bit in intervals of 4
 	 * descriptors and counting all four if the first has the DD
@@ -282,9 +280,9 @@ nfp_net_parse_chained_meta(uint8_t *meta_base,
 		rte_be32_t meta_header,
 		struct nfp_meta_parsed *meta)
 {
-	uint8_t *meta_offset;
 	uint32_t meta_info;
 	uint32_t vlan_info;
+	uint8_t *meta_offset;
 
 	meta_info = rte_be_to_cpu_32(meta_header);
 	meta_offset = meta_base + 4;
@@ -683,15 +681,15 @@ nfp_net_recv_pkts(void *rx_queue,
 		struct rte_mbuf **rx_pkts,
 		uint16_t nb_pkts)
 {
-	struct nfp_net_rxq *rxq;
-	struct nfp_net_rx_desc *rxds;
-	struct nfp_net_dp_buf *rxb;
-	struct nfp_net_hw *hw;
+	uint64_t dma_addr;
+	uint16_t avail = 0;
 	struct rte_mbuf *mb;
+	uint16_t nb_hold = 0;
+	struct nfp_net_hw *hw;
 	struct rte_mbuf *new_mb;
-	uint16_t nb_hold;
-	uint64_t dma_addr;
-	uint16_t avail;
+	struct nfp_net_rxq *rxq;
+	struct nfp_net_dp_buf *rxb;
+	struct nfp_net_rx_desc *rxds;
 	uint16_t avail_multiplexed = 0;
 
 	rxq = rx_queue;
@@ -706,8 +704,6 @@ nfp_net_recv_pkts(void *rx_queue,
 
 	hw = rxq->hw;
 
-	avail = 0;
-	nb_hold = 0;
 	while (avail + avail_multiplexed < nb_pkts) {
 		rxb = &rxq->rxbufs[rxq->rd_p];
 		if (unlikely(rxb == NULL)) {
@@ -883,12 +879,12 @@ nfp_net_rx_queue_setup(struct rte_eth_dev *dev,
 		const struct rte_eth_rxconf *rx_conf,
 		struct rte_mempool *mp)
 {
+	uint32_t rx_desc_sz;
 	uint16_t min_rx_desc;
 	uint16_t max_rx_desc;
-	const struct rte_memzone *tz;
-	struct nfp_net_rxq *rxq;
 	struct nfp_net_hw *hw;
-	uint32_t rx_desc_sz;
+	struct nfp_net_rxq *rxq;
+	const struct rte_memzone *tz;
 
 	hw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private);
 
@@ -995,8 +991,8 @@ nfp_net_rx_queue_setup(struct rte_eth_dev *dev,
 uint32_t
 nfp_net_tx_free_bufs(struct nfp_net_txq *txq)
 {
-	uint32_t qcp_rd_p;
 	uint32_t todo;
+	uint32_t qcp_rd_p;
 
 	PMD_TX_LOG(DEBUG, "queue %hu. Check for descriptor with a complete"
 			" status", txq->qidx);
@@ -1072,8 +1068,8 @@ nfp_net_set_meta_vlan(struct nfp_net_meta_raw *meta_data,
 		struct rte_mbuf *pkt,
 		uint8_t layer)
 {
-	uint16_t vlan_tci;
 	uint16_t tpid;
+	uint16_t vlan_tci;
 
 	tpid = RTE_ETHER_TYPE_VLAN;
 	vlan_tci = pkt->vlan_tci;
-- 
2.39.1


^ permalink raw reply	[flat|nested] 40+ messages in thread
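
The convention these hunks apply: one local variable per declaration,
ordered roughly from the shortest declaration line to the longest, with
scalars before structs and pointers, and initializers either folded into
the declaration or dropped when the variable is assigned before first
use. A minimal before/after sketch, condensed from the
nfp_cpp_bridge_serve_write() hunk above:

	/* Before: comma-grouped declarations in mixed order */
	struct nfp_cpp_area *area;
	uint32_t cpp_id, pos, len;
	size_t count, curlen;
	int err = 0;

	/* After: one declaration per line, sorted short to long */
	int err;
	uint32_t pos;
	uint32_t len;
	size_t count;
	size_t curlen;
	uint32_t cpp_id;
	struct nfp_cpp_area *area;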

* [PATCH v3 05/11] net/nfp: adjust the log statement
  2023-10-13  6:06   ` [PATCH v3 00/11] Unify the PMD coding style Chaoyong He
                       ` (3 preceding siblings ...)
  2023-10-13  6:06     ` [PATCH v3 04/11] net/nfp: standard the local variable coding style Chaoyong He
@ 2023-10-13  6:06     ` Chaoyong He
  2023-10-13  6:06     ` [PATCH v3 06/11] net/nfp: standard the comment style Chaoyong He
                       ` (6 subsequent siblings)
  11 siblings, 0 replies; 40+ messages in thread
From: Chaoyong He @ 2023-10-13  6:06 UTC (permalink / raw)
  To: dev; +Cc: oss-drivers, Chaoyong He, Long Wu, Peng Zhang

Add log statements to the important control logic, and remove
overly verbose info log statements.
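
The recurring pattern in the hunks below: checks that can fire at
runtime move from PMD_INIT_LOG to PMD_DRV_LOG, failure paths that
logged at INFO level are promoted to ERR, multi-line messages are
collapsed into a single concise line, and explicit "\n" is dropped from
PMD_CPP_LOG calls (the log macros in nfp_logs.h already append a
newline, as the PMD_INIT_LOG definition shows). A representative pair
taken from the nfp_net_configure() hunk:

	/* Before: an error path logged at INFO via the init-time macro */
	PMD_INIT_LOG(INFO, "TX mq_mode DCB and VMDq not supported");

	/* After: proper severity and the driver-runtime log type */
	PMD_DRV_LOG(ERR, "TX mq_mode DCB and VMDq not supported");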

Signed-off-by: Chaoyong He <chaoyong.he@corigine.com>
Reviewed-by: Long Wu <long.wu@corigine.com>
Reviewed-by: Peng Zhang <peng.zhang@corigine.com>
---
 drivers/net/nfp/flower/nfp_flower_ctrl.c      | 10 +---
 .../net/nfp/flower/nfp_flower_representor.c   |  4 +-
 drivers/net/nfp/nfd3/nfp_nfd3_dp.c            |  2 -
 drivers/net/nfp/nfdk/nfp_nfdk_dp.c            |  2 -
 drivers/net/nfp/nfp_common.c                  | 59 ++++++++-----------
 drivers/net/nfp/nfp_cpp_bridge.c              | 28 ++++-----
 drivers/net/nfp/nfp_ethdev.c                  | 21 +------
 drivers/net/nfp/nfp_ethdev_vf.c               | 17 +-----
 drivers/net/nfp/nfp_logs.h                    |  1 -
 drivers/net/nfp/nfp_rxtx.c                    | 22 ++-----
 10 files changed, 50 insertions(+), 116 deletions(-)
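
The series also tightens printf-style format specifiers in the touched
log statements: "0x%x" becomes "%#x", and "%u" becomes "%hu" where the
argument is a uint16_t. A small sketch of the idea, with hypothetical
values not taken from the diff:

	uint16_t port = 3;
	uint32_t port_id = 0x200;

	/*
	 * %hu documents that the argument is a uint16_t (promoted through
	 * the varargs), and the # flag supplies the 0x prefix for nonzero
	 * values instead of hard-coding it in the format string.
	 */
	PMD_INIT_LOG(INFO, "Using random mac address for port %hu", port);
	PMD_RX_LOG(DEBUG, "Representor Rx burst, port_id: %#x", port_id);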

diff --git a/drivers/net/nfp/flower/nfp_flower_ctrl.c b/drivers/net/nfp/flower/nfp_flower_ctrl.c
index 4967cc2375..d1c350ae93 100644
--- a/drivers/net/nfp/flower/nfp_flower_ctrl.c
+++ b/drivers/net/nfp/flower/nfp_flower_ctrl.c
@@ -88,15 +88,7 @@ nfp_flower_ctrl_vnic_recv(void *rx_queue,
 			 * responsibility of avoiding it. But we have
 			 * to give some info about the error
 			 */
-			PMD_RX_LOG(ERR,
-				"mbuf overflow likely due to the RX offset.\n"
-				"\t\tYour mbuf size should have extra space for"
-				" RX offset=%u bytes.\n"
-				"\t\tCurrently you just have %u bytes available"
-				" but the received packet is %u bytes long",
-				hw->rx_offset,
-				rxq->mbuf_size - hw->rx_offset,
-				mb->data_len);
+			PMD_RX_LOG(ERR, "mbuf overflow likely due to the RX offset.");
 			rte_pktmbuf_free(mb);
 			break;
 		}
diff --git a/drivers/net/nfp/flower/nfp_flower_representor.c b/drivers/net/nfp/flower/nfp_flower_representor.c
index 013ecbc998..bf794a1d70 100644
--- a/drivers/net/nfp/flower/nfp_flower_representor.c
+++ b/drivers/net/nfp/flower/nfp_flower_representor.c
@@ -464,7 +464,7 @@ nfp_flower_repr_rx_burst(void *rx_queue,
 	total_dequeue = rte_ring_dequeue_burst(repr->ring, (void *)rx_pkts,
 			nb_pkts, &available);
 	if (total_dequeue != 0) {
-		PMD_RX_LOG(DEBUG, "Representor Rx burst for %s, port_id: 0x%x, "
+		PMD_RX_LOG(DEBUG, "Representor Rx burst for %s, port_id: %#x, "
 				"received: %u, available: %u", repr->name,
 				repr->port_id, total_dequeue, available);
 
@@ -510,7 +510,7 @@ nfp_flower_repr_tx_burst(void *tx_queue,
 	pf_tx_queue = dev->data->tx_queues[0];
 	sent = nfp_flower_pf_xmit_pkts(pf_tx_queue, tx_pkts, nb_pkts);
 	if (sent != 0) {
-		PMD_TX_LOG(DEBUG, "Representor Tx burst for %s, port_id: 0x%x transmitted: %u",
+		PMD_TX_LOG(DEBUG, "Representor Tx burst for %s, port_id: %#x transmitted: %hu",
 				repr->name, repr->port_id, sent);
 		repr->repr_stats.opackets += sent;
 	}
diff --git a/drivers/net/nfp/nfd3/nfp_nfd3_dp.c b/drivers/net/nfp/nfd3/nfp_nfd3_dp.c
index 699f65ebef..51755f4324 100644
--- a/drivers/net/nfp/nfd3/nfp_nfd3_dp.c
+++ b/drivers/net/nfp/nfd3/nfp_nfd3_dp.c
@@ -381,8 +381,6 @@ nfp_net_nfd3_tx_queue_setup(struct rte_eth_dev *dev,
 
 	hw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private);
 
-	PMD_INIT_FUNC_TRACE();
-
 	nfp_net_tx_desc_limits(hw, &min_tx_desc, &max_tx_desc);
 
 	/* Validating number of descriptors */
diff --git a/drivers/net/nfp/nfdk/nfp_nfdk_dp.c b/drivers/net/nfp/nfdk/nfp_nfdk_dp.c
index 2426ffb261..dae87ac6df 100644
--- a/drivers/net/nfp/nfdk/nfp_nfdk_dp.c
+++ b/drivers/net/nfp/nfdk/nfp_nfdk_dp.c
@@ -455,8 +455,6 @@ nfp_net_nfdk_tx_queue_setup(struct rte_eth_dev *dev,
 
 	hw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private);
 
-	PMD_INIT_FUNC_TRACE();
-
 	nfp_net_tx_desc_limits(hw, &min_tx_desc, &max_tx_desc);
 
 	/* Validating number of descriptors */
diff --git a/drivers/net/nfp/nfp_common.c b/drivers/net/nfp/nfp_common.c
index 18291a1cde..f48e1930dc 100644
--- a/drivers/net/nfp/nfp_common.c
+++ b/drivers/net/nfp/nfp_common.c
@@ -207,7 +207,7 @@ __nfp_net_reconfig(struct nfp_net_hw *hw,
 			hw->qcp_cfg);
 
 	if (hw->qcp_cfg == NULL) {
-		PMD_INIT_LOG(ERR, "Bad configuration queue pointer");
+		PMD_DRV_LOG(ERR, "Bad configuration queue pointer");
 		return -ENXIO;
 	}
 
@@ -224,15 +224,15 @@ __nfp_net_reconfig(struct nfp_net_hw *hw,
 		if (new == 0)
 			break;
 		if ((new & NFP_NET_CFG_UPDATE_ERR) != 0) {
-			PMD_INIT_LOG(ERR, "Reconfig error: 0x%08x", new);
+			PMD_DRV_LOG(ERR, "Reconfig error: %#08x", new);
 			return -1;
 		}
 		if (cnt >= NFP_NET_POLL_TIMEOUT) {
-			PMD_INIT_LOG(ERR, "Reconfig timeout for 0x%08x after"
-					" %ums", update, cnt);
+			PMD_DRV_LOG(ERR, "Reconfig timeout for %#08x after %u ms",
+					update, cnt);
 			return -EIO;
 		}
-		nanosleep(&wait, 0); /* waiting for a 1ms */
+		nanosleep(&wait, 0); /* Waiting for a 1ms */
 	}
 	PMD_DRV_LOG(DEBUG, "Ack DONE");
 	return 0;
@@ -390,8 +390,6 @@ nfp_net_configure(struct rte_eth_dev *dev)
 	 * called after that internal process
 	 */
 
-	PMD_INIT_LOG(DEBUG, "Configure");
-
 	dev_conf = &dev->data->dev_conf;
 	rxmode = &dev_conf->rxmode;
 	txmode = &dev_conf->txmode;
@@ -401,20 +399,20 @@ nfp_net_configure(struct rte_eth_dev *dev)
 
 	/* Checking TX mode */
 	if (txmode->mq_mode != RTE_ETH_MQ_TX_NONE) {
-		PMD_INIT_LOG(INFO, "TX mq_mode DCB and VMDq not supported");
+		PMD_DRV_LOG(ERR, "TX mq_mode DCB and VMDq not supported");
 		return -EINVAL;
 	}
 
 	/* Checking RX mode */
 	if ((rxmode->mq_mode & RTE_ETH_MQ_RX_RSS_FLAG) != 0 &&
 			(hw->cap & NFP_NET_CFG_CTRL_RSS_ANY) == 0) {
-		PMD_INIT_LOG(INFO, "RSS not supported");
+		PMD_DRV_LOG(ERR, "RSS not supported");
 		return -EINVAL;
 	}
 
 	/* Checking MTU set */
 	if (rxmode->mtu > NFP_FRAME_SIZE_MAX) {
-		PMD_INIT_LOG(ERR, "MTU (%u) larger than NFP_FRAME_SIZE_MAX (%u) not supported",
+		PMD_DRV_LOG(ERR, "MTU (%u) larger than NFP_FRAME_SIZE_MAX (%u)",
 				rxmode->mtu, NFP_FRAME_SIZE_MAX);
 		return -ERANGE;
 	}
@@ -552,8 +550,7 @@ nfp_net_set_mac_addr(struct rte_eth_dev *dev,
 	hw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private);
 	if ((hw->ctrl & NFP_NET_CFG_CTRL_ENABLE) != 0 &&
 			(hw->cap & NFP_NET_CFG_CTRL_LIVE_ADDR) == 0) {
-		PMD_INIT_LOG(INFO, "MAC address unable to change when"
-				" port enabled");
+		PMD_DRV_LOG(ERR, "MAC address unable to change when port enabled");
 		return -EBUSY;
 	}
 
@@ -567,7 +564,7 @@ nfp_net_set_mac_addr(struct rte_eth_dev *dev,
 			(hw->cap & NFP_NET_CFG_CTRL_LIVE_ADDR) != 0)
 		ctrl |= NFP_NET_CFG_CTRL_LIVE_ADDR;
 	if (nfp_net_reconfig(hw, ctrl, update) != 0) {
-		PMD_INIT_LOG(INFO, "MAC address update failed");
+		PMD_DRV_LOG(ERR, "MAC address update failed");
 		return -EIO;
 	}
 	return 0;
@@ -582,21 +579,21 @@ nfp_configure_rx_interrupt(struct rte_eth_dev *dev,
 
 	if (rte_intr_vec_list_alloc(intr_handle, "intr_vec",
 				dev->data->nb_rx_queues) != 0) {
-		PMD_INIT_LOG(ERR, "Failed to allocate %d rx_queues"
-				" intr_vec", dev->data->nb_rx_queues);
+		PMD_DRV_LOG(ERR, "Failed to allocate %d rx_queues intr_vec",
+				dev->data->nb_rx_queues);
 		return -ENOMEM;
 	}
 
 	hw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private);
 
 	if (rte_intr_type_get(intr_handle) == RTE_INTR_HANDLE_UIO) {
-		PMD_INIT_LOG(INFO, "VF: enabling RX interrupt with UIO");
+		PMD_DRV_LOG(INFO, "VF: enabling RX interrupt with UIO");
 		/* UIO just supports one queue and no LSC*/
 		nn_cfg_writeb(hw, NFP_NET_CFG_RXR_VEC(0), 0);
 		if (rte_intr_vec_list_index_set(intr_handle, 0, 0) != 0)
 			return -1;
 	} else {
-		PMD_INIT_LOG(INFO, "VF: enabling RX interrupt with VFIO");
+		PMD_DRV_LOG(INFO, "VF: enabling RX interrupt with VFIO");
 		for (i = 0; i < dev->data->nb_rx_queues; i++) {
 			/*
 			 * The first msix vector is reserved for non
@@ -605,8 +602,6 @@ nfp_configure_rx_interrupt(struct rte_eth_dev *dev,
 			nn_cfg_writeb(hw, NFP_NET_CFG_RXR_VEC(i), i + 1);
 			if (rte_intr_vec_list_index_set(intr_handle, i, i + 1) != 0)
 				return -1;
-			PMD_INIT_LOG(DEBUG, "intr_vec[%d]= %d", i,
-					rte_intr_vec_list_index_get(intr_handle, i));
 		}
 	}
 
@@ -691,8 +686,6 @@ nfp_net_promisc_enable(struct rte_eth_dev *dev)
 	struct nfp_net_hw *hw;
 	struct nfp_flower_representor *repr;
 
-	PMD_DRV_LOG(DEBUG, "Promiscuous mode enable");
-
 	if ((dev->data->dev_flags & RTE_ETH_DEV_REPRESENTOR) != 0) {
 		repr = dev->data->dev_private;
 		hw = repr->app_fw_flower->pf_hw;
@@ -701,7 +694,7 @@ nfp_net_promisc_enable(struct rte_eth_dev *dev)
 	}
 
 	if ((hw->cap & NFP_NET_CFG_CTRL_PROMISC) == 0) {
-		PMD_INIT_LOG(INFO, "Promiscuous mode not supported");
+		PMD_DRV_LOG(ERR, "Promiscuous mode not supported");
 		return -ENOTSUP;
 	}
 
@@ -774,9 +767,6 @@ nfp_net_link_update(struct rte_eth_dev *dev,
 	struct rte_eth_link link;
 	struct nfp_eth_table *nfp_eth_table;
 
-
-	PMD_DRV_LOG(DEBUG, "Link update");
-
 	hw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private);
 
 	/* Read link status */
@@ -1636,9 +1626,9 @@ nfp_net_rss_reta_write(struct rte_eth_dev *dev,
 	hw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private);
 
 	if (reta_size != NFP_NET_CFG_RSS_ITBL_SZ) {
-		PMD_DRV_LOG(ERR, "The size of hash lookup table configured "
-				"(%d) doesn't match the number hardware can supported "
-				"(%d)", reta_size, NFP_NET_CFG_RSS_ITBL_SZ);
+		PMD_DRV_LOG(ERR, "The size of hash lookup table configured (%hu)"
+				" doesn't match the number hardware can support (%d)",
+				reta_size, NFP_NET_CFG_RSS_ITBL_SZ);
 		return -EINVAL;
 	}
 
@@ -1719,9 +1709,9 @@ nfp_net_reta_query(struct rte_eth_dev *dev,
 		return -EINVAL;
 
 	if (reta_size != NFP_NET_CFG_RSS_ITBL_SZ) {
-		PMD_DRV_LOG(ERR, "The size of hash lookup table configured "
-				"(%d) doesn't match the number hardware can supported "
-				"(%d)", reta_size, NFP_NET_CFG_RSS_ITBL_SZ);
+		PMD_DRV_LOG(ERR, "The size of hash lookup table configured (%d)"
+				" doesn't match the number hardware can support (%d)",
+				reta_size, NFP_NET_CFG_RSS_ITBL_SZ);
 		return -EINVAL;
 	}
 
@@ -1827,7 +1817,7 @@ nfp_net_rss_hash_update(struct rte_eth_dev *dev,
 	}
 
 	if (rss_conf->rss_key_len > NFP_NET_CFG_RSS_KEY_SZ) {
-		PMD_DRV_LOG(ERR, "hash key too long");
+		PMD_DRV_LOG(ERR, "RSS hash key too long");
 		return -EINVAL;
 	}
 
@@ -1910,9 +1900,6 @@ nfp_net_rss_config_default(struct rte_eth_dev *dev)
 	uint16_t rx_queues = dev->data->nb_rx_queues;
 	struct rte_eth_rss_reta_entry64 nfp_reta_conf[2];
 
-	PMD_DRV_LOG(INFO, "setting default RSS conf for %u queues",
-			rx_queues);
-
 	nfp_reta_conf[0].mask = ~0x0;
 	nfp_reta_conf[1].mask = ~0x0;
 
@@ -1929,7 +1916,7 @@ nfp_net_rss_config_default(struct rte_eth_dev *dev)
 
 	dev_conf = &dev->data->dev_conf;
 	if (dev_conf == NULL) {
-		PMD_DRV_LOG(INFO, "wrong rss conf");
+		PMD_DRV_LOG(ERR, "Wrong rss conf");
 		return -EINVAL;
 	}
 	rss_conf = dev_conf->rx_adv_conf.rss_conf;
diff --git a/drivers/net/nfp/nfp_cpp_bridge.c b/drivers/net/nfp/nfp_cpp_bridge.c
index 727ec7a7b2..222cfdcbc3 100644
--- a/drivers/net/nfp/nfp_cpp_bridge.c
+++ b/drivers/net/nfp/nfp_cpp_bridge.c
@@ -130,7 +130,7 @@ nfp_cpp_bridge_serve_write(int sockfd,
 	uint32_t tmpbuf[16];
 	struct nfp_cpp_area *area;
 
-	PMD_CPP_LOG(DEBUG, "%s: offset size %zu, count_size: %zu\n", __func__,
+	PMD_CPP_LOG(DEBUG, "%s: offset size %zu, count_size: %zu", __func__,
 			sizeof(off_t), sizeof(size_t));
 
 	/* Reading the count param */
@@ -149,9 +149,9 @@ nfp_cpp_bridge_serve_write(int sockfd,
 	cpp_id = (offset >> 40) << 8;
 	nfp_offset = offset & ((1ull << 40) - 1);
 
-	PMD_CPP_LOG(DEBUG, "%s: count %zu and offset %jd\n", __func__, count,
+	PMD_CPP_LOG(DEBUG, "%s: count %zu and offset %jd", __func__, count,
 			offset);
-	PMD_CPP_LOG(DEBUG, "%s: cpp_id %08x and nfp_offset %jd\n", __func__,
+	PMD_CPP_LOG(DEBUG, "%s: cpp_id %08x and nfp_offset %jd", __func__,
 			cpp_id, nfp_offset);
 
 	/* Adjust length if not aligned */
@@ -162,7 +162,7 @@ nfp_cpp_bridge_serve_write(int sockfd,
 	}
 
 	while (count > 0) {
-		/* configure a CPP PCIe2CPP BAR for mapping the CPP target */
+		/* Configure a CPP PCIe2CPP BAR for mapping the CPP target */
 		area = nfp_cpp_area_alloc_with_name(cpp, cpp_id, "nfp.cdev",
 				nfp_offset, curlen);
 		if (area == NULL) {
@@ -170,7 +170,7 @@ nfp_cpp_bridge_serve_write(int sockfd,
 			return -EIO;
 		}
 
-		/* mapping the target */
+		/* Mapping the target */
 		err = nfp_cpp_area_acquire(area);
 		if (err < 0) {
 			PMD_CPP_LOG(ERR, "area acquire failed");
@@ -183,7 +183,7 @@ nfp_cpp_bridge_serve_write(int sockfd,
 			if (len > sizeof(tmpbuf))
 				len = sizeof(tmpbuf);
 
-			PMD_CPP_LOG(DEBUG, "%s: Receive %u of %zu\n", __func__,
+			PMD_CPP_LOG(DEBUG, "%s: Receive %u of %zu", __func__,
 					len, count);
 			err = recv(sockfd, tmpbuf, len, MSG_WAITALL);
 			if (err != (int)len) {
@@ -235,7 +235,7 @@ nfp_cpp_bridge_serve_read(int sockfd,
 	uint32_t tmpbuf[16];
 	struct nfp_cpp_area *area;
 
-	PMD_CPP_LOG(DEBUG, "%s: offset size %zu, count_size: %zu\n", __func__,
+	PMD_CPP_LOG(DEBUG, "%s: offset size %zu, count_size: %zu", __func__,
 			sizeof(off_t), sizeof(size_t));
 
 	/* Reading the count param */
@@ -254,9 +254,9 @@ nfp_cpp_bridge_serve_read(int sockfd,
 	cpp_id = (offset >> 40) << 8;
 	nfp_offset = offset & ((1ull << 40) - 1);
 
-	PMD_CPP_LOG(DEBUG, "%s: count %zu and offset %jd\n", __func__, count,
+	PMD_CPP_LOG(DEBUG, "%s: count %zu and offset %jd", __func__, count,
 			offset);
-	PMD_CPP_LOG(DEBUG, "%s: cpp_id %08x and nfp_offset %jd\n", __func__,
+	PMD_CPP_LOG(DEBUG, "%s: cpp_id %08x and nfp_offset %jd", __func__,
 			cpp_id, nfp_offset);
 
 	/* Adjust length if not aligned */
@@ -293,7 +293,7 @@ nfp_cpp_bridge_serve_read(int sockfd,
 				nfp_cpp_area_free(area);
 				return -EIO;
 			}
-			PMD_CPP_LOG(DEBUG, "%s: sending %u of %zu\n", __func__,
+			PMD_CPP_LOG(DEBUG, "%s: sending %u of %zu", __func__,
 					len, count);
 
 			err = send(sockfd, tmpbuf, len, 0);
@@ -353,7 +353,7 @@ nfp_cpp_bridge_serve_ioctl(int sockfd,
 
 	tmp = nfp_cpp_model(cpp);
 
-	PMD_CPP_LOG(DEBUG, "%s: sending NFP model %08x\n", __func__, tmp);
+	PMD_CPP_LOG(DEBUG, "%s: sending NFP model %08x", __func__, tmp);
 
 	err = send(sockfd, &tmp, 4, 0);
 	if (err != 4) {
@@ -363,7 +363,7 @@ nfp_cpp_bridge_serve_ioctl(int sockfd,
 
 	tmp = nfp_cpp_interface(cpp);
 
-	PMD_CPP_LOG(DEBUG, "%s: sending NFP interface %08x\n", __func__, tmp);
+	PMD_CPP_LOG(DEBUG, "%s: sending NFP interface %08x", __func__, tmp);
 
 	err = send(sockfd, &tmp, 4, 0);
 	if (err != 4) {
@@ -440,11 +440,11 @@ nfp_cpp_bridge_service_func(void *args)
 		while (1) {
 			ret = recv(datafd, &op, 4, 0);
 			if (ret <= 0) {
-				PMD_CPP_LOG(DEBUG, "%s: socket close\n", __func__);
+				PMD_CPP_LOG(DEBUG, "%s: socket close", __func__);
 				break;
 			}
 
-			PMD_CPP_LOG(DEBUG, "%s: getting op %u\n", __func__, op);
+			PMD_CPP_LOG(DEBUG, "%s: getting op %u", __func__, op);
 
 			if (op == NFP_BRIDGE_OP_READ)
 				nfp_cpp_bridge_serve_read(datafd, cpp);
diff --git a/drivers/net/nfp/nfp_ethdev.c b/drivers/net/nfp/nfp_ethdev.c
index 7d149decfb..72abc4c16e 100644
--- a/drivers/net/nfp/nfp_ethdev.c
+++ b/drivers/net/nfp/nfp_ethdev.c
@@ -60,8 +60,6 @@ nfp_net_start(struct rte_eth_dev *dev)
 	pf_dev = NFP_NET_DEV_PRIVATE_TO_PF(dev->data->dev_private);
 	app_fw_nic = NFP_PRIV_TO_APP_FW_NIC(pf_dev->app_fw_priv);
 
-	PMD_INIT_LOG(DEBUG, "Start");
-
 	/* Disabling queues just in case... */
 	nfp_net_disable_queues(dev);
 
@@ -194,8 +192,6 @@ nfp_net_stop(struct rte_eth_dev *dev)
 {
 	struct nfp_net_hw *hw;
 
-	PMD_INIT_LOG(DEBUG, "Stop");
-
 	hw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private);
 
 	nfp_net_disable_queues(dev);
@@ -220,8 +216,6 @@ nfp_net_set_link_up(struct rte_eth_dev *dev)
 {
 	struct nfp_net_hw *hw;
 
-	PMD_DRV_LOG(DEBUG, "Set link up");
-
 	hw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private);
 
 	if (rte_eal_process_type() == RTE_PROC_PRIMARY)
@@ -237,8 +231,6 @@ nfp_net_set_link_down(struct rte_eth_dev *dev)
 {
 	struct nfp_net_hw *hw;
 
-	PMD_DRV_LOG(DEBUG, "Set link down");
-
 	hw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private);
 
 	if (rte_eal_process_type() == RTE_PROC_PRIMARY)
@@ -261,8 +253,6 @@ nfp_net_close(struct rte_eth_dev *dev)
 	if (rte_eal_process_type() != RTE_PROC_PRIMARY)
 		return 0;
 
-	PMD_INIT_LOG(DEBUG, "Close");
-
 	pf_dev = NFP_NET_DEV_PRIVATE_TO_PF(dev->data->dev_private);
 	hw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private);
 	pci_dev = RTE_ETH_DEV_TO_PCI(dev);
@@ -491,8 +481,6 @@ nfp_net_init(struct rte_eth_dev *eth_dev)
 	struct nfp_app_fw_nic *app_fw_nic;
 	struct rte_ether_addr *tmp_ether_addr;
 
-	PMD_INIT_FUNC_TRACE();
-
 	pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
 
 	/* Use backpointer here to the PF of this eth_dev */
@@ -513,7 +501,7 @@ nfp_net_init(struct rte_eth_dev *eth_dev)
 	 */
 	hw = app_fw_nic->ports[port];
 
-	PMD_INIT_LOG(DEBUG, "Working with physical port number: %d, "
+	PMD_INIT_LOG(DEBUG, "Working with physical port number: %hu, "
 			"NFP internal port number: %d", port, hw->nfp_idx);
 
 	rte_eth_copy_pci_info(eth_dev, pci_dev);
@@ -579,9 +567,6 @@ nfp_net_init(struct rte_eth_dev *eth_dev)
 	tx_base = nn_cfg_readl(hw, NFP_NET_CFG_START_TXQ);
 	rx_base = nn_cfg_readl(hw, NFP_NET_CFG_START_RXQ);
 
-	PMD_INIT_LOG(DEBUG, "tx_base: 0x%" PRIx64 "", tx_base);
-	PMD_INIT_LOG(DEBUG, "rx_base: 0x%" PRIx64 "", rx_base);
-
 	hw->tx_bar = pf_dev->qc_bar + tx_base * NFP_QCP_QUEUE_ADDR_SZ;
 	hw->rx_bar = pf_dev->qc_bar + rx_base * NFP_QCP_QUEUE_ADDR_SZ;
 	eth_dev->data->dev_private = hw;
@@ -627,7 +612,7 @@ nfp_net_init(struct rte_eth_dev *eth_dev)
 
 	eth_dev->data->dev_flags |= RTE_ETH_DEV_AUTOFILL_QUEUE_XSTATS;
 
-	PMD_INIT_LOG(INFO, "port %d VendorID=0x%x DeviceID=0x%x "
+	PMD_INIT_LOG(INFO, "port %d VendorID=%#x DeviceID=%#x "
 			"mac=" RTE_ETHER_ADDR_PRT_FMT,
 			eth_dev->data->port_id, pci_dev->id.vendor_id,
 			pci_dev->id.device_id,
@@ -997,7 +982,7 @@ nfp_pf_init(struct rte_pci_device *pci_dev)
 		goto pf_cleanup;
 	}
 
-	PMD_INIT_LOG(DEBUG, "qc_bar address: 0x%p", pf_dev->qc_bar);
+	PMD_INIT_LOG(DEBUG, "qc_bar address: %p", pf_dev->qc_bar);
 
 	/*
 	 * PF initialization has been done at this point. Call app specific
diff --git a/drivers/net/nfp/nfp_ethdev_vf.c b/drivers/net/nfp/nfp_ethdev_vf.c
index aaef6ea91a..d3c3c9e953 100644
--- a/drivers/net/nfp/nfp_ethdev_vf.c
+++ b/drivers/net/nfp/nfp_ethdev_vf.c
@@ -41,8 +41,6 @@ nfp_netvf_start(struct rte_eth_dev *dev)
 
 	hw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private);
 
-	PMD_INIT_LOG(DEBUG, "Start");
-
 	/* Disabling queues just in case... */
 	nfp_net_disable_queues(dev);
 
@@ -136,8 +134,6 @@ nfp_netvf_start(struct rte_eth_dev *dev)
 static int
 nfp_netvf_stop(struct rte_eth_dev *dev)
 {
-	PMD_INIT_LOG(DEBUG, "Stop");
-
 	nfp_net_disable_queues(dev);
 
 	/* Clear queues */
@@ -170,8 +166,6 @@ nfp_netvf_close(struct rte_eth_dev *dev)
 	if (rte_eal_process_type() != RTE_PROC_PRIMARY)
 		return 0;
 
-	PMD_INIT_LOG(DEBUG, "Close");
-
 	pci_dev = RTE_ETH_DEV_TO_PCI(dev);
 
 	/*
@@ -265,8 +259,6 @@ nfp_netvf_init(struct rte_eth_dev *eth_dev)
 	const struct nfp_dev_info *dev_info;
 	struct rte_ether_addr *tmp_ether_addr;
 
-	PMD_INIT_FUNC_TRACE();
-
 	pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
 
 	dev_info = nfp_dev_info_get(pci_dev->id.device_id);
@@ -301,7 +293,7 @@ nfp_netvf_init(struct rte_eth_dev *eth_dev)
 	hw->eth_xstats_base = rte_malloc("rte_eth_xstat",
 			sizeof(struct rte_eth_xstat) * nfp_net_xstats_size(eth_dev), 0);
 	if (hw->eth_xstats_base == NULL) {
-		PMD_INIT_LOG(ERR, "no memory for xstats base values on device %s!",
+		PMD_INIT_LOG(ERR, "No memory for xstats base values on device %s!",
 				pci_dev->device.name);
 		return -ENOMEM;
 	}
@@ -312,9 +304,6 @@ nfp_netvf_init(struct rte_eth_dev *eth_dev)
 	start_q = nn_cfg_readl(hw, NFP_NET_CFG_START_RXQ);
 	rx_bar_off = nfp_qcp_queue_offset(dev_info, start_q);
 
-	PMD_INIT_LOG(DEBUG, "tx_bar_off: 0x%" PRIx64 "", tx_bar_off);
-	PMD_INIT_LOG(DEBUG, "rx_bar_off: 0x%" PRIx64 "", rx_bar_off);
-
 	hw->tx_bar = (uint8_t *)pci_dev->mem_resource[2].addr + tx_bar_off;
 	hw->rx_bar = (uint8_t *)pci_dev->mem_resource[2].addr + rx_bar_off;
 
@@ -345,7 +334,7 @@ nfp_netvf_init(struct rte_eth_dev *eth_dev)
 
 	tmp_ether_addr = &hw->mac_addr;
 	if (rte_is_valid_assigned_ether_addr(tmp_ether_addr) == 0) {
-		PMD_INIT_LOG(INFO, "Using random mac address for port %d", port);
+		PMD_INIT_LOG(INFO, "Using random mac address for port %hu", port);
 		/* Using random mac addresses for VFs */
 		rte_eth_random_addr(&hw->mac_addr.addr_bytes[0]);
 		nfp_net_write_mac(hw, &hw->mac_addr.addr_bytes[0]);
@@ -359,7 +348,7 @@ nfp_netvf_init(struct rte_eth_dev *eth_dev)
 
 	eth_dev->data->dev_flags |= RTE_ETH_DEV_AUTOFILL_QUEUE_XSTATS;
 
-	PMD_INIT_LOG(INFO, "port %d VendorID=0x%x DeviceID=0x%x "
+	PMD_INIT_LOG(INFO, "port %hu VendorID=%#x DeviceID=%#x "
 			"mac=" RTE_ETHER_ADDR_PRT_FMT,
 			eth_dev->data->port_id, pci_dev->id.vendor_id,
 			pci_dev->id.device_id,
diff --git a/drivers/net/nfp/nfp_logs.h b/drivers/net/nfp/nfp_logs.h
index 315a57811c..16ff61700b 100644
--- a/drivers/net/nfp/nfp_logs.h
+++ b/drivers/net/nfp/nfp_logs.h
@@ -12,7 +12,6 @@ extern int nfp_logtype_init;
 #define PMD_INIT_LOG(level, fmt, args...) \
 	rte_log(RTE_LOG_ ## level, nfp_logtype_init, \
 		"%s(): " fmt "\n", __func__, ## args)
-#define PMD_INIT_FUNC_TRACE() PMD_INIT_LOG(DEBUG, " >>")
 
 #ifdef RTE_ETHDEV_DEBUG_RX
 extern int nfp_logtype_rx;
diff --git a/drivers/net/nfp/nfp_rxtx.c b/drivers/net/nfp/nfp_rxtx.c
index db6122eac3..b37a338b2f 100644
--- a/drivers/net/nfp/nfp_rxtx.c
+++ b/drivers/net/nfp/nfp_rxtx.c
@@ -192,7 +192,7 @@ nfp_net_rx_fill_freelist(struct nfp_net_rxq *rxq)
 	uint64_t dma_addr;
 	struct nfp_net_dp_buf *rxe = rxq->rxbufs;
 
-	PMD_RX_LOG(DEBUG, "Fill Rx Freelist for %u descriptors",
+	PMD_RX_LOG(DEBUG, "Fill Rx Freelist for %hu descriptors",
 			rxq->rx_count);
 
 	for (i = 0; i < rxq->rx_count; i++) {
@@ -212,14 +212,13 @@ nfp_net_rx_fill_freelist(struct nfp_net_rxq *rxq)
 		rxd->fld.dma_addr_hi = (dma_addr >> 32) & 0xffff;
 		rxd->fld.dma_addr_lo = dma_addr & 0xffffffff;
 		rxe[i].mbuf = mbuf;
-		PMD_RX_LOG(DEBUG, "[%d]: %" PRIx64, i, dma_addr);
 	}
 
 	/* Make sure all writes are flushed before telling the hardware */
 	rte_wmb();
 
 	/* Not advertising the whole ring as the firmware gets confused if so */
-	PMD_RX_LOG(DEBUG, "Increment FL write pointer in %u", rxq->rx_count - 1);
+	PMD_RX_LOG(DEBUG, "Increment FL write pointer in %hu", rxq->rx_count - 1);
 
 	nfp_qcp_ptr_add(rxq->qcp_fl, NFP_QCP_WRITE_PTR, rxq->rx_count - 1);
 
@@ -432,7 +431,7 @@ nfp_net_parse_meta_qinq(const struct nfp_meta_parsed *meta,
 	if (meta->vlan[0].offload == 0)
 		mb->vlan_tci = rte_cpu_to_le_16(meta->vlan[0].tci);
 	mb->vlan_tci_outer = rte_cpu_to_le_16(meta->vlan[1].tci);
-	PMD_RX_LOG(DEBUG, "Received outer vlan is %u inter vlan is %u",
+	PMD_RX_LOG(DEBUG, "Received outer vlan TCI is %u, inner vlan TCI is %u",
 			mb->vlan_tci_outer, mb->vlan_tci);
 	mb->ol_flags |= RTE_MBUF_F_RX_QINQ | RTE_MBUF_F_RX_QINQ_STRIPPED;
 }
@@ -754,15 +753,7 @@ nfp_net_recv_pkts(void *rx_queue,
 			 * responsibility of avoiding it. But we have
 			 * to give some info about the error
 			 */
-			PMD_RX_LOG(ERR,
-					"mbuf overflow likely due to the RX offset.\n"
-					"\t\tYour mbuf size should have extra space for"
-					" RX offset=%u bytes.\n"
-					"\t\tCurrently you just have %u bytes available"
-					" but the received packet is %u bytes long",
-					hw->rx_offset,
-					rxq->mbuf_size - hw->rx_offset,
-					mb->data_len);
+			PMD_RX_LOG(ERR, "mbuf overflow likely due to the RX offset.");
 			rte_pktmbuf_free(mb);
 			break;
 		}
@@ -888,8 +879,6 @@ nfp_net_rx_queue_setup(struct rte_eth_dev *dev,
 
 	hw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private);
 
-	PMD_INIT_FUNC_TRACE();
-
 	nfp_net_rx_desc_limits(hw, &min_rx_desc, &max_rx_desc);
 
 	/* Validating number of descriptors */
@@ -965,9 +954,6 @@ nfp_net_rx_queue_setup(struct rte_eth_dev *dev,
 		return -ENOMEM;
 	}
 
-	PMD_RX_LOG(DEBUG, "rxbufs=%p hw_ring=%p dma_addr=0x%" PRIx64,
-			rxq->rxbufs, rxq->rxds, (unsigned long)rxq->dma);
-
 	nfp_net_reset_rx_queue(rxq);
 
 	rxq->hw = hw;
-- 
2.39.1


^ permalink raw reply	[flat|nested] 40+ messages in thread

* [PATCH v3 06/11] net/nfp: standard the comment style
  2023-10-13  6:06   ` [PATCH v3 00/11] Unify the PMD coding style Chaoyong He
                       ` (4 preceding siblings ...)
  2023-10-13  6:06     ` [PATCH v3 05/11] net/nfp: adjust the log statement Chaoyong He
@ 2023-10-13  6:06     ` Chaoyong He
  2023-10-13  6:06     ` [PATCH v3 07/11] net/nfp: standard the blank character Chaoyong He
                       ` (5 subsequent siblings)
  11 siblings, 0 replies; 40+ messages in thread
From: Chaoyong He @ 2023-10-13  6:06 UTC (permalink / raw)
  To: dev; +Cc: oss-drivers, Chaoyong He, Long Wu, Peng Zhang

Follow the DPDK coding style and use the kdoc comment style.
Also delete some comments which are no longer valid, and add some
comments to help readers understand the logic.
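
In kdoc style, a documentation comment opens with "/**" and a trailing
comment that documents the member on the same line opens with "/**<".
A minimal sketch of both forms on a hypothetical struct and function,
not taken from the patch:

	/** Example private structure (hypothetical) */
	struct example_priv {
		/** Doc comment placed before the member it describes */
		uint8_t num_reprs;

		uint64_t ext_features;    /**< Trailing form for this member */
	};

	/**
	 * Check whether a queue has work pending (hypothetical helper)
	 *
	 * @param txq
	 *   TX queue to check
	 *
	 * @return
	 *   True when descriptors are pending
	 */
	static bool
	example_txq_pending(struct nfp_net_txq *txq);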

Signed-off-by: Chaoyong He <chaoyong.he@corigine.com>
Reviewed-by: Long Wu <long.wu@corigine.com>
Reviewed-by: Peng Zhang <peng.zhang@corigine.com>
---
 drivers/net/nfp/flower/nfp_conntrack.c        |   4 +-
 drivers/net/nfp/flower/nfp_flower.c           |  10 +-
 drivers/net/nfp/flower/nfp_flower.h           |  28 ++--
 drivers/net/nfp/flower/nfp_flower_cmsg.c      |   2 +-
 drivers/net/nfp/flower/nfp_flower_cmsg.h      |  56 +++----
 drivers/net/nfp/flower/nfp_flower_ctrl.c      |  16 +-
 .../net/nfp/flower/nfp_flower_representor.c   |  42 +++--
 .../net/nfp/flower/nfp_flower_representor.h   |   2 +-
 drivers/net/nfp/nfd3/nfp_nfd3.h               |  33 ++--
 drivers/net/nfp/nfd3/nfp_nfd3_dp.c            |  24 ++-
 drivers/net/nfp/nfdk/nfp_nfdk.h               |  41 ++---
 drivers/net/nfp/nfdk/nfp_nfdk_dp.c            |   8 +-
 drivers/net/nfp/nfp_common.c                  | 152 ++++++++----------
 drivers/net/nfp/nfp_common.h                  |  61 +++----
 drivers/net/nfp/nfp_cpp_bridge.c              |   6 +-
 drivers/net/nfp/nfp_ctrl.h                    |  34 ++--
 drivers/net/nfp/nfp_ethdev.c                  |  40 +++--
 drivers/net/nfp/nfp_ethdev_vf.c               |  15 +-
 drivers/net/nfp/nfp_flow.c                    |  62 +++----
 drivers/net/nfp/nfp_flow.h                    |  10 +-
 drivers/net/nfp/nfp_ipsec.h                   |  12 +-
 drivers/net/nfp/nfp_rxtx.c                    | 125 ++++++--------
 drivers/net/nfp/nfp_rxtx.h                    |  18 +--
 23 files changed, 354 insertions(+), 447 deletions(-)

diff --git a/drivers/net/nfp/flower/nfp_conntrack.c b/drivers/net/nfp/flower/nfp_conntrack.c
index 7b84b12546..f89003be8b 100644
--- a/drivers/net/nfp/flower/nfp_conntrack.c
+++ b/drivers/net/nfp/flower/nfp_conntrack.c
@@ -667,8 +667,8 @@ nfp_ct_flow_entry_get(struct nfp_ct_zone_entry *ze,
 {
 	bool ret;
 	uint8_t loop;
-	uint8_t item_cnt = 1;      /* the RTE_FLOW_ITEM_TYPE_END */
-	uint8_t action_cnt = 1;    /* the RTE_FLOW_ACTION_TYPE_END */
+	uint8_t item_cnt = 1;      /* The RTE_FLOW_ITEM_TYPE_END */
+	uint8_t action_cnt = 1;    /* The RTE_FLOW_ACTION_TYPE_END */
 	struct nfp_flow_priv *priv;
 	struct nfp_ct_map_entry *me;
 	struct nfp_ct_flow_entry *fe;
diff --git a/drivers/net/nfp/flower/nfp_flower.c b/drivers/net/nfp/flower/nfp_flower.c
index 7a4e671178..4453ae7b5e 100644
--- a/drivers/net/nfp/flower/nfp_flower.c
+++ b/drivers/net/nfp/flower/nfp_flower.c
@@ -208,7 +208,7 @@ nfp_flower_pf_close(struct rte_eth_dev *dev)
 		nfp_net_reset_rx_queue(this_rx_q);
 	}
 
-	/* Cancel possible impending LSC work here before releasing the port*/
+	/* Cancel possible impending LSC work here before releasing the port */
 	rte_eal_alarm_cancel(nfp_net_dev_interrupt_delayed_handler, (void *)dev);
 
 	nn_cfg_writeb(hw, NFP_NET_CFG_LSC, 0xff);
@@ -488,7 +488,7 @@ nfp_flower_init_ctrl_vnic(struct nfp_net_hw *hw)
 
 		/*
 		 * Tracking mbuf size for detecting a potential mbuf overflow due to
-		 * RX offset
+		 * RX offset.
 		 */
 		rxq->mem_pool = mp;
 		rxq->mbuf_size = rxq->mem_pool->elt_size;
@@ -535,7 +535,7 @@ nfp_flower_init_ctrl_vnic(struct nfp_net_hw *hw)
 
 		/*
 		 * Telling the HW about the physical address of the RX ring and number
-		 * of descriptors in log2 format
+		 * of descriptors in log2 format.
 		 */
 		nn_cfg_writeq(hw, NFP_NET_CFG_RXR_ADDR(i), rxq->dma);
 		nn_cfg_writeb(hw, NFP_NET_CFG_RXR_SZ(i), rte_log2_u32(CTRL_VNIC_NB_DESC));
@@ -600,7 +600,7 @@ nfp_flower_init_ctrl_vnic(struct nfp_net_hw *hw)
 
 		/*
 		 * Telling the HW about the physical address of the TX ring and number
-		 * of descriptors in log2 format
+		 * of descriptors in log2 format.
 		 */
 		nn_cfg_writeq(hw, NFP_NET_CFG_TXR_ADDR(i), txq->dma);
 		nn_cfg_writeb(hw, NFP_NET_CFG_TXR_SZ(i), rte_log2_u32(CTRL_VNIC_NB_DESC));
@@ -758,7 +758,7 @@ nfp_flower_enable_services(struct nfp_app_fw_flower *app_fw_flower)
 	app_fw_flower->ctrl_vnic_id = service_id;
 	PMD_INIT_LOG(INFO, "%s registered", flower_service.name);
 
-	/* Map them to available service cores*/
+	/* Map them to available service cores */
 	ret = nfp_map_service(service_id);
 	if (ret != 0) {
 		PMD_INIT_LOG(ERR, "Could not map %s", flower_service.name);
diff --git a/drivers/net/nfp/flower/nfp_flower.h b/drivers/net/nfp/flower/nfp_flower.h
index 244b6daa37..0b4e38cedd 100644
--- a/drivers/net/nfp/flower/nfp_flower.h
+++ b/drivers/net/nfp/flower/nfp_flower.h
@@ -53,49 +53,49 @@ struct nfp_flower_nfd_func {
 
 /* The flower application's private structure */
 struct nfp_app_fw_flower {
-	/* switch domain for this app */
+	/** Switch domain for this app */
 	uint16_t switch_domain_id;
 
-	/* Number of VF representors */
+	/** Number of VF representors */
 	uint8_t num_vf_reprs;
 
-	/* Number of phyport representors */
+	/** Number of phyport representors */
 	uint8_t num_phyport_reprs;
 
-	/* Pointer to the PF vNIC */
+	/** Pointer to the PF vNIC */
 	struct nfp_net_hw *pf_hw;
 
-	/* Pointer to a mempool for the ctrlvNIC */
+	/** Pointer to a mempool for the Ctrl vNIC */
 	struct rte_mempool *ctrl_pktmbuf_pool;
 
-	/* Pointer to the ctrl vNIC */
+	/** Pointer to the ctrl vNIC */
 	struct nfp_net_hw *ctrl_hw;
 
-	/* Ctrl vNIC Rx counter */
+	/** Ctrl vNIC Rx counter */
 	uint64_t ctrl_vnic_rx_count;
 
-	/* Ctrl vNIC Tx counter */
+	/** Ctrl vNIC Tx counter */
 	uint64_t ctrl_vnic_tx_count;
 
-	/* Array of phyport representors */
+	/** Array of phyport representors */
 	struct nfp_flower_representor *phy_reprs[MAX_FLOWER_PHYPORTS];
 
-	/* Array of VF representors */
+	/** Array of VF representors */
 	struct nfp_flower_representor *vf_reprs[MAX_FLOWER_VFS];
 
-	/* PF representor */
+	/** PF representor */
 	struct nfp_flower_representor *pf_repr;
 
-	/* service id of ctrl vnic service */
+	/** Service id of Ctrl vNIC service */
 	uint32_t ctrl_vnic_id;
 
-	/* Flower extra features */
+	/** Flower extra features */
 	uint64_t ext_features;
 
 	struct nfp_flow_priv *flow_priv;
 	struct nfp_mtr_priv *mtr_priv;
 
-	/* Function pointers for different NFD version */
+	/** Function pointers for different NFD version */
 	struct nfp_flower_nfd_func nfd_func;
 };
 
diff --git a/drivers/net/nfp/flower/nfp_flower_cmsg.c b/drivers/net/nfp/flower/nfp_flower_cmsg.c
index 5d6912b079..2ec9498d22 100644
--- a/drivers/net/nfp/flower/nfp_flower_cmsg.c
+++ b/drivers/net/nfp/flower/nfp_flower_cmsg.c
@@ -230,7 +230,7 @@ nfp_flower_cmsg_flow_add(struct nfp_app_fw_flower *app_fw_flower,
 		return -ENOMEM;
 	}
 
-	/* copy the flow to mbuf */
+	/* Copy the flow to mbuf */
 	nfp_flow_meta = flow->payload.meta;
 	msg_len = (nfp_flow_meta->key_len + nfp_flow_meta->mask_len +
 			nfp_flow_meta->act_len) << NFP_FL_LW_SIZ;
diff --git a/drivers/net/nfp/flower/nfp_flower_cmsg.h b/drivers/net/nfp/flower/nfp_flower_cmsg.h
index 9449760145..cb019171b6 100644
--- a/drivers/net/nfp/flower/nfp_flower_cmsg.h
+++ b/drivers/net/nfp/flower/nfp_flower_cmsg.h
@@ -348,7 +348,7 @@ struct nfp_flower_stats_frame {
 	rte_be64_t stats_cookie;
 };
 
-/**
+/*
  * See RFC 2698 for more details.
  * Word[0](Flag options):
  * [15] p(pps) 1 for pps, 0 for bps
@@ -378,40 +378,24 @@ struct nfp_cfg_head {
 	rte_be32_t profile_id;
 };
 
-/**
- * Struct nfp_profile_conf - profile config, offload to NIC
- * @head:        config head information
- * @bkt_tkn_p:   token bucket peak
- * @bkt_tkn_c:   token bucket committed
- * @pbs:         peak burst size
- * @cbs:         committed burst size
- * @pir:         peak information rate
- * @cir:         committed information rate
- */
+/* Profile config, offload to NIC */
 struct nfp_profile_conf {
-	struct nfp_cfg_head head;
-	rte_be32_t bkt_tkn_p;
-	rte_be32_t bkt_tkn_c;
-	rte_be32_t pbs;
-	rte_be32_t cbs;
-	rte_be32_t pir;
-	rte_be32_t cir;
-};
-
-/**
- * Struct nfp_mtr_stats_reply - meter stats, read from firmware
- * @head:          config head information
- * @pass_bytes:    count of passed bytes
- * @pass_pkts:     count of passed packets
- * @drop_bytes:    count of dropped bytes
- * @drop_pkts:     count of dropped packets
- */
+	struct nfp_cfg_head head;    /**< Config head information */
+	rte_be32_t bkt_tkn_p;        /**< Token bucket peak */
+	rte_be32_t bkt_tkn_c;        /**< Token bucket committed */
+	rte_be32_t pbs;              /**< Peak burst size */
+	rte_be32_t cbs;              /**< Committed burst size */
+	rte_be32_t pir;              /**< Peak information rate */
+	rte_be32_t cir;              /**< Committed information rate */
+};
+
+/* Meter stats, read from firmware */
 struct nfp_mtr_stats_reply {
-	struct nfp_cfg_head head;
-	rte_be64_t pass_bytes;
-	rte_be64_t pass_pkts;
-	rte_be64_t drop_bytes;
-	rte_be64_t drop_pkts;
+	struct nfp_cfg_head head;    /**< Config head information */
+	rte_be64_t pass_bytes;       /**< Count of passed bytes */
+	rte_be64_t pass_pkts;        /**< Count of passed packets */
+	rte_be64_t drop_bytes;       /**< Count of dropped bytes */
+	rte_be64_t drop_pkts;        /**< Count of dropped packets */
 };
 
 enum nfp_flower_cmsg_port_type {
@@ -851,7 +835,7 @@ struct nfp_fl_act_set_ipv6_addr {
 };
 
 /*
- * ipv6 tc hl fl
+ * IPv6 tc hl fl
  *    3                   2                   1
  *  1 0 9 8 7 6 5 4 3 2 1 0 9 8 7 6 5 4 3 2 1 0 9 8 7 6 5 4 3 2 1 0
  * +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
@@ -954,9 +938,9 @@ struct nfp_fl_act_set_tun {
 	uint8_t    tos;
 	rte_be16_t outer_vlan_tpid;
 	rte_be16_t outer_vlan_tci;
-	uint8_t    tun_len;      /* Only valid for NFP_FL_TUNNEL_GENEVE */
+	uint8_t    tun_len;      /**< Only valid for NFP_FL_TUNNEL_GENEVE */
 	uint8_t    reserved2;
-	rte_be16_t tun_proto;    /* Only valid for NFP_FL_TUNNEL_GENEVE */
+	rte_be16_t tun_proto;    /**< Only valid for NFP_FL_TUNNEL_GENEVE */
 } __rte_packed;
 
 /*
diff --git a/drivers/net/nfp/flower/nfp_flower_ctrl.c b/drivers/net/nfp/flower/nfp_flower_ctrl.c
index d1c350ae93..b4be28ccdf 100644
--- a/drivers/net/nfp/flower/nfp_flower_ctrl.c
+++ b/drivers/net/nfp/flower/nfp_flower_ctrl.c
@@ -34,7 +34,7 @@ nfp_flower_ctrl_vnic_recv(void *rx_queue,
 	if (unlikely(rxq == NULL)) {
 		/*
 		 * DPDK just checks the queue is lower than max queues
-		 * enabled. But the queue needs to be configured
+		 * enabled. But the queue needs to be configured.
 		 */
 		PMD_RX_LOG(ERR, "RX Bad queue");
 		return 0;
@@ -60,7 +60,7 @@ nfp_flower_ctrl_vnic_recv(void *rx_queue,
 
 		/*
 		 * We got a packet. Let's alloc a new mbuf for refilling the
-		 * free descriptor ring as soon as possible
+		 * free descriptor ring as soon as possible.
 		 */
 		new_mb = rte_pktmbuf_alloc(rxq->mem_pool);
 		if (unlikely(new_mb == NULL)) {
@@ -72,7 +72,7 @@ nfp_flower_ctrl_vnic_recv(void *rx_queue,
 
 		/*
 		 * Grab the mbuf and refill the descriptor with the
-		 * previously allocated mbuf
+		 * previously allocated mbuf.
 		 */
 		mb = rxb->mbuf;
 		rxb->mbuf = new_mb;
@@ -86,7 +86,7 @@ nfp_flower_ctrl_vnic_recv(void *rx_queue,
 			/*
 			 * This should not happen and the user has the
 			 * responsibility of avoiding it. But we have
-			 * to give some info about the error
+			 * to give some info about the error.
 			 */
 			PMD_RX_LOG(ERR, "mbuf overflow likely due to the RX offset.");
 			rte_pktmbuf_free(mb);
@@ -116,7 +116,7 @@ nfp_flower_ctrl_vnic_recv(void *rx_queue,
 		nb_hold++;
 
 		rxq->rd_p++;
-		if (unlikely(rxq->rd_p == rxq->rx_count)) /* wrapping?*/
+		if (unlikely(rxq->rd_p == rxq->rx_count)) /* Wrapping */
 			rxq->rd_p = 0;
 	}
 
@@ -163,7 +163,7 @@ nfp_flower_ctrl_vnic_nfd3_xmit(struct nfp_app_fw_flower *app_fw_flower,
 	if (unlikely(txq == NULL)) {
 		/*
 		 * DPDK just checks the queue is lower than max queues
-		 * enabled. But the queue needs to be configured
+		 * enabled. But the queue needs to be configured.
 		 */
 		PMD_TX_LOG(ERR, "ctrl dev TX Bad queue");
 		goto xmit_end;
@@ -199,7 +199,7 @@ nfp_flower_ctrl_vnic_nfd3_xmit(struct nfp_app_fw_flower *app_fw_flower,
 	txds->offset_eop = FLOWER_PKT_DATA_OFFSET | NFD3_DESC_TX_EOP;
 
 	txq->wr_p++;
-	if (unlikely(txq->wr_p == txq->tx_count)) /* wrapping?*/
+	if (unlikely(txq->wr_p == txq->tx_count)) /* Wrapping */
 		txq->wr_p = 0;
 
 	cnt++;
@@ -513,7 +513,7 @@ nfp_flower_ctrl_vnic_poll(struct nfp_app_fw_flower *app_fw_flower)
 	ctrl_hw = app_fw_flower->ctrl_hw;
 	ctrl_eth_dev = ctrl_hw->eth_dev;
 
-	/* ctrl vNIC only has a single Rx queue */
+	/* Ctrl vNIC only has a single Rx queue */
 	rxq = ctrl_eth_dev->data->rx_queues[0];
 
 	while (rte_service_runstate_get(app_fw_flower->ctrl_vnic_id) != 0) {
diff --git a/drivers/net/nfp/flower/nfp_flower_representor.c b/drivers/net/nfp/flower/nfp_flower_representor.c
index bf794a1d70..90f8ccba71 100644
--- a/drivers/net/nfp/flower/nfp_flower_representor.c
+++ b/drivers/net/nfp/flower/nfp_flower_representor.c
@@ -10,18 +10,12 @@
 #include "../nfp_logs.h"
 #include "../nfp_mtr.h"
 
-/*
- * enum nfp_repr_type - type of representor
- * @NFP_REPR_TYPE_PHYS_PORT:   external NIC port
- * @NFP_REPR_TYPE_PF:          physical function
- * @NFP_REPR_TYPE_VF:          virtual function
- * @NFP_REPR_TYPE_MAX:         number of representor types
- */
+/* Type of representor */
 enum nfp_repr_type {
-	NFP_REPR_TYPE_PHYS_PORT,
-	NFP_REPR_TYPE_PF,
-	NFP_REPR_TYPE_VF,
-	NFP_REPR_TYPE_MAX,
+	NFP_REPR_TYPE_PHYS_PORT,    /**< External NIC port */
+	NFP_REPR_TYPE_PF,           /**< Physical function */
+	NFP_REPR_TYPE_VF,           /**< Virtual function */
+	NFP_REPR_TYPE_MAX,          /**< Number of representor types */
 };
 
 static int
@@ -55,7 +49,7 @@ nfp_pf_repr_rx_queue_setup(struct rte_eth_dev *dev,
 
 	/*
 	 * Tracking mbuf size for detecting a potential mbuf overflow due to
-	 * RX offset
+	 * RX offset.
 	 */
 	rxq->mem_pool = mp;
 	rxq->mbuf_size = rxq->mem_pool->elt_size;
@@ -86,7 +80,7 @@ nfp_pf_repr_rx_queue_setup(struct rte_eth_dev *dev,
 	rxq->dma = (uint64_t)tz->iova;
 	rxq->rxds = tz->addr;
 
-	/* mbuf pointers array for referencing mbufs linked to RX descriptors */
+	/* Mbuf pointers array for referencing mbufs linked to RX descriptors */
 	rxq->rxbufs = rte_zmalloc_socket("rxq->rxbufs",
 			sizeof(*rxq->rxbufs) * nb_desc,
 			RTE_CACHE_LINE_SIZE, socket_id);
@@ -101,7 +95,7 @@ nfp_pf_repr_rx_queue_setup(struct rte_eth_dev *dev,
 
 	/*
 	 * Telling the HW about the physical address of the RX ring and number
-	 * of descriptors in log2 format
+	 * of descriptors in log2 format.
 	 */
 	nn_cfg_writeq(hw, NFP_NET_CFG_RXR_ADDR(queue_idx), rxq->dma);
 	nn_cfg_writeb(hw, NFP_NET_CFG_RXR_SZ(queue_idx), rte_log2_u32(nb_desc));
@@ -159,7 +153,7 @@ nfp_pf_repr_tx_queue_setup(struct rte_eth_dev *dev,
 	txq->tx_count = nb_desc;
 	txq->tx_free_thresh = tx_free_thresh;
 
-	/* queue mapping based on firmware configuration */
+	/* Queue mapping based on firmware configuration */
 	txq->qidx = queue_idx;
 	txq->tx_qcidx = queue_idx * hw->stride_tx;
 	txq->qcp_q = hw->tx_bar + NFP_QCP_QUEUE_OFF(txq->tx_qcidx);
@@ -170,7 +164,7 @@ nfp_pf_repr_tx_queue_setup(struct rte_eth_dev *dev,
 	txq->dma = (uint64_t)tz->iova;
 	txq->txds = tz->addr;
 
-	/* mbuf pointers array for referencing mbufs linked to TX descriptors */
+	/* Mbuf pointers array for referencing mbufs linked to TX descriptors */
 	txq->txbufs = rte_zmalloc_socket("txq->txbufs",
 			sizeof(*txq->txbufs) * nb_desc,
 			RTE_CACHE_LINE_SIZE, socket_id);
@@ -185,7 +179,7 @@ nfp_pf_repr_tx_queue_setup(struct rte_eth_dev *dev,
 
 	/*
 	 * Telling the HW about the physical address of the TX ring and number
-	 * of descriptors in log2 format
+	 * of descriptors in log2 format.
 	 */
 	nn_cfg_writeq(hw, NFP_NET_CFG_TXR_ADDR(queue_idx), txq->dma);
 	nn_cfg_writeb(hw, NFP_NET_CFG_TXR_SZ(queue_idx), rte_log2_u32(nb_desc));
@@ -603,7 +597,7 @@ nfp_flower_pf_repr_init(struct rte_eth_dev *eth_dev,
 	/* Memory has been allocated in the eth_dev_create() function */
 	repr = eth_dev->data->dev_private;
 
-	/* Copy data here from the input representor template*/
+	/* Copy data here from the input representor template */
 	repr->vf_id            = init_repr_data->vf_id;
 	repr->switch_domain_id = init_repr_data->switch_domain_id;
 	repr->repr_type        = init_repr_data->repr_type;
@@ -673,7 +667,7 @@ nfp_flower_repr_init(struct rte_eth_dev *eth_dev,
 		return -ENOMEM;
 	}
 
-	/* Copy data here from the input representor template*/
+	/* Copy data here from the input representor template */
 	repr->vf_id            = init_repr_data->vf_id;
 	repr->switch_domain_id = init_repr_data->switch_domain_id;
 	repr->port_id          = init_repr_data->port_id;
@@ -756,7 +750,7 @@ nfp_flower_repr_alloc(struct nfp_app_fw_flower *app_fw_flower)
 	nfp_eth_table = app_fw_flower->pf_hw->pf_dev->nfp_eth_table;
 	eth_dev = app_fw_flower->ctrl_hw->eth_dev;
 
-	/* Send a NFP_FLOWER_CMSG_TYPE_MAC_REPR cmsg to hardware*/
+	/* Send a NFP_FLOWER_CMSG_TYPE_MAC_REPR cmsg to hardware */
 	ret = nfp_flower_cmsg_mac_repr(app_fw_flower);
 	if (ret != 0) {
 		PMD_INIT_LOG(ERR, "Cloud not send mac repr cmsgs");
@@ -799,8 +793,8 @@ nfp_flower_repr_alloc(struct nfp_app_fw_flower *app_fw_flower)
 				"%s_repr_p%d", pci_name, i);
 
 		/*
-		 * Create a eth_dev for this representor
-		 * This will also allocate private memory for the device
+		 * Create a eth_dev for this representor.
+		 * This will also allocate private memory for the device.
 		 */
 		ret = rte_eth_dev_create(eth_dev->device, flower_repr.name,
 				sizeof(struct nfp_flower_representor),
@@ -816,7 +810,7 @@ nfp_flower_repr_alloc(struct nfp_app_fw_flower *app_fw_flower)
 
 	/*
 	 * Now allocate eth_dev's for VF representors.
-	 * Also send reify messages
+	 * Also send reify messages.
 	 */
 	for (i = 0; i < app_fw_flower->num_vf_reprs; i++) {
 		flower_repr.repr_type = NFP_REPR_TYPE_VF;
@@ -830,7 +824,7 @@ nfp_flower_repr_alloc(struct nfp_app_fw_flower *app_fw_flower)
 		snprintf(flower_repr.name, sizeof(flower_repr.name),
 				"%s_repr_vf%d", pci_name, i);
 
-		/* This will also allocate private memory for the device*/
+		/* This will also allocate private memory for the device */
 		ret = rte_eth_dev_create(eth_dev->device, flower_repr.name,
 				sizeof(struct nfp_flower_representor),
 				NULL, NULL, nfp_flower_repr_init, &flower_repr);
diff --git a/drivers/net/nfp/flower/nfp_flower_representor.h b/drivers/net/nfp/flower/nfp_flower_representor.h
index 5ac5e38186..eda19cbb16 100644
--- a/drivers/net/nfp/flower/nfp_flower_representor.h
+++ b/drivers/net/nfp/flower/nfp_flower_representor.h
@@ -13,7 +13,7 @@ struct nfp_flower_representor {
 	uint16_t switch_domain_id;
 	uint32_t repr_type;
 	uint32_t port_id;
-	uint32_t nfp_idx;    /* only valid for the repr of physical port */
+	uint32_t nfp_idx;    /**< Only valid for the repr of physical port */
 	char name[RTE_ETH_NAME_MAX_LEN];
 	struct rte_ether_addr mac_addr;
 	struct nfp_app_fw_flower *app_fw_flower;
diff --git a/drivers/net/nfp/nfd3/nfp_nfd3.h b/drivers/net/nfp/nfd3/nfp_nfd3.h
index 7c56ca4908..0b0ca361f4 100644
--- a/drivers/net/nfp/nfd3/nfp_nfd3.h
+++ b/drivers/net/nfp/nfd3/nfp_nfd3.h
@@ -17,24 +17,24 @@
 struct nfp_net_nfd3_tx_desc {
 	union {
 		struct {
-			uint8_t dma_addr_hi; /* High bits of host buf address */
-			uint16_t dma_len;    /* Length to DMA for this desc */
-			/* Offset in buf where pkt starts + highest bit is eop flag */
+			uint8_t dma_addr_hi; /**< High bits of host buf address */
+			uint16_t dma_len;    /**< Length to DMA for this desc */
+			/** Offset in buf where pkt starts + highest bit is eop flag */
 			uint8_t offset_eop;
-			uint32_t dma_addr_lo; /* Low 32bit of host buf addr */
+			uint32_t dma_addr_lo; /**< Low 32bit of host buf addr */
 
-			uint16_t mss;         /* MSS to be used for LSO */
-			uint8_t lso_hdrlen;   /* LSO, where the data starts */
-			uint8_t flags;        /* TX Flags, see @NFD3_DESC_TX_* */
+			uint16_t mss;         /**< MSS to be used for LSO */
+			uint8_t lso_hdrlen;   /**< LSO, where the data starts */
+			uint8_t flags;        /**< TX Flags, see @NFD3_DESC_TX_* */
 
 			union {
 				struct {
-					uint8_t l3_offset; /* L3 header offset */
-					uint8_t l4_offset; /* L4 header offset */
+					uint8_t l3_offset; /**< L3 header offset */
+					uint8_t l4_offset; /**< L4 header offset */
 				};
-				uint16_t vlan; /* VLAN tag to add if indicated */
+				uint16_t vlan; /**< VLAN tag to add if indicated */
 			};
-			uint16_t data_len;     /* Length of frame + meta data */
+			uint16_t data_len;     /**< Length of frame + meta data */
 		} __rte_packed;
 		uint32_t vals[4];
 	};
@@ -54,13 +54,14 @@ nfp_net_nfd3_free_tx_desc(struct nfp_net_txq *txq)
 	return (free_desc > 8) ? (free_desc - 8) : 0;
 }
 
-/*
- * nfp_net_nfd3_txq_full() - Check if the TX queue free descriptors
- * is below tx_free_threshold for firmware of nfd3
- *
- * @txq: TX queue to check
+/**
+ * Check if the number of free TX queue descriptors is below tx_free_threshold
+ * for firmware with nfd3
  *
  * This function uses the host copy* of read/write pointers.
+ *
+ * @param txq
+ *   TX queue to check
  */
 static inline bool
 nfp_net_nfd3_txq_full(struct nfp_net_txq *txq)
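
The body is not shown in this hunk; it reduces to a threshold check.
A minimal sketch, using only the helper and fields referenced above:

    static inline bool
    nfp_net_nfd3_txq_full(struct nfp_net_txq *txq)
    {
        return nfp_net_nfd3_free_tx_desc(txq) < txq->tx_free_thresh;
    }
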
diff --git a/drivers/net/nfp/nfd3/nfp_nfd3_dp.c b/drivers/net/nfp/nfd3/nfp_nfd3_dp.c
index 51755f4324..4df2c5d4d2 100644
--- a/drivers/net/nfp/nfd3/nfp_nfd3_dp.c
+++ b/drivers/net/nfp/nfd3/nfp_nfd3_dp.c
@@ -113,14 +113,12 @@ nfp_flower_nfd3_pkt_add_metadata(struct rte_mbuf *mbuf,
 }
 
 /*
- * nfp_net_nfd3_tx_vlan() - Set vlan info in the nfd3 tx desc
+ * Set vlan info in the nfd3 tx desc
  *
  * If enable NFP_NET_CFG_CTRL_TXVLAN_V2
- *	Vlan_info is stored in the meta and
- *	is handled in the nfp_net_nfd3_set_meta_vlan()
+ *   Vlan_info is stored in the meta and is handled in the @nfp_net_nfd3_set_meta_vlan()
  * else if enable NFP_NET_CFG_CTRL_TXVLAN
- *	Vlan_info is stored in the tx_desc and
- *	is handled in the nfp_net_nfd3_tx_vlan()
+ *   Vlan_info is stored in the tx_desc and is handled in the @nfp_net_nfd3_tx_vlan()
  */
 static inline void
 nfp_net_nfd3_tx_vlan(struct nfp_net_txq *txq,
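
A hedged sketch of the dispatch described above; hw, meta_data,
layer and txd are assumed locals, only the capability bits and the
txd->vlan field come from this series:

    if ((hw->ctrl & NFP_NET_CFG_CTRL_TXVLAN_V2) != 0) {
        /* VLAN info travels in the prepended metadata */
        nfp_net_nfd3_set_meta_vlan(meta_data, pkt, layer);
    } else if ((hw->ctrl & NFP_NET_CFG_CTRL_TXVLAN) != 0) {
        /* VLAN info travels in the TX descriptor itself */
        txd->vlan = pkt->vlan_tci;
    }
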
@@ -299,9 +297,9 @@ nfp_net_nfd3_xmit_pkts_common(void *tx_queue,
 		nfp_net_nfd3_tx_vlan(txq, &txd, pkt);
 
 		/*
-		 * mbuf data_len is the data in one segment and pkt_len data
+		 * Mbuf data_len is the data in one segment and pkt_len data
 		 * in the whole packet. When the packet is just one segment,
-		 * then data_len = pkt_len
+		 * then data_len = pkt_len.
 		 */
 		pkt_size = pkt->pkt_len;
 
@@ -315,7 +313,7 @@ nfp_net_nfd3_xmit_pkts_common(void *tx_queue,
 
 			/*
 			 * Linking mbuf with descriptor for being released
-			 * next time descriptor is used
+			 * next time descriptor is used.
 			 */
 			*lmbuf = pkt;
 
@@ -330,14 +328,14 @@ nfp_net_nfd3_xmit_pkts_common(void *tx_queue,
 			free_descs--;
 
 			txq->wr_p++;
-			if (unlikely(txq->wr_p == txq->tx_count)) /* wrapping */
+			if (unlikely(txq->wr_p == txq->tx_count)) /* Wrapping */
 				txq->wr_p = 0;
 
 			pkt_size -= dma_size;
 
 			/*
 			 * Making the EOP, packets with just one segment
-			 * the priority
+			 * the priority.
 			 */
 			if (likely(pkt_size == 0))
 				txds->offset_eop = NFD3_DESC_TX_EOP;
@@ -439,7 +437,7 @@ nfp_net_nfd3_tx_queue_setup(struct rte_eth_dev *dev,
 	txq->tx_count = nb_desc * NFD3_TX_DESC_PER_PKT;
 	txq->tx_free_thresh = tx_free_thresh;
 
-	/* queue mapping based on firmware configuration */
+	/* Queue mapping based on firmware configuration */
 	txq->qidx = queue_idx;
 	txq->tx_qcidx = queue_idx * hw->stride_tx;
 	txq->qcp_q = hw->tx_bar + NFP_QCP_QUEUE_OFF(txq->tx_qcidx);
@@ -449,7 +447,7 @@ nfp_net_nfd3_tx_queue_setup(struct rte_eth_dev *dev,
 	txq->dma = tz->iova;
 	txq->txds = tz->addr;
 
-	/* mbuf pointers array for referencing mbufs linked to TX descriptors */
+	/* Mbuf pointers array for referencing mbufs linked to TX descriptors */
 	txq->txbufs = rte_zmalloc_socket("txq->txbufs",
 			sizeof(*txq->txbufs) * txq->tx_count,
 			RTE_CACHE_LINE_SIZE, socket_id);
@@ -465,7 +463,7 @@ nfp_net_nfd3_tx_queue_setup(struct rte_eth_dev *dev,
 
 	/*
 	 * Telling the HW about the physical address of the TX ring and number
-	 * of descriptors in log2 format
+	 * of descriptors in log2 format.
 	 */
 	nn_cfg_writeq(hw, NFP_NET_CFG_TXR_ADDR(queue_idx), txq->dma);
 	nn_cfg_writeb(hw, NFP_NET_CFG_TXR_SZ(queue_idx), rte_log2_u32(txq->tx_count));
diff --git a/drivers/net/nfp/nfdk/nfp_nfdk.h b/drivers/net/nfp/nfdk/nfp_nfdk.h
index 99675b6bd7..04bd3c7600 100644
--- a/drivers/net/nfp/nfdk/nfp_nfdk.h
+++ b/drivers/net/nfp/nfdk/nfp_nfdk.h
@@ -75,7 +75,7 @@
  * dma_addr_hi - bits [47:32] of host memory address
  * dma_addr_lo - bits [31:0] of host memory address
  *
- * --> metadata descriptor
+ * --> Metadata descriptor
  * Bit     3 3 2 2 2 2 2 2 2 2 2 2 1 1 1 1 1 1 1 1 1 1 0 0 0 0 0 0 0 0 0 0
  * -----\  1 0 9 8 7 6 5 4 3 2 1 0 9 8 7 6 5 4 3 2 1 0 9 8 7 6 5 4 3 2 1 0
  * Word   +-------+-----------------------+---------------------+---+-----+
@@ -104,27 +104,27 @@
  */
 struct nfp_net_nfdk_tx_desc {
 	union {
-		/* Address descriptor */
+		/** Address descriptor */
 		struct {
-			uint16_t dma_addr_hi;  /* High bits of host buf address */
-			uint16_t dma_len_type; /* Length to DMA for this desc */
-			uint32_t dma_addr_lo;  /* Low 32bit of host buf addr */
+			uint16_t dma_addr_hi;  /**< High bits of host buf address */
+			uint16_t dma_len_type; /**< Length to DMA for this desc */
+			uint32_t dma_addr_lo;  /**< Low 32bit of host buf addr */
 		};
 
-		/* TSO descriptor */
+		/** TSO descriptor */
 		struct {
-			uint16_t mss;          /* MSS to be used for LSO */
-			uint8_t lso_hdrlen;    /* LSO, TCP payload offset */
-			uint8_t lso_totsegs;   /* LSO, total segments */
-			uint8_t l3_offset;     /* L3 header offset */
-			uint8_t l4_offset;     /* L4 header offset */
-			uint16_t lso_meta_res; /* Rsvd bits in TSO metadata */
+			uint16_t mss;          /**< MSS to be used for LSO */
+			uint8_t lso_hdrlen;    /**< LSO, TCP payload offset */
+			uint8_t lso_totsegs;   /**< LSO, total segments */
+			uint8_t l3_offset;     /**< L3 header offset */
+			uint8_t l4_offset;     /**< L4 header offset */
+			uint16_t lso_meta_res; /**< Rsvd bits in TSO metadata */
 		};
 
-		/* Metadata descriptor */
+		/** Metadata descriptor */
 		struct {
-			uint8_t flags;         /* TX Flags, see @NFDK_DESC_TX_* */
-			uint8_t reserved[7];   /* meta byte placeholder */
+			uint8_t flags;         /**< TX Flags, see @NFDK_DESC_TX_* */
+			uint8_t reserved[7];   /**< Meta byte placeholder */
 		};
 
 		uint32_t vals[2];
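
For illustration, filling the address descriptor per the bit layout
documented earlier in this header (dma_addr_hi carries bits [47:32],
dma_addr_lo bits [31:0]; the dma_len_type encoding is omitted here):

    txd->dma_addr_hi = (dma_addr >> 32) & 0xffff;
    txd->dma_addr_lo = (uint32_t)(dma_addr & 0xffffffff);
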
@@ -146,13 +146,14 @@ nfp_net_nfdk_free_tx_desc(struct nfp_net_txq *txq)
 			(free_desc - NFDK_TX_DESC_STOP_CNT) : 0;
 }
 
-/*
- * nfp_net_nfdk_txq_full() - Check if the TX queue free descriptors
- * is below tx_free_threshold for firmware of nfdk
- *
- * @txq: TX queue to check
+/**
+ * Check if the number of free TX queue descriptors is below tx_free_threshold
+ * for firmware with nfdk
  *
  * This function uses the host copy* of read/write pointers.
+ *
+ * @param txq
+ *   TX queue to check
  */
 static inline bool
 nfp_net_nfdk_txq_full(struct nfp_net_txq *txq)
diff --git a/drivers/net/nfp/nfdk/nfp_nfdk_dp.c b/drivers/net/nfp/nfdk/nfp_nfdk_dp.c
index dae87ac6df..1289ba1d65 100644
--- a/drivers/net/nfp/nfdk/nfp_nfdk_dp.c
+++ b/drivers/net/nfp/nfdk/nfp_nfdk_dp.c
@@ -478,7 +478,7 @@ nfp_net_nfdk_tx_queue_setup(struct rte_eth_dev *dev,
 
 	/*
 	 * Free memory prior to re-allocation if needed. This is the case after
-	 * calling nfp_net_stop
+	 * calling nfp_net_stop().
 	 */
 	if (dev->data->tx_queues[queue_idx] != NULL) {
 		PMD_TX_LOG(DEBUG, "Freeing memory prior to re-allocation %d",
@@ -513,7 +513,7 @@ nfp_net_nfdk_tx_queue_setup(struct rte_eth_dev *dev,
 	txq->tx_count = nb_desc * NFDK_TX_DESC_PER_SIMPLE_PKT;
 	txq->tx_free_thresh = tx_free_thresh;
 
-	/* queue mapping based on firmware configuration */
+	/* Queue mapping based on firmware configuration */
 	txq->qidx = queue_idx;
 	txq->tx_qcidx = queue_idx * hw->stride_tx;
 	txq->qcp_q = hw->tx_bar + NFP_QCP_QUEUE_OFF(txq->tx_qcidx);
@@ -523,7 +523,7 @@ nfp_net_nfdk_tx_queue_setup(struct rte_eth_dev *dev,
 	txq->dma = tz->iova;
 	txq->ktxds = tz->addr;
 
-	/* mbuf pointers array for referencing mbufs linked to TX descriptors */
+	/* Mbuf pointers array for referencing mbufs linked to TX descriptors */
 	txq->txbufs = rte_zmalloc_socket("txq->txbufs",
 			sizeof(*txq->txbufs) * txq->tx_count,
 			RTE_CACHE_LINE_SIZE, socket_id);
@@ -539,7 +539,7 @@ nfp_net_nfdk_tx_queue_setup(struct rte_eth_dev *dev,
 
 	/*
 	 * Telling the HW about the physical address of the TX ring and number
-	 * of descriptors in log2 format
+	 * of descriptors in log2 format.
 	 */
 	nn_cfg_writeq(hw, NFP_NET_CFG_TXR_ADDR(queue_idx), txq->dma);
 	nn_cfg_writeb(hw, NFP_NET_CFG_TXR_SZ(queue_idx), rte_log2_u32(txq->tx_count));
diff --git a/drivers/net/nfp/nfp_common.c b/drivers/net/nfp/nfp_common.c
index f48e1930dc..130f004b4d 100644
--- a/drivers/net/nfp/nfp_common.c
+++ b/drivers/net/nfp/nfp_common.c
@@ -55,7 +55,7 @@ struct nfp_xstat {
 }
 
 static const struct nfp_xstat nfp_net_xstats[] = {
-	/**
+	/*
 	 * Basic xstats available on both VF and PF.
 	 * Note that in case new statistics of group NFP_XSTAT_GROUP_NET
 	 * are added to this array, they must appear before any statistics
@@ -80,7 +80,7 @@ static const struct nfp_xstat nfp_net_xstats[] = {
 	NFP_XSTAT_NET("bpf_app2_bytes", APP2_BYTES),
 	NFP_XSTAT_NET("bpf_app3_pkts", APP3_FRAMES),
 	NFP_XSTAT_NET("bpf_app3_bytes", APP3_BYTES),
-	/**
+	/*
 	 * MAC xstats available only on PF. These statistics are not available for VFs as the
 	 * PF is not initialized when the VF is initialized as it is still bound to the kernel
 	 * driver. As such, the PMD cannot obtain a CPP handle and access the rtsym_table in order
@@ -175,7 +175,7 @@ static void
 nfp_net_notify_port_speed(struct nfp_net_hw *hw,
 		struct rte_eth_link *link)
 {
-	/**
+	/*
 	 * Read the link status from NFP_NET_CFG_STS. If the link is down
 	 * then write the link speed NFP_NET_CFG_STS_LINK_RATE_UNKNOWN to
 	 * NFP_NET_CFG_STS_NSP_LINK_RATE.
@@ -184,7 +184,7 @@ nfp_net_notify_port_speed(struct nfp_net_hw *hw,
 		nn_cfg_writew(hw, NFP_NET_CFG_STS_NSP_LINK_RATE, NFP_NET_CFG_STS_LINK_RATE_UNKNOWN);
 		return;
 	}
-	/**
+	/*
 	 * Link is up so write the link speed from the eth_table to
 	 * NFP_NET_CFG_STS_NSP_LINK_RATE.
 	 */
@@ -214,7 +214,7 @@ __nfp_net_reconfig(struct nfp_net_hw *hw,
 	nfp_qcp_ptr_add(hw->qcp_cfg, NFP_QCP_WRITE_PTR, 1);
 
 	wait.tv_sec = 0;
-	wait.tv_nsec = 1000000;
+	wait.tv_nsec = 1000000; /* 1ms */
 
 	PMD_DRV_LOG(DEBUG, "Polling for update ack...");
 
@@ -253,7 +253,7 @@ __nfp_net_reconfig(struct nfp_net_hw *hw,
  *
  * @return
  *   - (0) if OK to reconfigure the device.
- *   - (EIO) if I/O err and fail to reconfigure the device.
+ *   - (-EIO) if I/O err and fail to reconfigure the device.
  */
 int
 nfp_net_reconfig(struct nfp_net_hw *hw,
@@ -297,7 +297,7 @@ nfp_net_reconfig(struct nfp_net_hw *hw,
  *
  * @return
  *   - (0) if OK to reconfigure the device.
- *   - (EIO) if I/O err and fail to reconfigure the device.
+ *   - (-EIO) if I/O err and fail to reconfigure the device.
  */
 int
 nfp_net_ext_reconfig(struct nfp_net_hw *hw,
@@ -368,9 +368,15 @@ nfp_net_mbox_reconfig(struct nfp_net_hw *hw,
 }
 
 /*
- * Configure an Ethernet device. This function must be invoked first
- * before any other function in the Ethernet API. This function can
- * also be re-invoked when a device is in the stopped state.
+ * Configure an Ethernet device.
+ *
+ * This function must be invoked first before any other function in the Ethernet API.
+ * This function can also be re-invoked when a device is in the stopped state.
+ *
+ * A DPDK app sends info about how many queues to use and how those queues
+ * need to be configured. This is used by the DPDK core and it makes sure no
+ * more queues than those advertised by the driver are requested.
+ * This function is called after that internal process.
  */
 int
 nfp_net_configure(struct rte_eth_dev *dev)
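
The caller-side ordering described above, as a DPDK application
typically drives it (illustrative values):

    ret = rte_eth_dev_configure(port_id, nb_rxq, nb_txq, &dev_conf);
    if (ret < 0)
        return ret;
    /* ...rte_eth_rx/tx_queue_setup() per queue, then rte_eth_dev_start()... */
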
@@ -382,14 +388,6 @@ nfp_net_configure(struct rte_eth_dev *dev)
 
 	hw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private);
 
-	/*
-	 * A DPDK app sends info about how many queues to use and how
-	 * those queues need to be configured. This is used by the
-	 * DPDK core and it makes sure no more queues than those
-	 * advertised by the driver are requested. This function is
-	 * called after that internal process
-	 */
-
 	dev_conf = &dev->data->dev_conf;
 	rxmode = &dev_conf->rxmode;
 	txmode = &dev_conf->txmode;
@@ -557,12 +555,12 @@ nfp_net_set_mac_addr(struct rte_eth_dev *dev,
 	/* Writing new MAC to the specific port BAR address */
 	nfp_net_write_mac(hw, (uint8_t *)mac_addr);
 
-	/* Signal the NIC about the change */
 	update = NFP_NET_CFG_UPDATE_MACADDR;
 	ctrl = hw->ctrl;
 	if ((hw->ctrl & NFP_NET_CFG_CTRL_ENABLE) != 0 &&
 			(hw->cap & NFP_NET_CFG_CTRL_LIVE_ADDR) != 0)
 		ctrl |= NFP_NET_CFG_CTRL_LIVE_ADDR;
+	/* Signal the NIC about the change */
 	if (nfp_net_reconfig(hw, ctrl, update) != 0) {
 		PMD_DRV_LOG(ERR, "MAC address update failed");
 		return -EIO;
@@ -588,7 +586,7 @@ nfp_configure_rx_interrupt(struct rte_eth_dev *dev,
 
 	if (rte_intr_type_get(intr_handle) == RTE_INTR_HANDLE_UIO) {
 		PMD_DRV_LOG(INFO, "VF: enabling RX interrupt with UIO");
-		/* UIO just supports one queue and no LSC*/
+		/* UIO just supports one queue and no LSC */
 		nn_cfg_writeb(hw, NFP_NET_CFG_RXR_VEC(0), 0);
 		if (rte_intr_vec_list_index_set(intr_handle, 0, 0) != 0)
 			return -1;
@@ -597,8 +595,8 @@ nfp_configure_rx_interrupt(struct rte_eth_dev *dev,
 		for (i = 0; i < dev->data->nb_rx_queues; i++) {
 			/*
 			 * The first msix vector is reserved for non
-			 * efd interrupts
-			*/
+			 * efd interrupts.
+			 */
 			nn_cfg_writeb(hw, NFP_NET_CFG_RXR_VEC(i), i + 1);
 			if (rte_intr_vec_list_index_set(intr_handle, i, i + 1) != 0)
 				return -1;
@@ -706,10 +704,6 @@ nfp_net_promisc_enable(struct rte_eth_dev *dev)
 	new_ctrl = hw->ctrl | NFP_NET_CFG_CTRL_PROMISC;
 	update = NFP_NET_CFG_UPDATE_GEN;
 
-	/*
-	 * DPDK sets promiscuous mode on just after this call assuming
-	 * it can not fail ...
-	 */
 	ret = nfp_net_reconfig(hw, new_ctrl, update);
 	if (ret != 0)
 		return ret;
@@ -737,10 +731,6 @@ nfp_net_promisc_disable(struct rte_eth_dev *dev)
 	new_ctrl = hw->ctrl & ~NFP_NET_CFG_CTRL_PROMISC;
 	update = NFP_NET_CFG_UPDATE_GEN;
 
-	/*
-	 * DPDK sets promiscuous mode off just before this call
-	 * assuming it can not fail ...
-	 */
 	ret = nfp_net_reconfig(hw, new_ctrl, update);
 	if (ret != 0)
 		return ret;
@@ -751,7 +741,7 @@ nfp_net_promisc_disable(struct rte_eth_dev *dev)
 }
 
 /*
- * return 0 means link status changed, -1 means not changed
+ * Returns 0 if the link status changed, -1 if not changed.
  *
  * Wait to complete is needed as it can take up to 9 seconds to get the Link
  * status.
@@ -793,7 +783,7 @@ nfp_net_link_update(struct rte_eth_dev *dev,
 				}
 			}
 		} else {
-			/**
+			/*
 			 * Shift and mask nn_link_status so that it is effectively the value
 			 * at offset NFP_NET_CFG_STS_NSP_LINK_RATE.
 			 */
@@ -812,7 +802,7 @@ nfp_net_link_update(struct rte_eth_dev *dev,
 			PMD_DRV_LOG(INFO, "NIC Link is Down");
 	}
 
-	/**
+	/*
 	 * Notify the port to update the speed value in the CTRL BAR from NSP.
 	 * Not applicable for VFs as the associated PF is still attached to the
 	 * kernel driver.
@@ -833,11 +823,9 @@ nfp_net_stats_get(struct rte_eth_dev *dev,
 
 	hw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private);
 
-	/* RTE_ETHDEV_QUEUE_STAT_CNTRS default value is 16 */
-
 	memset(&nfp_dev_stats, 0, sizeof(nfp_dev_stats));
 
-	/* reading per RX ring stats */
+	/* Reading per RX ring stats */
 	for (i = 0; i < dev->data->nb_rx_queues; i++) {
 		if (i == RTE_ETHDEV_QUEUE_STAT_CNTRS)
 			break;
@@ -855,7 +843,7 @@ nfp_net_stats_get(struct rte_eth_dev *dev,
 				hw->eth_stats_base.q_ibytes[i];
 	}
 
-	/* reading per TX ring stats */
+	/* Reading per TX ring stats */
 	for (i = 0; i < dev->data->nb_tx_queues; i++) {
 		if (i == RTE_ETHDEV_QUEUE_STAT_CNTRS)
 			break;
@@ -889,7 +877,7 @@ nfp_net_stats_get(struct rte_eth_dev *dev,
 
 	nfp_dev_stats.obytes -= hw->eth_stats_base.obytes;
 
-	/* reading general device stats */
+	/* Reading general device stats */
 	nfp_dev_stats.ierrors =
 			nn_cfg_readq(hw, NFP_NET_CFG_STATS_RX_ERRORS);
 
@@ -915,6 +903,10 @@ nfp_net_stats_get(struct rte_eth_dev *dev,
 	return -EINVAL;
 }
 
+/*
+ * hw->eth_stats_base records the per-counter starting point.
+ * Let's update it now.
+ */
 int
 nfp_net_stats_reset(struct rte_eth_dev *dev)
 {
@@ -923,12 +915,7 @@ nfp_net_stats_reset(struct rte_eth_dev *dev)
 
 	hw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private);
 
-	/*
-	 * hw->eth_stats_base records the per counter starting point.
-	 * Lets update it now
-	 */
-
-	/* reading per RX ring stats */
+	/* Reading per RX ring stats */
 	for (i = 0; i < dev->data->nb_rx_queues; i++) {
 		if (i == RTE_ETHDEV_QUEUE_STAT_CNTRS)
 			break;
@@ -940,7 +927,7 @@ nfp_net_stats_reset(struct rte_eth_dev *dev)
 				nn_cfg_readq(hw, NFP_NET_CFG_RXR_STATS(i) + 0x8);
 	}
 
-	/* reading per TX ring stats */
+	/* Reading per TX ring stats */
 	for (i = 0; i < dev->data->nb_tx_queues; i++) {
 		if (i == RTE_ETHDEV_QUEUE_STAT_CNTRS)
 			break;
@@ -964,7 +951,7 @@ nfp_net_stats_reset(struct rte_eth_dev *dev)
 	hw->eth_stats_base.obytes =
 			nn_cfg_readq(hw, NFP_NET_CFG_STATS_TX_OCTETS);
 
-	/* reading general device stats */
+	/* Reading general device stats */
 	hw->eth_stats_base.ierrors =
 			nn_cfg_readq(hw, NFP_NET_CFG_STATS_RX_ERRORS);
 
@@ -1032,7 +1019,7 @@ nfp_net_xstats_value(const struct rte_eth_dev *dev,
 	if (raw)
 		return value;
 
-	/**
+	/*
 	 * A baseline value of each statistic counter is recorded when stats are "reset".
 	 * Thus, the value returned by this function need to be decremented by this
 	 * baseline value. The result is the count of this statistic since the last time
@@ -1041,12 +1028,12 @@ nfp_net_xstats_value(const struct rte_eth_dev *dev,
 	return value - hw->eth_xstats_base[index].value;
 }
 
+/* NOTE: All callers ensure dev is always set. */
 int
 nfp_net_xstats_get_names(struct rte_eth_dev *dev,
 		struct rte_eth_xstat_name *xstats_names,
 		unsigned int size)
 {
-	/* NOTE: All callers ensure dev is always set. */
 	uint32_t id;
 	uint32_t nfp_size;
 	uint32_t read_size;
@@ -1066,12 +1053,12 @@ nfp_net_xstats_get_names(struct rte_eth_dev *dev,
 	return read_size;
 }
 
+/* NOTE: All callers ensure dev is always set. */
 int
 nfp_net_xstats_get(struct rte_eth_dev *dev,
 		struct rte_eth_xstat *xstats,
 		unsigned int n)
 {
-	/* NOTE: All callers ensure dev is always set. */
 	uint32_t id;
 	uint32_t nfp_size;
 	uint32_t read_size;
@@ -1092,16 +1079,16 @@ nfp_net_xstats_get(struct rte_eth_dev *dev,
 	return read_size;
 }
 
+/*
+ * NOTE: The only caller rte_eth_xstats_get_names_by_id() ensures dev,
+ * ids, xstats_names and size are valid, and non-NULL.
+ */
 int
 nfp_net_xstats_get_names_by_id(struct rte_eth_dev *dev,
 		const uint64_t *ids,
 		struct rte_eth_xstat_name *xstats_names,
 		unsigned int size)
 {
-	/**
-	 * NOTE: The only caller rte_eth_xstats_get_names_by_id() ensures dev,
-	 * ids, xstats_names and size are valid, and non-NULL.
-	 */
 	uint32_t i;
 	uint32_t read_size;
 
@@ -1123,16 +1110,16 @@ nfp_net_xstats_get_names_by_id(struct rte_eth_dev *dev,
 	return read_size;
 }
 
+/*
+ * NOTE: The only caller rte_eth_xstats_get_by_id() ensures dev,
+ * ids, values and n are valid, and non-NULL.
+ */
 int
 nfp_net_xstats_get_by_id(struct rte_eth_dev *dev,
 		const uint64_t *ids,
 		uint64_t *values,
 		unsigned int n)
 {
-	/**
-	 * NOTE: The only caller rte_eth_xstats_get_by_id() ensures dev,
-	 * ids, values and n are valid, and non-NULL.
-	 */
 	uint32_t i;
 	uint32_t read_size;
 
@@ -1167,10 +1154,7 @@ nfp_net_xstats_reset(struct rte_eth_dev *dev)
 		hw->eth_xstats_base[id].id = id;
 		hw->eth_xstats_base[id].value = nfp_net_xstats_value(dev, id, true);
 	}
-	/**
-	 * Successfully reset xstats, now call function to reset basic stats
-	 * return value is then based on the success of that function
-	 */
+	/* Successfully reset xstats, now call function to reset basic stats. */
 	return nfp_net_stats_reset(dev);
 }
 
@@ -1217,7 +1201,7 @@ nfp_net_infos_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 	dev_info->max_rx_queues = (uint16_t)hw->max_rx_queues;
 	dev_info->max_tx_queues = (uint16_t)hw->max_tx_queues;
 	dev_info->min_rx_bufsize = RTE_ETHER_MIN_MTU;
-	/*
+	/**
 	 * The maximum rx packet length (max_rx_pktlen) is set to the
 	 * maximum supported frame size that the NFP can handle. This
 	 * includes layer 2 headers, CRC and other metadata that can
@@ -1358,7 +1342,7 @@ nfp_net_common_init(struct rte_pci_device *pci_dev,
 
 	nfp_net_init_metadata_format(hw);
 
-	/* read the Rx offset configured from firmware */
+	/* Read the Rx offset configured from firmware */
 	if (hw->ver.major < 2)
 		hw->rx_offset = NFP_NET_RX_OFFSET;
 	else
@@ -1375,7 +1359,6 @@ const uint32_t *
 nfp_net_supported_ptypes_get(struct rte_eth_dev *dev)
 {
 	static const uint32_t ptypes[] = {
-		/* refers to nfp_net_set_hash() */
 		RTE_PTYPE_INNER_L3_IPV4,
 		RTE_PTYPE_INNER_L3_IPV6,
 		RTE_PTYPE_INNER_L3_IPV6_EXT,
@@ -1449,10 +1432,8 @@ nfp_net_dev_link_status_print(struct rte_eth_dev *dev)
 			pci_dev->addr.devid, pci_dev->addr.function);
 }
 
-/* Interrupt configuration and handling */
-
 /*
- * nfp_net_irq_unmask - Unmask an interrupt
+ * Unmask an interrupt
  *
  * If MSI-X auto-masking is enabled clear the mask bit, otherwise
  * clear the ICR for the entry.
@@ -1478,16 +1459,14 @@ nfp_net_irq_unmask(struct rte_eth_dev *dev)
 	}
 }
 
-/*
+/**
  * Interrupt handler which shall be registered for alarm callback for delayed
  * handling specific interrupt to wait for the stable nic state. As the NIC
  * interrupt state is not stable for nfp after link is just down, it needs
  * to wait 4 seconds to get the stable status.
  *
- * @param handle   Pointer to interrupt handle.
- * @param param    The address of parameter (struct rte_eth_dev *)
- *
- * @return  void
+ * @param param
+ *   The address of parameter (struct rte_eth_dev *)
  */
 void
 nfp_net_dev_interrupt_delayed_handler(void *param)
@@ -1516,13 +1495,12 @@ nfp_net_dev_interrupt_handler(void *param)
 
 	nfp_net_link_update(dev, 0);
 
-	/* likely to up */
+	/* Likely to come up */
 	if (link.link_status == 0) {
-		/* handle it 1 sec later, wait it being stable */
+		/* Handle it 1 sec later, wait for it to become stable */
 		timeout = NFP_NET_LINK_UP_CHECK_TIMEOUT;
-		/* likely to down */
-	} else {
-		/* handle it 4 sec later, wait it being stable */
+	} else { /* Likely to go down */
+		/* Handle it 4 sec later, wait for it to become stable */
 		timeout = NFP_NET_LINK_DOWN_CHECK_TIMEOUT;
 	}
 
@@ -1543,7 +1521,7 @@ nfp_net_dev_mtu_set(struct rte_eth_dev *dev,
 
 	hw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private);
 
-	/* mtu setting is forbidden if port is started */
+	/* MTU setting is forbidden if port is started */
 	if (dev->data->dev_started) {
 		PMD_DRV_LOG(ERR, "port %d must be stopped before configuration",
 				dev->data->port_id);
@@ -1557,7 +1535,7 @@ nfp_net_dev_mtu_set(struct rte_eth_dev *dev,
 		return -ERANGE;
 	}
 
-	/* writing to configuration space */
+	/* Writing to configuration space */
 	nn_cfg_writel(hw, NFP_NET_CFG_MTU, mtu);
 
 	hw->mtu = mtu;
@@ -1634,7 +1612,7 @@ nfp_net_rss_reta_write(struct rte_eth_dev *dev,
 
 	/*
 	 * Update Redirection Table. There are 128 8bit-entries which can be
-	 * manage as 32 32bit-entries
+	 * managed as 32 32bit-entries.
 	 */
 	for (i = 0; i < reta_size; i += 4) {
 		/* Handling 4 RSS entries per loop */
@@ -1653,8 +1631,8 @@ nfp_net_rss_reta_write(struct rte_eth_dev *dev,
 		for (j = 0; j < 4; j++) {
 			if ((mask & (0x1 << j)) == 0)
 				continue;
+			/* Clearing the entry bits */
 			if (mask != 0xF)
-				/* Clearing the entry bits */
 				reta &= ~(0xFF << (8 * j));
 			reta |= reta_conf[idx].reta[shift + j] << (8 * j);
 		}
@@ -1689,7 +1667,7 @@ nfp_net_reta_update(struct rte_eth_dev *dev,
 	return 0;
 }
 
- /* Query Redirection Table(RETA) of Receive Side Scaling of Ethernet device. */
+/* Query Redirection Table(RETA) of Receive Side Scaling of Ethernet device. */
 int
 nfp_net_reta_query(struct rte_eth_dev *dev,
 		struct rte_eth_rss_reta_entry64 *reta_conf,
@@ -1717,7 +1695,7 @@ nfp_net_reta_query(struct rte_eth_dev *dev,
 
 	/*
 	 * Reading Redirection Table. There are 128 8bit-entries which can be
-	 * manage as 32 32bit-entries
+	 * managed as 32 32bit-entries.
 	 */
 	for (i = 0; i < reta_size; i += 4) {
 		/* Handling 4 RSS entries per loop */
@@ -1751,7 +1729,7 @@ nfp_net_rss_hash_write(struct rte_eth_dev *dev,
 
 	hw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private);
 
-	/* Writing the key byte a byte */
+	/* Writing the key byte by byte */
 	for (i = 0; i < rss_conf->rss_key_len; i++) {
 		memcpy(&key, &rss_conf->rss_key[i], 1);
 		nn_cfg_writeb(hw, NFP_NET_CFG_RSS_KEY + i, key);
@@ -1786,7 +1764,7 @@ nfp_net_rss_hash_write(struct rte_eth_dev *dev,
 	cfg_rss_ctrl |= NFP_NET_CFG_RSS_MASK;
 	cfg_rss_ctrl |= NFP_NET_CFG_RSS_TOEPLITZ;
 
-	/* configuring where to apply the RSS hash */
+	/* Configuring where to apply the RSS hash */
 	nn_cfg_writel(hw, NFP_NET_CFG_RSS_CTRL, cfg_rss_ctrl);
 
 	/* Writing the key size */
@@ -1809,7 +1787,7 @@ nfp_net_rss_hash_update(struct rte_eth_dev *dev,
 
 	/* Checking if RSS is enabled */
 	if ((hw->ctrl & NFP_NET_CFG_CTRL_RSS_ANY) == 0) {
-		if (rss_hf != 0) { /* Enable RSS? */
+		if (rss_hf != 0) {
 			PMD_DRV_LOG(ERR, "RSS unsupported");
 			return -EINVAL;
 		}
@@ -2010,7 +1988,7 @@ nfp_net_set_vxlan_port(struct nfp_net_hw *hw,
 
 /*
  * The firmware with NFD3 can not handle DMA address requiring more
- * than 40 bits
+ * than 40 bits.
  */
 int
 nfp_net_check_dma_mask(struct nfp_net_hw *hw,
diff --git a/drivers/net/nfp/nfp_common.h b/drivers/net/nfp/nfp_common.h
index 9cb889c4a6..6a36e2b04c 100644
--- a/drivers/net/nfp/nfp_common.h
+++ b/drivers/net/nfp/nfp_common.h
@@ -53,7 +53,7 @@ enum nfp_app_fw_id {
 	NFP_APP_FW_FLOWER_NIC             = 0x3,
 };
 
-/* nfp_qcp_ptr - Read or Write Pointer of a queue */
+/* Read or Write Pointer of a queue */
 enum nfp_qcp_ptr {
 	NFP_QCP_READ_PTR = 0,
 	NFP_QCP_WRITE_PTR
@@ -72,15 +72,15 @@ struct nfp_net_tlv_caps {
 };
 
 struct nfp_pf_dev {
-	/* Backpointer to associated pci device */
+	/** Backpointer to associated pci device */
 	struct rte_pci_device *pci_dev;
 
 	enum nfp_app_fw_id app_fw_id;
 
-	/* Pointer to the app running on the PF */
+	/** Pointer to the app running on the PF */
 	void *app_fw_priv;
 
-	/* The eth table reported by firmware */
+	/** The eth table reported by firmware */
 	struct nfp_eth_table *nfp_eth_table;
 
 	uint8_t *ctrl_bar;
@@ -94,17 +94,17 @@ struct nfp_pf_dev {
 	struct nfp_hwinfo *hwinfo;
 	struct nfp_rtsym_table *sym_tbl;
 
-	/* service id of cpp bridge service */
+	/** Service id of cpp bridge service */
 	uint32_t cpp_bridge_id;
 };
 
 struct nfp_app_fw_nic {
-	/* Backpointer to the PF device */
+	/** Backpointer to the PF device */
 	struct nfp_pf_dev *pf_dev;
 
-	/*
-	 * Array of physical ports belonging to the this CoreNIC app
-	 * This is really a list of vNIC's. One for each physical port
+	/**
+	 * Array of physical ports belonging to this CoreNIC app.
+	 * This is really a list of vNIC's, one for each physical port.
 	 */
 	struct nfp_net_hw *ports[NFP_MAX_PHYPORTS];
 
@@ -113,13 +113,13 @@ struct nfp_app_fw_nic {
 };
 
 struct nfp_net_hw {
-	/* Backpointer to the PF this port belongs to */
+	/** Backpointer to the PF this port belongs to */
 	struct nfp_pf_dev *pf_dev;
 
-	/* Backpointer to the eth_dev of this port*/
+	/** Backpointer to the eth_dev of this port */
 	struct rte_eth_dev *eth_dev;
 
-	/* Info from the firmware */
+	/** Info from the firmware */
 	struct nfp_net_fw_ver ver;
 	uint32_t cap;
 	uint32_t max_mtu;
@@ -130,7 +130,7 @@ struct nfp_net_hw {
 	/** NFP ASIC params */
 	const struct nfp_dev_info *dev_info;
 
-	/* Current values for control */
+	/** Current values for control */
 	uint32_t ctrl;
 
 	uint8_t *ctrl_bar;
@@ -156,7 +156,7 @@ struct nfp_net_hw {
 
 	struct rte_ether_addr mac_addr;
 
-	/* Records starting point for counters */
+	/** Records starting point for counters */
 	struct rte_eth_stats eth_stats_base;
 	struct rte_eth_xstat *eth_xstats_base;
 
@@ -166,9 +166,9 @@ struct nfp_net_hw {
 	uint8_t *mac_stats_bar;
 	uint8_t *mac_stats;
 
-	/* Sequential physical port number, only valid for CoreNIC firmware */
+	/** Sequential physical port number, only valid for CoreNIC firmware */
 	uint8_t idx;
-	/* Internal port number as seen from NFP */
+	/** Internal port number as seen from NFP */
 	uint8_t nfp_idx;
 
 	struct nfp_net_tlv_caps tlv_caps;
@@ -240,10 +240,6 @@ nn_writeq(uint64_t val,
 	nn_writel(val, addr);
 }
 
-/*
- * Functions to read/write from/to Config BAR
- * Performs any endian conversion necessary.
- */
 static inline uint8_t
 nn_cfg_readb(struct nfp_net_hw *hw,
 		uint32_t off)
@@ -304,11 +300,15 @@ nn_cfg_writeq(struct nfp_net_hw *hw,
 	nn_writeq(rte_cpu_to_le_64(val), hw->ctrl_bar + off);
 }
 
-/*
- * nfp_qcp_ptr_add - Add the value to the selected pointer of a queue
- * @q: Base address for queue structure
- * @ptr: Add to the Read or Write pointer
- * @val: Value to add to the queue pointer
+/**
+ * Add the value to the selected pointer of a queue.
+ *
+ * @param q
+ *   Base address for queue structure
+ * @param ptr
+ *   Add to the read or write pointer
+ * @param val
+ *   Value to add to the queue pointer
  */
 static inline void
 nfp_qcp_ptr_add(uint8_t *q,
@@ -325,10 +325,13 @@ nfp_qcp_ptr_add(uint8_t *q,
 	nn_writel(rte_cpu_to_le_32(val), q + off);
 }
 
-/*
- * nfp_qcp_read - Read the current Read/Write pointer value for a queue
- * @q:  Base address for queue structure
- * @ptr: Read or Write pointer
+/**
+ * Read the current read/write pointer value for a queue.
+ *
+ * @param q
+ *   Base address for queue structure
+ * @param ptr
+ *   Read or Write pointer
  */
 static inline uint32_t
 nfp_qcp_read(uint8_t *q,
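
Usage as seen earlier in this series, e.g. kicking the config queue
after posting an update request:

    nfp_qcp_ptr_add(hw->qcp_cfg, NFP_QCP_WRITE_PTR, 1);
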
diff --git a/drivers/net/nfp/nfp_cpp_bridge.c b/drivers/net/nfp/nfp_cpp_bridge.c
index 222cfdcbc3..8f5271cde9 100644
--- a/drivers/net/nfp/nfp_cpp_bridge.c
+++ b/drivers/net/nfp/nfp_cpp_bridge.c
@@ -1,8 +1,6 @@
 /* SPDX-License-Identifier: BSD-3-Clause
  * Copyright (c) 2014-2021 Netronome Systems, Inc.
  * All rights reserved.
- *
- * Small portions derived from code Copyright(c) 2010-2015 Intel Corporation.
  */
 
 #include "nfp_cpp_bridge.h"
@@ -48,7 +46,7 @@ nfp_map_service(uint32_t service_id)
 
 	/*
 	 * Find a service core with the least number of services already
-	 * registered to it
+	 * registered to it.
 	 */
 	while (slcore_count--) {
 		service_count = rte_service_lcore_count_services(slcore_array[slcore_count]);
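
A sketch of the selection this loop performs; the min-tracking
variables are illustrative, only the rte_service call comes from the
surrounding code:

    if (service_count < min_service_count) {
        min_service_count = service_count;
        slcore = slcore_array[slcore_count];
    }
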
@@ -100,7 +98,7 @@ nfp_enable_cpp_service(struct nfp_pf_dev *pf_dev)
 	pf_dev->cpp_bridge_id = service_id;
 	PMD_INIT_LOG(INFO, "NFP cpp service registered");
 
-	/* Map it to available service core*/
+	/* Map it to available service core */
 	ret = nfp_map_service(service_id);
 	if (ret != 0) {
 		PMD_INIT_LOG(DEBUG, "Could not map nfp cpp service");
diff --git a/drivers/net/nfp/nfp_ctrl.h b/drivers/net/nfp/nfp_ctrl.h
index 55073c3cea..cd0a2f92a8 100644
--- a/drivers/net/nfp/nfp_ctrl.h
+++ b/drivers/net/nfp/nfp_ctrl.h
@@ -20,7 +20,7 @@
 /* Offset in Freelist buffer where packet starts on RX */
 #define NFP_NET_RX_OFFSET               32
 
-/* working with metadata api (NFD version > 3.0) */
+/* Working with metadata api (NFD version > 3.0) */
 #define NFP_NET_META_FIELD_SIZE         4
 #define NFP_NET_META_FIELD_MASK ((1 << NFP_NET_META_FIELD_SIZE) - 1)
 #define NFP_NET_META_HEADER_SIZE        4
@@ -36,14 +36,14 @@
 						NFP_NET_META_VLAN_TPID_MASK)
 
 /* Prepend field types */
-#define NFP_NET_META_HASH               1 /* next field carries hash type */
+#define NFP_NET_META_HASH               1 /* Next field carries hash type */
 #define NFP_NET_META_VLAN               4
 #define NFP_NET_META_PORTID             5
 #define NFP_NET_META_IPSEC              9
 
 #define NFP_META_PORT_ID_CTRL           ~0U
 
-/* Hash type pre-pended when a RSS hash was computed */
+/* Hash type prepended when a RSS hash was computed */
 #define NFP_NET_RSS_NONE                0
 #define NFP_NET_RSS_IPV4                1
 #define NFP_NET_RSS_IPV6                2
@@ -102,7 +102,7 @@
 #define   NFP_NET_CFG_CTRL_IRQMOD         (0x1 << 18) /* Interrupt moderation */
 #define   NFP_NET_CFG_CTRL_RINGPRIO       (0x1 << 19) /* Ring priorities */
 #define   NFP_NET_CFG_CTRL_MSIXAUTO       (0x1 << 20) /* MSI-X auto-masking */
-#define   NFP_NET_CFG_CTRL_TXRWB          (0x1 << 21) /* Write-back of TX ring*/
+#define   NFP_NET_CFG_CTRL_TXRWB          (0x1 << 21) /* Write-back of TX ring */
 #define   NFP_NET_CFG_CTRL_L2SWITCH       (0x1 << 22) /* L2 Switch */
 #define   NFP_NET_CFG_CTRL_TXVLAN_V2      (0x1 << 23) /* Enable VLAN insert with metadata */
 #define   NFP_NET_CFG_CTRL_VXLAN          (0x1 << 24) /* Enable VXLAN */
@@ -111,7 +111,7 @@
 #define   NFP_NET_CFG_CTRL_LSO2           (0x1 << 28) /* LSO/TSO (version 2) */
 #define   NFP_NET_CFG_CTRL_RSS2           (0x1 << 29) /* RSS (version 2) */
 #define   NFP_NET_CFG_CTRL_CSUM_COMPLETE  (0x1 << 30) /* Checksum complete */
-#define   NFP_NET_CFG_CTRL_LIVE_ADDR      (0x1U << 31)/* live MAC addr change */
+#define   NFP_NET_CFG_CTRL_LIVE_ADDR      (0x1U << 31) /* Live MAC addr change */
 #define NFP_NET_CFG_UPDATE              0x0004
 #define   NFP_NET_CFG_UPDATE_GEN          (0x1 <<  0) /* General update */
 #define   NFP_NET_CFG_UPDATE_RING         (0x1 <<  1) /* Ring config change */
@@ -124,7 +124,7 @@
 #define   NFP_NET_CFG_UPDATE_IRQMOD       (0x1 <<  8) /* IRQ mod change */
 #define   NFP_NET_CFG_UPDATE_VXLAN        (0x1 <<  9) /* VXLAN port change */
 #define   NFP_NET_CFG_UPDATE_MACADDR      (0x1 << 11) /* MAC address change */
-#define   NFP_NET_CFG_UPDATE_MBOX         (0x1 << 12) /**< Mailbox update */
+#define   NFP_NET_CFG_UPDATE_MBOX         (0x1 << 12) /* Mailbox update */
 #define   NFP_NET_CFG_UPDATE_ERR          (0x1U << 31) /* A error occurred */
 #define NFP_NET_CFG_TXRS_ENABLE         0x0008
 #define NFP_NET_CFG_RXRS_ENABLE         0x0010
@@ -205,7 +205,7 @@ struct nfp_net_fw_ver {
  * @NFP_NET_CFG_SPARE_ADDR:  DMA address for ME code to use (e.g. YDS-155 fix)
  */
 #define NFP_NET_CFG_SPARE_ADDR          0x0050
-/**
+/*
  * NFP6000/NFP4000 - Prepend configuration
  */
 #define NFP_NET_CFG_RX_OFFSET		0x0050
@@ -280,7 +280,7 @@ struct nfp_net_fw_ver {
  * @NFP_NET_CFG_TXR_BASE:    Base offset for TX ring configuration
  * @NFP_NET_CFG_TXR_ADDR:    Per TX ring DMA address (8B entries)
  * @NFP_NET_CFG_TXR_WB_ADDR: Per TX ring write back DMA address (8B entries)
- * @NFP_NET_CFG_TXR_SZ:      Per TX ring ring size (1B entries)
+ * @NFP_NET_CFG_TXR_SZ:      Per TX ring size (1B entries)
  * @NFP_NET_CFG_TXR_VEC:     Per TX ring MSI-X table entry (1B entries)
  * @NFP_NET_CFG_TXR_PRIO:    Per TX ring priority (1B entries)
  * @NFP_NET_CFG_TXR_IRQ_MOD: Per TX ring interrupt moderation (4B entries)
@@ -299,7 +299,7 @@ struct nfp_net_fw_ver {
  * RX ring configuration (0x0800 - 0x0c00)
  * @NFP_NET_CFG_RXR_BASE:    Base offset for RX ring configuration
  * @NFP_NET_CFG_RXR_ADDR:    Per TX ring DMA address (8B entries)
- * @NFP_NET_CFG_RXR_SZ:      Per TX ring ring size (1B entries)
+ * @NFP_NET_CFG_RXR_SZ:      Per RX ring size (1B entries)
  * @NFP_NET_CFG_RXR_VEC:     Per TX ring MSI-X table entry (1B entries)
  * @NFP_NET_CFG_RXR_PRIO:    Per TX ring priority (1B entries)
  * @NFP_NET_CFG_RXR_IRQ_MOD: Per TX ring interrupt moderation (4B entries)
@@ -330,7 +330,7 @@ struct nfp_net_fw_ver {
 
 /*
  * General device stats (0x0d00 - 0x0d90)
- * all counters are 64bit.
+ * All counters are 64bit.
  */
 #define NFP_NET_CFG_STATS_BASE          0x0d00
 #define NFP_NET_CFG_STATS_RX_DISCARDS   (NFP_NET_CFG_STATS_BASE + 0x00)
@@ -364,7 +364,7 @@ struct nfp_net_fw_ver {
 
 /*
  * Per ring stats (0x1000 - 0x1800)
- * options, 64bit per entry
+ * Options, 64bit per entry
  * @NFP_NET_CFG_TXR_STATS:   TX ring statistics (Packet and Byte count)
  * @NFP_NET_CFG_RXR_STATS:   RX ring statistics (Packet and Byte count)
  */
@@ -375,9 +375,9 @@ struct nfp_net_fw_ver {
 #define NFP_NET_CFG_RXR_STATS(_x)       (NFP_NET_CFG_RXR_STATS_BASE + \
 					 ((_x) * 0x10))
 
-/**
+/*
  * Mac stats (0x0000 - 0x0200)
- * all counters are 64bit.
+ * All counters are 64bit.
  */
 #define NFP_MAC_STATS_BASE                0x0000
 #define NFP_MAC_STATS_SIZE                0x0200
@@ -558,9 +558,11 @@ struct nfp_net_fw_ver {
 
 int nfp_net_tlv_caps_parse(struct rte_eth_dev *dev);
 
-/*
- * nfp_net_cfg_ctrl_rss() - Get RSS flag based on firmware's capability
- * @hw_cap: The firmware's capabilities
+/**
+ * Get RSS flag based on firmware's capability
+ *
+ * @param hw_cap
+ *   The firmware's capabilities
  */
 static inline uint32_t
 nfp_net_cfg_ctrl_rss(uint32_t hw_cap)
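
A plausible body for this helper, assuming NFP_NET_CFG_CTRL_RSS is
the version-1 capability bit and the version-2 bit defined above is
preferred when the firmware advertises it:

    if ((hw_cap & NFP_NET_CFG_CTRL_RSS2) != 0)
        return NFP_NET_CFG_CTRL_RSS2;

    return NFP_NET_CFG_CTRL_RSS;
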
diff --git a/drivers/net/nfp/nfp_ethdev.c b/drivers/net/nfp/nfp_ethdev.c
index 72abc4c16e..1651ac2455 100644
--- a/drivers/net/nfp/nfp_ethdev.c
+++ b/drivers/net/nfp/nfp_ethdev.c
@@ -66,7 +66,7 @@ nfp_net_start(struct rte_eth_dev *dev)
 	/* Enabling the required queues in the device */
 	nfp_net_enable_queues(dev);
 
-	/* check and configure queue intr-vector mapping */
+	/* Check and configure queue intr-vector mapping */
 	if (dev->data->dev_conf.intr_conf.rxq != 0) {
 		if (app_fw_nic->multiport) {
 			PMD_INIT_LOG(ERR, "PMD rx interrupt is not supported "
@@ -76,7 +76,7 @@ nfp_net_start(struct rte_eth_dev *dev)
 		if (rte_intr_type_get(intr_handle) == RTE_INTR_HANDLE_UIO) {
 			/*
 			 * Better not to share LSC with RX interrupts.
-			 * Unregistering LSC interrupt handler
+			 * Unregistering LSC interrupt handler.
 			 */
 			rte_intr_callback_unregister(pci_dev->intr_handle,
 					nfp_net_dev_interrupt_handler, (void *)dev);
@@ -150,7 +150,7 @@ nfp_net_start(struct rte_eth_dev *dev)
 
 	/*
 	 * Allocating rte mbufs for configured rx queues.
-	 * This requires queues being enabled before
+	 * This requires queues being enabled before.
 	 */
 	if (nfp_net_rx_freelist_setup(dev) != 0) {
 		ret = -ENOMEM;
@@ -273,11 +273,11 @@ nfp_net_close(struct rte_eth_dev *dev)
 	/* Clear ipsec */
 	nfp_ipsec_uninit(dev);
 
-	/* Cancel possible impending LSC work here before releasing the port*/
+	/* Cancel possible impending LSC work here before releasing the port */
 	rte_eal_alarm_cancel(nfp_net_dev_interrupt_delayed_handler, (void *)dev);
 
 	/* Only free PF resources after all physical ports have been closed */
-	/* Mark this port as unused and free device priv resources*/
+	/* Mark this port as unused and free device priv resources */
 	nn_cfg_writeb(hw, NFP_NET_CFG_LSC, 0xff);
 	app_fw_nic->ports[hw->idx] = NULL;
 	rte_eth_dev_release_port(dev);
@@ -300,15 +300,10 @@ nfp_net_close(struct rte_eth_dev *dev)
 
 	rte_intr_disable(pci_dev->intr_handle);
 
-	/* unregister callback func from eal lib */
+	/* Unregister callback func from eal lib */
 	rte_intr_callback_unregister(pci_dev->intr_handle,
 			nfp_net_dev_interrupt_handler, (void *)dev);
 
-	/*
-	 * The ixgbe PMD disables the pcie master on the
-	 * device. The i40e does not...
-	 */
-
 	return 0;
 }
 
@@ -497,7 +492,7 @@ nfp_net_init(struct rte_eth_dev *eth_dev)
 
 	/*
 	 * Use PF array of physical ports to get pointer to
-	 * this specific port
+	 * this specific port.
 	 */
 	hw = app_fw_nic->ports[port];
 
@@ -779,7 +774,7 @@ nfp_init_app_fw_nic(struct nfp_pf_dev *pf_dev,
 
 	/*
 	 * For coreNIC the number of vNICs exposed should be the same as the
-	 * number of physical ports
+	 * number of physical ports.
 	 */
 	if (total_vnics != nfp_eth_table->count) {
 		PMD_INIT_LOG(ERR, "Total physical ports do not match number of vNICs");
@@ -787,7 +782,7 @@ nfp_init_app_fw_nic(struct nfp_pf_dev *pf_dev,
 		goto app_cleanup;
 	}
 
-	/* Populate coreNIC app properties*/
+	/* Populate coreNIC app properties */
 	app_fw_nic->total_phyports = total_vnics;
 	app_fw_nic->pf_dev = pf_dev;
 	if (total_vnics > 1)
@@ -842,8 +837,9 @@ nfp_init_app_fw_nic(struct nfp_pf_dev *pf_dev,
 
 		eth_dev->device = &pf_dev->pci_dev->device;
 
-		/* ctrl/tx/rx BAR mappings and remaining init happens in
-		 * nfp_net_init
+		/*
+		 * Ctrl/tx/rx BAR mappings and remaining init happens in
+		 * @nfp_net_init()
 		 */
 		ret = nfp_net_init(eth_dev);
 		if (ret != 0) {
@@ -970,7 +966,7 @@ nfp_pf_init(struct rte_pci_device *pci_dev)
 	pf_dev->pci_dev = pci_dev;
 	pf_dev->nfp_eth_table = nfp_eth_table;
 
-	/* configure access to tx/rx vNIC BARs */
+	/* Configure access to tx/rx vNIC BARs */
 	addr = nfp_qcp_queue_offset(dev_info, 0);
 	cpp_id = NFP_CPP_ISLAND_ID(0, NFP_CPP_ACTION_RW, 0, 0);
 
@@ -986,7 +982,7 @@ nfp_pf_init(struct rte_pci_device *pci_dev)
 
 	/*
 	 * PF initialization has been done at this point. Call app specific
-	 * init code now
+	 * init code now.
 	 */
 	switch (pf_dev->app_fw_id) {
 	case NFP_APP_FW_CORE_NIC:
@@ -1011,7 +1007,7 @@ nfp_pf_init(struct rte_pci_device *pci_dev)
 		goto hwqueues_cleanup;
 	}
 
-	/* register the CPP bridge service here for primary use */
+	/* Register the CPP bridge service here for primary use */
 	ret = nfp_enable_cpp_service(pf_dev);
 	if (ret != 0)
 		PMD_INIT_LOG(INFO, "Enable cpp service failed.");
@@ -1079,7 +1075,7 @@ nfp_secondary_init_app_fw_nic(struct rte_pci_device *pci_dev,
 /*
  * When attaching to the NFP4000/6000 PF on a secondary process there
  * is no need to initialise the PF again. Only minimal work is required
- * here
+ * here.
  */
 static int
 nfp_pf_secondary_init(struct rte_pci_device *pci_dev)
@@ -1119,7 +1115,7 @@ nfp_pf_secondary_init(struct rte_pci_device *pci_dev)
 
 	/*
 	 * We don't have access to the PF created in the primary process
-	 * here so we have to read the number of ports from firmware
+	 * here so we have to read the number of ports from firmware.
 	 */
 	sym_tbl = nfp_rtsym_table_read(cpp);
 	if (sym_tbl == NULL) {
@@ -1216,7 +1212,7 @@ nfp_pci_uninit(struct rte_eth_dev *eth_dev)
 		rte_eth_dev_close(port_id);
 	/*
 	 * Ports can be closed and freed but hotplugging is not
-	 * currently supported
+	 * currently supported.
 	 */
 	return -ENOTSUP;
 }
diff --git a/drivers/net/nfp/nfp_ethdev_vf.c b/drivers/net/nfp/nfp_ethdev_vf.c
index d3c3c9e953..c9e72dd953 100644
--- a/drivers/net/nfp/nfp_ethdev_vf.c
+++ b/drivers/net/nfp/nfp_ethdev_vf.c
@@ -47,12 +47,12 @@ nfp_netvf_start(struct rte_eth_dev *dev)
 	/* Enabling the required queues in the device */
 	nfp_net_enable_queues(dev);
 
-	/* check and configure queue intr-vector mapping */
+	/* Check and configure queue intr-vector mapping */
 	if (dev->data->dev_conf.intr_conf.rxq != 0) {
 		if (rte_intr_type_get(intr_handle) == RTE_INTR_HANDLE_UIO) {
 			/*
 			 * Better not to share LSC with RX interrupts.
-			 * Unregistering LSC interrupt handler
+			 * Unregistering LSC interrupt handler.
 			 */
 			rte_intr_callback_unregister(pci_dev->intr_handle,
 					nfp_net_dev_interrupt_handler, (void *)dev);
@@ -101,7 +101,7 @@ nfp_netvf_start(struct rte_eth_dev *dev)
 
 	/*
 	 * Allocating rte mbufs for configured rx queues.
-	 * This requires queues being enabled before
+	 * This requires queues being enabled before.
 	 */
 	if (nfp_net_rx_freelist_setup(dev) != 0) {
 		ret = -ENOMEM;
@@ -182,18 +182,13 @@ nfp_netvf_close(struct rte_eth_dev *dev)
 
 	rte_intr_disable(pci_dev->intr_handle);
 
-	/* unregister callback func from eal lib */
+	/* Unregister callback func from eal lib */
 	rte_intr_callback_unregister(pci_dev->intr_handle,
 			nfp_net_dev_interrupt_handler, (void *)dev);
 
-	/* Cancel possible impending LSC work here before releasing the port*/
+	/* Cancel possible impending LSC work here before releasing the port */
 	rte_eal_alarm_cancel(nfp_net_dev_interrupt_delayed_handler, (void *)dev);
 
-	/*
-	 * The ixgbe PMD disables the pcie master on the
-	 * device. The i40e does not...
-	 */
-
 	return 0;
 }
 
diff --git a/drivers/net/nfp/nfp_flow.c b/drivers/net/nfp/nfp_flow.c
index 84b48daf85..fbcdb3d19e 100644
--- a/drivers/net/nfp/nfp_flow.c
+++ b/drivers/net/nfp/nfp_flow.c
@@ -108,21 +108,21 @@
 #define NVGRE_V4_LEN     (sizeof(struct rte_ether_hdr) + \
 				sizeof(struct rte_ipv4_hdr) + \
 				sizeof(struct rte_flow_item_gre) + \
-				sizeof(rte_be32_t))    /* gre key */
+				sizeof(rte_be32_t))    /* GRE key */
 #define NVGRE_V6_LEN     (sizeof(struct rte_ether_hdr) + \
 				sizeof(struct rte_ipv6_hdr) + \
 				sizeof(struct rte_flow_item_gre) + \
-				sizeof(rte_be32_t))    /* gre key */
+				sizeof(rte_be32_t))    /* GRE key */
 
 /* Process structure associated with a flow item */
 struct nfp_flow_item_proc {
-	/* Bit-mask for fields supported by this PMD. */
+	/** Bit-mask for fields supported by this PMD. */
 	const void *mask_support;
-	/* Bit-mask to use when @p item->mask is not provided. */
+	/** Bit-mask to use when @p item->mask is not provided. */
 	const void *mask_default;
-	/* Size in bytes for @p mask_support and @p mask_default. */
+	/** Size in bytes for @p mask_support and @p mask_default. */
 	const size_t mask_sz;
-	/* Merge a pattern item into a flow rule handle. */
+	/** Merge a pattern item into a flow rule handle. */
 	int (*merge)(struct nfp_app_fw_flower *app_fw_flower,
 			struct rte_flow *nfp_flow,
 			char **mbuf_off,
@@ -130,7 +130,7 @@ struct nfp_flow_item_proc {
 			const struct nfp_flow_item_proc *proc,
 			bool is_mask,
 			bool is_outer_layer);
-	/* List of possible subsequent items. */
+	/** List of possible subsequent items. */
 	const enum rte_flow_item_type *const next_item;
 };
 
@@ -308,12 +308,12 @@ nfp_check_mask_add(struct nfp_flow_priv *priv,
 
 	mask_entry = nfp_mask_table_search(priv, mask_data, mask_len);
 	if (mask_entry == NULL) {
-		/* mask entry does not exist, let's create one */
+		/* Mask entry does not exist, let's create one */
 		ret = nfp_mask_table_add(priv, mask_data, mask_len, mask_id);
 		if (ret != 0)
 			return false;
 	} else {
-		/* mask entry already exist */
+		/* Mask entry already exist */
 		mask_entry->ref_cnt++;
 		*mask_id = mask_entry->mask_id;
 	}
@@ -818,7 +818,7 @@ nfp_flow_key_layers_calculate_items(const struct rte_flow_item items[],
 		case RTE_FLOW_ITEM_TYPE_ETH:
 			PMD_DRV_LOG(DEBUG, "RTE_FLOW_ITEM_TYPE_ETH detected");
 			/*
-			 * eth is set with no specific params.
+			 * Eth is set with no specific params.
 			 * NFP does not need this.
 			 */
 			if (item->spec == NULL)
@@ -879,7 +879,7 @@ nfp_flow_key_layers_calculate_items(const struct rte_flow_item items[],
 				key_ls->key_size += sizeof(struct nfp_flower_ipv4_udp_tun);
 				/*
 				 * The outer l3 layer information is
-				 * in `struct nfp_flower_ipv4_udp_tun`
+				 * in `struct nfp_flower_ipv4_udp_tun`.
 				 */
 				key_ls->key_size -= sizeof(struct nfp_flower_ipv4);
 			} else if (outer_ip6_flag) {
@@ -889,7 +889,7 @@ nfp_flow_key_layers_calculate_items(const struct rte_flow_item items[],
 				key_ls->key_size += sizeof(struct nfp_flower_ipv6_udp_tun);
 				/*
 				 * The outer l3 layer information is
-				 * in `struct nfp_flower_ipv6_udp_tun`
+				 * in `struct nfp_flower_ipv6_udp_tun`.
 				 */
 				key_ls->key_size -= sizeof(struct nfp_flower_ipv6);
 			} else {
@@ -910,7 +910,7 @@ nfp_flow_key_layers_calculate_items(const struct rte_flow_item items[],
 				key_ls->key_size += sizeof(struct nfp_flower_ipv4_udp_tun);
 				/*
 				 * The outer l3 layer information is
-				 * in `struct nfp_flower_ipv4_udp_tun`
+				 * in `struct nfp_flower_ipv4_udp_tun`.
 				 */
 				key_ls->key_size -= sizeof(struct nfp_flower_ipv4);
 			} else if (outer_ip6_flag) {
@@ -918,7 +918,7 @@ nfp_flow_key_layers_calculate_items(const struct rte_flow_item items[],
 				key_ls->key_size += sizeof(struct nfp_flower_ipv6_udp_tun);
 				/*
 				 * The outer l3 layer information is
-				 * in `struct nfp_flower_ipv6_udp_tun`
+				 * in `struct nfp_flower_ipv6_udp_tun`.
 				 */
 				key_ls->key_size -= sizeof(struct nfp_flower_ipv6);
 			} else {
@@ -939,7 +939,7 @@ nfp_flow_key_layers_calculate_items(const struct rte_flow_item items[],
 				key_ls->key_size += sizeof(struct nfp_flower_ipv4_gre_tun);
 				/*
 				 * The outer l3 layer information is
-				 * in `struct nfp_flower_ipv4_gre_tun`
+				 * in `struct nfp_flower_ipv4_gre_tun`.
 				 */
 				key_ls->key_size -= sizeof(struct nfp_flower_ipv4);
 			} else if (outer_ip6_flag) {
@@ -947,7 +947,7 @@ nfp_flow_key_layers_calculate_items(const struct rte_flow_item items[],
 				key_ls->key_size += sizeof(struct nfp_flower_ipv6_gre_tun);
 				/*
 				 * The outer l3 layer information is
-				 * in `struct nfp_flower_ipv6_gre_tun`
+				 * in `struct nfp_flower_ipv6_gre_tun`.
 				 */
 				key_ls->key_size -= sizeof(struct nfp_flower_ipv6);
 			} else {
@@ -1309,8 +1309,8 @@ nfp_flow_merge_ipv4(__rte_unused struct nfp_app_fw_flower *app_fw_flower,
 		}
 
 		/*
-		 * reserve space for L4 info.
-		 * rte_flow has ipv4 before L4 but NFP flower fw requires L4 before ipv4
+		 * Reserve space for L4 info.
+		 * rte_flow has ipv4 before L4 but NFP flower fw requires L4 before ipv4.
 		 */
 		if ((meta_tci->nfp_flow_key_layer & NFP_FLOWER_LAYER_TP) != 0)
 			*mbuf_off += sizeof(struct nfp_flower_tp_ports);
@@ -1392,8 +1392,8 @@ nfp_flow_merge_ipv6(__rte_unused struct nfp_app_fw_flower *app_fw_flower,
 		}
 
 		/*
-		 * reserve space for L4 info.
-		 * rte_flow has ipv4 before L4 but NFP flower fw requires L4 before ipv6
+		 * Reserve space for L4 info.
+		 * rte_flow has ipv6 before L4 but NFP flower fw requires L4 before ipv6.
 		 */
 		if ((meta_tci->nfp_flow_key_layer & NFP_FLOWER_LAYER_TP) != 0)
 			*mbuf_off += sizeof(struct nfp_flower_tp_ports);
@@ -2127,7 +2127,7 @@ nfp_flow_compile_items(struct nfp_flower_representor *representor,
 	if (nfp_flow_tcp_flag_check(items))
 		nfp_flow->tcp_flag = true;
 
-	/* Check if this is a tunnel flow and get the inner item*/
+	/* Check if this is a tunnel flow and get the inner item */
 	is_tun_flow = nfp_flow_inner_item_get(items, &loop_item);
 	if (is_tun_flow)
 		is_outer_layer = false;
@@ -3366,9 +3366,9 @@ nfp_flow_action_raw_encap(struct nfp_app_fw_flower *app_fw_flower,
 		return -EINVAL;
 	}
 
-	/* Pre_tunnel action must be the first on action list.
-	 * If other actions already exist, they need to be
-	 * pushed forward.
+	/*
+	 * Pre_tunnel action must be the first on action list.
+	 * If other actions already exist, they need to be pushed forward.
 	 */
 	act_len = act_data - actions;
 	if (act_len != 0) {
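
A hedged sketch of the "push forward" step; pre_tun_len stands in
for the size of the pre_tunnel action and is hypothetical here:

    /* Make room at the head of the action list, keeping what exists */
    memmove(actions + pre_tun_len, actions, act_len);
    /* ...then emit the pre_tunnel action at the head... */
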
@@ -4384,7 +4384,7 @@ nfp_flow_priv_init(struct nfp_pf_dev *pf_dev)
 		goto free_mask_id;
 	}
 
-	/* flow stats */
+	/* Flow stats */
 	rte_spinlock_init(&priv->stats_lock);
 	stats_size = (ctx_count & NFP_FL_STAT_ID_STAT) |
 			((ctx_split - 1) & NFP_FL_STAT_ID_MU_NUM);
@@ -4398,7 +4398,7 @@ nfp_flow_priv_init(struct nfp_pf_dev *pf_dev)
 		goto free_stats_id;
 	}
 
-	/* mask table */
+	/* Mask table */
 	mask_hash_params.hash_func_init_val = priv->hash_seed;
 	priv->mask_table = rte_hash_create(&mask_hash_params);
 	if (priv->mask_table == NULL) {
@@ -4407,7 +4407,7 @@ nfp_flow_priv_init(struct nfp_pf_dev *pf_dev)
 		goto free_stats;
 	}
 
-	/* flow table */
+	/* Flow table */
 	flow_hash_params.hash_func_init_val = priv->hash_seed;
 	flow_hash_params.entries = ctx_count;
 	priv->flow_table = rte_hash_create(&flow_hash_params);
@@ -4417,7 +4417,7 @@ nfp_flow_priv_init(struct nfp_pf_dev *pf_dev)
 		goto free_mask_table;
 	}
 
-	/* pre tunnel table */
+	/* Pre tunnel table */
 	priv->pre_tun_cnt = 1;
 	pre_tun_hash_params.hash_func_init_val = priv->hash_seed;
 	priv->pre_tun_table = rte_hash_create(&pre_tun_hash_params);
@@ -4446,15 +4446,15 @@ nfp_flow_priv_init(struct nfp_pf_dev *pf_dev)
 		goto free_ct_zone_table;
 	}
 
-	/* ipv4 off list */
+	/* IPv4 off list */
 	rte_spinlock_init(&priv->ipv4_off_lock);
 	LIST_INIT(&priv->ipv4_off_list);
 
-	/* ipv6 off list */
+	/* IPv6 off list */
 	rte_spinlock_init(&priv->ipv6_off_lock);
 	LIST_INIT(&priv->ipv6_off_list);
 
-	/* neighbor next list */
+	/* Neighbor next list */
 	LIST_INIT(&priv->nn_list);
 
 	return 0;
diff --git a/drivers/net/nfp/nfp_flow.h b/drivers/net/nfp/nfp_flow.h
index ed06eca371..ab38dbe1f4 100644
--- a/drivers/net/nfp/nfp_flow.h
+++ b/drivers/net/nfp/nfp_flow.h
@@ -126,19 +126,19 @@ struct nfp_ipv6_addr_entry {
 struct nfp_flow_priv {
 	uint32_t hash_seed; /**< Hash seed for hash tables in this structure. */
 	uint64_t flower_version; /**< Flow version, always increase. */
-	/* mask hash table */
+	/* Mask hash table */
 	struct nfp_fl_mask_id mask_ids; /**< Entry for mask hash table */
 	struct rte_hash *mask_table; /**< Hash table to store mask ids. */
-	/* flow hash table */
+	/* Flow hash table */
 	struct rte_hash *flow_table; /**< Hash table to store flow rules. */
-	/* flow stats */
+	/* Flow stats */
 	uint32_t active_mem_unit; /**< The size of active mem units. */
 	uint32_t total_mem_units; /**< The size of total mem units. */
 	uint32_t stats_ring_size; /**< The size of stats id ring. */
 	struct nfp_fl_stats_id stats_ids; /**< The stats id ring. */
 	struct nfp_fl_stats *stats; /**< Store stats of flow. */
 	rte_spinlock_t stats_lock; /** < Lock the update of 'stats' field. */
-	/* pre tunnel rule */
+	/* Pre tunnel rule */
 	uint16_t pre_tun_cnt; /**< The size of pre tunnel rule */
 	uint8_t pre_tun_bitmap[NFP_TUN_PRE_TUN_RULE_LIMIT]; /**< Bitmap of pre tunnel rule */
 	struct rte_hash *pre_tun_table; /**< Hash table to store pre tunnel rule */
@@ -148,7 +148,7 @@ struct nfp_flow_priv {
 	/* IPv6 off */
 	LIST_HEAD(, nfp_ipv6_addr_entry) ipv6_off_list; /**< Store ipv6 off */
 	rte_spinlock_t ipv6_off_lock; /**< Lock the ipv6 off list */
-	/* neighbor next */
+	/* Neighbor next */
 	LIST_HEAD(, nfp_fl_tun)nn_list; /**< Store nn entry */
 	/* Conntrack */
 	struct rte_hash *ct_zone_table; /**< Hash table to store ct zone entry */
diff --git a/drivers/net/nfp/nfp_ipsec.h b/drivers/net/nfp/nfp_ipsec.h
index aaebb80fe1..d7a729398a 100644
--- a/drivers/net/nfp/nfp_ipsec.h
+++ b/drivers/net/nfp/nfp_ipsec.h
@@ -82,7 +82,7 @@ struct ipsec_discard_stats {
 	uint32_t discards_alignment;             /**< Alignment error */
 	uint32_t discards_hard_bytelimit;        /**< Hard byte Count limit */
 	uint32_t discards_seq_num_wrap;          /**< Sequ Number wrap */
-	uint32_t discards_pmtu_exceeded;         /**< PMTU Limit exceeded*/
+	uint32_t discards_pmtu_exceeded;         /**< PMTU Limit exceeded */
 	uint32_t discards_arw_old_seq;           /**< Anti-Replay seq small */
 	uint32_t discards_arw_replay;            /**< Anti-Replay seq rcvd */
 	uint32_t discards_ctrl_word;             /**< Bad SA Control word */
@@ -99,16 +99,16 @@ struct ipsec_discard_stats {
 
 struct ipsec_get_sa_stats {
 	uint32_t seq_lo;                         /**< Sequence Number (low 32bits) */
-	uint32_t seq_high;                       /**< Sequence Number (high 32bits)*/
+	uint32_t seq_high;                       /**< Sequence Number (high 32bits) */
 	uint32_t arw_counter_lo;                 /**< Anti-replay wndw cntr */
 	uint32_t arw_counter_high;               /**< Anti-replay wndw cntr */
 	uint32_t arw_bitmap_lo;                  /**< Anti-replay wndw bitmap */
 	uint32_t arw_bitmap_high;                /**< Anti-replay wndw bitmap */
 	uint32_t spare:1;
-	uint32_t soft_byte_exceeded :1;          /**< Soft lifetime byte cnt exceeded*/
-	uint32_t hard_byte_exceeded :1;          /**< Hard lifetime byte cnt exceeded*/
-	uint32_t soft_time_exceeded :1;          /**< Soft lifetime time limit exceeded*/
-	uint32_t hard_time_exceeded :1;          /**< Hard lifetime time limit exceeded*/
+	uint32_t soft_byte_exceeded :1;          /**< Soft lifetime byte cnt exceeded */
+	uint32_t hard_byte_exceeded :1;          /**< Hard lifetime byte cnt exceeded */
+	uint32_t soft_time_exceeded :1;          /**< Soft lifetime time limit exceeded */
+	uint32_t hard_time_exceeded :1;          /**< Hard lifetime time limit exceeded */
 	uint32_t spare1:27;
 	uint32_t lifetime_byte_count;
 	uint32_t pkt_count;
diff --git a/drivers/net/nfp/nfp_rxtx.c b/drivers/net/nfp/nfp_rxtx.c
index b37a338b2f..9e08e38955 100644
--- a/drivers/net/nfp/nfp_rxtx.c
+++ b/drivers/net/nfp/nfp_rxtx.c
@@ -20,43 +20,22 @@
 /* Maximum number of supported VLANs in parsed form packet metadata. */
 #define NFP_META_MAX_VLANS       2
 
-/*
- * struct nfp_meta_parsed - Record metadata parsed from packet
- *
- * Parsed NFP packet metadata are recorded in this struct. The content is
- * read-only after it have been recorded during parsing by nfp_net_parse_meta().
- *
- * @port_id: Port id value
- * @sa_idx: IPsec SA index
- * @hash: RSS hash value
- * @hash_type: RSS hash type
- * @ipsec_type: IPsec type
- * @vlan_layer: The layers of VLAN info which are passed from nic.
- *              Only this number of entries of the @vlan array are valid.
- *
- * @vlan: Holds information parses from NFP_NET_META_VLAN. The inner most vlan
- *        starts at position 0 and only @vlan_layer entries contain valid
- *        information.
- *
- *        Currently only 2 layers of vlan are supported,
- *        vlan[0] - vlan strip info
- *        vlan[1] - qinq strip info
- *
- * @vlan.offload:  Flag indicates whether VLAN is offloaded
- * @vlan.tpid: Vlan TPID
- * @vlan.tci: Vlan TCI including PCP + Priority + VID
- */
+/* Record metadata parsed from packet */
 struct nfp_meta_parsed {
-	uint32_t port_id;
-	uint32_t sa_idx;
-	uint32_t hash;
-	uint8_t hash_type;
-	uint8_t ipsec_type;
-	uint8_t vlan_layer;
+	uint32_t port_id;         /**< Port id value */
+	uint32_t sa_idx;          /**< IPsec SA index */
+	uint32_t hash;            /**< RSS hash value */
+	uint8_t hash_type;        /**< RSS hash type */
+	uint8_t ipsec_type;       /**< IPsec type */
+	uint8_t vlan_layer;       /**< The valid number of value in @vlan[] */
+	/**
+	 * Holds information parses from NFP_NET_META_VLAN.
+	 * The inner most vlan starts at position 0
+	 */
 	struct {
-		uint8_t offload;
-		uint8_t tpid;
-		uint16_t tci;
+		uint8_t offload;  /**< Flag indicates whether VLAN is offloaded */
+		uint8_t tpid;     /**< Vlan TPID */
+		uint16_t tci;     /**< Vlan TCI (PCP + Priority + VID) */
 	} vlan[NFP_META_MAX_VLANS];
 };
 
@@ -156,7 +135,7 @@ struct nfp_ptype_parsed {
 	uint8_t outer_l3_ptype; /**< Packet type of outer layer 3. */
 };
 
-/* set mbuf checksum flags based on RX descriptor flags */
+/* Set mbuf checksum flags based on RX descriptor flags */
 void
 nfp_net_rx_cksum(struct nfp_net_rxq *rxq,
 		struct nfp_net_rx_desc *rxd,
@@ -254,7 +233,7 @@ nfp_net_rx_queue_count(void *rx_queue)
 	 * descriptors and counting all four if the first has the DD
 	 * bit on. Of course, this is not accurate but can be good for
 	 * performance. But ideally that should be done in descriptors
-	 * chunks belonging to the same cache line
+	 * chunks belonging to the same cache line.
 	 */
 
 	while (count < rxq->rx_count) {
@@ -265,7 +244,7 @@ nfp_net_rx_queue_count(void *rx_queue)
 		count++;
 		idx++;
 
-		/* Wrapping? */
+		/* Wrapping */
 		if ((idx) == rxq->rx_count)
 			idx = 0;
 	}
@@ -273,7 +252,7 @@ nfp_net_rx_queue_count(void *rx_queue)
 	return count;
 }
 
-/* nfp_net_parse_chained_meta() - Parse the chained metadata from packet */
+/* Parse the chained metadata from packet */
 static bool
 nfp_net_parse_chained_meta(uint8_t *meta_base,
 		rte_be32_t meta_header,
@@ -320,12 +299,7 @@ nfp_net_parse_chained_meta(uint8_t *meta_base,
 	return true;
 }
 
-/*
- * nfp_net_parse_meta_hash() - Set mbuf hash data based on the metadata info
- *
- * The RSS hash and hash-type are prepended to the packet data.
- * Extract and decode it and set the mbuf fields.
- */
+/* Set mbuf hash data based on the metadata info */
 static void
 nfp_net_parse_meta_hash(const struct nfp_meta_parsed *meta,
 		struct nfp_net_rxq *rxq,
@@ -341,7 +315,7 @@ nfp_net_parse_meta_hash(const struct nfp_meta_parsed *meta,
 }
 
 /*
- * nfp_net_parse_single_meta() - Parse the single metadata
+ * Parse the single metadata
  *
  * The RSS hash and hash-type are prepended to the packet data.
  * Get it from metadata area.
@@ -355,12 +329,7 @@ nfp_net_parse_single_meta(uint8_t *meta_base,
 	meta->hash = rte_be_to_cpu_32(*(rte_be32_t *)(meta_base + 4));
 }
 
-/*
- * nfp_net_parse_meta_vlan() - Set mbuf vlan_strip data based on metadata info
- *
- * The VLAN info TPID and TCI are prepended to the packet data.
- * Extract and decode it and set the mbuf fields.
- */
+/* Set mbuf vlan_strip data based on metadata info */
 static void
 nfp_net_parse_meta_vlan(const struct nfp_meta_parsed *meta,
 		struct nfp_net_rx_desc *rxd,
@@ -369,19 +338,14 @@ nfp_net_parse_meta_vlan(const struct nfp_meta_parsed *meta,
 {
 	struct nfp_net_hw *hw = rxq->hw;
 
-	/* Skip if hardware don't support setting vlan. */
+	/* Skip if firmware don't support setting vlan. */
 	if ((hw->ctrl & (NFP_NET_CFG_CTRL_RXVLAN | NFP_NET_CFG_CTRL_RXVLAN_V2)) == 0)
 		return;
 
 	/*
-	 * The nic support the two way to send the VLAN info,
-	 * 1. According the metadata to send the VLAN info when NFP_NET_CFG_CTRL_RXVLAN_V2
-	 * is set
-	 * 2. According the descriptor to sned the VLAN info when NFP_NET_CFG_CTRL_RXVLAN
-	 * is set
-	 *
-	 * If the nic doesn't send the VLAN info, it is not necessary
-	 * to do anything.
+	 * The firmware support two ways to send the VLAN info (with priority) :
+	 * 1. Using the metadata when NFP_NET_CFG_CTRL_RXVLAN_V2 is set,
+	 * 2. Using the descriptor when NFP_NET_CFG_CTRL_RXVLAN is set.
 	 */
 	if ((hw->ctrl & NFP_NET_CFG_CTRL_RXVLAN_V2) != 0) {
 		if (meta->vlan_layer > 0 && meta->vlan[0].offload != 0) {
@@ -397,7 +361,7 @@ nfp_net_parse_meta_vlan(const struct nfp_meta_parsed *meta,
 }
 
 /*
- * nfp_net_parse_meta_qinq() - Set mbuf qinq_strip data based on metadata info
+ * Set mbuf qinq_strip data based on metadata info
  *
  * The out VLAN tci are prepended to the packet data.
  * Extract and decode it and set the mbuf fields.
@@ -469,7 +433,7 @@ nfp_net_parse_meta_ipsec(struct nfp_meta_parsed *meta,
 	}
 }
 
-/* nfp_net_parse_meta() - Parse the metadata from packet */
+/* Parse the metadata from packet */
 static void
 nfp_net_parse_meta(struct nfp_net_rx_desc *rxds,
 		struct nfp_net_rxq *rxq,
@@ -672,7 +636,7 @@ nfp_net_parse_ptype(struct nfp_net_rx_desc *rxds,
  * doing now have any benefit at all. Again, tests with this change have not
  * shown any improvement. Also, rte_mempool_get_bulk returns all or nothing
  * so looking at the implications of this type of allocation should be studied
- * deeply
+ * deeply.
  */
 
 uint16_t
@@ -695,7 +659,7 @@ nfp_net_recv_pkts(void *rx_queue,
 	if (unlikely(rxq == NULL)) {
 		/*
 		 * DPDK just checks the queue is lower than max queues
-		 * enabled. But the queue needs to be configured
+		 * enabled. But the queue needs to be configured.
 		 */
 		PMD_RX_LOG(ERR, "RX Bad queue");
 		return 0;
@@ -722,7 +686,7 @@ nfp_net_recv_pkts(void *rx_queue,
 
 		/*
 		 * We got a packet. Let's alloc a new mbuf for refilling the
-		 * free descriptor ring as soon as possible
+		 * free descriptor ring as soon as possible.
 		 */
 		new_mb = rte_pktmbuf_alloc(rxq->mem_pool);
 		if (unlikely(new_mb == NULL)) {
@@ -734,7 +698,7 @@ nfp_net_recv_pkts(void *rx_queue,
 
 		/*
 		 * Grab the mbuf and refill the descriptor with the
-		 * previously allocated mbuf
+		 * previously allocated mbuf.
 		 */
 		mb = rxb->mbuf;
 		rxb->mbuf = new_mb;
@@ -751,7 +715,7 @@ nfp_net_recv_pkts(void *rx_queue,
 			/*
 			 * This should not happen and the user has the
 			 * responsibility of avoiding it. But we have
-			 * to give some info about the error
+			 * to give some info about the error.
 			 */
 			PMD_RX_LOG(ERR, "mbuf overflow likely due to the RX offset.");
 			rte_pktmbuf_free(mb);
@@ -796,7 +760,7 @@ nfp_net_recv_pkts(void *rx_queue,
 		nb_hold++;
 
 		rxq->rd_p++;
-		if (unlikely(rxq->rd_p == rxq->rx_count)) /* wrapping?*/
+		if (unlikely(rxq->rd_p == rxq->rx_count)) /* Wrapping */
 			rxq->rd_p = 0;
 	}
 
@@ -810,7 +774,7 @@ nfp_net_recv_pkts(void *rx_queue,
 
 	/*
 	 * FL descriptors needs to be written before incrementing the
-	 * FL queue WR pointer
+	 * FL queue WR pointer.
 	 */
 	rte_wmb();
 	if (nb_hold > rxq->rx_free_thresh) {
@@ -891,7 +855,7 @@ nfp_net_rx_queue_setup(struct rte_eth_dev *dev,
 
 	/*
 	 * Free memory prior to re-allocation if needed. This is the case after
-	 * calling nfp_net_stop
+	 * calling @nfp_net_stop().
 	 */
 	if (dev->data->rx_queues[queue_idx] != NULL) {
 		nfp_net_rx_queue_release(dev, queue_idx);
@@ -913,7 +877,7 @@ nfp_net_rx_queue_setup(struct rte_eth_dev *dev,
 
 	/*
 	 * Tracking mbuf size for detecting a potential mbuf overflow due to
-	 * RX offset
+	 * RX offset.
 	 */
 	rxq->mem_pool = mp;
 	rxq->mbuf_size = rxq->mem_pool->elt_size;
@@ -944,7 +908,7 @@ nfp_net_rx_queue_setup(struct rte_eth_dev *dev,
 	rxq->dma = (uint64_t)tz->iova;
 	rxq->rxds = tz->addr;
 
-	/* mbuf pointers array for referencing mbufs linked to RX descriptors */
+	/* Mbuf pointers array for referencing mbufs linked to RX descriptors */
 	rxq->rxbufs = rte_zmalloc_socket("rxq->rxbufs",
 			sizeof(*rxq->rxbufs) * nb_desc, RTE_CACHE_LINE_SIZE,
 			socket_id);
@@ -960,7 +924,7 @@ nfp_net_rx_queue_setup(struct rte_eth_dev *dev,
 
 	/*
 	 * Telling the HW about the physical address of the RX ring and number
-	 * of descriptors in log2 format
+	 * of descriptors in log2 format.
 	 */
 	nn_cfg_writeq(hw, NFP_NET_CFG_RXR_ADDR(queue_idx), rxq->dma);
 	nn_cfg_writeb(hw, NFP_NET_CFG_RXR_SZ(queue_idx), rte_log2_u32(nb_desc));
@@ -968,11 +932,14 @@ nfp_net_rx_queue_setup(struct rte_eth_dev *dev,
 	return 0;
 }
 
-/*
- * nfp_net_tx_free_bufs - Check for descriptors with a complete
- * status
- * @txq: TX queue to work with
- * Returns number of descriptors freed
+/**
+ * Check for descriptors with a complete status
+ *
+ * @param txq
+ *   TX queue to work with
+ *
+ * @return
+ *   Number of descriptors freed
  */
 uint32_t
 nfp_net_tx_free_bufs(struct nfp_net_txq *txq)
diff --git a/drivers/net/nfp/nfp_rxtx.h b/drivers/net/nfp/nfp_rxtx.h
index 98ef6c3d93..899cc42c97 100644
--- a/drivers/net/nfp/nfp_rxtx.h
+++ b/drivers/net/nfp/nfp_rxtx.h
@@ -19,21 +19,11 @@
 /* Maximum number of NFP packet metadata fields. */
 #define NFP_META_MAX_FIELDS      8
 
-/*
- * struct nfp_net_meta_raw - Raw memory representation of packet metadata
- *
- * Describe the raw metadata format, useful when preparing metadata for a
- * transmission mbuf.
- *
- * @header: NFD3 or NFDk field type header (see format in nfp.rst)
- * @data: Array of each fields data member
- * @length: Keep track of number of valid fields in @header and data. Not part
- *          of the raw metadata.
- */
+/* Describe the raw metadata format. */
 struct nfp_net_meta_raw {
-	uint32_t header;
-	uint32_t data[NFP_META_MAX_FIELDS];
-	uint8_t length;
+	uint32_t header; /**< Field type header (see format in nfp.rst) */
+	uint32_t data[NFP_META_MAX_FIELDS]; /**< Array of each fields data member */
+	uint8_t length; /**< Number of valid fields in @header */
 };
 
 /* Descriptor alignment */
-- 
2.39.1


^ permalink raw reply	[flat|nested] 40+ messages in thread

* [PATCH v3 07/11] net/nfp: standard the blank character
  2023-10-13  6:06   ` [PATCH v3 00/11] Unify the PMD coding style Chaoyong He
                       ` (5 preceding siblings ...)
  2023-10-13  6:06     ` [PATCH v3 06/11] net/nfp: standard the comment style Chaoyong He
@ 2023-10-13  6:06     ` Chaoyong He
  2023-10-13  6:06     ` [PATCH v3 08/11] net/nfp: unify the guide line of header file Chaoyong He
                       ` (4 subsequent siblings)
  11 siblings, 0 replies; 40+ messages in thread
From: Chaoyong He @ 2023-10-13  6:06 UTC (permalink / raw)
  To: dev; +Cc: oss-drivers, Chaoyong He, Long Wu, Peng Zhang

Use space characters for alignment instead of TAB characters.
There should be exactly one blank line separating blocks of logic,
no more and no less.
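
For illustration, a minimal sketch of the intended style (the names
below are hypothetical, not taken from the driver):

    /* Alignment done with space characters, not TABs. */
    #define EXAMPLE_RX_OFFSET        0x0050
    #define EXAMPLE_RX_OFFSET_DYN    0

    static int
    example_check(uint32_t val)
    {
            if (val == 0)
                    return -1;

            /* Exactly one blank line separates the blocks of logic. */
            return 0;
    }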

Signed-off-by: Chaoyong He <chaoyong.he@corigine.com>
Reviewed-by: Long Wu <long.wu@corigine.com>
Reviewed-by: Peng Zhang <peng.zhang@corigine.com>
---
 drivers/net/nfp/nfp_common.c           | 39 +++++++++--------
 drivers/net/nfp/nfp_common.h           |  6 +--
 drivers/net/nfp/nfp_cpp_bridge.c       |  5 +++
 drivers/net/nfp/nfp_ctrl.h             |  6 +--
 drivers/net/nfp/nfp_ethdev.c           | 58 +++++++++++++-------------
 drivers/net/nfp/nfp_ethdev_vf.c        | 49 +++++++++++-----------
 drivers/net/nfp/nfp_flow.c             | 27 +++++++-----
 drivers/net/nfp/nfp_flow.h             |  7 ++++
 drivers/net/nfp/nfp_rxtx.c             |  7 ++--
 drivers/net/nfp/nfpcore/nfp_resource.h |  2 +-
 10 files changed, 114 insertions(+), 92 deletions(-)

diff --git a/drivers/net/nfp/nfp_common.c b/drivers/net/nfp/nfp_common.c
index 130f004b4d..a102c6f272 100644
--- a/drivers/net/nfp/nfp_common.c
+++ b/drivers/net/nfp/nfp_common.c
@@ -36,6 +36,7 @@ enum nfp_xstat_group {
 	NFP_XSTAT_GROUP_NET,
 	NFP_XSTAT_GROUP_MAC
 };
+
 struct nfp_xstat {
 	char name[RTE_ETH_XSTATS_NAME_SIZE];
 	int offset;
@@ -184,6 +185,7 @@ nfp_net_notify_port_speed(struct nfp_net_hw *hw,
 		nn_cfg_writew(hw, NFP_NET_CFG_STS_NSP_LINK_RATE, NFP_NET_CFG_STS_LINK_RATE_UNKNOWN);
 		return;
 	}
+
 	/*
 	 * Link is up so write the link speed from the eth_table to
 	 * NFP_NET_CFG_STS_NSP_LINK_RATE.
@@ -223,17 +225,21 @@ __nfp_net_reconfig(struct nfp_net_hw *hw,
 		new = nn_cfg_readl(hw, NFP_NET_CFG_UPDATE);
 		if (new == 0)
 			break;
+
 		if ((new & NFP_NET_CFG_UPDATE_ERR) != 0) {
 			PMD_DRV_LOG(ERR, "Reconfig error: %#08x", new);
 			return -1;
 		}
+
 		if (cnt >= NFP_NET_POLL_TIMEOUT) {
 			PMD_DRV_LOG(ERR, "Reconfig timeout for %#08x after %u ms",
 					update, cnt);
 			return -EIO;
 		}
+
 		nanosleep(&wait, 0); /* Waiting for a 1ms */
 	}
+
 	PMD_DRV_LOG(DEBUG, "Ack DONE");
 	return 0;
 }
@@ -387,7 +393,6 @@ nfp_net_configure(struct rte_eth_dev *dev)
 	struct rte_eth_txmode *txmode;
 
 	hw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private);
-
 	dev_conf = &dev->data->dev_conf;
 	rxmode = &dev_conf->rxmode;
 	txmode = &dev_conf->txmode;
@@ -560,11 +565,13 @@ nfp_net_set_mac_addr(struct rte_eth_dev *dev,
 	if ((hw->ctrl & NFP_NET_CFG_CTRL_ENABLE) != 0 &&
 			(hw->cap & NFP_NET_CFG_CTRL_LIVE_ADDR) != 0)
 		ctrl |= NFP_NET_CFG_CTRL_LIVE_ADDR;
+
 	/* Signal the NIC about the change */
 	if (nfp_net_reconfig(hw, ctrl, update) != 0) {
 		PMD_DRV_LOG(ERR, "MAC address update failed");
 		return -EIO;
 	}
+
 	return 0;
 }
 
@@ -832,13 +839,11 @@ nfp_net_stats_get(struct rte_eth_dev *dev,
 
 		nfp_dev_stats.q_ipackets[i] =
 				nn_cfg_readq(hw, NFP_NET_CFG_RXR_STATS(i));
-
 		nfp_dev_stats.q_ipackets[i] -=
 				hw->eth_stats_base.q_ipackets[i];
 
 		nfp_dev_stats.q_ibytes[i] =
 				nn_cfg_readq(hw, NFP_NET_CFG_RXR_STATS(i) + 0x8);
-
 		nfp_dev_stats.q_ibytes[i] -=
 				hw->eth_stats_base.q_ibytes[i];
 	}
@@ -850,42 +855,34 @@ nfp_net_stats_get(struct rte_eth_dev *dev,
 
 		nfp_dev_stats.q_opackets[i] =
 				nn_cfg_readq(hw, NFP_NET_CFG_TXR_STATS(i));
-
 		nfp_dev_stats.q_opackets[i] -= hw->eth_stats_base.q_opackets[i];
 
 		nfp_dev_stats.q_obytes[i] =
 				nn_cfg_readq(hw, NFP_NET_CFG_TXR_STATS(i) + 0x8);
-
 		nfp_dev_stats.q_obytes[i] -= hw->eth_stats_base.q_obytes[i];
 	}
 
 	nfp_dev_stats.ipackets = nn_cfg_readq(hw, NFP_NET_CFG_STATS_RX_FRAMES);
-
 	nfp_dev_stats.ipackets -= hw->eth_stats_base.ipackets;
 
 	nfp_dev_stats.ibytes = nn_cfg_readq(hw, NFP_NET_CFG_STATS_RX_OCTETS);
-
 	nfp_dev_stats.ibytes -= hw->eth_stats_base.ibytes;
 
 	nfp_dev_stats.opackets =
 			nn_cfg_readq(hw, NFP_NET_CFG_STATS_TX_FRAMES);
-
 	nfp_dev_stats.opackets -= hw->eth_stats_base.opackets;
 
 	nfp_dev_stats.obytes =
 			nn_cfg_readq(hw, NFP_NET_CFG_STATS_TX_OCTETS);
-
 	nfp_dev_stats.obytes -= hw->eth_stats_base.obytes;
 
 	/* Reading general device stats */
 	nfp_dev_stats.ierrors =
 			nn_cfg_readq(hw, NFP_NET_CFG_STATS_RX_ERRORS);
-
 	nfp_dev_stats.ierrors -= hw->eth_stats_base.ierrors;
 
 	nfp_dev_stats.oerrors =
 			nn_cfg_readq(hw, NFP_NET_CFG_STATS_TX_ERRORS);
-
 	nfp_dev_stats.oerrors -= hw->eth_stats_base.oerrors;
 
 	/* RX ring mbuf allocation failures */
@@ -893,7 +890,6 @@ nfp_net_stats_get(struct rte_eth_dev *dev,
 
 	nfp_dev_stats.imissed =
 			nn_cfg_readq(hw, NFP_NET_CFG_STATS_RX_DISCARDS);
-
 	nfp_dev_stats.imissed -= hw->eth_stats_base.imissed;
 
 	if (stats != NULL) {
@@ -981,6 +977,7 @@ nfp_net_xstats_size(const struct rte_eth_dev *dev)
 			if (nfp_net_xstats[count].group == NFP_XSTAT_GROUP_MAC)
 				break;
 		}
+
 		return count;
 	}
 
@@ -1154,6 +1151,7 @@ nfp_net_xstats_reset(struct rte_eth_dev *dev)
 		hw->eth_xstats_base[id].id = id;
 		hw->eth_xstats_base[id].value = nfp_net_xstats_value(dev, id, true);
 	}
+
 	/* Successfully reset xstats, now call function to reset basic stats. */
 	return nfp_net_stats_reset(dev);
 }
@@ -1201,6 +1199,7 @@ nfp_net_infos_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 	dev_info->max_rx_queues = (uint16_t)hw->max_rx_queues;
 	dev_info->max_tx_queues = (uint16_t)hw->max_tx_queues;
 	dev_info->min_rx_bufsize = RTE_ETHER_MIN_MTU;
+
 	/**
 	 * The maximum rx packet length (max_rx_pktlen) is set to the
 	 * maximum supported frame size that the NFP can handle. This
@@ -1368,6 +1367,7 @@ nfp_net_supported_ptypes_get(struct rte_eth_dev *dev)
 
 	if (dev->rx_pkt_burst == nfp_net_recv_pkts)
 		return ptypes;
+
 	return NULL;
 }
 
@@ -1381,7 +1381,6 @@ nfp_rx_queue_intr_enable(struct rte_eth_dev *dev,
 
 	hw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private);
 	pci_dev = RTE_ETH_DEV_TO_PCI(dev);
-
 	if (rte_intr_type_get(pci_dev->intr_handle) != RTE_INTR_HANDLE_UIO)
 		base = 1;
 
@@ -1402,7 +1401,6 @@ nfp_rx_queue_intr_disable(struct rte_eth_dev *dev,
 
 	hw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private);
 	pci_dev = RTE_ETH_DEV_TO_PCI(dev);
-
 	if (rte_intr_type_get(pci_dev->intr_handle) != RTE_INTR_HANDLE_UIO)
 		base = 1;
 
@@ -1619,11 +1617,11 @@ nfp_net_rss_reta_write(struct rte_eth_dev *dev,
 		idx = i / RTE_ETH_RETA_GROUP_SIZE;
 		shift = i % RTE_ETH_RETA_GROUP_SIZE;
 		mask = (uint8_t)((reta_conf[idx].mask >> shift) & 0xF);
-
 		if (mask == 0)
 			continue;
 
 		reta = 0;
+
 		/* If all 4 entries were set, don't need read RETA register */
 		if (mask != 0xF)
 			reta = nn_cfg_readl(hw, NFP_NET_CFG_RSS_ITBL + i);
@@ -1631,13 +1629,17 @@ nfp_net_rss_reta_write(struct rte_eth_dev *dev,
 		for (j = 0; j < 4; j++) {
 			if ((mask & (0x1 << j)) == 0)
 				continue;
+
 			/* Clearing the entry bits */
 			if (mask != 0xF)
 				reta &= ~(0xFF << (8 * j));
+
 			reta |= reta_conf[idx].reta[shift + j] << (8 * j);
 		}
+
 		nn_cfg_writel(hw, NFP_NET_CFG_RSS_ITBL + (idx * 64) + shift, reta);
 	}
+
 	return 0;
 }
 
@@ -1682,7 +1684,6 @@ nfp_net_reta_query(struct rte_eth_dev *dev,
 	struct nfp_net_hw *hw;
 
 	hw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private);
-
 	if ((hw->ctrl & NFP_NET_CFG_CTRL_RSS_ANY) == 0)
 		return -EINVAL;
 
@@ -1710,10 +1711,12 @@ nfp_net_reta_query(struct rte_eth_dev *dev,
 		for (j = 0; j < 4; j++) {
 			if ((mask & (0x1 << j)) == 0)
 				continue;
+
 			reta_conf[idx].reta[shift + j] =
 					(uint8_t)((reta >> (8 * j)) & 0xF);
 		}
 	}
+
 	return 0;
 }
 
@@ -1791,6 +1794,7 @@ nfp_net_rss_hash_update(struct rte_eth_dev *dev,
 			PMD_DRV_LOG(ERR, "RSS unsupported");
 			return -EINVAL;
 		}
+
 		return 0; /* Nothing to do */
 	}
 
@@ -1888,6 +1892,7 @@ nfp_net_rss_config_default(struct rte_eth_dev *dev)
 			queue %= rx_queues;
 		}
 	}
+
 	ret = nfp_net_rss_reta_write(dev, nfp_reta_conf, 0x80);
 	if (ret != 0)
 		return ret;
@@ -1897,8 +1902,8 @@ nfp_net_rss_config_default(struct rte_eth_dev *dev)
 		PMD_DRV_LOG(ERR, "Wrong rss conf");
 		return -EINVAL;
 	}
-	rss_conf = dev_conf->rx_adv_conf.rss_conf;
 
+	rss_conf = dev_conf->rx_adv_conf.rss_conf;
 	ret = nfp_net_rss_hash_write(dev, &rss_conf);
 
 	return ret;
diff --git a/drivers/net/nfp/nfp_common.h b/drivers/net/nfp/nfp_common.h
index 6a36e2b04c..5439865c5e 100644
--- a/drivers/net/nfp/nfp_common.h
+++ b/drivers/net/nfp/nfp_common.h
@@ -32,7 +32,7 @@
 #define DEFAULT_RX_HTHRESH      8
 #define DEFAULT_RX_WTHRESH      0
 
-#define DEFAULT_TX_RS_THRESH	32
+#define DEFAULT_TX_RS_THRESH    32
 #define DEFAULT_TX_FREE_THRESH  32
 #define DEFAULT_TX_PTHRESH      32
 #define DEFAULT_TX_HTHRESH      0
@@ -40,12 +40,12 @@
 #define DEFAULT_TX_RSBIT_THRESH 32
 
 /* Alignment for dma zones */
-#define NFP_MEMZONE_ALIGN	128
+#define NFP_MEMZONE_ALIGN       128
 
 #define NFP_QCP_QUEUE_ADDR_SZ   (0x800)
 
 /* Number of supported physical ports */
-#define NFP_MAX_PHYPORTS	12
+#define NFP_MAX_PHYPORTS        12
 
 /* Firmware application ID's */
 enum nfp_app_fw_id {
diff --git a/drivers/net/nfp/nfp_cpp_bridge.c b/drivers/net/nfp/nfp_cpp_bridge.c
index 8f5271cde9..bb2a6fdcda 100644
--- a/drivers/net/nfp/nfp_cpp_bridge.c
+++ b/drivers/net/nfp/nfp_cpp_bridge.c
@@ -191,6 +191,7 @@ nfp_cpp_bridge_serve_write(int sockfd,
 				nfp_cpp_area_free(area);
 				return -EIO;
 			}
+
 			err = nfp_cpp_area_write(area, pos, tmpbuf, len);
 			if (err < 0) {
 				PMD_CPP_LOG(ERR, "nfp_cpp_area_write error");
@@ -312,6 +313,7 @@ nfp_cpp_bridge_serve_read(int sockfd,
 		curlen = (count > NFP_CPP_MEMIO_BOUNDARY) ?
 				NFP_CPP_MEMIO_BOUNDARY : count;
 	}
+
 	return 0;
 }
 
@@ -393,6 +395,7 @@ nfp_cpp_bridge_service_func(void *args)
 	struct timeval timeout = {1, 0};
 
 	unlink("/tmp/nfp_cpp");
+
 	sockfd = socket(AF_UNIX, SOCK_STREAM, 0);
 	if (sockfd < 0) {
 		PMD_CPP_LOG(ERR, "socket creation error. Service failed");
@@ -456,8 +459,10 @@ nfp_cpp_bridge_service_func(void *args)
 			if (op == 0)
 				break;
 		}
+
 		close(datafd);
 	}
+
 	close(sockfd);
 
 	return 0;
diff --git a/drivers/net/nfp/nfp_ctrl.h b/drivers/net/nfp/nfp_ctrl.h
index cd0a2f92a8..5cc83ff3e6 100644
--- a/drivers/net/nfp/nfp_ctrl.h
+++ b/drivers/net/nfp/nfp_ctrl.h
@@ -208,8 +208,8 @@ struct nfp_net_fw_ver {
 /*
  * NFP6000/NFP4000 - Prepend configuration
  */
-#define NFP_NET_CFG_RX_OFFSET		0x0050
-#define NFP_NET_CFG_RX_OFFSET_DYNAMIC		0	/* Prepend mode */
+#define NFP_NET_CFG_RX_OFFSET           0x0050
+#define NFP_NET_CFG_RX_OFFSET_DYNAMIC          0    /* Prepend mode */
 
 /* Start anchor of the TLV area */
 #define NFP_NET_CFG_TLV_BASE            0x0058
@@ -442,7 +442,7 @@ struct nfp_net_fw_ver {
 #define NFP_MAC_STATS_TX_PAUSE_FRAMES_CLASS6    (NFP_MAC_STATS_BASE + 0x1f0)
 #define NFP_MAC_STATS_TX_PAUSE_FRAMES_CLASS7    (NFP_MAC_STATS_BASE + 0x1f8)
 
-#define NFP_PF_CSR_SLICE_SIZE	(32 * 1024)
+#define NFP_PF_CSR_SLICE_SIZE    (32 * 1024)
 
 /*
  * General use mailbox area (0x1800 - 0x19ff)
diff --git a/drivers/net/nfp/nfp_ethdev.c b/drivers/net/nfp/nfp_ethdev.c
index 1651ac2455..b65c2c1fe0 100644
--- a/drivers/net/nfp/nfp_ethdev.c
+++ b/drivers/net/nfp/nfp_ethdev.c
@@ -36,6 +36,7 @@ nfp_net_pf_read_mac(struct nfp_app_fw_nic *app_fw_nic,
 	rte_ether_addr_copy(&nfp_eth_table->ports[port].mac_addr, &hw->mac_addr);
 
 	free(nfp_eth_table);
+
 	return 0;
 }
 
@@ -73,6 +74,7 @@ nfp_net_start(struct rte_eth_dev *dev)
 					"with NFP multiport PF");
 				return -EINVAL;
 		}
+
 		if (rte_intr_type_get(intr_handle) == RTE_INTR_HANDLE_UIO) {
 			/*
 			 * Better not to share LSC with RX interrupts.
@@ -87,6 +89,7 @@ nfp_net_start(struct rte_eth_dev *dev)
 				return -EIO;
 			}
 		}
+
 		intr_vector = dev->data->nb_rx_queues;
 		if (rte_intr_efd_enable(intr_handle, intr_vector) != 0)
 			return -1;
@@ -198,7 +201,6 @@ nfp_net_stop(struct rte_eth_dev *dev)
 
 	/* Clear queues */
 	nfp_net_stop_tx_queue(dev);
-
 	nfp_net_stop_rx_queue(dev);
 
 	if (rte_eal_process_type() == RTE_PROC_PRIMARY)
@@ -262,12 +264,10 @@ nfp_net_close(struct rte_eth_dev *dev)
 	 * We assume that the DPDK application is stopping all the
 	 * threads/queues before calling the device close function.
 	 */
-
 	nfp_net_disable_queues(dev);
 
 	/* Clear queues */
 	nfp_net_close_tx_queue(dev);
-
 	nfp_net_close_rx_queue(dev);
 
 	/* Clear ipsec */
@@ -413,35 +413,35 @@ nfp_udp_tunnel_port_del(struct rte_eth_dev *dev,
 
 /* Initialise and register driver with DPDK Application */
 static const struct eth_dev_ops nfp_net_eth_dev_ops = {
-	.dev_configure		= nfp_net_configure,
-	.dev_start		= nfp_net_start,
-	.dev_stop		= nfp_net_stop,
-	.dev_set_link_up	= nfp_net_set_link_up,
-	.dev_set_link_down	= nfp_net_set_link_down,
-	.dev_close		= nfp_net_close,
-	.promiscuous_enable	= nfp_net_promisc_enable,
-	.promiscuous_disable	= nfp_net_promisc_disable,
-	.link_update		= nfp_net_link_update,
-	.stats_get		= nfp_net_stats_get,
-	.stats_reset		= nfp_net_stats_reset,
+	.dev_configure          = nfp_net_configure,
+	.dev_start              = nfp_net_start,
+	.dev_stop               = nfp_net_stop,
+	.dev_set_link_up        = nfp_net_set_link_up,
+	.dev_set_link_down      = nfp_net_set_link_down,
+	.dev_close              = nfp_net_close,
+	.promiscuous_enable     = nfp_net_promisc_enable,
+	.promiscuous_disable    = nfp_net_promisc_disable,
+	.link_update            = nfp_net_link_update,
+	.stats_get              = nfp_net_stats_get,
+	.stats_reset            = nfp_net_stats_reset,
 	.xstats_get             = nfp_net_xstats_get,
 	.xstats_reset           = nfp_net_xstats_reset,
 	.xstats_get_names       = nfp_net_xstats_get_names,
 	.xstats_get_by_id       = nfp_net_xstats_get_by_id,
 	.xstats_get_names_by_id = nfp_net_xstats_get_names_by_id,
-	.dev_infos_get		= nfp_net_infos_get,
+	.dev_infos_get          = nfp_net_infos_get,
 	.dev_supported_ptypes_get = nfp_net_supported_ptypes_get,
-	.mtu_set		= nfp_net_dev_mtu_set,
-	.mac_addr_set		= nfp_net_set_mac_addr,
-	.vlan_offload_set	= nfp_net_vlan_offload_set,
-	.reta_update		= nfp_net_reta_update,
-	.reta_query		= nfp_net_reta_query,
-	.rss_hash_update	= nfp_net_rss_hash_update,
-	.rss_hash_conf_get	= nfp_net_rss_hash_conf_get,
-	.rx_queue_setup		= nfp_net_rx_queue_setup,
-	.rx_queue_release	= nfp_net_rx_queue_release,
-	.tx_queue_setup		= nfp_net_tx_queue_setup,
-	.tx_queue_release	= nfp_net_tx_queue_release,
+	.mtu_set                = nfp_net_dev_mtu_set,
+	.mac_addr_set           = nfp_net_set_mac_addr,
+	.vlan_offload_set       = nfp_net_vlan_offload_set,
+	.reta_update            = nfp_net_reta_update,
+	.reta_query             = nfp_net_reta_query,
+	.rss_hash_update        = nfp_net_rss_hash_update,
+	.rss_hash_conf_get      = nfp_net_rss_hash_conf_get,
+	.rx_queue_setup         = nfp_net_rx_queue_setup,
+	.rx_queue_release       = nfp_net_rx_queue_release,
+	.tx_queue_setup         = nfp_net_tx_queue_setup,
+	.tx_queue_release       = nfp_net_tx_queue_release,
 	.rx_queue_intr_enable   = nfp_rx_queue_intr_enable,
 	.rx_queue_intr_disable  = nfp_rx_queue_intr_disable,
 	.udp_tunnel_port_add    = nfp_udp_tunnel_port_add,
@@ -501,7 +501,6 @@ nfp_net_init(struct rte_eth_dev *eth_dev)
 
 	rte_eth_copy_pci_info(eth_dev, pci_dev);
 
-
 	hw->ctrl_bar = pci_dev->mem_resource[0].addr;
 	if (hw->ctrl_bar == NULL) {
 		PMD_DRV_LOG(ERR, "hw->ctrl_bar is NULL. BAR0 not configured");
@@ -519,10 +518,12 @@ nfp_net_init(struct rte_eth_dev *eth_dev)
 			PMD_INIT_LOG(ERR, "nfp_rtsym_map fails for _mac_stats_bar");
 			return -EIO;
 		}
+
 		hw->mac_stats = hw->mac_stats_bar;
 	} else {
 		if (pf_dev->ctrl_bar == NULL)
 			return -ENODEV;
+
 		/* Use port offset in pf ctrl_bar for this ports control bar */
 		hw->ctrl_bar = pf_dev->ctrl_bar + (port * NFP_PF_CSR_SLICE_SIZE);
 		hw->mac_stats = app_fw_nic->ports[0]->mac_stats_bar + (port * NFP_MAC_STATS_SIZE);
@@ -557,7 +558,6 @@ nfp_net_init(struct rte_eth_dev *eth_dev)
 		return -ENOMEM;
 	}
 
-
 	/* Work out where in the BAR the queues start. */
 	tx_base = nn_cfg_readl(hw, NFP_NET_CFG_START_TXQ);
 	rx_base = nn_cfg_readl(hw, NFP_NET_CFG_START_RXQ);
@@ -653,12 +653,12 @@ nfp_fw_upload(struct rte_pci_device *dev,
 			"serial-%02x-%02x-%02x-%02x-%02x-%02x-%02x-%02x",
 			cpp_serial[0], cpp_serial[1], cpp_serial[2], cpp_serial[3],
 			cpp_serial[4], cpp_serial[5], interface >> 8, interface & 0xff);
-
 	snprintf(fw_name, sizeof(fw_name), "%s/%s.nffw", DEFAULT_FW_PATH, serial);
 
 	PMD_DRV_LOG(DEBUG, "Trying with fw file: %s", fw_name);
 	if (rte_firmware_read(fw_name, &fw_buf, &fsize) == 0)
 		goto load_fw;
+
 	/* Then try the PCI name */
 	snprintf(fw_name, sizeof(fw_name), "%s/pci-%s.nffw", DEFAULT_FW_PATH,
 			dev->name);
diff --git a/drivers/net/nfp/nfp_ethdev_vf.c b/drivers/net/nfp/nfp_ethdev_vf.c
index c9e72dd953..7096695de6 100644
--- a/drivers/net/nfp/nfp_ethdev_vf.c
+++ b/drivers/net/nfp/nfp_ethdev_vf.c
@@ -63,6 +63,7 @@ nfp_netvf_start(struct rte_eth_dev *dev)
 				return -EIO;
 			}
 		}
+
 		intr_vector = dev->data->nb_rx_queues;
 		if (rte_intr_efd_enable(intr_handle, intr_vector) != 0)
 			return -1;
@@ -172,12 +173,10 @@ nfp_netvf_close(struct rte_eth_dev *dev)
 	 * We assume that the DPDK application is stopping all the
 	 * threads/queues before calling the device close function.
 	 */
-
 	nfp_net_disable_queues(dev);
 
 	/* Clear queues */
 	nfp_net_close_tx_queue(dev);
-
 	nfp_net_close_rx_queue(dev);
 
 	rte_intr_disable(pci_dev->intr_handle);
@@ -194,35 +193,35 @@ nfp_netvf_close(struct rte_eth_dev *dev)
 
 /* Initialise and register VF driver with DPDK Application */
 static const struct eth_dev_ops nfp_netvf_eth_dev_ops = {
-	.dev_configure		= nfp_net_configure,
-	.dev_start		= nfp_netvf_start,
-	.dev_stop		= nfp_netvf_stop,
-	.dev_set_link_up	= nfp_netvf_set_link_up,
-	.dev_set_link_down	= nfp_netvf_set_link_down,
-	.dev_close		= nfp_netvf_close,
-	.promiscuous_enable	= nfp_net_promisc_enable,
-	.promiscuous_disable	= nfp_net_promisc_disable,
-	.link_update		= nfp_net_link_update,
-	.stats_get		= nfp_net_stats_get,
-	.stats_reset		= nfp_net_stats_reset,
+	.dev_configure          = nfp_net_configure,
+	.dev_start              = nfp_netvf_start,
+	.dev_stop               = nfp_netvf_stop,
+	.dev_set_link_up        = nfp_netvf_set_link_up,
+	.dev_set_link_down      = nfp_netvf_set_link_down,
+	.dev_close              = nfp_netvf_close,
+	.promiscuous_enable     = nfp_net_promisc_enable,
+	.promiscuous_disable    = nfp_net_promisc_disable,
+	.link_update            = nfp_net_link_update,
+	.stats_get              = nfp_net_stats_get,
+	.stats_reset            = nfp_net_stats_reset,
 	.xstats_get             = nfp_net_xstats_get,
 	.xstats_reset           = nfp_net_xstats_reset,
 	.xstats_get_names       = nfp_net_xstats_get_names,
 	.xstats_get_by_id       = nfp_net_xstats_get_by_id,
 	.xstats_get_names_by_id = nfp_net_xstats_get_names_by_id,
-	.dev_infos_get		= nfp_net_infos_get,
+	.dev_infos_get          = nfp_net_infos_get,
 	.dev_supported_ptypes_get = nfp_net_supported_ptypes_get,
-	.mtu_set		= nfp_net_dev_mtu_set,
-	.mac_addr_set		= nfp_net_set_mac_addr,
-	.vlan_offload_set	= nfp_net_vlan_offload_set,
-	.reta_update		= nfp_net_reta_update,
-	.reta_query		= nfp_net_reta_query,
-	.rss_hash_update	= nfp_net_rss_hash_update,
-	.rss_hash_conf_get	= nfp_net_rss_hash_conf_get,
-	.rx_queue_setup		= nfp_net_rx_queue_setup,
-	.rx_queue_release	= nfp_net_rx_queue_release,
-	.tx_queue_setup		= nfp_net_tx_queue_setup,
-	.tx_queue_release	= nfp_net_tx_queue_release,
+	.mtu_set                = nfp_net_dev_mtu_set,
+	.mac_addr_set           = nfp_net_set_mac_addr,
+	.vlan_offload_set       = nfp_net_vlan_offload_set,
+	.reta_update            = nfp_net_reta_update,
+	.reta_query             = nfp_net_reta_query,
+	.rss_hash_update        = nfp_net_rss_hash_update,
+	.rss_hash_conf_get      = nfp_net_rss_hash_conf_get,
+	.rx_queue_setup         = nfp_net_rx_queue_setup,
+	.rx_queue_release       = nfp_net_rx_queue_release,
+	.tx_queue_setup         = nfp_net_tx_queue_setup,
+	.tx_queue_release       = nfp_net_tx_queue_release,
 	.rx_queue_intr_enable   = nfp_rx_queue_intr_enable,
 	.rx_queue_intr_disable  = nfp_rx_queue_intr_disable,
 };
diff --git a/drivers/net/nfp/nfp_flow.c b/drivers/net/nfp/nfp_flow.c
index fbcdb3d19e..1bf31146fc 100644
--- a/drivers/net/nfp/nfp_flow.c
+++ b/drivers/net/nfp/nfp_flow.c
@@ -496,6 +496,7 @@ nfp_stats_id_alloc(struct nfp_flow_priv *priv, uint32_t *ctx)
 			priv->stats_ids.init_unallocated--;
 			priv->active_mem_unit = 0;
 		}
+
 		return 0;
 	}
 
@@ -622,6 +623,7 @@ nfp_tun_add_ipv6_off(struct nfp_app_fw_flower *app_fw_flower,
 		PMD_DRV_LOG(ERR, "Mem error when offloading IP6 address.");
 		return -ENOMEM;
 	}
+
 	memcpy(tmp_entry->ipv6_addr, ipv6, sizeof(tmp_entry->ipv6_addr));
 	tmp_entry->ref_count = 1;
 
@@ -1796,7 +1798,7 @@ static const struct nfp_flow_item_proc nfp_flow_item_proc_list[] = {
 		.next_item = NEXT_ITEM(RTE_FLOW_ITEM_TYPE_VLAN,
 			RTE_FLOW_ITEM_TYPE_IPV4,
 			RTE_FLOW_ITEM_TYPE_IPV6),
-		.mask_support = &(const struct rte_flow_item_eth){
+		.mask_support = &(const struct rte_flow_item_eth) {
 			.hdr = {
 				.dst_addr.addr_bytes = "\xff\xff\xff\xff\xff\xff",
 				.src_addr.addr_bytes = "\xff\xff\xff\xff\xff\xff",
@@ -1811,7 +1813,7 @@ static const struct nfp_flow_item_proc nfp_flow_item_proc_list[] = {
 	[RTE_FLOW_ITEM_TYPE_VLAN] = {
 		.next_item = NEXT_ITEM(RTE_FLOW_ITEM_TYPE_IPV4,
 			RTE_FLOW_ITEM_TYPE_IPV6),
-		.mask_support = &(const struct rte_flow_item_vlan){
+		.mask_support = &(const struct rte_flow_item_vlan) {
 			.hdr = {
 				.vlan_tci  = RTE_BE16(0xefff),
 				.eth_proto = RTE_BE16(0xffff),
@@ -1827,7 +1829,7 @@ static const struct nfp_flow_item_proc nfp_flow_item_proc_list[] = {
 			RTE_FLOW_ITEM_TYPE_UDP,
 			RTE_FLOW_ITEM_TYPE_SCTP,
 			RTE_FLOW_ITEM_TYPE_GRE),
-		.mask_support = &(const struct rte_flow_item_ipv4){
+		.mask_support = &(const struct rte_flow_item_ipv4) {
 			.hdr = {
 				.type_of_service = 0xff,
 				.fragment_offset = RTE_BE16(0xffff),
@@ -1846,7 +1848,7 @@ static const struct nfp_flow_item_proc nfp_flow_item_proc_list[] = {
 			RTE_FLOW_ITEM_TYPE_UDP,
 			RTE_FLOW_ITEM_TYPE_SCTP,
 			RTE_FLOW_ITEM_TYPE_GRE),
-		.mask_support = &(const struct rte_flow_item_ipv6){
+		.mask_support = &(const struct rte_flow_item_ipv6) {
 			.hdr = {
 				.vtc_flow   = RTE_BE32(0x0ff00000),
 				.proto      = 0xff,
@@ -1863,7 +1865,7 @@ static const struct nfp_flow_item_proc nfp_flow_item_proc_list[] = {
 		.merge = nfp_flow_merge_ipv6,
 	},
 	[RTE_FLOW_ITEM_TYPE_TCP] = {
-		.mask_support = &(const struct rte_flow_item_tcp){
+		.mask_support = &(const struct rte_flow_item_tcp) {
 			.hdr = {
 				.tcp_flags = 0xff,
 				.src_port  = RTE_BE16(0xffff),
@@ -1877,7 +1879,7 @@ static const struct nfp_flow_item_proc nfp_flow_item_proc_list[] = {
 	[RTE_FLOW_ITEM_TYPE_UDP] = {
 		.next_item = NEXT_ITEM(RTE_FLOW_ITEM_TYPE_VXLAN,
 			RTE_FLOW_ITEM_TYPE_GENEVE),
-		.mask_support = &(const struct rte_flow_item_udp){
+		.mask_support = &(const struct rte_flow_item_udp) {
 			.hdr = {
 				.src_port = RTE_BE16(0xffff),
 				.dst_port = RTE_BE16(0xffff),
@@ -1888,7 +1890,7 @@ static const struct nfp_flow_item_proc nfp_flow_item_proc_list[] = {
 		.merge = nfp_flow_merge_udp,
 	},
 	[RTE_FLOW_ITEM_TYPE_SCTP] = {
-		.mask_support = &(const struct rte_flow_item_sctp){
+		.mask_support = &(const struct rte_flow_item_sctp) {
 			.hdr = {
 				.src_port  = RTE_BE16(0xffff),
 				.dst_port  = RTE_BE16(0xffff),
@@ -1900,7 +1902,7 @@ static const struct nfp_flow_item_proc nfp_flow_item_proc_list[] = {
 	},
 	[RTE_FLOW_ITEM_TYPE_VXLAN] = {
 		.next_item = NEXT_ITEM(RTE_FLOW_ITEM_TYPE_ETH),
-		.mask_support = &(const struct rte_flow_item_vxlan){
+		.mask_support = &(const struct rte_flow_item_vxlan) {
 			.hdr = {
 				.vx_vni = RTE_BE32(0xffffff00),
 			},
@@ -1911,7 +1913,7 @@ static const struct nfp_flow_item_proc nfp_flow_item_proc_list[] = {
 	},
 	[RTE_FLOW_ITEM_TYPE_GENEVE] = {
 		.next_item = NEXT_ITEM(RTE_FLOW_ITEM_TYPE_ETH),
-		.mask_support = &(const struct rte_flow_item_geneve){
+		.mask_support = &(const struct rte_flow_item_geneve) {
 			.vni = "\xff\xff\xff",
 		},
 		.mask_default = &rte_flow_item_geneve_mask,
@@ -1920,7 +1922,7 @@ static const struct nfp_flow_item_proc nfp_flow_item_proc_list[] = {
 	},
 	[RTE_FLOW_ITEM_TYPE_GRE] = {
 		.next_item = NEXT_ITEM(RTE_FLOW_ITEM_TYPE_GRE_KEY),
-		.mask_support = &(const struct rte_flow_item_gre){
+		.mask_support = &(const struct rte_flow_item_gre) {
 			.c_rsvd0_ver = RTE_BE16(0xa000),
 			.protocol = RTE_BE16(0xffff),
 		},
@@ -1952,6 +1954,7 @@ nfp_flow_item_check(const struct rte_flow_item *item,
 					" without a corresponding 'spec'.");
 			return -EINVAL;
 		}
+
 		/* No spec, no mask, no problem. */
 		return 0;
 	}
@@ -3031,6 +3034,7 @@ nfp_pre_tun_table_check_add(struct nfp_flower_representor *repr,
 	for (i = 1; i < NFP_TUN_PRE_TUN_RULE_LIMIT; i++) {
 		if (priv->pre_tun_bitmap[i] == 0)
 			continue;
+
 		entry->mac_index = i;
 		find_entry = nfp_pre_tun_table_search(priv, (char *)entry, entry_size);
 		if (find_entry != NULL) {
@@ -3057,6 +3061,7 @@ nfp_pre_tun_table_check_add(struct nfp_flower_representor *repr,
 
 	*index = entry->mac_index;
 	priv->pre_tun_cnt++;
+
 	return 0;
 }
 
@@ -3091,12 +3096,14 @@ nfp_pre_tun_table_check_del(struct nfp_flower_representor *repr,
 	for (i = 1; i < NFP_TUN_PRE_TUN_RULE_LIMIT; i++) {
 		if (priv->pre_tun_bitmap[i] == 0)
 			continue;
+
 		entry->mac_index = i;
 		find_entry = nfp_pre_tun_table_search(priv, (char *)entry, entry_size);
 		if (find_entry != NULL) {
 			find_entry->ref_cnt--;
 			if (find_entry->ref_cnt != 0)
 				goto free_entry;
+
 			priv->pre_tun_bitmap[i] = 0;
 			break;
 		}
diff --git a/drivers/net/nfp/nfp_flow.h b/drivers/net/nfp/nfp_flow.h
index ab38dbe1f4..991629e6ed 100644
--- a/drivers/net/nfp/nfp_flow.h
+++ b/drivers/net/nfp/nfp_flow.h
@@ -126,11 +126,14 @@ struct nfp_ipv6_addr_entry {
 struct nfp_flow_priv {
 	uint32_t hash_seed; /**< Hash seed for hash tables in this structure. */
 	uint64_t flower_version; /**< Flow version, always increase. */
+
 	/* Mask hash table */
 	struct nfp_fl_mask_id mask_ids; /**< Entry for mask hash table */
 	struct rte_hash *mask_table; /**< Hash table to store mask ids. */
+
 	/* Flow hash table */
 	struct rte_hash *flow_table; /**< Hash table to store flow rules. */
+
 	/* Flow stats */
 	uint32_t active_mem_unit; /**< The size of active mem units. */
 	uint32_t total_mem_units; /**< The size of total mem units. */
@@ -138,16 +141,20 @@ struct nfp_flow_priv {
 	struct nfp_fl_stats_id stats_ids; /**< The stats id ring. */
 	struct nfp_fl_stats *stats; /**< Store stats of flow. */
 	rte_spinlock_t stats_lock; /** < Lock the update of 'stats' field. */
+
 	/* Pre tunnel rule */
 	uint16_t pre_tun_cnt; /**< The size of pre tunnel rule */
 	uint8_t pre_tun_bitmap[NFP_TUN_PRE_TUN_RULE_LIMIT]; /**< Bitmap of pre tunnel rule */
 	struct rte_hash *pre_tun_table; /**< Hash table to store pre tunnel rule */
+
 	/* IPv4 off */
 	LIST_HEAD(, nfp_ipv4_addr_entry) ipv4_off_list; /**< Store ipv4 off */
 	rte_spinlock_t ipv4_off_lock; /**< Lock the ipv4 off list */
+
 	/* IPv6 off */
 	LIST_HEAD(, nfp_ipv6_addr_entry) ipv6_off_list; /**< Store ipv6 off */
 	rte_spinlock_t ipv6_off_lock; /**< Lock the ipv6 off list */
+
 	/* Neighbor next */
 	LIST_HEAD(, nfp_fl_tun)nn_list; /**< Store nn entry */
 	/* Conntrack */
diff --git a/drivers/net/nfp/nfp_rxtx.c b/drivers/net/nfp/nfp_rxtx.c
index 9e08e38955..74599747e8 100644
--- a/drivers/net/nfp/nfp_rxtx.c
+++ b/drivers/net/nfp/nfp_rxtx.c
@@ -190,6 +190,7 @@ nfp_net_rx_fill_freelist(struct nfp_net_rxq *rxq)
 		rxd->fld.dd = 0;
 		rxd->fld.dma_addr_hi = (dma_addr >> 32) & 0xffff;
 		rxd->fld.dma_addr_lo = dma_addr & 0xffffffff;
+
 		rxe[i].mbuf = mbuf;
 	}
 
@@ -213,6 +214,7 @@ nfp_net_rx_freelist_setup(struct rte_eth_dev *dev)
 		if (nfp_net_rx_fill_freelist(dev->data->rx_queues[i]) != 0)
 			return -1;
 	}
+
 	return 0;
 }
 
@@ -225,7 +227,6 @@ nfp_net_rx_queue_count(void *rx_queue)
 	struct nfp_net_rx_desc *rxds;
 
 	rxq = rx_queue;
-
 	idx = rxq->rd_p;
 
 	/*
@@ -235,7 +236,6 @@ nfp_net_rx_queue_count(void *rx_queue)
 	 * performance. But ideally that should be done in descriptors
 	 * chunks belonging to the same cache line.
 	 */
-
 	while (count < rxq->rx_count) {
 		rxds = &rxq->rxds[idx];
 		if ((rxds->rxd.meta_len_dd & PCIE_DESC_RX_DD) == 0)
@@ -394,6 +394,7 @@ nfp_net_parse_meta_qinq(const struct nfp_meta_parsed *meta,
 
 	if (meta->vlan[0].offload == 0)
 		mb->vlan_tci = rte_cpu_to_le_16(meta->vlan[0].tci);
+
 	mb->vlan_tci_outer = rte_cpu_to_le_16(meta->vlan[1].tci);
 	PMD_RX_LOG(DEBUG, "Received outer vlan TCI is %u inner vlan TCI is %u",
 			mb->vlan_tci_outer, mb->vlan_tci);
@@ -638,7 +639,6 @@ nfp_net_parse_ptype(struct nfp_net_rx_desc *rxds,
  * so looking at the implications of this type of allocation should be studied
  * deeply.
  */
-
 uint16_t
 nfp_net_recv_pkts(void *rx_queue,
 		struct rte_mbuf **rx_pkts,
@@ -896,7 +896,6 @@ nfp_net_rx_queue_setup(struct rte_eth_dev *dev,
 	tz = rte_eth_dma_zone_reserve(dev, "rx_ring", queue_idx,
 			sizeof(struct nfp_net_rx_desc) * max_rx_desc,
 			NFP_MEMZONE_ALIGN, socket_id);
-
 	if (tz == NULL) {
 		PMD_DRV_LOG(ERR, "Error allocating rx dma");
 		nfp_net_rx_queue_release(dev, queue_idx);
diff --git a/drivers/net/nfp/nfpcore/nfp_resource.h b/drivers/net/nfp/nfpcore/nfp_resource.h
index 18196d273c..f49c99e462 100644
--- a/drivers/net/nfp/nfpcore/nfp_resource.h
+++ b/drivers/net/nfp/nfpcore/nfp_resource.h
@@ -15,7 +15,7 @@
 #define NFP_RESOURCE_NFP_HWINFO         "nfp.info"
 
 /* Service Processor */
-#define NFP_RESOURCE_NSP		"nfp.sp"
+#define NFP_RESOURCE_NSP                "nfp.sp"
 
 /* Opaque handle to a NFP Resource */
 struct nfp_resource;
-- 
2.39.1


^ permalink raw reply	[flat|nested] 40+ messages in thread

* [PATCH v3 08/11] net/nfp: unify the guide line of header file
  2023-10-13  6:06   ` [PATCH v3 00/11] Unify the PMD coding style Chaoyong He
                       ` (6 preceding siblings ...)
  2023-10-13  6:06     ` [PATCH v3 07/11] net/nfp: standard the blank character Chaoyong He
@ 2023-10-13  6:06     ` Chaoyong He
  2023-10-13  6:06     ` [PATCH v3 09/11] net/nfp: rename some parameter and variable Chaoyong He
                       ` (3 subsequent siblings)
  11 siblings, 0 replies; 40+ messages in thread
From: Chaoyong He @ 2023-10-13  6:06 UTC (permalink / raw)
  To: dev; +Cc: oss-drivers, Chaoyong He, Long Wu, Peng Zhang

Unify the include guard of the header files; we choose the
'__FOO_BAR_H__' style.
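
A minimal sketch of the chosen guard style, using a hypothetical
header 'nfp_foo.h' (not an actual file in the driver):

    #ifndef __NFP_FOO_H__
    #define __NFP_FOO_H__

    int nfp_foo_init(void);

    #endif /* __NFP_FOO_H__ */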

Signed-off-by: Chaoyong He <chaoyong.he@corigine.com>
Reviewed-by: Long Wu <long.wu@corigine.com>
Reviewed-by: Peng Zhang <peng.zhang@corigine.com>
---
 drivers/net/nfp/flower/nfp_flower.h             | 6 +++---
 drivers/net/nfp/flower/nfp_flower_cmsg.h        | 6 +++---
 drivers/net/nfp/flower/nfp_flower_ctrl.h        | 6 +++---
 drivers/net/nfp/flower/nfp_flower_representor.h | 6 +++---
 drivers/net/nfp/nfd3/nfp_nfd3.h                 | 6 +++---
 drivers/net/nfp/nfdk/nfp_nfdk.h                 | 6 +++---
 drivers/net/nfp/nfp_common.h                    | 6 +++---
 drivers/net/nfp/nfp_cpp_bridge.h                | 8 +++-----
 drivers/net/nfp/nfp_ctrl.h                      | 6 +++---
 drivers/net/nfp/nfp_flow.h                      | 6 +++---
 drivers/net/nfp/nfp_logs.h                      | 6 +++---
 drivers/net/nfp/nfp_rxtx.h                      | 6 +++---
 12 files changed, 36 insertions(+), 38 deletions(-)

diff --git a/drivers/net/nfp/flower/nfp_flower.h b/drivers/net/nfp/flower/nfp_flower.h
index 0b4e38cedd..b7ea830209 100644
--- a/drivers/net/nfp/flower/nfp_flower.h
+++ b/drivers/net/nfp/flower/nfp_flower.h
@@ -3,8 +3,8 @@
  * All rights reserved.
  */
 
-#ifndef _NFP_FLOWER_H_
-#define _NFP_FLOWER_H_
+#ifndef __NFP_FLOWER_H__
+#define __NFP_FLOWER_H__
 
 #include "../nfp_common.h"
 
@@ -118,4 +118,4 @@ int nfp_flower_pf_stop(struct rte_eth_dev *dev);
 uint32_t nfp_flower_pkt_add_metadata(struct nfp_app_fw_flower *app_fw_flower,
 		struct rte_mbuf *mbuf, uint32_t port_id);
 
-#endif /* _NFP_FLOWER_H_ */
+#endif /* __NFP_FLOWER_H__ */
diff --git a/drivers/net/nfp/flower/nfp_flower_cmsg.h b/drivers/net/nfp/flower/nfp_flower_cmsg.h
index cb019171b6..c2938fb6f6 100644
--- a/drivers/net/nfp/flower/nfp_flower_cmsg.h
+++ b/drivers/net/nfp/flower/nfp_flower_cmsg.h
@@ -3,8 +3,8 @@
  * All rights reserved.
  */
 
-#ifndef _NFP_CMSG_H_
-#define _NFP_CMSG_H_
+#ifndef __NFP_CMSG_H__
+#define __NFP_CMSG_H__
 
 #include "../nfp_flow.h"
 #include "nfp_flower.h"
@@ -989,4 +989,4 @@ int nfp_flower_cmsg_qos_delete(struct nfp_app_fw_flower *app_fw_flower,
 int nfp_flower_cmsg_qos_stats(struct nfp_app_fw_flower *app_fw_flower,
 		struct nfp_cfg_head *head);
 
-#endif /* _NFP_CMSG_H_ */
+#endif /* __NFP_CMSG_H__ */
diff --git a/drivers/net/nfp/flower/nfp_flower_ctrl.h b/drivers/net/nfp/flower/nfp_flower_ctrl.h
index f73a024266..4c94d36847 100644
--- a/drivers/net/nfp/flower/nfp_flower_ctrl.h
+++ b/drivers/net/nfp/flower/nfp_flower_ctrl.h
@@ -3,8 +3,8 @@
  * All rights reserved.
  */
 
-#ifndef _NFP_FLOWER_CTRL_H_
-#define _NFP_FLOWER_CTRL_H_
+#ifndef __NFP_FLOWER_CTRL_H__
+#define __NFP_FLOWER_CTRL_H__
 
 #include "nfp_flower.h"
 
@@ -13,4 +13,4 @@ uint16_t nfp_flower_ctrl_vnic_xmit(struct nfp_app_fw_flower *app_fw_flower,
 		struct rte_mbuf *mbuf);
 void nfp_flower_ctrl_vnic_xmit_register(struct nfp_app_fw_flower *app_fw_flower);
 
-#endif /* _NFP_FLOWER_CTRL_H_ */
+#endif /* __NFP_FLOWER_CTRL_H__ */
diff --git a/drivers/net/nfp/flower/nfp_flower_representor.h b/drivers/net/nfp/flower/nfp_flower_representor.h
index eda19cbb16..bcb4c3cdb5 100644
--- a/drivers/net/nfp/flower/nfp_flower_representor.h
+++ b/drivers/net/nfp/flower/nfp_flower_representor.h
@@ -3,8 +3,8 @@
  * All rights reserved.
  */
 
-#ifndef _NFP_FLOWER_REPRESENTOR_H_
-#define _NFP_FLOWER_REPRESENTOR_H_
+#ifndef __NFP_FLOWER_REPRESENTOR_H__
+#define __NFP_FLOWER_REPRESENTOR_H__
 
 #include "nfp_flower.h"
 
@@ -24,4 +24,4 @@ struct nfp_flower_representor {
 
 int nfp_flower_repr_create(struct nfp_app_fw_flower *app_fw_flower);
 
-#endif /* _NFP_FLOWER_REPRESENTOR_H_ */
+#endif /* __NFP_FLOWER_REPRESENTOR_H__ */
diff --git a/drivers/net/nfp/nfd3/nfp_nfd3.h b/drivers/net/nfp/nfd3/nfp_nfd3.h
index 0b0ca361f4..3ba562cc3f 100644
--- a/drivers/net/nfp/nfd3/nfp_nfd3.h
+++ b/drivers/net/nfp/nfd3/nfp_nfd3.h
@@ -3,8 +3,8 @@
  * All rights reserved.
  */
 
-#ifndef _NFP_NFD3_H_
-#define _NFP_NFD3_H_
+#ifndef __NFP_NFD3_H__
+#define __NFP_NFD3_H__
 
 #include "../nfp_rxtx.h"
 
@@ -84,4 +84,4 @@ int nfp_net_nfd3_tx_queue_setup(struct rte_eth_dev *dev,
 		unsigned int socket_id,
 		const struct rte_eth_txconf *tx_conf);
 
-#endif /* _NFP_NFD3_H_ */
+#endif /* __NFP_NFD3_H__ */
diff --git a/drivers/net/nfp/nfdk/nfp_nfdk.h b/drivers/net/nfp/nfdk/nfp_nfdk.h
index 04bd3c7600..2767fd51cd 100644
--- a/drivers/net/nfp/nfdk/nfp_nfdk.h
+++ b/drivers/net/nfp/nfdk/nfp_nfdk.h
@@ -3,8 +3,8 @@
  * All rights reserved.
  */
 
-#ifndef _NFP_NFDK_H_
-#define _NFP_NFDK_H_
+#ifndef __NFP_NFDK_H__
+#define __NFP_NFDK_H__
 
 #include "../nfp_rxtx.h"
 
@@ -178,4 +178,4 @@ int nfp_net_nfdk_tx_queue_setup(struct rte_eth_dev *dev,
 int nfp_net_nfdk_tx_maybe_close_block(struct nfp_net_txq *txq,
 		struct rte_mbuf *pkt);
 
-#endif /* _NFP_NFDK_H_ */
+#endif /* __NFP_NFDK_H__ */
diff --git a/drivers/net/nfp/nfp_common.h b/drivers/net/nfp/nfp_common.h
index 5439865c5e..cd0ca50c6b 100644
--- a/drivers/net/nfp/nfp_common.h
+++ b/drivers/net/nfp/nfp_common.h
@@ -3,8 +3,8 @@
  * All rights reserved.
  */
 
-#ifndef _NFP_COMMON_H_
-#define _NFP_COMMON_H_
+#ifndef __NFP_COMMON_H__
+#define __NFP_COMMON_H__
 
 #include <bus_pci_driver.h>
 #include <ethdev_driver.h>
@@ -450,4 +450,4 @@ bool nfp_net_is_valid_nfd_version(struct nfp_net_fw_ver version);
 #define NFP_PRIV_TO_APP_FW_FLOWER(app_fw_priv)\
 	((struct nfp_app_fw_flower *)app_fw_priv)
 
-#endif /* _NFP_COMMON_H_ */
+#endif /* __NFP_COMMON_H__ */
diff --git a/drivers/net/nfp/nfp_cpp_bridge.h b/drivers/net/nfp/nfp_cpp_bridge.h
index e6a957a090..a1103e85e4 100644
--- a/drivers/net/nfp/nfp_cpp_bridge.h
+++ b/drivers/net/nfp/nfp_cpp_bridge.h
@@ -1,16 +1,14 @@
 /* SPDX-License-Identifier: BSD-3-Clause
  * Copyright (c) 2014-2021 Netronome Systems, Inc.
  * All rights reserved.
- *
- * Small portions derived from code Copyright(c) 2010-2015 Intel Corporation.
  */
 
-#ifndef _NFP_CPP_BRIDGE_H_
-#define _NFP_CPP_BRIDGE_H_
+#ifndef __NFP_CPP_BRIDGE_H__
+#define __NFP_CPP_BRIDGE_H__
 
 #include "nfp_common.h"
 
 int nfp_enable_cpp_service(struct nfp_pf_dev *pf_dev);
 int nfp_map_service(uint32_t service_id);
 
-#endif /* _NFP_CPP_BRIDGE_H_ */
+#endif /* __NFP_CPP_BRIDGE_H__ */
diff --git a/drivers/net/nfp/nfp_ctrl.h b/drivers/net/nfp/nfp_ctrl.h
index 5cc83ff3e6..5c2065a537 100644
--- a/drivers/net/nfp/nfp_ctrl.h
+++ b/drivers/net/nfp/nfp_ctrl.h
@@ -3,8 +3,8 @@
  * All rights reserved.
  */
 
-#ifndef _NFP_CTRL_H_
-#define _NFP_CTRL_H_
+#ifndef __NFP_CTRL_H__
+#define __NFP_CTRL_H__
 
 #include <stdint.h>
 
@@ -573,4 +573,4 @@ nfp_net_cfg_ctrl_rss(uint32_t hw_cap)
 	return NFP_NET_CFG_CTRL_RSS;
 }
 
-#endif /* _NFP_CTRL_H_ */
+#endif /* __NFP_CTRL_H__ */
diff --git a/drivers/net/nfp/nfp_flow.h b/drivers/net/nfp/nfp_flow.h
index 991629e6ed..aeb24458f3 100644
--- a/drivers/net/nfp/nfp_flow.h
+++ b/drivers/net/nfp/nfp_flow.h
@@ -3,8 +3,8 @@
  * All rights reserved.
  */
 
-#ifndef _NFP_FLOW_H_
-#define _NFP_FLOW_H_
+#ifndef __NFP_FLOW_H__
+#define __NFP_FLOW_H__
 
 #include "nfp_common.h"
 
@@ -202,4 +202,4 @@ int nfp_flow_destroy(struct rte_eth_dev *dev,
 		struct rte_flow *nfp_flow,
 		struct rte_flow_error *error);
 
-#endif /* _NFP_FLOW_H_ */
+#endif /* __NFP_FLOW_H__ */
diff --git a/drivers/net/nfp/nfp_logs.h b/drivers/net/nfp/nfp_logs.h
index 16ff61700b..690adabffd 100644
--- a/drivers/net/nfp/nfp_logs.h
+++ b/drivers/net/nfp/nfp_logs.h
@@ -3,8 +3,8 @@
  * All rights reserved.
  */
 
-#ifndef _NFP_LOGS_H_
-#define _NFP_LOGS_H_
+#ifndef __NFP_LOGS_H__
+#define __NFP_LOGS_H__
 
 #include <rte_log.h>
 
@@ -41,4 +41,4 @@ extern int nfp_logtype_driver;
 	rte_log(RTE_LOG_ ## level, nfp_logtype_driver, \
 		"%s(): " fmt "\n", __func__, ## args)
 
-#endif /* _NFP_LOGS_H_ */
+#endif /* __NFP_LOGS_H__ */
diff --git a/drivers/net/nfp/nfp_rxtx.h b/drivers/net/nfp/nfp_rxtx.h
index 899cc42c97..956cc7a0d2 100644
--- a/drivers/net/nfp/nfp_rxtx.h
+++ b/drivers/net/nfp/nfp_rxtx.h
@@ -3,8 +3,8 @@
  * All rights reserved.
  */
 
-#ifndef _NFP_RXTX_H_
-#define _NFP_RXTX_H_
+#ifndef __NFP_RXTX_H__
+#define __NFP_RXTX_H__
 
 #include <ethdev_driver.h>
 
@@ -253,4 +253,4 @@ void nfp_net_set_meta_ipsec(struct nfp_net_meta_raw *meta_data,
 		uint8_t layer,
 		uint8_t ipsec_layer);
 
-#endif /* _NFP_RXTX_H_ */
+#endif /* __NFP_RXTX_H__ */
-- 
2.39.1


^ permalink raw reply	[flat|nested] 40+ messages in thread

* [PATCH v3 09/11] net/nfp: rename some parameter and variable
  2023-10-13  6:06   ` [PATCH v3 00/11] Unify the PMD coding style Chaoyong He
                       ` (7 preceding siblings ...)
  2023-10-13  6:06     ` [PATCH v3 08/11] net/nfp: unify the guide line of header file Chaoyong He
@ 2023-10-13  6:06     ` Chaoyong He
  2023-10-13  6:06     ` [PATCH v3 10/11] net/nfp: adjust logic to make it more readable Chaoyong He
                       ` (2 subsequent siblings)
  11 siblings, 0 replies; 40+ messages in thread
From: Chaoyong He @ 2023-10-13  6:06 UTC (permalink / raw)
  To: dev; +Cc: oss-drivers, Chaoyong He, Long Wu, Peng Zhang

Rename some parameters and variables to make the logic easier to
understand.
Also avoid mixing lowercase and uppercase in macro names.
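
A short sketch of both rules, with hypothetical names (the real
changes below touch NFP_QCP_QUEUE_STS_* and nfp_qcp_*()):

    /* Macro names stay uppercase throughout: ..._MASK, not ..._mask. */
    #define EXAMPLE_STS_LO_READPTR_MASK    (0x3ffff)

    /* The terse parameter 'q' becomes the descriptive 'queue'. */
    static inline uint32_t
    example_read_ptr(uint8_t *queue)
    {
            return nn_readl(queue) & EXAMPLE_STS_LO_READPTR_MASK;
    }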

Signed-off-by: Chaoyong He <chaoyong.he@corigine.com>
Reviewed-by: Long Wu <long.wu@corigine.com>
Reviewed-by: Peng Zhang <peng.zhang@corigine.com>
---
 drivers/net/nfp/nfp_common.h    | 20 ++++++++++----------
 drivers/net/nfp/nfp_ethdev_vf.c |  8 ++++----
 2 files changed, 14 insertions(+), 14 deletions(-)

diff --git a/drivers/net/nfp/nfp_common.h b/drivers/net/nfp/nfp_common.h
index cd0ca50c6b..aad3c29ba8 100644
--- a/drivers/net/nfp/nfp_common.h
+++ b/drivers/net/nfp/nfp_common.h
@@ -19,9 +19,9 @@
 #define NFP_QCP_QUEUE_ADD_RPTR                  0x0000
 #define NFP_QCP_QUEUE_ADD_WPTR                  0x0004
 #define NFP_QCP_QUEUE_STS_LO                    0x0008
-#define NFP_QCP_QUEUE_STS_LO_READPTR_mask     (0x3ffff)
+#define NFP_QCP_QUEUE_STS_LO_READPTR_MASK     (0x3ffff)
 #define NFP_QCP_QUEUE_STS_HI                    0x000c
-#define NFP_QCP_QUEUE_STS_HI_WRITEPTR_mask    (0x3ffff)
+#define NFP_QCP_QUEUE_STS_HI_WRITEPTR_MASK    (0x3ffff)
 
 /* Interrupt definitions */
 #define NFP_NET_IRQ_LSC_IDX             0
@@ -303,7 +303,7 @@ nn_cfg_writeq(struct nfp_net_hw *hw,
 /**
  * Add the value to the selected pointer of a queue.
  *
- * @param q
+ * @param queue
  *   Base address for queue structure
  * @param ptr
  *   Add to the read or write pointer
@@ -311,7 +311,7 @@ nn_cfg_writeq(struct nfp_net_hw *hw,
  *   Value to add to the queue pointer
  */
 static inline void
-nfp_qcp_ptr_add(uint8_t *q,
+nfp_qcp_ptr_add(uint8_t *queue,
 		enum nfp_qcp_ptr ptr,
 		uint32_t val)
 {
@@ -322,19 +322,19 @@ nfp_qcp_ptr_add(uint8_t *q,
 	else
 		off = NFP_QCP_QUEUE_ADD_WPTR;
 
-	nn_writel(rte_cpu_to_le_32(val), q + off);
+	nn_writel(rte_cpu_to_le_32(val), queue + off);
 }
 
 /**
  * Read the current read/write pointer value for a queue.
  *
- * @param q
+ * @param queue
  *   Base address for queue structure
  * @param ptr
  *   Read or Write pointer
  */
 static inline uint32_t
-nfp_qcp_read(uint8_t *q,
+nfp_qcp_read(uint8_t *queue,
 		enum nfp_qcp_ptr ptr)
 {
 	uint32_t off;
@@ -345,12 +345,12 @@ nfp_qcp_read(uint8_t *q,
 	else
 		off = NFP_QCP_QUEUE_STS_HI;
 
-	val = rte_cpu_to_le_32(nn_readl(q + off));
+	val = rte_cpu_to_le_32(nn_readl(queue + off));
 
 	if (ptr == NFP_QCP_READ_PTR)
-		return val & NFP_QCP_QUEUE_STS_LO_READPTR_mask;
+		return val & NFP_QCP_QUEUE_STS_LO_READPTR_MASK;
 	else
-		return val & NFP_QCP_QUEUE_STS_HI_WRITEPTR_mask;
+		return val & NFP_QCP_QUEUE_STS_HI_WRITEPTR_MASK;
 }
 
 static inline uint32_t
diff --git a/drivers/net/nfp/nfp_ethdev_vf.c b/drivers/net/nfp/nfp_ethdev_vf.c
index 7096695de6..7fb7b3efc5 100644
--- a/drivers/net/nfp/nfp_ethdev_vf.c
+++ b/drivers/net/nfp/nfp_ethdev_vf.c
@@ -396,7 +396,7 @@ nfp_vf_pci_uninit(struct rte_eth_dev *eth_dev)
 }
 
 static int
-eth_nfp_vf_pci_probe(struct rte_pci_driver *pci_drv __rte_unused,
+nfp_vf_pci_probe(struct rte_pci_driver *pci_drv __rte_unused,
 		struct rte_pci_device *pci_dev)
 {
 	return rte_eth_dev_pci_generic_probe(pci_dev,
@@ -404,7 +404,7 @@ eth_nfp_vf_pci_probe(struct rte_pci_driver *pci_drv __rte_unused,
 }
 
 static int
-eth_nfp_vf_pci_remove(struct rte_pci_device *pci_dev)
+nfp_vf_pci_remove(struct rte_pci_device *pci_dev)
 {
 	return rte_eth_dev_pci_generic_remove(pci_dev, nfp_vf_pci_uninit);
 }
@@ -412,8 +412,8 @@ eth_nfp_vf_pci_remove(struct rte_pci_device *pci_dev)
 static struct rte_pci_driver rte_nfp_net_vf_pmd = {
 	.id_table = pci_id_nfp_vf_net_map,
 	.drv_flags = RTE_PCI_DRV_NEED_MAPPING | RTE_PCI_DRV_INTR_LSC,
-	.probe = eth_nfp_vf_pci_probe,
-	.remove = eth_nfp_vf_pci_remove,
+	.probe = nfp_vf_pci_probe,
+	.remove = nfp_vf_pci_remove,
 };
 
 RTE_PMD_REGISTER_PCI(net_nfp_vf, rte_nfp_net_vf_pmd);
-- 
2.39.1


^ permalink raw reply	[flat|nested] 40+ messages in thread

* [PATCH v3 10/11] net/nfp: adjust logic to make it more readable
  2023-10-13  6:06   ` [PATCH v3 00/11] Unify the PMD coding style Chaoyong He
                       ` (8 preceding siblings ...)
  2023-10-13  6:06     ` [PATCH v3 09/11] net/nfp: rename some parameter and variable Chaoyong He
@ 2023-10-13  6:06     ` Chaoyong He
  2023-10-13  6:06     ` [PATCH v3 11/11] net/nfp: refact the meson build file Chaoyong He
  2023-10-16 16:50     ` [PATCH v3 00/11] Unify the PMD coding style Ferruh Yigit
  11 siblings, 0 replies; 40+ messages in thread
From: Chaoyong He @ 2023-10-13  6:06 UTC (permalink / raw)
  To: dev; +Cc: oss-drivers, Chaoyong He, Long Wu, Peng Zhang

Adjust some logic to make it easier to understand: validate parameters
up front, cache deeply nested fields in well-named local variables,
and initialize variables close to where they are used.
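
One pattern applied below, sketched with hypothetical names: check the
parameters first, then cache a nested field in a local instead of
repeating the long access chain:

    static int
    example_configure(struct rte_eth_dev *dev,
                    struct rte_eth_stats *stats)
    {
            uint64_t rx_offload;

            if (stats == NULL)          /* Validate parameters first */
                    return -EINVAL;

            /* Cache the nested field in a well-named local. */
            rx_offload = dev->data->dev_conf.rxmode.offloads;
            if ((rx_offload & RTE_ETH_RX_OFFLOAD_IPV4_CKSUM) != 0)
                    return 1;

            return 0;
    }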

Signed-off-by: Chaoyong He <chaoyong.he@corigine.com>
Reviewed-by: Long Wu <long.wu@corigine.com>
Reviewed-by: Peng Zhang <peng.zhang@corigine.com>
---
 drivers/net/nfp/nfp_common.c     | 87 +++++++++++++++++---------------
 drivers/net/nfp/nfp_cpp_bridge.c |  5 +-
 drivers/net/nfp/nfp_ctrl.h       |  2 -
 drivers/net/nfp/nfp_ethdev.c     | 23 ++++-----
 drivers/net/nfp/nfp_ethdev_vf.c  | 15 +++---
 drivers/net/nfp/nfp_rxtx.c       |  2 +-
 6 files changed, 63 insertions(+), 71 deletions(-)

diff --git a/drivers/net/nfp/nfp_common.c b/drivers/net/nfp/nfp_common.c
index a102c6f272..2d834b29d9 100644
--- a/drivers/net/nfp/nfp_common.c
+++ b/drivers/net/nfp/nfp_common.c
@@ -453,7 +453,7 @@ nfp_net_log_device_information(const struct nfp_net_hw *hw)
 }
 
 static inline void
-nfp_net_enbable_rxvlan_cap(struct nfp_net_hw *hw,
+nfp_net_enable_rxvlan_cap(struct nfp_net_hw *hw,
 		uint32_t *ctrl)
 {
 	if ((hw->cap & NFP_NET_CFG_CTRL_RXVLAN_V2) != 0)
@@ -467,19 +467,19 @@ nfp_net_enable_queues(struct rte_eth_dev *dev)
 {
 	uint16_t i;
 	struct nfp_net_hw *hw;
-	uint64_t enabled_queues = 0;
+	uint64_t enabled_queues;
 
 	hw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private);
 
 	/* Enabling the required TX queues in the device */
+	enabled_queues = 0;
 	for (i = 0; i < dev->data->nb_tx_queues; i++)
 		enabled_queues |= (1 << i);
 
 	nn_cfg_writeq(hw, NFP_NET_CFG_TXRS_ENABLE, enabled_queues);
 
-	enabled_queues = 0;
-
 	/* Enabling the required RX queues in the device */
+	enabled_queues = 0;
 	for (i = 0; i < dev->data->nb_rx_queues; i++)
 		enabled_queues |= (1 << i);
 
@@ -619,33 +619,33 @@ uint32_t
 nfp_check_offloads(struct rte_eth_dev *dev)
 {
 	uint32_t ctrl = 0;
+	uint64_t rx_offload;
+	uint64_t tx_offload;
 	struct nfp_net_hw *hw;
 	struct rte_eth_conf *dev_conf;
-	struct rte_eth_rxmode *rxmode;
-	struct rte_eth_txmode *txmode;
 
 	hw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private);
 
 	dev_conf = &dev->data->dev_conf;
-	rxmode = &dev_conf->rxmode;
-	txmode = &dev_conf->txmode;
+	rx_offload = dev_conf->rxmode.offloads;
+	tx_offload = dev_conf->txmode.offloads;
 
-	if ((rxmode->offloads & RTE_ETH_RX_OFFLOAD_IPV4_CKSUM) != 0) {
+	if ((rx_offload & RTE_ETH_RX_OFFLOAD_IPV4_CKSUM) != 0) {
 		if ((hw->cap & NFP_NET_CFG_CTRL_RXCSUM) != 0)
 			ctrl |= NFP_NET_CFG_CTRL_RXCSUM;
 	}
 
-	if ((rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP) != 0)
-		nfp_net_enbable_rxvlan_cap(hw, &ctrl);
+	if ((rx_offload & RTE_ETH_RX_OFFLOAD_VLAN_STRIP) != 0)
+		nfp_net_enable_rxvlan_cap(hw, &ctrl);
 
-	if ((rxmode->offloads & RTE_ETH_RX_OFFLOAD_QINQ_STRIP) != 0) {
+	if ((rx_offload & RTE_ETH_RX_OFFLOAD_QINQ_STRIP) != 0) {
 		if ((hw->cap & NFP_NET_CFG_CTRL_RXQINQ) != 0)
 			ctrl |= NFP_NET_CFG_CTRL_RXQINQ;
 	}
 
 	hw->mtu = dev->data->mtu;
 
-	if ((txmode->offloads & RTE_ETH_TX_OFFLOAD_VLAN_INSERT) != 0) {
+	if ((tx_offload & RTE_ETH_TX_OFFLOAD_VLAN_INSERT) != 0) {
 		if ((hw->cap & NFP_NET_CFG_CTRL_TXVLAN_V2) != 0)
 			ctrl |= NFP_NET_CFG_CTRL_TXVLAN_V2;
 		else if ((hw->cap & NFP_NET_CFG_CTRL_TXVLAN) != 0)
@@ -661,14 +661,14 @@ nfp_check_offloads(struct rte_eth_dev *dev)
 		ctrl |= NFP_NET_CFG_CTRL_L2MC;
 
 	/* TX checksum offload */
-	if ((txmode->offloads & RTE_ETH_TX_OFFLOAD_IPV4_CKSUM) != 0 ||
-			(txmode->offloads & RTE_ETH_TX_OFFLOAD_UDP_CKSUM) != 0 ||
-			(txmode->offloads & RTE_ETH_TX_OFFLOAD_TCP_CKSUM) != 0)
+	if ((tx_offload & RTE_ETH_TX_OFFLOAD_IPV4_CKSUM) != 0 ||
+			(tx_offload & RTE_ETH_TX_OFFLOAD_UDP_CKSUM) != 0 ||
+			(tx_offload & RTE_ETH_TX_OFFLOAD_TCP_CKSUM) != 0)
 		ctrl |= NFP_NET_CFG_CTRL_TXCSUM;
 
 	/* LSO offload */
-	if ((txmode->offloads & RTE_ETH_TX_OFFLOAD_TCP_TSO) != 0 ||
-			(txmode->offloads & RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO) != 0) {
+	if ((tx_offload & RTE_ETH_TX_OFFLOAD_TCP_TSO) != 0 ||
+			(tx_offload & RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO) != 0) {
 		if ((hw->cap & NFP_NET_CFG_CTRL_LSO) != 0)
 			ctrl |= NFP_NET_CFG_CTRL_LSO;
 		else
@@ -676,7 +676,7 @@ nfp_check_offloads(struct rte_eth_dev *dev)
 	}
 
 	/* RX gather */
-	if ((txmode->offloads & RTE_ETH_TX_OFFLOAD_MULTI_SEGS) != 0)
+	if ((tx_offload & RTE_ETH_TX_OFFLOAD_MULTI_SEGS) != 0)
 		ctrl |= NFP_NET_CFG_CTRL_GATHER;
 
 	return ctrl;
@@ -766,11 +766,10 @@ nfp_net_link_update(struct rte_eth_dev *dev,
 
 	hw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private);
 
-	/* Read link status */
-	nn_link_status = nn_cfg_readw(hw, NFP_NET_CFG_STS);
-
 	memset(&link, 0, sizeof(struct rte_eth_link));
 
+	/* Read link status */
+	nn_link_status = nn_cfg_readw(hw, NFP_NET_CFG_STS);
 	if ((nn_link_status & NFP_NET_CFG_STS_LINK) != 0)
 		link.link_status = RTE_ETH_LINK_UP;
 
@@ -828,6 +827,9 @@ nfp_net_stats_get(struct rte_eth_dev *dev,
 	struct nfp_net_hw *hw;
 	struct rte_eth_stats nfp_dev_stats;
 
+	if (stats == NULL)
+		return -EINVAL;
+
 	hw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private);
 
 	memset(&nfp_dev_stats, 0, sizeof(nfp_dev_stats));
@@ -892,11 +894,8 @@ nfp_net_stats_get(struct rte_eth_dev *dev,
 			nn_cfg_readq(hw, NFP_NET_CFG_STATS_RX_DISCARDS);
 	nfp_dev_stats.imissed -= hw->eth_stats_base.imissed;
 
-	if (stats != NULL) {
-		memcpy(stats, &nfp_dev_stats, sizeof(*stats));
-		return 0;
-	}
-	return -EINVAL;
+	memcpy(stats, &nfp_dev_stats, sizeof(*stats));
+	return 0;
 }
 
 /*
@@ -1379,13 +1378,14 @@ nfp_rx_queue_intr_enable(struct rte_eth_dev *dev,
 	struct nfp_net_hw *hw;
 	struct rte_pci_device *pci_dev;
 
-	hw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private);
 	pci_dev = RTE_ETH_DEV_TO_PCI(dev);
 	if (rte_intr_type_get(pci_dev->intr_handle) != RTE_INTR_HANDLE_UIO)
 		base = 1;
 
 	/* Make sure all updates are written before un-masking */
 	rte_wmb();
+
+	hw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private);
 	nn_cfg_writeb(hw, NFP_NET_CFG_ICR(base + queue_id),
 			NFP_NET_CFG_ICR_UNMASKED);
 	return 0;
@@ -1399,14 +1399,16 @@ nfp_rx_queue_intr_disable(struct rte_eth_dev *dev,
 	struct nfp_net_hw *hw;
 	struct rte_pci_device *pci_dev;
 
-	hw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private);
 	pci_dev = RTE_ETH_DEV_TO_PCI(dev);
 	if (rte_intr_type_get(pci_dev->intr_handle) != RTE_INTR_HANDLE_UIO)
 		base = 1;
 
 	/* Make sure all updates are written before un-masking */
 	rte_wmb();
-	nn_cfg_writeb(hw, NFP_NET_CFG_ICR(base + queue_id), 0x1);
+
+	hw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+	nn_cfg_writeb(hw, NFP_NET_CFG_ICR(base + queue_id), NFP_NET_CFG_ICR_RXTX);
+
 	return 0;
 }
 
@@ -1445,13 +1447,13 @@ nfp_net_irq_unmask(struct rte_eth_dev *dev)
 	hw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private);
 	pci_dev = RTE_ETH_DEV_TO_PCI(dev);
 
+	/* Make sure all updates are written before un-masking */
+	rte_wmb();
+
 	if ((hw->ctrl & NFP_NET_CFG_CTRL_MSIXAUTO) != 0) {
 		/* If MSI-X auto-masking is used, clear the entry */
-		rte_wmb();
 		rte_intr_ack(pci_dev->intr_handle);
 	} else {
-		/* Make sure all updates are written before un-masking */
-		rte_wmb();
 		nn_cfg_writeb(hw, NFP_NET_CFG_ICR(NFP_NET_IRQ_LSC_IDX),
 				NFP_NET_CFG_ICR_UNMASKED);
 	}
@@ -1548,19 +1550,18 @@ nfp_net_vlan_offload_set(struct rte_eth_dev *dev,
 	int ret;
 	uint32_t update;
 	uint32_t new_ctrl;
+	uint64_t rx_offload;
 	struct nfp_net_hw *hw;
 	uint32_t rxvlan_ctrl = 0;
-	struct rte_eth_conf *dev_conf;
 
 	hw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private);
-	dev_conf = &dev->data->dev_conf;
+	rx_offload = dev->data->dev_conf.rxmode.offloads;
 	new_ctrl = hw->ctrl;
 
-	nfp_net_enbable_rxvlan_cap(hw, &rxvlan_ctrl);
-
 	/* VLAN stripping setting */
 	if ((mask & RTE_ETH_VLAN_STRIP_MASK) != 0) {
-		if ((dev_conf->rxmode.offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP) != 0)
+		nfp_net_enable_rxvlan_cap(hw, &rxvlan_ctrl);
+		if ((rx_offload & RTE_ETH_RX_OFFLOAD_VLAN_STRIP) != 0)
 			new_ctrl |= rxvlan_ctrl;
 		else
 			new_ctrl &= ~rxvlan_ctrl;
@@ -1568,7 +1569,7 @@ nfp_net_vlan_offload_set(struct rte_eth_dev *dev,
 
 	/* QinQ stripping setting */
 	if ((mask & RTE_ETH_QINQ_STRIP_MASK) != 0) {
-		if ((dev_conf->rxmode.offloads & RTE_ETH_RX_OFFLOAD_QINQ_STRIP) != 0)
+		if ((rx_offload & RTE_ETH_RX_OFFLOAD_QINQ_STRIP) != 0)
 			new_ctrl |= NFP_NET_CFG_CTRL_RXQINQ;
 		else
 			new_ctrl &= ~NFP_NET_CFG_CTRL_RXQINQ;
@@ -1580,10 +1581,12 @@ nfp_net_vlan_offload_set(struct rte_eth_dev *dev,
 	update = NFP_NET_CFG_UPDATE_GEN;
 
 	ret = nfp_net_reconfig(hw, new_ctrl, update);
-	if (ret == 0)
-		hw->ctrl = new_ctrl;
+	if (ret != 0)
+		return ret;
 
-	return ret;
+	hw->ctrl = new_ctrl;
+
+	return 0;
 }
 
 static int
diff --git a/drivers/net/nfp/nfp_cpp_bridge.c b/drivers/net/nfp/nfp_cpp_bridge.c
index bb2a6fdcda..36dcdca9de 100644
--- a/drivers/net/nfp/nfp_cpp_bridge.c
+++ b/drivers/net/nfp/nfp_cpp_bridge.c
@@ -22,9 +22,6 @@
 #define NFP_IOCTL_CPP_IDENTIFICATION _IOW(NFP_IOCTL, 0x8f, uint32_t)
 
 /* Prototypes */
-static int nfp_cpp_bridge_serve_write(int sockfd, struct nfp_cpp *cpp);
-static int nfp_cpp_bridge_serve_read(int sockfd, struct nfp_cpp *cpp);
-static int nfp_cpp_bridge_serve_ioctl(int sockfd, struct nfp_cpp *cpp);
 static int nfp_cpp_bridge_service_func(void *args);
 
 int
@@ -438,7 +435,7 @@ nfp_cpp_bridge_service_func(void *args)
 			return -EIO;
 		}
 
-		while (1) {
+		for (;;) {
 			ret = recv(datafd, &op, 4, 0);
 			if (ret <= 0) {
 				PMD_CPP_LOG(DEBUG, "%s: socket close", __func__);
diff --git a/drivers/net/nfp/nfp_ctrl.h b/drivers/net/nfp/nfp_ctrl.h
index 5c2065a537..9ec51e0a25 100644
--- a/drivers/net/nfp/nfp_ctrl.h
+++ b/drivers/net/nfp/nfp_ctrl.h
@@ -442,8 +442,6 @@ struct nfp_net_fw_ver {
 #define NFP_MAC_STATS_TX_PAUSE_FRAMES_CLASS6    (NFP_MAC_STATS_BASE + 0x1f0)
 #define NFP_MAC_STATS_TX_PAUSE_FRAMES_CLASS7    (NFP_MAC_STATS_BASE + 0x1f8)
 
-#define NFP_PF_CSR_SLICE_SIZE    (32 * 1024)
-
 /*
  * General use mailbox area (0x1800 - 0x19ff)
  * 4B used for update command and 4B return code followed by
diff --git a/drivers/net/nfp/nfp_ethdev.c b/drivers/net/nfp/nfp_ethdev.c
index b65c2c1fe0..c550c12e01 100644
--- a/drivers/net/nfp/nfp_ethdev.c
+++ b/drivers/net/nfp/nfp_ethdev.c
@@ -80,7 +80,7 @@ nfp_net_start(struct rte_eth_dev *dev)
 			 * Better not to share LSC with RX interrupts.
 			 * Unregistering LSC interrupt handler.
 			 */
-			rte_intr_callback_unregister(pci_dev->intr_handle,
+			rte_intr_callback_unregister(intr_handle,
 					nfp_net_dev_interrupt_handler, (void *)dev);
 
 			if (dev->data->nb_rx_queues > 1) {
@@ -525,7 +525,7 @@ nfp_net_init(struct rte_eth_dev *eth_dev)
 			return -ENODEV;
 
 		/* Use port offset in pf ctrl_bar for this ports control bar */
-		hw->ctrl_bar = pf_dev->ctrl_bar + (port * NFP_PF_CSR_SLICE_SIZE);
+		hw->ctrl_bar = pf_dev->ctrl_bar + (port * NFP_NET_CFG_BAR_SZ);
 		hw->mac_stats = app_fw_nic->ports[0]->mac_stats_bar + (port * NFP_MAC_STATS_SIZE);
 	}
 
@@ -743,8 +743,7 @@ nfp_init_app_fw_nic(struct nfp_pf_dev *pf_dev,
 		const struct nfp_dev_info *dev_info)
 {
 	uint8_t i;
-	int ret;
-	int err = 0;
+	int ret = 0;
 	uint32_t total_vnics;
 	struct nfp_net_hw *hw;
 	unsigned int numa_node;
@@ -765,8 +764,8 @@ nfp_init_app_fw_nic(struct nfp_pf_dev *pf_dev,
 	pf_dev->app_fw_priv = app_fw_nic;
 
 	/* Read the number of vNIC's created for the PF */
-	total_vnics = nfp_rtsym_read_le(pf_dev->sym_tbl, "nfd_cfg_pf0_num_ports", &err);
-	if (err != 0 || total_vnics == 0 || total_vnics > 8) {
+	total_vnics = nfp_rtsym_read_le(pf_dev->sym_tbl, "nfd_cfg_pf0_num_ports", &ret);
+	if (ret != 0 || total_vnics == 0 || total_vnics > 8) {
 		PMD_INIT_LOG(ERR, "nfd_cfg_pf0_num_ports symbol with wrong value");
 		ret = -ENODEV;
 		goto app_cleanup;
@@ -874,8 +873,7 @@ nfp_init_app_fw_nic(struct nfp_pf_dev *pf_dev,
 static int
 nfp_pf_init(struct rte_pci_device *pci_dev)
 {
-	int ret;
-	int err = 0;
+	int ret = 0;
 	uint64_t addr;
 	uint32_t cpp_id;
 	struct nfp_cpp *cpp;
@@ -943,8 +941,8 @@ nfp_pf_init(struct rte_pci_device *pci_dev)
 	}
 
 	/* Read the app ID of the firmware loaded */
-	app_fw_id = nfp_rtsym_read_le(sym_tbl, "_pf0_net_app_id", &err);
-	if (err != 0) {
+	app_fw_id = nfp_rtsym_read_le(sym_tbl, "_pf0_net_app_id", &ret);
+	if (ret != 0) {
 		PMD_INIT_LOG(ERR, "Couldn't read app_fw_id from fw");
 		ret = -EIO;
 		goto sym_tbl_cleanup;
@@ -1080,7 +1078,6 @@ nfp_secondary_init_app_fw_nic(struct rte_pci_device *pci_dev,
 static int
 nfp_pf_secondary_init(struct rte_pci_device *pci_dev)
 {
-	int err = 0;
 	int ret = 0;
 	struct nfp_cpp *cpp;
 	enum nfp_app_fw_id app_fw_id;
@@ -1124,8 +1121,8 @@ nfp_pf_secondary_init(struct rte_pci_device *pci_dev)
 	}
 
 	/* Read the app ID of the firmware loaded */
-	app_fw_id = nfp_rtsym_read_le(sym_tbl, "_pf0_net_app_id", &err);
-	if (err != 0) {
+	app_fw_id = nfp_rtsym_read_le(sym_tbl, "_pf0_net_app_id", &ret);
+	if (ret != 0) {
 		PMD_INIT_LOG(ERR, "Couldn't read app_fw_id from fw");
 		goto sym_tbl_cleanup;
 	}
diff --git a/drivers/net/nfp/nfp_ethdev_vf.c b/drivers/net/nfp/nfp_ethdev_vf.c
index 7fb7b3efc5..ac6e67efc6 100644
--- a/drivers/net/nfp/nfp_ethdev_vf.c
+++ b/drivers/net/nfp/nfp_ethdev_vf.c
@@ -39,8 +39,6 @@ nfp_netvf_start(struct rte_eth_dev *dev)
 	struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
 	struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
 
-	hw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private);
-
 	/* Disabling queues just in case... */
 	nfp_net_disable_queues(dev);
 
@@ -54,7 +52,7 @@ nfp_netvf_start(struct rte_eth_dev *dev)
 			 * Better not to share LSC with RX interrupts.
 			 * Unregistering LSC interrupt handler.
 			 */
-			rte_intr_callback_unregister(pci_dev->intr_handle,
+			rte_intr_callback_unregister(intr_handle,
 					nfp_net_dev_interrupt_handler, (void *)dev);
 
 			if (dev->data->nb_rx_queues > 1) {
@@ -77,6 +75,7 @@ nfp_netvf_start(struct rte_eth_dev *dev)
 	new_ctrl = nfp_check_offloads(dev);
 
 	/* Writing configuration parameters in the device */
+	hw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private);
 	nfp_net_params_setup(hw);
 
 	dev_conf = &dev->data->dev_conf;
@@ -244,15 +243,15 @@ static int
 nfp_netvf_init(struct rte_eth_dev *eth_dev)
 {
 	int err;
+	uint16_t port;
 	uint32_t start_q;
-	uint16_t port = 0;
 	struct nfp_net_hw *hw;
 	uint64_t tx_bar_off = 0;
 	uint64_t rx_bar_off = 0;
 	struct rte_pci_device *pci_dev;
 	const struct nfp_dev_info *dev_info;
-	struct rte_ether_addr *tmp_ether_addr;
 
+	port = eth_dev->data->port_id;
 	pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
 
 	dev_info = nfp_dev_info_get(pci_dev->id.device_id);
@@ -325,9 +324,7 @@ nfp_netvf_init(struct rte_eth_dev *eth_dev)
 	}
 
 	nfp_netvf_read_mac(hw);
-
-	tmp_ether_addr = &hw->mac_addr;
-	if (rte_is_valid_assigned_ether_addr(tmp_ether_addr) == 0) {
+	if (rte_is_valid_assigned_ether_addr(&hw->mac_addr) == 0) {
 		PMD_INIT_LOG(INFO, "Using random mac address for port %hu", port);
 		/* Using random mac addresses for VFs */
 		rte_eth_random_addr(&hw->mac_addr.addr_bytes[0]);
@@ -344,7 +341,7 @@ nfp_netvf_init(struct rte_eth_dev *eth_dev)
 
 	PMD_INIT_LOG(INFO, "port %hu VendorID=%#x DeviceID=%#x "
 			"mac=" RTE_ETHER_ADDR_PRT_FMT,
-			eth_dev->data->port_id, pci_dev->id.vendor_id,
+			port, pci_dev->id.vendor_id,
 			pci_dev->id.device_id,
 			RTE_ETHER_ADDR_BYTES(&hw->mac_addr));
 
diff --git a/drivers/net/nfp/nfp_rxtx.c b/drivers/net/nfp/nfp_rxtx.c
index 74599747e8..efdca7fccf 100644
--- a/drivers/net/nfp/nfp_rxtx.c
+++ b/drivers/net/nfp/nfp_rxtx.c
@@ -284,7 +284,7 @@ nfp_net_parse_chained_meta(uint8_t *meta_base,
 			meta->vlan[meta->vlan_layer].tci =
 					vlan_info & NFP_NET_META_VLAN_MASK;
 			meta->vlan[meta->vlan_layer].tpid = NFP_NET_META_TPID(vlan_info);
-			++meta->vlan_layer;
+			meta->vlan_layer++;
 			break;
 		case NFP_NET_META_IPSEC:
 			meta->sa_idx = rte_be_to_cpu_32(*(rte_be32_t *)meta_offset);
-- 
2.39.1


^ permalink raw reply	[flat|nested] 40+ messages in thread
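
Two moves recur throughout the patch above: cache dev_conf->rxmode.offloads
in a local before testing flags, and rewrite tail-of-function error handling
as an early return so the committed state update is unconditional. Below is
a compilable sketch of the early-return shape; the hw struct and
hw_reconfig() are hypothetical stand-ins for the PMD's real types and
reconfig call, not DPDK API.

#include <errno.h>
#include <stdint.h>
#include <stdio.h>

/* Hypothetical stand-ins for the device handle and reconfigure call. */
struct hw {
        uint32_t ctrl;
};

static int
hw_reconfig(struct hw *hw, uint32_t new_ctrl)
{
        (void)hw;
        return new_ctrl == 0 ? -EINVAL : 0; /* Pretend all-zero config fails. */
}

/*
 * Early-return form used by the patch: the error leg exits immediately,
 * so the state commit below it needs no condition.
 */
static int
vlan_offload_set(struct hw *hw, uint32_t new_ctrl)
{
        int ret;

        ret = hw_reconfig(hw, new_ctrl);
        if (ret != 0)
                return ret;

        hw->ctrl = new_ctrl;

        return 0;
}

int
main(void)
{
        struct hw hw = { .ctrl = 0 };

        printf("ret=%d ctrl=%u\n", vlan_offload_set(&hw, 0x1),
                        (unsigned int)hw.ctrl);
        return 0;
}

The benefit over the old "if (ret == 0) hw->ctrl = new_ctrl; return ret;"
form is that hw->ctrl is only ever written on the success path, which is
easier to audit.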

* [PATCH v3 11/11] net/nfp: refact the meson build file
  2023-10-13  6:06   ` [PATCH v3 00/11] Unify the PMD coding style Chaoyong He
                       ` (9 preceding siblings ...)
  2023-10-13  6:06     ` [PATCH v3 10/11] net/nfp: adjust logic to make it more readable Chaoyong He
@ 2023-10-13  6:06     ` Chaoyong He
  2023-10-16 16:50       ` Ferruh Yigit
  2023-10-16 16:50     ` [PATCH v3 00/11] Unify the PMD coding style Ferruh Yigit
  11 siblings, 1 reply; 40+ messages in thread
From: Chaoyong He @ 2023-10-13  6:06 UTC (permalink / raw)
  To: dev; +Cc: oss-drivers, Chaoyong He, Long Wu, Peng Zhang

Make the source files follow alphabetical order.
Also update the copyright header line.

Signed-off-by: Chaoyong He <chaoyong.he@corigine.com>
Reviewed-by: Long Wu <long.wu@corigine.com>
Reviewed-by: Peng Zhang <peng.zhang@corigine.com>
---
 drivers/net/nfp/meson.build | 23 ++++++++++++-----------
 1 file changed, 12 insertions(+), 11 deletions(-)

diff --git a/drivers/net/nfp/meson.build b/drivers/net/nfp/meson.build
index 7627c3e3f1..40e9ef8524 100644
--- a/drivers/net/nfp/meson.build
+++ b/drivers/net/nfp/meson.build
@@ -1,10 +1,11 @@
 # SPDX-License-Identifier: BSD-3-Clause
-# Copyright(c) 2018 Intel Corporation
+# Copyright(c) 2018 Corigine, Inc.
 
 if not is_linux or not dpdk_conf.get('RTE_ARCH_64')
     build = false
     reason = 'only supported on 64-bit Linux'
 endif
+
 sources = files(
         'flower/nfp_conntrack.c',
         'flower/nfp_flower.c',
@@ -13,30 +14,30 @@ sources = files(
         'flower/nfp_flower_representor.c',
         'nfd3/nfp_nfd3_dp.c',
         'nfdk/nfp_nfdk_dp.c',
-        'nfpcore/nfp_nsp.c',
         'nfpcore/nfp_cppcore.c',
-        'nfpcore/nfp_resource.c',
-        'nfpcore/nfp_mip.c',
-        'nfpcore/nfp_nffw.c',
-        'nfpcore/nfp_rtsym.c',
-        'nfpcore/nfp_nsp_cmds.c',
         'nfpcore/nfp_crc.c',
         'nfpcore/nfp_dev.c',
+        'nfpcore/nfp_hwinfo.c',
+        'nfpcore/nfp_mip.c',
         'nfpcore/nfp_mutex.c',
+        'nfpcore/nfp_nffw.c',
+        'nfpcore/nfp_nsp.c',
+        'nfpcore/nfp_nsp_cmds.c',
         'nfpcore/nfp_nsp_eth.c',
-        'nfpcore/nfp_hwinfo.c',
+        'nfpcore/nfp_resource.c',
+        'nfpcore/nfp_rtsym.c',
         'nfpcore/nfp_target.c',
         'nfpcore/nfp6000_pcie.c',
         'nfp_common.c',
-        'nfp_ctrl.c',
-        'nfp_rxtx.c',
         'nfp_cpp_bridge.c',
-        'nfp_ethdev_vf.c',
+        'nfp_ctrl.c',
         'nfp_ethdev.c',
+        'nfp_ethdev_vf.c',
         'nfp_flow.c',
         'nfp_ipsec.c',
         'nfp_logs.c',
         'nfp_mtr.c',
+        'nfp_rxtx.c',
 )
 
 deps += ['hash', 'security']
-- 
2.39.1


^ permalink raw reply	[flat|nested] 40+ messages in thread
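
A sorted list like the one above is easy to keep sorted mechanically. As a
sketch only (not part of the DPDK build, and with the file list abbreviated
to a few entries), here is a self-contained check that flags the first
out-of-order entry:

#include <stdio.h>
#include <string.h>

int
main(void)
{
        /* Abbreviated copy of the sorted sources list. */
        const char *sources[] = {
                "nfp_common.c",
                "nfp_cpp_bridge.c",
                "nfp_ctrl.c",
                "nfp_ethdev.c",
                "nfp_ethdev_vf.c",
        };
        size_t i;
        size_t n = sizeof(sources) / sizeof(sources[0]);

        for (i = 1; i < n; i++) {
                if (strcmp(sources[i - 1], sources[i]) > 0) {
                        printf("out of order: %s\n", sources[i]);
                        return 1;
                }
        }

        printf("sources are sorted\n");
        return 0;
}

Plain strcmp() order happens to agree with the ordering the patch lands on:
nfp_ethdev.c sorts before nfp_ethdev_vf.c because '.' precedes '_' in ASCII.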

* Re: [PATCH v3 11/11] net/nfp: refact the meson build file
  2023-10-13  6:06     ` [PATCH v3 11/11] net/nfp: refact the meson build file Chaoyong He
@ 2023-10-16 16:50       ` Ferruh Yigit
  0 siblings, 0 replies; 40+ messages in thread
From: Ferruh Yigit @ 2023-10-16 16:50 UTC (permalink / raw)
  To: Chaoyong He, dev; +Cc: oss-drivers, Long Wu, Peng Zhang

On 10/13/2023 7:06 AM, Chaoyong He wrote:
> Make the source files follow alphabetical order.
> Also update the copyright header line.
> 
> Signed-off-by: Chaoyong He <chaoyong.he@corigine.com>
> Reviewed-by: Long Wu <long.wu@corigine.com>
> Reviewed-by: Peng Zhang <peng.zhang@corigine.com>
> ---
>  drivers/net/nfp/meson.build | 23 ++++++++++++-----------
>  1 file changed, 12 insertions(+), 11 deletions(-)
> 
> diff --git a/drivers/net/nfp/meson.build b/drivers/net/nfp/meson.build
> index 7627c3e3f1..40e9ef8524 100644
> --- a/drivers/net/nfp/meson.build
> +++ b/drivers/net/nfp/meson.build
> @@ -1,10 +1,11 @@
>  # SPDX-License-Identifier: BSD-3-Clause
> -# Copyright(c) 2018 Intel Corporation
> +# Copyright(c) 2018 Corigine, Inc.
>  

ack


^ permalink raw reply	[flat|nested] 40+ messages in thread

* Re: [PATCH v3 00/11] Unify the PMD coding style
  2023-10-13  6:06   ` [PATCH v3 00/11] Unify the PMD coding style Chaoyong He
                       ` (10 preceding siblings ...)
  2023-10-13  6:06     ` [PATCH v3 11/11] net/nfp: refact the meson build file Chaoyong He
@ 2023-10-16 16:50     ` Ferruh Yigit
  11 siblings, 0 replies; 40+ messages in thread
From: Ferruh Yigit @ 2023-10-16 16:50 UTC (permalink / raw)
  To: Chaoyong He, dev; +Cc: oss-drivers

On 10/13/2023 7:06 AM, Chaoyong He wrote:
> This patch series aims to unify the coding style of the NFP PMD and make
> the logic follow the same rules, so it is easier to understand and
> extend.
> It also prepares for the upcoming vDPA PMD patch series.
> 
> ---
> v2:
> * Add some missing modification.
> v3:
> * Remove the '\t' character in the log statement as the advice of
>   reviewer.
> ---
> 
> Chaoyong He (11):
>   net/nfp: explicitly compare to null and 0
>   net/nfp: unify the indent coding style
>   net/nfp: unify the type of integer variable
>   net/nfp: standard the local variable coding style
>   net/nfp: adjust the log statement
>   net/nfp: standard the comment style
>   net/nfp: standard the blank character
>   net/nfp: unify the guide line of header file
>   net/nfp: rename some parameter and variable
>   net/nfp: adjust logic to make it more readable
>   net/nfp: refact the meson build file
> 

It is good to take care of the code and update syntax, coding conventions,
etc., but it also creates noise in the git history and makes backporting
fixes/patches harder.

The driver has received a lot of refactoring changes for a while now; I
hope they are completed with the patches in this release.


Series applied to dpdk-next-net/main, thanks.

^ permalink raw reply	[flat|nested] 40+ messages in thread

end of thread

Thread overview: 40+ messages
2023-10-07  2:33 [PATCH 00/11] Unify the PMD coding style Chaoyong He
2023-10-07  2:33 ` [PATCH 01/11] net/nfp: explicitly compare to null and 0 Chaoyong He
2023-10-07  2:33 ` [PATCH 02/11] net/nfp: unify the indent coding style Chaoyong He
2023-10-07  2:33 ` [PATCH 03/11] net/nfp: unify the type of integer variable Chaoyong He
2023-10-07  2:33 ` [PATCH 04/11] net/nfp: standard the local variable coding style Chaoyong He
2023-10-07  2:33 ` [PATCH 05/11] net/nfp: adjust the log statement Chaoyong He
2023-10-07  2:33 ` [PATCH 06/11] net/nfp: standard the comment style Chaoyong He
2023-10-07  2:33 ` [PATCH 07/11] net/nfp: standard the blank character Chaoyong He
2023-10-07  2:33 ` [PATCH 08/11] net/nfp: unify the guide line of header file Chaoyong He
2023-10-07  2:33 ` [PATCH 09/11] net/nfp: rename some parameter and variable Chaoyong He
2023-10-07  2:33 ` [PATCH 10/11] net/nfp: adjust logic to make it more readable Chaoyong He
2023-10-07  2:33 ` [PATCH 11/11] net/nfp: refact the meson build file Chaoyong He
2023-10-12  1:26 ` [PATCH v2 00/11] Unify the PMD coding style Chaoyong He
2023-10-12  1:26   ` [PATCH v2 01/11] net/nfp: explicitly compare to null and 0 Chaoyong He
2023-10-12  1:26   ` [PATCH v2 02/11] net/nfp: unify the indent coding style Chaoyong He
2023-10-12  1:26   ` [PATCH v2 03/11] net/nfp: unify the type of integer variable Chaoyong He
2023-10-12  1:26   ` [PATCH v2 04/11] net/nfp: standard the local variable coding style Chaoyong He
2023-10-12  1:26   ` [PATCH v2 05/11] net/nfp: adjust the log statement Chaoyong He
2023-10-12  1:38     ` Stephen Hemminger
2023-10-12  1:40       ` Chaoyong He
2023-10-12  1:26   ` [PATCH v2 06/11] net/nfp: standard the comment style Chaoyong He
2023-10-12  1:27   ` [PATCH v2 07/11] net/nfp: standard the blank character Chaoyong He
2023-10-12  1:27   ` [PATCH v2 08/11] net/nfp: unify the guide line of header file Chaoyong He
2023-10-12  1:27   ` [PATCH v2 09/11] net/nfp: rename some parameter and variable Chaoyong He
2023-10-12  1:27   ` [PATCH v2 10/11] net/nfp: adjust logic to make it more readable Chaoyong He
2023-10-12  1:27   ` [PATCH v2 11/11] net/nfp: refact the meson build file Chaoyong He
2023-10-13  6:06   ` [PATCH v3 00/11] Unify the PMD coding style Chaoyong He
2023-10-13  6:06     ` [PATCH v3 01/11] net/nfp: explicitly compare to null and 0 Chaoyong He
2023-10-13  6:06     ` [PATCH v3 02/11] net/nfp: unify the indent coding style Chaoyong He
2023-10-13  6:06     ` [PATCH v3 03/11] net/nfp: unify the type of integer variable Chaoyong He
2023-10-13  6:06     ` [PATCH v3 04/11] net/nfp: standard the local variable coding style Chaoyong He
2023-10-13  6:06     ` [PATCH v3 05/11] net/nfp: adjust the log statement Chaoyong He
2023-10-13  6:06     ` [PATCH v3 06/11] net/nfp: standard the comment style Chaoyong He
2023-10-13  6:06     ` [PATCH v3 07/11] net/nfp: standard the blank character Chaoyong He
2023-10-13  6:06     ` [PATCH v3 08/11] net/nfp: unify the guide line of header file Chaoyong He
2023-10-13  6:06     ` [PATCH v3 09/11] net/nfp: rename some parameter and variable Chaoyong He
2023-10-13  6:06     ` [PATCH v3 10/11] net/nfp: adjust logic to make it more readable Chaoyong He
2023-10-13  6:06     ` [PATCH v3 11/11] net/nfp: refact the meson build file Chaoyong He
2023-10-16 16:50       ` Ferruh Yigit
2023-10-16 16:50     ` [PATCH v3 00/11] Unify the PMD coding style Ferruh Yigit
