From: sahithi.singam@oracle.com
To: yongwang@vmware.com
Cc: dev@dpdk.org, Sahithi Singam
Date: Mon, 8 Nov 2021 13:53:31 +0530
Message-Id: <20211108082331.1407-1-sahithi.singam@oracle.com>
X-Mailer: git-send-email 2.32.0.windows.1
Subject: [dpdk-dev] [PATCH] net/vmxnet3: add spinlocks to register command access

From: Sahithi Singam

At present, there are no spinlocks around register command access. This can result in a race condition when two threads running on two different cores invoke the link_update function at the same time to get the link status: one of the threads then reports a false link status value.
Signed-off-by: Sahithi Singam
---
 drivers/net/vmxnet3/vmxnet3_ethdev.c | 37 ++++++++++++++++++++++++++++
 drivers/net/vmxnet3/vmxnet3_ethdev.h |  1 +
 drivers/net/vmxnet3/vmxnet3_rxtx.c   |  2 ++
 3 files changed, 40 insertions(+)

diff --git a/drivers/net/vmxnet3/vmxnet3_ethdev.c b/drivers/net/vmxnet3/vmxnet3_ethdev.c
index d1ef1cad08..d4a433e0db 100644
--- a/drivers/net/vmxnet3/vmxnet3_ethdev.c
+++ b/drivers/net/vmxnet3/vmxnet3_ethdev.c
@@ -252,9 +252,11 @@ eth_vmxnet3_txdata_get(struct vmxnet3_hw *hw)
 {
 	uint16 txdata_desc_size;
 
+	rte_spinlock_lock(&hw->cmd_lock);
 	VMXNET3_WRITE_BAR1_REG(hw, VMXNET3_REG_CMD,
			       VMXNET3_CMD_GET_TXDATA_DESC_SIZE);
 	txdata_desc_size = VMXNET3_READ_BAR1_REG(hw, VMXNET3_REG_CMD);
+	rte_spinlock_unlock(&hw->cmd_lock);
 
 	return (txdata_desc_size < VMXNET3_TXDATA_DESC_MIN_SIZE ||
		txdata_desc_size > VMXNET3_TXDATA_DESC_MAX_SIZE ||
@@ -285,6 +287,7 @@ eth_vmxnet3_dev_init(struct rte_eth_dev *eth_dev)
 	eth_dev->tx_pkt_burst = &vmxnet3_xmit_pkts;
 	eth_dev->tx_pkt_prepare = vmxnet3_prep_pkts;
 	pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
+	rte_spinlock_init(&hw->cmd_lock);
 
 	/* extra mbuf field is required to guess MSS */
 	vmxnet3_segs_dynfield_offset =
@@ -375,7 +378,9 @@ eth_vmxnet3_dev_init(struct rte_eth_dev *eth_dev)
 		     hw->perm_addr[3], hw->perm_addr[4], hw->perm_addr[5]);
 
 	/* Put device in Quiesce Mode */
+	rte_spinlock_lock(&hw->cmd_lock);
 	VMXNET3_WRITE_BAR1_REG(hw, VMXNET3_REG_CMD, VMXNET3_CMD_QUIESCE_DEV);
+	rte_spinlock_unlock(&hw->cmd_lock);
 
 	/* allow untagged pkts */
 	VMXNET3_SET_VFTABLE_ENTRY(hw->shadow_vfta, 0);
@@ -451,9 +456,11 @@ vmxnet3_alloc_intr_resources(struct rte_eth_dev *dev)
 	int nvec = 1; /* for link event */
 
 	/* intr settings */
+	rte_spinlock_lock(&hw->cmd_lock);
 	VMXNET3_WRITE_BAR1_REG(hw, VMXNET3_REG_CMD,
			       VMXNET3_CMD_GET_CONF_INTR);
 	cfg = VMXNET3_READ_BAR1_REG(hw, VMXNET3_REG_CMD);
+	rte_spinlock_unlock(&hw->cmd_lock);
 	hw->intr.type = cfg & 0x3;
 	hw->intr.mask_mode = (cfg >> 2) & 0x3;
@@ -910,8 +917,10 @@ vmxnet3_dev_start(struct rte_eth_dev *dev)
 			       VMXNET3_GET_ADDR_HI(hw->sharedPA));
 
 	/* Activate device by register write */
+	rte_spinlock_lock(&hw->cmd_lock);
 	VMXNET3_WRITE_BAR1_REG(hw, VMXNET3_REG_CMD, VMXNET3_CMD_ACTIVATE_DEV);
 	ret = VMXNET3_READ_BAR1_REG(hw, VMXNET3_REG_CMD);
+	rte_spinlock_unlock(&hw->cmd_lock);
 
 	if (ret != 0) {
 		PMD_INIT_LOG(ERR, "Device activation: UNSUCCESSFUL");
@@ -921,9 +930,11 @@ vmxnet3_dev_start(struct rte_eth_dev *dev)
 	/* Setup memory region for rx buffers */
 	ret = vmxnet3_dev_setup_memreg(dev);
 	if (ret == 0) {
+		rte_spinlock_lock(&hw->cmd_lock);
 		VMXNET3_WRITE_BAR1_REG(hw, VMXNET3_REG_CMD,
				       VMXNET3_CMD_REGISTER_MEMREGS);
 		ret = VMXNET3_READ_BAR1_REG(hw, VMXNET3_REG_CMD);
+		rte_spinlock_unlock(&hw->cmd_lock);
 		if (ret != 0)
 			PMD_INIT_LOG(DEBUG,
				     "Failed in setup memory region cmd\n");
@@ -1027,12 +1038,16 @@ vmxnet3_dev_stop(struct rte_eth_dev *dev)
 	rte_intr_vec_list_free(intr_handle);
 
 	/* quiesce the device first */
+	rte_spinlock_lock(&hw->cmd_lock);
 	VMXNET3_WRITE_BAR1_REG(hw, VMXNET3_REG_CMD, VMXNET3_CMD_QUIESCE_DEV);
+	rte_spinlock_unlock(&hw->cmd_lock);
 	VMXNET3_WRITE_BAR1_REG(hw, VMXNET3_REG_DSAL, 0);
 	VMXNET3_WRITE_BAR1_REG(hw, VMXNET3_REG_DSAH, 0);
 
 	/* reset the device */
+	rte_spinlock_lock(&hw->cmd_lock);
 	VMXNET3_WRITE_BAR1_REG(hw, VMXNET3_REG_CMD, VMXNET3_CMD_RESET_DEV);
+	rte_spinlock_unlock(&hw->cmd_lock);
 	PMD_INIT_LOG(DEBUG, "Device reset.");
 
 	vmxnet3_dev_clear_queues(dev);
@@ -1182,7 +1197,9 @@ vmxnet3_hw_stats_save(struct vmxnet3_hw *hw)
 {
 	unsigned int i;
 
+	rte_spinlock_lock(&hw->cmd_lock);
 	VMXNET3_WRITE_BAR1_REG(hw, VMXNET3_REG_CMD, VMXNET3_CMD_GET_STATS);
+	rte_spinlock_unlock(&hw->cmd_lock);
 
 	RTE_BUILD_BUG_ON(RTE_ETHDEV_QUEUE_STAT_CNTRS < VMXNET3_MAX_TX_QUEUES);
@@ -1285,7 +1302,9 @@ vmxnet3_dev_stats_get(struct rte_eth_dev *dev, struct rte_eth_stats *stats)
 	struct UPT1_TxStats txStats;
 	struct UPT1_RxStats rxStats;
 
+	rte_spinlock_lock(&hw->cmd_lock);
 	VMXNET3_WRITE_BAR1_REG(hw, VMXNET3_REG_CMD, VMXNET3_CMD_GET_STATS);
+	rte_spinlock_unlock(&hw->cmd_lock);
 
 	RTE_BUILD_BUG_ON(RTE_ETHDEV_QUEUE_STAT_CNTRS < VMXNET3_MAX_TX_QUEUES);
 
 	for (i = 0; i < hw->num_tx_queues; i++) {
@@ -1335,7 +1354,9 @@ vmxnet3_dev_stats_reset(struct rte_eth_dev *dev)
 	struct UPT1_TxStats txStats = {0};
 	struct UPT1_RxStats rxStats = {0};
 
+	rte_spinlock_lock(&hw->cmd_lock);
 	VMXNET3_WRITE_BAR1_REG(hw, VMXNET3_REG_CMD, VMXNET3_CMD_GET_STATS);
+	rte_spinlock_unlock(&hw->cmd_lock);
 
 	RTE_BUILD_BUG_ON(RTE_ETHDEV_QUEUE_STAT_CNTRS < VMXNET3_MAX_TX_QUEUES);
@@ -1443,8 +1464,10 @@ __vmxnet3_dev_link_update(struct rte_eth_dev *dev,
 
 	memset(&link, 0, sizeof(link));
 
+	rte_spinlock_lock(&hw->cmd_lock);
 	VMXNET3_WRITE_BAR1_REG(hw, VMXNET3_REG_CMD, VMXNET3_CMD_GET_LINK);
 	ret = VMXNET3_READ_BAR1_REG(hw, VMXNET3_REG_CMD);
+	rte_spinlock_unlock(&hw->cmd_lock);
 
 	if (ret & 0x1)
 		link.link_status = RTE_ETH_LINK_UP;
@@ -1476,7 +1499,9 @@ vmxnet3_dev_set_rxmode(struct vmxnet3_hw *hw, uint32_t feature, int set)
 	else
 		rxConf->rxMode = rxConf->rxMode & (~feature);
 
+	rte_spinlock_lock(&hw->cmd_lock);
 	VMXNET3_WRITE_BAR1_REG(hw, VMXNET3_REG_CMD, VMXNET3_CMD_UPDATE_RX_MODE);
+	rte_spinlock_unlock(&hw->cmd_lock);
 }
 
 /* Promiscuous supported only if Vmxnet3_DriverShared is initialized in adapter */
@@ -1489,8 +1514,10 @@ vmxnet3_dev_promiscuous_enable(struct rte_eth_dev *dev)
 	memset(vf_table, 0, VMXNET3_VFT_TABLE_SIZE);
 	vmxnet3_dev_set_rxmode(hw, VMXNET3_RXM_PROMISC, 1);
+	rte_spinlock_lock(&hw->cmd_lock);
 	VMXNET3_WRITE_BAR1_REG(hw, VMXNET3_REG_CMD,
			       VMXNET3_CMD_UPDATE_VLAN_FILTERS);
+	rte_spinlock_unlock(&hw->cmd_lock);
 
 	return 0;
 }
@@ -1508,8 +1535,10 @@ vmxnet3_dev_promiscuous_disable(struct rte_eth_dev *dev)
 	else
 		memset(vf_table, 0xff, VMXNET3_VFT_TABLE_SIZE);
 	vmxnet3_dev_set_rxmode(hw, VMXNET3_RXM_PROMISC, 0);
+	rte_spinlock_lock(&hw->cmd_lock);
 	VMXNET3_WRITE_BAR1_REG(hw, VMXNET3_REG_CMD,
			       VMXNET3_CMD_UPDATE_VLAN_FILTERS);
+	rte_spinlock_unlock(&hw->cmd_lock);
 
 	return 0;
 }
@@ -1560,8 +1589,10 @@ vmxnet3_dev_vlan_filter_set(struct rte_eth_dev *dev, uint16_t vid, int on)
 	else
 		VMXNET3_CLEAR_VFTABLE_ENTRY(vf_table, vid);
 
+	rte_spinlock_lock(&hw->cmd_lock);
 	VMXNET3_WRITE_BAR1_REG(hw, VMXNET3_REG_CMD,
			       VMXNET3_CMD_UPDATE_VLAN_FILTERS);
+	rte_spinlock_unlock(&hw->cmd_lock);
 
 	return 0;
 }
@@ -1579,8 +1610,10 @@ vmxnet3_dev_vlan_offload_set(struct rte_eth_dev *dev, int mask)
 		else
 			devRead->misc.uptFeatures &= ~UPT1_F_RXVLAN;
 
+		rte_spinlock_lock(&hw->cmd_lock);
 		VMXNET3_WRITE_BAR1_REG(hw, VMXNET3_REG_CMD,
				       VMXNET3_CMD_UPDATE_FEATURE);
+		rte_spinlock_unlock(&hw->cmd_lock);
 	}
 
 	if (mask & RTE_ETH_VLAN_FILTER_MASK) {
@@ -1589,8 +1622,10 @@ vmxnet3_dev_vlan_offload_set(struct rte_eth_dev *dev, int mask)
 		else
 			memset(vf_table, 0xff, VMXNET3_VFT_TABLE_SIZE);
 
+		rte_spinlock_lock(&hw->cmd_lock);
 		VMXNET3_WRITE_BAR1_REG(hw, VMXNET3_REG_CMD,
				       VMXNET3_CMD_UPDATE_VLAN_FILTERS);
+		rte_spinlock_unlock(&hw->cmd_lock);
 	}
 
 	return 0;
@@ -1622,8 +1657,10 @@ vmxnet3_process_events(struct rte_eth_dev *dev)
 
 	/* Check if there is an error on xmit/recv queues */
 	if (events & (VMXNET3_ECR_TQERR | VMXNET3_ECR_RQERR)) {
+		rte_spinlock_lock(&hw->cmd_lock);
 		VMXNET3_WRITE_BAR1_REG(hw, VMXNET3_REG_CMD,
				       VMXNET3_CMD_GET_QUEUE_STATUS);
+		rte_spinlock_unlock(&hw->cmd_lock);
 
 		if (hw->tqd_start->status.stopped)
 			PMD_DRV_LOG(ERR, "tq error 0x%x",
diff --git a/drivers/net/vmxnet3/vmxnet3_ethdev.h b/drivers/net/vmxnet3/vmxnet3_ethdev.h
index ef858ac951..d07a8f2757 100644
--- a/drivers/net/vmxnet3/vmxnet3_ethdev.h
+++ b/drivers/net/vmxnet3/vmxnet3_ethdev.h
@@ -108,6 +108,7 @@ struct vmxnet3_hw {
 	uint64_t queueDescPA;
 	uint16_t queue_desc_len;
 	uint16_t mtu;
+	rte_spinlock_t cmd_lock;
 
 	VMXNET3_RSSConf *rss_conf;
 	uint64_t rss_confPA;
diff --git a/drivers/net/vmxnet3/vmxnet3_rxtx.c b/drivers/net/vmxnet3/vmxnet3_rxtx.c
index deba64be6a..c87f1c6470 100644
--- a/drivers/net/vmxnet3/vmxnet3_rxtx.c
+++ b/drivers/net/vmxnet3/vmxnet3_rxtx.c
@@ -1334,9 +1334,11 @@ vmxnet3_v4_rss_configure(struct rte_eth_dev *dev)
 	if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_UDP)
 		cmdInfo->setRSSFields |= VMXNET3_RSS_FIELDS_UDPIP6;
 
+	rte_spinlock_lock(&hw->cmd_lock);
 	VMXNET3_WRITE_BAR1_REG(hw, VMXNET3_REG_CMD,
			       VMXNET3_CMD_SET_RSS_FIELDS);
 	ret = VMXNET3_READ_BAR1_REG(hw, VMXNET3_REG_CMD);
+	rte_spinlock_unlock(&hw->cmd_lock);
 
 	if (ret != VMXNET3_SUCCESS) {
 		PMD_DRV_LOG(ERR, "Set RSS fields (v4) failed: %d", ret);
-- 
2.32.0.windows.1