From mboxrd@z Thu Jan 1 00:00:00 1970
From: Kaiwen Deng <kaiwenx.deng@intel.com>
To: dev@dpdk.org
Cc: stable@dpdk.org, qiming.yang@intel.com, yidingx.zhou@intel.com,
 Kaiwen Deng, Jingjing Wu, Beilei Xing, Qi Zhang
Subject: [PATCH v2] net/iavf: fix iavf query stats in intr thread
Date: Tue, 7 Mar 2023 10:55:32 +0800
Message-Id: <20230307025533.1950861-1-kaiwenx.deng@intel.com>
In-Reply-To: <20230222044001.1241845-1-kaiwenx.deng@intel.com>
References: <20230222044001.1241845-1-kaiwenx.deng@intel.com>
List-Id: DPDK patches and discussions

When iavf sends the query-stats command from the eal-intr-thread over
the virtual channel, it blocks waiting for a response that never
arrives, because iavf_dev_virtchnl_handler, which delivers that
response, is also registered in the eal-intr-thread. The same deadlock
occurs when a VF device is bonded in BONDING_MODE_TLB mode: the slave
device update callback is registered as an alarm and invoked from the
eal-intr-thread.

This commit makes iavf_dev_stats_get return locally cached stats
immediately when it is called from the eal-intr-thread, and updates
the cached stats in iavf-virtchnl-thread.

Fixes: cb5c1b91f76f ("net/iavf: add thread for event callbacks")
Fixes: 22b123a36d07 ("net/avf: initialize PMD")
Cc: stable@dpdk.org

Signed-off-by: Kaiwen Deng <kaiwenx.deng@intel.com>
---
Changes since v1:
- Add lock to avoid race condition.
---
 drivers/net/iavf/iavf.h        |  10 ++-
 drivers/net/iavf/iavf_ethdev.c |  37 ++++++++++--
 drivers/net/iavf/iavf_vchnl.c  | 107 ++++++++++++++++++++++++---------
 3 files changed, 118 insertions(+), 36 deletions(-)

diff --git a/drivers/net/iavf/iavf.h b/drivers/net/iavf/iavf.h
index 1edebab8dc..641dfa2e3b 100644
--- a/drivers/net/iavf/iavf.h
+++ b/drivers/net/iavf/iavf.h
@@ -128,6 +128,8 @@ struct iavf_vsi {
 	uint16_t base_vector;
 	uint16_t msix_intr;   /* The MSIX interrupt binds to VSI */
 	struct iavf_eth_xstats eth_stats_offset;
+	struct virtchnl_eth_stats eth_stats;
+	rte_spinlock_t stats_lock;
 };
 
 struct rte_flow;
@@ -325,6 +327,8 @@ struct iavf_adapter {
 	struct iavf_devargs devargs;
 };
 
+typedef void (*virtchnl_callback)(struct rte_eth_dev *dev, void *args);
+
 /* IAVF_DEV_PRIVATE_TO */
 #define IAVF_DEV_PRIVATE_TO_ADAPTER(adapter) \
 	((struct iavf_adapter *)adapter)
@@ -424,8 +428,10 @@ _atomic_set_async_response_cmd(struct iavf_info *vf, enum virtchnl_ops ops)
 }
 int iavf_check_api_version(struct iavf_adapter *adapter);
 int iavf_get_vf_resource(struct iavf_adapter *adapter);
-void iavf_dev_event_handler_fini(void);
-int iavf_dev_event_handler_init(void);
+void iavf_dev_virtchnl_handler_fini(void);
+void iavf_dev_virtchnl_callback_post(struct rte_eth_dev *dev,
+		virtchnl_callback cb, void *args);
+int iavf_dev_virtchnl_handler_init(void);
 void iavf_handle_virtchnl_msg(struct rte_eth_dev *dev);
 int iavf_enable_vlan_strip(struct iavf_adapter *adapter);
 int iavf_disable_vlan_strip(struct iavf_adapter *adapter);
diff --git a/drivers/net/iavf/iavf_ethdev.c b/drivers/net/iavf/iavf_ethdev.c
index 3196210f2c..772859d157 100644
--- a/drivers/net/iavf/iavf_ethdev.c
+++ b/drivers/net/iavf/iavf_ethdev.c
@@ -1729,6 +1729,22 @@ iavf_update_stats(struct iavf_vsi *vsi, struct virtchnl_eth_stats *nes)
 	iavf_stat_update_32(&oes->tx_discards, &nes->tx_discards);
 }
 
+static void iavf_dev_stats_get_callback(struct rte_eth_dev *dev, void *args)
+{
+	struct virtchnl_eth_stats *eth_stats = (struct virtchnl_eth_stats *)args;
+	struct virtchnl_eth_stats *pstats = NULL;
+	struct iavf_adapter *adapter =
+		IAVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
+	struct iavf_info *vf = IAVF_DEV_PRIVATE_TO_VF(dev->data->dev_private);
+	struct iavf_vsi *vsi = &vf->vsi;
+	int ret = iavf_query_stats(adapter, &pstats);
+	if (ret == 0) {
+		rte_spinlock_lock(&vsi->stats_lock);
+		rte_memcpy(eth_stats, pstats, sizeof(struct virtchnl_eth_stats));
+		rte_spinlock_unlock(&vsi->stats_lock);
+	}
+}
+
 static int
 iavf_dev_stats_get(struct rte_eth_dev *dev, struct rte_eth_stats *stats)
 {
@@ -1738,9 +1754,17 @@ iavf_dev_stats_get(struct rte_eth_dev *dev, struct rte_eth_stats *stats)
 	struct iavf_vsi *vsi = &vf->vsi;
 	struct virtchnl_eth_stats *pstats = NULL;
 	int ret;
-
-	ret = iavf_query_stats(adapter, &pstats);
+	int is_intr_thread = rte_thread_is_intr();
+	if (is_intr_thread) {
+		pstats = &vsi->eth_stats;
+		iavf_dev_virtchnl_callback_post(dev, iavf_dev_stats_get_callback, (void *)pstats);
+		ret = 0;
+	} else {
+		ret = iavf_query_stats(adapter, &pstats);
+	}
 	if (ret == 0) {
+		if (is_intr_thread)
+			rte_spinlock_lock(&vsi->stats_lock);
 		uint8_t crc_stats_len = (dev->data->dev_conf.rxmode.offloads &
 					 RTE_ETH_RX_OFFLOAD_KEEP_CRC) ? 0 :
 					 RTE_ETHER_CRC_LEN;
@@ -1754,6 +1778,8 @@ iavf_dev_stats_get(struct rte_eth_dev *dev, struct rte_eth_stats *stats)
 		stats->ibytes = pstats->rx_bytes;
 		stats->ibytes -= stats->ipackets * crc_stats_len;
 		stats->obytes = pstats->tx_bytes;
+		if (is_intr_thread)
+			rte_spinlock_unlock(&vsi->stats_lock);
 	} else {
 		PMD_DRV_LOG(ERR, "Get statistics failed");
 	}
@@ -2571,10 +2597,13 @@ iavf_dev_init(struct rte_eth_dev *eth_dev)
 	struct iavf_hw *hw = IAVF_DEV_PRIVATE_TO_HW(adapter);
 	struct iavf_info *vf = IAVF_DEV_PRIVATE_TO_VF(adapter);
 	struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
+	struct iavf_vsi *vsi = &vf->vsi;
 	int ret = 0;
 
 	PMD_INIT_FUNC_TRACE();
 
+	rte_spinlock_init(&vsi->stats_lock);
+
 	/* assign ops func pointer */
 	eth_dev->dev_ops = &iavf_eth_dev_ops;
 	eth_dev->rx_queue_count = iavf_dev_rxq_count;
@@ -2634,7 +2663,7 @@ iavf_dev_init(struct rte_eth_dev *eth_dev)
 	rte_ether_addr_copy((struct rte_ether_addr *)hw->mac.addr,
 			&eth_dev->data->mac_addrs[0]);
 
-	if (iavf_dev_event_handler_init())
+	if (iavf_dev_virtchnl_handler_init())
 		goto init_vf_err;
 
 	if (vf->vf_res->vf_cap_flags & VIRTCHNL_VF_OFFLOAD_WB_ON_ITR) {
@@ -2791,7 +2820,7 @@ iavf_dev_uninit(struct rte_eth_dev *dev)
 
 	iavf_dev_close(dev);
 
-	iavf_dev_event_handler_fini();
+	iavf_dev_virtchnl_handler_fini();
 
 	return 0;
 }
diff --git a/drivers/net/iavf/iavf_vchnl.c b/drivers/net/iavf/iavf_vchnl.c
index f92daf97f2..4136c97c45 100644
--- a/drivers/net/iavf/iavf_vchnl.c
+++ b/drivers/net/iavf/iavf_vchnl.c
@@ -31,24 +31,36 @@
 
 #define MAX_EVENT_PENDING 16
 
-struct iavf_event_element {
-	TAILQ_ENTRY(iavf_event_element) next;
+struct iavf_virtchnl_element {
+	TAILQ_ENTRY(iavf_virtchnl_element) next;
 	struct rte_eth_dev *dev;
-	enum rte_eth_event_type event;
-	void *param;
-	size_t param_alloc_size;
-	uint8_t param_alloc_data[0];
+	enum iavf_virchnl_handle_type {
+		EVENT_TYPE = 0,
+		CALL_TYPE
+	} handle_type;
+	union {
+		struct event_param {
+			enum rte_eth_event_type event;
+			void *param;
+			size_t param_alloc_size;
+			uint8_t param_alloc_data[0];
+		} ep;
+		struct call_param {
+			virtchnl_callback cb;
+			void *args;
+		} cp;
+	};
 };
 
-struct iavf_event_handler {
+struct iavf_virtchnl_handler {
 	uint32_t ndev;
 	pthread_t tid;
 	int fd[2];
 	pthread_mutex_t lock;
-	TAILQ_HEAD(event_list, iavf_event_element) pending;
+	TAILQ_HEAD(event_list, iavf_virtchnl_element) pending;
 };
 
-static struct iavf_event_handler event_handler = {
+static struct iavf_virtchnl_handler event_handler = {
 	.fd = {-1, -1},
 };
 
@@ -60,10 +72,10 @@ static struct iavf_event_handler event_handler = {
 #endif
 
 static void *
-iavf_dev_event_handle(void *param __rte_unused)
+iavf_dev_virtchnl_handle(void *param __rte_unused)
 {
-	struct iavf_event_handler *handler = &event_handler;
-	TAILQ_HEAD(event_list, iavf_event_element) pending;
+	struct iavf_virtchnl_handler *handler = &event_handler;
+	TAILQ_HEAD(event_list, iavf_virtchnl_element) pending;
 
 	while (true) {
 		char unused[MAX_EVENT_PENDING];
@@ -76,10 +88,22 @@ iavf_dev_event_handle(void *param __rte_unused)
 		TAILQ_CONCAT(&pending, &handler->pending, next);
 		pthread_mutex_unlock(&handler->lock);
 
-		struct iavf_event_element *pos, *save_next;
+		struct iavf_virtchnl_element *pos, *save_next;
 		TAILQ_FOREACH_SAFE(pos, &pending, next, save_next) {
 			TAILQ_REMOVE(&pending, pos, next);
-			rte_eth_dev_callback_process(pos->dev, pos->event, pos->param);
+
+			switch (pos->handle_type) {
+			case EVENT_TYPE:
+				rte_eth_dev_callback_process(pos->dev,
+						pos->ep.event, pos->ep.param);
+				break;
+			case CALL_TYPE:
+				pos->cp.cb(pos->dev, pos->cp.args);
+				break;
+			default:
+				break;
+			}
+
 			rte_free(pos);
 		}
 	}
@@ -92,19 +116,20 @@ iavf_dev_event_post(struct rte_eth_dev *dev,
 		enum rte_eth_event_type event,
 		void *param, size_t param_alloc_size)
 {
-	struct iavf_event_handler *handler = &event_handler;
+	struct iavf_virtchnl_handler *handler = &event_handler;
 	char notify_byte;
-	struct iavf_event_element *elem = rte_malloc(NULL, sizeof(*elem) + param_alloc_size, 0);
+	struct iavf_virtchnl_element *elem = rte_malloc(NULL, sizeof(*elem) + param_alloc_size, 0);
 	if (!elem)
 		return;
-
+	elem->handle_type = EVENT_TYPE;
+	struct event_param *ep = &elem->ep;
 	elem->dev = dev;
-	elem->event = event;
-	elem->param = param;
-	elem->param_alloc_size = param_alloc_size;
+	ep->event = event;
+	ep->param = param;
+	ep->param_alloc_size = param_alloc_size;
 	if (param && param_alloc_size) {
-		rte_memcpy(elem->param_alloc_data, param, param_alloc_size);
-		elem->param = elem->param_alloc_data;
+		rte_memcpy(ep->param_alloc_data, param, param_alloc_size);
+		ep->param = ep->param_alloc_data;
 	}
 
 	pthread_mutex_lock(&handler->lock);
@@ -115,10 +140,32 @@ iavf_dev_event_post(struct rte_eth_dev *dev,
 	RTE_SET_USED(nw);
 }
 
+void
+iavf_dev_virtchnl_callback_post(struct rte_eth_dev *dev, virtchnl_callback cb, void *args)
+{
+	struct iavf_virtchnl_handler *handler = &event_handler;
+	char notify_byte;
+	struct iavf_virtchnl_element *elem = rte_malloc(NULL, sizeof(*elem), 0);
+	if (!elem)
+		return;
+	elem->dev = dev;
+	elem->handle_type = CALL_TYPE;
+	struct call_param *cp = &elem->cp;
+	cp->cb = cb;
+	cp->args = args;
+
+	pthread_mutex_lock(&handler->lock);
+	TAILQ_INSERT_TAIL(&handler->pending, elem, next);
+	pthread_mutex_unlock(&handler->lock);
+
+	ssize_t nw = write(handler->fd[1], &notify_byte, 1);
+	RTE_SET_USED(nw);
+}
+
 int
-iavf_dev_event_handler_init(void)
+iavf_dev_virtchnl_handler_init(void)
 {
-	struct iavf_event_handler *handler = &event_handler;
+	struct iavf_virtchnl_handler *handler = &event_handler;
 
 	if (__atomic_add_fetch(&handler->ndev, 1, __ATOMIC_RELAXED) != 1)
 		return 0;
@@ -135,8 +182,8 @@ iavf_dev_event_handler_init(void)
 	TAILQ_INIT(&handler->pending);
 	pthread_mutex_init(&handler->lock, NULL);
 
-	if (rte_ctrl_thread_create(&handler->tid, "iavf-event-thread",
-				NULL, iavf_dev_event_handle, NULL)) {
+	if (rte_ctrl_thread_create(&handler->tid, "iavf-virtchnl-thread",
+				NULL, iavf_dev_virtchnl_handle, NULL)) {
 		__atomic_sub_fetch(&handler->ndev, 1, __ATOMIC_RELAXED);
 		return -1;
 	}
@@ -145,9 +192,9 @@ iavf_dev_event_handler_init(void)
 }
 
 void
-iavf_dev_event_handler_fini(void)
+iavf_dev_virtchnl_handler_fini(void)
 {
-	struct iavf_event_handler *handler = &event_handler;
+	struct iavf_virtchnl_handler *handler = &event_handler;
 
 	if (__atomic_sub_fetch(&handler->ndev, 1, __ATOMIC_RELAXED) != 0)
 		return;
@@ -162,7 +209,7 @@ iavf_dev_event_handler_fini(void)
 	pthread_join(handler->tid, NULL);
 	pthread_mutex_destroy(&handler->lock);
 
-	struct iavf_event_element *pos, *save_next;
+	struct iavf_virtchnl_element *pos, *save_next;
 	TAILQ_FOREACH_SAFE(pos, &handler->pending, next, save_next) {
 		TAILQ_REMOVE(&handler->pending, pos, next);
 		rte_free(pos);
@@ -1408,7 +1455,7 @@ iavf_query_stats(struct iavf_adapter *adapter,
 	struct iavf_cmd_info args;
 	int err;
 
-	if (adapter->closed)
+	if (!adapter || adapter->closed)
 		return -EIO;
 
 	memset(&q_stats, 0, sizeof(q_stats));
-- 
2.34.1