From: Manish Chopra
Date: Thu, 30 Jul 2020 07:42:19 -0700
Message-ID: <20200730144221.29051-5-manishc@marvell.com>
In-Reply-To: <20200730144221.29051-1-manishc@marvell.com>
References: <20200730144221.29051-1-manishc@marvell.com>
Subject: [dpdk-dev] [PATCH v5 4/6] net/qede: add infrastructure support for VF load

This patch adds the infrastructure needed to start/load a VF-PMD driver
instance on top of the PF-PMD driver instance: handling messages from the
VFs and sending ramrods on behalf of a VF's configuration requests from the
alarm handler context.
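For context, the deferral mechanism this patch relies on can be summarized by
the small standalone sketch below. It is only an illustration of the pattern
(set a "work pending" flag bit, arm a one-shot EAL alarm, do the work from the
alarm callback); the names task_flags, MSG_FLAG_BIT, schedule_iov and iov_task
are placeholders, not the driver's symbols:

/* Illustration only: the "schedule work via a one-shot EAL alarm" pattern.
 * All names here are placeholders, not the qede driver's symbols.
 */
#include <stdio.h>
#include <stdint.h>
#include <unistd.h>

#include <rte_eal.h>
#include <rte_alarm.h>

static uint32_t task_flags;      /* stands in for hwfn->iov_task_flags */
#define MSG_FLAG_BIT 0           /* stands in for QED_IOV_WQ_MSG_FLAG  */

/* Runs from the EAL alarm (interrupt thread) context. */
static void iov_task(void *arg)
{
        uint32_t *flags = arg;

        if (*flags & (1u << MSG_FLAG_BIT)) {
                *flags &= ~(1u << MSG_FLAG_BIT);
                printf("handling deferred VF mailbox work\n");
        }
}

/* Mark work as pending and arm a one-shot alarm to process it. */
static int schedule_iov(uint32_t *flags, unsigned int bit)
{
        *flags |= (1u << bit);
        return rte_eal_alarm_set(1 /* us */, iov_task, flags);
}

int main(int argc, char **argv)
{
        if (rte_eal_init(argc, argv) < 0)
                return 1;

        schedule_iov(&task_flags, MSG_FLAG_BIT);
        usleep(10 * 1000);       /* give the alarm a chance to fire */

        return rte_eal_cleanup();
}

The driver does the equivalent with OSAL_SET_BIT()/OSAL_GET_BIT() on
p_hwfn->iov_task_flags and with qed_iov_pf_task() as the alarm callback, as
shown in the diff below.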
Signed-off-by: Manish Chopra
Signed-off-by: Igor Russkikh
Signed-off-by: Rasesh Mody
---
 drivers/net/qede/base/bcm_osal.c      | 26 ++++++++++++
 drivers/net/qede/base/bcm_osal.h      | 11 +++--
 drivers/net/qede/base/ecore.h         |  4 ++
 drivers/net/qede/base/ecore_iov_api.h |  3 ++
 drivers/net/qede/qede_ethdev.c        |  2 +
 drivers/net/qede/qede_main.c          |  4 +-
 drivers/net/qede/qede_sriov.c         | 61 +++++++++++++++++++++++++++
 drivers/net/qede/qede_sriov.h         | 16 ++++++-
 8 files changed, 121 insertions(+), 6 deletions(-)

diff --git a/drivers/net/qede/base/bcm_osal.c b/drivers/net/qede/base/bcm_osal.c
index 65837b53d..ef47339df 100644
--- a/drivers/net/qede/base/bcm_osal.c
+++ b/drivers/net/qede/base/bcm_osal.c
@@ -14,6 +14,32 @@
 #include "ecore_iov_api.h"
 #include "ecore_mcp_api.h"
 #include "ecore_l2_api.h"
+#include "../qede_sriov.h"
+
+int osal_pf_vf_msg(struct ecore_hwfn *p_hwfn)
+{
+        int rc;
+
+        rc = qed_schedule_iov(p_hwfn, QED_IOV_WQ_MSG_FLAG);
+        if (rc) {
+                DP_VERBOSE(p_hwfn, ECORE_MSG_IOV,
+                           "Failed to schedule alarm handler rc=%d\n", rc);
+        }
+
+        return rc;
+}
+
+void osal_poll_mode_dpc(osal_int_ptr_t hwfn_cookie)
+{
+        struct ecore_hwfn *p_hwfn = (struct ecore_hwfn *)hwfn_cookie;
+
+        if (!p_hwfn)
+                return;
+
+        OSAL_SPIN_LOCK(&p_hwfn->spq_lock);
+        ecore_int_sp_dpc((osal_int_ptr_t)(p_hwfn));
+        OSAL_SPIN_UNLOCK(&p_hwfn->spq_lock);
+}
 
 /* Array of memzone pointers */
 static const struct rte_memzone *ecore_mz_mapping[RTE_MAX_MEMZONE];
diff --git a/drivers/net/qede/base/bcm_osal.h b/drivers/net/qede/base/bcm_osal.h
index 5f55cc2ee..cf58db8bf 100644
--- a/drivers/net/qede/base/bcm_osal.h
+++ b/drivers/net/qede/base/bcm_osal.h
@@ -178,9 +178,12 @@ typedef pthread_mutex_t osal_mutex_t;
 
 /* DPC */
 
+void osal_poll_mode_dpc(osal_int_ptr_t hwfn_cookie);
 #define OSAL_DPC_ALLOC(hwfn) OSAL_ALLOC(hwfn, GFP, sizeof(osal_dpc_t))
-#define OSAL_DPC_INIT(dpc, hwfn) nothing
-#define OSAL_POLL_MODE_DPC(hwfn) nothing
+#define OSAL_DPC_INIT(dpc, hwfn) \
+        OSAL_SPIN_LOCK_INIT(&(hwfn)->spq_lock)
+#define OSAL_POLL_MODE_DPC(hwfn) \
+        osal_poll_mode_dpc((osal_int_ptr_t)(p_hwfn))
 #define OSAL_DPC_SYNC(hwfn) nothing
 
 /* Lists */
@@ -345,10 +348,12 @@ u32 qede_find_first_zero_bit(u32 *bitmap, u32 length);
 
 /* SR-IOV channel */
 
+int osal_pf_vf_msg(struct ecore_hwfn *p_hwfn);
 #define OSAL_VF_FLR_UPDATE(hwfn) nothing
 #define OSAL_VF_SEND_MSG2PF(dev, done, msg, reply_addr, msg_size, reply_size) 0
 #define OSAL_VF_CQE_COMPLETION(_dev_p, _cqe, _protocol) (0)
-#define OSAL_PF_VF_MSG(hwfn, vfid) 0
+#define OSAL_PF_VF_MSG(hwfn, vfid) \
+        osal_pf_vf_msg(hwfn)
 #define OSAL_PF_VF_MALICIOUS(hwfn, vfid) nothing
 #define OSAL_IOV_CHK_UCAST(hwfn, vfid, params) 0
 #define OSAL_IOV_POST_START_VPORT(hwfn, vf, vport_id, opaque_fid) nothing
diff --git a/drivers/net/qede/base/ecore.h b/drivers/net/qede/base/ecore.h
index 750e99a8f..6c8e6d407 100644
--- a/drivers/net/qede/base/ecore.h
+++ b/drivers/net/qede/base/ecore.h
@@ -714,6 +714,10 @@ struct ecore_hwfn {
 
         /* @DPDK */
         struct ecore_ptt *p_arfs_ptt;
+
+        /* DPDK specific, not the part of vanilla ecore */
+        osal_spinlock_t spq_lock;
+        u32 iov_task_flags;
 };
 
 enum ecore_mf_mode {
diff --git a/drivers/net/qede/base/ecore_iov_api.h b/drivers/net/qede/base/ecore_iov_api.h
index 545001812..bd7c5703f 100644
--- a/drivers/net/qede/base/ecore_iov_api.h
+++ b/drivers/net/qede/base/ecore_iov_api.h
@@ -14,6 +14,9 @@
 #define ECORE_ETH_VF_NUM_VLAN_FILTERS 2
 #define ECORE_VF_ARRAY_LENGTH (3)
 
+#define ECORE_VF_ARRAY_GET_VFID(arr, vfid) \
+        (((arr)[(vfid) / 64]) & (1ULL << ((vfid) % 64)))
+
 #define IS_VF(p_dev) ((p_dev)->b_is_vf)
 #define IS_PF(p_dev) (!((p_dev)->b_is_vf))
 #ifdef CONFIG_ECORE_SRIOV
diff --git a/drivers/net/qede/qede_ethdev.c b/drivers/net/qede/qede_ethdev.c
index 0235c0798..210a3b10f 100644
--- a/drivers/net/qede/qede_ethdev.c
+++ b/drivers/net/qede/qede_ethdev.c
@@ -281,7 +281,9 @@ qede_fw_version_get(struct rte_eth_dev *dev, char *fw_ver, size_t fw_size)
 
 static void qede_interrupt_action(struct ecore_hwfn *p_hwfn)
 {
+        OSAL_SPIN_LOCK(&p_hwfn->spq_lock);
         ecore_int_sp_dpc((osal_int_ptr_t)(p_hwfn));
+        OSAL_SPIN_UNLOCK(&p_hwfn->spq_lock);
 }
 
 static void
diff --git a/drivers/net/qede/qede_main.c b/drivers/net/qede/qede_main.c
index c37e8ebe0..0afacc064 100644
--- a/drivers/net/qede/qede_main.c
+++ b/drivers/net/qede/qede_main.c
@@ -221,7 +221,9 @@ static void qed_stop_iov_task(struct ecore_dev *edev)
 
         for_each_hwfn(edev, i) {
                 p_hwfn = &edev->hwfns[i];
-                if (!IS_PF(edev))
+                if (IS_PF(edev))
+                        rte_eal_alarm_cancel(qed_iov_pf_task, p_hwfn);
+                else
                         rte_eal_alarm_cancel(qede_vf_task, p_hwfn);
         }
 }
diff --git a/drivers/net/qede/qede_sriov.c b/drivers/net/qede/qede_sriov.c
index ba4384e90..6d620dde8 100644
--- a/drivers/net/qede/qede_sriov.c
+++ b/drivers/net/qede/qede_sriov.c
@@ -4,6 +4,14 @@
  * www.marvell.com
  */
 
+#include <rte_alarm.h>
+
+#include "base/bcm_osal.h"
+#include "base/ecore.h"
+#include "base/ecore_sriov.h"
+#include "base/ecore_mcp.h"
+#include "base/ecore_vf.h"
+
 #include "qede_sriov.h"
 
 static void qed_sriov_enable_qid_config(struct ecore_hwfn *hwfn,
@@ -83,3 +91,56 @@ void qed_sriov_configure(struct ecore_dev *edev, int num_vfs_param)
         if (num_vfs_param)
                 qed_sriov_enable(edev, num_vfs_param);
 }
+
+static void qed_handle_vf_msg(struct ecore_hwfn *hwfn)
+{
+        u64 events[ECORE_VF_ARRAY_LENGTH];
+        struct ecore_ptt *ptt;
+        int i;
+
+        ptt = ecore_ptt_acquire(hwfn);
+        if (!ptt) {
+                DP_NOTICE(hwfn, true, "PTT acquire failed\n");
+                qed_schedule_iov(hwfn, QED_IOV_WQ_MSG_FLAG);
+                return;
+        }
+
+        ecore_iov_pf_get_pending_events(hwfn, events);
+
+        ecore_for_each_vf(hwfn, i) {
+                /* Skip VFs with no pending messages */
+                if (!ECORE_VF_ARRAY_GET_VFID(events, i))
+                        continue;
+
+                DP_VERBOSE(hwfn, ECORE_MSG_IOV,
+                           "Handling VF message from VF 0x%02x [Abs 0x%02x]\n",
+                           i, hwfn->p_dev->p_iov_info->first_vf_in_pf + i);
+
+                /* Copy VF's message to PF's request buffer for that VF */
+                if (ecore_iov_copy_vf_msg(hwfn, ptt, i))
+                        continue;
+
+                ecore_iov_process_mbx_req(hwfn, ptt, i);
+        }
+
+        ecore_ptt_release(hwfn, ptt);
+}
+
+void qed_iov_pf_task(void *arg)
+{
+        struct ecore_hwfn *p_hwfn = arg;
+
+        if (OSAL_GET_BIT(QED_IOV_WQ_MSG_FLAG, &p_hwfn->iov_task_flags)) {
+                OSAL_CLEAR_BIT(QED_IOV_WQ_MSG_FLAG, &p_hwfn->iov_task_flags);
+                qed_handle_vf_msg(p_hwfn);
+        }
+}
+
+int qed_schedule_iov(struct ecore_hwfn *p_hwfn, enum qed_iov_wq_flag flag)
+{
+        DP_VERBOSE(p_hwfn, ECORE_MSG_IOV, "Scheduling iov task [Flag: %d]\n",
+                   flag);
+
+        OSAL_SET_BIT(flag, &p_hwfn->iov_task_flags);
+        return rte_eal_alarm_set(1, qed_iov_pf_task, p_hwfn);
+}
diff --git a/drivers/net/qede/qede_sriov.h b/drivers/net/qede/qede_sriov.h
index 6c85b1dd5..8b7fa7daa 100644
--- a/drivers/net/qede/qede_sriov.h
+++ b/drivers/net/qede/qede_sriov.h
@@ -4,6 +4,18 @@
  * www.marvell.com
  */
 
-#include "qede_ethdev.h"
-
 void qed_sriov_configure(struct ecore_dev *edev, int num_vfs_param);
+
+enum qed_iov_wq_flag {
+        QED_IOV_WQ_MSG_FLAG,
+        QED_IOV_WQ_SET_UNICAST_FILTER_FLAG,
+        QED_IOV_WQ_BULLETIN_UPDATE_FLAG,
+        QED_IOV_WQ_STOP_WQ_FLAG,
+        QED_IOV_WQ_FLR_FLAG,
+        QED_IOV_WQ_TRUST_FLAG,
+        QED_IOV_WQ_VF_FORCE_LINK_QUERY_FLAG,
+        QED_IOV_WQ_DB_REC_HANDLER,
+};
+
+int qed_schedule_iov(struct ecore_hwfn *p_hwfn, enum qed_iov_wq_flag flag);
+void qed_iov_pf_task(void *arg);
-- 
2.17.1