From mboxrd@z Thu Jan 1 00:00:00 1970
From: Harman Kalra <hkalra@marvell.com>
To: Nithin Dabilpuram, Kiran Kumar K, Sunil Kumar Kori, Satha Rao,
 Harman Kalra
CC: dev@dpdk.org
Subject: [PATCH v4 12/23] net/cnxk: handling representee notification
Date: Wed, 28 Feb 2024 00:45:39 +0530
Message-ID: <20240227191550.137687-13-hkalra@marvell.com>
X-Mailer: git-send-email 2.18.0
In-Reply-To: <20240227191550.137687-1-hkalra@marvell.com>
References: <20230811163419.165790-1-hkalra@marvell.com>
 <20240227191550.137687-1-hkalra@marvell.com>
MIME-Version: 1.0
Content-Type: text/plain

In case of any representee coming up or going down, the kernel sends an
mbox up call which signals a thread to process these messages and
enable/disable HW resources accordingly.
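
The notification path is a simple producer/consumer hand-off: the mbox
up call context queues a message under a mutex and signals a condition
variable, and a dedicated control thread drains the list and handles
each entry outside the lock. The standalone sketch below only
illustrates that pattern; the names, the shutdown handling and the demo
main() are illustrative and are not the driver's symbols.

/* Illustrative sketch of the queue-and-signal pattern used below.
 * Not driver code: names and the demo main() are made up.
 */
#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/queue.h>

struct msg {
	int type;			/* e.g. state change or MTU update */
	TAILQ_ENTRY(msg) next;
};

static TAILQ_HEAD(, msg) msg_list = TAILQ_HEAD_INITIALIZER(msg_list);
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t cond = PTHREAD_COND_INITIALIZER;
static bool running = true;

/* Producer side: called from the notification (mbox up call) context */
static void
notify(int type)
{
	struct msg *m = calloc(1, sizeof(*m));

	if (m == NULL)
		return;
	m->type = type;
	pthread_mutex_lock(&lock);
	TAILQ_INSERT_TAIL(&msg_list, m, next);
	pthread_cond_signal(&cond);
	pthread_mutex_unlock(&lock);
}

/* Consumer side: dedicated message processing thread */
static void *
msg_thread(void *arg)
{
	struct msg *m;

	(void)arg;
	pthread_mutex_lock(&lock);
	while (running || !TAILQ_EMPTY(&msg_list)) {
		while (TAILQ_EMPTY(&msg_list) && running)
			pthread_cond_wait(&cond, &lock);
		while ((m = TAILQ_FIRST(&msg_list)) != NULL) {
			TAILQ_REMOVE(&msg_list, m, next);
			/* Drop the lock while handling the message so the
			 * producer can keep queueing new notifications.
			 */
			pthread_mutex_unlock(&lock);
			printf("processing msg type %d\n", m->type);
			free(m);
			pthread_mutex_lock(&lock);
		}
	}
	pthread_mutex_unlock(&lock);
	return NULL;
}

int
main(void)
{
	pthread_t tid;

	pthread_create(&tid, NULL, msg_thread, NULL);
	notify(1);			/* representee up */
	notify(2);			/* representee MTU change */
	pthread_mutex_lock(&lock);	/* request shutdown */
	running = false;
	pthread_cond_signal(&cond);
	pthread_mutex_unlock(&lock);
	pthread_join(tid, NULL);
	return 0;
}

Unlike this sketch, the driver also copies the notification payload into
its own allocation before queueing, since the caller's buffer is not
owned by the processing thread.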
Signed-off-by: Harman Kalra <hkalra@marvell.com>
---
 drivers/net/cnxk/cnxk_eswitch.c |   8 +
 drivers/net/cnxk/cnxk_eswitch.h |  19 ++
 drivers/net/cnxk/cnxk_rep.c     | 326 ++++++++++++++++++++++++++++++++
 drivers/net/cnxk/cnxk_rep.h     |  37 ++++
 4 files changed, 390 insertions(+)

diff --git a/drivers/net/cnxk/cnxk_eswitch.c b/drivers/net/cnxk/cnxk_eswitch.c
index 14d0df8791..f420d01ef8 100644
--- a/drivers/net/cnxk/cnxk_eswitch.c
+++ b/drivers/net/cnxk/cnxk_eswitch.c
@@ -139,6 +139,14 @@ cnxk_eswitch_dev_remove(struct rte_pci_device *pci_dev)
 		close(sock_fd);
 	}
 
+	if (eswitch_dev->repte_msg_proc.start_thread) {
+		eswitch_dev->repte_msg_proc.start_thread = false;
+		pthread_cond_signal(&eswitch_dev->repte_msg_proc.repte_msg_cond);
+		rte_thread_join(eswitch_dev->repte_msg_proc.repte_msg_thread, NULL);
+		pthread_mutex_destroy(&eswitch_dev->repte_msg_proc.mutex);
+		pthread_cond_destroy(&eswitch_dev->repte_msg_proc.repte_msg_cond);
+	}
+
 	/* Remove representor devices associated with PF */
 	cnxk_rep_dev_remove(eswitch_dev);
 }
diff --git a/drivers/net/cnxk/cnxk_eswitch.h b/drivers/net/cnxk/cnxk_eswitch.h
index ecf10a8e08..0275e760fb 100644
--- a/drivers/net/cnxk/cnxk_eswitch.h
+++ b/drivers/net/cnxk/cnxk_eswitch.h
@@ -30,6 +30,22 @@ enum cnxk_esw_da_pattern_type {
 	CNXK_ESW_DA_TYPE_PFVF,
 };
 
+struct cnxk_esw_repte_msg {
+	struct roc_eswitch_repte_notify_msg *notify_msg;
+
+	TAILQ_ENTRY(cnxk_esw_repte_msg) next;
+};
+
+struct cnxk_esw_repte_msg_proc {
+	bool start_thread;
+	uint8_t msg_avail;
+	rte_thread_t repte_msg_thread;
+	pthread_cond_t repte_msg_cond;
+	pthread_mutex_t mutex;
+
+	TAILQ_HEAD(esw_repte_msg_list, cnxk_esw_repte_msg) msg_list;
+};
+
 struct cnxk_esw_repr_hw_info {
 	/* Representee pcifunc value */
 	uint16_t hw_func;
@@ -139,6 +155,9 @@ struct cnxk_eswitch_dev {
 	bool client_connected;
 	int sock_fd;
 
+	/* Representee notification */
+	struct cnxk_esw_repte_msg_proc repte_msg_proc;
+
 	/* Port representor fields */
 	rte_spinlock_t rep_lock;
 	uint16_t nb_switch_domain;
diff --git a/drivers/net/cnxk/cnxk_rep.c b/drivers/net/cnxk/cnxk_rep.c
index 5b619ebb9e..dc00cdecc1 100644
--- a/drivers/net/cnxk/cnxk_rep.c
+++ b/drivers/net/cnxk/cnxk_rep.c
@@ -4,6 +4,8 @@
 #include
 #include
 
+#define REPTE_MSG_PROC_THRD_NAME_MAX_LEN 30
+
 #define PF_SHIFT 10
 #define PF_MASK	 0x3F
 
@@ -86,6 +88,7 @@ cnxk_rep_dev_remove(struct cnxk_eswitch_dev *eswitch_dev)
 {
 	int i, rc = 0;
 
+	roc_eswitch_nix_process_repte_notify_cb_unregister(&eswitch_dev->nix);
 	for (i = 0; i < eswitch_dev->nb_switch_domain; i++) {
 		rc = rte_eth_switch_domain_free(eswitch_dev->sw_dom[i].switch_domain_id);
 		if (rc)
@@ -95,6 +98,299 @@ cnxk_rep_dev_remove(struct cnxk_eswitch_dev *eswitch_dev)
 	return rc;
 }
 
+static int
+cnxk_representee_release(struct cnxk_eswitch_dev *eswitch_dev, uint16_t hw_func)
+{
+	struct cnxk_rep_dev *rep_dev = NULL;
+	struct rte_eth_dev *rep_eth_dev;
+	int i, rc = 0;
+
+	for (i = 0; i < eswitch_dev->repr_cnt.nb_repr_probed; i++) {
+		rep_eth_dev = eswitch_dev->rep_info[i].rep_eth_dev;
+		if (!rep_eth_dev) {
+			plt_err("Failed to get rep ethdev handle");
+			rc = -EINVAL;
+			goto done;
+		}
+
+		rep_dev = cnxk_rep_pmd_priv(rep_eth_dev);
+		if (rep_dev->hw_func == hw_func &&
+		    (!rep_dev->native_repte || rep_dev->is_vf_active)) {
+			rep_dev->is_vf_active = false;
+			rc = cnxk_rep_dev_stop(rep_eth_dev);
+			if (rc) {
+				plt_err("Failed to stop repr port %d, rep id %d", rep_dev->port_id,
+					rep_dev->rep_id);
+				goto done;
+			}
+
+			cnxk_rep_rx_queue_release(rep_eth_dev, 0);
+			cnxk_rep_tx_queue_release(rep_eth_dev, 0);
+			plt_rep_dbg("Released representor ID %d representing %x", rep_dev->rep_id,
+				    hw_func);
+			break;
+		}
+	}
+done:
+	return rc;
+}
+
+static int
+cnxk_representee_setup(struct cnxk_eswitch_dev *eswitch_dev, uint16_t hw_func, uint16_t rep_id)
+{
+	struct cnxk_rep_dev *rep_dev = NULL;
+	struct rte_eth_dev *rep_eth_dev;
+	int i, rc = 0;
+
+	for (i = 0; i < eswitch_dev->repr_cnt.nb_repr_probed; i++) {
+		rep_eth_dev = eswitch_dev->rep_info[i].rep_eth_dev;
+		if (!rep_eth_dev) {
+			plt_err("Failed to get rep ethdev handle");
+			rc = -EINVAL;
+			goto done;
+		}
+
+		rep_dev = cnxk_rep_pmd_priv(rep_eth_dev);
+		if (rep_dev->hw_func == hw_func && !rep_dev->is_vf_active) {
+			rep_dev->is_vf_active = true;
+			rep_dev->native_repte = true;
+			if (rep_dev->rep_id != rep_id) {
+				plt_err("Rep ID assigned during init %d does not match %d",
+					rep_dev->rep_id, rep_id);
+				rc = -EINVAL;
+				goto done;
+			}
+
+			rc = cnxk_rep_rx_queue_setup(rep_eth_dev, rep_dev->rxq->qid,
+						     rep_dev->rxq->nb_desc, 0,
+						     rep_dev->rxq->rx_conf, rep_dev->rxq->mpool);
+			if (rc) {
+				plt_err("Failed to setup rxq repr port %d, rep id %d",
+					rep_dev->port_id, rep_dev->rep_id);
+				goto done;
+			}
+
+			rc = cnxk_rep_tx_queue_setup(rep_eth_dev, rep_dev->txq->qid,
+						     rep_dev->txq->nb_desc, 0,
+						     rep_dev->txq->tx_conf);
+			if (rc) {
+				plt_err("Failed to setup txq repr port %d, rep id %d",
+					rep_dev->port_id, rep_dev->rep_id);
+				goto done;
+			}
+
+			rc = cnxk_rep_dev_start(rep_eth_dev);
+			if (rc) {
+				plt_err("Failed to start repr port %d, rep id %d", rep_dev->port_id,
+					rep_dev->rep_id);
+				goto done;
+			}
+			break;
+		}
+	}
+done:
+	return rc;
+}
+
+static int
+cnxk_representee_state_msg_process(struct cnxk_eswitch_dev *eswitch_dev, uint16_t hw_func,
+				   bool enable)
+{
+	struct cnxk_eswitch_devargs *esw_da;
+	uint16_t rep_id = UINT16_MAX;
+	int rc = 0, i, j;
+
+	/* Traversing the initialized represented list */
+	for (i = 0; i < eswitch_dev->nb_esw_da; i++) {
+		esw_da = &eswitch_dev->esw_da[i];
+		for (j = 0; j < esw_da->nb_repr_ports; j++) {
+			if (esw_da->repr_hw_info[j].hw_func == hw_func) {
+				rep_id = esw_da->repr_hw_info[j].rep_id;
+				break;
+			}
+		}
+		if (rep_id != UINT16_MAX)
+			break;
+	}
+	/* No action on PF func for which representor has not been created */
+	if (rep_id == UINT16_MAX)
+		goto done;
+
+	if (enable) {
+		rc = cnxk_representee_setup(eswitch_dev, hw_func, rep_id);
+		if (rc) {
+			plt_err("Failed to setup representee, err %d", rc);
+			goto fail;
+		}
+		plt_rep_dbg(" Representor ID %d representing %x", rep_id, hw_func);
+		rc = cnxk_eswitch_flow_rules_install(eswitch_dev, hw_func);
+		if (rc) {
+			plt_err("Failed to install rxtx flow rules for %x", hw_func);
+			goto fail;
+		}
+	} else {
+		rc = cnxk_eswitch_flow_rules_delete(eswitch_dev, hw_func);
+		if (rc) {
+			plt_err("Failed to delete flow rules for %x", hw_func);
+			goto fail;
+		}
+		rc = cnxk_representee_release(eswitch_dev, hw_func);
+		if (rc) {
+			plt_err("Failed to release representee, err %d", rc);
+			goto fail;
+		}
+	}
+
+done:
+	return 0;
+fail:
+	return rc;
+}
+
+static int
+cnxk_representee_mtu_msg_process(struct cnxk_eswitch_dev *eswitch_dev, uint16_t hw_func,
+				 uint16_t rep_id, uint16_t mtu)
+{
+	struct cnxk_rep_dev *rep_dev = NULL;
+	struct rte_eth_dev *rep_eth_dev;
+	int rc = 0;
+	int i;
+
+	for (i = 0; i < eswitch_dev->repr_cnt.nb_repr_probed; i++) {
+		rep_eth_dev = eswitch_dev->rep_info[i].rep_eth_dev;
+		if (!rep_eth_dev) {
+			plt_err("Failed to get rep ethdev handle");
+			rc = -EINVAL;
+			goto done;
+		}
+
+		rep_dev = cnxk_rep_pmd_priv(rep_eth_dev);
+		if (rep_dev->rep_id == rep_id) {
+			plt_rep_dbg("Setting MTU as %d for hw_func %x rep_id %d\n", mtu, hw_func,
+				    rep_id);
+			rep_dev->repte_mtu = mtu;
+			break;
+		}
+	}
+
+done:
+	return rc;
+}
+
+static int
+cnxk_representee_msg_process(struct cnxk_eswitch_dev *eswitch_dev,
+			     struct roc_eswitch_repte_notify_msg *notify_msg)
+{
+	int rc = 0;
+
+	switch (notify_msg->type) {
+	case ROC_ESWITCH_REPTE_STATE:
+		plt_rep_dbg(" type %d: hw_func %x action %s", notify_msg->type,
+			    notify_msg->state.hw_func,
+			    notify_msg->state.enable ? "enable" : "disable");
+		rc = cnxk_representee_state_msg_process(eswitch_dev, notify_msg->state.hw_func,
+							notify_msg->state.enable);
+		break;
+	case ROC_ESWITCH_REPTE_MTU:
+		plt_rep_dbg(" type %d: hw_func %x rep_id %d mtu %d", notify_msg->type,
+			    notify_msg->mtu.hw_func, notify_msg->mtu.rep_id, notify_msg->mtu.mtu);
+		rc = cnxk_representee_mtu_msg_process(eswitch_dev, notify_msg->mtu.hw_func,
+						      notify_msg->mtu.rep_id, notify_msg->mtu.mtu);
+		break;
+	default:
+		plt_err("Invalid notification msg received %d", notify_msg->type);
+		break;
+	};
+
+	return rc;
+}
+
+static uint32_t
+cnxk_representee_msg_thread_main(void *arg)
+{
+	struct cnxk_eswitch_dev *eswitch_dev = (struct cnxk_eswitch_dev *)arg;
+	struct cnxk_esw_repte_msg_proc *repte_msg_proc;
+	struct cnxk_esw_repte_msg *msg, *next_msg;
+	int count, rc;
+
+	repte_msg_proc = &eswitch_dev->repte_msg_proc;
+	pthread_mutex_lock(&eswitch_dev->repte_msg_proc.mutex);
+	while (eswitch_dev->repte_msg_proc.start_thread) {
+		do {
+			rc = pthread_cond_wait(&eswitch_dev->repte_msg_proc.repte_msg_cond,
+					       &eswitch_dev->repte_msg_proc.mutex);
+		} while (rc != 0);
+
+		/* Go through list pushed from interrupt context and process each message */
+		next_msg = TAILQ_FIRST(&repte_msg_proc->msg_list);
+		count = 0;
+		while (next_msg) {
+			msg = next_msg;
+			count++;
+			plt_rep_dbg(" Processing msg %d: ", count);
+			/* Unlocking for interrupt thread to grab lock
+			 * while thread process the message.
+			 */
+			pthread_mutex_unlock(&eswitch_dev->repte_msg_proc.mutex);
+			/* Processing the message */
+			cnxk_representee_msg_process(eswitch_dev, msg->notify_msg);
+			/* Locking as cond wait will unlock before wait */
+			pthread_mutex_lock(&eswitch_dev->repte_msg_proc.mutex);
+			next_msg = TAILQ_NEXT(msg, next);
+			TAILQ_REMOVE(&repte_msg_proc->msg_list, msg, next);
+			rte_free(msg->notify_msg);
+			rte_free(msg);
+		}
+	}
+
+	pthread_mutex_unlock(&eswitch_dev->repte_msg_proc.mutex);
+
+	return 0;
+}
+
+static int
+cnxk_representee_notification(void *roc_nix, struct roc_eswitch_repte_notify_msg *notify_msg)
+{
+	struct cnxk_esw_repte_msg_proc *repte_msg_proc;
+	struct cnxk_eswitch_dev *eswitch_dev;
+	struct cnxk_esw_repte_msg *msg;
+	int rc = 0;
+
+	RTE_SET_USED(roc_nix);
+	eswitch_dev = cnxk_eswitch_pmd_priv();
+	if (!eswitch_dev) {
+		plt_err("Failed to get PF ethdev handle");
+		rc = -EINVAL;
+		goto done;
+	}
+
+	repte_msg_proc = &eswitch_dev->repte_msg_proc;
+	msg = rte_zmalloc("msg", sizeof(struct cnxk_esw_repte_msg), 0);
+	if (!msg) {
+		plt_err("Failed to allocate memory for repte msg");
+		rc = -ENOMEM;
+		goto done;
+	}
+
+	msg->notify_msg = plt_zmalloc(sizeof(struct roc_eswitch_repte_notify_msg), 0);
+	if (!msg->notify_msg) {
+		plt_err("Failed to allocate memory");
+		rc = -ENOMEM;
+		goto done;
+	}
+
+	rte_memcpy(msg->notify_msg, notify_msg, sizeof(struct roc_eswitch_repte_notify_msg));
+	plt_rep_dbg("Pushing new notification : msg type %d", msg->notify_msg->type);
+	pthread_mutex_lock(&eswitch_dev->repte_msg_proc.mutex);
+	TAILQ_INSERT_TAIL(&repte_msg_proc->msg_list, msg, next);
+	/* Signal vf message handler thread */
+	pthread_cond_signal(&eswitch_dev->repte_msg_proc.repte_msg_cond);
+	pthread_mutex_unlock(&eswitch_dev->repte_msg_proc.mutex);
+
+done:
+	return rc;
+}
+
 static int
 cnxk_rep_parent_setup(struct cnxk_eswitch_dev *eswitch_dev)
 {
@@ -263,6 +559,7 @@ create_representor_ethdev(struct rte_pci_device *pci_dev, struct cnxk_eswitch_de
 int
 cnxk_rep_dev_probe(struct rte_pci_device *pci_dev, struct cnxk_eswitch_dev *eswitch_dev)
 {
+	char name[REPTE_MSG_PROC_THRD_NAME_MAX_LEN];
 	struct cnxk_eswitch_devargs *esw_da;
 	uint16_t num_rep;
 	int i, j, rc;
@@ -302,7 +599,36 @@ cnxk_rep_dev_probe(struct rte_pci_device *pci_dev, struct cnxk_eswitch_dev *eswi
 		}
 	}
 
+	if (!eswitch_dev->repte_msg_proc.start_thread) {
+		/* Register callback for representee notification */
+		if (roc_eswitch_nix_process_repte_notify_cb_register(&eswitch_dev->nix,
+								     cnxk_representee_notification)) {
+			plt_err("Failed to register callback for representee notification");
+			rc = -EINVAL;
+			goto fail;
+		}
+
+		/* Create a thread for handling msgs from VFs */
+		TAILQ_INIT(&eswitch_dev->repte_msg_proc.msg_list);
+		pthread_cond_init(&eswitch_dev->repte_msg_proc.repte_msg_cond, NULL);
+		pthread_mutex_init(&eswitch_dev->repte_msg_proc.mutex, NULL);
+
+		rte_strscpy(name, "repte_msg_proc_thrd", REPTE_MSG_PROC_THRD_NAME_MAX_LEN);
+		eswitch_dev->repte_msg_proc.start_thread = true;
+		rc =
+		rte_thread_create_internal_control(&eswitch_dev->repte_msg_proc.repte_msg_thread,
+						   name, cnxk_representee_msg_thread_main,
+						   eswitch_dev);
+		if (rc != 0) {
+			plt_err("Failed to create thread for VF mbox handling\n");
+			goto thread_fail;
+		}
+	}
+
 	return 0;
+thread_fail:
+	pthread_mutex_destroy(&eswitch_dev->repte_msg_proc.mutex);
+	pthread_cond_destroy(&eswitch_dev->repte_msg_proc.repte_msg_cond);
 fail:
 	return rc;
 }
diff --git a/drivers/net/cnxk/cnxk_rep.h b/drivers/net/cnxk/cnxk_rep.h
index da298823a7..5a85d4376e 100644
--- a/drivers/net/cnxk/cnxk_rep.h
+++ b/drivers/net/cnxk/cnxk_rep.h
@@ -10,6 +10,40 @@
 /* Common ethdev ops */
 extern struct eth_dev_ops cnxk_rep_dev_ops;
 
+struct cnxk_rep_queue_stats {
+	uint64_t pkts;
+	uint64_t bytes;
+};
+
+struct cnxk_rep_rxq {
+	/* Parent rep device */
+	struct cnxk_rep_dev *rep_dev;
+	/* Queue ID */
+	uint16_t qid;
+	/* No of desc */
+	uint16_t nb_desc;
+	/* mempool handle */
+	struct rte_mempool *mpool;
+	/* RX config parameters */
+	const struct rte_eth_rxconf *rx_conf;
+	/* Per queue TX statistics */
+	struct cnxk_rep_queue_stats stats;
+};
+
+struct cnxk_rep_txq {
+	/* Parent rep device */
+	struct cnxk_rep_dev *rep_dev;
+	/* Queue ID */
+	uint16_t qid;
+	/* No of desc */
+	uint16_t nb_desc;
+	/* TX config parameters */
+	const struct rte_eth_txconf *tx_conf;
+	/* Per queue TX statistics */
+	struct cnxk_rep_queue_stats stats;
+};
+
+/* Representor port configurations */
 struct cnxk_rep_dev {
 	uint16_t port_id;
 	uint16_t rep_id;
@@ -18,7 +52,10 @@ struct cnxk_rep_dev {
 	uint16_t hw_func;
 	bool is_vf_active;
 	bool native_repte;
+	struct cnxk_rep_rxq *rxq;
+	struct cnxk_rep_txq *txq;
 	uint8_t mac_addr[RTE_ETHER_ADDR_LEN];
+	uint16_t repte_mtu;
 };
 
 static inline struct cnxk_rep_dev *
-- 
2.18.0