From mboxrd@z Thu Jan 1 00:00:00 1970
From: Akhil Goyal <gakhil@marvell.com>
To: <dev@dpdk.org>
CC: , , , , , "Akhil Goyal" <gakhil@marvell.com>
Subject: [PATCH 8/9] raw/cnxk_rvu_lf: process mailbox message
Date: Sun, 8 Sep 2024 01:03:10 +0530
Message-ID: <20240907193311.1342310-9-gakhil@marvell.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20240907193311.1342310-1-gakhil@marvell.com>
References: <20240907193311.1342310-1-gakhil@marvell.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain
List-Id: DPDK patches and discussions

Added PMD API rte_pmd_rvu_lf_msg_process() to process mailbox messages
between rvu_lf devices.
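
For reference, a minimal usage sketch from a PF application is shown below.
The message id and the request/response structures are hypothetical,
application-defined examples (the mailbox payload is opaque to the PMD), and
the message id is assumed to lie within the range configured with
rte_pmd_rvu_lf_msg_id_range_set():

    /* Hypothetical application-defined message, not part of the PMD API. */
    #include <rte_pmd_rvu_lf.h>

    #define APP_MSG_ID 0x201                     /* assumed in-range msg id */

    struct app_req { uint32_t cmd; };            /* application request  */
    struct app_rsp { uint32_t status; };         /* application response */

    static int
    app_pf_send_msg(uint8_t dev_id, uint16_t vf)
    {
            struct app_req req = { .cmd = 1 };
            struct app_rsp rsp = { 0 };

            /* Send the request to 'vf' and wait for its response. */
            return rte_pmd_rvu_lf_msg_process(dev_id, vf, APP_MSG_ID,
                                              &req, sizeof(req),
                                              &rsp, sizeof(rsp));
    }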
Signed-off-by: Akhil Goyal <gakhil@marvell.com>
---
 doc/guides/rawdevs/cnxk_rvu_lf.rst       |   8 ++
 drivers/common/cnxk/roc_dev.c            | 117 +++++++++++++++++++++--
 drivers/common/cnxk/roc_mbox.h           |   1 +
 drivers/common/cnxk/roc_rvu_lf.c         |  58 +++++++++++
 drivers/common/cnxk/roc_rvu_lf.h         |   5 +
 drivers/common/cnxk/roc_rvu_lf_priv.h    |   5 +
 drivers/common/cnxk/version.map          |   2 +
 drivers/raw/cnxk_rvu_lf/cnxk_rvu_lf.c    |  15 +++
 drivers/raw/cnxk_rvu_lf/rte_pmd_rvu_lf.h |  29 ++++++
 9 files changed, 234 insertions(+), 6 deletions(-)

diff --git a/doc/guides/rawdevs/cnxk_rvu_lf.rst b/doc/guides/rawdevs/cnxk_rvu_lf.rst
index 7f193e232c..7b321abd38 100644
--- a/doc/guides/rawdevs/cnxk_rvu_lf.rst
+++ b/doc/guides/rawdevs/cnxk_rvu_lf.rst
@@ -72,3 +72,11 @@ can register callbacks using ``rte_pmd_rvu_lf_msg_handler_register()``
 and fill required responses as per the request and message id received.
 Application can also unregister already registered message callbacks using
 ``rte_pmd_rvu_lf_msg_handler_unregister()``.
+
+A PMD API ``rte_pmd_rvu_lf_msg_process()`` is provided to send a request and
+receive the corresponding response from the other side (PF/VF).
+It accepts an opaque pointer to a request and its size, both defined by the application,
+and provides an opaque pointer for the response and its length.
+PF and VF applications can define their own requests and responses based on the message id
+of the mailbox.
+For sample usage of the APIs, please refer to ``rvu_lf_rawdev_selftest()``.
diff --git a/drivers/common/cnxk/roc_dev.c b/drivers/common/cnxk/roc_dev.c
index daf7684d8e..806d34143b 100644
--- a/drivers/common/cnxk/roc_dev.c
+++ b/drivers/common/cnxk/roc_dev.c
@@ -215,6 +215,51 @@ af_pf_wait_msg(struct dev *dev, uint16_t vf, int num_msg)
 	return req_hdr->num_msgs;
 }
 
+static int
+process_rvu_lf_msgs(struct dev *dev, uint16_t vf, struct mbox_msghdr *msg, size_t size)
+{
+	uint16_t max_bits = sizeof(dev->active_vfs[0]) * 8;
+	uint8_t req[MBOX_MSG_REQ_SIZE_MAX];
+	struct msg_rsp *rsp;
+	uint16_t rsp_len;
+	void *resp;
+	int rc = 0;
+
+	/* Handle BPHY mailbox message in PF */
+	dev->active_vfs[vf / max_bits] |= BIT_ULL(vf % max_bits);
+
+	if ((size - sizeof(struct mbox_msghdr)) > MBOX_MSG_REQ_SIZE_MAX) {
+		plt_err("MBOX request size greater than %d", MBOX_MSG_REQ_SIZE_MAX);
+		return -1;
+	}
+	mbox_memcpy(req, (uint8_t *)msg + sizeof(struct mbox_msghdr),
+		    size - sizeof(struct mbox_msghdr));
+
+	rc = dev->ops->msg_process_cb(dev_get_vf(msg->pcifunc), msg->id, req,
+				      size - sizeof(struct mbox_msghdr), &resp, &rsp_len);
+	if (rc < 0) {
+		plt_err("Failed to process VF%d message", vf);
+		return -1;
+	}
+
+	rsp = (struct msg_rsp *)mbox_alloc_msg(&dev->mbox_vfpf, vf,
+					       rsp_len + sizeof(struct mbox_msghdr));
+	if (!rsp) {
+		plt_err("Failed to alloc VF%d response message", vf);
+		return -1;
+	}
+
+	mbox_rsp_init(msg->id, rsp);
+
+	mbox_memcpy((uint8_t *)rsp + sizeof(struct mbox_msghdr), resp, rsp_len);
+	free(resp);
+	/* PF/VF function ID */
+	rsp->hdr.pcifunc = msg->pcifunc;
+	rsp->hdr.rc = 0;
+
+	return 0;
+}
+
 /* PF receives mbox DOWN messages from VF and forwards to AF */
 static int
 vf_pf_process_msgs(struct dev *dev, uint16_t vf)
@@ -261,6 +306,9 @@ vf_pf_process_msgs(struct dev *dev, uint16_t vf)
 			/* PF/VF function ID */
 			rsp->hdr.pcifunc = msg->pcifunc;
 			rsp->hdr.rc = 0;
+		} else if (roc_rvu_lf_msg_id_range_check(dev->roc_rvu_lf, msg->id)) {
+			if (process_rvu_lf_msgs(dev, vf, msg, size) < 0)
+				continue;
 		} else {
 			struct mbox_msghdr *af_req;
 			/* Reserve AF/PF mbox message */
@@ -339,8 +387,13 @@ vf_pf_process_up_msgs(struct dev *dev, uint16_t vf)
 				     dev_get_vf(msg->pcifunc));
 			break;
 		default:
-			plt_err("Not handled UP msg 0x%x (%s) func:0x%x",
-				msg->id, mbox_id2name(msg->id), msg->pcifunc);
+			if (roc_rvu_lf_msg_id_range_check(dev->roc_rvu_lf, msg->id))
+				plt_base_dbg("PF: Msg 0x%x fn:0x%x (pf:%d,vf:%d)",
+					     msg->id, msg->pcifunc, dev_get_pf(msg->pcifunc),
+					     dev_get_vf(msg->pcifunc));
+			else
+				plt_err("Not handled UP msg 0x%x (%s) func:0x%x",
+					msg->id, mbox_id2name(msg->id), msg->pcifunc);
 		}
 		offset = mbox->rx_start + msg->next_msgoff;
 	}
@@ -789,6 +842,49 @@ mbox_process_msgs_up(struct dev *dev, struct mbox_msghdr *req)
 	return -ENODEV;
 }
 
+static int
+process_rvu_lf_msgs_up(struct dev *dev, struct mbox_msghdr *msg, size_t size)
+{
+	uint8_t req[MBOX_MSG_REQ_SIZE_MAX];
+	struct msg_rsp *rsp;
+	uint16_t rsp_len;
+	void *resp;
+	int rc = 0;
+
+	/* Check if valid, if not reply with an invalid msg */
+	if (msg->sig != MBOX_REQ_SIG)
+		return -EIO;
+
+	if ((size - sizeof(struct mbox_msghdr)) > MBOX_MSG_REQ_SIZE_MAX) {
+		plt_err("MBOX request size greater than %d", MBOX_MSG_REQ_SIZE_MAX);
+		return -ENOMEM;
+	}
+	mbox_memcpy(req, (uint8_t *)msg + sizeof(struct mbox_msghdr),
+		    size - sizeof(struct mbox_msghdr));
+	rc = dev->ops->msg_process_cb(dev_get_vf(msg->pcifunc), msg->id, req,
+				      size - sizeof(struct mbox_msghdr), &resp, &rsp_len);
+	if (rc < 0) {
+		plt_err("Failed to process VF%d message", dev->vf);
+		return rc;
+	}
+
+	rsp = (struct msg_rsp *)mbox_alloc_msg(&dev->mbox_up, 0,
+					       rsp_len + sizeof(struct mbox_msghdr));
+	if (!rsp) {
+		plt_err("Failed to alloc VF%d response message", dev->vf);
+		return -ENOMEM;
+	}
+
+	mbox_rsp_init(msg->id, rsp);
+
+	mbox_memcpy((uint8_t *)rsp + sizeof(struct mbox_msghdr), resp, rsp_len);
+	/* PF/VF function ID */
+	rsp->hdr.pcifunc = msg->pcifunc;
+	rsp->hdr.rc = 0;
+
+	return rc;
+}
+
 /* Received up messages from AF (PF context) / PF (in context) */
 static void
 process_msgs_up(struct dev *dev, struct mbox *mbox)
@@ -797,6 +893,7 @@ process_msgs_up(struct dev *dev, struct mbox *mbox)
 	struct mbox_hdr *req_hdr;
 	struct mbox_msghdr *msg;
 	int i, err, offset;
+	size_t size;
 
 	req_hdr = (struct mbox_hdr *)((uintptr_t)mdev->mbase + mbox->rx_start);
 	if (req_hdr->num_msgs == 0)
@@ -809,10 +906,17 @@ process_msgs_up(struct dev *dev, struct mbox *mbox)
 		plt_base_dbg("Message 0x%x (%s) pf:%d/vf:%d", msg->id,
 			     mbox_id2name(msg->id), dev_get_pf(msg->pcifunc),
 			     dev_get_vf(msg->pcifunc));
-		err = mbox_process_msgs_up(dev, msg);
-		if (err)
-			plt_err("Error %d handling 0x%x (%s)", err, msg->id,
-				mbox_id2name(msg->id));
+		if (roc_rvu_lf_msg_id_range_check(dev->roc_rvu_lf, msg->id)) {
+			size = mbox->rx_start + msg->next_msgoff - offset;
+			err = process_rvu_lf_msgs_up(dev, msg, size);
+			if (err)
+				plt_err("Error %d handling 0x%x RVU_LF up msg", err, msg->id);
+		} else {
+			err = mbox_process_msgs_up(dev, msg);
+			if (err)
+				plt_err("Error %d handling 0x%x (%s)", err, msg->id,
+					mbox_id2name(msg->id));
+		}
 		offset = mbox->rx_start + msg->next_msgoff;
 	}
 	/* Send mbox responses */
@@ -1272,6 +1376,7 @@ dev_vf_hwcap_update(struct plt_pci_device *pci_dev, struct dev *dev)
 	case PCI_DEVID_CNXK_RVU_VF:
 	case PCI_DEVID_CNXK_RVU_SDP_VF:
 	case PCI_DEVID_CNXK_RVU_NIX_INL_VF:
+	case PCI_DEVID_CNXK_RVU_BPHY_VF:
 	case PCI_DEVID_CNXK_RVU_ESWITCH_VF:
 		dev->hwcap |= DEV_HWCAP_F_VF;
 		break;
diff --git a/drivers/common/cnxk/roc_mbox.h b/drivers/common/cnxk/roc_mbox.h
index 3018a4f73e..ea806ea6e8 100644
--- a/drivers/common/cnxk/roc_mbox.h
+++ b/drivers/common/cnxk/roc_mbox.h
@@ -55,6 +55,7 @@ struct mbox_msghdr {
 #define MBOX_MSG_INVALID	0xFFFE
 #define MBOX_MSG_MAX		0xFFFF
 #define MBOX_MSG_GENERIC_MAX_ID	0x1FF
+#define MBOX_MSG_REQ_SIZE_MAX	(16 * 1024)
 
 #define MBOX_MESSAGES \
 	/* Generic mbox IDs (range 0x000 - 0x1FF) */ \
diff --git a/drivers/common/cnxk/roc_rvu_lf.c b/drivers/common/cnxk/roc_rvu_lf.c
index 1026ccc125..471dfa7a46 100644
--- a/drivers/common/cnxk/roc_rvu_lf.c
+++ b/drivers/common/cnxk/roc_rvu_lf.c
@@ -62,6 +62,15 @@ roc_rvu_lf_dev_fini(struct roc_rvu_lf *roc_rvu_lf)
 	return dev_fini(&rvu->dev, rvu->pci_dev);
 }
 
+static uint16_t
+roc_rvu_lf_pf_func_get(struct roc_rvu_lf *roc_rvu_lf)
+{
+	struct rvu_lf *rvu = roc_rvu_lf_to_rvu_priv(roc_rvu_lf);
+	struct dev *dev = &rvu->dev;
+
+	return dev->pf_func;
+}
+
 int
 roc_rvu_lf_msg_id_range_set(struct roc_rvu_lf *roc_rvu_lf, uint16_t from, uint16_t to)
 {
@@ -92,6 +101,55 @@ roc_rvu_lf_msg_id_range_check(struct roc_rvu_lf *roc_rvu_lf, uint16_t msg_id)
 	return 0;
 }
 
+int
+roc_rvu_lf_msg_process(struct roc_rvu_lf *roc_rvu_lf, uint16_t vf, uint16_t msg_id,
+		       void *req_data, uint16_t req_len, void *rsp_data, uint16_t rsp_len)
+{
+	struct rvu_lf *rvu = roc_rvu_lf_to_rvu_priv(roc_rvu_lf);
+	struct mbox *mbox;
+	struct rvu_lf_msg *req;
+	struct rvu_lf_msg *rsp;
+	int rc = -ENOSPC;
+	int devid = 0;
+
+	if (rvu->dev.vf == -1 && roc_rvu_lf_msg_id_range_check(roc_rvu_lf, msg_id)) {
+		/* This is PF context */
+		if (vf >= rvu->dev.maxvf)
+			return -EINVAL;
+		devid = vf;
+		mbox = mbox_get(&rvu->dev.mbox_vfpf_up);
+	} else {
+		/* This is VF context */
+		devid = 0; /* VF send all message to PF */
+		mbox = mbox_get(rvu->dev.mbox);
+	}
+	req = (struct rvu_lf_msg *)mbox_alloc_msg_rsp(mbox, devid,
+						      req_len + sizeof(struct rvu_lf_msg),
+						      rsp_len + sizeof(struct rvu_lf_msg));
+	if (!req)
+		goto fail;
+	mbox_memcpy(req->data, req_data, req_len);
+	req->hdr.sig = MBOX_REQ_SIG;
+	req->hdr.id = msg_id;
+	req->hdr.pcifunc = roc_rvu_lf_pf_func_get(roc_rvu_lf);
+
+	if (rvu->dev.vf == -1) {
+		mbox_msg_send_up(mbox, devid);
+		rc = mbox_get_rsp(mbox, devid, (void *)&rsp);
+		if (rc)
+			goto fail;
+	} else {
+		rc = mbox_process_msg(mbox, (void *)&rsp);
+		if (rc)
+			goto fail;
+	}
+	if (rsp_len && rsp_data != NULL)
+		mbox_memcpy(rsp_data, rsp->data, rsp_len);
+fail:
+	mbox_put(mbox);
+	return rc;
+}
+
 int
 roc_rvu_lf_irq_register(struct roc_rvu_lf *roc_rvu_lf, unsigned int irq,
 			roc_rvu_lf_intr_cb_fn cb, void *data)
diff --git a/drivers/common/cnxk/roc_rvu_lf.h b/drivers/common/cnxk/roc_rvu_lf.h
index 7243e170b9..6b4819666a 100644
--- a/drivers/common/cnxk/roc_rvu_lf.h
+++ b/drivers/common/cnxk/roc_rvu_lf.h
@@ -21,6 +21,11 @@ TAILQ_HEAD(roc_rvu_lf_head, roc_rvu_lf);
 int __roc_api roc_rvu_lf_dev_init(struct roc_rvu_lf *roc_rvu_lf);
 int __roc_api roc_rvu_lf_dev_fini(struct roc_rvu_lf *roc_rvu_lf);
 
+int __roc_api roc_rvu_lf_msg_process(struct roc_rvu_lf *roc_rvu_lf,
+				     uint16_t vf, uint16_t msg_id,
+				     void *req_data, uint16_t req_len,
+				     void *rsp_data, uint16_t rsp_len);
+
 int __roc_api roc_rvu_lf_msg_id_range_set(struct roc_rvu_lf *roc_rvu_lf, uint16_t from, uint16_t to);
 bool __roc_api roc_rvu_lf_msg_id_range_check(struct roc_rvu_lf *roc_rvu_lf, uint16_t msg_id);
diff --git a/drivers/common/cnxk/roc_rvu_lf_priv.h b/drivers/common/cnxk/roc_rvu_lf_priv.h
index 8feff82961..57bb713b21 100644
--- a/drivers/common/cnxk/roc_rvu_lf_priv.h
+++ b/drivers/common/cnxk/roc_rvu_lf_priv.h
@@ -17,6 +17,11 @@ struct rvu_lf {
 	uint16_t msg_id_to;
 };
 
+struct rvu_lf_msg {
+	struct mbox_msghdr hdr;
+	uint8_t data[];
+};
+
 static inline struct rvu_lf *
 roc_rvu_lf_to_rvu_priv(struct roc_rvu_lf *roc_rvu_lf)
 {
diff --git a/drivers/common/cnxk/version.map b/drivers/common/cnxk/version.map
index adfe7ee789..a3e2170739 100644
--- a/drivers/common/cnxk/version.map
+++ b/drivers/common/cnxk/version.map
@@ -554,5 +554,7 @@ INTERNAL {
 	roc_rvu_lf_msg_handler_unregister;
 	roc_rvu_lf_msg_id_range_check;
 	roc_rvu_lf_msg_id_range_set;
+	roc_rvu_lf_msg_process;
+
 	local: *;
 };
diff --git a/drivers/raw/cnxk_rvu_lf/cnxk_rvu_lf.c b/drivers/raw/cnxk_rvu_lf/cnxk_rvu_lf.c
index a21bca9ea6..66c8e241db 100644
--- a/drivers/raw/cnxk_rvu_lf/cnxk_rvu_lf.c
+++ b/drivers/raw/cnxk_rvu_lf/cnxk_rvu_lf.c
@@ -29,6 +29,21 @@ rte_pmd_rvu_lf_msg_id_range_set(uint8_t dev_id, uint16_t from, uint16_t to)
 	return roc_rvu_lf_msg_id_range_set(roc_rvu_lf, from, to);
 }
 
+int
+rte_pmd_rvu_lf_msg_process(uint8_t dev_id, uint16_t vf, uint16_t msg_id,
+			   void *req, uint16_t req_len, void *rsp, uint16_t rsp_len)
+{
+	struct rte_rawdev *rawdev = rte_rawdev_pmd_get_dev(dev_id);
+	struct roc_rvu_lf *roc_rvu_lf;
+
+	if (rawdev == NULL)
+		return -EINVAL;
+
+	roc_rvu_lf = (struct roc_rvu_lf *)rawdev->dev_private;
+
+	return roc_rvu_lf_msg_process(roc_rvu_lf, vf, msg_id, req, req_len, rsp, rsp_len);
+}
+
 int
 rte_pmd_rvu_lf_msg_handler_register(uint8_t dev_id, rte_pmd_rvu_lf_msg_handler_cb_fn cb)
 {
diff --git a/drivers/raw/cnxk_rvu_lf/rte_pmd_rvu_lf.h b/drivers/raw/cnxk_rvu_lf/rte_pmd_rvu_lf.h
index 9ee9c57d7d..cb8bced718 100644
--- a/drivers/raw/cnxk_rvu_lf/rte_pmd_rvu_lf.h
+++ b/drivers/raw/cnxk_rvu_lf/rte_pmd_rvu_lf.h
@@ -47,6 +47,35 @@ extern int cnxk_logtype_rvu_lf;
 __rte_experimental
 int rte_pmd_rvu_lf_msg_id_range_set(uint8_t dev_id, uint16_t from, uint16_t to);
 
+/**
+ * Process an RVU mailbox message.
+ *
+ * The message request and response buffers to be sent/received
+ * need to be allocated and freed by the application
+ * before and after processing the message, respectively.
+ *
+ * @param dev_id
+ *   device id of RVU LF device
+ * @param vf
+ *   VF number (0 to N) in case of a PF->VF message. 0 is valid as VF0.
+ *   (For a VF->PF message, this field is ignored.)
+ * @param msg_id
+ *   message id
+ * @param req
+ *   pointer to message request data to be sent
+ * @param req_len
+ *   length of request data
+ * @param rsp
+ *   pointer to message response expected to be received, NULL if no response
+ * @param rsp_len
+ *   length of message response expected, 0 if no response
+ *
+ * @return 0 on success, negative value otherwise
+ */
+__rte_experimental
+int rte_pmd_rvu_lf_msg_process(uint8_t dev_id, uint16_t vf, uint16_t msg_id,
+			       void *req, uint16_t req_len, void *rsp, uint16_t rsp_len);
+
 /**
  * Signature of callback function called when a message process handler is called
  * on RVU LF device.
-- 
2.25.1