From: Amit Prakash Shukla <amitprakashs@marvell.com>
To: Amit Prakash Shukla, Jerin Jacob
Subject: [PATCH v4 04/12] eventdev: add API support for vchan add and delete
Date: Tue, 26 Sep 2023 16:02:25 +0530
Message-ID: <20230926103233.3956431-5-amitprakashs@marvell.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20230926103233.3956431-1-amitprakashs@marvell.com>
References: <20230923133449.3780841-1-amitprakashs@marvell.com>
 <20230926103233.3956431-1-amitprakashs@marvell.com>
List-Id: DPDK patches and discussions <dev.dpdk.org>

Added API support to add and delete vchans from the DMA adapter. A DMA
device ID and vchan are added to the adapter instance by calling
rte_event_dma_adapter_vchan_add() and deleted using
rte_event_dma_adapter_vchan_del().
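For reviewers, a minimal usage sketch of the two new calls follows (not
part of the patch). It assumes an eventdev, DMA device and adapter that
are already configured; the IDs and the event fields used here are
illustrative only:

#include <rte_event_dma_adapter.h>

static int
bind_and_unbind_vchan(uint8_t adapter_id, int16_t dma_dev_id, uint16_t vchan)
{
	/* Completion event template; it is only consulted when the adapter
	 * reports RTE_EVENT_DMA_ADAPTER_CAP_INTERNAL_PORT_VCHAN_EV_BIND,
	 * in which case it must not be NULL.
	 */
	struct rte_event ev = {
		.queue_id = 0,	/* illustrative target event queue */
		.sched_type = RTE_SCHED_TYPE_ATOMIC,
		.priority = RTE_EVENT_DEV_PRIORITY_NORMAL,
	};
	int ret;

	ret = rte_event_dma_adapter_vchan_add(adapter_id, dma_dev_id, vchan, &ev);
	if (ret < 0)
		return ret;

	/* ... submit DMA ops and run the application/service cores ... */

	return rte_event_dma_adapter_vchan_del(adapter_id, dma_dev_id, vchan);
}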
Signed-off-by: Amit Prakash Shukla <amitprakashs@marvell.com>
---
 lib/eventdev/rte_event_dma_adapter.c | 204 +++++++++++++++++++++++++++
 1 file changed, 204 insertions(+)

diff --git a/lib/eventdev/rte_event_dma_adapter.c b/lib/eventdev/rte_event_dma_adapter.c
index c7ffba1b47..dd58188bf3 100644
--- a/lib/eventdev/rte_event_dma_adapter.c
+++ b/lib/eventdev/rte_event_dma_adapter.c
@@ -42,8 +42,31 @@ struct dma_ops_circular_buffer {
 	struct rte_event_dma_adapter_op **op_buffer;
 } __rte_cache_aligned;
 
+/* Vchan information */
+struct dma_vchan_info {
+	/* Set to indicate vchan queue is enabled */
+	bool vq_enabled;
+
+	/* Circular buffer for batching DMA ops to dma_dev */
+	struct dma_ops_circular_buffer dma_buf;
+} __rte_cache_aligned;
+
 /* DMA device information */
 struct dma_device_info {
+	/* Pointer to vchan queue info */
+	struct dma_vchan_info *vchanq;
+
+	/* Pointer to vchan queue info that holds ops passed by
+	 * the application until DMA completion is done.
+	 */
+	struct dma_vchan_info *tqmap;
+
+	/* If num_vchanq > 0, the start callback will
+	 * be invoked if not already invoked.
+	 */
+	uint16_t num_vchanq;
+
 	/* Number of vchans configured for a DMA device. */
 	uint16_t num_dma_dev_vchan;
 } __rte_cache_aligned;
@@ -81,6 +104,9 @@ struct event_dma_adapter {
 
 	/* Set if default_cb is being used */
 	int default_cb_arg;
+
+	/* Number of vchan queues configured */
+	uint16_t nb_vchanq;
 } __rte_cache_aligned;
 
 static struct event_dma_adapter **event_dma_adapter;
@@ -333,3 +359,181 @@ rte_event_dma_adapter_free(uint8_t id)
 
 	return 0;
 }
+
+static void
+edma_update_vchanq_info(struct event_dma_adapter *adapter, struct dma_device_info *dev_info,
+			uint16_t vchan, uint8_t add)
+{
+	struct dma_vchan_info *vchan_info;
+	struct dma_vchan_info *tqmap_info;
+	int enabled;
+	uint16_t i;
+
+	if (dev_info->vchanq == NULL)
+		return;
+
+	if (vchan == RTE_DMA_ALL_VCHAN) {
+		for (i = 0; i < dev_info->num_dma_dev_vchan; i++)
+			edma_update_vchanq_info(adapter, dev_info, i, add);
+	} else {
+		tqmap_info = &dev_info->tqmap[vchan];
+		vchan_info = &dev_info->vchanq[vchan];
+		enabled = vchan_info->vq_enabled;
+		if (add) {
+			adapter->nb_vchanq += !enabled;
+			dev_info->num_vchanq += !enabled;
+		} else {
+			adapter->nb_vchanq -= enabled;
+			dev_info->num_vchanq -= enabled;
+		}
+		vchan_info->vq_enabled = !!add;
+		tqmap_info->vq_enabled = !!add;
+	}
+}
+
+int
+rte_event_dma_adapter_vchan_add(uint8_t id, int16_t dma_dev_id, uint16_t vchan,
+				const struct rte_event *event)
+{
+	struct event_dma_adapter *adapter;
+	struct dma_device_info *dev_info;
+	struct rte_eventdev *dev;
+	uint32_t cap;
+	int ret;
+
+	EVENT_DMA_ADAPTER_ID_VALID_OR_ERR_RET(id, -EINVAL);
+
+	if (!rte_dma_is_valid(dma_dev_id)) {
+		RTE_EDEV_LOG_ERR("Invalid dma_dev_id = %" PRId16, dma_dev_id);
+		return -EINVAL;
+	}
+
+	adapter = edma_id_to_adapter(id);
+	if (adapter == NULL)
+		return -EINVAL;
+
+	dev = &rte_eventdevs[adapter->eventdev_id];
+	ret = rte_event_dma_adapter_caps_get(adapter->eventdev_id, dma_dev_id, &cap);
+	if (ret) {
+		RTE_EDEV_LOG_ERR("Failed to get adapter caps dev %u dma_dev %d", id, dma_dev_id);
+		return ret;
+	}
+
+	if ((cap & RTE_EVENT_DMA_ADAPTER_CAP_INTERNAL_PORT_VCHAN_EV_BIND) && (event == NULL)) {
+		RTE_EDEV_LOG_ERR("Event cannot be NULL for dma_dev_id = %d", dma_dev_id);
+		return -EINVAL;
+	}
+
+	dev_info = &adapter->dma_devs[dma_dev_id];
+	if (vchan != RTE_DMA_ALL_VCHAN && vchan >= dev_info->num_dma_dev_vchan) {
+		RTE_EDEV_LOG_ERR("Invalid vchan %u", vchan);
+		return -EINVAL;
+	}
+
+	/* In case HW cap is RTE_EVENT_DMA_ADAPTER_CAP_INTERNAL_PORT_OP_FWD, no
+	 * service core is needed as HW supports the event forward capability.
+	 */
+	if ((cap & RTE_EVENT_DMA_ADAPTER_CAP_INTERNAL_PORT_OP_FWD) ||
+	    (cap & RTE_EVENT_DMA_ADAPTER_CAP_INTERNAL_PORT_VCHAN_EV_BIND &&
+	     adapter->mode == RTE_EVENT_DMA_ADAPTER_OP_NEW) ||
+	    (cap & RTE_EVENT_DMA_ADAPTER_CAP_INTERNAL_PORT_OP_NEW &&
+	     adapter->mode == RTE_EVENT_DMA_ADAPTER_OP_NEW)) {
+		if (*dev->dev_ops->dma_adapter_vchan_add == NULL)
+			return -ENOTSUP;
+		if (dev_info->vchanq == NULL) {
+			dev_info->vchanq = rte_zmalloc_socket(adapter->mem_name,
+							dev_info->num_dma_dev_vchan *
+							sizeof(struct dma_vchan_info),
+							0, adapter->socket_id);
+			if (dev_info->vchanq == NULL) {
+				RTE_EDEV_LOG_ERR("Failed to allocate vchanq for dma_dev %d",
+						 dma_dev_id);
+				return -ENOMEM;
+			}
+		}
+
+		if (dev_info->tqmap == NULL) {
+			dev_info->tqmap = rte_zmalloc_socket(adapter->mem_name,
+							dev_info->num_dma_dev_vchan *
+							sizeof(struct dma_vchan_info),
+							0, adapter->socket_id);
+			if (dev_info->tqmap == NULL) {
+				RTE_EDEV_LOG_ERR("Failed to allocate tqmap for dma_dev %d",
+						 dma_dev_id);
+				rte_free(dev_info->vchanq);
+				dev_info->vchanq = NULL;
+				return -ENOMEM;
+			}
+		}
+
+		ret = (*dev->dev_ops->dma_adapter_vchan_add)(dev, dma_dev_id, vchan, event);
+		if (ret)
+			return ret;
+
+		edma_update_vchanq_info(adapter, &adapter->dma_devs[dma_dev_id], vchan, 1);
+	}
+
+	return 0;
+}
+
+int
+rte_event_dma_adapter_vchan_del(uint8_t id, int16_t dma_dev_id, uint16_t vchan)
+{
+	struct event_dma_adapter *adapter;
+	struct dma_device_info *dev_info;
+	struct rte_eventdev *dev;
+	uint32_t cap;
+	int ret;
+
+	EVENT_DMA_ADAPTER_ID_VALID_OR_ERR_RET(id, -EINVAL);
+
+	if (!rte_dma_is_valid(dma_dev_id)) {
+		RTE_EDEV_LOG_ERR("Invalid dma_dev_id = %" PRId16, dma_dev_id);
+		return -EINVAL;
+	}
+
+	adapter = edma_id_to_adapter(id);
+	if (adapter == NULL)
+		return -EINVAL;
+
+	dev = &rte_eventdevs[adapter->eventdev_id];
+	ret = rte_event_dma_adapter_caps_get(adapter->eventdev_id, dma_dev_id, &cap);
+	if (ret)
+		return ret;
+
+	dev_info = &adapter->dma_devs[dma_dev_id];
+
+	if (vchan != RTE_DMA_ALL_VCHAN && vchan >= dev_info->num_dma_dev_vchan) {
+		RTE_EDEV_LOG_ERR("Invalid vchan %" PRIu16, vchan);
+		return -EINVAL;
+	}
+
+	if ((cap & RTE_EVENT_DMA_ADAPTER_CAP_INTERNAL_PORT_OP_FWD) ||
+	    (cap & RTE_EVENT_DMA_ADAPTER_CAP_INTERNAL_PORT_OP_NEW &&
+	     adapter->mode == RTE_EVENT_DMA_ADAPTER_OP_NEW)) {
+		if (*dev->dev_ops->dma_adapter_vchan_del == NULL)
+			return -ENOTSUP;
+		ret = (*dev->dev_ops->dma_adapter_vchan_del)(dev, dma_dev_id, vchan);
+		if (ret == 0) {
+			edma_update_vchanq_info(adapter, dev_info, vchan, 0);
+			if (dev_info->num_vchanq == 0) {
+				rte_free(dev_info->vchanq);
+				rte_free(dev_info->tqmap);
+				dev_info->vchanq = NULL;
+				dev_info->tqmap = NULL;
+			}
+		}
+	} else {
+		if (adapter->nb_vchanq == 0)
+			return 0;
+
+		rte_spinlock_lock(&adapter->lock);
+		edma_update_vchanq_info(adapter, dev_info, vchan, 0);
+
+		if (dev_info->num_vchanq == 0) {
+			rte_free(dev_info->vchanq);
+			rte_free(dev_info->tqmap);
+			dev_info->vchanq = NULL;
+			dev_info->tqmap = NULL;
+		}
+
+		rte_spinlock_unlock(&adapter->lock);
+	}
+
+	return ret;
+}
-- 
2.25.1