From mboxrd@z Thu Jan 1 00:00:00 1970
From: Harman Kalra <hkalra@marvell.com>
To: dev@dpdk.org, Harman Kalra, Ray Kinsella
Date: Fri, 3 Sep 2021 18:10:57 +0530
Message-ID: <20210903124102.47425-3-hkalra@marvell.com>
In-Reply-To: <20210903124102.47425-1-hkalra@marvell.com>
References: <20210826145726.102081-1-hkalra@marvell.com>
 <20210903124102.47425-1-hkalra@marvell.com>
MIME-Version: 1.0
Content-Type: text/plain
Subject: [dpdk-dev] [PATCH v1 2/7] eal/interrupts: implement get set APIs
List-Id: DPDK patches and discussions

Implement get and set APIs for the interrupt handle fields. Any change
to an interrupt handle field should be made through these APIs.

Signed-off-by: Harman Kalra
Acked-by: Ray Kinsella
---
 lib/eal/common/eal_common_interrupts.c | 506 +++++++++++++++++++++++++
 lib/eal/common/meson.build             |   2 +
 lib/eal/include/rte_eal_interrupts.h   |   6 +-
 lib/eal/version.map                    |  30 ++
 4 files changed, 543 insertions(+), 1 deletion(-)
 create mode 100644 lib/eal/common/eal_common_interrupts.c

diff --git a/lib/eal/common/eal_common_interrupts.c b/lib/eal/common/eal_common_interrupts.c
new file mode 100644
index 0000000000..2e4fed96f0
--- /dev/null
+++ b/lib/eal/common/eal_common_interrupts.c
@@ -0,0 +1,506 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2021 Marvell.
+ */
+
+#include <stdlib.h>
+#include <string.h>
+
+#include <rte_errno.h>
+#include <rte_log.h>
+#include <rte_malloc.h>
+
+#include <rte_interrupts.h>
+
+
+struct rte_intr_handle *rte_intr_handle_instance_alloc(int size,
+						       bool from_hugepage)
+{
+	struct rte_intr_handle *intr_handle;
+	int i;
+
+	if (from_hugepage)
+		intr_handle = rte_zmalloc(NULL,
+					  size * sizeof(struct rte_intr_handle),
+					  0);
+	else
+		intr_handle = calloc(1, size * sizeof(struct rte_intr_handle));
+	if (!intr_handle) {
+		RTE_LOG(ERR, EAL, "Failed to allocate intr_handle\n");
+		rte_errno = ENOMEM;
+		return NULL;
+	}
+
+	for (i = 0; i < size; i++) {
+		intr_handle[i].nb_intr = RTE_MAX_RXTX_INTR_VEC_ID;
+		intr_handle[i].alloc_from_hugepage = from_hugepage;
+	}
+
+	return intr_handle;
+}
+
+struct rte_intr_handle *rte_intr_handle_instance_index_get(
+		struct rte_intr_handle *intr_handle, int index)
+{
+	if (intr_handle == NULL) {
+		RTE_LOG(ERR, EAL, "Interrupt instance unallocated\n");
+		rte_errno = ENOMEM;
+		return NULL;
+	}
+
+	return &intr_handle[index];
+}
+
+int rte_intr_handle_instance_index_set(struct rte_intr_handle *intr_handle,
+				       const struct rte_intr_handle *src,
+				       int index)
+{
+	if (intr_handle == NULL) {
+		RTE_LOG(ERR, EAL, "Interrupt instance unallocated\n");
+		rte_errno = ENOTSUP;
+		goto fail;
+	}
+
+	if (src == NULL) {
+		RTE_LOG(ERR, EAL, "Source interrupt instance unallocated\n");
+		rte_errno = EINVAL;
+		goto fail;
+	}
+
+	if (index < 0) {
+		RTE_LOG(ERR, EAL, "Index cannot be negative\n");
+		rte_errno = EINVAL;
+		goto fail;
+	}
+
+	intr_handle[index].fd = src->fd;
+	intr_handle[index].vfio_dev_fd = src->vfio_dev_fd;
+	intr_handle[index].type = src->type;
+	intr_handle[index].max_intr = src->max_intr;
+	intr_handle[index].nb_efd = src->nb_efd;
+	intr_handle[index].efd_counter_size = src->efd_counter_size;
+
+	memcpy(intr_handle[index].efds, src->efds,
+	       src->nb_intr * sizeof(src->efds[0]));
+	memcpy(intr_handle[index].elist, src->elist,
+	       src->nb_intr * sizeof(src->elist[0]));
+
+	return 0;
+fail:
+	return rte_errno;
+}
+
+void rte_intr_handle_instance_free(struct rte_intr_handle *intr_handle)
+{
+	if (intr_handle == NULL) {
+		RTE_LOG(ERR, EAL, "Interrupt instance unallocated\n");
+		rte_errno = ENOTSUP;
+		return;
+	}
+
+	if (intr_handle->alloc_from_hugepage)
+		rte_free(intr_handle);
+	else
+		free(intr_handle);
+}
+
+int rte_intr_handle_fd_set(struct rte_intr_handle *intr_handle, int fd)
+{
+	if (intr_handle == NULL) {
+		RTE_LOG(ERR, EAL, "Interrupt instance unallocated\n");
+		rte_errno = ENOTSUP;
+		goto fail;
+	}
+
+	intr_handle->fd = fd;
+
+	return 0;
+fail:
+	return rte_errno;
+}
+
+int rte_intr_handle_fd_get(const struct rte_intr_handle *intr_handle)
+{
+	if (intr_handle == NULL) {
+		RTE_LOG(ERR, EAL, "Interrupt instance unallocated\n");
+		rte_errno = ENOTSUP;
+		goto fail;
+	}
+
+	return intr_handle->fd;
+fail:
+	return rte_errno;
+}
+
+int rte_intr_handle_type_set(struct rte_intr_handle *intr_handle,
+			     enum rte_intr_handle_type type)
+{
+	if (intr_handle == NULL) {
+		RTE_LOG(ERR, EAL, "Interrupt instance unallocated\n");
+		rte_errno = ENOTSUP;
+		goto fail;
+	}
+
+	intr_handle->type = type;
+
+	return 0;
+fail:
+	return rte_errno;
+}
+
+enum rte_intr_handle_type rte_intr_handle_type_get(
+		const struct rte_intr_handle *intr_handle)
+{
+	if (intr_handle == NULL) {
+		RTE_LOG(ERR, EAL, "Interrupt instance unallocated\n");
+		rte_errno = ENOTSUP;
+		return RTE_INTR_HANDLE_UNKNOWN;
+	}
+
+	return intr_handle->type;
+}
+
+int rte_intr_handle_dev_fd_set(struct rte_intr_handle *intr_handle, int fd)
+{
+	if (intr_handle == NULL) {
+		RTE_LOG(ERR, EAL, "Interrupt instance unallocated\n");
+		rte_errno = ENOTSUP;
+		goto fail;
+	}
+
+	intr_handle->vfio_dev_fd = fd;
+
+	return 0;
+fail:
+	return rte_errno;
+}
+
+int rte_intr_handle_dev_fd_get(const struct rte_intr_handle *intr_handle)
+{
+	if (intr_handle == NULL) {
+		RTE_LOG(ERR, EAL, "Interrupt instance unallocated\n");
+		rte_errno = ENOTSUP;
+		goto fail;
+	}
+
+	return intr_handle->vfio_dev_fd;
+fail:
+	return rte_errno;
+}
+
+int rte_intr_handle_max_intr_set(struct rte_intr_handle *intr_handle,
+				 int max_intr)
+{
+	if (intr_handle == NULL) {
+		RTE_LOG(ERR, EAL, "Interrupt instance unallocated\n");
+		rte_errno = ENOTSUP;
+		goto fail;
+	}
+
+	if (max_intr > intr_handle->nb_intr) {
+		RTE_LOG(ERR, EAL, "Max_intr=%d greater than RTE_MAX_RXTX_INTR_VEC_ID=%d\n",
+			max_intr, intr_handle->nb_intr);
+		rte_errno = ERANGE;
+		goto fail;
+	}
+
+	intr_handle->max_intr = max_intr;
+
+	return 0;
+fail:
+	return rte_errno;
+}
+
+int rte_intr_handle_max_intr_get(const struct rte_intr_handle *intr_handle)
+{
+	if (intr_handle == NULL) {
+		RTE_LOG(ERR, EAL, "Interrupt instance unallocated\n");
+		rte_errno = ENOTSUP;
+		goto fail;
+	}
+
+	return intr_handle->max_intr;
+fail:
+	return rte_errno;
+}
+
+int rte_intr_handle_nb_efd_set(struct rte_intr_handle *intr_handle,
+			       int nb_efd)
+{
+	if (intr_handle == NULL) {
+		RTE_LOG(ERR, EAL, "Interrupt instance unallocated\n");
+		rte_errno = ENOTSUP;
+		goto fail;
+	}
+
+	intr_handle->nb_efd = nb_efd;
+
+	return 0;
+fail:
+	return rte_errno;
+}
+
+int rte_intr_handle_nb_efd_get(const struct rte_intr_handle *intr_handle)
+{
+	if (intr_handle == NULL) {
+		RTE_LOG(ERR, EAL, "Interrupt instance unallocated\n");
+		rte_errno = ENOTSUP;
+		goto fail;
+	}
+
+	return intr_handle->nb_efd;
+fail:
+	return rte_errno;
+}
+
+int rte_intr_handle_nb_intr_get(const struct rte_intr_handle *intr_handle)
+{
+	if (intr_handle == NULL) {
+		RTE_LOG(ERR, EAL, "Interrupt instance unallocated\n");
+		rte_errno = ENOTSUP;
+		goto fail;
+	}
+
+	return intr_handle->nb_intr;
+fail:
+	return rte_errno;
+}
+
+int rte_intr_handle_efd_counter_size_set(struct rte_intr_handle *intr_handle,
+					 uint8_t efd_counter_size)
+{
+	if (intr_handle == NULL) {
+		RTE_LOG(ERR, EAL, "Interrupt instance unallocated\n");
+		rte_errno = ENOTSUP;
+		goto fail;
+	}
+
+	intr_handle->efd_counter_size = efd_counter_size;
+
+	return 0;
+fail:
+	return rte_errno;
+}
+
+int rte_intr_handle_efd_counter_size_get(
+		const struct rte_intr_handle *intr_handle)
+{
+	if (intr_handle == NULL) {
+		RTE_LOG(ERR, EAL, "Interrupt instance unallocated\n");
+		rte_errno = ENOTSUP;
+		goto fail;
+	}
+
+	return intr_handle->efd_counter_size;
+fail:
+	return rte_errno;
+}
+
+int *rte_intr_handle_efds_base(struct rte_intr_handle *intr_handle)
+{
+	if (intr_handle == NULL) {
+		RTE_LOG(ERR, EAL, "Interrupt instance unallocated\n");
+		rte_errno = ENOTSUP;
+		goto fail;
+	}
+
+	return intr_handle->efds;
+fail:
+	return NULL;
+}
+
+int rte_intr_handle_efds_index_get(const struct rte_intr_handle *intr_handle,
+				   int index)
+{
+	if (intr_handle == NULL) {
+		RTE_LOG(ERR, EAL, "Interrupt instance unallocated\n");
+		rte_errno = ENOTSUP;
+		goto fail;
+	}
+
+	if (index >= intr_handle->nb_intr) {
+		RTE_LOG(ERR, EAL, "Invalid index %d, max limit %d\n", index,
+			intr_handle->nb_intr);
+		rte_errno = EINVAL;
+		goto fail;
+	}
+
+	return intr_handle->efds[index];
+fail:
+	return rte_errno;
+}
+
+int rte_intr_handle_efds_index_set(struct rte_intr_handle *intr_handle,
+				   int index, int fd)
+{
+	if (intr_handle == NULL) {
+		RTE_LOG(ERR, EAL, "Interrupt instance unallocated\n");
+		rte_errno = ENOTSUP;
+		goto fail;
+	}
+
+	if (index >= intr_handle->nb_intr) {
+		RTE_LOG(ERR, EAL, "Invalid index %d, max limit %d\n", index,
+			intr_handle->nb_intr);
+		rte_errno = ERANGE;
+		goto fail;
+	}
+
+	intr_handle->efds[index] = fd;
+
+	return 0;
+fail:
+	return rte_errno;
+}
+
+struct rte_epoll_event *rte_intr_handle_elist_index_get(
+		struct rte_intr_handle *intr_handle, int index)
+{
+	if (intr_handle == NULL) {
+		RTE_LOG(ERR, EAL, "Interrupt instance unallocated\n");
+		rte_errno = ENOTSUP;
+		goto fail;
+	}
+
+	if (index >= intr_handle->nb_intr) {
+		RTE_LOG(ERR, EAL, "Invalid index %d, max limit %d\n", index,
+			intr_handle->nb_intr);
+		rte_errno = ERANGE;
+		goto fail;
+	}
+
+	return &intr_handle->elist[index];
+fail:
+	return NULL;
+}
+
+int rte_intr_handle_elist_index_set(struct rte_intr_handle *intr_handle,
+				    int index, struct rte_epoll_event elist)
+{
+	if (intr_handle == NULL) {
+		RTE_LOG(ERR, EAL, "Interrupt instance unallocated\n");
+		rte_errno = ENOTSUP;
+		goto fail;
+	}
+
+	if (index >= intr_handle->nb_intr) {
+		RTE_LOG(ERR, EAL, "Invalid index %d, max limit %d\n", index,
+			intr_handle->nb_intr);
+		rte_errno = ERANGE;
+		goto fail;
+	}
+
+	intr_handle->elist[index] = elist;
+
+	return 0;
+fail:
+	return rte_errno;
+}
+
+int *rte_intr_handle_vec_list_base(const struct rte_intr_handle *intr_handle)
+{
+	if (intr_handle == NULL) {
+		RTE_LOG(ERR, EAL, "Interrupt instance unallocated\n");
+		rte_errno = ENOTSUP;
+		return NULL;
+	}
+
+	return intr_handle->intr_vec;
+}
+
+int rte_intr_handle_vec_list_alloc(struct rte_intr_handle *intr_handle,
+				   const char *name, int size)
+{
+	if (intr_handle == NULL) {
+		RTE_LOG(ERR, EAL, "Interrupt instance unallocated\n");
+		rte_errno = ENOTSUP;
+		goto fail;
+	}
+
+	/* Vector list already allocated */
+	if (intr_handle->intr_vec)
+		return 0;
+
+	if (size > intr_handle->nb_intr) {
+		RTE_LOG(ERR, EAL, "Invalid size %d, max limit %d\n", size,
+			intr_handle->nb_intr);
+		rte_errno = ERANGE;
+		goto fail;
+	}
+
+	intr_handle->intr_vec = rte_zmalloc(name, size * sizeof(int), 0);
+	if (!intr_handle->intr_vec) {
+		RTE_LOG(ERR, EAL, "Failed to allocate %d intr_vec\n", size);
+		rte_errno = ENOMEM;
+		goto fail;
+	}
+
+	intr_handle->vec_list_size = size;
+
+	return 0;
+fail:
+	return rte_errno;
+}
+
+int rte_intr_handle_vec_list_index_get(
+		const struct rte_intr_handle *intr_handle, int index)
+{
+	if (intr_handle == NULL) {
+		RTE_LOG(ERR, EAL, "Interrupt instance unallocated\n");
+		rte_errno = ENOTSUP;
+		goto fail;
+	}
+
+	if (!intr_handle->intr_vec) {
+		RTE_LOG(ERR, EAL, "Intr vector list not allocated\n");
+		rte_errno = ENOTSUP;
+		goto fail;
+	}
+
+	if (index >= intr_handle->vec_list_size) {
+		RTE_LOG(ERR, EAL, "Index %d out of range, vec list size %d\n",
+			index, intr_handle->vec_list_size);
+		rte_errno = ERANGE;
+		goto fail;
+	}
+
+	return intr_handle->intr_vec[index];
+fail:
+	return rte_errno;
+}
+
+int rte_intr_handle_vec_list_index_set(struct rte_intr_handle *intr_handle,
+				       int index, int vec)
+{
+	if (intr_handle == NULL) {
+		RTE_LOG(ERR, EAL, "Interrupt instance unallocated\n");
+		rte_errno = ENOTSUP;
+		goto fail;
+	}
+
+	if (!intr_handle->intr_vec) {
+		RTE_LOG(ERR, EAL, "Intr vector list not allocated\n");
+		rte_errno = ENOTSUP;
+		goto fail;
+	}
+
+	if (index >= intr_handle->vec_list_size) {
+		RTE_LOG(ERR, EAL, "Index %d out of range, vec list size %d\n",
+			index, intr_handle->vec_list_size);
+		rte_errno = ERANGE;
+		goto fail;
+	}
+
+	intr_handle->intr_vec[index] = vec;
+
+	return 0;
+fail:
+	return rte_errno;
+}
+
+void rte_intr_handle_vec_list_free(struct rte_intr_handle *intr_handle)
+{
+	if (intr_handle == NULL) {
+		RTE_LOG(ERR, EAL, "Interrupt instance unallocated\n");
+		rte_errno = ENOTSUP;
+		return;
+	}
+
+	rte_free(intr_handle->intr_vec);
+	intr_handle->intr_vec = NULL;
+}
diff --git a/lib/eal/common/meson.build b/lib/eal/common/meson.build
index edfca77779..47f2977539 100644
--- a/lib/eal/common/meson.build
+++ b/lib/eal/common/meson.build
@@ -17,6 +17,7 @@ if is_windows
         'eal_common_errno.c',
         'eal_common_fbarray.c',
         'eal_common_hexdump.c',
+        'eal_common_interrupts.c',
         'eal_common_launch.c',
         'eal_common_lcore.c',
         'eal_common_log.c',
@@ -53,6 +54,7 @@ sources += files(
         'eal_common_fbarray.c',
         'eal_common_hexdump.c',
         'eal_common_hypervisor.c',
+        'eal_common_interrupts.c',
         'eal_common_launch.c',
         'eal_common_lcore.c',
         'eal_common_log.c',
diff --git a/lib/eal/include/rte_eal_interrupts.h b/lib/eal/include/rte_eal_interrupts.h
index 68ca3a042d..216aece61b 100644
--- a/lib/eal/include/rte_eal_interrupts.h
+++ b/lib/eal/include/rte_eal_interrupts.h
@@ -55,13 +55,17 @@ struct rte_intr_handle {
 		};
 		void *handle; /**< device driver handle (Windows) */
 	};
+	bool alloc_from_hugepage;
 	enum rte_intr_handle_type type;	/**< handle type */
 	uint32_t max_intr;	/**< max interrupt requested */
 	uint32_t nb_efd;	/**< number of available efd(event fd) */
 	uint8_t efd_counter_size;	/**< size of efd counter, used for vdev */
+	uint16_t nb_intr;
+	/**< Max vector count, default RTE_MAX_RXTX_INTR_VEC_ID */
 	int efds[RTE_MAX_RXTX_INTR_VEC_ID];	/**< intr vectors/efds mapping */
 	struct rte_epoll_event elist[RTE_MAX_RXTX_INTR_VEC_ID];
-	/**< intr vector epoll event */
+	/**< intr vector epoll event */
+	uint16_t vec_list_size;
 	int *intr_vec;	/**< intr vector number array */
 };
diff --git a/lib/eal/version.map b/lib/eal/version.map
index beeb986adc..56108d0998 100644
--- a/lib/eal/version.map
+++ b/lib/eal/version.map
@@ -426,6 +426,36 @@ EXPERIMENTAL {
 	# added in 21.08
 	rte_power_monitor_multi; # WINDOWS_NO_EXPORT
+
+	# added in 21.11
+	rte_intr_handle_fd_set;
+	rte_intr_handle_fd_get;
+	rte_intr_handle_dev_fd_set;
+	rte_intr_handle_dev_fd_get;
+	rte_intr_handle_type_set;
+	rte_intr_handle_type_get;
+	rte_intr_handle_instance_alloc;
+	rte_intr_handle_instance_index_get;
+	rte_intr_handle_instance_free;
+	rte_intr_handle_instance_index_set;
+	rte_intr_handle_event_list_update;
+	rte_intr_handle_max_intr_set;
+	rte_intr_handle_max_intr_get;
+	rte_intr_handle_nb_efd_set;
+	rte_intr_handle_nb_efd_get;
+	rte_intr_handle_nb_intr_get;
+	rte_intr_handle_efds_index_set;
+	rte_intr_handle_efds_index_get;
+	rte_intr_handle_efds_base;
+	rte_intr_handle_elist_index_set;
+	rte_intr_handle_elist_index_get;
+	rte_intr_handle_efd_counter_size_set;
+	rte_intr_handle_efd_counter_size_get;
+	rte_intr_handle_vec_list_alloc;
+	rte_intr_handle_vec_list_index_set;
+	rte_intr_handle_vec_list_index_get;
+	rte_intr_handle_vec_list_free;
+	rte_intr_handle_vec_list_base;
 };

 INTERNAL {
-- 
2.18.0