From: David Marchand <dmarchan@redhat.com>
Date: Tue, 28 Sep 2021 17:46:39 +0200
To: Harman Kalra
Cc: dev <dev@dpdk.org>, Ray Kinsella
References: <20210826145726.102081-1-hkalra@marvell.com> <20210903124102.47425-1-hkalra@marvell.com> <20210903124102.47425-3-hkalra@marvell.com>
In-Reply-To: <20210903124102.47425-3-hkalra@marvell.com>
Subject: Re: [dpdk-dev] [PATCH v1 2/7] eal/interrupts: implement get set APIs

On Fri, Sep 3, 2021 at 2:42 PM Harman Kalra wrote:
>
> Implementing get set APIs for interrupt handle fields.
> To make any change to the interrupt handle fields, one
> should make use of these APIs.

Some global comments.

- Please merge API prototype (from patch 1) and actual implementation
  in a single patch.
- rte_intr_handle_ seems a rather long prefix, does it really matter
  to have the _handle part?
- what part of this API needs to be exported to applications? Let's
  hide as much as we can with __rte_internal.
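For reference, hiding a symbol this way follows the existing DPDK convention: tag the prototype with __rte_internal and export it from the INTERNAL section of the version map instead of EXPERIMENTAL. A sketch, with rte_intr_handle_fd_set picked arbitrarily as the example symbol:

```
/* in the header (rte_compat.h provides __rte_internal) */
__rte_internal
int rte_intr_handle_fd_set(struct rte_intr_handle *intr_handle, int fd);

/* in lib/eal/version.map */
INTERNAL {
	...
	rte_intr_handle_fd_set;
};
```

Internal symbols are then usable by drivers and libraries but are not part of the application ABI, so they can change without deprecation notices.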
>
> Signed-off-by: Harman Kalra
> Acked-by: Ray Kinsella
> ---
>  lib/eal/common/eal_common_interrupts.c | 506 +++++++++++++++++++++++++
>  lib/eal/common/meson.build             |   2 +
>  lib/eal/include/rte_eal_interrupts.h   |   6 +-
>  lib/eal/version.map                    |  30 ++
>  4 files changed, 543 insertions(+), 1 deletion(-)
>  create mode 100644 lib/eal/common/eal_common_interrupts.c
>
> diff --git a/lib/eal/common/eal_common_interrupts.c b/lib/eal/common/eal_common_interrupts.c
> new file mode 100644
> index 0000000000..2e4fed96f0
> --- /dev/null
> +++ b/lib/eal/common/eal_common_interrupts.c
> @@ -0,0 +1,506 @@
> +/* SPDX-License-Identifier: BSD-3-Clause
> + * Copyright(C) 2021 Marvell.
> + */
> +
> +#include
> +#include
> +
> +#include
> +#include
> +#include
> +
> +#include
> +
> +
> +struct rte_intr_handle *rte_intr_handle_instance_alloc(int size,
> +						       bool from_hugepage)
> +{
> +	struct rte_intr_handle *intr_handle;
> +	int i;
> +
> +	if (from_hugepage)
> +		intr_handle = rte_zmalloc(NULL,
> +					  size * sizeof(struct rte_intr_handle),
> +					  0);
> +	else
> +		intr_handle = calloc(1, size * sizeof(struct rte_intr_handle));

We can call DPDK allocator in all cases.
That would avoid headaches on why multiprocess does not work in some
rarely tested cases.
Wdyt?

Plus "from_hugepage" is misleading, you could be in --no-huge mode,
rte_zmalloc still works.
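Concretely, the suggested simplification would look something like this untested sketch (the from_hugepage parameter and the alloc_from_hugepage field would then go away entirely):

```
-	if (from_hugepage)
-		intr_handle = rte_zmalloc(NULL,
-				size * sizeof(struct rte_intr_handle),
-				0);
-	else
-		intr_handle = calloc(1, size * sizeof(struct rte_intr_handle));
+	intr_handle = rte_zmalloc(NULL,
+				  size * sizeof(struct rte_intr_handle), 0);
```

rte_zmalloc falls back to regular heap memory in --no-huge mode, and keeping a single allocator means the handle is visible to secondary processes whenever hugepage-backed shared memory is in use.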
> +	if (!intr_handle) {
> +		RTE_LOG(ERR, EAL, "Fail to allocate intr_handle\n");
> +		rte_errno = ENOMEM;
> +		return NULL;
> +	}
> +
> +	for (i = 0; i < size; i++) {
> +		intr_handle[i].nb_intr = RTE_MAX_RXTX_INTR_VEC_ID;
> +		intr_handle[i].alloc_from_hugepage = from_hugepage;
> +	}
> +
> +	return intr_handle;
> +}
> +
> +struct rte_intr_handle *rte_intr_handle_instance_index_get(
> +				struct rte_intr_handle *intr_handle, int index)
> +{
> +	if (intr_handle == NULL) {
> +		RTE_LOG(ERR, EAL, "Interrupt instance unallocated\n");
> +		rte_errno = ENOMEM;
> +		return NULL;
> +	}
> +
> +	return &intr_handle[index];
> +}
> +
> +int rte_intr_handle_instance_index_set(struct rte_intr_handle *intr_handle,
> +				       const struct rte_intr_handle *src,
> +				       int index)
> +{
> +	if (intr_handle == NULL) {
> +		RTE_LOG(ERR, EAL, "Interrupt instance unallocated\n");
> +		rte_errno = ENOTSUP;
> +		goto fail;
> +	}
> +
> +	if (src == NULL) {
> +		RTE_LOG(ERR, EAL, "Source interrupt instance unallocated\n");
> +		rte_errno = EINVAL;
> +		goto fail;
> +	}
> +
> +	if (index < 0) {
> +		RTE_LOG(ERR, EAL, "Index cany be negative");
> +		rte_errno = EINVAL;
> +		goto fail;
> +	}
> +
> +	intr_handle[index].fd = src->fd;
> +	intr_handle[index].vfio_dev_fd = src->vfio_dev_fd;
> +	intr_handle[index].type = src->type;
> +	intr_handle[index].max_intr = src->max_intr;
> +	intr_handle[index].nb_efd = src->nb_efd;
> +	intr_handle[index].efd_counter_size = src->efd_counter_size;
> +
> +	memcpy(intr_handle[index].efds, src->efds, src->nb_intr);
> +	memcpy(intr_handle[index].elist, src->elist, src->nb_intr);
> +
> +	return 0;
> +fail:
> +	return rte_errno;
> +}
> +
> +void rte_intr_handle_instance_free(struct rte_intr_handle *intr_handle)
> +{
> +	if (intr_handle == NULL) {
> +		RTE_LOG(ERR, EAL, "Interrupt instance unallocated\n");
> +		rte_errno = ENOTSUP;
> +	}
> +
> +	if (intr_handle->alloc_from_hugepage)
> +		rte_free(intr_handle);
> +	else
> +		free(intr_handle);
> +}
> +
> +int rte_intr_handle_fd_set(struct rte_intr_handle *intr_handle, int fd)
> +{
> +	if (intr_handle == NULL) {
> +		RTE_LOG(ERR, EAL, "Interrupt instance unallocated\n");
> +		rte_errno = ENOTSUP;
> +		goto fail;
> +	}
> +
> +	intr_handle->fd = fd;
> +
> +	return 0;
> +fail:
> +	return rte_errno;
> +}
> +
> +int rte_intr_handle_fd_get(const struct rte_intr_handle *intr_handle)
> +{
> +	if (intr_handle == NULL) {
> +		RTE_LOG(ERR, EAL, "Interrupt instance unallocated\n");
> +		rte_errno = ENOTSUP;
> +		goto fail;
> +	}
> +
> +	return intr_handle->fd;
> +fail:
> +	return rte_errno;
> +}
> +
> +int rte_intr_handle_type_set(struct rte_intr_handle *intr_handle,
> +			     enum rte_intr_handle_type type)
> +{
> +	if (intr_handle == NULL) {
> +		RTE_LOG(ERR, EAL, "Interrupt instance unallocated\n");
> +		rte_errno = ENOTSUP;
> +		goto fail;
> +	}
> +
> +	intr_handle->type = type;
> +
> +	return 0;
> +fail:
> +	return rte_errno;
> +}
> +
> +enum rte_intr_handle_type rte_intr_handle_type_get(
> +				const struct rte_intr_handle *intr_handle)
> +{
> +	if (intr_handle == NULL) {
> +		RTE_LOG(ERR, EAL, "Interrupt instance unallocated\n");
> +		rte_errno = ENOTSUP;
> +		return RTE_INTR_HANDLE_UNKNOWN;
> +	}
> +
> +	return intr_handle->type;
> +}
> +
> +int rte_intr_handle_dev_fd_set(struct rte_intr_handle *intr_handle, int fd)
> +{
> +	if (intr_handle == NULL) {
> +		RTE_LOG(ERR, EAL, "Interrupt instance unallocated\n");
> +		rte_errno = ENOTSUP;
> +		goto fail;
> +	}
> +
> +	intr_handle->vfio_dev_fd = fd;
> +
> +	return 0;
> +fail:
> +	return rte_errno;
> +}
> +
> +int rte_intr_handle_dev_fd_get(const struct rte_intr_handle *intr_handle)
> +{
> +	if (intr_handle == NULL) {
> +		RTE_LOG(ERR, EAL, "Interrupt instance unallocated\n");
> +		rte_errno = ENOTSUP;
> +		goto fail;
> +	}
> +
> +	return intr_handle->vfio_dev_fd;
> +fail:
> +	return rte_errno;
> +}
> +
> +int rte_intr_handle_max_intr_set(struct rte_intr_handle *intr_handle,
> +				 int max_intr)
> +{
> +	if (intr_handle == NULL) {
> +		RTE_LOG(ERR, EAL, "Interrupt instance unallocated\n");
> +		rte_errno = ENOTSUP;
> +		goto fail;
> +	}
> +
> +	if (max_intr > intr_handle->nb_intr) {
> +		RTE_LOG(ERR, EAL, "Max_intr=%d greater than PLT_MAX_RXTX_INTR_VEC_ID=%d",
> +			max_intr, intr_handle->nb_intr);
> +		rte_errno = ERANGE;
> +		goto fail;
> +	}
> +
> +	intr_handle->max_intr = max_intr;
> +
> +	return 0;
> +fail:
> +	return rte_errno;
> +}
> +
> +int rte_intr_handle_max_intr_get(const struct rte_intr_handle *intr_handle)
> +{
> +	if (intr_handle == NULL) {
> +		RTE_LOG(ERR, EAL, "Interrupt instance unallocated\n");
> +		rte_errno = ENOTSUP;
> +		goto fail;
> +	}
> +
> +	return intr_handle->max_intr;
> +fail:
> +	return rte_errno;
> +}
> +
> +int rte_intr_handle_nb_efd_set(struct rte_intr_handle *intr_handle,
> +			       int nb_efd)
> +{
> +	if (intr_handle == NULL) {
> +		RTE_LOG(ERR, EAL, "Interrupt instance unallocated\n");
> +		rte_errno = ENOTSUP;
> +		goto fail;
> +	}
> +
> +	intr_handle->nb_efd = nb_efd;
> +
> +	return 0;
> +fail:
> +	return rte_errno;
> +}
> +
> +int rte_intr_handle_nb_efd_get(const struct rte_intr_handle *intr_handle)
> +{
> +	if (intr_handle == NULL) {
> +		RTE_LOG(ERR, EAL, "Interrupt instance unallocated\n");
> +		rte_errno = ENOTSUP;
> +		goto fail;
> +	}
> +
> +	return intr_handle->nb_efd;
> +fail:
> +	return rte_errno;
> +}
> +
> +int rte_intr_handle_nb_intr_get(const struct rte_intr_handle *intr_handle)
> +{
> +	if (intr_handle == NULL) {
> +		RTE_LOG(ERR, EAL, "Interrupt instance unallocated\n");
> +		rte_errno = ENOTSUP;
> +		goto fail;
> +	}
> +
> +	return intr_handle->nb_intr;
> +fail:
> +	return rte_errno;
> +}
> +
> +int rte_intr_handle_efd_counter_size_set(struct rte_intr_handle *intr_handle,
> +					 uint8_t efd_counter_size)
> +{
> +	if (intr_handle == NULL) {
> +		RTE_LOG(ERR, EAL, "Interrupt instance unallocated\n");
> +		rte_errno = ENOTSUP;
> +		goto fail;
> +	}
> +
> +	intr_handle->efd_counter_size = efd_counter_size;
> +
> +	return 0;
> +fail:
> +	return rte_errno;
> +}
> +
> +int rte_intr_handle_efd_counter_size_get(
> +				const struct rte_intr_handle *intr_handle)
> +{
> +	if (intr_handle == NULL) {
> +		RTE_LOG(ERR, EAL, "Interrupt instance unallocated\n");
> +		rte_errno = ENOTSUP;
> +		goto fail;
> +	}
> +
> +	return intr_handle->efd_counter_size;
> +fail:
> +	return rte_errno;
> +}
> +
> +int *rte_intr_handle_efds_base(struct rte_intr_handle *intr_handle)
> +{
> +	if (intr_handle == NULL) {
> +		RTE_LOG(ERR, EAL, "Interrupt instance unallocated\n");
> +		rte_errno = ENOTSUP;
> +		goto fail;
> +	}
> +
> +	return intr_handle->efds;
> +fail:
> +	return NULL;
> +}

We don't need this new accessor.
It leaks the internal representation to the API caller.
If the internal representation is later changed, we would have to
maintain this array thing.

The only user is drivers/raw/ifpga/ifpga_rawdev.c.
This driver can build an array itself, and call
rte_intr_handle_efds_index_get() as much as needed.

> +
> +int rte_intr_handle_efds_index_get(const struct rte_intr_handle *intr_handle,
> +				   int index)
> +{
> +	if (intr_handle == NULL) {
> +		RTE_LOG(ERR, EAL, "Interrupt instance unallocated\n");
> +		rte_errno = ENOTSUP;
> +		goto fail;
> +	}
> +
> +	if (index >= intr_handle->nb_intr) {
> +		RTE_LOG(ERR, EAL, "Invalid size %d, max limit %d\n", index,
> +			intr_handle->nb_intr);
> +		rte_errno = EINVAL;
> +		goto fail;
> +	}
> +
> +	return intr_handle->efds[index];
> +fail:
> +	return rte_errno;
> +}
> +
> +int rte_intr_handle_efds_index_set(struct rte_intr_handle *intr_handle,
> +				   int index, int fd)
> +{
> +	if (intr_handle == NULL) {
> +		RTE_LOG(ERR, EAL, "Interrupt instance unallocated\n");
> +		rte_errno = ENOTSUP;
> +		goto fail;
> +	}
> +
> +	if (index >= intr_handle->nb_intr) {
> +		RTE_LOG(ERR, EAL, "Invalid size %d, max limit %d\n", index,
> +			intr_handle->nb_intr);
> +		rte_errno = ERANGE;
> +		goto fail;
> +	}
> +
> +	intr_handle->efds[index] = fd;
> +
> +	return 0;
> +fail:
> +	return rte_errno;
> +}
> +
> +struct rte_epoll_event *rte_intr_handle_elist_index_get(
> +				struct rte_intr_handle *intr_handle, int index)
> +{
> +	if (intr_handle == NULL) {
> +		RTE_LOG(ERR, EAL, "Interrupt instance unallocated\n");
> +		rte_errno = ENOTSUP;
> +		goto fail;
> +	}
> +
> +	if (index >= intr_handle->nb_intr) {
> +		RTE_LOG(ERR, EAL, "Invalid size %d, max limit %d\n", index,
> +			intr_handle->nb_intr);
> +		rte_errno = ERANGE;
> +		goto fail;
> +	}
> +
> +	return &intr_handle->elist[index];
> +fail:
> +	return NULL;
> +}
> +
> +int rte_intr_handle_elist_index_set(struct rte_intr_handle *intr_handle,
> +				    int index, struct rte_epoll_event elist)
> +{
> +	if (intr_handle == NULL) {
> +		RTE_LOG(ERR, EAL, "Interrupt instance unallocated\n");
> +		rte_errno = ENOTSUP;
> +		goto fail;
> +	}
> +
> +	if (index >= intr_handle->nb_intr) {
> +		RTE_LOG(ERR, EAL, "Invalid size %d, max limit %d\n", index,
> +			intr_handle->nb_intr);
> +		rte_errno = ERANGE;
> +		goto fail;
> +	}
> +
> +	intr_handle->elist[index] = elist;
> +
> +	return 0;
> +fail:
> +	return rte_errno;
> +}
> +
> +int *rte_intr_handle_vec_list_base(const struct rte_intr_handle *intr_handle)
> +{
> +	if (intr_handle == NULL) {
> +		RTE_LOG(ERR, EAL, "Interrupt instance unallocated\n");
> +		rte_errno = ENOTSUP;
> +		return NULL;
> +	}
> +
> +	return intr_handle->intr_vec;
> +}

rte_intr_handle_vec_list_base leaks an internal representation too.
Afaics with the whole series applied, it is always paired with a
rte_intr_handle_vec_list_alloc or rte_intr_handle_vec_list_free.

rte_intr_handle_vec_list_alloc could do this check itself.
And rte_intr_handle_vec_list_free should already be fine, since it
sets intr_vec to NULL.
> +
> +int rte_intr_handle_vec_list_alloc(struct rte_intr_handle *intr_handle,
> +				   const char *name, int size)
> +{
> +	if (intr_handle == NULL) {
> +		RTE_LOG(ERR, EAL, "Interrupt instance unallocated\n");
> +		rte_errno = ENOTSUP;
> +		goto fail;
> +	}
> +
> +	/* Vector list already allocated */
> +	if (intr_handle->intr_vec)
> +		return 0;
> +
> +	if (size > intr_handle->nb_intr) {
> +		RTE_LOG(ERR, EAL, "Invalid size %d, max limit %d\n", size,
> +			intr_handle->nb_intr);
> +		rte_errno = ERANGE;
> +		goto fail;
> +	}
> +
> +	intr_handle->intr_vec = rte_zmalloc(name, size * sizeof(int), 0);
> +	if (!intr_handle->intr_vec) {
> +		RTE_LOG(ERR, EAL, "Failed to allocate %d intr_vec", size);
> +		rte_errno = ENOMEM;
> +		goto fail;
> +	}
> +
> +	intr_handle->vec_list_size = size;
> +
> +	return 0;
> +fail:
> +	return rte_errno;
> +}
> +
> +int rte_intr_handle_vec_list_index_get(
> +			const struct rte_intr_handle *intr_handle, int index)
> +{
> +	if (intr_handle == NULL) {
> +		RTE_LOG(ERR, EAL, "Interrupt instance unallocated\n");
> +		rte_errno = ENOTSUP;
> +		goto fail;
> +	}
> +
> +	if (!intr_handle->intr_vec) {
> +		RTE_LOG(ERR, EAL, "Intr vector list not allocated\n");
> +		rte_errno = ENOTSUP;
> +		goto fail;
> +	}
> +
> +	if (index > intr_handle->vec_list_size) {
> +		RTE_LOG(ERR, EAL, "Index %d greater than vec list size %d\n",
> +			index, intr_handle->vec_list_size);
> +		rte_errno = ERANGE;
> +		goto fail;
> +	}
> +
> +	return intr_handle->intr_vec[index];
> +fail:
> +	return rte_errno;
> +}
> +
> +int rte_intr_handle_vec_list_index_set(struct rte_intr_handle *intr_handle,
> +				       int index, int vec)
> +{
> +	if (intr_handle == NULL) {
> +		RTE_LOG(ERR, EAL, "Interrupt instance unallocated\n");
> +		rte_errno = ENOTSUP;
> +		goto fail;
> +	}
> +
> +	if (!intr_handle->intr_vec) {
> +		RTE_LOG(ERR, EAL, "Intr vector list not allocated\n");
> +		rte_errno = ENOTSUP;
> +		goto fail;
> +	}
> +
> +	if (index > intr_handle->vec_list_size) {
> +		RTE_LOG(ERR, EAL, "Index %d greater than vec list size %d\n",
> +			index, intr_handle->vec_list_size);
> +		rte_errno = ERANGE;
> +		goto fail;
> +	}
> +
> +	intr_handle->intr_vec[index] = vec;
> +
> +	return 0;
> +fail:
> +	return rte_errno;
> +}
> +
> +void rte_intr_handle_vec_list_free(struct rte_intr_handle *intr_handle)
> +{
> +	if (intr_handle == NULL) {
> +		RTE_LOG(ERR, EAL, "Interrupt instance unallocated\n");
> +		rte_errno = ENOTSUP;
> +	}
> +
> +	rte_free(intr_handle->intr_vec);
> +	intr_handle->intr_vec = NULL;
> +}
> diff --git a/lib/eal/common/meson.build b/lib/eal/common/meson.build
> index edfca77779..47f2977539 100644
> --- a/lib/eal/common/meson.build
> +++ b/lib/eal/common/meson.build
> @@ -17,6 +17,7 @@ if is_windows
>          'eal_common_errno.c',
>          'eal_common_fbarray.c',
>          'eal_common_hexdump.c',
> +        'eal_common_interrupts.c',
>          'eal_common_launch.c',
>          'eal_common_lcore.c',
>          'eal_common_log.c',
> @@ -53,6 +54,7 @@ sources += files(
>          'eal_common_fbarray.c',
>          'eal_common_hexdump.c',
>          'eal_common_hypervisor.c',
> +        'eal_common_interrupts.c',
>          'eal_common_launch.c',
>          'eal_common_lcore.c',
>          'eal_common_log.c',
> diff --git a/lib/eal/include/rte_eal_interrupts.h b/lib/eal/include/rte_eal_interrupts.h
> index 68ca3a042d..216aece61b 100644
> --- a/lib/eal/include/rte_eal_interrupts.h
> +++ b/lib/eal/include/rte_eal_interrupts.h
> @@ -55,13 +55,17 @@ struct rte_intr_handle {
>  	};
>  	void *handle; /**< device driver handle (Windows) */
>  	};
> +	bool alloc_from_hugepage;
>  	enum rte_intr_handle_type type;  /**< handle type */
>  	uint32_t max_intr;             /**< max interrupt requested */
>  	uint32_t nb_efd;               /**< number of available efd(event fd) */
>  	uint8_t efd_counter_size;      /**< size of efd counter, used for vdev */
> +	uint16_t nb_intr;
> +	/**< Max vector count, default RTE_MAX_RXTX_INTR_VEC_ID */
>  	int efds[RTE_MAX_RXTX_INTR_VEC_ID];  /**< intr vectors/efds mapping */
>  	struct rte_epoll_event elist[RTE_MAX_RXTX_INTR_VEC_ID];
> -	/**< intr vector epoll event */
> +	/**< intr vector epoll event */
> +	uint16_t vec_list_size;
>  	int *intr_vec;                 /**< intr vector number array */
>  };
>
> diff --git a/lib/eal/version.map b/lib/eal/version.map
> index beeb986adc..56108d0998 100644
> --- a/lib/eal/version.map
> +++ b/lib/eal/version.map
> @@ -426,6 +426,36 @@ EXPERIMENTAL {
>
>  	# added in 21.08
>  	rte_power_monitor_multi; # WINDOWS_NO_EXPORT
> +
> +	# added in 21.11
> +	rte_intr_handle_fd_set;
> +	rte_intr_handle_fd_get;
> +	rte_intr_handle_dev_fd_set;
> +	rte_intr_handle_dev_fd_get;
> +	rte_intr_handle_type_set;
> +	rte_intr_handle_type_get;
> +	rte_intr_handle_instance_alloc;
> +	rte_intr_handle_instance_index_get;
> +	rte_intr_handle_instance_free;
> +	rte_intr_handle_instance_index_set;
> +	rte_intr_handle_event_list_update;
> +	rte_intr_handle_max_intr_set;
> +	rte_intr_handle_max_intr_get;
> +	rte_intr_handle_nb_efd_set;
> +	rte_intr_handle_nb_efd_get;
> +	rte_intr_handle_nb_intr_get;
> +	rte_intr_handle_efds_index_set;
> +	rte_intr_handle_efds_index_get;
> +	rte_intr_handle_efds_base;
> +	rte_intr_handle_elist_index_set;
> +	rte_intr_handle_elist_index_get;
> +	rte_intr_handle_efd_counter_size_set;
> +	rte_intr_handle_efd_counter_size_get;
> +	rte_intr_handle_vec_list_alloc;
> +	rte_intr_handle_vec_list_index_set;
> +	rte_intr_handle_vec_list_index_get;
> +	rte_intr_handle_vec_list_free;
> +	rte_intr_handle_vec_list_base;
>  };
>
>  INTERNAL {
> --
> 2.18.0
>

--
David Marchand