* [dpdk-dev] [RFC 0/7] make rte_intr_handle internal
@ 2021-08-26 14:57 Harman Kalra
2021-08-26 14:57 ` [dpdk-dev] [RFC 1/7] eal: interrupt handle API prototypes Harman Kalra
` (12 more replies)
0 siblings, 13 replies; 152+ messages in thread
From: Harman Kalra @ 2021-08-26 14:57 UTC (permalink / raw)
To: dev; +Cc: Harman Kalra
Moving struct rte_intr_handle to be an internal structure avoids
any ABI breakage in the future, since this structure defines some
static arrays and changing the respective macros breaks the ABI.
E.g.:
Currently RTE_MAX_RXTX_INTR_VEC_ID imposes a limit of at most 512
MSI-X interrupts that can be defined for a PCI device, while the PCI
specification allows up to 2048 MSI-X interrupts to be used.
If some PCI device requires more than 512 vectors, we must either raise
the RTE_MAX_RXTX_INTR_VEC_ID limit or allocate dynamically based on the
PCI device's MSI-X size at probe time. Either way it is an ABI breakage.
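For illustration, a trimmed-down sketch of today's public layout (the
"_sketch" struct name is only for this example; field names follow the
existing definition):

    #include <rte_interrupts.h>

    /* The array sizes are baked into the public struct, so changing
     * RTE_MAX_RXTX_INTR_VEC_ID changes sizeof(struct rte_intr_handle)
     * and the offset of every field placed after the arrays, i.e. an
     * ABI break for anything built against the old value.
     */
    struct rte_intr_handle_sketch {
            int fd;                             /* interrupt event fd */
            int efds[RTE_MAX_RXTX_INTR_VEC_ID]; /* intr vectors/efds mapping */
            struct rte_epoll_event elist[RTE_MAX_RXTX_INTR_VEC_ID];
            int *intr_vec;                      /* intr vector number array */
    };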
Change already included in 21.11 ABI improvement spreadsheet (item 42):
https://docs.google.com/spreadsheets/d/1betlC000ua5SsSiJIcC54mCCCJnW6voH5Dqv9UxeyfE/edit#gid=0
This series makes struct rte_intr_handle totally opaque to the outside
world by wrapping it inside a .c file and providing get/set wrapper APIs
to read or manipulate its fields. Any change to any of the fields
should be made via these get/set APIs.
A new file, eal_common_interrupts.c, is introduced where all these APIs
are defined; it also hides the struct rte_intr_handle definition.
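For example, a driver path that today pokes the handle fields directly
would move to the accessors along these lines (a minimal sketch using
the accessor names introduced later in this series; error handling
trimmed):

    #include <rte_interrupts.h>

    static int
    setup_irq(struct rte_intr_handle *intr_handle, int fd)
    {
            /* Before: intr_handle->fd = fd;
             *         intr_handle->type = RTE_INTR_HANDLE_VFIO_MSIX;
             */
            if (rte_intr_handle_fd_set(intr_handle, fd) != 0)
                    return -1;
            if (rte_intr_handle_type_set(intr_handle,
                                         RTE_INTR_HANDLE_VFIO_MSIX) != 0)
                    return -1;
            return rte_intr_enable(intr_handle);
    }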
Details on each patch of the series:
Patch 1: eal: interrupt handle API prototypes
This patch provides prototypes of all the new get/set APIs and
also rearranges the headers related to the interrupt framework. Epoll
related definitions and prototypes are moved into a new header,
rte_epoll.h, and the APIs defined in rte_eal_interrupts.h which were
driver specific are moved to rte_interrupts.h (they were anyway
accessible and used outside the DPDK library). Later in the series
rte_eal_interrupts.h is removed.
Patch 2: eal/interrupts: implement get set APIs
Implementing all the get, set and alloc APIs. The alloc APIs allocate
memory for interrupt handle instances. Currently most of the drivers
define the interrupt handle instance as static, but it can no longer be
static because the size of rte_intr_handle is unknown to the drivers.
Drivers are expected to allocate interrupt instances during
initialization and free these instances during the cleanup phase, as in
the sketch below.
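A minimal sketch of that lifecycle, using the alloc/free APIs added in
this patch (the my_dev_* helpers are hypothetical driver code):

    #include <rte_errno.h>
    #include <rte_interrupts.h>

    static struct rte_intr_handle *intr_handle; /* was: a static struct */

    static int
    my_dev_init(void)
    {
            /* One instance, from regular (non-hugepage) memory. */
            intr_handle = rte_intr_handle_instance_alloc(
                            RTE_INTR_HANDLE_DEFAULT_SIZE, false);
            return intr_handle == NULL ? -rte_errno : 0;
    }

    static void
    my_dev_cleanup(void)
    {
            rte_intr_handle_instance_free(intr_handle);
            intr_handle = NULL;
    }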
Patch 3: eal/interrupts: avoid direct access to interrupt handle
Modifying the interrupt framework for Linux and FreeBSD to use these
get/set/alloc APIs as needed and to avoid accessing the fields
directly.
Patch 4: test/interrupt: apply get set interrupt handle APIs
Updating the interrupt test suite to use the interrupt handle APIs.
Patch 5: drivers: remove direct access to interrupt handle fields
Modifying all the drivers and libraries which currently access the
interrupt handle fields directly. Drivers are expected to allocate the
interrupt instance, use the get/set APIs with the allocated interrupt
handle and free it on cleanup; see the Rx vector mapping sketch below.
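For instance, the common ethdev pattern of building the Rx queue vector
mapping becomes roughly the following (a hedged sketch; nb_rxq and the
"rxq_vec" name are illustrative):

    #include <stdint.h>

    #include <rte_errno.h>
    #include <rte_interrupts.h>

    static int
    map_rxq_vectors(struct rte_intr_handle *intr_handle, uint16_t nb_rxq)
    {
            uint16_t q;

            /* Was: intr_handle->intr_vec = rte_zmalloc(...); then
             *      intr_handle->intr_vec[q] = vec;
             */
            if (rte_intr_handle_vec_list_alloc(intr_handle, "rxq_vec", nb_rxq))
                    return -rte_errno;

            for (q = 0; q < nb_rxq; q++)
                    if (rte_intr_handle_vec_list_index_set(intr_handle, q,
                                    RTE_INTR_VEC_RXTX_OFFSET + q))
                            return -rte_errno;

            return 0;
    }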
Patch 6: eal/interrupts: make interrupt handle structure opaque
In this patch rte_eal_interrupts.h is removed and the struct
rte_intr_handle definition is moved to a .c file to make it completely
opaque. As part of interrupt handle allocation, arrays like efds and
elist (which are currently statically sized) are dynamically allocated
with a default size (RTE_MAX_RXTX_INTR_VEC_ID). Later these arrays can
be reallocated as per the device requirement using the new API
rte_intr_handle_event_list_update(). E.g., at PCI device probe time the
MSI-X size can be queried and these arrays reallocated accordingly, as
sketched below.
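A hedged sketch of that probe-time flow (msix_count stands for a value
queried from the device and is illustrative):

    static int
    grow_event_lists(struct rte_intr_handle *intr_handle, uint16_t msix_count)
    {
            /* efds/elist start with RTE_MAX_RXTX_INTR_VEC_ID entries;
             * grow them only if the device exposes more MSI-X vectors.
             */
            if (msix_count <= RTE_MAX_RXTX_INTR_VEC_ID)
                    return 0;

            return rte_intr_handle_event_list_update(intr_handle, msix_count);
    }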
Patch 7: eal/alarm: introduce alarm fini routine
Introducing an alarm fini routine, so that the memory allocated for the
alarm interrupt instance can be freed in alarm fini.
Testing performed:
1. Validated the series by running the interrupts and alarm test suites.
2. Validated l3fwd-power functionality with octeontx2 and i40e (Intel)
cards, where interrupts are expected on packet arrival.
Harman Kalra (7):
eal: interrupt handle API prototypes
eal/interrupts: implement get set APIs
eal/interrupts: avoid direct access to interrupt handle
test/interrupt: apply get set interrupt handle APIs
drivers: remove direct access to interrupt handle fields
eal/interrupts: make interrupt handle structure opaque
eal/alarm: introduce alarm fini routine
MAINTAINERS | 1 +
app/test/test_interrupts.c | 237 +++---
drivers/baseband/acc100/rte_acc100_pmd.c | 18 +-
.../fpga_5gnr_fec/rte_fpga_5gnr_fec.c | 13 +-
drivers/baseband/fpga_lte_fec/fpga_lte_fec.c | 14 +-
drivers/bus/auxiliary/auxiliary_common.c | 2 +
drivers/bus/auxiliary/linux/auxiliary.c | 11 +
drivers/bus/auxiliary/rte_bus_auxiliary.h | 2 +-
drivers/bus/dpaa/dpaa_bus.c | 28 +-
drivers/bus/dpaa/rte_dpaa_bus.h | 2 +-
drivers/bus/fslmc/fslmc_bus.c | 17 +-
drivers/bus/fslmc/fslmc_vfio.c | 32 +-
drivers/bus/fslmc/portal/dpaa2_hw_dpio.c | 21 +-
drivers/bus/fslmc/portal/dpaa2_hw_pvt.h | 2 +-
drivers/bus/fslmc/rte_fslmc.h | 2 +-
drivers/bus/ifpga/ifpga_bus.c | 16 +-
drivers/bus/ifpga/rte_bus_ifpga.h | 2 +-
drivers/bus/pci/linux/pci.c | 4 +-
drivers/bus/pci/linux/pci_uio.c | 73 +-
drivers/bus/pci/linux/pci_vfio.c | 115 ++-
drivers/bus/pci/pci_common.c | 29 +-
drivers/bus/pci/pci_common_uio.c | 21 +-
drivers/bus/pci/rte_bus_pci.h | 4 +-
drivers/bus/vmbus/linux/vmbus_bus.c | 7 +
drivers/bus/vmbus/linux/vmbus_uio.c | 37 +-
drivers/bus/vmbus/rte_bus_vmbus.h | 2 +-
drivers/bus/vmbus/vmbus_common_uio.c | 24 +-
drivers/common/cnxk/roc_cpt.c | 8 +-
drivers/common/cnxk/roc_dev.c | 14 +-
drivers/common/cnxk/roc_irq.c | 106 +--
drivers/common/cnxk/roc_nix_irq.c | 37 +-
drivers/common/cnxk/roc_npa.c | 2 +-
drivers/common/cnxk/roc_platform.h | 34 +
drivers/common/cnxk/roc_sso.c | 4 +-
drivers/common/cnxk/roc_tim.c | 4 +-
drivers/common/octeontx2/otx2_dev.c | 14 +-
drivers/common/octeontx2/otx2_irq.c | 117 +--
.../octeontx2/otx2_cryptodev_hw_access.c | 4 +-
drivers/event/octeontx2/otx2_evdev_irq.c | 12 +-
drivers/mempool/octeontx2/otx2_mempool.c | 2 +-
drivers/net/atlantic/atl_ethdev.c | 22 +-
drivers/net/avp/avp_ethdev.c | 8 +-
drivers/net/axgbe/axgbe_ethdev.c | 12 +-
drivers/net/axgbe/axgbe_mdio.c | 6 +-
drivers/net/bnx2x/bnx2x_ethdev.c | 10 +-
drivers/net/bnxt/bnxt_ethdev.c | 32 +-
drivers/net/bnxt/bnxt_irq.c | 4 +-
drivers/net/dpaa/dpaa_ethdev.c | 47 +-
drivers/net/dpaa2/dpaa2_ethdev.c | 10 +-
drivers/net/e1000/em_ethdev.c | 24 +-
drivers/net/e1000/igb_ethdev.c | 84 ++-
drivers/net/ena/ena_ethdev.c | 36 +-
drivers/net/enic/enic_main.c | 27 +-
drivers/net/failsafe/failsafe.c | 24 +-
drivers/net/failsafe/failsafe_intr.c | 45 +-
drivers/net/failsafe/failsafe_ops.c | 23 +-
drivers/net/failsafe/failsafe_private.h | 2 +-
drivers/net/fm10k/fm10k_ethdev.c | 32 +-
drivers/net/hinic/hinic_pmd_ethdev.c | 10 +-
drivers/net/hns3/hns3_ethdev.c | 50 +-
drivers/net/hns3/hns3_ethdev_vf.c | 57 +-
drivers/net/hns3/hns3_rxtx.c | 2 +-
drivers/net/i40e/i40e_ethdev.c | 55 +-
drivers/net/i40e/i40e_ethdev_vf.c | 43 +-
drivers/net/iavf/iavf_ethdev.c | 41 +-
drivers/net/iavf/iavf_vchnl.c | 4 +-
drivers/net/ice/ice_dcf.c | 10 +-
drivers/net/ice/ice_dcf_ethdev.c | 23 +-
drivers/net/ice/ice_ethdev.c | 51 +-
drivers/net/igc/igc_ethdev.c | 47 +-
drivers/net/ionic/ionic_ethdev.c | 12 +-
drivers/net/ixgbe/ixgbe_ethdev.c | 70 +-
drivers/net/memif/memif_socket.c | 99 ++-
drivers/net/memif/memif_socket.h | 4 +-
drivers/net/memif/rte_eth_memif.c | 63 +-
drivers/net/memif/rte_eth_memif.h | 2 +-
drivers/net/mlx4/mlx4.c | 20 +-
drivers/net/mlx4/mlx4.h | 2 +-
drivers/net/mlx4/mlx4_intr.c | 48 +-
drivers/net/mlx5/linux/mlx5_os.c | 56 +-
drivers/net/mlx5/linux/mlx5_socket.c | 26 +-
drivers/net/mlx5/mlx5.h | 6 +-
drivers/net/mlx5/mlx5_rxq.c | 43 +-
drivers/net/mlx5/mlx5_trigger.c | 4 +-
drivers/net/mlx5/mlx5_txpp.c | 27 +-
drivers/net/netvsc/hn_ethdev.c | 4 +-
drivers/net/nfp/nfp_net.c | 42 +-
drivers/net/ngbe/ngbe_ethdev.c | 31 +-
drivers/net/octeontx2/otx2_ethdev_irq.c | 35 +-
drivers/net/qede/qede_ethdev.c | 16 +-
drivers/net/sfc/sfc_intr.c | 29 +-
drivers/net/tap/rte_eth_tap.c | 37 +-
drivers/net/tap/rte_eth_tap.h | 2 +-
drivers/net/tap/tap_intr.c | 33 +-
drivers/net/thunderx/nicvf_ethdev.c | 13 +
drivers/net/thunderx/nicvf_struct.h | 2 +-
drivers/net/txgbe/txgbe_ethdev.c | 36 +-
drivers/net/txgbe/txgbe_ethdev_vf.c | 35 +-
drivers/net/vhost/rte_eth_vhost.c | 78 +-
drivers/net/virtio/virtio_ethdev.c | 17 +-
.../net/virtio/virtio_user/virtio_user_dev.c | 53 +-
drivers/net/vmxnet3/vmxnet3_ethdev.c | 45 +-
drivers/raw/ifpga/ifpga_rawdev.c | 42 +-
drivers/raw/ntb/ntb.c | 10 +-
.../regex/octeontx2/otx2_regexdev_hw_access.c | 4 +-
drivers/vdpa/ifc/ifcvf_vdpa.c | 5 +-
drivers/vdpa/mlx5/mlx5_vdpa.c | 11 +
drivers/vdpa/mlx5/mlx5_vdpa.h | 4 +-
drivers/vdpa/mlx5/mlx5_vdpa_event.c | 22 +-
drivers/vdpa/mlx5/mlx5_vdpa_virtq.c | 46 +-
lib/bbdev/rte_bbdev.c | 4 +-
lib/eal/common/eal_common_interrupts.c | 668 +++++++++++++++++
lib/eal/common/eal_private.h | 11 +
lib/eal/common/meson.build | 2 +
lib/eal/freebsd/eal.c | 1 +
lib/eal/freebsd/eal_alarm.c | 57 +-
lib/eal/freebsd/eal_interrupts.c | 93 ++-
lib/eal/include/meson.build | 2 +-
lib/eal/include/rte_eal_interrupts.h | 269 -------
lib/eal/include/rte_eal_trace.h | 24 +-
lib/eal/include/rte_epoll.h | 117 +++
lib/eal/include/rte_interrupts.h | 673 +++++++++++++++++-
lib/eal/linux/eal.c | 1 +
lib/eal/linux/eal_alarm.c | 39 +-
lib/eal/linux/eal_dev.c | 65 +-
lib/eal/linux/eal_interrupts.c | 294 +++++---
lib/eal/version.map | 30 +
lib/ethdev/ethdev_pci.h | 2 +-
lib/ethdev/rte_ethdev.c | 14 +-
129 files changed, 3763 insertions(+), 1672 deletions(-)
create mode 100644 lib/eal/common/eal_common_interrupts.c
delete mode 100644 lib/eal/include/rte_eal_interrupts.h
create mode 100644 lib/eal/include/rte_epoll.h
--
2.18.0
^ permalink raw reply [flat|nested] 152+ messages in thread
* [dpdk-dev] [RFC 1/7] eal: interrupt handle API prototypes
2021-08-26 14:57 [dpdk-dev] [RFC 0/7] make rte_intr_handle internal Harman Kalra
@ 2021-08-26 14:57 ` Harman Kalra
2021-08-31 15:52 ` Kinsella, Ray
2021-08-26 14:57 ` [dpdk-dev] [RFC 2/7] eal/interrupts: implement get set APIs Harman Kalra
` (11 subsequent siblings)
12 siblings, 1 reply; 152+ messages in thread
From: Harman Kalra @ 2021-08-26 14:57 UTC (permalink / raw)
To: dev, Thomas Monjalon, Harman Kalra
Defining prototypes of get/set APIs for accessing/manipulating
interrupt handle fields.
The internal interrupt header, rte_eal_interrupts.h, is rearranged:
the APIs it defined are moved to rte_interrupts.h and the epoll
specific definitions are moved to a new header, rte_epoll.h.
Later in the series rte_eal_interrupts.h will be removed.
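After this rearrangement, a minimal orientation sketch of what each
public header provides (include lines only):

    #include <rte_epoll.h>      /* rte_epoll_event, rte_epoll_wait(), rte_epoll_ctl() */
    #include <rte_interrupts.h> /* pulls in rte_epoll.h plus the rte_intr_* handle APIs */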
Signed-off-by: Harman Kalra <hkalra@marvell.com>
---
MAINTAINERS | 1 +
lib/eal/include/meson.build | 1 +
lib/eal/include/rte_eal_interrupts.h | 201 ---------
lib/eal/include/rte_epoll.h | 116 +++++
lib/eal/include/rte_interrupts.h | 653 ++++++++++++++++++++++++++-
5 files changed, 769 insertions(+), 203 deletions(-)
create mode 100644 lib/eal/include/rte_epoll.h
diff --git a/MAINTAINERS b/MAINTAINERS
index 266f5ac1da..53b092f532 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -208,6 +208,7 @@ F: app/test/test_memzone.c
Interrupt Subsystem
M: Harman Kalra <hkalra@marvell.com>
+F: lib/eal/include/rte_epoll.h
F: lib/eal/*/*interrupts.*
F: app/test/test_interrupts.c
diff --git a/lib/eal/include/meson.build b/lib/eal/include/meson.build
index 88a9eba12f..8e258607b8 100644
--- a/lib/eal/include/meson.build
+++ b/lib/eal/include/meson.build
@@ -19,6 +19,7 @@ headers += files(
'rte_eal_memconfig.h',
'rte_eal_trace.h',
'rte_errno.h',
+ 'rte_epoll.h',
'rte_fbarray.h',
'rte_hexdump.h',
'rte_hypervisor.h',
diff --git a/lib/eal/include/rte_eal_interrupts.h b/lib/eal/include/rte_eal_interrupts.h
index 00bcc19b6d..68ca3a042d 100644
--- a/lib/eal/include/rte_eal_interrupts.h
+++ b/lib/eal/include/rte_eal_interrupts.h
@@ -39,32 +39,6 @@ enum rte_intr_handle_type {
RTE_INTR_HANDLE_MAX /**< count of elements */
};
-#define RTE_INTR_EVENT_ADD 1UL
-#define RTE_INTR_EVENT_DEL 2UL
-
-typedef void (*rte_intr_event_cb_t)(int fd, void *arg);
-
-struct rte_epoll_data {
- uint32_t event; /**< event type */
- void *data; /**< User data */
- rte_intr_event_cb_t cb_fun; /**< IN: callback fun */
- void *cb_arg; /**< IN: callback arg */
-};
-
-enum {
- RTE_EPOLL_INVALID = 0,
- RTE_EPOLL_VALID,
- RTE_EPOLL_EXEC,
-};
-
-/** interrupt epoll event obj, taken by epoll_event.ptr */
-struct rte_epoll_event {
- uint32_t status; /**< OUT: event status */
- int fd; /**< OUT: event fd */
- int epfd; /**< OUT: epoll instance the ev associated with */
- struct rte_epoll_data epdata;
-};
-
/** Handle for interrupts. */
struct rte_intr_handle {
RTE_STD_C11
@@ -91,179 +65,4 @@ struct rte_intr_handle {
int *intr_vec; /**< intr vector number array */
};
-#define RTE_EPOLL_PER_THREAD -1 /**< to hint using per thread epfd */
-
-/**
- * It waits for events on the epoll instance.
- * Retries if signal received.
- *
- * @param epfd
- * Epoll instance fd on which the caller wait for events.
- * @param events
- * Memory area contains the events that will be available for the caller.
- * @param maxevents
- * Up to maxevents are returned, must greater than zero.
- * @param timeout
- * Specifying a timeout of -1 causes a block indefinitely.
- * Specifying a timeout equal to zero cause to return immediately.
- * @return
- * - On success, returns the number of available event.
- * - On failure, a negative value.
- */
-int
-rte_epoll_wait(int epfd, struct rte_epoll_event *events,
- int maxevents, int timeout);
-
-/**
- * It waits for events on the epoll instance.
- * Does not retry if signal received.
- *
- * @param epfd
- * Epoll instance fd on which the caller wait for events.
- * @param events
- * Memory area contains the events that will be available for the caller.
- * @param maxevents
- * Up to maxevents are returned, must greater than zero.
- * @param timeout
- * Specifying a timeout of -1 causes a block indefinitely.
- * Specifying a timeout equal to zero cause to return immediately.
- * @return
- * - On success, returns the number of available event.
- * - On failure, a negative value.
- */
-__rte_experimental
-int
-rte_epoll_wait_interruptible(int epfd, struct rte_epoll_event *events,
- int maxevents, int timeout);
-
-/**
- * It performs control operations on epoll instance referred by the epfd.
- * It requests that the operation op be performed for the target fd.
- *
- * @param epfd
- * Epoll instance fd on which the caller perform control operations.
- * @param op
- * The operation be performed for the target fd.
- * @param fd
- * The target fd on which the control ops perform.
- * @param event
- * Describes the object linked to the fd.
- * Note: The caller must take care the object deletion after CTL_DEL.
- * @return
- * - On success, zero.
- * - On failure, a negative value.
- */
-int
-rte_epoll_ctl(int epfd, int op, int fd,
- struct rte_epoll_event *event);
-
-/**
- * The function returns the per thread epoll instance.
- *
- * @return
- * epfd the epoll instance referred to.
- */
-int
-rte_intr_tls_epfd(void);
-
-/**
- * @param intr_handle
- * Pointer to the interrupt handle.
- * @param epfd
- * Epoll instance fd which the intr vector associated to.
- * @param op
- * The operation be performed for the vector.
- * Operation type of {ADD, DEL}.
- * @param vec
- * RX intr vector number added to the epoll instance wait list.
- * @param data
- * User raw data.
- * @return
- * - On success, zero.
- * - On failure, a negative value.
- */
-int
-rte_intr_rx_ctl(struct rte_intr_handle *intr_handle,
- int epfd, int op, unsigned int vec, void *data);
-
-/**
- * It deletes registered eventfds.
- *
- * @param intr_handle
- * Pointer to the interrupt handle.
- */
-void
-rte_intr_free_epoll_fd(struct rte_intr_handle *intr_handle);
-
-/**
- * It enables the packet I/O interrupt event if it's necessary.
- * It creates event fd for each interrupt vector when MSIX is used,
- * otherwise it multiplexes a single event fd.
- *
- * @param intr_handle
- * Pointer to the interrupt handle.
- * @param nb_efd
- * Number of interrupt vector trying to enable.
- * The value 0 is not allowed.
- * @return
- * - On success, zero.
- * - On failure, a negative value.
- */
-int
-rte_intr_efd_enable(struct rte_intr_handle *intr_handle, uint32_t nb_efd);
-
-/**
- * It disables the packet I/O interrupt event.
- * It deletes registered eventfds and closes the open fds.
- *
- * @param intr_handle
- * Pointer to the interrupt handle.
- */
-void
-rte_intr_efd_disable(struct rte_intr_handle *intr_handle);
-
-/**
- * The packet I/O interrupt on datapath is enabled or not.
- *
- * @param intr_handle
- * Pointer to the interrupt handle.
- */
-int
-rte_intr_dp_is_en(struct rte_intr_handle *intr_handle);
-
-/**
- * The interrupt handle instance allows other causes or not.
- * Other causes stand for any none packet I/O interrupts.
- *
- * @param intr_handle
- * Pointer to the interrupt handle.
- */
-int
-rte_intr_allow_others(struct rte_intr_handle *intr_handle);
-
-/**
- * The multiple interrupt vector capability of interrupt handle instance.
- * It returns zero if no multiple interrupt vector support.
- *
- * @param intr_handle
- * Pointer to the interrupt handle.
- */
-int
-rte_intr_cap_multiple(struct rte_intr_handle *intr_handle);
-
-/**
- * @warning
- * @b EXPERIMENTAL: this API may change without prior notice
- *
- * @internal
- * Check if currently executing in interrupt context
- *
- * @return
- * - non zero in case of interrupt context
- * - zero in case of process context
- */
-__rte_experimental
-int
-rte_thread_is_intr(void);
-
#endif /* _RTE_EAL_INTERRUPTS_H_ */
diff --git a/lib/eal/include/rte_epoll.h b/lib/eal/include/rte_epoll.h
new file mode 100644
index 0000000000..182353cfd4
--- /dev/null
+++ b/lib/eal/include/rte_epoll.h
@@ -0,0 +1,116 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2021 Marvell International Ltd.
+ */
+
+#ifndef __RTE_EPOLL_H__
+#define __RTE_EPOLL_H__
+
+/**
+ * @file
+ * The rte_epoll interface provides functions to add/delete events
+ * and to wait/poll for events.
+ */
+
+#include <rte_compat.h>
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+#define RTE_INTR_EVENT_ADD 1UL
+#define RTE_INTR_EVENT_DEL 2UL
+
+typedef void (*rte_intr_event_cb_t)(int fd, void *arg);
+
+struct rte_epoll_data {
+ uint32_t event; /**< event type */
+ void *data; /**< User data */
+ rte_intr_event_cb_t cb_fun; /**< IN: callback fun */
+ void *cb_arg; /**< IN: callback arg */
+};
+
+enum {
+ RTE_EPOLL_INVALID = 0,
+ RTE_EPOLL_VALID,
+ RTE_EPOLL_EXEC,
+};
+
+/** interrupt epoll event obj, taken by epoll_event.ptr */
+struct rte_epoll_event {
+ uint32_t status; /**< OUT: event status */
+ int fd; /**< OUT: event fd */
+ int epfd; /**< OUT: epoll instance the ev associated with */
+ struct rte_epoll_data epdata;
+};
+
+#define RTE_EPOLL_PER_THREAD -1 /**< to hint using per thread epfd */
+
+/**
+ * It waits for events on the epoll instance.
+ * Retries if signal received.
+ *
+ * @param epfd
+ * Epoll instance fd on which the caller wait for events.
+ * @param events
+ * Memory area contains the events that will be available for the caller.
+ * @param maxevents
+ * Up to maxevents are returned, must greater than zero.
+ * @param timeout
+ * Specifying a timeout of -1 causes a block indefinitely.
+ * Specifying a timeout equal to zero cause to return immediately.
+ * @return
+ * - On success, returns the number of available event.
+ * - On failure, a negative value.
+ */
+int
+rte_epoll_wait(int epfd, struct rte_epoll_event *events,
+ int maxevents, int timeout);
+
+/**
+ * It waits for events on the epoll instance.
+ * Does not retry if signal received.
+ *
+ * @param epfd
+ * Epoll instance fd on which the caller wait for events.
+ * @param events
+ * Memory area contains the events that will be available for the caller.
+ * @param maxevents
+ * Up to maxevents are returned, must greater than zero.
+ * @param timeout
+ * Specifying a timeout of -1 causes a block indefinitely.
+ * Specifying a timeout equal to zero cause to return immediately.
+ * @return
+ * - On success, returns the number of available event.
+ * - On failure, a negative value.
+ */
+__rte_experimental
+int
+rte_epoll_wait_interruptible(int epfd, struct rte_epoll_event *events,
+ int maxevents, int timeout);
+
+/**
+ * It performs control operations on epoll instance referred by the epfd.
+ * It requests that the operation op be performed for the target fd.
+ *
+ * @param epfd
+ * Epoll instance fd on which the caller perform control operations.
+ * @param op
+ * The operation be performed for the target fd.
+ * @param fd
+ * The target fd on which the control ops perform.
+ * @param event
+ * Describes the object linked to the fd.
+ * Note: The caller must take care the object deletion after CTL_DEL.
+ * @return
+ * - On success, zero.
+ * - On failure, a negative value.
+ */
+int
+rte_epoll_ctl(int epfd, int op, int fd,
+ struct rte_epoll_event *event);
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif /* __RTE_EPOLL_H__ */
diff --git a/lib/eal/include/rte_interrupts.h b/lib/eal/include/rte_interrupts.h
index cc3bf45d8c..afc3262967 100644
--- a/lib/eal/include/rte_interrupts.h
+++ b/lib/eal/include/rte_interrupts.h
@@ -5,8 +5,11 @@
#ifndef _RTE_INTERRUPTS_H_
#define _RTE_INTERRUPTS_H_
+#include <stdbool.h>
+
#include <rte_common.h>
#include <rte_compat.h>
+#include <rte_epoll.h>
/**
* @file
@@ -22,6 +25,10 @@ extern "C" {
/** Interrupt handle */
struct rte_intr_handle;
+#define RTE_INTR_HANDLE_DEFAULT_SIZE 1
+
+#include "rte_eal_interrupts.h"
+
/** Function to be registered for the specific interrupt */
typedef void (*rte_intr_callback_fn)(void *cb_arg);
@@ -32,8 +39,6 @@ typedef void (*rte_intr_callback_fn)(void *cb_arg);
typedef void (*rte_intr_unregister_callback_fn)(struct rte_intr_handle *intr_handle,
void *cb_arg);
-#include "rte_eal_interrupts.h"
-
/**
* It registers the callback for the specific interrupt. Multiple
* callbacks can be registered at the same time.
@@ -163,6 +168,650 @@ int rte_intr_disable(const struct rte_intr_handle *intr_handle);
__rte_experimental
int rte_intr_ack(const struct rte_intr_handle *intr_handle);
+/**
+ * The function returns the per thread epoll instance.
+ *
+ * @return
+ * epfd the epoll instance referred to.
+ */
+int
+rte_intr_tls_epfd(void);
+
+/**
+ * @param intr_handle
+ * Pointer to the interrupt handle.
+ * @param epfd
+ * Epoll instance fd which the intr vector associated to.
+ * @param op
+ * The operation be performed for the vector.
+ * Operation type of {ADD, DEL}.
+ * @param vec
+ * RX intr vector number added to the epoll instance wait list.
+ * @param data
+ * User raw data.
+ * @return
+ * - On success, zero.
+ * - On failure, a negative value.
+ */
+int
+rte_intr_rx_ctl(struct rte_intr_handle *intr_handle,
+ int epfd, int op, unsigned int vec, void *data);
+
+/**
+ * It deletes registered eventfds.
+ *
+ * @param intr_handle
+ * Pointer to the interrupt handle.
+ */
+void
+rte_intr_free_epoll_fd(struct rte_intr_handle *intr_handle);
+
+/**
+ * It enables the packet I/O interrupt event if it's necessary.
+ * It creates event fd for each interrupt vector when MSIX is used,
+ * otherwise it multiplexes a single event fd.
+ *
+ * @param intr_handle
+ * Pointer to the interrupt handle.
+ * @param nb_efd
+ * Number of interrupt vector trying to enable.
+ * The value 0 is not allowed.
+ * @return
+ * - On success, zero.
+ * - On failure, a negative value.
+ */
+int
+rte_intr_efd_enable(struct rte_intr_handle *intr_handle, uint32_t nb_efd);
+
+/**
+ * It disables the packet I/O interrupt event.
+ * It deletes registered eventfds and closes the open fds.
+ *
+ * @param intr_handle
+ * Pointer to the interrupt handle.
+ */
+void
+rte_intr_efd_disable(struct rte_intr_handle *intr_handle);
+
+/**
+ * The packet I/O interrupt on datapath is enabled or not.
+ *
+ * @param intr_handle
+ * Pointer to the interrupt handle.
+ */
+int
+rte_intr_dp_is_en(struct rte_intr_handle *intr_handle);
+
+/**
+ * The interrupt handle instance allows other causes or not.
+ * Other causes stand for any none packet I/O interrupts.
+ *
+ * @param intr_handle
+ * Pointer to the interrupt handle.
+ */
+int
+rte_intr_allow_others(struct rte_intr_handle *intr_handle);
+
+/**
+ * The multiple interrupt vector capability of interrupt handle instance.
+ * It returns zero if no multiple interrupt vector support.
+ *
+ * @param intr_handle
+ * Pointer to the interrupt handle.
+ */
+int
+rte_intr_cap_multiple(struct rte_intr_handle *intr_handle);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * @internal
+ * Check if currently executing in interrupt context
+ *
+ * @return
+ * - non zero in case of interrupt context
+ * - zero in case of process context
+ */
+__rte_experimental
+int
+rte_thread_is_intr(void);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * It allocates memory for interrupt instances based on size provided by user
+ * i.e. whether a single handle or array of handles is defined by size. Memory
+ * to be allocated from a hugepage or normal allocation is also defined by user.
+ * Default memory allocation for event fds and event list array is done which
+ * can be realloced later as per the requirement.
+ *
+ * This function should be called from application or driver, before calling any
+ * of the interrupt APIs.
+ *
+ * @param size
+ * No of interrupt instances.
+ * @param from_hugepage
+ * Memory allocation from hugepage or normal allocation
+ *
+ * @return
+ * - On success, address of first interrupt handle.
+ * - On failure, NULL.
+ */
+__rte_experimental
+struct rte_intr_handle *rte_intr_handle_instance_alloc(int size,
+ bool from_hugepage);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Returns the address of interrupt handle instance as per the index
+ * provided.
+ *
+ * @param intr_handle
+ * Base address of interrupt handle array.
+ * @param index
+ * Index of the interrupt handle
+ *
+ * @return
+ * - On success, address of interrupt handle at index
+ * - On failure, NULL.
+ */
+__rte_experimental
+struct rte_intr_handle *rte_intr_handle_instance_index_get(
+ struct rte_intr_handle *intr_handle, int index);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * This API is used to free the memory allocated for event fds. event lists
+ * and interrupt handle array.
+ *
+ * @param intr_handle
+ * Base address of interrupt handle array.
+ *
+ */
+__rte_experimental
+void rte_intr_handle_instance_free(struct rte_intr_handle *intr_handle);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * This API is used to populate interrupt handle at a given index of array
+ * of interrupt handles, with the values defined in src handler.
+ *
+ * @param intr_handle
+ * Start address of interrupt handles
+ * @param
+ * Source interrupt handle to be cloned.
+ * @param index
+ * Index of the interrupt handle
+ *
+ * @return
+ * - On success, zero.
+ * - On failure, a negative value.
+ */
+__rte_experimental
+int rte_intr_handle_instance_index_set(struct rte_intr_handle *intr_handle,
+ const struct rte_intr_handle *src,
+ int index);
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * This API is used to set the fd field of interrupt handle with user provided
+ * file descriptor.
+ *
+ * @param intr_handle
+ * pointer to the interrupt handle.
+ * @param fd
+ * file descriptor value provided by user.
+ *
+ * @return
+ * - On success, zero.
+ * - On failure, a negative value.
+ */
+__rte_experimental
+int rte_intr_handle_fd_set(struct rte_intr_handle *intr_handle, int fd);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Returns the fd field of the given interrupt handle instance.
+ *
+ * @param intr_handle
+ * pointer to the interrupt handle.
+ *
+ * @return
+ * - On success, fd field.
+ * - On failure, a negative value.
+ */
+__rte_experimental
+int rte_intr_handle_fd_get(const struct rte_intr_handle *intr_handle);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * This API is used to set the type field of interrupt handle with user provided
+ * interrupt type.
+ *
+ * @param intr_handle
+ * pointer to the interrupt handle.
+ * @param type
+ * interrupt type
+ *
+ * @return
+ * - On success, zero.
+ * - On failure, a negative value.
+ */
+__rte_experimental
+int rte_intr_handle_type_set(struct rte_intr_handle *intr_handle,
+ enum rte_intr_handle_type type);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Returns the type field of the given interrupt handle instance.
+ *
+ * @param intr_handle
+ * pointer to the interrupt handle.
+ *
+ * @return
+ * - On success, interrupt type
+ * - On failure, RTE_INTR_HANDLE_UNKNOWN.
+ */
+__rte_experimental
+enum rte_intr_handle_type rte_intr_handle_type_get(
+ const struct rte_intr_handle *intr_handle);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * This API is used to set the device fd field of interrupt handle with user
+ * provided dev fd. Device fd corresponds to VFIO device fd or UIO config fd.
+ *
+ * @param intr_handle
+ * pointer to the interrupt handle.
+ * @param fd
+ * interrupt type
+ *
+ * @return
+ * - On success, zero.
+ * - On failure, a negative value.
+ */
+__rte_experimental
+int rte_intr_handle_dev_fd_set(struct rte_intr_handle *intr_handle, int fd);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Returns the device fd field of the given interrupt handle instance.
+ *
+ * @param intr_handle
+ * pointer to the interrupt handle.
+ *
+ * @return
+ * - On success, dev fd.
+ * - On failure, a negative value.
+ */
+__rte_experimental
+int rte_intr_handle_dev_fd_get(const struct rte_intr_handle *intr_handle);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * This API is used to set the max intr field of interrupt handle with user
+ * provided max intr value.
+ *
+ * @param intr_handle
+ * pointer to the interrupt handle.
+ * @param max_intr
+ * interrupt type
+ *
+ * @return
+ * - On success, zero.
+ * - On failure, a negative value.
+ */
+__rte_experimental
+int rte_intr_handle_max_intr_set(struct rte_intr_handle *intr_handle,
+ int max_intr);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Returns the max intr field of the given interrupt handle instance.
+ *
+ * @param intr_handle
+ * pointer to the interrupt handle.
+ *
+ * @return
+ * - On success, max intr.
+ * - On failure, a negative value.
+ */
+__rte_experimental
+int rte_intr_handle_max_intr_get(const struct rte_intr_handle *intr_handle);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * This API is used to set the no of event fd field of interrupt handle with
+ * user provided available event file descriptor value.
+ *
+ * @param intr_handle
+ * pointer to the interrupt handle.
+ * @param nb_efd
+ * Available event fd
+ *
+ * @return
+ * - On success, zero.
+ * - On failure, a negative value.
+ */
+__rte_experimental
+int rte_intr_handle_nb_efd_set(struct rte_intr_handle *intr_handle, int nb_efd);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Returns the no of available event fd field of the given interrupt handle
+ * instance.
+ *
+ * @param intr_handle
+ * pointer to the interrupt handle.
+ *
+ * @return
+ * - On success, nb_efd
+ * - On failure, a negative value.
+ */
+__rte_experimental
+int rte_intr_handle_nb_efd_get(const struct rte_intr_handle *intr_handle);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Returns the no of interrupt vector field of the given interrupt handle
+ * instance. This field is to configured on device probe time, and based on
+ * this value efds and elist arrays are dynamically allocated. By default
+ * this value is set to RTE_MAX_RXTX_INTR_VEC_ID.
+ * For eg. in case of PCI device, its msix size is queried and efds/elist
+ * arrays are allocated accordingly.
+ *
+ * @param intr_handle
+ * pointer to the interrupt handle.
+ *
+ * @return
+ * - On success, nb_intr
+ * - On failure, a negative value.
+ */
+__rte_experimental
+int rte_intr_handle_nb_intr_get(const struct rte_intr_handle *intr_handle);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * This API is used to set the event fd counter size field of interrupt handle
+ * with user provided efd counter size.
+ *
+ * @param intr_handle
+ * pointer to the interrupt handle.
+ * @param efd_counter_size
+ * size of efd counter, used for vdev
+ *
+ * @return
+ * - On success, zero.
+ * - On failure, a negative value.
+ */
+__rte_experimental
+int rte_intr_handle_efd_counter_size_set(struct rte_intr_handle *intr_handle,
+ uint8_t efd_counter_size);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Returns the event fd counter size field of the given interrupt handle
+ * instance.
+ *
+ * @param intr_handle
+ * pointer to the interrupt handle.
+ *
+ * @return
+ * - On success, efd_counter_size
+ * - On failure, a negative value.
+ */
+__rte_experimental
+int rte_intr_handle_efd_counter_size_get(
+ const struct rte_intr_handle *intr_handle);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Returns the base address of the event fds array field of given interrupt
+ * handle.
+ *
+ * @param intr_handle
+ * pointer to the interrupt handle.
+ *
+ * @return
+ * - On success, efds base address
+ * - On failure, a negative value.
+ */
+__rte_experimental
+int *rte_intr_handle_efds_base(struct rte_intr_handle *intr_handle);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * This API is used to set the event fd array index with the given fd.
+ *
+ * @param intr_handle
+ * pointer to the interrupt handle.
+ * @param index
+ * efds array index to be set
+ * @param fd
+ * event fd
+ *
+ * @return
+ * - On success, zero.
+ * - On failure, a negative value.
+ */
+__rte_experimental
+int rte_intr_handle_efds_index_set(struct rte_intr_handle *intr_handle,
+ int index, int fd);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Returns the fd value of event fds array at a given index.
+ *
+ * @param intr_handle
+ * pointer to the interrupt handle.
+ * @param index
+ * efds array index to be returned
+ *
+ * @return
+ * - On success, fd
+ * - On failure, a negative value.
+ */
+__rte_experimental
+int rte_intr_handle_efds_index_get(const struct rte_intr_handle *intr_handle,
+ int index);
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * This API is used to set the event list array index with the given elist
+ * instance.
+ *
+ * @param intr_handle
+ * pointer to the interrupt handle.
+ * @param index
+ * elist array index to be set
+ * @param elist
+ * event list instance of struct rte_epoll_event
+ *
+ * @return
+ * - On success, zero.
+ * - On failure, a negative value.
+ */
+__rte_experimental
+int rte_intr_handle_elist_index_set(struct rte_intr_handle *intr_handle,
+ int index, struct rte_epoll_event elist);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Returns the address of elist instance of event list array at a given index.
+ *
+ * @param intr_handle
+ * pointer to the interrupt handle.
+ * @param index
+ * elist array index to be returned
+ *
+ * @return
+ * - On success, elist
+ * - On failure, a negative value.
+ */
+__rte_experimental
+struct rte_epoll_event *rte_intr_handle_elist_index_get(
+ struct rte_intr_handle *intr_handle, int index);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Allocates the memory of interrupt vector list array, with size defining the
+ * no of elements required in the array.
+ *
+ * @param intr_handle
+ * pointer to the interrupt handle.
+ * @param name
+ * Name assigned to the allocation, or NULL.
+ * @param size
+ * No of element required in the array.
+ *
+ * @return
+ * - On success, zero
+ * - On failure, a negative value.
+ */
+__rte_experimental
+int rte_intr_handle_vec_list_alloc(struct rte_intr_handle *intr_handle,
+ const char *name, int size);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Sets the vector value at given index of interrupt vector list field of given
+ * interrupt handle.
+ *
+ * @param intr_handle
+ * pointer to the interrupt handle.
+ * @param index
+ * intr_vec array index to be set
+ * @param vec
+ * Interrupt vector value.
+ *
+ * @return
+ * - On success, zero
+ * - On failure, a negative value.
+ */
+__rte_experimental
+int rte_intr_handle_vec_list_index_set(struct rte_intr_handle *intr_handle,
+ int index, int vec);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Returns the vector value at the given index of interrupt vector list array.
+ *
+ * @param intr_handle
+ * pointer to the interrupt handle.
+ * @param index
+ * intr_vec array index to be returned
+ *
+ * @return
+ * - On success, interrupt vector
+ * - On failure, a negative value.
+ */
+__rte_experimental
+int rte_intr_handle_vec_list_index_get(
+ const struct rte_intr_handle *intr_handle, int index);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Freeing the memory allocated for interrupt vector list array.
+ *
+ * @param intr_handle
+ * pointer to the interrupt handle.
+ *
+ * @return
+ * - On success, zero
+ * - On failure, a negative value.
+ */
+__rte_experimental
+void rte_intr_handle_vec_list_free(struct rte_intr_handle *intr_handle);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Returns the base address of interrupt vector list array.
+ *
+ * @param intr_handle
+ * pointer to the interrupt handle.
+ *
+ * @return
+ * - On success, base address of intr_vec array
+ * - On failure, a negative value.
+ */
+__rte_experimental
+int *rte_intr_handle_vec_list_base(const struct rte_intr_handle *intr_handle);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Reallocates the size efds and elist array based on size provided by user.
+ * By default efds and elist array are allocated with default size
+ * RTE_MAX_RXTX_INTR_VEC_ID on interrupt handle array creation. Later on device
+ * probe, device may have capability of more interrupts than
+ * RTE_MAX_RXTX_INTR_VEC_ID. Hence using this API, PMDs can reallocate the
+ * arrays as per the max interrupts capability of device.
+ *
+ * @param intr_handle
+ * pointer to the interrupt handle.
+ * @param size
+ * efds and elist array size.
+ *
+ * @return
+ * - On success, zero
+ * - On failure, a negative value.
+ */
+__rte_experimental
+int rte_intr_handle_event_list_update(struct rte_intr_handle *intr_handle,
+ int size);
#ifdef __cplusplus
}
#endif
--
2.18.0
^ permalink raw reply [flat|nested] 152+ messages in thread
* [dpdk-dev] [RFC 2/7] eal/interrupts: implement get set APIs
2021-08-26 14:57 [dpdk-dev] [RFC 0/7] make rte_intr_handle internal Harman Kalra
2021-08-26 14:57 ` [dpdk-dev] [RFC 1/7] eal: interrupt handle API prototypes Harman Kalra
@ 2021-08-26 14:57 ` Harman Kalra
2021-08-31 15:53 ` Kinsella, Ray
2021-08-26 14:57 ` [dpdk-dev] [RFC 3/7] eal/interrupts: avoid direct access to interrupt handle Harman Kalra
` (10 subsequent siblings)
12 siblings, 1 reply; 152+ messages in thread
From: Harman Kalra @ 2021-08-26 14:57 UTC (permalink / raw)
To: dev, Harman Kalra, Ray Kinsella
Implementing get/set APIs for interrupt handle fields.
Any change to the interrupt handle fields should be made
through these APIs; a short usage sketch follows.
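A short usage sketch of the alloc/set/get/free flow implemented below
(fd is a hypothetical event file descriptor):

    #include <rte_errno.h>
    #include <rte_interrupts.h>

    static int
    intr_handle_example(int fd)
    {
            struct rte_intr_handle *h;
            int ret;

            h = rte_intr_handle_instance_alloc(RTE_INTR_HANDLE_DEFAULT_SIZE,
                                               false);
            if (h == NULL)
                    return -rte_errno;

            ret = rte_intr_handle_fd_set(h, fd);
            if (ret == 0)
                    ret = rte_intr_handle_fd_get(h); /* reads back fd */

            rte_intr_handle_instance_free(h);
            return ret;
    }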
Signed-off-by: Harman Kalra <hkalra@marvell.com>
---
lib/eal/common/eal_common_interrupts.c | 506 +++++++++++++++++++++++++
lib/eal/common/meson.build | 2 +
lib/eal/include/rte_eal_interrupts.h | 6 +-
lib/eal/version.map | 30 ++
4 files changed, 543 insertions(+), 1 deletion(-)
create mode 100644 lib/eal/common/eal_common_interrupts.c
diff --git a/lib/eal/common/eal_common_interrupts.c b/lib/eal/common/eal_common_interrupts.c
new file mode 100644
index 0000000000..2e4fed96f0
--- /dev/null
+++ b/lib/eal/common/eal_common_interrupts.c
@@ -0,0 +1,506 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2021 Marvell.
+ */
+
+#include <stdlib.h>
+#include <string.h>
+
+#include <rte_errno.h>
+#include <rte_log.h>
+#include <rte_malloc.h>
+
+#include <rte_interrupts.h>
+
+
+struct rte_intr_handle *rte_intr_handle_instance_alloc(int size,
+ bool from_hugepage)
+{
+ struct rte_intr_handle *intr_handle;
+ int i;
+
+ if (from_hugepage)
+ intr_handle = rte_zmalloc(NULL,
+ size * sizeof(struct rte_intr_handle),
+ 0);
+ else
+ intr_handle = calloc(1, size * sizeof(struct rte_intr_handle));
+ if (!intr_handle) {
+ RTE_LOG(ERR, EAL, "Fail to allocate intr_handle\n");
+ rte_errno = ENOMEM;
+ return NULL;
+ }
+
+ for (i = 0; i < size; i++) {
+ intr_handle[i].nb_intr = RTE_MAX_RXTX_INTR_VEC_ID;
+ intr_handle[i].alloc_from_hugepage = from_hugepage;
+ }
+
+ return intr_handle;
+}
+
+struct rte_intr_handle *rte_intr_handle_instance_index_get(
+ struct rte_intr_handle *intr_handle, int index)
+{
+ if (intr_handle == NULL) {
+ RTE_LOG(ERR, EAL, "Interrupt instance unallocated\n");
+ rte_errno = ENOMEM;
+ return NULL;
+ }
+
+ return &intr_handle[index];
+}
+
+int rte_intr_handle_instance_index_set(struct rte_intr_handle *intr_handle,
+ const struct rte_intr_handle *src,
+ int index)
+{
+ if (intr_handle == NULL) {
+ RTE_LOG(ERR, EAL, "Interrupt instance unallocated\n");
+ rte_errno = ENOTSUP;
+ goto fail;
+ }
+
+ if (src == NULL) {
+ RTE_LOG(ERR, EAL, "Source interrupt instance unallocated\n");
+ rte_errno = EINVAL;
+ goto fail;
+ }
+
+ if (index < 0) {
+ RTE_LOG(ERR, EAL, "Index cannot be negative\n");
+ rte_errno = EINVAL;
+ goto fail;
+ }
+
+ intr_handle[index].fd = src->fd;
+ intr_handle[index].vfio_dev_fd = src->vfio_dev_fd;
+ intr_handle[index].type = src->type;
+ intr_handle[index].max_intr = src->max_intr;
+ intr_handle[index].nb_efd = src->nb_efd;
+ intr_handle[index].efd_counter_size = src->efd_counter_size;
+
+ memcpy(intr_handle[index].efds, src->efds, src->nb_intr);
+ memcpy(intr_handle[index].elist, src->elist, src->nb_intr);
+
+ return 0;
+fail:
+ return rte_errno;
+}
+
+void rte_intr_handle_instance_free(struct rte_intr_handle *intr_handle)
+{
+ if (intr_handle == NULL) {
+ RTE_LOG(ERR, EAL, "Interrupt instance unallocated\n");
+ rte_errno = ENOTSUP;
+ }
+
+ if (intr_handle->alloc_from_hugepage)
+ rte_free(intr_handle);
+ else
+ free(intr_handle);
+}
+
+int rte_intr_handle_fd_set(struct rte_intr_handle *intr_handle, int fd)
+{
+ if (intr_handle == NULL) {
+ RTE_LOG(ERR, EAL, "Interrupt instance unallocated\n");
+ rte_errno = ENOTSUP;
+ goto fail;
+ }
+
+ intr_handle->fd = fd;
+
+ return 0;
+fail:
+ return rte_errno;
+}
+
+int rte_intr_handle_fd_get(const struct rte_intr_handle *intr_handle)
+{
+ if (intr_handle == NULL) {
+ RTE_LOG(ERR, EAL, "Interrupt instance unallocated\n");
+ rte_errno = ENOTSUP;
+ goto fail;
+ }
+
+ return intr_handle->fd;
+fail:
+ return rte_errno;
+}
+
+int rte_intr_handle_type_set(struct rte_intr_handle *intr_handle,
+ enum rte_intr_handle_type type)
+{
+ if (intr_handle == NULL) {
+ RTE_LOG(ERR, EAL, "Interrupt instance unallocated\n");
+ rte_errno = ENOTSUP;
+ goto fail;
+ }
+
+ intr_handle->type = type;
+
+ return 0;
+fail:
+ return rte_errno;
+}
+
+enum rte_intr_handle_type rte_intr_handle_type_get(
+ const struct rte_intr_handle *intr_handle)
+{
+ if (intr_handle == NULL) {
+ RTE_LOG(ERR, EAL, "Interrupt instance unallocated\n");
+ rte_errno = ENOTSUP;
+ return RTE_INTR_HANDLE_UNKNOWN;
+ }
+
+ return intr_handle->type;
+}
+
+int rte_intr_handle_dev_fd_set(struct rte_intr_handle *intr_handle, int fd)
+{
+ if (intr_handle == NULL) {
+ RTE_LOG(ERR, EAL, "Interrupt instance unallocated\n");
+ rte_errno = ENOTSUP;
+ goto fail;
+ }
+
+ intr_handle->vfio_dev_fd = fd;
+
+ return 0;
+fail:
+ return rte_errno;
+}
+
+int rte_intr_handle_dev_fd_get(const struct rte_intr_handle *intr_handle)
+{
+ if (intr_handle == NULL) {
+ RTE_LOG(ERR, EAL, "Interrupt instance unallocated\n");
+ rte_errno = ENOTSUP;
+ goto fail;
+ }
+
+ return intr_handle->vfio_dev_fd;
+fail:
+ return rte_errno;
+}
+
+int rte_intr_handle_max_intr_set(struct rte_intr_handle *intr_handle,
+ int max_intr)
+{
+ if (intr_handle == NULL) {
+ RTE_LOG(ERR, EAL, "Interrupt instance unallocated\n");
+ rte_errno = ENOTSUP;
+ goto fail;
+ }
+
+ if (max_intr > intr_handle->nb_intr) {
+ RTE_LOG(ERR, EAL, "Max_intr=%d greater than nb_intr=%d",
+ max_intr, intr_handle->nb_intr);
+ rte_errno = ERANGE;
+ goto fail;
+ }
+
+ intr_handle->max_intr = max_intr;
+
+ return 0;
+fail:
+ return rte_errno;
+}
+
+int rte_intr_handle_max_intr_get(const struct rte_intr_handle *intr_handle)
+{
+ if (intr_handle == NULL) {
+ RTE_LOG(ERR, EAL, "Interrupt instance unallocated\n");
+ rte_errno = ENOTSUP;
+ goto fail;
+ }
+
+ return intr_handle->max_intr;
+fail:
+ return rte_errno;
+}
+
+int rte_intr_handle_nb_efd_set(struct rte_intr_handle *intr_handle,
+ int nb_efd)
+{
+ if (intr_handle == NULL) {
+ RTE_LOG(ERR, EAL, "Interrupt instance unallocated\n");
+ rte_errno = ENOTSUP;
+ goto fail;
+ }
+
+ intr_handle->nb_efd = nb_efd;
+
+ return 0;
+fail:
+ return rte_errno;
+}
+
+int rte_intr_handle_nb_efd_get(const struct rte_intr_handle *intr_handle)
+{
+ if (intr_handle == NULL) {
+ RTE_LOG(ERR, EAL, "Interrupt instance unallocated\n");
+ rte_errno = ENOTSUP;
+ goto fail;
+ }
+
+ return intr_handle->nb_efd;
+fail:
+ return rte_errno;
+}
+
+int rte_intr_handle_nb_intr_get(const struct rte_intr_handle *intr_handle)
+{
+ if (intr_handle == NULL) {
+ RTE_LOG(ERR, EAL, "Interrupt instance unallocated\n");
+ rte_errno = ENOTSUP;
+ goto fail;
+ }
+
+ return intr_handle->nb_intr;
+fail:
+ return rte_errno;
+}
+
+int rte_intr_handle_efd_counter_size_set(struct rte_intr_handle *intr_handle,
+ uint8_t efd_counter_size)
+{
+ if (intr_handle == NULL) {
+ RTE_LOG(ERR, EAL, "Interrupt instance unallocated\n");
+ rte_errno = ENOTSUP;
+ goto fail;
+ }
+
+ intr_handle->efd_counter_size = efd_counter_size;
+
+ return 0;
+fail:
+ return rte_errno;
+}
+
+int rte_intr_handle_efd_counter_size_get(
+ const struct rte_intr_handle *intr_handle)
+{
+ if (intr_handle == NULL) {
+ RTE_LOG(ERR, EAL, "Interrupt instance unallocated\n");
+ rte_errno = ENOTSUP;
+ goto fail;
+ }
+
+ return intr_handle->efd_counter_size;
+fail:
+ return rte_errno;
+}
+
+int *rte_intr_handle_efds_base(struct rte_intr_handle *intr_handle)
+{
+ if (intr_handle == NULL) {
+ RTE_LOG(ERR, EAL, "Interrupt instance unallocated\n");
+ rte_errno = ENOTSUP;
+ goto fail;
+ }
+
+ return intr_handle->efds;
+fail:
+ return NULL;
+}
+
+int rte_intr_handle_efds_index_get(const struct rte_intr_handle *intr_handle,
+ int index)
+{
+ if (intr_handle == NULL) {
+ RTE_LOG(ERR, EAL, "Interrupt instance unallocated\n");
+ rte_errno = ENOTSUP;
+ goto fail;
+ }
+
+ if (index >= intr_handle->nb_intr) {
+ RTE_LOG(ERR, EAL, "Invalid size %d, max limit %d\n", index,
+ intr_handle->nb_intr);
+ rte_errno = EINVAL;
+ goto fail;
+ }
+
+ return intr_handle->efds[index];
+fail:
+ return rte_errno;
+}
+
+int rte_intr_handle_efds_index_set(struct rte_intr_handle *intr_handle,
+ int index, int fd)
+{
+ if (intr_handle == NULL) {
+ RTE_LOG(ERR, EAL, "Interrupt instance unallocated\n");
+ rte_errno = ENOTSUP;
+ goto fail;
+ }
+
+ if (index >= intr_handle->nb_intr) {
+ RTE_LOG(ERR, EAL, "Invalid size %d, max limit %d\n", index,
+ intr_handle->nb_intr);
+ rte_errno = ERANGE;
+ goto fail;
+ }
+
+ intr_handle->efds[index] = fd;
+
+ return 0;
+fail:
+ return rte_errno;
+}
+
+struct rte_epoll_event *rte_intr_handle_elist_index_get(
+ struct rte_intr_handle *intr_handle, int index)
+{
+ if (intr_handle == NULL) {
+ RTE_LOG(ERR, EAL, "Interrupt instance unallocated\n");
+ rte_errno = ENOTSUP;
+ goto fail;
+ }
+
+ if (index >= intr_handle->nb_intr) {
+ RTE_LOG(ERR, EAL, "Invalid size %d, max limit %d\n", index,
+ intr_handle->nb_intr);
+ rte_errno = ERANGE;
+ goto fail;
+ }
+
+ return &intr_handle->elist[index];
+fail:
+ return NULL;
+}
+
+int rte_intr_handle_elist_index_set(struct rte_intr_handle *intr_handle,
+ int index, struct rte_epoll_event elist)
+{
+ if (intr_handle == NULL) {
+ RTE_LOG(ERR, EAL, "Interrupt instance unallocated\n");
+ rte_errno = ENOTSUP;
+ goto fail;
+ }
+
+ if (index >= intr_handle->nb_intr) {
+ RTE_LOG(ERR, EAL, "Invalid size %d, max limit %d\n", index,
+ intr_handle->nb_intr);
+ rte_errno = ERANGE;
+ goto fail;
+ }
+
+ intr_handle->elist[index] = elist;
+
+ return 0;
+fail:
+ return rte_errno;
+}
+
+int *rte_intr_handle_vec_list_base(const struct rte_intr_handle *intr_handle)
+{
+ if (intr_handle == NULL) {
+ RTE_LOG(ERR, EAL, "Interrupt instance unallocated\n");
+ rte_errno = ENOTSUP;
+ return NULL;
+ }
+
+ return intr_handle->intr_vec;
+}
+
+int rte_intr_handle_vec_list_alloc(struct rte_intr_handle *intr_handle,
+ const char *name, int size)
+{
+ if (intr_handle == NULL) {
+ RTE_LOG(ERR, EAL, "Interrupt instance unallocated\n");
+ rte_errno = ENOTSUP;
+ goto fail;
+ }
+
+ /* Vector list already allocated */
+ if (intr_handle->intr_vec)
+ return 0;
+
+ if (size > intr_handle->nb_intr) {
+ RTE_LOG(ERR, EAL, "Invalid size %d, max limit %d\n", size,
+ intr_handle->nb_intr);
+ rte_errno = ERANGE;
+ goto fail;
+ }
+
+ intr_handle->intr_vec = rte_zmalloc(name, size * sizeof(int), 0);
+ if (!intr_handle->intr_vec) {
+ RTE_LOG(ERR, EAL, "Failed to allocate %d intr_vec", size);
+ rte_errno = ENOMEM;
+ goto fail;
+ }
+
+ intr_handle->vec_list_size = size;
+
+ return 0;
+fail:
+ return rte_errno;
+}
+
+int rte_intr_handle_vec_list_index_get(
+ const struct rte_intr_handle *intr_handle, int index)
+{
+ if (intr_handle == NULL) {
+ RTE_LOG(ERR, EAL, "Interrupt instance unallocated\n");
+ rte_errno = ENOTSUP;
+ goto fail;
+ }
+
+ if (!intr_handle->intr_vec) {
+ RTE_LOG(ERR, EAL, "Intr vector list not allocated\n");
+ rte_errno = ENOTSUP;
+ goto fail;
+ }
+
+ if (index >= intr_handle->vec_list_size) {
+ RTE_LOG(ERR, EAL, "Index %d greater than vec list size %d\n",
+ index, intr_handle->vec_list_size);
+ rte_errno = ERANGE;
+ goto fail;
+ }
+
+ return intr_handle->intr_vec[index];
+fail:
+ return rte_errno;
+}
+
+int rte_intr_handle_vec_list_index_set(struct rte_intr_handle *intr_handle,
+ int index, int vec)
+{
+ if (intr_handle == NULL) {
+ RTE_LOG(ERR, EAL, "Interrupt instance unallocated\n");
+ rte_errno = ENOTSUP;
+ goto fail;
+ }
+
+ if (!intr_handle->intr_vec) {
+ RTE_LOG(ERR, EAL, "Intr vector list not allocated\n");
+ rte_errno = ENOTSUP;
+ goto fail;
+ }
+
+ if (index >= intr_handle->vec_list_size) {
+ RTE_LOG(ERR, EAL, "Index %d greater than vec list size %d\n",
+ index, intr_handle->vec_list_size);
+ rte_errno = ERANGE;
+ goto fail;
+ }
+
+ intr_handle->intr_vec[index] = vec;
+
+ return 0;
+fail:
+ return rte_errno;
+}
+
+void rte_intr_handle_vec_list_free(struct rte_intr_handle *intr_handle)
+{
+ if (intr_handle == NULL) {
+ RTE_LOG(ERR, EAL, "Interrupt instance unallocated\n");
+ rte_errno = ENOTSUP;
+ }
+
+ rte_free(intr_handle->intr_vec);
+ intr_handle->intr_vec = NULL;
+}
diff --git a/lib/eal/common/meson.build b/lib/eal/common/meson.build
index edfca77779..47f2977539 100644
--- a/lib/eal/common/meson.build
+++ b/lib/eal/common/meson.build
@@ -17,6 +17,7 @@ if is_windows
'eal_common_errno.c',
'eal_common_fbarray.c',
'eal_common_hexdump.c',
+ 'eal_common_interrupts.c',
'eal_common_launch.c',
'eal_common_lcore.c',
'eal_common_log.c',
@@ -53,6 +54,7 @@ sources += files(
'eal_common_fbarray.c',
'eal_common_hexdump.c',
'eal_common_hypervisor.c',
+ 'eal_common_interrupts.c',
'eal_common_launch.c',
'eal_common_lcore.c',
'eal_common_log.c',
diff --git a/lib/eal/include/rte_eal_interrupts.h b/lib/eal/include/rte_eal_interrupts.h
index 68ca3a042d..216aece61b 100644
--- a/lib/eal/include/rte_eal_interrupts.h
+++ b/lib/eal/include/rte_eal_interrupts.h
@@ -55,13 +55,17 @@ struct rte_intr_handle {
};
void *handle; /**< device driver handle (Windows) */
};
+ bool alloc_from_hugepage;
enum rte_intr_handle_type type; /**< handle type */
uint32_t max_intr; /**< max interrupt requested */
uint32_t nb_efd; /**< number of available efd(event fd) */
uint8_t efd_counter_size; /**< size of efd counter, used for vdev */
+ uint16_t nb_intr;
+ /**< Max vector count, default RTE_MAX_RXTX_INTR_VEC_ID */
int efds[RTE_MAX_RXTX_INTR_VEC_ID]; /**< intr vectors/efds mapping */
struct rte_epoll_event elist[RTE_MAX_RXTX_INTR_VEC_ID];
- /**< intr vector epoll event */
+ /**< intr vector epoll event */
+ uint16_t vec_list_size;
int *intr_vec; /**< intr vector number array */
};
diff --git a/lib/eal/version.map b/lib/eal/version.map
index beeb986adc..56108d0998 100644
--- a/lib/eal/version.map
+++ b/lib/eal/version.map
@@ -426,6 +426,36 @@ EXPERIMENTAL {
# added in 21.08
rte_power_monitor_multi; # WINDOWS_NO_EXPORT
+
+ # added in 21.11
+ rte_intr_handle_fd_set;
+ rte_intr_handle_fd_get;
+ rte_intr_handle_dev_fd_set;
+ rte_intr_handle_dev_fd_get;
+ rte_intr_handle_type_set;
+ rte_intr_handle_type_get;
+ rte_intr_handle_instance_alloc;
+ rte_intr_handle_instance_index_get;
+ rte_intr_handle_instance_free;
+ rte_intr_handle_instance_index_set;
+ rte_intr_handle_event_list_update;
+ rte_intr_handle_max_intr_set;
+ rte_intr_handle_max_intr_get;
+ rte_intr_handle_nb_efd_set;
+ rte_intr_handle_nb_efd_get;
+ rte_intr_handle_nb_intr_get;
+ rte_intr_handle_efds_index_set;
+ rte_intr_handle_efds_index_get;
+ rte_intr_handle_efds_base;
+ rte_intr_handle_elist_index_set;
+ rte_intr_handle_elist_index_get;
+ rte_intr_handle_efd_counter_size_set;
+ rte_intr_handle_efd_counter_size_get;
+ rte_intr_handle_vec_list_alloc;
+ rte_intr_handle_vec_list_index_set;
+ rte_intr_handle_vec_list_index_get;
+ rte_intr_handle_vec_list_free;
+ rte_intr_handle_vec_list_base;
};
INTERNAL {
--
2.18.0
^ permalink raw reply [flat|nested] 152+ messages in thread
* [dpdk-dev] [RFC 3/7] eal/interrupts: avoid direct access to interrupt handle
2021-08-26 14:57 [dpdk-dev] [RFC 0/7] make rte_intr_handle internal Harman Kalra
2021-08-26 14:57 ` [dpdk-dev] [RFC 1/7] eal: interrupt handle API prototypes Harman Kalra
2021-08-26 14:57 ` [dpdk-dev] [RFC 2/7] eal/interrupts: implement get set APIs Harman Kalra
@ 2021-08-26 14:57 ` Harman Kalra
2021-08-26 14:57 ` [dpdk-dev] [RFC 4/7] test/interrupt: apply get set interrupt handle APIs Harman Kalra
` (9 subsequent siblings)
12 siblings, 0 replies; 152+ messages in thread
From: Harman Kalra @ 2021-08-26 14:57 UTC (permalink / raw)
To: dev, Harman Kalra, Bruce Richardson
Making changes to the interrupt framework to use the interrupt handle
APIs to get/set any field. Direct access to any of the fields should be
avoided to prevent ABI breakage in the future.
Signed-off-by: Harman Kalra <hkalra@marvell.com>
---
lib/eal/freebsd/eal_interrupts.c | 93 ++++++----
lib/eal/linux/eal_interrupts.c | 294 +++++++++++++++++++------------
2 files changed, 241 insertions(+), 146 deletions(-)
diff --git a/lib/eal/freebsd/eal_interrupts.c b/lib/eal/freebsd/eal_interrupts.c
index 86810845fe..5724948d81 100644
--- a/lib/eal/freebsd/eal_interrupts.c
+++ b/lib/eal/freebsd/eal_interrupts.c
@@ -40,7 +40,7 @@ struct rte_intr_callback {
struct rte_intr_source {
TAILQ_ENTRY(rte_intr_source) next;
- struct rte_intr_handle intr_handle; /**< interrupt handle */
+ struct rte_intr_handle *intr_handle; /**< interrupt handle */
struct rte_intr_cb_list callbacks; /**< user callbacks */
uint32_t active;
};
@@ -60,7 +60,7 @@ static int
intr_source_to_kevent(const struct rte_intr_handle *ih, struct kevent *ke)
{
/* alarm callbacks are special case */
- if (ih->type == RTE_INTR_HANDLE_ALARM) {
+ if (rte_intr_handle_type_get(ih) == RTE_INTR_HANDLE_ALARM) {
uint64_t timeout_ns;
/* get soonest alarm timeout */
@@ -89,7 +89,8 @@ rte_intr_callback_register(const struct rte_intr_handle *intr_handle,
int ret = 0, add_event = 0;
/* first do parameter checking */
- if (intr_handle == NULL || intr_handle->fd < 0 || cb == NULL) {
+ if (intr_handle == NULL || rte_intr_handle_fd_get(intr_handle) < 0 ||
+ cb == NULL) {
RTE_LOG(ERR, EAL,
"Registering with invalid input parameter\n");
return -EINVAL;
@@ -103,7 +104,8 @@ rte_intr_callback_register(const struct rte_intr_handle *intr_handle,
/* find the source for this intr_handle */
TAILQ_FOREACH(src, &intr_sources, next) {
- if (src->intr_handle.fd == intr_handle->fd)
+ if (rte_intr_handle_fd_get(src->intr_handle) ==
+ rte_intr_handle_fd_get(intr_handle))
break;
}
@@ -112,8 +114,8 @@ rte_intr_callback_register(const struct rte_intr_handle *intr_handle,
* thing on the list should be eal_alarm_callback() and we may
* be called just to reset the timer.
*/
- if (src != NULL && src->intr_handle.type == RTE_INTR_HANDLE_ALARM &&
- !TAILQ_EMPTY(&src->callbacks)) {
+ if (src != NULL && rte_intr_handle_type_get(src->intr_handle) ==
+ RTE_INTR_HANDLE_ALARM && !TAILQ_EMPTY(&src->callbacks)) {
callback = NULL;
} else {
/* allocate a new interrupt callback entity */
@@ -135,10 +137,20 @@ rte_intr_callback_register(const struct rte_intr_handle *intr_handle,
ret = -ENOMEM;
goto fail;
} else {
- src->intr_handle = *intr_handle;
- TAILQ_INIT(&src->callbacks);
- TAILQ_INSERT_TAIL(&intr_sources, src, next);
- }
+ src->intr_handle =
+ rte_intr_handle_instance_alloc(
+ RTE_INTR_HANDLE_DEFAULT_SIZE, false);
+ if (src->intr_handle == NULL) {
+ RTE_LOG(ERR, EAL, "Can not create intr instance\n");
+ free(callback);
+ ret = -ENOMEM;
+ } else {
+ rte_intr_handle_instance_index_set(
+ src->intr_handle, intr_handle, 0);
+ TAILQ_INIT(&src->callbacks);
+ TAILQ_INSERT_TAIL(&intr_sources, src,
+ next);
+ }
}
/* we had no interrupts for this */
@@ -151,7 +163,8 @@ rte_intr_callback_register(const struct rte_intr_handle *intr_handle,
/* add events to the queue. timer events are special as we need to
* re-set the timer.
*/
- if (add_event || src->intr_handle.type == RTE_INTR_HANDLE_ALARM) {
+ if (add_event || rte_intr_handle_type_get(src->intr_handle) ==
+ RTE_INTR_HANDLE_ALARM) {
struct kevent ke;
memset(&ke, 0, sizeof(ke));
@@ -173,12 +186,13 @@ rte_intr_callback_register(const struct rte_intr_handle *intr_handle,
*/
if (errno == ENODEV)
RTE_LOG(DEBUG, EAL, "Interrupt handle %d not supported\n",
- src->intr_handle.fd);
+ rte_intr_handle_fd_get(src->intr_handle));
else
RTE_LOG(ERR, EAL, "Error adding fd %d "
- "kevent, %s\n",
- src->intr_handle.fd,
- strerror(errno));
+ "kevent, %s\n",
+ rte_intr_handle_fd_get(
+ src->intr_handle),
+ strerror(errno));
ret = -errno;
goto fail;
}
@@ -213,7 +227,7 @@ rte_intr_callback_unregister_pending(const struct rte_intr_handle *intr_handle,
struct rte_intr_callback *cb, *next;
/* do parameter checking first */
- if (intr_handle == NULL || intr_handle->fd < 0) {
+ if (intr_handle == NULL || rte_intr_handle_fd_get(intr_handle) < 0) {
RTE_LOG(ERR, EAL,
"Unregistering with invalid input parameter\n");
return -EINVAL;
@@ -228,7 +242,8 @@ rte_intr_callback_unregister_pending(const struct rte_intr_handle *intr_handle,
/* check if the insterrupt source for the fd is existent */
TAILQ_FOREACH(src, &intr_sources, next)
- if (src->intr_handle.fd == intr_handle->fd)
+ if (rte_intr_handle_fd_get(src->intr_handle) ==
+ rte_intr_handle_fd_get(intr_handle))
break;
/* No interrupt source registered for the fd */
@@ -268,7 +283,7 @@ rte_intr_callback_unregister(const struct rte_intr_handle *intr_handle,
struct rte_intr_callback *cb, *next;
/* do parameter checking first */
- if (intr_handle == NULL || intr_handle->fd < 0) {
+ if (intr_handle == NULL || rte_intr_handle_fd_get(intr_handle) < 0) {
RTE_LOG(ERR, EAL,
"Unregistering with invalid input parameter\n");
return -EINVAL;
@@ -282,7 +297,8 @@ rte_intr_callback_unregister(const struct rte_intr_handle *intr_handle,
/* check if the insterrupt source for the fd is existent */
TAILQ_FOREACH(src, &intr_sources, next)
- if (src->intr_handle.fd == intr_handle->fd)
+ if (rte_intr_handle_fd_get(src->intr_handle) ==
+ rte_intr_handle_fd_get(intr_handle))
break;
/* No interrupt source registered for the fd */
@@ -314,7 +330,8 @@ rte_intr_callback_unregister(const struct rte_intr_handle *intr_handle,
*/
if (kevent(kq, &ke, 1, NULL, 0, NULL) < 0) {
RTE_LOG(ERR, EAL, "Error removing fd %d kevent, %s\n",
- src->intr_handle.fd, strerror(errno));
+ rte_intr_handle_fd_get(src->intr_handle),
+ strerror(errno));
/* removing non-existent even is an expected condition
* in some circumstances (e.g. oneshot events).
*/
@@ -365,17 +382,18 @@ rte_intr_enable(const struct rte_intr_handle *intr_handle)
if (intr_handle == NULL)
return -1;
- if (intr_handle->type == RTE_INTR_HANDLE_VDEV) {
+ if (rte_intr_handle_type_get(intr_handle) == RTE_INTR_HANDLE_VDEV) {
rc = 0;
goto out;
}
- if (intr_handle->fd < 0 || intr_handle->uio_cfg_fd < 0) {
+ if (rte_intr_handle_fd_get(intr_handle) < 0 ||
+ rte_intr_handle_dev_fd_get(intr_handle) < 0) {
rc = -1;
goto out;
}
- switch (intr_handle->type) {
+ switch (rte_intr_handle_type_get(intr_handle)) {
/* not used at this moment */
case RTE_INTR_HANDLE_ALARM:
rc = -1;
@@ -388,7 +406,7 @@ rte_intr_enable(const struct rte_intr_handle *intr_handle)
default:
RTE_LOG(ERR, EAL,
"Unknown handle type of fd %d\n",
- intr_handle->fd);
+ rte_intr_handle_fd_get(intr_handle));
rc = -1;
break;
}
@@ -406,17 +424,18 @@ rte_intr_disable(const struct rte_intr_handle *intr_handle)
if (intr_handle == NULL)
return -1;
- if (intr_handle->type == RTE_INTR_HANDLE_VDEV) {
+ if (rte_intr_handle_type_get(intr_handle) == RTE_INTR_HANDLE_VDEV) {
rc = 0;
goto out;
}
- if (intr_handle->fd < 0 || intr_handle->uio_cfg_fd < 0) {
+ if (rte_intr_handle_fd_get(intr_handle) < 0 ||
+ rte_intr_handle_dev_fd_get(intr_handle) < 0) {
rc = -1;
goto out;
}
- switch (intr_handle->type) {
+ switch (rte_intr_handle_type_get(intr_handle)) {
/* not used at this moment */
case RTE_INTR_HANDLE_ALARM:
rc = -1;
@@ -429,7 +448,7 @@ rte_intr_disable(const struct rte_intr_handle *intr_handle)
default:
RTE_LOG(ERR, EAL,
"Unknown handle type of fd %d\n",
- intr_handle->fd);
+ rte_intr_handle_fd_get(intr_handle));
rc = -1;
break;
}
@@ -441,7 +460,8 @@ rte_intr_disable(const struct rte_intr_handle *intr_handle)
int
rte_intr_ack(const struct rte_intr_handle *intr_handle)
{
- if (intr_handle && intr_handle->type == RTE_INTR_HANDLE_VDEV)
+ if (intr_handle &&
+ rte_intr_handle_type_get(intr_handle) == RTE_INTR_HANDLE_VDEV)
return 0;
return -1;
@@ -463,7 +483,8 @@ eal_intr_process_interrupts(struct kevent *events, int nfds)
rte_spinlock_lock(&intr_lock);
TAILQ_FOREACH(src, &intr_sources, next)
- if (src->intr_handle.fd == event_fd)
+ if (rte_intr_handle_fd_get(src->intr_handle) ==
+ event_fd)
break;
if (src == NULL) {
rte_spinlock_unlock(&intr_lock);
@@ -475,7 +496,7 @@ eal_intr_process_interrupts(struct kevent *events, int nfds)
rte_spinlock_unlock(&intr_lock);
/* set the length to be read dor different handle type */
- switch (src->intr_handle.type) {
+ switch (rte_intr_handle_type_get(src->intr_handle)) {
case RTE_INTR_HANDLE_ALARM:
bytes_read = 0;
call = true;
@@ -546,7 +567,8 @@ eal_intr_process_interrupts(struct kevent *events, int nfds)
/* mark for deletion from the queue */
ke.flags = EV_DELETE;
- if (intr_source_to_kevent(&src->intr_handle, &ke) < 0) {
+ if (intr_source_to_kevent(src->intr_handle,
+ &ke) < 0) {
RTE_LOG(ERR, EAL, "Cannot convert to kevent\n");
rte_spinlock_unlock(&intr_lock);
return;
@@ -557,7 +579,9 @@ eal_intr_process_interrupts(struct kevent *events, int nfds)
*/
if (kevent(kq, &ke, 1, NULL, 0, NULL) < 0) {
RTE_LOG(ERR, EAL, "Error removing fd %d kevent, "
- "%s\n", src->intr_handle.fd,
+ "%s\n",
+ rte_intr_handle_fd_get(
+ src->intr_handle),
strerror(errno));
/* removing non-existent even is an expected
* condition in some circumstances
@@ -567,7 +591,8 @@ eal_intr_process_interrupts(struct kevent *events, int nfds)
TAILQ_REMOVE(&src->callbacks, cb, next);
if (cb->ucb_fn)
- cb->ucb_fn(&src->intr_handle, cb->cb_arg);
+ cb->ucb_fn(src->intr_handle,
+ cb->cb_arg);
free(cb);
}
}
diff --git a/lib/eal/linux/eal_interrupts.c b/lib/eal/linux/eal_interrupts.c
index 22b3b7bcd9..570eddf088 100644
--- a/lib/eal/linux/eal_interrupts.c
+++ b/lib/eal/linux/eal_interrupts.c
@@ -20,6 +20,7 @@
#include <stdbool.h>
#include <rte_common.h>
+#include <rte_epoll.h>
#include <rte_interrupts.h>
#include <rte_memory.h>
#include <rte_launch.h>
@@ -82,7 +83,7 @@ struct rte_intr_callback {
struct rte_intr_source {
TAILQ_ENTRY(rte_intr_source) next;
- struct rte_intr_handle intr_handle; /**< interrupt handle */
+ struct rte_intr_handle *intr_handle; /**< interrupt handle */
struct rte_intr_cb_list callbacks; /**< user callbacks */
uint32_t active;
};
@@ -112,7 +113,7 @@ static int
vfio_enable_intx(const struct rte_intr_handle *intr_handle) {
struct vfio_irq_set *irq_set;
char irq_set_buf[IRQ_SET_BUF_LEN];
- int len, ret;
+ int len, ret, vfio_dev_fd;
int *fd_ptr;
len = sizeof(irq_set_buf);
@@ -125,13 +126,14 @@ vfio_enable_intx(const struct rte_intr_handle *intr_handle) {
irq_set->index = VFIO_PCI_INTX_IRQ_INDEX;
irq_set->start = 0;
fd_ptr = (int *) &irq_set->data;
- *fd_ptr = intr_handle->fd;
+ *fd_ptr = rte_intr_handle_fd_get(intr_handle);
- ret = ioctl(intr_handle->vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
+ vfio_dev_fd = rte_intr_handle_dev_fd_get(intr_handle);
+ ret = ioctl(vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
if (ret) {
RTE_LOG(ERR, EAL, "Error enabling INTx interrupts for fd %d\n",
- intr_handle->fd);
+ rte_intr_handle_fd_get(intr_handle));
return -1;
}
@@ -144,11 +146,11 @@ vfio_enable_intx(const struct rte_intr_handle *intr_handle) {
irq_set->index = VFIO_PCI_INTX_IRQ_INDEX;
irq_set->start = 0;
- ret = ioctl(intr_handle->vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
+ ret = ioctl(vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
if (ret) {
RTE_LOG(ERR, EAL, "Error unmasking INTx interrupts for fd %d\n",
- intr_handle->fd);
+ rte_intr_handle_fd_get(intr_handle));
return -1;
}
return 0;
@@ -159,7 +161,7 @@ static int
vfio_disable_intx(const struct rte_intr_handle *intr_handle) {
struct vfio_irq_set *irq_set;
char irq_set_buf[IRQ_SET_BUF_LEN];
- int len, ret;
+ int len, ret, vfio_dev_fd;
len = sizeof(struct vfio_irq_set);
@@ -171,11 +173,12 @@ vfio_disable_intx(const struct rte_intr_handle *intr_handle) {
irq_set->index = VFIO_PCI_INTX_IRQ_INDEX;
irq_set->start = 0;
- ret = ioctl(intr_handle->vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
+ vfio_dev_fd = rte_intr_handle_dev_fd_get(intr_handle);
+ ret = ioctl(vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
if (ret) {
RTE_LOG(ERR, EAL, "Error masking INTx interrupts for fd %d\n",
- intr_handle->fd);
+ rte_intr_handle_fd_get(intr_handle));
return -1;
}
@@ -187,11 +190,12 @@ vfio_disable_intx(const struct rte_intr_handle *intr_handle) {
irq_set->index = VFIO_PCI_INTX_IRQ_INDEX;
irq_set->start = 0;
- ret = ioctl(intr_handle->vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
+ ret = ioctl(vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
if (ret) {
RTE_LOG(ERR, EAL,
- "Error disabling INTx interrupts for fd %d\n", intr_handle->fd);
+ "Error disabling INTx interrupts for fd %d\n",
+ rte_intr_handle_fd_get(intr_handle));
return -1;
}
return 0;
@@ -202,6 +206,7 @@ static int
vfio_ack_intx(const struct rte_intr_handle *intr_handle)
{
struct vfio_irq_set irq_set;
+ int vfio_dev_fd;
/* unmask INTx */
memset(&irq_set, 0, sizeof(irq_set));
@@ -211,9 +216,10 @@ vfio_ack_intx(const struct rte_intr_handle *intr_handle)
irq_set.index = VFIO_PCI_INTX_IRQ_INDEX;
irq_set.start = 0;
- if (ioctl(intr_handle->vfio_dev_fd, VFIO_DEVICE_SET_IRQS, &irq_set)) {
+ vfio_dev_fd = rte_intr_handle_dev_fd_get(intr_handle);
+ if (ioctl(vfio_dev_fd, VFIO_DEVICE_SET_IRQS, &irq_set)) {
RTE_LOG(ERR, EAL, "Error unmasking INTx interrupts for fd %d\n",
- intr_handle->fd);
+ rte_intr_handle_fd_get(intr_handle));
return -1;
}
return 0;
@@ -225,7 +231,7 @@ vfio_enable_msi(const struct rte_intr_handle *intr_handle) {
int len, ret;
char irq_set_buf[IRQ_SET_BUF_LEN];
struct vfio_irq_set *irq_set;
- int *fd_ptr;
+ int *fd_ptr, vfio_dev_fd;
len = sizeof(irq_set_buf);
@@ -236,13 +242,14 @@ vfio_enable_msi(const struct rte_intr_handle *intr_handle) {
irq_set->index = VFIO_PCI_MSI_IRQ_INDEX;
irq_set->start = 0;
fd_ptr = (int *) &irq_set->data;
- *fd_ptr = intr_handle->fd;
+ *fd_ptr = rte_intr_handle_fd_get(intr_handle);
- ret = ioctl(intr_handle->vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
+ vfio_dev_fd = rte_intr_handle_dev_fd_get(intr_handle);
+ ret = ioctl(vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
if (ret) {
RTE_LOG(ERR, EAL, "Error enabling MSI interrupts for fd %d\n",
- intr_handle->fd);
+ rte_intr_handle_fd_get(intr_handle));
return -1;
}
return 0;
@@ -253,7 +260,7 @@ static int
vfio_disable_msi(const struct rte_intr_handle *intr_handle) {
struct vfio_irq_set *irq_set;
char irq_set_buf[IRQ_SET_BUF_LEN];
- int len, ret;
+ int len, ret, vfio_dev_fd;
len = sizeof(struct vfio_irq_set);
@@ -264,11 +271,13 @@ vfio_disable_msi(const struct rte_intr_handle *intr_handle) {
irq_set->index = VFIO_PCI_MSI_IRQ_INDEX;
irq_set->start = 0;
- ret = ioctl(intr_handle->vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
+ vfio_dev_fd = rte_intr_handle_dev_fd_get(intr_handle);
+ ret = ioctl(vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
if (ret)
RTE_LOG(ERR, EAL,
- "Error disabling MSI interrupts for fd %d\n", intr_handle->fd);
+ "Error disabling MSI interrupts for fd %d\n",
+ rte_intr_handle_fd_get(intr_handle));
return ret;
}
@@ -279,30 +288,34 @@ vfio_enable_msix(const struct rte_intr_handle *intr_handle) {
int len, ret;
char irq_set_buf[MSIX_IRQ_SET_BUF_LEN];
struct vfio_irq_set *irq_set;
- int *fd_ptr;
+ int *fd_ptr, vfio_dev_fd, i;
len = sizeof(irq_set_buf);
irq_set = (struct vfio_irq_set *) irq_set_buf;
irq_set->argsz = len;
/* 0 < irq_set->count < RTE_MAX_RXTX_INTR_VEC_ID + 1 */
- irq_set->count = intr_handle->max_intr ?
- (intr_handle->max_intr > RTE_MAX_RXTX_INTR_VEC_ID + 1 ?
- RTE_MAX_RXTX_INTR_VEC_ID + 1 : intr_handle->max_intr) : 1;
+ irq_set->count = rte_intr_handle_max_intr_get(intr_handle) ?
+ (rte_intr_handle_max_intr_get(intr_handle) >
+ RTE_MAX_RXTX_INTR_VEC_ID + 1 ? RTE_MAX_RXTX_INTR_VEC_ID + 1 :
+ rte_intr_handle_max_intr_get(intr_handle)) : 1;
+
irq_set->flags = VFIO_IRQ_SET_DATA_EVENTFD | VFIO_IRQ_SET_ACTION_TRIGGER;
irq_set->index = VFIO_PCI_MSIX_IRQ_INDEX;
irq_set->start = 0;
fd_ptr = (int *) &irq_set->data;
/* INTR vector offset 0 reserve for non-efds mapping */
- fd_ptr[RTE_INTR_VEC_ZERO_OFFSET] = intr_handle->fd;
- memcpy(&fd_ptr[RTE_INTR_VEC_RXTX_OFFSET], intr_handle->efds,
- sizeof(*intr_handle->efds) * intr_handle->nb_efd);
+ fd_ptr[RTE_INTR_VEC_ZERO_OFFSET] = rte_intr_handle_fd_get(intr_handle);
+ for (i = 0; i < rte_intr_handle_nb_efd_get(intr_handle); i++)
+ fd_ptr[RTE_INTR_VEC_RXTX_OFFSET + i] =
+ rte_intr_handle_efds_index_get(intr_handle, i);
- ret = ioctl(intr_handle->vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
+ vfio_dev_fd = rte_intr_handle_dev_fd_get(intr_handle);
+ ret = ioctl(vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
if (ret) {
RTE_LOG(ERR, EAL, "Error enabling MSI-X interrupts for fd %d\n",
- intr_handle->fd);
+ rte_intr_handle_fd_get(intr_handle));
return -1;
}
@@ -314,7 +327,7 @@ static int
vfio_disable_msix(const struct rte_intr_handle *intr_handle) {
struct vfio_irq_set *irq_set;
char irq_set_buf[MSIX_IRQ_SET_BUF_LEN];
- int len, ret;
+ int len, ret, vfio_dev_fd;
len = sizeof(struct vfio_irq_set);
@@ -325,11 +338,13 @@ vfio_disable_msix(const struct rte_intr_handle *intr_handle) {
irq_set->index = VFIO_PCI_MSIX_IRQ_INDEX;
irq_set->start = 0;
- ret = ioctl(intr_handle->vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
+ vfio_dev_fd = rte_intr_handle_dev_fd_get(intr_handle);
+ ret = ioctl(vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
if (ret)
RTE_LOG(ERR, EAL,
- "Error disabling MSI-X interrupts for fd %d\n", intr_handle->fd);
+ "Error disabling MSI-X interrupts for fd %d\n",
+ rte_intr_handle_fd_get(intr_handle));
return ret;
}
@@ -342,7 +357,7 @@ vfio_enable_req(const struct rte_intr_handle *intr_handle)
int len, ret;
char irq_set_buf[IRQ_SET_BUF_LEN];
struct vfio_irq_set *irq_set;
- int *fd_ptr;
+ int *fd_ptr, vfio_dev_fd;
len = sizeof(irq_set_buf);
@@ -354,13 +369,14 @@ vfio_enable_req(const struct rte_intr_handle *intr_handle)
irq_set->index = VFIO_PCI_REQ_IRQ_INDEX;
irq_set->start = 0;
fd_ptr = (int *) &irq_set->data;
- *fd_ptr = intr_handle->fd;
+ *fd_ptr = rte_intr_handle_fd_get(intr_handle);
- ret = ioctl(intr_handle->vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
+ vfio_dev_fd = rte_intr_handle_dev_fd_get(intr_handle);
+ ret = ioctl(vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
if (ret) {
RTE_LOG(ERR, EAL, "Error enabling req interrupts for fd %d\n",
- intr_handle->fd);
+ rte_intr_handle_fd_get(intr_handle));
return -1;
}
@@ -373,7 +389,7 @@ vfio_disable_req(const struct rte_intr_handle *intr_handle)
{
struct vfio_irq_set *irq_set;
char irq_set_buf[IRQ_SET_BUF_LEN];
- int len, ret;
+ int len, ret, vfio_dev_fd;
len = sizeof(struct vfio_irq_set);
@@ -384,11 +400,12 @@ vfio_disable_req(const struct rte_intr_handle *intr_handle)
irq_set->index = VFIO_PCI_REQ_IRQ_INDEX;
irq_set->start = 0;
- ret = ioctl(intr_handle->vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
+ vfio_dev_fd = rte_intr_handle_dev_fd_get(intr_handle);
+ ret = ioctl(vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
if (ret)
RTE_LOG(ERR, EAL, "Error disabling req interrupts for fd %d\n",
- intr_handle->fd);
+ rte_intr_handle_fd_get(intr_handle));
return ret;
}
@@ -399,20 +416,22 @@ static int
uio_intx_intr_disable(const struct rte_intr_handle *intr_handle)
{
unsigned char command_high;
+ int uio_cfg_fd;
/* use UIO config file descriptor for uio_pci_generic */
- if (pread(intr_handle->uio_cfg_fd, &command_high, 1, 5) != 1) {
+ uio_cfg_fd = rte_intr_handle_dev_fd_get(intr_handle);
+ if (pread(uio_cfg_fd, &command_high, 1, 5) != 1) {
RTE_LOG(ERR, EAL,
"Error reading interrupts status for fd %d\n",
- intr_handle->uio_cfg_fd);
+ uio_cfg_fd);
return -1;
}
/* disable interrupts */
command_high |= 0x4;
- if (pwrite(intr_handle->uio_cfg_fd, &command_high, 1, 5) != 1) {
+ if (pwrite(uio_cfg_fd, &command_high, 1, 5) != 1) {
RTE_LOG(ERR, EAL,
"Error disabling interrupts for fd %d\n",
- intr_handle->uio_cfg_fd);
+ uio_cfg_fd);
return -1;
}
@@ -423,20 +442,22 @@ static int
uio_intx_intr_enable(const struct rte_intr_handle *intr_handle)
{
unsigned char command_high;
+ int uio_cfg_fd;
/* use UIO config file descriptor for uio_pci_generic */
- if (pread(intr_handle->uio_cfg_fd, &command_high, 1, 5) != 1) {
+ uio_cfg_fd = rte_intr_handle_dev_fd_get(intr_handle);
+ if (pread(uio_cfg_fd, &command_high, 1, 5) != 1) {
RTE_LOG(ERR, EAL,
"Error reading interrupts status for fd %d\n",
- intr_handle->uio_cfg_fd);
+ uio_cfg_fd);
return -1;
}
/* enable interrupts */
command_high &= ~0x4;
- if (pwrite(intr_handle->uio_cfg_fd, &command_high, 1, 5) != 1) {
+ if (pwrite(uio_cfg_fd, &command_high, 1, 5) != 1) {
RTE_LOG(ERR, EAL,
"Error enabling interrupts for fd %d\n",
- intr_handle->uio_cfg_fd);
+ uio_cfg_fd);
return -1;
}
@@ -448,10 +469,11 @@ uio_intr_disable(const struct rte_intr_handle *intr_handle)
{
const int value = 0;
- if (write(intr_handle->fd, &value, sizeof(value)) < 0) {
+ if (write(rte_intr_handle_fd_get(intr_handle), &value,
+ sizeof(value)) < 0) {
RTE_LOG(ERR, EAL,
"Error disabling interrupts for fd %d (%s)\n",
- intr_handle->fd, strerror(errno));
+ rte_intr_handle_fd_get(intr_handle), strerror(errno));
return -1;
}
return 0;
@@ -462,10 +484,11 @@ uio_intr_enable(const struct rte_intr_handle *intr_handle)
{
const int value = 1;
- if (write(intr_handle->fd, &value, sizeof(value)) < 0) {
+ if (write(rte_intr_handle_fd_get(intr_handle), &value,
+ sizeof(value)) < 0) {
RTE_LOG(ERR, EAL,
"Error enabling interrupts for fd %d (%s)\n",
- intr_handle->fd, strerror(errno));
+ rte_intr_handle_fd_get(intr_handle), strerror(errno));
return -1;
}
return 0;
@@ -482,7 +505,8 @@ rte_intr_callback_register(const struct rte_intr_handle *intr_handle,
wake_thread = 0;
/* first do parameter checking */
- if (intr_handle == NULL || intr_handle->fd < 0 || cb == NULL) {
+ if (intr_handle == NULL || rte_intr_handle_fd_get(intr_handle) < 0 ||
+ cb == NULL) {
RTE_LOG(ERR, EAL,
"Registering with invalid input parameter\n");
return -EINVAL;
@@ -503,7 +527,8 @@ rte_intr_callback_register(const struct rte_intr_handle *intr_handle,
/* check if there is at least one callback registered for the fd */
TAILQ_FOREACH(src, &intr_sources, next) {
- if (src->intr_handle.fd == intr_handle->fd) {
+ if (rte_intr_handle_fd_get(src->intr_handle) ==
+ rte_intr_handle_fd_get(intr_handle)) {
/* we had no interrupts for this */
if (TAILQ_EMPTY(&src->callbacks))
wake_thread = 1;
@@ -522,12 +547,22 @@ rte_intr_callback_register(const struct rte_intr_handle *intr_handle,
free(callback);
ret = -ENOMEM;
} else {
- src->intr_handle = *intr_handle;
- TAILQ_INIT(&src->callbacks);
- TAILQ_INSERT_TAIL(&(src->callbacks), callback, next);
- TAILQ_INSERT_TAIL(&intr_sources, src, next);
- wake_thread = 1;
- ret = 0;
+ src->intr_handle = rte_intr_handle_instance_alloc(
+ RTE_INTR_HANDLE_DEFAULT_SIZE, false);
+ if (src->intr_handle == NULL) {
+ RTE_LOG(ERR, EAL, "Can not create intr instance\n");
+ free(callback);
+ ret = -ENOMEM;
+ } else {
+ rte_intr_handle_instance_index_set(
+ src->intr_handle, intr_handle, 0);
+ TAILQ_INIT(&src->callbacks);
+ TAILQ_INSERT_TAIL(&(src->callbacks), callback,
+ next);
+ TAILQ_INSERT_TAIL(&intr_sources, src, next);
+ wake_thread = 1;
+ ret = 0;
+ }
}
}
@@ -555,7 +590,7 @@ rte_intr_callback_unregister_pending(const struct rte_intr_handle *intr_handle,
struct rte_intr_callback *cb, *next;
/* do parameter checking first */
- if (intr_handle == NULL || intr_handle->fd < 0) {
+ if (intr_handle == NULL || rte_intr_handle_fd_get(intr_handle) < 0) {
RTE_LOG(ERR, EAL,
"Unregistering with invalid input parameter\n");
return -EINVAL;
@@ -565,7 +600,8 @@ rte_intr_callback_unregister_pending(const struct rte_intr_handle *intr_handle,
/* check if the insterrupt source for the fd is existent */
TAILQ_FOREACH(src, &intr_sources, next)
- if (src->intr_handle.fd == intr_handle->fd)
+ if (rte_intr_handle_fd_get(src->intr_handle) ==
+ rte_intr_handle_fd_get(intr_handle))
break;
/* No interrupt source registered for the fd */
@@ -605,7 +641,7 @@ rte_intr_callback_unregister(const struct rte_intr_handle *intr_handle,
struct rte_intr_callback *cb, *next;
/* do parameter checking first */
- if (intr_handle == NULL || intr_handle->fd < 0) {
+ if (intr_handle == NULL || rte_intr_handle_fd_get(intr_handle) < 0) {
RTE_LOG(ERR, EAL,
"Unregistering with invalid input parameter\n");
return -EINVAL;
@@ -615,7 +651,8 @@ rte_intr_callback_unregister(const struct rte_intr_handle *intr_handle,
/* check if the insterrupt source for the fd is existent */
TAILQ_FOREACH(src, &intr_sources, next)
- if (src->intr_handle.fd == intr_handle->fd)
+ if (rte_intr_handle_fd_get(src->intr_handle) ==
+ rte_intr_handle_fd_get(intr_handle))
break;
/* No interrupt source registered for the fd */
@@ -646,6 +683,7 @@ rte_intr_callback_unregister(const struct rte_intr_handle *intr_handle,
/* all callbacks for that source are removed. */
if (TAILQ_EMPTY(&src->callbacks)) {
TAILQ_REMOVE(&intr_sources, src, next);
+ rte_intr_handle_instance_free(src->intr_handle);
free(src);
}
}
@@ -677,22 +715,23 @@ rte_intr_callback_unregister_sync(const struct rte_intr_handle *intr_handle,
int
rte_intr_enable(const struct rte_intr_handle *intr_handle)
{
- int rc = 0;
+ int rc = 0, uio_cfg_fd;
if (intr_handle == NULL)
return -1;
- if (intr_handle->type == RTE_INTR_HANDLE_VDEV) {
+ if (rte_intr_handle_type_get(intr_handle) == RTE_INTR_HANDLE_VDEV) {
rc = 0;
goto out;
}
- if (intr_handle->fd < 0 || intr_handle->uio_cfg_fd < 0) {
+ uio_cfg_fd = rte_intr_handle_dev_fd_get(intr_handle);
+ if (rte_intr_handle_fd_get(intr_handle) < 0 || uio_cfg_fd < 0) {
rc = -1;
goto out;
}
- switch (intr_handle->type){
+ switch (rte_intr_handle_type_get(intr_handle)) {
/* write to the uio fd to enable the interrupt */
case RTE_INTR_HANDLE_UIO:
if (uio_intr_enable(intr_handle))
@@ -734,7 +773,7 @@ rte_intr_enable(const struct rte_intr_handle *intr_handle)
default:
RTE_LOG(ERR, EAL,
"Unknown handle type of fd %d\n",
- intr_handle->fd);
+ rte_intr_handle_fd_get(intr_handle));
rc = -1;
break;
}
@@ -757,13 +796,18 @@ rte_intr_enable(const struct rte_intr_handle *intr_handle)
int
rte_intr_ack(const struct rte_intr_handle *intr_handle)
{
- if (intr_handle && intr_handle->type == RTE_INTR_HANDLE_VDEV)
+ int uio_cfg_fd;
+
+ if (intr_handle && rte_intr_handle_type_get(intr_handle) ==
+ RTE_INTR_HANDLE_VDEV)
return 0;
- if (!intr_handle || intr_handle->fd < 0 || intr_handle->uio_cfg_fd < 0)
+ uio_cfg_fd = rte_intr_handle_dev_fd_get(intr_handle);
+ if (!intr_handle || rte_intr_handle_fd_get(intr_handle) < 0 ||
+ uio_cfg_fd < 0)
return -1;
- switch (intr_handle->type) {
+ switch (rte_intr_handle_type_get(intr_handle)) {
/* Both acking and enabling are same for UIO */
case RTE_INTR_HANDLE_UIO:
if (uio_intr_enable(intr_handle))
@@ -796,7 +840,7 @@ rte_intr_ack(const struct rte_intr_handle *intr_handle)
/* unknown handle type */
default:
RTE_LOG(ERR, EAL, "Unknown handle type of fd %d\n",
- intr_handle->fd);
+ rte_intr_handle_fd_get(intr_handle));
return -1;
}
@@ -806,22 +850,23 @@ rte_intr_ack(const struct rte_intr_handle *intr_handle)
int
rte_intr_disable(const struct rte_intr_handle *intr_handle)
{
- int rc = 0;
+ int rc = 0, uio_cfg_fd;
if (intr_handle == NULL)
return -1;
- if (intr_handle->type == RTE_INTR_HANDLE_VDEV) {
+ if (rte_intr_handle_type_get(intr_handle) == RTE_INTR_HANDLE_VDEV) {
rc = 0;
goto out;
}
- if (intr_handle->fd < 0 || intr_handle->uio_cfg_fd < 0) {
+ uio_cfg_fd = rte_intr_handle_dev_fd_get(intr_handle);
+ if (rte_intr_handle_fd_get(intr_handle) < 0 || uio_cfg_fd < 0) {
rc = -1;
goto out;
}
- switch (intr_handle->type){
+ switch (rte_intr_handle_type_get(intr_handle)) {
/* write to the uio fd to disable the interrupt */
case RTE_INTR_HANDLE_UIO:
if (uio_intr_disable(intr_handle))
@@ -863,7 +908,7 @@ rte_intr_disable(const struct rte_intr_handle *intr_handle)
default:
RTE_LOG(ERR, EAL,
"Unknown handle type of fd %d\n",
- intr_handle->fd);
+ rte_intr_handle_fd_get(intr_handle));
rc = -1;
break;
}
@@ -896,7 +941,7 @@ eal_intr_process_interrupts(struct epoll_event *events, int nfds)
}
rte_spinlock_lock(&intr_lock);
TAILQ_FOREACH(src, &intr_sources, next)
- if (src->intr_handle.fd ==
+ if (rte_intr_handle_fd_get(src->intr_handle) ==
events[n].data.fd)
break;
if (src == NULL){
@@ -909,7 +954,7 @@ eal_intr_process_interrupts(struct epoll_event *events, int nfds)
rte_spinlock_unlock(&intr_lock);
/* set the length to be read dor different handle type */
- switch (src->intr_handle.type) {
+ switch (rte_intr_handle_type_get(src->intr_handle)) {
case RTE_INTR_HANDLE_UIO:
case RTE_INTR_HANDLE_UIO_INTX:
bytes_read = sizeof(buf.uio_intr_count);
@@ -973,6 +1018,7 @@ eal_intr_process_interrupts(struct epoll_event *events, int nfds)
TAILQ_REMOVE(&src->callbacks, cb, next);
free(cb);
}
+ rte_intr_handle_instance_free(src->intr_handle);
free(src);
return -1;
} else if (bytes_read == 0)
@@ -1012,7 +1058,8 @@ eal_intr_process_interrupts(struct epoll_event *events, int nfds)
if (cb->pending_delete) {
TAILQ_REMOVE(&src->callbacks, cb, next);
if (cb->ucb_fn)
- cb->ucb_fn(&src->intr_handle, cb->cb_arg);
+ cb->ucb_fn(src->intr_handle,
+ cb->cb_arg);
free(cb);
rv++;
}
@@ -1021,6 +1068,7 @@ eal_intr_process_interrupts(struct epoll_event *events, int nfds)
/* all callbacks for that source are removed. */
if (TAILQ_EMPTY(&src->callbacks)) {
TAILQ_REMOVE(&intr_sources, src, next);
+ rte_intr_handle_instance_free(src->intr_handle);
free(src);
}
@@ -1123,16 +1171,18 @@ eal_intr_thread_main(__rte_unused void *arg)
continue; /* skip those with no callbacks */
memset(&ev, 0, sizeof(ev));
ev.events = EPOLLIN | EPOLLPRI | EPOLLRDHUP | EPOLLHUP;
- ev.data.fd = src->intr_handle.fd;
+ ev.data.fd = rte_intr_handle_fd_get(src->intr_handle);
/**
* add all the uio device file descriptor
* into wait list.
*/
if (epoll_ctl(pfd, EPOLL_CTL_ADD,
- src->intr_handle.fd, &ev) < 0){
+ rte_intr_handle_fd_get(src->intr_handle),
+ &ev) < 0) {
rte_panic("Error adding fd %d epoll_ctl, %s\n",
- src->intr_handle.fd, strerror(errno));
+ rte_intr_handle_fd_get(src->intr_handle),
+ strerror(errno));
}
else
numfds++;
@@ -1185,7 +1235,7 @@ eal_intr_proc_rxtx_intr(int fd, const struct rte_intr_handle *intr_handle)
int bytes_read = 0;
int nbytes;
- switch (intr_handle->type) {
+ switch (rte_intr_handle_type_get(intr_handle)) {
case RTE_INTR_HANDLE_UIO:
case RTE_INTR_HANDLE_UIO_INTX:
bytes_read = sizeof(buf.uio_intr_count);
@@ -1198,7 +1248,7 @@ eal_intr_proc_rxtx_intr(int fd, const struct rte_intr_handle *intr_handle)
break;
#endif
case RTE_INTR_HANDLE_VDEV:
- bytes_read = intr_handle->efd_counter_size;
+ bytes_read = rte_intr_handle_efd_counter_size_get(intr_handle);
/* For vdev, number of bytes to read is set by driver */
break;
case RTE_INTR_HANDLE_EXT:
@@ -1419,8 +1469,8 @@ rte_intr_rx_ctl(struct rte_intr_handle *intr_handle, int epfd,
efd_idx = (vec >= RTE_INTR_VEC_RXTX_OFFSET) ?
(vec - RTE_INTR_VEC_RXTX_OFFSET) : vec;
- if (!intr_handle || intr_handle->nb_efd == 0 ||
- efd_idx >= intr_handle->nb_efd) {
+ if (!intr_handle || rte_intr_handle_nb_efd_get(intr_handle) == 0 ||
+ efd_idx >= (unsigned int)rte_intr_handle_nb_efd_get(intr_handle)) {
RTE_LOG(ERR, EAL, "Wrong intr vector number.\n");
return -EPERM;
}
@@ -1428,7 +1478,7 @@ rte_intr_rx_ctl(struct rte_intr_handle *intr_handle, int epfd,
switch (op) {
case RTE_INTR_EVENT_ADD:
epfd_op = EPOLL_CTL_ADD;
- rev = &intr_handle->elist[efd_idx];
+ rev = rte_intr_handle_elist_index_get(intr_handle, efd_idx);
if (__atomic_load_n(&rev->status,
__ATOMIC_RELAXED) != RTE_EPOLL_INVALID) {
RTE_LOG(INFO, EAL, "Event already been added.\n");
@@ -1442,7 +1492,9 @@ rte_intr_rx_ctl(struct rte_intr_handle *intr_handle, int epfd,
epdata->cb_fun = (rte_intr_event_cb_t)eal_intr_proc_rxtx_intr;
epdata->cb_arg = (void *)intr_handle;
rc = rte_epoll_ctl(epfd, epfd_op,
- intr_handle->efds[efd_idx], rev);
+ rte_intr_handle_efds_index_get(intr_handle,
+ efd_idx),
+ rev);
if (!rc)
RTE_LOG(DEBUG, EAL,
"efd %d associated with vec %d added on epfd %d"
@@ -1452,7 +1504,7 @@ rte_intr_rx_ctl(struct rte_intr_handle *intr_handle, int epfd,
break;
case RTE_INTR_EVENT_DEL:
epfd_op = EPOLL_CTL_DEL;
- rev = &intr_handle->elist[efd_idx];
+ rev = rte_intr_handle_elist_index_get(intr_handle, efd_idx);
if (__atomic_load_n(&rev->status,
__ATOMIC_RELAXED) == RTE_EPOLL_INVALID) {
RTE_LOG(INFO, EAL, "Event does not exist.\n");
@@ -1477,8 +1529,9 @@ rte_intr_free_epoll_fd(struct rte_intr_handle *intr_handle)
uint32_t i;
struct rte_epoll_event *rev;
- for (i = 0; i < intr_handle->nb_efd; i++) {
- rev = &intr_handle->elist[i];
+ for (i = 0; i < (uint32_t)rte_intr_handle_nb_efd_get(intr_handle);
+ i++) {
+ rev = rte_intr_handle_elist_index_get(intr_handle, i);
if (__atomic_load_n(&rev->status,
__ATOMIC_RELAXED) == RTE_EPOLL_INVALID)
continue;
@@ -1498,7 +1551,8 @@ rte_intr_efd_enable(struct rte_intr_handle *intr_handle, uint32_t nb_efd)
assert(nb_efd != 0);
- if (intr_handle->type == RTE_INTR_HANDLE_VFIO_MSIX) {
+ if (rte_intr_handle_type_get(intr_handle) ==
+ RTE_INTR_HANDLE_VFIO_MSIX) {
for (i = 0; i < n; i++) {
fd = eventfd(0, EFD_NONBLOCK | EFD_CLOEXEC);
if (fd < 0) {
@@ -1507,21 +1561,34 @@ rte_intr_efd_enable(struct rte_intr_handle *intr_handle, uint32_t nb_efd)
errno, strerror(errno));
return -errno;
}
- intr_handle->efds[i] = fd;
+
+ if (rte_intr_handle_efds_index_set(intr_handle, i, fd))
+ return -rte_errno;
}
- intr_handle->nb_efd = n;
- intr_handle->max_intr = NB_OTHER_INTR + n;
- } else if (intr_handle->type == RTE_INTR_HANDLE_VDEV) {
+
+ if (rte_intr_handle_nb_efd_set(intr_handle, n))
+ return -rte_errno;
+
+ if (rte_intr_handle_max_intr_set(intr_handle,
+ NB_OTHER_INTR + n))
+ return -rte_errno;
+ } else if (rte_intr_handle_type_get(intr_handle) ==
+ RTE_INTR_HANDLE_VDEV) {
/* only check, initialization would be done in vdev driver.*/
- if (intr_handle->efd_counter_size >
- sizeof(union rte_intr_read_buffer)) {
+ if ((uint64_t)rte_intr_handle_efd_counter_size_get(intr_handle)
+ > sizeof(union rte_intr_read_buffer)) {
RTE_LOG(ERR, EAL, "the efd_counter_size is oversized");
return -EINVAL;
}
} else {
- intr_handle->efds[0] = intr_handle->fd;
- intr_handle->nb_efd = RTE_MIN(nb_efd, 1U);
- intr_handle->max_intr = NB_OTHER_INTR;
+ if (rte_intr_handle_efds_index_set(intr_handle, 0,
+ rte_intr_handle_fd_get(intr_handle)))
+ return -rte_errno;
+ if (rte_intr_handle_nb_efd_set(intr_handle,
+ RTE_MIN(nb_efd, 1U)))
+ return -rte_errno;
+ if (rte_intr_handle_max_intr_set(intr_handle, NB_OTHER_INTR))
+ return -rte_errno;
}
return 0;
@@ -1533,18 +1600,20 @@ rte_intr_efd_disable(struct rte_intr_handle *intr_handle)
uint32_t i;
rte_intr_free_epoll_fd(intr_handle);
- if (intr_handle->max_intr > intr_handle->nb_efd) {
- for (i = 0; i < intr_handle->nb_efd; i++)
- close(intr_handle->efds[i]);
+ if (rte_intr_handle_max_intr_get(intr_handle) >
+ rte_intr_handle_nb_efd_get(intr_handle)) {
+ for (i = 0; i <
+ (uint32_t)rte_intr_handle_nb_efd_get(intr_handle); i++)
+ close(rte_intr_handle_efds_index_get(intr_handle, i));
}
- intr_handle->nb_efd = 0;
- intr_handle->max_intr = 0;
+ rte_intr_handle_nb_efd_set(intr_handle, 0);
+ rte_intr_handle_max_intr_set(intr_handle, 0);
}
int
rte_intr_dp_is_en(struct rte_intr_handle *intr_handle)
{
- return !(!intr_handle->nb_efd);
+ return !(!rte_intr_handle_nb_efd_get(intr_handle));
}
int
@@ -1553,16 +1622,17 @@ rte_intr_allow_others(struct rte_intr_handle *intr_handle)
if (!rte_intr_dp_is_en(intr_handle))
return 1;
else
- return !!(intr_handle->max_intr - intr_handle->nb_efd);
+ return !!(rte_intr_handle_max_intr_get(intr_handle) -
+ rte_intr_handle_nb_efd_get(intr_handle));
}
int
rte_intr_cap_multiple(struct rte_intr_handle *intr_handle)
{
- if (intr_handle->type == RTE_INTR_HANDLE_VFIO_MSIX)
+ if (rte_intr_handle_type_get(intr_handle) == RTE_INTR_HANDLE_VFIO_MSIX)
return 1;
- if (intr_handle->type == RTE_INTR_HANDLE_VDEV)
+ if (rte_intr_handle_type_get(intr_handle) == RTE_INTR_HANDLE_VDEV)
return 1;
return 0;
--
2.18.0
^ permalink raw reply [flat|nested] 152+ messages in thread
* [dpdk-dev] [RFC 4/7] test/interrupt: apply get set interrupt handle APIs
2021-08-26 14:57 [dpdk-dev] [RFC 0/7] make rte_intr_handle internal Harman Kalra
` (2 preceding siblings ...)
2021-08-26 14:57 ` [dpdk-dev] [RFC 3/7] eal/interrupts: avoid direct access to interrupt handle Harman Kalra
@ 2021-08-26 14:57 ` Harman Kalra
2021-08-26 14:57 ` [dpdk-dev] [RFC 5/7] drivers: remove direct access to interrupt handle fields Harman Kalra
` (8 subsequent siblings)
12 siblings, 0 replies; 152+ messages in thread
From: Harman Kalra @ 2021-08-26 14:57 UTC (permalink / raw)
To: dev, Harman Kalra
Updating the interrupt test suite to make use of the interrupt
handle get/set APIs.
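As an illustration (not verbatim from the diff), the test suite now
allocates a block of handle instances and initialises each slot through
the accessors instead of writing to a static array. A condensed sketch
of the pattern used in test_interrupt_init()/test_interrupt_deinit()
below, assuming the prototypes from patch 1/7:

	struct rte_intr_handle *handles, *h;

	/* one allocation covers all TEST_INTERRUPT_HANDLE_MAX slots */
	handles = rte_intr_handle_instance_alloc(TEST_INTERRUPT_HANDLE_MAX,
						 false);
	if (handles == NULL)
		return -1;

	/* fetch one slot and set its fd and type via the accessors */
	h = rte_intr_handle_instance_index_get(handles,
					       TEST_INTERRUPT_HANDLE_VALID);
	if (h == NULL || rte_intr_handle_fd_set(h, pfds.readfd) ||
	    rte_intr_handle_type_set(h, RTE_INTR_HANDLE_UNKNOWN))
		return -1;

	/* ... remaining slots are initialised the same way ... */

	/* on deinit, release the whole block */
	rte_intr_handle_instance_free(handles);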
Signed-off-by: Harman Kalra <hkalra@marvell.com>
---
app/test/test_interrupts.c | 237 ++++++++++++++++++++++++-------------
1 file changed, 152 insertions(+), 85 deletions(-)
diff --git a/app/test/test_interrupts.c b/app/test/test_interrupts.c
index 233b14a70b..289bca66dd 100644
--- a/app/test/test_interrupts.c
+++ b/app/test/test_interrupts.c
@@ -27,7 +27,7 @@ enum test_interrupt_handle_type {
/* flag of if callback is called */
static volatile int flag;
-static struct rte_intr_handle intr_handles[TEST_INTERRUPT_HANDLE_MAX];
+static struct rte_intr_handle *intr_handles;
static enum test_interrupt_handle_type test_intr_type =
TEST_INTERRUPT_HANDLE_MAX;
@@ -50,7 +50,7 @@ static union intr_pipefds pfds;
static inline int
test_interrupt_handle_sanity_check(struct rte_intr_handle *intr_handle)
{
- if (!intr_handle || intr_handle->fd < 0)
+ if (!intr_handle || rte_intr_handle_fd_get(intr_handle) < 0)
return -1;
return 0;
@@ -62,31 +62,70 @@ test_interrupt_handle_sanity_check(struct rte_intr_handle *intr_handle)
static int
test_interrupt_init(void)
{
+ struct rte_intr_handle *test_intr_handle;
+
if (pipe(pfds.pipefd) < 0)
return -1;
- intr_handles[TEST_INTERRUPT_HANDLE_INVALID].fd = -1;
- intr_handles[TEST_INTERRUPT_HANDLE_INVALID].type =
- RTE_INTR_HANDLE_UNKNOWN;
+ intr_handles = rte_intr_handle_instance_alloc(TEST_INTERRUPT_HANDLE_MAX,
+ false);
+ if (!intr_handles)
+ return -1;
- intr_handles[TEST_INTERRUPT_HANDLE_VALID].fd = pfds.readfd;
- intr_handles[TEST_INTERRUPT_HANDLE_VALID].type =
- RTE_INTR_HANDLE_UNKNOWN;
+ test_intr_handle = rte_intr_handle_instance_index_get(intr_handles,
+ TEST_INTERRUPT_HANDLE_INVALID);
+ if (!test_intr_handle)
+ return -1;
+ if (rte_intr_handle_fd_set(test_intr_handle, -1))
+ return -1;
+ if (rte_intr_handle_type_set(test_intr_handle, RTE_INTR_HANDLE_UNKNOWN))
+ return -1;
- intr_handles[TEST_INTERRUPT_HANDLE_VALID_UIO].fd = pfds.readfd;
- intr_handles[TEST_INTERRUPT_HANDLE_VALID_UIO].type =
- RTE_INTR_HANDLE_UIO;
+ test_intr_handle = rte_intr_handle_instance_index_get(intr_handles,
+ TEST_INTERRUPT_HANDLE_VALID);
+ if (!test_intr_handle)
+ return -1;
+ if (rte_intr_handle_fd_set(test_intr_handle, pfds.readfd))
+ return -1;
+ if (rte_intr_handle_type_set(test_intr_handle, RTE_INTR_HANDLE_UNKNOWN))
+ return -1;
+
+ test_intr_handle = rte_intr_handle_instance_index_get(intr_handles,
+ TEST_INTERRUPT_HANDLE_VALID_UIO);
+ if (!test_intr_handle)
+ return -1;
+ if (rte_intr_handle_fd_set(test_intr_handle, pfds.readfd))
+ return -1;
+ if (rte_intr_handle_type_set(test_intr_handle, RTE_INTR_HANDLE_UIO))
+ return -1;
- intr_handles[TEST_INTERRUPT_HANDLE_VALID_ALARM].fd = pfds.readfd;
- intr_handles[TEST_INTERRUPT_HANDLE_VALID_ALARM].type =
- RTE_INTR_HANDLE_ALARM;
+ test_intr_handle = rte_intr_handle_instance_index_get(intr_handles,
+ TEST_INTERRUPT_HANDLE_VALID_ALARM);
+ if (!test_intr_handle)
+ return -1;
+ if (rte_intr_handle_fd_set(test_intr_handle, pfds.readfd))
+ return -1;
+ if (rte_intr_handle_type_set(test_intr_handle, RTE_INTR_HANDLE_ALARM))
+ return -1;
- intr_handles[TEST_INTERRUPT_HANDLE_VALID_DEV_EVENT].fd = pfds.readfd;
- intr_handles[TEST_INTERRUPT_HANDLE_VALID_DEV_EVENT].type =
- RTE_INTR_HANDLE_DEV_EVENT;
+ test_intr_handle = rte_intr_handle_instance_index_get(intr_handles,
+ TEST_INTERRUPT_HANDLE_VALID_DEV_EVENT);
+ if (!test_intr_handle)
+ return -1;
+ if (rte_intr_handle_fd_set(test_intr_handle, pfds.readfd))
+ return -1;
+ if (rte_intr_handle_type_set(test_intr_handle,
+ RTE_INTR_HANDLE_DEV_EVENT))
+ return -1;
- intr_handles[TEST_INTERRUPT_HANDLE_CASE1].fd = pfds.writefd;
- intr_handles[TEST_INTERRUPT_HANDLE_CASE1].type = RTE_INTR_HANDLE_UIO;
+ test_intr_handle = rte_intr_handle_instance_index_get(intr_handles,
+ TEST_INTERRUPT_HANDLE_CASE1);
+ if (!test_intr_handle)
+ return -1;
+ if (rte_intr_handle_fd_set(test_intr_handle, pfds.writefd))
+ return -1;
+ if (rte_intr_handle_type_set(test_intr_handle, RTE_INTR_HANDLE_UIO))
+ return -1;
return 0;
}
@@ -97,6 +136,7 @@ test_interrupt_init(void)
static int
test_interrupt_deinit(void)
{
+ rte_intr_handle_instance_free(intr_handles);
close(pfds.pipefd[0]);
close(pfds.pipefd[1]);
@@ -125,8 +165,10 @@ test_interrupt_handle_compare(struct rte_intr_handle *intr_handle_l,
if (!intr_handle_l || !intr_handle_r)
return -1;
- if (intr_handle_l->fd != intr_handle_r->fd ||
- intr_handle_l->type != intr_handle_r->type)
+ if (rte_intr_handle_fd_get(intr_handle_l) !=
+ rte_intr_handle_fd_get(intr_handle_r) ||
+ rte_intr_handle_type_get(intr_handle_l) !=
+ rte_intr_handle_type_get(intr_handle_r))
return -1;
return 0;
@@ -178,6 +220,8 @@ static void
test_interrupt_callback(void *arg)
{
struct rte_intr_handle *intr_handle = arg;
+ struct rte_intr_handle *test_intr_handle;
+
if (test_intr_type >= TEST_INTERRUPT_HANDLE_MAX) {
printf("invalid interrupt type\n");
flag = -1;
@@ -198,8 +242,9 @@ test_interrupt_callback(void *arg)
return;
}
- if (test_interrupt_handle_compare(intr_handle,
- &(intr_handles[test_intr_type])) == 0)
+ test_intr_handle = rte_intr_handle_instance_index_get(intr_handles,
+ test_intr_type);
+ if (test_interrupt_handle_compare(intr_handle, test_intr_handle) == 0)
flag = 1;
}
@@ -223,7 +268,7 @@ test_interrupt_callback_1(void *arg)
static int
test_interrupt_enable(void)
{
- struct rte_intr_handle test_intr_handle;
+ struct rte_intr_handle *test_intr_handle;
/* check with null intr_handle */
if (rte_intr_enable(NULL) == 0) {
@@ -232,46 +277,52 @@ test_interrupt_enable(void)
}
/* check with invalid intr_handle */
- test_intr_handle = intr_handles[TEST_INTERRUPT_HANDLE_INVALID];
- if (rte_intr_enable(&test_intr_handle) == 0) {
+ test_intr_handle = rte_intr_handle_instance_index_get(intr_handles,
+ TEST_INTERRUPT_HANDLE_INVALID);
+ if (rte_intr_enable(test_intr_handle) == 0) {
printf("unexpectedly enable invalid intr_handle "
"successfully\n");
return -1;
}
/* check with valid intr_handle */
- test_intr_handle = intr_handles[TEST_INTERRUPT_HANDLE_VALID];
- if (rte_intr_enable(&test_intr_handle) == 0) {
+ test_intr_handle = rte_intr_handle_instance_index_get(intr_handles,
+ TEST_INTERRUPT_HANDLE_VALID);
+ if (rte_intr_enable(test_intr_handle) == 0) {
printf("unexpectedly enable a specific intr_handle "
"successfully\n");
return -1;
}
/* check with specific valid intr_handle */
- test_intr_handle = intr_handles[TEST_INTERRUPT_HANDLE_VALID_ALARM];
- if (rte_intr_enable(&test_intr_handle) == 0) {
+ test_intr_handle = rte_intr_handle_instance_index_get(intr_handles,
+ TEST_INTERRUPT_HANDLE_VALID_ALARM);
+ if (rte_intr_enable(test_intr_handle) == 0) {
printf("unexpectedly enable a specific intr_handle "
"successfully\n");
return -1;
}
/* check with specific valid intr_handle */
- test_intr_handle = intr_handles[TEST_INTERRUPT_HANDLE_VALID_DEV_EVENT];
- if (rte_intr_enable(&test_intr_handle) == 0) {
+ test_intr_handle = rte_intr_handle_instance_index_get(intr_handles,
+ TEST_INTERRUPT_HANDLE_VALID_DEV_EVENT);
+ if (rte_intr_enable(test_intr_handle) == 0) {
printf("unexpectedly enable a specific intr_handle "
"successfully\n");
return -1;
}
/* check with valid handler and its type */
- test_intr_handle = intr_handles[TEST_INTERRUPT_HANDLE_CASE1];
- if (rte_intr_enable(&test_intr_handle) < 0) {
+ test_intr_handle = rte_intr_handle_instance_index_get(intr_handles,
+ TEST_INTERRUPT_HANDLE_CASE1);
+ if (rte_intr_enable(test_intr_handle) < 0) {
printf("fail to enable interrupt on a simulated handler\n");
return -1;
}
- test_intr_handle = intr_handles[TEST_INTERRUPT_HANDLE_VALID_UIO];
- if (rte_intr_enable(&test_intr_handle) == 0) {
+ test_intr_handle = rte_intr_handle_instance_index_get(intr_handles,
+ TEST_INTERRUPT_HANDLE_VALID_UIO);
+ if (rte_intr_enable(test_intr_handle) == 0) {
printf("unexpectedly enable a specific intr_handle "
"successfully\n");
return -1;
@@ -286,7 +337,7 @@ test_interrupt_enable(void)
static int
test_interrupt_disable(void)
{
- struct rte_intr_handle test_intr_handle;
+ struct rte_intr_handle *test_intr_handle;
/* check with null intr_handle */
if (rte_intr_disable(NULL) == 0) {
@@ -296,46 +347,52 @@ test_interrupt_disable(void)
}
/* check with invalid intr_handle */
- test_intr_handle = intr_handles[TEST_INTERRUPT_HANDLE_INVALID];
- if (rte_intr_disable(&test_intr_handle) == 0) {
+ test_intr_handle = rte_intr_handle_instance_index_get(intr_handles,
+ TEST_INTERRUPT_HANDLE_INVALID);
+ if (rte_intr_disable(test_intr_handle) == 0) {
printf("unexpectedly disable invalid intr_handle "
"successfully\n");
return -1;
}
/* check with valid intr_handle */
- test_intr_handle = intr_handles[TEST_INTERRUPT_HANDLE_VALID];
- if (rte_intr_disable(&test_intr_handle) == 0) {
+ test_intr_handle = rte_intr_handle_instance_index_get(intr_handles,
+ TEST_INTERRUPT_HANDLE_VALID);
+ if (rte_intr_disable(test_intr_handle) == 0) {
printf("unexpectedly disable a specific intr_handle "
"successfully\n");
return -1;
}
/* check with specific valid intr_handle */
- test_intr_handle = intr_handles[TEST_INTERRUPT_HANDLE_VALID_ALARM];
- if (rte_intr_disable(&test_intr_handle) == 0) {
+ test_intr_handle = rte_intr_handle_instance_index_get(intr_handles,
+ TEST_INTERRUPT_HANDLE_VALID_ALARM);
+ if (rte_intr_disable(test_intr_handle) == 0) {
printf("unexpectedly disable a specific intr_handle "
"successfully\n");
return -1;
}
/* check with specific valid intr_handle */
- test_intr_handle = intr_handles[TEST_INTERRUPT_HANDLE_VALID_DEV_EVENT];
- if (rte_intr_disable(&test_intr_handle) == 0) {
+ test_intr_handle = rte_intr_handle_instance_index_get(intr_handles,
+ TEST_INTERRUPT_HANDLE_VALID_DEV_EVENT);
+ if (rte_intr_disable(test_intr_handle) == 0) {
printf("unexpectedly disable a specific intr_handle "
"successfully\n");
return -1;
}
/* check with valid handler and its type */
- test_intr_handle = intr_handles[TEST_INTERRUPT_HANDLE_CASE1];
- if (rte_intr_disable(&test_intr_handle) < 0) {
+ test_intr_handle = rte_intr_handle_instance_index_get(intr_handles,
+ TEST_INTERRUPT_HANDLE_CASE1);
+ if (rte_intr_disable(test_intr_handle) < 0) {
printf("fail to disable interrupt on a simulated handler\n");
return -1;
}
- test_intr_handle = intr_handles[TEST_INTERRUPT_HANDLE_VALID_UIO];
- if (rte_intr_disable(&test_intr_handle) == 0) {
+ test_intr_handle = rte_intr_handle_instance_index_get(intr_handles,
+ TEST_INTERRUPT_HANDLE_VALID_UIO);
+ if (rte_intr_disable(test_intr_handle) == 0) {
printf("unexpectedly disable a specific intr_handle "
"successfully\n");
return -1;
@@ -351,13 +408,14 @@ static int
test_interrupt_full_path_check(enum test_interrupt_handle_type intr_type)
{
int count;
- struct rte_intr_handle test_intr_handle;
+ struct rte_intr_handle *test_intr_handle;
flag = 0;
- test_intr_handle = intr_handles[intr_type];
+ test_intr_handle = rte_intr_handle_instance_index_get(intr_handles,
+ intr_type);
test_intr_type = intr_type;
- if (rte_intr_callback_register(&test_intr_handle,
- test_interrupt_callback, &test_intr_handle) < 0) {
+ if (rte_intr_callback_register(test_intr_handle,
+ test_interrupt_callback, test_intr_handle) < 0) {
printf("fail to register callback\n");
return -1;
}
@@ -371,9 +429,9 @@ test_interrupt_full_path_check(enum test_interrupt_handle_type intr_type)
rte_delay_ms(TEST_INTERRUPT_CHECK_INTERVAL);
while ((count =
- rte_intr_callback_unregister(&test_intr_handle,
+ rte_intr_callback_unregister(test_intr_handle,
test_interrupt_callback,
- &test_intr_handle)) < 0) {
+ test_intr_handle)) < 0) {
if (count != -EAGAIN)
return -1;
}
@@ -396,7 +454,7 @@ static int
test_interrupt(void)
{
int ret = -1;
- struct rte_intr_handle test_intr_handle;
+ struct rte_intr_handle *test_intr_handle;
if (test_interrupt_init() < 0) {
printf("fail to initialize for testing interrupt\n");
@@ -444,17 +502,20 @@ test_interrupt(void)
}
/* check if it will fail to register cb with invalid intr_handle */
- test_intr_handle = intr_handles[TEST_INTERRUPT_HANDLE_INVALID];
- if (rte_intr_callback_register(&test_intr_handle,
- test_interrupt_callback, &test_intr_handle) == 0) {
+ test_intr_handle = rte_intr_handle_instance_index_get(intr_handles,
+ TEST_INTERRUPT_HANDLE_INVALID);
+ if (rte_intr_callback_register(test_intr_handle,
+ test_interrupt_callback, test_intr_handle) == 0) {
printf("unexpectedly register successfully with invalid "
"intr_handle\n");
goto out;
}
/* check if it will fail to register without callback */
- test_intr_handle = intr_handles[TEST_INTERRUPT_HANDLE_VALID];
- if (rte_intr_callback_register(&test_intr_handle, NULL, &test_intr_handle) == 0) {
+ test_intr_handle = rte_intr_handle_instance_index_get(intr_handles,
+ TEST_INTERRUPT_HANDLE_VALID);
+ if (rte_intr_callback_register(test_intr_handle, NULL,
+ test_intr_handle) == 0) {
printf("unexpectedly register successfully with "
"null callback\n");
goto out;
@@ -469,39 +530,41 @@ test_interrupt(void)
}
/* check if it will fail to unregister cb with invalid intr_handle */
- test_intr_handle = intr_handles[TEST_INTERRUPT_HANDLE_INVALID];
- if (rte_intr_callback_unregister(&test_intr_handle,
- test_interrupt_callback, &test_intr_handle) > 0) {
+ test_intr_handle = rte_intr_handle_instance_index_get(intr_handles,
+ TEST_INTERRUPT_HANDLE_INVALID);
+ if (rte_intr_callback_unregister(test_intr_handle,
+ test_interrupt_callback, test_intr_handle) > 0) {
printf("unexpectedly unregister successfully with "
"invalid intr_handle\n");
goto out;
}
/* check if it is ok to register the same intr_handle twice */
- test_intr_handle = intr_handles[TEST_INTERRUPT_HANDLE_VALID];
- if (rte_intr_callback_register(&test_intr_handle,
- test_interrupt_callback, &test_intr_handle) < 0) {
+ test_intr_handle = rte_intr_handle_instance_index_get(intr_handles,
+ TEST_INTERRUPT_HANDLE_VALID);
+ if (rte_intr_callback_register(test_intr_handle,
+ test_interrupt_callback, test_intr_handle) < 0) {
printf("it fails to register test_interrupt_callback\n");
goto out;
}
- if (rte_intr_callback_register(&test_intr_handle,
- test_interrupt_callback_1, &test_intr_handle) < 0) {
+ if (rte_intr_callback_register(test_intr_handle,
+ test_interrupt_callback_1, test_intr_handle) < 0) {
printf("it fails to register test_interrupt_callback_1\n");
goto out;
}
/* check if it will fail to unregister with invalid parameter */
- if (rte_intr_callback_unregister(&test_intr_handle,
+ if (rte_intr_callback_unregister(test_intr_handle,
test_interrupt_callback, (void *)0xff) != 0) {
printf("unexpectedly unregisters successfully with "
"invalid arg\n");
goto out;
}
- if (rte_intr_callback_unregister(&test_intr_handle,
- test_interrupt_callback, &test_intr_handle) <= 0) {
+ if (rte_intr_callback_unregister(test_intr_handle,
+ test_interrupt_callback, test_intr_handle) <= 0) {
printf("it fails to unregister test_interrupt_callback\n");
goto out;
}
- if (rte_intr_callback_unregister(&test_intr_handle,
+ if (rte_intr_callback_unregister(test_intr_handle,
test_interrupt_callback_1, (void *)-1) <= 0) {
printf("it fails to unregister test_interrupt_callback_1 "
"for all\n");
@@ -528,28 +591,32 @@ test_interrupt(void)
out:
printf("Clearing for interrupt tests\n");
/* clear registered callbacks */
- test_intr_handle = intr_handles[TEST_INTERRUPT_HANDLE_VALID];
- rte_intr_callback_unregister(&test_intr_handle,
+ test_intr_handle = rte_intr_handle_instance_index_get(intr_handles,
+ TEST_INTERRUPT_HANDLE_VALID);
+ rte_intr_callback_unregister(test_intr_handle,
test_interrupt_callback, (void *)-1);
- rte_intr_callback_unregister(&test_intr_handle,
+ rte_intr_callback_unregister(test_intr_handle,
test_interrupt_callback_1, (void *)-1);
- test_intr_handle = intr_handles[TEST_INTERRUPT_HANDLE_VALID_UIO];
- rte_intr_callback_unregister(&test_intr_handle,
+ test_intr_handle = rte_intr_handle_instance_index_get(intr_handles,
+ TEST_INTERRUPT_HANDLE_VALID_UIO);
+ rte_intr_callback_unregister(test_intr_handle,
test_interrupt_callback, (void *)-1);
- rte_intr_callback_unregister(&test_intr_handle,
+ rte_intr_callback_unregister(test_intr_handle,
test_interrupt_callback_1, (void *)-1);
- test_intr_handle = intr_handles[TEST_INTERRUPT_HANDLE_VALID_ALARM];
- rte_intr_callback_unregister(&test_intr_handle,
+ test_intr_handle = rte_intr_handle_instance_index_get(intr_handles,
+ TEST_INTERRUPT_HANDLE_VALID_ALARM);
+ rte_intr_callback_unregister(test_intr_handle,
test_interrupt_callback, (void *)-1);
- rte_intr_callback_unregister(&test_intr_handle,
+ rte_intr_callback_unregister(test_intr_handle,
test_interrupt_callback_1, (void *)-1);
- test_intr_handle = intr_handles[TEST_INTERRUPT_HANDLE_VALID_DEV_EVENT];
- rte_intr_callback_unregister(&test_intr_handle,
+ test_intr_handle = rte_intr_handle_instance_index_get(intr_handles,
+ TEST_INTERRUPT_HANDLE_VALID_DEV_EVENT);
+ rte_intr_callback_unregister(test_intr_handle,
test_interrupt_callback, (void *)-1);
- rte_intr_callback_unregister(&test_intr_handle,
+ rte_intr_callback_unregister(test_intr_handle,
test_interrupt_callback_1, (void *)-1);
rte_delay_ms(2 * TEST_INTERRUPT_CHECK_INTERVAL);
--
2.18.0
^ permalink raw reply [flat|nested] 152+ messages in thread
* [dpdk-dev] [RFC 5/7] drivers: remove direct access to interrupt handle fields
2021-08-26 14:57 [dpdk-dev] [RFC 0/7] make rte_intr_handle internal Harman Kalra
` (3 preceding siblings ...)
2021-08-26 14:57 ` [dpdk-dev] [RFC 4/7] test/interrupt: apply get set interrupt handle APIs Harman Kalra
@ 2021-08-26 14:57 ` Harman Kalra
2021-08-26 14:57 ` [dpdk-dev] [RFC 6/7] eal/interrupts: make interrupt handle structure opaque Harman Kalra
` (7 subsequent siblings)
12 siblings, 0 replies; 152+ messages in thread
From: Harman Kalra @ 2021-08-26 14:57 UTC (permalink / raw)
To: dev, Nicolas Chautru, Parav Pandit, Xueming Li, Hemant Agrawal,
Sachin Saxena, Rosen Xu, Ferruh Yigit, Anatoly Burakov,
Stephen Hemminger, Long Li, Nithin Dabilpuram, Kiran Kumar K,
Sunil Kumar Kori, Satha Rao, Jerin Jacob, Ankur Dwivedi,
Anoob Joseph, Pavan Nikhilesh, Igor Russkikh, Steven Webster,
Matt Peters, Somalapuram Amaranath, Rasesh Mody, Shahed Shaikh,
Ajit Khaparde, Somnath Kotur, Haiyue Wang, Marcin Wojtas,
Michal Krawczyk, Shai Brandes, Evgeny Schemeilin, Igor Chauskin,
John Daley, Hyong Youb Kim, Gaetan Rivet, Qi Zhang, Xiao Wang,
Ziyang Xuan, Xiaoyun Wang, Guoyang Zhou, Min Hu (Connor),
Yisen Zhuang, Lijun Ou, Beilei Xing, Jingjing Wu, Qiming Yang,
Andrew Boyer, Jakub Grajciar, Matan Azrad, Shahaf Shuler,
Viacheslav Ovsiienko, Heinrich Kuhn, Jiawen Wu,
Devendra Singh Rawat, Andrew Rybchenko, Keith Wiles,
Maciej Czekaj, Jian Wang, Maxime Coquelin, Chenbo Xia, Yong Wang,
Tianfei zhang, Xiaoyun Li, Guy Kaneti, Bruce Richardson,
Thomas Monjalon
Cc: Harman Kalra
Removing direct access to the interrupt handle structure fields and
using the respective get/set APIs instead.
Making changes to all the drivers and libraries which currently
access the interrupt handle fields directly.
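The conversion expected of each driver follows a common pattern,
sketched below. This is an illustration only: dev, fd and the chosen
interrupt type are placeholders, not code from any particular driver in
the diff.

	/* probe/init: allocate an instance instead of embedding the struct */
	dev->intr_handle = rte_intr_handle_instance_alloc(
					RTE_INTR_HANDLE_DEFAULT_SIZE, false);
	if (dev->intr_handle == NULL)
		return -ENOMEM;

	/* populate fields through the set APIs only */
	rte_intr_handle_fd_set(dev->intr_handle, fd);
	rte_intr_handle_type_set(dev->intr_handle, RTE_INTR_HANDLE_VFIO_MSIX);

	/* remove/close: release the instance allocated at probe time */
	rte_intr_handle_instance_free(dev->intr_handle);
	dev->intr_handle = NULL;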
Signed-off-by: Harman Kalra <hkalra@marvell.com>
---
drivers/baseband/acc100/rte_acc100_pmd.c | 18 +--
.../fpga_5gnr_fec/rte_fpga_5gnr_fec.c | 13 +-
drivers/baseband/fpga_lte_fec/fpga_lte_fec.c | 14 ++-
drivers/bus/auxiliary/auxiliary_common.c | 2 +
drivers/bus/auxiliary/linux/auxiliary.c | 11 ++
drivers/bus/auxiliary/rte_bus_auxiliary.h | 2 +-
drivers/bus/dpaa/dpaa_bus.c | 28 ++++-
drivers/bus/dpaa/rte_dpaa_bus.h | 2 +-
drivers/bus/fslmc/fslmc_bus.c | 17 ++-
drivers/bus/fslmc/fslmc_vfio.c | 32 +++--
drivers/bus/fslmc/portal/dpaa2_hw_dpio.c | 21 +++-
drivers/bus/fslmc/portal/dpaa2_hw_pvt.h | 2 +-
drivers/bus/fslmc/rte_fslmc.h | 2 +-
drivers/bus/ifpga/ifpga_bus.c | 16 ++-
drivers/bus/ifpga/rte_bus_ifpga.h | 2 +-
drivers/bus/pci/linux/pci.c | 4 +-
drivers/bus/pci/linux/pci_uio.c | 73 +++++++----
drivers/bus/pci/linux/pci_vfio.c | 108 ++++++++++------
drivers/bus/pci/pci_common.c | 29 ++++-
drivers/bus/pci/pci_common_uio.c | 21 ++--
drivers/bus/pci/rte_bus_pci.h | 4 +-
drivers/bus/vmbus/linux/vmbus_bus.c | 7 ++
drivers/bus/vmbus/linux/vmbus_uio.c | 37 ++++--
drivers/bus/vmbus/rte_bus_vmbus.h | 2 +-
drivers/bus/vmbus/vmbus_common_uio.c | 24 ++--
drivers/common/cnxk/roc_cpt.c | 8 +-
drivers/common/cnxk/roc_dev.c | 14 +--
drivers/common/cnxk/roc_irq.c | 106 +++++++++-------
drivers/common/cnxk/roc_nix_irq.c | 37 +++---
drivers/common/cnxk/roc_npa.c | 2 +-
drivers/common/cnxk/roc_platform.h | 34 +++++
drivers/common/cnxk/roc_sso.c | 4 +-
drivers/common/cnxk/roc_tim.c | 4 +-
drivers/common/octeontx2/otx2_dev.c | 14 +--
drivers/common/octeontx2/otx2_irq.c | 117 ++++++++++--------
.../octeontx2/otx2_cryptodev_hw_access.c | 4 +-
drivers/event/octeontx2/otx2_evdev_irq.c | 12 +-
drivers/mempool/octeontx2/otx2_mempool.c | 2 +-
drivers/net/atlantic/atl_ethdev.c | 22 ++--
drivers/net/avp/avp_ethdev.c | 8 +-
drivers/net/axgbe/axgbe_ethdev.c | 12 +-
drivers/net/axgbe/axgbe_mdio.c | 6 +-
drivers/net/bnx2x/bnx2x_ethdev.c | 10 +-
drivers/net/bnxt/bnxt_ethdev.c | 32 +++--
drivers/net/bnxt/bnxt_irq.c | 4 +-
drivers/net/dpaa/dpaa_ethdev.c | 47 ++++---
drivers/net/dpaa2/dpaa2_ethdev.c | 10 +-
drivers/net/e1000/em_ethdev.c | 24 ++--
drivers/net/e1000/igb_ethdev.c | 84 ++++++-------
drivers/net/ena/ena_ethdev.c | 36 +++---
drivers/net/enic/enic_main.c | 27 ++--
drivers/net/failsafe/failsafe.c | 24 +++-
drivers/net/failsafe/failsafe_intr.c | 45 ++++---
drivers/net/failsafe/failsafe_ops.c | 23 +++-
drivers/net/failsafe/failsafe_private.h | 2 +-
drivers/net/fm10k/fm10k_ethdev.c | 32 ++---
drivers/net/hinic/hinic_pmd_ethdev.c | 10 +-
drivers/net/hns3/hns3_ethdev.c | 50 ++++----
drivers/net/hns3/hns3_ethdev_vf.c | 57 +++++----
drivers/net/hns3/hns3_rxtx.c | 2 +-
drivers/net/i40e/i40e_ethdev.c | 55 ++++----
drivers/net/i40e/i40e_ethdev_vf.c | 43 +++----
drivers/net/iavf/iavf_ethdev.c | 41 +++---
drivers/net/iavf/iavf_vchnl.c | 4 +-
drivers/net/ice/ice_dcf.c | 10 +-
drivers/net/ice/ice_dcf_ethdev.c | 23 ++--
drivers/net/ice/ice_ethdev.c | 51 ++++----
drivers/net/igc/igc_ethdev.c | 47 ++++---
drivers/net/ionic/ionic_ethdev.c | 12 +-
drivers/net/ixgbe/ixgbe_ethdev.c | 70 +++++------
drivers/net/memif/memif_socket.c | 99 ++++++++++-----
drivers/net/memif/memif_socket.h | 4 +-
drivers/net/memif/rte_eth_memif.c | 63 ++++++++--
drivers/net/memif/rte_eth_memif.h | 2 +-
drivers/net/mlx4/mlx4.c | 20 ++-
drivers/net/mlx4/mlx4.h | 2 +-
drivers/net/mlx4/mlx4_intr.c | 48 ++++---
drivers/net/mlx5/linux/mlx5_os.c | 56 ++++++---
drivers/net/mlx5/linux/mlx5_socket.c | 26 ++--
drivers/net/mlx5/mlx5.h | 6 +-
drivers/net/mlx5/mlx5_rxq.c | 43 ++++---
drivers/net/mlx5/mlx5_trigger.c | 4 +-
drivers/net/mlx5/mlx5_txpp.c | 27 ++--
drivers/net/netvsc/hn_ethdev.c | 4 +-
drivers/net/nfp/nfp_net.c | 42 ++++---
drivers/net/ngbe/ngbe_ethdev.c | 31 +++--
drivers/net/octeontx2/otx2_ethdev_irq.c | 35 +++---
drivers/net/qede/qede_ethdev.c | 16 +--
drivers/net/sfc/sfc_intr.c | 29 ++---
drivers/net/tap/rte_eth_tap.c | 37 ++++--
drivers/net/tap/rte_eth_tap.h | 2 +-
drivers/net/tap/tap_intr.c | 33 +++--
drivers/net/thunderx/nicvf_ethdev.c | 13 ++
drivers/net/thunderx/nicvf_struct.h | 2 +-
drivers/net/txgbe/txgbe_ethdev.c | 36 +++---
drivers/net/txgbe/txgbe_ethdev_vf.c | 35 +++---
drivers/net/vhost/rte_eth_vhost.c | 78 +++++++-----
drivers/net/virtio/virtio_ethdev.c | 17 +--
.../net/virtio/virtio_user/virtio_user_dev.c | 53 +++++---
drivers/net/vmxnet3/vmxnet3_ethdev.c | 45 ++++---
drivers/raw/ifpga/ifpga_rawdev.c | 42 +++++--
drivers/raw/ntb/ntb.c | 10 +-
.../regex/octeontx2/otx2_regexdev_hw_access.c | 4 +-
drivers/vdpa/ifc/ifcvf_vdpa.c | 5 +-
drivers/vdpa/mlx5/mlx5_vdpa.c | 11 ++
drivers/vdpa/mlx5/mlx5_vdpa.h | 4 +-
drivers/vdpa/mlx5/mlx5_vdpa_event.c | 22 ++--
drivers/vdpa/mlx5/mlx5_vdpa_virtq.c | 46 ++++---
lib/bbdev/rte_bbdev.c | 4 +-
lib/eal/freebsd/eal_alarm.c | 7 ++
lib/eal/include/rte_eal_trace.h | 24 +---
lib/eal/linux/eal_alarm.c | 31 +++--
lib/eal/linux/eal_dev.c | 65 ++++++----
lib/ethdev/ethdev_pci.h | 2 +-
lib/ethdev/rte_ethdev.c | 14 +--
115 files changed, 1808 insertions(+), 1163 deletions(-)
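
Note for reviewers: every per-driver hunk below follows the same recipe. The embedded struct rte_intr_handle member becomes a pointer, the owning bus or driver allocates it with rte_intr_handle_instance_alloc() and releases it with rte_intr_handle_instance_free(), and each direct field access is replaced by the matching get/set accessor. A minimal sketch of that recipe follows; the device structure, function name and header location are illustrative assumptions, not taken from any one driver.

/* Illustrative sketch only: "demo_dev" and demo_dev_init() are made-up
 * names; the header carrying the accessor prototypes is assumed here.
 */
#include <errno.h>
#include <rte_errno.h>
#include <rte_interrupts.h>

struct demo_dev {
	struct rte_intr_handle *intr_handle; /* was: struct rte_intr_handle intr_handle */
};

static int
demo_dev_init(struct demo_dev *dev, int fd)
{
	/* Allocate the now-opaque handle instead of embedding it */
	dev->intr_handle = rte_intr_handle_instance_alloc(
			RTE_INTR_HANDLE_DEFAULT_SIZE, false);
	if (dev->intr_handle == NULL)
		return -ENOMEM;

	/* was: dev->intr_handle.fd = fd */
	if (rte_intr_handle_fd_set(dev->intr_handle, fd))
		goto error;

	/* was: dev->intr_handle.type = RTE_INTR_HANDLE_VFIO_MSIX */
	if (rte_intr_handle_type_set(dev->intr_handle,
				     RTE_INTR_HANDLE_VFIO_MSIX))
		goto error;

	return 0;

error:
	rte_intr_handle_instance_free(dev->intr_handle);
	dev->intr_handle = NULL;
	return -rte_errno;
}
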
diff --git a/drivers/baseband/acc100/rte_acc100_pmd.c b/drivers/baseband/acc100/rte_acc100_pmd.c
index 68ba523ea9..5097b240ee 100644
--- a/drivers/baseband/acc100/rte_acc100_pmd.c
+++ b/drivers/baseband/acc100/rte_acc100_pmd.c
@@ -720,8 +720,10 @@ acc100_intr_enable(struct rte_bbdev *dev)
struct acc100_device *d = dev->data->dev_private;
/* Only MSI are currently supported */
- if (dev->intr_handle->type == RTE_INTR_HANDLE_VFIO_MSI ||
- dev->intr_handle->type == RTE_INTR_HANDLE_UIO) {
+ if (rte_intr_handle_type_get(dev->intr_handle) ==
+ RTE_INTR_HANDLE_VFIO_MSI ||
+ rte_intr_handle_type_get(dev->intr_handle) ==
+ RTE_INTR_HANDLE_UIO) {
ret = allocate_info_ring(dev);
if (ret < 0) {
@@ -1096,8 +1098,9 @@ acc100_queue_intr_enable(struct rte_bbdev *dev, uint16_t queue_id)
{
struct acc100_queue *q = dev->data->queues[queue_id].queue_private;
- if (dev->intr_handle->type != RTE_INTR_HANDLE_VFIO_MSI &&
- dev->intr_handle->type != RTE_INTR_HANDLE_UIO)
+ if (rte_intr_handle_type_get(dev->intr_handle) !=
+ RTE_INTR_HANDLE_VFIO_MSI &&
+ rte_intr_handle_type_get(dev->intr_handle) != RTE_INTR_HANDLE_UIO)
return -ENOTSUP;
q->irq_enable = 1;
@@ -1109,8 +1112,9 @@ acc100_queue_intr_disable(struct rte_bbdev *dev, uint16_t queue_id)
{
struct acc100_queue *q = dev->data->queues[queue_id].queue_private;
- if (dev->intr_handle->type != RTE_INTR_HANDLE_VFIO_MSI &&
- dev->intr_handle->type != RTE_INTR_HANDLE_UIO)
+ if (rte_intr_handle_type_get(dev->intr_handle) !=
+ RTE_INTR_HANDLE_VFIO_MSI &&
+ rte_intr_handle_type_get(dev->intr_handle) != RTE_INTR_HANDLE_UIO)
return -ENOTSUP;
q->irq_enable = 0;
@@ -4178,7 +4182,7 @@ static int acc100_pci_probe(struct rte_pci_driver *pci_drv,
/* Fill HW specific part of device structure */
bbdev->device = &pci_dev->device;
- bbdev->intr_handle = &pci_dev->intr_handle;
+ bbdev->intr_handle = pci_dev->intr_handle;
bbdev->data->socket_id = pci_dev->device.numa_node;
/* Invoke ACC100 device initialization function */
diff --git a/drivers/baseband/fpga_5gnr_fec/rte_fpga_5gnr_fec.c b/drivers/baseband/fpga_5gnr_fec/rte_fpga_5gnr_fec.c
index 6485cc824a..34a6da9a46 100644
--- a/drivers/baseband/fpga_5gnr_fec/rte_fpga_5gnr_fec.c
+++ b/drivers/baseband/fpga_5gnr_fec/rte_fpga_5gnr_fec.c
@@ -743,12 +743,13 @@ fpga_intr_enable(struct rte_bbdev *dev)
* invoked when any FPGA queue issues interrupt.
*/
for (i = 0; i < FPGA_NUM_INTR_VEC; ++i)
- dev->intr_handle->efds[i] = dev->intr_handle->fd;
+ if (rte_intr_handle_efds_index_set(dev->intr_handle, i,
+ rte_intr_handle_fd_get(dev->intr_handle)))
+ return -rte_errno;
- if (!dev->intr_handle->intr_vec) {
- dev->intr_handle->intr_vec = rte_zmalloc("intr_vec",
- dev->data->num_queues * sizeof(int), 0);
- if (!dev->intr_handle->intr_vec) {
+ if (!rte_intr_handle_vec_list_base(dev->intr_handle)) {
+ if (rte_intr_handle_vec_list_alloc(dev->intr_handle, "intr_vec",
+ dev->data->num_queues)) {
rte_bbdev_log(ERR, "Failed to allocate %u vectors",
dev->data->num_queues);
return -ENOMEM;
@@ -1879,7 +1880,7 @@ fpga_5gnr_fec_probe(struct rte_pci_driver *pci_drv,
/* Fill HW specific part of device structure */
bbdev->device = &pci_dev->device;
- bbdev->intr_handle = &pci_dev->intr_handle;
+ bbdev->intr_handle = pci_dev->intr_handle;
bbdev->data->socket_id = pci_dev->device.numa_node;
/* Invoke FEC FPGA device initialization function */
diff --git a/drivers/baseband/fpga_lte_fec/fpga_lte_fec.c b/drivers/baseband/fpga_lte_fec/fpga_lte_fec.c
index 350c4248eb..0a718fbcd9 100644
--- a/drivers/baseband/fpga_lte_fec/fpga_lte_fec.c
+++ b/drivers/baseband/fpga_lte_fec/fpga_lte_fec.c
@@ -1014,18 +1014,20 @@ fpga_intr_enable(struct rte_bbdev *dev)
* invoked when any FPGA queue issues interrupt.
*/
for (i = 0; i < FPGA_NUM_INTR_VEC; ++i)
- dev->intr_handle->efds[i] = dev->intr_handle->fd;
+ if (rte_intr_handle_efds_index_set(dev->intr_handle, i,
+ rte_intr_handle_fd_get(dev->intr_handle)))
+ return -rte_errno;
- if (!dev->intr_handle->intr_vec) {
- dev->intr_handle->intr_vec = rte_zmalloc("intr_vec",
- dev->data->num_queues * sizeof(int), 0);
- if (!dev->intr_handle->intr_vec) {
+ if (!rte_intr_handle_vec_list_base(dev->intr_handle)) {
+ if (rte_intr_handle_vec_list_alloc(dev->intr_handle, "intr_vec",
+ dev->data->num_queues)) {
rte_bbdev_log(ERR, "Failed to allocate %u vectors",
dev->data->num_queues);
return -ENOMEM;
}
}
+
ret = rte_intr_enable(dev->intr_handle);
if (ret < 0) {
rte_bbdev_log(ERR,
@@ -2369,7 +2371,7 @@ fpga_lte_fec_probe(struct rte_pci_driver *pci_drv,
/* Fill HW specific part of device structure */
bbdev->device = &pci_dev->device;
- bbdev->intr_handle = &pci_dev->intr_handle;
+ bbdev->intr_handle = pci_dev->intr_handle;
bbdev->data->socket_id = pci_dev->device.numa_node;
/* Invoke FEC FPGA device initialization function */
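
Both FPGA bbdev drivers above convert the same two idioms: writing the handle fd into every efds[] slot now goes through rte_intr_handle_efds_index_set(), and the open-coded rte_zmalloc() of intr_vec becomes rte_intr_handle_vec_list_alloc() guarded by rte_intr_handle_vec_list_base(). A condensed sketch of just that fragment; num_vec and num_queues stand in for FPGA_NUM_INTR_VEC and dev->data->num_queues, and the wrapper name is made up.

/* Condensed sketch of the converted fpga_intr_enable() fragment. */
static int
demo_intr_vectors_setup(struct rte_intr_handle *handle,
			unsigned int num_vec, unsigned int num_queues)
{
	unsigned int i;

	/* was: handle->efds[i] = handle->fd */
	for (i = 0; i < num_vec; i++)
		if (rte_intr_handle_efds_index_set(handle, i,
				rte_intr_handle_fd_get(handle)))
			return -rte_errno;

	/* was: handle->intr_vec = rte_zmalloc("intr_vec", ...) */
	if (rte_intr_handle_vec_list_base(handle) == NULL &&
	    rte_intr_handle_vec_list_alloc(handle, "intr_vec", num_queues))
		return -ENOMEM;

	return 0;
}
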
diff --git a/drivers/bus/auxiliary/auxiliary_common.c b/drivers/bus/auxiliary/auxiliary_common.c
index 603b6fdc02..7298a03d86 100644
--- a/drivers/bus/auxiliary/auxiliary_common.c
+++ b/drivers/bus/auxiliary/auxiliary_common.c
@@ -320,6 +320,8 @@ auxiliary_unplug(struct rte_device *dev)
if (ret == 0) {
rte_auxiliary_remove_device(adev);
rte_devargs_remove(dev->devargs);
+ if (adev->intr_handle)
+ rte_intr_handle_instance_free(adev->intr_handle);
free(adev);
}
return ret;
diff --git a/drivers/bus/auxiliary/linux/auxiliary.c b/drivers/bus/auxiliary/linux/auxiliary.c
index 9bd4ee3295..236fdc9bf7 100644
--- a/drivers/bus/auxiliary/linux/auxiliary.c
+++ b/drivers/bus/auxiliary/linux/auxiliary.c
@@ -39,6 +39,15 @@ auxiliary_scan_one(const char *dirname, const char *name)
dev->device.name = dev->name;
dev->device.bus = &auxiliary_bus.bus;
+ /* Allocate interrupt instance */
+ dev->intr_handle =
+ rte_intr_handle_instance_alloc(RTE_INTR_HANDLE_DEFAULT_SIZE,
+ false);
+ if (!dev->intr_handle) {
+ free(dev);
+ return -1;
+ }
+
/* Get NUMA node, default to 0 if not present */
snprintf(filename, sizeof(filename), "%s/%s/numa_node",
dirname, name);
@@ -67,6 +76,8 @@ auxiliary_scan_one(const char *dirname, const char *name)
rte_devargs_remove(dev2->device.devargs);
auxiliary_on_scan(dev2);
}
+ if (dev->intr_handle)
+ rte_intr_handle_instance_free(dev->intr_handle);
free(dev);
}
return 0;
diff --git a/drivers/bus/auxiliary/rte_bus_auxiliary.h b/drivers/bus/auxiliary/rte_bus_auxiliary.h
index 2462bad2ba..7642964622 100644
--- a/drivers/bus/auxiliary/rte_bus_auxiliary.h
+++ b/drivers/bus/auxiliary/rte_bus_auxiliary.h
@@ -116,7 +116,7 @@ struct rte_auxiliary_device {
TAILQ_ENTRY(rte_auxiliary_device) next; /**< Next probed device. */
struct rte_device device; /**< Inherit core device */
char name[RTE_DEV_NAME_MAX_LEN + 1]; /**< ASCII device name */
- struct rte_intr_handle intr_handle; /**< Interrupt handle */
+ struct rte_intr_handle *intr_handle; /**< Interrupt handle */
struct rte_auxiliary_driver *driver; /**< Device driver */
};
diff --git a/drivers/bus/dpaa/dpaa_bus.c b/drivers/bus/dpaa/dpaa_bus.c
index e499305d85..52b2a4883e 100644
--- a/drivers/bus/dpaa/dpaa_bus.c
+++ b/drivers/bus/dpaa/dpaa_bus.c
@@ -172,6 +172,15 @@ dpaa_create_device_list(void)
dev->device.bus = &rte_dpaa_bus.bus;
+ /* Allocate interrupt handle instance */
+ dev->intr_handle = rte_intr_handle_instance_alloc(
+ RTE_INTR_HANDLE_DEFAULT_SIZE, false);
+ if (!dev->intr_handle) {
+ DPAA_BUS_LOG(ERR, "Failed to allocate intr handle");
+ ret = -ENOMEM;
+ goto cleanup;
+ }
+
cfg = &dpaa_netcfg->port_cfg[i];
fman_intf = cfg->fman_if;
@@ -214,6 +223,15 @@ dpaa_create_device_list(void)
goto cleanup;
}
+ /* Allocate interrupt handle instance */
+ dev->intr_handle = rte_intr_handle_instance_alloc(
+ RTE_INTR_HANDLE_DEFAULT_SIZE, false);
+ if (!dev->intr_handle) {
+ DPAA_BUS_LOG(ERR, "Failed to allocate intr handle");
+ ret = -ENOMEM;
+ goto cleanup;
+ }
+
dev->device_type = FSL_DPAA_CRYPTO;
dev->id.dev_id = rte_dpaa_bus.device_count + i;
@@ -247,6 +265,7 @@ dpaa_clean_device_list(void)
TAILQ_FOREACH_SAFE(dev, &rte_dpaa_bus.device_list, next, tdev) {
TAILQ_REMOVE(&rte_dpaa_bus.device_list, dev, next);
+ rte_intr_handle_instance_free(dev->intr_handle);
free(dev);
dev = NULL;
}
@@ -559,8 +578,11 @@ static int rte_dpaa_setup_intr(struct rte_intr_handle *intr_handle)
return errno;
}
- intr_handle->fd = fd;
- intr_handle->type = RTE_INTR_HANDLE_EXT;
+ if (rte_intr_handle_fd_set(intr_handle, fd))
+ return -rte_errno;
+
+ if (rte_intr_handle_type_set(intr_handle, RTE_INTR_HANDLE_EXT))
+ return -rte_errno;
return 0;
}
@@ -612,7 +634,7 @@ rte_dpaa_bus_probe(void)
TAILQ_FOREACH(dev, &rte_dpaa_bus.device_list, next) {
if (dev->device_type == FSL_DPAA_ETH) {
- ret = rte_dpaa_setup_intr(&dev->intr_handle);
+ ret = rte_dpaa_setup_intr(dev->intr_handle);
if (ret)
DPAA_BUS_ERR("Error setting up interrupt.\n");
}
diff --git a/drivers/bus/dpaa/rte_dpaa_bus.h b/drivers/bus/dpaa/rte_dpaa_bus.h
index 48d5cf4625..f32cb038b4 100644
--- a/drivers/bus/dpaa/rte_dpaa_bus.h
+++ b/drivers/bus/dpaa/rte_dpaa_bus.h
@@ -101,7 +101,7 @@ struct rte_dpaa_device {
};
struct rte_dpaa_driver *driver;
struct dpaa_device_id id;
- struct rte_intr_handle intr_handle;
+ struct rte_intr_handle *intr_handle;
enum rte_dpaa_type device_type; /**< Ethernet or crypto type device */
char name[RTE_ETH_NAME_MAX_LEN];
};
diff --git a/drivers/bus/fslmc/fslmc_bus.c b/drivers/bus/fslmc/fslmc_bus.c
index becc455f6b..3a1b0d0a45 100644
--- a/drivers/bus/fslmc/fslmc_bus.c
+++ b/drivers/bus/fslmc/fslmc_bus.c
@@ -47,6 +47,8 @@ cleanup_fslmc_device_list(void)
TAILQ_FOREACH_SAFE(dev, &rte_fslmc_bus.device_list, next, t_dev) {
TAILQ_REMOVE(&rte_fslmc_bus.device_list, dev, next);
+ if (dev->intr_handle)
+ rte_intr_handle_instance_free(dev->intr_handle);
free(dev);
dev = NULL;
}
@@ -160,6 +162,16 @@ scan_one_fslmc_device(char *dev_name)
dev->device.bus = &rte_fslmc_bus.bus;
+ /* Allocate interrupt instance */
+ dev->intr_handle =
+ rte_intr_handle_instance_alloc(RTE_INTR_HANDLE_DEFAULT_SIZE,
+ false);
+ if (!dev->intr_handle) {
+ DPAA2_BUS_ERR("Failed to allocate intr handle");
+ ret = -ENOMEM;
+ goto cleanup;
+ }
+
/* Parse the device name and ID */
t_ptr = strtok(dup_dev_name, ".");
if (!t_ptr) {
@@ -220,8 +232,11 @@ scan_one_fslmc_device(char *dev_name)
cleanup:
if (dup_dev_name)
free(dup_dev_name);
- if (dev)
+ if (dev) {
+ if (dev->intr_handle)
+ rte_intr_handle_instance_free(dev->intr_handle);
free(dev);
+ }
return ret;
}
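
The bus-level changes (auxiliary, dpaa, fslmc above, and ifpga, pci and vmbus further down) share one lifecycle: the interrupt instance is allocated when the device object is created during scan, and freed on every cleanup/unplug path right before free(dev). A rough sketch of that lifecycle, with hypothetical bus and helper names:

/* Lifecycle sketch; "demo_bus_dev" and both helpers are made-up names. */
#include <stdlib.h>
#include <rte_interrupts.h>

struct demo_bus_dev {
	struct rte_intr_handle *intr_handle;
};

static struct demo_bus_dev *
demo_bus_scan_one(void)
{
	struct demo_bus_dev *dev = calloc(1, sizeof(*dev));

	if (dev == NULL)
		return NULL;

	/* allocated once at scan time, owned by the bus */
	dev->intr_handle = rte_intr_handle_instance_alloc(
			RTE_INTR_HANDLE_DEFAULT_SIZE, false);
	if (dev->intr_handle == NULL) {
		free(dev);
		return NULL;
	}
	return dev;
}

static void
demo_bus_cleanup_one(struct demo_bus_dev *dev)
{
	/* freed on unplug/cleanup, before the device object itself */
	if (dev->intr_handle)
		rte_intr_handle_instance_free(dev->intr_handle);
	free(dev);
}
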
diff --git a/drivers/bus/fslmc/fslmc_vfio.c b/drivers/bus/fslmc/fslmc_vfio.c
index c8373e627a..b002b5e443 100644
--- a/drivers/bus/fslmc/fslmc_vfio.c
+++ b/drivers/bus/fslmc/fslmc_vfio.c
@@ -599,7 +599,7 @@ int rte_dpaa2_intr_enable(struct rte_intr_handle *intr_handle, int index)
int len, ret;
char irq_set_buf[IRQ_SET_BUF_LEN];
struct vfio_irq_set *irq_set;
- int *fd_ptr;
+ int *fd_ptr, vfio_dev_fd;
len = sizeof(irq_set_buf);
@@ -611,12 +611,14 @@ int rte_dpaa2_intr_enable(struct rte_intr_handle *intr_handle, int index)
irq_set->index = index;
irq_set->start = 0;
fd_ptr = (int *)&irq_set->data;
- *fd_ptr = intr_handle->fd;
+ *fd_ptr = rte_intr_handle_fd_get(intr_handle);
- ret = ioctl(intr_handle->vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
+ vfio_dev_fd = rte_intr_handle_dev_fd_get(intr_handle);
+ ret = ioctl(vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
if (ret) {
DPAA2_BUS_ERR("Error:dpaa2 SET IRQs fd=%d, err = %d(%s)",
- intr_handle->fd, errno, strerror(errno));
+ rte_intr_handle_fd_get(intr_handle), errno,
+ strerror(errno));
return ret;
}
@@ -627,7 +629,7 @@ int rte_dpaa2_intr_disable(struct rte_intr_handle *intr_handle, int index)
{
struct vfio_irq_set *irq_set;
char irq_set_buf[IRQ_SET_BUF_LEN];
- int len, ret;
+ int len, ret, vfio_dev_fd;
len = sizeof(struct vfio_irq_set);
@@ -638,11 +640,12 @@ int rte_dpaa2_intr_disable(struct rte_intr_handle *intr_handle, int index)
irq_set->start = 0;
irq_set->count = 0;
- ret = ioctl(intr_handle->vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
+ vfio_dev_fd = rte_intr_handle_dev_fd_get(intr_handle);
+ ret = ioctl(vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
if (ret)
DPAA2_BUS_ERR(
"Error disabling dpaa2 interrupts for fd %d",
- intr_handle->fd);
+ rte_intr_handle_fd_get(intr_handle));
return ret;
}
@@ -684,9 +687,16 @@ rte_dpaa2_vfio_setup_intr(struct rte_intr_handle *intr_handle,
return -1;
}
- intr_handle->fd = fd;
- intr_handle->type = RTE_INTR_HANDLE_VFIO_MSI;
- intr_handle->vfio_dev_fd = vfio_dev_fd;
+ if (rte_intr_handle_fd_set(intr_handle, fd))
+ return -rte_errno;
+
+ if (rte_intr_handle_type_set(intr_handle,
+ RTE_INTR_HANDLE_VFIO_MSI))
+ return -rte_errno;
+
+ if (rte_intr_handle_dev_fd_set(intr_handle, vfio_dev_fd))
+ return -rte_errno;
+
return 0;
}
@@ -711,7 +721,7 @@ fslmc_process_iodevices(struct rte_dpaa2_device *dev)
switch (dev->dev_type) {
case DPAA2_ETH:
- rte_dpaa2_vfio_setup_intr(&dev->intr_handle, dev_fd,
+ rte_dpaa2_vfio_setup_intr(dev->intr_handle, dev_fd,
device_info.num_irqs);
break;
case DPAA2_CON:
diff --git a/drivers/bus/fslmc/portal/dpaa2_hw_dpio.c b/drivers/bus/fslmc/portal/dpaa2_hw_dpio.c
index 1a1e437ed1..479d3d71d7 100644
--- a/drivers/bus/fslmc/portal/dpaa2_hw_dpio.c
+++ b/drivers/bus/fslmc/portal/dpaa2_hw_dpio.c
@@ -176,7 +176,7 @@ static int dpaa2_dpio_intr_init(struct dpaa2_dpio_dev *dpio_dev)
int threshold = 0x3, timeout = 0xFF;
dpio_epoll_fd = epoll_create(1);
- ret = rte_dpaa2_intr_enable(&dpio_dev->intr_handle, 0);
+ ret = rte_dpaa2_intr_enable(dpio_dev->intr_handle, 0);
if (ret) {
DPAA2_BUS_ERR("Interrupt registeration failed");
return -1;
@@ -195,7 +195,7 @@ static int dpaa2_dpio_intr_init(struct dpaa2_dpio_dev *dpio_dev)
qbman_swp_dqrr_thrshld_write(dpio_dev->sw_portal, threshold);
qbman_swp_intr_timeout_write(dpio_dev->sw_portal, timeout);
- eventfd = dpio_dev->intr_handle.fd;
+ eventfd = rte_intr_handle_fd_get(dpio_dev->intr_handle);
epoll_ev.events = EPOLLIN | EPOLLPRI | EPOLLET;
epoll_ev.data.fd = eventfd;
@@ -213,7 +213,7 @@ static void dpaa2_dpio_intr_deinit(struct dpaa2_dpio_dev *dpio_dev)
{
int ret;
- ret = rte_dpaa2_intr_disable(&dpio_dev->intr_handle, 0);
+ ret = rte_dpaa2_intr_disable(dpio_dev->intr_handle, 0);
if (ret)
DPAA2_BUS_ERR("DPIO interrupt disable failed");
@@ -388,6 +388,15 @@ dpaa2_create_dpio_device(int vdev_fd,
/* Using single portal for all devices */
dpio_dev->mc_portal = dpaa2_get_mcp_ptr(MC_PORTAL_INDEX);
+ /* Allocate interrupt instance */
+ dpio_dev->intr_handle =
+ rte_intr_handle_instance_alloc(RTE_INTR_HANDLE_DEFAULT_SIZE,
+ true);
+ if (!dpio_dev->intr_handle) {
+ DPAA2_BUS_ERR("Failed to allocate intr handle");
+ goto err;
+ }
+
dpio_dev->dpio = rte_zmalloc(NULL, sizeof(struct fsl_mc_io),
RTE_CACHE_LINE_SIZE);
if (!dpio_dev->dpio) {
@@ -490,7 +499,7 @@ dpaa2_create_dpio_device(int vdev_fd,
io_space_count++;
dpio_dev->index = io_space_count;
- if (rte_dpaa2_vfio_setup_intr(&dpio_dev->intr_handle, vdev_fd, 1)) {
+ if (rte_dpaa2_vfio_setup_intr(dpio_dev->intr_handle, vdev_fd, 1)) {
DPAA2_BUS_ERR("Fail to setup interrupt for %d",
dpio_dev->hw_id);
goto err;
@@ -538,6 +547,8 @@ dpaa2_create_dpio_device(int vdev_fd,
rte_free(dpio_dev->dpio);
}
+ if (dpio_dev->intr_handle)
+ rte_intr_handle_instance_free(dpio_dev->intr_handle);
rte_free(dpio_dev);
/* For each element in the list, cleanup */
@@ -549,6 +560,8 @@ dpaa2_create_dpio_device(int vdev_fd,
dpio_dev->token);
rte_free(dpio_dev->dpio);
}
+ if (dpio_dev->intr_handle)
+ rte_intr_handle_instance_free(dpio_dev->intr_handle);
rte_free(dpio_dev);
}
diff --git a/drivers/bus/fslmc/portal/dpaa2_hw_pvt.h b/drivers/bus/fslmc/portal/dpaa2_hw_pvt.h
index 037c841ef5..b1bba1ac36 100644
--- a/drivers/bus/fslmc/portal/dpaa2_hw_pvt.h
+++ b/drivers/bus/fslmc/portal/dpaa2_hw_pvt.h
@@ -116,7 +116,7 @@ struct dpaa2_dpio_dev {
uintptr_t qbman_portal_ci_paddr;
/**< Physical address of Cache Inhibit Area */
uintptr_t ci_size; /**< Size of the CI region */
- struct rte_intr_handle intr_handle; /* Interrupt related info */
+ struct rte_intr_handle *intr_handle; /* Interrupt related info */
int32_t epoll_fd; /**< File descriptor created for interrupt polling */
int32_t hw_id; /**< An unique ID of this DPIO device instance */
struct dpaa2_portal_dqrr dpaa2_held_bufs;
diff --git a/drivers/bus/fslmc/rte_fslmc.h b/drivers/bus/fslmc/rte_fslmc.h
index 37d45dffe5..e46110b3ea 100644
--- a/drivers/bus/fslmc/rte_fslmc.h
+++ b/drivers/bus/fslmc/rte_fslmc.h
@@ -125,7 +125,7 @@ struct rte_dpaa2_device {
};
enum rte_dpaa2_dev_type dev_type; /**< Device Type */
uint16_t object_id; /**< DPAA2 Object ID */
- struct rte_intr_handle intr_handle; /**< Interrupt handle */
+ struct rte_intr_handle *intr_handle; /**< Interrupt handle */
struct rte_dpaa2_driver *driver; /**< Associated driver */
char name[FSLMC_OBJECT_MAX_LEN]; /**< DPAA2 Object name*/
};
diff --git a/drivers/bus/ifpga/ifpga_bus.c b/drivers/bus/ifpga/ifpga_bus.c
index 62887da2d8..bebb584796 100644
--- a/drivers/bus/ifpga/ifpga_bus.c
+++ b/drivers/bus/ifpga/ifpga_bus.c
@@ -161,6 +161,15 @@ ifpga_scan_one(struct rte_rawdev *rawdev,
afu_dev->id.uuid.uuid_high = 0;
afu_dev->id.port = afu_pr_conf.afu_id.port;
+ /* Allocate interrupt instance */
+ afu_dev->intr_handle =
+ rte_intr_handle_instance_alloc(RTE_INTR_HANDLE_DEFAULT_SIZE,
+ false);
+ if (!afu_dev->intr_handle) {
+ IFPGA_BUS_ERR("Failed to allocate intr handle");
+ goto end;
+ }
+
if (rawdev->dev_ops && rawdev->dev_ops->dev_info_get)
rawdev->dev_ops->dev_info_get(rawdev, afu_dev, sizeof(*afu_dev));
@@ -189,8 +198,11 @@ ifpga_scan_one(struct rte_rawdev *rawdev,
rte_kvargs_free(kvlist);
if (path)
free(path);
- if (afu_dev)
+ if (afu_dev) {
+ if (afu_dev->intr_handle)
+ rte_intr_handle_instance_free(afu_dev->intr_handle);
free(afu_dev);
+ }
return NULL;
}
@@ -396,6 +408,8 @@ ifpga_unplug(struct rte_device *dev)
TAILQ_REMOVE(&ifpga_afu_dev_list, afu_dev, next);
rte_devargs_remove(dev->devargs);
+ if (afu_dev->intr_handle)
+ rte_intr_handle_instance_free(afu_dev->intr_handle);
free(afu_dev);
return 0;
diff --git a/drivers/bus/ifpga/rte_bus_ifpga.h b/drivers/bus/ifpga/rte_bus_ifpga.h
index b43084155a..38caaf2e8f 100644
--- a/drivers/bus/ifpga/rte_bus_ifpga.h
+++ b/drivers/bus/ifpga/rte_bus_ifpga.h
@@ -79,7 +79,7 @@ struct rte_afu_device {
struct rte_mem_resource mem_resource[PCI_MAX_RESOURCE];
/**< AFU Memory Resource */
struct rte_afu_shared shared;
- struct rte_intr_handle intr_handle; /**< Interrupt handle */
+ struct rte_intr_handle *intr_handle; /**< Interrupt handle */
struct rte_afu_driver *driver; /**< Associated driver */
char path[IFPGA_BUS_BITSTREAM_PATH_MAX_LEN];
} __rte_packed;
diff --git a/drivers/bus/pci/linux/pci.c b/drivers/bus/pci/linux/pci.c
index 4d261b55ee..e521459870 100644
--- a/drivers/bus/pci/linux/pci.c
+++ b/drivers/bus/pci/linux/pci.c
@@ -645,7 +645,7 @@ int rte_pci_read_config(const struct rte_pci_device *device,
void *buf, size_t len, off_t offset)
{
char devname[RTE_DEV_NAME_MAX_LEN] = "";
- const struct rte_intr_handle *intr_handle = &device->intr_handle;
+ const struct rte_intr_handle *intr_handle = device->intr_handle;
switch (device->kdrv) {
case RTE_PCI_KDRV_IGB_UIO:
@@ -669,7 +669,7 @@ int rte_pci_write_config(const struct rte_pci_device *device,
const void *buf, size_t len, off_t offset)
{
char devname[RTE_DEV_NAME_MAX_LEN] = "";
- const struct rte_intr_handle *intr_handle = &device->intr_handle;
+ const struct rte_intr_handle *intr_handle = device->intr_handle;
switch (device->kdrv) {
case RTE_PCI_KDRV_IGB_UIO:
diff --git a/drivers/bus/pci/linux/pci_uio.c b/drivers/bus/pci/linux/pci_uio.c
index 39ebeac2a0..2529377f9b 100644
--- a/drivers/bus/pci/linux/pci_uio.c
+++ b/drivers/bus/pci/linux/pci_uio.c
@@ -35,14 +35,18 @@ int
pci_uio_read_config(const struct rte_intr_handle *intr_handle,
void *buf, size_t len, off_t offset)
{
- return pread(intr_handle->uio_cfg_fd, buf, len, offset);
+ int uio_cfg_fd = rte_intr_handle_dev_fd_get(intr_handle);
+
+ return pread(uio_cfg_fd, buf, len, offset);
}
int
pci_uio_write_config(const struct rte_intr_handle *intr_handle,
const void *buf, size_t len, off_t offset)
{
- return pwrite(intr_handle->uio_cfg_fd, buf, len, offset);
+ int uio_cfg_fd = rte_intr_handle_dev_fd_get(intr_handle);
+
+ return pwrite(uio_cfg_fd, buf, len, offset);
}
static int
@@ -198,16 +202,20 @@ void
pci_uio_free_resource(struct rte_pci_device *dev,
struct mapped_pci_resource *uio_res)
{
+ int uio_cfg_fd = rte_intr_handle_dev_fd_get(dev->intr_handle);
+
rte_free(uio_res);
- if (dev->intr_handle.uio_cfg_fd >= 0) {
- close(dev->intr_handle.uio_cfg_fd);
- dev->intr_handle.uio_cfg_fd = -1;
+ if (uio_cfg_fd >= 0) {
+ close(uio_cfg_fd);
+ rte_intr_handle_dev_fd_set(dev->intr_handle, -1);
}
- if (dev->intr_handle.fd >= 0) {
- close(dev->intr_handle.fd);
- dev->intr_handle.fd = -1;
- dev->intr_handle.type = RTE_INTR_HANDLE_UNKNOWN;
+
+ if (rte_intr_handle_fd_get(dev->intr_handle) >= 0) {
+ close(rte_intr_handle_fd_get(dev->intr_handle));
+ rte_intr_handle_fd_set(dev->intr_handle, -1);
+ rte_intr_handle_type_set(dev->intr_handle,
+ RTE_INTR_HANDLE_UNKNOWN);
}
}
@@ -218,7 +226,7 @@ pci_uio_alloc_resource(struct rte_pci_device *dev,
char dirname[PATH_MAX];
char cfgname[PATH_MAX];
char devname[PATH_MAX]; /* contains the /dev/uioX */
- int uio_num;
+ int uio_num, fd, uio_cfg_fd;
struct rte_pci_addr *loc;
loc = &dev->addr;
@@ -233,29 +241,40 @@ pci_uio_alloc_resource(struct rte_pci_device *dev,
snprintf(devname, sizeof(devname), "/dev/uio%u", uio_num);
/* save fd if in primary process */
- dev->intr_handle.fd = open(devname, O_RDWR);
- if (dev->intr_handle.fd < 0) {
+ fd = open(devname, O_RDWR);
+ if (fd < 0) {
RTE_LOG(ERR, EAL, "Cannot open %s: %s\n",
devname, strerror(errno));
goto error;
}
+ if (rte_intr_handle_fd_set(dev->intr_handle, fd))
+ goto error;
+
snprintf(cfgname, sizeof(cfgname),
"/sys/class/uio/uio%u/device/config", uio_num);
- dev->intr_handle.uio_cfg_fd = open(cfgname, O_RDWR);
- if (dev->intr_handle.uio_cfg_fd < 0) {
+
+ uio_cfg_fd = open(cfgname, O_RDWR);
+ if (uio_cfg_fd < 0) {
RTE_LOG(ERR, EAL, "Cannot open %s: %s\n",
cfgname, strerror(errno));
goto error;
}
- if (dev->kdrv == RTE_PCI_KDRV_IGB_UIO)
- dev->intr_handle.type = RTE_INTR_HANDLE_UIO;
- else {
- dev->intr_handle.type = RTE_INTR_HANDLE_UIO_INTX;
+ if (rte_intr_handle_dev_fd_set(dev->intr_handle, uio_cfg_fd))
+ goto error;
+
+ if (dev->kdrv == RTE_PCI_KDRV_IGB_UIO) {
+ if (rte_intr_handle_type_set(dev->intr_handle,
+ RTE_INTR_HANDLE_UIO))
+ goto error;
+ } else {
+ if (rte_intr_handle_type_set(dev->intr_handle,
+ RTE_INTR_HANDLE_UIO_INTX))
+ goto error;
/* set bus master that is not done by uio_pci_generic */
- if (pci_uio_set_bus_master(dev->intr_handle.uio_cfg_fd)) {
+ if (pci_uio_set_bus_master(uio_cfg_fd)) {
RTE_LOG(ERR, EAL, "Cannot set up bus mastering!\n");
goto error;
}
@@ -381,7 +400,7 @@ pci_uio_ioport_map(struct rte_pci_device *dev, int bar,
char buf[BUFSIZ];
uint64_t phys_addr, end_addr, flags;
unsigned long base;
- int i;
+ int i, fd;
/* open and read addresses of the corresponding resource in sysfs */
snprintf(filename, sizeof(filename), "%s/" PCI_PRI_FMT "/resource",
@@ -427,7 +446,8 @@ pci_uio_ioport_map(struct rte_pci_device *dev, int bar,
}
/* FIXME only for primary process ? */
- if (dev->intr_handle.type == RTE_INTR_HANDLE_UNKNOWN) {
+ if (rte_intr_handle_type_get(dev->intr_handle) ==
+ RTE_INTR_HANDLE_UNKNOWN) {
int uio_num = pci_get_uio_dev(dev, dirname, sizeof(dirname), 0);
if (uio_num < 0) {
RTE_LOG(ERR, EAL, "cannot open %s: %s\n",
@@ -436,13 +456,18 @@ pci_uio_ioport_map(struct rte_pci_device *dev, int bar,
}
snprintf(filename, sizeof(filename), "/dev/uio%u", uio_num);
- dev->intr_handle.fd = open(filename, O_RDWR);
- if (dev->intr_handle.fd < 0) {
+ fd = open(filename, O_RDWR);
+ if (fd < 0) {
RTE_LOG(ERR, EAL, "Cannot open %s: %s\n",
filename, strerror(errno));
goto error;
}
- dev->intr_handle.type = RTE_INTR_HANDLE_UIO;
+ if (rte_intr_handle_fd_set(dev->intr_handle, fd))
+ goto error;
+
+ if (rte_intr_handle_type_set(dev->intr_handle,
+ RTE_INTR_HANDLE_UIO))
+ goto error;
}
RTE_LOG(DEBUG, EAL, "PCI Port IO found start=0x%lx\n", base);
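
Worth calling out in the pci_uio.c hunks: there is no dedicated accessor for the old uio_cfg_fd field, so the UIO config-space fd is stored behind the same rte_intr_handle_dev_fd_set()/rte_intr_handle_dev_fd_get() pair that the VFIO code uses for vfio_dev_fd. A compressed sketch of the converted open path; the helper name is made up and error handling is simplified.

/* Sketch only; demo_uio_open() is a made-up helper. */
#include <errno.h>
#include <fcntl.h>
#include <rte_errno.h>
#include <rte_interrupts.h>

static int
demo_uio_open(struct rte_intr_handle *h, const char *devname,
	      const char *cfgname)
{
	int fd, uio_cfg_fd;

	fd = open(devname, O_RDWR);
	if (fd < 0)
		return -errno;
	if (rte_intr_handle_fd_set(h, fd))
		return -rte_errno;

	uio_cfg_fd = open(cfgname, O_RDWR);
	if (uio_cfg_fd < 0)
		return -errno;
	/* the former uio_cfg_fd field now sits behind the dev_fd accessors */
	if (rte_intr_handle_dev_fd_set(h, uio_cfg_fd))
		return -rte_errno;

	return rte_intr_handle_type_set(h, RTE_INTR_HANDLE_UIO);
}
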
diff --git a/drivers/bus/pci/linux/pci_vfio.c b/drivers/bus/pci/linux/pci_vfio.c
index a024269140..f920163580 100644
--- a/drivers/bus/pci/linux/pci_vfio.c
+++ b/drivers/bus/pci/linux/pci_vfio.c
@@ -47,7 +47,9 @@ int
pci_vfio_read_config(const struct rte_intr_handle *intr_handle,
void *buf, size_t len, off_t offs)
{
- return pread64(intr_handle->vfio_dev_fd, buf, len,
+ int vfio_dev_fd = rte_intr_handle_dev_fd_get(intr_handle);
+
+ return pread64(vfio_dev_fd, buf, len,
VFIO_GET_REGION_ADDR(VFIO_PCI_CONFIG_REGION_INDEX) + offs);
}
@@ -55,7 +57,9 @@ int
pci_vfio_write_config(const struct rte_intr_handle *intr_handle,
const void *buf, size_t len, off_t offs)
{
- return pwrite64(intr_handle->vfio_dev_fd, buf, len,
+ int vfio_dev_fd = rte_intr_handle_dev_fd_get(intr_handle);
+
+ return pwrite64(vfio_dev_fd, buf, len,
VFIO_GET_REGION_ADDR(VFIO_PCI_CONFIG_REGION_INDEX) + offs);
}
@@ -281,21 +285,27 @@ pci_vfio_setup_interrupts(struct rte_pci_device *dev, int vfio_dev_fd)
return -1;
}
- dev->intr_handle.fd = fd;
- dev->intr_handle.vfio_dev_fd = vfio_dev_fd;
+ if (rte_intr_handle_fd_set(dev->intr_handle, fd))
+ return -1;
+
+ if (rte_intr_handle_dev_fd_set(dev->intr_handle, vfio_dev_fd))
+ return -1;
switch (i) {
case VFIO_PCI_MSIX_IRQ_INDEX:
intr_mode = RTE_INTR_MODE_MSIX;
- dev->intr_handle.type = RTE_INTR_HANDLE_VFIO_MSIX;
+ rte_intr_handle_type_set(dev->intr_handle,
+ RTE_INTR_HANDLE_VFIO_MSIX);
break;
case VFIO_PCI_MSI_IRQ_INDEX:
intr_mode = RTE_INTR_MODE_MSI;
- dev->intr_handle.type = RTE_INTR_HANDLE_VFIO_MSI;
+ rte_intr_handle_type_set(dev->intr_handle,
+ RTE_INTR_HANDLE_VFIO_MSI);
break;
case VFIO_PCI_INTX_IRQ_INDEX:
intr_mode = RTE_INTR_MODE_LEGACY;
- dev->intr_handle.type = RTE_INTR_HANDLE_VFIO_LEGACY;
+ rte_intr_handle_type_set(dev->intr_handle,
+ RTE_INTR_HANDLE_VFIO_LEGACY);
break;
default:
RTE_LOG(ERR, EAL, "Unknown interrupt type!\n");
@@ -362,11 +372,18 @@ pci_vfio_enable_notifier(struct rte_pci_device *dev, int vfio_dev_fd)
return -1;
}
- dev->vfio_req_intr_handle.fd = fd;
- dev->vfio_req_intr_handle.type = RTE_INTR_HANDLE_VFIO_REQ;
- dev->vfio_req_intr_handle.vfio_dev_fd = vfio_dev_fd;
+ if (rte_intr_handle_fd_set(dev->vfio_req_intr_handle, fd))
+ return -1;
+
+ if (rte_intr_handle_type_set(dev->vfio_req_intr_handle,
+ RTE_INTR_HANDLE_VFIO_REQ))
+ return -1;
+
+ if (rte_intr_handle_dev_fd_set(dev->vfio_req_intr_handle, vfio_dev_fd))
+ return -1;
+
- ret = rte_intr_callback_register(&dev->vfio_req_intr_handle,
+ ret = rte_intr_callback_register(dev->vfio_req_intr_handle,
pci_vfio_req_handler,
(void *)&dev->device);
if (ret) {
@@ -374,10 +391,10 @@ pci_vfio_enable_notifier(struct rte_pci_device *dev, int vfio_dev_fd)
goto error;
}
- ret = rte_intr_enable(&dev->vfio_req_intr_handle);
+ ret = rte_intr_enable(dev->vfio_req_intr_handle);
if (ret) {
RTE_LOG(ERR, EAL, "Fail to enable req notifier.\n");
- ret = rte_intr_callback_unregister(&dev->vfio_req_intr_handle,
+ ret = rte_intr_callback_unregister(dev->vfio_req_intr_handle,
pci_vfio_req_handler,
(void *)&dev->device);
if (ret < 0)
@@ -390,9 +407,10 @@ pci_vfio_enable_notifier(struct rte_pci_device *dev, int vfio_dev_fd)
error:
close(fd);
- dev->vfio_req_intr_handle.fd = -1;
- dev->vfio_req_intr_handle.type = RTE_INTR_HANDLE_UNKNOWN;
- dev->vfio_req_intr_handle.vfio_dev_fd = -1;
+ rte_intr_handle_fd_set(dev->vfio_req_intr_handle, -1);
+ rte_intr_handle_type_set(dev->vfio_req_intr_handle,
+ RTE_INTR_HANDLE_UNKNOWN);
+ rte_intr_handle_dev_fd_set(dev->vfio_req_intr_handle, -1);
return -1;
}
@@ -403,13 +421,13 @@ pci_vfio_disable_notifier(struct rte_pci_device *dev)
{
int ret;
- ret = rte_intr_disable(&dev->vfio_req_intr_handle);
+ ret = rte_intr_disable(dev->vfio_req_intr_handle);
if (ret) {
RTE_LOG(ERR, EAL, "fail to disable req notifier.\n");
return -1;
}
- ret = rte_intr_callback_unregister_sync(&dev->vfio_req_intr_handle,
+ ret = rte_intr_callback_unregister_sync(dev->vfio_req_intr_handle,
pci_vfio_req_handler,
(void *)&dev->device);
if (ret < 0) {
@@ -418,11 +436,12 @@ pci_vfio_disable_notifier(struct rte_pci_device *dev)
return -1;
}
- close(dev->vfio_req_intr_handle.fd);
+ close(rte_intr_handle_fd_get(dev->vfio_req_intr_handle));
- dev->vfio_req_intr_handle.fd = -1;
- dev->vfio_req_intr_handle.type = RTE_INTR_HANDLE_UNKNOWN;
- dev->vfio_req_intr_handle.vfio_dev_fd = -1;
+ rte_intr_handle_fd_set(dev->vfio_req_intr_handle, -1);
+ rte_intr_handle_type_set(dev->vfio_req_intr_handle,
+ RTE_INTR_HANDLE_UNKNOWN);
+ rte_intr_handle_dev_fd_set(dev->vfio_req_intr_handle, -1);
return 0;
}
@@ -705,9 +724,13 @@ pci_vfio_map_resource_primary(struct rte_pci_device *dev)
struct pci_map *maps;
- dev->intr_handle.fd = -1;
+ if (rte_intr_handle_fd_set(dev->intr_handle, -1))
+ return -1;
+
#ifdef HAVE_VFIO_DEV_REQ_INTERFACE
- dev->vfio_req_intr_handle.fd = -1;
+ if (rte_intr_handle_fd_set(dev->vfio_req_intr_handle, -1))
+ return -1;
+
#endif
/* store PCI address string */
@@ -854,9 +877,12 @@ pci_vfio_map_resource_secondary(struct rte_pci_device *dev)
struct pci_map *maps;
- dev->intr_handle.fd = -1;
+ if (rte_intr_handle_fd_set(dev->intr_handle, -1))
+ return -1;
+
#ifdef HAVE_VFIO_DEV_REQ_INTERFACE
- dev->vfio_req_intr_handle.fd = -1;
+ if (rte_intr_handle_fd_set(dev->vfio_req_intr_handle, -1))
+ return -1;
#endif
/* store PCI address string */
@@ -897,9 +923,11 @@ pci_vfio_map_resource_secondary(struct rte_pci_device *dev)
}
/* we need save vfio_dev_fd, so it can be used during release */
- dev->intr_handle.vfio_dev_fd = vfio_dev_fd;
+ if (rte_intr_handle_dev_fd_set(dev->intr_handle, vfio_dev_fd))
+ goto err_vfio_dev_fd;
#ifdef HAVE_VFIO_DEV_REQ_INTERFACE
- dev->vfio_req_intr_handle.vfio_dev_fd = vfio_dev_fd;
+ if (rte_intr_handle_dev_fd_set(dev->vfio_req_intr_handle, vfio_dev_fd))
+ goto err_vfio_dev_fd;
#endif
return 0;
@@ -968,7 +996,7 @@ pci_vfio_unmap_resource_primary(struct rte_pci_device *dev)
struct rte_pci_addr *loc = &dev->addr;
struct mapped_pci_resource *vfio_res = NULL;
struct mapped_pci_res_list *vfio_res_list;
- int ret;
+ int ret, vfio_dev_fd;
/* store PCI address string */
snprintf(pci_addr, sizeof(pci_addr), PCI_PRI_FMT,
@@ -982,20 +1010,21 @@ pci_vfio_unmap_resource_primary(struct rte_pci_device *dev)
}
#endif
- if (close(dev->intr_handle.fd) < 0) {
+ if (close(rte_intr_handle_fd_get(dev->intr_handle)) < 0) {
RTE_LOG(INFO, EAL, "Error when closing eventfd file descriptor for %s\n",
pci_addr);
return -1;
}
- if (pci_vfio_set_bus_master(dev->intr_handle.vfio_dev_fd, false)) {
+ vfio_dev_fd = rte_intr_handle_dev_fd_get(dev->intr_handle);
+ if (pci_vfio_set_bus_master(vfio_dev_fd, false)) {
RTE_LOG(ERR, EAL, "%s cannot unset bus mastering for PCI device!\n",
pci_addr);
return -1;
}
ret = rte_vfio_release_device(rte_pci_get_sysfs_path(), pci_addr,
- dev->intr_handle.vfio_dev_fd);
+ vfio_dev_fd);
if (ret < 0) {
RTE_LOG(ERR, EAL, "Cannot release VFIO device\n");
return ret;
@@ -1024,14 +1053,15 @@ pci_vfio_unmap_resource_secondary(struct rte_pci_device *dev)
struct rte_pci_addr *loc = &dev->addr;
struct mapped_pci_resource *vfio_res = NULL;
struct mapped_pci_res_list *vfio_res_list;
- int ret;
+ int ret, vfio_dev_fd;
/* store PCI address string */
snprintf(pci_addr, sizeof(pci_addr), PCI_PRI_FMT,
loc->domain, loc->bus, loc->devid, loc->function);
+ vfio_dev_fd = rte_intr_handle_dev_fd_get(dev->intr_handle);
ret = rte_vfio_release_device(rte_pci_get_sysfs_path(), pci_addr,
- dev->intr_handle.vfio_dev_fd);
+ vfio_dev_fd);
if (ret < 0) {
RTE_LOG(ERR, EAL, "Cannot release VFIO device\n");
return ret;
@@ -1079,9 +1109,10 @@ void
pci_vfio_ioport_read(struct rte_pci_ioport *p,
void *data, size_t len, off_t offset)
{
- const struct rte_intr_handle *intr_handle = &p->dev->intr_handle;
+ const struct rte_intr_handle *intr_handle = p->dev->intr_handle;
+ int vfio_dev_fd = rte_intr_handle_dev_fd_get(intr_handle);
- if (pread64(intr_handle->vfio_dev_fd, data,
+ if (pread64(vfio_dev_fd, data,
len, p->base + offset) <= 0)
RTE_LOG(ERR, EAL,
"Can't read from PCI bar (%" PRIu64 ") : offset (%x)\n",
@@ -1092,9 +1123,10 @@ void
pci_vfio_ioport_write(struct rte_pci_ioport *p,
const void *data, size_t len, off_t offset)
{
- const struct rte_intr_handle *intr_handle = &p->dev->intr_handle;
+ const struct rte_intr_handle *intr_handle = p->dev->intr_handle;
+ int vfio_dev_fd = rte_intr_handle_dev_fd_get(intr_handle);
- if (pwrite64(intr_handle->vfio_dev_fd, data,
+ if (pwrite64(vfio_dev_fd, data,
len, p->base + offset) <= 0)
RTE_LOG(ERR, EAL,
"Can't write to PCI bar (%" PRIu64 ") : offset (%x)\n",
diff --git a/drivers/bus/pci/pci_common.c b/drivers/bus/pci/pci_common.c
index 79a6fcffbd..35fe48117d 100644
--- a/drivers/bus/pci/pci_common.c
+++ b/drivers/bus/pci/pci_common.c
@@ -230,6 +230,24 @@ rte_pci_probe_one_driver(struct rte_pci_driver *dr,
}
if (!already_probed && (dr->drv_flags & RTE_PCI_DRV_NEED_MAPPING)) {
+ /* Allocate interrupt instance for pci device */
+ dev->intr_handle = rte_intr_handle_instance_alloc(
+ RTE_INTR_HANDLE_DEFAULT_SIZE, false);
+ if (!dev->intr_handle) {
+ RTE_LOG(ERR, EAL,
+ "Failed to create interrupt instance for %s\n",
+ dev->device.name);
+ return -ENOMEM;
+ }
+
+ dev->vfio_req_intr_handle = rte_intr_handle_instance_alloc(
+ RTE_INTR_HANDLE_DEFAULT_SIZE, false);
+ if (!dev->vfio_req_intr_handle) {
+ RTE_LOG(ERR, EAL,
+ "Failed to create vfio req interrupt instance for %s\n",
+ dev->device.name);
+ return -ENOMEM;
+ }
/* map resources for devices that use igb_uio */
ret = rte_pci_map_device(dev);
if (ret != 0) {
@@ -253,8 +271,12 @@ rte_pci_probe_one_driver(struct rte_pci_driver *dr,
* driver needs mapped resources.
*/
!(ret > 0 &&
- (dr->drv_flags & RTE_PCI_DRV_KEEP_MAPPED_RES)))
+ (dr->drv_flags & RTE_PCI_DRV_KEEP_MAPPED_RES))) {
rte_pci_unmap_device(dev);
+ rte_intr_handle_instance_free(dev->intr_handle);
+ rte_intr_handle_instance_free(
+ dev->vfio_req_intr_handle);
+ }
} else {
dev->device.driver = &dr->driver;
}
@@ -296,9 +318,12 @@ rte_pci_detach_dev(struct rte_pci_device *dev)
dev->driver = NULL;
dev->device.driver = NULL;
- if (dr->drv_flags & RTE_PCI_DRV_NEED_MAPPING)
+ if (dr->drv_flags & RTE_PCI_DRV_NEED_MAPPING) {
/* unmap resources for devices that use igb_uio */
rte_pci_unmap_device(dev);
+ rte_intr_handle_instance_free(dev->intr_handle);
+ rte_intr_handle_instance_free(dev->vfio_req_intr_handle);
+ }
return 0;
}
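
With the pci_common.c change the PCI bus owns both handles for its devices: rte_pci_probe_one_driver() allocates dev->intr_handle and dev->vfio_req_intr_handle before mapping resources, and rte_pci_detach_dev() (and the map-failure path) frees them again. From a driver's point of view the only visible difference is that pci_dev->intr_handle is already a valid pointer at probe time, as in this hedged sketch (driver and handler names are made up):

/* Sketch of what a PCI driver's probe callback can rely on after this
 * change; demo_pci_probe() and demo_irq_handler() are made-up names.
 */
#include <rte_bus_pci.h>
#include <rte_common.h>
#include <rte_interrupts.h>

static void
demo_irq_handler(void *arg)
{
	RTE_SET_USED(arg);
}

static int
demo_pci_probe(struct rte_pci_driver *drv __rte_unused,
	       struct rte_pci_device *pci_dev)
{
	/* allocated by the PCI bus before probe; no '&' needed any more */
	struct rte_intr_handle *handle = pci_dev->intr_handle;

	if (rte_intr_callback_register(handle, demo_irq_handler, pci_dev))
		return -1;

	return rte_intr_enable(handle);
}
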
diff --git a/drivers/bus/pci/pci_common_uio.c b/drivers/bus/pci/pci_common_uio.c
index 318f9a1d55..9b9a2e4a20 100644
--- a/drivers/bus/pci/pci_common_uio.c
+++ b/drivers/bus/pci/pci_common_uio.c
@@ -90,8 +90,11 @@ pci_uio_map_resource(struct rte_pci_device *dev)
struct mapped_pci_res_list *uio_res_list =
RTE_TAILQ_CAST(rte_uio_tailq.head, mapped_pci_res_list);
- dev->intr_handle.fd = -1;
- dev->intr_handle.uio_cfg_fd = -1;
+ if (rte_intr_handle_fd_set(dev->intr_handle, -1))
+ return -1;
+
+ if (rte_intr_handle_dev_fd_set(dev->intr_handle, -1))
+ return -1;
/* secondary processes - use already recorded details */
if (rte_eal_process_type() != RTE_PROC_PRIMARY)
@@ -207,6 +210,7 @@ pci_uio_unmap_resource(struct rte_pci_device *dev)
struct mapped_pci_resource *uio_res;
struct mapped_pci_res_list *uio_res_list =
RTE_TAILQ_CAST(rte_uio_tailq.head, mapped_pci_res_list);
+ int uio_cfg_fd;
if (dev == NULL)
return;
@@ -229,12 +233,13 @@ pci_uio_unmap_resource(struct rte_pci_device *dev)
rte_free(uio_res);
/* close fd if in primary process */
- close(dev->intr_handle.fd);
- if (dev->intr_handle.uio_cfg_fd >= 0) {
- close(dev->intr_handle.uio_cfg_fd);
- dev->intr_handle.uio_cfg_fd = -1;
+ close(rte_intr_handle_fd_get(dev->intr_handle));
+ uio_cfg_fd = rte_intr_handle_dev_fd_get(dev->intr_handle);
+ if (uio_cfg_fd >= 0) {
+ close(uio_cfg_fd);
+ rte_intr_handle_dev_fd_set(dev->intr_handle, -1);
}
- dev->intr_handle.fd = -1;
- dev->intr_handle.type = RTE_INTR_HANDLE_UNKNOWN;
+ rte_intr_handle_fd_set(dev->intr_handle, -1);
+ rte_intr_handle_type_set(dev->intr_handle, RTE_INTR_HANDLE_UNKNOWN);
}
diff --git a/drivers/bus/pci/rte_bus_pci.h b/drivers/bus/pci/rte_bus_pci.h
index 583470e831..fe679c467c 100644
--- a/drivers/bus/pci/rte_bus_pci.h
+++ b/drivers/bus/pci/rte_bus_pci.h
@@ -70,12 +70,12 @@ struct rte_pci_device {
struct rte_pci_id id; /**< PCI ID. */
struct rte_mem_resource mem_resource[PCI_MAX_RESOURCE];
/**< PCI Memory Resource */
- struct rte_intr_handle intr_handle; /**< Interrupt handle */
+ struct rte_intr_handle *intr_handle; /**< Interrupt handle */
struct rte_pci_driver *driver; /**< PCI driver used in probing */
uint16_t max_vfs; /**< sriov enable if not zero */
enum rte_pci_kernel_driver kdrv; /**< Kernel driver passthrough */
char name[PCI_PRI_STR_SIZE+1]; /**< PCI location (ASCII) */
- struct rte_intr_handle vfio_req_intr_handle;
+ struct rte_intr_handle *vfio_req_intr_handle;
/**< Handler of VFIO request interrupt */
};
diff --git a/drivers/bus/vmbus/linux/vmbus_bus.c b/drivers/bus/vmbus/linux/vmbus_bus.c
index 3c924eee14..bce94d5d72 100644
--- a/drivers/bus/vmbus/linux/vmbus_bus.c
+++ b/drivers/bus/vmbus/linux/vmbus_bus.c
@@ -297,6 +297,13 @@ vmbus_scan_one(const char *name)
dev->device.devargs = vmbus_devargs_lookup(dev);
+ /* Allocate interrupt handle instance */
+ dev->intr_handle =
+ rte_intr_handle_instance_alloc(RTE_INTR_HANDLE_DEFAULT_SIZE,
+ false);
+ if (!dev->intr_handle)
+ goto error;
+
/* device is valid, add in list (sorted) */
VMBUS_LOG(DEBUG, "Adding vmbus device %s", name);
diff --git a/drivers/bus/vmbus/linux/vmbus_uio.c b/drivers/bus/vmbus/linux/vmbus_uio.c
index b52ca5bf1d..f506811d98 100644
--- a/drivers/bus/vmbus/linux/vmbus_uio.c
+++ b/drivers/bus/vmbus/linux/vmbus_uio.c
@@ -29,9 +29,11 @@ static void *vmbus_map_addr;
/* Control interrupts */
void vmbus_uio_irq_control(struct rte_vmbus_device *dev, int32_t onoff)
{
- if (write(dev->intr_handle.fd, &onoff, sizeof(onoff)) < 0) {
+ if (write(rte_intr_handle_fd_get(dev->intr_handle), &onoff,
+ sizeof(onoff)) < 0) {
VMBUS_LOG(ERR, "cannot write to %d:%s",
- dev->intr_handle.fd, strerror(errno));
+ rte_intr_handle_fd_get(dev->intr_handle),
+ strerror(errno));
}
}
@@ -40,7 +42,8 @@ int vmbus_uio_irq_read(struct rte_vmbus_device *dev)
int32_t count;
int cc;
- cc = read(dev->intr_handle.fd, &count, sizeof(count));
+ cc = read(rte_intr_handle_fd_get(dev->intr_handle), &count,
+ sizeof(count));
if (cc < (int)sizeof(count)) {
if (cc < 0) {
VMBUS_LOG(ERR, "IRQ read failed %s",
@@ -60,15 +63,16 @@ vmbus_uio_free_resource(struct rte_vmbus_device *dev,
{
rte_free(uio_res);
- if (dev->intr_handle.uio_cfg_fd >= 0) {
- close(dev->intr_handle.uio_cfg_fd);
- dev->intr_handle.uio_cfg_fd = -1;
+ if (rte_intr_handle_dev_fd_get(dev->intr_handle) >= 0) {
+ close(rte_intr_handle_dev_fd_get(dev->intr_handle));
+ rte_intr_handle_dev_fd_set(dev->intr_handle, -1);
}
- if (dev->intr_handle.fd >= 0) {
- close(dev->intr_handle.fd);
- dev->intr_handle.fd = -1;
- dev->intr_handle.type = RTE_INTR_HANDLE_UNKNOWN;
+ if (rte_intr_handle_fd_get(dev->intr_handle) >= 0) {
+ close(rte_intr_handle_fd_get(dev->intr_handle));
+ rte_intr_handle_fd_set(dev->intr_handle, -1);
+ rte_intr_handle_type_set(dev->intr_handle,
+ RTE_INTR_HANDLE_UNKNOWN);
}
}
@@ -77,16 +81,23 @@ vmbus_uio_alloc_resource(struct rte_vmbus_device *dev,
struct mapped_vmbus_resource **uio_res)
{
char devname[PATH_MAX]; /* contains the /dev/uioX */
+ int fd;
/* save fd if in primary process */
snprintf(devname, sizeof(devname), "/dev/uio%u", dev->uio_num);
- dev->intr_handle.fd = open(devname, O_RDWR);
- if (dev->intr_handle.fd < 0) {
+ fd = open(devname, O_RDWR);
+ if (fd < 0) {
VMBUS_LOG(ERR, "Cannot open %s: %s",
devname, strerror(errno));
goto error;
}
- dev->intr_handle.type = RTE_INTR_HANDLE_UIO_INTX;
+
+ if (rte_intr_handle_fd_set(dev->intr_handle, fd))
+ goto error;
+
+ if (rte_intr_handle_type_set(dev->intr_handle,
+ RTE_INTR_HANDLE_UIO_INTX))
+ goto error;
/* allocate the mapping details for secondary processes*/
*uio_res = rte_zmalloc("UIO_RES", sizeof(**uio_res), 0);
diff --git a/drivers/bus/vmbus/rte_bus_vmbus.h b/drivers/bus/vmbus/rte_bus_vmbus.h
index 4cf73ce815..07916478ef 100644
--- a/drivers/bus/vmbus/rte_bus_vmbus.h
+++ b/drivers/bus/vmbus/rte_bus_vmbus.h
@@ -74,7 +74,7 @@ struct rte_vmbus_device {
struct vmbus_channel *primary; /**< VMBUS primary channel */
struct vmbus_mon_page *monitor_page; /**< VMBUS monitor page */
- struct rte_intr_handle intr_handle; /**< Interrupt handle */
+ struct rte_intr_handle *intr_handle; /**< Interrupt handle */
struct rte_mem_resource resource[VMBUS_MAX_RESOURCE];
};
diff --git a/drivers/bus/vmbus/vmbus_common_uio.c b/drivers/bus/vmbus/vmbus_common_uio.c
index 8582e32c1d..fb0f051f81 100644
--- a/drivers/bus/vmbus/vmbus_common_uio.c
+++ b/drivers/bus/vmbus/vmbus_common_uio.c
@@ -149,9 +149,15 @@ vmbus_uio_map_resource(struct rte_vmbus_device *dev)
int ret;
/* TODO: handle rescind */
- dev->intr_handle.fd = -1;
- dev->intr_handle.uio_cfg_fd = -1;
- dev->intr_handle.type = RTE_INTR_HANDLE_UNKNOWN;
+ if (rte_intr_handle_fd_set(dev->intr_handle, -1))
+ return -1;
+
+ if (rte_intr_handle_dev_fd_set(dev->intr_handle, -1))
+ return -1;
+
+ if (rte_intr_handle_type_set(dev->intr_handle,
+ RTE_INTR_HANDLE_UNKNOWN))
+ return -1;
/* secondary processes - use already recorded details */
if (rte_eal_process_type() != RTE_PROC_PRIMARY)
@@ -223,12 +229,12 @@ vmbus_uio_unmap_resource(struct rte_vmbus_device *dev)
rte_free(uio_res);
/* close fd if in primary process */
- close(dev->intr_handle.fd);
- if (dev->intr_handle.uio_cfg_fd >= 0) {
- close(dev->intr_handle.uio_cfg_fd);
- dev->intr_handle.uio_cfg_fd = -1;
+ close(rte_intr_handle_fd_get(dev->intr_handle));
+ if (rte_intr_handle_dev_fd_get(dev->intr_handle) >= 0) {
+ close(rte_intr_handle_dev_fd_get(dev->intr_handle));
+ rte_intr_handle_dev_fd_set(dev->intr_handle, -1);
}
- dev->intr_handle.fd = -1;
- dev->intr_handle.type = RTE_INTR_HANDLE_UNKNOWN;
+ rte_intr_handle_fd_set(dev->intr_handle, -1);
+ rte_intr_handle_type_set(dev->intr_handle, RTE_INTR_HANDLE_UNKNOWN);
}
diff --git a/drivers/common/cnxk/roc_cpt.c b/drivers/common/cnxk/roc_cpt.c
index c001497f74..b0d16bf81c 100644
--- a/drivers/common/cnxk/roc_cpt.c
+++ b/drivers/common/cnxk/roc_cpt.c
@@ -62,7 +62,7 @@ cpt_lf_register_misc_irq(struct roc_cpt_lf *lf)
struct plt_intr_handle *handle;
int rc, vec;
- handle = &pci_dev->intr_handle;
+ handle = pci_dev->intr_handle;
vec = lf->msixoff + CPT_LF_INT_VEC_MISC;
/* Clear err interrupt */
@@ -82,7 +82,7 @@ cpt_lf_unregister_misc_irq(struct roc_cpt_lf *lf)
struct plt_intr_handle *handle;
int vec;
- handle = &pci_dev->intr_handle;
+ handle = pci_dev->intr_handle;
vec = lf->msixoff + CPT_LF_INT_VEC_MISC;
/* Clear err interrupt */
@@ -126,7 +126,7 @@ cpt_lf_register_done_irq(struct roc_cpt_lf *lf)
struct plt_intr_handle *handle;
int rc, vec;
- handle = &pci_dev->intr_handle;
+ handle = pci_dev->intr_handle;
vec = lf->msixoff + CPT_LF_INT_VEC_DONE;
@@ -149,7 +149,7 @@ cpt_lf_unregister_done_irq(struct roc_cpt_lf *lf)
struct plt_intr_handle *handle;
int vec;
- handle = &pci_dev->intr_handle;
+ handle = pci_dev->intr_handle;
vec = lf->msixoff + CPT_LF_INT_VEC_DONE;
diff --git a/drivers/common/cnxk/roc_dev.c b/drivers/common/cnxk/roc_dev.c
index c14f189f9b..2dce7936fe 100644
--- a/drivers/common/cnxk/roc_dev.c
+++ b/drivers/common/cnxk/roc_dev.c
@@ -608,7 +608,7 @@ roc_af_pf_mbox_irq(void *param)
static int
mbox_register_pf_irq(struct plt_pci_device *pci_dev, struct dev *dev)
{
- struct plt_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct plt_intr_handle *intr_handle = pci_dev->intr_handle;
int i, rc;
/* HW clear irq */
@@ -658,7 +658,7 @@ mbox_register_pf_irq(struct plt_pci_device *pci_dev, struct dev *dev)
static int
mbox_register_vf_irq(struct plt_pci_device *pci_dev, struct dev *dev)
{
- struct plt_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct plt_intr_handle *intr_handle = pci_dev->intr_handle;
int rc;
/* Clear irq */
@@ -691,7 +691,7 @@ mbox_register_irq(struct plt_pci_device *pci_dev, struct dev *dev)
static void
mbox_unregister_pf_irq(struct plt_pci_device *pci_dev, struct dev *dev)
{
- struct plt_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct plt_intr_handle *intr_handle = pci_dev->intr_handle;
int i;
/* HW clear irq */
@@ -722,7 +722,7 @@ mbox_unregister_pf_irq(struct plt_pci_device *pci_dev, struct dev *dev)
static void
mbox_unregister_vf_irq(struct plt_pci_device *pci_dev, struct dev *dev)
{
- struct plt_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct plt_intr_handle *intr_handle = pci_dev->intr_handle;
/* Clear irq */
plt_write64(~0ull, dev->bar2 + RVU_VF_INT_ENA_W1C);
@@ -806,7 +806,7 @@ roc_pf_vf_flr_irq(void *param)
static int
vf_flr_unregister_irqs(struct plt_pci_device *pci_dev, struct dev *dev)
{
- struct plt_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct plt_intr_handle *intr_handle = pci_dev->intr_handle;
int i;
plt_base_dbg("Unregister VF FLR interrupts for %s", pci_dev->name);
@@ -827,7 +827,7 @@ vf_flr_unregister_irqs(struct plt_pci_device *pci_dev, struct dev *dev)
static int
vf_flr_register_irqs(struct plt_pci_device *pci_dev, struct dev *dev)
{
- struct plt_intr_handle *handle = &pci_dev->intr_handle;
+ struct plt_intr_handle *handle = pci_dev->intr_handle;
int i, rc;
plt_base_dbg("Register VF FLR interrupts for %s", pci_dev->name);
@@ -1143,7 +1143,7 @@ dev_init(struct dev *dev, struct plt_pci_device *pci_dev)
int
dev_fini(struct dev *dev, struct plt_pci_device *pci_dev)
{
- struct plt_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct plt_intr_handle *intr_handle = pci_dev->intr_handle;
struct mbox *mbox;
/* Check if this dev hosts npalf and has 1+ refs */
diff --git a/drivers/common/cnxk/roc_irq.c b/drivers/common/cnxk/roc_irq.c
index 4c2b4c30d7..40c472e7d3 100644
--- a/drivers/common/cnxk/roc_irq.c
+++ b/drivers/common/cnxk/roc_irq.c
@@ -20,11 +20,12 @@ static int
irq_get_info(struct plt_intr_handle *intr_handle)
{
struct vfio_irq_info irq = {.argsz = sizeof(irq)};
- int rc;
+ int rc, vfio_dev_fd;
irq.index = VFIO_PCI_MSIX_IRQ_INDEX;
- rc = ioctl(intr_handle->vfio_dev_fd, VFIO_DEVICE_GET_IRQ_INFO, &irq);
+ vfio_dev_fd = plt_intr_handle_dev_fd_get(intr_handle);
+ rc = ioctl(vfio_dev_fd, VFIO_DEVICE_GET_IRQ_INFO, &irq);
if (rc < 0) {
plt_err("Failed to get IRQ info rc=%d errno=%d", rc, errno);
return rc;
@@ -36,9 +37,11 @@ irq_get_info(struct plt_intr_handle *intr_handle)
if (irq.count > PLT_MAX_RXTX_INTR_VEC_ID) {
plt_err("HW max=%d > PLT_MAX_RXTX_INTR_VEC_ID: %d", irq.count,
PLT_MAX_RXTX_INTR_VEC_ID);
- intr_handle->max_intr = PLT_MAX_RXTX_INTR_VEC_ID;
+ plt_intr_handle_max_intr_set(intr_handle,
+ PLT_MAX_RXTX_INTR_VEC_ID);
} else {
- intr_handle->max_intr = irq.count;
+ if (plt_intr_handle_max_intr_set(intr_handle, irq.count))
+ return -1;
}
return 0;
@@ -49,12 +52,12 @@ irq_config(struct plt_intr_handle *intr_handle, unsigned int vec)
{
char irq_set_buf[MSIX_IRQ_SET_BUF_LEN];
struct vfio_irq_set *irq_set;
+ int len, rc, vfio_dev_fd;
int32_t *fd_ptr;
- int len, rc;
- if (vec > intr_handle->max_intr) {
+ if (vec > (uint32_t)plt_intr_handle_max_intr_get(intr_handle)) {
plt_err("vector=%d greater than max_intr=%d", vec,
- intr_handle->max_intr);
+ plt_intr_handle_max_intr_get(intr_handle));
return -EINVAL;
}
@@ -71,9 +74,10 @@ irq_config(struct plt_intr_handle *intr_handle, unsigned int vec)
/* Use vec fd to set interrupt vectors */
fd_ptr = (int32_t *)&irq_set->data[0];
- fd_ptr[0] = intr_handle->efds[vec];
+ fd_ptr[0] = plt_intr_handle_efds_index_get(intr_handle, vec);
- rc = ioctl(intr_handle->vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
+ vfio_dev_fd = plt_intr_handle_dev_fd_get(intr_handle);
+ rc = ioctl(vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
if (rc)
plt_err("Failed to set_irqs vector=0x%x rc=%d", vec, rc);
@@ -85,23 +89,25 @@ irq_init(struct plt_intr_handle *intr_handle)
{
char irq_set_buf[MSIX_IRQ_SET_BUF_LEN];
struct vfio_irq_set *irq_set;
+ int len, rc, vfio_dev_fd;
int32_t *fd_ptr;
- int len, rc;
uint32_t i;
- if (intr_handle->max_intr > PLT_MAX_RXTX_INTR_VEC_ID) {
+ if (plt_intr_handle_max_intr_get(intr_handle) >
+ PLT_MAX_RXTX_INTR_VEC_ID) {
plt_err("Max_intr=%d greater than PLT_MAX_RXTX_INTR_VEC_ID=%d",
- intr_handle->max_intr, PLT_MAX_RXTX_INTR_VEC_ID);
+ plt_intr_handle_max_intr_get(intr_handle),
+ PLT_MAX_RXTX_INTR_VEC_ID);
return -ERANGE;
}
len = sizeof(struct vfio_irq_set) +
- sizeof(int32_t) * intr_handle->max_intr;
+ sizeof(int32_t) * plt_intr_handle_max_intr_get(intr_handle);
irq_set = (struct vfio_irq_set *)irq_set_buf;
irq_set->argsz = len;
irq_set->start = 0;
- irq_set->count = intr_handle->max_intr;
+ irq_set->count = plt_intr_handle_max_intr_get(intr_handle);
irq_set->flags =
VFIO_IRQ_SET_DATA_EVENTFD | VFIO_IRQ_SET_ACTION_TRIGGER;
irq_set->index = VFIO_PCI_MSIX_IRQ_INDEX;
@@ -110,7 +116,8 @@ irq_init(struct plt_intr_handle *intr_handle)
for (i = 0; i < irq_set->count; i++)
fd_ptr[i] = -1;
- rc = ioctl(intr_handle->vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
+ vfio_dev_fd = plt_intr_handle_dev_fd_get(intr_handle);
+ rc = ioctl(vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
if (rc)
plt_err("Failed to set irqs vector rc=%d", rc);
@@ -121,7 +128,7 @@ int
dev_irqs_disable(struct plt_intr_handle *intr_handle)
{
/* Clear max_intr to indicate re-init next time */
- intr_handle->max_intr = 0;
+ plt_intr_handle_max_intr_set(intr_handle, 0);
return plt_intr_disable(intr_handle);
}
@@ -129,42 +136,50 @@ int
dev_irq_register(struct plt_intr_handle *intr_handle, plt_intr_callback_fn cb,
void *data, unsigned int vec)
{
- struct plt_intr_handle tmp_handle;
- int rc;
+ struct plt_intr_handle *tmp_handle;
+ uint32_t nb_efd, tmp_nb_efd;
+ int rc, fd;
/* If no max_intr read from VFIO */
- if (intr_handle->max_intr == 0) {
+ if (plt_intr_handle_max_intr_get(intr_handle) == 0) {
irq_get_info(intr_handle);
irq_init(intr_handle);
}
- if (vec > intr_handle->max_intr) {
+ if (vec > (uint32_t)plt_intr_handle_max_intr_get(intr_handle)) {
plt_err("Vector=%d greater than max_intr=%d", vec,
- intr_handle->max_intr);
+ plt_intr_handle_max_intr_get(intr_handle));
return -EINVAL;
}
- tmp_handle = *intr_handle;
+ tmp_handle = intr_handle;
/* Create new eventfd for interrupt vector */
- tmp_handle.fd = eventfd(0, EFD_NONBLOCK | EFD_CLOEXEC);
- if (tmp_handle.fd == -1)
+ fd = eventfd(0, EFD_NONBLOCK | EFD_CLOEXEC);
+ if (fd == -1)
return -ENODEV;
+ if (plt_intr_handle_fd_set(tmp_handle, fd))
+ return errno;
+
/* Register vector interrupt callback */
- rc = plt_intr_callback_register(&tmp_handle, cb, data);
+ rc = plt_intr_callback_register(tmp_handle, cb, data);
if (rc) {
plt_err("Failed to register vector:0x%x irq callback.", vec);
return rc;
}
- intr_handle->efds[vec] = tmp_handle.fd;
- intr_handle->nb_efd =
- (vec > intr_handle->nb_efd) ? vec : intr_handle->nb_efd;
- if ((intr_handle->nb_efd + 1) > intr_handle->max_intr)
- intr_handle->max_intr = intr_handle->nb_efd + 1;
+ plt_intr_handle_efds_index_set(intr_handle, vec, fd);
+ nb_efd = (vec > (uint32_t)plt_intr_handle_nb_efd_get(intr_handle)) ?
+ vec : (uint32_t)plt_intr_handle_nb_efd_get(intr_handle);
+ plt_intr_handle_nb_efd_set(intr_handle, nb_efd);
+
+ tmp_nb_efd = plt_intr_handle_nb_efd_get(intr_handle) + 1;
+ if (tmp_nb_efd > (uint32_t)plt_intr_handle_max_intr_get(intr_handle))
+ plt_intr_handle_max_intr_set(intr_handle, tmp_nb_efd);
plt_base_dbg("Enable vector:0x%x for vfio (efds: %d, max:%d)", vec,
- intr_handle->nb_efd, intr_handle->max_intr);
+ plt_intr_handle_nb_efd_get(intr_handle),
+ plt_intr_handle_max_intr_get(intr_handle));
/* Enable MSIX vectors to VFIO */
return irq_config(intr_handle, vec);
@@ -174,24 +189,27 @@ void
dev_irq_unregister(struct plt_intr_handle *intr_handle, plt_intr_callback_fn cb,
void *data, unsigned int vec)
{
- struct plt_intr_handle tmp_handle;
+ struct plt_intr_handle *tmp_handle;
uint8_t retries = 5; /* 5 ms */
- int rc;
+ int rc, fd;
- if (vec > intr_handle->max_intr) {
+ if (vec > (uint32_t)plt_intr_handle_max_intr_get(intr_handle)) {
plt_err("Error unregistering MSI-X interrupts vec:%d > %d", vec,
- intr_handle->max_intr);
+ plt_intr_handle_max_intr_get(intr_handle));
return;
}
- tmp_handle = *intr_handle;
- tmp_handle.fd = intr_handle->efds[vec];
- if (tmp_handle.fd == -1)
+ tmp_handle = intr_handle;
+ fd = plt_intr_handle_efds_index_get(intr_handle, vec);
+ if (fd == -1)
+ return;
+
+ if (plt_intr_handle_fd_set(tmp_handle, fd))
return;
do {
/* Un-register callback func from platform lib */
- rc = plt_intr_callback_unregister(&tmp_handle, cb, data);
+ rc = plt_intr_callback_unregister(tmp_handle, cb, data);
/* Retry only if -EAGAIN */
if (rc != -EAGAIN)
break;
@@ -205,12 +223,14 @@ dev_irq_unregister(struct plt_intr_handle *intr_handle, plt_intr_callback_fn cb,
}
plt_base_dbg("Disable vector:0x%x for vfio (efds: %d, max:%d)", vec,
- intr_handle->nb_efd, intr_handle->max_intr);
+ plt_intr_handle_nb_efd_get(intr_handle),
+ plt_intr_handle_max_intr_get(intr_handle));
- if (intr_handle->efds[vec] != -1)
- close(intr_handle->efds[vec]);
+ if (plt_intr_handle_efds_index_get(intr_handle, vec) != -1)
+ close(plt_intr_handle_efds_index_get(intr_handle, vec));
/* Disable MSIX vectors from VFIO */
- intr_handle->efds[vec] = -1;
+ plt_intr_handle_efds_index_set(intr_handle, vec, -1);
+
irq_config(intr_handle, vec);
}
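
In roc_irq.c the on-stack copy of the handle (tmp_handle = *intr_handle) can no longer exist because the structure size is hidden, so dev_irq_register()/dev_irq_unregister() now operate on the shared handle through the plt_* accessor aliases (mapped onto the rte_intr_handle_* APIs in roc_platform.h below). The essential registration steps, condensed into a hedged sketch; demo_register_vec() is a made-up wrapper, not the series' code.

/* Condensed sketch of the accessor-based vector registration done above. */
#include <errno.h>
#include <sys/eventfd.h>

static int
demo_register_vec(struct plt_intr_handle *handle, unsigned int vec,
		  plt_intr_callback_fn cb, void *data)
{
	int fd, rc;

	/* one eventfd per MSI-X vector, recorded in the handle's efds list */
	fd = eventfd(0, EFD_NONBLOCK | EFD_CLOEXEC);
	if (fd == -1)
		return -ENODEV;

	if (plt_intr_handle_fd_set(handle, fd))
		return errno;

	rc = plt_intr_callback_register(handle, cb, data);
	if (rc)
		return rc;

	return plt_intr_handle_efds_index_set(handle, vec, fd);
}
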
diff --git a/drivers/common/cnxk/roc_nix_irq.c b/drivers/common/cnxk/roc_nix_irq.c
index 32be64a9d7..9c29f4272b 100644
--- a/drivers/common/cnxk/roc_nix_irq.c
+++ b/drivers/common/cnxk/roc_nix_irq.c
@@ -82,7 +82,7 @@ nix_lf_err_irq(void *param)
static int
nix_lf_register_err_irq(struct nix *nix)
{
- struct plt_intr_handle *handle = &nix->pci_dev->intr_handle;
+ struct plt_intr_handle *handle = nix->pci_dev->intr_handle;
int rc, vec;
vec = nix->msixoff + NIX_LF_INT_VEC_ERR_INT;
@@ -99,7 +99,7 @@ nix_lf_register_err_irq(struct nix *nix)
static void
nix_lf_unregister_err_irq(struct nix *nix)
{
- struct plt_intr_handle *handle = &nix->pci_dev->intr_handle;
+ struct plt_intr_handle *handle = nix->pci_dev->intr_handle;
int vec;
vec = nix->msixoff + NIX_LF_INT_VEC_ERR_INT;
@@ -131,7 +131,7 @@ nix_lf_ras_irq(void *param)
static int
nix_lf_register_ras_irq(struct nix *nix)
{
- struct plt_intr_handle *handle = &nix->pci_dev->intr_handle;
+ struct plt_intr_handle *handle = nix->pci_dev->intr_handle;
int rc, vec;
vec = nix->msixoff + NIX_LF_INT_VEC_POISON;
@@ -148,7 +148,7 @@ nix_lf_register_ras_irq(struct nix *nix)
static void
nix_lf_unregister_ras_irq(struct nix *nix)
{
- struct plt_intr_handle *handle = &nix->pci_dev->intr_handle;
+ struct plt_intr_handle *handle = nix->pci_dev->intr_handle;
int vec;
vec = nix->msixoff + NIX_LF_INT_VEC_POISON;
@@ -300,7 +300,7 @@ roc_nix_register_queue_irqs(struct roc_nix *roc_nix)
struct nix *nix;
nix = roc_nix_to_nix_priv(roc_nix);
- handle = &nix->pci_dev->intr_handle;
+ handle = nix->pci_dev->intr_handle;
/* Figure out max qintx required */
rqs = PLT_MIN(nix->qints, nix->nb_rx_queues);
@@ -352,7 +352,7 @@ roc_nix_unregister_queue_irqs(struct roc_nix *roc_nix)
int vec, q;
nix = roc_nix_to_nix_priv(roc_nix);
- handle = &nix->pci_dev->intr_handle;
+ handle = nix->pci_dev->intr_handle;
for (q = 0; q < nix->configured_qints; q++) {
vec = nix->msixoff + NIX_LF_INT_VEC_QINT_START + q;
@@ -382,7 +382,7 @@ roc_nix_register_cq_irqs(struct roc_nix *roc_nix)
struct nix *nix;
nix = roc_nix_to_nix_priv(roc_nix);
- handle = &nix->pci_dev->intr_handle;
+ handle = nix->pci_dev->intr_handle;
nix->configured_cints = PLT_MIN(nix->cints, nix->nb_rx_queues);
@@ -414,19 +414,21 @@ roc_nix_register_cq_irqs(struct roc_nix *roc_nix)
return rc;
}
- if (!handle->intr_vec) {
- handle->intr_vec = plt_zmalloc(
- nix->configured_cints * sizeof(int), 0);
- if (!handle->intr_vec) {
- plt_err("Failed to allocate %d rx intr_vec",
- nix->configured_cints);
- return -ENOMEM;
+ if (!plt_intr_handle_vec_list_base(handle)) {
+ rc = plt_intr_handle_vec_list_alloc(handle, "cnxk",
+ nix->configured_cints);
+ if (rc) {
+ plt_err("Fail to allocate intr vec list, rc=%d",
+ rc);
+ return rc;
}
}
/* VFIO vector zero is reserved for misc interrupt so
* doing required adjustment. (b13bfab4cd)
*/
- handle->intr_vec[q] = PLT_INTR_VEC_RXTX_OFFSET + vec;
+ if (plt_intr_handle_vec_list_index_set(handle, q,
+ PLT_INTR_VEC_RXTX_OFFSET + vec))
+ return -1;
/* Configure CQE interrupt coalescing parameters */
plt_write64(((CQ_CQE_THRESH_DEFAULT) |
@@ -450,7 +452,7 @@ roc_nix_unregister_cq_irqs(struct roc_nix *roc_nix)
int vec, q;
nix = roc_nix_to_nix_priv(roc_nix);
- handle = &nix->pci_dev->intr_handle;
+ handle = nix->pci_dev->intr_handle;
for (q = 0; q < nix->configured_cints; q++) {
vec = nix->msixoff + NIX_LF_INT_VEC_CINT_START + q;
@@ -465,6 +467,9 @@ roc_nix_unregister_cq_irqs(struct roc_nix *roc_nix)
dev_irq_unregister(handle, nix_lf_cq_irq, &nix->cints_mem[q],
vec);
}
+
+ if (plt_intr_handle_vec_list_base(handle))
+ plt_intr_handle_vec_list_free(handle);
plt_free(nix->cints_mem);
}
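
[Illustrative sketch, not part of the patch: the vector-list handling pattern drivers are expected to follow once intr_vec is no longer reachable. Only the rte_intr_handle_* calls are the helpers introduced by this series; the function names, includes and error handling are invented for the example.]

#include <errno.h>
#include <stdint.h>
#include <rte_errno.h>
#include <rte_interrupts.h>

/* Allocate the Rx vector list once, fill it per queue and free it on
 * teardown, instead of touching intr_handle->intr_vec directly.
 */
static int
example_setup_rx_vectors(struct rte_intr_handle *handle, uint16_t nb_rxq)
{
	uint16_t q;

	if (!rte_intr_handle_vec_list_base(handle) &&
	    rte_intr_handle_vec_list_alloc(handle, "example", nb_rxq))
		return -ENOMEM;

	for (q = 0; q < nb_rxq; q++)
		if (rte_intr_handle_vec_list_index_set(handle, q,
				RTE_INTR_VEC_RXTX_OFFSET + q))
			return -rte_errno;
	return 0;
}

static void
example_teardown_rx_vectors(struct rte_intr_handle *handle)
{
	if (rte_intr_handle_vec_list_base(handle))
		rte_intr_handle_vec_list_free(handle);
}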
diff --git a/drivers/common/cnxk/roc_npa.c b/drivers/common/cnxk/roc_npa.c
index d064d125c1..69b6254870 100644
--- a/drivers/common/cnxk/roc_npa.c
+++ b/drivers/common/cnxk/roc_npa.c
@@ -710,7 +710,7 @@ npa_lf_init(struct dev *dev, struct plt_pci_device *pci_dev)
lf->pf_func = dev->pf_func;
lf->npa_msixoff = npa_msixoff;
- lf->intr_handle = &pci_dev->intr_handle;
+ lf->intr_handle = pci_dev->intr_handle;
lf->pci_dev = pci_dev;
idev->npa_pf_func = dev->pf_func;
diff --git a/drivers/common/cnxk/roc_platform.h b/drivers/common/cnxk/roc_platform.h
index 285b24b82d..872af26acc 100644
--- a/drivers/common/cnxk/roc_platform.h
+++ b/drivers/common/cnxk/roc_platform.h
@@ -101,6 +101,40 @@
#define plt_thread_is_intr rte_thread_is_intr
#define plt_intr_callback_fn rte_intr_callback_fn
+#define plt_intr_handle_efd_counter_size_get \
+ rte_intr_handle_efd_counter_size_get
+#define plt_intr_handle_efd_counter_size_set \
+ rte_intr_handle_efd_counter_size_set
+#define plt_intr_handle_vec_list_index_get rte_intr_handle_vec_list_index_get
+#define plt_intr_handle_vec_list_index_set rte_intr_handle_vec_list_index_set
+#define plt_intr_handle_vec_list_base rte_intr_handle_vec_list_base
+#define plt_intr_handle_vec_list_alloc rte_intr_handle_vec_list_alloc
+#define plt_intr_handle_vec_list_free rte_intr_handle_vec_list_free
+#define plt_intr_handle_fd_set rte_intr_handle_fd_set
+#define plt_intr_handle_fd_get rte_intr_handle_fd_get
+#define plt_intr_handle_dev_fd_get rte_intr_handle_dev_fd_get
+#define plt_intr_handle_dev_fd_set rte_intr_handle_dev_fd_set
+#define plt_intr_handle_type_get rte_intr_handle_type_get
+#define plt_intr_handle_type_set rte_intr_handle_type_set
+#define plt_intr_handle_instance_alloc rte_intr_handle_instance_alloc
+#define plt_intr_handle_instance_index_get rte_intr_handle_instance_index_get
+#define plt_intr_handle_instance_index_set rte_intr_handle_instance_index_set
+#define plt_intr_handle_instance_free rte_intr_handle_instance_free
+#define plt_intr_handle_event_list_update rte_intr_handle_event_list_update
+#define plt_intr_handle_max_intr_get rte_intr_handle_max_intr_get
+#define plt_intr_handle_max_intr_set rte_intr_handle_max_intr_set
+#define plt_intr_handle_nb_efd_get rte_intr_handle_nb_efd_get
+#define plt_intr_handle_nb_efd_set rte_intr_handle_nb_efd_set
+#define plt_intr_handle_nb_intr_get rte_intr_handle_nb_intr_get
+#define plt_intr_handle_nb_intr_set rte_intr_handle_nb_intr_set
+#define plt_intr_handle_efds_index_get rte_intr_handle_efds_index_get
+#define plt_intr_handle_efds_index_set rte_intr_handle_efds_index_set
+#define plt_intr_handle_efds_base rte_intr_handle_efds_base
+#define plt_intr_handle_elist_index_get rte_intr_handle_elist_index_get
+#define plt_intr_handle_elist_index_set rte_intr_handle_elist_index_set
+
#define plt_alarm_set rte_eal_alarm_set
#define plt_alarm_cancel rte_eal_alarm_cancel
diff --git a/drivers/common/cnxk/roc_sso.c b/drivers/common/cnxk/roc_sso.c
index 1ccf2626bd..88165ad236 100644
--- a/drivers/common/cnxk/roc_sso.c
+++ b/drivers/common/cnxk/roc_sso.c
@@ -491,7 +491,7 @@ roc_sso_rsrc_init(struct roc_sso *roc_sso, uint8_t nb_hws, uint16_t nb_hwgrp)
goto sso_msix_fail;
}
- rc = sso_register_irqs_priv(roc_sso, &sso->pci_dev->intr_handle, nb_hws,
+ rc = sso_register_irqs_priv(roc_sso, sso->pci_dev->intr_handle, nb_hws,
nb_hwgrp);
if (rc < 0) {
plt_err("Failed to register SSO LF IRQs");
@@ -521,7 +521,7 @@ roc_sso_rsrc_fini(struct roc_sso *roc_sso)
if (!roc_sso->nb_hws && !roc_sso->nb_hwgrp)
return;
- sso_unregister_irqs_priv(roc_sso, &sso->pci_dev->intr_handle,
+ sso_unregister_irqs_priv(roc_sso, sso->pci_dev->intr_handle,
roc_sso->nb_hws, roc_sso->nb_hwgrp);
sso_lf_free(roc_sso, SSO_LF_TYPE_HWS, roc_sso->nb_hws);
sso_lf_free(roc_sso, SSO_LF_TYPE_HWGRP, roc_sso->nb_hwgrp);
diff --git a/drivers/common/cnxk/roc_tim.c b/drivers/common/cnxk/roc_tim.c
index 387164bb1d..534b697bee 100644
--- a/drivers/common/cnxk/roc_tim.c
+++ b/drivers/common/cnxk/roc_tim.c
@@ -200,7 +200,7 @@ roc_tim_lf_alloc(struct roc_tim *roc_tim, uint8_t ring_id, uint64_t *clk)
if (clk)
*clk = rsp->tenns_clk;
- rc = tim_register_irq_priv(roc_tim, &sso->pci_dev->intr_handle, ring_id,
+ rc = tim_register_irq_priv(roc_tim, sso->pci_dev->intr_handle, ring_id,
tim->tim_msix_offsets[ring_id]);
if (rc < 0) {
plt_tim_dbg("Failed to register Ring[%d] IRQ", ring_id);
@@ -223,7 +223,7 @@ roc_tim_lf_free(struct roc_tim *roc_tim, uint8_t ring_id)
struct tim_ring_req *req;
int rc = -ENOSPC;
- tim_unregister_irq_priv(roc_tim, &sso->pci_dev->intr_handle, ring_id,
+ tim_unregister_irq_priv(roc_tim, sso->pci_dev->intr_handle, ring_id,
tim->tim_msix_offsets[ring_id]);
req = mbox_alloc_msg_tim_lf_free(dev->mbox);
diff --git a/drivers/common/octeontx2/otx2_dev.c b/drivers/common/octeontx2/otx2_dev.c
index 1485e2b357..906b283cde 100644
--- a/drivers/common/octeontx2/otx2_dev.c
+++ b/drivers/common/octeontx2/otx2_dev.c
@@ -640,7 +640,7 @@ otx2_af_pf_mbox_irq(void *param)
static int
mbox_register_pf_irq(struct rte_pci_device *pci_dev, struct otx2_dev *dev)
{
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
int i, rc;
/* HW clear irq */
@@ -690,7 +690,7 @@ mbox_register_pf_irq(struct rte_pci_device *pci_dev, struct otx2_dev *dev)
static int
mbox_register_vf_irq(struct rte_pci_device *pci_dev, struct otx2_dev *dev)
{
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
int rc;
/* Clear irq */
@@ -723,7 +723,7 @@ mbox_register_irq(struct rte_pci_device *pci_dev, struct otx2_dev *dev)
static void
mbox_unregister_pf_irq(struct rte_pci_device *pci_dev, struct otx2_dev *dev)
{
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
int i;
/* HW clear irq */
@@ -755,7 +755,7 @@ mbox_unregister_pf_irq(struct rte_pci_device *pci_dev, struct otx2_dev *dev)
static void
mbox_unregister_vf_irq(struct rte_pci_device *pci_dev, struct otx2_dev *dev)
{
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
/* Clear irq */
otx2_write64(~0ull, dev->bar2 + RVU_VF_INT_ENA_W1C);
@@ -838,7 +838,7 @@ otx2_pf_vf_flr_irq(void *param)
static int
vf_flr_unregister_irqs(struct rte_pci_device *pci_dev, struct otx2_dev *dev)
{
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
int i;
otx2_base_dbg("Unregister VF FLR interrupts for %s", pci_dev->name);
@@ -859,7 +859,7 @@ vf_flr_unregister_irqs(struct rte_pci_device *pci_dev, struct otx2_dev *dev)
static int
vf_flr_register_irqs(struct rte_pci_device *pci_dev, struct otx2_dev *dev)
{
- struct rte_intr_handle *handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *handle = pci_dev->intr_handle;
int i, rc;
otx2_base_dbg("Register VF FLR interrupts for %s", pci_dev->name);
@@ -1036,7 +1036,7 @@ otx2_dev_priv_init(struct rte_pci_device *pci_dev, void *otx2_dev)
void
otx2_dev_fini(struct rte_pci_device *pci_dev, void *otx2_dev)
{
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
struct otx2_dev *dev = otx2_dev;
struct otx2_idev_cfg *idev;
struct otx2_mbox *mbox;
diff --git a/drivers/common/octeontx2/otx2_irq.c b/drivers/common/octeontx2/otx2_irq.c
index c0137ff36d..6efa4c6646 100644
--- a/drivers/common/octeontx2/otx2_irq.c
+++ b/drivers/common/octeontx2/otx2_irq.c
@@ -26,11 +26,12 @@ static int
irq_get_info(struct rte_intr_handle *intr_handle)
{
struct vfio_irq_info irq = { .argsz = sizeof(irq) };
- int rc;
+ int rc, vfio_dev_fd;
irq.index = VFIO_PCI_MSIX_IRQ_INDEX;
- rc = ioctl(intr_handle->vfio_dev_fd, VFIO_DEVICE_GET_IRQ_INFO, &irq);
+ vfio_dev_fd = rte_intr_handle_dev_fd_get(intr_handle);
+ rc = ioctl(vfio_dev_fd, VFIO_DEVICE_GET_IRQ_INFO, &irq);
if (rc < 0) {
otx2_err("Failed to get IRQ info rc=%d errno=%d", rc, errno);
return rc;
@@ -41,10 +42,13 @@ irq_get_info(struct rte_intr_handle *intr_handle)
if (irq.count > MAX_INTR_VEC_ID) {
otx2_err("HW max=%d > MAX_INTR_VEC_ID: %d",
- intr_handle->max_intr, MAX_INTR_VEC_ID);
- intr_handle->max_intr = MAX_INTR_VEC_ID;
+ rte_intr_handle_max_intr_get(intr_handle),
+ MAX_INTR_VEC_ID);
+ if (rte_intr_handle_max_intr_set(intr_handle, MAX_INTR_VEC_ID))
+ return -1;
} else {
- intr_handle->max_intr = irq.count;
+ if (rte_intr_handle_max_intr_set(intr_handle, irq.count))
+ return -1;
}
return 0;
@@ -55,12 +59,12 @@ irq_config(struct rte_intr_handle *intr_handle, unsigned int vec)
{
char irq_set_buf[MSIX_IRQ_SET_BUF_LEN];
struct vfio_irq_set *irq_set;
+ int len, rc, vfio_dev_fd;
int32_t *fd_ptr;
- int len, rc;
- if (vec > intr_handle->max_intr) {
+ if (vec > (uint32_t)rte_intr_handle_max_intr_get(intr_handle)) {
otx2_err("vector=%d greater than max_intr=%d", vec,
- intr_handle->max_intr);
+ rte_intr_handle_max_intr_get(intr_handle));
return -EINVAL;
}
@@ -77,9 +81,10 @@ irq_config(struct rte_intr_handle *intr_handle, unsigned int vec)
/* Use vec fd to set interrupt vectors */
fd_ptr = (int32_t *)&irq_set->data[0];
- fd_ptr[0] = intr_handle->efds[vec];
+ fd_ptr[0] = rte_intr_handle_efds_index_get(intr_handle, vec);
- rc = ioctl(intr_handle->vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
+ vfio_dev_fd = rte_intr_handle_dev_fd_get(intr_handle);
+ rc = ioctl(vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
if (rc)
otx2_err("Failed to set_irqs vector=0x%x rc=%d", vec, rc);
@@ -91,23 +96,24 @@ irq_init(struct rte_intr_handle *intr_handle)
{
char irq_set_buf[MSIX_IRQ_SET_BUF_LEN];
struct vfio_irq_set *irq_set;
+ int len, rc, vfio_dev_fd;
int32_t *fd_ptr;
- int len, rc;
uint32_t i;
- if (intr_handle->max_intr > MAX_INTR_VEC_ID) {
+ if (rte_intr_handle_max_intr_get(intr_handle) > MAX_INTR_VEC_ID) {
otx2_err("Max_intr=%d greater than MAX_INTR_VEC_ID=%d",
- intr_handle->max_intr, MAX_INTR_VEC_ID);
+ rte_intr_handle_max_intr_get(intr_handle),
+ MAX_INTR_VEC_ID);
return -ERANGE;
}
len = sizeof(struct vfio_irq_set) +
- sizeof(int32_t) * intr_handle->max_intr;
+ sizeof(int32_t) * rte_intr_handle_max_intr_get(intr_handle);
irq_set = (struct vfio_irq_set *)irq_set_buf;
irq_set->argsz = len;
irq_set->start = 0;
- irq_set->count = intr_handle->max_intr;
+ irq_set->count = rte_intr_handle_max_intr_get(intr_handle);
irq_set->flags = VFIO_IRQ_SET_DATA_EVENTFD |
VFIO_IRQ_SET_ACTION_TRIGGER;
irq_set->index = VFIO_PCI_MSIX_IRQ_INDEX;
@@ -116,7 +122,8 @@ irq_init(struct rte_intr_handle *intr_handle)
for (i = 0; i < irq_set->count; i++)
fd_ptr[i] = -1;
- rc = ioctl(intr_handle->vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
+ vfio_dev_fd = rte_intr_handle_dev_fd_get(intr_handle);
+ rc = ioctl(vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
if (rc)
otx2_err("Failed to set irqs vector rc=%d", rc);
@@ -131,7 +138,8 @@ int
otx2_disable_irqs(struct rte_intr_handle *intr_handle)
{
/* Clear max_intr to indicate re-init next time */
- intr_handle->max_intr = 0;
+ if (rte_intr_handle_max_intr_set(intr_handle, 0))
+ return -1;
return rte_intr_disable(intr_handle);
}
@@ -143,42 +151,50 @@ int
otx2_register_irq(struct rte_intr_handle *intr_handle,
rte_intr_callback_fn cb, void *data, unsigned int vec)
{
- struct rte_intr_handle tmp_handle;
- int rc;
+ struct rte_intr_handle *tmp_handle;
+ uint32_t nb_efd, tmp_nb_efd;
+ int rc, fd;
/* If no max_intr read from VFIO */
- if (intr_handle->max_intr == 0) {
+ if (rte_intr_handle_max_intr_get(intr_handle) == 0) {
irq_get_info(intr_handle);
irq_init(intr_handle);
}
- if (vec > intr_handle->max_intr) {
+ if (vec > (uint32_t)rte_intr_handle_max_intr_get(intr_handle)) {
otx2_err("Vector=%d greater than max_intr=%d", vec,
- intr_handle->max_intr);
+ rte_intr_handle_max_intr_get(intr_handle));
return -EINVAL;
}
- tmp_handle = *intr_handle;
+ tmp_handle = intr_handle;
/* Create new eventfd for interrupt vector */
- tmp_handle.fd = eventfd(0, EFD_NONBLOCK | EFD_CLOEXEC);
- if (tmp_handle.fd == -1)
+ fd = eventfd(0, EFD_NONBLOCK | EFD_CLOEXEC);
+ if (fd == -1)
return -ENODEV;
+ if (rte_intr_handle_fd_set(tmp_handle, fd))
+ return errno;
+
/* Register vector interrupt callback */
- rc = rte_intr_callback_register(&tmp_handle, cb, data);
+ rc = rte_intr_callback_register(tmp_handle, cb, data);
if (rc) {
otx2_err("Failed to register vector:0x%x irq callback.", vec);
return rc;
}
- intr_handle->efds[vec] = tmp_handle.fd;
- intr_handle->nb_efd = (vec > intr_handle->nb_efd) ?
- vec : intr_handle->nb_efd;
- if ((intr_handle->nb_efd + 1) > intr_handle->max_intr)
- intr_handle->max_intr = intr_handle->nb_efd + 1;
+ rte_intr_handle_efds_index_set(intr_handle, vec, fd);
+ nb_efd = (vec > (uint32_t)rte_intr_handle_nb_efd_get(intr_handle)) ?
+ vec : (uint32_t)rte_intr_handle_nb_efd_get(intr_handle);
+ rte_intr_handle_nb_efd_set(intr_handle, nb_efd);
+
+ tmp_nb_efd = rte_intr_handle_nb_efd_get(intr_handle) + 1;
+ if (tmp_nb_efd > (uint32_t)rte_intr_handle_max_intr_get(intr_handle))
+ rte_intr_handle_max_intr_set(intr_handle, tmp_nb_efd);
- otx2_base_dbg("Enable vector:0x%x for vfio (efds: %d, max:%d)",
- vec, intr_handle->nb_efd, intr_handle->max_intr);
+ otx2_base_dbg("Enable vector:0x%x for vfio (efds: %d, max:%d)", vec,
+ rte_intr_handle_nb_efd_get(intr_handle),
+ rte_intr_handle_max_intr_get(intr_handle));
/* Enable MSIX vectors to VFIO */
return irq_config(intr_handle, vec);
@@ -192,24 +208,27 @@ void
otx2_unregister_irq(struct rte_intr_handle *intr_handle,
rte_intr_callback_fn cb, void *data, unsigned int vec)
{
- struct rte_intr_handle tmp_handle;
+ struct rte_intr_handle *tmp_handle;
uint8_t retries = 5; /* 5 ms */
- int rc;
+ int rc, fd;
- if (vec > intr_handle->max_intr) {
+ if (vec > (uint32_t)rte_intr_handle_max_intr_get(intr_handle)) {
otx2_err("Error unregistering MSI-X interrupts vec:%d > %d",
- vec, intr_handle->max_intr);
+ vec, rte_intr_handle_max_intr_get(intr_handle));
return;
}
- tmp_handle = *intr_handle;
- tmp_handle.fd = intr_handle->efds[vec];
- if (tmp_handle.fd == -1)
+ tmp_handle = intr_handle;
+ fd = rte_intr_handle_efds_index_get(intr_handle, vec);
+ if (fd == -1)
+ return;
+
+ if (rte_intr_handle_fd_set(tmp_handle, fd))
return;
do {
- /* Un-register callback func from eal lib */
- rc = rte_intr_callback_unregister(&tmp_handle, cb, data);
+ /* Un-register callback func from platform lib */
+ rc = rte_intr_callback_unregister(tmp_handle, cb, data);
/* Retry only if -EAGAIN */
if (rc != -EAGAIN)
break;
@@ -218,18 +237,18 @@ otx2_unregister_irq(struct rte_intr_handle *intr_handle,
} while (retries);
if (rc < 0) {
- otx2_err("Error unregistering MSI-X intr vec %d cb, rc=%d",
- vec, rc);
+ otx2_err("Error unregistering MSI-X vec %d cb, rc=%d", vec, rc);
return;
}
- otx2_base_dbg("Disable vector:0x%x for vfio (efds: %d, max:%d)",
- vec, intr_handle->nb_efd, intr_handle->max_intr);
+ otx2_base_dbg("Disable vector:0x%x for vfio (efds: %d, max:%d)", vec,
+ rte_intr_handle_nb_efd_get(intr_handle),
+ rte_intr_handle_max_intr_get(intr_handle));
- if (intr_handle->efds[vec] != -1)
- close(intr_handle->efds[vec]);
+ if (rte_intr_handle_efds_index_get(intr_handle, vec) != -1)
+ close(rte_intr_handle_efds_index_get(intr_handle, vec));
/* Disable MSIX vectors from VFIO */
- intr_handle->efds[vec] = -1;
+ rte_intr_handle_efds_index_set(intr_handle, vec, -1);
irq_config(intr_handle, vec);
}
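
[Illustrative sketch, not part of the patch: how a driver registers one MSI-X vector through an eventfd with the proposed accessors, mirroring otx2_register_irq() above. The function name and includes are invented for the example.]

#include <errno.h>
#include <sys/eventfd.h>
#include <rte_errno.h>
#include <rte_interrupts.h>

static int
example_register_vec(struct rte_intr_handle *handle,
		     rte_intr_callback_fn cb, void *data, unsigned int vec)
{
	int fd, rc;

	/* One eventfd per vector, handed to the EAL callback machinery */
	fd = eventfd(0, EFD_NONBLOCK | EFD_CLOEXEC);
	if (fd == -1)
		return -ENODEV;
	if (rte_intr_handle_fd_set(handle, fd))
		return -rte_errno;

	rc = rte_intr_callback_register(handle, cb, data);
	if (rc)
		return rc;

	/* Record the eventfd and keep nb_efd/max_intr consistent */
	rte_intr_handle_efds_index_set(handle, vec, fd);
	if (vec > (unsigned int)rte_intr_handle_nb_efd_get(handle))
		rte_intr_handle_nb_efd_set(handle, vec);
	if (rte_intr_handle_nb_efd_get(handle) + 1 >
	    rte_intr_handle_max_intr_get(handle))
		rte_intr_handle_max_intr_set(handle,
			rte_intr_handle_nb_efd_get(handle) + 1);
	return 0;
}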
diff --git a/drivers/crypto/octeontx2/otx2_cryptodev_hw_access.c b/drivers/crypto/octeontx2/otx2_cryptodev_hw_access.c
index bf90d095fe..d5d6b5bad7 100644
--- a/drivers/crypto/octeontx2/otx2_cryptodev_hw_access.c
+++ b/drivers/crypto/octeontx2/otx2_cryptodev_hw_access.c
@@ -36,7 +36,7 @@ otx2_cpt_lf_err_intr_unregister(const struct rte_cryptodev *dev,
uint16_t msix_off, uintptr_t base)
{
struct rte_pci_device *pci_dev = RTE_DEV_TO_PCI(dev->device);
- struct rte_intr_handle *handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *handle = pci_dev->intr_handle;
/* Disable error interrupts */
otx2_write64(~0ull, base + OTX2_CPT_LF_MISC_INT_ENA_W1C);
@@ -65,7 +65,7 @@ otx2_cpt_lf_err_intr_register(const struct rte_cryptodev *dev,
uint16_t msix_off, uintptr_t base)
{
struct rte_pci_device *pci_dev = RTE_DEV_TO_PCI(dev->device);
- struct rte_intr_handle *handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *handle = pci_dev->intr_handle;
int ret;
/* Disable error interrupts */
diff --git a/drivers/event/octeontx2/otx2_evdev_irq.c b/drivers/event/octeontx2/otx2_evdev_irq.c
index a2033646e6..9b7ad27b04 100644
--- a/drivers/event/octeontx2/otx2_evdev_irq.c
+++ b/drivers/event/octeontx2/otx2_evdev_irq.c
@@ -29,7 +29,7 @@ sso_lf_register_irq(const struct rte_eventdev *event_dev, uint16_t ggrp_msixoff,
uintptr_t base)
{
struct rte_pci_device *pci_dev = RTE_DEV_TO_PCI(event_dev->dev);
- struct rte_intr_handle *handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *handle = pci_dev->intr_handle;
int rc, vec;
vec = ggrp_msixoff + SSO_LF_INT_VEC_GRP;
@@ -66,7 +66,7 @@ ssow_lf_register_irq(const struct rte_eventdev *event_dev, uint16_t gws_msixoff,
uintptr_t base)
{
struct rte_pci_device *pci_dev = RTE_DEV_TO_PCI(event_dev->dev);
- struct rte_intr_handle *handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *handle = pci_dev->intr_handle;
int rc, vec;
vec = gws_msixoff + SSOW_LF_INT_VEC_IOP;
@@ -86,7 +86,7 @@ sso_lf_unregister_irq(const struct rte_eventdev *event_dev,
uint16_t ggrp_msixoff, uintptr_t base)
{
struct rte_pci_device *pci_dev = RTE_DEV_TO_PCI(event_dev->dev);
- struct rte_intr_handle *handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *handle = pci_dev->intr_handle;
int vec;
vec = ggrp_msixoff + SSO_LF_INT_VEC_GRP;
@@ -101,7 +101,7 @@ ssow_lf_unregister_irq(const struct rte_eventdev *event_dev,
uint16_t gws_msixoff, uintptr_t base)
{
struct rte_pci_device *pci_dev = RTE_DEV_TO_PCI(event_dev->dev);
- struct rte_intr_handle *handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *handle = pci_dev->intr_handle;
int vec;
vec = gws_msixoff + SSOW_LF_INT_VEC_IOP;
@@ -198,7 +198,7 @@ static int
tim_lf_register_irq(struct rte_pci_device *pci_dev, uint16_t tim_msixoff,
uintptr_t base)
{
- struct rte_intr_handle *handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *handle = pci_dev->intr_handle;
int rc, vec;
vec = tim_msixoff + TIM_LF_INT_VEC_NRSPERR_INT;
@@ -226,7 +226,7 @@ static void
tim_lf_unregister_irq(struct rte_pci_device *pci_dev, uint16_t tim_msixoff,
uintptr_t base)
{
- struct rte_intr_handle *handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *handle = pci_dev->intr_handle;
int vec;
vec = tim_msixoff + TIM_LF_INT_VEC_NRSPERR_INT;
diff --git a/drivers/mempool/octeontx2/otx2_mempool.c b/drivers/mempool/octeontx2/otx2_mempool.c
index fb630fecf8..f63dc06ef2 100644
--- a/drivers/mempool/octeontx2/otx2_mempool.c
+++ b/drivers/mempool/octeontx2/otx2_mempool.c
@@ -301,7 +301,7 @@ otx2_npa_lf_init(struct rte_pci_device *pci_dev, void *otx2_dev)
lf->pf_func = dev->pf_func;
lf->npa_msixoff = npa_msixoff;
- lf->intr_handle = &pci_dev->intr_handle;
+ lf->intr_handle = pci_dev->intr_handle;
lf->pci_dev = pci_dev;
idev->npa_pf_func = dev->pf_func;
diff --git a/drivers/net/atlantic/atl_ethdev.c b/drivers/net/atlantic/atl_ethdev.c
index 0ce35eb519..03c37960eb 100644
--- a/drivers/net/atlantic/atl_ethdev.c
+++ b/drivers/net/atlantic/atl_ethdev.c
@@ -360,7 +360,7 @@ eth_atl_dev_init(struct rte_eth_dev *eth_dev)
{
struct atl_adapter *adapter = eth_dev->data->dev_private;
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
struct aq_hw_s *hw = ATL_DEV_PRIVATE_TO_HW(eth_dev->data->dev_private);
int err = 0;
@@ -479,7 +479,7 @@ atl_dev_start(struct rte_eth_dev *dev)
{
struct aq_hw_s *hw = ATL_DEV_PRIVATE_TO_HW(dev->data->dev_private);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
uint32_t intr_vector = 0;
int status;
int err;
@@ -525,10 +525,10 @@ atl_dev_start(struct rte_eth_dev *dev)
}
}
- if (rte_intr_dp_is_en(intr_handle) && !intr_handle->intr_vec) {
- intr_handle->intr_vec = rte_zmalloc("intr_vec",
- dev->data->nb_rx_queues * sizeof(int), 0);
- if (intr_handle->intr_vec == NULL) {
+ if (rte_intr_dp_is_en(intr_handle) &&
+ !rte_intr_handle_vec_list_base(intr_handle)) {
+ if (rte_intr_handle_vec_list_alloc(intr_handle, "intr_vec",
+ dev->data->nb_rx_queues)) {
PMD_INIT_LOG(ERR, "Failed to allocate %d rx_queues"
" intr_vec", dev->data->nb_rx_queues);
return -ENOMEM;
@@ -608,7 +608,7 @@ atl_dev_stop(struct rte_eth_dev *dev)
struct aq_hw_s *hw =
ATL_DEV_PRIVATE_TO_HW(dev->data->dev_private);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
PMD_INIT_FUNC_TRACE();
dev->data->dev_started = 0;
@@ -638,10 +638,8 @@ atl_dev_stop(struct rte_eth_dev *dev)
/* Clean datapath event and queue/vec mapping */
rte_intr_efd_disable(intr_handle);
- if (intr_handle->intr_vec != NULL) {
- rte_free(intr_handle->intr_vec);
- intr_handle->intr_vec = NULL;
- }
+ if (rte_intr_handle_vec_list_base(intr_handle))
+ rte_intr_handle_vec_list_free(intr_handle);
return 0;
}
@@ -692,7 +690,7 @@ static int
atl_dev_close(struct rte_eth_dev *dev)
{
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
struct aq_hw_s *hw;
int ret;
diff --git a/drivers/net/avp/avp_ethdev.c b/drivers/net/avp/avp_ethdev.c
index 623fa5e5ff..f32619e05c 100644
--- a/drivers/net/avp/avp_ethdev.c
+++ b/drivers/net/avp/avp_ethdev.c
@@ -711,7 +711,7 @@ avp_dev_interrupt_handler(void *data)
status);
/* re-enable UIO interrupt handling */
- ret = rte_intr_ack(&pci_dev->intr_handle);
+ ret = rte_intr_ack(pci_dev->intr_handle);
if (ret < 0) {
PMD_DRV_LOG(ERR, "Failed to re-enable UIO interrupts, ret=%d\n",
ret);
@@ -730,7 +730,7 @@ avp_dev_enable_interrupts(struct rte_eth_dev *eth_dev)
return -EINVAL;
/* enable UIO interrupt handling */
- ret = rte_intr_enable(&pci_dev->intr_handle);
+ ret = rte_intr_enable(pci_dev->intr_handle);
if (ret < 0) {
PMD_DRV_LOG(ERR, "Failed to enable UIO interrupts, ret=%d\n",
ret);
@@ -759,7 +759,7 @@ avp_dev_disable_interrupts(struct rte_eth_dev *eth_dev)
RTE_PTR_ADD(registers, RTE_AVP_INTERRUPT_MASK_OFFSET));
/* enable UIO interrupt handling */
- ret = rte_intr_disable(&pci_dev->intr_handle);
+ ret = rte_intr_disable(pci_dev->intr_handle);
if (ret < 0) {
PMD_DRV_LOG(ERR, "Failed to disable UIO interrupts, ret=%d\n",
ret);
@@ -776,7 +776,7 @@ avp_dev_setup_interrupts(struct rte_eth_dev *eth_dev)
int ret;
/* register a callback handler with UIO for interrupt notifications */
- ret = rte_intr_callback_register(&pci_dev->intr_handle,
+ ret = rte_intr_callback_register(pci_dev->intr_handle,
avp_dev_interrupt_handler,
(void *)eth_dev);
if (ret < 0) {
diff --git a/drivers/net/axgbe/axgbe_ethdev.c b/drivers/net/axgbe/axgbe_ethdev.c
index 9cb4818af1..c26e0a199e 100644
--- a/drivers/net/axgbe/axgbe_ethdev.c
+++ b/drivers/net/axgbe/axgbe_ethdev.c
@@ -313,7 +313,7 @@ axgbe_dev_interrupt_handler(void *param)
}
}
/* Unmask interrupts since disabled after generation */
- rte_intr_ack(&pdata->pci_dev->intr_handle);
+ rte_intr_ack(pdata->pci_dev->intr_handle);
}
/*
@@ -374,7 +374,7 @@ axgbe_dev_start(struct rte_eth_dev *dev)
}
/* enable uio/vfio intr/eventfd mapping */
- rte_intr_enable(&pdata->pci_dev->intr_handle);
+ rte_intr_enable(pdata->pci_dev->intr_handle);
/* phy start*/
pdata->phy_if.phy_start(pdata);
@@ -404,7 +404,7 @@ axgbe_dev_stop(struct rte_eth_dev *dev)
PMD_INIT_FUNC_TRACE();
- rte_intr_disable(&pdata->pci_dev->intr_handle);
+ rte_intr_disable(pdata->pci_dev->intr_handle);
if (rte_bit_relaxed_get32(AXGBE_STOPPED, &pdata->dev_state))
return 0;
@@ -2323,7 +2323,7 @@ eth_axgbe_dev_init(struct rte_eth_dev *eth_dev)
return ret;
}
- rte_intr_callback_register(&pci_dev->intr_handle,
+ rte_intr_callback_register(pci_dev->intr_handle,
axgbe_dev_interrupt_handler,
(void *)eth_dev);
PMD_INIT_LOG(DEBUG, "port %d vendorID=0x%x deviceID=0x%x",
@@ -2347,8 +2347,8 @@ axgbe_dev_close(struct rte_eth_dev *eth_dev)
axgbe_dev_clear_queues(eth_dev);
/* disable uio intr before callback unregister */
- rte_intr_disable(&pci_dev->intr_handle);
- rte_intr_callback_unregister(&pci_dev->intr_handle,
+ rte_intr_disable(pci_dev->intr_handle);
+ rte_intr_callback_unregister(pci_dev->intr_handle,
axgbe_dev_interrupt_handler,
(void *)eth_dev);
diff --git a/drivers/net/axgbe/axgbe_mdio.c b/drivers/net/axgbe/axgbe_mdio.c
index 4f98e695ae..35ffda84f1 100644
--- a/drivers/net/axgbe/axgbe_mdio.c
+++ b/drivers/net/axgbe/axgbe_mdio.c
@@ -933,7 +933,7 @@ static int __axgbe_phy_config_aneg(struct axgbe_port *pdata)
}
/* Disable auto-negotiation interrupt */
- rte_intr_disable(&pdata->pci_dev->intr_handle);
+ rte_intr_disable(pdata->pci_dev->intr_handle);
/* Start auto-negotiation in a supported mode */
if (axgbe_use_mode(pdata, AXGBE_MODE_KR)) {
@@ -951,7 +951,7 @@ static int __axgbe_phy_config_aneg(struct axgbe_port *pdata)
} else if (axgbe_use_mode(pdata, AXGBE_MODE_SGMII_100)) {
axgbe_set_mode(pdata, AXGBE_MODE_SGMII_100);
} else {
- rte_intr_enable(&pdata->pci_dev->intr_handle);
+ rte_intr_enable(pdata->pci_dev->intr_handle);
return -EINVAL;
}
@@ -964,7 +964,7 @@ static int __axgbe_phy_config_aneg(struct axgbe_port *pdata)
pdata->kx_state = AXGBE_RX_BPA;
/* Re-enable auto-negotiation interrupt */
- rte_intr_enable(&pdata->pci_dev->intr_handle);
+ rte_intr_enable(pdata->pci_dev->intr_handle);
axgbe_an37_enable_interrupts(pdata);
axgbe_an_init(pdata);
diff --git a/drivers/net/bnx2x/bnx2x_ethdev.c b/drivers/net/bnx2x/bnx2x_ethdev.c
index 463886f17a..a34b2f078b 100644
--- a/drivers/net/bnx2x/bnx2x_ethdev.c
+++ b/drivers/net/bnx2x/bnx2x_ethdev.c
@@ -134,7 +134,7 @@ bnx2x_interrupt_handler(void *param)
PMD_DEBUG_PERIODIC_LOG(INFO, sc, "Interrupt handled");
bnx2x_interrupt_action(dev, 1);
- rte_intr_ack(&sc->pci_dev->intr_handle);
+ rte_intr_ack(sc->pci_dev->intr_handle);
}
static void bnx2x_periodic_start(void *param)
@@ -234,10 +234,10 @@ bnx2x_dev_start(struct rte_eth_dev *dev)
}
if (IS_PF(sc)) {
- rte_intr_callback_register(&sc->pci_dev->intr_handle,
+ rte_intr_callback_register(sc->pci_dev->intr_handle,
bnx2x_interrupt_handler, (void *)dev);
- if (rte_intr_enable(&sc->pci_dev->intr_handle))
+ if (rte_intr_enable(sc->pci_dev->intr_handle))
PMD_DRV_LOG(ERR, sc, "rte_intr_enable failed");
}
@@ -262,8 +262,8 @@ bnx2x_dev_stop(struct rte_eth_dev *dev)
bnx2x_dev_rxtx_init_dummy(dev);
if (IS_PF(sc)) {
- rte_intr_disable(&sc->pci_dev->intr_handle);
- rte_intr_callback_unregister(&sc->pci_dev->intr_handle,
+ rte_intr_disable(sc->pci_dev->intr_handle);
+ rte_intr_callback_unregister(sc->pci_dev->intr_handle,
bnx2x_interrupt_handler, (void *)dev);
/* stop the periodic callout */
diff --git a/drivers/net/bnxt/bnxt_ethdev.c b/drivers/net/bnxt/bnxt_ethdev.c
index de34a2f0bb..02598d8030 100644
--- a/drivers/net/bnxt/bnxt_ethdev.c
+++ b/drivers/net/bnxt/bnxt_ethdev.c
@@ -729,7 +729,7 @@ static int bnxt_alloc_prev_ring_stats(struct bnxt *bp)
static int bnxt_start_nic(struct bnxt *bp)
{
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(bp->eth_dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
uint32_t intr_vector = 0;
uint32_t queue_id, base = BNXT_MISC_VEC_ID;
uint32_t vec = BNXT_MISC_VEC_ID;
@@ -831,12 +831,10 @@ static int bnxt_start_nic(struct bnxt *bp)
return rc;
}
- if (rte_intr_dp_is_en(intr_handle) && !intr_handle->intr_vec) {
- intr_handle->intr_vec =
- rte_zmalloc("intr_vec",
- bp->eth_dev->data->nb_rx_queues *
- sizeof(int), 0);
- if (intr_handle->intr_vec == NULL) {
+ if (rte_intr_dp_is_en(intr_handle) &&
+ !rte_intr_handle_vec_list_base(intr_handle)) {
+ if (rte_intr_handle_vec_list_alloc(intr_handle, "intr_vec",
+ bp->eth_dev->data->nb_rx_queues)) {
PMD_DRV_LOG(ERR, "Failed to allocate %d rx_queues"
" intr_vec", bp->eth_dev->data->nb_rx_queues);
rc = -ENOMEM;
@@ -844,13 +842,15 @@ static int bnxt_start_nic(struct bnxt *bp)
}
PMD_DRV_LOG(DEBUG, "intr_handle->intr_vec = %p "
"intr_handle->nb_efd = %d intr_handle->max_intr = %d\n",
- intr_handle->intr_vec, intr_handle->nb_efd,
- intr_handle->max_intr);
+ rte_intr_handle_vec_list_base(intr_handle),
+ rte_intr_handle_nb_efd_get(intr_handle),
+ rte_intr_handle_max_intr_get(intr_handle));
for (queue_id = 0; queue_id < bp->eth_dev->data->nb_rx_queues;
queue_id++) {
- intr_handle->intr_vec[queue_id] =
- vec + BNXT_RX_VEC_START;
- if (vec < base + intr_handle->nb_efd - 1)
+ rte_intr_handle_vec_list_index_set(intr_handle,
+ queue_id, vec + BNXT_RX_VEC_START);
+ if (vec < base + rte_intr_handle_nb_efd_get(intr_handle)
+ - 1)
vec++;
}
}
@@ -1459,7 +1459,7 @@ static int bnxt_dev_stop(struct rte_eth_dev *eth_dev)
{
struct bnxt *bp = eth_dev->data->dev_private;
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
struct rte_eth_link link;
int ret;
@@ -1501,10 +1501,8 @@ static int bnxt_dev_stop(struct rte_eth_dev *eth_dev)
/* Clean queue intr-vector mapping */
rte_intr_efd_disable(intr_handle);
- if (intr_handle->intr_vec != NULL) {
- rte_free(intr_handle->intr_vec);
- intr_handle->intr_vec = NULL;
- }
+ if (rte_intr_handle_vec_list_base(intr_handle))
+ rte_intr_handle_vec_list_free(intr_handle);
bnxt_hwrm_port_clr_stats(bp);
bnxt_free_tx_mbufs(bp);
diff --git a/drivers/net/bnxt/bnxt_irq.c b/drivers/net/bnxt/bnxt_irq.c
index 122a1f9908..508abfc844 100644
--- a/drivers/net/bnxt/bnxt_irq.c
+++ b/drivers/net/bnxt/bnxt_irq.c
@@ -67,7 +67,7 @@ void bnxt_int_handler(void *param)
int bnxt_free_int(struct bnxt *bp)
{
- struct rte_intr_handle *intr_handle = &bp->pdev->intr_handle;
+ struct rte_intr_handle *intr_handle = bp->pdev->intr_handle;
struct bnxt_irq *irq = bp->irq_tbl;
int rc = 0;
@@ -170,7 +170,7 @@ int bnxt_setup_int(struct bnxt *bp)
int bnxt_request_int(struct bnxt *bp)
{
- struct rte_intr_handle *intr_handle = &bp->pdev->intr_handle;
+ struct rte_intr_handle *intr_handle = bp->pdev->intr_handle;
struct bnxt_irq *irq = bp->irq_tbl;
int rc = 0;
diff --git a/drivers/net/dpaa/dpaa_ethdev.c b/drivers/net/dpaa/dpaa_ethdev.c
index 27d670f843..1f4336b4a7 100644
--- a/drivers/net/dpaa/dpaa_ethdev.c
+++ b/drivers/net/dpaa/dpaa_ethdev.c
@@ -219,7 +219,7 @@ dpaa_eth_dev_configure(struct rte_eth_dev *dev)
PMD_INIT_FUNC_TRACE();
dpaa_dev = container_of(rdev, struct rte_dpaa_device, device);
- intr_handle = &dpaa_dev->intr_handle;
+ intr_handle = dpaa_dev->intr_handle;
__fif = container_of(fif, struct __fman_if, __if);
/* Rx offloads which are enabled by default */
@@ -276,13 +276,14 @@ dpaa_eth_dev_configure(struct rte_eth_dev *dev)
}
/* if the interrupts were configured on this devices*/
- if (intr_handle && intr_handle->fd) {
+ if (intr_handle && rte_intr_handle_fd_get(intr_handle)) {
if (dev->data->dev_conf.intr_conf.lsc != 0)
rte_intr_callback_register(intr_handle,
dpaa_interrupt_handler,
(void *)dev);
- ret = dpaa_intr_enable(__fif->node_name, intr_handle->fd);
+ ret = dpaa_intr_enable(__fif->node_name,
+ rte_intr_handle_fd_get(intr_handle));
if (ret) {
if (dev->data->dev_conf.intr_conf.lsc != 0) {
rte_intr_callback_unregister(intr_handle,
@@ -389,9 +390,10 @@ static void dpaa_interrupt_handler(void *param)
int bytes_read;
dpaa_dev = container_of(rdev, struct rte_dpaa_device, device);
- intr_handle = &dpaa_dev->intr_handle;
+ intr_handle = dpaa_dev->intr_handle;
- bytes_read = read(intr_handle->fd, &buf, sizeof(uint64_t));
+ bytes_read = read(rte_intr_handle_fd_get(intr_handle), &buf,
+ sizeof(uint64_t));
if (bytes_read < 0)
DPAA_PMD_ERR("Error reading eventfd\n");
dpaa_eth_link_update(dev, 0);
@@ -461,7 +463,7 @@ static int dpaa_eth_dev_close(struct rte_eth_dev *dev)
}
dpaa_dev = container_of(rdev, struct rte_dpaa_device, device);
- intr_handle = &dpaa_dev->intr_handle;
+ intr_handle = dpaa_dev->intr_handle;
__fif = container_of(fif, struct __fman_if, __if);
ret = dpaa_eth_dev_stop(dev);
@@ -470,7 +472,7 @@ static int dpaa_eth_dev_close(struct rte_eth_dev *dev)
if (link->link_status && !link->link_autoneg)
dpaa_restart_link_autoneg(__fif->node_name);
- if (intr_handle && intr_handle->fd &&
+ if (intr_handle && rte_intr_handle_fd_get(intr_handle) &&
dev->data->dev_conf.intr_conf.lsc != 0) {
dpaa_intr_disable(__fif->node_name);
rte_intr_callback_unregister(intr_handle,
@@ -1101,20 +1103,33 @@ int dpaa_eth_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
dpaa_dev = container_of(rdev, struct rte_dpaa_device,
device);
- dev->intr_handle = &dpaa_dev->intr_handle;
- dev->intr_handle->intr_vec = rte_zmalloc(NULL,
- dpaa_push_mode_max_queue, 0);
- if (!dev->intr_handle->intr_vec) {
+ dev->intr_handle = dpaa_dev->intr_handle;
+ if (rte_intr_handle_vec_list_alloc(dev->intr_handle,
+ NULL, dpaa_push_mode_max_queue)) {
DPAA_PMD_ERR("intr_vec alloc failed");
return -ENOMEM;
}
- dev->intr_handle->nb_efd = dpaa_push_mode_max_queue;
- dev->intr_handle->max_intr = dpaa_push_mode_max_queue;
+ if (rte_intr_handle_nb_efd_set(dev->intr_handle,
+ dpaa_push_mode_max_queue))
+ return -rte_errno;
+
+ if (rte_intr_handle_max_intr_set(dev->intr_handle,
+ dpaa_push_mode_max_queue))
+ return -rte_errno;
}
- dev->intr_handle->type = RTE_INTR_HANDLE_EXT;
- dev->intr_handle->intr_vec[queue_idx] = queue_idx + 1;
- dev->intr_handle->efds[queue_idx] = q_fd;
+ if (rte_intr_handle_type_set(dev->intr_handle,
+ RTE_INTR_HANDLE_EXT))
+ return -rte_errno;
+
+ if (rte_intr_handle_vec_list_index_set(dev->intr_handle,
+ queue_idx, queue_idx + 1))
+ return -rte_errno;
+
+ if (rte_intr_handle_efds_index_set(dev->intr_handle, queue_idx,
+ q_fd))
+ return -rte_errno;
+
rxq->q_fd = q_fd;
}
rxq->bp_array = rte_dpaa_bpid_info;
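
[Illustrative sketch, not part of the patch: wiring one Rx queue's event fd into an external (RTE_INTR_HANDLE_EXT) handle, as the dpaa hunk above does. q_fd is assumed to be a file descriptor already created for the queue, and the function name is invented for the example.]

#include <stdint.h>
#include <rte_errno.h>
#include <rte_interrupts.h>

static int
example_map_queue_fd(struct rte_intr_handle *handle, uint16_t queue_idx,
		     int q_fd)
{
	if (rte_intr_handle_type_set(handle, RTE_INTR_HANDLE_EXT))
		return -rte_errno;
	if (rte_intr_handle_vec_list_index_set(handle, queue_idx,
					       queue_idx + 1))
		return -rte_errno;
	if (rte_intr_handle_efds_index_set(handle, queue_idx, q_fd))
		return -rte_errno;
	return 0;
}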
diff --git a/drivers/net/dpaa2/dpaa2_ethdev.c b/drivers/net/dpaa2/dpaa2_ethdev.c
index c12169578e..f95d3bbf53 100644
--- a/drivers/net/dpaa2/dpaa2_ethdev.c
+++ b/drivers/net/dpaa2/dpaa2_ethdev.c
@@ -1157,7 +1157,7 @@ dpaa2_dev_start(struct rte_eth_dev *dev)
struct rte_intr_handle *intr_handle;
dpaa2_dev = container_of(rdev, struct rte_dpaa2_device, device);
- intr_handle = &dpaa2_dev->intr_handle;
+ intr_handle = dpaa2_dev->intr_handle;
PMD_INIT_FUNC_TRACE();
@@ -1228,8 +1228,8 @@ dpaa2_dev_start(struct rte_eth_dev *dev)
}
/* if the interrupts were configured on this devices*/
- if (intr_handle && (intr_handle->fd) &&
- (dev->data->dev_conf.intr_conf.lsc != 0)) {
+ if (intr_handle && rte_intr_handle_fd_get(intr_handle) &&
+ dev->data->dev_conf.intr_conf.lsc != 0) {
/* Registering LSC interrupt handler */
rte_intr_callback_register(intr_handle,
dpaa2_interrupt_handler,
@@ -1268,8 +1268,8 @@ dpaa2_dev_stop(struct rte_eth_dev *dev)
PMD_INIT_FUNC_TRACE();
/* reset interrupt callback */
- if (intr_handle && (intr_handle->fd) &&
- (dev->data->dev_conf.intr_conf.lsc != 0)) {
+ if (intr_handle && rte_intr_handle_fd_get(intr_handle) &&
+ dev->data->dev_conf.intr_conf.lsc != 0) {
/*disable dpni irqs */
dpaa2_eth_setup_irqs(dev, 0);
diff --git a/drivers/net/e1000/em_ethdev.c b/drivers/net/e1000/em_ethdev.c
index a0ca371b02..fe20fc5e6c 100644
--- a/drivers/net/e1000/em_ethdev.c
+++ b/drivers/net/e1000/em_ethdev.c
@@ -237,7 +237,7 @@ static int
eth_em_dev_init(struct rte_eth_dev *eth_dev)
{
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
struct e1000_adapter *adapter =
E1000_DEV_PRIVATE(eth_dev->data->dev_private);
struct e1000_hw *hw =
@@ -525,7 +525,7 @@ eth_em_start(struct rte_eth_dev *dev)
struct e1000_hw *hw =
E1000_DEV_PRIVATE_TO_HW(dev->data->dev_private);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
int ret, mask;
uint32_t intr_vector = 0;
uint32_t *speeds;
@@ -575,12 +575,10 @@ eth_em_start(struct rte_eth_dev *dev)
}
if (rte_intr_dp_is_en(intr_handle)) {
- intr_handle->intr_vec =
- rte_zmalloc("intr_vec",
- dev->data->nb_rx_queues * sizeof(int), 0);
- if (intr_handle->intr_vec == NULL) {
+ if (rte_intr_handle_vec_list_alloc(intr_handle, "intr_vec",
+ dev->data->nb_rx_queues)) {
PMD_INIT_LOG(ERR, "Failed to allocate %d rx_queues"
- " intr_vec", dev->data->nb_rx_queues);
+ " intr_vec", dev->data->nb_rx_queues);
return -ENOMEM;
}
@@ -718,7 +716,7 @@ eth_em_stop(struct rte_eth_dev *dev)
struct rte_eth_link link;
struct e1000_hw *hw = E1000_DEV_PRIVATE_TO_HW(dev->data->dev_private);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
dev->data->dev_started = 0;
@@ -752,10 +750,8 @@ eth_em_stop(struct rte_eth_dev *dev)
/* Clean datapath event and queue/vec mapping */
rte_intr_efd_disable(intr_handle);
- if (intr_handle->intr_vec != NULL) {
- rte_free(intr_handle->intr_vec);
- intr_handle->intr_vec = NULL;
- }
+ if (rte_intr_handle_vec_list_base(intr_handle))
+ rte_intr_handle_vec_list_free(intr_handle);
return 0;
}
@@ -767,7 +763,7 @@ eth_em_close(struct rte_eth_dev *dev)
struct e1000_adapter *adapter =
E1000_DEV_PRIVATE(dev->data->dev_private);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
int ret;
if (rte_eal_process_type() != RTE_PROC_PRIMARY)
@@ -1008,7 +1004,7 @@ eth_em_rx_queue_intr_enable(struct rte_eth_dev *dev, __rte_unused uint16_t queue
{
struct e1000_hw *hw = E1000_DEV_PRIVATE_TO_HW(dev->data->dev_private);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
em_rxq_intr_enable(hw);
rte_intr_ack(intr_handle);
diff --git a/drivers/net/e1000/igb_ethdev.c b/drivers/net/e1000/igb_ethdev.c
index 10ee0f3341..66a6380496 100644
--- a/drivers/net/e1000/igb_ethdev.c
+++ b/drivers/net/e1000/igb_ethdev.c
@@ -515,7 +515,7 @@ igb_intr_enable(struct rte_eth_dev *dev)
struct e1000_hw *hw =
E1000_DEV_PRIVATE_TO_HW(dev->data->dev_private);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
if (rte_intr_allow_others(intr_handle) &&
dev->data->dev_conf.intr_conf.lsc != 0) {
@@ -532,7 +532,7 @@ igb_intr_disable(struct rte_eth_dev *dev)
struct e1000_hw *hw =
E1000_DEV_PRIVATE_TO_HW(dev->data->dev_private);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
if (rte_intr_allow_others(intr_handle) &&
dev->data->dev_conf.intr_conf.lsc != 0) {
@@ -853,12 +853,12 @@ eth_igb_dev_init(struct rte_eth_dev *eth_dev)
eth_dev->data->port_id, pci_dev->id.vendor_id,
pci_dev->id.device_id);
- rte_intr_callback_register(&pci_dev->intr_handle,
+ rte_intr_callback_register(pci_dev->intr_handle,
eth_igb_interrupt_handler,
(void *)eth_dev);
/* enable uio/vfio intr/eventfd mapping */
- rte_intr_enable(&pci_dev->intr_handle);
+ rte_intr_enable(pci_dev->intr_handle);
/* enable support intr */
igb_intr_enable(eth_dev);
@@ -1001,7 +1001,7 @@ eth_igbvf_dev_init(struct rte_eth_dev *eth_dev)
eth_dev->data->port_id, pci_dev->id.vendor_id,
pci_dev->id.device_id, "igb_mac_82576_vf");
- intr_handle = &pci_dev->intr_handle;
+ intr_handle = pci_dev->intr_handle;
rte_intr_callback_register(intr_handle,
eth_igbvf_interrupt_handler, eth_dev);
@@ -1205,7 +1205,7 @@ eth_igb_start(struct rte_eth_dev *dev)
struct e1000_adapter *adapter =
E1000_DEV_PRIVATE(dev->data->dev_private);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
int ret, mask;
uint32_t intr_vector = 0;
uint32_t ctrl_ext;
@@ -1264,11 +1264,11 @@ eth_igb_start(struct rte_eth_dev *dev)
return -1;
}
- if (rte_intr_dp_is_en(intr_handle) && !intr_handle->intr_vec) {
- intr_handle->intr_vec =
- rte_zmalloc("intr_vec",
- dev->data->nb_rx_queues * sizeof(int), 0);
- if (intr_handle->intr_vec == NULL) {
+ /* Allocate the vector list */
+ if (rte_intr_dp_is_en(intr_handle) &&
+ !rte_intr_handle_vec_list_base(intr_handle)) {
+ if (rte_intr_handle_vec_list_alloc(intr_handle, "intr_vec",
+ dev->data->nb_rx_queues)) {
PMD_INIT_LOG(ERR, "Failed to allocate %d rx_queues"
" intr_vec", dev->data->nb_rx_queues);
return -ENOMEM;
@@ -1427,7 +1427,7 @@ eth_igb_stop(struct rte_eth_dev *dev)
struct e1000_hw *hw = E1000_DEV_PRIVATE_TO_HW(dev->data->dev_private);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
struct rte_eth_link link;
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
struct e1000_adapter *adapter =
E1000_DEV_PRIVATE(dev->data->dev_private);
@@ -1471,10 +1471,8 @@ eth_igb_stop(struct rte_eth_dev *dev)
/* Clean datapath event and queue/vec mapping */
rte_intr_efd_disable(intr_handle);
- if (intr_handle->intr_vec != NULL) {
- rte_free(intr_handle->intr_vec);
- intr_handle->intr_vec = NULL;
- }
+ if (rte_intr_handle_vec_list_base(intr_handle))
+ rte_intr_handle_vec_list_free(intr_handle);
adapter->stopped = true;
dev->data->dev_started = 0;
@@ -1514,7 +1512,7 @@ eth_igb_close(struct rte_eth_dev *dev)
struct e1000_hw *hw = E1000_DEV_PRIVATE_TO_HW(dev->data->dev_private);
struct rte_eth_link link;
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
struct e1000_filter_info *filter_info =
E1000_DEV_PRIVATE_TO_FILTER_INFO(dev->data->dev_private);
int ret;
@@ -1540,10 +1538,9 @@ eth_igb_close(struct rte_eth_dev *dev)
igb_dev_free_queues(dev);
- if (intr_handle->intr_vec) {
- rte_free(intr_handle->intr_vec);
- intr_handle->intr_vec = NULL;
- }
+ /* Cleanup vector list */
+ if (rte_intr_handle_vec_list_base(intr_handle))
+ rte_intr_handle_vec_list_free(intr_handle);
memset(&link, 0, sizeof(link));
rte_eth_linkstatus_set(dev, &link);
@@ -2784,7 +2781,7 @@ static int eth_igb_rxq_interrupt_setup(struct rte_eth_dev *dev)
struct e1000_hw *hw =
E1000_DEV_PRIVATE_TO_HW(dev->data->dev_private);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
int misc_shift = rte_intr_allow_others(intr_handle) ? 1 : 0;
struct rte_eth_dev_info dev_info;
@@ -3301,7 +3298,7 @@ igbvf_dev_start(struct rte_eth_dev *dev)
struct e1000_adapter *adapter =
E1000_DEV_PRIVATE(dev->data->dev_private);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
int ret;
uint32_t intr_vector = 0;
@@ -3332,11 +3329,11 @@ igbvf_dev_start(struct rte_eth_dev *dev)
return ret;
}
- if (rte_intr_dp_is_en(intr_handle) && !intr_handle->intr_vec) {
- intr_handle->intr_vec =
- rte_zmalloc("intr_vec",
- dev->data->nb_rx_queues * sizeof(int), 0);
- if (!intr_handle->intr_vec) {
+ /* Allocate the vector list */
+ if (rte_intr_dp_is_en(intr_handle) &&
+ !rte_intr_handle_vec_list_base(intr_handle)) {
+ if (rte_intr_handle_vec_list_alloc(intr_handle, "intr_vec",
+ dev->data->nb_rx_queues)) {
PMD_INIT_LOG(ERR, "Failed to allocate %d rx_queues"
" intr_vec", dev->data->nb_rx_queues);
return -ENOMEM;
@@ -3358,7 +3355,7 @@ static int
igbvf_dev_stop(struct rte_eth_dev *dev)
{
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
struct e1000_adapter *adapter =
E1000_DEV_PRIVATE(dev->data->dev_private);
@@ -3382,10 +3379,10 @@ igbvf_dev_stop(struct rte_eth_dev *dev)
/* Clean datapath event and queue/vec mapping */
rte_intr_efd_disable(intr_handle);
- if (intr_handle->intr_vec) {
- rte_free(intr_handle->intr_vec);
- intr_handle->intr_vec = NULL;
- }
+
+ /* Clean vector list */
+ if (rte_intr_handle_vec_list_base(intr_handle))
+ rte_intr_handle_vec_list_free(intr_handle);
adapter->stopped = true;
dev->data->dev_started = 0;
@@ -3423,7 +3420,7 @@ igbvf_dev_close(struct rte_eth_dev *dev)
memset(&addr, 0, sizeof(addr));
igbvf_default_mac_addr_set(dev, &addr);
- rte_intr_callback_unregister(&pci_dev->intr_handle,
+ rte_intr_callback_unregister(pci_dev->intr_handle,
eth_igbvf_interrupt_handler,
(void *)dev);
@@ -5145,7 +5142,7 @@ eth_igb_rx_queue_intr_disable(struct rte_eth_dev *dev, uint16_t queue_id)
struct e1000_hw *hw =
E1000_DEV_PRIVATE_TO_HW(dev->data->dev_private);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
uint32_t vec = E1000_MISC_VEC_ID;
if (rte_intr_allow_others(intr_handle))
@@ -5165,7 +5162,7 @@ eth_igb_rx_queue_intr_enable(struct rte_eth_dev *dev, uint16_t queue_id)
struct e1000_hw *hw =
E1000_DEV_PRIVATE_TO_HW(dev->data->dev_private);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
uint32_t vec = E1000_MISC_VEC_ID;
if (rte_intr_allow_others(intr_handle))
@@ -5243,7 +5240,7 @@ eth_igb_configure_msix_intr(struct rte_eth_dev *dev)
uint32_t base = E1000_MISC_VEC_ID;
uint32_t misc_shift = 0;
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
/* won't configure msix register if no mapping is done
* between intr vector and event fd
@@ -5284,8 +5281,9 @@ eth_igb_configure_msix_intr(struct rte_eth_dev *dev)
E1000_WRITE_REG(hw, E1000_GPIE, E1000_GPIE_MSIX_MODE |
E1000_GPIE_PBA | E1000_GPIE_EIAME |
E1000_GPIE_NSICR);
- intr_mask = RTE_LEN2MASK(intr_handle->nb_efd, uint32_t) <<
- misc_shift;
+ intr_mask =
+ RTE_LEN2MASK(rte_intr_handle_nb_efd_get(intr_handle),
+ uint32_t) << misc_shift;
if (dev->data->dev_conf.intr_conf.lsc != 0)
intr_mask |= (1 << IGB_MSIX_OTHER_INTR_VEC);
@@ -5303,8 +5301,8 @@ eth_igb_configure_msix_intr(struct rte_eth_dev *dev)
/* use EIAM to auto-mask when MSI-X interrupt
* is asserted, this saves a register write for every interrupt
*/
- intr_mask = RTE_LEN2MASK(intr_handle->nb_efd, uint32_t) <<
- misc_shift;
+ intr_mask = RTE_LEN2MASK(rte_intr_handle_nb_efd_get(intr_handle),
+ uint32_t) << misc_shift;
if (dev->data->dev_conf.intr_conf.lsc != 0)
intr_mask |= (1 << IGB_MSIX_OTHER_INTR_VEC);
@@ -5314,8 +5312,8 @@ eth_igb_configure_msix_intr(struct rte_eth_dev *dev)
for (queue_id = 0; queue_id < dev->data->nb_rx_queues; queue_id++) {
eth_igb_assign_msix_vector(hw, 0, queue_id, vec);
- intr_handle->intr_vec[queue_id] = vec;
- if (vec < base + intr_handle->nb_efd - 1)
+ rte_intr_handle_vec_list_index_set(intr_handle, queue_id, vec);
+ if (vec < base + rte_intr_handle_nb_efd_get(intr_handle) - 1)
vec++;
}
diff --git a/drivers/net/ena/ena_ethdev.c b/drivers/net/ena/ena_ethdev.c
index 4cebf60a68..f73d7bb5bc 100644
--- a/drivers/net/ena/ena_ethdev.c
+++ b/drivers/net/ena/ena_ethdev.c
@@ -473,7 +473,7 @@ static void ena_config_debug_area(struct ena_adapter *adapter)
static int ena_close(struct rte_eth_dev *dev)
{
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
struct ena_adapter *adapter = dev->data->dev_private;
int ret = 0;
@@ -947,7 +947,7 @@ static int ena_stop(struct rte_eth_dev *dev)
struct ena_adapter *adapter = dev->data->dev_private;
struct ena_com_dev *ena_dev = &adapter->ena_dev;
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
int rc;
/* Cannot free memory in secondary process */
@@ -969,10 +969,10 @@ static int ena_stop(struct rte_eth_dev *dev)
rte_intr_disable(intr_handle);
rte_intr_efd_disable(intr_handle);
- if (intr_handle->intr_vec != NULL) {
- rte_free(intr_handle->intr_vec);
- intr_handle->intr_vec = NULL;
- }
+
+ /* Cleanup vector list */
+ if (rte_intr_handle_vec_list_base(intr_handle))
+ rte_intr_handle_vec_list_free(intr_handle);
rte_intr_enable(intr_handle);
@@ -988,7 +988,7 @@ static int ena_create_io_queue(struct rte_eth_dev *dev, struct ena_ring *ring)
struct ena_adapter *adapter = ring->adapter;
struct ena_com_dev *ena_dev = &adapter->ena_dev;
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
struct ena_com_create_io_ctx ctx =
/* policy set to _HOST just to satisfy icc compiler */
{ ENA_ADMIN_PLACEMENT_POLICY_HOST,
@@ -1008,7 +1008,10 @@ static int ena_create_io_queue(struct rte_eth_dev *dev, struct ena_ring *ring)
ena_qid = ENA_IO_RXQ_IDX(ring->id);
ctx.direction = ENA_COM_IO_QUEUE_DIRECTION_RX;
if (rte_intr_dp_is_en(intr_handle))
- ctx.msix_vector = intr_handle->intr_vec[ring->id];
+ ctx.msix_vector =
+ rte_intr_handle_vec_list_index_get(intr_handle,
+ ring->id);
+
for (i = 0; i < ring->ring_size; i++)
ring->empty_rx_reqs[i] = i;
}
@@ -1665,7 +1668,7 @@ static int eth_ena_dev_init(struct rte_eth_dev *eth_dev)
pci_dev->addr.devid,
pci_dev->addr.function);
- intr_handle = &pci_dev->intr_handle;
+ intr_handle = pci_dev->intr_handle;
adapter->regs = pci_dev->mem_resource[ENA_REGS_BAR].addr;
adapter->dev_mem_base = pci_dev->mem_resource[ENA_MEM_BAR].addr;
@@ -2817,7 +2820,7 @@ static int ena_parse_devargs(struct ena_adapter *adapter,
static int ena_setup_rx_intr(struct rte_eth_dev *dev)
{
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
int rc;
uint16_t vectors_nb, i;
bool rx_intr_requested = dev->data->dev_conf.intr_conf.rxq;
@@ -2844,9 +2847,9 @@ static int ena_setup_rx_intr(struct rte_eth_dev *dev)
goto enable_intr;
}
- intr_handle->intr_vec = rte_zmalloc("intr_vec",
- dev->data->nb_rx_queues * sizeof(*intr_handle->intr_vec), 0);
- if (intr_handle->intr_vec == NULL) {
+ /* Allocate the vector list */
+ if (rte_intr_handle_vec_list_alloc(intr_handle, "intr_vec",
+ dev->data->nb_rx_queues)) {
PMD_DRV_LOG(ERR,
"Failed to allocate interrupt vector for %d queues\n",
dev->data->nb_rx_queues);
@@ -2865,7 +2868,9 @@ static int ena_setup_rx_intr(struct rte_eth_dev *dev)
}
for (i = 0; i < vectors_nb; ++i)
- intr_handle->intr_vec[i] = RTE_INTR_VEC_RXTX_OFFSET + i;
+ if (rte_intr_handle_vec_list_index_set(intr_handle, i,
+ RTE_INTR_VEC_RXTX_OFFSET + i))
+ goto disable_intr_efd;
rte_intr_enable(intr_handle);
return 0;
@@ -2873,8 +2878,7 @@ static int ena_setup_rx_intr(struct rte_eth_dev *dev)
disable_intr_efd:
rte_intr_efd_disable(intr_handle);
free_intr_vec:
- rte_free(intr_handle->intr_vec);
- intr_handle->intr_vec = NULL;
+ rte_intr_handle_vec_list_free(intr_handle);
enable_intr:
rte_intr_enable(intr_handle);
return rc;
diff --git a/drivers/net/enic/enic_main.c b/drivers/net/enic/enic_main.c
index 2affd380c6..0045dbd3f5 100644
--- a/drivers/net/enic/enic_main.c
+++ b/drivers/net/enic/enic_main.c
@@ -448,7 +448,7 @@ enic_intr_handler(void *arg)
rte_eth_dev_callback_process(dev, RTE_ETH_EVENT_INTR_LSC, NULL);
enic_log_q_error(enic);
/* Re-enable irq in case of INTx */
- rte_intr_ack(&enic->pdev->intr_handle);
+ rte_intr_ack(enic->pdev->intr_handle);
}
static int enic_rxq_intr_init(struct enic *enic)
@@ -477,14 +477,16 @@ static int enic_rxq_intr_init(struct enic *enic)
" interrupts\n");
return err;
}
- intr_handle->intr_vec = rte_zmalloc("enic_intr_vec",
- rxq_intr_count * sizeof(int), 0);
- if (intr_handle->intr_vec == NULL) {
+
+ if (rte_intr_handle_vec_list_alloc(intr_handle, "enic_intr_vec",
+ rxq_intr_count)) {
dev_err(enic, "Failed to allocate intr_vec\n");
return -ENOMEM;
}
for (i = 0; i < rxq_intr_count; i++)
- intr_handle->intr_vec[i] = i + ENICPMD_RXQ_INTR_OFFSET;
+ if (rte_intr_handle_vec_list_index_set(intr_handle, i,
+ i + ENICPMD_RXQ_INTR_OFFSET))
+ return -rte_errno;
return 0;
}
@@ -494,10 +496,9 @@ static void enic_rxq_intr_deinit(struct enic *enic)
intr_handle = enic->rte_dev->intr_handle;
rte_intr_efd_disable(intr_handle);
- if (intr_handle->intr_vec != NULL) {
- rte_free(intr_handle->intr_vec);
- intr_handle->intr_vec = NULL;
- }
+
+ if (rte_intr_handle_vec_list_base(intr_handle))
+ rte_intr_handle_vec_list_free(intr_handle);
}
static void enic_prep_wq_for_simple_tx(struct enic *enic, uint16_t queue_idx)
@@ -667,10 +668,10 @@ int enic_enable(struct enic *enic)
vnic_dev_enable_wait(enic->vdev);
/* Register and enable error interrupt */
- rte_intr_callback_register(&(enic->pdev->intr_handle),
+ rte_intr_callback_register(enic->pdev->intr_handle,
enic_intr_handler, (void *)enic->rte_dev);
- rte_intr_enable(&(enic->pdev->intr_handle));
+ rte_intr_enable(enic->pdev->intr_handle);
/* Unmask LSC interrupt */
vnic_intr_unmask(&enic->intr[ENICPMD_LSC_INTR_OFFSET]);
@@ -1112,8 +1113,8 @@ int enic_disable(struct enic *enic)
(void)vnic_intr_masked(&enic->intr[i]); /* flush write */
}
enic_rxq_intr_deinit(enic);
- rte_intr_disable(&enic->pdev->intr_handle);
- rte_intr_callback_unregister(&enic->pdev->intr_handle,
+ rte_intr_disable(enic->pdev->intr_handle);
+ rte_intr_callback_unregister(enic->pdev->intr_handle,
enic_intr_handler,
(void *)enic->rte_dev);
diff --git a/drivers/net/failsafe/failsafe.c b/drivers/net/failsafe/failsafe.c
index 8216063a3d..b5c53e4286 100644
--- a/drivers/net/failsafe/failsafe.c
+++ b/drivers/net/failsafe/failsafe.c
@@ -266,11 +266,25 @@ fs_eth_dev_create(struct rte_vdev_device *vdev)
mac->addr_bytes[4], mac->addr_bytes[5]);
dev->data->dev_flags |= RTE_ETH_DEV_INTR_LSC |
RTE_ETH_DEV_AUTOFILL_QUEUE_XSTATS;
- PRIV(dev)->intr_handle = (struct rte_intr_handle){
- .fd = -1,
- .type = RTE_INTR_HANDLE_EXT,
- };
+
+ /* Allocate interrupt instance */
+ PRIV(dev)->intr_handle =
+ rte_intr_handle_instance_alloc(RTE_INTR_HANDLE_DEFAULT_SIZE,
+ true);
+ if (!PRIV(dev)->intr_handle) {
+ ERROR("Failed to allocate intr handle");
+ goto cancel_alarm;
+ }
+
+ if (rte_intr_handle_fd_set(PRIV(dev)->intr_handle, -1))
+ goto cancel_alarm;
+
+ if (rte_intr_handle_type_set(PRIV(dev)->intr_handle,
+ RTE_INTR_HANDLE_EXT))
+ goto cancel_alarm;
+
rte_eth_dev_probing_finish(dev);
+
return 0;
cancel_alarm:
failsafe_hotplug_alarm_cancel(dev);
@@ -299,6 +313,8 @@ fs_rte_eth_free(const char *name)
return 0; /* port already released */
ret = failsafe_eth_dev_close(dev);
rte_eth_dev_release_port(dev);
+ if (PRIV(dev)->intr_handle)
+ rte_intr_handle_instance_free(PRIV(dev)->intr_handle);
return ret;
}
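
[Illustrative sketch, not part of the patch: the probe/remove lifecycle for drivers that used to embed struct rte_intr_handle and now must allocate an instance, as the failsafe hunks above do. The helper name is invented; the allocation flag is passed as in the hunk above.]

#include <rte_interrupts.h>

static struct rte_intr_handle *
example_intr_instance_create(void)
{
	struct rte_intr_handle *handle;

	handle = rte_intr_handle_instance_alloc(RTE_INTR_HANDLE_DEFAULT_SIZE,
						false);
	if (handle == NULL)
		return NULL;

	/* Start with no fd and an external handle type */
	if (rte_intr_handle_fd_set(handle, -1) ||
	    rte_intr_handle_type_set(handle, RTE_INTR_HANDLE_EXT)) {
		rte_intr_handle_instance_free(handle);
		return NULL;
	}
	return handle;
}

On remove, the matching rte_intr_handle_instance_free() releases the instance, as fs_rte_eth_free() does above.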
diff --git a/drivers/net/failsafe/failsafe_intr.c b/drivers/net/failsafe/failsafe_intr.c
index 602c04033c..57df67c6c5 100644
--- a/drivers/net/failsafe/failsafe_intr.c
+++ b/drivers/net/failsafe/failsafe_intr.c
@@ -410,12 +410,11 @@ fs_rx_intr_vec_uninstall(struct fs_priv *priv)
{
struct rte_intr_handle *intr_handle;
- intr_handle = &priv->intr_handle;
- if (intr_handle->intr_vec != NULL) {
- free(intr_handle->intr_vec);
- intr_handle->intr_vec = NULL;
- }
- intr_handle->nb_efd = 0;
+ intr_handle = priv->intr_handle;
+ if (rte_intr_handle_vec_list_base(intr_handle))
+ rte_intr_handle_vec_list_free(intr_handle);
+
+ rte_intr_handle_nb_efd_set(intr_handle, 0);
}
/**
@@ -439,11 +438,10 @@ fs_rx_intr_vec_install(struct fs_priv *priv)
rxqs_n = priv->data->nb_rx_queues;
n = RTE_MIN(rxqs_n, (uint32_t)RTE_MAX_RXTX_INTR_VEC_ID);
count = 0;
- intr_handle = &priv->intr_handle;
- RTE_ASSERT(intr_handle->intr_vec == NULL);
+ intr_handle = priv->intr_handle;
+ RTE_ASSERT(rte_intr_handle_vec_list_base(intr_handle) == NULL);
/* Allocate the interrupt vector of the failsafe Rx proxy interrupts */
- intr_handle->intr_vec = malloc(n * sizeof(intr_handle->intr_vec[0]));
- if (intr_handle->intr_vec == NULL) {
+ if (rte_intr_handle_vec_list_alloc(intr_handle, NULL, n)) {
fs_rx_intr_vec_uninstall(priv);
rte_errno = ENOMEM;
ERROR("Failed to allocate memory for interrupt vector,"
@@ -456,9 +454,9 @@ fs_rx_intr_vec_install(struct fs_priv *priv)
/* Skip queues that cannot request interrupts. */
if (rxq == NULL || rxq->event_fd < 0) {
/* Use invalid intr_vec[] index to disable entry. */
- intr_handle->intr_vec[i] =
- RTE_INTR_VEC_RXTX_OFFSET +
- RTE_MAX_RXTX_INTR_VEC_ID;
+ if (rte_intr_handle_vec_list_index_set(intr_handle, i,
+ RTE_INTR_VEC_RXTX_OFFSET + RTE_MAX_RXTX_INTR_VEC_ID))
+ return -rte_errno;
continue;
}
if (count >= RTE_MAX_RXTX_INTR_VEC_ID) {
@@ -469,15 +467,24 @@ fs_rx_intr_vec_install(struct fs_priv *priv)
fs_rx_intr_vec_uninstall(priv);
return -rte_errno;
}
- intr_handle->intr_vec[i] = RTE_INTR_VEC_RXTX_OFFSET + count;
- intr_handle->efds[count] = rxq->event_fd;
+ if (rte_intr_handle_vec_list_index_set(intr_handle, i,
+ RTE_INTR_VEC_RXTX_OFFSET + count))
+ return -rte_errno;
+
+ if (rte_intr_handle_efds_index_set(intr_handle, count,
+ rxq->event_fd))
+ return -rte_errno;
count++;
}
if (count == 0) {
fs_rx_intr_vec_uninstall(priv);
} else {
- intr_handle->nb_efd = count;
- intr_handle->efd_counter_size = sizeof(uint64_t);
+ if (rte_intr_handle_nb_efd_set(intr_handle, count))
+ return -rte_errno;
+
+ if (rte_intr_handle_efd_counter_size_set(intr_handle,
+ sizeof(uint64_t)))
+ return -rte_errno;
}
return 0;
}
@@ -499,7 +506,7 @@ failsafe_rx_intr_uninstall(struct rte_eth_dev *dev)
struct rte_intr_handle *intr_handle;
priv = PRIV(dev);
- intr_handle = &priv->intr_handle;
+ intr_handle = priv->intr_handle;
rte_intr_free_epoll_fd(intr_handle);
fs_rx_event_proxy_uninstall(priv);
fs_rx_intr_vec_uninstall(priv);
@@ -530,6 +537,6 @@ failsafe_rx_intr_install(struct rte_eth_dev *dev)
fs_rx_intr_vec_uninstall(priv);
return -rte_errno;
}
- dev->intr_handle = &priv->intr_handle;
+ dev->intr_handle = priv->intr_handle;
return 0;
}
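
Where a driver fills efds[] itself instead of going through rte_intr_efd_enable(), as in the Rx proxy above, the direct stores become setter calls. Condensed sketch of that loop (pmd_* names are illustrative only):

    static int
    pmd_register_event_fds(struct rte_intr_handle *intr_handle,
                           const int *fds, uint32_t nb_fds)
    {
        uint32_t i;

        for (i = 0; i < nb_fds; i++)
            if (rte_intr_handle_efds_index_set(intr_handle, i, fds[i]))
                return -rte_errno;

        /* replaces: nb_efd = count; efd_counter_size = sizeof(uint64_t); */
        if (rte_intr_handle_nb_efd_set(intr_handle, nb_fds) ||
            rte_intr_handle_efd_counter_size_set(intr_handle,
                                                 sizeof(uint64_t)))
            return -rte_errno;
        return 0;
    }
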
diff --git a/drivers/net/failsafe/failsafe_ops.c b/drivers/net/failsafe/failsafe_ops.c
index 5ff33e03e0..a3f5f34dd3 100644
--- a/drivers/net/failsafe/failsafe_ops.c
+++ b/drivers/net/failsafe/failsafe_ops.c
@@ -398,15 +398,24 @@ fs_rx_queue_setup(struct rte_eth_dev *dev,
* For the time being, fake as if we are using MSIX interrupts,
* this will cause rte_intr_efd_enable to allocate an eventfd for us.
*/
- struct rte_intr_handle intr_handle = {
- .type = RTE_INTR_HANDLE_VFIO_MSIX,
- .efds = { -1, },
- };
+ struct rte_intr_handle *intr_handle;
struct sub_device *sdev;
struct rxq *rxq;
uint8_t i;
int ret;
+ intr_handle =
+ rte_intr_handle_instance_alloc(RTE_INTR_HANDLE_DEFAULT_SIZE,
+ false);
+ if (!intr_handle)
+ return -ENOMEM;
+
+ if (rte_intr_handle_type_set(intr_handle, RTE_INTR_HANDLE_VFIO_MSIX) ||
+ rte_intr_handle_efds_index_set(intr_handle, 0, -1)) {
+ rte_intr_handle_instance_free(intr_handle);
+ return -rte_errno;
+ }
+
fs_lock(dev, 0);
if (rx_conf->rx_deferred_start) {
FOREACH_SUBDEV_STATE(sdev, i, dev, DEV_PROBED) {
@@ -440,12 +449,12 @@ fs_rx_queue_setup(struct rte_eth_dev *dev,
rxq->info.nb_desc = nb_rx_desc;
rxq->priv = PRIV(dev);
rxq->sdev = PRIV(dev)->subs;
- ret = rte_intr_efd_enable(&intr_handle, 1);
+ ret = rte_intr_efd_enable(intr_handle, 1);
if (ret < 0) {
fs_unlock(dev, 0);
return ret;
}
- rxq->event_fd = intr_handle.efds[0];
+ rxq->event_fd = rte_intr_handle_efds_index_get(intr_handle, 0);
dev->data->rx_queues[rx_queue_id] = rxq;
FOREACH_SUBDEV_STATE(sdev, i, dev, DEV_ACTIVE) {
ret = rte_eth_rx_queue_setup(PORT_ID(sdev),
@@ -458,10 +467,12 @@ fs_rx_queue_setup(struct rte_eth_dev *dev,
}
}
fs_unlock(dev, 0);
+ rte_intr_handle_instance_free(intr_handle);
return 0;
free_rxq:
fs_rx_queue_release(rxq);
fs_unlock(dev, 0);
+ rte_intr_handle_instance_free(intr_handle);
return ret;
}
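
The on-stack handle in fs_rx_queue_setup() becomes a short-lived heap instance, kept only long enough to get an eventfd out of rte_intr_efd_enable(); as in the hunk above, the fd stays valid after the handle is freed. A sketch of that temporary-handle idiom (pmd_* name is a placeholder):

    static int
    pmd_get_rx_event_fd(void)
    {
        struct rte_intr_handle *tmp;
        int ret;

        tmp = rte_intr_handle_instance_alloc(RTE_INTR_HANDLE_DEFAULT_SIZE,
                                             false);
        if (tmp == NULL)
            return -ENOMEM;

        /* fake MSI-X so rte_intr_efd_enable() allocates an eventfd */
        if (rte_intr_handle_type_set(tmp, RTE_INTR_HANDLE_VFIO_MSIX) ||
            rte_intr_handle_efds_index_set(tmp, 0, -1)) {
            ret = -rte_errno;
            goto out;
        }

        ret = rte_intr_efd_enable(tmp, 1);
        if (ret == 0)
            ret = rte_intr_handle_efds_index_get(tmp, 0); /* the eventfd */
    out:
        rte_intr_handle_instance_free(tmp);
        return ret;
    }
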
diff --git a/drivers/net/failsafe/failsafe_private.h b/drivers/net/failsafe/failsafe_private.h
index cd39d103c6..a80f5e2caf 100644
--- a/drivers/net/failsafe/failsafe_private.h
+++ b/drivers/net/failsafe/failsafe_private.h
@@ -166,7 +166,7 @@ struct fs_priv {
struct rte_ether_addr *mcast_addrs;
/* current capabilities */
struct rte_eth_dev_owner my_owner; /* Unique owner. */
- struct rte_intr_handle intr_handle; /* Port interrupt handle. */
+ struct rte_intr_handle *intr_handle; /* Port interrupt handle. */
/*
* Fail-safe state machine.
* This level will be tracking state of the EAL and eth
diff --git a/drivers/net/fm10k/fm10k_ethdev.c b/drivers/net/fm10k/fm10k_ethdev.c
index 3236290e40..6f58c2543f 100644
--- a/drivers/net/fm10k/fm10k_ethdev.c
+++ b/drivers/net/fm10k/fm10k_ethdev.c
@@ -32,7 +32,8 @@
#define BIT_MASK_PER_UINT32 ((1 << CHARS_PER_UINT32) - 1)
/* default 1:1 map from queue ID to interrupt vector ID */
-#define Q2V(pci_dev, queue_id) ((pci_dev)->intr_handle.intr_vec[queue_id])
+#define Q2V(pci_dev, queue_id) \
+ (rte_intr_handle_vec_list_index_get((pci_dev)->intr_handle, queue_id))
/* First 64 Logical ports for PF/VMDQ, second 64 for Flow director */
#define MAX_LPORT_NUM 128
@@ -690,7 +691,7 @@ fm10k_dev_rx_init(struct rte_eth_dev *dev)
struct fm10k_hw *hw = FM10K_DEV_PRIVATE_TO_HW(dev->data->dev_private);
struct fm10k_macvlan_filter_info *macvlan;
struct rte_pci_device *pdev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pdev->intr_handle;
+ struct rte_intr_handle *intr_handle = pdev->intr_handle;
int i, ret;
struct fm10k_rx_queue *rxq;
uint64_t base_addr;
@@ -1158,7 +1159,7 @@ fm10k_dev_stop(struct rte_eth_dev *dev)
{
struct fm10k_hw *hw = FM10K_DEV_PRIVATE_TO_HW(dev->data->dev_private);
struct rte_pci_device *pdev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pdev->intr_handle;
+ struct rte_intr_handle *intr_handle = pdev->intr_handle;
int i;
PMD_INIT_FUNC_TRACE();
@@ -1187,8 +1188,7 @@ fm10k_dev_stop(struct rte_eth_dev *dev)
}
/* Clean datapath event and queue/vec mapping */
rte_intr_efd_disable(intr_handle);
- rte_free(intr_handle->intr_vec);
- intr_handle->intr_vec = NULL;
+ rte_intr_handle_vec_list_free(intr_handle);
return 0;
}
@@ -2368,7 +2368,7 @@ fm10k_dev_rx_queue_intr_enable(struct rte_eth_dev *dev, uint16_t queue_id)
else
FM10K_WRITE_REG(hw, FM10K_VFITR(Q2V(pdev, queue_id)),
FM10K_ITR_AUTOMASK | FM10K_ITR_MASK_CLEAR);
- rte_intr_ack(&pdev->intr_handle);
+ rte_intr_ack(pdev->intr_handle);
return 0;
}
@@ -2393,7 +2393,7 @@ fm10k_dev_rxq_interrupt_setup(struct rte_eth_dev *dev)
{
struct fm10k_hw *hw = FM10K_DEV_PRIVATE_TO_HW(dev->data->dev_private);
struct rte_pci_device *pdev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pdev->intr_handle;
+ struct rte_intr_handle *intr_handle = pdev->intr_handle;
uint32_t intr_vector, vec;
uint16_t queue_id;
int result = 0;
@@ -2421,15 +2421,17 @@ fm10k_dev_rxq_interrupt_setup(struct rte_eth_dev *dev)
}
if (rte_intr_dp_is_en(intr_handle) && !result) {
- intr_handle->intr_vec = rte_zmalloc("intr_vec",
- dev->data->nb_rx_queues * sizeof(int), 0);
- if (intr_handle->intr_vec) {
+ if (!rte_intr_handle_vec_list_alloc(intr_handle, "intr_vec",
+ dev->data->nb_rx_queues)) {
for (queue_id = 0, vec = FM10K_RX_VEC_START;
queue_id < dev->data->nb_rx_queues;
queue_id++) {
- intr_handle->intr_vec[queue_id] = vec;
- if (vec < intr_handle->nb_efd - 1
- + FM10K_RX_VEC_START)
+ rte_intr_handle_vec_list_index_set(intr_handle,
+ queue_id, vec);
+ int nb_efd =
+ rte_intr_handle_nb_efd_get(intr_handle);
+ if (vec < (uint32_t)nb_efd - 1 +
+ FM10K_RX_VEC_START)
vec++;
}
} else {
@@ -2788,7 +2790,7 @@ fm10k_dev_close(struct rte_eth_dev *dev)
{
struct fm10k_hw *hw = FM10K_DEV_PRIVATE_TO_HW(dev->data->dev_private);
struct rte_pci_device *pdev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pdev->intr_handle;
+ struct rte_intr_handle *intr_handle = pdev->intr_handle;
int ret;
PMD_INIT_FUNC_TRACE();
@@ -3054,7 +3056,7 @@ eth_fm10k_dev_init(struct rte_eth_dev *dev)
{
struct fm10k_hw *hw = FM10K_DEV_PRIVATE_TO_HW(dev->data->dev_private);
struct rte_pci_device *pdev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pdev->intr_handle;
+ struct rte_intr_handle *intr_handle = pdev->intr_handle;
int diag, i;
struct fm10k_macvlan_filter_info *macvlan;
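
Helper macros that used to dereference intr_vec[] directly, such as Q2V() above, move onto the list getter; an equivalent sketch for a hypothetical PMD:

    /* was: ((pci_dev)->intr_handle.intr_vec[queue_id]) */
    #define PMD_Q2V(pci_dev, queue_id) \
        (rte_intr_handle_vec_list_index_get((pci_dev)->intr_handle, \
                                            (queue_id)))
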
diff --git a/drivers/net/hinic/hinic_pmd_ethdev.c b/drivers/net/hinic/hinic_pmd_ethdev.c
index 1a72401546..89c576a902 100644
--- a/drivers/net/hinic/hinic_pmd_ethdev.c
+++ b/drivers/net/hinic/hinic_pmd_ethdev.c
@@ -1225,13 +1225,13 @@ static void hinic_disable_interrupt(struct rte_eth_dev *dev)
hinic_set_msix_state(nic_dev->hwdev, 0, HINIC_MSIX_DISABLE);
/* disable rte interrupt */
- ret = rte_intr_disable(&pci_dev->intr_handle);
+ ret = rte_intr_disable(pci_dev->intr_handle);
if (ret)
PMD_DRV_LOG(ERR, "Disable intr failed: %d", ret);
do {
ret =
- rte_intr_callback_unregister(&pci_dev->intr_handle,
+ rte_intr_callback_unregister(pci_dev->intr_handle,
hinic_dev_interrupt_handler, dev);
if (ret >= 0) {
break;
@@ -3134,7 +3134,7 @@ static int hinic_func_init(struct rte_eth_dev *eth_dev)
}
/* register callback func to eal lib */
- rc = rte_intr_callback_register(&pci_dev->intr_handle,
+ rc = rte_intr_callback_register(pci_dev->intr_handle,
hinic_dev_interrupt_handler,
(void *)eth_dev);
if (rc) {
@@ -3144,7 +3144,7 @@ static int hinic_func_init(struct rte_eth_dev *eth_dev)
}
/* enable uio/vfio intr/eventfd mapping */
- rc = rte_intr_enable(&pci_dev->intr_handle);
+ rc = rte_intr_enable(pci_dev->intr_handle);
if (rc) {
PMD_DRV_LOG(ERR, "Enable rte interrupt failed, dev_name: %s",
eth_dev->data->name);
@@ -3174,7 +3174,7 @@ static int hinic_func_init(struct rte_eth_dev *eth_dev)
return 0;
enable_intr_fail:
- (void)rte_intr_callback_unregister(&pci_dev->intr_handle,
+ (void)rte_intr_callback_unregister(pci_dev->intr_handle,
hinic_dev_interrupt_handler,
(void *)eth_dev);
diff --git a/drivers/net/hns3/hns3_ethdev.c b/drivers/net/hns3/hns3_ethdev.c
index a374fa7915..41d33aac5e 100644
--- a/drivers/net/hns3/hns3_ethdev.c
+++ b/drivers/net/hns3/hns3_ethdev.c
@@ -5192,7 +5192,7 @@ hns3_init_pf(struct rte_eth_dev *eth_dev)
hns3_config_all_msix_error(hw, true);
- ret = rte_intr_callback_register(&pci_dev->intr_handle,
+ ret = rte_intr_callback_register(pci_dev->intr_handle,
hns3_interrupt_handler,
eth_dev);
if (ret) {
@@ -5205,7 +5205,7 @@ hns3_init_pf(struct rte_eth_dev *eth_dev)
goto err_get_config;
/* Enable interrupt */
- rte_intr_enable(&pci_dev->intr_handle);
+ rte_intr_enable(pci_dev->intr_handle);
hns3_pf_enable_irq0(hw);
/* Get configuration */
@@ -5264,8 +5264,8 @@ hns3_init_pf(struct rte_eth_dev *eth_dev)
hns3_tqp_stats_uninit(hw);
err_get_config:
hns3_pf_disable_irq0(hw);
- rte_intr_disable(&pci_dev->intr_handle);
- hns3_intr_unregister(&pci_dev->intr_handle, hns3_interrupt_handler,
+ rte_intr_disable(pci_dev->intr_handle);
+ hns3_intr_unregister(pci_dev->intr_handle, hns3_interrupt_handler,
eth_dev);
err_intr_callback_register:
err_cmd_init:
@@ -5298,8 +5298,8 @@ hns3_uninit_pf(struct rte_eth_dev *eth_dev)
hns3_tqp_stats_uninit(hw);
hns3_config_mac_tnl_int(hw, false);
hns3_pf_disable_irq0(hw);
- rte_intr_disable(&pci_dev->intr_handle);
- hns3_intr_unregister(&pci_dev->intr_handle, hns3_interrupt_handler,
+ rte_intr_disable(pci_dev->intr_handle);
+ hns3_intr_unregister(pci_dev->intr_handle, hns3_interrupt_handler,
eth_dev);
hns3_config_all_msix_error(hw, false);
hns3_cmd_uninit(hw);
@@ -5631,7 +5631,7 @@ static int
hns3_map_rx_interrupt(struct rte_eth_dev *dev)
{
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
struct hns3_hw *hw = HNS3_DEV_PRIVATE_TO_HW(dev->data->dev_private);
uint16_t base = RTE_INTR_VEC_ZERO_OFFSET;
uint16_t vec = RTE_INTR_VEC_ZERO_OFFSET;
@@ -5654,11 +5654,10 @@ hns3_map_rx_interrupt(struct rte_eth_dev *dev)
if (rte_intr_efd_enable(intr_handle, intr_vector))
return -EINVAL;
- if (intr_handle->intr_vec == NULL) {
- intr_handle->intr_vec =
- rte_zmalloc("intr_vec",
- hw->used_rx_queues * sizeof(int), 0);
- if (intr_handle->intr_vec == NULL) {
+ /* Allocate vector list */
+ if (!rte_intr_handle_vec_list_base(intr_handle)) {
+ if (rte_intr_handle_vec_list_alloc(intr_handle, "intr_vec",
+ hw->used_rx_queues)) {
hns3_err(hw, "failed to allocate %u rx_queues intr_vec",
hw->used_rx_queues);
ret = -ENOMEM;
@@ -5676,20 +5675,21 @@ hns3_map_rx_interrupt(struct rte_eth_dev *dev)
HNS3_RING_TYPE_RX, q_id);
if (ret)
goto bind_vector_error;
- intr_handle->intr_vec[q_id] = vec;
+
+ if (rte_intr_handle_vec_list_index_set(intr_handle, q_id, vec))
+ goto bind_vector_error;
/*
* If there are not enough efds (e.g. not enough interrupt),
* remaining queues will be bound to the last interrupt.
*/
- if (vec < base + intr_handle->nb_efd - 1)
+ if (vec < base + rte_intr_handle_nb_efd_get(intr_handle) - 1)
vec++;
}
rte_intr_enable(intr_handle);
return 0;
bind_vector_error:
- rte_free(intr_handle->intr_vec);
- intr_handle->intr_vec = NULL;
+ rte_intr_handle_vec_list_free(intr_handle);
alloc_intr_vec_error:
rte_intr_efd_disable(intr_handle);
return ret;
@@ -5700,7 +5700,7 @@ hns3_restore_rx_interrupt(struct hns3_hw *hw)
{
struct rte_eth_dev *dev = &rte_eth_devices[hw->data->port_id];
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
uint16_t q_id;
int ret;
@@ -5710,8 +5710,9 @@ hns3_restore_rx_interrupt(struct hns3_hw *hw)
if (rte_intr_dp_is_en(intr_handle)) {
for (q_id = 0; q_id < hw->used_rx_queues; q_id++) {
ret = hns3_bind_ring_with_vector(hw,
- intr_handle->intr_vec[q_id], true,
- HNS3_RING_TYPE_RX, q_id);
+ rte_intr_handle_vec_list_index_get(intr_handle,
+ q_id),
+ true, HNS3_RING_TYPE_RX, q_id);
if (ret)
return ret;
}
@@ -5846,7 +5847,7 @@ static void
hns3_unmap_rx_interrupt(struct rte_eth_dev *dev)
{
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
struct hns3_adapter *hns = dev->data->dev_private;
struct hns3_hw *hw = &hns->hw;
uint8_t base = RTE_INTR_VEC_ZERO_OFFSET;
@@ -5866,16 +5867,15 @@ hns3_unmap_rx_interrupt(struct rte_eth_dev *dev)
(void)hns3_bind_ring_with_vector(hw, vec, false,
HNS3_RING_TYPE_RX,
q_id);
- if (vec < base + intr_handle->nb_efd - 1)
+ if (vec < base + rte_intr_handle_nb_efd_get(intr_handle)
+ - 1)
vec++;
}
}
/* Clean datapath event and queue/vec mapping */
rte_intr_efd_disable(intr_handle);
- if (intr_handle->intr_vec) {
- rte_free(intr_handle->intr_vec);
- intr_handle->intr_vec = NULL;
- }
+ if (rte_intr_handle_vec_list_base(intr_handle))
+ rte_intr_handle_vec_list_free(intr_handle);
}
static int
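
The hns3 Rx interrupt mapping above (repeated with small variations in the i40e, ice and igc hunks that follow) reduces to one sequence: enable the efds, allocate the vector list if it is not already there, record a vector per queue, and bound the vector by nb_efd. A condensed sketch with the hardware binding stubbed out (pmd_* name is a placeholder):

    static int
    pmd_map_rx_interrupts(struct rte_intr_handle *intr_handle,
                          uint16_t nb_rxq, uint32_t nb_vec)
    {
        uint16_t base = RTE_INTR_VEC_ZERO_OFFSET;
        uint16_t vec = RTE_INTR_VEC_ZERO_OFFSET;
        uint16_t q_id;

        if (rte_intr_efd_enable(intr_handle, nb_vec))
            return -EINVAL;

        if (rte_intr_handle_vec_list_base(intr_handle) == NULL &&
            rte_intr_handle_vec_list_alloc(intr_handle, "intr_vec", nb_rxq))
            return -ENOMEM;

        for (q_id = 0; q_id < nb_rxq; q_id++) {
            /* hardware-specific ring/vector binding would go here */
            if (rte_intr_handle_vec_list_index_set(intr_handle, q_id, vec))
                return -rte_errno;
            /* once efds run out, remaining queues share the last vector */
            if (vec < base + rte_intr_handle_nb_efd_get(intr_handle) - 1)
                vec++;
        }
        rte_intr_enable(intr_handle);
        return 0;
    }
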
diff --git a/drivers/net/hns3/hns3_ethdev_vf.c b/drivers/net/hns3/hns3_ethdev_vf.c
index 8d9b7979c8..2ee2a837dd 100644
--- a/drivers/net/hns3/hns3_ethdev_vf.c
+++ b/drivers/net/hns3/hns3_ethdev_vf.c
@@ -1985,7 +1985,7 @@ hns3vf_init_vf(struct rte_eth_dev *eth_dev)
hns3vf_clear_event_cause(hw, 0);
- ret = rte_intr_callback_register(&pci_dev->intr_handle,
+ ret = rte_intr_callback_register(pci_dev->intr_handle,
hns3vf_interrupt_handler, eth_dev);
if (ret) {
PMD_INIT_LOG(ERR, "Failed to register intr: %d", ret);
@@ -1993,7 +1993,7 @@ hns3vf_init_vf(struct rte_eth_dev *eth_dev)
}
/* Enable interrupt */
- rte_intr_enable(&pci_dev->intr_handle);
+ rte_intr_enable(pci_dev->intr_handle);
hns3vf_enable_irq0(hw);
/* Get configuration from PF */
@@ -2045,8 +2045,8 @@ hns3vf_init_vf(struct rte_eth_dev *eth_dev)
err_get_config:
hns3vf_disable_irq0(hw);
- rte_intr_disable(&pci_dev->intr_handle);
- hns3_intr_unregister(&pci_dev->intr_handle, hns3vf_interrupt_handler,
+ rte_intr_disable(pci_dev->intr_handle);
+ hns3_intr_unregister(pci_dev->intr_handle, hns3vf_interrupt_handler,
eth_dev);
err_intr_callback_register:
err_cmd_init:
@@ -2074,8 +2074,8 @@ hns3vf_uninit_vf(struct rte_eth_dev *eth_dev)
hns3_flow_uninit(eth_dev);
hns3_tqp_stats_uninit(hw);
hns3vf_disable_irq0(hw);
- rte_intr_disable(&pci_dev->intr_handle);
- hns3_intr_unregister(&pci_dev->intr_handle, hns3vf_interrupt_handler,
+ rte_intr_disable(pci_dev->intr_handle);
+ hns3_intr_unregister(pci_dev->intr_handle, hns3vf_interrupt_handler,
eth_dev);
hns3_cmd_uninit(hw);
hns3_cmd_destroy_queue(hw);
@@ -2118,7 +2118,7 @@ hns3vf_unmap_rx_interrupt(struct rte_eth_dev *dev)
{
struct hns3_hw *hw = HNS3_DEV_PRIVATE_TO_HW(dev->data->dev_private);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
uint8_t base = RTE_INTR_VEC_ZERO_OFFSET;
uint8_t vec = RTE_INTR_VEC_ZERO_OFFSET;
uint16_t q_id;
@@ -2136,16 +2136,17 @@ hns3vf_unmap_rx_interrupt(struct rte_eth_dev *dev)
(void)hns3vf_bind_ring_with_vector(hw, vec, false,
HNS3_RING_TYPE_RX,
q_id);
- if (vec < base + intr_handle->nb_efd - 1)
+ if (vec < base + rte_intr_handle_nb_efd_get(intr_handle)
+ - 1)
vec++;
}
}
/* Clean datapath event and queue/vec mapping */
rte_intr_efd_disable(intr_handle);
- if (intr_handle->intr_vec) {
- rte_free(intr_handle->intr_vec);
- intr_handle->intr_vec = NULL;
- }
+
+ /* Cleanup vector list */
+ if (rte_intr_handle_vec_list_base(intr_handle))
+ rte_intr_handle_vec_list_free(intr_handle);
}
static int
@@ -2301,7 +2302,7 @@ static int
hns3vf_map_rx_interrupt(struct rte_eth_dev *dev)
{
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
struct hns3_hw *hw = HNS3_DEV_PRIVATE_TO_HW(dev->data->dev_private);
uint8_t base = RTE_INTR_VEC_ZERO_OFFSET;
uint8_t vec = RTE_INTR_VEC_ZERO_OFFSET;
@@ -2324,11 +2325,10 @@ hns3vf_map_rx_interrupt(struct rte_eth_dev *dev)
if (rte_intr_efd_enable(intr_handle, intr_vector))
return -EINVAL;
- if (intr_handle->intr_vec == NULL) {
- intr_handle->intr_vec =
- rte_zmalloc("intr_vec",
- hw->used_rx_queues * sizeof(int), 0);
- if (intr_handle->intr_vec == NULL) {
+ /* Allocate vector list */
+ if (!rte_intr_handle_vec_list_base(intr_handle)) {
+ if (rte_intr_handle_vec_list_alloc(intr_handle, "intr_vec",
+ hw->used_rx_queues)) {
hns3_err(hw, "Failed to allocate %u rx_queues"
" intr_vec", hw->used_rx_queues);
ret = -ENOMEM;
@@ -2346,20 +2346,22 @@ hns3vf_map_rx_interrupt(struct rte_eth_dev *dev)
HNS3_RING_TYPE_RX, q_id);
if (ret)
goto vf_bind_vector_error;
- intr_handle->intr_vec[q_id] = vec;
+
+ if (rte_intr_handle_vec_list_index_set(intr_handle, q_id, vec))
+ goto vf_bind_vector_error;
+
/*
* If there are not enough efds (e.g. not enough interrupt),
* remaining queues will be bound to the last interrupt.
*/
- if (vec < base + intr_handle->nb_efd - 1)
+ if (vec < base + rte_intr_handle_nb_efd_get(intr_handle) - 1)
vec++;
}
rte_intr_enable(intr_handle);
return 0;
vf_bind_vector_error:
- free(intr_handle->intr_vec);
- intr_handle->intr_vec = NULL;
+ rte_intr_handle_vec_list_free(intr_handle);
vf_alloc_intr_vec_error:
rte_intr_efd_disable(intr_handle);
return ret;
@@ -2370,7 +2372,7 @@ hns3vf_restore_rx_interrupt(struct hns3_hw *hw)
{
struct rte_eth_dev *dev = &rte_eth_devices[hw->data->port_id];
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
uint16_t q_id;
int ret;
@@ -2380,8 +2382,9 @@ hns3vf_restore_rx_interrupt(struct hns3_hw *hw)
if (rte_intr_dp_is_en(intr_handle)) {
for (q_id = 0; q_id < hw->used_rx_queues; q_id++) {
ret = hns3vf_bind_ring_with_vector(hw,
- intr_handle->intr_vec[q_id], true,
- HNS3_RING_TYPE_RX, q_id);
+ rte_intr_handle_vec_list_index_get(intr_handle,
+ q_id),
+ true, HNS3_RING_TYPE_RX, q_id);
if (ret)
return ret;
}
@@ -2845,7 +2848,7 @@ hns3vf_reinit_dev(struct hns3_adapter *hns)
int ret;
if (hw->reset.level == HNS3_VF_FULL_RESET) {
- rte_intr_disable(&pci_dev->intr_handle);
+ rte_intr_disable(pci_dev->intr_handle);
ret = hns3vf_set_bus_master(pci_dev, true);
if (ret < 0) {
hns3_err(hw, "failed to set pci bus, ret = %d", ret);
@@ -2871,7 +2874,7 @@ hns3vf_reinit_dev(struct hns3_adapter *hns)
hns3_err(hw, "Failed to enable msix");
}
- rte_intr_enable(&pci_dev->intr_handle);
+ rte_intr_enable(pci_dev->intr_handle);
}
ret = hns3_reset_all_tqps(hns);
diff --git a/drivers/net/hns3/hns3_rxtx.c b/drivers/net/hns3/hns3_rxtx.c
index d3fbe082e6..0bdf44488d 100644
--- a/drivers/net/hns3/hns3_rxtx.c
+++ b/drivers/net/hns3/hns3_rxtx.c
@@ -1037,7 +1037,7 @@ int
hns3_dev_rx_queue_intr_enable(struct rte_eth_dev *dev, uint16_t queue_id)
{
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
struct hns3_hw *hw = HNS3_DEV_PRIVATE_TO_HW(dev->data->dev_private);
if (dev->data->dev_conf.intr_conf.rxq == 0)
diff --git a/drivers/net/i40e/i40e_ethdev.c b/drivers/net/i40e/i40e_ethdev.c
index 7b230e2ed1..05f2b3c53c 100644
--- a/drivers/net/i40e/i40e_ethdev.c
+++ b/drivers/net/i40e/i40e_ethdev.c
@@ -1451,7 +1451,7 @@ eth_i40e_dev_init(struct rte_eth_dev *dev, void *init_params __rte_unused)
}
i40e_set_default_ptype_table(dev);
pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- intr_handle = &pci_dev->intr_handle;
+ intr_handle = pci_dev->intr_handle;
rte_eth_copy_pci_info(dev, pci_dev);
dev->data->dev_flags |= RTE_ETH_DEV_AUTOFILL_QUEUE_XSTATS;
@@ -1985,7 +1985,7 @@ i40e_vsi_queues_unbind_intr(struct i40e_vsi *vsi)
{
struct rte_eth_dev *dev = I40E_VSI_TO_ETH_DEV(vsi);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
struct i40e_hw *hw = I40E_VSI_TO_HW(vsi);
uint16_t msix_vect = vsi->msix_intr;
uint16_t i;
@@ -2101,10 +2101,11 @@ i40e_vsi_queues_bind_intr(struct i40e_vsi *vsi, uint16_t itr_idx)
{
struct rte_eth_dev *dev = I40E_VSI_TO_ETH_DEV(vsi);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
struct i40e_hw *hw = I40E_VSI_TO_HW(vsi);
uint16_t msix_vect = vsi->msix_intr;
- uint16_t nb_msix = RTE_MIN(vsi->nb_msix, intr_handle->nb_efd);
+ uint16_t nb_msix = RTE_MIN(vsi->nb_msix,
+ rte_intr_handle_nb_efd_get(intr_handle));
uint16_t queue_idx = 0;
int record = 0;
int i;
@@ -2154,8 +2155,8 @@ i40e_vsi_queues_bind_intr(struct i40e_vsi *vsi, uint16_t itr_idx)
vsi->nb_used_qps - i,
itr_idx);
for (; !!record && i < vsi->nb_used_qps; i++)
- intr_handle->intr_vec[queue_idx + i] =
- msix_vect;
+ rte_intr_handle_vec_list_index_set(intr_handle,
+ queue_idx + i, msix_vect);
break;
}
/* 1:1 queue/msix_vect mapping */
@@ -2163,7 +2164,9 @@ i40e_vsi_queues_bind_intr(struct i40e_vsi *vsi, uint16_t itr_idx)
vsi->base_queue + i, 1,
itr_idx);
if (!!record)
- intr_handle->intr_vec[queue_idx + i] = msix_vect;
+ if (rte_intr_handle_vec_list_index_set(intr_handle,
+ queue_idx + i, msix_vect))
+ return -rte_errno;
msix_vect++;
nb_msix--;
@@ -2177,7 +2180,7 @@ i40e_vsi_enable_queues_intr(struct i40e_vsi *vsi)
{
struct rte_eth_dev *dev = I40E_VSI_TO_ETH_DEV(vsi);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
struct i40e_hw *hw = I40E_VSI_TO_HW(vsi);
struct i40e_pf *pf = I40E_VSI_TO_PF(vsi);
uint16_t msix_intr, i;
@@ -2204,7 +2207,7 @@ i40e_vsi_disable_queues_intr(struct i40e_vsi *vsi)
{
struct rte_eth_dev *dev = I40E_VSI_TO_ETH_DEV(vsi);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
struct i40e_hw *hw = I40E_VSI_TO_HW(vsi);
struct i40e_pf *pf = I40E_VSI_TO_PF(vsi);
uint16_t msix_intr, i;
@@ -2370,7 +2373,7 @@ i40e_dev_start(struct rte_eth_dev *dev)
struct i40e_vsi *main_vsi = pf->main_vsi;
int ret, i;
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
uint32_t intr_vector = 0;
struct i40e_vsi *vsi;
uint16_t nb_rxq, nb_txq;
@@ -2388,12 +2391,10 @@ i40e_dev_start(struct rte_eth_dev *dev)
return ret;
}
- if (rte_intr_dp_is_en(intr_handle) && !intr_handle->intr_vec) {
- intr_handle->intr_vec =
- rte_zmalloc("intr_vec",
- dev->data->nb_rx_queues * sizeof(int),
- 0);
- if (!intr_handle->intr_vec) {
+ if (rte_intr_dp_is_en(intr_handle) &&
+ !rte_intr_handle_vec_list_base(intr_handle)) {
+ if (rte_intr_handle_vec_list_alloc(intr_handle, "intr_vec",
+ dev->data->nb_rx_queues)) {
PMD_INIT_LOG(ERR,
"Failed to allocate %d rx_queues intr_vec",
dev->data->nb_rx_queues);
@@ -2534,7 +2535,7 @@ i40e_dev_stop(struct rte_eth_dev *dev)
struct i40e_hw *hw = I40E_DEV_PRIVATE_TO_HW(dev->data->dev_private);
struct i40e_vsi *main_vsi = pf->main_vsi;
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
int i;
if (hw->adapter_stopped == 1)
@@ -2575,10 +2576,10 @@ i40e_dev_stop(struct rte_eth_dev *dev)
/* Clean datapath event and queue/vec mapping */
rte_intr_efd_disable(intr_handle);
- if (intr_handle->intr_vec) {
- rte_free(intr_handle->intr_vec);
- intr_handle->intr_vec = NULL;
- }
+
+ /* Cleanup vector list */
+ if (rte_intr_handle_vec_list_base(intr_handle))
+ rte_intr_handle_vec_list_free(intr_handle);
/* reset hierarchy commit */
pf->tm_conf.committed = false;
@@ -2597,7 +2598,7 @@ i40e_dev_close(struct rte_eth_dev *dev)
struct i40e_pf *pf = I40E_DEV_PRIVATE_TO_PF(dev->data->dev_private);
struct i40e_hw *hw = I40E_DEV_PRIVATE_TO_HW(dev->data->dev_private);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
struct i40e_mirror_rule *p_mirror;
struct i40e_filter_control_settings settings;
struct rte_flow *p_flow;
@@ -11404,11 +11405,11 @@ static int
i40e_dev_rx_queue_intr_enable(struct rte_eth_dev *dev, uint16_t queue_id)
{
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
struct i40e_hw *hw = I40E_DEV_PRIVATE_TO_HW(dev->data->dev_private);
uint16_t msix_intr;
- msix_intr = intr_handle->intr_vec[queue_id];
+ msix_intr = rte_intr_handle_vec_list_index_get(intr_handle, queue_id);
if (msix_intr == I40E_MISC_VEC_ID)
I40E_WRITE_REG(hw, I40E_PFINT_DYN_CTL0,
I40E_PFINT_DYN_CTL0_INTENA_MASK |
@@ -11423,7 +11424,7 @@ i40e_dev_rx_queue_intr_enable(struct rte_eth_dev *dev, uint16_t queue_id)
I40E_PFINT_DYN_CTLN_ITR_INDX_MASK);
I40E_WRITE_FLUSH(hw);
- rte_intr_ack(&pci_dev->intr_handle);
+ rte_intr_ack(pci_dev->intr_handle);
return 0;
}
@@ -11432,11 +11433,11 @@ static int
i40e_dev_rx_queue_intr_disable(struct rte_eth_dev *dev, uint16_t queue_id)
{
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
struct i40e_hw *hw = I40E_DEV_PRIVATE_TO_HW(dev->data->dev_private);
uint16_t msix_intr;
- msix_intr = intr_handle->intr_vec[queue_id];
+ msix_intr = rte_intr_handle_vec_list_index_get(intr_handle, queue_id);
if (msix_intr == I40E_MISC_VEC_ID)
I40E_WRITE_REG(hw, I40E_PFINT_DYN_CTL0,
I40E_PFINT_DYN_CTL0_ITR_INDX_MASK);
diff --git a/drivers/net/i40e/i40e_ethdev_vf.c b/drivers/net/i40e/i40e_ethdev_vf.c
index 0cfe13b7b2..4ecc160a75 100644
--- a/drivers/net/i40e/i40e_ethdev_vf.c
+++ b/drivers/net/i40e/i40e_ethdev_vf.c
@@ -678,7 +678,7 @@ i40evf_config_irq_map(struct rte_eth_dev *dev)
uint8_t *cmd_buffer = NULL;
struct virtchnl_irq_map_info *map_info;
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
uint32_t vec, cmd_buffer_size, max_vectors, nb_msix, msix_base, i;
uint16_t rxq_map[vf->vf_res->max_vectors];
int err;
@@ -689,12 +689,14 @@ i40evf_config_irq_map(struct rte_eth_dev *dev)
msix_base = I40E_RX_VEC_START;
/* For interrupt mode, available vector id is from 1. */
max_vectors = vf->vf_res->max_vectors - 1;
- nb_msix = RTE_MIN(max_vectors, intr_handle->nb_efd);
+ nb_msix = RTE_MIN(max_vectors,
+ (uint32_t)rte_intr_handle_nb_efd_get(intr_handle));
vec = msix_base;
for (i = 0; i < dev->data->nb_rx_queues; i++) {
rxq_map[vec] |= 1 << i;
- intr_handle->intr_vec[i] = vec++;
+ rte_intr_handle_vec_list_index_set(intr_handle, i,
+ vec++);
if (vec >= vf->vf_res->max_vectors)
vec = msix_base;
}
@@ -705,7 +707,8 @@ i40evf_config_irq_map(struct rte_eth_dev *dev)
for (i = 0; i < dev->data->nb_rx_queues; i++) {
rxq_map[msix_base] |= 1 << i;
if (rte_intr_dp_is_en(intr_handle))
- intr_handle->intr_vec[i] = msix_base;
+ rte_intr_handle_vec_list_index_set(intr_handle,
+ i, msix_base);
}
}
@@ -2003,7 +2006,7 @@ i40evf_enable_queues_intr(struct rte_eth_dev *dev)
{
struct i40e_hw *hw = I40E_DEV_PRIVATE_TO_HW(dev->data->dev_private);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
if (!rte_intr_allow_others(intr_handle)) {
I40E_WRITE_REG(hw,
@@ -2023,7 +2026,7 @@ i40evf_disable_queues_intr(struct rte_eth_dev *dev)
{
struct i40e_hw *hw = I40E_DEV_PRIVATE_TO_HW(dev->data->dev_private);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
if (!rte_intr_allow_others(intr_handle)) {
I40E_WRITE_REG(hw, I40E_VFINT_DYN_CTL01,
@@ -2039,13 +2042,13 @@ static int
i40evf_dev_rx_queue_intr_enable(struct rte_eth_dev *dev, uint16_t queue_id)
{
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
struct i40e_hw *hw = I40E_DEV_PRIVATE_TO_HW(dev->data->dev_private);
uint16_t interval =
i40e_calc_itr_interval(0, 0);
uint16_t msix_intr;
- msix_intr = intr_handle->intr_vec[queue_id];
+ msix_intr = rte_intr_handle_vec_list_index_get(intr_handle, queue_id);
if (msix_intr == I40E_MISC_VEC_ID)
I40E_WRITE_REG(hw, I40E_VFINT_DYN_CTL01,
I40E_VFINT_DYN_CTL01_INTENA_MASK |
@@ -2072,11 +2075,11 @@ static int
i40evf_dev_rx_queue_intr_disable(struct rte_eth_dev *dev, uint16_t queue_id)
{
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
struct i40e_hw *hw = I40E_DEV_PRIVATE_TO_HW(dev->data->dev_private);
uint16_t msix_intr;
- msix_intr = intr_handle->intr_vec[queue_id];
+ msix_intr = rte_intr_handle_vec_list_index_get(intr_handle, queue_id);
if (msix_intr == I40E_MISC_VEC_ID)
I40E_WRITE_REG(hw, I40E_VFINT_DYN_CTL01, 0);
else
@@ -2166,7 +2169,7 @@ i40evf_dev_start(struct rte_eth_dev *dev)
struct i40e_vf *vf = I40EVF_DEV_PRIVATE_TO_VF(dev->data->dev_private);
struct i40e_hw *hw = I40E_DEV_PRIVATE_TO_HW(dev->data->dev_private);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
uint32_t intr_vector = 0;
PMD_INIT_FUNC_TRACE();
@@ -2185,11 +2188,10 @@ i40evf_dev_start(struct rte_eth_dev *dev)
return -1;
}
- if (rte_intr_dp_is_en(intr_handle) && !intr_handle->intr_vec) {
- intr_handle->intr_vec =
- rte_zmalloc("intr_vec",
- dev->data->nb_rx_queues * sizeof(int), 0);
- if (!intr_handle->intr_vec) {
+ if (rte_intr_dp_is_en(intr_handle) &&
+ !rte_intr_handle_vec_list_base(intr_handle)) {
+ if (rte_intr_handle_vec_list_alloc(intr_handle, "intr_vec",
+ dev->data->nb_rx_queues)) {
PMD_INIT_LOG(ERR, "Failed to allocate %d rx_queues"
" intr_vec", dev->data->nb_rx_queues);
return -ENOMEM;
@@ -2243,7 +2245,7 @@ static int
i40evf_dev_stop(struct rte_eth_dev *dev)
{
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
struct i40e_hw *hw = I40E_DEV_PRIVATE_TO_HW(dev->data->dev_private);
struct i40e_vf *vf = I40EVF_DEV_PRIVATE_TO_VF(dev->data->dev_private);
@@ -2260,10 +2262,9 @@ i40evf_dev_stop(struct rte_eth_dev *dev)
/* Clean datapath event and queue/vec mapping */
rte_intr_efd_disable(intr_handle);
- if (intr_handle->intr_vec) {
- rte_free(intr_handle->intr_vec);
- intr_handle->intr_vec = NULL;
- }
+
+ if (rte_intr_handle_vec_list_base(intr_handle))
+ rte_intr_handle_vec_list_free(intr_handle);
/* remove all mac addrs */
i40evf_add_del_all_mac_addr(dev, FALSE);
/* remove all multicast addresses */
diff --git a/drivers/net/iavf/iavf_ethdev.c b/drivers/net/iavf/iavf_ethdev.c
index 574cfe055e..f768fd02b1 100644
--- a/drivers/net/iavf/iavf_ethdev.c
+++ b/drivers/net/iavf/iavf_ethdev.c
@@ -658,17 +658,17 @@ static int iavf_config_rx_queues_irqs(struct rte_eth_dev *dev,
return -1;
}
- if (rte_intr_dp_is_en(intr_handle) && !intr_handle->intr_vec) {
- intr_handle->intr_vec =
- rte_zmalloc("intr_vec",
- dev->data->nb_rx_queues * sizeof(int), 0);
- if (!intr_handle->intr_vec) {
+ if (rte_intr_dp_is_en(intr_handle) &&
+ !rte_intr_handle_vec_list_base(intr_handle)) {
+ if (rte_intr_handle_vec_list_alloc(intr_handle, "intr_vec",
+ dev->data->nb_rx_queues)) {
PMD_DRV_LOG(ERR, "Failed to allocate %d rx intr_vec",
dev->data->nb_rx_queues);
return -1;
}
}
+
qv_map = rte_zmalloc("qv_map",
dev->data->nb_rx_queues * sizeof(struct iavf_qv_map), 0);
if (!qv_map) {
@@ -728,7 +728,8 @@ static int iavf_config_rx_queues_irqs(struct rte_eth_dev *dev,
for (i = 0; i < dev->data->nb_rx_queues; i++) {
qv_map[i].queue_id = i;
qv_map[i].vector_id = vf->msix_base;
- intr_handle->intr_vec[i] = IAVF_MISC_VEC_ID;
+ rte_intr_handle_vec_list_index_set(intr_handle,
+ i, IAVF_MISC_VEC_ID);
}
vf->qv_map = qv_map;
PMD_DRV_LOG(DEBUG,
@@ -738,14 +739,16 @@ static int iavf_config_rx_queues_irqs(struct rte_eth_dev *dev,
/* If Rx interrupt is required, and we can use
* multi interrupts, then the vec is from 1
*/
- vf->nb_msix = RTE_MIN(intr_handle->nb_efd,
- (uint16_t)(vf->vf_res->max_vectors - 1));
+ vf->nb_msix =
+ RTE_MIN(rte_intr_handle_nb_efd_get(intr_handle),
+ (uint16_t)(vf->vf_res->max_vectors - 1));
vf->msix_base = IAVF_RX_VEC_START;
vec = IAVF_RX_VEC_START;
for (i = 0; i < dev->data->nb_rx_queues; i++) {
qv_map[i].queue_id = i;
qv_map[i].vector_id = vec;
- intr_handle->intr_vec[i] = vec++;
+ rte_intr_handle_vec_list_index_set(intr_handle,
+ i, vec++);
if (vec >= vf->nb_msix + IAVF_RX_VEC_START)
vec = IAVF_RX_VEC_START;
}
@@ -909,10 +912,8 @@ iavf_dev_stop(struct rte_eth_dev *dev)
/* Disable the interrupt for Rx */
rte_intr_efd_disable(intr_handle);
/* Rx interrupt vector mapping free */
- if (intr_handle->intr_vec) {
- rte_free(intr_handle->intr_vec);
- intr_handle->intr_vec = NULL;
- }
+ if (rte_intr_handle_vec_list_base(intr_handle))
+ rte_intr_handle_vec_list_free(intr_handle);
/* remove all mac addrs */
iavf_add_del_all_mac_addr(adapter, false);
@@ -1661,7 +1662,8 @@ iavf_dev_rx_queue_intr_enable(struct rte_eth_dev *dev, uint16_t queue_id)
struct iavf_hw *hw = IAVF_DEV_PRIVATE_TO_HW(adapter);
uint16_t msix_intr;
- msix_intr = pci_dev->intr_handle.intr_vec[queue_id];
+ msix_intr = rte_intr_handle_vec_list_index_get(pci_dev->intr_handle,
+ queue_id);
if (msix_intr == IAVF_MISC_VEC_ID) {
PMD_DRV_LOG(INFO, "MISC is also enabled for control");
IAVF_WRITE_REG(hw, IAVF_VFINT_DYN_CTL01,
@@ -1679,7 +1681,7 @@ iavf_dev_rx_queue_intr_enable(struct rte_eth_dev *dev, uint16_t queue_id)
IAVF_WRITE_FLUSH(hw);
- rte_intr_ack(&pci_dev->intr_handle);
+ rte_intr_ack(pci_dev->intr_handle);
return 0;
}
@@ -1691,7 +1693,8 @@ iavf_dev_rx_queue_intr_disable(struct rte_eth_dev *dev, uint16_t queue_id)
struct iavf_hw *hw = IAVF_DEV_PRIVATE_TO_HW(dev->data->dev_private);
uint16_t msix_intr;
- msix_intr = pci_dev->intr_handle.intr_vec[queue_id];
+ msix_intr = rte_intr_handle_vec_list_index_get(pci_dev->intr_handle,
+ queue_id);
if (msix_intr == IAVF_MISC_VEC_ID) {
PMD_DRV_LOG(ERR, "MISC is used for control, cannot disable it");
return -EIO;
@@ -2325,12 +2328,12 @@ iavf_dev_init(struct rte_eth_dev *eth_dev)
ð_dev->data->mac_addrs[0]);
/* register callback func to eal lib */
- rte_intr_callback_register(&pci_dev->intr_handle,
+ rte_intr_callback_register(pci_dev->intr_handle,
iavf_dev_interrupt_handler,
(void *)eth_dev);
/* enable uio intr after callback register */
- rte_intr_enable(&pci_dev->intr_handle);
+ rte_intr_enable(pci_dev->intr_handle);
/* configure and enable device interrupt */
iavf_enable_irq0(hw);
@@ -2351,7 +2354,7 @@ iavf_dev_close(struct rte_eth_dev *dev)
{
struct iavf_hw *hw = IAVF_DEV_PRIVATE_TO_HW(dev->data->dev_private);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
struct iavf_adapter *adapter =
IAVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
struct iavf_info *vf = IAVF_DEV_PRIVATE_TO_VF(dev->data->dev_private);
diff --git a/drivers/net/iavf/iavf_vchnl.c b/drivers/net/iavf/iavf_vchnl.c
index 06dc663947..13425f3005 100644
--- a/drivers/net/iavf/iavf_vchnl.c
+++ b/drivers/net/iavf/iavf_vchnl.c
@@ -1691,9 +1691,9 @@ iavf_request_queues(struct iavf_adapter *adapter, uint16_t num)
* disable interrupt to avoid the admin queue message to be read
* before iavf_read_msg_from_pf.
*/
- rte_intr_disable(&pci_dev->intr_handle);
+ rte_intr_disable(pci_dev->intr_handle);
err = iavf_execute_vf_cmd(adapter, &args);
- rte_intr_enable(&pci_dev->intr_handle);
+ rte_intr_enable(pci_dev->intr_handle);
if (err) {
PMD_DRV_LOG(ERR, "fail to execute command OP_REQUEST_QUEUES");
return err;
diff --git a/drivers/net/ice/ice_dcf.c b/drivers/net/ice/ice_dcf.c
index 4c2e0c7216..fc4111fe63 100644
--- a/drivers/net/ice/ice_dcf.c
+++ b/drivers/net/ice/ice_dcf.c
@@ -535,13 +535,13 @@ ice_dcf_handle_vsi_update_event(struct ice_dcf_hw *hw)
rte_spinlock_lock(&hw->vc_cmd_send_lock);
- rte_intr_disable(&pci_dev->intr_handle);
+ rte_intr_disable(pci_dev->intr_handle);
ice_dcf_disable_irq0(hw);
if (ice_dcf_get_vf_resource(hw) || ice_dcf_get_vf_vsi_map(hw) < 0)
err = -1;
- rte_intr_enable(&pci_dev->intr_handle);
+ rte_intr_enable(pci_dev->intr_handle);
ice_dcf_enable_irq0(hw);
rte_spinlock_unlock(&hw->vc_cmd_send_lock);
@@ -680,9 +680,9 @@ ice_dcf_init_hw(struct rte_eth_dev *eth_dev, struct ice_dcf_hw *hw)
}
hw->eth_dev = eth_dev;
- rte_intr_callback_register(&pci_dev->intr_handle,
+ rte_intr_callback_register(pci_dev->intr_handle,
ice_dcf_dev_interrupt_handler, hw);
- rte_intr_enable(&pci_dev->intr_handle);
+ rte_intr_enable(pci_dev->intr_handle);
ice_dcf_enable_irq0(hw);
return 0;
@@ -704,7 +704,7 @@ void
ice_dcf_uninit_hw(struct rte_eth_dev *eth_dev, struct ice_dcf_hw *hw)
{
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
if (hw->vf_res->vf_cap_flags & VIRTCHNL_VF_OFFLOAD_QOS)
if (hw->tm_conf.committed) {
diff --git a/drivers/net/ice/ice_dcf_ethdev.c b/drivers/net/ice/ice_dcf_ethdev.c
index cab7c4da87..2e091a0ec0 100644
--- a/drivers/net/ice/ice_dcf_ethdev.c
+++ b/drivers/net/ice/ice_dcf_ethdev.c
@@ -153,11 +153,10 @@ ice_dcf_config_rx_queues_irqs(struct rte_eth_dev *dev,
return -1;
}
- if (rte_intr_dp_is_en(intr_handle) && !intr_handle->intr_vec) {
- intr_handle->intr_vec =
- rte_zmalloc("intr_vec",
- dev->data->nb_rx_queues * sizeof(int), 0);
- if (!intr_handle->intr_vec) {
+ if (rte_intr_dp_is_en(intr_handle) &&
+ !rte_intr_handle_vec_list_base(intr_handle)) {
+ if (rte_intr_handle_vec_list_alloc(intr_handle, "intr_vec",
+ dev->data->nb_rx_queues)) {
PMD_DRV_LOG(ERR, "Failed to allocate %d rx intr_vec",
dev->data->nb_rx_queues);
return -1;
@@ -202,7 +201,8 @@ ice_dcf_config_rx_queues_irqs(struct rte_eth_dev *dev,
hw->msix_base = IAVF_MISC_VEC_ID;
for (i = 0; i < dev->data->nb_rx_queues; i++) {
hw->rxq_map[hw->msix_base] |= 1 << i;
- intr_handle->intr_vec[i] = IAVF_MISC_VEC_ID;
+ rte_intr_handle_vec_list_index_set(intr_handle,
+ i, IAVF_MISC_VEC_ID);
}
PMD_DRV_LOG(DEBUG,
"vector %u are mapping to all Rx queues",
@@ -212,12 +212,13 @@ ice_dcf_config_rx_queues_irqs(struct rte_eth_dev *dev,
* multi interrupts, then the vec is from 1
*/
hw->nb_msix = RTE_MIN(hw->vf_res->max_vectors,
- intr_handle->nb_efd);
+ rte_intr_handle_nb_efd_get(intr_handle));
hw->msix_base = IAVF_MISC_VEC_ID;
vec = IAVF_MISC_VEC_ID;
for (i = 0; i < dev->data->nb_rx_queues; i++) {
hw->rxq_map[vec] |= 1 << i;
- intr_handle->intr_vec[i] = vec++;
+ rte_intr_handle_vec_list_index_set(intr_handle,
+ i, vec++);
if (vec >= hw->nb_msix)
vec = IAVF_RX_VEC_START;
}
@@ -614,10 +615,8 @@ ice_dcf_dev_stop(struct rte_eth_dev *dev)
ice_dcf_stop_queues(dev);
rte_intr_efd_disable(intr_handle);
- if (intr_handle->intr_vec) {
- rte_free(intr_handle->intr_vec);
- intr_handle->intr_vec = NULL;
- }
+ if (rte_intr_handle_vec_list_base(intr_handle))
+ rte_intr_handle_vec_list_free(intr_handle);
ice_dcf_add_del_all_mac_addr(&dcf_ad->real_hw, false);
dev->data->dev_link.link_status = ETH_LINK_DOWN;
diff --git a/drivers/net/ice/ice_ethdev.c b/drivers/net/ice/ice_ethdev.c
index a4cd39c954..6c6caeb4aa 100644
--- a/drivers/net/ice/ice_ethdev.c
+++ b/drivers/net/ice/ice_ethdev.c
@@ -2013,7 +2013,7 @@ ice_dev_init(struct rte_eth_dev *dev)
ice_set_default_ptype_table(dev);
pci_dev = RTE_DEV_TO_PCI(dev->device);
- intr_handle = &pci_dev->intr_handle;
+ intr_handle = pci_dev->intr_handle;
pf->adapter = ICE_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
pf->dev_data = dev->data;
@@ -2204,7 +2204,7 @@ ice_vsi_disable_queues_intr(struct ice_vsi *vsi)
{
struct rte_eth_dev *dev = &rte_eth_devices[vsi->adapter->pf.dev_data->port_id];
struct rte_pci_device *pci_dev = ICE_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
struct ice_hw *hw = ICE_VSI_TO_HW(vsi);
uint16_t msix_intr, i;
@@ -2234,7 +2234,7 @@ ice_dev_stop(struct rte_eth_dev *dev)
struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
struct ice_vsi *main_vsi = pf->main_vsi;
struct rte_pci_device *pci_dev = ICE_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
uint16_t i;
/* avoid stopping again */
@@ -2259,10 +2259,8 @@ ice_dev_stop(struct rte_eth_dev *dev)
/* Clean datapath event and queue/vec mapping */
rte_intr_efd_disable(intr_handle);
- if (intr_handle->intr_vec) {
- rte_free(intr_handle->intr_vec);
- intr_handle->intr_vec = NULL;
- }
+ if (rte_intr_handle_vec_list_base(intr_handle))
+ rte_intr_handle_vec_list_free(intr_handle);
pf->adapter_stopped = true;
dev->data->dev_started = 0;
@@ -2276,7 +2274,7 @@ ice_dev_close(struct rte_eth_dev *dev)
struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
struct ice_hw *hw = ICE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
struct ice_adapter *ad =
ICE_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
int ret;
@@ -3167,10 +3165,11 @@ ice_vsi_queues_bind_intr(struct ice_vsi *vsi)
{
struct rte_eth_dev *dev = &rte_eth_devices[vsi->adapter->pf.dev_data->port_id];
struct rte_pci_device *pci_dev = ICE_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
struct ice_hw *hw = ICE_VSI_TO_HW(vsi);
uint16_t msix_vect = vsi->msix_intr;
- uint16_t nb_msix = RTE_MIN(vsi->nb_msix, intr_handle->nb_efd);
+ uint16_t nb_msix = RTE_MIN(vsi->nb_msix,
+ rte_intr_handle_nb_efd_get(intr_handle));
uint16_t queue_idx = 0;
int record = 0;
int i;
@@ -3198,8 +3197,9 @@ ice_vsi_queues_bind_intr(struct ice_vsi *vsi)
vsi->nb_used_qps - i);
for (; !!record && i < vsi->nb_used_qps; i++)
- intr_handle->intr_vec[queue_idx + i] =
- msix_vect;
+ rte_intr_handle_vec_list_index_set(intr_handle,
+ queue_idx + i, msix_vect);
+
break;
}
@@ -3208,7 +3208,9 @@ ice_vsi_queues_bind_intr(struct ice_vsi *vsi)
vsi->base_queue + i, 1);
if (!!record)
- intr_handle->intr_vec[queue_idx + i] = msix_vect;
+ rte_intr_handle_vec_list_index_set(intr_handle,
+ queue_idx + i,
+ msix_vect);
msix_vect++;
nb_msix--;
@@ -3220,7 +3222,7 @@ ice_vsi_enable_queues_intr(struct ice_vsi *vsi)
{
struct rte_eth_dev *dev = &rte_eth_devices[vsi->adapter->pf.dev_data->port_id];
struct rte_pci_device *pci_dev = ICE_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
struct ice_hw *hw = ICE_VSI_TO_HW(vsi);
uint16_t msix_intr, i;
@@ -3246,7 +3248,7 @@ ice_rxq_intr_setup(struct rte_eth_dev *dev)
{
struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
struct rte_pci_device *pci_dev = ICE_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
struct ice_vsi *vsi = pf->main_vsi;
uint32_t intr_vector = 0;
@@ -3266,11 +3268,10 @@ ice_rxq_intr_setup(struct rte_eth_dev *dev)
return -1;
}
- if (rte_intr_dp_is_en(intr_handle) && !intr_handle->intr_vec) {
- intr_handle->intr_vec =
- rte_zmalloc(NULL, dev->data->nb_rx_queues * sizeof(int),
- 0);
- if (!intr_handle->intr_vec) {
+ if (rte_intr_dp_is_en(intr_handle) &&
+ !rte_intr_handle_vec_list_base(intr_handle)) {
+ if (rte_intr_handle_vec_list_alloc(intr_handle, NULL,
+ dev->data->nb_rx_queues)) {
PMD_DRV_LOG(ERR,
"Failed to allocate %d rx_queues intr_vec",
dev->data->nb_rx_queues);
@@ -4539,19 +4540,19 @@ static int ice_rx_queue_intr_enable(struct rte_eth_dev *dev,
uint16_t queue_id)
{
struct rte_pci_device *pci_dev = ICE_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
struct ice_hw *hw = ICE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
uint32_t val;
uint16_t msix_intr;
- msix_intr = intr_handle->intr_vec[queue_id];
+ msix_intr = rte_intr_handle_vec_list_index_get(intr_handle, queue_id);
val = GLINT_DYN_CTL_INTENA_M | GLINT_DYN_CTL_CLEARPBA_M |
GLINT_DYN_CTL_ITR_INDX_M;
val &= ~GLINT_DYN_CTL_WB_ON_ITR_M;
ICE_WRITE_REG(hw, GLINT_DYN_CTL(msix_intr), val);
- rte_intr_ack(&pci_dev->intr_handle);
+ rte_intr_ack(pci_dev->intr_handle);
return 0;
}
@@ -4560,11 +4561,11 @@ static int ice_rx_queue_intr_disable(struct rte_eth_dev *dev,
uint16_t queue_id)
{
struct rte_pci_device *pci_dev = ICE_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
struct ice_hw *hw = ICE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
uint16_t msix_intr;
- msix_intr = intr_handle->intr_vec[queue_id];
+ msix_intr = rte_intr_handle_vec_list_index_get(intr_handle, queue_id);
ICE_WRITE_REG(hw, GLINT_DYN_CTL(msix_intr), GLINT_DYN_CTL_WB_ON_ITR_M);
diff --git a/drivers/net/igc/igc_ethdev.c b/drivers/net/igc/igc_ethdev.c
index 224a095483..86ac297ca3 100644
--- a/drivers/net/igc/igc_ethdev.c
+++ b/drivers/net/igc/igc_ethdev.c
@@ -384,7 +384,7 @@ igc_intr_other_disable(struct rte_eth_dev *dev)
{
struct igc_hw *hw = IGC_DEV_PRIVATE_HW(dev);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
if (rte_intr_allow_others(intr_handle) &&
dev->data->dev_conf.intr_conf.lsc) {
@@ -404,7 +404,7 @@ igc_intr_other_enable(struct rte_eth_dev *dev)
struct igc_interrupt *intr = IGC_DEV_PRIVATE_INTR(dev);
struct igc_hw *hw = IGC_DEV_PRIVATE_HW(dev);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
if (rte_intr_allow_others(intr_handle) &&
dev->data->dev_conf.intr_conf.lsc) {
@@ -616,7 +616,7 @@ eth_igc_stop(struct rte_eth_dev *dev)
struct igc_adapter *adapter = IGC_DEV_PRIVATE(dev);
struct igc_hw *hw = IGC_DEV_PRIVATE_HW(dev);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
struct rte_eth_link link;
dev->data->dev_started = 0;
@@ -668,10 +668,8 @@ eth_igc_stop(struct rte_eth_dev *dev)
/* Clean datapath event and queue/vec mapping */
rte_intr_efd_disable(intr_handle);
- if (intr_handle->intr_vec != NULL) {
- rte_free(intr_handle->intr_vec);
- intr_handle->intr_vec = NULL;
- }
+ if (rte_intr_handle_vec_list_base(intr_handle))
+ rte_intr_handle_vec_list_free(intr_handle);
return 0;
}
@@ -731,7 +729,7 @@ igc_configure_msix_intr(struct rte_eth_dev *dev)
{
struct igc_hw *hw = IGC_DEV_PRIVATE_HW(dev);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
uint32_t intr_mask;
uint32_t vec = IGC_MISC_VEC_ID;
@@ -755,8 +753,8 @@ igc_configure_msix_intr(struct rte_eth_dev *dev)
IGC_WRITE_REG(hw, IGC_GPIE, IGC_GPIE_MSIX_MODE |
IGC_GPIE_PBA | IGC_GPIE_EIAME |
IGC_GPIE_NSICR);
- intr_mask = RTE_LEN2MASK(intr_handle->nb_efd, uint32_t) <<
- misc_shift;
+ intr_mask = RTE_LEN2MASK(rte_intr_handle_nb_efd_get(intr_handle),
+ uint32_t) << misc_shift;
if (dev->data->dev_conf.intr_conf.lsc)
intr_mask |= (1u << IGC_MSIX_OTHER_INTR_VEC);
@@ -773,8 +771,8 @@ igc_configure_msix_intr(struct rte_eth_dev *dev)
for (i = 0; i < dev->data->nb_rx_queues; i++) {
igc_write_ivar(hw, i, 0, vec);
- intr_handle->intr_vec[i] = vec;
- if (vec < base + intr_handle->nb_efd - 1)
+ rte_intr_handle_vec_list_index_set(intr_handle, i, vec);
+ if (vec < base + rte_intr_handle_nb_efd_get(intr_handle) - 1)
vec++;
}
@@ -810,7 +808,7 @@ igc_rxq_interrupt_setup(struct rte_eth_dev *dev)
uint32_t mask;
struct igc_hw *hw = IGC_DEV_PRIVATE_HW(dev);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
int misc_shift = rte_intr_allow_others(intr_handle) ? 1 : 0;
/* won't configure msix register if no mapping is done
@@ -819,7 +817,8 @@ igc_rxq_interrupt_setup(struct rte_eth_dev *dev)
if (!rte_intr_dp_is_en(intr_handle))
return;
- mask = RTE_LEN2MASK(intr_handle->nb_efd, uint32_t) << misc_shift;
+ mask = RTE_LEN2MASK(rte_intr_handle_nb_efd_get(intr_handle), uint32_t)
+ << misc_shift;
IGC_WRITE_REG(hw, IGC_EIMS, mask);
}
@@ -913,7 +912,7 @@ eth_igc_start(struct rte_eth_dev *dev)
struct igc_hw *hw = IGC_DEV_PRIVATE_HW(dev);
struct igc_adapter *adapter = IGC_DEV_PRIVATE(dev);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
uint32_t *speeds;
int ret;
@@ -951,10 +950,10 @@ eth_igc_start(struct rte_eth_dev *dev)
return -1;
}
- if (rte_intr_dp_is_en(intr_handle) && !intr_handle->intr_vec) {
- intr_handle->intr_vec = rte_zmalloc("intr_vec",
- dev->data->nb_rx_queues * sizeof(int), 0);
- if (intr_handle->intr_vec == NULL) {
+ if (rte_intr_dp_is_en(intr_handle) &&
+ !rte_intr_handle_vec_list_base(intr_handle)) {
+ if (rte_intr_handle_vec_list_alloc(intr_handle, "intr_vec",
+ dev->data->nb_rx_queues)) {
PMD_DRV_LOG(ERR,
"Failed to allocate %d rx_queues intr_vec",
dev->data->nb_rx_queues);
@@ -1169,7 +1168,7 @@ static int
eth_igc_close(struct rte_eth_dev *dev)
{
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
struct igc_hw *hw = IGC_DEV_PRIVATE_HW(dev);
struct igc_adapter *adapter = IGC_DEV_PRIVATE(dev);
int retry = 0;
@@ -1339,11 +1338,11 @@ eth_igc_dev_init(struct rte_eth_dev *dev)
dev->data->port_id, pci_dev->id.vendor_id,
pci_dev->id.device_id);
- rte_intr_callback_register(&pci_dev->intr_handle,
+ rte_intr_callback_register(pci_dev->intr_handle,
eth_igc_interrupt_handler, (void *)dev);
/* enable uio/vfio intr/eventfd mapping */
- rte_intr_enable(&pci_dev->intr_handle);
+ rte_intr_enable(pci_dev->intr_handle);
/* enable support intr */
igc_intr_other_enable(dev);
@@ -2100,7 +2099,7 @@ eth_igc_rx_queue_intr_disable(struct rte_eth_dev *dev, uint16_t queue_id)
{
struct igc_hw *hw = IGC_DEV_PRIVATE_HW(dev);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
uint32_t vec = IGC_MISC_VEC_ID;
if (rte_intr_allow_others(intr_handle))
@@ -2119,7 +2118,7 @@ eth_igc_rx_queue_intr_enable(struct rte_eth_dev *dev, uint16_t queue_id)
{
struct igc_hw *hw = IGC_DEV_PRIVATE_HW(dev);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
uint32_t vec = IGC_MISC_VEC_ID;
if (rte_intr_allow_others(intr_handle))
diff --git a/drivers/net/ionic/ionic_ethdev.c b/drivers/net/ionic/ionic_ethdev.c
index e620793966..3076fe7eab 100644
--- a/drivers/net/ionic/ionic_ethdev.c
+++ b/drivers/net/ionic/ionic_ethdev.c
@@ -1071,7 +1071,7 @@ static int
ionic_configure_intr(struct ionic_adapter *adapter)
{
struct rte_pci_device *pci_dev = adapter->pci_dev;
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
int err;
IONIC_PRINT(DEBUG, "Configuring %u intrs", adapter->nintrs);
@@ -1085,11 +1085,9 @@ ionic_configure_intr(struct ionic_adapter *adapter)
IONIC_PRINT(DEBUG,
"Packet I/O interrupt on datapath is enabled");
- if (!intr_handle->intr_vec) {
- intr_handle->intr_vec = rte_zmalloc("intr_vec",
- adapter->nintrs * sizeof(int), 0);
-
- if (!intr_handle->intr_vec) {
+ if (!rte_intr_handle_vec_list_base(intr_handle)) {
+ if (rte_intr_handle_vec_list_alloc(intr_handle, "intr_vec",
+ adapter->nintrs)) {
IONIC_PRINT(ERR, "Failed to allocate %u vectors",
adapter->nintrs);
return -ENOMEM;
@@ -1122,7 +1120,7 @@ static void
ionic_unconfigure_intr(struct ionic_adapter *adapter)
{
struct rte_pci_device *pci_dev = adapter->pci_dev;
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
rte_intr_disable(intr_handle);
diff --git a/drivers/net/ixgbe/ixgbe_ethdev.c b/drivers/net/ixgbe/ixgbe_ethdev.c
index b5371568b5..48ee463e7d 100644
--- a/drivers/net/ixgbe/ixgbe_ethdev.c
+++ b/drivers/net/ixgbe/ixgbe_ethdev.c
@@ -1034,7 +1034,7 @@ eth_ixgbe_dev_init(struct rte_eth_dev *eth_dev, void *init_params __rte_unused)
{
struct ixgbe_adapter *ad = eth_dev->data->dev_private;
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
struct ixgbe_hw *hw =
IXGBE_DEV_PRIVATE_TO_HW(eth_dev->data->dev_private);
struct ixgbe_vfta *shadow_vfta =
@@ -1529,7 +1529,7 @@ eth_ixgbevf_dev_init(struct rte_eth_dev *eth_dev)
uint32_t tc, tcs;
struct ixgbe_adapter *ad = eth_dev->data->dev_private;
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
struct ixgbe_hw *hw =
IXGBE_DEV_PRIVATE_TO_HW(eth_dev->data->dev_private);
struct ixgbe_vfta *shadow_vfta =
@@ -2548,7 +2548,7 @@ ixgbe_dev_start(struct rte_eth_dev *dev)
struct ixgbe_vf_info *vfinfo =
*IXGBE_DEV_PRIVATE_TO_P_VFDATA(dev->data->dev_private);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
uint32_t intr_vector = 0;
int err;
bool link_up = false, negotiate = 0;
@@ -2603,11 +2603,10 @@ ixgbe_dev_start(struct rte_eth_dev *dev)
return -1;
}
- if (rte_intr_dp_is_en(intr_handle) && !intr_handle->intr_vec) {
- intr_handle->intr_vec =
- rte_zmalloc("intr_vec",
- dev->data->nb_rx_queues * sizeof(int), 0);
- if (intr_handle->intr_vec == NULL) {
+ if (rte_intr_dp_is_en(intr_handle) &&
+ !rte_intr_handle_vec_list_base(intr_handle)) {
+ if (rte_intr_handle_vec_list_alloc(intr_handle, "intr_vec",
+ dev->data->nb_rx_queues)) {
PMD_INIT_LOG(ERR, "Failed to allocate %d rx_queues"
" intr_vec", dev->data->nb_rx_queues);
return -ENOMEM;
@@ -2843,7 +2842,7 @@ ixgbe_dev_stop(struct rte_eth_dev *dev)
struct ixgbe_vf_info *vfinfo =
*IXGBE_DEV_PRIVATE_TO_P_VFDATA(dev->data->dev_private);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
int vf;
struct ixgbe_tm_conf *tm_conf =
IXGBE_DEV_PRIVATE_TO_TM_CONF(dev->data->dev_private);
@@ -2894,10 +2893,8 @@ ixgbe_dev_stop(struct rte_eth_dev *dev)
/* Clean datapath event and queue/vec mapping */
rte_intr_efd_disable(intr_handle);
- if (intr_handle->intr_vec != NULL) {
- rte_free(intr_handle->intr_vec);
- intr_handle->intr_vec = NULL;
- }
+ if (rte_intr_handle_vec_list_base(intr_handle))
+ rte_intr_handle_vec_list_free(intr_handle);
/* reset hierarchy commit */
tm_conf->committed = false;
@@ -2981,7 +2978,7 @@ ixgbe_dev_close(struct rte_eth_dev *dev)
struct ixgbe_hw *hw =
IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
int retries = 0;
int ret;
@@ -4626,7 +4623,7 @@ ixgbe_dev_interrupt_delayed_handler(void *param)
{
struct rte_eth_dev *dev = (struct rte_eth_dev *)param;
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
struct ixgbe_interrupt *intr =
IXGBE_DEV_PRIVATE_TO_INTR(dev->data->dev_private);
struct ixgbe_hw *hw =
@@ -5307,7 +5304,7 @@ ixgbevf_dev_start(struct rte_eth_dev *dev)
IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
uint32_t intr_vector = 0;
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
int err, mask = 0;
@@ -5368,11 +5365,10 @@ ixgbevf_dev_start(struct rte_eth_dev *dev)
return -1;
}
- if (rte_intr_dp_is_en(intr_handle) && !intr_handle->intr_vec) {
- intr_handle->intr_vec =
- rte_zmalloc("intr_vec",
- dev->data->nb_rx_queues * sizeof(int), 0);
- if (intr_handle->intr_vec == NULL) {
+ if (rte_intr_dp_is_en(intr_handle) &&
+ !rte_intr_handle_vec_list_base(intr_handle)) {
+ if (rte_intr_handle_vec_list_alloc(intr_handle, "intr_vec",
+ dev->data->nb_rx_queues)) {
PMD_INIT_LOG(ERR, "Failed to allocate %d rx_queues"
" intr_vec", dev->data->nb_rx_queues);
return -ENOMEM;
@@ -5411,7 +5407,7 @@ ixgbevf_dev_stop(struct rte_eth_dev *dev)
struct ixgbe_hw *hw = IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
struct ixgbe_adapter *adapter = dev->data->dev_private;
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
if (hw->adapter_stopped)
return 0;
@@ -5439,10 +5435,8 @@ ixgbevf_dev_stop(struct rte_eth_dev *dev)
/* Clean datapath event and queue/vec mapping */
rte_intr_efd_disable(intr_handle);
- if (intr_handle->intr_vec != NULL) {
- rte_free(intr_handle->intr_vec);
- intr_handle->intr_vec = NULL;
- }
+ if (rte_intr_handle_vec_list_base(intr_handle))
+ rte_intr_handle_vec_list_free(intr_handle);
adapter->rss_reta_updated = 0;
@@ -5454,7 +5448,7 @@ ixgbevf_dev_close(struct rte_eth_dev *dev)
{
struct ixgbe_hw *hw = IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
int ret;
PMD_INIT_FUNC_TRACE();
@@ -5937,7 +5931,7 @@ static int
ixgbevf_dev_rx_queue_intr_enable(struct rte_eth_dev *dev, uint16_t queue_id)
{
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
struct ixgbe_interrupt *intr =
IXGBE_DEV_PRIVATE_TO_INTR(dev->data->dev_private);
struct ixgbe_hw *hw =
@@ -5963,7 +5957,7 @@ ixgbevf_dev_rx_queue_intr_disable(struct rte_eth_dev *dev, uint16_t queue_id)
struct ixgbe_hw *hw =
IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
uint32_t vec = IXGBE_MISC_VEC_ID;
if (rte_intr_allow_others(intr_handle))
@@ -5979,7 +5973,7 @@ static int
ixgbe_dev_rx_queue_intr_enable(struct rte_eth_dev *dev, uint16_t queue_id)
{
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
uint32_t mask;
struct ixgbe_hw *hw =
IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
@@ -6106,7 +6100,7 @@ static void
ixgbevf_configure_msix(struct rte_eth_dev *dev)
{
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
struct ixgbe_hw *hw =
IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
uint32_t q_idx;
@@ -6133,8 +6127,10 @@ ixgbevf_configure_msix(struct rte_eth_dev *dev)
* as IXGBE_VF_MAXMSIVECOTR = 1
*/
ixgbevf_set_ivar_map(hw, 0, q_idx, vector_idx);
- intr_handle->intr_vec[q_idx] = vector_idx;
- if (vector_idx < base + intr_handle->nb_efd - 1)
+ rte_intr_handle_vec_list_index_set(intr_handle, q_idx,
+ vector_idx);
+ if (vector_idx < base + rte_intr_handle_nb_efd_get(intr_handle)
+ - 1)
vector_idx++;
}
@@ -6155,7 +6151,7 @@ static void
ixgbe_configure_msix(struct rte_eth_dev *dev)
{
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
struct ixgbe_hw *hw =
IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
uint32_t queue_id, base = IXGBE_MISC_VEC_ID;
@@ -6199,8 +6195,10 @@ ixgbe_configure_msix(struct rte_eth_dev *dev)
queue_id++) {
/* by default, 1:1 mapping */
ixgbe_set_ivar_map(hw, 0, queue_id, vec);
- intr_handle->intr_vec[queue_id] = vec;
- if (vec < base + intr_handle->nb_efd - 1)
+ rte_intr_handle_vec_list_index_set(intr_handle,
+ queue_id, vec);
+ if (vec < base + rte_intr_handle_nb_efd_get(intr_handle)
+ - 1)
vec++;
}
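
For reference, the conversions above (igc, ionic, ixgbe) all follow the same pattern: the driver no longer touches intr_handle->intr_vec directly but goes through the vector-list helpers introduced earlier in this series. A minimal sketch of that pattern, assuming the prototypes used throughout this diff (rte_intr_handle_vec_list_alloc/base/free); the example_* names are hypothetical:

/* Sketch only: Rx interrupt vector setup/teardown with the new accessors. */
static int
example_rxq_intr_vec_setup(struct rte_intr_handle *intr_handle,
			   uint16_t nb_rx_queues)
{
	if (rte_intr_dp_is_en(intr_handle) &&
	    !rte_intr_handle_vec_list_base(intr_handle)) {
		/* Replaces the old rte_zmalloc() of intr_handle->intr_vec. */
		if (rte_intr_handle_vec_list_alloc(intr_handle, "intr_vec",
						   nb_rx_queues))
			return -ENOMEM;
	}
	return 0;
}

static void
example_rxq_intr_vec_teardown(struct rte_intr_handle *intr_handle)
{
	rte_intr_efd_disable(intr_handle);
	/* Replaces the rte_free() + NULL reset of intr_handle->intr_vec. */
	if (rte_intr_handle_vec_list_base(intr_handle))
		rte_intr_handle_vec_list_free(intr_handle);
}
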
diff --git a/drivers/net/memif/memif_socket.c b/drivers/net/memif/memif_socket.c
index f58ff4c0cb..4d6c5ad1b8 100644
--- a/drivers/net/memif/memif_socket.c
+++ b/drivers/net/memif/memif_socket.c
@@ -65,7 +65,8 @@ memif_msg_send_from_queue(struct memif_control_channel *cc)
if (e == NULL)
return 0;
- size = memif_msg_send(cc->intr_handle.fd, &e->msg, e->fd);
+ size = memif_msg_send(rte_intr_handle_fd_get(cc->intr_handle), &e->msg,
+ e->fd);
if (size != sizeof(memif_msg_t)) {
MIF_LOG(ERR, "sendmsg fail: %s.", strerror(errno));
ret = -1;
@@ -317,7 +318,9 @@ memif_msg_receive_add_ring(struct rte_eth_dev *dev, memif_msg_t *msg, int fd)
mq = (ar->flags & MEMIF_MSG_ADD_RING_FLAG_C2S) ?
dev->data->rx_queues[ar->index] : dev->data->tx_queues[ar->index];
- mq->intr_handle.fd = fd;
+ if (rte_intr_handle_fd_set(mq->intr_handle, fd))
+ return -1;
+
mq->log2_ring_size = ar->log2_ring_size;
mq->region = ar->region;
mq->ring_offset = ar->offset;
@@ -453,7 +456,7 @@ memif_msg_enq_add_ring(struct rte_eth_dev *dev, uint8_t idx,
dev->data->rx_queues[idx];
e->msg.type = MEMIF_MSG_TYPE_ADD_RING;
- e->fd = mq->intr_handle.fd;
+ e->fd = rte_intr_handle_fd_get(mq->intr_handle);
ar->index = idx;
ar->offset = mq->ring_offset;
ar->region = mq->region;
@@ -505,12 +508,13 @@ memif_intr_unregister_handler(struct rte_intr_handle *intr_handle, void *arg)
struct memif_control_channel *cc = arg;
/* close control channel fd */
- close(intr_handle->fd);
+ close(rte_intr_handle_fd_get(intr_handle));
/* clear message queue */
while ((elt = TAILQ_FIRST(&cc->msg_queue)) != NULL) {
TAILQ_REMOVE(&cc->msg_queue, elt, next);
rte_free(elt);
}
+ rte_intr_handle_instance_free(cc->intr_handle);
/* free control channel */
rte_free(cc);
}
@@ -548,8 +552,8 @@ memif_disconnect(struct rte_eth_dev *dev)
"Unexpected message(s) in message queue.");
}
- ih = &pmd->cc->intr_handle;
- if (ih->fd > 0) {
+ ih = pmd->cc->intr_handle;
+ if (rte_intr_handle_fd_get(ih) > 0) {
ret = rte_intr_callback_unregister(ih,
memif_intr_handler,
pmd->cc);
@@ -563,7 +567,8 @@ memif_disconnect(struct rte_eth_dev *dev)
pmd->cc,
memif_intr_unregister_handler);
} else if (ret > 0) {
- close(ih->fd);
+ close(rte_intr_handle_fd_get(ih));
+ rte_intr_handle_instance_free(ih);
rte_free(pmd->cc);
}
pmd->cc = NULL;
@@ -587,9 +592,10 @@ memif_disconnect(struct rte_eth_dev *dev)
else
continue;
}
- if (mq->intr_handle.fd > 0) {
- close(mq->intr_handle.fd);
- mq->intr_handle.fd = -1;
+
+ if (rte_intr_handle_fd_get(mq->intr_handle) > 0) {
+ close(rte_intr_handle_fd_get(mq->intr_handle));
+ rte_intr_handle_fd_set(mq->intr_handle, -1);
}
}
for (i = 0; i < pmd->cfg.num_s2c_rings; i++) {
@@ -604,9 +610,10 @@ memif_disconnect(struct rte_eth_dev *dev)
else
continue;
}
- if (mq->intr_handle.fd > 0) {
- close(mq->intr_handle.fd);
- mq->intr_handle.fd = -1;
+
+ if (rte_intr_handle_fd_get(mq->intr_handle) > 0) {
+ close(rte_intr_handle_fd_get(mq->intr_handle));
+ rte_intr_handle_fd_set(mq->intr_handle, -1);
}
}
@@ -644,7 +651,7 @@ memif_msg_receive(struct memif_control_channel *cc)
mh.msg_control = ctl;
mh.msg_controllen = sizeof(ctl);
- size = recvmsg(cc->intr_handle.fd, &mh, 0);
+ size = recvmsg(rte_intr_handle_fd_get(cc->intr_handle), &mh, 0);
if (size != sizeof(memif_msg_t)) {
MIF_LOG(DEBUG, "Invalid message size = %zd", size);
if (size > 0)
@@ -774,7 +781,7 @@ memif_intr_handler(void *arg)
/* if driver failed to assign device */
if (cc->dev == NULL) {
memif_msg_send_from_queue(cc);
- ret = rte_intr_callback_unregister_pending(&cc->intr_handle,
+ ret = rte_intr_callback_unregister_pending(cc->intr_handle,
memif_intr_handler,
cc,
memif_intr_unregister_handler);
@@ -812,12 +819,12 @@ memif_listener_handler(void *arg)
int ret;
addr_len = sizeof(client);
- sockfd = accept(socket->intr_handle.fd, (struct sockaddr *)&client,
- (socklen_t *)&addr_len);
+ sockfd = accept(rte_intr_handle_fd_get(socket->intr_handle),
+ (struct sockaddr *)&client, (socklen_t *)&addr_len);
if (sockfd < 0) {
MIF_LOG(ERR,
"Failed to accept connection request on socket fd %d",
- socket->intr_handle.fd);
+ rte_intr_handle_fd_get(socket->intr_handle));
return;
}
@@ -829,13 +836,18 @@ memif_listener_handler(void *arg)
goto error;
}
- cc->intr_handle.fd = sockfd;
- cc->intr_handle.type = RTE_INTR_HANDLE_EXT;
+ if (rte_intr_handle_fd_set(cc->intr_handle, sockfd))
+ goto error;
+
+ if (rte_intr_handle_type_set(cc->intr_handle, RTE_INTR_HANDLE_EXT))
+ goto error;
+
cc->socket = socket;
cc->dev = NULL;
TAILQ_INIT(&cc->msg_queue);
- ret = rte_intr_callback_register(&cc->intr_handle, memif_intr_handler, cc);
+ ret = rte_intr_callback_register(cc->intr_handle, memif_intr_handler,
+ cc);
if (ret < 0) {
MIF_LOG(ERR, "Failed to register control channel callback.");
goto error;
@@ -914,9 +926,23 @@ memif_socket_create(char *key, uint8_t listener, bool is_abstract)
MIF_LOG(DEBUG, "Memif listener socket %s created.", sock->filename);
- sock->intr_handle.fd = sockfd;
- sock->intr_handle.type = RTE_INTR_HANDLE_EXT;
- ret = rte_intr_callback_register(&sock->intr_handle,
+ /* Allocate interrupt instance */
+ sock->intr_handle =
+ rte_intr_handle_instance_alloc(RTE_INTR_HANDLE_DEFAULT_SIZE,
+ true);
+ if (!sock->intr_handle) {
+ MIF_LOG(ERR, "Failed to allocate intr handle");
+ goto error;
+ }
+
+ if (rte_intr_handle_fd_set(sock->intr_handle, sockfd))
+ goto error;
+
+ if (rte_intr_handle_type_set(sock->intr_handle,
+ RTE_INTR_HANDLE_EXT))
+ goto error;
+
+ ret = rte_intr_callback_register(sock->intr_handle,
memif_listener_handler, sock);
if (ret < 0) {
MIF_LOG(ERR, "Failed to register interrupt "
@@ -929,8 +955,10 @@ memif_socket_create(char *key, uint8_t listener, bool is_abstract)
error:
MIF_LOG(ERR, "Failed to setup socket %s: %s", key, strerror(errno));
- if (sock != NULL)
+ if (sock != NULL) {
+ rte_intr_handle_instance_free(sock->intr_handle);
rte_free(sock);
+ }
if (sockfd >= 0)
close(sockfd);
return NULL;
@@ -1046,6 +1074,7 @@ memif_socket_remove_device(struct rte_eth_dev *dev)
MIF_LOG(ERR, "Failed to remove socket file: %s",
socket->filename);
}
+ rte_intr_handle_instance_free(socket->intr_handle);
rte_free(socket);
}
}
@@ -1108,13 +1137,26 @@ memif_connect_client(struct rte_eth_dev *dev)
goto error;
}
- pmd->cc->intr_handle.fd = sockfd;
- pmd->cc->intr_handle.type = RTE_INTR_HANDLE_EXT;
+ /* Allocate interrupt instance */
+ pmd->cc->intr_handle =
+ rte_intr_handle_instance_alloc(RTE_INTR_HANDLE_DEFAULT_SIZE,
+ true);
+ if (!pmd->cc->intr_handle) {
+ MIF_LOG(ERR, "Failed to allocate intr handle");
+ goto error;
+ }
+
+ if (rte_intr_handle_fd_set(pmd->cc->intr_handle, sockfd))
+ goto error;
+
+ if (rte_intr_handle_type_set(pmd->cc->intr_handle, RTE_INTR_HANDLE_EXT))
+ goto error;
+
pmd->cc->socket = NULL;
pmd->cc->dev = dev;
TAILQ_INIT(&pmd->cc->msg_queue);
- ret = rte_intr_callback_register(&pmd->cc->intr_handle,
+ ret = rte_intr_callback_register(pmd->cc->intr_handle,
memif_intr_handler, pmd->cc);
if (ret < 0) {
MIF_LOG(ERR, "Failed to register interrupt callback for control fd");
@@ -1129,6 +1171,7 @@ memif_connect_client(struct rte_eth_dev *dev)
sockfd = -1;
}
if (pmd->cc != NULL) {
+ rte_intr_handle_instance_free(pmd->cc->intr_handle);
rte_free(pmd->cc);
pmd->cc = NULL;
}
diff --git a/drivers/net/memif/memif_socket.h b/drivers/net/memif/memif_socket.h
index b9b8a15178..b0decbb0a2 100644
--- a/drivers/net/memif/memif_socket.h
+++ b/drivers/net/memif/memif_socket.h
@@ -85,7 +85,7 @@ struct memif_socket_dev_list_elt {
(sizeof(struct sockaddr_un) - offsetof(struct sockaddr_un, sun_path))
struct memif_socket {
- struct rte_intr_handle intr_handle; /**< interrupt handle */
+ struct rte_intr_handle *intr_handle; /**< interrupt handle */
char filename[MEMIF_SOCKET_UN_SIZE]; /**< socket filename */
TAILQ_HEAD(, memif_socket_dev_list_elt) dev_queue;
@@ -101,7 +101,7 @@ struct memif_msg_queue_elt {
};
struct memif_control_channel {
- struct rte_intr_handle intr_handle; /**< interrupt handle */
+ struct rte_intr_handle *intr_handle; /**< interrupt handle */
TAILQ_HEAD(, memif_msg_queue_elt) msg_queue; /**< control message queue */
struct memif_socket *socket; /**< pointer to socket */
struct rte_eth_dev *dev; /**< pointer to device */
diff --git a/drivers/net/memif/rte_eth_memif.c b/drivers/net/memif/rte_eth_memif.c
index de6becd45e..38fd93d2a7 100644
--- a/drivers/net/memif/rte_eth_memif.c
+++ b/drivers/net/memif/rte_eth_memif.c
@@ -325,7 +325,8 @@ eth_memif_rx(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
/* consume interrupt */
if ((ring->flags & MEMIF_RING_FLAG_MASK_INT) == 0)
- size = read(mq->intr_handle.fd, &b, sizeof(b));
+ size = read(rte_intr_handle_fd_get(mq->intr_handle), &b,
+ sizeof(b));
ring_size = 1 << mq->log2_ring_size;
mask = ring_size - 1;
@@ -461,7 +462,8 @@ eth_memif_rx_zc(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
if ((ring->flags & MEMIF_RING_FLAG_MASK_INT) == 0) {
uint64_t b;
ssize_t size __rte_unused;
- size = read(mq->intr_handle.fd, &b, sizeof(b));
+ size = read(rte_intr_handle_fd_get(mq->intr_handle), &b,
+ sizeof(b));
}
ring_size = 1 << mq->log2_ring_size;
@@ -678,7 +680,8 @@ eth_memif_tx(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
if ((ring->flags & MEMIF_RING_FLAG_MASK_INT) == 0) {
a = 1;
- size = write(mq->intr_handle.fd, &a, sizeof(a));
+ size = write(rte_intr_handle_fd_get(mq->intr_handle), &a,
+ sizeof(a));
if (unlikely(size < 0)) {
MIF_LOG(WARNING,
"Failed to send interrupt. %s", strerror(errno));
@@ -829,7 +832,8 @@ eth_memif_tx_zc(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
/* Send interrupt, if enabled. */
if ((ring->flags & MEMIF_RING_FLAG_MASK_INT) == 0) {
uint64_t a = 1;
- ssize_t size = write(mq->intr_handle.fd, &a, sizeof(a));
+ ssize_t size = write(rte_intr_handle_fd_get(mq->intr_handle),
+ &a, sizeof(a));
if (unlikely(size < 0)) {
MIF_LOG(WARNING,
"Failed to send interrupt. %s", strerror(errno));
@@ -1089,8 +1093,11 @@ memif_init_queues(struct rte_eth_dev *dev)
mq->ring_offset = memif_get_ring_offset(dev, mq, MEMIF_RING_C2S, i);
mq->last_head = 0;
mq->last_tail = 0;
- mq->intr_handle.fd = eventfd(0, EFD_NONBLOCK);
- if (mq->intr_handle.fd < 0) {
+ if (rte_intr_handle_fd_set(mq->intr_handle,
+ eventfd(0, EFD_NONBLOCK)))
+ return -rte_errno;
+
+ if (rte_intr_handle_fd_get(mq->intr_handle) < 0) {
MIF_LOG(WARNING,
"Failed to create eventfd for tx queue %d: %s.", i,
strerror(errno));
@@ -1112,8 +1119,11 @@ memif_init_queues(struct rte_eth_dev *dev)
mq->ring_offset = memif_get_ring_offset(dev, mq, MEMIF_RING_S2C, i);
mq->last_head = 0;
mq->last_tail = 0;
- mq->intr_handle.fd = eventfd(0, EFD_NONBLOCK);
- if (mq->intr_handle.fd < 0) {
+ if (rte_intr_handle_fd_set(mq->intr_handle,
+ eventfd(0, EFD_NONBLOCK)))
+ return -rte_errno;
+
+ if (rte_intr_handle_fd_get(mq->intr_handle) < 0) {
MIF_LOG(WARNING,
"Failed to create eventfd for rx queue %d: %s.", i,
strerror(errno));
@@ -1307,12 +1317,26 @@ memif_tx_queue_setup(struct rte_eth_dev *dev,
return -ENOMEM;
}
+ /* Allocate interrupt instance */
+ mq->intr_handle =
+ rte_intr_handle_instance_alloc(RTE_INTR_HANDLE_DEFAULT_SIZE,
+ true);
+ if (!mq->intr_handle) {
+ MIF_LOG(ERR, "Failed to allocate intr handle");
+ return -ENOMEM;
+ }
+
mq->type =
(pmd->role == MEMIF_ROLE_CLIENT) ? MEMIF_RING_C2S : MEMIF_RING_S2C;
mq->n_pkts = 0;
mq->n_bytes = 0;
- mq->intr_handle.fd = -1;
- mq->intr_handle.type = RTE_INTR_HANDLE_EXT;
+
+ if (rte_intr_handle_fd_set(mq->intr_handle, -1))
+ return -rte_errno;
+
+ if (rte_intr_handle_type_set(mq->intr_handle, RTE_INTR_HANDLE_EXT))
+ return -rte_errno;
+
mq->in_port = dev->data->port_id;
dev->data->tx_queues[qid] = mq;
@@ -1336,11 +1360,25 @@ memif_rx_queue_setup(struct rte_eth_dev *dev,
return -ENOMEM;
}
+ /* Allocate interrupt instance */
+ mq->intr_handle =
+ rte_intr_handle_instance_alloc(RTE_INTR_HANDLE_DEFAULT_SIZE,
+ true);
+ if (!mq->intr_handle) {
+ MIF_LOG(ERR, "Failed to allocate intr handle");
+ return -ENOMEM;
+ }
+
mq->type = (pmd->role == MEMIF_ROLE_CLIENT) ? MEMIF_RING_S2C : MEMIF_RING_C2S;
mq->n_pkts = 0;
mq->n_bytes = 0;
- mq->intr_handle.fd = -1;
- mq->intr_handle.type = RTE_INTR_HANDLE_EXT;
+
+ if (rte_intr_handle_fd_set(mq->intr_handle, -1))
+ return -rte_errno;
+
+ if (rte_intr_handle_type_set(mq->intr_handle, RTE_INTR_HANDLE_EXT))
+ return -rte_errno;
+
mq->mempool = mb_pool;
mq->in_port = dev->data->port_id;
dev->data->rx_queues[qid] = mq;
@@ -1356,6 +1394,7 @@ memif_queue_release(void *queue)
if (!mq)
return;
+ rte_intr_handle_instance_free(mq->intr_handle);
rte_free(mq);
}
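
The memif changes above show the other half of the conversion: queues and control channels that used to embed struct rte_intr_handle by value now allocate an instance before any fd/type setters can be used, and release it together with the owning object. A condensed sketch of the per-queue lifetime, using the allocation helpers from this series (example_* names hypothetical, error handling illustrative only):

static int
example_queue_intr_setup(struct memif_queue *mq)
{
	/* One instance per queue instead of an embedded struct. */
	mq->intr_handle = rte_intr_handle_instance_alloc(
				RTE_INTR_HANDLE_DEFAULT_SIZE, true);
	if (mq->intr_handle == NULL)
		return -ENOMEM;

	if (rte_intr_handle_fd_set(mq->intr_handle, -1) ||
	    rte_intr_handle_type_set(mq->intr_handle, RTE_INTR_HANDLE_EXT)) {
		/* Sketch: release on failure so the caller sees no leak. */
		rte_intr_handle_instance_free(mq->intr_handle);
		mq->intr_handle = NULL;
		return -rte_errno;
	}
	return 0;
}

static void
example_queue_intr_release(struct memif_queue *mq)
{
	rte_intr_handle_instance_free(mq->intr_handle);
}
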
diff --git a/drivers/net/memif/rte_eth_memif.h b/drivers/net/memif/rte_eth_memif.h
index 2038bda742..a5ee23d42e 100644
--- a/drivers/net/memif/rte_eth_memif.h
+++ b/drivers/net/memif/rte_eth_memif.h
@@ -68,7 +68,7 @@ struct memif_queue {
uint64_t n_pkts; /**< number of rx/tx packets */
uint64_t n_bytes; /**< number of rx/tx bytes */
- struct rte_intr_handle intr_handle; /**< interrupt handle */
+ struct rte_intr_handle *intr_handle; /**< interrupt handle */
memif_log2_ring_size_t log2_ring_size; /**< log2 of ring size */
};
diff --git a/drivers/net/mlx4/mlx4.c b/drivers/net/mlx4/mlx4.c
index c522157a0a..8d32694613 100644
--- a/drivers/net/mlx4/mlx4.c
+++ b/drivers/net/mlx4/mlx4.c
@@ -1045,9 +1045,20 @@ mlx4_pci_probe(struct rte_pci_driver *pci_drv, struct rte_pci_device *pci_dev)
rte_eth_copy_pci_info(eth_dev, pci_dev);
eth_dev->data->dev_flags |= RTE_ETH_DEV_AUTOFILL_QUEUE_XSTATS;
/* Initialize local interrupt handle for current port. */
- memset(&priv->intr_handle, 0, sizeof(struct rte_intr_handle));
- priv->intr_handle.fd = -1;
- priv->intr_handle.type = RTE_INTR_HANDLE_EXT;
+ priv->intr_handle =
+ rte_intr_handle_instance_alloc(RTE_INTR_HANDLE_DEFAULT_SIZE,
+ true);
+ if (!priv->intr_handle) {
+ RTE_LOG(ERR, EAL, "Fail to allocate intr_handle\n");
+ goto port_error;
+ }
+
+ if (rte_intr_handle_fd_set(priv->intr_handle, -1))
+ goto port_error;
+
+ if (rte_intr_handle_type_set(priv->intr_handle,
+ RTE_INTR_HANDLE_EXT))
+ goto port_error;
/*
* Override ethdev interrupt handle pointer with private
* handle instead of that of the parent PCI device used by
@@ -1060,7 +1071,7 @@ mlx4_pci_probe(struct rte_pci_driver *pci_drv, struct rte_pci_device *pci_dev)
* besides setting up eth_dev->intr_handle, the rest is
* handled by rte_intr_rx_ctl().
*/
- eth_dev->intr_handle = &priv->intr_handle;
+ eth_dev->intr_handle = priv->intr_handle;
priv->dev_data = eth_dev->data;
eth_dev->dev_ops = &mlx4_dev_ops;
#ifdef HAVE_IBV_MLX4_BUF_ALLOCATORS
@@ -1105,6 +1116,7 @@ mlx4_pci_probe(struct rte_pci_driver *pci_drv, struct rte_pci_device *pci_dev)
prev_dev = eth_dev;
continue;
port_error:
+ rte_intr_handle_instance_free(priv->intr_handle);
rte_free(priv);
if (eth_dev != NULL)
eth_dev->data->dev_private = NULL;
diff --git a/drivers/net/mlx4/mlx4.h b/drivers/net/mlx4/mlx4.h
index e07b1d2386..2d0c512f79 100644
--- a/drivers/net/mlx4/mlx4.h
+++ b/drivers/net/mlx4/mlx4.h
@@ -176,7 +176,7 @@ struct mlx4_priv {
uint32_t tso_max_payload_sz; /**< Max supported TSO payload size. */
uint32_t hw_rss_max_qps; /**< Max Rx Queues supported by RSS. */
uint64_t hw_rss_sup; /**< Supported RSS hash fields (Verbs format). */
- struct rte_intr_handle intr_handle; /**< Port interrupt handle. */
+ struct rte_intr_handle *intr_handle; /**< Port interrupt handle. */
struct mlx4_drop *drop; /**< Shared resources for drop flow rules. */
struct {
uint32_t dev_gen; /* Generation number to flush local caches. */
diff --git a/drivers/net/mlx4/mlx4_intr.c b/drivers/net/mlx4/mlx4_intr.c
index d56009c418..1e28b8e4b2 100644
--- a/drivers/net/mlx4/mlx4_intr.c
+++ b/drivers/net/mlx4/mlx4_intr.c
@@ -43,12 +43,13 @@ static int mlx4_link_status_check(struct mlx4_priv *priv);
static void
mlx4_rx_intr_vec_disable(struct mlx4_priv *priv)
{
- struct rte_intr_handle *intr_handle = &priv->intr_handle;
+ struct rte_intr_handle *intr_handle = priv->intr_handle;
rte_intr_free_epoll_fd(intr_handle);
- free(intr_handle->intr_vec);
- intr_handle->nb_efd = 0;
- intr_handle->intr_vec = NULL;
+ if (rte_intr_handle_vec_list_base(intr_handle))
+ rte_intr_handle_vec_list_free(intr_handle);
+
+ rte_intr_handle_nb_efd_set(intr_handle, 0);
}
/**
@@ -67,11 +68,10 @@ mlx4_rx_intr_vec_enable(struct mlx4_priv *priv)
unsigned int rxqs_n = ETH_DEV(priv)->data->nb_rx_queues;
unsigned int n = RTE_MIN(rxqs_n, (uint32_t)RTE_MAX_RXTX_INTR_VEC_ID);
unsigned int count = 0;
- struct rte_intr_handle *intr_handle = &priv->intr_handle;
+ struct rte_intr_handle *intr_handle = priv->intr_handle;
mlx4_rx_intr_vec_disable(priv);
- intr_handle->intr_vec = malloc(n * sizeof(intr_handle->intr_vec[0]));
- if (intr_handle->intr_vec == NULL) {
+ if (rte_intr_handle_vec_list_alloc(intr_handle, NULL, n)) {
rte_errno = ENOMEM;
ERROR("failed to allocate memory for interrupt vector,"
" Rx interrupts will not be supported");
@@ -83,9 +83,9 @@ mlx4_rx_intr_vec_enable(struct mlx4_priv *priv)
/* Skip queues that cannot request interrupts. */
if (!rxq || !rxq->channel) {
/* Use invalid intr_vec[] index to disable entry. */
- intr_handle->intr_vec[i] =
- RTE_INTR_VEC_RXTX_OFFSET +
- RTE_MAX_RXTX_INTR_VEC_ID;
+ if (rte_intr_handle_vec_list_index_set(intr_handle, i,
+ RTE_INTR_VEC_RXTX_OFFSET + RTE_MAX_RXTX_INTR_VEC_ID))
+ return -rte_errno;
continue;
}
if (count >= RTE_MAX_RXTX_INTR_VEC_ID) {
@@ -96,14 +96,22 @@ mlx4_rx_intr_vec_enable(struct mlx4_priv *priv)
mlx4_rx_intr_vec_disable(priv);
return -rte_errno;
}
- intr_handle->intr_vec[i] = RTE_INTR_VEC_RXTX_OFFSET + count;
- intr_handle->efds[count] = rxq->channel->fd;
+
+ if (rte_intr_handle_vec_list_index_set(intr_handle, i,
+ RTE_INTR_VEC_RXTX_OFFSET + count))
+ return -rte_errno;
+
+ if (rte_intr_handle_efds_index_set(intr_handle, count,
+ rxq->channel->fd))
+ return -rte_errno;
+
count++;
}
if (!count)
mlx4_rx_intr_vec_disable(priv);
else
- intr_handle->nb_efd = count;
+ if (rte_intr_handle_nb_efd_set(intr_handle, count))
+ return -rte_errno;
return 0;
}
@@ -254,12 +262,13 @@ mlx4_intr_uninstall(struct mlx4_priv *priv)
{
int err = rte_errno; /* Make sure rte_errno remains unchanged. */
- if (priv->intr_handle.fd != -1) {
- rte_intr_callback_unregister(&priv->intr_handle,
+ if (rte_intr_handle_fd_get(priv->intr_handle) != -1) {
+ rte_intr_callback_unregister(priv->intr_handle,
(void (*)(void *))
mlx4_interrupt_handler,
priv);
- priv->intr_handle.fd = -1;
+ if (rte_intr_handle_fd_set(priv->intr_handle, -1))
+ return -rte_errno;
}
rte_eal_alarm_cancel((void (*)(void *))mlx4_link_status_alarm, priv);
priv->intr_alarm = 0;
@@ -286,8 +295,11 @@ mlx4_intr_install(struct mlx4_priv *priv)
mlx4_intr_uninstall(priv);
if (intr_conf->lsc | intr_conf->rmv) {
- priv->intr_handle.fd = priv->ctx->async_fd;
- rc = rte_intr_callback_register(&priv->intr_handle,
+ if (rte_intr_handle_fd_set(priv->intr_handle,
+ priv->ctx->async_fd))
+ return -rte_errno;
+
+ rc = rte_intr_callback_register(priv->intr_handle,
(void (*)(void *))
mlx4_interrupt_handler,
priv);
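
One detail worth noting in the mlx4 Rx interrupt setup above: intr_vec[] stays indexed by the queue id, while the efds[] slots are filled by the running count of event fds actually mapped, exactly as in the original efds[count] assignment. A minimal sketch of that mapping with the new accessors (example_* name hypothetical):

/* Sketch: map one Rx queue to an epoll event fd with the new accessors. */
static int
example_map_rxq_to_efd(struct rte_intr_handle *ih, unsigned int queue_id,
		       unsigned int count, int channel_fd)
{
	/* intr_vec[] entry is per queue id ... */
	if (rte_intr_handle_vec_list_index_set(ih, queue_id,
			RTE_INTR_VEC_RXTX_OFFSET + count))
		return -rte_errno;
	/* ... while efds[] is filled in the order queues are mapped. */
	if (rte_intr_handle_efds_index_set(ih, count, channel_fd))
		return -rte_errno;
	return 0;
}
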
diff --git a/drivers/net/mlx5/linux/mlx5_os.c b/drivers/net/mlx5/linux/mlx5_os.c
index 5f8766aa48..117a3ded16 100644
--- a/drivers/net/mlx5/linux/mlx5_os.c
+++ b/drivers/net/mlx5/linux/mlx5_os.c
@@ -2589,9 +2589,8 @@ mlx5_os_pci_probe_pf(struct rte_pci_device *pci_dev,
*/
if (list[i].info.representor) {
struct rte_intr_handle *intr_handle;
- intr_handle = mlx5_malloc(MLX5_MEM_SYS | MLX5_MEM_ZERO,
- sizeof(*intr_handle), 0,
- SOCKET_ID_ANY);
+ intr_handle = rte_intr_handle_instance_alloc
+ (RTE_INTR_HANDLE_DEFAULT_SIZE, true);
if (!intr_handle) {
DRV_LOG(ERR,
"port %u failed to allocate memory for interrupt handler "
@@ -2745,7 +2744,7 @@ mlx5_os_auxiliary_probe(struct rte_device *dev)
if (eth_dev == NULL)
return -rte_errno;
/* Post create. */
- eth_dev->intr_handle = &adev->intr_handle;
+ eth_dev->intr_handle = adev->intr_handle;
if (rte_eal_process_type() == RTE_PROC_PRIMARY) {
eth_dev->data->dev_flags |= RTE_ETH_DEV_INTR_LSC;
eth_dev->data->dev_flags |= RTE_ETH_DEV_INTR_RMV;
@@ -2929,7 +2928,16 @@ mlx5_os_dev_shared_handler_install(struct mlx5_dev_ctx_shared *sh)
int ret;
int flags;
- sh->intr_handle.fd = -1;
+ sh->intr_handle =
+ rte_intr_handle_instance_alloc(RTE_INTR_HANDLE_DEFAULT_SIZE,
+ true);
+ if (!sh->intr_handle) {
+ DRV_LOG(ERR, "Fail to allocate intr_handle");
+ rte_errno = ENOMEM;
+ return;
+ }
+ rte_intr_handle_fd_set(sh->intr_handle, -1);
+
flags = fcntl(((struct ibv_context *)sh->ctx)->async_fd, F_GETFL);
ret = fcntl(((struct ibv_context *)sh->ctx)->async_fd,
F_SETFL, flags | O_NONBLOCK);
@@ -2937,17 +2945,26 @@ mlx5_os_dev_shared_handler_install(struct mlx5_dev_ctx_shared *sh)
DRV_LOG(INFO, "failed to change file descriptor async event"
" queue");
} else {
- sh->intr_handle.fd = ((struct ibv_context *)sh->ctx)->async_fd;
- sh->intr_handle.type = RTE_INTR_HANDLE_EXT;
- if (rte_intr_callback_register(&sh->intr_handle,
+ rte_intr_handle_fd_set(sh->intr_handle,
+ ((struct ibv_context *)sh->ctx)->async_fd);
+ rte_intr_handle_type_set(sh->intr_handle, RTE_INTR_HANDLE_EXT);
+ if (rte_intr_callback_register(sh->intr_handle,
mlx5_dev_interrupt_handler, sh)) {
DRV_LOG(INFO, "Fail to install the shared interrupt.");
- sh->intr_handle.fd = -1;
+ rte_intr_handle_fd_set(sh->intr_handle, -1);
}
}
if (sh->devx) {
#ifdef HAVE_IBV_DEVX_ASYNC
- sh->intr_handle_devx.fd = -1;
+ sh->intr_handle_devx =
+ rte_intr_handle_instance_alloc(RTE_INTR_HANDLE_DEFAULT_SIZE,
+ true);
+ if (!sh->intr_handle_devx) {
+ DRV_LOG(ERR, "Fail to allocate intr_handle");
+ rte_errno = ENOMEM;
+ return;
+ }
+ rte_intr_handle_fd_set(sh->intr_handle_devx, -1);
sh->devx_comp =
(void *)mlx5_glue->devx_create_cmd_comp(sh->ctx);
struct mlx5dv_devx_cmd_comp *devx_comp = sh->devx_comp;
@@ -2962,13 +2979,14 @@ mlx5_os_dev_shared_handler_install(struct mlx5_dev_ctx_shared *sh)
" devx comp");
return;
}
- sh->intr_handle_devx.fd = devx_comp->fd;
- sh->intr_handle_devx.type = RTE_INTR_HANDLE_EXT;
- if (rte_intr_callback_register(&sh->intr_handle_devx,
+ rte_intr_handle_fd_set(sh->intr_handle_devx, devx_comp->fd);
+ rte_intr_handle_type_set(sh->intr_handle_devx,
+ RTE_INTR_HANDLE_EXT);
+ if (rte_intr_callback_register(sh->intr_handle_devx,
mlx5_dev_interrupt_handler_devx, sh)) {
DRV_LOG(INFO, "Fail to install the devx shared"
" interrupt.");
- sh->intr_handle_devx.fd = -1;
+ rte_intr_handle_fd_set(sh->intr_handle_devx, -1);
}
#endif /* HAVE_IBV_DEVX_ASYNC */
}
@@ -2985,13 +3003,15 @@ mlx5_os_dev_shared_handler_install(struct mlx5_dev_ctx_shared *sh)
void
mlx5_os_dev_shared_handler_uninstall(struct mlx5_dev_ctx_shared *sh)
{
- if (sh->intr_handle.fd >= 0)
- mlx5_intr_callback_unregister(&sh->intr_handle,
+ if (rte_intr_handle_fd_get(sh->intr_handle) >= 0)
+ mlx5_intr_callback_unregister(sh->intr_handle,
mlx5_dev_interrupt_handler, sh);
+ rte_intr_handle_instance_free(sh->intr_handle);
#ifdef HAVE_IBV_DEVX_ASYNC
- if (sh->intr_handle_devx.fd >= 0)
- rte_intr_callback_unregister(&sh->intr_handle_devx,
+ if (rte_intr_handle_fd_get(sh->intr_handle_devx) >= 0)
+ rte_intr_callback_unregister(sh->intr_handle_devx,
mlx5_dev_interrupt_handler_devx, sh);
+ rte_intr_handle_instance_free(sh->intr_handle_devx);
if (sh->devx_comp)
mlx5_glue->devx_destroy_cmd_comp(sh->devx_comp);
#endif
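
Both the shared IBV handler and the DEVX handler above are now separate allocations, so the uninstall path frees each instance after unregistering its callback. A sketch of that pairing (example_* name hypothetical, and assuming the fd getter is safe on a handle whose fd was left at the -1 default set at install time):

static void
example_shared_handler_uninstall(struct rte_intr_handle *ih,
				 rte_intr_callback_fn cb, void *cb_arg)
{
	if (rte_intr_handle_fd_get(ih) >= 0)
		rte_intr_callback_unregister(ih, cb, cb_arg);
	/* The instance itself must be freed now that it is heap allocated. */
	rte_intr_handle_instance_free(ih);
}
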
diff --git a/drivers/net/mlx5/linux/mlx5_socket.c b/drivers/net/mlx5/linux/mlx5_socket.c
index 6356b66dc4..9007333c61 100644
--- a/drivers/net/mlx5/linux/mlx5_socket.c
+++ b/drivers/net/mlx5/linux/mlx5_socket.c
@@ -23,7 +23,7 @@
#define MLX5_SOCKET_PATH "/var/tmp/dpdk_net_mlx5_%d"
int server_socket; /* Unix socket for primary process. */
-struct rte_intr_handle server_intr_handle; /* Interrupt handler. */
+struct rte_intr_handle *server_intr_handle; /* Interrupt handler. */
/**
* Handle server pmd socket interrupts.
@@ -145,9 +145,20 @@ static int
mlx5_pmd_interrupt_handler_install(void)
{
MLX5_ASSERT(server_socket);
- server_intr_handle.fd = server_socket;
- server_intr_handle.type = RTE_INTR_HANDLE_EXT;
- return rte_intr_callback_register(&server_intr_handle,
+ server_intr_handle =
+ rte_intr_handle_instance_alloc(RTE_INTR_HANDLE_DEFAULT_SIZE,
+ false);
+ if (!server_intr_handle) {
+ DRV_LOG(ERR, "Fail to allocate intr_handle");
+ return -ENOMEM;
+ }
+ if (rte_intr_handle_fd_set(server_intr_handle, server_socket))
+ return -1;
+
+ if (rte_intr_handle_type_set(server_intr_handle, RTE_INTR_HANDLE_EXT))
+ return -1;
+
+ return rte_intr_callback_register(server_intr_handle,
mlx5_pmd_socket_handle, NULL);
}
@@ -158,12 +169,13 @@ static void
mlx5_pmd_interrupt_handler_uninstall(void)
{
if (server_socket) {
- mlx5_intr_callback_unregister(&server_intr_handle,
+ mlx5_intr_callback_unregister(server_intr_handle,
mlx5_pmd_socket_handle,
NULL);
}
- server_intr_handle.fd = 0;
- server_intr_handle.type = RTE_INTR_HANDLE_UNKNOWN;
+ rte_intr_handle_fd_set(server_intr_handle, 0);
+ rte_intr_handle_type_set(server_intr_handle, RTE_INTR_HANDLE_UNKNOWN);
+ rte_intr_handle_instance_free(server_intr_handle);
}
/**
diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h
index e02714e231..b4666fd379 100644
--- a/drivers/net/mlx5/mlx5.h
+++ b/drivers/net/mlx5/mlx5.h
@@ -1016,7 +1016,7 @@ struct mlx5_dev_txpp {
uint32_t tick; /* Completion tick duration in nanoseconds. */
uint32_t test; /* Packet pacing test mode. */
int32_t skew; /* Scheduling skew. */
- struct rte_intr_handle intr_handle; /* Periodic interrupt. */
+ struct rte_intr_handle *intr_handle; /* Periodic interrupt. */
void *echan; /* Event Channel. */
struct mlx5_txpp_wq clock_queue; /* Clock Queue. */
struct mlx5_txpp_wq rearm_queue; /* Clock Queue. */
@@ -1184,8 +1184,8 @@ struct mlx5_dev_ctx_shared {
/* Memory Pool for mlx5 flow resources. */
struct mlx5_l3t_tbl *cnt_id_tbl; /* Shared counter lookup table. */
/* Shared interrupt handler section. */
- struct rte_intr_handle intr_handle; /* Interrupt handler for device. */
- struct rte_intr_handle intr_handle_devx; /* DEVX interrupt handler. */
+ struct rte_intr_handle *intr_handle; /* Interrupt handler for device. */
+ struct rte_intr_handle *intr_handle_devx; /* DEVX interrupt handler. */
void *devx_comp; /* DEVX async comp obj. */
struct mlx5_devx_obj *tis; /* TIS object. */
struct mlx5_devx_obj *td; /* Transport domain. */
diff --git a/drivers/net/mlx5/mlx5_rxq.c b/drivers/net/mlx5/mlx5_rxq.c
index abd8ce7989..75bcb82bf9 100644
--- a/drivers/net/mlx5/mlx5_rxq.c
+++ b/drivers/net/mlx5/mlx5_rxq.c
@@ -837,10 +837,7 @@ mlx5_rx_intr_vec_enable(struct rte_eth_dev *dev)
if (!dev->data->dev_conf.intr_conf.rxq)
return 0;
mlx5_rx_intr_vec_disable(dev);
- intr_handle->intr_vec = mlx5_malloc(0,
- n * sizeof(intr_handle->intr_vec[0]),
- 0, SOCKET_ID_ANY);
- if (intr_handle->intr_vec == NULL) {
+ if (rte_intr_handle_vec_list_alloc(intr_handle, NULL, n)) {
DRV_LOG(ERR,
"port %u failed to allocate memory for interrupt"
" vector, Rx interrupts will not be supported",
@@ -848,7 +845,10 @@ mlx5_rx_intr_vec_enable(struct rte_eth_dev *dev)
rte_errno = ENOMEM;
return -rte_errno;
}
- intr_handle->type = RTE_INTR_HANDLE_EXT;
+
+ if (rte_intr_handle_type_set(intr_handle, RTE_INTR_HANDLE_EXT))
+ return -rte_errno;
+
for (i = 0; i != n; ++i) {
/* This rxq obj must not be released in this function. */
struct mlx5_rxq_ctrl *rxq_ctrl = mlx5_rxq_get(dev, i);
@@ -859,9 +859,9 @@ mlx5_rx_intr_vec_enable(struct rte_eth_dev *dev)
if (!rxq_obj || (!rxq_obj->ibv_channel &&
!rxq_obj->devx_channel)) {
/* Use invalid intr_vec[] index to disable entry. */
- intr_handle->intr_vec[i] =
- RTE_INTR_VEC_RXTX_OFFSET +
- RTE_MAX_RXTX_INTR_VEC_ID;
+ if (rte_intr_handle_vec_list_index_set(intr_handle, i,
+ RTE_INTR_VEC_RXTX_OFFSET + RTE_MAX_RXTX_INTR_VEC_ID))
+ return -rte_errno;
/* Decrease the rxq_ctrl's refcnt */
if (rxq_ctrl)
mlx5_rxq_release(dev, i);
@@ -888,14 +888,20 @@ mlx5_rx_intr_vec_enable(struct rte_eth_dev *dev)
mlx5_rx_intr_vec_disable(dev);
return -rte_errno;
}
- intr_handle->intr_vec[i] = RTE_INTR_VEC_RXTX_OFFSET + count;
- intr_handle->efds[count] = rxq_obj->fd;
+
+ if (rte_intr_handle_vec_list_index_set(intr_handle, i,
+ RTE_INTR_VEC_RXTX_OFFSET + count))
+ return -rte_errno;
+ if (rte_intr_handle_efds_index_set(intr_handle, count,
+ rxq_obj->fd))
+ return -rte_errno;
count++;
}
if (!count)
mlx5_rx_intr_vec_disable(dev);
else
- intr_handle->nb_efd = count;
+ if (rte_intr_handle_nb_efd_set(intr_handle, count))
+ return -rte_errno;
return 0;
}
@@ -916,11 +922,11 @@ mlx5_rx_intr_vec_disable(struct rte_eth_dev *dev)
if (!dev->data->dev_conf.intr_conf.rxq)
return;
- if (!intr_handle->intr_vec)
+ if (!rte_intr_handle_vec_list_base(intr_handle))
goto free;
for (i = 0; i != n; ++i) {
- if (intr_handle->intr_vec[i] == RTE_INTR_VEC_RXTX_OFFSET +
- RTE_MAX_RXTX_INTR_VEC_ID)
+ if (rte_intr_handle_vec_list_index_get(intr_handle, i) ==
+ RTE_INTR_VEC_RXTX_OFFSET + RTE_MAX_RXTX_INTR_VEC_ID)
continue;
/**
* Need to access directly the queue to release the reference
@@ -930,10 +936,11 @@ mlx5_rx_intr_vec_disable(struct rte_eth_dev *dev)
}
free:
rte_intr_free_epoll_fd(intr_handle);
- if (intr_handle->intr_vec)
- mlx5_free(intr_handle->intr_vec);
- intr_handle->nb_efd = 0;
- intr_handle->intr_vec = NULL;
+
+ if (rte_intr_handle_vec_list_base(intr_handle))
+ rte_intr_handle_vec_list_free(intr_handle);
+
+ rte_intr_handle_nb_efd_set(intr_handle, 0);
}
/**
diff --git a/drivers/net/mlx5/mlx5_trigger.c b/drivers/net/mlx5/mlx5_trigger.c
index 54173bfacb..d349e5df44 100644
--- a/drivers/net/mlx5/mlx5_trigger.c
+++ b/drivers/net/mlx5/mlx5_trigger.c
@@ -1129,7 +1129,7 @@ mlx5_dev_start(struct rte_eth_dev *dev)
dev->rx_pkt_burst = mlx5_select_rx_function(dev);
/* Enable datapath on secondary process. */
mlx5_mp_os_req_start_rxtx(dev);
- if (priv->sh->intr_handle.fd >= 0) {
+ if (rte_intr_handle_fd_get(priv->sh->intr_handle) >= 0) {
priv->sh->port[priv->dev_port - 1].ih_port_id =
(uint32_t)dev->data->port_id;
} else {
@@ -1138,7 +1138,7 @@ mlx5_dev_start(struct rte_eth_dev *dev)
dev->data->dev_conf.intr_conf.lsc = 0;
dev->data->dev_conf.intr_conf.rmv = 0;
}
- if (priv->sh->intr_handle_devx.fd >= 0)
+ if (rte_intr_handle_fd_get(priv->sh->intr_handle_devx) >= 0)
priv->sh->port[priv->dev_port - 1].devx_ih_port_id =
(uint32_t)dev->data->port_id;
return 0;
diff --git a/drivers/net/mlx5/mlx5_txpp.c b/drivers/net/mlx5/mlx5_txpp.c
index 4f6da9f2d1..9567c4866d 100644
--- a/drivers/net/mlx5/mlx5_txpp.c
+++ b/drivers/net/mlx5/mlx5_txpp.c
@@ -756,11 +756,12 @@ mlx5_txpp_interrupt_handler(void *cb_arg)
static void
mlx5_txpp_stop_service(struct mlx5_dev_ctx_shared *sh)
{
- if (!sh->txpp.intr_handle.fd)
+ if (!rte_intr_handle_fd_get(sh->txpp.intr_handle))
return;
- mlx5_intr_callback_unregister(&sh->txpp.intr_handle,
+ mlx5_intr_callback_unregister(sh->txpp.intr_handle,
mlx5_txpp_interrupt_handler, sh);
- sh->txpp.intr_handle.fd = 0;
+ rte_intr_handle_fd_set(sh->txpp.intr_handle, 0);
+ rte_intr_handle_instance_free(sh->txpp.intr_handle);
}
/* Attach interrupt handler and fires first request to Rearm Queue. */
@@ -784,13 +785,23 @@ mlx5_txpp_start_service(struct mlx5_dev_ctx_shared *sh)
rte_errno = errno;
return -rte_errno;
}
- memset(&sh->txpp.intr_handle, 0, sizeof(sh->txpp.intr_handle));
+ sh->txpp.intr_handle =
+ rte_intr_handle_instance_alloc(RTE_INTR_HANDLE_DEFAULT_SIZE,
+ true);
+ if (!sh->txpp.intr_handle) {
+ DRV_LOG(ERR, "Fail to allocate intr_handle");
+ return -ENOMEM;
+ }
fd = mlx5_os_get_devx_channel_fd(sh->txpp.echan);
- sh->txpp.intr_handle.fd = fd;
- sh->txpp.intr_handle.type = RTE_INTR_HANDLE_EXT;
- if (rte_intr_callback_register(&sh->txpp.intr_handle,
+ if (rte_intr_handle_fd_set(sh->txpp.intr_handle, fd))
+ return -rte_errno;
+
+ if (rte_intr_handle_type_set(sh->txpp.intr_handle, RTE_INTR_HANDLE_EXT))
+ return -rte_errno;
+
+ if (rte_intr_callback_register(sh->txpp.intr_handle,
mlx5_txpp_interrupt_handler, sh)) {
- sh->txpp.intr_handle.fd = 0;
+ rte_intr_handle_fd_set(sh->txpp.intr_handle, 0);
DRV_LOG(ERR, "Failed to register CQE interrupt %d.", rte_errno);
return -rte_errno;
}
diff --git a/drivers/net/netvsc/hn_ethdev.c b/drivers/net/netvsc/hn_ethdev.c
index 9e2a405973..caf64ccfc2 100644
--- a/drivers/net/netvsc/hn_ethdev.c
+++ b/drivers/net/netvsc/hn_ethdev.c
@@ -133,9 +133,9 @@ eth_dev_vmbus_allocate(struct rte_vmbus_device *dev, size_t private_data_size)
eth_dev->device = &dev->device;
/* interrupt is simulated */
- dev->intr_handle.type = RTE_INTR_HANDLE_EXT;
+ rte_intr_handle_type_set(dev->intr_handle, RTE_INTR_HANDLE_EXT);
eth_dev->data->dev_flags |= RTE_ETH_DEV_INTR_LSC;
- eth_dev->intr_handle = &dev->intr_handle;
+ eth_dev->intr_handle = dev->intr_handle;
return eth_dev;
}
diff --git a/drivers/net/nfp/nfp_net.c b/drivers/net/nfp/nfp_net.c
index a30e78db16..2070655dfa 100644
--- a/drivers/net/nfp/nfp_net.c
+++ b/drivers/net/nfp/nfp_net.c
@@ -583,24 +583,22 @@ nfp_configure_rx_interrupt(struct rte_eth_dev *dev,
struct nfp_net_hw *hw;
int i;
- if (!intr_handle->intr_vec) {
- intr_handle->intr_vec =
- rte_zmalloc("intr_vec",
- dev->data->nb_rx_queues * sizeof(int), 0);
- if (!intr_handle->intr_vec) {
+ if (!rte_intr_handle_vec_list_base(intr_handle)) {
+ if (rte_intr_handle_vec_list_alloc(intr_handle, "intr_vec",
+ dev->data->nb_rx_queues)) {
PMD_INIT_LOG(ERR, "Failed to allocate %d rx_queues"
" intr_vec", dev->data->nb_rx_queues);
return -ENOMEM;
}
}
-
hw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private);
- if (intr_handle->type == RTE_INTR_HANDLE_UIO) {
+ if (rte_intr_handle_type_get(intr_handle) == RTE_INTR_HANDLE_UIO) {
PMD_INIT_LOG(INFO, "VF: enabling RX interrupt with UIO");
/* UIO just supports one queue and no LSC*/
nn_cfg_writeb(hw, NFP_NET_CFG_RXR_VEC(0), 0);
- intr_handle->intr_vec[0] = 0;
+ if (rte_intr_handle_vec_list_index_set(intr_handle, 0, 0))
+ return -1;
} else {
PMD_INIT_LOG(INFO, "VF: enabling RX interrupt with VFIO");
for (i = 0; i < dev->data->nb_rx_queues; i++) {
@@ -609,9 +607,12 @@ nfp_configure_rx_interrupt(struct rte_eth_dev *dev,
* efd interrupts
*/
nn_cfg_writeb(hw, NFP_NET_CFG_RXR_VEC(i), i + 1);
- intr_handle->intr_vec[i] = i + 1;
+ if (rte_intr_handle_vec_list_index_set(intr_handle, i,
+ i + 1))
+ return -1;
PMD_INIT_LOG(DEBUG, "intr_vec[%d]= %d", i,
- intr_handle->intr_vec[i]);
+ rte_intr_handle_vec_list_index_get(intr_handle,
+ i));
}
}
@@ -684,7 +685,7 @@ static int
nfp_net_start(struct rte_eth_dev *dev)
{
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
uint32_t new_ctrl, update = 0;
struct nfp_net_hw *hw;
struct nfp_pf_dev *pf_dev;
@@ -711,12 +712,13 @@ nfp_net_start(struct rte_eth_dev *dev)
"with NFP multiport PF");
return -EINVAL;
}
- if (intr_handle->type == RTE_INTR_HANDLE_UIO) {
+ if (rte_intr_handle_type_get(intr_handle) ==
+ RTE_INTR_HANDLE_UIO) {
/*
* Better not to share LSC with RX interrupts.
* Unregistering LSC interrupt handler
*/
- rte_intr_callback_unregister(&pci_dev->intr_handle,
+ rte_intr_callback_unregister(pci_dev->intr_handle,
nfp_net_dev_interrupt_handler, (void *)dev);
if (dev->data->nb_rx_queues > 1) {
@@ -941,10 +943,10 @@ nfp_net_close(struct rte_eth_dev *dev)
rte_free(pf_dev);
}
- rte_intr_disable(&pci_dev->intr_handle);
+ rte_intr_disable(pci_dev->intr_handle);
/* unregister callback func from eal lib */
- rte_intr_callback_unregister(&pci_dev->intr_handle,
+ rte_intr_callback_unregister(pci_dev->intr_handle,
nfp_net_dev_interrupt_handler,
(void *)dev);
@@ -1398,7 +1400,8 @@ nfp_rx_queue_intr_enable(struct rte_eth_dev *dev, uint16_t queue_id)
hw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private);
pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- if (pci_dev->intr_handle.type != RTE_INTR_HANDLE_UIO)
+ if (rte_intr_handle_type_get(pci_dev->intr_handle) !=
+ RTE_INTR_HANDLE_UIO)
base = 1;
/* Make sure all updates are written before un-masking */
@@ -1418,7 +1421,8 @@ nfp_rx_queue_intr_disable(struct rte_eth_dev *dev, uint16_t queue_id)
hw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private);
pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- if (pci_dev->intr_handle.type != RTE_INTR_HANDLE_UIO)
+ if (rte_intr_handle_type_get(pci_dev->intr_handle) !=
+ RTE_INTR_HANDLE_UIO)
base = 1;
/* Make sure all updates are written before un-masking */
@@ -1468,7 +1472,7 @@ nfp_net_irq_unmask(struct rte_eth_dev *dev)
if (hw->ctrl & NFP_NET_CFG_CTRL_MSIXAUTO) {
/* If MSI-X auto-masking is used, clear the entry */
rte_wmb();
- rte_intr_ack(&pci_dev->intr_handle);
+ rte_intr_ack(pci_dev->intr_handle);
} else {
/* Make sure all updates are written before un-masking */
rte_wmb();
@@ -2998,7 +3002,7 @@ nfp_net_init(struct rte_eth_dev *eth_dev)
if (rte_eal_process_type() == RTE_PROC_PRIMARY) {
/* Registering LSC interrupt handler */
- rte_intr_callback_register(&pci_dev->intr_handle,
+ rte_intr_callback_register(pci_dev->intr_handle,
nfp_net_dev_interrupt_handler,
(void *)eth_dev);
/* Telling the firmware about the LSC interrupt entry */
diff --git a/drivers/net/ngbe/ngbe_ethdev.c b/drivers/net/ngbe/ngbe_ethdev.c
index 3b5c6615ad..fe4d675c0f 100644
--- a/drivers/net/ngbe/ngbe_ethdev.c
+++ b/drivers/net/ngbe/ngbe_ethdev.c
@@ -129,7 +129,7 @@ eth_ngbe_dev_init(struct rte_eth_dev *eth_dev, void *init_params __rte_unused)
{
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
struct ngbe_hw *hw = ngbe_dev_hw(eth_dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
const struct rte_memzone *mz;
uint32_t ctrl_ext;
int err;
@@ -334,7 +334,7 @@ ngbe_dev_start(struct rte_eth_dev *dev)
{
struct ngbe_hw *hw = ngbe_dev_hw(dev);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
uint32_t intr_vector = 0;
int err;
bool link_up = false, negotiate = false;
@@ -372,11 +372,10 @@ ngbe_dev_start(struct rte_eth_dev *dev)
return -1;
}
- if (rte_intr_dp_is_en(intr_handle) && intr_handle->intr_vec == NULL) {
- intr_handle->intr_vec =
- rte_zmalloc("intr_vec",
- dev->data->nb_rx_queues * sizeof(int), 0);
- if (intr_handle->intr_vec == NULL) {
+ if (rte_intr_dp_is_en(intr_handle) &&
+ !rte_intr_handle_vec_list_base(intr_handle)) {
+ if (rte_intr_handle_vec_list_alloc(intr_handle, "intr_vec",
+ dev->data->nb_rx_queues)) {
PMD_INIT_LOG(ERR,
"Failed to allocate %d rx_queues intr_vec",
dev->data->nb_rx_queues);
@@ -503,7 +502,7 @@ ngbe_dev_stop(struct rte_eth_dev *dev)
struct rte_eth_link link;
struct ngbe_hw *hw = ngbe_dev_hw(dev);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
if (hw->adapter_stopped)
return 0;
@@ -540,10 +539,8 @@ ngbe_dev_stop(struct rte_eth_dev *dev)
/* Clean datapath event and queue/vec mapping */
rte_intr_efd_disable(intr_handle);
- if (intr_handle->intr_vec != NULL) {
- rte_free(intr_handle->intr_vec);
- intr_handle->intr_vec = NULL;
- }
+ if (rte_intr_handle_vec_list_base(intr_handle))
+ rte_intr_handle_vec_list_free(intr_handle);
hw->adapter_stopped = true;
dev->data->dev_started = 0;
@@ -559,7 +556,7 @@ ngbe_dev_close(struct rte_eth_dev *dev)
{
struct ngbe_hw *hw = ngbe_dev_hw(dev);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
int retries = 0;
int ret;
@@ -1093,7 +1090,7 @@ static void
ngbe_configure_msix(struct rte_eth_dev *dev)
{
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
struct ngbe_hw *hw = ngbe_dev_hw(dev);
uint32_t queue_id, base = NGBE_MISC_VEC_ID;
uint32_t vec = NGBE_MISC_VEC_ID;
@@ -1128,8 +1125,10 @@ ngbe_configure_msix(struct rte_eth_dev *dev)
queue_id++) {
/* by default, 1:1 mapping */
ngbe_set_ivar_map(hw, 0, queue_id, vec);
- intr_handle->intr_vec[queue_id] = vec;
- if (vec < base + intr_handle->nb_efd - 1)
+ rte_intr_handle_vec_list_index_set(intr_handle,
+ queue_id, vec);
+ if (vec < base + rte_intr_handle_nb_efd_get(intr_handle)
+ - 1)
vec++;
}
diff --git a/drivers/net/octeontx2/otx2_ethdev_irq.c b/drivers/net/octeontx2/otx2_ethdev_irq.c
index b121488faf..3cdd19dc68 100644
--- a/drivers/net/octeontx2/otx2_ethdev_irq.c
+++ b/drivers/net/octeontx2/otx2_ethdev_irq.c
@@ -34,7 +34,7 @@ static int
nix_lf_register_err_irq(struct rte_eth_dev *eth_dev)
{
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
- struct rte_intr_handle *handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *handle = pci_dev->intr_handle;
struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
int rc, vec;
@@ -54,7 +54,7 @@ static void
nix_lf_unregister_err_irq(struct rte_eth_dev *eth_dev)
{
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
- struct rte_intr_handle *handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *handle = pci_dev->intr_handle;
struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
int vec;
@@ -90,7 +90,7 @@ static int
nix_lf_register_ras_irq(struct rte_eth_dev *eth_dev)
{
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
- struct rte_intr_handle *handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *handle = pci_dev->intr_handle;
struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
int rc, vec;
@@ -110,7 +110,7 @@ static void
nix_lf_unregister_ras_irq(struct rte_eth_dev *eth_dev)
{
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
- struct rte_intr_handle *handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *handle = pci_dev->intr_handle;
struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
int vec;
@@ -263,7 +263,7 @@ int
oxt2_nix_register_queue_irqs(struct rte_eth_dev *eth_dev)
{
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
- struct rte_intr_handle *handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *handle = pci_dev->intr_handle;
struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
int vec, q, sqs, rqs, qs, rc = 0;
@@ -308,7 +308,7 @@ void
oxt2_nix_unregister_queue_irqs(struct rte_eth_dev *eth_dev)
{
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
- struct rte_intr_handle *handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *handle = pci_dev->intr_handle;
struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
int vec, q;
@@ -332,7 +332,7 @@ int
oxt2_nix_register_cq_irqs(struct rte_eth_dev *eth_dev)
{
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
- struct rte_intr_handle *handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *handle = pci_dev->intr_handle;
struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
uint8_t rc = 0, vec, q;
@@ -362,20 +362,21 @@ oxt2_nix_register_cq_irqs(struct rte_eth_dev *eth_dev)
return rc;
}
- if (!handle->intr_vec) {
- handle->intr_vec = rte_zmalloc("intr_vec",
- dev->configured_cints *
- sizeof(int), 0);
- if (!handle->intr_vec) {
- otx2_err("Failed to allocate %d rx intr_vec",
- dev->configured_cints);
- return -ENOMEM;
+ if (!rte_intr_handle_vec_list_base(handle)) {
+ rc = rte_intr_handle_vec_list_alloc(handle, "intr_vec",
+ dev->configured_cints);
+ if (rc) {
+ otx2_err("Fail to allocate intr vec list, "
+ "rc=%d", rc);
+ return rc;
}
}
/* VFIO vector zero is resereved for misc interrupt so
* doing required adjustment. (b13bfab4cd)
*/
- handle->intr_vec[q] = RTE_INTR_VEC_RXTX_OFFSET + vec;
+ if (rte_intr_handle_vec_list_index_set(handle, q,
+ RTE_INTR_VEC_RXTX_OFFSET + vec))
+ return -1;
/* Configure CQE interrupt coalescing parameters */
otx2_write64(((CQ_CQE_THRESH_DEFAULT) |
@@ -395,7 +396,7 @@ void
oxt2_nix_unregister_cq_irqs(struct rte_eth_dev *eth_dev)
{
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
- struct rte_intr_handle *handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *handle = pci_dev->intr_handle;
struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
int vec, q;
diff --git a/drivers/net/qede/qede_ethdev.c b/drivers/net/qede/qede_ethdev.c
index 323d46e6eb..b04e446030 100644
--- a/drivers/net/qede/qede_ethdev.c
+++ b/drivers/net/qede/qede_ethdev.c
@@ -1576,17 +1576,17 @@ static int qede_dev_close(struct rte_eth_dev *eth_dev)
qdev->ops->common->slowpath_stop(edev);
qdev->ops->common->remove(edev);
- rte_intr_disable(&pci_dev->intr_handle);
+ rte_intr_disable(pci_dev->intr_handle);
- switch (pci_dev->intr_handle.type) {
+ switch (rte_intr_handle_type_get(pci_dev->intr_handle)) {
case RTE_INTR_HANDLE_UIO_INTX:
case RTE_INTR_HANDLE_VFIO_LEGACY:
- rte_intr_callback_unregister(&pci_dev->intr_handle,
+ rte_intr_callback_unregister(pci_dev->intr_handle,
qede_interrupt_handler_intx,
(void *)eth_dev);
break;
default:
- rte_intr_callback_unregister(&pci_dev->intr_handle,
+ rte_intr_callback_unregister(pci_dev->intr_handle,
qede_interrupt_handler,
(void *)eth_dev);
}
@@ -2569,22 +2569,22 @@ static int qede_common_dev_init(struct rte_eth_dev *eth_dev, bool is_vf)
}
qede_update_pf_params(edev);
- switch (pci_dev->intr_handle.type) {
+ switch (rte_intr_handle_type_get(pci_dev->intr_handle)) {
case RTE_INTR_HANDLE_UIO_INTX:
case RTE_INTR_HANDLE_VFIO_LEGACY:
int_mode = ECORE_INT_MODE_INTA;
- rte_intr_callback_register(&pci_dev->intr_handle,
+ rte_intr_callback_register(pci_dev->intr_handle,
qede_interrupt_handler_intx,
(void *)eth_dev);
break;
default:
int_mode = ECORE_INT_MODE_MSIX;
- rte_intr_callback_register(&pci_dev->intr_handle,
+ rte_intr_callback_register(pci_dev->intr_handle,
qede_interrupt_handler,
(void *)eth_dev);
}
- if (rte_intr_enable(&pci_dev->intr_handle)) {
+ if (rte_intr_enable(pci_dev->intr_handle)) {
DP_ERR(edev, "rte_intr_enable() failed\n");
rc = -ENODEV;
goto err;
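
The qede hunks above replace direct reads of pci_dev->intr_handle.type with the type getter when choosing between the INTx and MSI-X paths. A small sketch of that selection, reusing qede's ECORE_INT_MODE_* values from the surrounding code (example_* name hypothetical):

static int
example_pick_int_mode(struct rte_intr_handle *ih)
{
	switch (rte_intr_handle_type_get(ih)) {
	case RTE_INTR_HANDLE_UIO_INTX:
	case RTE_INTR_HANDLE_VFIO_LEGACY:
		return ECORE_INT_MODE_INTA;	/* legacy INTx interrupt */
	default:
		return ECORE_INT_MODE_MSIX;	/* MSI-X */
	}
}
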
diff --git a/drivers/net/sfc/sfc_intr.c b/drivers/net/sfc/sfc_intr.c
index c2298ed23c..7cf17d3e38 100644
--- a/drivers/net/sfc/sfc_intr.c
+++ b/drivers/net/sfc/sfc_intr.c
@@ -79,7 +79,7 @@ sfc_intr_line_handler(void *cb_arg)
if (qmask & (1 << sa->mgmt_evq_index))
sfc_intr_handle_mgmt_evq(sa);
- if (rte_intr_ack(&pci_dev->intr_handle) != 0)
+ if (rte_intr_ack(pci_dev->intr_handle) != 0)
sfc_err(sa, "cannot reenable interrupts");
sfc_log_init(sa, "done");
@@ -123,7 +123,7 @@ sfc_intr_message_handler(void *cb_arg)
sfc_intr_handle_mgmt_evq(sa);
- if (rte_intr_ack(&pci_dev->intr_handle) != 0)
+ if (rte_intr_ack(pci_dev->intr_handle) != 0)
sfc_err(sa, "cannot reenable interrupts");
sfc_log_init(sa, "done");
@@ -159,7 +159,7 @@ sfc_intr_start(struct sfc_adapter *sa)
goto fail_intr_init;
pci_dev = RTE_ETH_DEV_TO_PCI(sa->eth_dev);
- intr_handle = &pci_dev->intr_handle;
+ intr_handle = pci_dev->intr_handle;
if (intr->handler != NULL) {
if (intr->rxq_intr && rte_intr_cap_multiple(intr_handle)) {
@@ -171,16 +171,15 @@ sfc_intr_start(struct sfc_adapter *sa)
goto fail_rte_intr_efd_enable;
}
if (rte_intr_dp_is_en(intr_handle)) {
- intr_handle->intr_vec =
- rte_calloc("intr_vec",
- sa->eth_dev->data->nb_rx_queues, sizeof(int),
- 0);
- if (intr_handle->intr_vec == NULL) {
+ if (rte_intr_handle_vec_list_alloc(intr_handle,
+ "intr_vec",
+ sa->eth_dev->data->nb_rx_queues)) {
sfc_err(sa,
"Failed to allocate %d rx_queues intr_vec",
sa->eth_dev->data->nb_rx_queues);
goto fail_intr_vector_alloc;
}
+
}
sfc_log_init(sa, "rte_intr_callback_register");
@@ -215,15 +214,17 @@ sfc_intr_start(struct sfc_adapter *sa)
}
sfc_log_init(sa, "done type=%u max_intr=%d nb_efd=%u vec=%p",
- intr_handle->type, intr_handle->max_intr,
- intr_handle->nb_efd, intr_handle->intr_vec);
+ rte_intr_handle_type_get(intr_handle),
+ rte_intr_handle_max_intr_get(intr_handle),
+ rte_intr_handle_nb_efd_get(intr_handle),
+ rte_intr_handle_vec_list_base(intr_handle));
return 0;
fail_rte_intr_enable:
rte_intr_callback_unregister(intr_handle, intr->handler, (void *)sa);
fail_rte_intr_cb_reg:
- rte_free(intr_handle->intr_vec);
+ rte_intr_handle_vec_list_free(intr_handle);
fail_intr_vector_alloc:
rte_intr_efd_disable(intr_handle);
@@ -250,9 +251,9 @@ sfc_intr_stop(struct sfc_adapter *sa)
efx_intr_disable(sa->nic);
- intr_handle = &pci_dev->intr_handle;
+ intr_handle = pci_dev->intr_handle;
- rte_free(intr_handle->intr_vec);
+ rte_intr_handle_vec_list_free(intr_handle);
rte_intr_efd_disable(intr_handle);
if (rte_intr_disable(intr_handle) != 0)
@@ -322,7 +323,7 @@ sfc_intr_attach(struct sfc_adapter *sa)
sfc_log_init(sa, "entry");
- switch (pci_dev->intr_handle.type) {
+ switch (rte_intr_handle_type_get(pci_dev->intr_handle)) {
#ifdef RTE_EXEC_ENV_LINUX
case RTE_INTR_HANDLE_UIO_INTX:
case RTE_INTR_HANDLE_VFIO_LEGACY:
diff --git a/drivers/net/tap/rte_eth_tap.c b/drivers/net/tap/rte_eth_tap.c
index c515de3bf7..d6c92f8d30 100644
--- a/drivers/net/tap/rte_eth_tap.c
+++ b/drivers/net/tap/rte_eth_tap.c
@@ -1668,7 +1668,8 @@ tap_dev_intr_handler(void *cb_arg)
struct rte_eth_dev *dev = cb_arg;
struct pmd_internals *pmd = dev->data->dev_private;
- tap_nl_recv(pmd->intr_handle.fd, tap_nl_msg_handler, dev);
+ tap_nl_recv(rte_intr_handle_fd_get(pmd->intr_handle),
+ tap_nl_msg_handler, dev);
}
static int
@@ -1679,22 +1680,23 @@ tap_lsc_intr_handle_set(struct rte_eth_dev *dev, int set)
/* In any case, disable interrupt if the conf is no longer there. */
if (!dev->data->dev_conf.intr_conf.lsc) {
- if (pmd->intr_handle.fd != -1) {
+ if (rte_intr_handle_fd_get(pmd->intr_handle) != -1)
goto clean;
- }
+
return 0;
}
if (set) {
- pmd->intr_handle.fd = tap_nl_init(RTMGRP_LINK);
- if (unlikely(pmd->intr_handle.fd == -1))
+ rte_intr_handle_fd_set(pmd->intr_handle,
+ tap_nl_init(RTMGRP_LINK));
+ if (unlikely(rte_intr_handle_fd_get(pmd->intr_handle) == -1))
return -EBADF;
return rte_intr_callback_register(
- &pmd->intr_handle, tap_dev_intr_handler, dev);
+ pmd->intr_handle, tap_dev_intr_handler, dev);
}
clean:
do {
- ret = rte_intr_callback_unregister(&pmd->intr_handle,
+ ret = rte_intr_callback_unregister(pmd->intr_handle,
tap_dev_intr_handler, dev);
if (ret >= 0) {
break;
@@ -1707,8 +1709,8 @@ tap_lsc_intr_handle_set(struct rte_eth_dev *dev, int set)
}
} while (true);
- tap_nl_final(pmd->intr_handle.fd);
- pmd->intr_handle.fd = -1;
+ tap_nl_final(rte_intr_handle_fd_get(pmd->intr_handle));
+ rte_intr_handle_fd_set(pmd->intr_handle, -1);
return 0;
}
@@ -1923,6 +1925,15 @@ eth_dev_tap_create(struct rte_vdev_device *vdev, const char *tap_name,
goto error_exit;
}
+ /* Allocate interrupt instance */
+ pmd->intr_handle =
+ rte_intr_handle_instance_alloc(RTE_INTR_HANDLE_DEFAULT_SIZE,
+ true);
+ if (!pmd->intr_handle) {
+ TAP_LOG(ERR, "Failed to allocate intr handle");
+ goto error_exit;
+ }
+
/* Setup some default values */
data = dev->data;
data->dev_private = pmd;
@@ -1940,9 +1951,9 @@ eth_dev_tap_create(struct rte_vdev_device *vdev, const char *tap_name,
dev->rx_pkt_burst = pmd_rx_burst;
dev->tx_pkt_burst = pmd_tx_burst;
- pmd->intr_handle.type = RTE_INTR_HANDLE_EXT;
- pmd->intr_handle.fd = -1;
- dev->intr_handle = &pmd->intr_handle;
+ rte_intr_handle_type_set(pmd->intr_handle, RTE_INTR_HANDLE_EXT);
+ rte_intr_handle_fd_set(pmd->intr_handle, -1);
+ dev->intr_handle = pmd->intr_handle;
/* Presetup the fds to -1 as being not valid */
for (i = 0; i < RTE_PMD_TAP_MAX_QUEUES; i++) {
@@ -2093,6 +2104,8 @@ eth_dev_tap_create(struct rte_vdev_device *vdev, const char *tap_name,
/* mac_addrs must not be freed alone because part of dev_private */
dev->data->mac_addrs = NULL;
rte_eth_dev_release_port(dev);
+ if (pmd->intr_handle)
+ rte_intr_handle_instance_free(pmd->intr_handle);
error_exit_nodev:
TAP_LOG(ERR, "%s Unable to initialize %s",
diff --git a/drivers/net/tap/rte_eth_tap.h b/drivers/net/tap/rte_eth_tap.h
index a98ea11a33..996021e424 100644
--- a/drivers/net/tap/rte_eth_tap.h
+++ b/drivers/net/tap/rte_eth_tap.h
@@ -89,7 +89,7 @@ struct pmd_internals {
LIST_HEAD(tap_implicit_flows, rte_flow) implicit_flows;
struct rx_queue rxq[RTE_PMD_TAP_MAX_QUEUES]; /* List of RX queues */
struct tx_queue txq[RTE_PMD_TAP_MAX_QUEUES]; /* List of TX queues */
- struct rte_intr_handle intr_handle; /* LSC interrupt handle. */
+ struct rte_intr_handle *intr_handle; /* LSC interrupt handle. */
int ka_fd; /* keep-alive file descriptor */
struct rte_mempool *gso_ctx_mp; /* Mempool for GSO packets */
};
diff --git a/drivers/net/tap/tap_intr.c b/drivers/net/tap/tap_intr.c
index 1cacc15d9f..b1a339f8bd 100644
--- a/drivers/net/tap/tap_intr.c
+++ b/drivers/net/tap/tap_intr.c
@@ -29,12 +29,14 @@ static void
tap_rx_intr_vec_uninstall(struct rte_eth_dev *dev)
{
struct pmd_internals *pmd = dev->data->dev_private;
- struct rte_intr_handle *intr_handle = &pmd->intr_handle;
+ struct rte_intr_handle *intr_handle = pmd->intr_handle;
rte_intr_free_epoll_fd(intr_handle);
- free(intr_handle->intr_vec);
- intr_handle->intr_vec = NULL;
- intr_handle->nb_efd = 0;
+ if (rte_intr_handle_vec_list_base(intr_handle))
+ rte_intr_handle_vec_list_free(intr_handle);
+ rte_intr_handle_nb_efd_set(intr_handle, 0);
+
+ rte_intr_handle_instance_free(intr_handle);
}
/**
@@ -52,15 +54,15 @@ tap_rx_intr_vec_install(struct rte_eth_dev *dev)
struct pmd_internals *pmd = dev->data->dev_private;
struct pmd_process_private *process_private = dev->process_private;
unsigned int rxqs_n = pmd->dev->data->nb_rx_queues;
- struct rte_intr_handle *intr_handle = &pmd->intr_handle;
+ struct rte_intr_handle *intr_handle = pmd->intr_handle;
unsigned int n = RTE_MIN(rxqs_n, (uint32_t)RTE_MAX_RXTX_INTR_VEC_ID);
unsigned int i;
unsigned int count = 0;
if (!dev->data->dev_conf.intr_conf.rxq)
return 0;
- intr_handle->intr_vec = malloc(sizeof(int) * rxqs_n);
- if (intr_handle->intr_vec == NULL) {
+
+ if (rte_intr_handle_vec_list_alloc(intr_handle, NULL, rxqs_n)) {
rte_errno = ENOMEM;
TAP_LOG(ERR,
"failed to allocate memory for interrupt vector,"
@@ -73,19 +75,24 @@ tap_rx_intr_vec_install(struct rte_eth_dev *dev)
/* Skip queues that cannot request interrupts. */
if (!rxq || process_private->rxq_fds[i] == -1) {
/* Use invalid intr_vec[] index to disable entry. */
- intr_handle->intr_vec[i] =
- RTE_INTR_VEC_RXTX_OFFSET +
- RTE_MAX_RXTX_INTR_VEC_ID;
+ if (rte_intr_handle_vec_list_index_set(intr_handle, i,
+ RTE_INTR_VEC_RXTX_OFFSET + RTE_MAX_RXTX_INTR_VEC_ID))
+ return -rte_errno;
continue;
}
- intr_handle->intr_vec[i] = RTE_INTR_VEC_RXTX_OFFSET + count;
- intr_handle->efds[count] = process_private->rxq_fds[i];
+ if (rte_intr_handle_vec_list_index_set(intr_handle, i,
+ RTE_INTR_VEC_RXTX_OFFSET + count))
+ return -rte_errno;
+ if (rte_intr_handle_efds_index_set(intr_handle, count,
+ process_private->rxq_fds[i]))
+ return -rte_errno;
count++;
}
if (!count)
tap_rx_intr_vec_uninstall(dev);
else
- intr_handle->nb_efd = count;
+ if (rte_intr_handle_nb_efd_set(intr_handle, count))
+ return -rte_errno;
return 0;
}
diff --git a/drivers/net/thunderx/nicvf_ethdev.c b/drivers/net/thunderx/nicvf_ethdev.c
index fc1844ddfc..8dacae980c 100644
--- a/drivers/net/thunderx/nicvf_ethdev.c
+++ b/drivers/net/thunderx/nicvf_ethdev.c
@@ -1876,6 +1876,9 @@ nicvf_dev_close(struct rte_eth_dev *dev)
nicvf_periodic_alarm_stop(nicvf_vf_interrupt, nic->snicvf[i]);
}
+ if (nic->intr_handle)
+ rte_intr_handle_instance_free(nic->intr_handle);
+
return 0;
}
@@ -2175,6 +2178,16 @@ nicvf_eth_dev_init(struct rte_eth_dev *eth_dev)
goto fail;
}
+ /* Allocate interrupt instance */
+ nic->intr_handle =
+ rte_intr_handle_instance_alloc(RTE_INTR_HANDLE_DEFAULT_SIZE,
+ true);
+ if (!nic->intr_handle) {
+ PMD_INIT_LOG(ERR, "Failed to allocate intr handle");
+ ret = -ENODEV;
+ goto fail;
+ }
+
nicvf_disable_all_interrupts(nic);
ret = nicvf_periodic_alarm_start(nicvf_interrupt, eth_dev);
diff --git a/drivers/net/thunderx/nicvf_struct.h b/drivers/net/thunderx/nicvf_struct.h
index 0ca207d0dd..c7ea13313e 100644
--- a/drivers/net/thunderx/nicvf_struct.h
+++ b/drivers/net/thunderx/nicvf_struct.h
@@ -100,7 +100,7 @@ struct nicvf {
uint16_t subsystem_vendor_id;
struct nicvf_rbdr *rbdr;
struct nicvf_rss_reta_info rss_info;
- struct rte_intr_handle intr_handle;
+ struct rte_intr_handle *intr_handle;
uint8_t cpi_alg;
uint16_t mtu;
int skip_bytes;
diff --git a/drivers/net/txgbe/txgbe_ethdev.c b/drivers/net/txgbe/txgbe_ethdev.c
index e62675520a..44f4bc7d81 100644
--- a/drivers/net/txgbe/txgbe_ethdev.c
+++ b/drivers/net/txgbe/txgbe_ethdev.c
@@ -547,7 +547,7 @@ eth_txgbe_dev_init(struct rte_eth_dev *eth_dev, void *init_params __rte_unused)
struct txgbe_dcb_config *dcb_config = TXGBE_DEV_DCB_CONFIG(eth_dev);
struct txgbe_filter_info *filter_info = TXGBE_DEV_FILTER(eth_dev);
struct txgbe_bw_conf *bw_conf = TXGBE_DEV_BW_CONF(eth_dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
const struct rte_memzone *mz;
uint32_t ctrl_ext;
uint16_t csum;
@@ -1619,7 +1619,7 @@ txgbe_dev_start(struct rte_eth_dev *dev)
struct txgbe_hw_stats *hw_stats = TXGBE_DEV_STATS(dev);
struct txgbe_vf_info *vfinfo = *TXGBE_DEV_VFDATA(dev);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
uint32_t intr_vector = 0;
int err;
bool link_up = false, negotiate = 0;
@@ -1679,17 +1679,15 @@ txgbe_dev_start(struct rte_eth_dev *dev)
return -1;
}
- if (rte_intr_dp_is_en(intr_handle) && !intr_handle->intr_vec) {
- intr_handle->intr_vec =
- rte_zmalloc("intr_vec",
- dev->data->nb_rx_queues * sizeof(int), 0);
- if (intr_handle->intr_vec == NULL) {
+ if (rte_intr_dp_is_en(intr_handle) &&
+ !rte_intr_handle_vec_list_base(intr_handle)) {
+ if (rte_intr_handle_vec_list_alloc(intr_handle, "intr_vec",
+ dev->data->nb_rx_queues)) {
PMD_INIT_LOG(ERR, "Failed to allocate %d rx_queues"
" intr_vec", dev->data->nb_rx_queues);
return -ENOMEM;
}
}
-
/* confiugre msix for sleep until rx interrupt */
txgbe_configure_msix(dev);
@@ -1870,7 +1868,7 @@ txgbe_dev_stop(struct rte_eth_dev *dev)
struct txgbe_hw *hw = TXGBE_DEV_HW(dev);
struct txgbe_vf_info *vfinfo = *TXGBE_DEV_VFDATA(dev);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
int vf;
struct txgbe_tm_conf *tm_conf = TXGBE_DEV_TM_CONF(dev);
@@ -1920,10 +1918,8 @@ txgbe_dev_stop(struct rte_eth_dev *dev)
/* Clean datapath event and queue/vec mapping */
rte_intr_efd_disable(intr_handle);
- if (intr_handle->intr_vec != NULL) {
- rte_free(intr_handle->intr_vec);
- intr_handle->intr_vec = NULL;
- }
+ if (rte_intr_handle_vec_list_base(intr_handle))
+ rte_intr_handle_vec_list_free(intr_handle);
/* reset hierarchy commit */
tm_conf->committed = false;
@@ -1985,7 +1981,7 @@ txgbe_dev_close(struct rte_eth_dev *dev)
{
struct txgbe_hw *hw = TXGBE_DEV_HW(dev);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
int retries = 0;
int ret;
@@ -3103,7 +3099,7 @@ txgbe_dev_interrupt_delayed_handler(void *param)
{
struct rte_eth_dev *dev = (struct rte_eth_dev *)param;
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
struct txgbe_interrupt *intr = TXGBE_DEV_INTR(dev);
struct txgbe_hw *hw = TXGBE_DEV_HW(dev);
uint32_t eicr;
@@ -3636,7 +3632,7 @@ static int
txgbe_dev_rx_queue_intr_enable(struct rte_eth_dev *dev, uint16_t queue_id)
{
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
uint32_t mask;
struct txgbe_hw *hw = TXGBE_DEV_HW(dev);
@@ -3718,7 +3714,7 @@ static void
txgbe_configure_msix(struct rte_eth_dev *dev)
{
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
struct txgbe_hw *hw = TXGBE_DEV_HW(dev);
uint32_t queue_id, base = TXGBE_MISC_VEC_ID;
uint32_t vec = TXGBE_MISC_VEC_ID;
@@ -3752,8 +3748,10 @@ txgbe_configure_msix(struct rte_eth_dev *dev)
queue_id++) {
/* by default, 1:1 mapping */
txgbe_set_ivar_map(hw, 0, queue_id, vec);
- intr_handle->intr_vec[queue_id] = vec;
- if (vec < base + intr_handle->nb_efd - 1)
+ rte_intr_handle_vec_list_index_set(intr_handle,
+ queue_id, vec);
+ if (vec < base + rte_intr_handle_nb_efd_get(intr_handle)
+ - 1)
vec++;
}
diff --git a/drivers/net/txgbe/txgbe_ethdev_vf.c b/drivers/net/txgbe/txgbe_ethdev_vf.c
index 0bae6ffd1f..6e51e4d9c2 100644
--- a/drivers/net/txgbe/txgbe_ethdev_vf.c
+++ b/drivers/net/txgbe/txgbe_ethdev_vf.c
@@ -166,7 +166,7 @@ eth_txgbevf_dev_init(struct rte_eth_dev *eth_dev)
int err;
uint32_t tc, tcs;
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
struct txgbe_hw *hw = TXGBE_DEV_HW(eth_dev);
struct txgbe_vfta *shadow_vfta = TXGBE_DEV_VFTA(eth_dev);
struct txgbe_hwstrip *hwstrip = TXGBE_DEV_HWSTRIP(eth_dev);
@@ -613,7 +613,7 @@ txgbevf_dev_start(struct rte_eth_dev *dev)
struct txgbe_hw *hw = TXGBE_DEV_HW(dev);
uint32_t intr_vector = 0;
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
int err, mask = 0;
@@ -673,11 +673,10 @@ txgbevf_dev_start(struct rte_eth_dev *dev)
return -1;
}
- if (rte_intr_dp_is_en(intr_handle) && !intr_handle->intr_vec) {
- intr_handle->intr_vec =
- rte_zmalloc("intr_vec",
- dev->data->nb_rx_queues * sizeof(int), 0);
- if (intr_handle->intr_vec == NULL) {
+ if (rte_intr_dp_is_en(intr_handle) &&
+ !rte_intr_handle_vec_list_base(intr_handle)) {
+ if (rte_intr_handle_vec_list_alloc(intr_handle, "intr_vec",
+ dev->data->nb_rx_queues)) {
PMD_INIT_LOG(ERR, "Failed to allocate %d rx_queues"
" intr_vec", dev->data->nb_rx_queues);
return -ENOMEM;
@@ -716,7 +715,7 @@ txgbevf_dev_stop(struct rte_eth_dev *dev)
struct txgbe_hw *hw = TXGBE_DEV_HW(dev);
struct txgbe_adapter *adapter = TXGBE_DEV_ADAPTER(dev);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
if (hw->adapter_stopped)
return 0;
@@ -743,10 +742,8 @@ txgbevf_dev_stop(struct rte_eth_dev *dev)
/* Clean datapath event and queue/vec mapping */
rte_intr_efd_disable(intr_handle);
- if (intr_handle->intr_vec != NULL) {
- rte_free(intr_handle->intr_vec);
- intr_handle->intr_vec = NULL;
- }
+ if (rte_intr_handle_vec_list_base(intr_handle))
+ rte_intr_handle_vec_list_free(intr_handle);
adapter->rss_reta_updated = 0;
@@ -758,7 +755,7 @@ txgbevf_dev_close(struct rte_eth_dev *dev)
{
struct txgbe_hw *hw = TXGBE_DEV_HW(dev);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
int ret;
PMD_INIT_FUNC_TRACE();
@@ -919,7 +916,7 @@ static int
txgbevf_dev_rx_queue_intr_enable(struct rte_eth_dev *dev, uint16_t queue_id)
{
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
struct txgbe_interrupt *intr = TXGBE_DEV_INTR(dev);
struct txgbe_hw *hw = TXGBE_DEV_HW(dev);
uint32_t vec = TXGBE_MISC_VEC_ID;
@@ -941,7 +938,7 @@ txgbevf_dev_rx_queue_intr_disable(struct rte_eth_dev *dev, uint16_t queue_id)
struct txgbe_interrupt *intr = TXGBE_DEV_INTR(dev);
struct txgbe_hw *hw = TXGBE_DEV_HW(dev);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
uint32_t vec = TXGBE_MISC_VEC_ID;
if (rte_intr_allow_others(intr_handle))
@@ -981,7 +978,7 @@ static void
txgbevf_configure_msix(struct rte_eth_dev *dev)
{
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
struct txgbe_hw *hw = TXGBE_DEV_HW(dev);
uint32_t q_idx;
uint32_t vector_idx = TXGBE_MISC_VEC_ID;
@@ -1007,8 +1004,10 @@ txgbevf_configure_msix(struct rte_eth_dev *dev)
* as TXGBE_VF_MAXMSIVECOTR = 1
*/
txgbevf_set_ivar_map(hw, 0, q_idx, vector_idx);
- intr_handle->intr_vec[q_idx] = vector_idx;
- if (vector_idx < base + intr_handle->nb_efd - 1)
+ rte_intr_handle_vec_list_index_set(intr_handle, q_idx,
+ vector_idx);
+ if (vector_idx < base + rte_intr_handle_nb_efd_get(intr_handle)
+ - 1)
vector_idx++;
}
diff --git a/drivers/net/vhost/rte_eth_vhost.c b/drivers/net/vhost/rte_eth_vhost.c
index a202931e9a..a595352e63 100644
--- a/drivers/net/vhost/rte_eth_vhost.c
+++ b/drivers/net/vhost/rte_eth_vhost.c
@@ -529,40 +529,43 @@ static int
eth_vhost_update_intr(struct rte_eth_dev *eth_dev, uint16_t rxq_idx)
{
struct rte_intr_handle *handle = eth_dev->intr_handle;
- struct rte_epoll_event rev;
+ struct rte_epoll_event rev, *elist;
int epfd, ret;
if (!handle)
return 0;
- if (handle->efds[rxq_idx] == handle->elist[rxq_idx].fd)
+ elist = rte_intr_handle_elist_index_get(handle, rxq_idx);
+ if (rte_intr_handle_efds_index_get(handle, rxq_idx) == elist->fd)
return 0;
VHOST_LOG(INFO, "kickfd for rxq-%d was changed, updating handler.\n",
rxq_idx);
- if (handle->elist[rxq_idx].fd != -1)
+ if (elist->fd != -1)
VHOST_LOG(ERR, "Unexpected previous kickfd value (Got %d, expected -1).\n",
- handle->elist[rxq_idx].fd);
+ elist->fd);
/*
* First remove invalid epoll event, and then install
* the new one. May be solved with a proper API in the
* future.
*/
- epfd = handle->elist[rxq_idx].epfd;
- rev = handle->elist[rxq_idx];
+ epfd = elist->epfd;
+ rev = *elist;
ret = rte_epoll_ctl(epfd, EPOLL_CTL_DEL, rev.fd,
- &handle->elist[rxq_idx]);
+ elist);
if (ret) {
VHOST_LOG(ERR, "Delete epoll event failed.\n");
return ret;
}
- rev.fd = handle->efds[rxq_idx];
- handle->elist[rxq_idx] = rev;
- ret = rte_epoll_ctl(epfd, EPOLL_CTL_ADD, rev.fd,
- &handle->elist[rxq_idx]);
+ rev.fd = rte_intr_handle_efds_index_get(handle, rxq_idx);
+ if (rte_intr_handle_elist_index_set(handle, rxq_idx, rev))
+ return -rte_errno;
+
+ elist = rte_intr_handle_elist_index_get(handle, rxq_idx);
+ ret = rte_epoll_ctl(epfd, EPOLL_CTL_ADD, rev.fd, elist);
if (ret) {
VHOST_LOG(ERR, "Add epoll event failed.\n");
return ret;
@@ -641,9 +644,10 @@ eth_vhost_uninstall_intr(struct rte_eth_dev *dev)
struct rte_intr_handle *intr_handle = dev->intr_handle;
if (intr_handle) {
- if (intr_handle->intr_vec)
- free(intr_handle->intr_vec);
- free(intr_handle);
+ if (rte_intr_handle_vec_list_base(intr_handle))
+ rte_intr_handle_vec_list_free(intr_handle);
+
+ rte_intr_handle_instance_free(intr_handle);
}
dev->intr_handle = NULL;
@@ -662,29 +666,32 @@ eth_vhost_install_intr(struct rte_eth_dev *dev)
if (dev->intr_handle)
eth_vhost_uninstall_intr(dev);
- dev->intr_handle = malloc(sizeof(*dev->intr_handle));
+ dev->intr_handle =
+ rte_intr_handle_instance_alloc(RTE_INTR_HANDLE_DEFAULT_SIZE,
+ false);
if (!dev->intr_handle) {
VHOST_LOG(ERR, "Fail to allocate intr_handle\n");
return -ENOMEM;
}
- memset(dev->intr_handle, 0, sizeof(*dev->intr_handle));
-
- dev->intr_handle->efd_counter_size = sizeof(uint64_t);
+ if (rte_intr_handle_efd_counter_size_set(dev->intr_handle,
+ sizeof(uint64_t)))
+ return -rte_errno;
- dev->intr_handle->intr_vec =
- malloc(nb_rxq * sizeof(dev->intr_handle->intr_vec[0]));
-
- if (!dev->intr_handle->intr_vec) {
+ if (rte_intr_handle_vec_list_alloc(dev->intr_handle, NULL, nb_rxq)) {
VHOST_LOG(ERR,
"Failed to allocate memory for interrupt vector\n");
- free(dev->intr_handle);
+ rte_intr_handle_instance_free(dev->intr_handle);
return -ENOMEM;
}
+
VHOST_LOG(INFO, "Prepare intr vec\n");
for (i = 0; i < nb_rxq; i++) {
- dev->intr_handle->intr_vec[i] = RTE_INTR_VEC_RXTX_OFFSET + i;
- dev->intr_handle->efds[i] = -1;
+ if (rte_intr_handle_vec_list_index_set(dev->intr_handle, i,
+ RTE_INTR_VEC_RXTX_OFFSET + i))
+ return -rte_errno;
+ if (rte_intr_handle_efds_index_set(dev->intr_handle, i, -1))
+ return -rte_errno;
vq = dev->data->rx_queues[i];
if (!vq) {
VHOST_LOG(INFO, "rxq-%d not setup yet, skip!\n", i);
@@ -703,13 +710,21 @@ eth_vhost_install_intr(struct rte_eth_dev *dev)
"rxq-%d's kickfd is invalid, skip!\n", i);
continue;
}
- dev->intr_handle->efds[i] = vring.kickfd;
+
+ if (rte_intr_handle_efds_index_set(dev->intr_handle, i,
+ vring.kickfd))
+ continue;
VHOST_LOG(INFO, "Installed intr vec for rxq-%d\n", i);
}
- dev->intr_handle->nb_efd = nb_rxq;
- dev->intr_handle->max_intr = nb_rxq + 1;
- dev->intr_handle->type = RTE_INTR_HANDLE_VDEV;
+ if (rte_intr_handle_nb_efd_set(dev->intr_handle, nb_rxq))
+ return -rte_errno;
+
+ if (rte_intr_handle_max_intr_set(dev->intr_handle, nb_rxq + 1))
+ return -rte_errno;
+
+ if (rte_intr_handle_type_set(dev->intr_handle, RTE_INTR_HANDLE_VDEV))
+ return -rte_errno;
return 0;
}
@@ -914,7 +929,10 @@ vring_conf_update(int vid, struct rte_eth_dev *eth_dev, uint16_t vring_id)
vring_id);
return ret;
}
- eth_dev->intr_handle->efds[rx_idx] = vring.kickfd;
+
+ if (rte_intr_handle_efds_index_set(eth_dev->intr_handle, rx_idx,
+ vring.kickfd))
+ return -rte_errno;
vq = eth_dev->data->rx_queues[rx_idx];
if (!vq) {
diff --git a/drivers/net/virtio/virtio_ethdev.c b/drivers/net/virtio/virtio_ethdev.c
index e58085a2c9..4de1c929a9 100644
--- a/drivers/net/virtio/virtio_ethdev.c
+++ b/drivers/net/virtio/virtio_ethdev.c
@@ -722,8 +722,8 @@ virtio_dev_close(struct rte_eth_dev *dev)
if (intr_conf->lsc || intr_conf->rxq) {
virtio_intr_disable(dev);
rte_intr_efd_disable(dev->intr_handle);
- rte_free(dev->intr_handle->intr_vec);
- dev->intr_handle->intr_vec = NULL;
+ if (rte_intr_handle_vec_list_base(dev->intr_handle))
+ rte_intr_handle_vec_list_free(dev->intr_handle);
}
virtio_reset(hw);
@@ -1634,7 +1634,9 @@ virtio_queues_bind_intr(struct rte_eth_dev *dev)
PMD_INIT_LOG(INFO, "queue/interrupt binding");
for (i = 0; i < dev->data->nb_rx_queues; ++i) {
- dev->intr_handle->intr_vec[i] = i + 1;
+ if (rte_intr_handle_vec_list_index_set(dev->intr_handle, i,
+ i + 1))
+ return -rte_errno;
if (VIRTIO_OPS(hw)->set_queue_irq(hw, hw->vqs[i * 2], i + 1) ==
VIRTIO_MSI_NO_VECTOR) {
PMD_DRV_LOG(ERR, "failed to set queue vector");
@@ -1673,11 +1675,10 @@ virtio_configure_intr(struct rte_eth_dev *dev)
return -1;
}
- if (!dev->intr_handle->intr_vec) {
- dev->intr_handle->intr_vec =
- rte_zmalloc("intr_vec",
- hw->max_queue_pairs * sizeof(int), 0);
- if (!dev->intr_handle->intr_vec) {
+ if (!rte_intr_handle_vec_list_base(dev->intr_handle)) {
+ if (rte_intr_handle_vec_list_alloc(dev->intr_handle,
+ "intr_vec",
+ hw->max_queue_pairs)) {
PMD_INIT_LOG(ERR, "Failed to allocate %u rxq vectors",
hw->max_queue_pairs);
return -ENOMEM;
diff --git a/drivers/net/virtio/virtio_user/virtio_user_dev.c b/drivers/net/virtio/virtio_user/virtio_user_dev.c
index 16c58710d7..3d0ce9458c 100644
--- a/drivers/net/virtio/virtio_user/virtio_user_dev.c
+++ b/drivers/net/virtio/virtio_user/virtio_user_dev.c
@@ -407,22 +407,40 @@ virtio_user_fill_intr_handle(struct virtio_user_dev *dev)
struct rte_eth_dev *eth_dev = &rte_eth_devices[dev->hw.port_id];
if (!eth_dev->intr_handle) {
- eth_dev->intr_handle = malloc(sizeof(*eth_dev->intr_handle));
+ eth_dev->intr_handle =
+ rte_intr_handle_instance_alloc(RTE_INTR_HANDLE_DEFAULT_SIZE,
+ false);
if (!eth_dev->intr_handle) {
- PMD_DRV_LOG(ERR, "(%s) failed to allocate intr_handle", dev->path);
+ PMD_DRV_LOG(ERR, "(%s) failed to allocate intr_handle",
+ dev->path);
return -1;
}
- memset(eth_dev->intr_handle, 0, sizeof(*eth_dev->intr_handle));
}
for (i = 0; i < dev->max_queue_pairs; ++i)
- eth_dev->intr_handle->efds[i] = dev->callfds[i];
- eth_dev->intr_handle->nb_efd = dev->max_queue_pairs;
- eth_dev->intr_handle->max_intr = dev->max_queue_pairs + 1;
- eth_dev->intr_handle->type = RTE_INTR_HANDLE_VDEV;
+ if (rte_intr_handle_efds_index_set(eth_dev->intr_handle, i,
+ dev->callfds[i]))
+ return -rte_errno;
+
+ if (rte_intr_handle_nb_efd_set(eth_dev->intr_handle,
+ dev->max_queue_pairs))
+ return -rte_errno;
+
+ if (rte_intr_handle_max_intr_set(eth_dev->intr_handle,
+ dev->max_queue_pairs + 1))
+ return -rte_errno;
+
+ if (rte_intr_handle_type_set(eth_dev->intr_handle,
+ RTE_INTR_HANDLE_VDEV))
+ return -rte_errno;
+
/* For virtio vdev, no need to read counter for clean */
- eth_dev->intr_handle->efd_counter_size = 0;
- eth_dev->intr_handle->fd = dev->ops->get_intr_fd(dev);
+ if (rte_intr_handle_efd_counter_size_set(eth_dev->intr_handle, 0))
+ return -rte_errno;
+
+ if (rte_intr_handle_fd_set(eth_dev->intr_handle,
+ dev->ops->get_intr_fd(dev)))
+ return -rte_errno;
return 0;
}
@@ -657,7 +675,7 @@ virtio_user_dev_uninit(struct virtio_user_dev *dev)
struct rte_eth_dev *eth_dev = &rte_eth_devices[dev->hw.port_id];
if (eth_dev->intr_handle) {
- free(eth_dev->intr_handle);
+ rte_intr_handle_instance_free(eth_dev->intr_handle);
eth_dev->intr_handle = NULL;
}
@@ -962,7 +980,7 @@ virtio_user_dev_delayed_disconnect_handler(void *param)
return;
}
PMD_DRV_LOG(DEBUG, "Unregistering intr fd: %d",
- eth_dev->intr_handle->fd);
+ rte_intr_handle_fd_get(eth_dev->intr_handle));
if (rte_intr_callback_unregister(eth_dev->intr_handle,
virtio_interrupt_handler,
eth_dev) != 1)
@@ -972,10 +990,11 @@ virtio_user_dev_delayed_disconnect_handler(void *param)
if (dev->ops->server_disconnect)
dev->ops->server_disconnect(dev);
- eth_dev->intr_handle->fd = dev->ops->get_intr_fd(dev);
+ rte_intr_handle_fd_set(eth_dev->intr_handle,
+ dev->ops->get_intr_fd(dev));
PMD_DRV_LOG(DEBUG, "Registering intr fd: %d",
- eth_dev->intr_handle->fd);
+ rte_intr_handle_fd_get(eth_dev->intr_handle));
if (rte_intr_callback_register(eth_dev->intr_handle,
virtio_interrupt_handler,
@@ -996,16 +1015,18 @@ virtio_user_dev_delayed_intr_reconfig_handler(void *param)
struct rte_eth_dev *eth_dev = &rte_eth_devices[dev->hw.port_id];
PMD_DRV_LOG(DEBUG, "Unregistering intr fd: %d",
- eth_dev->intr_handle->fd);
+ rte_intr_handle_fd_get(eth_dev->intr_handle));
if (rte_intr_callback_unregister(eth_dev->intr_handle,
virtio_interrupt_handler,
eth_dev) != 1)
PMD_DRV_LOG(ERR, "interrupt unregister failed");
- eth_dev->intr_handle->fd = dev->ops->get_intr_fd(dev);
+ rte_intr_handle_fd_set(eth_dev->intr_handle,
+ dev->ops->get_intr_fd(dev));
- PMD_DRV_LOG(DEBUG, "Registering intr fd: %d", eth_dev->intr_handle->fd);
+ PMD_DRV_LOG(DEBUG, "Registering intr fd: %d",
+ rte_intr_handle_fd_get(eth_dev->intr_handle));
if (rte_intr_callback_register(eth_dev->intr_handle,
virtio_interrupt_handler, eth_dev))
diff --git a/drivers/net/vmxnet3/vmxnet3_ethdev.c b/drivers/net/vmxnet3/vmxnet3_ethdev.c
index 1a3291273a..1d0b61d9f2 100644
--- a/drivers/net/vmxnet3/vmxnet3_ethdev.c
+++ b/drivers/net/vmxnet3/vmxnet3_ethdev.c
@@ -620,11 +620,10 @@ vmxnet3_configure_msix(struct rte_eth_dev *dev)
return -1;
}
- if (rte_intr_dp_is_en(intr_handle) && !intr_handle->intr_vec) {
- intr_handle->intr_vec =
- rte_zmalloc("intr_vec",
- dev->data->nb_rx_queues * sizeof(int), 0);
- if (intr_handle->intr_vec == NULL) {
+ if (rte_intr_dp_is_en(intr_handle) &&
+ !rte_intr_handle_vec_list_base(intr_handle)) {
+ if (rte_intr_handle_vec_list_alloc(intr_handle, "intr_vec",
+ dev->data->nb_rx_queues)) {
PMD_INIT_LOG(ERR, "Failed to allocate %d Rx queues intr_vec",
dev->data->nb_rx_queues);
rte_intr_efd_disable(intr_handle);
@@ -635,8 +634,7 @@ vmxnet3_configure_msix(struct rte_eth_dev *dev)
if (!rte_intr_allow_others(intr_handle) &&
dev->data->dev_conf.intr_conf.lsc != 0) {
PMD_INIT_LOG(ERR, "not enough intr vector to support both Rx interrupt and LSC");
- rte_free(intr_handle->intr_vec);
- intr_handle->intr_vec = NULL;
+ rte_intr_handle_vec_list_free(intr_handle);
rte_intr_efd_disable(intr_handle);
return -1;
}
@@ -644,17 +642,19 @@ vmxnet3_configure_msix(struct rte_eth_dev *dev)
/* if we cannot allocate one MSI-X vector per queue, don't enable
* interrupt mode.
*/
- if (hw->intr.num_intrs != (intr_handle->nb_efd + 1)) {
+ if (hw->intr.num_intrs !=
+ (rte_intr_handle_nb_efd_get(intr_handle) + 1)) {
PMD_INIT_LOG(ERR, "Device configured with %d Rx intr vectors, expecting %d",
- hw->intr.num_intrs, intr_handle->nb_efd + 1);
- rte_free(intr_handle->intr_vec);
- intr_handle->intr_vec = NULL;
+ hw->intr.num_intrs,
+ rte_intr_handle_nb_efd_get(intr_handle) + 1);
+ rte_intr_handle_vec_list_free(intr_handle);
rte_intr_efd_disable(intr_handle);
return -1;
}
for (i = 0; i < dev->data->nb_rx_queues; i++)
- intr_handle->intr_vec[i] = i + 1;
+ if (rte_intr_handle_vec_list_index_set(intr_handle, i, i + 1))
+ return -rte_errno;
for (i = 0; i < hw->intr.num_intrs; i++)
hw->intr.mod_levels[i] = UPT1_IML_ADAPTIVE;
@@ -802,7 +802,9 @@ vmxnet3_setup_driver_shared(struct rte_eth_dev *dev)
if (hw->intr.lsc_only)
tqd->conf.intrIdx = 1;
else
- tqd->conf.intrIdx = intr_handle->intr_vec[i];
+ tqd->conf.intrIdx =
+ rte_intr_handle_vec_list_index_get(intr_handle,
+ i);
tqd->status.stopped = TRUE;
tqd->status.error = 0;
memset(&tqd->stats, 0, sizeof(tqd->stats));
@@ -825,7 +827,9 @@ vmxnet3_setup_driver_shared(struct rte_eth_dev *dev)
if (hw->intr.lsc_only)
rqd->conf.intrIdx = 1;
else
- rqd->conf.intrIdx = intr_handle->intr_vec[i];
+ rqd->conf.intrIdx =
+ rte_intr_handle_vec_list_index_get(intr_handle,
+ i);
rqd->status.stopped = TRUE;
rqd->status.error = 0;
memset(&rqd->stats, 0, sizeof(rqd->stats));
@@ -1022,10 +1026,8 @@ vmxnet3_dev_stop(struct rte_eth_dev *dev)
/* Clean datapath event and queue/vector mapping */
rte_intr_efd_disable(intr_handle);
- if (intr_handle->intr_vec != NULL) {
- rte_free(intr_handle->intr_vec);
- intr_handle->intr_vec = NULL;
- }
+ if (rte_intr_handle_vec_list_base(intr_handle))
+ rte_intr_handle_vec_list_free(intr_handle);
/* quiesce the device first */
VMXNET3_WRITE_BAR1_REG(hw, VMXNET3_REG_CMD, VMXNET3_CMD_QUIESCE_DEV);
@@ -1677,7 +1679,9 @@ vmxnet3_dev_rx_queue_intr_enable(struct rte_eth_dev *dev, uint16_t queue_id)
{
struct vmxnet3_hw *hw = dev->data->dev_private;
- vmxnet3_enable_intr(hw, dev->intr_handle->intr_vec[queue_id]);
+ vmxnet3_enable_intr(hw,
+ rte_intr_handle_vec_list_index_get(dev->intr_handle,
+ queue_id));
return 0;
}
@@ -1687,7 +1691,8 @@ vmxnet3_dev_rx_queue_intr_disable(struct rte_eth_dev *dev, uint16_t queue_id)
{
struct vmxnet3_hw *hw = dev->data->dev_private;
- vmxnet3_disable_intr(hw, dev->intr_handle->intr_vec[queue_id]);
+ vmxnet3_disable_intr(hw,
+ rte_intr_handle_vec_list_index_get(dev->intr_handle, queue_id));
return 0;
}
diff --git a/drivers/raw/ifpga/ifpga_rawdev.c b/drivers/raw/ifpga/ifpga_rawdev.c
index 76e6a8530b..4fbe25080e 100644
--- a/drivers/raw/ifpga/ifpga_rawdev.c
+++ b/drivers/raw/ifpga/ifpga_rawdev.c
@@ -73,7 +73,7 @@ static pthread_t ifpga_monitor_start_thread;
#define IFPGA_MAX_IRQ 12
/* 0 for FME interrupt, others are reserved for AFU irq */
-static struct rte_intr_handle ifpga_irq_handle[IFPGA_MAX_IRQ];
+static struct rte_intr_handle *ifpga_irq_handle;
static struct ifpga_rawdev *
ifpga_rawdev_allocate(struct rte_rawdev *rawdev);
@@ -1345,17 +1345,23 @@ ifpga_unregister_msix_irq(enum ifpga_irq_type type,
int vec_start, rte_intr_callback_fn handler, void *arg)
{
struct rte_intr_handle *intr_handle;
+ int rc;
if (type == IFPGA_FME_IRQ)
- intr_handle = &ifpga_irq_handle[0];
+ intr_handle =
+ rte_intr_handle_instance_index_get(ifpga_irq_handle, 0);
else if (type == IFPGA_AFU_IRQ)
- intr_handle = &ifpga_irq_handle[vec_start + 1];
+ intr_handle = rte_intr_handle_instance_index_get(
+ ifpga_irq_handle, vec_start + 1);
else
return 0;
rte_intr_efd_disable(intr_handle);
- return rte_intr_callback_unregister(intr_handle, handler, arg);
+ rc = rte_intr_callback_unregister(intr_handle, handler, arg);
+
+ rte_intr_handle_instance_free(ifpga_irq_handle);
+ return rc;
}
int
@@ -1370,6 +1376,10 @@ ifpga_register_msix_irq(struct rte_rawdev *dev, int port_id,
struct opae_manager *mgr;
struct opae_accelerator *acc;
+ ifpga_irq_handle = rte_intr_handle_instance_alloc(IFPGA_MAX_IRQ, false);
+ if (!ifpga_irq_handle)
+ return -ENOMEM;
+
adapter = ifpga_rawdev_get_priv(dev);
if (!adapter)
return -ENODEV;
@@ -1379,29 +1389,35 @@ ifpga_register_msix_irq(struct rte_rawdev *dev, int port_id,
return -ENODEV;
if (type == IFPGA_FME_IRQ) {
- intr_handle = &ifpga_irq_handle[0];
+ intr_handle =
+ rte_intr_handle_instance_index_get(ifpga_irq_handle, 0);
count = 1;
} else if (type == IFPGA_AFU_IRQ) {
- intr_handle = &ifpga_irq_handle[vec_start + 1];
+ intr_handle = rte_intr_handle_instance_index_get(
+ ifpga_irq_handle, vec_start + 1);
} else {
return -EINVAL;
}
- intr_handle->type = RTE_INTR_HANDLE_VFIO_MSIX;
+ if (rte_intr_handle_type_set(intr_handle, RTE_INTR_HANDLE_VFIO_MSIX))
+ return -rte_errno;
ret = rte_intr_efd_enable(intr_handle, count);
if (ret)
return -ENODEV;
- intr_handle->fd = intr_handle->efds[0];
+ if (rte_intr_handle_fd_set(intr_handle,
+ rte_intr_handle_efds_index_get(intr_handle, 0)))
+ return -rte_errno;
IFPGA_RAWDEV_PMD_DEBUG("register %s irq, vfio_fd=%d, fd=%d\n",
- name, intr_handle->vfio_dev_fd,
- intr_handle->fd);
+ name, rte_intr_handle_dev_fd_get(intr_handle),
+ rte_intr_handle_fd_get(intr_handle));
if (type == IFPGA_FME_IRQ) {
struct fpga_fme_err_irq_set err_irq_set;
- err_irq_set.evtfd = intr_handle->efds[0];
+ err_irq_set.evtfd = rte_intr_handle_efds_index_get(intr_handle,
+ 0);
ret = opae_manager_ifpga_set_err_irq(mgr, &err_irq_set);
if (ret)
@@ -1412,7 +1428,7 @@ ifpga_register_msix_irq(struct rte_rawdev *dev, int port_id,
return -EINVAL;
ret = opae_acc_set_irq(acc, vec_start, count,
- intr_handle->efds);
+ rte_intr_handle_efds_base(intr_handle));
if (ret)
return -EINVAL;
}
@@ -1491,7 +1507,7 @@ ifpga_rawdev_create(struct rte_pci_device *pci_dev,
data->bus = pci_dev->addr.bus;
data->devid = pci_dev->addr.devid;
data->function = pci_dev->addr.function;
- data->vfio_dev_fd = pci_dev->intr_handle.vfio_dev_fd;
+ data->vfio_dev_fd = rte_intr_handle_dev_fd_get(pci_dev->intr_handle);
adapter = rawdev->dev_private;
/* create a opae_adapter based on above device data */
diff --git a/drivers/raw/ntb/ntb.c b/drivers/raw/ntb/ntb.c
index 78cfcd79f7..5497ef2906 100644
--- a/drivers/raw/ntb/ntb.c
+++ b/drivers/raw/ntb/ntb.c
@@ -1044,13 +1044,11 @@ ntb_dev_close(struct rte_rawdev *dev)
ntb_queue_release(dev, i);
hw->queue_pairs = 0;
- intr_handle = &hw->pci_dev->intr_handle;
+ intr_handle = hw->pci_dev->intr_handle;
/* Clean datapath event and vec mapping */
rte_intr_efd_disable(intr_handle);
- if (intr_handle->intr_vec) {
- rte_free(intr_handle->intr_vec);
- intr_handle->intr_vec = NULL;
- }
+ if (rte_intr_handle_vec_list_base(intr_handle))
+ rte_intr_handle_vec_list_free(intr_handle);
/* Disable uio intr before callback unregister */
rte_intr_disable(intr_handle);
@@ -1402,7 +1400,7 @@ ntb_init_hw(struct rte_rawdev *dev, struct rte_pci_device *pci_dev)
/* Init doorbell. */
hw->db_valid_mask = RTE_LEN2MASK(hw->db_cnt, uint64_t);
- intr_handle = &pci_dev->intr_handle;
+ intr_handle = pci_dev->intr_handle;
/* Register callback func to eal lib */
rte_intr_callback_register(intr_handle,
ntb_dev_intr_handler, dev);
diff --git a/drivers/regex/octeontx2/otx2_regexdev_hw_access.c b/drivers/regex/octeontx2/otx2_regexdev_hw_access.c
index 620d5c9122..f8031d0f72 100644
--- a/drivers/regex/octeontx2/otx2_regexdev_hw_access.c
+++ b/drivers/regex/octeontx2/otx2_regexdev_hw_access.c
@@ -31,7 +31,7 @@ ree_lf_err_intr_unregister(const struct rte_regexdev *dev, uint16_t msix_off,
uintptr_t base)
{
struct rte_pci_device *pci_dev = RTE_DEV_TO_PCI(dev->device);
- struct rte_intr_handle *handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *handle = pci_dev->intr_handle;
/* Disable error interrupts */
otx2_write64(~0ull, base + OTX2_REE_LF_MISC_INT_ENA_W1C);
@@ -61,7 +61,7 @@ ree_lf_err_intr_register(const struct rte_regexdev *dev, uint16_t msix_off,
uintptr_t base)
{
struct rte_pci_device *pci_dev = RTE_DEV_TO_PCI(dev->device);
- struct rte_intr_handle *handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *handle = pci_dev->intr_handle;
int ret;
/* Disable error interrupts */
diff --git a/drivers/vdpa/ifc/ifcvf_vdpa.c b/drivers/vdpa/ifc/ifcvf_vdpa.c
index 1dc813d0a3..90b9a73f6a 100644
--- a/drivers/vdpa/ifc/ifcvf_vdpa.c
+++ b/drivers/vdpa/ifc/ifcvf_vdpa.c
@@ -162,7 +162,7 @@ ifcvf_vfio_setup(struct ifcvf_internal *internal)
if (rte_pci_map_device(dev))
goto err;
- internal->vfio_dev_fd = dev->intr_handle.vfio_dev_fd;
+ internal->vfio_dev_fd = rte_intr_handle_dev_fd_get(dev->intr_handle);
for (i = 0; i < RTE_MIN(PCI_MAX_RESOURCE, IFCVF_PCI_MAX_RESOURCE);
i++) {
@@ -365,7 +365,8 @@ vdpa_enable_vfio_intr(struct ifcvf_internal *internal, bool m_rx)
irq_set->index = VFIO_PCI_MSIX_IRQ_INDEX;
irq_set->start = 0;
fd_ptr = (int *)&irq_set->data;
- fd_ptr[RTE_INTR_VEC_ZERO_OFFSET] = internal->pdev->intr_handle.fd;
+ fd_ptr[RTE_INTR_VEC_ZERO_OFFSET] =
+ rte_intr_handle_fd_get(internal->pdev->intr_handle);
for (i = 0; i < nr_vring; i++)
internal->intr_fd[i] = -1;
diff --git a/drivers/vdpa/mlx5/mlx5_vdpa.c b/drivers/vdpa/mlx5/mlx5_vdpa.c
index 6d17d7a6f3..27dc50cc57 100644
--- a/drivers/vdpa/mlx5/mlx5_vdpa.c
+++ b/drivers/vdpa/mlx5/mlx5_vdpa.c
@@ -698,6 +698,13 @@ mlx5_vdpa_dev_probe(struct rte_device *dev)
DRV_LOG(ERR, "Failed to allocate VAR %u.", errno);
goto error;
}
+ priv->err_intr_handle =
+ rte_intr_handle_instance_alloc(RTE_INTR_HANDLE_DEFAULT_SIZE,
+ true);
+ if (!priv->err_intr_handle) {
+ DRV_LOG(ERR, "Fail to allocate intr_handle");
+ goto error;
+ }
priv->vdev = rte_vdpa_register_device(dev, &mlx5_vdpa_ops);
if (priv->vdev == NULL) {
DRV_LOG(ERR, "Failed to register vDPA device.");
@@ -716,6 +723,8 @@ mlx5_vdpa_dev_probe(struct rte_device *dev)
if (priv) {
if (priv->var)
mlx5_glue->dv_free_var(priv->var);
+ if (priv->err_intr_handle)
+ rte_intr_handle_instance_free(priv->err_intr_handle);
rte_free(priv);
}
if (ctx)
@@ -750,6 +759,8 @@ mlx5_vdpa_dev_remove(struct rte_device *dev)
rte_vdpa_unregister_device(priv->vdev);
mlx5_glue->close_device(priv->ctx);
pthread_mutex_destroy(&priv->vq_config_lock);
+ if (priv->err_intr_handle)
+ rte_intr_handle_instance_free(priv->err_intr_handle);
rte_free(priv);
}
return 0;
diff --git a/drivers/vdpa/mlx5/mlx5_vdpa.h b/drivers/vdpa/mlx5/mlx5_vdpa.h
index 2a04e36607..f72cb358ec 100644
--- a/drivers/vdpa/mlx5/mlx5_vdpa.h
+++ b/drivers/vdpa/mlx5/mlx5_vdpa.h
@@ -92,7 +92,7 @@ struct mlx5_vdpa_virtq {
void *buf;
uint32_t size;
} umems[3];
- struct rte_intr_handle intr_handle;
+ struct rte_intr_handle *intr_handle;
uint64_t err_time[3]; /* RDTSC time of recent errors. */
uint32_t n_retry;
struct mlx5_devx_virtio_q_couners_attr reset;
@@ -142,7 +142,7 @@ struct mlx5_vdpa_priv {
struct mlx5dv_devx_event_channel *eventc;
struct mlx5dv_devx_event_channel *err_chnl;
struct mlx5dv_devx_uar *uar;
- struct rte_intr_handle err_intr_handle;
+ struct rte_intr_handle *err_intr_handle;
struct mlx5_devx_obj *td;
struct mlx5_devx_obj *tiss[16]; /* TIS list for each LAG port. */
uint16_t nr_virtqs;
diff --git a/drivers/vdpa/mlx5/mlx5_vdpa_event.c b/drivers/vdpa/mlx5/mlx5_vdpa_event.c
index 3541c652ce..1f3da2461a 100644
--- a/drivers/vdpa/mlx5/mlx5_vdpa_event.c
+++ b/drivers/vdpa/mlx5/mlx5_vdpa_event.c
@@ -410,12 +410,18 @@ mlx5_vdpa_err_event_setup(struct mlx5_vdpa_priv *priv)
DRV_LOG(ERR, "Failed to change device event channel FD.");
goto error;
}
- priv->err_intr_handle.fd = priv->err_chnl->fd;
- priv->err_intr_handle.type = RTE_INTR_HANDLE_EXT;
- if (rte_intr_callback_register(&priv->err_intr_handle,
+
+ if (rte_intr_handle_fd_set(priv->err_intr_handle, priv->err_chnl->fd))
+ goto error;
+
+ if (rte_intr_handle_type_set(priv->err_intr_handle,
+ RTE_INTR_HANDLE_EXT))
+ goto error;
+
+ if (rte_intr_callback_register(priv->err_intr_handle,
mlx5_vdpa_err_interrupt_handler,
priv)) {
- priv->err_intr_handle.fd = 0;
+ rte_intr_handle_fd_set(priv->err_intr_handle, 0);
DRV_LOG(ERR, "Failed to register error interrupt for device %d.",
priv->vid);
goto error;
@@ -435,20 +441,20 @@ mlx5_vdpa_err_event_unset(struct mlx5_vdpa_priv *priv)
int retries = MLX5_VDPA_INTR_RETRIES;
int ret = -EAGAIN;
- if (!priv->err_intr_handle.fd)
+ if (!rte_intr_handle_fd_get(priv->err_intr_handle))
return;
while (retries-- && ret == -EAGAIN) {
- ret = rte_intr_callback_unregister(&priv->err_intr_handle,
+ ret = rte_intr_callback_unregister(priv->err_intr_handle,
mlx5_vdpa_err_interrupt_handler,
priv);
if (ret == -EAGAIN) {
DRV_LOG(DEBUG, "Try again to unregister fd %d "
"of error interrupt, retries = %d.",
- priv->err_intr_handle.fd, retries);
+ rte_intr_handle_fd_get(priv->err_intr_handle),
+ retries);
rte_pause();
}
}
- memset(&priv->err_intr_handle, 0, sizeof(priv->err_intr_handle));
if (priv->err_chnl) {
#ifdef HAVE_IBV_DEVX_EVENT
union {
diff --git a/drivers/vdpa/mlx5/mlx5_vdpa_virtq.c b/drivers/vdpa/mlx5/mlx5_vdpa_virtq.c
index f530646058..b9d03953ac 100644
--- a/drivers/vdpa/mlx5/mlx5_vdpa_virtq.c
+++ b/drivers/vdpa/mlx5/mlx5_vdpa_virtq.c
@@ -24,7 +24,8 @@ mlx5_vdpa_virtq_handler(void *cb_arg)
int nbytes;
do {
- nbytes = read(virtq->intr_handle.fd, &buf, 8);
+ nbytes = read(rte_intr_handle_fd_get(virtq->intr_handle), &buf,
+ 8);
if (nbytes < 0) {
if (errno == EINTR ||
errno == EWOULDBLOCK ||
@@ -57,21 +58,24 @@ mlx5_vdpa_virtq_unset(struct mlx5_vdpa_virtq *virtq)
int retries = MLX5_VDPA_INTR_RETRIES;
int ret = -EAGAIN;
- if (virtq->intr_handle.fd != -1) {
+ if (rte_intr_handle_fd_get(virtq->intr_handle) != -1) {
while (retries-- && ret == -EAGAIN) {
- ret = rte_intr_callback_unregister(&virtq->intr_handle,
+ ret = rte_intr_callback_unregister(virtq->intr_handle,
mlx5_vdpa_virtq_handler,
virtq);
if (ret == -EAGAIN) {
DRV_LOG(DEBUG, "Try again to unregister fd %d "
- "of virtq %d interrupt, retries = %d.",
- virtq->intr_handle.fd,
- (int)virtq->index, retries);
+ "of virtq %d interrupt, retries = %d.",
+ rte_intr_handle_fd_get(virtq->intr_handle),
+ (int)virtq->index, retries);
+
usleep(MLX5_VDPA_INTR_RETRIES_USEC);
}
}
- virtq->intr_handle.fd = -1;
+ rte_intr_handle_fd_set(virtq->intr_handle, -1);
}
+ if (virtq->intr_handle)
+ rte_intr_handle_instance_free(virtq->intr_handle);
if (virtq->virtq) {
ret = mlx5_vdpa_virtq_stop(virtq->priv, virtq->index);
if (ret)
@@ -336,21 +340,34 @@ mlx5_vdpa_virtq_setup(struct mlx5_vdpa_priv *priv, int index)
virtq->priv = priv;
rte_write32(virtq->index, priv->virtq_db_addr);
/* Setup doorbell mapping. */
- virtq->intr_handle.fd = vq.kickfd;
- if (virtq->intr_handle.fd == -1) {
+ virtq->intr_handle =
+ rte_intr_handle_instance_alloc(RTE_INTR_HANDLE_DEFAULT_SIZE,
+ true);
+ if (!virtq->intr_handle) {
+ DRV_LOG(ERR, "Fail to allocate intr_handle");
+ goto error;
+ }
+
+ if (rte_intr_handle_fd_set(virtq->intr_handle, vq.kickfd))
+ goto error;
+
+ if (rte_intr_handle_fd_get(virtq->intr_handle) == -1) {
DRV_LOG(WARNING, "Virtq %d kickfd is invalid.", index);
} else {
- virtq->intr_handle.type = RTE_INTR_HANDLE_EXT;
- if (rte_intr_callback_register(&virtq->intr_handle,
+ if (rte_intr_handle_type_set(virtq->intr_handle,
+ RTE_INTR_HANDLE_EXT))
+ goto error;
+ if (rte_intr_callback_register(virtq->intr_handle,
mlx5_vdpa_virtq_handler,
virtq)) {
- virtq->intr_handle.fd = -1;
+ rte_intr_handle_fd_set(virtq->intr_handle, -1);
DRV_LOG(ERR, "Failed to register virtq %d interrupt.",
index);
goto error;
} else {
DRV_LOG(DEBUG, "Register fd %d interrupt for virtq %d.",
- virtq->intr_handle.fd, index);
+ rte_intr_handle_fd_get(virtq->intr_handle),
+ index);
}
}
/* Subscribe virtq error event. */
@@ -501,7 +518,8 @@ mlx5_vdpa_virtq_is_modified(struct mlx5_vdpa_priv *priv,
if (ret)
return -1;
- if (vq.size != virtq->vq_size || vq.kickfd != virtq->intr_handle.fd)
+ if (vq.size != virtq->vq_size || vq.kickfd !=
+ rte_intr_handle_fd_get(virtq->intr_handle))
return 1;
if (virtq->eqp.cq.cq_obj.cq) {
if (vq.callfd != virtq->eqp.cq.callfd)
diff --git a/lib/bbdev/rte_bbdev.c b/lib/bbdev/rte_bbdev.c
index fc37236195..fdc9aeb894 100644
--- a/lib/bbdev/rte_bbdev.c
+++ b/lib/bbdev/rte_bbdev.c
@@ -1093,7 +1093,7 @@ rte_bbdev_queue_intr_ctl(uint16_t dev_id, uint16_t queue_id, int epfd, int op,
VALID_QUEUE_OR_RET_ERR(queue_id, dev);
intr_handle = dev->intr_handle;
- if (!intr_handle || !intr_handle->intr_vec) {
+ if (!intr_handle || !rte_intr_handle_vec_list_base(intr_handle)) {
rte_bbdev_log(ERR, "Device %u intr handle unset\n", dev_id);
return -ENOTSUP;
}
@@ -1104,7 +1104,7 @@ rte_bbdev_queue_intr_ctl(uint16_t dev_id, uint16_t queue_id, int epfd, int op,
return -ENOTSUP;
}
- vec = intr_handle->intr_vec[queue_id];
+ vec = rte_intr_handle_vec_list_index_get(intr_handle, queue_id);
ret = rte_intr_rx_ctl(intr_handle, epfd, op, vec, data);
if (ret && (ret != -EEXIST)) {
rte_bbdev_log(ERR,
diff --git a/lib/eal/freebsd/eal_alarm.c b/lib/eal/freebsd/eal_alarm.c
index c38b2e04f8..07baecd64f 100644
--- a/lib/eal/freebsd/eal_alarm.c
+++ b/lib/eal/freebsd/eal_alarm.c
@@ -46,6 +46,13 @@ static rte_spinlock_t alarm_list_lk = RTE_SPINLOCK_INITIALIZER;
static struct rte_intr_handle intr_handle = {.fd = -1 };
static void eal_alarm_callback(void *arg);
+void
+rte_eal_alarm_fini(void)
+{
+ if (intr_handle)
+ rte_intr_handle_instance_free(intr_handle);
+}
+
int
rte_eal_alarm_init(void)
{
diff --git a/lib/eal/include/rte_eal_trace.h b/lib/eal/include/rte_eal_trace.h
index 495ae1ee1d..792872dffd 100644
--- a/lib/eal/include/rte_eal_trace.h
+++ b/lib/eal/include/rte_eal_trace.h
@@ -149,11 +149,7 @@ RTE_TRACE_POINT(
RTE_TRACE_POINT_ARGS(const struct rte_intr_handle *handle,
rte_intr_callback_fn cb, void *cb_arg, int rc),
rte_trace_point_emit_int(rc);
- rte_trace_point_emit_int(handle->vfio_dev_fd);
- rte_trace_point_emit_int(handle->fd);
- rte_trace_point_emit_int(handle->type);
- rte_trace_point_emit_u32(handle->max_intr);
- rte_trace_point_emit_u32(handle->nb_efd);
+ rte_trace_point_emit_ptr(handle);
rte_trace_point_emit_ptr(cb);
rte_trace_point_emit_ptr(cb_arg);
)
@@ -162,11 +158,7 @@ RTE_TRACE_POINT(
RTE_TRACE_POINT_ARGS(const struct rte_intr_handle *handle,
rte_intr_callback_fn cb, void *cb_arg, int rc),
rte_trace_point_emit_int(rc);
- rte_trace_point_emit_int(handle->vfio_dev_fd);
- rte_trace_point_emit_int(handle->fd);
- rte_trace_point_emit_int(handle->type);
- rte_trace_point_emit_u32(handle->max_intr);
- rte_trace_point_emit_u32(handle->nb_efd);
+ rte_trace_point_emit_ptr(handle);
rte_trace_point_emit_ptr(cb);
rte_trace_point_emit_ptr(cb_arg);
)
@@ -174,21 +166,13 @@ RTE_TRACE_POINT(
rte_eal_trace_intr_enable,
RTE_TRACE_POINT_ARGS(const struct rte_intr_handle *handle, int rc),
rte_trace_point_emit_int(rc);
- rte_trace_point_emit_int(handle->vfio_dev_fd);
- rte_trace_point_emit_int(handle->fd);
- rte_trace_point_emit_int(handle->type);
- rte_trace_point_emit_u32(handle->max_intr);
- rte_trace_point_emit_u32(handle->nb_efd);
+ rte_trace_point_emit_ptr(handle);
)
RTE_TRACE_POINT(
rte_eal_trace_intr_disable,
RTE_TRACE_POINT_ARGS(const struct rte_intr_handle *handle, int rc),
rte_trace_point_emit_int(rc);
- rte_trace_point_emit_int(handle->vfio_dev_fd);
- rte_trace_point_emit_int(handle->fd);
- rte_trace_point_emit_int(handle->type);
- rte_trace_point_emit_u32(handle->max_intr);
- rte_trace_point_emit_u32(handle->nb_efd);
+ rte_trace_point_emit_ptr(handle);
)
/* Memory */
diff --git a/lib/eal/linux/eal_alarm.c b/lib/eal/linux/eal_alarm.c
index 3252c6fa59..e959fba27b 100644
--- a/lib/eal/linux/eal_alarm.c
+++ b/lib/eal/linux/eal_alarm.c
@@ -54,22 +54,37 @@ struct alarm_entry {
static LIST_HEAD(alarm_list, alarm_entry) alarm_list = LIST_HEAD_INITIALIZER();
static rte_spinlock_t alarm_list_lk = RTE_SPINLOCK_INITIALIZER;
-static struct rte_intr_handle intr_handle = {.fd = -1 };
+static struct rte_intr_handle *intr_handle;
static int handler_registered = 0;
static void eal_alarm_callback(void *arg);
int
rte_eal_alarm_init(void)
{
- intr_handle.type = RTE_INTR_HANDLE_ALARM;
+
+ intr_handle =
+ rte_intr_handle_instance_alloc(RTE_INTR_HANDLE_DEFAULT_SIZE,
+ false);
+ if (!intr_handle) {
+ RTE_LOG(ERR, EAL, "Fail to allocate intr_handle\n");
+ goto error;
+ }
+
+ rte_intr_handle_type_set(intr_handle, RTE_INTR_HANDLE_ALARM);
+
/* create a timerfd file descriptor */
- intr_handle.fd = timerfd_create(CLOCK_MONOTONIC, TFD_NONBLOCK);
- if (intr_handle.fd == -1)
+ if (rte_intr_handle_fd_set(intr_handle,
+ timerfd_create(CLOCK_MONOTONIC, TFD_NONBLOCK)))
goto error;
+ if (rte_intr_handle_fd_get(intr_handle) == -1)
+ goto error;
return 0;
error:
+ if (intr_handle)
+ rte_intr_handle_instance_free(intr_handle);
+
rte_errno = errno;
return -1;
}
@@ -109,7 +124,8 @@ eal_alarm_callback(void *arg __rte_unused)
atime.it_value.tv_sec -= now.tv_sec;
atime.it_value.tv_nsec -= now.tv_nsec;
- timerfd_settime(intr_handle.fd, 0, &atime, NULL);
+ timerfd_settime(rte_intr_handle_fd_get(intr_handle), 0, &atime,
+ NULL);
}
rte_spinlock_unlock(&alarm_list_lk);
}
@@ -140,7 +156,7 @@ rte_eal_alarm_set(uint64_t us, rte_eal_alarm_callback cb_fn, void *cb_arg)
rte_spinlock_lock(&alarm_list_lk);
if (!handler_registered) {
/* registration can fail, callback can be registered later */
- if (rte_intr_callback_register(&intr_handle,
+ if (rte_intr_callback_register(intr_handle,
eal_alarm_callback, NULL) == 0)
handler_registered = 1;
}
@@ -170,7 +186,8 @@ rte_eal_alarm_set(uint64_t us, rte_eal_alarm_callback cb_fn, void *cb_arg)
.tv_nsec = (us % US_PER_S) * NS_PER_US,
},
};
- ret |= timerfd_settime(intr_handle.fd, 0, &alarm_time, NULL);
+ ret |= timerfd_settime(rte_intr_handle_fd_get(intr_handle), 0,
+ &alarm_time, NULL);
}
rte_spinlock_unlock(&alarm_list_lk);
diff --git a/lib/eal/linux/eal_dev.c b/lib/eal/linux/eal_dev.c
index 3b905e18f5..14d693cd88 100644
--- a/lib/eal/linux/eal_dev.c
+++ b/lib/eal/linux/eal_dev.c
@@ -23,10 +23,7 @@
#include "eal_private.h"
-static struct rte_intr_handle intr_handle = {
- .type = RTE_INTR_HANDLE_DEV_EVENT,
- .fd = -1,
-};
+static struct rte_intr_handle *intr_handle;
static rte_rwlock_t monitor_lock = RTE_RWLOCK_INITIALIZER;
static uint32_t monitor_refcount;
static bool hotplug_handle;
@@ -109,12 +106,11 @@ static int
dev_uev_socket_fd_create(void)
{
struct sockaddr_nl addr;
- int ret;
+ int ret, fd;
- intr_handle.fd = socket(PF_NETLINK, SOCK_RAW | SOCK_CLOEXEC |
- SOCK_NONBLOCK,
- NETLINK_KOBJECT_UEVENT);
- if (intr_handle.fd < 0) {
+ fd = socket(PF_NETLINK, SOCK_RAW | SOCK_CLOEXEC | SOCK_NONBLOCK,
+ NETLINK_KOBJECT_UEVENT);
+ if (fd < 0) {
RTE_LOG(ERR, EAL, "create uevent fd failed.\n");
return -1;
}
@@ -124,16 +120,19 @@ dev_uev_socket_fd_create(void)
addr.nl_pid = 0;
addr.nl_groups = 0xffffffff;
- ret = bind(intr_handle.fd, (struct sockaddr *) &addr, sizeof(addr));
+ ret = bind(fd, (struct sockaddr *) &addr, sizeof(addr));
if (ret < 0) {
RTE_LOG(ERR, EAL, "Failed to bind uevent socket.\n");
goto err;
}
+ if (rte_intr_handle_fd_set(intr_handle, fd))
+ goto err;
+
return 0;
err:
- close(intr_handle.fd);
- intr_handle.fd = -1;
+ close(fd);
+ fd = -1;
return ret;
}
@@ -217,9 +216,9 @@ dev_uev_parse(const char *buf, struct rte_dev_event *event, int length)
static void
dev_delayed_unregister(void *param)
{
- rte_intr_callback_unregister(&intr_handle, dev_uev_handler, param);
- close(intr_handle.fd);
- intr_handle.fd = -1;
+ rte_intr_callback_unregister(intr_handle, dev_uev_handler, param);
+ close(rte_intr_handle_fd_get(intr_handle));
+ rte_intr_handle_fd_set(intr_handle, -1);
}
static void
@@ -235,7 +234,8 @@ dev_uev_handler(__rte_unused void *param)
memset(&uevent, 0, sizeof(struct rte_dev_event));
memset(buf, 0, EAL_UEV_MSG_LEN);
- ret = recv(intr_handle.fd, buf, EAL_UEV_MSG_LEN, MSG_DONTWAIT);
+ ret = recv(rte_intr_handle_fd_get(intr_handle), buf, EAL_UEV_MSG_LEN,
+ MSG_DONTWAIT);
if (ret < 0 && errno == EAGAIN)
return;
else if (ret <= 0) {
@@ -311,24 +311,40 @@ rte_dev_event_monitor_start(void)
goto exit;
}
+ intr_handle =
+ rte_intr_handle_instance_alloc(RTE_INTR_HANDLE_DEFAULT_SIZE,
+ false);
+ if (!intr_handle) {
+ RTE_LOG(ERR, EAL, "Fail to allocate intr_handle\n");
+ goto exit;
+ }
+
+ if (rte_intr_handle_type_set(intr_handle, RTE_INTR_HANDLE_DEV_EVENT))
+ goto exit;
+
+ if (rte_intr_handle_fd_set(intr_handle, -1))
+ goto exit;
+
ret = dev_uev_socket_fd_create();
if (ret) {
RTE_LOG(ERR, EAL, "error create device event fd.\n");
goto exit;
}
- ret = rte_intr_callback_register(&intr_handle, dev_uev_handler, NULL);
+ ret = rte_intr_callback_register(intr_handle, dev_uev_handler, NULL);
if (ret) {
- RTE_LOG(ERR, EAL, "fail to register uevent callback.\n");
- close(intr_handle.fd);
- intr_handle.fd = -1;
+ close(rte_intr_handle_fd_get(intr_handle));
goto exit;
}
monitor_refcount++;
exit:
+ if (intr_handle) {
+ rte_intr_handle_fd_set(intr_handle, -1);
+ rte_intr_handle_instance_free(intr_handle);
+ }
rte_rwlock_write_unlock(&monitor_lock);
return ret;
}
@@ -350,15 +366,18 @@ rte_dev_event_monitor_stop(void)
goto exit;
}
- ret = rte_intr_callback_unregister(&intr_handle, dev_uev_handler,
+ ret = rte_intr_callback_unregister(intr_handle, dev_uev_handler,
(void *)-1);
if (ret < 0) {
RTE_LOG(ERR, EAL, "fail to unregister uevent callback.\n");
goto exit;
}
- close(intr_handle.fd);
- intr_handle.fd = -1;
+ close(rte_intr_handle_fd_get(intr_handle));
+ rte_intr_handle_fd_set(intr_handle, -1);
+
+ if (intr_handle)
+ rte_intr_handle_instance_free(intr_handle);
monitor_refcount--;
diff --git a/lib/ethdev/ethdev_pci.h b/lib/ethdev/ethdev_pci.h
index 8edca82ce8..eff072ac16 100644
--- a/lib/ethdev/ethdev_pci.h
+++ b/lib/ethdev/ethdev_pci.h
@@ -32,7 +32,7 @@ rte_eth_copy_pci_info(struct rte_eth_dev *eth_dev,
return;
}
- eth_dev->intr_handle = &pci_dev->intr_handle;
+ eth_dev->intr_handle = pci_dev->intr_handle;
if (rte_eal_process_type() == RTE_PROC_PRIMARY) {
eth_dev->data->dev_flags = 0;
diff --git a/lib/ethdev/rte_ethdev.c b/lib/ethdev/rte_ethdev.c
index 9d95cd11e1..b6722f2db5 100644
--- a/lib/ethdev/rte_ethdev.c
+++ b/lib/ethdev/rte_ethdev.c
@@ -4778,13 +4778,13 @@ rte_eth_dev_rx_intr_ctl(uint16_t port_id, int epfd, int op, void *data)
}
intr_handle = dev->intr_handle;
- if (!intr_handle->intr_vec) {
+ if (!rte_intr_handle_vec_list_base(intr_handle)) {
RTE_ETHDEV_LOG(ERR, "RX Intr vector unset\n");
return -EPERM;
}
for (qid = 0; qid < dev->data->nb_rx_queues; qid++) {
- vec = intr_handle->intr_vec[qid];
+ vec = rte_intr_handle_vec_list_index_get(intr_handle, qid);
rc = rte_intr_rx_ctl(intr_handle, epfd, op, vec, data);
if (rc && rc != -EEXIST) {
RTE_ETHDEV_LOG(ERR,
@@ -4819,15 +4819,15 @@ rte_eth_dev_rx_intr_ctl_q_get_fd(uint16_t port_id, uint16_t queue_id)
}
intr_handle = dev->intr_handle;
- if (!intr_handle->intr_vec) {
+ if (!rte_intr_handle_vec_list_base(intr_handle)) {
RTE_ETHDEV_LOG(ERR, "RX Intr vector unset\n");
return -1;
}
- vec = intr_handle->intr_vec[queue_id];
+ vec = rte_intr_handle_vec_list_index_get(intr_handle, queue_id);
efd_idx = (vec >= RTE_INTR_VEC_RXTX_OFFSET) ?
(vec - RTE_INTR_VEC_RXTX_OFFSET) : vec;
- fd = intr_handle->efds[efd_idx];
+ fd = rte_intr_handle_efds_index_get(intr_handle, efd_idx);
return fd;
}
@@ -5005,12 +5005,12 @@ rte_eth_dev_rx_intr_ctl_q(uint16_t port_id, uint16_t queue_id,
}
intr_handle = dev->intr_handle;
- if (!intr_handle->intr_vec) {
+ if (!rte_intr_handle_vec_list_base(intr_handle)) {
RTE_ETHDEV_LOG(ERR, "RX Intr vector unset\n");
return -EPERM;
}
- vec = intr_handle->intr_vec[queue_id];
+ vec = rte_intr_handle_vec_list_index_get(intr_handle, queue_id);
rc = rte_intr_rx_ctl(intr_handle, epfd, op, vec, data);
if (rc && rc != -EEXIST) {
RTE_ETHDEV_LOG(ERR,
--
2.18.0
* [dpdk-dev] [RFC 6/7] eal/interrupts: make interrupt handle structure opaque
2021-08-26 14:57 [dpdk-dev] [RFC 0/7] make rte_intr_handle internal Harman Kalra
` (4 preceding siblings ...)
2021-08-26 14:57 ` [dpdk-dev] [RFC 5/7] drivers: remove direct access to interrupt handle fields Harman Kalra
@ 2021-08-26 14:57 ` Harman Kalra
2021-08-26 14:57 ` [dpdk-dev] [RFC 7/7] eal/alarm: introduce alarm fini routine Harman Kalra
` (6 subsequent siblings)
12 siblings, 0 replies; 152+ messages in thread
From: Harman Kalra @ 2021-08-26 14:57 UTC (permalink / raw)
To: dev, Anatoly Burakov, Harman Kalra
Moving the interrupt handle structure definition inside the C file
to make its fields completely opaque to the outside world.
Dynamically allocating the efds and elist arrays of the intr_handle
structure, based on a size provided by the user, e.g. the number of
MSI-X interrupts supported by a PCI device.
Signed-off-by: Harman Kalra <hkalra@marvell.com>
---
drivers/bus/pci/linux/pci_vfio.c | 7 +
lib/eal/common/eal_common_interrupts.c | 172 ++++++++++++++++++++++++-
lib/eal/include/meson.build | 1 -
lib/eal/include/rte_eal_interrupts.h | 72 -----------
lib/eal/include/rte_interrupts.h | 24 +++-
5 files changed, 196 insertions(+), 80 deletions(-)
delete mode 100644 lib/eal/include/rte_eal_interrupts.h
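Below is a minimal, illustrative sketch (not part of the patch) of how a
driver is expected to use the opaque handle through the alloc/get/set
wrapper APIs introduced earlier in this series; the example_* names are
hypothetical and error handling is abbreviated:

#include <errno.h>
#include <rte_errno.h>
#include <rte_interrupts.h>

static struct rte_intr_handle *example_handle; /* hypothetical driver state */

static int
example_intr_setup(int fd, uint16_t nb_rxq)
{
	/* Allocate one instance; second argument selects hugepage memory. */
	example_handle = rte_intr_handle_instance_alloc(
				RTE_INTR_HANDLE_DEFAULT_SIZE, false);
	if (example_handle == NULL)
		return -ENOMEM;

	/* Fields are no longer touched directly; only the get/set APIs. */
	if (rte_intr_handle_type_set(example_handle, RTE_INTR_HANDLE_EXT) ||
	    rte_intr_handle_fd_set(example_handle, fd) ||
	    rte_intr_handle_vec_list_alloc(example_handle, "intr_vec",
					   nb_rxq)) {
		rte_intr_handle_instance_free(example_handle);
		return -rte_errno;
	}
	return 0;
}

static void
example_intr_teardown(void)
{
	rte_intr_handle_vec_list_free(example_handle);
	rte_intr_handle_instance_free(example_handle);
}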
diff --git a/drivers/bus/pci/linux/pci_vfio.c b/drivers/bus/pci/linux/pci_vfio.c
index f920163580..6af8279189 100644
--- a/drivers/bus/pci/linux/pci_vfio.c
+++ b/drivers/bus/pci/linux/pci_vfio.c
@@ -266,6 +266,13 @@ pci_vfio_setup_interrupts(struct rte_pci_device *dev, int vfio_dev_fd)
return -1;
}
+ /* Reallocate the efds and elist fields of intr_handle based
+ * on PCI device MSIX size.
+ */
+ if (rte_intr_handle_event_list_update(dev->intr_handle,
+ irq.count))
+ return -1;
+
/* if this vector cannot be used with eventfd, fail if we explicitly
* specified interrupt type, otherwise continue */
if ((irq.flags & VFIO_IRQ_INFO_EVENTFD) == 0) {
diff --git a/lib/eal/common/eal_common_interrupts.c b/lib/eal/common/eal_common_interrupts.c
index 2e4fed96f0..cee3ea2338 100644
--- a/lib/eal/common/eal_common_interrupts.c
+++ b/lib/eal/common/eal_common_interrupts.c
@@ -11,6 +11,29 @@
#include <rte_interrupts.h>
+struct rte_intr_handle {
+ RTE_STD_C11
+ union {
+ struct {
+ /** VFIO/UIO cfg device file descriptor */
+ int dev_fd;
+ int fd; /**< interrupt event file descriptor */
+ };
+ void *handle; /**< device driver handle (Windows) */
+ };
+ bool alloc_from_hugepage;
+ enum rte_intr_handle_type type; /**< handle type */
+ uint32_t max_intr; /**< max interrupt requested */
+ uint32_t nb_efd; /**< number of available efd(event fd) */
+ uint8_t efd_counter_size; /**< size of efd counter, used for vdev */
+ uint16_t nb_intr;
+ /**< Max vector count, default RTE_MAX_RXTX_INTR_VEC_ID */
+ int *efds; /**< intr vectors/efds mapping */
+ struct rte_epoll_event *elist; /**< intr vector epoll event */
+ uint16_t vec_list_size;
+ int *intr_vec; /**< intr vector number array */
+};
+
struct rte_intr_handle *rte_intr_handle_instance_alloc(int size,
bool from_hugepage)
@@ -31,11 +54,40 @@ struct rte_intr_handle *rte_intr_handle_instance_alloc(int size,
}
for (i = 0; i < size; i++) {
+ if (from_hugepage)
+ intr_handle[i].efds = rte_zmalloc(NULL,
+ RTE_MAX_RXTX_INTR_VEC_ID * sizeof(uint32_t), 0);
+ else
+ intr_handle[i].efds = calloc(1,
+ RTE_MAX_RXTX_INTR_VEC_ID * sizeof(uint32_t));
+ if (!intr_handle[i].efds) {
+ RTE_LOG(ERR, EAL, "Fail to allocate event fd list\n");
+ rte_errno = ENOMEM;
+ goto fail;
+ }
+
+ if (from_hugepage)
+ intr_handle[i].elist = rte_zmalloc(NULL,
+ RTE_MAX_RXTX_INTR_VEC_ID *
+ sizeof(struct rte_epoll_event), 0);
+ else
+ intr_handle[i].elist = calloc(1,
+ RTE_MAX_RXTX_INTR_VEC_ID *
+ sizeof(struct rte_epoll_event));
+ if (!intr_handle[i].elist) {
+ RTE_LOG(ERR, EAL, "fail to allocate event fd list\n");
+ rte_errno = ENOMEM;
+ goto fail;
+ }
intr_handle[i].nb_intr = RTE_MAX_RXTX_INTR_VEC_ID;
intr_handle[i].alloc_from_hugepage = from_hugepage;
}
return intr_handle;
+fail:
+ if (from_hugepage) {
+ rte_free(intr_handle[i].efds);
+ rte_free(intr_handle);
+ } else {
+ free(intr_handle[i].efds);
+ free(intr_handle);
+ }
+ return NULL;
}
struct rte_intr_handle *rte_intr_handle_instance_index_get(
@@ -73,12 +125,48 @@ int rte_intr_handle_instance_index_set(struct rte_intr_handle *intr_handle,
}
intr_handle[index].fd = src->fd;
- intr_handle[index].vfio_dev_fd = src->vfio_dev_fd;
+ intr_handle[index].dev_fd = src->dev_fd;
+
intr_handle[index].type = src->type;
intr_handle[index].max_intr = src->max_intr;
intr_handle[index].nb_efd = src->nb_efd;
intr_handle[index].efd_counter_size = src->efd_counter_size;
+ if (intr_handle[index].nb_intr != src->nb_intr) {
+ if (src->alloc_from_hugepage)
+ intr_handle[index].efds =
+ rte_realloc(intr_handle[index].efds,
+ src->nb_intr *
+ sizeof(uint32_t), 0);
+ else
+ intr_handle[index].efds =
+ realloc(intr_handle[index].efds,
+ src->nb_intr * sizeof(uint32_t));
+ if (intr_handle[index].efds == NULL) {
+ RTE_LOG(ERR, EAL, "Failed to realloc the efds list");
+ rte_errno = ENOMEM;
+ goto fail;
+ }
+
+ if (src->alloc_from_hugepage)
+ intr_handle[index].elist =
+ rte_realloc(intr_handle[index].elist,
+ src->nb_intr *
+ sizeof(struct rte_epoll_event), 0);
+ else
+ intr_handle[index].elist =
+ realloc(intr_handle[index].elist,
+ src->nb_intr *
+ sizeof(struct rte_epoll_event));
+ if (intr_handle[index].elist == NULL) {
+ RTE_LOG(ERR, EAL, "Failed to realloc the event list");
+ rte_errno = ENOMEM;
+ goto fail;
+ }
+
+ intr_handle[index].nb_intr = src->nb_intr;
+ }
+
+ memcpy(intr_handle[index].efds, src->efds,
+ src->nb_intr * sizeof(int));
+ memcpy(intr_handle[index].elist, src->elist,
+ src->nb_intr * sizeof(struct rte_epoll_event));
@@ -87,6 +175,45 @@ int rte_intr_handle_instance_index_set(struct rte_intr_handle *intr_handle,
return rte_errno;
}
+int rte_intr_handle_event_list_update(struct rte_intr_handle *intr_handle,
+ int size)
+{
+ if (intr_handle == NULL) {
+ RTE_LOG(ERR, EAL, "Interrupt instance unallocated\n");
+ rte_errno = ENOTSUP;
+ goto fail;
+ }
+
+ if (size == 0) {
+ RTE_LOG(ERR, EAL, "Size cant be zero\n");
+ rte_errno = EINVAL;
+ goto fail;
+ }
+
+ intr_handle->efds = realloc(intr_handle->efds,
+ size * sizeof(uint32_t));
+ if (intr_handle->efds == NULL) {
+ RTE_LOG(ERR, EAL, "Failed to realloc the efds list");
+ rte_errno = ENOMEM;
+ goto fail;
+ }
+
+ intr_handle->elist = realloc(intr_handle->elist,
+ size * sizeof(struct rte_epoll_event));
+ if (intr_handle->elist == NULL) {
+ RTE_LOG(ERR, EAL, "Failed to realloc the event list");
+ rte_errno = ENOMEM;
+ goto fail;
+ }
+
+ intr_handle->nb_intr = size;
+
+ return 0;
+fail:
+ return rte_errno;
+}
+
+
void rte_intr_handle_instance_free(struct rte_intr_handle *intr_handle)
{
if (intr_handle == NULL) {
@@ -94,10 +221,15 @@ void rte_intr_handle_instance_free(struct rte_intr_handle *intr_handle)
rte_errno = ENOTSUP;
}
- if (intr_handle->alloc_from_hugepage)
+ if (intr_handle->alloc_from_hugepage) {
+ rte_free(intr_handle->efds);
+ rte_free(intr_handle->elist);
rte_free(intr_handle);
- else
+ } else {
+ free(intr_handle->efds);
+ free(intr_handle->elist);
free(intr_handle);
+ }
}
int rte_intr_handle_fd_set(struct rte_intr_handle *intr_handle, int fd)
@@ -164,7 +296,7 @@ int rte_intr_handle_dev_fd_set(struct rte_intr_handle *intr_handle, int fd)
goto fail;
}
- intr_handle->vfio_dev_fd = fd;
+ intr_handle->dev_fd = fd;
return 0;
fail:
@@ -179,7 +311,7 @@ int rte_intr_handle_dev_fd_get(const struct rte_intr_handle *intr_handle)
goto fail;
}
- return intr_handle->vfio_dev_fd;
+ return intr_handle->dev_fd;
fail:
return rte_errno;
}
@@ -300,6 +432,12 @@ int *rte_intr_handle_efds_base(struct rte_intr_handle *intr_handle)
goto fail;
}
+ if (!intr_handle->efds) {
+ RTE_LOG(ERR, EAL, "Event fd list not allocated\n");
+ rte_errno = ENOTSUP;
+ goto fail;
+ }
+
return intr_handle->efds;
fail:
return NULL;
@@ -314,6 +452,12 @@ int rte_intr_handle_efds_index_get(const struct rte_intr_handle *intr_handle,
goto fail;
}
+ if (!intr_handle->efds) {
+ RTE_LOG(ERR, EAL, "Event fd list not allocated\n");
+ rte_errno = EFAULT;
+ goto fail;
+ }
+
if (index >= intr_handle->nb_intr) {
RTE_LOG(ERR, EAL, "Invalid size %d, max limit %d\n", index,
intr_handle->nb_intr);
@@ -335,6 +479,12 @@ int rte_intr_handle_efds_index_set(struct rte_intr_handle *intr_handle,
goto fail;
}
+ if (!intr_handle->efds) {
+ RTE_LOG(ERR, EAL, "Event fd list not allocated\n");
+ rte_errno = EFAULT;
+ goto fail;
+ }
+
if (index >= intr_handle->nb_intr) {
RTE_LOG(ERR, EAL, "Invalid size %d, max limit %d\n", index,
intr_handle->nb_intr);
@@ -358,6 +508,12 @@ struct rte_epoll_event *rte_intr_handle_elist_index_get(
goto fail;
}
+ if (!intr_handle->elist) {
+ RTE_LOG(ERR, EAL, "Event list not allocated\n");
+ rte_errno = ENOTSUP;
+ goto fail;
+ }
+
if (index >= intr_handle->nb_intr) {
RTE_LOG(ERR, EAL, "Invalid size %d, max limit %d\n", index,
intr_handle->nb_intr);
@@ -379,6 +535,12 @@ int rte_intr_handle_elist_index_set(struct rte_intr_handle *intr_handle,
goto fail;
}
+ if (!intr_handle->elist) {
+ RTE_LOG(ERR, EAL, "Event list not allocated\n");
+ rte_errno = ENOTSUP;
+ goto fail;
+ }
+
if (index >= intr_handle->nb_intr) {
RTE_LOG(ERR, EAL, "Invalid size %d, max limit %d\n", index,
intr_handle->nb_intr);
diff --git a/lib/eal/include/meson.build b/lib/eal/include/meson.build
index 8e258607b8..86468d1a2b 100644
--- a/lib/eal/include/meson.build
+++ b/lib/eal/include/meson.build
@@ -49,7 +49,6 @@ headers += files(
'rte_version.h',
'rte_vfio.h',
)
-indirect_headers += files('rte_eal_interrupts.h')
# special case install the generic headers, since they go in a subdir
generic_headers = files(
diff --git a/lib/eal/include/rte_eal_interrupts.h b/lib/eal/include/rte_eal_interrupts.h
deleted file mode 100644
index 216aece61b..0000000000
--- a/lib/eal/include/rte_eal_interrupts.h
+++ /dev/null
@@ -1,72 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(c) 2010-2014 Intel Corporation
- */
-
-#ifndef _RTE_INTERRUPTS_H_
-#error "don't include this file directly, please include generic <rte_interrupts.h>"
-#endif
-
-/**
- * @file rte_eal_interrupts.h
- * @internal
- *
- * Contains function prototypes exposed by the EAL for interrupt handling by
- * drivers and other DPDK internal consumers.
- */
-
-#ifndef _RTE_EAL_INTERRUPTS_H_
-#define _RTE_EAL_INTERRUPTS_H_
-
-#define RTE_MAX_RXTX_INTR_VEC_ID 512
-#define RTE_INTR_VEC_ZERO_OFFSET 0
-#define RTE_INTR_VEC_RXTX_OFFSET 1
-
-/**
- * The interrupt source type, e.g. UIO, VFIO, ALARM etc.
- */
-enum rte_intr_handle_type {
- RTE_INTR_HANDLE_UNKNOWN = 0, /**< generic unknown handle */
- RTE_INTR_HANDLE_UIO, /**< uio device handle */
- RTE_INTR_HANDLE_UIO_INTX, /**< uio generic handle */
- RTE_INTR_HANDLE_VFIO_LEGACY, /**< vfio device handle (legacy) */
- RTE_INTR_HANDLE_VFIO_MSI, /**< vfio device handle (MSI) */
- RTE_INTR_HANDLE_VFIO_MSIX, /**< vfio device handle (MSIX) */
- RTE_INTR_HANDLE_ALARM, /**< alarm handle */
- RTE_INTR_HANDLE_EXT, /**< external handler */
- RTE_INTR_HANDLE_VDEV, /**< virtual device */
- RTE_INTR_HANDLE_DEV_EVENT, /**< device event handle */
- RTE_INTR_HANDLE_VFIO_REQ, /**< VFIO request handle */
- RTE_INTR_HANDLE_MAX /**< count of elements */
-};
-
-/** Handle for interrupts. */
-struct rte_intr_handle {
- RTE_STD_C11
- union {
- struct {
- RTE_STD_C11
- union {
- /** VFIO device file descriptor */
- int vfio_dev_fd;
- /** UIO cfg file desc for uio_pci_generic */
- int uio_cfg_fd;
- };
- int fd; /**< interrupt event file descriptor */
- };
- void *handle; /**< device driver handle (Windows) */
- };
- bool alloc_from_hugepage;
- enum rte_intr_handle_type type; /**< handle type */
- uint32_t max_intr; /**< max interrupt requested */
- uint32_t nb_efd; /**< number of available efd(event fd) */
- uint8_t efd_counter_size; /**< size of efd counter, used for vdev */
- uint16_t nb_intr;
- /**< Max vector count, default RTE_MAX_RXTX_INTR_VEC_ID */
- int efds[RTE_MAX_RXTX_INTR_VEC_ID]; /**< intr vectors/efds mapping */
- struct rte_epoll_event elist[RTE_MAX_RXTX_INTR_VEC_ID];
- /**< intr vector epoll event */
- uint16_t vec_list_size;
- int *intr_vec; /**< intr vector number array */
-};
-
-#endif /* _RTE_EAL_INTERRUPTS_H_ */
diff --git a/lib/eal/include/rte_interrupts.h b/lib/eal/include/rte_interrupts.h
index afc3262967..7dfb849eea 100644
--- a/lib/eal/include/rte_interrupts.h
+++ b/lib/eal/include/rte_interrupts.h
@@ -25,9 +25,29 @@ extern "C" {
/** Interrupt handle */
struct rte_intr_handle;
-#define RTE_INTR_HANDLE_DEFAULT_SIZE 1
+#define RTE_MAX_RXTX_INTR_VEC_ID 512
+#define RTE_INTR_VEC_ZERO_OFFSET 0
+#define RTE_INTR_VEC_RXTX_OFFSET 1
+
+/**
+ * The interrupt source type, e.g. UIO, VFIO, ALARM etc.
+ */
+enum rte_intr_handle_type {
+ RTE_INTR_HANDLE_UNKNOWN = 0, /**< generic unknown handle */
+ RTE_INTR_HANDLE_UIO, /**< uio device handle */
+ RTE_INTR_HANDLE_UIO_INTX, /**< uio generic handle */
+ RTE_INTR_HANDLE_VFIO_LEGACY, /**< vfio device handle (legacy) */
+ RTE_INTR_HANDLE_VFIO_MSI, /**< vfio device handle (MSI) */
+ RTE_INTR_HANDLE_VFIO_MSIX, /**< vfio device handle (MSIX) */
+ RTE_INTR_HANDLE_ALARM, /**< alarm handle */
+ RTE_INTR_HANDLE_EXT, /**< external handler */
+ RTE_INTR_HANDLE_VDEV, /**< virtual device */
+ RTE_INTR_HANDLE_DEV_EVENT, /**< device event handle */
+ RTE_INTR_HANDLE_VFIO_REQ, /**< VFIO request handle */
+ RTE_INTR_HANDLE_MAX /**< count of elements */
+};
-#include "rte_eal_interrupts.h"
+#define RTE_INTR_HANDLE_DEFAULT_SIZE 1
/** Function to be registered for the specific interrupt */
typedef void (*rte_intr_callback_fn)(void *cb_arg);
--
2.18.0
^ permalink raw reply [flat|nested] 152+ messages in thread
* [dpdk-dev] [RFC 7/7] eal/alarm: introduce alarm fini routine
2021-08-26 14:57 [dpdk-dev] [RFC 0/7] make rte_intr_handle internal Harman Kalra
` (5 preceding siblings ...)
2021-08-26 14:57 ` [dpdk-dev] [RFC 6/7] eal/interrupts: make interrupt handle structure opaque Harman Kalra
@ 2021-08-26 14:57 ` Harman Kalra
2021-09-03 12:40 ` [dpdk-dev] [PATCH v1 0/7] make rte_intr_handle internal Harman Kalra
` (5 subsequent siblings)
12 siblings, 0 replies; 152+ messages in thread
From: Harman Kalra @ 2021-08-26 14:57 UTC (permalink / raw)
To: dev, Bruce Richardson; +Cc: Harman Kalra
Implementing an alarm cleanup routine in which the memory allocated
for the alarm interrupt instance can be freed.
Signed-off-by: Harman Kalra <hkalra@marvell.com>
---
lib/eal/common/eal_private.h | 11 ++++++++
lib/eal/freebsd/eal.c | 1 +
lib/eal/freebsd/eal_alarm.c | 50 +++++++++++++++++++++++++++++++-----
lib/eal/linux/eal.c | 1 +
lib/eal/linux/eal_alarm.c | 10 +++++++-
5 files changed, 66 insertions(+), 7 deletions(-)
diff --git a/lib/eal/common/eal_private.h b/lib/eal/common/eal_private.h
index 64cf4e81c8..ed429dec9d 100644
--- a/lib/eal/common/eal_private.h
+++ b/lib/eal/common/eal_private.h
@@ -162,6 +162,17 @@ int rte_eal_intr_init(void);
*/
int rte_eal_alarm_init(void);
+/**
+ * Cleanup alarm mechanism.
+ *
+ * Frees the memory allocated for the alarm interrupt instance
+ * during rte_eal_alarm_init().
+ *
+ * This function is private to EAL.
+ */
+void rte_eal_alarm_fini(void);
+
/**
* Function is to check if the kernel module(like, vfio, vfio_iommu_type1,
* etc.) loaded.
diff --git a/lib/eal/freebsd/eal.c b/lib/eal/freebsd/eal.c
index 6cee5ae369..7efead4f48 100644
--- a/lib/eal/freebsd/eal.c
+++ b/lib/eal/freebsd/eal.c
@@ -973,6 +973,7 @@ rte_eal_cleanup(void)
rte_eal_memory_detach();
rte_trace_save();
eal_trace_fini();
+ rte_eal_alarm_fini();
eal_cleanup_config(internal_conf);
return 0;
}
diff --git a/lib/eal/freebsd/eal_alarm.c b/lib/eal/freebsd/eal_alarm.c
index 07baecd64f..33855393a6 100644
--- a/lib/eal/freebsd/eal_alarm.c
+++ b/lib/eal/freebsd/eal_alarm.c
@@ -32,7 +32,7 @@
struct alarm_entry {
LIST_ENTRY(alarm_entry) next;
- struct rte_intr_handle handle;
+ struct rte_intr_handle *handle;
struct timespec time;
rte_eal_alarm_callback cb_fn;
void *cb_arg;
@@ -43,7 +43,7 @@ struct alarm_entry {
static LIST_HEAD(alarm_list, alarm_entry) alarm_list = LIST_HEAD_INITIALIZER();
static rte_spinlock_t alarm_list_lk = RTE_SPINLOCK_INITIALIZER;
-static struct rte_intr_handle intr_handle = {.fd = -1 };
+static struct rte_intr_handle *intr_handle;
static void eal_alarm_callback(void *arg);
void
@@ -56,16 +56,40 @@ rte_eal_alarm_fini(void)
int
rte_eal_alarm_init(void)
{
- intr_handle.type = RTE_INTR_HANDLE_ALARM;
+ int fd;
+
+ intr_handle =
+ rte_intr_handle_instance_alloc(RTE_INTR_HANDLE_DEFAULT_SIZE,
+ false);
+ if (!intr_handle) {
+ RTE_LOG(ERR, EAL, "Fail to allocate intr_handle\n");
+ goto error;
+ }
+
+ if (rte_intr_handle_type_set(intr_handle, RTE_INTR_HANDLE_ALARM))
+ goto error;
+
+ if (rte_intr_handle_fd_set(intr_handle, -1))
+ goto error;
/* on FreeBSD, timers don't use fd's, and their identifiers are stored
* in separate namespace from fd's, so using any value is OK. however,
* EAL interrupts handler expects fd's to be unique, so use an actual fd
* to guarantee unique timer identifier.
*/
- intr_handle.fd = open("/dev/zero", O_RDONLY);
+ fd = open("/dev/zero", O_RDONLY);
+ if (rte_intr_handle_fd_set(intr_handle, fd))
+ goto fail;
return 0;
+fail:
+ close(fd);
+error:
+ if (intr_handle)
+ rte_intr_handle_instance_free(intr_handle);
+
+ return -1;
}
static inline int
@@ -125,7 +149,7 @@ unregister_current_callback(void)
ap = LIST_FIRST(&alarm_list);
do {
- ret = rte_intr_callback_unregister(&intr_handle,
+ ret = rte_intr_callback_unregister(intr_handle,
eal_alarm_callback, &ap->time);
} while (ret == -EAGAIN);
}
@@ -143,7 +167,7 @@ register_first_callback(void)
ap = LIST_FIRST(&alarm_list);
/* register a new callback */
- ret = rte_intr_callback_register(&intr_handle,
+ ret = rte_intr_callback_register(intr_handle,
eal_alarm_callback, &ap->time);
}
return ret;
@@ -171,6 +195,8 @@ eal_alarm_callback(void *arg __rte_unused)
rte_spinlock_lock(&alarm_list_lk);
LIST_REMOVE(ap, next);
+ if (ap->handle)
+ rte_intr_handle_instance_free(ap->handle);
free(ap);
ap = LIST_FIRST(&alarm_list);
@@ -209,6 +235,12 @@ rte_eal_alarm_set(uint64_t us, rte_eal_alarm_callback cb_fn, void *cb_arg)
new_alarm->time.tv_nsec = (now.tv_nsec + ns) % NS_PER_S;
new_alarm->time.tv_sec = now.tv_sec + ((now.tv_nsec + ns) / NS_PER_S);
+ new_alarm->handle =
+ rte_intr_handle_instance_alloc(RTE_INTR_HANDLE_DEFAULT_SIZE,
+ false);
+ if (new_alarm->handle == NULL)
+ return -ENOMEM;
+
rte_spinlock_lock(&alarm_list_lk);
if (LIST_EMPTY(&alarm_list))
@@ -263,6 +295,9 @@ rte_eal_alarm_cancel(rte_eal_alarm_callback cb_fn, void *cb_arg)
if (ap->executing == 0) {
LIST_REMOVE(ap, next);
free(ap);
+ if (ap->handle)
+ rte_intr_handle_instance_free(
+ ap->handle);
count++;
} else {
/* If calling from other context, mark that
@@ -289,6 +324,9 @@ rte_eal_alarm_cancel(rte_eal_alarm_callback cb_fn, void *cb_arg)
cb_arg == ap->cb_arg)) {
if (ap->executing == 0) {
LIST_REMOVE(ap, next);
+ if (ap->handle)
+ rte_intr_handle_instance_free(
+ ap->handle);
free(ap);
count++;
ap = ap_prev;
diff --git a/lib/eal/linux/eal.c b/lib/eal/linux/eal.c
index 3577eaeaa4..5c8af85ad5 100644
--- a/lib/eal/linux/eal.c
+++ b/lib/eal/linux/eal.c
@@ -1370,6 +1370,7 @@ rte_eal_cleanup(void)
rte_eal_memory_detach();
rte_trace_save();
eal_trace_fini();
+ rte_eal_alarm_fini();
eal_cleanup_config(internal_conf);
return 0;
}
diff --git a/lib/eal/linux/eal_alarm.c b/lib/eal/linux/eal_alarm.c
index e959fba27b..5dd804f83c 100644
--- a/lib/eal/linux/eal_alarm.c
+++ b/lib/eal/linux/eal_alarm.c
@@ -58,6 +58,13 @@ static struct rte_intr_handle *intr_handle;
static int handler_registered = 0;
static void eal_alarm_callback(void *arg);
+void
+rte_eal_alarm_fini(void)
+{
+ if (intr_handle)
+ rte_intr_handle_instance_free(intr_handle);
+}
+
int
rte_eal_alarm_init(void)
{
@@ -70,7 +77,8 @@ rte_eal_alarm_init(void)
goto error;
}
- rte_intr_handle_type_set(intr_handle, RTE_INTR_HANDLE_ALARM);
+ if (rte_intr_handle_type_set(intr_handle, RTE_INTR_HANDLE_ALARM))
+ goto error;
/* create a timerfd file descriptor */
if (rte_intr_handle_fd_set(intr_handle,
--
2.18.0
^ permalink raw reply [flat|nested] 152+ messages in thread
* Re: [dpdk-dev] [RFC 1/7] eal: interrupt handle API prototypes
2021-08-26 14:57 ` [dpdk-dev] [RFC 1/7] eal: interrupt handle API prototypes Harman Kalra
@ 2021-08-31 15:52 ` Kinsella, Ray
0 siblings, 0 replies; 152+ messages in thread
From: Kinsella, Ray @ 2021-08-31 15:52 UTC (permalink / raw)
To: Harman Kalra, dev, Thomas Monjalon
On 26/08/2021 15:57, Harman Kalra wrote:
> Defining prototypes of get/set APIs for accessing/manipulating
> interrupt handle fields.
>
> Internal interrupt header i.e. rte_eal_interrupt.h is rearranged,
> as APIs defined are moved to rte_interrupts.h and epoll specific
> definitions are moved to a new header rte_epoll.h.
> Later in the series rte_eal_interrupt.h will be removed.
>
> Signed-off-by: Harman Kalra <hkalra@marvell.com>
> ---
> MAINTAINERS | 1 +
> lib/eal/include/meson.build | 1 +
> lib/eal/include/rte_eal_interrupts.h | 201 ---------
> lib/eal/include/rte_epoll.h | 116 +++++
> lib/eal/include/rte_interrupts.h | 653 ++++++++++++++++++++++++++-
> 5 files changed, 769 insertions(+), 203 deletions(-)
> create mode 100644 lib/eal/include/rte_epoll.h
>
Seems strange putting the API changes as a separate patch?
Acked-by: Ray Kinsella <mdr@ashroe.eu>
^ permalink raw reply [flat|nested] 152+ messages in thread
* Re: [dpdk-dev] [RFC 2/7] eal/interrupts: implement get set APIs
2021-08-26 14:57 ` [dpdk-dev] [RFC 2/7] eal/interrupts: implement get set APIs Harman Kalra
@ 2021-08-31 15:53 ` Kinsella, Ray
0 siblings, 0 replies; 152+ messages in thread
From: Kinsella, Ray @ 2021-08-31 15:53 UTC (permalink / raw)
To: Harman Kalra, dev
On 26/08/2021 15:57, Harman Kalra wrote:
> Implementing get set APIs for interrupt handle fields.
> To make any change to the interrupt handle fields, one
> should make use of these APIs.
>
> Signed-off-by: Harman Kalra <hkalra@marvell.com>
> ---
> lib/eal/common/eal_common_interrupts.c | 506 +++++++++++++++++++++++++
> lib/eal/common/meson.build | 2 +
> lib/eal/include/rte_eal_interrupts.h | 6 +-
> lib/eal/version.map | 30 ++
> 4 files changed, 543 insertions(+), 1 deletion(-)
> create mode 100644 lib/eal/common/eal_common_interrupts.c
>
Seems strange putting the API changes as a separate patch?
Acked-by: Ray Kinsella <mdr@ashroe.eu>
^ permalink raw reply [flat|nested] 152+ messages in thread
* [dpdk-dev] [PATCH v1 0/7] make rte_intr_handle internal
2021-08-26 14:57 [dpdk-dev] [RFC 0/7] make rte_intr_handle internal Harman Kalra
` (6 preceding siblings ...)
2021-08-26 14:57 ` [dpdk-dev] [RFC 7/7] eal/alarm: introduce alarm fini routine Harman Kalra
@ 2021-09-03 12:40 ` Harman Kalra
2021-09-03 12:40 ` [dpdk-dev] [PATCH v1 1/7] eal: interrupt handle API prototypes Harman Kalra
` (8 more replies)
2021-10-05 12:14 ` [dpdk-dev] [PATCH v2 0/6] " Harman Kalra
` (4 subsequent siblings)
12 siblings, 9 replies; 152+ messages in thread
From: Harman Kalra @ 2021-09-03 12:40 UTC (permalink / raw)
To: dev; +Cc: Harman Kalra
Moving struct rte_intr_handle as an internal structure to
avoid any ABI breakages in future. Since this structure defines
some static arrays and changing respective macros breaks the ABI.
Eg:
Currently RTE_MAX_RXTX_INTR_VEC_ID imposes a limit of maximum 512
MSI-X interrupts that can be defined for a PCI device, while PCI
specification allows maximum 2048 MSI-X interrupts that can be used.
If some PCI device requires more than 512 vectors, either change the
RTE_MAX_RXTX_INTR_VEC_ID limit or dynamically allocate based on
PCI device MSI-X size at probe time. Either way it's an ABI breakage.
Change already included in 21.11 ABI improvement spreadsheet (item 42):
https://urldefense.proofpoint.com/v2/url?u=https-3A__docs.google.com_s
preadsheets_d_1betlC000ua5SsSiJIcC54mCCCJnW6voH5Dqv9UxeyfE_edit-23gid-
3D0&d=DwICaQ&c=nKjWec2b6R0mOyPaz7xtfQ&r=5ESHPj7V-7JdkxT_Z_SU6RrS37ys4U
XudBQ_rrS5LRo&m=7dl3OmXU7QHMmWYB6V1hYJtq1cUkjfhXUwze2Si_48c&s=lh6DEGhR
Bg1shODpAy3RQk-H-0uQx5icRfUBf9dtCp4&e=
This series makes struct rte_intr_handle totally opaque to the outside
world by wrapping it inside a .c file and providing get set wrapper APIs
to read or manipulate its fields. Any changes to be made to any of the
fields should be done via these get set APIs.
Introduced a new eal_common_interrupts.c where all these APIs are defined
and also hides struct rte_intr_handle definition.
Details on each patch of the series:
Patch 1: eal: interrupt handle API prototypes
This patch provides prototypes of all the new get set APIs, and
also rearranges the headers related to interrupt framework. Epoll
related definitions prototypes are moved into a new header i.e.
rte_epoll.h and APIs defined in rte_eal_interrupts.h which were
driver specific are moved to rte_interrupts.h (as it was anyway
accessible and used outside the DPDK library). Later in the series
rte_eal_interrupts.h is removed.
Patch 2: eal/interrupts: implement get set APIs
Implementing all get, set and alloc APIs. Alloc APIs are implemented
to allocate memory for interrupt handle instance. Currently most of
the drivers define the interrupt handle instance as static, but now it can't
be static, as the size of rte_intr_handle is unknown to all the drivers.
Drivers are expected to allocate interrupt instances during initialization
and free these instances during cleanup phase.
Patch 3: eal/interrupts: avoid direct access to interrupt handle
Modifying the interrupt framework for linux and freebsd to use these
get set alloc APIs as per requirement and avoid accessing the fields
directly.
Patch 4: test/interrupt: apply get set interrupt handle APIs
Updating interrupt test suite to use interrupt handle APIs.
Patch 5: drivers: remove direct access to interrupt handle fields
Modifying all the drivers and libraries which are currently directly
accessing the interrupt handle fields. Drivers are expected to
allocate the interrupt instance, use get set APIs with the allocated
interrupt handle and free it on cleanup.
Patch 6: eal/interrupts: make interrupt handle structure opaque
In this patch rte_eal_interrupts.h is removed, and the struct rte_intr_handle
definition is moved to a .c file to make it completely opaque. As part of
interrupt handle allocation, array like efds and elist(which are currently
static) are dynamically allocated with default size
(RTE_MAX_RXTX_INTR_VEC_ID). Later these arrays can be reallocated as per
device requirement using new API rte_intr_handle_event_list_update().
Eg, on PCI device probing MSIX size can be queried and these arrays can
be reallocated accordingly.
Patch 7: eal/alarm: introduce alarm fini routine
Introducing an alarm fini routine, so that the memory allocated for the alarm
interrupt instance can be freed at cleanup.
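For reference, the driver-side flow expected by this series looks roughly
like the sketch below (the interrupt type, fd and error handling are
illustrative placeholders; only the API names are from this series):

    struct rte_intr_handle *handle;

    /* probe/init: allocate one instance from normal (non-hugepage) memory */
    handle = rte_intr_handle_instance_alloc(RTE_INTR_HANDLE_DEFAULT_SIZE, false);
    if (handle == NULL)
        return -ENOMEM;

    /* configure fields only through the get/set APIs */
    if (rte_intr_handle_type_set(handle, RTE_INTR_HANDLE_VFIO_MSIX) ||
        rte_intr_handle_fd_set(handle, fd)) {
        rte_intr_handle_instance_free(handle);
        return -1;
    }

    /* cleanup/remove: release the instance and its efds/elist arrays */
    rte_intr_handle_instance_free(handle);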
Testing performed:
1. Validated the series by running interrupts and alarm test suite.
2. Validated l3fwd-power functionality with octeontx2 and i40e Intel cards,
where interrupts are expected on packet arrival.
v1:
* Fixed freebsd compilation failure
* Fixed seg fault in case of memif
Harman Kalra (7):
eal: interrupt handle API prototypes
eal/interrupts: implement get set APIs
eal/interrupts: avoid direct access to interrupt handle
test/interrupt: apply get set interrupt handle APIs
drivers: remove direct access to interrupt handle fields
eal/interrupts: make interrupt handle structure opaque
eal/alarm: introduce alarm fini routine
MAINTAINERS | 1 +
app/test/test_interrupts.c | 237 +++---
drivers/baseband/acc100/rte_acc100_pmd.c | 18 +-
.../fpga_5gnr_fec/rte_fpga_5gnr_fec.c | 13 +-
drivers/baseband/fpga_lte_fec/fpga_lte_fec.c | 14 +-
drivers/bus/auxiliary/auxiliary_common.c | 2 +
drivers/bus/auxiliary/linux/auxiliary.c | 11 +
drivers/bus/auxiliary/rte_bus_auxiliary.h | 2 +-
drivers/bus/dpaa/dpaa_bus.c | 28 +-
drivers/bus/dpaa/rte_dpaa_bus.h | 2 +-
drivers/bus/fslmc/fslmc_bus.c | 17 +-
drivers/bus/fslmc/fslmc_vfio.c | 32 +-
drivers/bus/fslmc/portal/dpaa2_hw_dpio.c | 21 +-
drivers/bus/fslmc/portal/dpaa2_hw_pvt.h | 2 +-
drivers/bus/fslmc/rte_fslmc.h | 2 +-
drivers/bus/ifpga/ifpga_bus.c | 16 +-
drivers/bus/ifpga/rte_bus_ifpga.h | 2 +-
drivers/bus/pci/bsd/pci.c | 21 +-
drivers/bus/pci/linux/pci.c | 4 +-
drivers/bus/pci/linux/pci_uio.c | 73 +-
drivers/bus/pci/linux/pci_vfio.c | 115 ++-
drivers/bus/pci/pci_common.c | 29 +-
drivers/bus/pci/pci_common_uio.c | 21 +-
drivers/bus/pci/rte_bus_pci.h | 4 +-
drivers/bus/vmbus/linux/vmbus_bus.c | 7 +
drivers/bus/vmbus/linux/vmbus_uio.c | 37 +-
drivers/bus/vmbus/rte_bus_vmbus.h | 2 +-
drivers/bus/vmbus/vmbus_common_uio.c | 24 +-
drivers/common/cnxk/roc_cpt.c | 8 +-
drivers/common/cnxk/roc_dev.c | 14 +-
drivers/common/cnxk/roc_irq.c | 106 +--
drivers/common/cnxk/roc_nix_irq.c | 37 +-
drivers/common/cnxk/roc_npa.c | 2 +-
drivers/common/cnxk/roc_platform.h | 34 +
drivers/common/cnxk/roc_sso.c | 4 +-
drivers/common/cnxk/roc_tim.c | 4 +-
drivers/common/octeontx2/otx2_dev.c | 14 +-
drivers/common/octeontx2/otx2_irq.c | 117 +--
.../octeontx2/otx2_cryptodev_hw_access.c | 4 +-
drivers/event/octeontx2/otx2_evdev_irq.c | 12 +-
drivers/mempool/octeontx2/otx2_mempool.c | 2 +-
drivers/net/atlantic/atl_ethdev.c | 22 +-
drivers/net/avp/avp_ethdev.c | 8 +-
drivers/net/axgbe/axgbe_ethdev.c | 12 +-
drivers/net/axgbe/axgbe_mdio.c | 6 +-
drivers/net/bnx2x/bnx2x_ethdev.c | 10 +-
drivers/net/bnxt/bnxt_ethdev.c | 32 +-
drivers/net/bnxt/bnxt_irq.c | 4 +-
drivers/net/dpaa/dpaa_ethdev.c | 47 +-
drivers/net/dpaa2/dpaa2_ethdev.c | 10 +-
drivers/net/e1000/em_ethdev.c | 24 +-
drivers/net/e1000/igb_ethdev.c | 84 ++-
drivers/net/ena/ena_ethdev.c | 36 +-
drivers/net/enic/enic_main.c | 27 +-
drivers/net/failsafe/failsafe.c | 24 +-
drivers/net/failsafe/failsafe_intr.c | 45 +-
drivers/net/failsafe/failsafe_ops.c | 23 +-
drivers/net/failsafe/failsafe_private.h | 2 +-
drivers/net/fm10k/fm10k_ethdev.c | 32 +-
drivers/net/hinic/hinic_pmd_ethdev.c | 10 +-
drivers/net/hns3/hns3_ethdev.c | 50 +-
drivers/net/hns3/hns3_ethdev_vf.c | 57 +-
drivers/net/hns3/hns3_rxtx.c | 2 +-
drivers/net/i40e/i40e_ethdev.c | 55 +-
drivers/net/i40e/i40e_ethdev_vf.c | 43 +-
drivers/net/iavf/iavf_ethdev.c | 41 +-
drivers/net/iavf/iavf_vchnl.c | 4 +-
drivers/net/ice/ice_dcf.c | 10 +-
drivers/net/ice/ice_dcf_ethdev.c | 23 +-
drivers/net/ice/ice_ethdev.c | 51 +-
drivers/net/igc/igc_ethdev.c | 47 +-
drivers/net/ionic/ionic_ethdev.c | 12 +-
drivers/net/ixgbe/ixgbe_ethdev.c | 70 +-
drivers/net/memif/memif_socket.c | 114 ++-
drivers/net/memif/memif_socket.h | 4 +-
drivers/net/memif/rte_eth_memif.c | 63 +-
drivers/net/memif/rte_eth_memif.h | 2 +-
drivers/net/mlx4/mlx4.c | 20 +-
drivers/net/mlx4/mlx4.h | 2 +-
drivers/net/mlx4/mlx4_intr.c | 48 +-
drivers/net/mlx5/linux/mlx5_os.c | 56 +-
drivers/net/mlx5/linux/mlx5_socket.c | 26 +-
drivers/net/mlx5/mlx5.h | 6 +-
drivers/net/mlx5/mlx5_rxq.c | 43 +-
drivers/net/mlx5/mlx5_trigger.c | 4 +-
drivers/net/mlx5/mlx5_txpp.c | 27 +-
drivers/net/netvsc/hn_ethdev.c | 4 +-
drivers/net/nfp/nfp_common.c | 28 +-
drivers/net/nfp/nfp_ethdev.c | 13 +-
drivers/net/nfp/nfp_ethdev_vf.c | 13 +-
drivers/net/ngbe/ngbe_ethdev.c | 31 +-
drivers/net/octeontx2/otx2_ethdev_irq.c | 35 +-
drivers/net/qede/qede_ethdev.c | 16 +-
drivers/net/sfc/sfc_intr.c | 29 +-
drivers/net/tap/rte_eth_tap.c | 37 +-
drivers/net/tap/rte_eth_tap.h | 2 +-
drivers/net/tap/tap_intr.c | 33 +-
drivers/net/thunderx/nicvf_ethdev.c | 13 +
drivers/net/thunderx/nicvf_struct.h | 2 +-
drivers/net/txgbe/txgbe_ethdev.c | 36 +-
drivers/net/txgbe/txgbe_ethdev_vf.c | 35 +-
drivers/net/vhost/rte_eth_vhost.c | 78 +-
drivers/net/virtio/virtio_ethdev.c | 17 +-
.../net/virtio/virtio_user/virtio_user_dev.c | 53 +-
drivers/net/vmxnet3/vmxnet3_ethdev.c | 45 +-
drivers/raw/ifpga/ifpga_rawdev.c | 42 +-
drivers/raw/ntb/ntb.c | 10 +-
.../regex/octeontx2/otx2_regexdev_hw_access.c | 4 +-
drivers/vdpa/ifc/ifcvf_vdpa.c | 5 +-
drivers/vdpa/mlx5/mlx5_vdpa.c | 11 +
drivers/vdpa/mlx5/mlx5_vdpa.h | 4 +-
drivers/vdpa/mlx5/mlx5_vdpa_event.c | 22 +-
drivers/vdpa/mlx5/mlx5_vdpa_virtq.c | 46 +-
lib/bbdev/rte_bbdev.c | 4 +-
lib/eal/common/eal_common_interrupts.c | 668 +++++++++++++++++
lib/eal/common/eal_private.h | 11 +
lib/eal/common/meson.build | 2 +
lib/eal/freebsd/eal.c | 1 +
lib/eal/freebsd/eal_alarm.c | 56 +-
lib/eal/freebsd/eal_interrupts.c | 94 ++-
lib/eal/include/meson.build | 2 +-
lib/eal/include/rte_eal_interrupts.h | 269 -------
lib/eal/include/rte_eal_trace.h | 24 +-
lib/eal/include/rte_epoll.h | 116 +++
lib/eal/include/rte_interrupts.h | 673 +++++++++++++++++-
lib/eal/linux/eal.c | 1 +
lib/eal/linux/eal_alarm.c | 39 +-
lib/eal/linux/eal_dev.c | 65 +-
lib/eal/linux/eal_interrupts.c | 294 +++++---
lib/eal/version.map | 30 +
lib/ethdev/ethdev_pci.h | 2 +-
lib/ethdev/rte_ethdev.c | 14 +-
132 files changed, 3797 insertions(+), 1685 deletions(-)
create mode 100644 lib/eal/common/eal_common_interrupts.c
delete mode 100644 lib/eal/include/rte_eal_interrupts.h
create mode 100644 lib/eal/include/rte_epoll.h
--
2.18.0
^ permalink raw reply [flat|nested] 152+ messages in thread
* [dpdk-dev] [PATCH v1 1/7] eal: interrupt handle API prototypes
2021-09-03 12:40 ` [dpdk-dev] [PATCH v1 0/7] make rte_intr_handle internal Harman Kalra
@ 2021-09-03 12:40 ` Harman Kalra
2021-09-03 12:40 ` [dpdk-dev] [PATCH v1 2/7] eal/interrupts: implement get set APIs Harman Kalra
` (7 subsequent siblings)
8 siblings, 0 replies; 152+ messages in thread
From: Harman Kalra @ 2021-09-03 12:40 UTC (permalink / raw)
To: dev, Thomas Monjalon, Harman Kalra
Defining prototypes of get/set APIs for accessing/manipulating
interrupt handle fields.
The internal interrupt header, i.e. rte_eal_interrupts.h, is rearranged:
the APIs defined there are moved to rte_interrupts.h and the epoll specific
definitions are moved to a new header, rte_epoll.h.
Later in the series rte_eal_interrupts.h will be removed.
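After this rearrangement consumers keep including only the generic header,
which now pulls in the epoll definitions; code that only needs the epoll
event API may include the new header directly. A rough sketch of the
resulting includes (illustrative only):

    #include <rte_interrupts.h>  /* interrupt handle APIs; includes rte_epoll.h */

    /* or, for epoll-only consumers: */
    #include <rte_epoll.h>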
Signed-off-by: Harman Kalra <hkalra@marvell.com>
Acked-by: Ray Kinsella <mdr@ashroe.eu>
---
MAINTAINERS | 1 +
lib/eal/include/meson.build | 1 +
lib/eal/include/rte_eal_interrupts.h | 201 ---------
lib/eal/include/rte_epoll.h | 116 +++++
lib/eal/include/rte_interrupts.h | 653 ++++++++++++++++++++++++++-
5 files changed, 769 insertions(+), 203 deletions(-)
create mode 100644 lib/eal/include/rte_epoll.h
diff --git a/MAINTAINERS b/MAINTAINERS
index 266f5ac1da..53b092f532 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -208,6 +208,7 @@ F: app/test/test_memzone.c
Interrupt Subsystem
M: Harman Kalra <hkalra@marvell.com>
+F: lib/eal/include/rte_epoll.h
F: lib/eal/*/*interrupts.*
F: app/test/test_interrupts.c
diff --git a/lib/eal/include/meson.build b/lib/eal/include/meson.build
index 88a9eba12f..8e258607b8 100644
--- a/lib/eal/include/meson.build
+++ b/lib/eal/include/meson.build
@@ -19,6 +19,7 @@ headers += files(
'rte_eal_memconfig.h',
'rte_eal_trace.h',
'rte_errno.h',
+ 'rte_epoll.h',
'rte_fbarray.h',
'rte_hexdump.h',
'rte_hypervisor.h',
diff --git a/lib/eal/include/rte_eal_interrupts.h b/lib/eal/include/rte_eal_interrupts.h
index 00bcc19b6d..68ca3a042d 100644
--- a/lib/eal/include/rte_eal_interrupts.h
+++ b/lib/eal/include/rte_eal_interrupts.h
@@ -39,32 +39,6 @@ enum rte_intr_handle_type {
RTE_INTR_HANDLE_MAX /**< count of elements */
};
-#define RTE_INTR_EVENT_ADD 1UL
-#define RTE_INTR_EVENT_DEL 2UL
-
-typedef void (*rte_intr_event_cb_t)(int fd, void *arg);
-
-struct rte_epoll_data {
- uint32_t event; /**< event type */
- void *data; /**< User data */
- rte_intr_event_cb_t cb_fun; /**< IN: callback fun */
- void *cb_arg; /**< IN: callback arg */
-};
-
-enum {
- RTE_EPOLL_INVALID = 0,
- RTE_EPOLL_VALID,
- RTE_EPOLL_EXEC,
-};
-
-/** interrupt epoll event obj, taken by epoll_event.ptr */
-struct rte_epoll_event {
- uint32_t status; /**< OUT: event status */
- int fd; /**< OUT: event fd */
- int epfd; /**< OUT: epoll instance the ev associated with */
- struct rte_epoll_data epdata;
-};
-
/** Handle for interrupts. */
struct rte_intr_handle {
RTE_STD_C11
@@ -91,179 +65,4 @@ struct rte_intr_handle {
int *intr_vec; /**< intr vector number array */
};
-#define RTE_EPOLL_PER_THREAD -1 /**< to hint using per thread epfd */
-
-/**
- * It waits for events on the epoll instance.
- * Retries if signal received.
- *
- * @param epfd
- * Epoll instance fd on which the caller wait for events.
- * @param events
- * Memory area contains the events that will be available for the caller.
- * @param maxevents
- * Up to maxevents are returned, must greater than zero.
- * @param timeout
- * Specifying a timeout of -1 causes a block indefinitely.
- * Specifying a timeout equal to zero cause to return immediately.
- * @return
- * - On success, returns the number of available event.
- * - On failure, a negative value.
- */
-int
-rte_epoll_wait(int epfd, struct rte_epoll_event *events,
- int maxevents, int timeout);
-
-/**
- * It waits for events on the epoll instance.
- * Does not retry if signal received.
- *
- * @param epfd
- * Epoll instance fd on which the caller wait for events.
- * @param events
- * Memory area contains the events that will be available for the caller.
- * @param maxevents
- * Up to maxevents are returned, must greater than zero.
- * @param timeout
- * Specifying a timeout of -1 causes a block indefinitely.
- * Specifying a timeout equal to zero cause to return immediately.
- * @return
- * - On success, returns the number of available event.
- * - On failure, a negative value.
- */
-__rte_experimental
-int
-rte_epoll_wait_interruptible(int epfd, struct rte_epoll_event *events,
- int maxevents, int timeout);
-
-/**
- * It performs control operations on epoll instance referred by the epfd.
- * It requests that the operation op be performed for the target fd.
- *
- * @param epfd
- * Epoll instance fd on which the caller perform control operations.
- * @param op
- * The operation be performed for the target fd.
- * @param fd
- * The target fd on which the control ops perform.
- * @param event
- * Describes the object linked to the fd.
- * Note: The caller must take care the object deletion after CTL_DEL.
- * @return
- * - On success, zero.
- * - On failure, a negative value.
- */
-int
-rte_epoll_ctl(int epfd, int op, int fd,
- struct rte_epoll_event *event);
-
-/**
- * The function returns the per thread epoll instance.
- *
- * @return
- * epfd the epoll instance referred to.
- */
-int
-rte_intr_tls_epfd(void);
-
-/**
- * @param intr_handle
- * Pointer to the interrupt handle.
- * @param epfd
- * Epoll instance fd which the intr vector associated to.
- * @param op
- * The operation be performed for the vector.
- * Operation type of {ADD, DEL}.
- * @param vec
- * RX intr vector number added to the epoll instance wait list.
- * @param data
- * User raw data.
- * @return
- * - On success, zero.
- * - On failure, a negative value.
- */
-int
-rte_intr_rx_ctl(struct rte_intr_handle *intr_handle,
- int epfd, int op, unsigned int vec, void *data);
-
-/**
- * It deletes registered eventfds.
- *
- * @param intr_handle
- * Pointer to the interrupt handle.
- */
-void
-rte_intr_free_epoll_fd(struct rte_intr_handle *intr_handle);
-
-/**
- * It enables the packet I/O interrupt event if it's necessary.
- * It creates event fd for each interrupt vector when MSIX is used,
- * otherwise it multiplexes a single event fd.
- *
- * @param intr_handle
- * Pointer to the interrupt handle.
- * @param nb_efd
- * Number of interrupt vector trying to enable.
- * The value 0 is not allowed.
- * @return
- * - On success, zero.
- * - On failure, a negative value.
- */
-int
-rte_intr_efd_enable(struct rte_intr_handle *intr_handle, uint32_t nb_efd);
-
-/**
- * It disables the packet I/O interrupt event.
- * It deletes registered eventfds and closes the open fds.
- *
- * @param intr_handle
- * Pointer to the interrupt handle.
- */
-void
-rte_intr_efd_disable(struct rte_intr_handle *intr_handle);
-
-/**
- * The packet I/O interrupt on datapath is enabled or not.
- *
- * @param intr_handle
- * Pointer to the interrupt handle.
- */
-int
-rte_intr_dp_is_en(struct rte_intr_handle *intr_handle);
-
-/**
- * The interrupt handle instance allows other causes or not.
- * Other causes stand for any none packet I/O interrupts.
- *
- * @param intr_handle
- * Pointer to the interrupt handle.
- */
-int
-rte_intr_allow_others(struct rte_intr_handle *intr_handle);
-
-/**
- * The multiple interrupt vector capability of interrupt handle instance.
- * It returns zero if no multiple interrupt vector support.
- *
- * @param intr_handle
- * Pointer to the interrupt handle.
- */
-int
-rte_intr_cap_multiple(struct rte_intr_handle *intr_handle);
-
-/**
- * @warning
- * @b EXPERIMENTAL: this API may change without prior notice
- *
- * @internal
- * Check if currently executing in interrupt context
- *
- * @return
- * - non zero in case of interrupt context
- * - zero in case of process context
- */
-__rte_experimental
-int
-rte_thread_is_intr(void);
-
#endif /* _RTE_EAL_INTERRUPTS_H_ */
diff --git a/lib/eal/include/rte_epoll.h b/lib/eal/include/rte_epoll.h
new file mode 100644
index 0000000000..182353cfd4
--- /dev/null
+++ b/lib/eal/include/rte_epoll.h
@@ -0,0 +1,116 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2021 Marvell International Ltd.
+ */
+
+#ifndef __RTE_EPOLL_H__
+#define __RTE_EPOLL_H__
+
+/**
+ * @file
+ * The rte_epoll provides interfaces to add/delete epoll events and to
+ * wait/poll for events.
+ */
+
+#include <rte_compat.h>
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+#define RTE_INTR_EVENT_ADD 1UL
+#define RTE_INTR_EVENT_DEL 2UL
+
+typedef void (*rte_intr_event_cb_t)(int fd, void *arg);
+
+struct rte_epoll_data {
+ uint32_t event; /**< event type */
+ void *data; /**< User data */
+ rte_intr_event_cb_t cb_fun; /**< IN: callback fun */
+ void *cb_arg; /**< IN: callback arg */
+};
+
+enum {
+ RTE_EPOLL_INVALID = 0,
+ RTE_EPOLL_VALID,
+ RTE_EPOLL_EXEC,
+};
+
+/** interrupt epoll event obj, taken by epoll_event.ptr */
+struct rte_epoll_event {
+ uint32_t status; /**< OUT: event status */
+ int fd; /**< OUT: event fd */
+ int epfd; /**< OUT: epoll instance the ev associated with */
+ struct rte_epoll_data epdata;
+};
+
+#define RTE_EPOLL_PER_THREAD -1 /**< to hint using per thread epfd */
+
+/**
+ * It waits for events on the epoll instance.
+ * Retries if signal received.
+ *
+ * @param epfd
+ * Epoll instance fd on which the caller wait for events.
+ * @param events
+ * Memory area contains the events that will be available for the caller.
+ * @param maxevents
+ * Up to maxevents are returned, must greater than zero.
+ * @param timeout
+ * Specifying a timeout of -1 causes a block indefinitely.
+ * Specifying a timeout equal to zero cause to return immediately.
+ * @return
+ * - On success, returns the number of available event.
+ * - On failure, a negative value.
+ */
+int
+rte_epoll_wait(int epfd, struct rte_epoll_event *events,
+ int maxevents, int timeout);
+
+/**
+ * It waits for events on the epoll instance.
+ * Does not retry if signal received.
+ *
+ * @param epfd
+ * Epoll instance fd on which the caller wait for events.
+ * @param events
+ * Memory area contains the events that will be available for the caller.
+ * @param maxevents
+ * Up to maxevents are returned, must greater than zero.
+ * @param timeout
+ * Specifying a timeout of -1 causes a block indefinitely.
+ * Specifying a timeout equal to zero cause to return immediately.
+ * @return
+ * - On success, returns the number of available event.
+ * - On failure, a negative value.
+ */
+__rte_experimental
+int
+rte_epoll_wait_interruptible(int epfd, struct rte_epoll_event *events,
+ int maxevents, int timeout);
+
+/**
+ * It performs control operations on epoll instance referred by the epfd.
+ * It requests that the operation op be performed for the target fd.
+ *
+ * @param epfd
+ * Epoll instance fd on which the caller perform control operations.
+ * @param op
+ * The operation be performed for the target fd.
+ * @param fd
+ * The target fd on which the control ops perform.
+ * @param event
+ * Describes the object linked to the fd.
+ * Note: The caller must take care the object deletion after CTL_DEL.
+ * @return
+ * - On success, zero.
+ * - On failure, a negative value.
+ */
+int
+rte_epoll_ctl(int epfd, int op, int fd,
+ struct rte_epoll_event *event);
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif /* __RTE_EPOLL_H__ */
diff --git a/lib/eal/include/rte_interrupts.h b/lib/eal/include/rte_interrupts.h
index cc3bf45d8c..afc3262967 100644
--- a/lib/eal/include/rte_interrupts.h
+++ b/lib/eal/include/rte_interrupts.h
@@ -5,8 +5,11 @@
#ifndef _RTE_INTERRUPTS_H_
#define _RTE_INTERRUPTS_H_
+#include <stdbool.h>
+
#include <rte_common.h>
#include <rte_compat.h>
+#include <rte_epoll.h>
/**
* @file
@@ -22,6 +25,10 @@ extern "C" {
/** Interrupt handle */
struct rte_intr_handle;
+#define RTE_INTR_HANDLE_DEFAULT_SIZE 1
+
+#include "rte_eal_interrupts.h"
+
/** Function to be registered for the specific interrupt */
typedef void (*rte_intr_callback_fn)(void *cb_arg);
@@ -32,8 +39,6 @@ typedef void (*rte_intr_callback_fn)(void *cb_arg);
typedef void (*rte_intr_unregister_callback_fn)(struct rte_intr_handle *intr_handle,
void *cb_arg);
-#include "rte_eal_interrupts.h"
-
/**
* It registers the callback for the specific interrupt. Multiple
* callbacks can be registered at the same time.
@@ -163,6 +168,650 @@ int rte_intr_disable(const struct rte_intr_handle *intr_handle);
__rte_experimental
int rte_intr_ack(const struct rte_intr_handle *intr_handle);
+/**
+ * The function returns the per thread epoll instance.
+ *
+ * @return
+ * epfd the epoll instance referred to.
+ */
+int
+rte_intr_tls_epfd(void);
+
+/**
+ * @param intr_handle
+ * Pointer to the interrupt handle.
+ * @param epfd
+ * Epoll instance fd which the intr vector associated to.
+ * @param op
+ * The operation be performed for the vector.
+ * Operation type of {ADD, DEL}.
+ * @param vec
+ * RX intr vector number added to the epoll instance wait list.
+ * @param data
+ * User raw data.
+ * @return
+ * - On success, zero.
+ * - On failure, a negative value.
+ */
+int
+rte_intr_rx_ctl(struct rte_intr_handle *intr_handle,
+ int epfd, int op, unsigned int vec, void *data);
+
+/**
+ * It deletes registered eventfds.
+ *
+ * @param intr_handle
+ * Pointer to the interrupt handle.
+ */
+void
+rte_intr_free_epoll_fd(struct rte_intr_handle *intr_handle);
+
+/**
+ * It enables the packet I/O interrupt event if it's necessary.
+ * It creates event fd for each interrupt vector when MSIX is used,
+ * otherwise it multiplexes a single event fd.
+ *
+ * @param intr_handle
+ * Pointer to the interrupt handle.
+ * @param nb_efd
+ * Number of interrupt vector trying to enable.
+ * The value 0 is not allowed.
+ * @return
+ * - On success, zero.
+ * - On failure, a negative value.
+ */
+int
+rte_intr_efd_enable(struct rte_intr_handle *intr_handle, uint32_t nb_efd);
+
+/**
+ * It disables the packet I/O interrupt event.
+ * It deletes registered eventfds and closes the open fds.
+ *
+ * @param intr_handle
+ * Pointer to the interrupt handle.
+ */
+void
+rte_intr_efd_disable(struct rte_intr_handle *intr_handle);
+
+/**
+ * The packet I/O interrupt on datapath is enabled or not.
+ *
+ * @param intr_handle
+ * Pointer to the interrupt handle.
+ */
+int
+rte_intr_dp_is_en(struct rte_intr_handle *intr_handle);
+
+/**
+ * The interrupt handle instance allows other causes or not.
+ * Other causes stand for any none packet I/O interrupts.
+ *
+ * @param intr_handle
+ * Pointer to the interrupt handle.
+ */
+int
+rte_intr_allow_others(struct rte_intr_handle *intr_handle);
+
+/**
+ * The multiple interrupt vector capability of interrupt handle instance.
+ * It returns zero if no multiple interrupt vector support.
+ *
+ * @param intr_handle
+ * Pointer to the interrupt handle.
+ */
+int
+rte_intr_cap_multiple(struct rte_intr_handle *intr_handle);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * @internal
+ * Check if currently executing in interrupt context
+ *
+ * @return
+ * - non zero in case of interrupt context
+ * - zero in case of process context
+ */
+__rte_experimental
+int
+rte_thread_is_intr(void);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * It allocates memory for interrupt instances based on the size provided by
+ * the user, i.e. size defines whether a single handle or an array of handles
+ * is allocated. Whether the memory comes from a hugepage or a normal
+ * allocation is also defined by the user. Default-sized event fd and event
+ * list arrays are allocated, which can be reallocated later as required.
+ *
+ * This function should be called from application or driver, before calling any
+ * of the interrupt APIs.
+ *
+ * @param size
+ * No of interrupt instances.
+ * @param from_hugepage
+ * Memory allocation from hugepage or normal allocation
+ *
+ * @return
+ * - On success, address of first interrupt handle.
+ * - On failure, NULL.
+ */
+__rte_experimental
+struct rte_intr_handle *rte_intr_handle_instance_alloc(int size,
+ bool from_hugepage);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Returns the address of interrupt handle instance as per the index
+ * provided.
+ *
+ * @param intr_handle
+ * Base address of interrupt handle array.
+ * @param index
+ * Index of the interrupt handle
+ *
+ * @return
+ * - On success, address of interrupt handle at index
+ * - On failure, NULL.
+ */
+__rte_experimental
+struct rte_intr_handle *rte_intr_handle_instance_index_get(
+ struct rte_intr_handle *intr_handle, int index);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * This API is used to free the memory allocated for event fds, event lists
+ * and the interrupt handle array.
+ *
+ * @param intr_handle
+ * Base address of interrupt handle array.
+ *
+ */
+__rte_experimental
+void rte_intr_handle_instance_free(struct rte_intr_handle *intr_handle);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * This API is used to populate interrupt handle at a given index of array
+ * of interrupt handles, with the values defined in src handler.
+ *
+ * @param intr_handle
+ * Start address of interrupt handles
+ * @param src
+ * Source interrupt handle to be cloned.
+ * @param index
+ * Index of the interrupt handle
+ *
+ * @return
+ * - On success, zero.
+ * - On failure, a negative value.
+ */
+__rte_experimental
+int rte_intr_handle_instance_index_set(struct rte_intr_handle *intr_handle,
+ const struct rte_intr_handle *src,
+ int index);
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * This API is used to set the fd field of interrupt handle with user provided
+ * file descriptor.
+ *
+ * @param intr_handle
+ * pointer to the interrupt handle.
+ * @param fd
+ * file descriptor value provided by user.
+ *
+ * @return
+ * - On success, zero.
+ * - On failure, a negative value.
+ */
+__rte_experimental
+int rte_intr_handle_fd_set(struct rte_intr_handle *intr_handle, int fd);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Returns the fd field of the given interrupt handle instance.
+ *
+ * @param intr_handle
+ * pointer to the interrupt handle.
+ *
+ * @return
+ * - On success, fd field.
+ * - On failure, a negative value.
+ */
+__rte_experimental
+int rte_intr_handle_fd_get(const struct rte_intr_handle *intr_handle);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * This API is used to set the type field of interrupt handle with user provided
+ * interrupt type.
+ *
+ * @param intr_handle
+ * pointer to the interrupt handle.
+ * @param type
+ * interrupt type
+ *
+ * @return
+ * - On success, zero.
+ * - On failure, a negative value.
+ */
+__rte_experimental
+int rte_intr_handle_type_set(struct rte_intr_handle *intr_handle,
+ enum rte_intr_handle_type type);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Returns the type field of the given interrupt handle instance.
+ *
+ * @param intr_handle
+ * pointer to the interrupt handle.
+ *
+ * @return
+ * - On success, interrupt type
+ * - On failure, RTE_INTR_HANDLE_UNKNOWN.
+ */
+__rte_experimental
+enum rte_intr_handle_type rte_intr_handle_type_get(
+ const struct rte_intr_handle *intr_handle);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * This API is used to set the device fd field of interrupt handle with user
+ * provided dev fd. Device fd corresponds to VFIO device fd or UIO config fd.
+ *
+ * @param intr_handle
+ * pointer to the interrupt handle.
+ * @param fd
+ * VFIO device fd or UIO config fd.
+ *
+ * @return
+ * - On success, zero.
+ * - On failure, a negative value.
+ */
+__rte_experimental
+int rte_intr_handle_dev_fd_set(struct rte_intr_handle *intr_handle, int fd);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Returns the device fd field of the given interrupt handle instance.
+ *
+ * @param intr_handle
+ * pointer to the interrupt handle.
+ *
+ * @return
+ * - On success, dev fd.
+ * - On failure, a negative value.
+ */
+__rte_experimental
+int rte_intr_handle_dev_fd_get(const struct rte_intr_handle *intr_handle);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * This API is used to set the max intr field of interrupt handle with user
+ * provided max intr value.
+ *
+ * @param intr_handle
+ * pointer to the interrupt handle.
+ * @param max_intr
+ * maximum interrupt count requested.
+ *
+ * @return
+ * - On success, zero.
+ * - On failure, a negative value.
+ */
+__rte_experimental
+int rte_intr_handle_max_intr_set(struct rte_intr_handle *intr_handle,
+ int max_intr);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Returns the max intr field of the given interrupt handle instance.
+ *
+ * @param intr_handle
+ * pointer to the interrupt handle.
+ *
+ * @return
+ * - On success, max intr.
+ * - On failure, a negative value.
+ */
+__rte_experimental
+int rte_intr_handle_max_intr_get(const struct rte_intr_handle *intr_handle);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * This API is used to set the no of event fd field of interrupt handle with
+ * user provided available event file descriptor value.
+ *
+ * @param intr_handle
+ * pointer to the interrupt handle.
+ * @param nb_efd
+ * Available event fd
+ *
+ * @return
+ * - On success, zero.
+ * - On failure, a negative value.
+ */
+__rte_experimental
+int rte_intr_handle_nb_efd_set(struct rte_intr_handle *intr_handle, int nb_efd);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Returns the no of available event fd field of the given interrupt handle
+ * instance.
+ *
+ * @param intr_handle
+ * pointer to the interrupt handle.
+ *
+ * @return
+ * - On success, nb_efd
+ * - On failure, a negative value.
+ */
+__rte_experimental
+int rte_intr_handle_nb_efd_get(const struct rte_intr_handle *intr_handle);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Returns the no of interrupt vector field of the given interrupt handle
+ * instance. This field is to be configured at device probe time, and based on
+ * this value efds and elist arrays are dynamically allocated. By default
+ * this value is set to RTE_MAX_RXTX_INTR_VEC_ID.
+ * For eg. in case of PCI device, its msix size is queried and efds/elist
+ * arrays are allocated accordingly.
+ *
+ * @param intr_handle
+ * pointer to the interrupt handle.
+ *
+ * @return
+ * - On success, nb_intr
+ * - On failure, a negative value.
+ */
+__rte_experimental
+int rte_intr_handle_nb_intr_get(const struct rte_intr_handle *intr_handle);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * This API is used to set the event fd counter size field of interrupt handle
+ * with user provided efd counter size.
+ *
+ * @param intr_handle
+ * pointer to the interrupt handle.
+ * @param efd_counter_size
+ * size of efd counter, used for vdev
+ *
+ * @return
+ * - On success, zero.
+ * - On failure, a negative value.
+ */
+__rte_experimental
+int rte_intr_handle_efd_counter_size_set(struct rte_intr_handle *intr_handle,
+ uint8_t efd_counter_size);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Returns the event fd counter size field of the given interrupt handle
+ * instance.
+ *
+ * @param intr_handle
+ * pointer to the interrupt handle.
+ *
+ * @return
+ * - On success, efd_counter_size
+ * - On failure, a negative value.
+ */
+__rte_experimental
+int rte_intr_handle_efd_counter_size_get(
+ const struct rte_intr_handle *intr_handle);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Returns the base address of the event fds array field of given interrupt
+ * handle.
+ *
+ * @param intr_handle
+ * pointer to the interrupt handle.
+ *
+ * @return
+ * - On success, efds base address
+ * - On failure, NULL.
+ */
+__rte_experimental
+int *rte_intr_handle_efds_base(struct rte_intr_handle *intr_handle);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * This API is used to set the event fd array index with the given fd.
+ *
+ * @param intr_handle
+ * pointer to the interrupt handle.
+ * @param index
+ * efds array index to be set
+ * @param fd
+ * event fd
+ *
+ * @return
+ * - On success, zero.
+ * - On failure, a negative value.
+ */
+__rte_experimental
+int rte_intr_handle_efds_index_set(struct rte_intr_handle *intr_handle,
+ int index, int fd);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Returns the fd value of event fds array at a given index.
+ *
+ * @param intr_handle
+ * pointer to the interrupt handle.
+ * @param index
+ * efds array index to be returned
+ *
+ * @return
+ * - On success, fd
+ * - On failure, a negative value.
+ */
+__rte_experimental
+int rte_intr_handle_efds_index_get(const struct rte_intr_handle *intr_handle,
+ int index);
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * This API is used to set the event list array index with the given elist
+ * instance.
+ *
+ * @param intr_handle
+ * pointer to the interrupt handle.
+ * @param index
+ * elist array index to be set
+ * @param elist
+ * event list instance of struct rte_epoll_event
+ *
+ * @return
+ * - On success, zero.
+ * - On failure, a negative value.
+ */
+__rte_experimental
+int rte_intr_handle_elist_index_set(struct rte_intr_handle *intr_handle,
+ int index, struct rte_epoll_event elist);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Returns the address of elist instance of event list array at a given index.
+ *
+ * @param intr_handle
+ * pointer to the interrupt handle.
+ * @param index
+ * elist array index to be returned
+ *
+ * @return
+ * - On success, elist
+ * - On failure, a negative value.
+ */
+__rte_experimental
+struct rte_epoll_event *rte_intr_handle_elist_index_get(
+ struct rte_intr_handle *intr_handle, int index);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Allocates the memory of interrupt vector list array, with size defining the
+ * no of elements required in the array.
+ *
+ * @param intr_handle
+ * pointer to the interrupt handle.
+ * @param name
+ * Name assigned to the allocation, or NULL.
+ * @param size
+ * No of element required in the array.
+ *
+ * @return
+ * - On success, zero
+ * - On failure, a negative value.
+ */
+__rte_experimental
+int rte_intr_handle_vec_list_alloc(struct rte_intr_handle *intr_handle,
+ const char *name, int size);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Sets the vector value at given index of interrupt vector list field of given
+ * interrupt handle.
+ *
+ * @param intr_handle
+ * pointer to the interrupt handle.
+ * @param index
+ * intr_vec array index to be set
+ * @param vec
+ * Interrupt vector value.
+ *
+ * @return
+ * - On success, zero
+ * - On failure, a negative value.
+ */
+__rte_experimental
+int rte_intr_handle_vec_list_index_set(struct rte_intr_handle *intr_handle,
+ int index, int vec);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Returns the vector value at the given index of interrupt vector list array.
+ *
+ * @param intr_handle
+ * pointer to the interrupt handle.
+ * @param index
+ * intr_vec array index to be returned
+ *
+ * @return
+ * - On success, interrupt vector
+ * - On failure, a negative value.
+ */
+__rte_experimental
+int rte_intr_handle_vec_list_index_get(
+ const struct rte_intr_handle *intr_handle, int index);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Frees the memory allocated for the interrupt vector list array.
+ *
+ * @param intr_handle
+ * pointer to the interrupt handle.
+ */
+__rte_experimental
+void rte_intr_handle_vec_list_free(struct rte_intr_handle *intr_handle);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Returns the base address of interrupt vector list array.
+ *
+ * @param intr_handle
+ * pointer to the interrupt handle.
+ *
+ * @return
+ * - On success, base address of the intr_vec array.
+ * - On failure, NULL.
+ */
+__rte_experimental
+int *rte_intr_handle_vec_list_base(const struct rte_intr_handle *intr_handle);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Reallocates the efds and elist arrays to the size provided by the user.
+ * By default, the efds and elist arrays are allocated with
+ * RTE_MAX_RXTX_INTR_VEC_ID entries when the interrupt handle array is
+ * created. Later, on device probe, the device may turn out to support more
+ * interrupts than RTE_MAX_RXTX_INTR_VEC_ID. With this API, PMDs can
+ * reallocate the arrays to match the maximum interrupt capability of the
+ * device.
+ *
+ * @param intr_handle
+ * pointer to the interrupt handle.
+ * @param size
+ * efds and elist array size.
+ *
+ * @return
+ * - On success, zero
+ * - On failure, a negative value.
+ */
+__rte_experimental
+int rte_intr_handle_event_list_update(struct rte_intr_handle *intr_handle,
+ int size);
#ifdef __cplusplus
}
#endif
--
2.18.0
^ permalink raw reply [flat|nested] 152+ messages in thread
* [dpdk-dev] [PATCH v1 2/7] eal/interrupts: implement get set APIs
2021-09-03 12:40 ` [dpdk-dev] [PATCH v1 0/7] make rte_intr_handle internal Harman Kalra
2021-09-03 12:40 ` [dpdk-dev] [PATCH v1 1/7] eal: interrupt handle API prototypes Harman Kalra
@ 2021-09-03 12:40 ` Harman Kalra
2021-09-28 15:46 ` David Marchand
2021-10-03 18:05 ` [dpdk-dev] " Dmitry Kozlyuk
2021-09-03 12:40 ` [dpdk-dev] [PATCH v1 3/7] eal/interrupts: avoid direct access to interrupt handle Harman Kalra
` (6 subsequent siblings)
8 siblings, 2 replies; 152+ messages in thread
From: Harman Kalra @ 2021-09-03 12:40 UTC (permalink / raw)
To: dev, Harman Kalra, Ray Kinsella
Implementing get/set APIs for the interrupt handle fields.
Any change to an interrupt handle field should be made
through these APIs.
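For illustration, a driver that previously embedded a static struct
rte_intr_handle would now allocate an instance and go through the accessors,
roughly as in the sketch below (dev_fd is a placeholder, and
RTE_INTR_HANDLE_DEFAULT_SIZE is assumed from the header changes earlier in
the series):

	struct rte_intr_handle *handle;

	/* allocate a single instance from the heap (not hugepage memory) */
	handle = rte_intr_handle_instance_alloc(RTE_INTR_HANDLE_DEFAULT_SIZE,
						false);
	if (handle == NULL)
		return -rte_errno;

	/* fields are populated only through the set APIs */
	if (rte_intr_handle_fd_set(handle, dev_fd) ||
	    rte_intr_handle_type_set(handle, RTE_INTR_HANDLE_VFIO_MSIX))
		return -rte_errno;

	/* reads go through the matching get APIs, e.g.
	 * rte_intr_handle_fd_get(handle); the instance is released with:
	 */
	rte_intr_handle_instance_free(handle);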
Signed-off-by: Harman Kalra <hkalra@marvell.com>
Acked-by: Ray Kinsella <mdr@ashroe.eu>
---
lib/eal/common/eal_common_interrupts.c | 506 +++++++++++++++++++++++++
lib/eal/common/meson.build | 2 +
lib/eal/include/rte_eal_interrupts.h | 6 +-
lib/eal/version.map | 30 ++
4 files changed, 543 insertions(+), 1 deletion(-)
create mode 100644 lib/eal/common/eal_common_interrupts.c
diff --git a/lib/eal/common/eal_common_interrupts.c b/lib/eal/common/eal_common_interrupts.c
new file mode 100644
index 0000000000..2e4fed96f0
--- /dev/null
+++ b/lib/eal/common/eal_common_interrupts.c
@@ -0,0 +1,506 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2021 Marvell.
+ */
+
+#include <stdlib.h>
+#include <string.h>
+
+#include <rte_errno.h>
+#include <rte_log.h>
+#include <rte_malloc.h>
+
+#include <rte_interrupts.h>
+
+
+struct rte_intr_handle *rte_intr_handle_instance_alloc(int size,
+ bool from_hugepage)
+{
+ struct rte_intr_handle *intr_handle;
+ int i;
+
+ if (from_hugepage)
+ intr_handle = rte_zmalloc(NULL,
+ size * sizeof(struct rte_intr_handle),
+ 0);
+ else
+ intr_handle = calloc(1, size * sizeof(struct rte_intr_handle));
+ if (!intr_handle) {
+ RTE_LOG(ERR, EAL, "Fail to allocate intr_handle\n");
+ rte_errno = ENOMEM;
+ return NULL;
+ }
+
+ for (i = 0; i < size; i++) {
+ intr_handle[i].nb_intr = RTE_MAX_RXTX_INTR_VEC_ID;
+ intr_handle[i].alloc_from_hugepage = from_hugepage;
+ }
+
+ return intr_handle;
+}
+
+struct rte_intr_handle *rte_intr_handle_instance_index_get(
+ struct rte_intr_handle *intr_handle, int index)
+{
+ if (intr_handle == NULL) {
+ RTE_LOG(ERR, EAL, "Interrupt instance unallocated\n");
+ rte_errno = ENOMEM;
+ return NULL;
+ }
+
+ return &intr_handle[index];
+}
+
+int rte_intr_handle_instance_index_set(struct rte_intr_handle *intr_handle,
+ const struct rte_intr_handle *src,
+ int index)
+{
+ if (intr_handle == NULL) {
+ RTE_LOG(ERR, EAL, "Interrupt instance unallocated\n");
+ rte_errno = ENOTSUP;
+ goto fail;
+ }
+
+ if (src == NULL) {
+ RTE_LOG(ERR, EAL, "Source interrupt instance unallocated\n");
+ rte_errno = EINVAL;
+ goto fail;
+ }
+
+ if (index < 0) {
+ RTE_LOG(ERR, EAL, "Index cany be negative");
+ rte_errno = EINVAL;
+ goto fail;
+ }
+
+ intr_handle[index].fd = src->fd;
+ intr_handle[index].vfio_dev_fd = src->vfio_dev_fd;
+ intr_handle[index].type = src->type;
+ intr_handle[index].max_intr = src->max_intr;
+ intr_handle[index].nb_efd = src->nb_efd;
+ intr_handle[index].efd_counter_size = src->efd_counter_size;
+
+	memcpy(intr_handle[index].efds, src->efds,
+	       src->nb_intr * sizeof(src->efds[0]));
+	memcpy(intr_handle[index].elist, src->elist,
+	       src->nb_intr * sizeof(src->elist[0]));
+
+ return 0;
+fail:
+ return rte_errno;
+}
+
+void rte_intr_handle_instance_free(struct rte_intr_handle *intr_handle)
+{
+	if (intr_handle == NULL) {
+		RTE_LOG(ERR, EAL, "Interrupt instance unallocated\n");
+		rte_errno = ENOTSUP;
+		return;
+	}
+
+ if (intr_handle->alloc_from_hugepage)
+ rte_free(intr_handle);
+ else
+ free(intr_handle);
+}
+
+int rte_intr_handle_fd_set(struct rte_intr_handle *intr_handle, int fd)
+{
+ if (intr_handle == NULL) {
+ RTE_LOG(ERR, EAL, "Interrupt instance unallocated\n");
+ rte_errno = ENOTSUP;
+ goto fail;
+ }
+
+ intr_handle->fd = fd;
+
+ return 0;
+fail:
+ return rte_errno;
+}
+
+int rte_intr_handle_fd_get(const struct rte_intr_handle *intr_handle)
+{
+ if (intr_handle == NULL) {
+ RTE_LOG(ERR, EAL, "Interrupt instance unallocated\n");
+ rte_errno = ENOTSUP;
+ goto fail;
+ }
+
+ return intr_handle->fd;
+fail:
+ return rte_errno;
+}
+
+int rte_intr_handle_type_set(struct rte_intr_handle *intr_handle,
+ enum rte_intr_handle_type type)
+{
+ if (intr_handle == NULL) {
+ RTE_LOG(ERR, EAL, "Interrupt instance unallocated\n");
+ rte_errno = ENOTSUP;
+ goto fail;
+ }
+
+ intr_handle->type = type;
+
+ return 0;
+fail:
+ return rte_errno;
+}
+
+enum rte_intr_handle_type rte_intr_handle_type_get(
+ const struct rte_intr_handle *intr_handle)
+{
+ if (intr_handle == NULL) {
+ RTE_LOG(ERR, EAL, "Interrupt instance unallocated\n");
+ rte_errno = ENOTSUP;
+ return RTE_INTR_HANDLE_UNKNOWN;
+ }
+
+ return intr_handle->type;
+}
+
+int rte_intr_handle_dev_fd_set(struct rte_intr_handle *intr_handle, int fd)
+{
+ if (intr_handle == NULL) {
+ RTE_LOG(ERR, EAL, "Interrupt instance unallocated\n");
+ rte_errno = ENOTSUP;
+ goto fail;
+ }
+
+ intr_handle->vfio_dev_fd = fd;
+
+ return 0;
+fail:
+ return rte_errno;
+}
+
+int rte_intr_handle_dev_fd_get(const struct rte_intr_handle *intr_handle)
+{
+ if (intr_handle == NULL) {
+ RTE_LOG(ERR, EAL, "Interrupt instance unallocated\n");
+ rte_errno = ENOTSUP;
+ goto fail;
+ }
+
+ return intr_handle->vfio_dev_fd;
+fail:
+ return rte_errno;
+}
+
+int rte_intr_handle_max_intr_set(struct rte_intr_handle *intr_handle,
+ int max_intr)
+{
+ if (intr_handle == NULL) {
+ RTE_LOG(ERR, EAL, "Interrupt instance unallocated\n");
+ rte_errno = ENOTSUP;
+ goto fail;
+ }
+
+ if (max_intr > intr_handle->nb_intr) {
+ RTE_LOG(ERR, EAL, "Max_intr=%d greater than PLT_MAX_RXTX_INTR_VEC_ID=%d",
+ max_intr, intr_handle->nb_intr);
+ rte_errno = ERANGE;
+ goto fail;
+ }
+
+ intr_handle->max_intr = max_intr;
+
+ return 0;
+fail:
+ return rte_errno;
+}
+
+int rte_intr_handle_max_intr_get(const struct rte_intr_handle *intr_handle)
+{
+ if (intr_handle == NULL) {
+ RTE_LOG(ERR, EAL, "Interrupt instance unallocated\n");
+ rte_errno = ENOTSUP;
+ goto fail;
+ }
+
+ return intr_handle->max_intr;
+fail:
+ return rte_errno;
+}
+
+int rte_intr_handle_nb_efd_set(struct rte_intr_handle *intr_handle,
+ int nb_efd)
+{
+ if (intr_handle == NULL) {
+ RTE_LOG(ERR, EAL, "Interrupt instance unallocated\n");
+ rte_errno = ENOTSUP;
+ goto fail;
+ }
+
+ intr_handle->nb_efd = nb_efd;
+
+ return 0;
+fail:
+ return rte_errno;
+}
+
+int rte_intr_handle_nb_efd_get(const struct rte_intr_handle *intr_handle)
+{
+ if (intr_handle == NULL) {
+ RTE_LOG(ERR, EAL, "Interrupt instance unallocated\n");
+ rte_errno = ENOTSUP;
+ goto fail;
+ }
+
+ return intr_handle->nb_efd;
+fail:
+ return rte_errno;
+}
+
+int rte_intr_handle_nb_intr_get(const struct rte_intr_handle *intr_handle)
+{
+ if (intr_handle == NULL) {
+ RTE_LOG(ERR, EAL, "Interrupt instance unallocated\n");
+ rte_errno = ENOTSUP;
+ goto fail;
+ }
+
+ return intr_handle->nb_intr;
+fail:
+ return rte_errno;
+}
+
+int rte_intr_handle_efd_counter_size_set(struct rte_intr_handle *intr_handle,
+ uint8_t efd_counter_size)
+{
+ if (intr_handle == NULL) {
+ RTE_LOG(ERR, EAL, "Interrupt instance unallocated\n");
+ rte_errno = ENOTSUP;
+ goto fail;
+ }
+
+ intr_handle->efd_counter_size = efd_counter_size;
+
+ return 0;
+fail:
+ return rte_errno;
+}
+
+int rte_intr_handle_efd_counter_size_get(
+ const struct rte_intr_handle *intr_handle)
+{
+ if (intr_handle == NULL) {
+ RTE_LOG(ERR, EAL, "Interrupt instance unallocated\n");
+ rte_errno = ENOTSUP;
+ goto fail;
+ }
+
+ return intr_handle->efd_counter_size;
+fail:
+ return rte_errno;
+}
+
+int *rte_intr_handle_efds_base(struct rte_intr_handle *intr_handle)
+{
+ if (intr_handle == NULL) {
+ RTE_LOG(ERR, EAL, "Interrupt instance unallocated\n");
+ rte_errno = ENOTSUP;
+ goto fail;
+ }
+
+ return intr_handle->efds;
+fail:
+ return NULL;
+}
+
+int rte_intr_handle_efds_index_get(const struct rte_intr_handle *intr_handle,
+ int index)
+{
+ if (intr_handle == NULL) {
+ RTE_LOG(ERR, EAL, "Interrupt instance unallocated\n");
+ rte_errno = ENOTSUP;
+ goto fail;
+ }
+
+ if (index >= intr_handle->nb_intr) {
+ RTE_LOG(ERR, EAL, "Invalid size %d, max limit %d\n", index,
+ intr_handle->nb_intr);
+ rte_errno = EINVAL;
+ goto fail;
+ }
+
+ return intr_handle->efds[index];
+fail:
+ return rte_errno;
+}
+
+int rte_intr_handle_efds_index_set(struct rte_intr_handle *intr_handle,
+ int index, int fd)
+{
+ if (intr_handle == NULL) {
+ RTE_LOG(ERR, EAL, "Interrupt instance unallocated\n");
+ rte_errno = ENOTSUP;
+ goto fail;
+ }
+
+ if (index >= intr_handle->nb_intr) {
+ RTE_LOG(ERR, EAL, "Invalid size %d, max limit %d\n", index,
+ intr_handle->nb_intr);
+ rte_errno = ERANGE;
+ goto fail;
+ }
+
+ intr_handle->efds[index] = fd;
+
+ return 0;
+fail:
+ return rte_errno;
+}
+
+struct rte_epoll_event *rte_intr_handle_elist_index_get(
+ struct rte_intr_handle *intr_handle, int index)
+{
+ if (intr_handle == NULL) {
+ RTE_LOG(ERR, EAL, "Interrupt instance unallocated\n");
+ rte_errno = ENOTSUP;
+ goto fail;
+ }
+
+ if (index >= intr_handle->nb_intr) {
+ RTE_LOG(ERR, EAL, "Invalid size %d, max limit %d\n", index,
+ intr_handle->nb_intr);
+ rte_errno = ERANGE;
+ goto fail;
+ }
+
+ return &intr_handle->elist[index];
+fail:
+ return NULL;
+}
+
+int rte_intr_handle_elist_index_set(struct rte_intr_handle *intr_handle,
+ int index, struct rte_epoll_event elist)
+{
+ if (intr_handle == NULL) {
+ RTE_LOG(ERR, EAL, "Interrupt instance unallocated\n");
+ rte_errno = ENOTSUP;
+ goto fail;
+ }
+
+ if (index >= intr_handle->nb_intr) {
+ RTE_LOG(ERR, EAL, "Invalid size %d, max limit %d\n", index,
+ intr_handle->nb_intr);
+ rte_errno = ERANGE;
+ goto fail;
+ }
+
+ intr_handle->elist[index] = elist;
+
+ return 0;
+fail:
+ return rte_errno;
+}
+
+int *rte_intr_handle_vec_list_base(const struct rte_intr_handle *intr_handle)
+{
+ if (intr_handle == NULL) {
+ RTE_LOG(ERR, EAL, "Interrupt instance unallocated\n");
+ rte_errno = ENOTSUP;
+ return NULL;
+ }
+
+ return intr_handle->intr_vec;
+}
+
+int rte_intr_handle_vec_list_alloc(struct rte_intr_handle *intr_handle,
+ const char *name, int size)
+{
+ if (intr_handle == NULL) {
+ RTE_LOG(ERR, EAL, "Interrupt instance unallocated\n");
+ rte_errno = ENOTSUP;
+ goto fail;
+ }
+
+ /* Vector list already allocated */
+ if (intr_handle->intr_vec)
+ return 0;
+
+ if (size > intr_handle->nb_intr) {
+ RTE_LOG(ERR, EAL, "Invalid size %d, max limit %d\n", size,
+ intr_handle->nb_intr);
+ rte_errno = ERANGE;
+ goto fail;
+ }
+
+ intr_handle->intr_vec = rte_zmalloc(name, size * sizeof(int), 0);
+ if (!intr_handle->intr_vec) {
+ RTE_LOG(ERR, EAL, "Failed to allocate %d intr_vec", size);
+ rte_errno = ENOMEM;
+ goto fail;
+ }
+
+ intr_handle->vec_list_size = size;
+
+ return 0;
+fail:
+ return rte_errno;
+}
+
+int rte_intr_handle_vec_list_index_get(
+ const struct rte_intr_handle *intr_handle, int index)
+{
+ if (intr_handle == NULL) {
+ RTE_LOG(ERR, EAL, "Interrupt instance unallocated\n");
+ rte_errno = ENOTSUP;
+ goto fail;
+ }
+
+ if (!intr_handle->intr_vec) {
+ RTE_LOG(ERR, EAL, "Intr vector list not allocated\n");
+ rte_errno = ENOTSUP;
+ goto fail;
+ }
+
+	if (index >= intr_handle->vec_list_size) {
+ RTE_LOG(ERR, EAL, "Index %d greater than vec list size %d\n",
+ index, intr_handle->vec_list_size);
+ rte_errno = ERANGE;
+ goto fail;
+ }
+
+ return intr_handle->intr_vec[index];
+fail:
+ return rte_errno;
+}
+
+int rte_intr_handle_vec_list_index_set(struct rte_intr_handle *intr_handle,
+ int index, int vec)
+{
+ if (intr_handle == NULL) {
+ RTE_LOG(ERR, EAL, "Interrupt instance unallocated\n");
+ rte_errno = ENOTSUP;
+ goto fail;
+ }
+
+ if (!intr_handle->intr_vec) {
+ RTE_LOG(ERR, EAL, "Intr vector list not allocated\n");
+ rte_errno = ENOTSUP;
+ goto fail;
+ }
+
+	if (index >= intr_handle->vec_list_size) {
+ RTE_LOG(ERR, EAL, "Index %d greater than vec list size %d\n",
+ index, intr_handle->vec_list_size);
+ rte_errno = ERANGE;
+ goto fail;
+ }
+
+ intr_handle->intr_vec[index] = vec;
+
+ return 0;
+fail:
+ return rte_errno;
+}
+
+void rte_intr_handle_vec_list_free(struct rte_intr_handle *intr_handle)
+{
+ if (intr_handle == NULL) {
+ RTE_LOG(ERR, EAL, "Interrupt instance unallocated\n");
+ rte_errno = ENOTSUP;
+		return;
+	}
+
+ rte_free(intr_handle->intr_vec);
+ intr_handle->intr_vec = NULL;
+}
diff --git a/lib/eal/common/meson.build b/lib/eal/common/meson.build
index edfca77779..47f2977539 100644
--- a/lib/eal/common/meson.build
+++ b/lib/eal/common/meson.build
@@ -17,6 +17,7 @@ if is_windows
'eal_common_errno.c',
'eal_common_fbarray.c',
'eal_common_hexdump.c',
+ 'eal_common_interrupts.c',
'eal_common_launch.c',
'eal_common_lcore.c',
'eal_common_log.c',
@@ -53,6 +54,7 @@ sources += files(
'eal_common_fbarray.c',
'eal_common_hexdump.c',
'eal_common_hypervisor.c',
+ 'eal_common_interrupts.c',
'eal_common_launch.c',
'eal_common_lcore.c',
'eal_common_log.c',
diff --git a/lib/eal/include/rte_eal_interrupts.h b/lib/eal/include/rte_eal_interrupts.h
index 68ca3a042d..216aece61b 100644
--- a/lib/eal/include/rte_eal_interrupts.h
+++ b/lib/eal/include/rte_eal_interrupts.h
@@ -55,13 +55,17 @@ struct rte_intr_handle {
};
void *handle; /**< device driver handle (Windows) */
};
+ bool alloc_from_hugepage;
enum rte_intr_handle_type type; /**< handle type */
uint32_t max_intr; /**< max interrupt requested */
uint32_t nb_efd; /**< number of available efd(event fd) */
uint8_t efd_counter_size; /**< size of efd counter, used for vdev */
+ uint16_t nb_intr;
+ /**< Max vector count, default RTE_MAX_RXTX_INTR_VEC_ID */
int efds[RTE_MAX_RXTX_INTR_VEC_ID]; /**< intr vectors/efds mapping */
struct rte_epoll_event elist[RTE_MAX_RXTX_INTR_VEC_ID];
- /**< intr vector epoll event */
+ /**< intr vector epoll event */
+ uint16_t vec_list_size;
int *intr_vec; /**< intr vector number array */
};
diff --git a/lib/eal/version.map b/lib/eal/version.map
index beeb986adc..56108d0998 100644
--- a/lib/eal/version.map
+++ b/lib/eal/version.map
@@ -426,6 +426,36 @@ EXPERIMENTAL {
# added in 21.08
rte_power_monitor_multi; # WINDOWS_NO_EXPORT
+
+ # added in 21.11
+ rte_intr_handle_fd_set;
+ rte_intr_handle_fd_get;
+ rte_intr_handle_dev_fd_set;
+ rte_intr_handle_dev_fd_get;
+ rte_intr_handle_type_set;
+ rte_intr_handle_type_get;
+ rte_intr_handle_instance_alloc;
+ rte_intr_handle_instance_index_get;
+ rte_intr_handle_instance_free;
+ rte_intr_handle_instance_index_set;
+ rte_intr_handle_event_list_update;
+ rte_intr_handle_max_intr_set;
+ rte_intr_handle_max_intr_get;
+ rte_intr_handle_nb_efd_set;
+ rte_intr_handle_nb_efd_get;
+ rte_intr_handle_nb_intr_get;
+ rte_intr_handle_efds_index_set;
+ rte_intr_handle_efds_index_get;
+ rte_intr_handle_efds_base;
+ rte_intr_handle_elist_index_set;
+ rte_intr_handle_elist_index_get;
+ rte_intr_handle_efd_counter_size_set;
+ rte_intr_handle_efd_counter_size_get;
+ rte_intr_handle_vec_list_alloc;
+ rte_intr_handle_vec_list_index_set;
+ rte_intr_handle_vec_list_index_get;
+ rte_intr_handle_vec_list_free;
+ rte_intr_handle_vec_list_base;
};
INTERNAL {
--
2.18.0
^ permalink raw reply [flat|nested] 152+ messages in thread
* [dpdk-dev] [PATCH v1 3/7] eal/interrupts: avoid direct access to interrupt handle
2021-09-03 12:40 ` [dpdk-dev] [PATCH v1 0/7] make rte_intr_handle internal Harman Kalra
2021-09-03 12:40 ` [dpdk-dev] [PATCH v1 1/7] eal: interrupt handle API prototypes Harman Kalra
2021-09-03 12:40 ` [dpdk-dev] [PATCH v1 2/7] eal/interrupts: implement get set APIs Harman Kalra
@ 2021-09-03 12:40 ` Harman Kalra
2021-09-03 12:40 ` [dpdk-dev] [PATCH v1 4/7] test/interrupt: apply get set interrupt handle APIs Harman Kalra
` (5 subsequent siblings)
8 siblings, 0 replies; 152+ messages in thread
From: Harman Kalra @ 2021-09-03 12:40 UTC (permalink / raw)
To: dev, Harman Kalra, Bruce Richardson
Making changes to the interrupt framework to use the interrupt handle
get/set APIs for every field access. Direct access to the fields
should be avoided in order to prevent ABI breakage in the future.
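As a reference, the conversion applied throughout both files follows this
shape (illustrative sketch, mirroring the uio/vfio paths below):

	/* before: direct access to struct rte_intr_handle fields */
	if (intr_handle->fd < 0 || intr_handle->uio_cfg_fd < 0)
		return -1;

	/* after: accessor based, no knowledge of the structure layout */
	if (rte_intr_handle_fd_get(intr_handle) < 0 ||
	    rte_intr_handle_dev_fd_get(intr_handle) < 0)
		return -1;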
Signed-off-by: Harman Kalra <hkalra@marvell.com>
---
lib/eal/freebsd/eal_interrupts.c | 94 ++++++----
lib/eal/linux/eal_interrupts.c | 294 +++++++++++++++++++------------
2 files changed, 242 insertions(+), 146 deletions(-)
diff --git a/lib/eal/freebsd/eal_interrupts.c b/lib/eal/freebsd/eal_interrupts.c
index 86810845fe..171006f19f 100644
--- a/lib/eal/freebsd/eal_interrupts.c
+++ b/lib/eal/freebsd/eal_interrupts.c
@@ -40,7 +40,7 @@ struct rte_intr_callback {
struct rte_intr_source {
TAILQ_ENTRY(rte_intr_source) next;
- struct rte_intr_handle intr_handle; /**< interrupt handle */
+ struct rte_intr_handle *intr_handle; /**< interrupt handle */
struct rte_intr_cb_list callbacks; /**< user callbacks */
uint32_t active;
};
@@ -60,7 +60,7 @@ static int
intr_source_to_kevent(const struct rte_intr_handle *ih, struct kevent *ke)
{
/* alarm callbacks are special case */
- if (ih->type == RTE_INTR_HANDLE_ALARM) {
+ if (rte_intr_handle_type_get(ih) == RTE_INTR_HANDLE_ALARM) {
uint64_t timeout_ns;
/* get soonest alarm timeout */
@@ -75,7 +75,7 @@ intr_source_to_kevent(const struct rte_intr_handle *ih, struct kevent *ke)
} else {
ke->filter = EVFILT_READ;
}
- ke->ident = ih->fd;
+ ke->ident = rte_intr_handle_fd_get(ih);
return 0;
}
@@ -89,7 +89,8 @@ rte_intr_callback_register(const struct rte_intr_handle *intr_handle,
int ret = 0, add_event = 0;
/* first do parameter checking */
- if (intr_handle == NULL || intr_handle->fd < 0 || cb == NULL) {
+ if (intr_handle == NULL || rte_intr_handle_fd_get(intr_handle) < 0 ||
+ cb == NULL) {
RTE_LOG(ERR, EAL,
"Registering with invalid input parameter\n");
return -EINVAL;
@@ -103,7 +104,8 @@ rte_intr_callback_register(const struct rte_intr_handle *intr_handle,
/* find the source for this intr_handle */
TAILQ_FOREACH(src, &intr_sources, next) {
- if (src->intr_handle.fd == intr_handle->fd)
+ if (rte_intr_handle_fd_get(src->intr_handle) ==
+ rte_intr_handle_fd_get(intr_handle))
break;
}
@@ -112,8 +114,8 @@ rte_intr_callback_register(const struct rte_intr_handle *intr_handle,
* thing on the list should be eal_alarm_callback() and we may
* be called just to reset the timer.
*/
- if (src != NULL && src->intr_handle.type == RTE_INTR_HANDLE_ALARM &&
- !TAILQ_EMPTY(&src->callbacks)) {
+ if (src != NULL && rte_intr_handle_type_get(src->intr_handle) ==
+ RTE_INTR_HANDLE_ALARM && !TAILQ_EMPTY(&src->callbacks)) {
callback = NULL;
} else {
/* allocate a new interrupt callback entity */
@@ -135,9 +137,20 @@ rte_intr_callback_register(const struct rte_intr_handle *intr_handle,
ret = -ENOMEM;
goto fail;
} else {
- src->intr_handle = *intr_handle;
- TAILQ_INIT(&src->callbacks);
- TAILQ_INSERT_TAIL(&intr_sources, src, next);
+ src->intr_handle =
+ rte_intr_handle_instance_alloc(
+ RTE_INTR_HANDLE_DEFAULT_SIZE, false);
+ if (src->intr_handle == NULL) {
+ RTE_LOG(ERR, EAL, "Can not create intr instance\n");
+ free(callback);
+ ret = -ENOMEM;
+ } else {
+ rte_intr_handle_instance_index_set(
+ src->intr_handle, intr_handle, 0);
+ TAILQ_INIT(&src->callbacks);
+ TAILQ_INSERT_TAIL(&intr_sources, src,
+ next);
+ }
}
}
@@ -151,7 +164,8 @@ rte_intr_callback_register(const struct rte_intr_handle *intr_handle,
/* add events to the queue. timer events are special as we need to
* re-set the timer.
*/
- if (add_event || src->intr_handle.type == RTE_INTR_HANDLE_ALARM) {
+ if (add_event || rte_intr_handle_type_get(src->intr_handle) ==
+ RTE_INTR_HANDLE_ALARM) {
struct kevent ke;
memset(&ke, 0, sizeof(ke));
@@ -173,12 +187,13 @@ rte_intr_callback_register(const struct rte_intr_handle *intr_handle,
*/
if (errno == ENODEV)
RTE_LOG(DEBUG, EAL, "Interrupt handle %d not supported\n",
- src->intr_handle.fd);
+ rte_intr_handle_fd_get(src->intr_handle));
else
RTE_LOG(ERR, EAL, "Error adding fd %d "
- "kevent, %s\n",
- src->intr_handle.fd,
- strerror(errno));
+ "kevent, %s\n",
+ rte_intr_handle_fd_get(
+ src->intr_handle),
+ strerror(errno));
ret = -errno;
goto fail;
}
@@ -213,7 +228,7 @@ rte_intr_callback_unregister_pending(const struct rte_intr_handle *intr_handle,
struct rte_intr_callback *cb, *next;
/* do parameter checking first */
- if (intr_handle == NULL || intr_handle->fd < 0) {
+ if (intr_handle == NULL || rte_intr_handle_fd_get(intr_handle) < 0) {
RTE_LOG(ERR, EAL,
"Unregistering with invalid input parameter\n");
return -EINVAL;
@@ -228,7 +243,8 @@ rte_intr_callback_unregister_pending(const struct rte_intr_handle *intr_handle,
/* check if the insterrupt source for the fd is existent */
TAILQ_FOREACH(src, &intr_sources, next)
- if (src->intr_handle.fd == intr_handle->fd)
+ if (rte_intr_handle_fd_get(src->intr_handle) ==
+ rte_intr_handle_fd_get(intr_handle))
break;
/* No interrupt source registered for the fd */
@@ -268,7 +284,7 @@ rte_intr_callback_unregister(const struct rte_intr_handle *intr_handle,
struct rte_intr_callback *cb, *next;
/* do parameter checking first */
- if (intr_handle == NULL || intr_handle->fd < 0) {
+ if (intr_handle == NULL || rte_intr_handle_fd_get(intr_handle) < 0) {
RTE_LOG(ERR, EAL,
"Unregistering with invalid input parameter\n");
return -EINVAL;
@@ -282,7 +298,8 @@ rte_intr_callback_unregister(const struct rte_intr_handle *intr_handle,
/* check if the insterrupt source for the fd is existent */
TAILQ_FOREACH(src, &intr_sources, next)
- if (src->intr_handle.fd == intr_handle->fd)
+ if (rte_intr_handle_fd_get(src->intr_handle) ==
+ rte_intr_handle_fd_get(intr_handle))
break;
/* No interrupt source registered for the fd */
@@ -314,7 +331,8 @@ rte_intr_callback_unregister(const struct rte_intr_handle *intr_handle,
*/
if (kevent(kq, &ke, 1, NULL, 0, NULL) < 0) {
RTE_LOG(ERR, EAL, "Error removing fd %d kevent, %s\n",
- src->intr_handle.fd, strerror(errno));
+ rte_intr_handle_fd_get(src->intr_handle),
+ strerror(errno));
/* removing non-existent even is an expected condition
* in some circumstances (e.g. oneshot events).
*/
@@ -365,17 +383,18 @@ rte_intr_enable(const struct rte_intr_handle *intr_handle)
if (intr_handle == NULL)
return -1;
- if (intr_handle->type == RTE_INTR_HANDLE_VDEV) {
+ if (rte_intr_handle_type_get(intr_handle) == RTE_INTR_HANDLE_VDEV) {
rc = 0;
goto out;
}
- if (intr_handle->fd < 0 || intr_handle->uio_cfg_fd < 0) {
+ if (rte_intr_handle_fd_get(intr_handle) < 0 ||
+ rte_intr_handle_dev_fd_get(intr_handle) < 0) {
rc = -1;
goto out;
}
- switch (intr_handle->type) {
+ switch (rte_intr_handle_type_get(intr_handle)) {
/* not used at this moment */
case RTE_INTR_HANDLE_ALARM:
rc = -1;
@@ -388,7 +407,7 @@ rte_intr_enable(const struct rte_intr_handle *intr_handle)
default:
RTE_LOG(ERR, EAL,
"Unknown handle type of fd %d\n",
- intr_handle->fd);
+ rte_intr_handle_fd_get(intr_handle));
rc = -1;
break;
}
@@ -406,17 +425,18 @@ rte_intr_disable(const struct rte_intr_handle *intr_handle)
if (intr_handle == NULL)
return -1;
- if (intr_handle->type == RTE_INTR_HANDLE_VDEV) {
+ if (rte_intr_handle_type_get(intr_handle) == RTE_INTR_HANDLE_VDEV) {
rc = 0;
goto out;
}
- if (intr_handle->fd < 0 || intr_handle->uio_cfg_fd < 0) {
+ if (rte_intr_handle_fd_get(intr_handle) < 0 ||
+ rte_intr_handle_dev_fd_get(intr_handle) < 0) {
rc = -1;
goto out;
}
- switch (intr_handle->type) {
+ switch (rte_intr_handle_type_get(intr_handle)) {
/* not used at this moment */
case RTE_INTR_HANDLE_ALARM:
rc = -1;
@@ -429,7 +449,7 @@ rte_intr_disable(const struct rte_intr_handle *intr_handle)
default:
RTE_LOG(ERR, EAL,
"Unknown handle type of fd %d\n",
- intr_handle->fd);
+ rte_intr_handle_fd_get(intr_handle));
rc = -1;
break;
}
@@ -441,7 +461,8 @@ rte_intr_disable(const struct rte_intr_handle *intr_handle)
int
rte_intr_ack(const struct rte_intr_handle *intr_handle)
{
- if (intr_handle && intr_handle->type == RTE_INTR_HANDLE_VDEV)
+ if (intr_handle &&
+ rte_intr_handle_type_get(intr_handle) == RTE_INTR_HANDLE_VDEV)
return 0;
return -1;
@@ -463,7 +484,8 @@ eal_intr_process_interrupts(struct kevent *events, int nfds)
rte_spinlock_lock(&intr_lock);
TAILQ_FOREACH(src, &intr_sources, next)
- if (src->intr_handle.fd == event_fd)
+ if (rte_intr_handle_fd_get(src->intr_handle) ==
+ event_fd)
break;
if (src == NULL) {
rte_spinlock_unlock(&intr_lock);
@@ -475,7 +497,7 @@ eal_intr_process_interrupts(struct kevent *events, int nfds)
rte_spinlock_unlock(&intr_lock);
/* set the length to be read dor different handle type */
- switch (src->intr_handle.type) {
+ switch (rte_intr_handle_type_get(src->intr_handle)) {
case RTE_INTR_HANDLE_ALARM:
bytes_read = 0;
call = true;
@@ -546,7 +568,8 @@ eal_intr_process_interrupts(struct kevent *events, int nfds)
/* mark for deletion from the queue */
ke.flags = EV_DELETE;
- if (intr_source_to_kevent(&src->intr_handle, &ke) < 0) {
+ if (intr_source_to_kevent(src->intr_handle,
+ &ke) < 0) {
RTE_LOG(ERR, EAL, "Cannot convert to kevent\n");
rte_spinlock_unlock(&intr_lock);
return;
@@ -557,7 +580,9 @@ eal_intr_process_interrupts(struct kevent *events, int nfds)
*/
if (kevent(kq, &ke, 1, NULL, 0, NULL) < 0) {
RTE_LOG(ERR, EAL, "Error removing fd %d kevent, "
- "%s\n", src->intr_handle.fd,
+ "%s\n",
+ rte_intr_handle_fd_get(
+ src->intr_handle),
strerror(errno));
/* removing non-existent even is an expected
* condition in some circumstances
@@ -567,7 +592,8 @@ eal_intr_process_interrupts(struct kevent *events, int nfds)
TAILQ_REMOVE(&src->callbacks, cb, next);
if (cb->ucb_fn)
- cb->ucb_fn(&src->intr_handle, cb->cb_arg);
+ cb->ucb_fn(src->intr_handle,
+ cb->cb_arg);
free(cb);
}
}
diff --git a/lib/eal/linux/eal_interrupts.c b/lib/eal/linux/eal_interrupts.c
index 22b3b7bcd9..570eddf088 100644
--- a/lib/eal/linux/eal_interrupts.c
+++ b/lib/eal/linux/eal_interrupts.c
@@ -20,6 +20,7 @@
#include <stdbool.h>
#include <rte_common.h>
+#include <rte_epoll.h>
#include <rte_interrupts.h>
#include <rte_memory.h>
#include <rte_launch.h>
@@ -82,7 +83,7 @@ struct rte_intr_callback {
struct rte_intr_source {
TAILQ_ENTRY(rte_intr_source) next;
- struct rte_intr_handle intr_handle; /**< interrupt handle */
+ struct rte_intr_handle *intr_handle; /**< interrupt handle */
struct rte_intr_cb_list callbacks; /**< user callbacks */
uint32_t active;
};
@@ -112,7 +113,7 @@ static int
vfio_enable_intx(const struct rte_intr_handle *intr_handle) {
struct vfio_irq_set *irq_set;
char irq_set_buf[IRQ_SET_BUF_LEN];
- int len, ret;
+ int len, ret, vfio_dev_fd;
int *fd_ptr;
len = sizeof(irq_set_buf);
@@ -125,13 +126,14 @@ vfio_enable_intx(const struct rte_intr_handle *intr_handle) {
irq_set->index = VFIO_PCI_INTX_IRQ_INDEX;
irq_set->start = 0;
fd_ptr = (int *) &irq_set->data;
- *fd_ptr = intr_handle->fd;
+ *fd_ptr = rte_intr_handle_fd_get(intr_handle);
- ret = ioctl(intr_handle->vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
+ vfio_dev_fd = rte_intr_handle_dev_fd_get(intr_handle);
+ ret = ioctl(vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
if (ret) {
RTE_LOG(ERR, EAL, "Error enabling INTx interrupts for fd %d\n",
- intr_handle->fd);
+ rte_intr_handle_fd_get(intr_handle));
return -1;
}
@@ -144,11 +146,11 @@ vfio_enable_intx(const struct rte_intr_handle *intr_handle) {
irq_set->index = VFIO_PCI_INTX_IRQ_INDEX;
irq_set->start = 0;
- ret = ioctl(intr_handle->vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
+ ret = ioctl(vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
if (ret) {
RTE_LOG(ERR, EAL, "Error unmasking INTx interrupts for fd %d\n",
- intr_handle->fd);
+ rte_intr_handle_fd_get(intr_handle));
return -1;
}
return 0;
@@ -159,7 +161,7 @@ static int
vfio_disable_intx(const struct rte_intr_handle *intr_handle) {
struct vfio_irq_set *irq_set;
char irq_set_buf[IRQ_SET_BUF_LEN];
- int len, ret;
+ int len, ret, vfio_dev_fd;
len = sizeof(struct vfio_irq_set);
@@ -171,11 +173,12 @@ vfio_disable_intx(const struct rte_intr_handle *intr_handle) {
irq_set->index = VFIO_PCI_INTX_IRQ_INDEX;
irq_set->start = 0;
- ret = ioctl(intr_handle->vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
+ vfio_dev_fd = rte_intr_handle_dev_fd_get(intr_handle);
+ ret = ioctl(vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
if (ret) {
RTE_LOG(ERR, EAL, "Error masking INTx interrupts for fd %d\n",
- intr_handle->fd);
+ rte_intr_handle_fd_get(intr_handle));
return -1;
}
@@ -187,11 +190,12 @@ vfio_disable_intx(const struct rte_intr_handle *intr_handle) {
irq_set->index = VFIO_PCI_INTX_IRQ_INDEX;
irq_set->start = 0;
- ret = ioctl(intr_handle->vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
+ ret = ioctl(vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
if (ret) {
RTE_LOG(ERR, EAL,
- "Error disabling INTx interrupts for fd %d\n", intr_handle->fd);
+ "Error disabling INTx interrupts for fd %d\n",
+ rte_intr_handle_fd_get(intr_handle));
return -1;
}
return 0;
@@ -202,6 +206,7 @@ static int
vfio_ack_intx(const struct rte_intr_handle *intr_handle)
{
struct vfio_irq_set irq_set;
+ int vfio_dev_fd;
/* unmask INTx */
memset(&irq_set, 0, sizeof(irq_set));
@@ -211,9 +216,10 @@ vfio_ack_intx(const struct rte_intr_handle *intr_handle)
irq_set.index = VFIO_PCI_INTX_IRQ_INDEX;
irq_set.start = 0;
- if (ioctl(intr_handle->vfio_dev_fd, VFIO_DEVICE_SET_IRQS, &irq_set)) {
+ vfio_dev_fd = rte_intr_handle_dev_fd_get(intr_handle);
+ if (ioctl(vfio_dev_fd, VFIO_DEVICE_SET_IRQS, &irq_set)) {
RTE_LOG(ERR, EAL, "Error unmasking INTx interrupts for fd %d\n",
- intr_handle->fd);
+ rte_intr_handle_fd_get(intr_handle));
return -1;
}
return 0;
@@ -225,7 +231,7 @@ vfio_enable_msi(const struct rte_intr_handle *intr_handle) {
int len, ret;
char irq_set_buf[IRQ_SET_BUF_LEN];
struct vfio_irq_set *irq_set;
- int *fd_ptr;
+ int *fd_ptr, vfio_dev_fd;
len = sizeof(irq_set_buf);
@@ -236,13 +242,14 @@ vfio_enable_msi(const struct rte_intr_handle *intr_handle) {
irq_set->index = VFIO_PCI_MSI_IRQ_INDEX;
irq_set->start = 0;
fd_ptr = (int *) &irq_set->data;
- *fd_ptr = intr_handle->fd;
+ *fd_ptr = rte_intr_handle_fd_get(intr_handle);
- ret = ioctl(intr_handle->vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
+ vfio_dev_fd = rte_intr_handle_dev_fd_get(intr_handle);
+ ret = ioctl(vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
if (ret) {
RTE_LOG(ERR, EAL, "Error enabling MSI interrupts for fd %d\n",
- intr_handle->fd);
+ rte_intr_handle_fd_get(intr_handle));
return -1;
}
return 0;
@@ -253,7 +260,7 @@ static int
vfio_disable_msi(const struct rte_intr_handle *intr_handle) {
struct vfio_irq_set *irq_set;
char irq_set_buf[IRQ_SET_BUF_LEN];
- int len, ret;
+ int len, ret, vfio_dev_fd;
len = sizeof(struct vfio_irq_set);
@@ -264,11 +271,13 @@ vfio_disable_msi(const struct rte_intr_handle *intr_handle) {
irq_set->index = VFIO_PCI_MSI_IRQ_INDEX;
irq_set->start = 0;
- ret = ioctl(intr_handle->vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
+ vfio_dev_fd = rte_intr_handle_dev_fd_get(intr_handle);
+ ret = ioctl(vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
if (ret)
RTE_LOG(ERR, EAL,
- "Error disabling MSI interrupts for fd %d\n", intr_handle->fd);
+ "Error disabling MSI interrupts for fd %d\n",
+ rte_intr_handle_fd_get(intr_handle));
return ret;
}
@@ -279,30 +288,34 @@ vfio_enable_msix(const struct rte_intr_handle *intr_handle) {
int len, ret;
char irq_set_buf[MSIX_IRQ_SET_BUF_LEN];
struct vfio_irq_set *irq_set;
- int *fd_ptr;
+ int *fd_ptr, vfio_dev_fd, i;
len = sizeof(irq_set_buf);
irq_set = (struct vfio_irq_set *) irq_set_buf;
irq_set->argsz = len;
/* 0 < irq_set->count < RTE_MAX_RXTX_INTR_VEC_ID + 1 */
- irq_set->count = intr_handle->max_intr ?
- (intr_handle->max_intr > RTE_MAX_RXTX_INTR_VEC_ID + 1 ?
- RTE_MAX_RXTX_INTR_VEC_ID + 1 : intr_handle->max_intr) : 1;
+ irq_set->count = rte_intr_handle_max_intr_get(intr_handle) ?
+ (rte_intr_handle_max_intr_get(intr_handle) >
+ RTE_MAX_RXTX_INTR_VEC_ID + 1 ? RTE_MAX_RXTX_INTR_VEC_ID + 1 :
+ rte_intr_handle_max_intr_get(intr_handle)) : 1;
+
irq_set->flags = VFIO_IRQ_SET_DATA_EVENTFD | VFIO_IRQ_SET_ACTION_TRIGGER;
irq_set->index = VFIO_PCI_MSIX_IRQ_INDEX;
irq_set->start = 0;
fd_ptr = (int *) &irq_set->data;
/* INTR vector offset 0 reserve for non-efds mapping */
- fd_ptr[RTE_INTR_VEC_ZERO_OFFSET] = intr_handle->fd;
- memcpy(&fd_ptr[RTE_INTR_VEC_RXTX_OFFSET], intr_handle->efds,
- sizeof(*intr_handle->efds) * intr_handle->nb_efd);
+ fd_ptr[RTE_INTR_VEC_ZERO_OFFSET] = rte_intr_handle_fd_get(intr_handle);
+ for (i = 0; i < rte_intr_handle_nb_efd_get(intr_handle); i++)
+ fd_ptr[RTE_INTR_VEC_RXTX_OFFSET + i] =
+ rte_intr_handle_efds_index_get(intr_handle, i);
- ret = ioctl(intr_handle->vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
+ vfio_dev_fd = rte_intr_handle_dev_fd_get(intr_handle);
+ ret = ioctl(vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
if (ret) {
RTE_LOG(ERR, EAL, "Error enabling MSI-X interrupts for fd %d\n",
- intr_handle->fd);
+ rte_intr_handle_fd_get(intr_handle));
return -1;
}
@@ -314,7 +327,7 @@ static int
vfio_disable_msix(const struct rte_intr_handle *intr_handle) {
struct vfio_irq_set *irq_set;
char irq_set_buf[MSIX_IRQ_SET_BUF_LEN];
- int len, ret;
+ int len, ret, vfio_dev_fd;
len = sizeof(struct vfio_irq_set);
@@ -325,11 +338,13 @@ vfio_disable_msix(const struct rte_intr_handle *intr_handle) {
irq_set->index = VFIO_PCI_MSIX_IRQ_INDEX;
irq_set->start = 0;
- ret = ioctl(intr_handle->vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
+ vfio_dev_fd = rte_intr_handle_dev_fd_get(intr_handle);
+ ret = ioctl(vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
if (ret)
RTE_LOG(ERR, EAL,
- "Error disabling MSI-X interrupts for fd %d\n", intr_handle->fd);
+ "Error disabling MSI-X interrupts for fd %d\n",
+ rte_intr_handle_fd_get(intr_handle));
return ret;
}
@@ -342,7 +357,7 @@ vfio_enable_req(const struct rte_intr_handle *intr_handle)
int len, ret;
char irq_set_buf[IRQ_SET_BUF_LEN];
struct vfio_irq_set *irq_set;
- int *fd_ptr;
+ int *fd_ptr, vfio_dev_fd;
len = sizeof(irq_set_buf);
@@ -354,13 +369,14 @@ vfio_enable_req(const struct rte_intr_handle *intr_handle)
irq_set->index = VFIO_PCI_REQ_IRQ_INDEX;
irq_set->start = 0;
fd_ptr = (int *) &irq_set->data;
- *fd_ptr = intr_handle->fd;
+ *fd_ptr = rte_intr_handle_fd_get(intr_handle);
- ret = ioctl(intr_handle->vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
+ vfio_dev_fd = rte_intr_handle_dev_fd_get(intr_handle);
+ ret = ioctl(vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
if (ret) {
RTE_LOG(ERR, EAL, "Error enabling req interrupts for fd %d\n",
- intr_handle->fd);
+ rte_intr_handle_fd_get(intr_handle));
return -1;
}
@@ -373,7 +389,7 @@ vfio_disable_req(const struct rte_intr_handle *intr_handle)
{
struct vfio_irq_set *irq_set;
char irq_set_buf[IRQ_SET_BUF_LEN];
- int len, ret;
+ int len, ret, vfio_dev_fd;
len = sizeof(struct vfio_irq_set);
@@ -384,11 +400,12 @@ vfio_disable_req(const struct rte_intr_handle *intr_handle)
irq_set->index = VFIO_PCI_REQ_IRQ_INDEX;
irq_set->start = 0;
- ret = ioctl(intr_handle->vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
+ vfio_dev_fd = rte_intr_handle_dev_fd_get(intr_handle);
+ ret = ioctl(vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
if (ret)
RTE_LOG(ERR, EAL, "Error disabling req interrupts for fd %d\n",
- intr_handle->fd);
+ rte_intr_handle_fd_get(intr_handle));
return ret;
}
@@ -399,20 +416,22 @@ static int
uio_intx_intr_disable(const struct rte_intr_handle *intr_handle)
{
unsigned char command_high;
+ int uio_cfg_fd;
/* use UIO config file descriptor for uio_pci_generic */
- if (pread(intr_handle->uio_cfg_fd, &command_high, 1, 5) != 1) {
+ uio_cfg_fd = rte_intr_handle_dev_fd_get(intr_handle);
+ if (pread(uio_cfg_fd, &command_high, 1, 5) != 1) {
RTE_LOG(ERR, EAL,
"Error reading interrupts status for fd %d\n",
- intr_handle->uio_cfg_fd);
+ uio_cfg_fd);
return -1;
}
/* disable interrupts */
command_high |= 0x4;
- if (pwrite(intr_handle->uio_cfg_fd, &command_high, 1, 5) != 1) {
+ if (pwrite(uio_cfg_fd, &command_high, 1, 5) != 1) {
RTE_LOG(ERR, EAL,
"Error disabling interrupts for fd %d\n",
- intr_handle->uio_cfg_fd);
+ uio_cfg_fd);
return -1;
}
@@ -423,20 +442,22 @@ static int
uio_intx_intr_enable(const struct rte_intr_handle *intr_handle)
{
unsigned char command_high;
+ int uio_cfg_fd;
/* use UIO config file descriptor for uio_pci_generic */
- if (pread(intr_handle->uio_cfg_fd, &command_high, 1, 5) != 1) {
+ uio_cfg_fd = rte_intr_handle_dev_fd_get(intr_handle);
+ if (pread(uio_cfg_fd, &command_high, 1, 5) != 1) {
RTE_LOG(ERR, EAL,
"Error reading interrupts status for fd %d\n",
- intr_handle->uio_cfg_fd);
+ uio_cfg_fd);
return -1;
}
/* enable interrupts */
command_high &= ~0x4;
- if (pwrite(intr_handle->uio_cfg_fd, &command_high, 1, 5) != 1) {
+ if (pwrite(uio_cfg_fd, &command_high, 1, 5) != 1) {
RTE_LOG(ERR, EAL,
"Error enabling interrupts for fd %d\n",
- intr_handle->uio_cfg_fd);
+ uio_cfg_fd);
return -1;
}
@@ -448,10 +469,11 @@ uio_intr_disable(const struct rte_intr_handle *intr_handle)
{
const int value = 0;
- if (write(intr_handle->fd, &value, sizeof(value)) < 0) {
+ if (write(rte_intr_handle_fd_get(intr_handle), &value,
+ sizeof(value)) < 0) {
RTE_LOG(ERR, EAL,
"Error disabling interrupts for fd %d (%s)\n",
- intr_handle->fd, strerror(errno));
+ rte_intr_handle_fd_get(intr_handle), strerror(errno));
return -1;
}
return 0;
@@ -462,10 +484,11 @@ uio_intr_enable(const struct rte_intr_handle *intr_handle)
{
const int value = 1;
- if (write(intr_handle->fd, &value, sizeof(value)) < 0) {
+ if (write(rte_intr_handle_fd_get(intr_handle), &value,
+ sizeof(value)) < 0) {
RTE_LOG(ERR, EAL,
"Error enabling interrupts for fd %d (%s)\n",
- intr_handle->fd, strerror(errno));
+ rte_intr_handle_fd_get(intr_handle), strerror(errno));
return -1;
}
return 0;
@@ -482,7 +505,8 @@ rte_intr_callback_register(const struct rte_intr_handle *intr_handle,
wake_thread = 0;
/* first do parameter checking */
- if (intr_handle == NULL || intr_handle->fd < 0 || cb == NULL) {
+ if (intr_handle == NULL || rte_intr_handle_fd_get(intr_handle) < 0 ||
+ cb == NULL) {
RTE_LOG(ERR, EAL,
"Registering with invalid input parameter\n");
return -EINVAL;
@@ -503,7 +527,8 @@ rte_intr_callback_register(const struct rte_intr_handle *intr_handle,
/* check if there is at least one callback registered for the fd */
TAILQ_FOREACH(src, &intr_sources, next) {
- if (src->intr_handle.fd == intr_handle->fd) {
+ if (rte_intr_handle_fd_get(src->intr_handle) ==
+ rte_intr_handle_fd_get(intr_handle)) {
/* we had no interrupts for this */
if (TAILQ_EMPTY(&src->callbacks))
wake_thread = 1;
@@ -522,12 +547,22 @@ rte_intr_callback_register(const struct rte_intr_handle *intr_handle,
free(callback);
ret = -ENOMEM;
} else {
- src->intr_handle = *intr_handle;
- TAILQ_INIT(&src->callbacks);
- TAILQ_INSERT_TAIL(&(src->callbacks), callback, next);
- TAILQ_INSERT_TAIL(&intr_sources, src, next);
- wake_thread = 1;
- ret = 0;
+ src->intr_handle = rte_intr_handle_instance_alloc(
+ RTE_INTR_HANDLE_DEFAULT_SIZE, false);
+ if (src->intr_handle == NULL) {
+ RTE_LOG(ERR, EAL, "Can not create intr instance\n");
+ free(callback);
+ ret = -ENOMEM;
+ } else {
+ rte_intr_handle_instance_index_set(
+ src->intr_handle, intr_handle, 0);
+ TAILQ_INIT(&src->callbacks);
+ TAILQ_INSERT_TAIL(&(src->callbacks), callback,
+ next);
+ TAILQ_INSERT_TAIL(&intr_sources, src, next);
+ wake_thread = 1;
+ ret = 0;
+ }
}
}
@@ -555,7 +590,7 @@ rte_intr_callback_unregister_pending(const struct rte_intr_handle *intr_handle,
struct rte_intr_callback *cb, *next;
/* do parameter checking first */
- if (intr_handle == NULL || intr_handle->fd < 0) {
+ if (intr_handle == NULL || rte_intr_handle_fd_get(intr_handle) < 0) {
RTE_LOG(ERR, EAL,
"Unregistering with invalid input parameter\n");
return -EINVAL;
@@ -565,7 +600,8 @@ rte_intr_callback_unregister_pending(const struct rte_intr_handle *intr_handle,
/* check if the insterrupt source for the fd is existent */
TAILQ_FOREACH(src, &intr_sources, next)
- if (src->intr_handle.fd == intr_handle->fd)
+ if (rte_intr_handle_fd_get(src->intr_handle) ==
+ rte_intr_handle_fd_get(intr_handle))
break;
/* No interrupt source registered for the fd */
@@ -605,7 +641,7 @@ rte_intr_callback_unregister(const struct rte_intr_handle *intr_handle,
struct rte_intr_callback *cb, *next;
/* do parameter checking first */
- if (intr_handle == NULL || intr_handle->fd < 0) {
+ if (intr_handle == NULL || rte_intr_handle_fd_get(intr_handle) < 0) {
RTE_LOG(ERR, EAL,
"Unregistering with invalid input parameter\n");
return -EINVAL;
@@ -615,7 +651,8 @@ rte_intr_callback_unregister(const struct rte_intr_handle *intr_handle,
/* check if the insterrupt source for the fd is existent */
TAILQ_FOREACH(src, &intr_sources, next)
- if (src->intr_handle.fd == intr_handle->fd)
+ if (rte_intr_handle_fd_get(src->intr_handle) ==
+ rte_intr_handle_fd_get(intr_handle))
break;
/* No interrupt source registered for the fd */
@@ -646,6 +683,7 @@ rte_intr_callback_unregister(const struct rte_intr_handle *intr_handle,
/* all callbacks for that source are removed. */
if (TAILQ_EMPTY(&src->callbacks)) {
TAILQ_REMOVE(&intr_sources, src, next);
+ rte_intr_handle_instance_free(src->intr_handle);
free(src);
}
}
@@ -677,22 +715,23 @@ rte_intr_callback_unregister_sync(const struct rte_intr_handle *intr_handle,
int
rte_intr_enable(const struct rte_intr_handle *intr_handle)
{
- int rc = 0;
+ int rc = 0, uio_cfg_fd;
if (intr_handle == NULL)
return -1;
- if (intr_handle->type == RTE_INTR_HANDLE_VDEV) {
+ if (rte_intr_handle_type_get(intr_handle) == RTE_INTR_HANDLE_VDEV) {
rc = 0;
goto out;
}
- if (intr_handle->fd < 0 || intr_handle->uio_cfg_fd < 0) {
+ uio_cfg_fd = rte_intr_handle_dev_fd_get(intr_handle);
+ if (rte_intr_handle_fd_get(intr_handle) < 0 || uio_cfg_fd < 0) {
rc = -1;
goto out;
}
- switch (intr_handle->type){
+ switch (rte_intr_handle_type_get(intr_handle)) {
/* write to the uio fd to enable the interrupt */
case RTE_INTR_HANDLE_UIO:
if (uio_intr_enable(intr_handle))
@@ -734,7 +773,7 @@ rte_intr_enable(const struct rte_intr_handle *intr_handle)
default:
RTE_LOG(ERR, EAL,
"Unknown handle type of fd %d\n",
- intr_handle->fd);
+ rte_intr_handle_fd_get(intr_handle));
rc = -1;
break;
}
@@ -757,13 +796,18 @@ rte_intr_enable(const struct rte_intr_handle *intr_handle)
int
rte_intr_ack(const struct rte_intr_handle *intr_handle)
{
- if (intr_handle && intr_handle->type == RTE_INTR_HANDLE_VDEV)
+ int uio_cfg_fd;
+
+ if (intr_handle && rte_intr_handle_type_get(intr_handle) ==
+ RTE_INTR_HANDLE_VDEV)
return 0;
- if (!intr_handle || intr_handle->fd < 0 || intr_handle->uio_cfg_fd < 0)
+ uio_cfg_fd = rte_intr_handle_dev_fd_get(intr_handle);
+ if (!intr_handle || rte_intr_handle_fd_get(intr_handle) < 0 ||
+ uio_cfg_fd < 0)
return -1;
- switch (intr_handle->type) {
+ switch (rte_intr_handle_type_get(intr_handle)) {
/* Both acking and enabling are same for UIO */
case RTE_INTR_HANDLE_UIO:
if (uio_intr_enable(intr_handle))
@@ -796,7 +840,7 @@ rte_intr_ack(const struct rte_intr_handle *intr_handle)
/* unknown handle type */
default:
RTE_LOG(ERR, EAL, "Unknown handle type of fd %d\n",
- intr_handle->fd);
+ rte_intr_handle_fd_get(intr_handle));
return -1;
}
@@ -806,22 +850,23 @@ rte_intr_ack(const struct rte_intr_handle *intr_handle)
int
rte_intr_disable(const struct rte_intr_handle *intr_handle)
{
- int rc = 0;
+ int rc = 0, uio_cfg_fd;
if (intr_handle == NULL)
return -1;
- if (intr_handle->type == RTE_INTR_HANDLE_VDEV) {
+ if (rte_intr_handle_type_get(intr_handle) == RTE_INTR_HANDLE_VDEV) {
rc = 0;
goto out;
}
- if (intr_handle->fd < 0 || intr_handle->uio_cfg_fd < 0) {
+ uio_cfg_fd = rte_intr_handle_dev_fd_get(intr_handle);
+ if (rte_intr_handle_fd_get(intr_handle) < 0 || uio_cfg_fd < 0) {
rc = -1;
goto out;
}
- switch (intr_handle->type){
+ switch (rte_intr_handle_type_get(intr_handle)) {
/* write to the uio fd to disable the interrupt */
case RTE_INTR_HANDLE_UIO:
if (uio_intr_disable(intr_handle))
@@ -863,7 +908,7 @@ rte_intr_disable(const struct rte_intr_handle *intr_handle)
default:
RTE_LOG(ERR, EAL,
"Unknown handle type of fd %d\n",
- intr_handle->fd);
+ rte_intr_handle_fd_get(intr_handle));
rc = -1;
break;
}
@@ -896,7 +941,7 @@ eal_intr_process_interrupts(struct epoll_event *events, int nfds)
}
rte_spinlock_lock(&intr_lock);
TAILQ_FOREACH(src, &intr_sources, next)
- if (src->intr_handle.fd ==
+ if (rte_intr_handle_fd_get(src->intr_handle) ==
events[n].data.fd)
break;
if (src == NULL){
@@ -909,7 +954,7 @@ eal_intr_process_interrupts(struct epoll_event *events, int nfds)
rte_spinlock_unlock(&intr_lock);
/* set the length to be read dor different handle type */
- switch (src->intr_handle.type) {
+ switch (rte_intr_handle_type_get(src->intr_handle)) {
case RTE_INTR_HANDLE_UIO:
case RTE_INTR_HANDLE_UIO_INTX:
bytes_read = sizeof(buf.uio_intr_count);
@@ -973,6 +1018,7 @@ eal_intr_process_interrupts(struct epoll_event *events, int nfds)
TAILQ_REMOVE(&src->callbacks, cb, next);
free(cb);
}
+ rte_intr_handle_instance_free(src->intr_handle);
free(src);
return -1;
} else if (bytes_read == 0)
@@ -1012,7 +1058,8 @@ eal_intr_process_interrupts(struct epoll_event *events, int nfds)
if (cb->pending_delete) {
TAILQ_REMOVE(&src->callbacks, cb, next);
if (cb->ucb_fn)
- cb->ucb_fn(&src->intr_handle, cb->cb_arg);
+ cb->ucb_fn(src->intr_handle,
+ cb->cb_arg);
free(cb);
rv++;
}
@@ -1021,6 +1068,7 @@ eal_intr_process_interrupts(struct epoll_event *events, int nfds)
/* all callbacks for that source are removed. */
if (TAILQ_EMPTY(&src->callbacks)) {
TAILQ_REMOVE(&intr_sources, src, next);
+ rte_intr_handle_instance_free(src->intr_handle);
free(src);
}
@@ -1123,16 +1171,18 @@ eal_intr_thread_main(__rte_unused void *arg)
continue; /* skip those with no callbacks */
memset(&ev, 0, sizeof(ev));
ev.events = EPOLLIN | EPOLLPRI | EPOLLRDHUP | EPOLLHUP;
- ev.data.fd = src->intr_handle.fd;
+ ev.data.fd = rte_intr_handle_fd_get(src->intr_handle);
/**
* add all the uio device file descriptor
* into wait list.
*/
if (epoll_ctl(pfd, EPOLL_CTL_ADD,
- src->intr_handle.fd, &ev) < 0){
+ rte_intr_handle_fd_get(src->intr_handle),
+ &ev) < 0) {
rte_panic("Error adding fd %d epoll_ctl, %s\n",
- src->intr_handle.fd, strerror(errno));
+ rte_intr_handle_fd_get(src->intr_handle),
+ strerror(errno));
}
else
numfds++;
@@ -1185,7 +1235,7 @@ eal_intr_proc_rxtx_intr(int fd, const struct rte_intr_handle *intr_handle)
int bytes_read = 0;
int nbytes;
- switch (intr_handle->type) {
+ switch (rte_intr_handle_type_get(intr_handle)) {
case RTE_INTR_HANDLE_UIO:
case RTE_INTR_HANDLE_UIO_INTX:
bytes_read = sizeof(buf.uio_intr_count);
@@ -1198,7 +1248,7 @@ eal_intr_proc_rxtx_intr(int fd, const struct rte_intr_handle *intr_handle)
break;
#endif
case RTE_INTR_HANDLE_VDEV:
- bytes_read = intr_handle->efd_counter_size;
+ bytes_read = rte_intr_handle_efd_counter_size_get(intr_handle);
/* For vdev, number of bytes to read is set by driver */
break;
case RTE_INTR_HANDLE_EXT:
@@ -1419,8 +1469,8 @@ rte_intr_rx_ctl(struct rte_intr_handle *intr_handle, int epfd,
efd_idx = (vec >= RTE_INTR_VEC_RXTX_OFFSET) ?
(vec - RTE_INTR_VEC_RXTX_OFFSET) : vec;
- if (!intr_handle || intr_handle->nb_efd == 0 ||
- efd_idx >= intr_handle->nb_efd) {
+ if (!intr_handle || rte_intr_handle_nb_efd_get(intr_handle) == 0 ||
+ efd_idx >= (unsigned int)rte_intr_handle_nb_efd_get(intr_handle)) {
RTE_LOG(ERR, EAL, "Wrong intr vector number.\n");
return -EPERM;
}
@@ -1428,7 +1478,7 @@ rte_intr_rx_ctl(struct rte_intr_handle *intr_handle, int epfd,
switch (op) {
case RTE_INTR_EVENT_ADD:
epfd_op = EPOLL_CTL_ADD;
- rev = &intr_handle->elist[efd_idx];
+ rev = rte_intr_handle_elist_index_get(intr_handle, efd_idx);
if (__atomic_load_n(&rev->status,
__ATOMIC_RELAXED) != RTE_EPOLL_INVALID) {
RTE_LOG(INFO, EAL, "Event already been added.\n");
@@ -1442,7 +1492,9 @@ rte_intr_rx_ctl(struct rte_intr_handle *intr_handle, int epfd,
epdata->cb_fun = (rte_intr_event_cb_t)eal_intr_proc_rxtx_intr;
epdata->cb_arg = (void *)intr_handle;
rc = rte_epoll_ctl(epfd, epfd_op,
- intr_handle->efds[efd_idx], rev);
+ rte_intr_handle_efds_index_get(intr_handle,
+ efd_idx),
+ rev);
if (!rc)
RTE_LOG(DEBUG, EAL,
"efd %d associated with vec %d added on epfd %d"
@@ -1452,7 +1504,7 @@ rte_intr_rx_ctl(struct rte_intr_handle *intr_handle, int epfd,
break;
case RTE_INTR_EVENT_DEL:
epfd_op = EPOLL_CTL_DEL;
- rev = &intr_handle->elist[efd_idx];
+ rev = rte_intr_handle_elist_index_get(intr_handle, efd_idx);
if (__atomic_load_n(&rev->status,
__ATOMIC_RELAXED) == RTE_EPOLL_INVALID) {
RTE_LOG(INFO, EAL, "Event does not exist.\n");
@@ -1477,8 +1529,9 @@ rte_intr_free_epoll_fd(struct rte_intr_handle *intr_handle)
uint32_t i;
struct rte_epoll_event *rev;
- for (i = 0; i < intr_handle->nb_efd; i++) {
- rev = &intr_handle->elist[i];
+ for (i = 0; i < (uint32_t)rte_intr_handle_nb_efd_get(intr_handle);
+ i++) {
+ rev = rte_intr_handle_elist_index_get(intr_handle, i);
if (__atomic_load_n(&rev->status,
__ATOMIC_RELAXED) == RTE_EPOLL_INVALID)
continue;
@@ -1498,7 +1551,8 @@ rte_intr_efd_enable(struct rte_intr_handle *intr_handle, uint32_t nb_efd)
assert(nb_efd != 0);
- if (intr_handle->type == RTE_INTR_HANDLE_VFIO_MSIX) {
+ if (rte_intr_handle_type_get(intr_handle) ==
+ RTE_INTR_HANDLE_VFIO_MSIX) {
for (i = 0; i < n; i++) {
fd = eventfd(0, EFD_NONBLOCK | EFD_CLOEXEC);
if (fd < 0) {
@@ -1507,21 +1561,34 @@ rte_intr_efd_enable(struct rte_intr_handle *intr_handle, uint32_t nb_efd)
errno, strerror(errno));
return -errno;
}
- intr_handle->efds[i] = fd;
+
+ if (rte_intr_handle_efds_index_set(intr_handle, i, fd))
+ return -rte_errno;
}
- intr_handle->nb_efd = n;
- intr_handle->max_intr = NB_OTHER_INTR + n;
- } else if (intr_handle->type == RTE_INTR_HANDLE_VDEV) {
+
+ if (rte_intr_handle_nb_efd_set(intr_handle, n))
+ return -rte_errno;
+
+ if (rte_intr_handle_max_intr_set(intr_handle,
+ NB_OTHER_INTR + n))
+ return -rte_errno;
+ } else if (rte_intr_handle_type_get(intr_handle) ==
+ RTE_INTR_HANDLE_VDEV) {
/* only check, initialization would be done in vdev driver.*/
- if (intr_handle->efd_counter_size >
- sizeof(union rte_intr_read_buffer)) {
+ if ((uint64_t)rte_intr_handle_efd_counter_size_get(intr_handle)
+ > sizeof(union rte_intr_read_buffer)) {
RTE_LOG(ERR, EAL, "the efd_counter_size is oversized");
return -EINVAL;
}
} else {
- intr_handle->efds[0] = intr_handle->fd;
- intr_handle->nb_efd = RTE_MIN(nb_efd, 1U);
- intr_handle->max_intr = NB_OTHER_INTR;
+ if (rte_intr_handle_efds_index_set(intr_handle, 0,
+ rte_intr_handle_fd_get(intr_handle)))
+ return -rte_errno;
+ if (rte_intr_handle_nb_efd_set(intr_handle,
+ RTE_MIN(nb_efd, 1U)))
+ return -rte_errno;
+ if (rte_intr_handle_max_intr_set(intr_handle, NB_OTHER_INTR))
+ return -rte_errno;
}
return 0;
@@ -1533,18 +1600,20 @@ rte_intr_efd_disable(struct rte_intr_handle *intr_handle)
uint32_t i;
rte_intr_free_epoll_fd(intr_handle);
- if (intr_handle->max_intr > intr_handle->nb_efd) {
- for (i = 0; i < intr_handle->nb_efd; i++)
- close(intr_handle->efds[i]);
+ if (rte_intr_handle_max_intr_get(intr_handle) >
+ rte_intr_handle_nb_efd_get(intr_handle)) {
+ for (i = 0; i <
+ (uint32_t)rte_intr_handle_nb_efd_get(intr_handle); i++)
+ close(rte_intr_handle_efds_index_get(intr_handle, i));
}
- intr_handle->nb_efd = 0;
- intr_handle->max_intr = 0;
+ rte_intr_handle_nb_efd_set(intr_handle, 0);
+ rte_intr_handle_max_intr_set(intr_handle, 0);
}
int
rte_intr_dp_is_en(struct rte_intr_handle *intr_handle)
{
- return !(!intr_handle->nb_efd);
+ return !(!rte_intr_handle_nb_efd_get(intr_handle));
}
int
@@ -1553,16 +1622,17 @@ rte_intr_allow_others(struct rte_intr_handle *intr_handle)
if (!rte_intr_dp_is_en(intr_handle))
return 1;
else
- return !!(intr_handle->max_intr - intr_handle->nb_efd);
+ return !!(rte_intr_handle_max_intr_get(intr_handle) -
+ rte_intr_handle_nb_efd_get(intr_handle));
}
int
rte_intr_cap_multiple(struct rte_intr_handle *intr_handle)
{
- if (intr_handle->type == RTE_INTR_HANDLE_VFIO_MSIX)
+ if (rte_intr_handle_type_get(intr_handle) == RTE_INTR_HANDLE_VFIO_MSIX)
return 1;
- if (intr_handle->type == RTE_INTR_HANDLE_VDEV)
+ if (rte_intr_handle_type_get(intr_handle) == RTE_INTR_HANDLE_VDEV)
return 1;
return 0;
--
2.18.0
^ permalink raw reply [flat|nested] 152+ messages in thread
* [dpdk-dev] [PATCH v1 4/7] test/interrupt: apply get set interrupt handle APIs
2021-09-03 12:40 ` [dpdk-dev] [PATCH v1 0/7] make rte_intr_handle internal Harman Kalra
` (2 preceding siblings ...)
2021-09-03 12:40 ` [dpdk-dev] [PATCH v1 3/7] eal/interrupts: avoid direct access to interrupt handle Harman Kalra
@ 2021-09-03 12:40 ` Harman Kalra
2021-09-03 12:41 ` [dpdk-dev] [PATCH v1 5/7] drivers: remove direct access to interrupt handle fields Harman Kalra
` (4 subsequent siblings)
8 siblings, 0 replies; 152+ messages in thread
From: Harman Kalra @ 2021-09-03 12:40 UTC (permalink / raw)
To: dev, Harman Kalra
Updating the interrupt test suite to make use of the interrupt
handle get/set APIs.
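The conversion follows the same pattern as the library changes; roughly
(sketch only, one handle shown):

	/* before: static array with direct field writes */
	intr_handles[TEST_INTERRUPT_HANDLE_VALID].fd = pfds.readfd;

	/* after: instances allocated at init time, fields set via the APIs */
	test_intr_handle = rte_intr_handle_instance_index_get(intr_handles,
					TEST_INTERRUPT_HANDLE_VALID);
	if (rte_intr_handle_fd_set(test_intr_handle, pfds.readfd))
		return -1;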
Signed-off-by: Harman Kalra <hkalra@marvell.com>
---
app/test/test_interrupts.c | 237 ++++++++++++++++++++++++-------------
1 file changed, 152 insertions(+), 85 deletions(-)
diff --git a/app/test/test_interrupts.c b/app/test/test_interrupts.c
index 233b14a70b..289bca66dd 100644
--- a/app/test/test_interrupts.c
+++ b/app/test/test_interrupts.c
@@ -27,7 +27,7 @@ enum test_interrupt_handle_type {
/* flag of if callback is called */
static volatile int flag;
-static struct rte_intr_handle intr_handles[TEST_INTERRUPT_HANDLE_MAX];
+static struct rte_intr_handle *intr_handles;
static enum test_interrupt_handle_type test_intr_type =
TEST_INTERRUPT_HANDLE_MAX;
@@ -50,7 +50,7 @@ static union intr_pipefds pfds;
static inline int
test_interrupt_handle_sanity_check(struct rte_intr_handle *intr_handle)
{
- if (!intr_handle || intr_handle->fd < 0)
+ if (!intr_handle || rte_intr_handle_fd_get(intr_handle) < 0)
return -1;
return 0;
@@ -62,31 +62,70 @@ test_interrupt_handle_sanity_check(struct rte_intr_handle *intr_handle)
static int
test_interrupt_init(void)
{
+ struct rte_intr_handle *test_intr_handle;
+
if (pipe(pfds.pipefd) < 0)
return -1;
- intr_handles[TEST_INTERRUPT_HANDLE_INVALID].fd = -1;
- intr_handles[TEST_INTERRUPT_HANDLE_INVALID].type =
- RTE_INTR_HANDLE_UNKNOWN;
+ intr_handles = rte_intr_handle_instance_alloc(TEST_INTERRUPT_HANDLE_MAX,
+ false);
+ if (!intr_handles)
+ return -1;
- intr_handles[TEST_INTERRUPT_HANDLE_VALID].fd = pfds.readfd;
- intr_handles[TEST_INTERRUPT_HANDLE_VALID].type =
- RTE_INTR_HANDLE_UNKNOWN;
+ test_intr_handle = rte_intr_handle_instance_index_get(intr_handles,
+ TEST_INTERRUPT_HANDLE_INVALID);
+ if (!test_intr_handle)
+ return -1;
+ if (rte_intr_handle_fd_set(test_intr_handle, -1))
+ return -1;
+ if (rte_intr_handle_type_set(test_intr_handle, RTE_INTR_HANDLE_UNKNOWN))
+ return -1;
- intr_handles[TEST_INTERRUPT_HANDLE_VALID_UIO].fd = pfds.readfd;
- intr_handles[TEST_INTERRUPT_HANDLE_VALID_UIO].type =
- RTE_INTR_HANDLE_UIO;
+ test_intr_handle = rte_intr_handle_instance_index_get(intr_handles,
+ TEST_INTERRUPT_HANDLE_VALID);
+ if (!test_intr_handle)
+ return -1;
+ if (rte_intr_handle_fd_set(test_intr_handle, pfds.readfd))
+ return -1;
+ if (rte_intr_handle_type_set(test_intr_handle, RTE_INTR_HANDLE_UNKNOWN))
+ return -1;
+
+ test_intr_handle = rte_intr_handle_instance_index_get(intr_handles,
+ TEST_INTERRUPT_HANDLE_VALID_UIO);
+ if (!test_intr_handle)
+ return -1;
+ if (rte_intr_handle_fd_set(test_intr_handle, pfds.readfd))
+ return -1;
+ if (rte_intr_handle_type_set(test_intr_handle, RTE_INTR_HANDLE_UIO))
+ return -1;
- intr_handles[TEST_INTERRUPT_HANDLE_VALID_ALARM].fd = pfds.readfd;
- intr_handles[TEST_INTERRUPT_HANDLE_VALID_ALARM].type =
- RTE_INTR_HANDLE_ALARM;
+ test_intr_handle = rte_intr_handle_instance_index_get(intr_handles,
+ TEST_INTERRUPT_HANDLE_VALID_ALARM);
+ if (!test_intr_handle)
+ return -1;
+ if (rte_intr_handle_fd_set(test_intr_handle, pfds.readfd))
+ return -1;
+ if (rte_intr_handle_type_set(test_intr_handle, RTE_INTR_HANDLE_ALARM))
+ return -1;
- intr_handles[TEST_INTERRUPT_HANDLE_VALID_DEV_EVENT].fd = pfds.readfd;
- intr_handles[TEST_INTERRUPT_HANDLE_VALID_DEV_EVENT].type =
- RTE_INTR_HANDLE_DEV_EVENT;
+ test_intr_handle = rte_intr_handle_instance_index_get(intr_handles,
+ TEST_INTERRUPT_HANDLE_VALID_DEV_EVENT);
+ if (!test_intr_handle)
+ return -1;
+ if (rte_intr_handle_fd_set(test_intr_handle, pfds.readfd))
+ return -1;
+ if (rte_intr_handle_type_set(test_intr_handle,
+ RTE_INTR_HANDLE_DEV_EVENT))
+ return -1;
- intr_handles[TEST_INTERRUPT_HANDLE_CASE1].fd = pfds.writefd;
- intr_handles[TEST_INTERRUPT_HANDLE_CASE1].type = RTE_INTR_HANDLE_UIO;
+ test_intr_handle = rte_intr_handle_instance_index_get(intr_handles,
+ TEST_INTERRUPT_HANDLE_CASE1);
+ if (!test_intr_handle)
+ return -1;
+ if (rte_intr_handle_fd_set(test_intr_handle, pfds.writefd))
+ return -1;
+ if (rte_intr_handle_type_set(test_intr_handle, RTE_INTR_HANDLE_UIO))
+ return -1;
return 0;
}
@@ -97,6 +136,7 @@ test_interrupt_init(void)
static int
test_interrupt_deinit(void)
{
+ rte_intr_handle_instance_free(intr_handles);
close(pfds.pipefd[0]);
close(pfds.pipefd[1]);
@@ -125,8 +165,10 @@ test_interrupt_handle_compare(struct rte_intr_handle *intr_handle_l,
if (!intr_handle_l || !intr_handle_r)
return -1;
- if (intr_handle_l->fd != intr_handle_r->fd ||
- intr_handle_l->type != intr_handle_r->type)
+ if (rte_intr_handle_fd_get(intr_handle_l) !=
+ rte_intr_handle_fd_get(intr_handle_r) ||
+ rte_intr_handle_type_get(intr_handle_l) !=
+ rte_intr_handle_type_get(intr_handle_r))
return -1;
return 0;
@@ -178,6 +220,8 @@ static void
test_interrupt_callback(void *arg)
{
struct rte_intr_handle *intr_handle = arg;
+ struct rte_intr_handle *test_intr_handle;
+
if (test_intr_type >= TEST_INTERRUPT_HANDLE_MAX) {
printf("invalid interrupt type\n");
flag = -1;
@@ -198,8 +242,9 @@ test_interrupt_callback(void *arg)
return;
}
- if (test_interrupt_handle_compare(intr_handle,
- &(intr_handles[test_intr_type])) == 0)
+ test_intr_handle = rte_intr_handle_instance_index_get(intr_handles,
+ test_intr_type);
+ if (test_interrupt_handle_compare(intr_handle, test_intr_handle) == 0)
flag = 1;
}
@@ -223,7 +268,7 @@ test_interrupt_callback_1(void *arg)
static int
test_interrupt_enable(void)
{
- struct rte_intr_handle test_intr_handle;
+ struct rte_intr_handle *test_intr_handle;
/* check with null intr_handle */
if (rte_intr_enable(NULL) == 0) {
@@ -232,46 +277,52 @@ test_interrupt_enable(void)
}
/* check with invalid intr_handle */
- test_intr_handle = intr_handles[TEST_INTERRUPT_HANDLE_INVALID];
- if (rte_intr_enable(&test_intr_handle) == 0) {
+ test_intr_handle = rte_intr_handle_instance_index_get(intr_handles,
+ TEST_INTERRUPT_HANDLE_INVALID);
+ if (rte_intr_enable(test_intr_handle) == 0) {
printf("unexpectedly enable invalid intr_handle "
"successfully\n");
return -1;
}
/* check with valid intr_handle */
- test_intr_handle = intr_handles[TEST_INTERRUPT_HANDLE_VALID];
- if (rte_intr_enable(&test_intr_handle) == 0) {
+ test_intr_handle = rte_intr_handle_instance_index_get(intr_handles,
+ TEST_INTERRUPT_HANDLE_VALID);
+ if (rte_intr_enable(test_intr_handle) == 0) {
printf("unexpectedly enable a specific intr_handle "
"successfully\n");
return -1;
}
/* check with specific valid intr_handle */
- test_intr_handle = intr_handles[TEST_INTERRUPT_HANDLE_VALID_ALARM];
- if (rte_intr_enable(&test_intr_handle) == 0) {
+ test_intr_handle = rte_intr_handle_instance_index_get(intr_handles,
+ TEST_INTERRUPT_HANDLE_VALID_ALARM);
+ if (rte_intr_enable(test_intr_handle) == 0) {
printf("unexpectedly enable a specific intr_handle "
"successfully\n");
return -1;
}
/* check with specific valid intr_handle */
- test_intr_handle = intr_handles[TEST_INTERRUPT_HANDLE_VALID_DEV_EVENT];
- if (rte_intr_enable(&test_intr_handle) == 0) {
+ test_intr_handle = rte_intr_handle_instance_index_get(intr_handles,
+ TEST_INTERRUPT_HANDLE_VALID_DEV_EVENT);
+ if (rte_intr_enable(test_intr_handle) == 0) {
printf("unexpectedly enable a specific intr_handle "
"successfully\n");
return -1;
}
/* check with valid handler and its type */
- test_intr_handle = intr_handles[TEST_INTERRUPT_HANDLE_CASE1];
- if (rte_intr_enable(&test_intr_handle) < 0) {
+ test_intr_handle = rte_intr_handle_instance_index_get(intr_handles,
+ TEST_INTERRUPT_HANDLE_CASE1);
+ if (rte_intr_enable(test_intr_handle) < 0) {
printf("fail to enable interrupt on a simulated handler\n");
return -1;
}
- test_intr_handle = intr_handles[TEST_INTERRUPT_HANDLE_VALID_UIO];
- if (rte_intr_enable(&test_intr_handle) == 0) {
+ test_intr_handle = rte_intr_handle_instance_index_get(intr_handles,
+ TEST_INTERRUPT_HANDLE_VALID_UIO);
+ if (rte_intr_enable(test_intr_handle) == 0) {
printf("unexpectedly enable a specific intr_handle "
"successfully\n");
return -1;
@@ -286,7 +337,7 @@ test_interrupt_enable(void)
static int
test_interrupt_disable(void)
{
- struct rte_intr_handle test_intr_handle;
+ struct rte_intr_handle *test_intr_handle;
/* check with null intr_handle */
if (rte_intr_disable(NULL) == 0) {
@@ -296,46 +347,52 @@ test_interrupt_disable(void)
}
/* check with invalid intr_handle */
- test_intr_handle = intr_handles[TEST_INTERRUPT_HANDLE_INVALID];
- if (rte_intr_disable(&test_intr_handle) == 0) {
+ test_intr_handle = rte_intr_handle_instance_index_get(intr_handles,
+ TEST_INTERRUPT_HANDLE_INVALID);
+ if (rte_intr_disable(test_intr_handle) == 0) {
printf("unexpectedly disable invalid intr_handle "
"successfully\n");
return -1;
}
/* check with valid intr_handle */
- test_intr_handle = intr_handles[TEST_INTERRUPT_HANDLE_VALID];
- if (rte_intr_disable(&test_intr_handle) == 0) {
+ test_intr_handle = rte_intr_handle_instance_index_get(intr_handles,
+ TEST_INTERRUPT_HANDLE_VALID);
+ if (rte_intr_disable(test_intr_handle) == 0) {
printf("unexpectedly disable a specific intr_handle "
"successfully\n");
return -1;
}
/* check with specific valid intr_handle */
- test_intr_handle = intr_handles[TEST_INTERRUPT_HANDLE_VALID_ALARM];
- if (rte_intr_disable(&test_intr_handle) == 0) {
+ test_intr_handle = rte_intr_handle_instance_index_get(intr_handles,
+ TEST_INTERRUPT_HANDLE_VALID_ALARM);
+ if (rte_intr_disable(test_intr_handle) == 0) {
printf("unexpectedly disable a specific intr_handle "
"successfully\n");
return -1;
}
/* check with specific valid intr_handle */
- test_intr_handle = intr_handles[TEST_INTERRUPT_HANDLE_VALID_DEV_EVENT];
- if (rte_intr_disable(&test_intr_handle) == 0) {
+ test_intr_handle = rte_intr_handle_instance_index_get(intr_handles,
+ TEST_INTERRUPT_HANDLE_VALID_DEV_EVENT);
+ if (rte_intr_disable(test_intr_handle) == 0) {
printf("unexpectedly disable a specific intr_handle "
"successfully\n");
return -1;
}
/* check with valid handler and its type */
- test_intr_handle = intr_handles[TEST_INTERRUPT_HANDLE_CASE1];
- if (rte_intr_disable(&test_intr_handle) < 0) {
+ test_intr_handle = rte_intr_handle_instance_index_get(intr_handles,
+ TEST_INTERRUPT_HANDLE_CASE1);
+ if (rte_intr_disable(test_intr_handle) < 0) {
printf("fail to disable interrupt on a simulated handler\n");
return -1;
}
- test_intr_handle = intr_handles[TEST_INTERRUPT_HANDLE_VALID_UIO];
- if (rte_intr_disable(&test_intr_handle) == 0) {
+ test_intr_handle = rte_intr_handle_instance_index_get(intr_handles,
+ TEST_INTERRUPT_HANDLE_VALID_UIO);
+ if (rte_intr_disable(test_intr_handle) == 0) {
printf("unexpectedly disable a specific intr_handle "
"successfully\n");
return -1;
@@ -351,13 +408,14 @@ static int
test_interrupt_full_path_check(enum test_interrupt_handle_type intr_type)
{
int count;
- struct rte_intr_handle test_intr_handle;
+ struct rte_intr_handle *test_intr_handle;
flag = 0;
- test_intr_handle = intr_handles[intr_type];
+ test_intr_handle = rte_intr_handle_instance_index_get(intr_handles,
+ intr_type);
test_intr_type = intr_type;
- if (rte_intr_callback_register(&test_intr_handle,
- test_interrupt_callback, &test_intr_handle) < 0) {
+ if (rte_intr_callback_register(test_intr_handle,
+ test_interrupt_callback, test_intr_handle) < 0) {
printf("fail to register callback\n");
return -1;
}
@@ -371,9 +429,9 @@ test_interrupt_full_path_check(enum test_interrupt_handle_type intr_type)
rte_delay_ms(TEST_INTERRUPT_CHECK_INTERVAL);
while ((count =
- rte_intr_callback_unregister(&test_intr_handle,
+ rte_intr_callback_unregister(test_intr_handle,
test_interrupt_callback,
- &test_intr_handle)) < 0) {
+ test_intr_handle)) < 0) {
if (count != -EAGAIN)
return -1;
}
@@ -396,7 +454,7 @@ static int
test_interrupt(void)
{
int ret = -1;
- struct rte_intr_handle test_intr_handle;
+ struct rte_intr_handle *test_intr_handle;
if (test_interrupt_init() < 0) {
printf("fail to initialize for testing interrupt\n");
@@ -444,17 +502,20 @@ test_interrupt(void)
}
/* check if it will fail to register cb with invalid intr_handle */
- test_intr_handle = intr_handles[TEST_INTERRUPT_HANDLE_INVALID];
- if (rte_intr_callback_register(&test_intr_handle,
- test_interrupt_callback, &test_intr_handle) == 0) {
+ test_intr_handle = rte_intr_handle_instance_index_get(intr_handles,
+ TEST_INTERRUPT_HANDLE_INVALID);
+ if (rte_intr_callback_register(test_intr_handle,
+ test_interrupt_callback, test_intr_handle) == 0) {
printf("unexpectedly register successfully with invalid "
"intr_handle\n");
goto out;
}
/* check if it will fail to register without callback */
- test_intr_handle = intr_handles[TEST_INTERRUPT_HANDLE_VALID];
- if (rte_intr_callback_register(&test_intr_handle, NULL, &test_intr_handle) == 0) {
+ test_intr_handle = rte_intr_handle_instance_index_get(intr_handles,
+ TEST_INTERRUPT_HANDLE_VALID);
+ if (rte_intr_callback_register(test_intr_handle, NULL,
+ test_intr_handle) == 0) {
printf("unexpectedly register successfully with "
"null callback\n");
goto out;
@@ -469,39 +530,41 @@ test_interrupt(void)
}
/* check if it will fail to unregister cb with invalid intr_handle */
- test_intr_handle = intr_handles[TEST_INTERRUPT_HANDLE_INVALID];
- if (rte_intr_callback_unregister(&test_intr_handle,
- test_interrupt_callback, &test_intr_handle) > 0) {
+ test_intr_handle = rte_intr_handle_instance_index_get(intr_handles,
+ TEST_INTERRUPT_HANDLE_INVALID);
+ if (rte_intr_callback_unregister(test_intr_handle,
+ test_interrupt_callback, test_intr_handle) > 0) {
printf("unexpectedly unregister successfully with "
"invalid intr_handle\n");
goto out;
}
/* check if it is ok to register the same intr_handle twice */
- test_intr_handle = intr_handles[TEST_INTERRUPT_HANDLE_VALID];
- if (rte_intr_callback_register(&test_intr_handle,
- test_interrupt_callback, &test_intr_handle) < 0) {
+ test_intr_handle = rte_intr_handle_instance_index_get(intr_handles,
+ TEST_INTERRUPT_HANDLE_VALID);
+ if (rte_intr_callback_register(test_intr_handle,
+ test_interrupt_callback, test_intr_handle) < 0) {
printf("it fails to register test_interrupt_callback\n");
goto out;
}
- if (rte_intr_callback_register(&test_intr_handle,
- test_interrupt_callback_1, &test_intr_handle) < 0) {
+ if (rte_intr_callback_register(test_intr_handle,
+ test_interrupt_callback_1, test_intr_handle) < 0) {
printf("it fails to register test_interrupt_callback_1\n");
goto out;
}
/* check if it will fail to unregister with invalid parameter */
- if (rte_intr_callback_unregister(&test_intr_handle,
+ if (rte_intr_callback_unregister(test_intr_handle,
test_interrupt_callback, (void *)0xff) != 0) {
printf("unexpectedly unregisters successfully with "
"invalid arg\n");
goto out;
}
- if (rte_intr_callback_unregister(&test_intr_handle,
- test_interrupt_callback, &test_intr_handle) <= 0) {
+ if (rte_intr_callback_unregister(test_intr_handle,
+ test_interrupt_callback, test_intr_handle) <= 0) {
printf("it fails to unregister test_interrupt_callback\n");
goto out;
}
- if (rte_intr_callback_unregister(&test_intr_handle,
+ if (rte_intr_callback_unregister(test_intr_handle,
test_interrupt_callback_1, (void *)-1) <= 0) {
printf("it fails to unregister test_interrupt_callback_1 "
"for all\n");
@@ -528,28 +591,32 @@ test_interrupt(void)
out:
printf("Clearing for interrupt tests\n");
/* clear registered callbacks */
- test_intr_handle = intr_handles[TEST_INTERRUPT_HANDLE_VALID];
- rte_intr_callback_unregister(&test_intr_handle,
+ test_intr_handle = rte_intr_handle_instance_index_get(intr_handles,
+ TEST_INTERRUPT_HANDLE_VALID);
+ rte_intr_callback_unregister(test_intr_handle,
test_interrupt_callback, (void *)-1);
- rte_intr_callback_unregister(&test_intr_handle,
+ rte_intr_callback_unregister(test_intr_handle,
test_interrupt_callback_1, (void *)-1);
- test_intr_handle = intr_handles[TEST_INTERRUPT_HANDLE_VALID_UIO];
- rte_intr_callback_unregister(&test_intr_handle,
+ test_intr_handle = rte_intr_handle_instance_index_get(intr_handles,
+ TEST_INTERRUPT_HANDLE_VALID_UIO);
+ rte_intr_callback_unregister(test_intr_handle,
test_interrupt_callback, (void *)-1);
- rte_intr_callback_unregister(&test_intr_handle,
+ rte_intr_callback_unregister(test_intr_handle,
test_interrupt_callback_1, (void *)-1);
- test_intr_handle = intr_handles[TEST_INTERRUPT_HANDLE_VALID_ALARM];
- rte_intr_callback_unregister(&test_intr_handle,
+ test_intr_handle = rte_intr_handle_instance_index_get(intr_handles,
+ TEST_INTERRUPT_HANDLE_VALID_ALARM);
+ rte_intr_callback_unregister(test_intr_handle,
test_interrupt_callback, (void *)-1);
- rte_intr_callback_unregister(&test_intr_handle,
+ rte_intr_callback_unregister(test_intr_handle,
test_interrupt_callback_1, (void *)-1);
- test_intr_handle = intr_handles[TEST_INTERRUPT_HANDLE_VALID_DEV_EVENT];
- rte_intr_callback_unregister(&test_intr_handle,
+ test_intr_handle = rte_intr_handle_instance_index_get(intr_handles,
+ TEST_INTERRUPT_HANDLE_VALID_DEV_EVENT);
+ rte_intr_callback_unregister(test_intr_handle,
test_interrupt_callback, (void *)-1);
- rte_intr_callback_unregister(&test_intr_handle,
+ rte_intr_callback_unregister(test_intr_handle,
test_interrupt_callback_1, (void *)-1);
rte_delay_ms(2 * TEST_INTERRUPT_CHECK_INTERVAL);
--
2.18.0
^ permalink raw reply [flat|nested] 152+ messages in thread
* [dpdk-dev] [PATCH v1 5/7] drivers: remove direct access to interrupt handle fields
2021-09-03 12:40 ` [dpdk-dev] [PATCH v1 0/7] make rte_intr_handle internal Harman Kalra
` (3 preceding siblings ...)
2021-09-03 12:40 ` [dpdk-dev] [PATCH v1 4/7] test/interrupt: apply get set interrupt handle APIs Harman Kalra
@ 2021-09-03 12:41 ` Harman Kalra
2021-09-03 12:41 ` [dpdk-dev] [PATCH v1 6/7] eal/interrupts: make interrupt handle structure opaque Harman Kalra
` (3 subsequent siblings)
8 siblings, 0 replies; 152+ messages in thread
From: Harman Kalra @ 2021-09-03 12:41 UTC (permalink / raw)
To: dev, Nicolas Chautru, Parav Pandit, Xueming Li, Hemant Agrawal,
Sachin Saxena, Rosen Xu, Ferruh Yigit, Anatoly Burakov,
Stephen Hemminger, Long Li, Nithin Dabilpuram, Kiran Kumar K,
Sunil Kumar Kori, Satha Rao, Jerin Jacob, Ankur Dwivedi,
Anoob Joseph, Pavan Nikhilesh, Igor Russkikh, Steven Webster,
Matt Peters, Somalapuram Amaranath, Rasesh Mody, Shahed Shaikh,
Ajit Khaparde, Somnath Kotur, Haiyue Wang, Marcin Wojtas,
Michal Krawczyk, Shai Brandes, Evgeny Schemeilin, Igor Chauskin,
John Daley, Hyong Youb Kim, Gaetan Rivet, Qi Zhang, Xiao Wang,
Ziyang Xuan, Xiaoyun Wang, Guoyang Zhou, Min Hu (Connor),
Yisen Zhuang, Lijun Ou, Beilei Xing, Jingjing Wu, Qiming Yang,
Andrew Boyer, Jakub Grajciar, Matan Azrad, Shahaf Shuler,
Viacheslav Ovsiienko, Heinrich Kuhn, Jiawen Wu,
Devendra Singh Rawat, Andrew Rybchenko, Keith Wiles,
Maciej Czekaj, Jian Wang, Maxime Coquelin, Chenbo Xia, Yong Wang,
Tianfei zhang, Xiaoyun Li, Guy Kaneti, Bruce Richardson,
Thomas Monjalon
Cc: Harman Kalra
Removing direct access to the interrupt handle structure fields;
the respective get/set APIs are used instead.
Updating all the drivers and libraries that currently access the
interrupt handle fields directly.
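As a hypothetical illustration (not taken from any single driver below),
the substitution applied across the tree looks roughly like the sketch
here, assuming the get/set wrappers return 0 on success and set rte_errno
on failure as in the EAL changes earlier in the series:

#include <rte_errno.h>
#include <rte_interrupts.h>

static int
driver_irq_setup(struct rte_intr_handle *intr_handle, int fd)
{
	/* Before: intr_handle->fd = fd; intr_handle->type = RTE_INTR_HANDLE_VFIO_MSI; */
	if (rte_intr_handle_fd_set(intr_handle, fd))
		return -rte_errno;
	if (rte_intr_handle_type_set(intr_handle, RTE_INTR_HANDLE_VFIO_MSI))
		return -rte_errno;

	/* Before: if (intr_handle->type == RTE_INTR_HANDLE_VFIO_MSI) ... */
	if (rte_intr_handle_type_get(intr_handle) == RTE_INTR_HANDLE_VFIO_MSI)
		return rte_intr_enable(intr_handle);

	return 0;
}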
Signed-off-by: Harman Kalra <hkalra@marvell.com>
---
drivers/baseband/acc100/rte_acc100_pmd.c | 18 +--
.../fpga_5gnr_fec/rte_fpga_5gnr_fec.c | 13 +-
drivers/baseband/fpga_lte_fec/fpga_lte_fec.c | 14 ++-
drivers/bus/auxiliary/auxiliary_common.c | 2 +
drivers/bus/auxiliary/linux/auxiliary.c | 11 ++
drivers/bus/auxiliary/rte_bus_auxiliary.h | 2 +-
drivers/bus/dpaa/dpaa_bus.c | 28 ++++-
drivers/bus/dpaa/rte_dpaa_bus.h | 2 +-
drivers/bus/fslmc/fslmc_bus.c | 17 ++-
drivers/bus/fslmc/fslmc_vfio.c | 32 +++--
drivers/bus/fslmc/portal/dpaa2_hw_dpio.c | 21 +++-
drivers/bus/fslmc/portal/dpaa2_hw_pvt.h | 2 +-
drivers/bus/fslmc/rte_fslmc.h | 2 +-
drivers/bus/ifpga/ifpga_bus.c | 16 ++-
drivers/bus/ifpga/rte_bus_ifpga.h | 2 +-
drivers/bus/pci/bsd/pci.c | 21 ++--
drivers/bus/pci/linux/pci.c | 4 +-
drivers/bus/pci/linux/pci_uio.c | 73 +++++++----
drivers/bus/pci/linux/pci_vfio.c | 108 ++++++++++------
drivers/bus/pci/pci_common.c | 29 ++++-
drivers/bus/pci/pci_common_uio.c | 21 ++--
drivers/bus/pci/rte_bus_pci.h | 4 +-
drivers/bus/vmbus/linux/vmbus_bus.c | 7 ++
drivers/bus/vmbus/linux/vmbus_uio.c | 37 ++++--
drivers/bus/vmbus/rte_bus_vmbus.h | 2 +-
drivers/bus/vmbus/vmbus_common_uio.c | 24 ++--
drivers/common/cnxk/roc_cpt.c | 8 +-
drivers/common/cnxk/roc_dev.c | 14 +--
drivers/common/cnxk/roc_irq.c | 106 +++++++++-------
drivers/common/cnxk/roc_nix_irq.c | 37 +++---
drivers/common/cnxk/roc_npa.c | 2 +-
drivers/common/cnxk/roc_platform.h | 34 +++++
drivers/common/cnxk/roc_sso.c | 4 +-
drivers/common/cnxk/roc_tim.c | 4 +-
drivers/common/octeontx2/otx2_dev.c | 14 +--
drivers/common/octeontx2/otx2_irq.c | 117 ++++++++++--------
.../octeontx2/otx2_cryptodev_hw_access.c | 4 +-
drivers/event/octeontx2/otx2_evdev_irq.c | 12 +-
drivers/mempool/octeontx2/otx2_mempool.c | 2 +-
drivers/net/atlantic/atl_ethdev.c | 22 ++--
drivers/net/avp/avp_ethdev.c | 8 +-
drivers/net/axgbe/axgbe_ethdev.c | 12 +-
drivers/net/axgbe/axgbe_mdio.c | 6 +-
drivers/net/bnx2x/bnx2x_ethdev.c | 10 +-
drivers/net/bnxt/bnxt_ethdev.c | 32 +++--
drivers/net/bnxt/bnxt_irq.c | 4 +-
drivers/net/dpaa/dpaa_ethdev.c | 47 ++++---
drivers/net/dpaa2/dpaa2_ethdev.c | 10 +-
drivers/net/e1000/em_ethdev.c | 24 ++--
drivers/net/e1000/igb_ethdev.c | 84 ++++++-------
drivers/net/ena/ena_ethdev.c | 36 +++---
drivers/net/enic/enic_main.c | 27 ++--
drivers/net/failsafe/failsafe.c | 24 +++-
drivers/net/failsafe/failsafe_intr.c | 45 ++++---
drivers/net/failsafe/failsafe_ops.c | 23 +++-
drivers/net/failsafe/failsafe_private.h | 2 +-
drivers/net/fm10k/fm10k_ethdev.c | 32 ++---
drivers/net/hinic/hinic_pmd_ethdev.c | 10 +-
drivers/net/hns3/hns3_ethdev.c | 50 ++++----
drivers/net/hns3/hns3_ethdev_vf.c | 57 +++++----
drivers/net/hns3/hns3_rxtx.c | 2 +-
drivers/net/i40e/i40e_ethdev.c | 55 ++++----
drivers/net/i40e/i40e_ethdev_vf.c | 43 +++----
drivers/net/iavf/iavf_ethdev.c | 41 +++---
drivers/net/iavf/iavf_vchnl.c | 4 +-
drivers/net/ice/ice_dcf.c | 10 +-
drivers/net/ice/ice_dcf_ethdev.c | 23 ++--
drivers/net/ice/ice_ethdev.c | 51 ++++----
drivers/net/igc/igc_ethdev.c | 47 ++++---
drivers/net/ionic/ionic_ethdev.c | 12 +-
drivers/net/ixgbe/ixgbe_ethdev.c | 70 +++++------
drivers/net/memif/memif_socket.c | 114 ++++++++++++-----
drivers/net/memif/memif_socket.h | 4 +-
drivers/net/memif/rte_eth_memif.c | 63 ++++++++--
drivers/net/memif/rte_eth_memif.h | 2 +-
drivers/net/mlx4/mlx4.c | 20 ++-
drivers/net/mlx4/mlx4.h | 2 +-
drivers/net/mlx4/mlx4_intr.c | 48 ++++---
drivers/net/mlx5/linux/mlx5_os.c | 56 ++++++---
drivers/net/mlx5/linux/mlx5_socket.c | 26 ++--
drivers/net/mlx5/mlx5.h | 6 +-
drivers/net/mlx5/mlx5_rxq.c | 43 ++++---
drivers/net/mlx5/mlx5_trigger.c | 4 +-
drivers/net/mlx5/mlx5_txpp.c | 27 ++--
drivers/net/netvsc/hn_ethdev.c | 4 +-
drivers/net/nfp/nfp_common.c | 28 +++--
drivers/net/nfp/nfp_ethdev.c | 13 +-
drivers/net/nfp/nfp_ethdev_vf.c | 13 +-
drivers/net/ngbe/ngbe_ethdev.c | 31 +++--
drivers/net/octeontx2/otx2_ethdev_irq.c | 35 +++---
drivers/net/qede/qede_ethdev.c | 16 +--
drivers/net/sfc/sfc_intr.c | 29 ++---
drivers/net/tap/rte_eth_tap.c | 37 ++++--
drivers/net/tap/rte_eth_tap.h | 2 +-
drivers/net/tap/tap_intr.c | 33 +++--
drivers/net/thunderx/nicvf_ethdev.c | 13 ++
drivers/net/thunderx/nicvf_struct.h | 2 +-
drivers/net/txgbe/txgbe_ethdev.c | 36 +++---
drivers/net/txgbe/txgbe_ethdev_vf.c | 35 +++---
drivers/net/vhost/rte_eth_vhost.c | 78 +++++++-----
drivers/net/virtio/virtio_ethdev.c | 17 +--
.../net/virtio/virtio_user/virtio_user_dev.c | 53 +++++---
drivers/net/vmxnet3/vmxnet3_ethdev.c | 45 ++++---
drivers/raw/ifpga/ifpga_rawdev.c | 42 +++++--
drivers/raw/ntb/ntb.c | 10 +-
.../regex/octeontx2/otx2_regexdev_hw_access.c | 4 +-
drivers/vdpa/ifc/ifcvf_vdpa.c | 5 +-
drivers/vdpa/mlx5/mlx5_vdpa.c | 11 ++
drivers/vdpa/mlx5/mlx5_vdpa.h | 4 +-
drivers/vdpa/mlx5/mlx5_vdpa_event.c | 22 ++--
drivers/vdpa/mlx5/mlx5_vdpa_virtq.c | 46 ++++---
lib/bbdev/rte_bbdev.c | 4 +-
lib/eal/freebsd/eal_alarm.c | 49 +++++++-
lib/eal/include/rte_eal_trace.h | 24 +---
lib/eal/linux/eal_alarm.c | 31 +++--
lib/eal/linux/eal_dev.c | 65 ++++++----
lib/ethdev/ethdev_pci.h | 2 +-
lib/ethdev/rte_ethdev.c | 14 +--
118 files changed, 1879 insertions(+), 1182 deletions(-)
diff --git a/drivers/baseband/acc100/rte_acc100_pmd.c b/drivers/baseband/acc100/rte_acc100_pmd.c
index 68ba523ea9..5097b240ee 100644
--- a/drivers/baseband/acc100/rte_acc100_pmd.c
+++ b/drivers/baseband/acc100/rte_acc100_pmd.c
@@ -720,8 +720,10 @@ acc100_intr_enable(struct rte_bbdev *dev)
struct acc100_device *d = dev->data->dev_private;
/* Only MSI are currently supported */
- if (dev->intr_handle->type == RTE_INTR_HANDLE_VFIO_MSI ||
- dev->intr_handle->type == RTE_INTR_HANDLE_UIO) {
+ if (rte_intr_handle_type_get(dev->intr_handle) ==
+ RTE_INTR_HANDLE_VFIO_MSI ||
+ rte_intr_handle_type_get(dev->intr_handle) ==
+ RTE_INTR_HANDLE_UIO) {
ret = allocate_info_ring(dev);
if (ret < 0) {
@@ -1096,8 +1098,9 @@ acc100_queue_intr_enable(struct rte_bbdev *dev, uint16_t queue_id)
{
struct acc100_queue *q = dev->data->queues[queue_id].queue_private;
- if (dev->intr_handle->type != RTE_INTR_HANDLE_VFIO_MSI &&
- dev->intr_handle->type != RTE_INTR_HANDLE_UIO)
+ if (rte_intr_handle_type_get(dev->intr_handle) !=
+ RTE_INTR_HANDLE_VFIO_MSI &&
+ rte_intr_handle_type_get(dev->intr_handle) != RTE_INTR_HANDLE_UIO)
return -ENOTSUP;
q->irq_enable = 1;
@@ -1109,8 +1112,9 @@ acc100_queue_intr_disable(struct rte_bbdev *dev, uint16_t queue_id)
{
struct acc100_queue *q = dev->data->queues[queue_id].queue_private;
- if (dev->intr_handle->type != RTE_INTR_HANDLE_VFIO_MSI &&
- dev->intr_handle->type != RTE_INTR_HANDLE_UIO)
+ if (rte_intr_handle_type_get(dev->intr_handle) !=
+ RTE_INTR_HANDLE_VFIO_MSI &&
+ rte_intr_handle_type_get(dev->intr_handle) != RTE_INTR_HANDLE_UIO)
return -ENOTSUP;
q->irq_enable = 0;
@@ -4178,7 +4182,7 @@ static int acc100_pci_probe(struct rte_pci_driver *pci_drv,
/* Fill HW specific part of device structure */
bbdev->device = &pci_dev->device;
- bbdev->intr_handle = &pci_dev->intr_handle;
+ bbdev->intr_handle = pci_dev->intr_handle;
bbdev->data->socket_id = pci_dev->device.numa_node;
/* Invoke ACC100 device initialization function */
diff --git a/drivers/baseband/fpga_5gnr_fec/rte_fpga_5gnr_fec.c b/drivers/baseband/fpga_5gnr_fec/rte_fpga_5gnr_fec.c
index 6485cc824a..34a6da9a46 100644
--- a/drivers/baseband/fpga_5gnr_fec/rte_fpga_5gnr_fec.c
+++ b/drivers/baseband/fpga_5gnr_fec/rte_fpga_5gnr_fec.c
@@ -743,12 +743,13 @@ fpga_intr_enable(struct rte_bbdev *dev)
* invoked when any FPGA queue issues interrupt.
*/
for (i = 0; i < FPGA_NUM_INTR_VEC; ++i)
- dev->intr_handle->efds[i] = dev->intr_handle->fd;
+ if (rte_intr_handle_efds_index_set(dev->intr_handle, i,
+ rte_intr_handle_fd_get(dev->intr_handle)))
+ return -rte_errno;
- if (!dev->intr_handle->intr_vec) {
- dev->intr_handle->intr_vec = rte_zmalloc("intr_vec",
- dev->data->num_queues * sizeof(int), 0);
- if (!dev->intr_handle->intr_vec) {
+ if (!rte_intr_handle_vec_list_base(dev->intr_handle)) {
+ if (rte_intr_handle_vec_list_alloc(dev->intr_handle, "intr_vec",
+ dev->data->num_queues)) {
rte_bbdev_log(ERR, "Failed to allocate %u vectors",
dev->data->num_queues);
return -ENOMEM;
@@ -1879,7 +1880,7 @@ fpga_5gnr_fec_probe(struct rte_pci_driver *pci_drv,
/* Fill HW specific part of device structure */
bbdev->device = &pci_dev->device;
- bbdev->intr_handle = &pci_dev->intr_handle;
+ bbdev->intr_handle = pci_dev->intr_handle;
bbdev->data->socket_id = pci_dev->device.numa_node;
/* Invoke FEC FPGA device initialization function */
diff --git a/drivers/baseband/fpga_lte_fec/fpga_lte_fec.c b/drivers/baseband/fpga_lte_fec/fpga_lte_fec.c
index 350c4248eb..0a718fbcd9 100644
--- a/drivers/baseband/fpga_lte_fec/fpga_lte_fec.c
+++ b/drivers/baseband/fpga_lte_fec/fpga_lte_fec.c
@@ -1014,18 +1014,20 @@ fpga_intr_enable(struct rte_bbdev *dev)
* invoked when any FPGA queue issues interrupt.
*/
for (i = 0; i < FPGA_NUM_INTR_VEC; ++i)
- dev->intr_handle->efds[i] = dev->intr_handle->fd;
+ if (rte_intr_handle_efds_index_set(dev->intr_handle, i,
+ rte_intr_handle_fd_get(dev->intr_handle)))
+ return -rte_errno;
- if (!dev->intr_handle->intr_vec) {
- dev->intr_handle->intr_vec = rte_zmalloc("intr_vec",
- dev->data->num_queues * sizeof(int), 0);
- if (!dev->intr_handle->intr_vec) {
+ if (!rte_intr_handle_vec_list_base(dev->intr_handle)) {
+ if (rte_intr_handle_vec_list_alloc(dev->intr_handle, "intr_vec",
+ dev->data->num_queues)) {
rte_bbdev_log(ERR, "Failed to allocate %u vectors",
dev->data->num_queues);
return -ENOMEM;
}
}
+
ret = rte_intr_enable(dev->intr_handle);
if (ret < 0) {
rte_bbdev_log(ERR,
@@ -2369,7 +2371,7 @@ fpga_lte_fec_probe(struct rte_pci_driver *pci_drv,
/* Fill HW specific part of device structure */
bbdev->device = &pci_dev->device;
- bbdev->intr_handle = &pci_dev->intr_handle;
+ bbdev->intr_handle = pci_dev->intr_handle;
bbdev->data->socket_id = pci_dev->device.numa_node;
/* Invoke FEC FPGA device initialization function */
diff --git a/drivers/bus/auxiliary/auxiliary_common.c b/drivers/bus/auxiliary/auxiliary_common.c
index 603b6fdc02..7298a03d86 100644
--- a/drivers/bus/auxiliary/auxiliary_common.c
+++ b/drivers/bus/auxiliary/auxiliary_common.c
@@ -320,6 +320,8 @@ auxiliary_unplug(struct rte_device *dev)
if (ret == 0) {
rte_auxiliary_remove_device(adev);
rte_devargs_remove(dev->devargs);
+ if (adev->intr_handle)
+ rte_intr_handle_instance_free(adev->intr_handle);
free(adev);
}
return ret;
diff --git a/drivers/bus/auxiliary/linux/auxiliary.c b/drivers/bus/auxiliary/linux/auxiliary.c
index 9bd4ee3295..236fdc9bf7 100644
--- a/drivers/bus/auxiliary/linux/auxiliary.c
+++ b/drivers/bus/auxiliary/linux/auxiliary.c
@@ -39,6 +39,15 @@ auxiliary_scan_one(const char *dirname, const char *name)
dev->device.name = dev->name;
dev->device.bus = &auxiliary_bus.bus;
+ /* Allocate interrupt instance */
+ dev->intr_handle =
+ rte_intr_handle_instance_alloc(RTE_INTR_HANDLE_DEFAULT_SIZE,
+ false);
+ if (!dev->intr_handle) {
+ free(dev);
+ return -1;
+ }
+
/* Get NUMA node, default to 0 if not present */
snprintf(filename, sizeof(filename), "%s/%s/numa_node",
dirname, name);
@@ -67,6 +76,8 @@ auxiliary_scan_one(const char *dirname, const char *name)
rte_devargs_remove(dev2->device.devargs);
auxiliary_on_scan(dev2);
}
+ if (dev->intr_handle)
+ rte_intr_handle_instance_free(dev->intr_handle);
free(dev);
}
return 0;
diff --git a/drivers/bus/auxiliary/rte_bus_auxiliary.h b/drivers/bus/auxiliary/rte_bus_auxiliary.h
index 2462bad2ba..7642964622 100644
--- a/drivers/bus/auxiliary/rte_bus_auxiliary.h
+++ b/drivers/bus/auxiliary/rte_bus_auxiliary.h
@@ -116,7 +116,7 @@ struct rte_auxiliary_device {
TAILQ_ENTRY(rte_auxiliary_device) next; /**< Next probed device. */
struct rte_device device; /**< Inherit core device */
char name[RTE_DEV_NAME_MAX_LEN + 1]; /**< ASCII device name */
- struct rte_intr_handle intr_handle; /**< Interrupt handle */
+ struct rte_intr_handle *intr_handle; /**< Interrupt handle */
struct rte_auxiliary_driver *driver; /**< Device driver */
};
diff --git a/drivers/bus/dpaa/dpaa_bus.c b/drivers/bus/dpaa/dpaa_bus.c
index e499305d85..52b2a4883e 100644
--- a/drivers/bus/dpaa/dpaa_bus.c
+++ b/drivers/bus/dpaa/dpaa_bus.c
@@ -172,6 +172,15 @@ dpaa_create_device_list(void)
dev->device.bus = &rte_dpaa_bus.bus;
+ /* Allocate interrupt handle instance */
+ dev->intr_handle = rte_intr_handle_instance_alloc(
+ RTE_INTR_HANDLE_DEFAULT_SIZE, false);
+ if (!dev->intr_handle) {
+ DPAA_BUS_LOG(ERR, "Failed to allocate intr handle");
+ ret = -ENOMEM;
+ goto cleanup;
+ }
+
cfg = &dpaa_netcfg->port_cfg[i];
fman_intf = cfg->fman_if;
@@ -214,6 +223,15 @@ dpaa_create_device_list(void)
goto cleanup;
}
+ /* Allocate interrupt handle instance */
+ dev->intr_handle = rte_intr_handle_instance_alloc(
+ RTE_INTR_HANDLE_DEFAULT_SIZE, false);
+ if (!dev->intr_handle) {
+ DPAA_BUS_LOG(ERR, "Failed to allocate intr handle");
+ ret = -ENOMEM;
+ goto cleanup;
+ }
+
dev->device_type = FSL_DPAA_CRYPTO;
dev->id.dev_id = rte_dpaa_bus.device_count + i;
@@ -247,6 +265,7 @@ dpaa_clean_device_list(void)
TAILQ_FOREACH_SAFE(dev, &rte_dpaa_bus.device_list, next, tdev) {
TAILQ_REMOVE(&rte_dpaa_bus.device_list, dev, next);
+ rte_intr_handle_instance_free(dev->intr_handle);
free(dev);
dev = NULL;
}
@@ -559,8 +578,11 @@ static int rte_dpaa_setup_intr(struct rte_intr_handle *intr_handle)
return errno;
}
- intr_handle->fd = fd;
- intr_handle->type = RTE_INTR_HANDLE_EXT;
+ if (rte_intr_handle_fd_set(intr_handle, fd))
+ return -rte_errno;
+
+ if (rte_intr_handle_type_set(intr_handle, RTE_INTR_HANDLE_EXT))
+ return -rte_errno;
return 0;
}
@@ -612,7 +634,7 @@ rte_dpaa_bus_probe(void)
TAILQ_FOREACH(dev, &rte_dpaa_bus.device_list, next) {
if (dev->device_type == FSL_DPAA_ETH) {
- ret = rte_dpaa_setup_intr(&dev->intr_handle);
+ ret = rte_dpaa_setup_intr(dev->intr_handle);
if (ret)
DPAA_BUS_ERR("Error setting up interrupt.\n");
}
diff --git a/drivers/bus/dpaa/rte_dpaa_bus.h b/drivers/bus/dpaa/rte_dpaa_bus.h
index 48d5cf4625..f32cb038b4 100644
--- a/drivers/bus/dpaa/rte_dpaa_bus.h
+++ b/drivers/bus/dpaa/rte_dpaa_bus.h
@@ -101,7 +101,7 @@ struct rte_dpaa_device {
};
struct rte_dpaa_driver *driver;
struct dpaa_device_id id;
- struct rte_intr_handle intr_handle;
+ struct rte_intr_handle *intr_handle;
enum rte_dpaa_type device_type; /**< Ethernet or crypto type device */
char name[RTE_ETH_NAME_MAX_LEN];
};
diff --git a/drivers/bus/fslmc/fslmc_bus.c b/drivers/bus/fslmc/fslmc_bus.c
index becc455f6b..3a1b0d0a45 100644
--- a/drivers/bus/fslmc/fslmc_bus.c
+++ b/drivers/bus/fslmc/fslmc_bus.c
@@ -47,6 +47,8 @@ cleanup_fslmc_device_list(void)
TAILQ_FOREACH_SAFE(dev, &rte_fslmc_bus.device_list, next, t_dev) {
TAILQ_REMOVE(&rte_fslmc_bus.device_list, dev, next);
+ if (dev->intr_handle)
+ rte_intr_handle_instance_free(dev->intr_handle);
free(dev);
dev = NULL;
}
@@ -160,6 +162,16 @@ scan_one_fslmc_device(char *dev_name)
dev->device.bus = &rte_fslmc_bus.bus;
+ /* Allocate interrupt instance */
+ dev->intr_handle =
+ rte_intr_handle_instance_alloc(RTE_INTR_HANDLE_DEFAULT_SIZE,
+ false);
+ if (!dev->intr_handle) {
+ DPAA2_BUS_ERR("Failed to allocate intr handle");
+ ret = -ENOMEM;
+ goto cleanup;
+ }
+
/* Parse the device name and ID */
t_ptr = strtok(dup_dev_name, ".");
if (!t_ptr) {
@@ -220,8 +232,11 @@ scan_one_fslmc_device(char *dev_name)
cleanup:
if (dup_dev_name)
free(dup_dev_name);
- if (dev)
+ if (dev) {
+ if (dev->intr_handle)
+ rte_intr_handle_instance_free(dev->intr_handle);
free(dev);
+ }
return ret;
}
diff --git a/drivers/bus/fslmc/fslmc_vfio.c b/drivers/bus/fslmc/fslmc_vfio.c
index c8373e627a..b002b5e443 100644
--- a/drivers/bus/fslmc/fslmc_vfio.c
+++ b/drivers/bus/fslmc/fslmc_vfio.c
@@ -599,7 +599,7 @@ int rte_dpaa2_intr_enable(struct rte_intr_handle *intr_handle, int index)
int len, ret;
char irq_set_buf[IRQ_SET_BUF_LEN];
struct vfio_irq_set *irq_set;
- int *fd_ptr;
+ int *fd_ptr, vfio_dev_fd;
len = sizeof(irq_set_buf);
@@ -611,12 +611,14 @@ int rte_dpaa2_intr_enable(struct rte_intr_handle *intr_handle, int index)
irq_set->index = index;
irq_set->start = 0;
fd_ptr = (int *)&irq_set->data;
- *fd_ptr = intr_handle->fd;
+ *fd_ptr = rte_intr_handle_fd_get(intr_handle);
- ret = ioctl(intr_handle->vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
+ vfio_dev_fd = rte_intr_handle_dev_fd_get(intr_handle);
+ ret = ioctl(vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
if (ret) {
DPAA2_BUS_ERR("Error:dpaa2 SET IRQs fd=%d, err = %d(%s)",
- intr_handle->fd, errno, strerror(errno));
+ rte_intr_handle_fd_get(intr_handle), errno,
+ strerror(errno));
return ret;
}
@@ -627,7 +629,7 @@ int rte_dpaa2_intr_disable(struct rte_intr_handle *intr_handle, int index)
{
struct vfio_irq_set *irq_set;
char irq_set_buf[IRQ_SET_BUF_LEN];
- int len, ret;
+ int len, ret, vfio_dev_fd;
len = sizeof(struct vfio_irq_set);
@@ -638,11 +640,12 @@ int rte_dpaa2_intr_disable(struct rte_intr_handle *intr_handle, int index)
irq_set->start = 0;
irq_set->count = 0;
- ret = ioctl(intr_handle->vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
+ vfio_dev_fd = rte_intr_handle_dev_fd_get(intr_handle);
+ ret = ioctl(vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
if (ret)
DPAA2_BUS_ERR(
"Error disabling dpaa2 interrupts for fd %d",
- intr_handle->fd);
+ rte_intr_handle_fd_get(intr_handle));
return ret;
}
@@ -684,9 +687,16 @@ rte_dpaa2_vfio_setup_intr(struct rte_intr_handle *intr_handle,
return -1;
}
- intr_handle->fd = fd;
- intr_handle->type = RTE_INTR_HANDLE_VFIO_MSI;
- intr_handle->vfio_dev_fd = vfio_dev_fd;
+ if (rte_intr_handle_fd_set(intr_handle, fd))
+ return -rte_errno;
+
+ if (rte_intr_handle_type_set(intr_handle,
+ RTE_INTR_HANDLE_VFIO_MSI))
+ return -rte_errno;
+
+ if (rte_intr_handle_dev_fd_set(intr_handle, vfio_dev_fd))
+ return -rte_errno;
+
return 0;
}
@@ -711,7 +721,7 @@ fslmc_process_iodevices(struct rte_dpaa2_device *dev)
switch (dev->dev_type) {
case DPAA2_ETH:
- rte_dpaa2_vfio_setup_intr(&dev->intr_handle, dev_fd,
+ rte_dpaa2_vfio_setup_intr(dev->intr_handle, dev_fd,
device_info.num_irqs);
break;
case DPAA2_CON:
diff --git a/drivers/bus/fslmc/portal/dpaa2_hw_dpio.c b/drivers/bus/fslmc/portal/dpaa2_hw_dpio.c
index 1a1e437ed1..479d3d71d7 100644
--- a/drivers/bus/fslmc/portal/dpaa2_hw_dpio.c
+++ b/drivers/bus/fslmc/portal/dpaa2_hw_dpio.c
@@ -176,7 +176,7 @@ static int dpaa2_dpio_intr_init(struct dpaa2_dpio_dev *dpio_dev)
int threshold = 0x3, timeout = 0xFF;
dpio_epoll_fd = epoll_create(1);
- ret = rte_dpaa2_intr_enable(&dpio_dev->intr_handle, 0);
+ ret = rte_dpaa2_intr_enable(dpio_dev->intr_handle, 0);
if (ret) {
DPAA2_BUS_ERR("Interrupt registeration failed");
return -1;
@@ -195,7 +195,7 @@ static int dpaa2_dpio_intr_init(struct dpaa2_dpio_dev *dpio_dev)
qbman_swp_dqrr_thrshld_write(dpio_dev->sw_portal, threshold);
qbman_swp_intr_timeout_write(dpio_dev->sw_portal, timeout);
- eventfd = dpio_dev->intr_handle.fd;
+ eventfd = rte_intr_handle_fd_get(dpio_dev->intr_handle);
epoll_ev.events = EPOLLIN | EPOLLPRI | EPOLLET;
epoll_ev.data.fd = eventfd;
@@ -213,7 +213,7 @@ static void dpaa2_dpio_intr_deinit(struct dpaa2_dpio_dev *dpio_dev)
{
int ret;
- ret = rte_dpaa2_intr_disable(&dpio_dev->intr_handle, 0);
+ ret = rte_dpaa2_intr_disable(dpio_dev->intr_handle, 0);
if (ret)
DPAA2_BUS_ERR("DPIO interrupt disable failed");
@@ -388,6 +388,15 @@ dpaa2_create_dpio_device(int vdev_fd,
/* Using single portal for all devices */
dpio_dev->mc_portal = dpaa2_get_mcp_ptr(MC_PORTAL_INDEX);
+ /* Allocate interrupt instance */
+ dpio_dev->intr_handle =
+ rte_intr_handle_instance_alloc(RTE_INTR_HANDLE_DEFAULT_SIZE,
+ true);
+ if (!dpio_dev->intr_handle) {
+ DPAA2_BUS_ERR("Failed to allocate intr handle");
+ goto err;
+ }
+
dpio_dev->dpio = rte_zmalloc(NULL, sizeof(struct fsl_mc_io),
RTE_CACHE_LINE_SIZE);
if (!dpio_dev->dpio) {
@@ -490,7 +499,7 @@ dpaa2_create_dpio_device(int vdev_fd,
io_space_count++;
dpio_dev->index = io_space_count;
- if (rte_dpaa2_vfio_setup_intr(&dpio_dev->intr_handle, vdev_fd, 1)) {
+ if (rte_dpaa2_vfio_setup_intr(dpio_dev->intr_handle, vdev_fd, 1)) {
DPAA2_BUS_ERR("Fail to setup interrupt for %d",
dpio_dev->hw_id);
goto err;
@@ -538,6 +547,8 @@ dpaa2_create_dpio_device(int vdev_fd,
rte_free(dpio_dev->dpio);
}
+ if (dpio_dev->intr_handle)
+ rte_intr_handle_instance_free(dpio_dev->intr_handle);
rte_free(dpio_dev);
/* For each element in the list, cleanup */
@@ -549,6 +560,8 @@ dpaa2_create_dpio_device(int vdev_fd,
dpio_dev->token);
rte_free(dpio_dev->dpio);
}
+ if (dpio_dev->intr_handle)
+ rte_intr_handle_instance_free(dpio_dev->intr_handle);
rte_free(dpio_dev);
}
diff --git a/drivers/bus/fslmc/portal/dpaa2_hw_pvt.h b/drivers/bus/fslmc/portal/dpaa2_hw_pvt.h
index 037c841ef5..b1bba1ac36 100644
--- a/drivers/bus/fslmc/portal/dpaa2_hw_pvt.h
+++ b/drivers/bus/fslmc/portal/dpaa2_hw_pvt.h
@@ -116,7 +116,7 @@ struct dpaa2_dpio_dev {
uintptr_t qbman_portal_ci_paddr;
/**< Physical address of Cache Inhibit Area */
uintptr_t ci_size; /**< Size of the CI region */
- struct rte_intr_handle intr_handle; /* Interrupt related info */
+ struct rte_intr_handle *intr_handle; /* Interrupt related info */
int32_t epoll_fd; /**< File descriptor created for interrupt polling */
int32_t hw_id; /**< An unique ID of this DPIO device instance */
struct dpaa2_portal_dqrr dpaa2_held_bufs;
diff --git a/drivers/bus/fslmc/rte_fslmc.h b/drivers/bus/fslmc/rte_fslmc.h
index 37d45dffe5..e46110b3ea 100644
--- a/drivers/bus/fslmc/rte_fslmc.h
+++ b/drivers/bus/fslmc/rte_fslmc.h
@@ -125,7 +125,7 @@ struct rte_dpaa2_device {
};
enum rte_dpaa2_dev_type dev_type; /**< Device Type */
uint16_t object_id; /**< DPAA2 Object ID */
- struct rte_intr_handle intr_handle; /**< Interrupt handle */
+ struct rte_intr_handle *intr_handle; /**< Interrupt handle */
struct rte_dpaa2_driver *driver; /**< Associated driver */
char name[FSLMC_OBJECT_MAX_LEN]; /**< DPAA2 Object name*/
};
diff --git a/drivers/bus/ifpga/ifpga_bus.c b/drivers/bus/ifpga/ifpga_bus.c
index 62887da2d8..bebb584796 100644
--- a/drivers/bus/ifpga/ifpga_bus.c
+++ b/drivers/bus/ifpga/ifpga_bus.c
@@ -161,6 +161,15 @@ ifpga_scan_one(struct rte_rawdev *rawdev,
afu_dev->id.uuid.uuid_high = 0;
afu_dev->id.port = afu_pr_conf.afu_id.port;
+ /* Allocate interrupt instance */
+ afu_dev->intr_handle =
+ rte_intr_handle_instance_alloc(RTE_INTR_HANDLE_DEFAULT_SIZE,
+ false);
+ if (!afu_dev->intr_handle) {
+ IFPGA_BUS_ERR("Failed to allocate intr handle");
+ goto end;
+ }
+
if (rawdev->dev_ops && rawdev->dev_ops->dev_info_get)
rawdev->dev_ops->dev_info_get(rawdev, afu_dev, sizeof(*afu_dev));
@@ -189,8 +198,11 @@ ifpga_scan_one(struct rte_rawdev *rawdev,
rte_kvargs_free(kvlist);
if (path)
free(path);
- if (afu_dev)
+ if (afu_dev) {
+ if (afu_dev->intr_handle)
+ rte_intr_handle_instance_free(afu_dev->intr_handle);
free(afu_dev);
+ }
return NULL;
}
@@ -396,6 +408,8 @@ ifpga_unplug(struct rte_device *dev)
TAILQ_REMOVE(&ifpga_afu_dev_list, afu_dev, next);
rte_devargs_remove(dev->devargs);
+ if (afu_dev->intr_handle)
+ rte_intr_handle_instance_free(afu_dev->intr_handle);
free(afu_dev);
return 0;
diff --git a/drivers/bus/ifpga/rte_bus_ifpga.h b/drivers/bus/ifpga/rte_bus_ifpga.h
index b43084155a..38caaf2e8f 100644
--- a/drivers/bus/ifpga/rte_bus_ifpga.h
+++ b/drivers/bus/ifpga/rte_bus_ifpga.h
@@ -79,7 +79,7 @@ struct rte_afu_device {
struct rte_mem_resource mem_resource[PCI_MAX_RESOURCE];
/**< AFU Memory Resource */
struct rte_afu_shared shared;
- struct rte_intr_handle intr_handle; /**< Interrupt handle */
+ struct rte_intr_handle *intr_handle; /**< Interrupt handle */
struct rte_afu_driver *driver; /**< Associated driver */
char path[IFPGA_BUS_BITSTREAM_PATH_MAX_LEN];
} __rte_packed;
diff --git a/drivers/bus/pci/bsd/pci.c b/drivers/bus/pci/bsd/pci.c
index d189bff311..8a84eb15ea 100644
--- a/drivers/bus/pci/bsd/pci.c
+++ b/drivers/bus/pci/bsd/pci.c
@@ -95,10 +95,11 @@ pci_uio_free_resource(struct rte_pci_device *dev,
{
rte_free(uio_res);
- if (dev->intr_handle.fd) {
- close(dev->intr_handle.fd);
- dev->intr_handle.fd = -1;
- dev->intr_handle.type = RTE_INTR_HANDLE_UNKNOWN;
+ if (rte_intr_handle_fd_get(dev->intr_handle)) {
+ close(rte_intr_handle_fd_get(dev->intr_handle));
+ rte_intr_handle_fd_set(dev->intr_handle, -1);
+ rte_intr_handle_type_set(dev->intr_handle,
+ RTE_INTR_HANDLE_UNKNOWN);
}
}
@@ -121,13 +122,19 @@ pci_uio_alloc_resource(struct rte_pci_device *dev,
}
/* save fd if in primary process */
- dev->intr_handle.fd = open(devname, O_RDWR);
- if (dev->intr_handle.fd < 0) {
+ if (rte_intr_handle_fd_set(dev->intr_handle, open(devname, O_RDWR))) {
+ RTE_LOG(WARNING, EAL, "Failed to save fd");
+ goto error;
+ }
+
+ if (rte_intr_handle_fd_get(dev->intr_handle) < 0) {
RTE_LOG(ERR, EAL, "Cannot open %s: %s\n",
devname, strerror(errno));
goto error;
}
- dev->intr_handle.type = RTE_INTR_HANDLE_UIO;
+
+ if (rte_intr_handle_type_set(dev->intr_handle, RTE_INTR_HANDLE_UIO))
+ goto error;
/* allocate the mapping details for secondary processes*/
*uio_res = rte_zmalloc("UIO_RES", sizeof(**uio_res), 0);
diff --git a/drivers/bus/pci/linux/pci.c b/drivers/bus/pci/linux/pci.c
index 4d261b55ee..e521459870 100644
--- a/drivers/bus/pci/linux/pci.c
+++ b/drivers/bus/pci/linux/pci.c
@@ -645,7 +645,7 @@ int rte_pci_read_config(const struct rte_pci_device *device,
void *buf, size_t len, off_t offset)
{
char devname[RTE_DEV_NAME_MAX_LEN] = "";
- const struct rte_intr_handle *intr_handle = &device->intr_handle;
+ const struct rte_intr_handle *intr_handle = device->intr_handle;
switch (device->kdrv) {
case RTE_PCI_KDRV_IGB_UIO:
@@ -669,7 +669,7 @@ int rte_pci_write_config(const struct rte_pci_device *device,
const void *buf, size_t len, off_t offset)
{
char devname[RTE_DEV_NAME_MAX_LEN] = "";
- const struct rte_intr_handle *intr_handle = &device->intr_handle;
+ const struct rte_intr_handle *intr_handle = device->intr_handle;
switch (device->kdrv) {
case RTE_PCI_KDRV_IGB_UIO:
diff --git a/drivers/bus/pci/linux/pci_uio.c b/drivers/bus/pci/linux/pci_uio.c
index 39ebeac2a0..2529377f9b 100644
--- a/drivers/bus/pci/linux/pci_uio.c
+++ b/drivers/bus/pci/linux/pci_uio.c
@@ -35,14 +35,18 @@ int
pci_uio_read_config(const struct rte_intr_handle *intr_handle,
void *buf, size_t len, off_t offset)
{
- return pread(intr_handle->uio_cfg_fd, buf, len, offset);
+ int uio_cfg_fd = rte_intr_handle_dev_fd_get(intr_handle);
+
+ return pread(uio_cfg_fd, buf, len, offset);
}
int
pci_uio_write_config(const struct rte_intr_handle *intr_handle,
const void *buf, size_t len, off_t offset)
{
- return pwrite(intr_handle->uio_cfg_fd, buf, len, offset);
+ int uio_cfg_fd = rte_intr_handle_dev_fd_get(intr_handle);
+
+ return pwrite(uio_cfg_fd, buf, len, offset);
}
static int
@@ -198,16 +202,20 @@ void
pci_uio_free_resource(struct rte_pci_device *dev,
struct mapped_pci_resource *uio_res)
{
+ int uio_cfg_fd = rte_intr_handle_dev_fd_get(dev->intr_handle);
+
rte_free(uio_res);
- if (dev->intr_handle.uio_cfg_fd >= 0) {
- close(dev->intr_handle.uio_cfg_fd);
- dev->intr_handle.uio_cfg_fd = -1;
+ if (uio_cfg_fd >= 0) {
+ close(uio_cfg_fd);
+ rte_intr_handle_dev_fd_set(dev->intr_handle, -1);
}
- if (dev->intr_handle.fd >= 0) {
- close(dev->intr_handle.fd);
- dev->intr_handle.fd = -1;
- dev->intr_handle.type = RTE_INTR_HANDLE_UNKNOWN;
+
+ if (rte_intr_handle_fd_get(dev->intr_handle) >= 0) {
+ close(rte_intr_handle_fd_get(dev->intr_handle));
+ rte_intr_handle_fd_set(dev->intr_handle, -1);
+ rte_intr_handle_type_set(dev->intr_handle,
+ RTE_INTR_HANDLE_UNKNOWN);
}
}
@@ -218,7 +226,7 @@ pci_uio_alloc_resource(struct rte_pci_device *dev,
char dirname[PATH_MAX];
char cfgname[PATH_MAX];
char devname[PATH_MAX]; /* contains the /dev/uioX */
- int uio_num;
+ int uio_num, fd, uio_cfg_fd;
struct rte_pci_addr *loc;
loc = &dev->addr;
@@ -233,29 +241,40 @@ pci_uio_alloc_resource(struct rte_pci_device *dev,
snprintf(devname, sizeof(devname), "/dev/uio%u", uio_num);
/* save fd if in primary process */
- dev->intr_handle.fd = open(devname, O_RDWR);
- if (dev->intr_handle.fd < 0) {
+ fd = open(devname, O_RDWR);
+ if (fd < 0) {
RTE_LOG(ERR, EAL, "Cannot open %s: %s\n",
devname, strerror(errno));
goto error;
}
+ if (rte_intr_handle_fd_set(dev->intr_handle, fd))
+ goto error;
+
snprintf(cfgname, sizeof(cfgname),
"/sys/class/uio/uio%u/device/config", uio_num);
- dev->intr_handle.uio_cfg_fd = open(cfgname, O_RDWR);
- if (dev->intr_handle.uio_cfg_fd < 0) {
+
+ uio_cfg_fd = open(cfgname, O_RDWR);
+ if (uio_cfg_fd < 0) {
RTE_LOG(ERR, EAL, "Cannot open %s: %s\n",
cfgname, strerror(errno));
goto error;
}
- if (dev->kdrv == RTE_PCI_KDRV_IGB_UIO)
- dev->intr_handle.type = RTE_INTR_HANDLE_UIO;
- else {
- dev->intr_handle.type = RTE_INTR_HANDLE_UIO_INTX;
+ if (rte_intr_handle_dev_fd_set(dev->intr_handle, uio_cfg_fd))
+ goto error;
+
+ if (dev->kdrv == RTE_PCI_KDRV_IGB_UIO) {
+ if (rte_intr_handle_type_set(dev->intr_handle,
+ RTE_INTR_HANDLE_UIO))
+ goto error;
+ } else {
+ if (rte_intr_handle_type_set(dev->intr_handle,
+ RTE_INTR_HANDLE_UIO_INTX))
+ goto error;
/* set bus master that is not done by uio_pci_generic */
- if (pci_uio_set_bus_master(dev->intr_handle.uio_cfg_fd)) {
+ if (pci_uio_set_bus_master(uio_cfg_fd)) {
RTE_LOG(ERR, EAL, "Cannot set up bus mastering!\n");
goto error;
}
@@ -381,7 +400,7 @@ pci_uio_ioport_map(struct rte_pci_device *dev, int bar,
char buf[BUFSIZ];
uint64_t phys_addr, end_addr, flags;
unsigned long base;
- int i;
+ int i, fd;
/* open and read addresses of the corresponding resource in sysfs */
snprintf(filename, sizeof(filename), "%s/" PCI_PRI_FMT "/resource",
@@ -427,7 +446,8 @@ pci_uio_ioport_map(struct rte_pci_device *dev, int bar,
}
/* FIXME only for primary process ? */
- if (dev->intr_handle.type == RTE_INTR_HANDLE_UNKNOWN) {
+ if (rte_intr_handle_type_get(dev->intr_handle) ==
+ RTE_INTR_HANDLE_UNKNOWN) {
int uio_num = pci_get_uio_dev(dev, dirname, sizeof(dirname), 0);
if (uio_num < 0) {
RTE_LOG(ERR, EAL, "cannot open %s: %s\n",
@@ -436,13 +456,18 @@ pci_uio_ioport_map(struct rte_pci_device *dev, int bar,
}
snprintf(filename, sizeof(filename), "/dev/uio%u", uio_num);
- dev->intr_handle.fd = open(filename, O_RDWR);
- if (dev->intr_handle.fd < 0) {
+ fd = open(filename, O_RDWR);
+ if (fd < 0) {
RTE_LOG(ERR, EAL, "Cannot open %s: %s\n",
filename, strerror(errno));
goto error;
}
- dev->intr_handle.type = RTE_INTR_HANDLE_UIO;
+ if (rte_intr_handle_fd_set(dev->intr_handle, fd))
+ goto error;
+
+ if (rte_intr_handle_type_set(dev->intr_handle,
+ RTE_INTR_HANDLE_UIO))
+ goto error;
}
RTE_LOG(DEBUG, EAL, "PCI Port IO found start=0x%lx\n", base);
diff --git a/drivers/bus/pci/linux/pci_vfio.c b/drivers/bus/pci/linux/pci_vfio.c
index a024269140..f920163580 100644
--- a/drivers/bus/pci/linux/pci_vfio.c
+++ b/drivers/bus/pci/linux/pci_vfio.c
@@ -47,7 +47,9 @@ int
pci_vfio_read_config(const struct rte_intr_handle *intr_handle,
void *buf, size_t len, off_t offs)
{
- return pread64(intr_handle->vfio_dev_fd, buf, len,
+ int vfio_dev_fd = rte_intr_handle_dev_fd_get(intr_handle);
+
+ return pread64(vfio_dev_fd, buf, len,
VFIO_GET_REGION_ADDR(VFIO_PCI_CONFIG_REGION_INDEX) + offs);
}
@@ -55,7 +57,9 @@ int
pci_vfio_write_config(const struct rte_intr_handle *intr_handle,
const void *buf, size_t len, off_t offs)
{
- return pwrite64(intr_handle->vfio_dev_fd, buf, len,
+ int vfio_dev_fd = rte_intr_handle_dev_fd_get(intr_handle);
+
+ return pwrite64(vfio_dev_fd, buf, len,
VFIO_GET_REGION_ADDR(VFIO_PCI_CONFIG_REGION_INDEX) + offs);
}
@@ -281,21 +285,27 @@ pci_vfio_setup_interrupts(struct rte_pci_device *dev, int vfio_dev_fd)
return -1;
}
- dev->intr_handle.fd = fd;
- dev->intr_handle.vfio_dev_fd = vfio_dev_fd;
+ if (rte_intr_handle_fd_set(dev->intr_handle, fd))
+ return -1;
+
+ if (rte_intr_handle_dev_fd_set(dev->intr_handle, vfio_dev_fd))
+ return -1;
switch (i) {
case VFIO_PCI_MSIX_IRQ_INDEX:
intr_mode = RTE_INTR_MODE_MSIX;
- dev->intr_handle.type = RTE_INTR_HANDLE_VFIO_MSIX;
+ rte_intr_handle_type_set(dev->intr_handle,
+ RTE_INTR_HANDLE_VFIO_MSIX);
break;
case VFIO_PCI_MSI_IRQ_INDEX:
intr_mode = RTE_INTR_MODE_MSI;
- dev->intr_handle.type = RTE_INTR_HANDLE_VFIO_MSI;
+ rte_intr_handle_type_set(dev->intr_handle,
+ RTE_INTR_HANDLE_VFIO_MSI);
break;
case VFIO_PCI_INTX_IRQ_INDEX:
intr_mode = RTE_INTR_MODE_LEGACY;
- dev->intr_handle.type = RTE_INTR_HANDLE_VFIO_LEGACY;
+ rte_intr_handle_type_set(dev->intr_handle,
+ RTE_INTR_HANDLE_VFIO_LEGACY);
break;
default:
RTE_LOG(ERR, EAL, "Unknown interrupt type!\n");
@@ -362,11 +372,18 @@ pci_vfio_enable_notifier(struct rte_pci_device *dev, int vfio_dev_fd)
return -1;
}
- dev->vfio_req_intr_handle.fd = fd;
- dev->vfio_req_intr_handle.type = RTE_INTR_HANDLE_VFIO_REQ;
- dev->vfio_req_intr_handle.vfio_dev_fd = vfio_dev_fd;
+ if (rte_intr_handle_fd_set(dev->vfio_req_intr_handle, fd))
+ return -1;
+
+ if (rte_intr_handle_type_set(dev->vfio_req_intr_handle,
+ RTE_INTR_HANDLE_VFIO_REQ))
+ return -1;
+
+ if (rte_intr_handle_dev_fd_set(dev->vfio_req_intr_handle, vfio_dev_fd))
+ return -1;
+
- ret = rte_intr_callback_register(&dev->vfio_req_intr_handle,
+ ret = rte_intr_callback_register(dev->vfio_req_intr_handle,
pci_vfio_req_handler,
(void *)&dev->device);
if (ret) {
@@ -374,10 +391,10 @@ pci_vfio_enable_notifier(struct rte_pci_device *dev, int vfio_dev_fd)
goto error;
}
- ret = rte_intr_enable(&dev->vfio_req_intr_handle);
+ ret = rte_intr_enable(dev->vfio_req_intr_handle);
if (ret) {
RTE_LOG(ERR, EAL, "Fail to enable req notifier.\n");
- ret = rte_intr_callback_unregister(&dev->vfio_req_intr_handle,
+ ret = rte_intr_callback_unregister(dev->vfio_req_intr_handle,
pci_vfio_req_handler,
(void *)&dev->device);
if (ret < 0)
@@ -390,9 +407,10 @@ pci_vfio_enable_notifier(struct rte_pci_device *dev, int vfio_dev_fd)
error:
close(fd);
- dev->vfio_req_intr_handle.fd = -1;
- dev->vfio_req_intr_handle.type = RTE_INTR_HANDLE_UNKNOWN;
- dev->vfio_req_intr_handle.vfio_dev_fd = -1;
+ rte_intr_handle_fd_set(dev->vfio_req_intr_handle, -1);
+ rte_intr_handle_type_set(dev->vfio_req_intr_handle,
+ RTE_INTR_HANDLE_UNKNOWN);
+ rte_intr_handle_dev_fd_set(dev->vfio_req_intr_handle, -1);
return -1;
}
@@ -403,13 +421,13 @@ pci_vfio_disable_notifier(struct rte_pci_device *dev)
{
int ret;
- ret = rte_intr_disable(&dev->vfio_req_intr_handle);
+ ret = rte_intr_disable(dev->vfio_req_intr_handle);
if (ret) {
RTE_LOG(ERR, EAL, "fail to disable req notifier.\n");
return -1;
}
- ret = rte_intr_callback_unregister_sync(&dev->vfio_req_intr_handle,
+ ret = rte_intr_callback_unregister_sync(dev->vfio_req_intr_handle,
pci_vfio_req_handler,
(void *)&dev->device);
if (ret < 0) {
@@ -418,11 +436,12 @@ pci_vfio_disable_notifier(struct rte_pci_device *dev)
return -1;
}
- close(dev->vfio_req_intr_handle.fd);
+ close(rte_intr_handle_fd_get(dev->vfio_req_intr_handle));
- dev->vfio_req_intr_handle.fd = -1;
- dev->vfio_req_intr_handle.type = RTE_INTR_HANDLE_UNKNOWN;
- dev->vfio_req_intr_handle.vfio_dev_fd = -1;
+ rte_intr_handle_fd_set(dev->vfio_req_intr_handle, -1);
+ rte_intr_handle_type_set(dev->vfio_req_intr_handle,
+ RTE_INTR_HANDLE_UNKNOWN);
+ rte_intr_handle_dev_fd_set(dev->vfio_req_intr_handle, -1);
return 0;
}
@@ -705,9 +724,13 @@ pci_vfio_map_resource_primary(struct rte_pci_device *dev)
struct pci_map *maps;
- dev->intr_handle.fd = -1;
+ if (rte_intr_handle_fd_set(dev->intr_handle, -1))
+ return -1;
+
#ifdef HAVE_VFIO_DEV_REQ_INTERFACE
- dev->vfio_req_intr_handle.fd = -1;
+ if (rte_intr_handle_fd_set(dev->vfio_req_intr_handle, -1))
+ return -1;
+
#endif
/* store PCI address string */
@@ -854,9 +877,12 @@ pci_vfio_map_resource_secondary(struct rte_pci_device *dev)
struct pci_map *maps;
- dev->intr_handle.fd = -1;
+ if (rte_intr_handle_fd_set(dev->intr_handle, -1))
+ return -1;
+
#ifdef HAVE_VFIO_DEV_REQ_INTERFACE
- dev->vfio_req_intr_handle.fd = -1;
+ if (rte_intr_handle_fd_set(dev->vfio_req_intr_handle, -1))
+ return -1;
#endif
/* store PCI address string */
@@ -897,9 +923,11 @@ pci_vfio_map_resource_secondary(struct rte_pci_device *dev)
}
/* we need save vfio_dev_fd, so it can be used during release */
- dev->intr_handle.vfio_dev_fd = vfio_dev_fd;
+ if (rte_intr_handle_dev_fd_set(dev->intr_handle, vfio_dev_fd))
+ goto err_vfio_dev_fd;
#ifdef HAVE_VFIO_DEV_REQ_INTERFACE
- dev->vfio_req_intr_handle.vfio_dev_fd = vfio_dev_fd;
+ if (rte_intr_handle_dev_fd_set(dev->vfio_req_intr_handle, vfio_dev_fd))
+ goto err_vfio_dev_fd;
#endif
return 0;
@@ -968,7 +996,7 @@ pci_vfio_unmap_resource_primary(struct rte_pci_device *dev)
struct rte_pci_addr *loc = &dev->addr;
struct mapped_pci_resource *vfio_res = NULL;
struct mapped_pci_res_list *vfio_res_list;
- int ret;
+ int ret, vfio_dev_fd;
/* store PCI address string */
snprintf(pci_addr, sizeof(pci_addr), PCI_PRI_FMT,
@@ -982,20 +1010,21 @@ pci_vfio_unmap_resource_primary(struct rte_pci_device *dev)
}
#endif
- if (close(dev->intr_handle.fd) < 0) {
+ if (close(rte_intr_handle_fd_get(dev->intr_handle)) < 0) {
RTE_LOG(INFO, EAL, "Error when closing eventfd file descriptor for %s\n",
pci_addr);
return -1;
}
- if (pci_vfio_set_bus_master(dev->intr_handle.vfio_dev_fd, false)) {
+ vfio_dev_fd = rte_intr_handle_dev_fd_get(dev->intr_handle);
+ if (pci_vfio_set_bus_master(vfio_dev_fd, false)) {
RTE_LOG(ERR, EAL, "%s cannot unset bus mastering for PCI device!\n",
pci_addr);
return -1;
}
ret = rte_vfio_release_device(rte_pci_get_sysfs_path(), pci_addr,
- dev->intr_handle.vfio_dev_fd);
+ vfio_dev_fd);
if (ret < 0) {
RTE_LOG(ERR, EAL, "Cannot release VFIO device\n");
return ret;
@@ -1024,14 +1053,15 @@ pci_vfio_unmap_resource_secondary(struct rte_pci_device *dev)
struct rte_pci_addr *loc = &dev->addr;
struct mapped_pci_resource *vfio_res = NULL;
struct mapped_pci_res_list *vfio_res_list;
- int ret;
+ int ret, vfio_dev_fd;
/* store PCI address string */
snprintf(pci_addr, sizeof(pci_addr), PCI_PRI_FMT,
loc->domain, loc->bus, loc->devid, loc->function);
+ vfio_dev_fd = rte_intr_handle_dev_fd_get(dev->intr_handle);
ret = rte_vfio_release_device(rte_pci_get_sysfs_path(), pci_addr,
- dev->intr_handle.vfio_dev_fd);
+ vfio_dev_fd);
if (ret < 0) {
RTE_LOG(ERR, EAL, "Cannot release VFIO device\n");
return ret;
@@ -1079,9 +1109,10 @@ void
pci_vfio_ioport_read(struct rte_pci_ioport *p,
void *data, size_t len, off_t offset)
{
- const struct rte_intr_handle *intr_handle = &p->dev->intr_handle;
+ const struct rte_intr_handle *intr_handle = p->dev->intr_handle;
+ int vfio_dev_fd = rte_intr_handle_dev_fd_get(intr_handle);
- if (pread64(intr_handle->vfio_dev_fd, data,
+ if (pread64(vfio_dev_fd, data,
len, p->base + offset) <= 0)
RTE_LOG(ERR, EAL,
"Can't read from PCI bar (%" PRIu64 ") : offset (%x)\n",
@@ -1092,9 +1123,10 @@ void
pci_vfio_ioport_write(struct rte_pci_ioport *p,
const void *data, size_t len, off_t offset)
{
- const struct rte_intr_handle *intr_handle = &p->dev->intr_handle;
+ const struct rte_intr_handle *intr_handle = p->dev->intr_handle;
+ int vfio_dev_fd = rte_intr_handle_dev_fd_get(intr_handle);
- if (pwrite64(intr_handle->vfio_dev_fd, data,
+ if (pwrite64(vfio_dev_fd, data,
len, p->base + offset) <= 0)
RTE_LOG(ERR, EAL,
"Can't write to PCI bar (%" PRIu64 ") : offset (%x)\n",
diff --git a/drivers/bus/pci/pci_common.c b/drivers/bus/pci/pci_common.c
index 3406e03b29..b3feb4e40e 100644
--- a/drivers/bus/pci/pci_common.c
+++ b/drivers/bus/pci/pci_common.c
@@ -230,6 +230,24 @@ rte_pci_probe_one_driver(struct rte_pci_driver *dr,
}
if (!already_probed && (dr->drv_flags & RTE_PCI_DRV_NEED_MAPPING)) {
+ /* Allocate interrupt instance for pci device */
+ dev->intr_handle = rte_intr_handle_instance_alloc(
+ RTE_INTR_HANDLE_DEFAULT_SIZE, false);
+ if (!dev->intr_handle) {
+ RTE_LOG(ERR, EAL,
+ "Failed to create interrupt instance for %s\n",
+ dev->device.name);
+ return -ENOMEM;
+ }
+
+ dev->vfio_req_intr_handle = rte_intr_handle_instance_alloc(
+ RTE_INTR_HANDLE_DEFAULT_SIZE, false);
+ if (!dev->vfio_req_intr_handle) {
+ RTE_LOG(ERR, EAL,
+ "Failed to create vfio req interrupt instance for %s\n",
+ dev->device.name);
+ return -ENOMEM;
+ }
/* map resources for devices that use igb_uio */
ret = rte_pci_map_device(dev);
if (ret != 0) {
@@ -253,8 +271,12 @@ rte_pci_probe_one_driver(struct rte_pci_driver *dr,
* driver needs mapped resources.
*/
!(ret > 0 &&
- (dr->drv_flags & RTE_PCI_DRV_KEEP_MAPPED_RES)))
+ (dr->drv_flags & RTE_PCI_DRV_KEEP_MAPPED_RES))) {
rte_pci_unmap_device(dev);
+ rte_intr_handle_instance_free(dev->intr_handle);
+ rte_intr_handle_instance_free(
+ dev->vfio_req_intr_handle);
+ }
} else {
dev->device.driver = &dr->driver;
}
@@ -296,9 +318,12 @@ rte_pci_detach_dev(struct rte_pci_device *dev)
dev->driver = NULL;
dev->device.driver = NULL;
- if (dr->drv_flags & RTE_PCI_DRV_NEED_MAPPING)
+ if (dr->drv_flags & RTE_PCI_DRV_NEED_MAPPING) {
/* unmap resources for devices that use igb_uio */
rte_pci_unmap_device(dev);
+ rte_intr_handle_instance_free(dev->intr_handle);
+ rte_intr_handle_instance_free(dev->vfio_req_intr_handle);
+ }
return 0;
}
diff --git a/drivers/bus/pci/pci_common_uio.c b/drivers/bus/pci/pci_common_uio.c
index 318f9a1d55..9b9a2e4a20 100644
--- a/drivers/bus/pci/pci_common_uio.c
+++ b/drivers/bus/pci/pci_common_uio.c
@@ -90,8 +90,11 @@ pci_uio_map_resource(struct rte_pci_device *dev)
struct mapped_pci_res_list *uio_res_list =
RTE_TAILQ_CAST(rte_uio_tailq.head, mapped_pci_res_list);
- dev->intr_handle.fd = -1;
- dev->intr_handle.uio_cfg_fd = -1;
+ if (rte_intr_handle_fd_set(dev->intr_handle, -1))
+ return -1;
+
+ if (rte_intr_handle_dev_fd_set(dev->intr_handle, -1))
+ return -1;
/* secondary processes - use already recorded details */
if (rte_eal_process_type() != RTE_PROC_PRIMARY)
@@ -207,6 +210,7 @@ pci_uio_unmap_resource(struct rte_pci_device *dev)
struct mapped_pci_resource *uio_res;
struct mapped_pci_res_list *uio_res_list =
RTE_TAILQ_CAST(rte_uio_tailq.head, mapped_pci_res_list);
+ int uio_cfg_fd;
if (dev == NULL)
return;
@@ -229,12 +233,13 @@ pci_uio_unmap_resource(struct rte_pci_device *dev)
rte_free(uio_res);
/* close fd if in primary process */
- close(dev->intr_handle.fd);
- if (dev->intr_handle.uio_cfg_fd >= 0) {
- close(dev->intr_handle.uio_cfg_fd);
- dev->intr_handle.uio_cfg_fd = -1;
+ close(rte_intr_handle_fd_get(dev->intr_handle));
+ uio_cfg_fd = rte_intr_handle_dev_fd_get(dev->intr_handle);
+ if (uio_cfg_fd >= 0) {
+ close(uio_cfg_fd);
+ rte_intr_handle_dev_fd_set(dev->intr_handle, -1);
}
- dev->intr_handle.fd = -1;
- dev->intr_handle.type = RTE_INTR_HANDLE_UNKNOWN;
+ rte_intr_handle_fd_set(dev->intr_handle, -1);
+ rte_intr_handle_type_set(dev->intr_handle, RTE_INTR_HANDLE_UNKNOWN);
}
diff --git a/drivers/bus/pci/rte_bus_pci.h b/drivers/bus/pci/rte_bus_pci.h
index 583470e831..fe679c467c 100644
--- a/drivers/bus/pci/rte_bus_pci.h
+++ b/drivers/bus/pci/rte_bus_pci.h
@@ -70,12 +70,12 @@ struct rte_pci_device {
struct rte_pci_id id; /**< PCI ID. */
struct rte_mem_resource mem_resource[PCI_MAX_RESOURCE];
/**< PCI Memory Resource */
- struct rte_intr_handle intr_handle; /**< Interrupt handle */
+ struct rte_intr_handle *intr_handle; /**< Interrupt handle */
struct rte_pci_driver *driver; /**< PCI driver used in probing */
uint16_t max_vfs; /**< sriov enable if not zero */
enum rte_pci_kernel_driver kdrv; /**< Kernel driver passthrough */
char name[PCI_PRI_STR_SIZE+1]; /**< PCI location (ASCII) */
- struct rte_intr_handle vfio_req_intr_handle;
+ struct rte_intr_handle *vfio_req_intr_handle;
/**< Handler of VFIO request interrupt */
};
diff --git a/drivers/bus/vmbus/linux/vmbus_bus.c b/drivers/bus/vmbus/linux/vmbus_bus.c
index 3c924eee14..bce94d5d72 100644
--- a/drivers/bus/vmbus/linux/vmbus_bus.c
+++ b/drivers/bus/vmbus/linux/vmbus_bus.c
@@ -297,6 +297,13 @@ vmbus_scan_one(const char *name)
dev->device.devargs = vmbus_devargs_lookup(dev);
+ /* Allocate interrupt handle instance */
+ dev->intr_handle =
+ rte_intr_handle_instance_alloc(RTE_INTR_HANDLE_DEFAULT_SIZE,
+ false);
+ if (!dev->intr_handle)
+ goto error;
+
/* device is valid, add in list (sorted) */
VMBUS_LOG(DEBUG, "Adding vmbus device %s", name);
diff --git a/drivers/bus/vmbus/linux/vmbus_uio.c b/drivers/bus/vmbus/linux/vmbus_uio.c
index b52ca5bf1d..f506811d98 100644
--- a/drivers/bus/vmbus/linux/vmbus_uio.c
+++ b/drivers/bus/vmbus/linux/vmbus_uio.c
@@ -29,9 +29,11 @@ static void *vmbus_map_addr;
/* Control interrupts */
void vmbus_uio_irq_control(struct rte_vmbus_device *dev, int32_t onoff)
{
- if (write(dev->intr_handle.fd, &onoff, sizeof(onoff)) < 0) {
+ if (write(rte_intr_handle_fd_get(dev->intr_handle), &onoff,
+ sizeof(onoff)) < 0) {
VMBUS_LOG(ERR, "cannot write to %d:%s",
- dev->intr_handle.fd, strerror(errno));
+ rte_intr_handle_fd_get(dev->intr_handle),
+ strerror(errno));
}
}
@@ -40,7 +42,8 @@ int vmbus_uio_irq_read(struct rte_vmbus_device *dev)
int32_t count;
int cc;
- cc = read(dev->intr_handle.fd, &count, sizeof(count));
+ cc = read(rte_intr_handle_fd_get(dev->intr_handle), &count,
+ sizeof(count));
if (cc < (int)sizeof(count)) {
if (cc < 0) {
VMBUS_LOG(ERR, "IRQ read failed %s",
@@ -60,15 +63,16 @@ vmbus_uio_free_resource(struct rte_vmbus_device *dev,
{
rte_free(uio_res);
- if (dev->intr_handle.uio_cfg_fd >= 0) {
- close(dev->intr_handle.uio_cfg_fd);
- dev->intr_handle.uio_cfg_fd = -1;
+ if (rte_intr_handle_dev_fd_get(dev->intr_handle) >= 0) {
+ close(rte_intr_handle_dev_fd_get(dev->intr_handle));
+ rte_intr_handle_dev_fd_set(dev->intr_handle, -1);
}
- if (dev->intr_handle.fd >= 0) {
- close(dev->intr_handle.fd);
- dev->intr_handle.fd = -1;
- dev->intr_handle.type = RTE_INTR_HANDLE_UNKNOWN;
+ if (rte_intr_handle_fd_get(dev->intr_handle) >= 0) {
+ close(rte_intr_handle_fd_get(dev->intr_handle));
+ rte_intr_handle_fd_set(dev->intr_handle, -1);
+ rte_intr_handle_type_set(dev->intr_handle,
+ RTE_INTR_HANDLE_UNKNOWN);
}
}
@@ -77,16 +81,23 @@ vmbus_uio_alloc_resource(struct rte_vmbus_device *dev,
struct mapped_vmbus_resource **uio_res)
{
char devname[PATH_MAX]; /* contains the /dev/uioX */
+ int fd;
/* save fd if in primary process */
snprintf(devname, sizeof(devname), "/dev/uio%u", dev->uio_num);
- dev->intr_handle.fd = open(devname, O_RDWR);
- if (dev->intr_handle.fd < 0) {
+ fd = open(devname, O_RDWR);
+ if (fd < 0) {
VMBUS_LOG(ERR, "Cannot open %s: %s",
devname, strerror(errno));
goto error;
}
- dev->intr_handle.type = RTE_INTR_HANDLE_UIO_INTX;
+
+ if (rte_intr_handle_fd_set(dev->intr_handle, fd))
+ goto error;
+
+ if (rte_intr_handle_type_set(dev->intr_handle,
+ RTE_INTR_HANDLE_UIO_INTX))
+ goto error;
/* allocate the mapping details for secondary processes*/
*uio_res = rte_zmalloc("UIO_RES", sizeof(**uio_res), 0);
diff --git a/drivers/bus/vmbus/rte_bus_vmbus.h b/drivers/bus/vmbus/rte_bus_vmbus.h
index 4cf73ce815..07916478ef 100644
--- a/drivers/bus/vmbus/rte_bus_vmbus.h
+++ b/drivers/bus/vmbus/rte_bus_vmbus.h
@@ -74,7 +74,7 @@ struct rte_vmbus_device {
struct vmbus_channel *primary; /**< VMBUS primary channel */
struct vmbus_mon_page *monitor_page; /**< VMBUS monitor page */
- struct rte_intr_handle intr_handle; /**< Interrupt handle */
+ struct rte_intr_handle *intr_handle; /**< Interrupt handle */
struct rte_mem_resource resource[VMBUS_MAX_RESOURCE];
};
diff --git a/drivers/bus/vmbus/vmbus_common_uio.c b/drivers/bus/vmbus/vmbus_common_uio.c
index 8582e32c1d..fb0f051f81 100644
--- a/drivers/bus/vmbus/vmbus_common_uio.c
+++ b/drivers/bus/vmbus/vmbus_common_uio.c
@@ -149,9 +149,15 @@ vmbus_uio_map_resource(struct rte_vmbus_device *dev)
int ret;
/* TODO: handle rescind */
- dev->intr_handle.fd = -1;
- dev->intr_handle.uio_cfg_fd = -1;
- dev->intr_handle.type = RTE_INTR_HANDLE_UNKNOWN;
+ if (rte_intr_handle_fd_set(dev->intr_handle, -1))
+ return -1;
+
+ if (rte_intr_handle_dev_fd_set(dev->intr_handle, -1))
+ return -1;
+
+ if (rte_intr_handle_type_set(dev->intr_handle,
+ RTE_INTR_HANDLE_UNKNOWN))
+ return -1;
/* secondary processes - use already recorded details */
if (rte_eal_process_type() != RTE_PROC_PRIMARY)
@@ -223,12 +229,12 @@ vmbus_uio_unmap_resource(struct rte_vmbus_device *dev)
rte_free(uio_res);
/* close fd if in primary process */
- close(dev->intr_handle.fd);
- if (dev->intr_handle.uio_cfg_fd >= 0) {
- close(dev->intr_handle.uio_cfg_fd);
- dev->intr_handle.uio_cfg_fd = -1;
+ close(rte_intr_handle_fd_get(dev->intr_handle));
+ if (rte_intr_handle_dev_fd_get(dev->intr_handle) >= 0) {
+ close(rte_intr_handle_dev_fd_get(dev->intr_handle));
+ rte_intr_handle_dev_fd_set(dev->intr_handle, -1);
}
- dev->intr_handle.fd = -1;
- dev->intr_handle.type = RTE_INTR_HANDLE_UNKNOWN;
+ rte_intr_handle_fd_set(dev->intr_handle, -1);
+ rte_intr_handle_type_set(dev->intr_handle, RTE_INTR_HANDLE_UNKNOWN);
}
diff --git a/drivers/common/cnxk/roc_cpt.c b/drivers/common/cnxk/roc_cpt.c
index c001497f74..b0d16bf81c 100644
--- a/drivers/common/cnxk/roc_cpt.c
+++ b/drivers/common/cnxk/roc_cpt.c
@@ -62,7 +62,7 @@ cpt_lf_register_misc_irq(struct roc_cpt_lf *lf)
struct plt_intr_handle *handle;
int rc, vec;
- handle = &pci_dev->intr_handle;
+ handle = pci_dev->intr_handle;
vec = lf->msixoff + CPT_LF_INT_VEC_MISC;
/* Clear err interrupt */
@@ -82,7 +82,7 @@ cpt_lf_unregister_misc_irq(struct roc_cpt_lf *lf)
struct plt_intr_handle *handle;
int vec;
- handle = &pci_dev->intr_handle;
+ handle = pci_dev->intr_handle;
vec = lf->msixoff + CPT_LF_INT_VEC_MISC;
/* Clear err interrupt */
@@ -126,7 +126,7 @@ cpt_lf_register_done_irq(struct roc_cpt_lf *lf)
struct plt_intr_handle *handle;
int rc, vec;
- handle = &pci_dev->intr_handle;
+ handle = pci_dev->intr_handle;
vec = lf->msixoff + CPT_LF_INT_VEC_DONE;
@@ -149,7 +149,7 @@ cpt_lf_unregister_done_irq(struct roc_cpt_lf *lf)
struct plt_intr_handle *handle;
int vec;
- handle = &pci_dev->intr_handle;
+ handle = pci_dev->intr_handle;
vec = lf->msixoff + CPT_LF_INT_VEC_DONE;
diff --git a/drivers/common/cnxk/roc_dev.c b/drivers/common/cnxk/roc_dev.c
index c14f189f9b..2dce7936fe 100644
--- a/drivers/common/cnxk/roc_dev.c
+++ b/drivers/common/cnxk/roc_dev.c
@@ -608,7 +608,7 @@ roc_af_pf_mbox_irq(void *param)
static int
mbox_register_pf_irq(struct plt_pci_device *pci_dev, struct dev *dev)
{
- struct plt_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct plt_intr_handle *intr_handle = pci_dev->intr_handle;
int i, rc;
/* HW clear irq */
@@ -658,7 +658,7 @@ mbox_register_pf_irq(struct plt_pci_device *pci_dev, struct dev *dev)
static int
mbox_register_vf_irq(struct plt_pci_device *pci_dev, struct dev *dev)
{
- struct plt_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct plt_intr_handle *intr_handle = pci_dev->intr_handle;
int rc;
/* Clear irq */
@@ -691,7 +691,7 @@ mbox_register_irq(struct plt_pci_device *pci_dev, struct dev *dev)
static void
mbox_unregister_pf_irq(struct plt_pci_device *pci_dev, struct dev *dev)
{
- struct plt_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct plt_intr_handle *intr_handle = pci_dev->intr_handle;
int i;
/* HW clear irq */
@@ -722,7 +722,7 @@ mbox_unregister_pf_irq(struct plt_pci_device *pci_dev, struct dev *dev)
static void
mbox_unregister_vf_irq(struct plt_pci_device *pci_dev, struct dev *dev)
{
- struct plt_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct plt_intr_handle *intr_handle = pci_dev->intr_handle;
/* Clear irq */
plt_write64(~0ull, dev->bar2 + RVU_VF_INT_ENA_W1C);
@@ -806,7 +806,7 @@ roc_pf_vf_flr_irq(void *param)
static int
vf_flr_unregister_irqs(struct plt_pci_device *pci_dev, struct dev *dev)
{
- struct plt_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct plt_intr_handle *intr_handle = pci_dev->intr_handle;
int i;
plt_base_dbg("Unregister VF FLR interrupts for %s", pci_dev->name);
@@ -827,7 +827,7 @@ vf_flr_unregister_irqs(struct plt_pci_device *pci_dev, struct dev *dev)
static int
vf_flr_register_irqs(struct plt_pci_device *pci_dev, struct dev *dev)
{
- struct plt_intr_handle *handle = &pci_dev->intr_handle;
+ struct plt_intr_handle *handle = pci_dev->intr_handle;
int i, rc;
plt_base_dbg("Register VF FLR interrupts for %s", pci_dev->name);
@@ -1143,7 +1143,7 @@ dev_init(struct dev *dev, struct plt_pci_device *pci_dev)
int
dev_fini(struct dev *dev, struct plt_pci_device *pci_dev)
{
- struct plt_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct plt_intr_handle *intr_handle = pci_dev->intr_handle;
struct mbox *mbox;
/* Check if this dev hosts npalf and has 1+ refs */
diff --git a/drivers/common/cnxk/roc_irq.c b/drivers/common/cnxk/roc_irq.c
index 4c2b4c30d7..40c472e7d3 100644
--- a/drivers/common/cnxk/roc_irq.c
+++ b/drivers/common/cnxk/roc_irq.c
@@ -20,11 +20,12 @@ static int
irq_get_info(struct plt_intr_handle *intr_handle)
{
struct vfio_irq_info irq = {.argsz = sizeof(irq)};
- int rc;
+ int rc, vfio_dev_fd;
irq.index = VFIO_PCI_MSIX_IRQ_INDEX;
- rc = ioctl(intr_handle->vfio_dev_fd, VFIO_DEVICE_GET_IRQ_INFO, &irq);
+ vfio_dev_fd = plt_intr_handle_dev_fd_get(intr_handle);
+ rc = ioctl(vfio_dev_fd, VFIO_DEVICE_GET_IRQ_INFO, &irq);
if (rc < 0) {
plt_err("Failed to get IRQ info rc=%d errno=%d", rc, errno);
return rc;
@@ -36,9 +37,11 @@ irq_get_info(struct plt_intr_handle *intr_handle)
if (irq.count > PLT_MAX_RXTX_INTR_VEC_ID) {
plt_err("HW max=%d > PLT_MAX_RXTX_INTR_VEC_ID: %d", irq.count,
PLT_MAX_RXTX_INTR_VEC_ID);
- intr_handle->max_intr = PLT_MAX_RXTX_INTR_VEC_ID;
+ plt_intr_handle_max_intr_set(intr_handle,
+ PLT_MAX_RXTX_INTR_VEC_ID);
} else {
- intr_handle->max_intr = irq.count;
+ if (plt_intr_handle_max_intr_set(intr_handle, irq.count))
+ return -1;
}
return 0;
@@ -49,12 +52,12 @@ irq_config(struct plt_intr_handle *intr_handle, unsigned int vec)
{
char irq_set_buf[MSIX_IRQ_SET_BUF_LEN];
struct vfio_irq_set *irq_set;
+ int len, rc, vfio_dev_fd;
int32_t *fd_ptr;
- int len, rc;
- if (vec > intr_handle->max_intr) {
+ if (vec > (uint32_t)plt_intr_handle_max_intr_get(intr_handle)) {
plt_err("vector=%d greater than max_intr=%d", vec,
- intr_handle->max_intr);
+ plt_intr_handle_max_intr_get(intr_handle));
return -EINVAL;
}
@@ -71,9 +74,10 @@ irq_config(struct plt_intr_handle *intr_handle, unsigned int vec)
/* Use vec fd to set interrupt vectors */
fd_ptr = (int32_t *)&irq_set->data[0];
- fd_ptr[0] = intr_handle->efds[vec];
+ fd_ptr[0] = plt_intr_handle_efds_index_get(intr_handle, vec);
- rc = ioctl(intr_handle->vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
+ vfio_dev_fd = plt_intr_handle_dev_fd_get(intr_handle);
+ rc = ioctl(vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
if (rc)
plt_err("Failed to set_irqs vector=0x%x rc=%d", vec, rc);
@@ -85,23 +89,25 @@ irq_init(struct plt_intr_handle *intr_handle)
{
char irq_set_buf[MSIX_IRQ_SET_BUF_LEN];
struct vfio_irq_set *irq_set;
+ int len, rc, vfio_dev_fd;
int32_t *fd_ptr;
- int len, rc;
uint32_t i;
- if (intr_handle->max_intr > PLT_MAX_RXTX_INTR_VEC_ID) {
+ if (plt_intr_handle_max_intr_get(intr_handle) >
+ PLT_MAX_RXTX_INTR_VEC_ID) {
plt_err("Max_intr=%d greater than PLT_MAX_RXTX_INTR_VEC_ID=%d",
- intr_handle->max_intr, PLT_MAX_RXTX_INTR_VEC_ID);
+ plt_intr_handle_max_intr_get(intr_handle),
+ PLT_MAX_RXTX_INTR_VEC_ID);
return -ERANGE;
}
len = sizeof(struct vfio_irq_set) +
- sizeof(int32_t) * intr_handle->max_intr;
+ sizeof(int32_t) * plt_intr_handle_max_intr_get(intr_handle);
irq_set = (struct vfio_irq_set *)irq_set_buf;
irq_set->argsz = len;
irq_set->start = 0;
- irq_set->count = intr_handle->max_intr;
+ irq_set->count = plt_intr_handle_max_intr_get(intr_handle);
irq_set->flags =
VFIO_IRQ_SET_DATA_EVENTFD | VFIO_IRQ_SET_ACTION_TRIGGER;
irq_set->index = VFIO_PCI_MSIX_IRQ_INDEX;
@@ -110,7 +116,8 @@ irq_init(struct plt_intr_handle *intr_handle)
for (i = 0; i < irq_set->count; i++)
fd_ptr[i] = -1;
- rc = ioctl(intr_handle->vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
+ vfio_dev_fd = plt_intr_handle_dev_fd_get(intr_handle);
+ rc = ioctl(vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
if (rc)
plt_err("Failed to set irqs vector rc=%d", rc);
@@ -121,7 +128,7 @@ int
dev_irqs_disable(struct plt_intr_handle *intr_handle)
{
/* Clear max_intr to indicate re-init next time */
- intr_handle->max_intr = 0;
+ plt_intr_handle_max_intr_set(intr_handle, 0);
return plt_intr_disable(intr_handle);
}
@@ -129,42 +136,50 @@ int
dev_irq_register(struct plt_intr_handle *intr_handle, plt_intr_callback_fn cb,
void *data, unsigned int vec)
{
- struct plt_intr_handle tmp_handle;
- int rc;
+ struct plt_intr_handle *tmp_handle;
+ uint32_t nb_efd, tmp_nb_efd;
+ int rc, fd;
/* If no max_intr read from VFIO */
- if (intr_handle->max_intr == 0) {
+ if (plt_intr_handle_max_intr_get(intr_handle) == 0) {
irq_get_info(intr_handle);
irq_init(intr_handle);
}
- if (vec > intr_handle->max_intr) {
+ if (vec > (uint32_t)plt_intr_handle_max_intr_get(intr_handle)) {
plt_err("Vector=%d greater than max_intr=%d", vec,
- intr_handle->max_intr);
+ plt_intr_handle_max_intr_get(intr_handle));
return -EINVAL;
}
- tmp_handle = *intr_handle;
+ tmp_handle = intr_handle;
/* Create new eventfd for interrupt vector */
- tmp_handle.fd = eventfd(0, EFD_NONBLOCK | EFD_CLOEXEC);
- if (tmp_handle.fd == -1)
+ fd = eventfd(0, EFD_NONBLOCK | EFD_CLOEXEC);
+ if (fd == -1)
return -ENODEV;
+ if (plt_intr_handle_fd_set(tmp_handle, fd))
+ return errno;
+
/* Register vector interrupt callback */
- rc = plt_intr_callback_register(&tmp_handle, cb, data);
+ rc = plt_intr_callback_register(tmp_handle, cb, data);
if (rc) {
plt_err("Failed to register vector:0x%x irq callback.", vec);
return rc;
}
- intr_handle->efds[vec] = tmp_handle.fd;
- intr_handle->nb_efd =
- (vec > intr_handle->nb_efd) ? vec : intr_handle->nb_efd;
- if ((intr_handle->nb_efd + 1) > intr_handle->max_intr)
- intr_handle->max_intr = intr_handle->nb_efd + 1;
+ plt_intr_handle_efds_index_set(intr_handle, vec, fd);
+ nb_efd = (vec > (uint32_t)plt_intr_handle_nb_efd_get(intr_handle)) ?
+ vec : (uint32_t)plt_intr_handle_nb_efd_get(intr_handle);
+ plt_intr_handle_nb_efd_set(intr_handle, nb_efd);
+
+ tmp_nb_efd = plt_intr_handle_nb_efd_get(intr_handle) + 1;
+ if (tmp_nb_efd > (uint32_t)plt_intr_handle_max_intr_get(intr_handle))
+ plt_intr_handle_max_intr_set(intr_handle, tmp_nb_efd);
plt_base_dbg("Enable vector:0x%x for vfio (efds: %d, max:%d)", vec,
- intr_handle->nb_efd, intr_handle->max_intr);
+ plt_intr_handle_nb_efd_get(intr_handle),
+ plt_intr_handle_max_intr_get(intr_handle));
/* Enable MSIX vectors to VFIO */
return irq_config(intr_handle, vec);
@@ -174,24 +189,27 @@ void
dev_irq_unregister(struct plt_intr_handle *intr_handle, plt_intr_callback_fn cb,
void *data, unsigned int vec)
{
- struct plt_intr_handle tmp_handle;
+ struct plt_intr_handle *tmp_handle;
uint8_t retries = 5; /* 5 ms */
- int rc;
+ int rc, fd;
- if (vec > intr_handle->max_intr) {
+ if (vec > (uint32_t)plt_intr_handle_max_intr_get(intr_handle)) {
plt_err("Error unregistering MSI-X interrupts vec:%d > %d", vec,
- intr_handle->max_intr);
+ plt_intr_handle_max_intr_get(intr_handle));
return;
}
- tmp_handle = *intr_handle;
- tmp_handle.fd = intr_handle->efds[vec];
- if (tmp_handle.fd == -1)
+ tmp_handle = intr_handle;
+ fd = plt_intr_handle_efds_index_get(intr_handle, vec);
+ if (fd == -1)
+ return;
+
+ if (plt_intr_handle_fd_set(tmp_handle, fd))
return;
do {
/* Un-register callback func from platform lib */
- rc = plt_intr_callback_unregister(&tmp_handle, cb, data);
+ rc = plt_intr_callback_unregister(tmp_handle, cb, data);
/* Retry only if -EAGAIN */
if (rc != -EAGAIN)
break;
@@ -205,12 +223,14 @@ dev_irq_unregister(struct plt_intr_handle *intr_handle, plt_intr_callback_fn cb,
}
plt_base_dbg("Disable vector:0x%x for vfio (efds: %d, max:%d)", vec,
- intr_handle->nb_efd, intr_handle->max_intr);
+ plt_intr_handle_nb_efd_get(intr_handle),
+ plt_intr_handle_max_intr_get(intr_handle));
- if (intr_handle->efds[vec] != -1)
- close(intr_handle->efds[vec]);
+ if (plt_intr_handle_efds_index_get(intr_handle, vec) != -1)
+ close(plt_intr_handle_efds_index_get(intr_handle, vec));
/* Disable MSIX vectors from VFIO */
- intr_handle->efds[vec] = -1;
+ plt_intr_handle_efds_index_set(intr_handle, vec, -1);
+
irq_config(intr_handle, vec);
}
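
To make the converted flow easier to read outside the diff context, here is a rough sketch of
how one vector now gets wired up through the wrappers: create an eventfd, point the handle at
it, register the callback and record the fd against its vector. It assumes the plt_* aliases
below resolve to the new rte_intr_handle_* APIs, reuses the static irq_config() defined above,
and uses an illustrative example_* name; note that with an opaque handle the old tmp_handle
copy collapses into the same pointer.

    #include <errno.h>
    #include <sys/eventfd.h>

    static int
    example_register_vec(struct plt_intr_handle *handle,
                 plt_intr_callback_fn cb, void *data, unsigned int vec)
    {
        int fd;

        /* One eventfd per MSI-X vector */
        fd = eventfd(0, EFD_NONBLOCK | EFD_CLOEXEC);
        if (fd < 0)
            return -ENODEV;

        /* Point the handle at the new fd and register the callback */
        if (plt_intr_handle_fd_set(handle, fd) ||
            plt_intr_callback_register(handle, cb, data))
            return -1;

        /* Record the fd against its vector for later teardown */
        plt_intr_handle_efds_index_set(handle, vec, fd);

        /* Program the vector into VFIO */
        return irq_config(handle, vec);
    }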
diff --git a/drivers/common/cnxk/roc_nix_irq.c b/drivers/common/cnxk/roc_nix_irq.c
index 32be64a9d7..9c29f4272b 100644
--- a/drivers/common/cnxk/roc_nix_irq.c
+++ b/drivers/common/cnxk/roc_nix_irq.c
@@ -82,7 +82,7 @@ nix_lf_err_irq(void *param)
static int
nix_lf_register_err_irq(struct nix *nix)
{
- struct plt_intr_handle *handle = &nix->pci_dev->intr_handle;
+ struct plt_intr_handle *handle = nix->pci_dev->intr_handle;
int rc, vec;
vec = nix->msixoff + NIX_LF_INT_VEC_ERR_INT;
@@ -99,7 +99,7 @@ nix_lf_register_err_irq(struct nix *nix)
static void
nix_lf_unregister_err_irq(struct nix *nix)
{
- struct plt_intr_handle *handle = &nix->pci_dev->intr_handle;
+ struct plt_intr_handle *handle = nix->pci_dev->intr_handle;
int vec;
vec = nix->msixoff + NIX_LF_INT_VEC_ERR_INT;
@@ -131,7 +131,7 @@ nix_lf_ras_irq(void *param)
static int
nix_lf_register_ras_irq(struct nix *nix)
{
- struct plt_intr_handle *handle = &nix->pci_dev->intr_handle;
+ struct plt_intr_handle *handle = nix->pci_dev->intr_handle;
int rc, vec;
vec = nix->msixoff + NIX_LF_INT_VEC_POISON;
@@ -148,7 +148,7 @@ nix_lf_register_ras_irq(struct nix *nix)
static void
nix_lf_unregister_ras_irq(struct nix *nix)
{
- struct plt_intr_handle *handle = &nix->pci_dev->intr_handle;
+ struct plt_intr_handle *handle = nix->pci_dev->intr_handle;
int vec;
vec = nix->msixoff + NIX_LF_INT_VEC_POISON;
@@ -300,7 +300,7 @@ roc_nix_register_queue_irqs(struct roc_nix *roc_nix)
struct nix *nix;
nix = roc_nix_to_nix_priv(roc_nix);
- handle = &nix->pci_dev->intr_handle;
+ handle = nix->pci_dev->intr_handle;
/* Figure out max qintx required */
rqs = PLT_MIN(nix->qints, nix->nb_rx_queues);
@@ -352,7 +352,7 @@ roc_nix_unregister_queue_irqs(struct roc_nix *roc_nix)
int vec, q;
nix = roc_nix_to_nix_priv(roc_nix);
- handle = &nix->pci_dev->intr_handle;
+ handle = nix->pci_dev->intr_handle;
for (q = 0; q < nix->configured_qints; q++) {
vec = nix->msixoff + NIX_LF_INT_VEC_QINT_START + q;
@@ -382,7 +382,7 @@ roc_nix_register_cq_irqs(struct roc_nix *roc_nix)
struct nix *nix;
nix = roc_nix_to_nix_priv(roc_nix);
- handle = &nix->pci_dev->intr_handle;
+ handle = nix->pci_dev->intr_handle;
nix->configured_cints = PLT_MIN(nix->cints, nix->nb_rx_queues);
@@ -414,19 +414,21 @@ roc_nix_register_cq_irqs(struct roc_nix *roc_nix)
return rc;
}
- if (!handle->intr_vec) {
- handle->intr_vec = plt_zmalloc(
- nix->configured_cints * sizeof(int), 0);
- if (!handle->intr_vec) {
- plt_err("Failed to allocate %d rx intr_vec",
- nix->configured_cints);
- return -ENOMEM;
+ if (!plt_intr_handle_vec_list_base(handle)) {
+ rc = plt_intr_handle_vec_list_alloc(handle, "cnxk",
+ nix->configured_cints);
+ if (rc) {
+ plt_err("Fail to allocate intr vec list, rc=%d",
+ rc);
+ return rc;
}
}
/* VFIO vector zero is reserved for misc interrupt so
* doing required adjustment. (b13bfab4cd)
*/
- handle->intr_vec[q] = PLT_INTR_VEC_RXTX_OFFSET + vec;
+ if (plt_intr_handle_vec_list_index_set(handle, q,
+ PLT_INTR_VEC_RXTX_OFFSET + vec))
+ return -1;
/* Configure CQE interrupt coalescing parameters */
plt_write64(((CQ_CQE_THRESH_DEFAULT) |
@@ -450,7 +452,7 @@ roc_nix_unregister_cq_irqs(struct roc_nix *roc_nix)
int vec, q;
nix = roc_nix_to_nix_priv(roc_nix);
- handle = &nix->pci_dev->intr_handle;
+ handle = nix->pci_dev->intr_handle;
for (q = 0; q < nix->configured_cints; q++) {
vec = nix->msixoff + NIX_LF_INT_VEC_CINT_START + q;
@@ -465,6 +467,9 @@ roc_nix_unregister_cq_irqs(struct roc_nix *roc_nix)
dev_irq_unregister(handle, nix_lf_cq_irq, &nix->cints_mem[q],
vec);
}
+
+ if (plt_intr_handle_vec_list_base(handle))
+ plt_intr_handle_vec_list_free(handle);
plt_free(nix->cints_mem);
}
diff --git a/drivers/common/cnxk/roc_npa.c b/drivers/common/cnxk/roc_npa.c
index d064d125c1..69b6254870 100644
--- a/drivers/common/cnxk/roc_npa.c
+++ b/drivers/common/cnxk/roc_npa.c
@@ -710,7 +710,7 @@ npa_lf_init(struct dev *dev, struct plt_pci_device *pci_dev)
lf->pf_func = dev->pf_func;
lf->npa_msixoff = npa_msixoff;
- lf->intr_handle = &pci_dev->intr_handle;
+ lf->intr_handle = pci_dev->intr_handle;
lf->pci_dev = pci_dev;
idev->npa_pf_func = dev->pf_func;
diff --git a/drivers/common/cnxk/roc_platform.h b/drivers/common/cnxk/roc_platform.h
index 285b24b82d..872af26acc 100644
--- a/drivers/common/cnxk/roc_platform.h
+++ b/drivers/common/cnxk/roc_platform.h
@@ -101,6 +101,40 @@
#define plt_thread_is_intr rte_thread_is_intr
#define plt_intr_callback_fn rte_intr_callback_fn
+#define plt_intr_handle_efd_counter_size_get \
+ rte_intr_handle_efd_counter_size_get
+#define plt_intr_handle_efd_counter_size_set \
+ rte_intr_handle_efd_counter_size_set
+#define plt_intr_handle_vec_list_index_get rte_intr_handle_vec_list_index_get
+#define plt_intr_handle_vec_list_index_set rte_intr_handle_vec_list_index_set
+#define plt_intr_handle_vec_list_base rte_intr_handle_vec_list_base
+#define plt_intr_handle_vec_list_alloc rte_intr_handle_vec_list_alloc
+#define plt_intr_handle_vec_list_free rte_intr_handle_vec_list_free
+#define plt_intr_handle_fd_set rte_intr_handle_fd_set
+#define plt_intr_handle_fd_get rte_intr_handle_fd_get
+#define plt_intr_handle_dev_fd_get rte_intr_handle_dev_fd_get
+#define plt_intr_handle_dev_fd_set rte_intr_handle_dev_fd_set
+#define plt_intr_handle_type_get rte_intr_handle_type_get
+#define plt_intr_handle_type_set rte_intr_handle_type_set
+#define plt_intr_handle_instance_alloc rte_intr_handle_instance_alloc
+#define plt_intr_handle_instance_index_get rte_intr_handle_instance_index_get
+#define plt_intr_handle_instance_index_set rte_intr_handle_instance_index_set
+#define plt_intr_handle_instance_free rte_intr_handle_instance_free
+#define plt_intr_handle_event_list_update rte_intr_handle_event_list_update
+#define plt_intr_handle_max_intr_get rte_intr_handle_max_intr_get
+#define plt_intr_handle_max_intr_set rte_intr_handle_max_intr_set
+#define plt_intr_handle_nb_efd_get rte_intr_handle_nb_efd_get
+#define plt_intr_handle_nb_efd_set rte_intr_handle_nb_efd_set
+#define plt_intr_handle_nb_intr_get rte_intr_handle_nb_intr_get
+#define plt_intr_handle_nb_intr_set rte_intr_handle_nb_intr_set
+#define plt_intr_handle_efds_index_get rte_intr_handle_efds_index_get
+#define plt_intr_handle_efds_index_set rte_intr_handle_efds_index_set
+#define plt_intr_handle_efds_base rte_intr_handle_efds_base
+#define plt_intr_handle_elist_index_get rte_intr_handle_elist_index_get
+#define plt_intr_handle_elist_index_set rte_intr_handle_elist_index_set
+
#define plt_alarm_set rte_eal_alarm_set
#define plt_alarm_cancel rte_eal_alarm_cancel
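
These aliases keep the roc layer decoupled from EAL names; after preprocessing a plt_ accessor
is simply the corresponding rte_ accessor. Purely illustrative (hypothetical helper name):

    static inline int
    example_handle_fd(struct plt_intr_handle *handle)
    {
        /* expands to rte_intr_handle_fd_get(handle) */
        return plt_intr_handle_fd_get(handle);
    }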
diff --git a/drivers/common/cnxk/roc_sso.c b/drivers/common/cnxk/roc_sso.c
index 1ccf2626bd..88165ad236 100644
--- a/drivers/common/cnxk/roc_sso.c
+++ b/drivers/common/cnxk/roc_sso.c
@@ -491,7 +491,7 @@ roc_sso_rsrc_init(struct roc_sso *roc_sso, uint8_t nb_hws, uint16_t nb_hwgrp)
goto sso_msix_fail;
}
- rc = sso_register_irqs_priv(roc_sso, &sso->pci_dev->intr_handle, nb_hws,
+ rc = sso_register_irqs_priv(roc_sso, sso->pci_dev->intr_handle, nb_hws,
nb_hwgrp);
if (rc < 0) {
plt_err("Failed to register SSO LF IRQs");
@@ -521,7 +521,7 @@ roc_sso_rsrc_fini(struct roc_sso *roc_sso)
if (!roc_sso->nb_hws && !roc_sso->nb_hwgrp)
return;
- sso_unregister_irqs_priv(roc_sso, &sso->pci_dev->intr_handle,
+ sso_unregister_irqs_priv(roc_sso, sso->pci_dev->intr_handle,
roc_sso->nb_hws, roc_sso->nb_hwgrp);
sso_lf_free(roc_sso, SSO_LF_TYPE_HWS, roc_sso->nb_hws);
sso_lf_free(roc_sso, SSO_LF_TYPE_HWGRP, roc_sso->nb_hwgrp);
diff --git a/drivers/common/cnxk/roc_tim.c b/drivers/common/cnxk/roc_tim.c
index 387164bb1d..534b697bee 100644
--- a/drivers/common/cnxk/roc_tim.c
+++ b/drivers/common/cnxk/roc_tim.c
@@ -200,7 +200,7 @@ roc_tim_lf_alloc(struct roc_tim *roc_tim, uint8_t ring_id, uint64_t *clk)
if (clk)
*clk = rsp->tenns_clk;
- rc = tim_register_irq_priv(roc_tim, &sso->pci_dev->intr_handle, ring_id,
+ rc = tim_register_irq_priv(roc_tim, sso->pci_dev->intr_handle, ring_id,
tim->tim_msix_offsets[ring_id]);
if (rc < 0) {
plt_tim_dbg("Failed to register Ring[%d] IRQ", ring_id);
@@ -223,7 +223,7 @@ roc_tim_lf_free(struct roc_tim *roc_tim, uint8_t ring_id)
struct tim_ring_req *req;
int rc = -ENOSPC;
- tim_unregister_irq_priv(roc_tim, &sso->pci_dev->intr_handle, ring_id,
+ tim_unregister_irq_priv(roc_tim, sso->pci_dev->intr_handle, ring_id,
tim->tim_msix_offsets[ring_id]);
req = mbox_alloc_msg_tim_lf_free(dev->mbox);
diff --git a/drivers/common/octeontx2/otx2_dev.c b/drivers/common/octeontx2/otx2_dev.c
index 1485e2b357..906b283cde 100644
--- a/drivers/common/octeontx2/otx2_dev.c
+++ b/drivers/common/octeontx2/otx2_dev.c
@@ -640,7 +640,7 @@ otx2_af_pf_mbox_irq(void *param)
static int
mbox_register_pf_irq(struct rte_pci_device *pci_dev, struct otx2_dev *dev)
{
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
int i, rc;
/* HW clear irq */
@@ -690,7 +690,7 @@ mbox_register_pf_irq(struct rte_pci_device *pci_dev, struct otx2_dev *dev)
static int
mbox_register_vf_irq(struct rte_pci_device *pci_dev, struct otx2_dev *dev)
{
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
int rc;
/* Clear irq */
@@ -723,7 +723,7 @@ mbox_register_irq(struct rte_pci_device *pci_dev, struct otx2_dev *dev)
static void
mbox_unregister_pf_irq(struct rte_pci_device *pci_dev, struct otx2_dev *dev)
{
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
int i;
/* HW clear irq */
@@ -755,7 +755,7 @@ mbox_unregister_pf_irq(struct rte_pci_device *pci_dev, struct otx2_dev *dev)
static void
mbox_unregister_vf_irq(struct rte_pci_device *pci_dev, struct otx2_dev *dev)
{
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
/* Clear irq */
otx2_write64(~0ull, dev->bar2 + RVU_VF_INT_ENA_W1C);
@@ -838,7 +838,7 @@ otx2_pf_vf_flr_irq(void *param)
static int
vf_flr_unregister_irqs(struct rte_pci_device *pci_dev, struct otx2_dev *dev)
{
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
int i;
otx2_base_dbg("Unregister VF FLR interrupts for %s", pci_dev->name);
@@ -859,7 +859,7 @@ vf_flr_unregister_irqs(struct rte_pci_device *pci_dev, struct otx2_dev *dev)
static int
vf_flr_register_irqs(struct rte_pci_device *pci_dev, struct otx2_dev *dev)
{
- struct rte_intr_handle *handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *handle = pci_dev->intr_handle;
int i, rc;
otx2_base_dbg("Register VF FLR interrupts for %s", pci_dev->name);
@@ -1036,7 +1036,7 @@ otx2_dev_priv_init(struct rte_pci_device *pci_dev, void *otx2_dev)
void
otx2_dev_fini(struct rte_pci_device *pci_dev, void *otx2_dev)
{
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
struct otx2_dev *dev = otx2_dev;
struct otx2_idev_cfg *idev;
struct otx2_mbox *mbox;
diff --git a/drivers/common/octeontx2/otx2_irq.c b/drivers/common/octeontx2/otx2_irq.c
index c0137ff36d..6efa4c6646 100644
--- a/drivers/common/octeontx2/otx2_irq.c
+++ b/drivers/common/octeontx2/otx2_irq.c
@@ -26,11 +26,12 @@ static int
irq_get_info(struct rte_intr_handle *intr_handle)
{
struct vfio_irq_info irq = { .argsz = sizeof(irq) };
- int rc;
+ int rc, vfio_dev_fd;
irq.index = VFIO_PCI_MSIX_IRQ_INDEX;
- rc = ioctl(intr_handle->vfio_dev_fd, VFIO_DEVICE_GET_IRQ_INFO, &irq);
+ vfio_dev_fd = rte_intr_handle_dev_fd_get(intr_handle);
+ rc = ioctl(vfio_dev_fd, VFIO_DEVICE_GET_IRQ_INFO, &irq);
if (rc < 0) {
otx2_err("Failed to get IRQ info rc=%d errno=%d", rc, errno);
return rc;
@@ -41,10 +42,13 @@ irq_get_info(struct rte_intr_handle *intr_handle)
if (irq.count > MAX_INTR_VEC_ID) {
otx2_err("HW max=%d > MAX_INTR_VEC_ID: %d",
- intr_handle->max_intr, MAX_INTR_VEC_ID);
- intr_handle->max_intr = MAX_INTR_VEC_ID;
+ rte_intr_handle_max_intr_get(intr_handle),
+ MAX_INTR_VEC_ID);
+ if (rte_intr_handle_max_intr_set(intr_handle, MAX_INTR_VEC_ID))
+ return -1;
} else {
- intr_handle->max_intr = irq.count;
+ if (rte_intr_handle_max_intr_set(intr_handle, irq.count))
+ return -1;
}
return 0;
@@ -55,12 +59,12 @@ irq_config(struct rte_intr_handle *intr_handle, unsigned int vec)
{
char irq_set_buf[MSIX_IRQ_SET_BUF_LEN];
struct vfio_irq_set *irq_set;
+ int len, rc, vfio_dev_fd;
int32_t *fd_ptr;
- int len, rc;
- if (vec > intr_handle->max_intr) {
+ if (vec > (uint32_t)rte_intr_handle_max_intr_get(intr_handle)) {
otx2_err("vector=%d greater than max_intr=%d", vec,
- intr_handle->max_intr);
+ rte_intr_handle_max_intr_get(intr_handle));
return -EINVAL;
}
@@ -77,9 +81,10 @@ irq_config(struct rte_intr_handle *intr_handle, unsigned int vec)
/* Use vec fd to set interrupt vectors */
fd_ptr = (int32_t *)&irq_set->data[0];
- fd_ptr[0] = intr_handle->efds[vec];
+ fd_ptr[0] = rte_intr_handle_efds_index_get(intr_handle, vec);
- rc = ioctl(intr_handle->vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
+ vfio_dev_fd = rte_intr_handle_dev_fd_get(intr_handle);
+ rc = ioctl(vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
if (rc)
otx2_err("Failed to set_irqs vector=0x%x rc=%d", vec, rc);
@@ -91,23 +96,24 @@ irq_init(struct rte_intr_handle *intr_handle)
{
char irq_set_buf[MSIX_IRQ_SET_BUF_LEN];
struct vfio_irq_set *irq_set;
+ int len, rc, vfio_dev_fd;
int32_t *fd_ptr;
- int len, rc;
uint32_t i;
- if (intr_handle->max_intr > MAX_INTR_VEC_ID) {
+ if (rte_intr_handle_max_intr_get(intr_handle) > MAX_INTR_VEC_ID) {
otx2_err("Max_intr=%d greater than MAX_INTR_VEC_ID=%d",
- intr_handle->max_intr, MAX_INTR_VEC_ID);
+ rte_intr_handle_max_intr_get(intr_handle),
+ MAX_INTR_VEC_ID);
return -ERANGE;
}
len = sizeof(struct vfio_irq_set) +
- sizeof(int32_t) * intr_handle->max_intr;
+ sizeof(int32_t) * rte_intr_handle_max_intr_get(intr_handle);
irq_set = (struct vfio_irq_set *)irq_set_buf;
irq_set->argsz = len;
irq_set->start = 0;
- irq_set->count = intr_handle->max_intr;
+ irq_set->count = rte_intr_handle_max_intr_get(intr_handle);
irq_set->flags = VFIO_IRQ_SET_DATA_EVENTFD |
VFIO_IRQ_SET_ACTION_TRIGGER;
irq_set->index = VFIO_PCI_MSIX_IRQ_INDEX;
@@ -116,7 +122,8 @@ irq_init(struct rte_intr_handle *intr_handle)
for (i = 0; i < irq_set->count; i++)
fd_ptr[i] = -1;
- rc = ioctl(intr_handle->vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
+ vfio_dev_fd = rte_intr_handle_dev_fd_get(intr_handle);
+ rc = ioctl(vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
if (rc)
otx2_err("Failed to set irqs vector rc=%d", rc);
@@ -131,7 +138,8 @@ int
otx2_disable_irqs(struct rte_intr_handle *intr_handle)
{
/* Clear max_intr to indicate re-init next time */
- intr_handle->max_intr = 0;
+ if (rte_intr_handle_max_intr_set(intr_handle, 0))
+ return -1;
return rte_intr_disable(intr_handle);
}
@@ -143,42 +151,50 @@ int
otx2_register_irq(struct rte_intr_handle *intr_handle,
rte_intr_callback_fn cb, void *data, unsigned int vec)
{
- struct rte_intr_handle tmp_handle;
- int rc;
+ struct rte_intr_handle *tmp_handle;
+ uint32_t nb_efd, tmp_nb_efd;
+ int rc, fd;
/* If no max_intr read from VFIO */
- if (intr_handle->max_intr == 0) {
+ if (rte_intr_handle_max_intr_get(intr_handle) == 0) {
irq_get_info(intr_handle);
irq_init(intr_handle);
}
- if (vec > intr_handle->max_intr) {
+ if (vec > (uint32_t)rte_intr_handle_max_intr_get(intr_handle)) {
otx2_err("Vector=%d greater than max_intr=%d", vec,
- intr_handle->max_intr);
+ rte_intr_handle_max_intr_get(intr_handle));
return -EINVAL;
}
- tmp_handle = *intr_handle;
+ tmp_handle = intr_handle;
/* Create new eventfd for interrupt vector */
- tmp_handle.fd = eventfd(0, EFD_NONBLOCK | EFD_CLOEXEC);
- if (tmp_handle.fd == -1)
+ fd = eventfd(0, EFD_NONBLOCK | EFD_CLOEXEC);
+ if (fd == -1)
return -ENODEV;
+ if (rte_intr_handle_fd_set(tmp_handle, fd))
+ return errno;
+
/* Register vector interrupt callback */
- rc = rte_intr_callback_register(&tmp_handle, cb, data);
+ rc = rte_intr_callback_register(tmp_handle, cb, data);
if (rc) {
otx2_err("Failed to register vector:0x%x irq callback.", vec);
return rc;
}
- intr_handle->efds[vec] = tmp_handle.fd;
- intr_handle->nb_efd = (vec > intr_handle->nb_efd) ?
- vec : intr_handle->nb_efd;
- if ((intr_handle->nb_efd + 1) > intr_handle->max_intr)
- intr_handle->max_intr = intr_handle->nb_efd + 1;
+ rte_intr_handle_efds_index_set(intr_handle, vec, fd);
+ nb_efd = (vec > (uint32_t)rte_intr_handle_nb_efd_get(intr_handle)) ?
+ vec : (uint32_t)rte_intr_handle_nb_efd_get(intr_handle);
+ rte_intr_handle_nb_efd_set(intr_handle, nb_efd);
+
+ tmp_nb_efd = rte_intr_handle_nb_efd_get(intr_handle) + 1;
+ if (tmp_nb_efd > (uint32_t)rte_intr_handle_max_intr_get(intr_handle))
+ rte_intr_handle_max_intr_set(intr_handle, tmp_nb_efd);
- otx2_base_dbg("Enable vector:0x%x for vfio (efds: %d, max:%d)",
- vec, intr_handle->nb_efd, intr_handle->max_intr);
+ otx2_base_dbg("Enable vector:0x%x for vfio (efds: %d, max:%d)", vec,
+ rte_intr_handle_nb_efd_get(intr_handle),
+ rte_intr_handle_max_intr_get(intr_handle));
/* Enable MSIX vectors to VFIO */
return irq_config(intr_handle, vec);
@@ -192,24 +208,27 @@ void
otx2_unregister_irq(struct rte_intr_handle *intr_handle,
rte_intr_callback_fn cb, void *data, unsigned int vec)
{
- struct rte_intr_handle tmp_handle;
+ struct rte_intr_handle *tmp_handle;
uint8_t retries = 5; /* 5 ms */
- int rc;
+ int rc, fd;
- if (vec > intr_handle->max_intr) {
+ if (vec > (uint32_t)rte_intr_handle_max_intr_get(intr_handle)) {
otx2_err("Error unregistering MSI-X interrupts vec:%d > %d",
- vec, intr_handle->max_intr);
+ vec, rte_intr_handle_max_intr_get(intr_handle));
return;
}
- tmp_handle = *intr_handle;
- tmp_handle.fd = intr_handle->efds[vec];
- if (tmp_handle.fd == -1)
+ tmp_handle = intr_handle;
+ fd = rte_intr_handle_efds_index_get(intr_handle, vec);
+ if (fd == -1)
+ return;
+
+ if (rte_intr_handle_fd_set(tmp_handle, fd))
return;
do {
- /* Un-register callback func from eal lib */
- rc = rte_intr_callback_unregister(&tmp_handle, cb, data);
+ /* Un-register callback func from platform lib */
+ rc = rte_intr_callback_unregister(tmp_handle, cb, data);
/* Retry only if -EAGAIN */
if (rc != -EAGAIN)
break;
@@ -218,18 +237,18 @@ otx2_unregister_irq(struct rte_intr_handle *intr_handle,
} while (retries);
if (rc < 0) {
- otx2_err("Error unregistering MSI-X intr vec %d cb, rc=%d",
- vec, rc);
+ otx2_err("Error unregistering MSI-X vec %d cb, rc=%d", vec, rc);
return;
}
- otx2_base_dbg("Disable vector:0x%x for vfio (efds: %d, max:%d)",
- vec, intr_handle->nb_efd, intr_handle->max_intr);
+ otx2_base_dbg("Disable vector:0x%x for vfio (efds: %d, max:%d)", vec,
+ rte_intr_handle_nb_efd_get(intr_handle),
+ rte_intr_handle_max_intr_get(intr_handle));
- if (intr_handle->efds[vec] != -1)
- close(intr_handle->efds[vec]);
+ if (rte_intr_handle_efds_index_get(intr_handle, vec) != -1)
+ close(rte_intr_handle_efds_index_get(intr_handle, vec));
/* Disable MSIX vectors from VFIO */
- intr_handle->efds[vec] = -1;
+ rte_intr_handle_efds_index_set(intr_handle, vec, -1);
irq_config(intr_handle, vec);
}
diff --git a/drivers/crypto/octeontx2/otx2_cryptodev_hw_access.c b/drivers/crypto/octeontx2/otx2_cryptodev_hw_access.c
index bf90d095fe..d5d6b5bad7 100644
--- a/drivers/crypto/octeontx2/otx2_cryptodev_hw_access.c
+++ b/drivers/crypto/octeontx2/otx2_cryptodev_hw_access.c
@@ -36,7 +36,7 @@ otx2_cpt_lf_err_intr_unregister(const struct rte_cryptodev *dev,
uint16_t msix_off, uintptr_t base)
{
struct rte_pci_device *pci_dev = RTE_DEV_TO_PCI(dev->device);
- struct rte_intr_handle *handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *handle = pci_dev->intr_handle;
/* Disable error interrupts */
otx2_write64(~0ull, base + OTX2_CPT_LF_MISC_INT_ENA_W1C);
@@ -65,7 +65,7 @@ otx2_cpt_lf_err_intr_register(const struct rte_cryptodev *dev,
uint16_t msix_off, uintptr_t base)
{
struct rte_pci_device *pci_dev = RTE_DEV_TO_PCI(dev->device);
- struct rte_intr_handle *handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *handle = pci_dev->intr_handle;
int ret;
/* Disable error interrupts */
diff --git a/drivers/event/octeontx2/otx2_evdev_irq.c b/drivers/event/octeontx2/otx2_evdev_irq.c
index a2033646e6..9b7ad27b04 100644
--- a/drivers/event/octeontx2/otx2_evdev_irq.c
+++ b/drivers/event/octeontx2/otx2_evdev_irq.c
@@ -29,7 +29,7 @@ sso_lf_register_irq(const struct rte_eventdev *event_dev, uint16_t ggrp_msixoff,
uintptr_t base)
{
struct rte_pci_device *pci_dev = RTE_DEV_TO_PCI(event_dev->dev);
- struct rte_intr_handle *handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *handle = pci_dev->intr_handle;
int rc, vec;
vec = ggrp_msixoff + SSO_LF_INT_VEC_GRP;
@@ -66,7 +66,7 @@ ssow_lf_register_irq(const struct rte_eventdev *event_dev, uint16_t gws_msixoff,
uintptr_t base)
{
struct rte_pci_device *pci_dev = RTE_DEV_TO_PCI(event_dev->dev);
- struct rte_intr_handle *handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *handle = pci_dev->intr_handle;
int rc, vec;
vec = gws_msixoff + SSOW_LF_INT_VEC_IOP;
@@ -86,7 +86,7 @@ sso_lf_unregister_irq(const struct rte_eventdev *event_dev,
uint16_t ggrp_msixoff, uintptr_t base)
{
struct rte_pci_device *pci_dev = RTE_DEV_TO_PCI(event_dev->dev);
- struct rte_intr_handle *handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *handle = pci_dev->intr_handle;
int vec;
vec = ggrp_msixoff + SSO_LF_INT_VEC_GRP;
@@ -101,7 +101,7 @@ ssow_lf_unregister_irq(const struct rte_eventdev *event_dev,
uint16_t gws_msixoff, uintptr_t base)
{
struct rte_pci_device *pci_dev = RTE_DEV_TO_PCI(event_dev->dev);
- struct rte_intr_handle *handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *handle = pci_dev->intr_handle;
int vec;
vec = gws_msixoff + SSOW_LF_INT_VEC_IOP;
@@ -198,7 +198,7 @@ static int
tim_lf_register_irq(struct rte_pci_device *pci_dev, uint16_t tim_msixoff,
uintptr_t base)
{
- struct rte_intr_handle *handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *handle = pci_dev->intr_handle;
int rc, vec;
vec = tim_msixoff + TIM_LF_INT_VEC_NRSPERR_INT;
@@ -226,7 +226,7 @@ static void
tim_lf_unregister_irq(struct rte_pci_device *pci_dev, uint16_t tim_msixoff,
uintptr_t base)
{
- struct rte_intr_handle *handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *handle = pci_dev->intr_handle;
int vec;
vec = tim_msixoff + TIM_LF_INT_VEC_NRSPERR_INT;
diff --git a/drivers/mempool/octeontx2/otx2_mempool.c b/drivers/mempool/octeontx2/otx2_mempool.c
index fb630fecf8..f63dc06ef2 100644
--- a/drivers/mempool/octeontx2/otx2_mempool.c
+++ b/drivers/mempool/octeontx2/otx2_mempool.c
@@ -301,7 +301,7 @@ otx2_npa_lf_init(struct rte_pci_device *pci_dev, void *otx2_dev)
lf->pf_func = dev->pf_func;
lf->npa_msixoff = npa_msixoff;
- lf->intr_handle = &pci_dev->intr_handle;
+ lf->intr_handle = pci_dev->intr_handle;
lf->pci_dev = pci_dev;
idev->npa_pf_func = dev->pf_func;
diff --git a/drivers/net/atlantic/atl_ethdev.c b/drivers/net/atlantic/atl_ethdev.c
index 0ce35eb519..03c37960eb 100644
--- a/drivers/net/atlantic/atl_ethdev.c
+++ b/drivers/net/atlantic/atl_ethdev.c
@@ -360,7 +360,7 @@ eth_atl_dev_init(struct rte_eth_dev *eth_dev)
{
struct atl_adapter *adapter = eth_dev->data->dev_private;
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
struct aq_hw_s *hw = ATL_DEV_PRIVATE_TO_HW(eth_dev->data->dev_private);
int err = 0;
@@ -479,7 +479,7 @@ atl_dev_start(struct rte_eth_dev *dev)
{
struct aq_hw_s *hw = ATL_DEV_PRIVATE_TO_HW(dev->data->dev_private);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
uint32_t intr_vector = 0;
int status;
int err;
@@ -525,10 +525,10 @@ atl_dev_start(struct rte_eth_dev *dev)
}
}
- if (rte_intr_dp_is_en(intr_handle) && !intr_handle->intr_vec) {
- intr_handle->intr_vec = rte_zmalloc("intr_vec",
- dev->data->nb_rx_queues * sizeof(int), 0);
- if (intr_handle->intr_vec == NULL) {
+ if (rte_intr_dp_is_en(intr_handle) &&
+ !rte_intr_handle_vec_list_base(intr_handle)) {
+ if (rte_intr_handle_vec_list_alloc(intr_handle, "intr_vec",
+ dev->data->nb_rx_queues)) {
PMD_INIT_LOG(ERR, "Failed to allocate %d rx_queues"
" intr_vec", dev->data->nb_rx_queues);
return -ENOMEM;
@@ -608,7 +608,7 @@ atl_dev_stop(struct rte_eth_dev *dev)
struct aq_hw_s *hw =
ATL_DEV_PRIVATE_TO_HW(dev->data->dev_private);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
PMD_INIT_FUNC_TRACE();
dev->data->dev_started = 0;
@@ -638,10 +638,8 @@ atl_dev_stop(struct rte_eth_dev *dev)
/* Clean datapath event and queue/vec mapping */
rte_intr_efd_disable(intr_handle);
- if (intr_handle->intr_vec != NULL) {
- rte_free(intr_handle->intr_vec);
- intr_handle->intr_vec = NULL;
- }
+ if (rte_intr_handle_vec_list_base(intr_handle))
+ rte_intr_handle_vec_list_free(intr_handle);
return 0;
}
@@ -692,7 +690,7 @@ static int
atl_dev_close(struct rte_eth_dev *dev)
{
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
struct aq_hw_s *hw;
int ret;
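
The ethdev conversions in this and the following drivers all repeat the same start/stop
pattern: allocate the vector list when Rx interrupts are enabled and no list exists yet, free
it again on stop. A condensed sketch with illustrative example_* names and error handling
trimmed:

    #include <errno.h>
    #include <rte_ethdev.h>
    #include <rte_interrupts.h>

    static int
    example_dev_start(struct rte_eth_dev *dev, struct rte_intr_handle *handle)
    {
        if (rte_intr_dp_is_en(handle) &&
            !rte_intr_handle_vec_list_base(handle)) {
            /* one vector slot per Rx queue */
            if (rte_intr_handle_vec_list_alloc(handle, "intr_vec",
                               dev->data->nb_rx_queues))
                return -ENOMEM;
        }
        return 0;
    }

    static void
    example_dev_stop(struct rte_intr_handle *handle)
    {
        rte_intr_efd_disable(handle);
        if (rte_intr_handle_vec_list_base(handle))
            rte_intr_handle_vec_list_free(handle);
    }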
diff --git a/drivers/net/avp/avp_ethdev.c b/drivers/net/avp/avp_ethdev.c
index 623fa5e5ff..f32619e05c 100644
--- a/drivers/net/avp/avp_ethdev.c
+++ b/drivers/net/avp/avp_ethdev.c
@@ -711,7 +711,7 @@ avp_dev_interrupt_handler(void *data)
status);
/* re-enable UIO interrupt handling */
- ret = rte_intr_ack(&pci_dev->intr_handle);
+ ret = rte_intr_ack(pci_dev->intr_handle);
if (ret < 0) {
PMD_DRV_LOG(ERR, "Failed to re-enable UIO interrupts, ret=%d\n",
ret);
@@ -730,7 +730,7 @@ avp_dev_enable_interrupts(struct rte_eth_dev *eth_dev)
return -EINVAL;
/* enable UIO interrupt handling */
- ret = rte_intr_enable(&pci_dev->intr_handle);
+ ret = rte_intr_enable(pci_dev->intr_handle);
if (ret < 0) {
PMD_DRV_LOG(ERR, "Failed to enable UIO interrupts, ret=%d\n",
ret);
@@ -759,7 +759,7 @@ avp_dev_disable_interrupts(struct rte_eth_dev *eth_dev)
RTE_PTR_ADD(registers, RTE_AVP_INTERRUPT_MASK_OFFSET));
/* enable UIO interrupt handling */
- ret = rte_intr_disable(&pci_dev->intr_handle);
+ ret = rte_intr_disable(pci_dev->intr_handle);
if (ret < 0) {
PMD_DRV_LOG(ERR, "Failed to disable UIO interrupts, ret=%d\n",
ret);
@@ -776,7 +776,7 @@ avp_dev_setup_interrupts(struct rte_eth_dev *eth_dev)
int ret;
/* register a callback handler with UIO for interrupt notifications */
- ret = rte_intr_callback_register(&pci_dev->intr_handle,
+ ret = rte_intr_callback_register(pci_dev->intr_handle,
avp_dev_interrupt_handler,
(void *)eth_dev);
if (ret < 0) {
diff --git a/drivers/net/axgbe/axgbe_ethdev.c b/drivers/net/axgbe/axgbe_ethdev.c
index 9cb4818af1..c26e0a199e 100644
--- a/drivers/net/axgbe/axgbe_ethdev.c
+++ b/drivers/net/axgbe/axgbe_ethdev.c
@@ -313,7 +313,7 @@ axgbe_dev_interrupt_handler(void *param)
}
}
/* Unmask interrupts since disabled after generation */
- rte_intr_ack(&pdata->pci_dev->intr_handle);
+ rte_intr_ack(pdata->pci_dev->intr_handle);
}
/*
@@ -374,7 +374,7 @@ axgbe_dev_start(struct rte_eth_dev *dev)
}
/* enable uio/vfio intr/eventfd mapping */
- rte_intr_enable(&pdata->pci_dev->intr_handle);
+ rte_intr_enable(pdata->pci_dev->intr_handle);
/* phy start*/
pdata->phy_if.phy_start(pdata);
@@ -404,7 +404,7 @@ axgbe_dev_stop(struct rte_eth_dev *dev)
PMD_INIT_FUNC_TRACE();
- rte_intr_disable(&pdata->pci_dev->intr_handle);
+ rte_intr_disable(pdata->pci_dev->intr_handle);
if (rte_bit_relaxed_get32(AXGBE_STOPPED, &pdata->dev_state))
return 0;
@@ -2323,7 +2323,7 @@ eth_axgbe_dev_init(struct rte_eth_dev *eth_dev)
return ret;
}
- rte_intr_callback_register(&pci_dev->intr_handle,
+ rte_intr_callback_register(pci_dev->intr_handle,
axgbe_dev_interrupt_handler,
(void *)eth_dev);
PMD_INIT_LOG(DEBUG, "port %d vendorID=0x%x deviceID=0x%x",
@@ -2347,8 +2347,8 @@ axgbe_dev_close(struct rte_eth_dev *eth_dev)
axgbe_dev_clear_queues(eth_dev);
/* disable uio intr before callback unregister */
- rte_intr_disable(&pci_dev->intr_handle);
- rte_intr_callback_unregister(&pci_dev->intr_handle,
+ rte_intr_disable(pci_dev->intr_handle);
+ rte_intr_callback_unregister(pci_dev->intr_handle,
axgbe_dev_interrupt_handler,
(void *)eth_dev);
diff --git a/drivers/net/axgbe/axgbe_mdio.c b/drivers/net/axgbe/axgbe_mdio.c
index 4f98e695ae..35ffda84f1 100644
--- a/drivers/net/axgbe/axgbe_mdio.c
+++ b/drivers/net/axgbe/axgbe_mdio.c
@@ -933,7 +933,7 @@ static int __axgbe_phy_config_aneg(struct axgbe_port *pdata)
}
/* Disable auto-negotiation interrupt */
- rte_intr_disable(&pdata->pci_dev->intr_handle);
+ rte_intr_disable(pdata->pci_dev->intr_handle);
/* Start auto-negotiation in a supported mode */
if (axgbe_use_mode(pdata, AXGBE_MODE_KR)) {
@@ -951,7 +951,7 @@ static int __axgbe_phy_config_aneg(struct axgbe_port *pdata)
} else if (axgbe_use_mode(pdata, AXGBE_MODE_SGMII_100)) {
axgbe_set_mode(pdata, AXGBE_MODE_SGMII_100);
} else {
- rte_intr_enable(&pdata->pci_dev->intr_handle);
+ rte_intr_enable(pdata->pci_dev->intr_handle);
return -EINVAL;
}
@@ -964,7 +964,7 @@ static int __axgbe_phy_config_aneg(struct axgbe_port *pdata)
pdata->kx_state = AXGBE_RX_BPA;
/* Re-enable auto-negotiation interrupt */
- rte_intr_enable(&pdata->pci_dev->intr_handle);
+ rte_intr_enable(pdata->pci_dev->intr_handle);
axgbe_an37_enable_interrupts(pdata);
axgbe_an_init(pdata);
diff --git a/drivers/net/bnx2x/bnx2x_ethdev.c b/drivers/net/bnx2x/bnx2x_ethdev.c
index 463886f17a..a34b2f078b 100644
--- a/drivers/net/bnx2x/bnx2x_ethdev.c
+++ b/drivers/net/bnx2x/bnx2x_ethdev.c
@@ -134,7 +134,7 @@ bnx2x_interrupt_handler(void *param)
PMD_DEBUG_PERIODIC_LOG(INFO, sc, "Interrupt handled");
bnx2x_interrupt_action(dev, 1);
- rte_intr_ack(&sc->pci_dev->intr_handle);
+ rte_intr_ack(sc->pci_dev->intr_handle);
}
static void bnx2x_periodic_start(void *param)
@@ -234,10 +234,10 @@ bnx2x_dev_start(struct rte_eth_dev *dev)
}
if (IS_PF(sc)) {
- rte_intr_callback_register(&sc->pci_dev->intr_handle,
+ rte_intr_callback_register(sc->pci_dev->intr_handle,
bnx2x_interrupt_handler, (void *)dev);
- if (rte_intr_enable(&sc->pci_dev->intr_handle))
+ if (rte_intr_enable(sc->pci_dev->intr_handle))
PMD_DRV_LOG(ERR, sc, "rte_intr_enable failed");
}
@@ -262,8 +262,8 @@ bnx2x_dev_stop(struct rte_eth_dev *dev)
bnx2x_dev_rxtx_init_dummy(dev);
if (IS_PF(sc)) {
- rte_intr_disable(&sc->pci_dev->intr_handle);
- rte_intr_callback_unregister(&sc->pci_dev->intr_handle,
+ rte_intr_disable(sc->pci_dev->intr_handle);
+ rte_intr_callback_unregister(sc->pci_dev->intr_handle,
bnx2x_interrupt_handler, (void *)dev);
/* stop the periodic callout */
diff --git a/drivers/net/bnxt/bnxt_ethdev.c b/drivers/net/bnxt/bnxt_ethdev.c
index de34a2f0bb..02598d8030 100644
--- a/drivers/net/bnxt/bnxt_ethdev.c
+++ b/drivers/net/bnxt/bnxt_ethdev.c
@@ -729,7 +729,7 @@ static int bnxt_alloc_prev_ring_stats(struct bnxt *bp)
static int bnxt_start_nic(struct bnxt *bp)
{
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(bp->eth_dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
uint32_t intr_vector = 0;
uint32_t queue_id, base = BNXT_MISC_VEC_ID;
uint32_t vec = BNXT_MISC_VEC_ID;
@@ -831,12 +831,10 @@ static int bnxt_start_nic(struct bnxt *bp)
return rc;
}
- if (rte_intr_dp_is_en(intr_handle) && !intr_handle->intr_vec) {
- intr_handle->intr_vec =
- rte_zmalloc("intr_vec",
- bp->eth_dev->data->nb_rx_queues *
- sizeof(int), 0);
- if (intr_handle->intr_vec == NULL) {
+ if (rte_intr_dp_is_en(intr_handle) &&
+ !rte_intr_handle_vec_list_base(intr_handle)) {
+ if (rte_intr_handle_vec_list_alloc(intr_handle, "intr_vec",
+ bp->eth_dev->data->nb_rx_queues)) {
PMD_DRV_LOG(ERR, "Failed to allocate %d rx_queues"
" intr_vec", bp->eth_dev->data->nb_rx_queues);
rc = -ENOMEM;
@@ -844,13 +842,15 @@ static int bnxt_start_nic(struct bnxt *bp)
}
PMD_DRV_LOG(DEBUG, "intr_handle->intr_vec = %p "
"intr_handle->nb_efd = %d intr_handle->max_intr = %d\n",
- intr_handle->intr_vec, intr_handle->nb_efd,
- intr_handle->max_intr);
+ rte_intr_handle_vec_list_base(intr_handle),
+ rte_intr_handle_nb_efd_get(intr_handle),
+ rte_intr_handle_max_intr_get(intr_handle));
for (queue_id = 0; queue_id < bp->eth_dev->data->nb_rx_queues;
queue_id++) {
- intr_handle->intr_vec[queue_id] =
- vec + BNXT_RX_VEC_START;
- if (vec < base + intr_handle->nb_efd - 1)
+ rte_intr_handle_vec_list_index_set(intr_handle,
+ queue_id, vec + BNXT_RX_VEC_START);
+ if (vec < base + rte_intr_handle_nb_efd_get(intr_handle)
+ - 1)
vec++;
}
}
@@ -1459,7 +1459,7 @@ static int bnxt_dev_stop(struct rte_eth_dev *eth_dev)
{
struct bnxt *bp = eth_dev->data->dev_private;
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
struct rte_eth_link link;
int ret;
@@ -1501,10 +1501,8 @@ static int bnxt_dev_stop(struct rte_eth_dev *eth_dev)
/* Clean queue intr-vector mapping */
rte_intr_efd_disable(intr_handle);
- if (intr_handle->intr_vec != NULL) {
- rte_free(intr_handle->intr_vec);
- intr_handle->intr_vec = NULL;
- }
+ if (rte_intr_handle_vec_list_base(intr_handle))
+ rte_intr_handle_vec_list_free(intr_handle);
bnxt_hwrm_port_clr_stats(bp);
bnxt_free_tx_mbufs(bp);
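
The queue-to-vector mapping loop converted above becomes one call per queue into the vec list
setter; roughly, as a sketch with an illustrative name and the bnxt macros assumed from the
code above:

    static void
    example_map_queues(struct rte_intr_handle *handle, uint16_t nb_rx_queues,
               uint32_t base)
    {
        uint32_t vec = base;
        uint16_t q;

        for (q = 0; q < nb_rx_queues; q++) {
            /* queue q fires on MSI-X vector vec + Rx offset */
            rte_intr_handle_vec_list_index_set(handle, q,
                               vec + BNXT_RX_VEC_START);
            if (vec < base + (uint32_t)rte_intr_handle_nb_efd_get(handle) - 1)
                vec++;
        }
    }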
diff --git a/drivers/net/bnxt/bnxt_irq.c b/drivers/net/bnxt/bnxt_irq.c
index 122a1f9908..508abfc844 100644
--- a/drivers/net/bnxt/bnxt_irq.c
+++ b/drivers/net/bnxt/bnxt_irq.c
@@ -67,7 +67,7 @@ void bnxt_int_handler(void *param)
int bnxt_free_int(struct bnxt *bp)
{
- struct rte_intr_handle *intr_handle = &bp->pdev->intr_handle;
+ struct rte_intr_handle *intr_handle = bp->pdev->intr_handle;
struct bnxt_irq *irq = bp->irq_tbl;
int rc = 0;
@@ -170,7 +170,7 @@ int bnxt_setup_int(struct bnxt *bp)
int bnxt_request_int(struct bnxt *bp)
{
- struct rte_intr_handle *intr_handle = &bp->pdev->intr_handle;
+ struct rte_intr_handle *intr_handle = bp->pdev->intr_handle;
struct bnxt_irq *irq = bp->irq_tbl;
int rc = 0;
diff --git a/drivers/net/dpaa/dpaa_ethdev.c b/drivers/net/dpaa/dpaa_ethdev.c
index 27d670f843..1f4336b4a7 100644
--- a/drivers/net/dpaa/dpaa_ethdev.c
+++ b/drivers/net/dpaa/dpaa_ethdev.c
@@ -219,7 +219,7 @@ dpaa_eth_dev_configure(struct rte_eth_dev *dev)
PMD_INIT_FUNC_TRACE();
dpaa_dev = container_of(rdev, struct rte_dpaa_device, device);
- intr_handle = &dpaa_dev->intr_handle;
+ intr_handle = dpaa_dev->intr_handle;
__fif = container_of(fif, struct __fman_if, __if);
/* Rx offloads which are enabled by default */
@@ -276,13 +276,14 @@ dpaa_eth_dev_configure(struct rte_eth_dev *dev)
}
/* if the interrupts were configured on this devices*/
- if (intr_handle && intr_handle->fd) {
+ if (intr_handle && rte_intr_handle_fd_get(intr_handle)) {
if (dev->data->dev_conf.intr_conf.lsc != 0)
rte_intr_callback_register(intr_handle,
dpaa_interrupt_handler,
(void *)dev);
- ret = dpaa_intr_enable(__fif->node_name, intr_handle->fd);
+ ret = dpaa_intr_enable(__fif->node_name,
+ rte_intr_handle_fd_get(intr_handle));
if (ret) {
if (dev->data->dev_conf.intr_conf.lsc != 0) {
rte_intr_callback_unregister(intr_handle,
@@ -389,9 +390,10 @@ static void dpaa_interrupt_handler(void *param)
int bytes_read;
dpaa_dev = container_of(rdev, struct rte_dpaa_device, device);
- intr_handle = &dpaa_dev->intr_handle;
+ intr_handle = dpaa_dev->intr_handle;
- bytes_read = read(intr_handle->fd, &buf, sizeof(uint64_t));
+ bytes_read = read(rte_intr_handle_fd_get(intr_handle), &buf,
+ sizeof(uint64_t));
if (bytes_read < 0)
DPAA_PMD_ERR("Error reading eventfd\n");
dpaa_eth_link_update(dev, 0);
@@ -461,7 +463,7 @@ static int dpaa_eth_dev_close(struct rte_eth_dev *dev)
}
dpaa_dev = container_of(rdev, struct rte_dpaa_device, device);
- intr_handle = &dpaa_dev->intr_handle;
+ intr_handle = dpaa_dev->intr_handle;
__fif = container_of(fif, struct __fman_if, __if);
ret = dpaa_eth_dev_stop(dev);
@@ -470,7 +472,7 @@ static int dpaa_eth_dev_close(struct rte_eth_dev *dev)
if (link->link_status && !link->link_autoneg)
dpaa_restart_link_autoneg(__fif->node_name);
- if (intr_handle && intr_handle->fd &&
+ if (intr_handle && rte_intr_handle_fd_get(intr_handle) &&
dev->data->dev_conf.intr_conf.lsc != 0) {
dpaa_intr_disable(__fif->node_name);
rte_intr_callback_unregister(intr_handle,
@@ -1101,20 +1103,33 @@ int dpaa_eth_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
dpaa_dev = container_of(rdev, struct rte_dpaa_device,
device);
- dev->intr_handle = &dpaa_dev->intr_handle;
- dev->intr_handle->intr_vec = rte_zmalloc(NULL,
- dpaa_push_mode_max_queue, 0);
- if (!dev->intr_handle->intr_vec) {
+ dev->intr_handle = dpaa_dev->intr_handle;
+ if (rte_intr_handle_vec_list_alloc(dev->intr_handle,
+ NULL, dpaa_push_mode_max_queue)) {
DPAA_PMD_ERR("intr_vec alloc failed");
return -ENOMEM;
}
- dev->intr_handle->nb_efd = dpaa_push_mode_max_queue;
- dev->intr_handle->max_intr = dpaa_push_mode_max_queue;
+ if (rte_intr_handle_nb_efd_set(dev->intr_handle,
+ dpaa_push_mode_max_queue))
+ return -rte_errno;
+
+ if (rte_intr_handle_max_intr_set(dev->intr_handle,
+ dpaa_push_mode_max_queue))
+ return -rte_errno;
}
- dev->intr_handle->type = RTE_INTR_HANDLE_EXT;
- dev->intr_handle->intr_vec[queue_idx] = queue_idx + 1;
- dev->intr_handle->efds[queue_idx] = q_fd;
+ if (rte_intr_handle_type_set(dev->intr_handle,
+ RTE_INTR_HANDLE_EXT))
+ return -rte_errno;
+
+ if (rte_intr_handle_vec_list_index_set(dev->intr_handle,
+ queue_idx, queue_idx + 1))
+ return -rte_errno;
+
+ if (rte_intr_handle_efds_index_set(dev->intr_handle, queue_idx,
+ q_fd))
+ return -rte_errno;
+
rxq->q_fd = q_fd;
}
rxq->bp_array = rte_dpaa_bpid_info;
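
For a push-mode queue the handle is of type EXT and each Rx queue carries its own eventfd;
condensed, the per-queue wiring above looks like the sketch below (example_* name is
illustrative, return convention mirrors the code above):

    #include <rte_errno.h>
    #include <rte_interrupts.h>

    static int
    example_map_rxq(struct rte_intr_handle *handle, uint16_t queue_idx, int q_fd)
    {
        if (rte_intr_handle_type_set(handle, RTE_INTR_HANDLE_EXT))
            return -rte_errno;

        /* queue N maps to vector N + 1, matching the conversion above */
        if (rte_intr_handle_vec_list_index_set(handle, queue_idx,
                               queue_idx + 1))
            return -rte_errno;

        /* the queue's own eventfd backs this vector */
        return rte_intr_handle_efds_index_set(handle, queue_idx, q_fd);
    }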
diff --git a/drivers/net/dpaa2/dpaa2_ethdev.c b/drivers/net/dpaa2/dpaa2_ethdev.c
index c12169578e..f95d3bbf53 100644
--- a/drivers/net/dpaa2/dpaa2_ethdev.c
+++ b/drivers/net/dpaa2/dpaa2_ethdev.c
@@ -1157,7 +1157,7 @@ dpaa2_dev_start(struct rte_eth_dev *dev)
struct rte_intr_handle *intr_handle;
dpaa2_dev = container_of(rdev, struct rte_dpaa2_device, device);
- intr_handle = &dpaa2_dev->intr_handle;
+ intr_handle = dpaa2_dev->intr_handle;
PMD_INIT_FUNC_TRACE();
@@ -1228,8 +1228,8 @@ dpaa2_dev_start(struct rte_eth_dev *dev)
}
/* if the interrupts were configured on this devices*/
- if (intr_handle && (intr_handle->fd) &&
- (dev->data->dev_conf.intr_conf.lsc != 0)) {
+ if (intr_handle && rte_intr_handle_fd_get(intr_handle) &&
+ dev->data->dev_conf.intr_conf.lsc != 0) {
/* Registering LSC interrupt handler */
rte_intr_callback_register(intr_handle,
dpaa2_interrupt_handler,
@@ -1268,8 +1268,8 @@ dpaa2_dev_stop(struct rte_eth_dev *dev)
PMD_INIT_FUNC_TRACE();
/* reset interrupt callback */
- if (intr_handle && (intr_handle->fd) &&
- (dev->data->dev_conf.intr_conf.lsc != 0)) {
+ if (intr_handle && rte_intr_handle_fd_get(intr_handle) &&
+ dev->data->dev_conf.intr_conf.lsc != 0) {
/*disable dpni irqs */
dpaa2_eth_setup_irqs(dev, 0);
diff --git a/drivers/net/e1000/em_ethdev.c b/drivers/net/e1000/em_ethdev.c
index a0ca371b02..fe20fc5e6c 100644
--- a/drivers/net/e1000/em_ethdev.c
+++ b/drivers/net/e1000/em_ethdev.c
@@ -237,7 +237,7 @@ static int
eth_em_dev_init(struct rte_eth_dev *eth_dev)
{
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
struct e1000_adapter *adapter =
E1000_DEV_PRIVATE(eth_dev->data->dev_private);
struct e1000_hw *hw =
@@ -525,7 +525,7 @@ eth_em_start(struct rte_eth_dev *dev)
struct e1000_hw *hw =
E1000_DEV_PRIVATE_TO_HW(dev->data->dev_private);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
int ret, mask;
uint32_t intr_vector = 0;
uint32_t *speeds;
@@ -575,12 +575,10 @@ eth_em_start(struct rte_eth_dev *dev)
}
if (rte_intr_dp_is_en(intr_handle)) {
- intr_handle->intr_vec =
- rte_zmalloc("intr_vec",
- dev->data->nb_rx_queues * sizeof(int), 0);
- if (intr_handle->intr_vec == NULL) {
+ if (rte_intr_handle_vec_list_alloc(intr_handle, "intr_vec",
+ dev->data->nb_rx_queues)) {
PMD_INIT_LOG(ERR, "Failed to allocate %d rx_queues"
- " intr_vec", dev->data->nb_rx_queues);
+ " intr_vec", dev->data->nb_rx_queues);
return -ENOMEM;
}
@@ -718,7 +716,7 @@ eth_em_stop(struct rte_eth_dev *dev)
struct rte_eth_link link;
struct e1000_hw *hw = E1000_DEV_PRIVATE_TO_HW(dev->data->dev_private);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
dev->data->dev_started = 0;
@@ -752,10 +750,8 @@ eth_em_stop(struct rte_eth_dev *dev)
/* Clean datapath event and queue/vec mapping */
rte_intr_efd_disable(intr_handle);
- if (intr_handle->intr_vec != NULL) {
- rte_free(intr_handle->intr_vec);
- intr_handle->intr_vec = NULL;
- }
+ if (rte_intr_handle_vec_list_base(intr_handle))
+ rte_intr_handle_vec_list_free(intr_handle);
return 0;
}
@@ -767,7 +763,7 @@ eth_em_close(struct rte_eth_dev *dev)
struct e1000_adapter *adapter =
E1000_DEV_PRIVATE(dev->data->dev_private);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
int ret;
if (rte_eal_process_type() != RTE_PROC_PRIMARY)
@@ -1008,7 +1004,7 @@ eth_em_rx_queue_intr_enable(struct rte_eth_dev *dev, __rte_unused uint16_t queue
{
struct e1000_hw *hw = E1000_DEV_PRIVATE_TO_HW(dev->data->dev_private);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
em_rxq_intr_enable(hw);
rte_intr_ack(intr_handle);
diff --git a/drivers/net/e1000/igb_ethdev.c b/drivers/net/e1000/igb_ethdev.c
index 10ee0f3341..66a6380496 100644
--- a/drivers/net/e1000/igb_ethdev.c
+++ b/drivers/net/e1000/igb_ethdev.c
@@ -515,7 +515,7 @@ igb_intr_enable(struct rte_eth_dev *dev)
struct e1000_hw *hw =
E1000_DEV_PRIVATE_TO_HW(dev->data->dev_private);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
if (rte_intr_allow_others(intr_handle) &&
dev->data->dev_conf.intr_conf.lsc != 0) {
@@ -532,7 +532,7 @@ igb_intr_disable(struct rte_eth_dev *dev)
struct e1000_hw *hw =
E1000_DEV_PRIVATE_TO_HW(dev->data->dev_private);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
if (rte_intr_allow_others(intr_handle) &&
dev->data->dev_conf.intr_conf.lsc != 0) {
@@ -853,12 +853,12 @@ eth_igb_dev_init(struct rte_eth_dev *eth_dev)
eth_dev->data->port_id, pci_dev->id.vendor_id,
pci_dev->id.device_id);
- rte_intr_callback_register(&pci_dev->intr_handle,
+ rte_intr_callback_register(pci_dev->intr_handle,
eth_igb_interrupt_handler,
(void *)eth_dev);
/* enable uio/vfio intr/eventfd mapping */
- rte_intr_enable(&pci_dev->intr_handle);
+ rte_intr_enable(pci_dev->intr_handle);
/* enable support intr */
igb_intr_enable(eth_dev);
@@ -1001,7 +1001,7 @@ eth_igbvf_dev_init(struct rte_eth_dev *eth_dev)
eth_dev->data->port_id, pci_dev->id.vendor_id,
pci_dev->id.device_id, "igb_mac_82576_vf");
- intr_handle = &pci_dev->intr_handle;
+ intr_handle = pci_dev->intr_handle;
rte_intr_callback_register(intr_handle,
eth_igbvf_interrupt_handler, eth_dev);
@@ -1205,7 +1205,7 @@ eth_igb_start(struct rte_eth_dev *dev)
struct e1000_adapter *adapter =
E1000_DEV_PRIVATE(dev->data->dev_private);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
int ret, mask;
uint32_t intr_vector = 0;
uint32_t ctrl_ext;
@@ -1264,11 +1264,11 @@ eth_igb_start(struct rte_eth_dev *dev)
return -1;
}
- if (rte_intr_dp_is_en(intr_handle) && !intr_handle->intr_vec) {
- intr_handle->intr_vec =
- rte_zmalloc("intr_vec",
- dev->data->nb_rx_queues * sizeof(int), 0);
- if (intr_handle->intr_vec == NULL) {
+ /* Allocate the vector list */
+ if (rte_intr_dp_is_en(intr_handle) &&
+ !rte_intr_handle_vec_list_base(intr_handle)) {
+ if (rte_intr_handle_vec_list_alloc(intr_handle, "intr_vec",
+ dev->data->nb_rx_queues)) {
PMD_INIT_LOG(ERR, "Failed to allocate %d rx_queues"
" intr_vec", dev->data->nb_rx_queues);
return -ENOMEM;
@@ -1427,7 +1427,7 @@ eth_igb_stop(struct rte_eth_dev *dev)
struct e1000_hw *hw = E1000_DEV_PRIVATE_TO_HW(dev->data->dev_private);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
struct rte_eth_link link;
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
struct e1000_adapter *adapter =
E1000_DEV_PRIVATE(dev->data->dev_private);
@@ -1471,10 +1471,8 @@ eth_igb_stop(struct rte_eth_dev *dev)
/* Clean datapath event and queue/vec mapping */
rte_intr_efd_disable(intr_handle);
- if (intr_handle->intr_vec != NULL) {
- rte_free(intr_handle->intr_vec);
- intr_handle->intr_vec = NULL;
- }
+ if (rte_intr_handle_vec_list_base(intr_handle))
+ rte_intr_handle_vec_list_free(intr_handle);
adapter->stopped = true;
dev->data->dev_started = 0;
@@ -1514,7 +1512,7 @@ eth_igb_close(struct rte_eth_dev *dev)
struct e1000_hw *hw = E1000_DEV_PRIVATE_TO_HW(dev->data->dev_private);
struct rte_eth_link link;
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
struct e1000_filter_info *filter_info =
E1000_DEV_PRIVATE_TO_FILTER_INFO(dev->data->dev_private);
int ret;
@@ -1540,10 +1538,9 @@ eth_igb_close(struct rte_eth_dev *dev)
igb_dev_free_queues(dev);
- if (intr_handle->intr_vec) {
- rte_free(intr_handle->intr_vec);
- intr_handle->intr_vec = NULL;
- }
+ /* Cleanup vector list */
+ if (rte_intr_handle_vec_list_base(intr_handle))
+ rte_intr_handle_vec_list_free(intr_handle);
memset(&link, 0, sizeof(link));
rte_eth_linkstatus_set(dev, &link);
@@ -2784,7 +2781,7 @@ static int eth_igb_rxq_interrupt_setup(struct rte_eth_dev *dev)
struct e1000_hw *hw =
E1000_DEV_PRIVATE_TO_HW(dev->data->dev_private);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
int misc_shift = rte_intr_allow_others(intr_handle) ? 1 : 0;
struct rte_eth_dev_info dev_info;
@@ -3301,7 +3298,7 @@ igbvf_dev_start(struct rte_eth_dev *dev)
struct e1000_adapter *adapter =
E1000_DEV_PRIVATE(dev->data->dev_private);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
int ret;
uint32_t intr_vector = 0;
@@ -3332,11 +3329,11 @@ igbvf_dev_start(struct rte_eth_dev *dev)
return ret;
}
- if (rte_intr_dp_is_en(intr_handle) && !intr_handle->intr_vec) {
- intr_handle->intr_vec =
- rte_zmalloc("intr_vec",
- dev->data->nb_rx_queues * sizeof(int), 0);
- if (!intr_handle->intr_vec) {
+ /* Allocate the vector list */
+ if (rte_intr_dp_is_en(intr_handle) &&
+ !rte_intr_handle_vec_list_base(intr_handle)) {
+ if (rte_intr_handle_vec_list_alloc(intr_handle, "intr_vec",
+ dev->data->nb_rx_queues)) {
PMD_INIT_LOG(ERR, "Failed to allocate %d rx_queues"
" intr_vec", dev->data->nb_rx_queues);
return -ENOMEM;
@@ -3358,7 +3355,7 @@ static int
igbvf_dev_stop(struct rte_eth_dev *dev)
{
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
struct e1000_adapter *adapter =
E1000_DEV_PRIVATE(dev->data->dev_private);
@@ -3382,10 +3379,10 @@ igbvf_dev_stop(struct rte_eth_dev *dev)
/* Clean datapath event and queue/vec mapping */
rte_intr_efd_disable(intr_handle);
- if (intr_handle->intr_vec) {
- rte_free(intr_handle->intr_vec);
- intr_handle->intr_vec = NULL;
- }
+
+ /* Clean vector list */
+ if (rte_intr_handle_vec_list_base(intr_handle))
+ rte_intr_handle_vec_list_free(intr_handle);
adapter->stopped = true;
dev->data->dev_started = 0;
@@ -3423,7 +3420,7 @@ igbvf_dev_close(struct rte_eth_dev *dev)
memset(&addr, 0, sizeof(addr));
igbvf_default_mac_addr_set(dev, &addr);
- rte_intr_callback_unregister(&pci_dev->intr_handle,
+ rte_intr_callback_unregister(pci_dev->intr_handle,
eth_igbvf_interrupt_handler,
(void *)dev);
@@ -5145,7 +5142,7 @@ eth_igb_rx_queue_intr_disable(struct rte_eth_dev *dev, uint16_t queue_id)
struct e1000_hw *hw =
E1000_DEV_PRIVATE_TO_HW(dev->data->dev_private);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
uint32_t vec = E1000_MISC_VEC_ID;
if (rte_intr_allow_others(intr_handle))
@@ -5165,7 +5162,7 @@ eth_igb_rx_queue_intr_enable(struct rte_eth_dev *dev, uint16_t queue_id)
struct e1000_hw *hw =
E1000_DEV_PRIVATE_TO_HW(dev->data->dev_private);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
uint32_t vec = E1000_MISC_VEC_ID;
if (rte_intr_allow_others(intr_handle))
@@ -5243,7 +5240,7 @@ eth_igb_configure_msix_intr(struct rte_eth_dev *dev)
uint32_t base = E1000_MISC_VEC_ID;
uint32_t misc_shift = 0;
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
/* won't configure msix register if no mapping is done
* between intr vector and event fd
@@ -5284,8 +5281,9 @@ eth_igb_configure_msix_intr(struct rte_eth_dev *dev)
E1000_WRITE_REG(hw, E1000_GPIE, E1000_GPIE_MSIX_MODE |
E1000_GPIE_PBA | E1000_GPIE_EIAME |
E1000_GPIE_NSICR);
- intr_mask = RTE_LEN2MASK(intr_handle->nb_efd, uint32_t) <<
- misc_shift;
+ intr_mask =
+ RTE_LEN2MASK(rte_intr_handle_nb_efd_get(intr_handle),
+ uint32_t) << misc_shift;
if (dev->data->dev_conf.intr_conf.lsc != 0)
intr_mask |= (1 << IGB_MSIX_OTHER_INTR_VEC);
@@ -5303,8 +5301,8 @@ eth_igb_configure_msix_intr(struct rte_eth_dev *dev)
/* use EIAM to auto-mask when MSI-X interrupt
* is asserted, this saves a register write for every interrupt
*/
- intr_mask = RTE_LEN2MASK(intr_handle->nb_efd, uint32_t) <<
- misc_shift;
+ intr_mask = RTE_LEN2MASK(rte_intr_handle_nb_efd_get(intr_handle),
+ uint32_t) << misc_shift;
if (dev->data->dev_conf.intr_conf.lsc != 0)
intr_mask |= (1 << IGB_MSIX_OTHER_INTR_VEC);
@@ -5314,8 +5312,8 @@ eth_igb_configure_msix_intr(struct rte_eth_dev *dev)
for (queue_id = 0; queue_id < dev->data->nb_rx_queues; queue_id++) {
eth_igb_assign_msix_vector(hw, 0, queue_id, vec);
- intr_handle->intr_vec[queue_id] = vec;
- if (vec < base + intr_handle->nb_efd - 1)
+ rte_intr_handle_vec_list_index_set(intr_handle, queue_id, vec);
+ if (vec < base + rte_intr_handle_nb_efd_get(intr_handle) - 1)
vec++;
}
diff --git a/drivers/net/ena/ena_ethdev.c b/drivers/net/ena/ena_ethdev.c
index 4cebf60a68..f73d7bb5bc 100644
--- a/drivers/net/ena/ena_ethdev.c
+++ b/drivers/net/ena/ena_ethdev.c
@@ -473,7 +473,7 @@ static void ena_config_debug_area(struct ena_adapter *adapter)
static int ena_close(struct rte_eth_dev *dev)
{
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
struct ena_adapter *adapter = dev->data->dev_private;
int ret = 0;
@@ -947,7 +947,7 @@ static int ena_stop(struct rte_eth_dev *dev)
struct ena_adapter *adapter = dev->data->dev_private;
struct ena_com_dev *ena_dev = &adapter->ena_dev;
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
int rc;
/* Cannot free memory in secondary process */
@@ -969,10 +969,10 @@ static int ena_stop(struct rte_eth_dev *dev)
rte_intr_disable(intr_handle);
rte_intr_efd_disable(intr_handle);
- if (intr_handle->intr_vec != NULL) {
- rte_free(intr_handle->intr_vec);
- intr_handle->intr_vec = NULL;
- }
+
+ /* Cleanup vector list */
+ if (rte_intr_handle_vec_list_base(intr_handle))
+ rte_intr_handle_vec_list_free(intr_handle);
rte_intr_enable(intr_handle);
@@ -988,7 +988,7 @@ static int ena_create_io_queue(struct rte_eth_dev *dev, struct ena_ring *ring)
struct ena_adapter *adapter = ring->adapter;
struct ena_com_dev *ena_dev = &adapter->ena_dev;
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
struct ena_com_create_io_ctx ctx =
/* policy set to _HOST just to satisfy icc compiler */
{ ENA_ADMIN_PLACEMENT_POLICY_HOST,
@@ -1008,7 +1008,10 @@ static int ena_create_io_queue(struct rte_eth_dev *dev, struct ena_ring *ring)
ena_qid = ENA_IO_RXQ_IDX(ring->id);
ctx.direction = ENA_COM_IO_QUEUE_DIRECTION_RX;
if (rte_intr_dp_is_en(intr_handle))
- ctx.msix_vector = intr_handle->intr_vec[ring->id];
+ ctx.msix_vector =
+ rte_intr_handle_vec_list_index_get(intr_handle,
+ ring->id);
+
for (i = 0; i < ring->ring_size; i++)
ring->empty_rx_reqs[i] = i;
}
@@ -1665,7 +1668,7 @@ static int eth_ena_dev_init(struct rte_eth_dev *eth_dev)
pci_dev->addr.devid,
pci_dev->addr.function);
- intr_handle = &pci_dev->intr_handle;
+ intr_handle = pci_dev->intr_handle;
adapter->regs = pci_dev->mem_resource[ENA_REGS_BAR].addr;
adapter->dev_mem_base = pci_dev->mem_resource[ENA_MEM_BAR].addr;
@@ -2817,7 +2820,7 @@ static int ena_parse_devargs(struct ena_adapter *adapter,
static int ena_setup_rx_intr(struct rte_eth_dev *dev)
{
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
int rc;
uint16_t vectors_nb, i;
bool rx_intr_requested = dev->data->dev_conf.intr_conf.rxq;
@@ -2844,9 +2847,9 @@ static int ena_setup_rx_intr(struct rte_eth_dev *dev)
goto enable_intr;
}
- intr_handle->intr_vec = rte_zmalloc("intr_vec",
- dev->data->nb_rx_queues * sizeof(*intr_handle->intr_vec), 0);
- if (intr_handle->intr_vec == NULL) {
+ /* Allocate the vector list */
+ if (rte_intr_handle_vec_list_alloc(intr_handle, "intr_vec",
+ dev->data->nb_rx_queues)) {
PMD_DRV_LOG(ERR,
"Failed to allocate interrupt vector for %d queues\n",
dev->data->nb_rx_queues);
@@ -2865,7 +2868,9 @@ static int ena_setup_rx_intr(struct rte_eth_dev *dev)
}
for (i = 0; i < vectors_nb; ++i)
- intr_handle->intr_vec[i] = RTE_INTR_VEC_RXTX_OFFSET + i;
+ if (rte_intr_handle_vec_list_index_set(intr_handle, i,
+ RTE_INTR_VEC_RXTX_OFFSET + i))
+ goto disable_intr_efd;
rte_intr_enable(intr_handle);
return 0;
@@ -2873,8 +2878,7 @@ static int ena_setup_rx_intr(struct rte_eth_dev *dev)
disable_intr_efd:
rte_intr_efd_disable(intr_handle);
free_intr_vec:
- rte_free(intr_handle->intr_vec);
- intr_handle->intr_vec = NULL;
+ rte_intr_handle_vec_list_free(intr_handle);
enable_intr:
rte_intr_enable(intr_handle);
return rc;
diff --git a/drivers/net/enic/enic_main.c b/drivers/net/enic/enic_main.c
index 2affd380c6..0045dbd3f5 100644
--- a/drivers/net/enic/enic_main.c
+++ b/drivers/net/enic/enic_main.c
@@ -448,7 +448,7 @@ enic_intr_handler(void *arg)
rte_eth_dev_callback_process(dev, RTE_ETH_EVENT_INTR_LSC, NULL);
enic_log_q_error(enic);
/* Re-enable irq in case of INTx */
- rte_intr_ack(&enic->pdev->intr_handle);
+ rte_intr_ack(enic->pdev->intr_handle);
}
static int enic_rxq_intr_init(struct enic *enic)
@@ -477,14 +477,16 @@ static int enic_rxq_intr_init(struct enic *enic)
" interrupts\n");
return err;
}
- intr_handle->intr_vec = rte_zmalloc("enic_intr_vec",
- rxq_intr_count * sizeof(int), 0);
- if (intr_handle->intr_vec == NULL) {
+
+ if (rte_intr_handle_vec_list_alloc(intr_handle, "enic_intr_vec",
+ rxq_intr_count)) {
dev_err(enic, "Failed to allocate intr_vec\n");
return -ENOMEM;
}
for (i = 0; i < rxq_intr_count; i++)
- intr_handle->intr_vec[i] = i + ENICPMD_RXQ_INTR_OFFSET;
+ if (rte_intr_handle_vec_list_index_set(intr_handle, i,
+ i + ENICPMD_RXQ_INTR_OFFSET))
+ return -rte_errno;
return 0;
}
@@ -494,10 +496,9 @@ static void enic_rxq_intr_deinit(struct enic *enic)
intr_handle = enic->rte_dev->intr_handle;
rte_intr_efd_disable(intr_handle);
- if (intr_handle->intr_vec != NULL) {
- rte_free(intr_handle->intr_vec);
- intr_handle->intr_vec = NULL;
- }
+
+ if (rte_intr_handle_vec_list_base(intr_handle))
+ rte_intr_handle_vec_list_free(intr_handle);
}
static void enic_prep_wq_for_simple_tx(struct enic *enic, uint16_t queue_idx)
@@ -667,10 +668,10 @@ int enic_enable(struct enic *enic)
vnic_dev_enable_wait(enic->vdev);
/* Register and enable error interrupt */
- rte_intr_callback_register(&(enic->pdev->intr_handle),
+ rte_intr_callback_register(enic->pdev->intr_handle,
enic_intr_handler, (void *)enic->rte_dev);
- rte_intr_enable(&(enic->pdev->intr_handle));
+ rte_intr_enable(enic->pdev->intr_handle);
/* Unmask LSC interrupt */
vnic_intr_unmask(&enic->intr[ENICPMD_LSC_INTR_OFFSET]);
@@ -1112,8 +1113,8 @@ int enic_disable(struct enic *enic)
(void)vnic_intr_masked(&enic->intr[i]); /* flush write */
}
enic_rxq_intr_deinit(enic);
- rte_intr_disable(&enic->pdev->intr_handle);
- rte_intr_callback_unregister(&enic->pdev->intr_handle,
+ rte_intr_disable(enic->pdev->intr_handle);
+ rte_intr_callback_unregister(enic->pdev->intr_handle,
enic_intr_handler,
(void *)enic->rte_dev);
diff --git a/drivers/net/failsafe/failsafe.c b/drivers/net/failsafe/failsafe.c
index 8216063a3d..b5c53e4286 100644
--- a/drivers/net/failsafe/failsafe.c
+++ b/drivers/net/failsafe/failsafe.c
@@ -266,11 +266,25 @@ fs_eth_dev_create(struct rte_vdev_device *vdev)
mac->addr_bytes[4], mac->addr_bytes[5]);
dev->data->dev_flags |= RTE_ETH_DEV_INTR_LSC |
RTE_ETH_DEV_AUTOFILL_QUEUE_XSTATS;
- PRIV(dev)->intr_handle = (struct rte_intr_handle){
- .fd = -1,
- .type = RTE_INTR_HANDLE_EXT,
- };
+
+ /* Allocate interrupt instance */
+ PRIV(dev)->intr_handle =
+ rte_intr_handle_instance_alloc(RTE_INTR_HANDLE_DEFAULT_SIZE,
+ true);
+ if (!PRIV(dev)->intr_handle) {
+ ERROR("Failed to allocate intr handle");
+ goto cancel_alarm;
+ }
+
+ if (rte_intr_handle_fd_set(PRIV(dev)->intr_handle, -1))
+ goto cancel_alarm;
+
+ if (rte_intr_handle_type_set(PRIV(dev)->intr_handle,
+ RTE_INTR_HANDLE_EXT))
+ goto cancel_alarm;
+
rte_eth_dev_probing_finish(dev);
+
return 0;
cancel_alarm:
failsafe_hotplug_alarm_cancel(dev);
@@ -299,6 +313,8 @@ fs_rte_eth_free(const char *name)
return 0; /* port already released */
ret = failsafe_eth_dev_close(dev);
rte_eth_dev_release_port(dev);
+ if (PRIV(dev)->intr_handle)
+ rte_intr_handle_instance_free(PRIV(dev)->intr_handle);
return ret;
}
diff --git a/drivers/net/failsafe/failsafe_intr.c b/drivers/net/failsafe/failsafe_intr.c
index 602c04033c..57df67c6c5 100644
--- a/drivers/net/failsafe/failsafe_intr.c
+++ b/drivers/net/failsafe/failsafe_intr.c
@@ -410,12 +410,11 @@ fs_rx_intr_vec_uninstall(struct fs_priv *priv)
{
struct rte_intr_handle *intr_handle;
- intr_handle = &priv->intr_handle;
- if (intr_handle->intr_vec != NULL) {
- free(intr_handle->intr_vec);
- intr_handle->intr_vec = NULL;
- }
- intr_handle->nb_efd = 0;
+ intr_handle = priv->intr_handle;
+ if (rte_intr_handle_vec_list_base(intr_handle))
+ rte_intr_handle_vec_list_free(intr_handle);
+
+ rte_intr_handle_nb_efd_set(intr_handle, 0);
}
/**
@@ -439,11 +438,10 @@ fs_rx_intr_vec_install(struct fs_priv *priv)
rxqs_n = priv->data->nb_rx_queues;
n = RTE_MIN(rxqs_n, (uint32_t)RTE_MAX_RXTX_INTR_VEC_ID);
count = 0;
- intr_handle = &priv->intr_handle;
- RTE_ASSERT(intr_handle->intr_vec == NULL);
+ intr_handle = priv->intr_handle;
+ RTE_ASSERT(rte_intr_handle_vec_list_base(intr_handle) == NULL);
/* Allocate the interrupt vector of the failsafe Rx proxy interrupts */
- intr_handle->intr_vec = malloc(n * sizeof(intr_handle->intr_vec[0]));
- if (intr_handle->intr_vec == NULL) {
+ if (rte_intr_handle_vec_list_alloc(intr_handle, NULL, n)) {
fs_rx_intr_vec_uninstall(priv);
rte_errno = ENOMEM;
ERROR("Failed to allocate memory for interrupt vector,"
@@ -456,9 +454,9 @@ fs_rx_intr_vec_install(struct fs_priv *priv)
/* Skip queues that cannot request interrupts. */
if (rxq == NULL || rxq->event_fd < 0) {
/* Use invalid intr_vec[] index to disable entry. */
- intr_handle->intr_vec[i] =
- RTE_INTR_VEC_RXTX_OFFSET +
- RTE_MAX_RXTX_INTR_VEC_ID;
+ if (rte_intr_handle_vec_list_index_set(intr_handle, i,
+ RTE_INTR_VEC_RXTX_OFFSET + RTE_MAX_RXTX_INTR_VEC_ID))
+ return -rte_errno;
continue;
}
if (count >= RTE_MAX_RXTX_INTR_VEC_ID) {
@@ -469,15 +467,24 @@ fs_rx_intr_vec_install(struct fs_priv *priv)
fs_rx_intr_vec_uninstall(priv);
return -rte_errno;
}
- intr_handle->intr_vec[i] = RTE_INTR_VEC_RXTX_OFFSET + count;
- intr_handle->efds[count] = rxq->event_fd;
+ if (rte_intr_handle_vec_list_index_set(intr_handle, i,
+ RTE_INTR_VEC_RXTX_OFFSET + count))
+ return -rte_errno;
+
+ if (rte_intr_handle_efds_index_set(intr_handle, count,
+ rxq->event_fd))
+ return -rte_errno;
count++;
}
if (count == 0) {
fs_rx_intr_vec_uninstall(priv);
} else {
- intr_handle->nb_efd = count;
- intr_handle->efd_counter_size = sizeof(uint64_t);
+ if (rte_intr_handle_nb_efd_set(intr_handle, count))
+ return -rte_errno;
+
+ if (rte_intr_handle_efd_counter_size_set(intr_handle,
+ sizeof(uint64_t)))
+ return -rte_errno;
}
return 0;
}
@@ -499,7 +506,7 @@ failsafe_rx_intr_uninstall(struct rte_eth_dev *dev)
struct rte_intr_handle *intr_handle;
priv = PRIV(dev);
- intr_handle = &priv->intr_handle;
+ intr_handle = priv->intr_handle;
rte_intr_free_epoll_fd(intr_handle);
fs_rx_event_proxy_uninstall(priv);
fs_rx_intr_vec_uninstall(priv);
@@ -530,6 +537,6 @@ failsafe_rx_intr_install(struct rte_eth_dev *dev)
fs_rx_intr_vec_uninstall(priv);
return -rte_errno;
}
- dev->intr_handle = &priv->intr_handle;
+ dev->intr_handle = priv->intr_handle;
return 0;
}
diff --git a/drivers/net/failsafe/failsafe_ops.c b/drivers/net/failsafe/failsafe_ops.c
index 5ff33e03e0..a3f5f34dd3 100644
--- a/drivers/net/failsafe/failsafe_ops.c
+++ b/drivers/net/failsafe/failsafe_ops.c
@@ -398,15 +398,24 @@ fs_rx_queue_setup(struct rte_eth_dev *dev,
* For the time being, fake as if we are using MSIX interrupts,
* this will cause rte_intr_efd_enable to allocate an eventfd for us.
*/
- struct rte_intr_handle intr_handle = {
- .type = RTE_INTR_HANDLE_VFIO_MSIX,
- .efds = { -1, },
- };
+ struct rte_intr_handle *intr_handle;
struct sub_device *sdev;
struct rxq *rxq;
uint8_t i;
int ret;
+ intr_handle =
+ rte_intr_handle_instance_alloc(RTE_INTR_HANDLE_DEFAULT_SIZE,
+ false);
+ if (!intr_handle)
+ return -ENOMEM;
+
+ if (rte_intr_handle_type_set(intr_handle, RTE_INTR_HANDLE_VFIO_MSIX))
+ return -rte_errno;
+
+ if (rte_intr_handle_efds_index_set(intr_handle, 0, -1))
+ return -rte_errno;
+
fs_lock(dev, 0);
if (rx_conf->rx_deferred_start) {
FOREACH_SUBDEV_STATE(sdev, i, dev, DEV_PROBED) {
@@ -440,12 +449,12 @@ fs_rx_queue_setup(struct rte_eth_dev *dev,
rxq->info.nb_desc = nb_rx_desc;
rxq->priv = PRIV(dev);
rxq->sdev = PRIV(dev)->subs;
- ret = rte_intr_efd_enable(&intr_handle, 1);
+ ret = rte_intr_efd_enable(intr_handle, 1);
if (ret < 0) {
fs_unlock(dev, 0);
return ret;
}
- rxq->event_fd = intr_handle.efds[0];
+ rxq->event_fd = rte_intr_handle_efds_index_get(intr_handle, 0);
dev->data->rx_queues[rx_queue_id] = rxq;
FOREACH_SUBDEV_STATE(sdev, i, dev, DEV_ACTIVE) {
ret = rte_eth_rx_queue_setup(PORT_ID(sdev),
@@ -458,10 +467,12 @@ fs_rx_queue_setup(struct rte_eth_dev *dev,
}
}
fs_unlock(dev, 0);
+ rte_intr_handle_instance_free(intr_handle);
return 0;
free_rxq:
fs_rx_queue_release(rxq);
fs_unlock(dev, 0);
+ rte_intr_handle_instance_free(intr_handle);
return ret;
}
diff --git a/drivers/net/failsafe/failsafe_private.h b/drivers/net/failsafe/failsafe_private.h
index cd39d103c6..a80f5e2caf 100644
--- a/drivers/net/failsafe/failsafe_private.h
+++ b/drivers/net/failsafe/failsafe_private.h
@@ -166,7 +166,7 @@ struct fs_priv {
struct rte_ether_addr *mcast_addrs;
/* current capabilities */
struct rte_eth_dev_owner my_owner; /* Unique owner. */
- struct rte_intr_handle intr_handle; /* Port interrupt handle. */
+ struct rte_intr_handle *intr_handle; /* Port interrupt handle. */
/*
* Fail-safe state machine.
* This level will be tracking state of the EAL and eth
diff --git a/drivers/net/fm10k/fm10k_ethdev.c b/drivers/net/fm10k/fm10k_ethdev.c
index 3236290e40..6f58c2543f 100644
--- a/drivers/net/fm10k/fm10k_ethdev.c
+++ b/drivers/net/fm10k/fm10k_ethdev.c
@@ -32,7 +32,8 @@
#define BIT_MASK_PER_UINT32 ((1 << CHARS_PER_UINT32) - 1)
/* default 1:1 map from queue ID to interrupt vector ID */
-#define Q2V(pci_dev, queue_id) ((pci_dev)->intr_handle.intr_vec[queue_id])
+#define Q2V(pci_dev, queue_id) \
+ (rte_intr_handle_vec_list_index_get((pci_dev)->intr_handle, queue_id))
/* First 64 Logical ports for PF/VMDQ, second 64 for Flow director */
#define MAX_LPORT_NUM 128
@@ -690,7 +691,7 @@ fm10k_dev_rx_init(struct rte_eth_dev *dev)
struct fm10k_hw *hw = FM10K_DEV_PRIVATE_TO_HW(dev->data->dev_private);
struct fm10k_macvlan_filter_info *macvlan;
struct rte_pci_device *pdev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pdev->intr_handle;
+ struct rte_intr_handle *intr_handle = pdev->intr_handle;
int i, ret;
struct fm10k_rx_queue *rxq;
uint64_t base_addr;
@@ -1158,7 +1159,7 @@ fm10k_dev_stop(struct rte_eth_dev *dev)
{
struct fm10k_hw *hw = FM10K_DEV_PRIVATE_TO_HW(dev->data->dev_private);
struct rte_pci_device *pdev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pdev->intr_handle;
+ struct rte_intr_handle *intr_handle = pdev->intr_handle;
int i;
PMD_INIT_FUNC_TRACE();
@@ -1187,8 +1188,7 @@ fm10k_dev_stop(struct rte_eth_dev *dev)
}
/* Clean datapath event and queue/vec mapping */
rte_intr_efd_disable(intr_handle);
- rte_free(intr_handle->intr_vec);
- intr_handle->intr_vec = NULL;
+ rte_intr_handle_vec_list_free(intr_handle);
return 0;
}
@@ -2368,7 +2368,7 @@ fm10k_dev_rx_queue_intr_enable(struct rte_eth_dev *dev, uint16_t queue_id)
else
FM10K_WRITE_REG(hw, FM10K_VFITR(Q2V(pdev, queue_id)),
FM10K_ITR_AUTOMASK | FM10K_ITR_MASK_CLEAR);
- rte_intr_ack(&pdev->intr_handle);
+ rte_intr_ack(pdev->intr_handle);
return 0;
}
@@ -2393,7 +2393,7 @@ fm10k_dev_rxq_interrupt_setup(struct rte_eth_dev *dev)
{
struct fm10k_hw *hw = FM10K_DEV_PRIVATE_TO_HW(dev->data->dev_private);
struct rte_pci_device *pdev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pdev->intr_handle;
+ struct rte_intr_handle *intr_handle = pdev->intr_handle;
uint32_t intr_vector, vec;
uint16_t queue_id;
int result = 0;
@@ -2421,15 +2421,17 @@ fm10k_dev_rxq_interrupt_setup(struct rte_eth_dev *dev)
}
if (rte_intr_dp_is_en(intr_handle) && !result) {
- intr_handle->intr_vec = rte_zmalloc("intr_vec",
- dev->data->nb_rx_queues * sizeof(int), 0);
- if (intr_handle->intr_vec) {
+ if (!rte_intr_handle_vec_list_alloc(intr_handle, "intr_vec",
+ dev->data->nb_rx_queues)) {
for (queue_id = 0, vec = FM10K_RX_VEC_START;
queue_id < dev->data->nb_rx_queues;
queue_id++) {
- intr_handle->intr_vec[queue_id] = vec;
- if (vec < intr_handle->nb_efd - 1
- + FM10K_RX_VEC_START)
+ rte_intr_handle_vec_list_index_set(intr_handle,
+ queue_id, vec);
+ int nb_efd =
+ rte_intr_handle_nb_efd_get(intr_handle);
+ if (vec < (uint32_t)nb_efd - 1 +
+ FM10K_RX_VEC_START)
vec++;
}
} else {
@@ -2788,7 +2790,7 @@ fm10k_dev_close(struct rte_eth_dev *dev)
{
struct fm10k_hw *hw = FM10K_DEV_PRIVATE_TO_HW(dev->data->dev_private);
struct rte_pci_device *pdev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pdev->intr_handle;
+ struct rte_intr_handle *intr_handle = pdev->intr_handle;
int ret;
PMD_INIT_FUNC_TRACE();
@@ -3054,7 +3056,7 @@ eth_fm10k_dev_init(struct rte_eth_dev *dev)
{
struct fm10k_hw *hw = FM10K_DEV_PRIVATE_TO_HW(dev->data->dev_private);
struct rte_pci_device *pdev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pdev->intr_handle;
+ struct rte_intr_handle *intr_handle = pdev->intr_handle;
int diag, i;
struct fm10k_macvlan_filter_info *macvlan;
diff --git a/drivers/net/hinic/hinic_pmd_ethdev.c b/drivers/net/hinic/hinic_pmd_ethdev.c
index 1a72401546..89c576a902 100644
--- a/drivers/net/hinic/hinic_pmd_ethdev.c
+++ b/drivers/net/hinic/hinic_pmd_ethdev.c
@@ -1225,13 +1225,13 @@ static void hinic_disable_interrupt(struct rte_eth_dev *dev)
hinic_set_msix_state(nic_dev->hwdev, 0, HINIC_MSIX_DISABLE);
/* disable rte interrupt */
- ret = rte_intr_disable(&pci_dev->intr_handle);
+ ret = rte_intr_disable(pci_dev->intr_handle);
if (ret)
PMD_DRV_LOG(ERR, "Disable intr failed: %d", ret);
do {
ret =
- rte_intr_callback_unregister(&pci_dev->intr_handle,
+ rte_intr_callback_unregister(pci_dev->intr_handle,
hinic_dev_interrupt_handler, dev);
if (ret >= 0) {
break;
@@ -3134,7 +3134,7 @@ static int hinic_func_init(struct rte_eth_dev *eth_dev)
}
/* register callback func to eal lib */
- rc = rte_intr_callback_register(&pci_dev->intr_handle,
+ rc = rte_intr_callback_register(pci_dev->intr_handle,
hinic_dev_interrupt_handler,
(void *)eth_dev);
if (rc) {
@@ -3144,7 +3144,7 @@ static int hinic_func_init(struct rte_eth_dev *eth_dev)
}
/* enable uio/vfio intr/eventfd mapping */
- rc = rte_intr_enable(&pci_dev->intr_handle);
+ rc = rte_intr_enable(pci_dev->intr_handle);
if (rc) {
PMD_DRV_LOG(ERR, "Enable rte interrupt failed, dev_name: %s",
eth_dev->data->name);
@@ -3174,7 +3174,7 @@ static int hinic_func_init(struct rte_eth_dev *eth_dev)
return 0;
enable_intr_fail:
- (void)rte_intr_callback_unregister(&pci_dev->intr_handle,
+ (void)rte_intr_callback_unregister(pci_dev->intr_handle,
hinic_dev_interrupt_handler,
(void *)eth_dev);
diff --git a/drivers/net/hns3/hns3_ethdev.c b/drivers/net/hns3/hns3_ethdev.c
index 7d37004972..1b46e81b5b 100644
--- a/drivers/net/hns3/hns3_ethdev.c
+++ b/drivers/net/hns3/hns3_ethdev.c
@@ -5275,7 +5275,7 @@ hns3_init_pf(struct rte_eth_dev *eth_dev)
hns3_config_all_msix_error(hw, true);
- ret = rte_intr_callback_register(&pci_dev->intr_handle,
+ ret = rte_intr_callback_register(pci_dev->intr_handle,
hns3_interrupt_handler,
eth_dev);
if (ret) {
@@ -5288,7 +5288,7 @@ hns3_init_pf(struct rte_eth_dev *eth_dev)
goto err_get_config;
/* Enable interrupt */
- rte_intr_enable(&pci_dev->intr_handle);
+ rte_intr_enable(pci_dev->intr_handle);
hns3_pf_enable_irq0(hw);
/* Get configuration */
@@ -5347,8 +5347,8 @@ hns3_init_pf(struct rte_eth_dev *eth_dev)
hns3_tqp_stats_uninit(hw);
err_get_config:
hns3_pf_disable_irq0(hw);
- rte_intr_disable(&pci_dev->intr_handle);
- hns3_intr_unregister(&pci_dev->intr_handle, hns3_interrupt_handler,
+ rte_intr_disable(pci_dev->intr_handle);
+ hns3_intr_unregister(pci_dev->intr_handle, hns3_interrupt_handler,
eth_dev);
err_intr_callback_register:
err_cmd_init:
@@ -5381,8 +5381,8 @@ hns3_uninit_pf(struct rte_eth_dev *eth_dev)
hns3_tqp_stats_uninit(hw);
hns3_config_mac_tnl_int(hw, false);
hns3_pf_disable_irq0(hw);
- rte_intr_disable(&pci_dev->intr_handle);
- hns3_intr_unregister(&pci_dev->intr_handle, hns3_interrupt_handler,
+ rte_intr_disable(pci_dev->intr_handle);
+ hns3_intr_unregister(pci_dev->intr_handle, hns3_interrupt_handler,
eth_dev);
hns3_config_all_msix_error(hw, false);
hns3_cmd_uninit(hw);
@@ -5716,7 +5716,7 @@ static int
hns3_map_rx_interrupt(struct rte_eth_dev *dev)
{
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
struct hns3_hw *hw = HNS3_DEV_PRIVATE_TO_HW(dev->data->dev_private);
uint16_t base = RTE_INTR_VEC_ZERO_OFFSET;
uint16_t vec = RTE_INTR_VEC_ZERO_OFFSET;
@@ -5739,11 +5739,10 @@ hns3_map_rx_interrupt(struct rte_eth_dev *dev)
if (rte_intr_efd_enable(intr_handle, intr_vector))
return -EINVAL;
- if (intr_handle->intr_vec == NULL) {
- intr_handle->intr_vec =
- rte_zmalloc("intr_vec",
- hw->used_rx_queues * sizeof(int), 0);
- if (intr_handle->intr_vec == NULL) {
+ /* Allocate vector list */
+ if (!rte_intr_handle_vec_list_base(intr_handle)) {
+ if (rte_intr_handle_vec_list_alloc(intr_handle, "intr_vec",
+ hw->used_rx_queues)) {
hns3_err(hw, "failed to allocate %u rx_queues intr_vec",
hw->used_rx_queues);
ret = -ENOMEM;
@@ -5761,20 +5760,21 @@ hns3_map_rx_interrupt(struct rte_eth_dev *dev)
HNS3_RING_TYPE_RX, q_id);
if (ret)
goto bind_vector_error;
- intr_handle->intr_vec[q_id] = vec;
+
+ if (rte_intr_handle_vec_list_index_set(intr_handle, q_id, vec))
+ goto bind_vector_error;
/*
* If there are not enough efds (e.g. not enough interrupt),
* remaining queues will be bond to the last interrupt.
*/
- if (vec < base + intr_handle->nb_efd - 1)
+ if (vec < base + rte_intr_handle_nb_efd_get(intr_handle) - 1)
vec++;
}
rte_intr_enable(intr_handle);
return 0;
bind_vector_error:
- rte_free(intr_handle->intr_vec);
- intr_handle->intr_vec = NULL;
+ rte_intr_handle_vec_list_free(intr_handle);
alloc_intr_vec_error:
rte_intr_efd_disable(intr_handle);
return ret;
@@ -5785,7 +5785,7 @@ hns3_restore_rx_interrupt(struct hns3_hw *hw)
{
struct rte_eth_dev *dev = &rte_eth_devices[hw->data->port_id];
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
uint16_t q_id;
int ret;
@@ -5795,8 +5795,9 @@ hns3_restore_rx_interrupt(struct hns3_hw *hw)
if (rte_intr_dp_is_en(intr_handle)) {
for (q_id = 0; q_id < hw->used_rx_queues; q_id++) {
ret = hns3_bind_ring_with_vector(hw,
- intr_handle->intr_vec[q_id], true,
- HNS3_RING_TYPE_RX, q_id);
+ rte_intr_handle_vec_list_index_get(intr_handle,
+ q_id),
+ true, HNS3_RING_TYPE_RX, q_id);
if (ret)
return ret;
}
@@ -5939,7 +5940,7 @@ static void
hns3_unmap_rx_interrupt(struct rte_eth_dev *dev)
{
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
struct hns3_adapter *hns = dev->data->dev_private;
struct hns3_hw *hw = &hns->hw;
uint8_t base = RTE_INTR_VEC_ZERO_OFFSET;
@@ -5959,16 +5960,15 @@ hns3_unmap_rx_interrupt(struct rte_eth_dev *dev)
(void)hns3_bind_ring_with_vector(hw, vec, false,
HNS3_RING_TYPE_RX,
q_id);
- if (vec < base + intr_handle->nb_efd - 1)
+ if (vec < base + rte_intr_handle_nb_efd_get(intr_handle)
+ - 1)
vec++;
}
}
/* Clean datapath event and queue/vec mapping */
rte_intr_efd_disable(intr_handle);
- if (intr_handle->intr_vec) {
- rte_free(intr_handle->intr_vec);
- intr_handle->intr_vec = NULL;
- }
+ if (rte_intr_handle_vec_list_base(intr_handle))
+ rte_intr_handle_vec_list_free(intr_handle);
}
static int
diff --git a/drivers/net/hns3/hns3_ethdev_vf.c b/drivers/net/hns3/hns3_ethdev_vf.c
index 8d9b7979c8..2ee2a837dd 100644
--- a/drivers/net/hns3/hns3_ethdev_vf.c
+++ b/drivers/net/hns3/hns3_ethdev_vf.c
@@ -1985,7 +1985,7 @@ hns3vf_init_vf(struct rte_eth_dev *eth_dev)
hns3vf_clear_event_cause(hw, 0);
- ret = rte_intr_callback_register(&pci_dev->intr_handle,
+ ret = rte_intr_callback_register(pci_dev->intr_handle,
hns3vf_interrupt_handler, eth_dev);
if (ret) {
PMD_INIT_LOG(ERR, "Failed to register intr: %d", ret);
@@ -1993,7 +1993,7 @@ hns3vf_init_vf(struct rte_eth_dev *eth_dev)
}
/* Enable interrupt */
- rte_intr_enable(&pci_dev->intr_handle);
+ rte_intr_enable(pci_dev->intr_handle);
hns3vf_enable_irq0(hw);
/* Get configuration from PF */
@@ -2045,8 +2045,8 @@ hns3vf_init_vf(struct rte_eth_dev *eth_dev)
err_get_config:
hns3vf_disable_irq0(hw);
- rte_intr_disable(&pci_dev->intr_handle);
- hns3_intr_unregister(&pci_dev->intr_handle, hns3vf_interrupt_handler,
+ rte_intr_disable(pci_dev->intr_handle);
+ hns3_intr_unregister(pci_dev->intr_handle, hns3vf_interrupt_handler,
eth_dev);
err_intr_callback_register:
err_cmd_init:
@@ -2074,8 +2074,8 @@ hns3vf_uninit_vf(struct rte_eth_dev *eth_dev)
hns3_flow_uninit(eth_dev);
hns3_tqp_stats_uninit(hw);
hns3vf_disable_irq0(hw);
- rte_intr_disable(&pci_dev->intr_handle);
- hns3_intr_unregister(&pci_dev->intr_handle, hns3vf_interrupt_handler,
+ rte_intr_disable(pci_dev->intr_handle);
+ hns3_intr_unregister(pci_dev->intr_handle, hns3vf_interrupt_handler,
eth_dev);
hns3_cmd_uninit(hw);
hns3_cmd_destroy_queue(hw);
@@ -2118,7 +2118,7 @@ hns3vf_unmap_rx_interrupt(struct rte_eth_dev *dev)
{
struct hns3_hw *hw = HNS3_DEV_PRIVATE_TO_HW(dev->data->dev_private);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
uint8_t base = RTE_INTR_VEC_ZERO_OFFSET;
uint8_t vec = RTE_INTR_VEC_ZERO_OFFSET;
uint16_t q_id;
@@ -2136,16 +2136,17 @@ hns3vf_unmap_rx_interrupt(struct rte_eth_dev *dev)
(void)hns3vf_bind_ring_with_vector(hw, vec, false,
HNS3_RING_TYPE_RX,
q_id);
- if (vec < base + intr_handle->nb_efd - 1)
+ if (vec < base + rte_intr_handle_nb_efd_get(intr_handle)
+ - 1)
vec++;
}
}
/* Clean datapath event and queue/vec mapping */
rte_intr_efd_disable(intr_handle);
- if (intr_handle->intr_vec) {
- rte_free(intr_handle->intr_vec);
- intr_handle->intr_vec = NULL;
- }
+
+ /* Cleanup vector list */
+ if (rte_intr_handle_vec_list_base(intr_handle))
+ rte_intr_handle_vec_list_free(intr_handle);
}
static int
@@ -2301,7 +2302,7 @@ static int
hns3vf_map_rx_interrupt(struct rte_eth_dev *dev)
{
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
struct hns3_hw *hw = HNS3_DEV_PRIVATE_TO_HW(dev->data->dev_private);
uint8_t base = RTE_INTR_VEC_ZERO_OFFSET;
uint8_t vec = RTE_INTR_VEC_ZERO_OFFSET;
@@ -2324,11 +2325,10 @@ hns3vf_map_rx_interrupt(struct rte_eth_dev *dev)
if (rte_intr_efd_enable(intr_handle, intr_vector))
return -EINVAL;
- if (intr_handle->intr_vec == NULL) {
- intr_handle->intr_vec =
- rte_zmalloc("intr_vec",
- hw->used_rx_queues * sizeof(int), 0);
- if (intr_handle->intr_vec == NULL) {
+ /* Allocate vector list */
+ if (!rte_intr_handle_vec_list_base(intr_handle)) {
+ if (rte_intr_handle_vec_list_alloc(intr_handle, "intr_vec",
+ hw->used_rx_queues)) {
hns3_err(hw, "Failed to allocate %u rx_queues"
" intr_vec", hw->used_rx_queues);
ret = -ENOMEM;
@@ -2346,20 +2346,22 @@ hns3vf_map_rx_interrupt(struct rte_eth_dev *dev)
HNS3_RING_TYPE_RX, q_id);
if (ret)
goto vf_bind_vector_error;
- intr_handle->intr_vec[q_id] = vec;
+
+ if (rte_intr_handle_vec_list_index_set(intr_handle, q_id, vec))
+ goto vf_bind_vector_error;
+
/*
* If there are not enough efds (e.g. not enough interrupt),
* remaining queues will be bond to the last interrupt.
*/
- if (vec < base + intr_handle->nb_efd - 1)
+ if (vec < base + rte_intr_handle_nb_efd_get(intr_handle) - 1)
vec++;
}
rte_intr_enable(intr_handle);
return 0;
vf_bind_vector_error:
- free(intr_handle->intr_vec);
- intr_handle->intr_vec = NULL;
+ rte_intr_handle_vec_list_free(intr_handle);
vf_alloc_intr_vec_error:
rte_intr_efd_disable(intr_handle);
return ret;
@@ -2370,7 +2372,7 @@ hns3vf_restore_rx_interrupt(struct hns3_hw *hw)
{
struct rte_eth_dev *dev = &rte_eth_devices[hw->data->port_id];
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
uint16_t q_id;
int ret;
@@ -2380,8 +2382,9 @@ hns3vf_restore_rx_interrupt(struct hns3_hw *hw)
if (rte_intr_dp_is_en(intr_handle)) {
for (q_id = 0; q_id < hw->used_rx_queues; q_id++) {
ret = hns3vf_bind_ring_with_vector(hw,
- intr_handle->intr_vec[q_id], true,
- HNS3_RING_TYPE_RX, q_id);
+ rte_intr_handle_vec_list_index_get(intr_handle,
+ q_id),
+ true, HNS3_RING_TYPE_RX, q_id);
if (ret)
return ret;
}
@@ -2845,7 +2848,7 @@ hns3vf_reinit_dev(struct hns3_adapter *hns)
int ret;
if (hw->reset.level == HNS3_VF_FULL_RESET) {
- rte_intr_disable(&pci_dev->intr_handle);
+ rte_intr_disable(pci_dev->intr_handle);
ret = hns3vf_set_bus_master(pci_dev, true);
if (ret < 0) {
hns3_err(hw, "failed to set pci bus, ret = %d", ret);
@@ -2871,7 +2874,7 @@ hns3vf_reinit_dev(struct hns3_adapter *hns)
hns3_err(hw, "Failed to enable msix");
}
- rte_intr_enable(&pci_dev->intr_handle);
+ rte_intr_enable(pci_dev->intr_handle);
}
ret = hns3_reset_all_tqps(hns);
diff --git a/drivers/net/hns3/hns3_rxtx.c b/drivers/net/hns3/hns3_rxtx.c
index 0f222b37f9..eabec24dcc 100644
--- a/drivers/net/hns3/hns3_rxtx.c
+++ b/drivers/net/hns3/hns3_rxtx.c
@@ -1038,7 +1038,7 @@ int
hns3_dev_rx_queue_intr_enable(struct rte_eth_dev *dev, uint16_t queue_id)
{
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
struct hns3_hw *hw = HNS3_DEV_PRIVATE_TO_HW(dev->data->dev_private);
if (dev->data->dev_conf.intr_conf.rxq == 0)
diff --git a/drivers/net/i40e/i40e_ethdev.c b/drivers/net/i40e/i40e_ethdev.c
index 7b230e2ed1..05f2b3c53c 100644
--- a/drivers/net/i40e/i40e_ethdev.c
+++ b/drivers/net/i40e/i40e_ethdev.c
@@ -1451,7 +1451,7 @@ eth_i40e_dev_init(struct rte_eth_dev *dev, void *init_params __rte_unused)
}
i40e_set_default_ptype_table(dev);
pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- intr_handle = &pci_dev->intr_handle;
+ intr_handle = pci_dev->intr_handle;
rte_eth_copy_pci_info(dev, pci_dev);
dev->data->dev_flags |= RTE_ETH_DEV_AUTOFILL_QUEUE_XSTATS;
@@ -1985,7 +1985,7 @@ i40e_vsi_queues_unbind_intr(struct i40e_vsi *vsi)
{
struct rte_eth_dev *dev = I40E_VSI_TO_ETH_DEV(vsi);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
struct i40e_hw *hw = I40E_VSI_TO_HW(vsi);
uint16_t msix_vect = vsi->msix_intr;
uint16_t i;
@@ -2101,10 +2101,11 @@ i40e_vsi_queues_bind_intr(struct i40e_vsi *vsi, uint16_t itr_idx)
{
struct rte_eth_dev *dev = I40E_VSI_TO_ETH_DEV(vsi);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
struct i40e_hw *hw = I40E_VSI_TO_HW(vsi);
uint16_t msix_vect = vsi->msix_intr;
- uint16_t nb_msix = RTE_MIN(vsi->nb_msix, intr_handle->nb_efd);
+ uint16_t nb_msix = RTE_MIN(vsi->nb_msix,
+ rte_intr_handle_nb_efd_get(intr_handle));
uint16_t queue_idx = 0;
int record = 0;
int i;
@@ -2154,8 +2155,8 @@ i40e_vsi_queues_bind_intr(struct i40e_vsi *vsi, uint16_t itr_idx)
vsi->nb_used_qps - i,
itr_idx);
for (; !!record && i < vsi->nb_used_qps; i++)
- intr_handle->intr_vec[queue_idx + i] =
- msix_vect;
+ rte_intr_handle_vec_list_index_set(intr_handle,
+ queue_idx + i, msix_vect);
break;
}
/* 1:1 queue/msix_vect mapping */
@@ -2163,7 +2164,9 @@ i40e_vsi_queues_bind_intr(struct i40e_vsi *vsi, uint16_t itr_idx)
vsi->base_queue + i, 1,
itr_idx);
if (!!record)
- intr_handle->intr_vec[queue_idx + i] = msix_vect;
+ if (rte_intr_handle_vec_list_index_set(intr_handle,
+ queue_idx + i, msix_vect))
+ return -rte_errno;
msix_vect++;
nb_msix--;
@@ -2177,7 +2180,7 @@ i40e_vsi_enable_queues_intr(struct i40e_vsi *vsi)
{
struct rte_eth_dev *dev = I40E_VSI_TO_ETH_DEV(vsi);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
struct i40e_hw *hw = I40E_VSI_TO_HW(vsi);
struct i40e_pf *pf = I40E_VSI_TO_PF(vsi);
uint16_t msix_intr, i;
@@ -2204,7 +2207,7 @@ i40e_vsi_disable_queues_intr(struct i40e_vsi *vsi)
{
struct rte_eth_dev *dev = I40E_VSI_TO_ETH_DEV(vsi);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
struct i40e_hw *hw = I40E_VSI_TO_HW(vsi);
struct i40e_pf *pf = I40E_VSI_TO_PF(vsi);
uint16_t msix_intr, i;
@@ -2370,7 +2373,7 @@ i40e_dev_start(struct rte_eth_dev *dev)
struct i40e_vsi *main_vsi = pf->main_vsi;
int ret, i;
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
uint32_t intr_vector = 0;
struct i40e_vsi *vsi;
uint16_t nb_rxq, nb_txq;
@@ -2388,12 +2391,10 @@ i40e_dev_start(struct rte_eth_dev *dev)
return ret;
}
- if (rte_intr_dp_is_en(intr_handle) && !intr_handle->intr_vec) {
- intr_handle->intr_vec =
- rte_zmalloc("intr_vec",
- dev->data->nb_rx_queues * sizeof(int),
- 0);
- if (!intr_handle->intr_vec) {
+ if (rte_intr_dp_is_en(intr_handle) &&
+ !rte_intr_handle_vec_list_base(intr_handle)) {
+ if (rte_intr_handle_vec_list_alloc(intr_handle, "intr_vec",
+ dev->data->nb_rx_queues)) {
PMD_INIT_LOG(ERR,
"Failed to allocate %d rx_queues intr_vec",
dev->data->nb_rx_queues);
@@ -2534,7 +2535,7 @@ i40e_dev_stop(struct rte_eth_dev *dev)
struct i40e_hw *hw = I40E_DEV_PRIVATE_TO_HW(dev->data->dev_private);
struct i40e_vsi *main_vsi = pf->main_vsi;
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
int i;
if (hw->adapter_stopped == 1)
@@ -2575,10 +2576,10 @@ i40e_dev_stop(struct rte_eth_dev *dev)
/* Clean datapath event and queue/vec mapping */
rte_intr_efd_disable(intr_handle);
- if (intr_handle->intr_vec) {
- rte_free(intr_handle->intr_vec);
- intr_handle->intr_vec = NULL;
- }
+
+ /* Cleanup vector list */
+ if (rte_intr_handle_vec_list_base(intr_handle))
+ rte_intr_handle_vec_list_free(intr_handle);
/* reset hierarchy commit */
pf->tm_conf.committed = false;
@@ -2597,7 +2598,7 @@ i40e_dev_close(struct rte_eth_dev *dev)
struct i40e_pf *pf = I40E_DEV_PRIVATE_TO_PF(dev->data->dev_private);
struct i40e_hw *hw = I40E_DEV_PRIVATE_TO_HW(dev->data->dev_private);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
struct i40e_mirror_rule *p_mirror;
struct i40e_filter_control_settings settings;
struct rte_flow *p_flow;
@@ -11404,11 +11405,11 @@ static int
i40e_dev_rx_queue_intr_enable(struct rte_eth_dev *dev, uint16_t queue_id)
{
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
struct i40e_hw *hw = I40E_DEV_PRIVATE_TO_HW(dev->data->dev_private);
uint16_t msix_intr;
- msix_intr = intr_handle->intr_vec[queue_id];
+ msix_intr = rte_intr_handle_vec_list_index_get(intr_handle, queue_id);
if (msix_intr == I40E_MISC_VEC_ID)
I40E_WRITE_REG(hw, I40E_PFINT_DYN_CTL0,
I40E_PFINT_DYN_CTL0_INTENA_MASK |
@@ -11423,7 +11424,7 @@ i40e_dev_rx_queue_intr_enable(struct rte_eth_dev *dev, uint16_t queue_id)
I40E_PFINT_DYN_CTLN_ITR_INDX_MASK);
I40E_WRITE_FLUSH(hw);
- rte_intr_ack(&pci_dev->intr_handle);
+ rte_intr_ack(pci_dev->intr_handle);
return 0;
}
@@ -11432,11 +11433,11 @@ static int
i40e_dev_rx_queue_intr_disable(struct rte_eth_dev *dev, uint16_t queue_id)
{
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
struct i40e_hw *hw = I40E_DEV_PRIVATE_TO_HW(dev->data->dev_private);
uint16_t msix_intr;
- msix_intr = intr_handle->intr_vec[queue_id];
+ msix_intr = rte_intr_handle_vec_list_index_get(intr_handle, queue_id);
if (msix_intr == I40E_MISC_VEC_ID)
I40E_WRITE_REG(hw, I40E_PFINT_DYN_CTL0,
I40E_PFINT_DYN_CTL0_ITR_INDX_MASK);
diff --git a/drivers/net/i40e/i40e_ethdev_vf.c b/drivers/net/i40e/i40e_ethdev_vf.c
index 0cfe13b7b2..4ecc160a75 100644
--- a/drivers/net/i40e/i40e_ethdev_vf.c
+++ b/drivers/net/i40e/i40e_ethdev_vf.c
@@ -678,7 +678,7 @@ i40evf_config_irq_map(struct rte_eth_dev *dev)
uint8_t *cmd_buffer = NULL;
struct virtchnl_irq_map_info *map_info;
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
uint32_t vec, cmd_buffer_size, max_vectors, nb_msix, msix_base, i;
uint16_t rxq_map[vf->vf_res->max_vectors];
int err;
@@ -689,12 +689,14 @@ i40evf_config_irq_map(struct rte_eth_dev *dev)
msix_base = I40E_RX_VEC_START;
/* For interrupt mode, available vector id is from 1. */
max_vectors = vf->vf_res->max_vectors - 1;
- nb_msix = RTE_MIN(max_vectors, intr_handle->nb_efd);
+ nb_msix = RTE_MIN(max_vectors,
+ (uint32_t)rte_intr_handle_nb_efd_get(intr_handle));
vec = msix_base;
for (i = 0; i < dev->data->nb_rx_queues; i++) {
rxq_map[vec] |= 1 << i;
- intr_handle->intr_vec[i] = vec++;
+ rte_intr_handle_vec_list_index_set(intr_handle, i,
+ vec++);
if (vec >= vf->vf_res->max_vectors)
vec = msix_base;
}
@@ -705,7 +707,8 @@ i40evf_config_irq_map(struct rte_eth_dev *dev)
for (i = 0; i < dev->data->nb_rx_queues; i++) {
rxq_map[msix_base] |= 1 << i;
if (rte_intr_dp_is_en(intr_handle))
- intr_handle->intr_vec[i] = msix_base;
+ rte_intr_handle_vec_list_index_set(intr_handle,
+ i, msix_base);
}
}
@@ -2003,7 +2006,7 @@ i40evf_enable_queues_intr(struct rte_eth_dev *dev)
{
struct i40e_hw *hw = I40E_DEV_PRIVATE_TO_HW(dev->data->dev_private);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
if (!rte_intr_allow_others(intr_handle)) {
I40E_WRITE_REG(hw,
@@ -2023,7 +2026,7 @@ i40evf_disable_queues_intr(struct rte_eth_dev *dev)
{
struct i40e_hw *hw = I40E_DEV_PRIVATE_TO_HW(dev->data->dev_private);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
if (!rte_intr_allow_others(intr_handle)) {
I40E_WRITE_REG(hw, I40E_VFINT_DYN_CTL01,
@@ -2039,13 +2042,13 @@ static int
i40evf_dev_rx_queue_intr_enable(struct rte_eth_dev *dev, uint16_t queue_id)
{
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
struct i40e_hw *hw = I40E_DEV_PRIVATE_TO_HW(dev->data->dev_private);
uint16_t interval =
i40e_calc_itr_interval(0, 0);
uint16_t msix_intr;
- msix_intr = intr_handle->intr_vec[queue_id];
+ msix_intr = rte_intr_handle_vec_list_index_get(intr_handle, queue_id);
if (msix_intr == I40E_MISC_VEC_ID)
I40E_WRITE_REG(hw, I40E_VFINT_DYN_CTL01,
I40E_VFINT_DYN_CTL01_INTENA_MASK |
@@ -2072,11 +2075,11 @@ static int
i40evf_dev_rx_queue_intr_disable(struct rte_eth_dev *dev, uint16_t queue_id)
{
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
struct i40e_hw *hw = I40E_DEV_PRIVATE_TO_HW(dev->data->dev_private);
uint16_t msix_intr;
- msix_intr = intr_handle->intr_vec[queue_id];
+ msix_intr = rte_intr_handle_vec_list_index_get(intr_handle, queue_id);
if (msix_intr == I40E_MISC_VEC_ID)
I40E_WRITE_REG(hw, I40E_VFINT_DYN_CTL01, 0);
else
@@ -2166,7 +2169,7 @@ i40evf_dev_start(struct rte_eth_dev *dev)
struct i40e_vf *vf = I40EVF_DEV_PRIVATE_TO_VF(dev->data->dev_private);
struct i40e_hw *hw = I40E_DEV_PRIVATE_TO_HW(dev->data->dev_private);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
uint32_t intr_vector = 0;
PMD_INIT_FUNC_TRACE();
@@ -2185,11 +2188,10 @@ i40evf_dev_start(struct rte_eth_dev *dev)
return -1;
}
- if (rte_intr_dp_is_en(intr_handle) && !intr_handle->intr_vec) {
- intr_handle->intr_vec =
- rte_zmalloc("intr_vec",
- dev->data->nb_rx_queues * sizeof(int), 0);
- if (!intr_handle->intr_vec) {
+ if (rte_intr_dp_is_en(intr_handle) &&
+ !rte_intr_handle_vec_list_base(intr_handle)) {
+ if (rte_intr_handle_vec_list_alloc(intr_handle, "intr_vec",
+ dev->data->nb_rx_queues)) {
PMD_INIT_LOG(ERR, "Failed to allocate %d rx_queues"
" intr_vec", dev->data->nb_rx_queues);
return -ENOMEM;
@@ -2243,7 +2245,7 @@ static int
i40evf_dev_stop(struct rte_eth_dev *dev)
{
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
struct i40e_hw *hw = I40E_DEV_PRIVATE_TO_HW(dev->data->dev_private);
struct i40e_vf *vf = I40EVF_DEV_PRIVATE_TO_VF(dev->data->dev_private);
@@ -2260,10 +2262,9 @@ i40evf_dev_stop(struct rte_eth_dev *dev)
/* Clean datapath event and queue/vec mapping */
rte_intr_efd_disable(intr_handle);
- if (intr_handle->intr_vec) {
- rte_free(intr_handle->intr_vec);
- intr_handle->intr_vec = NULL;
- }
+
+ if (rte_intr_handle_vec_list_base(intr_handle))
+ rte_intr_handle_vec_list_free(intr_handle);
/* remove all mac addrs */
i40evf_add_del_all_mac_addr(dev, FALSE);
/* remove all multicast addresses */
diff --git a/drivers/net/iavf/iavf_ethdev.c b/drivers/net/iavf/iavf_ethdev.c
index 574cfe055e..f768fd02b1 100644
--- a/drivers/net/iavf/iavf_ethdev.c
+++ b/drivers/net/iavf/iavf_ethdev.c
@@ -658,17 +658,17 @@ static int iavf_config_rx_queues_irqs(struct rte_eth_dev *dev,
return -1;
}
- if (rte_intr_dp_is_en(intr_handle) && !intr_handle->intr_vec) {
- intr_handle->intr_vec =
- rte_zmalloc("intr_vec",
- dev->data->nb_rx_queues * sizeof(int), 0);
- if (!intr_handle->intr_vec) {
+ if (rte_intr_dp_is_en(intr_handle) &&
+ !rte_intr_handle_vec_list_base(intr_handle)) {
+ if (rte_intr_handle_vec_list_alloc(intr_handle, "intr_vec",
+ dev->data->nb_rx_queues)) {
PMD_DRV_LOG(ERR, "Failed to allocate %d rx intr_vec",
dev->data->nb_rx_queues);
return -1;
}
}
+
qv_map = rte_zmalloc("qv_map",
dev->data->nb_rx_queues * sizeof(struct iavf_qv_map), 0);
if (!qv_map) {
@@ -728,7 +728,8 @@ static int iavf_config_rx_queues_irqs(struct rte_eth_dev *dev,
for (i = 0; i < dev->data->nb_rx_queues; i++) {
qv_map[i].queue_id = i;
qv_map[i].vector_id = vf->msix_base;
- intr_handle->intr_vec[i] = IAVF_MISC_VEC_ID;
+ rte_intr_handle_vec_list_index_set(intr_handle,
+ i, IAVF_MISC_VEC_ID);
}
vf->qv_map = qv_map;
PMD_DRV_LOG(DEBUG,
@@ -738,14 +739,16 @@ static int iavf_config_rx_queues_irqs(struct rte_eth_dev *dev,
/* If Rx interrupt is reuquired, and we can use
* multi interrupts, then the vec is from 1
*/
- vf->nb_msix = RTE_MIN(intr_handle->nb_efd,
- (uint16_t)(vf->vf_res->max_vectors - 1));
+ vf->nb_msix =
+ RTE_MIN(rte_intr_handle_nb_efd_get(intr_handle),
+ (uint16_t)(vf->vf_res->max_vectors - 1));
vf->msix_base = IAVF_RX_VEC_START;
vec = IAVF_RX_VEC_START;
for (i = 0; i < dev->data->nb_rx_queues; i++) {
qv_map[i].queue_id = i;
qv_map[i].vector_id = vec;
- intr_handle->intr_vec[i] = vec++;
+ rte_intr_handle_vec_list_index_set(intr_handle,
+ i, vec++);
if (vec >= vf->nb_msix + IAVF_RX_VEC_START)
vec = IAVF_RX_VEC_START;
}
@@ -909,10 +912,8 @@ iavf_dev_stop(struct rte_eth_dev *dev)
/* Disable the interrupt for Rx */
rte_intr_efd_disable(intr_handle);
/* Rx interrupt vector mapping free */
- if (intr_handle->intr_vec) {
- rte_free(intr_handle->intr_vec);
- intr_handle->intr_vec = NULL;
- }
+ if (rte_intr_handle_vec_list_base(intr_handle))
+ rte_intr_handle_vec_list_free(intr_handle);
/* remove all mac addrs */
iavf_add_del_all_mac_addr(adapter, false);
@@ -1661,7 +1662,8 @@ iavf_dev_rx_queue_intr_enable(struct rte_eth_dev *dev, uint16_t queue_id)
struct iavf_hw *hw = IAVF_DEV_PRIVATE_TO_HW(adapter);
uint16_t msix_intr;
- msix_intr = pci_dev->intr_handle.intr_vec[queue_id];
+ msix_intr = rte_intr_handle_vec_list_index_get(pci_dev->intr_handle,
+ queue_id);
if (msix_intr == IAVF_MISC_VEC_ID) {
PMD_DRV_LOG(INFO, "MISC is also enabled for control");
IAVF_WRITE_REG(hw, IAVF_VFINT_DYN_CTL01,
@@ -1679,7 +1681,7 @@ iavf_dev_rx_queue_intr_enable(struct rte_eth_dev *dev, uint16_t queue_id)
IAVF_WRITE_FLUSH(hw);
- rte_intr_ack(&pci_dev->intr_handle);
+ rte_intr_ack(pci_dev->intr_handle);
return 0;
}
@@ -1691,7 +1693,8 @@ iavf_dev_rx_queue_intr_disable(struct rte_eth_dev *dev, uint16_t queue_id)
struct iavf_hw *hw = IAVF_DEV_PRIVATE_TO_HW(dev->data->dev_private);
uint16_t msix_intr;
- msix_intr = pci_dev->intr_handle.intr_vec[queue_id];
+ msix_intr = rte_intr_handle_vec_list_index_get(pci_dev->intr_handle,
+ queue_id);
if (msix_intr == IAVF_MISC_VEC_ID) {
PMD_DRV_LOG(ERR, "MISC is used for control, cannot disable it");
return -EIO;
@@ -2325,12 +2328,12 @@ iavf_dev_init(struct rte_eth_dev *eth_dev)
&eth_dev->data->mac_addrs[0]);
/* register callback func to eal lib */
- rte_intr_callback_register(&pci_dev->intr_handle,
+ rte_intr_callback_register(pci_dev->intr_handle,
iavf_dev_interrupt_handler,
(void *)eth_dev);
/* enable uio intr after callback register */
- rte_intr_enable(&pci_dev->intr_handle);
+ rte_intr_enable(pci_dev->intr_handle);
/* configure and enable device interrupt */
iavf_enable_irq0(hw);
@@ -2351,7 +2354,7 @@ iavf_dev_close(struct rte_eth_dev *dev)
{
struct iavf_hw *hw = IAVF_DEV_PRIVATE_TO_HW(dev->data->dev_private);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
struct iavf_adapter *adapter =
IAVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
struct iavf_info *vf = IAVF_DEV_PRIVATE_TO_VF(dev->data->dev_private);
diff --git a/drivers/net/iavf/iavf_vchnl.c b/drivers/net/iavf/iavf_vchnl.c
index 06dc663947..13425f3005 100644
--- a/drivers/net/iavf/iavf_vchnl.c
+++ b/drivers/net/iavf/iavf_vchnl.c
@@ -1691,9 +1691,9 @@ iavf_request_queues(struct iavf_adapter *adapter, uint16_t num)
* disable interrupt to avoid the admin queue message to be read
* before iavf_read_msg_from_pf.
*/
- rte_intr_disable(&pci_dev->intr_handle);
+ rte_intr_disable(pci_dev->intr_handle);
err = iavf_execute_vf_cmd(adapter, &args);
- rte_intr_enable(&pci_dev->intr_handle);
+ rte_intr_enable(pci_dev->intr_handle);
if (err) {
PMD_DRV_LOG(ERR, "fail to execute command OP_REQUEST_QUEUES");
return err;
diff --git a/drivers/net/ice/ice_dcf.c b/drivers/net/ice/ice_dcf.c
index 4c2e0c7216..fc4111fe63 100644
--- a/drivers/net/ice/ice_dcf.c
+++ b/drivers/net/ice/ice_dcf.c
@@ -535,13 +535,13 @@ ice_dcf_handle_vsi_update_event(struct ice_dcf_hw *hw)
rte_spinlock_lock(&hw->vc_cmd_send_lock);
- rte_intr_disable(&pci_dev->intr_handle);
+ rte_intr_disable(pci_dev->intr_handle);
ice_dcf_disable_irq0(hw);
if (ice_dcf_get_vf_resource(hw) || ice_dcf_get_vf_vsi_map(hw) < 0)
err = -1;
- rte_intr_enable(&pci_dev->intr_handle);
+ rte_intr_enable(pci_dev->intr_handle);
ice_dcf_enable_irq0(hw);
rte_spinlock_unlock(&hw->vc_cmd_send_lock);
@@ -680,9 +680,9 @@ ice_dcf_init_hw(struct rte_eth_dev *eth_dev, struct ice_dcf_hw *hw)
}
hw->eth_dev = eth_dev;
- rte_intr_callback_register(&pci_dev->intr_handle,
+ rte_intr_callback_register(pci_dev->intr_handle,
ice_dcf_dev_interrupt_handler, hw);
- rte_intr_enable(&pci_dev->intr_handle);
+ rte_intr_enable(pci_dev->intr_handle);
ice_dcf_enable_irq0(hw);
return 0;
@@ -704,7 +704,7 @@ void
ice_dcf_uninit_hw(struct rte_eth_dev *eth_dev, struct ice_dcf_hw *hw)
{
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
if (hw->vf_res->vf_cap_flags & VIRTCHNL_VF_OFFLOAD_QOS)
if (hw->tm_conf.committed) {
diff --git a/drivers/net/ice/ice_dcf_ethdev.c b/drivers/net/ice/ice_dcf_ethdev.c
index cab7c4da87..2e091a0ec0 100644
--- a/drivers/net/ice/ice_dcf_ethdev.c
+++ b/drivers/net/ice/ice_dcf_ethdev.c
@@ -153,11 +153,10 @@ ice_dcf_config_rx_queues_irqs(struct rte_eth_dev *dev,
return -1;
}
- if (rte_intr_dp_is_en(intr_handle) && !intr_handle->intr_vec) {
- intr_handle->intr_vec =
- rte_zmalloc("intr_vec",
- dev->data->nb_rx_queues * sizeof(int), 0);
- if (!intr_handle->intr_vec) {
+ if (rte_intr_dp_is_en(intr_handle) &&
+ !rte_intr_handle_vec_list_base(intr_handle)) {
+ if (rte_intr_handle_vec_list_alloc(intr_handle, "intr_vec",
+ dev->data->nb_rx_queues)) {
PMD_DRV_LOG(ERR, "Failed to allocate %d rx intr_vec",
dev->data->nb_rx_queues);
return -1;
@@ -202,7 +201,8 @@ ice_dcf_config_rx_queues_irqs(struct rte_eth_dev *dev,
hw->msix_base = IAVF_MISC_VEC_ID;
for (i = 0; i < dev->data->nb_rx_queues; i++) {
hw->rxq_map[hw->msix_base] |= 1 << i;
- intr_handle->intr_vec[i] = IAVF_MISC_VEC_ID;
+ rte_intr_handle_vec_list_index_set(intr_handle,
+ i, IAVF_MISC_VEC_ID);
}
PMD_DRV_LOG(DEBUG,
"vector %u are mapping to all Rx queues",
@@ -212,12 +212,13 @@ ice_dcf_config_rx_queues_irqs(struct rte_eth_dev *dev,
* multi interrupts, then the vec is from 1
*/
hw->nb_msix = RTE_MIN(hw->vf_res->max_vectors,
- intr_handle->nb_efd);
+ rte_intr_handle_nb_efd_get(intr_handle));
hw->msix_base = IAVF_MISC_VEC_ID;
vec = IAVF_MISC_VEC_ID;
for (i = 0; i < dev->data->nb_rx_queues; i++) {
hw->rxq_map[vec] |= 1 << i;
- intr_handle->intr_vec[i] = vec++;
+ rte_intr_handle_vec_list_index_set(intr_handle,
+ i, vec++);
if (vec >= hw->nb_msix)
vec = IAVF_RX_VEC_START;
}
@@ -614,10 +615,8 @@ ice_dcf_dev_stop(struct rte_eth_dev *dev)
ice_dcf_stop_queues(dev);
rte_intr_efd_disable(intr_handle);
- if (intr_handle->intr_vec) {
- rte_free(intr_handle->intr_vec);
- intr_handle->intr_vec = NULL;
- }
+ if (rte_intr_handle_vec_list_base(intr_handle))
+ rte_intr_handle_vec_list_free(intr_handle);
ice_dcf_add_del_all_mac_addr(&dcf_ad->real_hw, false);
dev->data->dev_link.link_status = ETH_LINK_DOWN;
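The queue-to-vector mapping loops (iavf_config_rx_queues_irqs and ice_dcf_config_rx_queues_irqs above) reduce to the sketch below: nb_efd is now read through a getter and each slot is written through the index setter. This is a hedged sketch against the patch 1 prototypes; example_map_queues_to_vectors, rx_vec_start and max_vectors are illustrative names only:

#include <stdint.h>
#include <rte_common.h>
#include <rte_errno.h>
#include <rte_interrupts.h>

/* Sketch: spread nb_rxq queues over the event fds negotiated earlier,
 * starting from rx_vec_start, wrapping when the vector budget is used up.
 */
static int
example_map_queues_to_vectors(struct rte_intr_handle *intr_handle,
                              uint16_t nb_rxq, uint16_t rx_vec_start,
                              uint16_t max_vectors)
{
        uint16_t nb_msix, vec, i;

        /* replaces reading intr_handle->nb_efd directly */
        nb_msix = RTE_MIN(rte_intr_handle_nb_efd_get(intr_handle),
                          (uint16_t)(max_vectors - 1));
        vec = rx_vec_start;
        for (i = 0; i < nb_rxq; i++) {
                /* replaces intr_handle->intr_vec[i] = vec++ */
                if (rte_intr_handle_vec_list_index_set(intr_handle, i, vec++))
                        return -rte_errno;
                if (vec >= nb_msix + rx_vec_start)
                        vec = rx_vec_start;
        }
        return nb_msix;
}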
diff --git a/drivers/net/ice/ice_ethdev.c b/drivers/net/ice/ice_ethdev.c
index a4cd39c954..6c6caeb4aa 100644
--- a/drivers/net/ice/ice_ethdev.c
+++ b/drivers/net/ice/ice_ethdev.c
@@ -2013,7 +2013,7 @@ ice_dev_init(struct rte_eth_dev *dev)
ice_set_default_ptype_table(dev);
pci_dev = RTE_DEV_TO_PCI(dev->device);
- intr_handle = &pci_dev->intr_handle;
+ intr_handle = pci_dev->intr_handle;
pf->adapter = ICE_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
pf->dev_data = dev->data;
@@ -2204,7 +2204,7 @@ ice_vsi_disable_queues_intr(struct ice_vsi *vsi)
{
struct rte_eth_dev *dev = &rte_eth_devices[vsi->adapter->pf.dev_data->port_id];
struct rte_pci_device *pci_dev = ICE_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
struct ice_hw *hw = ICE_VSI_TO_HW(vsi);
uint16_t msix_intr, i;
@@ -2234,7 +2234,7 @@ ice_dev_stop(struct rte_eth_dev *dev)
struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
struct ice_vsi *main_vsi = pf->main_vsi;
struct rte_pci_device *pci_dev = ICE_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
uint16_t i;
/* avoid stopping again */
@@ -2259,10 +2259,8 @@ ice_dev_stop(struct rte_eth_dev *dev)
/* Clean datapath event and queue/vec mapping */
rte_intr_efd_disable(intr_handle);
- if (intr_handle->intr_vec) {
- rte_free(intr_handle->intr_vec);
- intr_handle->intr_vec = NULL;
- }
+ if (rte_intr_handle_vec_list_base(intr_handle))
+ rte_intr_handle_vec_list_free(intr_handle);
pf->adapter_stopped = true;
dev->data->dev_started = 0;
@@ -2276,7 +2274,7 @@ ice_dev_close(struct rte_eth_dev *dev)
struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
struct ice_hw *hw = ICE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
struct ice_adapter *ad =
ICE_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
int ret;
@@ -3167,10 +3165,11 @@ ice_vsi_queues_bind_intr(struct ice_vsi *vsi)
{
struct rte_eth_dev *dev = &rte_eth_devices[vsi->adapter->pf.dev_data->port_id];
struct rte_pci_device *pci_dev = ICE_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
struct ice_hw *hw = ICE_VSI_TO_HW(vsi);
uint16_t msix_vect = vsi->msix_intr;
- uint16_t nb_msix = RTE_MIN(vsi->nb_msix, intr_handle->nb_efd);
+ uint16_t nb_msix = RTE_MIN(vsi->nb_msix,
+ rte_intr_handle_nb_efd_get(intr_handle));
uint16_t queue_idx = 0;
int record = 0;
int i;
@@ -3198,8 +3197,9 @@ ice_vsi_queues_bind_intr(struct ice_vsi *vsi)
vsi->nb_used_qps - i);
for (; !!record && i < vsi->nb_used_qps; i++)
- intr_handle->intr_vec[queue_idx + i] =
- msix_vect;
+ rte_intr_handle_vec_list_index_set(intr_handle,
+ queue_idx + i, msix_vect);
+
break;
}
@@ -3208,7 +3208,9 @@ ice_vsi_queues_bind_intr(struct ice_vsi *vsi)
vsi->base_queue + i, 1);
if (!!record)
- intr_handle->intr_vec[queue_idx + i] = msix_vect;
+ rte_intr_handle_vec_list_index_set(intr_handle,
+ queue_idx + i,
+ msix_vect);
msix_vect++;
nb_msix--;
@@ -3220,7 +3222,7 @@ ice_vsi_enable_queues_intr(struct ice_vsi *vsi)
{
struct rte_eth_dev *dev = &rte_eth_devices[vsi->adapter->pf.dev_data->port_id];
struct rte_pci_device *pci_dev = ICE_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
struct ice_hw *hw = ICE_VSI_TO_HW(vsi);
uint16_t msix_intr, i;
@@ -3246,7 +3248,7 @@ ice_rxq_intr_setup(struct rte_eth_dev *dev)
{
struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
struct rte_pci_device *pci_dev = ICE_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
struct ice_vsi *vsi = pf->main_vsi;
uint32_t intr_vector = 0;
@@ -3266,11 +3268,10 @@ ice_rxq_intr_setup(struct rte_eth_dev *dev)
return -1;
}
- if (rte_intr_dp_is_en(intr_handle) && !intr_handle->intr_vec) {
- intr_handle->intr_vec =
- rte_zmalloc(NULL, dev->data->nb_rx_queues * sizeof(int),
- 0);
- if (!intr_handle->intr_vec) {
+ if (rte_intr_dp_is_en(intr_handle) &&
+ !rte_intr_handle_vec_list_base(intr_handle)) {
+ if (rte_intr_handle_vec_list_alloc(intr_handle, NULL,
+ dev->data->nb_rx_queues)) {
PMD_DRV_LOG(ERR,
"Failed to allocate %d rx_queues intr_vec",
dev->data->nb_rx_queues);
@@ -4539,19 +4540,19 @@ static int ice_rx_queue_intr_enable(struct rte_eth_dev *dev,
uint16_t queue_id)
{
struct rte_pci_device *pci_dev = ICE_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
struct ice_hw *hw = ICE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
uint32_t val;
uint16_t msix_intr;
- msix_intr = intr_handle->intr_vec[queue_id];
+ msix_intr = rte_intr_handle_vec_list_index_get(intr_handle, queue_id);
val = GLINT_DYN_CTL_INTENA_M | GLINT_DYN_CTL_CLEARPBA_M |
GLINT_DYN_CTL_ITR_INDX_M;
val &= ~GLINT_DYN_CTL_WB_ON_ITR_M;
ICE_WRITE_REG(hw, GLINT_DYN_CTL(msix_intr), val);
- rte_intr_ack(&pci_dev->intr_handle);
+ rte_intr_ack(pci_dev->intr_handle);
return 0;
}
@@ -4560,11 +4561,11 @@ static int ice_rx_queue_intr_disable(struct rte_eth_dev *dev,
uint16_t queue_id)
{
struct rte_pci_device *pci_dev = ICE_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
struct ice_hw *hw = ICE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
uint16_t msix_intr;
- msix_intr = intr_handle->intr_vec[queue_id];
+ msix_intr = rte_intr_handle_vec_list_index_get(intr_handle, queue_id);
ICE_WRITE_REG(hw, GLINT_DYN_CTL(msix_intr), GLINT_DYN_CTL_WB_ON_ITR_M);
diff --git a/drivers/net/igc/igc_ethdev.c b/drivers/net/igc/igc_ethdev.c
index 224a095483..86ac297ca3 100644
--- a/drivers/net/igc/igc_ethdev.c
+++ b/drivers/net/igc/igc_ethdev.c
@@ -384,7 +384,7 @@ igc_intr_other_disable(struct rte_eth_dev *dev)
{
struct igc_hw *hw = IGC_DEV_PRIVATE_HW(dev);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
if (rte_intr_allow_others(intr_handle) &&
dev->data->dev_conf.intr_conf.lsc) {
@@ -404,7 +404,7 @@ igc_intr_other_enable(struct rte_eth_dev *dev)
struct igc_interrupt *intr = IGC_DEV_PRIVATE_INTR(dev);
struct igc_hw *hw = IGC_DEV_PRIVATE_HW(dev);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
if (rte_intr_allow_others(intr_handle) &&
dev->data->dev_conf.intr_conf.lsc) {
@@ -616,7 +616,7 @@ eth_igc_stop(struct rte_eth_dev *dev)
struct igc_adapter *adapter = IGC_DEV_PRIVATE(dev);
struct igc_hw *hw = IGC_DEV_PRIVATE_HW(dev);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
struct rte_eth_link link;
dev->data->dev_started = 0;
@@ -668,10 +668,8 @@ eth_igc_stop(struct rte_eth_dev *dev)
/* Clean datapath event and queue/vec mapping */
rte_intr_efd_disable(intr_handle);
- if (intr_handle->intr_vec != NULL) {
- rte_free(intr_handle->intr_vec);
- intr_handle->intr_vec = NULL;
- }
+ if (rte_intr_handle_vec_list_base(intr_handle))
+ rte_intr_handle_vec_list_free(intr_handle);
return 0;
}
@@ -731,7 +729,7 @@ igc_configure_msix_intr(struct rte_eth_dev *dev)
{
struct igc_hw *hw = IGC_DEV_PRIVATE_HW(dev);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
uint32_t intr_mask;
uint32_t vec = IGC_MISC_VEC_ID;
@@ -755,8 +753,8 @@ igc_configure_msix_intr(struct rte_eth_dev *dev)
IGC_WRITE_REG(hw, IGC_GPIE, IGC_GPIE_MSIX_MODE |
IGC_GPIE_PBA | IGC_GPIE_EIAME |
IGC_GPIE_NSICR);
- intr_mask = RTE_LEN2MASK(intr_handle->nb_efd, uint32_t) <<
- misc_shift;
+ intr_mask = RTE_LEN2MASK(rte_intr_handle_nb_efd_get(intr_handle),
+ uint32_t) << misc_shift;
if (dev->data->dev_conf.intr_conf.lsc)
intr_mask |= (1u << IGC_MSIX_OTHER_INTR_VEC);
@@ -773,8 +771,8 @@ igc_configure_msix_intr(struct rte_eth_dev *dev)
for (i = 0; i < dev->data->nb_rx_queues; i++) {
igc_write_ivar(hw, i, 0, vec);
- intr_handle->intr_vec[i] = vec;
- if (vec < base + intr_handle->nb_efd - 1)
+ rte_intr_handle_vec_list_index_set(intr_handle, i, vec);
+ if (vec < base + rte_intr_handle_nb_efd_get(intr_handle) - 1)
vec++;
}
@@ -810,7 +808,7 @@ igc_rxq_interrupt_setup(struct rte_eth_dev *dev)
uint32_t mask;
struct igc_hw *hw = IGC_DEV_PRIVATE_HW(dev);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
int misc_shift = rte_intr_allow_others(intr_handle) ? 1 : 0;
/* won't configure msix register if no mapping is done
@@ -819,7 +817,8 @@ igc_rxq_interrupt_setup(struct rte_eth_dev *dev)
if (!rte_intr_dp_is_en(intr_handle))
return;
- mask = RTE_LEN2MASK(intr_handle->nb_efd, uint32_t) << misc_shift;
+ mask = RTE_LEN2MASK(rte_intr_handle_nb_efd_get(intr_handle), uint32_t)
+ << misc_shift;
IGC_WRITE_REG(hw, IGC_EIMS, mask);
}
@@ -913,7 +912,7 @@ eth_igc_start(struct rte_eth_dev *dev)
struct igc_hw *hw = IGC_DEV_PRIVATE_HW(dev);
struct igc_adapter *adapter = IGC_DEV_PRIVATE(dev);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
uint32_t *speeds;
int ret;
@@ -951,10 +950,10 @@ eth_igc_start(struct rte_eth_dev *dev)
return -1;
}
- if (rte_intr_dp_is_en(intr_handle) && !intr_handle->intr_vec) {
- intr_handle->intr_vec = rte_zmalloc("intr_vec",
- dev->data->nb_rx_queues * sizeof(int), 0);
- if (intr_handle->intr_vec == NULL) {
+ if (rte_intr_dp_is_en(intr_handle) &&
+ !rte_intr_handle_vec_list_base(intr_handle)) {
+ if (rte_intr_handle_vec_list_alloc(intr_handle, "intr_vec",
+ dev->data->nb_rx_queues)) {
PMD_DRV_LOG(ERR,
"Failed to allocate %d rx_queues intr_vec",
dev->data->nb_rx_queues);
@@ -1169,7 +1168,7 @@ static int
eth_igc_close(struct rte_eth_dev *dev)
{
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
struct igc_hw *hw = IGC_DEV_PRIVATE_HW(dev);
struct igc_adapter *adapter = IGC_DEV_PRIVATE(dev);
int retry = 0;
@@ -1339,11 +1338,11 @@ eth_igc_dev_init(struct rte_eth_dev *dev)
dev->data->port_id, pci_dev->id.vendor_id,
pci_dev->id.device_id);
- rte_intr_callback_register(&pci_dev->intr_handle,
+ rte_intr_callback_register(pci_dev->intr_handle,
eth_igc_interrupt_handler, (void *)dev);
/* enable uio/vfio intr/eventfd mapping */
- rte_intr_enable(&pci_dev->intr_handle);
+ rte_intr_enable(pci_dev->intr_handle);
/* enable support intr */
igc_intr_other_enable(dev);
@@ -2100,7 +2099,7 @@ eth_igc_rx_queue_intr_disable(struct rte_eth_dev *dev, uint16_t queue_id)
{
struct igc_hw *hw = IGC_DEV_PRIVATE_HW(dev);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
uint32_t vec = IGC_MISC_VEC_ID;
if (rte_intr_allow_others(intr_handle))
@@ -2119,7 +2118,7 @@ eth_igc_rx_queue_intr_enable(struct rte_eth_dev *dev, uint16_t queue_id)
{
struct igc_hw *hw = IGC_DEV_PRIVATE_HW(dev);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
uint32_t vec = IGC_MISC_VEC_ID;
if (rte_intr_allow_others(intr_handle))
diff --git a/drivers/net/ionic/ionic_ethdev.c b/drivers/net/ionic/ionic_ethdev.c
index e620793966..3076fe7eab 100644
--- a/drivers/net/ionic/ionic_ethdev.c
+++ b/drivers/net/ionic/ionic_ethdev.c
@@ -1071,7 +1071,7 @@ static int
ionic_configure_intr(struct ionic_adapter *adapter)
{
struct rte_pci_device *pci_dev = adapter->pci_dev;
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
int err;
IONIC_PRINT(DEBUG, "Configuring %u intrs", adapter->nintrs);
@@ -1085,11 +1085,9 @@ ionic_configure_intr(struct ionic_adapter *adapter)
IONIC_PRINT(DEBUG,
"Packet I/O interrupt on datapath is enabled");
- if (!intr_handle->intr_vec) {
- intr_handle->intr_vec = rte_zmalloc("intr_vec",
- adapter->nintrs * sizeof(int), 0);
-
- if (!intr_handle->intr_vec) {
+ if (!rte_intr_handle_vec_list_base(intr_handle)) {
+ if (rte_intr_handle_vec_list_alloc(intr_handle, "intr_vec",
+ adapter->nintrs)) {
IONIC_PRINT(ERR, "Failed to allocate %u vectors",
adapter->nintrs);
return -ENOMEM;
@@ -1122,7 +1120,7 @@ static void
ionic_unconfigure_intr(struct ionic_adapter *adapter)
{
struct rte_pci_device *pci_dev = adapter->pci_dev;
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
rte_intr_disable(intr_handle);
diff --git a/drivers/net/ixgbe/ixgbe_ethdev.c b/drivers/net/ixgbe/ixgbe_ethdev.c
index b5371568b5..48ee463e7d 100644
--- a/drivers/net/ixgbe/ixgbe_ethdev.c
+++ b/drivers/net/ixgbe/ixgbe_ethdev.c
@@ -1034,7 +1034,7 @@ eth_ixgbe_dev_init(struct rte_eth_dev *eth_dev, void *init_params __rte_unused)
{
struct ixgbe_adapter *ad = eth_dev->data->dev_private;
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
struct ixgbe_hw *hw =
IXGBE_DEV_PRIVATE_TO_HW(eth_dev->data->dev_private);
struct ixgbe_vfta *shadow_vfta =
@@ -1529,7 +1529,7 @@ eth_ixgbevf_dev_init(struct rte_eth_dev *eth_dev)
uint32_t tc, tcs;
struct ixgbe_adapter *ad = eth_dev->data->dev_private;
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
struct ixgbe_hw *hw =
IXGBE_DEV_PRIVATE_TO_HW(eth_dev->data->dev_private);
struct ixgbe_vfta *shadow_vfta =
@@ -2548,7 +2548,7 @@ ixgbe_dev_start(struct rte_eth_dev *dev)
struct ixgbe_vf_info *vfinfo =
*IXGBE_DEV_PRIVATE_TO_P_VFDATA(dev->data->dev_private);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
uint32_t intr_vector = 0;
int err;
bool link_up = false, negotiate = 0;
@@ -2603,11 +2603,10 @@ ixgbe_dev_start(struct rte_eth_dev *dev)
return -1;
}
- if (rte_intr_dp_is_en(intr_handle) && !intr_handle->intr_vec) {
- intr_handle->intr_vec =
- rte_zmalloc("intr_vec",
- dev->data->nb_rx_queues * sizeof(int), 0);
- if (intr_handle->intr_vec == NULL) {
+ if (rte_intr_dp_is_en(intr_handle) &&
+ !rte_intr_handle_vec_list_base(intr_handle)) {
+ if (rte_intr_handle_vec_list_alloc(intr_handle, "intr_vec",
+ dev->data->nb_rx_queues)) {
PMD_INIT_LOG(ERR, "Failed to allocate %d rx_queues"
" intr_vec", dev->data->nb_rx_queues);
return -ENOMEM;
@@ -2843,7 +2842,7 @@ ixgbe_dev_stop(struct rte_eth_dev *dev)
struct ixgbe_vf_info *vfinfo =
*IXGBE_DEV_PRIVATE_TO_P_VFDATA(dev->data->dev_private);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
int vf;
struct ixgbe_tm_conf *tm_conf =
IXGBE_DEV_PRIVATE_TO_TM_CONF(dev->data->dev_private);
@@ -2894,10 +2893,8 @@ ixgbe_dev_stop(struct rte_eth_dev *dev)
/* Clean datapath event and queue/vec mapping */
rte_intr_efd_disable(intr_handle);
- if (intr_handle->intr_vec != NULL) {
- rte_free(intr_handle->intr_vec);
- intr_handle->intr_vec = NULL;
- }
+ if (rte_intr_handle_vec_list_base(intr_handle))
+ rte_intr_handle_vec_list_free(intr_handle);
/* reset hierarchy commit */
tm_conf->committed = false;
@@ -2981,7 +2978,7 @@ ixgbe_dev_close(struct rte_eth_dev *dev)
struct ixgbe_hw *hw =
IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
int retries = 0;
int ret;
@@ -4626,7 +4623,7 @@ ixgbe_dev_interrupt_delayed_handler(void *param)
{
struct rte_eth_dev *dev = (struct rte_eth_dev *)param;
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
struct ixgbe_interrupt *intr =
IXGBE_DEV_PRIVATE_TO_INTR(dev->data->dev_private);
struct ixgbe_hw *hw =
@@ -5307,7 +5304,7 @@ ixgbevf_dev_start(struct rte_eth_dev *dev)
IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
uint32_t intr_vector = 0;
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
int err, mask = 0;
@@ -5368,11 +5365,10 @@ ixgbevf_dev_start(struct rte_eth_dev *dev)
return -1;
}
- if (rte_intr_dp_is_en(intr_handle) && !intr_handle->intr_vec) {
- intr_handle->intr_vec =
- rte_zmalloc("intr_vec",
- dev->data->nb_rx_queues * sizeof(int), 0);
- if (intr_handle->intr_vec == NULL) {
+ if (rte_intr_dp_is_en(intr_handle) &&
+ !rte_intr_handle_vec_list_base(intr_handle)) {
+ if (rte_intr_handle_vec_list_alloc(intr_handle, "intr_vec",
+ dev->data->nb_rx_queues)) {
PMD_INIT_LOG(ERR, "Failed to allocate %d rx_queues"
" intr_vec", dev->data->nb_rx_queues);
return -ENOMEM;
@@ -5411,7 +5407,7 @@ ixgbevf_dev_stop(struct rte_eth_dev *dev)
struct ixgbe_hw *hw = IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
struct ixgbe_adapter *adapter = dev->data->dev_private;
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
if (hw->adapter_stopped)
return 0;
@@ -5439,10 +5435,8 @@ ixgbevf_dev_stop(struct rte_eth_dev *dev)
/* Clean datapath event and queue/vec mapping */
rte_intr_efd_disable(intr_handle);
- if (intr_handle->intr_vec != NULL) {
- rte_free(intr_handle->intr_vec);
- intr_handle->intr_vec = NULL;
- }
+ if (rte_intr_handle_vec_list_base(intr_handle))
+ rte_intr_handle_vec_list_free(intr_handle);
adapter->rss_reta_updated = 0;
@@ -5454,7 +5448,7 @@ ixgbevf_dev_close(struct rte_eth_dev *dev)
{
struct ixgbe_hw *hw = IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
int ret;
PMD_INIT_FUNC_TRACE();
@@ -5937,7 +5931,7 @@ static int
ixgbevf_dev_rx_queue_intr_enable(struct rte_eth_dev *dev, uint16_t queue_id)
{
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
struct ixgbe_interrupt *intr =
IXGBE_DEV_PRIVATE_TO_INTR(dev->data->dev_private);
struct ixgbe_hw *hw =
@@ -5963,7 +5957,7 @@ ixgbevf_dev_rx_queue_intr_disable(struct rte_eth_dev *dev, uint16_t queue_id)
struct ixgbe_hw *hw =
IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
uint32_t vec = IXGBE_MISC_VEC_ID;
if (rte_intr_allow_others(intr_handle))
@@ -5979,7 +5973,7 @@ static int
ixgbe_dev_rx_queue_intr_enable(struct rte_eth_dev *dev, uint16_t queue_id)
{
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
uint32_t mask;
struct ixgbe_hw *hw =
IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
@@ -6106,7 +6100,7 @@ static void
ixgbevf_configure_msix(struct rte_eth_dev *dev)
{
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
struct ixgbe_hw *hw =
IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
uint32_t q_idx;
@@ -6133,8 +6127,10 @@ ixgbevf_configure_msix(struct rte_eth_dev *dev)
* as IXGBE_VF_MAXMSIVECOTR = 1
*/
ixgbevf_set_ivar_map(hw, 0, q_idx, vector_idx);
- intr_handle->intr_vec[q_idx] = vector_idx;
- if (vector_idx < base + intr_handle->nb_efd - 1)
+ rte_intr_handle_vec_list_index_set(intr_handle, q_idx,
+ vector_idx);
+ if (vector_idx < base + rte_intr_handle_nb_efd_get(intr_handle)
+ - 1)
vector_idx++;
}
@@ -6155,7 +6151,7 @@ static void
ixgbe_configure_msix(struct rte_eth_dev *dev)
{
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
struct ixgbe_hw *hw =
IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
uint32_t queue_id, base = IXGBE_MISC_VEC_ID;
@@ -6199,8 +6195,10 @@ ixgbe_configure_msix(struct rte_eth_dev *dev)
queue_id++) {
/* by default, 1:1 mapping */
ixgbe_set_ivar_map(hw, 0, queue_id, vec);
- intr_handle->intr_vec[queue_id] = vec;
- if (vec < base + intr_handle->nb_efd - 1)
+ rte_intr_handle_vec_list_index_set(intr_handle,
+ queue_id, vec);
+ if (vec < base + rte_intr_handle_nb_efd_get(intr_handle)
+ - 1)
vec++;
}
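Likewise, the rx_queue_intr_enable/disable callbacks (iavf, ice, igc and ixgbe above) read the per-queue vector back through the getter before programming the device. A hedged sketch of that read path, with the device-specific register write left out and the handle passed in directly instead of being fetched from the PCI device:

#include <stdint.h>
#include <rte_common.h>
#include <rte_interrupts.h>

/* Sketch: look up the MSI-X vector assigned to an Rx queue and ack the
 * interrupt.  The actual unmask register write is per-device and omitted.
 */
static int
example_rx_queue_intr_enable(struct rte_intr_handle *intr_handle,
                             uint16_t queue_id)
{
        uint16_t msix_intr;

        /* replaces intr_handle->intr_vec[queue_id] */
        msix_intr = rte_intr_handle_vec_list_index_get(intr_handle, queue_id);

        /* ... program the device interrupt register for msix_intr here ... */
        RTE_SET_USED(msix_intr);

        rte_intr_ack(intr_handle);
        return 0;
}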
diff --git a/drivers/net/memif/memif_socket.c b/drivers/net/memif/memif_socket.c
index f58ff4c0cb..1f558a6997 100644
--- a/drivers/net/memif/memif_socket.c
+++ b/drivers/net/memif/memif_socket.c
@@ -65,7 +65,8 @@ memif_msg_send_from_queue(struct memif_control_channel *cc)
if (e == NULL)
return 0;
- size = memif_msg_send(cc->intr_handle.fd, &e->msg, e->fd);
+ size = memif_msg_send(rte_intr_handle_fd_get(cc->intr_handle), &e->msg,
+ e->fd);
if (size != sizeof(memif_msg_t)) {
MIF_LOG(ERR, "sendmsg fail: %s.", strerror(errno));
ret = -1;
@@ -317,7 +318,9 @@ memif_msg_receive_add_ring(struct rte_eth_dev *dev, memif_msg_t *msg, int fd)
mq = (ar->flags & MEMIF_MSG_ADD_RING_FLAG_C2S) ?
dev->data->rx_queues[ar->index] : dev->data->tx_queues[ar->index];
- mq->intr_handle.fd = fd;
+ if (rte_intr_handle_fd_set(mq->intr_handle, fd))
+ return -1;
+
mq->log2_ring_size = ar->log2_ring_size;
mq->region = ar->region;
mq->ring_offset = ar->offset;
@@ -453,7 +456,7 @@ memif_msg_enq_add_ring(struct rte_eth_dev *dev, uint8_t idx,
dev->data->rx_queues[idx];
e->msg.type = MEMIF_MSG_TYPE_ADD_RING;
- e->fd = mq->intr_handle.fd;
+ e->fd = rte_intr_handle_fd_get(mq->intr_handle);
ar->index = idx;
ar->offset = mq->ring_offset;
ar->region = mq->region;
@@ -505,12 +508,13 @@ memif_intr_unregister_handler(struct rte_intr_handle *intr_handle, void *arg)
struct memif_control_channel *cc = arg;
/* close control channel fd */
- close(intr_handle->fd);
+ close(rte_intr_handle_fd_get(intr_handle));
/* clear message queue */
while ((elt = TAILQ_FIRST(&cc->msg_queue)) != NULL) {
TAILQ_REMOVE(&cc->msg_queue, elt, next);
rte_free(elt);
}
+ rte_intr_handle_instance_free(cc->intr_handle);
/* free control channel */
rte_free(cc);
}
@@ -548,8 +552,8 @@ memif_disconnect(struct rte_eth_dev *dev)
"Unexpected message(s) in message queue.");
}
- ih = &pmd->cc->intr_handle;
- if (ih->fd > 0) {
+ ih = pmd->cc->intr_handle;
+ if (rte_intr_handle_fd_get(ih) > 0) {
ret = rte_intr_callback_unregister(ih,
memif_intr_handler,
pmd->cc);
@@ -563,7 +567,8 @@ memif_disconnect(struct rte_eth_dev *dev)
pmd->cc,
memif_intr_unregister_handler);
} else if (ret > 0) {
- close(ih->fd);
+ close(rte_intr_handle_fd_get(ih));
+ rte_intr_handle_instance_free(ih);
rte_free(pmd->cc);
}
pmd->cc = NULL;
@@ -587,9 +592,10 @@ memif_disconnect(struct rte_eth_dev *dev)
else
continue;
}
- if (mq->intr_handle.fd > 0) {
- close(mq->intr_handle.fd);
- mq->intr_handle.fd = -1;
+
+ if (rte_intr_handle_fd_get(mq->intr_handle) > 0) {
+ close(rte_intr_handle_fd_get(mq->intr_handle));
+ rte_intr_handle_fd_set(mq->intr_handle, -1);
}
}
for (i = 0; i < pmd->cfg.num_s2c_rings; i++) {
@@ -604,9 +610,10 @@ memif_disconnect(struct rte_eth_dev *dev)
else
continue;
}
- if (mq->intr_handle.fd > 0) {
- close(mq->intr_handle.fd);
- mq->intr_handle.fd = -1;
+
+ if (rte_intr_handle_fd_get(mq->intr_handle) > 0) {
+ close(rte_intr_handle_fd_get(mq->intr_handle));
+ rte_intr_handle_fd_set(mq->intr_handle, -1);
}
}
@@ -644,7 +651,7 @@ memif_msg_receive(struct memif_control_channel *cc)
mh.msg_control = ctl;
mh.msg_controllen = sizeof(ctl);
- size = recvmsg(cc->intr_handle.fd, &mh, 0);
+ size = recvmsg(rte_intr_handle_fd_get(cc->intr_handle), &mh, 0);
if (size != sizeof(memif_msg_t)) {
MIF_LOG(DEBUG, "Invalid message size = %zd", size);
if (size > 0)
@@ -774,7 +781,7 @@ memif_intr_handler(void *arg)
/* if driver failed to assign device */
if (cc->dev == NULL) {
memif_msg_send_from_queue(cc);
- ret = rte_intr_callback_unregister_pending(&cc->intr_handle,
+ ret = rte_intr_callback_unregister_pending(cc->intr_handle,
memif_intr_handler,
cc,
memif_intr_unregister_handler);
@@ -812,12 +819,12 @@ memif_listener_handler(void *arg)
int ret;
addr_len = sizeof(client);
- sockfd = accept(socket->intr_handle.fd, (struct sockaddr *)&client,
- (socklen_t *)&addr_len);
+ sockfd = accept(rte_intr_handle_fd_get(socket->intr_handle),
+ (struct sockaddr *)&client, (socklen_t *)&addr_len);
if (sockfd < 0) {
MIF_LOG(ERR,
"Failed to accept connection request on socket fd %d",
- socket->intr_handle.fd);
+ rte_intr_handle_fd_get(socket->intr_handle));
return;
}
@@ -829,13 +836,27 @@ memif_listener_handler(void *arg)
goto error;
}
- cc->intr_handle.fd = sockfd;
- cc->intr_handle.type = RTE_INTR_HANDLE_EXT;
+ /* Allocate interrupt instance */
+ cc->intr_handle =
+ rte_intr_handle_instance_alloc(RTE_INTR_HANDLE_DEFAULT_SIZE,
+ true);
+ if (!cc->intr_handle) {
+ MIF_LOG(ERR, "Failed to allocate intr handle");
+ goto error;
+ }
+
+ if (rte_intr_handle_fd_set(cc->intr_handle, sockfd))
+ goto error;
+
+ if (rte_intr_handle_type_set(cc->intr_handle, RTE_INTR_HANDLE_EXT))
+ goto error;
+
cc->socket = socket;
cc->dev = NULL;
TAILQ_INIT(&cc->msg_queue);
- ret = rte_intr_callback_register(&cc->intr_handle, memif_intr_handler, cc);
+ ret = rte_intr_callback_register(cc->intr_handle, memif_intr_handler,
+ cc);
if (ret < 0) {
MIF_LOG(ERR, "Failed to register control channel callback.");
goto error;
@@ -857,8 +878,11 @@ memif_listener_handler(void *arg)
close(sockfd);
sockfd = -1;
}
- if (cc != NULL)
+ if (cc != NULL) {
+ if (cc->intr_handle)
+ rte_intr_handle_instance_free(cc->intr_handle);
rte_free(cc);
+ }
}
static struct memif_socket *
@@ -914,9 +938,23 @@ memif_socket_create(char *key, uint8_t listener, bool is_abstract)
MIF_LOG(DEBUG, "Memif listener socket %s created.", sock->filename);
- sock->intr_handle.fd = sockfd;
- sock->intr_handle.type = RTE_INTR_HANDLE_EXT;
- ret = rte_intr_callback_register(&sock->intr_handle,
+ /* Allocate interrupt instance */
+ sock->intr_handle =
+ rte_intr_handle_instance_alloc(RTE_INTR_HANDLE_DEFAULT_SIZE,
+ true);
+ if (!sock->intr_handle) {
+ MIF_LOG(ERR, "Failed to allocate intr handle");
+ goto error;
+ }
+
+ if (rte_intr_handle_fd_set(sock->intr_handle, sockfd))
+ goto error;
+
+ if (rte_intr_handle_type_set(sock->intr_handle,
+ RTE_INTR_HANDLE_EXT))
+ goto error;
+
+ ret = rte_intr_callback_register(sock->intr_handle,
memif_listener_handler, sock);
if (ret < 0) {
MIF_LOG(ERR, "Failed to register interrupt "
@@ -929,8 +967,10 @@ memif_socket_create(char *key, uint8_t listener, bool is_abstract)
error:
MIF_LOG(ERR, "Failed to setup socket %s: %s", key, strerror(errno));
- if (sock != NULL)
+ if (sock != NULL) {
+ rte_intr_handle_instance_free(sock->intr_handle);
rte_free(sock);
+ }
if (sockfd >= 0)
close(sockfd);
return NULL;
@@ -1046,6 +1086,8 @@ memif_socket_remove_device(struct rte_eth_dev *dev)
MIF_LOG(ERR, "Failed to remove socket file: %s",
socket->filename);
}
+ if (pmd->role != MEMIF_ROLE_CLIENT)
+ rte_intr_handle_instance_free(socket->intr_handle);
rte_free(socket);
}
}
@@ -1108,13 +1150,26 @@ memif_connect_client(struct rte_eth_dev *dev)
goto error;
}
- pmd->cc->intr_handle.fd = sockfd;
- pmd->cc->intr_handle.type = RTE_INTR_HANDLE_EXT;
+ /* Allocate interrupt instance */
+ pmd->cc->intr_handle =
+ rte_intr_handle_instance_alloc(RTE_INTR_HANDLE_DEFAULT_SIZE,
+ true);
+ if (!pmd->cc->intr_handle) {
+ MIF_LOG(ERR, "Failed to allocate intr handle");
+ goto error;
+ }
+
+ if (rte_intr_handle_fd_set(pmd->cc->intr_handle, sockfd))
+ goto error;
+
+ if (rte_intr_handle_type_set(pmd->cc->intr_handle, RTE_INTR_HANDLE_EXT))
+ goto error;
+
pmd->cc->socket = NULL;
pmd->cc->dev = dev;
TAILQ_INIT(&pmd->cc->msg_queue);
- ret = rte_intr_callback_register(&pmd->cc->intr_handle,
+ ret = rte_intr_callback_register(pmd->cc->intr_handle,
memif_intr_handler, pmd->cc);
if (ret < 0) {
MIF_LOG(ERR, "Failed to register interrupt callback for control fd");
@@ -1129,6 +1184,7 @@ memif_connect_client(struct rte_eth_dev *dev)
sockfd = -1;
}
if (pmd->cc != NULL) {
+ rte_intr_handle_instance_free(pmd->cc->intr_handle);
rte_free(pmd->cc);
pmd->cc = NULL;
}
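Since struct rte_intr_handle can no longer be embedded, memif (and mlx4/mlx5 further down) first allocates an instance and then fills it through setters. A condensed sketch of that setup, assuming the allocation API from patch 2; the second argument to rte_intr_handle_instance_alloc() simply mirrors what the memif hunks pass, with its exact semantics defined by the series, and example_ext_handle_setup() is an illustrative wrapper:

#include <stdbool.h>
#include <rte_interrupts.h>

/* Sketch: wrap an already open file descriptor in an RTE_INTR_HANDLE_EXT
 * instance and register a callback on it; free the instance on any failure.
 */
static struct rte_intr_handle *
example_ext_handle_setup(int fd, rte_intr_callback_fn handler, void *arg)
{
        struct rte_intr_handle *ih;

        ih = rte_intr_handle_instance_alloc(RTE_INTR_HANDLE_DEFAULT_SIZE,
                                            true);
        if (ih == NULL)
                return NULL;

        if (rte_intr_handle_fd_set(ih, fd) ||
            rte_intr_handle_type_set(ih, RTE_INTR_HANDLE_EXT) ||
            rte_intr_callback_register(ih, handler, arg)) {
                rte_intr_handle_instance_free(ih);
                return NULL;
        }
        return ih;
}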
diff --git a/drivers/net/memif/memif_socket.h b/drivers/net/memif/memif_socket.h
index b9b8a15178..b0decbb0a2 100644
--- a/drivers/net/memif/memif_socket.h
+++ b/drivers/net/memif/memif_socket.h
@@ -85,7 +85,7 @@ struct memif_socket_dev_list_elt {
(sizeof(struct sockaddr_un) - offsetof(struct sockaddr_un, sun_path))
struct memif_socket {
- struct rte_intr_handle intr_handle; /**< interrupt handle */
+ struct rte_intr_handle *intr_handle; /**< interrupt handle */
char filename[MEMIF_SOCKET_UN_SIZE]; /**< socket filename */
TAILQ_HEAD(, memif_socket_dev_list_elt) dev_queue;
@@ -101,7 +101,7 @@ struct memif_msg_queue_elt {
};
struct memif_control_channel {
- struct rte_intr_handle intr_handle; /**< interrupt handle */
+ struct rte_intr_handle *intr_handle; /**< interrupt handle */
TAILQ_HEAD(, memif_msg_queue_elt) msg_queue; /**< control message queue */
struct memif_socket *socket; /**< pointer to socket */
struct rte_eth_dev *dev; /**< pointer to device */
diff --git a/drivers/net/memif/rte_eth_memif.c b/drivers/net/memif/rte_eth_memif.c
index de6becd45e..38fd93d2a7 100644
--- a/drivers/net/memif/rte_eth_memif.c
+++ b/drivers/net/memif/rte_eth_memif.c
@@ -325,7 +325,8 @@ eth_memif_rx(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
/* consume interrupt */
if ((ring->flags & MEMIF_RING_FLAG_MASK_INT) == 0)
- size = read(mq->intr_handle.fd, &b, sizeof(b));
+ size = read(rte_intr_handle_fd_get(mq->intr_handle), &b,
+ sizeof(b));
ring_size = 1 << mq->log2_ring_size;
mask = ring_size - 1;
@@ -461,7 +462,8 @@ eth_memif_rx_zc(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
if ((ring->flags & MEMIF_RING_FLAG_MASK_INT) == 0) {
uint64_t b;
ssize_t size __rte_unused;
- size = read(mq->intr_handle.fd, &b, sizeof(b));
+ size = read(rte_intr_handle_fd_get(mq->intr_handle), &b,
+ sizeof(b));
}
ring_size = 1 << mq->log2_ring_size;
@@ -678,7 +680,8 @@ eth_memif_tx(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
if ((ring->flags & MEMIF_RING_FLAG_MASK_INT) == 0) {
a = 1;
- size = write(mq->intr_handle.fd, &a, sizeof(a));
+ size = write(rte_intr_handle_fd_get(mq->intr_handle), &a,
+ sizeof(a));
if (unlikely(size < 0)) {
MIF_LOG(WARNING,
"Failed to send interrupt. %s", strerror(errno));
@@ -829,7 +832,8 @@ eth_memif_tx_zc(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
/* Send interrupt, if enabled. */
if ((ring->flags & MEMIF_RING_FLAG_MASK_INT) == 0) {
uint64_t a = 1;
- ssize_t size = write(mq->intr_handle.fd, &a, sizeof(a));
+ ssize_t size = write(rte_intr_handle_fd_get(mq->intr_handle),
+ &a, sizeof(a));
if (unlikely(size < 0)) {
MIF_LOG(WARNING,
"Failed to send interrupt. %s", strerror(errno));
@@ -1089,8 +1093,11 @@ memif_init_queues(struct rte_eth_dev *dev)
mq->ring_offset = memif_get_ring_offset(dev, mq, MEMIF_RING_C2S, i);
mq->last_head = 0;
mq->last_tail = 0;
- mq->intr_handle.fd = eventfd(0, EFD_NONBLOCK);
- if (mq->intr_handle.fd < 0) {
+ if (rte_intr_handle_fd_set(mq->intr_handle,
+ eventfd(0, EFD_NONBLOCK)))
+ return -rte_errno;
+
+ if (rte_intr_handle_fd_get(mq->intr_handle) < 0) {
MIF_LOG(WARNING,
"Failed to create eventfd for tx queue %d: %s.", i,
strerror(errno));
@@ -1112,8 +1119,11 @@ memif_init_queues(struct rte_eth_dev *dev)
mq->ring_offset = memif_get_ring_offset(dev, mq, MEMIF_RING_S2C, i);
mq->last_head = 0;
mq->last_tail = 0;
- mq->intr_handle.fd = eventfd(0, EFD_NONBLOCK);
- if (mq->intr_handle.fd < 0) {
+ if (rte_intr_handle_fd_set(mq->intr_handle,
+ eventfd(0, EFD_NONBLOCK)))
+ return -rte_errno;
+
+ if (rte_intr_handle_fd_get(mq->intr_handle) < 0) {
MIF_LOG(WARNING,
"Failed to create eventfd for rx queue %d: %s.", i,
strerror(errno));
@@ -1307,12 +1317,26 @@ memif_tx_queue_setup(struct rte_eth_dev *dev,
return -ENOMEM;
}
+ /* Allocate interrupt instance */
+ mq->intr_handle =
+ rte_intr_handle_instance_alloc(RTE_INTR_HANDLE_DEFAULT_SIZE,
+ true);
+ if (!mq->intr_handle) {
+ MIF_LOG(ERR, "Failed to allocate intr handle");
+ return -ENOMEM;
+ }
+
mq->type =
(pmd->role == MEMIF_ROLE_CLIENT) ? MEMIF_RING_C2S : MEMIF_RING_S2C;
mq->n_pkts = 0;
mq->n_bytes = 0;
- mq->intr_handle.fd = -1;
- mq->intr_handle.type = RTE_INTR_HANDLE_EXT;
+
+ if (rte_intr_handle_fd_set(mq->intr_handle, -1))
+ return -rte_errno;
+
+ if (rte_intr_handle_type_set(mq->intr_handle, RTE_INTR_HANDLE_EXT))
+ return -rte_errno;
+
mq->in_port = dev->data->port_id;
dev->data->tx_queues[qid] = mq;
@@ -1336,11 +1360,25 @@ memif_rx_queue_setup(struct rte_eth_dev *dev,
return -ENOMEM;
}
+ /* Allocate interrupt instance */
+ mq->intr_handle =
+ rte_intr_handle_instance_alloc(RTE_INTR_HANDLE_DEFAULT_SIZE,
+ true);
+ if (!mq->intr_handle) {
+ MIF_LOG(ERR, "Failed to allocate intr handle");
+ return -ENOMEM;
+ }
+
mq->type = (pmd->role == MEMIF_ROLE_CLIENT) ? MEMIF_RING_S2C : MEMIF_RING_C2S;
mq->n_pkts = 0;
mq->n_bytes = 0;
- mq->intr_handle.fd = -1;
- mq->intr_handle.type = RTE_INTR_HANDLE_EXT;
+
+ if (rte_intr_handle_fd_set(mq->intr_handle, -1))
+ return -rte_errno;
+
+ if (rte_intr_handle_type_set(mq->intr_handle, RTE_INTR_HANDLE_EXT))
+ return -rte_errno;
+
mq->mempool = mb_pool;
mq->in_port = dev->data->port_id;
dev->data->rx_queues[qid] = mq;
@@ -1356,6 +1394,7 @@ memif_queue_release(void *queue)
if (!mq)
return;
+ rte_intr_handle_instance_free(mq->intr_handle);
rte_free(mq);
}
diff --git a/drivers/net/memif/rte_eth_memif.h b/drivers/net/memif/rte_eth_memif.h
index 2038bda742..a5ee23d42e 100644
--- a/drivers/net/memif/rte_eth_memif.h
+++ b/drivers/net/memif/rte_eth_memif.h
@@ -68,7 +68,7 @@ struct memif_queue {
uint64_t n_pkts; /**< number of rx/tx packets */
uint64_t n_bytes; /**< number of rx/tx bytes */
- struct rte_intr_handle intr_handle; /**< interrupt handle */
+ struct rte_intr_handle *intr_handle; /**< interrupt handle */
memif_log2_ring_size_t log2_ring_size; /**< log2 of ring size */
};
diff --git a/drivers/net/mlx4/mlx4.c b/drivers/net/mlx4/mlx4.c
index c522157a0a..8d32694613 100644
--- a/drivers/net/mlx4/mlx4.c
+++ b/drivers/net/mlx4/mlx4.c
@@ -1045,9 +1045,20 @@ mlx4_pci_probe(struct rte_pci_driver *pci_drv, struct rte_pci_device *pci_dev)
rte_eth_copy_pci_info(eth_dev, pci_dev);
eth_dev->data->dev_flags |= RTE_ETH_DEV_AUTOFILL_QUEUE_XSTATS;
/* Initialize local interrupt handle for current port. */
- memset(&priv->intr_handle, 0, sizeof(struct rte_intr_handle));
- priv->intr_handle.fd = -1;
- priv->intr_handle.type = RTE_INTR_HANDLE_EXT;
+ priv->intr_handle =
+ rte_intr_handle_instance_alloc(RTE_INTR_HANDLE_DEFAULT_SIZE,
+ true);
+ if (!priv->intr_handle) {
+ RTE_LOG(ERR, EAL, "Fail to allocate intr_handle\n");
+ goto port_error;
+ }
+
+ if (rte_intr_handle_fd_set(priv->intr_handle, -1))
+ goto port_error;
+
+ if (rte_intr_handle_type_set(priv->intr_handle,
+ RTE_INTR_HANDLE_EXT))
+ goto port_error;
/*
* Override ethdev interrupt handle pointer with private
* handle instead of that of the parent PCI device used by
@@ -1060,7 +1071,7 @@ mlx4_pci_probe(struct rte_pci_driver *pci_drv, struct rte_pci_device *pci_dev)
* besides setting up eth_dev->intr_handle, the rest is
* handled by rte_intr_rx_ctl().
*/
- eth_dev->intr_handle = &priv->intr_handle;
+ eth_dev->intr_handle = priv->intr_handle;
priv->dev_data = eth_dev->data;
eth_dev->dev_ops = &mlx4_dev_ops;
#ifdef HAVE_IBV_MLX4_BUF_ALLOCATORS
@@ -1105,6 +1116,7 @@ mlx4_pci_probe(struct rte_pci_driver *pci_drv, struct rte_pci_device *pci_dev)
prev_dev = eth_dev;
continue;
port_error:
+ rte_intr_handle_instance_free(priv->intr_handle);
rte_free(priv);
if (eth_dev != NULL)
eth_dev->data->dev_private = NULL;
diff --git a/drivers/net/mlx4/mlx4.h b/drivers/net/mlx4/mlx4.h
index e07b1d2386..2d0c512f79 100644
--- a/drivers/net/mlx4/mlx4.h
+++ b/drivers/net/mlx4/mlx4.h
@@ -176,7 +176,7 @@ struct mlx4_priv {
uint32_t tso_max_payload_sz; /**< Max supported TSO payload size. */
uint32_t hw_rss_max_qps; /**< Max Rx Queues supported by RSS. */
uint64_t hw_rss_sup; /**< Supported RSS hash fields (Verbs format). */
- struct rte_intr_handle intr_handle; /**< Port interrupt handle. */
+ struct rte_intr_handle *intr_handle; /**< Port interrupt handle. */
struct mlx4_drop *drop; /**< Shared resources for drop flow rules. */
struct {
uint32_t dev_gen; /* Generation number to flush local caches. */
diff --git a/drivers/net/mlx4/mlx4_intr.c b/drivers/net/mlx4/mlx4_intr.c
index d56009c418..1e28b8e4b2 100644
--- a/drivers/net/mlx4/mlx4_intr.c
+++ b/drivers/net/mlx4/mlx4_intr.c
@@ -43,12 +43,13 @@ static int mlx4_link_status_check(struct mlx4_priv *priv);
static void
mlx4_rx_intr_vec_disable(struct mlx4_priv *priv)
{
- struct rte_intr_handle *intr_handle = &priv->intr_handle;
+ struct rte_intr_handle *intr_handle = priv->intr_handle;
rte_intr_free_epoll_fd(intr_handle);
- free(intr_handle->intr_vec);
- intr_handle->nb_efd = 0;
- intr_handle->intr_vec = NULL;
+ if (rte_intr_handle_vec_list_base(intr_handle))
+ rte_intr_handle_vec_list_free(intr_handle);
+
+ rte_intr_handle_nb_efd_set(intr_handle, 0);
}
/**
@@ -67,11 +68,10 @@ mlx4_rx_intr_vec_enable(struct mlx4_priv *priv)
unsigned int rxqs_n = ETH_DEV(priv)->data->nb_rx_queues;
unsigned int n = RTE_MIN(rxqs_n, (uint32_t)RTE_MAX_RXTX_INTR_VEC_ID);
unsigned int count = 0;
- struct rte_intr_handle *intr_handle = &priv->intr_handle;
+ struct rte_intr_handle *intr_handle = priv->intr_handle;
mlx4_rx_intr_vec_disable(priv);
- intr_handle->intr_vec = malloc(n * sizeof(intr_handle->intr_vec[0]));
- if (intr_handle->intr_vec == NULL) {
+ if (rte_intr_handle_vec_list_alloc(intr_handle, NULL, n)) {
rte_errno = ENOMEM;
ERROR("failed to allocate memory for interrupt vector,"
" Rx interrupts will not be supported");
@@ -83,9 +83,9 @@ mlx4_rx_intr_vec_enable(struct mlx4_priv *priv)
/* Skip queues that cannot request interrupts. */
if (!rxq || !rxq->channel) {
/* Use invalid intr_vec[] index to disable entry. */
- intr_handle->intr_vec[i] =
- RTE_INTR_VEC_RXTX_OFFSET +
- RTE_MAX_RXTX_INTR_VEC_ID;
+ if (rte_intr_handle_vec_list_index_set(intr_handle, i,
+ RTE_INTR_VEC_RXTX_OFFSET + RTE_MAX_RXTX_INTR_VEC_ID))
+ return -rte_errno;
continue;
}
if (count >= RTE_MAX_RXTX_INTR_VEC_ID) {
@@ -96,14 +96,22 @@ mlx4_rx_intr_vec_enable(struct mlx4_priv *priv)
mlx4_rx_intr_vec_disable(priv);
return -rte_errno;
}
- intr_handle->intr_vec[i] = RTE_INTR_VEC_RXTX_OFFSET + count;
- intr_handle->efds[count] = rxq->channel->fd;
+
+ if (rte_intr_handle_vec_list_index_set(intr_handle, i,
+ RTE_INTR_VEC_RXTX_OFFSET + count))
+ return -rte_errno;
+
+ if (rte_intr_handle_efds_index_set(intr_handle, count,
+ rxq->channel->fd))
+ return -rte_errno;
+
count++;
}
if (!count)
mlx4_rx_intr_vec_disable(priv);
else
- intr_handle->nb_efd = count;
+ if (rte_intr_handle_nb_efd_set(intr_handle, count))
+ return -rte_errno;
return 0;
}
@@ -254,12 +262,13 @@ mlx4_intr_uninstall(struct mlx4_priv *priv)
{
int err = rte_errno; /* Make sure rte_errno remains unchanged. */
- if (priv->intr_handle.fd != -1) {
- rte_intr_callback_unregister(&priv->intr_handle,
+ if (rte_intr_handle_fd_get(priv->intr_handle) != -1) {
+ rte_intr_callback_unregister(priv->intr_handle,
(void (*)(void *))
mlx4_interrupt_handler,
priv);
- priv->intr_handle.fd = -1;
+ if (rte_intr_handle_fd_set(priv->intr_handle, -1))
+ return -rte_errno;
}
rte_eal_alarm_cancel((void (*)(void *))mlx4_link_status_alarm, priv);
priv->intr_alarm = 0;
@@ -286,8 +295,11 @@ mlx4_intr_install(struct mlx4_priv *priv)
mlx4_intr_uninstall(priv);
if (intr_conf->lsc | intr_conf->rmv) {
- priv->intr_handle.fd = priv->ctx->async_fd;
- rc = rte_intr_callback_register(&priv->intr_handle,
+ if (rte_intr_handle_fd_set(priv->intr_handle,
+ priv->ctx->async_fd))
+ return -rte_errno;
+
+ rc = rte_intr_callback_register(priv->intr_handle,
(void (*)(void *))
mlx4_interrupt_handler,
priv);
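The mlx4/mlx5 Rx interrupt vector enable path keeps its original shape, but now records both the per-queue vector and the event fd through setters and publishes the count via nb_efd. A sketch under the same patch 1 assumptions; channel_fd_of() stands in for the driver's per-queue channel fd lookup and is illustrative only:

#include <errno.h>
#include <rte_errno.h>
#include <rte_interrupts.h>

/* Sketch of the enable path: record one event fd per interrupt-capable
 * queue, disable entries for queues without a channel, and publish the
 * final count via nb_efd, mirroring mlx4_rx_intr_vec_enable() above.
 */
static int
example_rx_intr_vec_enable(struct rte_intr_handle *intr_handle,
                           unsigned int n,
                           int (*channel_fd_of)(unsigned int))
{
        unsigned int i, count = 0;

        if (rte_intr_handle_vec_list_alloc(intr_handle, NULL, n)) {
                rte_errno = ENOMEM;
                return -rte_errno;
        }
        for (i = 0; i < n; i++) {
                int fd = channel_fd_of(i);

                if (fd < 0) {
                        /* Invalid index disables the entry, as before. */
                        if (rte_intr_handle_vec_list_index_set(intr_handle, i,
                            RTE_INTR_VEC_RXTX_OFFSET +
                            RTE_MAX_RXTX_INTR_VEC_ID))
                                return -rte_errno;
                        continue;
                }
                if (rte_intr_handle_vec_list_index_set(intr_handle, i,
                                RTE_INTR_VEC_RXTX_OFFSET + count) ||
                    rte_intr_handle_efds_index_set(intr_handle, count, fd))
                        return -rte_errno;
                count++;
        }
        if (!count)
                rte_intr_handle_vec_list_free(intr_handle);
        else if (rte_intr_handle_nb_efd_set(intr_handle, count))
                return -rte_errno;
        return 0;
}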
diff --git a/drivers/net/mlx5/linux/mlx5_os.c b/drivers/net/mlx5/linux/mlx5_os.c
index 5f8766aa48..117a3ded16 100644
--- a/drivers/net/mlx5/linux/mlx5_os.c
+++ b/drivers/net/mlx5/linux/mlx5_os.c
@@ -2589,9 +2589,8 @@ mlx5_os_pci_probe_pf(struct rte_pci_device *pci_dev,
*/
if (list[i].info.representor) {
struct rte_intr_handle *intr_handle;
- intr_handle = mlx5_malloc(MLX5_MEM_SYS | MLX5_MEM_ZERO,
- sizeof(*intr_handle), 0,
- SOCKET_ID_ANY);
+ intr_handle = rte_intr_handle_instance_alloc
+ (RTE_INTR_HANDLE_DEFAULT_SIZE, true);
if (!intr_handle) {
DRV_LOG(ERR,
"port %u failed to allocate memory for interrupt handler "
@@ -2745,7 +2744,7 @@ mlx5_os_auxiliary_probe(struct rte_device *dev)
if (eth_dev == NULL)
return -rte_errno;
/* Post create. */
- eth_dev->intr_handle = &adev->intr_handle;
+ eth_dev->intr_handle = adev->intr_handle;
if (rte_eal_process_type() == RTE_PROC_PRIMARY) {
eth_dev->data->dev_flags |= RTE_ETH_DEV_INTR_LSC;
eth_dev->data->dev_flags |= RTE_ETH_DEV_INTR_RMV;
@@ -2929,7 +2928,16 @@ mlx5_os_dev_shared_handler_install(struct mlx5_dev_ctx_shared *sh)
int ret;
int flags;
- sh->intr_handle.fd = -1;
+ sh->intr_handle =
+ rte_intr_handle_instance_alloc(RTE_INTR_HANDLE_DEFAULT_SIZE,
+ true);
+ if (!sh->intr_handle) {
+ DRV_LOG(ERR, "Fail to allocate intr_handle");
+ rte_errno = ENOMEM;
+ return;
+ }
+ rte_intr_handle_fd_set(sh->intr_handle, -1);
+
flags = fcntl(((struct ibv_context *)sh->ctx)->async_fd, F_GETFL);
ret = fcntl(((struct ibv_context *)sh->ctx)->async_fd,
F_SETFL, flags | O_NONBLOCK);
@@ -2937,17 +2945,26 @@ mlx5_os_dev_shared_handler_install(struct mlx5_dev_ctx_shared *sh)
DRV_LOG(INFO, "failed to change file descriptor async event"
" queue");
} else {
- sh->intr_handle.fd = ((struct ibv_context *)sh->ctx)->async_fd;
- sh->intr_handle.type = RTE_INTR_HANDLE_EXT;
- if (rte_intr_callback_register(&sh->intr_handle,
+ rte_intr_handle_fd_set(sh->intr_handle,
+ ((struct ibv_context *)sh->ctx)->async_fd);
+ rte_intr_handle_type_set(sh->intr_handle, RTE_INTR_HANDLE_EXT);
+ if (rte_intr_callback_register(sh->intr_handle,
mlx5_dev_interrupt_handler, sh)) {
DRV_LOG(INFO, "Fail to install the shared interrupt.");
- sh->intr_handle.fd = -1;
+ rte_intr_handle_fd_set(sh->intr_handle, -1);
}
}
if (sh->devx) {
#ifdef HAVE_IBV_DEVX_ASYNC
- sh->intr_handle_devx.fd = -1;
+ sh->intr_handle_devx =
+ rte_intr_handle_instance_alloc(RTE_INTR_HANDLE_DEFAULT_SIZE,
+ true);
+ if (!sh->intr_handle_devx) {
+ DRV_LOG(ERR, "Fail to allocate intr_handle");
+ rte_errno = ENOMEM;
+ return;
+ }
+ rte_intr_handle_fd_set(sh->intr_handle_devx, -1);
sh->devx_comp =
(void *)mlx5_glue->devx_create_cmd_comp(sh->ctx);
struct mlx5dv_devx_cmd_comp *devx_comp = sh->devx_comp;
@@ -2962,13 +2979,14 @@ mlx5_os_dev_shared_handler_install(struct mlx5_dev_ctx_shared *sh)
" devx comp");
return;
}
- sh->intr_handle_devx.fd = devx_comp->fd;
- sh->intr_handle_devx.type = RTE_INTR_HANDLE_EXT;
- if (rte_intr_callback_register(&sh->intr_handle_devx,
+ rte_intr_handle_fd_set(sh->intr_handle_devx, devx_comp->fd);
+ rte_intr_handle_type_set(sh->intr_handle_devx,
+ RTE_INTR_HANDLE_EXT);
+ if (rte_intr_callback_register(sh->intr_handle_devx,
mlx5_dev_interrupt_handler_devx, sh)) {
DRV_LOG(INFO, "Fail to install the devx shared"
" interrupt.");
- sh->intr_handle_devx.fd = -1;
+ rte_intr_handle_fd_set(sh->intr_handle_devx, -1);
}
#endif /* HAVE_IBV_DEVX_ASYNC */
}
@@ -2985,13 +3003,15 @@ mlx5_os_dev_shared_handler_install(struct mlx5_dev_ctx_shared *sh)
void
mlx5_os_dev_shared_handler_uninstall(struct mlx5_dev_ctx_shared *sh)
{
- if (sh->intr_handle.fd >= 0)
- mlx5_intr_callback_unregister(&sh->intr_handle,
+ if (rte_intr_handle_fd_get(sh->intr_handle) >= 0)
+ mlx5_intr_callback_unregister(sh->intr_handle,
mlx5_dev_interrupt_handler, sh);
+ rte_intr_handle_instance_free(sh->intr_handle);
#ifdef HAVE_IBV_DEVX_ASYNC
- if (sh->intr_handle_devx.fd >= 0)
- rte_intr_callback_unregister(&sh->intr_handle_devx,
+ if (rte_intr_handle_fd_get(sh->intr_handle_devx) >= 0)
+ rte_intr_callback_unregister(sh->intr_handle_devx,
mlx5_dev_interrupt_handler_devx, sh);
+ rte_intr_handle_instance_free(sh->intr_handle_devx);
if (sh->devx_comp)
mlx5_glue->devx_destroy_cmd_comp(sh->devx_comp);
#endif
diff --git a/drivers/net/mlx5/linux/mlx5_socket.c b/drivers/net/mlx5/linux/mlx5_socket.c
index 6356b66dc4..9007333c61 100644
--- a/drivers/net/mlx5/linux/mlx5_socket.c
+++ b/drivers/net/mlx5/linux/mlx5_socket.c
@@ -23,7 +23,7 @@
#define MLX5_SOCKET_PATH "/var/tmp/dpdk_net_mlx5_%d"
int server_socket; /* Unix socket for primary process. */
-struct rte_intr_handle server_intr_handle; /* Interrupt handler. */
+struct rte_intr_handle *server_intr_handle; /* Interrupt handler. */
/**
* Handle server pmd socket interrupts.
@@ -145,9 +145,20 @@ static int
mlx5_pmd_interrupt_handler_install(void)
{
MLX5_ASSERT(server_socket);
- server_intr_handle.fd = server_socket;
- server_intr_handle.type = RTE_INTR_HANDLE_EXT;
- return rte_intr_callback_register(&server_intr_handle,
+ server_intr_handle =
+ rte_intr_handle_instance_alloc(RTE_INTR_HANDLE_DEFAULT_SIZE,
+ false);
+ if (!server_intr_handle) {
+ DRV_LOG(ERR, "Fail to allocate intr_handle");
+ return -ENOMEM;
+ }
+ if (rte_intr_handle_fd_set(server_intr_handle, server_socket))
+ return -1;
+
+ if (rte_intr_handle_type_set(server_intr_handle, RTE_INTR_HANDLE_EXT))
+ return -1;
+
+ return rte_intr_callback_register(server_intr_handle,
mlx5_pmd_socket_handle, NULL);
}
@@ -158,12 +169,13 @@ static void
mlx5_pmd_interrupt_handler_uninstall(void)
{
if (server_socket) {
- mlx5_intr_callback_unregister(&server_intr_handle,
+ mlx5_intr_callback_unregister(server_intr_handle,
mlx5_pmd_socket_handle,
NULL);
}
- server_intr_handle.fd = 0;
- server_intr_handle.type = RTE_INTR_HANDLE_UNKNOWN;
+ rte_intr_handle_fd_set(server_intr_handle, 0);
+ rte_intr_handle_type_set(server_intr_handle, RTE_INTR_HANDLE_UNKNOWN);
+ rte_intr_handle_instance_free(server_intr_handle);
}
/**
diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h
index e02714e231..b4666fd379 100644
--- a/drivers/net/mlx5/mlx5.h
+++ b/drivers/net/mlx5/mlx5.h
@@ -1016,7 +1016,7 @@ struct mlx5_dev_txpp {
uint32_t tick; /* Completion tick duration in nanoseconds. */
uint32_t test; /* Packet pacing test mode. */
int32_t skew; /* Scheduling skew. */
- struct rte_intr_handle intr_handle; /* Periodic interrupt. */
+ struct rte_intr_handle *intr_handle; /* Periodic interrupt. */
void *echan; /* Event Channel. */
struct mlx5_txpp_wq clock_queue; /* Clock Queue. */
struct mlx5_txpp_wq rearm_queue; /* Clock Queue. */
@@ -1184,8 +1184,8 @@ struct mlx5_dev_ctx_shared {
/* Memory Pool for mlx5 flow resources. */
struct mlx5_l3t_tbl *cnt_id_tbl; /* Shared counter lookup table. */
/* Shared interrupt handler section. */
- struct rte_intr_handle intr_handle; /* Interrupt handler for device. */
- struct rte_intr_handle intr_handle_devx; /* DEVX interrupt handler. */
+ struct rte_intr_handle *intr_handle; /* Interrupt handler for device. */
+ struct rte_intr_handle *intr_handle_devx; /* DEVX interrupt handler. */
void *devx_comp; /* DEVX async comp obj. */
struct mlx5_devx_obj *tis; /* TIS object. */
struct mlx5_devx_obj *td; /* Transport domain. */
diff --git a/drivers/net/mlx5/mlx5_rxq.c b/drivers/net/mlx5/mlx5_rxq.c
index abd8ce7989..75bcb82bf9 100644
--- a/drivers/net/mlx5/mlx5_rxq.c
+++ b/drivers/net/mlx5/mlx5_rxq.c
@@ -837,10 +837,7 @@ mlx5_rx_intr_vec_enable(struct rte_eth_dev *dev)
if (!dev->data->dev_conf.intr_conf.rxq)
return 0;
mlx5_rx_intr_vec_disable(dev);
- intr_handle->intr_vec = mlx5_malloc(0,
- n * sizeof(intr_handle->intr_vec[0]),
- 0, SOCKET_ID_ANY);
- if (intr_handle->intr_vec == NULL) {
+ if (rte_intr_handle_vec_list_alloc(intr_handle, NULL, n)) {
DRV_LOG(ERR,
"port %u failed to allocate memory for interrupt"
" vector, Rx interrupts will not be supported",
@@ -848,7 +845,10 @@ mlx5_rx_intr_vec_enable(struct rte_eth_dev *dev)
rte_errno = ENOMEM;
return -rte_errno;
}
- intr_handle->type = RTE_INTR_HANDLE_EXT;
+
+ if (rte_intr_handle_type_set(intr_handle, RTE_INTR_HANDLE_EXT))
+ return -rte_errno;
+
for (i = 0; i != n; ++i) {
/* This rxq obj must not be released in this function. */
struct mlx5_rxq_ctrl *rxq_ctrl = mlx5_rxq_get(dev, i);
@@ -859,9 +859,9 @@ mlx5_rx_intr_vec_enable(struct rte_eth_dev *dev)
if (!rxq_obj || (!rxq_obj->ibv_channel &&
!rxq_obj->devx_channel)) {
/* Use invalid intr_vec[] index to disable entry. */
- intr_handle->intr_vec[i] =
- RTE_INTR_VEC_RXTX_OFFSET +
- RTE_MAX_RXTX_INTR_VEC_ID;
+ if (rte_intr_handle_vec_list_index_set(intr_handle, i,
+ RTE_INTR_VEC_RXTX_OFFSET + RTE_MAX_RXTX_INTR_VEC_ID))
+ return -rte_errno;
/* Decrease the rxq_ctrl's refcnt */
if (rxq_ctrl)
mlx5_rxq_release(dev, i);
@@ -888,14 +888,20 @@ mlx5_rx_intr_vec_enable(struct rte_eth_dev *dev)
mlx5_rx_intr_vec_disable(dev);
return -rte_errno;
}
- intr_handle->intr_vec[i] = RTE_INTR_VEC_RXTX_OFFSET + count;
- intr_handle->efds[count] = rxq_obj->fd;
+
+ if (rte_intr_handle_vec_list_index_set(intr_handle, i,
+ RTE_INTR_VEC_RXTX_OFFSET + count))
+ return -rte_errno;
+ if (rte_intr_handle_efds_index_set(intr_handle, count,
+ rxq_obj->fd))
+ return -rte_errno;
count++;
}
if (!count)
mlx5_rx_intr_vec_disable(dev);
else
- intr_handle->nb_efd = count;
+ if (rte_intr_handle_nb_efd_set(intr_handle, count))
+ return -rte_errno;
return 0;
}
@@ -916,11 +922,11 @@ mlx5_rx_intr_vec_disable(struct rte_eth_dev *dev)
if (!dev->data->dev_conf.intr_conf.rxq)
return;
- if (!intr_handle->intr_vec)
+ if (!rte_intr_handle_vec_list_base(intr_handle))
goto free;
for (i = 0; i != n; ++i) {
- if (intr_handle->intr_vec[i] == RTE_INTR_VEC_RXTX_OFFSET +
- RTE_MAX_RXTX_INTR_VEC_ID)
+ if (rte_intr_handle_vec_list_index_get(intr_handle, i) ==
+ RTE_INTR_VEC_RXTX_OFFSET + RTE_MAX_RXTX_INTR_VEC_ID)
continue;
/**
* Need to access directly the queue to release the reference
@@ -930,10 +936,11 @@ mlx5_rx_intr_vec_disable(struct rte_eth_dev *dev)
}
free:
rte_intr_free_epoll_fd(intr_handle);
- if (intr_handle->intr_vec)
- mlx5_free(intr_handle->intr_vec);
- intr_handle->nb_efd = 0;
- intr_handle->intr_vec = NULL;
+
+ if (rte_intr_handle_vec_list_base(intr_handle))
+ rte_intr_handle_vec_list_free(intr_handle);
+
+ rte_intr_handle_nb_efd_set(intr_handle, 0);
}
/**
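
Taken together, the mlx5_rxq.c hunks above show the Rx interrupt vector pattern after the conversion: the vector list and efds array live inside the opaque handle and are reached only through the new accessors, each returning non-zero and setting rte_errno on failure. A condensed, hypothetical sketch of that flow follows (queue_fds[] stands in for whatever per-queue event fds a driver owns; includes as in the earlier sketch plus rte_errno.h).

static int
my_rx_intr_vec_enable(struct rte_intr_handle *intr_handle,
		      const int *queue_fds, uint16_t nb_rxq)
{
	uint16_t i, count = 0;

	if (rte_intr_handle_vec_list_alloc(intr_handle, NULL, nb_rxq))
		return -rte_errno;
	if (rte_intr_handle_type_set(intr_handle, RTE_INTR_HANDLE_EXT))
		return -rte_errno;
	for (i = 0; i < nb_rxq; i++) {
		if (queue_fds[i] < 0) {
			/* An out-of-range value disables this entry. */
			if (rte_intr_handle_vec_list_index_set(intr_handle, i,
					RTE_INTR_VEC_RXTX_OFFSET +
					RTE_MAX_RXTX_INTR_VEC_ID))
				return -rte_errno;
			continue;
		}
		if (rte_intr_handle_vec_list_index_set(intr_handle, i,
				RTE_INTR_VEC_RXTX_OFFSET + count))
			return -rte_errno;
		if (rte_intr_handle_efds_index_set(intr_handle, count,
				queue_fds[i]))
			return -rte_errno;
		count++;
	}
	if (rte_intr_handle_nb_efd_set(intr_handle, count))
		return -rte_errno;
	return 0;
}

static void
my_rx_intr_vec_disable(struct rte_intr_handle *intr_handle)
{
	rte_intr_free_epoll_fd(intr_handle);
	if (rte_intr_handle_vec_list_base(intr_handle))
		rte_intr_handle_vec_list_free(intr_handle);
	rte_intr_handle_nb_efd_set(intr_handle, 0);
}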
diff --git a/drivers/net/mlx5/mlx5_trigger.c b/drivers/net/mlx5/mlx5_trigger.c
index 54173bfacb..d349e5df44 100644
--- a/drivers/net/mlx5/mlx5_trigger.c
+++ b/drivers/net/mlx5/mlx5_trigger.c
@@ -1129,7 +1129,7 @@ mlx5_dev_start(struct rte_eth_dev *dev)
dev->rx_pkt_burst = mlx5_select_rx_function(dev);
/* Enable datapath on secondary process. */
mlx5_mp_os_req_start_rxtx(dev);
- if (priv->sh->intr_handle.fd >= 0) {
+ if (rte_intr_handle_fd_get(priv->sh->intr_handle) >= 0) {
priv->sh->port[priv->dev_port - 1].ih_port_id =
(uint32_t)dev->data->port_id;
} else {
@@ -1138,7 +1138,7 @@ mlx5_dev_start(struct rte_eth_dev *dev)
dev->data->dev_conf.intr_conf.lsc = 0;
dev->data->dev_conf.intr_conf.rmv = 0;
}
- if (priv->sh->intr_handle_devx.fd >= 0)
+ if (rte_intr_handle_fd_get(priv->sh->intr_handle_devx) >= 0)
priv->sh->port[priv->dev_port - 1].devx_ih_port_id =
(uint32_t)dev->data->port_id;
return 0;
diff --git a/drivers/net/mlx5/mlx5_txpp.c b/drivers/net/mlx5/mlx5_txpp.c
index 4f6da9f2d1..9567c4866d 100644
--- a/drivers/net/mlx5/mlx5_txpp.c
+++ b/drivers/net/mlx5/mlx5_txpp.c
@@ -756,11 +756,12 @@ mlx5_txpp_interrupt_handler(void *cb_arg)
static void
mlx5_txpp_stop_service(struct mlx5_dev_ctx_shared *sh)
{
- if (!sh->txpp.intr_handle.fd)
+ if (!rte_intr_handle_fd_get(sh->txpp.intr_handle))
return;
- mlx5_intr_callback_unregister(&sh->txpp.intr_handle,
+ mlx5_intr_callback_unregister(sh->txpp.intr_handle,
mlx5_txpp_interrupt_handler, sh);
- sh->txpp.intr_handle.fd = 0;
+ rte_intr_handle_fd_set(sh->txpp.intr_handle, 0);
+ rte_intr_handle_instance_free(sh->txpp.intr_handle);
}
/* Attach interrupt handler and fires first request to Rearm Queue. */
@@ -784,13 +785,23 @@ mlx5_txpp_start_service(struct mlx5_dev_ctx_shared *sh)
rte_errno = errno;
return -rte_errno;
}
- memset(&sh->txpp.intr_handle, 0, sizeof(sh->txpp.intr_handle));
+ sh->txpp.intr_handle =
+ rte_intr_handle_instance_alloc(RTE_INTR_HANDLE_DEFAULT_SIZE,
+ true);
+ if (!sh->txpp.intr_handle) {
+ DRV_LOG(ERR, "Failed to allocate intr_handle");
+ return -ENOMEM;
+ }
fd = mlx5_os_get_devx_channel_fd(sh->txpp.echan);
- sh->txpp.intr_handle.fd = fd;
- sh->txpp.intr_handle.type = RTE_INTR_HANDLE_EXT;
- if (rte_intr_callback_register(&sh->txpp.intr_handle,
+ if (rte_intr_handle_fd_set(sh->txpp.intr_handle, fd))
+ return -rte_errno;
+
+ if (rte_intr_handle_type_set(sh->txpp.intr_handle, RTE_INTR_HANDLE_EXT))
+ return -rte_errno;
+
+ if (rte_intr_callback_register(sh->txpp.intr_handle,
mlx5_txpp_interrupt_handler, sh)) {
- sh->txpp.intr_handle.fd = 0;
+ rte_intr_handle_fd_set(sh->txpp.intr_handle, 0);
DRV_LOG(ERR, "Failed to register CQE interrupt %d.", rte_errno);
return -rte_errno;
}
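
The mlx5_txpp.c hunks above give the full lifecycle for a driver-private handle: allocate the instance, set its fd and type through the wrappers, register the callback, and on stop unregister and free the instance. A hedged sketch of that sequence with hypothetical my_* names (same includes as the first sketch):

struct my_ctx {
	struct rte_intr_handle *intr_handle;
};

static void
my_isr(void *cb_arg)
{
	(void)cb_arg; /* drain the event source here */
}

static int
my_service_start(struct my_ctx *ctx, int event_fd)
{
	ctx->intr_handle =
		rte_intr_handle_instance_alloc(RTE_INTR_HANDLE_DEFAULT_SIZE,
					       true);
	if (ctx->intr_handle == NULL)
		return -ENOMEM;
	if (rte_intr_handle_fd_set(ctx->intr_handle, event_fd) ||
	    rte_intr_handle_type_set(ctx->intr_handle, RTE_INTR_HANDLE_EXT) ||
	    rte_intr_callback_register(ctx->intr_handle, my_isr, ctx)) {
		/* Free the instance on any failure after allocation. */
		rte_intr_handle_instance_free(ctx->intr_handle);
		ctx->intr_handle = NULL;
		return -rte_errno;
	}
	return 0;
}

static void
my_service_stop(struct my_ctx *ctx)
{
	if (ctx->intr_handle == NULL)
		return;
	rte_intr_callback_unregister(ctx->intr_handle, my_isr, ctx);
	rte_intr_handle_instance_free(ctx->intr_handle);
	ctx->intr_handle = NULL;
}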
diff --git a/drivers/net/netvsc/hn_ethdev.c b/drivers/net/netvsc/hn_ethdev.c
index 9e2a405973..caf64ccfc2 100644
--- a/drivers/net/netvsc/hn_ethdev.c
+++ b/drivers/net/netvsc/hn_ethdev.c
@@ -133,9 +133,9 @@ eth_dev_vmbus_allocate(struct rte_vmbus_device *dev, size_t private_data_size)
eth_dev->device = &dev->device;
/* interrupt is simulated */
- dev->intr_handle.type = RTE_INTR_HANDLE_EXT;
+ rte_intr_handle_type_set(dev->intr_handle, RTE_INTR_HANDLE_EXT);
eth_dev->data->dev_flags |= RTE_ETH_DEV_INTR_LSC;
- eth_dev->intr_handle = &dev->intr_handle;
+ eth_dev->intr_handle = dev->intr_handle;
return eth_dev;
}
diff --git a/drivers/net/nfp/nfp_common.c b/drivers/net/nfp/nfp_common.c
index 1b4bc33593..ee083359b5 100644
--- a/drivers/net/nfp/nfp_common.c
+++ b/drivers/net/nfp/nfp_common.c
@@ -307,11 +307,9 @@ nfp_configure_rx_interrupt(struct rte_eth_dev *dev,
struct nfp_net_hw *hw;
int i;
- if (!intr_handle->intr_vec) {
- intr_handle->intr_vec =
- rte_zmalloc("intr_vec",
- dev->data->nb_rx_queues * sizeof(int), 0);
- if (!intr_handle->intr_vec) {
+ if (!rte_intr_handle_vec_list_base(intr_handle)) {
+ if (rte_intr_handle_vec_list_alloc(intr_handle, "intr_vec",
+ dev->data->nb_rx_queues)) {
PMD_INIT_LOG(ERR, "Failed to allocate %d rx_queues"
" intr_vec", dev->data->nb_rx_queues);
return -ENOMEM;
@@ -320,11 +318,12 @@ nfp_configure_rx_interrupt(struct rte_eth_dev *dev,
hw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private);
- if (intr_handle->type == RTE_INTR_HANDLE_UIO) {
+ if (rte_intr_handle_type_get(intr_handle) == RTE_INTR_HANDLE_UIO) {
PMD_INIT_LOG(INFO, "VF: enabling RX interrupt with UIO");
/* UIO just supports one queue and no LSC*/
nn_cfg_writeb(hw, NFP_NET_CFG_RXR_VEC(0), 0);
- intr_handle->intr_vec[0] = 0;
+ if (rte_intr_handle_vec_list_index_set(intr_handle, 0, 0))
+ return -1;
} else {
PMD_INIT_LOG(INFO, "VF: enabling RX interrupt with VFIO");
for (i = 0; i < dev->data->nb_rx_queues; i++) {
@@ -333,9 +332,12 @@ nfp_configure_rx_interrupt(struct rte_eth_dev *dev,
* efd interrupts
*/
nn_cfg_writeb(hw, NFP_NET_CFG_RXR_VEC(i), i + 1);
- intr_handle->intr_vec[i] = i + 1;
+ if (rte_intr_handle_vec_list_index_set(intr_handle, i,
+ i + 1))
+ return -1;
PMD_INIT_LOG(DEBUG, "intr_vec[%d]= %d", i,
- intr_handle->intr_vec[i]);
+ rte_intr_handle_vec_list_index_get(intr_handle,
+ i));
}
}
@@ -808,7 +810,8 @@ nfp_rx_queue_intr_enable(struct rte_eth_dev *dev, uint16_t queue_id)
hw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private);
pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- if (pci_dev->intr_handle.type != RTE_INTR_HANDLE_UIO)
+ if (rte_intr_handle_type_get(pci_dev->intr_handle) !=
+ RTE_INTR_HANDLE_UIO)
base = 1;
/* Make sure all updates are written before un-masking */
@@ -828,7 +831,8 @@ nfp_rx_queue_intr_disable(struct rte_eth_dev *dev, uint16_t queue_id)
hw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private);
pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- if (pci_dev->intr_handle.type != RTE_INTR_HANDLE_UIO)
+ if (rte_intr_handle_type_get(pci_dev->intr_handle) !=
+ RTE_INTR_HANDLE_UIO)
base = 1;
/* Make sure all updates are written before un-masking */
@@ -878,7 +882,7 @@ nfp_net_irq_unmask(struct rte_eth_dev *dev)
if (hw->ctrl & NFP_NET_CFG_CTRL_MSIXAUTO) {
/* If MSI-X auto-masking is used, clear the entry */
rte_wmb();
- rte_intr_ack(&pci_dev->intr_handle);
+ rte_intr_ack(pci_dev->intr_handle);
} else {
/* Make sure all updates are written before un-masking */
rte_wmb();
diff --git a/drivers/net/nfp/nfp_ethdev.c b/drivers/net/nfp/nfp_ethdev.c
index 534a38c14f..f9086f806f 100644
--- a/drivers/net/nfp/nfp_ethdev.c
+++ b/drivers/net/nfp/nfp_ethdev.c
@@ -81,7 +81,7 @@ static int
nfp_net_start(struct rte_eth_dev *dev)
{
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
uint32_t new_ctrl, update = 0;
struct nfp_net_hw *hw;
struct nfp_pf_dev *pf_dev;
@@ -108,12 +108,13 @@ nfp_net_start(struct rte_eth_dev *dev)
"with NFP multiport PF");
return -EINVAL;
}
- if (intr_handle->type == RTE_INTR_HANDLE_UIO) {
+ if (rte_intr_handle_type_get(intr_handle) ==
+ RTE_INTR_HANDLE_UIO) {
/*
* Better not to share LSC with RX interrupts.
* Unregistering LSC interrupt handler
*/
- rte_intr_callback_unregister(&pci_dev->intr_handle,
+ rte_intr_callback_unregister(pci_dev->intr_handle,
nfp_net_dev_interrupt_handler, (void *)dev);
if (dev->data->nb_rx_queues > 1) {
@@ -328,10 +329,10 @@ nfp_net_close(struct rte_eth_dev *dev)
nfp_cpp_free(pf_dev->cpp);
rte_free(pf_dev);
- rte_intr_disable(&pci_dev->intr_handle);
+ rte_intr_disable(pci_dev->intr_handle);
/* unregister callback func from eal lib */
- rte_intr_callback_unregister(&pci_dev->intr_handle,
+ rte_intr_callback_unregister(pci_dev->intr_handle,
nfp_net_dev_interrupt_handler,
(void *)dev);
@@ -574,7 +575,7 @@ nfp_net_init(struct rte_eth_dev *eth_dev)
if (rte_eal_process_type() == RTE_PROC_PRIMARY) {
/* Registering LSC interrupt handler */
- rte_intr_callback_register(&pci_dev->intr_handle,
+ rte_intr_callback_register(pci_dev->intr_handle,
nfp_net_dev_interrupt_handler,
(void *)eth_dev);
/* Telling the firmware about the LSC interrupt entry */
diff --git a/drivers/net/nfp/nfp_ethdev_vf.c b/drivers/net/nfp/nfp_ethdev_vf.c
index b697b55865..e167d364fc 100644
--- a/drivers/net/nfp/nfp_ethdev_vf.c
+++ b/drivers/net/nfp/nfp_ethdev_vf.c
@@ -49,7 +49,7 @@ static int
nfp_netvf_start(struct rte_eth_dev *dev)
{
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
uint32_t new_ctrl, update = 0;
struct nfp_net_hw *hw;
struct rte_eth_conf *dev_conf;
@@ -69,12 +69,13 @@ nfp_netvf_start(struct rte_eth_dev *dev)
/* check and configure queue intr-vector mapping */
if (dev->data->dev_conf.intr_conf.rxq != 0) {
- if (intr_handle->type == RTE_INTR_HANDLE_UIO) {
+ if (rte_intr_handle_type_get(intr_handle) ==
+ RTE_INTR_HANDLE_UIO) {
/*
* Better not to share LSC with RX interrupts.
* Unregistering LSC interrupt handler
*/
- rte_intr_callback_unregister(&pci_dev->intr_handle,
+ rte_intr_callback_unregister(pci_dev->intr_handle,
nfp_net_dev_interrupt_handler, (void *)dev);
if (dev->data->nb_rx_queues > 1) {
@@ -223,10 +224,10 @@ nfp_netvf_close(struct rte_eth_dev *dev)
nfp_net_reset_rx_queue(this_rx_q);
}
- rte_intr_disable(&pci_dev->intr_handle);
+ rte_intr_disable(pci_dev->intr_handle);
/* unregister callback func from eal lib */
- rte_intr_callback_unregister(&pci_dev->intr_handle,
+ rte_intr_callback_unregister(pci_dev->intr_handle,
nfp_net_dev_interrupt_handler,
(void *)dev);
@@ -439,7 +440,7 @@ nfp_netvf_init(struct rte_eth_dev *eth_dev)
if (rte_eal_process_type() == RTE_PROC_PRIMARY) {
/* Registering LSC interrupt handler */
- rte_intr_callback_register(&pci_dev->intr_handle,
+ rte_intr_callback_register(pci_dev->intr_handle,
nfp_net_dev_interrupt_handler,
(void *)eth_dev);
/* Telling the firmware about the LSC interrupt entry */
diff --git a/drivers/net/ngbe/ngbe_ethdev.c b/drivers/net/ngbe/ngbe_ethdev.c
index 3b5c6615ad..fe4d675c0f 100644
--- a/drivers/net/ngbe/ngbe_ethdev.c
+++ b/drivers/net/ngbe/ngbe_ethdev.c
@@ -129,7 +129,7 @@ eth_ngbe_dev_init(struct rte_eth_dev *eth_dev, void *init_params __rte_unused)
{
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
struct ngbe_hw *hw = ngbe_dev_hw(eth_dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
const struct rte_memzone *mz;
uint32_t ctrl_ext;
int err;
@@ -334,7 +334,7 @@ ngbe_dev_start(struct rte_eth_dev *dev)
{
struct ngbe_hw *hw = ngbe_dev_hw(dev);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
uint32_t intr_vector = 0;
int err;
bool link_up = false, negotiate = false;
@@ -372,11 +372,10 @@ ngbe_dev_start(struct rte_eth_dev *dev)
return -1;
}
- if (rte_intr_dp_is_en(intr_handle) && intr_handle->intr_vec == NULL) {
- intr_handle->intr_vec =
- rte_zmalloc("intr_vec",
- dev->data->nb_rx_queues * sizeof(int), 0);
- if (intr_handle->intr_vec == NULL) {
+ if (rte_intr_dp_is_en(intr_handle) &&
+ !rte_intr_handle_vec_list_base(intr_handle)) {
+ if (rte_intr_handle_vec_list_alloc(intr_handle, "intr_vec",
+ dev->data->nb_rx_queues)) {
PMD_INIT_LOG(ERR,
"Failed to allocate %d rx_queues intr_vec",
dev->data->nb_rx_queues);
@@ -503,7 +502,7 @@ ngbe_dev_stop(struct rte_eth_dev *dev)
struct rte_eth_link link;
struct ngbe_hw *hw = ngbe_dev_hw(dev);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
if (hw->adapter_stopped)
return 0;
@@ -540,10 +539,8 @@ ngbe_dev_stop(struct rte_eth_dev *dev)
/* Clean datapath event and queue/vec mapping */
rte_intr_efd_disable(intr_handle);
- if (intr_handle->intr_vec != NULL) {
- rte_free(intr_handle->intr_vec);
- intr_handle->intr_vec = NULL;
- }
+ if (rte_intr_handle_vec_list_base(intr_handle))
+ rte_intr_handle_vec_list_free(intr_handle);
hw->adapter_stopped = true;
dev->data->dev_started = 0;
@@ -559,7 +556,7 @@ ngbe_dev_close(struct rte_eth_dev *dev)
{
struct ngbe_hw *hw = ngbe_dev_hw(dev);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
int retries = 0;
int ret;
@@ -1093,7 +1090,7 @@ static void
ngbe_configure_msix(struct rte_eth_dev *dev)
{
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
struct ngbe_hw *hw = ngbe_dev_hw(dev);
uint32_t queue_id, base = NGBE_MISC_VEC_ID;
uint32_t vec = NGBE_MISC_VEC_ID;
@@ -1128,8 +1125,10 @@ ngbe_configure_msix(struct rte_eth_dev *dev)
queue_id++) {
/* by default, 1:1 mapping */
ngbe_set_ivar_map(hw, 0, queue_id, vec);
- intr_handle->intr_vec[queue_id] = vec;
- if (vec < base + intr_handle->nb_efd - 1)
+ rte_intr_handle_vec_list_index_set(intr_handle,
+ queue_id, vec);
+ if (vec < base + rte_intr_handle_nb_efd_get(intr_handle)
+ - 1)
vec++;
}
diff --git a/drivers/net/octeontx2/otx2_ethdev_irq.c b/drivers/net/octeontx2/otx2_ethdev_irq.c
index b121488faf..3cdd19dc68 100644
--- a/drivers/net/octeontx2/otx2_ethdev_irq.c
+++ b/drivers/net/octeontx2/otx2_ethdev_irq.c
@@ -34,7 +34,7 @@ static int
nix_lf_register_err_irq(struct rte_eth_dev *eth_dev)
{
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
- struct rte_intr_handle *handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *handle = pci_dev->intr_handle;
struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
int rc, vec;
@@ -54,7 +54,7 @@ static void
nix_lf_unregister_err_irq(struct rte_eth_dev *eth_dev)
{
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
- struct rte_intr_handle *handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *handle = pci_dev->intr_handle;
struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
int vec;
@@ -90,7 +90,7 @@ static int
nix_lf_register_ras_irq(struct rte_eth_dev *eth_dev)
{
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
- struct rte_intr_handle *handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *handle = pci_dev->intr_handle;
struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
int rc, vec;
@@ -110,7 +110,7 @@ static void
nix_lf_unregister_ras_irq(struct rte_eth_dev *eth_dev)
{
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
- struct rte_intr_handle *handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *handle = pci_dev->intr_handle;
struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
int vec;
@@ -263,7 +263,7 @@ int
oxt2_nix_register_queue_irqs(struct rte_eth_dev *eth_dev)
{
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
- struct rte_intr_handle *handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *handle = pci_dev->intr_handle;
struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
int vec, q, sqs, rqs, qs, rc = 0;
@@ -308,7 +308,7 @@ void
oxt2_nix_unregister_queue_irqs(struct rte_eth_dev *eth_dev)
{
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
- struct rte_intr_handle *handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *handle = pci_dev->intr_handle;
struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
int vec, q;
@@ -332,7 +332,7 @@ int
oxt2_nix_register_cq_irqs(struct rte_eth_dev *eth_dev)
{
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
- struct rte_intr_handle *handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *handle = pci_dev->intr_handle;
struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
uint8_t rc = 0, vec, q;
@@ -362,20 +362,21 @@ oxt2_nix_register_cq_irqs(struct rte_eth_dev *eth_dev)
return rc;
}
- if (!handle->intr_vec) {
- handle->intr_vec = rte_zmalloc("intr_vec",
- dev->configured_cints *
- sizeof(int), 0);
- if (!handle->intr_vec) {
- otx2_err("Failed to allocate %d rx intr_vec",
- dev->configured_cints);
- return -ENOMEM;
+ if (!rte_intr_handle_vec_list_base(handle)) {
+ rc = rte_intr_handle_vec_list_alloc(handle, "intr_vec",
+ dev->configured_cints);
+ if (rc) {
+ otx2_err("Failed to allocate intr vec list, "
+ "rc=%d", rc);
+ return rc;
}
}
/* VFIO vector zero is resereved for misc interrupt so
* doing required adjustment. (b13bfab4cd)
*/
- handle->intr_vec[q] = RTE_INTR_VEC_RXTX_OFFSET + vec;
+ if (rte_intr_handle_vec_list_index_set(handle, q,
+ RTE_INTR_VEC_RXTX_OFFSET + vec))
+ return -1;
/* Configure CQE interrupt coalescing parameters */
otx2_write64(((CQ_CQE_THRESH_DEFAULT) |
@@ -395,7 +396,7 @@ void
oxt2_nix_unregister_cq_irqs(struct rte_eth_dev *eth_dev)
{
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
- struct rte_intr_handle *handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *handle = pci_dev->intr_handle;
struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
int vec, q;
diff --git a/drivers/net/qede/qede_ethdev.c b/drivers/net/qede/qede_ethdev.c
index 323d46e6eb..b04e446030 100644
--- a/drivers/net/qede/qede_ethdev.c
+++ b/drivers/net/qede/qede_ethdev.c
@@ -1576,17 +1576,17 @@ static int qede_dev_close(struct rte_eth_dev *eth_dev)
qdev->ops->common->slowpath_stop(edev);
qdev->ops->common->remove(edev);
- rte_intr_disable(&pci_dev->intr_handle);
+ rte_intr_disable(pci_dev->intr_handle);
- switch (pci_dev->intr_handle.type) {
+ switch (rte_intr_handle_type_get(pci_dev->intr_handle)) {
case RTE_INTR_HANDLE_UIO_INTX:
case RTE_INTR_HANDLE_VFIO_LEGACY:
- rte_intr_callback_unregister(&pci_dev->intr_handle,
+ rte_intr_callback_unregister(pci_dev->intr_handle,
qede_interrupt_handler_intx,
(void *)eth_dev);
break;
default:
- rte_intr_callback_unregister(&pci_dev->intr_handle,
+ rte_intr_callback_unregister(pci_dev->intr_handle,
qede_interrupt_handler,
(void *)eth_dev);
}
@@ -2569,22 +2569,22 @@ static int qede_common_dev_init(struct rte_eth_dev *eth_dev, bool is_vf)
}
qede_update_pf_params(edev);
- switch (pci_dev->intr_handle.type) {
+ switch (rte_intr_handle_type_get(pci_dev->intr_handle)) {
case RTE_INTR_HANDLE_UIO_INTX:
case RTE_INTR_HANDLE_VFIO_LEGACY:
int_mode = ECORE_INT_MODE_INTA;
- rte_intr_callback_register(&pci_dev->intr_handle,
+ rte_intr_callback_register(pci_dev->intr_handle,
qede_interrupt_handler_intx,
(void *)eth_dev);
break;
default:
int_mode = ECORE_INT_MODE_MSIX;
- rte_intr_callback_register(&pci_dev->intr_handle,
+ rte_intr_callback_register(pci_dev->intr_handle,
qede_interrupt_handler,
(void *)eth_dev);
}
- if (rte_intr_enable(&pci_dev->intr_handle)) {
+ if (rte_intr_enable(pci_dev->intr_handle)) {
DP_ERR(edev, "rte_intr_enable() failed\n");
rc = -ENODEV;
goto err;
diff --git a/drivers/net/sfc/sfc_intr.c b/drivers/net/sfc/sfc_intr.c
index c2298ed23c..7cf17d3e38 100644
--- a/drivers/net/sfc/sfc_intr.c
+++ b/drivers/net/sfc/sfc_intr.c
@@ -79,7 +79,7 @@ sfc_intr_line_handler(void *cb_arg)
if (qmask & (1 << sa->mgmt_evq_index))
sfc_intr_handle_mgmt_evq(sa);
- if (rte_intr_ack(&pci_dev->intr_handle) != 0)
+ if (rte_intr_ack(pci_dev->intr_handle) != 0)
sfc_err(sa, "cannot reenable interrupts");
sfc_log_init(sa, "done");
@@ -123,7 +123,7 @@ sfc_intr_message_handler(void *cb_arg)
sfc_intr_handle_mgmt_evq(sa);
- if (rte_intr_ack(&pci_dev->intr_handle) != 0)
+ if (rte_intr_ack(pci_dev->intr_handle) != 0)
sfc_err(sa, "cannot reenable interrupts");
sfc_log_init(sa, "done");
@@ -159,7 +159,7 @@ sfc_intr_start(struct sfc_adapter *sa)
goto fail_intr_init;
pci_dev = RTE_ETH_DEV_TO_PCI(sa->eth_dev);
- intr_handle = &pci_dev->intr_handle;
+ intr_handle = pci_dev->intr_handle;
if (intr->handler != NULL) {
if (intr->rxq_intr && rte_intr_cap_multiple(intr_handle)) {
@@ -171,16 +171,15 @@ sfc_intr_start(struct sfc_adapter *sa)
goto fail_rte_intr_efd_enable;
}
if (rte_intr_dp_is_en(intr_handle)) {
- intr_handle->intr_vec =
- rte_calloc("intr_vec",
- sa->eth_dev->data->nb_rx_queues, sizeof(int),
- 0);
- if (intr_handle->intr_vec == NULL) {
+ if (rte_intr_handle_vec_list_alloc(intr_handle,
+ "intr_vec",
+ sa->eth_dev->data->nb_rx_queues)) {
sfc_err(sa,
"Failed to allocate %d rx_queues intr_vec",
sa->eth_dev->data->nb_rx_queues);
goto fail_intr_vector_alloc;
}
+
}
sfc_log_init(sa, "rte_intr_callback_register");
@@ -215,15 +214,17 @@ sfc_intr_start(struct sfc_adapter *sa)
}
sfc_log_init(sa, "done type=%u max_intr=%d nb_efd=%u vec=%p",
- intr_handle->type, intr_handle->max_intr,
- intr_handle->nb_efd, intr_handle->intr_vec);
+ rte_intr_handle_type_get(intr_handle),
+ rte_intr_handle_max_intr_get(intr_handle),
+ rte_intr_handle_nb_efd_get(intr_handle),
+ rte_intr_handle_vec_list_base(intr_handle));
return 0;
fail_rte_intr_enable:
rte_intr_callback_unregister(intr_handle, intr->handler, (void *)sa);
fail_rte_intr_cb_reg:
- rte_free(intr_handle->intr_vec);
+ rte_intr_handle_vec_list_free(intr_handle);
fail_intr_vector_alloc:
rte_intr_efd_disable(intr_handle);
@@ -250,9 +251,9 @@ sfc_intr_stop(struct sfc_adapter *sa)
efx_intr_disable(sa->nic);
- intr_handle = &pci_dev->intr_handle;
+ intr_handle = pci_dev->intr_handle;
- rte_free(intr_handle->intr_vec);
+ rte_intr_handle_vec_list_free(intr_handle);
rte_intr_efd_disable(intr_handle);
if (rte_intr_disable(intr_handle) != 0)
@@ -322,7 +323,7 @@ sfc_intr_attach(struct sfc_adapter *sa)
sfc_log_init(sa, "entry");
- switch (pci_dev->intr_handle.type) {
+ switch (rte_intr_handle_type_get(pci_dev->intr_handle)) {
#ifdef RTE_EXEC_ENV_LINUX
case RTE_INTR_HANDLE_UIO_INTX:
case RTE_INTR_HANDLE_VFIO_LEGACY:
diff --git a/drivers/net/tap/rte_eth_tap.c b/drivers/net/tap/rte_eth_tap.c
index c515de3bf7..d6c92f8d30 100644
--- a/drivers/net/tap/rte_eth_tap.c
+++ b/drivers/net/tap/rte_eth_tap.c
@@ -1668,7 +1668,8 @@ tap_dev_intr_handler(void *cb_arg)
struct rte_eth_dev *dev = cb_arg;
struct pmd_internals *pmd = dev->data->dev_private;
- tap_nl_recv(pmd->intr_handle.fd, tap_nl_msg_handler, dev);
+ tap_nl_recv(rte_intr_handle_fd_get(pmd->intr_handle),
+ tap_nl_msg_handler, dev);
}
static int
@@ -1679,22 +1680,23 @@ tap_lsc_intr_handle_set(struct rte_eth_dev *dev, int set)
/* In any case, disable interrupt if the conf is no longer there. */
if (!dev->data->dev_conf.intr_conf.lsc) {
- if (pmd->intr_handle.fd != -1) {
+ if (rte_intr_handle_fd_get(pmd->intr_handle) != -1)
goto clean;
- }
+
return 0;
}
if (set) {
- pmd->intr_handle.fd = tap_nl_init(RTMGRP_LINK);
- if (unlikely(pmd->intr_handle.fd == -1))
+ rte_intr_handle_fd_set(pmd->intr_handle,
+ tap_nl_init(RTMGRP_LINK));
+ if (unlikely(rte_intr_handle_fd_get(pmd->intr_handle) == -1))
return -EBADF;
return rte_intr_callback_register(
- &pmd->intr_handle, tap_dev_intr_handler, dev);
+ pmd->intr_handle, tap_dev_intr_handler, dev);
}
clean:
do {
- ret = rte_intr_callback_unregister(&pmd->intr_handle,
+ ret = rte_intr_callback_unregister(pmd->intr_handle,
tap_dev_intr_handler, dev);
if (ret >= 0) {
break;
@@ -1707,8 +1709,8 @@ tap_lsc_intr_handle_set(struct rte_eth_dev *dev, int set)
}
} while (true);
- tap_nl_final(pmd->intr_handle.fd);
- pmd->intr_handle.fd = -1;
+ tap_nl_final(rte_intr_handle_fd_get(pmd->intr_handle));
+ rte_intr_handle_fd_set(pmd->intr_handle, -1);
return 0;
}
@@ -1923,6 +1925,15 @@ eth_dev_tap_create(struct rte_vdev_device *vdev, const char *tap_name,
goto error_exit;
}
+ /* Allocate interrupt instance */
+ pmd->intr_handle =
+ rte_intr_handle_instance_alloc(RTE_INTR_HANDLE_DEFAULT_SIZE,
+ true);
+ if (!pmd->intr_handle) {
+ TAP_LOG(ERR, "Failed to allocate intr handle");
+ goto error_exit;
+ }
+
/* Setup some default values */
data = dev->data;
data->dev_private = pmd;
@@ -1940,9 +1951,9 @@ eth_dev_tap_create(struct rte_vdev_device *vdev, const char *tap_name,
dev->rx_pkt_burst = pmd_rx_burst;
dev->tx_pkt_burst = pmd_tx_burst;
- pmd->intr_handle.type = RTE_INTR_HANDLE_EXT;
- pmd->intr_handle.fd = -1;
- dev->intr_handle = &pmd->intr_handle;
+ rte_intr_handle_type_set(pmd->intr_handle, RTE_INTR_HANDLE_EXT);
+ rte_intr_handle_fd_set(pmd->intr_handle, -1);
+ dev->intr_handle = pmd->intr_handle;
/* Presetup the fds to -1 as being not valid */
for (i = 0; i < RTE_PMD_TAP_MAX_QUEUES; i++) {
@@ -2093,6 +2104,8 @@ eth_dev_tap_create(struct rte_vdev_device *vdev, const char *tap_name,
/* mac_addrs must not be freed alone because part of dev_private */
dev->data->mac_addrs = NULL;
rte_eth_dev_release_port(dev);
+ if (pmd->intr_handle)
+ rte_intr_handle_instance_free(pmd->intr_handle);
error_exit_nodev:
TAP_LOG(ERR, "%s Unable to initialize %s",
diff --git a/drivers/net/tap/rte_eth_tap.h b/drivers/net/tap/rte_eth_tap.h
index a98ea11a33..996021e424 100644
--- a/drivers/net/tap/rte_eth_tap.h
+++ b/drivers/net/tap/rte_eth_tap.h
@@ -89,7 +89,7 @@ struct pmd_internals {
LIST_HEAD(tap_implicit_flows, rte_flow) implicit_flows;
struct rx_queue rxq[RTE_PMD_TAP_MAX_QUEUES]; /* List of RX queues */
struct tx_queue txq[RTE_PMD_TAP_MAX_QUEUES]; /* List of TX queues */
- struct rte_intr_handle intr_handle; /* LSC interrupt handle. */
+ struct rte_intr_handle *intr_handle; /* LSC interrupt handle. */
int ka_fd; /* keep-alive file descriptor */
struct rte_mempool *gso_ctx_mp; /* Mempool for GSO packets */
};
diff --git a/drivers/net/tap/tap_intr.c b/drivers/net/tap/tap_intr.c
index 1cacc15d9f..b1a339f8bd 100644
--- a/drivers/net/tap/tap_intr.c
+++ b/drivers/net/tap/tap_intr.c
@@ -29,12 +29,14 @@ static void
tap_rx_intr_vec_uninstall(struct rte_eth_dev *dev)
{
struct pmd_internals *pmd = dev->data->dev_private;
- struct rte_intr_handle *intr_handle = &pmd->intr_handle;
+ struct rte_intr_handle *intr_handle = pmd->intr_handle;
rte_intr_free_epoll_fd(intr_handle);
- free(intr_handle->intr_vec);
- intr_handle->intr_vec = NULL;
- intr_handle->nb_efd = 0;
+ if (rte_intr_handle_vec_list_base(intr_handle))
+ rte_intr_handle_vec_list_free(intr_handle);
+ rte_intr_handle_nb_efd_set(intr_handle, 0);
+
+ rte_intr_handle_instance_free(intr_handle);
}
/**
@@ -52,15 +54,15 @@ tap_rx_intr_vec_install(struct rte_eth_dev *dev)
struct pmd_internals *pmd = dev->data->dev_private;
struct pmd_process_private *process_private = dev->process_private;
unsigned int rxqs_n = pmd->dev->data->nb_rx_queues;
- struct rte_intr_handle *intr_handle = &pmd->intr_handle;
+ struct rte_intr_handle *intr_handle = pmd->intr_handle;
unsigned int n = RTE_MIN(rxqs_n, (uint32_t)RTE_MAX_RXTX_INTR_VEC_ID);
unsigned int i;
unsigned int count = 0;
if (!dev->data->dev_conf.intr_conf.rxq)
return 0;
- intr_handle->intr_vec = malloc(sizeof(int) * rxqs_n);
- if (intr_handle->intr_vec == NULL) {
+
+ if (rte_intr_handle_vec_list_alloc(intr_handle, NULL, rxqs_n)) {
rte_errno = ENOMEM;
TAP_LOG(ERR,
"failed to allocate memory for interrupt vector,"
@@ -73,19 +75,24 @@ tap_rx_intr_vec_install(struct rte_eth_dev *dev)
/* Skip queues that cannot request interrupts. */
if (!rxq || process_private->rxq_fds[i] == -1) {
/* Use invalid intr_vec[] index to disable entry. */
- intr_handle->intr_vec[i] =
- RTE_INTR_VEC_RXTX_OFFSET +
- RTE_MAX_RXTX_INTR_VEC_ID;
+ if (rte_intr_handle_vec_list_index_set(intr_handle, i,
+ RTE_INTR_VEC_RXTX_OFFSET + RTE_MAX_RXTX_INTR_VEC_ID))
+ return -rte_errno;
continue;
}
- intr_handle->intr_vec[i] = RTE_INTR_VEC_RXTX_OFFSET + count;
- intr_handle->efds[count] = process_private->rxq_fds[i];
+ if (rte_intr_handle_vec_list_index_set(intr_handle, i,
+ RTE_INTR_VEC_RXTX_OFFSET + count))
+ return -rte_errno;
+ if (rte_intr_handle_efds_index_set(intr_handle, count,
+ process_private->rxq_fds[i]))
+ return -rte_errno;
count++;
}
if (!count)
tap_rx_intr_vec_uninstall(dev);
else
- intr_handle->nb_efd = count;
+ if (rte_intr_handle_nb_efd_set(intr_handle, count))
+ return -rte_errno;
return 0;
}
diff --git a/drivers/net/thunderx/nicvf_ethdev.c b/drivers/net/thunderx/nicvf_ethdev.c
index fc1844ddfc..8dacae980c 100644
--- a/drivers/net/thunderx/nicvf_ethdev.c
+++ b/drivers/net/thunderx/nicvf_ethdev.c
@@ -1876,6 +1876,9 @@ nicvf_dev_close(struct rte_eth_dev *dev)
nicvf_periodic_alarm_stop(nicvf_vf_interrupt, nic->snicvf[i]);
}
+ if (nic->intr_handle)
+ rte_intr_handle_instance_free(nic->intr_handle);
+
return 0;
}
@@ -2175,6 +2178,16 @@ nicvf_eth_dev_init(struct rte_eth_dev *eth_dev)
goto fail;
}
+ /* Allocate interrupt instance */
+ nic->intr_handle =
+ rte_intr_handle_instance_alloc(RTE_INTR_HANDLE_DEFAULT_SIZE,
+ true);
+ if (!nic->intr_handle) {
+ PMD_INIT_LOG(ERR, "Failed to allocate intr handle");
+ ret = -ENODEV;
+ goto fail;
+ }
+
nicvf_disable_all_interrupts(nic);
ret = nicvf_periodic_alarm_start(nicvf_interrupt, eth_dev);
diff --git a/drivers/net/thunderx/nicvf_struct.h b/drivers/net/thunderx/nicvf_struct.h
index 0ca207d0dd..c7ea13313e 100644
--- a/drivers/net/thunderx/nicvf_struct.h
+++ b/drivers/net/thunderx/nicvf_struct.h
@@ -100,7 +100,7 @@ struct nicvf {
uint16_t subsystem_vendor_id;
struct nicvf_rbdr *rbdr;
struct nicvf_rss_reta_info rss_info;
- struct rte_intr_handle intr_handle;
+ struct rte_intr_handle *intr_handle;
uint8_t cpi_alg;
uint16_t mtu;
int skip_bytes;
diff --git a/drivers/net/txgbe/txgbe_ethdev.c b/drivers/net/txgbe/txgbe_ethdev.c
index 0063994688..7095e7a4d2 100644
--- a/drivers/net/txgbe/txgbe_ethdev.c
+++ b/drivers/net/txgbe/txgbe_ethdev.c
@@ -547,7 +547,7 @@ eth_txgbe_dev_init(struct rte_eth_dev *eth_dev, void *init_params __rte_unused)
struct txgbe_dcb_config *dcb_config = TXGBE_DEV_DCB_CONFIG(eth_dev);
struct txgbe_filter_info *filter_info = TXGBE_DEV_FILTER(eth_dev);
struct txgbe_bw_conf *bw_conf = TXGBE_DEV_BW_CONF(eth_dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
const struct rte_memzone *mz;
uint32_t ctrl_ext;
uint16_t csum;
@@ -1619,7 +1619,7 @@ txgbe_dev_start(struct rte_eth_dev *dev)
struct txgbe_hw_stats *hw_stats = TXGBE_DEV_STATS(dev);
struct txgbe_vf_info *vfinfo = *TXGBE_DEV_VFDATA(dev);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
uint32_t intr_vector = 0;
int err;
bool link_up = false, negotiate = 0;
@@ -1680,17 +1680,15 @@ txgbe_dev_start(struct rte_eth_dev *dev)
return -1;
}
- if (rte_intr_dp_is_en(intr_handle) && !intr_handle->intr_vec) {
- intr_handle->intr_vec =
- rte_zmalloc("intr_vec",
- dev->data->nb_rx_queues * sizeof(int), 0);
- if (intr_handle->intr_vec == NULL) {
+ if (rte_intr_dp_is_en(intr_handle) &&
+ !rte_intr_handle_vec_list_base(intr_handle)) {
+ if (rte_intr_handle_vec_list_alloc(intr_handle, "intr_vec",
+ dev->data->nb_rx_queues)) {
PMD_INIT_LOG(ERR, "Failed to allocate %d rx_queues"
" intr_vec", dev->data->nb_rx_queues);
return -ENOMEM;
}
}
-
/* confiugre msix for sleep until rx interrupt */
txgbe_configure_msix(dev);
@@ -1871,7 +1869,7 @@ txgbe_dev_stop(struct rte_eth_dev *dev)
struct txgbe_hw *hw = TXGBE_DEV_HW(dev);
struct txgbe_vf_info *vfinfo = *TXGBE_DEV_VFDATA(dev);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
int vf;
struct txgbe_tm_conf *tm_conf = TXGBE_DEV_TM_CONF(dev);
@@ -1921,10 +1919,8 @@ txgbe_dev_stop(struct rte_eth_dev *dev)
/* Clean datapath event and queue/vec mapping */
rte_intr_efd_disable(intr_handle);
- if (intr_handle->intr_vec != NULL) {
- rte_free(intr_handle->intr_vec);
- intr_handle->intr_vec = NULL;
- }
+ if (rte_intr_handle_vec_list_base(intr_handle))
+ rte_intr_handle_vec_list_free(intr_handle);
/* reset hierarchy commit */
tm_conf->committed = false;
@@ -1987,7 +1983,7 @@ txgbe_dev_close(struct rte_eth_dev *dev)
{
struct txgbe_hw *hw = TXGBE_DEV_HW(dev);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
int retries = 0;
int ret;
@@ -3107,7 +3103,7 @@ txgbe_dev_interrupt_delayed_handler(void *param)
{
struct rte_eth_dev *dev = (struct rte_eth_dev *)param;
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
struct txgbe_interrupt *intr = TXGBE_DEV_INTR(dev);
struct txgbe_hw *hw = TXGBE_DEV_HW(dev);
uint32_t eicr;
@@ -3640,7 +3636,7 @@ static int
txgbe_dev_rx_queue_intr_enable(struct rte_eth_dev *dev, uint16_t queue_id)
{
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
uint32_t mask;
struct txgbe_hw *hw = TXGBE_DEV_HW(dev);
@@ -3722,7 +3718,7 @@ static void
txgbe_configure_msix(struct rte_eth_dev *dev)
{
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
struct txgbe_hw *hw = TXGBE_DEV_HW(dev);
uint32_t queue_id, base = TXGBE_MISC_VEC_ID;
uint32_t vec = TXGBE_MISC_VEC_ID;
@@ -3756,8 +3752,10 @@ txgbe_configure_msix(struct rte_eth_dev *dev)
queue_id++) {
/* by default, 1:1 mapping */
txgbe_set_ivar_map(hw, 0, queue_id, vec);
- intr_handle->intr_vec[queue_id] = vec;
- if (vec < base + intr_handle->nb_efd - 1)
+ rte_intr_handle_vec_list_index_set(intr_handle,
+ queue_id, vec);
+ if (vec < base + rte_intr_handle_nb_efd_get(intr_handle)
+ - 1)
vec++;
}
diff --git a/drivers/net/txgbe/txgbe_ethdev_vf.c b/drivers/net/txgbe/txgbe_ethdev_vf.c
index 18ed94bd27..24222daafd 100644
--- a/drivers/net/txgbe/txgbe_ethdev_vf.c
+++ b/drivers/net/txgbe/txgbe_ethdev_vf.c
@@ -166,7 +166,7 @@ eth_txgbevf_dev_init(struct rte_eth_dev *eth_dev)
int err;
uint32_t tc, tcs;
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
struct txgbe_hw *hw = TXGBE_DEV_HW(eth_dev);
struct txgbe_vfta *shadow_vfta = TXGBE_DEV_VFTA(eth_dev);
struct txgbe_hwstrip *hwstrip = TXGBE_DEV_HWSTRIP(eth_dev);
@@ -613,7 +613,7 @@ txgbevf_dev_start(struct rte_eth_dev *dev)
struct txgbe_hw *hw = TXGBE_DEV_HW(dev);
uint32_t intr_vector = 0;
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
int err, mask = 0;
@@ -674,11 +674,10 @@ txgbevf_dev_start(struct rte_eth_dev *dev)
return -1;
}
- if (rte_intr_dp_is_en(intr_handle) && !intr_handle->intr_vec) {
- intr_handle->intr_vec =
- rte_zmalloc("intr_vec",
- dev->data->nb_rx_queues * sizeof(int), 0);
- if (intr_handle->intr_vec == NULL) {
+ if (rte_intr_dp_is_en(intr_handle) &&
+ !rte_intr_handle_vec_list_base(intr_handle)) {
+ if (rte_intr_handle_vec_list_alloc(intr_handle, "intr_vec",
+ dev->data->nb_rx_queues)) {
PMD_INIT_LOG(ERR, "Failed to allocate %d rx_queues"
" intr_vec", dev->data->nb_rx_queues);
return -ENOMEM;
@@ -717,7 +716,7 @@ txgbevf_dev_stop(struct rte_eth_dev *dev)
struct txgbe_hw *hw = TXGBE_DEV_HW(dev);
struct txgbe_adapter *adapter = TXGBE_DEV_ADAPTER(dev);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
if (hw->adapter_stopped)
return 0;
@@ -744,10 +743,8 @@ txgbevf_dev_stop(struct rte_eth_dev *dev)
/* Clean datapath event and queue/vec mapping */
rte_intr_efd_disable(intr_handle);
- if (intr_handle->intr_vec != NULL) {
- rte_free(intr_handle->intr_vec);
- intr_handle->intr_vec = NULL;
- }
+ if (rte_intr_handle_vec_list_base(intr_handle))
+ rte_intr_handle_vec_list_free(intr_handle);
adapter->rss_reta_updated = 0;
hw->dev_start = false;
@@ -760,7 +757,7 @@ txgbevf_dev_close(struct rte_eth_dev *dev)
{
struct txgbe_hw *hw = TXGBE_DEV_HW(dev);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
int ret;
PMD_INIT_FUNC_TRACE();
@@ -921,7 +918,7 @@ static int
txgbevf_dev_rx_queue_intr_enable(struct rte_eth_dev *dev, uint16_t queue_id)
{
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
struct txgbe_interrupt *intr = TXGBE_DEV_INTR(dev);
struct txgbe_hw *hw = TXGBE_DEV_HW(dev);
uint32_t vec = TXGBE_MISC_VEC_ID;
@@ -943,7 +940,7 @@ txgbevf_dev_rx_queue_intr_disable(struct rte_eth_dev *dev, uint16_t queue_id)
struct txgbe_interrupt *intr = TXGBE_DEV_INTR(dev);
struct txgbe_hw *hw = TXGBE_DEV_HW(dev);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
uint32_t vec = TXGBE_MISC_VEC_ID;
if (rte_intr_allow_others(intr_handle))
@@ -983,7 +980,7 @@ static void
txgbevf_configure_msix(struct rte_eth_dev *dev)
{
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
struct txgbe_hw *hw = TXGBE_DEV_HW(dev);
uint32_t q_idx;
uint32_t vector_idx = TXGBE_MISC_VEC_ID;
@@ -1009,8 +1006,10 @@ txgbevf_configure_msix(struct rte_eth_dev *dev)
* as TXGBE_VF_MAXMSIVECOTR = 1
*/
txgbevf_set_ivar_map(hw, 0, q_idx, vector_idx);
- intr_handle->intr_vec[q_idx] = vector_idx;
- if (vector_idx < base + intr_handle->nb_efd - 1)
+ rte_intr_handle_vec_list_index_set(intr_handle, q_idx,
+ vector_idx);
+ if (vector_idx < base + rte_intr_handle_nb_efd_get(intr_handle)
+ - 1)
vector_idx++;
}
diff --git a/drivers/net/vhost/rte_eth_vhost.c b/drivers/net/vhost/rte_eth_vhost.c
index a202931e9a..a595352e63 100644
--- a/drivers/net/vhost/rte_eth_vhost.c
+++ b/drivers/net/vhost/rte_eth_vhost.c
@@ -529,40 +529,43 @@ static int
eth_vhost_update_intr(struct rte_eth_dev *eth_dev, uint16_t rxq_idx)
{
struct rte_intr_handle *handle = eth_dev->intr_handle;
- struct rte_epoll_event rev;
+ struct rte_epoll_event rev, *elist;
int epfd, ret;
if (!handle)
return 0;
- if (handle->efds[rxq_idx] == handle->elist[rxq_idx].fd)
+ elist = rte_intr_handle_elist_index_get(handle, rxq_idx);
+ if (rte_intr_handle_efds_index_get(handle, rxq_idx) == elist->fd)
return 0;
VHOST_LOG(INFO, "kickfd for rxq-%d was changed, updating handler.\n",
rxq_idx);
- if (handle->elist[rxq_idx].fd != -1)
+ if (elist->fd != -1)
VHOST_LOG(ERR, "Unexpected previous kickfd value (Got %d, expected -1).\n",
- handle->elist[rxq_idx].fd);
+ elist->fd);
/*
* First remove invalid epoll event, and then install
* the new one. May be solved with a proper API in the
* future.
*/
- epfd = handle->elist[rxq_idx].epfd;
- rev = handle->elist[rxq_idx];
+ epfd = elist->epfd;
+ rev = *elist;
ret = rte_epoll_ctl(epfd, EPOLL_CTL_DEL, rev.fd,
- &handle->elist[rxq_idx]);
+ elist);
if (ret) {
VHOST_LOG(ERR, "Delete epoll event failed.\n");
return ret;
}
- rev.fd = handle->efds[rxq_idx];
- handle->elist[rxq_idx] = rev;
- ret = rte_epoll_ctl(epfd, EPOLL_CTL_ADD, rev.fd,
- &handle->elist[rxq_idx]);
+ rev.fd = rte_intr_handle_efds_index_get(handle, rxq_idx);
+ if (rte_intr_handle_elist_index_set(handle, rxq_idx, rev))
+ return -rte_errno;
+
+ elist = rte_intr_handle_elist_index_get(handle, rxq_idx);
+ ret = rte_epoll_ctl(epfd, EPOLL_CTL_ADD, rev.fd, elist);
if (ret) {
VHOST_LOG(ERR, "Add epoll event failed.\n");
return ret;
@@ -641,9 +644,10 @@ eth_vhost_uninstall_intr(struct rte_eth_dev *dev)
struct rte_intr_handle *intr_handle = dev->intr_handle;
if (intr_handle) {
- if (intr_handle->intr_vec)
- free(intr_handle->intr_vec);
- free(intr_handle);
+ if (rte_intr_handle_vec_list_base(intr_handle))
+ rte_intr_handle_vec_list_free(intr_handle);
+
+ rte_intr_handle_instance_free(intr_handle);
}
dev->intr_handle = NULL;
@@ -662,29 +666,32 @@ eth_vhost_install_intr(struct rte_eth_dev *dev)
if (dev->intr_handle)
eth_vhost_uninstall_intr(dev);
- dev->intr_handle = malloc(sizeof(*dev->intr_handle));
+ dev->intr_handle =
+ rte_intr_handle_instance_alloc(RTE_INTR_HANDLE_DEFAULT_SIZE,
+ false);
if (!dev->intr_handle) {
VHOST_LOG(ERR, "Fail to allocate intr_handle\n");
return -ENOMEM;
}
- memset(dev->intr_handle, 0, sizeof(*dev->intr_handle));
-
- dev->intr_handle->efd_counter_size = sizeof(uint64_t);
+ if (rte_intr_handle_efd_counter_size_set(dev->intr_handle,
+ sizeof(uint64_t)))
+ return -rte_errno;
- dev->intr_handle->intr_vec =
- malloc(nb_rxq * sizeof(dev->intr_handle->intr_vec[0]));
-
- if (!dev->intr_handle->intr_vec) {
+ if (rte_intr_handle_vec_list_alloc(dev->intr_handle, NULL, nb_rxq)) {
VHOST_LOG(ERR,
"Failed to allocate memory for interrupt vector\n");
- free(dev->intr_handle);
+ rte_intr_handle_instance_free(dev->intr_handle);
return -ENOMEM;
}
+
VHOST_LOG(INFO, "Prepare intr vec\n");
for (i = 0; i < nb_rxq; i++) {
- dev->intr_handle->intr_vec[i] = RTE_INTR_VEC_RXTX_OFFSET + i;
- dev->intr_handle->efds[i] = -1;
+ if (rte_intr_handle_vec_list_index_set(dev->intr_handle, i,
+ RTE_INTR_VEC_RXTX_OFFSET + i))
+ return -rte_errno;
+ if (rte_intr_handle_efds_index_set(dev->intr_handle, i, -1))
+ return -rte_errno;
vq = dev->data->rx_queues[i];
if (!vq) {
VHOST_LOG(INFO, "rxq-%d not setup yet, skip!\n", i);
@@ -703,13 +710,21 @@ eth_vhost_install_intr(struct rte_eth_dev *dev)
"rxq-%d's kickfd is invalid, skip!\n", i);
continue;
}
- dev->intr_handle->efds[i] = vring.kickfd;
+
+ if (rte_intr_handle_efds_index_set(dev->intr_handle, i,
+ vring.kickfd))
+ continue;
VHOST_LOG(INFO, "Installed intr vec for rxq-%d\n", i);
}
- dev->intr_handle->nb_efd = nb_rxq;
- dev->intr_handle->max_intr = nb_rxq + 1;
- dev->intr_handle->type = RTE_INTR_HANDLE_VDEV;
+ if (rte_intr_handle_nb_efd_set(dev->intr_handle, nb_rxq))
+ return -rte_errno;
+
+ if (rte_intr_handle_max_intr_set(dev->intr_handle, nb_rxq + 1))
+ return -rte_errno;
+
+ if (rte_intr_handle_type_set(dev->intr_handle, RTE_INTR_HANDLE_VDEV))
+ return -rte_errno;
return 0;
}
@@ -914,7 +929,10 @@ vring_conf_update(int vid, struct rte_eth_dev *eth_dev, uint16_t vring_id)
vring_id);
return ret;
}
- eth_dev->intr_handle->efds[rx_idx] = vring.kickfd;
+
+ if (rte_intr_handle_efds_index_set(eth_dev->intr_handle, rx_idx,
+ vring.kickfd))
+ return -rte_errno;
vq = eth_dev->data->rx_queues[rx_idx];
if (!vq) {
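
The vhost hunks above replace direct handle->elist[]/handle->efds[] access with the elist/efds accessors when a queue's kickfd changes. Roughly, and with a hypothetical function name, the remove-then-re-add sequence becomes the sketch below (rte_epoll.h and sys/epoll.h assumed).

static int
my_update_rxq_event(struct rte_intr_handle *handle, uint16_t rxq_idx)
{
	struct rte_epoll_event rev, *elist;
	int ret;

	elist = rte_intr_handle_elist_index_get(handle, rxq_idx);
	if (rte_intr_handle_efds_index_get(handle, rxq_idx) == elist->fd)
		return 0; /* kickfd unchanged, nothing to do */

	/* Drop the stale epoll event first... */
	rev = *elist;
	ret = rte_epoll_ctl(rev.epfd, EPOLL_CTL_DEL, rev.fd, elist);
	if (ret)
		return ret;

	/* ...then store the new kickfd through the setter and re-add it. */
	rev.fd = rte_intr_handle_efds_index_get(handle, rxq_idx);
	if (rte_intr_handle_elist_index_set(handle, rxq_idx, rev))
		return -rte_errno;
	elist = rte_intr_handle_elist_index_get(handle, rxq_idx);
	return rte_epoll_ctl(rev.epfd, EPOLL_CTL_ADD, rev.fd, elist);
}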
diff --git a/drivers/net/virtio/virtio_ethdev.c b/drivers/net/virtio/virtio_ethdev.c
index e58085a2c9..4de1c929a9 100644
--- a/drivers/net/virtio/virtio_ethdev.c
+++ b/drivers/net/virtio/virtio_ethdev.c
@@ -722,8 +722,8 @@ virtio_dev_close(struct rte_eth_dev *dev)
if (intr_conf->lsc || intr_conf->rxq) {
virtio_intr_disable(dev);
rte_intr_efd_disable(dev->intr_handle);
- rte_free(dev->intr_handle->intr_vec);
- dev->intr_handle->intr_vec = NULL;
+ if (rte_intr_handle_vec_list_base(dev->intr_handle))
+ rte_intr_handle_vec_list_free(dev->intr_handle);
}
virtio_reset(hw);
@@ -1634,7 +1634,9 @@ virtio_queues_bind_intr(struct rte_eth_dev *dev)
PMD_INIT_LOG(INFO, "queue/interrupt binding");
for (i = 0; i < dev->data->nb_rx_queues; ++i) {
- dev->intr_handle->intr_vec[i] = i + 1;
+ if (rte_intr_handle_vec_list_index_set(dev->intr_handle, i,
+ i + 1))
+ return -rte_errno;
if (VIRTIO_OPS(hw)->set_queue_irq(hw, hw->vqs[i * 2], i + 1) ==
VIRTIO_MSI_NO_VECTOR) {
PMD_DRV_LOG(ERR, "failed to set queue vector");
@@ -1673,11 +1675,10 @@ virtio_configure_intr(struct rte_eth_dev *dev)
return -1;
}
- if (!dev->intr_handle->intr_vec) {
- dev->intr_handle->intr_vec =
- rte_zmalloc("intr_vec",
- hw->max_queue_pairs * sizeof(int), 0);
- if (!dev->intr_handle->intr_vec) {
+ if (!rte_intr_handle_vec_list_base(dev->intr_handle)) {
+ if (rte_intr_handle_vec_list_alloc(dev->intr_handle,
+ "intr_vec",
+ hw->max_queue_pairs)) {
PMD_INIT_LOG(ERR, "Failed to allocate %u rxq vectors",
hw->max_queue_pairs);
return -ENOMEM;
diff --git a/drivers/net/virtio/virtio_user/virtio_user_dev.c b/drivers/net/virtio/virtio_user/virtio_user_dev.c
index 16c58710d7..3d0ce9458c 100644
--- a/drivers/net/virtio/virtio_user/virtio_user_dev.c
+++ b/drivers/net/virtio/virtio_user/virtio_user_dev.c
@@ -407,22 +407,40 @@ virtio_user_fill_intr_handle(struct virtio_user_dev *dev)
struct rte_eth_dev *eth_dev = &rte_eth_devices[dev->hw.port_id];
if (!eth_dev->intr_handle) {
- eth_dev->intr_handle = malloc(sizeof(*eth_dev->intr_handle));
+ eth_dev->intr_handle =
+ rte_intr_handle_instance_alloc(RTE_INTR_HANDLE_DEFAULT_SIZE,
+ false);
if (!eth_dev->intr_handle) {
- PMD_DRV_LOG(ERR, "(%s) failed to allocate intr_handle", dev->path);
+ PMD_DRV_LOG(ERR, "(%s) failed to allocate intr_handle",
+ dev->path);
return -1;
}
- memset(eth_dev->intr_handle, 0, sizeof(*eth_dev->intr_handle));
}
for (i = 0; i < dev->max_queue_pairs; ++i)
- eth_dev->intr_handle->efds[i] = dev->callfds[i];
- eth_dev->intr_handle->nb_efd = dev->max_queue_pairs;
- eth_dev->intr_handle->max_intr = dev->max_queue_pairs + 1;
- eth_dev->intr_handle->type = RTE_INTR_HANDLE_VDEV;
+ if (rte_intr_handle_efds_index_set(eth_dev->intr_handle, i,
+ dev->callfds[i]))
+ return -rte_errno;
+
+ if (rte_intr_handle_nb_efd_set(eth_dev->intr_handle,
+ dev->max_queue_pairs))
+ return -rte_errno;
+
+ if (rte_intr_handle_max_intr_set(eth_dev->intr_handle,
+ dev->max_queue_pairs + 1))
+ return -rte_errno;
+
+ if (rte_intr_handle_type_set(eth_dev->intr_handle,
+ RTE_INTR_HANDLE_VDEV))
+ return -rte_errno;
+
/* For virtio vdev, no need to read counter for clean */
- eth_dev->intr_handle->efd_counter_size = 0;
- eth_dev->intr_handle->fd = dev->ops->get_intr_fd(dev);
+ if (rte_intr_handle_efd_counter_size_set(eth_dev->intr_handle, 0))
+ return -rte_errno;
+
+ if (rte_intr_handle_fd_set(eth_dev->intr_handle,
+ dev->ops->get_intr_fd(dev)))
+ return -rte_errno;
return 0;
}
@@ -657,7 +675,7 @@ virtio_user_dev_uninit(struct virtio_user_dev *dev)
struct rte_eth_dev *eth_dev = &rte_eth_devices[dev->hw.port_id];
if (eth_dev->intr_handle) {
- free(eth_dev->intr_handle);
+ rte_intr_handle_instance_free(eth_dev->intr_handle);
eth_dev->intr_handle = NULL;
}
@@ -962,7 +980,7 @@ virtio_user_dev_delayed_disconnect_handler(void *param)
return;
}
PMD_DRV_LOG(DEBUG, "Unregistering intr fd: %d",
- eth_dev->intr_handle->fd);
+ rte_intr_handle_fd_get(eth_dev->intr_handle));
if (rte_intr_callback_unregister(eth_dev->intr_handle,
virtio_interrupt_handler,
eth_dev) != 1)
@@ -972,10 +990,11 @@ virtio_user_dev_delayed_disconnect_handler(void *param)
if (dev->ops->server_disconnect)
dev->ops->server_disconnect(dev);
- eth_dev->intr_handle->fd = dev->ops->get_intr_fd(dev);
+ rte_intr_handle_fd_set(eth_dev->intr_handle,
+ dev->ops->get_intr_fd(dev));
PMD_DRV_LOG(DEBUG, "Registering intr fd: %d",
- eth_dev->intr_handle->fd);
+ rte_intr_handle_fd_get(eth_dev->intr_handle));
if (rte_intr_callback_register(eth_dev->intr_handle,
virtio_interrupt_handler,
@@ -996,16 +1015,18 @@ virtio_user_dev_delayed_intr_reconfig_handler(void *param)
struct rte_eth_dev *eth_dev = &rte_eth_devices[dev->hw.port_id];
PMD_DRV_LOG(DEBUG, "Unregistering intr fd: %d",
- eth_dev->intr_handle->fd);
+ rte_intr_handle_fd_get(eth_dev->intr_handle));
if (rte_intr_callback_unregister(eth_dev->intr_handle,
virtio_interrupt_handler,
eth_dev) != 1)
PMD_DRV_LOG(ERR, "interrupt unregister failed");
- eth_dev->intr_handle->fd = dev->ops->get_intr_fd(dev);
+ rte_intr_handle_fd_set(eth_dev->intr_handle,
+ dev->ops->get_intr_fd(dev));
- PMD_DRV_LOG(DEBUG, "Registering intr fd: %d", eth_dev->intr_handle->fd);
+ PMD_DRV_LOG(DEBUG, "Registering intr fd: %d",
+ rte_intr_handle_fd_get(eth_dev->intr_handle));
if (rte_intr_callback_register(eth_dev->intr_handle,
virtio_interrupt_handler, eth_dev))
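
virtio-user above shows the vdev flavour: the instance is allocated with the second argument false (matching the usage in this series), each queue's callfd goes in through rte_intr_handle_efds_index_set(), and the scalar fields are filled with the corresponding setters. A compact, hypothetical sketch, written against a handle pointer to stay self-contained:

static int
my_vdev_fill_intr_handle(struct rte_intr_handle **ph, const int *callfds,
			 uint16_t nb_q, int intr_fd)
{
	uint16_t i;

	if (*ph == NULL) {
		*ph = rte_intr_handle_instance_alloc(
				RTE_INTR_HANDLE_DEFAULT_SIZE, false);
		if (*ph == NULL)
			return -ENOMEM;
	}
	for (i = 0; i < nb_q; i++)
		if (rte_intr_handle_efds_index_set(*ph, i, callfds[i]))
			return -rte_errno;
	/* Scalar fields that drivers used to write directly. */
	if (rte_intr_handle_nb_efd_set(*ph, nb_q) ||
	    rte_intr_handle_max_intr_set(*ph, nb_q + 1) ||
	    rte_intr_handle_type_set(*ph, RTE_INTR_HANDLE_VDEV) ||
	    rte_intr_handle_efd_counter_size_set(*ph, 0) ||
	    rte_intr_handle_fd_set(*ph, intr_fd))
		return -rte_errno;
	return 0;
}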
diff --git a/drivers/net/vmxnet3/vmxnet3_ethdev.c b/drivers/net/vmxnet3/vmxnet3_ethdev.c
index 1a3291273a..1d0b61d9f2 100644
--- a/drivers/net/vmxnet3/vmxnet3_ethdev.c
+++ b/drivers/net/vmxnet3/vmxnet3_ethdev.c
@@ -620,11 +620,10 @@ vmxnet3_configure_msix(struct rte_eth_dev *dev)
return -1;
}
- if (rte_intr_dp_is_en(intr_handle) && !intr_handle->intr_vec) {
- intr_handle->intr_vec =
- rte_zmalloc("intr_vec",
- dev->data->nb_rx_queues * sizeof(int), 0);
- if (intr_handle->intr_vec == NULL) {
+ if (rte_intr_dp_is_en(intr_handle) &&
+ !rte_intr_handle_vec_list_base(intr_handle)) {
+ if (rte_intr_handle_vec_list_alloc(intr_handle, "intr_vec",
+ dev->data->nb_rx_queues)) {
PMD_INIT_LOG(ERR, "Failed to allocate %d Rx queues intr_vec",
dev->data->nb_rx_queues);
rte_intr_efd_disable(intr_handle);
@@ -635,8 +634,7 @@ vmxnet3_configure_msix(struct rte_eth_dev *dev)
if (!rte_intr_allow_others(intr_handle) &&
dev->data->dev_conf.intr_conf.lsc != 0) {
PMD_INIT_LOG(ERR, "not enough intr vector to support both Rx interrupt and LSC");
- rte_free(intr_handle->intr_vec);
- intr_handle->intr_vec = NULL;
+ rte_intr_handle_vec_list_free(intr_handle);
rte_intr_efd_disable(intr_handle);
return -1;
}
@@ -644,17 +642,19 @@ vmxnet3_configure_msix(struct rte_eth_dev *dev)
/* if we cannot allocate one MSI-X vector per queue, don't enable
* interrupt mode.
*/
- if (hw->intr.num_intrs != (intr_handle->nb_efd + 1)) {
+ if (hw->intr.num_intrs !=
+ (rte_intr_handle_nb_efd_get(intr_handle) + 1)) {
PMD_INIT_LOG(ERR, "Device configured with %d Rx intr vectors, expecting %d",
- hw->intr.num_intrs, intr_handle->nb_efd + 1);
- rte_free(intr_handle->intr_vec);
- intr_handle->intr_vec = NULL;
+ hw->intr.num_intrs,
+ rte_intr_handle_nb_efd_get(intr_handle) + 1);
+ rte_intr_handle_vec_list_free(intr_handle);
rte_intr_efd_disable(intr_handle);
return -1;
}
for (i = 0; i < dev->data->nb_rx_queues; i++)
- intr_handle->intr_vec[i] = i + 1;
+ if (rte_intr_handle_vec_list_index_set(intr_handle, i, i + 1))
+ return -rte_errno;
for (i = 0; i < hw->intr.num_intrs; i++)
hw->intr.mod_levels[i] = UPT1_IML_ADAPTIVE;
@@ -802,7 +802,9 @@ vmxnet3_setup_driver_shared(struct rte_eth_dev *dev)
if (hw->intr.lsc_only)
tqd->conf.intrIdx = 1;
else
- tqd->conf.intrIdx = intr_handle->intr_vec[i];
+ tqd->conf.intrIdx =
+ rte_intr_handle_vec_list_index_get(intr_handle,
+ i);
tqd->status.stopped = TRUE;
tqd->status.error = 0;
memset(&tqd->stats, 0, sizeof(tqd->stats));
@@ -825,7 +827,9 @@ vmxnet3_setup_driver_shared(struct rte_eth_dev *dev)
if (hw->intr.lsc_only)
rqd->conf.intrIdx = 1;
else
- rqd->conf.intrIdx = intr_handle->intr_vec[i];
+ rqd->conf.intrIdx =
+ rte_intr_handle_vec_list_index_get(intr_handle,
+ i);
rqd->status.stopped = TRUE;
rqd->status.error = 0;
memset(&rqd->stats, 0, sizeof(rqd->stats));
@@ -1022,10 +1026,8 @@ vmxnet3_dev_stop(struct rte_eth_dev *dev)
/* Clean datapath event and queue/vector mapping */
rte_intr_efd_disable(intr_handle);
- if (intr_handle->intr_vec != NULL) {
- rte_free(intr_handle->intr_vec);
- intr_handle->intr_vec = NULL;
- }
+ if (rte_intr_handle_vec_list_base(intr_handle))
+ rte_intr_handle_vec_list_free(intr_handle);
/* quiesce the device first */
VMXNET3_WRITE_BAR1_REG(hw, VMXNET3_REG_CMD, VMXNET3_CMD_QUIESCE_DEV);
@@ -1677,7 +1679,9 @@ vmxnet3_dev_rx_queue_intr_enable(struct rte_eth_dev *dev, uint16_t queue_id)
{
struct vmxnet3_hw *hw = dev->data->dev_private;
- vmxnet3_enable_intr(hw, dev->intr_handle->intr_vec[queue_id]);
+ vmxnet3_enable_intr(hw,
+ rte_intr_handle_vec_list_index_get(dev->intr_handle,
+ queue_id));
return 0;
}
@@ -1687,7 +1691,8 @@ vmxnet3_dev_rx_queue_intr_disable(struct rte_eth_dev *dev, uint16_t queue_id)
{
struct vmxnet3_hw *hw = dev->data->dev_private;
- vmxnet3_disable_intr(hw, dev->intr_handle->intr_vec[queue_id]);
+ vmxnet3_disable_intr(hw,
+ rte_intr_handle_vec_list_index_get(dev->intr_handle, queue_id));
return 0;
}
diff --git a/drivers/raw/ifpga/ifpga_rawdev.c b/drivers/raw/ifpga/ifpga_rawdev.c
index 76e6a8530b..4fbe25080e 100644
--- a/drivers/raw/ifpga/ifpga_rawdev.c
+++ b/drivers/raw/ifpga/ifpga_rawdev.c
@@ -73,7 +73,7 @@ static pthread_t ifpga_monitor_start_thread;
#define IFPGA_MAX_IRQ 12
/* 0 for FME interrupt, others are reserved for AFU irq */
-static struct rte_intr_handle ifpga_irq_handle[IFPGA_MAX_IRQ];
+static struct rte_intr_handle *ifpga_irq_handle;
static struct ifpga_rawdev *
ifpga_rawdev_allocate(struct rte_rawdev *rawdev);
@@ -1345,17 +1345,23 @@ ifpga_unregister_msix_irq(enum ifpga_irq_type type,
int vec_start, rte_intr_callback_fn handler, void *arg)
{
struct rte_intr_handle *intr_handle;
+ int rc;
if (type == IFPGA_FME_IRQ)
- intr_handle = &ifpga_irq_handle[0];
+ intr_handle =
+ rte_intr_handle_instance_index_get(ifpga_irq_handle, 0);
else if (type == IFPGA_AFU_IRQ)
- intr_handle = &ifpga_irq_handle[vec_start + 1];
+ intr_handle = rte_intr_handle_instance_index_get(
+ ifpga_irq_handle, vec_start + 1);
else
return 0;
rte_intr_efd_disable(intr_handle);
- return rte_intr_callback_unregister(intr_handle, handler, arg);
+ rc = rte_intr_callback_unregister(intr_handle, handler, arg);
+
+ rte_intr_handle_instance_free(ifpga_irq_handle);
+ return rc;
}
int
@@ -1370,6 +1376,10 @@ ifpga_register_msix_irq(struct rte_rawdev *dev, int port_id,
struct opae_manager *mgr;
struct opae_accelerator *acc;
+ ifpga_irq_handle = rte_intr_handle_instance_alloc(IFPGA_MAX_IRQ, false);
+ if (!ifpga_irq_handle)
+ return -ENOMEM;
+
adapter = ifpga_rawdev_get_priv(dev);
if (!adapter)
return -ENODEV;
@@ -1379,29 +1389,35 @@ ifpga_register_msix_irq(struct rte_rawdev *dev, int port_id,
return -ENODEV;
if (type == IFPGA_FME_IRQ) {
- intr_handle = &ifpga_irq_handle[0];
+ intr_handle =
+ rte_intr_handle_instance_index_get(ifpga_irq_handle, 0);
count = 1;
} else if (type == IFPGA_AFU_IRQ) {
- intr_handle = &ifpga_irq_handle[vec_start + 1];
+ intr_handle = rte_intr_handle_instance_index_get(
+ ifpga_irq_handle, vec_start + 1);
} else {
return -EINVAL;
}
- intr_handle->type = RTE_INTR_HANDLE_VFIO_MSIX;
+ if (rte_intr_handle_type_set(intr_handle, RTE_INTR_HANDLE_VFIO_MSIX))
+ return -rte_errno;
ret = rte_intr_efd_enable(intr_handle, count);
if (ret)
return -ENODEV;
- intr_handle->fd = intr_handle->efds[0];
+ if (rte_intr_handle_fd_set(intr_handle,
+ rte_intr_handle_efds_index_get(intr_handle, 0)))
+ return -rte_errno;
IFPGA_RAWDEV_PMD_DEBUG("register %s irq, vfio_fd=%d, fd=%d\n",
- name, intr_handle->vfio_dev_fd,
- intr_handle->fd);
+ name, rte_intr_handle_dev_fd_get(intr_handle),
+ rte_intr_handle_fd_get(intr_handle));
if (type == IFPGA_FME_IRQ) {
struct fpga_fme_err_irq_set err_irq_set;
- err_irq_set.evtfd = intr_handle->efds[0];
+ err_irq_set.evtfd = rte_intr_handle_efds_index_get(intr_handle,
+ 0);
ret = opae_manager_ifpga_set_err_irq(mgr, &err_irq_set);
if (ret)
@@ -1412,7 +1428,7 @@ ifpga_register_msix_irq(struct rte_rawdev *dev, int port_id,
return -EINVAL;
ret = opae_acc_set_irq(acc, vec_start, count,
- intr_handle->efds);
+ rte_intr_handle_efds_base(intr_handle));
if (ret)
return -EINVAL;
}
@@ -1491,7 +1507,7 @@ ifpga_rawdev_create(struct rte_pci_device *pci_dev,
data->bus = pci_dev->addr.bus;
data->devid = pci_dev->addr.devid;
data->function = pci_dev->addr.function;
- data->vfio_dev_fd = pci_dev->intr_handle.vfio_dev_fd;
+ data->vfio_dev_fd = rte_intr_handle_dev_fd_get(pci_dev->intr_handle);
adapter = rawdev->dev_private;
/* create a opae_adapter based on above device data */
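
ifpga above also shows the array case: a single rte_intr_handle_instance_alloc() call sized for several interrupts, with individual handles fetched by index via rte_intr_handle_instance_index_get(). A hedged sketch with hypothetical MY_* names, again following the usage in the hunks above:

#define MY_MAX_IRQ 12

static struct rte_intr_handle *my_irq_handles;

static int
my_irq_setup(void)
{
	struct rte_intr_handle *h;

	my_irq_handles = rte_intr_handle_instance_alloc(MY_MAX_IRQ, false);
	if (my_irq_handles == NULL)
		return -ENOMEM;

	/* Vector 0: pick one instance out of the block by index. */
	h = rte_intr_handle_instance_index_get(my_irq_handles, 0);
	if (rte_intr_handle_type_set(h, RTE_INTR_HANDLE_VFIO_MSIX))
		return -rte_errno;
	if (rte_intr_efd_enable(h, 1))
		return -ENODEV;

	/* The event fd backing vector 0 becomes the handle's main fd. */
	return rte_intr_handle_fd_set(h,
			rte_intr_handle_efds_index_get(h, 0));
}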
diff --git a/drivers/raw/ntb/ntb.c b/drivers/raw/ntb/ntb.c
index 78cfcd79f7..5497ef2906 100644
--- a/drivers/raw/ntb/ntb.c
+++ b/drivers/raw/ntb/ntb.c
@@ -1044,13 +1044,11 @@ ntb_dev_close(struct rte_rawdev *dev)
ntb_queue_release(dev, i);
hw->queue_pairs = 0;
- intr_handle = &hw->pci_dev->intr_handle;
+ intr_handle = hw->pci_dev->intr_handle;
/* Clean datapath event and vec mapping */
rte_intr_efd_disable(intr_handle);
- if (intr_handle->intr_vec) {
- rte_free(intr_handle->intr_vec);
- intr_handle->intr_vec = NULL;
- }
+ if (rte_intr_handle_vec_list_base(intr_handle))
+ rte_intr_handle_vec_list_free(intr_handle);
/* Disable uio intr before callback unregister */
rte_intr_disable(intr_handle);
@@ -1402,7 +1400,7 @@ ntb_init_hw(struct rte_rawdev *dev, struct rte_pci_device *pci_dev)
/* Init doorbell. */
hw->db_valid_mask = RTE_LEN2MASK(hw->db_cnt, uint64_t);
- intr_handle = &pci_dev->intr_handle;
+ intr_handle = pci_dev->intr_handle;
/* Register callback func to eal lib */
rte_intr_callback_register(intr_handle,
ntb_dev_intr_handler, dev);
diff --git a/drivers/regex/octeontx2/otx2_regexdev_hw_access.c b/drivers/regex/octeontx2/otx2_regexdev_hw_access.c
index 620d5c9122..f8031d0f72 100644
--- a/drivers/regex/octeontx2/otx2_regexdev_hw_access.c
+++ b/drivers/regex/octeontx2/otx2_regexdev_hw_access.c
@@ -31,7 +31,7 @@ ree_lf_err_intr_unregister(const struct rte_regexdev *dev, uint16_t msix_off,
uintptr_t base)
{
struct rte_pci_device *pci_dev = RTE_DEV_TO_PCI(dev->device);
- struct rte_intr_handle *handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *handle = pci_dev->intr_handle;
/* Disable error interrupts */
otx2_write64(~0ull, base + OTX2_REE_LF_MISC_INT_ENA_W1C);
@@ -61,7 +61,7 @@ ree_lf_err_intr_register(const struct rte_regexdev *dev, uint16_t msix_off,
uintptr_t base)
{
struct rte_pci_device *pci_dev = RTE_DEV_TO_PCI(dev->device);
- struct rte_intr_handle *handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *handle = pci_dev->intr_handle;
int ret;
/* Disable error interrupts */
diff --git a/drivers/vdpa/ifc/ifcvf_vdpa.c b/drivers/vdpa/ifc/ifcvf_vdpa.c
index 1dc813d0a3..90b9a73f6a 100644
--- a/drivers/vdpa/ifc/ifcvf_vdpa.c
+++ b/drivers/vdpa/ifc/ifcvf_vdpa.c
@@ -162,7 +162,7 @@ ifcvf_vfio_setup(struct ifcvf_internal *internal)
if (rte_pci_map_device(dev))
goto err;
- internal->vfio_dev_fd = dev->intr_handle.vfio_dev_fd;
+ internal->vfio_dev_fd = rte_intr_handle_dev_fd_get(dev->intr_handle);
for (i = 0; i < RTE_MIN(PCI_MAX_RESOURCE, IFCVF_PCI_MAX_RESOURCE);
i++) {
@@ -365,7 +365,8 @@ vdpa_enable_vfio_intr(struct ifcvf_internal *internal, bool m_rx)
irq_set->index = VFIO_PCI_MSIX_IRQ_INDEX;
irq_set->start = 0;
fd_ptr = (int *)&irq_set->data;
- fd_ptr[RTE_INTR_VEC_ZERO_OFFSET] = internal->pdev->intr_handle.fd;
+ fd_ptr[RTE_INTR_VEC_ZERO_OFFSET] =
+ rte_intr_handle_fd_get(internal->pdev->intr_handle);
for (i = 0; i < nr_vring; i++)
internal->intr_fd[i] = -1;
diff --git a/drivers/vdpa/mlx5/mlx5_vdpa.c b/drivers/vdpa/mlx5/mlx5_vdpa.c
index 6d17d7a6f3..27dc50cc57 100644
--- a/drivers/vdpa/mlx5/mlx5_vdpa.c
+++ b/drivers/vdpa/mlx5/mlx5_vdpa.c
@@ -698,6 +698,13 @@ mlx5_vdpa_dev_probe(struct rte_device *dev)
DRV_LOG(ERR, "Failed to allocate VAR %u.", errno);
goto error;
}
+ priv->err_intr_handle =
+ rte_intr_handle_instance_alloc(RTE_INTR_HANDLE_DEFAULT_SIZE,
+ true);
+ if (!priv->err_intr_handle) {
+ DRV_LOG(ERR, "Fail to allocate intr_handle");
+ goto error;
+ }
priv->vdev = rte_vdpa_register_device(dev, &mlx5_vdpa_ops);
if (priv->vdev == NULL) {
DRV_LOG(ERR, "Failed to register vDPA device.");
@@ -716,6 +723,8 @@ mlx5_vdpa_dev_probe(struct rte_device *dev)
if (priv) {
if (priv->var)
mlx5_glue->dv_free_var(priv->var);
+ if (priv->err_intr_handle)
+ rte_intr_handle_instance_free(priv->err_intr_handle);
rte_free(priv);
}
if (ctx)
@@ -750,6 +759,8 @@ mlx5_vdpa_dev_remove(struct rte_device *dev)
rte_vdpa_unregister_device(priv->vdev);
mlx5_glue->close_device(priv->ctx);
pthread_mutex_destroy(&priv->vq_config_lock);
+ if (priv->err_intr_handle)
+ rte_intr_handle_instance_free(priv->err_intr_handle);
rte_free(priv);
}
return 0;
diff --git a/drivers/vdpa/mlx5/mlx5_vdpa.h b/drivers/vdpa/mlx5/mlx5_vdpa.h
index 2a04e36607..f72cb358ec 100644
--- a/drivers/vdpa/mlx5/mlx5_vdpa.h
+++ b/drivers/vdpa/mlx5/mlx5_vdpa.h
@@ -92,7 +92,7 @@ struct mlx5_vdpa_virtq {
void *buf;
uint32_t size;
} umems[3];
- struct rte_intr_handle intr_handle;
+ struct rte_intr_handle *intr_handle;
uint64_t err_time[3]; /* RDTSC time of recent errors. */
uint32_t n_retry;
struct mlx5_devx_virtio_q_couners_attr reset;
@@ -142,7 +142,7 @@ struct mlx5_vdpa_priv {
struct mlx5dv_devx_event_channel *eventc;
struct mlx5dv_devx_event_channel *err_chnl;
struct mlx5dv_devx_uar *uar;
- struct rte_intr_handle err_intr_handle;
+ struct rte_intr_handle *err_intr_handle;
struct mlx5_devx_obj *td;
struct mlx5_devx_obj *tiss[16]; /* TIS list for each LAG port. */
uint16_t nr_virtqs;
diff --git a/drivers/vdpa/mlx5/mlx5_vdpa_event.c b/drivers/vdpa/mlx5/mlx5_vdpa_event.c
index 3541c652ce..1f3da2461a 100644
--- a/drivers/vdpa/mlx5/mlx5_vdpa_event.c
+++ b/drivers/vdpa/mlx5/mlx5_vdpa_event.c
@@ -410,12 +410,18 @@ mlx5_vdpa_err_event_setup(struct mlx5_vdpa_priv *priv)
DRV_LOG(ERR, "Failed to change device event channel FD.");
goto error;
}
- priv->err_intr_handle.fd = priv->err_chnl->fd;
- priv->err_intr_handle.type = RTE_INTR_HANDLE_EXT;
- if (rte_intr_callback_register(&priv->err_intr_handle,
+
+ if (rte_intr_handle_fd_set(priv->err_intr_handle, priv->err_chnl->fd))
+ goto error;
+
+ if (rte_intr_handle_type_set(priv->err_intr_handle,
+ RTE_INTR_HANDLE_EXT))
+ goto error;
+
+ if (rte_intr_callback_register(priv->err_intr_handle,
mlx5_vdpa_err_interrupt_handler,
priv)) {
- priv->err_intr_handle.fd = 0;
+ rte_intr_handle_fd_set(priv->err_intr_handle, 0);
DRV_LOG(ERR, "Failed to register error interrupt for device %d.",
priv->vid);
goto error;
@@ -435,20 +441,20 @@ mlx5_vdpa_err_event_unset(struct mlx5_vdpa_priv *priv)
int retries = MLX5_VDPA_INTR_RETRIES;
int ret = -EAGAIN;
- if (!priv->err_intr_handle.fd)
+ if (!rte_intr_handle_fd_get(priv->err_intr_handle))
return;
while (retries-- && ret == -EAGAIN) {
- ret = rte_intr_callback_unregister(&priv->err_intr_handle,
+ ret = rte_intr_callback_unregister(priv->err_intr_handle,
mlx5_vdpa_err_interrupt_handler,
priv);
if (ret == -EAGAIN) {
DRV_LOG(DEBUG, "Try again to unregister fd %d "
"of error interrupt, retries = %d.",
- priv->err_intr_handle.fd, retries);
+ rte_intr_handle_fd_get(priv->err_intr_handle),
+ retries);
rte_pause();
}
}
- memset(&priv->err_intr_handle, 0, sizeof(priv->err_intr_handle));
if (priv->err_chnl) {
#ifdef HAVE_IBV_DEVX_EVENT
union {
diff --git a/drivers/vdpa/mlx5/mlx5_vdpa_virtq.c b/drivers/vdpa/mlx5/mlx5_vdpa_virtq.c
index f530646058..b9d03953ac 100644
--- a/drivers/vdpa/mlx5/mlx5_vdpa_virtq.c
+++ b/drivers/vdpa/mlx5/mlx5_vdpa_virtq.c
@@ -24,7 +24,8 @@ mlx5_vdpa_virtq_handler(void *cb_arg)
int nbytes;
do {
- nbytes = read(virtq->intr_handle.fd, &buf, 8);
+ nbytes = read(rte_intr_handle_fd_get(virtq->intr_handle), &buf,
+ 8);
if (nbytes < 0) {
if (errno == EINTR ||
errno == EWOULDBLOCK ||
@@ -57,21 +58,24 @@ mlx5_vdpa_virtq_unset(struct mlx5_vdpa_virtq *virtq)
int retries = MLX5_VDPA_INTR_RETRIES;
int ret = -EAGAIN;
- if (virtq->intr_handle.fd != -1) {
+ if (rte_intr_handle_fd_get(virtq->intr_handle) != -1) {
while (retries-- && ret == -EAGAIN) {
- ret = rte_intr_callback_unregister(&virtq->intr_handle,
+ ret = rte_intr_callback_unregister(virtq->intr_handle,
mlx5_vdpa_virtq_handler,
virtq);
if (ret == -EAGAIN) {
DRV_LOG(DEBUG, "Try again to unregister fd %d "
- "of virtq %d interrupt, retries = %d.",
- virtq->intr_handle.fd,
- (int)virtq->index, retries);
+ "of virtq %d interrupt, retries = %d.",
+ rte_intr_handle_fd_get(virtq->intr_handle),
+ (int)virtq->index, retries);
+
usleep(MLX5_VDPA_INTR_RETRIES_USEC);
}
}
- virtq->intr_handle.fd = -1;
+ rte_intr_handle_fd_set(virtq->intr_handle, -1);
}
+ if (virtq->intr_handle)
+ rte_intr_handle_instance_free(virtq->intr_handle);
if (virtq->virtq) {
ret = mlx5_vdpa_virtq_stop(virtq->priv, virtq->index);
if (ret)
@@ -336,21 +340,34 @@ mlx5_vdpa_virtq_setup(struct mlx5_vdpa_priv *priv, int index)
virtq->priv = priv;
rte_write32(virtq->index, priv->virtq_db_addr);
/* Setup doorbell mapping. */
- virtq->intr_handle.fd = vq.kickfd;
- if (virtq->intr_handle.fd == -1) {
+ virtq->intr_handle =
+ rte_intr_handle_instance_alloc(RTE_INTR_HANDLE_DEFAULT_SIZE,
+ true);
+ if (!virtq->intr_handle) {
+ DRV_LOG(ERR, "Fail to allocate intr_handle");
+ goto error;
+ }
+
+ if (rte_intr_handle_fd_set(virtq->intr_handle, vq.kickfd))
+ goto error;
+
+ if (rte_intr_handle_fd_get(virtq->intr_handle) == -1) {
DRV_LOG(WARNING, "Virtq %d kickfd is invalid.", index);
} else {
- virtq->intr_handle.type = RTE_INTR_HANDLE_EXT;
- if (rte_intr_callback_register(&virtq->intr_handle,
+ if (rte_intr_handle_type_set(virtq->intr_handle,
+ RTE_INTR_HANDLE_EXT))
+ goto error;
+ if (rte_intr_callback_register(virtq->intr_handle,
mlx5_vdpa_virtq_handler,
virtq)) {
- virtq->intr_handle.fd = -1;
+ rte_intr_handle_fd_set(virtq->intr_handle, -1);
DRV_LOG(ERR, "Failed to register virtq %d interrupt.",
index);
goto error;
} else {
DRV_LOG(DEBUG, "Register fd %d interrupt for virtq %d.",
- virtq->intr_handle.fd, index);
+ rte_intr_handle_fd_get(virtq->intr_handle),
+ index);
}
}
/* Subscribe virtq error event. */
@@ -501,7 +518,8 @@ mlx5_vdpa_virtq_is_modified(struct mlx5_vdpa_priv *priv,
if (ret)
return -1;
- if (vq.size != virtq->vq_size || vq.kickfd != virtq->intr_handle.fd)
+ if (vq.size != virtq->vq_size || vq.kickfd !=
+ rte_intr_handle_fd_get(virtq->intr_handle))
return 1;
if (virtq->eqp.cq.cq_obj.cq) {
if (vq.callfd != virtq->eqp.cq.callfd)
diff --git a/lib/bbdev/rte_bbdev.c b/lib/bbdev/rte_bbdev.c
index fc37236195..fdc9aeb894 100644
--- a/lib/bbdev/rte_bbdev.c
+++ b/lib/bbdev/rte_bbdev.c
@@ -1093,7 +1093,7 @@ rte_bbdev_queue_intr_ctl(uint16_t dev_id, uint16_t queue_id, int epfd, int op,
VALID_QUEUE_OR_RET_ERR(queue_id, dev);
intr_handle = dev->intr_handle;
- if (!intr_handle || !intr_handle->intr_vec) {
+ if (!intr_handle || !rte_intr_handle_vec_list_base(intr_handle)) {
rte_bbdev_log(ERR, "Device %u intr handle unset\n", dev_id);
return -ENOTSUP;
}
@@ -1104,7 +1104,7 @@ rte_bbdev_queue_intr_ctl(uint16_t dev_id, uint16_t queue_id, int epfd, int op,
return -ENOTSUP;
}
- vec = intr_handle->intr_vec[queue_id];
+ vec = rte_intr_handle_vec_list_index_get(intr_handle, queue_id);
ret = rte_intr_rx_ctl(intr_handle, epfd, op, vec, data);
if (ret && (ret != -EEXIST)) {
rte_bbdev_log(ERR,
diff --git a/lib/eal/freebsd/eal_alarm.c b/lib/eal/freebsd/eal_alarm.c
index c38b2e04f8..b4a0dd533f 100644
--- a/lib/eal/freebsd/eal_alarm.c
+++ b/lib/eal/freebsd/eal_alarm.c
@@ -32,7 +32,7 @@
struct alarm_entry {
LIST_ENTRY(alarm_entry) next;
- struct rte_intr_handle handle;
+ struct rte_intr_handle *handle;
struct timespec time;
rte_eal_alarm_callback cb_fn;
void *cb_arg;
@@ -43,22 +43,45 @@ struct alarm_entry {
static LIST_HEAD(alarm_list, alarm_entry) alarm_list = LIST_HEAD_INITIALIZER();
static rte_spinlock_t alarm_list_lk = RTE_SPINLOCK_INITIALIZER;
-static struct rte_intr_handle intr_handle = {.fd = -1 };
+static struct rte_intr_handle *intr_handle;
static void eal_alarm_callback(void *arg);
int
rte_eal_alarm_init(void)
{
- intr_handle.type = RTE_INTR_HANDLE_ALARM;
+ int fd;
+
+ intr_handle =
+ rte_intr_handle_instance_alloc(RTE_INTR_HANDLE_DEFAULT_SIZE,
+ false);
+ if (!intr_handle) {
+ RTE_LOG(ERR, EAL, "Fail to allocate intr_handle\n");
+ goto error;
+ }
+
+ if (rte_intr_handle_type_set(intr_handle, RTE_INTR_HANDLE_ALARM))
+ goto error;
+
+ if (rte_intr_handle_fd_set(intr_handle, -1))
+ goto error;
/* on FreeBSD, timers don't use fd's, and their identifiers are stored
* in separate namespace from fd's, so using any value is OK. however,
* EAL interrupts handler expects fd's to be unique, so use an actual fd
* to guarantee unique timer identifier.
*/
- intr_handle.fd = open("/dev/zero", O_RDONLY);
+ fd = open("/dev/zero", O_RDONLY);
+
+ if (rte_intr_handle_fd_set(intr_handle, fd))
+ goto error;
return 0;
+error:
+ if (intr_handle)
+ rte_intr_handle_instance_free(intr_handle);
+
+ return -1;
}
static inline int
@@ -118,7 +141,7 @@ unregister_current_callback(void)
ap = LIST_FIRST(&alarm_list);
do {
- ret = rte_intr_callback_unregister(&intr_handle,
+ ret = rte_intr_callback_unregister(intr_handle,
eal_alarm_callback, &ap->time);
} while (ret == -EAGAIN);
}
@@ -136,7 +159,7 @@ register_first_callback(void)
ap = LIST_FIRST(&alarm_list);
/* register a new callback */
- ret = rte_intr_callback_register(&intr_handle,
+ ret = rte_intr_callback_register(intr_handle,
eal_alarm_callback, &ap->time);
}
return ret;
@@ -164,6 +187,8 @@ eal_alarm_callback(void *arg __rte_unused)
rte_spinlock_lock(&alarm_list_lk);
LIST_REMOVE(ap, next);
+ if (ap->handle)
+ rte_intr_handle_instance_free(ap->handle);
free(ap);
ap = LIST_FIRST(&alarm_list);
@@ -202,6 +227,12 @@ rte_eal_alarm_set(uint64_t us, rte_eal_alarm_callback cb_fn, void *cb_arg)
new_alarm->time.tv_nsec = (now.tv_nsec + ns) % NS_PER_S;
new_alarm->time.tv_sec = now.tv_sec + ((now.tv_nsec + ns) / NS_PER_S);
+ new_alarm->handle =
+ rte_intr_handle_instance_alloc(RTE_INTR_HANDLE_DEFAULT_SIZE,
+ false);
+ if (new_alarm->handle == NULL)
+ return -ENOMEM;
+
rte_spinlock_lock(&alarm_list_lk);
if (LIST_EMPTY(&alarm_list))
@@ -256,6 +287,9 @@ rte_eal_alarm_cancel(rte_eal_alarm_callback cb_fn, void *cb_arg)
if (ap->executing == 0) {
LIST_REMOVE(ap, next);
free(ap);
+ if (ap->handle)
+ rte_intr_handle_instance_free(
+ ap->handle);
count++;
} else {
/* If calling from other context, mark that
@@ -282,6 +316,9 @@ rte_eal_alarm_cancel(rte_eal_alarm_callback cb_fn, void *cb_arg)
cb_arg == ap->cb_arg)) {
if (ap->executing == 0) {
LIST_REMOVE(ap, next);
+ if (ap->handle)
+ rte_intr_handle_instance_free(
+ ap->handle);
free(ap);
count++;
ap = ap_prev;
diff --git a/lib/eal/include/rte_eal_trace.h b/lib/eal/include/rte_eal_trace.h
index 495ae1ee1d..792872dffd 100644
--- a/lib/eal/include/rte_eal_trace.h
+++ b/lib/eal/include/rte_eal_trace.h
@@ -149,11 +149,7 @@ RTE_TRACE_POINT(
RTE_TRACE_POINT_ARGS(const struct rte_intr_handle *handle,
rte_intr_callback_fn cb, void *cb_arg, int rc),
rte_trace_point_emit_int(rc);
- rte_trace_point_emit_int(handle->vfio_dev_fd);
- rte_trace_point_emit_int(handle->fd);
- rte_trace_point_emit_int(handle->type);
- rte_trace_point_emit_u32(handle->max_intr);
- rte_trace_point_emit_u32(handle->nb_efd);
+ rte_trace_point_emit_ptr(handle);
rte_trace_point_emit_ptr(cb);
rte_trace_point_emit_ptr(cb_arg);
)
@@ -162,11 +158,7 @@ RTE_TRACE_POINT(
RTE_TRACE_POINT_ARGS(const struct rte_intr_handle *handle,
rte_intr_callback_fn cb, void *cb_arg, int rc),
rte_trace_point_emit_int(rc);
- rte_trace_point_emit_int(handle->vfio_dev_fd);
- rte_trace_point_emit_int(handle->fd);
- rte_trace_point_emit_int(handle->type);
- rte_trace_point_emit_u32(handle->max_intr);
- rte_trace_point_emit_u32(handle->nb_efd);
+ rte_trace_point_emit_ptr(handle);
rte_trace_point_emit_ptr(cb);
rte_trace_point_emit_ptr(cb_arg);
)
@@ -174,21 +166,13 @@ RTE_TRACE_POINT(
rte_eal_trace_intr_enable,
RTE_TRACE_POINT_ARGS(const struct rte_intr_handle *handle, int rc),
rte_trace_point_emit_int(rc);
- rte_trace_point_emit_int(handle->vfio_dev_fd);
- rte_trace_point_emit_int(handle->fd);
- rte_trace_point_emit_int(handle->type);
- rte_trace_point_emit_u32(handle->max_intr);
- rte_trace_point_emit_u32(handle->nb_efd);
+ rte_trace_point_emit_ptr(handle);
)
RTE_TRACE_POINT(
rte_eal_trace_intr_disable,
RTE_TRACE_POINT_ARGS(const struct rte_intr_handle *handle, int rc),
rte_trace_point_emit_int(rc);
- rte_trace_point_emit_int(handle->vfio_dev_fd);
- rte_trace_point_emit_int(handle->fd);
- rte_trace_point_emit_int(handle->type);
- rte_trace_point_emit_u32(handle->max_intr);
- rte_trace_point_emit_u32(handle->nb_efd);
+ rte_trace_point_emit_ptr(handle);
)
/* Memory */
diff --git a/lib/eal/linux/eal_alarm.c b/lib/eal/linux/eal_alarm.c
index 3252c6fa59..e959fba27b 100644
--- a/lib/eal/linux/eal_alarm.c
+++ b/lib/eal/linux/eal_alarm.c
@@ -54,22 +54,37 @@ struct alarm_entry {
static LIST_HEAD(alarm_list, alarm_entry) alarm_list = LIST_HEAD_INITIALIZER();
static rte_spinlock_t alarm_list_lk = RTE_SPINLOCK_INITIALIZER;
-static struct rte_intr_handle intr_handle = {.fd = -1 };
+static struct rte_intr_handle *intr_handle;
static int handler_registered = 0;
static void eal_alarm_callback(void *arg);
int
rte_eal_alarm_init(void)
{
- intr_handle.type = RTE_INTR_HANDLE_ALARM;
+
+ intr_handle =
+ rte_intr_handle_instance_alloc(RTE_INTR_HANDLE_DEFAULT_SIZE,
+ false);
+ if (!intr_handle) {
+ RTE_LOG(ERR, EAL, "Fail to allocate intr_handle\n");
+ goto error;
+ }
+
+ rte_intr_handle_type_set(intr_handle, RTE_INTR_HANDLE_ALARM);
+
/* create a timerfd file descriptor */
- intr_handle.fd = timerfd_create(CLOCK_MONOTONIC, TFD_NONBLOCK);
- if (intr_handle.fd == -1)
+ if (rte_intr_handle_fd_set(intr_handle,
+ timerfd_create(CLOCK_MONOTONIC, TFD_NONBLOCK)))
goto error;
+ if (rte_intr_handle_fd_get(intr_handle) == -1)
+ goto error;
return 0;
error:
+ if (intr_handle)
+ rte_intr_handle_instance_free(intr_handle);
+
rte_errno = errno;
return -1;
}
@@ -109,7 +124,8 @@ eal_alarm_callback(void *arg __rte_unused)
atime.it_value.tv_sec -= now.tv_sec;
atime.it_value.tv_nsec -= now.tv_nsec;
- timerfd_settime(intr_handle.fd, 0, &atime, NULL);
+ timerfd_settime(rte_intr_handle_fd_get(intr_handle), 0, &atime,
+ NULL);
}
rte_spinlock_unlock(&alarm_list_lk);
}
@@ -140,7 +156,7 @@ rte_eal_alarm_set(uint64_t us, rte_eal_alarm_callback cb_fn, void *cb_arg)
rte_spinlock_lock(&alarm_list_lk);
if (!handler_registered) {
/* registration can fail, callback can be registered later */
- if (rte_intr_callback_register(&intr_handle,
+ if (rte_intr_callback_register(intr_handle,
eal_alarm_callback, NULL) == 0)
handler_registered = 1;
}
@@ -170,7 +186,8 @@ rte_eal_alarm_set(uint64_t us, rte_eal_alarm_callback cb_fn, void *cb_arg)
.tv_nsec = (us % US_PER_S) * NS_PER_US,
},
};
- ret |= timerfd_settime(intr_handle.fd, 0, &alarm_time, NULL);
+ ret |= timerfd_settime(rte_intr_handle_fd_get(intr_handle), 0,
+ &alarm_time, NULL);
}
rte_spinlock_unlock(&alarm_list_lk);
diff --git a/lib/eal/linux/eal_dev.c b/lib/eal/linux/eal_dev.c
index 3b905e18f5..14d693cd88 100644
--- a/lib/eal/linux/eal_dev.c
+++ b/lib/eal/linux/eal_dev.c
@@ -23,10 +23,7 @@
#include "eal_private.h"
-static struct rte_intr_handle intr_handle = {
- .type = RTE_INTR_HANDLE_DEV_EVENT,
- .fd = -1,
-};
+static struct rte_intr_handle *intr_handle;
static rte_rwlock_t monitor_lock = RTE_RWLOCK_INITIALIZER;
static uint32_t monitor_refcount;
static bool hotplug_handle;
@@ -109,12 +106,11 @@ static int
dev_uev_socket_fd_create(void)
{
struct sockaddr_nl addr;
- int ret;
+ int ret, fd;
- intr_handle.fd = socket(PF_NETLINK, SOCK_RAW | SOCK_CLOEXEC |
- SOCK_NONBLOCK,
- NETLINK_KOBJECT_UEVENT);
- if (intr_handle.fd < 0) {
+ fd = socket(PF_NETLINK, SOCK_RAW | SOCK_CLOEXEC | SOCK_NONBLOCK,
+ NETLINK_KOBJECT_UEVENT);
+ if (fd < 0) {
RTE_LOG(ERR, EAL, "create uevent fd failed.\n");
return -1;
}
@@ -124,16 +120,19 @@ dev_uev_socket_fd_create(void)
addr.nl_pid = 0;
addr.nl_groups = 0xffffffff;
- ret = bind(intr_handle.fd, (struct sockaddr *) &addr, sizeof(addr));
+ ret = bind(fd, (struct sockaddr *) &addr, sizeof(addr));
if (ret < 0) {
RTE_LOG(ERR, EAL, "Failed to bind uevent socket.\n");
goto err;
}
+ if (rte_intr_handle_fd_set(intr_handle, fd))
+ goto err;
+
return 0;
err:
- close(intr_handle.fd);
- intr_handle.fd = -1;
+ close(fd);
+ fd = -1;
return ret;
}
@@ -217,9 +216,9 @@ dev_uev_parse(const char *buf, struct rte_dev_event *event, int length)
static void
dev_delayed_unregister(void *param)
{
- rte_intr_callback_unregister(&intr_handle, dev_uev_handler, param);
- close(intr_handle.fd);
- intr_handle.fd = -1;
+ rte_intr_callback_unregister(intr_handle, dev_uev_handler, param);
+ close(rte_intr_handle_fd_get(intr_handle));
+ rte_intr_handle_fd_set(intr_handle, -1);
}
static void
@@ -235,7 +234,8 @@ dev_uev_handler(__rte_unused void *param)
memset(&uevent, 0, sizeof(struct rte_dev_event));
memset(buf, 0, EAL_UEV_MSG_LEN);
- ret = recv(intr_handle.fd, buf, EAL_UEV_MSG_LEN, MSG_DONTWAIT);
+ ret = recv(rte_intr_handle_fd_get(intr_handle), buf, EAL_UEV_MSG_LEN,
+ MSG_DONTWAIT);
if (ret < 0 && errno == EAGAIN)
return;
else if (ret <= 0) {
@@ -311,24 +311,40 @@ rte_dev_event_monitor_start(void)
goto exit;
}
+ intr_handle =
+ rte_intr_handle_instance_alloc(RTE_INTR_HANDLE_DEFAULT_SIZE,
+ false);
+ if (!intr_handle) {
+ RTE_LOG(ERR, EAL, "Fail to allocate intr_handle\n");
+ goto exit;
+ }
+
+ if (rte_intr_handle_type_set(intr_handle, RTE_INTR_HANDLE_DEV_EVENT))
+ goto exit;
+
+ if (rte_intr_handle_fd_set(intr_handle, -1))
+ goto exit;
+
ret = dev_uev_socket_fd_create();
if (ret) {
RTE_LOG(ERR, EAL, "error create device event fd.\n");
goto exit;
}
- ret = rte_intr_callback_register(&intr_handle, dev_uev_handler, NULL);
+ ret = rte_intr_callback_register(intr_handle, dev_uev_handler, NULL);
if (ret) {
- RTE_LOG(ERR, EAL, "fail to register uevent callback.\n");
- close(intr_handle.fd);
- intr_handle.fd = -1;
+ close(rte_intr_handle_fd_get(intr_handle));
goto exit;
}
monitor_refcount++;
exit:
+ if (ret != 0 && intr_handle) {
+ rte_intr_handle_instance_free(intr_handle);
+ intr_handle = NULL;
+ }
rte_rwlock_write_unlock(&monitor_lock);
return ret;
}
@@ -350,15 +366,18 @@ rte_dev_event_monitor_stop(void)
goto exit;
}
- ret = rte_intr_callback_unregister(&intr_handle, dev_uev_handler,
+ ret = rte_intr_callback_unregister(intr_handle, dev_uev_handler,
(void *)-1);
if (ret < 0) {
RTE_LOG(ERR, EAL, "fail to unregister uevent callback.\n");
goto exit;
}
- close(intr_handle.fd);
- intr_handle.fd = -1;
+ close(rte_intr_handle_fd_get(intr_handle));
+ rte_intr_handle_fd_set(intr_handle, -1);
+
+ if (intr_handle)
+ rte_intr_handle_instance_free(intr_handle);
monitor_refcount--;
diff --git a/lib/ethdev/ethdev_pci.h b/lib/ethdev/ethdev_pci.h
index 8edca82ce8..eff072ac16 100644
--- a/lib/ethdev/ethdev_pci.h
+++ b/lib/ethdev/ethdev_pci.h
@@ -32,7 +32,7 @@ rte_eth_copy_pci_info(struct rte_eth_dev *eth_dev,
return;
}
- eth_dev->intr_handle = &pci_dev->intr_handle;
+ eth_dev->intr_handle = pci_dev->intr_handle;
if (rte_eal_process_type() == RTE_PROC_PRIMARY) {
eth_dev->data->dev_flags = 0;
diff --git a/lib/ethdev/rte_ethdev.c b/lib/ethdev/rte_ethdev.c
index daf5ca9242..1f1a0291b6 100644
--- a/lib/ethdev/rte_ethdev.c
+++ b/lib/ethdev/rte_ethdev.c
@@ -4777,13 +4777,13 @@ rte_eth_dev_rx_intr_ctl(uint16_t port_id, int epfd, int op, void *data)
}
intr_handle = dev->intr_handle;
- if (!intr_handle->intr_vec) {
+ if (!rte_intr_handle_vec_list_base(intr_handle)) {
RTE_ETHDEV_LOG(ERR, "RX Intr vector unset\n");
return -EPERM;
}
for (qid = 0; qid < dev->data->nb_rx_queues; qid++) {
- vec = intr_handle->intr_vec[qid];
+ vec = rte_intr_handle_vec_list_index_get(intr_handle, qid);
rc = rte_intr_rx_ctl(intr_handle, epfd, op, vec, data);
if (rc && rc != -EEXIST) {
RTE_ETHDEV_LOG(ERR,
@@ -4818,15 +4818,15 @@ rte_eth_dev_rx_intr_ctl_q_get_fd(uint16_t port_id, uint16_t queue_id)
}
intr_handle = dev->intr_handle;
- if (!intr_handle->intr_vec) {
+ if (!rte_intr_handle_vec_list_base(intr_handle)) {
RTE_ETHDEV_LOG(ERR, "RX Intr vector unset\n");
return -1;
}
- vec = intr_handle->intr_vec[queue_id];
+ vec = rte_intr_handle_vec_list_index_get(intr_handle, queue_id);
efd_idx = (vec >= RTE_INTR_VEC_RXTX_OFFSET) ?
(vec - RTE_INTR_VEC_RXTX_OFFSET) : vec;
- fd = intr_handle->efds[efd_idx];
+ fd = rte_intr_handle_efds_index_get(intr_handle, efd_idx);
return fd;
}
@@ -5004,12 +5004,12 @@ rte_eth_dev_rx_intr_ctl_q(uint16_t port_id, uint16_t queue_id,
}
intr_handle = dev->intr_handle;
- if (!intr_handle->intr_vec) {
+ if (!rte_intr_handle_vec_list_base(intr_handle)) {
RTE_ETHDEV_LOG(ERR, "RX Intr vector unset\n");
return -EPERM;
}
- vec = intr_handle->intr_vec[queue_id];
+ vec = rte_intr_handle_vec_list_index_get(intr_handle, queue_id);
rc = rte_intr_rx_ctl(intr_handle, epfd, op, vec, data);
if (rc && rc != -EEXIST) {
RTE_ETHDEV_LOG(ERR,
--
2.18.0
^ permalink raw reply [flat|nested] 152+ messages in thread
* [dpdk-dev] [PATCH v1 6/7] eal/interrupts: make interrupt handle structure opaque
2021-09-03 12:40 ` [dpdk-dev] [PATCH v1 0/7] make rte_intr_handle internal Harman Kalra
` (4 preceding siblings ...)
2021-09-03 12:41 ` [dpdk-dev] [PATCH v1 5/7] drivers: remove direct access to interrupt handle fields Harman Kalra
@ 2021-09-03 12:41 ` Harman Kalra
2021-10-03 18:16 ` Dmitry Kozlyuk
2021-09-03 12:41 ` [dpdk-dev] [PATCH v1 7/7] eal/alarm: introduce alarm fini routine Harman Kalra
` (2 subsequent siblings)
8 siblings, 1 reply; 152+ messages in thread
From: Harman Kalra @ 2021-09-03 12:41 UTC (permalink / raw)
To: dev, Anatoly Burakov, Harman Kalra
Moving the interrupt handle structure definition inside the .c file
to make its fields totally opaque to the outside world.
The efds and elist arrays of the intr_handle structure are now
allocated dynamically, based on a size provided by the user,
e.g. the number of MSI-X interrupts supported by a PCI device.
Signed-off-by: Harman Kalra <hkalra@marvell.com>
---
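Note: for illustration only, a minimal sketch of the usage pattern intended
with the opaque handle, using the accessor names from this series (assuming
<rte_interrupts.h> and <rte_errno.h> are included). The event fd and MSI-X
count parameters are placeholders, not values taken from any driver:

static int
example_intr_setup(int ev_fd, uint16_t msix_count)
{
	struct rte_intr_handle *handle;

	/* One instance, allocated from the regular heap (not hugepage memory). */
	handle = rte_intr_handle_instance_alloc(RTE_INTR_HANDLE_DEFAULT_SIZE,
						false);
	if (handle == NULL)
		return -ENOMEM;

	if (rte_intr_handle_type_set(handle, RTE_INTR_HANDLE_VFIO_MSIX) ||
	    rte_intr_handle_fd_set(handle, ev_fd))
		goto fail;

	/* Grow the efds/elist arrays to the MSI-X count read from the
	 * device, instead of the static RTE_MAX_RXTX_INTR_VEC_ID limit.
	 */
	if (rte_intr_handle_event_list_update(handle, msix_count))
		goto fail;

	/* ... hand "handle" to rte_intr_callback_register() and friends ... */
	return 0;
fail:
	rte_intr_handle_instance_free(handle);
	return -rte_errno;
}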
drivers/bus/pci/linux/pci_vfio.c | 7 +
lib/eal/common/eal_common_interrupts.c | 172 ++++++++++++++++++++++++-
lib/eal/include/meson.build | 1 -
lib/eal/include/rte_eal_interrupts.h | 72 -----------
lib/eal/include/rte_interrupts.h | 24 +++-
5 files changed, 196 insertions(+), 80 deletions(-)
delete mode 100644 lib/eal/include/rte_eal_interrupts.h
diff --git a/drivers/bus/pci/linux/pci_vfio.c b/drivers/bus/pci/linux/pci_vfio.c
index f920163580..6af8279189 100644
--- a/drivers/bus/pci/linux/pci_vfio.c
+++ b/drivers/bus/pci/linux/pci_vfio.c
@@ -266,6 +266,13 @@ pci_vfio_setup_interrupts(struct rte_pci_device *dev, int vfio_dev_fd)
return -1;
}
+ /* Reallocate the efds and elist fields of intr_handle based
+ * on PCI device MSIX size.
+ */
+ if (rte_intr_handle_event_list_update(dev->intr_handle,
+ irq.count))
+ return -1;
+
/* if this vector cannot be used with eventfd, fail if we explicitly
* specified interrupt type, otherwise continue */
if ((irq.flags & VFIO_IRQ_INFO_EVENTFD) == 0) {
diff --git a/lib/eal/common/eal_common_interrupts.c b/lib/eal/common/eal_common_interrupts.c
index 2e4fed96f0..caddf9b0ad 100644
--- a/lib/eal/common/eal_common_interrupts.c
+++ b/lib/eal/common/eal_common_interrupts.c
@@ -11,6 +11,29 @@
#include <rte_interrupts.h>
+struct rte_intr_handle {
+ RTE_STD_C11
+ union {
+ struct {
+ /** VFIO/UIO cfg device file descriptor */
+ int dev_fd;
+ int fd; /**< interrupt event file descriptor */
+ };
+ void *handle; /**< device driver handle (Windows) */
+ };
+ bool alloc_from_hugepage;
+ enum rte_intr_handle_type type; /**< handle type */
+ uint32_t max_intr; /**< max interrupt requested */
+ uint32_t nb_efd; /**< number of available efd(event fd) */
+ uint8_t efd_counter_size; /**< size of efd counter, used for vdev */
+ uint16_t nb_intr;
+ /**< Max vector count, default RTE_MAX_RXTX_INTR_VEC_ID */
+ int *efds; /**< intr vectors/efds mapping */
+ struct rte_epoll_event *elist; /**< intr vector epoll event */
+ uint16_t vec_list_size;
+ int *intr_vec; /**< intr vector number array */
+};
+
struct rte_intr_handle *rte_intr_handle_instance_alloc(int size,
bool from_hugepage)
@@ -31,11 +54,40 @@ struct rte_intr_handle *rte_intr_handle_instance_alloc(int size,
}
for (i = 0; i < size; i++) {
+ if (from_hugepage)
+ intr_handle[i].efds = rte_zmalloc(NULL,
+ RTE_MAX_RXTX_INTR_VEC_ID * sizeof(uint32_t), 0);
+ else
+ intr_handle[i].efds = calloc(1,
+ RTE_MAX_RXTX_INTR_VEC_ID * sizeof(uint32_t));
+ if (!intr_handle[i].efds) {
+ RTE_LOG(ERR, EAL, "Fail to allocate event fd list\n");
+ rte_errno = ENOMEM;
+ goto fail;
+ }
+
+ if (from_hugepage)
+ intr_handle[i].elist = rte_zmalloc(NULL,
+ RTE_MAX_RXTX_INTR_VEC_ID *
+ sizeof(struct rte_epoll_event), 0);
+ else
+ intr_handle[i].elist = calloc(1,
+ RTE_MAX_RXTX_INTR_VEC_ID *
+ sizeof(struct rte_epoll_event));
+ if (!intr_handle[i].elist) {
+ RTE_LOG(ERR, EAL, "fail to allocate event fd list\n");
+ rte_errno = ENOMEM;
+ goto fail;
+ }
intr_handle[i].nb_intr = RTE_MAX_RXTX_INTR_VEC_ID;
intr_handle[i].alloc_from_hugepage = from_hugepage;
}
return intr_handle;
+fail:
+ free(intr_handle->efds);
+ free(intr_handle);
+ return NULL;
}
struct rte_intr_handle *rte_intr_handle_instance_index_get(
@@ -73,12 +125,48 @@ int rte_intr_handle_instance_index_set(struct rte_intr_handle *intr_handle,
}
intr_handle[index].fd = src->fd;
- intr_handle[index].vfio_dev_fd = src->vfio_dev_fd;
+ intr_handle[index].dev_fd = src->dev_fd;
+
intr_handle[index].type = src->type;
intr_handle[index].max_intr = src->max_intr;
intr_handle[index].nb_efd = src->nb_efd;
intr_handle[index].efd_counter_size = src->efd_counter_size;
+ if (intr_handle[index].nb_intr != src->nb_intr) {
+ if (src->alloc_from_hugepage)
+ intr_handle[index].efds =
+ rte_realloc(intr_handle[index].efds,
+ src->nb_intr *
+ sizeof(uint32_t), 0);
+ else
+ intr_handle[index].efds =
+ realloc(intr_handle[index].efds,
+ src->nb_intr * sizeof(uint32_t));
+ if (intr_handle[index].efds == NULL) {
+ RTE_LOG(ERR, EAL, "Failed to realloc the efds list");
+ rte_errno = ENOMEM;
+ goto fail;
+ }
+
+ if (src->alloc_from_hugepage)
+ intr_handle[index].elist =
+ rte_realloc(intr_handle[index].elist,
+ src->nb_intr *
+ sizeof(struct rte_epoll_event), 0);
+ else
+ intr_handle[index].elist =
+ realloc(intr_handle[index].elist,
+ src->nb_intr *
+ sizeof(struct rte_epoll_event));
+ if (intr_handle[index].elist == NULL) {
+ RTE_LOG(ERR, EAL, "Failed to realloc the event list");
+ rte_errno = ENOMEM;
+ goto fail;
+ }
+
+ intr_handle[index].nb_intr = src->nb_intr;
+ }
+
+ memcpy(intr_handle[index].efds, src->efds, src->nb_intr * sizeof(*src->efds));
+ memcpy(intr_handle[index].elist, src->elist, src->nb_intr * sizeof(*src->elist));
@@ -87,6 +175,45 @@ int rte_intr_handle_instance_index_set(struct rte_intr_handle *intr_handle,
return rte_errno;
}
+int rte_intr_handle_event_list_update(struct rte_intr_handle *intr_handle,
+ int size)
+{
+ if (intr_handle == NULL) {
+ RTE_LOG(ERR, EAL, "Interrupt instance unallocated\n");
+ rte_errno = ENOTSUP;
+ goto fail;
+ }
+
+ if (size == 0) {
+ RTE_LOG(ERR, EAL, "Size can't be zero\n");
+ rte_errno = EINVAL;
+ goto fail;
+ }
+
+ intr_handle->efds = realloc(intr_handle->efds,
+ size * sizeof(uint32_t));
+ if (intr_handle->efds == NULL) {
+ RTE_LOG(ERR, EAL, "Failed to realloc the efds list");
+ rte_errno = ENOMEM;
+ goto fail;
+ }
+
+ intr_handle->elist = realloc(intr_handle->elist,
+ size * sizeof(struct rte_epoll_event));
+ if (intr_handle->elist == NULL) {
+ RTE_LOG(ERR, EAL, "Failed to realloc the event list");
+ rte_errno = ENOMEM;
+ goto fail;
+ }
+
+ intr_handle->nb_intr = size;
+
+ return 0;
+fail:
+ return rte_errno;
+}
+
+
void rte_intr_handle_instance_free(struct rte_intr_handle *intr_handle)
{
if (intr_handle == NULL) {
@@ -94,10 +221,15 @@ void rte_intr_handle_instance_free(struct rte_intr_handle *intr_handle)
rte_errno = ENOTSUP;
}
- if (intr_handle->alloc_from_hugepage)
+ if (intr_handle->alloc_from_hugepage) {
+ rte_free(intr_handle->efds);
+ rte_free(intr_handle->elist);
rte_free(intr_handle);
- else
+ } else {
+ free(intr_handle->efds);
+ free(intr_handle->elist);
free(intr_handle);
+ }
}
int rte_intr_handle_fd_set(struct rte_intr_handle *intr_handle, int fd)
@@ -164,7 +296,7 @@ int rte_intr_handle_dev_fd_set(struct rte_intr_handle *intr_handle, int fd)
goto fail;
}
- intr_handle->vfio_dev_fd = fd;
+ intr_handle->dev_fd = fd;
return 0;
fail:
@@ -179,7 +311,7 @@ int rte_intr_handle_dev_fd_get(const struct rte_intr_handle *intr_handle)
goto fail;
}
- return intr_handle->vfio_dev_fd;
+ return intr_handle->dev_fd;
fail:
return rte_errno;
}
@@ -300,6 +432,12 @@ int *rte_intr_handle_efds_base(struct rte_intr_handle *intr_handle)
goto fail;
}
+ if (!intr_handle->efds) {
+ RTE_LOG(ERR, EAL, "Event fd list not allocated\n");
+ rte_errno = ENOTSUP;
+ goto fail;
+ }
+
return intr_handle->efds;
fail:
return NULL;
@@ -314,6 +452,12 @@ int rte_intr_handle_efds_index_get(const struct rte_intr_handle *intr_handle,
goto fail;
}
+ if (!intr_handle->efds) {
+ RTE_LOG(ERR, EAL, "Event fd list not allocated\n");
+ rte_errno = EFAULT;
+ goto fail;
+ }
+
if (index >= intr_handle->nb_intr) {
RTE_LOG(ERR, EAL, "Invalid size %d, max limit %d\n", index,
intr_handle->nb_intr);
@@ -335,6 +479,12 @@ int rte_intr_handle_efds_index_set(struct rte_intr_handle *intr_handle,
goto fail;
}
+ if (!intr_handle->efds) {
+ RTE_LOG(ERR, EAL, "Event fd list not allocated\n");
+ rte_errno = EFAULT;
+ goto fail;
+ }
+
if (index >= intr_handle->nb_intr) {
RTE_LOG(ERR, EAL, "Invalid size %d, max limit %d\n", index,
intr_handle->nb_intr);
@@ -358,6 +508,12 @@ struct rte_epoll_event *rte_intr_handle_elist_index_get(
goto fail;
}
+ if (!intr_handle->elist) {
+ RTE_LOG(ERR, EAL, "Event list not allocated\n");
+ rte_errno = ENOTSUP;
+ goto fail;
+ }
+
if (index >= intr_handle->nb_intr) {
RTE_LOG(ERR, EAL, "Invalid size %d, max limit %d\n", index,
intr_handle->nb_intr);
@@ -379,6 +535,12 @@ int rte_intr_handle_elist_index_set(struct rte_intr_handle *intr_handle,
goto fail;
}
+ if (!intr_handle->elist) {
+ RTE_LOG(ERR, EAL, "Event list not allocated\n");
+ rte_errno = ENOTSUP;
+ goto fail;
+ }
+
if (index >= intr_handle->nb_intr) {
RTE_LOG(ERR, EAL, "Invalid size %d, max limit %d\n", index,
intr_handle->nb_intr);
diff --git a/lib/eal/include/meson.build b/lib/eal/include/meson.build
index 8e258607b8..86468d1a2b 100644
--- a/lib/eal/include/meson.build
+++ b/lib/eal/include/meson.build
@@ -49,7 +49,6 @@ headers += files(
'rte_version.h',
'rte_vfio.h',
)
-indirect_headers += files('rte_eal_interrupts.h')
# special case install the generic headers, since they go in a subdir
generic_headers = files(
diff --git a/lib/eal/include/rte_eal_interrupts.h b/lib/eal/include/rte_eal_interrupts.h
deleted file mode 100644
index 216aece61b..0000000000
--- a/lib/eal/include/rte_eal_interrupts.h
+++ /dev/null
@@ -1,72 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(c) 2010-2014 Intel Corporation
- */
-
-#ifndef _RTE_INTERRUPTS_H_
-#error "don't include this file directly, please include generic <rte_interrupts.h>"
-#endif
-
-/**
- * @file rte_eal_interrupts.h
- * @internal
- *
- * Contains function prototypes exposed by the EAL for interrupt handling by
- * drivers and other DPDK internal consumers.
- */
-
-#ifndef _RTE_EAL_INTERRUPTS_H_
-#define _RTE_EAL_INTERRUPTS_H_
-
-#define RTE_MAX_RXTX_INTR_VEC_ID 512
-#define RTE_INTR_VEC_ZERO_OFFSET 0
-#define RTE_INTR_VEC_RXTX_OFFSET 1
-
-/**
- * The interrupt source type, e.g. UIO, VFIO, ALARM etc.
- */
-enum rte_intr_handle_type {
- RTE_INTR_HANDLE_UNKNOWN = 0, /**< generic unknown handle */
- RTE_INTR_HANDLE_UIO, /**< uio device handle */
- RTE_INTR_HANDLE_UIO_INTX, /**< uio generic handle */
- RTE_INTR_HANDLE_VFIO_LEGACY, /**< vfio device handle (legacy) */
- RTE_INTR_HANDLE_VFIO_MSI, /**< vfio device handle (MSI) */
- RTE_INTR_HANDLE_VFIO_MSIX, /**< vfio device handle (MSIX) */
- RTE_INTR_HANDLE_ALARM, /**< alarm handle */
- RTE_INTR_HANDLE_EXT, /**< external handler */
- RTE_INTR_HANDLE_VDEV, /**< virtual device */
- RTE_INTR_HANDLE_DEV_EVENT, /**< device event handle */
- RTE_INTR_HANDLE_VFIO_REQ, /**< VFIO request handle */
- RTE_INTR_HANDLE_MAX /**< count of elements */
-};
-
-/** Handle for interrupts. */
-struct rte_intr_handle {
- RTE_STD_C11
- union {
- struct {
- RTE_STD_C11
- union {
- /** VFIO device file descriptor */
- int vfio_dev_fd;
- /** UIO cfg file desc for uio_pci_generic */
- int uio_cfg_fd;
- };
- int fd; /**< interrupt event file descriptor */
- };
- void *handle; /**< device driver handle (Windows) */
- };
- bool alloc_from_hugepage;
- enum rte_intr_handle_type type; /**< handle type */
- uint32_t max_intr; /**< max interrupt requested */
- uint32_t nb_efd; /**< number of available efd(event fd) */
- uint8_t efd_counter_size; /**< size of efd counter, used for vdev */
- uint16_t nb_intr;
- /**< Max vector count, default RTE_MAX_RXTX_INTR_VEC_ID */
- int efds[RTE_MAX_RXTX_INTR_VEC_ID]; /**< intr vectors/efds mapping */
- struct rte_epoll_event elist[RTE_MAX_RXTX_INTR_VEC_ID];
- /**< intr vector epoll event */
- uint16_t vec_list_size;
- int *intr_vec; /**< intr vector number array */
-};
-
-#endif /* _RTE_EAL_INTERRUPTS_H_ */
diff --git a/lib/eal/include/rte_interrupts.h b/lib/eal/include/rte_interrupts.h
index afc3262967..7dfb849eea 100644
--- a/lib/eal/include/rte_interrupts.h
+++ b/lib/eal/include/rte_interrupts.h
@@ -25,9 +25,29 @@ extern "C" {
/** Interrupt handle */
struct rte_intr_handle;
-#define RTE_INTR_HANDLE_DEFAULT_SIZE 1
+#define RTE_MAX_RXTX_INTR_VEC_ID 512
+#define RTE_INTR_VEC_ZERO_OFFSET 0
+#define RTE_INTR_VEC_RXTX_OFFSET 1
+
+/**
+ * The interrupt source type, e.g. UIO, VFIO, ALARM etc.
+ */
+enum rte_intr_handle_type {
+ RTE_INTR_HANDLE_UNKNOWN = 0, /**< generic unknown handle */
+ RTE_INTR_HANDLE_UIO, /**< uio device handle */
+ RTE_INTR_HANDLE_UIO_INTX, /**< uio generic handle */
+ RTE_INTR_HANDLE_VFIO_LEGACY, /**< vfio device handle (legacy) */
+ RTE_INTR_HANDLE_VFIO_MSI, /**< vfio device handle (MSI) */
+ RTE_INTR_HANDLE_VFIO_MSIX, /**< vfio device handle (MSIX) */
+ RTE_INTR_HANDLE_ALARM, /**< alarm handle */
+ RTE_INTR_HANDLE_EXT, /**< external handler */
+ RTE_INTR_HANDLE_VDEV, /**< virtual device */
+ RTE_INTR_HANDLE_DEV_EVENT, /**< device event handle */
+ RTE_INTR_HANDLE_VFIO_REQ, /**< VFIO request handle */
+ RTE_INTR_HANDLE_MAX /**< count of elements */
+};
-#include "rte_eal_interrupts.h"
+#define RTE_INTR_HANDLE_DEFAULT_SIZE 1
/** Function to be registered for the specific interrupt */
typedef void (*rte_intr_callback_fn)(void *cb_arg);
--
2.18.0
^ permalink raw reply [flat|nested] 152+ messages in thread
* [dpdk-dev] [PATCH v1 7/7] eal/alarm: introduce alarm fini routine
2021-09-03 12:40 ` [dpdk-dev] [PATCH v1 0/7] make rte_intr_handle internal Harman Kalra
` (5 preceding siblings ...)
2021-09-03 12:41 ` [dpdk-dev] [PATCH v1 6/7] eal/interrupts: make interrupt handle structure opaque Harman Kalra
@ 2021-09-03 12:41 ` Harman Kalra
2021-09-15 14:13 ` [dpdk-dev] [PATCH v1 0/7] make rte_intr_handle internal Harman Kalra
2021-09-23 8:20 ` David Marchand
8 siblings, 0 replies; 152+ messages in thread
From: Harman Kalra @ 2021-09-03 12:41 UTC (permalink / raw)
To: dev, Bruce Richardson; +Cc: Harman Kalra
Implementing an alarm cleanup routine, where the memory allocated
for the alarm interrupt instance can be freed.
Signed-off-by: Harman Kalra <hkalra@marvell.com>
---
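Illustration only (not part of the diff): with the alarm interrupt instance
now heap-allocated, an application that shuts down through rte_eal_cleanup()
releases it, since cleanup now invokes rte_eal_alarm_fini():

int
main(int argc, char **argv)
{
	if (rte_eal_init(argc, argv) < 0)
		return -1;

	/* ... application work, e.g. callers of rte_eal_alarm_set() ... */

	/* rte_eal_cleanup() now calls rte_eal_alarm_fini(), which frees the
	 * interrupt handle instance allocated in rte_eal_alarm_init().
	 */
	return rte_eal_cleanup();
}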
lib/eal/common/eal_private.h | 11 +++++++++++
lib/eal/freebsd/eal.c | 1 +
lib/eal/freebsd/eal_alarm.c | 7 +++++++
lib/eal/linux/eal.c | 1 +
lib/eal/linux/eal_alarm.c | 10 +++++++++-
5 files changed, 29 insertions(+), 1 deletion(-)
diff --git a/lib/eal/common/eal_private.h b/lib/eal/common/eal_private.h
index 64cf4e81c8..ed429dec9d 100644
--- a/lib/eal/common/eal_private.h
+++ b/lib/eal/common/eal_private.h
@@ -162,6 +162,17 @@ int rte_eal_intr_init(void);
*/
int rte_eal_alarm_init(void);
+/**
+ * Cleanup alarm mechanism. Frees the memory allocated for the alarm
+ * interrupt handle instance by rte_eal_alarm_init().
+ *
+ * This function is private to EAL.
+ *
+ * Called from rte_eal_cleanup() during teardown; there is no return
+ * value.
+ */
+void rte_eal_alarm_fini(void);
+
/**
* Function is to check if the kernel module(like, vfio, vfio_iommu_type1,
* etc.) loaded.
diff --git a/lib/eal/freebsd/eal.c b/lib/eal/freebsd/eal.c
index 6cee5ae369..7efead4f48 100644
--- a/lib/eal/freebsd/eal.c
+++ b/lib/eal/freebsd/eal.c
@@ -973,6 +973,7 @@ rte_eal_cleanup(void)
rte_eal_memory_detach();
rte_trace_save();
eal_trace_fini();
+ rte_eal_alarm_fini();
eal_cleanup_config(internal_conf);
return 0;
}
diff --git a/lib/eal/freebsd/eal_alarm.c b/lib/eal/freebsd/eal_alarm.c
index b4a0dd533f..13c81518ed 100644
--- a/lib/eal/freebsd/eal_alarm.c
+++ b/lib/eal/freebsd/eal_alarm.c
@@ -46,6 +46,13 @@ static rte_spinlock_t alarm_list_lk = RTE_SPINLOCK_INITIALIZER;
static struct rte_intr_handle *intr_handle;
static void eal_alarm_callback(void *arg);
+void
+rte_eal_alarm_fini(void)
+{
+ if (intr_handle)
+ rte_intr_handle_instance_free(intr_handle);
+}
+
int
rte_eal_alarm_init(void)
{
diff --git a/lib/eal/linux/eal.c b/lib/eal/linux/eal.c
index 3577eaeaa4..5c8af85ad5 100644
--- a/lib/eal/linux/eal.c
+++ b/lib/eal/linux/eal.c
@@ -1370,6 +1370,7 @@ rte_eal_cleanup(void)
rte_eal_memory_detach();
rte_trace_save();
eal_trace_fini();
+ rte_eal_alarm_fini();
eal_cleanup_config(internal_conf);
return 0;
}
diff --git a/lib/eal/linux/eal_alarm.c b/lib/eal/linux/eal_alarm.c
index e959fba27b..5dd804f83c 100644
--- a/lib/eal/linux/eal_alarm.c
+++ b/lib/eal/linux/eal_alarm.c
@@ -58,6 +58,13 @@ static struct rte_intr_handle *intr_handle;
static int handler_registered = 0;
static void eal_alarm_callback(void *arg);
+void
+rte_eal_alarm_fini(void)
+{
+ if (intr_handle)
+ rte_intr_handle_instance_free(intr_handle);
+}
+
int
rte_eal_alarm_init(void)
{
@@ -70,7 +77,8 @@ rte_eal_alarm_init(void)
goto error;
}
- rte_intr_handle_type_set(intr_handle, RTE_INTR_HANDLE_ALARM);
+ if (rte_intr_handle_type_set(intr_handle, RTE_INTR_HANDLE_ALARM))
+ goto error;
/* create a timerfd file descriptor */
if (rte_intr_handle_fd_set(intr_handle,
--
2.18.0
^ permalink raw reply [flat|nested] 152+ messages in thread
* Re: [dpdk-dev] [PATCH v1 0/7] make rte_intr_handle internal
2021-09-03 12:40 ` [dpdk-dev] [PATCH v1 0/7] make rte_intr_handle internal Harman Kalra
` (6 preceding siblings ...)
2021-09-03 12:41 ` [dpdk-dev] [PATCH v1 7/7] eal/alarm: introduce alarm fini routine Harman Kalra
@ 2021-09-15 14:13 ` Harman Kalra
2021-09-23 8:20 ` David Marchand
8 siblings, 0 replies; 152+ messages in thread
From: Harman Kalra @ 2021-09-15 14:13 UTC (permalink / raw)
To: dev
Ping...
Kindly review the series. I would also like to request the PMD maintainers (who use the interrupt APIs) to validate the series for their respective drivers,
as many drivers underwent interrupt-related changes in patch 5 of the series.
Thanks
Harman
> -----Original Message-----
> From: Harman Kalra <hkalra@marvell.com>
> Sent: Friday, September 3, 2021 6:11 PM
> To: dev@dpdk.org
> Cc: Harman Kalra <hkalra@marvell.com>
> Subject: [PATCH v1 0/7] make rte_intr_handle internal
>
> Moving struct rte_intr_handle as an internal structure to avoid any ABI
> breakages in future. Since this structure defines some static arrays and
> changing respective macros breaks the ABI.
> Eg:
> Currently RTE_MAX_RXTX_INTR_VEC_ID imposes a limit of maximum 512
> MSI-X interrupts that can be defined for a PCI device, while PCI specification
> allows maximum 2048 MSI-X interrupts that can be used.
> If some PCI device requires more than 512 vectors, either change the
> RTE_MAX_RXTX_INTR_VEC_ID limit or dynamically allocate based on PCI
> device MSI-X size on probe time. Either way its an ABI breakage.
>
> Change already included in 21.11 ABI improvement spreadsheet (item 42):
> https://urldefense.proofpoint.com/v2/url?u=https-3A__docs.google.com_s
> preadsheets_d_1betlC000ua5SsSiJIcC54mCCCJnW6voH5Dqv9UxeyfE_edit-
> 23gid-
> 3D0&d=DwICaQ&c=nKjWec2b6R0mOyPaz7xtfQ&r=5ESHPj7V-
> 7JdkxT_Z_SU6RrS37ys4U
> XudBQ_rrS5LRo&m=7dl3OmXU7QHMmWYB6V1hYJtq1cUkjfhXUwze2Si_48c
> &s=lh6DEGhR
> Bg1shODpAy3RQk-H-0uQx5icRfUBf9dtCp4&e=
>
>
> This series makes struct rte_intr_handle totally opaque to the outside world
> by wrapping it inside a .c file and providing get set wrapper APIs to read or
> manipulate its fields.. Any changes to be made to any of the fields should be
> done via these get set APIs.
> Introduced a new eal_common_interrupts.c where all these APIs are
> defined and also hides struct rte_intr_handle definition.
>
> Details on each patch of the series:
> Patch 1: eal: interrupt handle API prototypes This patch provides prototypes
> of all the new get set APIs, and also rearranges the headers related to
> interrupt framework. Epoll related definitions prototypes are moved into a
> new header i.e.
> rte_epoll.h and APIs defined in rte_eal_interrupts.h which were driver
> specific are moved to rte_interrupts.h (as anyways it was accessible and used
> outside DPDK library. Later in the series rte_eal_interrupts.h is removed.
>
> Patch 2: eal/interrupts: implement get set APIs Implementing all get, set and
> alloc APIs. Alloc APIs are implemented to allocate memory for interrupt
> handle instance. Currently most of the drivers defines interrupt handle
> instance as static but now it cant be static as size of rte_intr_handle is
> unknown to all the drivers.
> Drivers are expected to allocate interrupt instances during initialization and
> free these instances during cleanup phase.
>
> Patch 3: eal/interrupts: avoid direct access to interrupt handle Modifying the
> interrupt framework for linux and freebsd to use these get set alloc APIs as
> per requirement and avoid accessing the fields directly.
>
> Patch 4: test/interrupt: apply get set interrupt handle APIs Updating
> interrupt test suite to use interrupt handle APIs.
>
> Patch 5: drivers: remove direct access to interrupt handle fields Modifying all
> the drivers and libraries which are currently directly accessing the interrupt
> handle fields. Drivers are expected to allocated the interrupt instance, use
> get set APIs with the allocated interrupt handle and free it on cleanup.
>
> Patch 6: eal/interrupts: make interrupt handle structure opaque In this patch
> rte_eal_interrupt.h is removed, struct rte_intr_handle definition is moved to
> c file to make it completely opaque. As part of interrupt handle allocation,
> array like efds and elist(which are currently
> static) are dynamically allocated with default size
> (RTE_MAX_RXTX_INTR_VEC_ID). Later these arrays can be reallocated as per
> device requirement using new API rte_intr_handle_event_list_update().
> Eg, on PCI device probing MSIX size can be queried and these arrays can be
> reallocated accordingly.
>
> Patch 7: eal/alarm: introduce alarm fini routine Introducing alarm fini routine,
> as the memory allocated for alarm interrupt instance can be freed in alarm
> fini.
>
> Testing performed:
> 1. Validated the series by running interrupts and alarm test suite.
> 2. Validate l3fwd power functionality with octeontx2 and i40e intel cards,
> where interrupts are expected on packet arrival.
>
> v1:
> * Fixed freebsd compilation failure
> * Fixed seg fault in case of memif
>
> Harman Kalra (7):
> eal: interrupt handle API prototypes
> eal/interrupts: implement get set APIs
> eal/interrupts: avoid direct access to interrupt handle
> test/interrupt: apply get set interrupt handle APIs
> drivers: remove direct access to interrupt handle fields
> eal/interrupts: make interrupt handle structure opaque
> eal/alarm: introduce alarm fini routine
>
> MAINTAINERS | 1 +
> app/test/test_interrupts.c | 237 +++---
> drivers/baseband/acc100/rte_acc100_pmd.c | 18 +-
> .../fpga_5gnr_fec/rte_fpga_5gnr_fec.c | 13 +-
> drivers/baseband/fpga_lte_fec/fpga_lte_fec.c | 14 +-
> drivers/bus/auxiliary/auxiliary_common.c | 2 +
> drivers/bus/auxiliary/linux/auxiliary.c | 11 +
> drivers/bus/auxiliary/rte_bus_auxiliary.h | 2 +-
> drivers/bus/dpaa/dpaa_bus.c | 28 +-
> drivers/bus/dpaa/rte_dpaa_bus.h | 2 +-
> drivers/bus/fslmc/fslmc_bus.c | 17 +-
> drivers/bus/fslmc/fslmc_vfio.c | 32 +-
> drivers/bus/fslmc/portal/dpaa2_hw_dpio.c | 21 +-
> drivers/bus/fslmc/portal/dpaa2_hw_pvt.h | 2 +-
> drivers/bus/fslmc/rte_fslmc.h | 2 +-
> drivers/bus/ifpga/ifpga_bus.c | 16 +-
> drivers/bus/ifpga/rte_bus_ifpga.h | 2 +-
> drivers/bus/pci/bsd/pci.c | 21 +-
> drivers/bus/pci/linux/pci.c | 4 +-
> drivers/bus/pci/linux/pci_uio.c | 73 +-
> drivers/bus/pci/linux/pci_vfio.c | 115 ++-
> drivers/bus/pci/pci_common.c | 29 +-
> drivers/bus/pci/pci_common_uio.c | 21 +-
> drivers/bus/pci/rte_bus_pci.h | 4 +-
> drivers/bus/vmbus/linux/vmbus_bus.c | 7 +
> drivers/bus/vmbus/linux/vmbus_uio.c | 37 +-
> drivers/bus/vmbus/rte_bus_vmbus.h | 2 +-
> drivers/bus/vmbus/vmbus_common_uio.c | 24 +-
> drivers/common/cnxk/roc_cpt.c | 8 +-
> drivers/common/cnxk/roc_dev.c | 14 +-
> drivers/common/cnxk/roc_irq.c | 106 +--
> drivers/common/cnxk/roc_nix_irq.c | 37 +-
> drivers/common/cnxk/roc_npa.c | 2 +-
> drivers/common/cnxk/roc_platform.h | 34 +
> drivers/common/cnxk/roc_sso.c | 4 +-
> drivers/common/cnxk/roc_tim.c | 4 +-
> drivers/common/octeontx2/otx2_dev.c | 14 +-
> drivers/common/octeontx2/otx2_irq.c | 117 +--
> .../octeontx2/otx2_cryptodev_hw_access.c | 4 +-
> drivers/event/octeontx2/otx2_evdev_irq.c | 12 +-
> drivers/mempool/octeontx2/otx2_mempool.c | 2 +-
> drivers/net/atlantic/atl_ethdev.c | 22 +-
> drivers/net/avp/avp_ethdev.c | 8 +-
> drivers/net/axgbe/axgbe_ethdev.c | 12 +-
> drivers/net/axgbe/axgbe_mdio.c | 6 +-
> drivers/net/bnx2x/bnx2x_ethdev.c | 10 +-
> drivers/net/bnxt/bnxt_ethdev.c | 32 +-
> drivers/net/bnxt/bnxt_irq.c | 4 +-
> drivers/net/dpaa/dpaa_ethdev.c | 47 +-
> drivers/net/dpaa2/dpaa2_ethdev.c | 10 +-
> drivers/net/e1000/em_ethdev.c | 24 +-
> drivers/net/e1000/igb_ethdev.c | 84 ++-
> drivers/net/ena/ena_ethdev.c | 36 +-
> drivers/net/enic/enic_main.c | 27 +-
> drivers/net/failsafe/failsafe.c | 24 +-
> drivers/net/failsafe/failsafe_intr.c | 45 +-
> drivers/net/failsafe/failsafe_ops.c | 23 +-
> drivers/net/failsafe/failsafe_private.h | 2 +-
> drivers/net/fm10k/fm10k_ethdev.c | 32 +-
> drivers/net/hinic/hinic_pmd_ethdev.c | 10 +-
> drivers/net/hns3/hns3_ethdev.c | 50 +-
> drivers/net/hns3/hns3_ethdev_vf.c | 57 +-
> drivers/net/hns3/hns3_rxtx.c | 2 +-
> drivers/net/i40e/i40e_ethdev.c | 55 +-
> drivers/net/i40e/i40e_ethdev_vf.c | 43 +-
> drivers/net/iavf/iavf_ethdev.c | 41 +-
> drivers/net/iavf/iavf_vchnl.c | 4 +-
> drivers/net/ice/ice_dcf.c | 10 +-
> drivers/net/ice/ice_dcf_ethdev.c | 23 +-
> drivers/net/ice/ice_ethdev.c | 51 +-
> drivers/net/igc/igc_ethdev.c | 47 +-
> drivers/net/ionic/ionic_ethdev.c | 12 +-
> drivers/net/ixgbe/ixgbe_ethdev.c | 70 +-
> drivers/net/memif/memif_socket.c | 114 ++-
> drivers/net/memif/memif_socket.h | 4 +-
> drivers/net/memif/rte_eth_memif.c | 63 +-
> drivers/net/memif/rte_eth_memif.h | 2 +-
> drivers/net/mlx4/mlx4.c | 20 +-
> drivers/net/mlx4/mlx4.h | 2 +-
> drivers/net/mlx4/mlx4_intr.c | 48 +-
> drivers/net/mlx5/linux/mlx5_os.c | 56 +-
> drivers/net/mlx5/linux/mlx5_socket.c | 26 +-
> drivers/net/mlx5/mlx5.h | 6 +-
> drivers/net/mlx5/mlx5_rxq.c | 43 +-
> drivers/net/mlx5/mlx5_trigger.c | 4 +-
> drivers/net/mlx5/mlx5_txpp.c | 27 +-
> drivers/net/netvsc/hn_ethdev.c | 4 +-
> drivers/net/nfp/nfp_common.c | 28 +-
> drivers/net/nfp/nfp_ethdev.c | 13 +-
> drivers/net/nfp/nfp_ethdev_vf.c | 13 +-
> drivers/net/ngbe/ngbe_ethdev.c | 31 +-
> drivers/net/octeontx2/otx2_ethdev_irq.c | 35 +-
> drivers/net/qede/qede_ethdev.c | 16 +-
> drivers/net/sfc/sfc_intr.c | 29 +-
> drivers/net/tap/rte_eth_tap.c | 37 +-
> drivers/net/tap/rte_eth_tap.h | 2 +-
> drivers/net/tap/tap_intr.c | 33 +-
> drivers/net/thunderx/nicvf_ethdev.c | 13 +
> drivers/net/thunderx/nicvf_struct.h | 2 +-
> drivers/net/txgbe/txgbe_ethdev.c | 36 +-
> drivers/net/txgbe/txgbe_ethdev_vf.c | 35 +-
> drivers/net/vhost/rte_eth_vhost.c | 78 +-
> drivers/net/virtio/virtio_ethdev.c | 17 +-
> .../net/virtio/virtio_user/virtio_user_dev.c | 53 +-
> drivers/net/vmxnet3/vmxnet3_ethdev.c | 45 +-
> drivers/raw/ifpga/ifpga_rawdev.c | 42 +-
> drivers/raw/ntb/ntb.c | 10 +-
> .../regex/octeontx2/otx2_regexdev_hw_access.c | 4 +-
> drivers/vdpa/ifc/ifcvf_vdpa.c | 5 +-
> drivers/vdpa/mlx5/mlx5_vdpa.c | 11 +
> drivers/vdpa/mlx5/mlx5_vdpa.h | 4 +-
> drivers/vdpa/mlx5/mlx5_vdpa_event.c | 22 +-
> drivers/vdpa/mlx5/mlx5_vdpa_virtq.c | 46 +-
> lib/bbdev/rte_bbdev.c | 4 +-
> lib/eal/common/eal_common_interrupts.c | 668 +++++++++++++++++
> lib/eal/common/eal_private.h | 11 +
> lib/eal/common/meson.build | 2 +
> lib/eal/freebsd/eal.c | 1 +
> lib/eal/freebsd/eal_alarm.c | 56 +-
> lib/eal/freebsd/eal_interrupts.c | 94 ++-
> lib/eal/include/meson.build | 2 +-
> lib/eal/include/rte_eal_interrupts.h | 269 -------
> lib/eal/include/rte_eal_trace.h | 24 +-
> lib/eal/include/rte_epoll.h | 116 +++
> lib/eal/include/rte_interrupts.h | 673 +++++++++++++++++-
> lib/eal/linux/eal.c | 1 +
> lib/eal/linux/eal_alarm.c | 39 +-
> lib/eal/linux/eal_dev.c | 65 +-
> lib/eal/linux/eal_interrupts.c | 294 +++++---
> lib/eal/version.map | 30 +
> lib/ethdev/ethdev_pci.h | 2 +-
> lib/ethdev/rte_ethdev.c | 14 +-
> 132 files changed, 3797 insertions(+), 1685 deletions(-) create mode 100644
> lib/eal/common/eal_common_interrupts.c
> delete mode 100644 lib/eal/include/rte_eal_interrupts.h
> create mode 100644 lib/eal/include/rte_epoll.h
>
> --
> 2.18.0
^ permalink raw reply [flat|nested] 152+ messages in thread
* Re: [dpdk-dev] [PATCH v1 0/7] make rte_intr_handle internal
2021-09-03 12:40 ` [dpdk-dev] [PATCH v1 0/7] make rte_intr_handle internal Harman Kalra
` (7 preceding siblings ...)
2021-09-15 14:13 ` [dpdk-dev] [PATCH v1 0/7] make rte_intr_handle internal Harman Kalra
@ 2021-09-23 8:20 ` David Marchand
8 siblings, 0 replies; 152+ messages in thread
From: David Marchand @ 2021-09-23 8:20 UTC (permalink / raw)
To: Harman Kalra
Cc: dev, Yigit, Ferruh, Ajit Khaparde, Qi Zhang,
Jerin Jacob Kollanukkaran, Raslan Darawsheh, Maxime Coquelin,
Xia, Chenbo
Hello Harman,
On Fri, Sep 3, 2021 at 2:42 PM Harman Kalra <hkalra@marvell.com> wrote:
>
> Moving struct rte_intr_handle as an internal structure to
> avoid any ABI breakages in future. Since this structure defines
> some static arrays and changing respective macros breaks the ABI.
> Eg:
> Currently RTE_MAX_RXTX_INTR_VEC_ID imposes a limit of maximum 512
> MSI-X interrupts that can be defined for a PCI device, while PCI
> specification allows maximum 2048 MSI-X interrupts that can be used.
> If some PCI device requires more than 512 vectors, either change the
> RTE_MAX_RXTX_INTR_VEC_ID limit or dynamically allocate based on
> PCI device MSI-X size on probe time. Either way its an ABI breakage.
>
> Change already included in 21.11 ABI improvement spreadsheet (item 42):
> https://urldefense.proofpoint.com/v2/url?u=https-3A__docs.google.com_s
> preadsheets_d_1betlC000ua5SsSiJIcC54mCCCJnW6voH5Dqv9UxeyfE_edit-23gid-
> 3D0&d=DwICaQ&c=nKjWec2b6R0mOyPaz7xtfQ&r=5ESHPj7V-7JdkxT_Z_SU6RrS37ys4U
> XudBQ_rrS5LRo&m=7dl3OmXU7QHMmWYB6V1hYJtq1cUkjfhXUwze2Si_48c&s=lh6DEGhR
> Bg1shODpAy3RQk-H-0uQx5icRfUBf9dtCp4&e=
>
Thanks for taking care of this huge cleanup.
I started looking at it.
CC: Ferruh and next-net* maintainers for awareness.
--
David Marchand
^ permalink raw reply [flat|nested] 152+ messages in thread
* Re: [dpdk-dev] [PATCH v1 2/7] eal/interrupts: implement get set APIs
2021-09-03 12:40 ` [dpdk-dev] [PATCH v1 2/7] eal/interrupts: implement get set APIs Harman Kalra
@ 2021-09-28 15:46 ` David Marchand
2021-10-04 8:51 ` [dpdk-dev] [EXT] " Harman Kalra
2021-10-03 18:05 ` [dpdk-dev] " Dmitry Kozlyuk
1 sibling, 1 reply; 152+ messages in thread
From: David Marchand @ 2021-09-28 15:46 UTC (permalink / raw)
To: Harman Kalra; +Cc: dev, Ray Kinsella
On Fri, Sep 3, 2021 at 2:42 PM Harman Kalra <hkalra@marvell.com> wrote:
>
> Implementing get set APIs for interrupt handle fields.
> To make any change to the interrupt handle fields, one
> should make use of these APIs.
Some global comments.
- Please merge API prototype (from patch 1) and actual implementation
in a single patch.
- rte_intr_handle_ seems a rather long prefix, does it really matter
to have the _handle part?
- what part of this API needs to be exported to applications? Let's
hide as much as we can with __rte_internal.
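For instance, something along these lines could keep an accessor out of the
application-visible API (rough sketch only, assuming the existing
__rte_internal annotation, plus listing the symbol under the INTERNAL section
of lib/eal/version.map rather than EXPERIMENTAL):

/* rte_interrupts.h: accessor tagged as internal-only */
__rte_internal
int rte_intr_handle_dev_fd_set(struct rte_intr_handle *intr_handle, int fd);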
>
> Signed-off-by: Harman Kalra <hkalra@marvell.com>
> Acked-by: Ray Kinsella <mdr@ashroe.eu>
> ---
> lib/eal/common/eal_common_interrupts.c | 506 +++++++++++++++++++++++++
> lib/eal/common/meson.build | 2 +
> lib/eal/include/rte_eal_interrupts.h | 6 +-
> lib/eal/version.map | 30 ++
> 4 files changed, 543 insertions(+), 1 deletion(-)
> create mode 100644 lib/eal/common/eal_common_interrupts.c
>
> diff --git a/lib/eal/common/eal_common_interrupts.c b/lib/eal/common/eal_common_interrupts.c
> new file mode 100644
> index 0000000000..2e4fed96f0
> --- /dev/null
> +++ b/lib/eal/common/eal_common_interrupts.c
> @@ -0,0 +1,506 @@
> +/* SPDX-License-Identifier: BSD-3-Clause
> + * Copyright(C) 2021 Marvell.
> + */
> +
> +#include <stdlib.h>
> +#include <string.h>
> +
> +#include <rte_errno.h>
> +#include <rte_log.h>
> +#include <rte_malloc.h>
> +
> +#include <rte_interrupts.h>
> +
> +
> +struct rte_intr_handle *rte_intr_handle_instance_alloc(int size,
> + bool from_hugepage)
> +{
> + struct rte_intr_handle *intr_handle;
> + int i;
> +
> + if (from_hugepage)
> + intr_handle = rte_zmalloc(NULL,
> + size * sizeof(struct rte_intr_handle),
> + 0);
> + else
> + intr_handle = calloc(1, size * sizeof(struct rte_intr_handle));
We can call DPDK allocator in all cases.
That would avoid headaches on why multiprocess does not work in some
rarely tested cases.
Wdyt?
Plus "from_hugepage" is misleading, you could be in --no-huge mode,
rte_zmalloc still works.
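i.e. something like this inside eal_common_interrupts.c, where the struct is
visible (sketch only, the helper name is invented for the example), with
rte_free() then used unconditionally on the release side:

static struct rte_intr_handle *
intr_instance_zalloc(int size)
{
	struct rte_intr_handle *intr_handle;

	/* Single allocation path: rte_zmalloc() also works with --no-huge,
	 * and keeping the instance in DPDK memory avoids multiprocess
	 * surprises compared to a plain calloc().
	 */
	intr_handle = rte_zmalloc(NULL, size * sizeof(*intr_handle), 0);
	if (intr_handle == NULL) {
		RTE_LOG(ERR, EAL, "Failed to allocate intr_handle\n");
		rte_errno = ENOMEM;
		return NULL;
	}
	return intr_handle;
}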
> + if (!intr_handle) {
> + RTE_LOG(ERR, EAL, "Fail to allocate intr_handle\n");
> + rte_errno = ENOMEM;
> + return NULL;
> + }
> +
> + for (i = 0; i < size; i++) {
> + intr_handle[i].nb_intr = RTE_MAX_RXTX_INTR_VEC_ID;
> + intr_handle[i].alloc_from_hugepage = from_hugepage;
> + }
> +
> + return intr_handle;
> +}
> +
> +struct rte_intr_handle *rte_intr_handle_instance_index_get(
> + struct rte_intr_handle *intr_handle, int index)
> +{
> + if (intr_handle == NULL) {
> + RTE_LOG(ERR, EAL, "Interrupt instance unallocated\n");
> + rte_errno = ENOMEM;
> + return NULL;
> + }
> +
> + return &intr_handle[index];
> +}
> +
> +int rte_intr_handle_instance_index_set(struct rte_intr_handle *intr_handle,
> + const struct rte_intr_handle *src,
> + int index)
> +{
> + if (intr_handle == NULL) {
> + RTE_LOG(ERR, EAL, "Interrupt instance unallocated\n");
> + rte_errno = ENOTSUP;
> + goto fail;
> + }
> +
> + if (src == NULL) {
> + RTE_LOG(ERR, EAL, "Source interrupt instance unallocated\n");
> + rte_errno = EINVAL;
> + goto fail;
> + }
> +
> + if (index < 0) {
> + RTE_LOG(ERR, EAL, "Index cany be negative");
> + rte_errno = EINVAL;
> + goto fail;
> + }
> +
> + intr_handle[index].fd = src->fd;
> + intr_handle[index].vfio_dev_fd = src->vfio_dev_fd;
> + intr_handle[index].type = src->type;
> + intr_handle[index].max_intr = src->max_intr;
> + intr_handle[index].nb_efd = src->nb_efd;
> + intr_handle[index].efd_counter_size = src->efd_counter_size;
> +
> + memcpy(intr_handle[index].efds, src->efds, src->nb_intr);
> + memcpy(intr_handle[index].elist, src->elist, src->nb_intr);
> +
> + return 0;
> +fail:
> + return rte_errno;
> +}
> +
> +void rte_intr_handle_instance_free(struct rte_intr_handle *intr_handle)
> +{
> + if (intr_handle == NULL) {
> + RTE_LOG(ERR, EAL, "Interrupt instance unallocated\n");
> + rte_errno = ENOTSUP;
> + }
> +
> + if (intr_handle->alloc_from_hugepage)
> + rte_free(intr_handle);
> + else
> + free(intr_handle);
> +}
> +
> +int rte_intr_handle_fd_set(struct rte_intr_handle *intr_handle, int fd)
> +{
> + if (intr_handle == NULL) {
> + RTE_LOG(ERR, EAL, "Interrupt instance unallocated\n");
> + rte_errno = ENOTSUP;
> + goto fail;
> + }
> +
> + intr_handle->fd = fd;
> +
> + return 0;
> +fail:
> + return rte_errno;
> +}
> +
> +int rte_intr_handle_fd_get(const struct rte_intr_handle *intr_handle)
> +{
> + if (intr_handle == NULL) {
> + RTE_LOG(ERR, EAL, "Interrupt instance unallocated\n");
> + rte_errno = ENOTSUP;
> + goto fail;
> + }
> +
> + return intr_handle->fd;
> +fail:
> + return rte_errno;
> +}
> +
> +int rte_intr_handle_type_set(struct rte_intr_handle *intr_handle,
> + enum rte_intr_handle_type type)
> +{
> + if (intr_handle == NULL) {
> + RTE_LOG(ERR, EAL, "Interrupt instance unallocated\n");
> + rte_errno = ENOTSUP;
> + goto fail;
> + }
> +
> + intr_handle->type = type;
> +
> + return 0;
> +fail:
> + return rte_errno;
> +}
> +
> +enum rte_intr_handle_type rte_intr_handle_type_get(
> + const struct rte_intr_handle *intr_handle)
> +{
> + if (intr_handle == NULL) {
> + RTE_LOG(ERR, EAL, "Interrupt instance unallocated\n");
> + rte_errno = ENOTSUP;
> + return RTE_INTR_HANDLE_UNKNOWN;
> + }
> +
> + return intr_handle->type;
> +}
> +
> +int rte_intr_handle_dev_fd_set(struct rte_intr_handle *intr_handle, int fd)
> +{
> + if (intr_handle == NULL) {
> + RTE_LOG(ERR, EAL, "Interrupt instance unallocated\n");
> + rte_errno = ENOTSUP;
> + goto fail;
> + }
> +
> + intr_handle->vfio_dev_fd = fd;
> +
> + return 0;
> +fail:
> + return rte_errno;
> +}
> +
> +int rte_intr_handle_dev_fd_get(const struct rte_intr_handle *intr_handle)
> +{
> + if (intr_handle == NULL) {
> + RTE_LOG(ERR, EAL, "Interrupt instance unallocated\n");
> + rte_errno = ENOTSUP;
> + goto fail;
> + }
> +
> + return intr_handle->vfio_dev_fd;
> +fail:
> + return rte_errno;
> +}
> +
> +int rte_intr_handle_max_intr_set(struct rte_intr_handle *intr_handle,
> + int max_intr)
> +{
> + if (intr_handle == NULL) {
> + RTE_LOG(ERR, EAL, "Interrupt instance unallocated\n");
> + rte_errno = ENOTSUP;
> + goto fail;
> + }
> +
> + if (max_intr > intr_handle->nb_intr) {
> + RTE_LOG(ERR, EAL, "Max_intr=%d greater than PLT_MAX_RXTX_INTR_VEC_ID=%d",
> + max_intr, intr_handle->nb_intr);
> + rte_errno = ERANGE;
> + goto fail;
> + }
> +
> + intr_handle->max_intr = max_intr;
> +
> + return 0;
> +fail:
> + return rte_errno;
> +}
> +
> +int rte_intr_handle_max_intr_get(const struct rte_intr_handle *intr_handle)
> +{
> + if (intr_handle == NULL) {
> + RTE_LOG(ERR, EAL, "Interrupt instance unallocated\n");
> + rte_errno = ENOTSUP;
> + goto fail;
> + }
> +
> + return intr_handle->max_intr;
> +fail:
> + return rte_errno;
> +}
> +
> +int rte_intr_handle_nb_efd_set(struct rte_intr_handle *intr_handle,
> + int nb_efd)
> +{
> + if (intr_handle == NULL) {
> + RTE_LOG(ERR, EAL, "Interrupt instance unallocated\n");
> + rte_errno = ENOTSUP;
> + goto fail;
> + }
> +
> + intr_handle->nb_efd = nb_efd;
> +
> + return 0;
> +fail:
> + return rte_errno;
> +}
> +
> +int rte_intr_handle_nb_efd_get(const struct rte_intr_handle *intr_handle)
> +{
> + if (intr_handle == NULL) {
> + RTE_LOG(ERR, EAL, "Interrupt instance unallocated\n");
> + rte_errno = ENOTSUP;
> + goto fail;
> + }
> +
> + return intr_handle->nb_efd;
> +fail:
> + return rte_errno;
> +}
> +
> +int rte_intr_handle_nb_intr_get(const struct rte_intr_handle *intr_handle)
> +{
> + if (intr_handle == NULL) {
> + RTE_LOG(ERR, EAL, "Interrupt instance unallocated\n");
> + rte_errno = ENOTSUP;
> + goto fail;
> + }
> +
> + return intr_handle->nb_intr;
> +fail:
> + return rte_errno;
> +}
> +
> +int rte_intr_handle_efd_counter_size_set(struct rte_intr_handle *intr_handle,
> + uint8_t efd_counter_size)
> +{
> + if (intr_handle == NULL) {
> + RTE_LOG(ERR, EAL, "Interrupt instance unallocated\n");
> + rte_errno = ENOTSUP;
> + goto fail;
> + }
> +
> + intr_handle->efd_counter_size = efd_counter_size;
> +
> + return 0;
> +fail:
> + return rte_errno;
> +}
> +
> +int rte_intr_handle_efd_counter_size_get(
> + const struct rte_intr_handle *intr_handle)
> +{
> + if (intr_handle == NULL) {
> + RTE_LOG(ERR, EAL, "Interrupt instance unallocated\n");
> + rte_errno = ENOTSUP;
> + goto fail;
> + }
> +
> + return intr_handle->efd_counter_size;
> +fail:
> + return rte_errno;
> +}
> +
> +int *rte_intr_handle_efds_base(struct rte_intr_handle *intr_handle)
> +{
> + if (intr_handle == NULL) {
> + RTE_LOG(ERR, EAL, "Interrupt instance unallocated\n");
> + rte_errno = ENOTSUP;
> + goto fail;
> + }
> +
> + return intr_handle->efds;
> +fail:
> + return NULL;
> +}
We don't need this new accessor.
It leaks the internal representation to the API caller.
If the internal representation is later changed, we would have to
maintain this array thing.
The only user is drivers/raw/ifpga/ifpga_rawdev.c.
This driver can build an array itself, and call
rte_intr_handle_efds_index_get() as much as needed.
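Roughly like this (untested sketch, using only the getters from this patch):

    int i, nb_intr = rte_intr_handle_nb_intr_get(intr_handle);
    int *efds;

    if (nb_intr <= 0)
        return -EINVAL;
    efds = calloc(nb_intr, sizeof(*efds));
    if (efds == NULL)
        return -ENOMEM;
    for (i = 0; i < nb_intr; i++)
        efds[i] = rte_intr_handle_efds_index_get(intr_handle, i);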
> +
> +int rte_intr_handle_efds_index_get(const struct rte_intr_handle *intr_handle,
> + int index)
> +{
> + if (intr_handle == NULL) {
> + RTE_LOG(ERR, EAL, "Interrupt instance unallocated\n");
> + rte_errno = ENOTSUP;
> + goto fail;
> + }
> +
> + if (index >= intr_handle->nb_intr) {
> + RTE_LOG(ERR, EAL, "Invalid size %d, max limit %d\n", index,
> + intr_handle->nb_intr);
> + rte_errno = EINVAL;
> + goto fail;
> + }
> +
> + return intr_handle->efds[index];
> +fail:
> + return rte_errno;
> +}
> +
> +int rte_intr_handle_efds_index_set(struct rte_intr_handle *intr_handle,
> + int index, int fd)
> +{
> + if (intr_handle == NULL) {
> + RTE_LOG(ERR, EAL, "Interrupt instance unallocated\n");
> + rte_errno = ENOTSUP;
> + goto fail;
> + }
> +
> + if (index >= intr_handle->nb_intr) {
> + RTE_LOG(ERR, EAL, "Invalid size %d, max limit %d\n", index,
> + intr_handle->nb_intr);
> + rte_errno = ERANGE;
> + goto fail;
> + }
> +
> + intr_handle->efds[index] = fd;
> +
> + return 0;
> +fail:
> + return rte_errno;
> +}
> +
> +struct rte_epoll_event *rte_intr_handle_elist_index_get(
> + struct rte_intr_handle *intr_handle, int index)
> +{
> + if (intr_handle == NULL) {
> + RTE_LOG(ERR, EAL, "Interrupt instance unallocated\n");
> + rte_errno = ENOTSUP;
> + goto fail;
> + }
> +
> + if (index >= intr_handle->nb_intr) {
> + RTE_LOG(ERR, EAL, "Invalid size %d, max limit %d\n", index,
> + intr_handle->nb_intr);
> + rte_errno = ERANGE;
> + goto fail;
> + }
> +
> + return &intr_handle->elist[index];
> +fail:
> + return NULL;
> +}
> +
> +int rte_intr_handle_elist_index_set(struct rte_intr_handle *intr_handle,
> + int index, struct rte_epoll_event elist)
> +{
> + if (intr_handle == NULL) {
> + RTE_LOG(ERR, EAL, "Interrupt instance unallocated\n");
> + rte_errno = ENOTSUP;
> + goto fail;
> + }
> +
> + if (index >= intr_handle->nb_intr) {
> + RTE_LOG(ERR, EAL, "Invalid size %d, max limit %d\n", index,
> + intr_handle->nb_intr);
> + rte_errno = ERANGE;
> + goto fail;
> + }
> +
> + intr_handle->elist[index] = elist;
> +
> + return 0;
> +fail:
> + return rte_errno;
> +}
> +
> +int *rte_intr_handle_vec_list_base(const struct rte_intr_handle *intr_handle)
> +{
> + if (intr_handle == NULL) {
> + RTE_LOG(ERR, EAL, "Interrupt instance unallocated\n");
> + rte_errno = ENOTSUP;
> + return NULL;
> + }
> +
> + return intr_handle->intr_vec;
> +}
rte_intr_handle_vec_list_base leaks an internal representation too.
Afaics with the whole series applied, it is always paired with a
rte_intr_handle_vec_list_alloc or rte_intr_handle_vec_list_free.
rte_intr_handle_vec_list_alloc could do this check itself.
And rte_intr_handle_vec_list_free should already be fine, since it
sets intr_vec to NULL.
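In other words, callers should only ever need the alloc/index/free trio, e.g.
(sketch):

    if (rte_intr_handle_vec_list_alloc(intr_handle, NULL, n) != 0)
        return -rte_errno;
    /* ... use rte_intr_handle_vec_list_index_set()/get() on it ... */
    rte_intr_handle_vec_list_free(intr_handle);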
> +
> +int rte_intr_handle_vec_list_alloc(struct rte_intr_handle *intr_handle,
> + const char *name, int size)
> +{
> + if (intr_handle == NULL) {
> + RTE_LOG(ERR, EAL, "Interrupt instance unallocated\n");
> + rte_errno = ENOTSUP;
> + goto fail;
> + }
> +
> + /* Vector list already allocated */
> + if (intr_handle->intr_vec)
> + return 0;
> +
> + if (size > intr_handle->nb_intr) {
> + RTE_LOG(ERR, EAL, "Invalid size %d, max limit %d\n", size,
> + intr_handle->nb_intr);
> + rte_errno = ERANGE;
> + goto fail;
> + }
> +
> + intr_handle->intr_vec = rte_zmalloc(name, size * sizeof(int), 0);
> + if (!intr_handle->intr_vec) {
> + RTE_LOG(ERR, EAL, "Failed to allocate %d intr_vec", size);
> + rte_errno = ENOMEM;
> + goto fail;
> + }
> +
> + intr_handle->vec_list_size = size;
> +
> + return 0;
> +fail:
> + return rte_errno;
> +}
> +
> +int rte_intr_handle_vec_list_index_get(
> + const struct rte_intr_handle *intr_handle, int index)
> +{
> + if (intr_handle == NULL) {
> + RTE_LOG(ERR, EAL, "Interrupt instance unallocated\n");
> + rte_errno = ENOTSUP;
> + goto fail;
> + }
> +
> + if (!intr_handle->intr_vec) {
> + RTE_LOG(ERR, EAL, "Intr vector list not allocated\n");
> + rte_errno = ENOTSUP;
> + goto fail;
> + }
> +
> + if (index > intr_handle->vec_list_size) {
> + RTE_LOG(ERR, EAL, "Index %d greater than vec list size %d\n",
> + index, intr_handle->vec_list_size);
> + rte_errno = ERANGE;
> + goto fail;
> + }
> +
> + return intr_handle->intr_vec[index];
> +fail:
> + return rte_errno;
> +}
> +
> +int rte_intr_handle_vec_list_index_set(struct rte_intr_handle *intr_handle,
> + int index, int vec)
> +{
> + if (intr_handle == NULL) {
> + RTE_LOG(ERR, EAL, "Interrupt instance unallocated\n");
> + rte_errno = ENOTSUP;
> + goto fail;
> + }
> +
> + if (!intr_handle->intr_vec) {
> + RTE_LOG(ERR, EAL, "Intr vector list not allocated\n");
> + rte_errno = ENOTSUP;
> + goto fail;
> + }
> +
> + if (index > intr_handle->vec_list_size) {
> + RTE_LOG(ERR, EAL, "Index %d greater than vec list size %d\n",
> + index, intr_handle->vec_list_size);
> + rte_errno = ERANGE;
> + goto fail;
> + }
> +
> + intr_handle->intr_vec[index] = vec;
> +
> + return 0;
> +fail:
> + return rte_errno;
> +}
> +
> +void rte_intr_handle_vec_list_free(struct rte_intr_handle *intr_handle)
> +{
> + if (intr_handle == NULL) {
> + RTE_LOG(ERR, EAL, "Interrupt instance unallocated\n");
> + rte_errno = ENOTSUP;
> + }
> +
> + rte_free(intr_handle->intr_vec);
> + intr_handle->intr_vec = NULL;
> +}
> diff --git a/lib/eal/common/meson.build b/lib/eal/common/meson.build
> index edfca77779..47f2977539 100644
> --- a/lib/eal/common/meson.build
> +++ b/lib/eal/common/meson.build
> @@ -17,6 +17,7 @@ if is_windows
> 'eal_common_errno.c',
> 'eal_common_fbarray.c',
> 'eal_common_hexdump.c',
> + 'eal_common_interrupts.c',
> 'eal_common_launch.c',
> 'eal_common_lcore.c',
> 'eal_common_log.c',
> @@ -53,6 +54,7 @@ sources += files(
> 'eal_common_fbarray.c',
> 'eal_common_hexdump.c',
> 'eal_common_hypervisor.c',
> + 'eal_common_interrupts.c',
> 'eal_common_launch.c',
> 'eal_common_lcore.c',
> 'eal_common_log.c',
> diff --git a/lib/eal/include/rte_eal_interrupts.h b/lib/eal/include/rte_eal_interrupts.h
> index 68ca3a042d..216aece61b 100644
> --- a/lib/eal/include/rte_eal_interrupts.h
> +++ b/lib/eal/include/rte_eal_interrupts.h
> @@ -55,13 +55,17 @@ struct rte_intr_handle {
> };
> void *handle; /**< device driver handle (Windows) */
> };
> + bool alloc_from_hugepage;
> enum rte_intr_handle_type type; /**< handle type */
> uint32_t max_intr; /**< max interrupt requested */
> uint32_t nb_efd; /**< number of available efd(event fd) */
> uint8_t efd_counter_size; /**< size of efd counter, used for vdev */
> + uint16_t nb_intr;
> + /**< Max vector count, default RTE_MAX_RXTX_INTR_VEC_ID */
> int efds[RTE_MAX_RXTX_INTR_VEC_ID]; /**< intr vectors/efds mapping */
> struct rte_epoll_event elist[RTE_MAX_RXTX_INTR_VEC_ID];
> - /**< intr vector epoll event */
> + /**< intr vector epoll event */
> + uint16_t vec_list_size;
> int *intr_vec; /**< intr vector number array */
> };
>
> diff --git a/lib/eal/version.map b/lib/eal/version.map
> index beeb986adc..56108d0998 100644
> --- a/lib/eal/version.map
> +++ b/lib/eal/version.map
> @@ -426,6 +426,36 @@ EXPERIMENTAL {
>
> # added in 21.08
> rte_power_monitor_multi; # WINDOWS_NO_EXPORT
> +
> + # added in 21.11
> + rte_intr_handle_fd_set;
> + rte_intr_handle_fd_get;
> + rte_intr_handle_dev_fd_set;
> + rte_intr_handle_dev_fd_get;
> + rte_intr_handle_type_set;
> + rte_intr_handle_type_get;
> + rte_intr_handle_instance_alloc;
> + rte_intr_handle_instance_index_get;
> + rte_intr_handle_instance_free;
> + rte_intr_handle_instance_index_set;
> + rte_intr_handle_event_list_update;
> + rte_intr_handle_max_intr_set;
> + rte_intr_handle_max_intr_get;
> + rte_intr_handle_nb_efd_set;
> + rte_intr_handle_nb_efd_get;
> + rte_intr_handle_nb_intr_get;
> + rte_intr_handle_efds_index_set;
> + rte_intr_handle_efds_index_get;
> + rte_intr_handle_efds_base;
> + rte_intr_handle_elist_index_set;
> + rte_intr_handle_elist_index_get;
> + rte_intr_handle_efd_counter_size_set;
> + rte_intr_handle_efd_counter_size_get;
> + rte_intr_handle_vec_list_alloc;
> + rte_intr_handle_vec_list_index_set;
> + rte_intr_handle_vec_list_index_get;
> + rte_intr_handle_vec_list_free;
> + rte_intr_handle_vec_list_base;
> };
>
> INTERNAL {
> --
> 2.18.0
>
--
David Marchand
^ permalink raw reply [flat|nested] 152+ messages in thread
* Re: [dpdk-dev] [PATCH v1 2/7] eal/interrupts: implement get set APIs
2021-09-03 12:40 ` [dpdk-dev] [PATCH v1 2/7] eal/interrupts: implement get set APIs Harman Kalra
2021-09-28 15:46 ` David Marchand
@ 2021-10-03 18:05 ` Dmitry Kozlyuk
2021-10-04 10:37 ` [dpdk-dev] [EXT] " Harman Kalra
1 sibling, 1 reply; 152+ messages in thread
From: Dmitry Kozlyuk @ 2021-10-03 18:05 UTC (permalink / raw)
To: Harman Kalra; +Cc: dev, Ray Kinsella
2021-09-03 18:10 (UTC+0530), Harman Kalra:
> [...]
> diff --git a/lib/eal/common/eal_common_interrupts.c b/lib/eal/common/eal_common_interrupts.c
> new file mode 100644
> index 0000000000..2e4fed96f0
> --- /dev/null
> +++ b/lib/eal/common/eal_common_interrupts.c
> @@ -0,0 +1,506 @@
> +/* SPDX-License-Identifier: BSD-3-Clause
> + * Copyright(C) 2021 Marvell.
> + */
> +
> +#include <stdlib.h>
> +#include <string.h>
> +
> +#include <rte_errno.h>
> +#include <rte_log.h>
> +#include <rte_malloc.h>
> +
> +#include <rte_interrupts.h>
> +
> +
> +struct rte_intr_handle *rte_intr_handle_instance_alloc(int size,
> + bool from_hugepage)
Since the purpose of the series is to reduce future ABI breakages,
how about making the second parameter "flags" to have some spare bits?
(If not removing it completely per David's suggestion.)
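For example (purely illustrative, the flag name and bit are made up here):

    #define RTE_INTR_ALLOC_DPDK_ALLOCATOR (UINT32_C(1) << 0)

    struct rte_intr_handle *
    rte_intr_handle_instance_alloc(int size, uint32_t flags);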
> +{
> + struct rte_intr_handle *intr_handle;
> + int i;
> +
> + if (from_hugepage)
> + intr_handle = rte_zmalloc(NULL,
> + size * sizeof(struct rte_intr_handle),
> + 0);
> + else
> + intr_handle = calloc(1, size * sizeof(struct rte_intr_handle));
> + if (!intr_handle) {
> + RTE_LOG(ERR, EAL, "Fail to allocate intr_handle\n");
> + rte_errno = ENOMEM;
> + return NULL;
> + }
> +
> + for (i = 0; i < size; i++) {
> + intr_handle[i].nb_intr = RTE_MAX_RXTX_INTR_VEC_ID;
> + intr_handle[i].alloc_from_hugepage = from_hugepage;
> + }
> +
> + return intr_handle;
> +}
> +
> +struct rte_intr_handle *rte_intr_handle_instance_index_get(
> + struct rte_intr_handle *intr_handle, int index)
If rte_intr_handle_instance_alloc() returns a pointer to an array,
this function is useless since the user can simply manipulate a pointer.
If we want to make a distinction between a single struct rte_intr_handle and a
commonly allocated bunch of such (but why?), then they should be represented
by distinct types.
> +{
> + if (intr_handle == NULL) {
> + RTE_LOG(ERR, EAL, "Interrupt instance unallocated\n");
> + rte_errno = ENOMEM;
Why it's sometimes ENOMEM and sometimes ENOTSUP when the handle is not
allocated?
> + return NULL;
> + }
> +
> + return &intr_handle[index];
> +}
> +
> +int rte_intr_handle_instance_index_set(struct rte_intr_handle *intr_handle,
> + const struct rte_intr_handle *src,
> + int index)
See above regarding the "index" parameter. If it can be removed, a better name
for this function would be rte_intr_handle_copy().
> +{
> + if (intr_handle == NULL) {
> + RTE_LOG(ERR, EAL, "Interrupt instance unallocated\n");
> + rte_errno = ENOTSUP;
> + goto fail;
> + }
> +
> + if (src == NULL) {
> + RTE_LOG(ERR, EAL, "Source interrupt instance unallocated\n");
> + rte_errno = EINVAL;
> + goto fail;
> + }
> +
> + if (index < 0) {
> + RTE_LOG(ERR, EAL, "Index cany be negative");
> + rte_errno = EINVAL;
> + goto fail;
> + }
How about making this parameter "size_t"?
> +
> + intr_handle[index].fd = src->fd;
> + intr_handle[index].vfio_dev_fd = src->vfio_dev_fd;
> + intr_handle[index].type = src->type;
> + intr_handle[index].max_intr = src->max_intr;
> + intr_handle[index].nb_efd = src->nb_efd;
> + intr_handle[index].efd_counter_size = src->efd_counter_size;
> +
> + memcpy(intr_handle[index].efds, src->efds, src->nb_intr);
> + memcpy(intr_handle[index].elist, src->elist, src->nb_intr);
> +
> + return 0;
> +fail:
> + return rte_errno;
Should be (-rte_errno) per documentation.
Please check all functions in this file that return an "int" status.
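I.e. the error paths should end with something like:

    fail:
        return -rte_errno;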
> [...]
> +int rte_intr_handle_dev_fd_get(const struct rte_intr_handle *intr_handle)
> +{
> + if (intr_handle == NULL) {
> + RTE_LOG(ERR, EAL, "Interrupt instance unallocated\n");
> + rte_errno = ENOTSUP;
> + goto fail;
> + }
> +
> + return intr_handle->vfio_dev_fd;
> +fail:
> + return rte_errno;
> +}
Returning an errno value instead of an FD is very error-prone.
Probably returning (-1) is both safe and convenient?
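E.g. (sketch):

    int rte_intr_handle_dev_fd_get(const struct rte_intr_handle *intr_handle)
    {
        if (intr_handle == NULL) {
            RTE_LOG(ERR, EAL, "Interrupt instance unallocated\n");
            rte_errno = ENOTSUP;
            return -1; /* cannot be mistaken for a valid fd */
        }

        return intr_handle->vfio_dev_fd;
    }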
> +
> +int rte_intr_handle_max_intr_set(struct rte_intr_handle *intr_handle,
> + int max_intr)
> +{
> + if (intr_handle == NULL) {
> + RTE_LOG(ERR, EAL, "Interrupt instance unallocated\n");
> + rte_errno = ENOTSUP;
> + goto fail;
> + }
> +
> + if (max_intr > intr_handle->nb_intr) {
> + RTE_LOG(ERR, EAL, "Max_intr=%d greater than PLT_MAX_RXTX_INTR_VEC_ID=%d",
Seems like this common/cnxk name leaked here by mistake?
> + max_intr, intr_handle->nb_intr);
> + rte_errno = ERANGE;
> + goto fail;
> + }
> +
> + intr_handle->max_intr = max_intr;
> +
> + return 0;
> +fail:
> + return rte_errno;
> +}
> +
> +int rte_intr_handle_max_intr_get(const struct rte_intr_handle *intr_handle)
> +{
> + if (intr_handle == NULL) {
> + RTE_LOG(ERR, EAL, "Interrupt instance unallocated\n");
> + rte_errno = ENOTSUP;
> + goto fail;
> + }
> +
> + return intr_handle->max_intr;
> +fail:
> + return rte_errno;
> +}
Should be negative per documentation and to avoid returning a positive value
that cannot be distinguished from a successful return.
Please also check other functions in this file returning an "int" result
(not status).
^ permalink raw reply [flat|nested] 152+ messages in thread
* Re: [dpdk-dev] [PATCH v1 6/7] eal/interrupts: make interrupt handle structure opaque
2021-09-03 12:41 ` [dpdk-dev] [PATCH v1 6/7] eal/interrupts: make interrupt handle structure opaque Harman Kalra
@ 2021-10-03 18:16 ` Dmitry Kozlyuk
2021-10-04 14:09 ` [dpdk-dev] [EXT] " Harman Kalra
0 siblings, 1 reply; 152+ messages in thread
From: Dmitry Kozlyuk @ 2021-10-03 18:16 UTC (permalink / raw)
To: Harman Kalra; +Cc: dev, Anatoly Burakov
2021-09-03 18:11 (UTC+0530), Harman Kalra:
> [...]
> @@ -31,11 +54,40 @@ struct rte_intr_handle *rte_intr_handle_instance_alloc(int size,
> }
>
> for (i = 0; i < size; i++) {
> + if (from_hugepage)
> + intr_handle[i].efds = rte_zmalloc(NULL,
> + RTE_MAX_RXTX_INTR_VEC_ID * sizeof(uint32_t), 0);
> + else
> + intr_handle[i].efds = calloc(1,
> + RTE_MAX_RXTX_INTR_VEC_ID * sizeof(uint32_t));
> + if (!intr_handle[i].efds) {
> + RTE_LOG(ERR, EAL, "Fail to allocate event fd list\n");
> + rte_errno = ENOMEM;
> + goto fail;
> + }
> +
> + if (from_hugepage)
> + intr_handle[i].elist = rte_zmalloc(NULL,
> + RTE_MAX_RXTX_INTR_VEC_ID *
> + sizeof(struct rte_epoll_event), 0);
> + else
> + intr_handle[i].elist = calloc(1,
> + RTE_MAX_RXTX_INTR_VEC_ID *
> + sizeof(struct rte_epoll_event));
> + if (!intr_handle[i].elist) {
> + RTE_LOG(ERR, EAL, "fail to allocate event fd list\n");
> + rte_errno = ENOMEM;
> + goto fail;
> + }
> intr_handle[i].nb_intr = RTE_MAX_RXTX_INTR_VEC_ID;
> intr_handle[i].alloc_from_hugepage = from_hugepage;
> }
>
> return intr_handle;
> +fail:
> + free(intr_handle->efds);
> + free(intr_handle);
> + return NULL;
This is incorrect if "from_hugepage" is set.
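A matching cleanup would have to use the same allocator and also release what
the earlier loop iterations allocated, e.g. (rough sketch; rte_free()/free()
accept NULL and both allocators zero the memory, so partially initialized
entries are handled):

    fail:
        for (; i >= 0; i--) {
            if (from_hugepage) {
                rte_free(intr_handle[i].elist);
                rte_free(intr_handle[i].efds);
            } else {
                free(intr_handle[i].elist);
                free(intr_handle[i].efds);
            }
        }
        if (from_hugepage)
            rte_free(intr_handle);
        else
            free(intr_handle);
        return NULL;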
> }
>
> struct rte_intr_handle *rte_intr_handle_instance_index_get(
> @@ -73,12 +125,48 @@ int rte_intr_handle_instance_index_set(struct rte_intr_handle *intr_handle,
> }
>
> intr_handle[index].fd = src->fd;
> - intr_handle[index].vfio_dev_fd = src->vfio_dev_fd;
> + intr_handle[index].dev_fd = src->dev_fd;
> +
> intr_handle[index].type = src->type;
> intr_handle[index].max_intr = src->max_intr;
> intr_handle[index].nb_efd = src->nb_efd;
> intr_handle[index].efd_counter_size = src->efd_counter_size;
>
> + if (intr_handle[index].nb_intr != src->nb_intr) {
> + if (src->alloc_from_hugepage)
> + intr_handle[index].efds =
> + rte_realloc(intr_handle[index].efds,
> + src->nb_intr *
> + sizeof(uint32_t), 0);
> + else
> + intr_handle[index].efds =
> + realloc(intr_handle[index].efds,
> + src->nb_intr * sizeof(uint32_t));
> + if (intr_handle[index].efds == NULL) {
> + RTE_LOG(ERR, EAL, "Failed to realloc the efds list");
> + rte_errno = ENOMEM;
> + goto fail;
> + }
> +
> + if (src->alloc_from_hugepage)
> + intr_handle[index].elist =
> + rte_realloc(intr_handle[index].elist,
> + src->nb_intr *
> + sizeof(struct rte_epoll_event), 0);
> + else
> + intr_handle[index].elist =
> + realloc(intr_handle[index].elist,
> + src->nb_intr *
> + sizeof(struct rte_epoll_event));
> + if (intr_handle[index].elist == NULL) {
> + RTE_LOG(ERR, EAL, "Failed to realloc the event list");
> + rte_errno = ENOMEM;
> + goto fail;
> + }
> +
> + intr_handle[index].nb_intr = src->nb_intr;
> + }
> +
This implementation leaves "intr_handle" in an invalid state
and leaks memory on error paths.
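The usual pattern is to realloc into a temporary pointer, e.g. for the
plain-heap branch (sketch; the rte_realloc branch is analogous):

    int *tmp = realloc(intr_handle[index].efds, src->nb_intr * sizeof(*tmp));

    if (tmp == NULL) {
        rte_errno = ENOMEM;
        return -rte_errno; /* old efds buffer is untouched, nothing leaks */
    }
    intr_handle[index].efds = tmp;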
> memcpy(intr_handle[index].efds, src->efds, src->nb_intr);
> memcpy(intr_handle[index].elist, src->elist, src->nb_intr);
>
> @@ -87,6 +175,45 @@ int rte_intr_handle_instance_index_set(struct rte_intr_handle *intr_handle,
> return rte_errno;
> }
>
> +int rte_intr_handle_event_list_update(struct rte_intr_handle *intr_handle,
> + int size)
> +{
> + if (intr_handle == NULL) {
> + RTE_LOG(ERR, EAL, "Interrupt instance unallocated\n");
> + rte_errno = ENOTSUP;
> + goto fail;
> + }
> +
> + if (size == 0) {
> + RTE_LOG(ERR, EAL, "Size can't be zero\n");
> + rte_errno = EINVAL;
> + goto fail;
> + }
> +
> + intr_handle->efds = realloc(intr_handle->efds,
> + size * sizeof(uint32_t));
> + if (intr_handle->efds == NULL) {
> + RTE_LOG(ERR, EAL, "Failed to realloc the efds list");
> + rte_errno = ENOMEM;
> + goto fail;
> + }
> +
> + intr_handle->elist = realloc(intr_handle->elist,
> + size * sizeof(struct rte_epoll_event));
> + if (intr_handle->elist == NULL) {
> + RTE_LOG(ERR, EAL, "Failed to realloc the event list");
> + rte_errno = ENOMEM;
> + goto fail;
> + }
> +
> + intr_handle->nb_intr = size;
> +
> + return 0;
> +fail:
> + return rte_errno;
> +}
> +
> +
Same here.
> [...]
> diff --git a/lib/eal/include/rte_interrupts.h b/lib/eal/include/rte_interrupts.h
> index afc3262967..7dfb849eea 100644
> --- a/lib/eal/include/rte_interrupts.h
> +++ b/lib/eal/include/rte_interrupts.h
> @@ -25,9 +25,29 @@ extern "C" {
> /** Interrupt handle */
> struct rte_intr_handle;
>
> -#define RTE_INTR_HANDLE_DEFAULT_SIZE 1
> +#define RTE_MAX_RXTX_INTR_VEC_ID 512
> +#define RTE_INTR_VEC_ZERO_OFFSET 0
> +#define RTE_INTR_VEC_RXTX_OFFSET 1
> +
> +/**
> + * The interrupt source type, e.g. UIO, VFIO, ALARM etc.
> + */
> +enum rte_intr_handle_type {
> + RTE_INTR_HANDLE_UNKNOWN = 0, /**< generic unknown handle */
> + RTE_INTR_HANDLE_UIO, /**< uio device handle */
> + RTE_INTR_HANDLE_UIO_INTX, /**< uio generic handle */
> + RTE_INTR_HANDLE_VFIO_LEGACY, /**< vfio device handle (legacy) */
> + RTE_INTR_HANDLE_VFIO_MSI, /**< vfio device handle (MSI) */
> + RTE_INTR_HANDLE_VFIO_MSIX, /**< vfio device handle (MSIX) */
> + RTE_INTR_HANDLE_ALARM, /**< alarm handle */
> + RTE_INTR_HANDLE_EXT, /**< external handler */
> + RTE_INTR_HANDLE_VDEV, /**< virtual device */
> + RTE_INTR_HANDLE_DEV_EVENT, /**< device event handle */
> + RTE_INTR_HANDLE_VFIO_REQ, /**< VFIO request handle */
> + RTE_INTR_HANDLE_MAX /**< count of elements */
Enums shouldn't have a _MAX member, can we remove it?
> +};
>
> -#include "rte_eal_interrupts.h"
> +#define RTE_INTR_HANDLE_DEFAULT_SIZE 1
I find this constant more cluttering call sites than helpful.
If a handle is allocated with a calloc-like function, plain 1 reads just fine.
^ permalink raw reply [flat|nested] 152+ messages in thread
* Re: [dpdk-dev] [EXT] Re: [PATCH v1 2/7] eal/interrupts: implement get set APIs
2021-09-28 15:46 ` David Marchand
@ 2021-10-04 8:51 ` Harman Kalra
2021-10-04 9:57 ` David Marchand
0 siblings, 1 reply; 152+ messages in thread
From: Harman Kalra @ 2021-10-04 8:51 UTC (permalink / raw)
To: David Marchand; +Cc: dev, Ray Kinsella
Hi David,
Thanks for the review.
Please see my comments inline.
> -----Original Message-----
> From: David Marchand <david.marchand@redhat.com>
> Sent: Tuesday, September 28, 2021 9:17 PM
> To: Harman Kalra <hkalra@marvell.com>
> Cc: dev <dev@dpdk.org>; Ray Kinsella <mdr@ashroe.eu>
> Subject: [EXT] Re: [dpdk-dev] [PATCH v1 2/7] eal/interrupts: implement get
> set APIs
>
> External Email
>
> ----------------------------------------------------------------------
> On Fri, Sep 3, 2021 at 2:42 PM Harman Kalra <hkalra@marvell.com> wrote:
> >
> > Implementing get set APIs for interrupt handle fields.
> > To make any change to the interrupt handle fields, one should make use
> > of these APIs.
>
> Some global comments.
>
> - Please merge API prototype (from patch 1) and actual implementation in a
> single patch.
<HK> Sure, will do.
> - rte_intr_handle_ seems a rather long prefix, does it really matter to have
> the _handle part?
<HK> Will fix the API names.
> - what part of this API needs to be exported to applications? Let's hide as
> much as we can with __rte_internal.
<HK> I will mark all the APIs (new and some existing ones) that are not used in the test suite or example applications as __rte_internal.
>
>
> >
> > Signed-off-by: Harman Kalra <hkalra@marvell.com>
> > Acked-by: Ray Kinsella <mdr@ashroe.eu>
> > ---
> > lib/eal/common/eal_common_interrupts.c | 506
> +++++++++++++++++++++++++
> > lib/eal/common/meson.build | 2 +
> > lib/eal/include/rte_eal_interrupts.h | 6 +-
> > lib/eal/version.map | 30 ++
> > 4 files changed, 543 insertions(+), 1 deletion(-) create mode 100644
> > lib/eal/common/eal_common_interrupts.c
> >
> > diff --git a/lib/eal/common/eal_common_interrupts.c
> > b/lib/eal/common/eal_common_interrupts.c
> > new file mode 100644
> > index 0000000000..2e4fed96f0
> > --- /dev/null
> > +++ b/lib/eal/common/eal_common_interrupts.c
> > @@ -0,0 +1,506 @@
> > +/* SPDX-License-Identifier: BSD-3-Clause
> > + * Copyright(C) 2021 Marvell.
> > + */
> > +
> > +#include <stdlib.h>
> > +#include <string.h>
> > +
> > +#include <rte_errno.h>
> > +#include <rte_log.h>
> > +#include <rte_malloc.h>
> > +
> > +#include <rte_interrupts.h>
> > +
> > +
> > +struct rte_intr_handle *rte_intr_handle_instance_alloc(int size,
> > + bool
> > +from_hugepage) {
> > + struct rte_intr_handle *intr_handle;
> > + int i;
> > +
> > + if (from_hugepage)
> > + intr_handle = rte_zmalloc(NULL,
> > + size * sizeof(struct rte_intr_handle),
> > + 0);
> > + else
> > + intr_handle = calloc(1, size * sizeof(struct
> > + rte_intr_handle));
>
> We can call DPDK allocator in all cases.
> That would avoid headaches on why multiprocess does not work in some
> rarely tested cases.
> Wdyt?
>
> Plus "from_hugepage" is misleading, you could be in --no-huge mode,
> rte_zmalloc still works.
<HK> In the mlx5 driver the interrupt handle instance is freed in the destructor
mlx5_pmd_interrupt_handler_uninstall(), while the DPDK memory allocators
are already cleaned up in rte_eal_cleanup(). Hence I allocated interrupt
instances for such cases from the normal heap. There could be other such
cases, so I think it's OK to keep this support.
Regarding the name, I will change "from_hugepage" to "dpdk_allocator".
As per Dmitry's suggestion, I will replace the bool argument with a flags
variable, to support more such configurations in the future.
>
>
> > + if (!intr_handle) {
> > + RTE_LOG(ERR, EAL, "Fail to allocate intr_handle\n");
> > + rte_errno = ENOMEM;
> > + return NULL;
> > + }
> > +
> > + for (i = 0; i < size; i++) {
> > + intr_handle[i].nb_intr = RTE_MAX_RXTX_INTR_VEC_ID;
> > + intr_handle[i].alloc_from_hugepage = from_hugepage;
> > + }
> > +
> > + return intr_handle;
> > +}
> > +
> > +struct rte_intr_handle *rte_intr_handle_instance_index_get(
> > + struct rte_intr_handle *intr_handle,
> > +int index) {
> > + if (intr_handle == NULL) {
> > + RTE_LOG(ERR, EAL, "Interrupt instance unallocated\n");
> > + rte_errno = ENOMEM;
> > + return NULL;
> > + }
> > +
> > + return &intr_handle[index];
> > +}
> > +
> > +int rte_intr_handle_instance_index_set(struct rte_intr_handle
> *intr_handle,
> > + const struct rte_intr_handle *src,
> > + int index) {
> > + if (intr_handle == NULL) {
> > + RTE_LOG(ERR, EAL, "Interrupt instance unallocated\n");
> > + rte_errno = ENOTSUP;
> > + goto fail;
> > + }
> > +
> > + if (src == NULL) {
> > + RTE_LOG(ERR, EAL, "Source interrupt instance unallocated\n");
> > + rte_errno = EINVAL;
> > + goto fail;
> > + }
> > +
> > + if (index < 0) {
> > + RTE_LOG(ERR, EAL, "Index cany be negative");
> > + rte_errno = EINVAL;
> > + goto fail;
> > + }
> > +
> > + intr_handle[index].fd = src->fd;
> > + intr_handle[index].vfio_dev_fd = src->vfio_dev_fd;
> > + intr_handle[index].type = src->type;
> > + intr_handle[index].max_intr = src->max_intr;
> > + intr_handle[index].nb_efd = src->nb_efd;
> > + intr_handle[index].efd_counter_size = src->efd_counter_size;
> > +
> > + memcpy(intr_handle[index].efds, src->efds, src->nb_intr);
> > + memcpy(intr_handle[index].elist, src->elist, src->nb_intr);
> > +
> > + return 0;
> > +fail:
> > + return rte_errno;
> > +}
> > +
> > +void rte_intr_handle_instance_free(struct rte_intr_handle
> > +*intr_handle) {
> > + if (intr_handle == NULL) {
> > + RTE_LOG(ERR, EAL, "Interrupt instance unallocated\n");
> > + rte_errno = ENOTSUP;
> > + }
> > +
> > + if (intr_handle->alloc_from_hugepage)
> > + rte_free(intr_handle);
> > + else
> > + free(intr_handle);
> > +}
> > +
> > +int rte_intr_handle_fd_set(struct rte_intr_handle *intr_handle, int
> > +fd) {
> > + if (intr_handle == NULL) {
> > + RTE_LOG(ERR, EAL, "Interrupt instance unallocated\n");
> > + rte_errno = ENOTSUP;
> > + goto fail;
> > + }
> > +
> > + intr_handle->fd = fd;
> > +
> > + return 0;
> > +fail:
> > + return rte_errno;
> > +}
> > +
> > +int rte_intr_handle_fd_get(const struct rte_intr_handle *intr_handle)
> > +{
> > + if (intr_handle == NULL) {
> > + RTE_LOG(ERR, EAL, "Interrupt instance unallocated\n");
> > + rte_errno = ENOTSUP;
> > + goto fail;
> > + }
> > +
> > + return intr_handle->fd;
> > +fail:
> > + return rte_errno;
> > +}
> > +
> > +int rte_intr_handle_type_set(struct rte_intr_handle *intr_handle,
> > + enum rte_intr_handle_type type) {
> > + if (intr_handle == NULL) {
> > + RTE_LOG(ERR, EAL, "Interrupt instance unallocated\n");
> > + rte_errno = ENOTSUP;
> > + goto fail;
> > + }
> > +
> > + intr_handle->type = type;
> > +
> > + return 0;
> > +fail:
> > + return rte_errno;
> > +}
> > +
> > +enum rte_intr_handle_type rte_intr_handle_type_get(
> > + const struct rte_intr_handle
> > +*intr_handle) {
> > + if (intr_handle == NULL) {
> > + RTE_LOG(ERR, EAL, "Interrupt instance unallocated\n");
> > + rte_errno = ENOTSUP;
> > + return RTE_INTR_HANDLE_UNKNOWN;
> > + }
> > +
> > + return intr_handle->type;
> > +}
> > +
> > +int rte_intr_handle_dev_fd_set(struct rte_intr_handle *intr_handle,
> > +int fd) {
> > + if (intr_handle == NULL) {
> > + RTE_LOG(ERR, EAL, "Interrupt instance unallocated\n");
> > + rte_errno = ENOTSUP;
> > + goto fail;
> > + }
> > +
> > + intr_handle->vfio_dev_fd = fd;
> > +
> > + return 0;
> > +fail:
> > + return rte_errno;
> > +}
> > +
> > +int rte_intr_handle_dev_fd_get(const struct rte_intr_handle
> > +*intr_handle) {
> > + if (intr_handle == NULL) {
> > + RTE_LOG(ERR, EAL, "Interrupt instance unallocated\n");
> > + rte_errno = ENOTSUP;
> > + goto fail;
> > + }
> > +
> > + return intr_handle->vfio_dev_fd;
> > +fail:
> > + return rte_errno;
> > +}
> > +
> > +int rte_intr_handle_max_intr_set(struct rte_intr_handle *intr_handle,
> > + int max_intr) {
> > + if (intr_handle == NULL) {
> > + RTE_LOG(ERR, EAL, "Interrupt instance unallocated\n");
> > + rte_errno = ENOTSUP;
> > + goto fail;
> > + }
> > +
> > + if (max_intr > intr_handle->nb_intr) {
> > + RTE_LOG(ERR, EAL, "Max_intr=%d greater than
> PLT_MAX_RXTX_INTR_VEC_ID=%d",
> > + max_intr, intr_handle->nb_intr);
> > + rte_errno = ERANGE;
> > + goto fail;
> > + }
> > +
> > + intr_handle->max_intr = max_intr;
> > +
> > + return 0;
> > +fail:
> > + return rte_errno;
> > +}
> > +
> > +int rte_intr_handle_max_intr_get(const struct rte_intr_handle
> > +*intr_handle) {
> > + if (intr_handle == NULL) {
> > + RTE_LOG(ERR, EAL, "Interrupt instance unallocated\n");
> > + rte_errno = ENOTSUP;
> > + goto fail;
> > + }
> > +
> > + return intr_handle->max_intr;
> > +fail:
> > + return rte_errno;
> > +}
> > +
> > +int rte_intr_handle_nb_efd_set(struct rte_intr_handle *intr_handle,
> > + int nb_efd) {
> > + if (intr_handle == NULL) {
> > + RTE_LOG(ERR, EAL, "Interrupt instance unallocated\n");
> > + rte_errno = ENOTSUP;
> > + goto fail;
> > + }
> > +
> > + intr_handle->nb_efd = nb_efd;
> > +
> > + return 0;
> > +fail:
> > + return rte_errno;
> > +}
> > +
> > +int rte_intr_handle_nb_efd_get(const struct rte_intr_handle
> > +*intr_handle) {
> > + if (intr_handle == NULL) {
> > + RTE_LOG(ERR, EAL, "Interrupt instance unallocated\n");
> > + rte_errno = ENOTSUP;
> > + goto fail;
> > + }
> > +
> > + return intr_handle->nb_efd;
> > +fail:
> > + return rte_errno;
> > +}
> > +
> > +int rte_intr_handle_nb_intr_get(const struct rte_intr_handle
> > +*intr_handle) {
> > + if (intr_handle == NULL) {
> > + RTE_LOG(ERR, EAL, "Interrupt instance unallocated\n");
> > + rte_errno = ENOTSUP;
> > + goto fail;
> > + }
> > +
> > + return intr_handle->nb_intr;
> > +fail:
> > + return rte_errno;
> > +}
> > +
> > +int rte_intr_handle_efd_counter_size_set(struct rte_intr_handle
> *intr_handle,
> > + uint8_t efd_counter_size) {
> > + if (intr_handle == NULL) {
> > + RTE_LOG(ERR, EAL, "Interrupt instance unallocated\n");
> > + rte_errno = ENOTSUP;
> > + goto fail;
> > + }
> > +
> > + intr_handle->efd_counter_size = efd_counter_size;
> > +
> > + return 0;
> > +fail:
> > + return rte_errno;
> > +}
> > +
> > +int rte_intr_handle_efd_counter_size_get(
> > + const struct rte_intr_handle
> > +*intr_handle) {
> > + if (intr_handle == NULL) {
> > + RTE_LOG(ERR, EAL, "Interrupt instance unallocated\n");
> > + rte_errno = ENOTSUP;
> > + goto fail;
> > + }
> > +
> > + return intr_handle->efd_counter_size;
> > +fail:
> > + return rte_errno;
> > +}
> > +
> > +int *rte_intr_handle_efds_base(struct rte_intr_handle *intr_handle) {
> > + if (intr_handle == NULL) {
> > + RTE_LOG(ERR, EAL, "Interrupt instance unallocated\n");
> > + rte_errno = ENOTSUP;
> > + goto fail;
> > + }
> > +
> > + return intr_handle->efds;
> > +fail:
> > + return NULL;
> > +}
>
> We don't need this new accessor.
> It leaks the internal representation to the API caller.
> If the internal representation is later changed, we would have to maintain
> this array thing.
>
> The only user is drivers/raw/ifpga/ifpga_rawdev.c.
> This driver can build an array itself, and call
> rte_intr_handle_efds_index_get() as much as needed.
<HK> Yes, it's a leak. I will remove these base APIs and fix the ifpga_rawdev.c driver.
>
>
>
> > +
> > +int rte_intr_handle_efds_index_get(const struct rte_intr_handle
> *intr_handle,
> > + int index) {
> > + if (intr_handle == NULL) {
> > + RTE_LOG(ERR, EAL, "Interrupt instance unallocated\n");
> > + rte_errno = ENOTSUP;
> > + goto fail;
> > + }
> > +
> > + if (index >= intr_handle->nb_intr) {
> > + RTE_LOG(ERR, EAL, "Invalid size %d, max limit %d\n", index,
> > + intr_handle->nb_intr);
> > + rte_errno = EINVAL;
> > + goto fail;
> > + }
> > +
> > + return intr_handle->efds[index];
> > +fail:
> > + return rte_errno;
> > +}
> > +
> > +int rte_intr_handle_efds_index_set(struct rte_intr_handle *intr_handle,
> > + int index, int fd) {
> > + if (intr_handle == NULL) {
> > + RTE_LOG(ERR, EAL, "Interrupt instance unallocated\n");
> > + rte_errno = ENOTSUP;
> > + goto fail;
> > + }
> > +
> > + if (index >= intr_handle->nb_intr) {
> > + RTE_LOG(ERR, EAL, "Invalid size %d, max limit %d\n", index,
> > + intr_handle->nb_intr);
> > + rte_errno = ERANGE;
> > + goto fail;
> > + }
> > +
> > + intr_handle->efds[index] = fd;
> > +
> > + return 0;
> > +fail:
> > + return rte_errno;
> > +}
> > +
> > +struct rte_epoll_event *rte_intr_handle_elist_index_get(
> > + struct rte_intr_handle *intr_handle,
> > +int index) {
> > + if (intr_handle == NULL) {
> > + RTE_LOG(ERR, EAL, "Interrupt instance unallocated\n");
> > + rte_errno = ENOTSUP;
> > + goto fail;
> > + }
> > +
> > + if (index >= intr_handle->nb_intr) {
> > + RTE_LOG(ERR, EAL, "Invalid size %d, max limit %d\n", index,
> > + intr_handle->nb_intr);
> > + rte_errno = ERANGE;
> > + goto fail;
> > + }
> > +
> > + return &intr_handle->elist[index];
> > +fail:
> > + return NULL;
> > +}
> > +
> > +int rte_intr_handle_elist_index_set(struct rte_intr_handle *intr_handle,
> > + int index, struct rte_epoll_event
> > +elist) {
> > + if (intr_handle == NULL) {
> > + RTE_LOG(ERR, EAL, "Interrupt instance unallocated\n");
> > + rte_errno = ENOTSUP;
> > + goto fail;
> > + }
> > +
> > + if (index >= intr_handle->nb_intr) {
> > + RTE_LOG(ERR, EAL, "Invalid size %d, max limit %d\n", index,
> > + intr_handle->nb_intr);
> > + rte_errno = ERANGE;
> > + goto fail;
> > + }
> > +
> > + intr_handle->elist[index] = elist;
> > +
> > + return 0;
> > +fail:
> > + return rte_errno;
> > +}
> > +
> > +int *rte_intr_handle_vec_list_base(const struct rte_intr_handle
> > +*intr_handle) {
> > + if (intr_handle == NULL) {
> > + RTE_LOG(ERR, EAL, "Interrupt instance unallocated\n");
> > + rte_errno = ENOTSUP;
> > + return NULL;
> > + }
> > +
> > + return intr_handle->intr_vec;
> > +}
>
>
> rte_intr_handle_vec_list_base leaks an internal representation too.
>
> Afaics with the whole series applied, it is always paired with a
> rte_intr_handle_vec_list_alloc or rte_intr_handle_vec_list_free.
> rte_intr_handle_vec_list_alloc could do this check itself.
> And rte_intr_handle_vec_list_free should already be fine, since it sets
> intr_vec to NULL.
<HK> Yes, base API not required.
Thanks
Harman
>
>
>
>
>
> > +
> > +int rte_intr_handle_vec_list_alloc(struct rte_intr_handle *intr_handle,
> > + const char *name, int size) {
> > + if (intr_handle == NULL) {
> > + RTE_LOG(ERR, EAL, "Interrupt instance unallocated\n");
> > + rte_errno = ENOTSUP;
> > + goto fail;
> > + }
> > +
> > + /* Vector list already allocated */
> > + if (intr_handle->intr_vec)
> > + return 0;
> > +
> > + if (size > intr_handle->nb_intr) {
> > + RTE_LOG(ERR, EAL, "Invalid size %d, max limit %d\n", size,
> > + intr_handle->nb_intr);
> > + rte_errno = ERANGE;
> > + goto fail;
> > + }
> > +
> > + intr_handle->intr_vec = rte_zmalloc(name, size * sizeof(int), 0);
> > + if (!intr_handle->intr_vec) {
> > + RTE_LOG(ERR, EAL, "Failed to allocate %d intr_vec", size);
> > + rte_errno = ENOMEM;
> > + goto fail;
> > + }
> > +
> > + intr_handle->vec_list_size = size;
> > +
> > + return 0;
> > +fail:
> > + return rte_errno;
> > +}
> > +
> > +int rte_intr_handle_vec_list_index_get(
> > + const struct rte_intr_handle *intr_handle, int
> > +index) {
> > + if (intr_handle == NULL) {
> > + RTE_LOG(ERR, EAL, "Interrupt instance unallocated\n");
> > + rte_errno = ENOTSUP;
> > + goto fail;
> > + }
> > +
> > + if (!intr_handle->intr_vec) {
> > + RTE_LOG(ERR, EAL, "Intr vector list not allocated\n");
> > + rte_errno = ENOTSUP;
> > + goto fail;
> > + }
> > +
> > + if (index > intr_handle->vec_list_size) {
> > + RTE_LOG(ERR, EAL, "Index %d greater than vec list size %d\n",
> > + index, intr_handle->vec_list_size);
> > + rte_errno = ERANGE;
> > + goto fail;
> > + }
> > +
> > + return intr_handle->intr_vec[index];
> > +fail:
> > + return rte_errno;
> > +}
> > +
> > +int rte_intr_handle_vec_list_index_set(struct rte_intr_handle
> *intr_handle,
> > + int index, int vec) {
> > + if (intr_handle == NULL) {
> > + RTE_LOG(ERR, EAL, "Interrupt instance unallocated\n");
> > + rte_errno = ENOTSUP;
> > + goto fail;
> > + }
> > +
> > + if (!intr_handle->intr_vec) {
> > + RTE_LOG(ERR, EAL, "Intr vector list not allocated\n");
> > + rte_errno = ENOTSUP;
> > + goto fail;
> > + }
> > +
> > + if (index > intr_handle->vec_list_size) {
> > + RTE_LOG(ERR, EAL, "Index %d greater than vec list size %d\n",
> > + index, intr_handle->vec_list_size);
> > + rte_errno = ERANGE;
> > + goto fail;
> > + }
> > +
> > + intr_handle->intr_vec[index] = vec;
> > +
> > + return 0;
> > +fail:
> > + return rte_errno;
> > +}
> > +
> > +void rte_intr_handle_vec_list_free(struct rte_intr_handle
> > +*intr_handle) {
> > + if (intr_handle == NULL) {
> > + RTE_LOG(ERR, EAL, "Interrupt instance unallocated\n");
> > + rte_errno = ENOTSUP;
> > + }
> > +
> > + rte_free(intr_handle->intr_vec);
> > + intr_handle->intr_vec = NULL;
> > +}
> > diff --git a/lib/eal/common/meson.build b/lib/eal/common/meson.build
> > index edfca77779..47f2977539 100644
> > --- a/lib/eal/common/meson.build
> > +++ b/lib/eal/common/meson.build
> > @@ -17,6 +17,7 @@ if is_windows
> > 'eal_common_errno.c',
> > 'eal_common_fbarray.c',
> > 'eal_common_hexdump.c',
> > + 'eal_common_interrupts.c',
> > 'eal_common_launch.c',
> > 'eal_common_lcore.c',
> > 'eal_common_log.c',
> > @@ -53,6 +54,7 @@ sources += files(
> > 'eal_common_fbarray.c',
> > 'eal_common_hexdump.c',
> > 'eal_common_hypervisor.c',
> > + 'eal_common_interrupts.c',
> > 'eal_common_launch.c',
> > 'eal_common_lcore.c',
> > 'eal_common_log.c',
> > diff --git a/lib/eal/include/rte_eal_interrupts.h
> > b/lib/eal/include/rte_eal_interrupts.h
> > index 68ca3a042d..216aece61b 100644
> > --- a/lib/eal/include/rte_eal_interrupts.h
> > +++ b/lib/eal/include/rte_eal_interrupts.h
> > @@ -55,13 +55,17 @@ struct rte_intr_handle {
> > };
> > void *handle; /**< device driver handle (Windows) */
> > };
> > + bool alloc_from_hugepage;
> > enum rte_intr_handle_type type; /**< handle type */
> > uint32_t max_intr; /**< max interrupt requested */
> > uint32_t nb_efd; /**< number of available efd(event fd) */
> > uint8_t efd_counter_size; /**< size of efd counter, used for vdev */
> > + uint16_t nb_intr;
> > + /**< Max vector count, default
> > + RTE_MAX_RXTX_INTR_VEC_ID */
> > int efds[RTE_MAX_RXTX_INTR_VEC_ID]; /**< intr vectors/efds
> mapping */
> > struct rte_epoll_event elist[RTE_MAX_RXTX_INTR_VEC_ID];
> > - /**< intr vector epoll event */
> > + /**< intr vector epoll event */
> > + uint16_t vec_list_size;
> > int *intr_vec; /**< intr vector number array */
> > };
> >
> > diff --git a/lib/eal/version.map b/lib/eal/version.map index
> > beeb986adc..56108d0998 100644
> > --- a/lib/eal/version.map
> > +++ b/lib/eal/version.map
> > @@ -426,6 +426,36 @@ EXPERIMENTAL {
> >
> > # added in 21.08
> > rte_power_monitor_multi; # WINDOWS_NO_EXPORT
> > +
> > + # added in 21.11
> > + rte_intr_handle_fd_set;
> > + rte_intr_handle_fd_get;
> > + rte_intr_handle_dev_fd_set;
> > + rte_intr_handle_dev_fd_get;
> > + rte_intr_handle_type_set;
> > + rte_intr_handle_type_get;
> > + rte_intr_handle_instance_alloc;
> > + rte_intr_handle_instance_index_get;
> > + rte_intr_handle_instance_free;
> > + rte_intr_handle_instance_index_set;
> > + rte_intr_handle_event_list_update;
> > + rte_intr_handle_max_intr_set;
> > + rte_intr_handle_max_intr_get;
> > + rte_intr_handle_nb_efd_set;
> > + rte_intr_handle_nb_efd_get;
> > + rte_intr_handle_nb_intr_get;
> > + rte_intr_handle_efds_index_set;
> > + rte_intr_handle_efds_index_get;
> > + rte_intr_handle_efds_base;
> > + rte_intr_handle_elist_index_set;
> > + rte_intr_handle_elist_index_get;
> > + rte_intr_handle_efd_counter_size_set;
> > + rte_intr_handle_efd_counter_size_get;
> > + rte_intr_handle_vec_list_alloc;
> > + rte_intr_handle_vec_list_index_set;
> > + rte_intr_handle_vec_list_index_get;
> > + rte_intr_handle_vec_list_free;
> > + rte_intr_handle_vec_list_base;
> > };
> >
> > INTERNAL {
> > --
> > 2.18.0
> >
>
>
> --
> David Marchand
^ permalink raw reply [flat|nested] 152+ messages in thread
* Re: [dpdk-dev] [EXT] Re: [PATCH v1 2/7] eal/interrupts: implement get set APIs
2021-10-04 8:51 ` [dpdk-dev] [EXT] " Harman Kalra
@ 2021-10-04 9:57 ` David Marchand
2021-10-12 15:22 ` Thomas Monjalon
0 siblings, 1 reply; 152+ messages in thread
From: David Marchand @ 2021-10-04 9:57 UTC (permalink / raw)
To: Raslan Darawsheh
Cc: dev, Ray Kinsella, Thomas Monjalon, Harman Kalra, Dmitry Kozlyuk
On Mon, Oct 4, 2021 at 10:51 AM Harman Kalra <hkalra@marvell.com> wrote:
> > > +struct rte_intr_handle *rte_intr_handle_instance_alloc(int size,
> > > + bool
> > > +from_hugepage) {
> > > + struct rte_intr_handle *intr_handle;
> > > + int i;
> > > +
> > > + if (from_hugepage)
> > > + intr_handle = rte_zmalloc(NULL,
> > > + size * sizeof(struct rte_intr_handle),
> > > + 0);
> > > + else
> > > + intr_handle = calloc(1, size * sizeof(struct
> > > + rte_intr_handle));
> >
> > We can call DPDK allocator in all cases.
> > That would avoid headaches on why multiprocess does not work in some
> > rarely tested cases.
> > Wdyt?
> >
> > Plus "from_hugepage" is misleading, you could be in --no-huge mode,
> > rte_zmalloc still works.
>
> <HK> In mellanox 5 driver interrupt handle instance is freed in destructor
> " mlx5_pmd_interrupt_handler_uninstall()" while DPDK memory allocators
> are already cleaned up in "rte_eal_cleanup". Hence I allocated interrupt
> instances for such cases from normal heap. There could be other such cases
> so I think its ok to keep this support.
This is surprising.
Why would the mlx5 driver wait to release in a destructor?
It should be done once no interrupt handler is necessary (like when
stopping all ports), and that would be before rte_eal_cleanup().
--
David Marchand
^ permalink raw reply [flat|nested] 152+ messages in thread
* Re: [dpdk-dev] [EXT] Re: [PATCH v1 2/7] eal/interrupts: implement get set APIs
2021-10-03 18:05 ` [dpdk-dev] " Dmitry Kozlyuk
@ 2021-10-04 10:37 ` Harman Kalra
2021-10-04 11:18 ` Dmitry Kozlyuk
0 siblings, 1 reply; 152+ messages in thread
From: Harman Kalra @ 2021-10-04 10:37 UTC (permalink / raw)
To: Dmitry Kozlyuk; +Cc: dev, Ray Kinsella, David Marchand
Hi Dmitry,
Thanks for reviewing the series.
Please find my comments inline.
> -----Original Message-----
> From: Dmitry Kozlyuk <dmitry.kozliuk@gmail.com>
> Sent: Sunday, October 3, 2021 11:35 PM
> To: Harman Kalra <hkalra@marvell.com>
> Cc: dev@dpdk.org; Ray Kinsella <mdr@ashroe.eu>
> Subject: [EXT] Re: [dpdk-dev] [PATCH v1 2/7] eal/interrupts: implement get
> set APIs
>
> External Email
>
> ----------------------------------------------------------------------
> 2021-09-03 18:10 (UTC+0530), Harman Kalra:
> > [...]
> > diff --git a/lib/eal/common/eal_common_interrupts.c
> > b/lib/eal/common/eal_common_interrupts.c
> > new file mode 100644
> > index 0000000000..2e4fed96f0
> > --- /dev/null
> > +++ b/lib/eal/common/eal_common_interrupts.c
> > @@ -0,0 +1,506 @@
> > +/* SPDX-License-Identifier: BSD-3-Clause
> > + * Copyright(C) 2021 Marvell.
> > + */
> > +
> > +#include <stdlib.h>
> > +#include <string.h>
> > +
> > +#include <rte_errno.h>
> > +#include <rte_log.h>
> > +#include <rte_malloc.h>
> > +
> > +#include <rte_interrupts.h>
> > +
> > +
> > +struct rte_intr_handle *rte_intr_handle_instance_alloc(int size,
> > + bool from_hugepage)
>
> Since the purpose of the series is to reduce future ABI breakages, how about
> making the second parameter "flags" to have some spare bits?
> (If not removing it completely per David's suggestion.)
>
<HK> Having a second "flags" parameter is a good suggestion; I will include it.
> > +{
> > + struct rte_intr_handle *intr_handle;
> > + int i;
> > +
> > + if (from_hugepage)
> > + intr_handle = rte_zmalloc(NULL,
> > + size * sizeof(struct rte_intr_handle),
> > + 0);
> > + else
> > + intr_handle = calloc(1, size * sizeof(struct rte_intr_handle));
> > + if (!intr_handle) {
> > + RTE_LOG(ERR, EAL, "Fail to allocate intr_handle\n");
> > + rte_errno = ENOMEM;
> > + return NULL;
> > + }
> > +
> > + for (i = 0; i < size; i++) {
> > + intr_handle[i].nb_intr = RTE_MAX_RXTX_INTR_VEC_ID;
> > + intr_handle[i].alloc_from_hugepage = from_hugepage;
> > + }
> > +
> > + return intr_handle;
> > +}
> > +
> > +struct rte_intr_handle *rte_intr_handle_instance_index_get(
> > + struct rte_intr_handle *intr_handle, int
> index)
>
> If rte_intr_handle_instance_alloc() returns a pointer to an array, this function
> is useless since the user can simply manipulate a pointer.
<HK> The user won't be able to manipulate the pointer, as they are not aware of the size of struct rte_intr_handle.
They will observe a "dereferencing pointer to incomplete type" compilation error.
> If we want to make a distinction between a single struct rte_intr_handle and
> a commonly allocated bunch of such (but why?), then they should be
> represented by distinct types.
<HK> Do you mean we should have separate APIs for single and batch allocation? The get API
is useful only in the batch allocation case. Currently the interrupt autotests and the ifpga_rawdev
driver make batch allocations.
I think a common API for single and batch allocation is fine; the get API is required for returning a particular intr_handle instance.
But one problem I see in the current implementation is that there should be an upper-limit check on the index in the get/set
APIs, which I will fix.
>
> > +{
> > + if (intr_handle == NULL) {
> > + RTE_LOG(ERR, EAL, "Interrupt instance unallocated\n");
> > + rte_errno = ENOMEM;
>
> Why it's sometimes ENOMEM and sometimes ENOTSUP when the handle is
> not allocated?
<HK> I will fix it and make it consistent across the APIs.
>
> > + return NULL;
> > + }
> > +
> > + return &intr_handle[index];
> > +}
> > +
> > +int rte_intr_handle_instance_index_set(struct rte_intr_handle
> *intr_handle,
> > + const struct rte_intr_handle *src,
> > + int index)
>
> See above regarding the "index" parameter. If it can be removed, a better
> name for this function would be rte_intr_handle_copy().
<HK> I think the get API is required.
>
> > +{
> > + if (intr_handle == NULL) {
> > + RTE_LOG(ERR, EAL, "Interrupt instance unallocated\n");
> > + rte_errno = ENOTSUP;
> > + goto fail;
> > + }
> > +
> > + if (src == NULL) {
> > + RTE_LOG(ERR, EAL, "Source interrupt instance
> unallocated\n");
> > + rte_errno = EINVAL;
> > + goto fail;
> > + }
> > +
> > + if (index < 0) {
> > + RTE_LOG(ERR, EAL, "Index cany be negative");
> > + rte_errno = EINVAL;
> > + goto fail;
> > + }
>
> How about making this parameter "size_t"?
<HK> You mean index? It can be size_t.
>
> > +
> > + intr_handle[index].fd = src->fd;
> > + intr_handle[index].vfio_dev_fd = src->vfio_dev_fd;
> > + intr_handle[index].type = src->type;
> > + intr_handle[index].max_intr = src->max_intr;
> > + intr_handle[index].nb_efd = src->nb_efd;
> > + intr_handle[index].efd_counter_size = src->efd_counter_size;
> > +
> > + memcpy(intr_handle[index].efds, src->efds, src->nb_intr);
> > + memcpy(intr_handle[index].elist, src->elist, src->nb_intr);
> > +
> > + return 0;
> > +fail:
> > + return rte_errno;
>
> Should be (-rte_errno) per documentation.
> Please check all functions in this file that return an "int" status.
<HK> Sure, will fix it across the APIs.
>
> > [...]
> > +int rte_intr_handle_dev_fd_get(const struct rte_intr_handle
> > +*intr_handle) {
> > + if (intr_handle == NULL) {
> > + RTE_LOG(ERR, EAL, "Interrupt instance unallocated\n");
> > + rte_errno = ENOTSUP;
> > + goto fail;
> > + }
> > +
> > + return intr_handle->vfio_dev_fd;
> > +fail:
> > + return rte_errno;
> > +}
>
> Returning a errno value instead of an FD is very error-prone.
> Probably returning (-1) is both safe and convenient?
<HK> Ack
>
> > +
> > +int rte_intr_handle_max_intr_set(struct rte_intr_handle *intr_handle,
> > + int max_intr)
> > +{
> > + if (intr_handle == NULL) {
> > + RTE_LOG(ERR, EAL, "Interrupt instance unallocated\n");
> > + rte_errno = ENOTSUP;
> > + goto fail;
> > + }
> > +
> > + if (max_intr > intr_handle->nb_intr) {
> > + RTE_LOG(ERR, EAL, "Max_intr=%d greater than
> > +PLT_MAX_RXTX_INTR_VEC_ID=%d",
>
> Seems like this common/cnxk name leaked here by mistake?
<HK> Thanks for catching this.
>
> > + max_intr, intr_handle->nb_intr);
> > + rte_errno = ERANGE;
> > + goto fail;
> > + }
> > +
> > + intr_handle->max_intr = max_intr;
> > +
> > + return 0;
> > +fail:
> > + return rte_errno;
> > +}
> > +
> > +int rte_intr_handle_max_intr_get(const struct rte_intr_handle
> > +*intr_handle) {
> > + if (intr_handle == NULL) {
> > + RTE_LOG(ERR, EAL, "Interrupt instance unallocated\n");
> > + rte_errno = ENOTSUP;
> > + goto fail;
> > + }
> > +
> > + return intr_handle->max_intr;
> > +fail:
> > + return rte_errno;
> > +}
>
> Should be negative per documentation and to avoid returning a positive
> value that cannot be distinguished from a successful return.
> Please also check other functions in this file returning an "int" result (not
> status).
<HK> Will fix it.
^ permalink raw reply [flat|nested] 152+ messages in thread
* Re: [dpdk-dev] [EXT] Re: [PATCH v1 2/7] eal/interrupts: implement get set APIs
2021-10-04 10:37 ` [dpdk-dev] [EXT] " Harman Kalra
@ 2021-10-04 11:18 ` Dmitry Kozlyuk
2021-10-04 14:03 ` Harman Kalra
0 siblings, 1 reply; 152+ messages in thread
From: Dmitry Kozlyuk @ 2021-10-04 11:18 UTC (permalink / raw)
To: Harman Kalra; +Cc: dev, Ray Kinsella, David Marchand
2021-10-04 10:37 (UTC+0000), Harman Kalra:
> [...]
> > > +struct rte_intr_handle *rte_intr_handle_instance_index_get(
> > > + struct rte_intr_handle *intr_handle, int
> > index)
> >
> > If rte_intr_handle_instance_alloc() returns a pointer to an array, this function
> > is useless since the user can simply manipulate a pointer.
>
> <HK> User wont be able to manipulate the pointer as he is not aware of size of struct rte_intr_handle.
> He will observe "dereferencing pointer to incomplete type" compilation error.
Sorry, my bad.
> > If we want to make a distinction between a single struct rte_intr_handle and
> > a commonly allocated bunch of such (but why?), then they should be
> > represented by distinct types.
>
> <HK> Do you mean, we should have separate APIs for single allocation and batch allocation? As get API
> will be useful only in case of batch allocation. Currently interrupt autotests and ifpga_rawdev driver makes
> batch allocation.
> I think common API for single and batch is fine, get API is required for returning a particular intr_handle instance.
> But one problem I see in current implementation is there should be upper limit check for index in get/set
> API, which I will fix.
I don't think we need different APIs, I was asking if it was your intention.
Now I understand it and agree with you.
> > > +int rte_intr_handle_instance_index_set(struct rte_intr_handle
> > *intr_handle,
> > > + const struct rte_intr_handle *src,
> > > + int index)
> >
> > See above regarding the "index" parameter. If it can be removed, a better
> > name for this function would be rte_intr_handle_copy().
>
> <HK> I think get API is required.
Maybe index is still not needed: "intr_handle" can just be a pointer to the
right item obtained with rte_intr_handle_instance_index_get(). This way you
also don't need to duplicate the index-checking logic.
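I.e. callers would do something like (sketch, assuming the rename to
rte_intr_handle_copy() discussed above):

    struct rte_intr_handle *dst =
        rte_intr_handle_instance_index_get(handles, index);

    if (dst == NULL || rte_intr_handle_copy(dst, src) != 0)
        return -rte_errno;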
^ permalink raw reply [flat|nested] 152+ messages in thread
* Re: [dpdk-dev] [EXT] Re: [PATCH v1 2/7] eal/interrupts: implement get set APIs
2021-10-04 11:18 ` Dmitry Kozlyuk
@ 2021-10-04 14:03 ` Harman Kalra
0 siblings, 0 replies; 152+ messages in thread
From: Harman Kalra @ 2021-10-04 14:03 UTC (permalink / raw)
To: Dmitry Kozlyuk; +Cc: dev, Ray Kinsella, David Marchand
> -----Original Message-----
> From: Dmitry Kozlyuk <dmitry.kozliuk@gmail.com>
> Sent: Monday, October 4, 2021 4:48 PM
> To: Harman Kalra <hkalra@marvell.com>
> Cc: dev@dpdk.org; Ray Kinsella <mdr@ashroe.eu>; David Marchand
> <david.marchand@redhat.com>
> Subject: Re: [EXT] Re: [dpdk-dev] [PATCH v1 2/7] eal/interrupts: implement
> get set APIs
>
> 2021-10-04 10:37 (UTC+0000), Harman Kalra:
> > [...]
> > > > +struct rte_intr_handle *rte_intr_handle_instance_index_get(
> > > > + struct rte_intr_handle *intr_handle, int
> > > index)
> > >
> > > If rte_intr_handle_instance_alloc() returns a pointer to an array,
> > > this function is useless since the user can simply manipulate a pointer.
> >
> > <HK> User wont be able to manipulate the pointer as he is not aware of
> size of struct rte_intr_handle.
> > He will observe "dereferencing pointer to incomplete type" compilation
> error.
>
> Sorry, my bad.
>
> > > If we want to make a distinction between a single struct
> > > rte_intr_handle and a commonly allocated bunch of such (but why?),
> > > then they should be represented by distinct types.
> >
> > <HK> Do you mean, we should have separate APIs for single allocation
> > and batch allocation? As get API will be useful only in case of batch
> > allocation. Currently interrupt autotests and ifpga_rawdev driver makes
> batch allocation.
> > I think common API for single and batch is fine, get API is required for
> returning a particular intr_handle instance.
> > But one problem I see in current implementation is there should be
> > upper limit check for index in get/set API, which I will fix.
>
> I don't think we need different APIs, I was asking if it was your intention.
> Now I understand it and agree with you.
>
> > > > +int rte_intr_handle_instance_index_set(struct rte_intr_handle
> > > *intr_handle,
> > > > + const struct rte_intr_handle *src,
> > > > + int index)
> > >
> > > See above regarding the "index" parameter. If it can be removed, a
> > > better name for this function would be rte_intr_handle_copy().
> >
> > <HK> I think get API is required.
>
> Maybe index is still not needed: "intr_handle" can just be a pointer to the
> right item obtained with rte_intr_handle_instance_index_get(). This way you
> also don't need to duplicate the index-checking logic.
In the current implementation, batch allocation of interrupt handles can leak memory when
freeing the efds and elist arrays: rte_intr_handle_instance_free() only frees efds/elist for intr_handle[0].
To free efds/elist for every intr_handle[], I would have to cache the size parameter passed at allocation time,
and the only place to store it would be the first instance of struct rte_intr_handle, which I don't think is a good idea.
Since batch allocation is only done in the test suite and ifpga_rawdev.c, to keep things simpler let's restrict
rte_intr_handle_instance_alloc() to single-instance allocation; the user can call this API in a loop and
maintain the array of handles locally.
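A minimal sketch of that caller-side loop (illustrative only; MAX_HANDLES is a placeholder and the v1 alloc arguments are kept with size 1):

	struct rte_intr_handle *handles[MAX_HANDLES];
	int i;

	for (i = 0; i < n; i++) {
		handles[i] = rte_intr_handle_instance_alloc(1, from_hugepage);
		if (handles[i] == NULL)
			break;
	}
	if (i < n) { /* unwind a partial allocation */
		while (i--)
			rte_intr_handle_instance_free(handles[i]);
		return -rte_errno;
	}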
With this approach the get_index API is not required, and the set_index API can be renamed to rte_intr_handle_copy().
Thoughts?
Thanks
Harman
^ permalink raw reply [flat|nested] 152+ messages in thread
* Re: [dpdk-dev] [EXT] Re: [PATCH v1 6/7] eal/interrupts: make interrupt handle structure opaque
2021-10-03 18:16 ` Dmitry Kozlyuk
@ 2021-10-04 14:09 ` Harman Kalra
0 siblings, 0 replies; 152+ messages in thread
From: Harman Kalra @ 2021-10-04 14:09 UTC (permalink / raw)
To: Dmitry Kozlyuk; +Cc: dev, Anatoly Burakov
Hi Dmitry,
Please find my comments inline.
> -----Original Message-----
> From: Dmitry Kozlyuk <dmitry.kozliuk@gmail.com>
> Sent: Sunday, October 3, 2021 11:46 PM
> To: Harman Kalra <hkalra@marvell.com>
> Cc: dev@dpdk.org; Anatoly Burakov <anatoly.burakov@intel.com>
> Subject: [EXT] Re: [dpdk-dev] [PATCH v1 6/7] eal/interrupts: make interrupt
> handle structure opaque
>
> External Email
>
> ----------------------------------------------------------------------
> 2021-09-03 18:11 (UTC+0530), Harman Kalra:
> > [...]
> > @@ -31,11 +54,40 @@ struct rte_intr_handle
> *rte_intr_handle_instance_alloc(int size,
> > }
> >
> > for (i = 0; i < size; i++) {
> > + if (from_hugepage)
> > + intr_handle[i].efds = rte_zmalloc(NULL,
> > + RTE_MAX_RXTX_INTR_VEC_ID *
> sizeof(uint32_t), 0);
> > + else
> > + intr_handle[i].efds = calloc(1,
> > + RTE_MAX_RXTX_INTR_VEC_ID *
> sizeof(uint32_t));
> > + if (!intr_handle[i].efds) {
> > + RTE_LOG(ERR, EAL, "Fail to allocate event fd list\n");
> > + rte_errno = ENOMEM;
> > + goto fail;
> > + }
> > +
> > + if (from_hugepage)
> > + intr_handle[i].elist = rte_zmalloc(NULL,
> > + RTE_MAX_RXTX_INTR_VEC_ID *
> > + sizeof(struct rte_epoll_event), 0);
> > + else
> > + intr_handle[i].elist = calloc(1,
> > + RTE_MAX_RXTX_INTR_VEC_ID *
> > + sizeof(struct rte_epoll_event));
> > + if (!intr_handle[i].elist) {
> > + RTE_LOG(ERR, EAL, "fail to allocate event fd list\n");
> > + rte_errno = ENOMEM;
> > + goto fail;
> > + }
> > intr_handle[i].nb_intr = RTE_MAX_RXTX_INTR_VEC_ID;
> > intr_handle[i].alloc_from_hugepage = from_hugepage;
> > }
> >
> > return intr_handle;
> > +fail:
> > + free(intr_handle->efds);
> > + free(intr_handle);
> > + return NULL;
>
> This is incorrect if "from_hugepage" is set.
<HK> Ack, will fix it.
>
> > }
> >
> > struct rte_intr_handle *rte_intr_handle_instance_index_get(
> > @@ -73,12 +125,48 @@ int rte_intr_handle_instance_index_set(struct
> rte_intr_handle *intr_handle,
> > }
> >
> > intr_handle[index].fd = src->fd;
> > - intr_handle[index].vfio_dev_fd = src->vfio_dev_fd;
> > + intr_handle[index].dev_fd = src->dev_fd;
> > +
> > intr_handle[index].type = src->type;
> > intr_handle[index].max_intr = src->max_intr;
> > intr_handle[index].nb_efd = src->nb_efd;
> > intr_handle[index].efd_counter_size = src->efd_counter_size;
> >
> > + if (intr_handle[index].nb_intr != src->nb_intr) {
> > + if (src->alloc_from_hugepage)
> > + intr_handle[index].efds =
> > + rte_realloc(intr_handle[index].efds,
> > + src->nb_intr *
> > + sizeof(uint32_t), 0);
> > + else
> > + intr_handle[index].efds =
> > + realloc(intr_handle[index].efds,
> > + src->nb_intr * sizeof(uint32_t));
> > + if (intr_handle[index].efds == NULL) {
> > + RTE_LOG(ERR, EAL, "Failed to realloc the efds list");
> > + rte_errno = ENOMEM;
> > + goto fail;
> > + }
> > +
> > + if (src->alloc_from_hugepage)
> > + intr_handle[index].elist =
> > + rte_realloc(intr_handle[index].elist,
> > + src->nb_intr *
> > + sizeof(struct rte_epoll_event), 0);
> > + else
> > + intr_handle[index].elist =
> > + realloc(intr_handle[index].elist,
> > + src->nb_intr *
> > + sizeof(struct rte_epoll_event));
> > + if (intr_handle[index].elist == NULL) {
> > + RTE_LOG(ERR, EAL, "Failed to realloc the event list");
> > + rte_errno = ENOMEM;
> > + goto fail;
> > + }
> > +
> > + intr_handle[index].nb_intr = src->nb_intr;
> > + }
> > +
>
> This implementation leaves "intr_handle" in an invalid state and leaks
> memory on error paths.
<HK> Yes, I will capture the reallocated pointer in a temporary variable and update
intr_handle[index].elist/efds only after all error paths are cleared.
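A small sketch of that pattern, so the original buffer is not lost when realloc() fails (types and sizes here are illustrative):

	int *tmp;

	tmp = realloc(intr_handle[index].efds, src->nb_intr * sizeof(*tmp));
	if (tmp == NULL) {
		rte_errno = ENOMEM;
		return -rte_errno;             /* intr_handle[index].efds is still valid */
	}
	intr_handle[index].efds = tmp;         /* update only after success */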
>
> > memcpy(intr_handle[index].efds, src->efds, src->nb_intr);
> > memcpy(intr_handle[index].elist, src->elist, src->nb_intr);
> >
> > @@ -87,6 +175,45 @@ int rte_intr_handle_instance_index_set(struct
> rte_intr_handle *intr_handle,
> > return rte_errno;
> > }
> >
> > +int rte_intr_handle_event_list_update(struct rte_intr_handle
> *intr_handle,
> > + int size)
> > +{
> > + if (intr_handle == NULL) {
> > + RTE_LOG(ERR, EAL, "Interrupt instance unallocated\n");
> > + rte_errno = ENOTSUP;
> > + goto fail;
> > + }
> > +
> > + if (size == 0) {
> > + RTE_LOG(ERR, EAL, "Size can't be zero\n");
> > + rte_errno = EINVAL;
> > + goto fail;
> > + }
> > +
> > + intr_handle->efds = realloc(intr_handle->efds,
> > + size * sizeof(uint32_t));
> > + if (intr_handle->efds == NULL) {
> > + RTE_LOG(ERR, EAL, "Failed to realloc the efds list");
> > + rte_errno = ENOMEM;
> > + goto fail;
> > + }
> > +
> > + intr_handle->elist = realloc(intr_handle->elist,
> > + size * sizeof(struct rte_epoll_event));
> > + if (intr_handle->elist == NULL) {
> > + RTE_LOG(ERR, EAL, "Failed to realloc the event list");
> > + rte_errno = ENOMEM;
> > + goto fail;
> > + }
> > +
> > + intr_handle->nb_intr = size;
> > +
> > + return 0;
> > +fail:
> > + return rte_errno;
> > +}
> > +
> > +
>
> Same here.
<HK> Ack
>
> > [...]
> > diff --git a/lib/eal/include/rte_interrupts.h
> > b/lib/eal/include/rte_interrupts.h
> > index afc3262967..7dfb849eea 100644
> > --- a/lib/eal/include/rte_interrupts.h
> > +++ b/lib/eal/include/rte_interrupts.h
> > @@ -25,9 +25,29 @@ extern "C" {
> > /** Interrupt handle */
> > struct rte_intr_handle;
> >
> > -#define RTE_INTR_HANDLE_DEFAULT_SIZE 1
> > +#define RTE_MAX_RXTX_INTR_VEC_ID 512
> > +#define RTE_INTR_VEC_ZERO_OFFSET 0
> > +#define RTE_INTR_VEC_RXTX_OFFSET 1
> > +
> > +/**
> > + * The interrupt source type, e.g. UIO, VFIO, ALARM etc.
> > + */
> > +enum rte_intr_handle_type {
> > + RTE_INTR_HANDLE_UNKNOWN = 0, /**< generic unknown handle */
> > + RTE_INTR_HANDLE_UIO, /**< uio device handle */
> > + RTE_INTR_HANDLE_UIO_INTX, /**< uio generic handle */
> > + RTE_INTR_HANDLE_VFIO_LEGACY, /**< vfio device handle (legacy)
> */
> > + RTE_INTR_HANDLE_VFIO_MSI, /**< vfio device handle (MSI) */
> > + RTE_INTR_HANDLE_VFIO_MSIX, /**< vfio device handle (MSIX) */
> > + RTE_INTR_HANDLE_ALARM, /**< alarm handle */
> > + RTE_INTR_HANDLE_EXT, /**< external handler */
> > + RTE_INTR_HANDLE_VDEV, /**< virtual device */
> > + RTE_INTR_HANDLE_DEV_EVENT, /**< device event handle */
> > + RTE_INTR_HANDLE_VFIO_REQ, /**< VFIO request handle */
> > + RTE_INTR_HANDLE_MAX /**< count of elements */
>
> Enums shouldn't have a _MAX member, can we remove it?
<HK> I don't see RTE_INTR_HANDLE_MAX used anywhere, so I will remove it.
>
> > +};
> >
> > -#include "rte_eal_interrupts.h"
> > +#define RTE_INTR_HANDLE_DEFAULT_SIZE 1
>
> I find this constant more cluttering call sites than helpful.
> If a handle is allocated with a calloc-like function, plain 1 reads just fine.
Since we are now thinking of restricting rte_intr_handle_instance_alloc() to single
intr_handle allocation, I will remove this macro.
Thanks
Harman
^ permalink raw reply [flat|nested] 152+ messages in thread
* [dpdk-dev] [PATCH v2 0/6] make rte_intr_handle internal
2021-08-26 14:57 [dpdk-dev] [RFC 0/7] make rte_intr_handle internal Harman Kalra
` (7 preceding siblings ...)
2021-09-03 12:40 ` [dpdk-dev] [PATCH v1 0/7] make rte_intr_handle internal Harman Kalra
@ 2021-10-05 12:14 ` Harman Kalra
2021-10-05 12:14 ` [dpdk-dev] [PATCH v2 1/6] eal/interrupts: implement get set APIs Harman Kalra
` (5 more replies)
2021-10-05 16:07 ` [dpdk-dev] [RFC 0/7] make rte_intr_handle internal Stephen Hemminger
` (3 subsequent siblings)
12 siblings, 6 replies; 152+ messages in thread
From: Harman Kalra @ 2021-10-05 12:14 UTC (permalink / raw)
To: dev; +Cc: david.marchand, dmitry.kozliuk, mdr, Harman Kalra
Moving struct rte_intr_handle as an internal structure to
avoid any ABI breakages in future. Since this structure defines
some static arrays and changing respective macros breaks the ABI.
Eg:
Currently RTE_MAX_RXTX_INTR_VEC_ID imposes a limit of maximum 512
MSI-X interrupts that can be defined for a PCI device, while PCI
specification allows maximum 2048 MSI-X interrupts that can be used.
If some PCI device requires more than 512 vectors, either change the
RTE_MAX_RXTX_INTR_VEC_ID limit or dynamically allocate based on
PCI device MSI-X size on probe time. Either way its an ABI breakage.
Change already included in 21.11 ABI improvement spreadsheet (item 42):
https://urldefense.proofpoint.com/v2/url?u=https-3A__docs.google.com_s
preadsheets_d_1betlC000ua5SsSiJIcC54mCCCJnW6voH5Dqv9UxeyfE_edit-23gid-
3D0&d=DwICaQ&c=nKjWec2b6R0mOyPaz7xtfQ&r=5ESHPj7V-7JdkxT_Z_SU6RrS37ys4U
XudBQ_rrS5LRo&m=7dl3OmXU7QHMmWYB6V1hYJtq1cUkjfhXUwze2Si_48c&s=lh6DEGhR
Bg1shODpAy3RQk-H-0uQx5icRfUBf9dtCp4&e=
This series makes struct rte_intr_handle totally opaque to the outside
world by wrapping it inside a .c file and providing get set wrapper APIs
to read or manipulate its fields.. Any changes to be made to any of the
fields should be done via these get set APIs.
Introduced a new eal_common_interrupts.c where all these APIs are defined
and also hides struct rte_intr_handle definition.
Details on each patch of the series:
Patch 1: eal/interrupts: implement get set APIs
This patch provides prototypes and implementations of all the new
get/set APIs. Alloc APIs are implemented to allocate memory for an
interrupt handle instance. Currently most drivers define the
interrupt handle instance as static, but it can no longer be static
because the size of rte_intr_handle is unknown to the drivers. Drivers are
expected to allocate interrupt instances during initialization
and free these instances during the cleanup phase.
This patch also rearranges the headers related to the interrupt
framework. Epoll-related definitions and prototypes are moved into a
new header, rte_epoll.h, and the APIs defined in rte_eal_interrupts.h
which were driver specific are moved to rte_interrupts.h (as they were
anyway accessible and used outside the DPDK library). Later in the series
rte_eal_interrupts.h is removed.
Patch 2: eal/interrupts: avoid direct access to interrupt handle
Modifying the interrupt framework for linux and freebsd to use these
get set alloc APIs as per requirement and avoid accessing the fields
directly.
Patch 3: test/interrupt: apply get set interrupt handle APIs
Updating interrupt test suite to use interrupt handle APIs.
Patch 4: drivers: remove direct access to interrupt handle fields
Modifying all the drivers and libraries which are currently directly
accessing the interrupt handle fields. Drivers are expected to
allocate the interrupt instance, use the get/set APIs with the allocated
interrupt handle and free it on cleanup.
Patch 5: eal/interrupts: make interrupt handle structure opaque
In this patch rte_eal_interrupts.h is removed and the struct rte_intr_handle
definition is moved to a .c file to make it completely opaque. As part of
interrupt handle allocation, arrays like efds and elist (which are currently
static) are dynamically allocated with a default size
(RTE_MAX_RXTX_INTR_VEC_ID). Later these arrays can be reallocated per the
device requirement using the new API rte_intr_handle_event_list_update().
E.g., on PCI device probe the MSI-X size can be queried and these arrays can
be reallocated accordingly.
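As a rough illustration of that flow (the MSI-X size query below is a placeholder; only rte_intr_handle_event_list_update() is named by this series):

	/* At PCI probe time, size the efds/elist arrays to the device's real
	 * MSI-X capability instead of RTE_MAX_RXTX_INTR_VEC_ID. */
	int msix_count = pci_msix_count(pci_dev);	/* placeholder query helper */

	if (msix_count > 0 &&
	    rte_intr_handle_event_list_update(intr_handle, msix_count) < 0)
		return -rte_errno;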
Patch 6: eal/alarm: introduce alarm fini routine
Introducing an alarm fini routine so that the memory allocated for the alarm interrupt
instance can be freed in alarm fini.
Testing performed:
1. Validated the series by running the interrupts and alarm test suites.
2. Validated l3fwd-power functionality with octeontx2 and i40e Intel cards,
where interrupts are expected on packet arrival.
v1:
* Fixed freebsd compilation failure
* Fixed seg fault in case of memif
v2:
* Merged the prototype and implementation patch to 1.
* Restricted allocation to a single interrupt instance.
* Removed base APIs, as they were exposing internally
allocated memory information.
* Fixed some memory leak issues.
* Marked some library specific APIs as internal.
Harman Kalra (6):
eal/interrupts: implement get set APIs
eal/interrupts: avoid direct access to interrupt handle
test/interrupt: apply get set interrupt handle APIs
drivers: remove direct access to interrupt handle
eal/interrupts: make interrupt handle structure opaque
eal/alarm: introduce alarm fini routine
MAINTAINERS | 1 +
app/test/test_interrupts.c | 163 +++--
drivers/baseband/acc100/rte_acc100_pmd.c | 18 +-
.../fpga_5gnr_fec/rte_fpga_5gnr_fec.c | 21 +-
drivers/baseband/fpga_lte_fec/fpga_lte_fec.c | 21 +-
drivers/bus/auxiliary/auxiliary_common.c | 2 +
drivers/bus/auxiliary/linux/auxiliary.c | 9 +
drivers/bus/auxiliary/rte_bus_auxiliary.h | 2 +-
drivers/bus/dpaa/dpaa_bus.c | 28 +-
drivers/bus/dpaa/rte_dpaa_bus.h | 2 +-
drivers/bus/fslmc/fslmc_bus.c | 15 +-
drivers/bus/fslmc/fslmc_vfio.c | 32 +-
drivers/bus/fslmc/portal/dpaa2_hw_dpio.c | 20 +-
drivers/bus/fslmc/portal/dpaa2_hw_pvt.h | 2 +-
drivers/bus/fslmc/rte_fslmc.h | 2 +-
drivers/bus/ifpga/ifpga_bus.c | 15 +-
drivers/bus/ifpga/rte_bus_ifpga.h | 2 +-
drivers/bus/pci/bsd/pci.c | 21 +-
drivers/bus/pci/linux/pci.c | 4 +-
drivers/bus/pci/linux/pci_uio.c | 73 +-
drivers/bus/pci/linux/pci_vfio.c | 115 +++-
drivers/bus/pci/pci_common.c | 29 +-
drivers/bus/pci/pci_common_uio.c | 21 +-
drivers/bus/pci/rte_bus_pci.h | 4 +-
drivers/bus/vmbus/linux/vmbus_bus.c | 5 +
drivers/bus/vmbus/linux/vmbus_uio.c | 37 +-
drivers/bus/vmbus/rte_bus_vmbus.h | 2 +-
drivers/bus/vmbus/vmbus_common_uio.c | 24 +-
drivers/common/cnxk/roc_cpt.c | 8 +-
drivers/common/cnxk/roc_dev.c | 14 +-
drivers/common/cnxk/roc_irq.c | 106 +--
drivers/common/cnxk/roc_nix_irq.c | 36 +-
drivers/common/cnxk/roc_npa.c | 2 +-
drivers/common/cnxk/roc_platform.h | 49 +-
drivers/common/cnxk/roc_sso.c | 4 +-
drivers/common/cnxk/roc_tim.c | 4 +-
drivers/common/octeontx2/otx2_dev.c | 14 +-
drivers/common/octeontx2/otx2_irq.c | 117 ++--
.../octeontx2/otx2_cryptodev_hw_access.c | 4 +-
drivers/event/octeontx2/otx2_evdev_irq.c | 12 +-
drivers/mempool/octeontx2/otx2_mempool.c | 2 +-
drivers/net/atlantic/atl_ethdev.c | 20 +-
drivers/net/avp/avp_ethdev.c | 8 +-
drivers/net/axgbe/axgbe_ethdev.c | 12 +-
drivers/net/axgbe/axgbe_mdio.c | 6 +-
drivers/net/bnx2x/bnx2x_ethdev.c | 10 +-
drivers/net/bnxt/bnxt_ethdev.c | 33 +-
drivers/net/bnxt/bnxt_irq.c | 4 +-
drivers/net/dpaa/dpaa_ethdev.c | 47 +-
drivers/net/dpaa2/dpaa2_ethdev.c | 10 +-
drivers/net/e1000/em_ethdev.c | 23 +-
drivers/net/e1000/igb_ethdev.c | 79 +--
drivers/net/ena/ena_ethdev.c | 35 +-
drivers/net/enic/enic_main.c | 26 +-
drivers/net/failsafe/failsafe.c | 23 +-
drivers/net/failsafe/failsafe_intr.c | 43 +-
drivers/net/failsafe/failsafe_ops.c | 21 +-
drivers/net/failsafe/failsafe_private.h | 2 +-
drivers/net/fm10k/fm10k_ethdev.c | 32 +-
drivers/net/hinic/hinic_pmd_ethdev.c | 10 +-
drivers/net/hns3/hns3_ethdev.c | 57 +-
drivers/net/hns3/hns3_ethdev_vf.c | 64 +-
drivers/net/hns3/hns3_rxtx.c | 2 +-
drivers/net/i40e/i40e_ethdev.c | 53 +-
drivers/net/iavf/iavf_ethdev.c | 42 +-
drivers/net/iavf/iavf_vchnl.c | 4 +-
drivers/net/ice/ice_dcf.c | 10 +-
drivers/net/ice/ice_dcf_ethdev.c | 21 +-
drivers/net/ice/ice_ethdev.c | 49 +-
drivers/net/igc/igc_ethdev.c | 45 +-
drivers/net/ionic/ionic_ethdev.c | 17 +-
drivers/net/ixgbe/ixgbe_ethdev.c | 66 +-
drivers/net/memif/memif_socket.c | 111 ++-
drivers/net/memif/memif_socket.h | 4 +-
drivers/net/memif/rte_eth_memif.c | 61 +-
drivers/net/memif/rte_eth_memif.h | 2 +-
drivers/net/mlx4/mlx4.c | 19 +-
drivers/net/mlx4/mlx4.h | 2 +-
drivers/net/mlx4/mlx4_intr.c | 47 +-
drivers/net/mlx5/linux/mlx5_os.c | 54 +-
drivers/net/mlx5/linux/mlx5_socket.c | 24 +-
drivers/net/mlx5/mlx5.h | 6 +-
drivers/net/mlx5/mlx5_rxq.c | 42 +-
drivers/net/mlx5/mlx5_trigger.c | 4 +-
drivers/net/mlx5/mlx5_txpp.c | 26 +-
drivers/net/netvsc/hn_ethdev.c | 4 +-
drivers/net/nfp/nfp_common.c | 34 +-
drivers/net/nfp/nfp_ethdev.c | 13 +-
drivers/net/nfp/nfp_ethdev_vf.c | 13 +-
drivers/net/ngbe/ngbe_ethdev.c | 29 +-
drivers/net/octeontx2/otx2_ethdev_irq.c | 35 +-
drivers/net/qede/qede_ethdev.c | 16 +-
drivers/net/sfc/sfc_intr.c | 30 +-
drivers/net/tap/rte_eth_tap.c | 36 +-
drivers/net/tap/rte_eth_tap.h | 2 +-
drivers/net/tap/tap_intr.c | 32 +-
drivers/net/thunderx/nicvf_ethdev.c | 12 +
drivers/net/thunderx/nicvf_struct.h | 2 +-
drivers/net/txgbe/txgbe_ethdev.c | 34 +-
drivers/net/txgbe/txgbe_ethdev_vf.c | 33 +-
drivers/net/vhost/rte_eth_vhost.c | 75 +-
drivers/net/virtio/virtio_ethdev.c | 21 +-
.../net/virtio/virtio_user/virtio_user_dev.c | 48 +-
drivers/net/vmxnet3/vmxnet3_ethdev.c | 43 +-
drivers/raw/ifpga/ifpga_rawdev.c | 62 +-
drivers/raw/ntb/ntb.c | 9 +-
.../regex/octeontx2/otx2_regexdev_hw_access.c | 4 +-
drivers/vdpa/ifc/ifcvf_vdpa.c | 5 +-
drivers/vdpa/mlx5/mlx5_vdpa.c | 10 +
drivers/vdpa/mlx5/mlx5_vdpa.h | 4 +-
drivers/vdpa/mlx5/mlx5_vdpa_event.c | 22 +-
drivers/vdpa/mlx5/mlx5_vdpa_virtq.c | 45 +-
lib/bbdev/rte_bbdev.c | 4 +-
lib/eal/common/eal_common_interrupts.c | 649 ++++++++++++++++++
lib/eal/common/eal_private.h | 11 +
lib/eal/common/meson.build | 1 +
lib/eal/freebsd/eal.c | 1 +
lib/eal/freebsd/eal_alarm.c | 52 +-
lib/eal/freebsd/eal_interrupts.c | 110 ++-
lib/eal/include/meson.build | 2 +-
lib/eal/include/rte_eal_interrupts.h | 269 --------
lib/eal/include/rte_eal_trace.h | 24 +-
lib/eal/include/rte_epoll.h | 118 ++++
lib/eal/include/rte_interrupts.h | 634 ++++++++++++++++-
lib/eal/linux/eal.c | 1 +
lib/eal/linux/eal_alarm.c | 37 +-
lib/eal/linux/eal_dev.c | 63 +-
lib/eal/linux/eal_interrupts.c | 302 +++++---
lib/eal/version.map | 46 +-
lib/ethdev/ethdev_pci.h | 2 +-
lib/ethdev/rte_ethdev.c | 14 +-
131 files changed, 3645 insertions(+), 1706 deletions(-)
create mode 100644 lib/eal/common/eal_common_interrupts.c
delete mode 100644 lib/eal/include/rte_eal_interrupts.h
create mode 100644 lib/eal/include/rte_epoll.h
--
2.18.0
^ permalink raw reply [flat|nested] 152+ messages in thread
* [dpdk-dev] [PATCH v2 1/6] eal/interrupts: implement get set APIs
2021-10-05 12:14 ` [dpdk-dev] [PATCH v2 0/6] " Harman Kalra
@ 2021-10-05 12:14 ` Harman Kalra
2021-10-14 0:58 ` Dmitry Kozlyuk
2021-10-14 7:31 ` [dpdk-dev] " David Marchand
2021-10-05 12:14 ` [dpdk-dev] [PATCH v2 2/6] eal/interrupts: avoid direct access to interrupt handle Harman Kalra
` (4 subsequent siblings)
5 siblings, 2 replies; 152+ messages in thread
From: Harman Kalra @ 2021-10-05 12:14 UTC (permalink / raw)
To: dev, Thomas Monjalon, Harman Kalra, Ray Kinsella
Cc: david.marchand, dmitry.kozliuk
Prototype and implement get/set APIs for the interrupt handle fields.
Users won't be able to access any of the interrupt handle fields
directly and should instead use these get/set APIs to access and
manipulate them.
The internal interrupt header rte_eal_interrupts.h is rearranged:
the APIs it defined are moved to rte_interrupts.h and the epoll-specific
definitions are moved to a new header, rte_epoll.h.
Later in the series rte_eal_interrupts.h will be removed.
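A minimal usage sketch of the accessor style this patch introduces (the fd value and the interrupt type are illustrative; the API names are the ones added by this patch):

	/* Driver init: allocate an instance and touch it only via accessors. */
	struct rte_intr_handle *intr_handle;

	intr_handle = rte_intr_instance_alloc(RTE_INTR_ALLOC_DPDK_ALLOCATOR);
	if (intr_handle == NULL)
		return -rte_errno;

	rte_intr_type_set(intr_handle, RTE_INTR_HANDLE_VFIO_MSIX);
	rte_intr_fd_set(intr_handle, fd);

	/* ... later, on cleanup: */
	rte_intr_instance_free(intr_handle);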
Signed-off-by: Harman Kalra <hkalra@marvell.com>
Acked-by: Ray Kinsella <mdr@ashroe.eu>
---
MAINTAINERS | 1 +
lib/eal/common/eal_common_interrupts.c | 470 +++++++++++++++++++
lib/eal/common/meson.build | 1 +
lib/eal/include/meson.build | 1 +
lib/eal/include/rte_eal_interrupts.h | 207 +--------
lib/eal/include/rte_epoll.h | 118 +++++
lib/eal/include/rte_interrupts.h | 614 ++++++++++++++++++++++++-
lib/eal/version.map | 46 +-
8 files changed, 1245 insertions(+), 213 deletions(-)
create mode 100644 lib/eal/common/eal_common_interrupts.c
create mode 100644 lib/eal/include/rte_epoll.h
diff --git a/MAINTAINERS b/MAINTAINERS
index 278e5b3226..c0e7bba4f7 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -210,6 +210,7 @@ F: app/test/test_memzone.c
Interrupt Subsystem
M: Harman Kalra <hkalra@marvell.com>
+F: lib/eal/include/rte_epoll.h
F: lib/eal/*/*interrupts.*
F: app/test/test_interrupts.c
diff --git a/lib/eal/common/eal_common_interrupts.c b/lib/eal/common/eal_common_interrupts.c
new file mode 100644
index 0000000000..9b572a805f
--- /dev/null
+++ b/lib/eal/common/eal_common_interrupts.c
@@ -0,0 +1,470 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2021 Marvell.
+ */
+
+#include <stdlib.h>
+#include <string.h>
+
+#include <rte_errno.h>
+#include <rte_log.h>
+#include <rte_malloc.h>
+
+#include <rte_interrupts.h>
+
+
+struct rte_intr_handle *rte_intr_instance_alloc(uint32_t flags)
+{
+ struct rte_intr_handle *intr_handle;
+ bool mem_allocator;
+
+ mem_allocator = (flags & RTE_INTR_ALLOC_DPDK_ALLOCATOR) != 0;
+ if (mem_allocator)
+ intr_handle = rte_zmalloc(NULL, sizeof(struct rte_intr_handle),
+ 0);
+ else
+ intr_handle = calloc(1, sizeof(struct rte_intr_handle));
+ if (!intr_handle) {
+ RTE_LOG(ERR, EAL, "Fail to allocate intr_handle\n");
+ rte_errno = ENOMEM;
+ return NULL;
+ }
+
+ intr_handle->nb_intr = RTE_MAX_RXTX_INTR_VEC_ID;
+ intr_handle->mem_allocator = mem_allocator;
+
+ return intr_handle;
+}
+
+int rte_intr_instance_copy(struct rte_intr_handle *intr_handle,
+ const struct rte_intr_handle *src)
+{
+ if (intr_handle == NULL) {
+ RTE_LOG(ERR, EAL, "Interrupt instance unallocated\n");
+ rte_errno = ENOTSUP;
+ goto fail;
+ }
+
+ if (src == NULL) {
+ RTE_LOG(ERR, EAL, "Source interrupt instance unallocated\n");
+ rte_errno = EINVAL;
+ goto fail;
+ }
+
+ intr_handle->fd = src->fd;
+ intr_handle->vfio_dev_fd = src->vfio_dev_fd;
+ intr_handle->type = src->type;
+ intr_handle->max_intr = src->max_intr;
+ intr_handle->nb_efd = src->nb_efd;
+ intr_handle->efd_counter_size = src->efd_counter_size;
+
+ memcpy(intr_handle->efds, src->efds, src->nb_intr);
+ memcpy(intr_handle->elist, src->elist, src->nb_intr);
+
+ return 0;
+fail:
+ return -rte_errno;
+}
+
+int rte_intr_instance_mem_allocator_get(
+ const struct rte_intr_handle *intr_handle)
+{
+ if (intr_handle == NULL) {
+ RTE_LOG(ERR, EAL, "Interrupt instance unallocated\n");
+ return -ENOTSUP;
+ }
+
+ return intr_handle->mem_allocator;
+}
+
+void rte_intr_instance_free(struct rte_intr_handle *intr_handle)
+{
+ if (intr_handle == NULL) {
+ RTE_LOG(ERR, EAL, "Interrupt instance unallocated\n");
+ rte_errno = ENOTSUP;
+ }
+
+ if (intr_handle->mem_allocator)
+ rte_free(intr_handle);
+ else
+ free(intr_handle);
+}
+
+int rte_intr_fd_set(struct rte_intr_handle *intr_handle, int fd)
+{
+ if (intr_handle == NULL) {
+ RTE_LOG(ERR, EAL, "Interrupt instance unallocated\n");
+ rte_errno = ENOTSUP;
+ goto fail;
+ }
+
+ intr_handle->fd = fd;
+
+ return 0;
+fail:
+ return -rte_errno;
+}
+
+int rte_intr_fd_get(const struct rte_intr_handle *intr_handle)
+{
+ if (intr_handle == NULL) {
+ RTE_LOG(ERR, EAL, "Interrupt instance unallocated\n");
+ rte_errno = ENOTSUP;
+ goto fail;
+ }
+
+ return intr_handle->fd;
+fail:
+ return -1;
+}
+
+int rte_intr_type_set(struct rte_intr_handle *intr_handle,
+ enum rte_intr_handle_type type)
+{
+ if (intr_handle == NULL) {
+ RTE_LOG(ERR, EAL, "Interrupt instance unallocated\n");
+ rte_errno = ENOTSUP;
+ goto fail;
+ }
+
+ intr_handle->type = type;
+
+ return 0;
+fail:
+ return -rte_errno;
+}
+
+enum rte_intr_handle_type rte_intr_type_get(
+ const struct rte_intr_handle *intr_handle)
+{
+ if (intr_handle == NULL) {
+ RTE_LOG(ERR, EAL, "Interrupt instance unallocated\n");
+ rte_errno = ENOTSUP;
+ return RTE_INTR_HANDLE_UNKNOWN;
+ }
+
+ return intr_handle->type;
+}
+
+int rte_intr_dev_fd_set(struct rte_intr_handle *intr_handle, int fd)
+{
+ if (intr_handle == NULL) {
+ RTE_LOG(ERR, EAL, "Interrupt instance unallocated\n");
+ rte_errno = ENOTSUP;
+ goto fail;
+ }
+
+ intr_handle->vfio_dev_fd = fd;
+
+ return 0;
+fail:
+ return -rte_errno;
+}
+
+int rte_intr_dev_fd_get(const struct rte_intr_handle *intr_handle)
+{
+ if (intr_handle == NULL) {
+ RTE_LOG(ERR, EAL, "Interrupt instance unallocated\n");
+ rte_errno = ENOTSUP;
+ goto fail;
+ }
+
+ return intr_handle->vfio_dev_fd;
+fail:
+ return -1;
+}
+
+int rte_intr_max_intr_set(struct rte_intr_handle *intr_handle,
+ int max_intr)
+{
+ if (intr_handle == NULL) {
+ RTE_LOG(ERR, EAL, "Interrupt instance unallocated\n");
+ rte_errno = ENOTSUP;
+ goto fail;
+ }
+
+ if (max_intr > intr_handle->nb_intr) {
+ RTE_LOG(ERR, EAL, "Max_intr=%d greater than RTE_MAX_RXTX_INTR_VEC_ID=%d",
+ max_intr, intr_handle->nb_intr);
+ rte_errno = ERANGE;
+ goto fail;
+ }
+
+ intr_handle->max_intr = max_intr;
+
+ return 0;
+fail:
+ return -rte_errno;
+}
+
+int rte_intr_max_intr_get(const struct rte_intr_handle *intr_handle)
+{
+ if (intr_handle == NULL) {
+ RTE_LOG(ERR, EAL, "Interrupt instance unallocated\n");
+ rte_errno = ENOTSUP;
+ goto fail;
+ }
+
+ return intr_handle->max_intr;
+fail:
+ return -rte_errno;
+}
+
+int rte_intr_nb_efd_set(struct rte_intr_handle *intr_handle,
+ int nb_efd)
+{
+ if (intr_handle == NULL) {
+ RTE_LOG(ERR, EAL, "Interrupt instance unallocated\n");
+ rte_errno = ENOTSUP;
+ goto fail;
+ }
+
+ intr_handle->nb_efd = nb_efd;
+
+ return 0;
+fail:
+ return -rte_errno;
+}
+
+int rte_intr_nb_efd_get(const struct rte_intr_handle *intr_handle)
+{
+ if (intr_handle == NULL) {
+ RTE_LOG(ERR, EAL, "Interrupt instance unallocated\n");
+ rte_errno = ENOTSUP;
+ goto fail;
+ }
+
+ return intr_handle->nb_efd;
+fail:
+ return -rte_errno;
+}
+
+int rte_intr_nb_intr_get(const struct rte_intr_handle *intr_handle)
+{
+ if (intr_handle == NULL) {
+ RTE_LOG(ERR, EAL, "Interrupt instance unallocated\n");
+ rte_errno = ENOTSUP;
+ goto fail;
+ }
+
+ return intr_handle->nb_intr;
+fail:
+ return -rte_errno;
+}
+
+int rte_intr_efd_counter_size_set(struct rte_intr_handle *intr_handle,
+ uint8_t efd_counter_size)
+{
+ if (intr_handle == NULL) {
+ RTE_LOG(ERR, EAL, "Interrupt instance unallocated\n");
+ rte_errno = ENOTSUP;
+ goto fail;
+ }
+
+ intr_handle->efd_counter_size = efd_counter_size;
+
+ return 0;
+fail:
+ return -rte_errno;
+}
+
+int rte_intr_efd_counter_size_get(const struct rte_intr_handle *intr_handle)
+{
+ if (intr_handle == NULL) {
+ RTE_LOG(ERR, EAL, "Interrupt instance unallocated\n");
+ rte_errno = ENOTSUP;
+ goto fail;
+ }
+
+ return intr_handle->efd_counter_size;
+fail:
+ return -rte_errno;
+}
+
+int rte_intr_efds_index_get(const struct rte_intr_handle *intr_handle,
+ int index)
+{
+ if (intr_handle == NULL) {
+ RTE_LOG(ERR, EAL, "Interrupt instance unallocated\n");
+ rte_errno = ENOTSUP;
+ goto fail;
+ }
+
+ if (index >= intr_handle->nb_intr) {
+ RTE_LOG(ERR, EAL, "Invalid size %d, max limit %d\n", index,
+ intr_handle->nb_intr);
+ rte_errno = EINVAL;
+ goto fail;
+ }
+
+ return intr_handle->efds[index];
+fail:
+ return -rte_errno;
+}
+
+int rte_intr_efds_index_set(struct rte_intr_handle *intr_handle,
+ int index, int fd)
+{
+ if (intr_handle == NULL) {
+ RTE_LOG(ERR, EAL, "Interrupt instance unallocated\n");
+ rte_errno = ENOTSUP;
+ goto fail;
+ }
+
+ if (index >= intr_handle->nb_intr) {
+ RTE_LOG(ERR, EAL, "Invalid size %d, max limit %d\n", index,
+ intr_handle->nb_intr);
+ rte_errno = ERANGE;
+ goto fail;
+ }
+
+ intr_handle->efds[index] = fd;
+
+ return 0;
+fail:
+ return -rte_errno;
+}
+
+struct rte_epoll_event *rte_intr_elist_index_get(
+ struct rte_intr_handle *intr_handle, int index)
+{
+ if (intr_handle == NULL) {
+ RTE_LOG(ERR, EAL, "Interrupt instance unallocated\n");
+ rte_errno = ENOTSUP;
+ goto fail;
+ }
+
+ if (index >= intr_handle->nb_intr) {
+ RTE_LOG(ERR, EAL, "Invalid size %d, max limit %d\n", index,
+ intr_handle->nb_intr);
+ rte_errno = ERANGE;
+ goto fail;
+ }
+
+ return &intr_handle->elist[index];
+fail:
+ return NULL;
+}
+
+int rte_intr_elist_index_set(struct rte_intr_handle *intr_handle,
+ int index, struct rte_epoll_event elist)
+{
+ if (intr_handle == NULL) {
+ RTE_LOG(ERR, EAL, "Interrupt instance unallocated\n");
+ rte_errno = ENOTSUP;
+ goto fail;
+ }
+
+ if (index >= intr_handle->nb_intr) {
+ RTE_LOG(ERR, EAL, "Invalid size %d, max limit %d\n", index,
+ intr_handle->nb_intr);
+ rte_errno = ERANGE;
+ goto fail;
+ }
+
+ intr_handle->elist[index] = elist;
+
+ return 0;
+fail:
+ return -rte_errno;
+}
+
+int rte_intr_vec_list_alloc(struct rte_intr_handle *intr_handle,
+ const char *name, int size)
+{
+ if (intr_handle == NULL) {
+ RTE_LOG(ERR, EAL, "Interrupt instance unallocated\n");
+ rte_errno = ENOTSUP;
+ goto fail;
+ }
+
+ /* Vector list already allocated */
+ if (intr_handle->intr_vec)
+ return 0;
+
+ if (size > intr_handle->nb_intr) {
+ RTE_LOG(ERR, EAL, "Invalid size %d, max limit %d\n", size,
+ intr_handle->nb_intr);
+ rte_errno = ERANGE;
+ goto fail;
+ }
+
+ intr_handle->intr_vec = rte_zmalloc(name, size * sizeof(int), 0);
+ if (!intr_handle->intr_vec) {
+ RTE_LOG(ERR, EAL, "Failed to allocate %d intr_vec", size);
+ rte_errno = ENOMEM;
+ goto fail;
+ }
+
+ intr_handle->vec_list_size = size;
+
+ return 0;
+fail:
+ return -rte_errno;
+}
+
+int rte_intr_vec_list_index_get(const struct rte_intr_handle *intr_handle,
+ int index)
+{
+ if (intr_handle == NULL) {
+ RTE_LOG(ERR, EAL, "Interrupt instance unallocated\n");
+ rte_errno = ENOTSUP;
+ goto fail;
+ }
+
+ if (!intr_handle->intr_vec) {
+ RTE_LOG(ERR, EAL, "Intr vector list not allocated\n");
+ rte_errno = ENOTSUP;
+ goto fail;
+ }
+
+ if (index > intr_handle->vec_list_size) {
+ RTE_LOG(ERR, EAL, "Index %d greater than vec list size %d\n",
+ index, intr_handle->vec_list_size);
+ rte_errno = ERANGE;
+ goto fail;
+ }
+
+ return intr_handle->intr_vec[index];
+fail:
+ return -rte_errno;
+}
+
+int rte_intr_vec_list_index_set(struct rte_intr_handle *intr_handle,
+ int index, int vec)
+{
+ if (intr_handle == NULL) {
+ RTE_LOG(ERR, EAL, "Interrupt instance unallocated\n");
+ rte_errno = ENOTSUP;
+ goto fail;
+ }
+
+ if (!intr_handle->intr_vec) {
+ RTE_LOG(ERR, EAL, "Intr vector list not allocated\n");
+ rte_errno = ENOTSUP;
+ goto fail;
+ }
+
+ if (index > intr_handle->vec_list_size) {
+ RTE_LOG(ERR, EAL, "Index %d greater than vec list size %d\n",
+ index, intr_handle->vec_list_size);
+ rte_errno = ERANGE;
+ goto fail;
+ }
+
+ intr_handle->intr_vec[index] = vec;
+
+ return 0;
+fail:
+ return -rte_errno;
+}
+
+void rte_intr_vec_list_free(struct rte_intr_handle *intr_handle)
+{
+ if (intr_handle == NULL) {
+ RTE_LOG(ERR, EAL, "Interrupt instance unallocated\n");
+ rte_errno = ENOTSUP;
+ }
+
+ rte_free(intr_handle->intr_vec);
+ intr_handle->intr_vec = NULL;
+}
diff --git a/lib/eal/common/meson.build b/lib/eal/common/meson.build
index 6d01b0f072..917758cc65 100644
--- a/lib/eal/common/meson.build
+++ b/lib/eal/common/meson.build
@@ -15,6 +15,7 @@ sources += files(
'eal_common_errno.c',
'eal_common_fbarray.c',
'eal_common_hexdump.c',
+ 'eal_common_interrupts.c',
'eal_common_launch.c',
'eal_common_lcore.c',
'eal_common_log.c',
diff --git a/lib/eal/include/meson.build b/lib/eal/include/meson.build
index 88a9eba12f..8e258607b8 100644
--- a/lib/eal/include/meson.build
+++ b/lib/eal/include/meson.build
@@ -19,6 +19,7 @@ headers += files(
'rte_eal_memconfig.h',
'rte_eal_trace.h',
'rte_errno.h',
+ 'rte_epoll.h',
'rte_fbarray.h',
'rte_hexdump.h',
'rte_hypervisor.h',
diff --git a/lib/eal/include/rte_eal_interrupts.h b/lib/eal/include/rte_eal_interrupts.h
index 00bcc19b6d..b01e987898 100644
--- a/lib/eal/include/rte_eal_interrupts.h
+++ b/lib/eal/include/rte_eal_interrupts.h
@@ -39,32 +39,6 @@ enum rte_intr_handle_type {
RTE_INTR_HANDLE_MAX /**< count of elements */
};
-#define RTE_INTR_EVENT_ADD 1UL
-#define RTE_INTR_EVENT_DEL 2UL
-
-typedef void (*rte_intr_event_cb_t)(int fd, void *arg);
-
-struct rte_epoll_data {
- uint32_t event; /**< event type */
- void *data; /**< User data */
- rte_intr_event_cb_t cb_fun; /**< IN: callback fun */
- void *cb_arg; /**< IN: callback arg */
-};
-
-enum {
- RTE_EPOLL_INVALID = 0,
- RTE_EPOLL_VALID,
- RTE_EPOLL_EXEC,
-};
-
-/** interrupt epoll event obj, taken by epoll_event.ptr */
-struct rte_epoll_event {
- uint32_t status; /**< OUT: event status */
- int fd; /**< OUT: event fd */
- int epfd; /**< OUT: epoll instance the ev associated with */
- struct rte_epoll_data epdata;
-};
-
/** Handle for interrupts. */
struct rte_intr_handle {
RTE_STD_C11
@@ -81,189 +55,18 @@ struct rte_intr_handle {
};
void *handle; /**< device driver handle (Windows) */
};
+ bool mem_allocator;
enum rte_intr_handle_type type; /**< handle type */
uint32_t max_intr; /**< max interrupt requested */
uint32_t nb_efd; /**< number of available efd(event fd) */
uint8_t efd_counter_size; /**< size of efd counter, used for vdev */
+ uint16_t nb_intr;
+ /**< Max vector count, default RTE_MAX_RXTX_INTR_VEC_ID */
int efds[RTE_MAX_RXTX_INTR_VEC_ID]; /**< intr vectors/efds mapping */
struct rte_epoll_event elist[RTE_MAX_RXTX_INTR_VEC_ID];
- /**< intr vector epoll event */
+ /**< intr vector epoll event */
+ uint16_t vec_list_size;
int *intr_vec; /**< intr vector number array */
};
-#define RTE_EPOLL_PER_THREAD -1 /**< to hint using per thread epfd */
-
-/**
- * It waits for events on the epoll instance.
- * Retries if signal received.
- *
- * @param epfd
- * Epoll instance fd on which the caller wait for events.
- * @param events
- * Memory area contains the events that will be available for the caller.
- * @param maxevents
- * Up to maxevents are returned, must greater than zero.
- * @param timeout
- * Specifying a timeout of -1 causes a block indefinitely.
- * Specifying a timeout equal to zero cause to return immediately.
- * @return
- * - On success, returns the number of available event.
- * - On failure, a negative value.
- */
-int
-rte_epoll_wait(int epfd, struct rte_epoll_event *events,
- int maxevents, int timeout);
-
-/**
- * It waits for events on the epoll instance.
- * Does not retry if signal received.
- *
- * @param epfd
- * Epoll instance fd on which the caller wait for events.
- * @param events
- * Memory area contains the events that will be available for the caller.
- * @param maxevents
- * Up to maxevents are returned, must greater than zero.
- * @param timeout
- * Specifying a timeout of -1 causes a block indefinitely.
- * Specifying a timeout equal to zero cause to return immediately.
- * @return
- * - On success, returns the number of available event.
- * - On failure, a negative value.
- */
-__rte_experimental
-int
-rte_epoll_wait_interruptible(int epfd, struct rte_epoll_event *events,
- int maxevents, int timeout);
-
-/**
- * It performs control operations on epoll instance referred by the epfd.
- * It requests that the operation op be performed for the target fd.
- *
- * @param epfd
- * Epoll instance fd on which the caller perform control operations.
- * @param op
- * The operation be performed for the target fd.
- * @param fd
- * The target fd on which the control ops perform.
- * @param event
- * Describes the object linked to the fd.
- * Note: The caller must take care the object deletion after CTL_DEL.
- * @return
- * - On success, zero.
- * - On failure, a negative value.
- */
-int
-rte_epoll_ctl(int epfd, int op, int fd,
- struct rte_epoll_event *event);
-
-/**
- * The function returns the per thread epoll instance.
- *
- * @return
- * epfd the epoll instance referred to.
- */
-int
-rte_intr_tls_epfd(void);
-
-/**
- * @param intr_handle
- * Pointer to the interrupt handle.
- * @param epfd
- * Epoll instance fd which the intr vector associated to.
- * @param op
- * The operation be performed for the vector.
- * Operation type of {ADD, DEL}.
- * @param vec
- * RX intr vector number added to the epoll instance wait list.
- * @param data
- * User raw data.
- * @return
- * - On success, zero.
- * - On failure, a negative value.
- */
-int
-rte_intr_rx_ctl(struct rte_intr_handle *intr_handle,
- int epfd, int op, unsigned int vec, void *data);
-
-/**
- * It deletes registered eventfds.
- *
- * @param intr_handle
- * Pointer to the interrupt handle.
- */
-void
-rte_intr_free_epoll_fd(struct rte_intr_handle *intr_handle);
-
-/**
- * It enables the packet I/O interrupt event if it's necessary.
- * It creates event fd for each interrupt vector when MSIX is used,
- * otherwise it multiplexes a single event fd.
- *
- * @param intr_handle
- * Pointer to the interrupt handle.
- * @param nb_efd
- * Number of interrupt vector trying to enable.
- * The value 0 is not allowed.
- * @return
- * - On success, zero.
- * - On failure, a negative value.
- */
-int
-rte_intr_efd_enable(struct rte_intr_handle *intr_handle, uint32_t nb_efd);
-
-/**
- * It disables the packet I/O interrupt event.
- * It deletes registered eventfds and closes the open fds.
- *
- * @param intr_handle
- * Pointer to the interrupt handle.
- */
-void
-rte_intr_efd_disable(struct rte_intr_handle *intr_handle);
-
-/**
- * The packet I/O interrupt on datapath is enabled or not.
- *
- * @param intr_handle
- * Pointer to the interrupt handle.
- */
-int
-rte_intr_dp_is_en(struct rte_intr_handle *intr_handle);
-
-/**
- * The interrupt handle instance allows other causes or not.
- * Other causes stand for any none packet I/O interrupts.
- *
- * @param intr_handle
- * Pointer to the interrupt handle.
- */
-int
-rte_intr_allow_others(struct rte_intr_handle *intr_handle);
-
-/**
- * The multiple interrupt vector capability of interrupt handle instance.
- * It returns zero if no multiple interrupt vector support.
- *
- * @param intr_handle
- * Pointer to the interrupt handle.
- */
-int
-rte_intr_cap_multiple(struct rte_intr_handle *intr_handle);
-
-/**
- * @warning
- * @b EXPERIMENTAL: this API may change without prior notice
- *
- * @internal
- * Check if currently executing in interrupt context
- *
- * @return
- * - non zero in case of interrupt context
- * - zero in case of process context
- */
-__rte_experimental
-int
-rte_thread_is_intr(void);
-
#endif /* _RTE_EAL_INTERRUPTS_H_ */
diff --git a/lib/eal/include/rte_epoll.h b/lib/eal/include/rte_epoll.h
new file mode 100644
index 0000000000..56b7b6bad6
--- /dev/null
+++ b/lib/eal/include/rte_epoll.h
@@ -0,0 +1,118 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2021 Marvell International Ltd.
+ */
+
+#ifndef __RTE_EPOLL_H__
+#define __RTE_EPOLL_H__
+
+/**
+ * @file
+ * The rte_epoll provides interfaces functions to add delete events,
+ * wait poll for an event.
+ */
+
+#include <stdint.h>
+
+#include <rte_compat.h>
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+#define RTE_INTR_EVENT_ADD 1UL
+#define RTE_INTR_EVENT_DEL 2UL
+
+typedef void (*rte_intr_event_cb_t)(int fd, void *arg);
+
+struct rte_epoll_data {
+ uint32_t event; /**< event type */
+ void *data; /**< User data */
+ rte_intr_event_cb_t cb_fun; /**< IN: callback fun */
+ void *cb_arg; /**< IN: callback arg */
+};
+
+enum {
+ RTE_EPOLL_INVALID = 0,
+ RTE_EPOLL_VALID,
+ RTE_EPOLL_EXEC,
+};
+
+/** interrupt epoll event obj, taken by epoll_event.ptr */
+struct rte_epoll_event {
+ uint32_t status; /**< OUT: event status */
+ int fd; /**< OUT: event fd */
+ int epfd; /**< OUT: epoll instance the ev associated with */
+ struct rte_epoll_data epdata;
+};
+
+#define RTE_EPOLL_PER_THREAD -1 /**< to hint using per thread epfd */
+
+/**
+ * It waits for events on the epoll instance.
+ * Retries if signal received.
+ *
+ * @param epfd
+ * Epoll instance fd on which the caller wait for events.
+ * @param events
+ * Memory area contains the events that will be available for the caller.
+ * @param maxevents
+ * Up to maxevents are returned, must greater than zero.
+ * @param timeout
+ * Specifying a timeout of -1 causes a block indefinitely.
+ * Specifying a timeout equal to zero cause to return immediately.
+ * @return
+ * - On success, returns the number of available event.
+ * - On failure, a negative value.
+ */
+int
+rte_epoll_wait(int epfd, struct rte_epoll_event *events,
+ int maxevents, int timeout);
+
+/**
+ * It waits for events on the epoll instance.
+ * Does not retry if signal received.
+ *
+ * @param epfd
+ * Epoll instance fd on which the caller wait for events.
+ * @param events
+ * Memory area contains the events that will be available for the caller.
+ * @param maxevents
+ * Up to maxevents are returned, must greater than zero.
+ * @param timeout
+ * Specifying a timeout of -1 causes a block indefinitely.
+ * Specifying a timeout equal to zero cause to return immediately.
+ * @return
+ * - On success, returns the number of available event.
+ * - On failure, a negative value.
+ */
+__rte_experimental
+int
+rte_epoll_wait_interruptible(int epfd, struct rte_epoll_event *events,
+ int maxevents, int timeout);
+
+/**
+ * It performs control operations on epoll instance referred by the epfd.
+ * It requests that the operation op be performed for the target fd.
+ *
+ * @param epfd
+ * Epoll instance fd on which the caller perform control operations.
+ * @param op
+ * The operation be performed for the target fd.
+ * @param fd
+ * The target fd on which the control ops perform.
+ * @param event
+ * Describes the object linked to the fd.
+ * Note: The caller must take care the object deletion after CTL_DEL.
+ * @return
+ * - On success, zero.
+ * - On failure, a negative value.
+ */
+int
+rte_epoll_ctl(int epfd, int op, int fd,
+ struct rte_epoll_event *event);
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif /* __RTE_EPOLL_H__ */
diff --git a/lib/eal/include/rte_interrupts.h b/lib/eal/include/rte_interrupts.h
index cc3bf45d8c..db830907fb 100644
--- a/lib/eal/include/rte_interrupts.h
+++ b/lib/eal/include/rte_interrupts.h
@@ -5,8 +5,11 @@
#ifndef _RTE_INTERRUPTS_H_
#define _RTE_INTERRUPTS_H_
+#include <stdbool.h>
+
#include <rte_common.h>
#include <rte_compat.h>
+#include <rte_epoll.h>
/**
* @file
@@ -22,6 +25,16 @@ extern "C" {
/** Interrupt handle */
struct rte_intr_handle;
+/** Interrupt instance allocation flags
+ * @see rte_intr_instance_alloc
+ */
+/** Allocate interrupt instance using DPDK memory management APIs */
+#define RTE_INTR_ALLOC_DPDK_ALLOCATOR 0x00000001
+
+#define RTE_INTR_HANDLE_DEFAULT_SIZE 1
+
+#include "rte_eal_interrupts.h"
+
/** Function to be registered for the specific interrupt */
typedef void (*rte_intr_callback_fn)(void *cb_arg);
@@ -32,8 +45,6 @@ typedef void (*rte_intr_callback_fn)(void *cb_arg);
typedef void (*rte_intr_unregister_callback_fn)(struct rte_intr_handle *intr_handle,
void *cb_arg);
-#include "rte_eal_interrupts.h"
-
/**
* It registers the callback for the specific interrupt. Multiple
* callbacks can be registered at the same time.
@@ -163,6 +174,605 @@ int rte_intr_disable(const struct rte_intr_handle *intr_handle);
__rte_experimental
int rte_intr_ack(const struct rte_intr_handle *intr_handle);
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Check if currently executing in interrupt context
+ *
+ * @return
+ * - non zero in case of interrupt context
+ * - zero in case of process context
+ */
+__rte_experimental
+int
+rte_thread_is_intr(void);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * It allocates memory for interrupt instance. API takes flag as an argument
+ * which define from where memory should be allocated i.e. using DPDK memory
+ * management library APIs or normal heap allocation.
+ * Default memory allocation for event fds and event list array is done which
+ * can be realloced later as per the requirement.
+ *
+ * This function should be called from application or driver, before calling any
+ * of the interrupt APIs.
+ *
+ * @param flags
+ * Memory allocation from DPDK allocator or normal allocation
+ *
+ * @return
+ * - On success, address of first interrupt handle.
+ * - On failure, NULL.
+ */
+__rte_experimental
+struct rte_intr_handle *
+rte_intr_instance_alloc(uint32_t flags);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * This API is used to free the memory allocated for event fds. event lists
+ * and interrupt handle array.
+ *
+ * @param intr_handle
+ * Base address of interrupt handle array.
+ *
+ */
+__rte_experimental
+void
+rte_intr_instance_free(struct rte_intr_handle *intr_handle);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * This API is used to set the fd field of interrupt handle with user provided
+ * file descriptor.
+ *
+ * @param intr_handle
+ * pointer to the interrupt handle.
+ * @param fd
+ * file descriptor value provided by user.
+ *
+ * @return
+ * - On success, zero.
+ * - On failure, a negative value.
+ */
+__rte_experimental
+int
+rte_intr_fd_set(struct rte_intr_handle *intr_handle, int fd);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Returns the fd field of the given interrupt handle instance.
+ *
+ * @param intr_handle
+ * pointer to the interrupt handle.
+ *
+ * @return
+ * - On success, fd field.
+ * - On failure, a negative value.
+ */
+__rte_experimental
+int
+rte_intr_fd_get(const struct rte_intr_handle *intr_handle);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * This API is used to set the type field of interrupt handle with user provided
+ * interrupt type.
+ *
+ * @param intr_handle
+ * pointer to the interrupt handle.
+ * @param type
+ * interrupt type
+ *
+ * @return
+ * - On success, zero.
+ * - On failure, a negative value.
+ */
+__rte_experimental
+int
+rte_intr_type_set(struct rte_intr_handle *intr_handle,
+ enum rte_intr_handle_type type);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Returns the type field of the given interrupt handle instance.
+ *
+ * @param intr_handle
+ * pointer to the interrupt handle.
+ *
+ * @return
+ * - On success, interrupt type
+ * - On failure, RTE_INTR_HANDLE_UNKNOWN.
+ */
+__rte_experimental
+enum rte_intr_handle_type
+rte_intr_type_get(const struct rte_intr_handle *intr_handle);
+
+/**
+ * @internal
+ * The function returns the per thread epoll instance.
+ *
+ * @return
+ * epfd the epoll instance referred to.
+ */
+__rte_internal
+int
+rte_intr_tls_epfd(void);
+
+/**
+ * @internal
+ * @param intr_handle
+ * Pointer to the interrupt handle.
+ * @param epfd
+ * Epoll instance fd which the intr vector associated to.
+ * @param op
+ * The operation be performed for the vector.
+ * Operation type of {ADD, DEL}.
+ * @param vec
+ * RX intr vector number added to the epoll instance wait list.
+ * @param data
+ * User raw data.
+ * @return
+ * - On success, zero.
+ * - On failure, a negative value.
+ */
+__rte_internal
+int
+rte_intr_rx_ctl(struct rte_intr_handle *intr_handle,
+ int epfd, int op, unsigned int vec, void *data);
+
+/**
+ * @internal
+ * It deletes registered eventfds.
+ *
+ * @param intr_handle
+ * Pointer to the interrupt handle.
+ */
+__rte_internal
+void
+rte_intr_free_epoll_fd(struct rte_intr_handle *intr_handle);
+
+/**
+ * @internal
+ * It enables the packet I/O interrupt event if it's necessary.
+ * It creates event fd for each interrupt vector when MSIX is used,
+ * otherwise it multiplexes a single event fd.
+ *
+ * @param intr_handle
+ * Pointer to the interrupt handle.
+ * @param nb_efd
+ * Number of interrupt vector trying to enable.
+ * The value 0 is not allowed.
+ * @return
+ * - On success, zero.
+ * - On failure, a negative value.
+ */
+__rte_internal
+int
+rte_intr_efd_enable(struct rte_intr_handle *intr_handle, uint32_t nb_efd);
+
+/**
+ * @internal
+ * It disables the packet I/O interrupt event.
+ * It deletes registered eventfds and closes the open fds.
+ *
+ * @param intr_handle
+ * Pointer to the interrupt handle.
+ */
+__rte_internal
+void
+rte_intr_efd_disable(struct rte_intr_handle *intr_handle);
+
+/**
+ * @internal
+ * The packet I/O interrupt on datapath is enabled or not.
+ *
+ * @param intr_handle
+ * Pointer to the interrupt handle.
+ */
+__rte_internal
+int
+rte_intr_dp_is_en(struct rte_intr_handle *intr_handle);
+
+/**
+ * @internal
+ * The interrupt handle instance allows other causes or not.
+ * Other causes stand for any none packet I/O interrupts.
+ *
+ * @param intr_handle
+ * Pointer to the interrupt handle.
+ */
+__rte_internal
+int
+rte_intr_allow_others(struct rte_intr_handle *intr_handle);
+
+/**
+ * @internal
+ * The multiple interrupt vector capability of interrupt handle instance.
+ * It returns zero if no multiple interrupt vector support.
+ *
+ * @param intr_handle
+ * Pointer to the interrupt handle.
+ */
+__rte_internal
+int
+rte_intr_cap_multiple(struct rte_intr_handle *intr_handle);
+
+/**
+ * @internal
+ * This API is used to populate interrupt handle, with src handler fields.
+ *
+ * @param intr_handle
+ * Start address of interrupt handles
+ * @param src
+ * Source interrupt handle to be cloned.
+ *
+ * @return
+ * - On success, zero.
+ * - On failure, a negative value.
+ */
+__rte_internal
+int
+rte_intr_instance_copy(struct rte_intr_handle *intr_handle,
+ const struct rte_intr_handle *src);
+
+/**
+ * @internal
+ * This API is used to set the device fd field of interrupt handle with user
+ * provided dev fd. Device fd corresponds to VFIO device fd or UIO config fd.
+ *
+ * @param intr_handle
+ * pointer to the interrupt handle.
+ * @param fd
+ * interrupt type
+ *
+ * @return
+ * - On success, zero.
+ * - On failure, a negative value.
+ */
+__rte_internal
+int
+rte_intr_dev_fd_set(struct rte_intr_handle *intr_handle, int fd);
+
+/**
+ * @internal
+ * Returns the device fd field of the given interrupt handle instance.
+ *
+ * @param intr_handle
+ * pointer to the interrupt handle.
+ *
+ * @return
+ * - On success, dev fd.
+ * - On failure, a negative value.
+ */
+__rte_internal
+int
+rte_intr_dev_fd_get(const struct rte_intr_handle *intr_handle);
+
+/**
+ * @internal
+ * This API is used to set the max intr field of interrupt handle with user
+ * provided max intr value.
+ *
+ * @param intr_handle
+ * pointer to the interrupt handle.
+ * @param max_intr
+ * interrupt type
+ *
+ * @return
+ * - On success, zero.
+ * - On failure, a negative value.
+ */
+__rte_internal
+int
+rte_intr_max_intr_set(struct rte_intr_handle *intr_handle, int max_intr);
+
+/**
+ * @internal
+ * Returns the max intr field of the given interrupt handle instance.
+ *
+ * @param intr_handle
+ * pointer to the interrupt handle.
+ *
+ * @return
+ * - On success, max intr.
+ * - On failure, a negative value.
+ */
+__rte_internal
+int
+rte_intr_max_intr_get(const struct rte_intr_handle *intr_handle);
+
+/**
+ * @internal
+ * This API is used to set the no of event fd field of interrupt handle with
+ * user provided available event file descriptor value.
+ *
+ * @param intr_handle
+ * pointer to the interrupt handle.
+ * @param nb_efd
+ * Available event fd
+ *
+ * @return
+ * - On success, zero.
+ * - On failure, a negative value.
+ */
+__rte_internal
+int
+rte_intr_nb_efd_set(struct rte_intr_handle *intr_handle, int nb_efd);
+
+/**
+ * @internal
+ * Returns the no of available event fd field of the given interrupt handle
+ * instance.
+ *
+ * @param intr_handle
+ * pointer to the interrupt handle.
+ *
+ * @return
+ * - On success, nb_efd
+ * - On failure, a negative value.
+ */
+__rte_internal
+int
+rte_intr_nb_efd_get(const struct rte_intr_handle *intr_handle);
+
+/**
+ * @internal
+ * Returns the no of interrupt vector field of the given interrupt handle
+ * instance. This field is to configured on device probe time, and based on
+ * this value efds and elist arrays are dynamically allocated. By default
+ * this value is set to RTE_MAX_RXTX_INTR_VEC_ID.
+ * For eg. in case of PCI device, its msix size is queried and efds/elist
+ * arrays are allocated accordingly.
+ *
+ * @param intr_handle
+ * pointer to the interrupt handle.
+ *
+ * @return
+ * - On success, nb_intr
+ * - On failure, a negative value.
+ */
+__rte_internal
+int
+rte_intr_nb_intr_get(const struct rte_intr_handle *intr_handle);
+
+/**
+ * @internal
+ * This API is used to set the event fd counter size field of interrupt handle
+ * with user provided efd counter size.
+ *
+ * @param intr_handle
+ * pointer to the interrupt handle.
+ * @param efd_counter_size
+ * size of efd counter, used for vdev
+ *
+ * @return
+ * - On success, zero.
+ * - On failure, a negative value.
+ */
+__rte_internal
+int
+rte_intr_efd_counter_size_set(struct rte_intr_handle *intr_handle,
+ uint8_t efd_counter_size);
+
+/**
+ * @internal
+ * Returns the event fd counter size field of the given interrupt handle
+ * instance.
+ *
+ * @param intr_handle
+ * pointer to the interrupt handle.
+ *
+ * @return
+ * - On success, efd_counter_size
+ * - On failure, a negative value.
+ */
+__rte_internal
+int
+rte_intr_efd_counter_size_get(const struct rte_intr_handle *intr_handle);
+
+/**
+ * @internal
+ * This API is used to set the event fd array index with the given fd.
+ *
+ * @param intr_handle
+ * pointer to the interrupt handle.
+ * @param index
+ * efds array index to be set
+ * @param fd
+ * event fd
+ *
+ * @return
+ * - On success, zero.
+ * - On failure, a negative value.
+ */
+__rte_internal
+int
+rte_intr_efds_index_set(struct rte_intr_handle *intr_handle, int index, int fd);
+
+/**
+ * @internal
+ * Returns the fd value of event fds array at a given index.
+ *
+ * @param intr_handle
+ * pointer to the interrupt handle.
+ * @param index
+ * efds array index to be returned
+ *
+ * @return
+ * - On success, fd
+ * - On failure, a negative value.
+ */
+__rte_internal
+int
+rte_intr_efds_index_get(const struct rte_intr_handle *intr_handle, int index);
+
+/**
+ * @internal
+ * This API is used to set the event list array index with the given elist
+ * instance.
+ *
+ * @param intr_handle
+ * pointer to the interrupt handle.
+ * @param index
+ * elist array index to be set
+ * @param elist
+ * event list instance of struct rte_epoll_event
+ *
+ * @return
+ * - On success, zero.
+ * - On failure, a negative value.
+ */
+__rte_internal
+int
+rte_intr_elist_index_set(struct rte_intr_handle *intr_handle, int index,
+ struct rte_epoll_event elist);
+
+/**
+ * @internal
+ * Returns the address of elist instance of event list array at a given index.
+ *
+ * @param intr_handle
+ * pointer to the interrupt handle.
+ * @param index
+ * elist array index to be returned
+ *
+ * @return
+ * - On success, elist
+ * - On failure, a negative value.
+ */
+__rte_internal
+struct rte_epoll_event *
+rte_intr_elist_index_get(struct rte_intr_handle *intr_handle, int index);
+
+/**
+ * @internal
+ * Allocates the memory of interrupt vector list array, with size defining the
+ * no of elements required in the array.
+ *
+ * @param intr_handle
+ * pointer to the interrupt handle.
+ * @param name
+ * Name assigned to the allocation, or NULL.
+ * @param size
+ * No of element required in the array.
+ *
+ * @return
+ * - On success, zero
+ * - On failure, a negative value.
+ */
+__rte_internal
+int
+rte_intr_vec_list_alloc(struct rte_intr_handle *intr_handle, const char *name,
+ int size);
+
+/**
+ * @internal
+ * Sets the vector value at given index of interrupt vector list field of given
+ * interrupt handle.
+ *
+ * @param intr_handle
+ * pointer to the interrupt handle.
+ * @param index
+ * intr_vec array index to be set
+ * @param vec
+ * Interrupt vector value.
+ *
+ * @return
+ * - On success, zero
+ * - On failure, a negative value.
+ */
+__rte_internal
+int
+rte_intr_vec_list_index_set(struct rte_intr_handle *intr_handle, int index,
+ int vec);
+
+/**
+ * @internal
+ * Returns the vector value at the given index of interrupt vector list array.
+ *
+ * @param intr_handle
+ * pointer to the interrupt handle.
+ * @param index
+ * intr_vec array index to be returned
+ *
+ * @return
+ * - On success, interrupt vector
+ * - On failure, a negative value.
+ */
+__rte_internal
+int
+rte_intr_vec_list_index_get(const struct rte_intr_handle *intr_handle,
+ int index);
+
+/**
+ * @internal
+ * Frees the memory allocated for the interrupt vector list array.
+ *
+ * @param intr_handle
+ * pointer to the interrupt handle.
+ */
+__rte_internal
+void
+rte_intr_vec_list_free(struct rte_intr_handle *intr_handle);
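An illustrative (non-normative) lifecycle of the vector list in a PMD enabling
Rx interrupts; nb_rx_queues is assumed to come from the device configuration:

    if (rte_intr_vec_list_alloc(intr_handle, "intr_vec", nb_rx_queues))
            return -ENOMEM;
    for (i = 0; i < nb_rx_queues; i++) {
            if (rte_intr_vec_list_index_set(intr_handle, i,
                            RTE_INTR_VEC_RXTX_OFFSET + i))
                    return -rte_errno;
    }
    /* ... on device close/cleanup ... */
    rte_intr_vec_list_free(intr_handle);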
+
+/**
+ * @internal
+ * Reallocates the efds and elist arrays based on the size provided by the
+ * user. By default the efds and elist arrays are allocated with
+ * RTE_MAX_RXTX_INTR_VEC_ID elements when the interrupt handle is created.
+ * Later, at device probe time, the device may turn out to support more
+ * interrupts than RTE_MAX_RXTX_INTR_VEC_ID, so PMDs can use this API to
+ * reallocate the arrays to match the device's maximum interrupt capability.
+ *
+ * @param intr_handle
+ * pointer to the interrupt handle.
+ * @param size
+ * efds and elist array size.
+ *
+ * @return
+ * - On success, zero
+ * - On failure, a negative value.
+ */
+__rte_internal
+int
+rte_intr_event_list_update(struct rte_intr_handle *intr_handle, int size);
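A hedged sketch only: a PCI PMD that discovers a larger MSI-X table at probe
time could grow the arrays as below (msix_size is a hypothetical value read
from the device):

    if (msix_size > RTE_MAX_RXTX_INTR_VEC_ID &&
        rte_intr_event_list_update(intr_handle, msix_size))
            return -rte_errno;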
+
+/**
+ * @internal
+ * This API returns the source from which memory was allocated for the
+ * interrupt instance.
+ *
+ * @param intr_handle
+ * pointer to the interrupt handle.
+ *
+ * @return
+ * - On success, 1 if the memory was allocated via DPDK allocator APIs.
+ * - On success, 0 if the memory was allocated from the traditional heap.
+ * - On failure, negative value.
+ */
+__rte_internal
+int
+rte_intr_instance_mem_allocator_get(const struct rte_intr_handle *intr_handle);
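For illustration, this mirrors how the EAL hunks later in this patch duplicate
a handle while preserving its allocator; the local variables are assumptions:

    copy = NULL;
    mem_allocator = rte_intr_instance_mem_allocator_get(intr_handle);
    if (mem_allocator == 1)
            copy = rte_intr_instance_alloc(RTE_INTR_ALLOC_DPDK_ALLOCATOR);
    else if (mem_allocator == 0)
            copy = rte_intr_instance_alloc(RTE_INTR_ALLOC_TRAD_HEAP);
    if (copy == NULL)
            return -ENOMEM;
    rte_intr_instance_copy(copy, intr_handle);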
+
#ifdef __cplusplus
}
#endif
diff --git a/lib/eal/version.map b/lib/eal/version.map
index 38f7de83e1..4c11202faf 100644
--- a/lib/eal/version.map
+++ b/lib/eal/version.map
@@ -109,18 +109,10 @@ DPDK_22 {
rte_hexdump;
rte_hypervisor_get;
rte_hypervisor_get_name; # WINDOWS_NO_EXPORT
- rte_intr_allow_others;
rte_intr_callback_register;
rte_intr_callback_unregister;
- rte_intr_cap_multiple;
- rte_intr_disable;
- rte_intr_dp_is_en;
- rte_intr_efd_disable;
- rte_intr_efd_enable;
rte_intr_enable;
- rte_intr_free_epoll_fd;
- rte_intr_rx_ctl;
- rte_intr_tls_epfd;
+ rte_intr_disable;
rte_keepalive_create; # WINDOWS_NO_EXPORT
rte_keepalive_dispatch_pings; # WINDOWS_NO_EXPORT
rte_keepalive_mark_alive; # WINDOWS_NO_EXPORT
@@ -420,6 +412,14 @@ EXPERIMENTAL {
# added in 21.08
rte_power_monitor_multi; # WINDOWS_NO_EXPORT
+
+ # added in 21.11
+ rte_intr_fd_set;
+ rte_intr_fd_get;
+ rte_intr_type_set;
+ rte_intr_type_get;
+ rte_intr_instance_alloc;
+ rte_intr_instance_free;
};
INTERNAL {
@@ -430,4 +430,32 @@ INTERNAL {
rte_mem_map;
rte_mem_page_size;
rte_mem_unmap;
+ rte_intr_cap_multiple;
+ rte_intr_dp_is_en;
+ rte_intr_efd_disable;
+ rte_intr_efd_enable;
+ rte_intr_free_epoll_fd;
+ rte_intr_rx_ctl;
+ rte_intr_allow_others;
+ rte_intr_tls_epfd;
+ rte_intr_dev_fd_set;
+ rte_intr_dev_fd_get;
+ rte_intr_instance_copy;
+ rte_intr_event_list_update;
+ rte_intr_max_intr_set;
+ rte_intr_max_intr_get;
+ rte_intr_nb_efd_set;
+ rte_intr_nb_efd_get;
+ rte_intr_nb_intr_get;
+ rte_intr_efds_index_set;
+ rte_intr_efds_index_get;
+ rte_intr_elist_index_set;
+ rte_intr_elist_index_get;
+ rte_intr_efd_counter_size_set;
+ rte_intr_efd_counter_size_get;
+ rte_intr_vec_list_alloc;
+ rte_intr_vec_list_index_set;
+ rte_intr_vec_list_index_get;
+ rte_intr_vec_list_free;
+ rte_intr_instance_mem_allocator_get;
};
--
2.18.0
* [dpdk-dev] [PATCH v2 2/6] eal/interrupts: avoid direct access to interrupt handle
2021-10-05 12:14 ` [dpdk-dev] [PATCH v2 0/6] " Harman Kalra
2021-10-05 12:14 ` [dpdk-dev] [PATCH v2 1/6] eal/interrupts: implement get set APIs Harman Kalra
@ 2021-10-05 12:14 ` Harman Kalra
2021-10-14 0:59 ` Dmitry Kozlyuk
2021-10-05 12:14 ` [dpdk-dev] [PATCH v2 3/6] test/interrupt: apply get set interrupt handle APIs Harman Kalra
` (3 subsequent siblings)
5 siblings, 1 reply; 152+ messages in thread
From: Harman Kalra @ 2021-10-05 12:14 UTC (permalink / raw)
To: dev, Harman Kalra, Bruce Richardson; +Cc: david.marchand, dmitry.kozliuk, mdr
Making changes to the interrupt framework to use the interrupt handle
APIs to get/set any field. Direct access to any of the fields
should be avoided to prevent ABI breakage in the future.
Signed-off-by: Harman Kalra <hkalra@marvell.com>
---
lib/eal/freebsd/eal_interrupts.c | 111 ++++++++----
lib/eal/include/rte_interrupts.h | 2 +
lib/eal/linux/eal_interrupts.c | 302 +++++++++++++++++++------------
3 files changed, 268 insertions(+), 147 deletions(-)
diff --git a/lib/eal/freebsd/eal_interrupts.c b/lib/eal/freebsd/eal_interrupts.c
index 86810845fe..cf6216601b 100644
--- a/lib/eal/freebsd/eal_interrupts.c
+++ b/lib/eal/freebsd/eal_interrupts.c
@@ -40,7 +40,7 @@ struct rte_intr_callback {
struct rte_intr_source {
TAILQ_ENTRY(rte_intr_source) next;
- struct rte_intr_handle intr_handle; /**< interrupt handle */
+ struct rte_intr_handle *intr_handle; /**< interrupt handle */
struct rte_intr_cb_list callbacks; /**< user callbacks */
uint32_t active;
};
@@ -60,7 +60,7 @@ static int
intr_source_to_kevent(const struct rte_intr_handle *ih, struct kevent *ke)
{
/* alarm callbacks are special case */
- if (ih->type == RTE_INTR_HANDLE_ALARM) {
+ if (rte_intr_type_get(ih) == RTE_INTR_HANDLE_ALARM) {
uint64_t timeout_ns;
/* get soonest alarm timeout */
@@ -75,7 +75,7 @@ intr_source_to_kevent(const struct rte_intr_handle *ih, struct kevent *ke)
} else {
ke->filter = EVFILT_READ;
}
- ke->ident = ih->fd;
+ ke->ident = rte_intr_fd_get(ih);
return 0;
}
@@ -86,10 +86,11 @@ rte_intr_callback_register(const struct rte_intr_handle *intr_handle,
{
struct rte_intr_callback *callback;
struct rte_intr_source *src;
- int ret = 0, add_event = 0;
+ int ret = 0, add_event = 0, mem_allocator;
/* first do parameter checking */
- if (intr_handle == NULL || intr_handle->fd < 0 || cb == NULL) {
+ if (intr_handle == NULL || rte_intr_fd_get(intr_handle) < 0 ||
+ cb == NULL) {
RTE_LOG(ERR, EAL,
"Registering with invalid input parameter\n");
return -EINVAL;
@@ -103,7 +104,8 @@ rte_intr_callback_register(const struct rte_intr_handle *intr_handle,
/* find the source for this intr_handle */
TAILQ_FOREACH(src, &intr_sources, next) {
- if (src->intr_handle.fd == intr_handle->fd)
+ if (rte_intr_fd_get(src->intr_handle) ==
+ rte_intr_fd_get(intr_handle))
break;
}
@@ -112,8 +114,8 @@ rte_intr_callback_register(const struct rte_intr_handle *intr_handle,
* thing on the list should be eal_alarm_callback() and we may
* be called just to reset the timer.
*/
- if (src != NULL && src->intr_handle.type == RTE_INTR_HANDLE_ALARM &&
- !TAILQ_EMPTY(&src->callbacks)) {
+ if (src != NULL && rte_intr_type_get(src->intr_handle) ==
+ RTE_INTR_HANDLE_ALARM && !TAILQ_EMPTY(&src->callbacks)) {
callback = NULL;
} else {
/* allocate a new interrupt callback entity */
@@ -135,9 +137,35 @@ rte_intr_callback_register(const struct rte_intr_handle *intr_handle,
ret = -ENOMEM;
goto fail;
} else {
- src->intr_handle = *intr_handle;
- TAILQ_INIT(&src->callbacks);
- TAILQ_INSERT_TAIL(&intr_sources, src, next);
+ /* src->interrupt instance memory allocated
+ * depends on from where intr_handle memory
+ * is allocated.
+ */
+ mem_allocator =
+ rte_intr_instance_mem_allocator_get(
+ intr_handle);
+ if (mem_allocator == 0)
+ src->intr_handle =
+ rte_intr_instance_alloc(
+ RTE_INTR_ALLOC_TRAD_HEAP);
+ else if (mem_allocator == 1)
+ src->intr_handle =
+ rte_intr_instance_alloc(
+ RTE_INTR_ALLOC_DPDK_ALLOCATOR);
+ else
+ RTE_LOG(ERR, EAL, "Failed to get mem allocator\n");
+
+ if (src->intr_handle == NULL) {
+ RTE_LOG(ERR, EAL, "Can not create intr instance\n");
+ free(callback);
+ ret = -ENOMEM;
+ } else {
+ rte_intr_instance_copy(src->intr_handle,
+ intr_handle);
+ TAILQ_INIT(&src->callbacks);
+ TAILQ_INSERT_TAIL(&intr_sources, src,
+ next);
+ }
}
}
@@ -151,7 +179,8 @@ rte_intr_callback_register(const struct rte_intr_handle *intr_handle,
/* add events to the queue. timer events are special as we need to
* re-set the timer.
*/
- if (add_event || src->intr_handle.type == RTE_INTR_HANDLE_ALARM) {
+ if (add_event || rte_intr_type_get(src->intr_handle) ==
+ RTE_INTR_HANDLE_ALARM) {
struct kevent ke;
memset(&ke, 0, sizeof(ke));
@@ -173,12 +202,13 @@ rte_intr_callback_register(const struct rte_intr_handle *intr_handle,
*/
if (errno == ENODEV)
RTE_LOG(DEBUG, EAL, "Interrupt handle %d not supported\n",
- src->intr_handle.fd);
+ rte_intr_fd_get(src->intr_handle));
else
RTE_LOG(ERR, EAL, "Error adding fd %d "
- "kevent, %s\n",
- src->intr_handle.fd,
- strerror(errno));
+ "kevent, %s\n",
+ rte_intr_fd_get(
+ src->intr_handle),
+ strerror(errno));
ret = -errno;
goto fail;
}
@@ -213,7 +243,7 @@ rte_intr_callback_unregister_pending(const struct rte_intr_handle *intr_handle,
struct rte_intr_callback *cb, *next;
/* do parameter checking first */
- if (intr_handle == NULL || intr_handle->fd < 0) {
+ if (intr_handle == NULL || rte_intr_fd_get(intr_handle) < 0) {
RTE_LOG(ERR, EAL,
"Unregistering with invalid input parameter\n");
return -EINVAL;
@@ -228,7 +258,8 @@ rte_intr_callback_unregister_pending(const struct rte_intr_handle *intr_handle,
/* check if the insterrupt source for the fd is existent */
TAILQ_FOREACH(src, &intr_sources, next)
- if (src->intr_handle.fd == intr_handle->fd)
+ if (rte_intr_fd_get(src->intr_handle) ==
+ rte_intr_fd_get(intr_handle))
break;
/* No interrupt source registered for the fd */
@@ -268,7 +299,7 @@ rte_intr_callback_unregister(const struct rte_intr_handle *intr_handle,
struct rte_intr_callback *cb, *next;
/* do parameter checking first */
- if (intr_handle == NULL || intr_handle->fd < 0) {
+ if (intr_handle == NULL || rte_intr_fd_get(intr_handle) < 0) {
RTE_LOG(ERR, EAL,
"Unregistering with invalid input parameter\n");
return -EINVAL;
@@ -282,7 +313,8 @@ rte_intr_callback_unregister(const struct rte_intr_handle *intr_handle,
/* check if the insterrupt source for the fd is existent */
TAILQ_FOREACH(src, &intr_sources, next)
- if (src->intr_handle.fd == intr_handle->fd)
+ if (rte_intr_fd_get(src->intr_handle) ==
+ rte_intr_fd_get(intr_handle))
break;
/* No interrupt source registered for the fd */
@@ -314,7 +346,8 @@ rte_intr_callback_unregister(const struct rte_intr_handle *intr_handle,
*/
if (kevent(kq, &ke, 1, NULL, 0, NULL) < 0) {
RTE_LOG(ERR, EAL, "Error removing fd %d kevent, %s\n",
- src->intr_handle.fd, strerror(errno));
+ rte_intr_fd_get(src->intr_handle),
+ strerror(errno));
/* removing non-existent even is an expected condition
* in some circumstances (e.g. oneshot events).
*/
@@ -365,17 +398,18 @@ rte_intr_enable(const struct rte_intr_handle *intr_handle)
if (intr_handle == NULL)
return -1;
- if (intr_handle->type == RTE_INTR_HANDLE_VDEV) {
+ if (rte_intr_type_get(intr_handle) == RTE_INTR_HANDLE_VDEV) {
rc = 0;
goto out;
}
- if (intr_handle->fd < 0 || intr_handle->uio_cfg_fd < 0) {
+ if (rte_intr_fd_get(intr_handle) < 0 ||
+ rte_intr_dev_fd_get(intr_handle) < 0) {
rc = -1;
goto out;
}
- switch (intr_handle->type) {
+ switch (rte_intr_type_get(intr_handle)) {
/* not used at this moment */
case RTE_INTR_HANDLE_ALARM:
rc = -1;
@@ -388,7 +422,7 @@ rte_intr_enable(const struct rte_intr_handle *intr_handle)
default:
RTE_LOG(ERR, EAL,
"Unknown handle type of fd %d\n",
- intr_handle->fd);
+ rte_intr_fd_get(intr_handle));
rc = -1;
break;
}
@@ -406,17 +440,18 @@ rte_intr_disable(const struct rte_intr_handle *intr_handle)
if (intr_handle == NULL)
return -1;
- if (intr_handle->type == RTE_INTR_HANDLE_VDEV) {
+ if (rte_intr_type_get(intr_handle) == RTE_INTR_HANDLE_VDEV) {
rc = 0;
goto out;
}
- if (intr_handle->fd < 0 || intr_handle->uio_cfg_fd < 0) {
+ if (rte_intr_fd_get(intr_handle) < 0 ||
+ rte_intr_dev_fd_get(intr_handle) < 0) {
rc = -1;
goto out;
}
- switch (intr_handle->type) {
+ switch (rte_intr_type_get(intr_handle)) {
/* not used at this moment */
case RTE_INTR_HANDLE_ALARM:
rc = -1;
@@ -429,7 +464,7 @@ rte_intr_disable(const struct rte_intr_handle *intr_handle)
default:
RTE_LOG(ERR, EAL,
"Unknown handle type of fd %d\n",
- intr_handle->fd);
+ rte_intr_fd_get(intr_handle));
rc = -1;
break;
}
@@ -441,7 +476,8 @@ rte_intr_disable(const struct rte_intr_handle *intr_handle)
int
rte_intr_ack(const struct rte_intr_handle *intr_handle)
{
- if (intr_handle && intr_handle->type == RTE_INTR_HANDLE_VDEV)
+ if (intr_handle &&
+ rte_intr_type_get(intr_handle) == RTE_INTR_HANDLE_VDEV)
return 0;
return -1;
@@ -463,7 +499,8 @@ eal_intr_process_interrupts(struct kevent *events, int nfds)
rte_spinlock_lock(&intr_lock);
TAILQ_FOREACH(src, &intr_sources, next)
- if (src->intr_handle.fd == event_fd)
+ if (rte_intr_fd_get(src->intr_handle) ==
+ event_fd)
break;
if (src == NULL) {
rte_spinlock_unlock(&intr_lock);
@@ -475,7 +512,7 @@ eal_intr_process_interrupts(struct kevent *events, int nfds)
rte_spinlock_unlock(&intr_lock);
/* set the length to be read dor different handle type */
- switch (src->intr_handle.type) {
+ switch (rte_intr_type_get(src->intr_handle)) {
case RTE_INTR_HANDLE_ALARM:
bytes_read = 0;
call = true;
@@ -546,7 +583,8 @@ eal_intr_process_interrupts(struct kevent *events, int nfds)
/* mark for deletion from the queue */
ke.flags = EV_DELETE;
- if (intr_source_to_kevent(&src->intr_handle, &ke) < 0) {
+ if (intr_source_to_kevent(src->intr_handle,
+ &ke) < 0) {
RTE_LOG(ERR, EAL, "Cannot convert to kevent\n");
rte_spinlock_unlock(&intr_lock);
return;
@@ -557,7 +595,9 @@ eal_intr_process_interrupts(struct kevent *events, int nfds)
*/
if (kevent(kq, &ke, 1, NULL, 0, NULL) < 0) {
RTE_LOG(ERR, EAL, "Error removing fd %d kevent, "
- "%s\n", src->intr_handle.fd,
+ "%s\n",
+ rte_intr_fd_get(
+ src->intr_handle),
strerror(errno));
/* removing non-existent even is an expected
* condition in some circumstances
@@ -567,7 +607,8 @@ eal_intr_process_interrupts(struct kevent *events, int nfds)
TAILQ_REMOVE(&src->callbacks, cb, next);
if (cb->ucb_fn)
- cb->ucb_fn(&src->intr_handle, cb->cb_arg);
+ cb->ucb_fn(src->intr_handle,
+ cb->cb_arg);
free(cb);
}
}
diff --git a/lib/eal/include/rte_interrupts.h b/lib/eal/include/rte_interrupts.h
index db830907fb..442b02de8f 100644
--- a/lib/eal/include/rte_interrupts.h
+++ b/lib/eal/include/rte_interrupts.h
@@ -28,6 +28,8 @@ struct rte_intr_handle;
/** Interrupt instance allocation flags
* @see rte_intr_instance_alloc
*/
+/** Allocate interrupt instance from traditional heap */
+#define RTE_INTR_ALLOC_TRAD_HEAP 0x00000000
/** Allocate interrupt instance using DPDK memory management APIs */
#define RTE_INTR_ALLOC_DPDK_ALLOCATOR 0x00000001
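A minimal sketch of the intended pairing of these flags with the alloc/free
APIs; whether a given caller picks the DPDK allocator or the traditional heap
is an assumption here, not something this hunk prescribes:

    struct rte_intr_handle *h;

    h = rte_intr_instance_alloc(RTE_INTR_ALLOC_DPDK_ALLOCATOR);
    if (h == NULL)
            return -ENOMEM;
    /* ... manipulate h only through the get/set APIs ... */
    rte_intr_instance_free(h);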
diff --git a/lib/eal/linux/eal_interrupts.c b/lib/eal/linux/eal_interrupts.c
index 22b3b7bcd9..a9d6833b79 100644
--- a/lib/eal/linux/eal_interrupts.c
+++ b/lib/eal/linux/eal_interrupts.c
@@ -20,6 +20,7 @@
#include <stdbool.h>
#include <rte_common.h>
+#include <rte_epoll.h>
#include <rte_interrupts.h>
#include <rte_memory.h>
#include <rte_launch.h>
@@ -82,7 +83,7 @@ struct rte_intr_callback {
struct rte_intr_source {
TAILQ_ENTRY(rte_intr_source) next;
- struct rte_intr_handle intr_handle; /**< interrupt handle */
+ struct rte_intr_handle *intr_handle; /**< interrupt handle */
struct rte_intr_cb_list callbacks; /**< user callbacks */
uint32_t active;
};
@@ -112,7 +113,7 @@ static int
vfio_enable_intx(const struct rte_intr_handle *intr_handle) {
struct vfio_irq_set *irq_set;
char irq_set_buf[IRQ_SET_BUF_LEN];
- int len, ret;
+ int len, ret, vfio_dev_fd;
int *fd_ptr;
len = sizeof(irq_set_buf);
@@ -125,13 +126,14 @@ vfio_enable_intx(const struct rte_intr_handle *intr_handle) {
irq_set->index = VFIO_PCI_INTX_IRQ_INDEX;
irq_set->start = 0;
fd_ptr = (int *) &irq_set->data;
- *fd_ptr = intr_handle->fd;
+ *fd_ptr = rte_intr_fd_get(intr_handle);
- ret = ioctl(intr_handle->vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
+ vfio_dev_fd = rte_intr_dev_fd_get(intr_handle);
+ ret = ioctl(vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
if (ret) {
RTE_LOG(ERR, EAL, "Error enabling INTx interrupts for fd %d\n",
- intr_handle->fd);
+ rte_intr_fd_get(intr_handle));
return -1;
}
@@ -144,11 +146,11 @@ vfio_enable_intx(const struct rte_intr_handle *intr_handle) {
irq_set->index = VFIO_PCI_INTX_IRQ_INDEX;
irq_set->start = 0;
- ret = ioctl(intr_handle->vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
+ ret = ioctl(vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
if (ret) {
RTE_LOG(ERR, EAL, "Error unmasking INTx interrupts for fd %d\n",
- intr_handle->fd);
+ rte_intr_fd_get(intr_handle));
return -1;
}
return 0;
@@ -159,7 +161,7 @@ static int
vfio_disable_intx(const struct rte_intr_handle *intr_handle) {
struct vfio_irq_set *irq_set;
char irq_set_buf[IRQ_SET_BUF_LEN];
- int len, ret;
+ int len, ret, vfio_dev_fd;
len = sizeof(struct vfio_irq_set);
@@ -171,11 +173,12 @@ vfio_disable_intx(const struct rte_intr_handle *intr_handle) {
irq_set->index = VFIO_PCI_INTX_IRQ_INDEX;
irq_set->start = 0;
- ret = ioctl(intr_handle->vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
+ vfio_dev_fd = rte_intr_dev_fd_get(intr_handle);
+ ret = ioctl(vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
if (ret) {
RTE_LOG(ERR, EAL, "Error masking INTx interrupts for fd %d\n",
- intr_handle->fd);
+ rte_intr_fd_get(intr_handle));
return -1;
}
@@ -187,11 +190,12 @@ vfio_disable_intx(const struct rte_intr_handle *intr_handle) {
irq_set->index = VFIO_PCI_INTX_IRQ_INDEX;
irq_set->start = 0;
- ret = ioctl(intr_handle->vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
+ ret = ioctl(vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
if (ret) {
RTE_LOG(ERR, EAL,
- "Error disabling INTx interrupts for fd %d\n", intr_handle->fd);
+ "Error disabling INTx interrupts for fd %d\n",
+ rte_intr_fd_get(intr_handle));
return -1;
}
return 0;
@@ -202,6 +206,7 @@ static int
vfio_ack_intx(const struct rte_intr_handle *intr_handle)
{
struct vfio_irq_set irq_set;
+ int vfio_dev_fd;
/* unmask INTx */
memset(&irq_set, 0, sizeof(irq_set));
@@ -211,9 +216,10 @@ vfio_ack_intx(const struct rte_intr_handle *intr_handle)
irq_set.index = VFIO_PCI_INTX_IRQ_INDEX;
irq_set.start = 0;
- if (ioctl(intr_handle->vfio_dev_fd, VFIO_DEVICE_SET_IRQS, &irq_set)) {
+ vfio_dev_fd = rte_intr_dev_fd_get(intr_handle);
+ if (ioctl(vfio_dev_fd, VFIO_DEVICE_SET_IRQS, &irq_set)) {
RTE_LOG(ERR, EAL, "Error unmasking INTx interrupts for fd %d\n",
- intr_handle->fd);
+ rte_intr_fd_get(intr_handle));
return -1;
}
return 0;
@@ -225,7 +231,7 @@ vfio_enable_msi(const struct rte_intr_handle *intr_handle) {
int len, ret;
char irq_set_buf[IRQ_SET_BUF_LEN];
struct vfio_irq_set *irq_set;
- int *fd_ptr;
+ int *fd_ptr, vfio_dev_fd;
len = sizeof(irq_set_buf);
@@ -236,13 +242,14 @@ vfio_enable_msi(const struct rte_intr_handle *intr_handle) {
irq_set->index = VFIO_PCI_MSI_IRQ_INDEX;
irq_set->start = 0;
fd_ptr = (int *) &irq_set->data;
- *fd_ptr = intr_handle->fd;
+ *fd_ptr = rte_intr_fd_get(intr_handle);
- ret = ioctl(intr_handle->vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
+ vfio_dev_fd = rte_intr_dev_fd_get(intr_handle);
+ ret = ioctl(vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
if (ret) {
RTE_LOG(ERR, EAL, "Error enabling MSI interrupts for fd %d\n",
- intr_handle->fd);
+ rte_intr_fd_get(intr_handle));
return -1;
}
return 0;
@@ -253,7 +260,7 @@ static int
vfio_disable_msi(const struct rte_intr_handle *intr_handle) {
struct vfio_irq_set *irq_set;
char irq_set_buf[IRQ_SET_BUF_LEN];
- int len, ret;
+ int len, ret, vfio_dev_fd;
len = sizeof(struct vfio_irq_set);
@@ -264,11 +271,13 @@ vfio_disable_msi(const struct rte_intr_handle *intr_handle) {
irq_set->index = VFIO_PCI_MSI_IRQ_INDEX;
irq_set->start = 0;
- ret = ioctl(intr_handle->vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
+ vfio_dev_fd = rte_intr_dev_fd_get(intr_handle);
+ ret = ioctl(vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
if (ret)
RTE_LOG(ERR, EAL,
- "Error disabling MSI interrupts for fd %d\n", intr_handle->fd);
+ "Error disabling MSI interrupts for fd %d\n",
+ rte_intr_fd_get(intr_handle));
return ret;
}
@@ -279,30 +288,34 @@ vfio_enable_msix(const struct rte_intr_handle *intr_handle) {
int len, ret;
char irq_set_buf[MSIX_IRQ_SET_BUF_LEN];
struct vfio_irq_set *irq_set;
- int *fd_ptr;
+ int *fd_ptr, vfio_dev_fd, i;
len = sizeof(irq_set_buf);
irq_set = (struct vfio_irq_set *) irq_set_buf;
irq_set->argsz = len;
/* 0 < irq_set->count < RTE_MAX_RXTX_INTR_VEC_ID + 1 */
- irq_set->count = intr_handle->max_intr ?
- (intr_handle->max_intr > RTE_MAX_RXTX_INTR_VEC_ID + 1 ?
- RTE_MAX_RXTX_INTR_VEC_ID + 1 : intr_handle->max_intr) : 1;
+ irq_set->count = rte_intr_max_intr_get(intr_handle) ?
+ (rte_intr_max_intr_get(intr_handle) >
+ RTE_MAX_RXTX_INTR_VEC_ID + 1 ? RTE_MAX_RXTX_INTR_VEC_ID + 1 :
+ rte_intr_max_intr_get(intr_handle)) : 1;
+
irq_set->flags = VFIO_IRQ_SET_DATA_EVENTFD | VFIO_IRQ_SET_ACTION_TRIGGER;
irq_set->index = VFIO_PCI_MSIX_IRQ_INDEX;
irq_set->start = 0;
fd_ptr = (int *) &irq_set->data;
/* INTR vector offset 0 reserve for non-efds mapping */
- fd_ptr[RTE_INTR_VEC_ZERO_OFFSET] = intr_handle->fd;
- memcpy(&fd_ptr[RTE_INTR_VEC_RXTX_OFFSET], intr_handle->efds,
- sizeof(*intr_handle->efds) * intr_handle->nb_efd);
+ fd_ptr[RTE_INTR_VEC_ZERO_OFFSET] = rte_intr_fd_get(intr_handle);
+ for (i = 0; i < rte_intr_nb_efd_get(intr_handle); i++)
+ fd_ptr[RTE_INTR_VEC_RXTX_OFFSET + i] =
+ rte_intr_efds_index_get(intr_handle, i);
- ret = ioctl(intr_handle->vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
+ vfio_dev_fd = rte_intr_dev_fd_get(intr_handle);
+ ret = ioctl(vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
if (ret) {
RTE_LOG(ERR, EAL, "Error enabling MSI-X interrupts for fd %d\n",
- intr_handle->fd);
+ rte_intr_fd_get(intr_handle));
return -1;
}
@@ -314,7 +327,7 @@ static int
vfio_disable_msix(const struct rte_intr_handle *intr_handle) {
struct vfio_irq_set *irq_set;
char irq_set_buf[MSIX_IRQ_SET_BUF_LEN];
- int len, ret;
+ int len, ret, vfio_dev_fd;
len = sizeof(struct vfio_irq_set);
@@ -325,11 +338,13 @@ vfio_disable_msix(const struct rte_intr_handle *intr_handle) {
irq_set->index = VFIO_PCI_MSIX_IRQ_INDEX;
irq_set->start = 0;
- ret = ioctl(intr_handle->vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
+ vfio_dev_fd = rte_intr_dev_fd_get(intr_handle);
+ ret = ioctl(vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
if (ret)
RTE_LOG(ERR, EAL,
- "Error disabling MSI-X interrupts for fd %d\n", intr_handle->fd);
+ "Error disabling MSI-X interrupts for fd %d\n",
+ rte_intr_fd_get(intr_handle));
return ret;
}
@@ -342,7 +357,7 @@ vfio_enable_req(const struct rte_intr_handle *intr_handle)
int len, ret;
char irq_set_buf[IRQ_SET_BUF_LEN];
struct vfio_irq_set *irq_set;
- int *fd_ptr;
+ int *fd_ptr, vfio_dev_fd;
len = sizeof(irq_set_buf);
@@ -354,13 +369,14 @@ vfio_enable_req(const struct rte_intr_handle *intr_handle)
irq_set->index = VFIO_PCI_REQ_IRQ_INDEX;
irq_set->start = 0;
fd_ptr = (int *) &irq_set->data;
- *fd_ptr = intr_handle->fd;
+ *fd_ptr = rte_intr_fd_get(intr_handle);
- ret = ioctl(intr_handle->vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
+ vfio_dev_fd = rte_intr_dev_fd_get(intr_handle);
+ ret = ioctl(vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
if (ret) {
RTE_LOG(ERR, EAL, "Error enabling req interrupts for fd %d\n",
- intr_handle->fd);
+ rte_intr_fd_get(intr_handle));
return -1;
}
@@ -373,7 +389,7 @@ vfio_disable_req(const struct rte_intr_handle *intr_handle)
{
struct vfio_irq_set *irq_set;
char irq_set_buf[IRQ_SET_BUF_LEN];
- int len, ret;
+ int len, ret, vfio_dev_fd;
len = sizeof(struct vfio_irq_set);
@@ -384,11 +400,12 @@ vfio_disable_req(const struct rte_intr_handle *intr_handle)
irq_set->index = VFIO_PCI_REQ_IRQ_INDEX;
irq_set->start = 0;
- ret = ioctl(intr_handle->vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
+ vfio_dev_fd = rte_intr_dev_fd_get(intr_handle);
+ ret = ioctl(vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
if (ret)
RTE_LOG(ERR, EAL, "Error disabling req interrupts for fd %d\n",
- intr_handle->fd);
+ rte_intr_fd_get(intr_handle));
return ret;
}
@@ -399,20 +416,22 @@ static int
uio_intx_intr_disable(const struct rte_intr_handle *intr_handle)
{
unsigned char command_high;
+ int uio_cfg_fd;
/* use UIO config file descriptor for uio_pci_generic */
- if (pread(intr_handle->uio_cfg_fd, &command_high, 1, 5) != 1) {
+ uio_cfg_fd = rte_intr_dev_fd_get(intr_handle);
+ if (pread(uio_cfg_fd, &command_high, 1, 5) != 1) {
RTE_LOG(ERR, EAL,
"Error reading interrupts status for fd %d\n",
- intr_handle->uio_cfg_fd);
+ uio_cfg_fd);
return -1;
}
/* disable interrupts */
command_high |= 0x4;
- if (pwrite(intr_handle->uio_cfg_fd, &command_high, 1, 5) != 1) {
+ if (pwrite(uio_cfg_fd, &command_high, 1, 5) != 1) {
RTE_LOG(ERR, EAL,
"Error disabling interrupts for fd %d\n",
- intr_handle->uio_cfg_fd);
+ uio_cfg_fd);
return -1;
}
@@ -423,20 +442,22 @@ static int
uio_intx_intr_enable(const struct rte_intr_handle *intr_handle)
{
unsigned char command_high;
+ int uio_cfg_fd;
/* use UIO config file descriptor for uio_pci_generic */
- if (pread(intr_handle->uio_cfg_fd, &command_high, 1, 5) != 1) {
+ uio_cfg_fd = rte_intr_dev_fd_get(intr_handle);
+ if (pread(uio_cfg_fd, &command_high, 1, 5) != 1) {
RTE_LOG(ERR, EAL,
"Error reading interrupts status for fd %d\n",
- intr_handle->uio_cfg_fd);
+ uio_cfg_fd);
return -1;
}
/* enable interrupts */
command_high &= ~0x4;
- if (pwrite(intr_handle->uio_cfg_fd, &command_high, 1, 5) != 1) {
+ if (pwrite(uio_cfg_fd, &command_high, 1, 5) != 1) {
RTE_LOG(ERR, EAL,
"Error enabling interrupts for fd %d\n",
- intr_handle->uio_cfg_fd);
+ uio_cfg_fd);
return -1;
}
@@ -448,10 +469,11 @@ uio_intr_disable(const struct rte_intr_handle *intr_handle)
{
const int value = 0;
- if (write(intr_handle->fd, &value, sizeof(value)) < 0) {
+ if (write(rte_intr_fd_get(intr_handle), &value,
+ sizeof(value)) < 0) {
RTE_LOG(ERR, EAL,
"Error disabling interrupts for fd %d (%s)\n",
- intr_handle->fd, strerror(errno));
+ rte_intr_fd_get(intr_handle), strerror(errno));
return -1;
}
return 0;
@@ -462,10 +484,11 @@ uio_intr_enable(const struct rte_intr_handle *intr_handle)
{
const int value = 1;
- if (write(intr_handle->fd, &value, sizeof(value)) < 0) {
+ if (write(rte_intr_fd_get(intr_handle), &value,
+ sizeof(value)) < 0) {
RTE_LOG(ERR, EAL,
"Error enabling interrupts for fd %d (%s)\n",
- intr_handle->fd, strerror(errno));
+ rte_intr_fd_get(intr_handle), strerror(errno));
return -1;
}
return 0;
@@ -475,14 +498,15 @@ int
rte_intr_callback_register(const struct rte_intr_handle *intr_handle,
rte_intr_callback_fn cb, void *cb_arg)
{
- int ret, wake_thread;
+ int ret, wake_thread, mem_allocator;
struct rte_intr_source *src;
struct rte_intr_callback *callback;
wake_thread = 0;
/* first do parameter checking */
- if (intr_handle == NULL || intr_handle->fd < 0 || cb == NULL) {
+ if (intr_handle == NULL || rte_intr_fd_get(intr_handle) < 0 ||
+ cb == NULL) {
RTE_LOG(ERR, EAL,
"Registering with invalid input parameter\n");
return -EINVAL;
@@ -503,7 +527,8 @@ rte_intr_callback_register(const struct rte_intr_handle *intr_handle,
/* check if there is at least one callback registered for the fd */
TAILQ_FOREACH(src, &intr_sources, next) {
- if (src->intr_handle.fd == intr_handle->fd) {
+ if (rte_intr_fd_get(src->intr_handle) ==
+ rte_intr_fd_get(intr_handle)) {
/* we had no interrupts for this */
if (TAILQ_EMPTY(&src->callbacks))
wake_thread = 1;
@@ -522,12 +547,34 @@ rte_intr_callback_register(const struct rte_intr_handle *intr_handle,
free(callback);
ret = -ENOMEM;
} else {
- src->intr_handle = *intr_handle;
- TAILQ_INIT(&src->callbacks);
- TAILQ_INSERT_TAIL(&(src->callbacks), callback, next);
- TAILQ_INSERT_TAIL(&intr_sources, src, next);
- wake_thread = 1;
- ret = 0;
+ /* src->interrupt instance memory allocated depends on
+ * from where intr_handle memory is allocated.
+ */
+ mem_allocator =
+ rte_intr_instance_mem_allocator_get(intr_handle);
+ if (mem_allocator == 0)
+ src->intr_handle = rte_intr_instance_alloc(
+ RTE_INTR_ALLOC_TRAD_HEAP);
+ else if (mem_allocator == 1)
+ src->intr_handle = rte_intr_instance_alloc(
+ RTE_INTR_ALLOC_DPDK_ALLOCATOR);
+ else
+ RTE_LOG(ERR, EAL, "Failed to get mem allocator\n");
+
+ if (src->intr_handle == NULL) {
+ RTE_LOG(ERR, EAL, "Can not create intr instance\n");
+ free(callback);
+ ret = -ENOMEM;
+ } else {
+ rte_intr_instance_copy(src->intr_handle,
+ intr_handle);
+ TAILQ_INIT(&src->callbacks);
+ TAILQ_INSERT_TAIL(&(src->callbacks), callback,
+ next);
+ TAILQ_INSERT_TAIL(&intr_sources, src, next);
+ wake_thread = 1;
+ ret = 0;
+ }
}
}
@@ -555,7 +602,7 @@ rte_intr_callback_unregister_pending(const struct rte_intr_handle *intr_handle,
struct rte_intr_callback *cb, *next;
/* do parameter checking first */
- if (intr_handle == NULL || intr_handle->fd < 0) {
+ if (intr_handle == NULL || rte_intr_fd_get(intr_handle) < 0) {
RTE_LOG(ERR, EAL,
"Unregistering with invalid input parameter\n");
return -EINVAL;
@@ -565,7 +612,8 @@ rte_intr_callback_unregister_pending(const struct rte_intr_handle *intr_handle,
/* check if the insterrupt source for the fd is existent */
TAILQ_FOREACH(src, &intr_sources, next)
- if (src->intr_handle.fd == intr_handle->fd)
+ if (rte_intr_fd_get(src->intr_handle) ==
+ rte_intr_fd_get(intr_handle))
break;
/* No interrupt source registered for the fd */
@@ -605,7 +653,7 @@ rte_intr_callback_unregister(const struct rte_intr_handle *intr_handle,
struct rte_intr_callback *cb, *next;
/* do parameter checking first */
- if (intr_handle == NULL || intr_handle->fd < 0) {
+ if (intr_handle == NULL || rte_intr_fd_get(intr_handle) < 0) {
RTE_LOG(ERR, EAL,
"Unregistering with invalid input parameter\n");
return -EINVAL;
@@ -615,7 +663,8 @@ rte_intr_callback_unregister(const struct rte_intr_handle *intr_handle,
/* check if the insterrupt source for the fd is existent */
TAILQ_FOREACH(src, &intr_sources, next)
- if (src->intr_handle.fd == intr_handle->fd)
+ if (rte_intr_fd_get(src->intr_handle) ==
+ rte_intr_fd_get(intr_handle))
break;
/* No interrupt source registered for the fd */
@@ -646,6 +695,7 @@ rte_intr_callback_unregister(const struct rte_intr_handle *intr_handle,
/* all callbacks for that source are removed. */
if (TAILQ_EMPTY(&src->callbacks)) {
TAILQ_REMOVE(&intr_sources, src, next);
+ rte_intr_instance_free(src->intr_handle);
free(src);
}
}
@@ -677,22 +727,23 @@ rte_intr_callback_unregister_sync(const struct rte_intr_handle *intr_handle,
int
rte_intr_enable(const struct rte_intr_handle *intr_handle)
{
- int rc = 0;
+ int rc = 0, uio_cfg_fd;
if (intr_handle == NULL)
return -1;
- if (intr_handle->type == RTE_INTR_HANDLE_VDEV) {
+ if (rte_intr_type_get(intr_handle) == RTE_INTR_HANDLE_VDEV) {
rc = 0;
goto out;
}
- if (intr_handle->fd < 0 || intr_handle->uio_cfg_fd < 0) {
+ uio_cfg_fd = rte_intr_dev_fd_get(intr_handle);
+ if (rte_intr_fd_get(intr_handle) < 0 || uio_cfg_fd < 0) {
rc = -1;
goto out;
}
- switch (intr_handle->type){
+ switch (rte_intr_type_get(intr_handle)) {
/* write to the uio fd to enable the interrupt */
case RTE_INTR_HANDLE_UIO:
if (uio_intr_enable(intr_handle))
@@ -734,7 +785,7 @@ rte_intr_enable(const struct rte_intr_handle *intr_handle)
default:
RTE_LOG(ERR, EAL,
"Unknown handle type of fd %d\n",
- intr_handle->fd);
+ rte_intr_fd_get(intr_handle));
rc = -1;
break;
}
@@ -757,13 +808,17 @@ rte_intr_enable(const struct rte_intr_handle *intr_handle)
int
rte_intr_ack(const struct rte_intr_handle *intr_handle)
{
- if (intr_handle && intr_handle->type == RTE_INTR_HANDLE_VDEV)
+ int uio_cfg_fd;
+
+ if (intr_handle && rte_intr_type_get(intr_handle) ==
+ RTE_INTR_HANDLE_VDEV)
return 0;
- if (!intr_handle || intr_handle->fd < 0 || intr_handle->uio_cfg_fd < 0)
+ uio_cfg_fd = rte_intr_dev_fd_get(intr_handle);
+ if (!intr_handle || rte_intr_fd_get(intr_handle) < 0 || uio_cfg_fd < 0)
return -1;
- switch (intr_handle->type) {
+ switch (rte_intr_type_get(intr_handle)) {
/* Both acking and enabling are same for UIO */
case RTE_INTR_HANDLE_UIO:
if (uio_intr_enable(intr_handle))
@@ -796,7 +851,7 @@ rte_intr_ack(const struct rte_intr_handle *intr_handle)
/* unknown handle type */
default:
RTE_LOG(ERR, EAL, "Unknown handle type of fd %d\n",
- intr_handle->fd);
+ rte_intr_fd_get(intr_handle));
return -1;
}
@@ -806,22 +861,23 @@ rte_intr_ack(const struct rte_intr_handle *intr_handle)
int
rte_intr_disable(const struct rte_intr_handle *intr_handle)
{
- int rc = 0;
+ int rc = 0, uio_cfg_fd;
if (intr_handle == NULL)
return -1;
- if (intr_handle->type == RTE_INTR_HANDLE_VDEV) {
+ if (rte_intr_type_get(intr_handle) == RTE_INTR_HANDLE_VDEV) {
rc = 0;
goto out;
}
- if (intr_handle->fd < 0 || intr_handle->uio_cfg_fd < 0) {
+ uio_cfg_fd = rte_intr_dev_fd_get(intr_handle);
+ if (rte_intr_fd_get(intr_handle) < 0 || uio_cfg_fd < 0) {
rc = -1;
goto out;
}
- switch (intr_handle->type){
+ switch (rte_intr_type_get(intr_handle)) {
/* write to the uio fd to disable the interrupt */
case RTE_INTR_HANDLE_UIO:
if (uio_intr_disable(intr_handle))
@@ -863,7 +919,7 @@ rte_intr_disable(const struct rte_intr_handle *intr_handle)
default:
RTE_LOG(ERR, EAL,
"Unknown handle type of fd %d\n",
- intr_handle->fd);
+ rte_intr_fd_get(intr_handle));
rc = -1;
break;
}
@@ -896,7 +952,7 @@ eal_intr_process_interrupts(struct epoll_event *events, int nfds)
}
rte_spinlock_lock(&intr_lock);
TAILQ_FOREACH(src, &intr_sources, next)
- if (src->intr_handle.fd ==
+ if (rte_intr_fd_get(src->intr_handle) ==
events[n].data.fd)
break;
if (src == NULL){
@@ -909,7 +965,7 @@ eal_intr_process_interrupts(struct epoll_event *events, int nfds)
rte_spinlock_unlock(&intr_lock);
/* set the length to be read dor different handle type */
- switch (src->intr_handle.type) {
+ switch (rte_intr_type_get(src->intr_handle)) {
case RTE_INTR_HANDLE_UIO:
case RTE_INTR_HANDLE_UIO_INTX:
bytes_read = sizeof(buf.uio_intr_count);
@@ -973,6 +1029,7 @@ eal_intr_process_interrupts(struct epoll_event *events, int nfds)
TAILQ_REMOVE(&src->callbacks, cb, next);
free(cb);
}
+ rte_intr_instance_free(src->intr_handle);
free(src);
return -1;
} else if (bytes_read == 0)
@@ -1012,7 +1069,8 @@ eal_intr_process_interrupts(struct epoll_event *events, int nfds)
if (cb->pending_delete) {
TAILQ_REMOVE(&src->callbacks, cb, next);
if (cb->ucb_fn)
- cb->ucb_fn(&src->intr_handle, cb->cb_arg);
+ cb->ucb_fn(src->intr_handle,
+ cb->cb_arg);
free(cb);
rv++;
}
@@ -1021,6 +1079,7 @@ eal_intr_process_interrupts(struct epoll_event *events, int nfds)
/* all callbacks for that source are removed. */
if (TAILQ_EMPTY(&src->callbacks)) {
TAILQ_REMOVE(&intr_sources, src, next);
+ rte_intr_instance_free(src->intr_handle);
free(src);
}
@@ -1123,16 +1182,18 @@ eal_intr_thread_main(__rte_unused void *arg)
continue; /* skip those with no callbacks */
memset(&ev, 0, sizeof(ev));
ev.events = EPOLLIN | EPOLLPRI | EPOLLRDHUP | EPOLLHUP;
- ev.data.fd = src->intr_handle.fd;
+ ev.data.fd = rte_intr_fd_get(src->intr_handle);
/**
* add all the uio device file descriptor
* into wait list.
*/
if (epoll_ctl(pfd, EPOLL_CTL_ADD,
- src->intr_handle.fd, &ev) < 0){
+ rte_intr_fd_get(src->intr_handle),
+ &ev) < 0) {
rte_panic("Error adding fd %d epoll_ctl, %s\n",
- src->intr_handle.fd, strerror(errno));
+ rte_intr_fd_get(src->intr_handle),
+ strerror(errno));
}
else
numfds++;
@@ -1185,7 +1246,7 @@ eal_intr_proc_rxtx_intr(int fd, const struct rte_intr_handle *intr_handle)
int bytes_read = 0;
int nbytes;
- switch (intr_handle->type) {
+ switch (rte_intr_type_get(intr_handle)) {
case RTE_INTR_HANDLE_UIO:
case RTE_INTR_HANDLE_UIO_INTX:
bytes_read = sizeof(buf.uio_intr_count);
@@ -1198,7 +1259,7 @@ eal_intr_proc_rxtx_intr(int fd, const struct rte_intr_handle *intr_handle)
break;
#endif
case RTE_INTR_HANDLE_VDEV:
- bytes_read = intr_handle->efd_counter_size;
+ bytes_read = rte_intr_efd_counter_size_get(intr_handle);
/* For vdev, number of bytes to read is set by driver */
break;
case RTE_INTR_HANDLE_EXT:
@@ -1419,8 +1480,8 @@ rte_intr_rx_ctl(struct rte_intr_handle *intr_handle, int epfd,
efd_idx = (vec >= RTE_INTR_VEC_RXTX_OFFSET) ?
(vec - RTE_INTR_VEC_RXTX_OFFSET) : vec;
- if (!intr_handle || intr_handle->nb_efd == 0 ||
- efd_idx >= intr_handle->nb_efd) {
+ if (!intr_handle || rte_intr_nb_efd_get(intr_handle) == 0 ||
+ efd_idx >= (unsigned int)rte_intr_nb_efd_get(intr_handle)) {
RTE_LOG(ERR, EAL, "Wrong intr vector number.\n");
return -EPERM;
}
@@ -1428,7 +1489,7 @@ rte_intr_rx_ctl(struct rte_intr_handle *intr_handle, int epfd,
switch (op) {
case RTE_INTR_EVENT_ADD:
epfd_op = EPOLL_CTL_ADD;
- rev = &intr_handle->elist[efd_idx];
+ rev = rte_intr_elist_index_get(intr_handle, efd_idx);
if (__atomic_load_n(&rev->status,
__ATOMIC_RELAXED) != RTE_EPOLL_INVALID) {
RTE_LOG(INFO, EAL, "Event already been added.\n");
@@ -1442,7 +1503,9 @@ rte_intr_rx_ctl(struct rte_intr_handle *intr_handle, int epfd,
epdata->cb_fun = (rte_intr_event_cb_t)eal_intr_proc_rxtx_intr;
epdata->cb_arg = (void *)intr_handle;
rc = rte_epoll_ctl(epfd, epfd_op,
- intr_handle->efds[efd_idx], rev);
+ rte_intr_efds_index_get(intr_handle,
+ efd_idx),
+ rev);
if (!rc)
RTE_LOG(DEBUG, EAL,
"efd %d associated with vec %d added on epfd %d"
@@ -1452,7 +1515,7 @@ rte_intr_rx_ctl(struct rte_intr_handle *intr_handle, int epfd,
break;
case RTE_INTR_EVENT_DEL:
epfd_op = EPOLL_CTL_DEL;
- rev = &intr_handle->elist[efd_idx];
+ rev = rte_intr_elist_index_get(intr_handle, efd_idx);
if (__atomic_load_n(&rev->status,
__ATOMIC_RELAXED) == RTE_EPOLL_INVALID) {
RTE_LOG(INFO, EAL, "Event does not exist.\n");
@@ -1477,8 +1540,9 @@ rte_intr_free_epoll_fd(struct rte_intr_handle *intr_handle)
uint32_t i;
struct rte_epoll_event *rev;
- for (i = 0; i < intr_handle->nb_efd; i++) {
- rev = &intr_handle->elist[i];
+ for (i = 0; i < (uint32_t)rte_intr_nb_efd_get(intr_handle);
+ i++) {
+ rev = rte_intr_elist_index_get(intr_handle, i);
if (__atomic_load_n(&rev->status,
__ATOMIC_RELAXED) == RTE_EPOLL_INVALID)
continue;
@@ -1498,7 +1562,7 @@ rte_intr_efd_enable(struct rte_intr_handle *intr_handle, uint32_t nb_efd)
assert(nb_efd != 0);
- if (intr_handle->type == RTE_INTR_HANDLE_VFIO_MSIX) {
+ if (rte_intr_type_get(intr_handle) == RTE_INTR_HANDLE_VFIO_MSIX) {
for (i = 0; i < n; i++) {
fd = eventfd(0, EFD_NONBLOCK | EFD_CLOEXEC);
if (fd < 0) {
@@ -1507,21 +1571,32 @@ rte_intr_efd_enable(struct rte_intr_handle *intr_handle, uint32_t nb_efd)
errno, strerror(errno));
return -errno;
}
- intr_handle->efds[i] = fd;
+
+ if (rte_intr_efds_index_set(intr_handle, i, fd))
+ return -rte_errno;
}
- intr_handle->nb_efd = n;
- intr_handle->max_intr = NB_OTHER_INTR + n;
- } else if (intr_handle->type == RTE_INTR_HANDLE_VDEV) {
+
+ if (rte_intr_nb_efd_set(intr_handle, n))
+ return -rte_errno;
+
+ if (rte_intr_max_intr_set(intr_handle, NB_OTHER_INTR + n))
+ return -rte_errno;
+ } else if (rte_intr_type_get(intr_handle) == RTE_INTR_HANDLE_VDEV) {
/* only check, initialization would be done in vdev driver.*/
- if (intr_handle->efd_counter_size >
+ if ((uint64_t)rte_intr_efd_counter_size_get(intr_handle) >
sizeof(union rte_intr_read_buffer)) {
RTE_LOG(ERR, EAL, "the efd_counter_size is oversized");
return -EINVAL;
}
} else {
- intr_handle->efds[0] = intr_handle->fd;
- intr_handle->nb_efd = RTE_MIN(nb_efd, 1U);
- intr_handle->max_intr = NB_OTHER_INTR;
+ if (rte_intr_efds_index_set(intr_handle, 0,
+ rte_intr_fd_get(intr_handle)))
+ return -rte_errno;
+ if (rte_intr_nb_efd_set(intr_handle,
+ RTE_MIN(nb_efd, 1U)))
+ return -rte_errno;
+ if (rte_intr_max_intr_set(intr_handle, NB_OTHER_INTR))
+ return -rte_errno;
}
return 0;
@@ -1533,18 +1608,20 @@ rte_intr_efd_disable(struct rte_intr_handle *intr_handle)
uint32_t i;
rte_intr_free_epoll_fd(intr_handle);
- if (intr_handle->max_intr > intr_handle->nb_efd) {
- for (i = 0; i < intr_handle->nb_efd; i++)
- close(intr_handle->efds[i]);
+ if (rte_intr_max_intr_get(intr_handle) >
+ rte_intr_nb_efd_get(intr_handle)) {
+ for (i = 0; i <
+ (uint32_t)rte_intr_nb_efd_get(intr_handle); i++)
+ close(rte_intr_efds_index_get(intr_handle, i));
}
- intr_handle->nb_efd = 0;
- intr_handle->max_intr = 0;
+ rte_intr_nb_efd_set(intr_handle, 0);
+ rte_intr_max_intr_set(intr_handle, 0);
}
int
rte_intr_dp_is_en(struct rte_intr_handle *intr_handle)
{
- return !(!intr_handle->nb_efd);
+ return !(!rte_intr_nb_efd_get(intr_handle));
}
int
@@ -1553,16 +1630,17 @@ rte_intr_allow_others(struct rte_intr_handle *intr_handle)
if (!rte_intr_dp_is_en(intr_handle))
return 1;
else
- return !!(intr_handle->max_intr - intr_handle->nb_efd);
+ return !!(rte_intr_max_intr_get(intr_handle) -
+ rte_intr_nb_efd_get(intr_handle));
}
int
rte_intr_cap_multiple(struct rte_intr_handle *intr_handle)
{
- if (intr_handle->type == RTE_INTR_HANDLE_VFIO_MSIX)
+ if (rte_intr_type_get(intr_handle) == RTE_INTR_HANDLE_VFIO_MSIX)
return 1;
- if (intr_handle->type == RTE_INTR_HANDLE_VDEV)
+ if (rte_intr_type_get(intr_handle) == RTE_INTR_HANDLE_VDEV)
return 1;
return 0;
--
2.18.0
* [dpdk-dev] [PATCH v2 3/6] test/interrupt: apply get set interrupt handle APIs
2021-10-05 12:14 ` [dpdk-dev] [PATCH v2 0/6] " Harman Kalra
2021-10-05 12:14 ` [dpdk-dev] [PATCH v2 1/6] eal/interrupts: implement get set APIs Harman Kalra
2021-10-05 12:14 ` [dpdk-dev] [PATCH v2 2/6] eal/interrupts: avoid direct access to interrupt handle Harman Kalra
@ 2021-10-05 12:14 ` Harman Kalra
2021-10-05 12:15 ` [dpdk-dev] [PATCH v2 4/6] drivers: remove direct access to interrupt handle Harman Kalra
` (2 subsequent siblings)
5 siblings, 0 replies; 152+ messages in thread
From: Harman Kalra @ 2021-10-05 12:14 UTC (permalink / raw)
To: dev, Harman Kalra; +Cc: david.marchand, dmitry.kozliuk, mdr
Updating the interrupt test suite to make use of the interrupt
handle get/set APIs.
Signed-off-by: Harman Kalra <hkalra@marvell.com>
---
app/test/test_interrupts.c | 163 ++++++++++++++++++++++---------------
1 file changed, 98 insertions(+), 65 deletions(-)
diff --git a/app/test/test_interrupts.c b/app/test/test_interrupts.c
index 233b14a70b..b8d3a768dc 100644
--- a/app/test/test_interrupts.c
+++ b/app/test/test_interrupts.c
@@ -16,7 +16,7 @@
/* predefined interrupt handle types */
enum test_interrupt_handle_type {
- TEST_INTERRUPT_HANDLE_INVALID,
+ TEST_INTERRUPT_HANDLE_INVALID = 0,
TEST_INTERRUPT_HANDLE_VALID,
TEST_INTERRUPT_HANDLE_VALID_UIO,
TEST_INTERRUPT_HANDLE_VALID_ALARM,
@@ -27,7 +27,7 @@ enum test_interrupt_handle_type {
/* flag of if callback is called */
static volatile int flag;
-static struct rte_intr_handle intr_handles[TEST_INTERRUPT_HANDLE_MAX];
+static struct rte_intr_handle *intr_handles[TEST_INTERRUPT_HANDLE_MAX];
static enum test_interrupt_handle_type test_intr_type =
TEST_INTERRUPT_HANDLE_MAX;
@@ -50,7 +50,7 @@ static union intr_pipefds pfds;
static inline int
test_interrupt_handle_sanity_check(struct rte_intr_handle *intr_handle)
{
- if (!intr_handle || intr_handle->fd < 0)
+ if (!intr_handle || rte_intr_fd_get(intr_handle) < 0)
return -1;
return 0;
@@ -62,31 +62,55 @@ test_interrupt_handle_sanity_check(struct rte_intr_handle *intr_handle)
static int
test_interrupt_init(void)
{
+ struct rte_intr_handle *test_intr_handle;
+ int i;
+
if (pipe(pfds.pipefd) < 0)
return -1;
- intr_handles[TEST_INTERRUPT_HANDLE_INVALID].fd = -1;
- intr_handles[TEST_INTERRUPT_HANDLE_INVALID].type =
- RTE_INTR_HANDLE_UNKNOWN;
+ for (i = 0; i < TEST_INTERRUPT_HANDLE_MAX; i++) {
+ intr_handles[i] =
+ rte_intr_instance_alloc(RTE_INTR_ALLOC_TRAD_HEAP);
+ if (!intr_handles[i])
+ return -1;
+ }
+
+ test_intr_handle = intr_handles[TEST_INTERRUPT_HANDLE_INVALID];
+ if (rte_intr_fd_set(test_intr_handle, -1))
+ return -1;
+ if (rte_intr_type_set(test_intr_handle, RTE_INTR_HANDLE_UNKNOWN))
+ return -1;
- intr_handles[TEST_INTERRUPT_HANDLE_VALID].fd = pfds.readfd;
- intr_handles[TEST_INTERRUPT_HANDLE_VALID].type =
- RTE_INTR_HANDLE_UNKNOWN;
+ test_intr_handle = intr_handles[TEST_INTERRUPT_HANDLE_VALID];
+ if (rte_intr_fd_set(test_intr_handle, pfds.readfd))
+ return -1;
+ if (rte_intr_type_set(test_intr_handle, RTE_INTR_HANDLE_UNKNOWN))
+ return -1;
- intr_handles[TEST_INTERRUPT_HANDLE_VALID_UIO].fd = pfds.readfd;
- intr_handles[TEST_INTERRUPT_HANDLE_VALID_UIO].type =
- RTE_INTR_HANDLE_UIO;
+ test_intr_handle = intr_handles[TEST_INTERRUPT_HANDLE_VALID_UIO];
+ if (rte_intr_fd_set(test_intr_handle, pfds.readfd))
+ return -1;
+ if (rte_intr_type_set(test_intr_handle, RTE_INTR_HANDLE_UIO))
+ return -1;
- intr_handles[TEST_INTERRUPT_HANDLE_VALID_ALARM].fd = pfds.readfd;
- intr_handles[TEST_INTERRUPT_HANDLE_VALID_ALARM].type =
- RTE_INTR_HANDLE_ALARM;
+ test_intr_handle = intr_handles[TEST_INTERRUPT_HANDLE_VALID_ALARM];
+ if (rte_intr_fd_set(test_intr_handle, pfds.readfd))
+ return -1;
+ if (rte_intr_type_set(test_intr_handle, RTE_INTR_HANDLE_ALARM))
+ return -1;
- intr_handles[TEST_INTERRUPT_HANDLE_VALID_DEV_EVENT].fd = pfds.readfd;
- intr_handles[TEST_INTERRUPT_HANDLE_VALID_DEV_EVENT].type =
- RTE_INTR_HANDLE_DEV_EVENT;
+ test_intr_handle = intr_handles[TEST_INTERRUPT_HANDLE_VALID_DEV_EVENT];
+ if (rte_intr_fd_set(test_intr_handle, pfds.readfd))
+ return -1;
+ if (rte_intr_type_set(test_intr_handle,
+ RTE_INTR_HANDLE_DEV_EVENT))
+ return -1;
- intr_handles[TEST_INTERRUPT_HANDLE_CASE1].fd = pfds.writefd;
- intr_handles[TEST_INTERRUPT_HANDLE_CASE1].type = RTE_INTR_HANDLE_UIO;
+ test_intr_handle = intr_handles[TEST_INTERRUPT_HANDLE_CASE1];
+ if (rte_intr_fd_set(test_intr_handle, pfds.writefd))
+ return -1;
+ if (rte_intr_type_set(test_intr_handle, RTE_INTR_HANDLE_UIO))
+ return -1;
return 0;
}
@@ -97,6 +121,10 @@ test_interrupt_init(void)
static int
test_interrupt_deinit(void)
{
+ int i;
+
+ for (i = 0; i < TEST_INTERRUPT_HANDLE_MAX; i++)
+ rte_intr_instance_free(intr_handles[i]);
close(pfds.pipefd[0]);
close(pfds.pipefd[1]);
@@ -125,8 +153,10 @@ test_interrupt_handle_compare(struct rte_intr_handle *intr_handle_l,
if (!intr_handle_l || !intr_handle_r)
return -1;
- if (intr_handle_l->fd != intr_handle_r->fd ||
- intr_handle_l->type != intr_handle_r->type)
+ if (rte_intr_fd_get(intr_handle_l) !=
+ rte_intr_fd_get(intr_handle_r) ||
+ rte_intr_type_get(intr_handle_l) !=
+ rte_intr_type_get(intr_handle_r))
return -1;
return 0;
@@ -178,6 +208,8 @@ static void
test_interrupt_callback(void *arg)
{
struct rte_intr_handle *intr_handle = arg;
+ struct rte_intr_handle *test_intr_handle;
+
if (test_intr_type >= TEST_INTERRUPT_HANDLE_MAX) {
printf("invalid interrupt type\n");
flag = -1;
@@ -198,8 +230,8 @@ test_interrupt_callback(void *arg)
return;
}
- if (test_interrupt_handle_compare(intr_handle,
- &(intr_handles[test_intr_type])) == 0)
+ test_intr_handle = intr_handles[test_intr_type];
+ if (test_interrupt_handle_compare(intr_handle, test_intr_handle) == 0)
flag = 1;
}
@@ -223,7 +255,7 @@ test_interrupt_callback_1(void *arg)
static int
test_interrupt_enable(void)
{
- struct rte_intr_handle test_intr_handle;
+ struct rte_intr_handle *test_intr_handle;
/* check with null intr_handle */
if (rte_intr_enable(NULL) == 0) {
@@ -233,7 +265,7 @@ test_interrupt_enable(void)
/* check with invalid intr_handle */
test_intr_handle = intr_handles[TEST_INTERRUPT_HANDLE_INVALID];
- if (rte_intr_enable(&test_intr_handle) == 0) {
+ if (rte_intr_enable(test_intr_handle) == 0) {
printf("unexpectedly enable invalid intr_handle "
"successfully\n");
return -1;
@@ -241,7 +273,7 @@ test_interrupt_enable(void)
/* check with valid intr_handle */
test_intr_handle = intr_handles[TEST_INTERRUPT_HANDLE_VALID];
- if (rte_intr_enable(&test_intr_handle) == 0) {
+ if (rte_intr_enable(test_intr_handle) == 0) {
printf("unexpectedly enable a specific intr_handle "
"successfully\n");
return -1;
@@ -249,7 +281,7 @@ test_interrupt_enable(void)
/* check with specific valid intr_handle */
test_intr_handle = intr_handles[TEST_INTERRUPT_HANDLE_VALID_ALARM];
- if (rte_intr_enable(&test_intr_handle) == 0) {
+ if (rte_intr_enable(test_intr_handle) == 0) {
printf("unexpectedly enable a specific intr_handle "
"successfully\n");
return -1;
@@ -257,7 +289,7 @@ test_interrupt_enable(void)
/* check with specific valid intr_handle */
test_intr_handle = intr_handles[TEST_INTERRUPT_HANDLE_VALID_DEV_EVENT];
- if (rte_intr_enable(&test_intr_handle) == 0) {
+ if (rte_intr_enable(test_intr_handle) == 0) {
printf("unexpectedly enable a specific intr_handle "
"successfully\n");
return -1;
@@ -265,13 +297,13 @@ test_interrupt_enable(void)
/* check with valid handler and its type */
test_intr_handle = intr_handles[TEST_INTERRUPT_HANDLE_CASE1];
- if (rte_intr_enable(&test_intr_handle) < 0) {
+ if (rte_intr_enable(test_intr_handle) < 0) {
printf("fail to enable interrupt on a simulated handler\n");
return -1;
}
test_intr_handle = intr_handles[TEST_INTERRUPT_HANDLE_VALID_UIO];
- if (rte_intr_enable(&test_intr_handle) == 0) {
+ if (rte_intr_enable(test_intr_handle) == 0) {
printf("unexpectedly enable a specific intr_handle "
"successfully\n");
return -1;
@@ -286,7 +318,7 @@ test_interrupt_enable(void)
static int
test_interrupt_disable(void)
{
- struct rte_intr_handle test_intr_handle;
+ struct rte_intr_handle *test_intr_handle;
/* check with null intr_handle */
if (rte_intr_disable(NULL) == 0) {
@@ -297,7 +329,7 @@ test_interrupt_disable(void)
/* check with invalid intr_handle */
test_intr_handle = intr_handles[TEST_INTERRUPT_HANDLE_INVALID];
- if (rte_intr_disable(&test_intr_handle) == 0) {
+ if (rte_intr_disable(test_intr_handle) == 0) {
printf("unexpectedly disable invalid intr_handle "
"successfully\n");
return -1;
@@ -305,7 +337,7 @@ test_interrupt_disable(void)
/* check with valid intr_handle */
test_intr_handle = intr_handles[TEST_INTERRUPT_HANDLE_VALID];
- if (rte_intr_disable(&test_intr_handle) == 0) {
+ if (rte_intr_disable(test_intr_handle) == 0) {
printf("unexpectedly disable a specific intr_handle "
"successfully\n");
return -1;
@@ -313,7 +345,7 @@ test_interrupt_disable(void)
/* check with specific valid intr_handle */
test_intr_handle = intr_handles[TEST_INTERRUPT_HANDLE_VALID_ALARM];
- if (rte_intr_disable(&test_intr_handle) == 0) {
+ if (rte_intr_disable(test_intr_handle) == 0) {
printf("unexpectedly disable a specific intr_handle "
"successfully\n");
return -1;
@@ -321,7 +353,7 @@ test_interrupt_disable(void)
/* check with specific valid intr_handle */
test_intr_handle = intr_handles[TEST_INTERRUPT_HANDLE_VALID_DEV_EVENT];
- if (rte_intr_disable(&test_intr_handle) == 0) {
+ if (rte_intr_disable(test_intr_handle) == 0) {
printf("unexpectedly disable a specific intr_handle "
"successfully\n");
return -1;
@@ -329,13 +361,13 @@ test_interrupt_disable(void)
/* check with valid handler and its type */
test_intr_handle = intr_handles[TEST_INTERRUPT_HANDLE_CASE1];
- if (rte_intr_disable(&test_intr_handle) < 0) {
+ if (rte_intr_disable(test_intr_handle) < 0) {
printf("fail to disable interrupt on a simulated handler\n");
return -1;
}
test_intr_handle = intr_handles[TEST_INTERRUPT_HANDLE_VALID_UIO];
- if (rte_intr_disable(&test_intr_handle) == 0) {
+ if (rte_intr_disable(test_intr_handle) == 0) {
printf("unexpectedly disable a specific intr_handle "
"successfully\n");
return -1;
@@ -351,13 +383,13 @@ static int
test_interrupt_full_path_check(enum test_interrupt_handle_type intr_type)
{
int count;
- struct rte_intr_handle test_intr_handle;
+ struct rte_intr_handle *test_intr_handle;
flag = 0;
test_intr_handle = intr_handles[intr_type];
test_intr_type = intr_type;
- if (rte_intr_callback_register(&test_intr_handle,
- test_interrupt_callback, &test_intr_handle) < 0) {
+ if (rte_intr_callback_register(test_intr_handle,
+ test_interrupt_callback, test_intr_handle) < 0) {
printf("fail to register callback\n");
return -1;
}
@@ -371,9 +403,9 @@ test_interrupt_full_path_check(enum test_interrupt_handle_type intr_type)
rte_delay_ms(TEST_INTERRUPT_CHECK_INTERVAL);
while ((count =
- rte_intr_callback_unregister(&test_intr_handle,
+ rte_intr_callback_unregister(test_intr_handle,
test_interrupt_callback,
- &test_intr_handle)) < 0) {
+ test_intr_handle)) < 0) {
if (count != -EAGAIN)
return -1;
}
@@ -396,7 +428,7 @@ static int
test_interrupt(void)
{
int ret = -1;
- struct rte_intr_handle test_intr_handle;
+ struct rte_intr_handle *test_intr_handle;
if (test_interrupt_init() < 0) {
printf("fail to initialize for testing interrupt\n");
@@ -445,8 +477,8 @@ test_interrupt(void)
/* check if it will fail to register cb with invalid intr_handle */
test_intr_handle = intr_handles[TEST_INTERRUPT_HANDLE_INVALID];
- if (rte_intr_callback_register(&test_intr_handle,
- test_interrupt_callback, &test_intr_handle) == 0) {
+ if (rte_intr_callback_register(test_intr_handle,
+ test_interrupt_callback, test_intr_handle) == 0) {
printf("unexpectedly register successfully with invalid "
"intr_handle\n");
goto out;
@@ -454,7 +486,8 @@ test_interrupt(void)
/* check if it will fail to register without callback */
test_intr_handle = intr_handles[TEST_INTERRUPT_HANDLE_VALID];
- if (rte_intr_callback_register(&test_intr_handle, NULL, &test_intr_handle) == 0) {
+ if (rte_intr_callback_register(test_intr_handle, NULL,
+ test_intr_handle) == 0) {
printf("unexpectedly register successfully with "
"null callback\n");
goto out;
@@ -470,8 +503,8 @@ test_interrupt(void)
/* check if it will fail to unregister cb with invalid intr_handle */
test_intr_handle = intr_handles[TEST_INTERRUPT_HANDLE_INVALID];
- if (rte_intr_callback_unregister(&test_intr_handle,
- test_interrupt_callback, &test_intr_handle) > 0) {
+ if (rte_intr_callback_unregister(test_intr_handle,
+ test_interrupt_callback, test_intr_handle) > 0) {
printf("unexpectedly unregister successfully with "
"invalid intr_handle\n");
goto out;
@@ -479,29 +512,29 @@ test_interrupt(void)
/* check if it is ok to register the same intr_handle twice */
test_intr_handle = intr_handles[TEST_INTERRUPT_HANDLE_VALID];
- if (rte_intr_callback_register(&test_intr_handle,
- test_interrupt_callback, &test_intr_handle) < 0) {
+ if (rte_intr_callback_register(test_intr_handle,
+ test_interrupt_callback, test_intr_handle) < 0) {
printf("it fails to register test_interrupt_callback\n");
goto out;
}
- if (rte_intr_callback_register(&test_intr_handle,
- test_interrupt_callback_1, &test_intr_handle) < 0) {
+ if (rte_intr_callback_register(test_intr_handle,
+ test_interrupt_callback_1, test_intr_handle) < 0) {
printf("it fails to register test_interrupt_callback_1\n");
goto out;
}
/* check if it will fail to unregister with invalid parameter */
- if (rte_intr_callback_unregister(&test_intr_handle,
+ if (rte_intr_callback_unregister(test_intr_handle,
test_interrupt_callback, (void *)0xff) != 0) {
printf("unexpectedly unregisters successfully with "
"invalid arg\n");
goto out;
}
- if (rte_intr_callback_unregister(&test_intr_handle,
- test_interrupt_callback, &test_intr_handle) <= 0) {
+ if (rte_intr_callback_unregister(test_intr_handle,
+ test_interrupt_callback, test_intr_handle) <= 0) {
printf("it fails to unregister test_interrupt_callback\n");
goto out;
}
- if (rte_intr_callback_unregister(&test_intr_handle,
+ if (rte_intr_callback_unregister(test_intr_handle,
test_interrupt_callback_1, (void *)-1) <= 0) {
printf("it fails to unregister test_interrupt_callback_1 "
"for all\n");
@@ -529,27 +562,27 @@ test_interrupt(void)
printf("Clearing for interrupt tests\n");
/* clear registered callbacks */
test_intr_handle = intr_handles[TEST_INTERRUPT_HANDLE_VALID];
- rte_intr_callback_unregister(&test_intr_handle,
+ rte_intr_callback_unregister(test_intr_handle,
test_interrupt_callback, (void *)-1);
- rte_intr_callback_unregister(&test_intr_handle,
+ rte_intr_callback_unregister(test_intr_handle,
test_interrupt_callback_1, (void *)-1);
test_intr_handle = intr_handles[TEST_INTERRUPT_HANDLE_VALID_UIO];
- rte_intr_callback_unregister(&test_intr_handle,
+ rte_intr_callback_unregister(test_intr_handle,
test_interrupt_callback, (void *)-1);
- rte_intr_callback_unregister(&test_intr_handle,
+ rte_intr_callback_unregister(test_intr_handle,
test_interrupt_callback_1, (void *)-1);
test_intr_handle = intr_handles[TEST_INTERRUPT_HANDLE_VALID_ALARM];
- rte_intr_callback_unregister(&test_intr_handle,
+ rte_intr_callback_unregister(test_intr_handle,
test_interrupt_callback, (void *)-1);
- rte_intr_callback_unregister(&test_intr_handle,
+ rte_intr_callback_unregister(test_intr_handle,
test_interrupt_callback_1, (void *)-1);
test_intr_handle = intr_handles[TEST_INTERRUPT_HANDLE_VALID_DEV_EVENT];
- rte_intr_callback_unregister(&test_intr_handle,
+ rte_intr_callback_unregister(test_intr_handle,
test_interrupt_callback, (void *)-1);
- rte_intr_callback_unregister(&test_intr_handle,
+ rte_intr_callback_unregister(test_intr_handle,
test_interrupt_callback_1, (void *)-1);
rte_delay_ms(2 * TEST_INTERRUPT_CHECK_INTERVAL);
--
2.18.0
^ permalink raw reply [flat|nested] 152+ messages in thread
* [dpdk-dev] [PATCH v2 4/6] drivers: remove direct access to interrupt handle
2021-10-05 12:14 ` [dpdk-dev] [PATCH v2 0/6] " Harman Kalra
` (2 preceding siblings ...)
2021-10-05 12:14 ` [dpdk-dev] [PATCH v2 3/6] test/interrupt: apply get set interrupt handle APIs Harman Kalra
@ 2021-10-05 12:15 ` Harman Kalra
2021-10-05 12:15 ` [dpdk-dev] [PATCH v2 5/6] eal/interrupts: make interrupt handle structure opaque Harman Kalra
2021-10-05 12:15 ` [dpdk-dev] [PATCH v2 6/6] eal/alarm: introduce alarm fini routine Harman Kalra
5 siblings, 0 replies; 152+ messages in thread
From: Harman Kalra @ 2021-10-05 12:15 UTC (permalink / raw)
To: dev, Nicolas Chautru, Parav Pandit, Xueming Li, Hemant Agrawal,
Sachin Saxena, Rosen Xu, Ferruh Yigit, Anatoly Burakov,
Stephen Hemminger, Long Li, Nithin Dabilpuram, Kiran Kumar K,
Sunil Kumar Kori, Satha Rao, Jerin Jacob, Ankur Dwivedi,
Anoob Joseph, Pavan Nikhilesh, Igor Russkikh, Steven Webster,
Matt Peters, Somalapuram Amaranath, Rasesh Mody, Shahed Shaikh,
Ajit Khaparde, Somnath Kotur, Haiyue Wang, Marcin Wojtas,
Michal Krawczyk, Shai Brandes, Evgeny Schemeilin, Igor Chauskin,
John Daley, Hyong Youb Kim, Gaetan Rivet, Qi Zhang, Xiao Wang,
Ziyang Xuan, Xiaoyun Wang, Guoyang Zhou, Min Hu (Connor),
Yisen Zhuang, Lijun Ou, Beilei Xing, Jingjing Wu, Qiming Yang,
Andrew Boyer, Jakub Grajciar, Matan Azrad, Viacheslav Ovsiienko,
Heinrich Kuhn, Jiawen Wu, Devendra Singh Rawat, Andrew Rybchenko,
Keith Wiles, Maciej Czekaj, Jian Wang, Maxime Coquelin,
Chenbo Xia, Yong Wang, Tianfei zhang, Xiaoyun Li, Guy Kaneti,
Bruce Richardson, Thomas Monjalon
Cc: david.marchand, dmitry.kozliuk, mdr, Harman Kalra
Removing direct access to the interrupt handle structure fields;
the respective get/set APIs are used instead.
Updating all the drivers and libraries that currently access the
interrupt handle fields so that they go through these APIs.
Signed-off-by: Harman Kalra <hkalra@marvell.com>
---
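[Editor's note, not part of the patch: below is a minimal sketch of the
conversion pattern this patch applies across the drivers. It shows a
hypothetical driver allocating an interrupt instance at probe time and
using the accessor APIs instead of writing intr_handle->fd and
intr_handle->type directly. The helper name example_setup_intr() is
invented for illustration; the allocator flag RTE_INTR_ALLOC_TRAD_HEAP
and the accessor names follow this revision of the series and may
differ in later versions.]

#include <errno.h>
#include <rte_errno.h>
#include <rte_interrupts.h>

/* Hypothetical probe-time helper: the handle is now opaque, so the
 * driver allocates an instance and sets fd/type via the accessors
 * rather than touching the structure fields directly.
 */
static int
example_setup_intr(struct rte_intr_handle **handle, int fd)
{
	*handle = rte_intr_instance_alloc(RTE_INTR_ALLOC_TRAD_HEAP);
	if (*handle == NULL)
		return -ENOMEM;

	if (rte_intr_fd_set(*handle, fd) ||
	    rte_intr_type_set(*handle, RTE_INTR_HANDLE_EXT)) {
		/* On failure the instance must be freed by the driver,
		 * mirroring the cleanup paths added in this patch.
		 */
		rte_intr_instance_free(*handle);
		*handle = NULL;
		return -rte_errno;
	}
	return 0;
}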
drivers/baseband/acc100/rte_acc100_pmd.c | 18 +--
.../fpga_5gnr_fec/rte_fpga_5gnr_fec.c | 21 ++--
drivers/baseband/fpga_lte_fec/fpga_lte_fec.c | 21 ++--
drivers/bus/auxiliary/auxiliary_common.c | 2 +
drivers/bus/auxiliary/linux/auxiliary.c | 9 ++
drivers/bus/auxiliary/rte_bus_auxiliary.h | 2 +-
drivers/bus/dpaa/dpaa_bus.c | 28 ++++-
drivers/bus/dpaa/rte_dpaa_bus.h | 2 +-
drivers/bus/fslmc/fslmc_bus.c | 15 ++-
drivers/bus/fslmc/fslmc_vfio.c | 32 +++--
drivers/bus/fslmc/portal/dpaa2_hw_dpio.c | 20 ++-
drivers/bus/fslmc/portal/dpaa2_hw_pvt.h | 2 +-
drivers/bus/fslmc/rte_fslmc.h | 2 +-
drivers/bus/ifpga/ifpga_bus.c | 15 ++-
drivers/bus/ifpga/rte_bus_ifpga.h | 2 +-
drivers/bus/pci/bsd/pci.c | 21 ++--
drivers/bus/pci/linux/pci.c | 4 +-
drivers/bus/pci/linux/pci_uio.c | 73 +++++++----
drivers/bus/pci/linux/pci_vfio.c | 108 ++++++++++------
drivers/bus/pci/pci_common.c | 29 ++++-
drivers/bus/pci/pci_common_uio.c | 21 ++--
drivers/bus/pci/rte_bus_pci.h | 4 +-
drivers/bus/vmbus/linux/vmbus_bus.c | 5 +
drivers/bus/vmbus/linux/vmbus_uio.c | 37 ++++--
drivers/bus/vmbus/rte_bus_vmbus.h | 2 +-
drivers/bus/vmbus/vmbus_common_uio.c | 24 ++--
drivers/common/cnxk/roc_cpt.c | 8 +-
drivers/common/cnxk/roc_dev.c | 14 +--
drivers/common/cnxk/roc_irq.c | 106 +++++++++-------
drivers/common/cnxk/roc_nix_irq.c | 36 +++---
drivers/common/cnxk/roc_npa.c | 2 +-
drivers/common/cnxk/roc_platform.h | 49 ++++++--
drivers/common/cnxk/roc_sso.c | 4 +-
drivers/common/cnxk/roc_tim.c | 4 +-
drivers/common/octeontx2/otx2_dev.c | 14 +--
drivers/common/octeontx2/otx2_irq.c | 117 ++++++++++--------
.../octeontx2/otx2_cryptodev_hw_access.c | 4 +-
drivers/event/octeontx2/otx2_evdev_irq.c | 12 +-
drivers/mempool/octeontx2/otx2_mempool.c | 2 +-
drivers/net/atlantic/atl_ethdev.c | 20 ++-
drivers/net/avp/avp_ethdev.c | 8 +-
drivers/net/axgbe/axgbe_ethdev.c | 12 +-
drivers/net/axgbe/axgbe_mdio.c | 6 +-
drivers/net/bnx2x/bnx2x_ethdev.c | 10 +-
drivers/net/bnxt/bnxt_ethdev.c | 33 +++--
drivers/net/bnxt/bnxt_irq.c | 4 +-
drivers/net/dpaa/dpaa_ethdev.c | 47 ++++---
drivers/net/dpaa2/dpaa2_ethdev.c | 10 +-
drivers/net/e1000/em_ethdev.c | 23 ++--
drivers/net/e1000/igb_ethdev.c | 79 ++++++------
drivers/net/ena/ena_ethdev.c | 35 +++---
drivers/net/enic/enic_main.c | 26 ++--
drivers/net/failsafe/failsafe.c | 23 +++-
drivers/net/failsafe/failsafe_intr.c | 43 ++++---
drivers/net/failsafe/failsafe_ops.c | 21 +++-
drivers/net/failsafe/failsafe_private.h | 2 +-
drivers/net/fm10k/fm10k_ethdev.c | 32 ++---
drivers/net/hinic/hinic_pmd_ethdev.c | 10 +-
drivers/net/hns3/hns3_ethdev.c | 57 ++++-----
drivers/net/hns3/hns3_ethdev_vf.c | 64 +++++-----
drivers/net/hns3/hns3_rxtx.c | 2 +-
drivers/net/i40e/i40e_ethdev.c | 53 ++++----
drivers/net/iavf/iavf_ethdev.c | 42 +++----
drivers/net/iavf/iavf_vchnl.c | 4 +-
drivers/net/ice/ice_dcf.c | 10 +-
drivers/net/ice/ice_dcf_ethdev.c | 21 ++--
drivers/net/ice/ice_ethdev.c | 49 ++++----
drivers/net/igc/igc_ethdev.c | 45 ++++---
drivers/net/ionic/ionic_ethdev.c | 17 +--
drivers/net/ixgbe/ixgbe_ethdev.c | 66 +++++-----
drivers/net/memif/memif_socket.c | 111 ++++++++++++-----
drivers/net/memif/memif_socket.h | 4 +-
drivers/net/memif/rte_eth_memif.c | 61 +++++++--
drivers/net/memif/rte_eth_memif.h | 2 +-
drivers/net/mlx4/mlx4.c | 19 ++-
drivers/net/mlx4/mlx4.h | 2 +-
drivers/net/mlx4/mlx4_intr.c | 47 ++++---
drivers/net/mlx5/linux/mlx5_os.c | 54 +++++---
drivers/net/mlx5/linux/mlx5_socket.c | 24 ++--
drivers/net/mlx5/mlx5.h | 6 +-
drivers/net/mlx5/mlx5_rxq.c | 42 ++++---
drivers/net/mlx5/mlx5_trigger.c | 4 +-
drivers/net/mlx5/mlx5_txpp.c | 26 ++--
drivers/net/netvsc/hn_ethdev.c | 4 +-
drivers/net/nfp/nfp_common.c | 34 ++---
drivers/net/nfp/nfp_ethdev.c | 13 +-
drivers/net/nfp/nfp_ethdev_vf.c | 13 +-
drivers/net/ngbe/ngbe_ethdev.c | 29 ++---
drivers/net/octeontx2/otx2_ethdev_irq.c | 35 +++---
drivers/net/qede/qede_ethdev.c | 16 +--
drivers/net/sfc/sfc_intr.c | 30 ++---
drivers/net/tap/rte_eth_tap.c | 36 ++++--
drivers/net/tap/rte_eth_tap.h | 2 +-
drivers/net/tap/tap_intr.c | 32 +++--
drivers/net/thunderx/nicvf_ethdev.c | 12 ++
drivers/net/thunderx/nicvf_struct.h | 2 +-
drivers/net/txgbe/txgbe_ethdev.c | 34 +++--
drivers/net/txgbe/txgbe_ethdev_vf.c | 33 +++--
drivers/net/vhost/rte_eth_vhost.c | 75 ++++++-----
drivers/net/virtio/virtio_ethdev.c | 21 ++--
.../net/virtio/virtio_user/virtio_user_dev.c | 48 ++++---
drivers/net/vmxnet3/vmxnet3_ethdev.c | 43 ++++---
drivers/raw/ifpga/ifpga_rawdev.c | 62 +++++++---
drivers/raw/ntb/ntb.c | 9 +-
.../regex/octeontx2/otx2_regexdev_hw_access.c | 4 +-
drivers/vdpa/ifc/ifcvf_vdpa.c | 5 +-
drivers/vdpa/mlx5/mlx5_vdpa.c | 10 ++
drivers/vdpa/mlx5/mlx5_vdpa.h | 4 +-
drivers/vdpa/mlx5/mlx5_vdpa_event.c | 22 ++--
drivers/vdpa/mlx5/mlx5_vdpa_virtq.c | 45 ++++---
lib/bbdev/rte_bbdev.c | 4 +-
lib/eal/freebsd/eal_alarm.c | 45 ++++++-
lib/eal/include/rte_eal_trace.h | 24 +---
lib/eal/linux/eal_alarm.c | 29 +++--
lib/eal/linux/eal_dev.c | 63 ++++++----
lib/ethdev/ethdev_pci.h | 2 +-
lib/ethdev/rte_ethdev.c | 14 +--
117 files changed, 1808 insertions(+), 1213 deletions(-)
diff --git a/drivers/baseband/acc100/rte_acc100_pmd.c b/drivers/baseband/acc100/rte_acc100_pmd.c
index 68ba523ea9..ebbb990deb 100644
--- a/drivers/baseband/acc100/rte_acc100_pmd.c
+++ b/drivers/baseband/acc100/rte_acc100_pmd.c
@@ -720,8 +720,10 @@ acc100_intr_enable(struct rte_bbdev *dev)
struct acc100_device *d = dev->data->dev_private;
/* Only MSI are currently supported */
- if (dev->intr_handle->type == RTE_INTR_HANDLE_VFIO_MSI ||
- dev->intr_handle->type == RTE_INTR_HANDLE_UIO) {
+ if (rte_intr_type_get(dev->intr_handle) ==
+ RTE_INTR_HANDLE_VFIO_MSI ||
+ rte_intr_type_get(dev->intr_handle) ==
+ RTE_INTR_HANDLE_UIO) {
ret = allocate_info_ring(dev);
if (ret < 0) {
@@ -1096,8 +1098,9 @@ acc100_queue_intr_enable(struct rte_bbdev *dev, uint16_t queue_id)
{
struct acc100_queue *q = dev->data->queues[queue_id].queue_private;
- if (dev->intr_handle->type != RTE_INTR_HANDLE_VFIO_MSI &&
- dev->intr_handle->type != RTE_INTR_HANDLE_UIO)
+ if (rte_intr_type_get(dev->intr_handle) !=
+ RTE_INTR_HANDLE_VFIO_MSI &&
+ rte_intr_type_get(dev->intr_handle) != RTE_INTR_HANDLE_UIO)
return -ENOTSUP;
q->irq_enable = 1;
@@ -1109,8 +1112,9 @@ acc100_queue_intr_disable(struct rte_bbdev *dev, uint16_t queue_id)
{
struct acc100_queue *q = dev->data->queues[queue_id].queue_private;
- if (dev->intr_handle->type != RTE_INTR_HANDLE_VFIO_MSI &&
- dev->intr_handle->type != RTE_INTR_HANDLE_UIO)
+ if (rte_intr_type_get(dev->intr_handle) !=
+ RTE_INTR_HANDLE_VFIO_MSI &&
+ rte_intr_type_get(dev->intr_handle) != RTE_INTR_HANDLE_UIO)
return -ENOTSUP;
q->irq_enable = 0;
@@ -4178,7 +4182,7 @@ static int acc100_pci_probe(struct rte_pci_driver *pci_drv,
/* Fill HW specific part of device structure */
bbdev->device = &pci_dev->device;
- bbdev->intr_handle = &pci_dev->intr_handle;
+ bbdev->intr_handle = pci_dev->intr_handle;
bbdev->data->socket_id = pci_dev->device.numa_node;
/* Invoke ACC100 device initialization function */
diff --git a/drivers/baseband/fpga_5gnr_fec/rte_fpga_5gnr_fec.c b/drivers/baseband/fpga_5gnr_fec/rte_fpga_5gnr_fec.c
index 6485cc824a..8add4b13ef 100644
--- a/drivers/baseband/fpga_5gnr_fec/rte_fpga_5gnr_fec.c
+++ b/drivers/baseband/fpga_5gnr_fec/rte_fpga_5gnr_fec.c
@@ -743,16 +743,15 @@ fpga_intr_enable(struct rte_bbdev *dev)
* invoked when any FPGA queue issues interrupt.
*/
for (i = 0; i < FPGA_NUM_INTR_VEC; ++i)
- dev->intr_handle->efds[i] = dev->intr_handle->fd;
-
- if (!dev->intr_handle->intr_vec) {
- dev->intr_handle->intr_vec = rte_zmalloc("intr_vec",
- dev->data->num_queues * sizeof(int), 0);
- if (!dev->intr_handle->intr_vec) {
- rte_bbdev_log(ERR, "Failed to allocate %u vectors",
- dev->data->num_queues);
- return -ENOMEM;
- }
+ if (rte_intr_efds_index_set(dev->intr_handle, i,
+ rte_intr_fd_get(dev->intr_handle)))
+ return -rte_errno;
+
+ if (rte_intr_vec_list_alloc(dev->intr_handle, "intr_vec",
+ dev->data->num_queues)) {
+ rte_bbdev_log(ERR, "Failed to allocate %u vectors",
+ dev->data->num_queues);
+ return -ENOMEM;
}
ret = rte_intr_enable(dev->intr_handle);
@@ -1879,7 +1878,7 @@ fpga_5gnr_fec_probe(struct rte_pci_driver *pci_drv,
/* Fill HW specific part of device structure */
bbdev->device = &pci_dev->device;
- bbdev->intr_handle = &pci_dev->intr_handle;
+ bbdev->intr_handle = pci_dev->intr_handle;
bbdev->data->socket_id = pci_dev->device.numa_node;
/* Invoke FEC FPGA device initialization function */
diff --git a/drivers/baseband/fpga_lte_fec/fpga_lte_fec.c b/drivers/baseband/fpga_lte_fec/fpga_lte_fec.c
index 350c4248eb..8f69e8fc3e 100644
--- a/drivers/baseband/fpga_lte_fec/fpga_lte_fec.c
+++ b/drivers/baseband/fpga_lte_fec/fpga_lte_fec.c
@@ -1014,16 +1014,15 @@ fpga_intr_enable(struct rte_bbdev *dev)
* invoked when any FPGA queue issues interrupt.
*/
for (i = 0; i < FPGA_NUM_INTR_VEC; ++i)
- dev->intr_handle->efds[i] = dev->intr_handle->fd;
-
- if (!dev->intr_handle->intr_vec) {
- dev->intr_handle->intr_vec = rte_zmalloc("intr_vec",
- dev->data->num_queues * sizeof(int), 0);
- if (!dev->intr_handle->intr_vec) {
- rte_bbdev_log(ERR, "Failed to allocate %u vectors",
- dev->data->num_queues);
- return -ENOMEM;
- }
+ if (rte_intr_efds_index_set(dev->intr_handle, i,
+ rte_intr_fd_get(dev->intr_handle)))
+ return -rte_errno;
+
+ if (rte_intr_vec_list_alloc(dev->intr_handle, "intr_vec",
+ dev->data->num_queues)) {
+ rte_bbdev_log(ERR, "Failed to allocate %u vectors",
+ dev->data->num_queues);
+ return -ENOMEM;
}
ret = rte_intr_enable(dev->intr_handle);
@@ -2369,7 +2368,7 @@ fpga_lte_fec_probe(struct rte_pci_driver *pci_drv,
/* Fill HW specific part of device structure */
bbdev->device = &pci_dev->device;
- bbdev->intr_handle = &pci_dev->intr_handle;
+ bbdev->intr_handle = pci_dev->intr_handle;
bbdev->data->socket_id = pci_dev->device.numa_node;
/* Invoke FEC FPGA device initialization function */
diff --git a/drivers/bus/auxiliary/auxiliary_common.c b/drivers/bus/auxiliary/auxiliary_common.c
index 603b6fdc02..6d44c433b6 100644
--- a/drivers/bus/auxiliary/auxiliary_common.c
+++ b/drivers/bus/auxiliary/auxiliary_common.c
@@ -320,6 +320,8 @@ auxiliary_unplug(struct rte_device *dev)
if (ret == 0) {
rte_auxiliary_remove_device(adev);
rte_devargs_remove(dev->devargs);
+ if (adev->intr_handle)
+ rte_intr_instance_free(adev->intr_handle);
free(adev);
}
return ret;
diff --git a/drivers/bus/auxiliary/linux/auxiliary.c b/drivers/bus/auxiliary/linux/auxiliary.c
index 9bd4ee3295..a1a74b9258 100644
--- a/drivers/bus/auxiliary/linux/auxiliary.c
+++ b/drivers/bus/auxiliary/linux/auxiliary.c
@@ -39,6 +39,13 @@ auxiliary_scan_one(const char *dirname, const char *name)
dev->device.name = dev->name;
dev->device.bus = &auxiliary_bus.bus;
+ /* Allocate interrupt instance */
+ dev->intr_handle = rte_intr_instance_alloc(RTE_INTR_ALLOC_TRAD_HEAP);
+ if (!dev->intr_handle) {
+ free(dev);
+ return -1;
+ }
+
/* Get NUMA node, default to 0 if not present */
snprintf(filename, sizeof(filename), "%s/%s/numa_node",
dirname, name);
@@ -67,6 +74,8 @@ auxiliary_scan_one(const char *dirname, const char *name)
rte_devargs_remove(dev2->device.devargs);
auxiliary_on_scan(dev2);
}
+ if (dev->intr_handle)
+ rte_intr_instance_free(dev->intr_handle);
free(dev);
}
return 0;
diff --git a/drivers/bus/auxiliary/rte_bus_auxiliary.h b/drivers/bus/auxiliary/rte_bus_auxiliary.h
index b1f5610404..93b266daf7 100644
--- a/drivers/bus/auxiliary/rte_bus_auxiliary.h
+++ b/drivers/bus/auxiliary/rte_bus_auxiliary.h
@@ -115,7 +115,7 @@ struct rte_auxiliary_device {
RTE_TAILQ_ENTRY(rte_auxiliary_device) next; /**< Next probed device. */
struct rte_device device; /**< Inherit core device */
char name[RTE_DEV_NAME_MAX_LEN + 1]; /**< ASCII device name */
- struct rte_intr_handle intr_handle; /**< Interrupt handle */
+ struct rte_intr_handle *intr_handle; /**< Interrupt handle */
struct rte_auxiliary_driver *driver; /**< Device driver */
};
diff --git a/drivers/bus/dpaa/dpaa_bus.c b/drivers/bus/dpaa/dpaa_bus.c
index 6cab2ae760..59e371dbe2 100644
--- a/drivers/bus/dpaa/dpaa_bus.c
+++ b/drivers/bus/dpaa/dpaa_bus.c
@@ -172,6 +172,15 @@ dpaa_create_device_list(void)
dev->device.bus = &rte_dpaa_bus.bus;
+ /* Allocate interrupt handle instance */
+ dev->intr_handle =
+ rte_intr_instance_alloc(RTE_INTR_ALLOC_TRAD_HEAP);
+ if (!dev->intr_handle) {
+ DPAA_BUS_LOG(ERR, "Failed to allocate intr handle");
+ ret = -ENOMEM;
+ goto cleanup;
+ }
+
cfg = &dpaa_netcfg->port_cfg[i];
fman_intf = cfg->fman_if;
@@ -214,6 +223,15 @@ dpaa_create_device_list(void)
goto cleanup;
}
+ /* Allocate interrupt handle instance */
+ dev->intr_handle =
+ rte_intr_instance_alloc(RTE_INTR_ALLOC_TRAD_HEAP);
+ if (!dev->intr_handle) {
+ DPAA_BUS_LOG(ERR, "Failed to allocate intr handle");
+ ret = -ENOMEM;
+ goto cleanup;
+ }
+
dev->device_type = FSL_DPAA_CRYPTO;
dev->id.dev_id = rte_dpaa_bus.device_count + i;
@@ -247,6 +265,7 @@ dpaa_clean_device_list(void)
RTE_TAILQ_FOREACH_SAFE(dev, &rte_dpaa_bus.device_list, next, tdev) {
TAILQ_REMOVE(&rte_dpaa_bus.device_list, dev, next);
+ rte_intr_instance_free(dev->intr_handle);
free(dev);
dev = NULL;
}
@@ -559,8 +578,11 @@ static int rte_dpaa_setup_intr(struct rte_intr_handle *intr_handle)
return errno;
}
- intr_handle->fd = fd;
- intr_handle->type = RTE_INTR_HANDLE_EXT;
+ if (rte_intr_fd_set(intr_handle, fd))
+ return -rte_errno;
+
+ if (rte_intr_type_set(intr_handle, RTE_INTR_HANDLE_EXT))
+ return -rte_errno;
return 0;
}
@@ -612,7 +634,7 @@ rte_dpaa_bus_probe(void)
TAILQ_FOREACH(dev, &rte_dpaa_bus.device_list, next) {
if (dev->device_type == FSL_DPAA_ETH) {
- ret = rte_dpaa_setup_intr(&dev->intr_handle);
+ ret = rte_dpaa_setup_intr(dev->intr_handle);
if (ret)
DPAA_BUS_ERR("Error setting up interrupt.\n");
}
diff --git a/drivers/bus/dpaa/rte_dpaa_bus.h b/drivers/bus/dpaa/rte_dpaa_bus.h
index ecc66387f6..97d189f9b0 100644
--- a/drivers/bus/dpaa/rte_dpaa_bus.h
+++ b/drivers/bus/dpaa/rte_dpaa_bus.h
@@ -98,7 +98,7 @@ struct rte_dpaa_device {
};
struct rte_dpaa_driver *driver;
struct dpaa_device_id id;
- struct rte_intr_handle intr_handle;
+ struct rte_intr_handle *intr_handle;
enum rte_dpaa_type device_type; /**< Ethernet or crypto type device */
char name[RTE_ETH_NAME_MAX_LEN];
};
diff --git a/drivers/bus/fslmc/fslmc_bus.c b/drivers/bus/fslmc/fslmc_bus.c
index 8c8f8a298d..0509e25f79 100644
--- a/drivers/bus/fslmc/fslmc_bus.c
+++ b/drivers/bus/fslmc/fslmc_bus.c
@@ -47,6 +47,8 @@ cleanup_fslmc_device_list(void)
RTE_TAILQ_FOREACH_SAFE(dev, &rte_fslmc_bus.device_list, next, t_dev) {
TAILQ_REMOVE(&rte_fslmc_bus.device_list, dev, next);
+ if (dev->intr_handle)
+ rte_intr_instance_free(dev->intr_handle);
free(dev);
dev = NULL;
}
@@ -160,6 +162,14 @@ scan_one_fslmc_device(char *dev_name)
dev->device.bus = &rte_fslmc_bus.bus;
+ /* Allocate interrupt instance */
+ dev->intr_handle = rte_intr_instance_alloc(RTE_INTR_ALLOC_TRAD_HEAP);
+ if (!dev->intr_handle) {
+ DPAA2_BUS_ERR("Failed to allocate intr handle");
+ ret = -ENOMEM;
+ goto cleanup;
+ }
+
/* Parse the device name and ID */
t_ptr = strtok(dup_dev_name, ".");
if (!t_ptr) {
@@ -220,8 +230,11 @@ scan_one_fslmc_device(char *dev_name)
cleanup:
if (dup_dev_name)
free(dup_dev_name);
- if (dev)
+ if (dev) {
+ if (dev->intr_handle)
+ rte_intr_instance_free(dev->intr_handle);
free(dev);
+ }
return ret;
}
diff --git a/drivers/bus/fslmc/fslmc_vfio.c b/drivers/bus/fslmc/fslmc_vfio.c
index 852fcfc4dd..c2b469a94b 100644
--- a/drivers/bus/fslmc/fslmc_vfio.c
+++ b/drivers/bus/fslmc/fslmc_vfio.c
@@ -599,7 +599,7 @@ int rte_dpaa2_intr_enable(struct rte_intr_handle *intr_handle, int index)
int len, ret;
char irq_set_buf[IRQ_SET_BUF_LEN];
struct vfio_irq_set *irq_set;
- int *fd_ptr;
+ int *fd_ptr, vfio_dev_fd;
len = sizeof(irq_set_buf);
@@ -611,12 +611,14 @@ int rte_dpaa2_intr_enable(struct rte_intr_handle *intr_handle, int index)
irq_set->index = index;
irq_set->start = 0;
fd_ptr = (int *)&irq_set->data;
- *fd_ptr = intr_handle->fd;
+ *fd_ptr = rte_intr_fd_get(intr_handle);
- ret = ioctl(intr_handle->vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
+ vfio_dev_fd = rte_intr_dev_fd_get(intr_handle);
+ ret = ioctl(vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
if (ret) {
DPAA2_BUS_ERR("Error:dpaa2 SET IRQs fd=%d, err = %d(%s)",
- intr_handle->fd, errno, strerror(errno));
+ rte_intr_fd_get(intr_handle), errno,
+ strerror(errno));
return ret;
}
@@ -627,7 +629,7 @@ int rte_dpaa2_intr_disable(struct rte_intr_handle *intr_handle, int index)
{
struct vfio_irq_set *irq_set;
char irq_set_buf[IRQ_SET_BUF_LEN];
- int len, ret;
+ int len, ret, vfio_dev_fd;
len = sizeof(struct vfio_irq_set);
@@ -638,11 +640,12 @@ int rte_dpaa2_intr_disable(struct rte_intr_handle *intr_handle, int index)
irq_set->start = 0;
irq_set->count = 0;
- ret = ioctl(intr_handle->vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
+ vfio_dev_fd = rte_intr_dev_fd_get(intr_handle);
+ ret = ioctl(vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
if (ret)
DPAA2_BUS_ERR(
"Error disabling dpaa2 interrupts for fd %d",
- intr_handle->fd);
+ rte_intr_fd_get(intr_handle));
return ret;
}
@@ -684,9 +687,16 @@ rte_dpaa2_vfio_setup_intr(struct rte_intr_handle *intr_handle,
return -1;
}
- intr_handle->fd = fd;
- intr_handle->type = RTE_INTR_HANDLE_VFIO_MSI;
- intr_handle->vfio_dev_fd = vfio_dev_fd;
+ if (rte_intr_fd_set(intr_handle, fd))
+ return -rte_errno;
+
+ if (rte_intr_type_set(intr_handle,
+ RTE_INTR_HANDLE_VFIO_MSI))
+ return -rte_errno;
+
+ if (rte_intr_dev_fd_set(intr_handle, vfio_dev_fd))
+ return -rte_errno;
+
return 0;
}
@@ -711,7 +721,7 @@ fslmc_process_iodevices(struct rte_dpaa2_device *dev)
switch (dev->dev_type) {
case DPAA2_ETH:
- rte_dpaa2_vfio_setup_intr(&dev->intr_handle, dev_fd,
+ rte_dpaa2_vfio_setup_intr(dev->intr_handle, dev_fd,
device_info.num_irqs);
break;
case DPAA2_CON:
diff --git a/drivers/bus/fslmc/portal/dpaa2_hw_dpio.c b/drivers/bus/fslmc/portal/dpaa2_hw_dpio.c
index 1a1e437ed1..227582c8d9 100644
--- a/drivers/bus/fslmc/portal/dpaa2_hw_dpio.c
+++ b/drivers/bus/fslmc/portal/dpaa2_hw_dpio.c
@@ -176,7 +176,7 @@ static int dpaa2_dpio_intr_init(struct dpaa2_dpio_dev *dpio_dev)
int threshold = 0x3, timeout = 0xFF;
dpio_epoll_fd = epoll_create(1);
- ret = rte_dpaa2_intr_enable(&dpio_dev->intr_handle, 0);
+ ret = rte_dpaa2_intr_enable(dpio_dev->intr_handle, 0);
if (ret) {
DPAA2_BUS_ERR("Interrupt registeration failed");
return -1;
@@ -195,7 +195,7 @@ static int dpaa2_dpio_intr_init(struct dpaa2_dpio_dev *dpio_dev)
qbman_swp_dqrr_thrshld_write(dpio_dev->sw_portal, threshold);
qbman_swp_intr_timeout_write(dpio_dev->sw_portal, timeout);
- eventfd = dpio_dev->intr_handle.fd;
+ eventfd = rte_intr_fd_get(dpio_dev->intr_handle);
epoll_ev.events = EPOLLIN | EPOLLPRI | EPOLLET;
epoll_ev.data.fd = eventfd;
@@ -213,7 +213,7 @@ static void dpaa2_dpio_intr_deinit(struct dpaa2_dpio_dev *dpio_dev)
{
int ret;
- ret = rte_dpaa2_intr_disable(&dpio_dev->intr_handle, 0);
+ ret = rte_dpaa2_intr_disable(dpio_dev->intr_handle, 0);
if (ret)
DPAA2_BUS_ERR("DPIO interrupt disable failed");
@@ -388,6 +388,14 @@ dpaa2_create_dpio_device(int vdev_fd,
/* Using single portal for all devices */
dpio_dev->mc_portal = dpaa2_get_mcp_ptr(MC_PORTAL_INDEX);
+ /* Allocate interrupt instance */
+ dpio_dev->intr_handle =
+ rte_intr_instance_alloc(RTE_INTR_ALLOC_DPDK_ALLOCATOR);
+ if (!dpio_dev->intr_handle) {
+ DPAA2_BUS_ERR("Failed to allocate intr handle");
+ goto err;
+ }
+
dpio_dev->dpio = rte_zmalloc(NULL, sizeof(struct fsl_mc_io),
RTE_CACHE_LINE_SIZE);
if (!dpio_dev->dpio) {
@@ -490,7 +498,7 @@ dpaa2_create_dpio_device(int vdev_fd,
io_space_count++;
dpio_dev->index = io_space_count;
- if (rte_dpaa2_vfio_setup_intr(&dpio_dev->intr_handle, vdev_fd, 1)) {
+ if (rte_dpaa2_vfio_setup_intr(dpio_dev->intr_handle, vdev_fd, 1)) {
DPAA2_BUS_ERR("Fail to setup interrupt for %d",
dpio_dev->hw_id);
goto err;
@@ -538,6 +546,8 @@ dpaa2_create_dpio_device(int vdev_fd,
rte_free(dpio_dev->dpio);
}
+ if (dpio_dev->intr_handle)
+ rte_intr_instance_free(dpio_dev->intr_handle);
rte_free(dpio_dev);
/* For each element in the list, cleanup */
@@ -549,6 +559,8 @@ dpaa2_create_dpio_device(int vdev_fd,
dpio_dev->token);
rte_free(dpio_dev->dpio);
}
+ if (dpio_dev->intr_handle)
+ rte_intr_instance_free(dpio_dev->intr_handle);
rte_free(dpio_dev);
}
diff --git a/drivers/bus/fslmc/portal/dpaa2_hw_pvt.h b/drivers/bus/fslmc/portal/dpaa2_hw_pvt.h
index 037c841ef5..b1bba1ac36 100644
--- a/drivers/bus/fslmc/portal/dpaa2_hw_pvt.h
+++ b/drivers/bus/fslmc/portal/dpaa2_hw_pvt.h
@@ -116,7 +116,7 @@ struct dpaa2_dpio_dev {
uintptr_t qbman_portal_ci_paddr;
/**< Physical address of Cache Inhibit Area */
uintptr_t ci_size; /**< Size of the CI region */
- struct rte_intr_handle intr_handle; /* Interrupt related info */
+ struct rte_intr_handle *intr_handle; /* Interrupt related info */
int32_t epoll_fd; /**< File descriptor created for interrupt polling */
int32_t hw_id; /**< An unique ID of this DPIO device instance */
struct dpaa2_portal_dqrr dpaa2_held_bufs;
diff --git a/drivers/bus/fslmc/rte_fslmc.h b/drivers/bus/fslmc/rte_fslmc.h
index a71cac7a9f..729f360646 100644
--- a/drivers/bus/fslmc/rte_fslmc.h
+++ b/drivers/bus/fslmc/rte_fslmc.h
@@ -122,7 +122,7 @@ struct rte_dpaa2_device {
};
enum rte_dpaa2_dev_type dev_type; /**< Device Type */
uint16_t object_id; /**< DPAA2 Object ID */
- struct rte_intr_handle intr_handle; /**< Interrupt handle */
+ struct rte_intr_handle *intr_handle; /**< Interrupt handle */
struct rte_dpaa2_driver *driver; /**< Associated driver */
char name[FSLMC_OBJECT_MAX_LEN]; /**< DPAA2 Object name*/
};
diff --git a/drivers/bus/ifpga/ifpga_bus.c b/drivers/bus/ifpga/ifpga_bus.c
index 62887da2d8..80078ce736 100644
--- a/drivers/bus/ifpga/ifpga_bus.c
+++ b/drivers/bus/ifpga/ifpga_bus.c
@@ -161,6 +161,14 @@ ifpga_scan_one(struct rte_rawdev *rawdev,
afu_dev->id.uuid.uuid_high = 0;
afu_dev->id.port = afu_pr_conf.afu_id.port;
+ /* Allocate interrupt instance */
+ afu_dev->intr_handle =
+ rte_intr_instance_alloc(RTE_INTR_ALLOC_TRAD_HEAP);
+ if (!afu_dev->intr_handle) {
+ IFPGA_BUS_ERR("Failed to allocate intr handle");
+ goto end;
+ }
+
if (rawdev->dev_ops && rawdev->dev_ops->dev_info_get)
rawdev->dev_ops->dev_info_get(rawdev, afu_dev, sizeof(*afu_dev));
@@ -189,8 +197,11 @@ ifpga_scan_one(struct rte_rawdev *rawdev,
rte_kvargs_free(kvlist);
if (path)
free(path);
- if (afu_dev)
+ if (afu_dev) {
+ if (afu_dev->intr_handle)
+ rte_intr_instance_free(afu_dev->intr_handle);
free(afu_dev);
+ }
return NULL;
}
@@ -396,6 +407,8 @@ ifpga_unplug(struct rte_device *dev)
TAILQ_REMOVE(&ifpga_afu_dev_list, afu_dev, next);
rte_devargs_remove(dev->devargs);
+ if (afu_dev->intr_handle)
+ rte_intr_instance_free(afu_dev->intr_handle);
free(afu_dev);
return 0;
diff --git a/drivers/bus/ifpga/rte_bus_ifpga.h b/drivers/bus/ifpga/rte_bus_ifpga.h
index a85e90d384..007ad19875 100644
--- a/drivers/bus/ifpga/rte_bus_ifpga.h
+++ b/drivers/bus/ifpga/rte_bus_ifpga.h
@@ -79,7 +79,7 @@ struct rte_afu_device {
struct rte_mem_resource mem_resource[PCI_MAX_RESOURCE];
/**< AFU Memory Resource */
struct rte_afu_shared shared;
- struct rte_intr_handle intr_handle; /**< Interrupt handle */
+ struct rte_intr_handle *intr_handle; /**< Interrupt handle */
struct rte_afu_driver *driver; /**< Associated driver */
char path[IFPGA_BUS_BITSTREAM_PATH_MAX_LEN];
} __rte_packed;
diff --git a/drivers/bus/pci/bsd/pci.c b/drivers/bus/pci/bsd/pci.c
index d189bff311..1a46553be0 100644
--- a/drivers/bus/pci/bsd/pci.c
+++ b/drivers/bus/pci/bsd/pci.c
@@ -95,10 +95,11 @@ pci_uio_free_resource(struct rte_pci_device *dev,
{
rte_free(uio_res);
- if (dev->intr_handle.fd) {
- close(dev->intr_handle.fd);
- dev->intr_handle.fd = -1;
- dev->intr_handle.type = RTE_INTR_HANDLE_UNKNOWN;
+ if (rte_intr_fd_get(dev->intr_handle)) {
+ close(rte_intr_fd_get(dev->intr_handle));
+ rte_intr_fd_set(dev->intr_handle, -1);
+ rte_intr_type_set(dev->intr_handle,
+ RTE_INTR_HANDLE_UNKNOWN);
}
}
@@ -121,13 +122,19 @@ pci_uio_alloc_resource(struct rte_pci_device *dev,
}
/* save fd if in primary process */
- dev->intr_handle.fd = open(devname, O_RDWR);
- if (dev->intr_handle.fd < 0) {
+ if (rte_intr_fd_set(dev->intr_handle, open(devname, O_RDWR))) {
+ RTE_LOG(WARNING, EAL, "Failed to save fd");
+ goto error;
+ }
+
+ if (rte_intr_fd_get(dev->intr_handle) < 0) {
RTE_LOG(ERR, EAL, "Cannot open %s: %s\n",
devname, strerror(errno));
goto error;
}
- dev->intr_handle.type = RTE_INTR_HANDLE_UIO;
+
+ if (rte_intr_type_set(dev->intr_handle, RTE_INTR_HANDLE_UIO))
+ goto error;
/* allocate the mapping details for secondary processes*/
*uio_res = rte_zmalloc("UIO_RES", sizeof(**uio_res), 0);
diff --git a/drivers/bus/pci/linux/pci.c b/drivers/bus/pci/linux/pci.c
index 4d261b55ee..e521459870 100644
--- a/drivers/bus/pci/linux/pci.c
+++ b/drivers/bus/pci/linux/pci.c
@@ -645,7 +645,7 @@ int rte_pci_read_config(const struct rte_pci_device *device,
void *buf, size_t len, off_t offset)
{
char devname[RTE_DEV_NAME_MAX_LEN] = "";
- const struct rte_intr_handle *intr_handle = &device->intr_handle;
+ const struct rte_intr_handle *intr_handle = device->intr_handle;
switch (device->kdrv) {
case RTE_PCI_KDRV_IGB_UIO:
@@ -669,7 +669,7 @@ int rte_pci_write_config(const struct rte_pci_device *device,
const void *buf, size_t len, off_t offset)
{
char devname[RTE_DEV_NAME_MAX_LEN] = "";
- const struct rte_intr_handle *intr_handle = &device->intr_handle;
+ const struct rte_intr_handle *intr_handle = device->intr_handle;
switch (device->kdrv) {
case RTE_PCI_KDRV_IGB_UIO:
diff --git a/drivers/bus/pci/linux/pci_uio.c b/drivers/bus/pci/linux/pci_uio.c
index 39ebeac2a0..5aaf604aa4 100644
--- a/drivers/bus/pci/linux/pci_uio.c
+++ b/drivers/bus/pci/linux/pci_uio.c
@@ -35,14 +35,18 @@ int
pci_uio_read_config(const struct rte_intr_handle *intr_handle,
void *buf, size_t len, off_t offset)
{
- return pread(intr_handle->uio_cfg_fd, buf, len, offset);
+ int uio_cfg_fd = rte_intr_dev_fd_get(intr_handle);
+
+ return pread(uio_cfg_fd, buf, len, offset);
}
int
pci_uio_write_config(const struct rte_intr_handle *intr_handle,
const void *buf, size_t len, off_t offset)
{
- return pwrite(intr_handle->uio_cfg_fd, buf, len, offset);
+ int uio_cfg_fd = rte_intr_dev_fd_get(intr_handle);
+
+ return pwrite(uio_cfg_fd, buf, len, offset);
}
static int
@@ -198,16 +202,20 @@ void
pci_uio_free_resource(struct rte_pci_device *dev,
struct mapped_pci_resource *uio_res)
{
+ int uio_cfg_fd = rte_intr_dev_fd_get(dev->intr_handle);
+
rte_free(uio_res);
- if (dev->intr_handle.uio_cfg_fd >= 0) {
- close(dev->intr_handle.uio_cfg_fd);
- dev->intr_handle.uio_cfg_fd = -1;
+ if (uio_cfg_fd >= 0) {
+ close(uio_cfg_fd);
+ rte_intr_dev_fd_set(dev->intr_handle, -1);
}
- if (dev->intr_handle.fd >= 0) {
- close(dev->intr_handle.fd);
- dev->intr_handle.fd = -1;
- dev->intr_handle.type = RTE_INTR_HANDLE_UNKNOWN;
+
+ if (rte_intr_fd_get(dev->intr_handle) >= 0) {
+ close(rte_intr_fd_get(dev->intr_handle));
+ rte_intr_fd_set(dev->intr_handle, -1);
+ rte_intr_type_set(dev->intr_handle,
+ RTE_INTR_HANDLE_UNKNOWN);
}
}
@@ -218,7 +226,7 @@ pci_uio_alloc_resource(struct rte_pci_device *dev,
char dirname[PATH_MAX];
char cfgname[PATH_MAX];
char devname[PATH_MAX]; /* contains the /dev/uioX */
- int uio_num;
+ int uio_num, fd, uio_cfg_fd;
struct rte_pci_addr *loc;
loc = &dev->addr;
@@ -233,29 +241,40 @@ pci_uio_alloc_resource(struct rte_pci_device *dev,
snprintf(devname, sizeof(devname), "/dev/uio%u", uio_num);
/* save fd if in primary process */
- dev->intr_handle.fd = open(devname, O_RDWR);
- if (dev->intr_handle.fd < 0) {
+ fd = open(devname, O_RDWR);
+ if (fd < 0) {
RTE_LOG(ERR, EAL, "Cannot open %s: %s\n",
devname, strerror(errno));
goto error;
}
+ if (rte_intr_fd_set(dev->intr_handle, fd))
+ goto error;
+
snprintf(cfgname, sizeof(cfgname),
"/sys/class/uio/uio%u/device/config", uio_num);
- dev->intr_handle.uio_cfg_fd = open(cfgname, O_RDWR);
- if (dev->intr_handle.uio_cfg_fd < 0) {
+
+ uio_cfg_fd = open(cfgname, O_RDWR);
+ if (uio_cfg_fd < 0) {
RTE_LOG(ERR, EAL, "Cannot open %s: %s\n",
cfgname, strerror(errno));
goto error;
}
- if (dev->kdrv == RTE_PCI_KDRV_IGB_UIO)
- dev->intr_handle.type = RTE_INTR_HANDLE_UIO;
- else {
- dev->intr_handle.type = RTE_INTR_HANDLE_UIO_INTX;
+ if (rte_intr_dev_fd_set(dev->intr_handle, uio_cfg_fd))
+ goto error;
+
+ if (dev->kdrv == RTE_PCI_KDRV_IGB_UIO) {
+ if (rte_intr_type_set(dev->intr_handle,
+ RTE_INTR_HANDLE_UIO))
+ goto error;
+ } else {
+ if (rte_intr_type_set(dev->intr_handle,
+ RTE_INTR_HANDLE_UIO_INTX))
+ goto error;
/* set bus master that is not done by uio_pci_generic */
- if (pci_uio_set_bus_master(dev->intr_handle.uio_cfg_fd)) {
+ if (pci_uio_set_bus_master(uio_cfg_fd)) {
RTE_LOG(ERR, EAL, "Cannot set up bus mastering!\n");
goto error;
}
@@ -381,7 +400,7 @@ pci_uio_ioport_map(struct rte_pci_device *dev, int bar,
char buf[BUFSIZ];
uint64_t phys_addr, end_addr, flags;
unsigned long base;
- int i;
+ int i, fd;
/* open and read addresses of the corresponding resource in sysfs */
snprintf(filename, sizeof(filename), "%s/" PCI_PRI_FMT "/resource",
@@ -427,7 +446,8 @@ pci_uio_ioport_map(struct rte_pci_device *dev, int bar,
}
/* FIXME only for primary process ? */
- if (dev->intr_handle.type == RTE_INTR_HANDLE_UNKNOWN) {
+ if (rte_intr_type_get(dev->intr_handle) ==
+ RTE_INTR_HANDLE_UNKNOWN) {
int uio_num = pci_get_uio_dev(dev, dirname, sizeof(dirname), 0);
if (uio_num < 0) {
RTE_LOG(ERR, EAL, "cannot open %s: %s\n",
@@ -436,13 +456,18 @@ pci_uio_ioport_map(struct rte_pci_device *dev, int bar,
}
snprintf(filename, sizeof(filename), "/dev/uio%u", uio_num);
- dev->intr_handle.fd = open(filename, O_RDWR);
- if (dev->intr_handle.fd < 0) {
+ fd = open(filename, O_RDWR);
+ if (fd < 0) {
RTE_LOG(ERR, EAL, "Cannot open %s: %s\n",
filename, strerror(errno));
goto error;
}
- dev->intr_handle.type = RTE_INTR_HANDLE_UIO;
+ if (rte_intr_fd_set(dev->intr_handle, fd))
+ goto error;
+
+ if (rte_intr_type_set(dev->intr_handle,
+ RTE_INTR_HANDLE_UIO))
+ goto error;
}
RTE_LOG(DEBUG, EAL, "PCI Port IO found start=0x%lx\n", base);
diff --git a/drivers/bus/pci/linux/pci_vfio.c b/drivers/bus/pci/linux/pci_vfio.c
index a024269140..c8da3e2fe8 100644
--- a/drivers/bus/pci/linux/pci_vfio.c
+++ b/drivers/bus/pci/linux/pci_vfio.c
@@ -47,7 +47,9 @@ int
pci_vfio_read_config(const struct rte_intr_handle *intr_handle,
void *buf, size_t len, off_t offs)
{
- return pread64(intr_handle->vfio_dev_fd, buf, len,
+ int vfio_dev_fd = rte_intr_dev_fd_get(intr_handle);
+
+ return pread64(vfio_dev_fd, buf, len,
VFIO_GET_REGION_ADDR(VFIO_PCI_CONFIG_REGION_INDEX) + offs);
}
@@ -55,7 +57,9 @@ int
pci_vfio_write_config(const struct rte_intr_handle *intr_handle,
const void *buf, size_t len, off_t offs)
{
- return pwrite64(intr_handle->vfio_dev_fd, buf, len,
+ int vfio_dev_fd = rte_intr_dev_fd_get(intr_handle);
+
+ return pwrite64(vfio_dev_fd, buf, len,
VFIO_GET_REGION_ADDR(VFIO_PCI_CONFIG_REGION_INDEX) + offs);
}
@@ -281,21 +285,27 @@ pci_vfio_setup_interrupts(struct rte_pci_device *dev, int vfio_dev_fd)
return -1;
}
- dev->intr_handle.fd = fd;
- dev->intr_handle.vfio_dev_fd = vfio_dev_fd;
+ if (rte_intr_fd_set(dev->intr_handle, fd))
+ return -1;
+
+ if (rte_intr_dev_fd_set(dev->intr_handle, vfio_dev_fd))
+ return -1;
switch (i) {
case VFIO_PCI_MSIX_IRQ_INDEX:
intr_mode = RTE_INTR_MODE_MSIX;
- dev->intr_handle.type = RTE_INTR_HANDLE_VFIO_MSIX;
+ rte_intr_type_set(dev->intr_handle,
+ RTE_INTR_HANDLE_VFIO_MSIX);
break;
case VFIO_PCI_MSI_IRQ_INDEX:
intr_mode = RTE_INTR_MODE_MSI;
- dev->intr_handle.type = RTE_INTR_HANDLE_VFIO_MSI;
+ rte_intr_type_set(dev->intr_handle,
+ RTE_INTR_HANDLE_VFIO_MSI);
break;
case VFIO_PCI_INTX_IRQ_INDEX:
intr_mode = RTE_INTR_MODE_LEGACY;
- dev->intr_handle.type = RTE_INTR_HANDLE_VFIO_LEGACY;
+ rte_intr_type_set(dev->intr_handle,
+ RTE_INTR_HANDLE_VFIO_LEGACY);
break;
default:
RTE_LOG(ERR, EAL, "Unknown interrupt type!\n");
@@ -362,11 +372,18 @@ pci_vfio_enable_notifier(struct rte_pci_device *dev, int vfio_dev_fd)
return -1;
}
- dev->vfio_req_intr_handle.fd = fd;
- dev->vfio_req_intr_handle.type = RTE_INTR_HANDLE_VFIO_REQ;
- dev->vfio_req_intr_handle.vfio_dev_fd = vfio_dev_fd;
+ if (rte_intr_fd_set(dev->vfio_req_intr_handle, fd))
+ return -1;
+
+ if (rte_intr_type_set(dev->vfio_req_intr_handle,
+ RTE_INTR_HANDLE_VFIO_REQ))
+ return -1;
+
+ if (rte_intr_dev_fd_set(dev->vfio_req_intr_handle, vfio_dev_fd))
+ return -1;
+
- ret = rte_intr_callback_register(&dev->vfio_req_intr_handle,
+ ret = rte_intr_callback_register(dev->vfio_req_intr_handle,
pci_vfio_req_handler,
(void *)&dev->device);
if (ret) {
@@ -374,10 +391,10 @@ pci_vfio_enable_notifier(struct rte_pci_device *dev, int vfio_dev_fd)
goto error;
}
- ret = rte_intr_enable(&dev->vfio_req_intr_handle);
+ ret = rte_intr_enable(dev->vfio_req_intr_handle);
if (ret) {
RTE_LOG(ERR, EAL, "Fail to enable req notifier.\n");
- ret = rte_intr_callback_unregister(&dev->vfio_req_intr_handle,
+ ret = rte_intr_callback_unregister(dev->vfio_req_intr_handle,
pci_vfio_req_handler,
(void *)&dev->device);
if (ret < 0)
@@ -390,9 +407,10 @@ pci_vfio_enable_notifier(struct rte_pci_device *dev, int vfio_dev_fd)
error:
close(fd);
- dev->vfio_req_intr_handle.fd = -1;
- dev->vfio_req_intr_handle.type = RTE_INTR_HANDLE_UNKNOWN;
- dev->vfio_req_intr_handle.vfio_dev_fd = -1;
+ rte_intr_fd_set(dev->vfio_req_intr_handle, -1);
+ rte_intr_type_set(dev->vfio_req_intr_handle,
+ RTE_INTR_HANDLE_UNKNOWN);
+ rte_intr_dev_fd_set(dev->vfio_req_intr_handle, -1);
return -1;
}
@@ -403,13 +421,13 @@ pci_vfio_disable_notifier(struct rte_pci_device *dev)
{
int ret;
- ret = rte_intr_disable(&dev->vfio_req_intr_handle);
+ ret = rte_intr_disable(dev->vfio_req_intr_handle);
if (ret) {
RTE_LOG(ERR, EAL, "fail to disable req notifier.\n");
return -1;
}
- ret = rte_intr_callback_unregister_sync(&dev->vfio_req_intr_handle,
+ ret = rte_intr_callback_unregister_sync(dev->vfio_req_intr_handle,
pci_vfio_req_handler,
(void *)&dev->device);
if (ret < 0) {
@@ -418,11 +436,12 @@ pci_vfio_disable_notifier(struct rte_pci_device *dev)
return -1;
}
- close(dev->vfio_req_intr_handle.fd);
+ close(rte_intr_fd_get(dev->vfio_req_intr_handle));
- dev->vfio_req_intr_handle.fd = -1;
- dev->vfio_req_intr_handle.type = RTE_INTR_HANDLE_UNKNOWN;
- dev->vfio_req_intr_handle.vfio_dev_fd = -1;
+ rte_intr_fd_set(dev->vfio_req_intr_handle, -1);
+ rte_intr_type_set(dev->vfio_req_intr_handle,
+ RTE_INTR_HANDLE_UNKNOWN);
+ rte_intr_dev_fd_set(dev->vfio_req_intr_handle, -1);
return 0;
}
@@ -705,9 +724,13 @@ pci_vfio_map_resource_primary(struct rte_pci_device *dev)
struct pci_map *maps;
- dev->intr_handle.fd = -1;
+ if (rte_intr_fd_set(dev->intr_handle, -1))
+ return -1;
+
#ifdef HAVE_VFIO_DEV_REQ_INTERFACE
- dev->vfio_req_intr_handle.fd = -1;
+ if (rte_intr_fd_set(dev->vfio_req_intr_handle, -1))
+ return -1;
+
#endif
/* store PCI address string */
@@ -854,9 +877,12 @@ pci_vfio_map_resource_secondary(struct rte_pci_device *dev)
struct pci_map *maps;
- dev->intr_handle.fd = -1;
+ if (rte_intr_fd_set(dev->intr_handle, -1))
+ return -1;
+
#ifdef HAVE_VFIO_DEV_REQ_INTERFACE
- dev->vfio_req_intr_handle.fd = -1;
+ if (rte_intr_fd_set(dev->vfio_req_intr_handle, -1))
+ return -1;
#endif
/* store PCI address string */
@@ -897,9 +923,11 @@ pci_vfio_map_resource_secondary(struct rte_pci_device *dev)
}
/* we need save vfio_dev_fd, so it can be used during release */
- dev->intr_handle.vfio_dev_fd = vfio_dev_fd;
+ if (rte_intr_dev_fd_set(dev->intr_handle, vfio_dev_fd))
+ goto err_vfio_dev_fd;
#ifdef HAVE_VFIO_DEV_REQ_INTERFACE
- dev->vfio_req_intr_handle.vfio_dev_fd = vfio_dev_fd;
+ if (rte_intr_dev_fd_set(dev->vfio_req_intr_handle, vfio_dev_fd))
+ goto err_vfio_dev_fd;
#endif
return 0;
@@ -968,7 +996,7 @@ pci_vfio_unmap_resource_primary(struct rte_pci_device *dev)
struct rte_pci_addr *loc = &dev->addr;
struct mapped_pci_resource *vfio_res = NULL;
struct mapped_pci_res_list *vfio_res_list;
- int ret;
+ int ret, vfio_dev_fd;
/* store PCI address string */
snprintf(pci_addr, sizeof(pci_addr), PCI_PRI_FMT,
@@ -982,20 +1010,21 @@ pci_vfio_unmap_resource_primary(struct rte_pci_device *dev)
}
#endif
- if (close(dev->intr_handle.fd) < 0) {
+ if (close(rte_intr_fd_get(dev->intr_handle)) < 0) {
RTE_LOG(INFO, EAL, "Error when closing eventfd file descriptor for %s\n",
pci_addr);
return -1;
}
- if (pci_vfio_set_bus_master(dev->intr_handle.vfio_dev_fd, false)) {
+ vfio_dev_fd = rte_intr_dev_fd_get(dev->intr_handle);
+ if (pci_vfio_set_bus_master(vfio_dev_fd, false)) {
RTE_LOG(ERR, EAL, "%s cannot unset bus mastering for PCI device!\n",
pci_addr);
return -1;
}
ret = rte_vfio_release_device(rte_pci_get_sysfs_path(), pci_addr,
- dev->intr_handle.vfio_dev_fd);
+ vfio_dev_fd);
if (ret < 0) {
RTE_LOG(ERR, EAL, "Cannot release VFIO device\n");
return ret;
@@ -1024,14 +1053,15 @@ pci_vfio_unmap_resource_secondary(struct rte_pci_device *dev)
struct rte_pci_addr *loc = &dev->addr;
struct mapped_pci_resource *vfio_res = NULL;
struct mapped_pci_res_list *vfio_res_list;
- int ret;
+ int ret, vfio_dev_fd;
/* store PCI address string */
snprintf(pci_addr, sizeof(pci_addr), PCI_PRI_FMT,
loc->domain, loc->bus, loc->devid, loc->function);
+ vfio_dev_fd = rte_intr_dev_fd_get(dev->intr_handle);
ret = rte_vfio_release_device(rte_pci_get_sysfs_path(), pci_addr,
- dev->intr_handle.vfio_dev_fd);
+ vfio_dev_fd);
if (ret < 0) {
RTE_LOG(ERR, EAL, "Cannot release VFIO device\n");
return ret;
@@ -1079,9 +1109,10 @@ void
pci_vfio_ioport_read(struct rte_pci_ioport *p,
void *data, size_t len, off_t offset)
{
- const struct rte_intr_handle *intr_handle = &p->dev->intr_handle;
+ const struct rte_intr_handle *intr_handle = p->dev->intr_handle;
+ int vfio_dev_fd = rte_intr_dev_fd_get(intr_handle);
- if (pread64(intr_handle->vfio_dev_fd, data,
+ if (pread64(vfio_dev_fd, data,
len, p->base + offset) <= 0)
RTE_LOG(ERR, EAL,
"Can't read from PCI bar (%" PRIu64 ") : offset (%x)\n",
@@ -1092,9 +1123,10 @@ void
pci_vfio_ioport_write(struct rte_pci_ioport *p,
const void *data, size_t len, off_t offset)
{
- const struct rte_intr_handle *intr_handle = &p->dev->intr_handle;
+ const struct rte_intr_handle *intr_handle = p->dev->intr_handle;
+ int vfio_dev_fd = rte_intr_dev_fd_get(intr_handle);
- if (pwrite64(intr_handle->vfio_dev_fd, data,
+ if (pwrite64(vfio_dev_fd, data,
len, p->base + offset) <= 0)
RTE_LOG(ERR, EAL,
"Can't write to PCI bar (%" PRIu64 ") : offset (%x)\n",
diff --git a/drivers/bus/pci/pci_common.c b/drivers/bus/pci/pci_common.c
index 3406e03b29..3a7409d0fa 100644
--- a/drivers/bus/pci/pci_common.c
+++ b/drivers/bus/pci/pci_common.c
@@ -230,6 +230,24 @@ rte_pci_probe_one_driver(struct rte_pci_driver *dr,
}
if (!already_probed && (dr->drv_flags & RTE_PCI_DRV_NEED_MAPPING)) {
+ /* Allocate interrupt instance for pci device */
+ dev->intr_handle =
+ rte_intr_instance_alloc(RTE_INTR_ALLOC_TRAD_HEAP);
+ if (!dev->intr_handle) {
+ RTE_LOG(ERR, EAL,
+ "Failed to create interrupt instance for %s\n",
+ dev->device.name);
+ return -ENOMEM;
+ }
+
+ dev->vfio_req_intr_handle =
+ rte_intr_instance_alloc(RTE_INTR_ALLOC_TRAD_HEAP);
+ if (!dev->vfio_req_intr_handle) {
+ RTE_LOG(ERR, EAL,
+ "Failed to create vfio req interrupt instance for %s\n",
+ dev->device.name);
+ return -ENOMEM;
+ }
/* map resources for devices that use igb_uio */
ret = rte_pci_map_device(dev);
if (ret != 0) {
@@ -253,8 +271,12 @@ rte_pci_probe_one_driver(struct rte_pci_driver *dr,
* driver needs mapped resources.
*/
!(ret > 0 &&
- (dr->drv_flags & RTE_PCI_DRV_KEEP_MAPPED_RES)))
+ (dr->drv_flags & RTE_PCI_DRV_KEEP_MAPPED_RES))) {
rte_pci_unmap_device(dev);
+ rte_intr_instance_free(dev->intr_handle);
+ rte_intr_instance_free(
+ dev->vfio_req_intr_handle);
+ }
} else {
dev->device.driver = &dr->driver;
}
@@ -296,9 +318,12 @@ rte_pci_detach_dev(struct rte_pci_device *dev)
dev->driver = NULL;
dev->device.driver = NULL;
- if (dr->drv_flags & RTE_PCI_DRV_NEED_MAPPING)
+ if (dr->drv_flags & RTE_PCI_DRV_NEED_MAPPING) {
/* unmap resources for devices that use igb_uio */
rte_pci_unmap_device(dev);
+ rte_intr_instance_free(dev->intr_handle);
+ rte_intr_instance_free(dev->vfio_req_intr_handle);
+ }
return 0;
}
diff --git a/drivers/bus/pci/pci_common_uio.c b/drivers/bus/pci/pci_common_uio.c
index 318f9a1d55..244c9a8940 100644
--- a/drivers/bus/pci/pci_common_uio.c
+++ b/drivers/bus/pci/pci_common_uio.c
@@ -90,8 +90,11 @@ pci_uio_map_resource(struct rte_pci_device *dev)
struct mapped_pci_res_list *uio_res_list =
RTE_TAILQ_CAST(rte_uio_tailq.head, mapped_pci_res_list);
- dev->intr_handle.fd = -1;
- dev->intr_handle.uio_cfg_fd = -1;
+ if (rte_intr_fd_set(dev->intr_handle, -1))
+ return -1;
+
+ if (rte_intr_dev_fd_set(dev->intr_handle, -1))
+ return -1;
/* secondary processes - use already recorded details */
if (rte_eal_process_type() != RTE_PROC_PRIMARY)
@@ -207,6 +210,7 @@ pci_uio_unmap_resource(struct rte_pci_device *dev)
struct mapped_pci_resource *uio_res;
struct mapped_pci_res_list *uio_res_list =
RTE_TAILQ_CAST(rte_uio_tailq.head, mapped_pci_res_list);
+ int uio_cfg_fd;
if (dev == NULL)
return;
@@ -229,12 +233,13 @@ pci_uio_unmap_resource(struct rte_pci_device *dev)
rte_free(uio_res);
/* close fd if in primary process */
- close(dev->intr_handle.fd);
- if (dev->intr_handle.uio_cfg_fd >= 0) {
- close(dev->intr_handle.uio_cfg_fd);
- dev->intr_handle.uio_cfg_fd = -1;
+ close(rte_intr_fd_get(dev->intr_handle));
+ uio_cfg_fd = rte_intr_dev_fd_get(dev->intr_handle);
+ if (uio_cfg_fd >= 0) {
+ close(uio_cfg_fd);
+ rte_intr_dev_fd_set(dev->intr_handle, -1);
}
- dev->intr_handle.fd = -1;
- dev->intr_handle.type = RTE_INTR_HANDLE_UNKNOWN;
+ rte_intr_fd_set(dev->intr_handle, -1);
+ rte_intr_type_set(dev->intr_handle, RTE_INTR_HANDLE_UNKNOWN);
}
diff --git a/drivers/bus/pci/rte_bus_pci.h b/drivers/bus/pci/rte_bus_pci.h
index 673a2850c1..1c6a8fdd7b 100644
--- a/drivers/bus/pci/rte_bus_pci.h
+++ b/drivers/bus/pci/rte_bus_pci.h
@@ -69,12 +69,12 @@ struct rte_pci_device {
struct rte_pci_id id; /**< PCI ID. */
struct rte_mem_resource mem_resource[PCI_MAX_RESOURCE];
/**< PCI Memory Resource */
- struct rte_intr_handle intr_handle; /**< Interrupt handle */
+ struct rte_intr_handle *intr_handle; /**< Interrupt handle */
struct rte_pci_driver *driver; /**< PCI driver used in probing */
uint16_t max_vfs; /**< sriov enable if not zero */
enum rte_pci_kernel_driver kdrv; /**< Kernel driver passthrough */
char name[PCI_PRI_STR_SIZE+1]; /**< PCI location (ASCII) */
- struct rte_intr_handle vfio_req_intr_handle;
+ struct rte_intr_handle *vfio_req_intr_handle;
/**< Handler of VFIO request interrupt */
};
diff --git a/drivers/bus/vmbus/linux/vmbus_bus.c b/drivers/bus/vmbus/linux/vmbus_bus.c
index 3c924eee14..aec85a8932 100644
--- a/drivers/bus/vmbus/linux/vmbus_bus.c
+++ b/drivers/bus/vmbus/linux/vmbus_bus.c
@@ -297,6 +297,11 @@ vmbus_scan_one(const char *name)
dev->device.devargs = vmbus_devargs_lookup(dev);
+ /* Allocate interrupt handle instance */
+ dev->intr_handle = rte_intr_instance_alloc(RTE_INTR_ALLOC_TRAD_HEAP);
+ if (!dev->intr_handle)
+ goto error;
+
/* device is valid, add in list (sorted) */
VMBUS_LOG(DEBUG, "Adding vmbus device %s", name);
diff --git a/drivers/bus/vmbus/linux/vmbus_uio.c b/drivers/bus/vmbus/linux/vmbus_uio.c
index b52ca5bf1d..c6efa0dadd 100644
--- a/drivers/bus/vmbus/linux/vmbus_uio.c
+++ b/drivers/bus/vmbus/linux/vmbus_uio.c
@@ -29,9 +29,11 @@ static void *vmbus_map_addr;
/* Control interrupts */
void vmbus_uio_irq_control(struct rte_vmbus_device *dev, int32_t onoff)
{
- if (write(dev->intr_handle.fd, &onoff, sizeof(onoff)) < 0) {
+ if (write(rte_intr_fd_get(dev->intr_handle), &onoff,
+ sizeof(onoff)) < 0) {
VMBUS_LOG(ERR, "cannot write to %d:%s",
- dev->intr_handle.fd, strerror(errno));
+ rte_intr_fd_get(dev->intr_handle),
+ strerror(errno));
}
}
@@ -40,7 +42,8 @@ int vmbus_uio_irq_read(struct rte_vmbus_device *dev)
int32_t count;
int cc;
- cc = read(dev->intr_handle.fd, &count, sizeof(count));
+ cc = read(rte_intr_fd_get(dev->intr_handle), &count,
+ sizeof(count));
if (cc < (int)sizeof(count)) {
if (cc < 0) {
VMBUS_LOG(ERR, "IRQ read failed %s",
@@ -60,15 +63,16 @@ vmbus_uio_free_resource(struct rte_vmbus_device *dev,
{
rte_free(uio_res);
- if (dev->intr_handle.uio_cfg_fd >= 0) {
- close(dev->intr_handle.uio_cfg_fd);
- dev->intr_handle.uio_cfg_fd = -1;
+ if (rte_intr_dev_fd_get(dev->intr_handle) >= 0) {
+ close(rte_intr_dev_fd_get(dev->intr_handle));
+ rte_intr_dev_fd_set(dev->intr_handle, -1);
}
- if (dev->intr_handle.fd >= 0) {
- close(dev->intr_handle.fd);
- dev->intr_handle.fd = -1;
- dev->intr_handle.type = RTE_INTR_HANDLE_UNKNOWN;
+ if (rte_intr_fd_get(dev->intr_handle) >= 0) {
+ close(rte_intr_fd_get(dev->intr_handle));
+ rte_intr_fd_set(dev->intr_handle, -1);
+ rte_intr_type_set(dev->intr_handle,
+ RTE_INTR_HANDLE_UNKNOWN);
}
}
@@ -77,16 +81,23 @@ vmbus_uio_alloc_resource(struct rte_vmbus_device *dev,
struct mapped_vmbus_resource **uio_res)
{
char devname[PATH_MAX]; /* contains the /dev/uioX */
+ int fd;
/* save fd if in primary process */
snprintf(devname, sizeof(devname), "/dev/uio%u", dev->uio_num);
- dev->intr_handle.fd = open(devname, O_RDWR);
- if (dev->intr_handle.fd < 0) {
+ fd = open(devname, O_RDWR);
+ if (fd < 0) {
VMBUS_LOG(ERR, "Cannot open %s: %s",
devname, strerror(errno));
goto error;
}
- dev->intr_handle.type = RTE_INTR_HANDLE_UIO_INTX;
+
+ if (rte_intr_fd_set(dev->intr_handle, fd))
+ goto error;
+
+ if (rte_intr_type_set(dev->intr_handle,
+ RTE_INTR_HANDLE_UIO_INTX))
+ goto error;
/* allocate the mapping details for secondary processes*/
*uio_res = rte_zmalloc("UIO_RES", sizeof(**uio_res), 0);
diff --git a/drivers/bus/vmbus/rte_bus_vmbus.h b/drivers/bus/vmbus/rte_bus_vmbus.h
index 6bcff66468..466d42d277 100644
--- a/drivers/bus/vmbus/rte_bus_vmbus.h
+++ b/drivers/bus/vmbus/rte_bus_vmbus.h
@@ -73,7 +73,7 @@ struct rte_vmbus_device {
struct vmbus_channel *primary; /**< VMBUS primary channel */
struct vmbus_mon_page *monitor_page; /**< VMBUS monitor page */
- struct rte_intr_handle intr_handle; /**< Interrupt handle */
+ struct rte_intr_handle *intr_handle; /**< Interrupt handle */
struct rte_mem_resource resource[VMBUS_MAX_RESOURCE];
};
diff --git a/drivers/bus/vmbus/vmbus_common_uio.c b/drivers/bus/vmbus/vmbus_common_uio.c
index 8582e32c1d..8206ebe422 100644
--- a/drivers/bus/vmbus/vmbus_common_uio.c
+++ b/drivers/bus/vmbus/vmbus_common_uio.c
@@ -149,9 +149,15 @@ vmbus_uio_map_resource(struct rte_vmbus_device *dev)
int ret;
/* TODO: handle rescind */
- dev->intr_handle.fd = -1;
- dev->intr_handle.uio_cfg_fd = -1;
- dev->intr_handle.type = RTE_INTR_HANDLE_UNKNOWN;
+ if (rte_intr_fd_set(dev->intr_handle, -1))
+ return -1;
+
+ if (rte_intr_dev_fd_set(dev->intr_handle, -1))
+ return -1;
+
+ if (rte_intr_type_set(dev->intr_handle,
+ RTE_INTR_HANDLE_UNKNOWN))
+ return -1;
/* secondary processes - use already recorded details */
if (rte_eal_process_type() != RTE_PROC_PRIMARY)
@@ -223,12 +229,12 @@ vmbus_uio_unmap_resource(struct rte_vmbus_device *dev)
rte_free(uio_res);
/* close fd if in primary process */
- close(dev->intr_handle.fd);
- if (dev->intr_handle.uio_cfg_fd >= 0) {
- close(dev->intr_handle.uio_cfg_fd);
- dev->intr_handle.uio_cfg_fd = -1;
+ close(rte_intr_fd_get(dev->intr_handle));
+ if (rte_intr_dev_fd_get(dev->intr_handle) >= 0) {
+ close(rte_intr_dev_fd_get(dev->intr_handle));
+ rte_intr_dev_fd_set(dev->intr_handle, -1);
}
- dev->intr_handle.fd = -1;
- dev->intr_handle.type = RTE_INTR_HANDLE_UNKNOWN;
+ rte_intr_fd_set(dev->intr_handle, -1);
+ rte_intr_type_set(dev->intr_handle, RTE_INTR_HANDLE_UNKNOWN);
}
diff --git a/drivers/common/cnxk/roc_cpt.c b/drivers/common/cnxk/roc_cpt.c
index 33524ef504..e212e8b6e2 100644
--- a/drivers/common/cnxk/roc_cpt.c
+++ b/drivers/common/cnxk/roc_cpt.c
@@ -62,7 +62,7 @@ cpt_lf_register_misc_irq(struct roc_cpt_lf *lf)
struct plt_intr_handle *handle;
int rc, vec;
- handle = &pci_dev->intr_handle;
+ handle = pci_dev->intr_handle;
vec = lf->msixoff + CPT_LF_INT_VEC_MISC;
/* Clear err interrupt */
@@ -82,7 +82,7 @@ cpt_lf_unregister_misc_irq(struct roc_cpt_lf *lf)
struct plt_intr_handle *handle;
int vec;
- handle = &pci_dev->intr_handle;
+ handle = pci_dev->intr_handle;
vec = lf->msixoff + CPT_LF_INT_VEC_MISC;
/* Clear err interrupt */
@@ -126,7 +126,7 @@ cpt_lf_register_done_irq(struct roc_cpt_lf *lf)
struct plt_intr_handle *handle;
int rc, vec;
- handle = &pci_dev->intr_handle;
+ handle = pci_dev->intr_handle;
vec = lf->msixoff + CPT_LF_INT_VEC_DONE;
@@ -149,7 +149,7 @@ cpt_lf_unregister_done_irq(struct roc_cpt_lf *lf)
struct plt_intr_handle *handle;
int vec;
- handle = &pci_dev->intr_handle;
+ handle = pci_dev->intr_handle;
vec = lf->msixoff + CPT_LF_INT_VEC_DONE;
diff --git a/drivers/common/cnxk/roc_dev.c b/drivers/common/cnxk/roc_dev.c
index 4e204373dc..394436e00f 100644
--- a/drivers/common/cnxk/roc_dev.c
+++ b/drivers/common/cnxk/roc_dev.c
@@ -641,7 +641,7 @@ roc_af_pf_mbox_irq(void *param)
static int
mbox_register_pf_irq(struct plt_pci_device *pci_dev, struct dev *dev)
{
- struct plt_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct plt_intr_handle *intr_handle = pci_dev->intr_handle;
int i, rc;
/* HW clear irq */
@@ -691,7 +691,7 @@ mbox_register_pf_irq(struct plt_pci_device *pci_dev, struct dev *dev)
static int
mbox_register_vf_irq(struct plt_pci_device *pci_dev, struct dev *dev)
{
- struct plt_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct plt_intr_handle *intr_handle = pci_dev->intr_handle;
int rc;
/* Clear irq */
@@ -724,7 +724,7 @@ mbox_register_irq(struct plt_pci_device *pci_dev, struct dev *dev)
static void
mbox_unregister_pf_irq(struct plt_pci_device *pci_dev, struct dev *dev)
{
- struct plt_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct plt_intr_handle *intr_handle = pci_dev->intr_handle;
int i;
/* HW clear irq */
@@ -755,7 +755,7 @@ mbox_unregister_pf_irq(struct plt_pci_device *pci_dev, struct dev *dev)
static void
mbox_unregister_vf_irq(struct plt_pci_device *pci_dev, struct dev *dev)
{
- struct plt_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct plt_intr_handle *intr_handle = pci_dev->intr_handle;
/* Clear irq */
plt_write64(~0ull, dev->bar2 + RVU_VF_INT_ENA_W1C);
@@ -839,7 +839,7 @@ roc_pf_vf_flr_irq(void *param)
static int
vf_flr_unregister_irqs(struct plt_pci_device *pci_dev, struct dev *dev)
{
- struct plt_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct plt_intr_handle *intr_handle = pci_dev->intr_handle;
int i;
plt_base_dbg("Unregister VF FLR interrupts for %s", pci_dev->name);
@@ -860,7 +860,7 @@ vf_flr_unregister_irqs(struct plt_pci_device *pci_dev, struct dev *dev)
static int
vf_flr_register_irqs(struct plt_pci_device *pci_dev, struct dev *dev)
{
- struct plt_intr_handle *handle = &pci_dev->intr_handle;
+ struct plt_intr_handle *handle = pci_dev->intr_handle;
int i, rc;
plt_base_dbg("Register VF FLR interrupts for %s", pci_dev->name);
@@ -1176,7 +1176,7 @@ dev_init(struct dev *dev, struct plt_pci_device *pci_dev)
int
dev_fini(struct dev *dev, struct plt_pci_device *pci_dev)
{
- struct plt_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct plt_intr_handle *intr_handle = pci_dev->intr_handle;
struct mbox *mbox;
/* Check if this dev hosts npalf and has 1+ refs */
diff --git a/drivers/common/cnxk/roc_irq.c b/drivers/common/cnxk/roc_irq.c
index 4c2b4c30d7..f3dfd15915 100644
--- a/drivers/common/cnxk/roc_irq.c
+++ b/drivers/common/cnxk/roc_irq.c
@@ -20,11 +20,12 @@ static int
irq_get_info(struct plt_intr_handle *intr_handle)
{
struct vfio_irq_info irq = {.argsz = sizeof(irq)};
- int rc;
+ int rc, vfio_dev_fd;
irq.index = VFIO_PCI_MSIX_IRQ_INDEX;
- rc = ioctl(intr_handle->vfio_dev_fd, VFIO_DEVICE_GET_IRQ_INFO, &irq);
+ vfio_dev_fd = plt_intr_dev_fd_get(intr_handle);
+ rc = ioctl(vfio_dev_fd, VFIO_DEVICE_GET_IRQ_INFO, &irq);
if (rc < 0) {
plt_err("Failed to get IRQ info rc=%d errno=%d", rc, errno);
return rc;
@@ -36,9 +37,11 @@ irq_get_info(struct plt_intr_handle *intr_handle)
if (irq.count > PLT_MAX_RXTX_INTR_VEC_ID) {
plt_err("HW max=%d > PLT_MAX_RXTX_INTR_VEC_ID: %d", irq.count,
PLT_MAX_RXTX_INTR_VEC_ID);
- intr_handle->max_intr = PLT_MAX_RXTX_INTR_VEC_ID;
+ plt_intr_max_intr_set(intr_handle,
+ PLT_MAX_RXTX_INTR_VEC_ID);
} else {
- intr_handle->max_intr = irq.count;
+ if (plt_intr_max_intr_set(intr_handle, irq.count))
+ return -1;
}
return 0;
@@ -49,12 +52,12 @@ irq_config(struct plt_intr_handle *intr_handle, unsigned int vec)
{
char irq_set_buf[MSIX_IRQ_SET_BUF_LEN];
struct vfio_irq_set *irq_set;
+ int len, rc, vfio_dev_fd;
int32_t *fd_ptr;
- int len, rc;
- if (vec > intr_handle->max_intr) {
+ if (vec > (uint32_t)plt_intr_max_intr_get(intr_handle)) {
plt_err("vector=%d greater than max_intr=%d", vec,
- intr_handle->max_intr);
+ plt_intr_max_intr_get(intr_handle));
return -EINVAL;
}
@@ -71,9 +74,10 @@ irq_config(struct plt_intr_handle *intr_handle, unsigned int vec)
/* Use vec fd to set interrupt vectors */
fd_ptr = (int32_t *)&irq_set->data[0];
- fd_ptr[0] = intr_handle->efds[vec];
+ fd_ptr[0] = plt_intr_efds_index_get(intr_handle, vec);
- rc = ioctl(intr_handle->vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
+ vfio_dev_fd = plt_intr_dev_fd_get(intr_handle);
+ rc = ioctl(vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
if (rc)
plt_err("Failed to set_irqs vector=0x%x rc=%d", vec, rc);
@@ -85,23 +89,25 @@ irq_init(struct plt_intr_handle *intr_handle)
{
char irq_set_buf[MSIX_IRQ_SET_BUF_LEN];
struct vfio_irq_set *irq_set;
+ int len, rc, vfio_dev_fd;
int32_t *fd_ptr;
- int len, rc;
uint32_t i;
- if (intr_handle->max_intr > PLT_MAX_RXTX_INTR_VEC_ID) {
+ if (plt_intr_max_intr_get(intr_handle) >
+ PLT_MAX_RXTX_INTR_VEC_ID) {
plt_err("Max_intr=%d greater than PLT_MAX_RXTX_INTR_VEC_ID=%d",
- intr_handle->max_intr, PLT_MAX_RXTX_INTR_VEC_ID);
+ plt_intr_max_intr_get(intr_handle),
+ PLT_MAX_RXTX_INTR_VEC_ID);
return -ERANGE;
}
len = sizeof(struct vfio_irq_set) +
- sizeof(int32_t) * intr_handle->max_intr;
+ sizeof(int32_t) * plt_intr_max_intr_get(intr_handle);
irq_set = (struct vfio_irq_set *)irq_set_buf;
irq_set->argsz = len;
irq_set->start = 0;
- irq_set->count = intr_handle->max_intr;
+ irq_set->count = plt_intr_max_intr_get(intr_handle);
irq_set->flags =
VFIO_IRQ_SET_DATA_EVENTFD | VFIO_IRQ_SET_ACTION_TRIGGER;
irq_set->index = VFIO_PCI_MSIX_IRQ_INDEX;
@@ -110,7 +116,8 @@ irq_init(struct plt_intr_handle *intr_handle)
for (i = 0; i < irq_set->count; i++)
fd_ptr[i] = -1;
- rc = ioctl(intr_handle->vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
+ vfio_dev_fd = plt_intr_dev_fd_get(intr_handle);
+ rc = ioctl(vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
if (rc)
plt_err("Failed to set irqs vector rc=%d", rc);
@@ -121,7 +128,7 @@ int
dev_irqs_disable(struct plt_intr_handle *intr_handle)
{
/* Clear max_intr to indicate re-init next time */
- intr_handle->max_intr = 0;
+ plt_intr_max_intr_set(intr_handle, 0);
return plt_intr_disable(intr_handle);
}
@@ -129,42 +136,50 @@ int
dev_irq_register(struct plt_intr_handle *intr_handle, plt_intr_callback_fn cb,
void *data, unsigned int vec)
{
- struct plt_intr_handle tmp_handle;
- int rc;
+ struct plt_intr_handle *tmp_handle;
+ uint32_t nb_efd, tmp_nb_efd;
+ int rc, fd;
/* If no max_intr read from VFIO */
- if (intr_handle->max_intr == 0) {
+ if (plt_intr_max_intr_get(intr_handle) == 0) {
irq_get_info(intr_handle);
irq_init(intr_handle);
}
- if (vec > intr_handle->max_intr) {
+ if (vec > (uint32_t)plt_intr_max_intr_get(intr_handle)) {
plt_err("Vector=%d greater than max_intr=%d", vec,
- intr_handle->max_intr);
+ plt_intr_max_intr_get(intr_handle));
return -EINVAL;
}
- tmp_handle = *intr_handle;
+ tmp_handle = intr_handle;
/* Create new eventfd for interrupt vector */
- tmp_handle.fd = eventfd(0, EFD_NONBLOCK | EFD_CLOEXEC);
- if (tmp_handle.fd == -1)
+ fd = eventfd(0, EFD_NONBLOCK | EFD_CLOEXEC);
+ if (fd == -1)
return -ENODEV;
+ if (plt_intr_fd_set(tmp_handle, fd))
+ return errno;
+
/* Register vector interrupt callback */
- rc = plt_intr_callback_register(&tmp_handle, cb, data);
+ rc = plt_intr_callback_register(tmp_handle, cb, data);
if (rc) {
plt_err("Failed to register vector:0x%x irq callback.", vec);
return rc;
}
- intr_handle->efds[vec] = tmp_handle.fd;
- intr_handle->nb_efd =
- (vec > intr_handle->nb_efd) ? vec : intr_handle->nb_efd;
- if ((intr_handle->nb_efd + 1) > intr_handle->max_intr)
- intr_handle->max_intr = intr_handle->nb_efd + 1;
+ plt_intr_efds_index_set(intr_handle, vec, fd);
+ nb_efd = (vec > (uint32_t)plt_intr_nb_efd_get(intr_handle)) ?
+ vec : (uint32_t)plt_intr_nb_efd_get(intr_handle);
+ plt_intr_nb_efd_set(intr_handle, nb_efd);
+
+ tmp_nb_efd = plt_intr_nb_efd_get(intr_handle) + 1;
+ if (tmp_nb_efd > (uint32_t)plt_intr_max_intr_get(intr_handle))
+ plt_intr_max_intr_set(intr_handle, tmp_nb_efd);
plt_base_dbg("Enable vector:0x%x for vfio (efds: %d, max:%d)", vec,
- intr_handle->nb_efd, intr_handle->max_intr);
+ plt_intr_nb_efd_get(intr_handle),
+ plt_intr_max_intr_get(intr_handle));
/* Enable MSIX vectors to VFIO */
return irq_config(intr_handle, vec);
@@ -174,24 +189,27 @@ void
dev_irq_unregister(struct plt_intr_handle *intr_handle, plt_intr_callback_fn cb,
void *data, unsigned int vec)
{
- struct plt_intr_handle tmp_handle;
+ struct plt_intr_handle *tmp_handle;
uint8_t retries = 5; /* 5 ms */
- int rc;
+ int rc, fd;
- if (vec > intr_handle->max_intr) {
+ if (vec > (uint32_t)plt_intr_max_intr_get(intr_handle)) {
plt_err("Error unregistering MSI-X interrupts vec:%d > %d", vec,
- intr_handle->max_intr);
+ plt_intr_max_intr_get(intr_handle));
return;
}
- tmp_handle = *intr_handle;
- tmp_handle.fd = intr_handle->efds[vec];
- if (tmp_handle.fd == -1)
+ tmp_handle = intr_handle;
+ fd = plt_intr_efds_index_get(intr_handle, vec);
+ if (fd == -1)
+ return;
+
+ if (plt_intr_fd_set(tmp_handle, fd))
return;
do {
/* Un-register callback func from platform lib */
- rc = plt_intr_callback_unregister(&tmp_handle, cb, data);
+ rc = plt_intr_callback_unregister(tmp_handle, cb, data);
/* Retry only if -EAGAIN */
if (rc != -EAGAIN)
break;
@@ -205,12 +223,14 @@ dev_irq_unregister(struct plt_intr_handle *intr_handle, plt_intr_callback_fn cb,
}
plt_base_dbg("Disable vector:0x%x for vfio (efds: %d, max:%d)", vec,
- intr_handle->nb_efd, intr_handle->max_intr);
+ plt_intr_nb_efd_get(intr_handle),
+ plt_intr_max_intr_get(intr_handle));
- if (intr_handle->efds[vec] != -1)
- close(intr_handle->efds[vec]);
+ if (plt_intr_efds_index_get(intr_handle, vec) != -1)
+ close(plt_intr_efds_index_get(intr_handle, vec));
/* Disable MSIX vectors from VFIO */
- intr_handle->efds[vec] = -1;
+ plt_intr_efds_index_set(intr_handle, vec, -1);
+
irq_config(intr_handle, vec);
}
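
For reference, the register path above boils down to one pattern once the handle is opaque: every field that used to be written directly goes through a plt_intr_*() setter, the temporary handle is no longer copied by value, and the per-vector eventfd ends up in the handle's efds[] slot. A minimal sketch of that flow, assuming only the accessors used in the hunk above (the helper name register_vec() is illustrative, error paths trimmed):

/* Sketch only -- mirrors dev_irq_register() above, not a drop-in. */
#include <errno.h>
#include <sys/eventfd.h>
#include <rte_interrupts.h>

static int
register_vec(struct rte_intr_handle *h, rte_intr_callback_fn cb,
	     void *data, unsigned int vec)
{
	int fd, rc;

	if (vec > (unsigned int)rte_intr_max_intr_get(h))
		return -EINVAL;

	/* One eventfd per MSI-X vector, stored inside the opaque handle */
	fd = eventfd(0, EFD_NONBLOCK | EFD_CLOEXEC);
	if (fd == -1)
		return -ENODEV;
	if (rte_intr_fd_set(h, fd))
		return -1;

	rc = rte_intr_callback_register(h, cb, data);
	if (rc)
		return rc;

	/* Remember which eventfd backs this vector */
	return rte_intr_efds_index_set(h, vec, fd);
}
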
diff --git a/drivers/common/cnxk/roc_nix_irq.c b/drivers/common/cnxk/roc_nix_irq.c
index 32be64a9d7..e9aa620abd 100644
--- a/drivers/common/cnxk/roc_nix_irq.c
+++ b/drivers/common/cnxk/roc_nix_irq.c
@@ -82,7 +82,7 @@ nix_lf_err_irq(void *param)
static int
nix_lf_register_err_irq(struct nix *nix)
{
- struct plt_intr_handle *handle = &nix->pci_dev->intr_handle;
+ struct plt_intr_handle *handle = nix->pci_dev->intr_handle;
int rc, vec;
vec = nix->msixoff + NIX_LF_INT_VEC_ERR_INT;
@@ -99,7 +99,7 @@ nix_lf_register_err_irq(struct nix *nix)
static void
nix_lf_unregister_err_irq(struct nix *nix)
{
- struct plt_intr_handle *handle = &nix->pci_dev->intr_handle;
+ struct plt_intr_handle *handle = nix->pci_dev->intr_handle;
int vec;
vec = nix->msixoff + NIX_LF_INT_VEC_ERR_INT;
@@ -131,7 +131,7 @@ nix_lf_ras_irq(void *param)
static int
nix_lf_register_ras_irq(struct nix *nix)
{
- struct plt_intr_handle *handle = &nix->pci_dev->intr_handle;
+ struct plt_intr_handle *handle = nix->pci_dev->intr_handle;
int rc, vec;
vec = nix->msixoff + NIX_LF_INT_VEC_POISON;
@@ -148,7 +148,7 @@ nix_lf_register_ras_irq(struct nix *nix)
static void
nix_lf_unregister_ras_irq(struct nix *nix)
{
- struct plt_intr_handle *handle = &nix->pci_dev->intr_handle;
+ struct plt_intr_handle *handle = nix->pci_dev->intr_handle;
int vec;
vec = nix->msixoff + NIX_LF_INT_VEC_POISON;
@@ -300,7 +300,7 @@ roc_nix_register_queue_irqs(struct roc_nix *roc_nix)
struct nix *nix;
nix = roc_nix_to_nix_priv(roc_nix);
- handle = &nix->pci_dev->intr_handle;
+ handle = nix->pci_dev->intr_handle;
/* Figure out max qintx required */
rqs = PLT_MIN(nix->qints, nix->nb_rx_queues);
@@ -352,7 +352,7 @@ roc_nix_unregister_queue_irqs(struct roc_nix *roc_nix)
int vec, q;
nix = roc_nix_to_nix_priv(roc_nix);
- handle = &nix->pci_dev->intr_handle;
+ handle = nix->pci_dev->intr_handle;
for (q = 0; q < nix->configured_qints; q++) {
vec = nix->msixoff + NIX_LF_INT_VEC_QINT_START + q;
@@ -382,7 +382,7 @@ roc_nix_register_cq_irqs(struct roc_nix *roc_nix)
struct nix *nix;
nix = roc_nix_to_nix_priv(roc_nix);
- handle = &nix->pci_dev->intr_handle;
+ handle = nix->pci_dev->intr_handle;
nix->configured_cints = PLT_MIN(nix->cints, nix->nb_rx_queues);
@@ -414,19 +414,19 @@ roc_nix_register_cq_irqs(struct roc_nix *roc_nix)
return rc;
}
- if (!handle->intr_vec) {
- handle->intr_vec = plt_zmalloc(
- nix->configured_cints * sizeof(int), 0);
- if (!handle->intr_vec) {
- plt_err("Failed to allocate %d rx intr_vec",
- nix->configured_cints);
- return -ENOMEM;
- }
+ rc = plt_intr_vec_list_alloc(handle, "cnxk",
+ nix->configured_cints);
+ if (rc) {
+ plt_err("Fail to allocate intr vec list, rc=%d",
+ rc);
+ return rc;
}
/* VFIO vector zero is reserved for misc interrupt so
* doing required adjustment. (b13bfab4cd)
*/
- handle->intr_vec[q] = PLT_INTR_VEC_RXTX_OFFSET + vec;
+ if (plt_intr_vec_list_index_set(handle, q,
+ PLT_INTR_VEC_RXTX_OFFSET + vec))
+ return -1;
/* Configure CQE interrupt coalescing parameters */
plt_write64(((CQ_CQE_THRESH_DEFAULT) |
@@ -450,7 +450,7 @@ roc_nix_unregister_cq_irqs(struct roc_nix *roc_nix)
int vec, q;
nix = roc_nix_to_nix_priv(roc_nix);
- handle = &nix->pci_dev->intr_handle;
+ handle = nix->pci_dev->intr_handle;
for (q = 0; q < nix->configured_cints; q++) {
vec = nix->msixoff + NIX_LF_INT_VEC_CINT_START + q;
@@ -465,6 +465,8 @@ roc_nix_unregister_cq_irqs(struct roc_nix *roc_nix)
dev_irq_unregister(handle, nix_lf_cq_irq, &nix->cints_mem[q],
vec);
}
+
+ plt_intr_vec_list_free(handle);
plt_free(nix->cints_mem);
}
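
The CQ IRQ hunks above drop the driver-owned intr_vec array in favour of the vector list kept inside the handle. A condensed sketch of that allocate/populate/free life cycle, using only the calls visible above (setup_cq_vectors()/teardown_cq_vectors() are illustrative names, and the "cnxk" tag and error conventions are taken from the hunk):

#include <errno.h>
#include <rte_errno.h>
#include <rte_interrupts.h>

static int
setup_cq_vectors(struct rte_intr_handle *handle, int nb_cints)
{
	int q;

	/* Vector list is now allocated inside the opaque handle */
	if (rte_intr_vec_list_alloc(handle, "cnxk", nb_cints))
		return -ENOMEM;

	for (q = 0; q < nb_cints; q++)
		/* Vector zero stays reserved for the misc interrupt */
		if (rte_intr_vec_list_index_set(handle, q,
				RTE_INTR_VEC_RXTX_OFFSET + q))
			return -rte_errno;
	return 0;
}

static void
teardown_cq_vectors(struct rte_intr_handle *handle)
{
	/* Replaces the old rte_free(handle->intr_vec) */
	rte_intr_vec_list_free(handle);
}
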
diff --git a/drivers/common/cnxk/roc_npa.c b/drivers/common/cnxk/roc_npa.c
index a0d2cc8f19..664240ab42 100644
--- a/drivers/common/cnxk/roc_npa.c
+++ b/drivers/common/cnxk/roc_npa.c
@@ -710,7 +710,7 @@ npa_lf_init(struct dev *dev, struct plt_pci_device *pci_dev)
lf->pf_func = dev->pf_func;
lf->npa_msixoff = npa_msixoff;
- lf->intr_handle = &pci_dev->intr_handle;
+ lf->intr_handle = pci_dev->intr_handle;
lf->pci_dev = pci_dev;
idev->npa_pf_func = dev->pf_func;
diff --git a/drivers/common/cnxk/roc_platform.h b/drivers/common/cnxk/roc_platform.h
index 285b24b82d..d3bed06ae9 100644
--- a/drivers/common/cnxk/roc_platform.h
+++ b/drivers/common/cnxk/roc_platform.h
@@ -101,6 +101,33 @@
#define plt_thread_is_intr rte_thread_is_intr
#define plt_intr_callback_fn rte_intr_callback_fn
+#define plt_intr_efd_counter_size_get rte_intr_efd_counter_size_get
+#define plt_intr_efd_counter_size_set rte_intr_efd_counter_size_set
+#define plt_intr_vec_list_index_get rte_intr_vec_list_index_get
+#define plt_intr_vec_list_index_set rte_intr_vec_list_index_set
+#define plt_intr_vec_list_alloc rte_intr_vec_list_alloc
+#define plt_intr_vec_list_free rte_intr_vec_list_free
+#define plt_intr_fd_set rte_intr_fd_set
+#define plt_intr_fd_get rte_intr_fd_get
+#define plt_intr_dev_fd_get rte_intr_dev_fd_get
+#define plt_intr_dev_fd_set rte_intr_dev_fd_set
+#define plt_intr_type_get rte_intr_type_get
+#define plt_intr_type_set rte_intr_type_set
+#define plt_intr_instance_alloc rte_intr_instance_alloc
+#define plt_intr_instance_copy rte_intr_instance_copy
+#define plt_intr_instance_free rte_intr_instance_free
+#define plt_intr_event_list_update rte_intr_event_list_update
+#define plt_intr_max_intr_get rte_intr_max_intr_get
+#define plt_intr_max_intr_set rte_intr_max_intr_set
+#define plt_intr_nb_efd_get rte_intr_nb_efd_get
+#define plt_intr_nb_efd_set rte_intr_nb_efd_set
+#define plt_intr_nb_intr_get rte_intr_nb_intr_get
+#define plt_intr_nb_intr_set rte_intr_nb_intr_set
+#define plt_intr_efds_index_get rte_intr_efds_index_get
+#define plt_intr_efds_index_set rte_intr_efds_index_set
+#define plt_intr_elist_index_get rte_intr_elist_index_get
+#define plt_intr_elist_index_set rte_intr_elist_index_set
+
#define plt_alarm_set rte_eal_alarm_set
#define plt_alarm_cancel rte_eal_alarm_cancel
@@ -162,7 +189,7 @@ extern int cnxk_logtype_tm;
#define plt_dbg(subsystem, fmt, args...) \
rte_log(RTE_LOG_DEBUG, cnxk_logtype_##subsystem, \
"[%s] %s():%u " fmt "\n", #subsystem, __func__, __LINE__, \
- ##args)
+##args)
#define plt_base_dbg(fmt, ...) plt_dbg(base, fmt, ##__VA_ARGS__)
#define plt_cpt_dbg(fmt, ...) plt_dbg(cpt, fmt, ##__VA_ARGS__)
@@ -182,18 +209,18 @@ extern int cnxk_logtype_tm;
#ifdef __cplusplus
#define CNXK_PCI_ID(subsystem_dev, dev) \
- { \
- RTE_CLASS_ANY_ID, PCI_VENDOR_ID_CAVIUM, (dev), RTE_PCI_ANY_ID, \
- (subsystem_dev), \
- }
+{ \
+ RTE_CLASS_ANY_ID, PCI_VENDOR_ID_CAVIUM, (dev), RTE_PCI_ANY_ID, \
+ (subsystem_dev), \
+}
#else
#define CNXK_PCI_ID(subsystem_dev, dev) \
- { \
- .class_id = RTE_CLASS_ANY_ID, \
- .vendor_id = PCI_VENDOR_ID_CAVIUM, .device_id = (dev), \
- .subsystem_vendor_id = RTE_PCI_ANY_ID, \
- .subsystem_device_id = (subsystem_dev), \
- }
+{ \
+ .class_id = RTE_CLASS_ANY_ID, \
+ .vendor_id = PCI_VENDOR_ID_CAVIUM, .device_id = (dev), \
+ .subsystem_vendor_id = RTE_PCI_ANY_ID, \
+ .subsystem_device_id = (subsystem_dev), \
+}
#endif
__rte_internal
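
The plt_intr_*() names added above are plain aliases for the new rte_intr_*() accessors, so cnxk code keeps its platform-neutral spelling while resolving to the EAL getters and setters with no extra indirection. A trivial sketch, assuming roc_platform.h is on the include path (example_max_intr() is an illustrative name):

#include <rte_interrupts.h>
#include "roc_platform.h"	/* provides the plt_intr_*() aliases above */

static int
example_max_intr(struct plt_intr_handle *h)
{
	/* Expands to rte_intr_max_intr_get(h) */
	return plt_intr_max_intr_get(h);
}
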
diff --git a/drivers/common/cnxk/roc_sso.c b/drivers/common/cnxk/roc_sso.c
index 1ccf2626bd..88165ad236 100644
--- a/drivers/common/cnxk/roc_sso.c
+++ b/drivers/common/cnxk/roc_sso.c
@@ -491,7 +491,7 @@ roc_sso_rsrc_init(struct roc_sso *roc_sso, uint8_t nb_hws, uint16_t nb_hwgrp)
goto sso_msix_fail;
}
- rc = sso_register_irqs_priv(roc_sso, &sso->pci_dev->intr_handle, nb_hws,
+ rc = sso_register_irqs_priv(roc_sso, sso->pci_dev->intr_handle, nb_hws,
nb_hwgrp);
if (rc < 0) {
plt_err("Failed to register SSO LF IRQs");
@@ -521,7 +521,7 @@ roc_sso_rsrc_fini(struct roc_sso *roc_sso)
if (!roc_sso->nb_hws && !roc_sso->nb_hwgrp)
return;
- sso_unregister_irqs_priv(roc_sso, &sso->pci_dev->intr_handle,
+ sso_unregister_irqs_priv(roc_sso, sso->pci_dev->intr_handle,
roc_sso->nb_hws, roc_sso->nb_hwgrp);
sso_lf_free(roc_sso, SSO_LF_TYPE_HWS, roc_sso->nb_hws);
sso_lf_free(roc_sso, SSO_LF_TYPE_HWGRP, roc_sso->nb_hwgrp);
diff --git a/drivers/common/cnxk/roc_tim.c b/drivers/common/cnxk/roc_tim.c
index 387164bb1d..534b697bee 100644
--- a/drivers/common/cnxk/roc_tim.c
+++ b/drivers/common/cnxk/roc_tim.c
@@ -200,7 +200,7 @@ roc_tim_lf_alloc(struct roc_tim *roc_tim, uint8_t ring_id, uint64_t *clk)
if (clk)
*clk = rsp->tenns_clk;
- rc = tim_register_irq_priv(roc_tim, &sso->pci_dev->intr_handle, ring_id,
+ rc = tim_register_irq_priv(roc_tim, sso->pci_dev->intr_handle, ring_id,
tim->tim_msix_offsets[ring_id]);
if (rc < 0) {
plt_tim_dbg("Failed to register Ring[%d] IRQ", ring_id);
@@ -223,7 +223,7 @@ roc_tim_lf_free(struct roc_tim *roc_tim, uint8_t ring_id)
struct tim_ring_req *req;
int rc = -ENOSPC;
- tim_unregister_irq_priv(roc_tim, &sso->pci_dev->intr_handle, ring_id,
+ tim_unregister_irq_priv(roc_tim, sso->pci_dev->intr_handle, ring_id,
tim->tim_msix_offsets[ring_id]);
req = mbox_alloc_msg_tim_lf_free(dev->mbox);
diff --git a/drivers/common/octeontx2/otx2_dev.c b/drivers/common/octeontx2/otx2_dev.c
index ce4f0e7ca9..08dca87848 100644
--- a/drivers/common/octeontx2/otx2_dev.c
+++ b/drivers/common/octeontx2/otx2_dev.c
@@ -643,7 +643,7 @@ otx2_af_pf_mbox_irq(void *param)
static int
mbox_register_pf_irq(struct rte_pci_device *pci_dev, struct otx2_dev *dev)
{
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
int i, rc;
/* HW clear irq */
@@ -693,7 +693,7 @@ mbox_register_pf_irq(struct rte_pci_device *pci_dev, struct otx2_dev *dev)
static int
mbox_register_vf_irq(struct rte_pci_device *pci_dev, struct otx2_dev *dev)
{
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
int rc;
/* Clear irq */
@@ -726,7 +726,7 @@ mbox_register_irq(struct rte_pci_device *pci_dev, struct otx2_dev *dev)
static void
mbox_unregister_pf_irq(struct rte_pci_device *pci_dev, struct otx2_dev *dev)
{
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
int i;
/* HW clear irq */
@@ -758,7 +758,7 @@ mbox_unregister_pf_irq(struct rte_pci_device *pci_dev, struct otx2_dev *dev)
static void
mbox_unregister_vf_irq(struct rte_pci_device *pci_dev, struct otx2_dev *dev)
{
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
/* Clear irq */
otx2_write64(~0ull, dev->bar2 + RVU_VF_INT_ENA_W1C);
@@ -841,7 +841,7 @@ otx2_pf_vf_flr_irq(void *param)
static int
vf_flr_unregister_irqs(struct rte_pci_device *pci_dev, struct otx2_dev *dev)
{
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
int i;
otx2_base_dbg("Unregister VF FLR interrupts for %s", pci_dev->name);
@@ -862,7 +862,7 @@ vf_flr_unregister_irqs(struct rte_pci_device *pci_dev, struct otx2_dev *dev)
static int
vf_flr_register_irqs(struct rte_pci_device *pci_dev, struct otx2_dev *dev)
{
- struct rte_intr_handle *handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *handle = pci_dev->intr_handle;
int i, rc;
otx2_base_dbg("Register VF FLR interrupts for %s", pci_dev->name);
@@ -1039,7 +1039,7 @@ otx2_dev_priv_init(struct rte_pci_device *pci_dev, void *otx2_dev)
void
otx2_dev_fini(struct rte_pci_device *pci_dev, void *otx2_dev)
{
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
struct otx2_dev *dev = otx2_dev;
struct otx2_idev_cfg *idev;
struct otx2_mbox *mbox;
diff --git a/drivers/common/octeontx2/otx2_irq.c b/drivers/common/octeontx2/otx2_irq.c
index c0137ff36d..93fc95c0e1 100644
--- a/drivers/common/octeontx2/otx2_irq.c
+++ b/drivers/common/octeontx2/otx2_irq.c
@@ -26,11 +26,12 @@ static int
irq_get_info(struct rte_intr_handle *intr_handle)
{
struct vfio_irq_info irq = { .argsz = sizeof(irq) };
- int rc;
+ int rc, vfio_dev_fd;
irq.index = VFIO_PCI_MSIX_IRQ_INDEX;
- rc = ioctl(intr_handle->vfio_dev_fd, VFIO_DEVICE_GET_IRQ_INFO, &irq);
+ vfio_dev_fd = rte_intr_dev_fd_get(intr_handle);
+ rc = ioctl(vfio_dev_fd, VFIO_DEVICE_GET_IRQ_INFO, &irq);
if (rc < 0) {
otx2_err("Failed to get IRQ info rc=%d errno=%d", rc, errno);
return rc;
@@ -41,10 +42,13 @@ irq_get_info(struct rte_intr_handle *intr_handle)
if (irq.count > MAX_INTR_VEC_ID) {
otx2_err("HW max=%d > MAX_INTR_VEC_ID: %d",
- intr_handle->max_intr, MAX_INTR_VEC_ID);
- intr_handle->max_intr = MAX_INTR_VEC_ID;
+ rte_intr_max_intr_get(intr_handle),
+ MAX_INTR_VEC_ID);
+ if (rte_intr_max_intr_set(intr_handle, MAX_INTR_VEC_ID))
+ return -1;
} else {
- intr_handle->max_intr = irq.count;
+ if (rte_intr_max_intr_set(intr_handle, irq.count))
+ return -1;
}
return 0;
@@ -55,12 +59,12 @@ irq_config(struct rte_intr_handle *intr_handle, unsigned int vec)
{
char irq_set_buf[MSIX_IRQ_SET_BUF_LEN];
struct vfio_irq_set *irq_set;
+ int len, rc, vfio_dev_fd;
int32_t *fd_ptr;
- int len, rc;
- if (vec > intr_handle->max_intr) {
+ if (vec > (uint32_t)rte_intr_max_intr_get(intr_handle)) {
otx2_err("vector=%d greater than max_intr=%d", vec,
- intr_handle->max_intr);
+ rte_intr_max_intr_get(intr_handle));
return -EINVAL;
}
@@ -77,9 +81,10 @@ irq_config(struct rte_intr_handle *intr_handle, unsigned int vec)
/* Use vec fd to set interrupt vectors */
fd_ptr = (int32_t *)&irq_set->data[0];
- fd_ptr[0] = intr_handle->efds[vec];
+ fd_ptr[0] = rte_intr_efds_index_get(intr_handle, vec);
- rc = ioctl(intr_handle->vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
+ vfio_dev_fd = rte_intr_dev_fd_get(intr_handle);
+ rc = ioctl(vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
if (rc)
otx2_err("Failed to set_irqs vector=0x%x rc=%d", vec, rc);
@@ -91,23 +96,24 @@ irq_init(struct rte_intr_handle *intr_handle)
{
char irq_set_buf[MSIX_IRQ_SET_BUF_LEN];
struct vfio_irq_set *irq_set;
+ int len, rc, vfio_dev_fd;
int32_t *fd_ptr;
- int len, rc;
uint32_t i;
- if (intr_handle->max_intr > MAX_INTR_VEC_ID) {
+ if (rte_intr_max_intr_get(intr_handle) > MAX_INTR_VEC_ID) {
otx2_err("Max_intr=%d greater than MAX_INTR_VEC_ID=%d",
- intr_handle->max_intr, MAX_INTR_VEC_ID);
+ rte_intr_max_intr_get(intr_handle),
+ MAX_INTR_VEC_ID);
return -ERANGE;
}
len = sizeof(struct vfio_irq_set) +
- sizeof(int32_t) * intr_handle->max_intr;
+ sizeof(int32_t) * rte_intr_max_intr_get(intr_handle);
irq_set = (struct vfio_irq_set *)irq_set_buf;
irq_set->argsz = len;
irq_set->start = 0;
- irq_set->count = intr_handle->max_intr;
+ irq_set->count = rte_intr_max_intr_get(intr_handle);
irq_set->flags = VFIO_IRQ_SET_DATA_EVENTFD |
VFIO_IRQ_SET_ACTION_TRIGGER;
irq_set->index = VFIO_PCI_MSIX_IRQ_INDEX;
@@ -116,7 +122,8 @@ irq_init(struct rte_intr_handle *intr_handle)
for (i = 0; i < irq_set->count; i++)
fd_ptr[i] = -1;
- rc = ioctl(intr_handle->vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
+ vfio_dev_fd = rte_intr_dev_fd_get(intr_handle);
+ rc = ioctl(vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
if (rc)
otx2_err("Failed to set irqs vector rc=%d", rc);
@@ -131,7 +138,8 @@ int
otx2_disable_irqs(struct rte_intr_handle *intr_handle)
{
/* Clear max_intr to indicate re-init next time */
- intr_handle->max_intr = 0;
+ if (rte_intr_max_intr_set(intr_handle, 0))
+ return -1;
return rte_intr_disable(intr_handle);
}
@@ -143,42 +151,50 @@ int
otx2_register_irq(struct rte_intr_handle *intr_handle,
rte_intr_callback_fn cb, void *data, unsigned int vec)
{
- struct rte_intr_handle tmp_handle;
- int rc;
+ struct rte_intr_handle *tmp_handle;
+ uint32_t nb_efd, tmp_nb_efd;
+ int rc, fd;
/* If no max_intr read from VFIO */
- if (intr_handle->max_intr == 0) {
+ if (rte_intr_max_intr_get(intr_handle) == 0) {
irq_get_info(intr_handle);
irq_init(intr_handle);
}
- if (vec > intr_handle->max_intr) {
+ if (vec > (uint32_t)rte_intr_max_intr_get(intr_handle)) {
otx2_err("Vector=%d greater than max_intr=%d", vec,
- intr_handle->max_intr);
+ rte_intr_max_intr_get(intr_handle));
return -EINVAL;
}
- tmp_handle = *intr_handle;
+ tmp_handle = intr_handle;
/* Create new eventfd for interrupt vector */
- tmp_handle.fd = eventfd(0, EFD_NONBLOCK | EFD_CLOEXEC);
- if (tmp_handle.fd == -1)
+ fd = eventfd(0, EFD_NONBLOCK | EFD_CLOEXEC);
+ if (fd == -1)
return -ENODEV;
+ if (rte_intr_fd_set(tmp_handle, fd))
+ return errno;
+
/* Register vector interrupt callback */
- rc = rte_intr_callback_register(&tmp_handle, cb, data);
+ rc = rte_intr_callback_register(tmp_handle, cb, data);
if (rc) {
otx2_err("Failed to register vector:0x%x irq callback.", vec);
return rc;
}
- intr_handle->efds[vec] = tmp_handle.fd;
- intr_handle->nb_efd = (vec > intr_handle->nb_efd) ?
- vec : intr_handle->nb_efd;
- if ((intr_handle->nb_efd + 1) > intr_handle->max_intr)
- intr_handle->max_intr = intr_handle->nb_efd + 1;
+ rte_intr_efds_index_set(intr_handle, vec, fd);
+ nb_efd = (vec > (uint32_t)rte_intr_nb_efd_get(intr_handle)) ?
+ vec : (uint32_t)rte_intr_nb_efd_get(intr_handle);
+ rte_intr_nb_efd_set(intr_handle, nb_efd);
+
+ tmp_nb_efd = rte_intr_nb_efd_get(intr_handle) + 1;
+ if (tmp_nb_efd > (uint32_t)rte_intr_max_intr_get(intr_handle))
+ rte_intr_max_intr_set(intr_handle, tmp_nb_efd);
- otx2_base_dbg("Enable vector:0x%x for vfio (efds: %d, max:%d)",
- vec, intr_handle->nb_efd, intr_handle->max_intr);
+ otx2_base_dbg("Enable vector:0x%x for vfio (efds: %d, max:%d)", vec,
+ rte_intr_nb_efd_get(intr_handle),
+ rte_intr_max_intr_get(intr_handle));
/* Enable MSIX vectors to VFIO */
return irq_config(intr_handle, vec);
@@ -192,24 +208,27 @@ void
otx2_unregister_irq(struct rte_intr_handle *intr_handle,
rte_intr_callback_fn cb, void *data, unsigned int vec)
{
- struct rte_intr_handle tmp_handle;
+ struct rte_intr_handle *tmp_handle;
uint8_t retries = 5; /* 5 ms */
- int rc;
+ int rc, fd;
- if (vec > intr_handle->max_intr) {
+ if (vec > (uint32_t)rte_intr_max_intr_get(intr_handle)) {
otx2_err("Error unregistering MSI-X interrupts vec:%d > %d",
- vec, intr_handle->max_intr);
+ vec, rte_intr_max_intr_get(intr_handle));
return;
}
- tmp_handle = *intr_handle;
- tmp_handle.fd = intr_handle->efds[vec];
- if (tmp_handle.fd == -1)
+ tmp_handle = intr_handle;
+ fd = rte_intr_efds_index_get(intr_handle, vec);
+ if (fd == -1)
+ return;
+
+ if (rte_intr_fd_set(tmp_handle, fd))
return;
do {
- /* Un-register callback func from eal lib */
- rc = rte_intr_callback_unregister(&tmp_handle, cb, data);
+ /* Un-register callback func from platform lib */
+ rc = rte_intr_callback_unregister(tmp_handle, cb, data);
/* Retry only if -EAGAIN */
if (rc != -EAGAIN)
break;
@@ -218,18 +237,18 @@ otx2_unregister_irq(struct rte_intr_handle *intr_handle,
} while (retries);
if (rc < 0) {
- otx2_err("Error unregistering MSI-X intr vec %d cb, rc=%d",
- vec, rc);
+ otx2_err("Error unregistering MSI-X vec %d cb, rc=%d", vec, rc);
return;
}
- otx2_base_dbg("Disable vector:0x%x for vfio (efds: %d, max:%d)",
- vec, intr_handle->nb_efd, intr_handle->max_intr);
+ otx2_base_dbg("Disable vector:0x%x for vfio (efds: %d, max:%d)", vec,
+ rte_intr_nb_efd_get(intr_handle),
+ rte_intr_max_intr_get(intr_handle));
- if (intr_handle->efds[vec] != -1)
- close(intr_handle->efds[vec]);
+ if (rte_intr_efds_index_get(intr_handle, vec) != -1)
+ close(rte_intr_efds_index_get(intr_handle, vec));
/* Disable MSIX vectors from VFIO */
- intr_handle->efds[vec] = -1;
+ rte_intr_efds_index_set(intr_handle, vec, -1);
irq_config(intr_handle, vec);
}
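
In the irq_config()/irq_init() hunks above the VFIO device fd and the per-vector eventfd are fetched through accessors instead of dereferencing the handle. A minimal sketch of programming a single MSI-X vector into VFIO with those calls, assuming the standard VFIO_DEVICE_SET_IRQS layout (vfio_set_one_vector() is an illustrative name; buffer sizing and error handling shortened):

#include <string.h>
#include <sys/ioctl.h>
#include <linux/vfio.h>
#include <rte_interrupts.h>

static int
vfio_set_one_vector(struct rte_intr_handle *h, unsigned int vec)
{
	char buf[sizeof(struct vfio_irq_set) + sizeof(int32_t)];
	struct vfio_irq_set *irq_set = (struct vfio_irq_set *)buf;
	int32_t *fd_ptr;

	memset(buf, 0, sizeof(buf));
	irq_set->argsz = sizeof(buf);
	irq_set->count = 1;
	irq_set->flags = VFIO_IRQ_SET_DATA_EVENTFD |
			 VFIO_IRQ_SET_ACTION_TRIGGER;
	irq_set->index = VFIO_PCI_MSIX_IRQ_INDEX;
	irq_set->start = vec;

	/* The eventfd for this vector now lives inside the opaque handle */
	fd_ptr = (int32_t *)&irq_set->data[0];
	fd_ptr[0] = rte_intr_efds_index_get(h, vec);

	return ioctl(rte_intr_dev_fd_get(h), VFIO_DEVICE_SET_IRQS, irq_set);
}
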
diff --git a/drivers/crypto/octeontx2/otx2_cryptodev_hw_access.c b/drivers/crypto/octeontx2/otx2_cryptodev_hw_access.c
index bf90d095fe..d5d6b5bad7 100644
--- a/drivers/crypto/octeontx2/otx2_cryptodev_hw_access.c
+++ b/drivers/crypto/octeontx2/otx2_cryptodev_hw_access.c
@@ -36,7 +36,7 @@ otx2_cpt_lf_err_intr_unregister(const struct rte_cryptodev *dev,
uint16_t msix_off, uintptr_t base)
{
struct rte_pci_device *pci_dev = RTE_DEV_TO_PCI(dev->device);
- struct rte_intr_handle *handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *handle = pci_dev->intr_handle;
/* Disable error interrupts */
otx2_write64(~0ull, base + OTX2_CPT_LF_MISC_INT_ENA_W1C);
@@ -65,7 +65,7 @@ otx2_cpt_lf_err_intr_register(const struct rte_cryptodev *dev,
uint16_t msix_off, uintptr_t base)
{
struct rte_pci_device *pci_dev = RTE_DEV_TO_PCI(dev->device);
- struct rte_intr_handle *handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *handle = pci_dev->intr_handle;
int ret;
/* Disable error interrupts */
diff --git a/drivers/event/octeontx2/otx2_evdev_irq.c b/drivers/event/octeontx2/otx2_evdev_irq.c
index a2033646e6..9b7ad27b04 100644
--- a/drivers/event/octeontx2/otx2_evdev_irq.c
+++ b/drivers/event/octeontx2/otx2_evdev_irq.c
@@ -29,7 +29,7 @@ sso_lf_register_irq(const struct rte_eventdev *event_dev, uint16_t ggrp_msixoff,
uintptr_t base)
{
struct rte_pci_device *pci_dev = RTE_DEV_TO_PCI(event_dev->dev);
- struct rte_intr_handle *handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *handle = pci_dev->intr_handle;
int rc, vec;
vec = ggrp_msixoff + SSO_LF_INT_VEC_GRP;
@@ -66,7 +66,7 @@ ssow_lf_register_irq(const struct rte_eventdev *event_dev, uint16_t gws_msixoff,
uintptr_t base)
{
struct rte_pci_device *pci_dev = RTE_DEV_TO_PCI(event_dev->dev);
- struct rte_intr_handle *handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *handle = pci_dev->intr_handle;
int rc, vec;
vec = gws_msixoff + SSOW_LF_INT_VEC_IOP;
@@ -86,7 +86,7 @@ sso_lf_unregister_irq(const struct rte_eventdev *event_dev,
uint16_t ggrp_msixoff, uintptr_t base)
{
struct rte_pci_device *pci_dev = RTE_DEV_TO_PCI(event_dev->dev);
- struct rte_intr_handle *handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *handle = pci_dev->intr_handle;
int vec;
vec = ggrp_msixoff + SSO_LF_INT_VEC_GRP;
@@ -101,7 +101,7 @@ ssow_lf_unregister_irq(const struct rte_eventdev *event_dev,
uint16_t gws_msixoff, uintptr_t base)
{
struct rte_pci_device *pci_dev = RTE_DEV_TO_PCI(event_dev->dev);
- struct rte_intr_handle *handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *handle = pci_dev->intr_handle;
int vec;
vec = gws_msixoff + SSOW_LF_INT_VEC_IOP;
@@ -198,7 +198,7 @@ static int
tim_lf_register_irq(struct rte_pci_device *pci_dev, uint16_t tim_msixoff,
uintptr_t base)
{
- struct rte_intr_handle *handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *handle = pci_dev->intr_handle;
int rc, vec;
vec = tim_msixoff + TIM_LF_INT_VEC_NRSPERR_INT;
@@ -226,7 +226,7 @@ static void
tim_lf_unregister_irq(struct rte_pci_device *pci_dev, uint16_t tim_msixoff,
uintptr_t base)
{
- struct rte_intr_handle *handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *handle = pci_dev->intr_handle;
int vec;
vec = tim_msixoff + TIM_LF_INT_VEC_NRSPERR_INT;
diff --git a/drivers/mempool/octeontx2/otx2_mempool.c b/drivers/mempool/octeontx2/otx2_mempool.c
index fb630fecf8..f63dc06ef2 100644
--- a/drivers/mempool/octeontx2/otx2_mempool.c
+++ b/drivers/mempool/octeontx2/otx2_mempool.c
@@ -301,7 +301,7 @@ otx2_npa_lf_init(struct rte_pci_device *pci_dev, void *otx2_dev)
lf->pf_func = dev->pf_func;
lf->npa_msixoff = npa_msixoff;
- lf->intr_handle = &pci_dev->intr_handle;
+ lf->intr_handle = pci_dev->intr_handle;
lf->pci_dev = pci_dev;
idev->npa_pf_func = dev->pf_func;
diff --git a/drivers/net/atlantic/atl_ethdev.c b/drivers/net/atlantic/atl_ethdev.c
index 0ce35eb519..a77d51abc4 100644
--- a/drivers/net/atlantic/atl_ethdev.c
+++ b/drivers/net/atlantic/atl_ethdev.c
@@ -360,7 +360,7 @@ eth_atl_dev_init(struct rte_eth_dev *eth_dev)
{
struct atl_adapter *adapter = eth_dev->data->dev_private;
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
struct aq_hw_s *hw = ATL_DEV_PRIVATE_TO_HW(eth_dev->data->dev_private);
int err = 0;
@@ -479,7 +479,7 @@ atl_dev_start(struct rte_eth_dev *dev)
{
struct aq_hw_s *hw = ATL_DEV_PRIVATE_TO_HW(dev->data->dev_private);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
uint32_t intr_vector = 0;
int status;
int err;
@@ -525,10 +525,9 @@ atl_dev_start(struct rte_eth_dev *dev)
}
}
- if (rte_intr_dp_is_en(intr_handle) && !intr_handle->intr_vec) {
- intr_handle->intr_vec = rte_zmalloc("intr_vec",
- dev->data->nb_rx_queues * sizeof(int), 0);
- if (intr_handle->intr_vec == NULL) {
+ if (rte_intr_dp_is_en(intr_handle)) {
+ if (rte_intr_vec_list_alloc(intr_handle, "intr_vec",
+ dev->data->nb_rx_queues)) {
PMD_INIT_LOG(ERR, "Failed to allocate %d rx_queues"
" intr_vec", dev->data->nb_rx_queues);
return -ENOMEM;
@@ -608,7 +607,7 @@ atl_dev_stop(struct rte_eth_dev *dev)
struct aq_hw_s *hw =
ATL_DEV_PRIVATE_TO_HW(dev->data->dev_private);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
PMD_INIT_FUNC_TRACE();
dev->data->dev_started = 0;
@@ -638,10 +637,7 @@ atl_dev_stop(struct rte_eth_dev *dev)
/* Clean datapath event and queue/vec mapping */
rte_intr_efd_disable(intr_handle);
- if (intr_handle->intr_vec != NULL) {
- rte_free(intr_handle->intr_vec);
- intr_handle->intr_vec = NULL;
- }
+ rte_intr_vec_list_free(intr_handle);
return 0;
}
@@ -692,7 +688,7 @@ static int
atl_dev_close(struct rte_eth_dev *dev)
{
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
struct aq_hw_s *hw;
int ret;
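
The ethdev conversions above all converge on the same start/stop shape: allocate the vector list through the handle when per-queue interrupts are enabled, and let the handle free it on stop instead of rte_free()ing a raw pointer. A compressed sketch of that pattern, using only calls visible in the hunks (rxq_intr_setup()/rxq_intr_teardown() are illustrative names):

#include <errno.h>
#include <stdint.h>
#include <rte_interrupts.h>

static int
rxq_intr_setup(struct rte_intr_handle *ih, uint16_t nb_rx_queues)
{
	/* List is owned by the handle; no driver-side intr_vec pointer */
	if (rte_intr_dp_is_en(ih) &&
	    rte_intr_vec_list_alloc(ih, "intr_vec", nb_rx_queues))
		return -ENOMEM;
	return 0;
}

static void
rxq_intr_teardown(struct rte_intr_handle *ih)
{
	rte_intr_efd_disable(ih);
	/* Replaces rte_free(intr_handle->intr_vec) */
	rte_intr_vec_list_free(ih);
}
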
diff --git a/drivers/net/avp/avp_ethdev.c b/drivers/net/avp/avp_ethdev.c
index 623fa5e5ff..f32619e05c 100644
--- a/drivers/net/avp/avp_ethdev.c
+++ b/drivers/net/avp/avp_ethdev.c
@@ -711,7 +711,7 @@ avp_dev_interrupt_handler(void *data)
status);
/* re-enable UIO interrupt handling */
- ret = rte_intr_ack(&pci_dev->intr_handle);
+ ret = rte_intr_ack(pci_dev->intr_handle);
if (ret < 0) {
PMD_DRV_LOG(ERR, "Failed to re-enable UIO interrupts, ret=%d\n",
ret);
@@ -730,7 +730,7 @@ avp_dev_enable_interrupts(struct rte_eth_dev *eth_dev)
return -EINVAL;
/* enable UIO interrupt handling */
- ret = rte_intr_enable(&pci_dev->intr_handle);
+ ret = rte_intr_enable(pci_dev->intr_handle);
if (ret < 0) {
PMD_DRV_LOG(ERR, "Failed to enable UIO interrupts, ret=%d\n",
ret);
@@ -759,7 +759,7 @@ avp_dev_disable_interrupts(struct rte_eth_dev *eth_dev)
RTE_PTR_ADD(registers, RTE_AVP_INTERRUPT_MASK_OFFSET));
/* enable UIO interrupt handling */
- ret = rte_intr_disable(&pci_dev->intr_handle);
+ ret = rte_intr_disable(pci_dev->intr_handle);
if (ret < 0) {
PMD_DRV_LOG(ERR, "Failed to disable UIO interrupts, ret=%d\n",
ret);
@@ -776,7 +776,7 @@ avp_dev_setup_interrupts(struct rte_eth_dev *eth_dev)
int ret;
/* register a callback handler with UIO for interrupt notifications */
- ret = rte_intr_callback_register(&pci_dev->intr_handle,
+ ret = rte_intr_callback_register(pci_dev->intr_handle,
avp_dev_interrupt_handler,
(void *)eth_dev);
if (ret < 0) {
diff --git a/drivers/net/axgbe/axgbe_ethdev.c b/drivers/net/axgbe/axgbe_ethdev.c
index 9cb4818af1..c26e0a199e 100644
--- a/drivers/net/axgbe/axgbe_ethdev.c
+++ b/drivers/net/axgbe/axgbe_ethdev.c
@@ -313,7 +313,7 @@ axgbe_dev_interrupt_handler(void *param)
}
}
/* Unmask interrupts since disabled after generation */
- rte_intr_ack(&pdata->pci_dev->intr_handle);
+ rte_intr_ack(pdata->pci_dev->intr_handle);
}
/*
@@ -374,7 +374,7 @@ axgbe_dev_start(struct rte_eth_dev *dev)
}
/* enable uio/vfio intr/eventfd mapping */
- rte_intr_enable(&pdata->pci_dev->intr_handle);
+ rte_intr_enable(pdata->pci_dev->intr_handle);
/* phy start*/
pdata->phy_if.phy_start(pdata);
@@ -404,7 +404,7 @@ axgbe_dev_stop(struct rte_eth_dev *dev)
PMD_INIT_FUNC_TRACE();
- rte_intr_disable(&pdata->pci_dev->intr_handle);
+ rte_intr_disable(pdata->pci_dev->intr_handle);
if (rte_bit_relaxed_get32(AXGBE_STOPPED, &pdata->dev_state))
return 0;
@@ -2323,7 +2323,7 @@ eth_axgbe_dev_init(struct rte_eth_dev *eth_dev)
return ret;
}
- rte_intr_callback_register(&pci_dev->intr_handle,
+ rte_intr_callback_register(pci_dev->intr_handle,
axgbe_dev_interrupt_handler,
(void *)eth_dev);
PMD_INIT_LOG(DEBUG, "port %d vendorID=0x%x deviceID=0x%x",
@@ -2347,8 +2347,8 @@ axgbe_dev_close(struct rte_eth_dev *eth_dev)
axgbe_dev_clear_queues(eth_dev);
/* disable uio intr before callback unregister */
- rte_intr_disable(&pci_dev->intr_handle);
- rte_intr_callback_unregister(&pci_dev->intr_handle,
+ rte_intr_disable(pci_dev->intr_handle);
+ rte_intr_callback_unregister(pci_dev->intr_handle,
axgbe_dev_interrupt_handler,
(void *)eth_dev);
diff --git a/drivers/net/axgbe/axgbe_mdio.c b/drivers/net/axgbe/axgbe_mdio.c
index 4f98e695ae..35ffda84f1 100644
--- a/drivers/net/axgbe/axgbe_mdio.c
+++ b/drivers/net/axgbe/axgbe_mdio.c
@@ -933,7 +933,7 @@ static int __axgbe_phy_config_aneg(struct axgbe_port *pdata)
}
/* Disable auto-negotiation interrupt */
- rte_intr_disable(&pdata->pci_dev->intr_handle);
+ rte_intr_disable(pdata->pci_dev->intr_handle);
/* Start auto-negotiation in a supported mode */
if (axgbe_use_mode(pdata, AXGBE_MODE_KR)) {
@@ -951,7 +951,7 @@ static int __axgbe_phy_config_aneg(struct axgbe_port *pdata)
} else if (axgbe_use_mode(pdata, AXGBE_MODE_SGMII_100)) {
axgbe_set_mode(pdata, AXGBE_MODE_SGMII_100);
} else {
- rte_intr_enable(&pdata->pci_dev->intr_handle);
+ rte_intr_enable(pdata->pci_dev->intr_handle);
return -EINVAL;
}
@@ -964,7 +964,7 @@ static int __axgbe_phy_config_aneg(struct axgbe_port *pdata)
pdata->kx_state = AXGBE_RX_BPA;
/* Re-enable auto-negotiation interrupt */
- rte_intr_enable(&pdata->pci_dev->intr_handle);
+ rte_intr_enable(pdata->pci_dev->intr_handle);
axgbe_an37_enable_interrupts(pdata);
axgbe_an_init(pdata);
diff --git a/drivers/net/bnx2x/bnx2x_ethdev.c b/drivers/net/bnx2x/bnx2x_ethdev.c
index 463886f17a..a34b2f078b 100644
--- a/drivers/net/bnx2x/bnx2x_ethdev.c
+++ b/drivers/net/bnx2x/bnx2x_ethdev.c
@@ -134,7 +134,7 @@ bnx2x_interrupt_handler(void *param)
PMD_DEBUG_PERIODIC_LOG(INFO, sc, "Interrupt handled");
bnx2x_interrupt_action(dev, 1);
- rte_intr_ack(&sc->pci_dev->intr_handle);
+ rte_intr_ack(sc->pci_dev->intr_handle);
}
static void bnx2x_periodic_start(void *param)
@@ -234,10 +234,10 @@ bnx2x_dev_start(struct rte_eth_dev *dev)
}
if (IS_PF(sc)) {
- rte_intr_callback_register(&sc->pci_dev->intr_handle,
+ rte_intr_callback_register(sc->pci_dev->intr_handle,
bnx2x_interrupt_handler, (void *)dev);
- if (rte_intr_enable(&sc->pci_dev->intr_handle))
+ if (rte_intr_enable(sc->pci_dev->intr_handle))
PMD_DRV_LOG(ERR, sc, "rte_intr_enable failed");
}
@@ -262,8 +262,8 @@ bnx2x_dev_stop(struct rte_eth_dev *dev)
bnx2x_dev_rxtx_init_dummy(dev);
if (IS_PF(sc)) {
- rte_intr_disable(&sc->pci_dev->intr_handle);
- rte_intr_callback_unregister(&sc->pci_dev->intr_handle,
+ rte_intr_disable(sc->pci_dev->intr_handle);
+ rte_intr_callback_unregister(sc->pci_dev->intr_handle,
bnx2x_interrupt_handler, (void *)dev);
/* stop the periodic callout */
diff --git a/drivers/net/bnxt/bnxt_ethdev.c b/drivers/net/bnxt/bnxt_ethdev.c
index aa7e7fdc85..f13432ac15 100644
--- a/drivers/net/bnxt/bnxt_ethdev.c
+++ b/drivers/net/bnxt/bnxt_ethdev.c
@@ -735,7 +735,7 @@ static int bnxt_alloc_prev_ring_stats(struct bnxt *bp)
static int bnxt_start_nic(struct bnxt *bp)
{
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(bp->eth_dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
uint32_t intr_vector = 0;
uint32_t queue_id, base = BNXT_MISC_VEC_ID;
uint32_t vec = BNXT_MISC_VEC_ID;
@@ -847,26 +847,24 @@ static int bnxt_start_nic(struct bnxt *bp)
return rc;
}
- if (rte_intr_dp_is_en(intr_handle) && !intr_handle->intr_vec) {
- intr_handle->intr_vec =
- rte_zmalloc("intr_vec",
- bp->eth_dev->data->nb_rx_queues *
- sizeof(int), 0);
- if (intr_handle->intr_vec == NULL) {
+ if (rte_intr_dp_is_en(intr_handle)) {
+ if (rte_intr_vec_list_alloc(intr_handle, "intr_vec",
+ bp->eth_dev->data->nb_rx_queues)) {
PMD_DRV_LOG(ERR, "Failed to allocate %d rx_queues"
" intr_vec", bp->eth_dev->data->nb_rx_queues);
rc = -ENOMEM;
goto err_out;
}
- PMD_DRV_LOG(DEBUG, "intr_handle->intr_vec = %p "
- "intr_handle->nb_efd = %d intr_handle->max_intr = %d\n",
- intr_handle->intr_vec, intr_handle->nb_efd,
- intr_handle->max_intr);
+ PMD_DRV_LOG(DEBUG, "intr_handle->nb_efd = %d "
+ "intr_handle->max_intr = %d\n",
+ rte_intr_nb_efd_get(intr_handle),
+ rte_intr_max_intr_get(intr_handle));
for (queue_id = 0; queue_id < bp->eth_dev->data->nb_rx_queues;
queue_id++) {
- intr_handle->intr_vec[queue_id] =
- vec + BNXT_RX_VEC_START;
- if (vec < base + intr_handle->nb_efd - 1)
+ rte_intr_vec_list_index_set(intr_handle,
+ queue_id, vec + BNXT_RX_VEC_START);
+ if (vec < base + rte_intr_nb_efd_get(intr_handle)
+ - 1)
vec++;
}
}
@@ -1479,7 +1477,7 @@ static int bnxt_dev_stop(struct rte_eth_dev *eth_dev)
{
struct bnxt *bp = eth_dev->data->dev_private;
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
struct rte_eth_link link;
int ret;
@@ -1521,10 +1519,7 @@ static int bnxt_dev_stop(struct rte_eth_dev *eth_dev)
/* Clean queue intr-vector mapping */
rte_intr_efd_disable(intr_handle);
- if (intr_handle->intr_vec != NULL) {
- rte_free(intr_handle->intr_vec);
- intr_handle->intr_vec = NULL;
- }
+ rte_intr_vec_list_free(intr_handle);
bnxt_hwrm_port_clr_stats(bp);
bnxt_free_tx_mbufs(bp);
diff --git a/drivers/net/bnxt/bnxt_irq.c b/drivers/net/bnxt/bnxt_irq.c
index 122a1f9908..508abfc844 100644
--- a/drivers/net/bnxt/bnxt_irq.c
+++ b/drivers/net/bnxt/bnxt_irq.c
@@ -67,7 +67,7 @@ void bnxt_int_handler(void *param)
int bnxt_free_int(struct bnxt *bp)
{
- struct rte_intr_handle *intr_handle = &bp->pdev->intr_handle;
+ struct rte_intr_handle *intr_handle = bp->pdev->intr_handle;
struct bnxt_irq *irq = bp->irq_tbl;
int rc = 0;
@@ -170,7 +170,7 @@ int bnxt_setup_int(struct bnxt *bp)
int bnxt_request_int(struct bnxt *bp)
{
- struct rte_intr_handle *intr_handle = &bp->pdev->intr_handle;
+ struct rte_intr_handle *intr_handle = bp->pdev->intr_handle;
struct bnxt_irq *irq = bp->irq_tbl;
int rc = 0;
diff --git a/drivers/net/dpaa/dpaa_ethdev.c b/drivers/net/dpaa/dpaa_ethdev.c
index 36d8f9249d..f868977227 100644
--- a/drivers/net/dpaa/dpaa_ethdev.c
+++ b/drivers/net/dpaa/dpaa_ethdev.c
@@ -219,7 +219,7 @@ dpaa_eth_dev_configure(struct rte_eth_dev *dev)
PMD_INIT_FUNC_TRACE();
dpaa_dev = container_of(rdev, struct rte_dpaa_device, device);
- intr_handle = &dpaa_dev->intr_handle;
+ intr_handle = dpaa_dev->intr_handle;
__fif = container_of(fif, struct __fman_if, __if);
/* Rx offloads which are enabled by default */
@@ -276,13 +276,14 @@ dpaa_eth_dev_configure(struct rte_eth_dev *dev)
}
/* if the interrupts were configured on this devices*/
- if (intr_handle && intr_handle->fd) {
+ if (intr_handle && rte_intr_fd_get(intr_handle)) {
if (dev->data->dev_conf.intr_conf.lsc != 0)
rte_intr_callback_register(intr_handle,
dpaa_interrupt_handler,
(void *)dev);
- ret = dpaa_intr_enable(__fif->node_name, intr_handle->fd);
+ ret = dpaa_intr_enable(__fif->node_name,
+ rte_intr_fd_get(intr_handle));
if (ret) {
if (dev->data->dev_conf.intr_conf.lsc != 0) {
rte_intr_callback_unregister(intr_handle,
@@ -389,9 +390,10 @@ static void dpaa_interrupt_handler(void *param)
int bytes_read;
dpaa_dev = container_of(rdev, struct rte_dpaa_device, device);
- intr_handle = &dpaa_dev->intr_handle;
+ intr_handle = dpaa_dev->intr_handle;
- bytes_read = read(intr_handle->fd, &buf, sizeof(uint64_t));
+ bytes_read = read(rte_intr_fd_get(intr_handle), &buf,
+ sizeof(uint64_t));
if (bytes_read < 0)
DPAA_PMD_ERR("Error reading eventfd\n");
dpaa_eth_link_update(dev, 0);
@@ -461,7 +463,7 @@ static int dpaa_eth_dev_close(struct rte_eth_dev *dev)
}
dpaa_dev = container_of(rdev, struct rte_dpaa_device, device);
- intr_handle = &dpaa_dev->intr_handle;
+ intr_handle = dpaa_dev->intr_handle;
__fif = container_of(fif, struct __fman_if, __if);
ret = dpaa_eth_dev_stop(dev);
@@ -470,7 +472,7 @@ static int dpaa_eth_dev_close(struct rte_eth_dev *dev)
if (link->link_status && !link->link_autoneg)
dpaa_restart_link_autoneg(__fif->node_name);
- if (intr_handle && intr_handle->fd &&
+ if (intr_handle && rte_intr_fd_get(intr_handle) &&
dev->data->dev_conf.intr_conf.lsc != 0) {
dpaa_intr_disable(__fif->node_name);
rte_intr_callback_unregister(intr_handle,
@@ -1101,20 +1103,33 @@ int dpaa_eth_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
dpaa_dev = container_of(rdev, struct rte_dpaa_device,
device);
- dev->intr_handle = &dpaa_dev->intr_handle;
- dev->intr_handle->intr_vec = rte_zmalloc(NULL,
- dpaa_push_mode_max_queue, 0);
- if (!dev->intr_handle->intr_vec) {
+ dev->intr_handle = dpaa_dev->intr_handle;
+ if (rte_intr_vec_list_alloc(dev->intr_handle,
+ NULL, dpaa_push_mode_max_queue)) {
DPAA_PMD_ERR("intr_vec alloc failed");
return -ENOMEM;
}
- dev->intr_handle->nb_efd = dpaa_push_mode_max_queue;
- dev->intr_handle->max_intr = dpaa_push_mode_max_queue;
+ if (rte_intr_nb_efd_set(dev->intr_handle,
+ dpaa_push_mode_max_queue))
+ return -rte_errno;
+
+ if (rte_intr_max_intr_set(dev->intr_handle,
+ dpaa_push_mode_max_queue))
+ return -rte_errno;
}
- dev->intr_handle->type = RTE_INTR_HANDLE_EXT;
- dev->intr_handle->intr_vec[queue_idx] = queue_idx + 1;
- dev->intr_handle->efds[queue_idx] = q_fd;
+ if (rte_intr_type_set(dev->intr_handle,
+ RTE_INTR_HANDLE_EXT))
+ return -rte_errno;
+
+ if (rte_intr_vec_list_index_set(dev->intr_handle,
+ queue_idx, queue_idx + 1))
+ return -rte_errno;
+
+ if (rte_intr_efds_index_set(dev->intr_handle, queue_idx,
+ q_fd))
+ return -rte_errno;
+
rxq->q_fd = q_fd;
}
rxq->bp_array = rte_dpaa_bpid_info;
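
The dpaa hunk above shows how a non-PCI bus wires its own eventfds into the now-opaque handle: every former field write becomes a setter that can fail, so each call is checked. A compressed sketch of that per-queue wiring, assuming the constants and the q_fd source from the hunk above (wire_push_queue() is an illustrative name):

#include <stdint.h>
#include <rte_errno.h>
#include <rte_interrupts.h>

static int
wire_push_queue(struct rte_intr_handle *ih, uint16_t queue_idx, int q_fd,
		uint32_t max_queues)
{
	if (rte_intr_nb_efd_set(ih, max_queues) ||
	    rte_intr_max_intr_set(ih, max_queues))
		return -rte_errno;

	/* External (non-VFIO/UIO) interrupt source */
	if (rte_intr_type_set(ih, RTE_INTR_HANDLE_EXT))
		return -rte_errno;

	/* Queue N maps to vector N + 1, backed by the queue's eventfd */
	if (rte_intr_vec_list_index_set(ih, queue_idx, queue_idx + 1) ||
	    rte_intr_efds_index_set(ih, queue_idx, q_fd))
		return -rte_errno;

	return 0;
}
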
diff --git a/drivers/net/dpaa2/dpaa2_ethdev.c b/drivers/net/dpaa2/dpaa2_ethdev.c
index c12169578e..18c92b9b4e 100644
--- a/drivers/net/dpaa2/dpaa2_ethdev.c
+++ b/drivers/net/dpaa2/dpaa2_ethdev.c
@@ -1157,7 +1157,7 @@ dpaa2_dev_start(struct rte_eth_dev *dev)
struct rte_intr_handle *intr_handle;
dpaa2_dev = container_of(rdev, struct rte_dpaa2_device, device);
- intr_handle = &dpaa2_dev->intr_handle;
+ intr_handle = dpaa2_dev->intr_handle;
PMD_INIT_FUNC_TRACE();
@@ -1228,8 +1228,8 @@ dpaa2_dev_start(struct rte_eth_dev *dev)
}
/* if the interrupts were configured on this devices*/
- if (intr_handle && (intr_handle->fd) &&
- (dev->data->dev_conf.intr_conf.lsc != 0)) {
+ if (intr_handle && rte_intr_fd_get(intr_handle) &&
+ dev->data->dev_conf.intr_conf.lsc != 0) {
/* Registering LSC interrupt handler */
rte_intr_callback_register(intr_handle,
dpaa2_interrupt_handler,
@@ -1268,8 +1268,8 @@ dpaa2_dev_stop(struct rte_eth_dev *dev)
PMD_INIT_FUNC_TRACE();
/* reset interrupt callback */
- if (intr_handle && (intr_handle->fd) &&
- (dev->data->dev_conf.intr_conf.lsc != 0)) {
+ if (intr_handle && rte_intr_fd_get(intr_handle) &&
+ dev->data->dev_conf.intr_conf.lsc != 0) {
/*disable dpni irqs */
dpaa2_eth_setup_irqs(dev, 0);
diff --git a/drivers/net/e1000/em_ethdev.c b/drivers/net/e1000/em_ethdev.c
index a0ca371b02..c1060f0c70 100644
--- a/drivers/net/e1000/em_ethdev.c
+++ b/drivers/net/e1000/em_ethdev.c
@@ -237,7 +237,7 @@ static int
eth_em_dev_init(struct rte_eth_dev *eth_dev)
{
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
struct e1000_adapter *adapter =
E1000_DEV_PRIVATE(eth_dev->data->dev_private);
struct e1000_hw *hw =
@@ -525,7 +525,7 @@ eth_em_start(struct rte_eth_dev *dev)
struct e1000_hw *hw =
E1000_DEV_PRIVATE_TO_HW(dev->data->dev_private);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
int ret, mask;
uint32_t intr_vector = 0;
uint32_t *speeds;
@@ -575,12 +575,10 @@ eth_em_start(struct rte_eth_dev *dev)
}
if (rte_intr_dp_is_en(intr_handle)) {
- intr_handle->intr_vec =
- rte_zmalloc("intr_vec",
- dev->data->nb_rx_queues * sizeof(int), 0);
- if (intr_handle->intr_vec == NULL) {
+ if (rte_intr_vec_list_alloc(intr_handle, "intr_vec",
+ dev->data->nb_rx_queues)) {
PMD_INIT_LOG(ERR, "Failed to allocate %d rx_queues"
- " intr_vec", dev->data->nb_rx_queues);
+ " intr_vec", dev->data->nb_rx_queues);
return -ENOMEM;
}
@@ -718,7 +716,7 @@ eth_em_stop(struct rte_eth_dev *dev)
struct rte_eth_link link;
struct e1000_hw *hw = E1000_DEV_PRIVATE_TO_HW(dev->data->dev_private);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
dev->data->dev_started = 0;
@@ -752,10 +750,7 @@ eth_em_stop(struct rte_eth_dev *dev)
/* Clean datapath event and queue/vec mapping */
rte_intr_efd_disable(intr_handle);
- if (intr_handle->intr_vec != NULL) {
- rte_free(intr_handle->intr_vec);
- intr_handle->intr_vec = NULL;
- }
+ rte_intr_vec_list_free(intr_handle);
return 0;
}
@@ -767,7 +762,7 @@ eth_em_close(struct rte_eth_dev *dev)
struct e1000_adapter *adapter =
E1000_DEV_PRIVATE(dev->data->dev_private);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
int ret;
if (rte_eal_process_type() != RTE_PROC_PRIMARY)
@@ -1008,7 +1003,7 @@ eth_em_rx_queue_intr_enable(struct rte_eth_dev *dev, __rte_unused uint16_t queue
{
struct e1000_hw *hw = E1000_DEV_PRIVATE_TO_HW(dev->data->dev_private);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
em_rxq_intr_enable(hw);
rte_intr_ack(intr_handle);
diff --git a/drivers/net/e1000/igb_ethdev.c b/drivers/net/e1000/igb_ethdev.c
index d80fad01e3..d48e9ae752 100644
--- a/drivers/net/e1000/igb_ethdev.c
+++ b/drivers/net/e1000/igb_ethdev.c
@@ -515,7 +515,7 @@ igb_intr_enable(struct rte_eth_dev *dev)
struct e1000_hw *hw =
E1000_DEV_PRIVATE_TO_HW(dev->data->dev_private);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
if (rte_intr_allow_others(intr_handle) &&
dev->data->dev_conf.intr_conf.lsc != 0) {
@@ -532,7 +532,7 @@ igb_intr_disable(struct rte_eth_dev *dev)
struct e1000_hw *hw =
E1000_DEV_PRIVATE_TO_HW(dev->data->dev_private);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
if (rte_intr_allow_others(intr_handle) &&
dev->data->dev_conf.intr_conf.lsc != 0) {
@@ -853,12 +853,12 @@ eth_igb_dev_init(struct rte_eth_dev *eth_dev)
eth_dev->data->port_id, pci_dev->id.vendor_id,
pci_dev->id.device_id);
- rte_intr_callback_register(&pci_dev->intr_handle,
+ rte_intr_callback_register(pci_dev->intr_handle,
eth_igb_interrupt_handler,
(void *)eth_dev);
/* enable uio/vfio intr/eventfd mapping */
- rte_intr_enable(&pci_dev->intr_handle);
+ rte_intr_enable(pci_dev->intr_handle);
/* enable support intr */
igb_intr_enable(eth_dev);
@@ -996,7 +996,7 @@ eth_igbvf_dev_init(struct rte_eth_dev *eth_dev)
eth_dev->data->port_id, pci_dev->id.vendor_id,
pci_dev->id.device_id, "igb_mac_82576_vf");
- intr_handle = &pci_dev->intr_handle;
+ intr_handle = pci_dev->intr_handle;
rte_intr_callback_register(intr_handle,
eth_igbvf_interrupt_handler, eth_dev);
@@ -1200,7 +1200,7 @@ eth_igb_start(struct rte_eth_dev *dev)
struct e1000_adapter *adapter =
E1000_DEV_PRIVATE(dev->data->dev_private);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
int ret, mask;
uint32_t intr_vector = 0;
uint32_t ctrl_ext;
@@ -1259,11 +1259,10 @@ eth_igb_start(struct rte_eth_dev *dev)
return -1;
}
- if (rte_intr_dp_is_en(intr_handle) && !intr_handle->intr_vec) {
- intr_handle->intr_vec =
- rte_zmalloc("intr_vec",
- dev->data->nb_rx_queues * sizeof(int), 0);
- if (intr_handle->intr_vec == NULL) {
+ /* Allocate the vector list */
+ if (rte_intr_dp_is_en(intr_handle)) {
+ if (rte_intr_vec_list_alloc(intr_handle, "intr_vec",
+ dev->data->nb_rx_queues)) {
PMD_INIT_LOG(ERR, "Failed to allocate %d rx_queues"
" intr_vec", dev->data->nb_rx_queues);
return -ENOMEM;
@@ -1422,7 +1421,7 @@ eth_igb_stop(struct rte_eth_dev *dev)
struct e1000_hw *hw = E1000_DEV_PRIVATE_TO_HW(dev->data->dev_private);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
struct rte_eth_link link;
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
struct e1000_adapter *adapter =
E1000_DEV_PRIVATE(dev->data->dev_private);
@@ -1466,10 +1465,7 @@ eth_igb_stop(struct rte_eth_dev *dev)
/* Clean datapath event and queue/vec mapping */
rte_intr_efd_disable(intr_handle);
- if (intr_handle->intr_vec != NULL) {
- rte_free(intr_handle->intr_vec);
- intr_handle->intr_vec = NULL;
- }
+ rte_intr_vec_list_free(intr_handle);
adapter->stopped = true;
dev->data->dev_started = 0;
@@ -1509,7 +1505,7 @@ eth_igb_close(struct rte_eth_dev *dev)
struct e1000_hw *hw = E1000_DEV_PRIVATE_TO_HW(dev->data->dev_private);
struct rte_eth_link link;
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
struct e1000_filter_info *filter_info =
E1000_DEV_PRIVATE_TO_FILTER_INFO(dev->data->dev_private);
int ret;
@@ -1535,10 +1531,8 @@ eth_igb_close(struct rte_eth_dev *dev)
igb_dev_free_queues(dev);
- if (intr_handle->intr_vec) {
- rte_free(intr_handle->intr_vec);
- intr_handle->intr_vec = NULL;
- }
+ /* Cleanup vector list */
+ rte_intr_vec_list_free(intr_handle);
memset(&link, 0, sizeof(link));
rte_eth_linkstatus_set(dev, &link);
@@ -2779,7 +2773,7 @@ static int eth_igb_rxq_interrupt_setup(struct rte_eth_dev *dev)
struct e1000_hw *hw =
E1000_DEV_PRIVATE_TO_HW(dev->data->dev_private);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
int misc_shift = rte_intr_allow_others(intr_handle) ? 1 : 0;
struct rte_eth_dev_info dev_info;
@@ -3296,7 +3290,7 @@ igbvf_dev_start(struct rte_eth_dev *dev)
struct e1000_adapter *adapter =
E1000_DEV_PRIVATE(dev->data->dev_private);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
int ret;
uint32_t intr_vector = 0;
@@ -3327,11 +3321,10 @@ igbvf_dev_start(struct rte_eth_dev *dev)
return ret;
}
- if (rte_intr_dp_is_en(intr_handle) && !intr_handle->intr_vec) {
- intr_handle->intr_vec =
- rte_zmalloc("intr_vec",
- dev->data->nb_rx_queues * sizeof(int), 0);
- if (!intr_handle->intr_vec) {
+ /* Allocate the vector list */
+ if (rte_intr_dp_is_en(intr_handle)) {
+ if (rte_intr_vec_list_alloc(intr_handle, "intr_vec",
+ dev->data->nb_rx_queues)) {
PMD_INIT_LOG(ERR, "Failed to allocate %d rx_queues"
" intr_vec", dev->data->nb_rx_queues);
return -ENOMEM;
@@ -3353,7 +3346,7 @@ static int
igbvf_dev_stop(struct rte_eth_dev *dev)
{
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
struct e1000_adapter *adapter =
E1000_DEV_PRIVATE(dev->data->dev_private);
@@ -3377,10 +3370,9 @@ igbvf_dev_stop(struct rte_eth_dev *dev)
/* Clean datapath event and queue/vec mapping */
rte_intr_efd_disable(intr_handle);
- if (intr_handle->intr_vec) {
- rte_free(intr_handle->intr_vec);
- intr_handle->intr_vec = NULL;
- }
+
+ /* Clean vector list */
+ rte_intr_vec_list_free(intr_handle);
adapter->stopped = true;
dev->data->dev_started = 0;
@@ -3418,7 +3410,7 @@ igbvf_dev_close(struct rte_eth_dev *dev)
memset(&addr, 0, sizeof(addr));
igbvf_default_mac_addr_set(dev, &addr);
- rte_intr_callback_unregister(&pci_dev->intr_handle,
+ rte_intr_callback_unregister(pci_dev->intr_handle,
eth_igbvf_interrupt_handler,
(void *)dev);
@@ -5140,7 +5132,7 @@ eth_igb_rx_queue_intr_disable(struct rte_eth_dev *dev, uint16_t queue_id)
struct e1000_hw *hw =
E1000_DEV_PRIVATE_TO_HW(dev->data->dev_private);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
uint32_t vec = E1000_MISC_VEC_ID;
if (rte_intr_allow_others(intr_handle))
@@ -5160,7 +5152,7 @@ eth_igb_rx_queue_intr_enable(struct rte_eth_dev *dev, uint16_t queue_id)
struct e1000_hw *hw =
E1000_DEV_PRIVATE_TO_HW(dev->data->dev_private);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
uint32_t vec = E1000_MISC_VEC_ID;
if (rte_intr_allow_others(intr_handle))
@@ -5238,7 +5230,7 @@ eth_igb_configure_msix_intr(struct rte_eth_dev *dev)
uint32_t base = E1000_MISC_VEC_ID;
uint32_t misc_shift = 0;
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
/* won't configure msix register if no mapping is done
* between intr vector and event fd
@@ -5279,8 +5271,9 @@ eth_igb_configure_msix_intr(struct rte_eth_dev *dev)
E1000_WRITE_REG(hw, E1000_GPIE, E1000_GPIE_MSIX_MODE |
E1000_GPIE_PBA | E1000_GPIE_EIAME |
E1000_GPIE_NSICR);
- intr_mask = RTE_LEN2MASK(intr_handle->nb_efd, uint32_t) <<
- misc_shift;
+ intr_mask =
+ RTE_LEN2MASK(rte_intr_nb_efd_get(intr_handle),
+ uint32_t) << misc_shift;
if (dev->data->dev_conf.intr_conf.lsc != 0)
intr_mask |= (1 << IGB_MSIX_OTHER_INTR_VEC);
@@ -5298,8 +5291,8 @@ eth_igb_configure_msix_intr(struct rte_eth_dev *dev)
/* use EIAM to auto-mask when MSI-X interrupt
* is asserted, this saves a register write for every interrupt
*/
- intr_mask = RTE_LEN2MASK(intr_handle->nb_efd, uint32_t) <<
- misc_shift;
+ intr_mask = RTE_LEN2MASK(rte_intr_nb_efd_get(intr_handle),
+ uint32_t) << misc_shift;
if (dev->data->dev_conf.intr_conf.lsc != 0)
intr_mask |= (1 << IGB_MSIX_OTHER_INTR_VEC);
@@ -5309,8 +5302,8 @@ eth_igb_configure_msix_intr(struct rte_eth_dev *dev)
for (queue_id = 0; queue_id < dev->data->nb_rx_queues; queue_id++) {
eth_igb_assign_msix_vector(hw, 0, queue_id, vec);
- intr_handle->intr_vec[queue_id] = vec;
- if (vec < base + intr_handle->nb_efd - 1)
+ rte_intr_vec_list_index_set(intr_handle, queue_id, vec);
+ if (vec < base + rte_intr_nb_efd_get(intr_handle) - 1)
vec++;
}
diff --git a/drivers/net/ena/ena_ethdev.c b/drivers/net/ena/ena_ethdev.c
index 4cebf60a68..5408ca8657 100644
--- a/drivers/net/ena/ena_ethdev.c
+++ b/drivers/net/ena/ena_ethdev.c
@@ -473,7 +473,7 @@ static void ena_config_debug_area(struct ena_adapter *adapter)
static int ena_close(struct rte_eth_dev *dev)
{
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
struct ena_adapter *adapter = dev->data->dev_private;
int ret = 0;
@@ -947,7 +947,7 @@ static int ena_stop(struct rte_eth_dev *dev)
struct ena_adapter *adapter = dev->data->dev_private;
struct ena_com_dev *ena_dev = &adapter->ena_dev;
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
int rc;
/* Cannot free memory in secondary process */
@@ -969,10 +969,9 @@ static int ena_stop(struct rte_eth_dev *dev)
rte_intr_disable(intr_handle);
rte_intr_efd_disable(intr_handle);
- if (intr_handle->intr_vec != NULL) {
- rte_free(intr_handle->intr_vec);
- intr_handle->intr_vec = NULL;
- }
+
+ /* Cleanup vector list */
+ rte_intr_vec_list_free(intr_handle);
rte_intr_enable(intr_handle);
@@ -988,7 +987,7 @@ static int ena_create_io_queue(struct rte_eth_dev *dev, struct ena_ring *ring)
struct ena_adapter *adapter = ring->adapter;
struct ena_com_dev *ena_dev = &adapter->ena_dev;
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
struct ena_com_create_io_ctx ctx =
/* policy set to _HOST just to satisfy icc compiler */
{ ENA_ADMIN_PLACEMENT_POLICY_HOST,
@@ -1008,7 +1007,10 @@ static int ena_create_io_queue(struct rte_eth_dev *dev, struct ena_ring *ring)
ena_qid = ENA_IO_RXQ_IDX(ring->id);
ctx.direction = ENA_COM_IO_QUEUE_DIRECTION_RX;
if (rte_intr_dp_is_en(intr_handle))
- ctx.msix_vector = intr_handle->intr_vec[ring->id];
+ ctx.msix_vector =
+ rte_intr_vec_list_index_get(intr_handle,
+ ring->id);
+
for (i = 0; i < ring->ring_size; i++)
ring->empty_rx_reqs[i] = i;
}
@@ -1665,7 +1667,7 @@ static int eth_ena_dev_init(struct rte_eth_dev *eth_dev)
pci_dev->addr.devid,
pci_dev->addr.function);
- intr_handle = &pci_dev->intr_handle;
+ intr_handle = pci_dev->intr_handle;
adapter->regs = pci_dev->mem_resource[ENA_REGS_BAR].addr;
adapter->dev_mem_base = pci_dev->mem_resource[ENA_MEM_BAR].addr;
@@ -2817,7 +2819,7 @@ static int ena_parse_devargs(struct ena_adapter *adapter,
static int ena_setup_rx_intr(struct rte_eth_dev *dev)
{
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
int rc;
uint16_t vectors_nb, i;
bool rx_intr_requested = dev->data->dev_conf.intr_conf.rxq;
@@ -2844,9 +2846,9 @@ static int ena_setup_rx_intr(struct rte_eth_dev *dev)
goto enable_intr;
}
- intr_handle->intr_vec = rte_zmalloc("intr_vec",
- dev->data->nb_rx_queues * sizeof(*intr_handle->intr_vec), 0);
- if (intr_handle->intr_vec == NULL) {
+ /* Allocate the vector list */
+ if (rte_intr_vec_list_alloc(intr_handle, "intr_vec",
+ dev->data->nb_rx_queues)) {
PMD_DRV_LOG(ERR,
"Failed to allocate interrupt vector for %d queues\n",
dev->data->nb_rx_queues);
@@ -2865,7 +2867,9 @@ static int ena_setup_rx_intr(struct rte_eth_dev *dev)
}
for (i = 0; i < vectors_nb; ++i)
- intr_handle->intr_vec[i] = RTE_INTR_VEC_RXTX_OFFSET + i;
+ if (rte_intr_vec_list_index_set(intr_handle, i,
+ RTE_INTR_VEC_RXTX_OFFSET + i))
+ goto disable_intr_efd;
rte_intr_enable(intr_handle);
return 0;
@@ -2873,8 +2877,7 @@ static int ena_setup_rx_intr(struct rte_eth_dev *dev)
disable_intr_efd:
rte_intr_efd_disable(intr_handle);
free_intr_vec:
- rte_free(intr_handle->intr_vec);
- intr_handle->intr_vec = NULL;
+ rte_intr_vec_list_free(intr_handle);
enable_intr:
rte_intr_enable(intr_handle);
return rc;
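
For reference, the Rx-queue vector handling converted in the hunks above follows the same shape in most PMDs. The sketch below is illustrative only: the helper name, parameters and error handling are invented, and it assumes that after this series the accessors and RTE_INTR_VEC_RXTX_OFFSET are visible via <rte_interrupts.h> and behave as they are used in the diff (allocators and setters return non-zero on failure, getters return the stored value).

#include <errno.h>
#include <stdint.h>
#include <rte_errno.h>
#include <rte_interrupts.h>

/* Hypothetical helper: map each Rx queue to an MSI-X vector through the
 * new accessors instead of writing intr_handle->intr_vec[] directly.
 */
static int
example_map_rx_vectors(struct rte_intr_handle *intr_handle,
		       uint16_t nb_rx_queues)
{
	uint16_t q, vec = RTE_INTR_VEC_RXTX_OFFSET;

	if (!rte_intr_dp_is_en(intr_handle))
		return 0;

	/* Replaces the former rte_zmalloc() of intr_handle->intr_vec[] */
	if (rte_intr_vec_list_alloc(intr_handle, "intr_vec", nb_rx_queues))
		return -ENOMEM;

	for (q = 0; q < nb_rx_queues; q++) {
		if (rte_intr_vec_list_index_set(intr_handle, q, vec))
			goto fail;
		/* Queues beyond the eventfd count share the last vector */
		if (vec < RTE_INTR_VEC_RXTX_OFFSET +
			  rte_intr_nb_efd_get(intr_handle) - 1)
			vec++;
	}

	return 0;

fail:
	rte_intr_vec_list_free(intr_handle);
	return -rte_errno;
}

On the teardown side, the single rte_intr_vec_list_free() call replaces the rte_free()/NULL pair the stop paths used to carry.
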
diff --git a/drivers/net/enic/enic_main.c b/drivers/net/enic/enic_main.c
index 2affd380c6..b8daf8fb24 100644
--- a/drivers/net/enic/enic_main.c
+++ b/drivers/net/enic/enic_main.c
@@ -448,7 +448,7 @@ enic_intr_handler(void *arg)
rte_eth_dev_callback_process(dev, RTE_ETH_EVENT_INTR_LSC, NULL);
enic_log_q_error(enic);
/* Re-enable irq in case of INTx */
- rte_intr_ack(&enic->pdev->intr_handle);
+ rte_intr_ack(enic->pdev->intr_handle);
}
static int enic_rxq_intr_init(struct enic *enic)
@@ -477,14 +477,16 @@ static int enic_rxq_intr_init(struct enic *enic)
" interrupts\n");
return err;
}
- intr_handle->intr_vec = rte_zmalloc("enic_intr_vec",
- rxq_intr_count * sizeof(int), 0);
- if (intr_handle->intr_vec == NULL) {
+
+ if (rte_intr_vec_list_alloc(intr_handle, "enic_intr_vec",
+ rxq_intr_count)) {
dev_err(enic, "Failed to allocate intr_vec\n");
return -ENOMEM;
}
for (i = 0; i < rxq_intr_count; i++)
- intr_handle->intr_vec[i] = i + ENICPMD_RXQ_INTR_OFFSET;
+ if (rte_intr_vec_list_index_set(intr_handle, i,
+ i + ENICPMD_RXQ_INTR_OFFSET))
+ return -rte_errno;
return 0;
}
@@ -494,10 +496,8 @@ static void enic_rxq_intr_deinit(struct enic *enic)
intr_handle = enic->rte_dev->intr_handle;
rte_intr_efd_disable(intr_handle);
- if (intr_handle->intr_vec != NULL) {
- rte_free(intr_handle->intr_vec);
- intr_handle->intr_vec = NULL;
- }
+
+ rte_intr_vec_list_free(intr_handle);
}
static void enic_prep_wq_for_simple_tx(struct enic *enic, uint16_t queue_idx)
@@ -667,10 +667,10 @@ int enic_enable(struct enic *enic)
vnic_dev_enable_wait(enic->vdev);
/* Register and enable error interrupt */
- rte_intr_callback_register(&(enic->pdev->intr_handle),
+ rte_intr_callback_register(enic->pdev->intr_handle,
enic_intr_handler, (void *)enic->rte_dev);
- rte_intr_enable(&(enic->pdev->intr_handle));
+ rte_intr_enable(enic->pdev->intr_handle);
/* Unmask LSC interrupt */
vnic_intr_unmask(&enic->intr[ENICPMD_LSC_INTR_OFFSET]);
@@ -1112,8 +1112,8 @@ int enic_disable(struct enic *enic)
(void)vnic_intr_masked(&enic->intr[i]); /* flush write */
}
enic_rxq_intr_deinit(enic);
- rte_intr_disable(&enic->pdev->intr_handle);
- rte_intr_callback_unregister(&enic->pdev->intr_handle,
+ rte_intr_disable(enic->pdev->intr_handle);
+ rte_intr_callback_unregister(enic->pdev->intr_handle,
enic_intr_handler,
(void *)enic->rte_dev);
diff --git a/drivers/net/failsafe/failsafe.c b/drivers/net/failsafe/failsafe.c
index b87c036e60..d17db691d6 100644
--- a/drivers/net/failsafe/failsafe.c
+++ b/drivers/net/failsafe/failsafe.c
@@ -264,11 +264,24 @@ fs_eth_dev_create(struct rte_vdev_device *vdev)
RTE_ETHER_ADDR_BYTES(mac));
dev->data->dev_flags |= RTE_ETH_DEV_INTR_LSC |
RTE_ETH_DEV_AUTOFILL_QUEUE_XSTATS;
- PRIV(dev)->intr_handle = (struct rte_intr_handle){
- .fd = -1,
- .type = RTE_INTR_HANDLE_EXT,
- };
+
+ /* Allocate interrupt instance */
+ PRIV(dev)->intr_handle =
+ rte_intr_instance_alloc(RTE_INTR_ALLOC_DPDK_ALLOCATOR);
+ if (!PRIV(dev)->intr_handle) {
+ ERROR("Failed to allocate intr handle");
+ goto cancel_alarm;
+ }
+
+ if (rte_intr_fd_set(PRIV(dev)->intr_handle, -1))
+ goto cancel_alarm;
+
+ if (rte_intr_type_set(PRIV(dev)->intr_handle,
+ RTE_INTR_HANDLE_EXT))
+ goto cancel_alarm;
+
rte_eth_dev_probing_finish(dev);
+
return 0;
cancel_alarm:
failsafe_hotplug_alarm_cancel(dev);
@@ -297,6 +310,8 @@ fs_rte_eth_free(const char *name)
return 0; /* port already released */
ret = failsafe_eth_dev_close(dev);
rte_eth_dev_release_port(dev);
+ if (PRIV(dev)->intr_handle)
+ rte_intr_instance_free(PRIV(dev)->intr_handle);
return ret;
}
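
Drivers such as failsafe that previously embedded struct rte_intr_handle by value now have to allocate an instance, since the structure size becomes opaque. A minimal sketch of that lifecycle, with the function name invented and the RTE_INTR_ALLOC_DPDK_ALLOCATOR flag name taken from the hunk above:

#include <stddef.h>
#include <rte_interrupts.h>

/* Illustrative only: create an interrupt instance where the driver used
 * to keep a by-value struct rte_intr_handle in its private data.
 */
static struct rte_intr_handle *
example_intr_instance_create(void)
{
	struct rte_intr_handle *ih;

	ih = rte_intr_instance_alloc(RTE_INTR_ALLOC_DPDK_ALLOCATOR);
	if (ih == NULL)
		return NULL;

	/* Initialise the fields the old designated initialiser set */
	if (rte_intr_fd_set(ih, -1) ||
	    rte_intr_type_set(ih, RTE_INTR_HANDLE_EXT)) {
		rte_intr_instance_free(ih);
		return NULL;
	}

	return ih;
}

The matching rte_intr_instance_free() belongs in the driver's release path, as fs_rte_eth_free() now does above.
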
diff --git a/drivers/net/failsafe/failsafe_intr.c b/drivers/net/failsafe/failsafe_intr.c
index 602c04033c..949af61a47 100644
--- a/drivers/net/failsafe/failsafe_intr.c
+++ b/drivers/net/failsafe/failsafe_intr.c
@@ -410,12 +410,10 @@ fs_rx_intr_vec_uninstall(struct fs_priv *priv)
{
struct rte_intr_handle *intr_handle;
- intr_handle = &priv->intr_handle;
- if (intr_handle->intr_vec != NULL) {
- free(intr_handle->intr_vec);
- intr_handle->intr_vec = NULL;
- }
- intr_handle->nb_efd = 0;
+ intr_handle = priv->intr_handle;
+ rte_intr_vec_list_free(intr_handle);
+
+ rte_intr_nb_efd_set(intr_handle, 0);
}
/**
@@ -439,11 +437,9 @@ fs_rx_intr_vec_install(struct fs_priv *priv)
rxqs_n = priv->data->nb_rx_queues;
n = RTE_MIN(rxqs_n, (uint32_t)RTE_MAX_RXTX_INTR_VEC_ID);
count = 0;
- intr_handle = &priv->intr_handle;
- RTE_ASSERT(intr_handle->intr_vec == NULL);
+ intr_handle = priv->intr_handle;
/* Allocate the interrupt vector of the failsafe Rx proxy interrupts */
- intr_handle->intr_vec = malloc(n * sizeof(intr_handle->intr_vec[0]));
- if (intr_handle->intr_vec == NULL) {
+ if (rte_intr_vec_list_alloc(intr_handle, NULL, n)) {
fs_rx_intr_vec_uninstall(priv);
rte_errno = ENOMEM;
ERROR("Failed to allocate memory for interrupt vector,"
@@ -456,9 +452,9 @@ fs_rx_intr_vec_install(struct fs_priv *priv)
/* Skip queues that cannot request interrupts. */
if (rxq == NULL || rxq->event_fd < 0) {
/* Use invalid intr_vec[] index to disable entry. */
- intr_handle->intr_vec[i] =
- RTE_INTR_VEC_RXTX_OFFSET +
- RTE_MAX_RXTX_INTR_VEC_ID;
+ if (rte_intr_vec_list_index_set(intr_handle, i,
+ RTE_INTR_VEC_RXTX_OFFSET + RTE_MAX_RXTX_INTR_VEC_ID))
+ return -rte_errno;
continue;
}
if (count >= RTE_MAX_RXTX_INTR_VEC_ID) {
@@ -469,15 +465,24 @@ fs_rx_intr_vec_install(struct fs_priv *priv)
fs_rx_intr_vec_uninstall(priv);
return -rte_errno;
}
- intr_handle->intr_vec[i] = RTE_INTR_VEC_RXTX_OFFSET + count;
- intr_handle->efds[count] = rxq->event_fd;
+ if (rte_intr_vec_list_index_set(intr_handle, i,
+ RTE_INTR_VEC_RXTX_OFFSET + count))
+ return -rte_errno;
+
+ if (rte_intr_efds_index_set(intr_handle, count,
+ rxq->event_fd))
+ return -rte_errno;
count++;
}
if (count == 0) {
fs_rx_intr_vec_uninstall(priv);
} else {
- intr_handle->nb_efd = count;
- intr_handle->efd_counter_size = sizeof(uint64_t);
+ if (rte_intr_nb_efd_set(intr_handle, count))
+ return -rte_errno;
+
+ if (rte_intr_efd_counter_size_set(intr_handle,
+ sizeof(uint64_t)))
+ return -rte_errno;
}
return 0;
}
@@ -499,7 +504,7 @@ failsafe_rx_intr_uninstall(struct rte_eth_dev *dev)
struct rte_intr_handle *intr_handle;
priv = PRIV(dev);
- intr_handle = &priv->intr_handle;
+ intr_handle = priv->intr_handle;
rte_intr_free_epoll_fd(intr_handle);
fs_rx_event_proxy_uninstall(priv);
fs_rx_intr_vec_uninstall(priv);
@@ -530,6 +535,6 @@ failsafe_rx_intr_install(struct rte_eth_dev *dev)
fs_rx_intr_vec_uninstall(priv);
return -rte_errno;
}
- dev->intr_handle = &priv->intr_handle;
+ dev->intr_handle = priv->intr_handle;
return 0;
}
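
The failsafe Rx proxy hunks above also exercise the eventfd accessors. A short sketch of that pattern, with the helper and its arguments invented for illustration and the accessor behaviour assumed to match the diff (setters return non-zero on failure):

#include <stdint.h>
#include <rte_errno.h>
#include <rte_interrupts.h>

/* Illustrative only: register pre-created event fds with the handle, as
 * fs_rx_intr_vec_install() now does through accessors.
 */
static int
example_register_event_fds(struct rte_intr_handle *ih,
			   const int *event_fds, uint16_t count)
{
	uint16_t i;

	for (i = 0; i < count; i++)
		if (rte_intr_efds_index_set(ih, i, event_fds[i]))
			return -rte_errno;

	if (rte_intr_nb_efd_set(ih, count))
		return -rte_errno;

	/* Each eventfd carries a 64-bit counter */
	if (rte_intr_efd_counter_size_set(ih, sizeof(uint64_t)))
		return -rte_errno;

	return 0;
}
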
diff --git a/drivers/net/failsafe/failsafe_ops.c b/drivers/net/failsafe/failsafe_ops.c
index 5ff33e03e0..b31b2adfef 100644
--- a/drivers/net/failsafe/failsafe_ops.c
+++ b/drivers/net/failsafe/failsafe_ops.c
@@ -398,15 +398,22 @@ fs_rx_queue_setup(struct rte_eth_dev *dev,
* For the time being, fake as if we are using MSIX interrupts,
* this will cause rte_intr_efd_enable to allocate an eventfd for us.
*/
- struct rte_intr_handle intr_handle = {
- .type = RTE_INTR_HANDLE_VFIO_MSIX,
- .efds = { -1, },
- };
+ struct rte_intr_handle *intr_handle;
struct sub_device *sdev;
struct rxq *rxq;
uint8_t i;
int ret;
+ intr_handle = rte_intr_instance_alloc(RTE_INTR_ALLOC_TRAD_HEAP);
+ if (!intr_handle)
+ return -ENOMEM;
+
+ if (rte_intr_type_set(intr_handle, RTE_INTR_HANDLE_VFIO_MSIX))
+ return -rte_errno;
+
+ if (rte_intr_efds_index_set(intr_handle, 0, -1))
+ return -rte_errno;
+
fs_lock(dev, 0);
if (rx_conf->rx_deferred_start) {
FOREACH_SUBDEV_STATE(sdev, i, dev, DEV_PROBED) {
@@ -440,12 +447,12 @@ fs_rx_queue_setup(struct rte_eth_dev *dev,
rxq->info.nb_desc = nb_rx_desc;
rxq->priv = PRIV(dev);
rxq->sdev = PRIV(dev)->subs;
- ret = rte_intr_efd_enable(&intr_handle, 1);
+ ret = rte_intr_efd_enable(intr_handle, 1);
if (ret < 0) {
fs_unlock(dev, 0);
return ret;
}
- rxq->event_fd = intr_handle.efds[0];
+ rxq->event_fd = rte_intr_efds_index_get(intr_handle, 0);
dev->data->rx_queues[rx_queue_id] = rxq;
FOREACH_SUBDEV_STATE(sdev, i, dev, DEV_ACTIVE) {
ret = rte_eth_rx_queue_setup(PORT_ID(sdev),
@@ -458,10 +465,12 @@ fs_rx_queue_setup(struct rte_eth_dev *dev,
}
}
fs_unlock(dev, 0);
+ rte_intr_instance_free(intr_handle);
return 0;
free_rxq:
fs_rx_queue_release(rxq);
fs_unlock(dev, 0);
+ rte_intr_instance_free(intr_handle);
return ret;
}
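
fs_rx_queue_setup() above shows another consequence of the opaque structure: the stack-allocated temporary handle becomes a short-lived heap instance that exists only to let rte_intr_efd_enable() create an eventfd. A hedged sketch of that idea, with the function invented and the RTE_INTR_ALLOC_TRAD_HEAP flag name taken from the hunk:

#include <rte_interrupts.h>

/* Illustrative only: use a throw-away interrupt instance to obtain an
 * eventfd, keep the fd and drop the instance, as fs_rx_queue_setup()
 * does above.
 */
static int
example_borrow_event_fd(void)
{
	struct rte_intr_handle *ih;
	int fd = -1;

	ih = rte_intr_instance_alloc(RTE_INTR_ALLOC_TRAD_HEAP);
	if (ih == NULL)
		return -1;

	/* Fake MSI-X so rte_intr_efd_enable() allocates an eventfd */
	if (rte_intr_type_set(ih, RTE_INTR_HANDLE_VFIO_MSIX) == 0 &&
	    rte_intr_efds_index_set(ih, 0, -1) == 0 &&
	    rte_intr_efd_enable(ih, 1) == 0)
		fd = rte_intr_efds_index_get(ih, 0);

	/* Only needed at setup time: keep the fd, free the instance */
	rte_intr_instance_free(ih);

	return fd;
}
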
diff --git a/drivers/net/failsafe/failsafe_private.h b/drivers/net/failsafe/failsafe_private.h
index cd39d103c6..a80f5e2caf 100644
--- a/drivers/net/failsafe/failsafe_private.h
+++ b/drivers/net/failsafe/failsafe_private.h
@@ -166,7 +166,7 @@ struct fs_priv {
struct rte_ether_addr *mcast_addrs;
/* current capabilities */
struct rte_eth_dev_owner my_owner; /* Unique owner. */
- struct rte_intr_handle intr_handle; /* Port interrupt handle. */
+ struct rte_intr_handle *intr_handle; /* Port interrupt handle. */
/*
* Fail-safe state machine.
* This level will be tracking state of the EAL and eth
diff --git a/drivers/net/fm10k/fm10k_ethdev.c b/drivers/net/fm10k/fm10k_ethdev.c
index 3236290e40..c3c9daa82b 100644
--- a/drivers/net/fm10k/fm10k_ethdev.c
+++ b/drivers/net/fm10k/fm10k_ethdev.c
@@ -32,7 +32,8 @@
#define BIT_MASK_PER_UINT32 ((1 << CHARS_PER_UINT32) - 1)
/* default 1:1 map from queue ID to interrupt vector ID */
-#define Q2V(pci_dev, queue_id) ((pci_dev)->intr_handle.intr_vec[queue_id])
+#define Q2V(pci_dev, queue_id) \
+ (rte_intr_vec_list_index_get((pci_dev)->intr_handle, queue_id))
/* First 64 Logical ports for PF/VMDQ, second 64 for Flow director */
#define MAX_LPORT_NUM 128
@@ -690,7 +691,7 @@ fm10k_dev_rx_init(struct rte_eth_dev *dev)
struct fm10k_hw *hw = FM10K_DEV_PRIVATE_TO_HW(dev->data->dev_private);
struct fm10k_macvlan_filter_info *macvlan;
struct rte_pci_device *pdev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pdev->intr_handle;
+ struct rte_intr_handle *intr_handle = pdev->intr_handle;
int i, ret;
struct fm10k_rx_queue *rxq;
uint64_t base_addr;
@@ -1158,7 +1159,7 @@ fm10k_dev_stop(struct rte_eth_dev *dev)
{
struct fm10k_hw *hw = FM10K_DEV_PRIVATE_TO_HW(dev->data->dev_private);
struct rte_pci_device *pdev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pdev->intr_handle;
+ struct rte_intr_handle *intr_handle = pdev->intr_handle;
int i;
PMD_INIT_FUNC_TRACE();
@@ -1187,8 +1188,7 @@ fm10k_dev_stop(struct rte_eth_dev *dev)
}
/* Clean datapath event and queue/vec mapping */
rte_intr_efd_disable(intr_handle);
- rte_free(intr_handle->intr_vec);
- intr_handle->intr_vec = NULL;
+ rte_intr_vec_list_free(intr_handle);
return 0;
}
@@ -2368,7 +2368,7 @@ fm10k_dev_rx_queue_intr_enable(struct rte_eth_dev *dev, uint16_t queue_id)
else
FM10K_WRITE_REG(hw, FM10K_VFITR(Q2V(pdev, queue_id)),
FM10K_ITR_AUTOMASK | FM10K_ITR_MASK_CLEAR);
- rte_intr_ack(&pdev->intr_handle);
+ rte_intr_ack(pdev->intr_handle);
return 0;
}
@@ -2393,7 +2393,7 @@ fm10k_dev_rxq_interrupt_setup(struct rte_eth_dev *dev)
{
struct fm10k_hw *hw = FM10K_DEV_PRIVATE_TO_HW(dev->data->dev_private);
struct rte_pci_device *pdev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pdev->intr_handle;
+ struct rte_intr_handle *intr_handle = pdev->intr_handle;
uint32_t intr_vector, vec;
uint16_t queue_id;
int result = 0;
@@ -2421,15 +2421,17 @@ fm10k_dev_rxq_interrupt_setup(struct rte_eth_dev *dev)
}
if (rte_intr_dp_is_en(intr_handle) && !result) {
- intr_handle->intr_vec = rte_zmalloc("intr_vec",
- dev->data->nb_rx_queues * sizeof(int), 0);
- if (intr_handle->intr_vec) {
+ if (!rte_intr_vec_list_alloc(intr_handle, "intr_vec",
+ dev->data->nb_rx_queues)) {
for (queue_id = 0, vec = FM10K_RX_VEC_START;
queue_id < dev->data->nb_rx_queues;
queue_id++) {
- intr_handle->intr_vec[queue_id] = vec;
- if (vec < intr_handle->nb_efd - 1
- + FM10K_RX_VEC_START)
+ rte_intr_vec_list_index_set(intr_handle,
+ queue_id, vec);
+ int nb_efd =
+ rte_intr_nb_efd_get(intr_handle);
+ if (vec < (uint32_t)nb_efd - 1 +
+ FM10K_RX_VEC_START)
vec++;
}
} else {
@@ -2788,7 +2790,7 @@ fm10k_dev_close(struct rte_eth_dev *dev)
{
struct fm10k_hw *hw = FM10K_DEV_PRIVATE_TO_HW(dev->data->dev_private);
struct rte_pci_device *pdev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pdev->intr_handle;
+ struct rte_intr_handle *intr_handle = pdev->intr_handle;
int ret;
PMD_INIT_FUNC_TRACE();
@@ -3054,7 +3056,7 @@ eth_fm10k_dev_init(struct rte_eth_dev *dev)
{
struct fm10k_hw *hw = FM10K_DEV_PRIVATE_TO_HW(dev->data->dev_private);
struct rte_pci_device *pdev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pdev->intr_handle;
+ struct rte_intr_handle *intr_handle = pdev->intr_handle;
int diag, i;
struct fm10k_macvlan_filter_info *macvlan;
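
The reworked Q2V() macro above is representative of per-queue vector lookups after the change: reads go through rte_intr_vec_list_index_get(), and the handle is reached through a pointer member rather than by taking its address. A small illustrative sketch (function name invented; register programming omitted):

#include <stdint.h>
#include <rte_bus_pci.h>
#include <rte_common.h>
#include <rte_interrupts.h>

/* Illustrative only: look up the vector bound to an Rx queue and ack the
 * interrupt through the device's handle pointer.
 */
static int
example_rx_intr_ack(struct rte_pci_device *pci_dev, uint16_t queue_id)
{
	int vec = rte_intr_vec_list_index_get(pci_dev->intr_handle, queue_id);

	/* A device-specific register write would unmask 'vec' here */
	RTE_SET_USED(vec);

	return rte_intr_ack(pci_dev->intr_handle);
}
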
diff --git a/drivers/net/hinic/hinic_pmd_ethdev.c b/drivers/net/hinic/hinic_pmd_ethdev.c
index c01e2ec1d4..c1bb767cf9 100644
--- a/drivers/net/hinic/hinic_pmd_ethdev.c
+++ b/drivers/net/hinic/hinic_pmd_ethdev.c
@@ -1225,13 +1225,13 @@ static void hinic_disable_interrupt(struct rte_eth_dev *dev)
hinic_set_msix_state(nic_dev->hwdev, 0, HINIC_MSIX_DISABLE);
/* disable rte interrupt */
- ret = rte_intr_disable(&pci_dev->intr_handle);
+ ret = rte_intr_disable(pci_dev->intr_handle);
if (ret)
PMD_DRV_LOG(ERR, "Disable intr failed: %d", ret);
do {
ret =
- rte_intr_callback_unregister(&pci_dev->intr_handle,
+ rte_intr_callback_unregister(pci_dev->intr_handle,
hinic_dev_interrupt_handler, dev);
if (ret >= 0) {
break;
@@ -3132,7 +3132,7 @@ static int hinic_func_init(struct rte_eth_dev *eth_dev)
}
/* register callback func to eal lib */
- rc = rte_intr_callback_register(&pci_dev->intr_handle,
+ rc = rte_intr_callback_register(pci_dev->intr_handle,
hinic_dev_interrupt_handler,
(void *)eth_dev);
if (rc) {
@@ -3142,7 +3142,7 @@ static int hinic_func_init(struct rte_eth_dev *eth_dev)
}
/* enable uio/vfio intr/eventfd mapping */
- rc = rte_intr_enable(&pci_dev->intr_handle);
+ rc = rte_intr_enable(pci_dev->intr_handle);
if (rc) {
PMD_DRV_LOG(ERR, "Enable rte interrupt failed, dev_name: %s",
eth_dev->data->name);
@@ -3172,7 +3172,7 @@ static int hinic_func_init(struct rte_eth_dev *eth_dev)
return 0;
enable_intr_fail:
- (void)rte_intr_callback_unregister(&pci_dev->intr_handle,
+ (void)rte_intr_callback_unregister(pci_dev->intr_handle,
hinic_dev_interrupt_handler,
(void *)eth_dev);
diff --git a/drivers/net/hns3/hns3_ethdev.c b/drivers/net/hns3/hns3_ethdev.c
index 7d37004972..0e35737c7c 100644
--- a/drivers/net/hns3/hns3_ethdev.c
+++ b/drivers/net/hns3/hns3_ethdev.c
@@ -5275,7 +5275,7 @@ hns3_init_pf(struct rte_eth_dev *eth_dev)
hns3_config_all_msix_error(hw, true);
- ret = rte_intr_callback_register(&pci_dev->intr_handle,
+ ret = rte_intr_callback_register(pci_dev->intr_handle,
hns3_interrupt_handler,
eth_dev);
if (ret) {
@@ -5288,7 +5288,7 @@ hns3_init_pf(struct rte_eth_dev *eth_dev)
goto err_get_config;
/* Enable interrupt */
- rte_intr_enable(&pci_dev->intr_handle);
+ rte_intr_enable(pci_dev->intr_handle);
hns3_pf_enable_irq0(hw);
/* Get configuration */
@@ -5347,8 +5347,8 @@ hns3_init_pf(struct rte_eth_dev *eth_dev)
hns3_tqp_stats_uninit(hw);
err_get_config:
hns3_pf_disable_irq0(hw);
- rte_intr_disable(&pci_dev->intr_handle);
- hns3_intr_unregister(&pci_dev->intr_handle, hns3_interrupt_handler,
+ rte_intr_disable(pci_dev->intr_handle);
+ hns3_intr_unregister(pci_dev->intr_handle, hns3_interrupt_handler,
eth_dev);
err_intr_callback_register:
err_cmd_init:
@@ -5381,8 +5381,8 @@ hns3_uninit_pf(struct rte_eth_dev *eth_dev)
hns3_tqp_stats_uninit(hw);
hns3_config_mac_tnl_int(hw, false);
hns3_pf_disable_irq0(hw);
- rte_intr_disable(&pci_dev->intr_handle);
- hns3_intr_unregister(&pci_dev->intr_handle, hns3_interrupt_handler,
+ rte_intr_disable(pci_dev->intr_handle);
+ hns3_intr_unregister(pci_dev->intr_handle, hns3_interrupt_handler,
eth_dev);
hns3_config_all_msix_error(hw, false);
hns3_cmd_uninit(hw);
@@ -5716,7 +5716,7 @@ static int
hns3_map_rx_interrupt(struct rte_eth_dev *dev)
{
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
struct hns3_hw *hw = HNS3_DEV_PRIVATE_TO_HW(dev->data->dev_private);
uint16_t base = RTE_INTR_VEC_ZERO_OFFSET;
uint16_t vec = RTE_INTR_VEC_ZERO_OFFSET;
@@ -5739,16 +5739,13 @@ hns3_map_rx_interrupt(struct rte_eth_dev *dev)
if (rte_intr_efd_enable(intr_handle, intr_vector))
return -EINVAL;
- if (intr_handle->intr_vec == NULL) {
- intr_handle->intr_vec =
- rte_zmalloc("intr_vec",
- hw->used_rx_queues * sizeof(int), 0);
- if (intr_handle->intr_vec == NULL) {
- hns3_err(hw, "failed to allocate %u rx_queues intr_vec",
- hw->used_rx_queues);
- ret = -ENOMEM;
- goto alloc_intr_vec_error;
- }
+ /* Allocate vector list */
+ if (rte_intr_vec_list_alloc(intr_handle, "intr_vec",
+ hw->used_rx_queues)) {
+ hns3_err(hw, "failed to allocate %u rx_queues intr_vec",
+ hw->used_rx_queues);
+ ret = -ENOMEM;
+ goto alloc_intr_vec_error;
}
if (rte_intr_allow_others(intr_handle)) {
@@ -5761,20 +5758,21 @@ hns3_map_rx_interrupt(struct rte_eth_dev *dev)
HNS3_RING_TYPE_RX, q_id);
if (ret)
goto bind_vector_error;
- intr_handle->intr_vec[q_id] = vec;
+
+ if (rte_intr_vec_list_index_set(intr_handle, q_id, vec))
+ goto bind_vector_error;
/*
* If there are not enough efds (e.g. not enough interrupt),
* remaining queues will be bond to the last interrupt.
*/
- if (vec < base + intr_handle->nb_efd - 1)
+ if (vec < base + rte_intr_nb_efd_get(intr_handle) - 1)
vec++;
}
rte_intr_enable(intr_handle);
return 0;
bind_vector_error:
- rte_free(intr_handle->intr_vec);
- intr_handle->intr_vec = NULL;
+ rte_intr_vec_list_free(intr_handle);
alloc_intr_vec_error:
rte_intr_efd_disable(intr_handle);
return ret;
@@ -5785,7 +5783,7 @@ hns3_restore_rx_interrupt(struct hns3_hw *hw)
{
struct rte_eth_dev *dev = &rte_eth_devices[hw->data->port_id];
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
uint16_t q_id;
int ret;
@@ -5795,8 +5793,9 @@ hns3_restore_rx_interrupt(struct hns3_hw *hw)
if (rte_intr_dp_is_en(intr_handle)) {
for (q_id = 0; q_id < hw->used_rx_queues; q_id++) {
ret = hns3_bind_ring_with_vector(hw,
- intr_handle->intr_vec[q_id], true,
- HNS3_RING_TYPE_RX, q_id);
+ rte_intr_vec_list_index_get(intr_handle,
+ q_id),
+ true, HNS3_RING_TYPE_RX, q_id);
if (ret)
return ret;
}
@@ -5939,7 +5938,7 @@ static void
hns3_unmap_rx_interrupt(struct rte_eth_dev *dev)
{
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
struct hns3_adapter *hns = dev->data->dev_private;
struct hns3_hw *hw = &hns->hw;
uint8_t base = RTE_INTR_VEC_ZERO_OFFSET;
@@ -5959,16 +5958,14 @@ hns3_unmap_rx_interrupt(struct rte_eth_dev *dev)
(void)hns3_bind_ring_with_vector(hw, vec, false,
HNS3_RING_TYPE_RX,
q_id);
- if (vec < base + intr_handle->nb_efd - 1)
+ if (vec < base + rte_intr_nb_efd_get(intr_handle)
+ - 1)
vec++;
}
}
/* Clean datapath event and queue/vec mapping */
rte_intr_efd_disable(intr_handle);
- if (intr_handle->intr_vec) {
- rte_free(intr_handle->intr_vec);
- intr_handle->intr_vec = NULL;
- }
+ rte_intr_vec_list_free(intr_handle);
}
static int
diff --git a/drivers/net/hns3/hns3_ethdev_vf.c b/drivers/net/hns3/hns3_ethdev_vf.c
index 8d9b7979c8..fb25241be6 100644
--- a/drivers/net/hns3/hns3_ethdev_vf.c
+++ b/drivers/net/hns3/hns3_ethdev_vf.c
@@ -1985,7 +1985,7 @@ hns3vf_init_vf(struct rte_eth_dev *eth_dev)
hns3vf_clear_event_cause(hw, 0);
- ret = rte_intr_callback_register(&pci_dev->intr_handle,
+ ret = rte_intr_callback_register(pci_dev->intr_handle,
hns3vf_interrupt_handler, eth_dev);
if (ret) {
PMD_INIT_LOG(ERR, "Failed to register intr: %d", ret);
@@ -1993,7 +1993,7 @@ hns3vf_init_vf(struct rte_eth_dev *eth_dev)
}
/* Enable interrupt */
- rte_intr_enable(&pci_dev->intr_handle);
+ rte_intr_enable(pci_dev->intr_handle);
hns3vf_enable_irq0(hw);
/* Get configuration from PF */
@@ -2045,8 +2045,8 @@ hns3vf_init_vf(struct rte_eth_dev *eth_dev)
err_get_config:
hns3vf_disable_irq0(hw);
- rte_intr_disable(&pci_dev->intr_handle);
- hns3_intr_unregister(&pci_dev->intr_handle, hns3vf_interrupt_handler,
+ rte_intr_disable(pci_dev->intr_handle);
+ hns3_intr_unregister(pci_dev->intr_handle, hns3vf_interrupt_handler,
eth_dev);
err_intr_callback_register:
err_cmd_init:
@@ -2074,8 +2074,8 @@ hns3vf_uninit_vf(struct rte_eth_dev *eth_dev)
hns3_flow_uninit(eth_dev);
hns3_tqp_stats_uninit(hw);
hns3vf_disable_irq0(hw);
- rte_intr_disable(&pci_dev->intr_handle);
- hns3_intr_unregister(&pci_dev->intr_handle, hns3vf_interrupt_handler,
+ rte_intr_disable(pci_dev->intr_handle);
+ hns3_intr_unregister(pci_dev->intr_handle, hns3vf_interrupt_handler,
eth_dev);
hns3_cmd_uninit(hw);
hns3_cmd_destroy_queue(hw);
@@ -2118,7 +2118,7 @@ hns3vf_unmap_rx_interrupt(struct rte_eth_dev *dev)
{
struct hns3_hw *hw = HNS3_DEV_PRIVATE_TO_HW(dev->data->dev_private);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
uint8_t base = RTE_INTR_VEC_ZERO_OFFSET;
uint8_t vec = RTE_INTR_VEC_ZERO_OFFSET;
uint16_t q_id;
@@ -2136,16 +2136,16 @@ hns3vf_unmap_rx_interrupt(struct rte_eth_dev *dev)
(void)hns3vf_bind_ring_with_vector(hw, vec, false,
HNS3_RING_TYPE_RX,
q_id);
- if (vec < base + intr_handle->nb_efd - 1)
+ if (vec < base + rte_intr_nb_efd_get(intr_handle)
+ - 1)
vec++;
}
}
/* Clean datapath event and queue/vec mapping */
rte_intr_efd_disable(intr_handle);
- if (intr_handle->intr_vec) {
- rte_free(intr_handle->intr_vec);
- intr_handle->intr_vec = NULL;
- }
+
+ /* Cleanup vector list */
+ rte_intr_vec_list_free(intr_handle);
}
static int
@@ -2301,7 +2301,7 @@ static int
hns3vf_map_rx_interrupt(struct rte_eth_dev *dev)
{
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
struct hns3_hw *hw = HNS3_DEV_PRIVATE_TO_HW(dev->data->dev_private);
uint8_t base = RTE_INTR_VEC_ZERO_OFFSET;
uint8_t vec = RTE_INTR_VEC_ZERO_OFFSET;
@@ -2324,16 +2324,13 @@ hns3vf_map_rx_interrupt(struct rte_eth_dev *dev)
if (rte_intr_efd_enable(intr_handle, intr_vector))
return -EINVAL;
- if (intr_handle->intr_vec == NULL) {
- intr_handle->intr_vec =
- rte_zmalloc("intr_vec",
- hw->used_rx_queues * sizeof(int), 0);
- if (intr_handle->intr_vec == NULL) {
- hns3_err(hw, "Failed to allocate %u rx_queues"
- " intr_vec", hw->used_rx_queues);
- ret = -ENOMEM;
- goto vf_alloc_intr_vec_error;
- }
+ /* Allocate vector list */
+ if (rte_intr_vec_list_alloc(intr_handle, "intr_vec",
+ hw->used_rx_queues)) {
+ hns3_err(hw, "Failed to allocate %u rx_queues"
+ " intr_vec", hw->used_rx_queues);
+ ret = -ENOMEM;
+ goto vf_alloc_intr_vec_error;
}
if (rte_intr_allow_others(intr_handle)) {
@@ -2346,20 +2343,22 @@ hns3vf_map_rx_interrupt(struct rte_eth_dev *dev)
HNS3_RING_TYPE_RX, q_id);
if (ret)
goto vf_bind_vector_error;
- intr_handle->intr_vec[q_id] = vec;
+
+ if (rte_intr_vec_list_index_set(intr_handle, q_id, vec))
+ goto vf_bind_vector_error;
+
/*
* If there are not enough efds (e.g. not enough interrupt),
* remaining queues will be bond to the last interrupt.
*/
- if (vec < base + intr_handle->nb_efd - 1)
+ if (vec < base + rte_intr_nb_efd_get(intr_handle) - 1)
vec++;
}
rte_intr_enable(intr_handle);
return 0;
vf_bind_vector_error:
- free(intr_handle->intr_vec);
- intr_handle->intr_vec = NULL;
+ rte_intr_vec_list_free(intr_handle);
vf_alloc_intr_vec_error:
rte_intr_efd_disable(intr_handle);
return ret;
@@ -2370,7 +2369,7 @@ hns3vf_restore_rx_interrupt(struct hns3_hw *hw)
{
struct rte_eth_dev *dev = &rte_eth_devices[hw->data->port_id];
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
uint16_t q_id;
int ret;
@@ -2380,8 +2379,9 @@ hns3vf_restore_rx_interrupt(struct hns3_hw *hw)
if (rte_intr_dp_is_en(intr_handle)) {
for (q_id = 0; q_id < hw->used_rx_queues; q_id++) {
ret = hns3vf_bind_ring_with_vector(hw,
- intr_handle->intr_vec[q_id], true,
- HNS3_RING_TYPE_RX, q_id);
+ rte_intr_vec_list_index_get(intr_handle,
+ q_id),
+ true, HNS3_RING_TYPE_RX, q_id);
if (ret)
return ret;
}
@@ -2845,7 +2845,7 @@ hns3vf_reinit_dev(struct hns3_adapter *hns)
int ret;
if (hw->reset.level == HNS3_VF_FULL_RESET) {
- rte_intr_disable(&pci_dev->intr_handle);
+ rte_intr_disable(pci_dev->intr_handle);
ret = hns3vf_set_bus_master(pci_dev, true);
if (ret < 0) {
hns3_err(hw, "failed to set pci bus, ret = %d", ret);
@@ -2871,7 +2871,7 @@ hns3vf_reinit_dev(struct hns3_adapter *hns)
hns3_err(hw, "Failed to enable msix");
}
- rte_intr_enable(&pci_dev->intr_handle);
+ rte_intr_enable(pci_dev->intr_handle);
}
ret = hns3_reset_all_tqps(hns);
diff --git a/drivers/net/hns3/hns3_rxtx.c b/drivers/net/hns3/hns3_rxtx.c
index 481872e395..5da020f37d 100644
--- a/drivers/net/hns3/hns3_rxtx.c
+++ b/drivers/net/hns3/hns3_rxtx.c
@@ -1038,7 +1038,7 @@ int
hns3_dev_rx_queue_intr_enable(struct rte_eth_dev *dev, uint16_t queue_id)
{
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
struct hns3_hw *hw = HNS3_DEV_PRIVATE_TO_HW(dev->data->dev_private);
if (dev->data->dev_conf.intr_conf.rxq == 0)
diff --git a/drivers/net/i40e/i40e_ethdev.c b/drivers/net/i40e/i40e_ethdev.c
index bd97d93dd7..1fab60bff5 100644
--- a/drivers/net/i40e/i40e_ethdev.c
+++ b/drivers/net/i40e/i40e_ethdev.c
@@ -1451,7 +1451,7 @@ eth_i40e_dev_init(struct rte_eth_dev *dev, void *init_params __rte_unused)
}
i40e_set_default_ptype_table(dev);
pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- intr_handle = &pci_dev->intr_handle;
+ intr_handle = pci_dev->intr_handle;
rte_eth_copy_pci_info(dev, pci_dev);
dev->data->dev_flags |= RTE_ETH_DEV_AUTOFILL_QUEUE_XSTATS;
@@ -1987,7 +1987,7 @@ i40e_vsi_queues_unbind_intr(struct i40e_vsi *vsi)
{
struct rte_eth_dev *dev = I40E_VSI_TO_ETH_DEV(vsi);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
struct i40e_hw *hw = I40E_VSI_TO_HW(vsi);
uint16_t msix_vect = vsi->msix_intr;
uint16_t i;
@@ -2103,10 +2103,11 @@ i40e_vsi_queues_bind_intr(struct i40e_vsi *vsi, uint16_t itr_idx)
{
struct rte_eth_dev *dev = I40E_VSI_TO_ETH_DEV(vsi);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
struct i40e_hw *hw = I40E_VSI_TO_HW(vsi);
uint16_t msix_vect = vsi->msix_intr;
- uint16_t nb_msix = RTE_MIN(vsi->nb_msix, intr_handle->nb_efd);
+ uint16_t nb_msix = RTE_MIN(vsi->nb_msix,
+ rte_intr_nb_efd_get(intr_handle));
uint16_t queue_idx = 0;
int record = 0;
int i;
@@ -2156,8 +2157,8 @@ i40e_vsi_queues_bind_intr(struct i40e_vsi *vsi, uint16_t itr_idx)
vsi->nb_used_qps - i,
itr_idx);
for (; !!record && i < vsi->nb_used_qps; i++)
- intr_handle->intr_vec[queue_idx + i] =
- msix_vect;
+ rte_intr_vec_list_index_set(intr_handle,
+ queue_idx + i, msix_vect);
break;
}
/* 1:1 queue/msix_vect mapping */
@@ -2165,7 +2166,9 @@ i40e_vsi_queues_bind_intr(struct i40e_vsi *vsi, uint16_t itr_idx)
vsi->base_queue + i, 1,
itr_idx);
if (!!record)
- intr_handle->intr_vec[queue_idx + i] = msix_vect;
+ if (rte_intr_vec_list_index_set(intr_handle,
+ queue_idx + i, msix_vect))
+ return -rte_errno;
msix_vect++;
nb_msix--;
@@ -2179,7 +2182,7 @@ i40e_vsi_enable_queues_intr(struct i40e_vsi *vsi)
{
struct rte_eth_dev *dev = I40E_VSI_TO_ETH_DEV(vsi);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
struct i40e_hw *hw = I40E_VSI_TO_HW(vsi);
struct i40e_pf *pf = I40E_VSI_TO_PF(vsi);
uint16_t msix_intr, i;
@@ -2206,7 +2209,7 @@ i40e_vsi_disable_queues_intr(struct i40e_vsi *vsi)
{
struct rte_eth_dev *dev = I40E_VSI_TO_ETH_DEV(vsi);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
struct i40e_hw *hw = I40E_VSI_TO_HW(vsi);
struct i40e_pf *pf = I40E_VSI_TO_PF(vsi);
uint16_t msix_intr, i;
@@ -2372,7 +2375,7 @@ i40e_dev_start(struct rte_eth_dev *dev)
struct i40e_vsi *main_vsi = pf->main_vsi;
int ret, i;
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
uint32_t intr_vector = 0;
struct i40e_vsi *vsi;
uint16_t nb_rxq, nb_txq;
@@ -2390,12 +2393,9 @@ i40e_dev_start(struct rte_eth_dev *dev)
return ret;
}
- if (rte_intr_dp_is_en(intr_handle) && !intr_handle->intr_vec) {
- intr_handle->intr_vec =
- rte_zmalloc("intr_vec",
- dev->data->nb_rx_queues * sizeof(int),
- 0);
- if (!intr_handle->intr_vec) {
+ if (rte_intr_dp_is_en(intr_handle)) {
+ if (rte_intr_vec_list_alloc(intr_handle, "intr_vec",
+ dev->data->nb_rx_queues)) {
PMD_INIT_LOG(ERR,
"Failed to allocate %d rx_queues intr_vec",
dev->data->nb_rx_queues);
@@ -2536,7 +2536,7 @@ i40e_dev_stop(struct rte_eth_dev *dev)
struct i40e_hw *hw = I40E_DEV_PRIVATE_TO_HW(dev->data->dev_private);
struct i40e_vsi *main_vsi = pf->main_vsi;
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
int i;
if (hw->adapter_stopped == 1)
@@ -2577,10 +2577,9 @@ i40e_dev_stop(struct rte_eth_dev *dev)
/* Clean datapath event and queue/vec mapping */
rte_intr_efd_disable(intr_handle);
- if (intr_handle->intr_vec) {
- rte_free(intr_handle->intr_vec);
- intr_handle->intr_vec = NULL;
- }
+
+ /* Cleanup vector list */
+ rte_intr_vec_list_free(intr_handle);
/* reset hierarchy commit */
pf->tm_conf.committed = false;
@@ -2599,7 +2598,7 @@ i40e_dev_close(struct rte_eth_dev *dev)
struct i40e_pf *pf = I40E_DEV_PRIVATE_TO_PF(dev->data->dev_private);
struct i40e_hw *hw = I40E_DEV_PRIVATE_TO_HW(dev->data->dev_private);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
struct i40e_mirror_rule *p_mirror;
struct i40e_filter_control_settings settings;
struct rte_flow *p_flow;
@@ -11407,11 +11406,11 @@ static int
i40e_dev_rx_queue_intr_enable(struct rte_eth_dev *dev, uint16_t queue_id)
{
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
struct i40e_hw *hw = I40E_DEV_PRIVATE_TO_HW(dev->data->dev_private);
uint16_t msix_intr;
- msix_intr = intr_handle->intr_vec[queue_id];
+ msix_intr = rte_intr_vec_list_index_get(intr_handle, queue_id);
if (msix_intr == I40E_MISC_VEC_ID)
I40E_WRITE_REG(hw, I40E_PFINT_DYN_CTL0,
I40E_PFINT_DYN_CTL0_INTENA_MASK |
@@ -11426,7 +11425,7 @@ i40e_dev_rx_queue_intr_enable(struct rte_eth_dev *dev, uint16_t queue_id)
I40E_PFINT_DYN_CTLN_ITR_INDX_MASK);
I40E_WRITE_FLUSH(hw);
- rte_intr_ack(&pci_dev->intr_handle);
+ rte_intr_ack(pci_dev->intr_handle);
return 0;
}
@@ -11435,11 +11434,11 @@ static int
i40e_dev_rx_queue_intr_disable(struct rte_eth_dev *dev, uint16_t queue_id)
{
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
struct i40e_hw *hw = I40E_DEV_PRIVATE_TO_HW(dev->data->dev_private);
uint16_t msix_intr;
- msix_intr = intr_handle->intr_vec[queue_id];
+ msix_intr = rte_intr_vec_list_index_get(intr_handle, queue_id);
if (msix_intr == I40E_MISC_VEC_ID)
I40E_WRITE_REG(hw, I40E_PFINT_DYN_CTL0,
I40E_PFINT_DYN_CTL0_ITR_INDX_MASK);
diff --git a/drivers/net/iavf/iavf_ethdev.c b/drivers/net/iavf/iavf_ethdev.c
index 5a5a7f59e1..f99e421168 100644
--- a/drivers/net/iavf/iavf_ethdev.c
+++ b/drivers/net/iavf/iavf_ethdev.c
@@ -660,17 +660,16 @@ static int iavf_config_rx_queues_irqs(struct rte_eth_dev *dev,
return -1;
}
- if (rte_intr_dp_is_en(intr_handle) && !intr_handle->intr_vec) {
- intr_handle->intr_vec =
- rte_zmalloc("intr_vec",
- dev->data->nb_rx_queues * sizeof(int), 0);
- if (!intr_handle->intr_vec) {
+ if (rte_intr_dp_is_en(intr_handle)) {
+ if (rte_intr_vec_list_alloc(intr_handle, "intr_vec",
+ dev->data->nb_rx_queues)) {
PMD_DRV_LOG(ERR, "Failed to allocate %d rx intr_vec",
dev->data->nb_rx_queues);
return -1;
}
}
+
qv_map = rte_zmalloc("qv_map",
dev->data->nb_rx_queues * sizeof(struct iavf_qv_map), 0);
if (!qv_map) {
@@ -730,7 +729,8 @@ static int iavf_config_rx_queues_irqs(struct rte_eth_dev *dev,
for (i = 0; i < dev->data->nb_rx_queues; i++) {
qv_map[i].queue_id = i;
qv_map[i].vector_id = vf->msix_base;
- intr_handle->intr_vec[i] = IAVF_MISC_VEC_ID;
+ rte_intr_vec_list_index_set(intr_handle,
+ i, IAVF_MISC_VEC_ID);
}
vf->qv_map = qv_map;
PMD_DRV_LOG(DEBUG,
@@ -740,14 +740,16 @@ static int iavf_config_rx_queues_irqs(struct rte_eth_dev *dev,
/* If Rx interrupt is reuquired, and we can use
* multi interrupts, then the vec is from 1
*/
- vf->nb_msix = RTE_MIN(intr_handle->nb_efd,
- (uint16_t)(vf->vf_res->max_vectors - 1));
+ vf->nb_msix =
+ RTE_MIN(rte_intr_nb_efd_get(intr_handle),
+ (uint16_t)(vf->vf_res->max_vectors - 1));
vf->msix_base = IAVF_RX_VEC_START;
vec = IAVF_RX_VEC_START;
for (i = 0; i < dev->data->nb_rx_queues; i++) {
qv_map[i].queue_id = i;
qv_map[i].vector_id = vec;
- intr_handle->intr_vec[i] = vec++;
+ rte_intr_vec_list_index_set(intr_handle,
+ i, vec++);
if (vec >= vf->nb_msix + IAVF_RX_VEC_START)
vec = IAVF_RX_VEC_START;
}
@@ -789,8 +791,7 @@ static int iavf_config_rx_queues_irqs(struct rte_eth_dev *dev,
vf->qv_map = NULL;
qv_map_alloc_err:
- rte_free(intr_handle->intr_vec);
- intr_handle->intr_vec = NULL;
+ rte_intr_vec_list_free(intr_handle);
return -1;
}
@@ -926,10 +927,7 @@ iavf_dev_stop(struct rte_eth_dev *dev)
/* Disable the interrupt for Rx */
rte_intr_efd_disable(intr_handle);
/* Rx interrupt vector mapping free */
- if (intr_handle->intr_vec) {
- rte_free(intr_handle->intr_vec);
- intr_handle->intr_vec = NULL;
- }
+ rte_intr_vec_list_free(intr_handle);
/* remove all mac addrs */
iavf_add_del_all_mac_addr(adapter, false);
@@ -1669,7 +1667,8 @@ iavf_dev_rx_queue_intr_enable(struct rte_eth_dev *dev, uint16_t queue_id)
struct iavf_info *vf = IAVF_DEV_PRIVATE_TO_VF(adapter);
uint16_t msix_intr;
- msix_intr = pci_dev->intr_handle.intr_vec[queue_id];
+ msix_intr = rte_intr_vec_list_index_get(pci_dev->intr_handle,
+ queue_id);
if (msix_intr == IAVF_MISC_VEC_ID) {
PMD_DRV_LOG(INFO, "MISC is also enabled for control");
IAVF_WRITE_REG(hw, IAVF_VFINT_DYN_CTL01,
@@ -1688,7 +1687,7 @@ iavf_dev_rx_queue_intr_enable(struct rte_eth_dev *dev, uint16_t queue_id)
IAVF_WRITE_FLUSH(hw);
if (vf->vf_res->vf_cap_flags & VIRTCHNL_VF_OFFLOAD_WB_ON_ITR)
- rte_intr_ack(&pci_dev->intr_handle);
+ rte_intr_ack(pci_dev->intr_handle);
return 0;
}
@@ -1700,7 +1699,8 @@ iavf_dev_rx_queue_intr_disable(struct rte_eth_dev *dev, uint16_t queue_id)
struct iavf_hw *hw = IAVF_DEV_PRIVATE_TO_HW(dev->data->dev_private);
uint16_t msix_intr;
- msix_intr = pci_dev->intr_handle.intr_vec[queue_id];
+ msix_intr = rte_intr_vec_list_index_get(pci_dev->intr_handle,
+ queue_id);
if (msix_intr == IAVF_MISC_VEC_ID) {
PMD_DRV_LOG(ERR, "MISC is used for control, cannot disable it");
return -EIO;
@@ -2384,12 +2384,12 @@ iavf_dev_init(struct rte_eth_dev *eth_dev)
if (vf->vf_res->vf_cap_flags & VIRTCHNL_VF_OFFLOAD_WB_ON_ITR) {
/* register callback func to eal lib */
- rte_intr_callback_register(&pci_dev->intr_handle,
+ rte_intr_callback_register(pci_dev->intr_handle,
iavf_dev_interrupt_handler,
(void *)eth_dev);
/* enable uio intr after callback register */
- rte_intr_enable(&pci_dev->intr_handle);
+ rte_intr_enable(pci_dev->intr_handle);
} else {
rte_eal_alarm_set(IAVF_ALARM_INTERVAL,
iavf_dev_alarm_handler, eth_dev);
@@ -2423,7 +2423,7 @@ iavf_dev_close(struct rte_eth_dev *dev)
{
struct iavf_hw *hw = IAVF_DEV_PRIVATE_TO_HW(dev->data->dev_private);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
struct iavf_adapter *adapter =
IAVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
struct iavf_info *vf = IAVF_DEV_PRIVATE_TO_VF(dev->data->dev_private);
diff --git a/drivers/net/iavf/iavf_vchnl.c b/drivers/net/iavf/iavf_vchnl.c
index 3275687927..f76b4b09c4 100644
--- a/drivers/net/iavf/iavf_vchnl.c
+++ b/drivers/net/iavf/iavf_vchnl.c
@@ -1685,9 +1685,9 @@ iavf_request_queues(struct iavf_adapter *adapter, uint16_t num)
/* disable interrupt to avoid the admin queue message to be read
* before iavf_read_msg_from_pf.
*/
- rte_intr_disable(&pci_dev->intr_handle);
+ rte_intr_disable(pci_dev->intr_handle);
err = iavf_execute_vf_cmd(adapter, &args);
- rte_intr_enable(&pci_dev->intr_handle);
+ rte_intr_enable(pci_dev->intr_handle);
} else {
rte_eal_alarm_cancel(iavf_dev_alarm_handler, dev);
err = iavf_execute_vf_cmd(adapter, &args);
diff --git a/drivers/net/ice/ice_dcf.c b/drivers/net/ice/ice_dcf.c
index c9c01a14e3..68c13ac48d 100644
--- a/drivers/net/ice/ice_dcf.c
+++ b/drivers/net/ice/ice_dcf.c
@@ -539,7 +539,7 @@ ice_dcf_handle_vsi_update_event(struct ice_dcf_hw *hw)
rte_spinlock_lock(&hw->vc_cmd_send_lock);
- rte_intr_disable(&pci_dev->intr_handle);
+ rte_intr_disable(pci_dev->intr_handle);
ice_dcf_disable_irq0(hw);
for (;;) {
@@ -555,7 +555,7 @@ ice_dcf_handle_vsi_update_event(struct ice_dcf_hw *hw)
rte_delay_ms(ICE_DCF_ARQ_CHECK_TIME);
}
- rte_intr_enable(&pci_dev->intr_handle);
+ rte_intr_enable(pci_dev->intr_handle);
ice_dcf_enable_irq0(hw);
rte_spinlock_unlock(&hw->vc_cmd_send_lock);
@@ -694,9 +694,9 @@ ice_dcf_init_hw(struct rte_eth_dev *eth_dev, struct ice_dcf_hw *hw)
}
hw->eth_dev = eth_dev;
- rte_intr_callback_register(&pci_dev->intr_handle,
+ rte_intr_callback_register(pci_dev->intr_handle,
ice_dcf_dev_interrupt_handler, hw);
- rte_intr_enable(&pci_dev->intr_handle);
+ rte_intr_enable(pci_dev->intr_handle);
ice_dcf_enable_irq0(hw);
return 0;
@@ -718,7 +718,7 @@ void
ice_dcf_uninit_hw(struct rte_eth_dev *eth_dev, struct ice_dcf_hw *hw)
{
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
if (hw->vf_res->vf_cap_flags & VIRTCHNL_VF_OFFLOAD_QOS)
if (hw->tm_conf.committed) {
diff --git a/drivers/net/ice/ice_dcf_ethdev.c b/drivers/net/ice/ice_dcf_ethdev.c
index 4e4cdbcd7d..6d21d2ce75 100644
--- a/drivers/net/ice/ice_dcf_ethdev.c
+++ b/drivers/net/ice/ice_dcf_ethdev.c
@@ -160,11 +160,9 @@ ice_dcf_config_rx_queues_irqs(struct rte_eth_dev *dev,
return -1;
}
- if (rte_intr_dp_is_en(intr_handle) && !intr_handle->intr_vec) {
- intr_handle->intr_vec =
- rte_zmalloc("intr_vec",
- dev->data->nb_rx_queues * sizeof(int), 0);
- if (!intr_handle->intr_vec) {
+ if (rte_intr_dp_is_en(intr_handle)) {
+ if (rte_intr_vec_list_alloc(intr_handle, "intr_vec",
+ dev->data->nb_rx_queues)) {
PMD_DRV_LOG(ERR, "Failed to allocate %d rx intr_vec",
dev->data->nb_rx_queues);
return -1;
@@ -214,7 +212,8 @@ ice_dcf_config_rx_queues_irqs(struct rte_eth_dev *dev,
hw->msix_base = IAVF_MISC_VEC_ID;
for (i = 0; i < dev->data->nb_rx_queues; i++) {
hw->rxq_map[hw->msix_base] |= 1 << i;
- intr_handle->intr_vec[i] = IAVF_MISC_VEC_ID;
+ rte_intr_vec_list_index_set(intr_handle,
+ i, IAVF_MISC_VEC_ID);
}
PMD_DRV_LOG(DEBUG,
"vector %u are mapping to all Rx queues",
@@ -224,12 +223,13 @@ ice_dcf_config_rx_queues_irqs(struct rte_eth_dev *dev,
* multi interrupts, then the vec is from 1
*/
hw->nb_msix = RTE_MIN(hw->vf_res->max_vectors,
- intr_handle->nb_efd);
+ rte_intr_nb_efd_get(intr_handle));
hw->msix_base = IAVF_MISC_VEC_ID;
vec = IAVF_MISC_VEC_ID;
for (i = 0; i < dev->data->nb_rx_queues; i++) {
hw->rxq_map[vec] |= 1 << i;
- intr_handle->intr_vec[i] = vec++;
+ rte_intr_vec_list_index_set(intr_handle,
+ i, vec++);
if (vec >= hw->nb_msix)
vec = IAVF_RX_VEC_START;
}
@@ -634,10 +634,7 @@ ice_dcf_dev_stop(struct rte_eth_dev *dev)
ice_dcf_stop_queues(dev);
rte_intr_efd_disable(intr_handle);
- if (intr_handle->intr_vec) {
- rte_free(intr_handle->intr_vec);
- intr_handle->intr_vec = NULL;
- }
+ rte_intr_vec_list_free(intr_handle);
ice_dcf_add_del_all_mac_addr(&dcf_ad->real_hw, false);
dev->data->dev_link.link_status = ETH_LINK_DOWN;
diff --git a/drivers/net/ice/ice_ethdev.c b/drivers/net/ice/ice_ethdev.c
index 9ab7704ff0..afa4e6c8ed 100644
--- a/drivers/net/ice/ice_ethdev.c
+++ b/drivers/net/ice/ice_ethdev.c
@@ -2171,7 +2171,7 @@ ice_dev_init(struct rte_eth_dev *dev)
ice_set_default_ptype_table(dev);
pci_dev = RTE_DEV_TO_PCI(dev->device);
- intr_handle = &pci_dev->intr_handle;
+ intr_handle = pci_dev->intr_handle;
pf->adapter = ICE_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
pf->dev_data = dev->data;
@@ -2368,7 +2368,7 @@ ice_vsi_disable_queues_intr(struct ice_vsi *vsi)
{
struct rte_eth_dev *dev = &rte_eth_devices[vsi->adapter->pf.dev_data->port_id];
struct rte_pci_device *pci_dev = ICE_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
struct ice_hw *hw = ICE_VSI_TO_HW(vsi);
uint16_t msix_intr, i;
@@ -2398,7 +2398,7 @@ ice_dev_stop(struct rte_eth_dev *dev)
struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
struct ice_vsi *main_vsi = pf->main_vsi;
struct rte_pci_device *pci_dev = ICE_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
uint16_t i;
/* avoid stopping again */
@@ -2423,10 +2423,7 @@ ice_dev_stop(struct rte_eth_dev *dev)
/* Clean datapath event and queue/vec mapping */
rte_intr_efd_disable(intr_handle);
- if (intr_handle->intr_vec) {
- rte_free(intr_handle->intr_vec);
- intr_handle->intr_vec = NULL;
- }
+ rte_intr_vec_list_free(intr_handle);
pf->adapter_stopped = true;
dev->data->dev_started = 0;
@@ -2440,7 +2437,7 @@ ice_dev_close(struct rte_eth_dev *dev)
struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
struct ice_hw *hw = ICE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
struct ice_adapter *ad =
ICE_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
int ret;
@@ -3338,10 +3335,11 @@ ice_vsi_queues_bind_intr(struct ice_vsi *vsi)
{
struct rte_eth_dev *dev = &rte_eth_devices[vsi->adapter->pf.dev_data->port_id];
struct rte_pci_device *pci_dev = ICE_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
struct ice_hw *hw = ICE_VSI_TO_HW(vsi);
uint16_t msix_vect = vsi->msix_intr;
- uint16_t nb_msix = RTE_MIN(vsi->nb_msix, intr_handle->nb_efd);
+ uint16_t nb_msix = RTE_MIN(vsi->nb_msix,
+ rte_intr_nb_efd_get(intr_handle));
uint16_t queue_idx = 0;
int record = 0;
int i;
@@ -3369,8 +3367,9 @@ ice_vsi_queues_bind_intr(struct ice_vsi *vsi)
vsi->nb_used_qps - i);
for (; !!record && i < vsi->nb_used_qps; i++)
- intr_handle->intr_vec[queue_idx + i] =
- msix_vect;
+ rte_intr_vec_list_index_set(intr_handle,
+ queue_idx + i, msix_vect);
+
break;
}
@@ -3379,7 +3378,9 @@ ice_vsi_queues_bind_intr(struct ice_vsi *vsi)
vsi->base_queue + i, 1);
if (!!record)
- intr_handle->intr_vec[queue_idx + i] = msix_vect;
+ rte_intr_vec_list_index_set(intr_handle,
+ queue_idx + i,
+ msix_vect);
msix_vect++;
nb_msix--;
@@ -3391,7 +3392,7 @@ ice_vsi_enable_queues_intr(struct ice_vsi *vsi)
{
struct rte_eth_dev *dev = &rte_eth_devices[vsi->adapter->pf.dev_data->port_id];
struct rte_pci_device *pci_dev = ICE_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
struct ice_hw *hw = ICE_VSI_TO_HW(vsi);
uint16_t msix_intr, i;
@@ -3417,7 +3418,7 @@ ice_rxq_intr_setup(struct rte_eth_dev *dev)
{
struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
struct rte_pci_device *pci_dev = ICE_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
struct ice_vsi *vsi = pf->main_vsi;
uint32_t intr_vector = 0;
@@ -3437,11 +3438,9 @@ ice_rxq_intr_setup(struct rte_eth_dev *dev)
return -1;
}
- if (rte_intr_dp_is_en(intr_handle) && !intr_handle->intr_vec) {
- intr_handle->intr_vec =
- rte_zmalloc(NULL, dev->data->nb_rx_queues * sizeof(int),
- 0);
- if (!intr_handle->intr_vec) {
+ if (rte_intr_dp_is_en(intr_handle)) {
+ if (rte_intr_vec_list_alloc(intr_handle, NULL,
+ dev->data->nb_rx_queues)) {
PMD_DRV_LOG(ERR,
"Failed to allocate %d rx_queues intr_vec",
dev->data->nb_rx_queues);
@@ -4766,19 +4765,19 @@ static int ice_rx_queue_intr_enable(struct rte_eth_dev *dev,
uint16_t queue_id)
{
struct rte_pci_device *pci_dev = ICE_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
struct ice_hw *hw = ICE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
uint32_t val;
uint16_t msix_intr;
- msix_intr = intr_handle->intr_vec[queue_id];
+ msix_intr = rte_intr_vec_list_index_get(intr_handle, queue_id);
val = GLINT_DYN_CTL_INTENA_M | GLINT_DYN_CTL_CLEARPBA_M |
GLINT_DYN_CTL_ITR_INDX_M;
val &= ~GLINT_DYN_CTL_WB_ON_ITR_M;
ICE_WRITE_REG(hw, GLINT_DYN_CTL(msix_intr), val);
- rte_intr_ack(&pci_dev->intr_handle);
+ rte_intr_ack(pci_dev->intr_handle);
return 0;
}
@@ -4787,11 +4786,11 @@ static int ice_rx_queue_intr_disable(struct rte_eth_dev *dev,
uint16_t queue_id)
{
struct rte_pci_device *pci_dev = ICE_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
struct ice_hw *hw = ICE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
uint16_t msix_intr;
- msix_intr = intr_handle->intr_vec[queue_id];
+ msix_intr = rte_intr_vec_list_index_get(intr_handle, queue_id);
ICE_WRITE_REG(hw, GLINT_DYN_CTL(msix_intr), GLINT_DYN_CTL_WB_ON_ITR_M);
diff --git a/drivers/net/igc/igc_ethdev.c b/drivers/net/igc/igc_ethdev.c
index 224a095483..cb7250afff 100644
--- a/drivers/net/igc/igc_ethdev.c
+++ b/drivers/net/igc/igc_ethdev.c
@@ -384,7 +384,7 @@ igc_intr_other_disable(struct rte_eth_dev *dev)
{
struct igc_hw *hw = IGC_DEV_PRIVATE_HW(dev);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
if (rte_intr_allow_others(intr_handle) &&
dev->data->dev_conf.intr_conf.lsc) {
@@ -404,7 +404,7 @@ igc_intr_other_enable(struct rte_eth_dev *dev)
struct igc_interrupt *intr = IGC_DEV_PRIVATE_INTR(dev);
struct igc_hw *hw = IGC_DEV_PRIVATE_HW(dev);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
if (rte_intr_allow_others(intr_handle) &&
dev->data->dev_conf.intr_conf.lsc) {
@@ -616,7 +616,7 @@ eth_igc_stop(struct rte_eth_dev *dev)
struct igc_adapter *adapter = IGC_DEV_PRIVATE(dev);
struct igc_hw *hw = IGC_DEV_PRIVATE_HW(dev);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
struct rte_eth_link link;
dev->data->dev_started = 0;
@@ -668,10 +668,7 @@ eth_igc_stop(struct rte_eth_dev *dev)
/* Clean datapath event and queue/vec mapping */
rte_intr_efd_disable(intr_handle);
- if (intr_handle->intr_vec != NULL) {
- rte_free(intr_handle->intr_vec);
- intr_handle->intr_vec = NULL;
- }
+ rte_intr_vec_list_free(intr_handle);
return 0;
}
@@ -731,7 +728,7 @@ igc_configure_msix_intr(struct rte_eth_dev *dev)
{
struct igc_hw *hw = IGC_DEV_PRIVATE_HW(dev);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
uint32_t intr_mask;
uint32_t vec = IGC_MISC_VEC_ID;
@@ -755,8 +752,8 @@ igc_configure_msix_intr(struct rte_eth_dev *dev)
IGC_WRITE_REG(hw, IGC_GPIE, IGC_GPIE_MSIX_MODE |
IGC_GPIE_PBA | IGC_GPIE_EIAME |
IGC_GPIE_NSICR);
- intr_mask = RTE_LEN2MASK(intr_handle->nb_efd, uint32_t) <<
- misc_shift;
+ intr_mask = RTE_LEN2MASK(rte_intr_nb_efd_get(intr_handle),
+ uint32_t) << misc_shift;
if (dev->data->dev_conf.intr_conf.lsc)
intr_mask |= (1u << IGC_MSIX_OTHER_INTR_VEC);
@@ -773,8 +770,8 @@ igc_configure_msix_intr(struct rte_eth_dev *dev)
for (i = 0; i < dev->data->nb_rx_queues; i++) {
igc_write_ivar(hw, i, 0, vec);
- intr_handle->intr_vec[i] = vec;
- if (vec < base + intr_handle->nb_efd - 1)
+ rte_intr_vec_list_index_set(intr_handle, i, vec);
+ if (vec < base + rte_intr_nb_efd_get(intr_handle) - 1)
vec++;
}
@@ -810,7 +807,7 @@ igc_rxq_interrupt_setup(struct rte_eth_dev *dev)
uint32_t mask;
struct igc_hw *hw = IGC_DEV_PRIVATE_HW(dev);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
int misc_shift = rte_intr_allow_others(intr_handle) ? 1 : 0;
/* won't configure msix register if no mapping is done
@@ -819,7 +816,8 @@ igc_rxq_interrupt_setup(struct rte_eth_dev *dev)
if (!rte_intr_dp_is_en(intr_handle))
return;
- mask = RTE_LEN2MASK(intr_handle->nb_efd, uint32_t) << misc_shift;
+ mask = RTE_LEN2MASK(rte_intr_nb_efd_get(intr_handle), uint32_t)
+ << misc_shift;
IGC_WRITE_REG(hw, IGC_EIMS, mask);
}
@@ -913,7 +911,7 @@ eth_igc_start(struct rte_eth_dev *dev)
struct igc_hw *hw = IGC_DEV_PRIVATE_HW(dev);
struct igc_adapter *adapter = IGC_DEV_PRIVATE(dev);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
uint32_t *speeds;
int ret;
@@ -951,10 +949,9 @@ eth_igc_start(struct rte_eth_dev *dev)
return -1;
}
- if (rte_intr_dp_is_en(intr_handle) && !intr_handle->intr_vec) {
- intr_handle->intr_vec = rte_zmalloc("intr_vec",
- dev->data->nb_rx_queues * sizeof(int), 0);
- if (intr_handle->intr_vec == NULL) {
+ if (rte_intr_dp_is_en(intr_handle)) {
+ if (rte_intr_vec_list_alloc(intr_handle, "intr_vec",
+ dev->data->nb_rx_queues)) {
PMD_DRV_LOG(ERR,
"Failed to allocate %d rx_queues intr_vec",
dev->data->nb_rx_queues);
@@ -1169,7 +1166,7 @@ static int
eth_igc_close(struct rte_eth_dev *dev)
{
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
struct igc_hw *hw = IGC_DEV_PRIVATE_HW(dev);
struct igc_adapter *adapter = IGC_DEV_PRIVATE(dev);
int retry = 0;
@@ -1339,11 +1336,11 @@ eth_igc_dev_init(struct rte_eth_dev *dev)
dev->data->port_id, pci_dev->id.vendor_id,
pci_dev->id.device_id);
- rte_intr_callback_register(&pci_dev->intr_handle,
+ rte_intr_callback_register(pci_dev->intr_handle,
eth_igc_interrupt_handler, (void *)dev);
/* enable uio/vfio intr/eventfd mapping */
- rte_intr_enable(&pci_dev->intr_handle);
+ rte_intr_enable(pci_dev->intr_handle);
/* enable support intr */
igc_intr_other_enable(dev);
@@ -2100,7 +2097,7 @@ eth_igc_rx_queue_intr_disable(struct rte_eth_dev *dev, uint16_t queue_id)
{
struct igc_hw *hw = IGC_DEV_PRIVATE_HW(dev);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
uint32_t vec = IGC_MISC_VEC_ID;
if (rte_intr_allow_others(intr_handle))
@@ -2119,7 +2116,7 @@ eth_igc_rx_queue_intr_enable(struct rte_eth_dev *dev, uint16_t queue_id)
{
struct igc_hw *hw = IGC_DEV_PRIVATE_HW(dev);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
uint32_t vec = IGC_MISC_VEC_ID;
if (rte_intr_allow_others(intr_handle))
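
The igc mask computations above read the eventfd count through rte_intr_nb_efd_get() instead of intr_handle->nb_efd. A minimal sketch of that calculation, with the helper invented and the MMIO write left out:

#include <stdint.h>
#include <rte_common.h>
#include <rte_interrupts.h>

/* Illustrative only: build the Rx-queue interrupt mask from the eventfd
 * count, shifted past the misc vector when "other" interrupts are
 * allowed, as the igc hunks above do.
 */
static uint32_t
example_rxq_intr_mask(struct rte_intr_handle *ih)
{
	int misc_shift = rte_intr_allow_others(ih) ? 1 : 0;

	return RTE_LEN2MASK(rte_intr_nb_efd_get(ih), uint32_t) << misc_shift;
}
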
diff --git a/drivers/net/ionic/ionic_ethdev.c b/drivers/net/ionic/ionic_ethdev.c
index e620793966..bbbf1333cd 100644
--- a/drivers/net/ionic/ionic_ethdev.c
+++ b/drivers/net/ionic/ionic_ethdev.c
@@ -1071,7 +1071,7 @@ static int
ionic_configure_intr(struct ionic_adapter *adapter)
{
struct rte_pci_device *pci_dev = adapter->pci_dev;
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
int err;
IONIC_PRINT(DEBUG, "Configuring %u intrs", adapter->nintrs);
@@ -1085,15 +1085,10 @@ ionic_configure_intr(struct ionic_adapter *adapter)
IONIC_PRINT(DEBUG,
"Packet I/O interrupt on datapath is enabled");
- if (!intr_handle->intr_vec) {
- intr_handle->intr_vec = rte_zmalloc("intr_vec",
- adapter->nintrs * sizeof(int), 0);
-
- if (!intr_handle->intr_vec) {
- IONIC_PRINT(ERR, "Failed to allocate %u vectors",
- adapter->nintrs);
- return -ENOMEM;
- }
+ if (rte_intr_vec_list_alloc(intr_handle, "intr_vec", adapter->nintrs)) {
+ IONIC_PRINT(ERR, "Failed to allocate %u vectors",
+ adapter->nintrs);
+ return -ENOMEM;
}
err = rte_intr_callback_register(intr_handle,
@@ -1122,7 +1117,7 @@ static void
ionic_unconfigure_intr(struct ionic_adapter *adapter)
{
struct rte_pci_device *pci_dev = adapter->pci_dev;
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
rte_intr_disable(intr_handle);
diff --git a/drivers/net/ixgbe/ixgbe_ethdev.c b/drivers/net/ixgbe/ixgbe_ethdev.c
index 47693c0c47..3e2f7dd6d1 100644
--- a/drivers/net/ixgbe/ixgbe_ethdev.c
+++ b/drivers/net/ixgbe/ixgbe_ethdev.c
@@ -1034,7 +1034,7 @@ eth_ixgbe_dev_init(struct rte_eth_dev *eth_dev, void *init_params __rte_unused)
{
struct ixgbe_adapter *ad = eth_dev->data->dev_private;
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
struct ixgbe_hw *hw =
IXGBE_DEV_PRIVATE_TO_HW(eth_dev->data->dev_private);
struct ixgbe_vfta *shadow_vfta =
@@ -1533,7 +1533,7 @@ eth_ixgbevf_dev_init(struct rte_eth_dev *eth_dev)
uint32_t tc, tcs;
struct ixgbe_adapter *ad = eth_dev->data->dev_private;
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
struct ixgbe_hw *hw =
IXGBE_DEV_PRIVATE_TO_HW(eth_dev->data->dev_private);
struct ixgbe_vfta *shadow_vfta =
@@ -2549,7 +2549,7 @@ ixgbe_dev_start(struct rte_eth_dev *dev)
struct ixgbe_vf_info *vfinfo =
*IXGBE_DEV_PRIVATE_TO_P_VFDATA(dev->data->dev_private);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
uint32_t intr_vector = 0;
int err;
bool link_up = false, negotiate = 0;
@@ -2604,11 +2604,9 @@ ixgbe_dev_start(struct rte_eth_dev *dev)
return -1;
}
- if (rte_intr_dp_is_en(intr_handle) && !intr_handle->intr_vec) {
- intr_handle->intr_vec =
- rte_zmalloc("intr_vec",
- dev->data->nb_rx_queues * sizeof(int), 0);
- if (intr_handle->intr_vec == NULL) {
+ if (rte_intr_dp_is_en(intr_handle)) {
+ if (rte_intr_vec_list_alloc(intr_handle, "intr_vec",
+ dev->data->nb_rx_queues)) {
PMD_INIT_LOG(ERR, "Failed to allocate %d rx_queues"
" intr_vec", dev->data->nb_rx_queues);
return -ENOMEM;
@@ -2844,7 +2842,7 @@ ixgbe_dev_stop(struct rte_eth_dev *dev)
struct ixgbe_vf_info *vfinfo =
*IXGBE_DEV_PRIVATE_TO_P_VFDATA(dev->data->dev_private);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
int vf;
struct ixgbe_tm_conf *tm_conf =
IXGBE_DEV_PRIVATE_TO_TM_CONF(dev->data->dev_private);
@@ -2895,10 +2893,7 @@ ixgbe_dev_stop(struct rte_eth_dev *dev)
/* Clean datapath event and queue/vec mapping */
rte_intr_efd_disable(intr_handle);
- if (intr_handle->intr_vec != NULL) {
- rte_free(intr_handle->intr_vec);
- intr_handle->intr_vec = NULL;
- }
+ rte_intr_vec_list_free(intr_handle);
/* reset hierarchy commit */
tm_conf->committed = false;
@@ -2982,7 +2977,7 @@ ixgbe_dev_close(struct rte_eth_dev *dev)
struct ixgbe_hw *hw =
IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
int retries = 0;
int ret;
@@ -4628,7 +4623,7 @@ ixgbe_dev_interrupt_delayed_handler(void *param)
{
struct rte_eth_dev *dev = (struct rte_eth_dev *)param;
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
struct ixgbe_interrupt *intr =
IXGBE_DEV_PRIVATE_TO_INTR(dev->data->dev_private);
struct ixgbe_hw *hw =
@@ -5309,7 +5304,7 @@ ixgbevf_dev_start(struct rte_eth_dev *dev)
IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
uint32_t intr_vector = 0;
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
int err, mask = 0;
@@ -5372,11 +5367,9 @@ ixgbevf_dev_start(struct rte_eth_dev *dev)
}
}
- if (rte_intr_dp_is_en(intr_handle) && !intr_handle->intr_vec) {
- intr_handle->intr_vec =
- rte_zmalloc("intr_vec",
- dev->data->nb_rx_queues * sizeof(int), 0);
- if (intr_handle->intr_vec == NULL) {
+ if (rte_intr_dp_is_en(intr_handle)) {
+ if (rte_intr_vec_list_alloc(intr_handle, "intr_vec",
+ dev->data->nb_rx_queues)) {
PMD_INIT_LOG(ERR, "Failed to allocate %d rx_queues"
" intr_vec", dev->data->nb_rx_queues);
ixgbe_dev_clear_queues(dev);
@@ -5416,7 +5409,7 @@ ixgbevf_dev_stop(struct rte_eth_dev *dev)
struct ixgbe_hw *hw = IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
struct ixgbe_adapter *adapter = dev->data->dev_private;
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
if (hw->adapter_stopped)
return 0;
@@ -5444,10 +5437,7 @@ ixgbevf_dev_stop(struct rte_eth_dev *dev)
/* Clean datapath event and queue/vec mapping */
rte_intr_efd_disable(intr_handle);
- if (intr_handle->intr_vec != NULL) {
- rte_free(intr_handle->intr_vec);
- intr_handle->intr_vec = NULL;
- }
+ rte_intr_vec_list_free(intr_handle);
adapter->rss_reta_updated = 0;
@@ -5459,7 +5449,7 @@ ixgbevf_dev_close(struct rte_eth_dev *dev)
{
struct ixgbe_hw *hw = IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
int ret;
PMD_INIT_FUNC_TRACE();
@@ -5942,7 +5932,7 @@ static int
ixgbevf_dev_rx_queue_intr_enable(struct rte_eth_dev *dev, uint16_t queue_id)
{
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
struct ixgbe_interrupt *intr =
IXGBE_DEV_PRIVATE_TO_INTR(dev->data->dev_private);
struct ixgbe_hw *hw =
@@ -5968,7 +5958,7 @@ ixgbevf_dev_rx_queue_intr_disable(struct rte_eth_dev *dev, uint16_t queue_id)
struct ixgbe_hw *hw =
IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
uint32_t vec = IXGBE_MISC_VEC_ID;
if (rte_intr_allow_others(intr_handle))
@@ -5984,7 +5974,7 @@ static int
ixgbe_dev_rx_queue_intr_enable(struct rte_eth_dev *dev, uint16_t queue_id)
{
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
uint32_t mask;
struct ixgbe_hw *hw =
IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
@@ -6111,7 +6101,7 @@ static void
ixgbevf_configure_msix(struct rte_eth_dev *dev)
{
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
struct ixgbe_hw *hw =
IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
uint32_t q_idx;
@@ -6138,8 +6128,10 @@ ixgbevf_configure_msix(struct rte_eth_dev *dev)
* as IXGBE_VF_MAXMSIVECOTR = 1
*/
ixgbevf_set_ivar_map(hw, 0, q_idx, vector_idx);
- intr_handle->intr_vec[q_idx] = vector_idx;
- if (vector_idx < base + intr_handle->nb_efd - 1)
+ rte_intr_vec_list_index_set(intr_handle, q_idx,
+ vector_idx);
+ if (vector_idx < base + rte_intr_nb_efd_get(intr_handle)
+ - 1)
vector_idx++;
}
@@ -6160,7 +6152,7 @@ static void
ixgbe_configure_msix(struct rte_eth_dev *dev)
{
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
struct ixgbe_hw *hw =
IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
uint32_t queue_id, base = IXGBE_MISC_VEC_ID;
@@ -6204,8 +6196,10 @@ ixgbe_configure_msix(struct rte_eth_dev *dev)
queue_id++) {
/* by default, 1:1 mapping */
ixgbe_set_ivar_map(hw, 0, queue_id, vec);
- intr_handle->intr_vec[queue_id] = vec;
- if (vec < base + intr_handle->nb_efd - 1)
+ rte_intr_vec_list_index_set(intr_handle,
+ queue_id, vec);
+ if (vec < base + rte_intr_nb_efd_get(intr_handle)
+ - 1)
vec++;
}
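For the PCI PMDs above (ionic, ixgbe and the earlier igc tail), the recurring change is that the Rx interrupt vector list is no longer rte_zmalloc'd and rte_free'd by the driver; the interrupt library owns that storage and the driver only calls alloc/free and the index setter. A minimal sketch of the resulting start/stop pattern, assuming the rte_intr_vec_list_*/rte_intr_efd_* calls keep the signatures used in these hunks (my_rxq_intr_setup/my_rxq_intr_teardown and the 1:1 queue-to-vector mapping are illustrative only, not part of this series):

	#include <errno.h>
	#include <rte_errno.h>
	#include <rte_interrupts.h>

	static int
	my_rxq_intr_setup(struct rte_intr_handle *intr_handle,
			  uint16_t nb_rx_queues)
	{
		uint16_t q;

		if (!rte_intr_dp_is_en(intr_handle))
			return 0;
		/* The vector list is sized and zeroed inside the library. */
		if (rte_intr_vec_list_alloc(intr_handle, "intr_vec",
					    nb_rx_queues))
			return -ENOMEM;
		/* Queue/vector mapping goes through the setter rather than
		 * writing intr_handle->intr_vec[] directly.
		 */
		for (q = 0; q < nb_rx_queues; q++)
			if (rte_intr_vec_list_index_set(intr_handle, q,
					RTE_INTR_VEC_RXTX_OFFSET + q))
				return -rte_errno;
		return 0;
	}

	static void
	my_rxq_intr_teardown(struct rte_intr_handle *intr_handle)
	{
		rte_intr_efd_disable(intr_handle);
		/* Replaces the open-coded rte_free(intr_handle->intr_vec);
		 * intr_handle->intr_vec = NULL; block seen in the stop paths.
		 */
		rte_intr_vec_list_free(intr_handle);
	}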
diff --git a/drivers/net/memif/memif_socket.c b/drivers/net/memif/memif_socket.c
index f58ff4c0cb..1dad451b8a 100644
--- a/drivers/net/memif/memif_socket.c
+++ b/drivers/net/memif/memif_socket.c
@@ -65,7 +65,8 @@ memif_msg_send_from_queue(struct memif_control_channel *cc)
if (e == NULL)
return 0;
- size = memif_msg_send(cc->intr_handle.fd, &e->msg, e->fd);
+ size = memif_msg_send(rte_intr_fd_get(cc->intr_handle), &e->msg,
+ e->fd);
if (size != sizeof(memif_msg_t)) {
MIF_LOG(ERR, "sendmsg fail: %s.", strerror(errno));
ret = -1;
@@ -317,7 +318,9 @@ memif_msg_receive_add_ring(struct rte_eth_dev *dev, memif_msg_t *msg, int fd)
mq = (ar->flags & MEMIF_MSG_ADD_RING_FLAG_C2S) ?
dev->data->rx_queues[ar->index] : dev->data->tx_queues[ar->index];
- mq->intr_handle.fd = fd;
+ if (rte_intr_fd_set(mq->intr_handle, fd))
+ return -1;
+
mq->log2_ring_size = ar->log2_ring_size;
mq->region = ar->region;
mq->ring_offset = ar->offset;
@@ -453,7 +456,7 @@ memif_msg_enq_add_ring(struct rte_eth_dev *dev, uint8_t idx,
dev->data->rx_queues[idx];
e->msg.type = MEMIF_MSG_TYPE_ADD_RING;
- e->fd = mq->intr_handle.fd;
+ e->fd = rte_intr_fd_get(mq->intr_handle);
ar->index = idx;
ar->offset = mq->ring_offset;
ar->region = mq->region;
@@ -505,12 +508,13 @@ memif_intr_unregister_handler(struct rte_intr_handle *intr_handle, void *arg)
struct memif_control_channel *cc = arg;
/* close control channel fd */
- close(intr_handle->fd);
+ close(rte_intr_fd_get(intr_handle));
/* clear message queue */
while ((elt = TAILQ_FIRST(&cc->msg_queue)) != NULL) {
TAILQ_REMOVE(&cc->msg_queue, elt, next);
rte_free(elt);
}
+ rte_intr_instance_free(cc->intr_handle);
/* free control channel */
rte_free(cc);
}
@@ -548,8 +552,8 @@ memif_disconnect(struct rte_eth_dev *dev)
"Unexpected message(s) in message queue.");
}
- ih = &pmd->cc->intr_handle;
- if (ih->fd > 0) {
+ ih = pmd->cc->intr_handle;
+ if (rte_intr_fd_get(ih) > 0) {
ret = rte_intr_callback_unregister(ih,
memif_intr_handler,
pmd->cc);
@@ -563,7 +567,8 @@ memif_disconnect(struct rte_eth_dev *dev)
pmd->cc,
memif_intr_unregister_handler);
} else if (ret > 0) {
- close(ih->fd);
+ close(rte_intr_fd_get(ih));
+ rte_intr_instance_free(ih);
rte_free(pmd->cc);
}
pmd->cc = NULL;
@@ -587,9 +592,10 @@ memif_disconnect(struct rte_eth_dev *dev)
else
continue;
}
- if (mq->intr_handle.fd > 0) {
- close(mq->intr_handle.fd);
- mq->intr_handle.fd = -1;
+
+ if (rte_intr_fd_get(mq->intr_handle) > 0) {
+ close(rte_intr_fd_get(mq->intr_handle));
+ rte_intr_fd_set(mq->intr_handle, -1);
}
}
for (i = 0; i < pmd->cfg.num_s2c_rings; i++) {
@@ -604,9 +610,10 @@ memif_disconnect(struct rte_eth_dev *dev)
else
continue;
}
- if (mq->intr_handle.fd > 0) {
- close(mq->intr_handle.fd);
- mq->intr_handle.fd = -1;
+
+ if (rte_intr_fd_get(mq->intr_handle) > 0) {
+ close(rte_intr_fd_get(mq->intr_handle));
+ rte_intr_fd_set(mq->intr_handle, -1);
}
}
@@ -644,7 +651,7 @@ memif_msg_receive(struct memif_control_channel *cc)
mh.msg_control = ctl;
mh.msg_controllen = sizeof(ctl);
- size = recvmsg(cc->intr_handle.fd, &mh, 0);
+ size = recvmsg(rte_intr_fd_get(cc->intr_handle), &mh, 0);
if (size != sizeof(memif_msg_t)) {
MIF_LOG(DEBUG, "Invalid message size = %zd", size);
if (size > 0)
@@ -774,7 +781,7 @@ memif_intr_handler(void *arg)
/* if driver failed to assign device */
if (cc->dev == NULL) {
memif_msg_send_from_queue(cc);
- ret = rte_intr_callback_unregister_pending(&cc->intr_handle,
+ ret = rte_intr_callback_unregister_pending(cc->intr_handle,
memif_intr_handler,
cc,
memif_intr_unregister_handler);
@@ -812,12 +819,12 @@ memif_listener_handler(void *arg)
int ret;
addr_len = sizeof(client);
- sockfd = accept(socket->intr_handle.fd, (struct sockaddr *)&client,
- (socklen_t *)&addr_len);
+ sockfd = accept(rte_intr_fd_get(socket->intr_handle),
+ (struct sockaddr *)&client, (socklen_t *)&addr_len);
if (sockfd < 0) {
MIF_LOG(ERR,
"Failed to accept connection request on socket fd %d",
- socket->intr_handle.fd);
+ rte_intr_fd_get(socket->intr_handle));
return;
}
@@ -829,13 +836,26 @@ memif_listener_handler(void *arg)
goto error;
}
- cc->intr_handle.fd = sockfd;
- cc->intr_handle.type = RTE_INTR_HANDLE_EXT;
+ /* Allocate interrupt instance */
+ cc->intr_handle =
+ rte_intr_instance_alloc(RTE_INTR_ALLOC_DPDK_ALLOCATOR);
+ if (!cc->intr_handle) {
+ MIF_LOG(ERR, "Failed to allocate intr handle");
+ goto error;
+ }
+
+ if (rte_intr_fd_set(cc->intr_handle, sockfd))
+ goto error;
+
+ if (rte_intr_type_set(cc->intr_handle, RTE_INTR_HANDLE_EXT))
+ goto error;
+
cc->socket = socket;
cc->dev = NULL;
TAILQ_INIT(&cc->msg_queue);
- ret = rte_intr_callback_register(&cc->intr_handle, memif_intr_handler, cc);
+ ret = rte_intr_callback_register(cc->intr_handle, memif_intr_handler,
+ cc);
if (ret < 0) {
MIF_LOG(ERR, "Failed to register control channel callback.");
goto error;
@@ -857,8 +877,11 @@ memif_listener_handler(void *arg)
close(sockfd);
sockfd = -1;
}
- if (cc != NULL)
+ if (cc != NULL) {
+ if (cc->intr_handle)
+ rte_intr_instance_free(cc->intr_handle);
rte_free(cc);
+ }
}
static struct memif_socket *
@@ -914,9 +937,22 @@ memif_socket_create(char *key, uint8_t listener, bool is_abstract)
MIF_LOG(DEBUG, "Memif listener socket %s created.", sock->filename);
- sock->intr_handle.fd = sockfd;
- sock->intr_handle.type = RTE_INTR_HANDLE_EXT;
- ret = rte_intr_callback_register(&sock->intr_handle,
+ /* Allocate interrupt instance */
+ sock->intr_handle =
+ rte_intr_instance_alloc(RTE_INTR_ALLOC_DPDK_ALLOCATOR);
+ if (!sock->intr_handle) {
+ MIF_LOG(ERR, "Failed to allocate intr handle");
+ goto error;
+ }
+
+ if (rte_intr_fd_set(sock->intr_handle, sockfd))
+ goto error;
+
+ if (rte_intr_type_set(sock->intr_handle,
+ RTE_INTR_HANDLE_EXT))
+ goto error;
+
+ ret = rte_intr_callback_register(sock->intr_handle,
memif_listener_handler, sock);
if (ret < 0) {
MIF_LOG(ERR, "Failed to register interrupt "
@@ -929,8 +965,10 @@ memif_socket_create(char *key, uint8_t listener, bool is_abstract)
error:
MIF_LOG(ERR, "Failed to setup socket %s: %s", key, strerror(errno));
- if (sock != NULL)
+ if (sock != NULL) {
+ rte_intr_instance_free(sock->intr_handle);
rte_free(sock);
+ }
if (sockfd >= 0)
close(sockfd);
return NULL;
@@ -1046,6 +1084,8 @@ memif_socket_remove_device(struct rte_eth_dev *dev)
MIF_LOG(ERR, "Failed to remove socket file: %s",
socket->filename);
}
+ if (pmd->role != MEMIF_ROLE_CLIENT)
+ rte_intr_instance_free(socket->intr_handle);
rte_free(socket);
}
}
@@ -1108,13 +1148,25 @@ memif_connect_client(struct rte_eth_dev *dev)
goto error;
}
- pmd->cc->intr_handle.fd = sockfd;
- pmd->cc->intr_handle.type = RTE_INTR_HANDLE_EXT;
+ /* Allocate interrupt instance */
+ pmd->cc->intr_handle =
+ rte_intr_instance_alloc(RTE_INTR_ALLOC_DPDK_ALLOCATOR);
+ if (!pmd->cc->intr_handle) {
+ MIF_LOG(ERR, "Failed to allocate intr handle");
+ goto error;
+ }
+
+ if (rte_intr_fd_set(pmd->cc->intr_handle, sockfd))
+ goto error;
+
+ if (rte_intr_type_set(pmd->cc->intr_handle, RTE_INTR_HANDLE_EXT))
+ goto error;
+
pmd->cc->socket = NULL;
pmd->cc->dev = dev;
TAILQ_INIT(&pmd->cc->msg_queue);
- ret = rte_intr_callback_register(&pmd->cc->intr_handle,
+ ret = rte_intr_callback_register(pmd->cc->intr_handle,
memif_intr_handler, pmd->cc);
if (ret < 0) {
MIF_LOG(ERR, "Failed to register interrupt callback for control fd");
@@ -1129,6 +1181,7 @@ memif_connect_client(struct rte_eth_dev *dev)
sockfd = -1;
}
if (pmd->cc != NULL) {
+ rte_intr_instance_free(pmd->cc->intr_handle);
rte_free(pmd->cc);
pmd->cc = NULL;
}
diff --git a/drivers/net/memif/memif_socket.h b/drivers/net/memif/memif_socket.h
index b9b8a15178..b0decbb0a2 100644
--- a/drivers/net/memif/memif_socket.h
+++ b/drivers/net/memif/memif_socket.h
@@ -85,7 +85,7 @@ struct memif_socket_dev_list_elt {
(sizeof(struct sockaddr_un) - offsetof(struct sockaddr_un, sun_path))
struct memif_socket {
- struct rte_intr_handle intr_handle; /**< interrupt handle */
+ struct rte_intr_handle *intr_handle; /**< interrupt handle */
char filename[MEMIF_SOCKET_UN_SIZE]; /**< socket filename */
TAILQ_HEAD(, memif_socket_dev_list_elt) dev_queue;
@@ -101,7 +101,7 @@ struct memif_msg_queue_elt {
};
struct memif_control_channel {
- struct rte_intr_handle intr_handle; /**< interrupt handle */
+ struct rte_intr_handle *intr_handle; /**< interrupt handle */
TAILQ_HEAD(, memif_msg_queue_elt) msg_queue; /**< control message queue */
struct memif_socket *socket; /**< pointer to socket */
struct rte_eth_dev *dev; /**< pointer to device */
diff --git a/drivers/net/memif/rte_eth_memif.c b/drivers/net/memif/rte_eth_memif.c
index fd9e877c3d..e651ceecdf 100644
--- a/drivers/net/memif/rte_eth_memif.c
+++ b/drivers/net/memif/rte_eth_memif.c
@@ -326,7 +326,8 @@ eth_memif_rx(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
/* consume interrupt */
if ((ring->flags & MEMIF_RING_FLAG_MASK_INT) == 0)
- size = read(mq->intr_handle.fd, &b, sizeof(b));
+ size = read(rte_intr_fd_get(mq->intr_handle), &b,
+ sizeof(b));
ring_size = 1 << mq->log2_ring_size;
mask = ring_size - 1;
@@ -462,7 +463,8 @@ eth_memif_rx_zc(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
if ((ring->flags & MEMIF_RING_FLAG_MASK_INT) == 0) {
uint64_t b;
ssize_t size __rte_unused;
- size = read(mq->intr_handle.fd, &b, sizeof(b));
+ size = read(rte_intr_fd_get(mq->intr_handle), &b,
+ sizeof(b));
}
ring_size = 1 << mq->log2_ring_size;
@@ -680,7 +682,8 @@ eth_memif_tx(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
if ((ring->flags & MEMIF_RING_FLAG_MASK_INT) == 0) {
a = 1;
- size = write(mq->intr_handle.fd, &a, sizeof(a));
+ size = write(rte_intr_fd_get(mq->intr_handle), &a,
+ sizeof(a));
if (unlikely(size < 0)) {
MIF_LOG(WARNING,
"Failed to send interrupt. %s", strerror(errno));
@@ -832,7 +835,8 @@ eth_memif_tx_zc(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
/* Send interrupt, if enabled. */
if ((ring->flags & MEMIF_RING_FLAG_MASK_INT) == 0) {
uint64_t a = 1;
- ssize_t size = write(mq->intr_handle.fd, &a, sizeof(a));
+ ssize_t size = write(rte_intr_fd_get(mq->intr_handle),
+ &a, sizeof(a));
if (unlikely(size < 0)) {
MIF_LOG(WARNING,
"Failed to send interrupt. %s", strerror(errno));
@@ -1092,8 +1096,11 @@ memif_init_queues(struct rte_eth_dev *dev)
mq->ring_offset = memif_get_ring_offset(dev, mq, MEMIF_RING_C2S, i);
mq->last_head = 0;
mq->last_tail = 0;
- mq->intr_handle.fd = eventfd(0, EFD_NONBLOCK);
- if (mq->intr_handle.fd < 0) {
+ if (rte_intr_fd_set(mq->intr_handle,
+ eventfd(0, EFD_NONBLOCK)))
+ return -rte_errno;
+
+ if (rte_intr_fd_get(mq->intr_handle) < 0) {
MIF_LOG(WARNING,
"Failed to create eventfd for tx queue %d: %s.", i,
strerror(errno));
@@ -1115,8 +1122,11 @@ memif_init_queues(struct rte_eth_dev *dev)
mq->ring_offset = memif_get_ring_offset(dev, mq, MEMIF_RING_S2C, i);
mq->last_head = 0;
mq->last_tail = 0;
- mq->intr_handle.fd = eventfd(0, EFD_NONBLOCK);
- if (mq->intr_handle.fd < 0) {
+ if (rte_intr_fd_set(mq->intr_handle,
+ eventfd(0, EFD_NONBLOCK)))
+ return -rte_errno;
+
+ if (rte_intr_fd_get(mq->intr_handle) < 0) {
MIF_LOG(WARNING,
"Failed to create eventfd for rx queue %d: %s.", i,
strerror(errno));
@@ -1310,12 +1320,25 @@ memif_tx_queue_setup(struct rte_eth_dev *dev,
return -ENOMEM;
}
+ /* Allocate interrupt instance */
+ mq->intr_handle =
+ rte_intr_instance_alloc(RTE_INTR_ALLOC_DPDK_ALLOCATOR);
+ if (!mq->intr_handle) {
+ MIF_LOG(ERR, "Failed to allocate intr handle");
+ return -ENOMEM;
+ }
+
mq->type =
(pmd->role == MEMIF_ROLE_CLIENT) ? MEMIF_RING_C2S : MEMIF_RING_S2C;
mq->n_pkts = 0;
mq->n_bytes = 0;
- mq->intr_handle.fd = -1;
- mq->intr_handle.type = RTE_INTR_HANDLE_EXT;
+
+ if (rte_intr_fd_set(mq->intr_handle, -1))
+ return -rte_errno;
+
+ if (rte_intr_type_set(mq->intr_handle, RTE_INTR_HANDLE_EXT))
+ return -rte_errno;
+
mq->in_port = dev->data->port_id;
dev->data->tx_queues[qid] = mq;
@@ -1339,11 +1362,24 @@ memif_rx_queue_setup(struct rte_eth_dev *dev,
return -ENOMEM;
}
+ /* Allocate interrupt instance */
+ mq->intr_handle =
+ rte_intr_instance_alloc(RTE_INTR_ALLOC_DPDK_ALLOCATOR);
+ if (!mq->intr_handle) {
+ MIF_LOG(ERR, "Failed to allocate intr handle");
+ return -ENOMEM;
+ }
+
mq->type = (pmd->role == MEMIF_ROLE_CLIENT) ? MEMIF_RING_S2C : MEMIF_RING_C2S;
mq->n_pkts = 0;
mq->n_bytes = 0;
- mq->intr_handle.fd = -1;
- mq->intr_handle.type = RTE_INTR_HANDLE_EXT;
+
+ if (rte_intr_fd_set(mq->intr_handle, -1))
+ return -rte_errno;
+
+ if (rte_intr_type_set(mq->intr_handle, RTE_INTR_HANDLE_EXT))
+ return -rte_errno;
+
mq->mempool = mb_pool;
mq->in_port = dev->data->port_id;
dev->data->rx_queues[qid] = mq;
@@ -1359,6 +1395,7 @@ memif_queue_release(void *queue)
if (!mq)
return;
+ rte_intr_instance_free(mq->intr_handle);
rte_free(mq);
}
diff --git a/drivers/net/memif/rte_eth_memif.h b/drivers/net/memif/rte_eth_memif.h
index 2038bda742..a5ee23d42e 100644
--- a/drivers/net/memif/rte_eth_memif.h
+++ b/drivers/net/memif/rte_eth_memif.h
@@ -68,7 +68,7 @@ struct memif_queue {
uint64_t n_pkts; /**< number of rx/tx packets */
uint64_t n_bytes; /**< number of rx/tx bytes */
- struct rte_intr_handle intr_handle; /**< interrupt handle */
+ struct rte_intr_handle *intr_handle; /**< interrupt handle */
memif_log2_ring_size_t log2_ring_size; /**< log2 of ring size */
};
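For vdevs such as memif the handle can no longer be embedded in the queue, socket or control-channel structures, so every instance has to be allocated and released explicitly by the driver. A minimal sketch of that lifecycle, assuming the RTE_INTR_ALLOC_DPDK_ALLOCATOR flag and the set/get calls keep the RFC names used above (the control-channel struct and callback are placeholders, not the real memif types):

	#include <errno.h>
	#include <rte_interrupts.h>

	/* Placeholder structure; the real memif control channel is above. */
	struct my_ctrl_channel {
		struct rte_intr_handle *intr_handle;	/* was an embedded struct */
	};

	static int
	my_cc_setup(struct my_ctrl_channel *cc, int sockfd,
		    rte_intr_callback_fn cb)
	{
		cc->intr_handle =
			rte_intr_instance_alloc(RTE_INTR_ALLOC_DPDK_ALLOCATOR);
		if (cc->intr_handle == NULL)
			return -ENOMEM;

		/* All fields are opaque now, so every access is a set/get call. */
		if (rte_intr_fd_set(cc->intr_handle, sockfd) ||
		    rte_intr_type_set(cc->intr_handle, RTE_INTR_HANDLE_EXT) ||
		    rte_intr_callback_register(cc->intr_handle, cb, cc) < 0) {
			rte_intr_instance_free(cc->intr_handle);
			cc->intr_handle = NULL;
			return -1;
		}
		return 0;
	}

	static void
	my_cc_teardown(struct my_ctrl_channel *cc, rte_intr_callback_fn cb)
	{
		if (cc->intr_handle == NULL)
			return;
		rte_intr_callback_unregister(cc->intr_handle, cb, cc);
		/* The driver owns the instance and must free it explicitly. */
		rte_intr_instance_free(cc->intr_handle);
		cc->intr_handle = NULL;
	}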
diff --git a/drivers/net/mlx4/mlx4.c b/drivers/net/mlx4/mlx4.c
index 7f9f300c6c..866c5d22a3 100644
--- a/drivers/net/mlx4/mlx4.c
+++ b/drivers/net/mlx4/mlx4.c
@@ -1042,9 +1042,19 @@ mlx4_pci_probe(struct rte_pci_driver *pci_drv, struct rte_pci_device *pci_dev)
rte_eth_copy_pci_info(eth_dev, pci_dev);
eth_dev->data->dev_flags |= RTE_ETH_DEV_AUTOFILL_QUEUE_XSTATS;
/* Initialize local interrupt handle for current port. */
- memset(&priv->intr_handle, 0, sizeof(struct rte_intr_handle));
- priv->intr_handle.fd = -1;
- priv->intr_handle.type = RTE_INTR_HANDLE_EXT;
+ priv->intr_handle =
+ rte_intr_instance_alloc(RTE_INTR_ALLOC_DPDK_ALLOCATOR);
+ if (!priv->intr_handle) {
+ RTE_LOG(ERR, EAL, "Fail to allocate intr_handle\n");
+ goto port_error;
+ }
+
+ if (rte_intr_fd_set(priv->intr_handle, -1))
+ goto port_error;
+
+ if (rte_intr_type_set(priv->intr_handle,
+ RTE_INTR_HANDLE_EXT))
+ goto port_error;
/*
* Override ethdev interrupt handle pointer with private
* handle instead of that of the parent PCI device used by
@@ -1057,7 +1067,7 @@ mlx4_pci_probe(struct rte_pci_driver *pci_drv, struct rte_pci_device *pci_dev)
* besides setting up eth_dev->intr_handle, the rest is
* handled by rte_intr_rx_ctl().
*/
- eth_dev->intr_handle = &priv->intr_handle;
+ eth_dev->intr_handle = priv->intr_handle;
priv->dev_data = eth_dev->data;
eth_dev->dev_ops = &mlx4_dev_ops;
#ifdef HAVE_IBV_MLX4_BUF_ALLOCATORS
@@ -1102,6 +1112,7 @@ mlx4_pci_probe(struct rte_pci_driver *pci_drv, struct rte_pci_device *pci_dev)
prev_dev = eth_dev;
continue;
port_error:
+ rte_intr_instance_free(priv->intr_handle);
rte_free(priv);
if (eth_dev != NULL)
eth_dev->data->dev_private = NULL;
diff --git a/drivers/net/mlx4/mlx4.h b/drivers/net/mlx4/mlx4.h
index e07b1d2386..2d0c512f79 100644
--- a/drivers/net/mlx4/mlx4.h
+++ b/drivers/net/mlx4/mlx4.h
@@ -176,7 +176,7 @@ struct mlx4_priv {
uint32_t tso_max_payload_sz; /**< Max supported TSO payload size. */
uint32_t hw_rss_max_qps; /**< Max Rx Queues supported by RSS. */
uint64_t hw_rss_sup; /**< Supported RSS hash fields (Verbs format). */
- struct rte_intr_handle intr_handle; /**< Port interrupt handle. */
+ struct rte_intr_handle *intr_handle; /**< Port interrupt handle. */
struct mlx4_drop *drop; /**< Shared resources for drop flow rules. */
struct {
uint32_t dev_gen; /* Generation number to flush local caches. */
diff --git a/drivers/net/mlx4/mlx4_intr.c b/drivers/net/mlx4/mlx4_intr.c
index d56009c418..8059fb4624 100644
--- a/drivers/net/mlx4/mlx4_intr.c
+++ b/drivers/net/mlx4/mlx4_intr.c
@@ -43,12 +43,12 @@ static int mlx4_link_status_check(struct mlx4_priv *priv);
static void
mlx4_rx_intr_vec_disable(struct mlx4_priv *priv)
{
- struct rte_intr_handle *intr_handle = &priv->intr_handle;
+ struct rte_intr_handle *intr_handle = priv->intr_handle;
rte_intr_free_epoll_fd(intr_handle);
- free(intr_handle->intr_vec);
- intr_handle->nb_efd = 0;
- intr_handle->intr_vec = NULL;
+ rte_intr_vec_list_free(intr_handle);
+
+ rte_intr_nb_efd_set(intr_handle, 0);
}
/**
@@ -67,11 +67,10 @@ mlx4_rx_intr_vec_enable(struct mlx4_priv *priv)
unsigned int rxqs_n = ETH_DEV(priv)->data->nb_rx_queues;
unsigned int n = RTE_MIN(rxqs_n, (uint32_t)RTE_MAX_RXTX_INTR_VEC_ID);
unsigned int count = 0;
- struct rte_intr_handle *intr_handle = &priv->intr_handle;
+ struct rte_intr_handle *intr_handle = priv->intr_handle;
mlx4_rx_intr_vec_disable(priv);
- intr_handle->intr_vec = malloc(n * sizeof(intr_handle->intr_vec[0]));
- if (intr_handle->intr_vec == NULL) {
+ if (rte_intr_vec_list_alloc(intr_handle, NULL, n)) {
rte_errno = ENOMEM;
ERROR("failed to allocate memory for interrupt vector,"
" Rx interrupts will not be supported");
@@ -83,9 +82,9 @@ mlx4_rx_intr_vec_enable(struct mlx4_priv *priv)
/* Skip queues that cannot request interrupts. */
if (!rxq || !rxq->channel) {
/* Use invalid intr_vec[] index to disable entry. */
- intr_handle->intr_vec[i] =
- RTE_INTR_VEC_RXTX_OFFSET +
- RTE_MAX_RXTX_INTR_VEC_ID;
+ if (rte_intr_vec_list_index_set(intr_handle, i,
+ RTE_INTR_VEC_RXTX_OFFSET + RTE_MAX_RXTX_INTR_VEC_ID))
+ return -rte_errno;
continue;
}
if (count >= RTE_MAX_RXTX_INTR_VEC_ID) {
@@ -96,14 +95,22 @@ mlx4_rx_intr_vec_enable(struct mlx4_priv *priv)
mlx4_rx_intr_vec_disable(priv);
return -rte_errno;
}
- intr_handle->intr_vec[i] = RTE_INTR_VEC_RXTX_OFFSET + count;
- intr_handle->efds[count] = rxq->channel->fd;
+
+ if (rte_intr_vec_list_index_set(intr_handle, i,
+ RTE_INTR_VEC_RXTX_OFFSET + count))
+ return -rte_errno;
+
+ if (rte_intr_efds_index_set(intr_handle, i,
+ rxq->channel->fd))
+ return -rte_errno;
+
count++;
}
if (!count)
mlx4_rx_intr_vec_disable(priv);
else
- intr_handle->nb_efd = count;
+ if (rte_intr_nb_efd_set(intr_handle, count))
+ return -rte_errno;
return 0;
}
@@ -254,12 +261,13 @@ mlx4_intr_uninstall(struct mlx4_priv *priv)
{
int err = rte_errno; /* Make sure rte_errno remains unchanged. */
- if (priv->intr_handle.fd != -1) {
- rte_intr_callback_unregister(&priv->intr_handle,
+ if (rte_intr_fd_get(priv->intr_handle) != -1) {
+ rte_intr_callback_unregister(priv->intr_handle,
(void (*)(void *))
mlx4_interrupt_handler,
priv);
- priv->intr_handle.fd = -1;
+ if (rte_intr_fd_set(priv->intr_handle, -1))
+ return -rte_errno;
}
rte_eal_alarm_cancel((void (*)(void *))mlx4_link_status_alarm, priv);
priv->intr_alarm = 0;
@@ -286,8 +294,11 @@ mlx4_intr_install(struct mlx4_priv *priv)
mlx4_intr_uninstall(priv);
if (intr_conf->lsc | intr_conf->rmv) {
- priv->intr_handle.fd = priv->ctx->async_fd;
- rc = rte_intr_callback_register(&priv->intr_handle,
+ if (rte_intr_fd_set(priv->intr_handle,
+ priv->ctx->async_fd))
+ return -rte_errno;
+
+ rc = rte_intr_callback_register(priv->intr_handle,
(void (*)(void *))
mlx4_interrupt_handler,
priv);
diff --git a/drivers/net/mlx5/linux/mlx5_os.c b/drivers/net/mlx5/linux/mlx5_os.c
index 3746057673..5395f3127e 100644
--- a/drivers/net/mlx5/linux/mlx5_os.c
+++ b/drivers/net/mlx5/linux/mlx5_os.c
@@ -2586,9 +2586,8 @@ mlx5_os_pci_probe_pf(struct rte_pci_device *pci_dev,
*/
if (list[i].info.representor) {
struct rte_intr_handle *intr_handle;
- intr_handle = mlx5_malloc(MLX5_MEM_SYS | MLX5_MEM_ZERO,
- sizeof(*intr_handle), 0,
- SOCKET_ID_ANY);
+ intr_handle =
+ rte_intr_instance_alloc(RTE_INTR_ALLOC_DPDK_ALLOCATOR);
if (!intr_handle) {
DRV_LOG(ERR,
"port %u failed to allocate memory for interrupt handler "
@@ -2753,7 +2752,7 @@ mlx5_os_auxiliary_probe(struct rte_device *dev)
if (eth_dev == NULL)
return -rte_errno;
/* Post create. */
- eth_dev->intr_handle = &adev->intr_handle;
+ eth_dev->intr_handle = adev->intr_handle;
if (rte_eal_process_type() == RTE_PROC_PRIMARY) {
eth_dev->data->dev_flags |= RTE_ETH_DEV_INTR_LSC;
eth_dev->data->dev_flags |= RTE_ETH_DEV_INTR_RMV;
@@ -2937,7 +2936,15 @@ mlx5_os_dev_shared_handler_install(struct mlx5_dev_ctx_shared *sh)
int ret;
int flags;
- sh->intr_handle.fd = -1;
+ sh->intr_handle =
+ rte_intr_instance_alloc(RTE_INTR_ALLOC_DPDK_ALLOCATOR);
+ if (!sh->intr_handle) {
+ DRV_LOG(ERR, "Fail to allocate intr_handle");
+ rte_errno = ENOMEM;
+ return;
+ }
+ rte_intr_fd_set(sh->intr_handle, -1);
+
flags = fcntl(((struct ibv_context *)sh->ctx)->async_fd, F_GETFL);
ret = fcntl(((struct ibv_context *)sh->ctx)->async_fd,
F_SETFL, flags | O_NONBLOCK);
@@ -2945,17 +2952,25 @@ mlx5_os_dev_shared_handler_install(struct mlx5_dev_ctx_shared *sh)
DRV_LOG(INFO, "failed to change file descriptor async event"
" queue");
} else {
- sh->intr_handle.fd = ((struct ibv_context *)sh->ctx)->async_fd;
- sh->intr_handle.type = RTE_INTR_HANDLE_EXT;
- if (rte_intr_callback_register(&sh->intr_handle,
+ rte_intr_fd_set(sh->intr_handle,
+ ((struct ibv_context *)sh->ctx)->async_fd);
+ rte_intr_type_set(sh->intr_handle, RTE_INTR_HANDLE_EXT);
+ if (rte_intr_callback_register(sh->intr_handle,
mlx5_dev_interrupt_handler, sh)) {
DRV_LOG(INFO, "Fail to install the shared interrupt.");
- sh->intr_handle.fd = -1;
+ rte_intr_fd_set(sh->intr_handle, -1);
}
}
if (sh->devx) {
#ifdef HAVE_IBV_DEVX_ASYNC
- sh->intr_handle_devx.fd = -1;
+ sh->intr_handle_devx =
+ rte_intr_instance_alloc(RTE_INTR_ALLOC_DPDK_ALLOCATOR);
+ if (!sh->intr_handle_devx) {
+ DRV_LOG(ERR, "Fail to allocate intr_handle");
+ rte_errno = ENOMEM;
+ return;
+ }
+ rte_intr_fd_set(sh->intr_handle_devx, -1);
sh->devx_comp =
(void *)mlx5_glue->devx_create_cmd_comp(sh->ctx);
struct mlx5dv_devx_cmd_comp *devx_comp = sh->devx_comp;
@@ -2970,13 +2985,14 @@ mlx5_os_dev_shared_handler_install(struct mlx5_dev_ctx_shared *sh)
" devx comp");
return;
}
- sh->intr_handle_devx.fd = devx_comp->fd;
- sh->intr_handle_devx.type = RTE_INTR_HANDLE_EXT;
- if (rte_intr_callback_register(&sh->intr_handle_devx,
+ rte_intr_fd_set(sh->intr_handle_devx, devx_comp->fd);
+ rte_intr_type_set(sh->intr_handle_devx,
+ RTE_INTR_HANDLE_EXT);
+ if (rte_intr_callback_register(sh->intr_handle_devx,
mlx5_dev_interrupt_handler_devx, sh)) {
DRV_LOG(INFO, "Fail to install the devx shared"
" interrupt.");
- sh->intr_handle_devx.fd = -1;
+ rte_intr_fd_set(sh->intr_handle_devx, -1);
}
#endif /* HAVE_IBV_DEVX_ASYNC */
}
@@ -2993,13 +3009,15 @@ mlx5_os_dev_shared_handler_install(struct mlx5_dev_ctx_shared *sh)
void
mlx5_os_dev_shared_handler_uninstall(struct mlx5_dev_ctx_shared *sh)
{
- if (sh->intr_handle.fd >= 0)
- mlx5_intr_callback_unregister(&sh->intr_handle,
+ if (rte_intr_fd_get(sh->intr_handle) >= 0)
+ mlx5_intr_callback_unregister(sh->intr_handle,
mlx5_dev_interrupt_handler, sh);
+ rte_intr_instance_free(sh->intr_handle);
#ifdef HAVE_IBV_DEVX_ASYNC
- if (sh->intr_handle_devx.fd >= 0)
- rte_intr_callback_unregister(&sh->intr_handle_devx,
+ if (rte_intr_fd_get(sh->intr_handle_devx) >= 0)
+ rte_intr_callback_unregister(sh->intr_handle_devx,
mlx5_dev_interrupt_handler_devx, sh);
+ rte_intr_instance_free(sh->intr_handle_devx);
if (sh->devx_comp)
mlx5_glue->devx_destroy_cmd_comp(sh->devx_comp);
#endif
diff --git a/drivers/net/mlx5/linux/mlx5_socket.c b/drivers/net/mlx5/linux/mlx5_socket.c
index 6356b66dc4..5bd60ecee7 100644
--- a/drivers/net/mlx5/linux/mlx5_socket.c
+++ b/drivers/net/mlx5/linux/mlx5_socket.c
@@ -23,7 +23,7 @@
#define MLX5_SOCKET_PATH "/var/tmp/dpdk_net_mlx5_%d"
int server_socket; /* Unix socket for primary process. */
-struct rte_intr_handle server_intr_handle; /* Interrupt handler. */
+struct rte_intr_handle *server_intr_handle; /* Interrupt handler. */
/**
* Handle server pmd socket interrupts.
@@ -145,9 +145,18 @@ static int
mlx5_pmd_interrupt_handler_install(void)
{
MLX5_ASSERT(server_socket);
- server_intr_handle.fd = server_socket;
- server_intr_handle.type = RTE_INTR_HANDLE_EXT;
- return rte_intr_callback_register(&server_intr_handle,
+ server_intr_handle = rte_intr_instance_alloc(RTE_INTR_ALLOC_TRAD_HEAP);
+ if (!server_intr_handle) {
+ DRV_LOG(ERR, "Fail to allocate intr_handle");
+ return -ENOMEM;
+ }
+ if (rte_intr_fd_set(server_intr_handle, server_socket))
+ return -1;
+
+ if (rte_intr_type_set(server_intr_handle, RTE_INTR_HANDLE_EXT))
+ return -1;
+
+ return rte_intr_callback_register(server_intr_handle,
mlx5_pmd_socket_handle, NULL);
}
@@ -158,12 +167,13 @@ static void
mlx5_pmd_interrupt_handler_uninstall(void)
{
if (server_socket) {
- mlx5_intr_callback_unregister(&server_intr_handle,
+ mlx5_intr_callback_unregister(server_intr_handle,
mlx5_pmd_socket_handle,
NULL);
}
- server_intr_handle.fd = 0;
- server_intr_handle.type = RTE_INTR_HANDLE_UNKNOWN;
+ rte_intr_fd_set(server_intr_handle, 0);
+ rte_intr_type_set(server_intr_handle, RTE_INTR_HANDLE_UNKNOWN);
+ rte_intr_instance_free(server_intr_handle);
}
/**
diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h
index 3581414b78..95c6fec6fa 100644
--- a/drivers/net/mlx5/mlx5.h
+++ b/drivers/net/mlx5/mlx5.h
@@ -1016,7 +1016,7 @@ struct mlx5_dev_txpp {
uint32_t tick; /* Completion tick duration in nanoseconds. */
uint32_t test; /* Packet pacing test mode. */
int32_t skew; /* Scheduling skew. */
- struct rte_intr_handle intr_handle; /* Periodic interrupt. */
+ struct rte_intr_handle *intr_handle; /* Periodic interrupt. */
void *echan; /* Event Channel. */
struct mlx5_txpp_wq clock_queue; /* Clock Queue. */
struct mlx5_txpp_wq rearm_queue; /* Clock Queue. */
@@ -1184,8 +1184,8 @@ struct mlx5_dev_ctx_shared {
/* Memory Pool for mlx5 flow resources. */
struct mlx5_l3t_tbl *cnt_id_tbl; /* Shared counter lookup table. */
/* Shared interrupt handler section. */
- struct rte_intr_handle intr_handle; /* Interrupt handler for device. */
- struct rte_intr_handle intr_handle_devx; /* DEVX interrupt handler. */
+ struct rte_intr_handle *intr_handle; /* Interrupt handler for device. */
+ struct rte_intr_handle *intr_handle_devx; /* DEVX interrupt handler. */
void *devx_comp; /* DEVX async comp obj. */
struct mlx5_devx_obj *tis; /* TIS object. */
struct mlx5_devx_obj *td; /* Transport domain. */
diff --git a/drivers/net/mlx5/mlx5_rxq.c b/drivers/net/mlx5/mlx5_rxq.c
index abd8ce7989..dfb923b65c 100644
--- a/drivers/net/mlx5/mlx5_rxq.c
+++ b/drivers/net/mlx5/mlx5_rxq.c
@@ -837,10 +837,7 @@ mlx5_rx_intr_vec_enable(struct rte_eth_dev *dev)
if (!dev->data->dev_conf.intr_conf.rxq)
return 0;
mlx5_rx_intr_vec_disable(dev);
- intr_handle->intr_vec = mlx5_malloc(0,
- n * sizeof(intr_handle->intr_vec[0]),
- 0, SOCKET_ID_ANY);
- if (intr_handle->intr_vec == NULL) {
+ if (rte_intr_vec_list_alloc(intr_handle, NULL, n)) {
DRV_LOG(ERR,
"port %u failed to allocate memory for interrupt"
" vector, Rx interrupts will not be supported",
@@ -848,7 +845,10 @@ mlx5_rx_intr_vec_enable(struct rte_eth_dev *dev)
rte_errno = ENOMEM;
return -rte_errno;
}
- intr_handle->type = RTE_INTR_HANDLE_EXT;
+
+ if (rte_intr_type_set(intr_handle, RTE_INTR_HANDLE_EXT))
+ return -rte_errno;
+
for (i = 0; i != n; ++i) {
/* This rxq obj must not be released in this function. */
struct mlx5_rxq_ctrl *rxq_ctrl = mlx5_rxq_get(dev, i);
@@ -859,9 +859,9 @@ mlx5_rx_intr_vec_enable(struct rte_eth_dev *dev)
if (!rxq_obj || (!rxq_obj->ibv_channel &&
!rxq_obj->devx_channel)) {
/* Use invalid intr_vec[] index to disable entry. */
- intr_handle->intr_vec[i] =
- RTE_INTR_VEC_RXTX_OFFSET +
- RTE_MAX_RXTX_INTR_VEC_ID;
+ if (rte_intr_vec_list_index_set(intr_handle, i,
+ RTE_INTR_VEC_RXTX_OFFSET + RTE_MAX_RXTX_INTR_VEC_ID))
+ return -rte_errno;
/* Decrease the rxq_ctrl's refcnt */
if (rxq_ctrl)
mlx5_rxq_release(dev, i);
@@ -888,14 +888,20 @@ mlx5_rx_intr_vec_enable(struct rte_eth_dev *dev)
mlx5_rx_intr_vec_disable(dev);
return -rte_errno;
}
- intr_handle->intr_vec[i] = RTE_INTR_VEC_RXTX_OFFSET + count;
- intr_handle->efds[count] = rxq_obj->fd;
+
+ if (rte_intr_vec_list_index_set(intr_handle, i,
+ RTE_INTR_VEC_RXTX_OFFSET + count))
+ return -rte_errno;
+ if (rte_intr_efds_index_set(intr_handle, count,
+ rxq_obj->fd))
+ return -rte_errno;
count++;
}
if (!count)
mlx5_rx_intr_vec_disable(dev);
else
- intr_handle->nb_efd = count;
+ if (rte_intr_nb_efd_set(intr_handle, count))
+ return -rte_errno;
return 0;
}
@@ -916,11 +922,11 @@ mlx5_rx_intr_vec_disable(struct rte_eth_dev *dev)
if (!dev->data->dev_conf.intr_conf.rxq)
return;
- if (!intr_handle->intr_vec)
+ if (rte_intr_vec_list_index_get(intr_handle, 0) < 0)
goto free;
for (i = 0; i != n; ++i) {
- if (intr_handle->intr_vec[i] == RTE_INTR_VEC_RXTX_OFFSET +
- RTE_MAX_RXTX_INTR_VEC_ID)
+ if (rte_intr_vec_list_index_get(intr_handle, i) ==
+ RTE_INTR_VEC_RXTX_OFFSET + RTE_MAX_RXTX_INTR_VEC_ID)
continue;
/**
* Need to access directly the queue to release the reference
@@ -930,10 +936,10 @@ mlx5_rx_intr_vec_disable(struct rte_eth_dev *dev)
}
free:
rte_intr_free_epoll_fd(intr_handle);
- if (intr_handle->intr_vec)
- mlx5_free(intr_handle->intr_vec);
- intr_handle->nb_efd = 0;
- intr_handle->intr_vec = NULL;
+
+ rte_intr_vec_list_free(intr_handle);
+
+ rte_intr_nb_efd_set(intr_handle, 0);
}
/**
diff --git a/drivers/net/mlx5/mlx5_trigger.c b/drivers/net/mlx5/mlx5_trigger.c
index 54173bfacb..cc91be926c 100644
--- a/drivers/net/mlx5/mlx5_trigger.c
+++ b/drivers/net/mlx5/mlx5_trigger.c
@@ -1129,7 +1129,7 @@ mlx5_dev_start(struct rte_eth_dev *dev)
dev->rx_pkt_burst = mlx5_select_rx_function(dev);
/* Enable datapath on secondary process. */
mlx5_mp_os_req_start_rxtx(dev);
- if (priv->sh->intr_handle.fd >= 0) {
+ if (rte_intr_fd_get(priv->sh->intr_handle) >= 0) {
priv->sh->port[priv->dev_port - 1].ih_port_id =
(uint32_t)dev->data->port_id;
} else {
@@ -1138,7 +1138,7 @@ mlx5_dev_start(struct rte_eth_dev *dev)
dev->data->dev_conf.intr_conf.lsc = 0;
dev->data->dev_conf.intr_conf.rmv = 0;
}
- if (priv->sh->intr_handle_devx.fd >= 0)
+ if (rte_intr_fd_get(priv->sh->intr_handle_devx) >= 0)
priv->sh->port[priv->dev_port - 1].devx_ih_port_id =
(uint32_t)dev->data->port_id;
return 0;
diff --git a/drivers/net/mlx5/mlx5_txpp.c b/drivers/net/mlx5/mlx5_txpp.c
index 4f6da9f2d1..d73e1ee5aa 100644
--- a/drivers/net/mlx5/mlx5_txpp.c
+++ b/drivers/net/mlx5/mlx5_txpp.c
@@ -756,11 +756,12 @@ mlx5_txpp_interrupt_handler(void *cb_arg)
static void
mlx5_txpp_stop_service(struct mlx5_dev_ctx_shared *sh)
{
- if (!sh->txpp.intr_handle.fd)
+ if (!rte_intr_fd_get(sh->txpp.intr_handle))
return;
- mlx5_intr_callback_unregister(&sh->txpp.intr_handle,
+ mlx5_intr_callback_unregister(sh->txpp.intr_handle,
mlx5_txpp_interrupt_handler, sh);
- sh->txpp.intr_handle.fd = 0;
+ rte_intr_fd_set(sh->txpp.intr_handle, 0);
+ rte_intr_instance_free(sh->txpp.intr_handle);
}
/* Attach interrupt handler and fires first request to Rearm Queue. */
@@ -784,13 +785,22 @@ mlx5_txpp_start_service(struct mlx5_dev_ctx_shared *sh)
rte_errno = errno;
return -rte_errno;
}
- memset(&sh->txpp.intr_handle, 0, sizeof(sh->txpp.intr_handle));
+ sh->txpp.intr_handle =
+ rte_intr_instance_alloc(RTE_INTR_ALLOC_DPDK_ALLOCATOR);
+ if (!sh->txpp.intr_handle) {
+ DRV_LOG(ERR, "Fail to allocate intr_handle");
+ return -ENOMEM;
+ }
fd = mlx5_os_get_devx_channel_fd(sh->txpp.echan);
- sh->txpp.intr_handle.fd = fd;
- sh->txpp.intr_handle.type = RTE_INTR_HANDLE_EXT;
- if (rte_intr_callback_register(&sh->txpp.intr_handle,
+ if (rte_intr_fd_set(sh->txpp.intr_handle, fd))
+ return -rte_errno;
+
+ if (rte_intr_type_set(sh->txpp.intr_handle, RTE_INTR_HANDLE_EXT))
+ return -rte_errno;
+
+ if (rte_intr_callback_register(sh->txpp.intr_handle,
mlx5_txpp_interrupt_handler, sh)) {
- sh->txpp.intr_handle.fd = 0;
+ rte_intr_fd_set(sh->txpp.intr_handle, 0);
DRV_LOG(ERR, "Failed to register CQE interrupt %d.", rte_errno);
return -rte_errno;
}
diff --git a/drivers/net/netvsc/hn_ethdev.c b/drivers/net/netvsc/hn_ethdev.c
index 9e2a405973..521c449429 100644
--- a/drivers/net/netvsc/hn_ethdev.c
+++ b/drivers/net/netvsc/hn_ethdev.c
@@ -133,9 +133,9 @@ eth_dev_vmbus_allocate(struct rte_vmbus_device *dev, size_t private_data_size)
eth_dev->device = &dev->device;
/* interrupt is simulated */
- dev->intr_handle.type = RTE_INTR_HANDLE_EXT;
+ rte_intr_type_set(dev->intr_handle, RTE_INTR_HANDLE_EXT);
eth_dev->data->dev_flags |= RTE_ETH_DEV_INTR_LSC;
- eth_dev->intr_handle = &dev->intr_handle;
+ eth_dev->intr_handle = dev->intr_handle;
return eth_dev;
}
diff --git a/drivers/net/nfp/nfp_common.c b/drivers/net/nfp/nfp_common.c
index 1b4bc33593..029f29448b 100644
--- a/drivers/net/nfp/nfp_common.c
+++ b/drivers/net/nfp/nfp_common.c
@@ -307,24 +307,21 @@ nfp_configure_rx_interrupt(struct rte_eth_dev *dev,
struct nfp_net_hw *hw;
int i;
- if (!intr_handle->intr_vec) {
- intr_handle->intr_vec =
- rte_zmalloc("intr_vec",
- dev->data->nb_rx_queues * sizeof(int), 0);
- if (!intr_handle->intr_vec) {
- PMD_INIT_LOG(ERR, "Failed to allocate %d rx_queues"
- " intr_vec", dev->data->nb_rx_queues);
- return -ENOMEM;
- }
+ if (rte_intr_vec_list_alloc(intr_handle, "intr_vec",
+ dev->data->nb_rx_queues)) {
+ PMD_INIT_LOG(ERR, "Failed to allocate %d rx_queues"
+ " intr_vec", dev->data->nb_rx_queues);
+ return -ENOMEM;
}
hw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private);
- if (intr_handle->type == RTE_INTR_HANDLE_UIO) {
+ if (rte_intr_type_get(intr_handle) == RTE_INTR_HANDLE_UIO) {
PMD_INIT_LOG(INFO, "VF: enabling RX interrupt with UIO");
/* UIO just supports one queue and no LSC*/
nn_cfg_writeb(hw, NFP_NET_CFG_RXR_VEC(0), 0);
- intr_handle->intr_vec[0] = 0;
+ if (rte_intr_vec_list_index_set(intr_handle, 0, 0))
+ return -1;
} else {
PMD_INIT_LOG(INFO, "VF: enabling RX interrupt with VFIO");
for (i = 0; i < dev->data->nb_rx_queues; i++) {
@@ -333,9 +330,12 @@ nfp_configure_rx_interrupt(struct rte_eth_dev *dev,
* efd interrupts
*/
nn_cfg_writeb(hw, NFP_NET_CFG_RXR_VEC(i), i + 1);
- intr_handle->intr_vec[i] = i + 1;
+ if (rte_intr_vec_list_index_set(intr_handle, i,
+ i + 1))
+ return -1;
PMD_INIT_LOG(DEBUG, "intr_vec[%d]= %d", i,
- intr_handle->intr_vec[i]);
+ rte_intr_vec_list_index_get(intr_handle,
+ i));
}
}
@@ -808,7 +808,8 @@ nfp_rx_queue_intr_enable(struct rte_eth_dev *dev, uint16_t queue_id)
hw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private);
pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- if (pci_dev->intr_handle.type != RTE_INTR_HANDLE_UIO)
+ if (rte_intr_type_get(pci_dev->intr_handle) !=
+ RTE_INTR_HANDLE_UIO)
base = 1;
/* Make sure all updates are written before un-masking */
@@ -828,7 +829,8 @@ nfp_rx_queue_intr_disable(struct rte_eth_dev *dev, uint16_t queue_id)
hw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private);
pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- if (pci_dev->intr_handle.type != RTE_INTR_HANDLE_UIO)
+ if (rte_intr_type_get(pci_dev->intr_handle) !=
+ RTE_INTR_HANDLE_UIO)
base = 1;
/* Make sure all updates are written before un-masking */
@@ -878,7 +880,7 @@ nfp_net_irq_unmask(struct rte_eth_dev *dev)
if (hw->ctrl & NFP_NET_CFG_CTRL_MSIXAUTO) {
/* If MSI-X auto-masking is used, clear the entry */
rte_wmb();
- rte_intr_ack(&pci_dev->intr_handle);
+ rte_intr_ack(pci_dev->intr_handle);
} else {
/* Make sure all updates are written before un-masking */
rte_wmb();
diff --git a/drivers/net/nfp/nfp_ethdev.c b/drivers/net/nfp/nfp_ethdev.c
index 6ba3c27f7f..016580a06f 100644
--- a/drivers/net/nfp/nfp_ethdev.c
+++ b/drivers/net/nfp/nfp_ethdev.c
@@ -81,7 +81,7 @@ static int
nfp_net_start(struct rte_eth_dev *dev)
{
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
uint32_t new_ctrl, update = 0;
struct nfp_net_hw *hw;
struct nfp_pf_dev *pf_dev;
@@ -108,12 +108,13 @@ nfp_net_start(struct rte_eth_dev *dev)
"with NFP multiport PF");
return -EINVAL;
}
- if (intr_handle->type == RTE_INTR_HANDLE_UIO) {
+ if (rte_intr_type_get(intr_handle) ==
+ RTE_INTR_HANDLE_UIO) {
/*
* Better not to share LSC with RX interrupts.
* Unregistering LSC interrupt handler
*/
- rte_intr_callback_unregister(&pci_dev->intr_handle,
+ rte_intr_callback_unregister(pci_dev->intr_handle,
nfp_net_dev_interrupt_handler, (void *)dev);
if (dev->data->nb_rx_queues > 1) {
@@ -328,10 +329,10 @@ nfp_net_close(struct rte_eth_dev *dev)
nfp_cpp_free(pf_dev->cpp);
rte_free(pf_dev);
- rte_intr_disable(&pci_dev->intr_handle);
+ rte_intr_disable(pci_dev->intr_handle);
/* unregister callback func from eal lib */
- rte_intr_callback_unregister(&pci_dev->intr_handle,
+ rte_intr_callback_unregister(pci_dev->intr_handle,
nfp_net_dev_interrupt_handler,
(void *)dev);
@@ -574,7 +575,7 @@ nfp_net_init(struct rte_eth_dev *eth_dev)
if (rte_eal_process_type() == RTE_PROC_PRIMARY) {
/* Registering LSC interrupt handler */
- rte_intr_callback_register(&pci_dev->intr_handle,
+ rte_intr_callback_register(pci_dev->intr_handle,
nfp_net_dev_interrupt_handler,
(void *)eth_dev);
/* Telling the firmware about the LSC interrupt entry */
diff --git a/drivers/net/nfp/nfp_ethdev_vf.c b/drivers/net/nfp/nfp_ethdev_vf.c
index b697b55865..d0cd2620b1 100644
--- a/drivers/net/nfp/nfp_ethdev_vf.c
+++ b/drivers/net/nfp/nfp_ethdev_vf.c
@@ -49,7 +49,7 @@ static int
nfp_netvf_start(struct rte_eth_dev *dev)
{
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
uint32_t new_ctrl, update = 0;
struct nfp_net_hw *hw;
struct rte_eth_conf *dev_conf;
@@ -69,12 +69,13 @@ nfp_netvf_start(struct rte_eth_dev *dev)
/* check and configure queue intr-vector mapping */
if (dev->data->dev_conf.intr_conf.rxq != 0) {
- if (intr_handle->type == RTE_INTR_HANDLE_UIO) {
+ if (rte_intr_type_get(intr_handle) ==
+ RTE_INTR_HANDLE_UIO) {
/*
* Better not to share LSC with RX interrupts.
* Unregistering LSC interrupt handler
*/
- rte_intr_callback_unregister(&pci_dev->intr_handle,
+ rte_intr_callback_unregister(pci_dev->intr_handle,
nfp_net_dev_interrupt_handler, (void *)dev);
if (dev->data->nb_rx_queues > 1) {
@@ -223,10 +224,10 @@ nfp_netvf_close(struct rte_eth_dev *dev)
nfp_net_reset_rx_queue(this_rx_q);
}
- rte_intr_disable(&pci_dev->intr_handle);
+ rte_intr_disable(pci_dev->intr_handle);
/* unregister callback func from eal lib */
- rte_intr_callback_unregister(&pci_dev->intr_handle,
+ rte_intr_callback_unregister(pci_dev->intr_handle,
nfp_net_dev_interrupt_handler,
(void *)dev);
@@ -439,7 +440,7 @@ nfp_netvf_init(struct rte_eth_dev *eth_dev)
if (rte_eal_process_type() == RTE_PROC_PRIMARY) {
/* Registering LSC interrupt handler */
- rte_intr_callback_register(&pci_dev->intr_handle,
+ rte_intr_callback_register(pci_dev->intr_handle,
nfp_net_dev_interrupt_handler,
(void *)eth_dev);
/* Telling the firmware about the LSC interrupt entry */
diff --git a/drivers/net/ngbe/ngbe_ethdev.c b/drivers/net/ngbe/ngbe_ethdev.c
index 3b5c6615ad..4045fbbf00 100644
--- a/drivers/net/ngbe/ngbe_ethdev.c
+++ b/drivers/net/ngbe/ngbe_ethdev.c
@@ -129,7 +129,7 @@ eth_ngbe_dev_init(struct rte_eth_dev *eth_dev, void *init_params __rte_unused)
{
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
struct ngbe_hw *hw = ngbe_dev_hw(eth_dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
const struct rte_memzone *mz;
uint32_t ctrl_ext;
int err;
@@ -334,7 +334,7 @@ ngbe_dev_start(struct rte_eth_dev *dev)
{
struct ngbe_hw *hw = ngbe_dev_hw(dev);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
uint32_t intr_vector = 0;
int err;
bool link_up = false, negotiate = false;
@@ -372,11 +372,9 @@ ngbe_dev_start(struct rte_eth_dev *dev)
return -1;
}
- if (rte_intr_dp_is_en(intr_handle) && intr_handle->intr_vec == NULL) {
- intr_handle->intr_vec =
- rte_zmalloc("intr_vec",
- dev->data->nb_rx_queues * sizeof(int), 0);
- if (intr_handle->intr_vec == NULL) {
+ if (rte_intr_dp_is_en(intr_handle)) {
+ if (rte_intr_vec_list_alloc(intr_handle, "intr_vec",
+ dev->data->nb_rx_queues)) {
PMD_INIT_LOG(ERR,
"Failed to allocate %d rx_queues intr_vec",
dev->data->nb_rx_queues);
@@ -503,7 +501,7 @@ ngbe_dev_stop(struct rte_eth_dev *dev)
struct rte_eth_link link;
struct ngbe_hw *hw = ngbe_dev_hw(dev);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
if (hw->adapter_stopped)
return 0;
@@ -540,10 +538,7 @@ ngbe_dev_stop(struct rte_eth_dev *dev)
/* Clean datapath event and queue/vec mapping */
rte_intr_efd_disable(intr_handle);
- if (intr_handle->intr_vec != NULL) {
- rte_free(intr_handle->intr_vec);
- intr_handle->intr_vec = NULL;
- }
+ rte_intr_vec_list_free(intr_handle);
hw->adapter_stopped = true;
dev->data->dev_started = 0;
@@ -559,7 +554,7 @@ ngbe_dev_close(struct rte_eth_dev *dev)
{
struct ngbe_hw *hw = ngbe_dev_hw(dev);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
int retries = 0;
int ret;
@@ -1093,7 +1088,7 @@ static void
ngbe_configure_msix(struct rte_eth_dev *dev)
{
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
struct ngbe_hw *hw = ngbe_dev_hw(dev);
uint32_t queue_id, base = NGBE_MISC_VEC_ID;
uint32_t vec = NGBE_MISC_VEC_ID;
@@ -1128,8 +1123,10 @@ ngbe_configure_msix(struct rte_eth_dev *dev)
queue_id++) {
/* by default, 1:1 mapping */
ngbe_set_ivar_map(hw, 0, queue_id, vec);
- intr_handle->intr_vec[queue_id] = vec;
- if (vec < base + intr_handle->nb_efd - 1)
+ rte_intr_vec_list_index_set(intr_handle,
+ queue_id, vec);
+ if (vec < base + rte_intr_nb_efd_get(intr_handle)
+ - 1)
vec++;
}
diff --git a/drivers/net/octeontx2/otx2_ethdev_irq.c b/drivers/net/octeontx2/otx2_ethdev_irq.c
index b121488faf..cc573bb2e8 100644
--- a/drivers/net/octeontx2/otx2_ethdev_irq.c
+++ b/drivers/net/octeontx2/otx2_ethdev_irq.c
@@ -34,7 +34,7 @@ static int
nix_lf_register_err_irq(struct rte_eth_dev *eth_dev)
{
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
- struct rte_intr_handle *handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *handle = pci_dev->intr_handle;
struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
int rc, vec;
@@ -54,7 +54,7 @@ static void
nix_lf_unregister_err_irq(struct rte_eth_dev *eth_dev)
{
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
- struct rte_intr_handle *handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *handle = pci_dev->intr_handle;
struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
int vec;
@@ -90,7 +90,7 @@ static int
nix_lf_register_ras_irq(struct rte_eth_dev *eth_dev)
{
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
- struct rte_intr_handle *handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *handle = pci_dev->intr_handle;
struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
int rc, vec;
@@ -110,7 +110,7 @@ static void
nix_lf_unregister_ras_irq(struct rte_eth_dev *eth_dev)
{
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
- struct rte_intr_handle *handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *handle = pci_dev->intr_handle;
struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
int vec;
@@ -263,7 +263,7 @@ int
oxt2_nix_register_queue_irqs(struct rte_eth_dev *eth_dev)
{
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
- struct rte_intr_handle *handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *handle = pci_dev->intr_handle;
struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
int vec, q, sqs, rqs, qs, rc = 0;
@@ -308,7 +308,7 @@ void
oxt2_nix_unregister_queue_irqs(struct rte_eth_dev *eth_dev)
{
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
- struct rte_intr_handle *handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *handle = pci_dev->intr_handle;
struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
int vec, q;
@@ -332,7 +332,7 @@ int
oxt2_nix_register_cq_irqs(struct rte_eth_dev *eth_dev)
{
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
- struct rte_intr_handle *handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *handle = pci_dev->intr_handle;
struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
uint8_t rc = 0, vec, q;
@@ -362,20 +362,19 @@ oxt2_nix_register_cq_irqs(struct rte_eth_dev *eth_dev)
return rc;
}
- if (!handle->intr_vec) {
- handle->intr_vec = rte_zmalloc("intr_vec",
- dev->configured_cints *
- sizeof(int), 0);
- if (!handle->intr_vec) {
- otx2_err("Failed to allocate %d rx intr_vec",
- dev->configured_cints);
- return -ENOMEM;
- }
+ rc = rte_intr_vec_list_alloc(handle, "intr_vec",
+ dev->configured_cints);
+ if (rc) {
+ otx2_err("Fail to allocate intr vec list, "
+ "rc=%d", rc);
+ return rc;
}
/* VFIO vector zero is resereved for misc interrupt so
* doing required adjustment. (b13bfab4cd)
*/
- handle->intr_vec[q] = RTE_INTR_VEC_RXTX_OFFSET + vec;
+ if (rte_intr_vec_list_index_set(handle, q,
+ RTE_INTR_VEC_RXTX_OFFSET + vec))
+ return -1;
/* Configure CQE interrupt coalescing parameters */
otx2_write64(((CQ_CQE_THRESH_DEFAULT) |
@@ -395,7 +394,7 @@ void
oxt2_nix_unregister_cq_irqs(struct rte_eth_dev *eth_dev)
{
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
- struct rte_intr_handle *handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *handle = pci_dev->intr_handle;
struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
int vec, q;
diff --git a/drivers/net/qede/qede_ethdev.c b/drivers/net/qede/qede_ethdev.c
index a4304e0eff..cf9fd3c401 100644
--- a/drivers/net/qede/qede_ethdev.c
+++ b/drivers/net/qede/qede_ethdev.c
@@ -1576,17 +1576,17 @@ static int qede_dev_close(struct rte_eth_dev *eth_dev)
qdev->ops->common->slowpath_stop(edev);
qdev->ops->common->remove(edev);
- rte_intr_disable(&pci_dev->intr_handle);
+ rte_intr_disable(pci_dev->intr_handle);
- switch (pci_dev->intr_handle.type) {
+ switch (rte_intr_type_get(pci_dev->intr_handle)) {
case RTE_INTR_HANDLE_UIO_INTX:
case RTE_INTR_HANDLE_VFIO_LEGACY:
- rte_intr_callback_unregister(&pci_dev->intr_handle,
+ rte_intr_callback_unregister(pci_dev->intr_handle,
qede_interrupt_handler_intx,
(void *)eth_dev);
break;
default:
- rte_intr_callback_unregister(&pci_dev->intr_handle,
+ rte_intr_callback_unregister(pci_dev->intr_handle,
qede_interrupt_handler,
(void *)eth_dev);
}
@@ -2569,22 +2569,22 @@ static int qede_common_dev_init(struct rte_eth_dev *eth_dev, bool is_vf)
}
qede_update_pf_params(edev);
- switch (pci_dev->intr_handle.type) {
+ switch (rte_intr_type_get(pci_dev->intr_handle)) {
case RTE_INTR_HANDLE_UIO_INTX:
case RTE_INTR_HANDLE_VFIO_LEGACY:
int_mode = ECORE_INT_MODE_INTA;
- rte_intr_callback_register(&pci_dev->intr_handle,
+ rte_intr_callback_register(pci_dev->intr_handle,
qede_interrupt_handler_intx,
(void *)eth_dev);
break;
default:
int_mode = ECORE_INT_MODE_MSIX;
- rte_intr_callback_register(&pci_dev->intr_handle,
+ rte_intr_callback_register(pci_dev->intr_handle,
qede_interrupt_handler,
(void *)eth_dev);
}
- if (rte_intr_enable(&pci_dev->intr_handle)) {
+ if (rte_intr_enable(pci_dev->intr_handle)) {
DP_ERR(edev, "rte_intr_enable() failed\n");
rc = -ENODEV;
goto err;
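The qede and nfp hunks also show the control-path side of the conversion: reads of intr_handle->type and intr_handle->fd become rte_intr_type_get()/rte_intr_fd_get() calls on the pointer now held in rte_pci_device. A rough sketch of that dispatch pattern, assuming the getter names used in these hunks (the two handler functions are placeholders):

	#include <errno.h>
	#include <rte_ethdev.h>
	#include <rte_bus_pci.h>
	#include <rte_interrupts.h>

	static void my_intx_handler(void *cb_arg);	/* placeholder */
	static void my_msix_handler(void *cb_arg);	/* placeholder */

	static int
	my_register_intr(struct rte_pci_device *pci_dev,
			 struct rte_eth_dev *eth_dev)
	{
		/* Handle type is read through the getter, not ->type. */
		switch (rte_intr_type_get(pci_dev->intr_handle)) {
		case RTE_INTR_HANDLE_UIO_INTX:
		case RTE_INTR_HANDLE_VFIO_LEGACY:
			rte_intr_callback_register(pci_dev->intr_handle,
						   my_intx_handler, eth_dev);
			break;
		default:
			rte_intr_callback_register(pci_dev->intr_handle,
						   my_msix_handler, eth_dev);
			break;
		}

		/* Likewise the fd is only reachable via rte_intr_fd_get(). */
		if (rte_intr_fd_get(pci_dev->intr_handle) < 0)
			return -ENODEV;
		return rte_intr_enable(pci_dev->intr_handle);
	}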
diff --git a/drivers/net/sfc/sfc_intr.c b/drivers/net/sfc/sfc_intr.c
index c2298ed23c..b31965d1ff 100644
--- a/drivers/net/sfc/sfc_intr.c
+++ b/drivers/net/sfc/sfc_intr.c
@@ -79,7 +79,7 @@ sfc_intr_line_handler(void *cb_arg)
if (qmask & (1 << sa->mgmt_evq_index))
sfc_intr_handle_mgmt_evq(sa);
- if (rte_intr_ack(&pci_dev->intr_handle) != 0)
+ if (rte_intr_ack(pci_dev->intr_handle) != 0)
sfc_err(sa, "cannot reenable interrupts");
sfc_log_init(sa, "done");
@@ -123,7 +123,7 @@ sfc_intr_message_handler(void *cb_arg)
sfc_intr_handle_mgmt_evq(sa);
- if (rte_intr_ack(&pci_dev->intr_handle) != 0)
+ if (rte_intr_ack(pci_dev->intr_handle) != 0)
sfc_err(sa, "cannot reenable interrupts");
sfc_log_init(sa, "done");
@@ -159,7 +159,7 @@ sfc_intr_start(struct sfc_adapter *sa)
goto fail_intr_init;
pci_dev = RTE_ETH_DEV_TO_PCI(sa->eth_dev);
- intr_handle = &pci_dev->intr_handle;
+ intr_handle = pci_dev->intr_handle;
if (intr->handler != NULL) {
if (intr->rxq_intr && rte_intr_cap_multiple(intr_handle)) {
@@ -171,16 +171,15 @@ sfc_intr_start(struct sfc_adapter *sa)
goto fail_rte_intr_efd_enable;
}
if (rte_intr_dp_is_en(intr_handle)) {
- intr_handle->intr_vec =
- rte_calloc("intr_vec",
- sa->eth_dev->data->nb_rx_queues, sizeof(int),
- 0);
- if (intr_handle->intr_vec == NULL) {
+ if (rte_intr_vec_list_alloc(intr_handle,
+ "intr_vec",
+ sa->eth_dev->data->nb_rx_queues)) {
sfc_err(sa,
"Failed to allocate %d rx_queues intr_vec",
sa->eth_dev->data->nb_rx_queues);
goto fail_intr_vector_alloc;
}
+
}
sfc_log_init(sa, "rte_intr_callback_register");
@@ -214,16 +213,17 @@ sfc_intr_start(struct sfc_adapter *sa)
efx_intr_enable(sa->nic);
}
- sfc_log_init(sa, "done type=%u max_intr=%d nb_efd=%u vec=%p",
- intr_handle->type, intr_handle->max_intr,
- intr_handle->nb_efd, intr_handle->intr_vec);
+ sfc_log_init(sa, "done type=%u max_intr=%d nb_efd=%u",
+ rte_intr_type_get(intr_handle),
+ rte_intr_max_intr_get(intr_handle),
+ rte_intr_nb_efd_get(intr_handle));
return 0;
fail_rte_intr_enable:
rte_intr_callback_unregister(intr_handle, intr->handler, (void *)sa);
fail_rte_intr_cb_reg:
- rte_free(intr_handle->intr_vec);
+ rte_intr_vec_list_free(intr_handle);
fail_intr_vector_alloc:
rte_intr_efd_disable(intr_handle);
@@ -250,9 +250,9 @@ sfc_intr_stop(struct sfc_adapter *sa)
efx_intr_disable(sa->nic);
- intr_handle = &pci_dev->intr_handle;
+ intr_handle = pci_dev->intr_handle;
- rte_free(intr_handle->intr_vec);
+ rte_intr_vec_list_free(intr_handle);
rte_intr_efd_disable(intr_handle);
if (rte_intr_disable(intr_handle) != 0)
@@ -322,7 +322,7 @@ sfc_intr_attach(struct sfc_adapter *sa)
sfc_log_init(sa, "entry");
- switch (pci_dev->intr_handle.type) {
+ switch (rte_intr_type_get(pci_dev->intr_handle)) {
#ifdef RTE_EXEC_ENV_LINUX
case RTE_INTR_HANDLE_UIO_INTX:
case RTE_INTR_HANDLE_VFIO_LEGACY:
diff --git a/drivers/net/tap/rte_eth_tap.c b/drivers/net/tap/rte_eth_tap.c
index c515de3bf7..086feb53a5 100644
--- a/drivers/net/tap/rte_eth_tap.c
+++ b/drivers/net/tap/rte_eth_tap.c
@@ -1668,7 +1668,8 @@ tap_dev_intr_handler(void *cb_arg)
struct rte_eth_dev *dev = cb_arg;
struct pmd_internals *pmd = dev->data->dev_private;
- tap_nl_recv(pmd->intr_handle.fd, tap_nl_msg_handler, dev);
+ tap_nl_recv(rte_intr_fd_get(pmd->intr_handle),
+ tap_nl_msg_handler, dev);
}
static int
@@ -1679,22 +1680,23 @@ tap_lsc_intr_handle_set(struct rte_eth_dev *dev, int set)
/* In any case, disable interrupt if the conf is no longer there. */
if (!dev->data->dev_conf.intr_conf.lsc) {
- if (pmd->intr_handle.fd != -1) {
+ if (rte_intr_fd_get(pmd->intr_handle) != -1)
goto clean;
- }
+
return 0;
}
if (set) {
- pmd->intr_handle.fd = tap_nl_init(RTMGRP_LINK);
- if (unlikely(pmd->intr_handle.fd == -1))
+ rte_intr_fd_set(pmd->intr_handle,
+ tap_nl_init(RTMGRP_LINK));
+ if (unlikely(rte_intr_fd_get(pmd->intr_handle) == -1))
return -EBADF;
return rte_intr_callback_register(
- &pmd->intr_handle, tap_dev_intr_handler, dev);
+ pmd->intr_handle, tap_dev_intr_handler, dev);
}
clean:
do {
- ret = rte_intr_callback_unregister(&pmd->intr_handle,
+ ret = rte_intr_callback_unregister(pmd->intr_handle,
tap_dev_intr_handler, dev);
if (ret >= 0) {
break;
@@ -1707,8 +1709,8 @@ tap_lsc_intr_handle_set(struct rte_eth_dev *dev, int set)
}
} while (true);
- tap_nl_final(pmd->intr_handle.fd);
- pmd->intr_handle.fd = -1;
+ tap_nl_final(rte_intr_fd_get(pmd->intr_handle));
+ rte_intr_fd_set(pmd->intr_handle, -1);
return 0;
}
@@ -1923,6 +1925,14 @@ eth_dev_tap_create(struct rte_vdev_device *vdev, const char *tap_name,
goto error_exit;
}
+ /* Allocate interrupt instance */
+ pmd->intr_handle =
+ rte_intr_instance_alloc(RTE_INTR_ALLOC_DPDK_ALLOCATOR);
+ if (!pmd->intr_handle) {
+ TAP_LOG(ERR, "Failed to allocate intr handle");
+ goto error_exit;
+ }
+
/* Setup some default values */
data = dev->data;
data->dev_private = pmd;
@@ -1940,9 +1950,9 @@ eth_dev_tap_create(struct rte_vdev_device *vdev, const char *tap_name,
dev->rx_pkt_burst = pmd_rx_burst;
dev->tx_pkt_burst = pmd_tx_burst;
- pmd->intr_handle.type = RTE_INTR_HANDLE_EXT;
- pmd->intr_handle.fd = -1;
- dev->intr_handle = &pmd->intr_handle;
+ rte_intr_type_set(pmd->intr_handle, RTE_INTR_HANDLE_EXT);
+ rte_intr_fd_set(pmd->intr_handle, -1);
+ dev->intr_handle = pmd->intr_handle;
/* Presetup the fds to -1 as being not valid */
for (i = 0; i < RTE_PMD_TAP_MAX_QUEUES; i++) {
@@ -2093,6 +2103,8 @@ eth_dev_tap_create(struct rte_vdev_device *vdev, const char *tap_name,
/* mac_addrs must not be freed alone because part of dev_private */
dev->data->mac_addrs = NULL;
rte_eth_dev_release_port(dev);
+ if (pmd->intr_handle)
+ rte_intr_instance_free(pmd->intr_handle);
error_exit_nodev:
TAP_LOG(ERR, "%s Unable to initialize %s",
diff --git a/drivers/net/tap/rte_eth_tap.h b/drivers/net/tap/rte_eth_tap.h
index a98ea11a33..996021e424 100644
--- a/drivers/net/tap/rte_eth_tap.h
+++ b/drivers/net/tap/rte_eth_tap.h
@@ -89,7 +89,7 @@ struct pmd_internals {
LIST_HEAD(tap_implicit_flows, rte_flow) implicit_flows;
struct rx_queue rxq[RTE_PMD_TAP_MAX_QUEUES]; /* List of RX queues */
struct tx_queue txq[RTE_PMD_TAP_MAX_QUEUES]; /* List of TX queues */
- struct rte_intr_handle intr_handle; /* LSC interrupt handle. */
+ struct rte_intr_handle *intr_handle; /* LSC interrupt handle. */
int ka_fd; /* keep-alive file descriptor */
struct rte_mempool *gso_ctx_mp; /* Mempool for GSO packets */
};
diff --git a/drivers/net/tap/tap_intr.c b/drivers/net/tap/tap_intr.c
index 1cacc15d9f..ded50ed653 100644
--- a/drivers/net/tap/tap_intr.c
+++ b/drivers/net/tap/tap_intr.c
@@ -29,12 +29,13 @@ static void
tap_rx_intr_vec_uninstall(struct rte_eth_dev *dev)
{
struct pmd_internals *pmd = dev->data->dev_private;
- struct rte_intr_handle *intr_handle = &pmd->intr_handle;
+ struct rte_intr_handle *intr_handle = pmd->intr_handle;
rte_intr_free_epoll_fd(intr_handle);
- free(intr_handle->intr_vec);
- intr_handle->intr_vec = NULL;
- intr_handle->nb_efd = 0;
+ rte_intr_vec_list_free(intr_handle);
+ rte_intr_nb_efd_set(intr_handle, 0);
+
+ rte_intr_instance_free(intr_handle);
}
/**
@@ -52,15 +53,15 @@ tap_rx_intr_vec_install(struct rte_eth_dev *dev)
struct pmd_internals *pmd = dev->data->dev_private;
struct pmd_process_private *process_private = dev->process_private;
unsigned int rxqs_n = pmd->dev->data->nb_rx_queues;
- struct rte_intr_handle *intr_handle = &pmd->intr_handle;
+ struct rte_intr_handle *intr_handle = pmd->intr_handle;
unsigned int n = RTE_MIN(rxqs_n, (uint32_t)RTE_MAX_RXTX_INTR_VEC_ID);
unsigned int i;
unsigned int count = 0;
if (!dev->data->dev_conf.intr_conf.rxq)
return 0;
- intr_handle->intr_vec = malloc(sizeof(int) * rxqs_n);
- if (intr_handle->intr_vec == NULL) {
+
+ if (rte_intr_vec_list_alloc(intr_handle, NULL, rxqs_n)) {
rte_errno = ENOMEM;
TAP_LOG(ERR,
"failed to allocate memory for interrupt vector,"
@@ -73,19 +74,24 @@ tap_rx_intr_vec_install(struct rte_eth_dev *dev)
/* Skip queues that cannot request interrupts. */
if (!rxq || process_private->rxq_fds[i] == -1) {
/* Use invalid intr_vec[] index to disable entry. */
- intr_handle->intr_vec[i] =
- RTE_INTR_VEC_RXTX_OFFSET +
- RTE_MAX_RXTX_INTR_VEC_ID;
+ if (rte_intr_vec_list_index_set(intr_handle, i,
+ RTE_INTR_VEC_RXTX_OFFSET + RTE_MAX_RXTX_INTR_VEC_ID))
+ return -rte_errno;
continue;
}
- intr_handle->intr_vec[i] = RTE_INTR_VEC_RXTX_OFFSET + count;
- intr_handle->efds[count] = process_private->rxq_fds[i];
+ if (rte_intr_vec_list_index_set(intr_handle, i,
+ RTE_INTR_VEC_RXTX_OFFSET + count))
+ return -rte_errno;
+ if (rte_intr_efds_index_set(intr_handle, count,
+ process_private->rxq_fds[i]))
+ return -rte_errno;
count++;
}
if (!count)
tap_rx_intr_vec_uninstall(dev);
else
- intr_handle->nb_efd = count;
+ if (rte_intr_nb_efd_set(intr_handle, count))
+ return -rte_errno;
return 0;
}
diff --git a/drivers/net/thunderx/nicvf_ethdev.c b/drivers/net/thunderx/nicvf_ethdev.c
index 561a98fc81..63595f6664 100644
--- a/drivers/net/thunderx/nicvf_ethdev.c
+++ b/drivers/net/thunderx/nicvf_ethdev.c
@@ -1876,6 +1876,9 @@ nicvf_dev_close(struct rte_eth_dev *dev)
nicvf_periodic_alarm_stop(nicvf_vf_interrupt, nic->snicvf[i]);
}
+ if (nic->intr_handle)
+ rte_intr_instance_free(nic->intr_handle);
+
return 0;
}
@@ -2175,6 +2178,15 @@ nicvf_eth_dev_init(struct rte_eth_dev *eth_dev)
goto fail;
}
+ /* Allocate interrupt instance */
+ nic->intr_handle =
+ rte_intr_instance_alloc(RTE_INTR_ALLOC_DPDK_ALLOCATOR);
+ if (!nic->intr_handle) {
+ PMD_INIT_LOG(ERR, "Failed to allocate intr handle");
+ ret = -ENODEV;
+ goto fail;
+ }
+
nicvf_disable_all_interrupts(nic);
ret = nicvf_periodic_alarm_start(nicvf_interrupt, eth_dev);
diff --git a/drivers/net/thunderx/nicvf_struct.h b/drivers/net/thunderx/nicvf_struct.h
index 0ca207d0dd..c7ea13313e 100644
--- a/drivers/net/thunderx/nicvf_struct.h
+++ b/drivers/net/thunderx/nicvf_struct.h
@@ -100,7 +100,7 @@ struct nicvf {
uint16_t subsystem_vendor_id;
struct nicvf_rbdr *rbdr;
struct nicvf_rss_reta_info rss_info;
- struct rte_intr_handle intr_handle;
+ struct rte_intr_handle *intr_handle;
uint8_t cpi_alg;
uint16_t mtu;
int skip_bytes;
diff --git a/drivers/net/txgbe/txgbe_ethdev.c b/drivers/net/txgbe/txgbe_ethdev.c
index 0063994688..a12c461b9b 100644
--- a/drivers/net/txgbe/txgbe_ethdev.c
+++ b/drivers/net/txgbe/txgbe_ethdev.c
@@ -547,7 +547,7 @@ eth_txgbe_dev_init(struct rte_eth_dev *eth_dev, void *init_params __rte_unused)
struct txgbe_dcb_config *dcb_config = TXGBE_DEV_DCB_CONFIG(eth_dev);
struct txgbe_filter_info *filter_info = TXGBE_DEV_FILTER(eth_dev);
struct txgbe_bw_conf *bw_conf = TXGBE_DEV_BW_CONF(eth_dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
const struct rte_memzone *mz;
uint32_t ctrl_ext;
uint16_t csum;
@@ -1619,7 +1619,7 @@ txgbe_dev_start(struct rte_eth_dev *dev)
struct txgbe_hw_stats *hw_stats = TXGBE_DEV_STATS(dev);
struct txgbe_vf_info *vfinfo = *TXGBE_DEV_VFDATA(dev);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
uint32_t intr_vector = 0;
int err;
bool link_up = false, negotiate = 0;
@@ -1680,17 +1680,14 @@ txgbe_dev_start(struct rte_eth_dev *dev)
return -1;
}
- if (rte_intr_dp_is_en(intr_handle) && !intr_handle->intr_vec) {
- intr_handle->intr_vec =
- rte_zmalloc("intr_vec",
- dev->data->nb_rx_queues * sizeof(int), 0);
- if (intr_handle->intr_vec == NULL) {
+ if (rte_intr_dp_is_en(intr_handle)) {
+ if (rte_intr_vec_list_alloc(intr_handle, "intr_vec",
+ dev->data->nb_rx_queues)) {
PMD_INIT_LOG(ERR, "Failed to allocate %d rx_queues"
" intr_vec", dev->data->nb_rx_queues);
return -ENOMEM;
}
}
-
/* confiugre msix for sleep until rx interrupt */
txgbe_configure_msix(dev);
@@ -1871,7 +1868,7 @@ txgbe_dev_stop(struct rte_eth_dev *dev)
struct txgbe_hw *hw = TXGBE_DEV_HW(dev);
struct txgbe_vf_info *vfinfo = *TXGBE_DEV_VFDATA(dev);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
int vf;
struct txgbe_tm_conf *tm_conf = TXGBE_DEV_TM_CONF(dev);
@@ -1921,10 +1918,7 @@ txgbe_dev_stop(struct rte_eth_dev *dev)
/* Clean datapath event and queue/vec mapping */
rte_intr_efd_disable(intr_handle);
- if (intr_handle->intr_vec != NULL) {
- rte_free(intr_handle->intr_vec);
- intr_handle->intr_vec = NULL;
- }
+ rte_intr_vec_list_free(intr_handle);
/* reset hierarchy commit */
tm_conf->committed = false;
@@ -1987,7 +1981,7 @@ txgbe_dev_close(struct rte_eth_dev *dev)
{
struct txgbe_hw *hw = TXGBE_DEV_HW(dev);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
int retries = 0;
int ret;
@@ -3107,7 +3101,7 @@ txgbe_dev_interrupt_delayed_handler(void *param)
{
struct rte_eth_dev *dev = (struct rte_eth_dev *)param;
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
struct txgbe_interrupt *intr = TXGBE_DEV_INTR(dev);
struct txgbe_hw *hw = TXGBE_DEV_HW(dev);
uint32_t eicr;
@@ -3640,7 +3634,7 @@ static int
txgbe_dev_rx_queue_intr_enable(struct rte_eth_dev *dev, uint16_t queue_id)
{
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
uint32_t mask;
struct txgbe_hw *hw = TXGBE_DEV_HW(dev);
@@ -3722,7 +3716,7 @@ static void
txgbe_configure_msix(struct rte_eth_dev *dev)
{
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
struct txgbe_hw *hw = TXGBE_DEV_HW(dev);
uint32_t queue_id, base = TXGBE_MISC_VEC_ID;
uint32_t vec = TXGBE_MISC_VEC_ID;
@@ -3756,8 +3750,10 @@ txgbe_configure_msix(struct rte_eth_dev *dev)
queue_id++) {
/* by default, 1:1 mapping */
txgbe_set_ivar_map(hw, 0, queue_id, vec);
- intr_handle->intr_vec[queue_id] = vec;
- if (vec < base + intr_handle->nb_efd - 1)
+ rte_intr_vec_list_index_set(intr_handle,
+ queue_id, vec);
+ if (vec < base + rte_intr_nb_efd_get(intr_handle)
+ - 1)
vec++;
}
diff --git a/drivers/net/txgbe/txgbe_ethdev_vf.c b/drivers/net/txgbe/txgbe_ethdev_vf.c
index 896da8a887..373fcf167f 100644
--- a/drivers/net/txgbe/txgbe_ethdev_vf.c
+++ b/drivers/net/txgbe/txgbe_ethdev_vf.c
@@ -166,7 +166,7 @@ eth_txgbevf_dev_init(struct rte_eth_dev *eth_dev)
int err;
uint32_t tc, tcs;
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
struct txgbe_hw *hw = TXGBE_DEV_HW(eth_dev);
struct txgbe_vfta *shadow_vfta = TXGBE_DEV_VFTA(eth_dev);
struct txgbe_hwstrip *hwstrip = TXGBE_DEV_HWSTRIP(eth_dev);
@@ -608,7 +608,7 @@ txgbevf_dev_start(struct rte_eth_dev *dev)
struct txgbe_hw *hw = TXGBE_DEV_HW(dev);
uint32_t intr_vector = 0;
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
int err, mask = 0;
@@ -669,11 +669,9 @@ txgbevf_dev_start(struct rte_eth_dev *dev)
return -1;
}
- if (rte_intr_dp_is_en(intr_handle) && !intr_handle->intr_vec) {
- intr_handle->intr_vec =
- rte_zmalloc("intr_vec",
- dev->data->nb_rx_queues * sizeof(int), 0);
- if (intr_handle->intr_vec == NULL) {
+ if (rte_intr_dp_is_en(intr_handle)) {
+ if (rte_intr_vec_list_alloc(intr_handle, "intr_vec",
+ dev->data->nb_rx_queues)) {
PMD_INIT_LOG(ERR, "Failed to allocate %d rx_queues"
" intr_vec", dev->data->nb_rx_queues);
return -ENOMEM;
@@ -712,7 +710,7 @@ txgbevf_dev_stop(struct rte_eth_dev *dev)
struct txgbe_hw *hw = TXGBE_DEV_HW(dev);
struct txgbe_adapter *adapter = TXGBE_DEV_ADAPTER(dev);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
if (hw->adapter_stopped)
return 0;
@@ -739,10 +737,7 @@ txgbevf_dev_stop(struct rte_eth_dev *dev)
/* Clean datapath event and queue/vec mapping */
rte_intr_efd_disable(intr_handle);
- if (intr_handle->intr_vec != NULL) {
- rte_free(intr_handle->intr_vec);
- intr_handle->intr_vec = NULL;
- }
+ rte_intr_vec_list_free(intr_handle);
adapter->rss_reta_updated = 0;
hw->dev_start = false;
@@ -755,7 +750,7 @@ txgbevf_dev_close(struct rte_eth_dev *dev)
{
struct txgbe_hw *hw = TXGBE_DEV_HW(dev);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
int ret;
PMD_INIT_FUNC_TRACE();
@@ -916,7 +911,7 @@ static int
txgbevf_dev_rx_queue_intr_enable(struct rte_eth_dev *dev, uint16_t queue_id)
{
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
struct txgbe_interrupt *intr = TXGBE_DEV_INTR(dev);
struct txgbe_hw *hw = TXGBE_DEV_HW(dev);
uint32_t vec = TXGBE_MISC_VEC_ID;
@@ -938,7 +933,7 @@ txgbevf_dev_rx_queue_intr_disable(struct rte_eth_dev *dev, uint16_t queue_id)
struct txgbe_interrupt *intr = TXGBE_DEV_INTR(dev);
struct txgbe_hw *hw = TXGBE_DEV_HW(dev);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
uint32_t vec = TXGBE_MISC_VEC_ID;
if (rte_intr_allow_others(intr_handle))
@@ -978,7 +973,7 @@ static void
txgbevf_configure_msix(struct rte_eth_dev *dev)
{
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
struct txgbe_hw *hw = TXGBE_DEV_HW(dev);
uint32_t q_idx;
uint32_t vector_idx = TXGBE_MISC_VEC_ID;
@@ -1004,8 +999,10 @@ txgbevf_configure_msix(struct rte_eth_dev *dev)
* as TXGBE_VF_MAXMSIVECOTR = 1
*/
txgbevf_set_ivar_map(hw, 0, q_idx, vector_idx);
- intr_handle->intr_vec[q_idx] = vector_idx;
- if (vector_idx < base + intr_handle->nb_efd - 1)
+ rte_intr_vec_list_index_set(intr_handle, q_idx,
+ vector_idx);
+ if (vector_idx < base + rte_intr_nb_efd_get(intr_handle)
+ - 1)
vector_idx++;
}
diff --git a/drivers/net/vhost/rte_eth_vhost.c b/drivers/net/vhost/rte_eth_vhost.c
index a202931e9a..7ca59aa25f 100644
--- a/drivers/net/vhost/rte_eth_vhost.c
+++ b/drivers/net/vhost/rte_eth_vhost.c
@@ -529,40 +529,43 @@ static int
eth_vhost_update_intr(struct rte_eth_dev *eth_dev, uint16_t rxq_idx)
{
struct rte_intr_handle *handle = eth_dev->intr_handle;
- struct rte_epoll_event rev;
+ struct rte_epoll_event rev, *elist;
int epfd, ret;
if (!handle)
return 0;
- if (handle->efds[rxq_idx] == handle->elist[rxq_idx].fd)
+ elist = rte_intr_elist_index_get(handle, rxq_idx);
+ if (rte_intr_efds_index_get(handle, rxq_idx) == elist->fd)
return 0;
VHOST_LOG(INFO, "kickfd for rxq-%d was changed, updating handler.\n",
rxq_idx);
- if (handle->elist[rxq_idx].fd != -1)
+ if (elist->fd != -1)
VHOST_LOG(ERR, "Unexpected previous kickfd value (Got %d, expected -1).\n",
- handle->elist[rxq_idx].fd);
+ elist->fd);
/*
* First remove invalid epoll event, and then install
* the new one. May be solved with a proper API in the
* future.
*/
- epfd = handle->elist[rxq_idx].epfd;
- rev = handle->elist[rxq_idx];
+ epfd = elist->epfd;
+ rev = *elist;
ret = rte_epoll_ctl(epfd, EPOLL_CTL_DEL, rev.fd,
- &handle->elist[rxq_idx]);
+ elist);
if (ret) {
VHOST_LOG(ERR, "Delete epoll event failed.\n");
return ret;
}
- rev.fd = handle->efds[rxq_idx];
- handle->elist[rxq_idx] = rev;
- ret = rte_epoll_ctl(epfd, EPOLL_CTL_ADD, rev.fd,
- &handle->elist[rxq_idx]);
+ rev.fd = rte_intr_efds_index_get(handle, rxq_idx);
+ if (rte_intr_elist_index_set(handle, rxq_idx, rev))
+ return -rte_errno;
+
+ elist = rte_intr_elist_index_get(handle, rxq_idx);
+ ret = rte_epoll_ctl(epfd, EPOLL_CTL_ADD, rev.fd, elist);
if (ret) {
VHOST_LOG(ERR, "Add epoll event failed.\n");
return ret;
@@ -641,9 +644,9 @@ eth_vhost_uninstall_intr(struct rte_eth_dev *dev)
struct rte_intr_handle *intr_handle = dev->intr_handle;
if (intr_handle) {
- if (intr_handle->intr_vec)
- free(intr_handle->intr_vec);
- free(intr_handle);
+ rte_intr_vec_list_free(intr_handle);
+
+ rte_intr_instance_free(intr_handle);
}
dev->intr_handle = NULL;
@@ -662,29 +665,30 @@ eth_vhost_install_intr(struct rte_eth_dev *dev)
if (dev->intr_handle)
eth_vhost_uninstall_intr(dev);
- dev->intr_handle = malloc(sizeof(*dev->intr_handle));
+ dev->intr_handle = rte_intr_instance_alloc(RTE_INTR_ALLOC_TRAD_HEAP);
if (!dev->intr_handle) {
VHOST_LOG(ERR, "Fail to allocate intr_handle\n");
return -ENOMEM;
}
- memset(dev->intr_handle, 0, sizeof(*dev->intr_handle));
-
- dev->intr_handle->efd_counter_size = sizeof(uint64_t);
+ if (rte_intr_efd_counter_size_set(dev->intr_handle,
+ sizeof(uint64_t)))
+ return -rte_errno;
- dev->intr_handle->intr_vec =
- malloc(nb_rxq * sizeof(dev->intr_handle->intr_vec[0]));
-
- if (!dev->intr_handle->intr_vec) {
+ if (rte_intr_vec_list_alloc(dev->intr_handle, NULL, nb_rxq)) {
VHOST_LOG(ERR,
"Failed to allocate memory for interrupt vector\n");
- free(dev->intr_handle);
+ rte_intr_instance_free(dev->intr_handle);
return -ENOMEM;
}
+
VHOST_LOG(INFO, "Prepare intr vec\n");
for (i = 0; i < nb_rxq; i++) {
- dev->intr_handle->intr_vec[i] = RTE_INTR_VEC_RXTX_OFFSET + i;
- dev->intr_handle->efds[i] = -1;
+ if (rte_intr_vec_list_index_set(dev->intr_handle, i,
+ RTE_INTR_VEC_RXTX_OFFSET + i))
+ return -rte_errno;
+ if (rte_intr_efds_index_set(dev->intr_handle, i, -1))
+ return -rte_errno;
vq = dev->data->rx_queues[i];
if (!vq) {
VHOST_LOG(INFO, "rxq-%d not setup yet, skip!\n", i);
@@ -703,13 +707,21 @@ eth_vhost_install_intr(struct rte_eth_dev *dev)
"rxq-%d's kickfd is invalid, skip!\n", i);
continue;
}
- dev->intr_handle->efds[i] = vring.kickfd;
+
+ if (rte_intr_efds_index_set(dev->intr_handle, i,
+ vring.kickfd))
+ continue;
VHOST_LOG(INFO, "Installed intr vec for rxq-%d\n", i);
}
- dev->intr_handle->nb_efd = nb_rxq;
- dev->intr_handle->max_intr = nb_rxq + 1;
- dev->intr_handle->type = RTE_INTR_HANDLE_VDEV;
+ if (rte_intr_nb_efd_set(dev->intr_handle, nb_rxq))
+ return -rte_errno;
+
+ if (rte_intr_max_intr_set(dev->intr_handle, nb_rxq + 1))
+ return -rte_errno;
+
+ if (rte_intr_type_set(dev->intr_handle, RTE_INTR_HANDLE_VDEV))
+ return -rte_errno;
return 0;
}
@@ -914,7 +926,10 @@ vring_conf_update(int vid, struct rte_eth_dev *eth_dev, uint16_t vring_id)
vring_id);
return ret;
}
- eth_dev->intr_handle->efds[rx_idx] = vring.kickfd;
+
+ if (rte_intr_efds_index_set(eth_dev->intr_handle, rx_idx,
+ vring.kickfd))
+ return -rte_errno;
vq = eth_dev->data->rx_queues[rx_idx];
if (!vq) {
diff --git a/drivers/net/virtio/virtio_ethdev.c b/drivers/net/virtio/virtio_ethdev.c
index b60eeb24ab..7998c66e62 100644
--- a/drivers/net/virtio/virtio_ethdev.c
+++ b/drivers/net/virtio/virtio_ethdev.c
@@ -737,8 +737,7 @@ virtio_dev_close(struct rte_eth_dev *dev)
if (intr_conf->lsc || intr_conf->rxq) {
virtio_intr_disable(dev);
rte_intr_efd_disable(dev->intr_handle);
- rte_free(dev->intr_handle->intr_vec);
- dev->intr_handle->intr_vec = NULL;
+ rte_intr_vec_list_free(dev->intr_handle);
}
virtio_reset(hw);
@@ -1649,7 +1648,9 @@ virtio_queues_bind_intr(struct rte_eth_dev *dev)
PMD_INIT_LOG(INFO, "queue/interrupt binding");
for (i = 0; i < dev->data->nb_rx_queues; ++i) {
- dev->intr_handle->intr_vec[i] = i + 1;
+ if (rte_intr_vec_list_index_set(dev->intr_handle, i,
+ i + 1))
+ return -rte_errno;
if (VIRTIO_OPS(hw)->set_queue_irq(hw, hw->vqs[i * 2], i + 1) ==
VIRTIO_MSI_NO_VECTOR) {
PMD_DRV_LOG(ERR, "failed to set queue vector");
@@ -1688,15 +1689,11 @@ virtio_configure_intr(struct rte_eth_dev *dev)
return -1;
}
- if (!dev->intr_handle->intr_vec) {
- dev->intr_handle->intr_vec =
- rte_zmalloc("intr_vec",
- hw->max_queue_pairs * sizeof(int), 0);
- if (!dev->intr_handle->intr_vec) {
- PMD_INIT_LOG(ERR, "Failed to allocate %u rxq vectors",
- hw->max_queue_pairs);
- return -ENOMEM;
- }
+ if (rte_intr_vec_list_alloc(dev->intr_handle, "intr_vec",
+ hw->max_queue_pairs)) {
+ PMD_INIT_LOG(ERR, "Failed to allocate %u rxq vectors",
+ hw->max_queue_pairs);
+ return -ENOMEM;
}
if (dev->data->dev_flags & RTE_ETH_DEV_INTR_LSC) {
diff --git a/drivers/net/virtio/virtio_user/virtio_user_dev.c b/drivers/net/virtio/virtio_user/virtio_user_dev.c
index 6a6145583b..c445dc2a51 100644
--- a/drivers/net/virtio/virtio_user/virtio_user_dev.c
+++ b/drivers/net/virtio/virtio_user/virtio_user_dev.c
@@ -407,22 +407,37 @@ virtio_user_fill_intr_handle(struct virtio_user_dev *dev)
struct rte_eth_dev *eth_dev = &rte_eth_devices[dev->hw.port_id];
if (!eth_dev->intr_handle) {
- eth_dev->intr_handle = malloc(sizeof(*eth_dev->intr_handle));
+ eth_dev->intr_handle =
+ rte_intr_instance_alloc(RTE_INTR_ALLOC_TRAD_HEAP);
if (!eth_dev->intr_handle) {
PMD_DRV_LOG(ERR, "(%s) failed to allocate intr_handle", dev->path);
return -1;
}
- memset(eth_dev->intr_handle, 0, sizeof(*eth_dev->intr_handle));
}
for (i = 0; i < dev->max_queue_pairs; ++i)
- eth_dev->intr_handle->efds[i] = dev->callfds[2 * i];
- eth_dev->intr_handle->nb_efd = dev->max_queue_pairs;
- eth_dev->intr_handle->max_intr = dev->max_queue_pairs + 1;
- eth_dev->intr_handle->type = RTE_INTR_HANDLE_VDEV;
+ if (rte_intr_efds_index_set(eth_dev->intr_handle, i,
+ dev->callfds[i]))
+ return -rte_errno;
+
+ if (rte_intr_nb_efd_set(eth_dev->intr_handle,
+ dev->max_queue_pairs))
+ return -rte_errno;
+
+ if (rte_intr_max_intr_set(eth_dev->intr_handle,
+ dev->max_queue_pairs + 1))
+ return -rte_errno;
+
+ if (rte_intr_type_set(eth_dev->intr_handle,
+ RTE_INTR_HANDLE_VDEV))
+ return -rte_errno;
/* For virtio vdev, no need to read counter for clean */
- eth_dev->intr_handle->efd_counter_size = 0;
- eth_dev->intr_handle->fd = dev->ops->get_intr_fd(dev);
+ if (rte_intr_efd_counter_size_set(eth_dev->intr_handle, 0))
+ return -rte_errno;
+
+ if (rte_intr_fd_set(eth_dev->intr_handle,
+ dev->ops->get_intr_fd(dev)))
+ return -rte_errno;
return 0;
}
@@ -657,7 +672,7 @@ virtio_user_dev_uninit(struct virtio_user_dev *dev)
struct rte_eth_dev *eth_dev = &rte_eth_devices[dev->hw.port_id];
if (eth_dev->intr_handle) {
- free(eth_dev->intr_handle);
+ rte_intr_instance_free(eth_dev->intr_handle);
eth_dev->intr_handle = NULL;
}
@@ -962,7 +977,7 @@ virtio_user_dev_delayed_disconnect_handler(void *param)
return;
}
PMD_DRV_LOG(DEBUG, "Unregistering intr fd: %d",
- eth_dev->intr_handle->fd);
+ rte_intr_fd_get(eth_dev->intr_handle));
if (rte_intr_callback_unregister(eth_dev->intr_handle,
virtio_interrupt_handler,
eth_dev) != 1)
@@ -972,10 +987,11 @@ virtio_user_dev_delayed_disconnect_handler(void *param)
if (dev->ops->server_disconnect)
dev->ops->server_disconnect(dev);
- eth_dev->intr_handle->fd = dev->ops->get_intr_fd(dev);
+ rte_intr_fd_set(eth_dev->intr_handle,
+ dev->ops->get_intr_fd(dev));
PMD_DRV_LOG(DEBUG, "Registering intr fd: %d",
- eth_dev->intr_handle->fd);
+ rte_intr_fd_get(eth_dev->intr_handle));
if (rte_intr_callback_register(eth_dev->intr_handle,
virtio_interrupt_handler,
@@ -996,16 +1012,18 @@ virtio_user_dev_delayed_intr_reconfig_handler(void *param)
struct rte_eth_dev *eth_dev = &rte_eth_devices[dev->hw.port_id];
PMD_DRV_LOG(DEBUG, "Unregistering intr fd: %d",
- eth_dev->intr_handle->fd);
+ rte_intr_fd_get(eth_dev->intr_handle));
if (rte_intr_callback_unregister(eth_dev->intr_handle,
virtio_interrupt_handler,
eth_dev) != 1)
PMD_DRV_LOG(ERR, "interrupt unregister failed");
- eth_dev->intr_handle->fd = dev->ops->get_intr_fd(dev);
+ rte_intr_fd_set(eth_dev->intr_handle,
+ dev->ops->get_intr_fd(dev));
- PMD_DRV_LOG(DEBUG, "Registering intr fd: %d", eth_dev->intr_handle->fd);
+ PMD_DRV_LOG(DEBUG, "Registering intr fd: %d",
+ rte_intr_fd_get(eth_dev->intr_handle));
if (rte_intr_callback_register(eth_dev->intr_handle,
virtio_interrupt_handler, eth_dev))
diff --git a/drivers/net/vmxnet3/vmxnet3_ethdev.c b/drivers/net/vmxnet3/vmxnet3_ethdev.c
index 2f40ae907d..3e3c73c5ab 100644
--- a/drivers/net/vmxnet3/vmxnet3_ethdev.c
+++ b/drivers/net/vmxnet3/vmxnet3_ethdev.c
@@ -620,11 +620,9 @@ vmxnet3_configure_msix(struct rte_eth_dev *dev)
return -1;
}
- if (rte_intr_dp_is_en(intr_handle) && !intr_handle->intr_vec) {
- intr_handle->intr_vec =
- rte_zmalloc("intr_vec",
- dev->data->nb_rx_queues * sizeof(int), 0);
- if (intr_handle->intr_vec == NULL) {
+ if (rte_intr_dp_is_en(intr_handle)) {
+ if (rte_intr_vec_list_alloc(intr_handle, "intr_vec",
+ dev->data->nb_rx_queues)) {
PMD_INIT_LOG(ERR, "Failed to allocate %d Rx queues intr_vec",
dev->data->nb_rx_queues);
rte_intr_efd_disable(intr_handle);
@@ -635,8 +633,7 @@ vmxnet3_configure_msix(struct rte_eth_dev *dev)
if (!rte_intr_allow_others(intr_handle) &&
dev->data->dev_conf.intr_conf.lsc != 0) {
PMD_INIT_LOG(ERR, "not enough intr vector to support both Rx interrupt and LSC");
- rte_free(intr_handle->intr_vec);
- intr_handle->intr_vec = NULL;
+ rte_intr_vec_list_free(intr_handle);
rte_intr_efd_disable(intr_handle);
return -1;
}
@@ -644,17 +641,19 @@ vmxnet3_configure_msix(struct rte_eth_dev *dev)
/* if we cannot allocate one MSI-X vector per queue, don't enable
* interrupt mode.
*/
- if (hw->intr.num_intrs != (intr_handle->nb_efd + 1)) {
+ if (hw->intr.num_intrs !=
+ (rte_intr_nb_efd_get(intr_handle) + 1)) {
PMD_INIT_LOG(ERR, "Device configured with %d Rx intr vectors, expecting %d",
- hw->intr.num_intrs, intr_handle->nb_efd + 1);
- rte_free(intr_handle->intr_vec);
- intr_handle->intr_vec = NULL;
+ hw->intr.num_intrs,
+ rte_intr_nb_efd_get(intr_handle) + 1);
+ rte_intr_vec_list_free(intr_handle);
rte_intr_efd_disable(intr_handle);
return -1;
}
for (i = 0; i < dev->data->nb_rx_queues; i++)
- intr_handle->intr_vec[i] = i + 1;
+ if (rte_intr_vec_list_index_set(intr_handle, i, i + 1))
+ return -rte_errno;
for (i = 0; i < hw->intr.num_intrs; i++)
hw->intr.mod_levels[i] = UPT1_IML_ADAPTIVE;
@@ -802,7 +801,9 @@ vmxnet3_setup_driver_shared(struct rte_eth_dev *dev)
if (hw->intr.lsc_only)
tqd->conf.intrIdx = 1;
else
- tqd->conf.intrIdx = intr_handle->intr_vec[i];
+ tqd->conf.intrIdx =
+ rte_intr_vec_list_index_get(intr_handle,
+ i);
tqd->status.stopped = TRUE;
tqd->status.error = 0;
memset(&tqd->stats, 0, sizeof(tqd->stats));
@@ -825,7 +826,9 @@ vmxnet3_setup_driver_shared(struct rte_eth_dev *dev)
if (hw->intr.lsc_only)
rqd->conf.intrIdx = 1;
else
- rqd->conf.intrIdx = intr_handle->intr_vec[i];
+ rqd->conf.intrIdx =
+ rte_intr_vec_list_index_get(intr_handle,
+ i);
rqd->status.stopped = TRUE;
rqd->status.error = 0;
memset(&rqd->stats, 0, sizeof(rqd->stats));
@@ -1022,10 +1025,7 @@ vmxnet3_dev_stop(struct rte_eth_dev *dev)
/* Clean datapath event and queue/vector mapping */
rte_intr_efd_disable(intr_handle);
- if (intr_handle->intr_vec != NULL) {
- rte_free(intr_handle->intr_vec);
- intr_handle->intr_vec = NULL;
- }
+ rte_intr_vec_list_free(intr_handle);
/* quiesce the device first */
VMXNET3_WRITE_BAR1_REG(hw, VMXNET3_REG_CMD, VMXNET3_CMD_QUIESCE_DEV);
@@ -1677,7 +1677,9 @@ vmxnet3_dev_rx_queue_intr_enable(struct rte_eth_dev *dev, uint16_t queue_id)
{
struct vmxnet3_hw *hw = dev->data->dev_private;
- vmxnet3_enable_intr(hw, dev->intr_handle->intr_vec[queue_id]);
+ vmxnet3_enable_intr(hw,
+ rte_intr_vec_list_index_get(dev->intr_handle,
+ queue_id));
return 0;
}
@@ -1687,7 +1689,8 @@ vmxnet3_dev_rx_queue_intr_disable(struct rte_eth_dev *dev, uint16_t queue_id)
{
struct vmxnet3_hw *hw = dev->data->dev_private;
- vmxnet3_disable_intr(hw, dev->intr_handle->intr_vec[queue_id]);
+ vmxnet3_disable_intr(hw,
+ rte_intr_vec_list_index_get(dev->intr_handle, queue_id));
return 0;
}
diff --git a/drivers/raw/ifpga/ifpga_rawdev.c b/drivers/raw/ifpga/ifpga_rawdev.c
index 76e6a8530b..c59261047f 100644
--- a/drivers/raw/ifpga/ifpga_rawdev.c
+++ b/drivers/raw/ifpga/ifpga_rawdev.c
@@ -73,7 +73,7 @@ static pthread_t ifpga_monitor_start_thread;
#define IFPGA_MAX_IRQ 12
/* 0 for FME interrupt, others are reserved for AFU irq */
-static struct rte_intr_handle ifpga_irq_handle[IFPGA_MAX_IRQ];
+static struct rte_intr_handle *ifpga_irq_handle[IFPGA_MAX_IRQ];
static struct ifpga_rawdev *
ifpga_rawdev_allocate(struct rte_rawdev *rawdev);
@@ -1345,17 +1345,22 @@ ifpga_unregister_msix_irq(enum ifpga_irq_type type,
int vec_start, rte_intr_callback_fn handler, void *arg)
{
struct rte_intr_handle *intr_handle;
+ int rc, i;
if (type == IFPGA_FME_IRQ)
- intr_handle = &ifpga_irq_handle[0];
+ intr_handle = ifpga_irq_handle[0];
else if (type == IFPGA_AFU_IRQ)
- intr_handle = &ifpga_irq_handle[vec_start + 1];
+ intr_handle = ifpga_irq_handle[vec_start + 1];
else
return 0;
rte_intr_efd_disable(intr_handle);
- return rte_intr_callback_unregister(intr_handle, handler, arg);
+ rc = rte_intr_callback_unregister(intr_handle, handler, arg);
+
+ for (i = 0; i < IFPGA_MAX_IRQ; i++)
+ rte_intr_instance_free(ifpga_irq_handle[i]);
+ return rc;
}
int
@@ -1369,6 +1374,14 @@ ifpga_register_msix_irq(struct rte_rawdev *dev, int port_id,
struct opae_adapter *adapter;
struct opae_manager *mgr;
struct opae_accelerator *acc;
+ int *intr_efds = NULL, nb_intr, i;
+
+ for (i = 0; i < IFPGA_MAX_IRQ; i++) {
+ ifpga_irq_handle[i] =
+ rte_intr_instance_alloc(RTE_INTR_ALLOC_TRAD_HEAP);
+ if (!ifpga_irq_handle[i])
+ return -ENOMEM;
+ }
adapter = ifpga_rawdev_get_priv(dev);
if (!adapter)
@@ -1379,29 +1392,33 @@ ifpga_register_msix_irq(struct rte_rawdev *dev, int port_id,
return -ENODEV;
if (type == IFPGA_FME_IRQ) {
- intr_handle = &ifpga_irq_handle[0];
+ intr_handle = ifpga_irq_handle[0];
count = 1;
} else if (type == IFPGA_AFU_IRQ) {
- intr_handle = &ifpga_irq_handle[vec_start + 1];
+ intr_handle = ifpga_irq_handle[vec_start + 1];
} else {
return -EINVAL;
}
- intr_handle->type = RTE_INTR_HANDLE_VFIO_MSIX;
+ if (rte_intr_type_set(intr_handle, RTE_INTR_HANDLE_VFIO_MSIX))
+ return -rte_errno;
ret = rte_intr_efd_enable(intr_handle, count);
if (ret)
return -ENODEV;
- intr_handle->fd = intr_handle->efds[0];
+ if (rte_intr_fd_set(intr_handle,
+ rte_intr_efds_index_get(intr_handle, 0)))
+ return -rte_errno;
IFPGA_RAWDEV_PMD_DEBUG("register %s irq, vfio_fd=%d, fd=%d\n",
- name, intr_handle->vfio_dev_fd,
- intr_handle->fd);
+ name, rte_intr_dev_fd_get(intr_handle),
+ rte_intr_fd_get(intr_handle));
if (type == IFPGA_FME_IRQ) {
struct fpga_fme_err_irq_set err_irq_set;
- err_irq_set.evtfd = intr_handle->efds[0];
+ err_irq_set.evtfd = rte_intr_efds_index_get(intr_handle,
+ 0);
ret = opae_manager_ifpga_set_err_irq(mgr, &err_irq_set);
if (ret)
@@ -1411,20 +1428,33 @@ ifpga_register_msix_irq(struct rte_rawdev *dev, int port_id,
if (!acc)
return -EINVAL;
- ret = opae_acc_set_irq(acc, vec_start, count,
- intr_handle->efds);
- if (ret)
+ nb_intr = rte_intr_nb_intr_get(intr_handle);
+
+ intr_efds = calloc(nb_intr, sizeof(int));
+ if (!intr_efds)
+ return -ENOMEM;
+
+ for (i = 0; i < nb_intr; i++)
+ intr_efds[i] = rte_intr_efds_index_get(intr_handle, i);
+
+ ret = opae_acc_set_irq(acc, vec_start, count, intr_efds);
+ if (ret) {
+ free(intr_efds);
return -EINVAL;
+ }
}
/* register interrupt handler using DPDK API */
ret = rte_intr_callback_register(intr_handle,
handler, (void *)arg);
- if (ret)
+ if (ret) {
+ free(intr_efds);
return -EINVAL;
+ }
IFPGA_RAWDEV_PMD_INFO("success register %s interrupt\n", name);
+ free(intr_efds);
return 0;
}
@@ -1491,7 +1521,7 @@ ifpga_rawdev_create(struct rte_pci_device *pci_dev,
data->bus = pci_dev->addr.bus;
data->devid = pci_dev->addr.devid;
data->function = pci_dev->addr.function;
- data->vfio_dev_fd = pci_dev->intr_handle.vfio_dev_fd;
+ data->vfio_dev_fd = rte_intr_dev_fd_get(pci_dev->intr_handle);
adapter = rawdev->dev_private;
/* create a opae_adapter based on above device data */
diff --git a/drivers/raw/ntb/ntb.c b/drivers/raw/ntb/ntb.c
index 78cfcd79f7..46ac02e5ab 100644
--- a/drivers/raw/ntb/ntb.c
+++ b/drivers/raw/ntb/ntb.c
@@ -1044,13 +1044,10 @@ ntb_dev_close(struct rte_rawdev *dev)
ntb_queue_release(dev, i);
hw->queue_pairs = 0;
- intr_handle = &hw->pci_dev->intr_handle;
+ intr_handle = hw->pci_dev->intr_handle;
/* Clean datapath event and vec mapping */
rte_intr_efd_disable(intr_handle);
- if (intr_handle->intr_vec) {
- rte_free(intr_handle->intr_vec);
- intr_handle->intr_vec = NULL;
- }
+ rte_intr_vec_list_free(intr_handle);
/* Disable uio intr before callback unregister */
rte_intr_disable(intr_handle);
@@ -1402,7 +1399,7 @@ ntb_init_hw(struct rte_rawdev *dev, struct rte_pci_device *pci_dev)
/* Init doorbell. */
hw->db_valid_mask = RTE_LEN2MASK(hw->db_cnt, uint64_t);
- intr_handle = &pci_dev->intr_handle;
+ intr_handle = pci_dev->intr_handle;
/* Register callback func to eal lib */
rte_intr_callback_register(intr_handle,
ntb_dev_intr_handler, dev);
diff --git a/drivers/regex/octeontx2/otx2_regexdev_hw_access.c b/drivers/regex/octeontx2/otx2_regexdev_hw_access.c
index 620d5c9122..f8031d0f72 100644
--- a/drivers/regex/octeontx2/otx2_regexdev_hw_access.c
+++ b/drivers/regex/octeontx2/otx2_regexdev_hw_access.c
@@ -31,7 +31,7 @@ ree_lf_err_intr_unregister(const struct rte_regexdev *dev, uint16_t msix_off,
uintptr_t base)
{
struct rte_pci_device *pci_dev = RTE_DEV_TO_PCI(dev->device);
- struct rte_intr_handle *handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *handle = pci_dev->intr_handle;
/* Disable error interrupts */
otx2_write64(~0ull, base + OTX2_REE_LF_MISC_INT_ENA_W1C);
@@ -61,7 +61,7 @@ ree_lf_err_intr_register(const struct rte_regexdev *dev, uint16_t msix_off,
uintptr_t base)
{
struct rte_pci_device *pci_dev = RTE_DEV_TO_PCI(dev->device);
- struct rte_intr_handle *handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *handle = pci_dev->intr_handle;
int ret;
/* Disable error interrupts */
diff --git a/drivers/vdpa/ifc/ifcvf_vdpa.c b/drivers/vdpa/ifc/ifcvf_vdpa.c
index 365da2a8b9..dd5251d382 100644
--- a/drivers/vdpa/ifc/ifcvf_vdpa.c
+++ b/drivers/vdpa/ifc/ifcvf_vdpa.c
@@ -162,7 +162,7 @@ ifcvf_vfio_setup(struct ifcvf_internal *internal)
if (rte_pci_map_device(dev))
goto err;
- internal->vfio_dev_fd = dev->intr_handle.vfio_dev_fd;
+ internal->vfio_dev_fd = rte_intr_dev_fd_get(dev->intr_handle);
for (i = 0; i < RTE_MIN(PCI_MAX_RESOURCE, IFCVF_PCI_MAX_RESOURCE);
i++) {
@@ -365,7 +365,8 @@ vdpa_enable_vfio_intr(struct ifcvf_internal *internal, bool m_rx)
irq_set->index = VFIO_PCI_MSIX_IRQ_INDEX;
irq_set->start = 0;
fd_ptr = (int *)&irq_set->data;
- fd_ptr[RTE_INTR_VEC_ZERO_OFFSET] = internal->pdev->intr_handle.fd;
+ fd_ptr[RTE_INTR_VEC_ZERO_OFFSET] =
+ rte_intr_fd_get(internal->pdev->intr_handle);
for (i = 0; i < nr_vring; i++)
internal->intr_fd[i] = -1;
diff --git a/drivers/vdpa/mlx5/mlx5_vdpa.c b/drivers/vdpa/mlx5/mlx5_vdpa.c
index 6d17d7a6f3..ee3f939afa 100644
--- a/drivers/vdpa/mlx5/mlx5_vdpa.c
+++ b/drivers/vdpa/mlx5/mlx5_vdpa.c
@@ -698,6 +698,12 @@ mlx5_vdpa_dev_probe(struct rte_device *dev)
DRV_LOG(ERR, "Failed to allocate VAR %u.", errno);
goto error;
}
+ priv->err_intr_handle =
+ rte_intr_instance_alloc(RTE_INTR_ALLOC_DPDK_ALLOCATOR);
+ if (!priv->err_intr_handle) {
+ DRV_LOG(ERR, "Fail to allocate intr_handle");
+ goto error;
+ }
priv->vdev = rte_vdpa_register_device(dev, &mlx5_vdpa_ops);
if (priv->vdev == NULL) {
DRV_LOG(ERR, "Failed to register vDPA device.");
@@ -716,6 +722,8 @@ mlx5_vdpa_dev_probe(struct rte_device *dev)
if (priv) {
if (priv->var)
mlx5_glue->dv_free_var(priv->var);
+ if (priv->err_intr_handle)
+ rte_intr_instance_free(priv->err_intr_handle);
rte_free(priv);
}
if (ctx)
@@ -750,6 +758,8 @@ mlx5_vdpa_dev_remove(struct rte_device *dev)
rte_vdpa_unregister_device(priv->vdev);
mlx5_glue->close_device(priv->ctx);
pthread_mutex_destroy(&priv->vq_config_lock);
+ if (priv->err_intr_handle)
+ rte_intr_instance_free(priv->err_intr_handle);
rte_free(priv);
}
return 0;
diff --git a/drivers/vdpa/mlx5/mlx5_vdpa.h b/drivers/vdpa/mlx5/mlx5_vdpa.h
index 2a04e36607..f72cb358ec 100644
--- a/drivers/vdpa/mlx5/mlx5_vdpa.h
+++ b/drivers/vdpa/mlx5/mlx5_vdpa.h
@@ -92,7 +92,7 @@ struct mlx5_vdpa_virtq {
void *buf;
uint32_t size;
} umems[3];
- struct rte_intr_handle intr_handle;
+ struct rte_intr_handle *intr_handle;
uint64_t err_time[3]; /* RDTSC time of recent errors. */
uint32_t n_retry;
struct mlx5_devx_virtio_q_couners_attr reset;
@@ -142,7 +142,7 @@ struct mlx5_vdpa_priv {
struct mlx5dv_devx_event_channel *eventc;
struct mlx5dv_devx_event_channel *err_chnl;
struct mlx5dv_devx_uar *uar;
- struct rte_intr_handle err_intr_handle;
+ struct rte_intr_handle *err_intr_handle;
struct mlx5_devx_obj *td;
struct mlx5_devx_obj *tiss[16]; /* TIS list for each LAG port. */
uint16_t nr_virtqs;
diff --git a/drivers/vdpa/mlx5/mlx5_vdpa_event.c b/drivers/vdpa/mlx5/mlx5_vdpa_event.c
index 3541c652ce..98d2d976c5 100644
--- a/drivers/vdpa/mlx5/mlx5_vdpa_event.c
+++ b/drivers/vdpa/mlx5/mlx5_vdpa_event.c
@@ -410,12 +410,18 @@ mlx5_vdpa_err_event_setup(struct mlx5_vdpa_priv *priv)
DRV_LOG(ERR, "Failed to change device event channel FD.");
goto error;
}
- priv->err_intr_handle.fd = priv->err_chnl->fd;
- priv->err_intr_handle.type = RTE_INTR_HANDLE_EXT;
- if (rte_intr_callback_register(&priv->err_intr_handle,
+
+ if (rte_intr_fd_set(priv->err_intr_handle, priv->err_chnl->fd))
+ goto error;
+
+ if (rte_intr_type_set(priv->err_intr_handle,
+ RTE_INTR_HANDLE_EXT))
+ goto error;
+
+ if (rte_intr_callback_register(priv->err_intr_handle,
mlx5_vdpa_err_interrupt_handler,
priv)) {
- priv->err_intr_handle.fd = 0;
+ rte_intr_fd_set(priv->err_intr_handle, 0);
DRV_LOG(ERR, "Failed to register error interrupt for device %d.",
priv->vid);
goto error;
@@ -435,20 +441,20 @@ mlx5_vdpa_err_event_unset(struct mlx5_vdpa_priv *priv)
int retries = MLX5_VDPA_INTR_RETRIES;
int ret = -EAGAIN;
- if (!priv->err_intr_handle.fd)
+ if (!rte_intr_fd_get(priv->err_intr_handle))
return;
while (retries-- && ret == -EAGAIN) {
- ret = rte_intr_callback_unregister(&priv->err_intr_handle,
+ ret = rte_intr_callback_unregister(priv->err_intr_handle,
mlx5_vdpa_err_interrupt_handler,
priv);
if (ret == -EAGAIN) {
DRV_LOG(DEBUG, "Try again to unregister fd %d "
"of error interrupt, retries = %d.",
- priv->err_intr_handle.fd, retries);
+ rte_intr_fd_get(priv->err_intr_handle),
+ retries);
rte_pause();
}
}
- memset(&priv->err_intr_handle, 0, sizeof(priv->err_intr_handle));
if (priv->err_chnl) {
#ifdef HAVE_IBV_DEVX_EVENT
union {
diff --git a/drivers/vdpa/mlx5/mlx5_vdpa_virtq.c b/drivers/vdpa/mlx5/mlx5_vdpa_virtq.c
index f530646058..995b3c7928 100644
--- a/drivers/vdpa/mlx5/mlx5_vdpa_virtq.c
+++ b/drivers/vdpa/mlx5/mlx5_vdpa_virtq.c
@@ -24,7 +24,8 @@ mlx5_vdpa_virtq_handler(void *cb_arg)
int nbytes;
do {
- nbytes = read(virtq->intr_handle.fd, &buf, 8);
+ nbytes = read(rte_intr_fd_get(virtq->intr_handle), &buf,
+ 8);
if (nbytes < 0) {
if (errno == EINTR ||
errno == EWOULDBLOCK ||
@@ -57,21 +58,24 @@ mlx5_vdpa_virtq_unset(struct mlx5_vdpa_virtq *virtq)
int retries = MLX5_VDPA_INTR_RETRIES;
int ret = -EAGAIN;
- if (virtq->intr_handle.fd != -1) {
+ if (rte_intr_fd_get(virtq->intr_handle) != -1) {
while (retries-- && ret == -EAGAIN) {
- ret = rte_intr_callback_unregister(&virtq->intr_handle,
+ ret = rte_intr_callback_unregister(virtq->intr_handle,
mlx5_vdpa_virtq_handler,
virtq);
if (ret == -EAGAIN) {
DRV_LOG(DEBUG, "Try again to unregister fd %d "
- "of virtq %d interrupt, retries = %d.",
- virtq->intr_handle.fd,
- (int)virtq->index, retries);
+ "of virtq %d interrupt, retries = %d.",
+ rte_intr_fd_get(virtq->intr_handle),
+ (int)virtq->index, retries);
+
usleep(MLX5_VDPA_INTR_RETRIES_USEC);
}
}
- virtq->intr_handle.fd = -1;
+ rte_intr_fd_set(virtq->intr_handle, -1);
}
+ if (virtq->intr_handle)
+ rte_intr_instance_free(virtq->intr_handle);
if (virtq->virtq) {
ret = mlx5_vdpa_virtq_stop(virtq->priv, virtq->index);
if (ret)
@@ -336,21 +340,33 @@ mlx5_vdpa_virtq_setup(struct mlx5_vdpa_priv *priv, int index)
virtq->priv = priv;
rte_write32(virtq->index, priv->virtq_db_addr);
/* Setup doorbell mapping. */
- virtq->intr_handle.fd = vq.kickfd;
- if (virtq->intr_handle.fd == -1) {
+ virtq->intr_handle =
+ rte_intr_instance_alloc(RTE_INTR_ALLOC_DPDK_ALLOCATOR);
+ if (!virtq->intr_handle) {
+ DRV_LOG(ERR, "Fail to allocate intr_handle");
+ goto error;
+ }
+
+ if (rte_intr_fd_set(virtq->intr_handle, vq.kickfd))
+ goto error;
+
+ if (rte_intr_fd_get(virtq->intr_handle) == -1) {
DRV_LOG(WARNING, "Virtq %d kickfd is invalid.", index);
} else {
- virtq->intr_handle.type = RTE_INTR_HANDLE_EXT;
- if (rte_intr_callback_register(&virtq->intr_handle,
+ if (rte_intr_type_set(virtq->intr_handle,
+ RTE_INTR_HANDLE_EXT))
+ goto error;
+ if (rte_intr_callback_register(virtq->intr_handle,
mlx5_vdpa_virtq_handler,
virtq)) {
- virtq->intr_handle.fd = -1;
+ rte_intr_fd_set(virtq->intr_handle, -1);
DRV_LOG(ERR, "Failed to register virtq %d interrupt.",
index);
goto error;
} else {
DRV_LOG(DEBUG, "Register fd %d interrupt for virtq %d.",
- virtq->intr_handle.fd, index);
+ rte_intr_fd_get(virtq->intr_handle),
+ index);
}
}
/* Subscribe virtq error event. */
@@ -501,7 +517,8 @@ mlx5_vdpa_virtq_is_modified(struct mlx5_vdpa_priv *priv,
if (ret)
return -1;
- if (vq.size != virtq->vq_size || vq.kickfd != virtq->intr_handle.fd)
+ if (vq.size != virtq->vq_size || vq.kickfd !=
+ rte_intr_fd_get(virtq->intr_handle))
return 1;
if (virtq->eqp.cq.cq_obj.cq) {
if (vq.callfd != virtq->eqp.cq.callfd)
diff --git a/lib/bbdev/rte_bbdev.c b/lib/bbdev/rte_bbdev.c
index fc37236195..b97aad4ae9 100644
--- a/lib/bbdev/rte_bbdev.c
+++ b/lib/bbdev/rte_bbdev.c
@@ -1093,7 +1093,7 @@ rte_bbdev_queue_intr_ctl(uint16_t dev_id, uint16_t queue_id, int epfd, int op,
VALID_QUEUE_OR_RET_ERR(queue_id, dev);
intr_handle = dev->intr_handle;
- if (!intr_handle || !intr_handle->intr_vec) {
+ if (!intr_handle) {
rte_bbdev_log(ERR, "Device %u intr handle unset\n", dev_id);
return -ENOTSUP;
}
@@ -1104,7 +1104,7 @@ rte_bbdev_queue_intr_ctl(uint16_t dev_id, uint16_t queue_id, int epfd, int op,
return -ENOTSUP;
}
- vec = intr_handle->intr_vec[queue_id];
+ vec = rte_intr_vec_list_index_get(intr_handle, queue_id);
ret = rte_intr_rx_ctl(intr_handle, epfd, op, vec, data);
if (ret && (ret != -EEXIST)) {
rte_bbdev_log(ERR,
diff --git a/lib/eal/freebsd/eal_alarm.c b/lib/eal/freebsd/eal_alarm.c
index c38b2e04f8..333dbd743b 100644
--- a/lib/eal/freebsd/eal_alarm.c
+++ b/lib/eal/freebsd/eal_alarm.c
@@ -32,7 +32,7 @@
struct alarm_entry {
LIST_ENTRY(alarm_entry) next;
- struct rte_intr_handle handle;
+ struct rte_intr_handle *handle;
struct timespec time;
rte_eal_alarm_callback cb_fn;
void *cb_arg;
@@ -43,22 +43,43 @@ struct alarm_entry {
static LIST_HEAD(alarm_list, alarm_entry) alarm_list = LIST_HEAD_INITIALIZER();
static rte_spinlock_t alarm_list_lk = RTE_SPINLOCK_INITIALIZER;
-static struct rte_intr_handle intr_handle = {.fd = -1 };
+static struct rte_intr_handle *intr_handle;
static void eal_alarm_callback(void *arg);
int
rte_eal_alarm_init(void)
{
- intr_handle.type = RTE_INTR_HANDLE_ALARM;
+ int fd;
+
+ intr_handle = rte_intr_instance_alloc(RTE_INTR_ALLOC_TRAD_HEAP);
+ if (!intr_handle) {
+ RTE_LOG(ERR, EAL, "Fail to allocate intr_handle\n");
+ goto error;
+ }
+
+ if (rte_intr_type_set(intr_handle, RTE_INTR_HANDLE_ALARM))
+ goto error;
+
+ if (rte_intr_fd_set(intr_handle, -1))
+ goto error;
/* on FreeBSD, timers don't use fd's, and their identifiers are stored
* in separate namespace from fd's, so using any value is OK. however,
* EAL interrupts handler expects fd's to be unique, so use an actual fd
* to guarantee unique timer identifier.
*/
- intr_handle.fd = open("/dev/zero", O_RDONLY);
+ fd = open("/dev/zero", O_RDONLY);
+
+ if (rte_intr_fd_set(intr_handle, fd))
+ goto error;
return 0;
+error:
+ if (intr_handle)
+ rte_intr_instance_free(intr_handle);
+
+ rte_intr_fd_set(intr_handle, -1);
+ return -1;
}
static inline int
@@ -118,7 +139,7 @@ unregister_current_callback(void)
ap = LIST_FIRST(&alarm_list);
do {
- ret = rte_intr_callback_unregister(&intr_handle,
+ ret = rte_intr_callback_unregister(intr_handle,
eal_alarm_callback, &ap->time);
} while (ret == -EAGAIN);
}
@@ -136,7 +157,7 @@ register_first_callback(void)
ap = LIST_FIRST(&alarm_list);
/* register a new callback */
- ret = rte_intr_callback_register(&intr_handle,
+ ret = rte_intr_callback_register(intr_handle,
eal_alarm_callback, &ap->time);
}
return ret;
@@ -164,6 +185,8 @@ eal_alarm_callback(void *arg __rte_unused)
rte_spinlock_lock(&alarm_list_lk);
LIST_REMOVE(ap, next);
+ if (ap->handle)
+ rte_intr_instance_free(ap->handle);
free(ap);
ap = LIST_FIRST(&alarm_list);
@@ -202,6 +225,10 @@ rte_eal_alarm_set(uint64_t us, rte_eal_alarm_callback cb_fn, void *cb_arg)
new_alarm->time.tv_nsec = (now.tv_nsec + ns) % NS_PER_S;
new_alarm->time.tv_sec = now.tv_sec + ((now.tv_nsec + ns) / NS_PER_S);
+ new_alarm->handle = rte_intr_instance_alloc(RTE_INTR_ALLOC_TRAD_HEAP);
+ if (new_alarm->handle == NULL)
+ return -ENOMEM;
+
rte_spinlock_lock(&alarm_list_lk);
if (LIST_EMPTY(&alarm_list))
@@ -256,6 +283,9 @@ rte_eal_alarm_cancel(rte_eal_alarm_callback cb_fn, void *cb_arg)
if (ap->executing == 0) {
LIST_REMOVE(ap, next);
free(ap);
+ if (ap->handle)
+ rte_intr_instance_free(
+ ap->handle);
count++;
} else {
/* If calling from other context, mark that
@@ -282,6 +312,9 @@ rte_eal_alarm_cancel(rte_eal_alarm_callback cb_fn, void *cb_arg)
cb_arg == ap->cb_arg)) {
if (ap->executing == 0) {
LIST_REMOVE(ap, next);
+ if (ap->handle)
+ rte_intr_instance_free(
+ ap->handle);
free(ap);
count++;
ap = ap_prev;
diff --git a/lib/eal/include/rte_eal_trace.h b/lib/eal/include/rte_eal_trace.h
index 495ae1ee1d..792872dffd 100644
--- a/lib/eal/include/rte_eal_trace.h
+++ b/lib/eal/include/rte_eal_trace.h
@@ -149,11 +149,7 @@ RTE_TRACE_POINT(
RTE_TRACE_POINT_ARGS(const struct rte_intr_handle *handle,
rte_intr_callback_fn cb, void *cb_arg, int rc),
rte_trace_point_emit_int(rc);
- rte_trace_point_emit_int(handle->vfio_dev_fd);
- rte_trace_point_emit_int(handle->fd);
- rte_trace_point_emit_int(handle->type);
- rte_trace_point_emit_u32(handle->max_intr);
- rte_trace_point_emit_u32(handle->nb_efd);
+ rte_trace_point_emit_ptr(handle);
rte_trace_point_emit_ptr(cb);
rte_trace_point_emit_ptr(cb_arg);
)
@@ -162,11 +158,7 @@ RTE_TRACE_POINT(
RTE_TRACE_POINT_ARGS(const struct rte_intr_handle *handle,
rte_intr_callback_fn cb, void *cb_arg, int rc),
rte_trace_point_emit_int(rc);
- rte_trace_point_emit_int(handle->vfio_dev_fd);
- rte_trace_point_emit_int(handle->fd);
- rte_trace_point_emit_int(handle->type);
- rte_trace_point_emit_u32(handle->max_intr);
- rte_trace_point_emit_u32(handle->nb_efd);
+ rte_trace_point_emit_ptr(handle);
rte_trace_point_emit_ptr(cb);
rte_trace_point_emit_ptr(cb_arg);
)
@@ -174,21 +166,13 @@ RTE_TRACE_POINT(
rte_eal_trace_intr_enable,
RTE_TRACE_POINT_ARGS(const struct rte_intr_handle *handle, int rc),
rte_trace_point_emit_int(rc);
- rte_trace_point_emit_int(handle->vfio_dev_fd);
- rte_trace_point_emit_int(handle->fd);
- rte_trace_point_emit_int(handle->type);
- rte_trace_point_emit_u32(handle->max_intr);
- rte_trace_point_emit_u32(handle->nb_efd);
+ rte_trace_point_emit_ptr(handle);
)
RTE_TRACE_POINT(
rte_eal_trace_intr_disable,
RTE_TRACE_POINT_ARGS(const struct rte_intr_handle *handle, int rc),
rte_trace_point_emit_int(rc);
- rte_trace_point_emit_int(handle->vfio_dev_fd);
- rte_trace_point_emit_int(handle->fd);
- rte_trace_point_emit_int(handle->type);
- rte_trace_point_emit_u32(handle->max_intr);
- rte_trace_point_emit_u32(handle->nb_efd);
+ rte_trace_point_emit_ptr(handle);
)
/* Memory */
diff --git a/lib/eal/linux/eal_alarm.c b/lib/eal/linux/eal_alarm.c
index 3252c6fa59..5856bc08ce 100644
--- a/lib/eal/linux/eal_alarm.c
+++ b/lib/eal/linux/eal_alarm.c
@@ -54,22 +54,35 @@ struct alarm_entry {
static LIST_HEAD(alarm_list, alarm_entry) alarm_list = LIST_HEAD_INITIALIZER();
static rte_spinlock_t alarm_list_lk = RTE_SPINLOCK_INITIALIZER;
-static struct rte_intr_handle intr_handle = {.fd = -1 };
+static struct rte_intr_handle *intr_handle;
static int handler_registered = 0;
static void eal_alarm_callback(void *arg);
int
rte_eal_alarm_init(void)
{
- intr_handle.type = RTE_INTR_HANDLE_ALARM;
+
+ intr_handle = rte_intr_instance_alloc(RTE_INTR_ALLOC_TRAD_HEAP);
+ if (!intr_handle) {
+ RTE_LOG(ERR, EAL, "Fail to allocate intr_handle\n");
+ goto error;
+ }
+
+ rte_intr_type_set(intr_handle, RTE_INTR_HANDLE_ALARM);
+
/* create a timerfd file descriptor */
- intr_handle.fd = timerfd_create(CLOCK_MONOTONIC, TFD_NONBLOCK);
- if (intr_handle.fd == -1)
+ if (rte_intr_fd_set(intr_handle,
+ timerfd_create(CLOCK_MONOTONIC, TFD_NONBLOCK)))
goto error;
+ if (rte_intr_fd_get(intr_handle) == -1)
+ goto error;
return 0;
error:
+ if (intr_handle)
+ rte_intr_instance_free(intr_handle);
+
rte_errno = errno;
return -1;
}
@@ -109,7 +122,8 @@ eal_alarm_callback(void *arg __rte_unused)
atime.it_value.tv_sec -= now.tv_sec;
atime.it_value.tv_nsec -= now.tv_nsec;
- timerfd_settime(intr_handle.fd, 0, &atime, NULL);
+ timerfd_settime(rte_intr_fd_get(intr_handle), 0, &atime,
+ NULL);
}
rte_spinlock_unlock(&alarm_list_lk);
}
@@ -140,7 +154,7 @@ rte_eal_alarm_set(uint64_t us, rte_eal_alarm_callback cb_fn, void *cb_arg)
rte_spinlock_lock(&alarm_list_lk);
if (!handler_registered) {
/* registration can fail, callback can be registered later */
- if (rte_intr_callback_register(&intr_handle,
+ if (rte_intr_callback_register(intr_handle,
eal_alarm_callback, NULL) == 0)
handler_registered = 1;
}
@@ -170,7 +184,8 @@ rte_eal_alarm_set(uint64_t us, rte_eal_alarm_callback cb_fn, void *cb_arg)
.tv_nsec = (us % US_PER_S) * NS_PER_US,
},
};
- ret |= timerfd_settime(intr_handle.fd, 0, &alarm_time, NULL);
+ ret |= timerfd_settime(rte_intr_fd_get(intr_handle), 0,
+ &alarm_time, NULL);
}
rte_spinlock_unlock(&alarm_list_lk);
diff --git a/lib/eal/linux/eal_dev.c b/lib/eal/linux/eal_dev.c
index 3b905e18f5..a9935ebba7 100644
--- a/lib/eal/linux/eal_dev.c
+++ b/lib/eal/linux/eal_dev.c
@@ -23,10 +23,7 @@
#include "eal_private.h"
-static struct rte_intr_handle intr_handle = {
- .type = RTE_INTR_HANDLE_DEV_EVENT,
- .fd = -1,
-};
+static struct rte_intr_handle *intr_handle;
static rte_rwlock_t monitor_lock = RTE_RWLOCK_INITIALIZER;
static uint32_t monitor_refcount;
static bool hotplug_handle;
@@ -109,12 +106,11 @@ static int
dev_uev_socket_fd_create(void)
{
struct sockaddr_nl addr;
- int ret;
+ int ret, fd;
- intr_handle.fd = socket(PF_NETLINK, SOCK_RAW | SOCK_CLOEXEC |
- SOCK_NONBLOCK,
- NETLINK_KOBJECT_UEVENT);
- if (intr_handle.fd < 0) {
+ fd = socket(PF_NETLINK, SOCK_RAW | SOCK_CLOEXEC | SOCK_NONBLOCK,
+ NETLINK_KOBJECT_UEVENT);
+ if (fd < 0) {
RTE_LOG(ERR, EAL, "create uevent fd failed.\n");
return -1;
}
@@ -124,16 +120,19 @@ dev_uev_socket_fd_create(void)
addr.nl_pid = 0;
addr.nl_groups = 0xffffffff;
- ret = bind(intr_handle.fd, (struct sockaddr *) &addr, sizeof(addr));
+ ret = bind(fd, (struct sockaddr *) &addr, sizeof(addr));
if (ret < 0) {
RTE_LOG(ERR, EAL, "Failed to bind uevent socket.\n");
goto err;
}
+ if (rte_intr_fd_set(intr_handle, fd))
+ goto err;
+
return 0;
err:
- close(intr_handle.fd);
- intr_handle.fd = -1;
+ close(fd);
+ fd = -1;
return ret;
}
@@ -217,9 +216,9 @@ dev_uev_parse(const char *buf, struct rte_dev_event *event, int length)
static void
dev_delayed_unregister(void *param)
{
- rte_intr_callback_unregister(&intr_handle, dev_uev_handler, param);
- close(intr_handle.fd);
- intr_handle.fd = -1;
+ rte_intr_callback_unregister(intr_handle, dev_uev_handler, param);
+ close(rte_intr_fd_get(intr_handle));
+ rte_intr_fd_set(intr_handle, -1);
}
static void
@@ -235,7 +234,8 @@ dev_uev_handler(__rte_unused void *param)
memset(&uevent, 0, sizeof(struct rte_dev_event));
memset(buf, 0, EAL_UEV_MSG_LEN);
- ret = recv(intr_handle.fd, buf, EAL_UEV_MSG_LEN, MSG_DONTWAIT);
+ ret = recv(rte_intr_fd_get(intr_handle), buf, EAL_UEV_MSG_LEN,
+ MSG_DONTWAIT);
if (ret < 0 && errno == EAGAIN)
return;
else if (ret <= 0) {
@@ -311,24 +311,38 @@ rte_dev_event_monitor_start(void)
goto exit;
}
+ intr_handle = rte_intr_instance_alloc(RTE_INTR_ALLOC_TRAD_HEAP);
+ if (!intr_handle) {
+ RTE_LOG(ERR, EAL, "Fail to allocate intr_handle\n");
+ goto exit;
+ }
+
+ if (rte_intr_type_set(intr_handle, RTE_INTR_HANDLE_DEV_EVENT))
+ goto exit;
+
+ if (rte_intr_fd_set(intr_handle, -1))
+ goto exit;
+
ret = dev_uev_socket_fd_create();
if (ret) {
RTE_LOG(ERR, EAL, "error create device event fd.\n");
goto exit;
}
- ret = rte_intr_callback_register(&intr_handle, dev_uev_handler, NULL);
+ ret = rte_intr_callback_register(intr_handle, dev_uev_handler, NULL);
if (ret) {
- RTE_LOG(ERR, EAL, "fail to register uevent callback.\n");
- close(intr_handle.fd);
- intr_handle.fd = -1;
+ close(rte_intr_fd_get(intr_handle));
goto exit;
}
monitor_refcount++;
exit:
+ if (intr_handle) {
+ rte_intr_fd_set(intr_handle, -1);
+ rte_intr_instance_free(intr_handle);
+ }
rte_rwlock_write_unlock(&monitor_lock);
return ret;
}
@@ -350,15 +364,18 @@ rte_dev_event_monitor_stop(void)
goto exit;
}
- ret = rte_intr_callback_unregister(&intr_handle, dev_uev_handler,
+ ret = rte_intr_callback_unregister(intr_handle, dev_uev_handler,
(void *)-1);
if (ret < 0) {
RTE_LOG(ERR, EAL, "fail to unregister uevent callback.\n");
goto exit;
}
- close(intr_handle.fd);
- intr_handle.fd = -1;
+ close(rte_intr_fd_get(intr_handle));
+ rte_intr_fd_set(intr_handle, -1);
+
+ if (intr_handle)
+ rte_intr_instance_free(intr_handle);
monitor_refcount--;
diff --git a/lib/ethdev/ethdev_pci.h b/lib/ethdev/ethdev_pci.h
index 8edca82ce8..eff072ac16 100644
--- a/lib/ethdev/ethdev_pci.h
+++ b/lib/ethdev/ethdev_pci.h
@@ -32,7 +32,7 @@ rte_eth_copy_pci_info(struct rte_eth_dev *eth_dev,
return;
}
- eth_dev->intr_handle = &pci_dev->intr_handle;
+ eth_dev->intr_handle = pci_dev->intr_handle;
if (rte_eal_process_type() == RTE_PROC_PRIMARY) {
eth_dev->data->dev_flags = 0;
diff --git a/lib/ethdev/rte_ethdev.c b/lib/ethdev/rte_ethdev.c
index daf5ca9242..0fa324d868 100644
--- a/lib/ethdev/rte_ethdev.c
+++ b/lib/ethdev/rte_ethdev.c
@@ -4777,13 +4777,13 @@ rte_eth_dev_rx_intr_ctl(uint16_t port_id, int epfd, int op, void *data)
}
intr_handle = dev->intr_handle;
- if (!intr_handle->intr_vec) {
+ if (rte_intr_vec_list_index_get(intr_handle, 0) < 0) {
RTE_ETHDEV_LOG(ERR, "RX Intr vector unset\n");
return -EPERM;
}
for (qid = 0; qid < dev->data->nb_rx_queues; qid++) {
- vec = intr_handle->intr_vec[qid];
+ vec = rte_intr_vec_list_index_get(intr_handle, qid);
rc = rte_intr_rx_ctl(intr_handle, epfd, op, vec, data);
if (rc && rc != -EEXIST) {
RTE_ETHDEV_LOG(ERR,
@@ -4818,15 +4818,15 @@ rte_eth_dev_rx_intr_ctl_q_get_fd(uint16_t port_id, uint16_t queue_id)
}
intr_handle = dev->intr_handle;
- if (!intr_handle->intr_vec) {
+ if (rte_intr_vec_list_index_get(intr_handle, 0) < 0) {
RTE_ETHDEV_LOG(ERR, "RX Intr vector unset\n");
return -1;
}
- vec = intr_handle->intr_vec[queue_id];
+ vec = rte_intr_vec_list_index_get(intr_handle, queue_id);
efd_idx = (vec >= RTE_INTR_VEC_RXTX_OFFSET) ?
(vec - RTE_INTR_VEC_RXTX_OFFSET) : vec;
- fd = intr_handle->efds[efd_idx];
+ fd = rte_intr_efds_index_get(intr_handle, efd_idx);
return fd;
}
@@ -5004,12 +5004,12 @@ rte_eth_dev_rx_intr_ctl_q(uint16_t port_id, uint16_t queue_id,
}
intr_handle = dev->intr_handle;
- if (!intr_handle->intr_vec) {
+ if (rte_intr_vec_list_index_get(intr_handle, 0) < 0) {
RTE_ETHDEV_LOG(ERR, "RX Intr vector unset\n");
return -EPERM;
}
- vec = intr_handle->intr_vec[queue_id];
+ vec = rte_intr_vec_list_index_get(intr_handle, queue_id);
rc = rte_intr_rx_ctl(intr_handle, epfd, op, vec, data);
if (rc && rc != -EEXIST) {
RTE_ETHDEV_LOG(ERR,
--
2.18.0
^ permalink raw reply [flat|nested] 152+ messages in thread
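For reference, the driver-side pattern that the hunks above converge on is roughly
the following. This is a condensed sketch assembled from the preceding hunks rather
than any single driver; error values, logging and per-driver cleanup paths are
trimmed, and "pmd"/"dev" stand in for the usual driver private data and ethdev
pointers.

	/* Probe: the handle is now allocated rather than embedded in the
	 * driver's private structure.
	 */
	pmd->intr_handle =
		rte_intr_instance_alloc(RTE_INTR_ALLOC_DPDK_ALLOCATOR);
	if (pmd->intr_handle == NULL)
		return -ENOMEM;
	rte_intr_type_set(pmd->intr_handle, RTE_INTR_HANDLE_EXT);
	rte_intr_fd_set(pmd->intr_handle, -1);

	/* Start: the Rx interrupt vector list is allocated through the
	 * wrapper instead of an open-coded rte_zmalloc() of intr_vec.
	 */
	if (rte_intr_vec_list_alloc(pmd->intr_handle, "intr_vec",
				    dev->data->nb_rx_queues))
		return -ENOMEM;

	/* Stop/close: free the vector list and then the instance itself. */
	rte_intr_vec_list_free(pmd->intr_handle);
	rte_intr_instance_free(pmd->intr_handle);
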
* [dpdk-dev] [PATCH v2 5/6] eal/interrupts: make interrupt handle structure opaque
2021-10-05 12:14 ` [dpdk-dev] [PATCH v2 0/6] " Harman Kalra
` (3 preceding siblings ...)
2021-10-05 12:15 ` [dpdk-dev] [PATCH v2 4/6] drivers: remove direct access to interrupt handle Harman Kalra
@ 2021-10-05 12:15 ` Harman Kalra
2021-10-05 12:15 ` [dpdk-dev] [PATCH v2 6/6] eal/alarm: introduce alarm fini routine Harman Kalra
5 siblings, 0 replies; 152+ messages in thread
From: Harman Kalra @ 2021-10-05 12:15 UTC (permalink / raw)
To: dev, Anatoly Burakov, Harman Kalra; +Cc: david.marchand, dmitry.kozliuk, mdr
Moving the interrupt handle structure definition inside the .c file
to make its fields totally opaque to the outside world.
Dynamically allocating the efds and elist arrays of the intr_handle
structure, based on a size provided by the user, e.g. the number of
MSI-X interrupts supported by a PCI device.
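
As a rough sketch of the resulting flow (API names as introduced in this
series, error handling trimmed; "msix_count" is only illustrative and stands
for the vector count read from the device at probe time):

	struct rte_intr_handle *handle;

	handle = rte_intr_instance_alloc(RTE_INTR_ALLOC_DPDK_ALLOCATOR);
	if (handle == NULL)
		return -1;

	/* Grow the efds/elist arrays from the default
	 * RTE_MAX_RXTX_INTR_VEC_ID entries to the MSI-X count reported
	 * by the device (see the pci_vfio hunk below).
	 */
	if (rte_intr_event_list_update(handle, msix_count))
		return -1;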
Signed-off-by: Harman Kalra <hkalra@marvell.com>
---
drivers/bus/pci/linux/pci_vfio.c | 7 +
lib/eal/common/eal_common_interrupts.c | 189 ++++++++++++++++++++++++-
lib/eal/include/meson.build | 1 -
lib/eal/include/rte_eal_interrupts.h | 72 ----------
lib/eal/include/rte_interrupts.h | 24 +++-
5 files changed, 212 insertions(+), 81 deletions(-)
delete mode 100644 lib/eal/include/rte_eal_interrupts.h
diff --git a/drivers/bus/pci/linux/pci_vfio.c b/drivers/bus/pci/linux/pci_vfio.c
index c8da3e2fe8..f274aa4aab 100644
--- a/drivers/bus/pci/linux/pci_vfio.c
+++ b/drivers/bus/pci/linux/pci_vfio.c
@@ -266,6 +266,13 @@ pci_vfio_setup_interrupts(struct rte_pci_device *dev, int vfio_dev_fd)
return -1;
}
+ /* Reallocate the efds and elist fields of intr_handle based
+ * on PCI device MSIX size.
+ */
+ if (rte_intr_event_list_update(dev->intr_handle,
+ irq.count))
+ return -1;
+
/* if this vector cannot be used with eventfd, fail if we explicitly
* specified interrupt type, otherwise continue */
if ((irq.flags & VFIO_IRQ_INFO_EVENTFD) == 0) {
diff --git a/lib/eal/common/eal_common_interrupts.c b/lib/eal/common/eal_common_interrupts.c
index 9b572a805f..a5311a0299 100644
--- a/lib/eal/common/eal_common_interrupts.c
+++ b/lib/eal/common/eal_common_interrupts.c
@@ -11,6 +11,29 @@
#include <rte_interrupts.h>
+struct rte_intr_handle {
+ RTE_STD_C11
+ union {
+ struct {
+ /** VFIO/UIO cfg device file descriptor */
+ int dev_fd;
+ int fd; /**< interrupt event file descriptor */
+ };
+ void *handle; /**< device driver handle (Windows) */
+ };
+ bool mem_allocator;
+ enum rte_intr_handle_type type; /**< handle type */
+ uint32_t max_intr; /**< max interrupt requested */
+ uint32_t nb_efd; /**< number of available efd(event fd) */
+ uint8_t efd_counter_size; /**< size of efd counter, used for vdev */
+ uint16_t nb_intr;
+ /**< Max vector count, default RTE_MAX_RXTX_INTR_VEC_ID */
+ int *efds; /**< intr vectors/efds mapping */
+ struct rte_epoll_event *elist; /**< intr vector epoll event */
+ uint16_t vec_list_size;
+ int *intr_vec; /**< intr vector number array */
+};
+
struct rte_intr_handle *rte_intr_instance_alloc(uint32_t flags)
{
@@ -29,15 +52,52 @@ struct rte_intr_handle *rte_intr_instance_alloc(uint32_t flags)
return NULL;
}
+ if (mem_allocator)
+ intr_handle->efds = rte_zmalloc(NULL,
+ RTE_MAX_RXTX_INTR_VEC_ID *
+ sizeof(uint32_t), 0);
+ else
+ intr_handle->efds = calloc(RTE_MAX_RXTX_INTR_VEC_ID,
+ sizeof(uint32_t));
+ if (!intr_handle->efds) {
+ RTE_LOG(ERR, EAL, "Fail to allocate event fd list\n");
+ rte_errno = ENOMEM;
+ goto fail;
+ }
+
+ if (mem_allocator)
+ intr_handle->elist =
+ rte_zmalloc(NULL, RTE_MAX_RXTX_INTR_VEC_ID *
+ sizeof(struct rte_epoll_event), 0);
+ else
+ intr_handle->elist = calloc(RTE_MAX_RXTX_INTR_VEC_ID,
+ sizeof(struct rte_epoll_event));
+ if (!intr_handle->elist) {
+ RTE_LOG(ERR, EAL, "Failed to allocate the event list\n");
+ rte_errno = ENOMEM;
+ goto fail;
+ }
intr_handle->nb_intr = RTE_MAX_RXTX_INTR_VEC_ID;
intr_handle->mem_allocator = mem_allocator;
return intr_handle;
+fail:
+ if (intr_handle->mem_allocator) {
+ rte_free(intr_handle->efds);
+ rte_free(intr_handle);
+ } else {
+ free(intr_handle->efds);
+ free(intr_handle);
+ }
+ return NULL;
}
int rte_intr_instance_copy(struct rte_intr_handle *intr_handle,
const struct rte_intr_handle *src)
{
+ struct rte_epoll_event *tmp_elist;
+ int *tmp_efds;
+
if (intr_handle == NULL) {
RTE_LOG(ERR, EAL, "Interrupt instance unallocated\n");
rte_errno = ENOTSUP;
@@ -51,16 +111,51 @@ int rte_intr_instance_copy(struct rte_intr_handle *intr_handle,
}
intr_handle->fd = src->fd;
- intr_handle->vfio_dev_fd = src->vfio_dev_fd;
+ intr_handle->dev_fd = src->dev_fd;
intr_handle->type = src->type;
+ intr_handle->mem_allocator = src->mem_allocator;
intr_handle->max_intr = src->max_intr;
intr_handle->nb_efd = src->nb_efd;
intr_handle->efd_counter_size = src->efd_counter_size;
+ if (intr_handle->nb_intr != src->nb_intr) {
+ if (src->mem_allocator)
+ tmp_efds = rte_realloc(intr_handle->efds, src->nb_intr *
+ sizeof(uint32_t), 0);
+ else
+ tmp_efds = realloc(intr_handle->efds, src->nb_intr *
+ sizeof(uint32_t));
+ if (tmp_efds == NULL) {
+ RTE_LOG(ERR, EAL, "Failed to realloc the efds list");
+ rte_errno = ENOMEM;
+ goto fail;
+ }
+
+ if (src->mem_allocator)
+ tmp_elist = rte_realloc(intr_handle->elist,
+ src->nb_intr *
+ sizeof(struct rte_epoll_event),
+ 0);
+ else
+ tmp_elist = realloc(intr_handle->elist, src->nb_intr *
+ sizeof(struct rte_epoll_event));
+ if (tmp_elist == NULL) {
+ RTE_LOG(ERR, EAL, "Failed to realloc the event list");
+ rte_errno = ENOMEM;
+ goto up_efds;
+ }
+
+ intr_handle->efds = tmp_efds;
+ intr_handle->elist = tmp_elist;
+ intr_handle->nb_intr = src->nb_intr;
+ }
+
memcpy(intr_handle->efds, src->efds, src->nb_intr);
memcpy(intr_handle->elist, src->elist, src->nb_intr);
return 0;
+up_efds:
+ intr_handle->efds = tmp_efds;
fail:
return -rte_errno;
}
@@ -76,17 +171,77 @@ int rte_intr_instance_mem_allocator_get(
return intr_handle->mem_allocator;
}
-void rte_intr_instance_free(struct rte_intr_handle *intr_handle)
+int rte_intr_event_list_update(struct rte_intr_handle *intr_handle,
+ int size)
{
+ struct rte_epoll_event *tmp_elist;
+ int *tmp_efds;
+
if (intr_handle == NULL) {
RTE_LOG(ERR, EAL, "Interrupt instance unallocated\n");
rte_errno = ENOTSUP;
+ goto fail;
+ }
+
+ if (size == 0) {
+ RTE_LOG(ERR, EAL, "Size can't be zero\n");
+ rte_errno = EINVAL;
+ goto fail;
}
if (intr_handle->mem_allocator)
- rte_free(intr_handle);
+ tmp_efds = rte_realloc(intr_handle->efds, size *
+ sizeof(uint32_t), 0);
+ else
+ tmp_efds = realloc(intr_handle->efds, size *
+ sizeof(uint32_t));
+ if (tmp_efds == NULL) {
+ RTE_LOG(ERR, EAL, "Failed to realloc the efds list");
+ rte_errno = ENOMEM;
+ goto fail;
+ }
+
+ if (intr_handle->mem_allocator)
+ tmp_elist = rte_realloc(intr_handle->elist, size *
+ sizeof(struct rte_epoll_event),
+ 0);
else
+ tmp_elist = realloc(intr_handle->elist, size *
+ sizeof(struct rte_epoll_event));
+ if (tmp_elist == NULL) {
+ RTE_LOG(ERR, EAL, "Failed to realloc the event list");
+ rte_errno = ENOMEM;
+ goto up_efds;
+ }
+
+ intr_handle->efds = tmp_efds;
+ intr_handle->elist = tmp_elist;
+ intr_handle->nb_intr = size;
+
+ return 0;
+up_efds:
+ intr_handle->efds = tmp_efds;
+fail:
+ return -rte_errno;
+}
+
+
+void rte_intr_instance_free(struct rte_intr_handle *intr_handle)
+{
+ if (intr_handle == NULL) {
+ RTE_LOG(ERR, EAL, "Interrupt instance unallocated\n");
+ rte_errno = ENOTSUP;
+ }
+
+ if (intr_handle->mem_allocator) {
+ rte_free(intr_handle->efds);
+ rte_free(intr_handle->elist);
+ rte_free(intr_handle);
+ } else {
+ free(intr_handle->efds);
+ free(intr_handle->elist);
free(intr_handle);
+ }
}
int rte_intr_fd_set(struct rte_intr_handle *intr_handle, int fd)
@@ -153,7 +308,7 @@ int rte_intr_dev_fd_set(struct rte_intr_handle *intr_handle, int fd)
goto fail;
}
- intr_handle->vfio_dev_fd = fd;
+ intr_handle->dev_fd = fd;
return 0;
fail:
@@ -168,7 +323,7 @@ int rte_intr_dev_fd_get(const struct rte_intr_handle *intr_handle)
goto fail;
}
- return intr_handle->vfio_dev_fd;
+ return intr_handle->dev_fd;
fail:
return -1;
}
@@ -289,6 +444,12 @@ int rte_intr_efds_index_get(const struct rte_intr_handle *intr_handle,
goto fail;
}
+ if (!intr_handle->efds) {
+ RTE_LOG(ERR, EAL, "Event fd list not allocated\n");
+ rte_errno = EFAULT;
+ goto fail;
+ }
+
if (index >= intr_handle->nb_intr) {
RTE_LOG(ERR, EAL, "Invalid size %d, max limit %d\n", index,
intr_handle->nb_intr);
@@ -310,6 +471,12 @@ int rte_intr_efds_index_set(struct rte_intr_handle *intr_handle,
goto fail;
}
+ if (!intr_handle->efds) {
+ RTE_LOG(ERR, EAL, "Event fd list not allocated\n");
+ rte_errno = EFAULT;
+ goto fail;
+ }
+
if (index >= intr_handle->nb_intr) {
RTE_LOG(ERR, EAL, "Invalid size %d, max limit %d\n", index,
intr_handle->nb_intr);
@@ -333,6 +500,12 @@ struct rte_epoll_event *rte_intr_elist_index_get(
goto fail;
}
+ if (!intr_handle->elist) {
+ RTE_LOG(ERR, EAL, "Event list not allocated\n");
+ rte_errno = ENOTSUP;
+ goto fail;
+ }
+
if (index >= intr_handle->nb_intr) {
RTE_LOG(ERR, EAL, "Invalid size %d, max limit %d\n", index,
intr_handle->nb_intr);
@@ -354,6 +527,12 @@ int rte_intr_elist_index_set(struct rte_intr_handle *intr_handle,
goto fail;
}
+ if (!intr_handle->elist) {
+ RTE_LOG(ERR, EAL, "Event list not allocated\n");
+ rte_errno = ENOTSUP;
+ goto fail;
+ }
+
if (index >= intr_handle->nb_intr) {
RTE_LOG(ERR, EAL, "Invalid size %d, max limit %d\n", index,
intr_handle->nb_intr);
diff --git a/lib/eal/include/meson.build b/lib/eal/include/meson.build
index 8e258607b8..86468d1a2b 100644
--- a/lib/eal/include/meson.build
+++ b/lib/eal/include/meson.build
@@ -49,7 +49,6 @@ headers += files(
'rte_version.h',
'rte_vfio.h',
)
-indirect_headers += files('rte_eal_interrupts.h')
# special case install the generic headers, since they go in a subdir
generic_headers = files(
diff --git a/lib/eal/include/rte_eal_interrupts.h b/lib/eal/include/rte_eal_interrupts.h
deleted file mode 100644
index b01e987898..0000000000
--- a/lib/eal/include/rte_eal_interrupts.h
+++ /dev/null
@@ -1,72 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(c) 2010-2014 Intel Corporation
- */
-
-#ifndef _RTE_INTERRUPTS_H_
-#error "don't include this file directly, please include generic <rte_interrupts.h>"
-#endif
-
-/**
- * @file rte_eal_interrupts.h
- * @internal
- *
- * Contains function prototypes exposed by the EAL for interrupt handling by
- * drivers and other DPDK internal consumers.
- */
-
-#ifndef _RTE_EAL_INTERRUPTS_H_
-#define _RTE_EAL_INTERRUPTS_H_
-
-#define RTE_MAX_RXTX_INTR_VEC_ID 512
-#define RTE_INTR_VEC_ZERO_OFFSET 0
-#define RTE_INTR_VEC_RXTX_OFFSET 1
-
-/**
- * The interrupt source type, e.g. UIO, VFIO, ALARM etc.
- */
-enum rte_intr_handle_type {
- RTE_INTR_HANDLE_UNKNOWN = 0, /**< generic unknown handle */
- RTE_INTR_HANDLE_UIO, /**< uio device handle */
- RTE_INTR_HANDLE_UIO_INTX, /**< uio generic handle */
- RTE_INTR_HANDLE_VFIO_LEGACY, /**< vfio device handle (legacy) */
- RTE_INTR_HANDLE_VFIO_MSI, /**< vfio device handle (MSI) */
- RTE_INTR_HANDLE_VFIO_MSIX, /**< vfio device handle (MSIX) */
- RTE_INTR_HANDLE_ALARM, /**< alarm handle */
- RTE_INTR_HANDLE_EXT, /**< external handler */
- RTE_INTR_HANDLE_VDEV, /**< virtual device */
- RTE_INTR_HANDLE_DEV_EVENT, /**< device event handle */
- RTE_INTR_HANDLE_VFIO_REQ, /**< VFIO request handle */
- RTE_INTR_HANDLE_MAX /**< count of elements */
-};
-
-/** Handle for interrupts. */
-struct rte_intr_handle {
- RTE_STD_C11
- union {
- struct {
- RTE_STD_C11
- union {
- /** VFIO device file descriptor */
- int vfio_dev_fd;
- /** UIO cfg file desc for uio_pci_generic */
- int uio_cfg_fd;
- };
- int fd; /**< interrupt event file descriptor */
- };
- void *handle; /**< device driver handle (Windows) */
- };
- bool mem_allocator;
- enum rte_intr_handle_type type; /**< handle type */
- uint32_t max_intr; /**< max interrupt requested */
- uint32_t nb_efd; /**< number of available efd(event fd) */
- uint8_t efd_counter_size; /**< size of efd counter, used for vdev */
- uint16_t nb_intr;
- /**< Max vector count, default RTE_MAX_RXTX_INTR_VEC_ID */
- int efds[RTE_MAX_RXTX_INTR_VEC_ID]; /**< intr vectors/efds mapping */
- struct rte_epoll_event elist[RTE_MAX_RXTX_INTR_VEC_ID];
- /**< intr vector epoll event */
- uint16_t vec_list_size;
- int *intr_vec; /**< intr vector number array */
-};
-
-#endif /* _RTE_EAL_INTERRUPTS_H_ */
diff --git a/lib/eal/include/rte_interrupts.h b/lib/eal/include/rte_interrupts.h
index 442b02de8f..367e739f08 100644
--- a/lib/eal/include/rte_interrupts.h
+++ b/lib/eal/include/rte_interrupts.h
@@ -33,9 +33,27 @@ struct rte_intr_handle;
/** Allocate interrupt instance using DPDK memory management APIs */
#define RTE_INTR_ALLOC_DPDK_ALLOCATOR 0x00000001
-#define RTE_INTR_HANDLE_DEFAULT_SIZE 1
-
-#include "rte_eal_interrupts.h"
+#define RTE_MAX_RXTX_INTR_VEC_ID 512
+#define RTE_INTR_VEC_ZERO_OFFSET 0
+#define RTE_INTR_VEC_RXTX_OFFSET 1
+
+/**
+ * The interrupt source type, e.g. UIO, VFIO, ALARM etc.
+ */
+enum rte_intr_handle_type {
+ RTE_INTR_HANDLE_UNKNOWN = 0, /**< generic unknown handle */
+ RTE_INTR_HANDLE_UIO, /**< uio device handle */
+ RTE_INTR_HANDLE_UIO_INTX, /**< uio generic handle */
+ RTE_INTR_HANDLE_VFIO_LEGACY, /**< vfio device handle (legacy) */
+ RTE_INTR_HANDLE_VFIO_MSI, /**< vfio device handle (MSI) */
+ RTE_INTR_HANDLE_VFIO_MSIX, /**< vfio device handle (MSIX) */
+ RTE_INTR_HANDLE_ALARM, /**< alarm handle */
+ RTE_INTR_HANDLE_EXT, /**< external handler */
+ RTE_INTR_HANDLE_VDEV, /**< virtual device */
+ RTE_INTR_HANDLE_DEV_EVENT, /**< device event handle */
+ RTE_INTR_HANDLE_VFIO_REQ, /**< VFIO request handle */
+ RTE_INTR_HANDLE_MAX /**< count of elements */
+};
/** Function to be registered for the specific interrupt */
typedef void (*rte_intr_callback_fn)(void *cb_arg);
--
2.18.0
* [dpdk-dev] [PATCH v2 6/6] eal/alarm: introduce alarm fini routine
2021-10-05 12:14 ` [dpdk-dev] [PATCH v2 0/6] " Harman Kalra
` (4 preceding siblings ...)
2021-10-05 12:15 ` [dpdk-dev] [PATCH v2 5/6] eal/interrupts: make interrupt handle structure opaque Harman Kalra
@ 2021-10-05 12:15 ` Harman Kalra
5 siblings, 0 replies; 152+ messages in thread
From: Harman Kalra @ 2021-10-05 12:15 UTC (permalink / raw)
To: dev, Bruce Richardson; +Cc: david.marchand, dmitry.kozliuk, mdr, Harman Kalra
Implementing an alarm cleanup routine where the memory allocated
for the interrupt instance can be freed.
Signed-off-by: Harman Kalra <hkalra@marvell.com>
---
lib/eal/common/eal_private.h | 11 +++++++++++
lib/eal/freebsd/eal.c | 1 +
lib/eal/freebsd/eal_alarm.c | 7 +++++++
lib/eal/linux/eal.c | 1 +
lib/eal/linux/eal_alarm.c | 10 +++++++++-
5 files changed, 29 insertions(+), 1 deletion(-)
diff --git a/lib/eal/common/eal_private.h b/lib/eal/common/eal_private.h
index 86dab1f057..7fb9bc1324 100644
--- a/lib/eal/common/eal_private.h
+++ b/lib/eal/common/eal_private.h
@@ -163,6 +163,17 @@ int rte_eal_intr_init(void);
*/
int rte_eal_alarm_init(void);
+/**
+ * Cleanup of the alarm mechanism. Frees the interrupt handle
+ * instance allocated in rte_eal_alarm_init().
+ *
+ * This function is private to EAL.
+ *
+ * @return
+ *   None, the function cannot fail.
+ */
+void rte_eal_alarm_fini(void);
+
/**
* Function is to check if the kernel module(like, vfio, vfio_iommu_type1,
* etc.) loaded.
diff --git a/lib/eal/freebsd/eal.c b/lib/eal/freebsd/eal.c
index 6cee5ae369..7efead4f48 100644
--- a/lib/eal/freebsd/eal.c
+++ b/lib/eal/freebsd/eal.c
@@ -973,6 +973,7 @@ rte_eal_cleanup(void)
rte_eal_memory_detach();
rte_trace_save();
eal_trace_fini();
+ rte_eal_alarm_fini();
eal_cleanup_config(internal_conf);
return 0;
}
diff --git a/lib/eal/freebsd/eal_alarm.c b/lib/eal/freebsd/eal_alarm.c
index 333dbd743b..0a5098efc0 100644
--- a/lib/eal/freebsd/eal_alarm.c
+++ b/lib/eal/freebsd/eal_alarm.c
@@ -46,6 +46,13 @@ static rte_spinlock_t alarm_list_lk = RTE_SPINLOCK_INITIALIZER;
static struct rte_intr_handle *intr_handle;
static void eal_alarm_callback(void *arg);
+void
+rte_eal_alarm_fini(void)
+{
+ if (intr_handle)
+ rte_intr_instance_free(intr_handle);
+}
+
int
rte_eal_alarm_init(void)
{
diff --git a/lib/eal/linux/eal.c b/lib/eal/linux/eal.c
index 3577eaeaa4..5c8af85ad5 100644
--- a/lib/eal/linux/eal.c
+++ b/lib/eal/linux/eal.c
@@ -1370,6 +1370,7 @@ rte_eal_cleanup(void)
rte_eal_memory_detach();
rte_trace_save();
eal_trace_fini();
+ rte_eal_alarm_fini();
eal_cleanup_config(internal_conf);
return 0;
}
diff --git a/lib/eal/linux/eal_alarm.c b/lib/eal/linux/eal_alarm.c
index 5856bc08ce..a236c639e8 100644
--- a/lib/eal/linux/eal_alarm.c
+++ b/lib/eal/linux/eal_alarm.c
@@ -58,6 +58,13 @@ static struct rte_intr_handle *intr_handle;
static int handler_registered = 0;
static void eal_alarm_callback(void *arg);
+void
+rte_eal_alarm_fini(void)
+{
+ if (intr_handle)
+ rte_intr_instance_free(intr_handle);
+}
+
int
rte_eal_alarm_init(void)
{
@@ -68,7 +75,8 @@ rte_eal_alarm_init(void)
goto error;
}
- rte_intr_type_set(intr_handle, RTE_INTR_HANDLE_ALARM);
+ if (rte_intr_type_set(intr_handle, RTE_INTR_HANDLE_ALARM))
+ goto error;
/* create a timerfd file descriptor */
if (rte_intr_fd_set(intr_handle,
--
2.18.0
* Re: [dpdk-dev] [RFC 0/7] make rte_intr_handle internal
2021-08-26 14:57 [dpdk-dev] [RFC 0/7] make rte_intr_handle internal Harman Kalra
` (8 preceding siblings ...)
2021-10-05 12:14 ` [dpdk-dev] [PATCH v2 0/6] " Harman Kalra
@ 2021-10-05 16:07 ` Stephen Hemminger
2021-10-07 10:57 ` [dpdk-dev] [EXT] " Harman Kalra
2021-10-18 19:37 ` [dpdk-dev] [PATCH v3 " Harman Kalra
` (2 subsequent siblings)
12 siblings, 1 reply; 152+ messages in thread
From: Stephen Hemminger @ 2021-10-05 16:07 UTC (permalink / raw)
To: Harman Kalra; +Cc: dev
On Thu, 26 Aug 2021 20:27:19 +0530
Harman Kalra <hkalra@marvell.com> wrote:
> Moving struct rte_intr_handle as an internal structure to
> avoid any ABI breakages in future. Since this structure defines
> some static arrays and changing respective macros breaks the ABI.
> Eg:
> Currently RTE_MAX_RXTX_INTR_VEC_ID imposes a limit of maximum 512
> MSI-X interrupts that can be defined for a PCI device, while PCI
> specification allows maximum 2048 MSI-X interrupts that can be used.
> If some PCI device requires more than 512 vectors, either change the
> RTE_MAX_RXTX_INTR_VEC_ID limit or dynamically allocate based on
> PCI device MSI-X size on probe time. Either way its an ABI breakage.
>
> Change already included in 21.11 ABI improvement spreadsheet (item 42):
> https://urldefense.proofpoint.com/v2/url?u=https-3A__docs.google.com_s
> preadsheets_d_1betlC000ua5SsSiJIcC54mCCCJnW6voH5Dqv9UxeyfE_edit-23gid-
> 3D0&d=DwICaQ&c=nKjWec2b6R0mOyPaz7xtfQ&r=5ESHPj7V-7JdkxT_Z_SU6RrS37ys4U
> XudBQ_rrS5LRo&m=7dl3OmXU7QHMmWYB6V1hYJtq1cUkjfhXUwze2Si_48c&s=lh6DEGhR
> Bg1shODpAy3RQk-H-0uQx5icRfUBf9dtCp4&e=
>
>
> This series makes struct rte_intr_handle totally opaque to the outside
> world by wrapping it inside a .c file and providing get set wrapper APIs
> to read or manipulate its fields.. Any changes to be made to any of the
> fields should be done via these get set APIs.
> Introduced a new eal_common_interrupts.c where all these APIs are defined
> and also hides struct rte_intr_handle definition.
I agree the rte_intr_handle and eth_devices structures need to be hidden.
But there does not appear to be an API to check whether a device supports
receive interrupt mode.
There is:
RTE_ETH_DEV_INTR_LSC - link state
RTE_ETH_DEV_INTR_RMV - interrupt on removal
but no
RTE_ETH_DEV_INTR_RXQ - device supports rxq interrupt
There should be a new flag reported by devices, and the intr_conf should
be checked in rte_eth_dev_configure
Doing this would require fixing many drivers and there is a risk of exposing
existing semantic bugs in applications.
code
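A sketch of the check being proposed for rte_eth_dev_configure(); note that
RTE_ETH_DEV_INTR_RXQ is hypothetical, and the local names follow that
function's existing parameters:
	/* reject an rxq interrupt request if the driver did not advertise it */
	if (dev_conf->intr_conf.rxq != 0 &&
	    (dev->data->dev_flags & RTE_ETH_DEV_INTR_RXQ) == 0) {
		RTE_ETHDEV_LOG(ERR,
			"Port %u does not support Rx queue interrupts\n",
			port_id);
		return -ENOTSUP;
	}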
* Re: [dpdk-dev] [EXT] Re: [RFC 0/7] make rte_intr_handle internal
2021-10-05 16:07 ` [dpdk-dev] [RFC 0/7] make rte_intr_handle internal Stephen Hemminger
@ 2021-10-07 10:57 ` Harman Kalra
0 siblings, 0 replies; 152+ messages in thread
From: Harman Kalra @ 2021-10-07 10:57 UTC (permalink / raw)
To: Stephen Hemminger; +Cc: dev
Hi Stephen
Thanks for your suggestion on RTE_ETH_DEV_INTR_RXQ.
Please see my comments inline.
> -----Original Message-----
> From: Stephen Hemminger <stephen@networkplumber.org>
> Sent: Tuesday, October 5, 2021 9:38 PM
> To: Harman Kalra <hkalra@marvell.com>
> Cc: dev@dpdk.org
> Subject: [EXT] Re: [dpdk-dev] [RFC 0/7] make rte_intr_handle internal
>
> External Email
>
> ----------------------------------------------------------------------
> On Thu, 26 Aug 2021 20:27:19 +0530
> Harman Kalra <hkalra@marvell.com> wrote:
>
> > This series makes struct rte_intr_handle totally opaque to the outside
> > world by wrapping it inside a .c file and providing get set wrapper
> > APIs to read or manipulate its fields.. Any changes to be made to any
> > of the fields should be done via these get set APIs.
> > Introduced a new eal_common_interrupts.c where all these APIs are
> > defined and also hides struct rte_intr_handle definition.
>
> I agree rte_intr_handle and eth_devices structure needs to be hidden.
> But there does not appear to be an API to check if device supports receive
> interrupt mode.
>
> There is:
> RTE_ETH_DEV_INTR_LSC - link state
> RTE_ETH_DEV_INTR_RMV - interrupt on removal
>
> but no
> RTE_ETH_DEV_INTR_RXQ - device supports rxq interrupt
>
> There should be a new flag reported by devices, and the intr_conf should be
> checked in rte_eth_dev_configure
>
> Doing this would require fixes many drivers and there is risk of exposing
> existing sematic bugs in applications.
>
>
Yes, currently "intr_conf.rxq" is checked by the respective drivers which support queue interrupts, and they enable them if it is set.
But drivers do not expose whether they are capable of supporting rxq interrupts, the way they do for LSC and RMV.
RTE_ETH_DEV_INTR_RXQ should be introduced and set by capable drivers in "dev_info.dev_flags", and applications
like l3fwd-power which set "intr_conf.rxq" should check this flag, as in the sketch below.
I will add this enhancement to my TODO list and will send it as a new series.
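A rough sketch of the application-side check (RTE_ETH_DEV_INTR_RXQ is still only
a proposal, and port_id/port_conf are assumed to be set up as in l3fwd-power):
	struct rte_eth_dev_info dev_info;
	if (rte_eth_dev_info_get(port_id, &dev_info) != 0)
		rte_exit(EXIT_FAILURE, "Cannot get info for port %u\n", port_id);
	/* only request Rx queue interrupts if the driver advertises them */
	if ((*dev_info.dev_flags & RTE_ETH_DEV_INTR_RXQ) == 0)
		port_conf.intr_conf.rxq = 0;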
Thanks
Harman
> code
>
* Re: [dpdk-dev] [EXT] Re: [PATCH v1 2/7] eal/interrupts: implement get set APIs
2021-10-04 9:57 ` David Marchand
@ 2021-10-12 15:22 ` Thomas Monjalon
2021-10-13 17:54 ` Harman Kalra
0 siblings, 1 reply; 152+ messages in thread
From: Thomas Monjalon @ 2021-10-12 15:22 UTC (permalink / raw)
To: Harman Kalra
Cc: Raslan Darawsheh, dev, Ray Kinsella, Dmitry Kozlyuk,
David Marchand, viacheslavo, matan
04/10/2021 11:57, David Marchand:
> On Mon, Oct 4, 2021 at 10:51 AM Harman Kalra <hkalra@marvell.com> wrote:
> > > > +struct rte_intr_handle *rte_intr_handle_instance_alloc(int size,
> > > > + bool
> > > > +from_hugepage) {
> > > > + struct rte_intr_handle *intr_handle;
> > > > + int i;
> > > > +
> > > > + if (from_hugepage)
> > > > + intr_handle = rte_zmalloc(NULL,
> > > > + size * sizeof(struct rte_intr_handle),
> > > > + 0);
> > > > + else
> > > > + intr_handle = calloc(1, size * sizeof(struct
> > > > + rte_intr_handle));
> > >
> > > We can call DPDK allocator in all cases.
> > > That would avoid headaches on why multiprocess does not work in some
> > > rarely tested cases.
> > > Wdyt?
> > >
> > > Plus "from_hugepage" is misleading, you could be in --no-huge mode,
> > > rte_zmalloc still works.
> >
> > <HK> In mellanox 5 driver interrupt handle instance is freed in destructor
> > " mlx5_pmd_interrupt_handler_uninstall()" while DPDK memory allocators
> > are already cleaned up in "rte_eal_cleanup". Hence I allocated interrupt
> > instances for such cases from normal heap. There could be other such cases
> > so I think its ok to keep this support.
>
> This is surprising.
> Why would the mlx5 driver wait to release in a destructor?
> It should be done once no interrupt handler is necessary (like when
> stopping all ports), and that would be before rte_eal_cleanup().
I agree with David.
I prefer a simpler API which always use rte_malloc,
and make sure interrupts are always handled between
rte_eal_init and rte_eal_cleanup.
The mlx5 PMD could be reworked to match this requirement.
In any case we should not any memory management in constructors/destructors.
* Re: [dpdk-dev] [EXT] Re: [PATCH v1 2/7] eal/interrupts: implement get set APIs
2021-10-12 15:22 ` Thomas Monjalon
@ 2021-10-13 17:54 ` Harman Kalra
2021-10-13 17:57 ` Harman Kalra
0 siblings, 1 reply; 152+ messages in thread
From: Harman Kalra @ 2021-10-13 17:54 UTC (permalink / raw)
To: Thomas Monjalon
Cc: Raslan Darawsheh, dev, Ray Kinsella, Dmitry Kozlyuk,
David Marchand, viacheslavo, matan
> -----Original Message-----
> From: Thomas Monjalon <thomas@monjalon.net>
> Sent: Tuesday, October 12, 2021 8:53 PM
> To: Harman Kalra <hkalra@marvell.com>
> Cc: Raslan Darawsheh <rasland@nvidia.com>; dev@dpdk.org; Ray Kinsella
> <mdr@ashroe.eu>; Dmitry Kozlyuk <dmitry.kozliuk@gmail.com>; David
> Marchand <david.marchand@redhat.com>; viacheslavo@nvidia.com;
> matan@nvidia.com
> Subject: Re: [dpdk-dev] [EXT] Re: [PATCH v1 2/7] eal/interrupts: implement
> get set APIs
>
> 04/10/2021 11:57, David Marchand:
> > On Mon, Oct 4, 2021 at 10:51 AM Harman Kalra <hkalra@marvell.com>
> wrote:
> > > > > +struct rte_intr_handle *rte_intr_handle_instance_alloc(int size,
> > > > > + bool
> > > > > +from_hugepage) {
> > > > > + struct rte_intr_handle *intr_handle;
> > > > > + int i;
> > > > > +
> > > > > + if (from_hugepage)
> > > > > + intr_handle = rte_zmalloc(NULL,
> > > > > + size * sizeof(struct rte_intr_handle),
> > > > > + 0);
> > > > > + else
> > > > > + intr_handle = calloc(1, size * sizeof(struct
> > > > > + rte_intr_handle));
> > > >
> > > > We can call DPDK allocator in all cases.
> > > > That would avoid headaches on why multiprocess does not work in
> > > > some rarely tested cases.
> > > > Wdyt?
> > > >
> > > > Plus "from_hugepage" is misleading, you could be in --no-huge
> > > > mode, rte_zmalloc still works.
> > >
> > > <HK> In mellanox 5 driver interrupt handle instance is freed in
> > > destructor " mlx5_pmd_interrupt_handler_uninstall()" while DPDK
> > > memory allocators are already cleaned up in "rte_eal_cleanup". Hence
> > > I allocated interrupt instances for such cases from normal heap.
> > > There could be other such cases so I think its ok to keep this support.
> >
> > This is surprising.
> > Why would the mlx5 driver wait to release in a destructor?
> > It should be done once no interrupt handler is necessary (like when
> > stopping all ports), and that would be before rte_eal_cleanup().
>
> I agree with David.
> I prefer a simpler API which always use rte_malloc, and make sure interrupts
> are always handled between rte_eal_init and rte_eal_cleanup.
> The mlx5 PMD could be reworked to match this requirement.
> In any case we should not any memory management in
> constructors/destructors.
Hi Thomas, David
There are a couple more dependencies on glibc heap APIs:
1. "rte_eal_alarm_init()" allocates an interrupt instance which is used for the timerfd,
and it is called before "rte_eal_memory_init()" which does the memseg init.
I am not sure what challenges we may face in moving alarm_init after memory_init,
as it might break some subsystem inits.
Another option could be to allocate the interrupt instance for the timerfd on the first alarm setup call.
2. Currently the interrupt handle field inside struct rte_pci_device is static; it is changed
to a pointer in this series (as struct rte_intr_handle is hidden inside a C file and its size is unknown outside).
I am allocating the memory for this interrupt instance inside "rte_pci_probe_one_driver()" just before
"pci_vfio_map_resource()", which sets up VFIO resources and calls "pci_vfio_setup_interrupts()", which sets up
the interrupt support. The challenge here is that "rte_bus_probe()" also gets called before "rte_eal_memory_init()".
There are many other drivers which statically declare the interrupt handles inside their respective private
structures, and memory for those structures was allocated from the heap. For such drivers I allocated interrupt
instances using glibc heap APIs as well.
Thanks
Harman
>
* Re: [dpdk-dev] [EXT] Re: [PATCH v1 2/7] eal/interrupts: implement get set APIs
2021-10-13 17:54 ` Harman Kalra
@ 2021-10-13 17:57 ` Harman Kalra
2021-10-13 18:52 ` Thomas Monjalon
0 siblings, 1 reply; 152+ messages in thread
From: Harman Kalra @ 2021-10-13 17:57 UTC (permalink / raw)
To: Harman Kalra, Thomas Monjalon
Cc: Raslan Darawsheh, dev, Ray Kinsella, Dmitry Kozlyuk,
David Marchand, viacheslavo, matan
> -----Original Message-----
> From: dev <dev-bounces@dpdk.org> On Behalf Of Harman Kalra
> Sent: Wednesday, October 13, 2021 11:24 PM
> To: Thomas Monjalon <thomas@monjalon.net>
> Cc: Raslan Darawsheh <rasland@nvidia.com>; dev@dpdk.org; Ray Kinsella
> <mdr@ashroe.eu>; Dmitry Kozlyuk <dmitry.kozliuk@gmail.com>; David
> Marchand <david.marchand@redhat.com>; viacheslavo@nvidia.com;
> matan@nvidia.com
> Subject: Re: [dpdk-dev] [EXT] Re: [PATCH v1 2/7] eal/interrupts: implement
> get set APIs
>
>
>
> > -----Original Message-----
> > From: Thomas Monjalon <thomas@monjalon.net>
> > Sent: Tuesday, October 12, 2021 8:53 PM
> > To: Harman Kalra <hkalra@marvell.com>
> > Cc: Raslan Darawsheh <rasland@nvidia.com>; dev@dpdk.org; Ray Kinsella
> > <mdr@ashroe.eu>; Dmitry Kozlyuk <dmitry.kozliuk@gmail.com>; David
> > Marchand <david.marchand@redhat.com>; viacheslavo@nvidia.com;
> > matan@nvidia.com
> > Subject: Re: [dpdk-dev] [EXT] Re: [PATCH v1 2/7] eal/interrupts:
> > implement get set APIs
> >
> > 04/10/2021 11:57, David Marchand:
> > > On Mon, Oct 4, 2021 at 10:51 AM Harman Kalra <hkalra@marvell.com>
> > wrote:
> > > > > > +struct rte_intr_handle *rte_intr_handle_instance_alloc(int size,
> > > > > > + bool
> > > > > > +from_hugepage) {
> > > > > > + struct rte_intr_handle *intr_handle;
> > > > > > + int i;
> > > > > > +
> > > > > > + if (from_hugepage)
> > > > > > + intr_handle = rte_zmalloc(NULL,
> > > > > > + size * sizeof(struct rte_intr_handle),
> > > > > > + 0);
> > > > > > + else
> > > > > > + intr_handle = calloc(1, size * sizeof(struct
> > > > > > + rte_intr_handle));
> > > > >
> > > > > We can call DPDK allocator in all cases.
> > > > > That would avoid headaches on why multiprocess does not work in
> > > > > some rarely tested cases.
> > > > > Wdyt?
> > > > >
> > > > > Plus "from_hugepage" is misleading, you could be in --no-huge
> > > > > mode, rte_zmalloc still works.
> > > >
> > > > <HK> In mellanox 5 driver interrupt handle instance is freed in
> > > > destructor " mlx5_pmd_interrupt_handler_uninstall()" while DPDK
> > > > memory allocators are already cleaned up in "rte_eal_cleanup".
> > > > Hence I allocated interrupt instances for such cases from normal heap.
> > > > There could be other such cases so I think its ok to keep this support.
> > >
> > > This is surprising.
> > > Why would the mlx5 driver wait to release in a destructor?
> > > It should be done once no interrupt handler is necessary (like when
> > > stopping all ports), and that would be before rte_eal_cleanup().
> >
> > I agree with David.
> > I prefer a simpler API which always use rte_malloc, and make sure
> > interrupts are always handled between rte_eal_init and rte_eal_cleanup.
> > The mlx5 PMD could be reworked to match this requirement.
> > In any case we should not any memory management in
> > constructors/destructors.
>
>
> Hi Thomas, David
>
> There are couple of more dependencies on glibc heap APIs:
> 1. "rte_eal_alarm_init()" allocates an interrupt instance which is used for
> timerfd, is called before "rte_eal_memory_init()" which does the memseg
> init.
> Not sure what all challenges we may face in moving alarm_init after
> memory_init as it might break some subsystem inits.
> Other option could be to allocate interrupt instance for timerfd on first
> alarm_setup call.
>
> 2. Currently interrupt handle field inside struct rte_pci_device is static which
> is changed to a pointer in this series(as struct rte_intr_handle is hidden inside
> a c file and size is unknown outside).
> I am allocating the memory for this interrupt instance inside
> "rte_pci_probe_one_driver()" just before "pci_vfio_map_resource()" which
> sets up vfio resources and calls " pci_vfio_setup_interrupts()" which setups
> the interrupt support. Here challenge is "rte_bus_probe()" also gets called
> before "rte_eal_memory_init()".
Sorry, my bad: bus probing happens after "rte_eal_memory_init()", so the second
point is invalid.
>
> There are many other drivers which statically declares the interrupt handles
> inside their respective private structures and memory for those structure
> was allocated from heap. For such drivers I allocated interrupt instances also
> using glibc heap APIs.
>
> Thanks
> Harman
>
>
>
> >
* Re: [dpdk-dev] [EXT] Re: [PATCH v1 2/7] eal/interrupts: implement get set APIs
2021-10-13 17:57 ` Harman Kalra
@ 2021-10-13 18:52 ` Thomas Monjalon
2021-10-14 8:22 ` Thomas Monjalon
0 siblings, 1 reply; 152+ messages in thread
From: Thomas Monjalon @ 2021-10-13 18:52 UTC (permalink / raw)
To: Harman Kalra
Cc: Raslan Darawsheh, dev, Ray Kinsella, Dmitry Kozlyuk,
David Marchand, viacheslavo, matan
13/10/2021 19:57, Harman Kalra:
> From: dev <dev-bounces@dpdk.org> On Behalf Of Harman Kalra
> > From: Thomas Monjalon <thomas@monjalon.net>
> > > 04/10/2021 11:57, David Marchand:
> > > > On Mon, Oct 4, 2021 at 10:51 AM Harman Kalra <hkalra@marvell.com>
> > > wrote:
> > > > > > > +struct rte_intr_handle *rte_intr_handle_instance_alloc(int size,
> > > > > > > + bool
> > > > > > > +from_hugepage) {
> > > > > > > + struct rte_intr_handle *intr_handle;
> > > > > > > + int i;
> > > > > > > +
> > > > > > > + if (from_hugepage)
> > > > > > > + intr_handle = rte_zmalloc(NULL,
> > > > > > > + size * sizeof(struct rte_intr_handle),
> > > > > > > + 0);
> > > > > > > + else
> > > > > > > + intr_handle = calloc(1, size * sizeof(struct
> > > > > > > + rte_intr_handle));
> > > > > >
> > > > > > We can call DPDK allocator in all cases.
> > > > > > That would avoid headaches on why multiprocess does not work in
> > > > > > some rarely tested cases.
> > > > > > Wdyt?
> > > > > >
> > > > > > Plus "from_hugepage" is misleading, you could be in --no-huge
> > > > > > mode, rte_zmalloc still works.
> > > > >
> > > > > <HK> In mellanox 5 driver interrupt handle instance is freed in
> > > > > destructor " mlx5_pmd_interrupt_handler_uninstall()" while DPDK
> > > > > memory allocators are already cleaned up in "rte_eal_cleanup".
> > > > > Hence I allocated interrupt instances for such cases from normal heap.
> > > > > There could be other such cases so I think its ok to keep this support.
> > > >
> > > > This is surprising.
> > > > Why would the mlx5 driver wait to release in a destructor?
> > > > It should be done once no interrupt handler is necessary (like when
> > > > stopping all ports), and that would be before rte_eal_cleanup().
> > >
> > > I agree with David.
> > > I prefer a simpler API which always use rte_malloc, and make sure
> > > interrupts are always handled between rte_eal_init and rte_eal_cleanup.
> > > The mlx5 PMD could be reworked to match this requirement.
> > > In any case we should not any memory management in
> > > constructors/destructors.
For info, Dmitry is going to send a fix for mlx5.
> > Hi Thomas, David
> >
> > There are couple of more dependencies on glibc heap APIs:
> > 1. "rte_eal_alarm_init()" allocates an interrupt instance which is used for
> > timerfd, is called before "rte_eal_memory_init()" which does the memseg
> > init.
> > Not sure what all challenges we may face in moving alarm_init after
> > memory_init as it might break some subsystem inits.
> > Other option could be to allocate interrupt instance for timerfd on first
> > alarm_setup call.
Indeed it is an issue.
[...]
> > There are many other drivers which statically declares the interrupt handles
> > inside their respective private structures and memory for those structure
> > was allocated from heap. For such drivers I allocated interrupt instances also
> > using glibc heap APIs.
Could you use rte_malloc in these drivers?
* Re: [dpdk-dev] [PATCH v2 1/6] eal/interrupts: implement get set APIs
2021-10-05 12:14 ` [dpdk-dev] [PATCH v2 1/6] eal/interrupts: implement get set APIs Harman Kalra
@ 2021-10-14 0:58 ` Dmitry Kozlyuk
2021-10-14 17:15 ` [dpdk-dev] [EXT] " Harman Kalra
2021-10-14 7:31 ` [dpdk-dev] " David Marchand
1 sibling, 1 reply; 152+ messages in thread
From: Dmitry Kozlyuk @ 2021-10-14 0:58 UTC (permalink / raw)
To: Harman Kalra; +Cc: dev, Thomas Monjalon, Ray Kinsella, david.marchand
2021-10-05 17:44 (UTC+0530), Harman Kalra:
> [...]
> +int rte_intr_instance_copy(struct rte_intr_handle *intr_handle,
> + const struct rte_intr_handle *src)
> +{
> + if (intr_handle == NULL) {
> + RTE_LOG(ERR, EAL, "Interrupt instance unallocated\n");
> + rte_errno = ENOTSUP;
> + goto fail;
> + }
> +
> + if (src == NULL) {
> + RTE_LOG(ERR, EAL, "Source interrupt instance unallocated\n");
> + rte_errno = EINVAL;
> + goto fail;
> + }
> +
> + intr_handle->fd = src->fd;
> + intr_handle->vfio_dev_fd = src->vfio_dev_fd;
> + intr_handle->type = src->type;
> + intr_handle->max_intr = src->max_intr;
> + intr_handle->nb_efd = src->nb_efd;
> + intr_handle->efd_counter_size = src->efd_counter_size;
> +
> + memcpy(intr_handle->efds, src->efds, src->nb_intr);
> + memcpy(intr_handle->elist, src->elist, src->nb_intr);
Buffer overrun if "intr_handle->nb_intr < src->nb_intr"?
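A bounded, element-sized copy would avoid both the overrun and the byte/element
count mix-up; a minimal sketch:
	size_t n = RTE_MIN(intr_handle->nb_intr, src->nb_intr);
	memcpy(intr_handle->efds, src->efds, n * sizeof(*src->efds));
	memcpy(intr_handle->elist, src->elist, n * sizeof(*src->elist));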
> +
> + return 0;
> +fail:
> + return -rte_errno;
> +}
> +
> +int rte_intr_instance_mem_allocator_get(
> + const struct rte_intr_handle *intr_handle)
> +{
> + if (intr_handle == NULL) {
> + RTE_LOG(ERR, EAL, "Interrupt instance unallocated\n");
> + return -ENOTSUP;
ENOTSUP usually means the operation is valid from API standpoint
but not supported by the implementation. EINVAL/EFAULT suits better.
> + }
> +
> + return intr_handle->mem_allocator;
> +}
What do you think about having an API to retrieve the entire flags instead?
> +
> +void rte_intr_instance_free(struct rte_intr_handle *intr_handle)
> +{
> + if (intr_handle == NULL) {
> + RTE_LOG(ERR, EAL, "Interrupt instance unallocated\n");
> + rte_errno = ENOTSUP;
> + }
APIs are neater when free(NULL) is a no-op.
> +
> + if (intr_handle->mem_allocator)
> + rte_free(intr_handle);
> + else
> + free(intr_handle);
> +}
> +
> +int rte_intr_fd_set(struct rte_intr_handle *intr_handle, int fd)
> +{
> + if (intr_handle == NULL) {
> + RTE_LOG(ERR, EAL, "Interrupt instance unallocated\n");
> + rte_errno = ENOTSUP;
> + goto fail;
> + }
This piece repeats over and over, how about making it a function or a macro,
like in ethdev?
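One possible shape for such a helper, modelled loosely on the ethdev port-id
checks (the name and errno value are illustrative only):
#define RTE_INTR_INSTANCE_CHECK_OR_ERR_RET(h) do { \
	if ((h) == NULL) { \
		RTE_LOG(ERR, EAL, "Interrupt instance unallocated\n"); \
		rte_errno = EINVAL; \
		return -rte_errno; \
	} \
} while (0)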
> +
> + intr_handle->fd = fd;
> +
> + return 0;
> +fail:
> + return -rte_errno;
> +}
> +
> +int rte_intr_fd_get(const struct rte_intr_handle *intr_handle)
> +{
> + if (intr_handle == NULL) {
> + RTE_LOG(ERR, EAL, "Interrupt instance unallocated\n");
> + rte_errno = ENOTSUP;
> + goto fail;
> + }
> +
> + return intr_handle->fd;
> +fail:
> + return -1;
> +}
Please add a similar pair of experimental APIs for the "handle" member;
it is needed for the Windows interrupt support I'm working on on top of this series
(IIUC, API changes should be closed by RC1).
If you do this and don't like the "handle" name, it could be something like
"dev_handle" or "windows_device".
> [...]
> +int rte_intr_max_intr_set(struct rte_intr_handle *intr_handle,
> + int max_intr)
> +{
> + if (intr_handle == NULL) {
> + RTE_LOG(ERR, EAL, "Interrupt instance unallocated\n");
> + rte_errno = ENOTSUP;
> + goto fail;
> + }
> +
> + if (max_intr > intr_handle->nb_intr) {
> + RTE_LOG(ERR, EAL, "Max_intr=%d greater than RTE_MAX_RXTX_INTR_VEC_ID=%d",
The macro is not used in the comparison, so the log should not mention it.
> [...]
> @@ -420,6 +412,14 @@ EXPERIMENTAL {
>
> # added in 21.08
> rte_power_monitor_multi; # WINDOWS_NO_EXPORT
> +
> + # added in 21.11
> + rte_intr_fd_set;
> + rte_intr_fd_get;
WINDOWS_NO_EXPORT
> + rte_intr_type_set;
> + rte_intr_type_get;
> + rte_intr_instance_alloc;
> + rte_intr_instance_free;
> };
Do I understand correctly that these exports are needed
to allow an application to use DPDK callback facilities
for its own interrupt sources?
If so, I'd suggest that instead we export a simpler set of functions:
1. Create/free a handle instance with automatic fixed type selection.
2. Trigger an interrupt on the specified handle instance.
The flow would be that the application listens on whatever it wants,
probably with OS-specific mechanisms, and just notifies the interrupt thread
about events to trigger callbacks.
Because these APIs are experimental we don't need to change it now,
just my thoughts for the future.
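For reference, the simpler application-facing set suggested above could look
roughly like this; all names are illustrative and rte_intr_instance_trigger()
does not exist in the series:
	struct rte_intr_handle *h;
	h = rte_intr_instance_alloc(0);	/* EAL picks the handle type itself */
	rte_intr_callback_register(h, my_cb, my_arg);
	/* the application waits on its own OS-specific sources, then asks
	 * the interrupt thread to run the registered callbacks
	 */
	rte_intr_instance_trigger(h);	/* hypothetical */
	rte_intr_callback_unregister(h, my_cb, my_arg);
	rte_intr_instance_free(h);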
* Re: [dpdk-dev] [PATCH v2 2/6] eal/interrupts: avoid direct access to interrupt handle
2021-10-05 12:14 ` [dpdk-dev] [PATCH v2 2/6] eal/interrupts: avoid direct access to interrupt handle Harman Kalra
@ 2021-10-14 0:59 ` Dmitry Kozlyuk
2021-10-14 17:31 ` [dpdk-dev] [EXT] " Harman Kalra
0 siblings, 1 reply; 152+ messages in thread
From: Dmitry Kozlyuk @ 2021-10-14 0:59 UTC (permalink / raw)
To: Harman Kalra; +Cc: dev, Bruce Richardson, david.marchand, mdr
2021-10-05 17:44 (UTC+0530), Harman Kalra:
> Making changes to the interrupt framework to use interrupt handle
> APIs to get/set any field. Direct access to any of the fields
> should be avoided to avoid any ABI breakage in future.
How is ABI breakage applicable to internal consumers?
This protects against field renaming for sure, but the convenience is arguable.
If EAL needs to add an EAL-private field to struct rte_intr_handle,
it must add an accessor even though the field is likely OS-specific.
It would be simpler if the definition was in some private EAL header
and could be accessed directly by EAL code.
* Re: [dpdk-dev] [PATCH v2 1/6] eal/interrupts: implement get set APIs
2021-10-05 12:14 ` [dpdk-dev] [PATCH v2 1/6] eal/interrupts: implement get set APIs Harman Kalra
2021-10-14 0:58 ` Dmitry Kozlyuk
@ 2021-10-14 7:31 ` David Marchand
1 sibling, 0 replies; 152+ messages in thread
From: David Marchand @ 2021-10-14 7:31 UTC (permalink / raw)
To: Harman Kalra; +Cc: dev, Thomas Monjalon, Ray Kinsella, Dmitry Kozlyuk
On Tue, Oct 5, 2021 at 2:17 PM Harman Kalra <hkalra@marvell.com> wrote:
> +struct rte_intr_handle *rte_intr_instance_alloc(uint32_t flags)
> +{
> + struct rte_intr_handle *intr_handle;
> + bool mem_allocator;
Regardless of the currently defined flags, we want to have an ABI
ready for future changes, so if there is a "flags" input parameter, it
must be checked against valid values.
You can build a RTE_INTR_ALLOC_KNOWN_FLAGS define that contains all
valid flags either in a private header or only in this .c file if no
other unit needs it.
Next, in this function:
if ((flags & ~RTE_INTR_ALLOC_KNOWN_FLAGS) != 0) {
rte_errno = EINVAL;
return NULL;
}
A check in unit tests is then a good thing to add so that developers
adding a new flag get a CI failure.
This is not a blocker as this API is still experimental, but please
let's do this from the start.
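For example, a hypothetical check in app/test, assuming bit 31 is not a defined
flag:
	struct rte_intr_handle *handle;
	handle = rte_intr_instance_alloc(1u << 31);
	TEST_ASSERT(handle == NULL, "alloc accepted an unknown flag");
	TEST_ASSERT(rte_errno == EINVAL, "unexpected rte_errno %d", rte_errno);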
> +
> + mem_allocator = (flags & RTE_INTR_ALLOC_DPDK_ALLOCATOR) != 0;
> + if (mem_allocator)
> + intr_handle = rte_zmalloc(NULL, sizeof(struct rte_intr_handle),
> + 0);
> + else
> + intr_handle = calloc(1, sizeof(struct rte_intr_handle));
--
David Marchand
* Re: [dpdk-dev] [EXT] Re: [PATCH v1 2/7] eal/interrupts: implement get set APIs
2021-10-13 18:52 ` Thomas Monjalon
@ 2021-10-14 8:22 ` Thomas Monjalon
2021-10-14 9:31 ` Harman Kalra
0 siblings, 1 reply; 152+ messages in thread
From: Thomas Monjalon @ 2021-10-14 8:22 UTC (permalink / raw)
To: Harman Kalra
Cc: dev, Raslan Darawsheh, Ray Kinsella, Dmitry Kozlyuk,
David Marchand, viacheslavo, matan
13/10/2021 20:52, Thomas Monjalon:
> 13/10/2021 19:57, Harman Kalra:
> > From: dev <dev-bounces@dpdk.org> On Behalf Of Harman Kalra
> > > From: Thomas Monjalon <thomas@monjalon.net>
> > > > 04/10/2021 11:57, David Marchand:
> > > > > On Mon, Oct 4, 2021 at 10:51 AM Harman Kalra <hkalra@marvell.com>
> > > > wrote:
> > > > > > > > +struct rte_intr_handle *rte_intr_handle_instance_alloc(int size,
> > > > > > > > + bool
> > > > > > > > +from_hugepage) {
> > > > > > > > + struct rte_intr_handle *intr_handle;
> > > > > > > > + int i;
> > > > > > > > +
> > > > > > > > + if (from_hugepage)
> > > > > > > > + intr_handle = rte_zmalloc(NULL,
> > > > > > > > + size * sizeof(struct rte_intr_handle),
> > > > > > > > + 0);
> > > > > > > > + else
> > > > > > > > + intr_handle = calloc(1, size * sizeof(struct
> > > > > > > > + rte_intr_handle));
> > > > > > >
> > > > > > > We can call DPDK allocator in all cases.
> > > > > > > That would avoid headaches on why multiprocess does not work in
> > > > > > > some rarely tested cases.
[...]
> > > > I agree with David.
> > > > I prefer a simpler API which always use rte_malloc, and make sure
> > > > interrupts are always handled between rte_eal_init and rte_eal_cleanup.
[...]
> > > There are couple of more dependencies on glibc heap APIs:
> > > 1. "rte_eal_alarm_init()" allocates an interrupt instance which is used for
> > > timerfd, is called before "rte_eal_memory_init()" which does the memseg
> > > init.
> > > Not sure what all challenges we may face in moving alarm_init after
> > > memory_init as it might break some subsystem inits.
> > > Other option could be to allocate interrupt instance for timerfd on first
> > > alarm_setup call.
>
> Indeed it is an issue.
>
> [...]
>
> > > There are many other drivers which statically declares the interrupt handles
> > > inside their respective private structures and memory for those structure
> > > was allocated from heap. For such drivers I allocated interrupt instances also
> > > using glibc heap APIs.
>
> Could you use rte_malloc in these drivers?
If we take the direction of 2 different allocations mode for the interrupts,
I suggest we make it automatic without any API parameter.
We don't have any function to check rte_malloc readiness I think.
But we can detect whether shared memory is ready with this check:
rte_eal_get_configuration()->mem_config->magic == RTE_MAGIC
This check is true at the end of rte_eal_init, so it is false during probing.
Would it be enough? Or should we implement rte_malloc_is_ready()?
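A sketch of that automatic selection, using the check quoted above;
eal_malloc_is_ready() is a hypothetical internal helper, and as noted later in
this thread the magic is only set at the end of init, so an explicit internal
flag may be needed instead:
static bool
eal_malloc_is_ready(void)
{
	return rte_eal_get_configuration()->mem_config->magic == RTE_MAGIC;
}
struct rte_intr_handle *
rte_intr_instance_alloc(void)
{
	struct rte_intr_handle *h;
	if (eal_malloc_is_ready())
		h = rte_zmalloc(NULL, sizeof(*h), 0);
	else
		h = calloc(1, sizeof(*h));
	return h;
}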
* Re: [dpdk-dev] [EXT] Re: [PATCH v1 2/7] eal/interrupts: implement get set APIs
2021-10-14 8:22 ` Thomas Monjalon
@ 2021-10-14 9:31 ` Harman Kalra
2021-10-14 9:37 ` David Marchand
` (2 more replies)
0 siblings, 3 replies; 152+ messages in thread
From: Harman Kalra @ 2021-10-14 9:31 UTC (permalink / raw)
To: Thomas Monjalon, David Marchand
Cc: dev, Raslan Darawsheh, Ray Kinsella, Dmitry Kozlyuk, viacheslavo, matan
> -----Original Message-----
> From: Thomas Monjalon <thomas@monjalon.net>
> Sent: Thursday, October 14, 2021 1:53 PM
> To: Harman Kalra <hkalra@marvell.com>
> Cc: dev@dpdk.org; Raslan Darawsheh <rasland@nvidia.com>; Ray Kinsella
> <mdr@ashroe.eu>; Dmitry Kozlyuk <dmitry.kozliuk@gmail.com>; David
> Marchand <david.marchand@redhat.com>; viacheslavo@nvidia.com;
> matan@nvidia.com
> Subject: Re: [dpdk-dev] [EXT] Re: [PATCH v1 2/7] eal/interrupts: implement
> get set APIs
>
> 13/10/2021 20:52, Thomas Monjalon:
> > 13/10/2021 19:57, Harman Kalra:
> > > From: dev <dev-bounces@dpdk.org> On Behalf Of Harman Kalra
> > > > From: Thomas Monjalon <thomas@monjalon.net>
> > > > > 04/10/2021 11:57, David Marchand:
> > > > > > On Mon, Oct 4, 2021 at 10:51 AM Harman Kalra
> > > > > > <hkalra@marvell.com>
> > > > > wrote:
> > > > > > > > > +struct rte_intr_handle *rte_intr_handle_instance_alloc(int
> size,
> > > > > > > > > +
> > > > > > > > > +bool
> > > > > > > > > +from_hugepage) {
> > > > > > > > > + struct rte_intr_handle *intr_handle;
> > > > > > > > > + int i;
> > > > > > > > > +
> > > > > > > > > + if (from_hugepage)
> > > > > > > > > + intr_handle = rte_zmalloc(NULL,
> > > > > > > > > + size * sizeof(struct rte_intr_handle),
> > > > > > > > > + 0);
> > > > > > > > > + else
> > > > > > > > > + intr_handle = calloc(1, size *
> > > > > > > > > + sizeof(struct rte_intr_handle));
> > > > > > > >
> > > > > > > > We can call DPDK allocator in all cases.
> > > > > > > > That would avoid headaches on why multiprocess does not
> > > > > > > > work in some rarely tested cases.
> [...]
> > > > > I agree with David.
> > > > > I prefer a simpler API which always use rte_malloc, and make
> > > > > sure interrupts are always handled between rte_eal_init and
> rte_eal_cleanup.
> [...]
> > > > There are couple of more dependencies on glibc heap APIs:
> > > > 1. "rte_eal_alarm_init()" allocates an interrupt instance which is
> > > > used for timerfd, is called before "rte_eal_memory_init()" which
> > > > does the memseg init.
> > > > Not sure what all challenges we may face in moving alarm_init
> > > > after memory_init as it might break some subsystem inits.
> > > > Other option could be to allocate interrupt instance for timerfd
> > > > on first alarm_setup call.
> >
> > Indeed it is an issue.
> >
> > [...]
> >
> > > > There are many other drivers which statically declares the
> > > > interrupt handles inside their respective private structures and
> > > > memory for those structure was allocated from heap. For such
> > > > drivers I allocated interrupt instances also using glibc heap APIs.
> >
> > Could you use rte_malloc in these drivers?
>
> If we take the direction of 2 different allocations mode for the interrupts, I
> suggest we make it automatic without any API parameter.
> We don't have any function to check rte_malloc readiness I think.
> But we can detect whether shared memory is ready with this check:
> rte_eal_get_configuration()->mem_config->magic == RTE_MAGIC This check
> is true at the end of rte_eal_init, so it is false during probing.
> Would it be enough? Or should we implement rte_malloc_is_ready()?
Hi Thomas,
It's a very good suggestion. Let's implement "rte_malloc_is_ready()" which could be as
simple as " rte_eal_get_configuration()->mem_config->magic == RTE_MAGIC" check.
There may be more consumers for this API in future.
If we make the detection automatic, do we even need an argument to this alloc API?
I added a flags argument (32 bit) in the latest series, where each bit of this flag can represent an allocation capability.
I used two bits for discriminating between glibc malloc and rte_malloc. Shall we keep it or drop it?
David, Dmitry please share your thoughts.
Thanks
Harman
>
* Re: [dpdk-dev] [EXT] Re: [PATCH v1 2/7] eal/interrupts: implement get set APIs
2021-10-14 9:31 ` Harman Kalra
@ 2021-10-14 9:37 ` David Marchand
2021-10-14 9:41 ` Thomas Monjalon
2021-10-14 10:25 ` Dmitry Kozlyuk
2 siblings, 0 replies; 152+ messages in thread
From: David Marchand @ 2021-10-14 9:37 UTC (permalink / raw)
To: Harman Kalra
Cc: Thomas Monjalon, dev, Raslan Darawsheh, Ray Kinsella,
Dmitry Kozlyuk, viacheslavo, matan
On Thu, Oct 14, 2021 at 11:31 AM Harman Kalra <hkalra@marvell.com> wrote:
> If we are making it automatic detection, shall we now even have argument to this alloc API?
> I added a flags argument (32 bit) in latest series where each bit of this flag can be an allocation capability.
> I used two bits for discriminating between glibc malloc and rte_malloc. Shall we keep it or drop it?
>
> David, Dmitry please share your thoughts.
I don't have ideas of how we would extend allocation of such an object,
so I am unsure.
When in doubt, I would keep this flags field and validate that it's always 0
(as mentioned in my reply on ABI).
--
David Marchand
* Re: [dpdk-dev] [EXT] Re: [PATCH v1 2/7] eal/interrupts: implement get set APIs
2021-10-14 9:31 ` Harman Kalra
2021-10-14 9:37 ` David Marchand
@ 2021-10-14 9:41 ` Thomas Monjalon
2021-10-14 10:31 ` Harman Kalra
2021-10-14 10:25 ` Dmitry Kozlyuk
2 siblings, 1 reply; 152+ messages in thread
From: Thomas Monjalon @ 2021-10-14 9:41 UTC (permalink / raw)
To: Harman Kalra
Cc: David Marchand, dev, Raslan Darawsheh, Ray Kinsella,
Dmitry Kozlyuk, viacheslavo, matan
14/10/2021 11:31, Harman Kalra:
> From: Thomas Monjalon <thomas@monjalon.net>
> > 13/10/2021 20:52, Thomas Monjalon:
> > > 13/10/2021 19:57, Harman Kalra:
> > > > From: dev <dev-bounces@dpdk.org> On Behalf Of Harman Kalra
> > > > > From: Thomas Monjalon <thomas@monjalon.net>
> > > > > > 04/10/2021 11:57, David Marchand:
> > > > > > > On Mon, Oct 4, 2021 at 10:51 AM Harman Kalra
> > > > > > > <hkalra@marvell.com>
> > > > > > wrote:
> > > > > > > > > > +struct rte_intr_handle *rte_intr_handle_instance_alloc(int
> > size,
> > > > > > > > > > +
> > > > > > > > > > +bool
> > > > > > > > > > +from_hugepage) {
> > > > > > > > > > + struct rte_intr_handle *intr_handle;
> > > > > > > > > > + int i;
> > > > > > > > > > +
> > > > > > > > > > + if (from_hugepage)
> > > > > > > > > > + intr_handle = rte_zmalloc(NULL,
> > > > > > > > > > + size * sizeof(struct rte_intr_handle),
> > > > > > > > > > + 0);
> > > > > > > > > > + else
> > > > > > > > > > + intr_handle = calloc(1, size *
> > > > > > > > > > + sizeof(struct rte_intr_handle));
> > > > > > > > >
> > > > > > > > > We can call DPDK allocator in all cases.
> > > > > > > > > That would avoid headaches on why multiprocess does not
> > > > > > > > > work in some rarely tested cases.
> > [...]
> > > > > > I agree with David.
> > > > > > I prefer a simpler API which always use rte_malloc, and make
> > > > > > sure interrupts are always handled between rte_eal_init and
> > rte_eal_cleanup.
> > [...]
> > > > > There are couple of more dependencies on glibc heap APIs:
> > > > > 1. "rte_eal_alarm_init()" allocates an interrupt instance which is
> > > > > used for timerfd, is called before "rte_eal_memory_init()" which
> > > > > does the memseg init.
> > > > > Not sure what all challenges we may face in moving alarm_init
> > > > > after memory_init as it might break some subsystem inits.
> > > > > Other option could be to allocate interrupt instance for timerfd
> > > > > on first alarm_setup call.
> > >
> > > Indeed it is an issue.
> > >
> > > [...]
> > >
> > > > > There are many other drivers which statically declares the
> > > > > interrupt handles inside their respective private structures and
> > > > > memory for those structure was allocated from heap. For such
> > > > > drivers I allocated interrupt instances also using glibc heap APIs.
> > >
> > > Could you use rte_malloc in these drivers?
> >
> > If we take the direction of 2 different allocations mode for the interrupts, I
> > suggest we make it automatic without any API parameter.
> > We don't have any function to check rte_malloc readiness I think.
> > But we can detect whether shared memory is ready with this check:
> > rte_eal_get_configuration()->mem_config->magic == RTE_MAGIC This check
> > is true at the end of rte_eal_init, so it is false during probing.
> > Would it be enough? Or should we implement rte_malloc_is_ready()?
>
> Hi Thomas,
>
> It's a very good suggestion. Let's implement "rte_malloc_is_ready()" which could be as
> simple as " rte_eal_get_configuration()->mem_config->magic == RTE_MAGIC" check.
> There may be more consumers for this API in future.
You cannot rely on the magic because it is set only after probing.
For such an API you need another internal flag to check that malloc is set up.
* Re: [dpdk-dev] [EXT] Re: [PATCH v1 2/7] eal/interrupts: implement get set APIs
2021-10-14 9:31 ` Harman Kalra
2021-10-14 9:37 ` David Marchand
2021-10-14 9:41 ` Thomas Monjalon
@ 2021-10-14 10:25 ` Dmitry Kozlyuk
2 siblings, 0 replies; 152+ messages in thread
From: Dmitry Kozlyuk @ 2021-10-14 10:25 UTC (permalink / raw)
To: Harman Kalra
Cc: Thomas Monjalon, David Marchand, dev, Raslan Darawsheh,
Ray Kinsella, viacheslavo, matan
2021-10-14 09:31 (UTC+0000), Harman Kalra:
> > -----Original Message-----
> > From: Thomas Monjalon <thomas@monjalon.net>
> > Sent: Thursday, October 14, 2021 1:53 PM
> > To: Harman Kalra <hkalra@marvell.com>
> > Cc: dev@dpdk.org; Raslan Darawsheh <rasland@nvidia.com>; Ray Kinsella
> > <mdr@ashroe.eu>; Dmitry Kozlyuk <dmitry.kozliuk@gmail.com>; David
> > Marchand <david.marchand@redhat.com>; viacheslavo@nvidia.com;
> > matan@nvidia.com
> > Subject: Re: [dpdk-dev] [EXT] Re: [PATCH v1 2/7] eal/interrupts: implement
> > get set APIs
> >
> > 13/10/2021 20:52, Thomas Monjalon:
> > > 13/10/2021 19:57, Harman Kalra:
> > > > From: dev <dev-bounces@dpdk.org> On Behalf Of Harman Kalra
> > > > > From: Thomas Monjalon <thomas@monjalon.net>
> > > > > > 04/10/2021 11:57, David Marchand:
> > > > > > > On Mon, Oct 4, 2021 at 10:51 AM Harman Kalra
> > > > > > > <hkalra@marvell.com>
> > > > > > wrote:
> > > > > > > > > > +struct rte_intr_handle *rte_intr_handle_instance_alloc(int
> > size,
> > > > > > > > > > +
> > > > > > > > > > +bool
> > > > > > > > > > +from_hugepage) {
> > > > > > > > > > + struct rte_intr_handle *intr_handle;
> > > > > > > > > > + int i;
> > > > > > > > > > +
> > > > > > > > > > + if (from_hugepage)
> > > > > > > > > > + intr_handle = rte_zmalloc(NULL,
> > > > > > > > > > + size * sizeof(struct rte_intr_handle),
> > > > > > > > > > + 0);
> > > > > > > > > > + else
> > > > > > > > > > + intr_handle = calloc(1, size *
> > > > > > > > > > + sizeof(struct rte_intr_handle));
> > > > > > > > >
> > > > > > > > > We can call DPDK allocator in all cases.
> > > > > > > > > That would avoid headaches on why multiprocess does not
> > > > > > > > > work in some rarely tested cases.
> > [...]
> > > > > > I agree with David.
> > > > > > I prefer a simpler API which always use rte_malloc, and make
> > > > > > sure interrupts are always handled between rte_eal_init and
> > rte_eal_cleanup.
> > [...]
> > > > > There are couple of more dependencies on glibc heap APIs:
> > > > > 1. "rte_eal_alarm_init()" allocates an interrupt instance which is
> > > > > used for timerfd, is called before "rte_eal_memory_init()" which
> > > > > does the memseg init.
> > > > > Not sure what all challenges we may face in moving alarm_init
> > > > > after memory_init as it might break some subsystem inits.
> > > > > Other option could be to allocate interrupt instance for timerfd
> > > > > on first alarm_setup call.
> > >
> > > Indeed it is an issue.
> > >
> > > [...]
> > >
> > > > > There are many other drivers which statically declares the
> > > > > interrupt handles inside their respective private structures and
> > > > > memory for those structure was allocated from heap. For such
> > > > > drivers I allocated interrupt instances also using glibc heap APIs.
> > >
> > > Could you use rte_malloc in these drivers?
> >
> > If we take the direction of 2 different allocations mode for the interrupts, I
> > suggest we make it automatic without any API parameter.
> > We don't have any function to check rte_malloc readiness I think.
> > But we can detect whether shared memory is ready with this check:
> > rte_eal_get_configuration()->mem_config->magic == RTE_MAGIC This check
> > is true at the end of rte_eal_init, so it is false during probing.
> > Would it be enough? Or should we implement rte_malloc_is_ready()?
>
> Hi Thomas,
>
> It's a very good suggestion. Let's implement "rte_malloc_is_ready()" which could be as
> simple as " rte_eal_get_configuration()->mem_config->magic == RTE_MAGIC" check.
> There may be more consumers for this API in future.
I doubt it should be public. How is it supposed to be used?
Any application code for DPDK necessarily calls rte_eal_init() first,
after that this function would always return true.
>
> If we are making it automatic detection, shall we now even have argument to this alloc API?
> I added a flags argument (32 bit) in latest series where each bit of this flag can be an allocation capability.
> I used two bits for discriminating between glibc malloc and rte_malloc. Shall we keep it or drop it?
>
> David, Dmitry please share your thoughts.
I'd drop it, but no strong opinion.
Since allocation type is automatic and all other properties can be set later,
there are no use cases for any options here.
And if they appear, flags may be insufficient as well.
^ permalink raw reply [flat|nested] 152+ messages in thread
* Re: [dpdk-dev] [EXT] Re: [PATCH v1 2/7] eal/interrupts: implement get set APIs
2021-10-14 9:41 ` Thomas Monjalon
@ 2021-10-14 10:31 ` Harman Kalra
2021-10-14 10:35 ` Thomas Monjalon
0 siblings, 1 reply; 152+ messages in thread
From: Harman Kalra @ 2021-10-14 10:31 UTC (permalink / raw)
To: Thomas Monjalon
Cc: David Marchand, dev, Raslan Darawsheh, Ray Kinsella,
Dmitry Kozlyuk, viacheslavo, matan
> -----Original Message-----
> From: Thomas Monjalon <thomas@monjalon.net>
> Sent: Thursday, October 14, 2021 3:11 PM
> To: Harman Kalra <hkalra@marvell.com>
> Cc: David Marchand <david.marchand@redhat.com>; dev@dpdk.org; Raslan
> Darawsheh <rasland@nvidia.com>; Ray Kinsella <mdr@ashroe.eu>; Dmitry
> Kozlyuk <dmitry.kozliuk@gmail.com>; viacheslavo@nvidia.com;
> matan@nvidia.com
> Subject: Re: [dpdk-dev] [EXT] Re: [PATCH v1 2/7] eal/interrupts: implement
> get set APIs
>
> 14/10/2021 11:31, Harman Kalra:
> > From: Thomas Monjalon <thomas@monjalon.net>
> > > 13/10/2021 20:52, Thomas Monjalon:
> > > > 13/10/2021 19:57, Harman Kalra:
> > > > > From: dev <dev-bounces@dpdk.org> On Behalf Of Harman Kalra
> > > > > > From: Thomas Monjalon <thomas@monjalon.net>
> > > > > > > 04/10/2021 11:57, David Marchand:
> > > > > > > > On Mon, Oct 4, 2021 at 10:51 AM Harman Kalra
> > > > > > > > <hkalra@marvell.com>
> > > > > > > wrote:
> > > > > > > > > > > +struct rte_intr_handle
> > > > > > > > > > > +*rte_intr_handle_instance_alloc(int
> > > size,
> > > > > > > > > > > +
> > > > > > > > > > > +bool
> > > > > > > > > > > +from_hugepage) {
> > > > > > > > > > > + struct rte_intr_handle *intr_handle;
> > > > > > > > > > > + int i;
> > > > > > > > > > > +
> > > > > > > > > > > + if (from_hugepage)
> > > > > > > > > > > + intr_handle = rte_zmalloc(NULL,
> > > > > > > > > > > + size * sizeof(struct rte_intr_handle),
> > > > > > > > > > > + 0);
> > > > > > > > > > > + else
> > > > > > > > > > > + intr_handle = calloc(1, size *
> > > > > > > > > > > + sizeof(struct rte_intr_handle));
> > > > > > > > > >
> > > > > > > > > > We can call DPDK allocator in all cases.
> > > > > > > > > > That would avoid headaches on why multiprocess does
> > > > > > > > > > not work in some rarely tested cases.
> > > [...]
> > > > > > > I agree with David.
> > > > > > > I prefer a simpler API which always use rte_malloc, and make
> > > > > > > sure interrupts are always handled between rte_eal_init and
> > > rte_eal_cleanup.
> > > [...]
> > > > > > There are couple of more dependencies on glibc heap APIs:
> > > > > > 1. "rte_eal_alarm_init()" allocates an interrupt instance
> > > > > > which is used for timerfd, is called before
> > > > > > "rte_eal_memory_init()" which does the memseg init.
> > > > > > Not sure what all challenges we may face in moving alarm_init
> > > > > > after memory_init as it might break some subsystem inits.
> > > > > > Other option could be to allocate interrupt instance for
> > > > > > timerfd on first alarm_setup call.
> > > >
> > > > Indeed it is an issue.
> > > >
> > > > [...]
> > > >
> > > > > > There are many other drivers which statically declares the
> > > > > > interrupt handles inside their respective private structures
> > > > > > and memory for those structure was allocated from heap. For
> > > > > > such drivers I allocated interrupt instances also using glibc heap
> APIs.
> > > >
> > > > Could you use rte_malloc in these drivers?
> > >
> > > If we take the direction of 2 different allocations mode for the
> > > interrupts, I suggest we make it automatic without any API parameter.
> > > We don't have any function to check rte_malloc readiness I think.
> > > But we can detect whether shared memory is ready with this check:
> > > rte_eal_get_configuration()->mem_config->magic == RTE_MAGIC This
> > > check is true at the end of rte_eal_init, so it is false during probing.
> > > Would it be enough? Or should we implement rte_malloc_is_ready()?
> >
> > Hi Thomas,
> >
> > It's a very good suggestion. Let's implement "rte_malloc_is_ready()"
> > which could be as simple as " rte_eal_get_configuration()->mem_config-
> >magic == RTE_MAGIC" check.
> > There may be more consumers for this API in future.
>
> You cannot rely on the magic because it is set only after probing.
> For such API you need to have another internal flag to check that malloc is
> setup.
Yeah, got that. You mean that during bus probing rte_malloc is already set up
but eal_mcfg_complete() has not been called yet. So we should set another
malloc-specific flag at the end of rte_eal_memory_init(). Correct?
But just for understanding: since David suggested that we keep this flag,
why not use it, i.e. have rte_malloc and malloc bits and make a decision?
Let the driver have the flexibility to choose. Do you see any harm in this?
Thanks
Harman
>
^ permalink raw reply [flat|nested] 152+ messages in thread
* Re: [dpdk-dev] [EXT] Re: [PATCH v1 2/7] eal/interrupts: implement get set APIs
2021-10-14 10:31 ` Harman Kalra
@ 2021-10-14 10:35 ` Thomas Monjalon
2021-10-14 10:44 ` Harman Kalra
0 siblings, 1 reply; 152+ messages in thread
From: Thomas Monjalon @ 2021-10-14 10:35 UTC (permalink / raw)
To: Harman Kalra
Cc: David Marchand, dev, Raslan Darawsheh, Ray Kinsella,
Dmitry Kozlyuk, viacheslavo, matan
14/10/2021 12:31, Harman Kalra:
> From: Thomas Monjalon <thomas@monjalon.net>
> > 14/10/2021 11:31, Harman Kalra:
> > > From: Thomas Monjalon <thomas@monjalon.net>
> > > > 13/10/2021 20:52, Thomas Monjalon:
> > > > > 13/10/2021 19:57, Harman Kalra:
> > > > > > From: dev <dev-bounces@dpdk.org> On Behalf Of Harman Kalra
> > > > > > > From: Thomas Monjalon <thomas@monjalon.net>
> > > > > > > > 04/10/2021 11:57, David Marchand:
> > > > > > > > > On Mon, Oct 4, 2021 at 10:51 AM Harman Kalra
> > > > > > > > > <hkalra@marvell.com>
> > > > > > > > wrote:
> > > > > > > > > > > > +struct rte_intr_handle
> > > > > > > > > > > > +*rte_intr_handle_instance_alloc(int
> > > > size,
> > > > > > > > > > > > +
> > > > > > > > > > > > +bool
> > > > > > > > > > > > +from_hugepage) {
> > > > > > > > > > > > + struct rte_intr_handle *intr_handle;
> > > > > > > > > > > > + int i;
> > > > > > > > > > > > +
> > > > > > > > > > > > + if (from_hugepage)
> > > > > > > > > > > > + intr_handle = rte_zmalloc(NULL,
> > > > > > > > > > > > + size * sizeof(struct rte_intr_handle),
> > > > > > > > > > > > + 0);
> > > > > > > > > > > > + else
> > > > > > > > > > > > + intr_handle = calloc(1, size *
> > > > > > > > > > > > + sizeof(struct rte_intr_handle));
> > > > > > > > > > >
> > > > > > > > > > > We can call DPDK allocator in all cases.
> > > > > > > > > > > That would avoid headaches on why multiprocess does
> > > > > > > > > > > not work in some rarely tested cases.
> > > > [...]
> > > > > > > > I agree with David.
> > > > > > > > I prefer a simpler API which always use rte_malloc, and make
> > > > > > > > sure interrupts are always handled between rte_eal_init and
> > > > rte_eal_cleanup.
> > > > [...]
> > > > > > > There are couple of more dependencies on glibc heap APIs:
> > > > > > > 1. "rte_eal_alarm_init()" allocates an interrupt instance
> > > > > > > which is used for timerfd, is called before
> > > > > > > "rte_eal_memory_init()" which does the memseg init.
> > > > > > > Not sure what all challenges we may face in moving alarm_init
> > > > > > > after memory_init as it might break some subsystem inits.
> > > > > > > Other option could be to allocate interrupt instance for
> > > > > > > timerfd on first alarm_setup call.
> > > > >
> > > > > Indeed it is an issue.
> > > > >
> > > > > [...]
> > > > >
> > > > > > > There are many other drivers which statically declares the
> > > > > > > interrupt handles inside their respective private structures
> > > > > > > and memory for those structure was allocated from heap. For
> > > > > > > such drivers I allocated interrupt instances also using glibc heap
> > APIs.
> > > > >
> > > > > Could you use rte_malloc in these drivers?
> > > >
> > > > If we take the direction of 2 different allocations mode for the
> > > > interrupts, I suggest we make it automatic without any API parameter.
> > > > We don't have any function to check rte_malloc readiness I think.
> > > > But we can detect whether shared memory is ready with this check:
> > > > rte_eal_get_configuration()->mem_config->magic == RTE_MAGIC This
> > > > check is true at the end of rte_eal_init, so it is false during probing.
> > > > Would it be enough? Or should we implement rte_malloc_is_ready()?
> > >
> > > Hi Thomas,
> > >
> > > It's a very good suggestion. Let's implement "rte_malloc_is_ready()"
> > > which could be as simple as " rte_eal_get_configuration()->mem_config-
> > >magic == RTE_MAGIC" check.
> > > There may be more consumers for this API in future.
> >
> > You cannot rely on the magic because it is set only after probing.
> > For such API you need to have another internal flag to check that malloc is
> > setup.
>
> Yeah, got that. You mean in case of bus probing although rte_malloc is setup
> but eal_mcfg_complete() is calledt done yet. So we should set another malloc
> specific flag at the end of rte_eal_memory_init(). Correct?
I think the new internal flag should be at the end of rte_eal_malloc_heap_init().
Then a rte_internal function rte_malloc_is_ready() should check this flag.
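(For concreteness, that suggestion amounts to something like the sketch below;
patch 1 of v3 later in this thread implements essentially this, with the flag
living in malloc_heap.c. Names follow that patch.)

/* malloc_heap.c -- sketch only */
static bool malloc_ready;

bool
rte_malloc_is_ready(void)
{
	return malloc_ready;
}

/* ...and at the very end of rte_eal_malloc_heap_init(), once the heap
 * setup has succeeded: malloc_ready = true;
 */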
> But just for understanding, as David suggested that we preserve keep this flag
> then why not use it, have rte_malloc and malloc bits and make a decision.
> Let driver has the flexibility to choose. Do you see any harm in this?
Which flag?
^ permalink raw reply [flat|nested] 152+ messages in thread
* Re: [dpdk-dev] [EXT] Re: [PATCH v1 2/7] eal/interrupts: implement get set APIs
2021-10-14 10:35 ` Thomas Monjalon
@ 2021-10-14 10:44 ` Harman Kalra
2021-10-14 12:04 ` Thomas Monjalon
0 siblings, 1 reply; 152+ messages in thread
From: Harman Kalra @ 2021-10-14 10:44 UTC (permalink / raw)
To: Thomas Monjalon
Cc: David Marchand, dev, Raslan Darawsheh, Ray Kinsella,
Dmitry Kozlyuk, viacheslavo, matan
> -----Original Message-----
> From: Thomas Monjalon <thomas@monjalon.net>
> Sent: Thursday, October 14, 2021 4:06 PM
> To: Harman Kalra <hkalra@marvell.com>
> Cc: David Marchand <david.marchand@redhat.com>; dev@dpdk.org; Raslan
> Darawsheh <rasland@nvidia.com>; Ray Kinsella <mdr@ashroe.eu>; Dmitry
> Kozlyuk <dmitry.kozliuk@gmail.com>; viacheslavo@nvidia.com;
> matan@nvidia.com
> Subject: Re: [dpdk-dev] [EXT] Re: [PATCH v1 2/7] eal/interrupts: implement
> get set APIs
>
> 14/10/2021 12:31, Harman Kalra:
> > From: Thomas Monjalon <thomas@monjalon.net>
> > > 14/10/2021 11:31, Harman Kalra:
> > > > From: Thomas Monjalon <thomas@monjalon.net>
> > > > > 13/10/2021 20:52, Thomas Monjalon:
> > > > > > 13/10/2021 19:57, Harman Kalra:
> > > > > > > From: dev <dev-bounces@dpdk.org> On Behalf Of Harman Kalra
> > > > > > > > From: Thomas Monjalon <thomas@monjalon.net>
> > > > > > > > > 04/10/2021 11:57, David Marchand:
> > > > > > > > > > On Mon, Oct 4, 2021 at 10:51 AM Harman Kalra
> > > > > > > > > > <hkalra@marvell.com>
> > > > > > > > > wrote:
> > > > > > > > > > > > > +struct rte_intr_handle
> > > > > > > > > > > > > +*rte_intr_handle_instance_alloc(int
> > > > > size,
> > > > > > > > > > > > > +
> > > > > > > > > > > > > +bool
> > > > > > > > > > > > > +from_hugepage) {
> > > > > > > > > > > > > + struct rte_intr_handle *intr_handle;
> > > > > > > > > > > > > + int i;
> > > > > > > > > > > > > +
> > > > > > > > > > > > > + if (from_hugepage)
> > > > > > > > > > > > > + intr_handle = rte_zmalloc(NULL,
> > > > > > > > > > > > > + size * sizeof(struct
> rte_intr_handle),
> > > > > > > > > > > > > + 0);
> > > > > > > > > > > > > + else
> > > > > > > > > > > > > + intr_handle = calloc(1, size *
> > > > > > > > > > > > > + sizeof(struct rte_intr_handle));
> > > > > > > > > > > >
> > > > > > > > > > > > We can call DPDK allocator in all cases.
> > > > > > > > > > > > That would avoid headaches on why multiprocess
> > > > > > > > > > > > does not work in some rarely tested cases.
> > > > > [...]
> > > > > > > > > I agree with David.
> > > > > > > > > I prefer a simpler API which always use rte_malloc, and
> > > > > > > > > make sure interrupts are always handled between
> > > > > > > > > rte_eal_init and
> > > > > rte_eal_cleanup.
> > > > > [...]
> > > > > > > > There are couple of more dependencies on glibc heap APIs:
> > > > > > > > 1. "rte_eal_alarm_init()" allocates an interrupt instance
> > > > > > > > which is used for timerfd, is called before
> > > > > > > > "rte_eal_memory_init()" which does the memseg init.
> > > > > > > > Not sure what all challenges we may face in moving
> > > > > > > > alarm_init after memory_init as it might break some subsystem
> inits.
> > > > > > > > Other option could be to allocate interrupt instance for
> > > > > > > > timerfd on first alarm_setup call.
> > > > > >
> > > > > > Indeed it is an issue.
> > > > > >
> > > > > > [...]
> > > > > >
> > > > > > > > There are many other drivers which statically declares the
> > > > > > > > interrupt handles inside their respective private
> > > > > > > > structures and memory for those structure was allocated
> > > > > > > > from heap. For such drivers I allocated interrupt
> > > > > > > > instances also using glibc heap
> > > APIs.
> > > > > >
> > > > > > Could you use rte_malloc in these drivers?
> > > > >
> > > > > If we take the direction of 2 different allocations mode for the
> > > > > interrupts, I suggest we make it automatic without any API parameter.
> > > > > We don't have any function to check rte_malloc readiness I think.
> > > > > But we can detect whether shared memory is ready with this check:
> > > > > rte_eal_get_configuration()->mem_config->magic == RTE_MAGIC This
> > > > > check is true at the end of rte_eal_init, so it is false during probing.
> > > > > Would it be enough? Or should we implement rte_malloc_is_ready()?
> > > >
> > > > Hi Thomas,
> > > >
> > > > It's a very good suggestion. Let's implement "rte_malloc_is_ready()"
> > > > which could be as simple as "
> > > >rte_eal_get_configuration()->mem_config-
> > > >magic == RTE_MAGIC" check.
> > > > There may be more consumers for this API in future.
> > >
> > > You cannot rely on the magic because it is set only after probing.
> > > For such API you need to have another internal flag to check that
> > > malloc is setup.
> >
> > Yeah, got that. You mean in case of bus probing although rte_malloc is
> > setup but eal_mcfg_complete() is calledt done yet. So we should set
> > another malloc specific flag at the end of rte_eal_memory_init(). Correct?
>
> I think the new internal flag should be at the end of
> rte_eal_malloc_heap_init().
> Then a rte_internal function rte_malloc_is_ready() should check this flag.
Sure.
>
> > But just for understanding, as David suggested that we preserve keep
> > this flag then why not use it, have rte_malloc and malloc bits and make a
> decision.
> > Let driver has the flexibility to choose. Do you see any harm in this?
>
> Which flag?
In v2, I have replaced the bool arg with a 32-bit flags argument in the alloc API:
struct rte_intr_handle *rte_intr_instance_alloc(uint32_t flags);
I declared some flags which can be passed by the consumer:
/** Interrupt instance allocation flags
* @see rte_intr_instance_alloc
*/
/** Allocate interrupt instance from traditional heap */
#define RTE_INTR_ALLOC_TRAD_HEAP 0x00000000
/** Allocate interrupt instance using DPDK memory management APIs */
#define RTE_INTR_ALLOC_DPDK_ALLOCATOR 0x00000001
As a future enhancement, if more allocation options are required by the user,
new flags can be added.
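(Illustration only: a consumer of the v2 proposal above would pick the
allocator explicitly, e.g. a PMD probed after rte_eal_init() requesting the
DPDK allocator. This flag was dropped again in v3 in favour of auto-detection.)

struct rte_intr_handle *handle =
	rte_intr_instance_alloc(RTE_INTR_ALLOC_DPDK_ALLOCATOR);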
Thanks
Harman
>
>
^ permalink raw reply [flat|nested] 152+ messages in thread
* Re: [dpdk-dev] [EXT] Re: [PATCH v1 2/7] eal/interrupts: implement get set APIs
2021-10-14 10:44 ` Harman Kalra
@ 2021-10-14 12:04 ` Thomas Monjalon
0 siblings, 0 replies; 152+ messages in thread
From: Thomas Monjalon @ 2021-10-14 12:04 UTC (permalink / raw)
To: Harman Kalra
Cc: David Marchand, dev, Raslan Darawsheh, Ray Kinsella,
Dmitry Kozlyuk, viacheslavo, matan
14/10/2021 12:44, Harman Kalra:
> > > But just for understanding, as David suggested that we preserve keep
> > > this flag then why not use it, have rte_malloc and malloc bits and make a
> > decision.
> > > Let driver has the flexibility to choose. Do you see any harm in this?
> >
> > Which flag?
>
> In V2, I have replaced the bool arg with an 32bit flag in alloc api:
> struct rte_intr_handle *rte_intr_instance_alloc(uint32_t flags);
>
> Declared some flags which can be passed by the consumer
> /** Interrupt instance allocation flags
> * @see rte_intr_instance_alloc
> */
> /** Allocate interrupt instance from traditional heap */
> #define RTE_INTR_ALLOC_TRAD_HEAP 0x00000000
> /** Allocate interrupt instance using DPDK memory management APIs */
> #define RTE_INTR_ALLOC_DPDK_ALLOCATOR 0x00000001
>
> As a future enhancement, if more options to the allocation is required by user,
> new flags can be added.
I am not sure we need such a flag.
^ permalink raw reply [flat|nested] 152+ messages in thread
* Re: [dpdk-dev] [EXT] Re: [PATCH v2 1/6] eal/interrupts: implement get set APIs
2021-10-14 0:58 ` Dmitry Kozlyuk
@ 2021-10-14 17:15 ` Harman Kalra
2021-10-14 17:53 ` Dmitry Kozlyuk
0 siblings, 1 reply; 152+ messages in thread
From: Harman Kalra @ 2021-10-14 17:15 UTC (permalink / raw)
To: Dmitry Kozlyuk; +Cc: dev, Thomas Monjalon, Ray Kinsella, david.marchand
Hi Dmitry,
Thanks for your inputs.
Please see inline.
> -----Original Message-----
> From: Dmitry Kozlyuk <dmitry.kozliuk@gmail.com>
> Sent: Thursday, October 14, 2021 6:29 AM
> To: Harman Kalra <hkalra@marvell.com>
> Cc: dev@dpdk.org; Thomas Monjalon <thomas@monjalon.net>; Ray Kinsella
> <mdr@ashroe.eu>; david.marchand@redhat.com
> Subject: [EXT] Re: [PATCH v2 1/6] eal/interrupts: implement get set APIs
>
> External Email
>
> ----------------------------------------------------------------------
> 2021-10-05 17:44 (UTC+0530), Harman Kalra:
> > [...]
> > +int rte_intr_instance_copy(struct rte_intr_handle *intr_handle,
> > + const struct rte_intr_handle *src) {
> > + if (intr_handle == NULL) {
> > + RTE_LOG(ERR, EAL, "Interrupt instance unallocated\n");
> > + rte_errno = ENOTSUP;
> > + goto fail;
> > + }
> > +
> > + if (src == NULL) {
> > + RTE_LOG(ERR, EAL, "Source interrupt instance
> unallocated\n");
> > + rte_errno = EINVAL;
> > + goto fail;
> > + }
> > +
> > + intr_handle->fd = src->fd;
> > + intr_handle->vfio_dev_fd = src->vfio_dev_fd;
> > + intr_handle->type = src->type;
> > + intr_handle->max_intr = src->max_intr;
> > + intr_handle->nb_efd = src->nb_efd;
> > + intr_handle->efd_counter_size = src->efd_counter_size;
> > +
> > + memcpy(intr_handle->efds, src->efds, src->nb_intr);
> > + memcpy(intr_handle->elist, src->elist, src->nb_intr);
>
> Buffer overrun if "intr_handle->nb_intr < src->nb_intr"?
Ack, I will add the check.
>
> > +
> > + return 0;
> > +fail:
> > + return -rte_errno;
> > +}
> > +
> > +int rte_intr_instance_mem_allocator_get(
> > + const struct rte_intr_handle *intr_handle) {
> > + if (intr_handle == NULL) {
> > + RTE_LOG(ERR, EAL, "Interrupt instance unallocated\n");
> > + return -ENOTSUP;
>
> ENOTSUP usually means the operation is valid from API standpoint but not
> supported by the implementation. EINVAL/EFAULT suits better.
Ack, will make it EFAULT.
>
> > + }
> > +
> > + return intr_handle->mem_allocator;
> > +}
>
> What do you think about having an API to retrieve the entire flags instead?
Since we are now planning to remove this flag variable and rely on an auto-detection mechanism,
I will remove this API.
>
> > +
> > +void rte_intr_instance_free(struct rte_intr_handle *intr_handle) {
> > + if (intr_handle == NULL) {
> > + RTE_LOG(ERR, EAL, "Interrupt instance unallocated\n");
> > + rte_errno = ENOTSUP;
> > + }
>
> API are neater when free(NULL) is a no-op.
Correct.
>
> > +
> > + if (intr_handle->mem_allocator)
> > + rte_free(intr_handle);
> > + else
> > + free(intr_handle);
> > +}
> > +
> > +int rte_intr_fd_set(struct rte_intr_handle *intr_handle, int fd) {
> > + if (intr_handle == NULL) {
> > + RTE_LOG(ERR, EAL, "Interrupt instance unallocated\n");
> > + rte_errno = ENOTSUP;
> > + goto fail;
> > + }
>
> This piece repeats over and over, how about making it a function or a macro,
> like in ethdev?
Ack, will define a macro for the same.
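(For reference, the check in question ends up as the following macro in the v3
patch later in this thread:)

#define CHECK_VALID_INTR_HANDLE(intr_handle) do { \
	if (intr_handle == NULL) { \
		RTE_LOG(ERR, EAL, "Interrupt instance unallocated\n"); \
		rte_errno = EINVAL; \
		goto fail; \
	} \
} while (0)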
>
> > +
> > + intr_handle->fd = fd;
> > +
> > + return 0;
> > +fail:
> > + return -rte_errno;
> > +}
> > +
> > +int rte_intr_fd_get(const struct rte_intr_handle *intr_handle) {
> > + if (intr_handle == NULL) {
> > + RTE_LOG(ERR, EAL, "Interrupt instance unallocated\n");
> > + rte_errno = ENOTSUP;
> > + goto fail;
> > + }
> > +
> > + return intr_handle->fd;
> > +fail:
> > + return -1;
> > +}
>
> Please add a similar pair of experimental API for the "handle" member, it is
> needed for Windows interrupt support I'm working on top of these series
> (IIUC, API changes should be closed by RC1.) If you will be doing this and
> don't like "handle" name, it might be like "dev_handle" or
> "windows_device".
I will add new APIs to get/set the handle. Let's rename it to "windows_handle".
>
> > [...]
> > +int rte_intr_max_intr_set(struct rte_intr_handle *intr_handle,
> > + int max_intr)
> > +{
> > + if (intr_handle == NULL) {
> > + RTE_LOG(ERR, EAL, "Interrupt instance unallocated\n");
> > + rte_errno = ENOTSUP;
> > + goto fail;
> > + }
> > +
> > + if (max_intr > intr_handle->nb_intr) {
> > + RTE_LOG(ERR, EAL, "Max_intr=%d greater than
> > +RTE_MAX_RXTX_INTR_VEC_ID=%d",
>
> The macros is not used in the comparison, neither should the log mention it.
I will add the check.
>
> > [...]
> > @@ -420,6 +412,14 @@ EXPERIMENTAL {
> >
> > # added in 21.08
> > rte_power_monitor_multi; # WINDOWS_NO_EXPORT
> > +
> > + # added in 21.11
> > + rte_intr_fd_set;
> > + rte_intr_fd_get;
>
> WINDOWS_NO_EXPORT
Ack.
>
> > + rte_intr_type_set;
> > + rte_intr_type_get;
> > + rte_intr_instance_alloc;
> > + rte_intr_instance_free;
> > };
>
> Do I understand correctly that these exports are needed to allow an
> application to use DPDK callback facilities for its own interrupt sources?
I exported only those APIs which are currently used by the test suite or example
applications; later, more APIs can be moved from internal to public on a
need basis.
> If so, I'd suggest that instead we export a simpler set of functions:
> 1. Create/free a handle instance with automatic fixed type selection.
> 2. Trigger an interrupt on the specified handle instance.
> The flow would be that the application listens on whatever it wants, probably
> with OS-specific mechanisms, and just notifies the interrupt thread about
> events to trigger callbacks.
> Because these APIs are experimental we don't need to change it now, just my
> thoughts for the future.
I am sorry but I did not follow your suggestion, can you please explain?
Thanks
Harman
^ permalink raw reply [flat|nested] 152+ messages in thread
* Re: [dpdk-dev] [EXT] Re: [PATCH v2 2/6] eal/interrupts: avoid direct access to interrupt handle
2021-10-14 0:59 ` Dmitry Kozlyuk
@ 2021-10-14 17:31 ` Harman Kalra
2021-10-14 17:53 ` Dmitry Kozlyuk
0 siblings, 1 reply; 152+ messages in thread
From: Harman Kalra @ 2021-10-14 17:31 UTC (permalink / raw)
To: Dmitry Kozlyuk; +Cc: dev, Bruce Richardson, david.marchand, mdr
> -----Original Message-----
> From: Dmitry Kozlyuk <dmitry.kozliuk@gmail.com>
> Sent: Thursday, October 14, 2021 6:29 AM
> To: Harman Kalra <hkalra@marvell.com>
> Cc: dev@dpdk.org; Bruce Richardson <bruce.richardson@intel.com>;
> david.marchand@redhat.com; mdr@ashroe.eu
> Subject: [EXT] Re: [PATCH v2 2/6] eal/interrupts: avoid direct access to
> interrupt handle
>
> External Email
>
> ----------------------------------------------------------------------
> 2021-10-05 17:44 (UTC+0530), Harman Kalra:
> > Making changes to the interrupt framework to use interrupt handle APIs
> > to get/set any field. Direct access to any of the fields should be
> > avoided to avoid any ABI breakage in future.
>
> How is ABI breakage applicable to internal consumers?
>
> This protects against fields renaming for sure, but convenience is arguable.
> If EAL needs to add a EAL-private field to struct rte_intr_handle, it must add
> an accessor even though the field is likely OS-specific.
> It would be simpler if the definition was in some private EAL header and
> could be accessed directly by EAL code.
Initially we thought to implement it this way, i.e. defining rte_intr_handle inside internal headers,
but supporting out-of-tree drivers was one of the reasons to go with this get/set approach. All drivers,
internal and external, should follow the same way; that was the intention.
Thanks
Harman
^ permalink raw reply [flat|nested] 152+ messages in thread
* Re: [dpdk-dev] [EXT] Re: [PATCH v2 1/6] eal/interrupts: implement get set APIs
2021-10-14 17:15 ` [dpdk-dev] [EXT] " Harman Kalra
@ 2021-10-14 17:53 ` Dmitry Kozlyuk
2021-10-15 7:53 ` Thomas Monjalon
0 siblings, 1 reply; 152+ messages in thread
From: Dmitry Kozlyuk @ 2021-10-14 17:53 UTC (permalink / raw)
To: Harman Kalra; +Cc: dev, Thomas Monjalon, Ray Kinsella, david.marchand
2021-10-14 17:15 (UTC+0000), Harman Kalra:
> [...]
> > > +int rte_intr_fd_get(const struct rte_intr_handle *intr_handle) {
> > > + if (intr_handle == NULL) {
> > > + RTE_LOG(ERR, EAL, "Interrupt instance unallocated\n");
> > > + rte_errno = ENOTSUP;
> > > + goto fail;
> > > + }
> > > +
> > > + return intr_handle->fd;
> > > +fail:
> > > + return -1;
> > > +}
> >
> > Please add a similar pair of experimental API for the "handle" member, it is
> > needed for Windows interrupt support I'm working on top of these series
> > (IIUC, API changes should be closed by RC1.) If you will be doing this and
> > don't like "handle" name, it might be like "dev_handle" or
> > "windows_device".
>
> I add new APIs to get/set handle. Let's rename it to "windows_handle"
The name works for me, thanks.
> > > [...]
> > > +int rte_intr_max_intr_set(struct rte_intr_handle *intr_handle,
> > > + int max_intr)
> > > +{
> > > + if (intr_handle == NULL) {
> > > + RTE_LOG(ERR, EAL, "Interrupt instance unallocated\n");
> > > + rte_errno = ENOTSUP;
> > > + goto fail;
> > > + }
> > > +
> > > + if (max_intr > intr_handle->nb_intr) {
> > > + RTE_LOG(ERR, EAL, "Max_intr=%d greater than
> > > +RTE_MAX_RXTX_INTR_VEC_ID=%d",
> >
> > The macros is not used in the comparison, neither should the log mention it.
>
> I will add the check.
What check? I mean that the condition is `max_intr > intr_handle->nb_intr`,
so `RTE_MAX_RXTX_INTR_VEC_ID` is not relevant, `intr_handle->nb_intr` is
dynamic. Probably it should be like this:
RTE_LOG(ERR, EAL, "Maximum interrupt vector ID (%d) exceeds "
"the number of available events (%d)\n",
max_intr, intr_handle->nb_intr);
> [...]
> > > + rte_intr_type_set;
> > > + rte_intr_type_get;
> > > + rte_intr_instance_alloc;
> > > + rte_intr_instance_free;
> > > };
> >
> > Do I understand correctly that these exports are needed to allow an
> > application to use DPDK callback facilities for its own interrupt sources?
>
> I exported only those APIs which are currently used by test suite or example
> applications, may be later more APIs can be moved from internal to public on
> need basis.
>
>
> > If so, I'd suggest that instead we export a simpler set of functions:
> > 1. Create/free a handle instance with automatic fixed type selection.
> > 2. Trigger an interrupt on the specified handle instance.
> > The flow would be that the application listens on whatever it wants, probably
> > with OS-specific mechanisms, and just notifies the interrupt thread about
> > events to trigger callbacks.
> > Because these APIs are experimental we don't need to change it now, just my
> > thoughts for the future.
>
> I am sorry but I did not followed your suggestion, can you please explain.
This API is used as follows. The application has a file descriptor
that becomes readable on some event. The programmer doesn't want to create
another thread like the EAL interrupt thread and implement thread-safe callback
registration and invocation. They want to reuse the DPDK mechanism instead.
So they create an instance of type EXT and give it the descriptor.
In case of the unit test the descriptor is a pipe read end.
In case of a real application it can be a socket, like in mlx5 PMD.
This is often convenient, but not always. An event may be a signal,
or busy-wait end, or it may be Windows with its completely different IO model
(it's "issue an IO, wait for completion" instead of POSIX
"wait for IO readiness, do a blocking IO").
In all these cases the user needs to create a fake pipe (or whatever)
to fit into how the interrupt thread waits for events.
But what the application really needs is to say "there's an event, please run
the callback on this handle". It's a function call that doesn't require any
explicit file descriptors or handles, doesn't rely on any IO model.
How it is implemented depends on the EAL, for POSIX it will probably be
an internal pipe, Windows can use APC as in eal_intr_thread_schedule().
Again, I'm thinking out loud here, nothing of this needs to be done now.
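(To make the two models concrete, a minimal sketch assuming the v3 API of this
series; the trigger call at the end is purely hypothetical and not part of any
patch.)

#include <unistd.h>
#include <rte_interrupts.h>

/* Runs in the EAL interrupt thread when the fd becomes readable. */
static void app_event_cb(void *arg) { (void)arg; }

static int pipefd[2];

static int
setup_app_interrupt(void)
{
	struct rte_intr_handle *h;

	if (pipe(pipefd) < 0)
		return -1;
	h = rte_intr_instance_alloc();             /* v3 signature, no flags */
	if (h == NULL)
		return -1;
	rte_intr_type_set(h, RTE_INTR_HANDLE_EXT); /* external event source */
	rte_intr_fd_set(h, pipefd[0]);             /* the fake pipe read end */
	return rte_intr_callback_register(h, app_event_cb, NULL);
}

/* The alternative suggested above would drop the fake pipe entirely:
 * rte_intr_instance_trigger(h);   -- hypothetical name, not in this series
 */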
^ permalink raw reply [flat|nested] 152+ messages in thread
* Re: [dpdk-dev] [EXT] Re: [PATCH v2 2/6] eal/interrupts: avoid direct access to interrupt handle
2021-10-14 17:31 ` [dpdk-dev] [EXT] " Harman Kalra
@ 2021-10-14 17:53 ` Dmitry Kozlyuk
0 siblings, 0 replies; 152+ messages in thread
From: Dmitry Kozlyuk @ 2021-10-14 17:53 UTC (permalink / raw)
To: Harman Kalra; +Cc: dev, Bruce Richardson, david.marchand, mdr
2021-10-14 17:31 (UTC+0000), Harman Kalra:
> > -----Original Message-----
> > From: Dmitry Kozlyuk <dmitry.kozliuk@gmail.com>
> > Sent: Thursday, October 14, 2021 6:29 AM
> > To: Harman Kalra <hkalra@marvell.com>
> > Cc: dev@dpdk.org; Bruce Richardson <bruce.richardson@intel.com>;
> > david.marchand@redhat.com; mdr@ashroe.eu
> > Subject: [EXT] Re: [PATCH v2 2/6] eal/interrupts: avoid direct access to
> > interrupt handle
> >
> > External Email
> >
> > ----------------------------------------------------------------------
> > 2021-10-05 17:44 (UTC+0530), Harman Kalra:
> > > Making changes to the interrupt framework to use interrupt handle APIs
> > > to get/set any field. Direct access to any of the fields should be
> > > avoided to avoid any ABI breakage in future.
> >
> > How is ABI breakage applicable to internal consumers?
> >
> > This protects against fields renaming for sure, but convenience is arguable.
> > If EAL needs to add a EAL-private field to struct rte_intr_handle, it must add
> > an accessor even though the field is likely OS-specific.
> > It would be simpler if the definition was in some private EAL header and
> > could be accessed directly by EAL code.
>
> Initially we thought to implement this way only i.e. defining rte_intr_handle inside internal headers
> but supporting out of tree drivers was one of the reason to go via this get/set approach. All drivers
> internal and external should follow the same way, that was the intention.
>
> Thanks
> Harman
True for drivers, I understand this, but the question is about EAL itself.
I shouldn't say "internal consumers", I only meant EAL, not inbox drivers.
^ permalink raw reply [flat|nested] 152+ messages in thread
* Re: [dpdk-dev] [EXT] Re: [PATCH v2 1/6] eal/interrupts: implement get set APIs
2021-10-14 17:53 ` Dmitry Kozlyuk
@ 2021-10-15 7:53 ` Thomas Monjalon
0 siblings, 0 replies; 152+ messages in thread
From: Thomas Monjalon @ 2021-10-15 7:53 UTC (permalink / raw)
To: Harman Kalra, Dmitry Kozlyuk
Cc: dev, Ray Kinsella, david.marchand, bruce.richardson, stephen
14/10/2021 19:53, Dmitry Kozlyuk:
> 2021-10-14 17:15 (UTC+0000), Harman Kalra:
> > > > + rte_intr_type_set;
> > > > + rte_intr_type_get;
> > > > + rte_intr_instance_alloc;
> > > > + rte_intr_instance_free;
> > > > };
> > >
> > > Do I understand correctly that these exports are needed to allow an
> > > application to use DPDK callback facilities for its own interrupt sources?
> >
> > I exported only those APIs which are currently used by test suite or example
> > applications, may be later more APIs can be moved from internal to public on
> > need basis.
> >
> > > If so, I'd suggest that instead we export a simpler set of functions:
> > > 1. Create/free a handle instance with automatic fixed type selection.
> > > 2. Trigger an interrupt on the specified handle instance.
> > > The flow would be that the application listens on whatever it wants, probably
> > > with OS-specific mechanisms, and just notifies the interrupt thread about
> > > events to trigger callbacks.
> > > Because these APIs are experimental we don't need to change it now, just my
> > > thoughts for the future.
> >
> > I am sorry but I did not followed your suggestion, can you please explain.
>
> These API is used as follows. The application has a file descriptor
> that becomes readable on some event. The programmer doesn't want to create
> another thread like EAL interrupt thread, implement thread-safe callback
> registration and invocation. They want to reuse DPDK mechanism instead.
> So they create an instance of type EXT and give it the descriptor.
> In case of the unit test the descriptor is a pipe read end.
> In case of a real application it can be a socket, like in mlx5 PMD.
> This is often convenient, but not always. An event may be a signal,
> or busy-wait end, or it may be Windows with its completely different IO model
> (it's "issue an IO, wait for completion" instead of POSIX
> "wait for IO readiness, do a blocking IO").
> In all these cases the user needs to create a fake pipe (or whatever)
> to fit into how the interrupt thread waits for events.
> But what the application really needs is to say "there's an event, please run
> the callback on this handle". It's a function call that doesn't require any
> explicit file descriptors or handles, doesn't rely on any IO model.
> How it is implemented depends on the EAL, for POSIX it will probably be
> an internal pipe, Windows can use APC as in eal_intr_thread_schedule().
> Again, I'm thinking out loud here, nothing of this needs to be done now.
I like this way of thinking.
^ permalink raw reply [flat|nested] 152+ messages in thread
* [dpdk-dev] [PATCH v3 0/7] make rte_intr_handle internal
2021-08-26 14:57 [dpdk-dev] [RFC 0/7] make rte_intr_handle internal Harman Kalra
` (9 preceding siblings ...)
2021-10-05 16:07 ` [dpdk-dev] [RFC 0/7] make rte_intr_handle internal Stephen Hemminger
@ 2021-10-18 19:37 ` Harman Kalra
2021-10-18 19:37 ` [dpdk-dev] [PATCH v3 1/7] malloc: introduce malloc is ready API Harman Kalra
` (6 more replies)
2021-10-19 18:35 ` [dpdk-dev] [PATCH v4 0/7] make rte_intr_handle internal Harman Kalra
2021-10-22 20:49 ` [dpdk-dev] [PATCH v5 0/6] make rte_intr_handle internal Harman Kalra
12 siblings, 7 replies; 152+ messages in thread
From: Harman Kalra @ 2021-10-18 19:37 UTC (permalink / raw)
To: dev; +Cc: david.marchand, dmitry.kozliuk, mdr, thomas, Harman Kalra
Moving struct rte_intr_handle as an internal structure to
avoid any ABI breakages in future. Since this structure defines
some static arrays and changing respective macros breaks the ABI.
Eg:
Currently RTE_MAX_RXTX_INTR_VEC_ID imposes a limit of maximum 512
MSI-X interrupts that can be defined for a PCI device, while PCI
specification allows maximum 2048 MSI-X interrupts that can be used.
If some PCI device requires more than 512 vectors, either change the
RTE_MAX_RXTX_INTR_VEC_ID limit or dynamically allocate based on
PCI device MSI-X size on probe time. Either way its an ABI breakage.
Change already included in 21.11 ABI improvement spreadsheet (item 42):
https://urldefense.proofpoint.com/v2/url?u=https-3A__docs.google.com_s
preadsheets_d_1betlC000ua5SsSiJIcC54mCCCJnW6voH5Dqv9UxeyfE_edit-23gid-
3D0&d=DwICaQ&c=nKjWec2b6R0mOyPaz7xtfQ&r=5ESHPj7V-7JdkxT_Z_SU6RrS37ys4U
XudBQ_rrS5LRo&m=7dl3OmXU7QHMmWYB6V1hYJtq1cUkjfhXUwze2Si_48c&s=lh6DEGhR
Bg1shODpAy3RQk-H-0uQx5icRfUBf9dtCp4&e=
This series makes struct rte_intr_handle totally opaque to the outside
world by wrapping it inside a .c file and providing get set wrapper APIs
to read or manipulate its fields.. Any changes to be made to any of the
fields should be done via these get set APIs.
Introduced a new eal_common_interrupts.c where all these APIs are defined
and also hides struct rte_intr_handle definition.
Details on each patch of the series:
Patch 1: malloc: introduce malloc is ready API
This patch introduces a new API which tells whether the DPDK memory
subsystem is initialized and the rte_malloc* APIs are ready to be
used. If rte_malloc* is set up, memory for an interrupt instance
is allocated using rte_malloc, else using traditional heap APIs.
Patch 2: eal/interrupts: implement get set APIs
This patch provides prototypes and implementation of all the new
get set APIs. Alloc APIs are implemented to allocate memory for
interrupt handle instance. Currently most of the drivers define the
interrupt handle instance as static, but now it can't be static as the
size of rte_intr_handle is unknown to the drivers. Drivers are
expected to allocate interrupt instances during initialization
and free these instances during cleanup phase.
This patch also rearranges the headers related to interrupt
framework. Epoll related definitions prototypes are moved into a
new header i.e. rte_epoll.h and APIs defined in rte_eal_interrupts.h
which were driver specific are moved to rte_interrupts.h (as it was
anyway accessible and used outside the DPDK library). Later in the
series rte_eal_interrupts.h is removed.
Patch 3: eal/interrupts: avoid direct access to interrupt handle
Modifying the interrupt framework for linux and freebsd to use these
get set alloc APIs as per requirement and avoid accessing the fields
directly.
Patch 4: test/interrupt: apply get set interrupt handle APIs
Updating interrupt test suite to use interrupt handle APIs.
Patch 5: drivers: remove direct access to interrupt handle fields
Modifying all the drivers and libraries which are currently directly
accessing the interrupt handle fields. Drivers are expected to
allocate the interrupt instance, use the get/set APIs with the allocated
interrupt handle, and free it on cleanup (see the sketch after the
patch descriptions below).
Patch 6: eal/interrupts: make interrupt handle structure opaque
In this patch rte_eal_interrupt.h is removed, struct rte_intr_handle
definition is moved to c file to make it completely opaque. As part of
interrupt handle allocation, arrays like efds and elist (which are currently
static) are dynamically allocated with a default size
(RTE_MAX_RXTX_INTR_VEC_ID). Later these arrays can be reallocated as per
device requirement using new API rte_intr_handle_event_list_update().
Eg, on PCI device probing MSIX size can be queried and these arrays can
be reallocated accordingly.
Patch 7: eal/alarm: introduce alarm fini routine
Introducing alarm fini routine, as the memory allocated for alarm interrupt
instance can be freed in alarm fini.
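(To illustrate the driver-side flow described for patches 2 and 5 above, a
minimal sketch; struct and function names are illustrative, the rte_intr_*
calls follow this series, and error handling is trimmed.)

#include <errno.h>
#include <rte_interrupts.h>

struct my_priv {
	struct rte_intr_handle *intr_handle; /* previously a static struct */
};

static int
my_dev_probe(struct my_priv *priv, int irq_fd)
{
	priv->intr_handle = rte_intr_instance_alloc();
	if (priv->intr_handle == NULL)
		return -ENOMEM;
	rte_intr_type_set(priv->intr_handle, RTE_INTR_HANDLE_VFIO_MSIX);
	rte_intr_fd_set(priv->intr_handle, irq_fd);
	return 0;
}

static void
my_dev_remove(struct my_priv *priv)
{
	rte_intr_instance_free(priv->intr_handle);
	priv->intr_handle = NULL;
}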
Testing performed:
1. Validated the series by running interrupts and alarm test suite.
2. Validated l3fwd-power functionality with octeontx2 and i40e Intel cards,
where interrupts are expected on packet arrival.
v1:
* Fixed freebsd compilation failure
* Fixed seg fault in case of memif
v2:
* Merged the prototype and implementation patch to 1.
* Restricting allocation of single interrupt instance.
* Removed base APIs, as they were exposing internally
allocated memory information.
* Fixed some memory leak issues.
* Marked some library specific APIs as internal.
v3:
* Removed flag from instance alloc API; instead auto-detect
whether memory should be allocated using glibc malloc APIs or
rte_malloc*.
* Added APIs for get/set windows handle.
* Defined macros for repeated checks.
Harman Kalra (7):
malloc: introduce malloc is ready API
eal/interrupts: implement get set APIs
eal/interrupts: avoid direct access to interrupt handle
test/interrupt: apply get set interrupt handle APIs
drivers: remove direct access to interrupt handle
eal/interrupts: make interrupt handle structure opaque
eal/alarm: introduce alarm fini routine
MAINTAINERS | 1 +
app/test/test_interrupts.c | 162 +++--
drivers/baseband/acc100/rte_acc100_pmd.c | 18 +-
.../fpga_5gnr_fec/rte_fpga_5gnr_fec.c | 21 +-
drivers/baseband/fpga_lte_fec/fpga_lte_fec.c | 21 +-
drivers/bus/auxiliary/auxiliary_common.c | 2 +
drivers/bus/auxiliary/linux/auxiliary.c | 9 +
drivers/bus/auxiliary/rte_bus_auxiliary.h | 2 +-
drivers/bus/dpaa/dpaa_bus.c | 26 +-
drivers/bus/dpaa/rte_dpaa_bus.h | 2 +-
drivers/bus/fslmc/fslmc_bus.c | 15 +-
drivers/bus/fslmc/fslmc_vfio.c | 32 +-
drivers/bus/fslmc/portal/dpaa2_hw_dpio.c | 19 +-
drivers/bus/fslmc/portal/dpaa2_hw_pvt.h | 2 +-
drivers/bus/fslmc/rte_fslmc.h | 2 +-
drivers/bus/ifpga/ifpga_bus.c | 14 +-
drivers/bus/ifpga/rte_bus_ifpga.h | 2 +-
drivers/bus/pci/bsd/pci.c | 21 +-
drivers/bus/pci/linux/pci.c | 4 +-
drivers/bus/pci/linux/pci_uio.c | 73 +-
drivers/bus/pci/linux/pci_vfio.c | 115 +++-
drivers/bus/pci/pci_common.c | 27 +-
drivers/bus/pci/pci_common_uio.c | 21 +-
drivers/bus/pci/rte_bus_pci.h | 4 +-
drivers/bus/vmbus/linux/vmbus_bus.c | 5 +
drivers/bus/vmbus/linux/vmbus_uio.c | 37 +-
drivers/bus/vmbus/rte_bus_vmbus.h | 2 +-
drivers/bus/vmbus/vmbus_common_uio.c | 24 +-
drivers/common/cnxk/roc_cpt.c | 8 +-
drivers/common/cnxk/roc_dev.c | 14 +-
drivers/common/cnxk/roc_irq.c | 108 +--
drivers/common/cnxk/roc_nix_inl_dev_irq.c | 8 +-
drivers/common/cnxk/roc_nix_irq.c | 36 +-
drivers/common/cnxk/roc_npa.c | 2 +-
drivers/common/cnxk/roc_platform.h | 49 +-
drivers/common/cnxk/roc_sso.c | 4 +-
drivers/common/cnxk/roc_tim.c | 4 +-
drivers/common/octeontx2/otx2_dev.c | 14 +-
drivers/common/octeontx2/otx2_irq.c | 117 ++--
.../octeontx2/otx2_cryptodev_hw_access.c | 4 +-
drivers/event/octeontx2/otx2_evdev_irq.c | 12 +-
drivers/mempool/octeontx2/otx2_mempool.c | 2 +-
drivers/net/atlantic/atl_ethdev.c | 20 +-
drivers/net/avp/avp_ethdev.c | 8 +-
drivers/net/axgbe/axgbe_ethdev.c | 12 +-
drivers/net/axgbe/axgbe_mdio.c | 6 +-
drivers/net/bnx2x/bnx2x_ethdev.c | 10 +-
drivers/net/bnxt/bnxt_ethdev.c | 33 +-
drivers/net/bnxt/bnxt_irq.c | 4 +-
drivers/net/dpaa/dpaa_ethdev.c | 47 +-
drivers/net/dpaa2/dpaa2_ethdev.c | 10 +-
drivers/net/e1000/em_ethdev.c | 23 +-
drivers/net/e1000/igb_ethdev.c | 79 +--
drivers/net/ena/ena_ethdev.c | 35 +-
drivers/net/enic/enic_main.c | 26 +-
drivers/net/failsafe/failsafe.c | 22 +-
drivers/net/failsafe/failsafe_intr.c | 43 +-
drivers/net/failsafe/failsafe_ops.c | 21 +-
drivers/net/failsafe/failsafe_private.h | 2 +-
drivers/net/fm10k/fm10k_ethdev.c | 32 +-
drivers/net/hinic/hinic_pmd_ethdev.c | 10 +-
drivers/net/hns3/hns3_ethdev.c | 57 +-
drivers/net/hns3/hns3_ethdev_vf.c | 64 +-
drivers/net/hns3/hns3_rxtx.c | 2 +-
drivers/net/i40e/i40e_ethdev.c | 53 +-
drivers/net/iavf/iavf_ethdev.c | 42 +-
drivers/net/iavf/iavf_vchnl.c | 4 +-
drivers/net/ice/ice_dcf.c | 10 +-
drivers/net/ice/ice_dcf_ethdev.c | 21 +-
drivers/net/ice/ice_ethdev.c | 49 +-
drivers/net/igc/igc_ethdev.c | 45 +-
drivers/net/ionic/ionic_ethdev.c | 17 +-
drivers/net/ixgbe/ixgbe_ethdev.c | 66 +-
drivers/net/memif/memif_socket.c | 108 ++-
drivers/net/memif/memif_socket.h | 4 +-
drivers/net/memif/rte_eth_memif.c | 59 +-
drivers/net/memif/rte_eth_memif.h | 2 +-
drivers/net/mlx4/mlx4.c | 18 +-
drivers/net/mlx4/mlx4.h | 2 +-
drivers/net/mlx4/mlx4_intr.c | 47 +-
drivers/net/mlx5/linux/mlx5_os.c | 51 +-
drivers/net/mlx5/linux/mlx5_socket.c | 24 +-
drivers/net/mlx5/mlx5.h | 6 +-
drivers/net/mlx5/mlx5_rxq.c | 42 +-
drivers/net/mlx5/mlx5_trigger.c | 4 +-
drivers/net/mlx5/mlx5_txpp.c | 25 +-
drivers/net/netvsc/hn_ethdev.c | 4 +-
drivers/net/nfp/nfp_common.c | 34 +-
drivers/net/nfp/nfp_ethdev.c | 13 +-
drivers/net/nfp/nfp_ethdev_vf.c | 13 +-
drivers/net/ngbe/ngbe_ethdev.c | 29 +-
drivers/net/octeontx2/otx2_ethdev_irq.c | 35 +-
drivers/net/qede/qede_ethdev.c | 16 +-
drivers/net/sfc/sfc_intr.c | 30 +-
drivers/net/tap/rte_eth_tap.c | 35 +-
drivers/net/tap/rte_eth_tap.h | 2 +-
drivers/net/tap/tap_intr.c | 32 +-
drivers/net/thunderx/nicvf_ethdev.c | 11 +
drivers/net/thunderx/nicvf_struct.h | 2 +-
drivers/net/txgbe/txgbe_ethdev.c | 34 +-
drivers/net/txgbe/txgbe_ethdev_vf.c | 33 +-
drivers/net/vhost/rte_eth_vhost.c | 75 +-
drivers/net/virtio/virtio_ethdev.c | 21 +-
.../net/virtio/virtio_user/virtio_user_dev.c | 47 +-
drivers/net/vmxnet3/vmxnet3_ethdev.c | 43 +-
drivers/raw/ifpga/ifpga_rawdev.c | 61 +-
drivers/raw/ntb/ntb.c | 9 +-
.../regex/octeontx2/otx2_regexdev_hw_access.c | 4 +-
drivers/vdpa/ifc/ifcvf_vdpa.c | 5 +-
drivers/vdpa/mlx5/mlx5_vdpa.c | 9 +
drivers/vdpa/mlx5/mlx5_vdpa.h | 4 +-
drivers/vdpa/mlx5/mlx5_vdpa_event.c | 22 +-
drivers/vdpa/mlx5/mlx5_vdpa_virtq.c | 44 +-
lib/bbdev/rte_bbdev.c | 4 +-
lib/eal/common/eal_common_interrupts.c | 586 ++++++++++++++++
lib/eal/common/eal_private.h | 11 +
lib/eal/common/malloc_heap.c | 16 +-
lib/eal/common/malloc_heap.h | 3 +
lib/eal/common/meson.build | 1 +
lib/eal/freebsd/eal.c | 1 +
lib/eal/freebsd/eal_alarm.c | 52 +-
lib/eal/freebsd/eal_interrupts.c | 92 ++-
lib/eal/include/meson.build | 2 +-
lib/eal/include/rte_eal_interrupts.h | 269 --------
lib/eal/include/rte_eal_trace.h | 24 +-
lib/eal/include/rte_epoll.h | 118 ++++
lib/eal/include/rte_interrupts.h | 650 +++++++++++++++++-
lib/eal/linux/eal.c | 1 +
lib/eal/linux/eal_alarm.c | 37 +-
lib/eal/linux/eal_dev.c | 63 +-
lib/eal/linux/eal_interrupts.c | 287 +++++---
lib/eal/version.map | 47 +-
lib/ethdev/ethdev_pci.h | 2 +-
lib/ethdev/rte_ethdev.c | 14 +-
134 files changed, 3568 insertions(+), 1709 deletions(-)
create mode 100644 lib/eal/common/eal_common_interrupts.c
delete mode 100644 lib/eal/include/rte_eal_interrupts.h
create mode 100644 lib/eal/include/rte_epoll.h
--
2.18.0
^ permalink raw reply [flat|nested] 152+ messages in thread
* [dpdk-dev] [PATCH v3 1/7] malloc: introduce malloc is ready API
2021-10-18 19:37 ` [dpdk-dev] [PATCH v3 " Harman Kalra
@ 2021-10-18 19:37 ` Harman Kalra
2021-10-19 15:53 ` Thomas Monjalon
2021-10-18 19:37 ` [dpdk-dev] [PATCH v3 2/7] eal/interrupts: implement get set APIs Harman Kalra
` (5 subsequent siblings)
6 siblings, 1 reply; 152+ messages in thread
From: Harman Kalra @ 2021-10-18 19:37 UTC (permalink / raw)
To: dev, Anatoly Burakov
Cc: david.marchand, dmitry.kozliuk, mdr, thomas, Harman Kalra
Implementing a new API to report whether the DPDK memory management
APIs are initialized.
One use case of this API is interrupt instance allocation: if the
malloc APIs are ready, memory for interrupt handles should be
allocated via the rte_malloc_* APIs, else glibc malloc APIs are used.
E.g. the alarm subsystem is initialised before the DPDK memory infra
is set up and it allocates an interrupt handle.
Signed-off-by: Harman Kalra <hkalra@marvell.com>
---
lib/eal/common/malloc_heap.c | 16 +++++++++++++++-
lib/eal/common/malloc_heap.h | 3 +++
2 files changed, 18 insertions(+), 1 deletion(-)
diff --git a/lib/eal/common/malloc_heap.c b/lib/eal/common/malloc_heap.c
index ee400f38ec..4d649e3e5c 100644
--- a/lib/eal/common/malloc_heap.c
+++ b/lib/eal/common/malloc_heap.c
@@ -36,6 +36,8 @@
#define CONST_MAX(a, b) (a > b ? a : b) /* RTE_MAX is not a constant */
#define EXTERNAL_HEAP_MIN_SOCKET_ID (CONST_MAX((1 << 8), RTE_MAX_NUMA_NODES))
+static bool malloc_ready;
+
static unsigned
check_hugepage_sz(unsigned flags, uint64_t hugepage_sz)
{
@@ -1328,6 +1330,7 @@ rte_eal_malloc_heap_init(void)
{
struct rte_mem_config *mcfg = rte_eal_get_configuration()->mem_config;
unsigned int i;
+ int ret;
const struct internal_config *internal_conf =
eal_get_internal_configuration();
@@ -1369,5 +1372,16 @@ rte_eal_malloc_heap_init(void)
return 0;
/* add all IOVA-contiguous areas to the heap */
- return rte_memseg_contig_walk(malloc_add_seg, NULL);
+ ret = rte_memseg_contig_walk(malloc_add_seg, NULL);
+
+ if (!ret)
+ malloc_ready = true;
+
+ return ret;
+}
+
+bool
+rte_malloc_is_ready(void)
+{
+ return malloc_ready == true;
}
diff --git a/lib/eal/common/malloc_heap.h b/lib/eal/common/malloc_heap.h
index 3a6ec6ecf0..f55d408492 100644
--- a/lib/eal/common/malloc_heap.h
+++ b/lib/eal/common/malloc_heap.h
@@ -96,4 +96,7 @@ malloc_socket_to_heap_id(unsigned int socket_id);
int
rte_eal_malloc_heap_init(void);
+bool
+rte_malloc_is_ready(void);
+
#endif /* MALLOC_HEAP_H_ */
--
2.18.0
^ permalink raw reply [flat|nested] 152+ messages in thread
* [dpdk-dev] [PATCH v3 2/7] eal/interrupts: implement get set APIs
2021-10-18 19:37 ` [dpdk-dev] [PATCH v3 " Harman Kalra
2021-10-18 19:37 ` [dpdk-dev] [PATCH v3 1/7] malloc: introduce malloc is ready API Harman Kalra
@ 2021-10-18 19:37 ` Harman Kalra
2021-10-18 22:07 ` Dmitry Kozlyuk
2021-10-18 22:56 ` [dpdk-dev] " Stephen Hemminger
2021-10-18 19:37 ` [dpdk-dev] [PATCH v3 3/7] eal/interrupts: avoid direct access to interrupt handle Harman Kalra
` (4 subsequent siblings)
6 siblings, 2 replies; 152+ messages in thread
From: Harman Kalra @ 2021-10-18 19:37 UTC (permalink / raw)
To: dev, Thomas Monjalon, Harman Kalra, Ray Kinsella
Cc: david.marchand, dmitry.kozliuk
Prototype/implement get/set APIs for interrupt handle fields.
Users won't be able to access any of the interrupt handle fields
directly but should use these get/set APIs to access/manipulate
them.
The internal interrupt header, i.e. rte_eal_interrupts.h, is rearranged:
the APIs defined there are moved to rte_interrupts.h and epoll-specific
definitions are moved to a new header, rte_epoll.h.
Later in the series rte_eal_interrupts.h will be removed.
Signed-off-by: Harman Kalra <hkalra@marvell.com>
Acked-by: Ray Kinsella <mdr@ashroe.eu>
---
MAINTAINERS | 1 +
lib/eal/common/eal_common_interrupts.c | 410 ++++++++++++++++
lib/eal/common/meson.build | 1 +
lib/eal/include/meson.build | 1 +
lib/eal/include/rte_eal_interrupts.h | 209 +--------
lib/eal/include/rte_epoll.h | 118 +++++
lib/eal/include/rte_interrupts.h | 622 ++++++++++++++++++++++++-
lib/eal/version.map | 47 +-
8 files changed, 1195 insertions(+), 214 deletions(-)
create mode 100644 lib/eal/common/eal_common_interrupts.c
create mode 100644 lib/eal/include/rte_epoll.h
diff --git a/MAINTAINERS b/MAINTAINERS
index 8dceb6c0e0..3782e88742 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -210,6 +210,7 @@ F: app/test/test_memzone.c
Interrupt Subsystem
M: Harman Kalra <hkalra@marvell.com>
+F: lib/eal/include/rte_epoll.h
F: lib/eal/*/*interrupts.*
F: app/test/test_interrupts.c
diff --git a/lib/eal/common/eal_common_interrupts.c b/lib/eal/common/eal_common_interrupts.c
new file mode 100644
index 0000000000..90e9c70ca3
--- /dev/null
+++ b/lib/eal/common/eal_common_interrupts.c
@@ -0,0 +1,410 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2021 Marvell.
+ */
+
+#include <stdlib.h>
+#include <string.h>
+
+#include <rte_errno.h>
+#include <rte_log.h>
+#include <rte_malloc.h>
+#include <rte_interrupts.h>
+
+#include <malloc_heap.h>
+
+/* Macro to check for a valid interrupt handle */
+#define CHECK_VALID_INTR_HANDLE(intr_handle) do { \
+ if (intr_handle == NULL) { \
+ RTE_LOG(ERR, EAL, "Interrupt instance unallocated\n"); \
+ rte_errno = EINVAL; \
+ goto fail; \
+ } \
+} while (0)
+
+struct rte_intr_handle *rte_intr_instance_alloc(void)
+{
+ struct rte_intr_handle *intr_handle;
+ bool mem_allocator;
+
+ /* Detect if DPDK malloc APIs are ready to be used. */
+ mem_allocator = rte_malloc_is_ready();
+ if (mem_allocator)
+ intr_handle = rte_zmalloc(NULL, sizeof(struct rte_intr_handle),
+ 0);
+ else
+ intr_handle = calloc(1, sizeof(struct rte_intr_handle));
+ if (!intr_handle) {
+ RTE_LOG(ERR, EAL, "Fail to allocate intr_handle\n");
+ rte_errno = ENOMEM;
+ return NULL;
+ }
+
+ intr_handle->nb_intr = RTE_MAX_RXTX_INTR_VEC_ID;
+ intr_handle->mem_allocator = mem_allocator;
+
+ return intr_handle;
+}
+
+int rte_intr_instance_copy(struct rte_intr_handle *intr_handle,
+ const struct rte_intr_handle *src)
+{
+ uint16_t nb_intr;
+
+ CHECK_VALID_INTR_HANDLE(intr_handle);
+
+ if (src == NULL) {
+ RTE_LOG(ERR, EAL, "Source interrupt instance unallocated\n");
+ rte_errno = EINVAL;
+ goto fail;
+ }
+
+ intr_handle->fd = src->fd;
+ intr_handle->vfio_dev_fd = src->vfio_dev_fd;
+ intr_handle->type = src->type;
+ intr_handle->max_intr = src->max_intr;
+ intr_handle->nb_efd = src->nb_efd;
+ intr_handle->efd_counter_size = src->efd_counter_size;
+
+ nb_intr = RTE_MIN(src->nb_intr, intr_handle->nb_intr);
+ memcpy(intr_handle->efds, src->efds, nb_intr);
+ memcpy(intr_handle->elist, src->elist, nb_intr);
+
+ return 0;
+fail:
+ return -rte_errno;
+}
+
+void rte_intr_instance_free(struct rte_intr_handle *intr_handle)
+{
+ if (intr_handle->mem_allocator)
+ rte_free(intr_handle);
+ else
+ free(intr_handle);
+}
+
+int rte_intr_fd_set(struct rte_intr_handle *intr_handle, int fd)
+{
+ CHECK_VALID_INTR_HANDLE(intr_handle);
+
+ intr_handle->fd = fd;
+
+ return 0;
+fail:
+ return -rte_errno;
+}
+
+int rte_intr_fd_get(const struct rte_intr_handle *intr_handle)
+{
+ CHECK_VALID_INTR_HANDLE(intr_handle);
+
+ return intr_handle->fd;
+fail:
+ return -1;
+}
+
+int rte_intr_type_set(struct rte_intr_handle *intr_handle,
+ enum rte_intr_handle_type type)
+{
+ CHECK_VALID_INTR_HANDLE(intr_handle);
+
+ intr_handle->type = type;
+
+ return 0;
+fail:
+ return -rte_errno;
+}
+
+enum rte_intr_handle_type rte_intr_type_get(
+ const struct rte_intr_handle *intr_handle)
+{
+ CHECK_VALID_INTR_HANDLE(intr_handle);
+
+ return intr_handle->type;
+fail:
+ return RTE_INTR_HANDLE_UNKNOWN;
+}
+
+int rte_intr_dev_fd_set(struct rte_intr_handle *intr_handle, int fd)
+{
+ CHECK_VALID_INTR_HANDLE(intr_handle);
+
+ intr_handle->vfio_dev_fd = fd;
+
+ return 0;
+fail:
+ return -rte_errno;
+}
+
+int rte_intr_dev_fd_get(const struct rte_intr_handle *intr_handle)
+{
+ CHECK_VALID_INTR_HANDLE(intr_handle);
+
+ return intr_handle->vfio_dev_fd;
+fail:
+ return -1;
+}
+
+int rte_intr_max_intr_set(struct rte_intr_handle *intr_handle,
+ int max_intr)
+{
+ CHECK_VALID_INTR_HANDLE(intr_handle);
+
+ if (max_intr > intr_handle->nb_intr) {
+ RTE_LOG(ERR, EAL, "Maximum interrupt vector ID (%d) exceeds "
+ "the number of available events (%d)\n", max_intr,
+ intr_handle->nb_intr);
+ rte_errno = ERANGE;
+ goto fail;
+ }
+
+ intr_handle->max_intr = max_intr;
+
+ return 0;
+fail:
+ return -rte_errno;
+}
+
+int rte_intr_max_intr_get(const struct rte_intr_handle *intr_handle)
+{
+ CHECK_VALID_INTR_HANDLE(intr_handle);
+
+ return intr_handle->max_intr;
+fail:
+ return -rte_errno;
+}
+
+int rte_intr_nb_efd_set(struct rte_intr_handle *intr_handle,
+ int nb_efd)
+{
+ CHECK_VALID_INTR_HANDLE(intr_handle);
+
+ intr_handle->nb_efd = nb_efd;
+
+ return 0;
+fail:
+ return -rte_errno;
+}
+
+int rte_intr_nb_efd_get(const struct rte_intr_handle *intr_handle)
+{
+ CHECK_VALID_INTR_HANDLE(intr_handle);
+
+ return intr_handle->nb_efd;
+fail:
+ return -rte_errno;
+}
+
+int rte_intr_nb_intr_get(const struct rte_intr_handle *intr_handle)
+{
+ CHECK_VALID_INTR_HANDLE(intr_handle);
+
+ return intr_handle->nb_intr;
+fail:
+ return -rte_errno;
+}
+
+int rte_intr_efd_counter_size_set(struct rte_intr_handle *intr_handle,
+ uint8_t efd_counter_size)
+{
+ CHECK_VALID_INTR_HANDLE(intr_handle);
+
+ intr_handle->efd_counter_size = efd_counter_size;
+
+ return 0;
+fail:
+ return -rte_errno;
+}
+
+int rte_intr_efd_counter_size_get(const struct rte_intr_handle *intr_handle)
+{
+ CHECK_VALID_INTR_HANDLE(intr_handle);
+
+ return intr_handle->efd_counter_size;
+fail:
+ return -rte_errno;
+}
+
+int rte_intr_efds_index_get(const struct rte_intr_handle *intr_handle,
+ int index)
+{
+ CHECK_VALID_INTR_HANDLE(intr_handle);
+
+ if (index >= intr_handle->nb_intr) {
+ RTE_LOG(ERR, EAL, "Invalid size %d, max limit %d\n", index,
+ intr_handle->nb_intr);
+ rte_errno = EINVAL;
+ goto fail;
+ }
+
+ return intr_handle->efds[index];
+fail:
+ return -rte_errno;
+}
+
+int rte_intr_efds_index_set(struct rte_intr_handle *intr_handle,
+ int index, int fd)
+{
+ CHECK_VALID_INTR_HANDLE(intr_handle);
+
+ if (index >= intr_handle->nb_intr) {
+ RTE_LOG(ERR, EAL, "Invalid size %d, max limit %d\n", index,
+ intr_handle->nb_intr);
+ rte_errno = ERANGE;
+ goto fail;
+ }
+
+ intr_handle->efds[index] = fd;
+
+ return 0;
+fail:
+ return -rte_errno;
+}
+
+struct rte_epoll_event *rte_intr_elist_index_get(
+ struct rte_intr_handle *intr_handle, int index)
+{
+ CHECK_VALID_INTR_HANDLE(intr_handle);
+
+ if (index >= intr_handle->nb_intr) {
+ RTE_LOG(ERR, EAL, "Invalid size %d, max limit %d\n", index,
+ intr_handle->nb_intr);
+ rte_errno = ERANGE;
+ goto fail;
+ }
+
+ return &intr_handle->elist[index];
+fail:
+ return NULL;
+}
+
+int rte_intr_elist_index_set(struct rte_intr_handle *intr_handle,
+ int index, struct rte_epoll_event elist)
+{
+ CHECK_VALID_INTR_HANDLE(intr_handle);
+
+ if (index >= intr_handle->nb_intr) {
+ RTE_LOG(ERR, EAL, "Invalid size %d, max limit %d\n", index,
+ intr_handle->nb_intr);
+ rte_errno = ERANGE;
+ goto fail;
+ }
+
+ intr_handle->elist[index] = elist;
+
+ return 0;
+fail:
+ return -rte_errno;
+}
+
+int rte_intr_vec_list_alloc(struct rte_intr_handle *intr_handle,
+ const char *name, int size)
+{
+ CHECK_VALID_INTR_HANDLE(intr_handle);
+
+ /* Vector list already allocated */
+ if (intr_handle->intr_vec)
+ return 0;
+
+ if (size > intr_handle->nb_intr) {
+ RTE_LOG(ERR, EAL, "Invalid size %d, max limit %d\n", size,
+ intr_handle->nb_intr);
+ rte_errno = ERANGE;
+ goto fail;
+ }
+
+ intr_handle->intr_vec = rte_zmalloc(name, size * sizeof(int), 0);
+ if (!intr_handle->intr_vec) {
+ RTE_LOG(ERR, EAL, "Failed to allocate %d intr_vec", size);
+ rte_errno = ENOMEM;
+ goto fail;
+ }
+
+ intr_handle->vec_list_size = size;
+
+ return 0;
+fail:
+ return -rte_errno;
+}
+
+int rte_intr_vec_list_index_get(const struct rte_intr_handle *intr_handle,
+ int index)
+{
+ CHECK_VALID_INTR_HANDLE(intr_handle);
+
+ if (!intr_handle->intr_vec) {
+ RTE_LOG(ERR, EAL, "Intr vector list not allocated\n");
+ rte_errno = ENOTSUP;
+ goto fail;
+ }
+
+ if (index >= intr_handle->vec_list_size) {
+ RTE_LOG(ERR, EAL, "Index %d out of range, vec list size %d\n",
+ index, intr_handle->vec_list_size);
+ rte_errno = ERANGE;
+ goto fail;
+ }
+
+ return intr_handle->intr_vec[index];
+fail:
+ return -rte_errno;
+}
+
+int rte_intr_vec_list_index_set(struct rte_intr_handle *intr_handle,
+ int index, int vec)
+{
+ CHECK_VALID_INTR_HANDLE(intr_handle);
+
+ if (!intr_handle->intr_vec) {
+ RTE_LOG(ERR, EAL, "Intr vector list not allocated\n");
+ rte_errno = ENOTSUP;
+ goto fail;
+ }
+
+ if (index >= intr_handle->vec_list_size) {
+ RTE_LOG(ERR, EAL, "Index %d out of range, vec list size %d\n",
+ index, intr_handle->vec_list_size);
+ rte_errno = ERANGE;
+ goto fail;
+ }
+
+ intr_handle->intr_vec[index] = vec;
+
+ return 0;
+fail:
+ return -rte_errno;
+}
+
+void rte_intr_vec_list_free(struct rte_intr_handle *intr_handle)
+{
+ if (intr_handle) {
+ rte_free(intr_handle->intr_vec);
+ intr_handle->intr_vec = NULL;
+ }
+}
+
+void *rte_intr_instance_windows_handle_get(struct rte_intr_handle *intr_handle)
+{
+ CHECK_VALID_INTR_HANDLE(intr_handle);
+
+ return intr_handle->windows_handle;
+fail:
+ return NULL;
+}
+
+int rte_intr_instance_windows_handle_set(struct rte_intr_handle *intr_handle,
+ void *windows_handle)
+{
+ CHECK_VALID_INTR_HANDLE(intr_handle);
+
+ if (!windows_handle) {
+ RTE_LOG(ERR, EAL, "Windows handle should not be NULL\n");
+ rte_errno = EINVAL;
+ goto fail;
+ }
+
+ intr_handle->windows_handle = windows_handle;
+
+ return 0;
+fail:
+ return -rte_errno;
+}
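
For illustration only (not part of this patch): a minimal sketch of how a
driver could allocate and populate an interrupt handle through the accessors
implemented above. The function and fd names are hypothetical.

	#include <rte_errno.h>
	#include <rte_interrupts.h>

	static int example_driver_init(int dev_fd, int irq_fd)
	{
		struct rte_intr_handle *handle;

		handle = rte_intr_instance_alloc();
		if (handle == NULL)
			return -rte_errno;

		if (rte_intr_type_set(handle, RTE_INTR_HANDLE_VFIO_MSIX) != 0 ||
		    rte_intr_fd_set(handle, irq_fd) != 0 ||
		    rte_intr_dev_fd_set(handle, dev_fd) != 0) {
			rte_intr_instance_free(handle);
			return -rte_errno;
		}

		/* ... rte_intr_enable()/rte_intr_disable() as usual ... */

		rte_intr_instance_free(handle);
		return 0;
	}
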
diff --git a/lib/eal/common/meson.build b/lib/eal/common/meson.build
index 6d01b0f072..917758cc65 100644
--- a/lib/eal/common/meson.build
+++ b/lib/eal/common/meson.build
@@ -15,6 +15,7 @@ sources += files(
'eal_common_errno.c',
'eal_common_fbarray.c',
'eal_common_hexdump.c',
+ 'eal_common_interrupts.c',
'eal_common_launch.c',
'eal_common_lcore.c',
'eal_common_log.c',
diff --git a/lib/eal/include/meson.build b/lib/eal/include/meson.build
index 88a9eba12f..8e258607b8 100644
--- a/lib/eal/include/meson.build
+++ b/lib/eal/include/meson.build
@@ -19,6 +19,7 @@ headers += files(
'rte_eal_memconfig.h',
'rte_eal_trace.h',
'rte_errno.h',
+ 'rte_epoll.h',
'rte_fbarray.h',
'rte_hexdump.h',
'rte_hypervisor.h',
diff --git a/lib/eal/include/rte_eal_interrupts.h b/lib/eal/include/rte_eal_interrupts.h
index 00bcc19b6d..6764ba3f35 100644
--- a/lib/eal/include/rte_eal_interrupts.h
+++ b/lib/eal/include/rte_eal_interrupts.h
@@ -39,32 +39,6 @@ enum rte_intr_handle_type {
RTE_INTR_HANDLE_MAX /**< count of elements */
};
-#define RTE_INTR_EVENT_ADD 1UL
-#define RTE_INTR_EVENT_DEL 2UL
-
-typedef void (*rte_intr_event_cb_t)(int fd, void *arg);
-
-struct rte_epoll_data {
- uint32_t event; /**< event type */
- void *data; /**< User data */
- rte_intr_event_cb_t cb_fun; /**< IN: callback fun */
- void *cb_arg; /**< IN: callback arg */
-};
-
-enum {
- RTE_EPOLL_INVALID = 0,
- RTE_EPOLL_VALID,
- RTE_EPOLL_EXEC,
-};
-
-/** interrupt epoll event obj, taken by epoll_event.ptr */
-struct rte_epoll_event {
- uint32_t status; /**< OUT: event status */
- int fd; /**< OUT: event fd */
- int epfd; /**< OUT: epoll instance the ev associated with */
- struct rte_epoll_data epdata;
-};
-
/** Handle for interrupts. */
struct rte_intr_handle {
RTE_STD_C11
@@ -79,191 +53,20 @@ struct rte_intr_handle {
};
int fd; /**< interrupt event file descriptor */
};
- void *handle; /**< device driver handle (Windows) */
+ void *windows_handle; /**< device driver handle (Windows) */
};
+ bool mem_allocator;
enum rte_intr_handle_type type; /**< handle type */
uint32_t max_intr; /**< max interrupt requested */
uint32_t nb_efd; /**< number of available efd(event fd) */
uint8_t efd_counter_size; /**< size of efd counter, used for vdev */
+ uint16_t nb_intr;
+ /**< Max vector count, default RTE_MAX_RXTX_INTR_VEC_ID */
int efds[RTE_MAX_RXTX_INTR_VEC_ID]; /**< intr vectors/efds mapping */
struct rte_epoll_event elist[RTE_MAX_RXTX_INTR_VEC_ID];
- /**< intr vector epoll event */
+ /**< intr vector epoll event */
+ uint16_t vec_list_size;
int *intr_vec; /**< intr vector number array */
};
-#define RTE_EPOLL_PER_THREAD -1 /**< to hint using per thread epfd */
-
-/**
- * It waits for events on the epoll instance.
- * Retries if signal received.
- *
- * @param epfd
- * Epoll instance fd on which the caller wait for events.
- * @param events
- * Memory area contains the events that will be available for the caller.
- * @param maxevents
- * Up to maxevents are returned, must greater than zero.
- * @param timeout
- * Specifying a timeout of -1 causes a block indefinitely.
- * Specifying a timeout equal to zero cause to return immediately.
- * @return
- * - On success, returns the number of available event.
- * - On failure, a negative value.
- */
-int
-rte_epoll_wait(int epfd, struct rte_epoll_event *events,
- int maxevents, int timeout);
-
-/**
- * It waits for events on the epoll instance.
- * Does not retry if signal received.
- *
- * @param epfd
- * Epoll instance fd on which the caller wait for events.
- * @param events
- * Memory area contains the events that will be available for the caller.
- * @param maxevents
- * Up to maxevents are returned, must greater than zero.
- * @param timeout
- * Specifying a timeout of -1 causes a block indefinitely.
- * Specifying a timeout equal to zero cause to return immediately.
- * @return
- * - On success, returns the number of available event.
- * - On failure, a negative value.
- */
-__rte_experimental
-int
-rte_epoll_wait_interruptible(int epfd, struct rte_epoll_event *events,
- int maxevents, int timeout);
-
-/**
- * It performs control operations on epoll instance referred by the epfd.
- * It requests that the operation op be performed for the target fd.
- *
- * @param epfd
- * Epoll instance fd on which the caller perform control operations.
- * @param op
- * The operation be performed for the target fd.
- * @param fd
- * The target fd on which the control ops perform.
- * @param event
- * Describes the object linked to the fd.
- * Note: The caller must take care the object deletion after CTL_DEL.
- * @return
- * - On success, zero.
- * - On failure, a negative value.
- */
-int
-rte_epoll_ctl(int epfd, int op, int fd,
- struct rte_epoll_event *event);
-
-/**
- * The function returns the per thread epoll instance.
- *
- * @return
- * epfd the epoll instance referred to.
- */
-int
-rte_intr_tls_epfd(void);
-
-/**
- * @param intr_handle
- * Pointer to the interrupt handle.
- * @param epfd
- * Epoll instance fd which the intr vector associated to.
- * @param op
- * The operation be performed for the vector.
- * Operation type of {ADD, DEL}.
- * @param vec
- * RX intr vector number added to the epoll instance wait list.
- * @param data
- * User raw data.
- * @return
- * - On success, zero.
- * - On failure, a negative value.
- */
-int
-rte_intr_rx_ctl(struct rte_intr_handle *intr_handle,
- int epfd, int op, unsigned int vec, void *data);
-
-/**
- * It deletes registered eventfds.
- *
- * @param intr_handle
- * Pointer to the interrupt handle.
- */
-void
-rte_intr_free_epoll_fd(struct rte_intr_handle *intr_handle);
-
-/**
- * It enables the packet I/O interrupt event if it's necessary.
- * It creates event fd for each interrupt vector when MSIX is used,
- * otherwise it multiplexes a single event fd.
- *
- * @param intr_handle
- * Pointer to the interrupt handle.
- * @param nb_efd
- * Number of interrupt vector trying to enable.
- * The value 0 is not allowed.
- * @return
- * - On success, zero.
- * - On failure, a negative value.
- */
-int
-rte_intr_efd_enable(struct rte_intr_handle *intr_handle, uint32_t nb_efd);
-
-/**
- * It disables the packet I/O interrupt event.
- * It deletes registered eventfds and closes the open fds.
- *
- * @param intr_handle
- * Pointer to the interrupt handle.
- */
-void
-rte_intr_efd_disable(struct rte_intr_handle *intr_handle);
-
-/**
- * The packet I/O interrupt on datapath is enabled or not.
- *
- * @param intr_handle
- * Pointer to the interrupt handle.
- */
-int
-rte_intr_dp_is_en(struct rte_intr_handle *intr_handle);
-
-/**
- * The interrupt handle instance allows other causes or not.
- * Other causes stand for any none packet I/O interrupts.
- *
- * @param intr_handle
- * Pointer to the interrupt handle.
- */
-int
-rte_intr_allow_others(struct rte_intr_handle *intr_handle);
-
-/**
- * The multiple interrupt vector capability of interrupt handle instance.
- * It returns zero if no multiple interrupt vector support.
- *
- * @param intr_handle
- * Pointer to the interrupt handle.
- */
-int
-rte_intr_cap_multiple(struct rte_intr_handle *intr_handle);
-
-/**
- * @warning
- * @b EXPERIMENTAL: this API may change without prior notice
- *
- * @internal
- * Check if currently executing in interrupt context
- *
- * @return
- * - non zero in case of interrupt context
- * - zero in case of process context
- */
-__rte_experimental
-int
-rte_thread_is_intr(void);
-
#endif /* _RTE_EAL_INTERRUPTS_H_ */
diff --git a/lib/eal/include/rte_epoll.h b/lib/eal/include/rte_epoll.h
new file mode 100644
index 0000000000..56b7b6bad6
--- /dev/null
+++ b/lib/eal/include/rte_epoll.h
@@ -0,0 +1,118 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2021 Marvell International Ltd.
+ */
+
+#ifndef __RTE_EPOLL_H__
+#define __RTE_EPOLL_H__
+
+/**
+ * @file
+ * The rte_epoll provides interfaces to add and delete epoll events,
+ * and to wait for/poll events.
+ */
+
+#include <stdint.h>
+
+#include <rte_compat.h>
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+#define RTE_INTR_EVENT_ADD 1UL
+#define RTE_INTR_EVENT_DEL 2UL
+
+typedef void (*rte_intr_event_cb_t)(int fd, void *arg);
+
+struct rte_epoll_data {
+ uint32_t event; /**< event type */
+ void *data; /**< User data */
+ rte_intr_event_cb_t cb_fun; /**< IN: callback fun */
+ void *cb_arg; /**< IN: callback arg */
+};
+
+enum {
+ RTE_EPOLL_INVALID = 0,
+ RTE_EPOLL_VALID,
+ RTE_EPOLL_EXEC,
+};
+
+/** interrupt epoll event obj, taken by epoll_event.ptr */
+struct rte_epoll_event {
+ uint32_t status; /**< OUT: event status */
+ int fd; /**< OUT: event fd */
+ int epfd; /**< OUT: epoll instance the ev associated with */
+ struct rte_epoll_data epdata;
+};
+
+#define RTE_EPOLL_PER_THREAD -1 /**< to hint using per thread epfd */
+
+/**
+ * It waits for events on the epoll instance.
+ * Retries if a signal is received.
+ *
+ * @param epfd
+ * Epoll instance fd on which the caller waits for events.
+ * @param events
+ * Memory area that will contain the events available to the caller.
+ * @param maxevents
+ * Up to maxevents are returned; must be greater than zero.
+ * @param timeout
+ * A timeout of -1 causes the call to block indefinitely.
+ * A timeout of zero causes it to return immediately.
+ * @return
+ * - On success, the number of available events.
+ * - On failure, a negative value.
+ */
+int
+rte_epoll_wait(int epfd, struct rte_epoll_event *events,
+ int maxevents, int timeout);
+
+/**
+ * It waits for events on the epoll instance.
+ * Does not retry if a signal is received.
+ *
+ * @param epfd
+ * Epoll instance fd on which the caller waits for events.
+ * @param events
+ * Memory area that will contain the events available to the caller.
+ * @param maxevents
+ * Up to maxevents are returned; must be greater than zero.
+ * @param timeout
+ * A timeout of -1 causes the call to block indefinitely.
+ * A timeout of zero causes it to return immediately.
+ * @return
+ * - On success, the number of available events.
+ * - On failure, a negative value.
+ */
+__rte_experimental
+int
+rte_epoll_wait_interruptible(int epfd, struct rte_epoll_event *events,
+ int maxevents, int timeout);
+
+/**
+ * It performs control operations on the epoll instance referred to by epfd.
+ * It requests that the operation op be performed on the target fd.
+ *
+ * @param epfd
+ * Epoll instance fd on which the caller performs control operations.
+ * @param op
+ * The operation to be performed on the target fd.
+ * @param fd
+ * The target fd on which the control operation is performed.
+ * @param event
+ * Describes the object linked to the fd.
+ * Note: The caller must take care of the object deletion after CTL_DEL.
+ * @return
+ * - On success, zero.
+ * - On failure, a negative value.
+ */
+int
+rte_epoll_ctl(int epfd, int op, int fd,
+ struct rte_epoll_event *event);
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif /* __RTE_EPOLL_H__ */
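
A quick illustration (not part of this patch) of the wait path exposed by this
header, assuming the interrupt vectors have already been registered on the
per-thread epoll instance:

	#include <stdio.h>
	#include <rte_epoll.h>

	static void example_wait_for_events(void)
	{
		struct rte_epoll_event ev[8];
		int i, n;

		/* RTE_EPOLL_PER_THREAD selects the calling thread's
		 * epoll instance.
		 */
		n = rte_epoll_wait(RTE_EPOLL_PER_THREAD, ev, 8, -1);
		for (i = 0; i < n; i++)
			printf("event on fd %d\n", ev[i].fd);
	}
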
diff --git a/lib/eal/include/rte_interrupts.h b/lib/eal/include/rte_interrupts.h
index cc3bf45d8c..98edf774af 100644
--- a/lib/eal/include/rte_interrupts.h
+++ b/lib/eal/include/rte_interrupts.h
@@ -5,8 +5,11 @@
#ifndef _RTE_INTERRUPTS_H_
#define _RTE_INTERRUPTS_H_
+#include <stdbool.h>
+
#include <rte_common.h>
#include <rte_compat.h>
+#include <rte_epoll.h>
/**
* @file
@@ -22,6 +25,8 @@ extern "C" {
/** Interrupt handle */
struct rte_intr_handle;
+#include "rte_eal_interrupts.h"
+
/** Function to be registered for the specific interrupt */
typedef void (*rte_intr_callback_fn)(void *cb_arg);
@@ -32,8 +37,6 @@ typedef void (*rte_intr_callback_fn)(void *cb_arg);
typedef void (*rte_intr_unregister_callback_fn)(struct rte_intr_handle *intr_handle,
void *cb_arg);
-#include "rte_eal_interrupts.h"
-
/**
* It registers the callback for the specific interrupt. Multiple
* callbacks can be registered at the same time.
@@ -163,6 +166,621 @@ int rte_intr_disable(const struct rte_intr_handle *intr_handle);
__rte_experimental
int rte_intr_ack(const struct rte_intr_handle *intr_handle);
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Check if currently executing in interrupt context
+ *
+ * @return
+ * - non zero in case of interrupt context
+ * - zero in case of process context
+ */
+__rte_experimental
+int
+rte_thread_is_intr(void);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * It allocates memory for an interrupt instance. Memory is taken from the
+ * DPDK memory management library once it is ready, and from the normal heap
+ * otherwise.
+ * The event fd and event list arrays are allocated with a default size and
+ * can be reallocated later as per the requirement.
+ *
+ * This function should be called by the application or driver before calling
+ * any of the interrupt APIs.
+ *
+ * @return
+ * - On success, address of the allocated interrupt handle instance.
+ * - On failure, NULL.
+ */
+__rte_experimental
+struct rte_intr_handle *
+rte_intr_instance_alloc(void);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * This API is used to free the memory allocated for the interrupt handle
+ * instance, including its event fd and event list arrays.
+ *
+ * @param intr_handle
+ * Interrupt handle instance to be freed.
+ *
+ */
+__rte_experimental
+void
+rte_intr_instance_free(struct rte_intr_handle *intr_handle);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * This API is used to set the fd field of interrupt handle with user provided
+ * file descriptor.
+ *
+ * @param intr_handle
+ * pointer to the interrupt handle.
+ * @param fd
+ * file descriptor value provided by user.
+ *
+ * @return
+ * - On success, zero.
+ * - On failure, a negative value.
+ */
+__rte_experimental
+int
+rte_intr_fd_set(struct rte_intr_handle *intr_handle, int fd);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Returns the fd field of the given interrupt handle instance.
+ *
+ * @param intr_handle
+ * pointer to the interrupt handle.
+ *
+ * @return
+ * - On success, fd field.
+ * - On failure, a negative value.
+ */
+__rte_experimental
+int
+rte_intr_fd_get(const struct rte_intr_handle *intr_handle);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * This API is used to set the type field of interrupt handle with user provided
+ * interrupt type.
+ *
+ * @param intr_handle
+ * pointer to the interrupt handle.
+ * @param type
+ * interrupt type
+ *
+ * @return
+ * - On success, zero.
+ * - On failure, a negative value.
+ */
+__rte_experimental
+int
+rte_intr_type_set(struct rte_intr_handle *intr_handle,
+ enum rte_intr_handle_type type);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Returns the type field of the given interrupt handle instance.
+ *
+ * @param intr_handle
+ * pointer to the interrupt handle.
+ *
+ * @return
+ * - On success, interrupt type
+ * - On failure, RTE_INTR_HANDLE_UNKNOWN.
+ */
+__rte_experimental
+enum rte_intr_handle_type
+rte_intr_type_get(const struct rte_intr_handle *intr_handle);
+
+/**
+ * @internal
+ * The function returns the per thread epoll instance.
+ *
+ * @return
+ * epfd the epoll instance referred to.
+ */
+__rte_internal
+int
+rte_intr_tls_epfd(void);
+
+/**
+ * @internal
+ * @param intr_handle
+ * Pointer to the interrupt handle.
+ * @param epfd
+ * Epoll instance fd which the intr vector associated to.
+ * @param op
+ * The operation be performed for the vector.
+ * Operation type of {ADD, DEL}.
+ * @param vec
+ * RX intr vector number added to the epoll instance wait list.
+ * @param data
+ * User raw data.
+ * @return
+ * - On success, zero.
+ * - On failure, a negative value.
+ */
+__rte_internal
+int
+rte_intr_rx_ctl(struct rte_intr_handle *intr_handle,
+ int epfd, int op, unsigned int vec, void *data);
+
+/**
+ * @internal
+ * It deletes registered eventfds.
+ *
+ * @param intr_handle
+ * Pointer to the interrupt handle.
+ */
+__rte_internal
+void
+rte_intr_free_epoll_fd(struct rte_intr_handle *intr_handle);
+
+/**
+ * @internal
+ * It enables the packet I/O interrupt event if it's necessary.
+ * It creates event fd for each interrupt vector when MSIX is used,
+ * otherwise it multiplexes a single event fd.
+ *
+ * @param intr_handle
+ * Pointer to the interrupt handle.
+ * @param nb_efd
+ * Number of interrupt vector trying to enable.
+ * The value 0 is not allowed.
+ * @return
+ * - On success, zero.
+ * - On failure, a negative value.
+ */
+__rte_internal
+int
+rte_intr_efd_enable(struct rte_intr_handle *intr_handle, uint32_t nb_efd);
+
+/**
+ * @internal
+ * It disables the packet I/O interrupt event.
+ * It deletes registered eventfds and closes the open fds.
+ *
+ * @param intr_handle
+ * Pointer to the interrupt handle.
+ */
+__rte_internal
+void
+rte_intr_efd_disable(struct rte_intr_handle *intr_handle);
+
+/**
+ * @internal
+ * The packet I/O interrupt on datapath is enabled or not.
+ *
+ * @param intr_handle
+ * Pointer to the interrupt handle.
+ */
+__rte_internal
+int
+rte_intr_dp_is_en(struct rte_intr_handle *intr_handle);
+
+/**
+ * @internal
+ * The interrupt handle instance allows other causes or not.
+ * Other causes stand for any none packet I/O interrupts.
+ *
+ * @param intr_handle
+ * Pointer to the interrupt handle.
+ */
+__rte_internal
+int
+rte_intr_allow_others(struct rte_intr_handle *intr_handle);
+
+/**
+ * @internal
+ * The multiple interrupt vector capability of interrupt handle instance.
+ * It returns zero if no multiple interrupt vector support.
+ *
+ * @param intr_handle
+ * Pointer to the interrupt handle.
+ */
+__rte_internal
+int
+rte_intr_cap_multiple(struct rte_intr_handle *intr_handle);
+
+/**
+ * @internal
+ * This API is used to populate the interrupt handle with the source handle
+ * fields.
+ *
+ * @param intr_handle
+ * Destination interrupt handle to be populated.
+ * @param src
+ * Source interrupt handle to be cloned.
+ *
+ * @return
+ * - On success, zero.
+ * - On failure, a negative value.
+ */
+__rte_internal
+int
+rte_intr_instance_copy(struct rte_intr_handle *intr_handle,
+ const struct rte_intr_handle *src);
+
+/**
+ * @internal
+ * This API is used to set the device fd field of interrupt handle with user
+ * provided dev fd. Device fd corresponds to VFIO device fd or UIO config fd.
+ *
+ * @param intr_handle
+ * pointer to the interrupt handle.
+ * @param fd
+ * device file descriptor (VFIO device fd or UIO config fd).
+ *
+ * @return
+ * - On success, zero.
+ * - On failure, a negative value.
+ */
+__rte_internal
+int
+rte_intr_dev_fd_set(struct rte_intr_handle *intr_handle, int fd);
+
+/**
+ * @internal
+ * Returns the device fd field of the given interrupt handle instance.
+ *
+ * @param intr_handle
+ * pointer to the interrupt handle.
+ *
+ * @return
+ * - On success, dev fd.
+ * - On failure, a negative value.
+ */
+__rte_internal
+int
+rte_intr_dev_fd_get(const struct rte_intr_handle *intr_handle);
+
+/**
+ * @internal
+ * This API is used to set the max intr field of interrupt handle with user
+ * provided max intr value.
+ *
+ * @param intr_handle
+ * pointer to the interrupt handle.
+ * @param max_intr
+ * maximum number of interrupt vectors requested.
+ *
+ * @return
+ * - On success, zero.
+ * - On failure, a negative value.
+ */
+__rte_internal
+int
+rte_intr_max_intr_set(struct rte_intr_handle *intr_handle, int max_intr);
+
+/**
+ * @internal
+ * Returns the max intr field of the given interrupt handle instance.
+ *
+ * @param intr_handle
+ * pointer to the interrupt handle.
+ *
+ * @return
+ * - On success, max intr.
+ * - On failure, a negative value.
+ */
+__rte_internal
+int
+rte_intr_max_intr_get(const struct rte_intr_handle *intr_handle);
+
+/**
+ * @internal
+ * This API is used to set the number of event fds field of the interrupt
+ * handle with a user provided value.
+ *
+ * @param intr_handle
+ * pointer to the interrupt handle.
+ * @param nb_efd
+ * Number of available event fds.
+ *
+ * @return
+ * - On success, zero.
+ * - On failure, a negative value.
+ */
+__rte_internal
+int
+rte_intr_nb_efd_set(struct rte_intr_handle *intr_handle, int nb_efd);
+
+/**
+ * @internal
+ * Returns the number of available event fds of the given interrupt handle
+ * instance.
+ *
+ * @param intr_handle
+ * pointer to the interrupt handle.
+ *
+ * @return
+ * - On success, nb_efd
+ * - On failure, a negative value.
+ */
+__rte_internal
+int
+rte_intr_nb_efd_get(const struct rte_intr_handle *intr_handle);
+
+/**
+ * @internal
+ * Returns the number of interrupt vectors (nb_intr) of the given interrupt
+ * handle instance. This field is configured at device probe time, and the
+ * efds and elist arrays are dynamically allocated based on it. By default
+ * this value is set to RTE_MAX_RXTX_INTR_VEC_ID.
+ * For example, for a PCI device the MSI-X size is queried and the efds/elist
+ * arrays are allocated accordingly.
+ *
+ * @param intr_handle
+ * pointer to the interrupt handle.
+ *
+ * @return
+ * - On success, nb_intr
+ * - On failure, a negative value.
+ */
+__rte_internal
+int
+rte_intr_nb_intr_get(const struct rte_intr_handle *intr_handle);
+
+/**
+ * @internal
+ * This API is used to set the event fd counter size field of interrupt handle
+ * with user provided efd counter size.
+ *
+ * @param intr_handle
+ * pointer to the interrupt handle.
+ * @param efd_counter_size
+ * size of efd counter, used for vdev
+ *
+ * @return
+ * - On success, zero.
+ * - On failure, a negative value.
+ */
+__rte_internal
+int
+rte_intr_efd_counter_size_set(struct rte_intr_handle *intr_handle,
+ uint8_t efd_counter_size);
+
+/**
+ * @internal
+ * Returns the event fd counter size field of the given interrupt handle
+ * instance.
+ *
+ * @param intr_handle
+ * pointer to the interrupt handle.
+ *
+ * @return
+ * - On success, efd_counter_size
+ * - On failure, a negative value.
+ */
+__rte_internal
+int
+rte_intr_efd_counter_size_get(const struct rte_intr_handle *intr_handle);
+
+/**
+ * @internal
+ * This API is used to set the event fd array index with the given fd.
+ *
+ * @param intr_handle
+ * pointer to the interrupt handle.
+ * @param index
+ * efds array index to be set
+ * @param fd
+ * event fd
+ *
+ * @return
+ * - On success, zero.
+ * - On failure, a negative value.
+ */
+__rte_internal
+int
+rte_intr_efds_index_set(struct rte_intr_handle *intr_handle, int index, int fd);
+
+/**
+ * @internal
+ * Returns the fd value of event fds array at a given index.
+ *
+ * @param intr_handle
+ * pointer to the interrupt handle.
+ * @param index
+ * efds array index to be returned
+ *
+ * @return
+ * - On success, fd
+ * - On failure, a negative value.
+ */
+__rte_internal
+int
+rte_intr_efds_index_get(const struct rte_intr_handle *intr_handle, int index);
+
+/**
+ * @internal
+ * This API is used to set the event list array index with the given elist
+ * instance.
+ *
+ * @param intr_handle
+ * pointer to the interrupt handle.
+ * @param index
+ * elist array index to be set
+ * @param elist
+ * event list instance of struct rte_epoll_event
+ *
+ * @return
+ * - On success, zero.
+ * - On failure, a negative value.
+ */
+__rte_internal
+int
+rte_intr_elist_index_set(struct rte_intr_handle *intr_handle, int index,
+ struct rte_epoll_event elist);
+
+/**
+ * @internal
+ * Returns the address of elist instance of event list array at a given index.
+ *
+ * @param intr_handle
+ * pointer to the interrupt handle.
+ * @param index
+ * elist array index to be returned
+ *
+ * @return
+ * - On success, elist
+ * - On failure, a negative value.
+ */
+__rte_internal
+struct rte_epoll_event *
+rte_intr_elist_index_get(struct rte_intr_handle *intr_handle, int index);
+
+/**
+ * @internal
+ * Allocates the interrupt vector list array, with size defining the number
+ * of elements required in the array.
+ *
+ * @param intr_handle
+ * pointer to the interrupt handle.
+ * @param name
+ * Name assigned to the allocation, or NULL.
+ * @param size
+ * Number of elements required in the array.
+ *
+ * @return
+ * - On success, zero
+ * - On failure, a negative value.
+ */
+__rte_internal
+int
+rte_intr_vec_list_alloc(struct rte_intr_handle *intr_handle, const char *name,
+ int size);
+
+/**
+ * @internal
+ * Sets the vector value at given index of interrupt vector list field of given
+ * interrupt handle.
+ *
+ * @param intr_handle
+ * pointer to the interrupt handle.
+ * @param index
+ * intr_vec array index to be set
+ * @param vec
+ * Interrupt vector value.
+ *
+ * @return
+ * - On success, zero
+ * - On failure, a negative value.
+ */
+__rte_internal
+int
+rte_intr_vec_list_index_set(struct rte_intr_handle *intr_handle, int index,
+ int vec);
+
+/**
+ * @internal
+ * Returns the vector value at the given index of interrupt vector list array.
+ *
+ * @param intr_handle
+ * pointer to the interrupt handle.
+ * @param index
+ * intr_vec array index to be returned
+ *
+ * @return
+ * - On success, interrupt vector
+ * - On failure, a negative value.
+ */
+__rte_internal
+int
+rte_intr_vec_list_index_get(const struct rte_intr_handle *intr_handle,
+ int index);
+
+/**
+ * @internal
+ * Frees the memory allocated for the interrupt vector list array.
+ *
+ * @param intr_handle
+ * pointer to the interrupt handle.
+ */
+__rte_internal
+void
+rte_intr_vec_list_free(struct rte_intr_handle *intr_handle);
+
+/**
+ * @internal
+ * Reallocates the efds and elist arrays to the size provided by the user.
+ * By default the efds and elist arrays are allocated with
+ * RTE_MAX_RXTX_INTR_VEC_ID entries when the interrupt handle is created.
+ * A device probed later may support more interrupts than
+ * RTE_MAX_RXTX_INTR_VEC_ID, so PMDs can use this API to reallocate the
+ * arrays as per the maximum interrupt capability of the device.
+ *
+ * @param intr_handle
+ * pointer to the interrupt handle.
+ * @param size
+ * efds and elist array size.
+ *
+ * @return
+ * - On success, zero
+ * - On failure, a negative value.
+ */
+__rte_internal
+int
+rte_intr_event_list_update(struct rte_intr_handle *intr_handle, int size);
+
+/**
+ * @internal
+ * This API returns the Windows handle of the given interrupt instance.
+ *
+ * @param intr_handle
+ * pointer to the interrupt handle.
+ *
+ * @return
+ * - On success, windows handle.
+ * - On failure, NULL.
+ */
+__rte_internal
+void *
+rte_intr_instance_windows_handle_get(struct rte_intr_handle *intr_handle);
+
+/**
+ * @internal
+ * This API sets the Windows handle for the given interrupt instance.
+ *
+ * @param intr_handle
+ * pointer to the interrupt handle.
+ * @param windows_handle
+ * windows handle to be set.
+ *
+ * @return
+ * - On success, zero
+ * - On failure, a negative value.
+ */
+__rte_internal
+int
+rte_intr_instance_windows_handle_set(struct rte_intr_handle *intr_handle,
+ void *windows_handle);
+
#ifdef __cplusplus
}
#endif
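
As a sketch only (not part of this patch), the vector list accessors declared
above could be used by a PMD to map Rx queues to interrupt vectors; the
function name below is hypothetical:

	static int example_map_rx_queues(struct rte_intr_handle *handle,
					 uint16_t nb_rx_queues)
	{
		uint16_t i;

		if (rte_intr_vec_list_alloc(handle, "example_intr_vec",
					    nb_rx_queues) != 0)
			return -rte_errno;

		for (i = 0; i < nb_rx_queues; i++) {
			if (rte_intr_vec_list_index_set(handle, i,
					RTE_INTR_VEC_RXTX_OFFSET + i) != 0) {
				rte_intr_vec_list_free(handle);
				return -rte_errno;
			}
		}

		return 0;
	}
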
diff --git a/lib/eal/version.map b/lib/eal/version.map
index 38f7de83e1..0ef77c3b40 100644
--- a/lib/eal/version.map
+++ b/lib/eal/version.map
@@ -109,18 +109,10 @@ DPDK_22 {
rte_hexdump;
rte_hypervisor_get;
rte_hypervisor_get_name; # WINDOWS_NO_EXPORT
- rte_intr_allow_others;
rte_intr_callback_register;
rte_intr_callback_unregister;
- rte_intr_cap_multiple;
- rte_intr_disable;
- rte_intr_dp_is_en;
- rte_intr_efd_disable;
- rte_intr_efd_enable;
rte_intr_enable;
- rte_intr_free_epoll_fd;
- rte_intr_rx_ctl;
- rte_intr_tls_epfd;
+ rte_intr_disable;
rte_keepalive_create; # WINDOWS_NO_EXPORT
rte_keepalive_dispatch_pings; # WINDOWS_NO_EXPORT
rte_keepalive_mark_alive; # WINDOWS_NO_EXPORT
@@ -420,6 +412,14 @@ EXPERIMENTAL {
# added in 21.08
rte_power_monitor_multi; # WINDOWS_NO_EXPORT
+
+ # added in 21.11
+ rte_intr_fd_set; # WINDOWS_NO_EXPORT
+ rte_intr_fd_get; # WINDOWS_NO_EXPORT
+ rte_intr_type_set; # WINDOWS_NO_EXPORT
+ rte_intr_type_get; # WINDOWS_NO_EXPORT
+ rte_intr_instance_alloc; # WINDOWS_NO_EXPORT
+ rte_intr_instance_free; # WINDOWS_NO_EXPORT
};
INTERNAL {
@@ -430,4 +430,33 @@ INTERNAL {
rte_mem_map;
rte_mem_page_size;
rte_mem_unmap;
+ rte_intr_cap_multiple;
+ rte_intr_dp_is_en;
+ rte_intr_efd_disable;
+ rte_intr_efd_enable;
+ rte_intr_free_epoll_fd;
+ rte_intr_rx_ctl;
+ rte_intr_allow_others;
+ rte_intr_tls_epfd;
+ rte_intr_dev_fd_set; # WINDOWS_NO_EXPORT
+ rte_intr_dev_fd_get; # WINDOWS_NO_EXPORT
+ rte_intr_instance_copy; # WINDOWS_NO_EXPORT
+ rte_intr_event_list_update; # WINDOWS_NO_EXPORT
+ rte_intr_max_intr_set; # WINDOWS_NO_EXPORT
+ rte_intr_max_intr_get; # WINDOWS_NO_EXPORT
+ rte_intr_nb_efd_set; # WINDOWS_NO_EXPORT
+ rte_intr_nb_efd_get; # WINDOWS_NO_EXPORT
+ rte_intr_nb_intr_get; # WINDOWS_NO_EXPORT
+ rte_intr_efds_index_set; # WINDOWS_NO_EXPORT
+ rte_intr_efds_index_get; # WINDOWS_NO_EXPORT
+ rte_intr_elist_index_set; # WINDOWS_NO_EXPORT
+ rte_intr_elist_index_get; # WINDOWS_NO_EXPORT
+ rte_intr_efd_counter_size_set; # WINDOWS_NO_EXPORT
+ rte_intr_efd_counter_size_get; # WINDOWS_NO_EXPORT
+ rte_intr_vec_list_alloc; # WINDOWS_NO_EXPORT
+ rte_intr_vec_list_index_set; # WINDOWS_NO_EXPORT
+ rte_intr_vec_list_index_get; # WINDOWS_NO_EXPORT
+ rte_intr_vec_list_free; # WINDOWS_NO_EXPORT
+ rte_intr_instance_windows_handle_get;
+ rte_intr_instance_windows_handle_set;
};
--
2.18.0
^ permalink raw reply [flat|nested] 152+ messages in thread
* [dpdk-dev] [PATCH v3 3/7] eal/interrupts: avoid direct access to interrupt handle
2021-10-18 19:37 ` [dpdk-dev] [PATCH v3 " Harman Kalra
2021-10-18 19:37 ` [dpdk-dev] [PATCH v3 1/7] malloc: introduce malloc is ready API Harman Kalra
2021-10-18 19:37 ` [dpdk-dev] [PATCH v3 2/7] eal/interrupts: implement get set APIs Harman Kalra
@ 2021-10-18 19:37 ` Harman Kalra
2021-10-18 19:37 ` [dpdk-dev] [PATCH v3 4/7] test/interrupt: apply get set interrupt handle APIs Harman Kalra
` (3 subsequent siblings)
6 siblings, 0 replies; 152+ messages in thread
From: Harman Kalra @ 2021-10-18 19:37 UTC (permalink / raw)
To: dev, Harman Kalra, Bruce Richardson
Cc: david.marchand, dmitry.kozliuk, mdr, thomas
Making changes to the interrupt framework to use the interrupt handle
get/set APIs for accessing any field. Direct access to the fields
is avoided to prevent any ABI breakage in the future.
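
For reference, the conversion pattern applied throughout the hunks below,
condensed into a minimal before/after sketch:

	/* before: direct field access */
	if (intr_handle->type == RTE_INTR_HANDLE_ALARM)
		ke->ident = intr_handle->fd;

	/* after: via the get/set APIs */
	if (rte_intr_type_get(intr_handle) == RTE_INTR_HANDLE_ALARM)
		ke->ident = rte_intr_fd_get(intr_handle);
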
Signed-off-by: Harman Kalra <hkalra@marvell.com>
---
lib/eal/freebsd/eal_interrupts.c | 92 ++++++----
lib/eal/linux/eal_interrupts.c | 287 +++++++++++++++++++------------
2 files changed, 234 insertions(+), 145 deletions(-)
diff --git a/lib/eal/freebsd/eal_interrupts.c b/lib/eal/freebsd/eal_interrupts.c
index 86810845fe..846ca4aa89 100644
--- a/lib/eal/freebsd/eal_interrupts.c
+++ b/lib/eal/freebsd/eal_interrupts.c
@@ -40,7 +40,7 @@ struct rte_intr_callback {
struct rte_intr_source {
TAILQ_ENTRY(rte_intr_source) next;
- struct rte_intr_handle intr_handle; /**< interrupt handle */
+ struct rte_intr_handle *intr_handle; /**< interrupt handle */
struct rte_intr_cb_list callbacks; /**< user callbacks */
uint32_t active;
};
@@ -60,7 +60,7 @@ static int
intr_source_to_kevent(const struct rte_intr_handle *ih, struct kevent *ke)
{
/* alarm callbacks are special case */
- if (ih->type == RTE_INTR_HANDLE_ALARM) {
+ if (rte_intr_type_get(ih) == RTE_INTR_HANDLE_ALARM) {
uint64_t timeout_ns;
/* get soonest alarm timeout */
@@ -75,7 +75,7 @@ intr_source_to_kevent(const struct rte_intr_handle *ih, struct kevent *ke)
} else {
ke->filter = EVFILT_READ;
}
- ke->ident = ih->fd;
+ ke->ident = rte_intr_fd_get(ih);
return 0;
}
@@ -89,7 +89,8 @@ rte_intr_callback_register(const struct rte_intr_handle *intr_handle,
int ret = 0, add_event = 0;
/* first do parameter checking */
- if (intr_handle == NULL || intr_handle->fd < 0 || cb == NULL) {
+ if (intr_handle == NULL || rte_intr_fd_get(intr_handle) < 0 ||
+ cb == NULL) {
RTE_LOG(ERR, EAL,
"Registering with invalid input parameter\n");
return -EINVAL;
@@ -103,7 +104,8 @@ rte_intr_callback_register(const struct rte_intr_handle *intr_handle,
/* find the source for this intr_handle */
TAILQ_FOREACH(src, &intr_sources, next) {
- if (src->intr_handle.fd == intr_handle->fd)
+ if (rte_intr_fd_get(src->intr_handle) ==
+ rte_intr_fd_get(intr_handle))
break;
}
@@ -112,8 +114,8 @@ rte_intr_callback_register(const struct rte_intr_handle *intr_handle,
* thing on the list should be eal_alarm_callback() and we may
* be called just to reset the timer.
*/
- if (src != NULL && src->intr_handle.type == RTE_INTR_HANDLE_ALARM &&
- !TAILQ_EMPTY(&src->callbacks)) {
+ if (src != NULL && rte_intr_type_get(src->intr_handle) ==
+ RTE_INTR_HANDLE_ALARM && !TAILQ_EMPTY(&src->callbacks)) {
callback = NULL;
} else {
/* allocate a new interrupt callback entity */
@@ -135,9 +137,18 @@ rte_intr_callback_register(const struct rte_intr_handle *intr_handle,
ret = -ENOMEM;
goto fail;
} else {
- src->intr_handle = *intr_handle;
- TAILQ_INIT(&src->callbacks);
- TAILQ_INSERT_TAIL(&intr_sources, src, next);
+ src->intr_handle = rte_intr_instance_alloc();
+ if (src->intr_handle == NULL) {
+ RTE_LOG(ERR, EAL, "Can not create intr instance\n");
+ free(callback);
+ ret = -ENOMEM;
+ } else {
+ rte_intr_instance_copy(src->intr_handle,
+ intr_handle);
+ TAILQ_INIT(&src->callbacks);
+ TAILQ_INSERT_TAIL(&intr_sources, src,
+ next);
+ }
}
}
@@ -151,7 +162,8 @@ rte_intr_callback_register(const struct rte_intr_handle *intr_handle,
/* add events to the queue. timer events are special as we need to
* re-set the timer.
*/
- if (add_event || src->intr_handle.type == RTE_INTR_HANDLE_ALARM) {
+ if (add_event || rte_intr_type_get(src->intr_handle) ==
+ RTE_INTR_HANDLE_ALARM) {
struct kevent ke;
memset(&ke, 0, sizeof(ke));
@@ -173,12 +185,13 @@ rte_intr_callback_register(const struct rte_intr_handle *intr_handle,
*/
if (errno == ENODEV)
RTE_LOG(DEBUG, EAL, "Interrupt handle %d not supported\n",
- src->intr_handle.fd);
+ rte_intr_fd_get(src->intr_handle));
else
RTE_LOG(ERR, EAL, "Error adding fd %d "
- "kevent, %s\n",
- src->intr_handle.fd,
- strerror(errno));
+ "kevent, %s\n",
+ rte_intr_fd_get(
+ src->intr_handle),
+ strerror(errno));
ret = -errno;
goto fail;
}
@@ -213,7 +226,7 @@ rte_intr_callback_unregister_pending(const struct rte_intr_handle *intr_handle,
struct rte_intr_callback *cb, *next;
/* do parameter checking first */
- if (intr_handle == NULL || intr_handle->fd < 0) {
+ if (intr_handle == NULL || rte_intr_fd_get(intr_handle) < 0) {
RTE_LOG(ERR, EAL,
"Unregistering with invalid input parameter\n");
return -EINVAL;
@@ -228,7 +241,8 @@ rte_intr_callback_unregister_pending(const struct rte_intr_handle *intr_handle,
/* check if the insterrupt source for the fd is existent */
TAILQ_FOREACH(src, &intr_sources, next)
- if (src->intr_handle.fd == intr_handle->fd)
+ if (rte_intr_fd_get(src->intr_handle) ==
+ rte_intr_fd_get(intr_handle))
break;
/* No interrupt source registered for the fd */
@@ -268,7 +282,7 @@ rte_intr_callback_unregister(const struct rte_intr_handle *intr_handle,
struct rte_intr_callback *cb, *next;
/* do parameter checking first */
- if (intr_handle == NULL || intr_handle->fd < 0) {
+ if (intr_handle == NULL || rte_intr_fd_get(intr_handle) < 0) {
RTE_LOG(ERR, EAL,
"Unregistering with invalid input parameter\n");
return -EINVAL;
@@ -282,7 +296,8 @@ rte_intr_callback_unregister(const struct rte_intr_handle *intr_handle,
/* check if the insterrupt source for the fd is existent */
TAILQ_FOREACH(src, &intr_sources, next)
- if (src->intr_handle.fd == intr_handle->fd)
+ if (rte_intr_fd_get(src->intr_handle) ==
+ rte_intr_fd_get(intr_handle))
break;
/* No interrupt source registered for the fd */
@@ -314,7 +329,8 @@ rte_intr_callback_unregister(const struct rte_intr_handle *intr_handle,
*/
if (kevent(kq, &ke, 1, NULL, 0, NULL) < 0) {
RTE_LOG(ERR, EAL, "Error removing fd %d kevent, %s\n",
- src->intr_handle.fd, strerror(errno));
+ rte_intr_fd_get(src->intr_handle),
+ strerror(errno));
/* removing non-existent even is an expected condition
* in some circumstances (e.g. oneshot events).
*/
@@ -365,17 +381,18 @@ rte_intr_enable(const struct rte_intr_handle *intr_handle)
if (intr_handle == NULL)
return -1;
- if (intr_handle->type == RTE_INTR_HANDLE_VDEV) {
+ if (rte_intr_type_get(intr_handle) == RTE_INTR_HANDLE_VDEV) {
rc = 0;
goto out;
}
- if (intr_handle->fd < 0 || intr_handle->uio_cfg_fd < 0) {
+ if (rte_intr_fd_get(intr_handle) < 0 ||
+ rte_intr_dev_fd_get(intr_handle) < 0) {
rc = -1;
goto out;
}
- switch (intr_handle->type) {
+ switch (rte_intr_type_get(intr_handle)) {
/* not used at this moment */
case RTE_INTR_HANDLE_ALARM:
rc = -1;
@@ -388,7 +405,7 @@ rte_intr_enable(const struct rte_intr_handle *intr_handle)
default:
RTE_LOG(ERR, EAL,
"Unknown handle type of fd %d\n",
- intr_handle->fd);
+ rte_intr_fd_get(intr_handle));
rc = -1;
break;
}
@@ -406,17 +423,18 @@ rte_intr_disable(const struct rte_intr_handle *intr_handle)
if (intr_handle == NULL)
return -1;
- if (intr_handle->type == RTE_INTR_HANDLE_VDEV) {
+ if (rte_intr_type_get(intr_handle) == RTE_INTR_HANDLE_VDEV) {
rc = 0;
goto out;
}
- if (intr_handle->fd < 0 || intr_handle->uio_cfg_fd < 0) {
+ if (rte_intr_fd_get(intr_handle) < 0 ||
+ rte_intr_dev_fd_get(intr_handle) < 0) {
rc = -1;
goto out;
}
- switch (intr_handle->type) {
+ switch (rte_intr_type_get(intr_handle)) {
/* not used at this moment */
case RTE_INTR_HANDLE_ALARM:
rc = -1;
@@ -429,7 +447,7 @@ rte_intr_disable(const struct rte_intr_handle *intr_handle)
default:
RTE_LOG(ERR, EAL,
"Unknown handle type of fd %d\n",
- intr_handle->fd);
+ rte_intr_fd_get(intr_handle));
rc = -1;
break;
}
@@ -441,7 +459,8 @@ rte_intr_disable(const struct rte_intr_handle *intr_handle)
int
rte_intr_ack(const struct rte_intr_handle *intr_handle)
{
- if (intr_handle && intr_handle->type == RTE_INTR_HANDLE_VDEV)
+ if (intr_handle &&
+ rte_intr_type_get(intr_handle) == RTE_INTR_HANDLE_VDEV)
return 0;
return -1;
@@ -463,7 +482,8 @@ eal_intr_process_interrupts(struct kevent *events, int nfds)
rte_spinlock_lock(&intr_lock);
TAILQ_FOREACH(src, &intr_sources, next)
- if (src->intr_handle.fd == event_fd)
+ if (rte_intr_fd_get(src->intr_handle) ==
+ event_fd)
break;
if (src == NULL) {
rte_spinlock_unlock(&intr_lock);
@@ -475,7 +495,7 @@ eal_intr_process_interrupts(struct kevent *events, int nfds)
rte_spinlock_unlock(&intr_lock);
/* set the length to be read dor different handle type */
- switch (src->intr_handle.type) {
+ switch (rte_intr_type_get(src->intr_handle)) {
case RTE_INTR_HANDLE_ALARM:
bytes_read = 0;
call = true;
@@ -546,7 +566,8 @@ eal_intr_process_interrupts(struct kevent *events, int nfds)
/* mark for deletion from the queue */
ke.flags = EV_DELETE;
- if (intr_source_to_kevent(&src->intr_handle, &ke) < 0) {
+ if (intr_source_to_kevent(src->intr_handle,
+ &ke) < 0) {
RTE_LOG(ERR, EAL, "Cannot convert to kevent\n");
rte_spinlock_unlock(&intr_lock);
return;
@@ -557,7 +578,9 @@ eal_intr_process_interrupts(struct kevent *events, int nfds)
*/
if (kevent(kq, &ke, 1, NULL, 0, NULL) < 0) {
RTE_LOG(ERR, EAL, "Error removing fd %d kevent, "
- "%s\n", src->intr_handle.fd,
+ "%s\n",
+ rte_intr_fd_get(
+ src->intr_handle),
strerror(errno));
/* removing non-existent even is an expected
* condition in some circumstances
@@ -567,7 +590,8 @@ eal_intr_process_interrupts(struct kevent *events, int nfds)
TAILQ_REMOVE(&src->callbacks, cb, next);
if (cb->ucb_fn)
- cb->ucb_fn(&src->intr_handle, cb->cb_arg);
+ cb->ucb_fn(src->intr_handle,
+ cb->cb_arg);
free(cb);
}
}
diff --git a/lib/eal/linux/eal_interrupts.c b/lib/eal/linux/eal_interrupts.c
index 22b3b7bcd9..a250a9df66 100644
--- a/lib/eal/linux/eal_interrupts.c
+++ b/lib/eal/linux/eal_interrupts.c
@@ -20,6 +20,7 @@
#include <stdbool.h>
#include <rte_common.h>
+#include <rte_epoll.h>
#include <rte_interrupts.h>
#include <rte_memory.h>
#include <rte_launch.h>
@@ -82,7 +83,7 @@ struct rte_intr_callback {
struct rte_intr_source {
TAILQ_ENTRY(rte_intr_source) next;
- struct rte_intr_handle intr_handle; /**< interrupt handle */
+ struct rte_intr_handle *intr_handle; /**< interrupt handle */
struct rte_intr_cb_list callbacks; /**< user callbacks */
uint32_t active;
};
@@ -112,7 +113,7 @@ static int
vfio_enable_intx(const struct rte_intr_handle *intr_handle) {
struct vfio_irq_set *irq_set;
char irq_set_buf[IRQ_SET_BUF_LEN];
- int len, ret;
+ int len, ret, vfio_dev_fd;
int *fd_ptr;
len = sizeof(irq_set_buf);
@@ -125,13 +126,14 @@ vfio_enable_intx(const struct rte_intr_handle *intr_handle) {
irq_set->index = VFIO_PCI_INTX_IRQ_INDEX;
irq_set->start = 0;
fd_ptr = (int *) &irq_set->data;
- *fd_ptr = intr_handle->fd;
+ *fd_ptr = rte_intr_fd_get(intr_handle);
- ret = ioctl(intr_handle->vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
+ vfio_dev_fd = rte_intr_dev_fd_get(intr_handle);
+ ret = ioctl(vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
if (ret) {
RTE_LOG(ERR, EAL, "Error enabling INTx interrupts for fd %d\n",
- intr_handle->fd);
+ rte_intr_fd_get(intr_handle));
return -1;
}
@@ -144,11 +146,11 @@ vfio_enable_intx(const struct rte_intr_handle *intr_handle) {
irq_set->index = VFIO_PCI_INTX_IRQ_INDEX;
irq_set->start = 0;
- ret = ioctl(intr_handle->vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
+ ret = ioctl(vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
if (ret) {
RTE_LOG(ERR, EAL, "Error unmasking INTx interrupts for fd %d\n",
- intr_handle->fd);
+ rte_intr_fd_get(intr_handle));
return -1;
}
return 0;
@@ -159,7 +161,7 @@ static int
vfio_disable_intx(const struct rte_intr_handle *intr_handle) {
struct vfio_irq_set *irq_set;
char irq_set_buf[IRQ_SET_BUF_LEN];
- int len, ret;
+ int len, ret, vfio_dev_fd;
len = sizeof(struct vfio_irq_set);
@@ -171,11 +173,12 @@ vfio_disable_intx(const struct rte_intr_handle *intr_handle) {
irq_set->index = VFIO_PCI_INTX_IRQ_INDEX;
irq_set->start = 0;
- ret = ioctl(intr_handle->vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
+ vfio_dev_fd = rte_intr_dev_fd_get(intr_handle);
+ ret = ioctl(vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
if (ret) {
RTE_LOG(ERR, EAL, "Error masking INTx interrupts for fd %d\n",
- intr_handle->fd);
+ rte_intr_fd_get(intr_handle));
return -1;
}
@@ -187,11 +190,12 @@ vfio_disable_intx(const struct rte_intr_handle *intr_handle) {
irq_set->index = VFIO_PCI_INTX_IRQ_INDEX;
irq_set->start = 0;
- ret = ioctl(intr_handle->vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
+ ret = ioctl(vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
if (ret) {
RTE_LOG(ERR, EAL,
- "Error disabling INTx interrupts for fd %d\n", intr_handle->fd);
+ "Error disabling INTx interrupts for fd %d\n",
+ rte_intr_fd_get(intr_handle));
return -1;
}
return 0;
@@ -202,6 +206,7 @@ static int
vfio_ack_intx(const struct rte_intr_handle *intr_handle)
{
struct vfio_irq_set irq_set;
+ int vfio_dev_fd;
/* unmask INTx */
memset(&irq_set, 0, sizeof(irq_set));
@@ -211,9 +216,10 @@ vfio_ack_intx(const struct rte_intr_handle *intr_handle)
irq_set.index = VFIO_PCI_INTX_IRQ_INDEX;
irq_set.start = 0;
- if (ioctl(intr_handle->vfio_dev_fd, VFIO_DEVICE_SET_IRQS, &irq_set)) {
+ vfio_dev_fd = rte_intr_dev_fd_get(intr_handle);
+ if (ioctl(vfio_dev_fd, VFIO_DEVICE_SET_IRQS, &irq_set)) {
RTE_LOG(ERR, EAL, "Error unmasking INTx interrupts for fd %d\n",
- intr_handle->fd);
+ rte_intr_fd_get(intr_handle));
return -1;
}
return 0;
@@ -225,7 +231,7 @@ vfio_enable_msi(const struct rte_intr_handle *intr_handle) {
int len, ret;
char irq_set_buf[IRQ_SET_BUF_LEN];
struct vfio_irq_set *irq_set;
- int *fd_ptr;
+ int *fd_ptr, vfio_dev_fd;
len = sizeof(irq_set_buf);
@@ -236,13 +242,14 @@ vfio_enable_msi(const struct rte_intr_handle *intr_handle) {
irq_set->index = VFIO_PCI_MSI_IRQ_INDEX;
irq_set->start = 0;
fd_ptr = (int *) &irq_set->data;
- *fd_ptr = intr_handle->fd;
+ *fd_ptr = rte_intr_fd_get(intr_handle);
- ret = ioctl(intr_handle->vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
+ vfio_dev_fd = rte_intr_dev_fd_get(intr_handle);
+ ret = ioctl(vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
if (ret) {
RTE_LOG(ERR, EAL, "Error enabling MSI interrupts for fd %d\n",
- intr_handle->fd);
+ rte_intr_fd_get(intr_handle));
return -1;
}
return 0;
@@ -253,7 +260,7 @@ static int
vfio_disable_msi(const struct rte_intr_handle *intr_handle) {
struct vfio_irq_set *irq_set;
char irq_set_buf[IRQ_SET_BUF_LEN];
- int len, ret;
+ int len, ret, vfio_dev_fd;
len = sizeof(struct vfio_irq_set);
@@ -264,11 +271,13 @@ vfio_disable_msi(const struct rte_intr_handle *intr_handle) {
irq_set->index = VFIO_PCI_MSI_IRQ_INDEX;
irq_set->start = 0;
- ret = ioctl(intr_handle->vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
+ vfio_dev_fd = rte_intr_dev_fd_get(intr_handle);
+ ret = ioctl(vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
if (ret)
RTE_LOG(ERR, EAL,
- "Error disabling MSI interrupts for fd %d\n", intr_handle->fd);
+ "Error disabling MSI interrupts for fd %d\n",
+ rte_intr_fd_get(intr_handle));
return ret;
}
@@ -279,30 +288,34 @@ vfio_enable_msix(const struct rte_intr_handle *intr_handle) {
int len, ret;
char irq_set_buf[MSIX_IRQ_SET_BUF_LEN];
struct vfio_irq_set *irq_set;
- int *fd_ptr;
+ int *fd_ptr, vfio_dev_fd, i;
len = sizeof(irq_set_buf);
irq_set = (struct vfio_irq_set *) irq_set_buf;
irq_set->argsz = len;
/* 0 < irq_set->count < RTE_MAX_RXTX_INTR_VEC_ID + 1 */
- irq_set->count = intr_handle->max_intr ?
- (intr_handle->max_intr > RTE_MAX_RXTX_INTR_VEC_ID + 1 ?
- RTE_MAX_RXTX_INTR_VEC_ID + 1 : intr_handle->max_intr) : 1;
+ irq_set->count = rte_intr_max_intr_get(intr_handle) ?
+ (rte_intr_max_intr_get(intr_handle) >
+ RTE_MAX_RXTX_INTR_VEC_ID + 1 ? RTE_MAX_RXTX_INTR_VEC_ID + 1 :
+ rte_intr_max_intr_get(intr_handle)) : 1;
+
irq_set->flags = VFIO_IRQ_SET_DATA_EVENTFD | VFIO_IRQ_SET_ACTION_TRIGGER;
irq_set->index = VFIO_PCI_MSIX_IRQ_INDEX;
irq_set->start = 0;
fd_ptr = (int *) &irq_set->data;
/* INTR vector offset 0 reserve for non-efds mapping */
- fd_ptr[RTE_INTR_VEC_ZERO_OFFSET] = intr_handle->fd;
- memcpy(&fd_ptr[RTE_INTR_VEC_RXTX_OFFSET], intr_handle->efds,
- sizeof(*intr_handle->efds) * intr_handle->nb_efd);
+ fd_ptr[RTE_INTR_VEC_ZERO_OFFSET] = rte_intr_fd_get(intr_handle);
+ for (i = 0; i < rte_intr_nb_efd_get(intr_handle); i++)
+ fd_ptr[RTE_INTR_VEC_RXTX_OFFSET + i] =
+ rte_intr_efds_index_get(intr_handle, i);
- ret = ioctl(intr_handle->vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
+ vfio_dev_fd = rte_intr_dev_fd_get(intr_handle);
+ ret = ioctl(vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
if (ret) {
RTE_LOG(ERR, EAL, "Error enabling MSI-X interrupts for fd %d\n",
- intr_handle->fd);
+ rte_intr_fd_get(intr_handle));
return -1;
}
@@ -314,7 +327,7 @@ static int
vfio_disable_msix(const struct rte_intr_handle *intr_handle) {
struct vfio_irq_set *irq_set;
char irq_set_buf[MSIX_IRQ_SET_BUF_LEN];
- int len, ret;
+ int len, ret, vfio_dev_fd;
len = sizeof(struct vfio_irq_set);
@@ -325,11 +338,13 @@ vfio_disable_msix(const struct rte_intr_handle *intr_handle) {
irq_set->index = VFIO_PCI_MSIX_IRQ_INDEX;
irq_set->start = 0;
- ret = ioctl(intr_handle->vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
+ vfio_dev_fd = rte_intr_dev_fd_get(intr_handle);
+ ret = ioctl(vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
if (ret)
RTE_LOG(ERR, EAL,
- "Error disabling MSI-X interrupts for fd %d\n", intr_handle->fd);
+ "Error disabling MSI-X interrupts for fd %d\n",
+ rte_intr_fd_get(intr_handle));
return ret;
}
@@ -342,7 +357,7 @@ vfio_enable_req(const struct rte_intr_handle *intr_handle)
int len, ret;
char irq_set_buf[IRQ_SET_BUF_LEN];
struct vfio_irq_set *irq_set;
- int *fd_ptr;
+ int *fd_ptr, vfio_dev_fd;
len = sizeof(irq_set_buf);
@@ -354,13 +369,14 @@ vfio_enable_req(const struct rte_intr_handle *intr_handle)
irq_set->index = VFIO_PCI_REQ_IRQ_INDEX;
irq_set->start = 0;
fd_ptr = (int *) &irq_set->data;
- *fd_ptr = intr_handle->fd;
+ *fd_ptr = rte_intr_fd_get(intr_handle);
- ret = ioctl(intr_handle->vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
+ vfio_dev_fd = rte_intr_dev_fd_get(intr_handle);
+ ret = ioctl(vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
if (ret) {
RTE_LOG(ERR, EAL, "Error enabling req interrupts for fd %d\n",
- intr_handle->fd);
+ rte_intr_fd_get(intr_handle));
return -1;
}
@@ -373,7 +389,7 @@ vfio_disable_req(const struct rte_intr_handle *intr_handle)
{
struct vfio_irq_set *irq_set;
char irq_set_buf[IRQ_SET_BUF_LEN];
- int len, ret;
+ int len, ret, vfio_dev_fd;
len = sizeof(struct vfio_irq_set);
@@ -384,11 +400,12 @@ vfio_disable_req(const struct rte_intr_handle *intr_handle)
irq_set->index = VFIO_PCI_REQ_IRQ_INDEX;
irq_set->start = 0;
- ret = ioctl(intr_handle->vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
+ vfio_dev_fd = rte_intr_dev_fd_get(intr_handle);
+ ret = ioctl(vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
if (ret)
RTE_LOG(ERR, EAL, "Error disabling req interrupts for fd %d\n",
- intr_handle->fd);
+ rte_intr_fd_get(intr_handle));
return ret;
}
@@ -399,20 +416,22 @@ static int
uio_intx_intr_disable(const struct rte_intr_handle *intr_handle)
{
unsigned char command_high;
+ int uio_cfg_fd;
/* use UIO config file descriptor for uio_pci_generic */
- if (pread(intr_handle->uio_cfg_fd, &command_high, 1, 5) != 1) {
+ uio_cfg_fd = rte_intr_dev_fd_get(intr_handle);
+ if (pread(uio_cfg_fd, &command_high, 1, 5) != 1) {
RTE_LOG(ERR, EAL,
"Error reading interrupts status for fd %d\n",
- intr_handle->uio_cfg_fd);
+ uio_cfg_fd);
return -1;
}
/* disable interrupts */
command_high |= 0x4;
- if (pwrite(intr_handle->uio_cfg_fd, &command_high, 1, 5) != 1) {
+ if (pwrite(uio_cfg_fd, &command_high, 1, 5) != 1) {
RTE_LOG(ERR, EAL,
"Error disabling interrupts for fd %d\n",
- intr_handle->uio_cfg_fd);
+ uio_cfg_fd);
return -1;
}
@@ -423,20 +442,22 @@ static int
uio_intx_intr_enable(const struct rte_intr_handle *intr_handle)
{
unsigned char command_high;
+ int uio_cfg_fd;
/* use UIO config file descriptor for uio_pci_generic */
- if (pread(intr_handle->uio_cfg_fd, &command_high, 1, 5) != 1) {
+ uio_cfg_fd = rte_intr_dev_fd_get(intr_handle);
+ if (pread(uio_cfg_fd, &command_high, 1, 5) != 1) {
RTE_LOG(ERR, EAL,
"Error reading interrupts status for fd %d\n",
- intr_handle->uio_cfg_fd);
+ uio_cfg_fd);
return -1;
}
/* enable interrupts */
command_high &= ~0x4;
- if (pwrite(intr_handle->uio_cfg_fd, &command_high, 1, 5) != 1) {
+ if (pwrite(uio_cfg_fd, &command_high, 1, 5) != 1) {
RTE_LOG(ERR, EAL,
"Error enabling interrupts for fd %d\n",
- intr_handle->uio_cfg_fd);
+ uio_cfg_fd);
return -1;
}
@@ -448,10 +469,11 @@ uio_intr_disable(const struct rte_intr_handle *intr_handle)
{
const int value = 0;
- if (write(intr_handle->fd, &value, sizeof(value)) < 0) {
+ if (write(rte_intr_fd_get(intr_handle), &value,
+ sizeof(value)) < 0) {
RTE_LOG(ERR, EAL,
"Error disabling interrupts for fd %d (%s)\n",
- intr_handle->fd, strerror(errno));
+ rte_intr_fd_get(intr_handle), strerror(errno));
return -1;
}
return 0;
@@ -462,10 +484,11 @@ uio_intr_enable(const struct rte_intr_handle *intr_handle)
{
const int value = 1;
- if (write(intr_handle->fd, &value, sizeof(value)) < 0) {
+ if (write(rte_intr_fd_get(intr_handle), &value,
+ sizeof(value)) < 0) {
RTE_LOG(ERR, EAL,
"Error enabling interrupts for fd %d (%s)\n",
- intr_handle->fd, strerror(errno));
+ rte_intr_fd_get(intr_handle), strerror(errno));
return -1;
}
return 0;
@@ -482,7 +505,8 @@ rte_intr_callback_register(const struct rte_intr_handle *intr_handle,
wake_thread = 0;
/* first do parameter checking */
- if (intr_handle == NULL || intr_handle->fd < 0 || cb == NULL) {
+ if (intr_handle == NULL || rte_intr_fd_get(intr_handle) < 0 ||
+ cb == NULL) {
RTE_LOG(ERR, EAL,
"Registering with invalid input parameter\n");
return -EINVAL;
@@ -503,7 +527,8 @@ rte_intr_callback_register(const struct rte_intr_handle *intr_handle,
/* check if there is at least one callback registered for the fd */
TAILQ_FOREACH(src, &intr_sources, next) {
- if (src->intr_handle.fd == intr_handle->fd) {
+ if (rte_intr_fd_get(src->intr_handle) ==
+ rte_intr_fd_get(intr_handle)) {
/* we had no interrupts for this */
if (TAILQ_EMPTY(&src->callbacks))
wake_thread = 1;
@@ -522,12 +547,21 @@ rte_intr_callback_register(const struct rte_intr_handle *intr_handle,
free(callback);
ret = -ENOMEM;
} else {
- src->intr_handle = *intr_handle;
- TAILQ_INIT(&src->callbacks);
- TAILQ_INSERT_TAIL(&(src->callbacks), callback, next);
- TAILQ_INSERT_TAIL(&intr_sources, src, next);
- wake_thread = 1;
- ret = 0;
+ src->intr_handle = rte_intr_instance_alloc();
+ if (src->intr_handle == NULL) {
+ RTE_LOG(ERR, EAL, "Can not create intr instance\n");
+ free(callback);
+ ret = -ENOMEM;
+ } else {
+ rte_intr_instance_copy(src->intr_handle,
+ intr_handle);
+ TAILQ_INIT(&src->callbacks);
+ TAILQ_INSERT_TAIL(&(src->callbacks), callback,
+ next);
+ TAILQ_INSERT_TAIL(&intr_sources, src, next);
+ wake_thread = 1;
+ ret = 0;
+ }
}
}
@@ -555,7 +589,7 @@ rte_intr_callback_unregister_pending(const struct rte_intr_handle *intr_handle,
struct rte_intr_callback *cb, *next;
/* do parameter checking first */
- if (intr_handle == NULL || intr_handle->fd < 0) {
+ if (intr_handle == NULL || rte_intr_fd_get(intr_handle) < 0) {
RTE_LOG(ERR, EAL,
"Unregistering with invalid input parameter\n");
return -EINVAL;
@@ -565,7 +599,8 @@ rte_intr_callback_unregister_pending(const struct rte_intr_handle *intr_handle,
/* check if the insterrupt source for the fd is existent */
TAILQ_FOREACH(src, &intr_sources, next)
- if (src->intr_handle.fd == intr_handle->fd)
+ if (rte_intr_fd_get(src->intr_handle) ==
+ rte_intr_fd_get(intr_handle))
break;
/* No interrupt source registered for the fd */
@@ -605,7 +640,7 @@ rte_intr_callback_unregister(const struct rte_intr_handle *intr_handle,
struct rte_intr_callback *cb, *next;
/* do parameter checking first */
- if (intr_handle == NULL || intr_handle->fd < 0) {
+ if (intr_handle == NULL || rte_intr_fd_get(intr_handle) < 0) {
RTE_LOG(ERR, EAL,
"Unregistering with invalid input parameter\n");
return -EINVAL;
@@ -615,7 +650,8 @@ rte_intr_callback_unregister(const struct rte_intr_handle *intr_handle,
/* check if the insterrupt source for the fd is existent */
TAILQ_FOREACH(src, &intr_sources, next)
- if (src->intr_handle.fd == intr_handle->fd)
+ if (rte_intr_fd_get(src->intr_handle) ==
+ rte_intr_fd_get(intr_handle))
break;
/* No interrupt source registered for the fd */
@@ -646,6 +682,7 @@ rte_intr_callback_unregister(const struct rte_intr_handle *intr_handle,
/* all callbacks for that source are removed. */
if (TAILQ_EMPTY(&src->callbacks)) {
TAILQ_REMOVE(&intr_sources, src, next);
+ rte_intr_instance_free(src->intr_handle);
free(src);
}
}
@@ -677,22 +714,23 @@ rte_intr_callback_unregister_sync(const struct rte_intr_handle *intr_handle,
int
rte_intr_enable(const struct rte_intr_handle *intr_handle)
{
- int rc = 0;
+ int rc = 0, uio_cfg_fd;
if (intr_handle == NULL)
return -1;
- if (intr_handle->type == RTE_INTR_HANDLE_VDEV) {
+ if (rte_intr_type_get(intr_handle) == RTE_INTR_HANDLE_VDEV) {
rc = 0;
goto out;
}
- if (intr_handle->fd < 0 || intr_handle->uio_cfg_fd < 0) {
+ uio_cfg_fd = rte_intr_dev_fd_get(intr_handle);
+ if (rte_intr_fd_get(intr_handle) < 0 || uio_cfg_fd < 0) {
rc = -1;
goto out;
}
- switch (intr_handle->type){
+ switch (rte_intr_type_get(intr_handle)) {
/* write to the uio fd to enable the interrupt */
case RTE_INTR_HANDLE_UIO:
if (uio_intr_enable(intr_handle))
@@ -734,7 +772,7 @@ rte_intr_enable(const struct rte_intr_handle *intr_handle)
default:
RTE_LOG(ERR, EAL,
"Unknown handle type of fd %d\n",
- intr_handle->fd);
+ rte_intr_fd_get(intr_handle));
rc = -1;
break;
}
@@ -757,13 +795,17 @@ rte_intr_enable(const struct rte_intr_handle *intr_handle)
int
rte_intr_ack(const struct rte_intr_handle *intr_handle)
{
- if (intr_handle && intr_handle->type == RTE_INTR_HANDLE_VDEV)
+ int uio_cfg_fd;
+
+ if (intr_handle && rte_intr_type_get(intr_handle) ==
+ RTE_INTR_HANDLE_VDEV)
return 0;
- if (!intr_handle || intr_handle->fd < 0 || intr_handle->uio_cfg_fd < 0)
+ uio_cfg_fd = rte_intr_dev_fd_get(intr_handle);
+ if (!intr_handle || rte_intr_fd_get(intr_handle) < 0 || uio_cfg_fd < 0)
return -1;
- switch (intr_handle->type) {
+ switch (rte_intr_type_get(intr_handle)) {
/* Both acking and enabling are same for UIO */
case RTE_INTR_HANDLE_UIO:
if (uio_intr_enable(intr_handle))
@@ -796,7 +838,7 @@ rte_intr_ack(const struct rte_intr_handle *intr_handle)
/* unknown handle type */
default:
RTE_LOG(ERR, EAL, "Unknown handle type of fd %d\n",
- intr_handle->fd);
+ rte_intr_fd_get(intr_handle));
return -1;
}
@@ -806,22 +848,23 @@ rte_intr_ack(const struct rte_intr_handle *intr_handle)
int
rte_intr_disable(const struct rte_intr_handle *intr_handle)
{
- int rc = 0;
+ int rc = 0, uio_cfg_fd;
if (intr_handle == NULL)
return -1;
- if (intr_handle->type == RTE_INTR_HANDLE_VDEV) {
+ if (rte_intr_type_get(intr_handle) == RTE_INTR_HANDLE_VDEV) {
rc = 0;
goto out;
}
- if (intr_handle->fd < 0 || intr_handle->uio_cfg_fd < 0) {
+ uio_cfg_fd = rte_intr_dev_fd_get(intr_handle);
+ if (rte_intr_fd_get(intr_handle) < 0 || uio_cfg_fd < 0) {
rc = -1;
goto out;
}
- switch (intr_handle->type){
+ switch (rte_intr_type_get(intr_handle)) {
/* write to the uio fd to disable the interrupt */
case RTE_INTR_HANDLE_UIO:
if (uio_intr_disable(intr_handle))
@@ -863,7 +906,7 @@ rte_intr_disable(const struct rte_intr_handle *intr_handle)
default:
RTE_LOG(ERR, EAL,
"Unknown handle type of fd %d\n",
- intr_handle->fd);
+ rte_intr_fd_get(intr_handle));
rc = -1;
break;
}
@@ -896,7 +939,7 @@ eal_intr_process_interrupts(struct epoll_event *events, int nfds)
}
rte_spinlock_lock(&intr_lock);
TAILQ_FOREACH(src, &intr_sources, next)
- if (src->intr_handle.fd ==
+ if (rte_intr_fd_get(src->intr_handle) ==
events[n].data.fd)
break;
if (src == NULL){
@@ -909,7 +952,7 @@ eal_intr_process_interrupts(struct epoll_event *events, int nfds)
rte_spinlock_unlock(&intr_lock);
/* set the length to be read dor different handle type */
- switch (src->intr_handle.type) {
+ switch (rte_intr_type_get(src->intr_handle)) {
case RTE_INTR_HANDLE_UIO:
case RTE_INTR_HANDLE_UIO_INTX:
bytes_read = sizeof(buf.uio_intr_count);
@@ -973,6 +1016,7 @@ eal_intr_process_interrupts(struct epoll_event *events, int nfds)
TAILQ_REMOVE(&src->callbacks, cb, next);
free(cb);
}
+ rte_intr_instance_free(src->intr_handle);
free(src);
return -1;
} else if (bytes_read == 0)
@@ -1012,7 +1056,8 @@ eal_intr_process_interrupts(struct epoll_event *events, int nfds)
if (cb->pending_delete) {
TAILQ_REMOVE(&src->callbacks, cb, next);
if (cb->ucb_fn)
- cb->ucb_fn(&src->intr_handle, cb->cb_arg);
+ cb->ucb_fn(src->intr_handle,
+ cb->cb_arg);
free(cb);
rv++;
}
@@ -1021,6 +1066,7 @@ eal_intr_process_interrupts(struct epoll_event *events, int nfds)
/* all callbacks for that source are removed. */
if (TAILQ_EMPTY(&src->callbacks)) {
TAILQ_REMOVE(&intr_sources, src, next);
+ rte_intr_instance_free(src->intr_handle);
free(src);
}
@@ -1123,16 +1169,18 @@ eal_intr_thread_main(__rte_unused void *arg)
continue; /* skip those with no callbacks */
memset(&ev, 0, sizeof(ev));
ev.events = EPOLLIN | EPOLLPRI | EPOLLRDHUP | EPOLLHUP;
- ev.data.fd = src->intr_handle.fd;
+ ev.data.fd = rte_intr_fd_get(src->intr_handle);
/**
* add all the uio device file descriptor
* into wait list.
*/
if (epoll_ctl(pfd, EPOLL_CTL_ADD,
- src->intr_handle.fd, &ev) < 0){
+ rte_intr_fd_get(src->intr_handle),
+ &ev) < 0) {
rte_panic("Error adding fd %d epoll_ctl, %s\n",
- src->intr_handle.fd, strerror(errno));
+ rte_intr_fd_get(src->intr_handle),
+ strerror(errno));
}
else
numfds++;
@@ -1185,7 +1233,7 @@ eal_intr_proc_rxtx_intr(int fd, const struct rte_intr_handle *intr_handle)
int bytes_read = 0;
int nbytes;
- switch (intr_handle->type) {
+ switch (rte_intr_type_get(intr_handle)) {
case RTE_INTR_HANDLE_UIO:
case RTE_INTR_HANDLE_UIO_INTX:
bytes_read = sizeof(buf.uio_intr_count);
@@ -1198,7 +1246,7 @@ eal_intr_proc_rxtx_intr(int fd, const struct rte_intr_handle *intr_handle)
break;
#endif
case RTE_INTR_HANDLE_VDEV:
- bytes_read = intr_handle->efd_counter_size;
+ bytes_read = rte_intr_efd_counter_size_get(intr_handle);
/* For vdev, number of bytes to read is set by driver */
break;
case RTE_INTR_HANDLE_EXT:
@@ -1419,8 +1467,8 @@ rte_intr_rx_ctl(struct rte_intr_handle *intr_handle, int epfd,
efd_idx = (vec >= RTE_INTR_VEC_RXTX_OFFSET) ?
(vec - RTE_INTR_VEC_RXTX_OFFSET) : vec;
- if (!intr_handle || intr_handle->nb_efd == 0 ||
- efd_idx >= intr_handle->nb_efd) {
+ if (!intr_handle || rte_intr_nb_efd_get(intr_handle) == 0 ||
+ efd_idx >= (unsigned int)rte_intr_nb_efd_get(intr_handle)) {
RTE_LOG(ERR, EAL, "Wrong intr vector number.\n");
return -EPERM;
}
@@ -1428,7 +1476,7 @@ rte_intr_rx_ctl(struct rte_intr_handle *intr_handle, int epfd,
switch (op) {
case RTE_INTR_EVENT_ADD:
epfd_op = EPOLL_CTL_ADD;
- rev = &intr_handle->elist[efd_idx];
+ rev = rte_intr_elist_index_get(intr_handle, efd_idx);
if (__atomic_load_n(&rev->status,
__ATOMIC_RELAXED) != RTE_EPOLL_INVALID) {
RTE_LOG(INFO, EAL, "Event already been added.\n");
@@ -1442,7 +1490,9 @@ rte_intr_rx_ctl(struct rte_intr_handle *intr_handle, int epfd,
epdata->cb_fun = (rte_intr_event_cb_t)eal_intr_proc_rxtx_intr;
epdata->cb_arg = (void *)intr_handle;
rc = rte_epoll_ctl(epfd, epfd_op,
- intr_handle->efds[efd_idx], rev);
+ rte_intr_efds_index_get(intr_handle,
+ efd_idx),
+ rev);
if (!rc)
RTE_LOG(DEBUG, EAL,
"efd %d associated with vec %d added on epfd %d"
@@ -1452,7 +1502,7 @@ rte_intr_rx_ctl(struct rte_intr_handle *intr_handle, int epfd,
break;
case RTE_INTR_EVENT_DEL:
epfd_op = EPOLL_CTL_DEL;
- rev = &intr_handle->elist[efd_idx];
+ rev = rte_intr_elist_index_get(intr_handle, efd_idx);
if (__atomic_load_n(&rev->status,
__ATOMIC_RELAXED) == RTE_EPOLL_INVALID) {
RTE_LOG(INFO, EAL, "Event does not exist.\n");
@@ -1477,8 +1527,9 @@ rte_intr_free_epoll_fd(struct rte_intr_handle *intr_handle)
uint32_t i;
struct rte_epoll_event *rev;
- for (i = 0; i < intr_handle->nb_efd; i++) {
- rev = &intr_handle->elist[i];
+ for (i = 0; i < (uint32_t)rte_intr_nb_efd_get(intr_handle);
+ i++) {
+ rev = rte_intr_elist_index_get(intr_handle, i);
if (__atomic_load_n(&rev->status,
__ATOMIC_RELAXED) == RTE_EPOLL_INVALID)
continue;
@@ -1498,7 +1549,7 @@ rte_intr_efd_enable(struct rte_intr_handle *intr_handle, uint32_t nb_efd)
assert(nb_efd != 0);
- if (intr_handle->type == RTE_INTR_HANDLE_VFIO_MSIX) {
+ if (rte_intr_type_get(intr_handle) == RTE_INTR_HANDLE_VFIO_MSIX) {
for (i = 0; i < n; i++) {
fd = eventfd(0, EFD_NONBLOCK | EFD_CLOEXEC);
if (fd < 0) {
@@ -1507,21 +1558,32 @@ rte_intr_efd_enable(struct rte_intr_handle *intr_handle, uint32_t nb_efd)
errno, strerror(errno));
return -errno;
}
- intr_handle->efds[i] = fd;
+
+ if (rte_intr_efds_index_set(intr_handle, i, fd))
+ return -rte_errno;
}
- intr_handle->nb_efd = n;
- intr_handle->max_intr = NB_OTHER_INTR + n;
- } else if (intr_handle->type == RTE_INTR_HANDLE_VDEV) {
+
+ if (rte_intr_nb_efd_set(intr_handle, n))
+ return -rte_errno;
+
+ if (rte_intr_max_intr_set(intr_handle, NB_OTHER_INTR + n))
+ return -rte_errno;
+ } else if (rte_intr_type_get(intr_handle) == RTE_INTR_HANDLE_VDEV) {
/* only check, initialization would be done in vdev driver.*/
- if (intr_handle->efd_counter_size >
+ if ((uint64_t)rte_intr_efd_counter_size_get(intr_handle) >
sizeof(union rte_intr_read_buffer)) {
RTE_LOG(ERR, EAL, "the efd_counter_size is oversized");
return -EINVAL;
}
} else {
- intr_handle->efds[0] = intr_handle->fd;
- intr_handle->nb_efd = RTE_MIN(nb_efd, 1U);
- intr_handle->max_intr = NB_OTHER_INTR;
+ if (rte_intr_efds_index_set(intr_handle, 0,
+ rte_intr_fd_get(intr_handle)))
+ return -rte_errno;
+ if (rte_intr_nb_efd_set(intr_handle,
+ RTE_MIN(nb_efd, 1U)))
+ return -rte_errno;
+ if (rte_intr_max_intr_set(intr_handle, NB_OTHER_INTR))
+ return -rte_errno;
}
return 0;
@@ -1533,18 +1595,20 @@ rte_intr_efd_disable(struct rte_intr_handle *intr_handle)
uint32_t i;
rte_intr_free_epoll_fd(intr_handle);
- if (intr_handle->max_intr > intr_handle->nb_efd) {
- for (i = 0; i < intr_handle->nb_efd; i++)
- close(intr_handle->efds[i]);
+ if (rte_intr_max_intr_get(intr_handle) >
+ rte_intr_nb_efd_get(intr_handle)) {
+ for (i = 0; i <
+ (uint32_t)rte_intr_nb_efd_get(intr_handle); i++)
+ close(rte_intr_efds_index_get(intr_handle, i));
}
- intr_handle->nb_efd = 0;
- intr_handle->max_intr = 0;
+ rte_intr_nb_efd_set(intr_handle, 0);
+ rte_intr_max_intr_set(intr_handle, 0);
}
int
rte_intr_dp_is_en(struct rte_intr_handle *intr_handle)
{
- return !(!intr_handle->nb_efd);
+ return !(!rte_intr_nb_efd_get(intr_handle));
}
int
@@ -1553,16 +1617,17 @@ rte_intr_allow_others(struct rte_intr_handle *intr_handle)
if (!rte_intr_dp_is_en(intr_handle))
return 1;
else
- return !!(intr_handle->max_intr - intr_handle->nb_efd);
+ return !!(rte_intr_max_intr_get(intr_handle) -
+ rte_intr_nb_efd_get(intr_handle));
}
int
rte_intr_cap_multiple(struct rte_intr_handle *intr_handle)
{
- if (intr_handle->type == RTE_INTR_HANDLE_VFIO_MSIX)
+ if (rte_intr_type_get(intr_handle) == RTE_INTR_HANDLE_VFIO_MSIX)
return 1;
- if (intr_handle->type == RTE_INTR_HANDLE_VDEV)
+ if (rte_intr_type_get(intr_handle) == RTE_INTR_HANDLE_VDEV)
return 1;
return 0;
--
2.18.0
^ permalink raw reply [flat|nested] 152+ messages in thread
* [dpdk-dev] [PATCH v3 4/7] test/interrupt: apply get set interrupt handle APIs
2021-10-18 19:37 ` [dpdk-dev] [PATCH v3 " Harman Kalra
` (2 preceding siblings ...)
2021-10-18 19:37 ` [dpdk-dev] [PATCH v3 3/7] eal/interrupts: avoid direct access to interrupt handle Harman Kalra
@ 2021-10-18 19:37 ` Harman Kalra
2021-10-18 19:37 ` [dpdk-dev] [PATCH v3 5/7] drivers: remove direct access to interrupt handle Harman Kalra
` (2 subsequent siblings)
6 siblings, 0 replies; 152+ messages in thread
From: Harman Kalra @ 2021-10-18 19:37 UTC (permalink / raw)
To: dev, Harman Kalra; +Cc: david.marchand, dmitry.kozliuk, mdr, thomas
Updating the interrupt test suite to make use of the interrupt
handle get/set APIs.
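For readers following the conversion, the pattern used throughout the
test suite roughly looks as below. This is a minimal illustrative
sketch, not an excerpt from the patch; it assumes the accessor
prototypes introduced earlier in this series are reachable via
rte_interrupts.h, and test_fd / example_handle_setup are placeholder
names used only here.

#include <rte_interrupts.h>

/* Sketch only: allocate an opaque handle, set its fd and type through
 * the accessors instead of touching struct fields, then free it.
 */
static int
example_handle_setup(int test_fd)
{
        struct rte_intr_handle *handle;

        handle = rte_intr_instance_alloc();
        if (handle == NULL)
                return -1;

        if (rte_intr_fd_set(handle, test_fd) ||
            rte_intr_type_set(handle, RTE_INTR_HANDLE_UIO)) {
                rte_intr_instance_free(handle);
                return -1;
        }

        /* ... exercise rte_intr_enable()/rte_intr_disable() here ... */

        rte_intr_instance_free(handle);
        return 0;
}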
Signed-off-by: Harman Kalra <hkalra@marvell.com>
---
app/test/test_interrupts.c | 162 ++++++++++++++++++++++---------------
1 file changed, 97 insertions(+), 65 deletions(-)
diff --git a/app/test/test_interrupts.c b/app/test/test_interrupts.c
index 233b14a70b..774a573f02 100644
--- a/app/test/test_interrupts.c
+++ b/app/test/test_interrupts.c
@@ -16,7 +16,7 @@
/* predefined interrupt handle types */
enum test_interrupt_handle_type {
- TEST_INTERRUPT_HANDLE_INVALID,
+ TEST_INTERRUPT_HANDLE_INVALID = 0,
TEST_INTERRUPT_HANDLE_VALID,
TEST_INTERRUPT_HANDLE_VALID_UIO,
TEST_INTERRUPT_HANDLE_VALID_ALARM,
@@ -27,7 +27,7 @@ enum test_interrupt_handle_type {
/* flag of if callback is called */
static volatile int flag;
-static struct rte_intr_handle intr_handles[TEST_INTERRUPT_HANDLE_MAX];
+static struct rte_intr_handle *intr_handles[TEST_INTERRUPT_HANDLE_MAX];
static enum test_interrupt_handle_type test_intr_type =
TEST_INTERRUPT_HANDLE_MAX;
@@ -50,7 +50,7 @@ static union intr_pipefds pfds;
static inline int
test_interrupt_handle_sanity_check(struct rte_intr_handle *intr_handle)
{
- if (!intr_handle || intr_handle->fd < 0)
+ if (!intr_handle || rte_intr_fd_get(intr_handle) < 0)
return -1;
return 0;
@@ -62,31 +62,54 @@ test_interrupt_handle_sanity_check(struct rte_intr_handle *intr_handle)
static int
test_interrupt_init(void)
{
+ struct rte_intr_handle *test_intr_handle;
+ int i;
+
if (pipe(pfds.pipefd) < 0)
return -1;
- intr_handles[TEST_INTERRUPT_HANDLE_INVALID].fd = -1;
- intr_handles[TEST_INTERRUPT_HANDLE_INVALID].type =
- RTE_INTR_HANDLE_UNKNOWN;
+ for (i = 0; i < TEST_INTERRUPT_HANDLE_MAX; i++) {
+ intr_handles[i] = rte_intr_instance_alloc();
+ if (!intr_handles[i])
+ return -1;
+ }
+
+ test_intr_handle = intr_handles[TEST_INTERRUPT_HANDLE_INVALID];
+ if (rte_intr_fd_set(test_intr_handle, -1))
+ return -1;
+ if (rte_intr_type_set(test_intr_handle, RTE_INTR_HANDLE_UNKNOWN))
+ return -1;
- intr_handles[TEST_INTERRUPT_HANDLE_VALID].fd = pfds.readfd;
- intr_handles[TEST_INTERRUPT_HANDLE_VALID].type =
- RTE_INTR_HANDLE_UNKNOWN;
+ test_intr_handle = intr_handles[TEST_INTERRUPT_HANDLE_VALID];
+ if (rte_intr_fd_set(test_intr_handle, pfds.readfd))
+ return -1;
+ if (rte_intr_type_set(test_intr_handle, RTE_INTR_HANDLE_UNKNOWN))
+ return -1;
- intr_handles[TEST_INTERRUPT_HANDLE_VALID_UIO].fd = pfds.readfd;
- intr_handles[TEST_INTERRUPT_HANDLE_VALID_UIO].type =
- RTE_INTR_HANDLE_UIO;
+ test_intr_handle = intr_handles[TEST_INTERRUPT_HANDLE_VALID_UIO];
+ if (rte_intr_fd_set(test_intr_handle, pfds.readfd))
+ return -1;
+ if (rte_intr_type_set(test_intr_handle, RTE_INTR_HANDLE_UIO))
+ return -1;
- intr_handles[TEST_INTERRUPT_HANDLE_VALID_ALARM].fd = pfds.readfd;
- intr_handles[TEST_INTERRUPT_HANDLE_VALID_ALARM].type =
- RTE_INTR_HANDLE_ALARM;
+ test_intr_handle = intr_handles[TEST_INTERRUPT_HANDLE_VALID_ALARM];
+ if (rte_intr_fd_set(test_intr_handle, pfds.readfd))
+ return -1;
+ if (rte_intr_type_set(test_intr_handle, RTE_INTR_HANDLE_ALARM))
+ return -1;
- intr_handles[TEST_INTERRUPT_HANDLE_VALID_DEV_EVENT].fd = pfds.readfd;
- intr_handles[TEST_INTERRUPT_HANDLE_VALID_DEV_EVENT].type =
- RTE_INTR_HANDLE_DEV_EVENT;
+ test_intr_handle = intr_handles[TEST_INTERRUPT_HANDLE_VALID_DEV_EVENT];
+ if (rte_intr_fd_set(test_intr_handle, pfds.readfd))
+ return -1;
+ if (rte_intr_type_set(test_intr_handle,
+ RTE_INTR_HANDLE_DEV_EVENT))
+ return -1;
- intr_handles[TEST_INTERRUPT_HANDLE_CASE1].fd = pfds.writefd;
- intr_handles[TEST_INTERRUPT_HANDLE_CASE1].type = RTE_INTR_HANDLE_UIO;
+ test_intr_handle = intr_handles[TEST_INTERRUPT_HANDLE_CASE1];
+ if (rte_intr_fd_set(test_intr_handle, pfds.writefd))
+ return -1;
+ if (rte_intr_type_set(test_intr_handle, RTE_INTR_HANDLE_UIO))
+ return -1;
return 0;
}
@@ -97,6 +120,10 @@ test_interrupt_init(void)
static int
test_interrupt_deinit(void)
{
+ int i;
+
+ for (i = 0; i < TEST_INTERRUPT_HANDLE_MAX; i++)
+ rte_intr_instance_free(intr_handles[i]);
close(pfds.pipefd[0]);
close(pfds.pipefd[1]);
@@ -125,8 +152,10 @@ test_interrupt_handle_compare(struct rte_intr_handle *intr_handle_l,
if (!intr_handle_l || !intr_handle_r)
return -1;
- if (intr_handle_l->fd != intr_handle_r->fd ||
- intr_handle_l->type != intr_handle_r->type)
+ if (rte_intr_fd_get(intr_handle_l) !=
+ rte_intr_fd_get(intr_handle_r) ||
+ rte_intr_type_get(intr_handle_l) !=
+ rte_intr_type_get(intr_handle_r))
return -1;
return 0;
@@ -178,6 +207,8 @@ static void
test_interrupt_callback(void *arg)
{
struct rte_intr_handle *intr_handle = arg;
+ struct rte_intr_handle *test_intr_handle;
+
if (test_intr_type >= TEST_INTERRUPT_HANDLE_MAX) {
printf("invalid interrupt type\n");
flag = -1;
@@ -198,8 +229,8 @@ test_interrupt_callback(void *arg)
return;
}
- if (test_interrupt_handle_compare(intr_handle,
- &(intr_handles[test_intr_type])) == 0)
+ test_intr_handle = intr_handles[test_intr_type];
+ if (test_interrupt_handle_compare(intr_handle, test_intr_handle) == 0)
flag = 1;
}
@@ -223,7 +254,7 @@ test_interrupt_callback_1(void *arg)
static int
test_interrupt_enable(void)
{
- struct rte_intr_handle test_intr_handle;
+ struct rte_intr_handle *test_intr_handle;
/* check with null intr_handle */
if (rte_intr_enable(NULL) == 0) {
@@ -233,7 +264,7 @@ test_interrupt_enable(void)
/* check with invalid intr_handle */
test_intr_handle = intr_handles[TEST_INTERRUPT_HANDLE_INVALID];
- if (rte_intr_enable(&test_intr_handle) == 0) {
+ if (rte_intr_enable(test_intr_handle) == 0) {
printf("unexpectedly enable invalid intr_handle "
"successfully\n");
return -1;
@@ -241,7 +272,7 @@ test_interrupt_enable(void)
/* check with valid intr_handle */
test_intr_handle = intr_handles[TEST_INTERRUPT_HANDLE_VALID];
- if (rte_intr_enable(&test_intr_handle) == 0) {
+ if (rte_intr_enable(test_intr_handle) == 0) {
printf("unexpectedly enable a specific intr_handle "
"successfully\n");
return -1;
@@ -249,7 +280,7 @@ test_interrupt_enable(void)
/* check with specific valid intr_handle */
test_intr_handle = intr_handles[TEST_INTERRUPT_HANDLE_VALID_ALARM];
- if (rte_intr_enable(&test_intr_handle) == 0) {
+ if (rte_intr_enable(test_intr_handle) == 0) {
printf("unexpectedly enable a specific intr_handle "
"successfully\n");
return -1;
@@ -257,7 +288,7 @@ test_interrupt_enable(void)
/* check with specific valid intr_handle */
test_intr_handle = intr_handles[TEST_INTERRUPT_HANDLE_VALID_DEV_EVENT];
- if (rte_intr_enable(&test_intr_handle) == 0) {
+ if (rte_intr_enable(test_intr_handle) == 0) {
printf("unexpectedly enable a specific intr_handle "
"successfully\n");
return -1;
@@ -265,13 +296,13 @@ test_interrupt_enable(void)
/* check with valid handler and its type */
test_intr_handle = intr_handles[TEST_INTERRUPT_HANDLE_CASE1];
- if (rte_intr_enable(&test_intr_handle) < 0) {
+ if (rte_intr_enable(test_intr_handle) < 0) {
printf("fail to enable interrupt on a simulated handler\n");
return -1;
}
test_intr_handle = intr_handles[TEST_INTERRUPT_HANDLE_VALID_UIO];
- if (rte_intr_enable(&test_intr_handle) == 0) {
+ if (rte_intr_enable(test_intr_handle) == 0) {
printf("unexpectedly enable a specific intr_handle "
"successfully\n");
return -1;
@@ -286,7 +317,7 @@ test_interrupt_enable(void)
static int
test_interrupt_disable(void)
{
- struct rte_intr_handle test_intr_handle;
+ struct rte_intr_handle *test_intr_handle;
/* check with null intr_handle */
if (rte_intr_disable(NULL) == 0) {
@@ -297,7 +328,7 @@ test_interrupt_disable(void)
/* check with invalid intr_handle */
test_intr_handle = intr_handles[TEST_INTERRUPT_HANDLE_INVALID];
- if (rte_intr_disable(&test_intr_handle) == 0) {
+ if (rte_intr_disable(test_intr_handle) == 0) {
printf("unexpectedly disable invalid intr_handle "
"successfully\n");
return -1;
@@ -305,7 +336,7 @@ test_interrupt_disable(void)
/* check with valid intr_handle */
test_intr_handle = intr_handles[TEST_INTERRUPT_HANDLE_VALID];
- if (rte_intr_disable(&test_intr_handle) == 0) {
+ if (rte_intr_disable(test_intr_handle) == 0) {
printf("unexpectedly disable a specific intr_handle "
"successfully\n");
return -1;
@@ -313,7 +344,7 @@ test_interrupt_disable(void)
/* check with specific valid intr_handle */
test_intr_handle = intr_handles[TEST_INTERRUPT_HANDLE_VALID_ALARM];
- if (rte_intr_disable(&test_intr_handle) == 0) {
+ if (rte_intr_disable(test_intr_handle) == 0) {
printf("unexpectedly disable a specific intr_handle "
"successfully\n");
return -1;
@@ -321,7 +352,7 @@ test_interrupt_disable(void)
/* check with specific valid intr_handle */
test_intr_handle = intr_handles[TEST_INTERRUPT_HANDLE_VALID_DEV_EVENT];
- if (rte_intr_disable(&test_intr_handle) == 0) {
+ if (rte_intr_disable(test_intr_handle) == 0) {
printf("unexpectedly disable a specific intr_handle "
"successfully\n");
return -1;
@@ -329,13 +360,13 @@ test_interrupt_disable(void)
/* check with valid handler and its type */
test_intr_handle = intr_handles[TEST_INTERRUPT_HANDLE_CASE1];
- if (rte_intr_disable(&test_intr_handle) < 0) {
+ if (rte_intr_disable(test_intr_handle) < 0) {
printf("fail to disable interrupt on a simulated handler\n");
return -1;
}
test_intr_handle = intr_handles[TEST_INTERRUPT_HANDLE_VALID_UIO];
- if (rte_intr_disable(&test_intr_handle) == 0) {
+ if (rte_intr_disable(test_intr_handle) == 0) {
printf("unexpectedly disable a specific intr_handle "
"successfully\n");
return -1;
@@ -351,13 +382,13 @@ static int
test_interrupt_full_path_check(enum test_interrupt_handle_type intr_type)
{
int count;
- struct rte_intr_handle test_intr_handle;
+ struct rte_intr_handle *test_intr_handle;
flag = 0;
test_intr_handle = intr_handles[intr_type];
test_intr_type = intr_type;
- if (rte_intr_callback_register(&test_intr_handle,
- test_interrupt_callback, &test_intr_handle) < 0) {
+ if (rte_intr_callback_register(test_intr_handle,
+ test_interrupt_callback, test_intr_handle) < 0) {
printf("fail to register callback\n");
return -1;
}
@@ -371,9 +402,9 @@ test_interrupt_full_path_check(enum test_interrupt_handle_type intr_type)
rte_delay_ms(TEST_INTERRUPT_CHECK_INTERVAL);
while ((count =
- rte_intr_callback_unregister(&test_intr_handle,
+ rte_intr_callback_unregister(test_intr_handle,
test_interrupt_callback,
- &test_intr_handle)) < 0) {
+ test_intr_handle)) < 0) {
if (count != -EAGAIN)
return -1;
}
@@ -396,7 +427,7 @@ static int
test_interrupt(void)
{
int ret = -1;
- struct rte_intr_handle test_intr_handle;
+ struct rte_intr_handle *test_intr_handle;
if (test_interrupt_init() < 0) {
printf("fail to initialize for testing interrupt\n");
@@ -445,8 +476,8 @@ test_interrupt(void)
/* check if it will fail to register cb with invalid intr_handle */
test_intr_handle = intr_handles[TEST_INTERRUPT_HANDLE_INVALID];
- if (rte_intr_callback_register(&test_intr_handle,
- test_interrupt_callback, &test_intr_handle) == 0) {
+ if (rte_intr_callback_register(test_intr_handle,
+ test_interrupt_callback, test_intr_handle) == 0) {
printf("unexpectedly register successfully with invalid "
"intr_handle\n");
goto out;
@@ -454,7 +485,8 @@ test_interrupt(void)
/* check if it will fail to register without callback */
test_intr_handle = intr_handles[TEST_INTERRUPT_HANDLE_VALID];
- if (rte_intr_callback_register(&test_intr_handle, NULL, &test_intr_handle) == 0) {
+ if (rte_intr_callback_register(test_intr_handle, NULL,
+ test_intr_handle) == 0) {
printf("unexpectedly register successfully with "
"null callback\n");
goto out;
@@ -470,8 +502,8 @@ test_interrupt(void)
/* check if it will fail to unregister cb with invalid intr_handle */
test_intr_handle = intr_handles[TEST_INTERRUPT_HANDLE_INVALID];
- if (rte_intr_callback_unregister(&test_intr_handle,
- test_interrupt_callback, &test_intr_handle) > 0) {
+ if (rte_intr_callback_unregister(test_intr_handle,
+ test_interrupt_callback, test_intr_handle) > 0) {
printf("unexpectedly unregister successfully with "
"invalid intr_handle\n");
goto out;
@@ -479,29 +511,29 @@ test_interrupt(void)
/* check if it is ok to register the same intr_handle twice */
test_intr_handle = intr_handles[TEST_INTERRUPT_HANDLE_VALID];
- if (rte_intr_callback_register(&test_intr_handle,
- test_interrupt_callback, &test_intr_handle) < 0) {
+ if (rte_intr_callback_register(test_intr_handle,
+ test_interrupt_callback, test_intr_handle) < 0) {
printf("it fails to register test_interrupt_callback\n");
goto out;
}
- if (rte_intr_callback_register(&test_intr_handle,
- test_interrupt_callback_1, &test_intr_handle) < 0) {
+ if (rte_intr_callback_register(test_intr_handle,
+ test_interrupt_callback_1, test_intr_handle) < 0) {
printf("it fails to register test_interrupt_callback_1\n");
goto out;
}
/* check if it will fail to unregister with invalid parameter */
- if (rte_intr_callback_unregister(&test_intr_handle,
+ if (rte_intr_callback_unregister(test_intr_handle,
test_interrupt_callback, (void *)0xff) != 0) {
printf("unexpectedly unregisters successfully with "
"invalid arg\n");
goto out;
}
- if (rte_intr_callback_unregister(&test_intr_handle,
- test_interrupt_callback, &test_intr_handle) <= 0) {
+ if (rte_intr_callback_unregister(test_intr_handle,
+ test_interrupt_callback, test_intr_handle) <= 0) {
printf("it fails to unregister test_interrupt_callback\n");
goto out;
}
- if (rte_intr_callback_unregister(&test_intr_handle,
+ if (rte_intr_callback_unregister(test_intr_handle,
test_interrupt_callback_1, (void *)-1) <= 0) {
printf("it fails to unregister test_interrupt_callback_1 "
"for all\n");
@@ -529,27 +561,27 @@ test_interrupt(void)
printf("Clearing for interrupt tests\n");
/* clear registered callbacks */
test_intr_handle = intr_handles[TEST_INTERRUPT_HANDLE_VALID];
- rte_intr_callback_unregister(&test_intr_handle,
+ rte_intr_callback_unregister(test_intr_handle,
test_interrupt_callback, (void *)-1);
- rte_intr_callback_unregister(&test_intr_handle,
+ rte_intr_callback_unregister(test_intr_handle,
test_interrupt_callback_1, (void *)-1);
test_intr_handle = intr_handles[TEST_INTERRUPT_HANDLE_VALID_UIO];
- rte_intr_callback_unregister(&test_intr_handle,
+ rte_intr_callback_unregister(test_intr_handle,
test_interrupt_callback, (void *)-1);
- rte_intr_callback_unregister(&test_intr_handle,
+ rte_intr_callback_unregister(test_intr_handle,
test_interrupt_callback_1, (void *)-1);
test_intr_handle = intr_handles[TEST_INTERRUPT_HANDLE_VALID_ALARM];
- rte_intr_callback_unregister(&test_intr_handle,
+ rte_intr_callback_unregister(test_intr_handle,
test_interrupt_callback, (void *)-1);
- rte_intr_callback_unregister(&test_intr_handle,
+ rte_intr_callback_unregister(test_intr_handle,
test_interrupt_callback_1, (void *)-1);
test_intr_handle = intr_handles[TEST_INTERRUPT_HANDLE_VALID_DEV_EVENT];
- rte_intr_callback_unregister(&test_intr_handle,
+ rte_intr_callback_unregister(test_intr_handle,
test_interrupt_callback, (void *)-1);
- rte_intr_callback_unregister(&test_intr_handle,
+ rte_intr_callback_unregister(test_intr_handle,
test_interrupt_callback_1, (void *)-1);
rte_delay_ms(2 * TEST_INTERRUPT_CHECK_INTERVAL);
--
2.18.0
^ permalink raw reply [flat|nested] 152+ messages in thread
* [dpdk-dev] [PATCH v3 5/7] drivers: remove direct access to interrupt handle
2021-10-18 19:37 ` [dpdk-dev] [PATCH v3 " Harman Kalra
` (3 preceding siblings ...)
2021-10-18 19:37 ` [dpdk-dev] [PATCH v3 4/7] test/interrupt: apply get set interrupt handle APIs Harman Kalra
@ 2021-10-18 19:37 ` Harman Kalra
2021-10-18 19:37 ` [dpdk-dev] [PATCH v3 6/7] eal/interrupts: make interrupt handle structure opaque Harman Kalra
2021-10-18 19:37 ` [dpdk-dev] [PATCH v3 7/7] eal/alarm: introduce alarm fini routine Harman Kalra
6 siblings, 0 replies; 152+ messages in thread
From: Harman Kalra @ 2021-10-18 19:37 UTC (permalink / raw)
To: dev, Nicolas Chautru, Parav Pandit, Xueming Li, Hemant Agrawal,
Sachin Saxena, Rosen Xu, Ferruh Yigit, Anatoly Burakov,
Stephen Hemminger, Long Li, Nithin Dabilpuram, Kiran Kumar K,
Sunil Kumar Kori, Satha Rao, Jerin Jacob, Ankur Dwivedi,
Anoob Joseph, Pavan Nikhilesh, Igor Russkikh, Steven Webster,
Matt Peters, Somalapuram Amaranath, Rasesh Mody, Shahed Shaikh,
Ajit Khaparde, Somnath Kotur, Haiyue Wang, Marcin Wojtas,
Michal Krawczyk, Shai Brandes, Evgeny Schemeilin, Igor Chauskin,
John Daley, Hyong Youb Kim, Gaetan Rivet, Qi Zhang, Xiao Wang,
Ziyang Xuan, Xiaoyun Wang, Guoyang Zhou, Min Hu (Connor),
Yisen Zhuang, Lijun Ou, Beilei Xing, Jingjing Wu, Qiming Yang,
Andrew Boyer, Jakub Grajciar, Matan Azrad, Viacheslav Ovsiienko,
Heinrich Kuhn, Jiawen Wu, Devendra Singh Rawat, Andrew Rybchenko,
Keith Wiles, Maciej Czekaj, Jian Wang, Maxime Coquelin,
Chenbo Xia, Yong Wang, Tianfei zhang, Xiaoyun Li, Guy Kaneti,
Bruce Richardson, Thomas Monjalon
Cc: david.marchand, dmitry.kozliuk, mdr, Harman Kalra
Removing direct access to the interrupt handle structure fields and
using the respective get/set APIs instead.
Making changes to all the drivers and libraries which currently
access the interrupt handle fields directly.
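The driver-side conversion follows a common pattern, sketched below.
This is an illustration only, not code taken from any of the drivers
in the diffstat; example_intr_setup is a hypothetical helper, and the
sketch assumes the pci_dev->intr_handle member is now a pointer owned
by the bus layer, as done in this series.

#include <errno.h>
#include <rte_interrupts.h>
#include <rte_bus_pci.h>

/* Sketch only: fields that were read directly from the embedded
 * struct are now read through the get APIs.
 */
static int
example_intr_setup(struct rte_pci_device *pci_dev)
{
        struct rte_intr_handle *intr_handle = pci_dev->intr_handle;

        /* was: pci_dev->intr_handle.fd < 0 */
        if (rte_intr_fd_get(intr_handle) < 0)
                return -1;

        /* was: pci_dev->intr_handle.type != RTE_INTR_HANDLE_VFIO_MSIX */
        if (rte_intr_type_get(intr_handle) != RTE_INTR_HANDLE_VFIO_MSIX)
                return -ENOTSUP;

        return rte_intr_enable(intr_handle);
}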
Signed-off-by: Harman Kalra <hkalra@marvell.com>
---
drivers/baseband/acc100/rte_acc100_pmd.c | 18 +--
.../fpga_5gnr_fec/rte_fpga_5gnr_fec.c | 21 ++--
drivers/baseband/fpga_lte_fec/fpga_lte_fec.c | 21 ++--
drivers/bus/auxiliary/auxiliary_common.c | 2 +
drivers/bus/auxiliary/linux/auxiliary.c | 9 ++
drivers/bus/auxiliary/rte_bus_auxiliary.h | 2 +-
drivers/bus/dpaa/dpaa_bus.c | 26 +++-
drivers/bus/dpaa/rte_dpaa_bus.h | 2 +-
drivers/bus/fslmc/fslmc_bus.c | 15 ++-
drivers/bus/fslmc/fslmc_vfio.c | 32 +++--
drivers/bus/fslmc/portal/dpaa2_hw_dpio.c | 19 ++-
drivers/bus/fslmc/portal/dpaa2_hw_pvt.h | 2 +-
drivers/bus/fslmc/rte_fslmc.h | 2 +-
drivers/bus/ifpga/ifpga_bus.c | 14 ++-
drivers/bus/ifpga/rte_bus_ifpga.h | 2 +-
drivers/bus/pci/bsd/pci.c | 21 ++--
drivers/bus/pci/linux/pci.c | 4 +-
drivers/bus/pci/linux/pci_uio.c | 73 +++++++----
drivers/bus/pci/linux/pci_vfio.c | 108 ++++++++++------
drivers/bus/pci/pci_common.c | 27 +++-
drivers/bus/pci/pci_common_uio.c | 21 ++--
drivers/bus/pci/rte_bus_pci.h | 4 +-
drivers/bus/vmbus/linux/vmbus_bus.c | 5 +
drivers/bus/vmbus/linux/vmbus_uio.c | 37 ++++--
drivers/bus/vmbus/rte_bus_vmbus.h | 2 +-
drivers/bus/vmbus/vmbus_common_uio.c | 24 ++--
drivers/common/cnxk/roc_cpt.c | 8 +-
drivers/common/cnxk/roc_dev.c | 14 +--
drivers/common/cnxk/roc_irq.c | 108 +++++++++-------
drivers/common/cnxk/roc_nix_inl_dev_irq.c | 8 +-
drivers/common/cnxk/roc_nix_irq.c | 36 +++---
drivers/common/cnxk/roc_npa.c | 2 +-
drivers/common/cnxk/roc_platform.h | 49 ++++++--
drivers/common/cnxk/roc_sso.c | 4 +-
drivers/common/cnxk/roc_tim.c | 4 +-
drivers/common/octeontx2/otx2_dev.c | 14 +--
drivers/common/octeontx2/otx2_irq.c | 117 ++++++++++--------
.../octeontx2/otx2_cryptodev_hw_access.c | 4 +-
drivers/event/octeontx2/otx2_evdev_irq.c | 12 +-
drivers/mempool/octeontx2/otx2_mempool.c | 2 +-
drivers/net/atlantic/atl_ethdev.c | 20 ++-
drivers/net/avp/avp_ethdev.c | 8 +-
drivers/net/axgbe/axgbe_ethdev.c | 12 +-
drivers/net/axgbe/axgbe_mdio.c | 6 +-
drivers/net/bnx2x/bnx2x_ethdev.c | 10 +-
drivers/net/bnxt/bnxt_ethdev.c | 33 +++--
drivers/net/bnxt/bnxt_irq.c | 4 +-
drivers/net/dpaa/dpaa_ethdev.c | 47 ++++---
drivers/net/dpaa2/dpaa2_ethdev.c | 10 +-
drivers/net/e1000/em_ethdev.c | 23 ++--
drivers/net/e1000/igb_ethdev.c | 79 ++++++------
drivers/net/ena/ena_ethdev.c | 35 +++---
drivers/net/enic/enic_main.c | 26 ++--
drivers/net/failsafe/failsafe.c | 22 +++-
drivers/net/failsafe/failsafe_intr.c | 43 ++++---
drivers/net/failsafe/failsafe_ops.c | 21 +++-
drivers/net/failsafe/failsafe_private.h | 2 +-
drivers/net/fm10k/fm10k_ethdev.c | 32 ++---
drivers/net/hinic/hinic_pmd_ethdev.c | 10 +-
drivers/net/hns3/hns3_ethdev.c | 57 ++++-----
drivers/net/hns3/hns3_ethdev_vf.c | 64 +++++-----
drivers/net/hns3/hns3_rxtx.c | 2 +-
drivers/net/i40e/i40e_ethdev.c | 53 ++++----
drivers/net/iavf/iavf_ethdev.c | 42 +++----
drivers/net/iavf/iavf_vchnl.c | 4 +-
drivers/net/ice/ice_dcf.c | 10 +-
drivers/net/ice/ice_dcf_ethdev.c | 21 ++--
drivers/net/ice/ice_ethdev.c | 49 ++++----
drivers/net/igc/igc_ethdev.c | 45 ++++---
drivers/net/ionic/ionic_ethdev.c | 17 +--
drivers/net/ixgbe/ixgbe_ethdev.c | 66 +++++-----
drivers/net/memif/memif_socket.c | 108 +++++++++++-----
drivers/net/memif/memif_socket.h | 4 +-
drivers/net/memif/rte_eth_memif.c | 59 +++++++--
drivers/net/memif/rte_eth_memif.h | 2 +-
drivers/net/mlx4/mlx4.c | 18 ++-
drivers/net/mlx4/mlx4.h | 2 +-
drivers/net/mlx4/mlx4_intr.c | 47 ++++---
drivers/net/mlx5/linux/mlx5_os.c | 51 +++++---
drivers/net/mlx5/linux/mlx5_socket.c | 24 ++--
drivers/net/mlx5/mlx5.h | 6 +-
drivers/net/mlx5/mlx5_rxq.c | 42 ++++---
drivers/net/mlx5/mlx5_trigger.c | 4 +-
drivers/net/mlx5/mlx5_txpp.c | 25 ++--
drivers/net/netvsc/hn_ethdev.c | 4 +-
drivers/net/nfp/nfp_common.c | 34 ++---
drivers/net/nfp/nfp_ethdev.c | 13 +-
drivers/net/nfp/nfp_ethdev_vf.c | 13 +-
drivers/net/ngbe/ngbe_ethdev.c | 29 ++---
drivers/net/octeontx2/otx2_ethdev_irq.c | 35 +++---
drivers/net/qede/qede_ethdev.c | 16 +--
drivers/net/sfc/sfc_intr.c | 30 ++---
drivers/net/tap/rte_eth_tap.c | 35 ++++--
drivers/net/tap/rte_eth_tap.h | 2 +-
drivers/net/tap/tap_intr.c | 32 +++--
drivers/net/thunderx/nicvf_ethdev.c | 11 ++
drivers/net/thunderx/nicvf_struct.h | 2 +-
drivers/net/txgbe/txgbe_ethdev.c | 34 +++--
drivers/net/txgbe/txgbe_ethdev_vf.c | 33 +++--
drivers/net/vhost/rte_eth_vhost.c | 75 ++++++-----
drivers/net/virtio/virtio_ethdev.c | 21 ++--
.../net/virtio/virtio_user/virtio_user_dev.c | 47 ++++---
drivers/net/vmxnet3/vmxnet3_ethdev.c | 43 ++++---
drivers/raw/ifpga/ifpga_rawdev.c | 61 ++++++---
drivers/raw/ntb/ntb.c | 9 +-
.../regex/octeontx2/otx2_regexdev_hw_access.c | 4 +-
drivers/vdpa/ifc/ifcvf_vdpa.c | 5 +-
drivers/vdpa/mlx5/mlx5_vdpa.c | 9 ++
drivers/vdpa/mlx5/mlx5_vdpa.h | 4 +-
drivers/vdpa/mlx5/mlx5_vdpa_event.c | 22 ++--
drivers/vdpa/mlx5/mlx5_vdpa_virtq.c | 44 ++++---
lib/bbdev/rte_bbdev.c | 4 +-
lib/eal/freebsd/eal_alarm.c | 45 ++++++-
lib/eal/include/rte_eal_trace.h | 24 +---
lib/eal/linux/eal_alarm.c | 29 +++--
lib/eal/linux/eal_dev.c | 63 ++++++----
lib/ethdev/ethdev_pci.h | 2 +-
lib/ethdev/rte_ethdev.c | 14 +--
118 files changed, 1791 insertions(+), 1217 deletions(-)
diff --git a/drivers/baseband/acc100/rte_acc100_pmd.c b/drivers/baseband/acc100/rte_acc100_pmd.c
index 4e2feefc3c..73a8fac2f4 100644
--- a/drivers/baseband/acc100/rte_acc100_pmd.c
+++ b/drivers/baseband/acc100/rte_acc100_pmd.c
@@ -720,8 +720,10 @@ acc100_intr_enable(struct rte_bbdev *dev)
struct acc100_device *d = dev->data->dev_private;
/* Only MSI are currently supported */
- if (dev->intr_handle->type == RTE_INTR_HANDLE_VFIO_MSI ||
- dev->intr_handle->type == RTE_INTR_HANDLE_UIO) {
+ if (rte_intr_type_get(dev->intr_handle) ==
+ RTE_INTR_HANDLE_VFIO_MSI ||
+ rte_intr_type_get(dev->intr_handle) ==
+ RTE_INTR_HANDLE_UIO) {
ret = allocate_info_ring(dev);
if (ret < 0) {
@@ -1097,8 +1099,9 @@ acc100_queue_intr_enable(struct rte_bbdev *dev, uint16_t queue_id)
{
struct acc100_queue *q = dev->data->queues[queue_id].queue_private;
- if (dev->intr_handle->type != RTE_INTR_HANDLE_VFIO_MSI &&
- dev->intr_handle->type != RTE_INTR_HANDLE_UIO)
+ if (rte_intr_type_get(dev->intr_handle) !=
+ RTE_INTR_HANDLE_VFIO_MSI &&
+ rte_intr_type_get(dev->intr_handle) != RTE_INTR_HANDLE_UIO)
return -ENOTSUP;
q->irq_enable = 1;
@@ -1110,8 +1113,9 @@ acc100_queue_intr_disable(struct rte_bbdev *dev, uint16_t queue_id)
{
struct acc100_queue *q = dev->data->queues[queue_id].queue_private;
- if (dev->intr_handle->type != RTE_INTR_HANDLE_VFIO_MSI &&
- dev->intr_handle->type != RTE_INTR_HANDLE_UIO)
+ if (rte_intr_type_get(dev->intr_handle) !=
+ RTE_INTR_HANDLE_VFIO_MSI &&
+ rte_intr_type_get(dev->intr_handle) != RTE_INTR_HANDLE_UIO)
return -ENOTSUP;
q->irq_enable = 0;
@@ -4184,7 +4188,7 @@ static int acc100_pci_probe(struct rte_pci_driver *pci_drv,
/* Fill HW specific part of device structure */
bbdev->device = &pci_dev->device;
- bbdev->intr_handle = &pci_dev->intr_handle;
+ bbdev->intr_handle = pci_dev->intr_handle;
bbdev->data->socket_id = pci_dev->device.numa_node;
/* Invoke ACC100 device initialization function */
diff --git a/drivers/baseband/fpga_5gnr_fec/rte_fpga_5gnr_fec.c b/drivers/baseband/fpga_5gnr_fec/rte_fpga_5gnr_fec.c
index 6485cc824a..8add4b13ef 100644
--- a/drivers/baseband/fpga_5gnr_fec/rte_fpga_5gnr_fec.c
+++ b/drivers/baseband/fpga_5gnr_fec/rte_fpga_5gnr_fec.c
@@ -743,16 +743,15 @@ fpga_intr_enable(struct rte_bbdev *dev)
* invoked when any FPGA queue issues interrupt.
*/
for (i = 0; i < FPGA_NUM_INTR_VEC; ++i)
- dev->intr_handle->efds[i] = dev->intr_handle->fd;
-
- if (!dev->intr_handle->intr_vec) {
- dev->intr_handle->intr_vec = rte_zmalloc("intr_vec",
- dev->data->num_queues * sizeof(int), 0);
- if (!dev->intr_handle->intr_vec) {
- rte_bbdev_log(ERR, "Failed to allocate %u vectors",
- dev->data->num_queues);
- return -ENOMEM;
- }
+ if (rte_intr_efds_index_set(dev->intr_handle, i,
+ rte_intr_fd_get(dev->intr_handle)))
+ return -rte_errno;
+
+ if (rte_intr_vec_list_alloc(dev->intr_handle, "intr_vec",
+ dev->data->num_queues)) {
+ rte_bbdev_log(ERR, "Failed to allocate %u vectors",
+ dev->data->num_queues);
+ return -ENOMEM;
}
ret = rte_intr_enable(dev->intr_handle);
@@ -1879,7 +1878,7 @@ fpga_5gnr_fec_probe(struct rte_pci_driver *pci_drv,
/* Fill HW specific part of device structure */
bbdev->device = &pci_dev->device;
- bbdev->intr_handle = &pci_dev->intr_handle;
+ bbdev->intr_handle = pci_dev->intr_handle;
bbdev->data->socket_id = pci_dev->device.numa_node;
/* Invoke FEC FPGA device initialization function */
diff --git a/drivers/baseband/fpga_lte_fec/fpga_lte_fec.c b/drivers/baseband/fpga_lte_fec/fpga_lte_fec.c
index 350c4248eb..8f69e8fc3e 100644
--- a/drivers/baseband/fpga_lte_fec/fpga_lte_fec.c
+++ b/drivers/baseband/fpga_lte_fec/fpga_lte_fec.c
@@ -1014,16 +1014,15 @@ fpga_intr_enable(struct rte_bbdev *dev)
* invoked when any FPGA queue issues interrupt.
*/
for (i = 0; i < FPGA_NUM_INTR_VEC; ++i)
- dev->intr_handle->efds[i] = dev->intr_handle->fd;
-
- if (!dev->intr_handle->intr_vec) {
- dev->intr_handle->intr_vec = rte_zmalloc("intr_vec",
- dev->data->num_queues * sizeof(int), 0);
- if (!dev->intr_handle->intr_vec) {
- rte_bbdev_log(ERR, "Failed to allocate %u vectors",
- dev->data->num_queues);
- return -ENOMEM;
- }
+ if (rte_intr_efds_index_set(dev->intr_handle, i,
+ rte_intr_fd_get(dev->intr_handle)))
+ return -rte_errno;
+
+ if (rte_intr_vec_list_alloc(dev->intr_handle, "intr_vec",
+ dev->data->num_queues)) {
+ rte_bbdev_log(ERR, "Failed to allocate %u vectors",
+ dev->data->num_queues);
+ return -ENOMEM;
}
ret = rte_intr_enable(dev->intr_handle);
@@ -2369,7 +2368,7 @@ fpga_lte_fec_probe(struct rte_pci_driver *pci_drv,
/* Fill HW specific part of device structure */
bbdev->device = &pci_dev->device;
- bbdev->intr_handle = &pci_dev->intr_handle;
+ bbdev->intr_handle = pci_dev->intr_handle;
bbdev->data->socket_id = pci_dev->device.numa_node;
/* Invoke FEC FPGA device initialization function */
diff --git a/drivers/bus/auxiliary/auxiliary_common.c b/drivers/bus/auxiliary/auxiliary_common.c
index 603b6fdc02..6d44c433b6 100644
--- a/drivers/bus/auxiliary/auxiliary_common.c
+++ b/drivers/bus/auxiliary/auxiliary_common.c
@@ -320,6 +320,8 @@ auxiliary_unplug(struct rte_device *dev)
if (ret == 0) {
rte_auxiliary_remove_device(adev);
rte_devargs_remove(dev->devargs);
+ if (adev->intr_handle)
+ rte_intr_instance_free(adev->intr_handle);
free(adev);
}
return ret;
diff --git a/drivers/bus/auxiliary/linux/auxiliary.c b/drivers/bus/auxiliary/linux/auxiliary.c
index 9bd4ee3295..374246657a 100644
--- a/drivers/bus/auxiliary/linux/auxiliary.c
+++ b/drivers/bus/auxiliary/linux/auxiliary.c
@@ -39,6 +39,13 @@ auxiliary_scan_one(const char *dirname, const char *name)
dev->device.name = dev->name;
dev->device.bus = &auxiliary_bus.bus;
+ /* Allocate interrupt instance */
+ dev->intr_handle = rte_intr_instance_alloc();
+ if (!dev->intr_handle) {
+ free(dev);
+ return -1;
+ }
+
/* Get NUMA node, default to 0 if not present */
snprintf(filename, sizeof(filename), "%s/%s/numa_node",
dirname, name);
@@ -67,6 +74,8 @@ auxiliary_scan_one(const char *dirname, const char *name)
rte_devargs_remove(dev2->device.devargs);
auxiliary_on_scan(dev2);
}
+ if (dev->intr_handle)
+ rte_intr_instance_free(dev->intr_handle);
free(dev);
}
return 0;
diff --git a/drivers/bus/auxiliary/rte_bus_auxiliary.h b/drivers/bus/auxiliary/rte_bus_auxiliary.h
index b1f5610404..93b266daf7 100644
--- a/drivers/bus/auxiliary/rte_bus_auxiliary.h
+++ b/drivers/bus/auxiliary/rte_bus_auxiliary.h
@@ -115,7 +115,7 @@ struct rte_auxiliary_device {
RTE_TAILQ_ENTRY(rte_auxiliary_device) next; /**< Next probed device. */
struct rte_device device; /**< Inherit core device */
char name[RTE_DEV_NAME_MAX_LEN + 1]; /**< ASCII device name */
- struct rte_intr_handle intr_handle; /**< Interrupt handle */
+ struct rte_intr_handle *intr_handle; /**< Interrupt handle */
struct rte_auxiliary_driver *driver; /**< Device driver */
};
diff --git a/drivers/bus/dpaa/dpaa_bus.c b/drivers/bus/dpaa/dpaa_bus.c
index 6cab2ae760..d7c2639034 100644
--- a/drivers/bus/dpaa/dpaa_bus.c
+++ b/drivers/bus/dpaa/dpaa_bus.c
@@ -172,6 +172,14 @@ dpaa_create_device_list(void)
dev->device.bus = &rte_dpaa_bus.bus;
+ /* Allocate interrupt handle instance */
+ dev->intr_handle = rte_intr_instance_alloc();
+ if (!dev->intr_handle) {
+ DPAA_BUS_LOG(ERR, "Failed to allocate intr handle");
+ ret = -ENOMEM;
+ goto cleanup;
+ }
+
cfg = &dpaa_netcfg->port_cfg[i];
fman_intf = cfg->fman_if;
@@ -214,6 +222,14 @@ dpaa_create_device_list(void)
goto cleanup;
}
+ /* Allocate interrupt handle instance */
+ dev->intr_handle = rte_intr_instance_alloc();
+ if (!dev->intr_handle) {
+ DPAA_BUS_LOG(ERR, "Failed to allocate intr handle");
+ ret = -ENOMEM;
+ goto cleanup;
+ }
+
dev->device_type = FSL_DPAA_CRYPTO;
dev->id.dev_id = rte_dpaa_bus.device_count + i;
@@ -247,6 +263,7 @@ dpaa_clean_device_list(void)
RTE_TAILQ_FOREACH_SAFE(dev, &rte_dpaa_bus.device_list, next, tdev) {
TAILQ_REMOVE(&rte_dpaa_bus.device_list, dev, next);
+ rte_intr_instance_free(dev->intr_handle);
free(dev);
dev = NULL;
}
@@ -559,8 +576,11 @@ static int rte_dpaa_setup_intr(struct rte_intr_handle *intr_handle)
return errno;
}
- intr_handle->fd = fd;
- intr_handle->type = RTE_INTR_HANDLE_EXT;
+ if (rte_intr_fd_set(intr_handle, fd))
+ return -rte_errno;
+
+ if (rte_intr_type_set(intr_handle, RTE_INTR_HANDLE_EXT))
+ return -rte_errno;
return 0;
}
@@ -612,7 +632,7 @@ rte_dpaa_bus_probe(void)
TAILQ_FOREACH(dev, &rte_dpaa_bus.device_list, next) {
if (dev->device_type == FSL_DPAA_ETH) {
- ret = rte_dpaa_setup_intr(&dev->intr_handle);
+ ret = rte_dpaa_setup_intr(dev->intr_handle);
if (ret)
DPAA_BUS_ERR("Error setting up interrupt.\n");
}
diff --git a/drivers/bus/dpaa/rte_dpaa_bus.h b/drivers/bus/dpaa/rte_dpaa_bus.h
index ecc66387f6..97d189f9b0 100644
--- a/drivers/bus/dpaa/rte_dpaa_bus.h
+++ b/drivers/bus/dpaa/rte_dpaa_bus.h
@@ -98,7 +98,7 @@ struct rte_dpaa_device {
};
struct rte_dpaa_driver *driver;
struct dpaa_device_id id;
- struct rte_intr_handle intr_handle;
+ struct rte_intr_handle *intr_handle;
enum rte_dpaa_type device_type; /**< Ethernet or crypto type device */
char name[RTE_ETH_NAME_MAX_LEN];
};
diff --git a/drivers/bus/fslmc/fslmc_bus.c b/drivers/bus/fslmc/fslmc_bus.c
index 8c8f8a298d..b469c0cf9e 100644
--- a/drivers/bus/fslmc/fslmc_bus.c
+++ b/drivers/bus/fslmc/fslmc_bus.c
@@ -47,6 +47,8 @@ cleanup_fslmc_device_list(void)
RTE_TAILQ_FOREACH_SAFE(dev, &rte_fslmc_bus.device_list, next, t_dev) {
TAILQ_REMOVE(&rte_fslmc_bus.device_list, dev, next);
+ if (dev->intr_handle)
+ rte_intr_instance_free(dev->intr_handle);
free(dev);
dev = NULL;
}
@@ -160,6 +162,14 @@ scan_one_fslmc_device(char *dev_name)
dev->device.bus = &rte_fslmc_bus.bus;
+ /* Allocate interrupt instance */
+ dev->intr_handle = rte_intr_instance_alloc();
+ if (!dev->intr_handle) {
+ DPAA2_BUS_ERR("Failed to allocate intr handle");
+ ret = -ENOMEM;
+ goto cleanup;
+ }
+
/* Parse the device name and ID */
t_ptr = strtok(dup_dev_name, ".");
if (!t_ptr) {
@@ -220,8 +230,11 @@ scan_one_fslmc_device(char *dev_name)
cleanup:
if (dup_dev_name)
free(dup_dev_name);
- if (dev)
+ if (dev) {
+ if (dev->intr_handle)
+ rte_intr_instance_free(dev->intr_handle);
free(dev);
+ }
return ret;
}
diff --git a/drivers/bus/fslmc/fslmc_vfio.c b/drivers/bus/fslmc/fslmc_vfio.c
index 852fcfc4dd..c2b469a94b 100644
--- a/drivers/bus/fslmc/fslmc_vfio.c
+++ b/drivers/bus/fslmc/fslmc_vfio.c
@@ -599,7 +599,7 @@ int rte_dpaa2_intr_enable(struct rte_intr_handle *intr_handle, int index)
int len, ret;
char irq_set_buf[IRQ_SET_BUF_LEN];
struct vfio_irq_set *irq_set;
- int *fd_ptr;
+ int *fd_ptr, vfio_dev_fd;
len = sizeof(irq_set_buf);
@@ -611,12 +611,14 @@ int rte_dpaa2_intr_enable(struct rte_intr_handle *intr_handle, int index)
irq_set->index = index;
irq_set->start = 0;
fd_ptr = (int *)&irq_set->data;
- *fd_ptr = intr_handle->fd;
+ *fd_ptr = rte_intr_fd_get(intr_handle);
- ret = ioctl(intr_handle->vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
+ vfio_dev_fd = rte_intr_dev_fd_get(intr_handle);
+ ret = ioctl(vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
if (ret) {
DPAA2_BUS_ERR("Error:dpaa2 SET IRQs fd=%d, err = %d(%s)",
- intr_handle->fd, errno, strerror(errno));
+ rte_intr_fd_get(intr_handle), errno,
+ strerror(errno));
return ret;
}
@@ -627,7 +629,7 @@ int rte_dpaa2_intr_disable(struct rte_intr_handle *intr_handle, int index)
{
struct vfio_irq_set *irq_set;
char irq_set_buf[IRQ_SET_BUF_LEN];
- int len, ret;
+ int len, ret, vfio_dev_fd;
len = sizeof(struct vfio_irq_set);
@@ -638,11 +640,12 @@ int rte_dpaa2_intr_disable(struct rte_intr_handle *intr_handle, int index)
irq_set->start = 0;
irq_set->count = 0;
- ret = ioctl(intr_handle->vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
+ vfio_dev_fd = rte_intr_dev_fd_get(intr_handle);
+ ret = ioctl(vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
if (ret)
DPAA2_BUS_ERR(
"Error disabling dpaa2 interrupts for fd %d",
- intr_handle->fd);
+ rte_intr_fd_get(intr_handle));
return ret;
}
@@ -684,9 +687,16 @@ rte_dpaa2_vfio_setup_intr(struct rte_intr_handle *intr_handle,
return -1;
}
- intr_handle->fd = fd;
- intr_handle->type = RTE_INTR_HANDLE_VFIO_MSI;
- intr_handle->vfio_dev_fd = vfio_dev_fd;
+ if (rte_intr_fd_set(intr_handle, fd))
+ return -rte_errno;
+
+ if (rte_intr_type_set(intr_handle,
+ RTE_INTR_HANDLE_VFIO_MSI))
+ return -rte_errno;
+
+ if (rte_intr_dev_fd_set(intr_handle, vfio_dev_fd))
+ return -rte_errno;
+
return 0;
}
@@ -711,7 +721,7 @@ fslmc_process_iodevices(struct rte_dpaa2_device *dev)
switch (dev->dev_type) {
case DPAA2_ETH:
- rte_dpaa2_vfio_setup_intr(&dev->intr_handle, dev_fd,
+ rte_dpaa2_vfio_setup_intr(dev->intr_handle, dev_fd,
device_info.num_irqs);
break;
case DPAA2_CON:
diff --git a/drivers/bus/fslmc/portal/dpaa2_hw_dpio.c b/drivers/bus/fslmc/portal/dpaa2_hw_dpio.c
index 1a1e437ed1..4472175ce3 100644
--- a/drivers/bus/fslmc/portal/dpaa2_hw_dpio.c
+++ b/drivers/bus/fslmc/portal/dpaa2_hw_dpio.c
@@ -176,7 +176,7 @@ static int dpaa2_dpio_intr_init(struct dpaa2_dpio_dev *dpio_dev)
int threshold = 0x3, timeout = 0xFF;
dpio_epoll_fd = epoll_create(1);
- ret = rte_dpaa2_intr_enable(&dpio_dev->intr_handle, 0);
+ ret = rte_dpaa2_intr_enable(dpio_dev->intr_handle, 0);
if (ret) {
DPAA2_BUS_ERR("Interrupt registeration failed");
return -1;
@@ -195,7 +195,7 @@ static int dpaa2_dpio_intr_init(struct dpaa2_dpio_dev *dpio_dev)
qbman_swp_dqrr_thrshld_write(dpio_dev->sw_portal, threshold);
qbman_swp_intr_timeout_write(dpio_dev->sw_portal, timeout);
- eventfd = dpio_dev->intr_handle.fd;
+ eventfd = rte_intr_fd_get(dpio_dev->intr_handle);
epoll_ev.events = EPOLLIN | EPOLLPRI | EPOLLET;
epoll_ev.data.fd = eventfd;
@@ -213,7 +213,7 @@ static void dpaa2_dpio_intr_deinit(struct dpaa2_dpio_dev *dpio_dev)
{
int ret;
- ret = rte_dpaa2_intr_disable(&dpio_dev->intr_handle, 0);
+ ret = rte_dpaa2_intr_disable(dpio_dev->intr_handle, 0);
if (ret)
DPAA2_BUS_ERR("DPIO interrupt disable failed");
@@ -388,6 +388,13 @@ dpaa2_create_dpio_device(int vdev_fd,
/* Using single portal for all devices */
dpio_dev->mc_portal = dpaa2_get_mcp_ptr(MC_PORTAL_INDEX);
+ /* Allocate interrupt instance */
+ dpio_dev->intr_handle = rte_intr_instance_alloc();
+ if (!dpio_dev->intr_handle) {
+ DPAA2_BUS_ERR("Failed to allocate intr handle");
+ goto err;
+ }
+
dpio_dev->dpio = rte_zmalloc(NULL, sizeof(struct fsl_mc_io),
RTE_CACHE_LINE_SIZE);
if (!dpio_dev->dpio) {
@@ -490,7 +497,7 @@ dpaa2_create_dpio_device(int vdev_fd,
io_space_count++;
dpio_dev->index = io_space_count;
- if (rte_dpaa2_vfio_setup_intr(&dpio_dev->intr_handle, vdev_fd, 1)) {
+ if (rte_dpaa2_vfio_setup_intr(dpio_dev->intr_handle, vdev_fd, 1)) {
DPAA2_BUS_ERR("Fail to setup interrupt for %d",
dpio_dev->hw_id);
goto err;
@@ -538,6 +545,8 @@ dpaa2_create_dpio_device(int vdev_fd,
rte_free(dpio_dev->dpio);
}
+ if (dpio_dev->intr_handle)
+ rte_intr_instance_free(dpio_dev->intr_handle);
rte_free(dpio_dev);
/* For each element in the list, cleanup */
@@ -549,6 +558,8 @@ dpaa2_create_dpio_device(int vdev_fd,
dpio_dev->token);
rte_free(dpio_dev->dpio);
}
+ if (dpio_dev->intr_handle)
+ rte_intr_instance_free(dpio_dev->intr_handle);
rte_free(dpio_dev);
}
diff --git a/drivers/bus/fslmc/portal/dpaa2_hw_pvt.h b/drivers/bus/fslmc/portal/dpaa2_hw_pvt.h
index 037c841ef5..b1bba1ac36 100644
--- a/drivers/bus/fslmc/portal/dpaa2_hw_pvt.h
+++ b/drivers/bus/fslmc/portal/dpaa2_hw_pvt.h
@@ -116,7 +116,7 @@ struct dpaa2_dpio_dev {
uintptr_t qbman_portal_ci_paddr;
/**< Physical address of Cache Inhibit Area */
uintptr_t ci_size; /**< Size of the CI region */
- struct rte_intr_handle intr_handle; /* Interrupt related info */
+ struct rte_intr_handle *intr_handle; /* Interrupt related info */
int32_t epoll_fd; /**< File descriptor created for interrupt polling */
int32_t hw_id; /**< An unique ID of this DPIO device instance */
struct dpaa2_portal_dqrr dpaa2_held_bufs;
diff --git a/drivers/bus/fslmc/rte_fslmc.h b/drivers/bus/fslmc/rte_fslmc.h
index a71cac7a9f..729f360646 100644
--- a/drivers/bus/fslmc/rte_fslmc.h
+++ b/drivers/bus/fslmc/rte_fslmc.h
@@ -122,7 +122,7 @@ struct rte_dpaa2_device {
};
enum rte_dpaa2_dev_type dev_type; /**< Device Type */
uint16_t object_id; /**< DPAA2 Object ID */
- struct rte_intr_handle intr_handle; /**< Interrupt handle */
+ struct rte_intr_handle *intr_handle; /**< Interrupt handle */
struct rte_dpaa2_driver *driver; /**< Associated driver */
char name[FSLMC_OBJECT_MAX_LEN]; /**< DPAA2 Object name*/
};
diff --git a/drivers/bus/ifpga/ifpga_bus.c b/drivers/bus/ifpga/ifpga_bus.c
index 62887da2d8..afddffde03 100644
--- a/drivers/bus/ifpga/ifpga_bus.c
+++ b/drivers/bus/ifpga/ifpga_bus.c
@@ -161,6 +161,13 @@ ifpga_scan_one(struct rte_rawdev *rawdev,
afu_dev->id.uuid.uuid_high = 0;
afu_dev->id.port = afu_pr_conf.afu_id.port;
+ /* Allocate interrupt instance */
+ afu_dev->intr_handle = rte_intr_instance_alloc();
+ if (!afu_dev->intr_handle) {
+ IFPGA_BUS_ERR("Failed to allocate intr handle");
+ goto end;
+ }
+
if (rawdev->dev_ops && rawdev->dev_ops->dev_info_get)
rawdev->dev_ops->dev_info_get(rawdev, afu_dev, sizeof(*afu_dev));
@@ -189,8 +196,11 @@ ifpga_scan_one(struct rte_rawdev *rawdev,
rte_kvargs_free(kvlist);
if (path)
free(path);
- if (afu_dev)
+ if (afu_dev) {
+ if (afu_dev->intr_handle)
+ rte_intr_instance_free(afu_dev->intr_handle);
free(afu_dev);
+ }
return NULL;
}
@@ -396,6 +406,8 @@ ifpga_unplug(struct rte_device *dev)
TAILQ_REMOVE(&ifpga_afu_dev_list, afu_dev, next);
rte_devargs_remove(dev->devargs);
+ if (afu_dev->intr_handle)
+ rte_intr_instance_free(afu_dev->intr_handle);
free(afu_dev);
return 0;
diff --git a/drivers/bus/ifpga/rte_bus_ifpga.h b/drivers/bus/ifpga/rte_bus_ifpga.h
index a85e90d384..007ad19875 100644
--- a/drivers/bus/ifpga/rte_bus_ifpga.h
+++ b/drivers/bus/ifpga/rte_bus_ifpga.h
@@ -79,7 +79,7 @@ struct rte_afu_device {
struct rte_mem_resource mem_resource[PCI_MAX_RESOURCE];
/**< AFU Memory Resource */
struct rte_afu_shared shared;
- struct rte_intr_handle intr_handle; /**< Interrupt handle */
+ struct rte_intr_handle *intr_handle; /**< Interrupt handle */
struct rte_afu_driver *driver; /**< Associated driver */
char path[IFPGA_BUS_BITSTREAM_PATH_MAX_LEN];
} __rte_packed;
diff --git a/drivers/bus/pci/bsd/pci.c b/drivers/bus/pci/bsd/pci.c
index d189bff311..1a46553be0 100644
--- a/drivers/bus/pci/bsd/pci.c
+++ b/drivers/bus/pci/bsd/pci.c
@@ -95,10 +95,11 @@ pci_uio_free_resource(struct rte_pci_device *dev,
{
rte_free(uio_res);
- if (dev->intr_handle.fd) {
- close(dev->intr_handle.fd);
- dev->intr_handle.fd = -1;
- dev->intr_handle.type = RTE_INTR_HANDLE_UNKNOWN;
+ if (rte_intr_fd_get(dev->intr_handle)) {
+ close(rte_intr_fd_get(dev->intr_handle));
+ rte_intr_fd_set(dev->intr_handle, -1);
+ rte_intr_type_set(dev->intr_handle,
+ RTE_INTR_HANDLE_UNKNOWN);
}
}
@@ -121,13 +122,19 @@ pci_uio_alloc_resource(struct rte_pci_device *dev,
}
/* save fd if in primary process */
- dev->intr_handle.fd = open(devname, O_RDWR);
- if (dev->intr_handle.fd < 0) {
+ if (rte_intr_fd_set(dev->intr_handle, open(devname, O_RDWR))) {
+ RTE_LOG(WARNING, EAL, "Failed to save fd");
+ goto error;
+ }
+
+ if (rte_intr_fd_get(dev->intr_handle) < 0) {
RTE_LOG(ERR, EAL, "Cannot open %s: %s\n",
devname, strerror(errno));
goto error;
}
- dev->intr_handle.type = RTE_INTR_HANDLE_UIO;
+
+ if (rte_intr_type_set(dev->intr_handle, RTE_INTR_HANDLE_UIO))
+ goto error;
/* allocate the mapping details for secondary processes*/
*uio_res = rte_zmalloc("UIO_RES", sizeof(**uio_res), 0);
diff --git a/drivers/bus/pci/linux/pci.c b/drivers/bus/pci/linux/pci.c
index 4d261b55ee..e521459870 100644
--- a/drivers/bus/pci/linux/pci.c
+++ b/drivers/bus/pci/linux/pci.c
@@ -645,7 +645,7 @@ int rte_pci_read_config(const struct rte_pci_device *device,
void *buf, size_t len, off_t offset)
{
char devname[RTE_DEV_NAME_MAX_LEN] = "";
- const struct rte_intr_handle *intr_handle = &device->intr_handle;
+ const struct rte_intr_handle *intr_handle = device->intr_handle;
switch (device->kdrv) {
case RTE_PCI_KDRV_IGB_UIO:
@@ -669,7 +669,7 @@ int rte_pci_write_config(const struct rte_pci_device *device,
const void *buf, size_t len, off_t offset)
{
char devname[RTE_DEV_NAME_MAX_LEN] = "";
- const struct rte_intr_handle *intr_handle = &device->intr_handle;
+ const struct rte_intr_handle *intr_handle = device->intr_handle;
switch (device->kdrv) {
case RTE_PCI_KDRV_IGB_UIO:
diff --git a/drivers/bus/pci/linux/pci_uio.c b/drivers/bus/pci/linux/pci_uio.c
index 39ebeac2a0..5aaf604aa4 100644
--- a/drivers/bus/pci/linux/pci_uio.c
+++ b/drivers/bus/pci/linux/pci_uio.c
@@ -35,14 +35,18 @@ int
pci_uio_read_config(const struct rte_intr_handle *intr_handle,
void *buf, size_t len, off_t offset)
{
- return pread(intr_handle->uio_cfg_fd, buf, len, offset);
+ int uio_cfg_fd = rte_intr_dev_fd_get(intr_handle);
+
+ return pread(uio_cfg_fd, buf, len, offset);
}
int
pci_uio_write_config(const struct rte_intr_handle *intr_handle,
const void *buf, size_t len, off_t offset)
{
- return pwrite(intr_handle->uio_cfg_fd, buf, len, offset);
+ int uio_cfg_fd = rte_intr_dev_fd_get(intr_handle);
+
+ return pwrite(uio_cfg_fd, buf, len, offset);
}
static int
@@ -198,16 +202,20 @@ void
pci_uio_free_resource(struct rte_pci_device *dev,
struct mapped_pci_resource *uio_res)
{
+ int uio_cfg_fd = rte_intr_dev_fd_get(dev->intr_handle);
+
rte_free(uio_res);
- if (dev->intr_handle.uio_cfg_fd >= 0) {
- close(dev->intr_handle.uio_cfg_fd);
- dev->intr_handle.uio_cfg_fd = -1;
+ if (uio_cfg_fd >= 0) {
+ close(uio_cfg_fd);
+ rte_intr_dev_fd_set(dev->intr_handle, -1);
}
- if (dev->intr_handle.fd >= 0) {
- close(dev->intr_handle.fd);
- dev->intr_handle.fd = -1;
- dev->intr_handle.type = RTE_INTR_HANDLE_UNKNOWN;
+
+ if (rte_intr_fd_get(dev->intr_handle) >= 0) {
+ close(rte_intr_fd_get(dev->intr_handle));
+ rte_intr_fd_set(dev->intr_handle, -1);
+ rte_intr_type_set(dev->intr_handle,
+ RTE_INTR_HANDLE_UNKNOWN);
}
}
@@ -218,7 +226,7 @@ pci_uio_alloc_resource(struct rte_pci_device *dev,
char dirname[PATH_MAX];
char cfgname[PATH_MAX];
char devname[PATH_MAX]; /* contains the /dev/uioX */
- int uio_num;
+ int uio_num, fd, uio_cfg_fd;
struct rte_pci_addr *loc;
loc = &dev->addr;
@@ -233,29 +241,40 @@ pci_uio_alloc_resource(struct rte_pci_device *dev,
snprintf(devname, sizeof(devname), "/dev/uio%u", uio_num);
/* save fd if in primary process */
- dev->intr_handle.fd = open(devname, O_RDWR);
- if (dev->intr_handle.fd < 0) {
+ fd = open(devname, O_RDWR);
+ if (fd < 0) {
RTE_LOG(ERR, EAL, "Cannot open %s: %s\n",
devname, strerror(errno));
goto error;
}
+ if (rte_intr_fd_set(dev->intr_handle, fd))
+ goto error;
+
snprintf(cfgname, sizeof(cfgname),
"/sys/class/uio/uio%u/device/config", uio_num);
- dev->intr_handle.uio_cfg_fd = open(cfgname, O_RDWR);
- if (dev->intr_handle.uio_cfg_fd < 0) {
+
+ uio_cfg_fd = open(cfgname, O_RDWR);
+ if (uio_cfg_fd < 0) {
RTE_LOG(ERR, EAL, "Cannot open %s: %s\n",
cfgname, strerror(errno));
goto error;
}
- if (dev->kdrv == RTE_PCI_KDRV_IGB_UIO)
- dev->intr_handle.type = RTE_INTR_HANDLE_UIO;
- else {
- dev->intr_handle.type = RTE_INTR_HANDLE_UIO_INTX;
+ if (rte_intr_dev_fd_set(dev->intr_handle, uio_cfg_fd))
+ goto error;
+
+ if (dev->kdrv == RTE_PCI_KDRV_IGB_UIO) {
+ if (rte_intr_type_set(dev->intr_handle,
+ RTE_INTR_HANDLE_UIO))
+ goto error;
+ } else {
+ if (rte_intr_type_set(dev->intr_handle,
+ RTE_INTR_HANDLE_UIO_INTX))
+ goto error;
/* set bus master that is not done by uio_pci_generic */
- if (pci_uio_set_bus_master(dev->intr_handle.uio_cfg_fd)) {
+ if (pci_uio_set_bus_master(uio_cfg_fd)) {
RTE_LOG(ERR, EAL, "Cannot set up bus mastering!\n");
goto error;
}
@@ -381,7 +400,7 @@ pci_uio_ioport_map(struct rte_pci_device *dev, int bar,
char buf[BUFSIZ];
uint64_t phys_addr, end_addr, flags;
unsigned long base;
- int i;
+ int i, fd;
/* open and read addresses of the corresponding resource in sysfs */
snprintf(filename, sizeof(filename), "%s/" PCI_PRI_FMT "/resource",
@@ -427,7 +446,8 @@ pci_uio_ioport_map(struct rte_pci_device *dev, int bar,
}
/* FIXME only for primary process ? */
- if (dev->intr_handle.type == RTE_INTR_HANDLE_UNKNOWN) {
+ if (rte_intr_type_get(dev->intr_handle) ==
+ RTE_INTR_HANDLE_UNKNOWN) {
int uio_num = pci_get_uio_dev(dev, dirname, sizeof(dirname), 0);
if (uio_num < 0) {
RTE_LOG(ERR, EAL, "cannot open %s: %s\n",
@@ -436,13 +456,18 @@ pci_uio_ioport_map(struct rte_pci_device *dev, int bar,
}
snprintf(filename, sizeof(filename), "/dev/uio%u", uio_num);
- dev->intr_handle.fd = open(filename, O_RDWR);
- if (dev->intr_handle.fd < 0) {
+ fd = open(filename, O_RDWR);
+ if (fd < 0) {
RTE_LOG(ERR, EAL, "Cannot open %s: %s\n",
filename, strerror(errno));
goto error;
}
- dev->intr_handle.type = RTE_INTR_HANDLE_UIO;
+ if (rte_intr_fd_set(dev->intr_handle, fd))
+ goto error;
+
+ if (rte_intr_type_set(dev->intr_handle,
+ RTE_INTR_HANDLE_UIO))
+ goto error;
}
RTE_LOG(DEBUG, EAL, "PCI Port IO found start=0x%lx\n", base);
diff --git a/drivers/bus/pci/linux/pci_vfio.c b/drivers/bus/pci/linux/pci_vfio.c
index a024269140..c8da3e2fe8 100644
--- a/drivers/bus/pci/linux/pci_vfio.c
+++ b/drivers/bus/pci/linux/pci_vfio.c
@@ -47,7 +47,9 @@ int
pci_vfio_read_config(const struct rte_intr_handle *intr_handle,
void *buf, size_t len, off_t offs)
{
- return pread64(intr_handle->vfio_dev_fd, buf, len,
+ int vfio_dev_fd = rte_intr_dev_fd_get(intr_handle);
+
+ return pread64(vfio_dev_fd, buf, len,
VFIO_GET_REGION_ADDR(VFIO_PCI_CONFIG_REGION_INDEX) + offs);
}
@@ -55,7 +57,9 @@ int
pci_vfio_write_config(const struct rte_intr_handle *intr_handle,
const void *buf, size_t len, off_t offs)
{
- return pwrite64(intr_handle->vfio_dev_fd, buf, len,
+ int vfio_dev_fd = rte_intr_dev_fd_get(intr_handle);
+
+ return pwrite64(vfio_dev_fd, buf, len,
VFIO_GET_REGION_ADDR(VFIO_PCI_CONFIG_REGION_INDEX) + offs);
}
@@ -281,21 +285,27 @@ pci_vfio_setup_interrupts(struct rte_pci_device *dev, int vfio_dev_fd)
return -1;
}
- dev->intr_handle.fd = fd;
- dev->intr_handle.vfio_dev_fd = vfio_dev_fd;
+ if (rte_intr_fd_set(dev->intr_handle, fd))
+ return -1;
+
+ if (rte_intr_dev_fd_set(dev->intr_handle, vfio_dev_fd))
+ return -1;
switch (i) {
case VFIO_PCI_MSIX_IRQ_INDEX:
intr_mode = RTE_INTR_MODE_MSIX;
- dev->intr_handle.type = RTE_INTR_HANDLE_VFIO_MSIX;
+ rte_intr_type_set(dev->intr_handle,
+ RTE_INTR_HANDLE_VFIO_MSIX);
break;
case VFIO_PCI_MSI_IRQ_INDEX:
intr_mode = RTE_INTR_MODE_MSI;
- dev->intr_handle.type = RTE_INTR_HANDLE_VFIO_MSI;
+ rte_intr_type_set(dev->intr_handle,
+ RTE_INTR_HANDLE_VFIO_MSI);
break;
case VFIO_PCI_INTX_IRQ_INDEX:
intr_mode = RTE_INTR_MODE_LEGACY;
- dev->intr_handle.type = RTE_INTR_HANDLE_VFIO_LEGACY;
+ rte_intr_type_set(dev->intr_handle,
+ RTE_INTR_HANDLE_VFIO_LEGACY);
break;
default:
RTE_LOG(ERR, EAL, "Unknown interrupt type!\n");
@@ -362,11 +372,18 @@ pci_vfio_enable_notifier(struct rte_pci_device *dev, int vfio_dev_fd)
return -1;
}
- dev->vfio_req_intr_handle.fd = fd;
- dev->vfio_req_intr_handle.type = RTE_INTR_HANDLE_VFIO_REQ;
- dev->vfio_req_intr_handle.vfio_dev_fd = vfio_dev_fd;
+ if (rte_intr_fd_set(dev->vfio_req_intr_handle, fd))
+ return -1;
+
+ if (rte_intr_type_set(dev->vfio_req_intr_handle,
+ RTE_INTR_HANDLE_VFIO_REQ))
+ return -1;
+
+ if (rte_intr_dev_fd_set(dev->vfio_req_intr_handle, vfio_dev_fd))
+ return -1;
+
- ret = rte_intr_callback_register(&dev->vfio_req_intr_handle,
+ ret = rte_intr_callback_register(dev->vfio_req_intr_handle,
pci_vfio_req_handler,
(void *)&dev->device);
if (ret) {
@@ -374,10 +391,10 @@ pci_vfio_enable_notifier(struct rte_pci_device *dev, int vfio_dev_fd)
goto error;
}
- ret = rte_intr_enable(&dev->vfio_req_intr_handle);
+ ret = rte_intr_enable(dev->vfio_req_intr_handle);
if (ret) {
RTE_LOG(ERR, EAL, "Fail to enable req notifier.\n");
- ret = rte_intr_callback_unregister(&dev->vfio_req_intr_handle,
+ ret = rte_intr_callback_unregister(dev->vfio_req_intr_handle,
pci_vfio_req_handler,
(void *)&dev->device);
if (ret < 0)
@@ -390,9 +407,10 @@ pci_vfio_enable_notifier(struct rte_pci_device *dev, int vfio_dev_fd)
error:
close(fd);
- dev->vfio_req_intr_handle.fd = -1;
- dev->vfio_req_intr_handle.type = RTE_INTR_HANDLE_UNKNOWN;
- dev->vfio_req_intr_handle.vfio_dev_fd = -1;
+ rte_intr_fd_set(dev->vfio_req_intr_handle, -1);
+ rte_intr_type_set(dev->vfio_req_intr_handle,
+ RTE_INTR_HANDLE_UNKNOWN);
+ rte_intr_dev_fd_set(dev->vfio_req_intr_handle, -1);
return -1;
}
@@ -403,13 +421,13 @@ pci_vfio_disable_notifier(struct rte_pci_device *dev)
{
int ret;
- ret = rte_intr_disable(&dev->vfio_req_intr_handle);
+ ret = rte_intr_disable(dev->vfio_req_intr_handle);
if (ret) {
RTE_LOG(ERR, EAL, "fail to disable req notifier.\n");
return -1;
}
- ret = rte_intr_callback_unregister_sync(&dev->vfio_req_intr_handle,
+ ret = rte_intr_callback_unregister_sync(dev->vfio_req_intr_handle,
pci_vfio_req_handler,
(void *)&dev->device);
if (ret < 0) {
@@ -418,11 +436,12 @@ pci_vfio_disable_notifier(struct rte_pci_device *dev)
return -1;
}
- close(dev->vfio_req_intr_handle.fd);
+ close(rte_intr_fd_get(dev->vfio_req_intr_handle));
- dev->vfio_req_intr_handle.fd = -1;
- dev->vfio_req_intr_handle.type = RTE_INTR_HANDLE_UNKNOWN;
- dev->vfio_req_intr_handle.vfio_dev_fd = -1;
+ rte_intr_fd_set(dev->vfio_req_intr_handle, -1);
+ rte_intr_type_set(dev->vfio_req_intr_handle,
+ RTE_INTR_HANDLE_UNKNOWN);
+ rte_intr_dev_fd_set(dev->vfio_req_intr_handle, -1);
return 0;
}
@@ -705,9 +724,13 @@ pci_vfio_map_resource_primary(struct rte_pci_device *dev)
struct pci_map *maps;
- dev->intr_handle.fd = -1;
+ if (rte_intr_fd_set(dev->intr_handle, -1))
+ return -1;
+
#ifdef HAVE_VFIO_DEV_REQ_INTERFACE
- dev->vfio_req_intr_handle.fd = -1;
+ if (rte_intr_fd_set(dev->vfio_req_intr_handle, -1))
+ return -1;
+
#endif
/* store PCI address string */
@@ -854,9 +877,12 @@ pci_vfio_map_resource_secondary(struct rte_pci_device *dev)
struct pci_map *maps;
- dev->intr_handle.fd = -1;
+ if (rte_intr_fd_set(dev->intr_handle, -1))
+ return -1;
+
#ifdef HAVE_VFIO_DEV_REQ_INTERFACE
- dev->vfio_req_intr_handle.fd = -1;
+ if (rte_intr_fd_set(dev->vfio_req_intr_handle, -1))
+ return -1;
#endif
/* store PCI address string */
@@ -897,9 +923,11 @@ pci_vfio_map_resource_secondary(struct rte_pci_device *dev)
}
/* we need save vfio_dev_fd, so it can be used during release */
- dev->intr_handle.vfio_dev_fd = vfio_dev_fd;
+ if (rte_intr_dev_fd_set(dev->intr_handle, vfio_dev_fd))
+ goto err_vfio_dev_fd;
#ifdef HAVE_VFIO_DEV_REQ_INTERFACE
- dev->vfio_req_intr_handle.vfio_dev_fd = vfio_dev_fd;
+ if (rte_intr_dev_fd_set(dev->vfio_req_intr_handle, vfio_dev_fd))
+ goto err_vfio_dev_fd;
#endif
return 0;
@@ -968,7 +996,7 @@ pci_vfio_unmap_resource_primary(struct rte_pci_device *dev)
struct rte_pci_addr *loc = &dev->addr;
struct mapped_pci_resource *vfio_res = NULL;
struct mapped_pci_res_list *vfio_res_list;
- int ret;
+ int ret, vfio_dev_fd;
/* store PCI address string */
snprintf(pci_addr, sizeof(pci_addr), PCI_PRI_FMT,
@@ -982,20 +1010,21 @@ pci_vfio_unmap_resource_primary(struct rte_pci_device *dev)
}
#endif
- if (close(dev->intr_handle.fd) < 0) {
+ if (close(rte_intr_fd_get(dev->intr_handle)) < 0) {
RTE_LOG(INFO, EAL, "Error when closing eventfd file descriptor for %s\n",
pci_addr);
return -1;
}
- if (pci_vfio_set_bus_master(dev->intr_handle.vfio_dev_fd, false)) {
+ vfio_dev_fd = rte_intr_dev_fd_get(dev->intr_handle);
+ if (pci_vfio_set_bus_master(vfio_dev_fd, false)) {
RTE_LOG(ERR, EAL, "%s cannot unset bus mastering for PCI device!\n",
pci_addr);
return -1;
}
ret = rte_vfio_release_device(rte_pci_get_sysfs_path(), pci_addr,
- dev->intr_handle.vfio_dev_fd);
+ vfio_dev_fd);
if (ret < 0) {
RTE_LOG(ERR, EAL, "Cannot release VFIO device\n");
return ret;
@@ -1024,14 +1053,15 @@ pci_vfio_unmap_resource_secondary(struct rte_pci_device *dev)
struct rte_pci_addr *loc = &dev->addr;
struct mapped_pci_resource *vfio_res = NULL;
struct mapped_pci_res_list *vfio_res_list;
- int ret;
+ int ret, vfio_dev_fd;
/* store PCI address string */
snprintf(pci_addr, sizeof(pci_addr), PCI_PRI_FMT,
loc->domain, loc->bus, loc->devid, loc->function);
+ vfio_dev_fd = rte_intr_dev_fd_get(dev->intr_handle);
ret = rte_vfio_release_device(rte_pci_get_sysfs_path(), pci_addr,
- dev->intr_handle.vfio_dev_fd);
+ vfio_dev_fd);
if (ret < 0) {
RTE_LOG(ERR, EAL, "Cannot release VFIO device\n");
return ret;
@@ -1079,9 +1109,10 @@ void
pci_vfio_ioport_read(struct rte_pci_ioport *p,
void *data, size_t len, off_t offset)
{
- const struct rte_intr_handle *intr_handle = &p->dev->intr_handle;
+ const struct rte_intr_handle *intr_handle = p->dev->intr_handle;
+ int vfio_dev_fd = rte_intr_dev_fd_get(intr_handle);
- if (pread64(intr_handle->vfio_dev_fd, data,
+ if (pread64(vfio_dev_fd, data,
len, p->base + offset) <= 0)
RTE_LOG(ERR, EAL,
"Can't read from PCI bar (%" PRIu64 ") : offset (%x)\n",
@@ -1092,9 +1123,10 @@ void
pci_vfio_ioport_write(struct rte_pci_ioport *p,
const void *data, size_t len, off_t offset)
{
- const struct rte_intr_handle *intr_handle = &p->dev->intr_handle;
+ const struct rte_intr_handle *intr_handle = p->dev->intr_handle;
+ int vfio_dev_fd = rte_intr_dev_fd_get(intr_handle);
- if (pwrite64(intr_handle->vfio_dev_fd, data,
+ if (pwrite64(vfio_dev_fd, data,
len, p->base + offset) <= 0)
RTE_LOG(ERR, EAL,
"Can't write to PCI bar (%" PRIu64 ") : offset (%x)\n",
diff --git a/drivers/bus/pci/pci_common.c b/drivers/bus/pci/pci_common.c
index 3406e03b29..aef99f9f9b 100644
--- a/drivers/bus/pci/pci_common.c
+++ b/drivers/bus/pci/pci_common.c
@@ -230,6 +230,22 @@ rte_pci_probe_one_driver(struct rte_pci_driver *dr,
}
if (!already_probed && (dr->drv_flags & RTE_PCI_DRV_NEED_MAPPING)) {
+ /* Allocate interrupt instance for pci device */
+ dev->intr_handle = rte_intr_instance_alloc();
+ if (!dev->intr_handle) {
+ RTE_LOG(ERR, EAL,
+ "Failed to create interrupt instance for %s\n",
+ dev->device.name);
+ return -ENOMEM;
+ }
+
+ dev->vfio_req_intr_handle = rte_intr_instance_alloc();
+ if (!dev->vfio_req_intr_handle) {
+ RTE_LOG(ERR, EAL,
+ "Failed to create vfio req interrupt instance for %s\n",
+ dev->device.name);
+ return -ENOMEM;
+ }
/* map resources for devices that use igb_uio */
ret = rte_pci_map_device(dev);
if (ret != 0) {
@@ -253,8 +269,12 @@ rte_pci_probe_one_driver(struct rte_pci_driver *dr,
* driver needs mapped resources.
*/
!(ret > 0 &&
- (dr->drv_flags & RTE_PCI_DRV_KEEP_MAPPED_RES)))
+ (dr->drv_flags & RTE_PCI_DRV_KEEP_MAPPED_RES))) {
rte_pci_unmap_device(dev);
+ rte_intr_instance_free(dev->intr_handle);
+ rte_intr_instance_free(
+ dev->vfio_req_intr_handle);
+ }
} else {
dev->device.driver = &dr->driver;
}
@@ -296,9 +316,12 @@ rte_pci_detach_dev(struct rte_pci_device *dev)
dev->driver = NULL;
dev->device.driver = NULL;
- if (dr->drv_flags & RTE_PCI_DRV_NEED_MAPPING)
+ if (dr->drv_flags & RTE_PCI_DRV_NEED_MAPPING) {
/* unmap resources for devices that use igb_uio */
rte_pci_unmap_device(dev);
+ rte_intr_instance_free(dev->intr_handle);
+ rte_intr_instance_free(dev->vfio_req_intr_handle);
+ }
return 0;
}
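
The pci_common.c hunk above is where the lifecycle that the rest of the series relies on is established: the bus allocates the opaque interrupt instance before mapping resources and frees it again on unmap/detach, while every later access goes through the get/set wrappers. A minimal sketch of that pattern follows — illustrative only, not part of the patch, using the alloc/get/set/free names introduced by this series; the example_* function names are hypothetical and error handling is abbreviated.

/*
 * Illustrative sketch only (not part of the patch): the setup/teardown
 * lifecycle expected once rte_intr_handle is opaque.
 */
#include <errno.h>
#include <unistd.h>

#include <rte_interrupts.h>
#include <rte_bus_pci.h>

static int
example_setup_intr(struct rte_pci_device *pci_dev)
{
	/* The instance must be allocated before use; it can no longer be
	 * embedded statically because its size is hidden from drivers. */
	pci_dev->intr_handle = rte_intr_instance_alloc();
	if (pci_dev->intr_handle == NULL)
		return -ENOMEM;

	/* Fields are only reachable through the accessors. */
	if (rte_intr_fd_set(pci_dev->intr_handle, -1))
		return -1;
	if (rte_intr_type_set(pci_dev->intr_handle, RTE_INTR_HANDLE_UNKNOWN))
		return -1;

	return 0;
}

static void
example_teardown_intr(struct rte_pci_device *pci_dev)
{
	if (rte_intr_fd_get(pci_dev->intr_handle) >= 0)
		close(rte_intr_fd_get(pci_dev->intr_handle));

	rte_intr_instance_free(pci_dev->intr_handle);
	pci_dev->intr_handle = NULL;
}
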
diff --git a/drivers/bus/pci/pci_common_uio.c b/drivers/bus/pci/pci_common_uio.c
index 318f9a1d55..244c9a8940 100644
--- a/drivers/bus/pci/pci_common_uio.c
+++ b/drivers/bus/pci/pci_common_uio.c
@@ -90,8 +90,11 @@ pci_uio_map_resource(struct rte_pci_device *dev)
struct mapped_pci_res_list *uio_res_list =
RTE_TAILQ_CAST(rte_uio_tailq.head, mapped_pci_res_list);
- dev->intr_handle.fd = -1;
- dev->intr_handle.uio_cfg_fd = -1;
+ if (rte_intr_fd_set(dev->intr_handle, -1))
+ return -1;
+
+ if (rte_intr_dev_fd_set(dev->intr_handle, -1))
+ return -1;
/* secondary processes - use already recorded details */
if (rte_eal_process_type() != RTE_PROC_PRIMARY)
@@ -207,6 +210,7 @@ pci_uio_unmap_resource(struct rte_pci_device *dev)
struct mapped_pci_resource *uio_res;
struct mapped_pci_res_list *uio_res_list =
RTE_TAILQ_CAST(rte_uio_tailq.head, mapped_pci_res_list);
+ int uio_cfg_fd;
if (dev == NULL)
return;
@@ -229,12 +233,13 @@ pci_uio_unmap_resource(struct rte_pci_device *dev)
rte_free(uio_res);
/* close fd if in primary process */
- close(dev->intr_handle.fd);
- if (dev->intr_handle.uio_cfg_fd >= 0) {
- close(dev->intr_handle.uio_cfg_fd);
- dev->intr_handle.uio_cfg_fd = -1;
+ close(rte_intr_fd_get(dev->intr_handle));
+ uio_cfg_fd = rte_intr_dev_fd_get(dev->intr_handle);
+ if (uio_cfg_fd >= 0) {
+ close(uio_cfg_fd);
+ rte_intr_dev_fd_set(dev->intr_handle, -1);
}
- dev->intr_handle.fd = -1;
- dev->intr_handle.type = RTE_INTR_HANDLE_UNKNOWN;
+ rte_intr_fd_set(dev->intr_handle, -1);
+ rte_intr_type_set(dev->intr_handle, RTE_INTR_HANDLE_UNKNOWN);
}
diff --git a/drivers/bus/pci/rte_bus_pci.h b/drivers/bus/pci/rte_bus_pci.h
index 673a2850c1..1c6a8fdd7b 100644
--- a/drivers/bus/pci/rte_bus_pci.h
+++ b/drivers/bus/pci/rte_bus_pci.h
@@ -69,12 +69,12 @@ struct rte_pci_device {
struct rte_pci_id id; /**< PCI ID. */
struct rte_mem_resource mem_resource[PCI_MAX_RESOURCE];
/**< PCI Memory Resource */
- struct rte_intr_handle intr_handle; /**< Interrupt handle */
+ struct rte_intr_handle *intr_handle; /**< Interrupt handle */
struct rte_pci_driver *driver; /**< PCI driver used in probing */
uint16_t max_vfs; /**< sriov enable if not zero */
enum rte_pci_kernel_driver kdrv; /**< Kernel driver passthrough */
char name[PCI_PRI_STR_SIZE+1]; /**< PCI location (ASCII) */
- struct rte_intr_handle vfio_req_intr_handle;
+ struct rte_intr_handle *vfio_req_intr_handle;
/**< Handler of VFIO request interrupt */
};
diff --git a/drivers/bus/vmbus/linux/vmbus_bus.c b/drivers/bus/vmbus/linux/vmbus_bus.c
index 68f6cc5742..bc8ccc24e2 100644
--- a/drivers/bus/vmbus/linux/vmbus_bus.c
+++ b/drivers/bus/vmbus/linux/vmbus_bus.c
@@ -299,6 +299,11 @@ vmbus_scan_one(const char *name)
dev->device.devargs = vmbus_devargs_lookup(dev);
+ /* Allocate interrupt handle instance */
+ dev->intr_handle = rte_intr_instance_alloc();
+ if (!dev->intr_handle)
+ goto error;
+
/* device is valid, add in list (sorted) */
VMBUS_LOG(DEBUG, "Adding vmbus device %s", name);
diff --git a/drivers/bus/vmbus/linux/vmbus_uio.c b/drivers/bus/vmbus/linux/vmbus_uio.c
index 70b0d098e0..7792712a25 100644
--- a/drivers/bus/vmbus/linux/vmbus_uio.c
+++ b/drivers/bus/vmbus/linux/vmbus_uio.c
@@ -30,9 +30,11 @@ static void *vmbus_map_addr;
/* Control interrupts */
void vmbus_uio_irq_control(struct rte_vmbus_device *dev, int32_t onoff)
{
- if (write(dev->intr_handle.fd, &onoff, sizeof(onoff)) < 0) {
+ if (write(rte_intr_fd_get(dev->intr_handle), &onoff,
+ sizeof(onoff)) < 0) {
VMBUS_LOG(ERR, "cannot write to %d:%s",
- dev->intr_handle.fd, strerror(errno));
+ rte_intr_fd_get(dev->intr_handle),
+ strerror(errno));
}
}
@@ -41,7 +43,8 @@ int vmbus_uio_irq_read(struct rte_vmbus_device *dev)
int32_t count;
int cc;
- cc = read(dev->intr_handle.fd, &count, sizeof(count));
+ cc = read(rte_intr_fd_get(dev->intr_handle), &count,
+ sizeof(count));
if (cc < (int)sizeof(count)) {
if (cc < 0) {
VMBUS_LOG(ERR, "IRQ read failed %s",
@@ -61,15 +64,16 @@ vmbus_uio_free_resource(struct rte_vmbus_device *dev,
{
rte_free(uio_res);
- if (dev->intr_handle.uio_cfg_fd >= 0) {
- close(dev->intr_handle.uio_cfg_fd);
- dev->intr_handle.uio_cfg_fd = -1;
+ if (rte_intr_dev_fd_get(dev->intr_handle) >= 0) {
+ close(rte_intr_dev_fd_get(dev->intr_handle));
+ rte_intr_dev_fd_set(dev->intr_handle, -1);
}
- if (dev->intr_handle.fd >= 0) {
- close(dev->intr_handle.fd);
- dev->intr_handle.fd = -1;
- dev->intr_handle.type = RTE_INTR_HANDLE_UNKNOWN;
+ if (rte_intr_fd_get(dev->intr_handle) >= 0) {
+ close(rte_intr_fd_get(dev->intr_handle));
+ rte_intr_fd_set(dev->intr_handle, -1);
+ rte_intr_type_set(dev->intr_handle,
+ RTE_INTR_HANDLE_UNKNOWN);
}
}
@@ -78,16 +82,23 @@ vmbus_uio_alloc_resource(struct rte_vmbus_device *dev,
struct mapped_vmbus_resource **uio_res)
{
char devname[PATH_MAX]; /* contains the /dev/uioX */
+ int fd;
/* save fd if in primary process */
snprintf(devname, sizeof(devname), "/dev/uio%u", dev->uio_num);
- dev->intr_handle.fd = open(devname, O_RDWR);
- if (dev->intr_handle.fd < 0) {
+ fd = open(devname, O_RDWR);
+ if (fd < 0) {
VMBUS_LOG(ERR, "Cannot open %s: %s",
devname, strerror(errno));
goto error;
}
- dev->intr_handle.type = RTE_INTR_HANDLE_UIO_INTX;
+
+ if (rte_intr_fd_set(dev->intr_handle, fd))
+ goto error;
+
+ if (rte_intr_type_set(dev->intr_handle,
+ RTE_INTR_HANDLE_UIO_INTX))
+ goto error;
/* allocate the mapping details for secondary processes*/
*uio_res = rte_zmalloc("UIO_RES", sizeof(**uio_res), 0);
diff --git a/drivers/bus/vmbus/rte_bus_vmbus.h b/drivers/bus/vmbus/rte_bus_vmbus.h
index 6bcff66468..466d42d277 100644
--- a/drivers/bus/vmbus/rte_bus_vmbus.h
+++ b/drivers/bus/vmbus/rte_bus_vmbus.h
@@ -73,7 +73,7 @@ struct rte_vmbus_device {
struct vmbus_channel *primary; /**< VMBUS primary channel */
struct vmbus_mon_page *monitor_page; /**< VMBUS monitor page */
- struct rte_intr_handle intr_handle; /**< Interrupt handle */
+ struct rte_intr_handle *intr_handle; /**< Interrupt handle */
struct rte_mem_resource resource[VMBUS_MAX_RESOURCE];
};
diff --git a/drivers/bus/vmbus/vmbus_common_uio.c b/drivers/bus/vmbus/vmbus_common_uio.c
index 041712fe75..90b34004fa 100644
--- a/drivers/bus/vmbus/vmbus_common_uio.c
+++ b/drivers/bus/vmbus/vmbus_common_uio.c
@@ -171,9 +171,15 @@ vmbus_uio_map_resource(struct rte_vmbus_device *dev)
int ret;
/* TODO: handle rescind */
- dev->intr_handle.fd = -1;
- dev->intr_handle.uio_cfg_fd = -1;
- dev->intr_handle.type = RTE_INTR_HANDLE_UNKNOWN;
+ if (rte_intr_fd_set(dev->intr_handle, -1))
+ return -1;
+
+ if (rte_intr_dev_fd_set(dev->intr_handle, -1))
+ return -1;
+
+ if (rte_intr_type_set(dev->intr_handle,
+ RTE_INTR_HANDLE_UNKNOWN))
+ return -1;
/* secondary processes - use already recorded details */
if (rte_eal_process_type() != RTE_PROC_PRIMARY)
@@ -253,12 +259,12 @@ vmbus_uio_unmap_resource(struct rte_vmbus_device *dev)
rte_free(uio_res);
/* close fd if in primary process */
- close(dev->intr_handle.fd);
- if (dev->intr_handle.uio_cfg_fd >= 0) {
- close(dev->intr_handle.uio_cfg_fd);
- dev->intr_handle.uio_cfg_fd = -1;
+ close(rte_intr_fd_get(dev->intr_handle));
+ if (rte_intr_dev_fd_get(dev->intr_handle) >= 0) {
+ close(rte_intr_dev_fd_get(dev->intr_handle));
+ rte_intr_dev_fd_set(dev->intr_handle, -1);
}
- dev->intr_handle.fd = -1;
- dev->intr_handle.type = RTE_INTR_HANDLE_UNKNOWN;
+ rte_intr_fd_set(dev->intr_handle, -1);
+ rte_intr_type_set(dev->intr_handle, RTE_INTR_HANDLE_UNKNOWN);
}
diff --git a/drivers/common/cnxk/roc_cpt.c b/drivers/common/cnxk/roc_cpt.c
index 74ada6ef42..15f1aae23e 100644
--- a/drivers/common/cnxk/roc_cpt.c
+++ b/drivers/common/cnxk/roc_cpt.c
@@ -65,7 +65,7 @@ cpt_lf_register_misc_irq(struct roc_cpt_lf *lf)
struct plt_intr_handle *handle;
int rc, vec;
- handle = &pci_dev->intr_handle;
+ handle = pci_dev->intr_handle;
vec = lf->msixoff + CPT_LF_INT_VEC_MISC;
/* Clear err interrupt */
@@ -85,7 +85,7 @@ cpt_lf_unregister_misc_irq(struct roc_cpt_lf *lf)
struct plt_intr_handle *handle;
int vec;
- handle = &pci_dev->intr_handle;
+ handle = pci_dev->intr_handle;
vec = lf->msixoff + CPT_LF_INT_VEC_MISC;
/* Clear err interrupt */
@@ -129,7 +129,7 @@ cpt_lf_register_done_irq(struct roc_cpt_lf *lf)
struct plt_intr_handle *handle;
int rc, vec;
- handle = &pci_dev->intr_handle;
+ handle = pci_dev->intr_handle;
vec = lf->msixoff + CPT_LF_INT_VEC_DONE;
@@ -152,7 +152,7 @@ cpt_lf_unregister_done_irq(struct roc_cpt_lf *lf)
struct plt_intr_handle *handle;
int vec;
- handle = &pci_dev->intr_handle;
+ handle = pci_dev->intr_handle;
vec = lf->msixoff + CPT_LF_INT_VEC_DONE;
diff --git a/drivers/common/cnxk/roc_dev.c b/drivers/common/cnxk/roc_dev.c
index ce6980cbe4..926a916e44 100644
--- a/drivers/common/cnxk/roc_dev.c
+++ b/drivers/common/cnxk/roc_dev.c
@@ -641,7 +641,7 @@ roc_af_pf_mbox_irq(void *param)
static int
mbox_register_pf_irq(struct plt_pci_device *pci_dev, struct dev *dev)
{
- struct plt_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct plt_intr_handle *intr_handle = pci_dev->intr_handle;
int i, rc;
/* HW clear irq */
@@ -691,7 +691,7 @@ mbox_register_pf_irq(struct plt_pci_device *pci_dev, struct dev *dev)
static int
mbox_register_vf_irq(struct plt_pci_device *pci_dev, struct dev *dev)
{
- struct plt_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct plt_intr_handle *intr_handle = pci_dev->intr_handle;
int rc;
/* Clear irq */
@@ -724,7 +724,7 @@ mbox_register_irq(struct plt_pci_device *pci_dev, struct dev *dev)
static void
mbox_unregister_pf_irq(struct plt_pci_device *pci_dev, struct dev *dev)
{
- struct plt_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct plt_intr_handle *intr_handle = pci_dev->intr_handle;
int i;
/* HW clear irq */
@@ -755,7 +755,7 @@ mbox_unregister_pf_irq(struct plt_pci_device *pci_dev, struct dev *dev)
static void
mbox_unregister_vf_irq(struct plt_pci_device *pci_dev, struct dev *dev)
{
- struct plt_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct plt_intr_handle *intr_handle = pci_dev->intr_handle;
/* Clear irq */
plt_write64(~0ull, dev->bar2 + RVU_VF_INT_ENA_W1C);
@@ -839,7 +839,7 @@ roc_pf_vf_flr_irq(void *param)
static int
vf_flr_unregister_irqs(struct plt_pci_device *pci_dev, struct dev *dev)
{
- struct plt_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct plt_intr_handle *intr_handle = pci_dev->intr_handle;
int i;
plt_base_dbg("Unregister VF FLR interrupts for %s", pci_dev->name);
@@ -860,7 +860,7 @@ vf_flr_unregister_irqs(struct plt_pci_device *pci_dev, struct dev *dev)
static int
vf_flr_register_irqs(struct plt_pci_device *pci_dev, struct dev *dev)
{
- struct plt_intr_handle *handle = &pci_dev->intr_handle;
+ struct plt_intr_handle *handle = pci_dev->intr_handle;
int i, rc;
plt_base_dbg("Register VF FLR interrupts for %s", pci_dev->name);
@@ -1211,7 +1211,7 @@ dev_init(struct dev *dev, struct plt_pci_device *pci_dev)
int
dev_fini(struct dev *dev, struct plt_pci_device *pci_dev)
{
- struct plt_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct plt_intr_handle *intr_handle = pci_dev->intr_handle;
struct mbox *mbox;
/* Check if this dev hosts npalf and has 1+ refs */
diff --git a/drivers/common/cnxk/roc_irq.c b/drivers/common/cnxk/roc_irq.c
index 28fe691932..b05578e13d 100644
--- a/drivers/common/cnxk/roc_irq.c
+++ b/drivers/common/cnxk/roc_irq.c
@@ -20,11 +20,12 @@ static int
irq_get_info(struct plt_intr_handle *intr_handle)
{
struct vfio_irq_info irq = {.argsz = sizeof(irq)};
- int rc;
+ int rc, vfio_dev_fd;
irq.index = VFIO_PCI_MSIX_IRQ_INDEX;
- rc = ioctl(intr_handle->vfio_dev_fd, VFIO_DEVICE_GET_IRQ_INFO, &irq);
+ vfio_dev_fd = plt_intr_dev_fd_get(intr_handle);
+ rc = ioctl(vfio_dev_fd, VFIO_DEVICE_GET_IRQ_INFO, &irq);
if (rc < 0) {
plt_err("Failed to get IRQ info rc=%d errno=%d", rc, errno);
return rc;
@@ -36,9 +37,11 @@ irq_get_info(struct plt_intr_handle *intr_handle)
if (irq.count > PLT_MAX_RXTX_INTR_VEC_ID) {
plt_err("HW max=%d > PLT_MAX_RXTX_INTR_VEC_ID: %d", irq.count,
PLT_MAX_RXTX_INTR_VEC_ID);
- intr_handle->max_intr = PLT_MAX_RXTX_INTR_VEC_ID;
+ plt_intr_max_intr_set(intr_handle,
+ PLT_MAX_RXTX_INTR_VEC_ID);
} else {
- intr_handle->max_intr = irq.count;
+ if (plt_intr_max_intr_set(intr_handle, irq.count))
+ return -1;
}
return 0;
@@ -49,12 +52,12 @@ irq_config(struct plt_intr_handle *intr_handle, unsigned int vec)
{
char irq_set_buf[MSIX_IRQ_SET_BUF_LEN];
struct vfio_irq_set *irq_set;
+ int len, rc, vfio_dev_fd;
int32_t *fd_ptr;
- int len, rc;
- if (vec > intr_handle->max_intr) {
+ if (vec > (uint32_t)plt_intr_max_intr_get(intr_handle)) {
plt_err("vector=%d greater than max_intr=%d", vec,
- intr_handle->max_intr);
+ plt_intr_max_intr_get(intr_handle));
return -EINVAL;
}
@@ -71,9 +74,10 @@ irq_config(struct plt_intr_handle *intr_handle, unsigned int vec)
/* Use vec fd to set interrupt vectors */
fd_ptr = (int32_t *)&irq_set->data[0];
- fd_ptr[0] = intr_handle->efds[vec];
+ fd_ptr[0] = plt_intr_efds_index_get(intr_handle, vec);
- rc = ioctl(intr_handle->vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
+ vfio_dev_fd = plt_intr_dev_fd_get(intr_handle);
+ rc = ioctl(vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
if (rc)
plt_err("Failed to set_irqs vector=0x%x rc=%d", vec, rc);
@@ -85,23 +89,25 @@ irq_init(struct plt_intr_handle *intr_handle)
{
char irq_set_buf[MSIX_IRQ_SET_BUF_LEN];
struct vfio_irq_set *irq_set;
+ int len, rc, vfio_dev_fd;
int32_t *fd_ptr;
- int len, rc;
uint32_t i;
- if (intr_handle->max_intr > PLT_MAX_RXTX_INTR_VEC_ID) {
+ if (plt_intr_max_intr_get(intr_handle) >
+ PLT_MAX_RXTX_INTR_VEC_ID) {
plt_err("Max_intr=%d greater than PLT_MAX_RXTX_INTR_VEC_ID=%d",
- intr_handle->max_intr, PLT_MAX_RXTX_INTR_VEC_ID);
+ plt_intr_max_intr_get(intr_handle),
+ PLT_MAX_RXTX_INTR_VEC_ID);
return -ERANGE;
}
len = sizeof(struct vfio_irq_set) +
- sizeof(int32_t) * intr_handle->max_intr;
+ sizeof(int32_t) * plt_intr_max_intr_get(intr_handle);
irq_set = (struct vfio_irq_set *)irq_set_buf;
irq_set->argsz = len;
irq_set->start = 0;
- irq_set->count = intr_handle->max_intr;
+ irq_set->count = plt_intr_max_intr_get(intr_handle);
irq_set->flags =
VFIO_IRQ_SET_DATA_EVENTFD | VFIO_IRQ_SET_ACTION_TRIGGER;
irq_set->index = VFIO_PCI_MSIX_IRQ_INDEX;
@@ -110,7 +116,8 @@ irq_init(struct plt_intr_handle *intr_handle)
for (i = 0; i < irq_set->count; i++)
fd_ptr[i] = -1;
- rc = ioctl(intr_handle->vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
+ vfio_dev_fd = plt_intr_dev_fd_get(intr_handle);
+ rc = ioctl(vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
if (rc)
plt_err("Failed to set irqs vector rc=%d", rc);
@@ -121,7 +128,7 @@ int
dev_irqs_disable(struct plt_intr_handle *intr_handle)
{
/* Clear max_intr to indicate re-init next time */
- intr_handle->max_intr = 0;
+ plt_intr_max_intr_set(intr_handle, 0);
return plt_intr_disable(intr_handle);
}
@@ -129,43 +136,53 @@ int
dev_irq_register(struct plt_intr_handle *intr_handle, plt_intr_callback_fn cb,
void *data, unsigned int vec)
{
- struct plt_intr_handle tmp_handle;
- int rc;
+ struct plt_intr_handle *tmp_handle;
+ uint32_t nb_efd, tmp_nb_efd;
+ int rc, fd;
/* If no max_intr read from VFIO */
- if (intr_handle->max_intr == 0) {
+ if (plt_intr_max_intr_get(intr_handle) == 0) {
irq_get_info(intr_handle);
irq_init(intr_handle);
}
- if (vec > intr_handle->max_intr || vec >= PLT_DIM(intr_handle->efds)) {
+ if (vec > (uint32_t)plt_intr_max_intr_get(intr_handle) ||
+ vec >= (uint32_t)plt_intr_nb_intr_get(intr_handle)) {
plt_err("Vector=%d greater than max_intr=%d or "
"max_efd=%" PRIu64,
- vec, intr_handle->max_intr, PLT_DIM(intr_handle->efds));
+ vec, plt_intr_max_intr_get(intr_handle),
+ (uint64_t)plt_intr_nb_intr_get(intr_handle));
return -EINVAL;
}
- tmp_handle = *intr_handle;
+ tmp_handle = intr_handle;
/* Create new eventfd for interrupt vector */
- tmp_handle.fd = eventfd(0, EFD_NONBLOCK | EFD_CLOEXEC);
- if (tmp_handle.fd == -1)
+ fd = eventfd(0, EFD_NONBLOCK | EFD_CLOEXEC);
+ if (fd == -1)
return -ENODEV;
+ if (plt_intr_fd_set(tmp_handle, fd))
+ return errno;
+
/* Register vector interrupt callback */
- rc = plt_intr_callback_register(&tmp_handle, cb, data);
+ rc = plt_intr_callback_register(tmp_handle, cb, data);
if (rc) {
plt_err("Failed to register vector:0x%x irq callback.", vec);
return rc;
}
- intr_handle->efds[vec] = tmp_handle.fd;
- intr_handle->nb_efd =
- (vec > intr_handle->nb_efd) ? vec : intr_handle->nb_efd;
- if ((intr_handle->nb_efd + 1) > intr_handle->max_intr)
- intr_handle->max_intr = intr_handle->nb_efd + 1;
+ plt_intr_efds_index_set(intr_handle, vec, fd);
+ nb_efd = (vec > (uint32_t)plt_intr_nb_efd_get(intr_handle)) ?
+ vec : (uint32_t)plt_intr_nb_efd_get(intr_handle);
+ plt_intr_nb_efd_set(intr_handle, nb_efd);
+
+ tmp_nb_efd = plt_intr_nb_efd_get(intr_handle) + 1;
+ if (tmp_nb_efd > (uint32_t)plt_intr_max_intr_get(intr_handle))
+ plt_intr_max_intr_set(intr_handle, tmp_nb_efd);
plt_base_dbg("Enable vector:0x%x for vfio (efds: %d, max:%d)", vec,
- intr_handle->nb_efd, intr_handle->max_intr);
+ plt_intr_nb_efd_get(intr_handle),
+ plt_intr_max_intr_get(intr_handle));
/* Enable MSIX vectors to VFIO */
return irq_config(intr_handle, vec);
@@ -175,24 +192,27 @@ void
dev_irq_unregister(struct plt_intr_handle *intr_handle, plt_intr_callback_fn cb,
void *data, unsigned int vec)
{
- struct plt_intr_handle tmp_handle;
+ struct plt_intr_handle *tmp_handle;
uint8_t retries = 5; /* 5 ms */
- int rc;
+ int rc, fd;
- if (vec > intr_handle->max_intr) {
+ if (vec > (uint32_t)plt_intr_max_intr_get(intr_handle)) {
plt_err("Error unregistering MSI-X interrupts vec:%d > %d", vec,
- intr_handle->max_intr);
+ plt_intr_max_intr_get(intr_handle));
return;
}
- tmp_handle = *intr_handle;
- tmp_handle.fd = intr_handle->efds[vec];
- if (tmp_handle.fd == -1)
+ tmp_handle = intr_handle;
+ fd = plt_intr_efds_index_get(intr_handle, vec);
+ if (fd == -1)
+ return;
+
+ if (plt_intr_fd_set(tmp_handle, fd))
return;
do {
/* Un-register callback func from platform lib */
- rc = plt_intr_callback_unregister(&tmp_handle, cb, data);
+ rc = plt_intr_callback_unregister(tmp_handle, cb, data);
/* Retry only if -EAGAIN */
if (rc != -EAGAIN)
break;
@@ -206,12 +226,14 @@ dev_irq_unregister(struct plt_intr_handle *intr_handle, plt_intr_callback_fn cb,
}
plt_base_dbg("Disable vector:0x%x for vfio (efds: %d, max:%d)", vec,
- intr_handle->nb_efd, intr_handle->max_intr);
+ plt_intr_nb_efd_get(intr_handle),
+ plt_intr_max_intr_get(intr_handle));
- if (intr_handle->efds[vec] != -1)
- close(intr_handle->efds[vec]);
+ if (plt_intr_efds_index_get(intr_handle, vec) != -1)
+ close(plt_intr_efds_index_get(intr_handle, vec));
/* Disable MSIX vectors from VFIO */
- intr_handle->efds[vec] = -1;
+ plt_intr_efds_index_set(intr_handle, vec, -1);
+
irq_config(intr_handle, vec);
}
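
The roc_irq.c rework above drops the on-stack copy of the handle and drives everything through the accessors; the essential flow of dev_irq_register() — one eventfd per MSI-X vector, registered via the interrupt callback API and recorded through the efds index setter — can be summarized as below. This is an illustrative sketch only, not part of the patch, written against the plt_* accessor aliases added to roc_platform.h in this series; example_register_vec is a hypothetical name.

/*
 * Illustrative sketch only: per-vector registration flow as reworked in
 * dev_irq_register() above.
 */
#include <errno.h>
#include <sys/eventfd.h>

#include "roc_platform.h"

static int
example_register_vec(struct plt_intr_handle *intr_handle,
		     plt_intr_callback_fn cb, void *data, unsigned int vec)
{
	int fd;

	/* One eventfd per MSI-X vector. */
	fd = eventfd(0, EFD_NONBLOCK | EFD_CLOEXEC);
	if (fd < 0)
		return -ENODEV;

	/* Set the handle fd through the accessor, then register the
	 * callback; no direct field access remains. */
	if (plt_intr_fd_set(intr_handle, fd))
		return -1;
	if (plt_intr_callback_register(intr_handle, cb, data))
		return -1;

	/* Record the eventfd for this vector so irq_config() can later
	 * hand it to VFIO via VFIO_DEVICE_SET_IRQS. */
	plt_intr_efds_index_set(intr_handle, vec, fd);

	return 0;
}
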
diff --git a/drivers/common/cnxk/roc_nix_inl_dev_irq.c b/drivers/common/cnxk/roc_nix_inl_dev_irq.c
index 25ed42f875..848523b010 100644
--- a/drivers/common/cnxk/roc_nix_inl_dev_irq.c
+++ b/drivers/common/cnxk/roc_nix_inl_dev_irq.c
@@ -99,7 +99,7 @@ nix_inl_sso_hws_irq(void *param)
int
nix_inl_sso_register_irqs(struct nix_inl_dev *inl_dev)
{
- struct plt_intr_handle *handle = &inl_dev->pci_dev->intr_handle;
+ struct plt_intr_handle *handle = inl_dev->pci_dev->intr_handle;
uintptr_t ssow_base = inl_dev->ssow_base;
uintptr_t sso_base = inl_dev->sso_base;
uint16_t sso_msixoff, ssow_msixoff;
@@ -147,7 +147,7 @@ nix_inl_sso_register_irqs(struct nix_inl_dev *inl_dev)
void
nix_inl_sso_unregister_irqs(struct nix_inl_dev *inl_dev)
{
- struct plt_intr_handle *handle = &inl_dev->pci_dev->intr_handle;
+ struct plt_intr_handle *handle = inl_dev->pci_dev->intr_handle;
uintptr_t ssow_base = inl_dev->ssow_base;
uintptr_t sso_base = inl_dev->sso_base;
uint16_t sso_msixoff, ssow_msixoff;
@@ -282,7 +282,7 @@ nix_inl_nix_err_irq(void *param)
int
nix_inl_nix_register_irqs(struct nix_inl_dev *inl_dev)
{
- struct plt_intr_handle *handle = &inl_dev->pci_dev->intr_handle;
+ struct plt_intr_handle *handle = inl_dev->pci_dev->intr_handle;
uintptr_t nix_base = inl_dev->nix_base;
uint16_t msixoff;
int rc;
@@ -331,7 +331,7 @@ nix_inl_nix_register_irqs(struct nix_inl_dev *inl_dev)
void
nix_inl_nix_unregister_irqs(struct nix_inl_dev *inl_dev)
{
- struct plt_intr_handle *handle = &inl_dev->pci_dev->intr_handle;
+ struct plt_intr_handle *handle = inl_dev->pci_dev->intr_handle;
uintptr_t nix_base = inl_dev->nix_base;
uint16_t msixoff;
diff --git a/drivers/common/cnxk/roc_nix_irq.c b/drivers/common/cnxk/roc_nix_irq.c
index 32be64a9d7..e9aa620abd 100644
--- a/drivers/common/cnxk/roc_nix_irq.c
+++ b/drivers/common/cnxk/roc_nix_irq.c
@@ -82,7 +82,7 @@ nix_lf_err_irq(void *param)
static int
nix_lf_register_err_irq(struct nix *nix)
{
- struct plt_intr_handle *handle = &nix->pci_dev->intr_handle;
+ struct plt_intr_handle *handle = nix->pci_dev->intr_handle;
int rc, vec;
vec = nix->msixoff + NIX_LF_INT_VEC_ERR_INT;
@@ -99,7 +99,7 @@ nix_lf_register_err_irq(struct nix *nix)
static void
nix_lf_unregister_err_irq(struct nix *nix)
{
- struct plt_intr_handle *handle = &nix->pci_dev->intr_handle;
+ struct plt_intr_handle *handle = nix->pci_dev->intr_handle;
int vec;
vec = nix->msixoff + NIX_LF_INT_VEC_ERR_INT;
@@ -131,7 +131,7 @@ nix_lf_ras_irq(void *param)
static int
nix_lf_register_ras_irq(struct nix *nix)
{
- struct plt_intr_handle *handle = &nix->pci_dev->intr_handle;
+ struct plt_intr_handle *handle = nix->pci_dev->intr_handle;
int rc, vec;
vec = nix->msixoff + NIX_LF_INT_VEC_POISON;
@@ -148,7 +148,7 @@ nix_lf_register_ras_irq(struct nix *nix)
static void
nix_lf_unregister_ras_irq(struct nix *nix)
{
- struct plt_intr_handle *handle = &nix->pci_dev->intr_handle;
+ struct plt_intr_handle *handle = nix->pci_dev->intr_handle;
int vec;
vec = nix->msixoff + NIX_LF_INT_VEC_POISON;
@@ -300,7 +300,7 @@ roc_nix_register_queue_irqs(struct roc_nix *roc_nix)
struct nix *nix;
nix = roc_nix_to_nix_priv(roc_nix);
- handle = &nix->pci_dev->intr_handle;
+ handle = nix->pci_dev->intr_handle;
/* Figure out max qintx required */
rqs = PLT_MIN(nix->qints, nix->nb_rx_queues);
@@ -352,7 +352,7 @@ roc_nix_unregister_queue_irqs(struct roc_nix *roc_nix)
int vec, q;
nix = roc_nix_to_nix_priv(roc_nix);
- handle = &nix->pci_dev->intr_handle;
+ handle = nix->pci_dev->intr_handle;
for (q = 0; q < nix->configured_qints; q++) {
vec = nix->msixoff + NIX_LF_INT_VEC_QINT_START + q;
@@ -382,7 +382,7 @@ roc_nix_register_cq_irqs(struct roc_nix *roc_nix)
struct nix *nix;
nix = roc_nix_to_nix_priv(roc_nix);
- handle = &nix->pci_dev->intr_handle;
+ handle = nix->pci_dev->intr_handle;
nix->configured_cints = PLT_MIN(nix->cints, nix->nb_rx_queues);
@@ -414,19 +414,19 @@ roc_nix_register_cq_irqs(struct roc_nix *roc_nix)
return rc;
}
- if (!handle->intr_vec) {
- handle->intr_vec = plt_zmalloc(
- nix->configured_cints * sizeof(int), 0);
- if (!handle->intr_vec) {
- plt_err("Failed to allocate %d rx intr_vec",
- nix->configured_cints);
- return -ENOMEM;
- }
+ rc = plt_intr_vec_list_alloc(handle, "cnxk",
+ nix->configured_cints);
+ if (rc) {
+ plt_err("Fail to allocate intr vec list, rc=%d",
+ rc);
+ return rc;
}
/* VFIO vector zero is resereved for misc interrupt so
* doing required adjustment. (b13bfab4cd)
*/
- handle->intr_vec[q] = PLT_INTR_VEC_RXTX_OFFSET + vec;
+ if (plt_intr_vec_list_index_set(handle, q,
+ PLT_INTR_VEC_RXTX_OFFSET + vec))
+ return -1;
/* Configure CQE interrupt coalescing parameters */
plt_write64(((CQ_CQE_THRESH_DEFAULT) |
@@ -450,7 +450,7 @@ roc_nix_unregister_cq_irqs(struct roc_nix *roc_nix)
int vec, q;
nix = roc_nix_to_nix_priv(roc_nix);
- handle = &nix->pci_dev->intr_handle;
+ handle = nix->pci_dev->intr_handle;
for (q = 0; q < nix->configured_cints; q++) {
vec = nix->msixoff + NIX_LF_INT_VEC_CINT_START + q;
@@ -465,6 +465,8 @@ roc_nix_unregister_cq_irqs(struct roc_nix *roc_nix)
dev_irq_unregister(handle, nix_lf_cq_irq, &nix->cints_mem[q],
vec);
}
+
+ plt_intr_vec_list_free(handle);
plt_free(nix->cints_mem);
}
diff --git a/drivers/common/cnxk/roc_npa.c b/drivers/common/cnxk/roc_npa.c
index a0d2cc8f19..664240ab42 100644
--- a/drivers/common/cnxk/roc_npa.c
+++ b/drivers/common/cnxk/roc_npa.c
@@ -710,7 +710,7 @@ npa_lf_init(struct dev *dev, struct plt_pci_device *pci_dev)
lf->pf_func = dev->pf_func;
lf->npa_msixoff = npa_msixoff;
- lf->intr_handle = &pci_dev->intr_handle;
+ lf->intr_handle = pci_dev->intr_handle;
lf->pci_dev = pci_dev;
idev->npa_pf_func = dev->pf_func;
diff --git a/drivers/common/cnxk/roc_platform.h b/drivers/common/cnxk/roc_platform.h
index 241655b334..c707a7bdf4 100644
--- a/drivers/common/cnxk/roc_platform.h
+++ b/drivers/common/cnxk/roc_platform.h
@@ -103,6 +103,33 @@
#define plt_thread_is_intr rte_thread_is_intr
#define plt_intr_callback_fn rte_intr_callback_fn
+#define plt_intr_efd_counter_size_get rte_intr_efd_counter_size_get
+#define plt_intr_efd_counter_size_set rte_intr_efd_counter_size_set
+#define plt_intr_vec_list_index_get rte_intr_vec_list_index_get
+#define plt_intr_vec_list_index_set rte_intr_vec_list_index_set
+#define plt_intr_vec_list_alloc rte_intr_vec_list_alloc
+#define plt_intr_vec_list_free rte_intr_vec_list_free
+#define plt_intr_fd_set rte_intr_fd_set
+#define plt_intr_fd_get rte_intr_fd_get
+#define plt_intr_dev_fd_get rte_intr_dev_fd_get
+#define plt_intr_dev_fd_set rte_intr_dev_fd_set
+#define plt_intr_type_get rte_intr_type_get
+#define plt_intr_type_set rte_intr_type_set
+#define plt_intr_instance_alloc rte_intr_instance_alloc
+#define plt_intr_instance_copy rte_intr_instance_copy
+#define plt_intr_instance_free rte_intr_instance_free
+#define plt_intr_event_list_update rte_intr_event_list_update
+#define plt_intr_max_intr_get rte_intr_max_intr_get
+#define plt_intr_max_intr_set rte_intr_max_intr_set
+#define plt_intr_nb_efd_get rte_intr_nb_efd_get
+#define plt_intr_nb_efd_set rte_intr_nb_efd_set
+#define plt_intr_nb_intr_get rte_intr_nb_intr_get
+#define plt_intr_nb_intr_set rte_intr_nb_intr_set
+#define plt_intr_efds_index_get rte_intr_efds_index_get
+#define plt_intr_efds_index_set rte_intr_efds_index_set
+#define plt_intr_elist_index_get rte_intr_elist_index_get
+#define plt_intr_elist_index_set rte_intr_elist_index_set
+
#define plt_alarm_set rte_eal_alarm_set
#define plt_alarm_cancel rte_eal_alarm_cancel
@@ -165,7 +192,7 @@ extern int cnxk_logtype_tm;
#define plt_dbg(subsystem, fmt, args...) \
rte_log(RTE_LOG_DEBUG, cnxk_logtype_##subsystem, \
"[%s] %s():%u " fmt "\n", #subsystem, __func__, __LINE__, \
- ##args)
+##args)
#define plt_base_dbg(fmt, ...) plt_dbg(base, fmt, ##__VA_ARGS__)
#define plt_cpt_dbg(fmt, ...) plt_dbg(cpt, fmt, ##__VA_ARGS__)
@@ -185,18 +212,18 @@ extern int cnxk_logtype_tm;
#ifdef __cplusplus
#define CNXK_PCI_ID(subsystem_dev, dev) \
- { \
- RTE_CLASS_ANY_ID, PCI_VENDOR_ID_CAVIUM, (dev), RTE_PCI_ANY_ID, \
- (subsystem_dev), \
- }
+{ \
+ RTE_CLASS_ANY_ID, PCI_VENDOR_ID_CAVIUM, (dev), RTE_PCI_ANY_ID, \
+ (subsystem_dev), \
+}
#else
#define CNXK_PCI_ID(subsystem_dev, dev) \
- { \
- .class_id = RTE_CLASS_ANY_ID, \
- .vendor_id = PCI_VENDOR_ID_CAVIUM, .device_id = (dev), \
- .subsystem_vendor_id = RTE_PCI_ANY_ID, \
- .subsystem_device_id = (subsystem_dev), \
- }
+{ \
+ .class_id = RTE_CLASS_ANY_ID, \
+ .vendor_id = PCI_VENDOR_ID_CAVIUM, .device_id = (dev), \
+ .subsystem_vendor_id = RTE_PCI_ANY_ID, \
+ .subsystem_device_id = (subsystem_dev), \
+}
#endif
__rte_internal
diff --git a/drivers/common/cnxk/roc_sso.c b/drivers/common/cnxk/roc_sso.c
index bdf973fc2a..762893f3dc 100644
--- a/drivers/common/cnxk/roc_sso.c
+++ b/drivers/common/cnxk/roc_sso.c
@@ -505,7 +505,7 @@ roc_sso_rsrc_init(struct roc_sso *roc_sso, uint8_t nb_hws, uint16_t nb_hwgrp)
goto sso_msix_fail;
}
- rc = sso_register_irqs_priv(roc_sso, &sso->pci_dev->intr_handle, nb_hws,
+ rc = sso_register_irqs_priv(roc_sso, sso->pci_dev->intr_handle, nb_hws,
nb_hwgrp);
if (rc < 0) {
plt_err("Failed to register SSO LF IRQs");
@@ -535,7 +535,7 @@ roc_sso_rsrc_fini(struct roc_sso *roc_sso)
if (!roc_sso->nb_hws && !roc_sso->nb_hwgrp)
return;
- sso_unregister_irqs_priv(roc_sso, &sso->pci_dev->intr_handle,
+ sso_unregister_irqs_priv(roc_sso, sso->pci_dev->intr_handle,
roc_sso->nb_hws, roc_sso->nb_hwgrp);
sso_lf_free(&sso->dev, SSO_LF_TYPE_HWS, roc_sso->nb_hws);
sso_lf_free(&sso->dev, SSO_LF_TYPE_HWGRP, roc_sso->nb_hwgrp);
diff --git a/drivers/common/cnxk/roc_tim.c b/drivers/common/cnxk/roc_tim.c
index 387164bb1d..534b697bee 100644
--- a/drivers/common/cnxk/roc_tim.c
+++ b/drivers/common/cnxk/roc_tim.c
@@ -200,7 +200,7 @@ roc_tim_lf_alloc(struct roc_tim *roc_tim, uint8_t ring_id, uint64_t *clk)
if (clk)
*clk = rsp->tenns_clk;
- rc = tim_register_irq_priv(roc_tim, &sso->pci_dev->intr_handle, ring_id,
+ rc = tim_register_irq_priv(roc_tim, sso->pci_dev->intr_handle, ring_id,
tim->tim_msix_offsets[ring_id]);
if (rc < 0) {
plt_tim_dbg("Failed to register Ring[%d] IRQ", ring_id);
@@ -223,7 +223,7 @@ roc_tim_lf_free(struct roc_tim *roc_tim, uint8_t ring_id)
struct tim_ring_req *req;
int rc = -ENOSPC;
- tim_unregister_irq_priv(roc_tim, &sso->pci_dev->intr_handle, ring_id,
+ tim_unregister_irq_priv(roc_tim, sso->pci_dev->intr_handle, ring_id,
tim->tim_msix_offsets[ring_id]);
req = mbox_alloc_msg_tim_lf_free(dev->mbox);
diff --git a/drivers/common/octeontx2/otx2_dev.c b/drivers/common/octeontx2/otx2_dev.c
index ce4f0e7ca9..08dca87848 100644
--- a/drivers/common/octeontx2/otx2_dev.c
+++ b/drivers/common/octeontx2/otx2_dev.c
@@ -643,7 +643,7 @@ otx2_af_pf_mbox_irq(void *param)
static int
mbox_register_pf_irq(struct rte_pci_device *pci_dev, struct otx2_dev *dev)
{
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
int i, rc;
/* HW clear irq */
@@ -693,7 +693,7 @@ mbox_register_pf_irq(struct rte_pci_device *pci_dev, struct otx2_dev *dev)
static int
mbox_register_vf_irq(struct rte_pci_device *pci_dev, struct otx2_dev *dev)
{
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
int rc;
/* Clear irq */
@@ -726,7 +726,7 @@ mbox_register_irq(struct rte_pci_device *pci_dev, struct otx2_dev *dev)
static void
mbox_unregister_pf_irq(struct rte_pci_device *pci_dev, struct otx2_dev *dev)
{
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
int i;
/* HW clear irq */
@@ -758,7 +758,7 @@ mbox_unregister_pf_irq(struct rte_pci_device *pci_dev, struct otx2_dev *dev)
static void
mbox_unregister_vf_irq(struct rte_pci_device *pci_dev, struct otx2_dev *dev)
{
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
/* Clear irq */
otx2_write64(~0ull, dev->bar2 + RVU_VF_INT_ENA_W1C);
@@ -841,7 +841,7 @@ otx2_pf_vf_flr_irq(void *param)
static int
vf_flr_unregister_irqs(struct rte_pci_device *pci_dev, struct otx2_dev *dev)
{
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
int i;
otx2_base_dbg("Unregister VF FLR interrupts for %s", pci_dev->name);
@@ -862,7 +862,7 @@ vf_flr_unregister_irqs(struct rte_pci_device *pci_dev, struct otx2_dev *dev)
static int
vf_flr_register_irqs(struct rte_pci_device *pci_dev, struct otx2_dev *dev)
{
- struct rte_intr_handle *handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *handle = pci_dev->intr_handle;
int i, rc;
otx2_base_dbg("Register VF FLR interrupts for %s", pci_dev->name);
@@ -1039,7 +1039,7 @@ otx2_dev_priv_init(struct rte_pci_device *pci_dev, void *otx2_dev)
void
otx2_dev_fini(struct rte_pci_device *pci_dev, void *otx2_dev)
{
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
struct otx2_dev *dev = otx2_dev;
struct otx2_idev_cfg *idev;
struct otx2_mbox *mbox;
diff --git a/drivers/common/octeontx2/otx2_irq.c b/drivers/common/octeontx2/otx2_irq.c
index c0137ff36d..93fc95c0e1 100644
--- a/drivers/common/octeontx2/otx2_irq.c
+++ b/drivers/common/octeontx2/otx2_irq.c
@@ -26,11 +26,12 @@ static int
irq_get_info(struct rte_intr_handle *intr_handle)
{
struct vfio_irq_info irq = { .argsz = sizeof(irq) };
- int rc;
+ int rc, vfio_dev_fd;
irq.index = VFIO_PCI_MSIX_IRQ_INDEX;
- rc = ioctl(intr_handle->vfio_dev_fd, VFIO_DEVICE_GET_IRQ_INFO, &irq);
+ vfio_dev_fd = rte_intr_dev_fd_get(intr_handle);
+ rc = ioctl(vfio_dev_fd, VFIO_DEVICE_GET_IRQ_INFO, &irq);
if (rc < 0) {
otx2_err("Failed to get IRQ info rc=%d errno=%d", rc, errno);
return rc;
@@ -41,10 +42,13 @@ irq_get_info(struct rte_intr_handle *intr_handle)
if (irq.count > MAX_INTR_VEC_ID) {
otx2_err("HW max=%d > MAX_INTR_VEC_ID: %d",
- intr_handle->max_intr, MAX_INTR_VEC_ID);
- intr_handle->max_intr = MAX_INTR_VEC_ID;
+ rte_intr_max_intr_get(intr_handle),
+ MAX_INTR_VEC_ID);
+ if (rte_intr_max_intr_set(intr_handle, MAX_INTR_VEC_ID))
+ return -1;
} else {
- intr_handle->max_intr = irq.count;
+ if (rte_intr_max_intr_set(intr_handle, irq.count))
+ return -1;
}
return 0;
@@ -55,12 +59,12 @@ irq_config(struct rte_intr_handle *intr_handle, unsigned int vec)
{
char irq_set_buf[MSIX_IRQ_SET_BUF_LEN];
struct vfio_irq_set *irq_set;
+ int len, rc, vfio_dev_fd;
int32_t *fd_ptr;
- int len, rc;
- if (vec > intr_handle->max_intr) {
+ if (vec > (uint32_t)rte_intr_max_intr_get(intr_handle)) {
otx2_err("vector=%d greater than max_intr=%d", vec,
- intr_handle->max_intr);
+ rte_intr_max_intr_get(intr_handle));
return -EINVAL;
}
@@ -77,9 +81,10 @@ irq_config(struct rte_intr_handle *intr_handle, unsigned int vec)
/* Use vec fd to set interrupt vectors */
fd_ptr = (int32_t *)&irq_set->data[0];
- fd_ptr[0] = intr_handle->efds[vec];
+ fd_ptr[0] = rte_intr_efds_index_get(intr_handle, vec);
- rc = ioctl(intr_handle->vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
+ vfio_dev_fd = rte_intr_dev_fd_get(intr_handle);
+ rc = ioctl(vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
if (rc)
otx2_err("Failed to set_irqs vector=0x%x rc=%d", vec, rc);
@@ -91,23 +96,24 @@ irq_init(struct rte_intr_handle *intr_handle)
{
char irq_set_buf[MSIX_IRQ_SET_BUF_LEN];
struct vfio_irq_set *irq_set;
+ int len, rc, vfio_dev_fd;
int32_t *fd_ptr;
- int len, rc;
uint32_t i;
- if (intr_handle->max_intr > MAX_INTR_VEC_ID) {
+ if (rte_intr_max_intr_get(intr_handle) > MAX_INTR_VEC_ID) {
otx2_err("Max_intr=%d greater than MAX_INTR_VEC_ID=%d",
- intr_handle->max_intr, MAX_INTR_VEC_ID);
+ rte_intr_max_intr_get(intr_handle),
+ MAX_INTR_VEC_ID);
return -ERANGE;
}
len = sizeof(struct vfio_irq_set) +
- sizeof(int32_t) * intr_handle->max_intr;
+ sizeof(int32_t) * rte_intr_max_intr_get(intr_handle);
irq_set = (struct vfio_irq_set *)irq_set_buf;
irq_set->argsz = len;
irq_set->start = 0;
- irq_set->count = intr_handle->max_intr;
+ irq_set->count = rte_intr_max_intr_get(intr_handle);
irq_set->flags = VFIO_IRQ_SET_DATA_EVENTFD |
VFIO_IRQ_SET_ACTION_TRIGGER;
irq_set->index = VFIO_PCI_MSIX_IRQ_INDEX;
@@ -116,7 +122,8 @@ irq_init(struct rte_intr_handle *intr_handle)
for (i = 0; i < irq_set->count; i++)
fd_ptr[i] = -1;
- rc = ioctl(intr_handle->vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
+ vfio_dev_fd = rte_intr_dev_fd_get(intr_handle);
+ rc = ioctl(vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
if (rc)
otx2_err("Failed to set irqs vector rc=%d", rc);
@@ -131,7 +138,8 @@ int
otx2_disable_irqs(struct rte_intr_handle *intr_handle)
{
/* Clear max_intr to indicate re-init next time */
- intr_handle->max_intr = 0;
+ if (rte_intr_max_intr_set(intr_handle, 0))
+ return -1;
return rte_intr_disable(intr_handle);
}
@@ -143,42 +151,50 @@ int
otx2_register_irq(struct rte_intr_handle *intr_handle,
rte_intr_callback_fn cb, void *data, unsigned int vec)
{
- struct rte_intr_handle tmp_handle;
- int rc;
+ struct rte_intr_handle *tmp_handle;
+ uint32_t nb_efd, tmp_nb_efd;
+ int rc, fd;
/* If no max_intr read from VFIO */
- if (intr_handle->max_intr == 0) {
+ if (rte_intr_max_intr_get(intr_handle) == 0) {
irq_get_info(intr_handle);
irq_init(intr_handle);
}
- if (vec > intr_handle->max_intr) {
+ if (vec > (uint32_t)rte_intr_max_intr_get(intr_handle)) {
otx2_err("Vector=%d greater than max_intr=%d", vec,
- intr_handle->max_intr);
+ rte_intr_max_intr_get(intr_handle));
return -EINVAL;
}
- tmp_handle = *intr_handle;
+ tmp_handle = intr_handle;
/* Create new eventfd for interrupt vector */
- tmp_handle.fd = eventfd(0, EFD_NONBLOCK | EFD_CLOEXEC);
- if (tmp_handle.fd == -1)
+ fd = eventfd(0, EFD_NONBLOCK | EFD_CLOEXEC);
+ if (fd == -1)
return -ENODEV;
+ if (rte_intr_fd_set(tmp_handle, fd))
+ return errno;
+
/* Register vector interrupt callback */
- rc = rte_intr_callback_register(&tmp_handle, cb, data);
+ rc = rte_intr_callback_register(tmp_handle, cb, data);
if (rc) {
otx2_err("Failed to register vector:0x%x irq callback.", vec);
return rc;
}
- intr_handle->efds[vec] = tmp_handle.fd;
- intr_handle->nb_efd = (vec > intr_handle->nb_efd) ?
- vec : intr_handle->nb_efd;
- if ((intr_handle->nb_efd + 1) > intr_handle->max_intr)
- intr_handle->max_intr = intr_handle->nb_efd + 1;
+ rte_intr_efds_index_set(intr_handle, vec, fd);
+ nb_efd = (vec > (uint32_t)rte_intr_nb_efd_get(intr_handle)) ?
+ vec : (uint32_t)rte_intr_nb_efd_get(intr_handle);
+ rte_intr_nb_efd_set(intr_handle, nb_efd);
+
+ tmp_nb_efd = rte_intr_nb_efd_get(intr_handle) + 1;
+ if (tmp_nb_efd > (uint32_t)rte_intr_max_intr_get(intr_handle))
+ rte_intr_max_intr_set(intr_handle, tmp_nb_efd);
- otx2_base_dbg("Enable vector:0x%x for vfio (efds: %d, max:%d)",
- vec, intr_handle->nb_efd, intr_handle->max_intr);
+ otx2_base_dbg("Enable vector:0x%x for vfio (efds: %d, max:%d)", vec,
+ rte_intr_nb_efd_get(intr_handle),
+ rte_intr_max_intr_get(intr_handle));
/* Enable MSIX vectors to VFIO */
return irq_config(intr_handle, vec);
@@ -192,24 +208,27 @@ void
otx2_unregister_irq(struct rte_intr_handle *intr_handle,
rte_intr_callback_fn cb, void *data, unsigned int vec)
{
- struct rte_intr_handle tmp_handle;
+ struct rte_intr_handle *tmp_handle;
uint8_t retries = 5; /* 5 ms */
- int rc;
+ int rc, fd;
- if (vec > intr_handle->max_intr) {
+ if (vec > (uint32_t)rte_intr_max_intr_get(intr_handle)) {
otx2_err("Error unregistering MSI-X interrupts vec:%d > %d",
- vec, intr_handle->max_intr);
+ vec, rte_intr_max_intr_get(intr_handle));
return;
}
- tmp_handle = *intr_handle;
- tmp_handle.fd = intr_handle->efds[vec];
- if (tmp_handle.fd == -1)
+ tmp_handle = intr_handle;
+ fd = rte_intr_efds_index_get(intr_handle, vec);
+ if (fd == -1)
+ return;
+
+ if (rte_intr_fd_set(tmp_handle, fd))
return;
do {
- /* Un-register callback func from eal lib */
- rc = rte_intr_callback_unregister(&tmp_handle, cb, data);
+ /* Un-register callback func from platform lib */
+ rc = rte_intr_callback_unregister(tmp_handle, cb, data);
/* Retry only if -EAGAIN */
if (rc != -EAGAIN)
break;
@@ -218,18 +237,18 @@ otx2_unregister_irq(struct rte_intr_handle *intr_handle,
} while (retries);
if (rc < 0) {
- otx2_err("Error unregistering MSI-X intr vec %d cb, rc=%d",
- vec, rc);
+ otx2_err("Error unregistering MSI-X vec %d cb, rc=%d", vec, rc);
return;
}
- otx2_base_dbg("Disable vector:0x%x for vfio (efds: %d, max:%d)",
- vec, intr_handle->nb_efd, intr_handle->max_intr);
+ otx2_base_dbg("Disable vector:0x%x for vfio (efds: %d, max:%d)", vec,
+ rte_intr_nb_efd_get(intr_handle),
+ rte_intr_max_intr_get(intr_handle));
- if (intr_handle->efds[vec] != -1)
- close(intr_handle->efds[vec]);
+ if (rte_intr_efds_index_get(intr_handle, vec) != -1)
+ close(rte_intr_efds_index_get(intr_handle, vec));
/* Disable MSIX vectors from VFIO */
- intr_handle->efds[vec] = -1;
+ rte_intr_efds_index_set(intr_handle, vec, -1);
irq_config(intr_handle, vec);
}
diff --git a/drivers/crypto/octeontx2/otx2_cryptodev_hw_access.c b/drivers/crypto/octeontx2/otx2_cryptodev_hw_access.c
index bf90d095fe..d5d6b5bad7 100644
--- a/drivers/crypto/octeontx2/otx2_cryptodev_hw_access.c
+++ b/drivers/crypto/octeontx2/otx2_cryptodev_hw_access.c
@@ -36,7 +36,7 @@ otx2_cpt_lf_err_intr_unregister(const struct rte_cryptodev *dev,
uint16_t msix_off, uintptr_t base)
{
struct rte_pci_device *pci_dev = RTE_DEV_TO_PCI(dev->device);
- struct rte_intr_handle *handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *handle = pci_dev->intr_handle;
/* Disable error interrupts */
otx2_write64(~0ull, base + OTX2_CPT_LF_MISC_INT_ENA_W1C);
@@ -65,7 +65,7 @@ otx2_cpt_lf_err_intr_register(const struct rte_cryptodev *dev,
uint16_t msix_off, uintptr_t base)
{
struct rte_pci_device *pci_dev = RTE_DEV_TO_PCI(dev->device);
- struct rte_intr_handle *handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *handle = pci_dev->intr_handle;
int ret;
/* Disable error interrupts */
diff --git a/drivers/event/octeontx2/otx2_evdev_irq.c b/drivers/event/octeontx2/otx2_evdev_irq.c
index a2033646e6..9b7ad27b04 100644
--- a/drivers/event/octeontx2/otx2_evdev_irq.c
+++ b/drivers/event/octeontx2/otx2_evdev_irq.c
@@ -29,7 +29,7 @@ sso_lf_register_irq(const struct rte_eventdev *event_dev, uint16_t ggrp_msixoff,
uintptr_t base)
{
struct rte_pci_device *pci_dev = RTE_DEV_TO_PCI(event_dev->dev);
- struct rte_intr_handle *handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *handle = pci_dev->intr_handle;
int rc, vec;
vec = ggrp_msixoff + SSO_LF_INT_VEC_GRP;
@@ -66,7 +66,7 @@ ssow_lf_register_irq(const struct rte_eventdev *event_dev, uint16_t gws_msixoff,
uintptr_t base)
{
struct rte_pci_device *pci_dev = RTE_DEV_TO_PCI(event_dev->dev);
- struct rte_intr_handle *handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *handle = pci_dev->intr_handle;
int rc, vec;
vec = gws_msixoff + SSOW_LF_INT_VEC_IOP;
@@ -86,7 +86,7 @@ sso_lf_unregister_irq(const struct rte_eventdev *event_dev,
uint16_t ggrp_msixoff, uintptr_t base)
{
struct rte_pci_device *pci_dev = RTE_DEV_TO_PCI(event_dev->dev);
- struct rte_intr_handle *handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *handle = pci_dev->intr_handle;
int vec;
vec = ggrp_msixoff + SSO_LF_INT_VEC_GRP;
@@ -101,7 +101,7 @@ ssow_lf_unregister_irq(const struct rte_eventdev *event_dev,
uint16_t gws_msixoff, uintptr_t base)
{
struct rte_pci_device *pci_dev = RTE_DEV_TO_PCI(event_dev->dev);
- struct rte_intr_handle *handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *handle = pci_dev->intr_handle;
int vec;
vec = gws_msixoff + SSOW_LF_INT_VEC_IOP;
@@ -198,7 +198,7 @@ static int
tim_lf_register_irq(struct rte_pci_device *pci_dev, uint16_t tim_msixoff,
uintptr_t base)
{
- struct rte_intr_handle *handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *handle = pci_dev->intr_handle;
int rc, vec;
vec = tim_msixoff + TIM_LF_INT_VEC_NRSPERR_INT;
@@ -226,7 +226,7 @@ static void
tim_lf_unregister_irq(struct rte_pci_device *pci_dev, uint16_t tim_msixoff,
uintptr_t base)
{
- struct rte_intr_handle *handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *handle = pci_dev->intr_handle;
int vec;
vec = tim_msixoff + TIM_LF_INT_VEC_NRSPERR_INT;
diff --git a/drivers/mempool/octeontx2/otx2_mempool.c b/drivers/mempool/octeontx2/otx2_mempool.c
index fb630fecf8..f63dc06ef2 100644
--- a/drivers/mempool/octeontx2/otx2_mempool.c
+++ b/drivers/mempool/octeontx2/otx2_mempool.c
@@ -301,7 +301,7 @@ otx2_npa_lf_init(struct rte_pci_device *pci_dev, void *otx2_dev)
lf->pf_func = dev->pf_func;
lf->npa_msixoff = npa_msixoff;
- lf->intr_handle = &pci_dev->intr_handle;
+ lf->intr_handle = pci_dev->intr_handle;
lf->pci_dev = pci_dev;
idev->npa_pf_func = dev->pf_func;
diff --git a/drivers/net/atlantic/atl_ethdev.c b/drivers/net/atlantic/atl_ethdev.c
index 0ce35eb519..a77d51abc4 100644
--- a/drivers/net/atlantic/atl_ethdev.c
+++ b/drivers/net/atlantic/atl_ethdev.c
@@ -360,7 +360,7 @@ eth_atl_dev_init(struct rte_eth_dev *eth_dev)
{
struct atl_adapter *adapter = eth_dev->data->dev_private;
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
struct aq_hw_s *hw = ATL_DEV_PRIVATE_TO_HW(eth_dev->data->dev_private);
int err = 0;
@@ -479,7 +479,7 @@ atl_dev_start(struct rte_eth_dev *dev)
{
struct aq_hw_s *hw = ATL_DEV_PRIVATE_TO_HW(dev->data->dev_private);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
uint32_t intr_vector = 0;
int status;
int err;
@@ -525,10 +525,9 @@ atl_dev_start(struct rte_eth_dev *dev)
}
}
- if (rte_intr_dp_is_en(intr_handle) && !intr_handle->intr_vec) {
- intr_handle->intr_vec = rte_zmalloc("intr_vec",
- dev->data->nb_rx_queues * sizeof(int), 0);
- if (intr_handle->intr_vec == NULL) {
+ if (rte_intr_dp_is_en(intr_handle)) {
+ if (rte_intr_vec_list_alloc(intr_handle, "intr_vec",
+ dev->data->nb_rx_queues)) {
PMD_INIT_LOG(ERR, "Failed to allocate %d rx_queues"
" intr_vec", dev->data->nb_rx_queues);
return -ENOMEM;
@@ -608,7 +607,7 @@ atl_dev_stop(struct rte_eth_dev *dev)
struct aq_hw_s *hw =
ATL_DEV_PRIVATE_TO_HW(dev->data->dev_private);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
PMD_INIT_FUNC_TRACE();
dev->data->dev_started = 0;
@@ -638,10 +637,7 @@ atl_dev_stop(struct rte_eth_dev *dev)
/* Clean datapath event and queue/vec mapping */
rte_intr_efd_disable(intr_handle);
- if (intr_handle->intr_vec != NULL) {
- rte_free(intr_handle->intr_vec);
- intr_handle->intr_vec = NULL;
- }
+ rte_intr_vec_list_free(intr_handle);
return 0;
}
@@ -692,7 +688,7 @@ static int
atl_dev_close(struct rte_eth_dev *dev)
{
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
struct aq_hw_s *hw;
int ret;
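
The Rx vector list handling converted above is the same pattern repeated across most PMDs in this patch; roughly, it reduces to the sketch below. The example_* names are placeholders, and the prototypes are assumed to be exported from rte_interrupts.h as this series proposes; the rte_intr_* accessors themselves are used with the same arguments as in the hunks above.

#include <errno.h>
#include <stdint.h>
#include <rte_interrupts.h>

/* Sketch of the dev_start side: allocate the vector list through the API
 * instead of rte_zmalloc("intr_vec", ...) plus a NULL check.
 */
static int
example_rxq_intr_setup(struct rte_intr_handle *intr_handle,
                       uint16_t nb_rx_queues)
{
        if (rte_intr_dp_is_en(intr_handle)) {
                if (rte_intr_vec_list_alloc(intr_handle, "intr_vec",
                                            nb_rx_queues))
                        return -ENOMEM;
        }
        return 0;
}

/* Sketch of the dev_stop side: one call replaces the
 * rte_free(intr_handle->intr_vec); intr_handle->intr_vec = NULL; pair.
 */
static void
example_rxq_intr_teardown(struct rte_intr_handle *intr_handle)
{
        rte_intr_efd_disable(intr_handle);
        rte_intr_vec_list_free(intr_handle);
}
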
diff --git a/drivers/net/avp/avp_ethdev.c b/drivers/net/avp/avp_ethdev.c
index 6cb8bb4338..60b896cf7a 100644
--- a/drivers/net/avp/avp_ethdev.c
+++ b/drivers/net/avp/avp_ethdev.c
@@ -711,7 +711,7 @@ avp_dev_interrupt_handler(void *data)
status);
/* re-enable UIO interrupt handling */
- ret = rte_intr_ack(&pci_dev->intr_handle);
+ ret = rte_intr_ack(pci_dev->intr_handle);
if (ret < 0) {
PMD_DRV_LOG(ERR, "Failed to re-enable UIO interrupts, ret=%d\n",
ret);
@@ -730,7 +730,7 @@ avp_dev_enable_interrupts(struct rte_eth_dev *eth_dev)
return -EINVAL;
/* enable UIO interrupt handling */
- ret = rte_intr_enable(&pci_dev->intr_handle);
+ ret = rte_intr_enable(pci_dev->intr_handle);
if (ret < 0) {
PMD_DRV_LOG(ERR, "Failed to enable UIO interrupts, ret=%d\n",
ret);
@@ -759,7 +759,7 @@ avp_dev_disable_interrupts(struct rte_eth_dev *eth_dev)
RTE_PTR_ADD(registers, RTE_AVP_INTERRUPT_MASK_OFFSET));
/* enable UIO interrupt handling */
- ret = rte_intr_disable(&pci_dev->intr_handle);
+ ret = rte_intr_disable(pci_dev->intr_handle);
if (ret < 0) {
PMD_DRV_LOG(ERR, "Failed to disable UIO interrupts, ret=%d\n",
ret);
@@ -776,7 +776,7 @@ avp_dev_setup_interrupts(struct rte_eth_dev *eth_dev)
int ret;
/* register a callback handler with UIO for interrupt notifications */
- ret = rte_intr_callback_register(&pci_dev->intr_handle,
+ ret = rte_intr_callback_register(pci_dev->intr_handle,
avp_dev_interrupt_handler,
(void *)eth_dev);
if (ret < 0) {
diff --git a/drivers/net/axgbe/axgbe_ethdev.c b/drivers/net/axgbe/axgbe_ethdev.c
index ebd5411fdd..89e4a3dd71 100644
--- a/drivers/net/axgbe/axgbe_ethdev.c
+++ b/drivers/net/axgbe/axgbe_ethdev.c
@@ -313,7 +313,7 @@ axgbe_dev_interrupt_handler(void *param)
}
}
/* Unmask interrupts since disabled after generation */
- rte_intr_ack(&pdata->pci_dev->intr_handle);
+ rte_intr_ack(pdata->pci_dev->intr_handle);
}
/*
@@ -374,7 +374,7 @@ axgbe_dev_start(struct rte_eth_dev *dev)
}
/* enable uio/vfio intr/eventfd mapping */
- rte_intr_enable(&pdata->pci_dev->intr_handle);
+ rte_intr_enable(pdata->pci_dev->intr_handle);
/* phy start*/
pdata->phy_if.phy_start(pdata);
@@ -404,7 +404,7 @@ axgbe_dev_stop(struct rte_eth_dev *dev)
PMD_INIT_FUNC_TRACE();
- rte_intr_disable(&pdata->pci_dev->intr_handle);
+ rte_intr_disable(pdata->pci_dev->intr_handle);
if (rte_bit_relaxed_get32(AXGBE_STOPPED, &pdata->dev_state))
return 0;
@@ -2323,7 +2323,7 @@ eth_axgbe_dev_init(struct rte_eth_dev *eth_dev)
return ret;
}
- rte_intr_callback_register(&pci_dev->intr_handle,
+ rte_intr_callback_register(pci_dev->intr_handle,
axgbe_dev_interrupt_handler,
(void *)eth_dev);
PMD_INIT_LOG(DEBUG, "port %d vendorID=0x%x deviceID=0x%x",
@@ -2347,8 +2347,8 @@ axgbe_dev_close(struct rte_eth_dev *eth_dev)
axgbe_dev_clear_queues(eth_dev);
/* disable uio intr before callback unregister */
- rte_intr_disable(&pci_dev->intr_handle);
- rte_intr_callback_unregister(&pci_dev->intr_handle,
+ rte_intr_disable(pci_dev->intr_handle);
+ rte_intr_callback_unregister(pci_dev->intr_handle,
axgbe_dev_interrupt_handler,
(void *)eth_dev);
diff --git a/drivers/net/axgbe/axgbe_mdio.c b/drivers/net/axgbe/axgbe_mdio.c
index 4f98e695ae..35ffda84f1 100644
--- a/drivers/net/axgbe/axgbe_mdio.c
+++ b/drivers/net/axgbe/axgbe_mdio.c
@@ -933,7 +933,7 @@ static int __axgbe_phy_config_aneg(struct axgbe_port *pdata)
}
/* Disable auto-negotiation interrupt */
- rte_intr_disable(&pdata->pci_dev->intr_handle);
+ rte_intr_disable(pdata->pci_dev->intr_handle);
/* Start auto-negotiation in a supported mode */
if (axgbe_use_mode(pdata, AXGBE_MODE_KR)) {
@@ -951,7 +951,7 @@ static int __axgbe_phy_config_aneg(struct axgbe_port *pdata)
} else if (axgbe_use_mode(pdata, AXGBE_MODE_SGMII_100)) {
axgbe_set_mode(pdata, AXGBE_MODE_SGMII_100);
} else {
- rte_intr_enable(&pdata->pci_dev->intr_handle);
+ rte_intr_enable(pdata->pci_dev->intr_handle);
return -EINVAL;
}
@@ -964,7 +964,7 @@ static int __axgbe_phy_config_aneg(struct axgbe_port *pdata)
pdata->kx_state = AXGBE_RX_BPA;
/* Re-enable auto-negotiation interrupt */
- rte_intr_enable(&pdata->pci_dev->intr_handle);
+ rte_intr_enable(pdata->pci_dev->intr_handle);
axgbe_an37_enable_interrupts(pdata);
axgbe_an_init(pdata);
diff --git a/drivers/net/bnx2x/bnx2x_ethdev.c b/drivers/net/bnx2x/bnx2x_ethdev.c
index 463886f17a..a34b2f078b 100644
--- a/drivers/net/bnx2x/bnx2x_ethdev.c
+++ b/drivers/net/bnx2x/bnx2x_ethdev.c
@@ -134,7 +134,7 @@ bnx2x_interrupt_handler(void *param)
PMD_DEBUG_PERIODIC_LOG(INFO, sc, "Interrupt handled");
bnx2x_interrupt_action(dev, 1);
- rte_intr_ack(&sc->pci_dev->intr_handle);
+ rte_intr_ack(sc->pci_dev->intr_handle);
}
static void bnx2x_periodic_start(void *param)
@@ -234,10 +234,10 @@ bnx2x_dev_start(struct rte_eth_dev *dev)
}
if (IS_PF(sc)) {
- rte_intr_callback_register(&sc->pci_dev->intr_handle,
+ rte_intr_callback_register(sc->pci_dev->intr_handle,
bnx2x_interrupt_handler, (void *)dev);
- if (rte_intr_enable(&sc->pci_dev->intr_handle))
+ if (rte_intr_enable(sc->pci_dev->intr_handle))
PMD_DRV_LOG(ERR, sc, "rte_intr_enable failed");
}
@@ -262,8 +262,8 @@ bnx2x_dev_stop(struct rte_eth_dev *dev)
bnx2x_dev_rxtx_init_dummy(dev);
if (IS_PF(sc)) {
- rte_intr_disable(&sc->pci_dev->intr_handle);
- rte_intr_callback_unregister(&sc->pci_dev->intr_handle,
+ rte_intr_disable(sc->pci_dev->intr_handle);
+ rte_intr_callback_unregister(sc->pci_dev->intr_handle,
bnx2x_interrupt_handler, (void *)dev);
/* stop the periodic callout */
diff --git a/drivers/net/bnxt/bnxt_ethdev.c b/drivers/net/bnxt/bnxt_ethdev.c
index aa7e7fdc85..f13432ac15 100644
--- a/drivers/net/bnxt/bnxt_ethdev.c
+++ b/drivers/net/bnxt/bnxt_ethdev.c
@@ -735,7 +735,7 @@ static int bnxt_alloc_prev_ring_stats(struct bnxt *bp)
static int bnxt_start_nic(struct bnxt *bp)
{
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(bp->eth_dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
uint32_t intr_vector = 0;
uint32_t queue_id, base = BNXT_MISC_VEC_ID;
uint32_t vec = BNXT_MISC_VEC_ID;
@@ -847,26 +847,24 @@ static int bnxt_start_nic(struct bnxt *bp)
return rc;
}
- if (rte_intr_dp_is_en(intr_handle) && !intr_handle->intr_vec) {
- intr_handle->intr_vec =
- rte_zmalloc("intr_vec",
- bp->eth_dev->data->nb_rx_queues *
- sizeof(int), 0);
- if (intr_handle->intr_vec == NULL) {
+ if (rte_intr_dp_is_en(intr_handle)) {
+ if (rte_intr_vec_list_alloc(intr_handle, "intr_vec",
+ bp->eth_dev->data->nb_rx_queues)) {
PMD_DRV_LOG(ERR, "Failed to allocate %d rx_queues"
" intr_vec", bp->eth_dev->data->nb_rx_queues);
rc = -ENOMEM;
goto err_out;
}
- PMD_DRV_LOG(DEBUG, "intr_handle->intr_vec = %p "
- "intr_handle->nb_efd = %d intr_handle->max_intr = %d\n",
- intr_handle->intr_vec, intr_handle->nb_efd,
- intr_handle->max_intr);
+ PMD_DRV_LOG(DEBUG, "intr_handle->nb_efd = %d "
+ "intr_handle->max_intr = %d\n",
+ rte_intr_nb_efd_get(intr_handle),
+ rte_intr_max_intr_get(intr_handle));
for (queue_id = 0; queue_id < bp->eth_dev->data->nb_rx_queues;
queue_id++) {
- intr_handle->intr_vec[queue_id] =
- vec + BNXT_RX_VEC_START;
- if (vec < base + intr_handle->nb_efd - 1)
+ rte_intr_vec_list_index_set(intr_handle,
+ queue_id, vec + BNXT_RX_VEC_START);
+ if (vec < base + rte_intr_nb_efd_get(intr_handle)
+ - 1)
vec++;
}
}
@@ -1479,7 +1477,7 @@ static int bnxt_dev_stop(struct rte_eth_dev *eth_dev)
{
struct bnxt *bp = eth_dev->data->dev_private;
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
struct rte_eth_link link;
int ret;
@@ -1521,10 +1519,7 @@ static int bnxt_dev_stop(struct rte_eth_dev *eth_dev)
/* Clean queue intr-vector mapping */
rte_intr_efd_disable(intr_handle);
- if (intr_handle->intr_vec != NULL) {
- rte_free(intr_handle->intr_vec);
- intr_handle->intr_vec = NULL;
- }
+ rte_intr_vec_list_free(intr_handle);
bnxt_hwrm_port_clr_stats(bp);
bnxt_free_tx_mbufs(bp);
diff --git a/drivers/net/bnxt/bnxt_irq.c b/drivers/net/bnxt/bnxt_irq.c
index 122a1f9908..508abfc844 100644
--- a/drivers/net/bnxt/bnxt_irq.c
+++ b/drivers/net/bnxt/bnxt_irq.c
@@ -67,7 +67,7 @@ void bnxt_int_handler(void *param)
int bnxt_free_int(struct bnxt *bp)
{
- struct rte_intr_handle *intr_handle = &bp->pdev->intr_handle;
+ struct rte_intr_handle *intr_handle = bp->pdev->intr_handle;
struct bnxt_irq *irq = bp->irq_tbl;
int rc = 0;
@@ -170,7 +170,7 @@ int bnxt_setup_int(struct bnxt *bp)
int bnxt_request_int(struct bnxt *bp)
{
- struct rte_intr_handle *intr_handle = &bp->pdev->intr_handle;
+ struct rte_intr_handle *intr_handle = bp->pdev->intr_handle;
struct bnxt_irq *irq = bp->irq_tbl;
int rc = 0;
diff --git a/drivers/net/dpaa/dpaa_ethdev.c b/drivers/net/dpaa/dpaa_ethdev.c
index 59f4a93b3e..9d570781ac 100644
--- a/drivers/net/dpaa/dpaa_ethdev.c
+++ b/drivers/net/dpaa/dpaa_ethdev.c
@@ -219,7 +219,7 @@ dpaa_eth_dev_configure(struct rte_eth_dev *dev)
PMD_INIT_FUNC_TRACE();
dpaa_dev = container_of(rdev, struct rte_dpaa_device, device);
- intr_handle = &dpaa_dev->intr_handle;
+ intr_handle = dpaa_dev->intr_handle;
__fif = container_of(fif, struct __fman_if, __if);
/* Rx offloads which are enabled by default */
@@ -276,13 +276,14 @@ dpaa_eth_dev_configure(struct rte_eth_dev *dev)
}
/* if the interrupts were configured on this devices*/
- if (intr_handle && intr_handle->fd) {
+ if (intr_handle && rte_intr_fd_get(intr_handle)) {
if (dev->data->dev_conf.intr_conf.lsc != 0)
rte_intr_callback_register(intr_handle,
dpaa_interrupt_handler,
(void *)dev);
- ret = dpaa_intr_enable(__fif->node_name, intr_handle->fd);
+ ret = dpaa_intr_enable(__fif->node_name,
+ rte_intr_fd_get(intr_handle));
if (ret) {
if (dev->data->dev_conf.intr_conf.lsc != 0) {
rte_intr_callback_unregister(intr_handle,
@@ -389,9 +390,10 @@ static void dpaa_interrupt_handler(void *param)
int bytes_read;
dpaa_dev = container_of(rdev, struct rte_dpaa_device, device);
- intr_handle = &dpaa_dev->intr_handle;
+ intr_handle = dpaa_dev->intr_handle;
- bytes_read = read(intr_handle->fd, &buf, sizeof(uint64_t));
+ bytes_read = read(rte_intr_fd_get(intr_handle), &buf,
+ sizeof(uint64_t));
if (bytes_read < 0)
DPAA_PMD_ERR("Error reading eventfd\n");
dpaa_eth_link_update(dev, 0);
@@ -461,7 +463,7 @@ static int dpaa_eth_dev_close(struct rte_eth_dev *dev)
}
dpaa_dev = container_of(rdev, struct rte_dpaa_device, device);
- intr_handle = &dpaa_dev->intr_handle;
+ intr_handle = dpaa_dev->intr_handle;
__fif = container_of(fif, struct __fman_if, __if);
ret = dpaa_eth_dev_stop(dev);
@@ -470,7 +472,7 @@ static int dpaa_eth_dev_close(struct rte_eth_dev *dev)
if (link->link_status && !link->link_autoneg)
dpaa_restart_link_autoneg(__fif->node_name);
- if (intr_handle && intr_handle->fd &&
+ if (intr_handle && rte_intr_fd_get(intr_handle) &&
dev->data->dev_conf.intr_conf.lsc != 0) {
dpaa_intr_disable(__fif->node_name);
rte_intr_callback_unregister(intr_handle,
@@ -1101,20 +1103,33 @@ int dpaa_eth_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
dpaa_dev = container_of(rdev, struct rte_dpaa_device,
device);
- dev->intr_handle = &dpaa_dev->intr_handle;
- dev->intr_handle->intr_vec = rte_zmalloc(NULL,
- dpaa_push_mode_max_queue, 0);
- if (!dev->intr_handle->intr_vec) {
+ dev->intr_handle = dpaa_dev->intr_handle;
+ if (rte_intr_vec_list_alloc(dev->intr_handle,
+ NULL, dpaa_push_mode_max_queue)) {
DPAA_PMD_ERR("intr_vec alloc failed");
return -ENOMEM;
}
- dev->intr_handle->nb_efd = dpaa_push_mode_max_queue;
- dev->intr_handle->max_intr = dpaa_push_mode_max_queue;
+ if (rte_intr_nb_efd_set(dev->intr_handle,
+ dpaa_push_mode_max_queue))
+ return -rte_errno;
+
+ if (rte_intr_max_intr_set(dev->intr_handle,
+ dpaa_push_mode_max_queue))
+ return -rte_errno;
}
- dev->intr_handle->type = RTE_INTR_HANDLE_EXT;
- dev->intr_handle->intr_vec[queue_idx] = queue_idx + 1;
- dev->intr_handle->efds[queue_idx] = q_fd;
+ if (rte_intr_type_set(dev->intr_handle,
+ RTE_INTR_HANDLE_EXT))
+ return -rte_errno;
+
+ if (rte_intr_vec_list_index_set(dev->intr_handle,
+ queue_idx, queue_idx + 1))
+ return -rte_errno;
+
+ if (rte_intr_efds_index_set(dev->intr_handle, queue_idx,
+ q_fd))
+ return -rte_errno;
+
rxq->q_fd = q_fd;
}
rxq->bp_array = rte_dpaa_bpid_info;
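
dpaa additionally exercises the scalar setters (nb_efd, max_intr, type) and the per-index efds/vector setters, each of which reports failure through rte_errno. A rough sketch of that per-queue setup follows; the example_* names and parameters are placeholders, the rte_intr_* calls mirror the hunk above.

#include <stdint.h>
#include <rte_errno.h>
#include <rte_interrupts.h>

static int
example_attach_queue_eventfd(struct rte_intr_handle *intr_handle,
                             uint16_t queue_idx, int q_fd,
                             uint32_t max_queues)
{
        if (rte_intr_nb_efd_set(intr_handle, max_queues))
                return -rte_errno;
        if (rte_intr_max_intr_set(intr_handle, max_queues))
                return -rte_errno;
        if (rte_intr_type_set(intr_handle, RTE_INTR_HANDLE_EXT))
                return -rte_errno;
        /* Was: intr_handle->intr_vec[queue_idx] = queue_idx + 1; */
        if (rte_intr_vec_list_index_set(intr_handle, queue_idx,
                                        queue_idx + 1))
                return -rte_errno;
        /* Was: intr_handle->efds[queue_idx] = q_fd; */
        if (rte_intr_efds_index_set(intr_handle, queue_idx, q_fd))
                return -rte_errno;
        return 0;
}
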
diff --git a/drivers/net/dpaa2/dpaa2_ethdev.c b/drivers/net/dpaa2/dpaa2_ethdev.c
index ff8ae89922..f413d629c8 100644
--- a/drivers/net/dpaa2/dpaa2_ethdev.c
+++ b/drivers/net/dpaa2/dpaa2_ethdev.c
@@ -1153,7 +1153,7 @@ dpaa2_dev_start(struct rte_eth_dev *dev)
struct rte_intr_handle *intr_handle;
dpaa2_dev = container_of(rdev, struct rte_dpaa2_device, device);
- intr_handle = &dpaa2_dev->intr_handle;
+ intr_handle = dpaa2_dev->intr_handle;
PMD_INIT_FUNC_TRACE();
@@ -1224,8 +1224,8 @@ dpaa2_dev_start(struct rte_eth_dev *dev)
}
/* if the interrupts were configured on this devices*/
- if (intr_handle && (intr_handle->fd) &&
- (dev->data->dev_conf.intr_conf.lsc != 0)) {
+ if (intr_handle && rte_intr_fd_get(intr_handle) &&
+ dev->data->dev_conf.intr_conf.lsc != 0) {
/* Registering LSC interrupt handler */
rte_intr_callback_register(intr_handle,
dpaa2_interrupt_handler,
@@ -1264,8 +1264,8 @@ dpaa2_dev_stop(struct rte_eth_dev *dev)
PMD_INIT_FUNC_TRACE();
/* reset interrupt callback */
- if (intr_handle && (intr_handle->fd) &&
- (dev->data->dev_conf.intr_conf.lsc != 0)) {
+ if (intr_handle && rte_intr_fd_get(intr_handle) &&
+ dev->data->dev_conf.intr_conf.lsc != 0) {
/*disable dpni irqs */
dpaa2_eth_setup_irqs(dev, 0);
diff --git a/drivers/net/e1000/em_ethdev.c b/drivers/net/e1000/em_ethdev.c
index a0ca371b02..c1060f0c70 100644
--- a/drivers/net/e1000/em_ethdev.c
+++ b/drivers/net/e1000/em_ethdev.c
@@ -237,7 +237,7 @@ static int
eth_em_dev_init(struct rte_eth_dev *eth_dev)
{
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
struct e1000_adapter *adapter =
E1000_DEV_PRIVATE(eth_dev->data->dev_private);
struct e1000_hw *hw =
@@ -525,7 +525,7 @@ eth_em_start(struct rte_eth_dev *dev)
struct e1000_hw *hw =
E1000_DEV_PRIVATE_TO_HW(dev->data->dev_private);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
int ret, mask;
uint32_t intr_vector = 0;
uint32_t *speeds;
@@ -575,12 +575,10 @@ eth_em_start(struct rte_eth_dev *dev)
}
if (rte_intr_dp_is_en(intr_handle)) {
- intr_handle->intr_vec =
- rte_zmalloc("intr_vec",
- dev->data->nb_rx_queues * sizeof(int), 0);
- if (intr_handle->intr_vec == NULL) {
+ if (rte_intr_vec_list_alloc(intr_handle, "intr_vec",
+ dev->data->nb_rx_queues)) {
PMD_INIT_LOG(ERR, "Failed to allocate %d rx_queues"
- " intr_vec", dev->data->nb_rx_queues);
+ " intr_vec", dev->data->nb_rx_queues);
return -ENOMEM;
}
@@ -718,7 +716,7 @@ eth_em_stop(struct rte_eth_dev *dev)
struct rte_eth_link link;
struct e1000_hw *hw = E1000_DEV_PRIVATE_TO_HW(dev->data->dev_private);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
dev->data->dev_started = 0;
@@ -752,10 +750,7 @@ eth_em_stop(struct rte_eth_dev *dev)
/* Clean datapath event and queue/vec mapping */
rte_intr_efd_disable(intr_handle);
- if (intr_handle->intr_vec != NULL) {
- rte_free(intr_handle->intr_vec);
- intr_handle->intr_vec = NULL;
- }
+ rte_intr_vec_list_free(intr_handle);
return 0;
}
@@ -767,7 +762,7 @@ eth_em_close(struct rte_eth_dev *dev)
struct e1000_adapter *adapter =
E1000_DEV_PRIVATE(dev->data->dev_private);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
int ret;
if (rte_eal_process_type() != RTE_PROC_PRIMARY)
@@ -1008,7 +1003,7 @@ eth_em_rx_queue_intr_enable(struct rte_eth_dev *dev, __rte_unused uint16_t queue
{
struct e1000_hw *hw = E1000_DEV_PRIVATE_TO_HW(dev->data->dev_private);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
em_rxq_intr_enable(hw);
rte_intr_ack(intr_handle);
diff --git a/drivers/net/e1000/igb_ethdev.c b/drivers/net/e1000/igb_ethdev.c
index 6510cd7ceb..82ac1dadd2 100644
--- a/drivers/net/e1000/igb_ethdev.c
+++ b/drivers/net/e1000/igb_ethdev.c
@@ -515,7 +515,7 @@ igb_intr_enable(struct rte_eth_dev *dev)
struct e1000_hw *hw =
E1000_DEV_PRIVATE_TO_HW(dev->data->dev_private);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
if (rte_intr_allow_others(intr_handle) &&
dev->data->dev_conf.intr_conf.lsc != 0) {
@@ -532,7 +532,7 @@ igb_intr_disable(struct rte_eth_dev *dev)
struct e1000_hw *hw =
E1000_DEV_PRIVATE_TO_HW(dev->data->dev_private);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
if (rte_intr_allow_others(intr_handle) &&
dev->data->dev_conf.intr_conf.lsc != 0) {
@@ -853,12 +853,12 @@ eth_igb_dev_init(struct rte_eth_dev *eth_dev)
eth_dev->data->port_id, pci_dev->id.vendor_id,
pci_dev->id.device_id);
- rte_intr_callback_register(&pci_dev->intr_handle,
+ rte_intr_callback_register(pci_dev->intr_handle,
eth_igb_interrupt_handler,
(void *)eth_dev);
/* enable uio/vfio intr/eventfd mapping */
- rte_intr_enable(&pci_dev->intr_handle);
+ rte_intr_enable(pci_dev->intr_handle);
/* enable support intr */
igb_intr_enable(eth_dev);
@@ -996,7 +996,7 @@ eth_igbvf_dev_init(struct rte_eth_dev *eth_dev)
eth_dev->data->port_id, pci_dev->id.vendor_id,
pci_dev->id.device_id, "igb_mac_82576_vf");
- intr_handle = &pci_dev->intr_handle;
+ intr_handle = pci_dev->intr_handle;
rte_intr_callback_register(intr_handle,
eth_igbvf_interrupt_handler, eth_dev);
@@ -1200,7 +1200,7 @@ eth_igb_start(struct rte_eth_dev *dev)
struct e1000_adapter *adapter =
E1000_DEV_PRIVATE(dev->data->dev_private);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
int ret, mask;
uint32_t intr_vector = 0;
uint32_t ctrl_ext;
@@ -1259,11 +1259,10 @@ eth_igb_start(struct rte_eth_dev *dev)
return -1;
}
- if (rte_intr_dp_is_en(intr_handle) && !intr_handle->intr_vec) {
- intr_handle->intr_vec =
- rte_zmalloc("intr_vec",
- dev->data->nb_rx_queues * sizeof(int), 0);
- if (intr_handle->intr_vec == NULL) {
+ /* Allocate the vector list */
+ if (rte_intr_dp_is_en(intr_handle)) {
+ if (rte_intr_vec_list_alloc(intr_handle, "intr_vec",
+ dev->data->nb_rx_queues)) {
PMD_INIT_LOG(ERR, "Failed to allocate %d rx_queues"
" intr_vec", dev->data->nb_rx_queues);
return -ENOMEM;
@@ -1422,7 +1421,7 @@ eth_igb_stop(struct rte_eth_dev *dev)
struct e1000_hw *hw = E1000_DEV_PRIVATE_TO_HW(dev->data->dev_private);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
struct rte_eth_link link;
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
struct e1000_adapter *adapter =
E1000_DEV_PRIVATE(dev->data->dev_private);
@@ -1466,10 +1465,7 @@ eth_igb_stop(struct rte_eth_dev *dev)
/* Clean datapath event and queue/vec mapping */
rte_intr_efd_disable(intr_handle);
- if (intr_handle->intr_vec != NULL) {
- rte_free(intr_handle->intr_vec);
- intr_handle->intr_vec = NULL;
- }
+ rte_intr_vec_list_free(intr_handle);
adapter->stopped = true;
dev->data->dev_started = 0;
@@ -1509,7 +1505,7 @@ eth_igb_close(struct rte_eth_dev *dev)
struct e1000_hw *hw = E1000_DEV_PRIVATE_TO_HW(dev->data->dev_private);
struct rte_eth_link link;
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
struct e1000_filter_info *filter_info =
E1000_DEV_PRIVATE_TO_FILTER_INFO(dev->data->dev_private);
int ret;
@@ -1535,10 +1531,8 @@ eth_igb_close(struct rte_eth_dev *dev)
igb_dev_free_queues(dev);
- if (intr_handle->intr_vec) {
- rte_free(intr_handle->intr_vec);
- intr_handle->intr_vec = NULL;
- }
+ /* Cleanup vector list */
+ rte_intr_vec_list_free(intr_handle);
memset(&link, 0, sizeof(link));
rte_eth_linkstatus_set(dev, &link);
@@ -2779,7 +2773,7 @@ static int eth_igb_rxq_interrupt_setup(struct rte_eth_dev *dev)
struct e1000_hw *hw =
E1000_DEV_PRIVATE_TO_HW(dev->data->dev_private);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
int misc_shift = rte_intr_allow_others(intr_handle) ? 1 : 0;
struct rte_eth_dev_info dev_info;
@@ -3296,7 +3290,7 @@ igbvf_dev_start(struct rte_eth_dev *dev)
struct e1000_adapter *adapter =
E1000_DEV_PRIVATE(dev->data->dev_private);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
int ret;
uint32_t intr_vector = 0;
@@ -3327,11 +3321,10 @@ igbvf_dev_start(struct rte_eth_dev *dev)
return ret;
}
- if (rte_intr_dp_is_en(intr_handle) && !intr_handle->intr_vec) {
- intr_handle->intr_vec =
- rte_zmalloc("intr_vec",
- dev->data->nb_rx_queues * sizeof(int), 0);
- if (!intr_handle->intr_vec) {
+ /* Allocate the vector list */
+ if (rte_intr_dp_is_en(intr_handle)) {
+ if (rte_intr_vec_list_alloc(intr_handle, "intr_vec",
+ dev->data->nb_rx_queues)) {
PMD_INIT_LOG(ERR, "Failed to allocate %d rx_queues"
" intr_vec", dev->data->nb_rx_queues);
return -ENOMEM;
@@ -3353,7 +3346,7 @@ static int
igbvf_dev_stop(struct rte_eth_dev *dev)
{
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
struct e1000_adapter *adapter =
E1000_DEV_PRIVATE(dev->data->dev_private);
@@ -3377,10 +3370,9 @@ igbvf_dev_stop(struct rte_eth_dev *dev)
/* Clean datapath event and queue/vec mapping */
rte_intr_efd_disable(intr_handle);
- if (intr_handle->intr_vec) {
- rte_free(intr_handle->intr_vec);
- intr_handle->intr_vec = NULL;
- }
+
+ /* Clean vector list */
+ rte_intr_vec_list_free(intr_handle);
adapter->stopped = true;
dev->data->dev_started = 0;
@@ -3418,7 +3410,7 @@ igbvf_dev_close(struct rte_eth_dev *dev)
memset(&addr, 0, sizeof(addr));
igbvf_default_mac_addr_set(dev, &addr);
- rte_intr_callback_unregister(&pci_dev->intr_handle,
+ rte_intr_callback_unregister(pci_dev->intr_handle,
eth_igbvf_interrupt_handler,
(void *)dev);
@@ -5140,7 +5132,7 @@ eth_igb_rx_queue_intr_disable(struct rte_eth_dev *dev, uint16_t queue_id)
struct e1000_hw *hw =
E1000_DEV_PRIVATE_TO_HW(dev->data->dev_private);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
uint32_t vec = E1000_MISC_VEC_ID;
if (rte_intr_allow_others(intr_handle))
@@ -5160,7 +5152,7 @@ eth_igb_rx_queue_intr_enable(struct rte_eth_dev *dev, uint16_t queue_id)
struct e1000_hw *hw =
E1000_DEV_PRIVATE_TO_HW(dev->data->dev_private);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
uint32_t vec = E1000_MISC_VEC_ID;
if (rte_intr_allow_others(intr_handle))
@@ -5238,7 +5230,7 @@ eth_igb_configure_msix_intr(struct rte_eth_dev *dev)
uint32_t base = E1000_MISC_VEC_ID;
uint32_t misc_shift = 0;
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
/* won't configure msix register if no mapping is done
* between intr vector and event fd
@@ -5279,8 +5271,9 @@ eth_igb_configure_msix_intr(struct rte_eth_dev *dev)
E1000_WRITE_REG(hw, E1000_GPIE, E1000_GPIE_MSIX_MODE |
E1000_GPIE_PBA | E1000_GPIE_EIAME |
E1000_GPIE_NSICR);
- intr_mask = RTE_LEN2MASK(intr_handle->nb_efd, uint32_t) <<
- misc_shift;
+ intr_mask =
+ RTE_LEN2MASK(rte_intr_nb_efd_get(intr_handle),
+ uint32_t) << misc_shift;
if (dev->data->dev_conf.intr_conf.lsc != 0)
intr_mask |= (1 << IGB_MSIX_OTHER_INTR_VEC);
@@ -5298,8 +5291,8 @@ eth_igb_configure_msix_intr(struct rte_eth_dev *dev)
/* use EIAM to auto-mask when MSI-X interrupt
* is asserted, this saves a register write for every interrupt
*/
- intr_mask = RTE_LEN2MASK(intr_handle->nb_efd, uint32_t) <<
- misc_shift;
+ intr_mask = RTE_LEN2MASK(rte_intr_nb_efd_get(intr_handle),
+ uint32_t) << misc_shift;
if (dev->data->dev_conf.intr_conf.lsc != 0)
intr_mask |= (1 << IGB_MSIX_OTHER_INTR_VEC);
@@ -5309,8 +5302,8 @@ eth_igb_configure_msix_intr(struct rte_eth_dev *dev)
for (queue_id = 0; queue_id < dev->data->nb_rx_queues; queue_id++) {
eth_igb_assign_msix_vector(hw, 0, queue_id, vec);
- intr_handle->intr_vec[queue_id] = vec;
- if (vec < base + intr_handle->nb_efd - 1)
+ rte_intr_vec_list_index_set(intr_handle, queue_id, vec);
+ if (vec < base + rte_intr_nb_efd_get(intr_handle) - 1)
vec++;
}
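
The MSI-X queue-to-vector mapping loops follow the same shape in igb, bnxt and the other PMDs in this patch: writes to intr_vec[] become rte_intr_vec_list_index_set() and reads of nb_efd become rte_intr_nb_efd_get(). A rough placeholder sketch of the loop, with base standing in for the first datapath vector used by the driver:

#include <stdint.h>
#include <rte_interrupts.h>

static void
example_map_queues_to_vectors(struct rte_intr_handle *intr_handle,
                              uint16_t nb_rx_queues, uint32_t base)
{
        uint32_t vec = base;
        uint16_t queue_id;

        for (queue_id = 0; queue_id < nb_rx_queues; queue_id++) {
                /* Was: intr_handle->intr_vec[queue_id] = vec; */
                rte_intr_vec_list_index_set(intr_handle, queue_id, vec);
                /* Was: if (vec < base + intr_handle->nb_efd - 1) */
                if (vec < base + rte_intr_nb_efd_get(intr_handle) - 1)
                        vec++;
        }
}
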
diff --git a/drivers/net/ena/ena_ethdev.c b/drivers/net/ena/ena_ethdev.c
index a82d4b6287..aa6f34ca04 100644
--- a/drivers/net/ena/ena_ethdev.c
+++ b/drivers/net/ena/ena_ethdev.c
@@ -473,7 +473,7 @@ static void ena_config_debug_area(struct ena_adapter *adapter)
static int ena_close(struct rte_eth_dev *dev)
{
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
struct ena_adapter *adapter = dev->data->dev_private;
int ret = 0;
@@ -945,7 +945,7 @@ static int ena_stop(struct rte_eth_dev *dev)
struct ena_adapter *adapter = dev->data->dev_private;
struct ena_com_dev *ena_dev = &adapter->ena_dev;
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
int rc;
/* Cannot free memory in secondary process */
@@ -967,10 +967,9 @@ static int ena_stop(struct rte_eth_dev *dev)
rte_intr_disable(intr_handle);
rte_intr_efd_disable(intr_handle);
- if (intr_handle->intr_vec != NULL) {
- rte_free(intr_handle->intr_vec);
- intr_handle->intr_vec = NULL;
- }
+
+ /* Cleanup vector list */
+ rte_intr_vec_list_free(intr_handle);
rte_intr_enable(intr_handle);
@@ -986,7 +985,7 @@ static int ena_create_io_queue(struct rte_eth_dev *dev, struct ena_ring *ring)
struct ena_adapter *adapter = ring->adapter;
struct ena_com_dev *ena_dev = &adapter->ena_dev;
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
struct ena_com_create_io_ctx ctx =
/* policy set to _HOST just to satisfy icc compiler */
{ ENA_ADMIN_PLACEMENT_POLICY_HOST,
@@ -1006,7 +1005,10 @@ static int ena_create_io_queue(struct rte_eth_dev *dev, struct ena_ring *ring)
ena_qid = ENA_IO_RXQ_IDX(ring->id);
ctx.direction = ENA_COM_IO_QUEUE_DIRECTION_RX;
if (rte_intr_dp_is_en(intr_handle))
- ctx.msix_vector = intr_handle->intr_vec[ring->id];
+ ctx.msix_vector =
+ rte_intr_vec_list_index_get(intr_handle,
+ ring->id);
+
for (i = 0; i < ring->ring_size; i++)
ring->empty_rx_reqs[i] = i;
}
@@ -1663,7 +1665,7 @@ static int eth_ena_dev_init(struct rte_eth_dev *eth_dev)
pci_dev->addr.devid,
pci_dev->addr.function);
- intr_handle = &pci_dev->intr_handle;
+ intr_handle = pci_dev->intr_handle;
adapter->regs = pci_dev->mem_resource[ENA_REGS_BAR].addr;
adapter->dev_mem_base = pci_dev->mem_resource[ENA_MEM_BAR].addr;
@@ -2815,7 +2817,7 @@ static int ena_parse_devargs(struct ena_adapter *adapter,
static int ena_setup_rx_intr(struct rte_eth_dev *dev)
{
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
int rc;
uint16_t vectors_nb, i;
bool rx_intr_requested = dev->data->dev_conf.intr_conf.rxq;
@@ -2842,9 +2844,9 @@ static int ena_setup_rx_intr(struct rte_eth_dev *dev)
goto enable_intr;
}
- intr_handle->intr_vec = rte_zmalloc("intr_vec",
- dev->data->nb_rx_queues * sizeof(*intr_handle->intr_vec), 0);
- if (intr_handle->intr_vec == NULL) {
+ /* Allocate the vector list */
+ if (rte_intr_vec_list_alloc(intr_handle, "intr_vec",
+ dev->data->nb_rx_queues)) {
PMD_DRV_LOG(ERR,
"Failed to allocate interrupt vector for %d queues\n",
dev->data->nb_rx_queues);
@@ -2863,7 +2865,9 @@ static int ena_setup_rx_intr(struct rte_eth_dev *dev)
}
for (i = 0; i < vectors_nb; ++i)
- intr_handle->intr_vec[i] = RTE_INTR_VEC_RXTX_OFFSET + i;
+ if (rte_intr_vec_list_index_set(intr_handle, i,
+ RTE_INTR_VEC_RXTX_OFFSET + i))
+ goto disable_intr_efd;
rte_intr_enable(intr_handle);
return 0;
@@ -2871,8 +2875,7 @@ static int ena_setup_rx_intr(struct rte_eth_dev *dev)
disable_intr_efd:
rte_intr_efd_disable(intr_handle);
free_intr_vec:
- rte_free(intr_handle->intr_vec);
- intr_handle->intr_vec = NULL;
+ rte_intr_vec_list_free(intr_handle);
enable_intr:
rte_intr_enable(intr_handle);
return rc;
diff --git a/drivers/net/enic/enic_main.c b/drivers/net/enic/enic_main.c
index 2affd380c6..b8daf8fb24 100644
--- a/drivers/net/enic/enic_main.c
+++ b/drivers/net/enic/enic_main.c
@@ -448,7 +448,7 @@ enic_intr_handler(void *arg)
rte_eth_dev_callback_process(dev, RTE_ETH_EVENT_INTR_LSC, NULL);
enic_log_q_error(enic);
/* Re-enable irq in case of INTx */
- rte_intr_ack(&enic->pdev->intr_handle);
+ rte_intr_ack(enic->pdev->intr_handle);
}
static int enic_rxq_intr_init(struct enic *enic)
@@ -477,14 +477,16 @@ static int enic_rxq_intr_init(struct enic *enic)
" interrupts\n");
return err;
}
- intr_handle->intr_vec = rte_zmalloc("enic_intr_vec",
- rxq_intr_count * sizeof(int), 0);
- if (intr_handle->intr_vec == NULL) {
+
+ if (rte_intr_vec_list_alloc(intr_handle, "enic_intr_vec",
+ rxq_intr_count)) {
dev_err(enic, "Failed to allocate intr_vec\n");
return -ENOMEM;
}
for (i = 0; i < rxq_intr_count; i++)
- intr_handle->intr_vec[i] = i + ENICPMD_RXQ_INTR_OFFSET;
+ if (rte_intr_vec_list_index_set(intr_handle, i,
+ i + ENICPMD_RXQ_INTR_OFFSET))
+ return -rte_errno;
return 0;
}
@@ -494,10 +496,8 @@ static void enic_rxq_intr_deinit(struct enic *enic)
intr_handle = enic->rte_dev->intr_handle;
rte_intr_efd_disable(intr_handle);
- if (intr_handle->intr_vec != NULL) {
- rte_free(intr_handle->intr_vec);
- intr_handle->intr_vec = NULL;
- }
+
+ rte_intr_vec_list_free(intr_handle);
}
static void enic_prep_wq_for_simple_tx(struct enic *enic, uint16_t queue_idx)
@@ -667,10 +667,10 @@ int enic_enable(struct enic *enic)
vnic_dev_enable_wait(enic->vdev);
/* Register and enable error interrupt */
- rte_intr_callback_register(&(enic->pdev->intr_handle),
+ rte_intr_callback_register(enic->pdev->intr_handle,
enic_intr_handler, (void *)enic->rte_dev);
- rte_intr_enable(&(enic->pdev->intr_handle));
+ rte_intr_enable(enic->pdev->intr_handle);
/* Unmask LSC interrupt */
vnic_intr_unmask(&enic->intr[ENICPMD_LSC_INTR_OFFSET]);
@@ -1112,8 +1112,8 @@ int enic_disable(struct enic *enic)
(void)vnic_intr_masked(&enic->intr[i]); /* flush write */
}
enic_rxq_intr_deinit(enic);
- rte_intr_disable(&enic->pdev->intr_handle);
- rte_intr_callback_unregister(&enic->pdev->intr_handle,
+ rte_intr_disable(enic->pdev->intr_handle);
+ rte_intr_callback_unregister(enic->pdev->intr_handle,
enic_intr_handler,
(void *)enic->rte_dev);
diff --git a/drivers/net/failsafe/failsafe.c b/drivers/net/failsafe/failsafe.c
index b87c036e60..23916a9eed 100644
--- a/drivers/net/failsafe/failsafe.c
+++ b/drivers/net/failsafe/failsafe.c
@@ -264,11 +264,23 @@ fs_eth_dev_create(struct rte_vdev_device *vdev)
RTE_ETHER_ADDR_BYTES(mac));
dev->data->dev_flags |= RTE_ETH_DEV_INTR_LSC |
RTE_ETH_DEV_AUTOFILL_QUEUE_XSTATS;
- PRIV(dev)->intr_handle = (struct rte_intr_handle){
- .fd = -1,
- .type = RTE_INTR_HANDLE_EXT,
- };
+
+ /* Allocate interrupt instance */
+ PRIV(dev)->intr_handle = rte_intr_instance_alloc();
+ if (!PRIV(dev)->intr_handle) {
+ ERROR("Failed to allocate intr handle");
+ goto cancel_alarm;
+ }
+
+ if (rte_intr_fd_set(PRIV(dev)->intr_handle, -1))
+ goto cancel_alarm;
+
+ if (rte_intr_type_set(PRIV(dev)->intr_handle,
+ RTE_INTR_HANDLE_EXT))
+ goto cancel_alarm;
+
rte_eth_dev_probing_finish(dev);
+
return 0;
cancel_alarm:
failsafe_hotplug_alarm_cancel(dev);
@@ -297,6 +309,8 @@ fs_rte_eth_free(const char *name)
return 0; /* port already released */
ret = failsafe_eth_dev_close(dev);
rte_eth_dev_release_port(dev);
+ if (PRIV(dev)->intr_handle)
+ rte_intr_instance_free(PRIV(dev)->intr_handle);
return ret;
}
diff --git a/drivers/net/failsafe/failsafe_intr.c b/drivers/net/failsafe/failsafe_intr.c
index 602c04033c..949af61a47 100644
--- a/drivers/net/failsafe/failsafe_intr.c
+++ b/drivers/net/failsafe/failsafe_intr.c
@@ -410,12 +410,10 @@ fs_rx_intr_vec_uninstall(struct fs_priv *priv)
{
struct rte_intr_handle *intr_handle;
- intr_handle = &priv->intr_handle;
- if (intr_handle->intr_vec != NULL) {
- free(intr_handle->intr_vec);
- intr_handle->intr_vec = NULL;
- }
- intr_handle->nb_efd = 0;
+ intr_handle = priv->intr_handle;
+ rte_intr_vec_list_free(intr_handle);
+
+ rte_intr_nb_efd_set(intr_handle, 0);
}
/**
@@ -439,11 +437,9 @@ fs_rx_intr_vec_install(struct fs_priv *priv)
rxqs_n = priv->data->nb_rx_queues;
n = RTE_MIN(rxqs_n, (uint32_t)RTE_MAX_RXTX_INTR_VEC_ID);
count = 0;
- intr_handle = &priv->intr_handle;
- RTE_ASSERT(intr_handle->intr_vec == NULL);
+ intr_handle = priv->intr_handle;
/* Allocate the interrupt vector of the failsafe Rx proxy interrupts */
- intr_handle->intr_vec = malloc(n * sizeof(intr_handle->intr_vec[0]));
- if (intr_handle->intr_vec == NULL) {
+ if (rte_intr_vec_list_alloc(intr_handle, NULL, n)) {
fs_rx_intr_vec_uninstall(priv);
rte_errno = ENOMEM;
ERROR("Failed to allocate memory for interrupt vector,"
@@ -456,9 +452,9 @@ fs_rx_intr_vec_install(struct fs_priv *priv)
/* Skip queues that cannot request interrupts. */
if (rxq == NULL || rxq->event_fd < 0) {
/* Use invalid intr_vec[] index to disable entry. */
- intr_handle->intr_vec[i] =
- RTE_INTR_VEC_RXTX_OFFSET +
- RTE_MAX_RXTX_INTR_VEC_ID;
+ if (rte_intr_vec_list_index_set(intr_handle, i,
+ RTE_INTR_VEC_RXTX_OFFSET + RTE_MAX_RXTX_INTR_VEC_ID))
+ return -rte_errno;
continue;
}
if (count >= RTE_MAX_RXTX_INTR_VEC_ID) {
@@ -469,15 +465,24 @@ fs_rx_intr_vec_install(struct fs_priv *priv)
fs_rx_intr_vec_uninstall(priv);
return -rte_errno;
}
- intr_handle->intr_vec[i] = RTE_INTR_VEC_RXTX_OFFSET + count;
- intr_handle->efds[count] = rxq->event_fd;
+ if (rte_intr_vec_list_index_set(intr_handle, i,
+ RTE_INTR_VEC_RXTX_OFFSET + count))
+ return -rte_errno;
+
+ if (rte_intr_efds_index_set(intr_handle, count,
+ rxq->event_fd))
+ return -rte_errno;
count++;
}
if (count == 0) {
fs_rx_intr_vec_uninstall(priv);
} else {
- intr_handle->nb_efd = count;
- intr_handle->efd_counter_size = sizeof(uint64_t);
+ if (rte_intr_nb_efd_set(intr_handle, count))
+ return -rte_errno;
+
+ if (rte_intr_efd_counter_size_set(intr_handle,
+ sizeof(uint64_t)))
+ return -rte_errno;
}
return 0;
}
@@ -499,7 +504,7 @@ failsafe_rx_intr_uninstall(struct rte_eth_dev *dev)
struct rte_intr_handle *intr_handle;
priv = PRIV(dev);
- intr_handle = &priv->intr_handle;
+ intr_handle = priv->intr_handle;
rte_intr_free_epoll_fd(intr_handle);
fs_rx_event_proxy_uninstall(priv);
fs_rx_intr_vec_uninstall(priv);
@@ -530,6 +535,6 @@ failsafe_rx_intr_install(struct rte_eth_dev *dev)
fs_rx_intr_vec_uninstall(priv);
return -rte_errno;
}
- dev->intr_handle = &priv->intr_handle;
+ dev->intr_handle = priv->intr_handle;
return 0;
}
diff --git a/drivers/net/failsafe/failsafe_ops.c b/drivers/net/failsafe/failsafe_ops.c
index d0030af061..fe6a7c0c84 100644
--- a/drivers/net/failsafe/failsafe_ops.c
+++ b/drivers/net/failsafe/failsafe_ops.c
@@ -393,15 +393,22 @@ fs_rx_queue_setup(struct rte_eth_dev *dev,
* For the time being, fake as if we are using MSIX interrupts,
* this will cause rte_intr_efd_enable to allocate an eventfd for us.
*/
- struct rte_intr_handle intr_handle = {
- .type = RTE_INTR_HANDLE_VFIO_MSIX,
- .efds = { -1, },
- };
+ struct rte_intr_handle *intr_handle;
struct sub_device *sdev;
struct rxq *rxq;
uint8_t i;
int ret;
+ intr_handle = rte_intr_instance_alloc();
+ if (!intr_handle)
+ return -ENOMEM;
+
+ if (rte_intr_type_set(intr_handle, RTE_INTR_HANDLE_VFIO_MSIX))
+ return -rte_errno;
+
+ if (rte_intr_efds_index_set(intr_handle, 0, -1))
+ return -rte_errno;
+
fs_lock(dev, 0);
if (rx_conf->rx_deferred_start) {
FOREACH_SUBDEV_STATE(sdev, i, dev, DEV_PROBED) {
@@ -435,12 +442,12 @@ fs_rx_queue_setup(struct rte_eth_dev *dev,
rxq->info.nb_desc = nb_rx_desc;
rxq->priv = PRIV(dev);
rxq->sdev = PRIV(dev)->subs;
- ret = rte_intr_efd_enable(&intr_handle, 1);
+ ret = rte_intr_efd_enable(intr_handle, 1);
if (ret < 0) {
fs_unlock(dev, 0);
return ret;
}
- rxq->event_fd = intr_handle.efds[0];
+ rxq->event_fd = rte_intr_efds_index_get(intr_handle, 0);
dev->data->rx_queues[rx_queue_id] = rxq;
FOREACH_SUBDEV_STATE(sdev, i, dev, DEV_ACTIVE) {
ret = rte_eth_rx_queue_setup(PORT_ID(sdev),
@@ -453,10 +460,12 @@ fs_rx_queue_setup(struct rte_eth_dev *dev,
}
}
fs_unlock(dev, 0);
+ rte_intr_instance_free(intr_handle);
return 0;
free_rxq:
fs_rx_queue_release(dev, rx_queue_id);
fs_unlock(dev, 0);
+ rte_intr_instance_free(intr_handle);
return ret;
}
diff --git a/drivers/net/failsafe/failsafe_private.h b/drivers/net/failsafe/failsafe_private.h
index cd39d103c6..a80f5e2caf 100644
--- a/drivers/net/failsafe/failsafe_private.h
+++ b/drivers/net/failsafe/failsafe_private.h
@@ -166,7 +166,7 @@ struct fs_priv {
struct rte_ether_addr *mcast_addrs;
/* current capabilities */
struct rte_eth_dev_owner my_owner; /* Unique owner. */
- struct rte_intr_handle intr_handle; /* Port interrupt handle. */
+ struct rte_intr_handle *intr_handle; /* Port interrupt handle. */
/*
* Fail-safe state machine.
* This level will be tracking state of the EAL and eth
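
failsafe previously embedded struct rte_intr_handle by value, so with the structure opaque it now has to allocate and free an instance. A rough sketch of that lifecycle is below; rte_intr_instance_alloc() is called with no arguments, as in this RFC, and the example_* wrappers are placeholders.

#include <stddef.h>
#include <rte_interrupts.h>

static struct rte_intr_handle *
example_instance_create(void)
{
        struct rte_intr_handle *handle;

        /* Replaces a by-value "struct rte_intr_handle intr_handle;" member. */
        handle = rte_intr_instance_alloc();
        if (handle == NULL)
                return NULL;

        if (rte_intr_fd_set(handle, -1) ||
            rte_intr_type_set(handle, RTE_INTR_HANDLE_EXT)) {
                rte_intr_instance_free(handle);
                return NULL;
        }
        return handle;
}

static void
example_instance_destroy(struct rte_intr_handle *handle)
{
        if (handle != NULL)
                rte_intr_instance_free(handle);
}
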
diff --git a/drivers/net/fm10k/fm10k_ethdev.c b/drivers/net/fm10k/fm10k_ethdev.c
index 7075d69022..850ec35059 100644
--- a/drivers/net/fm10k/fm10k_ethdev.c
+++ b/drivers/net/fm10k/fm10k_ethdev.c
@@ -32,7 +32,8 @@
#define BIT_MASK_PER_UINT32 ((1 << CHARS_PER_UINT32) - 1)
/* default 1:1 map from queue ID to interrupt vector ID */
-#define Q2V(pci_dev, queue_id) ((pci_dev)->intr_handle.intr_vec[queue_id])
+#define Q2V(pci_dev, queue_id) \
+ (rte_intr_vec_list_index_get((pci_dev)->intr_handle, queue_id))
/* First 64 Logical ports for PF/VMDQ, second 64 for Flow director */
#define MAX_LPORT_NUM 128
@@ -690,7 +691,7 @@ fm10k_dev_rx_init(struct rte_eth_dev *dev)
struct fm10k_hw *hw = FM10K_DEV_PRIVATE_TO_HW(dev->data->dev_private);
struct fm10k_macvlan_filter_info *macvlan;
struct rte_pci_device *pdev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pdev->intr_handle;
+ struct rte_intr_handle *intr_handle = pdev->intr_handle;
int i, ret;
struct fm10k_rx_queue *rxq;
uint64_t base_addr;
@@ -1158,7 +1159,7 @@ fm10k_dev_stop(struct rte_eth_dev *dev)
{
struct fm10k_hw *hw = FM10K_DEV_PRIVATE_TO_HW(dev->data->dev_private);
struct rte_pci_device *pdev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pdev->intr_handle;
+ struct rte_intr_handle *intr_handle = pdev->intr_handle;
int i;
PMD_INIT_FUNC_TRACE();
@@ -1187,8 +1188,7 @@ fm10k_dev_stop(struct rte_eth_dev *dev)
}
/* Clean datapath event and queue/vec mapping */
rte_intr_efd_disable(intr_handle);
- rte_free(intr_handle->intr_vec);
- intr_handle->intr_vec = NULL;
+ rte_intr_vec_list_free(intr_handle);
return 0;
}
@@ -2368,7 +2368,7 @@ fm10k_dev_rx_queue_intr_enable(struct rte_eth_dev *dev, uint16_t queue_id)
else
FM10K_WRITE_REG(hw, FM10K_VFITR(Q2V(pdev, queue_id)),
FM10K_ITR_AUTOMASK | FM10K_ITR_MASK_CLEAR);
- rte_intr_ack(&pdev->intr_handle);
+ rte_intr_ack(pdev->intr_handle);
return 0;
}
@@ -2393,7 +2393,7 @@ fm10k_dev_rxq_interrupt_setup(struct rte_eth_dev *dev)
{
struct fm10k_hw *hw = FM10K_DEV_PRIVATE_TO_HW(dev->data->dev_private);
struct rte_pci_device *pdev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pdev->intr_handle;
+ struct rte_intr_handle *intr_handle = pdev->intr_handle;
uint32_t intr_vector, vec;
uint16_t queue_id;
int result = 0;
@@ -2421,15 +2421,17 @@ fm10k_dev_rxq_interrupt_setup(struct rte_eth_dev *dev)
}
if (rte_intr_dp_is_en(intr_handle) && !result) {
- intr_handle->intr_vec = rte_zmalloc("intr_vec",
- dev->data->nb_rx_queues * sizeof(int), 0);
- if (intr_handle->intr_vec) {
+ if (!rte_intr_vec_list_alloc(intr_handle, "intr_vec",
+ dev->data->nb_rx_queues)) {
for (queue_id = 0, vec = FM10K_RX_VEC_START;
queue_id < dev->data->nb_rx_queues;
queue_id++) {
- intr_handle->intr_vec[queue_id] = vec;
- if (vec < intr_handle->nb_efd - 1
- + FM10K_RX_VEC_START)
+ rte_intr_vec_list_index_set(intr_handle,
+ queue_id, vec);
+ int nb_efd =
+ rte_intr_nb_efd_get(intr_handle);
+ if (vec < (uint32_t)nb_efd - 1 +
+ FM10K_RX_VEC_START)
vec++;
}
} else {
@@ -2788,7 +2790,7 @@ fm10k_dev_close(struct rte_eth_dev *dev)
{
struct fm10k_hw *hw = FM10K_DEV_PRIVATE_TO_HW(dev->data->dev_private);
struct rte_pci_device *pdev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pdev->intr_handle;
+ struct rte_intr_handle *intr_handle = pdev->intr_handle;
int ret;
PMD_INIT_FUNC_TRACE();
@@ -3054,7 +3056,7 @@ eth_fm10k_dev_init(struct rte_eth_dev *dev)
{
struct fm10k_hw *hw = FM10K_DEV_PRIVATE_TO_HW(dev->data->dev_private);
struct rte_pci_device *pdev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pdev->intr_handle;
+ struct rte_intr_handle *intr_handle = pdev->intr_handle;
int diag, i;
struct fm10k_macvlan_filter_info *macvlan;
diff --git a/drivers/net/hinic/hinic_pmd_ethdev.c b/drivers/net/hinic/hinic_pmd_ethdev.c
index cd4dad8588..d4d5df24f9 100644
--- a/drivers/net/hinic/hinic_pmd_ethdev.c
+++ b/drivers/net/hinic/hinic_pmd_ethdev.c
@@ -1229,13 +1229,13 @@ static void hinic_disable_interrupt(struct rte_eth_dev *dev)
hinic_set_msix_state(nic_dev->hwdev, 0, HINIC_MSIX_DISABLE);
/* disable rte interrupt */
- ret = rte_intr_disable(&pci_dev->intr_handle);
+ ret = rte_intr_disable(pci_dev->intr_handle);
if (ret)
PMD_DRV_LOG(ERR, "Disable intr failed: %d", ret);
do {
ret =
- rte_intr_callback_unregister(&pci_dev->intr_handle,
+ rte_intr_callback_unregister(pci_dev->intr_handle,
hinic_dev_interrupt_handler, dev);
if (ret >= 0) {
break;
@@ -3136,7 +3136,7 @@ static int hinic_func_init(struct rte_eth_dev *eth_dev)
}
/* register callback func to eal lib */
- rc = rte_intr_callback_register(&pci_dev->intr_handle,
+ rc = rte_intr_callback_register(pci_dev->intr_handle,
hinic_dev_interrupt_handler,
(void *)eth_dev);
if (rc) {
@@ -3146,7 +3146,7 @@ static int hinic_func_init(struct rte_eth_dev *eth_dev)
}
/* enable uio/vfio intr/eventfd mapping */
- rc = rte_intr_enable(&pci_dev->intr_handle);
+ rc = rte_intr_enable(pci_dev->intr_handle);
if (rc) {
PMD_DRV_LOG(ERR, "Enable rte interrupt failed, dev_name: %s",
eth_dev->data->name);
@@ -3176,7 +3176,7 @@ static int hinic_func_init(struct rte_eth_dev *eth_dev)
return 0;
enable_intr_fail:
- (void)rte_intr_callback_unregister(&pci_dev->intr_handle,
+ (void)rte_intr_callback_unregister(pci_dev->intr_handle,
hinic_dev_interrupt_handler,
(void *)eth_dev);
diff --git a/drivers/net/hns3/hns3_ethdev.c b/drivers/net/hns3/hns3_ethdev.c
index cabf73ffbc..10e06cbd1b 100644
--- a/drivers/net/hns3/hns3_ethdev.c
+++ b/drivers/net/hns3/hns3_ethdev.c
@@ -5269,7 +5269,7 @@ hns3_init_pf(struct rte_eth_dev *eth_dev)
hns3_config_all_msix_error(hw, true);
- ret = rte_intr_callback_register(&pci_dev->intr_handle,
+ ret = rte_intr_callback_register(pci_dev->intr_handle,
hns3_interrupt_handler,
eth_dev);
if (ret) {
@@ -5282,7 +5282,7 @@ hns3_init_pf(struct rte_eth_dev *eth_dev)
goto err_get_config;
/* Enable interrupt */
- rte_intr_enable(&pci_dev->intr_handle);
+ rte_intr_enable(pci_dev->intr_handle);
hns3_pf_enable_irq0(hw);
/* Get configuration */
@@ -5341,8 +5341,8 @@ hns3_init_pf(struct rte_eth_dev *eth_dev)
hns3_tqp_stats_uninit(hw);
err_get_config:
hns3_pf_disable_irq0(hw);
- rte_intr_disable(&pci_dev->intr_handle);
- hns3_intr_unregister(&pci_dev->intr_handle, hns3_interrupt_handler,
+ rte_intr_disable(pci_dev->intr_handle);
+ hns3_intr_unregister(pci_dev->intr_handle, hns3_interrupt_handler,
eth_dev);
err_intr_callback_register:
err_cmd_init:
@@ -5375,8 +5375,8 @@ hns3_uninit_pf(struct rte_eth_dev *eth_dev)
hns3_tqp_stats_uninit(hw);
hns3_config_mac_tnl_int(hw, false);
hns3_pf_disable_irq0(hw);
- rte_intr_disable(&pci_dev->intr_handle);
- hns3_intr_unregister(&pci_dev->intr_handle, hns3_interrupt_handler,
+ rte_intr_disable(pci_dev->intr_handle);
+ hns3_intr_unregister(pci_dev->intr_handle, hns3_interrupt_handler,
eth_dev);
hns3_config_all_msix_error(hw, false);
hns3_cmd_uninit(hw);
@@ -5710,7 +5710,7 @@ static int
hns3_map_rx_interrupt(struct rte_eth_dev *dev)
{
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
struct hns3_hw *hw = HNS3_DEV_PRIVATE_TO_HW(dev->data->dev_private);
uint16_t base = RTE_INTR_VEC_ZERO_OFFSET;
uint16_t vec = RTE_INTR_VEC_ZERO_OFFSET;
@@ -5733,16 +5733,13 @@ hns3_map_rx_interrupt(struct rte_eth_dev *dev)
if (rte_intr_efd_enable(intr_handle, intr_vector))
return -EINVAL;
- if (intr_handle->intr_vec == NULL) {
- intr_handle->intr_vec =
- rte_zmalloc("intr_vec",
- hw->used_rx_queues * sizeof(int), 0);
- if (intr_handle->intr_vec == NULL) {
- hns3_err(hw, "failed to allocate %u rx_queues intr_vec",
- hw->used_rx_queues);
- ret = -ENOMEM;
- goto alloc_intr_vec_error;
- }
+ /* Allocate vector list */
+ if (rte_intr_vec_list_alloc(intr_handle, "intr_vec",
+ hw->used_rx_queues)) {
+ hns3_err(hw, "failed to allocate %u rx_queues intr_vec",
+ hw->used_rx_queues);
+ ret = -ENOMEM;
+ goto alloc_intr_vec_error;
}
if (rte_intr_allow_others(intr_handle)) {
@@ -5755,20 +5752,21 @@ hns3_map_rx_interrupt(struct rte_eth_dev *dev)
HNS3_RING_TYPE_RX, q_id);
if (ret)
goto bind_vector_error;
- intr_handle->intr_vec[q_id] = vec;
+
+ if (rte_intr_vec_list_index_set(intr_handle, q_id, vec))
+ goto bind_vector_error;
/*
* If there are not enough efds (e.g. not enough interrupt),
* remaining queues will be bond to the last interrupt.
*/
- if (vec < base + intr_handle->nb_efd - 1)
+ if (vec < base + rte_intr_nb_efd_get(intr_handle) - 1)
vec++;
}
rte_intr_enable(intr_handle);
return 0;
bind_vector_error:
- rte_free(intr_handle->intr_vec);
- intr_handle->intr_vec = NULL;
+ rte_intr_vec_list_free(intr_handle);
alloc_intr_vec_error:
rte_intr_efd_disable(intr_handle);
return ret;
@@ -5779,7 +5777,7 @@ hns3_restore_rx_interrupt(struct hns3_hw *hw)
{
struct rte_eth_dev *dev = &rte_eth_devices[hw->data->port_id];
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
uint16_t q_id;
int ret;
@@ -5789,8 +5787,9 @@ hns3_restore_rx_interrupt(struct hns3_hw *hw)
if (rte_intr_dp_is_en(intr_handle)) {
for (q_id = 0; q_id < hw->used_rx_queues; q_id++) {
ret = hns3_bind_ring_with_vector(hw,
- intr_handle->intr_vec[q_id], true,
- HNS3_RING_TYPE_RX, q_id);
+ rte_intr_vec_list_index_get(intr_handle,
+ q_id),
+ true, HNS3_RING_TYPE_RX, q_id);
if (ret)
return ret;
}
@@ -5933,7 +5932,7 @@ static void
hns3_unmap_rx_interrupt(struct rte_eth_dev *dev)
{
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
struct hns3_adapter *hns = dev->data->dev_private;
struct hns3_hw *hw = &hns->hw;
uint8_t base = RTE_INTR_VEC_ZERO_OFFSET;
@@ -5953,16 +5952,14 @@ hns3_unmap_rx_interrupt(struct rte_eth_dev *dev)
(void)hns3_bind_ring_with_vector(hw, vec, false,
HNS3_RING_TYPE_RX,
q_id);
- if (vec < base + intr_handle->nb_efd - 1)
+ if (vec < base + rte_intr_nb_efd_get(intr_handle)
+ - 1)
vec++;
}
}
/* Clean datapath event and queue/vec mapping */
rte_intr_efd_disable(intr_handle);
- if (intr_handle->intr_vec) {
- rte_free(intr_handle->intr_vec);
- intr_handle->intr_vec = NULL;
- }
+ rte_intr_vec_list_free(intr_handle);
}
static int
diff --git a/drivers/net/hns3/hns3_ethdev_vf.c b/drivers/net/hns3/hns3_ethdev_vf.c
index 8d9b7979c8..fb25241be6 100644
--- a/drivers/net/hns3/hns3_ethdev_vf.c
+++ b/drivers/net/hns3/hns3_ethdev_vf.c
@@ -1985,7 +1985,7 @@ hns3vf_init_vf(struct rte_eth_dev *eth_dev)
hns3vf_clear_event_cause(hw, 0);
- ret = rte_intr_callback_register(&pci_dev->intr_handle,
+ ret = rte_intr_callback_register(pci_dev->intr_handle,
hns3vf_interrupt_handler, eth_dev);
if (ret) {
PMD_INIT_LOG(ERR, "Failed to register intr: %d", ret);
@@ -1993,7 +1993,7 @@ hns3vf_init_vf(struct rte_eth_dev *eth_dev)
}
/* Enable interrupt */
- rte_intr_enable(&pci_dev->intr_handle);
+ rte_intr_enable(pci_dev->intr_handle);
hns3vf_enable_irq0(hw);
/* Get configuration from PF */
@@ -2045,8 +2045,8 @@ hns3vf_init_vf(struct rte_eth_dev *eth_dev)
err_get_config:
hns3vf_disable_irq0(hw);
- rte_intr_disable(&pci_dev->intr_handle);
- hns3_intr_unregister(&pci_dev->intr_handle, hns3vf_interrupt_handler,
+ rte_intr_disable(pci_dev->intr_handle);
+ hns3_intr_unregister(pci_dev->intr_handle, hns3vf_interrupt_handler,
eth_dev);
err_intr_callback_register:
err_cmd_init:
@@ -2074,8 +2074,8 @@ hns3vf_uninit_vf(struct rte_eth_dev *eth_dev)
hns3_flow_uninit(eth_dev);
hns3_tqp_stats_uninit(hw);
hns3vf_disable_irq0(hw);
- rte_intr_disable(&pci_dev->intr_handle);
- hns3_intr_unregister(&pci_dev->intr_handle, hns3vf_interrupt_handler,
+ rte_intr_disable(pci_dev->intr_handle);
+ hns3_intr_unregister(pci_dev->intr_handle, hns3vf_interrupt_handler,
eth_dev);
hns3_cmd_uninit(hw);
hns3_cmd_destroy_queue(hw);
@@ -2118,7 +2118,7 @@ hns3vf_unmap_rx_interrupt(struct rte_eth_dev *dev)
{
struct hns3_hw *hw = HNS3_DEV_PRIVATE_TO_HW(dev->data->dev_private);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
uint8_t base = RTE_INTR_VEC_ZERO_OFFSET;
uint8_t vec = RTE_INTR_VEC_ZERO_OFFSET;
uint16_t q_id;
@@ -2136,16 +2136,16 @@ hns3vf_unmap_rx_interrupt(struct rte_eth_dev *dev)
(void)hns3vf_bind_ring_with_vector(hw, vec, false,
HNS3_RING_TYPE_RX,
q_id);
- if (vec < base + intr_handle->nb_efd - 1)
+ if (vec < base + rte_intr_nb_efd_get(intr_handle)
+ - 1)
vec++;
}
}
/* Clean datapath event and queue/vec mapping */
rte_intr_efd_disable(intr_handle);
- if (intr_handle->intr_vec) {
- rte_free(intr_handle->intr_vec);
- intr_handle->intr_vec = NULL;
- }
+
+ /* Cleanup vector list */
+ rte_intr_vec_list_free(intr_handle);
}
static int
@@ -2301,7 +2301,7 @@ static int
hns3vf_map_rx_interrupt(struct rte_eth_dev *dev)
{
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
struct hns3_hw *hw = HNS3_DEV_PRIVATE_TO_HW(dev->data->dev_private);
uint8_t base = RTE_INTR_VEC_ZERO_OFFSET;
uint8_t vec = RTE_INTR_VEC_ZERO_OFFSET;
@@ -2324,16 +2324,13 @@ hns3vf_map_rx_interrupt(struct rte_eth_dev *dev)
if (rte_intr_efd_enable(intr_handle, intr_vector))
return -EINVAL;
- if (intr_handle->intr_vec == NULL) {
- intr_handle->intr_vec =
- rte_zmalloc("intr_vec",
- hw->used_rx_queues * sizeof(int), 0);
- if (intr_handle->intr_vec == NULL) {
- hns3_err(hw, "Failed to allocate %u rx_queues"
- " intr_vec", hw->used_rx_queues);
- ret = -ENOMEM;
- goto vf_alloc_intr_vec_error;
- }
+ /* Allocate vector list */
+ if (rte_intr_vec_list_alloc(intr_handle, "intr_vec",
+ hw->used_rx_queues)) {
+ hns3_err(hw, "Failed to allocate %u rx_queues"
+ " intr_vec", hw->used_rx_queues);
+ ret = -ENOMEM;
+ goto vf_alloc_intr_vec_error;
}
if (rte_intr_allow_others(intr_handle)) {
@@ -2346,20 +2343,22 @@ hns3vf_map_rx_interrupt(struct rte_eth_dev *dev)
HNS3_RING_TYPE_RX, q_id);
if (ret)
goto vf_bind_vector_error;
- intr_handle->intr_vec[q_id] = vec;
+
+ if (rte_intr_vec_list_index_set(intr_handle, q_id, vec))
+ goto vf_bind_vector_error;
+
/*
* If there are not enough efds (e.g. not enough interrupt),
* remaining queues will be bond to the last interrupt.
*/
- if (vec < base + intr_handle->nb_efd - 1)
+ if (vec < base + rte_intr_nb_efd_get(intr_handle) - 1)
vec++;
}
rte_intr_enable(intr_handle);
return 0;
vf_bind_vector_error:
- free(intr_handle->intr_vec);
- intr_handle->intr_vec = NULL;
+ rte_intr_vec_list_free(intr_handle);
vf_alloc_intr_vec_error:
rte_intr_efd_disable(intr_handle);
return ret;
@@ -2370,7 +2369,7 @@ hns3vf_restore_rx_interrupt(struct hns3_hw *hw)
{
struct rte_eth_dev *dev = &rte_eth_devices[hw->data->port_id];
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
uint16_t q_id;
int ret;
@@ -2380,8 +2379,9 @@ hns3vf_restore_rx_interrupt(struct hns3_hw *hw)
if (rte_intr_dp_is_en(intr_handle)) {
for (q_id = 0; q_id < hw->used_rx_queues; q_id++) {
ret = hns3vf_bind_ring_with_vector(hw,
- intr_handle->intr_vec[q_id], true,
- HNS3_RING_TYPE_RX, q_id);
+ rte_intr_vec_list_index_get(intr_handle,
+ q_id),
+ true, HNS3_RING_TYPE_RX, q_id);
if (ret)
return ret;
}
@@ -2845,7 +2845,7 @@ hns3vf_reinit_dev(struct hns3_adapter *hns)
int ret;
if (hw->reset.level == HNS3_VF_FULL_RESET) {
- rte_intr_disable(&pci_dev->intr_handle);
+ rte_intr_disable(pci_dev->intr_handle);
ret = hns3vf_set_bus_master(pci_dev, true);
if (ret < 0) {
hns3_err(hw, "failed to set pci bus, ret = %d", ret);
@@ -2871,7 +2871,7 @@ hns3vf_reinit_dev(struct hns3_adapter *hns)
hns3_err(hw, "Failed to enable msix");
}
- rte_intr_enable(&pci_dev->intr_handle);
+ rte_intr_enable(pci_dev->intr_handle);
}
ret = hns3_reset_all_tqps(hns);
diff --git a/drivers/net/hns3/hns3_rxtx.c b/drivers/net/hns3/hns3_rxtx.c
index 6b77672aa1..7604cbba35 100644
--- a/drivers/net/hns3/hns3_rxtx.c
+++ b/drivers/net/hns3/hns3_rxtx.c
@@ -1050,7 +1050,7 @@ int
hns3_dev_rx_queue_intr_enable(struct rte_eth_dev *dev, uint16_t queue_id)
{
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
struct hns3_hw *hw = HNS3_DEV_PRIVATE_TO_HW(dev->data->dev_private);
if (dev->data->dev_conf.intr_conf.rxq == 0)
diff --git a/drivers/net/i40e/i40e_ethdev.c b/drivers/net/i40e/i40e_ethdev.c
index 1fc3d897a8..90cdd5bc18 100644
--- a/drivers/net/i40e/i40e_ethdev.c
+++ b/drivers/net/i40e/i40e_ethdev.c
@@ -1439,7 +1439,7 @@ eth_i40e_dev_init(struct rte_eth_dev *dev, void *init_params __rte_unused)
}
i40e_set_default_ptype_table(dev);
pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- intr_handle = &pci_dev->intr_handle;
+ intr_handle = pci_dev->intr_handle;
rte_eth_copy_pci_info(dev, pci_dev);
dev->data->dev_flags |= RTE_ETH_DEV_AUTOFILL_QUEUE_XSTATS;
@@ -1972,7 +1972,7 @@ i40e_vsi_queues_unbind_intr(struct i40e_vsi *vsi)
{
struct rte_eth_dev *dev = I40E_VSI_TO_ETH_DEV(vsi);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
struct i40e_hw *hw = I40E_VSI_TO_HW(vsi);
uint16_t msix_vect = vsi->msix_intr;
uint16_t i;
@@ -2088,10 +2088,11 @@ i40e_vsi_queues_bind_intr(struct i40e_vsi *vsi, uint16_t itr_idx)
{
struct rte_eth_dev *dev = I40E_VSI_TO_ETH_DEV(vsi);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
struct i40e_hw *hw = I40E_VSI_TO_HW(vsi);
uint16_t msix_vect = vsi->msix_intr;
- uint16_t nb_msix = RTE_MIN(vsi->nb_msix, intr_handle->nb_efd);
+ uint16_t nb_msix = RTE_MIN(vsi->nb_msix,
+ rte_intr_nb_efd_get(intr_handle));
uint16_t queue_idx = 0;
int record = 0;
int i;
@@ -2141,8 +2142,8 @@ i40e_vsi_queues_bind_intr(struct i40e_vsi *vsi, uint16_t itr_idx)
vsi->nb_used_qps - i,
itr_idx);
for (; !!record && i < vsi->nb_used_qps; i++)
- intr_handle->intr_vec[queue_idx + i] =
- msix_vect;
+ rte_intr_vec_list_index_set(intr_handle,
+ queue_idx + i, msix_vect);
break;
}
/* 1:1 queue/msix_vect mapping */
@@ -2150,7 +2151,9 @@ i40e_vsi_queues_bind_intr(struct i40e_vsi *vsi, uint16_t itr_idx)
vsi->base_queue + i, 1,
itr_idx);
if (!!record)
- intr_handle->intr_vec[queue_idx + i] = msix_vect;
+ if (rte_intr_vec_list_index_set(intr_handle,
+ queue_idx + i, msix_vect))
+ return -rte_errno;
msix_vect++;
nb_msix--;
@@ -2164,7 +2167,7 @@ i40e_vsi_enable_queues_intr(struct i40e_vsi *vsi)
{
struct rte_eth_dev *dev = I40E_VSI_TO_ETH_DEV(vsi);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
struct i40e_hw *hw = I40E_VSI_TO_HW(vsi);
struct i40e_pf *pf = I40E_VSI_TO_PF(vsi);
uint16_t msix_intr, i;
@@ -2191,7 +2194,7 @@ i40e_vsi_disable_queues_intr(struct i40e_vsi *vsi)
{
struct rte_eth_dev *dev = I40E_VSI_TO_ETH_DEV(vsi);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
struct i40e_hw *hw = I40E_VSI_TO_HW(vsi);
struct i40e_pf *pf = I40E_VSI_TO_PF(vsi);
uint16_t msix_intr, i;
@@ -2357,7 +2360,7 @@ i40e_dev_start(struct rte_eth_dev *dev)
struct i40e_vsi *main_vsi = pf->main_vsi;
int ret, i;
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
uint32_t intr_vector = 0;
struct i40e_vsi *vsi;
uint16_t nb_rxq, nb_txq;
@@ -2375,12 +2378,9 @@ i40e_dev_start(struct rte_eth_dev *dev)
return ret;
}
- if (rte_intr_dp_is_en(intr_handle) && !intr_handle->intr_vec) {
- intr_handle->intr_vec =
- rte_zmalloc("intr_vec",
- dev->data->nb_rx_queues * sizeof(int),
- 0);
- if (!intr_handle->intr_vec) {
+ if (rte_intr_dp_is_en(intr_handle)) {
+ if (rte_intr_vec_list_alloc(intr_handle, "intr_vec",
+ dev->data->nb_rx_queues)) {
PMD_INIT_LOG(ERR,
"Failed to allocate %d rx_queues intr_vec",
dev->data->nb_rx_queues);
@@ -2521,7 +2521,7 @@ i40e_dev_stop(struct rte_eth_dev *dev)
struct i40e_hw *hw = I40E_DEV_PRIVATE_TO_HW(dev->data->dev_private);
struct i40e_vsi *main_vsi = pf->main_vsi;
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
int i;
if (hw->adapter_stopped == 1)
@@ -2562,10 +2562,9 @@ i40e_dev_stop(struct rte_eth_dev *dev)
/* Clean datapath event and queue/vec mapping */
rte_intr_efd_disable(intr_handle);
- if (intr_handle->intr_vec) {
- rte_free(intr_handle->intr_vec);
- intr_handle->intr_vec = NULL;
- }
+
+ /* Cleanup vector list */
+ rte_intr_vec_list_free(intr_handle);
/* reset hierarchy commit */
pf->tm_conf.committed = false;
@@ -2584,7 +2583,7 @@ i40e_dev_close(struct rte_eth_dev *dev)
struct i40e_pf *pf = I40E_DEV_PRIVATE_TO_PF(dev->data->dev_private);
struct i40e_hw *hw = I40E_DEV_PRIVATE_TO_HW(dev->data->dev_private);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
struct i40e_filter_control_settings settings;
struct rte_flow *p_flow;
uint32_t reg;
@@ -11055,11 +11054,11 @@ static int
i40e_dev_rx_queue_intr_enable(struct rte_eth_dev *dev, uint16_t queue_id)
{
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
struct i40e_hw *hw = I40E_DEV_PRIVATE_TO_HW(dev->data->dev_private);
uint16_t msix_intr;
- msix_intr = intr_handle->intr_vec[queue_id];
+ msix_intr = rte_intr_vec_list_index_get(intr_handle, queue_id);
if (msix_intr == I40E_MISC_VEC_ID)
I40E_WRITE_REG(hw, I40E_PFINT_DYN_CTL0,
I40E_PFINT_DYN_CTL0_INTENA_MASK |
@@ -11074,7 +11073,7 @@ i40e_dev_rx_queue_intr_enable(struct rte_eth_dev *dev, uint16_t queue_id)
I40E_PFINT_DYN_CTLN_ITR_INDX_MASK);
I40E_WRITE_FLUSH(hw);
- rte_intr_ack(&pci_dev->intr_handle);
+ rte_intr_ack(pci_dev->intr_handle);
return 0;
}
@@ -11083,11 +11082,11 @@ static int
i40e_dev_rx_queue_intr_disable(struct rte_eth_dev *dev, uint16_t queue_id)
{
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
struct i40e_hw *hw = I40E_DEV_PRIVATE_TO_HW(dev->data->dev_private);
uint16_t msix_intr;
- msix_intr = intr_handle->intr_vec[queue_id];
+ msix_intr = rte_intr_vec_list_index_get(intr_handle, queue_id);
if (msix_intr == I40E_MISC_VEC_ID)
I40E_WRITE_REG(hw, I40E_PFINT_DYN_CTL0,
I40E_PFINT_DYN_CTL0_ITR_INDX_MASK);
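Reading the mapping back now goes through rte_intr_vec_list_index_get() instead of intr_handle->intr_vec[]. Below is a minimal sketch of a queue interrupt enable callback under that assumption; the function name and xxx_enable_queue_vector() are placeholders rather than i40e code, and it assumes a negative return from the getter signals a missing entry.

/* Hedged sketch of an rx_queue_intr_enable callback using the getters. */
static int
xxx_rx_queue_intr_enable(struct rte_eth_dev *dev, uint16_t queue_id)
{
	struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
	struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
	int msix_intr;

	/* Look up the MSI-X vector bound to this Rx queue. */
	msix_intr = rte_intr_vec_list_index_get(intr_handle, queue_id);
	if (msix_intr < 0)
		return -EINVAL;

	xxx_enable_queue_vector(dev, msix_intr);	/* placeholder HW write */

	/* Re-arm the interrupt on the handle. */
	rte_intr_ack(intr_handle);
	return 0;
}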
diff --git a/drivers/net/iavf/iavf_ethdev.c b/drivers/net/iavf/iavf_ethdev.c
index 5a5a7f59e1..f99e421168 100644
--- a/drivers/net/iavf/iavf_ethdev.c
+++ b/drivers/net/iavf/iavf_ethdev.c
@@ -660,17 +660,16 @@ static int iavf_config_rx_queues_irqs(struct rte_eth_dev *dev,
return -1;
}
- if (rte_intr_dp_is_en(intr_handle) && !intr_handle->intr_vec) {
- intr_handle->intr_vec =
- rte_zmalloc("intr_vec",
- dev->data->nb_rx_queues * sizeof(int), 0);
- if (!intr_handle->intr_vec) {
+ if (rte_intr_dp_is_en(intr_handle)) {
+ if (rte_intr_vec_list_alloc(intr_handle, "intr_vec",
+ dev->data->nb_rx_queues)) {
PMD_DRV_LOG(ERR, "Failed to allocate %d rx intr_vec",
dev->data->nb_rx_queues);
return -1;
}
}
+
qv_map = rte_zmalloc("qv_map",
dev->data->nb_rx_queues * sizeof(struct iavf_qv_map), 0);
if (!qv_map) {
@@ -730,7 +729,8 @@ static int iavf_config_rx_queues_irqs(struct rte_eth_dev *dev,
for (i = 0; i < dev->data->nb_rx_queues; i++) {
qv_map[i].queue_id = i;
qv_map[i].vector_id = vf->msix_base;
- intr_handle->intr_vec[i] = IAVF_MISC_VEC_ID;
+ rte_intr_vec_list_index_set(intr_handle,
+ i, IAVF_MISC_VEC_ID);
}
vf->qv_map = qv_map;
PMD_DRV_LOG(DEBUG,
@@ -740,14 +740,16 @@ static int iavf_config_rx_queues_irqs(struct rte_eth_dev *dev,
/* If Rx interrupt is required, and we can use
* multi interrupts, then the vec is from 1
*/
- vf->nb_msix = RTE_MIN(intr_handle->nb_efd,
- (uint16_t)(vf->vf_res->max_vectors - 1));
+ vf->nb_msix =
+ RTE_MIN(rte_intr_nb_efd_get(intr_handle),
+ (uint16_t)(vf->vf_res->max_vectors - 1));
vf->msix_base = IAVF_RX_VEC_START;
vec = IAVF_RX_VEC_START;
for (i = 0; i < dev->data->nb_rx_queues; i++) {
qv_map[i].queue_id = i;
qv_map[i].vector_id = vec;
- intr_handle->intr_vec[i] = vec++;
+ rte_intr_vec_list_index_set(intr_handle,
+ i, vec++);
if (vec >= vf->nb_msix + IAVF_RX_VEC_START)
vec = IAVF_RX_VEC_START;
}
@@ -789,8 +791,7 @@ static int iavf_config_rx_queues_irqs(struct rte_eth_dev *dev,
vf->qv_map = NULL;
qv_map_alloc_err:
- rte_free(intr_handle->intr_vec);
- intr_handle->intr_vec = NULL;
+ rte_intr_vec_list_free(intr_handle);
return -1;
}
@@ -926,10 +927,7 @@ iavf_dev_stop(struct rte_eth_dev *dev)
/* Disable the interrupt for Rx */
rte_intr_efd_disable(intr_handle);
/* Rx interrupt vector mapping free */
- if (intr_handle->intr_vec) {
- rte_free(intr_handle->intr_vec);
- intr_handle->intr_vec = NULL;
- }
+ rte_intr_vec_list_free(intr_handle);
/* remove all mac addrs */
iavf_add_del_all_mac_addr(adapter, false);
@@ -1669,7 +1667,8 @@ iavf_dev_rx_queue_intr_enable(struct rte_eth_dev *dev, uint16_t queue_id)
struct iavf_info *vf = IAVF_DEV_PRIVATE_TO_VF(adapter);
uint16_t msix_intr;
- msix_intr = pci_dev->intr_handle.intr_vec[queue_id];
+ msix_intr = rte_intr_vec_list_index_get(pci_dev->intr_handle,
+ queue_id);
if (msix_intr == IAVF_MISC_VEC_ID) {
PMD_DRV_LOG(INFO, "MISC is also enabled for control");
IAVF_WRITE_REG(hw, IAVF_VFINT_DYN_CTL01,
@@ -1688,7 +1687,7 @@ iavf_dev_rx_queue_intr_enable(struct rte_eth_dev *dev, uint16_t queue_id)
IAVF_WRITE_FLUSH(hw);
if (vf->vf_res->vf_cap_flags & VIRTCHNL_VF_OFFLOAD_WB_ON_ITR)
- rte_intr_ack(&pci_dev->intr_handle);
+ rte_intr_ack(pci_dev->intr_handle);
return 0;
}
@@ -1700,7 +1699,8 @@ iavf_dev_rx_queue_intr_disable(struct rte_eth_dev *dev, uint16_t queue_id)
struct iavf_hw *hw = IAVF_DEV_PRIVATE_TO_HW(dev->data->dev_private);
uint16_t msix_intr;
- msix_intr = pci_dev->intr_handle.intr_vec[queue_id];
+ msix_intr = rte_intr_vec_list_index_get(pci_dev->intr_handle,
+ queue_id);
if (msix_intr == IAVF_MISC_VEC_ID) {
PMD_DRV_LOG(ERR, "MISC is used for control, cannot disable it");
return -EIO;
@@ -2384,12 +2384,12 @@ iavf_dev_init(struct rte_eth_dev *eth_dev)
if (vf->vf_res->vf_cap_flags & VIRTCHNL_VF_OFFLOAD_WB_ON_ITR) {
/* register callback func to eal lib */
- rte_intr_callback_register(&pci_dev->intr_handle,
+ rte_intr_callback_register(pci_dev->intr_handle,
iavf_dev_interrupt_handler,
(void *)eth_dev);
/* enable uio intr after callback register */
- rte_intr_enable(&pci_dev->intr_handle);
+ rte_intr_enable(pci_dev->intr_handle);
} else {
rte_eal_alarm_set(IAVF_ALARM_INTERVAL,
iavf_dev_alarm_handler, eth_dev);
@@ -2423,7 +2423,7 @@ iavf_dev_close(struct rte_eth_dev *dev)
{
struct iavf_hw *hw = IAVF_DEV_PRIVATE_TO_HW(dev->data->dev_private);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
struct iavf_adapter *adapter =
IAVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
struct iavf_info *vf = IAVF_DEV_PRIVATE_TO_VF(dev->data->dev_private);
diff --git a/drivers/net/iavf/iavf_vchnl.c b/drivers/net/iavf/iavf_vchnl.c
index 3275687927..f76b4b09c4 100644
--- a/drivers/net/iavf/iavf_vchnl.c
+++ b/drivers/net/iavf/iavf_vchnl.c
@@ -1685,9 +1685,9 @@ iavf_request_queues(struct iavf_adapter *adapter, uint16_t num)
/* disable interrupt to avoid the admin queue message to be read
* before iavf_read_msg_from_pf.
*/
- rte_intr_disable(&pci_dev->intr_handle);
+ rte_intr_disable(pci_dev->intr_handle);
err = iavf_execute_vf_cmd(adapter, &args);
- rte_intr_enable(&pci_dev->intr_handle);
+ rte_intr_enable(pci_dev->intr_handle);
} else {
rte_eal_alarm_cancel(iavf_dev_alarm_handler, dev);
err = iavf_execute_vf_cmd(adapter, &args);
diff --git a/drivers/net/ice/ice_dcf.c b/drivers/net/ice/ice_dcf.c
index c9c01a14e3..68c13ac48d 100644
--- a/drivers/net/ice/ice_dcf.c
+++ b/drivers/net/ice/ice_dcf.c
@@ -539,7 +539,7 @@ ice_dcf_handle_vsi_update_event(struct ice_dcf_hw *hw)
rte_spinlock_lock(&hw->vc_cmd_send_lock);
- rte_intr_disable(&pci_dev->intr_handle);
+ rte_intr_disable(pci_dev->intr_handle);
ice_dcf_disable_irq0(hw);
for (;;) {
@@ -555,7 +555,7 @@ ice_dcf_handle_vsi_update_event(struct ice_dcf_hw *hw)
rte_delay_ms(ICE_DCF_ARQ_CHECK_TIME);
}
- rte_intr_enable(&pci_dev->intr_handle);
+ rte_intr_enable(pci_dev->intr_handle);
ice_dcf_enable_irq0(hw);
rte_spinlock_unlock(&hw->vc_cmd_send_lock);
@@ -694,9 +694,9 @@ ice_dcf_init_hw(struct rte_eth_dev *eth_dev, struct ice_dcf_hw *hw)
}
hw->eth_dev = eth_dev;
- rte_intr_callback_register(&pci_dev->intr_handle,
+ rte_intr_callback_register(pci_dev->intr_handle,
ice_dcf_dev_interrupt_handler, hw);
- rte_intr_enable(&pci_dev->intr_handle);
+ rte_intr_enable(pci_dev->intr_handle);
ice_dcf_enable_irq0(hw);
return 0;
@@ -718,7 +718,7 @@ void
ice_dcf_uninit_hw(struct rte_eth_dev *eth_dev, struct ice_dcf_hw *hw)
{
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
if (hw->vf_res->vf_cap_flags & VIRTCHNL_VF_OFFLOAD_QOS)
if (hw->tm_conf.committed) {
diff --git a/drivers/net/ice/ice_dcf_ethdev.c b/drivers/net/ice/ice_dcf_ethdev.c
index 91f6558742..e6fd88de7c 100644
--- a/drivers/net/ice/ice_dcf_ethdev.c
+++ b/drivers/net/ice/ice_dcf_ethdev.c
@@ -160,11 +160,9 @@ ice_dcf_config_rx_queues_irqs(struct rte_eth_dev *dev,
return -1;
}
- if (rte_intr_dp_is_en(intr_handle) && !intr_handle->intr_vec) {
- intr_handle->intr_vec =
- rte_zmalloc("intr_vec",
- dev->data->nb_rx_queues * sizeof(int), 0);
- if (!intr_handle->intr_vec) {
+ if (rte_intr_dp_is_en(intr_handle)) {
+ if (rte_intr_vec_list_alloc(intr_handle, "intr_vec",
+ dev->data->nb_rx_queues)) {
PMD_DRV_LOG(ERR, "Failed to allocate %d rx intr_vec",
dev->data->nb_rx_queues);
return -1;
@@ -214,7 +212,8 @@ ice_dcf_config_rx_queues_irqs(struct rte_eth_dev *dev,
hw->msix_base = IAVF_MISC_VEC_ID;
for (i = 0; i < dev->data->nb_rx_queues; i++) {
hw->rxq_map[hw->msix_base] |= 1 << i;
- intr_handle->intr_vec[i] = IAVF_MISC_VEC_ID;
+ rte_intr_vec_list_index_set(intr_handle,
+ i, IAVF_MISC_VEC_ID);
}
PMD_DRV_LOG(DEBUG,
"vector %u are mapping to all Rx queues",
@@ -224,12 +223,13 @@ ice_dcf_config_rx_queues_irqs(struct rte_eth_dev *dev,
* multi interrupts, then the vec is from 1
*/
hw->nb_msix = RTE_MIN(hw->vf_res->max_vectors,
- intr_handle->nb_efd);
+ rte_intr_nb_efd_get(intr_handle));
hw->msix_base = IAVF_MISC_VEC_ID;
vec = IAVF_MISC_VEC_ID;
for (i = 0; i < dev->data->nb_rx_queues; i++) {
hw->rxq_map[vec] |= 1 << i;
- intr_handle->intr_vec[i] = vec++;
+ rte_intr_vec_list_index_set(intr_handle,
+ i, vec++);
if (vec >= hw->nb_msix)
vec = IAVF_RX_VEC_START;
}
@@ -634,10 +634,7 @@ ice_dcf_dev_stop(struct rte_eth_dev *dev)
ice_dcf_stop_queues(dev);
rte_intr_efd_disable(intr_handle);
- if (intr_handle->intr_vec) {
- rte_free(intr_handle->intr_vec);
- intr_handle->intr_vec = NULL;
- }
+ rte_intr_vec_list_free(intr_handle);
ice_dcf_add_del_all_mac_addr(&dcf_ad->real_hw, false);
dev->data->dev_link.link_status = ETH_LINK_DOWN;
diff --git a/drivers/net/ice/ice_ethdev.c b/drivers/net/ice/ice_ethdev.c
index 65e43a18f9..6a61d79ddc 100644
--- a/drivers/net/ice/ice_ethdev.c
+++ b/drivers/net/ice/ice_ethdev.c
@@ -2171,7 +2171,7 @@ ice_dev_init(struct rte_eth_dev *dev)
ice_set_default_ptype_table(dev);
pci_dev = RTE_DEV_TO_PCI(dev->device);
- intr_handle = &pci_dev->intr_handle;
+ intr_handle = pci_dev->intr_handle;
pf->adapter = ICE_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
pf->dev_data = dev->data;
@@ -2368,7 +2368,7 @@ ice_vsi_disable_queues_intr(struct ice_vsi *vsi)
{
struct rte_eth_dev *dev = &rte_eth_devices[vsi->adapter->pf.dev_data->port_id];
struct rte_pci_device *pci_dev = ICE_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
struct ice_hw *hw = ICE_VSI_TO_HW(vsi);
uint16_t msix_intr, i;
@@ -2398,7 +2398,7 @@ ice_dev_stop(struct rte_eth_dev *dev)
struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
struct ice_vsi *main_vsi = pf->main_vsi;
struct rte_pci_device *pci_dev = ICE_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
uint16_t i;
/* avoid stopping again */
@@ -2423,10 +2423,7 @@ ice_dev_stop(struct rte_eth_dev *dev)
/* Clean datapath event and queue/vec mapping */
rte_intr_efd_disable(intr_handle);
- if (intr_handle->intr_vec) {
- rte_free(intr_handle->intr_vec);
- intr_handle->intr_vec = NULL;
- }
+ rte_intr_vec_list_free(intr_handle);
pf->adapter_stopped = true;
dev->data->dev_started = 0;
@@ -2440,7 +2437,7 @@ ice_dev_close(struct rte_eth_dev *dev)
struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
struct ice_hw *hw = ICE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
struct ice_adapter *ad =
ICE_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
int ret;
@@ -3338,10 +3335,11 @@ ice_vsi_queues_bind_intr(struct ice_vsi *vsi)
{
struct rte_eth_dev *dev = &rte_eth_devices[vsi->adapter->pf.dev_data->port_id];
struct rte_pci_device *pci_dev = ICE_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
struct ice_hw *hw = ICE_VSI_TO_HW(vsi);
uint16_t msix_vect = vsi->msix_intr;
- uint16_t nb_msix = RTE_MIN(vsi->nb_msix, intr_handle->nb_efd);
+ uint16_t nb_msix = RTE_MIN(vsi->nb_msix,
+ rte_intr_nb_efd_get(intr_handle));
uint16_t queue_idx = 0;
int record = 0;
int i;
@@ -3369,8 +3367,9 @@ ice_vsi_queues_bind_intr(struct ice_vsi *vsi)
vsi->nb_used_qps - i);
for (; !!record && i < vsi->nb_used_qps; i++)
- intr_handle->intr_vec[queue_idx + i] =
- msix_vect;
+ rte_intr_vec_list_index_set(intr_handle,
+ queue_idx + i, msix_vect);
+
break;
}
@@ -3379,7 +3378,9 @@ ice_vsi_queues_bind_intr(struct ice_vsi *vsi)
vsi->base_queue + i, 1);
if (!!record)
- intr_handle->intr_vec[queue_idx + i] = msix_vect;
+ rte_intr_vec_list_index_set(intr_handle,
+ queue_idx + i,
+ msix_vect);
msix_vect++;
nb_msix--;
@@ -3391,7 +3392,7 @@ ice_vsi_enable_queues_intr(struct ice_vsi *vsi)
{
struct rte_eth_dev *dev = &rte_eth_devices[vsi->adapter->pf.dev_data->port_id];
struct rte_pci_device *pci_dev = ICE_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
struct ice_hw *hw = ICE_VSI_TO_HW(vsi);
uint16_t msix_intr, i;
@@ -3417,7 +3418,7 @@ ice_rxq_intr_setup(struct rte_eth_dev *dev)
{
struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
struct rte_pci_device *pci_dev = ICE_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
struct ice_vsi *vsi = pf->main_vsi;
uint32_t intr_vector = 0;
@@ -3437,11 +3438,9 @@ ice_rxq_intr_setup(struct rte_eth_dev *dev)
return -1;
}
- if (rte_intr_dp_is_en(intr_handle) && !intr_handle->intr_vec) {
- intr_handle->intr_vec =
- rte_zmalloc(NULL, dev->data->nb_rx_queues * sizeof(int),
- 0);
- if (!intr_handle->intr_vec) {
+ if (rte_intr_dp_is_en(intr_handle)) {
+ if (rte_intr_vec_list_alloc(intr_handle, NULL,
+ dev->data->nb_rx_queues)) {
PMD_DRV_LOG(ERR,
"Failed to allocate %d rx_queues intr_vec",
dev->data->nb_rx_queues);
@@ -4766,19 +4765,19 @@ static int ice_rx_queue_intr_enable(struct rte_eth_dev *dev,
uint16_t queue_id)
{
struct rte_pci_device *pci_dev = ICE_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
struct ice_hw *hw = ICE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
uint32_t val;
uint16_t msix_intr;
- msix_intr = intr_handle->intr_vec[queue_id];
+ msix_intr = rte_intr_vec_list_index_get(intr_handle, queue_id);
val = GLINT_DYN_CTL_INTENA_M | GLINT_DYN_CTL_CLEARPBA_M |
GLINT_DYN_CTL_ITR_INDX_M;
val &= ~GLINT_DYN_CTL_WB_ON_ITR_M;
ICE_WRITE_REG(hw, GLINT_DYN_CTL(msix_intr), val);
- rte_intr_ack(&pci_dev->intr_handle);
+ rte_intr_ack(pci_dev->intr_handle);
return 0;
}
@@ -4787,11 +4786,11 @@ static int ice_rx_queue_intr_disable(struct rte_eth_dev *dev,
uint16_t queue_id)
{
struct rte_pci_device *pci_dev = ICE_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
struct ice_hw *hw = ICE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
uint16_t msix_intr;
- msix_intr = intr_handle->intr_vec[queue_id];
+ msix_intr = rte_intr_vec_list_index_get(intr_handle, queue_id);
ICE_WRITE_REG(hw, GLINT_DYN_CTL(msix_intr), GLINT_DYN_CTL_WB_ON_ITR_M);
diff --git a/drivers/net/igc/igc_ethdev.c b/drivers/net/igc/igc_ethdev.c
index 0e41c85d29..5f8fa0af86 100644
--- a/drivers/net/igc/igc_ethdev.c
+++ b/drivers/net/igc/igc_ethdev.c
@@ -384,7 +384,7 @@ igc_intr_other_disable(struct rte_eth_dev *dev)
{
struct igc_hw *hw = IGC_DEV_PRIVATE_HW(dev);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
if (rte_intr_allow_others(intr_handle) &&
dev->data->dev_conf.intr_conf.lsc) {
@@ -404,7 +404,7 @@ igc_intr_other_enable(struct rte_eth_dev *dev)
struct igc_interrupt *intr = IGC_DEV_PRIVATE_INTR(dev);
struct igc_hw *hw = IGC_DEV_PRIVATE_HW(dev);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
if (rte_intr_allow_others(intr_handle) &&
dev->data->dev_conf.intr_conf.lsc) {
@@ -616,7 +616,7 @@ eth_igc_stop(struct rte_eth_dev *dev)
struct igc_adapter *adapter = IGC_DEV_PRIVATE(dev);
struct igc_hw *hw = IGC_DEV_PRIVATE_HW(dev);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
struct rte_eth_link link;
dev->data->dev_started = 0;
@@ -668,10 +668,7 @@ eth_igc_stop(struct rte_eth_dev *dev)
/* Clean datapath event and queue/vec mapping */
rte_intr_efd_disable(intr_handle);
- if (intr_handle->intr_vec != NULL) {
- rte_free(intr_handle->intr_vec);
- intr_handle->intr_vec = NULL;
- }
+ rte_intr_vec_list_free(intr_handle);
return 0;
}
@@ -731,7 +728,7 @@ igc_configure_msix_intr(struct rte_eth_dev *dev)
{
struct igc_hw *hw = IGC_DEV_PRIVATE_HW(dev);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
uint32_t intr_mask;
uint32_t vec = IGC_MISC_VEC_ID;
@@ -755,8 +752,8 @@ igc_configure_msix_intr(struct rte_eth_dev *dev)
IGC_WRITE_REG(hw, IGC_GPIE, IGC_GPIE_MSIX_MODE |
IGC_GPIE_PBA | IGC_GPIE_EIAME |
IGC_GPIE_NSICR);
- intr_mask = RTE_LEN2MASK(intr_handle->nb_efd, uint32_t) <<
- misc_shift;
+ intr_mask = RTE_LEN2MASK(rte_intr_nb_efd_get(intr_handle),
+ uint32_t) << misc_shift;
if (dev->data->dev_conf.intr_conf.lsc)
intr_mask |= (1u << IGC_MSIX_OTHER_INTR_VEC);
@@ -773,8 +770,8 @@ igc_configure_msix_intr(struct rte_eth_dev *dev)
for (i = 0; i < dev->data->nb_rx_queues; i++) {
igc_write_ivar(hw, i, 0, vec);
- intr_handle->intr_vec[i] = vec;
- if (vec < base + intr_handle->nb_efd - 1)
+ rte_intr_vec_list_index_set(intr_handle, i, vec);
+ if (vec < base + rte_intr_nb_efd_get(intr_handle) - 1)
vec++;
}
@@ -810,7 +807,7 @@ igc_rxq_interrupt_setup(struct rte_eth_dev *dev)
uint32_t mask;
struct igc_hw *hw = IGC_DEV_PRIVATE_HW(dev);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
int misc_shift = rte_intr_allow_others(intr_handle) ? 1 : 0;
/* won't configure msix register if no mapping is done
@@ -819,7 +816,8 @@ igc_rxq_interrupt_setup(struct rte_eth_dev *dev)
if (!rte_intr_dp_is_en(intr_handle))
return;
- mask = RTE_LEN2MASK(intr_handle->nb_efd, uint32_t) << misc_shift;
+ mask = RTE_LEN2MASK(rte_intr_nb_efd_get(intr_handle), uint32_t)
+ << misc_shift;
IGC_WRITE_REG(hw, IGC_EIMS, mask);
}
@@ -913,7 +911,7 @@ eth_igc_start(struct rte_eth_dev *dev)
struct igc_hw *hw = IGC_DEV_PRIVATE_HW(dev);
struct igc_adapter *adapter = IGC_DEV_PRIVATE(dev);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
uint32_t *speeds;
int ret;
@@ -951,10 +949,9 @@ eth_igc_start(struct rte_eth_dev *dev)
return -1;
}
- if (rte_intr_dp_is_en(intr_handle) && !intr_handle->intr_vec) {
- intr_handle->intr_vec = rte_zmalloc("intr_vec",
- dev->data->nb_rx_queues * sizeof(int), 0);
- if (intr_handle->intr_vec == NULL) {
+ if (rte_intr_dp_is_en(intr_handle)) {
+ if (rte_intr_vec_list_alloc(intr_handle, "intr_vec",
+ dev->data->nb_rx_queues)) {
PMD_DRV_LOG(ERR,
"Failed to allocate %d rx_queues intr_vec",
dev->data->nb_rx_queues);
@@ -1169,7 +1166,7 @@ static int
eth_igc_close(struct rte_eth_dev *dev)
{
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
struct igc_hw *hw = IGC_DEV_PRIVATE_HW(dev);
struct igc_adapter *adapter = IGC_DEV_PRIVATE(dev);
int retry = 0;
@@ -1339,11 +1336,11 @@ eth_igc_dev_init(struct rte_eth_dev *dev)
dev->data->port_id, pci_dev->id.vendor_id,
pci_dev->id.device_id);
- rte_intr_callback_register(&pci_dev->intr_handle,
+ rte_intr_callback_register(pci_dev->intr_handle,
eth_igc_interrupt_handler, (void *)dev);
/* enable uio/vfio intr/eventfd mapping */
- rte_intr_enable(&pci_dev->intr_handle);
+ rte_intr_enable(pci_dev->intr_handle);
/* enable support intr */
igc_intr_other_enable(dev);
@@ -2100,7 +2097,7 @@ eth_igc_rx_queue_intr_disable(struct rte_eth_dev *dev, uint16_t queue_id)
{
struct igc_hw *hw = IGC_DEV_PRIVATE_HW(dev);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
uint32_t vec = IGC_MISC_VEC_ID;
if (rte_intr_allow_others(intr_handle))
@@ -2119,7 +2116,7 @@ eth_igc_rx_queue_intr_enable(struct rte_eth_dev *dev, uint16_t queue_id)
{
struct igc_hw *hw = IGC_DEV_PRIVATE_HW(dev);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
uint32_t vec = IGC_MISC_VEC_ID;
if (rte_intr_allow_others(intr_handle))
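The igc and ixgbe hunks only swap raw nb_efd field reads for rte_intr_nb_efd_get(). Assuming the getter returns the same event-fd count the field used to hold, the two recurring patterns reduce to the snippet below; intr_handle and vsi_nb_msix are placeholders standing in for the driver's own variables.

/* Hedged sketch of the recurring nb_efd read patterns. */
uint32_t misc_shift = rte_intr_allow_others(intr_handle) ? 1 : 0;

/* One mask bit per Rx event fd, shifted past the misc/LSC vector. */
uint32_t intr_mask = RTE_LEN2MASK(rte_intr_nb_efd_get(intr_handle), uint32_t)
		     << misc_shift;

/* Usable MSI-X vectors are capped by what the EAL actually set up. */
uint16_t nb_msix = RTE_MIN(vsi_nb_msix,
			   (uint16_t)rte_intr_nb_efd_get(intr_handle));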
diff --git a/drivers/net/ionic/ionic_ethdev.c b/drivers/net/ionic/ionic_ethdev.c
index 344c076f30..9af2bb2159 100644
--- a/drivers/net/ionic/ionic_ethdev.c
+++ b/drivers/net/ionic/ionic_ethdev.c
@@ -1071,7 +1071,7 @@ static int
ionic_configure_intr(struct ionic_adapter *adapter)
{
struct rte_pci_device *pci_dev = adapter->pci_dev;
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
int err;
IONIC_PRINT(DEBUG, "Configuring %u intrs", adapter->nintrs);
@@ -1085,15 +1085,10 @@ ionic_configure_intr(struct ionic_adapter *adapter)
IONIC_PRINT(DEBUG,
"Packet I/O interrupt on datapath is enabled");
- if (!intr_handle->intr_vec) {
- intr_handle->intr_vec = rte_zmalloc("intr_vec",
- adapter->nintrs * sizeof(int), 0);
-
- if (!intr_handle->intr_vec) {
- IONIC_PRINT(ERR, "Failed to allocate %u vectors",
- adapter->nintrs);
- return -ENOMEM;
- }
+ if (rte_intr_vec_list_alloc(intr_handle, "intr_vec", adapter->nintrs)) {
+ IONIC_PRINT(ERR, "Failed to allocate %u vectors",
+ adapter->nintrs);
+ return -ENOMEM;
}
err = rte_intr_callback_register(intr_handle,
@@ -1122,7 +1117,7 @@ static void
ionic_unconfigure_intr(struct ionic_adapter *adapter)
{
struct rte_pci_device *pci_dev = adapter->pci_dev;
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
rte_intr_disable(intr_handle);
diff --git a/drivers/net/ixgbe/ixgbe_ethdev.c b/drivers/net/ixgbe/ixgbe_ethdev.c
index a127dc0d86..ba2af9d729 100644
--- a/drivers/net/ixgbe/ixgbe_ethdev.c
+++ b/drivers/net/ixgbe/ixgbe_ethdev.c
@@ -1027,7 +1027,7 @@ eth_ixgbe_dev_init(struct rte_eth_dev *eth_dev, void *init_params __rte_unused)
{
struct ixgbe_adapter *ad = eth_dev->data->dev_private;
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
struct ixgbe_hw *hw =
IXGBE_DEV_PRIVATE_TO_HW(eth_dev->data->dev_private);
struct ixgbe_vfta *shadow_vfta =
@@ -1526,7 +1526,7 @@ eth_ixgbevf_dev_init(struct rte_eth_dev *eth_dev)
uint32_t tc, tcs;
struct ixgbe_adapter *ad = eth_dev->data->dev_private;
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
struct ixgbe_hw *hw =
IXGBE_DEV_PRIVATE_TO_HW(eth_dev->data->dev_private);
struct ixgbe_vfta *shadow_vfta =
@@ -2542,7 +2542,7 @@ ixgbe_dev_start(struct rte_eth_dev *dev)
struct ixgbe_vf_info *vfinfo =
*IXGBE_DEV_PRIVATE_TO_P_VFDATA(dev->data->dev_private);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
uint32_t intr_vector = 0;
int err;
bool link_up = false, negotiate = 0;
@@ -2597,11 +2597,9 @@ ixgbe_dev_start(struct rte_eth_dev *dev)
return -1;
}
- if (rte_intr_dp_is_en(intr_handle) && !intr_handle->intr_vec) {
- intr_handle->intr_vec =
- rte_zmalloc("intr_vec",
- dev->data->nb_rx_queues * sizeof(int), 0);
- if (intr_handle->intr_vec == NULL) {
+ if (rte_intr_dp_is_en(intr_handle)) {
+ if (rte_intr_vec_list_alloc(intr_handle, "intr_vec",
+ dev->data->nb_rx_queues)) {
PMD_INIT_LOG(ERR, "Failed to allocate %d rx_queues"
" intr_vec", dev->data->nb_rx_queues);
return -ENOMEM;
@@ -2837,7 +2835,7 @@ ixgbe_dev_stop(struct rte_eth_dev *dev)
struct ixgbe_vf_info *vfinfo =
*IXGBE_DEV_PRIVATE_TO_P_VFDATA(dev->data->dev_private);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
int vf;
struct ixgbe_tm_conf *tm_conf =
IXGBE_DEV_PRIVATE_TO_TM_CONF(dev->data->dev_private);
@@ -2888,10 +2886,7 @@ ixgbe_dev_stop(struct rte_eth_dev *dev)
/* Clean datapath event and queue/vec mapping */
rte_intr_efd_disable(intr_handle);
- if (intr_handle->intr_vec != NULL) {
- rte_free(intr_handle->intr_vec);
- intr_handle->intr_vec = NULL;
- }
+ rte_intr_vec_list_free(intr_handle);
/* reset hierarchy commit */
tm_conf->committed = false;
@@ -2975,7 +2970,7 @@ ixgbe_dev_close(struct rte_eth_dev *dev)
struct ixgbe_hw *hw =
IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
int retries = 0;
int ret;
@@ -4621,7 +4616,7 @@ ixgbe_dev_interrupt_delayed_handler(void *param)
{
struct rte_eth_dev *dev = (struct rte_eth_dev *)param;
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
struct ixgbe_interrupt *intr =
IXGBE_DEV_PRIVATE_TO_INTR(dev->data->dev_private);
struct ixgbe_hw *hw =
@@ -5302,7 +5297,7 @@ ixgbevf_dev_start(struct rte_eth_dev *dev)
IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
uint32_t intr_vector = 0;
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
int err, mask = 0;
@@ -5365,11 +5360,9 @@ ixgbevf_dev_start(struct rte_eth_dev *dev)
}
}
- if (rte_intr_dp_is_en(intr_handle) && !intr_handle->intr_vec) {
- intr_handle->intr_vec =
- rte_zmalloc("intr_vec",
- dev->data->nb_rx_queues * sizeof(int), 0);
- if (intr_handle->intr_vec == NULL) {
+ if (rte_intr_dp_is_en(intr_handle)) {
+ if (rte_intr_vec_list_alloc(intr_handle, "intr_vec",
+ dev->data->nb_rx_queues)) {
PMD_INIT_LOG(ERR, "Failed to allocate %d rx_queues"
" intr_vec", dev->data->nb_rx_queues);
ixgbe_dev_clear_queues(dev);
@@ -5409,7 +5402,7 @@ ixgbevf_dev_stop(struct rte_eth_dev *dev)
struct ixgbe_hw *hw = IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
struct ixgbe_adapter *adapter = dev->data->dev_private;
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
if (hw->adapter_stopped)
return 0;
@@ -5437,10 +5430,7 @@ ixgbevf_dev_stop(struct rte_eth_dev *dev)
/* Clean datapath event and queue/vec mapping */
rte_intr_efd_disable(intr_handle);
- if (intr_handle->intr_vec != NULL) {
- rte_free(intr_handle->intr_vec);
- intr_handle->intr_vec = NULL;
- }
+ rte_intr_vec_list_free(intr_handle);
adapter->rss_reta_updated = 0;
@@ -5452,7 +5442,7 @@ ixgbevf_dev_close(struct rte_eth_dev *dev)
{
struct ixgbe_hw *hw = IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
int ret;
PMD_INIT_FUNC_TRACE();
@@ -5750,7 +5740,7 @@ static int
ixgbevf_dev_rx_queue_intr_enable(struct rte_eth_dev *dev, uint16_t queue_id)
{
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
struct ixgbe_interrupt *intr =
IXGBE_DEV_PRIVATE_TO_INTR(dev->data->dev_private);
struct ixgbe_hw *hw =
@@ -5776,7 +5766,7 @@ ixgbevf_dev_rx_queue_intr_disable(struct rte_eth_dev *dev, uint16_t queue_id)
struct ixgbe_hw *hw =
IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
uint32_t vec = IXGBE_MISC_VEC_ID;
if (rte_intr_allow_others(intr_handle))
@@ -5792,7 +5782,7 @@ static int
ixgbe_dev_rx_queue_intr_enable(struct rte_eth_dev *dev, uint16_t queue_id)
{
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
uint32_t mask;
struct ixgbe_hw *hw =
IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
@@ -5919,7 +5909,7 @@ static void
ixgbevf_configure_msix(struct rte_eth_dev *dev)
{
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
struct ixgbe_hw *hw =
IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
uint32_t q_idx;
@@ -5946,8 +5936,10 @@ ixgbevf_configure_msix(struct rte_eth_dev *dev)
* as IXGBE_VF_MAXMSIVECOTR = 1
*/
ixgbevf_set_ivar_map(hw, 0, q_idx, vector_idx);
- intr_handle->intr_vec[q_idx] = vector_idx;
- if (vector_idx < base + intr_handle->nb_efd - 1)
+ rte_intr_vec_list_index_set(intr_handle, q_idx,
+ vector_idx);
+ if (vector_idx < base + rte_intr_nb_efd_get(intr_handle)
+ - 1)
vector_idx++;
}
@@ -5968,7 +5960,7 @@ static void
ixgbe_configure_msix(struct rte_eth_dev *dev)
{
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
struct ixgbe_hw *hw =
IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
uint32_t queue_id, base = IXGBE_MISC_VEC_ID;
@@ -6012,8 +6004,10 @@ ixgbe_configure_msix(struct rte_eth_dev *dev)
queue_id++) {
/* by default, 1:1 mapping */
ixgbe_set_ivar_map(hw, 0, queue_id, vec);
- intr_handle->intr_vec[queue_id] = vec;
- if (vec < base + intr_handle->nb_efd - 1)
+ rte_intr_vec_list_index_set(intr_handle,
+ queue_id, vec);
+ if (vec < base + rte_intr_nb_efd_get(intr_handle)
+ - 1)
vec++;
}
diff --git a/drivers/net/memif/memif_socket.c b/drivers/net/memif/memif_socket.c
index 364e818d65..bea4461e12 100644
--- a/drivers/net/memif/memif_socket.c
+++ b/drivers/net/memif/memif_socket.c
@@ -65,7 +65,8 @@ memif_msg_send_from_queue(struct memif_control_channel *cc)
if (e == NULL)
return 0;
- size = memif_msg_send(cc->intr_handle.fd, &e->msg, e->fd);
+ size = memif_msg_send(rte_intr_fd_get(cc->intr_handle), &e->msg,
+ e->fd);
if (size != sizeof(memif_msg_t)) {
MIF_LOG(ERR, "sendmsg fail: %s.", strerror(errno));
ret = -1;
@@ -317,7 +318,9 @@ memif_msg_receive_add_ring(struct rte_eth_dev *dev, memif_msg_t *msg, int fd)
mq = (ar->flags & MEMIF_MSG_ADD_RING_FLAG_C2S) ?
dev->data->rx_queues[ar->index] : dev->data->tx_queues[ar->index];
- mq->intr_handle.fd = fd;
+ if (rte_intr_fd_set(mq->intr_handle, fd))
+ return -1;
+
mq->log2_ring_size = ar->log2_ring_size;
mq->region = ar->region;
mq->ring_offset = ar->offset;
@@ -453,7 +456,7 @@ memif_msg_enq_add_ring(struct rte_eth_dev *dev, uint8_t idx,
dev->data->rx_queues[idx];
e->msg.type = MEMIF_MSG_TYPE_ADD_RING;
- e->fd = mq->intr_handle.fd;
+ e->fd = rte_intr_fd_get(mq->intr_handle);
ar->index = idx;
ar->offset = mq->ring_offset;
ar->region = mq->region;
@@ -505,12 +508,13 @@ memif_intr_unregister_handler(struct rte_intr_handle *intr_handle, void *arg)
struct memif_control_channel *cc = arg;
/* close control channel fd */
- close(intr_handle->fd);
+ close(rte_intr_fd_get(intr_handle));
/* clear message queue */
while ((elt = TAILQ_FIRST(&cc->msg_queue)) != NULL) {
TAILQ_REMOVE(&cc->msg_queue, elt, next);
rte_free(elt);
}
+ rte_intr_instance_free(cc->intr_handle);
/* free control channel */
rte_free(cc);
}
@@ -548,8 +552,8 @@ memif_disconnect(struct rte_eth_dev *dev)
"Unexpected message(s) in message queue.");
}
- ih = &pmd->cc->intr_handle;
- if (ih->fd > 0) {
+ ih = pmd->cc->intr_handle;
+ if (rte_intr_fd_get(ih) > 0) {
ret = rte_intr_callback_unregister(ih,
memif_intr_handler,
pmd->cc);
@@ -563,7 +567,8 @@ memif_disconnect(struct rte_eth_dev *dev)
pmd->cc,
memif_intr_unregister_handler);
} else if (ret > 0) {
- close(ih->fd);
+ close(rte_intr_fd_get(ih));
+ rte_intr_instance_free(ih);
rte_free(pmd->cc);
}
pmd->cc = NULL;
@@ -587,9 +592,10 @@ memif_disconnect(struct rte_eth_dev *dev)
else
continue;
}
- if (mq->intr_handle.fd > 0) {
- close(mq->intr_handle.fd);
- mq->intr_handle.fd = -1;
+
+ if (rte_intr_fd_get(mq->intr_handle) > 0) {
+ close(rte_intr_fd_get(mq->intr_handle));
+ rte_intr_fd_set(mq->intr_handle, -1);
}
}
for (i = 0; i < pmd->cfg.num_s2c_rings; i++) {
@@ -604,9 +610,10 @@ memif_disconnect(struct rte_eth_dev *dev)
else
continue;
}
- if (mq->intr_handle.fd > 0) {
- close(mq->intr_handle.fd);
- mq->intr_handle.fd = -1;
+
+ if (rte_intr_fd_get(mq->intr_handle) > 0) {
+ close(rte_intr_fd_get(mq->intr_handle));
+ rte_intr_fd_set(mq->intr_handle, -1);
}
}
@@ -644,7 +651,7 @@ memif_msg_receive(struct memif_control_channel *cc)
mh.msg_control = ctl;
mh.msg_controllen = sizeof(ctl);
- size = recvmsg(cc->intr_handle.fd, &mh, 0);
+ size = recvmsg(rte_intr_fd_get(cc->intr_handle), &mh, 0);
if (size != sizeof(memif_msg_t)) {
MIF_LOG(DEBUG, "Invalid message size = %zd", size);
if (size > 0)
@@ -774,7 +781,7 @@ memif_intr_handler(void *arg)
/* if driver failed to assign device */
if (cc->dev == NULL) {
memif_msg_send_from_queue(cc);
- ret = rte_intr_callback_unregister_pending(&cc->intr_handle,
+ ret = rte_intr_callback_unregister_pending(cc->intr_handle,
memif_intr_handler,
cc,
memif_intr_unregister_handler);
@@ -812,12 +819,12 @@ memif_listener_handler(void *arg)
int ret;
addr_len = sizeof(client);
- sockfd = accept(socket->intr_handle.fd, (struct sockaddr *)&client,
- (socklen_t *)&addr_len);
+ sockfd = accept(rte_intr_fd_get(socket->intr_handle),
+ (struct sockaddr *)&client, (socklen_t *)&addr_len);
if (sockfd < 0) {
MIF_LOG(ERR,
"Failed to accept connection request on socket fd %d",
- socket->intr_handle.fd);
+ rte_intr_fd_get(socket->intr_handle));
return;
}
@@ -829,13 +836,25 @@ memif_listener_handler(void *arg)
goto error;
}
- cc->intr_handle.fd = sockfd;
- cc->intr_handle.type = RTE_INTR_HANDLE_EXT;
+ /* Allocate interrupt instance */
+ cc->intr_handle = rte_intr_instance_alloc();
+ if (!cc->intr_handle) {
+ MIF_LOG(ERR, "Failed to allocate intr handle");
+ goto error;
+ }
+
+ if (rte_intr_fd_set(cc->intr_handle, sockfd))
+ goto error;
+
+ if (rte_intr_type_set(cc->intr_handle, RTE_INTR_HANDLE_EXT))
+ goto error;
+
cc->socket = socket;
cc->dev = NULL;
TAILQ_INIT(&cc->msg_queue);
- ret = rte_intr_callback_register(&cc->intr_handle, memif_intr_handler, cc);
+ ret = rte_intr_callback_register(cc->intr_handle, memif_intr_handler,
+ cc);
if (ret < 0) {
MIF_LOG(ERR, "Failed to register control channel callback.");
goto error;
@@ -857,8 +876,11 @@ memif_listener_handler(void *arg)
close(sockfd);
sockfd = -1;
}
- if (cc != NULL)
+ if (cc != NULL) {
+ if (cc->intr_handle)
+ rte_intr_instance_free(cc->intr_handle);
rte_free(cc);
+ }
}
static struct memif_socket *
@@ -914,9 +936,21 @@ memif_socket_create(char *key, uint8_t listener, bool is_abstract)
MIF_LOG(DEBUG, "Memif listener socket %s created.", sock->filename);
- sock->intr_handle.fd = sockfd;
- sock->intr_handle.type = RTE_INTR_HANDLE_EXT;
- ret = rte_intr_callback_register(&sock->intr_handle,
+ /* Allocate interrupt instance */
+ sock->intr_handle = rte_intr_instance_alloc();
+ if (!sock->intr_handle) {
+ MIF_LOG(ERR, "Failed to allocate intr handle");
+ goto error;
+ }
+
+ if (rte_intr_fd_set(sock->intr_handle, sockfd))
+ goto error;
+
+ if (rte_intr_type_set(sock->intr_handle,
+ RTE_INTR_HANDLE_EXT))
+ goto error;
+
+ ret = rte_intr_callback_register(sock->intr_handle,
memif_listener_handler, sock);
if (ret < 0) {
MIF_LOG(ERR, "Failed to register interrupt "
@@ -929,8 +963,10 @@ memif_socket_create(char *key, uint8_t listener, bool is_abstract)
error:
MIF_LOG(ERR, "Failed to setup socket %s: %s", key, strerror(errno));
- if (sock != NULL)
+ if (sock != NULL) {
+ rte_intr_instance_free(sock->intr_handle);
rte_free(sock);
+ }
if (sockfd >= 0)
close(sockfd);
return NULL;
@@ -1047,6 +1083,8 @@ memif_socket_remove_device(struct rte_eth_dev *dev)
MIF_LOG(ERR, "Failed to remove socket file: %s",
socket->filename);
}
+ if (pmd->role != MEMIF_ROLE_CLIENT)
+ rte_intr_instance_free(socket->intr_handle);
rte_free(socket);
}
}
@@ -1109,13 +1147,24 @@ memif_connect_client(struct rte_eth_dev *dev)
goto error;
}
- pmd->cc->intr_handle.fd = sockfd;
- pmd->cc->intr_handle.type = RTE_INTR_HANDLE_EXT;
+ /* Allocate interrupt instance */
+ pmd->cc->intr_handle = rte_intr_instance_alloc();
+ if (!pmd->cc->intr_handle) {
+ MIF_LOG(ERR, "Failed to allocate intr handle");
+ goto error;
+ }
+
+ if (rte_intr_fd_set(pmd->cc->intr_handle, sockfd))
+ goto error;
+
+ if (rte_intr_type_set(pmd->cc->intr_handle, RTE_INTR_HANDLE_EXT))
+ goto error;
+
pmd->cc->socket = NULL;
pmd->cc->dev = dev;
TAILQ_INIT(&pmd->cc->msg_queue);
- ret = rte_intr_callback_register(&pmd->cc->intr_handle,
+ ret = rte_intr_callback_register(pmd->cc->intr_handle,
memif_intr_handler, pmd->cc);
if (ret < 0) {
MIF_LOG(ERR, "Failed to register interrupt callback for control fd");
@@ -1130,6 +1179,7 @@ memif_connect_client(struct rte_eth_dev *dev)
sockfd = -1;
}
if (pmd->cc != NULL) {
+ rte_intr_instance_free(pmd->cc->intr_handle);
rte_free(pmd->cc);
pmd->cc = NULL;
}
diff --git a/drivers/net/memif/memif_socket.h b/drivers/net/memif/memif_socket.h
index b9b8a15178..b0decbb0a2 100644
--- a/drivers/net/memif/memif_socket.h
+++ b/drivers/net/memif/memif_socket.h
@@ -85,7 +85,7 @@ struct memif_socket_dev_list_elt {
(sizeof(struct sockaddr_un) - offsetof(struct sockaddr_un, sun_path))
struct memif_socket {
- struct rte_intr_handle intr_handle; /**< interrupt handle */
+ struct rte_intr_handle *intr_handle; /**< interrupt handle */
char filename[MEMIF_SOCKET_UN_SIZE]; /**< socket filename */
TAILQ_HEAD(, memif_socket_dev_list_elt) dev_queue;
@@ -101,7 +101,7 @@ struct memif_msg_queue_elt {
};
struct memif_control_channel {
- struct rte_intr_handle intr_handle; /**< interrupt handle */
+ struct rte_intr_handle *intr_handle; /**< interrupt handle */
TAILQ_HEAD(, memif_msg_queue_elt) msg_queue; /**< control message queue */
struct memif_socket *socket; /**< pointer to socket */
struct rte_eth_dev *dev; /**< pointer to device */
diff --git a/drivers/net/memif/rte_eth_memif.c b/drivers/net/memif/rte_eth_memif.c
index 980150293e..2b9a092a34 100644
--- a/drivers/net/memif/rte_eth_memif.c
+++ b/drivers/net/memif/rte_eth_memif.c
@@ -326,7 +326,8 @@ eth_memif_rx(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
/* consume interrupt */
if ((ring->flags & MEMIF_RING_FLAG_MASK_INT) == 0)
- size = read(mq->intr_handle.fd, &b, sizeof(b));
+ size = read(rte_intr_fd_get(mq->intr_handle), &b,
+ sizeof(b));
ring_size = 1 << mq->log2_ring_size;
mask = ring_size - 1;
@@ -462,7 +463,8 @@ eth_memif_rx_zc(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
if ((ring->flags & MEMIF_RING_FLAG_MASK_INT) == 0) {
uint64_t b;
ssize_t size __rte_unused;
- size = read(mq->intr_handle.fd, &b, sizeof(b));
+ size = read(rte_intr_fd_get(mq->intr_handle), &b,
+ sizeof(b));
}
ring_size = 1 << mq->log2_ring_size;
@@ -680,7 +682,8 @@ eth_memif_tx(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
if ((ring->flags & MEMIF_RING_FLAG_MASK_INT) == 0) {
a = 1;
- size = write(mq->intr_handle.fd, &a, sizeof(a));
+ size = write(rte_intr_fd_get(mq->intr_handle), &a,
+ sizeof(a));
if (unlikely(size < 0)) {
MIF_LOG(WARNING,
"Failed to send interrupt. %s", strerror(errno));
@@ -832,7 +835,8 @@ eth_memif_tx_zc(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
/* Send interrupt, if enabled. */
if ((ring->flags & MEMIF_RING_FLAG_MASK_INT) == 0) {
uint64_t a = 1;
- ssize_t size = write(mq->intr_handle.fd, &a, sizeof(a));
+ ssize_t size = write(rte_intr_fd_get(mq->intr_handle),
+ &a, sizeof(a));
if (unlikely(size < 0)) {
MIF_LOG(WARNING,
"Failed to send interrupt. %s", strerror(errno));
@@ -1092,8 +1096,11 @@ memif_init_queues(struct rte_eth_dev *dev)
mq->ring_offset = memif_get_ring_offset(dev, mq, MEMIF_RING_C2S, i);
mq->last_head = 0;
mq->last_tail = 0;
- mq->intr_handle.fd = eventfd(0, EFD_NONBLOCK);
- if (mq->intr_handle.fd < 0) {
+ if (rte_intr_fd_set(mq->intr_handle,
+ eventfd(0, EFD_NONBLOCK)))
+ return -rte_errno;
+
+ if (rte_intr_fd_get(mq->intr_handle) < 0) {
MIF_LOG(WARNING,
"Failed to create eventfd for tx queue %d: %s.", i,
strerror(errno));
@@ -1115,8 +1122,11 @@ memif_init_queues(struct rte_eth_dev *dev)
mq->ring_offset = memif_get_ring_offset(dev, mq, MEMIF_RING_S2C, i);
mq->last_head = 0;
mq->last_tail = 0;
- mq->intr_handle.fd = eventfd(0, EFD_NONBLOCK);
- if (mq->intr_handle.fd < 0) {
+ if (rte_intr_fd_set(mq->intr_handle,
+ eventfd(0, EFD_NONBLOCK)))
+ return -rte_errno;
+
+ if (rte_intr_fd_get(mq->intr_handle) < 0) {
MIF_LOG(WARNING,
"Failed to create eventfd for rx queue %d: %s.", i,
strerror(errno));
@@ -1310,12 +1320,24 @@ memif_tx_queue_setup(struct rte_eth_dev *dev,
return -ENOMEM;
}
+ /* Allocate interrupt instance */
+ mq->intr_handle = rte_intr_instance_alloc();
+ if (!mq->intr_handle) {
+ MIF_LOG(ERR, "Failed to allocate intr handle");
+ return -ENOMEM;
+ }
+
mq->type =
(pmd->role == MEMIF_ROLE_CLIENT) ? MEMIF_RING_C2S : MEMIF_RING_S2C;
mq->n_pkts = 0;
mq->n_bytes = 0;
- mq->intr_handle.fd = -1;
- mq->intr_handle.type = RTE_INTR_HANDLE_EXT;
+
+ if (rte_intr_fd_set(mq->intr_handle, -1))
+ return -rte_errno;
+
+ if (rte_intr_type_set(mq->intr_handle, RTE_INTR_HANDLE_EXT))
+ return -rte_errno;
+
mq->in_port = dev->data->port_id;
dev->data->tx_queues[qid] = mq;
@@ -1339,11 +1361,23 @@ memif_rx_queue_setup(struct rte_eth_dev *dev,
return -ENOMEM;
}
+ /* Allocate interrupt instance */
+ mq->intr_handle = rte_intr_instance_alloc();
+ if (!mq->intr_handle) {
+ MIF_LOG(ERR, "Failed to allocate intr handle");
+ return -ENOMEM;
+ }
+
mq->type = (pmd->role == MEMIF_ROLE_CLIENT) ? MEMIF_RING_S2C : MEMIF_RING_C2S;
mq->n_pkts = 0;
mq->n_bytes = 0;
- mq->intr_handle.fd = -1;
- mq->intr_handle.type = RTE_INTR_HANDLE_EXT;
+
+ if (rte_intr_fd_set(mq->intr_handle, -1))
+ return -rte_errno;
+
+ if (rte_intr_type_set(mq->intr_handle, RTE_INTR_HANDLE_EXT))
+ return -rte_errno;
+
mq->mempool = mb_pool;
mq->in_port = dev->data->port_id;
dev->data->rx_queues[qid] = mq;
@@ -1370,6 +1404,7 @@ memif_tx_queue_release(struct rte_eth_dev *dev, uint16_t qid)
if (!mq)
return;
+ rte_intr_instance_free(mq->intr_handle);
rte_free(mq);
}
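Because struct rte_intr_handle can no longer be embedded, a vdev such as memif allocates one instance per queue and keeps only a pointer. A rough setup/teardown sketch follows, assuming rte_intr_instance_alloc() takes no arguments as in this series; struct xxx_queue and both functions are hypothetical stand-ins for the driver's own queue structure and helpers.

#include <errno.h>
#include <unistd.h>
#include <sys/eventfd.h>

#include <rte_errno.h>
#include <rte_interrupts.h>

/* Stand-in for the driver's per-queue structure. */
struct xxx_queue {
	struct rte_intr_handle *intr_handle;	/* now a pointer, not embedded */
	/* ... remaining queue state elided ... */
};

static int
xxx_queue_intr_setup(struct xxx_queue *mq)
{
	int fd = -1, ret;

	mq->intr_handle = rte_intr_instance_alloc();
	if (mq->intr_handle == NULL)
		return -ENOMEM;

	if (rte_intr_type_set(mq->intr_handle, RTE_INTR_HANDLE_EXT)) {
		ret = -rte_errno;
		goto error;
	}

	fd = eventfd(0, EFD_NONBLOCK);
	if (fd < 0) {
		ret = -errno;
		goto error;
	}

	if (rte_intr_fd_set(mq->intr_handle, fd)) {
		ret = -rte_errno;
		goto error;
	}
	return 0;

error:
	if (fd >= 0)
		close(fd);
	rte_intr_instance_free(mq->intr_handle);
	mq->intr_handle = NULL;
	return ret;
}

static void
xxx_queue_intr_release(struct xxx_queue *mq)
{
	if (mq->intr_handle == NULL)
		return;
	if (rte_intr_fd_get(mq->intr_handle) >= 0)
		close(rte_intr_fd_get(mq->intr_handle));
	rte_intr_instance_free(mq->intr_handle);
	mq->intr_handle = NULL;
}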
diff --git a/drivers/net/memif/rte_eth_memif.h b/drivers/net/memif/rte_eth_memif.h
index 2038bda742..a5ee23d42e 100644
--- a/drivers/net/memif/rte_eth_memif.h
+++ b/drivers/net/memif/rte_eth_memif.h
@@ -68,7 +68,7 @@ struct memif_queue {
uint64_t n_pkts; /**< number of rx/tx packets */
uint64_t n_bytes; /**< number of rx/tx bytes */
- struct rte_intr_handle intr_handle; /**< interrupt handle */
+ struct rte_intr_handle *intr_handle; /**< interrupt handle */
memif_log2_ring_size_t log2_ring_size; /**< log2 of ring size */
};
diff --git a/drivers/net/mlx4/mlx4.c b/drivers/net/mlx4/mlx4.c
index f7fe831d61..75656c06db 100644
--- a/drivers/net/mlx4/mlx4.c
+++ b/drivers/net/mlx4/mlx4.c
@@ -1042,9 +1042,18 @@ mlx4_pci_probe(struct rte_pci_driver *pci_drv, struct rte_pci_device *pci_dev)
rte_eth_copy_pci_info(eth_dev, pci_dev);
eth_dev->data->dev_flags |= RTE_ETH_DEV_AUTOFILL_QUEUE_XSTATS;
/* Initialize local interrupt handle for current port. */
- memset(&priv->intr_handle, 0, sizeof(struct rte_intr_handle));
- priv->intr_handle.fd = -1;
- priv->intr_handle.type = RTE_INTR_HANDLE_EXT;
+ priv->intr_handle = rte_intr_instance_alloc();
+ if (!priv->intr_handle) {
+ RTE_LOG(ERR, EAL, "Fail to allocate intr_handle\n");
+ goto port_error;
+ }
+
+ if (rte_intr_fd_set(priv->intr_handle, -1))
+ goto port_error;
+
+ if (rte_intr_type_set(priv->intr_handle,
+ RTE_INTR_HANDLE_EXT))
+ goto port_error;
/*
* Override ethdev interrupt handle pointer with private
* handle instead of that of the parent PCI device used by
@@ -1057,7 +1066,7 @@ mlx4_pci_probe(struct rte_pci_driver *pci_drv, struct rte_pci_device *pci_dev)
* besides setting up eth_dev->intr_handle, the rest is
* handled by rte_intr_rx_ctl().
*/
- eth_dev->intr_handle = &priv->intr_handle;
+ eth_dev->intr_handle = priv->intr_handle;
priv->dev_data = eth_dev->data;
eth_dev->dev_ops = &mlx4_dev_ops;
#ifdef HAVE_IBV_MLX4_BUF_ALLOCATORS
@@ -1102,6 +1111,7 @@ mlx4_pci_probe(struct rte_pci_driver *pci_drv, struct rte_pci_device *pci_dev)
prev_dev = eth_dev;
continue;
port_error:
+ rte_intr_instance_free(priv->intr_handle);
rte_free(priv);
if (eth_dev != NULL)
eth_dev->data->dev_private = NULL;
diff --git a/drivers/net/mlx4/mlx4.h b/drivers/net/mlx4/mlx4.h
index e07b1d2386..2d0c512f79 100644
--- a/drivers/net/mlx4/mlx4.h
+++ b/drivers/net/mlx4/mlx4.h
@@ -176,7 +176,7 @@ struct mlx4_priv {
uint32_t tso_max_payload_sz; /**< Max supported TSO payload size. */
uint32_t hw_rss_max_qps; /**< Max Rx Queues supported by RSS. */
uint64_t hw_rss_sup; /**< Supported RSS hash fields (Verbs format). */
- struct rte_intr_handle intr_handle; /**< Port interrupt handle. */
+ struct rte_intr_handle *intr_handle; /**< Port interrupt handle. */
struct mlx4_drop *drop; /**< Shared resources for drop flow rules. */
struct {
uint32_t dev_gen; /* Generation number to flush local caches. */
diff --git a/drivers/net/mlx4/mlx4_intr.c b/drivers/net/mlx4/mlx4_intr.c
index d56009c418..8059fb4624 100644
--- a/drivers/net/mlx4/mlx4_intr.c
+++ b/drivers/net/mlx4/mlx4_intr.c
@@ -43,12 +43,12 @@ static int mlx4_link_status_check(struct mlx4_priv *priv);
static void
mlx4_rx_intr_vec_disable(struct mlx4_priv *priv)
{
- struct rte_intr_handle *intr_handle = &priv->intr_handle;
+ struct rte_intr_handle *intr_handle = priv->intr_handle;
rte_intr_free_epoll_fd(intr_handle);
- free(intr_handle->intr_vec);
- intr_handle->nb_efd = 0;
- intr_handle->intr_vec = NULL;
+ rte_intr_vec_list_free(intr_handle);
+
+ rte_intr_nb_efd_set(intr_handle, 0);
}
/**
@@ -67,11 +67,10 @@ mlx4_rx_intr_vec_enable(struct mlx4_priv *priv)
unsigned int rxqs_n = ETH_DEV(priv)->data->nb_rx_queues;
unsigned int n = RTE_MIN(rxqs_n, (uint32_t)RTE_MAX_RXTX_INTR_VEC_ID);
unsigned int count = 0;
- struct rte_intr_handle *intr_handle = &priv->intr_handle;
+ struct rte_intr_handle *intr_handle = priv->intr_handle;
mlx4_rx_intr_vec_disable(priv);
- intr_handle->intr_vec = malloc(n * sizeof(intr_handle->intr_vec[0]));
- if (intr_handle->intr_vec == NULL) {
+ if (rte_intr_vec_list_alloc(intr_handle, NULL, n)) {
rte_errno = ENOMEM;
ERROR("failed to allocate memory for interrupt vector,"
" Rx interrupts will not be supported");
@@ -83,9 +82,9 @@ mlx4_rx_intr_vec_enable(struct mlx4_priv *priv)
/* Skip queues that cannot request interrupts. */
if (!rxq || !rxq->channel) {
/* Use invalid intr_vec[] index to disable entry. */
- intr_handle->intr_vec[i] =
- RTE_INTR_VEC_RXTX_OFFSET +
- RTE_MAX_RXTX_INTR_VEC_ID;
+ if (rte_intr_vec_list_index_set(intr_handle, i,
+ RTE_INTR_VEC_RXTX_OFFSET + RTE_MAX_RXTX_INTR_VEC_ID))
+ return -rte_errno;
continue;
}
if (count >= RTE_MAX_RXTX_INTR_VEC_ID) {
@@ -96,14 +95,22 @@ mlx4_rx_intr_vec_enable(struct mlx4_priv *priv)
mlx4_rx_intr_vec_disable(priv);
return -rte_errno;
}
- intr_handle->intr_vec[i] = RTE_INTR_VEC_RXTX_OFFSET + count;
- intr_handle->efds[count] = rxq->channel->fd;
+
+ if (rte_intr_vec_list_index_set(intr_handle, i,
+ RTE_INTR_VEC_RXTX_OFFSET + count))
+ return -rte_errno;
+
+ if (rte_intr_efds_index_set(intr_handle, count,
+ rxq->channel->fd))
+ return -rte_errno;
+
count++;
}
if (!count)
mlx4_rx_intr_vec_disable(priv);
else
- intr_handle->nb_efd = count;
+ if (rte_intr_nb_efd_set(intr_handle, count))
+ return -rte_errno;
return 0;
}
@@ -254,12 +261,13 @@ mlx4_intr_uninstall(struct mlx4_priv *priv)
{
int err = rte_errno; /* Make sure rte_errno remains unchanged. */
- if (priv->intr_handle.fd != -1) {
- rte_intr_callback_unregister(&priv->intr_handle,
+ if (rte_intr_fd_get(priv->intr_handle) != -1) {
+ rte_intr_callback_unregister(priv->intr_handle,
(void (*)(void *))
mlx4_interrupt_handler,
priv);
- priv->intr_handle.fd = -1;
+ if (rte_intr_fd_set(priv->intr_handle, -1))
+ return -rte_errno;
}
rte_eal_alarm_cancel((void (*)(void *))mlx4_link_status_alarm, priv);
priv->intr_alarm = 0;
@@ -286,8 +294,11 @@ mlx4_intr_install(struct mlx4_priv *priv)
mlx4_intr_uninstall(priv);
if (intr_conf->lsc | intr_conf->rmv) {
- priv->intr_handle.fd = priv->ctx->async_fd;
- rc = rte_intr_callback_register(&priv->intr_handle,
+ if (rte_intr_fd_set(priv->intr_handle,
+ priv->ctx->async_fd))
+ return -rte_errno;
+
+ rc = rte_intr_callback_register(priv->intr_handle,
(void (*)(void *))
mlx4_interrupt_handler,
priv);
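The Rx interrupt vector mlx4 builds above can be summarized as the loop below. Completion-channel details are elided and xxx_rxq_event_fd() is a placeholder for fetching the queue's event fd, so this is a sketch of how the vec_list/efds/nb_efd setters combine (note the efds array is indexed by the running count, matching the field it replaces), not the driver's exact logic.

/* Hedged sketch: populate the Rx interrupt vector for n queues. */
static int
xxx_rx_intr_vec_enable(struct rte_intr_handle *intr_handle, unsigned int n)
{
	unsigned int i, count = 0;

	if (rte_intr_vec_list_alloc(intr_handle, NULL, n))
		return -ENOMEM;

	for (i = 0; i < n; i++) {
		int fd = xxx_rxq_event_fd(i); /* placeholder: -1 if no event fd */

		if (fd < 0) {
			/* An out-of-range index disables the entry. */
			if (rte_intr_vec_list_index_set(intr_handle, i,
					RTE_INTR_VEC_RXTX_OFFSET +
					RTE_MAX_RXTX_INTR_VEC_ID))
				goto error;
			continue;
		}
		if (rte_intr_vec_list_index_set(intr_handle, i,
				RTE_INTR_VEC_RXTX_OFFSET + count) ||
		    rte_intr_efds_index_set(intr_handle, count, fd))
			goto error;
		count++;
	}
	if (count == 0) {
		rte_intr_vec_list_free(intr_handle);
		return 0;
	}
	if (rte_intr_nb_efd_set(intr_handle, count))
		goto error;
	return 0;

error:
	rte_intr_vec_list_free(intr_handle);
	return -rte_errno;
}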
diff --git a/drivers/net/mlx5/linux/mlx5_os.c b/drivers/net/mlx5/linux/mlx5_os.c
index 3746057673..02dd3e1811 100644
--- a/drivers/net/mlx5/linux/mlx5_os.c
+++ b/drivers/net/mlx5/linux/mlx5_os.c
@@ -2586,9 +2586,7 @@ mlx5_os_pci_probe_pf(struct rte_pci_device *pci_dev,
*/
if (list[i].info.representor) {
struct rte_intr_handle *intr_handle;
- intr_handle = mlx5_malloc(MLX5_MEM_SYS | MLX5_MEM_ZERO,
- sizeof(*intr_handle), 0,
- SOCKET_ID_ANY);
+ intr_handle = rte_intr_instance_alloc();
if (!intr_handle) {
DRV_LOG(ERR,
"port %u failed to allocate memory for interrupt handler "
@@ -2753,7 +2751,7 @@ mlx5_os_auxiliary_probe(struct rte_device *dev)
if (eth_dev == NULL)
return -rte_errno;
/* Post create. */
- eth_dev->intr_handle = &adev->intr_handle;
+ eth_dev->intr_handle = adev->intr_handle;
if (rte_eal_process_type() == RTE_PROC_PRIMARY) {
eth_dev->data->dev_flags |= RTE_ETH_DEV_INTR_LSC;
eth_dev->data->dev_flags |= RTE_ETH_DEV_INTR_RMV;
@@ -2937,7 +2935,14 @@ mlx5_os_dev_shared_handler_install(struct mlx5_dev_ctx_shared *sh)
int ret;
int flags;
- sh->intr_handle.fd = -1;
+ sh->intr_handle = rte_intr_instance_alloc();
+ if (!sh->intr_handle) {
+ DRV_LOG(ERR, "Fail to allocate intr_handle");
+ rte_errno = ENOMEM;
+ return;
+ }
+ rte_intr_fd_set(sh->intr_handle, -1);
+
flags = fcntl(((struct ibv_context *)sh->ctx)->async_fd, F_GETFL);
ret = fcntl(((struct ibv_context *)sh->ctx)->async_fd,
F_SETFL, flags | O_NONBLOCK);
@@ -2945,17 +2950,24 @@ mlx5_os_dev_shared_handler_install(struct mlx5_dev_ctx_shared *sh)
DRV_LOG(INFO, "failed to change file descriptor async event"
" queue");
} else {
- sh->intr_handle.fd = ((struct ibv_context *)sh->ctx)->async_fd;
- sh->intr_handle.type = RTE_INTR_HANDLE_EXT;
- if (rte_intr_callback_register(&sh->intr_handle,
+ rte_intr_fd_set(sh->intr_handle,
+ ((struct ibv_context *)sh->ctx)->async_fd);
+ rte_intr_type_set(sh->intr_handle, RTE_INTR_HANDLE_EXT);
+ if (rte_intr_callback_register(sh->intr_handle,
mlx5_dev_interrupt_handler, sh)) {
DRV_LOG(INFO, "Fail to install the shared interrupt.");
- sh->intr_handle.fd = -1;
+ rte_intr_fd_set(sh->intr_handle, -1);
}
}
if (sh->devx) {
#ifdef HAVE_IBV_DEVX_ASYNC
- sh->intr_handle_devx.fd = -1;
+ sh->intr_handle_devx = rte_intr_instance_alloc();
+ if (!sh->intr_handle_devx) {
+ DRV_LOG(ERR, "Fail to allocate intr_handle");
+ rte_errno = ENOMEM;
+ return;
+ }
+ rte_intr_fd_set(sh->intr_handle_devx, -1);
sh->devx_comp =
(void *)mlx5_glue->devx_create_cmd_comp(sh->ctx);
struct mlx5dv_devx_cmd_comp *devx_comp = sh->devx_comp;
@@ -2970,13 +2982,14 @@ mlx5_os_dev_shared_handler_install(struct mlx5_dev_ctx_shared *sh)
" devx comp");
return;
}
- sh->intr_handle_devx.fd = devx_comp->fd;
- sh->intr_handle_devx.type = RTE_INTR_HANDLE_EXT;
- if (rte_intr_callback_register(&sh->intr_handle_devx,
+ rte_intr_fd_set(sh->intr_handle_devx, devx_comp->fd);
+ rte_intr_type_set(sh->intr_handle_devx,
+ RTE_INTR_HANDLE_EXT);
+ if (rte_intr_callback_register(sh->intr_handle_devx,
mlx5_dev_interrupt_handler_devx, sh)) {
DRV_LOG(INFO, "Fail to install the devx shared"
" interrupt.");
- sh->intr_handle_devx.fd = -1;
+ rte_intr_fd_set(sh->intr_handle_devx, -1);
}
#endif /* HAVE_IBV_DEVX_ASYNC */
}
@@ -2993,13 +3006,15 @@ mlx5_os_dev_shared_handler_install(struct mlx5_dev_ctx_shared *sh)
void
mlx5_os_dev_shared_handler_uninstall(struct mlx5_dev_ctx_shared *sh)
{
- if (sh->intr_handle.fd >= 0)
- mlx5_intr_callback_unregister(&sh->intr_handle,
+ if (rte_intr_fd_get(sh->intr_handle) >= 0)
+ mlx5_intr_callback_unregister(sh->intr_handle,
mlx5_dev_interrupt_handler, sh);
+ rte_intr_instance_free(sh->intr_handle);
#ifdef HAVE_IBV_DEVX_ASYNC
- if (sh->intr_handle_devx.fd >= 0)
- rte_intr_callback_unregister(&sh->intr_handle_devx,
+ if (rte_intr_fd_get(sh->intr_handle_devx) >= 0)
+ rte_intr_callback_unregister(sh->intr_handle_devx,
mlx5_dev_interrupt_handler_devx, sh);
+ rte_intr_instance_free(sh->intr_handle_devx);
if (sh->devx_comp)
mlx5_glue->devx_destroy_cmd_comp(sh->devx_comp);
#endif
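
For reference only, not part of the patch: a condensed sketch of the handle lifecycle the hunks above apply, using only the accessors introduced earlier in this series. The helper name is hypothetical, error unwinding is simplified, and the no-argument rte_intr_instance_alloc() matches this revision of the series.

/* Assumes <rte_interrupts.h> and <rte_errno.h>. */
static struct rte_intr_handle *
drv_intr_setup(int async_fd, rte_intr_callback_fn cb, void *cb_arg)
{
	/* The handle is now allocated by EAL instead of being embedded
	 * in driver-private structures.
	 */
	struct rte_intr_handle *ih = rte_intr_instance_alloc();

	if (ih == NULL)
		return NULL;
	if (rte_intr_fd_set(ih, async_fd) != 0 ||
	    rte_intr_type_set(ih, RTE_INTR_HANDLE_EXT) != 0 ||
	    rte_intr_callback_register(ih, cb, cb_arg) != 0) {
		rte_intr_instance_free(ih);
		return NULL;
	}
	return ih;
}
/* Teardown mirrors it: rte_intr_callback_unregister(ih, cb, cb_arg);
 * then rte_intr_instance_free(ih).
 */
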
diff --git a/drivers/net/mlx5/linux/mlx5_socket.c b/drivers/net/mlx5/linux/mlx5_socket.c
index 6356b66dc4..1d6a97fbea 100644
--- a/drivers/net/mlx5/linux/mlx5_socket.c
+++ b/drivers/net/mlx5/linux/mlx5_socket.c
@@ -23,7 +23,7 @@
#define MLX5_SOCKET_PATH "/var/tmp/dpdk_net_mlx5_%d"
int server_socket; /* Unix socket for primary process. */
-struct rte_intr_handle server_intr_handle; /* Interrupt handler. */
+struct rte_intr_handle *server_intr_handle; /* Interrupt handler. */
/**
* Handle server pmd socket interrupts.
@@ -145,9 +145,18 @@ static int
mlx5_pmd_interrupt_handler_install(void)
{
MLX5_ASSERT(server_socket);
- server_intr_handle.fd = server_socket;
- server_intr_handle.type = RTE_INTR_HANDLE_EXT;
- return rte_intr_callback_register(&server_intr_handle,
+ server_intr_handle = rte_intr_instance_alloc();
+ if (!server_intr_handle) {
+ DRV_LOG(ERR, "Fail to allocate intr_handle");
+ return -ENOMEM;
+ }
+ if (rte_intr_fd_set(server_intr_handle, server_socket))
+ return -1;
+
+ if (rte_intr_type_set(server_intr_handle, RTE_INTR_HANDLE_EXT))
+ return -1;
+
+ return rte_intr_callback_register(server_intr_handle,
mlx5_pmd_socket_handle, NULL);
}
@@ -158,12 +167,13 @@ static void
mlx5_pmd_interrupt_handler_uninstall(void)
{
if (server_socket) {
- mlx5_intr_callback_unregister(&server_intr_handle,
+ mlx5_intr_callback_unregister(server_intr_handle,
mlx5_pmd_socket_handle,
NULL);
}
- server_intr_handle.fd = 0;
- server_intr_handle.type = RTE_INTR_HANDLE_UNKNOWN;
+ rte_intr_fd_set(server_intr_handle, 0);
+ rte_intr_type_set(server_intr_handle, RTE_INTR_HANDLE_UNKNOWN);
+ rte_intr_instance_free(server_intr_handle);
}
/**
diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h
index 3581414b78..95c6fec6fa 100644
--- a/drivers/net/mlx5/mlx5.h
+++ b/drivers/net/mlx5/mlx5.h
@@ -1016,7 +1016,7 @@ struct mlx5_dev_txpp {
uint32_t tick; /* Completion tick duration in nanoseconds. */
uint32_t test; /* Packet pacing test mode. */
int32_t skew; /* Scheduling skew. */
- struct rte_intr_handle intr_handle; /* Periodic interrupt. */
+ struct rte_intr_handle *intr_handle; /* Periodic interrupt. */
void *echan; /* Event Channel. */
struct mlx5_txpp_wq clock_queue; /* Clock Queue. */
struct mlx5_txpp_wq rearm_queue; /* Clock Queue. */
@@ -1184,8 +1184,8 @@ struct mlx5_dev_ctx_shared {
/* Memory Pool for mlx5 flow resources. */
struct mlx5_l3t_tbl *cnt_id_tbl; /* Shared counter lookup table. */
/* Shared interrupt handler section. */
- struct rte_intr_handle intr_handle; /* Interrupt handler for device. */
- struct rte_intr_handle intr_handle_devx; /* DEVX interrupt handler. */
+ struct rte_intr_handle *intr_handle; /* Interrupt handler for device. */
+ struct rte_intr_handle *intr_handle_devx; /* DEVX interrupt handler. */
void *devx_comp; /* DEVX async comp obj. */
struct mlx5_devx_obj *tis; /* TIS object. */
struct mlx5_devx_obj *td; /* Transport domain. */
diff --git a/drivers/net/mlx5/mlx5_rxq.c b/drivers/net/mlx5/mlx5_rxq.c
index b68443bed5..e1430508ff 100644
--- a/drivers/net/mlx5/mlx5_rxq.c
+++ b/drivers/net/mlx5/mlx5_rxq.c
@@ -834,10 +834,7 @@ mlx5_rx_intr_vec_enable(struct rte_eth_dev *dev)
if (!dev->data->dev_conf.intr_conf.rxq)
return 0;
mlx5_rx_intr_vec_disable(dev);
- intr_handle->intr_vec = mlx5_malloc(0,
- n * sizeof(intr_handle->intr_vec[0]),
- 0, SOCKET_ID_ANY);
- if (intr_handle->intr_vec == NULL) {
+ if (rte_intr_vec_list_alloc(intr_handle, NULL, n)) {
DRV_LOG(ERR,
"port %u failed to allocate memory for interrupt"
" vector, Rx interrupts will not be supported",
@@ -845,7 +842,10 @@ mlx5_rx_intr_vec_enable(struct rte_eth_dev *dev)
rte_errno = ENOMEM;
return -rte_errno;
}
- intr_handle->type = RTE_INTR_HANDLE_EXT;
+
+ if (rte_intr_type_set(intr_handle, RTE_INTR_HANDLE_EXT))
+ return -rte_errno;
+
for (i = 0; i != n; ++i) {
/* This rxq obj must not be released in this function. */
struct mlx5_rxq_ctrl *rxq_ctrl = mlx5_rxq_get(dev, i);
@@ -856,9 +856,9 @@ mlx5_rx_intr_vec_enable(struct rte_eth_dev *dev)
if (!rxq_obj || (!rxq_obj->ibv_channel &&
!rxq_obj->devx_channel)) {
/* Use invalid intr_vec[] index to disable entry. */
- intr_handle->intr_vec[i] =
- RTE_INTR_VEC_RXTX_OFFSET +
- RTE_MAX_RXTX_INTR_VEC_ID;
+ if (rte_intr_vec_list_index_set(intr_handle, i,
+ RTE_INTR_VEC_RXTX_OFFSET + RTE_MAX_RXTX_INTR_VEC_ID))
+ return -rte_errno;
/* Decrease the rxq_ctrl's refcnt */
if (rxq_ctrl)
mlx5_rxq_release(dev, i);
@@ -885,14 +885,20 @@ mlx5_rx_intr_vec_enable(struct rte_eth_dev *dev)
mlx5_rx_intr_vec_disable(dev);
return -rte_errno;
}
- intr_handle->intr_vec[i] = RTE_INTR_VEC_RXTX_OFFSET + count;
- intr_handle->efds[count] = rxq_obj->fd;
+
+ if (rte_intr_vec_list_index_set(intr_handle, i,
+ RTE_INTR_VEC_RXTX_OFFSET + count))
+ return -rte_errno;
+ if (rte_intr_efds_index_set(intr_handle, count,
+ rxq_obj->fd))
+ return -rte_errno;
count++;
}
if (!count)
mlx5_rx_intr_vec_disable(dev);
else
- intr_handle->nb_efd = count;
+ if (rte_intr_nb_efd_set(intr_handle, count))
+ return -rte_errno;
return 0;
}
@@ -913,11 +919,11 @@ mlx5_rx_intr_vec_disable(struct rte_eth_dev *dev)
if (!dev->data->dev_conf.intr_conf.rxq)
return;
- if (!intr_handle->intr_vec)
+ if (rte_intr_vec_list_index_get(intr_handle, 0) < 0)
goto free;
for (i = 0; i != n; ++i) {
- if (intr_handle->intr_vec[i] == RTE_INTR_VEC_RXTX_OFFSET +
- RTE_MAX_RXTX_INTR_VEC_ID)
+ if (rte_intr_vec_list_index_get(intr_handle, i) ==
+ RTE_INTR_VEC_RXTX_OFFSET + RTE_MAX_RXTX_INTR_VEC_ID)
continue;
/**
* Need to access directly the queue to release the reference
@@ -927,10 +933,10 @@ mlx5_rx_intr_vec_disable(struct rte_eth_dev *dev)
}
free:
rte_intr_free_epoll_fd(intr_handle);
- if (intr_handle->intr_vec)
- mlx5_free(intr_handle->intr_vec);
- intr_handle->nb_efd = 0;
- intr_handle->intr_vec = NULL;
+
+ rte_intr_vec_list_free(intr_handle);
+
+ rte_intr_nb_efd_set(intr_handle, 0);
}
/**
diff --git a/drivers/net/mlx5/mlx5_trigger.c b/drivers/net/mlx5/mlx5_trigger.c
index 54173bfacb..cc91be926c 100644
--- a/drivers/net/mlx5/mlx5_trigger.c
+++ b/drivers/net/mlx5/mlx5_trigger.c
@@ -1129,7 +1129,7 @@ mlx5_dev_start(struct rte_eth_dev *dev)
dev->rx_pkt_burst = mlx5_select_rx_function(dev);
/* Enable datapath on secondary process. */
mlx5_mp_os_req_start_rxtx(dev);
- if (priv->sh->intr_handle.fd >= 0) {
+ if (rte_intr_fd_get(priv->sh->intr_handle) >= 0) {
priv->sh->port[priv->dev_port - 1].ih_port_id =
(uint32_t)dev->data->port_id;
} else {
@@ -1138,7 +1138,7 @@ mlx5_dev_start(struct rte_eth_dev *dev)
dev->data->dev_conf.intr_conf.lsc = 0;
dev->data->dev_conf.intr_conf.rmv = 0;
}
- if (priv->sh->intr_handle_devx.fd >= 0)
+ if (rte_intr_fd_get(priv->sh->intr_handle_devx) >= 0)
priv->sh->port[priv->dev_port - 1].devx_ih_port_id =
(uint32_t)dev->data->port_id;
return 0;
diff --git a/drivers/net/mlx5/mlx5_txpp.c b/drivers/net/mlx5/mlx5_txpp.c
index 2be7e71f89..68c9cf73fd 100644
--- a/drivers/net/mlx5/mlx5_txpp.c
+++ b/drivers/net/mlx5/mlx5_txpp.c
@@ -756,11 +756,12 @@ mlx5_txpp_interrupt_handler(void *cb_arg)
static void
mlx5_txpp_stop_service(struct mlx5_dev_ctx_shared *sh)
{
- if (!sh->txpp.intr_handle.fd)
+ if (!rte_intr_fd_get(sh->txpp.intr_handle))
return;
- mlx5_intr_callback_unregister(&sh->txpp.intr_handle,
+ mlx5_intr_callback_unregister(sh->txpp.intr_handle,
mlx5_txpp_interrupt_handler, sh);
- sh->txpp.intr_handle.fd = 0;
+ rte_intr_fd_set(sh->txpp.intr_handle, 0);
+ rte_intr_instance_free(sh->txpp.intr_handle);
}
/* Attach interrupt handler and fires first request to Rearm Queue. */
@@ -784,13 +785,21 @@ mlx5_txpp_start_service(struct mlx5_dev_ctx_shared *sh)
rte_errno = errno;
return -rte_errno;
}
- memset(&sh->txpp.intr_handle, 0, sizeof(sh->txpp.intr_handle));
+ sh->txpp.intr_handle = rte_intr_instance_alloc();
+ if (!sh->txpp.intr_handle) {
+ DRV_LOG(ERR, "Fail to allocate intr_handle");
+ return -ENOMEM;
+ }
fd = mlx5_os_get_devx_channel_fd(sh->txpp.echan);
- sh->txpp.intr_handle.fd = fd;
- sh->txpp.intr_handle.type = RTE_INTR_HANDLE_EXT;
- if (rte_intr_callback_register(&sh->txpp.intr_handle,
+ if (rte_intr_fd_set(sh->txpp.intr_handle, fd))
+ return -rte_errno;
+
+ if (rte_intr_type_set(sh->txpp.intr_handle, RTE_INTR_HANDLE_EXT))
+ return -rte_errno;
+
+ if (rte_intr_callback_register(sh->txpp.intr_handle,
mlx5_txpp_interrupt_handler, sh)) {
- sh->txpp.intr_handle.fd = 0;
+ rte_intr_fd_set(sh->txpp.intr_handle, 0);
DRV_LOG(ERR, "Failed to register CQE interrupt %d.", rte_errno);
return -rte_errno;
}
diff --git a/drivers/net/netvsc/hn_ethdev.c b/drivers/net/netvsc/hn_ethdev.c
index 9e2a405973..521c449429 100644
--- a/drivers/net/netvsc/hn_ethdev.c
+++ b/drivers/net/netvsc/hn_ethdev.c
@@ -133,9 +133,9 @@ eth_dev_vmbus_allocate(struct rte_vmbus_device *dev, size_t private_data_size)
eth_dev->device = &dev->device;
/* interrupt is simulated */
- dev->intr_handle.type = RTE_INTR_HANDLE_EXT;
+ rte_intr_type_set(dev->intr_handle, RTE_INTR_HANDLE_EXT);
eth_dev->data->dev_flags |= RTE_ETH_DEV_INTR_LSC;
- eth_dev->intr_handle = &dev->intr_handle;
+ eth_dev->intr_handle = dev->intr_handle;
return eth_dev;
}
diff --git a/drivers/net/nfp/nfp_common.c b/drivers/net/nfp/nfp_common.c
index 4395a09c59..460ad9408c 100644
--- a/drivers/net/nfp/nfp_common.c
+++ b/drivers/net/nfp/nfp_common.c
@@ -307,24 +307,21 @@ nfp_configure_rx_interrupt(struct rte_eth_dev *dev,
struct nfp_net_hw *hw;
int i;
- if (!intr_handle->intr_vec) {
- intr_handle->intr_vec =
- rte_zmalloc("intr_vec",
- dev->data->nb_rx_queues * sizeof(int), 0);
- if (!intr_handle->intr_vec) {
- PMD_INIT_LOG(ERR, "Failed to allocate %d rx_queues"
- " intr_vec", dev->data->nb_rx_queues);
- return -ENOMEM;
- }
+ if (rte_intr_vec_list_alloc(intr_handle, "intr_vec",
+ dev->data->nb_rx_queues)) {
+ PMD_INIT_LOG(ERR, "Failed to allocate %d rx_queues"
+ " intr_vec", dev->data->nb_rx_queues);
+ return -ENOMEM;
}
hw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private);
- if (intr_handle->type == RTE_INTR_HANDLE_UIO) {
+ if (rte_intr_type_get(intr_handle) == RTE_INTR_HANDLE_UIO) {
PMD_INIT_LOG(INFO, "VF: enabling RX interrupt with UIO");
/* UIO just supports one queue and no LSC*/
nn_cfg_writeb(hw, NFP_NET_CFG_RXR_VEC(0), 0);
- intr_handle->intr_vec[0] = 0;
+ if (rte_intr_vec_list_index_set(intr_handle, 0, 0))
+ return -1;
} else {
PMD_INIT_LOG(INFO, "VF: enabling RX interrupt with VFIO");
for (i = 0; i < dev->data->nb_rx_queues; i++) {
@@ -333,9 +330,12 @@ nfp_configure_rx_interrupt(struct rte_eth_dev *dev,
* efd interrupts
*/
nn_cfg_writeb(hw, NFP_NET_CFG_RXR_VEC(i), i + 1);
- intr_handle->intr_vec[i] = i + 1;
+ if (rte_intr_vec_list_index_set(intr_handle, i,
+ i + 1))
+ return -1;
PMD_INIT_LOG(DEBUG, "intr_vec[%d]= %d", i,
- intr_handle->intr_vec[i]);
+ rte_intr_vec_list_index_get(intr_handle,
+ i));
}
}
@@ -808,7 +808,8 @@ nfp_rx_queue_intr_enable(struct rte_eth_dev *dev, uint16_t queue_id)
hw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private);
pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- if (pci_dev->intr_handle.type != RTE_INTR_HANDLE_UIO)
+ if (rte_intr_type_get(pci_dev->intr_handle) !=
+ RTE_INTR_HANDLE_UIO)
base = 1;
/* Make sure all updates are written before un-masking */
@@ -828,7 +829,8 @@ nfp_rx_queue_intr_disable(struct rte_eth_dev *dev, uint16_t queue_id)
hw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private);
pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- if (pci_dev->intr_handle.type != RTE_INTR_HANDLE_UIO)
+ if (rte_intr_type_get(pci_dev->intr_handle) !=
+ RTE_INTR_HANDLE_UIO)
base = 1;
/* Make sure all updates are written before un-masking */
@@ -878,7 +880,7 @@ nfp_net_irq_unmask(struct rte_eth_dev *dev)
if (hw->ctrl & NFP_NET_CFG_CTRL_MSIXAUTO) {
/* If MSI-X auto-masking is used, clear the entry */
rte_wmb();
- rte_intr_ack(&pci_dev->intr_handle);
+ rte_intr_ack(pci_dev->intr_handle);
} else {
/* Make sure all updates are written before un-masking */
rte_wmb();
diff --git a/drivers/net/nfp/nfp_ethdev.c b/drivers/net/nfp/nfp_ethdev.c
index 1169ea77a8..fc33bb2ffa 100644
--- a/drivers/net/nfp/nfp_ethdev.c
+++ b/drivers/net/nfp/nfp_ethdev.c
@@ -82,7 +82,7 @@ static int
nfp_net_start(struct rte_eth_dev *dev)
{
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
uint32_t new_ctrl, update = 0;
struct nfp_net_hw *hw;
struct nfp_pf_dev *pf_dev;
@@ -109,12 +109,13 @@ nfp_net_start(struct rte_eth_dev *dev)
"with NFP multiport PF");
return -EINVAL;
}
- if (intr_handle->type == RTE_INTR_HANDLE_UIO) {
+ if (rte_intr_type_get(intr_handle) ==
+ RTE_INTR_HANDLE_UIO) {
/*
* Better not to share LSC with RX interrupts.
* Unregistering LSC interrupt handler
*/
- rte_intr_callback_unregister(&pci_dev->intr_handle,
+ rte_intr_callback_unregister(pci_dev->intr_handle,
nfp_net_dev_interrupt_handler, (void *)dev);
if (dev->data->nb_rx_queues > 1) {
@@ -333,10 +334,10 @@ nfp_net_close(struct rte_eth_dev *dev)
nfp_cpp_free(pf_dev->cpp);
rte_free(pf_dev);
- rte_intr_disable(&pci_dev->intr_handle);
+ rte_intr_disable(pci_dev->intr_handle);
/* unregister callback func from eal lib */
- rte_intr_callback_unregister(&pci_dev->intr_handle,
+ rte_intr_callback_unregister(pci_dev->intr_handle,
nfp_net_dev_interrupt_handler,
(void *)dev);
@@ -579,7 +580,7 @@ nfp_net_init(struct rte_eth_dev *eth_dev)
if (rte_eal_process_type() == RTE_PROC_PRIMARY) {
/* Registering LSC interrupt handler */
- rte_intr_callback_register(&pci_dev->intr_handle,
+ rte_intr_callback_register(pci_dev->intr_handle,
nfp_net_dev_interrupt_handler,
(void *)eth_dev);
/* Telling the firmware about the LSC interrupt entry */
diff --git a/drivers/net/nfp/nfp_ethdev_vf.c b/drivers/net/nfp/nfp_ethdev_vf.c
index 62cb3536e0..9c1db84733 100644
--- a/drivers/net/nfp/nfp_ethdev_vf.c
+++ b/drivers/net/nfp/nfp_ethdev_vf.c
@@ -51,7 +51,7 @@ static int
nfp_netvf_start(struct rte_eth_dev *dev)
{
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
uint32_t new_ctrl, update = 0;
struct nfp_net_hw *hw;
struct rte_eth_conf *dev_conf;
@@ -71,12 +71,13 @@ nfp_netvf_start(struct rte_eth_dev *dev)
/* check and configure queue intr-vector mapping */
if (dev->data->dev_conf.intr_conf.rxq != 0) {
- if (intr_handle->type == RTE_INTR_HANDLE_UIO) {
+ if (rte_intr_type_get(intr_handle) ==
+ RTE_INTR_HANDLE_UIO) {
/*
* Better not to share LSC with RX interrupts.
* Unregistering LSC interrupt handler
*/
- rte_intr_callback_unregister(&pci_dev->intr_handle,
+ rte_intr_callback_unregister(pci_dev->intr_handle,
nfp_net_dev_interrupt_handler, (void *)dev);
if (dev->data->nb_rx_queues > 1) {
@@ -225,10 +226,10 @@ nfp_netvf_close(struct rte_eth_dev *dev)
nfp_net_reset_rx_queue(this_rx_q);
}
- rte_intr_disable(&pci_dev->intr_handle);
+ rte_intr_disable(pci_dev->intr_handle);
/* unregister callback func from eal lib */
- rte_intr_callback_unregister(&pci_dev->intr_handle,
+ rte_intr_callback_unregister(pci_dev->intr_handle,
nfp_net_dev_interrupt_handler,
(void *)dev);
@@ -445,7 +446,7 @@ nfp_netvf_init(struct rte_eth_dev *eth_dev)
if (rte_eal_process_type() == RTE_PROC_PRIMARY) {
/* Registering LSC interrupt handler */
- rte_intr_callback_register(&pci_dev->intr_handle,
+ rte_intr_callback_register(pci_dev->intr_handle,
nfp_net_dev_interrupt_handler,
(void *)eth_dev);
/* Telling the firmware about the LSC interrupt entry */
diff --git a/drivers/net/ngbe/ngbe_ethdev.c b/drivers/net/ngbe/ngbe_ethdev.c
index 3b5c6615ad..4045fbbf00 100644
--- a/drivers/net/ngbe/ngbe_ethdev.c
+++ b/drivers/net/ngbe/ngbe_ethdev.c
@@ -129,7 +129,7 @@ eth_ngbe_dev_init(struct rte_eth_dev *eth_dev, void *init_params __rte_unused)
{
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
struct ngbe_hw *hw = ngbe_dev_hw(eth_dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
const struct rte_memzone *mz;
uint32_t ctrl_ext;
int err;
@@ -334,7 +334,7 @@ ngbe_dev_start(struct rte_eth_dev *dev)
{
struct ngbe_hw *hw = ngbe_dev_hw(dev);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
uint32_t intr_vector = 0;
int err;
bool link_up = false, negotiate = false;
@@ -372,11 +372,9 @@ ngbe_dev_start(struct rte_eth_dev *dev)
return -1;
}
- if (rte_intr_dp_is_en(intr_handle) && intr_handle->intr_vec == NULL) {
- intr_handle->intr_vec =
- rte_zmalloc("intr_vec",
- dev->data->nb_rx_queues * sizeof(int), 0);
- if (intr_handle->intr_vec == NULL) {
+ if (rte_intr_dp_is_en(intr_handle)) {
+ if (rte_intr_vec_list_alloc(intr_handle, "intr_vec",
+ dev->data->nb_rx_queues)) {
PMD_INIT_LOG(ERR,
"Failed to allocate %d rx_queues intr_vec",
dev->data->nb_rx_queues);
@@ -503,7 +501,7 @@ ngbe_dev_stop(struct rte_eth_dev *dev)
struct rte_eth_link link;
struct ngbe_hw *hw = ngbe_dev_hw(dev);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
if (hw->adapter_stopped)
return 0;
@@ -540,10 +538,7 @@ ngbe_dev_stop(struct rte_eth_dev *dev)
/* Clean datapath event and queue/vec mapping */
rte_intr_efd_disable(intr_handle);
- if (intr_handle->intr_vec != NULL) {
- rte_free(intr_handle->intr_vec);
- intr_handle->intr_vec = NULL;
- }
+ rte_intr_vec_list_free(intr_handle);
hw->adapter_stopped = true;
dev->data->dev_started = 0;
@@ -559,7 +554,7 @@ ngbe_dev_close(struct rte_eth_dev *dev)
{
struct ngbe_hw *hw = ngbe_dev_hw(dev);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
int retries = 0;
int ret;
@@ -1093,7 +1088,7 @@ static void
ngbe_configure_msix(struct rte_eth_dev *dev)
{
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
struct ngbe_hw *hw = ngbe_dev_hw(dev);
uint32_t queue_id, base = NGBE_MISC_VEC_ID;
uint32_t vec = NGBE_MISC_VEC_ID;
@@ -1128,8 +1123,10 @@ ngbe_configure_msix(struct rte_eth_dev *dev)
queue_id++) {
/* by default, 1:1 mapping */
ngbe_set_ivar_map(hw, 0, queue_id, vec);
- intr_handle->intr_vec[queue_id] = vec;
- if (vec < base + intr_handle->nb_efd - 1)
+ rte_intr_vec_list_index_set(intr_handle,
+ queue_id, vec);
+ if (vec < base + rte_intr_nb_efd_get(intr_handle)
+ - 1)
vec++;
}
diff --git a/drivers/net/octeontx2/otx2_ethdev_irq.c b/drivers/net/octeontx2/otx2_ethdev_irq.c
index b121488faf..cc573bb2e8 100644
--- a/drivers/net/octeontx2/otx2_ethdev_irq.c
+++ b/drivers/net/octeontx2/otx2_ethdev_irq.c
@@ -34,7 +34,7 @@ static int
nix_lf_register_err_irq(struct rte_eth_dev *eth_dev)
{
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
- struct rte_intr_handle *handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *handle = pci_dev->intr_handle;
struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
int rc, vec;
@@ -54,7 +54,7 @@ static void
nix_lf_unregister_err_irq(struct rte_eth_dev *eth_dev)
{
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
- struct rte_intr_handle *handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *handle = pci_dev->intr_handle;
struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
int vec;
@@ -90,7 +90,7 @@ static int
nix_lf_register_ras_irq(struct rte_eth_dev *eth_dev)
{
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
- struct rte_intr_handle *handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *handle = pci_dev->intr_handle;
struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
int rc, vec;
@@ -110,7 +110,7 @@ static void
nix_lf_unregister_ras_irq(struct rte_eth_dev *eth_dev)
{
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
- struct rte_intr_handle *handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *handle = pci_dev->intr_handle;
struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
int vec;
@@ -263,7 +263,7 @@ int
oxt2_nix_register_queue_irqs(struct rte_eth_dev *eth_dev)
{
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
- struct rte_intr_handle *handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *handle = pci_dev->intr_handle;
struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
int vec, q, sqs, rqs, qs, rc = 0;
@@ -308,7 +308,7 @@ void
oxt2_nix_unregister_queue_irqs(struct rte_eth_dev *eth_dev)
{
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
- struct rte_intr_handle *handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *handle = pci_dev->intr_handle;
struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
int vec, q;
@@ -332,7 +332,7 @@ int
oxt2_nix_register_cq_irqs(struct rte_eth_dev *eth_dev)
{
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
- struct rte_intr_handle *handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *handle = pci_dev->intr_handle;
struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
uint8_t rc = 0, vec, q;
@@ -362,20 +362,19 @@ oxt2_nix_register_cq_irqs(struct rte_eth_dev *eth_dev)
return rc;
}
- if (!handle->intr_vec) {
- handle->intr_vec = rte_zmalloc("intr_vec",
- dev->configured_cints *
- sizeof(int), 0);
- if (!handle->intr_vec) {
- otx2_err("Failed to allocate %d rx intr_vec",
- dev->configured_cints);
- return -ENOMEM;
- }
+ rc = rte_intr_vec_list_alloc(handle, "intr_vec",
+ dev->configured_cints);
+ if (rc) {
+ otx2_err("Fail to allocate intr vec list, "
+ "rc=%d", rc);
+ return rc;
}
/* VFIO vector zero is resereved for misc interrupt so
* doing required adjustment. (b13bfab4cd)
*/
- handle->intr_vec[q] = RTE_INTR_VEC_RXTX_OFFSET + vec;
+ if (rte_intr_vec_list_index_set(handle, q,
+ RTE_INTR_VEC_RXTX_OFFSET + vec))
+ return -1;
/* Configure CQE interrupt coalescing parameters */
otx2_write64(((CQ_CQE_THRESH_DEFAULT) |
@@ -395,7 +394,7 @@ void
oxt2_nix_unregister_cq_irqs(struct rte_eth_dev *eth_dev)
{
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
- struct rte_intr_handle *handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *handle = pci_dev->intr_handle;
struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
int vec, q;
diff --git a/drivers/net/qede/qede_ethdev.c b/drivers/net/qede/qede_ethdev.c
index fd8c62a182..104a26266d 100644
--- a/drivers/net/qede/qede_ethdev.c
+++ b/drivers/net/qede/qede_ethdev.c
@@ -1576,17 +1576,17 @@ static int qede_dev_close(struct rte_eth_dev *eth_dev)
qdev->ops->common->slowpath_stop(edev);
qdev->ops->common->remove(edev);
- rte_intr_disable(&pci_dev->intr_handle);
+ rte_intr_disable(pci_dev->intr_handle);
- switch (pci_dev->intr_handle.type) {
+ switch (rte_intr_type_get(pci_dev->intr_handle)) {
case RTE_INTR_HANDLE_UIO_INTX:
case RTE_INTR_HANDLE_VFIO_LEGACY:
- rte_intr_callback_unregister(&pci_dev->intr_handle,
+ rte_intr_callback_unregister(pci_dev->intr_handle,
qede_interrupt_handler_intx,
(void *)eth_dev);
break;
default:
- rte_intr_callback_unregister(&pci_dev->intr_handle,
+ rte_intr_callback_unregister(pci_dev->intr_handle,
qede_interrupt_handler,
(void *)eth_dev);
}
@@ -2581,22 +2581,22 @@ static int qede_common_dev_init(struct rte_eth_dev *eth_dev, bool is_vf)
}
qede_update_pf_params(edev);
- switch (pci_dev->intr_handle.type) {
+ switch (rte_intr_type_get(pci_dev->intr_handle)) {
case RTE_INTR_HANDLE_UIO_INTX:
case RTE_INTR_HANDLE_VFIO_LEGACY:
int_mode = ECORE_INT_MODE_INTA;
- rte_intr_callback_register(&pci_dev->intr_handle,
+ rte_intr_callback_register(pci_dev->intr_handle,
qede_interrupt_handler_intx,
(void *)eth_dev);
break;
default:
int_mode = ECORE_INT_MODE_MSIX;
- rte_intr_callback_register(&pci_dev->intr_handle,
+ rte_intr_callback_register(pci_dev->intr_handle,
qede_interrupt_handler,
(void *)eth_dev);
}
- if (rte_intr_enable(&pci_dev->intr_handle)) {
+ if (rte_intr_enable(pci_dev->intr_handle)) {
DP_ERR(edev, "rte_intr_enable() failed\n");
rc = -ENODEV;
goto err;
diff --git a/drivers/net/sfc/sfc_intr.c b/drivers/net/sfc/sfc_intr.c
index c2298ed23c..b31965d1ff 100644
--- a/drivers/net/sfc/sfc_intr.c
+++ b/drivers/net/sfc/sfc_intr.c
@@ -79,7 +79,7 @@ sfc_intr_line_handler(void *cb_arg)
if (qmask & (1 << sa->mgmt_evq_index))
sfc_intr_handle_mgmt_evq(sa);
- if (rte_intr_ack(&pci_dev->intr_handle) != 0)
+ if (rte_intr_ack(pci_dev->intr_handle) != 0)
sfc_err(sa, "cannot reenable interrupts");
sfc_log_init(sa, "done");
@@ -123,7 +123,7 @@ sfc_intr_message_handler(void *cb_arg)
sfc_intr_handle_mgmt_evq(sa);
- if (rte_intr_ack(&pci_dev->intr_handle) != 0)
+ if (rte_intr_ack(pci_dev->intr_handle) != 0)
sfc_err(sa, "cannot reenable interrupts");
sfc_log_init(sa, "done");
@@ -159,7 +159,7 @@ sfc_intr_start(struct sfc_adapter *sa)
goto fail_intr_init;
pci_dev = RTE_ETH_DEV_TO_PCI(sa->eth_dev);
- intr_handle = &pci_dev->intr_handle;
+ intr_handle = pci_dev->intr_handle;
if (intr->handler != NULL) {
if (intr->rxq_intr && rte_intr_cap_multiple(intr_handle)) {
@@ -171,16 +171,15 @@ sfc_intr_start(struct sfc_adapter *sa)
goto fail_rte_intr_efd_enable;
}
if (rte_intr_dp_is_en(intr_handle)) {
- intr_handle->intr_vec =
- rte_calloc("intr_vec",
- sa->eth_dev->data->nb_rx_queues, sizeof(int),
- 0);
- if (intr_handle->intr_vec == NULL) {
+ if (rte_intr_vec_list_alloc(intr_handle,
+ "intr_vec",
+ sa->eth_dev->data->nb_rx_queues)) {
sfc_err(sa,
"Failed to allocate %d rx_queues intr_vec",
sa->eth_dev->data->nb_rx_queues);
goto fail_intr_vector_alloc;
}
+
}
sfc_log_init(sa, "rte_intr_callback_register");
@@ -214,16 +213,17 @@ sfc_intr_start(struct sfc_adapter *sa)
efx_intr_enable(sa->nic);
}
- sfc_log_init(sa, "done type=%u max_intr=%d nb_efd=%u vec=%p",
- intr_handle->type, intr_handle->max_intr,
- intr_handle->nb_efd, intr_handle->intr_vec);
+ sfc_log_init(sa, "done type=%u max_intr=%d nb_efd=%u",
+ rte_intr_type_get(intr_handle),
+ rte_intr_max_intr_get(intr_handle),
+ rte_intr_nb_efd_get(intr_handle));
return 0;
fail_rte_intr_enable:
rte_intr_callback_unregister(intr_handle, intr->handler, (void *)sa);
fail_rte_intr_cb_reg:
- rte_free(intr_handle->intr_vec);
+ rte_intr_vec_list_free(intr_handle);
fail_intr_vector_alloc:
rte_intr_efd_disable(intr_handle);
@@ -250,9 +250,9 @@ sfc_intr_stop(struct sfc_adapter *sa)
efx_intr_disable(sa->nic);
- intr_handle = &pci_dev->intr_handle;
+ intr_handle = pci_dev->intr_handle;
- rte_free(intr_handle->intr_vec);
+ rte_intr_vec_list_free(intr_handle);
rte_intr_efd_disable(intr_handle);
if (rte_intr_disable(intr_handle) != 0)
@@ -322,7 +322,7 @@ sfc_intr_attach(struct sfc_adapter *sa)
sfc_log_init(sa, "entry");
- switch (pci_dev->intr_handle.type) {
+ switch (rte_intr_type_get(pci_dev->intr_handle)) {
#ifdef RTE_EXEC_ENV_LINUX
case RTE_INTR_HANDLE_UIO_INTX:
case RTE_INTR_HANDLE_VFIO_LEGACY:
diff --git a/drivers/net/tap/rte_eth_tap.c b/drivers/net/tap/rte_eth_tap.c
index 046f17669d..2ecc2e1531 100644
--- a/drivers/net/tap/rte_eth_tap.c
+++ b/drivers/net/tap/rte_eth_tap.c
@@ -1668,7 +1668,8 @@ tap_dev_intr_handler(void *cb_arg)
struct rte_eth_dev *dev = cb_arg;
struct pmd_internals *pmd = dev->data->dev_private;
- tap_nl_recv(pmd->intr_handle.fd, tap_nl_msg_handler, dev);
+ tap_nl_recv(rte_intr_fd_get(pmd->intr_handle),
+ tap_nl_msg_handler, dev);
}
static int
@@ -1679,22 +1680,23 @@ tap_lsc_intr_handle_set(struct rte_eth_dev *dev, int set)
/* In any case, disable interrupt if the conf is no longer there. */
if (!dev->data->dev_conf.intr_conf.lsc) {
- if (pmd->intr_handle.fd != -1) {
+ if (rte_intr_fd_get(pmd->intr_handle) != -1)
goto clean;
- }
+
return 0;
}
if (set) {
- pmd->intr_handle.fd = tap_nl_init(RTMGRP_LINK);
- if (unlikely(pmd->intr_handle.fd == -1))
+ rte_intr_fd_set(pmd->intr_handle,
+ tap_nl_init(RTMGRP_LINK));
+ if (unlikely(rte_intr_fd_get(pmd->intr_handle) == -1))
return -EBADF;
return rte_intr_callback_register(
- &pmd->intr_handle, tap_dev_intr_handler, dev);
+ pmd->intr_handle, tap_dev_intr_handler, dev);
}
clean:
do {
- ret = rte_intr_callback_unregister(&pmd->intr_handle,
+ ret = rte_intr_callback_unregister(pmd->intr_handle,
tap_dev_intr_handler, dev);
if (ret >= 0) {
break;
@@ -1707,8 +1709,8 @@ tap_lsc_intr_handle_set(struct rte_eth_dev *dev, int set)
}
} while (true);
- tap_nl_final(pmd->intr_handle.fd);
- pmd->intr_handle.fd = -1;
+ tap_nl_final(rte_intr_fd_get(pmd->intr_handle));
+ rte_intr_fd_set(pmd->intr_handle, -1);
return 0;
}
@@ -1923,6 +1925,13 @@ eth_dev_tap_create(struct rte_vdev_device *vdev, const char *tap_name,
goto error_exit;
}
+ /* Allocate interrupt instance */
+ pmd->intr_handle = rte_intr_instance_alloc();
+ if (!pmd->intr_handle) {
+ TAP_LOG(ERR, "Failed to allocate intr handle");
+ goto error_exit;
+ }
+
/* Setup some default values */
data = dev->data;
data->dev_private = pmd;
@@ -1940,9 +1949,9 @@ eth_dev_tap_create(struct rte_vdev_device *vdev, const char *tap_name,
dev->rx_pkt_burst = pmd_rx_burst;
dev->tx_pkt_burst = pmd_tx_burst;
- pmd->intr_handle.type = RTE_INTR_HANDLE_EXT;
- pmd->intr_handle.fd = -1;
- dev->intr_handle = &pmd->intr_handle;
+ rte_intr_type_set(pmd->intr_handle, RTE_INTR_HANDLE_EXT);
+ rte_intr_fd_set(pmd->intr_handle, -1);
+ dev->intr_handle = pmd->intr_handle;
/* Presetup the fds to -1 as being not valid */
for (i = 0; i < RTE_PMD_TAP_MAX_QUEUES; i++) {
@@ -2093,6 +2102,8 @@ eth_dev_tap_create(struct rte_vdev_device *vdev, const char *tap_name,
/* mac_addrs must not be freed alone because part of dev_private */
dev->data->mac_addrs = NULL;
rte_eth_dev_release_port(dev);
+ if (pmd->intr_handle)
+ rte_intr_instance_free(pmd->intr_handle);
error_exit_nodev:
TAP_LOG(ERR, "%s Unable to initialize %s",
diff --git a/drivers/net/tap/rte_eth_tap.h b/drivers/net/tap/rte_eth_tap.h
index a98ea11a33..996021e424 100644
--- a/drivers/net/tap/rte_eth_tap.h
+++ b/drivers/net/tap/rte_eth_tap.h
@@ -89,7 +89,7 @@ struct pmd_internals {
LIST_HEAD(tap_implicit_flows, rte_flow) implicit_flows;
struct rx_queue rxq[RTE_PMD_TAP_MAX_QUEUES]; /* List of RX queues */
struct tx_queue txq[RTE_PMD_TAP_MAX_QUEUES]; /* List of TX queues */
- struct rte_intr_handle intr_handle; /* LSC interrupt handle. */
+ struct rte_intr_handle *intr_handle; /* LSC interrupt handle. */
int ka_fd; /* keep-alive file descriptor */
struct rte_mempool *gso_ctx_mp; /* Mempool for GSO packets */
};
diff --git a/drivers/net/tap/tap_intr.c b/drivers/net/tap/tap_intr.c
index 1cacc15d9f..ded50ed653 100644
--- a/drivers/net/tap/tap_intr.c
+++ b/drivers/net/tap/tap_intr.c
@@ -29,12 +29,13 @@ static void
tap_rx_intr_vec_uninstall(struct rte_eth_dev *dev)
{
struct pmd_internals *pmd = dev->data->dev_private;
- struct rte_intr_handle *intr_handle = &pmd->intr_handle;
+ struct rte_intr_handle *intr_handle = pmd->intr_handle;
rte_intr_free_epoll_fd(intr_handle);
- free(intr_handle->intr_vec);
- intr_handle->intr_vec = NULL;
- intr_handle->nb_efd = 0;
+ rte_intr_vec_list_free(intr_handle);
+ rte_intr_nb_efd_set(intr_handle, 0);
+
+ rte_intr_instance_free(intr_handle);
}
/**
@@ -52,15 +53,15 @@ tap_rx_intr_vec_install(struct rte_eth_dev *dev)
struct pmd_internals *pmd = dev->data->dev_private;
struct pmd_process_private *process_private = dev->process_private;
unsigned int rxqs_n = pmd->dev->data->nb_rx_queues;
- struct rte_intr_handle *intr_handle = &pmd->intr_handle;
+ struct rte_intr_handle *intr_handle = pmd->intr_handle;
unsigned int n = RTE_MIN(rxqs_n, (uint32_t)RTE_MAX_RXTX_INTR_VEC_ID);
unsigned int i;
unsigned int count = 0;
if (!dev->data->dev_conf.intr_conf.rxq)
return 0;
- intr_handle->intr_vec = malloc(sizeof(int) * rxqs_n);
- if (intr_handle->intr_vec == NULL) {
+
+ if (rte_intr_vec_list_alloc(intr_handle, NULL, rxqs_n)) {
rte_errno = ENOMEM;
TAP_LOG(ERR,
"failed to allocate memory for interrupt vector,"
@@ -73,19 +74,24 @@ tap_rx_intr_vec_install(struct rte_eth_dev *dev)
/* Skip queues that cannot request interrupts. */
if (!rxq || process_private->rxq_fds[i] == -1) {
/* Use invalid intr_vec[] index to disable entry. */
- intr_handle->intr_vec[i] =
- RTE_INTR_VEC_RXTX_OFFSET +
- RTE_MAX_RXTX_INTR_VEC_ID;
+ if (rte_intr_vec_list_index_set(intr_handle, i,
+ RTE_INTR_VEC_RXTX_OFFSET + RTE_MAX_RXTX_INTR_VEC_ID))
+ return -rte_errno;
continue;
}
- intr_handle->intr_vec[i] = RTE_INTR_VEC_RXTX_OFFSET + count;
- intr_handle->efds[count] = process_private->rxq_fds[i];
+ if (rte_intr_vec_list_index_set(intr_handle, i,
+ RTE_INTR_VEC_RXTX_OFFSET + count))
+ return -rte_errno;
+ if (rte_intr_efds_index_set(intr_handle, count,
+ process_private->rxq_fds[i]))
+ return -rte_errno;
count++;
}
if (!count)
tap_rx_intr_vec_uninstall(dev);
else
- intr_handle->nb_efd = count;
+ if (rte_intr_nb_efd_set(intr_handle, count))
+ return -rte_errno;
return 0;
}
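
For reference only, not part of the patch: the per-queue vector bookkeeping above recurs across the PMDs in this patch. A condensed, hypothetical helper showing the same sequence, using only accessors already used above; rxq_fds stands in for whatever per-queue fd source a driver has.

/* Assumes <rte_interrupts.h> and <rte_errno.h>. */
static int
drv_rx_intr_vec_setup(struct rte_intr_handle *ih, const int *rxq_fds,
		      uint16_t nb_rxq)
{
	uint16_t i, count = 0;

	if (rte_intr_vec_list_alloc(ih, "intr_vec", nb_rxq))
		return -rte_errno;
	for (i = 0; i < nb_rxq; i++) {
		if (rxq_fds[i] < 0) {
			/* Out-of-range vector value keeps the entry disabled. */
			if (rte_intr_vec_list_index_set(ih, i,
					RTE_INTR_VEC_RXTX_OFFSET +
					RTE_MAX_RXTX_INTR_VEC_ID))
				return -rte_errno;
			continue;
		}
		if (rte_intr_vec_list_index_set(ih, i,
				RTE_INTR_VEC_RXTX_OFFSET + count) ||
		    rte_intr_efds_index_set(ih, count, rxq_fds[i]))
			return -rte_errno;
		count++;
	}
	if (rte_intr_nb_efd_set(ih, count))
		return -rte_errno;
	return 0;
}
/* Cleanup mirrors tap_rx_intr_vec_uninstall() above:
 * rte_intr_free_epoll_fd(ih); rte_intr_vec_list_free(ih);
 * rte_intr_nb_efd_set(ih, 0);
 */
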
diff --git a/drivers/net/thunderx/nicvf_ethdev.c b/drivers/net/thunderx/nicvf_ethdev.c
index 5502f1ee69..2dd27ab043 100644
--- a/drivers/net/thunderx/nicvf_ethdev.c
+++ b/drivers/net/thunderx/nicvf_ethdev.c
@@ -1876,6 +1876,9 @@ nicvf_dev_close(struct rte_eth_dev *dev)
nicvf_periodic_alarm_stop(nicvf_vf_interrupt, nic->snicvf[i]);
}
+ if (nic->intr_handle)
+ rte_intr_instance_free(nic->intr_handle);
+
return 0;
}
@@ -2175,6 +2178,14 @@ nicvf_eth_dev_init(struct rte_eth_dev *eth_dev)
goto fail;
}
+ /* Allocate interrupt instance */
+ nic->intr_handle = rte_intr_instance_alloc();
+ if (!nic->intr_handle) {
+ PMD_INIT_LOG(ERR, "Failed to allocate intr handle");
+ ret = -ENODEV;
+ goto fail;
+ }
+
nicvf_disable_all_interrupts(nic);
ret = nicvf_periodic_alarm_start(nicvf_interrupt, eth_dev);
diff --git a/drivers/net/thunderx/nicvf_struct.h b/drivers/net/thunderx/nicvf_struct.h
index 0ca207d0dd..c7ea13313e 100644
--- a/drivers/net/thunderx/nicvf_struct.h
+++ b/drivers/net/thunderx/nicvf_struct.h
@@ -100,7 +100,7 @@ struct nicvf {
uint16_t subsystem_vendor_id;
struct nicvf_rbdr *rbdr;
struct nicvf_rss_reta_info rss_info;
- struct rte_intr_handle intr_handle;
+ struct rte_intr_handle *intr_handle;
uint8_t cpi_alg;
uint16_t mtu;
int skip_bytes;
diff --git a/drivers/net/txgbe/txgbe_ethdev.c b/drivers/net/txgbe/txgbe_ethdev.c
index b267da462b..3b1572e485 100644
--- a/drivers/net/txgbe/txgbe_ethdev.c
+++ b/drivers/net/txgbe/txgbe_ethdev.c
@@ -547,7 +547,7 @@ eth_txgbe_dev_init(struct rte_eth_dev *eth_dev, void *init_params __rte_unused)
struct txgbe_dcb_config *dcb_config = TXGBE_DEV_DCB_CONFIG(eth_dev);
struct txgbe_filter_info *filter_info = TXGBE_DEV_FILTER(eth_dev);
struct txgbe_bw_conf *bw_conf = TXGBE_DEV_BW_CONF(eth_dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
const struct rte_memzone *mz;
uint32_t ctrl_ext;
uint16_t csum;
@@ -1619,7 +1619,7 @@ txgbe_dev_start(struct rte_eth_dev *dev)
struct txgbe_hw_stats *hw_stats = TXGBE_DEV_STATS(dev);
struct txgbe_vf_info *vfinfo = *TXGBE_DEV_VFDATA(dev);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
uint32_t intr_vector = 0;
int err;
bool link_up = false, negotiate = 0;
@@ -1680,17 +1680,14 @@ txgbe_dev_start(struct rte_eth_dev *dev)
return -1;
}
- if (rte_intr_dp_is_en(intr_handle) && !intr_handle->intr_vec) {
- intr_handle->intr_vec =
- rte_zmalloc("intr_vec",
- dev->data->nb_rx_queues * sizeof(int), 0);
- if (intr_handle->intr_vec == NULL) {
+ if (rte_intr_dp_is_en(intr_handle)) {
+ if (rte_intr_vec_list_alloc(intr_handle, "intr_vec",
+ dev->data->nb_rx_queues)) {
PMD_INIT_LOG(ERR, "Failed to allocate %d rx_queues"
" intr_vec", dev->data->nb_rx_queues);
return -ENOMEM;
}
}
-
/* confiugre msix for sleep until rx interrupt */
txgbe_configure_msix(dev);
@@ -1871,7 +1868,7 @@ txgbe_dev_stop(struct rte_eth_dev *dev)
struct txgbe_hw *hw = TXGBE_DEV_HW(dev);
struct txgbe_vf_info *vfinfo = *TXGBE_DEV_VFDATA(dev);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
int vf;
struct txgbe_tm_conf *tm_conf = TXGBE_DEV_TM_CONF(dev);
@@ -1921,10 +1918,7 @@ txgbe_dev_stop(struct rte_eth_dev *dev)
/* Clean datapath event and queue/vec mapping */
rte_intr_efd_disable(intr_handle);
- if (intr_handle->intr_vec != NULL) {
- rte_free(intr_handle->intr_vec);
- intr_handle->intr_vec = NULL;
- }
+ rte_intr_vec_list_free(intr_handle);
/* reset hierarchy commit */
tm_conf->committed = false;
@@ -1987,7 +1981,7 @@ txgbe_dev_close(struct rte_eth_dev *dev)
{
struct txgbe_hw *hw = TXGBE_DEV_HW(dev);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
int retries = 0;
int ret;
@@ -3107,7 +3101,7 @@ txgbe_dev_interrupt_delayed_handler(void *param)
{
struct rte_eth_dev *dev = (struct rte_eth_dev *)param;
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
struct txgbe_interrupt *intr = TXGBE_DEV_INTR(dev);
struct txgbe_hw *hw = TXGBE_DEV_HW(dev);
uint32_t eicr;
@@ -3640,7 +3634,7 @@ static int
txgbe_dev_rx_queue_intr_enable(struct rte_eth_dev *dev, uint16_t queue_id)
{
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
uint32_t mask;
struct txgbe_hw *hw = TXGBE_DEV_HW(dev);
@@ -3722,7 +3716,7 @@ static void
txgbe_configure_msix(struct rte_eth_dev *dev)
{
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
struct txgbe_hw *hw = TXGBE_DEV_HW(dev);
uint32_t queue_id, base = TXGBE_MISC_VEC_ID;
uint32_t vec = TXGBE_MISC_VEC_ID;
@@ -3756,8 +3750,10 @@ txgbe_configure_msix(struct rte_eth_dev *dev)
queue_id++) {
/* by default, 1:1 mapping */
txgbe_set_ivar_map(hw, 0, queue_id, vec);
- intr_handle->intr_vec[queue_id] = vec;
- if (vec < base + intr_handle->nb_efd - 1)
+ rte_intr_vec_list_index_set(intr_handle,
+ queue_id, vec);
+ if (vec < base + rte_intr_nb_efd_get(intr_handle)
+ - 1)
vec++;
}
diff --git a/drivers/net/txgbe/txgbe_ethdev_vf.c b/drivers/net/txgbe/txgbe_ethdev_vf.c
index 896da8a887..373fcf167f 100644
--- a/drivers/net/txgbe/txgbe_ethdev_vf.c
+++ b/drivers/net/txgbe/txgbe_ethdev_vf.c
@@ -166,7 +166,7 @@ eth_txgbevf_dev_init(struct rte_eth_dev *eth_dev)
int err;
uint32_t tc, tcs;
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
struct txgbe_hw *hw = TXGBE_DEV_HW(eth_dev);
struct txgbe_vfta *shadow_vfta = TXGBE_DEV_VFTA(eth_dev);
struct txgbe_hwstrip *hwstrip = TXGBE_DEV_HWSTRIP(eth_dev);
@@ -608,7 +608,7 @@ txgbevf_dev_start(struct rte_eth_dev *dev)
struct txgbe_hw *hw = TXGBE_DEV_HW(dev);
uint32_t intr_vector = 0;
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
int err, mask = 0;
@@ -669,11 +669,9 @@ txgbevf_dev_start(struct rte_eth_dev *dev)
return -1;
}
- if (rte_intr_dp_is_en(intr_handle) && !intr_handle->intr_vec) {
- intr_handle->intr_vec =
- rte_zmalloc("intr_vec",
- dev->data->nb_rx_queues * sizeof(int), 0);
- if (intr_handle->intr_vec == NULL) {
+ if (rte_intr_dp_is_en(intr_handle)) {
+ if (rte_intr_vec_list_alloc(intr_handle, "intr_vec",
+ dev->data->nb_rx_queues)) {
PMD_INIT_LOG(ERR, "Failed to allocate %d rx_queues"
" intr_vec", dev->data->nb_rx_queues);
return -ENOMEM;
@@ -712,7 +710,7 @@ txgbevf_dev_stop(struct rte_eth_dev *dev)
struct txgbe_hw *hw = TXGBE_DEV_HW(dev);
struct txgbe_adapter *adapter = TXGBE_DEV_ADAPTER(dev);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
if (hw->adapter_stopped)
return 0;
@@ -739,10 +737,7 @@ txgbevf_dev_stop(struct rte_eth_dev *dev)
/* Clean datapath event and queue/vec mapping */
rte_intr_efd_disable(intr_handle);
- if (intr_handle->intr_vec != NULL) {
- rte_free(intr_handle->intr_vec);
- intr_handle->intr_vec = NULL;
- }
+ rte_intr_vec_list_free(intr_handle);
adapter->rss_reta_updated = 0;
hw->dev_start = false;
@@ -755,7 +750,7 @@ txgbevf_dev_close(struct rte_eth_dev *dev)
{
struct txgbe_hw *hw = TXGBE_DEV_HW(dev);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
int ret;
PMD_INIT_FUNC_TRACE();
@@ -916,7 +911,7 @@ static int
txgbevf_dev_rx_queue_intr_enable(struct rte_eth_dev *dev, uint16_t queue_id)
{
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
struct txgbe_interrupt *intr = TXGBE_DEV_INTR(dev);
struct txgbe_hw *hw = TXGBE_DEV_HW(dev);
uint32_t vec = TXGBE_MISC_VEC_ID;
@@ -938,7 +933,7 @@ txgbevf_dev_rx_queue_intr_disable(struct rte_eth_dev *dev, uint16_t queue_id)
struct txgbe_interrupt *intr = TXGBE_DEV_INTR(dev);
struct txgbe_hw *hw = TXGBE_DEV_HW(dev);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
uint32_t vec = TXGBE_MISC_VEC_ID;
if (rte_intr_allow_others(intr_handle))
@@ -978,7 +973,7 @@ static void
txgbevf_configure_msix(struct rte_eth_dev *dev)
{
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
struct txgbe_hw *hw = TXGBE_DEV_HW(dev);
uint32_t q_idx;
uint32_t vector_idx = TXGBE_MISC_VEC_ID;
@@ -1004,8 +999,10 @@ txgbevf_configure_msix(struct rte_eth_dev *dev)
* as TXGBE_VF_MAXMSIVECOTR = 1
*/
txgbevf_set_ivar_map(hw, 0, q_idx, vector_idx);
- intr_handle->intr_vec[q_idx] = vector_idx;
- if (vector_idx < base + intr_handle->nb_efd - 1)
+ rte_intr_vec_list_index_set(intr_handle, q_idx,
+ vector_idx);
+ if (vector_idx < base + rte_intr_nb_efd_get(intr_handle)
+ - 1)
vector_idx++;
}
diff --git a/drivers/net/vhost/rte_eth_vhost.c b/drivers/net/vhost/rte_eth_vhost.c
index 2e24e5f7ff..8d01ec65dd 100644
--- a/drivers/net/vhost/rte_eth_vhost.c
+++ b/drivers/net/vhost/rte_eth_vhost.c
@@ -529,40 +529,43 @@ static int
eth_vhost_update_intr(struct rte_eth_dev *eth_dev, uint16_t rxq_idx)
{
struct rte_intr_handle *handle = eth_dev->intr_handle;
- struct rte_epoll_event rev;
+ struct rte_epoll_event rev, *elist;
int epfd, ret;
if (!handle)
return 0;
- if (handle->efds[rxq_idx] == handle->elist[rxq_idx].fd)
+ elist = rte_intr_elist_index_get(handle, rxq_idx);
+ if (rte_intr_efds_index_get(handle, rxq_idx) == elist->fd)
return 0;
VHOST_LOG(INFO, "kickfd for rxq-%d was changed, updating handler.\n",
rxq_idx);
- if (handle->elist[rxq_idx].fd != -1)
+ if (elist->fd != -1)
VHOST_LOG(ERR, "Unexpected previous kickfd value (Got %d, expected -1).\n",
- handle->elist[rxq_idx].fd);
+ elist->fd);
/*
* First remove invalid epoll event, and then install
* the new one. May be solved with a proper API in the
* future.
*/
- epfd = handle->elist[rxq_idx].epfd;
- rev = handle->elist[rxq_idx];
+ epfd = elist->epfd;
+ rev = *elist;
ret = rte_epoll_ctl(epfd, EPOLL_CTL_DEL, rev.fd,
- &handle->elist[rxq_idx]);
+ elist);
if (ret) {
VHOST_LOG(ERR, "Delete epoll event failed.\n");
return ret;
}
- rev.fd = handle->efds[rxq_idx];
- handle->elist[rxq_idx] = rev;
- ret = rte_epoll_ctl(epfd, EPOLL_CTL_ADD, rev.fd,
- &handle->elist[rxq_idx]);
+ rev.fd = rte_intr_efds_index_get(handle, rxq_idx);
+ if (rte_intr_elist_index_set(handle, rxq_idx, rev))
+ return -rte_errno;
+
+ elist = rte_intr_elist_index_get(handle, rxq_idx);
+ ret = rte_epoll_ctl(epfd, EPOLL_CTL_ADD, rev.fd, elist);
if (ret) {
VHOST_LOG(ERR, "Add epoll event failed.\n");
return ret;
@@ -641,9 +644,9 @@ eth_vhost_uninstall_intr(struct rte_eth_dev *dev)
struct rte_intr_handle *intr_handle = dev->intr_handle;
if (intr_handle) {
- if (intr_handle->intr_vec)
- free(intr_handle->intr_vec);
- free(intr_handle);
+ rte_intr_vec_list_free(intr_handle);
+
+ rte_intr_instance_free(intr_handle);
}
dev->intr_handle = NULL;
@@ -662,29 +665,30 @@ eth_vhost_install_intr(struct rte_eth_dev *dev)
if (dev->intr_handle)
eth_vhost_uninstall_intr(dev);
- dev->intr_handle = malloc(sizeof(*dev->intr_handle));
+ dev->intr_handle = rte_intr_instance_alloc();
if (!dev->intr_handle) {
VHOST_LOG(ERR, "Fail to allocate intr_handle\n");
return -ENOMEM;
}
- memset(dev->intr_handle, 0, sizeof(*dev->intr_handle));
-
- dev->intr_handle->efd_counter_size = sizeof(uint64_t);
+ if (rte_intr_efd_counter_size_set(dev->intr_handle,
+ sizeof(uint64_t)))
+ return -rte_errno;
- dev->intr_handle->intr_vec =
- malloc(nb_rxq * sizeof(dev->intr_handle->intr_vec[0]));
-
- if (!dev->intr_handle->intr_vec) {
+ if (rte_intr_vec_list_alloc(dev->intr_handle, NULL, nb_rxq)) {
VHOST_LOG(ERR,
"Failed to allocate memory for interrupt vector\n");
- free(dev->intr_handle);
+ rte_intr_instance_free(dev->intr_handle);
return -ENOMEM;
}
+
VHOST_LOG(INFO, "Prepare intr vec\n");
for (i = 0; i < nb_rxq; i++) {
- dev->intr_handle->intr_vec[i] = RTE_INTR_VEC_RXTX_OFFSET + i;
- dev->intr_handle->efds[i] = -1;
+ if (rte_intr_vec_list_index_set(dev->intr_handle, i,
+ RTE_INTR_VEC_RXTX_OFFSET + i))
+ return -rte_errno;
+ if (rte_intr_efds_index_set(dev->intr_handle, i, -1))
+ return -rte_errno;
vq = dev->data->rx_queues[i];
if (!vq) {
VHOST_LOG(INFO, "rxq-%d not setup yet, skip!\n", i);
@@ -703,13 +707,21 @@ eth_vhost_install_intr(struct rte_eth_dev *dev)
"rxq-%d's kickfd is invalid, skip!\n", i);
continue;
}
- dev->intr_handle->efds[i] = vring.kickfd;
+
+ if (rte_intr_efds_index_set(dev->intr_handle, i,
+ vring.kickfd))
+ continue;
VHOST_LOG(INFO, "Installed intr vec for rxq-%d\n", i);
}
- dev->intr_handle->nb_efd = nb_rxq;
- dev->intr_handle->max_intr = nb_rxq + 1;
- dev->intr_handle->type = RTE_INTR_HANDLE_VDEV;
+ if (rte_intr_nb_efd_set(dev->intr_handle, nb_rxq))
+ return -rte_errno;
+
+ if (rte_intr_max_intr_set(dev->intr_handle, nb_rxq + 1))
+ return -rte_errno;
+
+ if (rte_intr_type_set(dev->intr_handle, RTE_INTR_HANDLE_VDEV))
+ return -rte_errno;
return 0;
}
@@ -914,7 +926,10 @@ vring_conf_update(int vid, struct rte_eth_dev *eth_dev, uint16_t vring_id)
vring_id);
return ret;
}
- eth_dev->intr_handle->efds[rx_idx] = vring.kickfd;
+
+ if (rte_intr_efds_index_set(eth_dev->intr_handle, rx_idx,
+ vring.kickfd))
+ return -rte_errno;
vq = eth_dev->data->rx_queues[rx_idx];
if (!vq) {
diff --git a/drivers/net/virtio/virtio_ethdev.c b/drivers/net/virtio/virtio_ethdev.c
index 6aa36b3f39..e7ae6e37f0 100644
--- a/drivers/net/virtio/virtio_ethdev.c
+++ b/drivers/net/virtio/virtio_ethdev.c
@@ -731,8 +731,7 @@ virtio_dev_close(struct rte_eth_dev *dev)
if (intr_conf->lsc || intr_conf->rxq) {
virtio_intr_disable(dev);
rte_intr_efd_disable(dev->intr_handle);
- rte_free(dev->intr_handle->intr_vec);
- dev->intr_handle->intr_vec = NULL;
+ rte_intr_vec_list_free(dev->intr_handle);
}
virtio_reset(hw);
@@ -1641,7 +1640,9 @@ virtio_queues_bind_intr(struct rte_eth_dev *dev)
PMD_INIT_LOG(INFO, "queue/interrupt binding");
for (i = 0; i < dev->data->nb_rx_queues; ++i) {
- dev->intr_handle->intr_vec[i] = i + 1;
+ if (rte_intr_vec_list_index_set(dev->intr_handle, i,
+ i + 1))
+ return -rte_errno;
if (VIRTIO_OPS(hw)->set_queue_irq(hw, hw->vqs[i * 2], i + 1) ==
VIRTIO_MSI_NO_VECTOR) {
PMD_DRV_LOG(ERR, "failed to set queue vector");
@@ -1680,15 +1681,11 @@ virtio_configure_intr(struct rte_eth_dev *dev)
return -1;
}
- if (!dev->intr_handle->intr_vec) {
- dev->intr_handle->intr_vec =
- rte_zmalloc("intr_vec",
- hw->max_queue_pairs * sizeof(int), 0);
- if (!dev->intr_handle->intr_vec) {
- PMD_INIT_LOG(ERR, "Failed to allocate %u rxq vectors",
- hw->max_queue_pairs);
- return -ENOMEM;
- }
+ if (rte_intr_vec_list_alloc(dev->intr_handle, "intr_vec",
+ hw->max_queue_pairs)) {
+ PMD_INIT_LOG(ERR, "Failed to allocate %u rxq vectors",
+ hw->max_queue_pairs);
+ return -ENOMEM;
}
if (dev->data->dev_flags & RTE_ETH_DEV_INTR_LSC) {
diff --git a/drivers/net/virtio/virtio_user/virtio_user_dev.c b/drivers/net/virtio/virtio_user/virtio_user_dev.c
index 6a6145583b..62fe307b7a 100644
--- a/drivers/net/virtio/virtio_user/virtio_user_dev.c
+++ b/drivers/net/virtio/virtio_user/virtio_user_dev.c
@@ -407,22 +407,36 @@ virtio_user_fill_intr_handle(struct virtio_user_dev *dev)
struct rte_eth_dev *eth_dev = &rte_eth_devices[dev->hw.port_id];
if (!eth_dev->intr_handle) {
- eth_dev->intr_handle = malloc(sizeof(*eth_dev->intr_handle));
+ eth_dev->intr_handle = rte_intr_instance_alloc();
if (!eth_dev->intr_handle) {
PMD_DRV_LOG(ERR, "(%s) failed to allocate intr_handle", dev->path);
return -1;
}
- memset(eth_dev->intr_handle, 0, sizeof(*eth_dev->intr_handle));
}
for (i = 0; i < dev->max_queue_pairs; ++i)
- eth_dev->intr_handle->efds[i] = dev->callfds[2 * i];
- eth_dev->intr_handle->nb_efd = dev->max_queue_pairs;
- eth_dev->intr_handle->max_intr = dev->max_queue_pairs + 1;
- eth_dev->intr_handle->type = RTE_INTR_HANDLE_VDEV;
+ if (rte_intr_efds_index_set(eth_dev->intr_handle, i,
+ dev->callfds[i]))
+ return -rte_errno;
+
+ if (rte_intr_nb_efd_set(eth_dev->intr_handle,
+ dev->max_queue_pairs))
+ return -rte_errno;
+
+ if (rte_intr_max_intr_set(eth_dev->intr_handle,
+ dev->max_queue_pairs + 1))
+ return -rte_errno;
+
+ if (rte_intr_type_set(eth_dev->intr_handle,
+ RTE_INTR_HANDLE_VDEV))
+ return -rte_errno;
/* For virtio vdev, no need to read counter for clean */
- eth_dev->intr_handle->efd_counter_size = 0;
- eth_dev->intr_handle->fd = dev->ops->get_intr_fd(dev);
+ if (rte_intr_efd_counter_size_set(eth_dev->intr_handle, 0))
+ return -rte_errno;
+
+ if (rte_intr_fd_set(eth_dev->intr_handle,
+ dev->ops->get_intr_fd(dev)))
+ return -rte_errno;
return 0;
}
@@ -657,7 +671,7 @@ virtio_user_dev_uninit(struct virtio_user_dev *dev)
struct rte_eth_dev *eth_dev = &rte_eth_devices[dev->hw.port_id];
if (eth_dev->intr_handle) {
- free(eth_dev->intr_handle);
+ rte_intr_instance_free(eth_dev->intr_handle);
eth_dev->intr_handle = NULL;
}
@@ -962,7 +976,7 @@ virtio_user_dev_delayed_disconnect_handler(void *param)
return;
}
PMD_DRV_LOG(DEBUG, "Unregistering intr fd: %d",
- eth_dev->intr_handle->fd);
+ rte_intr_fd_get(eth_dev->intr_handle));
if (rte_intr_callback_unregister(eth_dev->intr_handle,
virtio_interrupt_handler,
eth_dev) != 1)
@@ -972,10 +986,11 @@ virtio_user_dev_delayed_disconnect_handler(void *param)
if (dev->ops->server_disconnect)
dev->ops->server_disconnect(dev);
- eth_dev->intr_handle->fd = dev->ops->get_intr_fd(dev);
+ rte_intr_fd_set(eth_dev->intr_handle,
+ dev->ops->get_intr_fd(dev));
PMD_DRV_LOG(DEBUG, "Registering intr fd: %d",
- eth_dev->intr_handle->fd);
+ rte_intr_fd_get(eth_dev->intr_handle));
if (rte_intr_callback_register(eth_dev->intr_handle,
virtio_interrupt_handler,
@@ -996,16 +1011,18 @@ virtio_user_dev_delayed_intr_reconfig_handler(void *param)
struct rte_eth_dev *eth_dev = &rte_eth_devices[dev->hw.port_id];
PMD_DRV_LOG(DEBUG, "Unregistering intr fd: %d",
- eth_dev->intr_handle->fd);
+ rte_intr_fd_get(eth_dev->intr_handle));
if (rte_intr_callback_unregister(eth_dev->intr_handle,
virtio_interrupt_handler,
eth_dev) != 1)
PMD_DRV_LOG(ERR, "interrupt unregister failed");
- eth_dev->intr_handle->fd = dev->ops->get_intr_fd(dev);
+ rte_intr_fd_set(eth_dev->intr_handle,
+ dev->ops->get_intr_fd(dev));
- PMD_DRV_LOG(DEBUG, "Registering intr fd: %d", eth_dev->intr_handle->fd);
+ PMD_DRV_LOG(DEBUG, "Registering intr fd: %d",
+ rte_intr_fd_get(eth_dev->intr_handle));
if (rte_intr_callback_register(eth_dev->intr_handle,
virtio_interrupt_handler, eth_dev))
diff --git a/drivers/net/vmxnet3/vmxnet3_ethdev.c b/drivers/net/vmxnet3/vmxnet3_ethdev.c
index cfffc94c48..45ab4971ed 100644
--- a/drivers/net/vmxnet3/vmxnet3_ethdev.c
+++ b/drivers/net/vmxnet3/vmxnet3_ethdev.c
@@ -620,11 +620,9 @@ vmxnet3_configure_msix(struct rte_eth_dev *dev)
return -1;
}
- if (rte_intr_dp_is_en(intr_handle) && !intr_handle->intr_vec) {
- intr_handle->intr_vec =
- rte_zmalloc("intr_vec",
- dev->data->nb_rx_queues * sizeof(int), 0);
- if (intr_handle->intr_vec == NULL) {
+ if (rte_intr_dp_is_en(intr_handle)) {
+ if (rte_intr_vec_list_alloc(intr_handle, "intr_vec",
+ dev->data->nb_rx_queues)) {
PMD_INIT_LOG(ERR, "Failed to allocate %d Rx queues intr_vec",
dev->data->nb_rx_queues);
rte_intr_efd_disable(intr_handle);
@@ -635,8 +633,7 @@ vmxnet3_configure_msix(struct rte_eth_dev *dev)
if (!rte_intr_allow_others(intr_handle) &&
dev->data->dev_conf.intr_conf.lsc != 0) {
PMD_INIT_LOG(ERR, "not enough intr vector to support both Rx interrupt and LSC");
- rte_free(intr_handle->intr_vec);
- intr_handle->intr_vec = NULL;
+ rte_intr_vec_list_free(intr_handle);
rte_intr_efd_disable(intr_handle);
return -1;
}
@@ -644,17 +641,19 @@ vmxnet3_configure_msix(struct rte_eth_dev *dev)
/* if we cannot allocate one MSI-X vector per queue, don't enable
* interrupt mode.
*/
- if (hw->intr.num_intrs != (intr_handle->nb_efd + 1)) {
+ if (hw->intr.num_intrs !=
+ (rte_intr_nb_efd_get(intr_handle) + 1)) {
PMD_INIT_LOG(ERR, "Device configured with %d Rx intr vectors, expecting %d",
- hw->intr.num_intrs, intr_handle->nb_efd + 1);
- rte_free(intr_handle->intr_vec);
- intr_handle->intr_vec = NULL;
+ hw->intr.num_intrs,
+ rte_intr_nb_efd_get(intr_handle) + 1);
+ rte_intr_vec_list_free(intr_handle);
rte_intr_efd_disable(intr_handle);
return -1;
}
for (i = 0; i < dev->data->nb_rx_queues; i++)
- intr_handle->intr_vec[i] = i + 1;
+ if (rte_intr_vec_list_index_set(intr_handle, i, i + 1))
+ return -rte_errno;
for (i = 0; i < hw->intr.num_intrs; i++)
hw->intr.mod_levels[i] = UPT1_IML_ADAPTIVE;
@@ -802,7 +801,9 @@ vmxnet3_setup_driver_shared(struct rte_eth_dev *dev)
if (hw->intr.lsc_only)
tqd->conf.intrIdx = 1;
else
- tqd->conf.intrIdx = intr_handle->intr_vec[i];
+ tqd->conf.intrIdx =
+ rte_intr_vec_list_index_get(intr_handle,
+ i);
tqd->status.stopped = TRUE;
tqd->status.error = 0;
memset(&tqd->stats, 0, sizeof(tqd->stats));
@@ -825,7 +826,9 @@ vmxnet3_setup_driver_shared(struct rte_eth_dev *dev)
if (hw->intr.lsc_only)
rqd->conf.intrIdx = 1;
else
- rqd->conf.intrIdx = intr_handle->intr_vec[i];
+ rqd->conf.intrIdx =
+ rte_intr_vec_list_index_get(intr_handle,
+ i);
rqd->status.stopped = TRUE;
rqd->status.error = 0;
memset(&rqd->stats, 0, sizeof(rqd->stats));
@@ -1022,10 +1025,7 @@ vmxnet3_dev_stop(struct rte_eth_dev *dev)
/* Clean datapath event and queue/vector mapping */
rte_intr_efd_disable(intr_handle);
- if (intr_handle->intr_vec != NULL) {
- rte_free(intr_handle->intr_vec);
- intr_handle->intr_vec = NULL;
- }
+ rte_intr_vec_list_free(intr_handle);
/* quiesce the device first */
VMXNET3_WRITE_BAR1_REG(hw, VMXNET3_REG_CMD, VMXNET3_CMD_QUIESCE_DEV);
@@ -1671,7 +1671,9 @@ vmxnet3_dev_rx_queue_intr_enable(struct rte_eth_dev *dev, uint16_t queue_id)
{
struct vmxnet3_hw *hw = dev->data->dev_private;
- vmxnet3_enable_intr(hw, dev->intr_handle->intr_vec[queue_id]);
+ vmxnet3_enable_intr(hw,
+ rte_intr_vec_list_index_get(dev->intr_handle,
+ queue_id));
return 0;
}
@@ -1681,7 +1683,8 @@ vmxnet3_dev_rx_queue_intr_disable(struct rte_eth_dev *dev, uint16_t queue_id)
{
struct vmxnet3_hw *hw = dev->data->dev_private;
- vmxnet3_disable_intr(hw, dev->intr_handle->intr_vec[queue_id]);
+ vmxnet3_disable_intr(hw,
+ rte_intr_vec_list_index_get(dev->intr_handle, queue_id));
return 0;
}
diff --git a/drivers/raw/ifpga/ifpga_rawdev.c b/drivers/raw/ifpga/ifpga_rawdev.c
index 76e6a8530b..b3e0671e7a 100644
--- a/drivers/raw/ifpga/ifpga_rawdev.c
+++ b/drivers/raw/ifpga/ifpga_rawdev.c
@@ -73,7 +73,7 @@ static pthread_t ifpga_monitor_start_thread;
#define IFPGA_MAX_IRQ 12
/* 0 for FME interrupt, others are reserved for AFU irq */
-static struct rte_intr_handle ifpga_irq_handle[IFPGA_MAX_IRQ];
+static struct rte_intr_handle *ifpga_irq_handle[IFPGA_MAX_IRQ];
static struct ifpga_rawdev *
ifpga_rawdev_allocate(struct rte_rawdev *rawdev);
@@ -1345,17 +1345,22 @@ ifpga_unregister_msix_irq(enum ifpga_irq_type type,
int vec_start, rte_intr_callback_fn handler, void *arg)
{
struct rte_intr_handle *intr_handle;
+ int rc, i;
if (type == IFPGA_FME_IRQ)
- intr_handle = &ifpga_irq_handle[0];
+ intr_handle = ifpga_irq_handle[0];
else if (type == IFPGA_AFU_IRQ)
- intr_handle = &ifpga_irq_handle[vec_start + 1];
+ intr_handle = ifpga_irq_handle[vec_start + 1];
else
return 0;
rte_intr_efd_disable(intr_handle);
- return rte_intr_callback_unregister(intr_handle, handler, arg);
+ rc = rte_intr_callback_unregister(intr_handle, handler, arg);
+
+ for (i = 0; i < IFPGA_MAX_IRQ; i++)
+ rte_intr_instance_free(ifpga_irq_handle[i]);
+ return rc;
}
int
@@ -1369,6 +1374,13 @@ ifpga_register_msix_irq(struct rte_rawdev *dev, int port_id,
struct opae_adapter *adapter;
struct opae_manager *mgr;
struct opae_accelerator *acc;
+ int *intr_efds = NULL, nb_intr, i;
+
+ for (i = 0; i < IFPGA_MAX_IRQ; i++) {
+ ifpga_irq_handle[i] = rte_intr_instance_alloc();
+ if (!ifpga_irq_handle[i])
+ return -ENOMEM;
+ }
adapter = ifpga_rawdev_get_priv(dev);
if (!adapter)
@@ -1379,29 +1391,33 @@ ifpga_register_msix_irq(struct rte_rawdev *dev, int port_id,
return -ENODEV;
if (type == IFPGA_FME_IRQ) {
- intr_handle = &ifpga_irq_handle[0];
+ intr_handle = ifpga_irq_handle[0];
count = 1;
} else if (type == IFPGA_AFU_IRQ) {
- intr_handle = &ifpga_irq_handle[vec_start + 1];
+ intr_handle = ifpga_irq_handle[vec_start + 1];
} else {
return -EINVAL;
}
- intr_handle->type = RTE_INTR_HANDLE_VFIO_MSIX;
+ if (rte_intr_type_set(intr_handle, RTE_INTR_HANDLE_VFIO_MSIX))
+ return -rte_errno;
ret = rte_intr_efd_enable(intr_handle, count);
if (ret)
return -ENODEV;
- intr_handle->fd = intr_handle->efds[0];
+ if (rte_intr_fd_set(intr_handle,
+ rte_intr_efds_index_get(intr_handle, 0)))
+ return -rte_errno;
IFPGA_RAWDEV_PMD_DEBUG("register %s irq, vfio_fd=%d, fd=%d\n",
- name, intr_handle->vfio_dev_fd,
- intr_handle->fd);
+ name, rte_intr_dev_fd_get(intr_handle),
+ rte_intr_fd_get(intr_handle));
if (type == IFPGA_FME_IRQ) {
struct fpga_fme_err_irq_set err_irq_set;
- err_irq_set.evtfd = intr_handle->efds[0];
+ err_irq_set.evtfd = rte_intr_efds_index_get(intr_handle,
+ 0);
ret = opae_manager_ifpga_set_err_irq(mgr, &err_irq_set);
if (ret)
@@ -1411,20 +1427,33 @@ ifpga_register_msix_irq(struct rte_rawdev *dev, int port_id,
if (!acc)
return -EINVAL;
- ret = opae_acc_set_irq(acc, vec_start, count,
- intr_handle->efds);
- if (ret)
+ nb_intr = rte_intr_nb_intr_get(intr_handle);
+
+ intr_efds = calloc(nb_intr, sizeof(int));
+ if (!intr_efds)
+ return -ENOMEM;
+
+ for (i = 0; i < nb_intr; i++)
+ intr_efds[i] = rte_intr_efds_index_get(intr_handle, i);
+
+ ret = opae_acc_set_irq(acc, vec_start, count, intr_efds);
+ if (ret) {
+ free(intr_efds);
return -EINVAL;
+ }
}
/* register interrupt handler using DPDK API */
ret = rte_intr_callback_register(intr_handle,
handler, (void *)arg);
- if (ret)
+ if (ret) {
+ free(intr_efds);
return -EINVAL;
+ }
IFPGA_RAWDEV_PMD_INFO("success register %s interrupt\n", name);
+ free(intr_efds);
return 0;
}
@@ -1491,7 +1520,7 @@ ifpga_rawdev_create(struct rte_pci_device *pci_dev,
data->bus = pci_dev->addr.bus;
data->devid = pci_dev->addr.devid;
data->function = pci_dev->addr.function;
- data->vfio_dev_fd = pci_dev->intr_handle.vfio_dev_fd;
+ data->vfio_dev_fd = rte_intr_dev_fd_get(pci_dev->intr_handle);
adapter = rawdev->dev_private;
/* create a opae_adapter based on above device data */
diff --git a/drivers/raw/ntb/ntb.c b/drivers/raw/ntb/ntb.c
index 78cfcd79f7..46ac02e5ab 100644
--- a/drivers/raw/ntb/ntb.c
+++ b/drivers/raw/ntb/ntb.c
@@ -1044,13 +1044,10 @@ ntb_dev_close(struct rte_rawdev *dev)
ntb_queue_release(dev, i);
hw->queue_pairs = 0;
- intr_handle = &hw->pci_dev->intr_handle;
+ intr_handle = hw->pci_dev->intr_handle;
/* Clean datapath event and vec mapping */
rte_intr_efd_disable(intr_handle);
- if (intr_handle->intr_vec) {
- rte_free(intr_handle->intr_vec);
- intr_handle->intr_vec = NULL;
- }
+ rte_intr_vec_list_free(intr_handle);
/* Disable uio intr before callback unregister */
rte_intr_disable(intr_handle);
@@ -1402,7 +1399,7 @@ ntb_init_hw(struct rte_rawdev *dev, struct rte_pci_device *pci_dev)
/* Init doorbell. */
hw->db_valid_mask = RTE_LEN2MASK(hw->db_cnt, uint64_t);
- intr_handle = &pci_dev->intr_handle;
+ intr_handle = pci_dev->intr_handle;
/* Register callback func to eal lib */
rte_intr_callback_register(intr_handle,
ntb_dev_intr_handler, dev);
diff --git a/drivers/regex/octeontx2/otx2_regexdev_hw_access.c b/drivers/regex/octeontx2/otx2_regexdev_hw_access.c
index 620d5c9122..f8031d0f72 100644
--- a/drivers/regex/octeontx2/otx2_regexdev_hw_access.c
+++ b/drivers/regex/octeontx2/otx2_regexdev_hw_access.c
@@ -31,7 +31,7 @@ ree_lf_err_intr_unregister(const struct rte_regexdev *dev, uint16_t msix_off,
uintptr_t base)
{
struct rte_pci_device *pci_dev = RTE_DEV_TO_PCI(dev->device);
- struct rte_intr_handle *handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *handle = pci_dev->intr_handle;
/* Disable error interrupts */
otx2_write64(~0ull, base + OTX2_REE_LF_MISC_INT_ENA_W1C);
@@ -61,7 +61,7 @@ ree_lf_err_intr_register(const struct rte_regexdev *dev, uint16_t msix_off,
uintptr_t base)
{
struct rte_pci_device *pci_dev = RTE_DEV_TO_PCI(dev->device);
- struct rte_intr_handle *handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *handle = pci_dev->intr_handle;
int ret;
/* Disable error interrupts */
diff --git a/drivers/vdpa/ifc/ifcvf_vdpa.c b/drivers/vdpa/ifc/ifcvf_vdpa.c
index 365da2a8b9..dd5251d382 100644
--- a/drivers/vdpa/ifc/ifcvf_vdpa.c
+++ b/drivers/vdpa/ifc/ifcvf_vdpa.c
@@ -162,7 +162,7 @@ ifcvf_vfio_setup(struct ifcvf_internal *internal)
if (rte_pci_map_device(dev))
goto err;
- internal->vfio_dev_fd = dev->intr_handle.vfio_dev_fd;
+ internal->vfio_dev_fd = rte_intr_dev_fd_get(dev->intr_handle);
for (i = 0; i < RTE_MIN(PCI_MAX_RESOURCE, IFCVF_PCI_MAX_RESOURCE);
i++) {
@@ -365,7 +365,8 @@ vdpa_enable_vfio_intr(struct ifcvf_internal *internal, bool m_rx)
irq_set->index = VFIO_PCI_MSIX_IRQ_INDEX;
irq_set->start = 0;
fd_ptr = (int *)&irq_set->data;
- fd_ptr[RTE_INTR_VEC_ZERO_OFFSET] = internal->pdev->intr_handle.fd;
+ fd_ptr[RTE_INTR_VEC_ZERO_OFFSET] =
+ rte_intr_fd_get(internal->pdev->intr_handle);
for (i = 0; i < nr_vring; i++)
internal->intr_fd[i] = -1;
diff --git a/drivers/vdpa/mlx5/mlx5_vdpa.c b/drivers/vdpa/mlx5/mlx5_vdpa.c
index 6d17d7a6f3..0f6d180ae2 100644
--- a/drivers/vdpa/mlx5/mlx5_vdpa.c
+++ b/drivers/vdpa/mlx5/mlx5_vdpa.c
@@ -698,6 +698,11 @@ mlx5_vdpa_dev_probe(struct rte_device *dev)
DRV_LOG(ERR, "Failed to allocate VAR %u.", errno);
goto error;
}
+ priv->err_intr_handle = rte_intr_instance_alloc();
+ if (!priv->err_intr_handle) {
+ DRV_LOG(ERR, "Fail to allocate intr_handle");
+ goto error;
+ }
priv->vdev = rte_vdpa_register_device(dev, &mlx5_vdpa_ops);
if (priv->vdev == NULL) {
DRV_LOG(ERR, "Failed to register vDPA device.");
@@ -716,6 +721,8 @@ mlx5_vdpa_dev_probe(struct rte_device *dev)
if (priv) {
if (priv->var)
mlx5_glue->dv_free_var(priv->var);
+ if (priv->err_intr_handle)
+ rte_intr_instance_free(priv->err_intr_handle);
rte_free(priv);
}
if (ctx)
@@ -750,6 +757,8 @@ mlx5_vdpa_dev_remove(struct rte_device *dev)
rte_vdpa_unregister_device(priv->vdev);
mlx5_glue->close_device(priv->ctx);
pthread_mutex_destroy(&priv->vq_config_lock);
+ if (priv->err_intr_handle)
+ rte_intr_instance_free(priv->err_intr_handle);
rte_free(priv);
}
return 0;
diff --git a/drivers/vdpa/mlx5/mlx5_vdpa.h b/drivers/vdpa/mlx5/mlx5_vdpa.h
index a27f3fdadb..0c51376dd9 100644
--- a/drivers/vdpa/mlx5/mlx5_vdpa.h
+++ b/drivers/vdpa/mlx5/mlx5_vdpa.h
@@ -89,7 +89,7 @@ struct mlx5_vdpa_virtq {
void *buf;
uint32_t size;
} umems[3];
- struct rte_intr_handle intr_handle;
+ struct rte_intr_handle *intr_handle;
uint64_t err_time[3]; /* RDTSC time of recent errors. */
uint32_t n_retry;
struct mlx5_devx_virtio_q_couners_attr reset;
@@ -139,7 +139,7 @@ struct mlx5_vdpa_priv {
struct mlx5dv_devx_event_channel *eventc;
struct mlx5dv_devx_event_channel *err_chnl;
struct mlx5dv_devx_uar *uar;
- struct rte_intr_handle err_intr_handle;
+ struct rte_intr_handle *err_intr_handle;
struct mlx5_devx_obj *td;
struct mlx5_devx_obj *tiss[16]; /* TIS list for each LAG port. */
uint16_t nr_virtqs;
diff --git a/drivers/vdpa/mlx5/mlx5_vdpa_event.c b/drivers/vdpa/mlx5/mlx5_vdpa_event.c
index bb6722839a..5ec04a875b 100644
--- a/drivers/vdpa/mlx5/mlx5_vdpa_event.c
+++ b/drivers/vdpa/mlx5/mlx5_vdpa_event.c
@@ -410,12 +410,18 @@ mlx5_vdpa_err_event_setup(struct mlx5_vdpa_priv *priv)
DRV_LOG(ERR, "Failed to change device event channel FD.");
goto error;
}
- priv->err_intr_handle.fd = priv->err_chnl->fd;
- priv->err_intr_handle.type = RTE_INTR_HANDLE_EXT;
- if (rte_intr_callback_register(&priv->err_intr_handle,
+
+ if (rte_intr_fd_set(priv->err_intr_handle, priv->err_chnl->fd))
+ goto error;
+
+ if (rte_intr_type_set(priv->err_intr_handle,
+ RTE_INTR_HANDLE_EXT))
+ goto error;
+
+ if (rte_intr_callback_register(priv->err_intr_handle,
mlx5_vdpa_err_interrupt_handler,
priv)) {
- priv->err_intr_handle.fd = 0;
+ rte_intr_fd_set(priv->err_intr_handle, 0);
DRV_LOG(ERR, "Failed to register error interrupt for device %d.",
priv->vid);
goto error;
@@ -435,20 +441,20 @@ mlx5_vdpa_err_event_unset(struct mlx5_vdpa_priv *priv)
int retries = MLX5_VDPA_INTR_RETRIES;
int ret = -EAGAIN;
- if (!priv->err_intr_handle.fd)
+ if (!rte_intr_fd_get(priv->err_intr_handle))
return;
while (retries-- && ret == -EAGAIN) {
- ret = rte_intr_callback_unregister(&priv->err_intr_handle,
+ ret = rte_intr_callback_unregister(priv->err_intr_handle,
mlx5_vdpa_err_interrupt_handler,
priv);
if (ret == -EAGAIN) {
DRV_LOG(DEBUG, "Try again to unregister fd %d "
"of error interrupt, retries = %d.",
- priv->err_intr_handle.fd, retries);
+ rte_intr_fd_get(priv->err_intr_handle),
+ retries);
rte_pause();
}
}
- memset(&priv->err_intr_handle, 0, sizeof(priv->err_intr_handle));
if (priv->err_chnl) {
#ifdef HAVE_IBV_DEVX_EVENT
union {
diff --git a/drivers/vdpa/mlx5/mlx5_vdpa_virtq.c b/drivers/vdpa/mlx5/mlx5_vdpa_virtq.c
index f530646058..da9e09f22c 100644
--- a/drivers/vdpa/mlx5/mlx5_vdpa_virtq.c
+++ b/drivers/vdpa/mlx5/mlx5_vdpa_virtq.c
@@ -24,7 +24,8 @@ mlx5_vdpa_virtq_handler(void *cb_arg)
int nbytes;
do {
- nbytes = read(virtq->intr_handle.fd, &buf, 8);
+ nbytes = read(rte_intr_fd_get(virtq->intr_handle), &buf,
+ 8);
if (nbytes < 0) {
if (errno == EINTR ||
errno == EWOULDBLOCK ||
@@ -57,21 +58,24 @@ mlx5_vdpa_virtq_unset(struct mlx5_vdpa_virtq *virtq)
int retries = MLX5_VDPA_INTR_RETRIES;
int ret = -EAGAIN;
- if (virtq->intr_handle.fd != -1) {
+ if (rte_intr_fd_get(virtq->intr_handle) != -1) {
while (retries-- && ret == -EAGAIN) {
- ret = rte_intr_callback_unregister(&virtq->intr_handle,
+ ret = rte_intr_callback_unregister(virtq->intr_handle,
mlx5_vdpa_virtq_handler,
virtq);
if (ret == -EAGAIN) {
DRV_LOG(DEBUG, "Try again to unregister fd %d "
- "of virtq %d interrupt, retries = %d.",
- virtq->intr_handle.fd,
- (int)virtq->index, retries);
+ "of virtq %d interrupt, retries = %d.",
+ rte_intr_fd_get(virtq->intr_handle),
+ (int)virtq->index, retries);
+
usleep(MLX5_VDPA_INTR_RETRIES_USEC);
}
}
- virtq->intr_handle.fd = -1;
+ rte_intr_fd_set(virtq->intr_handle, -1);
}
+ if (virtq->intr_handle)
+ rte_intr_instance_free(virtq->intr_handle);
if (virtq->virtq) {
ret = mlx5_vdpa_virtq_stop(virtq->priv, virtq->index);
if (ret)
@@ -336,21 +340,32 @@ mlx5_vdpa_virtq_setup(struct mlx5_vdpa_priv *priv, int index)
virtq->priv = priv;
rte_write32(virtq->index, priv->virtq_db_addr);
/* Setup doorbell mapping. */
- virtq->intr_handle.fd = vq.kickfd;
- if (virtq->intr_handle.fd == -1) {
+ virtq->intr_handle = rte_intr_instance_alloc();
+ if (!virtq->intr_handle) {
+ DRV_LOG(ERR, "Fail to allocate intr_handle");
+ goto error;
+ }
+
+ if (rte_intr_fd_set(virtq->intr_handle, vq.kickfd))
+ goto error;
+
+ if (rte_intr_fd_get(virtq->intr_handle) == -1) {
DRV_LOG(WARNING, "Virtq %d kickfd is invalid.", index);
} else {
- virtq->intr_handle.type = RTE_INTR_HANDLE_EXT;
- if (rte_intr_callback_register(&virtq->intr_handle,
+ if (rte_intr_type_set(virtq->intr_handle,
+ RTE_INTR_HANDLE_EXT))
+ goto error;
+ if (rte_intr_callback_register(virtq->intr_handle,
mlx5_vdpa_virtq_handler,
virtq)) {
- virtq->intr_handle.fd = -1;
+ rte_intr_fd_set(virtq->intr_handle, -1);
DRV_LOG(ERR, "Failed to register virtq %d interrupt.",
index);
goto error;
} else {
DRV_LOG(DEBUG, "Register fd %d interrupt for virtq %d.",
- virtq->intr_handle.fd, index);
+ rte_intr_fd_get(virtq->intr_handle),
+ index);
}
}
/* Subscribe virtq error event. */
@@ -501,7 +516,8 @@ mlx5_vdpa_virtq_is_modified(struct mlx5_vdpa_priv *priv,
if (ret)
return -1;
- if (vq.size != virtq->vq_size || vq.kickfd != virtq->intr_handle.fd)
+ if (vq.size != virtq->vq_size || vq.kickfd !=
+ rte_intr_fd_get(virtq->intr_handle))
return 1;
if (virtq->eqp.cq.cq_obj.cq) {
if (vq.callfd != virtq->eqp.cq.callfd)
diff --git a/lib/bbdev/rte_bbdev.c b/lib/bbdev/rte_bbdev.c
index defddcfc28..2c6fa65020 100644
--- a/lib/bbdev/rte_bbdev.c
+++ b/lib/bbdev/rte_bbdev.c
@@ -1094,7 +1094,7 @@ rte_bbdev_queue_intr_ctl(uint16_t dev_id, uint16_t queue_id, int epfd, int op,
VALID_QUEUE_OR_RET_ERR(queue_id, dev);
intr_handle = dev->intr_handle;
- if (!intr_handle || !intr_handle->intr_vec) {
+ if (!intr_handle) {
rte_bbdev_log(ERR, "Device %u intr handle unset\n", dev_id);
return -ENOTSUP;
}
@@ -1105,7 +1105,7 @@ rte_bbdev_queue_intr_ctl(uint16_t dev_id, uint16_t queue_id, int epfd, int op,
return -ENOTSUP;
}
- vec = intr_handle->intr_vec[queue_id];
+ vec = rte_intr_vec_list_index_get(intr_handle, queue_id);
ret = rte_intr_rx_ctl(intr_handle, epfd, op, vec, data);
if (ret && (ret != -EEXIST)) {
rte_bbdev_log(ERR,
diff --git a/lib/eal/freebsd/eal_alarm.c b/lib/eal/freebsd/eal_alarm.c
index c38b2e04f8..cd971036cd 100644
--- a/lib/eal/freebsd/eal_alarm.c
+++ b/lib/eal/freebsd/eal_alarm.c
@@ -32,7 +32,7 @@
struct alarm_entry {
LIST_ENTRY(alarm_entry) next;
- struct rte_intr_handle handle;
+ struct rte_intr_handle *handle;
struct timespec time;
rte_eal_alarm_callback cb_fn;
void *cb_arg;
@@ -43,22 +43,43 @@ struct alarm_entry {
static LIST_HEAD(alarm_list, alarm_entry) alarm_list = LIST_HEAD_INITIALIZER();
static rte_spinlock_t alarm_list_lk = RTE_SPINLOCK_INITIALIZER;
-static struct rte_intr_handle intr_handle = {.fd = -1 };
+static struct rte_intr_handle *intr_handle;
static void eal_alarm_callback(void *arg);
int
rte_eal_alarm_init(void)
{
- intr_handle.type = RTE_INTR_HANDLE_ALARM;
+ int fd;
+
+ intr_handle = rte_intr_instance_alloc();
+ if (!intr_handle) {
+ RTE_LOG(ERR, EAL, "Fail to allocate intr_handle\n");
+ goto error;
+ }
+
+ if (rte_intr_type_set(intr_handle, RTE_INTR_HANDLE_ALARM))
+ goto error;
+
+ if (rte_intr_fd_set(intr_handle, -1))
+ goto error;
/* on FreeBSD, timers don't use fd's, and their identifiers are stored
* in separate namespace from fd's, so using any value is OK. however,
* EAL interrupts handler expects fd's to be unique, so use an actual fd
* to guarantee unique timer identifier.
*/
- intr_handle.fd = open("/dev/zero", O_RDONLY);
+ fd = open("/dev/zero", O_RDONLY);
+
+ if (rte_intr_fd_set(intr_handle, fd))
+ goto error;
return 0;
+error:
+ if (intr_handle) {
+ rte_intr_fd_set(intr_handle, -1);
+ rte_intr_instance_free(intr_handle);
+ }
+ return -1;
}
static inline int
@@ -118,7 +139,7 @@ unregister_current_callback(void)
ap = LIST_FIRST(&alarm_list);
do {
- ret = rte_intr_callback_unregister(&intr_handle,
+ ret = rte_intr_callback_unregister(intr_handle,
eal_alarm_callback, &ap->time);
} while (ret == -EAGAIN);
}
@@ -136,7 +157,7 @@ register_first_callback(void)
ap = LIST_FIRST(&alarm_list);
/* register a new callback */
- ret = rte_intr_callback_register(&intr_handle,
+ ret = rte_intr_callback_register(intr_handle,
eal_alarm_callback, &ap->time);
}
return ret;
@@ -164,6 +185,8 @@ eal_alarm_callback(void *arg __rte_unused)
rte_spinlock_lock(&alarm_list_lk);
LIST_REMOVE(ap, next);
+ if (ap->handle)
+ rte_intr_instance_free(ap->handle);
free(ap);
ap = LIST_FIRST(&alarm_list);
@@ -202,6 +225,10 @@ rte_eal_alarm_set(uint64_t us, rte_eal_alarm_callback cb_fn, void *cb_arg)
new_alarm->time.tv_nsec = (now.tv_nsec + ns) % NS_PER_S;
new_alarm->time.tv_sec = now.tv_sec + ((now.tv_nsec + ns) / NS_PER_S);
+ new_alarm->handle = rte_intr_instance_alloc();
+ if (new_alarm->handle == NULL)
+ return -ENOMEM;
+
rte_spinlock_lock(&alarm_list_lk);
if (LIST_EMPTY(&alarm_list))
@@ -256,6 +283,9 @@ rte_eal_alarm_cancel(rte_eal_alarm_callback cb_fn, void *cb_arg)
if (ap->executing == 0) {
LIST_REMOVE(ap, next);
free(ap);
+ if (ap->handle)
+ rte_intr_instance_free(
+ ap->handle);
count++;
} else {
/* If calling from other context, mark that
@@ -282,6 +312,9 @@ rte_eal_alarm_cancel(rte_eal_alarm_callback cb_fn, void *cb_arg)
cb_arg == ap->cb_arg)) {
if (ap->executing == 0) {
LIST_REMOVE(ap, next);
+ if (ap->handle)
+ rte_intr_instance_free(
+ ap->handle);
free(ap);
count++;
ap = ap_prev;
diff --git a/lib/eal/include/rte_eal_trace.h b/lib/eal/include/rte_eal_trace.h
index 495ae1ee1d..792872dffd 100644
--- a/lib/eal/include/rte_eal_trace.h
+++ b/lib/eal/include/rte_eal_trace.h
@@ -149,11 +149,7 @@ RTE_TRACE_POINT(
RTE_TRACE_POINT_ARGS(const struct rte_intr_handle *handle,
rte_intr_callback_fn cb, void *cb_arg, int rc),
rte_trace_point_emit_int(rc);
- rte_trace_point_emit_int(handle->vfio_dev_fd);
- rte_trace_point_emit_int(handle->fd);
- rte_trace_point_emit_int(handle->type);
- rte_trace_point_emit_u32(handle->max_intr);
- rte_trace_point_emit_u32(handle->nb_efd);
+ rte_trace_point_emit_ptr(handle);
rte_trace_point_emit_ptr(cb);
rte_trace_point_emit_ptr(cb_arg);
)
@@ -162,11 +158,7 @@ RTE_TRACE_POINT(
RTE_TRACE_POINT_ARGS(const struct rte_intr_handle *handle,
rte_intr_callback_fn cb, void *cb_arg, int rc),
rte_trace_point_emit_int(rc);
- rte_trace_point_emit_int(handle->vfio_dev_fd);
- rte_trace_point_emit_int(handle->fd);
- rte_trace_point_emit_int(handle->type);
- rte_trace_point_emit_u32(handle->max_intr);
- rte_trace_point_emit_u32(handle->nb_efd);
+ rte_trace_point_emit_ptr(handle);
rte_trace_point_emit_ptr(cb);
rte_trace_point_emit_ptr(cb_arg);
)
@@ -174,21 +166,13 @@ RTE_TRACE_POINT(
rte_eal_trace_intr_enable,
RTE_TRACE_POINT_ARGS(const struct rte_intr_handle *handle, int rc),
rte_trace_point_emit_int(rc);
- rte_trace_point_emit_int(handle->vfio_dev_fd);
- rte_trace_point_emit_int(handle->fd);
- rte_trace_point_emit_int(handle->type);
- rte_trace_point_emit_u32(handle->max_intr);
- rte_trace_point_emit_u32(handle->nb_efd);
+ rte_trace_point_emit_ptr(handle);
)
RTE_TRACE_POINT(
rte_eal_trace_intr_disable,
RTE_TRACE_POINT_ARGS(const struct rte_intr_handle *handle, int rc),
rte_trace_point_emit_int(rc);
- rte_trace_point_emit_int(handle->vfio_dev_fd);
- rte_trace_point_emit_int(handle->fd);
- rte_trace_point_emit_int(handle->type);
- rte_trace_point_emit_u32(handle->max_intr);
- rte_trace_point_emit_u32(handle->nb_efd);
+ rte_trace_point_emit_ptr(handle);
)
/* Memory */
diff --git a/lib/eal/linux/eal_alarm.c b/lib/eal/linux/eal_alarm.c
index 3252c6fa59..cf8e2f2066 100644
--- a/lib/eal/linux/eal_alarm.c
+++ b/lib/eal/linux/eal_alarm.c
@@ -54,22 +54,35 @@ struct alarm_entry {
static LIST_HEAD(alarm_list, alarm_entry) alarm_list = LIST_HEAD_INITIALIZER();
static rte_spinlock_t alarm_list_lk = RTE_SPINLOCK_INITIALIZER;
-static struct rte_intr_handle intr_handle = {.fd = -1 };
+static struct rte_intr_handle *intr_handle;
static int handler_registered = 0;
static void eal_alarm_callback(void *arg);
int
rte_eal_alarm_init(void)
{
- intr_handle.type = RTE_INTR_HANDLE_ALARM;
+
+ intr_handle = rte_intr_instance_alloc();
+ if (!intr_handle) {
+ RTE_LOG(ERR, EAL, "Fail to allocate intr_handle\n");
+ goto error;
+ }
+
+ rte_intr_type_set(intr_handle, RTE_INTR_HANDLE_ALARM);
+
/* create a timerfd file descriptor */
- intr_handle.fd = timerfd_create(CLOCK_MONOTONIC, TFD_NONBLOCK);
- if (intr_handle.fd == -1)
+ if (rte_intr_fd_set(intr_handle,
+ timerfd_create(CLOCK_MONOTONIC, TFD_NONBLOCK)))
goto error;
+ if (rte_intr_fd_get(intr_handle) == -1)
+ goto error;
return 0;
error:
+ if (intr_handle)
+ rte_intr_instance_free(intr_handle);
+
rte_errno = errno;
return -1;
}
@@ -109,7 +122,8 @@ eal_alarm_callback(void *arg __rte_unused)
atime.it_value.tv_sec -= now.tv_sec;
atime.it_value.tv_nsec -= now.tv_nsec;
- timerfd_settime(intr_handle.fd, 0, &atime, NULL);
+ timerfd_settime(rte_intr_fd_get(intr_handle), 0, &atime,
+ NULL);
}
rte_spinlock_unlock(&alarm_list_lk);
}
@@ -140,7 +154,7 @@ rte_eal_alarm_set(uint64_t us, rte_eal_alarm_callback cb_fn, void *cb_arg)
rte_spinlock_lock(&alarm_list_lk);
if (!handler_registered) {
/* registration can fail, callback can be registered later */
- if (rte_intr_callback_register(&intr_handle,
+ if (rte_intr_callback_register(intr_handle,
eal_alarm_callback, NULL) == 0)
handler_registered = 1;
}
@@ -170,7 +184,8 @@ rte_eal_alarm_set(uint64_t us, rte_eal_alarm_callback cb_fn, void *cb_arg)
.tv_nsec = (us % US_PER_S) * NS_PER_US,
},
};
- ret |= timerfd_settime(intr_handle.fd, 0, &alarm_time, NULL);
+ ret |= timerfd_settime(rte_intr_fd_get(intr_handle), 0,
+ &alarm_time, NULL);
}
rte_spinlock_unlock(&alarm_list_lk);
diff --git a/lib/eal/linux/eal_dev.c b/lib/eal/linux/eal_dev.c
index 3b905e18f5..95931d7bec 100644
--- a/lib/eal/linux/eal_dev.c
+++ b/lib/eal/linux/eal_dev.c
@@ -23,10 +23,7 @@
#include "eal_private.h"
-static struct rte_intr_handle intr_handle = {
- .type = RTE_INTR_HANDLE_DEV_EVENT,
- .fd = -1,
-};
+static struct rte_intr_handle *intr_handle;
static rte_rwlock_t monitor_lock = RTE_RWLOCK_INITIALIZER;
static uint32_t monitor_refcount;
static bool hotplug_handle;
@@ -109,12 +106,11 @@ static int
dev_uev_socket_fd_create(void)
{
struct sockaddr_nl addr;
- int ret;
+ int ret, fd;
- intr_handle.fd = socket(PF_NETLINK, SOCK_RAW | SOCK_CLOEXEC |
- SOCK_NONBLOCK,
- NETLINK_KOBJECT_UEVENT);
- if (intr_handle.fd < 0) {
+ fd = socket(PF_NETLINK, SOCK_RAW | SOCK_CLOEXEC | SOCK_NONBLOCK,
+ NETLINK_KOBJECT_UEVENT);
+ if (fd < 0) {
RTE_LOG(ERR, EAL, "create uevent fd failed.\n");
return -1;
}
@@ -124,16 +120,19 @@ dev_uev_socket_fd_create(void)
addr.nl_pid = 0;
addr.nl_groups = 0xffffffff;
- ret = bind(intr_handle.fd, (struct sockaddr *) &addr, sizeof(addr));
+ ret = bind(fd, (struct sockaddr *) &addr, sizeof(addr));
if (ret < 0) {
RTE_LOG(ERR, EAL, "Failed to bind uevent socket.\n");
goto err;
}
+ if (rte_intr_fd_set(intr_handle, fd))
+ goto err;
+
return 0;
err:
- close(intr_handle.fd);
- intr_handle.fd = -1;
+ close(fd);
+ fd = -1;
return ret;
}
@@ -217,9 +216,9 @@ dev_uev_parse(const char *buf, struct rte_dev_event *event, int length)
static void
dev_delayed_unregister(void *param)
{
- rte_intr_callback_unregister(&intr_handle, dev_uev_handler, param);
- close(intr_handle.fd);
- intr_handle.fd = -1;
+ rte_intr_callback_unregister(intr_handle, dev_uev_handler, param);
+ close(rte_intr_fd_get(intr_handle));
+ rte_intr_fd_set(intr_handle, -1);
}
static void
@@ -235,7 +234,8 @@ dev_uev_handler(__rte_unused void *param)
memset(&uevent, 0, sizeof(struct rte_dev_event));
memset(buf, 0, EAL_UEV_MSG_LEN);
- ret = recv(intr_handle.fd, buf, EAL_UEV_MSG_LEN, MSG_DONTWAIT);
+ ret = recv(rte_intr_fd_get(intr_handle), buf, EAL_UEV_MSG_LEN,
+ MSG_DONTWAIT);
if (ret < 0 && errno == EAGAIN)
return;
else if (ret <= 0) {
@@ -311,24 +311,38 @@ rte_dev_event_monitor_start(void)
goto exit;
}
+ intr_handle = rte_intr_instance_alloc();
+ if (!intr_handle) {
+ RTE_LOG(ERR, EAL, "Fail to allocate intr_handle\n");
+ goto exit;
+ }
+
+ if (rte_intr_type_set(intr_handle, RTE_INTR_HANDLE_DEV_EVENT))
+ goto exit;
+
+ if (rte_intr_fd_set(intr_handle, -1))
+ goto exit;
+
ret = dev_uev_socket_fd_create();
if (ret) {
RTE_LOG(ERR, EAL, "error create device event fd.\n");
goto exit;
}
- ret = rte_intr_callback_register(&intr_handle, dev_uev_handler, NULL);
+ ret = rte_intr_callback_register(intr_handle, dev_uev_handler, NULL);
if (ret) {
- RTE_LOG(ERR, EAL, "fail to register uevent callback.\n");
- close(intr_handle.fd);
- intr_handle.fd = -1;
+ close(rte_intr_fd_get(intr_handle));
goto exit;
}
monitor_refcount++;
exit:
+ if (intr_handle) {
+ rte_intr_fd_set(intr_handle, -1);
+ rte_intr_instance_free(intr_handle);
+ }
rte_rwlock_write_unlock(&monitor_lock);
return ret;
}
@@ -350,15 +364,18 @@ rte_dev_event_monitor_stop(void)
goto exit;
}
- ret = rte_intr_callback_unregister(&intr_handle, dev_uev_handler,
+ ret = rte_intr_callback_unregister(intr_handle, dev_uev_handler,
(void *)-1);
if (ret < 0) {
RTE_LOG(ERR, EAL, "fail to unregister uevent callback.\n");
goto exit;
}
- close(intr_handle.fd);
- intr_handle.fd = -1;
+ close(rte_intr_fd_get(intr_handle));
+ rte_intr_fd_set(intr_handle, -1);
+
+ if (intr_handle)
+ rte_intr_instance_free(intr_handle);
monitor_refcount--;
diff --git a/lib/ethdev/ethdev_pci.h b/lib/ethdev/ethdev_pci.h
index 8edca82ce8..eff072ac16 100644
--- a/lib/ethdev/ethdev_pci.h
+++ b/lib/ethdev/ethdev_pci.h
@@ -32,7 +32,7 @@ rte_eth_copy_pci_info(struct rte_eth_dev *eth_dev,
return;
}
- eth_dev->intr_handle = &pci_dev->intr_handle;
+ eth_dev->intr_handle = pci_dev->intr_handle;
if (rte_eal_process_type() == RTE_PROC_PRIMARY) {
eth_dev->data->dev_flags = 0;
diff --git a/lib/ethdev/rte_ethdev.c b/lib/ethdev/rte_ethdev.c
index 028907bc4b..c7b6162c4f 100644
--- a/lib/ethdev/rte_ethdev.c
+++ b/lib/ethdev/rte_ethdev.c
@@ -4696,13 +4696,13 @@ rte_eth_dev_rx_intr_ctl(uint16_t port_id, int epfd, int op, void *data)
}
intr_handle = dev->intr_handle;
- if (!intr_handle->intr_vec) {
+ if (rte_intr_vec_list_index_get(intr_handle, 0) < 0) {
RTE_ETHDEV_LOG(ERR, "RX Intr vector unset\n");
return -EPERM;
}
for (qid = 0; qid < dev->data->nb_rx_queues; qid++) {
- vec = intr_handle->intr_vec[qid];
+ vec = rte_intr_vec_list_index_get(intr_handle, qid);
rc = rte_intr_rx_ctl(intr_handle, epfd, op, vec, data);
if (rc && rc != -EEXIST) {
RTE_ETHDEV_LOG(ERR,
@@ -4737,15 +4737,15 @@ rte_eth_dev_rx_intr_ctl_q_get_fd(uint16_t port_id, uint16_t queue_id)
}
intr_handle = dev->intr_handle;
- if (!intr_handle->intr_vec) {
+ if (rte_intr_vec_list_index_get(intr_handle, 0) < 0) {
RTE_ETHDEV_LOG(ERR, "RX Intr vector unset\n");
return -1;
}
- vec = intr_handle->intr_vec[queue_id];
+ vec = rte_intr_vec_list_index_get(intr_handle, queue_id);
efd_idx = (vec >= RTE_INTR_VEC_RXTX_OFFSET) ?
(vec - RTE_INTR_VEC_RXTX_OFFSET) : vec;
- fd = intr_handle->efds[efd_idx];
+ fd = rte_intr_efds_index_get(intr_handle, efd_idx);
return fd;
}
@@ -4923,12 +4923,12 @@ rte_eth_dev_rx_intr_ctl_q(uint16_t port_id, uint16_t queue_id,
}
intr_handle = dev->intr_handle;
- if (!intr_handle->intr_vec) {
+ if (rte_intr_vec_list_index_get(intr_handle, 0) < 0) {
RTE_ETHDEV_LOG(ERR, "RX Intr vector unset\n");
return -EPERM;
}
- vec = intr_handle->intr_vec[queue_id];
+ vec = rte_intr_vec_list_index_get(intr_handle, queue_id);
rc = rte_intr_rx_ctl(intr_handle, epfd, op, vec, data);
if (rc && rc != -EEXIST) {
RTE_ETHDEV_LOG(ERR,
--
2.18.0
^ permalink raw reply [flat|nested] 152+ messages in thread
* [dpdk-dev] [PATCH v3 6/7] eal/interrupts: make interrupt handle structure opaque
2021-10-18 19:37 ` [dpdk-dev] [PATCH v3 " Harman Kalra
` (4 preceding siblings ...)
2021-10-18 19:37 ` [dpdk-dev] [PATCH v3 5/7] drivers: remove direct access to interrupt handle Harman Kalra
@ 2021-10-18 19:37 ` Harman Kalra
2021-10-18 19:37 ` [dpdk-dev] [PATCH v3 7/7] eal/alarm: introduce alarm fini routine Harman Kalra
6 siblings, 0 replies; 152+ messages in thread
From: Harman Kalra @ 2021-10-18 19:37 UTC (permalink / raw)
To: dev, Anatoly Burakov, Harman Kalra
Cc: david.marchand, dmitry.kozliuk, mdr, thomas
Moving the interrupt handle structure definition inside the c file
to make its fields totally opaque to the outside world.
Dynamically allocating the efds and elist arrays of the intr_handle
structure, based on a size provided by the user, e.g. the number of
MSI-X interrupts supported by a PCI device.
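For illustration, a minimal sketch of the usage model this change assumes
(driver-side flow only; "fd", "nb_rx_queues" and the error label are
placeholders, not part of the patch):

	struct rte_intr_handle *handle;
	int i;

	/* instances can no longer be static, struct size is now hidden */
	handle = rte_intr_instance_alloc();
	if (handle == NULL)
		return -ENOMEM;

	if (rte_intr_type_set(handle, RTE_INTR_HANDLE_VFIO_MSIX) ||
	    rte_intr_fd_set(handle, fd))
		goto fail;

	/* one vector per Rx queue, as in the driver hunks earlier in the series */
	if (rte_intr_vec_list_alloc(handle, "intr_vec", nb_rx_queues))
		goto fail;
	for (i = 0; i < nb_rx_queues; i++)
		if (rte_intr_vec_list_index_set(handle, i, i + 1))
			goto fail;
	/* ... */
fail:
	rte_intr_vec_list_free(handle);
	rte_intr_instance_free(handle);
	return -rte_errno;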
Signed-off-by: Harman Kalra <hkalra@marvell.com>
---
drivers/bus/pci/linux/pci_vfio.c | 7 +
lib/eal/common/eal_common_interrupts.c | 194 +++++++++++++++++++++++--
lib/eal/include/meson.build | 1 -
lib/eal/include/rte_eal_interrupts.h | 72 ---------
lib/eal/include/rte_interrupts.h | 30 +++-
5 files changed, 221 insertions(+), 83 deletions(-)
delete mode 100644 lib/eal/include/rte_eal_interrupts.h
diff --git a/drivers/bus/pci/linux/pci_vfio.c b/drivers/bus/pci/linux/pci_vfio.c
index c8da3e2fe8..f274aa4aab 100644
--- a/drivers/bus/pci/linux/pci_vfio.c
+++ b/drivers/bus/pci/linux/pci_vfio.c
@@ -266,6 +266,13 @@ pci_vfio_setup_interrupts(struct rte_pci_device *dev, int vfio_dev_fd)
return -1;
}
+ /* Reallocate the efds and elist fields of intr_handle based
+ * on PCI device MSIX size.
+ */
+ if (rte_intr_event_list_update(dev->intr_handle,
+ irq.count))
+ return -1;
+
/* if this vector cannot be used with eventfd, fail if we explicitly
* specified interrupt type, otherwise continue */
if ((irq.flags & VFIO_IRQ_INFO_EVENTFD) == 0) {
diff --git a/lib/eal/common/eal_common_interrupts.c b/lib/eal/common/eal_common_interrupts.c
index 90e9c70ca3..1d7ab17bc3 100644
--- a/lib/eal/common/eal_common_interrupts.c
+++ b/lib/eal/common/eal_common_interrupts.c
@@ -21,6 +21,29 @@
} \
} while (0)
+struct rte_intr_handle {
+ RTE_STD_C11
+ union {
+ struct {
+ /** VFIO/UIO cfg device file descriptor */
+ int dev_fd;
+ int fd; /**< interrupt event file descriptor */
+ };
+ void *windows_handle; /**< device driver handle (Windows) */
+ };
+ bool mem_allocator;
+ enum rte_intr_handle_type type; /**< handle type */
+ uint32_t max_intr; /**< max interrupt requested */
+ uint32_t nb_efd; /**< number of available efd(event fd) */
+ uint8_t efd_counter_size; /**< size of efd counter, used for vdev */
+ uint16_t nb_intr;
+ /**< Max vector count, default RTE_MAX_RXTX_INTR_VEC_ID */
+ int *efds; /**< intr vectors/efds mapping */
+ struct rte_epoll_event *elist; /**< intr vector epoll event */
+ uint16_t vec_list_size;
+ int *intr_vec; /**< intr vector number array */
+};
+
struct rte_intr_handle *rte_intr_instance_alloc(void)
{
struct rte_intr_handle *intr_handle;
@@ -39,16 +62,51 @@ struct rte_intr_handle *rte_intr_instance_alloc(void)
return NULL;
}
+ if (mem_allocator)
+ intr_handle->efds = rte_zmalloc(NULL,
+ RTE_MAX_RXTX_INTR_VEC_ID *
+ sizeof(uint32_t), 0);
+ else
+ intr_handle->efds = calloc(RTE_MAX_RXTX_INTR_VEC_ID,
+ sizeof(uint32_t));
+ if (!intr_handle->efds) {
+ RTE_LOG(ERR, EAL, "Fail to allocate event fd list\n");
+ rte_errno = ENOMEM;
+ goto fail;
+ }
+
+ if (mem_allocator)
+ intr_handle->elist =
+ rte_zmalloc(NULL, RTE_MAX_RXTX_INTR_VEC_ID *
+ sizeof(struct rte_epoll_event), 0);
+ else
+ intr_handle->elist = calloc(RTE_MAX_RXTX_INTR_VEC_ID,
+ sizeof(struct rte_epoll_event));
+ if (!intr_handle->elist) {
+ RTE_LOG(ERR, EAL, "fail to allocate event fd list\n");
+ rte_errno = ENOMEM;
+ goto fail;
+ }
intr_handle->nb_intr = RTE_MAX_RXTX_INTR_VEC_ID;
intr_handle->mem_allocator = mem_allocator;
return intr_handle;
+fail:
+ if (intr_handle->mem_allocator) {
+ rte_free(intr_handle->efds);
+ rte_free(intr_handle);
+ } else {
+ free(intr_handle->efds);
+ free(intr_handle);
+ }
+ return NULL;
}
int rte_intr_instance_copy(struct rte_intr_handle *intr_handle,
const struct rte_intr_handle *src)
{
- uint16_t nb_intr;
+ struct rte_epoll_event *tmp_elist;
+ int *tmp_efds;
CHECK_VALID_INTR_HANDLE(intr_handle);
@@ -59,27 +117,121 @@ int rte_intr_instance_copy(struct rte_intr_handle *intr_handle,
}
intr_handle->fd = src->fd;
- intr_handle->vfio_dev_fd = src->vfio_dev_fd;
+ intr_handle->dev_fd = src->dev_fd;
intr_handle->type = src->type;
+ intr_handle->mem_allocator = src->mem_allocator;
intr_handle->max_intr = src->max_intr;
intr_handle->nb_efd = src->nb_efd;
intr_handle->efd_counter_size = src->efd_counter_size;
- nb_intr = RTE_MIN(src->nb_intr, intr_handle->nb_intr);
- memcpy(intr_handle->efds, src->efds, nb_intr);
- memcpy(intr_handle->elist, src->elist, nb_intr);
+ if (intr_handle->nb_intr != src->nb_intr) {
+ if (src->mem_allocator)
+ tmp_efds = rte_realloc(intr_handle->efds, src->nb_intr *
+ sizeof(uint32_t), 0);
+ else
+ tmp_efds = realloc(intr_handle->efds, src->nb_intr *
+ sizeof(uint32_t));
+ if (tmp_efds == NULL) {
+ RTE_LOG(ERR, EAL, "Failed to realloc the efds list");
+ rte_errno = ENOMEM;
+ goto fail;
+ }
+
+ if (src->mem_allocator)
+ tmp_elist = rte_realloc(intr_handle->elist,
+ src->nb_intr *
+ sizeof(struct rte_epoll_event),
+ 0);
+ else
+ tmp_elist = realloc(intr_handle->elist, src->nb_intr *
+ sizeof(struct rte_epoll_event));
+ if (tmp_elist == NULL) {
+ RTE_LOG(ERR, EAL, "Failed to realloc the event list");
+ rte_errno = ENOMEM;
+ goto up_efds;
+ }
+
+ intr_handle->efds = tmp_efds;
+ intr_handle->elist = tmp_elist;
+ intr_handle->nb_intr = src->nb_intr;
+ }
+
+ memcpy(intr_handle->efds, src->efds, src->nb_intr);
+ memcpy(intr_handle->elist, src->elist, src->nb_intr);
return 0;
+up_efds:
+ intr_handle->efds = tmp_efds;
fail:
return -rte_errno;
}
-void rte_intr_instance_free(struct rte_intr_handle *intr_handle)
+int rte_intr_event_list_update(struct rte_intr_handle *intr_handle,
+ int size)
{
+ struct rte_epoll_event *tmp_elist;
+ int *tmp_efds;
+
+ CHECK_VALID_INTR_HANDLE(intr_handle);
+
+ if (size == 0) {
+ RTE_LOG(ERR, EAL, "Size can't be zero\n");
+ rte_errno = EINVAL;
+ goto fail;
+ }
+
if (intr_handle->mem_allocator)
- rte_free(intr_handle);
+ tmp_efds = rte_realloc(intr_handle->efds, size *
+ sizeof(uint32_t), 0);
else
+ tmp_efds = realloc(intr_handle->efds, size *
+ sizeof(uint32_t));
+ if (tmp_efds == NULL) {
+ RTE_LOG(ERR, EAL, "Failed to realloc the efds list");
+ rte_errno = ENOMEM;
+ goto fail;
+ }
+
+ if (intr_handle->mem_allocator)
+ tmp_elist = rte_realloc(intr_handle->elist, size *
+ sizeof(struct rte_epoll_event),
+ 0);
+ else
+ tmp_elist = realloc(intr_handle->elist, size *
+ sizeof(struct rte_epoll_event));
+ if (tmp_elist == NULL) {
+ RTE_LOG(ERR, EAL, "Failed to realloc the event list");
+ rte_errno = ENOMEM;
+ goto up_efds;
+ }
+
+ intr_handle->efds = tmp_efds;
+ intr_handle->elist = tmp_elist;
+ intr_handle->nb_intr = size;
+
+ return 0;
+up_efds:
+ intr_handle->efds = tmp_efds;
+fail:
+ return -rte_errno;
+}
+
+
+void rte_intr_instance_free(struct rte_intr_handle *intr_handle)
+{
+ if (intr_handle->mem_allocator) {
+ if (intr_handle) {
+ rte_free(intr_handle->efds);
+ rte_free(intr_handle->elist);
+ }
+ rte_free(intr_handle);
+ } else {
+ if (intr_handle) {
+ free(intr_handle->efds);
+ free(intr_handle->elist);
+ }
free(intr_handle);
+ }
}
int rte_intr_fd_set(struct rte_intr_handle *intr_handle, int fd)
@@ -128,7 +280,7 @@ int rte_intr_dev_fd_set(struct rte_intr_handle *intr_handle, int fd)
{
CHECK_VALID_INTR_HANDLE(intr_handle);
- intr_handle->vfio_dev_fd = fd;
+ intr_handle->dev_fd = fd;
return 0;
fail:
@@ -139,7 +291,7 @@ int rte_intr_dev_fd_get(const struct rte_intr_handle *intr_handle)
{
CHECK_VALID_INTR_HANDLE(intr_handle);
- return intr_handle->vfio_dev_fd;
+ return intr_handle->dev_fd;
fail:
return -1;
}
@@ -229,6 +381,12 @@ int rte_intr_efds_index_get(const struct rte_intr_handle *intr_handle,
{
CHECK_VALID_INTR_HANDLE(intr_handle);
+ if (!intr_handle->efds) {
+ RTE_LOG(ERR, EAL, "Event fd list not allocated\n");
+ rte_errno = EFAULT;
+ goto fail;
+ }
+
if (index >= intr_handle->nb_intr) {
RTE_LOG(ERR, EAL, "Invalid size %d, max limit %d\n", index,
intr_handle->nb_intr);
@@ -246,6 +404,12 @@ int rte_intr_efds_index_set(struct rte_intr_handle *intr_handle,
{
CHECK_VALID_INTR_HANDLE(intr_handle);
+ if (!intr_handle->efds) {
+ RTE_LOG(ERR, EAL, "Event fd list not allocated\n");
+ rte_errno = EFAULT;
+ goto fail;
+ }
+
if (index >= intr_handle->nb_intr) {
RTE_LOG(ERR, EAL, "Invalid size %d, max limit %d\n", index,
intr_handle->nb_intr);
@@ -265,6 +429,12 @@ struct rte_epoll_event *rte_intr_elist_index_get(
{
CHECK_VALID_INTR_HANDLE(intr_handle);
+ if (!intr_handle->elist) {
+ RTE_LOG(ERR, EAL, "Event list not allocated\n");
+ rte_errno = ENOTSUP;
+ goto fail;
+ }
+
if (index >= intr_handle->nb_intr) {
RTE_LOG(ERR, EAL, "Invalid size %d, max limit %d\n", index,
intr_handle->nb_intr);
@@ -282,6 +452,12 @@ int rte_intr_elist_index_set(struct rte_intr_handle *intr_handle,
{
CHECK_VALID_INTR_HANDLE(intr_handle);
+ if (!intr_handle->elist) {
+ RTE_LOG(ERR, EAL, "Event list not allocated\n");
+ rte_errno = ENOTSUP;
+ goto fail;
+ }
+
if (index >= intr_handle->nb_intr) {
RTE_LOG(ERR, EAL, "Invalid size %d, max limit %d\n", index,
intr_handle->nb_intr);
diff --git a/lib/eal/include/meson.build b/lib/eal/include/meson.build
index 8e258607b8..86468d1a2b 100644
--- a/lib/eal/include/meson.build
+++ b/lib/eal/include/meson.build
@@ -49,7 +49,6 @@ headers += files(
'rte_version.h',
'rte_vfio.h',
)
-indirect_headers += files('rte_eal_interrupts.h')
# special case install the generic headers, since they go in a subdir
generic_headers = files(
diff --git a/lib/eal/include/rte_eal_interrupts.h b/lib/eal/include/rte_eal_interrupts.h
deleted file mode 100644
index 6764ba3f35..0000000000
--- a/lib/eal/include/rte_eal_interrupts.h
+++ /dev/null
@@ -1,72 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(c) 2010-2014 Intel Corporation
- */
-
-#ifndef _RTE_INTERRUPTS_H_
-#error "don't include this file directly, please include generic <rte_interrupts.h>"
-#endif
-
-/**
- * @file rte_eal_interrupts.h
- * @internal
- *
- * Contains function prototypes exposed by the EAL for interrupt handling by
- * drivers and other DPDK internal consumers.
- */
-
-#ifndef _RTE_EAL_INTERRUPTS_H_
-#define _RTE_EAL_INTERRUPTS_H_
-
-#define RTE_MAX_RXTX_INTR_VEC_ID 512
-#define RTE_INTR_VEC_ZERO_OFFSET 0
-#define RTE_INTR_VEC_RXTX_OFFSET 1
-
-/**
- * The interrupt source type, e.g. UIO, VFIO, ALARM etc.
- */
-enum rte_intr_handle_type {
- RTE_INTR_HANDLE_UNKNOWN = 0, /**< generic unknown handle */
- RTE_INTR_HANDLE_UIO, /**< uio device handle */
- RTE_INTR_HANDLE_UIO_INTX, /**< uio generic handle */
- RTE_INTR_HANDLE_VFIO_LEGACY, /**< vfio device handle (legacy) */
- RTE_INTR_HANDLE_VFIO_MSI, /**< vfio device handle (MSI) */
- RTE_INTR_HANDLE_VFIO_MSIX, /**< vfio device handle (MSIX) */
- RTE_INTR_HANDLE_ALARM, /**< alarm handle */
- RTE_INTR_HANDLE_EXT, /**< external handler */
- RTE_INTR_HANDLE_VDEV, /**< virtual device */
- RTE_INTR_HANDLE_DEV_EVENT, /**< device event handle */
- RTE_INTR_HANDLE_VFIO_REQ, /**< VFIO request handle */
- RTE_INTR_HANDLE_MAX /**< count of elements */
-};
-
-/** Handle for interrupts. */
-struct rte_intr_handle {
- RTE_STD_C11
- union {
- struct {
- RTE_STD_C11
- union {
- /** VFIO device file descriptor */
- int vfio_dev_fd;
- /** UIO cfg file desc for uio_pci_generic */
- int uio_cfg_fd;
- };
- int fd; /**< interrupt event file descriptor */
- };
- void *windows_handle; /**< device driver handle (Windows) */
- };
- bool mem_allocator;
- enum rte_intr_handle_type type; /**< handle type */
- uint32_t max_intr; /**< max interrupt requested */
- uint32_t nb_efd; /**< number of available efd(event fd) */
- uint8_t efd_counter_size; /**< size of efd counter, used for vdev */
- uint16_t nb_intr;
- /**< Max vector count, default RTE_MAX_RXTX_INTR_VEC_ID */
- int efds[RTE_MAX_RXTX_INTR_VEC_ID]; /**< intr vectors/efds mapping */
- struct rte_epoll_event elist[RTE_MAX_RXTX_INTR_VEC_ID];
- /**< intr vector epoll event */
- uint16_t vec_list_size;
- int *intr_vec; /**< intr vector number array */
-};
-
-#endif /* _RTE_EAL_INTERRUPTS_H_ */
diff --git a/lib/eal/include/rte_interrupts.h b/lib/eal/include/rte_interrupts.h
index 98edf774af..b577056ce1 100644
--- a/lib/eal/include/rte_interrupts.h
+++ b/lib/eal/include/rte_interrupts.h
@@ -25,7 +25,35 @@ extern "C" {
/** Interrupt handle */
struct rte_intr_handle;
-#include "rte_eal_interrupts.h"
+/** Interrupt instance allocation flags
+ * @see rte_intr_instance_alloc
+ */
+/** Allocate interrupt instance from traditional heap */
+#define RTE_INTR_ALLOC_TRAD_HEAP 0x00000000
+/** Allocate interrupt instance using DPDK memory management APIs */
+#define RTE_INTR_ALLOC_DPDK_ALLOCATOR 0x00000001
+
+#define RTE_MAX_RXTX_INTR_VEC_ID 512
+#define RTE_INTR_VEC_ZERO_OFFSET 0
+#define RTE_INTR_VEC_RXTX_OFFSET 1
+
+/**
+ * The interrupt source type, e.g. UIO, VFIO, ALARM etc.
+ */
+enum rte_intr_handle_type {
+ RTE_INTR_HANDLE_UNKNOWN = 0, /**< generic unknown handle */
+ RTE_INTR_HANDLE_UIO, /**< uio device handle */
+ RTE_INTR_HANDLE_UIO_INTX, /**< uio generic handle */
+ RTE_INTR_HANDLE_VFIO_LEGACY, /**< vfio device handle (legacy) */
+ RTE_INTR_HANDLE_VFIO_MSI, /**< vfio device handle (MSI) */
+ RTE_INTR_HANDLE_VFIO_MSIX, /**< vfio device handle (MSIX) */
+ RTE_INTR_HANDLE_ALARM, /**< alarm handle */
+ RTE_INTR_HANDLE_EXT, /**< external handler */
+ RTE_INTR_HANDLE_VDEV, /**< virtual device */
+ RTE_INTR_HANDLE_DEV_EVENT, /**< device event handle */
+ RTE_INTR_HANDLE_VFIO_REQ, /**< VFIO request handle */
+ RTE_INTR_HANDLE_MAX /**< count of elements */
+};
/** Function to be registered for the specific interrupt */
typedef void (*rte_intr_callback_fn)(void *cb_arg);
--
2.18.0
^ permalink raw reply [flat|nested] 152+ messages in thread
* [dpdk-dev] [PATCH v3 7/7] eal/alarm: introduce alarm fini routine
2021-10-18 19:37 ` [dpdk-dev] [PATCH v3 " Harman Kalra
` (5 preceding siblings ...)
2021-10-18 19:37 ` [dpdk-dev] [PATCH v3 6/7] eal/interrupts: make interrupt handle structure opaque Harman Kalra
@ 2021-10-18 19:37 ` Harman Kalra
6 siblings, 0 replies; 152+ messages in thread
From: Harman Kalra @ 2021-10-18 19:37 UTC (permalink / raw)
To: dev, Bruce Richardson
Cc: david.marchand, dmitry.kozliuk, mdr, thomas, Harman Kalra
Implementing an alarm cleanup routine in which the memory allocated
for the interrupt instance can be freed.
Signed-off-by: Harman Kalra <hkalra@marvell.com>
---
lib/eal/common/eal_private.h | 11 +++++++++++
lib/eal/freebsd/eal.c | 1 +
lib/eal/freebsd/eal_alarm.c | 7 +++++++
lib/eal/linux/eal.c | 1 +
lib/eal/linux/eal_alarm.c | 10 +++++++++-
5 files changed, 29 insertions(+), 1 deletion(-)
diff --git a/lib/eal/common/eal_private.h b/lib/eal/common/eal_private.h
index 86dab1f057..7fb9bc1324 100644
--- a/lib/eal/common/eal_private.h
+++ b/lib/eal/common/eal_private.h
@@ -163,6 +163,17 @@ int rte_eal_intr_init(void);
*/
int rte_eal_alarm_init(void);
+/**
+ * Free up the memory allocated for the alarm framework, i.e. the
+ * interrupt handle instance created by rte_eal_alarm_init().
+ *
+ * This function is private to EAL.
+ *
+ * @return
+ * None
+ */
+void rte_eal_alarm_fini(void);
+
/**
* Function is to check if the kernel module(like, vfio, vfio_iommu_type1,
* etc.) loaded.
diff --git a/lib/eal/freebsd/eal.c b/lib/eal/freebsd/eal.c
index 56a60f13e9..535ea687ca 100644
--- a/lib/eal/freebsd/eal.c
+++ b/lib/eal/freebsd/eal.c
@@ -977,6 +977,7 @@ rte_eal_cleanup(void)
rte_eal_memory_detach();
rte_trace_save();
eal_trace_fini();
+ rte_eal_alarm_fini();
eal_cleanup_config(internal_conf);
return 0;
}
diff --git a/lib/eal/freebsd/eal_alarm.c b/lib/eal/freebsd/eal_alarm.c
index cd971036cd..167384e79a 100644
--- a/lib/eal/freebsd/eal_alarm.c
+++ b/lib/eal/freebsd/eal_alarm.c
@@ -46,6 +46,13 @@ static rte_spinlock_t alarm_list_lk = RTE_SPINLOCK_INITIALIZER;
static struct rte_intr_handle *intr_handle;
static void eal_alarm_callback(void *arg);
+void
+rte_eal_alarm_fini(void)
+{
+ if (intr_handle)
+ rte_intr_instance_free(intr_handle);
+}
+
int
rte_eal_alarm_init(void)
{
diff --git a/lib/eal/linux/eal.c b/lib/eal/linux/eal.c
index 0d0fc66668..806158f297 100644
--- a/lib/eal/linux/eal.c
+++ b/lib/eal/linux/eal.c
@@ -1370,6 +1370,7 @@ rte_eal_cleanup(void)
rte_eal_memory_detach();
rte_trace_save();
eal_trace_fini();
+ rte_eal_alarm_fini();
eal_cleanup_config(internal_conf);
return 0;
}
diff --git a/lib/eal/linux/eal_alarm.c b/lib/eal/linux/eal_alarm.c
index cf8e2f2066..56f69d8e6d 100644
--- a/lib/eal/linux/eal_alarm.c
+++ b/lib/eal/linux/eal_alarm.c
@@ -58,6 +58,13 @@ static struct rte_intr_handle *intr_handle;
static int handler_registered = 0;
static void eal_alarm_callback(void *arg);
+void
+rte_eal_alarm_fini(void)
+{
+ if (intr_handle)
+ rte_intr_instance_free(intr_handle);
+}
+
int
rte_eal_alarm_init(void)
{
@@ -68,7 +75,8 @@ rte_eal_alarm_init(void)
goto error;
}
- rte_intr_type_set(intr_handle, RTE_INTR_HANDLE_ALARM);
+ if (rte_intr_type_set(intr_handle, RTE_INTR_HANDLE_ALARM))
+ goto error;
/* create a timerfd file descriptor */
if (rte_intr_fd_set(intr_handle,
--
2.18.0
^ permalink raw reply [flat|nested] 152+ messages in thread
* Re: [dpdk-dev] [PATCH v3 2/7] eal/interrupts: implement get set APIs
2021-10-18 19:37 ` [dpdk-dev] [PATCH v3 2/7] eal/interrupts: implement get set APIs Harman Kalra
@ 2021-10-18 22:07 ` Dmitry Kozlyuk
2021-10-19 8:50 ` [dpdk-dev] [EXT] " Harman Kalra
2021-10-18 22:56 ` [dpdk-dev] " Stephen Hemminger
1 sibling, 1 reply; 152+ messages in thread
From: Dmitry Kozlyuk @ 2021-10-18 22:07 UTC (permalink / raw)
To: Harman Kalra; +Cc: dev, Thomas Monjalon, Ray Kinsella, david.marchand
2021-10-19 01:07 (UTC+0530), Harman Kalra:
[...]
> +struct rte_intr_handle *rte_intr_instance_alloc(void)
> +{
> + struct rte_intr_handle *intr_handle;
> + bool mem_allocator;
This name is not very descriptive; what would "mem_allocator is false" mean?
How about "is_rte_memory"?
> +
> + /* Detect if DPDK malloc APIs are ready to be used. */
> + mem_allocator = rte_malloc_is_ready();
> + if (mem_allocator)
> + intr_handle = rte_zmalloc(NULL, sizeof(struct rte_intr_handle),
> + 0);
> + else
> + intr_handle = calloc(1, sizeof(struct rte_intr_handle));
> + if (!intr_handle) {
> + RTE_LOG(ERR, EAL, "Fail to allocate intr_handle\n");
> + rte_errno = ENOMEM;
> + return NULL;
> + }
> +
> + intr_handle->nb_intr = RTE_MAX_RXTX_INTR_VEC_ID;
> + intr_handle->mem_allocator = mem_allocator;
> +
> + return intr_handle;
> +}
> +
> +int rte_intr_instance_copy(struct rte_intr_handle *intr_handle,
> + const struct rte_intr_handle *src)
> +{
> + uint16_t nb_intr;
> +
> + CHECK_VALID_INTR_HANDLE(intr_handle);
> +
> + if (src == NULL) {
> + RTE_LOG(ERR, EAL, "Source interrupt instance unallocated\n");
> + rte_errno = EINVAL;
> + goto fail;
> + }
> +
> + intr_handle->fd = src->fd;
> + intr_handle->vfio_dev_fd = src->vfio_dev_fd;
> + intr_handle->type = src->type;
> + intr_handle->max_intr = src->max_intr;
> + intr_handle->nb_efd = src->nb_efd;
> + intr_handle->efd_counter_size = src->efd_counter_size;
> +
> + nb_intr = RTE_MIN(src->nb_intr, intr_handle->nb_intr);
Truncating copy is error-prone.
It should be either a reallocation (in the future) or an error (now).
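For instance, the "error (now)" variant could be a check like this (sketch
only, field names as in this patch):

	if (src->nb_intr > intr_handle->nb_intr) {
		RTE_LOG(ERR, EAL, "Source instance has more vectors than destination\n");
		rte_errno = EINVAL;
		goto fail;
	}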
> + memcpy(intr_handle->efds, src->efds, nb_intr);
> + memcpy(intr_handle->elist, src->elist, nb_intr);
> +
> + return 0;
> +fail:
> + return -rte_errno;
> +}
> +
> +void rte_intr_instance_free(struct rte_intr_handle *intr_handle)
> +{
> + if (intr_handle->mem_allocator)
This function should accept NULL and be a no-op in such case.
> + rte_free(intr_handle);
> + else
> + free(intr_handle);
> +}
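i.e. something along these lines (sketch only):

void rte_intr_instance_free(struct rte_intr_handle *intr_handle)
{
	if (intr_handle == NULL)
		return;
	if (intr_handle->mem_allocator)
		rte_free(intr_handle);
	else
		free(intr_handle);
}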
[...]
> +void *rte_intr_instance_windows_handle_get(struct rte_intr_handle *intr_handle)
> +{
> + CHECK_VALID_INTR_HANDLE(intr_handle);
> +
> + return intr_handle->windows_handle;
> +fail:
> + return NULL;
> +}
> +
> +int rte_intr_instance_windows_handle_set(struct rte_intr_handle *intr_handle,
> + void *windows_handle)
> +{
> + CHECK_VALID_INTR_HANDLE(intr_handle);
> +
> + if (!windows_handle) {
> + RTE_LOG(ERR, EAL, "Windows handle should not be NULL\n");
> + rte_errno = EINVAL;
> + goto fail;
> + }
Thanks for adding this API, but please remove the check.
It is possible that the API user will pass NULL to reset the state
(also NULL is not the only invalid value for a Windows handle).
There is no check for Unix FD, neither should be here.
> +
> + intr_handle->windows_handle = windows_handle;
> +
> + return 0;
> +fail:
> + return -rte_errno;
> +}
[...]
> @@ -79,191 +53,20 @@ struct rte_intr_handle {
> };
> int fd; /**< interrupt event file descriptor */
> };
> - void *handle; /**< device driver handle (Windows) */
> + void *windows_handle; /**< device driver handle (Windows) */
I guess Windows can be dropped from the comment since it's now in the name.
> };
> + bool mem_allocator;
> enum rte_intr_handle_type type; /**< handle type */
> uint32_t max_intr; /**< max interrupt requested */
> uint32_t nb_efd; /**< number of available efd(event fd) */
> uint8_t efd_counter_size; /**< size of efd counter, used for vdev */
> + uint16_t nb_intr;
> + /**< Max vector count, default RTE_MAX_RXTX_INTR_VEC_ID */
> int efds[RTE_MAX_RXTX_INTR_VEC_ID]; /**< intr vectors/efds mapping */
> struct rte_epoll_event elist[RTE_MAX_RXTX_INTR_VEC_ID];
> - /**< intr vector epoll event */
> + /**< intr vector epoll event */
> + uint16_t vec_list_size;
> int *intr_vec; /**< intr vector number array */
> };
>
[...]
> diff --git a/lib/eal/include/rte_interrupts.h b/lib/eal/include/rte_interrupts.h
> index cc3bf45d8c..98edf774af 100644
> --- a/lib/eal/include/rte_interrupts.h
> +++ b/lib/eal/include/rte_interrupts.h
[...]
> +/**
> + * @warning
> + * @b EXPERIMENTAL: this API may change without prior notice
> + *
> + * It allocates memory for interrupt instance. API takes flag as an argument
Not anymore. Please update the description.
> + * which define from where memory should be allocated i.e. using DPDK memory
> + * management library APIs or normal heap allocation.
> + * Default memory allocation for event fds and event list array is done which
> + * can be realloced later as per the requirement.
> + *
> + * This function should be called from application or driver, before calling any
> + * of the interrupt APIs.
> + *
> + * @param flags
> + * Memory allocation from DPDK allocator or normal allocation
> + *
> + * @return
> + * - On success, address of first interrupt handle.
> + * - On failure, NULL.
> + */
> +__rte_experimental
> +struct rte_intr_handle *
> +rte_intr_instance_alloc(void);
> +
> +/**
> + * @warning
> + * @b EXPERIMENTAL: this API may change without prior notice
> + *
> + * This API is used to free the memory allocated for event fds. event lists
> + * and interrupt handle array.
It's simpler and more future-proof to just say "interrupt handle resources"
instead of enumerating them.
> + *
> + * @param intr_handle
> + * Base address of interrupt handle array.
It's not an array anymore.
[...]
> +/**
> + * @internal
> + * This API is used to set the event list array index with the given elist
"Event list array" sound like an array of lists,
while it is really an array of scalar elements.
"Event data array"? TBH, I don't know how it's usually named in Unices.
> + * instance.
> + *
> + * @param intr_handle
> + * pointer to the interrupt handle.
> + * @param index
> + * elist array index to be set
> + * @param elist
> + * event list instance of struct rte_epoll_event
> + *
> + * @return
> + * - On success, zero.
> + * - On failure, a negative value.
> + */
> +__rte_internal
> +int
> +rte_intr_elist_index_set(struct rte_intr_handle *intr_handle, int index,
> + struct rte_epoll_event elist);
> +
> +/**
> + * @internal
> + * Returns the address of elist instance of event list array at a given index.
> + *
> + * @param intr_handle
> + * pointer to the interrupt handle.
> + * @param index
> + * elist array index to be returned
> + *
> + * @return
> + * - On success, elist
> + * - On failure, a negative value.
> + */
> +__rte_internal
> +struct rte_epoll_event *
> +rte_intr_elist_index_get(struct rte_intr_handle *intr_handle, int index);
> +
> +/**
> + * @internal
> + * Allocates the memory of interrupt vector list array, with size defining the
> + * no of elements required in the array.
Typo: "no" -> "number".
[...]
> +
> +/**
> + * @internal
> + * This API returns the windows handle of the given interrupt instance.
Typo: "windows" -> "Windows" here and below.
> + *
> + * @param intr_handle
> + * pointer to the interrupt handle.
> + *
> + * @return
> + * - On success, windows handle.
> + * - On failure, NULL.
> + */
> +__rte_internal
> +void *
> +rte_intr_instance_windows_handle_get(struct rte_intr_handle *intr_handle);
> +
> +/**
> + * @internal
> + * This API set the windows handle for the given interrupt instance.
> + *
> + * @param intr_handle
> + * pointer to the interrupt handle.
> + * @param windows_handle
> + * windows handle to be set.
> + *
> + * @return
> + * - On success, zero
> + * - On failure, a negative value.
> + */
> +__rte_internal
> +int
> +rte_intr_instance_windows_handle_set(struct rte_intr_handle *intr_handle,
> + void *windows_handle);
> +
> #ifdef __cplusplus
> }
> #endif
> diff --git a/lib/eal/version.map b/lib/eal/version.map
> index 38f7de83e1..0ef77c3b40 100644
> --- a/lib/eal/version.map
> +++ b/lib/eal/version.map
> @@ -109,18 +109,10 @@ DPDK_22 {
> rte_hexdump;
> rte_hypervisor_get;
> rte_hypervisor_get_name; # WINDOWS_NO_EXPORT
> - rte_intr_allow_others;
> rte_intr_callback_register;
> rte_intr_callback_unregister;
> - rte_intr_cap_multiple;
> - rte_intr_disable;
> - rte_intr_dp_is_en;
> - rte_intr_efd_disable;
> - rte_intr_efd_enable;
> rte_intr_enable;
> - rte_intr_free_epoll_fd;
> - rte_intr_rx_ctl;
> - rte_intr_tls_epfd;
> + rte_intr_disable;
> rte_keepalive_create; # WINDOWS_NO_EXPORT
> rte_keepalive_dispatch_pings; # WINDOWS_NO_EXPORT
> rte_keepalive_mark_alive; # WINDOWS_NO_EXPORT
> @@ -420,6 +412,14 @@ EXPERIMENTAL {
>
> # added in 21.08
> rte_power_monitor_multi; # WINDOWS_NO_EXPORT
> +
> + # added in 21.11
> + rte_intr_fd_set; # WINDOWS_NO_EXPORT
> + rte_intr_fd_get; # WINDOWS_NO_EXPORT
OK, these are not feasible on Windows.
> + rte_intr_type_set; # WINDOWS_NO_EXPORT
> + rte_intr_type_get; # WINDOWS_NO_EXPORT
> + rte_intr_instance_alloc; # WINDOWS_NO_EXPORT
> + rte_intr_instance_free; # WINDOWS_NO_EXPORT
No, these *are* needed on Windows.
> };
>
> INTERNAL {
> @@ -430,4 +430,33 @@ INTERNAL {
> rte_mem_map;
> rte_mem_page_size;
> rte_mem_unmap;
> + rte_intr_cap_multiple;
> + rte_intr_dp_is_en;
> + rte_intr_efd_disable;
> + rte_intr_efd_enable;
> + rte_intr_free_epoll_fd;
> + rte_intr_rx_ctl;
> + rte_intr_allow_others;
> + rte_intr_tls_epfd;
> + rte_intr_dev_fd_set; # WINDOWS_NO_EXPORT
> + rte_intr_dev_fd_get; # WINDOWS_NO_EXPORT
OK.
> + rte_intr_instance_copy; # WINDOWS_NO_EXPORT
> + rte_intr_event_list_update; # WINDOWS_NO_EXPORT
> + rte_intr_max_intr_set; # WINDOWS_NO_EXPORT
> + rte_intr_max_intr_get; # WINDOWS_NO_EXPORT
These are needed on Windows.
> + rte_intr_nb_efd_set; # WINDOWS_NO_EXPORT
> + rte_intr_nb_efd_get; # WINDOWS_NO_EXPORT
> + rte_intr_nb_intr_get; # WINDOWS_NO_EXPORT
> + rte_intr_efds_index_set; # WINDOWS_NO_EXPORT
> + rte_intr_efds_index_get; # WINDOWS_NO_EXPORT
OK.
> + rte_intr_elist_index_set; # WINDOWS_NO_EXPORT
> + rte_intr_elist_index_get; # WINDOWS_NO_EXPORT
These are needed on Windows.
> + rte_intr_efd_counter_size_set; # WINDOWS_NO_EXPORT
> + rte_intr_efd_counter_size_get; # WINDOWS_NO_EXPORT
OK.
> + rte_intr_vec_list_alloc; # WINDOWS_NO_EXPORT
> + rte_intr_vec_list_index_set; # WINDOWS_NO_EXPORT
> + rte_intr_vec_list_index_get; # WINDOWS_NO_EXPORT
> + rte_intr_vec_list_free; # WINDOWS_NO_EXPORT
These are needed on Windows.
> + rte_intr_instance_windows_handle_get;
> + rte_intr_instance_windows_handle_set;
> };
^ permalink raw reply [flat|nested] 152+ messages in thread
* Re: [dpdk-dev] [PATCH v3 2/7] eal/interrupts: implement get set APIs
2021-10-18 19:37 ` [dpdk-dev] [PATCH v3 2/7] eal/interrupts: implement get set APIs Harman Kalra
2021-10-18 22:07 ` Dmitry Kozlyuk
@ 2021-10-18 22:56 ` Stephen Hemminger
2021-10-19 8:32 ` [dpdk-dev] [EXT] " Harman Kalra
1 sibling, 1 reply; 152+ messages in thread
From: Stephen Hemminger @ 2021-10-18 22:56 UTC (permalink / raw)
To: Harman Kalra
Cc: dev, Thomas Monjalon, Ray Kinsella, david.marchand, dmitry.kozliuk
On Tue, 19 Oct 2021 01:07:02 +0530
Harman Kalra <hkalra@marvell.com> wrote:
> + /* Detect if DPDK malloc APIs are ready to be used. */
> + mem_allocator = rte_malloc_is_ready();
> + if (mem_allocator)
> + intr_handle = rte_zmalloc(NULL, sizeof(struct rte_intr_handle),
> + 0);
> + else
> + intr_handle = calloc(1, sizeof(struct rte_intr_handle));
This is a problematic way to do this.
The reason to use rte_malloc vs malloc should be determined by usage.
If the pointer will be shared between primary/secondary process then
it has to be in hugepages (ie rte_malloc). If it is not shared then
then use regular malloc.
But what you have done is created a method which will be
a latent bug for anyone using primary/secondary process.
Either:
intr_handle is not allowed to be used in secondary.
Then always use malloc().
Or.
intr_handle can be used by both primary and secondary.
Then always use rte_malloc().
Any code path that allocates intr_handle before pool is
ready is broken.
^ permalink raw reply [flat|nested] 152+ messages in thread
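For illustration, the rule Stephen describes could be written down like this (a sketch with a hypothetical struct name, not code from this series):

#include <stdlib.h>
#include <rte_malloc.h>

struct foo { int fd; };	/* hypothetical object */

static void alloc_example(void)
{
	/* Process-local object: plain malloc()/free() is fine. */
	struct foo *local = malloc(sizeof(*local));

	/* Object shared with a secondary process: it must live in DPDK
	 * shared (hugepage-backed) memory, so allocate it with
	 * rte_malloc()/rte_zmalloc() and release it with rte_free().
	 */
	struct foo *shared = rte_zmalloc("foo", sizeof(*shared), 0);

	free(local);
	rte_free(shared);
}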
* Re: [dpdk-dev] [EXT] Re: [PATCH v3 2/7] eal/interrupts: implement get set APIs
2021-10-18 22:56 ` [dpdk-dev] " Stephen Hemminger
@ 2021-10-19 8:32 ` Harman Kalra
2021-10-19 15:58 ` Thomas Monjalon
2021-10-20 15:30 ` Dmitry Kozlyuk
0 siblings, 2 replies; 152+ messages in thread
From: Harman Kalra @ 2021-10-19 8:32 UTC (permalink / raw)
To: Stephen Hemminger, Thomas Monjalon, david.marchand, dmitry.kozliuk
Cc: dev, Ray Kinsella
> -----Original Message-----
> From: Stephen Hemminger <stephen@networkplumber.org>
> Sent: Tuesday, October 19, 2021 4:27 AM
> To: Harman Kalra <hkalra@marvell.com>
> Cc: dev@dpdk.org; Thomas Monjalon <thomas@monjalon.net>; Ray Kinsella
> <mdr@ashroe.eu>; david.marchand@redhat.com;
> dmitry.kozliuk@gmail.com
> Subject: [EXT] Re: [dpdk-dev] [PATCH v3 2/7] eal/interrupts: implement get
> set APIs
>
> External Email
>
> ----------------------------------------------------------------------
> On Tue, 19 Oct 2021 01:07:02 +0530
> Harman Kalra <hkalra@marvell.com> wrote:
>
> > + /* Detect if DPDK malloc APIs are ready to be used. */
> > + mem_allocator = rte_malloc_is_ready();
> > + if (mem_allocator)
> > + intr_handle = rte_zmalloc(NULL, sizeof(struct
> rte_intr_handle),
> > + 0);
> > + else
> > + intr_handle = calloc(1, sizeof(struct rte_intr_handle));
>
> This is a problematic way to do this.
> The reason to use rte_malloc vs malloc should be determined by usage.
>
> If the pointer will be shared between primary/secondary processes then it has
> to be in hugepages (ie rte_malloc). If it is not shared, then use regular
> malloc.
>
> But what you have done is create a method which will be a latent bug for
> anyone using primary/secondary processes.
>
> Either:
> intr_handle is not allowed to be used in secondary.
> Then always use malloc().
> Or.
> intr_handle can be used by both primary and secondary.
> Then always use rte_malloc().
> Any code path that allocates intr_handle before pool is
> ready is broken.
Hi Stephen,
Till v2, I implemented this API in a way where the user of the API could choose
whether the intr handle is allocated using malloc or rte_malloc, by passing
a flag argument to the rte_intr_instance_alloc API. The user of the API knows best
whether the intr handle is to be shared with a secondary process or not.
But after some discussions and suggestions from the community we decided
to drop that flag argument and auto-detect whether the rte_malloc APIs are
ready to be used, and thereafter make all further allocations via rte_malloc.
Currently the alarm subsystem (or any driver doing allocation in a constructor) gets
its interrupt instance allocated using glibc malloc, because rte_malloc* is
not ready by rte_eal_alarm_init(), while all further consumers get instances
allocated via rte_malloc.
I think this should not cause any issue in the primary/secondary model as all interrupt
instance pointers will be shared. In fact, to avoid any surprises with primary/secondary
not working, we thought of making all allocations via rte_malloc.
David, Thomas, Dmitry, please add if I missed anything.
Can we please conclude on this series' APIs, as the API freeze deadline (rc1) is very near?
Thanks
Harman
^ permalink raw reply [flat|nested] 152+ messages in thread
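For illustration, the usage model converged on in this thread looks roughly like this from a driver's point of view (a sketch with hypothetical helper and callback names; the interrupt type and error handling are chosen arbitrarily):

#include <rte_errno.h>
#include <rte_interrupts.h>

static void hypo_irq_cb(void *arg)	/* hypothetical callback */
{
	RTE_SET_USED(arg);
}

static int hypo_setup_irq(int fd)	/* hypothetical helper */
{
	struct rte_intr_handle *handle;

	/* The allocator (glibc malloc vs rte_malloc) is chosen internally. */
	handle = rte_intr_instance_alloc();
	if (handle == NULL)
		return -rte_errno;

	if (rte_intr_fd_set(handle, fd) != 0 ||
	    rte_intr_type_set(handle, RTE_INTR_HANDLE_EXT) != 0 ||
	    rte_intr_callback_register(handle, hypo_irq_cb, NULL) != 0) {
		rte_intr_instance_free(handle);
		return -rte_errno;
	}

	/* The handle would normally be kept in the device private data
	 * and released with rte_intr_instance_free() on cleanup.
	 */
	return 0;
}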
* Re: [dpdk-dev] [EXT] Re: [PATCH v3 2/7] eal/interrupts: implement get set APIs
2021-10-18 22:07 ` Dmitry Kozlyuk
@ 2021-10-19 8:50 ` Harman Kalra
2021-10-19 18:44 ` Harman Kalra
0 siblings, 1 reply; 152+ messages in thread
From: Harman Kalra @ 2021-10-19 8:50 UTC (permalink / raw)
To: Dmitry Kozlyuk; +Cc: dev, Thomas Monjalon, Ray Kinsella, david.marchand
Hi Dmitry,
Thanks for reviewing. Please find my responses inline.
> -----Original Message-----
> From: Dmitry Kozlyuk <dmitry.kozliuk@gmail.com>
> Sent: Tuesday, October 19, 2021 3:38 AM
> To: Harman Kalra <hkalra@marvell.com>
> Cc: dev@dpdk.org; Thomas Monjalon <thomas@monjalon.net>; Ray Kinsella
> <mdr@ashroe.eu>; david.marchand@redhat.com
> Subject: [EXT] Re: [PATCH v3 2/7] eal/interrupts: implement get set APIs
>
> External Email
>
> ----------------------------------------------------------------------
> 2021-10-19 01:07 (UTC+0530), Harman Kalra:
> [...]
> > +struct rte_intr_handle *rte_intr_instance_alloc(void) {
> > + struct rte_intr_handle *intr_handle;
> > + bool mem_allocator;
>
> This name is not very descriptive; what would "mem_allocator is false"
> mean?
> How about "is_rte_memory"?
Sure, will make it "is_rte_memory"
>
> > +
> > + /* Detect if DPDK malloc APIs are ready to be used. */
> > + mem_allocator = rte_malloc_is_ready();
> > + if (mem_allocator)
> > + intr_handle = rte_zmalloc(NULL, sizeof(struct
> rte_intr_handle),
> > + 0);
> > + else
> > + intr_handle = calloc(1, sizeof(struct rte_intr_handle));
> > + if (!intr_handle) {
> > + RTE_LOG(ERR, EAL, "Fail to allocate intr_handle\n");
> > + rte_errno = ENOMEM;
> > + return NULL;
> > + }
> > +
> > + intr_handle->nb_intr = RTE_MAX_RXTX_INTR_VEC_ID;
> > + intr_handle->mem_allocator = mem_allocator;
> > +
> > + return intr_handle;
> > +}
> > +
> > +int rte_intr_instance_copy(struct rte_intr_handle *intr_handle,
> > + const struct rte_intr_handle *src) {
> > + uint16_t nb_intr;
> > +
> > + CHECK_VALID_INTR_HANDLE(intr_handle);
> > +
> > + if (src == NULL) {
> > + RTE_LOG(ERR, EAL, "Source interrupt instance
> unallocated\n");
> > + rte_errno = EINVAL;
> > + goto fail;
> > + }
> > +
> > + intr_handle->fd = src->fd;
> > + intr_handle->vfio_dev_fd = src->vfio_dev_fd;
> > + intr_handle->type = src->type;
> > + intr_handle->max_intr = src->max_intr;
> > + intr_handle->nb_efd = src->nb_efd;
> > + intr_handle->efd_counter_size = src->efd_counter_size;
> > +
> > + nb_intr = RTE_MIN(src->nb_intr, intr_handle->nb_intr);
>
> Truncating copy is error-prone.
> It should be either a reallocation (in the future) or an error (now).
Actually in patch 6, I have made a lot of changes to this API wrt nb_intr,
where the efds/elist arrays are reallocated based on src->nb_intr and
intr_handle->nb_intr is made equal to src->nb_intr. I think those changes can be
moved from patch 6 to patch 2.
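(For reference, the "error (now)" variant being suggested could look roughly like this inside rte_intr_instance_copy(); a sketch, not the posted code, refusing the copy when the destination has fewer vectors:)

	if (src->nb_intr > intr_handle->nb_intr) {
		RTE_LOG(ERR, EAL, "Source has more vectors (%u) than destination (%u)\n",
			src->nb_intr, intr_handle->nb_intr);
		rte_errno = ERANGE;
		goto fail;
	}
	/* Copy whole elements, not bytes. */
	memcpy(intr_handle->efds, src->efds, src->nb_intr * sizeof(*src->efds));
	memcpy(intr_handle->elist, src->elist, src->nb_intr * sizeof(*src->elist));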
>
> > + memcpy(intr_handle->efds, src->efds, nb_intr);
> > + memcpy(intr_handle->elist, src->elist, nb_intr);
> > +
> > + return 0;
> > +fail:
> > + return -rte_errno;
> > +}
> > +
> > +void rte_intr_instance_free(struct rte_intr_handle *intr_handle) {
> > + if (intr_handle->mem_allocator)
>
> This function should accept NULL and be a no-op in such case.
Ack.
>
> > + rte_free(intr_handle);
> > + else
> > + free(intr_handle);
> > +}
>
> [...]
> > +void *rte_intr_instance_windows_handle_get(struct rte_intr_handle
> > +*intr_handle) {
> > + CHECK_VALID_INTR_HANDLE(intr_handle);
> > +
> > + return intr_handle->windows_handle;
> > +fail:
> > + return NULL;
> > +}
> > +
> > +int rte_intr_instance_windows_handle_set(struct rte_intr_handle
> *intr_handle,
> > + void *windows_handle)
> > +{
> > + CHECK_VALID_INTR_HANDLE(intr_handle);
> > +
> > + if (!windows_handle) {
> > + RTE_LOG(ERR, EAL, "Windows handle should not be
> NULL\n");
> > + rte_errno = EINVAL;
> > + goto fail;
> > + }
>
> Thanks for adding this API, but please remove the check.
> It is possible that the API user will pass NULL to reset the state (also NULL is
> not the only invalid value for a Windows handle).
> There is no such check for the Unix FD, and there should be none here.
Sure, will remove the check.
>
> > +
> > + intr_handle->windows_handle = windows_handle;
> > +
> > + return 0;
> > +fail:
> > + return -rte_errno;
> > +}
>
> [...]
> > @@ -79,191 +53,20 @@ struct rte_intr_handle {
> > };
> > int fd; /**< interrupt event file descriptor */
> > };
> > - void *handle; /**< device driver handle (Windows) */
> > + void *windows_handle; /**< device driver handle (Windows)
> */
>
> I guess Windows can be dropped from the comment since it's now in the
> name.
Ack.
>
> > };
> > + bool mem_allocator;
> > enum rte_intr_handle_type type; /**< handle type */
> > uint32_t max_intr; /**< max interrupt requested */
> > uint32_t nb_efd; /**< number of available efd(event fd) */
> > uint8_t efd_counter_size; /**< size of efd counter, used for vdev
> */
> > + uint16_t nb_intr;
> > + /**< Max vector count, default
> RTE_MAX_RXTX_INTR_VEC_ID */
> > int efds[RTE_MAX_RXTX_INTR_VEC_ID]; /**< intr vectors/efds
> mapping */
> > struct rte_epoll_event elist[RTE_MAX_RXTX_INTR_VEC_ID];
> > - /**< intr vector epoll event */
> > + /**< intr vector epoll event */
> > + uint16_t vec_list_size;
> > int *intr_vec; /**< intr vector number array */
> > };
> >
>
> [...]
> > diff --git a/lib/eal/include/rte_interrupts.h
> > b/lib/eal/include/rte_interrupts.h
> > index cc3bf45d8c..98edf774af 100644
> > --- a/lib/eal/include/rte_interrupts.h
> > +++ b/lib/eal/include/rte_interrupts.h
> [...]
> > +/**
> > + * @warning
> > + * @b EXPERIMENTAL: this API may change without prior notice
> > + *
> > + * It allocates memory for interrupt instance. API takes flag as an
> > +argument
>
> Not anymore. Please update the description.
Ack.
>
> > + * which define from where memory should be allocated i.e. using DPDK
> > +memory
> > + * management library APIs or normal heap allocation.
> > + * Default memory allocation for event fds and event list array is
> > +done which
> > + * can be realloced later as per the requirement.
> > + *
> > + * This function should be called from application or driver, before
> > +calling any
> > + * of the interrupt APIs.
> > + *
> > + * @param flags
> > + * Memory allocation from DPDK allocator or normal allocation
> > + *
> > + * @return
> > + * - On success, address of first interrupt handle.
> > + * - On failure, NULL.
> > + */
> > +__rte_experimental
> > +struct rte_intr_handle *
> > +rte_intr_instance_alloc(void);
> > +
> > +/**
> > + * @warning
> > + * @b EXPERIMENTAL: this API may change without prior notice
> > + *
> > + * This API is used to free the memory allocated for event fds. event
> > +lists
> > + * and interrupt handle array.
>
> It's simpler and more future-proof to just say "interrupt handle resources"
> instead of enumerating them.
Sure, will reword it.
>
> > + *
> > + * @param intr_handle
> > + * Base address of interrupt handle array.
>
> It's not an array anymore.
Ack.
>
> [...]
> > +/**
> > + * @internal
> > + * This API is used to set the event list array index with the given
> > +elist
>
> "Event list array" sound like an array of lists, while it is really an array of
> scalar elements.
> "Event data array"? TBH, I don't know how it's usually named in Unices.
>
> > + * instance.
> > + *
> > + * @param intr_handle
> > + * pointer to the interrupt handle.
> > + * @param index
> > + * elist array index to be set
> > + * @param elist
> > + * event list instance of struct rte_epoll_event
> > + *
> > + * @return
> > + * - On success, zero.
> > + * - On failure, a negative value.
> > + */
> > +__rte_internal
> > +int
> > +rte_intr_elist_index_set(struct rte_intr_handle *intr_handle, int index,
> > + struct rte_epoll_event elist);
> > +
> > +/**
> > + * @internal
> > + * Returns the address of elist instance of event list array at a given index.
> > + *
> > + * @param intr_handle
> > + * pointer to the interrupt handle.
> > + * @param index
> > + * elist array index to be returned
> > + *
> > + * @return
> > + * - On success, elist
> > + * - On failure, a negative value.
> > + */
> > +__rte_internal
> > +struct rte_epoll_event *
> > +rte_intr_elist_index_get(struct rte_intr_handle *intr_handle, int
> > +index);
> > +
> > +/**
> > + * @internal
> > + * Allocates the memory of interrupt vector list array, with size
> > +defining the
> > + * no of elements required in the array.
>
> Typo: "no" -> "number".
Ack.
>
> [...]
> > +
> > +/**
> > + * @internal
> > + * This API returns the windows handle of the given interrupt instance.
>
> Typo: "windows" -> "Windows" here and below.
>
> > + *
> > + * @param intr_handle
> > + * pointer to the interrupt handle.
> > + *
> > + * @return
> > + * - On success, windows handle.
> > + * - On failure, NULL.
> > + */
> > +__rte_internal
> > +void *
> > +rte_intr_instance_windows_handle_get(struct rte_intr_handle
> > +*intr_handle);
> > +
> > +/**
> > + * @internal
> > + * This API set the windows handle for the given interrupt instance.
> > + *
> > + * @param intr_handle
> > + * pointer to the interrupt handle.
> > + * @param windows_handle
> > + * windows handle to be set.
> > + *
> > + * @return
> > + * - On success, zero
> > + * - On failure, a negative value.
> > + */
> > +__rte_internal
> > +int
> > +rte_intr_instance_windows_handle_set(struct rte_intr_handle
> *intr_handle,
> > + void *windows_handle);
> > +
> > #ifdef __cplusplus
> > }
> > #endif
> > diff --git a/lib/eal/version.map b/lib/eal/version.map index
> > 38f7de83e1..0ef77c3b40 100644
> > --- a/lib/eal/version.map
> > +++ b/lib/eal/version.map
> > @@ -109,18 +109,10 @@ DPDK_22 {
> > rte_hexdump;
> > rte_hypervisor_get;
> > rte_hypervisor_get_name; # WINDOWS_NO_EXPORT
> > - rte_intr_allow_others;
> > rte_intr_callback_register;
> > rte_intr_callback_unregister;
> > - rte_intr_cap_multiple;
> > - rte_intr_disable;
> > - rte_intr_dp_is_en;
> > - rte_intr_efd_disable;
> > - rte_intr_efd_enable;
> > rte_intr_enable;
> > - rte_intr_free_epoll_fd;
> > - rte_intr_rx_ctl;
> > - rte_intr_tls_epfd;
> > + rte_intr_disable;
> > rte_keepalive_create; # WINDOWS_NO_EXPORT
> > rte_keepalive_dispatch_pings; # WINDOWS_NO_EXPORT
> > rte_keepalive_mark_alive; # WINDOWS_NO_EXPORT @@ -420,6
> +412,14 @@
> > EXPERIMENTAL {
> >
> > # added in 21.08
> > rte_power_monitor_multi; # WINDOWS_NO_EXPORT
> > +
> > + # added in 21.11
> > + rte_intr_fd_set; # WINDOWS_NO_EXPORT
> > + rte_intr_fd_get; # WINDOWS_NO_EXPORT
>
> OK, these are not feasible on Windows.
Ack.
>
> > + rte_intr_type_set; # WINDOWS_NO_EXPORT
> > + rte_intr_type_get; # WINDOWS_NO_EXPORT
> > + rte_intr_instance_alloc; # WINDOWS_NO_EXPORT
> > + rte_intr_instance_free; # WINDOWS_NO_EXPORT
>
> No, these *are* needed on Windows.
Ack.
>
> > };
> >
> > INTERNAL {
> > @@ -430,4 +430,33 @@ INTERNAL {
> > rte_mem_map;
> > rte_mem_page_size;
> > rte_mem_unmap;
> > + rte_intr_cap_multiple;
> > + rte_intr_dp_is_en;
> > + rte_intr_efd_disable;
> > + rte_intr_efd_enable;
> > + rte_intr_free_epoll_fd;
> > + rte_intr_rx_ctl;
> > + rte_intr_allow_others;
> > + rte_intr_tls_epfd;
> > + rte_intr_dev_fd_set; # WINDOWS_NO_EXPORT
> > + rte_intr_dev_fd_get; # WINDOWS_NO_EXPORT
>
> OK.
>
> > + rte_intr_instance_copy; # WINDOWS_NO_EXPORT
> > + rte_intr_event_list_update; # WINDOWS_NO_EXPORT
> > + rte_intr_max_intr_set; # WINDOWS_NO_EXPORT
> > + rte_intr_max_intr_get; # WINDOWS_NO_EXPORT
>
> These are needed on Windows.
Ack.
>
> > + rte_intr_nb_efd_set; # WINDOWS_NO_EXPORT
> > + rte_intr_nb_efd_get; # WINDOWS_NO_EXPORT
> > + rte_intr_nb_intr_get; # WINDOWS_NO_EXPORT
> > + rte_intr_efds_index_set; # WINDOWS_NO_EXPORT
> > + rte_intr_efds_index_get; # WINDOWS_NO_EXPORT
>
> OK.
>
> > + rte_intr_elist_index_set; # WINDOWS_NO_EXPORT
> > + rte_intr_elist_index_get; # WINDOWS_NO_EXPORT
>
> These are needed on Windows.
Ack.
>
> > + rte_intr_efd_counter_size_set; # WINDOWS_NO_EXPORT
> > + rte_intr_efd_counter_size_get; # WINDOWS_NO_EXPORT
>
> OK.
>
> > + rte_intr_vec_list_alloc; # WINDOWS_NO_EXPORT
> > + rte_intr_vec_list_index_set; # WINDOWS_NO_EXPORT
> > + rte_intr_vec_list_index_get; # WINDOWS_NO_EXPORT
> > + rte_intr_vec_list_free; # WINDOWS_NO_EXPORT
>
> These are needed on Windows.
Ack.
>
> > + rte_intr_instance_windows_handle_get;
> > + rte_intr_instance_windows_handle_set;
> > };
^ permalink raw reply [flat|nested] 152+ messages in thread
* Re: [dpdk-dev] [PATCH v3 1/7] malloc: introduce malloc is ready API
2021-10-18 19:37 ` [dpdk-dev] [PATCH v3 1/7] malloc: introduce malloc is ready API Harman Kalra
@ 2021-10-19 15:53 ` Thomas Monjalon
0 siblings, 0 replies; 152+ messages in thread
From: Thomas Monjalon @ 2021-10-19 15:53 UTC (permalink / raw)
To: Harman Kalra; +Cc: dev, Anatoly Burakov, david.marchand, dmitry.kozliuk, mdr
18/10/2021 21:37, Harman Kalra:
> @@ -1328,6 +1330,7 @@ rte_eal_malloc_heap_init(void)
> {
> struct rte_mem_config *mcfg = rte_eal_get_configuration()->mem_config;
> unsigned int i;
> + int ret;
> const struct internal_config *internal_conf =
> eal_get_internal_configuration();
>
> @@ -1369,5 +1372,16 @@ rte_eal_malloc_heap_init(void)
> return 0;
>
> /* add all IOVA-contiguous areas to the heap */
> - return rte_memseg_contig_walk(malloc_add_seg, NULL);
> + ret = rte_memseg_contig_walk(malloc_add_seg, NULL);
> +
> + if (!ret)
Style: It should be "if (ret == 0)" because ret is not a bool.
> + malloc_ready = true;
> +
> + return ret;
> +}
> +
> +bool
> +rte_malloc_is_ready(void)
> +{
> + return malloc_ready == true;
> }
> --- a/lib/eal/common/malloc_heap.h
> +++ b/lib/eal/common/malloc_heap.h
> @@ -96,4 +96,7 @@ malloc_socket_to_heap_id(unsigned int socket_id);
> int
> rte_eal_malloc_heap_init(void);
>
Please insert a comment here to document what we can expect.
> +bool
> +rte_malloc_is_ready(void);
^ permalink raw reply [flat|nested] 152+ messages in thread
* Re: [dpdk-dev] [EXT] Re: [PATCH v3 2/7] eal/interrupts: implement get set APIs
2021-10-19 8:32 ` [dpdk-dev] [EXT] " Harman Kalra
@ 2021-10-19 15:58 ` Thomas Monjalon
2021-10-20 15:30 ` Dmitry Kozlyuk
1 sibling, 0 replies; 152+ messages in thread
From: Thomas Monjalon @ 2021-10-19 15:58 UTC (permalink / raw)
To: Stephen Hemminger, Harman Kalra
Cc: david.marchand, dmitry.kozliuk, dev, Ray Kinsella
19/10/2021 10:32, Harman Kalra:
> From: Stephen Hemminger <stephen@networkplumber.org>
> > On Tue, 19 Oct 2021 01:07:02 +0530
> > Harman Kalra <hkalra@marvell.com> wrote:
> > > + /* Detect if DPDK malloc APIs are ready to be used. */
> > > + mem_allocator = rte_malloc_is_ready();
> > > + if (mem_allocator)
> > > + intr_handle = rte_zmalloc(NULL, sizeof(struct
> > rte_intr_handle),
> > > + 0);
> > > + else
> > > + intr_handle = calloc(1, sizeof(struct rte_intr_handle));
> >
> > This is a problematic way to do this.
> > The reason to use rte_malloc vs malloc should be determined by usage.
> >
> > If the pointer will be shared between primary/secondary processes then it has
> > to be in hugepages (ie rte_malloc). If it is not shared, then use regular
> > malloc.
> >
> > But what you have done is create a method which will be a latent bug for
> > anyone using primary/secondary processes.
> >
> > Either:
> > intr_handle is not allowed to be used in secondary.
> > Then always use malloc().
> > Or.
> > intr_handle can be used by both primary and secondary.
> > Then always use rte_malloc().
> > Any code path that allocates intr_handle before pool is
> > ready is broken.
>
> Hi Stephen,
>
> Till v2, I implemented this API in a way where the user of the API could choose
> whether the intr handle is allocated using malloc or rte_malloc, by passing
> a flag argument to the rte_intr_instance_alloc API. The user of the API knows best
> whether the intr handle is to be shared with a secondary process or not.
Yes the caller should know, but it makes usage more difficult.
Using rte_malloc always is simpler.
> But after some discussions and suggestions from the community we decided
> to drop that flag argument and auto-detect whether the rte_malloc APIs are
> ready to be used, and thereafter make all further allocations via rte_malloc.
> Currently the alarm subsystem (or any driver doing allocation in a constructor) gets
> its interrupt instance allocated using glibc malloc, because rte_malloc* is
> not ready by rte_eal_alarm_init(), while all further consumers get instances
> allocated via rte_malloc.
Yes the general case is to allocate after rte_malloc is ready.
Anyway a constructor should not allocate complicated things.
> I think this should not cause any issue in the primary/secondary model as all interrupt
> instance pointers will be shared. In fact, to avoid any surprises with primary/secondary
> not working, we thought of making all allocations via rte_malloc.
Yes
> David, Thomas, Dmitry, please add if I missed anything.
I understand Stephen's concern but I think this choice is a good compromise.
Ideally we should avoid doing real stuff in constructors.
> Can we please conclude on this series' APIs, as the API freeze deadline (rc1) is very near?
I vote for keeping this design.
^ permalink raw reply [flat|nested] 152+ messages in thread
* [dpdk-dev] [PATCH v4 0/7] make rte_intr_handle internal
2021-08-26 14:57 [dpdk-dev] [RFC 0/7] make rte_intr_handle internal Harman Kalra
` (10 preceding siblings ...)
2021-10-18 19:37 ` [dpdk-dev] [PATCH v3 " Harman Kalra
@ 2021-10-19 18:35 ` Harman Kalra
2021-10-19 18:35 ` [dpdk-dev] [PATCH v4 1/7] malloc: introduce malloc is ready API Harman Kalra
` (6 more replies)
2021-10-22 20:49 ` [dpdk-dev] [PATCH v5 0/6] make rte_intr_handle internal Harman Kalra
12 siblings, 7 replies; 152+ messages in thread
From: Harman Kalra @ 2021-10-19 18:35 UTC (permalink / raw)
To: dev; +Cc: david.marchand, dmitry.kozliuk, mdr, thomas, Harman Kalra
Moving struct rte_intr_handle as an internal structure to
avoid any ABI breakages in future. Since this structure defines
some static arrays and changing respective macros breaks the ABI.
Eg:
Currently RTE_MAX_RXTX_INTR_VEC_ID imposes a limit of maximum 512
MSI-X interrupts that can be defined for a PCI device, while PCI
specification allows maximum 2048 MSI-X interrupts that can be used.
If some PCI device requires more than 512 vectors, either change the
RTE_MAX_RXTX_INTR_VEC_ID limit or dynamically allocate based on
PCI device MSI-X size at probe time. Either way it's an ABI breakage.
Change already included in 21.11 ABI improvement spreadsheet (item 42):
https://urldefense.proofpoint.com/v2/url?u=https-3A__docs.google.com_s
preadsheets_d_1betlC000ua5SsSiJIcC54mCCCJnW6voH5Dqv9UxeyfE_edit-23gid-
3D0&d=DwICaQ&c=nKjWec2b6R0mOyPaz7xtfQ&r=5ESHPj7V-7JdkxT_Z_SU6RrS37ys4U
XudBQ_rrS5LRo&m=7dl3OmXU7QHMmWYB6V1hYJtq1cUkjfhXUwze2Si_48c&s=lh6DEGhR
Bg1shODpAy3RQk-H-0uQx5icRfUBf9dtCp4&e=
This series makes struct rte_intr_handle totally opaque to the outside
world by wrapping it inside a .c file and providing get set wrapper APIs
to read or manipulate its fields. Any changes to be made to any of the
fields should be done via these get set APIs.
Introduced a new eal_common_interrupts.c where all these APIs are defined
and also hides struct rte_intr_handle definition.
Details on each patch of the series:
Patch 1: malloc: introduce malloc is ready API
This patch introduces a new API which tells if the DPDK memory
subsystem is initialized and the rte_malloc* APIs are ready to be
used. If rte_malloc* is set up, memory for the interrupt instance
is allocated using rte_malloc, else using traditional heap APIs.
Patch 2: eal/interrupts: implement get set APIs
This patch provides prototypes and implementation of all the new
get set APIs. Alloc APIs are implemented to allocate memory for
interrupt handle instance. Currently most of the drivers define the
interrupt handle instance as static, but now it can't be static as the
size of rte_intr_handle is unknown to the drivers. Drivers are
expected to allocate interrupt instances during initialization
and free these instances during cleanup phase.
This patch also rearranges the headers related to interrupt
framework. Epoll related definitions prototypes are moved into a
new header i.e. rte_epoll.h and APIs defined in rte_eal_interrupts.h
which were driver specific are moved to rte_interrupts.h (as it was
anyway accessible and used outside the DPDK library). Later in the series
rte_eal_interrupts.h is removed.
Patch 3: eal/interrupts: avoid direct access to interrupt handle
Modifying the interrupt framework for linux and freebsd to use these
get set alloc APIs as per requirement and avoid accessing the fields
directly.
Patch 4: test/interrupt: apply get set interrupt handle APIs
Updating interrupt test suite to use interrupt handle APIs.
Patch 5: drivers: remove direct access to interrupt handle fields
Modifying all the drivers and libraries which are currently directly
accessing the interrupt handle fields. Drivers are expected to
allocate the interrupt instance, use the get/set APIs with the allocated
interrupt handle and free it on cleanup.
Patch 6: eal/interrupts: make interrupt handle structure opaque
In this patch rte_eal_interrupts.h is removed, the struct rte_intr_handle
definition is moved to a .c file to make it completely opaque. As part of
interrupt handle allocation, arrays like efds and elist (which are currently
static) are dynamically allocated with default size
(RTE_MAX_RXTX_INTR_VEC_ID). Later these arrays can be reallocated as per
device requirement using the new API rte_intr_handle_event_list_update().
E.g., on PCI device probing the MSI-X size can be queried and these arrays can
be reallocated accordingly.
Patch 7: eal/alarm: introduce alarm fini routine
Introducing an alarm fini routine, so that the memory allocated for the alarm
interrupt instance can be freed in alarm fini.
Testing performed:
1. Validated the series by running interrupts and alarm test suite.
2. Validated l3fwd power functionality with octeontx2 and i40e Intel cards,
where interrupts are expected on packet arrival.
v1:
* Fixed freebsd compilation failure
* Fixed seg fault in case of memif
v2:
* Merged the prototype and implementation patch to 1.
* Restricting allocation of single interrupt instance.
* Removed base APIs, as they were exposing internally
allocated memory information.
* Fixed some memory leak issues.
* Marked some library specific APIs as internal.
v3:
* Removed flag from instance alloc API, rather auto detect
if memory should be allocated using glibc malloc APIs or
rte_malloc*
* Added APIs for get/set windows handle.
* Defined macros for repeated checks.
v4:
* Rectified some typo in the APIs documentation.
* Better names for some internal variables.
Harman Kalra (7):
malloc: introduce malloc is ready API
eal/interrupts: implement get set APIs
eal/interrupts: avoid direct access to interrupt handle
test/interrupt: apply get set interrupt handle APIs
drivers: remove direct access to interrupt handle
eal/interrupts: make interrupt handle structure opaque
eal/alarm: introduce alarm fini routine
MAINTAINERS | 1 +
app/test/test_interrupts.c | 162 +++--
drivers/baseband/acc100/rte_acc100_pmd.c | 18 +-
.../fpga_5gnr_fec/rte_fpga_5gnr_fec.c | 21 +-
drivers/baseband/fpga_lte_fec/fpga_lte_fec.c | 21 +-
drivers/bus/auxiliary/auxiliary_common.c | 2 +
drivers/bus/auxiliary/linux/auxiliary.c | 9 +
drivers/bus/auxiliary/rte_bus_auxiliary.h | 2 +-
drivers/bus/dpaa/dpaa_bus.c | 26 +-
drivers/bus/dpaa/rte_dpaa_bus.h | 2 +-
drivers/bus/fslmc/fslmc_bus.c | 15 +-
drivers/bus/fslmc/fslmc_vfio.c | 32 +-
drivers/bus/fslmc/portal/dpaa2_hw_dpio.c | 19 +-
drivers/bus/fslmc/portal/dpaa2_hw_pvt.h | 2 +-
drivers/bus/fslmc/rte_fslmc.h | 2 +-
drivers/bus/ifpga/ifpga_bus.c | 14 +-
drivers/bus/ifpga/rte_bus_ifpga.h | 2 +-
drivers/bus/pci/bsd/pci.c | 21 +-
drivers/bus/pci/linux/pci.c | 4 +-
drivers/bus/pci/linux/pci_uio.c | 73 +-
drivers/bus/pci/linux/pci_vfio.c | 115 +++-
drivers/bus/pci/pci_common.c | 27 +-
drivers/bus/pci/pci_common_uio.c | 21 +-
drivers/bus/pci/rte_bus_pci.h | 4 +-
drivers/bus/vmbus/linux/vmbus_bus.c | 5 +
drivers/bus/vmbus/linux/vmbus_uio.c | 37 +-
drivers/bus/vmbus/rte_bus_vmbus.h | 2 +-
drivers/bus/vmbus/vmbus_common_uio.c | 24 +-
drivers/common/cnxk/roc_cpt.c | 8 +-
drivers/common/cnxk/roc_dev.c | 14 +-
drivers/common/cnxk/roc_irq.c | 108 +--
drivers/common/cnxk/roc_nix_inl_dev_irq.c | 8 +-
drivers/common/cnxk/roc_nix_irq.c | 36 +-
drivers/common/cnxk/roc_npa.c | 2 +-
drivers/common/cnxk/roc_platform.h | 49 +-
drivers/common/cnxk/roc_sso.c | 4 +-
drivers/common/cnxk/roc_tim.c | 4 +-
drivers/common/octeontx2/otx2_dev.c | 14 +-
drivers/common/octeontx2/otx2_irq.c | 117 ++--
.../octeontx2/otx2_cryptodev_hw_access.c | 4 +-
drivers/event/octeontx2/otx2_evdev_irq.c | 12 +-
drivers/mempool/octeontx2/otx2_mempool.c | 2 +-
drivers/net/atlantic/atl_ethdev.c | 20 +-
drivers/net/avp/avp_ethdev.c | 8 +-
drivers/net/axgbe/axgbe_ethdev.c | 12 +-
drivers/net/axgbe/axgbe_mdio.c | 6 +-
drivers/net/bnx2x/bnx2x_ethdev.c | 10 +-
drivers/net/bnxt/bnxt_ethdev.c | 33 +-
drivers/net/bnxt/bnxt_irq.c | 4 +-
drivers/net/dpaa/dpaa_ethdev.c | 47 +-
drivers/net/dpaa2/dpaa2_ethdev.c | 10 +-
drivers/net/e1000/em_ethdev.c | 23 +-
drivers/net/e1000/igb_ethdev.c | 79 +--
drivers/net/ena/ena_ethdev.c | 35 +-
drivers/net/enic/enic_main.c | 26 +-
drivers/net/failsafe/failsafe.c | 22 +-
drivers/net/failsafe/failsafe_intr.c | 43 +-
drivers/net/failsafe/failsafe_ops.c | 21 +-
drivers/net/failsafe/failsafe_private.h | 2 +-
drivers/net/fm10k/fm10k_ethdev.c | 32 +-
drivers/net/hinic/hinic_pmd_ethdev.c | 10 +-
drivers/net/hns3/hns3_ethdev.c | 57 +-
drivers/net/hns3/hns3_ethdev_vf.c | 64 +-
drivers/net/hns3/hns3_rxtx.c | 2 +-
drivers/net/i40e/i40e_ethdev.c | 53 +-
drivers/net/iavf/iavf_ethdev.c | 42 +-
drivers/net/iavf/iavf_vchnl.c | 4 +-
drivers/net/ice/ice_dcf.c | 10 +-
drivers/net/ice/ice_dcf_ethdev.c | 21 +-
drivers/net/ice/ice_ethdev.c | 49 +-
drivers/net/igc/igc_ethdev.c | 45 +-
drivers/net/ionic/ionic_ethdev.c | 17 +-
drivers/net/ixgbe/ixgbe_ethdev.c | 66 +-
drivers/net/memif/memif_socket.c | 108 ++-
drivers/net/memif/memif_socket.h | 4 +-
drivers/net/memif/rte_eth_memif.c | 59 +-
drivers/net/memif/rte_eth_memif.h | 2 +-
drivers/net/mlx4/mlx4.c | 18 +-
drivers/net/mlx4/mlx4.h | 2 +-
drivers/net/mlx4/mlx4_intr.c | 47 +-
drivers/net/mlx5/linux/mlx5_os.c | 51 +-
drivers/net/mlx5/linux/mlx5_socket.c | 24 +-
drivers/net/mlx5/mlx5.h | 6 +-
drivers/net/mlx5/mlx5_rxq.c | 42 +-
drivers/net/mlx5/mlx5_trigger.c | 4 +-
drivers/net/mlx5/mlx5_txpp.c | 25 +-
drivers/net/netvsc/hn_ethdev.c | 4 +-
drivers/net/nfp/nfp_common.c | 34 +-
drivers/net/nfp/nfp_ethdev.c | 13 +-
drivers/net/nfp/nfp_ethdev_vf.c | 13 +-
drivers/net/ngbe/ngbe_ethdev.c | 29 +-
drivers/net/octeontx2/otx2_ethdev_irq.c | 35 +-
drivers/net/qede/qede_ethdev.c | 16 +-
drivers/net/sfc/sfc_intr.c | 30 +-
drivers/net/tap/rte_eth_tap.c | 35 +-
drivers/net/tap/rte_eth_tap.h | 2 +-
drivers/net/tap/tap_intr.c | 32 +-
drivers/net/thunderx/nicvf_ethdev.c | 11 +
drivers/net/thunderx/nicvf_struct.h | 2 +-
drivers/net/txgbe/txgbe_ethdev.c | 34 +-
drivers/net/txgbe/txgbe_ethdev_vf.c | 33 +-
drivers/net/vhost/rte_eth_vhost.c | 75 +-
drivers/net/virtio/virtio_ethdev.c | 21 +-
.../net/virtio/virtio_user/virtio_user_dev.c | 47 +-
drivers/net/vmxnet3/vmxnet3_ethdev.c | 43 +-
drivers/raw/ifpga/ifpga_rawdev.c | 61 +-
drivers/raw/ntb/ntb.c | 9 +-
.../regex/octeontx2/otx2_regexdev_hw_access.c | 4 +-
drivers/vdpa/ifc/ifcvf_vdpa.c | 5 +-
drivers/vdpa/mlx5/mlx5_vdpa.c | 9 +
drivers/vdpa/mlx5/mlx5_vdpa.h | 4 +-
drivers/vdpa/mlx5/mlx5_vdpa_event.c | 22 +-
drivers/vdpa/mlx5/mlx5_vdpa_virtq.c | 44 +-
lib/bbdev/rte_bbdev.c | 4 +-
lib/eal/common/eal_common_interrupts.c | 585 ++++++++++++++++
lib/eal/common/eal_private.h | 11 +
lib/eal/common/malloc_heap.c | 19 +-
lib/eal/common/malloc_heap.h | 3 +
lib/eal/common/meson.build | 1 +
lib/eal/freebsd/eal.c | 1 +
lib/eal/freebsd/eal_alarm.c | 52 +-
lib/eal/freebsd/eal_interrupts.c | 92 ++-
lib/eal/include/meson.build | 2 +-
lib/eal/include/rte_eal_interrupts.h | 269 --------
lib/eal/include/rte_eal_trace.h | 24 +-
lib/eal/include/rte_epoll.h | 118 ++++
lib/eal/include/rte_interrupts.h | 648 +++++++++++++++++-
lib/eal/linux/eal.c | 1 +
lib/eal/linux/eal_alarm.c | 37 +-
lib/eal/linux/eal_dev.c | 63 +-
lib/eal/linux/eal_interrupts.c | 287 +++++---
lib/eal/version.map | 47 +-
lib/ethdev/ethdev_pci.h | 2 +-
lib/ethdev/rte_ethdev.c | 14 +-
134 files changed, 3568 insertions(+), 1709 deletions(-)
create mode 100644 lib/eal/common/eal_common_interrupts.c
delete mode 100644 lib/eal/include/rte_eal_interrupts.h
create mode 100644 lib/eal/include/rte_epoll.h
--
2.18.0
^ permalink raw reply [flat|nested] 152+ messages in thread
* [dpdk-dev] [PATCH v4 1/7] malloc: introduce malloc is ready API
2021-10-19 18:35 ` [dpdk-dev] [PATCH v4 0/7] make rte_intr_handle internal Harman Kalra
@ 2021-10-19 18:35 ` Harman Kalra
2021-10-19 22:01 ` Dmitry Kozlyuk
2021-10-19 18:35 ` [dpdk-dev] [PATCH v4 2/7] eal/interrupts: implement get set APIs Harman Kalra
` (5 subsequent siblings)
6 siblings, 1 reply; 152+ messages in thread
From: Harman Kalra @ 2021-10-19 18:35 UTC (permalink / raw)
To: dev, Anatoly Burakov
Cc: david.marchand, dmitry.kozliuk, mdr, thomas, Harman Kalra
Implementing a new API to get the state of whether the DPDK memory
management APIs are initialized.
One of the use cases of this API is while allocating an interrupt
instance: if the malloc APIs are ready, memory for interrupt handles
should be allocated via the rte_malloc_* APIs, else glibc malloc APIs
are used. E.g. the alarm subsystem is initialised before the DPDK memory
infra is set up and it allocates an interrupt handle.
Signed-off-by: Harman Kalra <hkalra@marvell.com>
---
lib/eal/common/malloc_heap.c | 16 +++++++++++++++-
lib/eal/common/malloc_heap.h | 6 ++++++
2 files changed, 21 insertions(+), 1 deletion(-)
diff --git a/lib/eal/common/malloc_heap.c b/lib/eal/common/malloc_heap.c
index ee400f38ec..35affecf91 100644
--- a/lib/eal/common/malloc_heap.c
+++ b/lib/eal/common/malloc_heap.c
@@ -36,6 +36,8 @@
#define CONST_MAX(a, b) (a > b ? a : b) /* RTE_MAX is not a constant */
#define EXTERNAL_HEAP_MIN_SOCKET_ID (CONST_MAX((1 << 8), RTE_MAX_NUMA_NODES))
+static bool malloc_ready;
+
static unsigned
check_hugepage_sz(unsigned flags, uint64_t hugepage_sz)
{
@@ -1328,6 +1330,7 @@ rte_eal_malloc_heap_init(void)
{
struct rte_mem_config *mcfg = rte_eal_get_configuration()->mem_config;
unsigned int i;
+ int ret;
const struct internal_config *internal_conf =
eal_get_internal_configuration();
@@ -1369,5 +1372,16 @@ rte_eal_malloc_heap_init(void)
return 0;
/* add all IOVA-contiguous areas to the heap */
- return rte_memseg_contig_walk(malloc_add_seg, NULL);
+ ret = rte_memseg_contig_walk(malloc_add_seg, NULL);
+
+ if (ret == 0)
+ malloc_ready = true;
+
+ return ret;
+}
+
+bool
+rte_malloc_is_ready(void)
+{
+ return malloc_ready == true;
}
diff --git a/lib/eal/common/malloc_heap.h b/lib/eal/common/malloc_heap.h
index 3a6ec6ecf0..bc23944958 100644
--- a/lib/eal/common/malloc_heap.h
+++ b/lib/eal/common/malloc_heap.h
@@ -96,4 +96,10 @@ malloc_socket_to_heap_id(unsigned int socket_id);
int
rte_eal_malloc_heap_init(void);
+/* This API is used to know if DPDK memory subsystem is setup and its
+ * corresponding APIs are ready to be used.
+ */
+bool
+rte_malloc_is_ready(void);
+
#endif /* MALLOC_HEAP_H_ */
--
2.18.0
^ permalink raw reply [flat|nested] 152+ messages in thread
* [dpdk-dev] [PATCH v4 2/7] eal/interrupts: implement get set APIs
2021-10-19 18:35 ` [dpdk-dev] [PATCH v4 0/7] make rte_intr_handle internal Harman Kalra
2021-10-19 18:35 ` [dpdk-dev] [PATCH v4 1/7] malloc: introduce malloc is ready API Harman Kalra
@ 2021-10-19 18:35 ` Harman Kalra
2021-10-20 6:14 ` David Marchand
2021-10-20 16:15 ` Dmitry Kozlyuk
2021-10-19 18:35 ` [dpdk-dev] [PATCH v4 3/7] eal/interrupts: avoid direct access to interrupt handle Harman Kalra
` (4 subsequent siblings)
6 siblings, 2 replies; 152+ messages in thread
From: Harman Kalra @ 2021-10-19 18:35 UTC (permalink / raw)
To: dev, Thomas Monjalon, Harman Kalra, Ray Kinsella
Cc: david.marchand, dmitry.kozliuk
Prototype/Implement get set APIs for interrupt handle fields.
Users won't be able to access any of the interrupt handle fields
directly but should use these get/set APIs to access/manipulate
them.
The internal interrupt header, i.e. rte_eal_interrupts.h, is rearranged:
the APIs defined there are moved to rte_interrupts.h and epoll specific
definitions are moved to a new header rte_epoll.h.
Later in the series rte_eal_interrupts.h will be removed.
Signed-off-by: Harman Kalra <hkalra@marvell.com>
Acked-by: Ray Kinsella <mdr@ashroe.eu>
---
MAINTAINERS | 1 +
lib/eal/common/eal_common_interrupts.c | 406 ++++++++++++++++
lib/eal/common/meson.build | 1 +
lib/eal/include/meson.build | 1 +
lib/eal/include/rte_eal_interrupts.h | 209 +--------
lib/eal/include/rte_epoll.h | 118 +++++
lib/eal/include/rte_interrupts.h | 621 ++++++++++++++++++++++++-
lib/eal/version.map | 47 +-
8 files changed, 1190 insertions(+), 214 deletions(-)
create mode 100644 lib/eal/common/eal_common_interrupts.c
create mode 100644 lib/eal/include/rte_epoll.h
diff --git a/MAINTAINERS b/MAINTAINERS
index 8dceb6c0e0..3782e88742 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -210,6 +210,7 @@ F: app/test/test_memzone.c
Interrupt Subsystem
M: Harman Kalra <hkalra@marvell.com>
+F: lib/eal/include/rte_epoll.h
F: lib/eal/*/*interrupts.*
F: app/test/test_interrupts.c
diff --git a/lib/eal/common/eal_common_interrupts.c b/lib/eal/common/eal_common_interrupts.c
new file mode 100644
index 0000000000..434ad63a64
--- /dev/null
+++ b/lib/eal/common/eal_common_interrupts.c
@@ -0,0 +1,406 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2021 Marvell.
+ */
+
+#include <stdlib.h>
+#include <string.h>
+
+#include <rte_errno.h>
+#include <rte_log.h>
+#include <rte_malloc.h>
+#include <rte_interrupts.h>
+
+#include <malloc_heap.h>
+
+/* Macros to check for valid port */
+#define CHECK_VALID_INTR_HANDLE(intr_handle) do { \
+ if (intr_handle == NULL) { \
+ RTE_LOG(ERR, EAL, "Interrupt instance unallocated\n"); \
+ rte_errno = EINVAL; \
+ goto fail; \
+ } \
+} while (0)
+
+struct rte_intr_handle *rte_intr_instance_alloc(void)
+{
+ struct rte_intr_handle *intr_handle;
+ bool is_rte_memory;
+
+ /* Detect if DPDK malloc APIs are ready to be used. */
+ is_rte_memory = rte_malloc_is_ready();
+ if (is_rte_memory)
+ intr_handle = rte_zmalloc(NULL, sizeof(struct rte_intr_handle),
+ 0);
+ else
+ intr_handle = calloc(1, sizeof(struct rte_intr_handle));
+ if (!intr_handle) {
+ RTE_LOG(ERR, EAL, "Fail to allocate intr_handle\n");
+ rte_errno = ENOMEM;
+ return NULL;
+ }
+
+ intr_handle->nb_intr = RTE_MAX_RXTX_INTR_VEC_ID;
+ intr_handle->is_rte_memory = is_rte_memory;
+
+ return intr_handle;
+}
+
+int rte_intr_instance_copy(struct rte_intr_handle *intr_handle,
+ const struct rte_intr_handle *src)
+{
+ uint16_t nb_intr;
+
+ CHECK_VALID_INTR_HANDLE(intr_handle);
+
+ if (src == NULL) {
+ RTE_LOG(ERR, EAL, "Source interrupt instance unallocated\n");
+ rte_errno = EINVAL;
+ goto fail;
+ }
+
+ intr_handle->fd = src->fd;
+ intr_handle->vfio_dev_fd = src->vfio_dev_fd;
+ intr_handle->type = src->type;
+ intr_handle->max_intr = src->max_intr;
+ intr_handle->nb_efd = src->nb_efd;
+ intr_handle->efd_counter_size = src->efd_counter_size;
+
+ nb_intr = RTE_MIN(src->nb_intr, intr_handle->nb_intr);
+ memcpy(intr_handle->efds, src->efds, nb_intr);
+ memcpy(intr_handle->elist, src->elist, nb_intr);
+
+ return 0;
+fail:
+ return -rte_errno;
+}
+
+void rte_intr_instance_free(struct rte_intr_handle *intr_handle)
+{
+ if (intr_handle) {
+ if (intr_handle->is_rte_memory)
+ rte_free(intr_handle);
+ else
+ free(intr_handle);
+ }
+}
+
+int rte_intr_fd_set(struct rte_intr_handle *intr_handle, int fd)
+{
+ CHECK_VALID_INTR_HANDLE(intr_handle);
+
+ intr_handle->fd = fd;
+
+ return 0;
+fail:
+ return -rte_errno;
+}
+
+int rte_intr_fd_get(const struct rte_intr_handle *intr_handle)
+{
+ CHECK_VALID_INTR_HANDLE(intr_handle);
+
+ return intr_handle->fd;
+fail:
+ return -1;
+}
+
+int rte_intr_type_set(struct rte_intr_handle *intr_handle,
+ enum rte_intr_handle_type type)
+{
+ CHECK_VALID_INTR_HANDLE(intr_handle);
+
+ intr_handle->type = type;
+
+ return 0;
+fail:
+ return -rte_errno;
+}
+
+enum rte_intr_handle_type rte_intr_type_get(
+ const struct rte_intr_handle *intr_handle)
+{
+ CHECK_VALID_INTR_HANDLE(intr_handle);
+
+ return intr_handle->type;
+fail:
+ return RTE_INTR_HANDLE_UNKNOWN;
+}
+
+int rte_intr_dev_fd_set(struct rte_intr_handle *intr_handle, int fd)
+{
+ CHECK_VALID_INTR_HANDLE(intr_handle);
+
+ intr_handle->vfio_dev_fd = fd;
+
+ return 0;
+fail:
+ return -rte_errno;
+}
+
+int rte_intr_dev_fd_get(const struct rte_intr_handle *intr_handle)
+{
+ CHECK_VALID_INTR_HANDLE(intr_handle);
+
+ return intr_handle->vfio_dev_fd;
+fail:
+ return -1;
+}
+
+int rte_intr_max_intr_set(struct rte_intr_handle *intr_handle,
+ int max_intr)
+{
+ CHECK_VALID_INTR_HANDLE(intr_handle);
+
+ if (max_intr > intr_handle->nb_intr) {
+ RTE_LOG(ERR, EAL, "Maximum interrupt vector ID (%d) exceeds "
+ "the number of available events (%d)\n", max_intr,
+ intr_handle->nb_intr);
+ rte_errno = ERANGE;
+ goto fail;
+ }
+
+ intr_handle->max_intr = max_intr;
+
+ return 0;
+fail:
+ return -rte_errno;
+}
+
+int rte_intr_max_intr_get(const struct rte_intr_handle *intr_handle)
+{
+ CHECK_VALID_INTR_HANDLE(intr_handle);
+
+ return intr_handle->max_intr;
+fail:
+ return -rte_errno;
+}
+
+int rte_intr_nb_efd_set(struct rte_intr_handle *intr_handle,
+ int nb_efd)
+{
+ CHECK_VALID_INTR_HANDLE(intr_handle);
+
+ intr_handle->nb_efd = nb_efd;
+
+ return 0;
+fail:
+ return -rte_errno;
+}
+
+int rte_intr_nb_efd_get(const struct rte_intr_handle *intr_handle)
+{
+ CHECK_VALID_INTR_HANDLE(intr_handle);
+
+ return intr_handle->nb_efd;
+fail:
+ return -rte_errno;
+}
+
+int rte_intr_nb_intr_get(const struct rte_intr_handle *intr_handle)
+{
+ CHECK_VALID_INTR_HANDLE(intr_handle);
+
+ return intr_handle->nb_intr;
+fail:
+ return -rte_errno;
+}
+
+int rte_intr_efd_counter_size_set(struct rte_intr_handle *intr_handle,
+ uint8_t efd_counter_size)
+{
+ CHECK_VALID_INTR_HANDLE(intr_handle);
+
+ intr_handle->efd_counter_size = efd_counter_size;
+
+ return 0;
+fail:
+ return -rte_errno;
+}
+
+int rte_intr_efd_counter_size_get(const struct rte_intr_handle *intr_handle)
+{
+ CHECK_VALID_INTR_HANDLE(intr_handle);
+
+ return intr_handle->efd_counter_size;
+fail:
+ return -rte_errno;
+}
+
+int rte_intr_efds_index_get(const struct rte_intr_handle *intr_handle,
+ int index)
+{
+ CHECK_VALID_INTR_HANDLE(intr_handle);
+
+ if (index >= intr_handle->nb_intr) {
+ RTE_LOG(ERR, EAL, "Invalid size %d, max limit %d\n", index,
+ intr_handle->nb_intr);
+ rte_errno = EINVAL;
+ goto fail;
+ }
+
+ return intr_handle->efds[index];
+fail:
+ return -rte_errno;
+}
+
+int rte_intr_efds_index_set(struct rte_intr_handle *intr_handle,
+ int index, int fd)
+{
+ CHECK_VALID_INTR_HANDLE(intr_handle);
+
+ if (index >= intr_handle->nb_intr) {
+ RTE_LOG(ERR, EAL, "Invalid size %d, max limit %d\n", index,
+ intr_handle->nb_intr);
+ rte_errno = ERANGE;
+ goto fail;
+ }
+
+ intr_handle->efds[index] = fd;
+
+ return 0;
+fail:
+ return -rte_errno;
+}
+
+struct rte_epoll_event *rte_intr_elist_index_get(
+ struct rte_intr_handle *intr_handle, int index)
+{
+ CHECK_VALID_INTR_HANDLE(intr_handle);
+
+ if (index >= intr_handle->nb_intr) {
+ RTE_LOG(ERR, EAL, "Invalid size %d, max limit %d\n", index,
+ intr_handle->nb_intr);
+ rte_errno = ERANGE;
+ goto fail;
+ }
+
+ return &intr_handle->elist[index];
+fail:
+ return NULL;
+}
+
+int rte_intr_elist_index_set(struct rte_intr_handle *intr_handle,
+ int index, struct rte_epoll_event elist)
+{
+ CHECK_VALID_INTR_HANDLE(intr_handle);
+
+ if (index >= intr_handle->nb_intr) {
+ RTE_LOG(ERR, EAL, "Invalid size %d, max limit %d\n", index,
+ intr_handle->nb_intr);
+ rte_errno = ERANGE;
+ goto fail;
+ }
+
+ intr_handle->elist[index] = elist;
+
+ return 0;
+fail:
+ return -rte_errno;
+}
+
+int rte_intr_vec_list_alloc(struct rte_intr_handle *intr_handle,
+ const char *name, int size)
+{
+ CHECK_VALID_INTR_HANDLE(intr_handle);
+
+ /* Vector list already allocated */
+ if (intr_handle->intr_vec)
+ return 0;
+
+ if (size > intr_handle->nb_intr) {
+ RTE_LOG(ERR, EAL, "Invalid size %d, max limit %d\n", size,
+ intr_handle->nb_intr);
+ rte_errno = ERANGE;
+ goto fail;
+ }
+
+ intr_handle->intr_vec = rte_zmalloc(name, size * sizeof(int), 0);
+ if (!intr_handle->intr_vec) {
+ RTE_LOG(ERR, EAL, "Failed to allocate %d intr_vec", size);
+ rte_errno = ENOMEM;
+ goto fail;
+ }
+
+ intr_handle->vec_list_size = size;
+
+ return 0;
+fail:
+ return -rte_errno;
+}
+
+int rte_intr_vec_list_index_get(const struct rte_intr_handle *intr_handle,
+ int index)
+{
+ CHECK_VALID_INTR_HANDLE(intr_handle);
+
+ if (!intr_handle->intr_vec) {
+ RTE_LOG(ERR, EAL, "Intr vector list not allocated\n");
+ rte_errno = ENOTSUP;
+ goto fail;
+ }
+
+ if (index > intr_handle->vec_list_size) {
+ RTE_LOG(ERR, EAL, "Index %d greater than vec list size %d\n",
+ index, intr_handle->vec_list_size);
+ rte_errno = ERANGE;
+ goto fail;
+ }
+
+ return intr_handle->intr_vec[index];
+fail:
+ return -rte_errno;
+}
+
+int rte_intr_vec_list_index_set(struct rte_intr_handle *intr_handle,
+ int index, int vec)
+{
+ CHECK_VALID_INTR_HANDLE(intr_handle);
+
+ if (!intr_handle->intr_vec) {
+ RTE_LOG(ERR, EAL, "Intr vector list not allocated\n");
+ rte_errno = ENOTSUP;
+ goto fail;
+ }
+
+ if (index > intr_handle->vec_list_size) {
+ RTE_LOG(ERR, EAL, "Index %d greater than vec list size %d\n",
+ index, intr_handle->vec_list_size);
+ rte_errno = ERANGE;
+ goto fail;
+ }
+
+ intr_handle->intr_vec[index] = vec;
+
+ return 0;
+fail:
+ return -rte_errno;
+}
+
+void rte_intr_vec_list_free(struct rte_intr_handle *intr_handle)
+{
+ if (intr_handle) {
+ rte_free(intr_handle->intr_vec);
+ intr_handle->intr_vec = NULL;
+ }
+}
+
+void *rte_intr_instance_windows_handle_get(struct rte_intr_handle *intr_handle)
+{
+ CHECK_VALID_INTR_HANDLE(intr_handle);
+
+ return intr_handle->windows_handle;
+fail:
+ return NULL;
+}
+
+int rte_intr_instance_windows_handle_set(struct rte_intr_handle *intr_handle,
+ void *windows_handle)
+{
+ CHECK_VALID_INTR_HANDLE(intr_handle);
+
+ intr_handle->windows_handle = windows_handle;
+
+ return 0;
+fail:
+ return -rte_errno;
+}
diff --git a/lib/eal/common/meson.build b/lib/eal/common/meson.build
index 6d01b0f072..917758cc65 100644
--- a/lib/eal/common/meson.build
+++ b/lib/eal/common/meson.build
@@ -15,6 +15,7 @@ sources += files(
'eal_common_errno.c',
'eal_common_fbarray.c',
'eal_common_hexdump.c',
+ 'eal_common_interrupts.c',
'eal_common_launch.c',
'eal_common_lcore.c',
'eal_common_log.c',
diff --git a/lib/eal/include/meson.build b/lib/eal/include/meson.build
index 88a9eba12f..8e258607b8 100644
--- a/lib/eal/include/meson.build
+++ b/lib/eal/include/meson.build
@@ -19,6 +19,7 @@ headers += files(
'rte_eal_memconfig.h',
'rte_eal_trace.h',
'rte_errno.h',
+ 'rte_epoll.h',
'rte_fbarray.h',
'rte_hexdump.h',
'rte_hypervisor.h',
diff --git a/lib/eal/include/rte_eal_interrupts.h b/lib/eal/include/rte_eal_interrupts.h
index 00bcc19b6d..cbec1dfd99 100644
--- a/lib/eal/include/rte_eal_interrupts.h
+++ b/lib/eal/include/rte_eal_interrupts.h
@@ -39,32 +39,6 @@ enum rte_intr_handle_type {
RTE_INTR_HANDLE_MAX /**< count of elements */
};
-#define RTE_INTR_EVENT_ADD 1UL
-#define RTE_INTR_EVENT_DEL 2UL
-
-typedef void (*rte_intr_event_cb_t)(int fd, void *arg);
-
-struct rte_epoll_data {
- uint32_t event; /**< event type */
- void *data; /**< User data */
- rte_intr_event_cb_t cb_fun; /**< IN: callback fun */
- void *cb_arg; /**< IN: callback arg */
-};
-
-enum {
- RTE_EPOLL_INVALID = 0,
- RTE_EPOLL_VALID,
- RTE_EPOLL_EXEC,
-};
-
-/** interrupt epoll event obj, taken by epoll_event.ptr */
-struct rte_epoll_event {
- uint32_t status; /**< OUT: event status */
- int fd; /**< OUT: event fd */
- int epfd; /**< OUT: epoll instance the ev associated with */
- struct rte_epoll_data epdata;
-};
-
/** Handle for interrupts. */
struct rte_intr_handle {
RTE_STD_C11
@@ -79,191 +53,20 @@ struct rte_intr_handle {
};
int fd; /**< interrupt event file descriptor */
};
- void *handle; /**< device driver handle (Windows) */
+ void *windows_handle; /**< device driver handle */
};
+ bool is_rte_memory;
enum rte_intr_handle_type type; /**< handle type */
uint32_t max_intr; /**< max interrupt requested */
uint32_t nb_efd; /**< number of available efd(event fd) */
uint8_t efd_counter_size; /**< size of efd counter, used for vdev */
+ uint16_t nb_intr;
+ /**< Max vector count, default RTE_MAX_RXTX_INTR_VEC_ID */
int efds[RTE_MAX_RXTX_INTR_VEC_ID]; /**< intr vectors/efds mapping */
struct rte_epoll_event elist[RTE_MAX_RXTX_INTR_VEC_ID];
- /**< intr vector epoll event */
+ /**< intr vector epoll event */
+ uint16_t vec_list_size;
int *intr_vec; /**< intr vector number array */
};
-#define RTE_EPOLL_PER_THREAD -1 /**< to hint using per thread epfd */
-
-/**
- * It waits for events on the epoll instance.
- * Retries if signal received.
- *
- * @param epfd
- * Epoll instance fd on which the caller wait for events.
- * @param events
- * Memory area contains the events that will be available for the caller.
- * @param maxevents
- * Up to maxevents are returned, must greater than zero.
- * @param timeout
- * Specifying a timeout of -1 causes a block indefinitely.
- * Specifying a timeout equal to zero cause to return immediately.
- * @return
- * - On success, returns the number of available event.
- * - On failure, a negative value.
- */
-int
-rte_epoll_wait(int epfd, struct rte_epoll_event *events,
- int maxevents, int timeout);
-
-/**
- * It waits for events on the epoll instance.
- * Does not retry if signal received.
- *
- * @param epfd
- * Epoll instance fd on which the caller wait for events.
- * @param events
- * Memory area contains the events that will be available for the caller.
- * @param maxevents
- * Up to maxevents are returned, must greater than zero.
- * @param timeout
- * Specifying a timeout of -1 causes a block indefinitely.
- * Specifying a timeout equal to zero cause to return immediately.
- * @return
- * - On success, returns the number of available event.
- * - On failure, a negative value.
- */
-__rte_experimental
-int
-rte_epoll_wait_interruptible(int epfd, struct rte_epoll_event *events,
- int maxevents, int timeout);
-
-/**
- * It performs control operations on epoll instance referred by the epfd.
- * It requests that the operation op be performed for the target fd.
- *
- * @param epfd
- * Epoll instance fd on which the caller perform control operations.
- * @param op
- * The operation be performed for the target fd.
- * @param fd
- * The target fd on which the control ops perform.
- * @param event
- * Describes the object linked to the fd.
- * Note: The caller must take care the object deletion after CTL_DEL.
- * @return
- * - On success, zero.
- * - On failure, a negative value.
- */
-int
-rte_epoll_ctl(int epfd, int op, int fd,
- struct rte_epoll_event *event);
-
-/**
- * The function returns the per thread epoll instance.
- *
- * @return
- * epfd the epoll instance referred to.
- */
-int
-rte_intr_tls_epfd(void);
-
-/**
- * @param intr_handle
- * Pointer to the interrupt handle.
- * @param epfd
- * Epoll instance fd which the intr vector associated to.
- * @param op
- * The operation be performed for the vector.
- * Operation type of {ADD, DEL}.
- * @param vec
- * RX intr vector number added to the epoll instance wait list.
- * @param data
- * User raw data.
- * @return
- * - On success, zero.
- * - On failure, a negative value.
- */
-int
-rte_intr_rx_ctl(struct rte_intr_handle *intr_handle,
- int epfd, int op, unsigned int vec, void *data);
-
-/**
- * It deletes registered eventfds.
- *
- * @param intr_handle
- * Pointer to the interrupt handle.
- */
-void
-rte_intr_free_epoll_fd(struct rte_intr_handle *intr_handle);
-
-/**
- * It enables the packet I/O interrupt event if it's necessary.
- * It creates event fd for each interrupt vector when MSIX is used,
- * otherwise it multiplexes a single event fd.
- *
- * @param intr_handle
- * Pointer to the interrupt handle.
- * @param nb_efd
- * Number of interrupt vector trying to enable.
- * The value 0 is not allowed.
- * @return
- * - On success, zero.
- * - On failure, a negative value.
- */
-int
-rte_intr_efd_enable(struct rte_intr_handle *intr_handle, uint32_t nb_efd);
-
-/**
- * It disables the packet I/O interrupt event.
- * It deletes registered eventfds and closes the open fds.
- *
- * @param intr_handle
- * Pointer to the interrupt handle.
- */
-void
-rte_intr_efd_disable(struct rte_intr_handle *intr_handle);
-
-/**
- * The packet I/O interrupt on datapath is enabled or not.
- *
- * @param intr_handle
- * Pointer to the interrupt handle.
- */
-int
-rte_intr_dp_is_en(struct rte_intr_handle *intr_handle);
-
-/**
- * The interrupt handle instance allows other causes or not.
- * Other causes stand for any none packet I/O interrupts.
- *
- * @param intr_handle
- * Pointer to the interrupt handle.
- */
-int
-rte_intr_allow_others(struct rte_intr_handle *intr_handle);
-
-/**
- * The multiple interrupt vector capability of interrupt handle instance.
- * It returns zero if no multiple interrupt vector support.
- *
- * @param intr_handle
- * Pointer to the interrupt handle.
- */
-int
-rte_intr_cap_multiple(struct rte_intr_handle *intr_handle);
-
-/**
- * @warning
- * @b EXPERIMENTAL: this API may change without prior notice
- *
- * @internal
- * Check if currently executing in interrupt context
- *
- * @return
- * - non zero in case of interrupt context
- * - zero in case of process context
- */
-__rte_experimental
-int
-rte_thread_is_intr(void);
-
#endif /* _RTE_EAL_INTERRUPTS_H_ */
diff --git a/lib/eal/include/rte_epoll.h b/lib/eal/include/rte_epoll.h
new file mode 100644
index 0000000000..56b7b6bad6
--- /dev/null
+++ b/lib/eal/include/rte_epoll.h
@@ -0,0 +1,118 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2021 Marvell International Ltd.
+ */
+
+#ifndef __RTE_EPOLL_H__
+#define __RTE_EPOLL_H__
+
+/**
+ * @file
+ * The rte_epoll provides interfaces functions to add delete events,
+ * wait poll for an event.
+ */
+
+#include <stdint.h>
+
+#include <rte_compat.h>
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+#define RTE_INTR_EVENT_ADD 1UL
+#define RTE_INTR_EVENT_DEL 2UL
+
+typedef void (*rte_intr_event_cb_t)(int fd, void *arg);
+
+struct rte_epoll_data {
+ uint32_t event; /**< event type */
+ void *data; /**< User data */
+ rte_intr_event_cb_t cb_fun; /**< IN: callback fun */
+ void *cb_arg; /**< IN: callback arg */
+};
+
+enum {
+ RTE_EPOLL_INVALID = 0,
+ RTE_EPOLL_VALID,
+ RTE_EPOLL_EXEC,
+};
+
+/** interrupt epoll event obj, taken by epoll_event.ptr */
+struct rte_epoll_event {
+ uint32_t status; /**< OUT: event status */
+ int fd; /**< OUT: event fd */
+ int epfd; /**< OUT: epoll instance the ev associated with */
+ struct rte_epoll_data epdata;
+};
+
+#define RTE_EPOLL_PER_THREAD -1 /**< to hint using per thread epfd */
+
+/**
+ * It waits for events on the epoll instance.
+ * Retries if signal received.
+ *
+ * @param epfd
+ * Epoll instance fd on which the caller wait for events.
+ * @param events
+ * Memory area contains the events that will be available for the caller.
+ * @param maxevents
+ * Up to maxevents are returned, must greater than zero.
+ * @param timeout
+ * Specifying a timeout of -1 causes a block indefinitely.
+ * Specifying a timeout equal to zero cause to return immediately.
+ * @return
+ * - On success, returns the number of available event.
+ * - On failure, a negative value.
+ */
+int
+rte_epoll_wait(int epfd, struct rte_epoll_event *events,
+ int maxevents, int timeout);
+
+/**
+ * It waits for events on the epoll instance.
+ * Does not retry if signal received.
+ *
+ * @param epfd
+ * Epoll instance fd on which the caller wait for events.
+ * @param events
+ * Memory area contains the events that will be available for the caller.
+ * @param maxevents
+ * Up to maxevents are returned, must greater than zero.
+ * @param timeout
+ * Specifying a timeout of -1 causes a block indefinitely.
+ * Specifying a timeout equal to zero cause to return immediately.
+ * @return
+ * - On success, returns the number of available event.
+ * - On failure, a negative value.
+ */
+__rte_experimental
+int
+rte_epoll_wait_interruptible(int epfd, struct rte_epoll_event *events,
+ int maxevents, int timeout);
+
+/**
+ * It performs control operations on epoll instance referred by the epfd.
+ * It requests that the operation op be performed for the target fd.
+ *
+ * @param epfd
+ * Epoll instance fd on which the caller perform control operations.
+ * @param op
+ * The operation be performed for the target fd.
+ * @param fd
+ * The target fd on which the control ops perform.
+ * @param event
+ * Describes the object linked to the fd.
+ * Note: The caller must take care the object deletion after CTL_DEL.
+ * @return
+ * - On success, zero.
+ * - On failure, a negative value.
+ */
+int
+rte_epoll_ctl(int epfd, int op, int fd,
+ struct rte_epoll_event *event);
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif /* __RTE_EPOLL_H__ */
diff --git a/lib/eal/include/rte_interrupts.h b/lib/eal/include/rte_interrupts.h
index cc3bf45d8c..3d5649efc1 100644
--- a/lib/eal/include/rte_interrupts.h
+++ b/lib/eal/include/rte_interrupts.h
@@ -5,8 +5,11 @@
#ifndef _RTE_INTERRUPTS_H_
#define _RTE_INTERRUPTS_H_
+#include <stdbool.h>
+
#include <rte_common.h>
#include <rte_compat.h>
+#include <rte_epoll.h>
/**
* @file
@@ -22,6 +25,8 @@ extern "C" {
/** Interrupt handle */
struct rte_intr_handle;
+#include "rte_eal_interrupts.h"
+
/** Function to be registered for the specific interrupt */
typedef void (*rte_intr_callback_fn)(void *cb_arg);
@@ -32,8 +37,6 @@ typedef void (*rte_intr_callback_fn)(void *cb_arg);
typedef void (*rte_intr_unregister_callback_fn)(struct rte_intr_handle *intr_handle,
void *cb_arg);
-#include "rte_eal_interrupts.h"
-
/**
* It registers the callback for the specific interrupt. Multiple
* callbacks can be registered at the same time.
@@ -163,6 +166,620 @@ int rte_intr_disable(const struct rte_intr_handle *intr_handle);
__rte_experimental
int rte_intr_ack(const struct rte_intr_handle *intr_handle);
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Check if currently executing in interrupt context
+ *
+ * @return
+ * - non-zero in case of interrupt context
+ * - zero in case of process context
+ */
+__rte_experimental
+int
+rte_thread_is_intr(void);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * It allocates memory for an interrupt handle instance. The API automatically
+ * detects whether the memory for the instance should be allocated using the
+ * DPDK memory management library or the normal heap, based on whether the
+ * DPDK memory subsystem is initialized and ready to be used.
+ *
+ * The event fd and epoll event arrays are allocated with a default size and
+ * can be reallocated later as required.
+ *
+ * This function should be called by the application or driver before calling
+ * any of the interrupt APIs.
+ *
+ * @return
+ * - On success, address of the interrupt handle instance.
+ * - On failure, NULL.
+ */
+__rte_experimental
+struct rte_intr_handle *
+rte_intr_instance_alloc(void);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * This API is used to free the memory allocated for interrupt handle resources.
+ *
+ * @param intr_handle
+ * Pointer to the interrupt handle instance to be freed.
+ *
+ */
+__rte_experimental
+void
+rte_intr_instance_free(struct rte_intr_handle *intr_handle);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * This API is used to set the fd field of interrupt handle with user provided
+ * file descriptor.
+ *
+ * @param intr_handle
+ * pointer to the interrupt handle.
+ * @param fd
+ * file descriptor value provided by user.
+ *
+ * @return
+ * - On success, zero.
+ * - On failure, a negative value.
+ */
+__rte_experimental
+int
+rte_intr_fd_set(struct rte_intr_handle *intr_handle, int fd);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Returns the fd field of the given interrupt handle instance.
+ *
+ * @param intr_handle
+ * pointer to the interrupt handle.
+ *
+ * @return
+ * - On success, fd field.
+ * - On failure, a negative value.
+ */
+__rte_experimental
+int
+rte_intr_fd_get(const struct rte_intr_handle *intr_handle);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * This API is used to set the type field of interrupt handle with user provided
+ * interrupt type.
+ *
+ * @param intr_handle
+ * pointer to the interrupt handle.
+ * @param type
+ * interrupt type
+ *
+ * @return
+ * - On success, zero.
+ * - On failure, a negative value.
+ */
+__rte_experimental
+int
+rte_intr_type_set(struct rte_intr_handle *intr_handle,
+ enum rte_intr_handle_type type);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Returns the type field of the given interrupt handle instance.
+ *
+ * @param intr_handle
+ * pointer to the interrupt handle.
+ *
+ * @return
+ * - On success, interrupt type
+ * - On failure, RTE_INTR_HANDLE_UNKNOWN.
+ */
+__rte_experimental
+enum rte_intr_handle_type
+rte_intr_type_get(const struct rte_intr_handle *intr_handle);
+
+/**
+ * @internal
+ * The function returns the per thread epoll instance.
+ *
+ * @return
+ * epfd, the fd of the per-thread epoll instance referred to.
+ */
+__rte_internal
+int
+rte_intr_tls_epfd(void);
+
+/**
+ * @internal
+ * @param intr_handle
+ * Pointer to the interrupt handle.
+ * @param epfd
+ * Epoll instance fd to which the interrupt vector is associated.
+ * @param op
+ * The operation to be performed for the vector.
+ * Operation type of {ADD, DEL}.
+ * @param vec
+ * RX intr vector number added to the epoll instance wait list.
+ * @param data
+ * User raw data.
+ * @return
+ * - On success, zero.
+ * - On failure, a negative value.
+ */
+__rte_internal
+int
+rte_intr_rx_ctl(struct rte_intr_handle *intr_handle,
+ int epfd, int op, unsigned int vec, void *data);
+
+/**
+ * @internal
+ * It deletes registered eventfds.
+ *
+ * @param intr_handle
+ * Pointer to the interrupt handle.
+ */
+__rte_internal
+void
+rte_intr_free_epoll_fd(struct rte_intr_handle *intr_handle);
+
+/**
+ * @internal
+ * It enables the packet I/O interrupt event if it's necessary.
+ * It creates an event fd for each interrupt vector when MSI-X is used,
+ * otherwise it multiplexes a single event fd.
+ *
+ * @param intr_handle
+ * Pointer to the interrupt handle.
+ * @param nb_efd
+ * Number of interrupt vectors to enable.
+ * The value 0 is not allowed.
+ * @return
+ * - On success, zero.
+ * - On failure, a negative value.
+ */
+__rte_internal
+int
+rte_intr_efd_enable(struct rte_intr_handle *intr_handle, uint32_t nb_efd);
+
+/**
+ * @internal
+ * It disables the packet I/O interrupt event.
+ * It deletes registered eventfds and closes the open fds.
+ *
+ * @param intr_handle
+ * Pointer to the interrupt handle.
+ */
+__rte_internal
+void
+rte_intr_efd_disable(struct rte_intr_handle *intr_handle);
+
+/**
+ * @internal
+ * Returns whether the packet I/O interrupt on the datapath is enabled.
+ *
+ * @param intr_handle
+ * Pointer to the interrupt handle.
+ */
+__rte_internal
+int
+rte_intr_dp_is_en(struct rte_intr_handle *intr_handle);
+
+/**
+ * @internal
+ * Returns whether the interrupt handle instance allows other causes.
+ * Other causes stand for any non packet I/O interrupts.
+ *
+ * @param intr_handle
+ * Pointer to the interrupt handle.
+ */
+__rte_internal
+int
+rte_intr_allow_others(struct rte_intr_handle *intr_handle);
+
+/**
+ * @internal
+ * Returns the multiple interrupt vector capability of the interrupt handle
+ * instance. It returns zero if multiple interrupt vectors are not supported.
+ *
+ * @param intr_handle
+ * Pointer to the interrupt handle.
+ */
+__rte_internal
+int
+rte_intr_cap_multiple(struct rte_intr_handle *intr_handle);
+
+/**
+ * @internal
+ * This API is used to populate the interrupt handle with the source handle fields.
+ *
+ * @param intr_handle
+ * Interrupt handle to be populated (destination).
+ * @param src
+ * Source interrupt handle to be cloned.
+ *
+ * @return
+ * - On success, zero.
+ * - On failure, a negative value.
+ */
+__rte_internal
+int
+rte_intr_instance_copy(struct rte_intr_handle *intr_handle,
+ const struct rte_intr_handle *src);
+
+/**
+ * @internal
+ * This API is used to set the device fd field of interrupt handle with user
+ * provided dev fd. Device fd corresponds to VFIO device fd or UIO config fd.
+ *
+ * @param intr_handle
+ * pointer to the interrupt handle.
+ * @param fd
+ * device file descriptor (VFIO device fd or UIO config fd).
+ *
+ * @return
+ * - On success, zero.
+ * - On failure, a negative value.
+ */
+__rte_internal
+int
+rte_intr_dev_fd_set(struct rte_intr_handle *intr_handle, int fd);
+
+/**
+ * @internal
+ * Returns the device fd field of the given interrupt handle instance.
+ *
+ * @param intr_handle
+ * pointer to the interrupt handle.
+ *
+ * @return
+ * - On success, dev fd.
+ * - On failure, a negative value.
+ */
+__rte_internal
+int
+rte_intr_dev_fd_get(const struct rte_intr_handle *intr_handle);
+
+/**
+ * @internal
+ * This API is used to set the max intr field of interrupt handle with user
+ * provided max intr value.
+ *
+ * @param intr_handle
+ * pointer to the interrupt handle.
+ * @param max_intr
+ * maximum number of interrupts.
+ *
+ * @return
+ * - On success, zero.
+ * - On failure, a negative value.
+ */
+__rte_internal
+int
+rte_intr_max_intr_set(struct rte_intr_handle *intr_handle, int max_intr);
+
+/**
+ * @internal
+ * Returns the max intr field of the given interrupt handle instance.
+ *
+ * @param intr_handle
+ * pointer to the interrupt handle.
+ *
+ * @return
+ * - On success, max intr.
+ * - On failure, a negative value.
+ */
+__rte_internal
+int
+rte_intr_max_intr_get(const struct rte_intr_handle *intr_handle);
+
+/**
+ * @internal
+ * This API is used to set the number of available event fds (nb_efd field)
+ * of the interrupt handle with the value provided by the user.
+ *
+ * @param intr_handle
+ * pointer to the interrupt handle.
+ * @param nb_efd
+ * Number of available event fds.
+ *
+ * @return
+ * - On success, zero.
+ * - On failure, a negative value.
+ */
+__rte_internal
+int
+rte_intr_nb_efd_set(struct rte_intr_handle *intr_handle, int nb_efd);
+
+/**
+ * @internal
+ * Returns the number of available event fds (nb_efd field) of the given
+ * interrupt handle instance.
+ *
+ * @param intr_handle
+ * pointer to the interrupt handle.
+ *
+ * @return
+ * - On success, nb_efd
+ * - On failure, a negative value.
+ */
+__rte_internal
+int
+rte_intr_nb_efd_get(const struct rte_intr_handle *intr_handle);
+
+/**
+ * @internal
+ * Returns the number of interrupt vectors (nb_intr field) of the given
+ * interrupt handle instance. This field is configured at device probe time,
+ * and based on this value the efds and elist arrays are dynamically
+ * allocated. By default this value is set to RTE_MAX_RXTX_INTR_VEC_ID.
+ * For example, in case of a PCI device, its MSI-X size is queried and the
+ * efds/elist arrays are allocated accordingly.
+ *
+ * @param intr_handle
+ * pointer to the interrupt handle.
+ *
+ * @return
+ * - On success, nb_intr
+ * - On failure, a negative value.
+ */
+__rte_internal
+int
+rte_intr_nb_intr_get(const struct rte_intr_handle *intr_handle);
+
+/**
+ * @internal
+ * This API is used to set the event fd counter size field of interrupt handle
+ * with user provided efd counter size.
+ *
+ * @param intr_handle
+ * pointer to the interrupt handle.
+ * @param efd_counter_size
+ * size of efd counter, used for vdev
+ *
+ * @return
+ * - On success, zero.
+ * - On failure, a negative value.
+ */
+__rte_internal
+int
+rte_intr_efd_counter_size_set(struct rte_intr_handle *intr_handle,
+ uint8_t efd_counter_size);
+
+/**
+ * @internal
+ * Returns the event fd counter size field of the given interrupt handle
+ * instance.
+ *
+ * @param intr_handle
+ * pointer to the interrupt handle.
+ *
+ * @return
+ * - On success, efd_counter_size
+ * - On failure, a negative value.
+ */
+__rte_internal
+int
+rte_intr_efd_counter_size_get(const struct rte_intr_handle *intr_handle);
+
+/**
+ * @internal
+ * This API is used to set the fd at the given index of the event fd array.
+ *
+ * @param intr_handle
+ * pointer to the interrupt handle.
+ * @param index
+ * efds array index to be set
+ * @param fd
+ * event fd
+ *
+ * @return
+ * - On success, zero.
+ * - On failure, a negative value.
+ */
+__rte_internal
+int
+rte_intr_efds_index_set(struct rte_intr_handle *intr_handle, int index, int fd);
+
+/**
+ * @internal
+ * Returns the fd value of event fds array at a given index.
+ *
+ * @param intr_handle
+ * pointer to the interrupt handle.
+ * @param index
+ * efds array index to be returned
+ *
+ * @return
+ * - On success, fd
+ * - On failure, a negative value.
+ */
+__rte_internal
+int
+rte_intr_efds_index_get(const struct rte_intr_handle *intr_handle, int index);
+
+/**
+ * @internal
+ * This API is used to set the epoll event object at the given index of the
+ * elist array.
+ *
+ * @param intr_handle
+ * pointer to the interrupt handle.
+ * @param index
+ * elist array index to be set
+ * @param elist
+ * epoll event instance of struct rte_epoll_event
+ *
+ * @return
+ * - On success, zero.
+ * - On failure, a negative value.
+ */
+__rte_internal
+int
+rte_intr_elist_index_set(struct rte_intr_handle *intr_handle, int index,
+ struct rte_epoll_event elist);
+
+/**
+ * @internal
+ * Returns the address of epoll event instance from elist array at a given
+ * index.
+ *
+ * @param intr_handle
+ * pointer to the interrupt handle.
+ * @param index
+ * elist array index to be returned
+ *
+ * @return
+ * - On success, elist
+ * - On failure, a negative value.
+ */
+__rte_internal
+struct rte_epoll_event *
+rte_intr_elist_index_get(struct rte_intr_handle *intr_handle, int index);
+
+/**
+ * @internal
+ * Allocates the memory for the interrupt vector list array, with size defining
+ * the number of elements required in the array.
+ *
+ * @param intr_handle
+ * pointer to the interrupt handle.
+ * @param name
+ * Name assigned to the allocation, or NULL.
+ * @param size
+ * Number of elements required in the array.
+ *
+ * @return
+ * - On success, zero
+ * - On failure, a negative value.
+ */
+__rte_internal
+int
+rte_intr_vec_list_alloc(struct rte_intr_handle *intr_handle, const char *name,
+ int size);
+
+/**
+ * @internal
+ * Sets the vector value at the given index of the interrupt vector list field
+ * of the given interrupt handle.
+ *
+ * @param intr_handle
+ * pointer to the interrupt handle.
+ * @param index
+ * intr_vec array index to be set
+ * @param vec
+ * Interrupt vector value.
+ *
+ * @return
+ * - On success, zero
+ * - On failure, a negative value.
+ */
+__rte_internal
+int
+rte_intr_vec_list_index_set(struct rte_intr_handle *intr_handle, int index,
+ int vec);
+
+/**
+ * @internal
+ * Returns the vector value at the given index of interrupt vector list array.
+ *
+ * @param intr_handle
+ * pointer to the interrupt handle.
+ * @param index
+ * intr_vec array index to be returned
+ *
+ * @return
+ * - On success, interrupt vector
+ * - On failure, a negative value.
+ */
+__rte_internal
+int
+rte_intr_vec_list_index_get(const struct rte_intr_handle *intr_handle,
+ int index);
+
+/**
+ * @internal
+ * Frees the memory allocated for the interrupt vector list array.
+ *
+ * @param intr_handle
+ * pointer to the interrupt handle.
+ */
+__rte_internal
+void
+rte_intr_vec_list_free(struct rte_intr_handle *intr_handle);
+
+/**
+ * @internal
+ * Reallocates the efds and elist arrays based on the size provided by the user.
+ * By default the efds and elist arrays are allocated with the default size
+ * RTE_MAX_RXTX_INTR_VEC_ID on interrupt handle creation. Later, on device
+ * probe, the device may turn out to support more interrupts than
+ * RTE_MAX_RXTX_INTR_VEC_ID. Using this API, PMDs can reallocate the
+ * arrays according to the maximum interrupt capability of the device.
+ *
+ * @param intr_handle
+ * pointer to the interrupt handle.
+ * @param size
+ * efds and elist array size.
+ *
+ * @return
+ * - On success, zero
+ * - On failure, a negative value.
+ */
+__rte_internal
+int
+rte_intr_event_list_update(struct rte_intr_handle *intr_handle, int size);
+
+/**
+ * @internal
+ * This API returns the Windows handle of the given interrupt instance.
+ *
+ * @param intr_handle
+ * pointer to the interrupt handle.
+ *
+ * @return
+ * - On success, Windows handle.
+ * - On failure, NULL.
+ */
+__rte_internal
+void *
+rte_intr_instance_windows_handle_get(struct rte_intr_handle *intr_handle);
+
+/**
+ * @internal
+ * This API sets the Windows handle for the given interrupt instance.
+ *
+ * @param intr_handle
+ * pointer to the interrupt handle.
+ * @param windows_handle
+ * Windows handle to be set.
+ *
+ * @return
+ * - On success, zero
+ * - On failure, a negative value.
+ */
+__rte_internal
+int
+rte_intr_instance_windows_handle_set(struct rte_intr_handle *intr_handle,
+ void *windows_handle);
+
#ifdef __cplusplus
}
#endif
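To illustrate the intended call flow, here is a hedged sketch of how a driver
might use these accessors during probe. The function name my_dev_irq_setup,
its arguments, and the error values are hypothetical; only the rte_intr_*
calls come from the header above (the internal ones are available to drivers
built within DPDK).

#include <errno.h>

#include <rte_interrupts.h>

static int
my_dev_irq_setup(int dev_fd, int irq_fd, uint16_t msix_count)
{
        struct rte_intr_handle *handle;

        /* Allocate an opaque handle instead of embedding the struct. */
        handle = rte_intr_instance_alloc();
        if (handle == NULL)
                return -ENOMEM;

        if (rte_intr_type_set(handle, RTE_INTR_HANDLE_VFIO_MSIX) ||
            rte_intr_fd_set(handle, irq_fd) ||
            rte_intr_dev_fd_set(handle, dev_fd))
                goto fail;

        /* Grow efds/elist beyond the default RTE_MAX_RXTX_INTR_VEC_ID
         * when the device exposes more MSI-X vectors than that.
         */
        if (msix_count > rte_intr_nb_intr_get(handle) &&
            rte_intr_event_list_update(handle, msix_count))
                goto fail;

        /* ... register callbacks, enable the interrupt, etc. ... */
        return 0;

fail:
        rte_intr_instance_free(handle);
        return -1;
}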
diff --git a/lib/eal/version.map b/lib/eal/version.map
index 38f7de83e1..7112dbc146 100644
--- a/lib/eal/version.map
+++ b/lib/eal/version.map
@@ -109,18 +109,10 @@ DPDK_22 {
rte_hexdump;
rte_hypervisor_get;
rte_hypervisor_get_name; # WINDOWS_NO_EXPORT
- rte_intr_allow_others;
rte_intr_callback_register;
rte_intr_callback_unregister;
- rte_intr_cap_multiple;
- rte_intr_disable;
- rte_intr_dp_is_en;
- rte_intr_efd_disable;
- rte_intr_efd_enable;
rte_intr_enable;
- rte_intr_free_epoll_fd;
- rte_intr_rx_ctl;
- rte_intr_tls_epfd;
+ rte_intr_disable;
rte_keepalive_create; # WINDOWS_NO_EXPORT
rte_keepalive_dispatch_pings; # WINDOWS_NO_EXPORT
rte_keepalive_mark_alive; # WINDOWS_NO_EXPORT
@@ -420,12 +412,49 @@ EXPERIMENTAL {
# added in 21.08
rte_power_monitor_multi; # WINDOWS_NO_EXPORT
+
+ # added in 21.11
+ rte_intr_fd_get; # WINDOWS_NO_EXPORT
+ rte_intr_fd_set; # WINDOWS_NO_EXPORT
+ rte_intr_instance_alloc;
+ rte_intr_instance_free;
+ rte_intr_type_get;
+ rte_intr_type_set;
};
INTERNAL {
global:
rte_firmware_read;
+ rte_intr_allow_others;
+ rte_intr_cap_multiple;
+ rte_intr_dev_fd_get; # WINDOWS_NO_EXPORT
+ rte_intr_dev_fd_set; # WINDOWS_NO_EXPORT
+ rte_intr_dp_is_en;
+ rte_intr_efd_counter_size_set; # WINDOWS_NO_EXPORT
+ rte_intr_efd_counter_size_get; # WINDOWS_NO_EXPORT
+ rte_intr_efd_disable;
+ rte_intr_efd_enable;
+ rte_intr_efds_index_get; # WINDOWS_NO_EXPORT
+ rte_intr_efds_index_set; # WINDOWS_NO_EXPORT
+ rte_intr_elist_index_get;
+ rte_intr_elist_index_set;
+ rte_intr_event_list_update;
+ rte_intr_free_epoll_fd;
+ rte_intr_instance_copy;
+ rte_intr_instance_windows_handle_get;
+ rte_intr_instance_windows_handle_set;
+ rte_intr_max_intr_get;
+ rte_intr_max_intr_set;
+ rte_intr_nb_efd_get; # WINDOWS_NO_EXPORT
+ rte_intr_nb_efd_set; # WINDOWS_NO_EXPORT
+ rte_intr_nb_intr_get; # WINDOWS_NO_EXPORT
+ rte_intr_rx_ctl;
+ rte_intr_tls_epfd;
+ rte_intr_vec_list_alloc;
+ rte_intr_vec_list_free;
+ rte_intr_vec_list_index_get;
+ rte_intr_vec_list_index_set;
rte_mem_lock;
rte_mem_map;
rte_mem_page_size;
--
2.18.0
^ permalink raw reply [flat|nested] 152+ messages in thread
* [dpdk-dev] [PATCH v4 3/7] eal/interrupts: avoid direct access to interrupt handle
2021-10-19 18:35 ` [dpdk-dev] [PATCH v4 0/7] make rte_intr_handle internal Harman Kalra
2021-10-19 18:35 ` [dpdk-dev] [PATCH v4 1/7] malloc: introduce malloc is ready API Harman Kalra
2021-10-19 18:35 ` [dpdk-dev] [PATCH v4 2/7] eal/interrupts: implement get set APIs Harman Kalra
@ 2021-10-19 18:35 ` Harman Kalra
2021-10-19 21:27 ` Dmitry Kozlyuk
2021-10-19 18:35 ` [dpdk-dev] [PATCH v4 4/7] test/interrupt: apply get set interrupt handle APIs Harman Kalra
` (3 subsequent siblings)
6 siblings, 1 reply; 152+ messages in thread
From: Harman Kalra @ 2021-10-19 18:35 UTC (permalink / raw)
To: dev, Harman Kalra, Bruce Richardson
Cc: david.marchand, dmitry.kozliuk, mdr, thomas
Making changes to the interrupt framework to use the interrupt handle
APIs to get/set any field. Direct access to any of the fields
should be avoided to prevent any ABI breakage in the future.
Signed-off-by: Harman Kalra <hkalra@marvell.com>
---
lib/eal/freebsd/eal_interrupts.c | 92 ++++++----
lib/eal/linux/eal_interrupts.c | 287 +++++++++++++++++++------------
2 files changed, 234 insertions(+), 145 deletions(-)
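The conversion follows one mechanical pattern throughout: the embedded handle
in struct rte_intr_source becomes a pointer to an allocated instance, and
every direct field access is replaced by the corresponding accessor.
Condensed from the hunks below, the interrupt-source lookup changes as
follows:

/* Before: embedded handle, direct field access. */
TAILQ_FOREACH(src, &intr_sources, next)
        if (src->intr_handle.fd == intr_handle->fd)
                break;

/* After: pointer to an allocated handle, accessor-based comparison. */
TAILQ_FOREACH(src, &intr_sources, next)
        if (rte_intr_fd_get(src->intr_handle) ==
            rte_intr_fd_get(intr_handle))
                break;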
diff --git a/lib/eal/freebsd/eal_interrupts.c b/lib/eal/freebsd/eal_interrupts.c
index 86810845fe..846ca4aa89 100644
--- a/lib/eal/freebsd/eal_interrupts.c
+++ b/lib/eal/freebsd/eal_interrupts.c
@@ -40,7 +40,7 @@ struct rte_intr_callback {
struct rte_intr_source {
TAILQ_ENTRY(rte_intr_source) next;
- struct rte_intr_handle intr_handle; /**< interrupt handle */
+ struct rte_intr_handle *intr_handle; /**< interrupt handle */
struct rte_intr_cb_list callbacks; /**< user callbacks */
uint32_t active;
};
@@ -60,7 +60,7 @@ static int
intr_source_to_kevent(const struct rte_intr_handle *ih, struct kevent *ke)
{
/* alarm callbacks are special case */
- if (ih->type == RTE_INTR_HANDLE_ALARM) {
+ if (rte_intr_type_get(ih) == RTE_INTR_HANDLE_ALARM) {
uint64_t timeout_ns;
/* get soonest alarm timeout */
@@ -75,7 +75,7 @@ intr_source_to_kevent(const struct rte_intr_handle *ih, struct kevent *ke)
} else {
ke->filter = EVFILT_READ;
}
- ke->ident = ih->fd;
+ ke->ident = rte_intr_fd_get(ih);
return 0;
}
@@ -89,7 +89,8 @@ rte_intr_callback_register(const struct rte_intr_handle *intr_handle,
int ret = 0, add_event = 0;
/* first do parameter checking */
- if (intr_handle == NULL || intr_handle->fd < 0 || cb == NULL) {
+ if (intr_handle == NULL || rte_intr_fd_get(intr_handle) < 0 ||
+ cb == NULL) {
RTE_LOG(ERR, EAL,
"Registering with invalid input parameter\n");
return -EINVAL;
@@ -103,7 +104,8 @@ rte_intr_callback_register(const struct rte_intr_handle *intr_handle,
/* find the source for this intr_handle */
TAILQ_FOREACH(src, &intr_sources, next) {
- if (src->intr_handle.fd == intr_handle->fd)
+ if (rte_intr_fd_get(src->intr_handle) ==
+ rte_intr_fd_get(intr_handle))
break;
}
@@ -112,8 +114,8 @@ rte_intr_callback_register(const struct rte_intr_handle *intr_handle,
* thing on the list should be eal_alarm_callback() and we may
* be called just to reset the timer.
*/
- if (src != NULL && src->intr_handle.type == RTE_INTR_HANDLE_ALARM &&
- !TAILQ_EMPTY(&src->callbacks)) {
+ if (src != NULL && rte_intr_type_get(src->intr_handle) ==
+ RTE_INTR_HANDLE_ALARM && !TAILQ_EMPTY(&src->callbacks)) {
callback = NULL;
} else {
/* allocate a new interrupt callback entity */
@@ -135,9 +137,18 @@ rte_intr_callback_register(const struct rte_intr_handle *intr_handle,
ret = -ENOMEM;
goto fail;
} else {
- src->intr_handle = *intr_handle;
- TAILQ_INIT(&src->callbacks);
- TAILQ_INSERT_TAIL(&intr_sources, src, next);
+ src->intr_handle = rte_intr_instance_alloc();
+ if (src->intr_handle == NULL) {
+ RTE_LOG(ERR, EAL, "Can not create intr instance\n");
+ free(callback);
+ ret = -ENOMEM;
+ } else {
+ rte_intr_instance_copy(src->intr_handle,
+ intr_handle);
+ TAILQ_INIT(&src->callbacks);
+ TAILQ_INSERT_TAIL(&intr_sources, src,
+ next);
+ }
}
}
@@ -151,7 +162,8 @@ rte_intr_callback_register(const struct rte_intr_handle *intr_handle,
/* add events to the queue. timer events are special as we need to
* re-set the timer.
*/
- if (add_event || src->intr_handle.type == RTE_INTR_HANDLE_ALARM) {
+ if (add_event || rte_intr_type_get(src->intr_handle) ==
+ RTE_INTR_HANDLE_ALARM) {
struct kevent ke;
memset(&ke, 0, sizeof(ke));
@@ -173,12 +185,13 @@ rte_intr_callback_register(const struct rte_intr_handle *intr_handle,
*/
if (errno == ENODEV)
RTE_LOG(DEBUG, EAL, "Interrupt handle %d not supported\n",
- src->intr_handle.fd);
+ rte_intr_fd_get(src->intr_handle));
else
RTE_LOG(ERR, EAL, "Error adding fd %d "
- "kevent, %s\n",
- src->intr_handle.fd,
- strerror(errno));
+ "kevent, %s\n",
+ rte_intr_fd_get(
+ src->intr_handle),
+ strerror(errno));
ret = -errno;
goto fail;
}
@@ -213,7 +226,7 @@ rte_intr_callback_unregister_pending(const struct rte_intr_handle *intr_handle,
struct rte_intr_callback *cb, *next;
/* do parameter checking first */
- if (intr_handle == NULL || intr_handle->fd < 0) {
+ if (intr_handle == NULL || rte_intr_fd_get(intr_handle) < 0) {
RTE_LOG(ERR, EAL,
"Unregistering with invalid input parameter\n");
return -EINVAL;
@@ -228,7 +241,8 @@ rte_intr_callback_unregister_pending(const struct rte_intr_handle *intr_handle,
/* check if the insterrupt source for the fd is existent */
TAILQ_FOREACH(src, &intr_sources, next)
- if (src->intr_handle.fd == intr_handle->fd)
+ if (rte_intr_fd_get(src->intr_handle) ==
+ rte_intr_fd_get(intr_handle))
break;
/* No interrupt source registered for the fd */
@@ -268,7 +282,7 @@ rte_intr_callback_unregister(const struct rte_intr_handle *intr_handle,
struct rte_intr_callback *cb, *next;
/* do parameter checking first */
- if (intr_handle == NULL || intr_handle->fd < 0) {
+ if (intr_handle == NULL || rte_intr_fd_get(intr_handle) < 0) {
RTE_LOG(ERR, EAL,
"Unregistering with invalid input parameter\n");
return -EINVAL;
@@ -282,7 +296,8 @@ rte_intr_callback_unregister(const struct rte_intr_handle *intr_handle,
/* check if the insterrupt source for the fd is existent */
TAILQ_FOREACH(src, &intr_sources, next)
- if (src->intr_handle.fd == intr_handle->fd)
+ if (rte_intr_fd_get(src->intr_handle) ==
+ rte_intr_fd_get(intr_handle))
break;
/* No interrupt source registered for the fd */
@@ -314,7 +329,8 @@ rte_intr_callback_unregister(const struct rte_intr_handle *intr_handle,
*/
if (kevent(kq, &ke, 1, NULL, 0, NULL) < 0) {
RTE_LOG(ERR, EAL, "Error removing fd %d kevent, %s\n",
- src->intr_handle.fd, strerror(errno));
+ rte_intr_fd_get(src->intr_handle),
+ strerror(errno));
/* removing non-existent even is an expected condition
* in some circumstances (e.g. oneshot events).
*/
@@ -365,17 +381,18 @@ rte_intr_enable(const struct rte_intr_handle *intr_handle)
if (intr_handle == NULL)
return -1;
- if (intr_handle->type == RTE_INTR_HANDLE_VDEV) {
+ if (rte_intr_type_get(intr_handle) == RTE_INTR_HANDLE_VDEV) {
rc = 0;
goto out;
}
- if (intr_handle->fd < 0 || intr_handle->uio_cfg_fd < 0) {
+ if (rte_intr_fd_get(intr_handle) < 0 ||
+ rte_intr_dev_fd_get(intr_handle) < 0) {
rc = -1;
goto out;
}
- switch (intr_handle->type) {
+ switch (rte_intr_type_get(intr_handle)) {
/* not used at this moment */
case RTE_INTR_HANDLE_ALARM:
rc = -1;
@@ -388,7 +405,7 @@ rte_intr_enable(const struct rte_intr_handle *intr_handle)
default:
RTE_LOG(ERR, EAL,
"Unknown handle type of fd %d\n",
- intr_handle->fd);
+ rte_intr_fd_get(intr_handle));
rc = -1;
break;
}
@@ -406,17 +423,18 @@ rte_intr_disable(const struct rte_intr_handle *intr_handle)
if (intr_handle == NULL)
return -1;
- if (intr_handle->type == RTE_INTR_HANDLE_VDEV) {
+ if (rte_intr_type_get(intr_handle) == RTE_INTR_HANDLE_VDEV) {
rc = 0;
goto out;
}
- if (intr_handle->fd < 0 || intr_handle->uio_cfg_fd < 0) {
+ if (rte_intr_fd_get(intr_handle) < 0 ||
+ rte_intr_dev_fd_get(intr_handle) < 0) {
rc = -1;
goto out;
}
- switch (intr_handle->type) {
+ switch (rte_intr_type_get(intr_handle)) {
/* not used at this moment */
case RTE_INTR_HANDLE_ALARM:
rc = -1;
@@ -429,7 +447,7 @@ rte_intr_disable(const struct rte_intr_handle *intr_handle)
default:
RTE_LOG(ERR, EAL,
"Unknown handle type of fd %d\n",
- intr_handle->fd);
+ rte_intr_fd_get(intr_handle));
rc = -1;
break;
}
@@ -441,7 +459,8 @@ rte_intr_disable(const struct rte_intr_handle *intr_handle)
int
rte_intr_ack(const struct rte_intr_handle *intr_handle)
{
- if (intr_handle && intr_handle->type == RTE_INTR_HANDLE_VDEV)
+ if (intr_handle &&
+ rte_intr_type_get(intr_handle) == RTE_INTR_HANDLE_VDEV)
return 0;
return -1;
@@ -463,7 +482,8 @@ eal_intr_process_interrupts(struct kevent *events, int nfds)
rte_spinlock_lock(&intr_lock);
TAILQ_FOREACH(src, &intr_sources, next)
- if (src->intr_handle.fd == event_fd)
+ if (rte_intr_fd_get(src->intr_handle) ==
+ event_fd)
break;
if (src == NULL) {
rte_spinlock_unlock(&intr_lock);
@@ -475,7 +495,7 @@ eal_intr_process_interrupts(struct kevent *events, int nfds)
rte_spinlock_unlock(&intr_lock);
/* set the length to be read dor different handle type */
- switch (src->intr_handle.type) {
+ switch (rte_intr_type_get(src->intr_handle)) {
case RTE_INTR_HANDLE_ALARM:
bytes_read = 0;
call = true;
@@ -546,7 +566,8 @@ eal_intr_process_interrupts(struct kevent *events, int nfds)
/* mark for deletion from the queue */
ke.flags = EV_DELETE;
- if (intr_source_to_kevent(&src->intr_handle, &ke) < 0) {
+ if (intr_source_to_kevent(src->intr_handle,
+ &ke) < 0) {
RTE_LOG(ERR, EAL, "Cannot convert to kevent\n");
rte_spinlock_unlock(&intr_lock);
return;
@@ -557,7 +578,9 @@ eal_intr_process_interrupts(struct kevent *events, int nfds)
*/
if (kevent(kq, &ke, 1, NULL, 0, NULL) < 0) {
RTE_LOG(ERR, EAL, "Error removing fd %d kevent, "
- "%s\n", src->intr_handle.fd,
+ "%s\n",
+ rte_intr_fd_get(
+ src->intr_handle),
strerror(errno));
/* removing non-existent even is an expected
* condition in some circumstances
@@ -567,7 +590,8 @@ eal_intr_process_interrupts(struct kevent *events, int nfds)
TAILQ_REMOVE(&src->callbacks, cb, next);
if (cb->ucb_fn)
- cb->ucb_fn(&src->intr_handle, cb->cb_arg);
+ cb->ucb_fn(src->intr_handle,
+ cb->cb_arg);
free(cb);
}
}
diff --git a/lib/eal/linux/eal_interrupts.c b/lib/eal/linux/eal_interrupts.c
index 22b3b7bcd9..a250a9df66 100644
--- a/lib/eal/linux/eal_interrupts.c
+++ b/lib/eal/linux/eal_interrupts.c
@@ -20,6 +20,7 @@
#include <stdbool.h>
#include <rte_common.h>
+#include <rte_epoll.h>
#include <rte_interrupts.h>
#include <rte_memory.h>
#include <rte_launch.h>
@@ -82,7 +83,7 @@ struct rte_intr_callback {
struct rte_intr_source {
TAILQ_ENTRY(rte_intr_source) next;
- struct rte_intr_handle intr_handle; /**< interrupt handle */
+ struct rte_intr_handle *intr_handle; /**< interrupt handle */
struct rte_intr_cb_list callbacks; /**< user callbacks */
uint32_t active;
};
@@ -112,7 +113,7 @@ static int
vfio_enable_intx(const struct rte_intr_handle *intr_handle) {
struct vfio_irq_set *irq_set;
char irq_set_buf[IRQ_SET_BUF_LEN];
- int len, ret;
+ int len, ret, vfio_dev_fd;
int *fd_ptr;
len = sizeof(irq_set_buf);
@@ -125,13 +126,14 @@ vfio_enable_intx(const struct rte_intr_handle *intr_handle) {
irq_set->index = VFIO_PCI_INTX_IRQ_INDEX;
irq_set->start = 0;
fd_ptr = (int *) &irq_set->data;
- *fd_ptr = intr_handle->fd;
+ *fd_ptr = rte_intr_fd_get(intr_handle);
- ret = ioctl(intr_handle->vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
+ vfio_dev_fd = rte_intr_dev_fd_get(intr_handle);
+ ret = ioctl(vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
if (ret) {
RTE_LOG(ERR, EAL, "Error enabling INTx interrupts for fd %d\n",
- intr_handle->fd);
+ rte_intr_fd_get(intr_handle));
return -1;
}
@@ -144,11 +146,11 @@ vfio_enable_intx(const struct rte_intr_handle *intr_handle) {
irq_set->index = VFIO_PCI_INTX_IRQ_INDEX;
irq_set->start = 0;
- ret = ioctl(intr_handle->vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
+ ret = ioctl(vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
if (ret) {
RTE_LOG(ERR, EAL, "Error unmasking INTx interrupts for fd %d\n",
- intr_handle->fd);
+ rte_intr_fd_get(intr_handle));
return -1;
}
return 0;
@@ -159,7 +161,7 @@ static int
vfio_disable_intx(const struct rte_intr_handle *intr_handle) {
struct vfio_irq_set *irq_set;
char irq_set_buf[IRQ_SET_BUF_LEN];
- int len, ret;
+ int len, ret, vfio_dev_fd;
len = sizeof(struct vfio_irq_set);
@@ -171,11 +173,12 @@ vfio_disable_intx(const struct rte_intr_handle *intr_handle) {
irq_set->index = VFIO_PCI_INTX_IRQ_INDEX;
irq_set->start = 0;
- ret = ioctl(intr_handle->vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
+ vfio_dev_fd = rte_intr_dev_fd_get(intr_handle);
+ ret = ioctl(vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
if (ret) {
RTE_LOG(ERR, EAL, "Error masking INTx interrupts for fd %d\n",
- intr_handle->fd);
+ rte_intr_fd_get(intr_handle));
return -1;
}
@@ -187,11 +190,12 @@ vfio_disable_intx(const struct rte_intr_handle *intr_handle) {
irq_set->index = VFIO_PCI_INTX_IRQ_INDEX;
irq_set->start = 0;
- ret = ioctl(intr_handle->vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
+ ret = ioctl(vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
if (ret) {
RTE_LOG(ERR, EAL,
- "Error disabling INTx interrupts for fd %d\n", intr_handle->fd);
+ "Error disabling INTx interrupts for fd %d\n",
+ rte_intr_fd_get(intr_handle));
return -1;
}
return 0;
@@ -202,6 +206,7 @@ static int
vfio_ack_intx(const struct rte_intr_handle *intr_handle)
{
struct vfio_irq_set irq_set;
+ int vfio_dev_fd;
/* unmask INTx */
memset(&irq_set, 0, sizeof(irq_set));
@@ -211,9 +216,10 @@ vfio_ack_intx(const struct rte_intr_handle *intr_handle)
irq_set.index = VFIO_PCI_INTX_IRQ_INDEX;
irq_set.start = 0;
- if (ioctl(intr_handle->vfio_dev_fd, VFIO_DEVICE_SET_IRQS, &irq_set)) {
+ vfio_dev_fd = rte_intr_dev_fd_get(intr_handle);
+ if (ioctl(vfio_dev_fd, VFIO_DEVICE_SET_IRQS, &irq_set)) {
RTE_LOG(ERR, EAL, "Error unmasking INTx interrupts for fd %d\n",
- intr_handle->fd);
+ rte_intr_fd_get(intr_handle));
return -1;
}
return 0;
@@ -225,7 +231,7 @@ vfio_enable_msi(const struct rte_intr_handle *intr_handle) {
int len, ret;
char irq_set_buf[IRQ_SET_BUF_LEN];
struct vfio_irq_set *irq_set;
- int *fd_ptr;
+ int *fd_ptr, vfio_dev_fd;
len = sizeof(irq_set_buf);
@@ -236,13 +242,14 @@ vfio_enable_msi(const struct rte_intr_handle *intr_handle) {
irq_set->index = VFIO_PCI_MSI_IRQ_INDEX;
irq_set->start = 0;
fd_ptr = (int *) &irq_set->data;
- *fd_ptr = intr_handle->fd;
+ *fd_ptr = rte_intr_fd_get(intr_handle);
- ret = ioctl(intr_handle->vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
+ vfio_dev_fd = rte_intr_dev_fd_get(intr_handle);
+ ret = ioctl(vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
if (ret) {
RTE_LOG(ERR, EAL, "Error enabling MSI interrupts for fd %d\n",
- intr_handle->fd);
+ rte_intr_fd_get(intr_handle));
return -1;
}
return 0;
@@ -253,7 +260,7 @@ static int
vfio_disable_msi(const struct rte_intr_handle *intr_handle) {
struct vfio_irq_set *irq_set;
char irq_set_buf[IRQ_SET_BUF_LEN];
- int len, ret;
+ int len, ret, vfio_dev_fd;
len = sizeof(struct vfio_irq_set);
@@ -264,11 +271,13 @@ vfio_disable_msi(const struct rte_intr_handle *intr_handle) {
irq_set->index = VFIO_PCI_MSI_IRQ_INDEX;
irq_set->start = 0;
- ret = ioctl(intr_handle->vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
+ vfio_dev_fd = rte_intr_dev_fd_get(intr_handle);
+ ret = ioctl(vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
if (ret)
RTE_LOG(ERR, EAL,
- "Error disabling MSI interrupts for fd %d\n", intr_handle->fd);
+ "Error disabling MSI interrupts for fd %d\n",
+ rte_intr_fd_get(intr_handle));
return ret;
}
@@ -279,30 +288,34 @@ vfio_enable_msix(const struct rte_intr_handle *intr_handle) {
int len, ret;
char irq_set_buf[MSIX_IRQ_SET_BUF_LEN];
struct vfio_irq_set *irq_set;
- int *fd_ptr;
+ int *fd_ptr, vfio_dev_fd, i;
len = sizeof(irq_set_buf);
irq_set = (struct vfio_irq_set *) irq_set_buf;
irq_set->argsz = len;
/* 0 < irq_set->count < RTE_MAX_RXTX_INTR_VEC_ID + 1 */
- irq_set->count = intr_handle->max_intr ?
- (intr_handle->max_intr > RTE_MAX_RXTX_INTR_VEC_ID + 1 ?
- RTE_MAX_RXTX_INTR_VEC_ID + 1 : intr_handle->max_intr) : 1;
+ irq_set->count = rte_intr_max_intr_get(intr_handle) ?
+ (rte_intr_max_intr_get(intr_handle) >
+ RTE_MAX_RXTX_INTR_VEC_ID + 1 ? RTE_MAX_RXTX_INTR_VEC_ID + 1 :
+ rte_intr_max_intr_get(intr_handle)) : 1;
+
irq_set->flags = VFIO_IRQ_SET_DATA_EVENTFD | VFIO_IRQ_SET_ACTION_TRIGGER;
irq_set->index = VFIO_PCI_MSIX_IRQ_INDEX;
irq_set->start = 0;
fd_ptr = (int *) &irq_set->data;
/* INTR vector offset 0 reserve for non-efds mapping */
- fd_ptr[RTE_INTR_VEC_ZERO_OFFSET] = intr_handle->fd;
- memcpy(&fd_ptr[RTE_INTR_VEC_RXTX_OFFSET], intr_handle->efds,
- sizeof(*intr_handle->efds) * intr_handle->nb_efd);
+ fd_ptr[RTE_INTR_VEC_ZERO_OFFSET] = rte_intr_fd_get(intr_handle);
+ for (i = 0; i < rte_intr_nb_efd_get(intr_handle); i++)
+ fd_ptr[RTE_INTR_VEC_RXTX_OFFSET + i] =
+ rte_intr_efds_index_get(intr_handle, i);
- ret = ioctl(intr_handle->vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
+ vfio_dev_fd = rte_intr_dev_fd_get(intr_handle);
+ ret = ioctl(vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
if (ret) {
RTE_LOG(ERR, EAL, "Error enabling MSI-X interrupts for fd %d\n",
- intr_handle->fd);
+ rte_intr_fd_get(intr_handle));
return -1;
}
@@ -314,7 +327,7 @@ static int
vfio_disable_msix(const struct rte_intr_handle *intr_handle) {
struct vfio_irq_set *irq_set;
char irq_set_buf[MSIX_IRQ_SET_BUF_LEN];
- int len, ret;
+ int len, ret, vfio_dev_fd;
len = sizeof(struct vfio_irq_set);
@@ -325,11 +338,13 @@ vfio_disable_msix(const struct rte_intr_handle *intr_handle) {
irq_set->index = VFIO_PCI_MSIX_IRQ_INDEX;
irq_set->start = 0;
- ret = ioctl(intr_handle->vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
+ vfio_dev_fd = rte_intr_dev_fd_get(intr_handle);
+ ret = ioctl(vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
if (ret)
RTE_LOG(ERR, EAL,
- "Error disabling MSI-X interrupts for fd %d\n", intr_handle->fd);
+ "Error disabling MSI-X interrupts for fd %d\n",
+ rte_intr_fd_get(intr_handle));
return ret;
}
@@ -342,7 +357,7 @@ vfio_enable_req(const struct rte_intr_handle *intr_handle)
int len, ret;
char irq_set_buf[IRQ_SET_BUF_LEN];
struct vfio_irq_set *irq_set;
- int *fd_ptr;
+ int *fd_ptr, vfio_dev_fd;
len = sizeof(irq_set_buf);
@@ -354,13 +369,14 @@ vfio_enable_req(const struct rte_intr_handle *intr_handle)
irq_set->index = VFIO_PCI_REQ_IRQ_INDEX;
irq_set->start = 0;
fd_ptr = (int *) &irq_set->data;
- *fd_ptr = intr_handle->fd;
+ *fd_ptr = rte_intr_fd_get(intr_handle);
- ret = ioctl(intr_handle->vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
+ vfio_dev_fd = rte_intr_dev_fd_get(intr_handle);
+ ret = ioctl(vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
if (ret) {
RTE_LOG(ERR, EAL, "Error enabling req interrupts for fd %d\n",
- intr_handle->fd);
+ rte_intr_fd_get(intr_handle));
return -1;
}
@@ -373,7 +389,7 @@ vfio_disable_req(const struct rte_intr_handle *intr_handle)
{
struct vfio_irq_set *irq_set;
char irq_set_buf[IRQ_SET_BUF_LEN];
- int len, ret;
+ int len, ret, vfio_dev_fd;
len = sizeof(struct vfio_irq_set);
@@ -384,11 +400,12 @@ vfio_disable_req(const struct rte_intr_handle *intr_handle)
irq_set->index = VFIO_PCI_REQ_IRQ_INDEX;
irq_set->start = 0;
- ret = ioctl(intr_handle->vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
+ vfio_dev_fd = rte_intr_dev_fd_get(intr_handle);
+ ret = ioctl(vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
if (ret)
RTE_LOG(ERR, EAL, "Error disabling req interrupts for fd %d\n",
- intr_handle->fd);
+ rte_intr_fd_get(intr_handle));
return ret;
}
@@ -399,20 +416,22 @@ static int
uio_intx_intr_disable(const struct rte_intr_handle *intr_handle)
{
unsigned char command_high;
+ int uio_cfg_fd;
/* use UIO config file descriptor for uio_pci_generic */
- if (pread(intr_handle->uio_cfg_fd, &command_high, 1, 5) != 1) {
+ uio_cfg_fd = rte_intr_dev_fd_get(intr_handle);
+ if (pread(uio_cfg_fd, &command_high, 1, 5) != 1) {
RTE_LOG(ERR, EAL,
"Error reading interrupts status for fd %d\n",
- intr_handle->uio_cfg_fd);
+ uio_cfg_fd);
return -1;
}
/* disable interrupts */
command_high |= 0x4;
- if (pwrite(intr_handle->uio_cfg_fd, &command_high, 1, 5) != 1) {
+ if (pwrite(uio_cfg_fd, &command_high, 1, 5) != 1) {
RTE_LOG(ERR, EAL,
"Error disabling interrupts for fd %d\n",
- intr_handle->uio_cfg_fd);
+ uio_cfg_fd);
return -1;
}
@@ -423,20 +442,22 @@ static int
uio_intx_intr_enable(const struct rte_intr_handle *intr_handle)
{
unsigned char command_high;
+ int uio_cfg_fd;
/* use UIO config file descriptor for uio_pci_generic */
- if (pread(intr_handle->uio_cfg_fd, &command_high, 1, 5) != 1) {
+ uio_cfg_fd = rte_intr_dev_fd_get(intr_handle);
+ if (pread(uio_cfg_fd, &command_high, 1, 5) != 1) {
RTE_LOG(ERR, EAL,
"Error reading interrupts status for fd %d\n",
- intr_handle->uio_cfg_fd);
+ uio_cfg_fd);
return -1;
}
/* enable interrupts */
command_high &= ~0x4;
- if (pwrite(intr_handle->uio_cfg_fd, &command_high, 1, 5) != 1) {
+ if (pwrite(uio_cfg_fd, &command_high, 1, 5) != 1) {
RTE_LOG(ERR, EAL,
"Error enabling interrupts for fd %d\n",
- intr_handle->uio_cfg_fd);
+ uio_cfg_fd);
return -1;
}
@@ -448,10 +469,11 @@ uio_intr_disable(const struct rte_intr_handle *intr_handle)
{
const int value = 0;
- if (write(intr_handle->fd, &value, sizeof(value)) < 0) {
+ if (write(rte_intr_fd_get(intr_handle), &value,
+ sizeof(value)) < 0) {
RTE_LOG(ERR, EAL,
"Error disabling interrupts for fd %d (%s)\n",
- intr_handle->fd, strerror(errno));
+ rte_intr_fd_get(intr_handle), strerror(errno));
return -1;
}
return 0;
@@ -462,10 +484,11 @@ uio_intr_enable(const struct rte_intr_handle *intr_handle)
{
const int value = 1;
- if (write(intr_handle->fd, &value, sizeof(value)) < 0) {
+ if (write(rte_intr_fd_get(intr_handle), &value,
+ sizeof(value)) < 0) {
RTE_LOG(ERR, EAL,
"Error enabling interrupts for fd %d (%s)\n",
- intr_handle->fd, strerror(errno));
+ rte_intr_fd_get(intr_handle), strerror(errno));
return -1;
}
return 0;
@@ -482,7 +505,8 @@ rte_intr_callback_register(const struct rte_intr_handle *intr_handle,
wake_thread = 0;
/* first do parameter checking */
- if (intr_handle == NULL || intr_handle->fd < 0 || cb == NULL) {
+ if (intr_handle == NULL || rte_intr_fd_get(intr_handle) < 0 ||
+ cb == NULL) {
RTE_LOG(ERR, EAL,
"Registering with invalid input parameter\n");
return -EINVAL;
@@ -503,7 +527,8 @@ rte_intr_callback_register(const struct rte_intr_handle *intr_handle,
/* check if there is at least one callback registered for the fd */
TAILQ_FOREACH(src, &intr_sources, next) {
- if (src->intr_handle.fd == intr_handle->fd) {
+ if (rte_intr_fd_get(src->intr_handle) ==
+ rte_intr_fd_get(intr_handle)) {
/* we had no interrupts for this */
if (TAILQ_EMPTY(&src->callbacks))
wake_thread = 1;
@@ -522,12 +547,21 @@ rte_intr_callback_register(const struct rte_intr_handle *intr_handle,
free(callback);
ret = -ENOMEM;
} else {
- src->intr_handle = *intr_handle;
- TAILQ_INIT(&src->callbacks);
- TAILQ_INSERT_TAIL(&(src->callbacks), callback, next);
- TAILQ_INSERT_TAIL(&intr_sources, src, next);
- wake_thread = 1;
- ret = 0;
+ src->intr_handle = rte_intr_instance_alloc();
+ if (src->intr_handle == NULL) {
+ RTE_LOG(ERR, EAL, "Can not create intr instance\n");
+ free(callback);
+ ret = -ENOMEM;
+ } else {
+ rte_intr_instance_copy(src->intr_handle,
+ intr_handle);
+ TAILQ_INIT(&src->callbacks);
+ TAILQ_INSERT_TAIL(&(src->callbacks), callback,
+ next);
+ TAILQ_INSERT_TAIL(&intr_sources, src, next);
+ wake_thread = 1;
+ ret = 0;
+ }
}
}
@@ -555,7 +589,7 @@ rte_intr_callback_unregister_pending(const struct rte_intr_handle *intr_handle,
struct rte_intr_callback *cb, *next;
/* do parameter checking first */
- if (intr_handle == NULL || intr_handle->fd < 0) {
+ if (intr_handle == NULL || rte_intr_fd_get(intr_handle) < 0) {
RTE_LOG(ERR, EAL,
"Unregistering with invalid input parameter\n");
return -EINVAL;
@@ -565,7 +599,8 @@ rte_intr_callback_unregister_pending(const struct rte_intr_handle *intr_handle,
/* check if the insterrupt source for the fd is existent */
TAILQ_FOREACH(src, &intr_sources, next)
- if (src->intr_handle.fd == intr_handle->fd)
+ if (rte_intr_fd_get(src->intr_handle) ==
+ rte_intr_fd_get(intr_handle))
break;
/* No interrupt source registered for the fd */
@@ -605,7 +640,7 @@ rte_intr_callback_unregister(const struct rte_intr_handle *intr_handle,
struct rte_intr_callback *cb, *next;
/* do parameter checking first */
- if (intr_handle == NULL || intr_handle->fd < 0) {
+ if (intr_handle == NULL || rte_intr_fd_get(intr_handle) < 0) {
RTE_LOG(ERR, EAL,
"Unregistering with invalid input parameter\n");
return -EINVAL;
@@ -615,7 +650,8 @@ rte_intr_callback_unregister(const struct rte_intr_handle *intr_handle,
/* check if the insterrupt source for the fd is existent */
TAILQ_FOREACH(src, &intr_sources, next)
- if (src->intr_handle.fd == intr_handle->fd)
+ if (rte_intr_fd_get(src->intr_handle) ==
+ rte_intr_fd_get(intr_handle))
break;
/* No interrupt source registered for the fd */
@@ -646,6 +682,7 @@ rte_intr_callback_unregister(const struct rte_intr_handle *intr_handle,
/* all callbacks for that source are removed. */
if (TAILQ_EMPTY(&src->callbacks)) {
TAILQ_REMOVE(&intr_sources, src, next);
+ rte_intr_instance_free(src->intr_handle);
free(src);
}
}
@@ -677,22 +714,23 @@ rte_intr_callback_unregister_sync(const struct rte_intr_handle *intr_handle,
int
rte_intr_enable(const struct rte_intr_handle *intr_handle)
{
- int rc = 0;
+ int rc = 0, uio_cfg_fd;
if (intr_handle == NULL)
return -1;
- if (intr_handle->type == RTE_INTR_HANDLE_VDEV) {
+ if (rte_intr_type_get(intr_handle) == RTE_INTR_HANDLE_VDEV) {
rc = 0;
goto out;
}
- if (intr_handle->fd < 0 || intr_handle->uio_cfg_fd < 0) {
+ uio_cfg_fd = rte_intr_dev_fd_get(intr_handle);
+ if (rte_intr_fd_get(intr_handle) < 0 || uio_cfg_fd < 0) {
rc = -1;
goto out;
}
- switch (intr_handle->type){
+ switch (rte_intr_type_get(intr_handle)) {
/* write to the uio fd to enable the interrupt */
case RTE_INTR_HANDLE_UIO:
if (uio_intr_enable(intr_handle))
@@ -734,7 +772,7 @@ rte_intr_enable(const struct rte_intr_handle *intr_handle)
default:
RTE_LOG(ERR, EAL,
"Unknown handle type of fd %d\n",
- intr_handle->fd);
+ rte_intr_fd_get(intr_handle));
rc = -1;
break;
}
@@ -757,13 +795,17 @@ rte_intr_enable(const struct rte_intr_handle *intr_handle)
int
rte_intr_ack(const struct rte_intr_handle *intr_handle)
{
- if (intr_handle && intr_handle->type == RTE_INTR_HANDLE_VDEV)
+ int uio_cfg_fd;
+
+ if (intr_handle && rte_intr_type_get(intr_handle) ==
+ RTE_INTR_HANDLE_VDEV)
return 0;
- if (!intr_handle || intr_handle->fd < 0 || intr_handle->uio_cfg_fd < 0)
+ uio_cfg_fd = rte_intr_dev_fd_get(intr_handle);
+ if (!intr_handle || rte_intr_fd_get(intr_handle) < 0 || uio_cfg_fd < 0)
return -1;
- switch (intr_handle->type) {
+ switch (rte_intr_type_get(intr_handle)) {
/* Both acking and enabling are same for UIO */
case RTE_INTR_HANDLE_UIO:
if (uio_intr_enable(intr_handle))
@@ -796,7 +838,7 @@ rte_intr_ack(const struct rte_intr_handle *intr_handle)
/* unknown handle type */
default:
RTE_LOG(ERR, EAL, "Unknown handle type of fd %d\n",
- intr_handle->fd);
+ rte_intr_fd_get(intr_handle));
return -1;
}
@@ -806,22 +848,23 @@ rte_intr_ack(const struct rte_intr_handle *intr_handle)
int
rte_intr_disable(const struct rte_intr_handle *intr_handle)
{
- int rc = 0;
+ int rc = 0, uio_cfg_fd;
if (intr_handle == NULL)
return -1;
- if (intr_handle->type == RTE_INTR_HANDLE_VDEV) {
+ if (rte_intr_type_get(intr_handle) == RTE_INTR_HANDLE_VDEV) {
rc = 0;
goto out;
}
- if (intr_handle->fd < 0 || intr_handle->uio_cfg_fd < 0) {
+ uio_cfg_fd = rte_intr_dev_fd_get(intr_handle);
+ if (rte_intr_fd_get(intr_handle) < 0 || uio_cfg_fd < 0) {
rc = -1;
goto out;
}
- switch (intr_handle->type){
+ switch (rte_intr_type_get(intr_handle)) {
/* write to the uio fd to disable the interrupt */
case RTE_INTR_HANDLE_UIO:
if (uio_intr_disable(intr_handle))
@@ -863,7 +906,7 @@ rte_intr_disable(const struct rte_intr_handle *intr_handle)
default:
RTE_LOG(ERR, EAL,
"Unknown handle type of fd %d\n",
- intr_handle->fd);
+ rte_intr_fd_get(intr_handle));
rc = -1;
break;
}
@@ -896,7 +939,7 @@ eal_intr_process_interrupts(struct epoll_event *events, int nfds)
}
rte_spinlock_lock(&intr_lock);
TAILQ_FOREACH(src, &intr_sources, next)
- if (src->intr_handle.fd ==
+ if (rte_intr_fd_get(src->intr_handle) ==
events[n].data.fd)
break;
if (src == NULL){
@@ -909,7 +952,7 @@ eal_intr_process_interrupts(struct epoll_event *events, int nfds)
rte_spinlock_unlock(&intr_lock);
/* set the length to be read dor different handle type */
- switch (src->intr_handle.type) {
+ switch (rte_intr_type_get(src->intr_handle)) {
case RTE_INTR_HANDLE_UIO:
case RTE_INTR_HANDLE_UIO_INTX:
bytes_read = sizeof(buf.uio_intr_count);
@@ -973,6 +1016,7 @@ eal_intr_process_interrupts(struct epoll_event *events, int nfds)
TAILQ_REMOVE(&src->callbacks, cb, next);
free(cb);
}
+ rte_intr_instance_free(src->intr_handle);
free(src);
return -1;
} else if (bytes_read == 0)
@@ -1012,7 +1056,8 @@ eal_intr_process_interrupts(struct epoll_event *events, int nfds)
if (cb->pending_delete) {
TAILQ_REMOVE(&src->callbacks, cb, next);
if (cb->ucb_fn)
- cb->ucb_fn(&src->intr_handle, cb->cb_arg);
+ cb->ucb_fn(src->intr_handle,
+ cb->cb_arg);
free(cb);
rv++;
}
@@ -1021,6 +1066,7 @@ eal_intr_process_interrupts(struct epoll_event *events, int nfds)
/* all callbacks for that source are removed. */
if (TAILQ_EMPTY(&src->callbacks)) {
TAILQ_REMOVE(&intr_sources, src, next);
+ rte_intr_instance_free(src->intr_handle);
free(src);
}
@@ -1123,16 +1169,18 @@ eal_intr_thread_main(__rte_unused void *arg)
continue; /* skip those with no callbacks */
memset(&ev, 0, sizeof(ev));
ev.events = EPOLLIN | EPOLLPRI | EPOLLRDHUP | EPOLLHUP;
- ev.data.fd = src->intr_handle.fd;
+ ev.data.fd = rte_intr_fd_get(src->intr_handle);
/**
* add all the uio device file descriptor
* into wait list.
*/
if (epoll_ctl(pfd, EPOLL_CTL_ADD,
- src->intr_handle.fd, &ev) < 0){
+ rte_intr_fd_get(src->intr_handle),
+ &ev) < 0) {
rte_panic("Error adding fd %d epoll_ctl, %s\n",
- src->intr_handle.fd, strerror(errno));
+ rte_intr_fd_get(src->intr_handle),
+ strerror(errno));
}
else
numfds++;
@@ -1185,7 +1233,7 @@ eal_intr_proc_rxtx_intr(int fd, const struct rte_intr_handle *intr_handle)
int bytes_read = 0;
int nbytes;
- switch (intr_handle->type) {
+ switch (rte_intr_type_get(intr_handle)) {
case RTE_INTR_HANDLE_UIO:
case RTE_INTR_HANDLE_UIO_INTX:
bytes_read = sizeof(buf.uio_intr_count);
@@ -1198,7 +1246,7 @@ eal_intr_proc_rxtx_intr(int fd, const struct rte_intr_handle *intr_handle)
break;
#endif
case RTE_INTR_HANDLE_VDEV:
- bytes_read = intr_handle->efd_counter_size;
+ bytes_read = rte_intr_efd_counter_size_get(intr_handle);
/* For vdev, number of bytes to read is set by driver */
break;
case RTE_INTR_HANDLE_EXT:
@@ -1419,8 +1467,8 @@ rte_intr_rx_ctl(struct rte_intr_handle *intr_handle, int epfd,
efd_idx = (vec >= RTE_INTR_VEC_RXTX_OFFSET) ?
(vec - RTE_INTR_VEC_RXTX_OFFSET) : vec;
- if (!intr_handle || intr_handle->nb_efd == 0 ||
- efd_idx >= intr_handle->nb_efd) {
+ if (!intr_handle || rte_intr_nb_efd_get(intr_handle) == 0 ||
+ efd_idx >= (unsigned int)rte_intr_nb_efd_get(intr_handle)) {
RTE_LOG(ERR, EAL, "Wrong intr vector number.\n");
return -EPERM;
}
@@ -1428,7 +1476,7 @@ rte_intr_rx_ctl(struct rte_intr_handle *intr_handle, int epfd,
switch (op) {
case RTE_INTR_EVENT_ADD:
epfd_op = EPOLL_CTL_ADD;
- rev = &intr_handle->elist[efd_idx];
+ rev = rte_intr_elist_index_get(intr_handle, efd_idx);
if (__atomic_load_n(&rev->status,
__ATOMIC_RELAXED) != RTE_EPOLL_INVALID) {
RTE_LOG(INFO, EAL, "Event already been added.\n");
@@ -1442,7 +1490,9 @@ rte_intr_rx_ctl(struct rte_intr_handle *intr_handle, int epfd,
epdata->cb_fun = (rte_intr_event_cb_t)eal_intr_proc_rxtx_intr;
epdata->cb_arg = (void *)intr_handle;
rc = rte_epoll_ctl(epfd, epfd_op,
- intr_handle->efds[efd_idx], rev);
+ rte_intr_efds_index_get(intr_handle,
+ efd_idx),
+ rev);
if (!rc)
RTE_LOG(DEBUG, EAL,
"efd %d associated with vec %d added on epfd %d"
@@ -1452,7 +1502,7 @@ rte_intr_rx_ctl(struct rte_intr_handle *intr_handle, int epfd,
break;
case RTE_INTR_EVENT_DEL:
epfd_op = EPOLL_CTL_DEL;
- rev = &intr_handle->elist[efd_idx];
+ rev = rte_intr_elist_index_get(intr_handle, efd_idx);
if (__atomic_load_n(&rev->status,
__ATOMIC_RELAXED) == RTE_EPOLL_INVALID) {
RTE_LOG(INFO, EAL, "Event does not exist.\n");
@@ -1477,8 +1527,9 @@ rte_intr_free_epoll_fd(struct rte_intr_handle *intr_handle)
uint32_t i;
struct rte_epoll_event *rev;
- for (i = 0; i < intr_handle->nb_efd; i++) {
- rev = &intr_handle->elist[i];
+ for (i = 0; i < (uint32_t)rte_intr_nb_efd_get(intr_handle);
+ i++) {
+ rev = rte_intr_elist_index_get(intr_handle, i);
if (__atomic_load_n(&rev->status,
__ATOMIC_RELAXED) == RTE_EPOLL_INVALID)
continue;
@@ -1498,7 +1549,7 @@ rte_intr_efd_enable(struct rte_intr_handle *intr_handle, uint32_t nb_efd)
assert(nb_efd != 0);
- if (intr_handle->type == RTE_INTR_HANDLE_VFIO_MSIX) {
+ if (rte_intr_type_get(intr_handle) == RTE_INTR_HANDLE_VFIO_MSIX) {
for (i = 0; i < n; i++) {
fd = eventfd(0, EFD_NONBLOCK | EFD_CLOEXEC);
if (fd < 0) {
@@ -1507,21 +1558,32 @@ rte_intr_efd_enable(struct rte_intr_handle *intr_handle, uint32_t nb_efd)
errno, strerror(errno));
return -errno;
}
- intr_handle->efds[i] = fd;
+
+ if (rte_intr_efds_index_set(intr_handle, i, fd))
+ return -rte_errno;
}
- intr_handle->nb_efd = n;
- intr_handle->max_intr = NB_OTHER_INTR + n;
- } else if (intr_handle->type == RTE_INTR_HANDLE_VDEV) {
+
+ if (rte_intr_nb_efd_set(intr_handle, n))
+ return -rte_errno;
+
+ if (rte_intr_max_intr_set(intr_handle, NB_OTHER_INTR + n))
+ return -rte_errno;
+ } else if (rte_intr_type_get(intr_handle) == RTE_INTR_HANDLE_VDEV) {
/* only check, initialization would be done in vdev driver.*/
- if (intr_handle->efd_counter_size >
+ if ((uint64_t)rte_intr_efd_counter_size_get(intr_handle) >
sizeof(union rte_intr_read_buffer)) {
RTE_LOG(ERR, EAL, "the efd_counter_size is oversized");
return -EINVAL;
}
} else {
- intr_handle->efds[0] = intr_handle->fd;
- intr_handle->nb_efd = RTE_MIN(nb_efd, 1U);
- intr_handle->max_intr = NB_OTHER_INTR;
+ if (rte_intr_efds_index_set(intr_handle, 0,
+ rte_intr_fd_get(intr_handle)))
+ return -rte_errno;
+ if (rte_intr_nb_efd_set(intr_handle,
+ RTE_MIN(nb_efd, 1U)))
+ return -rte_errno;
+ if (rte_intr_max_intr_set(intr_handle, NB_OTHER_INTR))
+ return -rte_errno;
}
return 0;
@@ -1533,18 +1595,20 @@ rte_intr_efd_disable(struct rte_intr_handle *intr_handle)
uint32_t i;
rte_intr_free_epoll_fd(intr_handle);
- if (intr_handle->max_intr > intr_handle->nb_efd) {
- for (i = 0; i < intr_handle->nb_efd; i++)
- close(intr_handle->efds[i]);
+ if (rte_intr_max_intr_get(intr_handle) >
+ rte_intr_nb_efd_get(intr_handle)) {
+ for (i = 0; i <
+ (uint32_t)rte_intr_nb_efd_get(intr_handle); i++)
+ close(rte_intr_efds_index_get(intr_handle, i));
}
- intr_handle->nb_efd = 0;
- intr_handle->max_intr = 0;
+ rte_intr_nb_efd_set(intr_handle, 0);
+ rte_intr_max_intr_set(intr_handle, 0);
}
int
rte_intr_dp_is_en(struct rte_intr_handle *intr_handle)
{
- return !(!intr_handle->nb_efd);
+ return !(!rte_intr_nb_efd_get(intr_handle));
}
int
@@ -1553,16 +1617,17 @@ rte_intr_allow_others(struct rte_intr_handle *intr_handle)
if (!rte_intr_dp_is_en(intr_handle))
return 1;
else
- return !!(intr_handle->max_intr - intr_handle->nb_efd);
+ return !!(rte_intr_max_intr_get(intr_handle) -
+ rte_intr_nb_efd_get(intr_handle));
}
int
rte_intr_cap_multiple(struct rte_intr_handle *intr_handle)
{
- if (intr_handle->type == RTE_INTR_HANDLE_VFIO_MSIX)
+ if (rte_intr_type_get(intr_handle) == RTE_INTR_HANDLE_VFIO_MSIX)
return 1;
- if (intr_handle->type == RTE_INTR_HANDLE_VDEV)
+ if (rte_intr_type_get(intr_handle) == RTE_INTR_HANDLE_VDEV)
return 1;
return 0;
--
2.18.0
^ permalink raw reply [flat|nested] 152+ messages in thread
* [dpdk-dev] [PATCH v4 4/7] test/interrupt: apply get set interrupt handle APIs
2021-10-19 18:35 ` [dpdk-dev] [PATCH v4 0/7] make rte_intr_handle internal Harman Kalra
` (2 preceding siblings ...)
2021-10-19 18:35 ` [dpdk-dev] [PATCH v4 3/7] eal/interrupts: avoid direct access to interrupt handle Harman Kalra
@ 2021-10-19 18:35 ` Harman Kalra
2021-10-19 18:35 ` [dpdk-dev] [PATCH v4 5/7] drivers: remove direct access to interrupt handle Harman Kalra
` (2 subsequent siblings)
6 siblings, 0 replies; 152+ messages in thread
From: Harman Kalra @ 2021-10-19 18:35 UTC (permalink / raw)
To: dev, Harman Kalra; +Cc: david.marchand, dmitry.kozliuk, mdr, thomas
Updating the interrupt test suite to make use of the interrupt
handle get/set APIs.
Signed-off-by: Harman Kalra <hkalra@marvell.com>
---
app/test/test_interrupts.c | 162 ++++++++++++++++++++++---------------
1 file changed, 97 insertions(+), 65 deletions(-)
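Condensed from the hunks below, the structural change is that the static
array of handle structures becomes an array of pointers, allocated in
test_interrupt_init() and released in test_interrupt_deinit():

static struct rte_intr_handle *intr_handles[TEST_INTERRUPT_HANDLE_MAX];

static int
test_interrupt_init(void)
{
        int i;

        for (i = 0; i < TEST_INTERRUPT_HANDLE_MAX; i++) {
                intr_handles[i] = rte_intr_instance_alloc();
                if (intr_handles[i] == NULL)
                        return -1;
        }
        /* Per-handle fd/type configuration follows via rte_intr_fd_set()
         * and rte_intr_type_set(), as in the diff.
         */
        return 0;
}

static int
test_interrupt_deinit(void)
{
        int i;

        for (i = 0; i < TEST_INTERRUPT_HANDLE_MAX; i++)
                rte_intr_instance_free(intr_handles[i]);
        /* The pipe fds are closed here as well, as in the diff. */
        return 0;
}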
diff --git a/app/test/test_interrupts.c b/app/test/test_interrupts.c
index 233b14a70b..774a573f02 100644
--- a/app/test/test_interrupts.c
+++ b/app/test/test_interrupts.c
@@ -16,7 +16,7 @@
/* predefined interrupt handle types */
enum test_interrupt_handle_type {
- TEST_INTERRUPT_HANDLE_INVALID,
+ TEST_INTERRUPT_HANDLE_INVALID = 0,
TEST_INTERRUPT_HANDLE_VALID,
TEST_INTERRUPT_HANDLE_VALID_UIO,
TEST_INTERRUPT_HANDLE_VALID_ALARM,
@@ -27,7 +27,7 @@ enum test_interrupt_handle_type {
/* flag of if callback is called */
static volatile int flag;
-static struct rte_intr_handle intr_handles[TEST_INTERRUPT_HANDLE_MAX];
+static struct rte_intr_handle *intr_handles[TEST_INTERRUPT_HANDLE_MAX];
static enum test_interrupt_handle_type test_intr_type =
TEST_INTERRUPT_HANDLE_MAX;
@@ -50,7 +50,7 @@ static union intr_pipefds pfds;
static inline int
test_interrupt_handle_sanity_check(struct rte_intr_handle *intr_handle)
{
- if (!intr_handle || intr_handle->fd < 0)
+ if (!intr_handle || rte_intr_fd_get(intr_handle) < 0)
return -1;
return 0;
@@ -62,31 +62,54 @@ test_interrupt_handle_sanity_check(struct rte_intr_handle *intr_handle)
static int
test_interrupt_init(void)
{
+ struct rte_intr_handle *test_intr_handle;
+ int i;
+
if (pipe(pfds.pipefd) < 0)
return -1;
- intr_handles[TEST_INTERRUPT_HANDLE_INVALID].fd = -1;
- intr_handles[TEST_INTERRUPT_HANDLE_INVALID].type =
- RTE_INTR_HANDLE_UNKNOWN;
+ for (i = 0; i < TEST_INTERRUPT_HANDLE_MAX; i++) {
+ intr_handles[i] = rte_intr_instance_alloc();
+ if (!intr_handles[i])
+ return -1;
+ }
+
+ test_intr_handle = intr_handles[TEST_INTERRUPT_HANDLE_INVALID];
+ if (rte_intr_fd_set(test_intr_handle, -1))
+ return -1;
+ if (rte_intr_type_set(test_intr_handle, RTE_INTR_HANDLE_UNKNOWN))
+ return -1;
- intr_handles[TEST_INTERRUPT_HANDLE_VALID].fd = pfds.readfd;
- intr_handles[TEST_INTERRUPT_HANDLE_VALID].type =
- RTE_INTR_HANDLE_UNKNOWN;
+ test_intr_handle = intr_handles[TEST_INTERRUPT_HANDLE_VALID];
+ if (rte_intr_fd_set(test_intr_handle, pfds.readfd))
+ return -1;
+ if (rte_intr_type_set(test_intr_handle, RTE_INTR_HANDLE_UNKNOWN))
+ return -1;
- intr_handles[TEST_INTERRUPT_HANDLE_VALID_UIO].fd = pfds.readfd;
- intr_handles[TEST_INTERRUPT_HANDLE_VALID_UIO].type =
- RTE_INTR_HANDLE_UIO;
+ test_intr_handle = intr_handles[TEST_INTERRUPT_HANDLE_VALID_UIO];
+ if (rte_intr_fd_set(test_intr_handle, pfds.readfd))
+ return -1;
+ if (rte_intr_type_set(test_intr_handle, RTE_INTR_HANDLE_UIO))
+ return -1;
- intr_handles[TEST_INTERRUPT_HANDLE_VALID_ALARM].fd = pfds.readfd;
- intr_handles[TEST_INTERRUPT_HANDLE_VALID_ALARM].type =
- RTE_INTR_HANDLE_ALARM;
+ test_intr_handle = intr_handles[TEST_INTERRUPT_HANDLE_VALID_ALARM];
+ if (rte_intr_fd_set(test_intr_handle, pfds.readfd))
+ return -1;
+ if (rte_intr_type_set(test_intr_handle, RTE_INTR_HANDLE_ALARM))
+ return -1;
- intr_handles[TEST_INTERRUPT_HANDLE_VALID_DEV_EVENT].fd = pfds.readfd;
- intr_handles[TEST_INTERRUPT_HANDLE_VALID_DEV_EVENT].type =
- RTE_INTR_HANDLE_DEV_EVENT;
+ test_intr_handle = intr_handles[TEST_INTERRUPT_HANDLE_VALID_DEV_EVENT];
+ if (rte_intr_fd_set(test_intr_handle, pfds.readfd))
+ return -1;
+ if (rte_intr_type_set(test_intr_handle,
+ RTE_INTR_HANDLE_DEV_EVENT))
+ return -1;
- intr_handles[TEST_INTERRUPT_HANDLE_CASE1].fd = pfds.writefd;
- intr_handles[TEST_INTERRUPT_HANDLE_CASE1].type = RTE_INTR_HANDLE_UIO;
+ test_intr_handle = intr_handles[TEST_INTERRUPT_HANDLE_CASE1];
+ if (rte_intr_fd_set(test_intr_handle, pfds.writefd))
+ return -1;
+ if (rte_intr_type_set(test_intr_handle, RTE_INTR_HANDLE_UIO))
+ return -1;
return 0;
}
@@ -97,6 +120,10 @@ test_interrupt_init(void)
static int
test_interrupt_deinit(void)
{
+ int i;
+
+ for (i = 0; i < TEST_INTERRUPT_HANDLE_MAX; i++)
+ rte_intr_instance_free(intr_handles[i]);
close(pfds.pipefd[0]);
close(pfds.pipefd[1]);
@@ -125,8 +152,10 @@ test_interrupt_handle_compare(struct rte_intr_handle *intr_handle_l,
if (!intr_handle_l || !intr_handle_r)
return -1;
- if (intr_handle_l->fd != intr_handle_r->fd ||
- intr_handle_l->type != intr_handle_r->type)
+ if (rte_intr_fd_get(intr_handle_l) !=
+ rte_intr_fd_get(intr_handle_r) ||
+ rte_intr_type_get(intr_handle_l) !=
+ rte_intr_type_get(intr_handle_r))
return -1;
return 0;
@@ -178,6 +207,8 @@ static void
test_interrupt_callback(void *arg)
{
struct rte_intr_handle *intr_handle = arg;
+ struct rte_intr_handle *test_intr_handle;
+
if (test_intr_type >= TEST_INTERRUPT_HANDLE_MAX) {
printf("invalid interrupt type\n");
flag = -1;
@@ -198,8 +229,8 @@ test_interrupt_callback(void *arg)
return;
}
- if (test_interrupt_handle_compare(intr_handle,
- &(intr_handles[test_intr_type])) == 0)
+ test_intr_handle = intr_handles[test_intr_type];
+ if (test_interrupt_handle_compare(intr_handle, test_intr_handle) == 0)
flag = 1;
}
@@ -223,7 +254,7 @@ test_interrupt_callback_1(void *arg)
static int
test_interrupt_enable(void)
{
- struct rte_intr_handle test_intr_handle;
+ struct rte_intr_handle *test_intr_handle;
/* check with null intr_handle */
if (rte_intr_enable(NULL) == 0) {
@@ -233,7 +264,7 @@ test_interrupt_enable(void)
/* check with invalid intr_handle */
test_intr_handle = intr_handles[TEST_INTERRUPT_HANDLE_INVALID];
- if (rte_intr_enable(&test_intr_handle) == 0) {
+ if (rte_intr_enable(test_intr_handle) == 0) {
printf("unexpectedly enable invalid intr_handle "
"successfully\n");
return -1;
@@ -241,7 +272,7 @@ test_interrupt_enable(void)
/* check with valid intr_handle */
test_intr_handle = intr_handles[TEST_INTERRUPT_HANDLE_VALID];
- if (rte_intr_enable(&test_intr_handle) == 0) {
+ if (rte_intr_enable(test_intr_handle) == 0) {
printf("unexpectedly enable a specific intr_handle "
"successfully\n");
return -1;
@@ -249,7 +280,7 @@ test_interrupt_enable(void)
/* check with specific valid intr_handle */
test_intr_handle = intr_handles[TEST_INTERRUPT_HANDLE_VALID_ALARM];
- if (rte_intr_enable(&test_intr_handle) == 0) {
+ if (rte_intr_enable(test_intr_handle) == 0) {
printf("unexpectedly enable a specific intr_handle "
"successfully\n");
return -1;
@@ -257,7 +288,7 @@ test_interrupt_enable(void)
/* check with specific valid intr_handle */
test_intr_handle = intr_handles[TEST_INTERRUPT_HANDLE_VALID_DEV_EVENT];
- if (rte_intr_enable(&test_intr_handle) == 0) {
+ if (rte_intr_enable(test_intr_handle) == 0) {
printf("unexpectedly enable a specific intr_handle "
"successfully\n");
return -1;
@@ -265,13 +296,13 @@ test_interrupt_enable(void)
/* check with valid handler and its type */
test_intr_handle = intr_handles[TEST_INTERRUPT_HANDLE_CASE1];
- if (rte_intr_enable(&test_intr_handle) < 0) {
+ if (rte_intr_enable(test_intr_handle) < 0) {
printf("fail to enable interrupt on a simulated handler\n");
return -1;
}
test_intr_handle = intr_handles[TEST_INTERRUPT_HANDLE_VALID_UIO];
- if (rte_intr_enable(&test_intr_handle) == 0) {
+ if (rte_intr_enable(test_intr_handle) == 0) {
printf("unexpectedly enable a specific intr_handle "
"successfully\n");
return -1;
@@ -286,7 +317,7 @@ test_interrupt_enable(void)
static int
test_interrupt_disable(void)
{
- struct rte_intr_handle test_intr_handle;
+ struct rte_intr_handle *test_intr_handle;
/* check with null intr_handle */
if (rte_intr_disable(NULL) == 0) {
@@ -297,7 +328,7 @@ test_interrupt_disable(void)
/* check with invalid intr_handle */
test_intr_handle = intr_handles[TEST_INTERRUPT_HANDLE_INVALID];
- if (rte_intr_disable(&test_intr_handle) == 0) {
+ if (rte_intr_disable(test_intr_handle) == 0) {
printf("unexpectedly disable invalid intr_handle "
"successfully\n");
return -1;
@@ -305,7 +336,7 @@ test_interrupt_disable(void)
/* check with valid intr_handle */
test_intr_handle = intr_handles[TEST_INTERRUPT_HANDLE_VALID];
- if (rte_intr_disable(&test_intr_handle) == 0) {
+ if (rte_intr_disable(test_intr_handle) == 0) {
printf("unexpectedly disable a specific intr_handle "
"successfully\n");
return -1;
@@ -313,7 +344,7 @@ test_interrupt_disable(void)
/* check with specific valid intr_handle */
test_intr_handle = intr_handles[TEST_INTERRUPT_HANDLE_VALID_ALARM];
- if (rte_intr_disable(&test_intr_handle) == 0) {
+ if (rte_intr_disable(test_intr_handle) == 0) {
printf("unexpectedly disable a specific intr_handle "
"successfully\n");
return -1;
@@ -321,7 +352,7 @@ test_interrupt_disable(void)
/* check with specific valid intr_handle */
test_intr_handle = intr_handles[TEST_INTERRUPT_HANDLE_VALID_DEV_EVENT];
- if (rte_intr_disable(&test_intr_handle) == 0) {
+ if (rte_intr_disable(test_intr_handle) == 0) {
printf("unexpectedly disable a specific intr_handle "
"successfully\n");
return -1;
@@ -329,13 +360,13 @@ test_interrupt_disable(void)
/* check with valid handler and its type */
test_intr_handle = intr_handles[TEST_INTERRUPT_HANDLE_CASE1];
- if (rte_intr_disable(&test_intr_handle) < 0) {
+ if (rte_intr_disable(test_intr_handle) < 0) {
printf("fail to disable interrupt on a simulated handler\n");
return -1;
}
test_intr_handle = intr_handles[TEST_INTERRUPT_HANDLE_VALID_UIO];
- if (rte_intr_disable(&test_intr_handle) == 0) {
+ if (rte_intr_disable(test_intr_handle) == 0) {
printf("unexpectedly disable a specific intr_handle "
"successfully\n");
return -1;
@@ -351,13 +382,13 @@ static int
test_interrupt_full_path_check(enum test_interrupt_handle_type intr_type)
{
int count;
- struct rte_intr_handle test_intr_handle;
+ struct rte_intr_handle *test_intr_handle;
flag = 0;
test_intr_handle = intr_handles[intr_type];
test_intr_type = intr_type;
- if (rte_intr_callback_register(&test_intr_handle,
- test_interrupt_callback, &test_intr_handle) < 0) {
+ if (rte_intr_callback_register(test_intr_handle,
+ test_interrupt_callback, test_intr_handle) < 0) {
printf("fail to register callback\n");
return -1;
}
@@ -371,9 +402,9 @@ test_interrupt_full_path_check(enum test_interrupt_handle_type intr_type)
rte_delay_ms(TEST_INTERRUPT_CHECK_INTERVAL);
while ((count =
- rte_intr_callback_unregister(&test_intr_handle,
+ rte_intr_callback_unregister(test_intr_handle,
test_interrupt_callback,
- &test_intr_handle)) < 0) {
+ test_intr_handle)) < 0) {
if (count != -EAGAIN)
return -1;
}
@@ -396,7 +427,7 @@ static int
test_interrupt(void)
{
int ret = -1;
- struct rte_intr_handle test_intr_handle;
+ struct rte_intr_handle *test_intr_handle;
if (test_interrupt_init() < 0) {
printf("fail to initialize for testing interrupt\n");
@@ -445,8 +476,8 @@ test_interrupt(void)
/* check if it will fail to register cb with invalid intr_handle */
test_intr_handle = intr_handles[TEST_INTERRUPT_HANDLE_INVALID];
- if (rte_intr_callback_register(&test_intr_handle,
- test_interrupt_callback, &test_intr_handle) == 0) {
+ if (rte_intr_callback_register(test_intr_handle,
+ test_interrupt_callback, test_intr_handle) == 0) {
printf("unexpectedly register successfully with invalid "
"intr_handle\n");
goto out;
@@ -454,7 +485,8 @@ test_interrupt(void)
/* check if it will fail to register without callback */
test_intr_handle = intr_handles[TEST_INTERRUPT_HANDLE_VALID];
- if (rte_intr_callback_register(&test_intr_handle, NULL, &test_intr_handle) == 0) {
+ if (rte_intr_callback_register(test_intr_handle, NULL,
+ test_intr_handle) == 0) {
printf("unexpectedly register successfully with "
"null callback\n");
goto out;
@@ -470,8 +502,8 @@ test_interrupt(void)
/* check if it will fail to unregister cb with invalid intr_handle */
test_intr_handle = intr_handles[TEST_INTERRUPT_HANDLE_INVALID];
- if (rte_intr_callback_unregister(&test_intr_handle,
- test_interrupt_callback, &test_intr_handle) > 0) {
+ if (rte_intr_callback_unregister(test_intr_handle,
+ test_interrupt_callback, test_intr_handle) > 0) {
printf("unexpectedly unregister successfully with "
"invalid intr_handle\n");
goto out;
@@ -479,29 +511,29 @@ test_interrupt(void)
/* check if it is ok to register the same intr_handle twice */
test_intr_handle = intr_handles[TEST_INTERRUPT_HANDLE_VALID];
- if (rte_intr_callback_register(&test_intr_handle,
- test_interrupt_callback, &test_intr_handle) < 0) {
+ if (rte_intr_callback_register(test_intr_handle,
+ test_interrupt_callback, test_intr_handle) < 0) {
printf("it fails to register test_interrupt_callback\n");
goto out;
}
- if (rte_intr_callback_register(&test_intr_handle,
- test_interrupt_callback_1, &test_intr_handle) < 0) {
+ if (rte_intr_callback_register(test_intr_handle,
+ test_interrupt_callback_1, test_intr_handle) < 0) {
printf("it fails to register test_interrupt_callback_1\n");
goto out;
}
/* check if it will fail to unregister with invalid parameter */
- if (rte_intr_callback_unregister(&test_intr_handle,
+ if (rte_intr_callback_unregister(test_intr_handle,
test_interrupt_callback, (void *)0xff) != 0) {
printf("unexpectedly unregisters successfully with "
"invalid arg\n");
goto out;
}
- if (rte_intr_callback_unregister(&test_intr_handle,
- test_interrupt_callback, &test_intr_handle) <= 0) {
+ if (rte_intr_callback_unregister(test_intr_handle,
+ test_interrupt_callback, test_intr_handle) <= 0) {
printf("it fails to unregister test_interrupt_callback\n");
goto out;
}
- if (rte_intr_callback_unregister(&test_intr_handle,
+ if (rte_intr_callback_unregister(test_intr_handle,
test_interrupt_callback_1, (void *)-1) <= 0) {
printf("it fails to unregister test_interrupt_callback_1 "
"for all\n");
@@ -529,27 +561,27 @@ test_interrupt(void)
printf("Clearing for interrupt tests\n");
/* clear registered callbacks */
test_intr_handle = intr_handles[TEST_INTERRUPT_HANDLE_VALID];
- rte_intr_callback_unregister(&test_intr_handle,
+ rte_intr_callback_unregister(test_intr_handle,
test_interrupt_callback, (void *)-1);
- rte_intr_callback_unregister(&test_intr_handle,
+ rte_intr_callback_unregister(test_intr_handle,
test_interrupt_callback_1, (void *)-1);
test_intr_handle = intr_handles[TEST_INTERRUPT_HANDLE_VALID_UIO];
- rte_intr_callback_unregister(&test_intr_handle,
+ rte_intr_callback_unregister(test_intr_handle,
test_interrupt_callback, (void *)-1);
- rte_intr_callback_unregister(&test_intr_handle,
+ rte_intr_callback_unregister(test_intr_handle,
test_interrupt_callback_1, (void *)-1);
test_intr_handle = intr_handles[TEST_INTERRUPT_HANDLE_VALID_ALARM];
- rte_intr_callback_unregister(&test_intr_handle,
+ rte_intr_callback_unregister(test_intr_handle,
test_interrupt_callback, (void *)-1);
- rte_intr_callback_unregister(&test_intr_handle,
+ rte_intr_callback_unregister(test_intr_handle,
test_interrupt_callback_1, (void *)-1);
test_intr_handle = intr_handles[TEST_INTERRUPT_HANDLE_VALID_DEV_EVENT];
- rte_intr_callback_unregister(&test_intr_handle,
+ rte_intr_callback_unregister(test_intr_handle,
test_interrupt_callback, (void *)-1);
- rte_intr_callback_unregister(&test_intr_handle,
+ rte_intr_callback_unregister(test_intr_handle,
test_interrupt_callback_1, (void *)-1);
rte_delay_ms(2 * TEST_INTERRUPT_CHECK_INTERVAL);
--
2.18.0
^ permalink raw reply [flat|nested] 152+ messages in thread
* [dpdk-dev] [PATCH v4 5/7] drivers: remove direct access to interrupt handle
2021-10-19 18:35 ` [dpdk-dev] [PATCH v4 0/7] make rte_intr_handle internal Harman Kalra
` (3 preceding siblings ...)
2021-10-19 18:35 ` [dpdk-dev] [PATCH v4 4/7] test/interrupt: apply get set interrupt handle APIs Harman Kalra
@ 2021-10-19 18:35 ` Harman Kalra
2021-10-20 1:57 ` Hyong Youb Kim (hyonkim)
2021-10-19 18:35 ` [dpdk-dev] [PATCH v4 6/7] eal/interrupts: make interrupt handle structure opaque Harman Kalra
2021-10-19 18:35 ` [dpdk-dev] [PATCH v4 7/7] eal/alarm: introduce alarm fini routine Harman Kalra
6 siblings, 1 reply; 152+ messages in thread
From: Harman Kalra @ 2021-10-19 18:35 UTC (permalink / raw)
To: dev, Nicolas Chautru, Parav Pandit, Xueming Li, Hemant Agrawal,
Sachin Saxena, Rosen Xu, Ferruh Yigit, Anatoly Burakov,
Stephen Hemminger, Long Li, Nithin Dabilpuram, Kiran Kumar K,
Sunil Kumar Kori, Satha Rao, Jerin Jacob, Ankur Dwivedi,
Anoob Joseph, Pavan Nikhilesh, Igor Russkikh, Steven Webster,
Matt Peters, Somalapuram Amaranath, Rasesh Mody, Shahed Shaikh,
Ajit Khaparde, Somnath Kotur, Haiyue Wang, Marcin Wojtas,
Michal Krawczyk, Shai Brandes, Evgeny Schemeilin, Igor Chauskin,
John Daley, Hyong Youb Kim, Gaetan Rivet, Qi Zhang, Xiao Wang,
Ziyang Xuan, Xiaoyun Wang, Guoyang Zhou, Min Hu (Connor),
Yisen Zhuang, Lijun Ou, Beilei Xing, Jingjing Wu, Qiming Yang,
Andrew Boyer, Jakub Grajciar, Matan Azrad, Viacheslav Ovsiienko,
Heinrich Kuhn, Jiawen Wu, Devendra Singh Rawat, Andrew Rybchenko,
Keith Wiles, Maciej Czekaj, Jian Wang, Maxime Coquelin,
Chenbo Xia, Yong Wang, Tianfei zhang, Xiaoyun Li, Guy Kaneti,
Bruce Richardson, Thomas Monjalon
Cc: david.marchand, dmitry.kozliuk, mdr, Harman Kalra
Remove direct access to the interrupt handle structure fields;
use the respective get/set APIs instead.
Update all drivers and libraries that currently access the
interrupt handle fields directly.
Signed-off-by: Harman Kalra <hkalra@marvell.com>
---
drivers/baseband/acc100/rte_acc100_pmd.c | 18 +--
.../fpga_5gnr_fec/rte_fpga_5gnr_fec.c | 21 ++--
drivers/baseband/fpga_lte_fec/fpga_lte_fec.c | 21 ++--
drivers/bus/auxiliary/auxiliary_common.c | 2 +
drivers/bus/auxiliary/linux/auxiliary.c | 9 ++
drivers/bus/auxiliary/rte_bus_auxiliary.h | 2 +-
drivers/bus/dpaa/dpaa_bus.c | 26 +++-
drivers/bus/dpaa/rte_dpaa_bus.h | 2 +-
drivers/bus/fslmc/fslmc_bus.c | 15 ++-
drivers/bus/fslmc/fslmc_vfio.c | 32 +++--
drivers/bus/fslmc/portal/dpaa2_hw_dpio.c | 19 ++-
drivers/bus/fslmc/portal/dpaa2_hw_pvt.h | 2 +-
drivers/bus/fslmc/rte_fslmc.h | 2 +-
drivers/bus/ifpga/ifpga_bus.c | 14 ++-
drivers/bus/ifpga/rte_bus_ifpga.h | 2 +-
drivers/bus/pci/bsd/pci.c | 21 ++--
drivers/bus/pci/linux/pci.c | 4 +-
drivers/bus/pci/linux/pci_uio.c | 73 +++++++----
drivers/bus/pci/linux/pci_vfio.c | 108 ++++++++++------
drivers/bus/pci/pci_common.c | 27 +++-
drivers/bus/pci/pci_common_uio.c | 21 ++--
drivers/bus/pci/rte_bus_pci.h | 4 +-
drivers/bus/vmbus/linux/vmbus_bus.c | 5 +
drivers/bus/vmbus/linux/vmbus_uio.c | 37 ++++--
drivers/bus/vmbus/rte_bus_vmbus.h | 2 +-
drivers/bus/vmbus/vmbus_common_uio.c | 24 ++--
drivers/common/cnxk/roc_cpt.c | 8 +-
drivers/common/cnxk/roc_dev.c | 14 +--
drivers/common/cnxk/roc_irq.c | 108 +++++++++-------
drivers/common/cnxk/roc_nix_inl_dev_irq.c | 8 +-
drivers/common/cnxk/roc_nix_irq.c | 36 +++---
drivers/common/cnxk/roc_npa.c | 2 +-
drivers/common/cnxk/roc_platform.h | 49 ++++++--
drivers/common/cnxk/roc_sso.c | 4 +-
drivers/common/cnxk/roc_tim.c | 4 +-
drivers/common/octeontx2/otx2_dev.c | 14 +--
drivers/common/octeontx2/otx2_irq.c | 117 ++++++++++--------
.../octeontx2/otx2_cryptodev_hw_access.c | 4 +-
drivers/event/octeontx2/otx2_evdev_irq.c | 12 +-
drivers/mempool/octeontx2/otx2_mempool.c | 2 +-
drivers/net/atlantic/atl_ethdev.c | 20 ++-
drivers/net/avp/avp_ethdev.c | 8 +-
drivers/net/axgbe/axgbe_ethdev.c | 12 +-
drivers/net/axgbe/axgbe_mdio.c | 6 +-
drivers/net/bnx2x/bnx2x_ethdev.c | 10 +-
drivers/net/bnxt/bnxt_ethdev.c | 33 +++--
drivers/net/bnxt/bnxt_irq.c | 4 +-
drivers/net/dpaa/dpaa_ethdev.c | 47 ++++---
drivers/net/dpaa2/dpaa2_ethdev.c | 10 +-
drivers/net/e1000/em_ethdev.c | 23 ++--
drivers/net/e1000/igb_ethdev.c | 79 ++++++------
drivers/net/ena/ena_ethdev.c | 35 +++---
drivers/net/enic/enic_main.c | 26 ++--
drivers/net/failsafe/failsafe.c | 22 +++-
drivers/net/failsafe/failsafe_intr.c | 43 ++++---
drivers/net/failsafe/failsafe_ops.c | 21 +++-
drivers/net/failsafe/failsafe_private.h | 2 +-
drivers/net/fm10k/fm10k_ethdev.c | 32 ++---
drivers/net/hinic/hinic_pmd_ethdev.c | 10 +-
drivers/net/hns3/hns3_ethdev.c | 57 ++++-----
drivers/net/hns3/hns3_ethdev_vf.c | 64 +++++-----
drivers/net/hns3/hns3_rxtx.c | 2 +-
drivers/net/i40e/i40e_ethdev.c | 53 ++++----
drivers/net/iavf/iavf_ethdev.c | 42 +++----
drivers/net/iavf/iavf_vchnl.c | 4 +-
drivers/net/ice/ice_dcf.c | 10 +-
drivers/net/ice/ice_dcf_ethdev.c | 21 ++--
drivers/net/ice/ice_ethdev.c | 49 ++++----
drivers/net/igc/igc_ethdev.c | 45 ++++---
drivers/net/ionic/ionic_ethdev.c | 17 +--
drivers/net/ixgbe/ixgbe_ethdev.c | 66 +++++-----
drivers/net/memif/memif_socket.c | 108 +++++++++++-----
drivers/net/memif/memif_socket.h | 4 +-
drivers/net/memif/rte_eth_memif.c | 59 +++++++--
drivers/net/memif/rte_eth_memif.h | 2 +-
drivers/net/mlx4/mlx4.c | 18 ++-
drivers/net/mlx4/mlx4.h | 2 +-
drivers/net/mlx4/mlx4_intr.c | 47 ++++---
drivers/net/mlx5/linux/mlx5_os.c | 51 +++++---
drivers/net/mlx5/linux/mlx5_socket.c | 24 ++--
drivers/net/mlx5/mlx5.h | 6 +-
drivers/net/mlx5/mlx5_rxq.c | 42 ++++---
drivers/net/mlx5/mlx5_trigger.c | 4 +-
drivers/net/mlx5/mlx5_txpp.c | 25 ++--
drivers/net/netvsc/hn_ethdev.c | 4 +-
drivers/net/nfp/nfp_common.c | 34 ++---
drivers/net/nfp/nfp_ethdev.c | 13 +-
drivers/net/nfp/nfp_ethdev_vf.c | 13 +-
drivers/net/ngbe/ngbe_ethdev.c | 29 ++---
drivers/net/octeontx2/otx2_ethdev_irq.c | 35 +++---
drivers/net/qede/qede_ethdev.c | 16 +--
drivers/net/sfc/sfc_intr.c | 30 ++---
drivers/net/tap/rte_eth_tap.c | 35 ++++--
drivers/net/tap/rte_eth_tap.h | 2 +-
drivers/net/tap/tap_intr.c | 32 +++--
drivers/net/thunderx/nicvf_ethdev.c | 11 ++
drivers/net/thunderx/nicvf_struct.h | 2 +-
drivers/net/txgbe/txgbe_ethdev.c | 34 +++--
drivers/net/txgbe/txgbe_ethdev_vf.c | 33 +++--
drivers/net/vhost/rte_eth_vhost.c | 75 ++++++-----
drivers/net/virtio/virtio_ethdev.c | 21 ++--
.../net/virtio/virtio_user/virtio_user_dev.c | 47 ++++---
drivers/net/vmxnet3/vmxnet3_ethdev.c | 43 ++++---
drivers/raw/ifpga/ifpga_rawdev.c | 61 ++++++---
drivers/raw/ntb/ntb.c | 9 +-
.../regex/octeontx2/otx2_regexdev_hw_access.c | 4 +-
drivers/vdpa/ifc/ifcvf_vdpa.c | 5 +-
drivers/vdpa/mlx5/mlx5_vdpa.c | 9 ++
drivers/vdpa/mlx5/mlx5_vdpa.h | 4 +-
drivers/vdpa/mlx5/mlx5_vdpa_event.c | 22 ++--
drivers/vdpa/mlx5/mlx5_vdpa_virtq.c | 44 ++++---
lib/bbdev/rte_bbdev.c | 4 +-
lib/eal/freebsd/eal_alarm.c | 45 ++++++-
lib/eal/include/rte_eal_trace.h | 24 +---
lib/eal/linux/eal_alarm.c | 29 +++--
lib/eal/linux/eal_dev.c | 63 ++++++----
lib/ethdev/ethdev_pci.h | 2 +-
lib/ethdev/rte_ethdev.c | 14 +--
118 files changed, 1791 insertions(+), 1217 deletions(-)
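As a quick illustration of the conversion applied across the drivers below,
here is a sketch only: the probe/remove functions and their parameters are
hypothetical, while rte_intr_instance_alloc/free, rte_intr_fd_set,
rte_intr_type_set, rte_intr_dev_fd_set and RTE_INTR_HANDLE_VFIO_MSIX are the
APIs introduced by this series. The includes are assumed; drivers allocate the
handle at probe/scan time and free it on cleanup instead of embedding the
structure.

#include <errno.h>
#include <rte_bus_pci.h>
#include <rte_errno.h>
#include <rte_interrupts.h>

static int
bus_probe_example(struct rte_pci_device *dev, int irq_fd, int vfio_dev_fd)
{
	/* intr_handle is now a pointer; allocate an instance per device. */
	dev->intr_handle = rte_intr_instance_alloc();
	if (dev->intr_handle == NULL)
		return -ENOMEM;

	/* All field writes go through the set APIs. */
	if (rte_intr_fd_set(dev->intr_handle, irq_fd) ||
	    rte_intr_type_set(dev->intr_handle, RTE_INTR_HANDLE_VFIO_MSIX) ||
	    rte_intr_dev_fd_set(dev->intr_handle, vfio_dev_fd)) {
		rte_intr_instance_free(dev->intr_handle);
		dev->intr_handle = NULL;
		return -rte_errno;
	}

	return 0;
}

static void
bus_remove_example(struct rte_pci_device *dev)
{
	/* Free the instance on cleanup; no direct field resets remain. */
	rte_intr_instance_free(dev->intr_handle);
	dev->intr_handle = NULL;
}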
diff --git a/drivers/baseband/acc100/rte_acc100_pmd.c b/drivers/baseband/acc100/rte_acc100_pmd.c
index 4e2feefc3c..73a8fac2f4 100644
--- a/drivers/baseband/acc100/rte_acc100_pmd.c
+++ b/drivers/baseband/acc100/rte_acc100_pmd.c
@@ -720,8 +720,10 @@ acc100_intr_enable(struct rte_bbdev *dev)
struct acc100_device *d = dev->data->dev_private;
/* Only MSI are currently supported */
- if (dev->intr_handle->type == RTE_INTR_HANDLE_VFIO_MSI ||
- dev->intr_handle->type == RTE_INTR_HANDLE_UIO) {
+ if (rte_intr_type_get(dev->intr_handle) ==
+ RTE_INTR_HANDLE_VFIO_MSI ||
+ rte_intr_type_get(dev->intr_handle) ==
+ RTE_INTR_HANDLE_UIO) {
ret = allocate_info_ring(dev);
if (ret < 0) {
@@ -1097,8 +1099,9 @@ acc100_queue_intr_enable(struct rte_bbdev *dev, uint16_t queue_id)
{
struct acc100_queue *q = dev->data->queues[queue_id].queue_private;
- if (dev->intr_handle->type != RTE_INTR_HANDLE_VFIO_MSI &&
- dev->intr_handle->type != RTE_INTR_HANDLE_UIO)
+ if (rte_intr_type_get(dev->intr_handle) !=
+ RTE_INTR_HANDLE_VFIO_MSI &&
+ rte_intr_type_get(dev->intr_handle) != RTE_INTR_HANDLE_UIO)
return -ENOTSUP;
q->irq_enable = 1;
@@ -1110,8 +1113,9 @@ acc100_queue_intr_disable(struct rte_bbdev *dev, uint16_t queue_id)
{
struct acc100_queue *q = dev->data->queues[queue_id].queue_private;
- if (dev->intr_handle->type != RTE_INTR_HANDLE_VFIO_MSI &&
- dev->intr_handle->type != RTE_INTR_HANDLE_UIO)
+ if (rte_intr_type_get(dev->intr_handle) !=
+ RTE_INTR_HANDLE_VFIO_MSI &&
+ rte_intr_type_get(dev->intr_handle) != RTE_INTR_HANDLE_UIO)
return -ENOTSUP;
q->irq_enable = 0;
@@ -4184,7 +4188,7 @@ static int acc100_pci_probe(struct rte_pci_driver *pci_drv,
/* Fill HW specific part of device structure */
bbdev->device = &pci_dev->device;
- bbdev->intr_handle = &pci_dev->intr_handle;
+ bbdev->intr_handle = pci_dev->intr_handle;
bbdev->data->socket_id = pci_dev->device.numa_node;
/* Invoke ACC100 device initialization function */
diff --git a/drivers/baseband/fpga_5gnr_fec/rte_fpga_5gnr_fec.c b/drivers/baseband/fpga_5gnr_fec/rte_fpga_5gnr_fec.c
index 6485cc824a..8add4b13ef 100644
--- a/drivers/baseband/fpga_5gnr_fec/rte_fpga_5gnr_fec.c
+++ b/drivers/baseband/fpga_5gnr_fec/rte_fpga_5gnr_fec.c
@@ -743,16 +743,15 @@ fpga_intr_enable(struct rte_bbdev *dev)
* invoked when any FPGA queue issues interrupt.
*/
for (i = 0; i < FPGA_NUM_INTR_VEC; ++i)
- dev->intr_handle->efds[i] = dev->intr_handle->fd;
-
- if (!dev->intr_handle->intr_vec) {
- dev->intr_handle->intr_vec = rte_zmalloc("intr_vec",
- dev->data->num_queues * sizeof(int), 0);
- if (!dev->intr_handle->intr_vec) {
- rte_bbdev_log(ERR, "Failed to allocate %u vectors",
- dev->data->num_queues);
- return -ENOMEM;
- }
+ if (rte_intr_efds_index_set(dev->intr_handle, i,
+ rte_intr_fd_get(dev->intr_handle)))
+ return -rte_errno;
+
+ if (rte_intr_vec_list_alloc(dev->intr_handle, "intr_vec",
+ dev->data->num_queues)) {
+ rte_bbdev_log(ERR, "Failed to allocate %u vectors",
+ dev->data->num_queues);
+ return -ENOMEM;
}
ret = rte_intr_enable(dev->intr_handle);
@@ -1879,7 +1878,7 @@ fpga_5gnr_fec_probe(struct rte_pci_driver *pci_drv,
/* Fill HW specific part of device structure */
bbdev->device = &pci_dev->device;
- bbdev->intr_handle = &pci_dev->intr_handle;
+ bbdev->intr_handle = pci_dev->intr_handle;
bbdev->data->socket_id = pci_dev->device.numa_node;
/* Invoke FEC FPGA device initialization function */
diff --git a/drivers/baseband/fpga_lte_fec/fpga_lte_fec.c b/drivers/baseband/fpga_lte_fec/fpga_lte_fec.c
index 350c4248eb..8f69e8fc3e 100644
--- a/drivers/baseband/fpga_lte_fec/fpga_lte_fec.c
+++ b/drivers/baseband/fpga_lte_fec/fpga_lte_fec.c
@@ -1014,16 +1014,15 @@ fpga_intr_enable(struct rte_bbdev *dev)
* invoked when any FPGA queue issues interrupt.
*/
for (i = 0; i < FPGA_NUM_INTR_VEC; ++i)
- dev->intr_handle->efds[i] = dev->intr_handle->fd;
-
- if (!dev->intr_handle->intr_vec) {
- dev->intr_handle->intr_vec = rte_zmalloc("intr_vec",
- dev->data->num_queues * sizeof(int), 0);
- if (!dev->intr_handle->intr_vec) {
- rte_bbdev_log(ERR, "Failed to allocate %u vectors",
- dev->data->num_queues);
- return -ENOMEM;
- }
+ if (rte_intr_efds_index_set(dev->intr_handle, i,
+ rte_intr_fd_get(dev->intr_handle)))
+ return -rte_errno;
+
+ if (rte_intr_vec_list_alloc(dev->intr_handle, "intr_vec",
+ dev->data->num_queues)) {
+ rte_bbdev_log(ERR, "Failed to allocate %u vectors",
+ dev->data->num_queues);
+ return -ENOMEM;
}
ret = rte_intr_enable(dev->intr_handle);
@@ -2369,7 +2368,7 @@ fpga_lte_fec_probe(struct rte_pci_driver *pci_drv,
/* Fill HW specific part of device structure */
bbdev->device = &pci_dev->device;
- bbdev->intr_handle = &pci_dev->intr_handle;
+ bbdev->intr_handle = pci_dev->intr_handle;
bbdev->data->socket_id = pci_dev->device.numa_node;
/* Invoke FEC FPGA device initialization function */
diff --git a/drivers/bus/auxiliary/auxiliary_common.c b/drivers/bus/auxiliary/auxiliary_common.c
index 603b6fdc02..6d44c433b6 100644
--- a/drivers/bus/auxiliary/auxiliary_common.c
+++ b/drivers/bus/auxiliary/auxiliary_common.c
@@ -320,6 +320,8 @@ auxiliary_unplug(struct rte_device *dev)
if (ret == 0) {
rte_auxiliary_remove_device(adev);
rte_devargs_remove(dev->devargs);
+ if (adev->intr_handle)
+ rte_intr_instance_free(adev->intr_handle);
free(adev);
}
return ret;
diff --git a/drivers/bus/auxiliary/linux/auxiliary.c b/drivers/bus/auxiliary/linux/auxiliary.c
index 9bd4ee3295..374246657a 100644
--- a/drivers/bus/auxiliary/linux/auxiliary.c
+++ b/drivers/bus/auxiliary/linux/auxiliary.c
@@ -39,6 +39,13 @@ auxiliary_scan_one(const char *dirname, const char *name)
dev->device.name = dev->name;
dev->device.bus = &auxiliary_bus.bus;
+ /* Allocate interrupt instance */
+ dev->intr_handle = rte_intr_instance_alloc();
+ if (!dev->intr_handle) {
+ free(dev);
+ return -1;
+ }
+
/* Get NUMA node, default to 0 if not present */
snprintf(filename, sizeof(filename), "%s/%s/numa_node",
dirname, name);
@@ -67,6 +74,8 @@ auxiliary_scan_one(const char *dirname, const char *name)
rte_devargs_remove(dev2->device.devargs);
auxiliary_on_scan(dev2);
}
+ if (dev->intr_handle)
+ rte_intr_instance_free(dev->intr_handle);
free(dev);
}
return 0;
diff --git a/drivers/bus/auxiliary/rte_bus_auxiliary.h b/drivers/bus/auxiliary/rte_bus_auxiliary.h
index b1f5610404..93b266daf7 100644
--- a/drivers/bus/auxiliary/rte_bus_auxiliary.h
+++ b/drivers/bus/auxiliary/rte_bus_auxiliary.h
@@ -115,7 +115,7 @@ struct rte_auxiliary_device {
RTE_TAILQ_ENTRY(rte_auxiliary_device) next; /**< Next probed device. */
struct rte_device device; /**< Inherit core device */
char name[RTE_DEV_NAME_MAX_LEN + 1]; /**< ASCII device name */
- struct rte_intr_handle intr_handle; /**< Interrupt handle */
+ struct rte_intr_handle *intr_handle; /**< Interrupt handle */
struct rte_auxiliary_driver *driver; /**< Device driver */
};
diff --git a/drivers/bus/dpaa/dpaa_bus.c b/drivers/bus/dpaa/dpaa_bus.c
index 6cab2ae760..d7c2639034 100644
--- a/drivers/bus/dpaa/dpaa_bus.c
+++ b/drivers/bus/dpaa/dpaa_bus.c
@@ -172,6 +172,14 @@ dpaa_create_device_list(void)
dev->device.bus = &rte_dpaa_bus.bus;
+ /* Allocate interrupt handle instance */
+ dev->intr_handle = rte_intr_instance_alloc();
+ if (!dev->intr_handle) {
+ DPAA_BUS_LOG(ERR, "Failed to allocate intr handle");
+ ret = -ENOMEM;
+ goto cleanup;
+ }
+
cfg = &dpaa_netcfg->port_cfg[i];
fman_intf = cfg->fman_if;
@@ -214,6 +222,14 @@ dpaa_create_device_list(void)
goto cleanup;
}
+ /* Allocate interrupt handle instance */
+ dev->intr_handle = rte_intr_instance_alloc();
+ if (!dev->intr_handle) {
+ DPAA_BUS_LOG(ERR, "Failed to allocate intr handle");
+ ret = -ENOMEM;
+ goto cleanup;
+ }
+
dev->device_type = FSL_DPAA_CRYPTO;
dev->id.dev_id = rte_dpaa_bus.device_count + i;
@@ -247,6 +263,7 @@ dpaa_clean_device_list(void)
RTE_TAILQ_FOREACH_SAFE(dev, &rte_dpaa_bus.device_list, next, tdev) {
TAILQ_REMOVE(&rte_dpaa_bus.device_list, dev, next);
+ rte_intr_instance_free(dev->intr_handle);
free(dev);
dev = NULL;
}
@@ -559,8 +576,11 @@ static int rte_dpaa_setup_intr(struct rte_intr_handle *intr_handle)
return errno;
}
- intr_handle->fd = fd;
- intr_handle->type = RTE_INTR_HANDLE_EXT;
+ if (rte_intr_fd_set(intr_handle, fd))
+ return -rte_errno;
+
+ if (rte_intr_type_set(intr_handle, RTE_INTR_HANDLE_EXT))
+ return -rte_errno;
return 0;
}
@@ -612,7 +632,7 @@ rte_dpaa_bus_probe(void)
TAILQ_FOREACH(dev, &rte_dpaa_bus.device_list, next) {
if (dev->device_type == FSL_DPAA_ETH) {
- ret = rte_dpaa_setup_intr(&dev->intr_handle);
+ ret = rte_dpaa_setup_intr(dev->intr_handle);
if (ret)
DPAA_BUS_ERR("Error setting up interrupt.\n");
}
diff --git a/drivers/bus/dpaa/rte_dpaa_bus.h b/drivers/bus/dpaa/rte_dpaa_bus.h
index ecc66387f6..97d189f9b0 100644
--- a/drivers/bus/dpaa/rte_dpaa_bus.h
+++ b/drivers/bus/dpaa/rte_dpaa_bus.h
@@ -98,7 +98,7 @@ struct rte_dpaa_device {
};
struct rte_dpaa_driver *driver;
struct dpaa_device_id id;
- struct rte_intr_handle intr_handle;
+ struct rte_intr_handle *intr_handle;
enum rte_dpaa_type device_type; /**< Ethernet or crypto type device */
char name[RTE_ETH_NAME_MAX_LEN];
};
diff --git a/drivers/bus/fslmc/fslmc_bus.c b/drivers/bus/fslmc/fslmc_bus.c
index 8c8f8a298d..b469c0cf9e 100644
--- a/drivers/bus/fslmc/fslmc_bus.c
+++ b/drivers/bus/fslmc/fslmc_bus.c
@@ -47,6 +47,8 @@ cleanup_fslmc_device_list(void)
RTE_TAILQ_FOREACH_SAFE(dev, &rte_fslmc_bus.device_list, next, t_dev) {
TAILQ_REMOVE(&rte_fslmc_bus.device_list, dev, next);
+ if (dev->intr_handle)
+ rte_intr_instance_free(dev->intr_handle);
free(dev);
dev = NULL;
}
@@ -160,6 +162,14 @@ scan_one_fslmc_device(char *dev_name)
dev->device.bus = &rte_fslmc_bus.bus;
+ /* Allocate interrupt instance */
+ dev->intr_handle = rte_intr_instance_alloc();
+ if (!dev->intr_handle) {
+ DPAA2_BUS_ERR("Failed to allocate intr handle");
+ ret = -ENOMEM;
+ goto cleanup;
+ }
+
/* Parse the device name and ID */
t_ptr = strtok(dup_dev_name, ".");
if (!t_ptr) {
@@ -220,8 +230,11 @@ scan_one_fslmc_device(char *dev_name)
cleanup:
if (dup_dev_name)
free(dup_dev_name);
- if (dev)
+ if (dev) {
+ if (dev->intr_handle)
+ rte_intr_instance_free(dev->intr_handle);
free(dev);
+ }
return ret;
}
diff --git a/drivers/bus/fslmc/fslmc_vfio.c b/drivers/bus/fslmc/fslmc_vfio.c
index 852fcfc4dd..c2b469a94b 100644
--- a/drivers/bus/fslmc/fslmc_vfio.c
+++ b/drivers/bus/fslmc/fslmc_vfio.c
@@ -599,7 +599,7 @@ int rte_dpaa2_intr_enable(struct rte_intr_handle *intr_handle, int index)
int len, ret;
char irq_set_buf[IRQ_SET_BUF_LEN];
struct vfio_irq_set *irq_set;
- int *fd_ptr;
+ int *fd_ptr, vfio_dev_fd;
len = sizeof(irq_set_buf);
@@ -611,12 +611,14 @@ int rte_dpaa2_intr_enable(struct rte_intr_handle *intr_handle, int index)
irq_set->index = index;
irq_set->start = 0;
fd_ptr = (int *)&irq_set->data;
- *fd_ptr = intr_handle->fd;
+ *fd_ptr = rte_intr_fd_get(intr_handle);
- ret = ioctl(intr_handle->vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
+ vfio_dev_fd = rte_intr_dev_fd_get(intr_handle);
+ ret = ioctl(vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
if (ret) {
DPAA2_BUS_ERR("Error:dpaa2 SET IRQs fd=%d, err = %d(%s)",
- intr_handle->fd, errno, strerror(errno));
+ rte_intr_fd_get(intr_handle), errno,
+ strerror(errno));
return ret;
}
@@ -627,7 +629,7 @@ int rte_dpaa2_intr_disable(struct rte_intr_handle *intr_handle, int index)
{
struct vfio_irq_set *irq_set;
char irq_set_buf[IRQ_SET_BUF_LEN];
- int len, ret;
+ int len, ret, vfio_dev_fd;
len = sizeof(struct vfio_irq_set);
@@ -638,11 +640,12 @@ int rte_dpaa2_intr_disable(struct rte_intr_handle *intr_handle, int index)
irq_set->start = 0;
irq_set->count = 0;
- ret = ioctl(intr_handle->vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
+ vfio_dev_fd = rte_intr_dev_fd_get(intr_handle);
+ ret = ioctl(vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
if (ret)
DPAA2_BUS_ERR(
"Error disabling dpaa2 interrupts for fd %d",
- intr_handle->fd);
+ rte_intr_fd_get(intr_handle));
return ret;
}
@@ -684,9 +687,16 @@ rte_dpaa2_vfio_setup_intr(struct rte_intr_handle *intr_handle,
return -1;
}
- intr_handle->fd = fd;
- intr_handle->type = RTE_INTR_HANDLE_VFIO_MSI;
- intr_handle->vfio_dev_fd = vfio_dev_fd;
+ if (rte_intr_fd_set(intr_handle, fd))
+ return -rte_errno;
+
+ if (rte_intr_type_set(intr_handle,
+ RTE_INTR_HANDLE_VFIO_MSI))
+ return -rte_errno;
+
+ if (rte_intr_dev_fd_set(intr_handle, vfio_dev_fd))
+ return -rte_errno;
+
return 0;
}
@@ -711,7 +721,7 @@ fslmc_process_iodevices(struct rte_dpaa2_device *dev)
switch (dev->dev_type) {
case DPAA2_ETH:
- rte_dpaa2_vfio_setup_intr(&dev->intr_handle, dev_fd,
+ rte_dpaa2_vfio_setup_intr(dev->intr_handle, dev_fd,
device_info.num_irqs);
break;
case DPAA2_CON:
diff --git a/drivers/bus/fslmc/portal/dpaa2_hw_dpio.c b/drivers/bus/fslmc/portal/dpaa2_hw_dpio.c
index 1a1e437ed1..4472175ce3 100644
--- a/drivers/bus/fslmc/portal/dpaa2_hw_dpio.c
+++ b/drivers/bus/fslmc/portal/dpaa2_hw_dpio.c
@@ -176,7 +176,7 @@ static int dpaa2_dpio_intr_init(struct dpaa2_dpio_dev *dpio_dev)
int threshold = 0x3, timeout = 0xFF;
dpio_epoll_fd = epoll_create(1);
- ret = rte_dpaa2_intr_enable(&dpio_dev->intr_handle, 0);
+ ret = rte_dpaa2_intr_enable(dpio_dev->intr_handle, 0);
if (ret) {
DPAA2_BUS_ERR("Interrupt registeration failed");
return -1;
@@ -195,7 +195,7 @@ static int dpaa2_dpio_intr_init(struct dpaa2_dpio_dev *dpio_dev)
qbman_swp_dqrr_thrshld_write(dpio_dev->sw_portal, threshold);
qbman_swp_intr_timeout_write(dpio_dev->sw_portal, timeout);
- eventfd = dpio_dev->intr_handle.fd;
+ eventfd = rte_intr_fd_get(dpio_dev->intr_handle);
epoll_ev.events = EPOLLIN | EPOLLPRI | EPOLLET;
epoll_ev.data.fd = eventfd;
@@ -213,7 +213,7 @@ static void dpaa2_dpio_intr_deinit(struct dpaa2_dpio_dev *dpio_dev)
{
int ret;
- ret = rte_dpaa2_intr_disable(&dpio_dev->intr_handle, 0);
+ ret = rte_dpaa2_intr_disable(dpio_dev->intr_handle, 0);
if (ret)
DPAA2_BUS_ERR("DPIO interrupt disable failed");
@@ -388,6 +388,13 @@ dpaa2_create_dpio_device(int vdev_fd,
/* Using single portal for all devices */
dpio_dev->mc_portal = dpaa2_get_mcp_ptr(MC_PORTAL_INDEX);
+ /* Allocate interrupt instance */
+ dpio_dev->intr_handle = rte_intr_instance_alloc();
+ if (!dpio_dev->intr_handle) {
+ DPAA2_BUS_ERR("Failed to allocate intr handle");
+ goto err;
+ }
+
dpio_dev->dpio = rte_zmalloc(NULL, sizeof(struct fsl_mc_io),
RTE_CACHE_LINE_SIZE);
if (!dpio_dev->dpio) {
@@ -490,7 +497,7 @@ dpaa2_create_dpio_device(int vdev_fd,
io_space_count++;
dpio_dev->index = io_space_count;
- if (rte_dpaa2_vfio_setup_intr(&dpio_dev->intr_handle, vdev_fd, 1)) {
+ if (rte_dpaa2_vfio_setup_intr(dpio_dev->intr_handle, vdev_fd, 1)) {
DPAA2_BUS_ERR("Fail to setup interrupt for %d",
dpio_dev->hw_id);
goto err;
@@ -538,6 +545,8 @@ dpaa2_create_dpio_device(int vdev_fd,
rte_free(dpio_dev->dpio);
}
+ if (dpio_dev->intr_handle)
+ rte_intr_instance_free(dpio_dev->intr_handle);
rte_free(dpio_dev);
/* For each element in the list, cleanup */
@@ -549,6 +558,8 @@ dpaa2_create_dpio_device(int vdev_fd,
dpio_dev->token);
rte_free(dpio_dev->dpio);
}
+ if (dpio_dev->intr_handle)
+ rte_intr_instance_free(dpio_dev->intr_handle);
rte_free(dpio_dev);
}
diff --git a/drivers/bus/fslmc/portal/dpaa2_hw_pvt.h b/drivers/bus/fslmc/portal/dpaa2_hw_pvt.h
index 037c841ef5..b1bba1ac36 100644
--- a/drivers/bus/fslmc/portal/dpaa2_hw_pvt.h
+++ b/drivers/bus/fslmc/portal/dpaa2_hw_pvt.h
@@ -116,7 +116,7 @@ struct dpaa2_dpio_dev {
uintptr_t qbman_portal_ci_paddr;
/**< Physical address of Cache Inhibit Area */
uintptr_t ci_size; /**< Size of the CI region */
- struct rte_intr_handle intr_handle; /* Interrupt related info */
+ struct rte_intr_handle *intr_handle; /* Interrupt related info */
int32_t epoll_fd; /**< File descriptor created for interrupt polling */
int32_t hw_id; /**< An unique ID of this DPIO device instance */
struct dpaa2_portal_dqrr dpaa2_held_bufs;
diff --git a/drivers/bus/fslmc/rte_fslmc.h b/drivers/bus/fslmc/rte_fslmc.h
index a71cac7a9f..729f360646 100644
--- a/drivers/bus/fslmc/rte_fslmc.h
+++ b/drivers/bus/fslmc/rte_fslmc.h
@@ -122,7 +122,7 @@ struct rte_dpaa2_device {
};
enum rte_dpaa2_dev_type dev_type; /**< Device Type */
uint16_t object_id; /**< DPAA2 Object ID */
- struct rte_intr_handle intr_handle; /**< Interrupt handle */
+ struct rte_intr_handle *intr_handle; /**< Interrupt handle */
struct rte_dpaa2_driver *driver; /**< Associated driver */
char name[FSLMC_OBJECT_MAX_LEN]; /**< DPAA2 Object name*/
};
diff --git a/drivers/bus/ifpga/ifpga_bus.c b/drivers/bus/ifpga/ifpga_bus.c
index 62887da2d8..afddffde03 100644
--- a/drivers/bus/ifpga/ifpga_bus.c
+++ b/drivers/bus/ifpga/ifpga_bus.c
@@ -161,6 +161,13 @@ ifpga_scan_one(struct rte_rawdev *rawdev,
afu_dev->id.uuid.uuid_high = 0;
afu_dev->id.port = afu_pr_conf.afu_id.port;
+ /* Allocate interrupt instance */
+ afu_dev->intr_handle = rte_intr_instance_alloc();
+ if (!afu_dev->intr_handle) {
+ IFPGA_BUS_ERR("Failed to allocate intr handle");
+ goto end;
+ }
+
if (rawdev->dev_ops && rawdev->dev_ops->dev_info_get)
rawdev->dev_ops->dev_info_get(rawdev, afu_dev, sizeof(*afu_dev));
@@ -189,8 +196,11 @@ ifpga_scan_one(struct rte_rawdev *rawdev,
rte_kvargs_free(kvlist);
if (path)
free(path);
- if (afu_dev)
+ if (afu_dev) {
+ if (afu_dev->intr_handle)
+ rte_intr_instance_free(afu_dev->intr_handle);
free(afu_dev);
+ }
return NULL;
}
@@ -396,6 +406,8 @@ ifpga_unplug(struct rte_device *dev)
TAILQ_REMOVE(&ifpga_afu_dev_list, afu_dev, next);
rte_devargs_remove(dev->devargs);
+ if (afu_dev->intr_handle)
+ rte_intr_instance_free(afu_dev->intr_handle);
free(afu_dev);
return 0;
diff --git a/drivers/bus/ifpga/rte_bus_ifpga.h b/drivers/bus/ifpga/rte_bus_ifpga.h
index a85e90d384..007ad19875 100644
--- a/drivers/bus/ifpga/rte_bus_ifpga.h
+++ b/drivers/bus/ifpga/rte_bus_ifpga.h
@@ -79,7 +79,7 @@ struct rte_afu_device {
struct rte_mem_resource mem_resource[PCI_MAX_RESOURCE];
/**< AFU Memory Resource */
struct rte_afu_shared shared;
- struct rte_intr_handle intr_handle; /**< Interrupt handle */
+ struct rte_intr_handle *intr_handle; /**< Interrupt handle */
struct rte_afu_driver *driver; /**< Associated driver */
char path[IFPGA_BUS_BITSTREAM_PATH_MAX_LEN];
} __rte_packed;
diff --git a/drivers/bus/pci/bsd/pci.c b/drivers/bus/pci/bsd/pci.c
index d189bff311..1a46553be0 100644
--- a/drivers/bus/pci/bsd/pci.c
+++ b/drivers/bus/pci/bsd/pci.c
@@ -95,10 +95,11 @@ pci_uio_free_resource(struct rte_pci_device *dev,
{
rte_free(uio_res);
- if (dev->intr_handle.fd) {
- close(dev->intr_handle.fd);
- dev->intr_handle.fd = -1;
- dev->intr_handle.type = RTE_INTR_HANDLE_UNKNOWN;
+ if (rte_intr_fd_get(dev->intr_handle)) {
+ close(rte_intr_fd_get(dev->intr_handle));
+ rte_intr_fd_set(dev->intr_handle, -1);
+ rte_intr_type_set(dev->intr_handle,
+ RTE_INTR_HANDLE_UNKNOWN);
}
}
@@ -121,13 +122,19 @@ pci_uio_alloc_resource(struct rte_pci_device *dev,
}
/* save fd if in primary process */
- dev->intr_handle.fd = open(devname, O_RDWR);
- if (dev->intr_handle.fd < 0) {
+ if (rte_intr_fd_set(dev->intr_handle, open(devname, O_RDWR))) {
+ RTE_LOG(WARNING, EAL, "Failed to save fd");
+ goto error;
+ }
+
+ if (rte_intr_fd_get(dev->intr_handle) < 0) {
RTE_LOG(ERR, EAL, "Cannot open %s: %s\n",
devname, strerror(errno));
goto error;
}
- dev->intr_handle.type = RTE_INTR_HANDLE_UIO;
+
+ if (rte_intr_type_set(dev->intr_handle, RTE_INTR_HANDLE_UIO))
+ goto error;
/* allocate the mapping details for secondary processes*/
*uio_res = rte_zmalloc("UIO_RES", sizeof(**uio_res), 0);
diff --git a/drivers/bus/pci/linux/pci.c b/drivers/bus/pci/linux/pci.c
index 4d261b55ee..e521459870 100644
--- a/drivers/bus/pci/linux/pci.c
+++ b/drivers/bus/pci/linux/pci.c
@@ -645,7 +645,7 @@ int rte_pci_read_config(const struct rte_pci_device *device,
void *buf, size_t len, off_t offset)
{
char devname[RTE_DEV_NAME_MAX_LEN] = "";
- const struct rte_intr_handle *intr_handle = &device->intr_handle;
+ const struct rte_intr_handle *intr_handle = device->intr_handle;
switch (device->kdrv) {
case RTE_PCI_KDRV_IGB_UIO:
@@ -669,7 +669,7 @@ int rte_pci_write_config(const struct rte_pci_device *device,
const void *buf, size_t len, off_t offset)
{
char devname[RTE_DEV_NAME_MAX_LEN] = "";
- const struct rte_intr_handle *intr_handle = &device->intr_handle;
+ const struct rte_intr_handle *intr_handle = device->intr_handle;
switch (device->kdrv) {
case RTE_PCI_KDRV_IGB_UIO:
diff --git a/drivers/bus/pci/linux/pci_uio.c b/drivers/bus/pci/linux/pci_uio.c
index 39ebeac2a0..5aaf604aa4 100644
--- a/drivers/bus/pci/linux/pci_uio.c
+++ b/drivers/bus/pci/linux/pci_uio.c
@@ -35,14 +35,18 @@ int
pci_uio_read_config(const struct rte_intr_handle *intr_handle,
void *buf, size_t len, off_t offset)
{
- return pread(intr_handle->uio_cfg_fd, buf, len, offset);
+ int uio_cfg_fd = rte_intr_dev_fd_get(intr_handle);
+
+ return pread(uio_cfg_fd, buf, len, offset);
}
int
pci_uio_write_config(const struct rte_intr_handle *intr_handle,
const void *buf, size_t len, off_t offset)
{
- return pwrite(intr_handle->uio_cfg_fd, buf, len, offset);
+ int uio_cfg_fd = rte_intr_dev_fd_get(intr_handle);
+
+ return pwrite(uio_cfg_fd, buf, len, offset);
}
static int
@@ -198,16 +202,20 @@ void
pci_uio_free_resource(struct rte_pci_device *dev,
struct mapped_pci_resource *uio_res)
{
+ int uio_cfg_fd = rte_intr_dev_fd_get(dev->intr_handle);
+
rte_free(uio_res);
- if (dev->intr_handle.uio_cfg_fd >= 0) {
- close(dev->intr_handle.uio_cfg_fd);
- dev->intr_handle.uio_cfg_fd = -1;
+ if (uio_cfg_fd >= 0) {
+ close(uio_cfg_fd);
+ rte_intr_dev_fd_set(dev->intr_handle, -1);
}
- if (dev->intr_handle.fd >= 0) {
- close(dev->intr_handle.fd);
- dev->intr_handle.fd = -1;
- dev->intr_handle.type = RTE_INTR_HANDLE_UNKNOWN;
+
+ if (rte_intr_fd_get(dev->intr_handle) >= 0) {
+ close(rte_intr_fd_get(dev->intr_handle));
+ rte_intr_fd_set(dev->intr_handle, -1);
+ rte_intr_type_set(dev->intr_handle,
+ RTE_INTR_HANDLE_UNKNOWN);
}
}
@@ -218,7 +226,7 @@ pci_uio_alloc_resource(struct rte_pci_device *dev,
char dirname[PATH_MAX];
char cfgname[PATH_MAX];
char devname[PATH_MAX]; /* contains the /dev/uioX */
- int uio_num;
+ int uio_num, fd, uio_cfg_fd;
struct rte_pci_addr *loc;
loc = &dev->addr;
@@ -233,29 +241,40 @@ pci_uio_alloc_resource(struct rte_pci_device *dev,
snprintf(devname, sizeof(devname), "/dev/uio%u", uio_num);
/* save fd if in primary process */
- dev->intr_handle.fd = open(devname, O_RDWR);
- if (dev->intr_handle.fd < 0) {
+ fd = open(devname, O_RDWR);
+ if (fd < 0) {
RTE_LOG(ERR, EAL, "Cannot open %s: %s\n",
devname, strerror(errno));
goto error;
}
+ if (rte_intr_fd_set(dev->intr_handle, fd))
+ goto error;
+
snprintf(cfgname, sizeof(cfgname),
"/sys/class/uio/uio%u/device/config", uio_num);
- dev->intr_handle.uio_cfg_fd = open(cfgname, O_RDWR);
- if (dev->intr_handle.uio_cfg_fd < 0) {
+
+ uio_cfg_fd = open(cfgname, O_RDWR);
+ if (uio_cfg_fd < 0) {
RTE_LOG(ERR, EAL, "Cannot open %s: %s\n",
cfgname, strerror(errno));
goto error;
}
- if (dev->kdrv == RTE_PCI_KDRV_IGB_UIO)
- dev->intr_handle.type = RTE_INTR_HANDLE_UIO;
- else {
- dev->intr_handle.type = RTE_INTR_HANDLE_UIO_INTX;
+ if (rte_intr_dev_fd_set(dev->intr_handle, uio_cfg_fd))
+ goto error;
+
+ if (dev->kdrv == RTE_PCI_KDRV_IGB_UIO) {
+ if (rte_intr_type_set(dev->intr_handle,
+ RTE_INTR_HANDLE_UIO))
+ goto error;
+ } else {
+ if (rte_intr_type_set(dev->intr_handle,
+ RTE_INTR_HANDLE_UIO_INTX))
+ goto error;
/* set bus master that is not done by uio_pci_generic */
- if (pci_uio_set_bus_master(dev->intr_handle.uio_cfg_fd)) {
+ if (pci_uio_set_bus_master(uio_cfg_fd)) {
RTE_LOG(ERR, EAL, "Cannot set up bus mastering!\n");
goto error;
}
@@ -381,7 +400,7 @@ pci_uio_ioport_map(struct rte_pci_device *dev, int bar,
char buf[BUFSIZ];
uint64_t phys_addr, end_addr, flags;
unsigned long base;
- int i;
+ int i, fd;
/* open and read addresses of the corresponding resource in sysfs */
snprintf(filename, sizeof(filename), "%s/" PCI_PRI_FMT "/resource",
@@ -427,7 +446,8 @@ pci_uio_ioport_map(struct rte_pci_device *dev, int bar,
}
/* FIXME only for primary process ? */
- if (dev->intr_handle.type == RTE_INTR_HANDLE_UNKNOWN) {
+ if (rte_intr_type_get(dev->intr_handle) ==
+ RTE_INTR_HANDLE_UNKNOWN) {
int uio_num = pci_get_uio_dev(dev, dirname, sizeof(dirname), 0);
if (uio_num < 0) {
RTE_LOG(ERR, EAL, "cannot open %s: %s\n",
@@ -436,13 +456,18 @@ pci_uio_ioport_map(struct rte_pci_device *dev, int bar,
}
snprintf(filename, sizeof(filename), "/dev/uio%u", uio_num);
- dev->intr_handle.fd = open(filename, O_RDWR);
- if (dev->intr_handle.fd < 0) {
+ fd = open(filename, O_RDWR);
+ if (fd < 0) {
RTE_LOG(ERR, EAL, "Cannot open %s: %s\n",
filename, strerror(errno));
goto error;
}
- dev->intr_handle.type = RTE_INTR_HANDLE_UIO;
+ if (rte_intr_fd_set(dev->intr_handle, fd))
+ goto error;
+
+ if (rte_intr_type_set(dev->intr_handle,
+ RTE_INTR_HANDLE_UIO))
+ goto error;
}
RTE_LOG(DEBUG, EAL, "PCI Port IO found start=0x%lx\n", base);
diff --git a/drivers/bus/pci/linux/pci_vfio.c b/drivers/bus/pci/linux/pci_vfio.c
index a024269140..c8da3e2fe8 100644
--- a/drivers/bus/pci/linux/pci_vfio.c
+++ b/drivers/bus/pci/linux/pci_vfio.c
@@ -47,7 +47,9 @@ int
pci_vfio_read_config(const struct rte_intr_handle *intr_handle,
void *buf, size_t len, off_t offs)
{
- return pread64(intr_handle->vfio_dev_fd, buf, len,
+ int vfio_dev_fd = rte_intr_dev_fd_get(intr_handle);
+
+ return pread64(vfio_dev_fd, buf, len,
VFIO_GET_REGION_ADDR(VFIO_PCI_CONFIG_REGION_INDEX) + offs);
}
@@ -55,7 +57,9 @@ int
pci_vfio_write_config(const struct rte_intr_handle *intr_handle,
const void *buf, size_t len, off_t offs)
{
- return pwrite64(intr_handle->vfio_dev_fd, buf, len,
+ int vfio_dev_fd = rte_intr_dev_fd_get(intr_handle);
+
+ return pwrite64(vfio_dev_fd, buf, len,
VFIO_GET_REGION_ADDR(VFIO_PCI_CONFIG_REGION_INDEX) + offs);
}
@@ -281,21 +285,27 @@ pci_vfio_setup_interrupts(struct rte_pci_device *dev, int vfio_dev_fd)
return -1;
}
- dev->intr_handle.fd = fd;
- dev->intr_handle.vfio_dev_fd = vfio_dev_fd;
+ if (rte_intr_fd_set(dev->intr_handle, fd))
+ return -1;
+
+ if (rte_intr_dev_fd_set(dev->intr_handle, vfio_dev_fd))
+ return -1;
switch (i) {
case VFIO_PCI_MSIX_IRQ_INDEX:
intr_mode = RTE_INTR_MODE_MSIX;
- dev->intr_handle.type = RTE_INTR_HANDLE_VFIO_MSIX;
+ rte_intr_type_set(dev->intr_handle,
+ RTE_INTR_HANDLE_VFIO_MSIX);
break;
case VFIO_PCI_MSI_IRQ_INDEX:
intr_mode = RTE_INTR_MODE_MSI;
- dev->intr_handle.type = RTE_INTR_HANDLE_VFIO_MSI;
+ rte_intr_type_set(dev->intr_handle,
+ RTE_INTR_HANDLE_VFIO_MSI);
break;
case VFIO_PCI_INTX_IRQ_INDEX:
intr_mode = RTE_INTR_MODE_LEGACY;
- dev->intr_handle.type = RTE_INTR_HANDLE_VFIO_LEGACY;
+ rte_intr_type_set(dev->intr_handle,
+ RTE_INTR_HANDLE_VFIO_LEGACY);
break;
default:
RTE_LOG(ERR, EAL, "Unknown interrupt type!\n");
@@ -362,11 +372,18 @@ pci_vfio_enable_notifier(struct rte_pci_device *dev, int vfio_dev_fd)
return -1;
}
- dev->vfio_req_intr_handle.fd = fd;
- dev->vfio_req_intr_handle.type = RTE_INTR_HANDLE_VFIO_REQ;
- dev->vfio_req_intr_handle.vfio_dev_fd = vfio_dev_fd;
+ if (rte_intr_fd_set(dev->vfio_req_intr_handle, fd))
+ return -1;
+
+ if (rte_intr_type_set(dev->vfio_req_intr_handle,
+ RTE_INTR_HANDLE_VFIO_REQ))
+ return -1;
+
+ if (rte_intr_dev_fd_set(dev->vfio_req_intr_handle, vfio_dev_fd))
+ return -1;
+
- ret = rte_intr_callback_register(&dev->vfio_req_intr_handle,
+ ret = rte_intr_callback_register(dev->vfio_req_intr_handle,
pci_vfio_req_handler,
(void *)&dev->device);
if (ret) {
@@ -374,10 +391,10 @@ pci_vfio_enable_notifier(struct rte_pci_device *dev, int vfio_dev_fd)
goto error;
}
- ret = rte_intr_enable(&dev->vfio_req_intr_handle);
+ ret = rte_intr_enable(dev->vfio_req_intr_handle);
if (ret) {
RTE_LOG(ERR, EAL, "Fail to enable req notifier.\n");
- ret = rte_intr_callback_unregister(&dev->vfio_req_intr_handle,
+ ret = rte_intr_callback_unregister(dev->vfio_req_intr_handle,
pci_vfio_req_handler,
(void *)&dev->device);
if (ret < 0)
@@ -390,9 +407,10 @@ pci_vfio_enable_notifier(struct rte_pci_device *dev, int vfio_dev_fd)
error:
close(fd);
- dev->vfio_req_intr_handle.fd = -1;
- dev->vfio_req_intr_handle.type = RTE_INTR_HANDLE_UNKNOWN;
- dev->vfio_req_intr_handle.vfio_dev_fd = -1;
+ rte_intr_fd_set(dev->vfio_req_intr_handle, -1);
+ rte_intr_type_set(dev->vfio_req_intr_handle,
+ RTE_INTR_HANDLE_UNKNOWN);
+ rte_intr_dev_fd_set(dev->vfio_req_intr_handle, -1);
return -1;
}
@@ -403,13 +421,13 @@ pci_vfio_disable_notifier(struct rte_pci_device *dev)
{
int ret;
- ret = rte_intr_disable(&dev->vfio_req_intr_handle);
+ ret = rte_intr_disable(dev->vfio_req_intr_handle);
if (ret) {
RTE_LOG(ERR, EAL, "fail to disable req notifier.\n");
return -1;
}
- ret = rte_intr_callback_unregister_sync(&dev->vfio_req_intr_handle,
+ ret = rte_intr_callback_unregister_sync(dev->vfio_req_intr_handle,
pci_vfio_req_handler,
(void *)&dev->device);
if (ret < 0) {
@@ -418,11 +436,12 @@ pci_vfio_disable_notifier(struct rte_pci_device *dev)
return -1;
}
- close(dev->vfio_req_intr_handle.fd);
+ close(rte_intr_fd_get(dev->vfio_req_intr_handle));
- dev->vfio_req_intr_handle.fd = -1;
- dev->vfio_req_intr_handle.type = RTE_INTR_HANDLE_UNKNOWN;
- dev->vfio_req_intr_handle.vfio_dev_fd = -1;
+ rte_intr_fd_set(dev->vfio_req_intr_handle, -1);
+ rte_intr_type_set(dev->vfio_req_intr_handle,
+ RTE_INTR_HANDLE_UNKNOWN);
+ rte_intr_dev_fd_set(dev->vfio_req_intr_handle, -1);
return 0;
}
@@ -705,9 +724,13 @@ pci_vfio_map_resource_primary(struct rte_pci_device *dev)
struct pci_map *maps;
- dev->intr_handle.fd = -1;
+ if (rte_intr_fd_set(dev->intr_handle, -1))
+ return -1;
+
#ifdef HAVE_VFIO_DEV_REQ_INTERFACE
- dev->vfio_req_intr_handle.fd = -1;
+ if (rte_intr_fd_set(dev->vfio_req_intr_handle, -1))
+ return -1;
+
#endif
/* store PCI address string */
@@ -854,9 +877,12 @@ pci_vfio_map_resource_secondary(struct rte_pci_device *dev)
struct pci_map *maps;
- dev->intr_handle.fd = -1;
+ if (rte_intr_fd_set(dev->intr_handle, -1))
+ return -1;
+
#ifdef HAVE_VFIO_DEV_REQ_INTERFACE
- dev->vfio_req_intr_handle.fd = -1;
+ if (rte_intr_fd_set(dev->vfio_req_intr_handle, -1))
+ return -1;
#endif
/* store PCI address string */
@@ -897,9 +923,11 @@ pci_vfio_map_resource_secondary(struct rte_pci_device *dev)
}
/* we need save vfio_dev_fd, so it can be used during release */
- dev->intr_handle.vfio_dev_fd = vfio_dev_fd;
+ if (rte_intr_dev_fd_set(dev->intr_handle, vfio_dev_fd))
+ goto err_vfio_dev_fd;
#ifdef HAVE_VFIO_DEV_REQ_INTERFACE
- dev->vfio_req_intr_handle.vfio_dev_fd = vfio_dev_fd;
+ if (rte_intr_dev_fd_set(dev->vfio_req_intr_handle, vfio_dev_fd))
+ goto err_vfio_dev_fd;
#endif
return 0;
@@ -968,7 +996,7 @@ pci_vfio_unmap_resource_primary(struct rte_pci_device *dev)
struct rte_pci_addr *loc = &dev->addr;
struct mapped_pci_resource *vfio_res = NULL;
struct mapped_pci_res_list *vfio_res_list;
- int ret;
+ int ret, vfio_dev_fd;
/* store PCI address string */
snprintf(pci_addr, sizeof(pci_addr), PCI_PRI_FMT,
@@ -982,20 +1010,21 @@ pci_vfio_unmap_resource_primary(struct rte_pci_device *dev)
}
#endif
- if (close(dev->intr_handle.fd) < 0) {
+ if (close(rte_intr_fd_get(dev->intr_handle)) < 0) {
RTE_LOG(INFO, EAL, "Error when closing eventfd file descriptor for %s\n",
pci_addr);
return -1;
}
- if (pci_vfio_set_bus_master(dev->intr_handle.vfio_dev_fd, false)) {
+ vfio_dev_fd = rte_intr_dev_fd_get(dev->intr_handle);
+ if (pci_vfio_set_bus_master(vfio_dev_fd, false)) {
RTE_LOG(ERR, EAL, "%s cannot unset bus mastering for PCI device!\n",
pci_addr);
return -1;
}
ret = rte_vfio_release_device(rte_pci_get_sysfs_path(), pci_addr,
- dev->intr_handle.vfio_dev_fd);
+ vfio_dev_fd);
if (ret < 0) {
RTE_LOG(ERR, EAL, "Cannot release VFIO device\n");
return ret;
@@ -1024,14 +1053,15 @@ pci_vfio_unmap_resource_secondary(struct rte_pci_device *dev)
struct rte_pci_addr *loc = &dev->addr;
struct mapped_pci_resource *vfio_res = NULL;
struct mapped_pci_res_list *vfio_res_list;
- int ret;
+ int ret, vfio_dev_fd;
/* store PCI address string */
snprintf(pci_addr, sizeof(pci_addr), PCI_PRI_FMT,
loc->domain, loc->bus, loc->devid, loc->function);
+ vfio_dev_fd = rte_intr_dev_fd_get(dev->intr_handle);
ret = rte_vfio_release_device(rte_pci_get_sysfs_path(), pci_addr,
- dev->intr_handle.vfio_dev_fd);
+ vfio_dev_fd);
if (ret < 0) {
RTE_LOG(ERR, EAL, "Cannot release VFIO device\n");
return ret;
@@ -1079,9 +1109,10 @@ void
pci_vfio_ioport_read(struct rte_pci_ioport *p,
void *data, size_t len, off_t offset)
{
- const struct rte_intr_handle *intr_handle = &p->dev->intr_handle;
+ const struct rte_intr_handle *intr_handle = p->dev->intr_handle;
+ int vfio_dev_fd = rte_intr_dev_fd_get(intr_handle);
- if (pread64(intr_handle->vfio_dev_fd, data,
+ if (pread64(vfio_dev_fd, data,
len, p->base + offset) <= 0)
RTE_LOG(ERR, EAL,
"Can't read from PCI bar (%" PRIu64 ") : offset (%x)\n",
@@ -1092,9 +1123,10 @@ void
pci_vfio_ioport_write(struct rte_pci_ioport *p,
const void *data, size_t len, off_t offset)
{
- const struct rte_intr_handle *intr_handle = &p->dev->intr_handle;
+ const struct rte_intr_handle *intr_handle = p->dev->intr_handle;
+ int vfio_dev_fd = rte_intr_dev_fd_get(intr_handle);
- if (pwrite64(intr_handle->vfio_dev_fd, data,
+ if (pwrite64(vfio_dev_fd, data,
len, p->base + offset) <= 0)
RTE_LOG(ERR, EAL,
"Can't write to PCI bar (%" PRIu64 ") : offset (%x)\n",
diff --git a/drivers/bus/pci/pci_common.c b/drivers/bus/pci/pci_common.c
index 3406e03b29..aef99f9f9b 100644
--- a/drivers/bus/pci/pci_common.c
+++ b/drivers/bus/pci/pci_common.c
@@ -230,6 +230,22 @@ rte_pci_probe_one_driver(struct rte_pci_driver *dr,
}
if (!already_probed && (dr->drv_flags & RTE_PCI_DRV_NEED_MAPPING)) {
+ /* Allocate interrupt instance for pci device */
+ dev->intr_handle = rte_intr_instance_alloc();
+ if (!dev->intr_handle) {
+ RTE_LOG(ERR, EAL,
+ "Failed to create interrupt instance for %s\n",
+ dev->device.name);
+ return -ENOMEM;
+ }
+
+ dev->vfio_req_intr_handle = rte_intr_instance_alloc();
+ if (!dev->vfio_req_intr_handle) {
+ RTE_LOG(ERR, EAL,
+ "Failed to create vfio req interrupt instance for %s\n",
+ dev->device.name);
+ return -ENOMEM;
+ }
/* map resources for devices that use igb_uio */
ret = rte_pci_map_device(dev);
if (ret != 0) {
@@ -253,8 +269,12 @@ rte_pci_probe_one_driver(struct rte_pci_driver *dr,
* driver needs mapped resources.
*/
!(ret > 0 &&
- (dr->drv_flags & RTE_PCI_DRV_KEEP_MAPPED_RES)))
+ (dr->drv_flags & RTE_PCI_DRV_KEEP_MAPPED_RES))) {
rte_pci_unmap_device(dev);
+ rte_intr_instance_free(dev->intr_handle);
+ rte_intr_instance_free(
+ dev->vfio_req_intr_handle);
+ }
} else {
dev->device.driver = &dr->driver;
}
@@ -296,9 +316,12 @@ rte_pci_detach_dev(struct rte_pci_device *dev)
dev->driver = NULL;
dev->device.driver = NULL;
- if (dr->drv_flags & RTE_PCI_DRV_NEED_MAPPING)
+ if (dr->drv_flags & RTE_PCI_DRV_NEED_MAPPING) {
/* unmap resources for devices that use igb_uio */
rte_pci_unmap_device(dev);
+ rte_intr_instance_free(dev->intr_handle);
+ rte_intr_instance_free(dev->vfio_req_intr_handle);
+ }
return 0;
}
diff --git a/drivers/bus/pci/pci_common_uio.c b/drivers/bus/pci/pci_common_uio.c
index 318f9a1d55..244c9a8940 100644
--- a/drivers/bus/pci/pci_common_uio.c
+++ b/drivers/bus/pci/pci_common_uio.c
@@ -90,8 +90,11 @@ pci_uio_map_resource(struct rte_pci_device *dev)
struct mapped_pci_res_list *uio_res_list =
RTE_TAILQ_CAST(rte_uio_tailq.head, mapped_pci_res_list);
- dev->intr_handle.fd = -1;
- dev->intr_handle.uio_cfg_fd = -1;
+ if (rte_intr_fd_set(dev->intr_handle, -1))
+ return -1;
+
+ if (rte_intr_dev_fd_set(dev->intr_handle, -1))
+ return -1;
/* secondary processes - use already recorded details */
if (rte_eal_process_type() != RTE_PROC_PRIMARY)
@@ -207,6 +210,7 @@ pci_uio_unmap_resource(struct rte_pci_device *dev)
struct mapped_pci_resource *uio_res;
struct mapped_pci_res_list *uio_res_list =
RTE_TAILQ_CAST(rte_uio_tailq.head, mapped_pci_res_list);
+ int uio_cfg_fd;
if (dev == NULL)
return;
@@ -229,12 +233,13 @@ pci_uio_unmap_resource(struct rte_pci_device *dev)
rte_free(uio_res);
/* close fd if in primary process */
- close(dev->intr_handle.fd);
- if (dev->intr_handle.uio_cfg_fd >= 0) {
- close(dev->intr_handle.uio_cfg_fd);
- dev->intr_handle.uio_cfg_fd = -1;
+ close(rte_intr_fd_get(dev->intr_handle));
+ uio_cfg_fd = rte_intr_dev_fd_get(dev->intr_handle);
+ if (uio_cfg_fd >= 0) {
+ close(uio_cfg_fd);
+ rte_intr_dev_fd_set(dev->intr_handle, -1);
}
- dev->intr_handle.fd = -1;
- dev->intr_handle.type = RTE_INTR_HANDLE_UNKNOWN;
+ rte_intr_fd_set(dev->intr_handle, -1);
+ rte_intr_type_set(dev->intr_handle, RTE_INTR_HANDLE_UNKNOWN);
}
diff --git a/drivers/bus/pci/rte_bus_pci.h b/drivers/bus/pci/rte_bus_pci.h
index 673a2850c1..1c6a8fdd7b 100644
--- a/drivers/bus/pci/rte_bus_pci.h
+++ b/drivers/bus/pci/rte_bus_pci.h
@@ -69,12 +69,12 @@ struct rte_pci_device {
struct rte_pci_id id; /**< PCI ID. */
struct rte_mem_resource mem_resource[PCI_MAX_RESOURCE];
/**< PCI Memory Resource */
- struct rte_intr_handle intr_handle; /**< Interrupt handle */
+ struct rte_intr_handle *intr_handle; /**< Interrupt handle */
struct rte_pci_driver *driver; /**< PCI driver used in probing */
uint16_t max_vfs; /**< sriov enable if not zero */
enum rte_pci_kernel_driver kdrv; /**< Kernel driver passthrough */
char name[PCI_PRI_STR_SIZE+1]; /**< PCI location (ASCII) */
- struct rte_intr_handle vfio_req_intr_handle;
+ struct rte_intr_handle *vfio_req_intr_handle;
/**< Handler of VFIO request interrupt */
};
diff --git a/drivers/bus/vmbus/linux/vmbus_bus.c b/drivers/bus/vmbus/linux/vmbus_bus.c
index 68f6cc5742..bc8ccc24e2 100644
--- a/drivers/bus/vmbus/linux/vmbus_bus.c
+++ b/drivers/bus/vmbus/linux/vmbus_bus.c
@@ -299,6 +299,11 @@ vmbus_scan_one(const char *name)
dev->device.devargs = vmbus_devargs_lookup(dev);
+ /* Allocate interrupt handle instance */
+ dev->intr_handle = rte_intr_instance_alloc();
+ if (!dev->intr_handle)
+ goto error;
+
/* device is valid, add in list (sorted) */
VMBUS_LOG(DEBUG, "Adding vmbus device %s", name);
diff --git a/drivers/bus/vmbus/linux/vmbus_uio.c b/drivers/bus/vmbus/linux/vmbus_uio.c
index 70b0d098e0..7792712a25 100644
--- a/drivers/bus/vmbus/linux/vmbus_uio.c
+++ b/drivers/bus/vmbus/linux/vmbus_uio.c
@@ -30,9 +30,11 @@ static void *vmbus_map_addr;
/* Control interrupts */
void vmbus_uio_irq_control(struct rte_vmbus_device *dev, int32_t onoff)
{
- if (write(dev->intr_handle.fd, &onoff, sizeof(onoff)) < 0) {
+ if (write(rte_intr_fd_get(dev->intr_handle), &onoff,
+ sizeof(onoff)) < 0) {
VMBUS_LOG(ERR, "cannot write to %d:%s",
- dev->intr_handle.fd, strerror(errno));
+ rte_intr_fd_get(dev->intr_handle),
+ strerror(errno));
}
}
@@ -41,7 +43,8 @@ int vmbus_uio_irq_read(struct rte_vmbus_device *dev)
int32_t count;
int cc;
- cc = read(dev->intr_handle.fd, &count, sizeof(count));
+ cc = read(rte_intr_fd_get(dev->intr_handle), &count,
+ sizeof(count));
if (cc < (int)sizeof(count)) {
if (cc < 0) {
VMBUS_LOG(ERR, "IRQ read failed %s",
@@ -61,15 +64,16 @@ vmbus_uio_free_resource(struct rte_vmbus_device *dev,
{
rte_free(uio_res);
- if (dev->intr_handle.uio_cfg_fd >= 0) {
- close(dev->intr_handle.uio_cfg_fd);
- dev->intr_handle.uio_cfg_fd = -1;
+ if (rte_intr_dev_fd_get(dev->intr_handle) >= 0) {
+ close(rte_intr_dev_fd_get(dev->intr_handle));
+ rte_intr_dev_fd_set(dev->intr_handle, -1);
}
- if (dev->intr_handle.fd >= 0) {
- close(dev->intr_handle.fd);
- dev->intr_handle.fd = -1;
- dev->intr_handle.type = RTE_INTR_HANDLE_UNKNOWN;
+ if (rte_intr_fd_get(dev->intr_handle) >= 0) {
+ close(rte_intr_fd_get(dev->intr_handle));
+ rte_intr_fd_set(dev->intr_handle, -1);
+ rte_intr_type_set(dev->intr_handle,
+ RTE_INTR_HANDLE_UNKNOWN);
}
}
@@ -78,16 +82,23 @@ vmbus_uio_alloc_resource(struct rte_vmbus_device *dev,
struct mapped_vmbus_resource **uio_res)
{
char devname[PATH_MAX]; /* contains the /dev/uioX */
+ int fd;
/* save fd if in primary process */
snprintf(devname, sizeof(devname), "/dev/uio%u", dev->uio_num);
- dev->intr_handle.fd = open(devname, O_RDWR);
- if (dev->intr_handle.fd < 0) {
+ fd = open(devname, O_RDWR);
+ if (fd < 0) {
VMBUS_LOG(ERR, "Cannot open %s: %s",
devname, strerror(errno));
goto error;
}
- dev->intr_handle.type = RTE_INTR_HANDLE_UIO_INTX;
+
+ if (rte_intr_fd_set(dev->intr_handle, fd))
+ goto error;
+
+ if (rte_intr_type_set(dev->intr_handle,
+ RTE_INTR_HANDLE_UIO_INTX))
+ goto error;
/* allocate the mapping details for secondary processes*/
*uio_res = rte_zmalloc("UIO_RES", sizeof(**uio_res), 0);
diff --git a/drivers/bus/vmbus/rte_bus_vmbus.h b/drivers/bus/vmbus/rte_bus_vmbus.h
index 6bcff66468..466d42d277 100644
--- a/drivers/bus/vmbus/rte_bus_vmbus.h
+++ b/drivers/bus/vmbus/rte_bus_vmbus.h
@@ -73,7 +73,7 @@ struct rte_vmbus_device {
struct vmbus_channel *primary; /**< VMBUS primary channel */
struct vmbus_mon_page *monitor_page; /**< VMBUS monitor page */
- struct rte_intr_handle intr_handle; /**< Interrupt handle */
+ struct rte_intr_handle *intr_handle; /**< Interrupt handle */
struct rte_mem_resource resource[VMBUS_MAX_RESOURCE];
};
diff --git a/drivers/bus/vmbus/vmbus_common_uio.c b/drivers/bus/vmbus/vmbus_common_uio.c
index 041712fe75..90b34004fa 100644
--- a/drivers/bus/vmbus/vmbus_common_uio.c
+++ b/drivers/bus/vmbus/vmbus_common_uio.c
@@ -171,9 +171,15 @@ vmbus_uio_map_resource(struct rte_vmbus_device *dev)
int ret;
/* TODO: handle rescind */
- dev->intr_handle.fd = -1;
- dev->intr_handle.uio_cfg_fd = -1;
- dev->intr_handle.type = RTE_INTR_HANDLE_UNKNOWN;
+ if (rte_intr_fd_set(dev->intr_handle, -1))
+ return -1;
+
+ if (rte_intr_dev_fd_set(dev->intr_handle, -1))
+ return -1;
+
+ if (rte_intr_type_set(dev->intr_handle,
+ RTE_INTR_HANDLE_UNKNOWN))
+ return -1;
/* secondary processes - use already recorded details */
if (rte_eal_process_type() != RTE_PROC_PRIMARY)
@@ -253,12 +259,12 @@ vmbus_uio_unmap_resource(struct rte_vmbus_device *dev)
rte_free(uio_res);
/* close fd if in primary process */
- close(dev->intr_handle.fd);
- if (dev->intr_handle.uio_cfg_fd >= 0) {
- close(dev->intr_handle.uio_cfg_fd);
- dev->intr_handle.uio_cfg_fd = -1;
+ close(rte_intr_fd_get(dev->intr_handle));
+ if (rte_intr_dev_fd_get(dev->intr_handle) >= 0) {
+ close(rte_intr_dev_fd_get(dev->intr_handle));
+ rte_intr_dev_fd_set(dev->intr_handle, -1);
}
- dev->intr_handle.fd = -1;
- dev->intr_handle.type = RTE_INTR_HANDLE_UNKNOWN;
+ rte_intr_fd_set(dev->intr_handle, -1);
+ rte_intr_type_set(dev->intr_handle, RTE_INTR_HANDLE_UNKNOWN);
}
diff --git a/drivers/common/cnxk/roc_cpt.c b/drivers/common/cnxk/roc_cpt.c
index 74ada6ef42..15f1aae23e 100644
--- a/drivers/common/cnxk/roc_cpt.c
+++ b/drivers/common/cnxk/roc_cpt.c
@@ -65,7 +65,7 @@ cpt_lf_register_misc_irq(struct roc_cpt_lf *lf)
struct plt_intr_handle *handle;
int rc, vec;
- handle = &pci_dev->intr_handle;
+ handle = pci_dev->intr_handle;
vec = lf->msixoff + CPT_LF_INT_VEC_MISC;
/* Clear err interrupt */
@@ -85,7 +85,7 @@ cpt_lf_unregister_misc_irq(struct roc_cpt_lf *lf)
struct plt_intr_handle *handle;
int vec;
- handle = &pci_dev->intr_handle;
+ handle = pci_dev->intr_handle;
vec = lf->msixoff + CPT_LF_INT_VEC_MISC;
/* Clear err interrupt */
@@ -129,7 +129,7 @@ cpt_lf_register_done_irq(struct roc_cpt_lf *lf)
struct plt_intr_handle *handle;
int rc, vec;
- handle = &pci_dev->intr_handle;
+ handle = pci_dev->intr_handle;
vec = lf->msixoff + CPT_LF_INT_VEC_DONE;
@@ -152,7 +152,7 @@ cpt_lf_unregister_done_irq(struct roc_cpt_lf *lf)
struct plt_intr_handle *handle;
int vec;
- handle = &pci_dev->intr_handle;
+ handle = pci_dev->intr_handle;
vec = lf->msixoff + CPT_LF_INT_VEC_DONE;
diff --git a/drivers/common/cnxk/roc_dev.c b/drivers/common/cnxk/roc_dev.c
index ce6980cbe4..926a916e44 100644
--- a/drivers/common/cnxk/roc_dev.c
+++ b/drivers/common/cnxk/roc_dev.c
@@ -641,7 +641,7 @@ roc_af_pf_mbox_irq(void *param)
static int
mbox_register_pf_irq(struct plt_pci_device *pci_dev, struct dev *dev)
{
- struct plt_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct plt_intr_handle *intr_handle = pci_dev->intr_handle;
int i, rc;
/* HW clear irq */
@@ -691,7 +691,7 @@ mbox_register_pf_irq(struct plt_pci_device *pci_dev, struct dev *dev)
static int
mbox_register_vf_irq(struct plt_pci_device *pci_dev, struct dev *dev)
{
- struct plt_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct plt_intr_handle *intr_handle = pci_dev->intr_handle;
int rc;
/* Clear irq */
@@ -724,7 +724,7 @@ mbox_register_irq(struct plt_pci_device *pci_dev, struct dev *dev)
static void
mbox_unregister_pf_irq(struct plt_pci_device *pci_dev, struct dev *dev)
{
- struct plt_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct plt_intr_handle *intr_handle = pci_dev->intr_handle;
int i;
/* HW clear irq */
@@ -755,7 +755,7 @@ mbox_unregister_pf_irq(struct plt_pci_device *pci_dev, struct dev *dev)
static void
mbox_unregister_vf_irq(struct plt_pci_device *pci_dev, struct dev *dev)
{
- struct plt_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct plt_intr_handle *intr_handle = pci_dev->intr_handle;
/* Clear irq */
plt_write64(~0ull, dev->bar2 + RVU_VF_INT_ENA_W1C);
@@ -839,7 +839,7 @@ roc_pf_vf_flr_irq(void *param)
static int
vf_flr_unregister_irqs(struct plt_pci_device *pci_dev, struct dev *dev)
{
- struct plt_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct plt_intr_handle *intr_handle = pci_dev->intr_handle;
int i;
plt_base_dbg("Unregister VF FLR interrupts for %s", pci_dev->name);
@@ -860,7 +860,7 @@ vf_flr_unregister_irqs(struct plt_pci_device *pci_dev, struct dev *dev)
static int
vf_flr_register_irqs(struct plt_pci_device *pci_dev, struct dev *dev)
{
- struct plt_intr_handle *handle = &pci_dev->intr_handle;
+ struct plt_intr_handle *handle = pci_dev->intr_handle;
int i, rc;
plt_base_dbg("Register VF FLR interrupts for %s", pci_dev->name);
@@ -1211,7 +1211,7 @@ dev_init(struct dev *dev, struct plt_pci_device *pci_dev)
int
dev_fini(struct dev *dev, struct plt_pci_device *pci_dev)
{
- struct plt_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct plt_intr_handle *intr_handle = pci_dev->intr_handle;
struct mbox *mbox;
/* Check if this dev hosts npalf and has 1+ refs */
diff --git a/drivers/common/cnxk/roc_irq.c b/drivers/common/cnxk/roc_irq.c
index 28fe691932..b05578e13d 100644
--- a/drivers/common/cnxk/roc_irq.c
+++ b/drivers/common/cnxk/roc_irq.c
@@ -20,11 +20,12 @@ static int
irq_get_info(struct plt_intr_handle *intr_handle)
{
struct vfio_irq_info irq = {.argsz = sizeof(irq)};
- int rc;
+ int rc, vfio_dev_fd;
irq.index = VFIO_PCI_MSIX_IRQ_INDEX;
- rc = ioctl(intr_handle->vfio_dev_fd, VFIO_DEVICE_GET_IRQ_INFO, &irq);
+ vfio_dev_fd = plt_intr_dev_fd_get(intr_handle);
+ rc = ioctl(vfio_dev_fd, VFIO_DEVICE_GET_IRQ_INFO, &irq);
if (rc < 0) {
plt_err("Failed to get IRQ info rc=%d errno=%d", rc, errno);
return rc;
@@ -36,9 +37,11 @@ irq_get_info(struct plt_intr_handle *intr_handle)
if (irq.count > PLT_MAX_RXTX_INTR_VEC_ID) {
plt_err("HW max=%d > PLT_MAX_RXTX_INTR_VEC_ID: %d", irq.count,
PLT_MAX_RXTX_INTR_VEC_ID);
- intr_handle->max_intr = PLT_MAX_RXTX_INTR_VEC_ID;
+ plt_intr_max_intr_set(intr_handle,
+ PLT_MAX_RXTX_INTR_VEC_ID);
} else {
- intr_handle->max_intr = irq.count;
+ if (plt_intr_max_intr_set(intr_handle, irq.count))
+ return -1;
}
return 0;
@@ -49,12 +52,12 @@ irq_config(struct plt_intr_handle *intr_handle, unsigned int vec)
{
char irq_set_buf[MSIX_IRQ_SET_BUF_LEN];
struct vfio_irq_set *irq_set;
+ int len, rc, vfio_dev_fd;
int32_t *fd_ptr;
- int len, rc;
- if (vec > intr_handle->max_intr) {
+ if (vec > (uint32_t)plt_intr_max_intr_get(intr_handle)) {
plt_err("vector=%d greater than max_intr=%d", vec,
- intr_handle->max_intr);
+ plt_intr_max_intr_get(intr_handle));
return -EINVAL;
}
@@ -71,9 +74,10 @@ irq_config(struct plt_intr_handle *intr_handle, unsigned int vec)
/* Use vec fd to set interrupt vectors */
fd_ptr = (int32_t *)&irq_set->data[0];
- fd_ptr[0] = intr_handle->efds[vec];
+ fd_ptr[0] = plt_intr_efds_index_get(intr_handle, vec);
- rc = ioctl(intr_handle->vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
+ vfio_dev_fd = plt_intr_dev_fd_get(intr_handle);
+ rc = ioctl(vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
if (rc)
plt_err("Failed to set_irqs vector=0x%x rc=%d", vec, rc);
@@ -85,23 +89,25 @@ irq_init(struct plt_intr_handle *intr_handle)
{
char irq_set_buf[MSIX_IRQ_SET_BUF_LEN];
struct vfio_irq_set *irq_set;
+ int len, rc, vfio_dev_fd;
int32_t *fd_ptr;
- int len, rc;
uint32_t i;
- if (intr_handle->max_intr > PLT_MAX_RXTX_INTR_VEC_ID) {
+ if (plt_intr_max_intr_get(intr_handle) >
+ PLT_MAX_RXTX_INTR_VEC_ID) {
plt_err("Max_intr=%d greater than PLT_MAX_RXTX_INTR_VEC_ID=%d",
- intr_handle->max_intr, PLT_MAX_RXTX_INTR_VEC_ID);
+ plt_intr_max_intr_get(intr_handle),
+ PLT_MAX_RXTX_INTR_VEC_ID);
return -ERANGE;
}
len = sizeof(struct vfio_irq_set) +
- sizeof(int32_t) * intr_handle->max_intr;
+ sizeof(int32_t) * plt_intr_max_intr_get(intr_handle);
irq_set = (struct vfio_irq_set *)irq_set_buf;
irq_set->argsz = len;
irq_set->start = 0;
- irq_set->count = intr_handle->max_intr;
+ irq_set->count = plt_intr_max_intr_get(intr_handle);
irq_set->flags =
VFIO_IRQ_SET_DATA_EVENTFD | VFIO_IRQ_SET_ACTION_TRIGGER;
irq_set->index = VFIO_PCI_MSIX_IRQ_INDEX;
@@ -110,7 +116,8 @@ irq_init(struct plt_intr_handle *intr_handle)
for (i = 0; i < irq_set->count; i++)
fd_ptr[i] = -1;
- rc = ioctl(intr_handle->vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
+ vfio_dev_fd = plt_intr_dev_fd_get(intr_handle);
+ rc = ioctl(vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
if (rc)
plt_err("Failed to set irqs vector rc=%d", rc);
@@ -121,7 +128,7 @@ int
dev_irqs_disable(struct plt_intr_handle *intr_handle)
{
/* Clear max_intr to indicate re-init next time */
- intr_handle->max_intr = 0;
+ plt_intr_max_intr_set(intr_handle, 0);
return plt_intr_disable(intr_handle);
}
@@ -129,43 +136,53 @@ int
dev_irq_register(struct plt_intr_handle *intr_handle, plt_intr_callback_fn cb,
void *data, unsigned int vec)
{
- struct plt_intr_handle tmp_handle;
- int rc;
+ struct plt_intr_handle *tmp_handle;
+ uint32_t nb_efd, tmp_nb_efd;
+ int rc, fd;
/* If no max_intr read from VFIO */
- if (intr_handle->max_intr == 0) {
+ if (plt_intr_max_intr_get(intr_handle) == 0) {
irq_get_info(intr_handle);
irq_init(intr_handle);
}
- if (vec > intr_handle->max_intr || vec >= PLT_DIM(intr_handle->efds)) {
+ if (vec > (uint32_t)plt_intr_max_intr_get(intr_handle) ||
+ vec >= (uint32_t)plt_intr_nb_intr_get(intr_handle)) {
plt_err("Vector=%d greater than max_intr=%d or "
"max_efd=%" PRIu64,
- vec, intr_handle->max_intr, PLT_DIM(intr_handle->efds));
+ vec, plt_intr_max_intr_get(intr_handle),
+ (uint64_t)plt_intr_nb_intr_get(intr_handle));
return -EINVAL;
}
- tmp_handle = *intr_handle;
+ tmp_handle = intr_handle;
/* Create new eventfd for interrupt vector */
- tmp_handle.fd = eventfd(0, EFD_NONBLOCK | EFD_CLOEXEC);
- if (tmp_handle.fd == -1)
+ fd = eventfd(0, EFD_NONBLOCK | EFD_CLOEXEC);
+ if (fd == -1)
return -ENODEV;
+ if (plt_intr_fd_set(tmp_handle, fd))
+ return errno;
+
/* Register vector interrupt callback */
- rc = plt_intr_callback_register(&tmp_handle, cb, data);
+ rc = plt_intr_callback_register(tmp_handle, cb, data);
if (rc) {
plt_err("Failed to register vector:0x%x irq callback.", vec);
return rc;
}
- intr_handle->efds[vec] = tmp_handle.fd;
- intr_handle->nb_efd =
- (vec > intr_handle->nb_efd) ? vec : intr_handle->nb_efd;
- if ((intr_handle->nb_efd + 1) > intr_handle->max_intr)
- intr_handle->max_intr = intr_handle->nb_efd + 1;
+ plt_intr_efds_index_set(intr_handle, vec, fd);
+ nb_efd = (vec > (uint32_t)plt_intr_nb_efd_get(intr_handle)) ?
+ vec : (uint32_t)plt_intr_nb_efd_get(intr_handle);
+ plt_intr_nb_efd_set(intr_handle, nb_efd);
+
+ tmp_nb_efd = plt_intr_nb_efd_get(intr_handle) + 1;
+ if (tmp_nb_efd > (uint32_t)plt_intr_max_intr_get(intr_handle))
+ plt_intr_max_intr_set(intr_handle, tmp_nb_efd);
plt_base_dbg("Enable vector:0x%x for vfio (efds: %d, max:%d)", vec,
- intr_handle->nb_efd, intr_handle->max_intr);
+ plt_intr_nb_efd_get(intr_handle),
+ plt_intr_max_intr_get(intr_handle));
/* Enable MSIX vectors to VFIO */
return irq_config(intr_handle, vec);
@@ -175,24 +192,27 @@ void
dev_irq_unregister(struct plt_intr_handle *intr_handle, plt_intr_callback_fn cb,
void *data, unsigned int vec)
{
- struct plt_intr_handle tmp_handle;
+ struct plt_intr_handle *tmp_handle;
uint8_t retries = 5; /* 5 ms */
- int rc;
+ int rc, fd;
- if (vec > intr_handle->max_intr) {
+ if (vec > (uint32_t)plt_intr_max_intr_get(intr_handle)) {
plt_err("Error unregistering MSI-X interrupts vec:%d > %d", vec,
- intr_handle->max_intr);
+ plt_intr_max_intr_get(intr_handle));
return;
}
- tmp_handle = *intr_handle;
- tmp_handle.fd = intr_handle->efds[vec];
- if (tmp_handle.fd == -1)
+ tmp_handle = intr_handle;
+ fd = plt_intr_efds_index_get(intr_handle, vec);
+ if (fd == -1)
+ return;
+
+ if (plt_intr_fd_set(tmp_handle, fd))
return;
do {
/* Un-register callback func from platform lib */
- rc = plt_intr_callback_unregister(&tmp_handle, cb, data);
+ rc = plt_intr_callback_unregister(tmp_handle, cb, data);
/* Retry only if -EAGAIN */
if (rc != -EAGAIN)
break;
@@ -206,12 +226,14 @@ dev_irq_unregister(struct plt_intr_handle *intr_handle, plt_intr_callback_fn cb,
}
plt_base_dbg("Disable vector:0x%x for vfio (efds: %d, max:%d)", vec,
- intr_handle->nb_efd, intr_handle->max_intr);
+ plt_intr_nb_efd_get(intr_handle),
+ plt_intr_max_intr_get(intr_handle));
- if (intr_handle->efds[vec] != -1)
- close(intr_handle->efds[vec]);
+ if (plt_intr_efds_index_get(intr_handle, vec) != -1)
+ close(plt_intr_efds_index_get(intr_handle, vec));
/* Disable MSIX vectors from VFIO */
- intr_handle->efds[vec] = -1;
+ plt_intr_efds_index_set(intr_handle, vec, -1);
+
irq_config(intr_handle, vec);
}
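
dev_irq_register()/dev_irq_unregister() no longer work on a stack copy of the handle; they reuse the device instance and stash the per-vector eventfd through the accessors. A condensed sketch of the register path, assuming the plt_intr_* wrappers map 1:1 onto the rte_intr_* accessors added by this series (the helper name is illustrative):

#include <errno.h>
#include <sys/eventfd.h>

/* Sketch: attach 'cb' to MSI-X vector 'vec' of handle 'ih' */
static int
vec_irq_attach(struct plt_intr_handle *ih, plt_intr_callback_fn cb,
	       void *data, unsigned int vec)
{
	int fd, rc;

	fd = eventfd(0, EFD_NONBLOCK | EFD_CLOEXEC);
	if (fd == -1)
		return -ENODEV;

	/* The callback is keyed on the handle's fd, so publish it first */
	if (plt_intr_fd_set(ih, fd))
		return -1;

	rc = plt_intr_callback_register(ih, cb, data);
	if (rc)
		return rc;

	/* Remember the eventfd for this vector and grow the bookkeeping */
	plt_intr_efds_index_set(ih, vec, fd);
	if (vec > (unsigned int)plt_intr_nb_efd_get(ih))
		plt_intr_nb_efd_set(ih, vec);
	if ((unsigned int)plt_intr_nb_efd_get(ih) + 1 >
	    (unsigned int)plt_intr_max_intr_get(ih))
		plt_intr_max_intr_set(ih, plt_intr_nb_efd_get(ih) + 1);

	return 0;
}
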
diff --git a/drivers/common/cnxk/roc_nix_inl_dev_irq.c b/drivers/common/cnxk/roc_nix_inl_dev_irq.c
index 25ed42f875..848523b010 100644
--- a/drivers/common/cnxk/roc_nix_inl_dev_irq.c
+++ b/drivers/common/cnxk/roc_nix_inl_dev_irq.c
@@ -99,7 +99,7 @@ nix_inl_sso_hws_irq(void *param)
int
nix_inl_sso_register_irqs(struct nix_inl_dev *inl_dev)
{
- struct plt_intr_handle *handle = &inl_dev->pci_dev->intr_handle;
+ struct plt_intr_handle *handle = inl_dev->pci_dev->intr_handle;
uintptr_t ssow_base = inl_dev->ssow_base;
uintptr_t sso_base = inl_dev->sso_base;
uint16_t sso_msixoff, ssow_msixoff;
@@ -147,7 +147,7 @@ nix_inl_sso_register_irqs(struct nix_inl_dev *inl_dev)
void
nix_inl_sso_unregister_irqs(struct nix_inl_dev *inl_dev)
{
- struct plt_intr_handle *handle = &inl_dev->pci_dev->intr_handle;
+ struct plt_intr_handle *handle = inl_dev->pci_dev->intr_handle;
uintptr_t ssow_base = inl_dev->ssow_base;
uintptr_t sso_base = inl_dev->sso_base;
uint16_t sso_msixoff, ssow_msixoff;
@@ -282,7 +282,7 @@ nix_inl_nix_err_irq(void *param)
int
nix_inl_nix_register_irqs(struct nix_inl_dev *inl_dev)
{
- struct plt_intr_handle *handle = &inl_dev->pci_dev->intr_handle;
+ struct plt_intr_handle *handle = inl_dev->pci_dev->intr_handle;
uintptr_t nix_base = inl_dev->nix_base;
uint16_t msixoff;
int rc;
@@ -331,7 +331,7 @@ nix_inl_nix_register_irqs(struct nix_inl_dev *inl_dev)
void
nix_inl_nix_unregister_irqs(struct nix_inl_dev *inl_dev)
{
- struct plt_intr_handle *handle = &inl_dev->pci_dev->intr_handle;
+ struct plt_intr_handle *handle = inl_dev->pci_dev->intr_handle;
uintptr_t nix_base = inl_dev->nix_base;
uint16_t msixoff;
diff --git a/drivers/common/cnxk/roc_nix_irq.c b/drivers/common/cnxk/roc_nix_irq.c
index 32be64a9d7..e9aa620abd 100644
--- a/drivers/common/cnxk/roc_nix_irq.c
+++ b/drivers/common/cnxk/roc_nix_irq.c
@@ -82,7 +82,7 @@ nix_lf_err_irq(void *param)
static int
nix_lf_register_err_irq(struct nix *nix)
{
- struct plt_intr_handle *handle = &nix->pci_dev->intr_handle;
+ struct plt_intr_handle *handle = nix->pci_dev->intr_handle;
int rc, vec;
vec = nix->msixoff + NIX_LF_INT_VEC_ERR_INT;
@@ -99,7 +99,7 @@ nix_lf_register_err_irq(struct nix *nix)
static void
nix_lf_unregister_err_irq(struct nix *nix)
{
- struct plt_intr_handle *handle = &nix->pci_dev->intr_handle;
+ struct plt_intr_handle *handle = nix->pci_dev->intr_handle;
int vec;
vec = nix->msixoff + NIX_LF_INT_VEC_ERR_INT;
@@ -131,7 +131,7 @@ nix_lf_ras_irq(void *param)
static int
nix_lf_register_ras_irq(struct nix *nix)
{
- struct plt_intr_handle *handle = &nix->pci_dev->intr_handle;
+ struct plt_intr_handle *handle = nix->pci_dev->intr_handle;
int rc, vec;
vec = nix->msixoff + NIX_LF_INT_VEC_POISON;
@@ -148,7 +148,7 @@ nix_lf_register_ras_irq(struct nix *nix)
static void
nix_lf_unregister_ras_irq(struct nix *nix)
{
- struct plt_intr_handle *handle = &nix->pci_dev->intr_handle;
+ struct plt_intr_handle *handle = nix->pci_dev->intr_handle;
int vec;
vec = nix->msixoff + NIX_LF_INT_VEC_POISON;
@@ -300,7 +300,7 @@ roc_nix_register_queue_irqs(struct roc_nix *roc_nix)
struct nix *nix;
nix = roc_nix_to_nix_priv(roc_nix);
- handle = &nix->pci_dev->intr_handle;
+ handle = nix->pci_dev->intr_handle;
/* Figure out max qintx required */
rqs = PLT_MIN(nix->qints, nix->nb_rx_queues);
@@ -352,7 +352,7 @@ roc_nix_unregister_queue_irqs(struct roc_nix *roc_nix)
int vec, q;
nix = roc_nix_to_nix_priv(roc_nix);
- handle = &nix->pci_dev->intr_handle;
+ handle = nix->pci_dev->intr_handle;
for (q = 0; q < nix->configured_qints; q++) {
vec = nix->msixoff + NIX_LF_INT_VEC_QINT_START + q;
@@ -382,7 +382,7 @@ roc_nix_register_cq_irqs(struct roc_nix *roc_nix)
struct nix *nix;
nix = roc_nix_to_nix_priv(roc_nix);
- handle = &nix->pci_dev->intr_handle;
+ handle = nix->pci_dev->intr_handle;
nix->configured_cints = PLT_MIN(nix->cints, nix->nb_rx_queues);
@@ -414,19 +414,19 @@ roc_nix_register_cq_irqs(struct roc_nix *roc_nix)
return rc;
}
- if (!handle->intr_vec) {
- handle->intr_vec = plt_zmalloc(
- nix->configured_cints * sizeof(int), 0);
- if (!handle->intr_vec) {
- plt_err("Failed to allocate %d rx intr_vec",
- nix->configured_cints);
- return -ENOMEM;
- }
+ rc = plt_intr_vec_list_alloc(handle, "cnxk",
+ nix->configured_cints);
+ if (rc) {
+ plt_err("Fail to allocate intr vec list, rc=%d",
+ rc);
+ return rc;
}
/* VFIO vector zero is resereved for misc interrupt so
* doing required adjustment. (b13bfab4cd)
*/
- handle->intr_vec[q] = PLT_INTR_VEC_RXTX_OFFSET + vec;
+ if (plt_intr_vec_list_index_set(handle, q,
+ PLT_INTR_VEC_RXTX_OFFSET + vec))
+ return -1;
/* Configure CQE interrupt coalescing parameters */
plt_write64(((CQ_CQE_THRESH_DEFAULT) |
@@ -450,7 +450,7 @@ roc_nix_unregister_cq_irqs(struct roc_nix *roc_nix)
int vec, q;
nix = roc_nix_to_nix_priv(roc_nix);
- handle = &nix->pci_dev->intr_handle;
+ handle = nix->pci_dev->intr_handle;
for (q = 0; q < nix->configured_cints; q++) {
vec = nix->msixoff + NIX_LF_INT_VEC_CINT_START + q;
@@ -465,6 +465,8 @@ roc_nix_unregister_cq_irqs(struct roc_nix *roc_nix)
dev_irq_unregister(handle, nix_lf_cq_irq, &nix->cints_mem[q],
vec);
}
+
+ plt_intr_vec_list_free(handle);
plt_free(nix->cints_mem);
}
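
The per-CQ intr_vec array is no longer open-coded with plt_zmalloc(); the handle carries a vector list that is allocated, indexed and freed through three calls. A hedged sketch of the replacement pattern used above (helper names are illustrative; the constants come from the hunk):

/* Sketch: map 'nb_cq' completion queues to VFIO vectors on 'handle'.
 * VFIO vector zero stays reserved for the misc interrupt, hence the
 * PLT_INTR_VEC_RXTX_OFFSET adjustment. */
static int
cq_vectors_setup(struct plt_intr_handle *handle, int msixoff, int nb_cq)
{
	int q, rc;

	rc = plt_intr_vec_list_alloc(handle, "cnxk", nb_cq);
	if (rc)
		return rc;

	for (q = 0; q < nb_cq; q++) {
		int vec = msixoff + NIX_LF_INT_VEC_CINT_START + q;

		if (plt_intr_vec_list_index_set(handle, q,
						PLT_INTR_VEC_RXTX_OFFSET + vec))
			return -1;
	}
	return 0;
}

/* Sketch: teardown collapses to a single call */
static void
cq_vectors_teardown(struct plt_intr_handle *handle)
{
	plt_intr_vec_list_free(handle);
}
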
diff --git a/drivers/common/cnxk/roc_npa.c b/drivers/common/cnxk/roc_npa.c
index a0d2cc8f19..664240ab42 100644
--- a/drivers/common/cnxk/roc_npa.c
+++ b/drivers/common/cnxk/roc_npa.c
@@ -710,7 +710,7 @@ npa_lf_init(struct dev *dev, struct plt_pci_device *pci_dev)
lf->pf_func = dev->pf_func;
lf->npa_msixoff = npa_msixoff;
- lf->intr_handle = &pci_dev->intr_handle;
+ lf->intr_handle = pci_dev->intr_handle;
lf->pci_dev = pci_dev;
idev->npa_pf_func = dev->pf_func;
diff --git a/drivers/common/cnxk/roc_platform.h b/drivers/common/cnxk/roc_platform.h
index 241655b334..c707a7bdf4 100644
--- a/drivers/common/cnxk/roc_platform.h
+++ b/drivers/common/cnxk/roc_platform.h
@@ -103,6 +103,33 @@
#define plt_thread_is_intr rte_thread_is_intr
#define plt_intr_callback_fn rte_intr_callback_fn
+#define plt_intr_efd_counter_size_get rte_intr_efd_counter_size_get
+#define plt_intr_efd_counter_size_set rte_intr_efd_counter_size_set
+#define plt_intr_vec_list_index_get rte_intr_vec_list_index_get
+#define plt_intr_vec_list_index_set rte_intr_vec_list_index_set
+#define plt_intr_vec_list_alloc rte_intr_vec_list_alloc
+#define plt_intr_vec_list_free rte_intr_vec_list_free
+#define plt_intr_fd_set rte_intr_fd_set
+#define plt_intr_fd_get rte_intr_fd_get
+#define plt_intr_dev_fd_get rte_intr_dev_fd_get
+#define plt_intr_dev_fd_set rte_intr_dev_fd_set
+#define plt_intr_type_get rte_intr_type_get
+#define plt_intr_type_set rte_intr_type_set
+#define plt_intr_instance_alloc rte_intr_instance_alloc
+#define plt_intr_instance_copy rte_intr_instance_copy
+#define plt_intr_instance_free rte_intr_instance_free
+#define plt_intr_event_list_update rte_intr_event_list_update
+#define plt_intr_max_intr_get rte_intr_max_intr_get
+#define plt_intr_max_intr_set rte_intr_max_intr_set
+#define plt_intr_nb_efd_get rte_intr_nb_efd_get
+#define plt_intr_nb_efd_set rte_intr_nb_efd_set
+#define plt_intr_nb_intr_get rte_intr_nb_intr_get
+#define plt_intr_nb_intr_set rte_intr_nb_intr_set
+#define plt_intr_efds_index_get rte_intr_efds_index_get
+#define plt_intr_efds_index_set rte_intr_efds_index_set
+#define plt_intr_elist_index_get rte_intr_elist_index_get
+#define plt_intr_elist_index_set rte_intr_elist_index_set
+
#define plt_alarm_set rte_eal_alarm_set
#define plt_alarm_cancel rte_eal_alarm_cancel
@@ -165,7 +192,7 @@ extern int cnxk_logtype_tm;
#define plt_dbg(subsystem, fmt, args...) \
rte_log(RTE_LOG_DEBUG, cnxk_logtype_##subsystem, \
"[%s] %s():%u " fmt "\n", #subsystem, __func__, __LINE__, \
- ##args)
+##args)
#define plt_base_dbg(fmt, ...) plt_dbg(base, fmt, ##__VA_ARGS__)
#define plt_cpt_dbg(fmt, ...) plt_dbg(cpt, fmt, ##__VA_ARGS__)
@@ -185,18 +212,18 @@ extern int cnxk_logtype_tm;
#ifdef __cplusplus
#define CNXK_PCI_ID(subsystem_dev, dev) \
- { \
- RTE_CLASS_ANY_ID, PCI_VENDOR_ID_CAVIUM, (dev), RTE_PCI_ANY_ID, \
- (subsystem_dev), \
- }
+{ \
+ RTE_CLASS_ANY_ID, PCI_VENDOR_ID_CAVIUM, (dev), RTE_PCI_ANY_ID, \
+ (subsystem_dev), \
+}
#else
#define CNXK_PCI_ID(subsystem_dev, dev) \
- { \
- .class_id = RTE_CLASS_ANY_ID, \
- .vendor_id = PCI_VENDOR_ID_CAVIUM, .device_id = (dev), \
- .subsystem_vendor_id = RTE_PCI_ANY_ID, \
- .subsystem_device_id = (subsystem_dev), \
- }
+{ \
+ .class_id = RTE_CLASS_ANY_ID, \
+ .vendor_id = PCI_VENDOR_ID_CAVIUM, .device_id = (dev), \
+ .subsystem_vendor_id = RTE_PCI_ANY_ID, \
+ .subsystem_device_id = (subsystem_dev), \
+}
#endif
__rte_internal
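
The new plt_intr_* aliases keep the common cnxk code platform-neutral: under DPDK they expand to the rte_intr_* accessors introduced by this series, so none of the roc_* files references the EAL names directly. Illustrative example only:

/* Sketch: common code uses the plt_ alias; via roc_platform.h this
 * compiles to rte_intr_max_intr_get(ih) on DPDK builds. */
static inline int
roc_irq_count(struct plt_intr_handle *ih)
{
	return plt_intr_max_intr_get(ih);
}
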
diff --git a/drivers/common/cnxk/roc_sso.c b/drivers/common/cnxk/roc_sso.c
index bdf973fc2a..762893f3dc 100644
--- a/drivers/common/cnxk/roc_sso.c
+++ b/drivers/common/cnxk/roc_sso.c
@@ -505,7 +505,7 @@ roc_sso_rsrc_init(struct roc_sso *roc_sso, uint8_t nb_hws, uint16_t nb_hwgrp)
goto sso_msix_fail;
}
- rc = sso_register_irqs_priv(roc_sso, &sso->pci_dev->intr_handle, nb_hws,
+ rc = sso_register_irqs_priv(roc_sso, sso->pci_dev->intr_handle, nb_hws,
nb_hwgrp);
if (rc < 0) {
plt_err("Failed to register SSO LF IRQs");
@@ -535,7 +535,7 @@ roc_sso_rsrc_fini(struct roc_sso *roc_sso)
if (!roc_sso->nb_hws && !roc_sso->nb_hwgrp)
return;
- sso_unregister_irqs_priv(roc_sso, &sso->pci_dev->intr_handle,
+ sso_unregister_irqs_priv(roc_sso, sso->pci_dev->intr_handle,
roc_sso->nb_hws, roc_sso->nb_hwgrp);
sso_lf_free(&sso->dev, SSO_LF_TYPE_HWS, roc_sso->nb_hws);
sso_lf_free(&sso->dev, SSO_LF_TYPE_HWGRP, roc_sso->nb_hwgrp);
diff --git a/drivers/common/cnxk/roc_tim.c b/drivers/common/cnxk/roc_tim.c
index 387164bb1d..534b697bee 100644
--- a/drivers/common/cnxk/roc_tim.c
+++ b/drivers/common/cnxk/roc_tim.c
@@ -200,7 +200,7 @@ roc_tim_lf_alloc(struct roc_tim *roc_tim, uint8_t ring_id, uint64_t *clk)
if (clk)
*clk = rsp->tenns_clk;
- rc = tim_register_irq_priv(roc_tim, &sso->pci_dev->intr_handle, ring_id,
+ rc = tim_register_irq_priv(roc_tim, sso->pci_dev->intr_handle, ring_id,
tim->tim_msix_offsets[ring_id]);
if (rc < 0) {
plt_tim_dbg("Failed to register Ring[%d] IRQ", ring_id);
@@ -223,7 +223,7 @@ roc_tim_lf_free(struct roc_tim *roc_tim, uint8_t ring_id)
struct tim_ring_req *req;
int rc = -ENOSPC;
- tim_unregister_irq_priv(roc_tim, &sso->pci_dev->intr_handle, ring_id,
+ tim_unregister_irq_priv(roc_tim, sso->pci_dev->intr_handle, ring_id,
tim->tim_msix_offsets[ring_id]);
req = mbox_alloc_msg_tim_lf_free(dev->mbox);
diff --git a/drivers/common/octeontx2/otx2_dev.c b/drivers/common/octeontx2/otx2_dev.c
index ce4f0e7ca9..08dca87848 100644
--- a/drivers/common/octeontx2/otx2_dev.c
+++ b/drivers/common/octeontx2/otx2_dev.c
@@ -643,7 +643,7 @@ otx2_af_pf_mbox_irq(void *param)
static int
mbox_register_pf_irq(struct rte_pci_device *pci_dev, struct otx2_dev *dev)
{
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
int i, rc;
/* HW clear irq */
@@ -693,7 +693,7 @@ mbox_register_pf_irq(struct rte_pci_device *pci_dev, struct otx2_dev *dev)
static int
mbox_register_vf_irq(struct rte_pci_device *pci_dev, struct otx2_dev *dev)
{
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
int rc;
/* Clear irq */
@@ -726,7 +726,7 @@ mbox_register_irq(struct rte_pci_device *pci_dev, struct otx2_dev *dev)
static void
mbox_unregister_pf_irq(struct rte_pci_device *pci_dev, struct otx2_dev *dev)
{
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
int i;
/* HW clear irq */
@@ -758,7 +758,7 @@ mbox_unregister_pf_irq(struct rte_pci_device *pci_dev, struct otx2_dev *dev)
static void
mbox_unregister_vf_irq(struct rte_pci_device *pci_dev, struct otx2_dev *dev)
{
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
/* Clear irq */
otx2_write64(~0ull, dev->bar2 + RVU_VF_INT_ENA_W1C);
@@ -841,7 +841,7 @@ otx2_pf_vf_flr_irq(void *param)
static int
vf_flr_unregister_irqs(struct rte_pci_device *pci_dev, struct otx2_dev *dev)
{
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
int i;
otx2_base_dbg("Unregister VF FLR interrupts for %s", pci_dev->name);
@@ -862,7 +862,7 @@ vf_flr_unregister_irqs(struct rte_pci_device *pci_dev, struct otx2_dev *dev)
static int
vf_flr_register_irqs(struct rte_pci_device *pci_dev, struct otx2_dev *dev)
{
- struct rte_intr_handle *handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *handle = pci_dev->intr_handle;
int i, rc;
otx2_base_dbg("Register VF FLR interrupts for %s", pci_dev->name);
@@ -1039,7 +1039,7 @@ otx2_dev_priv_init(struct rte_pci_device *pci_dev, void *otx2_dev)
void
otx2_dev_fini(struct rte_pci_device *pci_dev, void *otx2_dev)
{
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
struct otx2_dev *dev = otx2_dev;
struct otx2_idev_cfg *idev;
struct otx2_mbox *mbox;
diff --git a/drivers/common/octeontx2/otx2_irq.c b/drivers/common/octeontx2/otx2_irq.c
index c0137ff36d..93fc95c0e1 100644
--- a/drivers/common/octeontx2/otx2_irq.c
+++ b/drivers/common/octeontx2/otx2_irq.c
@@ -26,11 +26,12 @@ static int
irq_get_info(struct rte_intr_handle *intr_handle)
{
struct vfio_irq_info irq = { .argsz = sizeof(irq) };
- int rc;
+ int rc, vfio_dev_fd;
irq.index = VFIO_PCI_MSIX_IRQ_INDEX;
- rc = ioctl(intr_handle->vfio_dev_fd, VFIO_DEVICE_GET_IRQ_INFO, &irq);
+ vfio_dev_fd = rte_intr_dev_fd_get(intr_handle);
+ rc = ioctl(vfio_dev_fd, VFIO_DEVICE_GET_IRQ_INFO, &irq);
if (rc < 0) {
otx2_err("Failed to get IRQ info rc=%d errno=%d", rc, errno);
return rc;
@@ -41,10 +42,13 @@ irq_get_info(struct rte_intr_handle *intr_handle)
if (irq.count > MAX_INTR_VEC_ID) {
otx2_err("HW max=%d > MAX_INTR_VEC_ID: %d",
- intr_handle->max_intr, MAX_INTR_VEC_ID);
- intr_handle->max_intr = MAX_INTR_VEC_ID;
+ rte_intr_max_intr_get(intr_handle),
+ MAX_INTR_VEC_ID);
+ if (rte_intr_max_intr_set(intr_handle, MAX_INTR_VEC_ID))
+ return -1;
} else {
- intr_handle->max_intr = irq.count;
+ if (rte_intr_max_intr_set(intr_handle, irq.count))
+ return -1;
}
return 0;
@@ -55,12 +59,12 @@ irq_config(struct rte_intr_handle *intr_handle, unsigned int vec)
{
char irq_set_buf[MSIX_IRQ_SET_BUF_LEN];
struct vfio_irq_set *irq_set;
+ int len, rc, vfio_dev_fd;
int32_t *fd_ptr;
- int len, rc;
- if (vec > intr_handle->max_intr) {
+ if (vec > (uint32_t)rte_intr_max_intr_get(intr_handle)) {
otx2_err("vector=%d greater than max_intr=%d", vec,
- intr_handle->max_intr);
+ rte_intr_max_intr_get(intr_handle));
return -EINVAL;
}
@@ -77,9 +81,10 @@ irq_config(struct rte_intr_handle *intr_handle, unsigned int vec)
/* Use vec fd to set interrupt vectors */
fd_ptr = (int32_t *)&irq_set->data[0];
- fd_ptr[0] = intr_handle->efds[vec];
+ fd_ptr[0] = rte_intr_efds_index_get(intr_handle, vec);
- rc = ioctl(intr_handle->vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
+ vfio_dev_fd = rte_intr_dev_fd_get(intr_handle);
+ rc = ioctl(vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
if (rc)
otx2_err("Failed to set_irqs vector=0x%x rc=%d", vec, rc);
@@ -91,23 +96,24 @@ irq_init(struct rte_intr_handle *intr_handle)
{
char irq_set_buf[MSIX_IRQ_SET_BUF_LEN];
struct vfio_irq_set *irq_set;
+ int len, rc, vfio_dev_fd;
int32_t *fd_ptr;
- int len, rc;
uint32_t i;
- if (intr_handle->max_intr > MAX_INTR_VEC_ID) {
+ if (rte_intr_max_intr_get(intr_handle) > MAX_INTR_VEC_ID) {
otx2_err("Max_intr=%d greater than MAX_INTR_VEC_ID=%d",
- intr_handle->max_intr, MAX_INTR_VEC_ID);
+ rte_intr_max_intr_get(intr_handle),
+ MAX_INTR_VEC_ID);
return -ERANGE;
}
len = sizeof(struct vfio_irq_set) +
- sizeof(int32_t) * intr_handle->max_intr;
+ sizeof(int32_t) * rte_intr_max_intr_get(intr_handle);
irq_set = (struct vfio_irq_set *)irq_set_buf;
irq_set->argsz = len;
irq_set->start = 0;
- irq_set->count = intr_handle->max_intr;
+ irq_set->count = rte_intr_max_intr_get(intr_handle);
irq_set->flags = VFIO_IRQ_SET_DATA_EVENTFD |
VFIO_IRQ_SET_ACTION_TRIGGER;
irq_set->index = VFIO_PCI_MSIX_IRQ_INDEX;
@@ -116,7 +122,8 @@ irq_init(struct rte_intr_handle *intr_handle)
for (i = 0; i < irq_set->count; i++)
fd_ptr[i] = -1;
- rc = ioctl(intr_handle->vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
+ vfio_dev_fd = rte_intr_dev_fd_get(intr_handle);
+ rc = ioctl(vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
if (rc)
otx2_err("Failed to set irqs vector rc=%d", rc);
@@ -131,7 +138,8 @@ int
otx2_disable_irqs(struct rte_intr_handle *intr_handle)
{
/* Clear max_intr to indicate re-init next time */
- intr_handle->max_intr = 0;
+ if (rte_intr_max_intr_set(intr_handle, 0))
+ return -1;
return rte_intr_disable(intr_handle);
}
@@ -143,42 +151,50 @@ int
otx2_register_irq(struct rte_intr_handle *intr_handle,
rte_intr_callback_fn cb, void *data, unsigned int vec)
{
- struct rte_intr_handle tmp_handle;
- int rc;
+ struct rte_intr_handle *tmp_handle;
+ uint32_t nb_efd, tmp_nb_efd;
+ int rc, fd;
/* If no max_intr read from VFIO */
- if (intr_handle->max_intr == 0) {
+ if (rte_intr_max_intr_get(intr_handle) == 0) {
irq_get_info(intr_handle);
irq_init(intr_handle);
}
- if (vec > intr_handle->max_intr) {
+ if (vec > (uint32_t)rte_intr_max_intr_get(intr_handle)) {
otx2_err("Vector=%d greater than max_intr=%d", vec,
- intr_handle->max_intr);
+ rte_intr_max_intr_get(intr_handle));
return -EINVAL;
}
- tmp_handle = *intr_handle;
+ tmp_handle = intr_handle;
/* Create new eventfd for interrupt vector */
- tmp_handle.fd = eventfd(0, EFD_NONBLOCK | EFD_CLOEXEC);
- if (tmp_handle.fd == -1)
+ fd = eventfd(0, EFD_NONBLOCK | EFD_CLOEXEC);
+ if (fd == -1)
return -ENODEV;
+ if (rte_intr_fd_set(tmp_handle, fd))
+ return errno;
+
/* Register vector interrupt callback */
- rc = rte_intr_callback_register(&tmp_handle, cb, data);
+ rc = rte_intr_callback_register(tmp_handle, cb, data);
if (rc) {
otx2_err("Failed to register vector:0x%x irq callback.", vec);
return rc;
}
- intr_handle->efds[vec] = tmp_handle.fd;
- intr_handle->nb_efd = (vec > intr_handle->nb_efd) ?
- vec : intr_handle->nb_efd;
- if ((intr_handle->nb_efd + 1) > intr_handle->max_intr)
- intr_handle->max_intr = intr_handle->nb_efd + 1;
+ rte_intr_efds_index_set(intr_handle, vec, fd);
+ nb_efd = (vec > (uint32_t)rte_intr_nb_efd_get(intr_handle)) ?
+ vec : (uint32_t)rte_intr_nb_efd_get(intr_handle);
+ rte_intr_nb_efd_set(intr_handle, nb_efd);
+
+ tmp_nb_efd = rte_intr_nb_efd_get(intr_handle) + 1;
+ if (tmp_nb_efd > (uint32_t)rte_intr_max_intr_get(intr_handle))
+ rte_intr_max_intr_set(intr_handle, tmp_nb_efd);
- otx2_base_dbg("Enable vector:0x%x for vfio (efds: %d, max:%d)",
- vec, intr_handle->nb_efd, intr_handle->max_intr);
+ otx2_base_dbg("Enable vector:0x%x for vfio (efds: %d, max:%d)", vec,
+ rte_intr_nb_efd_get(intr_handle),
+ rte_intr_max_intr_get(intr_handle));
/* Enable MSIX vectors to VFIO */
return irq_config(intr_handle, vec);
@@ -192,24 +208,27 @@ void
otx2_unregister_irq(struct rte_intr_handle *intr_handle,
rte_intr_callback_fn cb, void *data, unsigned int vec)
{
- struct rte_intr_handle tmp_handle;
+ struct rte_intr_handle *tmp_handle;
uint8_t retries = 5; /* 5 ms */
- int rc;
+ int rc, fd;
- if (vec > intr_handle->max_intr) {
+ if (vec > (uint32_t)rte_intr_max_intr_get(intr_handle)) {
otx2_err("Error unregistering MSI-X interrupts vec:%d > %d",
- vec, intr_handle->max_intr);
+ vec, rte_intr_max_intr_get(intr_handle));
return;
}
- tmp_handle = *intr_handle;
- tmp_handle.fd = intr_handle->efds[vec];
- if (tmp_handle.fd == -1)
+ tmp_handle = intr_handle;
+ fd = rte_intr_efds_index_get(intr_handle, vec);
+ if (fd == -1)
+ return;
+
+ if (rte_intr_fd_set(tmp_handle, fd))
return;
do {
- /* Un-register callback func from eal lib */
- rc = rte_intr_callback_unregister(&tmp_handle, cb, data);
+ /* Un-register callback func from platform lib */
+ rc = rte_intr_callback_unregister(tmp_handle, cb, data);
/* Retry only if -EAGAIN */
if (rc != -EAGAIN)
break;
@@ -218,18 +237,18 @@ otx2_unregister_irq(struct rte_intr_handle *intr_handle,
} while (retries);
if (rc < 0) {
- otx2_err("Error unregistering MSI-X intr vec %d cb, rc=%d",
- vec, rc);
+ otx2_err("Error unregistering MSI-X vec %d cb, rc=%d", vec, rc);
return;
}
- otx2_base_dbg("Disable vector:0x%x for vfio (efds: %d, max:%d)",
- vec, intr_handle->nb_efd, intr_handle->max_intr);
+ otx2_base_dbg("Disable vector:0x%x for vfio (efds: %d, max:%d)", vec,
+ rte_intr_nb_efd_get(intr_handle),
+ rte_intr_max_intr_get(intr_handle));
- if (intr_handle->efds[vec] != -1)
- close(intr_handle->efds[vec]);
+ if (rte_intr_efds_index_get(intr_handle, vec) != -1)
+ close(rte_intr_efds_index_get(intr_handle, vec));
/* Disable MSIX vectors from VFIO */
- intr_handle->efds[vec] = -1;
+ rte_intr_efds_index_set(intr_handle, vec, -1);
irq_config(intr_handle, vec);
}
diff --git a/drivers/crypto/octeontx2/otx2_cryptodev_hw_access.c b/drivers/crypto/octeontx2/otx2_cryptodev_hw_access.c
index bf90d095fe..d5d6b5bad7 100644
--- a/drivers/crypto/octeontx2/otx2_cryptodev_hw_access.c
+++ b/drivers/crypto/octeontx2/otx2_cryptodev_hw_access.c
@@ -36,7 +36,7 @@ otx2_cpt_lf_err_intr_unregister(const struct rte_cryptodev *dev,
uint16_t msix_off, uintptr_t base)
{
struct rte_pci_device *pci_dev = RTE_DEV_TO_PCI(dev->device);
- struct rte_intr_handle *handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *handle = pci_dev->intr_handle;
/* Disable error interrupts */
otx2_write64(~0ull, base + OTX2_CPT_LF_MISC_INT_ENA_W1C);
@@ -65,7 +65,7 @@ otx2_cpt_lf_err_intr_register(const struct rte_cryptodev *dev,
uint16_t msix_off, uintptr_t base)
{
struct rte_pci_device *pci_dev = RTE_DEV_TO_PCI(dev->device);
- struct rte_intr_handle *handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *handle = pci_dev->intr_handle;
int ret;
/* Disable error interrupts */
diff --git a/drivers/event/octeontx2/otx2_evdev_irq.c b/drivers/event/octeontx2/otx2_evdev_irq.c
index a2033646e6..9b7ad27b04 100644
--- a/drivers/event/octeontx2/otx2_evdev_irq.c
+++ b/drivers/event/octeontx2/otx2_evdev_irq.c
@@ -29,7 +29,7 @@ sso_lf_register_irq(const struct rte_eventdev *event_dev, uint16_t ggrp_msixoff,
uintptr_t base)
{
struct rte_pci_device *pci_dev = RTE_DEV_TO_PCI(event_dev->dev);
- struct rte_intr_handle *handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *handle = pci_dev->intr_handle;
int rc, vec;
vec = ggrp_msixoff + SSO_LF_INT_VEC_GRP;
@@ -66,7 +66,7 @@ ssow_lf_register_irq(const struct rte_eventdev *event_dev, uint16_t gws_msixoff,
uintptr_t base)
{
struct rte_pci_device *pci_dev = RTE_DEV_TO_PCI(event_dev->dev);
- struct rte_intr_handle *handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *handle = pci_dev->intr_handle;
int rc, vec;
vec = gws_msixoff + SSOW_LF_INT_VEC_IOP;
@@ -86,7 +86,7 @@ sso_lf_unregister_irq(const struct rte_eventdev *event_dev,
uint16_t ggrp_msixoff, uintptr_t base)
{
struct rte_pci_device *pci_dev = RTE_DEV_TO_PCI(event_dev->dev);
- struct rte_intr_handle *handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *handle = pci_dev->intr_handle;
int vec;
vec = ggrp_msixoff + SSO_LF_INT_VEC_GRP;
@@ -101,7 +101,7 @@ ssow_lf_unregister_irq(const struct rte_eventdev *event_dev,
uint16_t gws_msixoff, uintptr_t base)
{
struct rte_pci_device *pci_dev = RTE_DEV_TO_PCI(event_dev->dev);
- struct rte_intr_handle *handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *handle = pci_dev->intr_handle;
int vec;
vec = gws_msixoff + SSOW_LF_INT_VEC_IOP;
@@ -198,7 +198,7 @@ static int
tim_lf_register_irq(struct rte_pci_device *pci_dev, uint16_t tim_msixoff,
uintptr_t base)
{
- struct rte_intr_handle *handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *handle = pci_dev->intr_handle;
int rc, vec;
vec = tim_msixoff + TIM_LF_INT_VEC_NRSPERR_INT;
@@ -226,7 +226,7 @@ static void
tim_lf_unregister_irq(struct rte_pci_device *pci_dev, uint16_t tim_msixoff,
uintptr_t base)
{
- struct rte_intr_handle *handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *handle = pci_dev->intr_handle;
int vec;
vec = tim_msixoff + TIM_LF_INT_VEC_NRSPERR_INT;
diff --git a/drivers/mempool/octeontx2/otx2_mempool.c b/drivers/mempool/octeontx2/otx2_mempool.c
index fb630fecf8..f63dc06ef2 100644
--- a/drivers/mempool/octeontx2/otx2_mempool.c
+++ b/drivers/mempool/octeontx2/otx2_mempool.c
@@ -301,7 +301,7 @@ otx2_npa_lf_init(struct rte_pci_device *pci_dev, void *otx2_dev)
lf->pf_func = dev->pf_func;
lf->npa_msixoff = npa_msixoff;
- lf->intr_handle = &pci_dev->intr_handle;
+ lf->intr_handle = pci_dev->intr_handle;
lf->pci_dev = pci_dev;
idev->npa_pf_func = dev->pf_func;
diff --git a/drivers/net/atlantic/atl_ethdev.c b/drivers/net/atlantic/atl_ethdev.c
index 0ce35eb519..a77d51abc4 100644
--- a/drivers/net/atlantic/atl_ethdev.c
+++ b/drivers/net/atlantic/atl_ethdev.c
@@ -360,7 +360,7 @@ eth_atl_dev_init(struct rte_eth_dev *eth_dev)
{
struct atl_adapter *adapter = eth_dev->data->dev_private;
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
struct aq_hw_s *hw = ATL_DEV_PRIVATE_TO_HW(eth_dev->data->dev_private);
int err = 0;
@@ -479,7 +479,7 @@ atl_dev_start(struct rte_eth_dev *dev)
{
struct aq_hw_s *hw = ATL_DEV_PRIVATE_TO_HW(dev->data->dev_private);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
uint32_t intr_vector = 0;
int status;
int err;
@@ -525,10 +525,9 @@ atl_dev_start(struct rte_eth_dev *dev)
}
}
- if (rte_intr_dp_is_en(intr_handle) && !intr_handle->intr_vec) {
- intr_handle->intr_vec = rte_zmalloc("intr_vec",
- dev->data->nb_rx_queues * sizeof(int), 0);
- if (intr_handle->intr_vec == NULL) {
+ if (rte_intr_dp_is_en(intr_handle)) {
+ if (rte_intr_vec_list_alloc(intr_handle, "intr_vec",
+ dev->data->nb_rx_queues)) {
PMD_INIT_LOG(ERR, "Failed to allocate %d rx_queues"
" intr_vec", dev->data->nb_rx_queues);
return -ENOMEM;
@@ -608,7 +607,7 @@ atl_dev_stop(struct rte_eth_dev *dev)
struct aq_hw_s *hw =
ATL_DEV_PRIVATE_TO_HW(dev->data->dev_private);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
PMD_INIT_FUNC_TRACE();
dev->data->dev_started = 0;
@@ -638,10 +637,7 @@ atl_dev_stop(struct rte_eth_dev *dev)
/* Clean datapath event and queue/vec mapping */
rte_intr_efd_disable(intr_handle);
- if (intr_handle->intr_vec != NULL) {
- rte_free(intr_handle->intr_vec);
- intr_handle->intr_vec = NULL;
- }
+ rte_intr_vec_list_free(intr_handle);
return 0;
}
@@ -692,7 +688,7 @@ static int
atl_dev_close(struct rte_eth_dev *dev)
{
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
struct aq_hw_s *hw;
int ret;
diff --git a/drivers/net/avp/avp_ethdev.c b/drivers/net/avp/avp_ethdev.c
index 6cb8bb4338..60b896cf7a 100644
--- a/drivers/net/avp/avp_ethdev.c
+++ b/drivers/net/avp/avp_ethdev.c
@@ -711,7 +711,7 @@ avp_dev_interrupt_handler(void *data)
status);
/* re-enable UIO interrupt handling */
- ret = rte_intr_ack(&pci_dev->intr_handle);
+ ret = rte_intr_ack(pci_dev->intr_handle);
if (ret < 0) {
PMD_DRV_LOG(ERR, "Failed to re-enable UIO interrupts, ret=%d\n",
ret);
@@ -730,7 +730,7 @@ avp_dev_enable_interrupts(struct rte_eth_dev *eth_dev)
return -EINVAL;
/* enable UIO interrupt handling */
- ret = rte_intr_enable(&pci_dev->intr_handle);
+ ret = rte_intr_enable(pci_dev->intr_handle);
if (ret < 0) {
PMD_DRV_LOG(ERR, "Failed to enable UIO interrupts, ret=%d\n",
ret);
@@ -759,7 +759,7 @@ avp_dev_disable_interrupts(struct rte_eth_dev *eth_dev)
RTE_PTR_ADD(registers, RTE_AVP_INTERRUPT_MASK_OFFSET));
/* enable UIO interrupt handling */
- ret = rte_intr_disable(&pci_dev->intr_handle);
+ ret = rte_intr_disable(pci_dev->intr_handle);
if (ret < 0) {
PMD_DRV_LOG(ERR, "Failed to disable UIO interrupts, ret=%d\n",
ret);
@@ -776,7 +776,7 @@ avp_dev_setup_interrupts(struct rte_eth_dev *eth_dev)
int ret;
/* register a callback handler with UIO for interrupt notifications */
- ret = rte_intr_callback_register(&pci_dev->intr_handle,
+ ret = rte_intr_callback_register(pci_dev->intr_handle,
avp_dev_interrupt_handler,
(void *)eth_dev);
if (ret < 0) {
diff --git a/drivers/net/axgbe/axgbe_ethdev.c b/drivers/net/axgbe/axgbe_ethdev.c
index ebd5411fdd..89e4a3dd71 100644
--- a/drivers/net/axgbe/axgbe_ethdev.c
+++ b/drivers/net/axgbe/axgbe_ethdev.c
@@ -313,7 +313,7 @@ axgbe_dev_interrupt_handler(void *param)
}
}
/* Unmask interrupts since disabled after generation */
- rte_intr_ack(&pdata->pci_dev->intr_handle);
+ rte_intr_ack(pdata->pci_dev->intr_handle);
}
/*
@@ -374,7 +374,7 @@ axgbe_dev_start(struct rte_eth_dev *dev)
}
/* enable uio/vfio intr/eventfd mapping */
- rte_intr_enable(&pdata->pci_dev->intr_handle);
+ rte_intr_enable(pdata->pci_dev->intr_handle);
/* phy start*/
pdata->phy_if.phy_start(pdata);
@@ -404,7 +404,7 @@ axgbe_dev_stop(struct rte_eth_dev *dev)
PMD_INIT_FUNC_TRACE();
- rte_intr_disable(&pdata->pci_dev->intr_handle);
+ rte_intr_disable(pdata->pci_dev->intr_handle);
if (rte_bit_relaxed_get32(AXGBE_STOPPED, &pdata->dev_state))
return 0;
@@ -2323,7 +2323,7 @@ eth_axgbe_dev_init(struct rte_eth_dev *eth_dev)
return ret;
}
- rte_intr_callback_register(&pci_dev->intr_handle,
+ rte_intr_callback_register(pci_dev->intr_handle,
axgbe_dev_interrupt_handler,
(void *)eth_dev);
PMD_INIT_LOG(DEBUG, "port %d vendorID=0x%x deviceID=0x%x",
@@ -2347,8 +2347,8 @@ axgbe_dev_close(struct rte_eth_dev *eth_dev)
axgbe_dev_clear_queues(eth_dev);
/* disable uio intr before callback unregister */
- rte_intr_disable(&pci_dev->intr_handle);
- rte_intr_callback_unregister(&pci_dev->intr_handle,
+ rte_intr_disable(pci_dev->intr_handle);
+ rte_intr_callback_unregister(pci_dev->intr_handle,
axgbe_dev_interrupt_handler,
(void *)eth_dev);
diff --git a/drivers/net/axgbe/axgbe_mdio.c b/drivers/net/axgbe/axgbe_mdio.c
index 4f98e695ae..35ffda84f1 100644
--- a/drivers/net/axgbe/axgbe_mdio.c
+++ b/drivers/net/axgbe/axgbe_mdio.c
@@ -933,7 +933,7 @@ static int __axgbe_phy_config_aneg(struct axgbe_port *pdata)
}
/* Disable auto-negotiation interrupt */
- rte_intr_disable(&pdata->pci_dev->intr_handle);
+ rte_intr_disable(pdata->pci_dev->intr_handle);
/* Start auto-negotiation in a supported mode */
if (axgbe_use_mode(pdata, AXGBE_MODE_KR)) {
@@ -951,7 +951,7 @@ static int __axgbe_phy_config_aneg(struct axgbe_port *pdata)
} else if (axgbe_use_mode(pdata, AXGBE_MODE_SGMII_100)) {
axgbe_set_mode(pdata, AXGBE_MODE_SGMII_100);
} else {
- rte_intr_enable(&pdata->pci_dev->intr_handle);
+ rte_intr_enable(pdata->pci_dev->intr_handle);
return -EINVAL;
}
@@ -964,7 +964,7 @@ static int __axgbe_phy_config_aneg(struct axgbe_port *pdata)
pdata->kx_state = AXGBE_RX_BPA;
/* Re-enable auto-negotiation interrupt */
- rte_intr_enable(&pdata->pci_dev->intr_handle);
+ rte_intr_enable(pdata->pci_dev->intr_handle);
axgbe_an37_enable_interrupts(pdata);
axgbe_an_init(pdata);
diff --git a/drivers/net/bnx2x/bnx2x_ethdev.c b/drivers/net/bnx2x/bnx2x_ethdev.c
index 463886f17a..a34b2f078b 100644
--- a/drivers/net/bnx2x/bnx2x_ethdev.c
+++ b/drivers/net/bnx2x/bnx2x_ethdev.c
@@ -134,7 +134,7 @@ bnx2x_interrupt_handler(void *param)
PMD_DEBUG_PERIODIC_LOG(INFO, sc, "Interrupt handled");
bnx2x_interrupt_action(dev, 1);
- rte_intr_ack(&sc->pci_dev->intr_handle);
+ rte_intr_ack(sc->pci_dev->intr_handle);
}
static void bnx2x_periodic_start(void *param)
@@ -234,10 +234,10 @@ bnx2x_dev_start(struct rte_eth_dev *dev)
}
if (IS_PF(sc)) {
- rte_intr_callback_register(&sc->pci_dev->intr_handle,
+ rte_intr_callback_register(sc->pci_dev->intr_handle,
bnx2x_interrupt_handler, (void *)dev);
- if (rte_intr_enable(&sc->pci_dev->intr_handle))
+ if (rte_intr_enable(sc->pci_dev->intr_handle))
PMD_DRV_LOG(ERR, sc, "rte_intr_enable failed");
}
@@ -262,8 +262,8 @@ bnx2x_dev_stop(struct rte_eth_dev *dev)
bnx2x_dev_rxtx_init_dummy(dev);
if (IS_PF(sc)) {
- rte_intr_disable(&sc->pci_dev->intr_handle);
- rte_intr_callback_unregister(&sc->pci_dev->intr_handle,
+ rte_intr_disable(sc->pci_dev->intr_handle);
+ rte_intr_callback_unregister(sc->pci_dev->intr_handle,
bnx2x_interrupt_handler, (void *)dev);
/* stop the periodic callout */
diff --git a/drivers/net/bnxt/bnxt_ethdev.c b/drivers/net/bnxt/bnxt_ethdev.c
index aa7e7fdc85..f13432ac15 100644
--- a/drivers/net/bnxt/bnxt_ethdev.c
+++ b/drivers/net/bnxt/bnxt_ethdev.c
@@ -735,7 +735,7 @@ static int bnxt_alloc_prev_ring_stats(struct bnxt *bp)
static int bnxt_start_nic(struct bnxt *bp)
{
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(bp->eth_dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
uint32_t intr_vector = 0;
uint32_t queue_id, base = BNXT_MISC_VEC_ID;
uint32_t vec = BNXT_MISC_VEC_ID;
@@ -847,26 +847,24 @@ static int bnxt_start_nic(struct bnxt *bp)
return rc;
}
- if (rte_intr_dp_is_en(intr_handle) && !intr_handle->intr_vec) {
- intr_handle->intr_vec =
- rte_zmalloc("intr_vec",
- bp->eth_dev->data->nb_rx_queues *
- sizeof(int), 0);
- if (intr_handle->intr_vec == NULL) {
+ if (rte_intr_dp_is_en(intr_handle)) {
+ if (rte_intr_vec_list_alloc(intr_handle, "intr_vec",
+ bp->eth_dev->data->nb_rx_queues)) {
PMD_DRV_LOG(ERR, "Failed to allocate %d rx_queues"
" intr_vec", bp->eth_dev->data->nb_rx_queues);
rc = -ENOMEM;
goto err_out;
}
- PMD_DRV_LOG(DEBUG, "intr_handle->intr_vec = %p "
- "intr_handle->nb_efd = %d intr_handle->max_intr = %d\n",
- intr_handle->intr_vec, intr_handle->nb_efd,
- intr_handle->max_intr);
+ PMD_DRV_LOG(DEBUG, "intr_handle->nb_efd = %d "
+ "intr_handle->max_intr = %d\n",
+ rte_intr_nb_efd_get(intr_handle),
+ rte_intr_max_intr_get(intr_handle));
for (queue_id = 0; queue_id < bp->eth_dev->data->nb_rx_queues;
queue_id++) {
- intr_handle->intr_vec[queue_id] =
- vec + BNXT_RX_VEC_START;
- if (vec < base + intr_handle->nb_efd - 1)
+ rte_intr_vec_list_index_set(intr_handle,
+ queue_id, vec + BNXT_RX_VEC_START);
+ if (vec < base + rte_intr_nb_efd_get(intr_handle)
+ - 1)
vec++;
}
}
@@ -1479,7 +1477,7 @@ static int bnxt_dev_stop(struct rte_eth_dev *eth_dev)
{
struct bnxt *bp = eth_dev->data->dev_private;
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
struct rte_eth_link link;
int ret;
@@ -1521,10 +1519,7 @@ static int bnxt_dev_stop(struct rte_eth_dev *eth_dev)
/* Clean queue intr-vector mapping */
rte_intr_efd_disable(intr_handle);
- if (intr_handle->intr_vec != NULL) {
- rte_free(intr_handle->intr_vec);
- intr_handle->intr_vec = NULL;
- }
+ rte_intr_vec_list_free(intr_handle);
bnxt_hwrm_port_clr_stats(bp);
bnxt_free_tx_mbufs(bp);
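
bnxt, like atlantic and e1000 above, swaps the manually allocated intr_vec array in its start/stop paths for the vec-list helpers. A generic, hedged sketch of the resulting PMD pattern (function names are illustrative; the vector numbering mirrors the usual RTE_INTR_VEC_RXTX_OFFSET convention):

#include <errno.h>
#include <rte_ethdev.h>
#include <rte_interrupts.h>

/* Sketch: datapath interrupt setup in dev_start */
static int
pmd_rxq_intr_setup(struct rte_eth_dev *dev, struct rte_intr_handle *ih)
{
	uint32_t q, vec = RTE_INTR_VEC_RXTX_OFFSET;

	if (!rte_intr_dp_is_en(ih))
		return 0;

	if (rte_intr_vec_list_alloc(ih, "intr_vec", dev->data->nb_rx_queues))
		return -ENOMEM;

	for (q = 0; q < dev->data->nb_rx_queues; q++) {
		if (rte_intr_vec_list_index_set(ih, q, vec))
			return -1;
		if (vec < RTE_INTR_VEC_RXTX_OFFSET +
			  (uint32_t)rte_intr_nb_efd_get(ih) - 1)
			vec++;
	}
	return 0;
}

/* Sketch: matching teardown in dev_stop */
static void
pmd_rxq_intr_teardown(struct rte_intr_handle *ih)
{
	rte_intr_efd_disable(ih);
	rte_intr_vec_list_free(ih);
}
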
diff --git a/drivers/net/bnxt/bnxt_irq.c b/drivers/net/bnxt/bnxt_irq.c
index 122a1f9908..508abfc844 100644
--- a/drivers/net/bnxt/bnxt_irq.c
+++ b/drivers/net/bnxt/bnxt_irq.c
@@ -67,7 +67,7 @@ void bnxt_int_handler(void *param)
int bnxt_free_int(struct bnxt *bp)
{
- struct rte_intr_handle *intr_handle = &bp->pdev->intr_handle;
+ struct rte_intr_handle *intr_handle = bp->pdev->intr_handle;
struct bnxt_irq *irq = bp->irq_tbl;
int rc = 0;
@@ -170,7 +170,7 @@ int bnxt_setup_int(struct bnxt *bp)
int bnxt_request_int(struct bnxt *bp)
{
- struct rte_intr_handle *intr_handle = &bp->pdev->intr_handle;
+ struct rte_intr_handle *intr_handle = bp->pdev->intr_handle;
struct bnxt_irq *irq = bp->irq_tbl;
int rc = 0;
diff --git a/drivers/net/dpaa/dpaa_ethdev.c b/drivers/net/dpaa/dpaa_ethdev.c
index 59f4a93b3e..9d570781ac 100644
--- a/drivers/net/dpaa/dpaa_ethdev.c
+++ b/drivers/net/dpaa/dpaa_ethdev.c
@@ -219,7 +219,7 @@ dpaa_eth_dev_configure(struct rte_eth_dev *dev)
PMD_INIT_FUNC_TRACE();
dpaa_dev = container_of(rdev, struct rte_dpaa_device, device);
- intr_handle = &dpaa_dev->intr_handle;
+ intr_handle = dpaa_dev->intr_handle;
__fif = container_of(fif, struct __fman_if, __if);
/* Rx offloads which are enabled by default */
@@ -276,13 +276,14 @@ dpaa_eth_dev_configure(struct rte_eth_dev *dev)
}
/* if the interrupts were configured on this devices*/
- if (intr_handle && intr_handle->fd) {
+ if (intr_handle && rte_intr_fd_get(intr_handle)) {
if (dev->data->dev_conf.intr_conf.lsc != 0)
rte_intr_callback_register(intr_handle,
dpaa_interrupt_handler,
(void *)dev);
- ret = dpaa_intr_enable(__fif->node_name, intr_handle->fd);
+ ret = dpaa_intr_enable(__fif->node_name,
+ rte_intr_fd_get(intr_handle));
if (ret) {
if (dev->data->dev_conf.intr_conf.lsc != 0) {
rte_intr_callback_unregister(intr_handle,
@@ -389,9 +390,10 @@ static void dpaa_interrupt_handler(void *param)
int bytes_read;
dpaa_dev = container_of(rdev, struct rte_dpaa_device, device);
- intr_handle = &dpaa_dev->intr_handle;
+ intr_handle = dpaa_dev->intr_handle;
- bytes_read = read(intr_handle->fd, &buf, sizeof(uint64_t));
+ bytes_read = read(rte_intr_fd_get(intr_handle), &buf,
+ sizeof(uint64_t));
if (bytes_read < 0)
DPAA_PMD_ERR("Error reading eventfd\n");
dpaa_eth_link_update(dev, 0);
@@ -461,7 +463,7 @@ static int dpaa_eth_dev_close(struct rte_eth_dev *dev)
}
dpaa_dev = container_of(rdev, struct rte_dpaa_device, device);
- intr_handle = &dpaa_dev->intr_handle;
+ intr_handle = dpaa_dev->intr_handle;
__fif = container_of(fif, struct __fman_if, __if);
ret = dpaa_eth_dev_stop(dev);
@@ -470,7 +472,7 @@ static int dpaa_eth_dev_close(struct rte_eth_dev *dev)
if (link->link_status && !link->link_autoneg)
dpaa_restart_link_autoneg(__fif->node_name);
- if (intr_handle && intr_handle->fd &&
+ if (intr_handle && rte_intr_fd_get(intr_handle) &&
dev->data->dev_conf.intr_conf.lsc != 0) {
dpaa_intr_disable(__fif->node_name);
rte_intr_callback_unregister(intr_handle,
@@ -1101,20 +1103,33 @@ int dpaa_eth_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
dpaa_dev = container_of(rdev, struct rte_dpaa_device,
device);
- dev->intr_handle = &dpaa_dev->intr_handle;
- dev->intr_handle->intr_vec = rte_zmalloc(NULL,
- dpaa_push_mode_max_queue, 0);
- if (!dev->intr_handle->intr_vec) {
+ dev->intr_handle = dpaa_dev->intr_handle;
+ if (rte_intr_vec_list_alloc(dev->intr_handle,
+ NULL, dpaa_push_mode_max_queue)) {
DPAA_PMD_ERR("intr_vec alloc failed");
return -ENOMEM;
}
- dev->intr_handle->nb_efd = dpaa_push_mode_max_queue;
- dev->intr_handle->max_intr = dpaa_push_mode_max_queue;
+ if (rte_intr_nb_efd_set(dev->intr_handle,
+ dpaa_push_mode_max_queue))
+ return -rte_errno;
+
+ if (rte_intr_max_intr_set(dev->intr_handle,
+ dpaa_push_mode_max_queue))
+ return -rte_errno;
}
- dev->intr_handle->type = RTE_INTR_HANDLE_EXT;
- dev->intr_handle->intr_vec[queue_idx] = queue_idx + 1;
- dev->intr_handle->efds[queue_idx] = q_fd;
+ if (rte_intr_type_set(dev->intr_handle,
+ RTE_INTR_HANDLE_EXT))
+ return -rte_errno;
+
+ if (rte_intr_vec_list_index_set(dev->intr_handle,
+ queue_idx, queue_idx + 1))
+ return -rte_errno;
+
+ if (rte_intr_efds_index_set(dev->intr_handle, queue_idx,
+ q_fd))
+ return -rte_errno;
+
rxq->q_fd = q_fd;
}
rxq->bp_array = rte_dpaa_bpid_info;
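
dpaa reuses the bus-level handle for its push-mode queues: the handle is typed RTE_INTR_HANDLE_EXT and carries one eventfd per Rx queue. A short sketch of that per-queue wiring, assuming the same non-zero-on-failure convention the hunk above relies on (the helper name is illustrative):

#include <rte_errno.h>
#include <rte_interrupts.h>

/* Sketch: publish one event fd per Rx queue on an external handle */
static int
rxq_handle_setup(struct rte_intr_handle *ih, uint16_t queue_idx, int q_fd,
		 uint32_t max_queues)
{
	if (rte_intr_nb_efd_set(ih, max_queues) ||
	    rte_intr_max_intr_set(ih, max_queues))
		return -rte_errno;

	if (rte_intr_type_set(ih, RTE_INTR_HANDLE_EXT))
		return -rte_errno;

	/* Queue 'queue_idx' maps to vector queue_idx + 1; q_fd feeds epoll */
	if (rte_intr_vec_list_index_set(ih, queue_idx, queue_idx + 1) ||
	    rte_intr_efds_index_set(ih, queue_idx, q_fd))
		return -rte_errno;

	return 0;
}
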
diff --git a/drivers/net/dpaa2/dpaa2_ethdev.c b/drivers/net/dpaa2/dpaa2_ethdev.c
index ff8ae89922..f413d629c8 100644
--- a/drivers/net/dpaa2/dpaa2_ethdev.c
+++ b/drivers/net/dpaa2/dpaa2_ethdev.c
@@ -1153,7 +1153,7 @@ dpaa2_dev_start(struct rte_eth_dev *dev)
struct rte_intr_handle *intr_handle;
dpaa2_dev = container_of(rdev, struct rte_dpaa2_device, device);
- intr_handle = &dpaa2_dev->intr_handle;
+ intr_handle = dpaa2_dev->intr_handle;
PMD_INIT_FUNC_TRACE();
@@ -1224,8 +1224,8 @@ dpaa2_dev_start(struct rte_eth_dev *dev)
}
/* if the interrupts were configured on this devices*/
- if (intr_handle && (intr_handle->fd) &&
- (dev->data->dev_conf.intr_conf.lsc != 0)) {
+ if (intr_handle && rte_intr_fd_get(intr_handle) &&
+ dev->data->dev_conf.intr_conf.lsc != 0) {
/* Registering LSC interrupt handler */
rte_intr_callback_register(intr_handle,
dpaa2_interrupt_handler,
@@ -1264,8 +1264,8 @@ dpaa2_dev_stop(struct rte_eth_dev *dev)
PMD_INIT_FUNC_TRACE();
/* reset interrupt callback */
- if (intr_handle && (intr_handle->fd) &&
- (dev->data->dev_conf.intr_conf.lsc != 0)) {
+ if (intr_handle && rte_intr_fd_get(intr_handle) &&
+ dev->data->dev_conf.intr_conf.lsc != 0) {
/*disable dpni irqs */
dpaa2_eth_setup_irqs(dev, 0);
diff --git a/drivers/net/e1000/em_ethdev.c b/drivers/net/e1000/em_ethdev.c
index a0ca371b02..c1060f0c70 100644
--- a/drivers/net/e1000/em_ethdev.c
+++ b/drivers/net/e1000/em_ethdev.c
@@ -237,7 +237,7 @@ static int
eth_em_dev_init(struct rte_eth_dev *eth_dev)
{
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
struct e1000_adapter *adapter =
E1000_DEV_PRIVATE(eth_dev->data->dev_private);
struct e1000_hw *hw =
@@ -525,7 +525,7 @@ eth_em_start(struct rte_eth_dev *dev)
struct e1000_hw *hw =
E1000_DEV_PRIVATE_TO_HW(dev->data->dev_private);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
int ret, mask;
uint32_t intr_vector = 0;
uint32_t *speeds;
@@ -575,12 +575,10 @@ eth_em_start(struct rte_eth_dev *dev)
}
if (rte_intr_dp_is_en(intr_handle)) {
- intr_handle->intr_vec =
- rte_zmalloc("intr_vec",
- dev->data->nb_rx_queues * sizeof(int), 0);
- if (intr_handle->intr_vec == NULL) {
+ if (rte_intr_vec_list_alloc(intr_handle, "intr_vec",
+ dev->data->nb_rx_queues)) {
PMD_INIT_LOG(ERR, "Failed to allocate %d rx_queues"
- " intr_vec", dev->data->nb_rx_queues);
+ " intr_vec", dev->data->nb_rx_queues);
return -ENOMEM;
}
@@ -718,7 +716,7 @@ eth_em_stop(struct rte_eth_dev *dev)
struct rte_eth_link link;
struct e1000_hw *hw = E1000_DEV_PRIVATE_TO_HW(dev->data->dev_private);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
dev->data->dev_started = 0;
@@ -752,10 +750,7 @@ eth_em_stop(struct rte_eth_dev *dev)
/* Clean datapath event and queue/vec mapping */
rte_intr_efd_disable(intr_handle);
- if (intr_handle->intr_vec != NULL) {
- rte_free(intr_handle->intr_vec);
- intr_handle->intr_vec = NULL;
- }
+ rte_intr_vec_list_free(intr_handle);
return 0;
}
@@ -767,7 +762,7 @@ eth_em_close(struct rte_eth_dev *dev)
struct e1000_adapter *adapter =
E1000_DEV_PRIVATE(dev->data->dev_private);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
int ret;
if (rte_eal_process_type() != RTE_PROC_PRIMARY)
@@ -1008,7 +1003,7 @@ eth_em_rx_queue_intr_enable(struct rte_eth_dev *dev, __rte_unused uint16_t queue
{
struct e1000_hw *hw = E1000_DEV_PRIVATE_TO_HW(dev->data->dev_private);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
em_rxq_intr_enable(hw);
rte_intr_ack(intr_handle);
diff --git a/drivers/net/e1000/igb_ethdev.c b/drivers/net/e1000/igb_ethdev.c
index 6510cd7ceb..82ac1dadd2 100644
--- a/drivers/net/e1000/igb_ethdev.c
+++ b/drivers/net/e1000/igb_ethdev.c
@@ -515,7 +515,7 @@ igb_intr_enable(struct rte_eth_dev *dev)
struct e1000_hw *hw =
E1000_DEV_PRIVATE_TO_HW(dev->data->dev_private);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
if (rte_intr_allow_others(intr_handle) &&
dev->data->dev_conf.intr_conf.lsc != 0) {
@@ -532,7 +532,7 @@ igb_intr_disable(struct rte_eth_dev *dev)
struct e1000_hw *hw =
E1000_DEV_PRIVATE_TO_HW(dev->data->dev_private);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
if (rte_intr_allow_others(intr_handle) &&
dev->data->dev_conf.intr_conf.lsc != 0) {
@@ -853,12 +853,12 @@ eth_igb_dev_init(struct rte_eth_dev *eth_dev)
eth_dev->data->port_id, pci_dev->id.vendor_id,
pci_dev->id.device_id);
- rte_intr_callback_register(&pci_dev->intr_handle,
+ rte_intr_callback_register(pci_dev->intr_handle,
eth_igb_interrupt_handler,
(void *)eth_dev);
/* enable uio/vfio intr/eventfd mapping */
- rte_intr_enable(&pci_dev->intr_handle);
+ rte_intr_enable(pci_dev->intr_handle);
/* enable support intr */
igb_intr_enable(eth_dev);
@@ -996,7 +996,7 @@ eth_igbvf_dev_init(struct rte_eth_dev *eth_dev)
eth_dev->data->port_id, pci_dev->id.vendor_id,
pci_dev->id.device_id, "igb_mac_82576_vf");
- intr_handle = &pci_dev->intr_handle;
+ intr_handle = pci_dev->intr_handle;
rte_intr_callback_register(intr_handle,
eth_igbvf_interrupt_handler, eth_dev);
@@ -1200,7 +1200,7 @@ eth_igb_start(struct rte_eth_dev *dev)
struct e1000_adapter *adapter =
E1000_DEV_PRIVATE(dev->data->dev_private);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
int ret, mask;
uint32_t intr_vector = 0;
uint32_t ctrl_ext;
@@ -1259,11 +1259,10 @@ eth_igb_start(struct rte_eth_dev *dev)
return -1;
}
- if (rte_intr_dp_is_en(intr_handle) && !intr_handle->intr_vec) {
- intr_handle->intr_vec =
- rte_zmalloc("intr_vec",
- dev->data->nb_rx_queues * sizeof(int), 0);
- if (intr_handle->intr_vec == NULL) {
+ /* Allocate the vector list */
+ if (rte_intr_dp_is_en(intr_handle)) {
+ if (rte_intr_vec_list_alloc(intr_handle, "intr_vec",
+ dev->data->nb_rx_queues)) {
PMD_INIT_LOG(ERR, "Failed to allocate %d rx_queues"
" intr_vec", dev->data->nb_rx_queues);
return -ENOMEM;
@@ -1422,7 +1421,7 @@ eth_igb_stop(struct rte_eth_dev *dev)
struct e1000_hw *hw = E1000_DEV_PRIVATE_TO_HW(dev->data->dev_private);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
struct rte_eth_link link;
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
struct e1000_adapter *adapter =
E1000_DEV_PRIVATE(dev->data->dev_private);
@@ -1466,10 +1465,7 @@ eth_igb_stop(struct rte_eth_dev *dev)
/* Clean datapath event and queue/vec mapping */
rte_intr_efd_disable(intr_handle);
- if (intr_handle->intr_vec != NULL) {
- rte_free(intr_handle->intr_vec);
- intr_handle->intr_vec = NULL;
- }
+ rte_intr_vec_list_free(intr_handle);
adapter->stopped = true;
dev->data->dev_started = 0;
@@ -1509,7 +1505,7 @@ eth_igb_close(struct rte_eth_dev *dev)
struct e1000_hw *hw = E1000_DEV_PRIVATE_TO_HW(dev->data->dev_private);
struct rte_eth_link link;
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
struct e1000_filter_info *filter_info =
E1000_DEV_PRIVATE_TO_FILTER_INFO(dev->data->dev_private);
int ret;
@@ -1535,10 +1531,8 @@ eth_igb_close(struct rte_eth_dev *dev)
igb_dev_free_queues(dev);
- if (intr_handle->intr_vec) {
- rte_free(intr_handle->intr_vec);
- intr_handle->intr_vec = NULL;
- }
+ /* Cleanup vector list */
+ rte_intr_vec_list_free(intr_handle);
memset(&link, 0, sizeof(link));
rte_eth_linkstatus_set(dev, &link);
@@ -2779,7 +2773,7 @@ static int eth_igb_rxq_interrupt_setup(struct rte_eth_dev *dev)
struct e1000_hw *hw =
E1000_DEV_PRIVATE_TO_HW(dev->data->dev_private);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
int misc_shift = rte_intr_allow_others(intr_handle) ? 1 : 0;
struct rte_eth_dev_info dev_info;
@@ -3296,7 +3290,7 @@ igbvf_dev_start(struct rte_eth_dev *dev)
struct e1000_adapter *adapter =
E1000_DEV_PRIVATE(dev->data->dev_private);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
int ret;
uint32_t intr_vector = 0;
@@ -3327,11 +3321,10 @@ igbvf_dev_start(struct rte_eth_dev *dev)
return ret;
}
- if (rte_intr_dp_is_en(intr_handle) && !intr_handle->intr_vec) {
- intr_handle->intr_vec =
- rte_zmalloc("intr_vec",
- dev->data->nb_rx_queues * sizeof(int), 0);
- if (!intr_handle->intr_vec) {
+ /* Allocate the vector list */
+ if (rte_intr_dp_is_en(intr_handle)) {
+ if (rte_intr_vec_list_alloc(intr_handle, "intr_vec",
+ dev->data->nb_rx_queues)) {
PMD_INIT_LOG(ERR, "Failed to allocate %d rx_queues"
" intr_vec", dev->data->nb_rx_queues);
return -ENOMEM;
@@ -3353,7 +3346,7 @@ static int
igbvf_dev_stop(struct rte_eth_dev *dev)
{
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
struct e1000_adapter *adapter =
E1000_DEV_PRIVATE(dev->data->dev_private);
@@ -3377,10 +3370,9 @@ igbvf_dev_stop(struct rte_eth_dev *dev)
/* Clean datapath event and queue/vec mapping */
rte_intr_efd_disable(intr_handle);
- if (intr_handle->intr_vec) {
- rte_free(intr_handle->intr_vec);
- intr_handle->intr_vec = NULL;
- }
+
+ /* Clean vector list */
+ rte_intr_vec_list_free(intr_handle);
adapter->stopped = true;
dev->data->dev_started = 0;
@@ -3418,7 +3410,7 @@ igbvf_dev_close(struct rte_eth_dev *dev)
memset(&addr, 0, sizeof(addr));
igbvf_default_mac_addr_set(dev, &addr);
- rte_intr_callback_unregister(&pci_dev->intr_handle,
+ rte_intr_callback_unregister(pci_dev->intr_handle,
eth_igbvf_interrupt_handler,
(void *)dev);
@@ -5140,7 +5132,7 @@ eth_igb_rx_queue_intr_disable(struct rte_eth_dev *dev, uint16_t queue_id)
struct e1000_hw *hw =
E1000_DEV_PRIVATE_TO_HW(dev->data->dev_private);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
uint32_t vec = E1000_MISC_VEC_ID;
if (rte_intr_allow_others(intr_handle))
@@ -5160,7 +5152,7 @@ eth_igb_rx_queue_intr_enable(struct rte_eth_dev *dev, uint16_t queue_id)
struct e1000_hw *hw =
E1000_DEV_PRIVATE_TO_HW(dev->data->dev_private);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
uint32_t vec = E1000_MISC_VEC_ID;
if (rte_intr_allow_others(intr_handle))
@@ -5238,7 +5230,7 @@ eth_igb_configure_msix_intr(struct rte_eth_dev *dev)
uint32_t base = E1000_MISC_VEC_ID;
uint32_t misc_shift = 0;
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
/* won't configure msix register if no mapping is done
* between intr vector and event fd
@@ -5279,8 +5271,9 @@ eth_igb_configure_msix_intr(struct rte_eth_dev *dev)
E1000_WRITE_REG(hw, E1000_GPIE, E1000_GPIE_MSIX_MODE |
E1000_GPIE_PBA | E1000_GPIE_EIAME |
E1000_GPIE_NSICR);
- intr_mask = RTE_LEN2MASK(intr_handle->nb_efd, uint32_t) <<
- misc_shift;
+ intr_mask =
+ RTE_LEN2MASK(rte_intr_nb_efd_get(intr_handle),
+ uint32_t) << misc_shift;
if (dev->data->dev_conf.intr_conf.lsc != 0)
intr_mask |= (1 << IGB_MSIX_OTHER_INTR_VEC);
@@ -5298,8 +5291,8 @@ eth_igb_configure_msix_intr(struct rte_eth_dev *dev)
/* use EIAM to auto-mask when MSI-X interrupt
* is asserted, this saves a register write for every interrupt
*/
- intr_mask = RTE_LEN2MASK(intr_handle->nb_efd, uint32_t) <<
- misc_shift;
+ intr_mask = RTE_LEN2MASK(rte_intr_nb_efd_get(intr_handle),
+ uint32_t) << misc_shift;
if (dev->data->dev_conf.intr_conf.lsc != 0)
intr_mask |= (1 << IGB_MSIX_OTHER_INTR_VEC);
@@ -5309,8 +5302,8 @@ eth_igb_configure_msix_intr(struct rte_eth_dev *dev)
for (queue_id = 0; queue_id < dev->data->nb_rx_queues; queue_id++) {
eth_igb_assign_msix_vector(hw, 0, queue_id, vec);
- intr_handle->intr_vec[queue_id] = vec;
- if (vec < base + intr_handle->nb_efd - 1)
+ rte_intr_vec_list_index_set(intr_handle, queue_id, vec);
+ if (vec < base + rte_intr_nb_efd_get(intr_handle) - 1)
vec++;
}
diff --git a/drivers/net/ena/ena_ethdev.c b/drivers/net/ena/ena_ethdev.c
index a82d4b6287..aa6f34ca04 100644
--- a/drivers/net/ena/ena_ethdev.c
+++ b/drivers/net/ena/ena_ethdev.c
@@ -473,7 +473,7 @@ static void ena_config_debug_area(struct ena_adapter *adapter)
static int ena_close(struct rte_eth_dev *dev)
{
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
struct ena_adapter *adapter = dev->data->dev_private;
int ret = 0;
@@ -945,7 +945,7 @@ static int ena_stop(struct rte_eth_dev *dev)
struct ena_adapter *adapter = dev->data->dev_private;
struct ena_com_dev *ena_dev = &adapter->ena_dev;
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
int rc;
/* Cannot free memory in secondary process */
@@ -967,10 +967,9 @@ static int ena_stop(struct rte_eth_dev *dev)
rte_intr_disable(intr_handle);
rte_intr_efd_disable(intr_handle);
- if (intr_handle->intr_vec != NULL) {
- rte_free(intr_handle->intr_vec);
- intr_handle->intr_vec = NULL;
- }
+
+ /* Cleanup vector list */
+ rte_intr_vec_list_free(intr_handle);
rte_intr_enable(intr_handle);
@@ -986,7 +985,7 @@ static int ena_create_io_queue(struct rte_eth_dev *dev, struct ena_ring *ring)
struct ena_adapter *adapter = ring->adapter;
struct ena_com_dev *ena_dev = &adapter->ena_dev;
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
struct ena_com_create_io_ctx ctx =
/* policy set to _HOST just to satisfy icc compiler */
{ ENA_ADMIN_PLACEMENT_POLICY_HOST,
@@ -1006,7 +1005,10 @@ static int ena_create_io_queue(struct rte_eth_dev *dev, struct ena_ring *ring)
ena_qid = ENA_IO_RXQ_IDX(ring->id);
ctx.direction = ENA_COM_IO_QUEUE_DIRECTION_RX;
if (rte_intr_dp_is_en(intr_handle))
- ctx.msix_vector = intr_handle->intr_vec[ring->id];
+ ctx.msix_vector =
+ rte_intr_vec_list_index_get(intr_handle,
+ ring->id);
+
for (i = 0; i < ring->ring_size; i++)
ring->empty_rx_reqs[i] = i;
}
@@ -1663,7 +1665,7 @@ static int eth_ena_dev_init(struct rte_eth_dev *eth_dev)
pci_dev->addr.devid,
pci_dev->addr.function);
- intr_handle = &pci_dev->intr_handle;
+ intr_handle = pci_dev->intr_handle;
adapter->regs = pci_dev->mem_resource[ENA_REGS_BAR].addr;
adapter->dev_mem_base = pci_dev->mem_resource[ENA_MEM_BAR].addr;
@@ -2815,7 +2817,7 @@ static int ena_parse_devargs(struct ena_adapter *adapter,
static int ena_setup_rx_intr(struct rte_eth_dev *dev)
{
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
int rc;
uint16_t vectors_nb, i;
bool rx_intr_requested = dev->data->dev_conf.intr_conf.rxq;
@@ -2842,9 +2844,9 @@ static int ena_setup_rx_intr(struct rte_eth_dev *dev)
goto enable_intr;
}
- intr_handle->intr_vec = rte_zmalloc("intr_vec",
- dev->data->nb_rx_queues * sizeof(*intr_handle->intr_vec), 0);
- if (intr_handle->intr_vec == NULL) {
+ /* Allocate the vector list */
+ if (rte_intr_vec_list_alloc(intr_handle, "intr_vec",
+ dev->data->nb_rx_queues)) {
PMD_DRV_LOG(ERR,
"Failed to allocate interrupt vector for %d queues\n",
dev->data->nb_rx_queues);
@@ -2863,7 +2865,9 @@ static int ena_setup_rx_intr(struct rte_eth_dev *dev)
}
for (i = 0; i < vectors_nb; ++i)
- intr_handle->intr_vec[i] = RTE_INTR_VEC_RXTX_OFFSET + i;
+ if (rte_intr_vec_list_index_set(intr_handle, i,
+ RTE_INTR_VEC_RXTX_OFFSET + i))
+ goto disable_intr_efd;
rte_intr_enable(intr_handle);
return 0;
@@ -2871,8 +2875,7 @@ static int ena_setup_rx_intr(struct rte_eth_dev *dev)
disable_intr_efd:
rte_intr_efd_disable(intr_handle);
free_intr_vec:
- rte_free(intr_handle->intr_vec);
- intr_handle->intr_vec = NULL;
+ rte_intr_vec_list_free(intr_handle);
enable_intr:
rte_intr_enable(intr_handle);
return rc;
diff --git a/drivers/net/enic/enic_main.c b/drivers/net/enic/enic_main.c
index 2affd380c6..b8daf8fb24 100644
--- a/drivers/net/enic/enic_main.c
+++ b/drivers/net/enic/enic_main.c
@@ -448,7 +448,7 @@ enic_intr_handler(void *arg)
rte_eth_dev_callback_process(dev, RTE_ETH_EVENT_INTR_LSC, NULL);
enic_log_q_error(enic);
/* Re-enable irq in case of INTx */
- rte_intr_ack(&enic->pdev->intr_handle);
+ rte_intr_ack(enic->pdev->intr_handle);
}
static int enic_rxq_intr_init(struct enic *enic)
@@ -477,14 +477,16 @@ static int enic_rxq_intr_init(struct enic *enic)
" interrupts\n");
return err;
}
- intr_handle->intr_vec = rte_zmalloc("enic_intr_vec",
- rxq_intr_count * sizeof(int), 0);
- if (intr_handle->intr_vec == NULL) {
+
+ if (rte_intr_vec_list_alloc(intr_handle, "enic_intr_vec",
+ rxq_intr_count)) {
dev_err(enic, "Failed to allocate intr_vec\n");
return -ENOMEM;
}
for (i = 0; i < rxq_intr_count; i++)
- intr_handle->intr_vec[i] = i + ENICPMD_RXQ_INTR_OFFSET;
+ if (rte_intr_vec_list_index_set(intr_handle, i,
+ i + ENICPMD_RXQ_INTR_OFFSET))
+ return -rte_errno;
return 0;
}
@@ -494,10 +496,8 @@ static void enic_rxq_intr_deinit(struct enic *enic)
intr_handle = enic->rte_dev->intr_handle;
rte_intr_efd_disable(intr_handle);
- if (intr_handle->intr_vec != NULL) {
- rte_free(intr_handle->intr_vec);
- intr_handle->intr_vec = NULL;
- }
+
+ rte_intr_vec_list_free(intr_handle);
}
static void enic_prep_wq_for_simple_tx(struct enic *enic, uint16_t queue_idx)
@@ -667,10 +667,10 @@ int enic_enable(struct enic *enic)
vnic_dev_enable_wait(enic->vdev);
/* Register and enable error interrupt */
- rte_intr_callback_register(&(enic->pdev->intr_handle),
+ rte_intr_callback_register(enic->pdev->intr_handle,
enic_intr_handler, (void *)enic->rte_dev);
- rte_intr_enable(&(enic->pdev->intr_handle));
+ rte_intr_enable(enic->pdev->intr_handle);
/* Unmask LSC interrupt */
vnic_intr_unmask(&enic->intr[ENICPMD_LSC_INTR_OFFSET]);
@@ -1112,8 +1112,8 @@ int enic_disable(struct enic *enic)
(void)vnic_intr_masked(&enic->intr[i]); /* flush write */
}
enic_rxq_intr_deinit(enic);
- rte_intr_disable(&enic->pdev->intr_handle);
- rte_intr_callback_unregister(&enic->pdev->intr_handle,
+ rte_intr_disable(enic->pdev->intr_handle);
+ rte_intr_callback_unregister(enic->pdev->intr_handle,
enic_intr_handler,
(void *)enic->rte_dev);
diff --git a/drivers/net/failsafe/failsafe.c b/drivers/net/failsafe/failsafe.c
index b87c036e60..23916a9eed 100644
--- a/drivers/net/failsafe/failsafe.c
+++ b/drivers/net/failsafe/failsafe.c
@@ -264,11 +264,23 @@ fs_eth_dev_create(struct rte_vdev_device *vdev)
RTE_ETHER_ADDR_BYTES(mac));
dev->data->dev_flags |= RTE_ETH_DEV_INTR_LSC |
RTE_ETH_DEV_AUTOFILL_QUEUE_XSTATS;
- PRIV(dev)->intr_handle = (struct rte_intr_handle){
- .fd = -1,
- .type = RTE_INTR_HANDLE_EXT,
- };
+
+ /* Allocate interrupt instance */
+ PRIV(dev)->intr_handle = rte_intr_instance_alloc();
+ if (!PRIV(dev)->intr_handle) {
+ ERROR("Failed to allocate intr handle");
+ goto cancel_alarm;
+ }
+
+ if (rte_intr_fd_set(PRIV(dev)->intr_handle, -1))
+ goto cancel_alarm;
+
+ if (rte_intr_type_set(PRIV(dev)->intr_handle,
+ RTE_INTR_HANDLE_EXT))
+ goto cancel_alarm;
+
rte_eth_dev_probing_finish(dev);
+
return 0;
cancel_alarm:
failsafe_hotplug_alarm_cancel(dev);
@@ -297,6 +309,8 @@ fs_rte_eth_free(const char *name)
return 0; /* port already released */
ret = failsafe_eth_dev_close(dev);
rte_eth_dev_release_port(dev);
+ if (PRIV(dev)->intr_handle)
+ rte_intr_instance_free(PRIV(dev)->intr_handle);
return ret;
}
diff --git a/drivers/net/failsafe/failsafe_intr.c b/drivers/net/failsafe/failsafe_intr.c
index 602c04033c..949af61a47 100644
--- a/drivers/net/failsafe/failsafe_intr.c
+++ b/drivers/net/failsafe/failsafe_intr.c
@@ -410,12 +410,10 @@ fs_rx_intr_vec_uninstall(struct fs_priv *priv)
{
struct rte_intr_handle *intr_handle;
- intr_handle = &priv->intr_handle;
- if (intr_handle->intr_vec != NULL) {
- free(intr_handle->intr_vec);
- intr_handle->intr_vec = NULL;
- }
- intr_handle->nb_efd = 0;
+ intr_handle = priv->intr_handle;
+ rte_intr_vec_list_free(intr_handle);
+
+ rte_intr_nb_efd_set(intr_handle, 0);
}
/**
@@ -439,11 +437,9 @@ fs_rx_intr_vec_install(struct fs_priv *priv)
rxqs_n = priv->data->nb_rx_queues;
n = RTE_MIN(rxqs_n, (uint32_t)RTE_MAX_RXTX_INTR_VEC_ID);
count = 0;
- intr_handle = &priv->intr_handle;
- RTE_ASSERT(intr_handle->intr_vec == NULL);
+ intr_handle = priv->intr_handle;
/* Allocate the interrupt vector of the failsafe Rx proxy interrupts */
- intr_handle->intr_vec = malloc(n * sizeof(intr_handle->intr_vec[0]));
- if (intr_handle->intr_vec == NULL) {
+ if (rte_intr_vec_list_alloc(intr_handle, NULL, n)) {
fs_rx_intr_vec_uninstall(priv);
rte_errno = ENOMEM;
ERROR("Failed to allocate memory for interrupt vector,"
@@ -456,9 +452,9 @@ fs_rx_intr_vec_install(struct fs_priv *priv)
/* Skip queues that cannot request interrupts. */
if (rxq == NULL || rxq->event_fd < 0) {
/* Use invalid intr_vec[] index to disable entry. */
- intr_handle->intr_vec[i] =
- RTE_INTR_VEC_RXTX_OFFSET +
- RTE_MAX_RXTX_INTR_VEC_ID;
+ if (rte_intr_vec_list_index_set(intr_handle, i,
+ RTE_INTR_VEC_RXTX_OFFSET + RTE_MAX_RXTX_INTR_VEC_ID))
+ return -rte_errno;
continue;
}
if (count >= RTE_MAX_RXTX_INTR_VEC_ID) {
@@ -469,15 +465,24 @@ fs_rx_intr_vec_install(struct fs_priv *priv)
fs_rx_intr_vec_uninstall(priv);
return -rte_errno;
}
- intr_handle->intr_vec[i] = RTE_INTR_VEC_RXTX_OFFSET + count;
- intr_handle->efds[count] = rxq->event_fd;
+ if (rte_intr_vec_list_index_set(intr_handle, i,
+ RTE_INTR_VEC_RXTX_OFFSET + count))
+ return -rte_errno;
+
+ if (rte_intr_efds_index_set(intr_handle, count,
+ rxq->event_fd))
+ return -rte_errno;
count++;
}
if (count == 0) {
fs_rx_intr_vec_uninstall(priv);
} else {
- intr_handle->nb_efd = count;
- intr_handle->efd_counter_size = sizeof(uint64_t);
+ if (rte_intr_nb_efd_set(intr_handle, count))
+ return -rte_errno;
+
+ if (rte_intr_efd_counter_size_set(intr_handle,
+ sizeof(uint64_t)))
+ return -rte_errno;
}
return 0;
}
@@ -499,7 +504,7 @@ failsafe_rx_intr_uninstall(struct rte_eth_dev *dev)
struct rte_intr_handle *intr_handle;
priv = PRIV(dev);
- intr_handle = &priv->intr_handle;
+ intr_handle = priv->intr_handle;
rte_intr_free_epoll_fd(intr_handle);
fs_rx_event_proxy_uninstall(priv);
fs_rx_intr_vec_uninstall(priv);
@@ -530,6 +535,6 @@ failsafe_rx_intr_install(struct rte_eth_dev *dev)
fs_rx_intr_vec_uninstall(priv);
return -rte_errno;
}
- dev->intr_handle = &priv->intr_handle;
+ dev->intr_handle = priv->intr_handle;
return 0;
}
diff --git a/drivers/net/failsafe/failsafe_ops.c b/drivers/net/failsafe/failsafe_ops.c
index d0030af061..fe6a7c0c84 100644
--- a/drivers/net/failsafe/failsafe_ops.c
+++ b/drivers/net/failsafe/failsafe_ops.c
@@ -393,15 +393,22 @@ fs_rx_queue_setup(struct rte_eth_dev *dev,
* For the time being, fake as if we are using MSIX interrupts,
* this will cause rte_intr_efd_enable to allocate an eventfd for us.
*/
- struct rte_intr_handle intr_handle = {
- .type = RTE_INTR_HANDLE_VFIO_MSIX,
- .efds = { -1, },
- };
+ struct rte_intr_handle *intr_handle;
struct sub_device *sdev;
struct rxq *rxq;
uint8_t i;
int ret;
+ intr_handle = rte_intr_instance_alloc();
+ if (!intr_handle)
+ return -ENOMEM;
+
+ if (rte_intr_type_set(intr_handle, RTE_INTR_HANDLE_VFIO_MSIX))
+ return -rte_errno;
+
+ if (rte_intr_efds_index_set(intr_handle, 0, -1))
+ return -rte_errno;
+
fs_lock(dev, 0);
if (rx_conf->rx_deferred_start) {
FOREACH_SUBDEV_STATE(sdev, i, dev, DEV_PROBED) {
@@ -435,12 +442,12 @@ fs_rx_queue_setup(struct rte_eth_dev *dev,
rxq->info.nb_desc = nb_rx_desc;
rxq->priv = PRIV(dev);
rxq->sdev = PRIV(dev)->subs;
- ret = rte_intr_efd_enable(&intr_handle, 1);
+ ret = rte_intr_efd_enable(intr_handle, 1);
if (ret < 0) {
fs_unlock(dev, 0);
return ret;
}
- rxq->event_fd = intr_handle.efds[0];
+ rxq->event_fd = rte_intr_efds_index_get(intr_handle, 0);
dev->data->rx_queues[rx_queue_id] = rxq;
FOREACH_SUBDEV_STATE(sdev, i, dev, DEV_ACTIVE) {
ret = rte_eth_rx_queue_setup(PORT_ID(sdev),
@@ -453,10 +460,12 @@ fs_rx_queue_setup(struct rte_eth_dev *dev,
}
}
fs_unlock(dev, 0);
+ rte_intr_instance_free(intr_handle);
return 0;
free_rxq:
fs_rx_queue_release(dev, rx_queue_id);
fs_unlock(dev, 0);
+ rte_intr_instance_free(intr_handle);
return ret;
}
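The failsafe changes just above also show the second half of the conversion: because struct rte_intr_handle becomes opaque with this series, a handle can no longer live on the stack or be embedded in a driver structure, so it turns into a heap object with an explicit lifecycle. A minimal sketch, assuming only the alloc/set/free calls visible in the hunks above (the example_* names are not part of the API):

#include <stddef.h>
#include <rte_interrupts.h>

/* Illustrative: what used to be an embedded or on-stack handle
 * (e.g. 'struct rte_intr_handle intr_handle;') becomes a pointer
 * that is allocated, configured through setters and freed explicitly.
 */
static struct rte_intr_handle *
example_intr_instance_create(void)
{
	struct rte_intr_handle *h;

	h = rte_intr_instance_alloc();
	if (h == NULL)
		return NULL;

	/* defaults previously set via a struct initializer */
	if (rte_intr_fd_set(h, -1) ||
	    rte_intr_type_set(h, RTE_INTR_HANDLE_EXT)) {
		rte_intr_instance_free(h);
		return NULL;
	}
	return h;
}

static void
example_intr_instance_destroy(struct rte_intr_handle *h)
{
	if (h != NULL)
		rte_intr_instance_free(h);
}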
diff --git a/drivers/net/failsafe/failsafe_private.h b/drivers/net/failsafe/failsafe_private.h
index cd39d103c6..a80f5e2caf 100644
--- a/drivers/net/failsafe/failsafe_private.h
+++ b/drivers/net/failsafe/failsafe_private.h
@@ -166,7 +166,7 @@ struct fs_priv {
struct rte_ether_addr *mcast_addrs;
/* current capabilities */
struct rte_eth_dev_owner my_owner; /* Unique owner. */
- struct rte_intr_handle intr_handle; /* Port interrupt handle. */
+ struct rte_intr_handle *intr_handle; /* Port interrupt handle. */
/*
* Fail-safe state machine.
* This level will be tracking state of the EAL and eth
diff --git a/drivers/net/fm10k/fm10k_ethdev.c b/drivers/net/fm10k/fm10k_ethdev.c
index 7075d69022..850ec35059 100644
--- a/drivers/net/fm10k/fm10k_ethdev.c
+++ b/drivers/net/fm10k/fm10k_ethdev.c
@@ -32,7 +32,8 @@
#define BIT_MASK_PER_UINT32 ((1 << CHARS_PER_UINT32) - 1)
/* default 1:1 map from queue ID to interrupt vector ID */
-#define Q2V(pci_dev, queue_id) ((pci_dev)->intr_handle.intr_vec[queue_id])
+#define Q2V(pci_dev, queue_id) \
+ (rte_intr_vec_list_index_get((pci_dev)->intr_handle, queue_id))
/* First 64 Logical ports for PF/VMDQ, second 64 for Flow director */
#define MAX_LPORT_NUM 128
@@ -690,7 +691,7 @@ fm10k_dev_rx_init(struct rte_eth_dev *dev)
struct fm10k_hw *hw = FM10K_DEV_PRIVATE_TO_HW(dev->data->dev_private);
struct fm10k_macvlan_filter_info *macvlan;
struct rte_pci_device *pdev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pdev->intr_handle;
+ struct rte_intr_handle *intr_handle = pdev->intr_handle;
int i, ret;
struct fm10k_rx_queue *rxq;
uint64_t base_addr;
@@ -1158,7 +1159,7 @@ fm10k_dev_stop(struct rte_eth_dev *dev)
{
struct fm10k_hw *hw = FM10K_DEV_PRIVATE_TO_HW(dev->data->dev_private);
struct rte_pci_device *pdev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pdev->intr_handle;
+ struct rte_intr_handle *intr_handle = pdev->intr_handle;
int i;
PMD_INIT_FUNC_TRACE();
@@ -1187,8 +1188,7 @@ fm10k_dev_stop(struct rte_eth_dev *dev)
}
/* Clean datapath event and queue/vec mapping */
rte_intr_efd_disable(intr_handle);
- rte_free(intr_handle->intr_vec);
- intr_handle->intr_vec = NULL;
+ rte_intr_vec_list_free(intr_handle);
return 0;
}
@@ -2368,7 +2368,7 @@ fm10k_dev_rx_queue_intr_enable(struct rte_eth_dev *dev, uint16_t queue_id)
else
FM10K_WRITE_REG(hw, FM10K_VFITR(Q2V(pdev, queue_id)),
FM10K_ITR_AUTOMASK | FM10K_ITR_MASK_CLEAR);
- rte_intr_ack(&pdev->intr_handle);
+ rte_intr_ack(pdev->intr_handle);
return 0;
}
@@ -2393,7 +2393,7 @@ fm10k_dev_rxq_interrupt_setup(struct rte_eth_dev *dev)
{
struct fm10k_hw *hw = FM10K_DEV_PRIVATE_TO_HW(dev->data->dev_private);
struct rte_pci_device *pdev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pdev->intr_handle;
+ struct rte_intr_handle *intr_handle = pdev->intr_handle;
uint32_t intr_vector, vec;
uint16_t queue_id;
int result = 0;
@@ -2421,15 +2421,17 @@ fm10k_dev_rxq_interrupt_setup(struct rte_eth_dev *dev)
}
if (rte_intr_dp_is_en(intr_handle) && !result) {
- intr_handle->intr_vec = rte_zmalloc("intr_vec",
- dev->data->nb_rx_queues * sizeof(int), 0);
- if (intr_handle->intr_vec) {
+ if (!rte_intr_vec_list_alloc(intr_handle, "intr_vec",
+ dev->data->nb_rx_queues)) {
for (queue_id = 0, vec = FM10K_RX_VEC_START;
queue_id < dev->data->nb_rx_queues;
queue_id++) {
- intr_handle->intr_vec[queue_id] = vec;
- if (vec < intr_handle->nb_efd - 1
- + FM10K_RX_VEC_START)
+ rte_intr_vec_list_index_set(intr_handle,
+ queue_id, vec);
+ int nb_efd =
+ rte_intr_nb_efd_get(intr_handle);
+ if (vec < (uint32_t)nb_efd - 1 +
+ FM10K_RX_VEC_START)
vec++;
}
} else {
@@ -2788,7 +2790,7 @@ fm10k_dev_close(struct rte_eth_dev *dev)
{
struct fm10k_hw *hw = FM10K_DEV_PRIVATE_TO_HW(dev->data->dev_private);
struct rte_pci_device *pdev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pdev->intr_handle;
+ struct rte_intr_handle *intr_handle = pdev->intr_handle;
int ret;
PMD_INIT_FUNC_TRACE();
@@ -3054,7 +3056,7 @@ eth_fm10k_dev_init(struct rte_eth_dev *dev)
{
struct fm10k_hw *hw = FM10K_DEV_PRIVATE_TO_HW(dev->data->dev_private);
struct rte_pci_device *pdev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pdev->intr_handle;
+ struct rte_intr_handle *intr_handle = pdev->intr_handle;
int diag, i;
struct fm10k_macvlan_filter_info *macvlan;
diff --git a/drivers/net/hinic/hinic_pmd_ethdev.c b/drivers/net/hinic/hinic_pmd_ethdev.c
index cd4dad8588..d4d5df24f9 100644
--- a/drivers/net/hinic/hinic_pmd_ethdev.c
+++ b/drivers/net/hinic/hinic_pmd_ethdev.c
@@ -1229,13 +1229,13 @@ static void hinic_disable_interrupt(struct rte_eth_dev *dev)
hinic_set_msix_state(nic_dev->hwdev, 0, HINIC_MSIX_DISABLE);
/* disable rte interrupt */
- ret = rte_intr_disable(&pci_dev->intr_handle);
+ ret = rte_intr_disable(pci_dev->intr_handle);
if (ret)
PMD_DRV_LOG(ERR, "Disable intr failed: %d", ret);
do {
ret =
- rte_intr_callback_unregister(&pci_dev->intr_handle,
+ rte_intr_callback_unregister(pci_dev->intr_handle,
hinic_dev_interrupt_handler, dev);
if (ret >= 0) {
break;
@@ -3136,7 +3136,7 @@ static int hinic_func_init(struct rte_eth_dev *eth_dev)
}
/* register callback func to eal lib */
- rc = rte_intr_callback_register(&pci_dev->intr_handle,
+ rc = rte_intr_callback_register(pci_dev->intr_handle,
hinic_dev_interrupt_handler,
(void *)eth_dev);
if (rc) {
@@ -3146,7 +3146,7 @@ static int hinic_func_init(struct rte_eth_dev *eth_dev)
}
/* enable uio/vfio intr/eventfd mapping */
- rc = rte_intr_enable(&pci_dev->intr_handle);
+ rc = rte_intr_enable(pci_dev->intr_handle);
if (rc) {
PMD_DRV_LOG(ERR, "Enable rte interrupt failed, dev_name: %s",
eth_dev->data->name);
@@ -3176,7 +3176,7 @@ static int hinic_func_init(struct rte_eth_dev *eth_dev)
return 0;
enable_intr_fail:
- (void)rte_intr_callback_unregister(&pci_dev->intr_handle,
+ (void)rte_intr_callback_unregister(pci_dev->intr_handle,
hinic_dev_interrupt_handler,
(void *)eth_dev);
diff --git a/drivers/net/hns3/hns3_ethdev.c b/drivers/net/hns3/hns3_ethdev.c
index cabf73ffbc..10e06cbd1b 100644
--- a/drivers/net/hns3/hns3_ethdev.c
+++ b/drivers/net/hns3/hns3_ethdev.c
@@ -5269,7 +5269,7 @@ hns3_init_pf(struct rte_eth_dev *eth_dev)
hns3_config_all_msix_error(hw, true);
- ret = rte_intr_callback_register(&pci_dev->intr_handle,
+ ret = rte_intr_callback_register(pci_dev->intr_handle,
hns3_interrupt_handler,
eth_dev);
if (ret) {
@@ -5282,7 +5282,7 @@ hns3_init_pf(struct rte_eth_dev *eth_dev)
goto err_get_config;
/* Enable interrupt */
- rte_intr_enable(&pci_dev->intr_handle);
+ rte_intr_enable(pci_dev->intr_handle);
hns3_pf_enable_irq0(hw);
/* Get configuration */
@@ -5341,8 +5341,8 @@ hns3_init_pf(struct rte_eth_dev *eth_dev)
hns3_tqp_stats_uninit(hw);
err_get_config:
hns3_pf_disable_irq0(hw);
- rte_intr_disable(&pci_dev->intr_handle);
- hns3_intr_unregister(&pci_dev->intr_handle, hns3_interrupt_handler,
+ rte_intr_disable(pci_dev->intr_handle);
+ hns3_intr_unregister(pci_dev->intr_handle, hns3_interrupt_handler,
eth_dev);
err_intr_callback_register:
err_cmd_init:
@@ -5375,8 +5375,8 @@ hns3_uninit_pf(struct rte_eth_dev *eth_dev)
hns3_tqp_stats_uninit(hw);
hns3_config_mac_tnl_int(hw, false);
hns3_pf_disable_irq0(hw);
- rte_intr_disable(&pci_dev->intr_handle);
- hns3_intr_unregister(&pci_dev->intr_handle, hns3_interrupt_handler,
+ rte_intr_disable(pci_dev->intr_handle);
+ hns3_intr_unregister(pci_dev->intr_handle, hns3_interrupt_handler,
eth_dev);
hns3_config_all_msix_error(hw, false);
hns3_cmd_uninit(hw);
@@ -5710,7 +5710,7 @@ static int
hns3_map_rx_interrupt(struct rte_eth_dev *dev)
{
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
struct hns3_hw *hw = HNS3_DEV_PRIVATE_TO_HW(dev->data->dev_private);
uint16_t base = RTE_INTR_VEC_ZERO_OFFSET;
uint16_t vec = RTE_INTR_VEC_ZERO_OFFSET;
@@ -5733,16 +5733,13 @@ hns3_map_rx_interrupt(struct rte_eth_dev *dev)
if (rte_intr_efd_enable(intr_handle, intr_vector))
return -EINVAL;
- if (intr_handle->intr_vec == NULL) {
- intr_handle->intr_vec =
- rte_zmalloc("intr_vec",
- hw->used_rx_queues * sizeof(int), 0);
- if (intr_handle->intr_vec == NULL) {
- hns3_err(hw, "failed to allocate %u rx_queues intr_vec",
- hw->used_rx_queues);
- ret = -ENOMEM;
- goto alloc_intr_vec_error;
- }
+ /* Allocate vector list */
+ if (rte_intr_vec_list_alloc(intr_handle, "intr_vec",
+ hw->used_rx_queues)) {
+ hns3_err(hw, "failed to allocate %u rx_queues intr_vec",
+ hw->used_rx_queues);
+ ret = -ENOMEM;
+ goto alloc_intr_vec_error;
}
if (rte_intr_allow_others(intr_handle)) {
@@ -5755,20 +5752,21 @@ hns3_map_rx_interrupt(struct rte_eth_dev *dev)
HNS3_RING_TYPE_RX, q_id);
if (ret)
goto bind_vector_error;
- intr_handle->intr_vec[q_id] = vec;
+
+ if (rte_intr_vec_list_index_set(intr_handle, q_id, vec))
+ goto bind_vector_error;
/*
* If there are not enough efds (e.g. not enough interrupts),
* remaining queues will be bound to the last interrupt.
*/
- if (vec < base + intr_handle->nb_efd - 1)
+ if (vec < base + rte_intr_nb_efd_get(intr_handle) - 1)
vec++;
}
rte_intr_enable(intr_handle);
return 0;
bind_vector_error:
- rte_free(intr_handle->intr_vec);
- intr_handle->intr_vec = NULL;
+ rte_intr_vec_list_free(intr_handle);
alloc_intr_vec_error:
rte_intr_efd_disable(intr_handle);
return ret;
@@ -5779,7 +5777,7 @@ hns3_restore_rx_interrupt(struct hns3_hw *hw)
{
struct rte_eth_dev *dev = &rte_eth_devices[hw->data->port_id];
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
uint16_t q_id;
int ret;
@@ -5789,8 +5787,9 @@ hns3_restore_rx_interrupt(struct hns3_hw *hw)
if (rte_intr_dp_is_en(intr_handle)) {
for (q_id = 0; q_id < hw->used_rx_queues; q_id++) {
ret = hns3_bind_ring_with_vector(hw,
- intr_handle->intr_vec[q_id], true,
- HNS3_RING_TYPE_RX, q_id);
+ rte_intr_vec_list_index_get(intr_handle,
+ q_id),
+ true, HNS3_RING_TYPE_RX, q_id);
if (ret)
return ret;
}
@@ -5933,7 +5932,7 @@ static void
hns3_unmap_rx_interrupt(struct rte_eth_dev *dev)
{
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
struct hns3_adapter *hns = dev->data->dev_private;
struct hns3_hw *hw = &hns->hw;
uint8_t base = RTE_INTR_VEC_ZERO_OFFSET;
@@ -5953,16 +5952,14 @@ hns3_unmap_rx_interrupt(struct rte_eth_dev *dev)
(void)hns3_bind_ring_with_vector(hw, vec, false,
HNS3_RING_TYPE_RX,
q_id);
- if (vec < base + intr_handle->nb_efd - 1)
+ if (vec < base + rte_intr_nb_efd_get(intr_handle)
+ - 1)
vec++;
}
}
/* Clean datapath event and queue/vec mapping */
rte_intr_efd_disable(intr_handle);
- if (intr_handle->intr_vec) {
- rte_free(intr_handle->intr_vec);
- intr_handle->intr_vec = NULL;
- }
+ rte_intr_vec_list_free(intr_handle);
}
static int
diff --git a/drivers/net/hns3/hns3_ethdev_vf.c b/drivers/net/hns3/hns3_ethdev_vf.c
index 8d9b7979c8..fb25241be6 100644
--- a/drivers/net/hns3/hns3_ethdev_vf.c
+++ b/drivers/net/hns3/hns3_ethdev_vf.c
@@ -1985,7 +1985,7 @@ hns3vf_init_vf(struct rte_eth_dev *eth_dev)
hns3vf_clear_event_cause(hw, 0);
- ret = rte_intr_callback_register(&pci_dev->intr_handle,
+ ret = rte_intr_callback_register(pci_dev->intr_handle,
hns3vf_interrupt_handler, eth_dev);
if (ret) {
PMD_INIT_LOG(ERR, "Failed to register intr: %d", ret);
@@ -1993,7 +1993,7 @@ hns3vf_init_vf(struct rte_eth_dev *eth_dev)
}
/* Enable interrupt */
- rte_intr_enable(&pci_dev->intr_handle);
+ rte_intr_enable(pci_dev->intr_handle);
hns3vf_enable_irq0(hw);
/* Get configuration from PF */
@@ -2045,8 +2045,8 @@ hns3vf_init_vf(struct rte_eth_dev *eth_dev)
err_get_config:
hns3vf_disable_irq0(hw);
- rte_intr_disable(&pci_dev->intr_handle);
- hns3_intr_unregister(&pci_dev->intr_handle, hns3vf_interrupt_handler,
+ rte_intr_disable(pci_dev->intr_handle);
+ hns3_intr_unregister(pci_dev->intr_handle, hns3vf_interrupt_handler,
eth_dev);
err_intr_callback_register:
err_cmd_init:
@@ -2074,8 +2074,8 @@ hns3vf_uninit_vf(struct rte_eth_dev *eth_dev)
hns3_flow_uninit(eth_dev);
hns3_tqp_stats_uninit(hw);
hns3vf_disable_irq0(hw);
- rte_intr_disable(&pci_dev->intr_handle);
- hns3_intr_unregister(&pci_dev->intr_handle, hns3vf_interrupt_handler,
+ rte_intr_disable(pci_dev->intr_handle);
+ hns3_intr_unregister(pci_dev->intr_handle, hns3vf_interrupt_handler,
eth_dev);
hns3_cmd_uninit(hw);
hns3_cmd_destroy_queue(hw);
@@ -2118,7 +2118,7 @@ hns3vf_unmap_rx_interrupt(struct rte_eth_dev *dev)
{
struct hns3_hw *hw = HNS3_DEV_PRIVATE_TO_HW(dev->data->dev_private);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
uint8_t base = RTE_INTR_VEC_ZERO_OFFSET;
uint8_t vec = RTE_INTR_VEC_ZERO_OFFSET;
uint16_t q_id;
@@ -2136,16 +2136,16 @@ hns3vf_unmap_rx_interrupt(struct rte_eth_dev *dev)
(void)hns3vf_bind_ring_with_vector(hw, vec, false,
HNS3_RING_TYPE_RX,
q_id);
- if (vec < base + intr_handle->nb_efd - 1)
+ if (vec < base + rte_intr_nb_efd_get(intr_handle)
+ - 1)
vec++;
}
}
/* Clean datapath event and queue/vec mapping */
rte_intr_efd_disable(intr_handle);
- if (intr_handle->intr_vec) {
- rte_free(intr_handle->intr_vec);
- intr_handle->intr_vec = NULL;
- }
+
+ /* Cleanup vector list */
+ rte_intr_vec_list_free(intr_handle);
}
static int
@@ -2301,7 +2301,7 @@ static int
hns3vf_map_rx_interrupt(struct rte_eth_dev *dev)
{
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
struct hns3_hw *hw = HNS3_DEV_PRIVATE_TO_HW(dev->data->dev_private);
uint8_t base = RTE_INTR_VEC_ZERO_OFFSET;
uint8_t vec = RTE_INTR_VEC_ZERO_OFFSET;
@@ -2324,16 +2324,13 @@ hns3vf_map_rx_interrupt(struct rte_eth_dev *dev)
if (rte_intr_efd_enable(intr_handle, intr_vector))
return -EINVAL;
- if (intr_handle->intr_vec == NULL) {
- intr_handle->intr_vec =
- rte_zmalloc("intr_vec",
- hw->used_rx_queues * sizeof(int), 0);
- if (intr_handle->intr_vec == NULL) {
- hns3_err(hw, "Failed to allocate %u rx_queues"
- " intr_vec", hw->used_rx_queues);
- ret = -ENOMEM;
- goto vf_alloc_intr_vec_error;
- }
+ /* Allocate vector list */
+ if (rte_intr_vec_list_alloc(intr_handle, "intr_vec",
+ hw->used_rx_queues)) {
+ hns3_err(hw, "Failed to allocate %u rx_queues"
+ " intr_vec", hw->used_rx_queues);
+ ret = -ENOMEM;
+ goto vf_alloc_intr_vec_error;
}
if (rte_intr_allow_others(intr_handle)) {
@@ -2346,20 +2343,22 @@ hns3vf_map_rx_interrupt(struct rte_eth_dev *dev)
HNS3_RING_TYPE_RX, q_id);
if (ret)
goto vf_bind_vector_error;
- intr_handle->intr_vec[q_id] = vec;
+
+ if (rte_intr_vec_list_index_set(intr_handle, q_id, vec))
+ goto vf_bind_vector_error;
+
/*
* If there are not enough efds (e.g. not enough interrupts),
* remaining queues will be bound to the last interrupt.
*/
- if (vec < base + intr_handle->nb_efd - 1)
+ if (vec < base + rte_intr_nb_efd_get(intr_handle) - 1)
vec++;
}
rte_intr_enable(intr_handle);
return 0;
vf_bind_vector_error:
- free(intr_handle->intr_vec);
- intr_handle->intr_vec = NULL;
+ rte_intr_vec_list_free(intr_handle);
vf_alloc_intr_vec_error:
rte_intr_efd_disable(intr_handle);
return ret;
@@ -2370,7 +2369,7 @@ hns3vf_restore_rx_interrupt(struct hns3_hw *hw)
{
struct rte_eth_dev *dev = &rte_eth_devices[hw->data->port_id];
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
uint16_t q_id;
int ret;
@@ -2380,8 +2379,9 @@ hns3vf_restore_rx_interrupt(struct hns3_hw *hw)
if (rte_intr_dp_is_en(intr_handle)) {
for (q_id = 0; q_id < hw->used_rx_queues; q_id++) {
ret = hns3vf_bind_ring_with_vector(hw,
- intr_handle->intr_vec[q_id], true,
- HNS3_RING_TYPE_RX, q_id);
+ rte_intr_vec_list_index_get(intr_handle,
+ q_id),
+ true, HNS3_RING_TYPE_RX, q_id);
if (ret)
return ret;
}
@@ -2845,7 +2845,7 @@ hns3vf_reinit_dev(struct hns3_adapter *hns)
int ret;
if (hw->reset.level == HNS3_VF_FULL_RESET) {
- rte_intr_disable(&pci_dev->intr_handle);
+ rte_intr_disable(pci_dev->intr_handle);
ret = hns3vf_set_bus_master(pci_dev, true);
if (ret < 0) {
hns3_err(hw, "failed to set pci bus, ret = %d", ret);
@@ -2871,7 +2871,7 @@ hns3vf_reinit_dev(struct hns3_adapter *hns)
hns3_err(hw, "Failed to enable msix");
}
- rte_intr_enable(&pci_dev->intr_handle);
+ rte_intr_enable(pci_dev->intr_handle);
}
ret = hns3_reset_all_tqps(hns);
diff --git a/drivers/net/hns3/hns3_rxtx.c b/drivers/net/hns3/hns3_rxtx.c
index 6b77672aa1..7604cbba35 100644
--- a/drivers/net/hns3/hns3_rxtx.c
+++ b/drivers/net/hns3/hns3_rxtx.c
@@ -1050,7 +1050,7 @@ int
hns3_dev_rx_queue_intr_enable(struct rte_eth_dev *dev, uint16_t queue_id)
{
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
struct hns3_hw *hw = HNS3_DEV_PRIVATE_TO_HW(dev->data->dev_private);
if (dev->data->dev_conf.intr_conf.rxq == 0)
diff --git a/drivers/net/i40e/i40e_ethdev.c b/drivers/net/i40e/i40e_ethdev.c
index 1fc3d897a8..90cdd5bc18 100644
--- a/drivers/net/i40e/i40e_ethdev.c
+++ b/drivers/net/i40e/i40e_ethdev.c
@@ -1439,7 +1439,7 @@ eth_i40e_dev_init(struct rte_eth_dev *dev, void *init_params __rte_unused)
}
i40e_set_default_ptype_table(dev);
pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- intr_handle = &pci_dev->intr_handle;
+ intr_handle = pci_dev->intr_handle;
rte_eth_copy_pci_info(dev, pci_dev);
dev->data->dev_flags |= RTE_ETH_DEV_AUTOFILL_QUEUE_XSTATS;
@@ -1972,7 +1972,7 @@ i40e_vsi_queues_unbind_intr(struct i40e_vsi *vsi)
{
struct rte_eth_dev *dev = I40E_VSI_TO_ETH_DEV(vsi);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
struct i40e_hw *hw = I40E_VSI_TO_HW(vsi);
uint16_t msix_vect = vsi->msix_intr;
uint16_t i;
@@ -2088,10 +2088,11 @@ i40e_vsi_queues_bind_intr(struct i40e_vsi *vsi, uint16_t itr_idx)
{
struct rte_eth_dev *dev = I40E_VSI_TO_ETH_DEV(vsi);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
struct i40e_hw *hw = I40E_VSI_TO_HW(vsi);
uint16_t msix_vect = vsi->msix_intr;
- uint16_t nb_msix = RTE_MIN(vsi->nb_msix, intr_handle->nb_efd);
+ uint16_t nb_msix = RTE_MIN(vsi->nb_msix,
+ rte_intr_nb_efd_get(intr_handle));
uint16_t queue_idx = 0;
int record = 0;
int i;
@@ -2141,8 +2142,8 @@ i40e_vsi_queues_bind_intr(struct i40e_vsi *vsi, uint16_t itr_idx)
vsi->nb_used_qps - i,
itr_idx);
for (; !!record && i < vsi->nb_used_qps; i++)
- intr_handle->intr_vec[queue_idx + i] =
- msix_vect;
+ rte_intr_vec_list_index_set(intr_handle,
+ queue_idx + i, msix_vect);
break;
}
/* 1:1 queue/msix_vect mapping */
@@ -2150,7 +2151,9 @@ i40e_vsi_queues_bind_intr(struct i40e_vsi *vsi, uint16_t itr_idx)
vsi->base_queue + i, 1,
itr_idx);
if (!!record)
- intr_handle->intr_vec[queue_idx + i] = msix_vect;
+ if (rte_intr_vec_list_index_set(intr_handle,
+ queue_idx + i, msix_vect))
+ return -rte_errno;
msix_vect++;
nb_msix--;
@@ -2164,7 +2167,7 @@ i40e_vsi_enable_queues_intr(struct i40e_vsi *vsi)
{
struct rte_eth_dev *dev = I40E_VSI_TO_ETH_DEV(vsi);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
struct i40e_hw *hw = I40E_VSI_TO_HW(vsi);
struct i40e_pf *pf = I40E_VSI_TO_PF(vsi);
uint16_t msix_intr, i;
@@ -2191,7 +2194,7 @@ i40e_vsi_disable_queues_intr(struct i40e_vsi *vsi)
{
struct rte_eth_dev *dev = I40E_VSI_TO_ETH_DEV(vsi);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
struct i40e_hw *hw = I40E_VSI_TO_HW(vsi);
struct i40e_pf *pf = I40E_VSI_TO_PF(vsi);
uint16_t msix_intr, i;
@@ -2357,7 +2360,7 @@ i40e_dev_start(struct rte_eth_dev *dev)
struct i40e_vsi *main_vsi = pf->main_vsi;
int ret, i;
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
uint32_t intr_vector = 0;
struct i40e_vsi *vsi;
uint16_t nb_rxq, nb_txq;
@@ -2375,12 +2378,9 @@ i40e_dev_start(struct rte_eth_dev *dev)
return ret;
}
- if (rte_intr_dp_is_en(intr_handle) && !intr_handle->intr_vec) {
- intr_handle->intr_vec =
- rte_zmalloc("intr_vec",
- dev->data->nb_rx_queues * sizeof(int),
- 0);
- if (!intr_handle->intr_vec) {
+ if (rte_intr_dp_is_en(intr_handle)) {
+ if (rte_intr_vec_list_alloc(intr_handle, "intr_vec",
+ dev->data->nb_rx_queues)) {
PMD_INIT_LOG(ERR,
"Failed to allocate %d rx_queues intr_vec",
dev->data->nb_rx_queues);
@@ -2521,7 +2521,7 @@ i40e_dev_stop(struct rte_eth_dev *dev)
struct i40e_hw *hw = I40E_DEV_PRIVATE_TO_HW(dev->data->dev_private);
struct i40e_vsi *main_vsi = pf->main_vsi;
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
int i;
if (hw->adapter_stopped == 1)
@@ -2562,10 +2562,9 @@ i40e_dev_stop(struct rte_eth_dev *dev)
/* Clean datapath event and queue/vec mapping */
rte_intr_efd_disable(intr_handle);
- if (intr_handle->intr_vec) {
- rte_free(intr_handle->intr_vec);
- intr_handle->intr_vec = NULL;
- }
+
+ /* Cleanup vector list */
+ rte_intr_vec_list_free(intr_handle);
/* reset hierarchy commit */
pf->tm_conf.committed = false;
@@ -2584,7 +2583,7 @@ i40e_dev_close(struct rte_eth_dev *dev)
struct i40e_pf *pf = I40E_DEV_PRIVATE_TO_PF(dev->data->dev_private);
struct i40e_hw *hw = I40E_DEV_PRIVATE_TO_HW(dev->data->dev_private);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
struct i40e_filter_control_settings settings;
struct rte_flow *p_flow;
uint32_t reg;
@@ -11055,11 +11054,11 @@ static int
i40e_dev_rx_queue_intr_enable(struct rte_eth_dev *dev, uint16_t queue_id)
{
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
struct i40e_hw *hw = I40E_DEV_PRIVATE_TO_HW(dev->data->dev_private);
uint16_t msix_intr;
- msix_intr = intr_handle->intr_vec[queue_id];
+ msix_intr = rte_intr_vec_list_index_get(intr_handle, queue_id);
if (msix_intr == I40E_MISC_VEC_ID)
I40E_WRITE_REG(hw, I40E_PFINT_DYN_CTL0,
I40E_PFINT_DYN_CTL0_INTENA_MASK |
@@ -11074,7 +11073,7 @@ i40e_dev_rx_queue_intr_enable(struct rte_eth_dev *dev, uint16_t queue_id)
I40E_PFINT_DYN_CTLN_ITR_INDX_MASK);
I40E_WRITE_FLUSH(hw);
- rte_intr_ack(&pci_dev->intr_handle);
+ rte_intr_ack(pci_dev->intr_handle);
return 0;
}
@@ -11083,11 +11082,11 @@ static int
i40e_dev_rx_queue_intr_disable(struct rte_eth_dev *dev, uint16_t queue_id)
{
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
struct i40e_hw *hw = I40E_DEV_PRIVATE_TO_HW(dev->data->dev_private);
uint16_t msix_intr;
- msix_intr = intr_handle->intr_vec[queue_id];
+ msix_intr = rte_intr_vec_list_index_get(intr_handle, queue_id);
if (msix_intr == I40E_MISC_VEC_ID)
I40E_WRITE_REG(hw, I40E_PFINT_DYN_CTL0,
I40E_PFINT_DYN_CTL0_ITR_INDX_MASK);
diff --git a/drivers/net/iavf/iavf_ethdev.c b/drivers/net/iavf/iavf_ethdev.c
index 5a5a7f59e1..f99e421168 100644
--- a/drivers/net/iavf/iavf_ethdev.c
+++ b/drivers/net/iavf/iavf_ethdev.c
@@ -660,17 +660,16 @@ static int iavf_config_rx_queues_irqs(struct rte_eth_dev *dev,
return -1;
}
- if (rte_intr_dp_is_en(intr_handle) && !intr_handle->intr_vec) {
- intr_handle->intr_vec =
- rte_zmalloc("intr_vec",
- dev->data->nb_rx_queues * sizeof(int), 0);
- if (!intr_handle->intr_vec) {
+ if (rte_intr_dp_is_en(intr_handle)) {
+ if (rte_intr_vec_list_alloc(intr_handle, "intr_vec",
+ dev->data->nb_rx_queues)) {
PMD_DRV_LOG(ERR, "Failed to allocate %d rx intr_vec",
dev->data->nb_rx_queues);
return -1;
}
}
+
qv_map = rte_zmalloc("qv_map",
dev->data->nb_rx_queues * sizeof(struct iavf_qv_map), 0);
if (!qv_map) {
@@ -730,7 +729,8 @@ static int iavf_config_rx_queues_irqs(struct rte_eth_dev *dev,
for (i = 0; i < dev->data->nb_rx_queues; i++) {
qv_map[i].queue_id = i;
qv_map[i].vector_id = vf->msix_base;
- intr_handle->intr_vec[i] = IAVF_MISC_VEC_ID;
+ rte_intr_vec_list_index_set(intr_handle,
+ i, IAVF_MISC_VEC_ID);
}
vf->qv_map = qv_map;
PMD_DRV_LOG(DEBUG,
@@ -740,14 +740,16 @@ static int iavf_config_rx_queues_irqs(struct rte_eth_dev *dev,
/* If Rx interrupt is required, and we can use
* multi interrupts, then the vec is from 1
*/
- vf->nb_msix = RTE_MIN(intr_handle->nb_efd,
- (uint16_t)(vf->vf_res->max_vectors - 1));
+ vf->nb_msix =
+ RTE_MIN(rte_intr_nb_efd_get(intr_handle),
+ (uint16_t)(vf->vf_res->max_vectors - 1));
vf->msix_base = IAVF_RX_VEC_START;
vec = IAVF_RX_VEC_START;
for (i = 0; i < dev->data->nb_rx_queues; i++) {
qv_map[i].queue_id = i;
qv_map[i].vector_id = vec;
- intr_handle->intr_vec[i] = vec++;
+ rte_intr_vec_list_index_set(intr_handle,
+ i, vec++);
if (vec >= vf->nb_msix + IAVF_RX_VEC_START)
vec = IAVF_RX_VEC_START;
}
@@ -789,8 +791,7 @@ static int iavf_config_rx_queues_irqs(struct rte_eth_dev *dev,
vf->qv_map = NULL;
qv_map_alloc_err:
- rte_free(intr_handle->intr_vec);
- intr_handle->intr_vec = NULL;
+ rte_intr_vec_list_free(intr_handle);
return -1;
}
@@ -926,10 +927,7 @@ iavf_dev_stop(struct rte_eth_dev *dev)
/* Disable the interrupt for Rx */
rte_intr_efd_disable(intr_handle);
/* Rx interrupt vector mapping free */
- if (intr_handle->intr_vec) {
- rte_free(intr_handle->intr_vec);
- intr_handle->intr_vec = NULL;
- }
+ rte_intr_vec_list_free(intr_handle);
/* remove all mac addrs */
iavf_add_del_all_mac_addr(adapter, false);
@@ -1669,7 +1667,8 @@ iavf_dev_rx_queue_intr_enable(struct rte_eth_dev *dev, uint16_t queue_id)
struct iavf_info *vf = IAVF_DEV_PRIVATE_TO_VF(adapter);
uint16_t msix_intr;
- msix_intr = pci_dev->intr_handle.intr_vec[queue_id];
+ msix_intr = rte_intr_vec_list_index_get(pci_dev->intr_handle,
+ queue_id);
if (msix_intr == IAVF_MISC_VEC_ID) {
PMD_DRV_LOG(INFO, "MISC is also enabled for control");
IAVF_WRITE_REG(hw, IAVF_VFINT_DYN_CTL01,
@@ -1688,7 +1687,7 @@ iavf_dev_rx_queue_intr_enable(struct rte_eth_dev *dev, uint16_t queue_id)
IAVF_WRITE_FLUSH(hw);
if (vf->vf_res->vf_cap_flags & VIRTCHNL_VF_OFFLOAD_WB_ON_ITR)
- rte_intr_ack(&pci_dev->intr_handle);
+ rte_intr_ack(pci_dev->intr_handle);
return 0;
}
@@ -1700,7 +1699,8 @@ iavf_dev_rx_queue_intr_disable(struct rte_eth_dev *dev, uint16_t queue_id)
struct iavf_hw *hw = IAVF_DEV_PRIVATE_TO_HW(dev->data->dev_private);
uint16_t msix_intr;
- msix_intr = pci_dev->intr_handle.intr_vec[queue_id];
+ msix_intr = rte_intr_vec_list_index_get(pci_dev->intr_handle,
+ queue_id);
if (msix_intr == IAVF_MISC_VEC_ID) {
PMD_DRV_LOG(ERR, "MISC is used for control, cannot disable it");
return -EIO;
@@ -2384,12 +2384,12 @@ iavf_dev_init(struct rte_eth_dev *eth_dev)
if (vf->vf_res->vf_cap_flags & VIRTCHNL_VF_OFFLOAD_WB_ON_ITR) {
/* register callback func to eal lib */
- rte_intr_callback_register(&pci_dev->intr_handle,
+ rte_intr_callback_register(pci_dev->intr_handle,
iavf_dev_interrupt_handler,
(void *)eth_dev);
/* enable uio intr after callback register */
- rte_intr_enable(&pci_dev->intr_handle);
+ rte_intr_enable(pci_dev->intr_handle);
} else {
rte_eal_alarm_set(IAVF_ALARM_INTERVAL,
iavf_dev_alarm_handler, eth_dev);
@@ -2423,7 +2423,7 @@ iavf_dev_close(struct rte_eth_dev *dev)
{
struct iavf_hw *hw = IAVF_DEV_PRIVATE_TO_HW(dev->data->dev_private);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
struct iavf_adapter *adapter =
IAVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
struct iavf_info *vf = IAVF_DEV_PRIVATE_TO_VF(dev->data->dev_private);
diff --git a/drivers/net/iavf/iavf_vchnl.c b/drivers/net/iavf/iavf_vchnl.c
index 3275687927..f76b4b09c4 100644
--- a/drivers/net/iavf/iavf_vchnl.c
+++ b/drivers/net/iavf/iavf_vchnl.c
@@ -1685,9 +1685,9 @@ iavf_request_queues(struct iavf_adapter *adapter, uint16_t num)
/* disable interrupt to avoid the admin queue message to be read
* before iavf_read_msg_from_pf.
*/
- rte_intr_disable(&pci_dev->intr_handle);
+ rte_intr_disable(pci_dev->intr_handle);
err = iavf_execute_vf_cmd(adapter, &args);
- rte_intr_enable(&pci_dev->intr_handle);
+ rte_intr_enable(pci_dev->intr_handle);
} else {
rte_eal_alarm_cancel(iavf_dev_alarm_handler, dev);
err = iavf_execute_vf_cmd(adapter, &args);
diff --git a/drivers/net/ice/ice_dcf.c b/drivers/net/ice/ice_dcf.c
index c9c01a14e3..68c13ac48d 100644
--- a/drivers/net/ice/ice_dcf.c
+++ b/drivers/net/ice/ice_dcf.c
@@ -539,7 +539,7 @@ ice_dcf_handle_vsi_update_event(struct ice_dcf_hw *hw)
rte_spinlock_lock(&hw->vc_cmd_send_lock);
- rte_intr_disable(&pci_dev->intr_handle);
+ rte_intr_disable(pci_dev->intr_handle);
ice_dcf_disable_irq0(hw);
for (;;) {
@@ -555,7 +555,7 @@ ice_dcf_handle_vsi_update_event(struct ice_dcf_hw *hw)
rte_delay_ms(ICE_DCF_ARQ_CHECK_TIME);
}
- rte_intr_enable(&pci_dev->intr_handle);
+ rte_intr_enable(pci_dev->intr_handle);
ice_dcf_enable_irq0(hw);
rte_spinlock_unlock(&hw->vc_cmd_send_lock);
@@ -694,9 +694,9 @@ ice_dcf_init_hw(struct rte_eth_dev *eth_dev, struct ice_dcf_hw *hw)
}
hw->eth_dev = eth_dev;
- rte_intr_callback_register(&pci_dev->intr_handle,
+ rte_intr_callback_register(pci_dev->intr_handle,
ice_dcf_dev_interrupt_handler, hw);
- rte_intr_enable(&pci_dev->intr_handle);
+ rte_intr_enable(pci_dev->intr_handle);
ice_dcf_enable_irq0(hw);
return 0;
@@ -718,7 +718,7 @@ void
ice_dcf_uninit_hw(struct rte_eth_dev *eth_dev, struct ice_dcf_hw *hw)
{
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
if (hw->vf_res->vf_cap_flags & VIRTCHNL_VF_OFFLOAD_QOS)
if (hw->tm_conf.committed) {
diff --git a/drivers/net/ice/ice_dcf_ethdev.c b/drivers/net/ice/ice_dcf_ethdev.c
index 91f6558742..e6fd88de7c 100644
--- a/drivers/net/ice/ice_dcf_ethdev.c
+++ b/drivers/net/ice/ice_dcf_ethdev.c
@@ -160,11 +160,9 @@ ice_dcf_config_rx_queues_irqs(struct rte_eth_dev *dev,
return -1;
}
- if (rte_intr_dp_is_en(intr_handle) && !intr_handle->intr_vec) {
- intr_handle->intr_vec =
- rte_zmalloc("intr_vec",
- dev->data->nb_rx_queues * sizeof(int), 0);
- if (!intr_handle->intr_vec) {
+ if (rte_intr_dp_is_en(intr_handle)) {
+ if (rte_intr_vec_list_alloc(intr_handle, "intr_vec",
+ dev->data->nb_rx_queues)) {
PMD_DRV_LOG(ERR, "Failed to allocate %d rx intr_vec",
dev->data->nb_rx_queues);
return -1;
@@ -214,7 +212,8 @@ ice_dcf_config_rx_queues_irqs(struct rte_eth_dev *dev,
hw->msix_base = IAVF_MISC_VEC_ID;
for (i = 0; i < dev->data->nb_rx_queues; i++) {
hw->rxq_map[hw->msix_base] |= 1 << i;
- intr_handle->intr_vec[i] = IAVF_MISC_VEC_ID;
+ rte_intr_vec_list_index_set(intr_handle,
+ i, IAVF_MISC_VEC_ID);
}
PMD_DRV_LOG(DEBUG,
"vector %u are mapping to all Rx queues",
@@ -224,12 +223,13 @@ ice_dcf_config_rx_queues_irqs(struct rte_eth_dev *dev,
* multi interrupts, then the vec is from 1
*/
hw->nb_msix = RTE_MIN(hw->vf_res->max_vectors,
- intr_handle->nb_efd);
+ rte_intr_nb_efd_get(intr_handle));
hw->msix_base = IAVF_MISC_VEC_ID;
vec = IAVF_MISC_VEC_ID;
for (i = 0; i < dev->data->nb_rx_queues; i++) {
hw->rxq_map[vec] |= 1 << i;
- intr_handle->intr_vec[i] = vec++;
+ rte_intr_vec_list_index_set(intr_handle,
+ i, vec++);
if (vec >= hw->nb_msix)
vec = IAVF_RX_VEC_START;
}
@@ -634,10 +634,7 @@ ice_dcf_dev_stop(struct rte_eth_dev *dev)
ice_dcf_stop_queues(dev);
rte_intr_efd_disable(intr_handle);
- if (intr_handle->intr_vec) {
- rte_free(intr_handle->intr_vec);
- intr_handle->intr_vec = NULL;
- }
+ rte_intr_vec_list_free(intr_handle);
ice_dcf_add_del_all_mac_addr(&dcf_ad->real_hw, false);
dev->data->dev_link.link_status = ETH_LINK_DOWN;
diff --git a/drivers/net/ice/ice_ethdev.c b/drivers/net/ice/ice_ethdev.c
index 65e43a18f9..6a61d79ddc 100644
--- a/drivers/net/ice/ice_ethdev.c
+++ b/drivers/net/ice/ice_ethdev.c
@@ -2171,7 +2171,7 @@ ice_dev_init(struct rte_eth_dev *dev)
ice_set_default_ptype_table(dev);
pci_dev = RTE_DEV_TO_PCI(dev->device);
- intr_handle = &pci_dev->intr_handle;
+ intr_handle = pci_dev->intr_handle;
pf->adapter = ICE_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
pf->dev_data = dev->data;
@@ -2368,7 +2368,7 @@ ice_vsi_disable_queues_intr(struct ice_vsi *vsi)
{
struct rte_eth_dev *dev = &rte_eth_devices[vsi->adapter->pf.dev_data->port_id];
struct rte_pci_device *pci_dev = ICE_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
struct ice_hw *hw = ICE_VSI_TO_HW(vsi);
uint16_t msix_intr, i;
@@ -2398,7 +2398,7 @@ ice_dev_stop(struct rte_eth_dev *dev)
struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
struct ice_vsi *main_vsi = pf->main_vsi;
struct rte_pci_device *pci_dev = ICE_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
uint16_t i;
/* avoid stopping again */
@@ -2423,10 +2423,7 @@ ice_dev_stop(struct rte_eth_dev *dev)
/* Clean datapath event and queue/vec mapping */
rte_intr_efd_disable(intr_handle);
- if (intr_handle->intr_vec) {
- rte_free(intr_handle->intr_vec);
- intr_handle->intr_vec = NULL;
- }
+ rte_intr_vec_list_free(intr_handle);
pf->adapter_stopped = true;
dev->data->dev_started = 0;
@@ -2440,7 +2437,7 @@ ice_dev_close(struct rte_eth_dev *dev)
struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
struct ice_hw *hw = ICE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
struct ice_adapter *ad =
ICE_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
int ret;
@@ -3338,10 +3335,11 @@ ice_vsi_queues_bind_intr(struct ice_vsi *vsi)
{
struct rte_eth_dev *dev = &rte_eth_devices[vsi->adapter->pf.dev_data->port_id];
struct rte_pci_device *pci_dev = ICE_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
struct ice_hw *hw = ICE_VSI_TO_HW(vsi);
uint16_t msix_vect = vsi->msix_intr;
- uint16_t nb_msix = RTE_MIN(vsi->nb_msix, intr_handle->nb_efd);
+ uint16_t nb_msix = RTE_MIN(vsi->nb_msix,
+ rte_intr_nb_efd_get(intr_handle));
uint16_t queue_idx = 0;
int record = 0;
int i;
@@ -3369,8 +3367,9 @@ ice_vsi_queues_bind_intr(struct ice_vsi *vsi)
vsi->nb_used_qps - i);
for (; !!record && i < vsi->nb_used_qps; i++)
- intr_handle->intr_vec[queue_idx + i] =
- msix_vect;
+ rte_intr_vec_list_index_set(intr_handle,
+ queue_idx + i, msix_vect);
+
break;
}
@@ -3379,7 +3378,9 @@ ice_vsi_queues_bind_intr(struct ice_vsi *vsi)
vsi->base_queue + i, 1);
if (!!record)
- intr_handle->intr_vec[queue_idx + i] = msix_vect;
+ rte_intr_vec_list_index_set(intr_handle,
+ queue_idx + i,
+ msix_vect);
msix_vect++;
nb_msix--;
@@ -3391,7 +3392,7 @@ ice_vsi_enable_queues_intr(struct ice_vsi *vsi)
{
struct rte_eth_dev *dev = &rte_eth_devices[vsi->adapter->pf.dev_data->port_id];
struct rte_pci_device *pci_dev = ICE_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
struct ice_hw *hw = ICE_VSI_TO_HW(vsi);
uint16_t msix_intr, i;
@@ -3417,7 +3418,7 @@ ice_rxq_intr_setup(struct rte_eth_dev *dev)
{
struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
struct rte_pci_device *pci_dev = ICE_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
struct ice_vsi *vsi = pf->main_vsi;
uint32_t intr_vector = 0;
@@ -3437,11 +3438,9 @@ ice_rxq_intr_setup(struct rte_eth_dev *dev)
return -1;
}
- if (rte_intr_dp_is_en(intr_handle) && !intr_handle->intr_vec) {
- intr_handle->intr_vec =
- rte_zmalloc(NULL, dev->data->nb_rx_queues * sizeof(int),
- 0);
- if (!intr_handle->intr_vec) {
+ if (rte_intr_dp_is_en(intr_handle)) {
+ if (rte_intr_vec_list_alloc(intr_handle, NULL,
+ dev->data->nb_rx_queues)) {
PMD_DRV_LOG(ERR,
"Failed to allocate %d rx_queues intr_vec",
dev->data->nb_rx_queues);
@@ -4766,19 +4765,19 @@ static int ice_rx_queue_intr_enable(struct rte_eth_dev *dev,
uint16_t queue_id)
{
struct rte_pci_device *pci_dev = ICE_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
struct ice_hw *hw = ICE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
uint32_t val;
uint16_t msix_intr;
- msix_intr = intr_handle->intr_vec[queue_id];
+ msix_intr = rte_intr_vec_list_index_get(intr_handle, queue_id);
val = GLINT_DYN_CTL_INTENA_M | GLINT_DYN_CTL_CLEARPBA_M |
GLINT_DYN_CTL_ITR_INDX_M;
val &= ~GLINT_DYN_CTL_WB_ON_ITR_M;
ICE_WRITE_REG(hw, GLINT_DYN_CTL(msix_intr), val);
- rte_intr_ack(&pci_dev->intr_handle);
+ rte_intr_ack(pci_dev->intr_handle);
return 0;
}
@@ -4787,11 +4786,11 @@ static int ice_rx_queue_intr_disable(struct rte_eth_dev *dev,
uint16_t queue_id)
{
struct rte_pci_device *pci_dev = ICE_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
struct ice_hw *hw = ICE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
uint16_t msix_intr;
- msix_intr = intr_handle->intr_vec[queue_id];
+ msix_intr = rte_intr_vec_list_index_get(intr_handle, queue_id);
ICE_WRITE_REG(hw, GLINT_DYN_CTL(msix_intr), GLINT_DYN_CTL_WB_ON_ITR_M);
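Reads go through getters in the same way: rte_intr_vec_list_index_get() returns the vector bound to a queue and rte_intr_nb_efd_get() the number of event fds, so paths like the queue interrupt enable/disable above never dereference the handle. A hedged sketch (the register write is replaced by a printf; my_rx_queue_vector is a made-up name):

#include <stdio.h>
#include <rte_interrupts.h>

static int
my_rx_queue_vector(struct rte_intr_handle *intr_handle, uint16_t queue_id)
{
	int vec = rte_intr_vec_list_index_get(intr_handle, queue_id);

	if (vec < 0)		/* no vector mapped for this queue */
		return vec;

	/* A real PMD programs GLINT_DYN_CTL (or its equivalent) here. */
	printf("queue %u -> MSI-X vector %d (of %d event fds)\n",
	       queue_id, vec, rte_intr_nb_efd_get(intr_handle));

	return rte_intr_ack(intr_handle);
}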
diff --git a/drivers/net/igc/igc_ethdev.c b/drivers/net/igc/igc_ethdev.c
index 0e41c85d29..5f8fa0af86 100644
--- a/drivers/net/igc/igc_ethdev.c
+++ b/drivers/net/igc/igc_ethdev.c
@@ -384,7 +384,7 @@ igc_intr_other_disable(struct rte_eth_dev *dev)
{
struct igc_hw *hw = IGC_DEV_PRIVATE_HW(dev);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
if (rte_intr_allow_others(intr_handle) &&
dev->data->dev_conf.intr_conf.lsc) {
@@ -404,7 +404,7 @@ igc_intr_other_enable(struct rte_eth_dev *dev)
struct igc_interrupt *intr = IGC_DEV_PRIVATE_INTR(dev);
struct igc_hw *hw = IGC_DEV_PRIVATE_HW(dev);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
if (rte_intr_allow_others(intr_handle) &&
dev->data->dev_conf.intr_conf.lsc) {
@@ -616,7 +616,7 @@ eth_igc_stop(struct rte_eth_dev *dev)
struct igc_adapter *adapter = IGC_DEV_PRIVATE(dev);
struct igc_hw *hw = IGC_DEV_PRIVATE_HW(dev);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
struct rte_eth_link link;
dev->data->dev_started = 0;
@@ -668,10 +668,7 @@ eth_igc_stop(struct rte_eth_dev *dev)
/* Clean datapath event and queue/vec mapping */
rte_intr_efd_disable(intr_handle);
- if (intr_handle->intr_vec != NULL) {
- rte_free(intr_handle->intr_vec);
- intr_handle->intr_vec = NULL;
- }
+ rte_intr_vec_list_free(intr_handle);
return 0;
}
@@ -731,7 +728,7 @@ igc_configure_msix_intr(struct rte_eth_dev *dev)
{
struct igc_hw *hw = IGC_DEV_PRIVATE_HW(dev);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
uint32_t intr_mask;
uint32_t vec = IGC_MISC_VEC_ID;
@@ -755,8 +752,8 @@ igc_configure_msix_intr(struct rte_eth_dev *dev)
IGC_WRITE_REG(hw, IGC_GPIE, IGC_GPIE_MSIX_MODE |
IGC_GPIE_PBA | IGC_GPIE_EIAME |
IGC_GPIE_NSICR);
- intr_mask = RTE_LEN2MASK(intr_handle->nb_efd, uint32_t) <<
- misc_shift;
+ intr_mask = RTE_LEN2MASK(rte_intr_nb_efd_get(intr_handle),
+ uint32_t) << misc_shift;
if (dev->data->dev_conf.intr_conf.lsc)
intr_mask |= (1u << IGC_MSIX_OTHER_INTR_VEC);
@@ -773,8 +770,8 @@ igc_configure_msix_intr(struct rte_eth_dev *dev)
for (i = 0; i < dev->data->nb_rx_queues; i++) {
igc_write_ivar(hw, i, 0, vec);
- intr_handle->intr_vec[i] = vec;
- if (vec < base + intr_handle->nb_efd - 1)
+ rte_intr_vec_list_index_set(intr_handle, i, vec);
+ if (vec < base + rte_intr_nb_efd_get(intr_handle) - 1)
vec++;
}
@@ -810,7 +807,7 @@ igc_rxq_interrupt_setup(struct rte_eth_dev *dev)
uint32_t mask;
struct igc_hw *hw = IGC_DEV_PRIVATE_HW(dev);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
int misc_shift = rte_intr_allow_others(intr_handle) ? 1 : 0;
/* won't configure msix register if no mapping is done
@@ -819,7 +816,8 @@ igc_rxq_interrupt_setup(struct rte_eth_dev *dev)
if (!rte_intr_dp_is_en(intr_handle))
return;
- mask = RTE_LEN2MASK(intr_handle->nb_efd, uint32_t) << misc_shift;
+ mask = RTE_LEN2MASK(rte_intr_nb_efd_get(intr_handle), uint32_t)
+ << misc_shift;
IGC_WRITE_REG(hw, IGC_EIMS, mask);
}
@@ -913,7 +911,7 @@ eth_igc_start(struct rte_eth_dev *dev)
struct igc_hw *hw = IGC_DEV_PRIVATE_HW(dev);
struct igc_adapter *adapter = IGC_DEV_PRIVATE(dev);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
uint32_t *speeds;
int ret;
@@ -951,10 +949,9 @@ eth_igc_start(struct rte_eth_dev *dev)
return -1;
}
- if (rte_intr_dp_is_en(intr_handle) && !intr_handle->intr_vec) {
- intr_handle->intr_vec = rte_zmalloc("intr_vec",
- dev->data->nb_rx_queues * sizeof(int), 0);
- if (intr_handle->intr_vec == NULL) {
+ if (rte_intr_dp_is_en(intr_handle)) {
+ if (rte_intr_vec_list_alloc(intr_handle, "intr_vec",
+ dev->data->nb_rx_queues)) {
PMD_DRV_LOG(ERR,
"Failed to allocate %d rx_queues intr_vec",
dev->data->nb_rx_queues);
@@ -1169,7 +1166,7 @@ static int
eth_igc_close(struct rte_eth_dev *dev)
{
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
struct igc_hw *hw = IGC_DEV_PRIVATE_HW(dev);
struct igc_adapter *adapter = IGC_DEV_PRIVATE(dev);
int retry = 0;
@@ -1339,11 +1336,11 @@ eth_igc_dev_init(struct rte_eth_dev *dev)
dev->data->port_id, pci_dev->id.vendor_id,
pci_dev->id.device_id);
- rte_intr_callback_register(&pci_dev->intr_handle,
+ rte_intr_callback_register(pci_dev->intr_handle,
eth_igc_interrupt_handler, (void *)dev);
/* enable uio/vfio intr/eventfd mapping */
- rte_intr_enable(&pci_dev->intr_handle);
+ rte_intr_enable(pci_dev->intr_handle);
/* enable support intr */
igc_intr_other_enable(dev);
@@ -2100,7 +2097,7 @@ eth_igc_rx_queue_intr_disable(struct rte_eth_dev *dev, uint16_t queue_id)
{
struct igc_hw *hw = IGC_DEV_PRIVATE_HW(dev);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
uint32_t vec = IGC_MISC_VEC_ID;
if (rte_intr_allow_others(intr_handle))
@@ -2119,7 +2116,7 @@ eth_igc_rx_queue_intr_enable(struct rte_eth_dev *dev, uint16_t queue_id)
{
struct igc_hw *hw = IGC_DEV_PRIVATE_HW(dev);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
uint32_t vec = IGC_MISC_VEC_ID;
if (rte_intr_allow_others(intr_handle))
diff --git a/drivers/net/ionic/ionic_ethdev.c b/drivers/net/ionic/ionic_ethdev.c
index 344c076f30..9af2bb2159 100644
--- a/drivers/net/ionic/ionic_ethdev.c
+++ b/drivers/net/ionic/ionic_ethdev.c
@@ -1071,7 +1071,7 @@ static int
ionic_configure_intr(struct ionic_adapter *adapter)
{
struct rte_pci_device *pci_dev = adapter->pci_dev;
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
int err;
IONIC_PRINT(DEBUG, "Configuring %u intrs", adapter->nintrs);
@@ -1085,15 +1085,10 @@ ionic_configure_intr(struct ionic_adapter *adapter)
IONIC_PRINT(DEBUG,
"Packet I/O interrupt on datapath is enabled");
- if (!intr_handle->intr_vec) {
- intr_handle->intr_vec = rte_zmalloc("intr_vec",
- adapter->nintrs * sizeof(int), 0);
-
- if (!intr_handle->intr_vec) {
- IONIC_PRINT(ERR, "Failed to allocate %u vectors",
- adapter->nintrs);
- return -ENOMEM;
- }
+ if (rte_intr_vec_list_alloc(intr_handle, "intr_vec", adapter->nintrs)) {
+ IONIC_PRINT(ERR, "Failed to allocate %u vectors",
+ adapter->nintrs);
+ return -ENOMEM;
}
err = rte_intr_callback_register(intr_handle,
@@ -1122,7 +1117,7 @@ static void
ionic_unconfigure_intr(struct ionic_adapter *adapter)
{
struct rte_pci_device *pci_dev = adapter->pci_dev;
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
rte_intr_disable(intr_handle);
diff --git a/drivers/net/ixgbe/ixgbe_ethdev.c b/drivers/net/ixgbe/ixgbe_ethdev.c
index a127dc0d86..ba2af9d729 100644
--- a/drivers/net/ixgbe/ixgbe_ethdev.c
+++ b/drivers/net/ixgbe/ixgbe_ethdev.c
@@ -1027,7 +1027,7 @@ eth_ixgbe_dev_init(struct rte_eth_dev *eth_dev, void *init_params __rte_unused)
{
struct ixgbe_adapter *ad = eth_dev->data->dev_private;
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
struct ixgbe_hw *hw =
IXGBE_DEV_PRIVATE_TO_HW(eth_dev->data->dev_private);
struct ixgbe_vfta *shadow_vfta =
@@ -1526,7 +1526,7 @@ eth_ixgbevf_dev_init(struct rte_eth_dev *eth_dev)
uint32_t tc, tcs;
struct ixgbe_adapter *ad = eth_dev->data->dev_private;
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
struct ixgbe_hw *hw =
IXGBE_DEV_PRIVATE_TO_HW(eth_dev->data->dev_private);
struct ixgbe_vfta *shadow_vfta =
@@ -2542,7 +2542,7 @@ ixgbe_dev_start(struct rte_eth_dev *dev)
struct ixgbe_vf_info *vfinfo =
*IXGBE_DEV_PRIVATE_TO_P_VFDATA(dev->data->dev_private);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
uint32_t intr_vector = 0;
int err;
bool link_up = false, negotiate = 0;
@@ -2597,11 +2597,9 @@ ixgbe_dev_start(struct rte_eth_dev *dev)
return -1;
}
- if (rte_intr_dp_is_en(intr_handle) && !intr_handle->intr_vec) {
- intr_handle->intr_vec =
- rte_zmalloc("intr_vec",
- dev->data->nb_rx_queues * sizeof(int), 0);
- if (intr_handle->intr_vec == NULL) {
+ if (rte_intr_dp_is_en(intr_handle)) {
+ if (rte_intr_vec_list_alloc(intr_handle, "intr_vec",
+ dev->data->nb_rx_queues)) {
PMD_INIT_LOG(ERR, "Failed to allocate %d rx_queues"
" intr_vec", dev->data->nb_rx_queues);
return -ENOMEM;
@@ -2837,7 +2835,7 @@ ixgbe_dev_stop(struct rte_eth_dev *dev)
struct ixgbe_vf_info *vfinfo =
*IXGBE_DEV_PRIVATE_TO_P_VFDATA(dev->data->dev_private);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
int vf;
struct ixgbe_tm_conf *tm_conf =
IXGBE_DEV_PRIVATE_TO_TM_CONF(dev->data->dev_private);
@@ -2888,10 +2886,7 @@ ixgbe_dev_stop(struct rte_eth_dev *dev)
/* Clean datapath event and queue/vec mapping */
rte_intr_efd_disable(intr_handle);
- if (intr_handle->intr_vec != NULL) {
- rte_free(intr_handle->intr_vec);
- intr_handle->intr_vec = NULL;
- }
+ rte_intr_vec_list_free(intr_handle);
/* reset hierarchy commit */
tm_conf->committed = false;
@@ -2975,7 +2970,7 @@ ixgbe_dev_close(struct rte_eth_dev *dev)
struct ixgbe_hw *hw =
IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
int retries = 0;
int ret;
@@ -4621,7 +4616,7 @@ ixgbe_dev_interrupt_delayed_handler(void *param)
{
struct rte_eth_dev *dev = (struct rte_eth_dev *)param;
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
struct ixgbe_interrupt *intr =
IXGBE_DEV_PRIVATE_TO_INTR(dev->data->dev_private);
struct ixgbe_hw *hw =
@@ -5302,7 +5297,7 @@ ixgbevf_dev_start(struct rte_eth_dev *dev)
IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
uint32_t intr_vector = 0;
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
int err, mask = 0;
@@ -5365,11 +5360,9 @@ ixgbevf_dev_start(struct rte_eth_dev *dev)
}
}
- if (rte_intr_dp_is_en(intr_handle) && !intr_handle->intr_vec) {
- intr_handle->intr_vec =
- rte_zmalloc("intr_vec",
- dev->data->nb_rx_queues * sizeof(int), 0);
- if (intr_handle->intr_vec == NULL) {
+ if (rte_intr_dp_is_en(intr_handle)) {
+ if (rte_intr_vec_list_alloc(intr_handle, "intr_vec",
+ dev->data->nb_rx_queues)) {
PMD_INIT_LOG(ERR, "Failed to allocate %d rx_queues"
" intr_vec", dev->data->nb_rx_queues);
ixgbe_dev_clear_queues(dev);
@@ -5409,7 +5402,7 @@ ixgbevf_dev_stop(struct rte_eth_dev *dev)
struct ixgbe_hw *hw = IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
struct ixgbe_adapter *adapter = dev->data->dev_private;
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
if (hw->adapter_stopped)
return 0;
@@ -5437,10 +5430,7 @@ ixgbevf_dev_stop(struct rte_eth_dev *dev)
/* Clean datapath event and queue/vec mapping */
rte_intr_efd_disable(intr_handle);
- if (intr_handle->intr_vec != NULL) {
- rte_free(intr_handle->intr_vec);
- intr_handle->intr_vec = NULL;
- }
+ rte_intr_vec_list_free(intr_handle);
adapter->rss_reta_updated = 0;
@@ -5452,7 +5442,7 @@ ixgbevf_dev_close(struct rte_eth_dev *dev)
{
struct ixgbe_hw *hw = IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
int ret;
PMD_INIT_FUNC_TRACE();
@@ -5750,7 +5740,7 @@ static int
ixgbevf_dev_rx_queue_intr_enable(struct rte_eth_dev *dev, uint16_t queue_id)
{
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
struct ixgbe_interrupt *intr =
IXGBE_DEV_PRIVATE_TO_INTR(dev->data->dev_private);
struct ixgbe_hw *hw =
@@ -5776,7 +5766,7 @@ ixgbevf_dev_rx_queue_intr_disable(struct rte_eth_dev *dev, uint16_t queue_id)
struct ixgbe_hw *hw =
IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
uint32_t vec = IXGBE_MISC_VEC_ID;
if (rte_intr_allow_others(intr_handle))
@@ -5792,7 +5782,7 @@ static int
ixgbe_dev_rx_queue_intr_enable(struct rte_eth_dev *dev, uint16_t queue_id)
{
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
uint32_t mask;
struct ixgbe_hw *hw =
IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
@@ -5919,7 +5909,7 @@ static void
ixgbevf_configure_msix(struct rte_eth_dev *dev)
{
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
struct ixgbe_hw *hw =
IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
uint32_t q_idx;
@@ -5946,8 +5936,10 @@ ixgbevf_configure_msix(struct rte_eth_dev *dev)
* as IXGBE_VF_MAXMSIVECOTR = 1
*/
ixgbevf_set_ivar_map(hw, 0, q_idx, vector_idx);
- intr_handle->intr_vec[q_idx] = vector_idx;
- if (vector_idx < base + intr_handle->nb_efd - 1)
+ rte_intr_vec_list_index_set(intr_handle, q_idx,
+ vector_idx);
+ if (vector_idx < base + rte_intr_nb_efd_get(intr_handle)
+ - 1)
vector_idx++;
}
@@ -5968,7 +5960,7 @@ static void
ixgbe_configure_msix(struct rte_eth_dev *dev)
{
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
struct ixgbe_hw *hw =
IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
uint32_t queue_id, base = IXGBE_MISC_VEC_ID;
@@ -6012,8 +6004,10 @@ ixgbe_configure_msix(struct rte_eth_dev *dev)
queue_id++) {
/* by default, 1:1 mapping */
ixgbe_set_ivar_map(hw, 0, queue_id, vec);
- intr_handle->intr_vec[queue_id] = vec;
- if (vec < base + intr_handle->nb_efd - 1)
+ rte_intr_vec_list_index_set(intr_handle,
+ queue_id, vec);
+ if (vec < base + rte_intr_nb_efd_get(intr_handle)
+ - 1)
vec++;
}
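Several Intel PMDs (igc and ixgbe above) derive their interrupt mask registers from the event-fd count; with the accessor only the getter changes. A small worked sketch (values are illustrative):

#include <rte_common.h>
#include <rte_interrupts.h>

/* With rte_intr_nb_efd_get() == 4 and misc_shift == 1 this returns
 * 0x1e: RTE_LEN2MASK(4, uint32_t) == 0xf, shifted left by one to
 * leave bit 0 for the misc/LSC vector.
 */
static uint32_t
my_rxq_intr_mask(struct rte_intr_handle *intr_handle, int misc_shift)
{
	return RTE_LEN2MASK(rte_intr_nb_efd_get(intr_handle), uint32_t)
			<< misc_shift;
}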
diff --git a/drivers/net/memif/memif_socket.c b/drivers/net/memif/memif_socket.c
index 364e818d65..bea4461e12 100644
--- a/drivers/net/memif/memif_socket.c
+++ b/drivers/net/memif/memif_socket.c
@@ -65,7 +65,8 @@ memif_msg_send_from_queue(struct memif_control_channel *cc)
if (e == NULL)
return 0;
- size = memif_msg_send(cc->intr_handle.fd, &e->msg, e->fd);
+ size = memif_msg_send(rte_intr_fd_get(cc->intr_handle), &e->msg,
+ e->fd);
if (size != sizeof(memif_msg_t)) {
MIF_LOG(ERR, "sendmsg fail: %s.", strerror(errno));
ret = -1;
@@ -317,7 +318,9 @@ memif_msg_receive_add_ring(struct rte_eth_dev *dev, memif_msg_t *msg, int fd)
mq = (ar->flags & MEMIF_MSG_ADD_RING_FLAG_C2S) ?
dev->data->rx_queues[ar->index] : dev->data->tx_queues[ar->index];
- mq->intr_handle.fd = fd;
+ if (rte_intr_fd_set(mq->intr_handle, fd))
+ return -1;
+
mq->log2_ring_size = ar->log2_ring_size;
mq->region = ar->region;
mq->ring_offset = ar->offset;
@@ -453,7 +456,7 @@ memif_msg_enq_add_ring(struct rte_eth_dev *dev, uint8_t idx,
dev->data->rx_queues[idx];
e->msg.type = MEMIF_MSG_TYPE_ADD_RING;
- e->fd = mq->intr_handle.fd;
+ e->fd = rte_intr_fd_get(mq->intr_handle);
ar->index = idx;
ar->offset = mq->ring_offset;
ar->region = mq->region;
@@ -505,12 +508,13 @@ memif_intr_unregister_handler(struct rte_intr_handle *intr_handle, void *arg)
struct memif_control_channel *cc = arg;
/* close control channel fd */
- close(intr_handle->fd);
+ close(rte_intr_fd_get(intr_handle));
/* clear message queue */
while ((elt = TAILQ_FIRST(&cc->msg_queue)) != NULL) {
TAILQ_REMOVE(&cc->msg_queue, elt, next);
rte_free(elt);
}
+ rte_intr_instance_free(cc->intr_handle);
/* free control channel */
rte_free(cc);
}
@@ -548,8 +552,8 @@ memif_disconnect(struct rte_eth_dev *dev)
"Unexpected message(s) in message queue.");
}
- ih = &pmd->cc->intr_handle;
- if (ih->fd > 0) {
+ ih = pmd->cc->intr_handle;
+ if (rte_intr_fd_get(ih) > 0) {
ret = rte_intr_callback_unregister(ih,
memif_intr_handler,
pmd->cc);
@@ -563,7 +567,8 @@ memif_disconnect(struct rte_eth_dev *dev)
pmd->cc,
memif_intr_unregister_handler);
} else if (ret > 0) {
- close(ih->fd);
+ close(rte_intr_fd_get(ih));
+ rte_intr_instance_free(ih);
rte_free(pmd->cc);
}
pmd->cc = NULL;
@@ -587,9 +592,10 @@ memif_disconnect(struct rte_eth_dev *dev)
else
continue;
}
- if (mq->intr_handle.fd > 0) {
- close(mq->intr_handle.fd);
- mq->intr_handle.fd = -1;
+
+ if (rte_intr_fd_get(mq->intr_handle) > 0) {
+ close(rte_intr_fd_get(mq->intr_handle));
+ rte_intr_fd_set(mq->intr_handle, -1);
}
}
for (i = 0; i < pmd->cfg.num_s2c_rings; i++) {
@@ -604,9 +610,10 @@ memif_disconnect(struct rte_eth_dev *dev)
else
continue;
}
- if (mq->intr_handle.fd > 0) {
- close(mq->intr_handle.fd);
- mq->intr_handle.fd = -1;
+
+ if (rte_intr_fd_get(mq->intr_handle) > 0) {
+ close(rte_intr_fd_get(mq->intr_handle));
+ rte_intr_fd_set(mq->intr_handle, -1);
}
}
@@ -644,7 +651,7 @@ memif_msg_receive(struct memif_control_channel *cc)
mh.msg_control = ctl;
mh.msg_controllen = sizeof(ctl);
- size = recvmsg(cc->intr_handle.fd, &mh, 0);
+ size = recvmsg(rte_intr_fd_get(cc->intr_handle), &mh, 0);
if (size != sizeof(memif_msg_t)) {
MIF_LOG(DEBUG, "Invalid message size = %zd", size);
if (size > 0)
@@ -774,7 +781,7 @@ memif_intr_handler(void *arg)
/* if driver failed to assign device */
if (cc->dev == NULL) {
memif_msg_send_from_queue(cc);
- ret = rte_intr_callback_unregister_pending(&cc->intr_handle,
+ ret = rte_intr_callback_unregister_pending(cc->intr_handle,
memif_intr_handler,
cc,
memif_intr_unregister_handler);
@@ -812,12 +819,12 @@ memif_listener_handler(void *arg)
int ret;
addr_len = sizeof(client);
- sockfd = accept(socket->intr_handle.fd, (struct sockaddr *)&client,
- (socklen_t *)&addr_len);
+ sockfd = accept(rte_intr_fd_get(socket->intr_handle),
+ (struct sockaddr *)&client, (socklen_t *)&addr_len);
if (sockfd < 0) {
MIF_LOG(ERR,
"Failed to accept connection request on socket fd %d",
- socket->intr_handle.fd);
+ rte_intr_fd_get(socket->intr_handle));
return;
}
@@ -829,13 +836,25 @@ memif_listener_handler(void *arg)
goto error;
}
- cc->intr_handle.fd = sockfd;
- cc->intr_handle.type = RTE_INTR_HANDLE_EXT;
+ /* Allocate interrupt instance */
+ cc->intr_handle = rte_intr_instance_alloc();
+ if (!cc->intr_handle) {
+ MIF_LOG(ERR, "Failed to allocate intr handle");
+ goto error;
+ }
+
+ if (rte_intr_fd_set(cc->intr_handle, sockfd))
+ goto error;
+
+ if (rte_intr_type_set(cc->intr_handle, RTE_INTR_HANDLE_EXT))
+ goto error;
+
cc->socket = socket;
cc->dev = NULL;
TAILQ_INIT(&cc->msg_queue);
- ret = rte_intr_callback_register(&cc->intr_handle, memif_intr_handler, cc);
+ ret = rte_intr_callback_register(cc->intr_handle, memif_intr_handler,
+ cc);
if (ret < 0) {
MIF_LOG(ERR, "Failed to register control channel callback.");
goto error;
@@ -857,8 +876,11 @@ memif_listener_handler(void *arg)
close(sockfd);
sockfd = -1;
}
- if (cc != NULL)
+ if (cc != NULL) {
+ if (cc->intr_handle)
+ rte_intr_instance_free(cc->intr_handle);
rte_free(cc);
+ }
}
static struct memif_socket *
@@ -914,9 +936,21 @@ memif_socket_create(char *key, uint8_t listener, bool is_abstract)
MIF_LOG(DEBUG, "Memif listener socket %s created.", sock->filename);
- sock->intr_handle.fd = sockfd;
- sock->intr_handle.type = RTE_INTR_HANDLE_EXT;
- ret = rte_intr_callback_register(&sock->intr_handle,
+ /* Allocate interrupt instance */
+ sock->intr_handle = rte_intr_instance_alloc();
+ if (!sock->intr_handle) {
+ MIF_LOG(ERR, "Failed to allocate intr handle");
+ goto error;
+ }
+
+ if (rte_intr_fd_set(sock->intr_handle, sockfd))
+ goto error;
+
+ if (rte_intr_type_set(sock->intr_handle,
+ RTE_INTR_HANDLE_EXT))
+ goto error;
+
+ ret = rte_intr_callback_register(sock->intr_handle,
memif_listener_handler, sock);
if (ret < 0) {
MIF_LOG(ERR, "Failed to register interrupt "
@@ -929,8 +963,10 @@ memif_socket_create(char *key, uint8_t listener, bool is_abstract)
error:
MIF_LOG(ERR, "Failed to setup socket %s: %s", key, strerror(errno));
- if (sock != NULL)
+ if (sock != NULL) {
+ rte_intr_instance_free(sock->intr_handle);
rte_free(sock);
+ }
if (sockfd >= 0)
close(sockfd);
return NULL;
@@ -1047,6 +1083,8 @@ memif_socket_remove_device(struct rte_eth_dev *dev)
MIF_LOG(ERR, "Failed to remove socket file: %s",
socket->filename);
}
+ if (pmd->role != MEMIF_ROLE_CLIENT)
+ rte_intr_instance_free(socket->intr_handle);
rte_free(socket);
}
}
@@ -1109,13 +1147,24 @@ memif_connect_client(struct rte_eth_dev *dev)
goto error;
}
- pmd->cc->intr_handle.fd = sockfd;
- pmd->cc->intr_handle.type = RTE_INTR_HANDLE_EXT;
+ /* Allocate interrupt instance */
+ pmd->cc->intr_handle = rte_intr_instance_alloc();
+ if (!pmd->cc->intr_handle) {
+ MIF_LOG(ERR, "Failed to allocate intr handle");
+ goto error;
+ }
+
+ if (rte_intr_fd_set(pmd->cc->intr_handle, sockfd))
+ goto error;
+
+ if (rte_intr_type_set(pmd->cc->intr_handle, RTE_INTR_HANDLE_EXT))
+ goto error;
+
pmd->cc->socket = NULL;
pmd->cc->dev = dev;
TAILQ_INIT(&pmd->cc->msg_queue);
- ret = rte_intr_callback_register(&pmd->cc->intr_handle,
+ ret = rte_intr_callback_register(pmd->cc->intr_handle,
memif_intr_handler, pmd->cc);
if (ret < 0) {
MIF_LOG(ERR, "Failed to register interrupt callback for control fd");
@@ -1130,6 +1179,7 @@ memif_connect_client(struct rte_eth_dev *dev)
sockfd = -1;
}
if (pmd->cc != NULL) {
+ rte_intr_instance_free(pmd->cc->intr_handle);
rte_free(pmd->cc);
pmd->cc = NULL;
}
diff --git a/drivers/net/memif/memif_socket.h b/drivers/net/memif/memif_socket.h
index b9b8a15178..b0decbb0a2 100644
--- a/drivers/net/memif/memif_socket.h
+++ b/drivers/net/memif/memif_socket.h
@@ -85,7 +85,7 @@ struct memif_socket_dev_list_elt {
(sizeof(struct sockaddr_un) - offsetof(struct sockaddr_un, sun_path))
struct memif_socket {
- struct rte_intr_handle intr_handle; /**< interrupt handle */
+ struct rte_intr_handle *intr_handle; /**< interrupt handle */
char filename[MEMIF_SOCKET_UN_SIZE]; /**< socket filename */
TAILQ_HEAD(, memif_socket_dev_list_elt) dev_queue;
@@ -101,7 +101,7 @@ struct memif_msg_queue_elt {
};
struct memif_control_channel {
- struct rte_intr_handle intr_handle; /**< interrupt handle */
+ struct rte_intr_handle *intr_handle; /**< interrupt handle */
TAILQ_HEAD(, memif_msg_queue_elt) msg_queue; /**< control message queue */
struct memif_socket *socket; /**< pointer to socket */
struct rte_eth_dev *dev; /**< pointer to device */
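Since the handle is now opaque, drivers like memif that embedded struct rte_intr_handle in their own structures (memif_socket, memif_control_channel, memif_queue) switch to a pointer and own the allocation themselves. The lifecycle used in the hunks above, condensed into a hedged stand-alone sketch (my_channel, my_handler and my_channel_setup/teardown are made-up names):

#include <errno.h>
#include <unistd.h>
#include <rte_errno.h>
#include <rte_interrupts.h>

struct my_channel {
	struct rte_intr_handle *intr_handle; /* a pointer, no longer embedded */
};

static void my_handler(void *arg) { (void)arg; /* service the fd */ }

static int
my_channel_setup(struct my_channel *ch, int sockfd)
{
	ch->intr_handle = rte_intr_instance_alloc();
	if (ch->intr_handle == NULL)
		return -ENOMEM;

	if (rte_intr_fd_set(ch->intr_handle, sockfd) ||
	    rte_intr_type_set(ch->intr_handle, RTE_INTR_HANDLE_EXT))
		goto error;

	if (rte_intr_callback_register(ch->intr_handle, my_handler, ch) < 0)
		goto error;
	return 0;

error:
	rte_intr_instance_free(ch->intr_handle);
	ch->intr_handle = NULL;
	return -rte_errno;
}

static void
my_channel_teardown(struct my_channel *ch)
{
	rte_intr_callback_unregister(ch->intr_handle, my_handler, ch);
	close(rte_intr_fd_get(ch->intr_handle));
	rte_intr_instance_free(ch->intr_handle);
	ch->intr_handle = NULL;
}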
diff --git a/drivers/net/memif/rte_eth_memif.c b/drivers/net/memif/rte_eth_memif.c
index 980150293e..2b9a092a34 100644
--- a/drivers/net/memif/rte_eth_memif.c
+++ b/drivers/net/memif/rte_eth_memif.c
@@ -326,7 +326,8 @@ eth_memif_rx(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
/* consume interrupt */
if ((ring->flags & MEMIF_RING_FLAG_MASK_INT) == 0)
- size = read(mq->intr_handle.fd, &b, sizeof(b));
+ size = read(rte_intr_fd_get(mq->intr_handle), &b,
+ sizeof(b));
ring_size = 1 << mq->log2_ring_size;
mask = ring_size - 1;
@@ -462,7 +463,8 @@ eth_memif_rx_zc(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
if ((ring->flags & MEMIF_RING_FLAG_MASK_INT) == 0) {
uint64_t b;
ssize_t size __rte_unused;
- size = read(mq->intr_handle.fd, &b, sizeof(b));
+ size = read(rte_intr_fd_get(mq->intr_handle), &b,
+ sizeof(b));
}
ring_size = 1 << mq->log2_ring_size;
@@ -680,7 +682,8 @@ eth_memif_tx(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
if ((ring->flags & MEMIF_RING_FLAG_MASK_INT) == 0) {
a = 1;
- size = write(mq->intr_handle.fd, &a, sizeof(a));
+ size = write(rte_intr_fd_get(mq->intr_handle), &a,
+ sizeof(a));
if (unlikely(size < 0)) {
MIF_LOG(WARNING,
"Failed to send interrupt. %s", strerror(errno));
@@ -832,7 +835,8 @@ eth_memif_tx_zc(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
/* Send interrupt, if enabled. */
if ((ring->flags & MEMIF_RING_FLAG_MASK_INT) == 0) {
uint64_t a = 1;
- ssize_t size = write(mq->intr_handle.fd, &a, sizeof(a));
+ ssize_t size = write(rte_intr_fd_get(mq->intr_handle),
+ &a, sizeof(a));
if (unlikely(size < 0)) {
MIF_LOG(WARNING,
"Failed to send interrupt. %s", strerror(errno));
@@ -1092,8 +1096,11 @@ memif_init_queues(struct rte_eth_dev *dev)
mq->ring_offset = memif_get_ring_offset(dev, mq, MEMIF_RING_C2S, i);
mq->last_head = 0;
mq->last_tail = 0;
- mq->intr_handle.fd = eventfd(0, EFD_NONBLOCK);
- if (mq->intr_handle.fd < 0) {
+ if (rte_intr_fd_set(mq->intr_handle,
+ eventfd(0, EFD_NONBLOCK)))
+ return -rte_errno;
+
+ if (rte_intr_fd_get(mq->intr_handle) < 0) {
MIF_LOG(WARNING,
"Failed to create eventfd for tx queue %d: %s.", i,
strerror(errno));
@@ -1115,8 +1122,11 @@ memif_init_queues(struct rte_eth_dev *dev)
mq->ring_offset = memif_get_ring_offset(dev, mq, MEMIF_RING_S2C, i);
mq->last_head = 0;
mq->last_tail = 0;
- mq->intr_handle.fd = eventfd(0, EFD_NONBLOCK);
- if (mq->intr_handle.fd < 0) {
+ if (rte_intr_fd_set(mq->intr_handle,
+ eventfd(0, EFD_NONBLOCK)))
+ return -rte_errno;
+
+ if (rte_intr_fd_get(mq->intr_handle) < 0) {
MIF_LOG(WARNING,
"Failed to create eventfd for rx queue %d: %s.", i,
strerror(errno));
@@ -1310,12 +1320,24 @@ memif_tx_queue_setup(struct rte_eth_dev *dev,
return -ENOMEM;
}
+ /* Allocate interrupt instance */
+ mq->intr_handle = rte_intr_instance_alloc();
+ if (!mq->intr_handle) {
+ MIF_LOG(ERR, "Failed to allocate intr handle");
+ return -ENOMEM;
+ }
+
mq->type =
(pmd->role == MEMIF_ROLE_CLIENT) ? MEMIF_RING_C2S : MEMIF_RING_S2C;
mq->n_pkts = 0;
mq->n_bytes = 0;
- mq->intr_handle.fd = -1;
- mq->intr_handle.type = RTE_INTR_HANDLE_EXT;
+
+ if (rte_intr_fd_set(mq->intr_handle, -1))
+ return -rte_errno;
+
+ if (rte_intr_type_set(mq->intr_handle, RTE_INTR_HANDLE_EXT))
+ return -rte_errno;
+
mq->in_port = dev->data->port_id;
dev->data->tx_queues[qid] = mq;
@@ -1339,11 +1361,23 @@ memif_rx_queue_setup(struct rte_eth_dev *dev,
return -ENOMEM;
}
+ /* Allocate interrupt instance */
+ mq->intr_handle = rte_intr_instance_alloc();
+ if (!mq->intr_handle) {
+ MIF_LOG(ERR, "Failed to allocate intr handle");
+ return -ENOMEM;
+ }
+
mq->type = (pmd->role == MEMIF_ROLE_CLIENT) ? MEMIF_RING_S2C : MEMIF_RING_C2S;
mq->n_pkts = 0;
mq->n_bytes = 0;
- mq->intr_handle.fd = -1;
- mq->intr_handle.type = RTE_INTR_HANDLE_EXT;
+
+ if (rte_intr_fd_set(mq->intr_handle, -1))
+ return -rte_errno;
+
+ if (rte_intr_type_set(mq->intr_handle, RTE_INTR_HANDLE_EXT))
+ return -rte_errno;
+
mq->mempool = mb_pool;
mq->in_port = dev->data->port_id;
dev->data->rx_queues[qid] = mq;
@@ -1370,6 +1404,7 @@ memif_tx_queue_release(struct rte_eth_dev *dev, uint16_t qid)
if (!mq)
return;
+ rte_intr_instance_free(mq->intr_handle);
rte_free(mq);
}
diff --git a/drivers/net/memif/rte_eth_memif.h b/drivers/net/memif/rte_eth_memif.h
index 2038bda742..a5ee23d42e 100644
--- a/drivers/net/memif/rte_eth_memif.h
+++ b/drivers/net/memif/rte_eth_memif.h
@@ -68,7 +68,7 @@ struct memif_queue {
uint64_t n_pkts; /**< number of rx/tx packets */
uint64_t n_bytes; /**< number of rx/tx bytes */
- struct rte_intr_handle intr_handle; /**< interrupt handle */
+ struct rte_intr_handle *intr_handle; /**< interrupt handle */
memif_log2_ring_size_t log2_ring_size; /**< log2 of ring size */
};
diff --git a/drivers/net/mlx4/mlx4.c b/drivers/net/mlx4/mlx4.c
index f7fe831d61..75656c06db 100644
--- a/drivers/net/mlx4/mlx4.c
+++ b/drivers/net/mlx4/mlx4.c
@@ -1042,9 +1042,18 @@ mlx4_pci_probe(struct rte_pci_driver *pci_drv, struct rte_pci_device *pci_dev)
rte_eth_copy_pci_info(eth_dev, pci_dev);
eth_dev->data->dev_flags |= RTE_ETH_DEV_AUTOFILL_QUEUE_XSTATS;
/* Initialize local interrupt handle for current port. */
- memset(&priv->intr_handle, 0, sizeof(struct rte_intr_handle));
- priv->intr_handle.fd = -1;
- priv->intr_handle.type = RTE_INTR_HANDLE_EXT;
+ priv->intr_handle = rte_intr_instance_alloc();
+ if (!priv->intr_handle) {
+ RTE_LOG(ERR, EAL, "Fail to allocate intr_handle\n");
+ goto port_error;
+ }
+
+ if (rte_intr_fd_set(priv->intr_handle, -1))
+ goto port_error;
+
+ if (rte_intr_type_set(priv->intr_handle,
+ RTE_INTR_HANDLE_EXT))
+ goto port_error;
/*
* Override ethdev interrupt handle pointer with private
* handle instead of that of the parent PCI device used by
@@ -1057,7 +1066,7 @@ mlx4_pci_probe(struct rte_pci_driver *pci_drv, struct rte_pci_device *pci_dev)
* besides setting up eth_dev->intr_handle, the rest is
* handled by rte_intr_rx_ctl().
*/
- eth_dev->intr_handle = &priv->intr_handle;
+ eth_dev->intr_handle = priv->intr_handle;
priv->dev_data = eth_dev->data;
eth_dev->dev_ops = &mlx4_dev_ops;
#ifdef HAVE_IBV_MLX4_BUF_ALLOCATORS
@@ -1102,6 +1111,7 @@ mlx4_pci_probe(struct rte_pci_driver *pci_drv, struct rte_pci_device *pci_dev)
prev_dev = eth_dev;
continue;
port_error:
+ rte_intr_instance_free(priv->intr_handle);
rte_free(priv);
if (eth_dev != NULL)
eth_dev->data->dev_private = NULL;
diff --git a/drivers/net/mlx4/mlx4.h b/drivers/net/mlx4/mlx4.h
index e07b1d2386..2d0c512f79 100644
--- a/drivers/net/mlx4/mlx4.h
+++ b/drivers/net/mlx4/mlx4.h
@@ -176,7 +176,7 @@ struct mlx4_priv {
uint32_t tso_max_payload_sz; /**< Max supported TSO payload size. */
uint32_t hw_rss_max_qps; /**< Max Rx Queues supported by RSS. */
uint64_t hw_rss_sup; /**< Supported RSS hash fields (Verbs format). */
- struct rte_intr_handle intr_handle; /**< Port interrupt handle. */
+ struct rte_intr_handle *intr_handle; /**< Port interrupt handle. */
struct mlx4_drop *drop; /**< Shared resources for drop flow rules. */
struct {
uint32_t dev_gen; /* Generation number to flush local caches. */
diff --git a/drivers/net/mlx4/mlx4_intr.c b/drivers/net/mlx4/mlx4_intr.c
index d56009c418..8059fb4624 100644
--- a/drivers/net/mlx4/mlx4_intr.c
+++ b/drivers/net/mlx4/mlx4_intr.c
@@ -43,12 +43,12 @@ static int mlx4_link_status_check(struct mlx4_priv *priv);
static void
mlx4_rx_intr_vec_disable(struct mlx4_priv *priv)
{
- struct rte_intr_handle *intr_handle = &priv->intr_handle;
+ struct rte_intr_handle *intr_handle = priv->intr_handle;
rte_intr_free_epoll_fd(intr_handle);
- free(intr_handle->intr_vec);
- intr_handle->nb_efd = 0;
- intr_handle->intr_vec = NULL;
+ rte_intr_vec_list_free(intr_handle);
+
+ rte_intr_nb_efd_set(intr_handle, 0);
}
/**
@@ -67,11 +67,10 @@ mlx4_rx_intr_vec_enable(struct mlx4_priv *priv)
unsigned int rxqs_n = ETH_DEV(priv)->data->nb_rx_queues;
unsigned int n = RTE_MIN(rxqs_n, (uint32_t)RTE_MAX_RXTX_INTR_VEC_ID);
unsigned int count = 0;
- struct rte_intr_handle *intr_handle = &priv->intr_handle;
+ struct rte_intr_handle *intr_handle = priv->intr_handle;
mlx4_rx_intr_vec_disable(priv);
- intr_handle->intr_vec = malloc(n * sizeof(intr_handle->intr_vec[0]));
- if (intr_handle->intr_vec == NULL) {
+ if (rte_intr_vec_list_alloc(intr_handle, NULL, n)) {
rte_errno = ENOMEM;
ERROR("failed to allocate memory for interrupt vector,"
" Rx interrupts will not be supported");
@@ -83,9 +82,9 @@ mlx4_rx_intr_vec_enable(struct mlx4_priv *priv)
/* Skip queues that cannot request interrupts. */
if (!rxq || !rxq->channel) {
/* Use invalid intr_vec[] index to disable entry. */
- intr_handle->intr_vec[i] =
- RTE_INTR_VEC_RXTX_OFFSET +
- RTE_MAX_RXTX_INTR_VEC_ID;
+ if (rte_intr_vec_list_index_set(intr_handle, i,
+ RTE_INTR_VEC_RXTX_OFFSET + RTE_MAX_RXTX_INTR_VEC_ID))
+ return -rte_errno;
continue;
}
if (count >= RTE_MAX_RXTX_INTR_VEC_ID) {
@@ -96,14 +95,22 @@ mlx4_rx_intr_vec_enable(struct mlx4_priv *priv)
mlx4_rx_intr_vec_disable(priv);
return -rte_errno;
}
- intr_handle->intr_vec[i] = RTE_INTR_VEC_RXTX_OFFSET + count;
- intr_handle->efds[count] = rxq->channel->fd;
+
+ if (rte_intr_vec_list_index_set(intr_handle, i,
+ RTE_INTR_VEC_RXTX_OFFSET + count))
+ return -rte_errno;
+
+ if (rte_intr_efds_index_set(intr_handle, count,
+ rxq->channel->fd))
+ return -rte_errno;
+
count++;
}
if (!count)
mlx4_rx_intr_vec_disable(priv);
else
- intr_handle->nb_efd = count;
+ if (rte_intr_nb_efd_set(intr_handle, count))
+ return -rte_errno;
return 0;
}
@@ -254,12 +261,13 @@ mlx4_intr_uninstall(struct mlx4_priv *priv)
{
int err = rte_errno; /* Make sure rte_errno remains unchanged. */
- if (priv->intr_handle.fd != -1) {
- rte_intr_callback_unregister(&priv->intr_handle,
+ if (rte_intr_fd_get(priv->intr_handle) != -1) {
+ rte_intr_callback_unregister(priv->intr_handle,
(void (*)(void *))
mlx4_interrupt_handler,
priv);
- priv->intr_handle.fd = -1;
+ if (rte_intr_fd_set(priv->intr_handle, -1))
+ return -rte_errno;
}
rte_eal_alarm_cancel((void (*)(void *))mlx4_link_status_alarm, priv);
priv->intr_alarm = 0;
@@ -286,8 +294,11 @@ mlx4_intr_install(struct mlx4_priv *priv)
mlx4_intr_uninstall(priv);
if (intr_conf->lsc | intr_conf->rmv) {
- priv->intr_handle.fd = priv->ctx->async_fd;
- rc = rte_intr_callback_register(&priv->intr_handle,
+ if (rte_intr_fd_set(priv->intr_handle,
+ priv->ctx->async_fd))
+ return -rte_errno;
+
+ rc = rte_intr_callback_register(priv->intr_handle,
(void (*)(void *))
mlx4_interrupt_handler,
priv);
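mlx4 and mlx5 also store the per-vector event fds and their count through setters (rte_intr_efds_index_set(), rte_intr_nb_efd_set()) instead of writing efds[] and nb_efd directly. A reduced sketch of that enable loop, with the queue/channel lookup stubbed out (my_get_channel_fd and my_rx_intr_vec_enable are hypothetical):

#include <errno.h>
#include <rte_errno.h>
#include <rte_interrupts.h>

/* Hypothetical: return the event channel fd of Rx queue i, or -1. */
static int my_get_channel_fd(unsigned int i) { (void)i; return -1; }

static int
my_rx_intr_vec_enable(struct rte_intr_handle *intr_handle, unsigned int rxqs_n)
{
	unsigned int i, count = 0;

	if (rte_intr_vec_list_alloc(intr_handle, NULL, rxqs_n)) {
		rte_errno = ENOMEM;
		return -rte_errno;
	}

	for (i = 0; i != rxqs_n; ++i) {
		int fd = my_get_channel_fd(i);

		if (fd < 0) {
			/* Disable the entry with an out-of-range vector. */
			if (rte_intr_vec_list_index_set(intr_handle, i,
					RTE_INTR_VEC_RXTX_OFFSET +
					RTE_MAX_RXTX_INTR_VEC_ID))
				return -rte_errno;
			continue;
		}
		if (rte_intr_vec_list_index_set(intr_handle, i,
				RTE_INTR_VEC_RXTX_OFFSET + count) ||
		    rte_intr_efds_index_set(intr_handle, count, fd))
			return -rte_errno;
		count++;
	}
	return rte_intr_nb_efd_set(intr_handle, count);
}

Note the efds index is the running count of enabled queues, not the queue index, matching the original efds[count] assignment.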
diff --git a/drivers/net/mlx5/linux/mlx5_os.c b/drivers/net/mlx5/linux/mlx5_os.c
index e036ed1435..c17be92f78 100644
--- a/drivers/net/mlx5/linux/mlx5_os.c
+++ b/drivers/net/mlx5/linux/mlx5_os.c
@@ -2586,9 +2586,7 @@ mlx5_os_pci_probe_pf(struct rte_pci_device *pci_dev,
*/
if (list[i].info.representor) {
struct rte_intr_handle *intr_handle;
- intr_handle = mlx5_malloc(MLX5_MEM_SYS | MLX5_MEM_ZERO,
- sizeof(*intr_handle), 0,
- SOCKET_ID_ANY);
+ intr_handle = rte_intr_instance_alloc();
if (!intr_handle) {
DRV_LOG(ERR,
"port %u failed to allocate memory for interrupt handler "
@@ -2753,7 +2751,7 @@ mlx5_os_auxiliary_probe(struct rte_device *dev)
if (eth_dev == NULL)
return -rte_errno;
/* Post create. */
- eth_dev->intr_handle = &adev->intr_handle;
+ eth_dev->intr_handle = adev->intr_handle;
if (rte_eal_process_type() == RTE_PROC_PRIMARY) {
eth_dev->data->dev_flags |= RTE_ETH_DEV_INTR_LSC;
eth_dev->data->dev_flags |= RTE_ETH_DEV_INTR_RMV;
@@ -2937,7 +2935,14 @@ mlx5_os_dev_shared_handler_install(struct mlx5_dev_ctx_shared *sh)
int ret;
int flags;
- sh->intr_handle.fd = -1;
+ sh->intr_handle = rte_intr_instance_alloc();
+ if (!sh->intr_handle) {
+ DRV_LOG(ERR, "Fail to allocate intr_handle");
+ rte_errno = ENOMEM;
+ return;
+ }
+ rte_intr_fd_set(sh->intr_handle, -1);
+
flags = fcntl(((struct ibv_context *)sh->ctx)->async_fd, F_GETFL);
ret = fcntl(((struct ibv_context *)sh->ctx)->async_fd,
F_SETFL, flags | O_NONBLOCK);
@@ -2945,17 +2950,24 @@ mlx5_os_dev_shared_handler_install(struct mlx5_dev_ctx_shared *sh)
DRV_LOG(INFO, "failed to change file descriptor async event"
" queue");
} else {
- sh->intr_handle.fd = ((struct ibv_context *)sh->ctx)->async_fd;
- sh->intr_handle.type = RTE_INTR_HANDLE_EXT;
- if (rte_intr_callback_register(&sh->intr_handle,
+ rte_intr_fd_set(sh->intr_handle,
+ ((struct ibv_context *)sh->ctx)->async_fd);
+ rte_intr_type_set(sh->intr_handle, RTE_INTR_HANDLE_EXT);
+ if (rte_intr_callback_register(sh->intr_handle,
mlx5_dev_interrupt_handler, sh)) {
DRV_LOG(INFO, "Fail to install the shared interrupt.");
- sh->intr_handle.fd = -1;
+ rte_intr_fd_set(sh->intr_handle, -1);
}
}
if (sh->devx) {
#ifdef HAVE_IBV_DEVX_ASYNC
- sh->intr_handle_devx.fd = -1;
+ sh->intr_handle_devx = rte_intr_instance_alloc();
+ if (!sh->intr_handle_devx) {
+ DRV_LOG(ERR, "Fail to allocate intr_handle");
+ rte_errno = ENOMEM;
+ return;
+ }
+ rte_intr_fd_set(sh->intr_handle_devx, -1);
sh->devx_comp =
(void *)mlx5_glue->devx_create_cmd_comp(sh->ctx);
struct mlx5dv_devx_cmd_comp *devx_comp = sh->devx_comp;
@@ -2970,13 +2982,14 @@ mlx5_os_dev_shared_handler_install(struct mlx5_dev_ctx_shared *sh)
" devx comp");
return;
}
- sh->intr_handle_devx.fd = devx_comp->fd;
- sh->intr_handle_devx.type = RTE_INTR_HANDLE_EXT;
- if (rte_intr_callback_register(&sh->intr_handle_devx,
+ rte_intr_fd_set(sh->intr_handle_devx, devx_comp->fd);
+ rte_intr_type_set(sh->intr_handle_devx,
+ RTE_INTR_HANDLE_EXT);
+ if (rte_intr_callback_register(sh->intr_handle_devx,
mlx5_dev_interrupt_handler_devx, sh)) {
DRV_LOG(INFO, "Fail to install the devx shared"
" interrupt.");
- sh->intr_handle_devx.fd = -1;
+ rte_intr_fd_set(sh->intr_handle_devx, -1);
}
#endif /* HAVE_IBV_DEVX_ASYNC */
}
@@ -2993,13 +3006,15 @@ mlx5_os_dev_shared_handler_install(struct mlx5_dev_ctx_shared *sh)
void
mlx5_os_dev_shared_handler_uninstall(struct mlx5_dev_ctx_shared *sh)
{
- if (sh->intr_handle.fd >= 0)
- mlx5_intr_callback_unregister(&sh->intr_handle,
+ if (rte_intr_fd_get(sh->intr_handle) >= 0)
+ mlx5_intr_callback_unregister(sh->intr_handle,
mlx5_dev_interrupt_handler, sh);
+ rte_intr_instance_free(sh->intr_handle);
#ifdef HAVE_IBV_DEVX_ASYNC
- if (sh->intr_handle_devx.fd >= 0)
- rte_intr_callback_unregister(&sh->intr_handle_devx,
+ if (rte_intr_fd_get(sh->intr_handle_devx) >= 0)
+ rte_intr_callback_unregister(sh->intr_handle_devx,
mlx5_dev_interrupt_handler_devx, sh);
+ rte_intr_instance_free(sh->intr_handle_devx);
if (sh->devx_comp)
mlx5_glue->devx_destroy_cmd_comp(sh->devx_comp);
#endif
diff --git a/drivers/net/mlx5/linux/mlx5_socket.c b/drivers/net/mlx5/linux/mlx5_socket.c
index 6356b66dc4..1d6a97fbea 100644
--- a/drivers/net/mlx5/linux/mlx5_socket.c
+++ b/drivers/net/mlx5/linux/mlx5_socket.c
@@ -23,7 +23,7 @@
#define MLX5_SOCKET_PATH "/var/tmp/dpdk_net_mlx5_%d"
int server_socket; /* Unix socket for primary process. */
-struct rte_intr_handle server_intr_handle; /* Interrupt handler. */
+struct rte_intr_handle *server_intr_handle; /* Interrupt handler. */
/**
* Handle server pmd socket interrupts.
@@ -145,9 +145,18 @@ static int
mlx5_pmd_interrupt_handler_install(void)
{
MLX5_ASSERT(server_socket);
- server_intr_handle.fd = server_socket;
- server_intr_handle.type = RTE_INTR_HANDLE_EXT;
- return rte_intr_callback_register(&server_intr_handle,
+ server_intr_handle = rte_intr_instance_alloc();
+ if (!server_intr_handle) {
+ DRV_LOG(ERR, "Fail to allocate intr_handle");
+ return -ENOMEM;
+ }
+ if (rte_intr_fd_set(server_intr_handle, server_socket))
+ return -1;
+
+ if (rte_intr_type_set(server_intr_handle, RTE_INTR_HANDLE_EXT))
+ return -1;
+
+ return rte_intr_callback_register(server_intr_handle,
mlx5_pmd_socket_handle, NULL);
}
@@ -158,12 +167,13 @@ static void
mlx5_pmd_interrupt_handler_uninstall(void)
{
if (server_socket) {
- mlx5_intr_callback_unregister(&server_intr_handle,
+ mlx5_intr_callback_unregister(server_intr_handle,
mlx5_pmd_socket_handle,
NULL);
}
- server_intr_handle.fd = 0;
- server_intr_handle.type = RTE_INTR_HANDLE_UNKNOWN;
+ rte_intr_fd_set(server_intr_handle, 0);
+ rte_intr_type_set(server_intr_handle, RTE_INTR_HANDLE_UNKNOWN);
+ rte_intr_instance_free(server_intr_handle);
}
/**
diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h
index fe533fcc81..183644e271 100644
--- a/drivers/net/mlx5/mlx5.h
+++ b/drivers/net/mlx5/mlx5.h
@@ -1025,7 +1025,7 @@ struct mlx5_dev_txpp {
uint32_t tick; /* Completion tick duration in nanoseconds. */
uint32_t test; /* Packet pacing test mode. */
int32_t skew; /* Scheduling skew. */
- struct rte_intr_handle intr_handle; /* Periodic interrupt. */
+ struct rte_intr_handle *intr_handle; /* Periodic interrupt. */
void *echan; /* Event Channel. */
struct mlx5_txpp_wq clock_queue; /* Clock Queue. */
struct mlx5_txpp_wq rearm_queue; /* Clock Queue. */
@@ -1193,8 +1193,8 @@ struct mlx5_dev_ctx_shared {
/* Memory Pool for mlx5 flow resources. */
struct mlx5_l3t_tbl *cnt_id_tbl; /* Shared counter lookup table. */
/* Shared interrupt handler section. */
- struct rte_intr_handle intr_handle; /* Interrupt handler for device. */
- struct rte_intr_handle intr_handle_devx; /* DEVX interrupt handler. */
+ struct rte_intr_handle *intr_handle; /* Interrupt handler for device. */
+ struct rte_intr_handle *intr_handle_devx; /* DEVX interrupt handler. */
void *devx_comp; /* DEVX async comp obj. */
struct mlx5_devx_obj *tis; /* TIS object. */
struct mlx5_devx_obj *td; /* Transport domain. */
diff --git a/drivers/net/mlx5/mlx5_rxq.c b/drivers/net/mlx5/mlx5_rxq.c
index 247f36e5d7..abc3da4808 100644
--- a/drivers/net/mlx5/mlx5_rxq.c
+++ b/drivers/net/mlx5/mlx5_rxq.c
@@ -834,10 +834,7 @@ mlx5_rx_intr_vec_enable(struct rte_eth_dev *dev)
if (!dev->data->dev_conf.intr_conf.rxq)
return 0;
mlx5_rx_intr_vec_disable(dev);
- intr_handle->intr_vec = mlx5_malloc(0,
- n * sizeof(intr_handle->intr_vec[0]),
- 0, SOCKET_ID_ANY);
- if (intr_handle->intr_vec == NULL) {
+ if (rte_intr_vec_list_alloc(intr_handle, NULL, n)) {
DRV_LOG(ERR,
"port %u failed to allocate memory for interrupt"
" vector, Rx interrupts will not be supported",
@@ -845,7 +842,10 @@ mlx5_rx_intr_vec_enable(struct rte_eth_dev *dev)
rte_errno = ENOMEM;
return -rte_errno;
}
- intr_handle->type = RTE_INTR_HANDLE_EXT;
+
+ if (rte_intr_type_set(intr_handle, RTE_INTR_HANDLE_EXT))
+ return -rte_errno;
+
for (i = 0; i != n; ++i) {
/* This rxq obj must not be released in this function. */
struct mlx5_rxq_ctrl *rxq_ctrl = mlx5_rxq_get(dev, i);
@@ -856,9 +856,9 @@ mlx5_rx_intr_vec_enable(struct rte_eth_dev *dev)
if (!rxq_obj || (!rxq_obj->ibv_channel &&
!rxq_obj->devx_channel)) {
/* Use invalid intr_vec[] index to disable entry. */
- intr_handle->intr_vec[i] =
- RTE_INTR_VEC_RXTX_OFFSET +
- RTE_MAX_RXTX_INTR_VEC_ID;
+ if (rte_intr_vec_list_index_set(intr_handle, i,
+ RTE_INTR_VEC_RXTX_OFFSET + RTE_MAX_RXTX_INTR_VEC_ID))
+ return -rte_errno;
/* Decrease the rxq_ctrl's refcnt */
if (rxq_ctrl)
mlx5_rxq_release(dev, i);
@@ -885,14 +885,20 @@ mlx5_rx_intr_vec_enable(struct rte_eth_dev *dev)
mlx5_rx_intr_vec_disable(dev);
return -rte_errno;
}
- intr_handle->intr_vec[i] = RTE_INTR_VEC_RXTX_OFFSET + count;
- intr_handle->efds[count] = rxq_obj->fd;
+
+ if (rte_intr_vec_list_index_set(intr_handle, i,
+ RTE_INTR_VEC_RXTX_OFFSET + count))
+ return -rte_errno;
+ if (rte_intr_efds_index_set(intr_handle, count,
+ rxq_obj->fd))
+ return -rte_errno;
count++;
}
if (!count)
mlx5_rx_intr_vec_disable(dev);
else
- intr_handle->nb_efd = count;
+ if (rte_intr_nb_efd_set(intr_handle, count))
+ return -rte_errno;
return 0;
}
@@ -913,11 +919,11 @@ mlx5_rx_intr_vec_disable(struct rte_eth_dev *dev)
if (!dev->data->dev_conf.intr_conf.rxq)
return;
- if (!intr_handle->intr_vec)
+ if (rte_intr_vec_list_index_get(intr_handle, 0) < 0)
goto free;
for (i = 0; i != n; ++i) {
- if (intr_handle->intr_vec[i] == RTE_INTR_VEC_RXTX_OFFSET +
- RTE_MAX_RXTX_INTR_VEC_ID)
+ if (rte_intr_vec_list_index_get(intr_handle, i) ==
+ RTE_INTR_VEC_RXTX_OFFSET + RTE_MAX_RXTX_INTR_VEC_ID)
continue;
/**
* Need to access directly the queue to release the reference
@@ -927,10 +933,10 @@ mlx5_rx_intr_vec_disable(struct rte_eth_dev *dev)
}
free:
rte_intr_free_epoll_fd(intr_handle);
- if (intr_handle->intr_vec)
- mlx5_free(intr_handle->intr_vec);
- intr_handle->nb_efd = 0;
- intr_handle->intr_vec = NULL;
+
+ rte_intr_vec_list_free(intr_handle);
+
+ rte_intr_nb_efd_set(intr_handle, 0);
}
/**
diff --git a/drivers/net/mlx5/mlx5_trigger.c b/drivers/net/mlx5/mlx5_trigger.c
index 3cbf5816a1..81c7417aa6 100644
--- a/drivers/net/mlx5/mlx5_trigger.c
+++ b/drivers/net/mlx5/mlx5_trigger.c
@@ -1182,7 +1182,7 @@ mlx5_dev_start(struct rte_eth_dev *dev)
dev->rx_pkt_burst = mlx5_select_rx_function(dev);
/* Enable datapath on secondary process. */
mlx5_mp_os_req_start_rxtx(dev);
- if (priv->sh->intr_handle.fd >= 0) {
+ if (rte_intr_fd_get(priv->sh->intr_handle) >= 0) {
priv->sh->port[priv->dev_port - 1].ih_port_id =
(uint32_t)dev->data->port_id;
} else {
@@ -1191,7 +1191,7 @@ mlx5_dev_start(struct rte_eth_dev *dev)
dev->data->dev_conf.intr_conf.lsc = 0;
dev->data->dev_conf.intr_conf.rmv = 0;
}
- if (priv->sh->intr_handle_devx.fd >= 0)
+ if (rte_intr_fd_get(priv->sh->intr_handle_devx) >= 0)
priv->sh->port[priv->dev_port - 1].devx_ih_port_id =
(uint32_t)dev->data->port_id;
return 0;
diff --git a/drivers/net/mlx5/mlx5_txpp.c b/drivers/net/mlx5/mlx5_txpp.c
index 2be7e71f89..68c9cf73fd 100644
--- a/drivers/net/mlx5/mlx5_txpp.c
+++ b/drivers/net/mlx5/mlx5_txpp.c
@@ -756,11 +756,12 @@ mlx5_txpp_interrupt_handler(void *cb_arg)
static void
mlx5_txpp_stop_service(struct mlx5_dev_ctx_shared *sh)
{
- if (!sh->txpp.intr_handle.fd)
+ if (!rte_intr_fd_get(sh->txpp.intr_handle))
return;
- mlx5_intr_callback_unregister(&sh->txpp.intr_handle,
+ mlx5_intr_callback_unregister(sh->txpp.intr_handle,
mlx5_txpp_interrupt_handler, sh);
- sh->txpp.intr_handle.fd = 0;
+ rte_intr_fd_set(sh->txpp.intr_handle, 0);
+ rte_intr_instance_free(sh->txpp.intr_handle);
}
/* Attach interrupt handler and fires first request to Rearm Queue. */
@@ -784,13 +785,21 @@ mlx5_txpp_start_service(struct mlx5_dev_ctx_shared *sh)
rte_errno = errno;
return -rte_errno;
}
- memset(&sh->txpp.intr_handle, 0, sizeof(sh->txpp.intr_handle));
+ sh->txpp.intr_handle = rte_intr_instance_alloc();
+ if (!sh->txpp.intr_handle) {
+ DRV_LOG(ERR, "Fail to allocate intr_handle");
+ return -ENOMEM;
+ }
fd = mlx5_os_get_devx_channel_fd(sh->txpp.echan);
- sh->txpp.intr_handle.fd = fd;
- sh->txpp.intr_handle.type = RTE_INTR_HANDLE_EXT;
- if (rte_intr_callback_register(&sh->txpp.intr_handle,
+ if (rte_intr_fd_set(sh->txpp.intr_handle, fd))
+ return -rte_errno;
+
+ if (rte_intr_type_set(sh->txpp.intr_handle, RTE_INTR_HANDLE_EXT))
+ return -rte_errno;
+
+ if (rte_intr_callback_register(sh->txpp.intr_handle,
mlx5_txpp_interrupt_handler, sh)) {
- sh->txpp.intr_handle.fd = 0;
+ rte_intr_fd_set(sh->txpp.intr_handle, 0);
DRV_LOG(ERR, "Failed to register CQE interrupt %d.", rte_errno);
return -rte_errno;
}
diff --git a/drivers/net/netvsc/hn_ethdev.c b/drivers/net/netvsc/hn_ethdev.c
index 9e2a405973..521c449429 100644
--- a/drivers/net/netvsc/hn_ethdev.c
+++ b/drivers/net/netvsc/hn_ethdev.c
@@ -133,9 +133,9 @@ eth_dev_vmbus_allocate(struct rte_vmbus_device *dev, size_t private_data_size)
eth_dev->device = &dev->device;
/* interrupt is simulated */
- dev->intr_handle.type = RTE_INTR_HANDLE_EXT;
+ rte_intr_type_set(dev->intr_handle, RTE_INTR_HANDLE_EXT);
eth_dev->data->dev_flags |= RTE_ETH_DEV_INTR_LSC;
- eth_dev->intr_handle = &dev->intr_handle;
+ eth_dev->intr_handle = dev->intr_handle;
return eth_dev;
}
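For PCI and vmbus devices the bus layer now allocates the instance, which is why every &pci_dev->intr_handle above becomes plain pci_dev->intr_handle: the PMD neither allocates nor frees it, it only borrows the pointer. A hedged sketch of the driver side (my_eth_dev_init and my_intr_handler are made-up names; header names assume the then-current tree):

#include <rte_ethdev.h>
#include <rte_bus_pci.h>
#include <rte_interrupts.h>

static void my_intr_handler(void *arg) { (void)arg; /* e.g. link status */ }

/* Illustrative: init path of a PCI PMD after this series. */
static int
my_eth_dev_init(struct rte_eth_dev *eth_dev, struct rte_pci_device *pci_dev)
{
	/* The PCI bus owns pci_dev->intr_handle; the PMD only wires it
	 * into the ethdev and registers its callback on it.
	 */
	eth_dev->intr_handle = pci_dev->intr_handle;

	rte_intr_callback_register(pci_dev->intr_handle,
				   my_intr_handler, (void *)eth_dev);
	return rte_intr_enable(pci_dev->intr_handle);
}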
diff --git a/drivers/net/nfp/nfp_common.c b/drivers/net/nfp/nfp_common.c
index 4395a09c59..460ad9408c 100644
--- a/drivers/net/nfp/nfp_common.c
+++ b/drivers/net/nfp/nfp_common.c
@@ -307,24 +307,21 @@ nfp_configure_rx_interrupt(struct rte_eth_dev *dev,
struct nfp_net_hw *hw;
int i;
- if (!intr_handle->intr_vec) {
- intr_handle->intr_vec =
- rte_zmalloc("intr_vec",
- dev->data->nb_rx_queues * sizeof(int), 0);
- if (!intr_handle->intr_vec) {
- PMD_INIT_LOG(ERR, "Failed to allocate %d rx_queues"
- " intr_vec", dev->data->nb_rx_queues);
- return -ENOMEM;
- }
+ if (rte_intr_vec_list_alloc(intr_handle, "intr_vec",
+ dev->data->nb_rx_queues)) {
+ PMD_INIT_LOG(ERR, "Failed to allocate %d rx_queues"
+ " intr_vec", dev->data->nb_rx_queues);
+ return -ENOMEM;
}
hw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private);
- if (intr_handle->type == RTE_INTR_HANDLE_UIO) {
+ if (rte_intr_type_get(intr_handle) == RTE_INTR_HANDLE_UIO) {
PMD_INIT_LOG(INFO, "VF: enabling RX interrupt with UIO");
/* UIO just supports one queue and no LSC*/
nn_cfg_writeb(hw, NFP_NET_CFG_RXR_VEC(0), 0);
- intr_handle->intr_vec[0] = 0;
+ if (rte_intr_vec_list_index_set(intr_handle, 0, 0))
+ return -1;
} else {
PMD_INIT_LOG(INFO, "VF: enabling RX interrupt with VFIO");
for (i = 0; i < dev->data->nb_rx_queues; i++) {
@@ -333,9 +330,12 @@ nfp_configure_rx_interrupt(struct rte_eth_dev *dev,
* efd interrupts
*/
nn_cfg_writeb(hw, NFP_NET_CFG_RXR_VEC(i), i + 1);
- intr_handle->intr_vec[i] = i + 1;
+ if (rte_intr_vec_list_index_set(intr_handle, i,
+ i + 1))
+ return -1;
PMD_INIT_LOG(DEBUG, "intr_vec[%d]= %d", i,
- intr_handle->intr_vec[i]);
+ rte_intr_vec_list_index_get(intr_handle,
+ i));
}
}
@@ -808,7 +808,8 @@ nfp_rx_queue_intr_enable(struct rte_eth_dev *dev, uint16_t queue_id)
hw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private);
pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- if (pci_dev->intr_handle.type != RTE_INTR_HANDLE_UIO)
+ if (rte_intr_type_get(pci_dev->intr_handle) !=
+ RTE_INTR_HANDLE_UIO)
base = 1;
/* Make sure all updates are written before un-masking */
@@ -828,7 +829,8 @@ nfp_rx_queue_intr_disable(struct rte_eth_dev *dev, uint16_t queue_id)
hw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private);
pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- if (pci_dev->intr_handle.type != RTE_INTR_HANDLE_UIO)
+ if (rte_intr_type_get(pci_dev->intr_handle) !=
+ RTE_INTR_HANDLE_UIO)
base = 1;
/* Make sure all updates are written before un-masking */
@@ -878,7 +880,7 @@ nfp_net_irq_unmask(struct rte_eth_dev *dev)
if (hw->ctrl & NFP_NET_CFG_CTRL_MSIXAUTO) {
/* If MSI-X auto-masking is used, clear the entry */
rte_wmb();
- rte_intr_ack(&pci_dev->intr_handle);
+ rte_intr_ack(pci_dev->intr_handle);
} else {
/* Make sure all updates are written before un-masking */
rte_wmb();
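nfp distinguishes UIO from VFIO through rte_intr_type_get() rather than reading intr_handle->type; the base-vector offset used in the hunks above reduces to the following (illustrative helper name):

#include <rte_interrupts.h>

/* UIO exposes a single vector, VFIO reserves vector 0 for misc/LSC. */
static int
my_base_vector(struct rte_intr_handle *intr_handle)
{
	return rte_intr_type_get(intr_handle) == RTE_INTR_HANDLE_UIO ? 0 : 1;
}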
diff --git a/drivers/net/nfp/nfp_ethdev.c b/drivers/net/nfp/nfp_ethdev.c
index 1169ea77a8..fc33bb2ffa 100644
--- a/drivers/net/nfp/nfp_ethdev.c
+++ b/drivers/net/nfp/nfp_ethdev.c
@@ -82,7 +82,7 @@ static int
nfp_net_start(struct rte_eth_dev *dev)
{
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
uint32_t new_ctrl, update = 0;
struct nfp_net_hw *hw;
struct nfp_pf_dev *pf_dev;
@@ -109,12 +109,13 @@ nfp_net_start(struct rte_eth_dev *dev)
"with NFP multiport PF");
return -EINVAL;
}
- if (intr_handle->type == RTE_INTR_HANDLE_UIO) {
+ if (rte_intr_type_get(intr_handle) ==
+ RTE_INTR_HANDLE_UIO) {
/*
* Better not to share LSC with RX interrupts.
* Unregistering LSC interrupt handler
*/
- rte_intr_callback_unregister(&pci_dev->intr_handle,
+ rte_intr_callback_unregister(pci_dev->intr_handle,
nfp_net_dev_interrupt_handler, (void *)dev);
if (dev->data->nb_rx_queues > 1) {
@@ -333,10 +334,10 @@ nfp_net_close(struct rte_eth_dev *dev)
nfp_cpp_free(pf_dev->cpp);
rte_free(pf_dev);
- rte_intr_disable(&pci_dev->intr_handle);
+ rte_intr_disable(pci_dev->intr_handle);
/* unregister callback func from eal lib */
- rte_intr_callback_unregister(&pci_dev->intr_handle,
+ rte_intr_callback_unregister(pci_dev->intr_handle,
nfp_net_dev_interrupt_handler,
(void *)dev);
@@ -579,7 +580,7 @@ nfp_net_init(struct rte_eth_dev *eth_dev)
if (rte_eal_process_type() == RTE_PROC_PRIMARY) {
/* Registering LSC interrupt handler */
- rte_intr_callback_register(&pci_dev->intr_handle,
+ rte_intr_callback_register(pci_dev->intr_handle,
nfp_net_dev_interrupt_handler,
(void *)eth_dev);
/* Telling the firmware about the LSC interrupt entry */
diff --git a/drivers/net/nfp/nfp_ethdev_vf.c b/drivers/net/nfp/nfp_ethdev_vf.c
index 62cb3536e0..9c1db84733 100644
--- a/drivers/net/nfp/nfp_ethdev_vf.c
+++ b/drivers/net/nfp/nfp_ethdev_vf.c
@@ -51,7 +51,7 @@ static int
nfp_netvf_start(struct rte_eth_dev *dev)
{
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
uint32_t new_ctrl, update = 0;
struct nfp_net_hw *hw;
struct rte_eth_conf *dev_conf;
@@ -71,12 +71,13 @@ nfp_netvf_start(struct rte_eth_dev *dev)
/* check and configure queue intr-vector mapping */
if (dev->data->dev_conf.intr_conf.rxq != 0) {
- if (intr_handle->type == RTE_INTR_HANDLE_UIO) {
+ if (rte_intr_type_get(intr_handle) ==
+ RTE_INTR_HANDLE_UIO) {
/*
* Better not to share LSC with RX interrupts.
* Unregistering LSC interrupt handler
*/
- rte_intr_callback_unregister(&pci_dev->intr_handle,
+ rte_intr_callback_unregister(pci_dev->intr_handle,
nfp_net_dev_interrupt_handler, (void *)dev);
if (dev->data->nb_rx_queues > 1) {
@@ -225,10 +226,10 @@ nfp_netvf_close(struct rte_eth_dev *dev)
nfp_net_reset_rx_queue(this_rx_q);
}
- rte_intr_disable(&pci_dev->intr_handle);
+ rte_intr_disable(pci_dev->intr_handle);
/* unregister callback func from eal lib */
- rte_intr_callback_unregister(&pci_dev->intr_handle,
+ rte_intr_callback_unregister(pci_dev->intr_handle,
nfp_net_dev_interrupt_handler,
(void *)dev);
@@ -445,7 +446,7 @@ nfp_netvf_init(struct rte_eth_dev *eth_dev)
if (rte_eal_process_type() == RTE_PROC_PRIMARY) {
/* Registering LSC interrupt handler */
- rte_intr_callback_register(&pci_dev->intr_handle,
+ rte_intr_callback_register(pci_dev->intr_handle,
nfp_net_dev_interrupt_handler,
(void *)eth_dev);
/* Telling the firmware about the LSC interrupt entry */
diff --git a/drivers/net/ngbe/ngbe_ethdev.c b/drivers/net/ngbe/ngbe_ethdev.c
index 3b5c6615ad..4045fbbf00 100644
--- a/drivers/net/ngbe/ngbe_ethdev.c
+++ b/drivers/net/ngbe/ngbe_ethdev.c
@@ -129,7 +129,7 @@ eth_ngbe_dev_init(struct rte_eth_dev *eth_dev, void *init_params __rte_unused)
{
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
struct ngbe_hw *hw = ngbe_dev_hw(eth_dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
const struct rte_memzone *mz;
uint32_t ctrl_ext;
int err;
@@ -334,7 +334,7 @@ ngbe_dev_start(struct rte_eth_dev *dev)
{
struct ngbe_hw *hw = ngbe_dev_hw(dev);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
uint32_t intr_vector = 0;
int err;
bool link_up = false, negotiate = false;
@@ -372,11 +372,9 @@ ngbe_dev_start(struct rte_eth_dev *dev)
return -1;
}
- if (rte_intr_dp_is_en(intr_handle) && intr_handle->intr_vec == NULL) {
- intr_handle->intr_vec =
- rte_zmalloc("intr_vec",
- dev->data->nb_rx_queues * sizeof(int), 0);
- if (intr_handle->intr_vec == NULL) {
+ if (rte_intr_dp_is_en(intr_handle)) {
+ if (rte_intr_vec_list_alloc(intr_handle, "intr_vec",
+ dev->data->nb_rx_queues)) {
PMD_INIT_LOG(ERR,
"Failed to allocate %d rx_queues intr_vec",
dev->data->nb_rx_queues);
@@ -503,7 +501,7 @@ ngbe_dev_stop(struct rte_eth_dev *dev)
struct rte_eth_link link;
struct ngbe_hw *hw = ngbe_dev_hw(dev);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
if (hw->adapter_stopped)
return 0;
@@ -540,10 +538,7 @@ ngbe_dev_stop(struct rte_eth_dev *dev)
/* Clean datapath event and queue/vec mapping */
rte_intr_efd_disable(intr_handle);
- if (intr_handle->intr_vec != NULL) {
- rte_free(intr_handle->intr_vec);
- intr_handle->intr_vec = NULL;
- }
+ rte_intr_vec_list_free(intr_handle);
hw->adapter_stopped = true;
dev->data->dev_started = 0;
@@ -559,7 +554,7 @@ ngbe_dev_close(struct rte_eth_dev *dev)
{
struct ngbe_hw *hw = ngbe_dev_hw(dev);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
int retries = 0;
int ret;
@@ -1093,7 +1088,7 @@ static void
ngbe_configure_msix(struct rte_eth_dev *dev)
{
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
struct ngbe_hw *hw = ngbe_dev_hw(dev);
uint32_t queue_id, base = NGBE_MISC_VEC_ID;
uint32_t vec = NGBE_MISC_VEC_ID;
@@ -1128,8 +1123,10 @@ ngbe_configure_msix(struct rte_eth_dev *dev)
queue_id++) {
/* by default, 1:1 mapping */
ngbe_set_ivar_map(hw, 0, queue_id, vec);
- intr_handle->intr_vec[queue_id] = vec;
- if (vec < base + intr_handle->nb_efd - 1)
+ rte_intr_vec_list_index_set(intr_handle,
+ queue_id, vec);
+ if (vec < base + rte_intr_nb_efd_get(intr_handle)
+ - 1)
vec++;
}
diff --git a/drivers/net/octeontx2/otx2_ethdev_irq.c b/drivers/net/octeontx2/otx2_ethdev_irq.c
index b121488faf..cc573bb2e8 100644
--- a/drivers/net/octeontx2/otx2_ethdev_irq.c
+++ b/drivers/net/octeontx2/otx2_ethdev_irq.c
@@ -34,7 +34,7 @@ static int
nix_lf_register_err_irq(struct rte_eth_dev *eth_dev)
{
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
- struct rte_intr_handle *handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *handle = pci_dev->intr_handle;
struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
int rc, vec;
@@ -54,7 +54,7 @@ static void
nix_lf_unregister_err_irq(struct rte_eth_dev *eth_dev)
{
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
- struct rte_intr_handle *handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *handle = pci_dev->intr_handle;
struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
int vec;
@@ -90,7 +90,7 @@ static int
nix_lf_register_ras_irq(struct rte_eth_dev *eth_dev)
{
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
- struct rte_intr_handle *handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *handle = pci_dev->intr_handle;
struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
int rc, vec;
@@ -110,7 +110,7 @@ static void
nix_lf_unregister_ras_irq(struct rte_eth_dev *eth_dev)
{
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
- struct rte_intr_handle *handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *handle = pci_dev->intr_handle;
struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
int vec;
@@ -263,7 +263,7 @@ int
oxt2_nix_register_queue_irqs(struct rte_eth_dev *eth_dev)
{
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
- struct rte_intr_handle *handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *handle = pci_dev->intr_handle;
struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
int vec, q, sqs, rqs, qs, rc = 0;
@@ -308,7 +308,7 @@ void
oxt2_nix_unregister_queue_irqs(struct rte_eth_dev *eth_dev)
{
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
- struct rte_intr_handle *handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *handle = pci_dev->intr_handle;
struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
int vec, q;
@@ -332,7 +332,7 @@ int
oxt2_nix_register_cq_irqs(struct rte_eth_dev *eth_dev)
{
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
- struct rte_intr_handle *handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *handle = pci_dev->intr_handle;
struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
uint8_t rc = 0, vec, q;
@@ -362,20 +362,19 @@ oxt2_nix_register_cq_irqs(struct rte_eth_dev *eth_dev)
return rc;
}
- if (!handle->intr_vec) {
- handle->intr_vec = rte_zmalloc("intr_vec",
- dev->configured_cints *
- sizeof(int), 0);
- if (!handle->intr_vec) {
- otx2_err("Failed to allocate %d rx intr_vec",
- dev->configured_cints);
- return -ENOMEM;
- }
+ rc = rte_intr_vec_list_alloc(handle, "intr_vec",
+ dev->configured_cints);
+ if (rc) {
+ otx2_err("Fail to allocate intr vec list, "
+ "rc=%d", rc);
+ return rc;
}
/* VFIO vector zero is reserved for misc interrupt so
* doing required adjustment. (b13bfab4cd)
*/
- handle->intr_vec[q] = RTE_INTR_VEC_RXTX_OFFSET + vec;
+ if (rte_intr_vec_list_index_set(handle, q,
+ RTE_INTR_VEC_RXTX_OFFSET + vec))
+ return -1;
/* Configure CQE interrupt coalescing parameters */
otx2_write64(((CQ_CQE_THRESH_DEFAULT) |
@@ -395,7 +394,7 @@ void
oxt2_nix_unregister_cq_irqs(struct rte_eth_dev *eth_dev)
{
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
- struct rte_intr_handle *handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *handle = pci_dev->intr_handle;
struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
int vec, q;
diff --git a/drivers/net/qede/qede_ethdev.c b/drivers/net/qede/qede_ethdev.c
index fd8c62a182..104a26266d 100644
--- a/drivers/net/qede/qede_ethdev.c
+++ b/drivers/net/qede/qede_ethdev.c
@@ -1576,17 +1576,17 @@ static int qede_dev_close(struct rte_eth_dev *eth_dev)
qdev->ops->common->slowpath_stop(edev);
qdev->ops->common->remove(edev);
- rte_intr_disable(&pci_dev->intr_handle);
+ rte_intr_disable(pci_dev->intr_handle);
- switch (pci_dev->intr_handle.type) {
+ switch (rte_intr_type_get(pci_dev->intr_handle)) {
case RTE_INTR_HANDLE_UIO_INTX:
case RTE_INTR_HANDLE_VFIO_LEGACY:
- rte_intr_callback_unregister(&pci_dev->intr_handle,
+ rte_intr_callback_unregister(pci_dev->intr_handle,
qede_interrupt_handler_intx,
(void *)eth_dev);
break;
default:
- rte_intr_callback_unregister(&pci_dev->intr_handle,
+ rte_intr_callback_unregister(pci_dev->intr_handle,
qede_interrupt_handler,
(void *)eth_dev);
}
@@ -2581,22 +2581,22 @@ static int qede_common_dev_init(struct rte_eth_dev *eth_dev, bool is_vf)
}
qede_update_pf_params(edev);
- switch (pci_dev->intr_handle.type) {
+ switch (rte_intr_type_get(pci_dev->intr_handle)) {
case RTE_INTR_HANDLE_UIO_INTX:
case RTE_INTR_HANDLE_VFIO_LEGACY:
int_mode = ECORE_INT_MODE_INTA;
- rte_intr_callback_register(&pci_dev->intr_handle,
+ rte_intr_callback_register(pci_dev->intr_handle,
qede_interrupt_handler_intx,
(void *)eth_dev);
break;
default:
int_mode = ECORE_INT_MODE_MSIX;
- rte_intr_callback_register(&pci_dev->intr_handle,
+ rte_intr_callback_register(pci_dev->intr_handle,
qede_interrupt_handler,
(void *)eth_dev);
}
- if (rte_intr_enable(&pci_dev->intr_handle)) {
+ if (rte_intr_enable(pci_dev->intr_handle)) {
DP_ERR(edev, "rte_intr_enable() failed\n");
rc = -ENODEV;
goto err;
diff --git a/drivers/net/sfc/sfc_intr.c b/drivers/net/sfc/sfc_intr.c
index c2298ed23c..b31965d1ff 100644
--- a/drivers/net/sfc/sfc_intr.c
+++ b/drivers/net/sfc/sfc_intr.c
@@ -79,7 +79,7 @@ sfc_intr_line_handler(void *cb_arg)
if (qmask & (1 << sa->mgmt_evq_index))
sfc_intr_handle_mgmt_evq(sa);
- if (rte_intr_ack(&pci_dev->intr_handle) != 0)
+ if (rte_intr_ack(pci_dev->intr_handle) != 0)
sfc_err(sa, "cannot reenable interrupts");
sfc_log_init(sa, "done");
@@ -123,7 +123,7 @@ sfc_intr_message_handler(void *cb_arg)
sfc_intr_handle_mgmt_evq(sa);
- if (rte_intr_ack(&pci_dev->intr_handle) != 0)
+ if (rte_intr_ack(pci_dev->intr_handle) != 0)
sfc_err(sa, "cannot reenable interrupts");
sfc_log_init(sa, "done");
@@ -159,7 +159,7 @@ sfc_intr_start(struct sfc_adapter *sa)
goto fail_intr_init;
pci_dev = RTE_ETH_DEV_TO_PCI(sa->eth_dev);
- intr_handle = &pci_dev->intr_handle;
+ intr_handle = pci_dev->intr_handle;
if (intr->handler != NULL) {
if (intr->rxq_intr && rte_intr_cap_multiple(intr_handle)) {
@@ -171,16 +171,15 @@ sfc_intr_start(struct sfc_adapter *sa)
goto fail_rte_intr_efd_enable;
}
if (rte_intr_dp_is_en(intr_handle)) {
- intr_handle->intr_vec =
- rte_calloc("intr_vec",
- sa->eth_dev->data->nb_rx_queues, sizeof(int),
- 0);
- if (intr_handle->intr_vec == NULL) {
+ if (rte_intr_vec_list_alloc(intr_handle,
+ "intr_vec",
+ sa->eth_dev->data->nb_rx_queues)) {
sfc_err(sa,
"Failed to allocate %d rx_queues intr_vec",
sa->eth_dev->data->nb_rx_queues);
goto fail_intr_vector_alloc;
}
+
}
sfc_log_init(sa, "rte_intr_callback_register");
@@ -214,16 +213,17 @@ sfc_intr_start(struct sfc_adapter *sa)
efx_intr_enable(sa->nic);
}
- sfc_log_init(sa, "done type=%u max_intr=%d nb_efd=%u vec=%p",
- intr_handle->type, intr_handle->max_intr,
- intr_handle->nb_efd, intr_handle->intr_vec);
+ sfc_log_init(sa, "done type=%u max_intr=%d nb_efd=%u",
+ rte_intr_type_get(intr_handle),
+ rte_intr_max_intr_get(intr_handle),
+ rte_intr_nb_efd_get(intr_handle));
return 0;
fail_rte_intr_enable:
rte_intr_callback_unregister(intr_handle, intr->handler, (void *)sa);
fail_rte_intr_cb_reg:
- rte_free(intr_handle->intr_vec);
+ rte_intr_vec_list_free(intr_handle);
fail_intr_vector_alloc:
rte_intr_efd_disable(intr_handle);
@@ -250,9 +250,9 @@ sfc_intr_stop(struct sfc_adapter *sa)
efx_intr_disable(sa->nic);
- intr_handle = &pci_dev->intr_handle;
+ intr_handle = pci_dev->intr_handle;
- rte_free(intr_handle->intr_vec);
+ rte_intr_vec_list_free(intr_handle);
rte_intr_efd_disable(intr_handle);
if (rte_intr_disable(intr_handle) != 0)
@@ -322,7 +322,7 @@ sfc_intr_attach(struct sfc_adapter *sa)
sfc_log_init(sa, "entry");
- switch (pci_dev->intr_handle.type) {
+ switch (rte_intr_type_get(pci_dev->intr_handle)) {
#ifdef RTE_EXEC_ENV_LINUX
case RTE_INTR_HANDLE_UIO_INTX:
case RTE_INTR_HANDLE_VFIO_LEGACY:
diff --git a/drivers/net/tap/rte_eth_tap.c b/drivers/net/tap/rte_eth_tap.c
index 046f17669d..2ecc2e1531 100644
--- a/drivers/net/tap/rte_eth_tap.c
+++ b/drivers/net/tap/rte_eth_tap.c
@@ -1668,7 +1668,8 @@ tap_dev_intr_handler(void *cb_arg)
struct rte_eth_dev *dev = cb_arg;
struct pmd_internals *pmd = dev->data->dev_private;
- tap_nl_recv(pmd->intr_handle.fd, tap_nl_msg_handler, dev);
+ tap_nl_recv(rte_intr_fd_get(pmd->intr_handle),
+ tap_nl_msg_handler, dev);
}
static int
@@ -1679,22 +1680,23 @@ tap_lsc_intr_handle_set(struct rte_eth_dev *dev, int set)
/* In any case, disable interrupt if the conf is no longer there. */
if (!dev->data->dev_conf.intr_conf.lsc) {
- if (pmd->intr_handle.fd != -1) {
+ if (rte_intr_fd_get(pmd->intr_handle) != -1)
goto clean;
- }
+
return 0;
}
if (set) {
- pmd->intr_handle.fd = tap_nl_init(RTMGRP_LINK);
- if (unlikely(pmd->intr_handle.fd == -1))
+ rte_intr_fd_set(pmd->intr_handle,
+ tap_nl_init(RTMGRP_LINK));
+ if (unlikely(rte_intr_fd_get(pmd->intr_handle) == -1))
return -EBADF;
return rte_intr_callback_register(
- &pmd->intr_handle, tap_dev_intr_handler, dev);
+ pmd->intr_handle, tap_dev_intr_handler, dev);
}
clean:
do {
- ret = rte_intr_callback_unregister(&pmd->intr_handle,
+ ret = rte_intr_callback_unregister(pmd->intr_handle,
tap_dev_intr_handler, dev);
if (ret >= 0) {
break;
@@ -1707,8 +1709,8 @@ tap_lsc_intr_handle_set(struct rte_eth_dev *dev, int set)
}
} while (true);
- tap_nl_final(pmd->intr_handle.fd);
- pmd->intr_handle.fd = -1;
+ tap_nl_final(rte_intr_fd_get(pmd->intr_handle));
+ rte_intr_fd_set(pmd->intr_handle, -1);
return 0;
}
@@ -1923,6 +1925,13 @@ eth_dev_tap_create(struct rte_vdev_device *vdev, const char *tap_name,
goto error_exit;
}
+ /* Allocate interrupt instance */
+ pmd->intr_handle = rte_intr_instance_alloc();
+ if (!pmd->intr_handle) {
+ TAP_LOG(ERR, "Failed to allocate intr handle");
+ goto error_exit;
+ }
+
/* Setup some default values */
data = dev->data;
data->dev_private = pmd;
@@ -1940,9 +1949,9 @@ eth_dev_tap_create(struct rte_vdev_device *vdev, const char *tap_name,
dev->rx_pkt_burst = pmd_rx_burst;
dev->tx_pkt_burst = pmd_tx_burst;
- pmd->intr_handle.type = RTE_INTR_HANDLE_EXT;
- pmd->intr_handle.fd = -1;
- dev->intr_handle = &pmd->intr_handle;
+ rte_intr_type_set(pmd->intr_handle, RTE_INTR_HANDLE_EXT);
+ rte_intr_fd_set(pmd->intr_handle, -1);
+ dev->intr_handle = pmd->intr_handle;
/* Presetup the fds to -1 as being not valid */
for (i = 0; i < RTE_PMD_TAP_MAX_QUEUES; i++) {
@@ -2093,6 +2102,8 @@ eth_dev_tap_create(struct rte_vdev_device *vdev, const char *tap_name,
/* mac_addrs must not be freed alone because part of dev_private */
dev->data->mac_addrs = NULL;
rte_eth_dev_release_port(dev);
+ if (pmd->intr_handle)
+ rte_intr_instance_free(pmd->intr_handle);
error_exit_nodev:
TAP_LOG(ERR, "%s Unable to initialize %s",
diff --git a/drivers/net/tap/rte_eth_tap.h b/drivers/net/tap/rte_eth_tap.h
index a98ea11a33..996021e424 100644
--- a/drivers/net/tap/rte_eth_tap.h
+++ b/drivers/net/tap/rte_eth_tap.h
@@ -89,7 +89,7 @@ struct pmd_internals {
LIST_HEAD(tap_implicit_flows, rte_flow) implicit_flows;
struct rx_queue rxq[RTE_PMD_TAP_MAX_QUEUES]; /* List of RX queues */
struct tx_queue txq[RTE_PMD_TAP_MAX_QUEUES]; /* List of TX queues */
- struct rte_intr_handle intr_handle; /* LSC interrupt handle. */
+ struct rte_intr_handle *intr_handle; /* LSC interrupt handle. */
int ka_fd; /* keep-alive file descriptor */
struct rte_mempool *gso_ctx_mp; /* Mempool for GSO packets */
};
diff --git a/drivers/net/tap/tap_intr.c b/drivers/net/tap/tap_intr.c
index 1cacc15d9f..ded50ed653 100644
--- a/drivers/net/tap/tap_intr.c
+++ b/drivers/net/tap/tap_intr.c
@@ -29,12 +29,13 @@ static void
tap_rx_intr_vec_uninstall(struct rte_eth_dev *dev)
{
struct pmd_internals *pmd = dev->data->dev_private;
- struct rte_intr_handle *intr_handle = &pmd->intr_handle;
+ struct rte_intr_handle *intr_handle = pmd->intr_handle;
rte_intr_free_epoll_fd(intr_handle);
- free(intr_handle->intr_vec);
- intr_handle->intr_vec = NULL;
- intr_handle->nb_efd = 0;
+ rte_intr_vec_list_free(intr_handle);
+ rte_intr_nb_efd_set(intr_handle, 0);
+
+ rte_intr_instance_free(intr_handle);
}
/**
@@ -52,15 +53,15 @@ tap_rx_intr_vec_install(struct rte_eth_dev *dev)
struct pmd_internals *pmd = dev->data->dev_private;
struct pmd_process_private *process_private = dev->process_private;
unsigned int rxqs_n = pmd->dev->data->nb_rx_queues;
- struct rte_intr_handle *intr_handle = &pmd->intr_handle;
+ struct rte_intr_handle *intr_handle = pmd->intr_handle;
unsigned int n = RTE_MIN(rxqs_n, (uint32_t)RTE_MAX_RXTX_INTR_VEC_ID);
unsigned int i;
unsigned int count = 0;
if (!dev->data->dev_conf.intr_conf.rxq)
return 0;
- intr_handle->intr_vec = malloc(sizeof(int) * rxqs_n);
- if (intr_handle->intr_vec == NULL) {
+
+ if (rte_intr_vec_list_alloc(intr_handle, NULL, rxqs_n)) {
rte_errno = ENOMEM;
TAP_LOG(ERR,
"failed to allocate memory for interrupt vector,"
@@ -73,19 +74,24 @@ tap_rx_intr_vec_install(struct rte_eth_dev *dev)
/* Skip queues that cannot request interrupts. */
if (!rxq || process_private->rxq_fds[i] == -1) {
/* Use invalid intr_vec[] index to disable entry. */
- intr_handle->intr_vec[i] =
- RTE_INTR_VEC_RXTX_OFFSET +
- RTE_MAX_RXTX_INTR_VEC_ID;
+ if (rte_intr_vec_list_index_set(intr_handle, i,
+ RTE_INTR_VEC_RXTX_OFFSET + RTE_MAX_RXTX_INTR_VEC_ID))
+ return -rte_errno;
continue;
}
- intr_handle->intr_vec[i] = RTE_INTR_VEC_RXTX_OFFSET + count;
- intr_handle->efds[count] = process_private->rxq_fds[i];
+ if (rte_intr_vec_list_index_set(intr_handle, i,
+ RTE_INTR_VEC_RXTX_OFFSET + count))
+ return -rte_errno;
+ if (rte_intr_efds_index_set(intr_handle, count,
+ process_private->rxq_fds[i]))
+ return -rte_errno;
count++;
}
if (!count)
tap_rx_intr_vec_uninstall(dev);
else
- intr_handle->nb_efd = count;
+ if (rte_intr_nb_efd_set(intr_handle, count))
+ return -rte_errno;
return 0;
}
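Note: illustrative sketch only, not part of the patch. It shows the allocate/free pattern that vdev-style drivers (tap above, vhost and virtio-user below) now follow because struct rte_intr_handle can no longer be embedded in the driver private data. "example_pmd" and the init/uninit functions are placeholders.

#include <errno.h>
#include <rte_errno.h>
#include <rte_interrupts.h>

struct example_pmd {
	struct rte_intr_handle *intr_handle;	/* was: struct rte_intr_handle */
};

static int
example_pmd_init(struct example_pmd *pmd)
{
	pmd->intr_handle = rte_intr_instance_alloc();
	if (pmd->intr_handle == NULL)
		return -ENOMEM;

	if (rte_intr_type_set(pmd->intr_handle, RTE_INTR_HANDLE_EXT) != 0 ||
	    rte_intr_fd_set(pmd->intr_handle, -1) != 0) {
		rte_intr_instance_free(pmd->intr_handle);
		pmd->intr_handle = NULL;
		return -rte_errno;
	}
	return 0;
}

static void
example_pmd_uninit(struct example_pmd *pmd)
{
	if (pmd->intr_handle == NULL)
		return;
	rte_intr_vec_list_free(pmd->intr_handle);
	rte_intr_instance_free(pmd->intr_handle);
	pmd->intr_handle = NULL;
}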
diff --git a/drivers/net/thunderx/nicvf_ethdev.c b/drivers/net/thunderx/nicvf_ethdev.c
index 5502f1ee69..2dd27ab043 100644
--- a/drivers/net/thunderx/nicvf_ethdev.c
+++ b/drivers/net/thunderx/nicvf_ethdev.c
@@ -1876,6 +1876,9 @@ nicvf_dev_close(struct rte_eth_dev *dev)
nicvf_periodic_alarm_stop(nicvf_vf_interrupt, nic->snicvf[i]);
}
+ if (nic->intr_handle)
+ rte_intr_instance_free(nic->intr_handle);
+
return 0;
}
@@ -2175,6 +2178,14 @@ nicvf_eth_dev_init(struct rte_eth_dev *eth_dev)
goto fail;
}
+ /* Allocate interrupt instance */
+ nic->intr_handle = rte_intr_instance_alloc();
+ if (!nic->intr_handle) {
+ PMD_INIT_LOG(ERR, "Failed to allocate intr handle");
+ ret = -ENODEV;
+ goto fail;
+ }
+
nicvf_disable_all_interrupts(nic);
ret = nicvf_periodic_alarm_start(nicvf_interrupt, eth_dev);
diff --git a/drivers/net/thunderx/nicvf_struct.h b/drivers/net/thunderx/nicvf_struct.h
index 0ca207d0dd..c7ea13313e 100644
--- a/drivers/net/thunderx/nicvf_struct.h
+++ b/drivers/net/thunderx/nicvf_struct.h
@@ -100,7 +100,7 @@ struct nicvf {
uint16_t subsystem_vendor_id;
struct nicvf_rbdr *rbdr;
struct nicvf_rss_reta_info rss_info;
- struct rte_intr_handle intr_handle;
+ struct rte_intr_handle *intr_handle;
uint8_t cpi_alg;
uint16_t mtu;
int skip_bytes;
diff --git a/drivers/net/txgbe/txgbe_ethdev.c b/drivers/net/txgbe/txgbe_ethdev.c
index b267da462b..3b1572e485 100644
--- a/drivers/net/txgbe/txgbe_ethdev.c
+++ b/drivers/net/txgbe/txgbe_ethdev.c
@@ -547,7 +547,7 @@ eth_txgbe_dev_init(struct rte_eth_dev *eth_dev, void *init_params __rte_unused)
struct txgbe_dcb_config *dcb_config = TXGBE_DEV_DCB_CONFIG(eth_dev);
struct txgbe_filter_info *filter_info = TXGBE_DEV_FILTER(eth_dev);
struct txgbe_bw_conf *bw_conf = TXGBE_DEV_BW_CONF(eth_dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
const struct rte_memzone *mz;
uint32_t ctrl_ext;
uint16_t csum;
@@ -1619,7 +1619,7 @@ txgbe_dev_start(struct rte_eth_dev *dev)
struct txgbe_hw_stats *hw_stats = TXGBE_DEV_STATS(dev);
struct txgbe_vf_info *vfinfo = *TXGBE_DEV_VFDATA(dev);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
uint32_t intr_vector = 0;
int err;
bool link_up = false, negotiate = 0;
@@ -1680,17 +1680,14 @@ txgbe_dev_start(struct rte_eth_dev *dev)
return -1;
}
- if (rte_intr_dp_is_en(intr_handle) && !intr_handle->intr_vec) {
- intr_handle->intr_vec =
- rte_zmalloc("intr_vec",
- dev->data->nb_rx_queues * sizeof(int), 0);
- if (intr_handle->intr_vec == NULL) {
+ if (rte_intr_dp_is_en(intr_handle)) {
+ if (rte_intr_vec_list_alloc(intr_handle, "intr_vec",
+ dev->data->nb_rx_queues)) {
PMD_INIT_LOG(ERR, "Failed to allocate %d rx_queues"
" intr_vec", dev->data->nb_rx_queues);
return -ENOMEM;
}
}
-
/* configure msix for sleep until rx interrupt */
txgbe_configure_msix(dev);
@@ -1871,7 +1868,7 @@ txgbe_dev_stop(struct rte_eth_dev *dev)
struct txgbe_hw *hw = TXGBE_DEV_HW(dev);
struct txgbe_vf_info *vfinfo = *TXGBE_DEV_VFDATA(dev);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
int vf;
struct txgbe_tm_conf *tm_conf = TXGBE_DEV_TM_CONF(dev);
@@ -1921,10 +1918,7 @@ txgbe_dev_stop(struct rte_eth_dev *dev)
/* Clean datapath event and queue/vec mapping */
rte_intr_efd_disable(intr_handle);
- if (intr_handle->intr_vec != NULL) {
- rte_free(intr_handle->intr_vec);
- intr_handle->intr_vec = NULL;
- }
+ rte_intr_vec_list_free(intr_handle);
/* reset hierarchy commit */
tm_conf->committed = false;
@@ -1987,7 +1981,7 @@ txgbe_dev_close(struct rte_eth_dev *dev)
{
struct txgbe_hw *hw = TXGBE_DEV_HW(dev);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
int retries = 0;
int ret;
@@ -3107,7 +3101,7 @@ txgbe_dev_interrupt_delayed_handler(void *param)
{
struct rte_eth_dev *dev = (struct rte_eth_dev *)param;
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
struct txgbe_interrupt *intr = TXGBE_DEV_INTR(dev);
struct txgbe_hw *hw = TXGBE_DEV_HW(dev);
uint32_t eicr;
@@ -3640,7 +3634,7 @@ static int
txgbe_dev_rx_queue_intr_enable(struct rte_eth_dev *dev, uint16_t queue_id)
{
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
uint32_t mask;
struct txgbe_hw *hw = TXGBE_DEV_HW(dev);
@@ -3722,7 +3716,7 @@ static void
txgbe_configure_msix(struct rte_eth_dev *dev)
{
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
struct txgbe_hw *hw = TXGBE_DEV_HW(dev);
uint32_t queue_id, base = TXGBE_MISC_VEC_ID;
uint32_t vec = TXGBE_MISC_VEC_ID;
@@ -3756,8 +3750,10 @@ txgbe_configure_msix(struct rte_eth_dev *dev)
queue_id++) {
/* by default, 1:1 mapping */
txgbe_set_ivar_map(hw, 0, queue_id, vec);
- intr_handle->intr_vec[queue_id] = vec;
- if (vec < base + intr_handle->nb_efd - 1)
+ rte_intr_vec_list_index_set(intr_handle,
+ queue_id, vec);
+ if (vec < base + rte_intr_nb_efd_get(intr_handle)
+ - 1)
vec++;
}
diff --git a/drivers/net/txgbe/txgbe_ethdev_vf.c b/drivers/net/txgbe/txgbe_ethdev_vf.c
index 896da8a887..373fcf167f 100644
--- a/drivers/net/txgbe/txgbe_ethdev_vf.c
+++ b/drivers/net/txgbe/txgbe_ethdev_vf.c
@@ -166,7 +166,7 @@ eth_txgbevf_dev_init(struct rte_eth_dev *eth_dev)
int err;
uint32_t tc, tcs;
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
struct txgbe_hw *hw = TXGBE_DEV_HW(eth_dev);
struct txgbe_vfta *shadow_vfta = TXGBE_DEV_VFTA(eth_dev);
struct txgbe_hwstrip *hwstrip = TXGBE_DEV_HWSTRIP(eth_dev);
@@ -608,7 +608,7 @@ txgbevf_dev_start(struct rte_eth_dev *dev)
struct txgbe_hw *hw = TXGBE_DEV_HW(dev);
uint32_t intr_vector = 0;
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
int err, mask = 0;
@@ -669,11 +669,9 @@ txgbevf_dev_start(struct rte_eth_dev *dev)
return -1;
}
- if (rte_intr_dp_is_en(intr_handle) && !intr_handle->intr_vec) {
- intr_handle->intr_vec =
- rte_zmalloc("intr_vec",
- dev->data->nb_rx_queues * sizeof(int), 0);
- if (intr_handle->intr_vec == NULL) {
+ if (rte_intr_dp_is_en(intr_handle)) {
+ if (rte_intr_vec_list_alloc(intr_handle, "intr_vec",
+ dev->data->nb_rx_queues)) {
PMD_INIT_LOG(ERR, "Failed to allocate %d rx_queues"
" intr_vec", dev->data->nb_rx_queues);
return -ENOMEM;
@@ -712,7 +710,7 @@ txgbevf_dev_stop(struct rte_eth_dev *dev)
struct txgbe_hw *hw = TXGBE_DEV_HW(dev);
struct txgbe_adapter *adapter = TXGBE_DEV_ADAPTER(dev);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
if (hw->adapter_stopped)
return 0;
@@ -739,10 +737,7 @@ txgbevf_dev_stop(struct rte_eth_dev *dev)
/* Clean datapath event and queue/vec mapping */
rte_intr_efd_disable(intr_handle);
- if (intr_handle->intr_vec != NULL) {
- rte_free(intr_handle->intr_vec);
- intr_handle->intr_vec = NULL;
- }
+ rte_intr_vec_list_free(intr_handle);
adapter->rss_reta_updated = 0;
hw->dev_start = false;
@@ -755,7 +750,7 @@ txgbevf_dev_close(struct rte_eth_dev *dev)
{
struct txgbe_hw *hw = TXGBE_DEV_HW(dev);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
int ret;
PMD_INIT_FUNC_TRACE();
@@ -916,7 +911,7 @@ static int
txgbevf_dev_rx_queue_intr_enable(struct rte_eth_dev *dev, uint16_t queue_id)
{
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
struct txgbe_interrupt *intr = TXGBE_DEV_INTR(dev);
struct txgbe_hw *hw = TXGBE_DEV_HW(dev);
uint32_t vec = TXGBE_MISC_VEC_ID;
@@ -938,7 +933,7 @@ txgbevf_dev_rx_queue_intr_disable(struct rte_eth_dev *dev, uint16_t queue_id)
struct txgbe_interrupt *intr = TXGBE_DEV_INTR(dev);
struct txgbe_hw *hw = TXGBE_DEV_HW(dev);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
uint32_t vec = TXGBE_MISC_VEC_ID;
if (rte_intr_allow_others(intr_handle))
@@ -978,7 +973,7 @@ static void
txgbevf_configure_msix(struct rte_eth_dev *dev)
{
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
struct txgbe_hw *hw = TXGBE_DEV_HW(dev);
uint32_t q_idx;
uint32_t vector_idx = TXGBE_MISC_VEC_ID;
@@ -1004,8 +999,10 @@ txgbevf_configure_msix(struct rte_eth_dev *dev)
* as TXGBE_VF_MAXMSIVECOTR = 1
*/
txgbevf_set_ivar_map(hw, 0, q_idx, vector_idx);
- intr_handle->intr_vec[q_idx] = vector_idx;
- if (vector_idx < base + intr_handle->nb_efd - 1)
+ rte_intr_vec_list_index_set(intr_handle, q_idx,
+ vector_idx);
+ if (vector_idx < base + rte_intr_nb_efd_get(intr_handle)
+ - 1)
vector_idx++;
}
diff --git a/drivers/net/vhost/rte_eth_vhost.c b/drivers/net/vhost/rte_eth_vhost.c
index 2e24e5f7ff..8d01ec65dd 100644
--- a/drivers/net/vhost/rte_eth_vhost.c
+++ b/drivers/net/vhost/rte_eth_vhost.c
@@ -529,40 +529,43 @@ static int
eth_vhost_update_intr(struct rte_eth_dev *eth_dev, uint16_t rxq_idx)
{
struct rte_intr_handle *handle = eth_dev->intr_handle;
- struct rte_epoll_event rev;
+ struct rte_epoll_event rev, *elist;
int epfd, ret;
if (!handle)
return 0;
- if (handle->efds[rxq_idx] == handle->elist[rxq_idx].fd)
+ elist = rte_intr_elist_index_get(handle, rxq_idx);
+ if (rte_intr_efds_index_get(handle, rxq_idx) == elist->fd)
return 0;
VHOST_LOG(INFO, "kickfd for rxq-%d was changed, updating handler.\n",
rxq_idx);
- if (handle->elist[rxq_idx].fd != -1)
+ if (elist->fd != -1)
VHOST_LOG(ERR, "Unexpected previous kickfd value (Got %d, expected -1).\n",
- handle->elist[rxq_idx].fd);
+ elist->fd);
/*
* First remove invalid epoll event, and then install
* the new one. May be solved with a proper API in the
* future.
*/
- epfd = handle->elist[rxq_idx].epfd;
- rev = handle->elist[rxq_idx];
+ epfd = elist->epfd;
+ rev = *elist;
ret = rte_epoll_ctl(epfd, EPOLL_CTL_DEL, rev.fd,
- &handle->elist[rxq_idx]);
+ elist);
if (ret) {
VHOST_LOG(ERR, "Delete epoll event failed.\n");
return ret;
}
- rev.fd = handle->efds[rxq_idx];
- handle->elist[rxq_idx] = rev;
- ret = rte_epoll_ctl(epfd, EPOLL_CTL_ADD, rev.fd,
- &handle->elist[rxq_idx]);
+ rev.fd = rte_intr_efds_index_get(handle, rxq_idx);
+ if (rte_intr_elist_index_set(handle, rxq_idx, rev))
+ return -rte_errno;
+
+ elist = rte_intr_elist_index_get(handle, rxq_idx);
+ ret = rte_epoll_ctl(epfd, EPOLL_CTL_ADD, rev.fd, elist);
if (ret) {
VHOST_LOG(ERR, "Add epoll event failed.\n");
return ret;
@@ -641,9 +644,9 @@ eth_vhost_uninstall_intr(struct rte_eth_dev *dev)
struct rte_intr_handle *intr_handle = dev->intr_handle;
if (intr_handle) {
- if (intr_handle->intr_vec)
- free(intr_handle->intr_vec);
- free(intr_handle);
+ rte_intr_vec_list_free(intr_handle);
+
+ rte_intr_instance_free(intr_handle);
}
dev->intr_handle = NULL;
@@ -662,29 +665,30 @@ eth_vhost_install_intr(struct rte_eth_dev *dev)
if (dev->intr_handle)
eth_vhost_uninstall_intr(dev);
- dev->intr_handle = malloc(sizeof(*dev->intr_handle));
+ dev->intr_handle = rte_intr_instance_alloc();
if (!dev->intr_handle) {
VHOST_LOG(ERR, "Fail to allocate intr_handle\n");
return -ENOMEM;
}
- memset(dev->intr_handle, 0, sizeof(*dev->intr_handle));
-
- dev->intr_handle->efd_counter_size = sizeof(uint64_t);
+ if (rte_intr_efd_counter_size_set(dev->intr_handle,
+ sizeof(uint64_t)))
+ return -rte_errno;
- dev->intr_handle->intr_vec =
- malloc(nb_rxq * sizeof(dev->intr_handle->intr_vec[0]));
-
- if (!dev->intr_handle->intr_vec) {
+ if (rte_intr_vec_list_alloc(dev->intr_handle, NULL, nb_rxq)) {
VHOST_LOG(ERR,
"Failed to allocate memory for interrupt vector\n");
- free(dev->intr_handle);
+ rte_intr_instance_free(dev->intr_handle);
return -ENOMEM;
}
+
VHOST_LOG(INFO, "Prepare intr vec\n");
for (i = 0; i < nb_rxq; i++) {
- dev->intr_handle->intr_vec[i] = RTE_INTR_VEC_RXTX_OFFSET + i;
- dev->intr_handle->efds[i] = -1;
+ if (rte_intr_vec_list_index_set(dev->intr_handle, i,
+ RTE_INTR_VEC_RXTX_OFFSET + i))
+ return -rte_errno;
+ if (rte_intr_efds_index_set(dev->intr_handle, i, -1))
+ return -rte_errno;
vq = dev->data->rx_queues[i];
if (!vq) {
VHOST_LOG(INFO, "rxq-%d not setup yet, skip!\n", i);
@@ -703,13 +707,21 @@ eth_vhost_install_intr(struct rte_eth_dev *dev)
"rxq-%d's kickfd is invalid, skip!\n", i);
continue;
}
- dev->intr_handle->efds[i] = vring.kickfd;
+
+ if (rte_intr_efds_index_set(dev->intr_handle, i,
+ vring.kickfd))
+ continue;
VHOST_LOG(INFO, "Installed intr vec for rxq-%d\n", i);
}
- dev->intr_handle->nb_efd = nb_rxq;
- dev->intr_handle->max_intr = nb_rxq + 1;
- dev->intr_handle->type = RTE_INTR_HANDLE_VDEV;
+ if (rte_intr_nb_efd_set(dev->intr_handle, nb_rxq))
+ return -rte_errno;
+
+ if (rte_intr_max_intr_set(dev->intr_handle, nb_rxq + 1))
+ return -rte_errno;
+
+ if (rte_intr_type_set(dev->intr_handle, RTE_INTR_HANDLE_VDEV))
+ return -rte_errno;
return 0;
}
@@ -914,7 +926,10 @@ vring_conf_update(int vid, struct rte_eth_dev *eth_dev, uint16_t vring_id)
vring_id);
return ret;
}
- eth_dev->intr_handle->efds[rx_idx] = vring.kickfd;
+
+ if (rte_intr_efds_index_set(eth_dev->intr_handle, rx_idx,
+ vring.kickfd))
+ return -rte_errno;
vq = eth_dev->data->rx_queues[rx_idx];
if (!vq) {
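Note: illustrative sketch only, not part of the patch. It shows how the per-queue event fds, vector list and counters are now programmed through setters instead of writing handle->efds[]/nb_efd/max_intr directly, following the vhost hunks above. nb_rxq and kickfd are placeholders.

static int
example_install_intr(struct rte_eth_dev *dev, uint16_t nb_rxq, int kickfd)
{
	uint16_t i;

	dev->intr_handle = rte_intr_instance_alloc();
	if (dev->intr_handle == NULL)
		return -ENOMEM;

	if (rte_intr_vec_list_alloc(dev->intr_handle, NULL, nb_rxq) != 0) {
		rte_intr_instance_free(dev->intr_handle);
		dev->intr_handle = NULL;
		return -ENOMEM;
	}

	for (i = 0; i < nb_rxq; i++) {
		if (rte_intr_vec_list_index_set(dev->intr_handle, i,
				RTE_INTR_VEC_RXTX_OFFSET + i) != 0 ||
		    rte_intr_efds_index_set(dev->intr_handle, i, kickfd) != 0)
			return -rte_errno;
	}

	if (rte_intr_nb_efd_set(dev->intr_handle, nb_rxq) != 0 ||
	    rte_intr_max_intr_set(dev->intr_handle, nb_rxq + 1) != 0 ||
	    rte_intr_type_set(dev->intr_handle, RTE_INTR_HANDLE_VDEV) != 0)
		return -rte_errno;

	return 0;
}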
diff --git a/drivers/net/virtio/virtio_ethdev.c b/drivers/net/virtio/virtio_ethdev.c
index 6aa36b3f39..e7ae6e37f0 100644
--- a/drivers/net/virtio/virtio_ethdev.c
+++ b/drivers/net/virtio/virtio_ethdev.c
@@ -731,8 +731,7 @@ virtio_dev_close(struct rte_eth_dev *dev)
if (intr_conf->lsc || intr_conf->rxq) {
virtio_intr_disable(dev);
rte_intr_efd_disable(dev->intr_handle);
- rte_free(dev->intr_handle->intr_vec);
- dev->intr_handle->intr_vec = NULL;
+ rte_intr_vec_list_free(dev->intr_handle);
}
virtio_reset(hw);
@@ -1641,7 +1640,9 @@ virtio_queues_bind_intr(struct rte_eth_dev *dev)
PMD_INIT_LOG(INFO, "queue/interrupt binding");
for (i = 0; i < dev->data->nb_rx_queues; ++i) {
- dev->intr_handle->intr_vec[i] = i + 1;
+ if (rte_intr_vec_list_index_set(dev->intr_handle, i,
+ i + 1))
+ return -rte_errno;
if (VIRTIO_OPS(hw)->set_queue_irq(hw, hw->vqs[i * 2], i + 1) ==
VIRTIO_MSI_NO_VECTOR) {
PMD_DRV_LOG(ERR, "failed to set queue vector");
@@ -1680,15 +1681,11 @@ virtio_configure_intr(struct rte_eth_dev *dev)
return -1;
}
- if (!dev->intr_handle->intr_vec) {
- dev->intr_handle->intr_vec =
- rte_zmalloc("intr_vec",
- hw->max_queue_pairs * sizeof(int), 0);
- if (!dev->intr_handle->intr_vec) {
- PMD_INIT_LOG(ERR, "Failed to allocate %u rxq vectors",
- hw->max_queue_pairs);
- return -ENOMEM;
- }
+ if (rte_intr_vec_list_alloc(dev->intr_handle, "intr_vec",
+ hw->max_queue_pairs)) {
+ PMD_INIT_LOG(ERR, "Failed to allocate %u rxq vectors",
+ hw->max_queue_pairs);
+ return -ENOMEM;
}
if (dev->data->dev_flags & RTE_ETH_DEV_INTR_LSC) {
diff --git a/drivers/net/virtio/virtio_user/virtio_user_dev.c b/drivers/net/virtio/virtio_user/virtio_user_dev.c
index 6a6145583b..62fe307b7a 100644
--- a/drivers/net/virtio/virtio_user/virtio_user_dev.c
+++ b/drivers/net/virtio/virtio_user/virtio_user_dev.c
@@ -407,22 +407,36 @@ virtio_user_fill_intr_handle(struct virtio_user_dev *dev)
struct rte_eth_dev *eth_dev = &rte_eth_devices[dev->hw.port_id];
if (!eth_dev->intr_handle) {
- eth_dev->intr_handle = malloc(sizeof(*eth_dev->intr_handle));
+ eth_dev->intr_handle = rte_intr_instance_alloc();
if (!eth_dev->intr_handle) {
PMD_DRV_LOG(ERR, "(%s) failed to allocate intr_handle", dev->path);
return -1;
}
- memset(eth_dev->intr_handle, 0, sizeof(*eth_dev->intr_handle));
}
for (i = 0; i < dev->max_queue_pairs; ++i)
- eth_dev->intr_handle->efds[i] = dev->callfds[2 * i];
- eth_dev->intr_handle->nb_efd = dev->max_queue_pairs;
- eth_dev->intr_handle->max_intr = dev->max_queue_pairs + 1;
- eth_dev->intr_handle->type = RTE_INTR_HANDLE_VDEV;
+ if (rte_intr_efds_index_set(eth_dev->intr_handle, i,
+ dev->callfds[i]))
+ return -rte_errno;
+
+ if (rte_intr_nb_efd_set(eth_dev->intr_handle,
+ dev->max_queue_pairs))
+ return -rte_errno;
+
+ if (rte_intr_max_intr_set(eth_dev->intr_handle,
+ dev->max_queue_pairs + 1))
+ return -rte_errno;
+
+ if (rte_intr_type_set(eth_dev->intr_handle,
+ RTE_INTR_HANDLE_VDEV))
+ return -rte_errno;
/* For virtio vdev, no need to read counter for clean */
- eth_dev->intr_handle->efd_counter_size = 0;
- eth_dev->intr_handle->fd = dev->ops->get_intr_fd(dev);
+ if (rte_intr_efd_counter_size_set(eth_dev->intr_handle, 0))
+ return -rte_errno;
+
+ if (rte_intr_fd_set(eth_dev->intr_handle,
+ dev->ops->get_intr_fd(dev)))
+ return -rte_errno;
return 0;
}
@@ -657,7 +671,7 @@ virtio_user_dev_uninit(struct virtio_user_dev *dev)
struct rte_eth_dev *eth_dev = &rte_eth_devices[dev->hw.port_id];
if (eth_dev->intr_handle) {
- free(eth_dev->intr_handle);
+ rte_intr_instance_free(eth_dev->intr_handle);
eth_dev->intr_handle = NULL;
}
@@ -962,7 +976,7 @@ virtio_user_dev_delayed_disconnect_handler(void *param)
return;
}
PMD_DRV_LOG(DEBUG, "Unregistering intr fd: %d",
- eth_dev->intr_handle->fd);
+ rte_intr_fd_get(eth_dev->intr_handle));
if (rte_intr_callback_unregister(eth_dev->intr_handle,
virtio_interrupt_handler,
eth_dev) != 1)
@@ -972,10 +986,11 @@ virtio_user_dev_delayed_disconnect_handler(void *param)
if (dev->ops->server_disconnect)
dev->ops->server_disconnect(dev);
- eth_dev->intr_handle->fd = dev->ops->get_intr_fd(dev);
+ rte_intr_fd_set(eth_dev->intr_handle,
+ dev->ops->get_intr_fd(dev));
PMD_DRV_LOG(DEBUG, "Registering intr fd: %d",
- eth_dev->intr_handle->fd);
+ rte_intr_fd_get(eth_dev->intr_handle));
if (rte_intr_callback_register(eth_dev->intr_handle,
virtio_interrupt_handler,
@@ -996,16 +1011,18 @@ virtio_user_dev_delayed_intr_reconfig_handler(void *param)
struct rte_eth_dev *eth_dev = &rte_eth_devices[dev->hw.port_id];
PMD_DRV_LOG(DEBUG, "Unregistering intr fd: %d",
- eth_dev->intr_handle->fd);
+ rte_intr_fd_get(eth_dev->intr_handle));
if (rte_intr_callback_unregister(eth_dev->intr_handle,
virtio_interrupt_handler,
eth_dev) != 1)
PMD_DRV_LOG(ERR, "interrupt unregister failed");
- eth_dev->intr_handle->fd = dev->ops->get_intr_fd(dev);
+ rte_intr_fd_set(eth_dev->intr_handle,
+ dev->ops->get_intr_fd(dev));
- PMD_DRV_LOG(DEBUG, "Registering intr fd: %d", eth_dev->intr_handle->fd);
+ PMD_DRV_LOG(DEBUG, "Registering intr fd: %d",
+ rte_intr_fd_get(eth_dev->intr_handle));
if (rte_intr_callback_register(eth_dev->intr_handle,
virtio_interrupt_handler, eth_dev))
diff --git a/drivers/net/vmxnet3/vmxnet3_ethdev.c b/drivers/net/vmxnet3/vmxnet3_ethdev.c
index cfffc94c48..45ab4971ed 100644
--- a/drivers/net/vmxnet3/vmxnet3_ethdev.c
+++ b/drivers/net/vmxnet3/vmxnet3_ethdev.c
@@ -620,11 +620,9 @@ vmxnet3_configure_msix(struct rte_eth_dev *dev)
return -1;
}
- if (rte_intr_dp_is_en(intr_handle) && !intr_handle->intr_vec) {
- intr_handle->intr_vec =
- rte_zmalloc("intr_vec",
- dev->data->nb_rx_queues * sizeof(int), 0);
- if (intr_handle->intr_vec == NULL) {
+ if (rte_intr_dp_is_en(intr_handle)) {
+ if (rte_intr_vec_list_alloc(intr_handle, "intr_vec",
+ dev->data->nb_rx_queues)) {
PMD_INIT_LOG(ERR, "Failed to allocate %d Rx queues intr_vec",
dev->data->nb_rx_queues);
rte_intr_efd_disable(intr_handle);
@@ -635,8 +633,7 @@ vmxnet3_configure_msix(struct rte_eth_dev *dev)
if (!rte_intr_allow_others(intr_handle) &&
dev->data->dev_conf.intr_conf.lsc != 0) {
PMD_INIT_LOG(ERR, "not enough intr vector to support both Rx interrupt and LSC");
- rte_free(intr_handle->intr_vec);
- intr_handle->intr_vec = NULL;
+ rte_intr_vec_list_free(intr_handle);
rte_intr_efd_disable(intr_handle);
return -1;
}
@@ -644,17 +641,19 @@ vmxnet3_configure_msix(struct rte_eth_dev *dev)
/* if we cannot allocate one MSI-X vector per queue, don't enable
* interrupt mode.
*/
- if (hw->intr.num_intrs != (intr_handle->nb_efd + 1)) {
+ if (hw->intr.num_intrs !=
+ (rte_intr_nb_efd_get(intr_handle) + 1)) {
PMD_INIT_LOG(ERR, "Device configured with %d Rx intr vectors, expecting %d",
- hw->intr.num_intrs, intr_handle->nb_efd + 1);
- rte_free(intr_handle->intr_vec);
- intr_handle->intr_vec = NULL;
+ hw->intr.num_intrs,
+ rte_intr_nb_efd_get(intr_handle) + 1);
+ rte_intr_vec_list_free(intr_handle);
rte_intr_efd_disable(intr_handle);
return -1;
}
for (i = 0; i < dev->data->nb_rx_queues; i++)
- intr_handle->intr_vec[i] = i + 1;
+ if (rte_intr_vec_list_index_set(intr_handle, i, i + 1))
+ return -rte_errno;
for (i = 0; i < hw->intr.num_intrs; i++)
hw->intr.mod_levels[i] = UPT1_IML_ADAPTIVE;
@@ -802,7 +801,9 @@ vmxnet3_setup_driver_shared(struct rte_eth_dev *dev)
if (hw->intr.lsc_only)
tqd->conf.intrIdx = 1;
else
- tqd->conf.intrIdx = intr_handle->intr_vec[i];
+ tqd->conf.intrIdx =
+ rte_intr_vec_list_index_get(intr_handle,
+ i);
tqd->status.stopped = TRUE;
tqd->status.error = 0;
memset(&tqd->stats, 0, sizeof(tqd->stats));
@@ -825,7 +826,9 @@ vmxnet3_setup_driver_shared(struct rte_eth_dev *dev)
if (hw->intr.lsc_only)
rqd->conf.intrIdx = 1;
else
- rqd->conf.intrIdx = intr_handle->intr_vec[i];
+ rqd->conf.intrIdx =
+ rte_intr_vec_list_index_get(intr_handle,
+ i);
rqd->status.stopped = TRUE;
rqd->status.error = 0;
memset(&rqd->stats, 0, sizeof(rqd->stats));
@@ -1022,10 +1025,7 @@ vmxnet3_dev_stop(struct rte_eth_dev *dev)
/* Clean datapath event and queue/vector mapping */
rte_intr_efd_disable(intr_handle);
- if (intr_handle->intr_vec != NULL) {
- rte_free(intr_handle->intr_vec);
- intr_handle->intr_vec = NULL;
- }
+ rte_intr_vec_list_free(intr_handle);
/* quiesce the device first */
VMXNET3_WRITE_BAR1_REG(hw, VMXNET3_REG_CMD, VMXNET3_CMD_QUIESCE_DEV);
@@ -1671,7 +1671,9 @@ vmxnet3_dev_rx_queue_intr_enable(struct rte_eth_dev *dev, uint16_t queue_id)
{
struct vmxnet3_hw *hw = dev->data->dev_private;
- vmxnet3_enable_intr(hw, dev->intr_handle->intr_vec[queue_id]);
+ vmxnet3_enable_intr(hw,
+ rte_intr_vec_list_index_get(dev->intr_handle,
+ queue_id));
return 0;
}
@@ -1681,7 +1683,8 @@ vmxnet3_dev_rx_queue_intr_disable(struct rte_eth_dev *dev, uint16_t queue_id)
{
struct vmxnet3_hw *hw = dev->data->dev_private;
- vmxnet3_disable_intr(hw, dev->intr_handle->intr_vec[queue_id]);
+ vmxnet3_disable_intr(hw,
+ rte_intr_vec_list_index_get(dev->intr_handle, queue_id));
return 0;
}
diff --git a/drivers/raw/ifpga/ifpga_rawdev.c b/drivers/raw/ifpga/ifpga_rawdev.c
index 76e6a8530b..b3e0671e7a 100644
--- a/drivers/raw/ifpga/ifpga_rawdev.c
+++ b/drivers/raw/ifpga/ifpga_rawdev.c
@@ -73,7 +73,7 @@ static pthread_t ifpga_monitor_start_thread;
#define IFPGA_MAX_IRQ 12
/* 0 for FME interrupt, others are reserved for AFU irq */
-static struct rte_intr_handle ifpga_irq_handle[IFPGA_MAX_IRQ];
+static struct rte_intr_handle *ifpga_irq_handle[IFPGA_MAX_IRQ];
static struct ifpga_rawdev *
ifpga_rawdev_allocate(struct rte_rawdev *rawdev);
@@ -1345,17 +1345,22 @@ ifpga_unregister_msix_irq(enum ifpga_irq_type type,
int vec_start, rte_intr_callback_fn handler, void *arg)
{
struct rte_intr_handle *intr_handle;
+ int rc, i;
if (type == IFPGA_FME_IRQ)
- intr_handle = &ifpga_irq_handle[0];
+ intr_handle = ifpga_irq_handle[0];
else if (type == IFPGA_AFU_IRQ)
- intr_handle = &ifpga_irq_handle[vec_start + 1];
+ intr_handle = ifpga_irq_handle[vec_start + 1];
else
return 0;
rte_intr_efd_disable(intr_handle);
- return rte_intr_callback_unregister(intr_handle, handler, arg);
+ rc = rte_intr_callback_unregister(intr_handle, handler, arg);
+
+ for (i = 0; i < IFPGA_MAX_IRQ; i++)
+ rte_intr_instance_free(ifpga_irq_handle[i]);
+ return rc;
}
int
@@ -1369,6 +1374,13 @@ ifpga_register_msix_irq(struct rte_rawdev *dev, int port_id,
struct opae_adapter *adapter;
struct opae_manager *mgr;
struct opae_accelerator *acc;
+ int *intr_efds = NULL, nb_intr, i;
+
+ for (i = 0; i < IFPGA_MAX_IRQ; i++) {
+ ifpga_irq_handle[i] = rte_intr_instance_alloc();
+ if (!ifpga_irq_handle[i])
+ return -ENOMEM;
+ }
adapter = ifpga_rawdev_get_priv(dev);
if (!adapter)
@@ -1379,29 +1391,33 @@ ifpga_register_msix_irq(struct rte_rawdev *dev, int port_id,
return -ENODEV;
if (type == IFPGA_FME_IRQ) {
- intr_handle = &ifpga_irq_handle[0];
+ intr_handle = ifpga_irq_handle[0];
count = 1;
} else if (type == IFPGA_AFU_IRQ) {
- intr_handle = &ifpga_irq_handle[vec_start + 1];
+ intr_handle = ifpga_irq_handle[vec_start + 1];
} else {
return -EINVAL;
}
- intr_handle->type = RTE_INTR_HANDLE_VFIO_MSIX;
+ if (rte_intr_type_set(intr_handle, RTE_INTR_HANDLE_VFIO_MSIX))
+ return -rte_errno;
ret = rte_intr_efd_enable(intr_handle, count);
if (ret)
return -ENODEV;
- intr_handle->fd = intr_handle->efds[0];
+ if (rte_intr_fd_set(intr_handle,
+ rte_intr_efds_index_get(intr_handle, 0)))
+ return -rte_errno;
IFPGA_RAWDEV_PMD_DEBUG("register %s irq, vfio_fd=%d, fd=%d\n",
- name, intr_handle->vfio_dev_fd,
- intr_handle->fd);
+ name, rte_intr_dev_fd_get(intr_handle),
+ rte_intr_fd_get(intr_handle));
if (type == IFPGA_FME_IRQ) {
struct fpga_fme_err_irq_set err_irq_set;
- err_irq_set.evtfd = intr_handle->efds[0];
+ err_irq_set.evtfd = rte_intr_efds_index_get(intr_handle,
+ 0);
ret = opae_manager_ifpga_set_err_irq(mgr, &err_irq_set);
if (ret)
@@ -1411,20 +1427,33 @@ ifpga_register_msix_irq(struct rte_rawdev *dev, int port_id,
if (!acc)
return -EINVAL;
- ret = opae_acc_set_irq(acc, vec_start, count,
- intr_handle->efds);
- if (ret)
+ nb_intr = rte_intr_nb_intr_get(intr_handle);
+
+ intr_efds = calloc(nb_intr, sizeof(int));
+ if (!intr_efds)
+ return -ENOMEM;
+
+ for (i = 0; i < nb_intr; i++)
+ intr_efds[i] = rte_intr_efds_index_get(intr_handle, i);
+
+ ret = opae_acc_set_irq(acc, vec_start, count, intr_efds);
+ if (ret) {
+ free(intr_efds);
return -EINVAL;
+ }
}
/* register interrupt handler using DPDK API */
ret = rte_intr_callback_register(intr_handle,
handler, (void *)arg);
- if (ret)
+ if (ret) {
+ free(intr_efds);
return -EINVAL;
+ }
IFPGA_RAWDEV_PMD_INFO("success register %s interrupt\n", name);
+ free(intr_efds);
return 0;
}
@@ -1491,7 +1520,7 @@ ifpga_rawdev_create(struct rte_pci_device *pci_dev,
data->bus = pci_dev->addr.bus;
data->devid = pci_dev->addr.devid;
data->function = pci_dev->addr.function;
- data->vfio_dev_fd = pci_dev->intr_handle.vfio_dev_fd;
+ data->vfio_dev_fd = rte_intr_dev_fd_get(pci_dev->intr_handle);
adapter = rawdev->dev_private;
/* create a opae_adapter based on above device data */
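Note: illustrative sketch only, not part of the patch. With the efds array hidden behind the handle, callers that need a plain int array (as the ifpga driver does for opae_acc_set_irq() above) copy the fds out through the getters. example_copy_efds() is a placeholder helper mirroring the nb_intr/intr_efds variables used in that hunk.

#include <stdlib.h>
#include <rte_interrupts.h>

static int *
example_copy_efds(struct rte_intr_handle *handle, int *nb_out)
{
	int i, nb_intr = rte_intr_nb_intr_get(handle);
	int *efds = calloc(nb_intr, sizeof(int));

	if (efds == NULL)
		return NULL;
	for (i = 0; i < nb_intr; i++)
		efds[i] = rte_intr_efds_index_get(handle, i);
	*nb_out = nb_intr;
	return efds;	/* caller frees */
}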
diff --git a/drivers/raw/ntb/ntb.c b/drivers/raw/ntb/ntb.c
index 78cfcd79f7..46ac02e5ab 100644
--- a/drivers/raw/ntb/ntb.c
+++ b/drivers/raw/ntb/ntb.c
@@ -1044,13 +1044,10 @@ ntb_dev_close(struct rte_rawdev *dev)
ntb_queue_release(dev, i);
hw->queue_pairs = 0;
- intr_handle = &hw->pci_dev->intr_handle;
+ intr_handle = hw->pci_dev->intr_handle;
/* Clean datapath event and vec mapping */
rte_intr_efd_disable(intr_handle);
- if (intr_handle->intr_vec) {
- rte_free(intr_handle->intr_vec);
- intr_handle->intr_vec = NULL;
- }
+ rte_intr_vec_list_free(intr_handle);
/* Disable uio intr before callback unregister */
rte_intr_disable(intr_handle);
@@ -1402,7 +1399,7 @@ ntb_init_hw(struct rte_rawdev *dev, struct rte_pci_device *pci_dev)
/* Init doorbell. */
hw->db_valid_mask = RTE_LEN2MASK(hw->db_cnt, uint64_t);
- intr_handle = &pci_dev->intr_handle;
+ intr_handle = pci_dev->intr_handle;
/* Register callback func to eal lib */
rte_intr_callback_register(intr_handle,
ntb_dev_intr_handler, dev);
diff --git a/drivers/regex/octeontx2/otx2_regexdev_hw_access.c b/drivers/regex/octeontx2/otx2_regexdev_hw_access.c
index 620d5c9122..f8031d0f72 100644
--- a/drivers/regex/octeontx2/otx2_regexdev_hw_access.c
+++ b/drivers/regex/octeontx2/otx2_regexdev_hw_access.c
@@ -31,7 +31,7 @@ ree_lf_err_intr_unregister(const struct rte_regexdev *dev, uint16_t msix_off,
uintptr_t base)
{
struct rte_pci_device *pci_dev = RTE_DEV_TO_PCI(dev->device);
- struct rte_intr_handle *handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *handle = pci_dev->intr_handle;
/* Disable error interrupts */
otx2_write64(~0ull, base + OTX2_REE_LF_MISC_INT_ENA_W1C);
@@ -61,7 +61,7 @@ ree_lf_err_intr_register(const struct rte_regexdev *dev, uint16_t msix_off,
uintptr_t base)
{
struct rte_pci_device *pci_dev = RTE_DEV_TO_PCI(dev->device);
- struct rte_intr_handle *handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *handle = pci_dev->intr_handle;
int ret;
/* Disable error interrupts */
diff --git a/drivers/vdpa/ifc/ifcvf_vdpa.c b/drivers/vdpa/ifc/ifcvf_vdpa.c
index 365da2a8b9..dd5251d382 100644
--- a/drivers/vdpa/ifc/ifcvf_vdpa.c
+++ b/drivers/vdpa/ifc/ifcvf_vdpa.c
@@ -162,7 +162,7 @@ ifcvf_vfio_setup(struct ifcvf_internal *internal)
if (rte_pci_map_device(dev))
goto err;
- internal->vfio_dev_fd = dev->intr_handle.vfio_dev_fd;
+ internal->vfio_dev_fd = rte_intr_dev_fd_get(dev->intr_handle);
for (i = 0; i < RTE_MIN(PCI_MAX_RESOURCE, IFCVF_PCI_MAX_RESOURCE);
i++) {
@@ -365,7 +365,8 @@ vdpa_enable_vfio_intr(struct ifcvf_internal *internal, bool m_rx)
irq_set->index = VFIO_PCI_MSIX_IRQ_INDEX;
irq_set->start = 0;
fd_ptr = (int *)&irq_set->data;
- fd_ptr[RTE_INTR_VEC_ZERO_OFFSET] = internal->pdev->intr_handle.fd;
+ fd_ptr[RTE_INTR_VEC_ZERO_OFFSET] =
+ rte_intr_fd_get(internal->pdev->intr_handle);
for (i = 0; i < nr_vring; i++)
internal->intr_fd[i] = -1;
diff --git a/drivers/vdpa/mlx5/mlx5_vdpa.c b/drivers/vdpa/mlx5/mlx5_vdpa.c
index 6d17d7a6f3..0f6d180ae2 100644
--- a/drivers/vdpa/mlx5/mlx5_vdpa.c
+++ b/drivers/vdpa/mlx5/mlx5_vdpa.c
@@ -698,6 +698,11 @@ mlx5_vdpa_dev_probe(struct rte_device *dev)
DRV_LOG(ERR, "Failed to allocate VAR %u.", errno);
goto error;
}
+ priv->err_intr_handle = rte_intr_instance_alloc();
+ if (!priv->err_intr_handle) {
+ DRV_LOG(ERR, "Fail to allocate intr_handle");
+ goto error;
+ }
priv->vdev = rte_vdpa_register_device(dev, &mlx5_vdpa_ops);
if (priv->vdev == NULL) {
DRV_LOG(ERR, "Failed to register vDPA device.");
@@ -716,6 +721,8 @@ mlx5_vdpa_dev_probe(struct rte_device *dev)
if (priv) {
if (priv->var)
mlx5_glue->dv_free_var(priv->var);
+ if (priv->err_intr_handle)
+ rte_intr_instance_free(priv->err_intr_handle);
rte_free(priv);
}
if (ctx)
@@ -750,6 +757,8 @@ mlx5_vdpa_dev_remove(struct rte_device *dev)
rte_vdpa_unregister_device(priv->vdev);
mlx5_glue->close_device(priv->ctx);
pthread_mutex_destroy(&priv->vq_config_lock);
+ if (priv->err_intr_handle)
+ rte_intr_instance_free(priv->err_intr_handle);
rte_free(priv);
}
return 0;
diff --git a/drivers/vdpa/mlx5/mlx5_vdpa.h b/drivers/vdpa/mlx5/mlx5_vdpa.h
index a27f3fdadb..0c51376dd9 100644
--- a/drivers/vdpa/mlx5/mlx5_vdpa.h
+++ b/drivers/vdpa/mlx5/mlx5_vdpa.h
@@ -89,7 +89,7 @@ struct mlx5_vdpa_virtq {
void *buf;
uint32_t size;
} umems[3];
- struct rte_intr_handle intr_handle;
+ struct rte_intr_handle *intr_handle;
uint64_t err_time[3]; /* RDTSC time of recent errors. */
uint32_t n_retry;
struct mlx5_devx_virtio_q_couners_attr reset;
@@ -139,7 +139,7 @@ struct mlx5_vdpa_priv {
struct mlx5dv_devx_event_channel *eventc;
struct mlx5dv_devx_event_channel *err_chnl;
struct mlx5dv_devx_uar *uar;
- struct rte_intr_handle err_intr_handle;
+ struct rte_intr_handle *err_intr_handle;
struct mlx5_devx_obj *td;
struct mlx5_devx_obj *tiss[16]; /* TIS list for each LAG port. */
uint16_t nr_virtqs;
diff --git a/drivers/vdpa/mlx5/mlx5_vdpa_event.c b/drivers/vdpa/mlx5/mlx5_vdpa_event.c
index bb6722839a..5ec04a875b 100644
--- a/drivers/vdpa/mlx5/mlx5_vdpa_event.c
+++ b/drivers/vdpa/mlx5/mlx5_vdpa_event.c
@@ -410,12 +410,18 @@ mlx5_vdpa_err_event_setup(struct mlx5_vdpa_priv *priv)
DRV_LOG(ERR, "Failed to change device event channel FD.");
goto error;
}
- priv->err_intr_handle.fd = priv->err_chnl->fd;
- priv->err_intr_handle.type = RTE_INTR_HANDLE_EXT;
- if (rte_intr_callback_register(&priv->err_intr_handle,
+
+ if (rte_intr_fd_set(priv->err_intr_handle, priv->err_chnl->fd))
+ goto error;
+
+ if (rte_intr_type_set(priv->err_intr_handle,
+ RTE_INTR_HANDLE_EXT))
+ goto error;
+
+ if (rte_intr_callback_register(priv->err_intr_handle,
mlx5_vdpa_err_interrupt_handler,
priv)) {
- priv->err_intr_handle.fd = 0;
+ rte_intr_fd_set(priv->err_intr_handle, 0);
DRV_LOG(ERR, "Failed to register error interrupt for device %d.",
priv->vid);
goto error;
@@ -435,20 +441,20 @@ mlx5_vdpa_err_event_unset(struct mlx5_vdpa_priv *priv)
int retries = MLX5_VDPA_INTR_RETRIES;
int ret = -EAGAIN;
- if (!priv->err_intr_handle.fd)
+ if (!rte_intr_fd_get(priv->err_intr_handle))
return;
while (retries-- && ret == -EAGAIN) {
- ret = rte_intr_callback_unregister(&priv->err_intr_handle,
+ ret = rte_intr_callback_unregister(priv->err_intr_handle,
mlx5_vdpa_err_interrupt_handler,
priv);
if (ret == -EAGAIN) {
DRV_LOG(DEBUG, "Try again to unregister fd %d "
"of error interrupt, retries = %d.",
- priv->err_intr_handle.fd, retries);
+ rte_intr_fd_get(priv->err_intr_handle),
+ retries);
rte_pause();
}
}
- memset(&priv->err_intr_handle, 0, sizeof(priv->err_intr_handle));
if (priv->err_chnl) {
#ifdef HAVE_IBV_DEVX_EVENT
union {
diff --git a/drivers/vdpa/mlx5/mlx5_vdpa_virtq.c b/drivers/vdpa/mlx5/mlx5_vdpa_virtq.c
index f530646058..da9e09f22c 100644
--- a/drivers/vdpa/mlx5/mlx5_vdpa_virtq.c
+++ b/drivers/vdpa/mlx5/mlx5_vdpa_virtq.c
@@ -24,7 +24,8 @@ mlx5_vdpa_virtq_handler(void *cb_arg)
int nbytes;
do {
- nbytes = read(virtq->intr_handle.fd, &buf, 8);
+ nbytes = read(rte_intr_fd_get(virtq->intr_handle), &buf,
+ 8);
if (nbytes < 0) {
if (errno == EINTR ||
errno == EWOULDBLOCK ||
@@ -57,21 +58,24 @@ mlx5_vdpa_virtq_unset(struct mlx5_vdpa_virtq *virtq)
int retries = MLX5_VDPA_INTR_RETRIES;
int ret = -EAGAIN;
- if (virtq->intr_handle.fd != -1) {
+ if (rte_intr_fd_get(virtq->intr_handle) != -1) {
while (retries-- && ret == -EAGAIN) {
- ret = rte_intr_callback_unregister(&virtq->intr_handle,
+ ret = rte_intr_callback_unregister(virtq->intr_handle,
mlx5_vdpa_virtq_handler,
virtq);
if (ret == -EAGAIN) {
DRV_LOG(DEBUG, "Try again to unregister fd %d "
- "of virtq %d interrupt, retries = %d.",
- virtq->intr_handle.fd,
- (int)virtq->index, retries);
+ "of virtq %d interrupt, retries = %d.",
+ rte_intr_fd_get(virtq->intr_handle),
+ (int)virtq->index, retries);
+
usleep(MLX5_VDPA_INTR_RETRIES_USEC);
}
}
- virtq->intr_handle.fd = -1;
+ rte_intr_fd_set(virtq->intr_handle, -1);
}
+ if (virtq->intr_handle)
+ rte_intr_instance_free(virtq->intr_handle);
if (virtq->virtq) {
ret = mlx5_vdpa_virtq_stop(virtq->priv, virtq->index);
if (ret)
@@ -336,21 +340,32 @@ mlx5_vdpa_virtq_setup(struct mlx5_vdpa_priv *priv, int index)
virtq->priv = priv;
rte_write32(virtq->index, priv->virtq_db_addr);
/* Setup doorbell mapping. */
- virtq->intr_handle.fd = vq.kickfd;
- if (virtq->intr_handle.fd == -1) {
+ virtq->intr_handle = rte_intr_instance_alloc();
+ if (!virtq->intr_handle) {
+ DRV_LOG(ERR, "Fail to allocate intr_handle");
+ goto error;
+ }
+
+ if (rte_intr_fd_set(virtq->intr_handle, vq.kickfd))
+ goto error;
+
+ if (rte_intr_fd_get(virtq->intr_handle) == -1) {
DRV_LOG(WARNING, "Virtq %d kickfd is invalid.", index);
} else {
- virtq->intr_handle.type = RTE_INTR_HANDLE_EXT;
- if (rte_intr_callback_register(&virtq->intr_handle,
+ if (rte_intr_type_set(virtq->intr_handle,
+ RTE_INTR_HANDLE_EXT))
+ goto error;
+ if (rte_intr_callback_register(virtq->intr_handle,
mlx5_vdpa_virtq_handler,
virtq)) {
- virtq->intr_handle.fd = -1;
+ rte_intr_fd_set(virtq->intr_handle, -1);
DRV_LOG(ERR, "Failed to register virtq %d interrupt.",
index);
goto error;
} else {
DRV_LOG(DEBUG, "Register fd %d interrupt for virtq %d.",
- virtq->intr_handle.fd, index);
+ rte_intr_fd_get(virtq->intr_handle),
+ index);
}
}
/* Subscribe virtq error event. */
@@ -501,7 +516,8 @@ mlx5_vdpa_virtq_is_modified(struct mlx5_vdpa_priv *priv,
if (ret)
return -1;
- if (vq.size != virtq->vq_size || vq.kickfd != virtq->intr_handle.fd)
+ if (vq.size != virtq->vq_size || vq.kickfd !=
+ rte_intr_fd_get(virtq->intr_handle))
return 1;
if (virtq->eqp.cq.cq_obj.cq) {
if (vq.callfd != virtq->eqp.cq.callfd)
diff --git a/lib/bbdev/rte_bbdev.c b/lib/bbdev/rte_bbdev.c
index defddcfc28..2c6fa65020 100644
--- a/lib/bbdev/rte_bbdev.c
+++ b/lib/bbdev/rte_bbdev.c
@@ -1094,7 +1094,7 @@ rte_bbdev_queue_intr_ctl(uint16_t dev_id, uint16_t queue_id, int epfd, int op,
VALID_QUEUE_OR_RET_ERR(queue_id, dev);
intr_handle = dev->intr_handle;
- if (!intr_handle || !intr_handle->intr_vec) {
+ if (!intr_handle) {
rte_bbdev_log(ERR, "Device %u intr handle unset\n", dev_id);
return -ENOTSUP;
}
@@ -1105,7 +1105,7 @@ rte_bbdev_queue_intr_ctl(uint16_t dev_id, uint16_t queue_id, int epfd, int op,
return -ENOTSUP;
}
- vec = intr_handle->intr_vec[queue_id];
+ vec = rte_intr_vec_list_index_get(intr_handle, queue_id);
ret = rte_intr_rx_ctl(intr_handle, epfd, op, vec, data);
if (ret && (ret != -EEXIST)) {
rte_bbdev_log(ERR,
diff --git a/lib/eal/freebsd/eal_alarm.c b/lib/eal/freebsd/eal_alarm.c
index c38b2e04f8..cd971036cd 100644
--- a/lib/eal/freebsd/eal_alarm.c
+++ b/lib/eal/freebsd/eal_alarm.c
@@ -32,7 +32,7 @@
struct alarm_entry {
LIST_ENTRY(alarm_entry) next;
- struct rte_intr_handle handle;
+ struct rte_intr_handle *handle;
struct timespec time;
rte_eal_alarm_callback cb_fn;
void *cb_arg;
@@ -43,22 +43,43 @@ struct alarm_entry {
static LIST_HEAD(alarm_list, alarm_entry) alarm_list = LIST_HEAD_INITIALIZER();
static rte_spinlock_t alarm_list_lk = RTE_SPINLOCK_INITIALIZER;
-static struct rte_intr_handle intr_handle = {.fd = -1 };
+static struct rte_intr_handle *intr_handle;
static void eal_alarm_callback(void *arg);
int
rte_eal_alarm_init(void)
{
- intr_handle.type = RTE_INTR_HANDLE_ALARM;
+ int fd;
+
+ intr_handle = rte_intr_instance_alloc();
+ if (!intr_handle) {
+ RTE_LOG(ERR, EAL, "Fail to allocate intr_handle\n");
+ goto error;
+ }
+
+ if (rte_intr_type_set(intr_handle, RTE_INTR_HANDLE_ALARM))
+ goto error;
+
+ if (rte_intr_fd_set(intr_handle, -1))
+ goto error;
/* on FreeBSD, timers don't use fd's, and their identifiers are stored
* in separate namespace from fd's, so using any value is OK. however,
* EAL interrupts handler expects fd's to be unique, so use an actual fd
* to guarantee unique timer identifier.
*/
- intr_handle.fd = open("/dev/zero", O_RDONLY);
+ fd = open("/dev/zero", O_RDONLY);
+
+ if (rte_intr_fd_set(intr_handle, fd))
+ goto error;
return 0;
+error:
+ if (intr_handle) {
+ rte_intr_fd_set(intr_handle, -1);
+ rte_intr_instance_free(intr_handle);
+ }
+ return -1;
}
static inline int
@@ -118,7 +139,7 @@ unregister_current_callback(void)
ap = LIST_FIRST(&alarm_list);
do {
- ret = rte_intr_callback_unregister(&intr_handle,
+ ret = rte_intr_callback_unregister(intr_handle,
eal_alarm_callback, &ap->time);
} while (ret == -EAGAIN);
}
@@ -136,7 +157,7 @@ register_first_callback(void)
ap = LIST_FIRST(&alarm_list);
/* register a new callback */
- ret = rte_intr_callback_register(&intr_handle,
+ ret = rte_intr_callback_register(intr_handle,
eal_alarm_callback, &ap->time);
}
return ret;
@@ -164,6 +185,8 @@ eal_alarm_callback(void *arg __rte_unused)
rte_spinlock_lock(&alarm_list_lk);
LIST_REMOVE(ap, next);
+ if (ap->handle)
+ rte_intr_instance_free(ap->handle);
free(ap);
ap = LIST_FIRST(&alarm_list);
@@ -202,6 +225,10 @@ rte_eal_alarm_set(uint64_t us, rte_eal_alarm_callback cb_fn, void *cb_arg)
new_alarm->time.tv_nsec = (now.tv_nsec + ns) % NS_PER_S;
new_alarm->time.tv_sec = now.tv_sec + ((now.tv_nsec + ns) / NS_PER_S);
+ new_alarm->handle = rte_intr_instance_alloc();
+ if (new_alarm->handle == NULL)
+ return -ENOMEM;
+
rte_spinlock_lock(&alarm_list_lk);
if (LIST_EMPTY(&alarm_list))
@@ -256,6 +283,9 @@ rte_eal_alarm_cancel(rte_eal_alarm_callback cb_fn, void *cb_arg)
if (ap->executing == 0) {
LIST_REMOVE(ap, next);
free(ap);
+ if (ap->handle)
+ rte_intr_instance_free(
+ ap->handle);
count++;
} else {
/* If calling from other context, mark that
@@ -282,6 +312,9 @@ rte_eal_alarm_cancel(rte_eal_alarm_callback cb_fn, void *cb_arg)
cb_arg == ap->cb_arg)) {
if (ap->executing == 0) {
LIST_REMOVE(ap, next);
+ if (ap->handle)
+ rte_intr_instance_free(
+ ap->handle);
free(ap);
count++;
ap = ap_prev;
diff --git a/lib/eal/include/rte_eal_trace.h b/lib/eal/include/rte_eal_trace.h
index 495ae1ee1d..792872dffd 100644
--- a/lib/eal/include/rte_eal_trace.h
+++ b/lib/eal/include/rte_eal_trace.h
@@ -149,11 +149,7 @@ RTE_TRACE_POINT(
RTE_TRACE_POINT_ARGS(const struct rte_intr_handle *handle,
rte_intr_callback_fn cb, void *cb_arg, int rc),
rte_trace_point_emit_int(rc);
- rte_trace_point_emit_int(handle->vfio_dev_fd);
- rte_trace_point_emit_int(handle->fd);
- rte_trace_point_emit_int(handle->type);
- rte_trace_point_emit_u32(handle->max_intr);
- rte_trace_point_emit_u32(handle->nb_efd);
+ rte_trace_point_emit_ptr(handle);
rte_trace_point_emit_ptr(cb);
rte_trace_point_emit_ptr(cb_arg);
)
@@ -162,11 +158,7 @@ RTE_TRACE_POINT(
RTE_TRACE_POINT_ARGS(const struct rte_intr_handle *handle,
rte_intr_callback_fn cb, void *cb_arg, int rc),
rte_trace_point_emit_int(rc);
- rte_trace_point_emit_int(handle->vfio_dev_fd);
- rte_trace_point_emit_int(handle->fd);
- rte_trace_point_emit_int(handle->type);
- rte_trace_point_emit_u32(handle->max_intr);
- rte_trace_point_emit_u32(handle->nb_efd);
+ rte_trace_point_emit_ptr(handle);
rte_trace_point_emit_ptr(cb);
rte_trace_point_emit_ptr(cb_arg);
)
@@ -174,21 +166,13 @@ RTE_TRACE_POINT(
rte_eal_trace_intr_enable,
RTE_TRACE_POINT_ARGS(const struct rte_intr_handle *handle, int rc),
rte_trace_point_emit_int(rc);
- rte_trace_point_emit_int(handle->vfio_dev_fd);
- rte_trace_point_emit_int(handle->fd);
- rte_trace_point_emit_int(handle->type);
- rte_trace_point_emit_u32(handle->max_intr);
- rte_trace_point_emit_u32(handle->nb_efd);
+ rte_trace_point_emit_ptr(handle);
)
RTE_TRACE_POINT(
rte_eal_trace_intr_disable,
RTE_TRACE_POINT_ARGS(const struct rte_intr_handle *handle, int rc),
rte_trace_point_emit_int(rc);
- rte_trace_point_emit_int(handle->vfio_dev_fd);
- rte_trace_point_emit_int(handle->fd);
- rte_trace_point_emit_int(handle->type);
- rte_trace_point_emit_u32(handle->max_intr);
- rte_trace_point_emit_u32(handle->nb_efd);
+ rte_trace_point_emit_ptr(handle);
)
/* Memory */
diff --git a/lib/eal/linux/eal_alarm.c b/lib/eal/linux/eal_alarm.c
index 3252c6fa59..cf8e2f2066 100644
--- a/lib/eal/linux/eal_alarm.c
+++ b/lib/eal/linux/eal_alarm.c
@@ -54,22 +54,35 @@ struct alarm_entry {
static LIST_HEAD(alarm_list, alarm_entry) alarm_list = LIST_HEAD_INITIALIZER();
static rte_spinlock_t alarm_list_lk = RTE_SPINLOCK_INITIALIZER;
-static struct rte_intr_handle intr_handle = {.fd = -1 };
+static struct rte_intr_handle *intr_handle;
static int handler_registered = 0;
static void eal_alarm_callback(void *arg);
int
rte_eal_alarm_init(void)
{
- intr_handle.type = RTE_INTR_HANDLE_ALARM;
+
+ intr_handle = rte_intr_instance_alloc();
+ if (!intr_handle) {
+ RTE_LOG(ERR, EAL, "Fail to allocate intr_handle\n");
+ goto error;
+ }
+
+ rte_intr_type_set(intr_handle, RTE_INTR_HANDLE_ALARM);
+
/* create a timerfd file descriptor */
- intr_handle.fd = timerfd_create(CLOCK_MONOTONIC, TFD_NONBLOCK);
- if (intr_handle.fd == -1)
+ if (rte_intr_fd_set(intr_handle,
+ timerfd_create(CLOCK_MONOTONIC, TFD_NONBLOCK)))
goto error;
+ if (rte_intr_fd_get(intr_handle) == -1)
+ goto error;
return 0;
error:
+ if (intr_handle)
+ rte_intr_instance_free(intr_handle);
+
rte_errno = errno;
return -1;
}
@@ -109,7 +122,8 @@ eal_alarm_callback(void *arg __rte_unused)
atime.it_value.tv_sec -= now.tv_sec;
atime.it_value.tv_nsec -= now.tv_nsec;
- timerfd_settime(intr_handle.fd, 0, &atime, NULL);
+ timerfd_settime(rte_intr_fd_get(intr_handle), 0, &atime,
+ NULL);
}
rte_spinlock_unlock(&alarm_list_lk);
}
@@ -140,7 +154,7 @@ rte_eal_alarm_set(uint64_t us, rte_eal_alarm_callback cb_fn, void *cb_arg)
rte_spinlock_lock(&alarm_list_lk);
if (!handler_registered) {
/* registration can fail, callback can be registered later */
- if (rte_intr_callback_register(&intr_handle,
+ if (rte_intr_callback_register(intr_handle,
eal_alarm_callback, NULL) == 0)
handler_registered = 1;
}
@@ -170,7 +184,8 @@ rte_eal_alarm_set(uint64_t us, rte_eal_alarm_callback cb_fn, void *cb_arg)
.tv_nsec = (us % US_PER_S) * NS_PER_US,
},
};
- ret |= timerfd_settime(intr_handle.fd, 0, &alarm_time, NULL);
+ ret |= timerfd_settime(rte_intr_fd_get(intr_handle), 0,
+ &alarm_time, NULL);
}
rte_spinlock_unlock(&alarm_list_lk);
diff --git a/lib/eal/linux/eal_dev.c b/lib/eal/linux/eal_dev.c
index 3b905e18f5..95931d7bec 100644
--- a/lib/eal/linux/eal_dev.c
+++ b/lib/eal/linux/eal_dev.c
@@ -23,10 +23,7 @@
#include "eal_private.h"
-static struct rte_intr_handle intr_handle = {
- .type = RTE_INTR_HANDLE_DEV_EVENT,
- .fd = -1,
-};
+static struct rte_intr_handle *intr_handle;
static rte_rwlock_t monitor_lock = RTE_RWLOCK_INITIALIZER;
static uint32_t monitor_refcount;
static bool hotplug_handle;
@@ -109,12 +106,11 @@ static int
dev_uev_socket_fd_create(void)
{
struct sockaddr_nl addr;
- int ret;
+ int ret, fd;
- intr_handle.fd = socket(PF_NETLINK, SOCK_RAW | SOCK_CLOEXEC |
- SOCK_NONBLOCK,
- NETLINK_KOBJECT_UEVENT);
- if (intr_handle.fd < 0) {
+ fd = socket(PF_NETLINK, SOCK_RAW | SOCK_CLOEXEC | SOCK_NONBLOCK,
+ NETLINK_KOBJECT_UEVENT);
+ if (fd < 0) {
RTE_LOG(ERR, EAL, "create uevent fd failed.\n");
return -1;
}
@@ -124,16 +120,19 @@ dev_uev_socket_fd_create(void)
addr.nl_pid = 0;
addr.nl_groups = 0xffffffff;
- ret = bind(intr_handle.fd, (struct sockaddr *) &addr, sizeof(addr));
+ ret = bind(fd, (struct sockaddr *) &addr, sizeof(addr));
if (ret < 0) {
RTE_LOG(ERR, EAL, "Failed to bind uevent socket.\n");
goto err;
}
+ if (rte_intr_fd_set(intr_handle, fd))
+ goto err;
+
return 0;
err:
- close(intr_handle.fd);
- intr_handle.fd = -1;
+ close(fd);
+ fd = -1;
return ret;
}
@@ -217,9 +216,9 @@ dev_uev_parse(const char *buf, struct rte_dev_event *event, int length)
static void
dev_delayed_unregister(void *param)
{
- rte_intr_callback_unregister(&intr_handle, dev_uev_handler, param);
- close(intr_handle.fd);
- intr_handle.fd = -1;
+ rte_intr_callback_unregister(intr_handle, dev_uev_handler, param);
+ close(rte_intr_fd_get(intr_handle));
+ rte_intr_fd_set(intr_handle, -1);
}
static void
@@ -235,7 +234,8 @@ dev_uev_handler(__rte_unused void *param)
memset(&uevent, 0, sizeof(struct rte_dev_event));
memset(buf, 0, EAL_UEV_MSG_LEN);
- ret = recv(intr_handle.fd, buf, EAL_UEV_MSG_LEN, MSG_DONTWAIT);
+ ret = recv(rte_intr_fd_get(intr_handle), buf, EAL_UEV_MSG_LEN,
+ MSG_DONTWAIT);
if (ret < 0 && errno == EAGAIN)
return;
else if (ret <= 0) {
@@ -311,24 +311,38 @@ rte_dev_event_monitor_start(void)
goto exit;
}
+ intr_handle = rte_intr_instance_alloc();
+ if (!intr_handle) {
+ RTE_LOG(ERR, EAL, "Fail to allocate intr_handle\n");
+ goto exit;
+ }
+
+ if (rte_intr_type_set(intr_handle, RTE_INTR_HANDLE_DEV_EVENT))
+ goto exit;
+
+ if (rte_intr_fd_set(intr_handle, -1))
+ goto exit;
+
ret = dev_uev_socket_fd_create();
if (ret) {
RTE_LOG(ERR, EAL, "error create device event fd.\n");
goto exit;
}
- ret = rte_intr_callback_register(&intr_handle, dev_uev_handler, NULL);
+ ret = rte_intr_callback_register(intr_handle, dev_uev_handler, NULL);
if (ret) {
- RTE_LOG(ERR, EAL, "fail to register uevent callback.\n");
- close(intr_handle.fd);
- intr_handle.fd = -1;
+ close(rte_intr_fd_get(intr_handle));
goto exit;
}
monitor_refcount++;
exit:
+ if (intr_handle) {
+ rte_intr_fd_set(intr_handle, -1);
+ rte_intr_instance_free(intr_handle);
+ }
rte_rwlock_write_unlock(&monitor_lock);
return ret;
}
@@ -350,15 +364,18 @@ rte_dev_event_monitor_stop(void)
goto exit;
}
- ret = rte_intr_callback_unregister(&intr_handle, dev_uev_handler,
+ ret = rte_intr_callback_unregister(intr_handle, dev_uev_handler,
(void *)-1);
if (ret < 0) {
RTE_LOG(ERR, EAL, "fail to unregister uevent callback.\n");
goto exit;
}
- close(intr_handle.fd);
- intr_handle.fd = -1;
+ close(rte_intr_fd_get(intr_handle));
+ rte_intr_fd_set(intr_handle, -1);
+
+ if (intr_handle)
+ rte_intr_instance_free(intr_handle);
monitor_refcount--;
diff --git a/lib/ethdev/ethdev_pci.h b/lib/ethdev/ethdev_pci.h
index 8edca82ce8..eff072ac16 100644
--- a/lib/ethdev/ethdev_pci.h
+++ b/lib/ethdev/ethdev_pci.h
@@ -32,7 +32,7 @@ rte_eth_copy_pci_info(struct rte_eth_dev *eth_dev,
return;
}
- eth_dev->intr_handle = &pci_dev->intr_handle;
+ eth_dev->intr_handle = pci_dev->intr_handle;
if (rte_eal_process_type() == RTE_PROC_PRIMARY) {
eth_dev->data->dev_flags = 0;
diff --git a/lib/ethdev/rte_ethdev.c b/lib/ethdev/rte_ethdev.c
index 028907bc4b..c7b6162c4f 100644
--- a/lib/ethdev/rte_ethdev.c
+++ b/lib/ethdev/rte_ethdev.c
@@ -4696,13 +4696,13 @@ rte_eth_dev_rx_intr_ctl(uint16_t port_id, int epfd, int op, void *data)
}
intr_handle = dev->intr_handle;
- if (!intr_handle->intr_vec) {
+ if (rte_intr_vec_list_index_get(intr_handle, 0) < 0) {
RTE_ETHDEV_LOG(ERR, "RX Intr vector unset\n");
return -EPERM;
}
for (qid = 0; qid < dev->data->nb_rx_queues; qid++) {
- vec = intr_handle->intr_vec[qid];
+ vec = rte_intr_vec_list_index_get(intr_handle, qid);
rc = rte_intr_rx_ctl(intr_handle, epfd, op, vec, data);
if (rc && rc != -EEXIST) {
RTE_ETHDEV_LOG(ERR,
@@ -4737,15 +4737,15 @@ rte_eth_dev_rx_intr_ctl_q_get_fd(uint16_t port_id, uint16_t queue_id)
}
intr_handle = dev->intr_handle;
- if (!intr_handle->intr_vec) {
+ if (rte_intr_vec_list_index_get(intr_handle, 0) < 0) {
RTE_ETHDEV_LOG(ERR, "RX Intr vector unset\n");
return -1;
}
- vec = intr_handle->intr_vec[queue_id];
+ vec = rte_intr_vec_list_index_get(intr_handle, queue_id);
efd_idx = (vec >= RTE_INTR_VEC_RXTX_OFFSET) ?
(vec - RTE_INTR_VEC_RXTX_OFFSET) : vec;
- fd = intr_handle->efds[efd_idx];
+ fd = rte_intr_efds_index_get(intr_handle, efd_idx);
return fd;
}
@@ -4923,12 +4923,12 @@ rte_eth_dev_rx_intr_ctl_q(uint16_t port_id, uint16_t queue_id,
}
intr_handle = dev->intr_handle;
- if (!intr_handle->intr_vec) {
+ if (rte_intr_vec_list_index_get(intr_handle, 0) < 0) {
RTE_ETHDEV_LOG(ERR, "RX Intr vector unset\n");
return -EPERM;
}
- vec = intr_handle->intr_vec[queue_id];
+ vec = rte_intr_vec_list_index_get(intr_handle, queue_id);
rc = rte_intr_rx_ctl(intr_handle, epfd, op, vec, data);
if (rc && rc != -EEXIST) {
RTE_ETHDEV_LOG(ERR,
--
2.18.0
^ permalink raw reply [flat|nested] 152+ messages in thread
* [dpdk-dev] [PATCH v4 6/7] eal/interrupts: make interrupt handle structure opaque
2021-10-19 18:35 ` [dpdk-dev] [PATCH v4 0/7] make rte_intr_handle internal Harman Kalra
` (4 preceding siblings ...)
2021-10-19 18:35 ` [dpdk-dev] [PATCH v4 5/7] drivers: remove direct access to interrupt handle Harman Kalra
@ 2021-10-19 18:35 ` Harman Kalra
2021-10-19 18:35 ` [dpdk-dev] [PATCH v4 7/7] eal/alarm: introduce alarm fini routine Harman Kalra
6 siblings, 0 replies; 152+ messages in thread
From: Harman Kalra @ 2021-10-19 18:35 UTC (permalink / raw)
To: dev, Anatoly Burakov, Harman Kalra
Cc: david.marchand, dmitry.kozliuk, mdr, thomas
Moving the interrupt handle structure definition inside the c file
to make its fields totally opaque to the outside world.
Dynamically allocating the efds and elist arrays of the intr_handle
structure, based on a size provided by the user, e.g. the number of
MSI-X interrupts supported by a PCI device.
Signed-off-by: Harman Kalra <hkalra@marvell.com>
---
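Note (illustration only, not part of the patch): with the arrays now sized
dynamically, a PCI driver is expected to grow the event list to match the
device's MSI-X count at probe time and touch entries only through the
accessors. A minimal sketch, where msix_count and the error handling are
assumptions for the example:

	/* Assuming dev->intr_handle was obtained from
	 * rte_intr_instance_alloc() and msix_count was read from the device.
	 */
	if (rte_intr_event_list_update(dev->intr_handle, msix_count)) {
		RTE_LOG(ERR, EAL, "Failed to resize event list to %d\n",
			msix_count);
		return -1;
	}
	/* Entries 0 .. msix_count - 1 are now valid and can be filled in
	 * through the accessors, e.g.
	 * rte_intr_efds_index_set(dev->intr_handle, vec, eventfd).
	 */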
drivers/bus/pci/linux/pci_vfio.c | 7 +
lib/eal/common/eal_common_interrupts.c | 197 +++++++++++++++++++++++--
lib/eal/include/meson.build | 1 -
lib/eal/include/rte_eal_interrupts.h | 72 ---------
lib/eal/include/rte_interrupts.h | 30 +++-
5 files changed, 224 insertions(+), 83 deletions(-)
delete mode 100644 lib/eal/include/rte_eal_interrupts.h
diff --git a/drivers/bus/pci/linux/pci_vfio.c b/drivers/bus/pci/linux/pci_vfio.c
index c8da3e2fe8..f274aa4aab 100644
--- a/drivers/bus/pci/linux/pci_vfio.c
+++ b/drivers/bus/pci/linux/pci_vfio.c
@@ -266,6 +266,13 @@ pci_vfio_setup_interrupts(struct rte_pci_device *dev, int vfio_dev_fd)
return -1;
}
+ /* Reallocate the efds and elist fields of intr_handle based
+ * on PCI device MSIX size.
+ */
+ if (rte_intr_event_list_update(dev->intr_handle,
+ irq.count))
+ return -1;
+
/* if this vector cannot be used with eventfd, fail if we explicitly
* specified interrupt type, otherwise continue */
if ((irq.flags & VFIO_IRQ_INFO_EVENTFD) == 0) {
diff --git a/lib/eal/common/eal_common_interrupts.c b/lib/eal/common/eal_common_interrupts.c
index 434ad63a64..388d59ca14 100644
--- a/lib/eal/common/eal_common_interrupts.c
+++ b/lib/eal/common/eal_common_interrupts.c
@@ -21,6 +21,29 @@
} \
} while (0)
+struct rte_intr_handle {
+ RTE_STD_C11
+ union {
+ struct {
+ /** VFIO/UIO cfg device file descriptor */
+ int dev_fd;
+ int fd; /**< interrupt event file descriptor */
+ };
+ void *windows_handle; /**< device driver handle (Windows) */
+ };
+ bool is_rte_memory;
+ enum rte_intr_handle_type type; /**< handle type */
+ uint32_t max_intr; /**< max interrupt requested */
+ uint32_t nb_efd; /**< number of available efd(event fd) */
+ uint8_t efd_counter_size; /**< size of efd counter, used for vdev */
+ uint16_t nb_intr;
+ /**< Max vector count, default RTE_MAX_RXTX_INTR_VEC_ID */
+ int *efds; /**< intr vectors/efds mapping */
+ struct rte_epoll_event *elist; /**< intr vector epoll event */
+ uint16_t vec_list_size;
+ int *intr_vec; /**< intr vector number array */
+};
+
struct rte_intr_handle *rte_intr_instance_alloc(void)
{
struct rte_intr_handle *intr_handle;
@@ -39,16 +62,52 @@ struct rte_intr_handle *rte_intr_instance_alloc(void)
return NULL;
}
+ if (is_rte_memory)
+ intr_handle->efds = rte_zmalloc(NULL,
+ RTE_MAX_RXTX_INTR_VEC_ID *
+ sizeof(uint32_t), 0);
+ else
+ intr_handle->efds = calloc(RTE_MAX_RXTX_INTR_VEC_ID,
+ sizeof(uint32_t));
+ if (!intr_handle->efds) {
+ RTE_LOG(ERR, EAL, "Fail to allocate event fd list\n");
+ rte_errno = ENOMEM;
+ goto fail;
+ }
+
+ if (is_rte_memory)
+ intr_handle->elist =
+ rte_zmalloc(NULL, RTE_MAX_RXTX_INTR_VEC_ID *
+ sizeof(struct rte_epoll_event), 0);
+ else
+ intr_handle->elist = calloc(RTE_MAX_RXTX_INTR_VEC_ID,
+ sizeof(struct rte_epoll_event));
+ if (!intr_handle->elist) {
+ RTE_LOG(ERR, EAL, "fail to allocate event fd list\n");
+ rte_errno = ENOMEM;
+ goto fail;
+ }
+
intr_handle->nb_intr = RTE_MAX_RXTX_INTR_VEC_ID;
intr_handle->is_rte_memory = is_rte_memory;
return intr_handle;
+fail:
+ if (intr_handle->is_rte_memory) {
+ rte_free(intr_handle->efds);
+ rte_free(intr_handle);
+ } else {
+ free(intr_handle->efds);
+ free(intr_handle);
+ }
+ return NULL;
}
int rte_intr_instance_copy(struct rte_intr_handle *intr_handle,
const struct rte_intr_handle *src)
{
- uint16_t nb_intr;
+ struct rte_epoll_event *tmp_elist;
+ int *tmp_efds;
CHECK_VALID_INTR_HANDLE(intr_handle);
@@ -59,17 +118,104 @@ int rte_intr_instance_copy(struct rte_intr_handle *intr_handle,
}
intr_handle->fd = src->fd;
- intr_handle->vfio_dev_fd = src->vfio_dev_fd;
+ intr_handle->dev_fd = src->dev_fd;
intr_handle->type = src->type;
+ intr_handle->is_rte_memory = src->is_rte_memory;
intr_handle->max_intr = src->max_intr;
intr_handle->nb_efd = src->nb_efd;
intr_handle->efd_counter_size = src->efd_counter_size;
- nb_intr = RTE_MIN(src->nb_intr, intr_handle->nb_intr);
- memcpy(intr_handle->efds, src->efds, nb_intr);
- memcpy(intr_handle->elist, src->elist, nb_intr);
+ /* Reallocting the interrupt handle resources based on source's
+ * nb_intr.
+ */
+ if (intr_handle->nb_intr != src->nb_intr) {
+ if (src->is_rte_memory)
+ tmp_efds = rte_realloc(intr_handle->efds, src->nb_intr *
+ sizeof(uint32_t), 0);
+ else
+ tmp_efds = realloc(intr_handle->efds, src->nb_intr *
+ sizeof(uint32_t));
+ if (tmp_efds == NULL) {
+ RTE_LOG(ERR, EAL, "Failed to realloc the efds list");
+ rte_errno = ENOMEM;
+ goto fail;
+ }
+
+ if (src->is_rte_memory)
+ tmp_elist = rte_realloc(intr_handle->elist,
+ src->nb_intr *
+ sizeof(struct rte_epoll_event),
+ 0);
+ else
+ tmp_elist = realloc(intr_handle->elist, src->nb_intr *
+ sizeof(struct rte_epoll_event));
+ if (tmp_elist == NULL) {
+ RTE_LOG(ERR, EAL, "Failed to realloc the event list");
+ rte_errno = ENOMEM;
+ goto up_efds;
+ }
+
+ intr_handle->efds = tmp_efds;
+ intr_handle->elist = tmp_elist;
+ intr_handle->nb_intr = src->nb_intr;
+ }
+
+ memcpy(intr_handle->efds, src->efds, src->nb_intr);
+ memcpy(intr_handle->elist, src->elist, src->nb_intr);
+
+ return 0;
+up_efds:
+ intr_handle->efds = tmp_efds;
+fail:
+ return -rte_errno;
+}
+
+int rte_intr_event_list_update(struct rte_intr_handle *intr_handle,
+ int size)
+{
+ struct rte_epoll_event *tmp_elist;
+ int *tmp_efds;
+
+ CHECK_VALID_INTR_HANDLE(intr_handle);
+
+ if (size == 0) {
+ RTE_LOG(ERR, EAL, "Size can't be zero\n");
+ rte_errno = EINVAL;
+ goto fail;
+ }
+
+ if (intr_handle->is_rte_memory)
+ tmp_efds = rte_realloc(intr_handle->efds, size *
+ sizeof(uint32_t), 0);
+ else
+ tmp_efds = realloc(intr_handle->efds, size *
+ sizeof(uint32_t));
+ if (tmp_efds == NULL) {
+ RTE_LOG(ERR, EAL, "Failed to realloc the efds list");
+ rte_errno = ENOMEM;
+ goto fail;
+ }
+
+ if (intr_handle->is_rte_memory)
+ tmp_elist = rte_realloc(intr_handle->elist, size *
+ sizeof(struct rte_epoll_event),
+ 0);
+ else
+ tmp_elist = realloc(intr_handle->elist, size *
+ sizeof(struct rte_epoll_event));
+ if (tmp_elist == NULL) {
+ RTE_LOG(ERR, EAL, "Failed to realloc the event list");
+ rte_errno = ENOMEM;
+ goto up_efds;
+ }
+
+ intr_handle->efds = tmp_efds;
+ intr_handle->elist = tmp_elist;
+ intr_handle->nb_intr = size;
return 0;
+up_efds:
+ intr_handle->efds = tmp_efds;
fail:
return -rte_errno;
}
@@ -77,10 +223,19 @@ int rte_intr_instance_copy(struct rte_intr_handle *intr_handle,
void rte_intr_instance_free(struct rte_intr_handle *intr_handle)
{
if (intr_handle) {
- if (intr_handle->is_rte_memory)
+ if (intr_handle->is_rte_memory) {
+ if (intr_handle) {
+ rte_free(intr_handle->efds);
+ rte_free(intr_handle->elist);
+ }
rte_free(intr_handle);
- else
+ } else {
+ if (intr_handle) {
+ free(intr_handle->efds);
+ free(intr_handle->elist);
+ }
free(intr_handle);
+ }
}
}
@@ -130,7 +285,7 @@ int rte_intr_dev_fd_set(struct rte_intr_handle *intr_handle, int fd)
{
CHECK_VALID_INTR_HANDLE(intr_handle);
- intr_handle->vfio_dev_fd = fd;
+ intr_handle->dev_fd = fd;
return 0;
fail:
@@ -141,7 +296,7 @@ int rte_intr_dev_fd_get(const struct rte_intr_handle *intr_handle)
{
CHECK_VALID_INTR_HANDLE(intr_handle);
- return intr_handle->vfio_dev_fd;
+ return intr_handle->dev_fd;
fail:
return -1;
}
@@ -231,6 +386,12 @@ int rte_intr_efds_index_get(const struct rte_intr_handle *intr_handle,
{
CHECK_VALID_INTR_HANDLE(intr_handle);
+ if (!intr_handle->efds) {
+ RTE_LOG(ERR, EAL, "Event fd list not allocated\n");
+ rte_errno = EFAULT;
+ goto fail;
+ }
+
if (index >= intr_handle->nb_intr) {
RTE_LOG(ERR, EAL, "Invalid size %d, max limit %d\n", index,
intr_handle->nb_intr);
@@ -248,6 +409,12 @@ int rte_intr_efds_index_set(struct rte_intr_handle *intr_handle,
{
CHECK_VALID_INTR_HANDLE(intr_handle);
+ if (!intr_handle->efds) {
+ RTE_LOG(ERR, EAL, "Event fd list not allocated\n");
+ rte_errno = EFAULT;
+ goto fail;
+ }
+
if (index >= intr_handle->nb_intr) {
RTE_LOG(ERR, EAL, "Invalid size %d, max limit %d\n", index,
intr_handle->nb_intr);
@@ -267,6 +434,12 @@ struct rte_epoll_event *rte_intr_elist_index_get(
{
CHECK_VALID_INTR_HANDLE(intr_handle);
+ if (!intr_handle->elist) {
+ RTE_LOG(ERR, EAL, "Event list not allocated\n");
+ rte_errno = ENOTSUP;
+ goto fail;
+ }
+
if (index >= intr_handle->nb_intr) {
RTE_LOG(ERR, EAL, "Invalid size %d, max limit %d\n", index,
intr_handle->nb_intr);
@@ -284,6 +457,12 @@ int rte_intr_elist_index_set(struct rte_intr_handle *intr_handle,
{
CHECK_VALID_INTR_HANDLE(intr_handle);
+ if (!intr_handle->elist) {
+ RTE_LOG(ERR, EAL, "Event list not allocated\n");
+ rte_errno = ENOTSUP;
+ goto fail;
+ }
+
if (index >= intr_handle->nb_intr) {
RTE_LOG(ERR, EAL, "Invalid size %d, max limit %d\n", index,
intr_handle->nb_intr);
diff --git a/lib/eal/include/meson.build b/lib/eal/include/meson.build
index 8e258607b8..86468d1a2b 100644
--- a/lib/eal/include/meson.build
+++ b/lib/eal/include/meson.build
@@ -49,7 +49,6 @@ headers += files(
'rte_version.h',
'rte_vfio.h',
)
-indirect_headers += files('rte_eal_interrupts.h')
# special case install the generic headers, since they go in a subdir
generic_headers = files(
diff --git a/lib/eal/include/rte_eal_interrupts.h b/lib/eal/include/rte_eal_interrupts.h
deleted file mode 100644
index cbec1dfd99..0000000000
--- a/lib/eal/include/rte_eal_interrupts.h
+++ /dev/null
@@ -1,72 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(c) 2010-2014 Intel Corporation
- */
-
-#ifndef _RTE_INTERRUPTS_H_
-#error "don't include this file directly, please include generic <rte_interrupts.h>"
-#endif
-
-/**
- * @file rte_eal_interrupts.h
- * @internal
- *
- * Contains function prototypes exposed by the EAL for interrupt handling by
- * drivers and other DPDK internal consumers.
- */
-
-#ifndef _RTE_EAL_INTERRUPTS_H_
-#define _RTE_EAL_INTERRUPTS_H_
-
-#define RTE_MAX_RXTX_INTR_VEC_ID 512
-#define RTE_INTR_VEC_ZERO_OFFSET 0
-#define RTE_INTR_VEC_RXTX_OFFSET 1
-
-/**
- * The interrupt source type, e.g. UIO, VFIO, ALARM etc.
- */
-enum rte_intr_handle_type {
- RTE_INTR_HANDLE_UNKNOWN = 0, /**< generic unknown handle */
- RTE_INTR_HANDLE_UIO, /**< uio device handle */
- RTE_INTR_HANDLE_UIO_INTX, /**< uio generic handle */
- RTE_INTR_HANDLE_VFIO_LEGACY, /**< vfio device handle (legacy) */
- RTE_INTR_HANDLE_VFIO_MSI, /**< vfio device handle (MSI) */
- RTE_INTR_HANDLE_VFIO_MSIX, /**< vfio device handle (MSIX) */
- RTE_INTR_HANDLE_ALARM, /**< alarm handle */
- RTE_INTR_HANDLE_EXT, /**< external handler */
- RTE_INTR_HANDLE_VDEV, /**< virtual device */
- RTE_INTR_HANDLE_DEV_EVENT, /**< device event handle */
- RTE_INTR_HANDLE_VFIO_REQ, /**< VFIO request handle */
- RTE_INTR_HANDLE_MAX /**< count of elements */
-};
-
-/** Handle for interrupts. */
-struct rte_intr_handle {
- RTE_STD_C11
- union {
- struct {
- RTE_STD_C11
- union {
- /** VFIO device file descriptor */
- int vfio_dev_fd;
- /** UIO cfg file desc for uio_pci_generic */
- int uio_cfg_fd;
- };
- int fd; /**< interrupt event file descriptor */
- };
- void *windows_handle; /**< device driver handle */
- };
- bool is_rte_memory;
- enum rte_intr_handle_type type; /**< handle type */
- uint32_t max_intr; /**< max interrupt requested */
- uint32_t nb_efd; /**< number of available efd(event fd) */
- uint8_t efd_counter_size; /**< size of efd counter, used for vdev */
- uint16_t nb_intr;
- /**< Max vector count, default RTE_MAX_RXTX_INTR_VEC_ID */
- int efds[RTE_MAX_RXTX_INTR_VEC_ID]; /**< intr vectors/efds mapping */
- struct rte_epoll_event elist[RTE_MAX_RXTX_INTR_VEC_ID];
- /**< intr vector epoll event */
- uint16_t vec_list_size;
- int *intr_vec; /**< intr vector number array */
-};
-
-#endif /* _RTE_EAL_INTERRUPTS_H_ */
diff --git a/lib/eal/include/rte_interrupts.h b/lib/eal/include/rte_interrupts.h
index 3d5649efc1..7620040844 100644
--- a/lib/eal/include/rte_interrupts.h
+++ b/lib/eal/include/rte_interrupts.h
@@ -25,7 +25,35 @@ extern "C" {
/** Interrupt handle */
struct rte_intr_handle;
-#include "rte_eal_interrupts.h"
+/** Interrupt instance allocation flags
+ * @see rte_intr_instance_alloc
+ */
+/** Allocate interrupt instance from traditional heap */
+#define RTE_INTR_ALLOC_TRAD_HEAP 0x00000000
+/** Allocate interrupt instance using DPDK memory management APIs */
+#define RTE_INTR_ALLOC_DPDK_ALLOCATOR 0x00000001
+
+#define RTE_MAX_RXTX_INTR_VEC_ID 512
+#define RTE_INTR_VEC_ZERO_OFFSET 0
+#define RTE_INTR_VEC_RXTX_OFFSET 1
+
+/**
+ * The interrupt source type, e.g. UIO, VFIO, ALARM etc.
+ */
+enum rte_intr_handle_type {
+ RTE_INTR_HANDLE_UNKNOWN = 0, /**< generic unknown handle */
+ RTE_INTR_HANDLE_UIO, /**< uio device handle */
+ RTE_INTR_HANDLE_UIO_INTX, /**< uio generic handle */
+ RTE_INTR_HANDLE_VFIO_LEGACY, /**< vfio device handle (legacy) */
+ RTE_INTR_HANDLE_VFIO_MSI, /**< vfio device handle (MSI) */
+ RTE_INTR_HANDLE_VFIO_MSIX, /**< vfio device handle (MSIX) */
+ RTE_INTR_HANDLE_ALARM, /**< alarm handle */
+ RTE_INTR_HANDLE_EXT, /**< external handler */
+ RTE_INTR_HANDLE_VDEV, /**< virtual device */
+ RTE_INTR_HANDLE_DEV_EVENT, /**< device event handle */
+ RTE_INTR_HANDLE_VFIO_REQ, /**< VFIO request handle */
+ RTE_INTR_HANDLE_MAX /**< count of elements */
+};
/** Function to be registered for the specific interrupt */
typedef void (*rte_intr_callback_fn)(void *cb_arg);
--
2.18.0
^ permalink raw reply [flat|nested] 152+ messages in thread
* [dpdk-dev] [PATCH v4 7/7] eal/alarm: introduce alarm fini routine
2021-10-19 18:35 ` [dpdk-dev] [PATCH v4 0/7] make rte_intr_handle internal Harman Kalra
` (5 preceding siblings ...)
2021-10-19 18:35 ` [dpdk-dev] [PATCH v4 6/7] eal/interrupts: make interrupt handle structure opaque Harman Kalra
@ 2021-10-19 18:35 ` Harman Kalra
2021-10-19 21:39 ` Dmitry Kozlyuk
6 siblings, 1 reply; 152+ messages in thread
From: Harman Kalra @ 2021-10-19 18:35 UTC (permalink / raw)
To: dev, Bruce Richardson
Cc: david.marchand, dmitry.kozliuk, mdr, thomas, Harman Kalra
Implementing an alarm cleanup routine, where the memory allocated
for the interrupt instance can be freed.
Signed-off-by: Harman Kalra <hkalra@marvell.com>
---
lib/eal/common/eal_private.h | 11 +++++++++++
lib/eal/freebsd/eal.c | 1 +
lib/eal/freebsd/eal_alarm.c | 7 +++++++
lib/eal/linux/eal.c | 1 +
lib/eal/linux/eal_alarm.c | 10 +++++++++-
5 files changed, 29 insertions(+), 1 deletion(-)
diff --git a/lib/eal/common/eal_private.h b/lib/eal/common/eal_private.h
index 86dab1f057..7fb9bc1324 100644
--- a/lib/eal/common/eal_private.h
+++ b/lib/eal/common/eal_private.h
@@ -163,6 +163,17 @@ int rte_eal_intr_init(void);
*/
int rte_eal_alarm_init(void);
+/**
+ * Init alarm mechanism. This is to allow a callback be called after
+ * specific time.
+ *
+ * This function is private to EAL.
+ *
+ * @return
+ * 0 on success, negative on error
+ */
+void rte_eal_alarm_fini(void);
+
/**
* Function is to check if the kernel module(like, vfio, vfio_iommu_type1,
* etc.) loaded.
diff --git a/lib/eal/freebsd/eal.c b/lib/eal/freebsd/eal.c
index 56a60f13e9..535ea687ca 100644
--- a/lib/eal/freebsd/eal.c
+++ b/lib/eal/freebsd/eal.c
@@ -977,6 +977,7 @@ rte_eal_cleanup(void)
rte_eal_memory_detach();
rte_trace_save();
eal_trace_fini();
+ rte_eal_alarm_fini();
eal_cleanup_config(internal_conf);
return 0;
}
diff --git a/lib/eal/freebsd/eal_alarm.c b/lib/eal/freebsd/eal_alarm.c
index cd971036cd..167384e79a 100644
--- a/lib/eal/freebsd/eal_alarm.c
+++ b/lib/eal/freebsd/eal_alarm.c
@@ -46,6 +46,13 @@ static rte_spinlock_t alarm_list_lk = RTE_SPINLOCK_INITIALIZER;
static struct rte_intr_handle *intr_handle;
static void eal_alarm_callback(void *arg);
+void
+rte_eal_alarm_fini(void)
+{
+ if (intr_handle)
+ rte_intr_instance_free(intr_handle);
+}
+
int
rte_eal_alarm_init(void)
{
diff --git a/lib/eal/linux/eal.c b/lib/eal/linux/eal.c
index 0d0fc66668..806158f297 100644
--- a/lib/eal/linux/eal.c
+++ b/lib/eal/linux/eal.c
@@ -1370,6 +1370,7 @@ rte_eal_cleanup(void)
rte_eal_memory_detach();
rte_trace_save();
eal_trace_fini();
+ rte_eal_alarm_fini();
eal_cleanup_config(internal_conf);
return 0;
}
diff --git a/lib/eal/linux/eal_alarm.c b/lib/eal/linux/eal_alarm.c
index cf8e2f2066..56f69d8e6d 100644
--- a/lib/eal/linux/eal_alarm.c
+++ b/lib/eal/linux/eal_alarm.c
@@ -58,6 +58,13 @@ static struct rte_intr_handle *intr_handle;
static int handler_registered = 0;
static void eal_alarm_callback(void *arg);
+void
+rte_eal_alarm_fini(void)
+{
+ if (intr_handle)
+ rte_intr_instance_free(intr_handle);
+}
+
int
rte_eal_alarm_init(void)
{
@@ -68,7 +75,8 @@ rte_eal_alarm_init(void)
goto error;
}
- rte_intr_type_set(intr_handle, RTE_INTR_HANDLE_ALARM);
+ if (rte_intr_type_set(intr_handle, RTE_INTR_HANDLE_ALARM))
+ goto error;
/* create a timerfd file descriptor */
if (rte_intr_fd_set(intr_handle,
--
2.18.0
^ permalink raw reply [flat|nested] 152+ messages in thread
* Re: [dpdk-dev] [EXT] Re: [PATCH v3 2/7] eal/interrupts: implement get set APIs
2021-10-19 8:50 ` [dpdk-dev] [EXT] " Harman Kalra
@ 2021-10-19 18:44 ` Harman Kalra
0 siblings, 0 replies; 152+ messages in thread
From: Harman Kalra @ 2021-10-19 18:44 UTC (permalink / raw)
To: Harman Kalra, Dmitry Kozlyuk
Cc: dev, Thomas Monjalon, Ray Kinsella, david.marchand
> -----Original Message-----
> From: dev <dev-bounces@dpdk.org> On Behalf Of Harman Kalra
> Sent: Tuesday, October 19, 2021 2:21 PM
> To: Dmitry Kozlyuk <dmitry.kozliuk@gmail.com>
> Cc: dev@dpdk.org; Thomas Monjalon <thomas@monjalon.net>; Ray Kinsella
> <mdr@ashroe.eu>; david.marchand@redhat.com
> Subject: Re: [dpdk-dev] [EXT] Re: [PATCH v3 2/7] eal/interrupts: implement
> get set APIs
>
[...]
> > > +
> > > + nb_intr = RTE_MIN(src->nb_intr, intr_handle->nb_intr);
> >
> > Truncating copy is error-prone.
> > It should be either a reallocation (in the future) or an error (now).
>
> Actually in patch 6, I have made lot of changes to this API wrt nb_intr, where
> efds/elist arrays are reallocated based on src->nb_intr and make
> intr_handle->nb_intr equal to src->nb_intr. I think those changes can be
> moved from patch 6 to patch 2.
Hi Dmitry,
I have addressed all your comments in V4, kindly review.
Regarding this particular comment, I have not made any changes, as I already
explained in my previous reply that patch 6 takes care of the realloc based on nb_intr.
I thought those changes could be moved from patch 6 to patch 2, but it is not possible
because until patch 5 the efds and elist arrays are kept as static arrays to avoid build
breakage, while in patch 6 I made these arrays pointers and dynamically allocated memory for them.
Thanks
Harman
[...]
>
> >
> > > + memcpy(intr_handle->efds, src->efds, nb_intr);
^ permalink raw reply [flat|nested] 152+ messages in thread
* Re: [dpdk-dev] [PATCH v4 3/7] eal/interrupts: avoid direct access to interrupt handle
2021-10-19 18:35 ` [dpdk-dev] [PATCH v4 3/7] eal/interrupts: avoid direct access to interrupt handle Harman Kalra
@ 2021-10-19 21:27 ` Dmitry Kozlyuk
2021-10-20 9:25 ` [dpdk-dev] [EXT] " Harman Kalra
0 siblings, 1 reply; 152+ messages in thread
From: Dmitry Kozlyuk @ 2021-10-19 21:27 UTC (permalink / raw)
To: Harman Kalra; +Cc: dev, Bruce Richardson, david.marchand, mdr, thomas
2021-10-20 00:05 (UTC+0530), Harman Kalra:
> Making changes to the interrupt framework to use interrupt handle
> APIs to get/set any field. Direct access to any of the fields
> should be avoided to avoid any ABI breakage in future.
I get and accept the point why EAL also should use the API.
However, mentioning ABI is still a wrong wording.
There is no ABI between EAL structures and EAL functions by definition of ABI.
>
> Signed-off-by: Harman Kalra <hkalra@marvell.com>
> ---
> lib/eal/freebsd/eal_interrupts.c | 92 ++++++----
> lib/eal/linux/eal_interrupts.c | 287 +++++++++++++++++++------------
> 2 files changed, 234 insertions(+), 145 deletions(-)
>
> diff --git a/lib/eal/freebsd/eal_interrupts.c b/lib/eal/freebsd/eal_interrupts.c
[...]
> @@ -135,9 +137,18 @@ rte_intr_callback_register(const struct rte_intr_handle *intr_handle,
> ret = -ENOMEM;
> goto fail;
> } else {
> - src->intr_handle = *intr_handle;
> - TAILQ_INIT(&src->callbacks);
> - TAILQ_INSERT_TAIL(&intr_sources, src, next);
> + src->intr_handle = rte_intr_instance_alloc();
> + if (src->intr_handle == NULL) {
> + RTE_LOG(ERR, EAL, "Can not create intr instance\n");
> + free(callback);
> + ret = -ENOMEM;
goto fail?
> + } else {
> + rte_intr_instance_copy(src->intr_handle,
> + intr_handle);
> + TAILQ_INIT(&src->callbacks);
> + TAILQ_INSERT_TAIL(&intr_sources, src,
> + next);
> + }
> }
> }
>
[...]
> @@ -213,7 +226,7 @@ rte_intr_callback_unregister_pending(const struct rte_intr_handle *intr_handle,
> struct rte_intr_callback *cb, *next;
>
> /* do parameter checking first */
> - if (intr_handle == NULL || intr_handle->fd < 0) {
> + if (intr_handle == NULL || rte_intr_fd_get(intr_handle) < 0) {
The handle is checked for NULL inside the accessor, here and in other places:
grep -R 'intr_handle == NULL ||' lib/eal
> RTE_LOG(ERR, EAL,
> "Unregistering with invalid input parameter\n");
> return -EINVAL;
> diff --git a/lib/eal/linux/eal_interrupts.c b/lib/eal/linux/eal_interrupts.c
[...]
^ permalink raw reply [flat|nested] 152+ messages in thread
* Re: [dpdk-dev] [PATCH v4 7/7] eal/alarm: introduce alarm fini routine
2021-10-19 18:35 ` [dpdk-dev] [PATCH v4 7/7] eal/alarm: introduce alarm fini routine Harman Kalra
@ 2021-10-19 21:39 ` Dmitry Kozlyuk
0 siblings, 0 replies; 152+ messages in thread
From: Dmitry Kozlyuk @ 2021-10-19 21:39 UTC (permalink / raw)
To: Harman Kalra; +Cc: dev, Bruce Richardson, david.marchand, mdr, thomas
2021-10-20 00:05 (UTC+0530), Harman Kalra:
> Implementing alarm cleanup routine, where the memory allocated
> for interrupt instance can be freed.
>
> Signed-off-by: Harman Kalra <hkalra@marvell.com>
> ---
> lib/eal/common/eal_private.h | 11 +++++++++++
> lib/eal/freebsd/eal.c | 1 +
> lib/eal/freebsd/eal_alarm.c | 7 +++++++
> lib/eal/linux/eal.c | 1 +
> lib/eal/linux/eal_alarm.c | 10 +++++++++-
> 5 files changed, 29 insertions(+), 1 deletion(-)
>
> diff --git a/lib/eal/common/eal_private.h b/lib/eal/common/eal_private.h
> index 86dab1f057..7fb9bc1324 100644
> --- a/lib/eal/common/eal_private.h
> +++ b/lib/eal/common/eal_private.h
> @@ -163,6 +163,17 @@ int rte_eal_intr_init(void);
> */
> int rte_eal_alarm_init(void);
>
> +/**
> + * Init alarm mechanism. This is to allow a callback be called after
> + * specific time.
> + *
> + * This function is private to EAL.
> + *
> + * @return
> + * 0 on success, negative on error
> + */
The comment does not match the function.
> +void rte_eal_alarm_fini(void);
> +
> /**
> * Function is to check if the kernel module(like, vfio, vfio_iommu_type1,
> * etc.) loaded.
[...]
> diff --git a/lib/eal/freebsd/eal_alarm.c b/lib/eal/freebsd/eal_alarm.c
> index cd971036cd..167384e79a 100644
> --- a/lib/eal/freebsd/eal_alarm.c
> +++ b/lib/eal/freebsd/eal_alarm.c
> @@ -46,6 +46,13 @@ static rte_spinlock_t alarm_list_lk = RTE_SPINLOCK_INITIALIZER;
> static struct rte_intr_handle *intr_handle;
> static void eal_alarm_callback(void *arg);
>
> +void
> +rte_eal_alarm_fini(void)
> +{
> + if (intr_handle)
intr_handle != NULL
> + rte_intr_instance_free(intr_handle);
> +}
> +
> int
> rte_eal_alarm_init(void)
> {
[...]
> diff --git a/lib/eal/linux/eal_alarm.c b/lib/eal/linux/eal_alarm.c
> index cf8e2f2066..56f69d8e6d 100644
> --- a/lib/eal/linux/eal_alarm.c
> +++ b/lib/eal/linux/eal_alarm.c
> @@ -58,6 +58,13 @@ static struct rte_intr_handle *intr_handle;
> static int handler_registered = 0;
> static void eal_alarm_callback(void *arg);
>
> +void
> +rte_eal_alarm_fini(void)
> +{
> + if (intr_handle)
Ditto.
> + rte_intr_instance_free(intr_handle);
> +}
> +
> int
> rte_eal_alarm_init(void)
> {
> @@ -68,7 +75,8 @@ rte_eal_alarm_init(void)
> goto error;
> }
>
> - rte_intr_type_set(intr_handle, RTE_INTR_HANDLE_ALARM);
> + if (rte_intr_type_set(intr_handle, RTE_INTR_HANDLE_ALARM))
> + goto error;
>
> /* create a timerfd file descriptor */
> if (rte_intr_fd_set(intr_handle,
This belongs to a patch 5/7, doesn't it?
^ permalink raw reply [flat|nested] 152+ messages in thread
* Re: [dpdk-dev] [PATCH v4 1/7] malloc: introduce malloc is ready API
2021-10-19 18:35 ` [dpdk-dev] [PATCH v4 1/7] malloc: introduce malloc is ready API Harman Kalra
@ 2021-10-19 22:01 ` Dmitry Kozlyuk
2021-10-19 22:04 ` Dmitry Kozlyuk
0 siblings, 1 reply; 152+ messages in thread
From: Dmitry Kozlyuk @ 2021-10-19 22:01 UTC (permalink / raw)
To: Harman Kalra; +Cc: dev, Anatoly Burakov, david.marchand, mdr, thomas
2021-10-20 00:05 (UTC+0530), Harman Kalra:
[...]
> static unsigned
> check_hugepage_sz(unsigned flags, uint64_t hugepage_sz)
> {
> @@ -1328,6 +1330,7 @@ rte_eal_malloc_heap_init(void)
> {
> struct rte_mem_config *mcfg = rte_eal_get_configuration()->mem_config;
> unsigned int i;
> + int ret;
> const struct internal_config *internal_conf =
> eal_get_internal_configuration();
>
> @@ -1369,5 +1372,16 @@ rte_eal_malloc_heap_init(void)
> return 0;
A secondary process exits here...
> /* add all IOVA-contiguous areas to the heap */
> - return rte_memseg_contig_walk(malloc_add_seg, NULL);
> + ret = rte_memseg_contig_walk(malloc_add_seg, NULL);
> +
> + if (ret == 0)
> + malloc_ready = true;
...and never knows that malloc is ready.
But malloc is always ready for a secondary process.
^ permalink raw reply [flat|nested] 152+ messages in thread
* Re: [dpdk-dev] [PATCH v4 1/7] malloc: introduce malloc is ready API
2021-10-19 22:01 ` Dmitry Kozlyuk
@ 2021-10-19 22:04 ` Dmitry Kozlyuk
2021-10-20 9:01 ` [dpdk-dev] [EXT] " Harman Kalra
0 siblings, 1 reply; 152+ messages in thread
From: Dmitry Kozlyuk @ 2021-10-19 22:04 UTC (permalink / raw)
To: Harman Kalra; +Cc: dev, Anatoly Burakov, david.marchand, mdr, thomas
2021-10-20 01:01 (UTC+0300), Dmitry Kozlyuk:
> 2021-10-20 00:05 (UTC+0530), Harman Kalra:
> [...]
> > static unsigned
> > check_hugepage_sz(unsigned flags, uint64_t hugepage_sz)
> > {
> > @@ -1328,6 +1330,7 @@ rte_eal_malloc_heap_init(void)
> > {
> > struct rte_mem_config *mcfg = rte_eal_get_configuration()->mem_config;
> > unsigned int i;
> > + int ret;
> > const struct internal_config *internal_conf =
> > eal_get_internal_configuration();
> >
> > @@ -1369,5 +1372,16 @@ rte_eal_malloc_heap_init(void)
> > return 0;
>
> A secondary process exits here...
>
> > /* add all IOVA-contiguous areas to the heap */
> > - return rte_memseg_contig_walk(malloc_add_seg, NULL);
> > + ret = rte_memseg_contig_walk(malloc_add_seg, NULL);
> > +
> > + if (ret == 0)
> > + malloc_ready = true;
>
> ...and never knows that malloc is ready.
> But malloc is always ready for a secondary process.
That is, before returning 0 above for a secondary process
malloc_ready should be set unconditionally.
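In code, the suggested fix boils down to something like the following sketch
(the surrounding rte_eal_malloc_heap_init() context is abridged and partly
assumed):

	/* Secondary processes attach to heaps created by the primary,
	 * so malloc is already usable at this point; flag it before the
	 * early return discussed above.
	 */
	if (rte_eal_process_type() != RTE_PROC_PRIMARY) {
		malloc_ready = true;
		return 0;
	}

	/* add all IOVA-contiguous areas to the heap (primary only) */
	ret = rte_memseg_contig_walk(malloc_add_seg, NULL);
	if (ret == 0)
		malloc_ready = true;
	return ret;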
^ permalink raw reply [flat|nested] 152+ messages in thread
* Re: [dpdk-dev] [PATCH v4 5/7] drivers: remove direct access to interrupt handle
2021-10-19 18:35 ` [dpdk-dev] [PATCH v4 5/7] drivers: remove direct access to interrupt handle Harman Kalra
@ 2021-10-20 1:57 ` Hyong Youb Kim (hyonkim)
0 siblings, 0 replies; 152+ messages in thread
From: Hyong Youb Kim (hyonkim) @ 2021-10-20 1:57 UTC (permalink / raw)
To: Harman Kalra, dev, Nicolas Chautru, Parav Pandit, Xueming Li,
Hemant Agrawal, Sachin Saxena, Rosen Xu, Ferruh Yigit,
Anatoly Burakov, Stephen Hemminger, Long Li, Nithin Dabilpuram,
Kiran Kumar K, Sunil Kumar Kori, Satha Rao, Jerin Jacob,
Ankur Dwivedi, Anoob Joseph, Pavan Nikhilesh, Igor Russkikh,
Steven Webster, Matt Peters, Somalapuram Amaranath, Rasesh Mody,
Shahed Shaikh, Ajit Khaparde, Somnath Kotur, Haiyue Wang,
Marcin Wojtas, Michal Krawczyk, Shai Brandes, Evgeny Schemeilin,
Igor Chauskin, John Daley (johndale),
Gaetan Rivet, Qi Zhang, Xiao Wang, Ziyang Xuan, Xiaoyun Wang,
Guoyang Zhou, Min Hu (Connor),
Yisen Zhuang, Lijun Ou, Beilei Xing, Jingjing Wu, Qiming Yang,
Andrew Boyer,
Jakub Grajciar -X (jgrajcia - PANTHEON TECH SRO at Cisco),
Matan Azrad, Viacheslav Ovsiienko, Heinrich Kuhn, Jiawen Wu,
Devendra Singh Rawat, Andrew Rybchenko, Keith Wiles,
Maciej Czekaj, Jian Wang, Maxime Coquelin, Chenbo Xia, Yong Wang,
Tianfei zhang, Xiaoyun Li, Guy Kaneti, Bruce Richardson,
Thomas Monjalon
Cc: david.marchand, dmitry.kozliuk, mdr
> -----Original Message-----
> From: Harman Kalra <hkalra@marvell.com>
> Sent: Wednesday, October 20, 2021 3:36 AM
[...]
> Subject: [PATCH v4 5/7] drivers: remove direct access to interrupt handle
>
> Removing direct access to interrupt handle structure fields,
> rather use respective get set APIs for the same.
> Making changes to all the drivers and libraries access the
> interrupt handle fields.
>
> Signed-off-by: Harman Kalra <hkalra@marvell.com>
> ---
For net/enic,
Acked-by: Hyong Youb Kim <hyonkim@cisco.com>
Thanks.
-Hyong
^ permalink raw reply [flat|nested] 152+ messages in thread
* Re: [dpdk-dev] [PATCH v4 2/7] eal/interrupts: implement get set APIs
2021-10-19 18:35 ` [dpdk-dev] [PATCH v4 2/7] eal/interrupts: implement get set APIs Harman Kalra
@ 2021-10-20 6:14 ` David Marchand
2021-10-20 14:29 ` Dmitry Kozlyuk
2021-10-20 16:15 ` Dmitry Kozlyuk
1 sibling, 1 reply; 152+ messages in thread
From: David Marchand @ 2021-10-20 6:14 UTC (permalink / raw)
To: Harman Kalra; +Cc: dev, Thomas Monjalon, Ray Kinsella, Dmitry Kozlyuk
On Tue, Oct 19, 2021 at 8:36 PM Harman Kalra <hkalra@marvell.com> wrote:
> diff --git a/lib/eal/version.map b/lib/eal/version.map
> index 38f7de83e1..7112dbc146 100644
> --- a/lib/eal/version.map
> +++ b/lib/eal/version.map
> @@ -109,18 +109,10 @@ DPDK_22 {
> rte_hexdump;
> rte_hypervisor_get;
> rte_hypervisor_get_name; # WINDOWS_NO_EXPORT
> - rte_intr_allow_others;
> rte_intr_callback_register;
> rte_intr_callback_unregister;
> - rte_intr_cap_multiple;
> - rte_intr_disable;
> - rte_intr_dp_is_en;
> - rte_intr_efd_disable;
> - rte_intr_efd_enable;
> rte_intr_enable;
> - rte_intr_free_epoll_fd;
> - rte_intr_rx_ctl;
> - rte_intr_tls_epfd;
> + rte_intr_disable;
Please sort symbols alphabetically, this patch moves rte_intr_disable
after rte_intr_enable.
> rte_keepalive_create; # WINDOWS_NO_EXPORT
> rte_keepalive_dispatch_pings; # WINDOWS_NO_EXPORT
> rte_keepalive_mark_alive; # WINDOWS_NO_EXPORT
> @@ -420,12 +412,49 @@ EXPERIMENTAL {
>
> # added in 21.08
> rte_power_monitor_multi; # WINDOWS_NO_EXPORT
> +
> + # added in 21.11
> + rte_intr_fd_get; # WINDOWS_NO_EXPORT
> + rte_intr_fd_set; # WINDOWS_NO_EXPORT
> + rte_intr_instance_alloc;
> + rte_intr_instance_free;
> + rte_intr_type_get;
> + rte_intr_type_set;
> };
>
> INTERNAL {
> global:
>
> rte_firmware_read;
> + rte_intr_allow_others;
> + rte_intr_cap_multiple;
> + rte_intr_dev_fd_get; # WINDOWS_NO_EXPORT
> + rte_intr_dev_fd_set; # WINDOWS_NO_EXPORT
> + rte_intr_dp_is_en;
> + rte_intr_efd_counter_size_set; # WINDOWS_NO_EXPORT
> + rte_intr_efd_counter_size_get; # WINDOWS_NO_EXPORT
> + rte_intr_efd_disable;
> + rte_intr_efd_enable;
> + rte_intr_efds_index_get; # WINDOWS_NO_EXPORT
> + rte_intr_efds_index_set; # WINDOWS_NO_EXPORT
> + rte_intr_elist_index_get;
> + rte_intr_elist_index_set;
> + rte_intr_event_list_update;
> + rte_intr_free_epoll_fd;
> + rte_intr_instance_copy;
> + rte_intr_instance_windows_handle_get;
> + rte_intr_instance_windows_handle_set;
> + rte_intr_max_intr_get;
> + rte_intr_max_intr_set;
> + rte_intr_nb_efd_get; # WINDOWS_NO_EXPORT
> + rte_intr_nb_efd_set; # WINDOWS_NO_EXPORT
> + rte_intr_nb_intr_get; # WINDOWS_NO_EXPORT
> + rte_intr_rx_ctl;
> + rte_intr_tls_epfd;
> + rte_intr_vec_list_alloc;
> + rte_intr_vec_list_free;
> + rte_intr_vec_list_index_get;
> + rte_intr_vec_list_index_set;
> rte_mem_lock;
> rte_mem_map;
> rte_mem_page_size;
I see at least one link issue on Windows:
FAILED: lib/rte_ethdev-22.dll
"clang" -Wl,/MACHINE:X64 -Wl,/OUT:lib/rte_ethdev-22.dll
lib/librte_ethdev.a.p/ethdev_ethdev_private.c.obj
lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.obj
lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.obj
lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.obj
lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.obj
lib/librte_ethdev.a.p/ethdev_rte_flow.c.obj
lib/librte_ethdev.a.p/ethdev_rte_mtr.c.obj
lib/librte_ethdev.a.p/ethdev_rte_tm.c.obj "-Wl,/nologo" "-Wl,/release"
"-Wl,/nologo" "-Wl,/OPT:REF" "-Wl,/DLL"
"-Wl,/IMPLIB:lib\rte_ethdev.lib" "lib\rte_eal.lib"
"lib\rte_kvargs.lib" "lib\rte_net.lib" "lib\rte_mbuf.lib"
"lib\rte_mempool.lib" "lib\rte_ring.lib" "lib\rte_meter.lib"
"lib\rte_telemetry.lib"
"-Wl,/def:C:\Users\builder\jenkins\workspace\Windows-Compile-DPDK-Meson\dpdk\build\lib\rte_ethdev_exports.def"
"-ldbghelp" "-lsetupapi" "-lws2_32" "-lmincore" "-lkernel32"
"-luser32" "-lgdi32" "-lwinspool" "-lshell32" "-lole32" "-loleaut32"
"-luuid" "-lcomdlg32" "-ladvapi32"
Creating library lib\rte_ethdev.lib and object lib\rte_ethdev.exp
ethdev_rte_ethdev.c.obj : error LNK2019: unresolved external symbol
rte_intr_efds_index_get referenced in function
rte_eth_dev_rx_intr_ctl_q_get_fd
lib\rte_ethdev-22.dll : fatal error LNK1120: 1 unresolved externals
AFAIU, eal_common_interrupts.c hosts those new symbols and this file
is compiled for Windows (from common/meson.build update), so all intr
symbols are available.
There is no reason to filter interrupt symbols for Windows, is there?
--
David Marchand
^ permalink raw reply [flat|nested] 152+ messages in thread
* Re: [dpdk-dev] [EXT] Re: [PATCH v4 1/7] malloc: introduce malloc is ready API
2021-10-19 22:04 ` Dmitry Kozlyuk
@ 2021-10-20 9:01 ` Harman Kalra
0 siblings, 0 replies; 152+ messages in thread
From: Harman Kalra @ 2021-10-20 9:01 UTC (permalink / raw)
To: Dmitry Kozlyuk; +Cc: dev, Anatoly Burakov, david.marchand, mdr, thomas
> -----Original Message-----
> From: Dmitry Kozlyuk <dmitry.kozliuk@gmail.com>
> Sent: Wednesday, October 20, 2021 3:34 AM
> To: Harman Kalra <hkalra@marvell.com>
> Cc: dev@dpdk.org; Anatoly Burakov <anatoly.burakov@intel.com>;
> david.marchand@redhat.com; mdr@ashroe.eu; thomas@monjalon.net
> Subject: [EXT] Re: [PATCH v4 1/7] malloc: introduce malloc is ready API
>
> External Email
>
> ----------------------------------------------------------------------
> 2021-10-20 01:01 (UTC+0300), Dmitry Kozlyuk:
> > 2021-10-20 00:05 (UTC+0530), Harman Kalra:
> > [...]
> > > static unsigned
> > > check_hugepage_sz(unsigned flags, uint64_t hugepage_sz) { @@
> > > -1328,6 +1330,7 @@ rte_eal_malloc_heap_init(void) {
> > > struct rte_mem_config *mcfg = rte_eal_get_configuration()-
> >mem_config;
> > > unsigned int i;
> > > + int ret;
> > > const struct internal_config *internal_conf =
> > > eal_get_internal_configuration();
> > >
> > > @@ -1369,5 +1372,16 @@ rte_eal_malloc_heap_init(void)
> > > return 0;
> >
> > A secondary process exits here...
> >
> > > /* add all IOVA-contiguous areas to the heap */
> > > - return rte_memseg_contig_walk(malloc_add_seg, NULL);
> > > + ret = rte_memseg_contig_walk(malloc_add_seg, NULL);
> > > +
> > > + if (ret == 0)
> > > + malloc_ready = true;
> >
> > ...and never knows that malloc is ready.
> > But malloc is always ready for a secondary process.
>
> That is, before returning 0 above for a secondary process malloc_ready
> should be set unconditionally.
Yes, thanks for catching this, I will fix it in V5.
^ permalink raw reply [flat|nested] 152+ messages in thread
* Re: [dpdk-dev] [EXT] Re: [PATCH v4 3/7] eal/interrupts: avoid direct access to interrupt handle
2021-10-19 21:27 ` Dmitry Kozlyuk
@ 2021-10-20 9:25 ` Harman Kalra
2021-10-20 9:52 ` Dmitry Kozlyuk
0 siblings, 1 reply; 152+ messages in thread
From: Harman Kalra @ 2021-10-20 9:25 UTC (permalink / raw)
To: Dmitry Kozlyuk; +Cc: dev, Bruce Richardson, david.marchand, mdr, thomas
> -----Original Message-----
> From: Dmitry Kozlyuk <dmitry.kozliuk@gmail.com>
> Sent: Wednesday, October 20, 2021 2:58 AM
> To: Harman Kalra <hkalra@marvell.com>
> Cc: dev@dpdk.org; Bruce Richardson <bruce.richardson@intel.com>;
> david.marchand@redhat.com; mdr@ashroe.eu; thomas@monjalon.net
> Subject: [EXT] Re: [PATCH v4 3/7] eal/interrupts: avoid direct access to
> interrupt handle
>
> External Email
>
> ----------------------------------------------------------------------
> 2021-10-20 00:05 (UTC+0530), Harman Kalra:
> > Making changes to the interrupt framework to use interrupt handle APIs
> > to get/set any field. Direct access to any of the fields should be
> > avoided to avoid any ABI breakage in future.
>
> I get and accept the point why EAL also should use the API.
> However, mentioning ABI is still a wrong wording.
> There is no ABI between EAL structures and EAL functions by definition of
> ABI.
Sure, I will reword the commit message without ABI inclusion.
>
> >
> > Signed-off-by: Harman Kalra <hkalra@marvell.com>
> > ---
> > lib/eal/freebsd/eal_interrupts.c | 92 ++++++----
> > lib/eal/linux/eal_interrupts.c | 287 +++++++++++++++++++------------
> > 2 files changed, 234 insertions(+), 145 deletions(-)
> >
> > diff --git a/lib/eal/freebsd/eal_interrupts.c
> > b/lib/eal/freebsd/eal_interrupts.c
> [...]
> > @@ -135,9 +137,18 @@ rte_intr_callback_register(const struct
> rte_intr_handle *intr_handle,
> > ret = -ENOMEM;
> > goto fail;
> > } else {
> > - src->intr_handle = *intr_handle;
> > - TAILQ_INIT(&src->callbacks);
> > - TAILQ_INSERT_TAIL(&intr_sources, src, next);
> > + src->intr_handle = rte_intr_instance_alloc();
> > + if (src->intr_handle == NULL) {
> > + RTE_LOG(ERR, EAL, "Can not create
> intr instance\n");
> > + free(callback);
> > + ret = -ENOMEM;
>
> goto fail?
I think the goto is not required, as we are not setting wake_thread = 1 here;
the API will just return the error after unlocking the spinlock and emitting the trace.
>
> > + } else {
> > + rte_intr_instance_copy(src-
> >intr_handle,
> > + intr_handle);
> > + TAILQ_INIT(&src->callbacks);
> > + TAILQ_INSERT_TAIL(&intr_sources,
> src,
> > + next);
> > + }
> > }
> > }
> >
> [...]
> > @@ -213,7 +226,7 @@ rte_intr_callback_unregister_pending(const struct
> rte_intr_handle *intr_handle,
> > struct rte_intr_callback *cb, *next;
> >
> > /* do parameter checking first */
> > - if (intr_handle == NULL || intr_handle->fd < 0) {
> > + if (intr_handle == NULL || rte_intr_fd_get(intr_handle) < 0) {
>
> The handle is checked for NULL inside the accessor, here and in other places:
> grep -R 'intr_handle == NULL ||' lib/eal
Ack, I will remove these NULL checks.
>
> > RTE_LOG(ERR, EAL,
> > "Unregistering with invalid input parameter\n");
> > return -EINVAL;
>
> > diff --git a/lib/eal/linux/eal_interrupts.c
> > b/lib/eal/linux/eal_interrupts.c
> [...]
^ permalink raw reply [flat|nested] 152+ messages in thread
* Re: [dpdk-dev] [EXT] Re: [PATCH v4 3/7] eal/interrupts: avoid direct access to interrupt handle
2021-10-20 9:25 ` [dpdk-dev] [EXT] " Harman Kalra
@ 2021-10-20 9:52 ` Dmitry Kozlyuk
0 siblings, 0 replies; 152+ messages in thread
From: Dmitry Kozlyuk @ 2021-10-20 9:52 UTC (permalink / raw)
To: Harman Kalra; +Cc: dev, Bruce Richardson, david.marchand, mdr, thomas
2021-10-20 09:25 (UTC+0000), Harman Kalra:
> [...]
> > > diff --git a/lib/eal/freebsd/eal_interrupts.c
> > > b/lib/eal/freebsd/eal_interrupts.c
> > [...]
> > > @@ -135,9 +137,18 @@ rte_intr_callback_register(const struct
> > rte_intr_handle *intr_handle,
> > > ret = -ENOMEM;
> > > goto fail;
> > > } else {
> > > - src->intr_handle = *intr_handle;
> > > - TAILQ_INIT(&src->callbacks);
> > > - TAILQ_INSERT_TAIL(&intr_sources, src, next);
> > > + src->intr_handle = rte_intr_instance_alloc();
> > > + if (src->intr_handle == NULL) {
> > > + RTE_LOG(ERR, EAL, "Can not create
> > intr instance\n");
> > > + free(callback);
> > > + ret = -ENOMEM;
> >
> > goto fail?
>
> I think goto not required, as we not setting wake_thread = 1 here,
> API will just return error after unlocking the spinlock and trace.
Just to emphasize, we're talking about FreeBSD implementation.
There is no "wake_thread" variable there, so "goto fail" is needed.
Your consideration would be valid for similar code in Linux EAL.
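For clarity, the corrected FreeBSD hunk would then look roughly like this
(sketch only, reusing the existing fail label that unlocks and returns):

	src->intr_handle = rte_intr_instance_alloc();
	if (src->intr_handle == NULL) {
		RTE_LOG(ERR, EAL, "Can not create intr instance\n");
		free(callback);
		ret = -ENOMEM;
		goto fail;	/* jump to the existing fail label */
	}
	rte_intr_instance_copy(src->intr_handle, intr_handle);
	TAILQ_INIT(&src->callbacks);
	TAILQ_INSERT_TAIL(&intr_sources, src, next);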
^ permalink raw reply [flat|nested] 152+ messages in thread
* Re: [dpdk-dev] [PATCH v4 2/7] eal/interrupts: implement get set APIs
2021-10-20 6:14 ` David Marchand
@ 2021-10-20 14:29 ` Dmitry Kozlyuk
0 siblings, 0 replies; 152+ messages in thread
From: Dmitry Kozlyuk @ 2021-10-20 14:29 UTC (permalink / raw)
To: David Marchand; +Cc: Harman Kalra, dev, Thomas Monjalon, Ray Kinsella
2021-10-20 08:14 (UTC+0200), David Marchand:
> [...]
>
> AFAIU, eal_common_interrupts.c hosts those new symbols and this file
> is compiled for Windows (from common/meson.build update), so all intr
> symbols are available.
>
> There is no reason to filter interrupts symbols for Windows, is there?
There is no technical reason.
I suggested not exporting those that make no sense on Windows.
If this creates complications, let's export them all.
^ permalink raw reply [flat|nested] 152+ messages in thread
* Re: [dpdk-dev] [EXT] Re: [PATCH v3 2/7] eal/interrupts: implement get set APIs
2021-10-19 8:32 ` [dpdk-dev] [EXT] " Harman Kalra
2021-10-19 15:58 ` Thomas Monjalon
@ 2021-10-20 15:30 ` Dmitry Kozlyuk
2021-10-21 9:16 ` Harman Kalra
1 sibling, 1 reply; 152+ messages in thread
From: Dmitry Kozlyuk @ 2021-10-20 15:30 UTC (permalink / raw)
To: Harman Kalra
Cc: Stephen Hemminger, Thomas Monjalon, david.marchand, dev, Ray Kinsella
2021-10-19 08:32 (UTC+0000), Harman Kalra:
> > -----Original Message-----
> > From: Stephen Hemminger <stephen@networkplumber.org>
> > Sent: Tuesday, October 19, 2021 4:27 AM
> > To: Harman Kalra <hkalra@marvell.com>
> > Cc: dev@dpdk.org; Thomas Monjalon <thomas@monjalon.net>; Ray Kinsella
> > <mdr@ashroe.eu>; david.marchand@redhat.com;
> > dmitry.kozliuk@gmail.com
> > Subject: [EXT] Re: [dpdk-dev] [PATCH v3 2/7] eal/interrupts: implement get
> > set APIs
> >
> > External Email
> >
> > ----------------------------------------------------------------------
> > On Tue, 19 Oct 2021 01:07:02 +0530
> > Harman Kalra <hkalra@marvell.com> wrote:
> >
> > > + /* Detect if DPDK malloc APIs are ready to be used. */
> > > + mem_allocator = rte_malloc_is_ready();
> > > + if (mem_allocator)
> > > + intr_handle = rte_zmalloc(NULL, sizeof(struct
> > rte_intr_handle),
> > > + 0);
> > > + else
> > > + intr_handle = calloc(1, sizeof(struct rte_intr_handle));
> >
> > This is problematic way to do this.
> > The reason to use rte_malloc vs malloc should be determined by usage.
> >
> > If the pointer will be shared between primary/secondary process then it has
> > to be in hugepages (ie rte_malloc). If it is not shared then then use regular
> > malloc.
> >
> > But what you have done is created a method which will be a latent bug for
> > anyone using primary/secondary process.
> >
> > Either:
> > intr_handle is not allowed to be used in secondary.
> > Then always use malloc().
> > Or.
> > intr_handle can be used by both primary and secondary.
> > Then always use rte_malloc().
> > Any code path that allocates intr_handle before pool is
> > ready is broken.
>
> Hi Stephan,
>
> Till V2, I implemented this API in a way where the user of the API can choose
> if he wants the intr handle to be allocated using malloc or rte_malloc by passing
> a flag arg to the rte_intr_instance_alloc API. The user of the API will best know if
> the intr handle is to be shared with a secondary process or not.
>
> But after some discussions and suggestions from the community we decided
> to drop that flag argument and auto detect on whether rte_malloc APIs are
> ready to be used and thereafter make all further allocations via rte_malloc.
> Currently alarm subsystem (or any driver doing allocation in constructor) gets
> interrupt instance allocated using glibc malloc that too because rte_malloc*
> is not ready by rte_eal_alarm_init(), while all further consumers gets instance
> allocated via rte_malloc.
Just as a comment, bus scanning is the real issue, not the alarms.
Alarms could be initialized after the memory management
(but it's irrelevant because their handle is not accessed from the outside).
However, MM needs to know bus IOVA requirements to initialize,
which is usually determined by at least bus device requirements.
> I think this should not cause any issue in primary/secondary model as all interrupt
> instance pointer will be shared.
What do you mean? Aren't we discussing the issue
that those allocated early are not shared?
> Infact to avoid any surprises of primary/secondary
> not working we thought of making all allocations via rte_malloc.
I don't see why anyone would not make them shared.
In order to only use rte_malloc(), we need:
1. In bus drivers, move handle allocation from scan to probe stage.
2. In EAL, move alarm initialization to after the MM.
It all can be done later with v3 design---but there are out-of-tree drivers.
We need to force them to make step 1 at some point.
I see two options:
a) Right now have an external API that only works with rte_malloc()
and internal API with autodetection. Fix DPDK and drop internal API.
b) Have external API with autodetection. Fix DPDK.
At the next ABI breakage drop autodetection and libc-malloc.
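For illustration, option (a) could be sketched roughly as below. This is only a sketch: rte_intr_instance_alloc() keeps the v3/v4 signature, rte_malloc_is_ready() is the helper used in the v3/v4 code quoted above, and eal_intr_instance_alloc() is a purely illustrative name for the temporary internal API.
/* Option (a) sketch: the public allocator always uses the DPDK heap,
 * while a temporary internal helper keeps autodetection for early
 * callers such as bus scan.
 */
struct rte_intr_handle *
rte_intr_instance_alloc(void)
{
	return rte_zmalloc(NULL, sizeof(struct rte_intr_handle), 0);
}

struct rte_intr_handle *
eal_intr_instance_alloc(void)
{
	if (rte_malloc_is_ready())
		return rte_intr_instance_alloc();
	return calloc(1, sizeof(struct rte_intr_handle));
}
Once DPDK itself no longer allocates handles before MM is up, the internal helper can simply be dropped.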
> David, Thomas, Dmitry, please add if I missed anything.
>
> Can we please conclude on this series' APIs, as the API freeze deadline (rc1) is very near.
I support v3 design with no options and autodetection,
because that's the interface we want in the end.
Implementation can be improved later.
^ permalink raw reply [flat|nested] 152+ messages in thread
* Re: [dpdk-dev] [PATCH v4 2/7] eal/interrupts: implement get set APIs
2021-10-19 18:35 ` [dpdk-dev] [PATCH v4 2/7] eal/interrupts: implement get set APIs Harman Kalra
2021-10-20 6:14 ` David Marchand
@ 2021-10-20 16:15 ` Dmitry Kozlyuk
1 sibling, 0 replies; 152+ messages in thread
From: Dmitry Kozlyuk @ 2021-10-20 16:15 UTC (permalink / raw)
To: Harman Kalra; +Cc: dev, Thomas Monjalon, Ray Kinsella, david.marchand
Hello Harman,
This patch looks good to me, there are just some tiny comments inline.
2021-10-20 00:05 (UTC+0530), Harman Kalra:
> [...]
> +/* Macros to check for valid port */
> +#define CHECK_VALID_INTR_HANDLE(intr_handle) do { \
> + if (intr_handle == NULL) { \
> + RTE_LOG(ERR, EAL, "Interrupt instance unallocated\n"); \
> + rte_errno = EINVAL; \
> + goto fail; \
> + } \
> +} while (0)
In most cases "goto fail" could be "return -rte_errno".
How about this (feel free to ignore)?
#define CHECK_VALID_INTR_HANDLE_RET(intr_handle) do { \
CHECK_VALID_INTR_HANDLE(intr_handle); \
fail: \
return -rte_errno; \
} while (0)
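For illustration, a minimal sketch of the direct-return form mentioned above, applied to the patch's rte_intr_fd_set() (only valid for the int-returning accessors whose failure value is simply -rte_errno):
int rte_intr_fd_set(struct rte_intr_handle *intr_handle, int fd)
{
	if (intr_handle == NULL) {
		RTE_LOG(ERR, EAL, "Interrupt instance unallocated\n");
		rte_errno = EINVAL;
		return -rte_errno;
	}

	intr_handle->fd = fd;

	return 0;
}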
> [...]
> +struct rte_intr_handle *rte_intr_instance_alloc(void)
> +{
> + struct rte_intr_handle *intr_handle;
> + bool is_rte_memory;
> +
> + /* Detect if DPDK malloc APIs are ready to be used. */
> + is_rte_memory = rte_malloc_is_ready();
> + if (is_rte_memory)
> + intr_handle = rte_zmalloc(NULL, sizeof(struct
> rte_intr_handle),
> + 0);
> + else
> + intr_handle = calloc(1, sizeof(struct rte_intr_handle));
Nit: sizeof(*intr_handle).
> + if (!intr_handle) {
intr_handle == NULL
> + RTE_LOG(ERR, EAL, "Fail to allocate intr_handle\n");
> + rte_errno = ENOMEM;
> + return NULL;
> + }
> +
> + intr_handle->nb_intr = RTE_MAX_RXTX_INTR_VEC_ID;
> + intr_handle->is_rte_memory = is_rte_memory;
> +
> + return intr_handle;
> +}
> +
[...]
> +
> +void rte_intr_instance_free(struct rte_intr_handle *intr_handle)
> +{
> + if (intr_handle) {
intr_handle != NULL
> + if (intr_handle->is_rte_memory)
> + rte_free(intr_handle);
> + else
> + free(intr_handle);
> + }
> +}
[...]
> +
> +int rte_intr_elist_index_set(struct rte_intr_handle *intr_handle,
> + int index, struct rte_epoll_event elist)
> +{
> + CHECK_VALID_INTR_HANDLE(intr_handle);
> +
> + if (index >= intr_handle->nb_intr) {
> + RTE_LOG(ERR, EAL, "Invalid size %d, max limit %d\n", index,
> + intr_handle->nb_intr);
Which "size"?
> + rte_errno = ERANGE;
> + goto fail;
> + }
> +
> + intr_handle->elist[index] = elist;
> +
> + return 0;
> +fail:
> + return -rte_errno;
> +}
> +
[...]
> +int rte_intr_vec_list_index_get(const struct rte_intr_handle *intr_handle,
> + int index)
> +{
> + CHECK_VALID_INTR_HANDLE(intr_handle);
> +
> + if (!intr_handle->intr_vec) {
> + RTE_LOG(ERR, EAL, "Intr vector list not allocated\n");
> + rte_errno = ENOTSUP;
> + goto fail;
> + }
Can be RTE_ASSERT(), because vec_list_size will be 0 in this case.
> +
> + if (index > intr_handle->vec_list_size) {
> + RTE_LOG(ERR, EAL, "Index %d greater than vec list size %d\n",
> + index, intr_handle->vec_list_size);
> + rte_errno = ERANGE;
> + goto fail;
> + }
> +
> + return intr_handle->intr_vec[index];
> +fail:
> + return -rte_errno;
> +}
> +
> +int rte_intr_vec_list_index_set(struct rte_intr_handle *intr_handle,
> + int index, int vec)
> +{
> + CHECK_VALID_INTR_HANDLE(intr_handle);
> +
> + if (!intr_handle->intr_vec) {
> + RTE_LOG(ERR, EAL, "Intr vector list not allocated\n");
> + rte_errno = ENOTSUP;
> + goto fail;
> + }
Same here.
> +
> + if (index > intr_handle->vec_list_size) {
> + RTE_LOG(ERR, EAL, "Index %d greater than vec list size %d\n",
> + index, intr_handle->vec_list_size);
> + rte_errno = ERANGE;
> + goto fail;
> + }
> +
> + intr_handle->intr_vec[index] = vec;
> +
> + return 0;
> +fail:
> + return -rte_errno;
> +}
> +
> +void rte_intr_vec_list_free(struct rte_intr_handle *intr_handle)
> +{
> + if (intr_handle) {
> + rte_free(intr_handle->intr_vec);
> + intr_handle->intr_vec = NULL;
intr_handle->vec_list_size = 0;
> + }
> +}
> +
[...]
> diff --git a/lib/eal/include/rte_interrupts.h b/lib/eal/include/rte_interrupts.h
[...]
> +/**
> + * @warning
> + * @b EXPERIMENTAL: this API may change without prior notice
> + *
> + * It allocates memory for interrupt instance. API auto detects if memory
> + * for the instance should be allocated using DPDK memory management library
> + * APIs or normal heap allocation, based on if DPDK memory subsystem is
> + * initialized and ready to be used.
This is too much implementation detail and not very specific from user PoV.
Suggestion:
After rte_eal_init() has finished, it allocates from the DPDK heap,
otherwise it allocates from normal heap. In particular,
it allocates from the normal heap during initial bus scanning.
See also my reply to v3 regarding allocation.
> + *
> + * Default memory allocation for event fds and epoll event array is done which
> + * can be realloced later as per the requirement.
BTW, why do this?
> + *
> + * This function should be called from application or driver, before calling any
> + * of the interrupt APIs.
> + *
> + * @return
> + * - On success, address of first interrupt handle.
Not "first".
> + * - On failure, NULL.
> + */
> +__rte_experimental
> +struct rte_intr_handle *
> +rte_intr_instance_alloc(void);
> +
> +/**
> + * @warning
> + * @b EXPERIMENTAL: this API may change without prior notice
> + *
> + * This API is used to free the memory allocated for interrupt handle resources.
> + *
> + * @param intr_handle
> + * Base address of interrupt handle array.
Not "array".
> + *
> + */
> +__rte_experimental
> +void
> +rte_intr_instance_free(struct rte_intr_handle *intr_handle);
> +
> +/**
> + * @warning
> + * @b EXPERIMENTAL: this API may change without prior notice
> + *
> + * This API is used to set the fd field of interrupt handle with user provided
> + * file descriptor.
> + *
> + * @param intr_handle
> + * pointer to the interrupt handle.
> + * @param fd
> + * file descriptor value provided by user.
> + *
> + * @return
> + * - On success, zero.
> + * - On failure, a negative value.
+ "and rte_errno is set" here and in other places.
[...]
> +/**
> + * @internal
> + * This API is used to populate interrupt handle, with src handler fields.
Comma is not needed.
> + *
> + * @param intr_handle
> + * Start address of interrupt handles
It's a single handle.
> + * @param src
> + * Source interrupt handle to be cloned.
> + *
> + * @return
> + * - On success, zero.
> + * - On failure, a negative value.
> + */
> +__rte_internal
> +int
> +rte_intr_instance_copy(struct rte_intr_handle *intr_handle,
> + const struct rte_intr_handle *src);
> +
[...]
> +/**
> + * @internal
> + * This API is used to set the event fd counter size field of interrupt handle
> + * with user provided efd counter size.
> + *
> + * @param intr_handle
> + * pointer to the interrupt handle.
> + * @param efd_counter_size
> + * size of efd counter, used for vdev
No need to mention vdev.
> + *
> + * @return
> + * - On success, zero.
> + * - On failure, a negative value.
> + */
> +__rte_internal
> +int
> +rte_intr_efd_counter_size_set(struct rte_intr_handle *intr_handle,
> + uint8_t efd_counter_size);
> +
[...]
> +/**
> + * @internal
> + * Freeing the memory allocated for interrupt vector list array.
"Freeing" -> "Frees"
> + *
> + * @param intr_handle
> + * pointer to the interrupt handle.
> + *
> + * @return
> + * - On success, zero
> + * - On failure, a negative value.
> + */
> +__rte_internal
> +void
> +rte_intr_vec_list_free(struct rte_intr_handle *intr_handle);
> +
> +/**
> + * @internal
> + * Reallocates the size efds and elist array based on size provided by user.
> + * By default efds and elist array are allocated with default size
> + * RTE_MAX_RXTX_INTR_VEC_ID on interrupt handle array creation. Later on device
> + * probe, device may have capability of more interrupts than
> + * RTE_MAX_RXTX_INTR_VEC_ID. Hence using this API, PMDs can reallocate the
"Hence" word is unexpected, I think it should be removed.
> + * arrays as per the max interrupts capability of device.
> + *
> + * @param intr_handle
> + * pointer to the interrupt handle.
> + * @param size
> + * efds and elist array size.
> + *
> + * @return
> + * - On success, zero
> + * - On failure, a negative value.
> + */
> +__rte_internal
> +int
> +rte_intr_event_list_update(struct rte_intr_handle *intr_handle, int size);
> +
[...]
^ permalink raw reply [flat|nested] 152+ messages in thread
* Re: [dpdk-dev] [EXT] Re: [PATCH v3 2/7] eal/interrupts: implement get set APIs
2021-10-20 15:30 ` Dmitry Kozlyuk
@ 2021-10-21 9:16 ` Harman Kalra
2021-10-21 12:33 ` Dmitry Kozlyuk
0 siblings, 1 reply; 152+ messages in thread
From: Harman Kalra @ 2021-10-21 9:16 UTC (permalink / raw)
To: Dmitry Kozlyuk
Cc: Stephen Hemminger, Thomas Monjalon, david.marchand, dev, Ray Kinsella
> -----Original Message-----
> From: Dmitry Kozlyuk <dmitry.kozliuk@gmail.com>
> Sent: Wednesday, October 20, 2021 9:01 PM
> To: Harman Kalra <hkalra@marvell.com>
> Cc: Stephen Hemminger <stephen@networkplumber.org>; Thomas
> Monjalon <thomas@monjalon.net>; david.marchand@redhat.com;
> dev@dpdk.org; Ray Kinsella <mdr@ashroe.eu>
> Subject: Re: [EXT] Re: [dpdk-dev] [PATCH v3 2/7] eal/interrupts: implement
> get set APIs
>
> > >
> > > > + /* Detect if DPDK malloc APIs are ready to be used. */
> > > > + mem_allocator = rte_malloc_is_ready();
> > > > + if (mem_allocator)
> > > > + intr_handle = rte_zmalloc(NULL, sizeof(struct
> > > rte_intr_handle),
> > > > + 0);
> > > > + else
> > > > + intr_handle = calloc(1, sizeof(struct rte_intr_handle));
> > >
> > > This is problematic way to do this.
> > > The reason to use rte_malloc vs malloc should be determined by usage.
> > >
> > > If the pointer will be shared between primary/secondary process then
> > > it has to be in hugepages (ie rte_malloc). If it is not shared then
> > > then use regular malloc.
> > >
> > > But what you have done is created a method which will be a latent
> > > bug for anyone using primary/secondary process.
> > >
> > > Either:
> > > intr_handle is not allowed to be used in secondary.
> > > Then always use malloc().
> > > Or.
> > > intr_handle can be used by both primary and secondary.
> > > Then always use rte_malloc().
> > > Any code path that allocates intr_handle before pool is
> > > ready is broken.
> >
> > Hi Stephan,
> >
> > Till V2, I implemented this API in a way where user of the API can
> > choose If he wants intr handle to be allocated using malloc or
> > rte_malloc by passing a flag arg to the rte_intr_instanc_alloc API.
> > User of the API will best know if the intr handle is to be shared with
> secondary or not.
> >
> > But after some discussions and suggestions from the community we
> > decided to drop that flag argument and auto detect on whether
> > rte_malloc APIs are ready to be used and thereafter make all further
> allocations via rte_malloc.
> > Currently alarm subsystem (or any driver doing allocation in
> > constructor) gets interrupt instance allocated using glibc malloc that
> > too because rte_malloc* is not ready by rte_eal_alarm_init(), while
> > all further consumers gets instance allocated via rte_malloc.
>
> Just as a comment, bus scanning is the real issue, not the alarms.
> Alarms could be initialized after the memory management (but it's irrelevant
> because their handle is not accessed from the outside).
> However, MM needs to know bus IOVA requirements to initialize, which is
> usually determined by at least bus device requirements.
>
> > I think this should not cause any issue in primary/secondary model as
> > all interrupt instance pointer will be shared.
>
> What do you mean? Aren't we discussing the issue that those allocated early
> are not shared?
>
> > Infact to avoid any surprises of primary/secondary not working we
> > thought of making all allocations via rte_malloc.
>
> I don't see why anyone would not make them shared.
> In order to only use rte_malloc(), we need:
> 1. In bus drivers, move handle allocation from scan to probe stage.
> 2. In EAL, move alarm initialization to after the MM.
> It all can be done later with v3 design---but there are out-of-tree drivers.
> We need to force them to make step 1 at some point.
> I see two options:
> a) Right now have an external API that only works with rte_malloc()
> and internal API with autodetection. Fix DPDK and drop internal API.
> b) Have external API with autodetection. Fix DPDK.
> At the next ABI breakage drop autodetection and libc-malloc.
>
> > David, Thomas, Dmitry, please add if I missed anything.
> >
> > Can we please conclude on this series APIs as API freeze deadline (rc1) is
> very near.
>
> I support v3 design with no options and autodetection, because that's the
> interface we want in the end.
> Implementation can be improved later.
Hi All,
I came across 2 issues introduced with the auto-detection mechanism.
1. In the case of the primary/secondary model: the primary application is started and makes lots of allocations via
rte_malloc*
Secondary side:
a. The secondary starts; in its "rte_eal_init()" it makes some allocations via rte_*, and for one of those allocations
a heap-expand request is made because the current memseg got exhausted. (malloc_heap_alloc_on_heap_id()->
alloc_more_mem_on_socket()->try_expand_heap())
b. A request for heap expand is sent to the primary. Please note the secondary holds the heap spinlock while making
the request. (malloc_heap_alloc_on_heap_id()->rte_spinlock_lock(&(heap->lock));)
Primary side:
a. The primary receives the request, installs a new hugepage and sets up the heap (handle_alloc_request()).
b. To inform all the secondaries about the new memseg, the primary sends a sync notice, for which it sets up an
alarm (rte_mp_request_async()->mp_request_async()).
c. Inside the alarm setup API, we register an interrupt callback.
d. Inside rte_intr_callback_register(), a new interrupt instance allocation is requested for "src->intr_handle".
e. Since memory management is detected as up, "rte_intr_instance_alloc()" calls "rte_zmalloc" to allocate
memory, and further inside "malloc_heap_alloc_on_heap_id()" the primary hits a deadlock
while taking the spinlock, because this spinlock is already held by the secondary.
2. "eal_flags_file_prefix_autotest" is failing because the spawned process by this tests are expected to cleanup
their hugepage traces from respective directories (eg /dev/hugepage).
a. Inside eal_cleanup, rte_free()->malloc_heap_free(), where element to be freed is added to the free list and
checked if nearby elements can be joined together and form a big free chunk (malloc_elem_free()).
b. If this free chunk is big enough than the hugepage size, respective hugepage can be uninstalled after making
sure no allocation from this hugepage exists. (malloc_heap_free()->malloc_heap_free_pages()->eal_memalloc_free_seg())
But because of interrupt allocations made for pci intr handles (used for VFIO) and other driver specific interrupt
handles are not cleaned up in "rte_eal_cleanup()", these hugepage files are not removed and test fails.
There could be more such issues; I think we should first fix DPDK itself.
1. Memory management should be made independent and should be the first thing to come up in rte_eal_init().
2. rte_eal_cleanup() should be exactly the opposite of rte_eal_init(); just like bus_probe, we should have bus_remove
to clean up all the memory allocations.
Regarding this IRQ series, I would like to fall back to our original design, i.e. rte_intr_instance_alloc() should take
an argument saying whether its memory should be allocated using glibc malloc or rte_malloc*. The decision
(malloc or rte_malloc) can be made based on whether, in the existing code, the interrupt handle is shared
(see the sketch after this list).
Eg. a. In the case of the alarm, intr_handle was a global entry and not confined to any structure, so it can be allocated from
normal malloc.
b. A PCI device had a static entry for intr_handle inside "struct rte_pci_device", and memory for struct rte_pci_device is
via normal malloc, so its intr_handle can also be malloc'ed.
c. Some drivers keep intr_handle inside their priv structure, and this priv structure gets allocated via rte_malloc, so
intr_handle can also be rte_malloc'ed.
Later, once DPDK is fixed up, this argument can be removed and all allocations can be done via the rte_malloc family without
any auto detection.
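For illustration, the allocator with such a flag could look like the sketch below. This mirrors the shape the v5 patch later in this thread adopts; the flag name RTE_INTR_INSTANCE_F_SHARED is the one agreed further down, and alloc_flag/nb_intr are the fields the v5 patch uses.
/* Sketch only: caller chooses where the instance lives. */
struct rte_intr_handle *
rte_intr_instance_alloc(uint32_t flags)
{
	struct rte_intr_handle *intr_handle;

	if (flags & RTE_INTR_INSTANCE_F_SHARED)
		/* Shared with secondary processes: hugepage memory. */
		intr_handle = rte_zmalloc(NULL, sizeof(*intr_handle), 0);
	else
		/* Process-local: plain heap, usable before MM is up. */
		intr_handle = calloc(1, sizeof(*intr_handle));
	if (intr_handle == NULL) {
		rte_errno = ENOMEM;
		return NULL;
	}

	intr_handle->nb_intr = RTE_MAX_RXTX_INTR_VEC_ID;
	intr_handle->alloc_flag = flags;
	return intr_handle;
}
With this, cases (a) and (b) above would pass the non-shared flag, while case (c) would pass RTE_INTR_INSTANCE_F_SHARED.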
David, Dmitry, Thomas, Stephen, please share your views....
Thanks
Harman
^ permalink raw reply [flat|nested] 152+ messages in thread
* Re: [dpdk-dev] [EXT] Re: [PATCH v3 2/7] eal/interrupts: implement get set APIs
2021-10-21 9:16 ` Harman Kalra
@ 2021-10-21 12:33 ` Dmitry Kozlyuk
2021-10-21 13:32 ` David Marchand
0 siblings, 1 reply; 152+ messages in thread
From: Dmitry Kozlyuk @ 2021-10-21 12:33 UTC (permalink / raw)
To: Harman Kalra
Cc: Stephen Hemminger, Thomas Monjalon, david.marchand, dev, Ray Kinsella
2021-10-21 09:16 (UTC+0000), Harman Kalra:
> > -----Original Message-----
> > From: Dmitry Kozlyuk <dmitry.kozliuk@gmail.com>
> > Sent: Wednesday, October 20, 2021 9:01 PM
> > To: Harman Kalra <hkalra@marvell.com>
> > Cc: Stephen Hemminger <stephen@networkplumber.org>; Thomas
> > Monjalon <thomas@monjalon.net>; david.marchand@redhat.com;
> > dev@dpdk.org; Ray Kinsella <mdr@ashroe.eu>
> > Subject: Re: [EXT] Re: [dpdk-dev] [PATCH v3 2/7] eal/interrupts: implement
> > get set APIs
> >
> > > >
> > > > > + /* Detect if DPDK malloc APIs are ready to be used. */
> > > > > + mem_allocator = rte_malloc_is_ready();
> > > > > + if (mem_allocator)
> > > > > + intr_handle = rte_zmalloc(NULL, sizeof(struct
> > > > rte_intr_handle),
> > > > > + 0);
> > > > > + else
> > > > > + intr_handle = calloc(1, sizeof(struct rte_intr_handle));
> > > >
> > > > This is problematic way to do this.
> > > > The reason to use rte_malloc vs malloc should be determined by usage.
> > > >
> > > > If the pointer will be shared between primary/secondary process then
> > > > it has to be in hugepages (ie rte_malloc). If it is not shared then
> > > > then use regular malloc.
> > > >
> > > > But what you have done is created a method which will be a latent
> > > > bug for anyone using primary/secondary process.
> > > >
> > > > Either:
> > > > intr_handle is not allowed to be used in secondary.
> > > > Then always use malloc().
> > > > Or.
> > > > intr_handle can be used by both primary and secondary.
> > > > Then always use rte_malloc().
> > > > Any code path that allocates intr_handle before pool is
> > > > ready is broken.
> > >
> > > Hi Stephan,
> > >
> > > Till V2, I implemented this API in a way where user of the API can
> > > choose If he wants intr handle to be allocated using malloc or
> > > rte_malloc by passing a flag arg to the rte_intr_instanc_alloc API.
> > > User of the API will best know if the intr handle is to be shared with
> > secondary or not.
> > >
> > > But after some discussions and suggestions from the community we
> > > decided to drop that flag argument and auto detect on whether
> > > rte_malloc APIs are ready to be used and thereafter make all further
> > allocations via rte_malloc.
> > > Currently alarm subsystem (or any driver doing allocation in
> > > constructor) gets interrupt instance allocated using glibc malloc that
> > > too because rte_malloc* is not ready by rte_eal_alarm_init(), while
> > > all further consumers gets instance allocated via rte_malloc.
> >
> > Just as a comment, bus scanning is the real issue, not the alarms.
> > Alarms could be initialized after the memory management (but it's irrelevant
> > because their handle is not accessed from the outside).
> > However, MM needs to know bus IOVA requirements to initialize, which is
> > usually determined by at least bus device requirements.
> >
> > > I think this should not cause any issue in primary/secondary model as
> > > all interrupt instance pointer will be shared.
> >
> > What do you mean? Aren't we discussing the issue that those allocated early
> > are not shared?
> >
> > > Infact to avoid any surprises of primary/secondary not working we
> > > thought of making all allocations via rte_malloc.
> >
> > I don't see why anyone would not make them shared.
> > In order to only use rte_malloc(), we need:
> > 1. In bus drivers, move handle allocation from scan to probe stage.
> > 2. In EAL, move alarm initialization to after the MM.
> > It all can be done later with v3 design---but there are out-of-tree drivers.
> > We need to force them to make step 1 at some point.
> > I see two options:
> > a) Right now have an external API that only works with rte_malloc()
> > and internal API with autodetection. Fix DPDK and drop internal API.
> > b) Have external API with autodetection. Fix DPDK.
> > At the next ABI breakage drop autodetection and libc-malloc.
> >
> > > David, Thomas, Dmitry, please add if I missed anything.
> > >
> > > Can we please conclude on this series APIs as API freeze deadline (rc1) is
> > very near.
> >
> > I support v3 design with no options and autodetection, because that's the
> > interface we want in the end.
> > Implementation can be improved later.
>
> Hi All,
>
> I came across 2 issues introduced with auto detection mechanism.
> 1. In case of primary secondary model. Primary application is started which makes lots of allocations via
> rte_malloc*
>
> Secondary side:
> a. Secondary starts, in its "rte_eal_init()" it makes some allocation via rte_*, and in one of the allocation
> request for heap expand is made as current memseg got exhausted. (malloc_heap_alloc_on_heap_id ()->
> alloc_more_mem_on_socket()->try_expand_heap())
> b. A request to primary for heap expand is sent. Please note secondary holds the spinlock while making
> the request. (malloc_heap_alloc_on_heap_id ()->rte_spinlock_lock(&(heap->lock));)
>
> Primary side:
> a. Primary receives the request, install a new hugepage and setups up the heap (handle_alloc_request())
> b. To inform all the secondaries about the new memseg, primary sends a sync notice where it sets up an
> alarm (rte_mp_request_async ()->mp_request_async()).
> c. Inside alarm setup API, we register an interrupt callback.
> d. Inside rte_intr_callback_register(), a new interrupt instance allocation is requested for "src->intr_handle"
> e. Since memory management is detected as up, inside "rte_intr_instance_alloc()", call to "rte_zmalloc" for
> allocating memory and further inside "malloc_heap_alloc_on_heap_id()", primary will experience a deadlock
> while taking up the spinlock because this spinlock is already hold by secondary.
>
>
> 2. "eal_flags_file_prefix_autotest" is failing because the spawned process by this tests are expected to cleanup
> their hugepage traces from respective directories (eg /dev/hugepage).
> a. Inside eal_cleanup, rte_free()->malloc_heap_free(), where element to be freed is added to the free list and
> checked if nearby elements can be joined together and form a big free chunk (malloc_elem_free()).
> b. If this free chunk is big enough than the hugepage size, respective hugepage can be uninstalled after making
> sure no allocation from this hugepage exists. (malloc_heap_free()->malloc_heap_free_pages()->eal_memalloc_free_seg())
>
> But because of interrupt allocations made for pci intr handles (used for VFIO) and other driver specific interrupt
> handles are not cleaned up in "rte_eal_cleanup()", these hugepage files are not removed and test fails.
Sad to hear. But it's a great and thorough analysis.
> There could be more such issues, I think we should firstly fix the DPDK.
> 1. Memory management should be made independent and should be the first thing to come up in rte_eal_init()
As I have explained, buses must be able to report IOVA requirement
at this point (`get_iommu_class()` bus method).
Either `scan()` must complete before that
or `get_iommu_class()` must be able to work before `scan()` is called.
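For context, the callbacks involved are part of struct rte_bus; below is a simplified sketch showing only the members relevant to this ordering problem (the real structure in rte_bus.h has more members):
struct rte_bus {
	const char *name;                          /* bus name */
	rte_bus_scan_t scan;                       /* enumerate devices on the bus */
	rte_bus_probe_t probe;                     /* match devices to drivers and init them */
	rte_bus_get_iommu_class_t get_iommu_class; /* report the IOVA mode the bus needs */
};
MM initialization queries get_iommu_class(), which today relies on information gathered during scan(), hence the constraint above.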
> 2. rte_eal_cleanup() should be exactly opposite to rte_eal_init(), just like bus_probe, we should have bus_remove
> to clean up all the memory allocations.
Yes. For most buses it will be just "unplug each device".
In fact, EAL could do it with `unplug()`, but it is not mandatory.
>
> Regarding this IRQ series, I would like to fall back to our original design i.e. rte_intr_instance_alloc() should take
> an argument whether its memory should be allocated using glibc malloc or rte_malloc*.
Seems there's no other option to make it on time.
> Decision for allocation
> (malloc or rte_malloc) can be made on fact that in the existing code is the interrupt handle is shared?
> Eg. a. In case of alarm intr_handle was global entry and not confined to any structure, so this can be allocated from
> normal malloc.
> b. PCI device, had static entry for intr_handle inside "struct rte_pci_device" and memory for struct rte_pci_device is
> via normal malloc, so it intr_handle can also be malloc'ed
> c. Some driver with intr_handle inside its priv structure, and this priv structure gets allocated via rte_malloc, so
> Intr_handle can also be rte_malloc.
>
> Later once DPDK is fixed up, this argument can be removed and all allocations can be via rte_malloc family without
> any auto detection.
^ permalink raw reply [flat|nested] 152+ messages in thread
* Re: [dpdk-dev] [EXT] Re: [PATCH v3 2/7] eal/interrupts: implement get set APIs
2021-10-21 12:33 ` Dmitry Kozlyuk
@ 2021-10-21 13:32 ` David Marchand
2021-10-21 16:05 ` Harman Kalra
0 siblings, 1 reply; 152+ messages in thread
From: David Marchand @ 2021-10-21 13:32 UTC (permalink / raw)
To: Dmitry Kozlyuk, Harman Kalra
Cc: Stephen Hemminger, Thomas Monjalon, dev, Ray Kinsella
On Thu, Oct 21, 2021 at 2:33 PM Dmitry Kozlyuk <dmitry.kozliuk@gmail.com> wrote:
> > Hi All,
> >
> > I came across 2 issues introduced with auto detection mechanism.
> > 1. In case of primary secondary model. Primary application is started which makes lots of allocations via
> > rte_malloc*
> >
> > Secondary side:
> > a. Secondary starts, in its "rte_eal_init()" it makes some allocation via rte_*, and in one of the allocation
> > request for heap expand is made as current memseg got exhausted. (malloc_heap_alloc_on_heap_id ()->
> > alloc_more_mem_on_socket()->try_expand_heap())
> > b. A request to primary for heap expand is sent. Please note secondary holds the spinlock while making
> > the request. (malloc_heap_alloc_on_heap_id ()->rte_spinlock_lock(&(heap->lock));)
> >
> > Primary side:
> > a. Primary receives the request, install a new hugepage and setups up the heap (handle_alloc_request())
> > b. To inform all the secondaries about the new memseg, primary sends a sync notice where it sets up an
> > alarm (rte_mp_request_async ()->mp_request_async()).
> > c. Inside alarm setup API, we register an interrupt callback.
> > d. Inside rte_intr_callback_register(), a new interrupt instance allocation is requested for "src->intr_handle"
> > e. Since memory management is detected as up, inside "rte_intr_instance_alloc()", call to "rte_zmalloc" for
> > allocating memory and further inside "malloc_heap_alloc_on_heap_id()", primary will experience a deadlock
> > while taking up the spinlock because this spinlock is already hold by secondary.
> >
> >
> > 2. "eal_flags_file_prefix_autotest" is failing because the spawned process by this tests are expected to cleanup
> > their hugepage traces from respective directories (eg /dev/hugepage).
> > a. Inside eal_cleanup, rte_free()->malloc_heap_free(), where element to be freed is added to the free list and
> > checked if nearby elements can be joined together and form a big free chunk (malloc_elem_free()).
> > b. If this free chunk is big enough than the hugepage size, respective hugepage can be uninstalled after making
> > sure no allocation from this hugepage exists. (malloc_heap_free()->malloc_heap_free_pages()->eal_memalloc_free_seg())
> >
> > But because of interrupt allocations made for pci intr handles (used for VFIO) and other driver specific interrupt
> > handles are not cleaned up in "rte_eal_cleanup()", these hugepage files are not removed and test fails.
>
> Sad to hear. But it's a great and thorough analysis.
>
> > There could be more such issues, I think we should firstly fix the DPDK.
> > 1. Memory management should be made independent and should be the first thing to come up in rte_eal_init()
>
> As I have explained, buses must be able to report IOVA requirement
> at this point (`get_iommu_class()` bus method).
> Either `scan()` must complete before that
> or `get_iommu_class()` must be able to work before `scan()` is called.
>
> > 2. rte_eal_cleanup() should be exactly opposite to rte_eal_init(), just like bus_probe, we should have bus_remove
> > to clean up all the memory allocations.
>
> Yes. For most buses it will be just "unplug each device".
> In fact, EAL could do it with `unplug()`, but it is not mandatory.
>
> >
> > Regarding this IRQ series, I would like to fall back to our original design i.e. rte_intr_instance_alloc() should take
> > an argument whether its memory should be allocated using glibc malloc or rte_malloc*.
>
> Seems there's no other option to make it on time.
- Sorry, my memory is too short, did we describe where we need to
share rte_intr_handle objects?
I spent some time looking at uses of rte_intr_handle objects.
In many cases intr_handle objects are referenced in malloc() objects.
The cases where rte_intr_handle objects are shared are in per-device private
bits in drivers.
An intr_handle often contains fds.
For them to be used in mp setups, there needs to be a big machinery
with SCM_RIGHTS, but I see only 3 drivers which actually reference
this.
So if intr_handle fds are accessed by multiple processes, their
content probably makes no sense wrt fds.
From these two hints, I think we are going backwards, and the main
use case is that those rte_intr_instance objects are not used in mp.
I even think they are never accessed from other processes.
But I am not sure.
- Seeing how time is short for rc1, I am ok with
rte_intr_instance_alloc() taking a flag argument.
And we can still go back on this API later.
Can we agree on the flag name?
rte_malloc()'s interest is that it makes objects shared for mp, so how
about RTE_INTR_INSTANCE_F_SHARED?
--
David Marchand
^ permalink raw reply [flat|nested] 152+ messages in thread
* Re: [dpdk-dev] [EXT] Re: [PATCH v3 2/7] eal/interrupts: implement get set APIs
2021-10-21 13:32 ` David Marchand
@ 2021-10-21 16:05 ` Harman Kalra
0 siblings, 0 replies; 152+ messages in thread
From: Harman Kalra @ 2021-10-21 16:05 UTC (permalink / raw)
To: David Marchand, Dmitry Kozlyuk
Cc: Stephen Hemminger, Thomas Monjalon, dev, Ray Kinsella
Hi Dmitry, David
Please find responses inline.
> -----Original Message-----
> From: David Marchand <david.marchand@redhat.com>
> Sent: Thursday, October 21, 2021 7:03 PM
> To: Dmitry Kozlyuk <dmitry.kozliuk@gmail.com>; Harman Kalra
> <hkalra@marvell.com>
> Cc: Stephen Hemminger <stephen@networkplumber.org>; Thomas
> Monjalon <thomas@monjalon.net>; dev@dpdk.org; Ray Kinsella
> <mdr@ashroe.eu>
> Subject: Re: [EXT] Re: [dpdk-dev] [PATCH v3 2/7] eal/interrupts: implement
> get set APIs
>
> On Thu, Oct 21, 2021 at 2:33 PM Dmitry Kozlyuk <dmitry.kozliuk@gmail.com>
> wrote:
> > > Hi All,
> > >
> > > I came across 2 issues introduced with auto detection mechanism.
> > > 1. In case of primary secondary model. Primary application is
> > > started which makes lots of allocations via
> > > rte_malloc*
> > >
> > > Secondary side:
> > > a. Secondary starts, in its "rte_eal_init()" it makes some
> > > allocation via rte_*, and in one of the allocation request for heap expand
> is made as current memseg got exhausted. (malloc_heap_alloc_on_heap_id
> ()->
> > > alloc_more_mem_on_socket()->try_expand_heap())
> > > b. A request to primary for heap expand is sent. Please note
> > > secondary holds the spinlock while making the request.
> > > (malloc_heap_alloc_on_heap_id ()->rte_spinlock_lock(&(heap->lock));)
> > >
> > > Primary side:
> > > a. Primary receives the request, install a new hugepage and setups up
> the heap (handle_alloc_request())
> > > b. To inform all the secondaries about the new memseg, primary
> > > sends a sync notice where it sets up an alarm (rte_mp_request_async ()-
> >mp_request_async()).
> > > c. Inside alarm setup API, we register an interrupt callback.
> > > d. Inside rte_intr_callback_register(), a new interrupt instance allocation
> is requested for "src->intr_handle"
> > > e. Since memory management is detected as up, inside
> > > "rte_intr_instance_alloc()", call to "rte_zmalloc" for allocating
> > > memory and further inside "malloc_heap_alloc_on_heap_id()", primary
> will experience a deadlock while taking up the spinlock because this spinlock
> is already hold by secondary.
> > >
> > >
> > > 2. "eal_flags_file_prefix_autotest" is failing because the spawned
> > > process by this tests are expected to cleanup their hugepage traces from
> respective directories (eg /dev/hugepage).
> > > a. Inside eal_cleanup, rte_free()->malloc_heap_free(), where element
> > > to be freed is added to the free list and checked if nearby elements can
> be joined together and form a big free chunk (malloc_elem_free()).
> > > b. If this free chunk is big enough than the hugepage size,
> > > respective hugepage can be uninstalled after making sure no
> > > allocation from this hugepage exists.
> > > (malloc_heap_free()->malloc_heap_free_pages()-
> >eal_memalloc_free_seg
> > > ())
> > >
> > > But because of interrupt allocations made for pci intr handles (used
> > > for VFIO) and other driver specific interrupt handles are not cleaned up in
> "rte_eal_cleanup()", these hugepage files are not removed and test fails.
> >
> > Sad to hear. But it's a great and thorough analysis.
Sad but a good learning; at least we identified areas to be worked upon.
> >
> > > There could be more such issues, I think we should firstly fix the DPDK.
> > > 1. Memory management should be made independent and should be the
> > > first thing to come up in rte_eal_init()
> >
> > As I have explained, buses must be able to report IOVA requirement at
> > this point (`get_iommu_class()` bus method).
> > Either `scan()` must complete before that or `get_iommu_class()` must
> > be able to work before `scan()` is called.
> >
> > > 2. rte_eal_cleanup() should be exactly opposite to rte_eal_init(),
> > > just like bus_probe, we should have bus_remove to clean up all the
> memory allocations.
> >
> > Yes. For most buses it will be just "unplug each device".
> > In fact, EAL could do it with `unplug()`, but it is not mandatory.
I implemented a rough bus_remove which was similar to unplug, and faced
some issues. I am not sure, but some drivers might not support hotplug; for
them unplug might be a challenge.
> >
> > >
> > > Regarding this IRQ series, I would like to fall back to our original
> > > design i.e. rte_intr_instance_alloc() should take an argument whether its
> memory should be allocated using glibc malloc or rte_malloc*.
> >
> > Seems there's no other option to make it on time.
>
> - Sorry, my memory is too short, did we describe where we need to share
> rte_intr_handle objects?
Intr handle objects are shared in very few drivers.
>
> I spent some time looking at uses of rte_intr_handle objects.
>
> In many cases intr_handle objects are referenced in malloc() objects.
> The cases where rte_intr_handle are shared is in per device private bits in
> drivers.
>
Yes, in the V2 design I allocated memory using glibc malloc for such instances by
passing the respective flag.
> A intr_handle often contains fds.
> For them to be used in mp setups, there needs to be a big machinery with
> SCM_RIGHTS but I see only 3 drivers which actually reference this.
> So if intr_handle fds are accessed by multiple processes, their content
> probably makes no sense wrt fds.
Those drivers will allocate using the SHARED flag.
>
>
> From these two hints, I think we are going backwards, and the main usecase
> is that those rte_intr_instance objects are not used in mp.
> I even think they are never accessed from other processes.
> But I am not sure.
>
>
> - Seeing how time is short for rc1, I am ok with
> rte_intr_instance_alloc() taking a flag argument.
> And we can still go back on this API later.
Sure, I will revert to the original design and send V5 by tomorrow.
>
> Can we agree on the flag name?
> rte_malloc() interest is that it makes objects shared for mp, so how about
> RTE_INTR_INSTANCE_F_SHARED ?
Yeah, it sounds good:
RTE_INTR_INSTANCE_F_SHARED - rte_malloc
RTE_INTR_INSTANCE_F_PRIVATE - malloc
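As a usage sketch with the agreed shared flag (struct example_priv and example_setup_irq() are illustrative names; the accessors and RTE_INTR_HANDLE_EXT are from this series and the existing handle-type enum):
/* Hypothetical driver-private data holding a handle shared with
 * secondary processes. */
struct example_priv {
	struct rte_intr_handle *intr_handle;
};

static int
example_setup_irq(struct example_priv *priv, int fd)
{
	priv->intr_handle = rte_intr_instance_alloc(RTE_INTR_INSTANCE_F_SHARED);
	if (priv->intr_handle == NULL)
		return -rte_errno;

	if (rte_intr_fd_set(priv->intr_handle, fd) != 0 ||
	    rte_intr_type_set(priv->intr_handle, RTE_INTR_HANDLE_EXT) != 0) {
		rte_intr_instance_free(priv->intr_handle);
		priv->intr_handle = NULL;
		return -rte_errno;
	}

	return 0;
}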
Thanks David, Dmitry, Thomas, Stephen for reviewing the series thoroughly and providing
inputs to improve it.
Thanks
Harman
>
>
> --
> David Marchand
^ permalink raw reply [flat|nested] 152+ messages in thread
* [dpdk-dev] [PATCH v5 0/6] make rte_intr_handle internal
2021-08-26 14:57 [dpdk-dev] [RFC 0/7] make rte_intr_handle internal Harman Kalra
` (11 preceding siblings ...)
2021-10-19 18:35 ` [dpdk-dev] [PATCH v4 0/7] make rte_intr_handle internal Harman Kalra
@ 2021-10-22 20:49 ` Harman Kalra
2021-10-22 20:49 ` [dpdk-dev] [PATCH v5 1/6] eal/interrupts: implement get set APIs Harman Kalra
` (9 more replies)
12 siblings, 10 replies; 152+ messages in thread
From: Harman Kalra @ 2021-10-22 20:49 UTC (permalink / raw)
To: dev; +Cc: david.marchand, dmitry.kozliuk, mdr, thomas, Harman Kalra
Moving struct rte_intr_handle as an internal structure to
avoid any ABI breakages in future. Since this structure defines
some static arrays and changing respective macros breaks the ABI.
Eg:
Currently RTE_MAX_RXTX_INTR_VEC_ID imposes a limit of maximum 512
MSI-X interrupts that can be defined for a PCI device, while PCI
specification allows maximum 2048 MSI-X interrupts that can be used.
If some PCI device requires more than 512 vectors, either change the
RTE_MAX_RXTX_INTR_VEC_ID limit or dynamically allocate based on
PCI device MSI-X size at probe time. Either way it's an ABI breakage.
Change already included in 21.11 ABI improvement spreadsheet (item 42):
https://docs.google.com/spreadsheets/d/1betlC000ua5SsSiJIcC54mCCCJnW6voH5Dqv9UxeyfE/edit#gid=0
This series makes struct rte_intr_handle totally opaque to the outside
world by wrapping it inside a .c file and providing get set wrapper APIs
to read or manipulate its fields. Any changes to be made to any of the
fields should be done via these get set APIs.
Introduced a new eal_common_interrupts.c where all these APIs are defined
and also hides struct rte_intr_handle definition.
Details on each patch of the series:
Patch 1: eal/interrupts: implement get set APIs
This patch provides prototypes and implementation of all the new
get set APIs. Alloc APIs are implemented to allocate memory for
interrupt handle instance. Currently most of the drivers define the
interrupt handle instance as static, but now it can't be static as the
size of rte_intr_handle is unknown to all the drivers. Drivers are
expected to allocate interrupt instances during initialization
and free these instances during cleanup phase.
This patch also rearranges the headers related to interrupt
framework. Epoll related definitions prototypes are moved into a
new header i.e. rte_epoll.h and APIs defined in rte_eal_interrupts.h
which were driver specific are moved to rte_interrupts.h (as it was
anyway accessible and used outside the DPDK library). Later in the series
rte_eal_interrupts.h is removed.
Patch 2: eal/interrupts: avoid direct access to interrupt handle
Modifying the interrupt framework for linux and freebsd to use these
get set alloc APIs as per requirement and avoid accessing the fields
directly.
Patch 3: test/interrupt: apply get set interrupt handle APIs
Updating interrupt test suite to use interrupt handle APIs.
Patch 4: drivers: remove direct access to interrupt handle fields
Modifying all the drivers and libraries which are currently directly
accessing the interrupt handle fields. Drivers are expected to
allocate the interrupt instance, use the get/set APIs with the allocated
interrupt handle and free it on cleanup.
Patch 5: eal/interrupts: make interrupt handle structure opaque
In this patch rte_eal_interrupts.h is removed and the struct rte_intr_handle
definition is moved to a .c file to make it completely opaque. As part of
interrupt handle allocation, array like efds and elist(which are currently
static) are dynamically allocated with default size
(RTE_MAX_RXTX_INTR_VEC_ID). Later these arrays can be reallocated as per
device requirement using new API rte_intr_handle_event_list_update().
Eg, on PCI device probing MSIX size can be queried and these arrays can
be reallocated accordingly.
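For example, a PCI driver could do something along these lines after querying the device's MSI-X size (a sketch; the function name follows the prototype shown in the v4 review earlier, rte_intr_event_list_update(), and msix_count is a placeholder):
/* Sketch: grow efds/elist from the default RTE_MAX_RXTX_INTR_VEC_ID
 * to the MSI-X vector count discovered at probe time. */
static int
example_resize_intr_lists(struct rte_intr_handle *intr_handle, int msix_count)
{
	if (rte_intr_event_list_update(intr_handle, msix_count) != 0)
		return -rte_errno;
	return 0;
}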
Patch 6: eal/alarm: introduce alarm fini routine
Introducing alarm fini routine, as the memory allocated for alarm interrupt
instance can be freed in alarm fini.
Testing performed:
1. Validated the series by running interrupts and alarm test suite.
2. Validated l3fwd-power functionality with octeontx2 and i40e Intel cards,
where interrupts are expected on packet arrival.
v1:
* Fixed freebsd compilation failure
* Fixed seg fault in case of memif
v2:
* Merged the prototype and implementation patch to 1.
* Restricting allocation of single interrupt instance.
* Removed base APIs, as they were exposing internally
allocated memory information.
* Fixed some memory leak issues.
* Marked some library specific APIs as internal.
v3:
* Removed flag from instance alloc API, rather auto detect
if memory should be allocated using glibc malloc APIs or
rte_malloc*
* Added APIs for get/set windows handle.
* Defined macros for repeated checks.
v4:
* Rectified some typo in the APIs documentation.
* Better names for some internal variables.
v5:
* Reverted back to passing flag to instance alloc API, as
with auto detect some multiprocess issues existing in the
library were causing tests failure.
* Rebased to top of tree.
Harman Kalra (6):
eal/interrupts: implement get set APIs
eal/interrupts: avoid direct access to interrupt handle
test/interrupt: apply get set interrupt handle APIs
drivers: remove direct access to interrupt handle
eal/interrupts: make interrupt handle structure opaque
eal/alarm: introduce alarm fini routine
MAINTAINERS | 1 +
app/test/test_interrupts.c | 163 +++--
drivers/baseband/acc100/rte_acc100_pmd.c | 18 +-
.../fpga_5gnr_fec/rte_fpga_5gnr_fec.c | 21 +-
drivers/baseband/fpga_lte_fec/fpga_lte_fec.c | 21 +-
drivers/bus/auxiliary/auxiliary_common.c | 2 +
drivers/bus/auxiliary/linux/auxiliary.c | 10 +
drivers/bus/auxiliary/rte_bus_auxiliary.h | 2 +-
drivers/bus/dpaa/dpaa_bus.c | 28 +-
drivers/bus/dpaa/rte_dpaa_bus.h | 2 +-
drivers/bus/fslmc/fslmc_bus.c | 16 +-
drivers/bus/fslmc/fslmc_vfio.c | 32 +-
drivers/bus/fslmc/portal/dpaa2_hw_dpio.c | 20 +-
drivers/bus/fslmc/portal/dpaa2_hw_pvt.h | 2 +-
drivers/bus/fslmc/rte_fslmc.h | 2 +-
drivers/bus/ifpga/ifpga_bus.c | 15 +-
drivers/bus/ifpga/rte_bus_ifpga.h | 2 +-
drivers/bus/pci/bsd/pci.c | 21 +-
drivers/bus/pci/linux/pci.c | 4 +-
drivers/bus/pci/linux/pci_uio.c | 73 +-
drivers/bus/pci/linux/pci_vfio.c | 115 ++-
drivers/bus/pci/pci_common.c | 29 +-
drivers/bus/pci/pci_common_uio.c | 21 +-
drivers/bus/pci/rte_bus_pci.h | 4 +-
drivers/bus/vmbus/linux/vmbus_bus.c | 6 +
drivers/bus/vmbus/linux/vmbus_uio.c | 37 +-
drivers/bus/vmbus/rte_bus_vmbus.h | 2 +-
drivers/bus/vmbus/vmbus_common_uio.c | 24 +-
drivers/common/cnxk/roc_cpt.c | 8 +-
drivers/common/cnxk/roc_dev.c | 14 +-
drivers/common/cnxk/roc_irq.c | 108 +--
drivers/common/cnxk/roc_nix_inl_dev_irq.c | 8 +-
drivers/common/cnxk/roc_nix_irq.c | 36 +-
drivers/common/cnxk/roc_npa.c | 2 +-
drivers/common/cnxk/roc_platform.h | 49 +-
drivers/common/cnxk/roc_sso.c | 4 +-
drivers/common/cnxk/roc_tim.c | 4 +-
drivers/common/octeontx2/otx2_dev.c | 14 +-
drivers/common/octeontx2/otx2_irq.c | 117 +--
.../octeontx2/otx2_cryptodev_hw_access.c | 4 +-
drivers/event/octeontx2/otx2_evdev_irq.c | 12 +-
drivers/mempool/octeontx2/otx2_mempool.c | 2 +-
drivers/net/atlantic/atl_ethdev.c | 20 +-
drivers/net/avp/avp_ethdev.c | 8 +-
drivers/net/axgbe/axgbe_ethdev.c | 12 +-
drivers/net/axgbe/axgbe_mdio.c | 6 +-
drivers/net/bnx2x/bnx2x_ethdev.c | 10 +-
drivers/net/bnxt/bnxt_ethdev.c | 33 +-
drivers/net/bnxt/bnxt_irq.c | 4 +-
drivers/net/dpaa/dpaa_ethdev.c | 47 +-
drivers/net/dpaa2/dpaa2_ethdev.c | 10 +-
drivers/net/e1000/em_ethdev.c | 23 +-
drivers/net/e1000/igb_ethdev.c | 79 +--
drivers/net/ena/ena_ethdev.c | 35 +-
drivers/net/enic/enic_main.c | 26 +-
drivers/net/failsafe/failsafe.c | 23 +-
drivers/net/failsafe/failsafe_intr.c | 43 +-
drivers/net/failsafe/failsafe_ops.c | 19 +-
drivers/net/failsafe/failsafe_private.h | 2 +-
drivers/net/fm10k/fm10k_ethdev.c | 32 +-
drivers/net/hinic/hinic_pmd_ethdev.c | 10 +-
drivers/net/hns3/hns3_ethdev.c | 57 +-
drivers/net/hns3/hns3_ethdev_vf.c | 64 +-
drivers/net/hns3/hns3_rxtx.c | 2 +-
drivers/net/i40e/i40e_ethdev.c | 53 +-
drivers/net/iavf/iavf_ethdev.c | 42 +-
drivers/net/iavf/iavf_vchnl.c | 4 +-
drivers/net/ice/ice_dcf.c | 10 +-
drivers/net/ice/ice_dcf_ethdev.c | 21 +-
drivers/net/ice/ice_ethdev.c | 49 +-
drivers/net/igc/igc_ethdev.c | 45 +-
drivers/net/ionic/ionic_ethdev.c | 17 +-
drivers/net/ixgbe/ixgbe_ethdev.c | 66 +-
drivers/net/memif/memif_socket.c | 111 ++-
drivers/net/memif/memif_socket.h | 4 +-
drivers/net/memif/rte_eth_memif.c | 61 +-
drivers/net/memif/rte_eth_memif.h | 2 +-
drivers/net/mlx4/mlx4.c | 19 +-
drivers/net/mlx4/mlx4.h | 2 +-
drivers/net/mlx4/mlx4_intr.c | 47 +-
drivers/net/mlx5/linux/mlx5_os.c | 53 +-
drivers/net/mlx5/linux/mlx5_socket.c | 25 +-
drivers/net/mlx5/mlx5.h | 6 +-
drivers/net/mlx5/mlx5_rxq.c | 42 +-
drivers/net/mlx5/mlx5_trigger.c | 4 +-
drivers/net/mlx5/mlx5_txpp.c | 26 +-
drivers/net/netvsc/hn_ethdev.c | 4 +-
drivers/net/nfp/nfp_common.c | 34 +-
drivers/net/nfp/nfp_ethdev.c | 13 +-
drivers/net/nfp/nfp_ethdev_vf.c | 13 +-
drivers/net/ngbe/ngbe_ethdev.c | 29 +-
drivers/net/octeontx2/otx2_ethdev_irq.c | 35 +-
drivers/net/qede/qede_ethdev.c | 16 +-
drivers/net/sfc/sfc_intr.c | 30 +-
drivers/net/tap/rte_eth_tap.c | 36 +-
drivers/net/tap/rte_eth_tap.h | 2 +-
drivers/net/tap/tap_intr.c | 32 +-
drivers/net/thunderx/nicvf_ethdev.c | 12 +
drivers/net/thunderx/nicvf_struct.h | 2 +-
drivers/net/txgbe/txgbe_ethdev.c | 38 +-
drivers/net/txgbe/txgbe_ethdev_vf.c | 33 +-
drivers/net/vhost/rte_eth_vhost.c | 76 +-
drivers/net/virtio/virtio_ethdev.c | 21 +-
.../net/virtio/virtio_user/virtio_user_dev.c | 48 +-
drivers/net/vmxnet3/vmxnet3_ethdev.c | 43 +-
drivers/raw/ifpga/ifpga_rawdev.c | 62 +-
drivers/raw/ntb/ntb.c | 9 +-
.../regex/octeontx2/otx2_regexdev_hw_access.c | 4 +-
drivers/vdpa/ifc/ifcvf_vdpa.c | 5 +-
drivers/vdpa/mlx5/mlx5_vdpa.c | 10 +
drivers/vdpa/mlx5/mlx5_vdpa.h | 4 +-
drivers/vdpa/mlx5/mlx5_vdpa_event.c | 22 +-
drivers/vdpa/mlx5/mlx5_vdpa_virtq.c | 45 +-
lib/bbdev/rte_bbdev.c | 4 +-
lib/eal/common/eal_common_interrupts.c | 588 +++++++++++++++
lib/eal/common/eal_private.h | 11 +
lib/eal/common/meson.build | 1 +
lib/eal/freebsd/eal.c | 1 +
lib/eal/freebsd/eal_alarm.c | 53 +-
lib/eal/freebsd/eal_interrupts.c | 112 ++-
lib/eal/include/meson.build | 2 +-
lib/eal/include/rte_eal_interrupts.h | 269 -------
lib/eal/include/rte_eal_trace.h | 24 +-
lib/eal/include/rte_epoll.h | 118 ++++
lib/eal/include/rte_interrupts.h | 668 +++++++++++++++++-
lib/eal/linux/eal.c | 1 +
lib/eal/linux/eal_alarm.c | 37 +-
lib/eal/linux/eal_dev.c | 63 +-
lib/eal/linux/eal_interrupts.c | 303 +++++---
lib/eal/version.map | 46 +-
lib/ethdev/ethdev_pci.h | 2 +-
lib/ethdev/rte_ethdev.c | 14 +-
132 files changed, 3631 insertions(+), 1713 deletions(-)
create mode 100644 lib/eal/common/eal_common_interrupts.c
delete mode 100644 lib/eal/include/rte_eal_interrupts.h
create mode 100644 lib/eal/include/rte_epoll.h
--
2.18.0
^ permalink raw reply [flat|nested] 152+ messages in thread
* [dpdk-dev] [PATCH v5 1/6] eal/interrupts: implement get set APIs
2021-10-22 20:49 ` [dpdk-dev] [PATCH v5 0/6] make rte_intr_handle internal Harman Kalra
@ 2021-10-22 20:49 ` Harman Kalra
2021-10-22 23:33 ` Dmitry Kozlyuk
2021-10-22 20:49 ` [dpdk-dev] [PATCH v5 2/6] eal/interrupts: avoid direct access to interrupt handle Harman Kalra
` (8 subsequent siblings)
9 siblings, 1 reply; 152+ messages in thread
From: Harman Kalra @ 2021-10-22 20:49 UTC (permalink / raw)
To: dev, Thomas Monjalon, Harman Kalra, Ray Kinsella
Cc: david.marchand, dmitry.kozliuk
Prototype/Implement get/set APIs for interrupt handle fields.
Users won't be able to access any of the interrupt handle fields
directly; they should use these get/set APIs to access/manipulate
them.
The internal interrupt header, i.e. rte_eal_interrupts.h, is rearranged:
the APIs it defined are moved to rte_interrupts.h and the epoll specific
definitions are moved to a new header, rte_epoll.h.
Later in the series rte_eal_interrupts.h will be removed.
Signed-off-by: Harman Kalra <hkalra@marvell.com>
Acked-by: Ray Kinsella <mdr@ashroe.eu>
---
MAINTAINERS | 1 +
lib/eal/common/eal_common_interrupts.c | 421 ++++++++++++++++
lib/eal/common/meson.build | 1 +
lib/eal/include/meson.build | 1 +
lib/eal/include/rte_eal_interrupts.h | 209 +-------
lib/eal/include/rte_epoll.h | 118 +++++
lib/eal/include/rte_interrupts.h | 648 ++++++++++++++++++++++++-
lib/eal/version.map | 46 +-
8 files changed, 1232 insertions(+), 213 deletions(-)
create mode 100644 lib/eal/common/eal_common_interrupts.c
create mode 100644 lib/eal/include/rte_epoll.h
diff --git a/MAINTAINERS b/MAINTAINERS
index 04ea23a04a..d2950400d2 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -211,6 +211,7 @@ F: app/test/test_memzone.c
Interrupt Subsystem
M: Harman Kalra <hkalra@marvell.com>
+F: lib/eal/include/rte_epoll.h
F: lib/eal/*/*interrupts.*
F: app/test/test_interrupts.c
diff --git a/lib/eal/common/eal_common_interrupts.c b/lib/eal/common/eal_common_interrupts.c
new file mode 100644
index 0000000000..618782e9cc
--- /dev/null
+++ b/lib/eal/common/eal_common_interrupts.c
@@ -0,0 +1,421 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2021 Marvell.
+ */
+
+#include <stdlib.h>
+#include <string.h>
+
+#include <rte_errno.h>
+#include <rte_log.h>
+#include <rte_malloc.h>
+#include <rte_interrupts.h>
+
+/* Macro to check for a valid interrupt handle */
+#define CHECK_VALID_INTR_HANDLE(intr_handle) do { \
+ if (intr_handle == NULL) { \
+ RTE_LOG(ERR, EAL, "Interrupt instance unallocated\n"); \
+ rte_errno = EINVAL; \
+ goto fail; \
+ } \
+} while (0)
+
+#define RTE_INTR_INSTANCE_KNOWN_FLAGS ( \
+ RTE_INTR_INSTANCE_F_SHARED | \
+ RTE_INTR_INSTANCE_F_UNSHARED)
+
+#define IS_RTE_MEMORY(intr_handle) \
+ !!(intr_handle->alloc_flag & RTE_INTR_INSTANCE_F_SHARED)
+
+struct rte_intr_handle *rte_intr_instance_alloc(uint32_t flags)
+{
+ struct rte_intr_handle *intr_handle;
+ bool is_rte_memory;
+
+ /* Check the flag passed by user, it should be part of the
+ * defined flags.
+ */
+ if ((flags & (flags - 1)) ||
+ (flags & ~RTE_INTR_INSTANCE_KNOWN_FLAGS) != 0) {
+ RTE_LOG(ERR, EAL, "Invalid alloc flag passed %x\n", flags);
+ rte_errno = EINVAL;
+ return NULL;
+ }
+
+ is_rte_memory = (flags & RTE_INTR_INSTANCE_F_SHARED) != 0;
+ if (is_rte_memory == true)
+ intr_handle = rte_zmalloc(NULL, sizeof(*intr_handle), 0);
+ else
+ intr_handle = calloc(1, sizeof(*intr_handle));
+ if (intr_handle == NULL) {
+ RTE_LOG(ERR, EAL, "Fail to allocate intr_handle\n");
+ rte_errno = ENOMEM;
+ return NULL;
+ }
+
+ intr_handle->nb_intr = RTE_MAX_RXTX_INTR_VEC_ID;
+ intr_handle->alloc_flag = flags;
+
+ return intr_handle;
+}
+
+int rte_intr_instance_copy(struct rte_intr_handle *intr_handle,
+ const struct rte_intr_handle *src)
+{
+ uint16_t nb_intr;
+
+ CHECK_VALID_INTR_HANDLE(intr_handle);
+
+ if (src == NULL) {
+ RTE_LOG(ERR, EAL, "Source interrupt instance unallocated\n");
+ rte_errno = EINVAL;
+ goto fail;
+ }
+
+ intr_handle->fd = src->fd;
+ intr_handle->vfio_dev_fd = src->vfio_dev_fd;
+ intr_handle->type = src->type;
+ intr_handle->max_intr = src->max_intr;
+ intr_handle->nb_efd = src->nb_efd;
+ intr_handle->efd_counter_size = src->efd_counter_size;
+
+ nb_intr = RTE_MIN(src->nb_intr, intr_handle->nb_intr);
+ memcpy(intr_handle->efds, src->efds, nb_intr * sizeof(src->efds[0]));
+ memcpy(intr_handle->elist, src->elist, nb_intr * sizeof(src->elist[0]));
+
+ return 0;
+fail:
+ return -rte_errno;
+}
+
+int rte_intr_instance_alloc_flag_get(const struct rte_intr_handle *intr_handle)
+{
+ CHECK_VALID_INTR_HANDLE(intr_handle);
+
+ return intr_handle->alloc_flag;
+fail:
+ return -rte_errno;
+}
+
+void rte_intr_instance_free(struct rte_intr_handle *intr_handle)
+{
+ if (intr_handle != NULL) {
+ if (IS_RTE_MEMORY(intr_handle) != 0)
+ rte_free(intr_handle);
+ else
+ free(intr_handle);
+ }
+}
+
+int rte_intr_fd_set(struct rte_intr_handle *intr_handle, int fd)
+{
+ CHECK_VALID_INTR_HANDLE(intr_handle);
+
+ intr_handle->fd = fd;
+
+ return 0;
+fail:
+ return -rte_errno;
+}
+
+int rte_intr_fd_get(const struct rte_intr_handle *intr_handle)
+{
+ CHECK_VALID_INTR_HANDLE(intr_handle);
+
+ return intr_handle->fd;
+fail:
+ return -1;
+}
+
+int rte_intr_type_set(struct rte_intr_handle *intr_handle,
+ enum rte_intr_handle_type type)
+{
+ CHECK_VALID_INTR_HANDLE(intr_handle);
+
+ intr_handle->type = type;
+
+ return 0;
+fail:
+ return -rte_errno;
+}
+
+enum rte_intr_handle_type rte_intr_type_get(
+ const struct rte_intr_handle *intr_handle)
+{
+ CHECK_VALID_INTR_HANDLE(intr_handle);
+
+ return intr_handle->type;
+fail:
+ return RTE_INTR_HANDLE_UNKNOWN;
+}
+
+int rte_intr_dev_fd_set(struct rte_intr_handle *intr_handle, int fd)
+{
+ CHECK_VALID_INTR_HANDLE(intr_handle);
+
+ intr_handle->vfio_dev_fd = fd;
+
+ return 0;
+fail:
+ return -rte_errno;
+}
+
+int rte_intr_dev_fd_get(const struct rte_intr_handle *intr_handle)
+{
+ CHECK_VALID_INTR_HANDLE(intr_handle);
+
+ return intr_handle->vfio_dev_fd;
+fail:
+ return -1;
+}
+
+int rte_intr_max_intr_set(struct rte_intr_handle *intr_handle,
+ int max_intr)
+{
+ CHECK_VALID_INTR_HANDLE(intr_handle);
+
+ if (max_intr > intr_handle->nb_intr) {
+ RTE_LOG(ERR, EAL, "Maximum interrupt vector ID (%d) exceeds "
+ "the number of available events (%d)\n", max_intr,
+ intr_handle->nb_intr);
+ rte_errno = ERANGE;
+ goto fail;
+ }
+
+ intr_handle->max_intr = max_intr;
+
+ return 0;
+fail:
+ return -rte_errno;
+}
+
+int rte_intr_max_intr_get(const struct rte_intr_handle *intr_handle)
+{
+ CHECK_VALID_INTR_HANDLE(intr_handle);
+
+ return intr_handle->max_intr;
+fail:
+ return -rte_errno;
+}
+
+int rte_intr_nb_efd_set(struct rte_intr_handle *intr_handle,
+ int nb_efd)
+{
+ CHECK_VALID_INTR_HANDLE(intr_handle);
+
+ intr_handle->nb_efd = nb_efd;
+
+ return 0;
+fail:
+ return -rte_errno;
+}
+
+int rte_intr_nb_efd_get(const struct rte_intr_handle *intr_handle)
+{
+ CHECK_VALID_INTR_HANDLE(intr_handle);
+
+ return intr_handle->nb_efd;
+fail:
+ return -rte_errno;
+}
+
+int rte_intr_nb_intr_get(const struct rte_intr_handle *intr_handle)
+{
+ CHECK_VALID_INTR_HANDLE(intr_handle);
+
+ return intr_handle->nb_intr;
+fail:
+ return -rte_errno;
+}
+
+int rte_intr_efd_counter_size_set(struct rte_intr_handle *intr_handle,
+ uint8_t efd_counter_size)
+{
+ CHECK_VALID_INTR_HANDLE(intr_handle);
+
+ intr_handle->efd_counter_size = efd_counter_size;
+
+ return 0;
+fail:
+ return -rte_errno;
+}
+
+int rte_intr_efd_counter_size_get(const struct rte_intr_handle *intr_handle)
+{
+ CHECK_VALID_INTR_HANDLE(intr_handle);
+
+ return intr_handle->efd_counter_size;
+fail:
+ return -rte_errno;
+}
+
+int rte_intr_efds_index_get(const struct rte_intr_handle *intr_handle,
+ int index)
+{
+ CHECK_VALID_INTR_HANDLE(intr_handle);
+
+ if (index >= intr_handle->nb_intr) {
+ RTE_LOG(ERR, EAL, "Invalid size %d, max limit %d\n", index,
+ intr_handle->nb_intr);
+ rte_errno = EINVAL;
+ goto fail;
+ }
+
+ return intr_handle->efds[index];
+fail:
+ return -rte_errno;
+}
+
+int rte_intr_efds_index_set(struct rte_intr_handle *intr_handle,
+ int index, int fd)
+{
+ CHECK_VALID_INTR_HANDLE(intr_handle);
+
+ if (index >= intr_handle->nb_intr) {
+ RTE_LOG(ERR, EAL, "Invalid size %d, max limit %d\n", index,
+ intr_handle->nb_intr);
+ rte_errno = ERANGE;
+ goto fail;
+ }
+
+ intr_handle->efds[index] = fd;
+
+ return 0;
+fail:
+ return -rte_errno;
+}
+
+struct rte_epoll_event *rte_intr_elist_index_get(
+ struct rte_intr_handle *intr_handle, int index)
+{
+ CHECK_VALID_INTR_HANDLE(intr_handle);
+
+ if (index >= intr_handle->nb_intr) {
+ RTE_LOG(ERR, EAL, "Invalid index %d, max limit %d\n", index,
+ intr_handle->nb_intr);
+ rte_errno = ERANGE;
+ goto fail;
+ }
+
+ return &intr_handle->elist[index];
+fail:
+ return NULL;
+}
+
+int rte_intr_elist_index_set(struct rte_intr_handle *intr_handle,
+ int index, struct rte_epoll_event elist)
+{
+ CHECK_VALID_INTR_HANDLE(intr_handle);
+
+ if (index >= intr_handle->nb_intr) {
+ RTE_LOG(ERR, EAL, "Invalid index %d, max limit %d\n", index,
+ intr_handle->nb_intr);
+ rte_errno = ERANGE;
+ goto fail;
+ }
+
+ intr_handle->elist[index] = elist;
+
+ return 0;
+fail:
+ return -rte_errno;
+}
+
+int rte_intr_vec_list_alloc(struct rte_intr_handle *intr_handle,
+ const char *name, int size)
+{
+ CHECK_VALID_INTR_HANDLE(intr_handle);
+
+ /* Vector list already allocated */
+ if (intr_handle->intr_vec != NULL)
+ return 0;
+
+ if (size > intr_handle->nb_intr) {
+ RTE_LOG(ERR, EAL, "Invalid size %d, max limit %d\n", size,
+ intr_handle->nb_intr);
+ rte_errno = ERANGE;
+ goto fail;
+ }
+
+ intr_handle->intr_vec = rte_zmalloc(name, size * sizeof(int), 0);
+ if (intr_handle->intr_vec == NULL) {
+ RTE_LOG(ERR, EAL, "Failed to allocate %d intr_vec", size);
+ rte_errno = ENOMEM;
+ goto fail;
+ }
+
+ intr_handle->vec_list_size = size;
+
+ return 0;
+fail:
+ return -rte_errno;
+}
+
+int rte_intr_vec_list_index_get(const struct rte_intr_handle *intr_handle,
+ int index)
+{
+ CHECK_VALID_INTR_HANDLE(intr_handle);
+
+ RTE_ASSERT(intr_handle->vec_list_size != 0);
+
+ if (index >= intr_handle->vec_list_size) {
+ RTE_LOG(ERR, EAL, "Index %d out of range, vec list size %d\n",
+ index, intr_handle->vec_list_size);
+ rte_errno = ERANGE;
+ goto fail;
+ }
+
+ return intr_handle->intr_vec[index];
+fail:
+ return -rte_errno;
+}
+
+int rte_intr_vec_list_index_set(struct rte_intr_handle *intr_handle,
+ int index, int vec)
+{
+ CHECK_VALID_INTR_HANDLE(intr_handle);
+
+ RTE_ASSERT(intr_handle->vec_list_size != 0);
+
+ if (index >= intr_handle->vec_list_size) {
+ RTE_LOG(ERR, EAL, "Index %d out of range, vec list size %d\n",
+ index, intr_handle->vec_list_size);
+ rte_errno = ERANGE;
+ goto fail;
+ }
+
+ intr_handle->intr_vec[index] = vec;
+
+ return 0;
+fail:
+ return -rte_errno;
+}
+
+void rte_intr_vec_list_free(struct rte_intr_handle *intr_handle)
+{
+ if (intr_handle != NULL) {
+ rte_free(intr_handle->intr_vec);
+ intr_handle->intr_vec = NULL;
+ intr_handle->vec_list_size = 0;
+ }
+}
+
+void *rte_intr_instance_windows_handle_get(struct rte_intr_handle *intr_handle)
+{
+ CHECK_VALID_INTR_HANDLE(intr_handle);
+
+ return intr_handle->windows_handle;
+fail:
+ return NULL;
+}
+
+int rte_intr_instance_windows_handle_set(struct rte_intr_handle *intr_handle,
+ void *windows_handle)
+{
+ CHECK_VALID_INTR_HANDLE(intr_handle);
+
+ intr_handle->windows_handle = windows_handle;
+
+ return 0;
+fail:
+ return -rte_errno;
+}
diff --git a/lib/eal/common/meson.build b/lib/eal/common/meson.build
index 6d01b0f072..917758cc65 100644
--- a/lib/eal/common/meson.build
+++ b/lib/eal/common/meson.build
@@ -15,6 +15,7 @@ sources += files(
'eal_common_errno.c',
'eal_common_fbarray.c',
'eal_common_hexdump.c',
+ 'eal_common_interrupts.c',
'eal_common_launch.c',
'eal_common_lcore.c',
'eal_common_log.c',
diff --git a/lib/eal/include/meson.build b/lib/eal/include/meson.build
index 88a9eba12f..8e258607b8 100644
--- a/lib/eal/include/meson.build
+++ b/lib/eal/include/meson.build
@@ -19,6 +19,7 @@ headers += files(
'rte_eal_memconfig.h',
'rte_eal_trace.h',
'rte_errno.h',
+ 'rte_epoll.h',
'rte_fbarray.h',
'rte_hexdump.h',
'rte_hypervisor.h',
diff --git a/lib/eal/include/rte_eal_interrupts.h b/lib/eal/include/rte_eal_interrupts.h
index 00bcc19b6d..26c6300826 100644
--- a/lib/eal/include/rte_eal_interrupts.h
+++ b/lib/eal/include/rte_eal_interrupts.h
@@ -39,32 +39,6 @@ enum rte_intr_handle_type {
RTE_INTR_HANDLE_MAX /**< count of elements */
};
-#define RTE_INTR_EVENT_ADD 1UL
-#define RTE_INTR_EVENT_DEL 2UL
-
-typedef void (*rte_intr_event_cb_t)(int fd, void *arg);
-
-struct rte_epoll_data {
- uint32_t event; /**< event type */
- void *data; /**< User data */
- rte_intr_event_cb_t cb_fun; /**< IN: callback fun */
- void *cb_arg; /**< IN: callback arg */
-};
-
-enum {
- RTE_EPOLL_INVALID = 0,
- RTE_EPOLL_VALID,
- RTE_EPOLL_EXEC,
-};
-
-/** interrupt epoll event obj, taken by epoll_event.ptr */
-struct rte_epoll_event {
- uint32_t status; /**< OUT: event status */
- int fd; /**< OUT: event fd */
- int epfd; /**< OUT: epoll instance the ev associated with */
- struct rte_epoll_data epdata;
-};
-
/** Handle for interrupts. */
struct rte_intr_handle {
RTE_STD_C11
@@ -79,191 +53,20 @@ struct rte_intr_handle {
};
int fd; /**< interrupt event file descriptor */
};
- void *handle; /**< device driver handle (Windows) */
+ void *windows_handle; /**< device driver handle */
};
+ uint32_t alloc_flag; /**< Interrupt instance allocation flag */
enum rte_intr_handle_type type; /**< handle type */
uint32_t max_intr; /**< max interrupt requested */
uint32_t nb_efd; /**< number of available efd(event fd) */
uint8_t efd_counter_size; /**< size of efd counter, used for vdev */
+ uint16_t nb_intr;
+ /**< Max vector count, default RTE_MAX_RXTX_INTR_VEC_ID */
int efds[RTE_MAX_RXTX_INTR_VEC_ID]; /**< intr vectors/efds mapping */
struct rte_epoll_event elist[RTE_MAX_RXTX_INTR_VEC_ID];
- /**< intr vector epoll event */
+ /**< intr vector epoll event */
+ uint16_t vec_list_size;
int *intr_vec; /**< intr vector number array */
};
-#define RTE_EPOLL_PER_THREAD -1 /**< to hint using per thread epfd */
-
-/**
- * It waits for events on the epoll instance.
- * Retries if signal received.
- *
- * @param epfd
- * Epoll instance fd on which the caller wait for events.
- * @param events
- * Memory area contains the events that will be available for the caller.
- * @param maxevents
- * Up to maxevents are returned, must greater than zero.
- * @param timeout
- * Specifying a timeout of -1 causes a block indefinitely.
- * Specifying a timeout equal to zero cause to return immediately.
- * @return
- * - On success, returns the number of available event.
- * - On failure, a negative value.
- */
-int
-rte_epoll_wait(int epfd, struct rte_epoll_event *events,
- int maxevents, int timeout);
-
-/**
- * It waits for events on the epoll instance.
- * Does not retry if signal received.
- *
- * @param epfd
- * Epoll instance fd on which the caller wait for events.
- * @param events
- * Memory area contains the events that will be available for the caller.
- * @param maxevents
- * Up to maxevents are returned, must greater than zero.
- * @param timeout
- * Specifying a timeout of -1 causes a block indefinitely.
- * Specifying a timeout equal to zero cause to return immediately.
- * @return
- * - On success, returns the number of available event.
- * - On failure, a negative value.
- */
-__rte_experimental
-int
-rte_epoll_wait_interruptible(int epfd, struct rte_epoll_event *events,
- int maxevents, int timeout);
-
-/**
- * It performs control operations on epoll instance referred by the epfd.
- * It requests that the operation op be performed for the target fd.
- *
- * @param epfd
- * Epoll instance fd on which the caller perform control operations.
- * @param op
- * The operation be performed for the target fd.
- * @param fd
- * The target fd on which the control ops perform.
- * @param event
- * Describes the object linked to the fd.
- * Note: The caller must take care the object deletion after CTL_DEL.
- * @return
- * - On success, zero.
- * - On failure, a negative value.
- */
-int
-rte_epoll_ctl(int epfd, int op, int fd,
- struct rte_epoll_event *event);
-
-/**
- * The function returns the per thread epoll instance.
- *
- * @return
- * epfd the epoll instance referred to.
- */
-int
-rte_intr_tls_epfd(void);
-
-/**
- * @param intr_handle
- * Pointer to the interrupt handle.
- * @param epfd
- * Epoll instance fd which the intr vector associated to.
- * @param op
- * The operation be performed for the vector.
- * Operation type of {ADD, DEL}.
- * @param vec
- * RX intr vector number added to the epoll instance wait list.
- * @param data
- * User raw data.
- * @return
- * - On success, zero.
- * - On failure, a negative value.
- */
-int
-rte_intr_rx_ctl(struct rte_intr_handle *intr_handle,
- int epfd, int op, unsigned int vec, void *data);
-
-/**
- * It deletes registered eventfds.
- *
- * @param intr_handle
- * Pointer to the interrupt handle.
- */
-void
-rte_intr_free_epoll_fd(struct rte_intr_handle *intr_handle);
-
-/**
- * It enables the packet I/O interrupt event if it's necessary.
- * It creates event fd for each interrupt vector when MSIX is used,
- * otherwise it multiplexes a single event fd.
- *
- * @param intr_handle
- * Pointer to the interrupt handle.
- * @param nb_efd
- * Number of interrupt vector trying to enable.
- * The value 0 is not allowed.
- * @return
- * - On success, zero.
- * - On failure, a negative value.
- */
-int
-rte_intr_efd_enable(struct rte_intr_handle *intr_handle, uint32_t nb_efd);
-
-/**
- * It disables the packet I/O interrupt event.
- * It deletes registered eventfds and closes the open fds.
- *
- * @param intr_handle
- * Pointer to the interrupt handle.
- */
-void
-rte_intr_efd_disable(struct rte_intr_handle *intr_handle);
-
-/**
- * The packet I/O interrupt on datapath is enabled or not.
- *
- * @param intr_handle
- * Pointer to the interrupt handle.
- */
-int
-rte_intr_dp_is_en(struct rte_intr_handle *intr_handle);
-
-/**
- * The interrupt handle instance allows other causes or not.
- * Other causes stand for any none packet I/O interrupts.
- *
- * @param intr_handle
- * Pointer to the interrupt handle.
- */
-int
-rte_intr_allow_others(struct rte_intr_handle *intr_handle);
-
-/**
- * The multiple interrupt vector capability of interrupt handle instance.
- * It returns zero if no multiple interrupt vector support.
- *
- * @param intr_handle
- * Pointer to the interrupt handle.
- */
-int
-rte_intr_cap_multiple(struct rte_intr_handle *intr_handle);
-
-/**
- * @warning
- * @b EXPERIMENTAL: this API may change without prior notice
- *
- * @internal
- * Check if currently executing in interrupt context
- *
- * @return
- * - non zero in case of interrupt context
- * - zero in case of process context
- */
-__rte_experimental
-int
-rte_thread_is_intr(void);
-
#endif /* _RTE_EAL_INTERRUPTS_H_ */
diff --git a/lib/eal/include/rte_epoll.h b/lib/eal/include/rte_epoll.h
new file mode 100644
index 0000000000..56b7b6bad6
--- /dev/null
+++ b/lib/eal/include/rte_epoll.h
@@ -0,0 +1,118 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2021 Marvell International Ltd.
+ */
+
+#ifndef __RTE_EPOLL_H__
+#define __RTE_EPOLL_H__
+
+/**
+ * @file
+ * The rte_epoll API provides functions to add and delete epoll events,
+ * and to wait for or poll such events.
+ */
+
+#include <stdint.h>
+
+#include <rte_compat.h>
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+#define RTE_INTR_EVENT_ADD 1UL
+#define RTE_INTR_EVENT_DEL 2UL
+
+typedef void (*rte_intr_event_cb_t)(int fd, void *arg);
+
+struct rte_epoll_data {
+ uint32_t event; /**< event type */
+ void *data; /**< User data */
+ rte_intr_event_cb_t cb_fun; /**< IN: callback fun */
+ void *cb_arg; /**< IN: callback arg */
+};
+
+enum {
+ RTE_EPOLL_INVALID = 0,
+ RTE_EPOLL_VALID,
+ RTE_EPOLL_EXEC,
+};
+
+/** interrupt epoll event obj, taken by epoll_event.ptr */
+struct rte_epoll_event {
+ uint32_t status; /**< OUT: event status */
+ int fd; /**< OUT: event fd */
+ int epfd; /**< OUT: epoll instance the ev associated with */
+ struct rte_epoll_data epdata;
+};
+
+#define RTE_EPOLL_PER_THREAD -1 /**< to hint using per thread epfd */
+
+/**
+ * It waits for events on the epoll instance.
+ * Retries if signal received.
+ *
+ * @param epfd
+ * Epoll instance fd on which the caller wait for events.
+ * @param events
+ * Memory area contains the events that will be available for the caller.
+ * @param maxevents
+ * Up to maxevents are returned, must greater than zero.
+ * @param timeout
+ * Specifying a timeout of -1 causes a block indefinitely.
+ * Specifying a timeout equal to zero cause to return immediately.
+ * @return
+ * - On success, returns the number of available event.
+ * - On failure, a negative value.
+ */
+int
+rte_epoll_wait(int epfd, struct rte_epoll_event *events,
+ int maxevents, int timeout);
+
+/**
+ * It waits for events on the epoll instance.
+ * Does not retry if signal received.
+ *
+ * @param epfd
+ * Epoll instance fd on which the caller wait for events.
+ * @param events
+ * Memory area contains the events that will be available for the caller.
+ * @param maxevents
+ * Up to maxevents are returned, must greater than zero.
+ * @param timeout
+ * Specifying a timeout of -1 causes a block indefinitely.
+ * Specifying a timeout equal to zero cause to return immediately.
+ * @return
+ * - On success, returns the number of available event.
+ * - On failure, a negative value.
+ */
+__rte_experimental
+int
+rte_epoll_wait_interruptible(int epfd, struct rte_epoll_event *events,
+ int maxevents, int timeout);
+
+/**
+ * It performs control operations on epoll instance referred by the epfd.
+ * It requests that the operation op be performed for the target fd.
+ *
+ * @param epfd
+ * Epoll instance fd on which the caller perform control operations.
+ * @param op
+ * The operation be performed for the target fd.
+ * @param fd
+ * The target fd on which the control ops perform.
+ * @param event
+ * Describes the object linked to the fd.
+ * Note: The caller must take care the object deletion after CTL_DEL.
+ * @return
+ * - On success, zero.
+ * - On failure, a negative value.
+ */
+int
+rte_epoll_ctl(int epfd, int op, int fd,
+ struct rte_epoll_event *event);
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif /* __RTE_EPOLL_H__ */
diff --git a/lib/eal/include/rte_interrupts.h b/lib/eal/include/rte_interrupts.h
index cc3bf45d8c..a29232e16a 100644
--- a/lib/eal/include/rte_interrupts.h
+++ b/lib/eal/include/rte_interrupts.h
@@ -5,8 +5,11 @@
#ifndef _RTE_INTERRUPTS_H_
#define _RTE_INTERRUPTS_H_
+#include <stdbool.h>
+
#include <rte_common.h>
#include <rte_compat.h>
+#include <rte_epoll.h>
/**
* @file
@@ -22,6 +25,16 @@ extern "C" {
/** Interrupt handle */
struct rte_intr_handle;
+/** Interrupt instance allocation flags
+ * @see rte_intr_instance_alloc
+ */
+/** Interrupt instance will not be shared between primary and secondary processes. */
+#define RTE_INTR_INSTANCE_F_UNSHARED 0x00000001
+/** Interrupt instance can be shared between primary and secondary processes. */
+#define RTE_INTR_INSTANCE_F_SHARED 0x00000002
+
+#include "rte_eal_interrupts.h"
+
/** Function to be registered for the specific interrupt */
typedef void (*rte_intr_callback_fn)(void *cb_arg);
@@ -32,8 +45,6 @@ typedef void (*rte_intr_callback_fn)(void *cb_arg);
typedef void (*rte_intr_unregister_callback_fn)(struct rte_intr_handle *intr_handle,
void *cb_arg);
-#include "rte_eal_interrupts.h"
-
/**
* It registers the callback for the specific interrupt. Multiple
* callbacks can be registered at the same time.
@@ -163,6 +174,639 @@ int rte_intr_disable(const struct rte_intr_handle *intr_handle);
__rte_experimental
int rte_intr_ack(const struct rte_intr_handle *intr_handle);
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Check if currently executing in interrupt context
+ *
+ * @return
+ * - non zero in case of interrupt context
+ * - zero in case of process context
+ */
+__rte_experimental
+int
+rte_thread_is_intr(void);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Allocates memory for an interrupt instance. The flag argument defines where
+ * the memory is allocated from, i.e. using the DPDK memory management library
+ * APIs or normal heap allocation.
+ * The event fd and event list arrays are allocated with a default size, and
+ * can be reallocated later based on the number of MSI-X interrupts supported
+ * by a PCI device.
+ *
+ * This function should be called by the application or driver before calling
+ * any of the interrupt APIs.
+ *
+ * @param flags
+ * Allocation flag: memory from the DPDK allocator or the normal heap.
+ *
+ * @return
+ * - On success, address of interrupt handle.
+ * - On failure, NULL.
+ */
+__rte_experimental
+struct rte_intr_handle *
+rte_intr_instance_alloc(uint32_t flags);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * This API is used to free the memory allocated for interrupt handle resources.
+ *
+ * @param intr_handle
+ * Interrupt handle address.
+ *
+ */
+__rte_experimental
+void
+rte_intr_instance_free(struct rte_intr_handle *intr_handle);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * This API is used to set the fd field of interrupt handle with user provided
+ * file descriptor.
+ *
+ * @param intr_handle
+ * pointer to the interrupt handle.
+ * @param fd
+ * file descriptor value provided by user.
+ *
+ * @return
+ * - On success, zero.
+ * - On failure, a negative value and rte_errno is set.
+ */
+__rte_experimental
+int
+rte_intr_fd_set(struct rte_intr_handle *intr_handle, int fd);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Returns the fd field of the given interrupt handle instance.
+ *
+ * @param intr_handle
+ * pointer to the interrupt handle.
+ *
+ * @return
+ * - On success, fd field.
+ * - On failure, a negative value.
+ */
+__rte_experimental
+int
+rte_intr_fd_get(const struct rte_intr_handle *intr_handle);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * This API is used to set the type field of interrupt handle with user provided
+ * interrupt type.
+ *
+ * @param intr_handle
+ * pointer to the interrupt handle.
+ * @param type
+ * interrupt type
+ *
+ * @return
+ * - On success, zero.
+ * - On failure, a negative value and rte_errno is set.
+ */
+__rte_experimental
+int
+rte_intr_type_set(struct rte_intr_handle *intr_handle,
+ enum rte_intr_handle_type type);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Returns the type field of the given interrupt handle instance.
+ *
+ * @param intr_handle
+ * pointer to the interrupt handle.
+ *
+ * @return
+ * - On success, interrupt type
+ * - On failure, RTE_INTR_HANDLE_UNKNOWN.
+ */
+__rte_experimental
+enum rte_intr_handle_type
+rte_intr_type_get(const struct rte_intr_handle *intr_handle);
+
+/**
+ * @internal
+ * The function returns the per thread epoll instance.
+ *
+ * @return
+ * epfd the epoll instance referred to.
+ */
+__rte_internal
+int
+rte_intr_tls_epfd(void);
+
+/**
+ * @internal
+ * @param intr_handle
+ * Pointer to the interrupt handle.
+ * @param epfd
+ * Epoll instance fd which the intr vector associated to.
+ * @param op
+ * The operation be performed for the vector.
+ * Operation type of {ADD, DEL}.
+ * @param vec
+ * RX intr vector number added to the epoll instance wait list.
+ * @param data
+ * User raw data.
+ * @return
+ * - On success, zero.
+ * - On failure, a negative value.
+ */
+__rte_internal
+int
+rte_intr_rx_ctl(struct rte_intr_handle *intr_handle,
+ int epfd, int op, unsigned int vec, void *data);
+
+/**
+ * @internal
+ * It deletes registered eventfds.
+ *
+ * @param intr_handle
+ * Pointer to the interrupt handle.
+ */
+__rte_internal
+void
+rte_intr_free_epoll_fd(struct rte_intr_handle *intr_handle);
+
+/**
+ * @internal
+ * It enables the packet I/O interrupt event if it's necessary.
+ * It creates event fd for each interrupt vector when MSIX is used,
+ * otherwise it multiplexes a single event fd.
+ *
+ * @param intr_handle
+ * Pointer to the interrupt handle.
+ * @param nb_efd
+ * Number of interrupt vector trying to enable.
+ * The value 0 is not allowed.
+ * @return
+ * - On success, zero.
+ * - On failure, a negative value.
+ */
+__rte_internal
+int
+rte_intr_efd_enable(struct rte_intr_handle *intr_handle, uint32_t nb_efd);
+
+/**
+ * @internal
+ * It disables the packet I/O interrupt event.
+ * It deletes registered eventfds and closes the open fds.
+ *
+ * @param intr_handle
+ * Pointer to the interrupt handle.
+ */
+__rte_internal
+void
+rte_intr_efd_disable(struct rte_intr_handle *intr_handle);
+
+/**
+ * @internal
+ * The packet I/O interrupt on datapath is enabled or not.
+ *
+ * @param intr_handle
+ * Pointer to the interrupt handle.
+ */
+__rte_internal
+int
+rte_intr_dp_is_en(struct rte_intr_handle *intr_handle);
+
+/**
+ * @internal
+ * The interrupt handle instance allows other causes or not.
+ * Other causes stand for any none packet I/O interrupts.
+ *
+ * @param intr_handle
+ * Pointer to the interrupt handle.
+ */
+__rte_internal
+int
+rte_intr_allow_others(struct rte_intr_handle *intr_handle);
+
+/**
+ * @internal
+ * The multiple interrupt vector capability of interrupt handle instance.
+ * It returns zero if no multiple interrupt vector support.
+ *
+ * @param intr_handle
+ * Pointer to the interrupt handle.
+ */
+__rte_internal
+int
+rte_intr_cap_multiple(struct rte_intr_handle *intr_handle);
+
+/**
+ * @internal
+ * This API is used to populate an interrupt handle with the fields of a
+ * source interrupt handle.
+ *
+ * @param intr_handle
+ * Interrupt handle pointer.
+ * @param src
+ * Source interrupt handle to be cloned.
+ *
+ * @return
+ * - On success, zero.
+ * - On failure, a negative value and rte_errno is set.
+ */
+__rte_internal
+int
+rte_intr_instance_copy(struct rte_intr_handle *intr_handle,
+ const struct rte_intr_handle *src);
+
+/**
+ * @internal
+ * This API is used to set the device fd field of interrupt handle with user
+ * provided dev fd. Device fd corresponds to VFIO device fd or UIO config fd.
+ *
+ * @param intr_handle
+ * pointer to the interrupt handle.
+ * @param fd
+ * device file descriptor (VFIO device fd or UIO config fd)
+ *
+ * @return
+ * - On success, zero.
+ * - On failure, a negative value and rte_errno is set.
+ */
+__rte_internal
+int
+rte_intr_dev_fd_set(struct rte_intr_handle *intr_handle, int fd);
+
+/**
+ * @internal
+ * Returns the device fd field of the given interrupt handle instance.
+ *
+ * @param intr_handle
+ * pointer to the interrupt handle.
+ *
+ * @return
+ * - On success, dev fd.
+ * - On failure, a negative value and rte_errno is set.
+ */
+__rte_internal
+int
+rte_intr_dev_fd_get(const struct rte_intr_handle *intr_handle);
+
+/**
+ * @internal
+ * This API is used to set the max intr field of interrupt handle with user
+ * provided max intr value.
+ *
+ * @param intr_handle
+ * pointer to the interrupt handle.
+ * @param max_intr
+ * maximum number of interrupts
+ *
+ * @return
+ * - On success, zero.
+ * - On failure, a negative value and rte_errno is set.
+ */
+__rte_internal
+int
+rte_intr_max_intr_set(struct rte_intr_handle *intr_handle, int max_intr);
+
+/**
+ * @internal
+ * Returns the max intr field of the given interrupt handle instance.
+ *
+ * @param intr_handle
+ * pointer to the interrupt handle.
+ *
+ * @return
+ * - On success, max intr.
+ * - On failure, a negative value and rte_errno is set.
+ */
+__rte_internal
+int
+rte_intr_max_intr_get(const struct rte_intr_handle *intr_handle);
+
+/**
+ * @internal
+ * This API is used to set the number of event fd field of interrupt handle
+ * with user provided available event file descriptor value.
+ *
+ * @param intr_handle
+ * pointer to the interrupt handle.
+ * @param nb_efd
+ * Number of available event file descriptors
+ *
+ * @return
+ * - On success, zero.
+ * - On failure, a negative value and rte_errno is set.
+ */
+__rte_internal
+int
+rte_intr_nb_efd_set(struct rte_intr_handle *intr_handle, int nb_efd);
+
+/**
+ * @internal
+ * Returns the number of available event fd field of the given interrupt handle
+ * instance.
+ *
+ * @param intr_handle
+ * pointer to the interrupt handle.
+ *
+ * @return
+ * - On success, nb_efd
+ * - On failure, a negative value and rte_errno is set.
+ */
+__rte_internal
+int
+rte_intr_nb_efd_get(const struct rte_intr_handle *intr_handle);
+
+/**
+ * @internal
+ * Returns the interrupt vector count field (nb_intr) of the given interrupt
+ * handle instance. This field is configured at device probe time, and the
+ * efds and elist arrays are dynamically allocated based on its value. By
+ * default this value is set to RTE_MAX_RXTX_INTR_VEC_ID.
+ * For example, for a PCI device the MSI-X size is queried and the efds/elist
+ * arrays are allocated accordingly.
+ *
+ * @param intr_handle
+ * pointer to the interrupt handle.
+ *
+ * @return
+ * - On success, nb_intr
+ * - On failure, a negative value and rte_errno is set.
+ */
+__rte_internal
+int
+rte_intr_nb_intr_get(const struct rte_intr_handle *intr_handle);
+
+/**
+ * @internal
+ * This API is used to set the event fd counter size field of interrupt handle
+ * with user provided efd counter size.
+ *
+ * @param intr_handle
+ * pointer to the interrupt handle.
+ * @param efd_counter_size
+ * size of efd counter.
+ *
+ * @return
+ * - On success, zero.
+ * - On failure, a negative value and rte_errno is set.
+ */
+__rte_internal
+int
+rte_intr_efd_counter_size_set(struct rte_intr_handle *intr_handle,
+ uint8_t efd_counter_size);
+
+/**
+ * @internal
+ * Returns the event fd counter size field of the given interrupt handle
+ * instance.
+ *
+ * @param intr_handle
+ * pointer to the interrupt handle.
+ *
+ * @return
+ * - On success, efd_counter_size
+ * - On failure, a negative value and rte_errno is set.
+ */
+__rte_internal
+int
+rte_intr_efd_counter_size_get(const struct rte_intr_handle *intr_handle);
+
+/**
+ * @internal
+ * This API is used to set the event fd array index with the given fd.
+ *
+ * @param intr_handle
+ * pointer to the interrupt handle.
+ * @param index
+ * efds array index to be set
+ * @param fd
+ * event fd
+ *
+ * @return
+ * - On success, zero.
+ * - On failure, a negative value and rte_errno is set.
+ */
+__rte_internal
+int
+rte_intr_efds_index_set(struct rte_intr_handle *intr_handle, int index, int fd);
+
+/**
+ * @internal
+ * Returns the fd value of event fds array at a given index.
+ *
+ * @param intr_handle
+ * pointer to the interrupt handle.
+ * @param index
+ * efds array index to be returned
+ *
+ * @return
+ * - On success, fd
+ * - On failure, a negative value and rte_errno is set.
+ */
+__rte_internal
+int
+rte_intr_efds_index_get(const struct rte_intr_handle *intr_handle, int index);
+
+/**
+ * @internal
+ * This API is used to set the epoll event object array index with the given
+ * elist instance.
+ *
+ * @param intr_handle
+ * pointer to the interrupt handle.
+ * @param index
+ * elist array index to be set
+ * @param elist
+ * epoll event instance of struct rte_epoll_event
+ *
+ * @return
+ * - On success, zero.
+ * - On failure, a negative value and rte_errno is set.
+ */
+__rte_internal
+int
+rte_intr_elist_index_set(struct rte_intr_handle *intr_handle, int index,
+ struct rte_epoll_event elist);
+
+/**
+ * @internal
+ * Returns the address of epoll event instance from elist array at a given
+ * index.
+ *
+ * @param intr_handle
+ * pointer to the interrupt handle.
+ * @param index
+ * elist array index to be returned
+ *
+ * @return
+ * - On success, pointer to the epoll event at the given index.
+ * - On failure, NULL and rte_errno is set.
+ */
+__rte_internal
+struct rte_epoll_event *
+rte_intr_elist_index_get(struct rte_intr_handle *intr_handle, int index);
+
+/**
+ * @internal
+ * Allocates the memory of interrupt vector list array, with size defining the
+ * number of elements required in the array.
+ *
+ * @param intr_handle
+ * pointer to the interrupt handle.
+ * @param name
+ * Name assigned to the allocation, or NULL.
+ * @param size
+ * Number of elements required in the array.
+ *
+ * @return
+ * - On success, zero
+ * - On failure, a negative value and rte_errno is set.
+ */
+__rte_internal
+int
+rte_intr_vec_list_alloc(struct rte_intr_handle *intr_handle, const char *name,
+ int size);
+
+/**
+ * @internal
+ * Sets the vector value at given index of interrupt vector list field of given
+ * interrupt handle.
+ *
+ * @param intr_handle
+ * pointer to the interrupt handle.
+ * @param index
+ * intr_vec array index to be set
+ * @param vec
+ * Interrupt vector value.
+ *
+ * @return
+ * - On success, zero
+ * - On failure, a negative value and rte_errno is set.
+ */
+__rte_internal
+int
+rte_intr_vec_list_index_set(struct rte_intr_handle *intr_handle, int index,
+ int vec);
+
+/**
+ * @internal
+ * Returns the vector value at the given index of interrupt vector list array.
+ *
+ * @param intr_handle
+ * pointer to the interrupt handle.
+ * @param index
+ * intr_vec array index to be returned
+ *
+ * @return
+ * - On success, interrupt vector
+ * - On failure, a negative value and rte_errno is set.
+ */
+__rte_internal
+int
+rte_intr_vec_list_index_get(const struct rte_intr_handle *intr_handle,
+ int index);
+
+/**
+ * @internal
+ * Frees the memory allocated for interrupt vector list array.
+ *
+ * @param intr_handle
+ * pointer to the interrupt handle.
+ */
+__rte_internal
+void
+rte_intr_vec_list_free(struct rte_intr_handle *intr_handle);
+
+/**
+ * @internal
+ * Reallocates the efds and elist arrays to the size provided by the user.
+ * By default the efds and elist arrays are allocated with
+ * RTE_MAX_RXTX_INTR_VEC_ID entries when the interrupt handle is created.
+ * Later, at device probe time, the device may support more interrupts than
+ * RTE_MAX_RXTX_INTR_VEC_ID; using this API, PMDs can reallocate the arrays
+ * to match the maximum interrupt capability of the device.
+ *
+ * @param intr_handle
+ * pointer to the interrupt handle.
+ * @param size
+ * efds and elist array size.
+ *
+ * @return
+ * - On success, zero
+ * - On failure, a negative value and rte_errno is set.
+ */
+__rte_internal
+int
+rte_intr_event_list_update(struct rte_intr_handle *intr_handle, int size);
+
+/**
+ * @internal
+ * This API returns the source from which memory was allocated for the
+ * interrupt instance.
+ *
+ * @param intr_handle
+ * pointer to the interrupt handle.
+ *
+ * @return
+ * - On success, 1 corresponds to memory allocated via DPDK allocator APIs
+ * - On success, 0 corresponds to memory allocated from traditional heap.
+ * - On failure, negative value.
+ */
+__rte_internal
+int
+rte_intr_instance_alloc_flag_get(const struct rte_intr_handle *intr_handle);
+
+/**
+ * @internal
+ * This API returns the Windows handle of the given interrupt instance.
+ *
+ * @param intr_handle
+ * pointer to the interrupt handle.
+ *
+ * @return
+ * - On success, Windows handle.
+ * - On failure, NULL.
+ */
+__rte_internal
+void *
+rte_intr_instance_windows_handle_get(struct rte_intr_handle *intr_handle);
+
+/**
+ * @internal
+ * This API sets the Windows handle for the given interrupt instance.
+ *
+ * @param intr_handle
+ * pointer to the interrupt handle.
+ * @param windows_handle
+ * Windows handle to be set.
+ *
+ * @return
+ * - On success, zero
+ * - On failure, a negative value and rte_errno is set.
+ */
+__rte_internal
+int
+rte_intr_instance_windows_handle_set(struct rte_intr_handle *intr_handle,
+ void *windows_handle);
+
#ifdef __cplusplus
}
#endif
diff --git a/lib/eal/version.map b/lib/eal/version.map
index 38f7de83e1..a506f476a9 100644
--- a/lib/eal/version.map
+++ b/lib/eal/version.map
@@ -109,18 +109,10 @@ DPDK_22 {
rte_hexdump;
rte_hypervisor_get;
rte_hypervisor_get_name; # WINDOWS_NO_EXPORT
- rte_intr_allow_others;
rte_intr_callback_register;
rte_intr_callback_unregister;
- rte_intr_cap_multiple;
rte_intr_disable;
- rte_intr_dp_is_en;
- rte_intr_efd_disable;
- rte_intr_efd_enable;
rte_intr_enable;
- rte_intr_free_epoll_fd;
- rte_intr_rx_ctl;
- rte_intr_tls_epfd;
rte_keepalive_create; # WINDOWS_NO_EXPORT
rte_keepalive_dispatch_pings; # WINDOWS_NO_EXPORT
rte_keepalive_mark_alive; # WINDOWS_NO_EXPORT
@@ -420,12 +412,50 @@ EXPERIMENTAL {
# added in 21.08
rte_power_monitor_multi; # WINDOWS_NO_EXPORT
+
+ # added in 21.11
+ rte_intr_fd_get; # WINDOWS_NO_EXPORT
+ rte_intr_fd_set; # WINDOWS_NO_EXPORT
+ rte_intr_instance_alloc;
+ rte_intr_instance_free;
+ rte_intr_type_get;
+ rte_intr_type_set;
};
INTERNAL {
global:
rte_firmware_read;
+ rte_intr_allow_others;
+ rte_intr_cap_multiple;
+ rte_intr_dev_fd_get; # WINDOWS_NO_EXPORT
+ rte_intr_dev_fd_set; # WINDOWS_NO_EXPORT
+ rte_intr_dp_is_en;
+ rte_intr_efd_counter_size_set; # WINDOWS_NO_EXPORT
+ rte_intr_efd_counter_size_get; # WINDOWS_NO_EXPORT
+ rte_intr_efd_disable;
+ rte_intr_efd_enable;
+ rte_intr_efds_index_get;
+ rte_intr_efds_index_set;
+ rte_intr_elist_index_get;
+ rte_intr_elist_index_set;
+ rte_intr_event_list_update;
+ rte_intr_free_epoll_fd;
+ rte_intr_instance_alloc_flag_get;
+ rte_intr_instance_copy;
+ rte_intr_instance_windows_handle_get;
+ rte_intr_instance_windows_handle_set;
+ rte_intr_max_intr_get;
+ rte_intr_max_intr_set;
+ rte_intr_nb_efd_get; # WINDOWS_NO_EXPORT
+ rte_intr_nb_efd_set; # WINDOWS_NO_EXPORT
+ rte_intr_nb_intr_get; # WINDOWS_NO_EXPORT
+ rte_intr_rx_ctl;
+ rte_intr_tls_epfd;
+ rte_intr_vec_list_alloc;
+ rte_intr_vec_list_free;
+ rte_intr_vec_list_index_get;
+ rte_intr_vec_list_index_set;
rte_mem_lock;
rte_mem_map;
rte_mem_page_size;
--
2.18.0
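A minimal sketch of the intended driver-side lifecycle of the new public APIs
above (the helper names and the RTE_INTR_HANDLE_VDEV type below are
illustrative assumptions, not taken from the series): allocate the opaque
instance, populate it through the setters, and free it on cleanup.

	#include <rte_interrupts.h>

	/* Allocate and configure an opaque interrupt instance for one event fd. */
	static struct rte_intr_handle *
	example_intr_setup(int dev_event_fd)
	{
		struct rte_intr_handle *intr_handle;

		/* Instance shareable between primary and secondary processes */
		intr_handle = rte_intr_instance_alloc(RTE_INTR_INSTANCE_F_SHARED);
		if (intr_handle == NULL)
			return NULL;

		if (rte_intr_fd_set(intr_handle, dev_event_fd) != 0 ||
		    rte_intr_type_set(intr_handle, RTE_INTR_HANDLE_VDEV) != 0) {
			rte_intr_instance_free(intr_handle);
			return NULL;
		}

		return intr_handle;
	}

	/* Release the instance during driver cleanup. */
	static void
	example_intr_teardown(struct rte_intr_handle *intr_handle)
	{
		rte_intr_instance_free(intr_handle);
	}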
^ permalink raw reply [flat|nested] 152+ messages in thread
* [dpdk-dev] [PATCH v5 2/6] eal/interrupts: avoid direct access to interrupt handle
2021-10-22 20:49 ` [dpdk-dev] [PATCH v5 0/6] make rte_intr_handle internal Harman Kalra
2021-10-22 20:49 ` [dpdk-dev] [PATCH v5 1/6] eal/interrupts: implement get set APIs Harman Kalra
@ 2021-10-22 20:49 ` Harman Kalra
2021-10-22 23:33 ` Dmitry Kozlyuk
2021-10-22 20:49 ` [dpdk-dev] [PATCH v5 3/6] test/interrupt: apply get set interrupt handle APIs Harman Kalra
` (7 subsequent siblings)
9 siblings, 1 reply; 152+ messages in thread
From: Harman Kalra @ 2021-10-22 20:49 UTC (permalink / raw)
To: dev, Harman Kalra, Bruce Richardson
Cc: david.marchand, dmitry.kozliuk, mdr, thomas
Modify the Linux and FreeBSD interrupt framework to use the interrupt
handle get/set APIs instead of accessing the handle fields directly.
Signed-off-by: Harman Kalra <hkalra@marvell.com>
---
lib/eal/freebsd/eal_interrupts.c | 112 ++++++++----
lib/eal/linux/eal_interrupts.c | 303 +++++++++++++++++++------------
2 files changed, 268 insertions(+), 147 deletions(-)
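The conversion is mechanical: each direct read or write of a struct
rte_intr_handle field becomes a call to the corresponding accessor. As an
illustrative sketch (not an actual hunk from this patch), a validity check
written as:

	/* before: direct access to the handle fields */
	if (intr_handle->fd < 0 || intr_handle->vfio_dev_fd < 0)
		return -1;

becomes:

	/* after: the same check through the accessor APIs */
	if (rte_intr_fd_get(intr_handle) < 0 ||
	    rte_intr_dev_fd_get(intr_handle) < 0)
		return -1;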
diff --git a/lib/eal/freebsd/eal_interrupts.c b/lib/eal/freebsd/eal_interrupts.c
index 86810845fe..4df94751ca 100644
--- a/lib/eal/freebsd/eal_interrupts.c
+++ b/lib/eal/freebsd/eal_interrupts.c
@@ -40,7 +40,7 @@ struct rte_intr_callback {
struct rte_intr_source {
TAILQ_ENTRY(rte_intr_source) next;
- struct rte_intr_handle intr_handle; /**< interrupt handle */
+ struct rte_intr_handle *intr_handle; /**< interrupt handle */
struct rte_intr_cb_list callbacks; /**< user callbacks */
uint32_t active;
};
@@ -60,7 +60,7 @@ static int
intr_source_to_kevent(const struct rte_intr_handle *ih, struct kevent *ke)
{
/* alarm callbacks are special case */
- if (ih->type == RTE_INTR_HANDLE_ALARM) {
+ if (rte_intr_type_get(ih) == RTE_INTR_HANDLE_ALARM) {
uint64_t timeout_ns;
/* get soonest alarm timeout */
@@ -75,7 +75,7 @@ intr_source_to_kevent(const struct rte_intr_handle *ih, struct kevent *ke)
} else {
ke->filter = EVFILT_READ;
}
- ke->ident = ih->fd;
+ ke->ident = rte_intr_fd_get(ih);
return 0;
}
@@ -86,10 +86,11 @@ rte_intr_callback_register(const struct rte_intr_handle *intr_handle,
{
struct rte_intr_callback *callback;
struct rte_intr_source *src;
- int ret = 0, add_event = 0;
+ int ret = 0, add_event = 0, is_rte_memory;
/* first do parameter checking */
- if (intr_handle == NULL || intr_handle->fd < 0 || cb == NULL) {
+ if (rte_intr_fd_get(intr_handle) < 0 ||
+ cb == NULL) {
RTE_LOG(ERR, EAL,
"Registering with invalid input parameter\n");
return -EINVAL;
@@ -103,7 +104,8 @@ rte_intr_callback_register(const struct rte_intr_handle *intr_handle,
/* find the source for this intr_handle */
TAILQ_FOREACH(src, &intr_sources, next) {
- if (src->intr_handle.fd == intr_handle->fd)
+ if (rte_intr_fd_get(src->intr_handle) ==
+ rte_intr_fd_get(intr_handle))
break;
}
@@ -112,8 +114,8 @@ rte_intr_callback_register(const struct rte_intr_handle *intr_handle,
* thing on the list should be eal_alarm_callback() and we may
* be called just to reset the timer.
*/
- if (src != NULL && src->intr_handle.type == RTE_INTR_HANDLE_ALARM &&
- !TAILQ_EMPTY(&src->callbacks)) {
+ if (src != NULL && rte_intr_type_get(src->intr_handle) ==
+ RTE_INTR_HANDLE_ALARM && !TAILQ_EMPTY(&src->callbacks)) {
callback = NULL;
} else {
/* allocate a new interrupt callback entity */
@@ -135,9 +137,36 @@ rte_intr_callback_register(const struct rte_intr_handle *intr_handle,
ret = -ENOMEM;
goto fail;
} else {
- src->intr_handle = *intr_handle;
- TAILQ_INIT(&src->callbacks);
- TAILQ_INSERT_TAIL(&intr_sources, src, next);
+ /* The allocator used for the src->intr_handle
+ * instance depends on where the intr_handle
+ * memory was allocated from.
+ */
+ is_rte_memory =
+ !!(rte_intr_instance_alloc_flag_get(
+ intr_handle) & RTE_INTR_INSTANCE_F_SHARED);
+ if (is_rte_memory == 0)
+ src->intr_handle =
+ rte_intr_instance_alloc(
+ RTE_INTR_INSTANCE_F_UNSHARED);
+ else if (is_rte_memory == 1)
+ src->intr_handle =
+ rte_intr_instance_alloc(
+ RTE_INTR_INSTANCE_F_SHARED);
+ else
+ RTE_LOG(ERR, EAL, "Failed to get mem allocator\n");
+
+ if (src->intr_handle == NULL) {
+ RTE_LOG(ERR, EAL, "Can not create intr instance\n");
+ free(callback);
+ ret = -ENOMEM;
+ goto fail;
+ } else {
+ rte_intr_instance_copy(src->intr_handle,
+ intr_handle);
+ TAILQ_INIT(&src->callbacks);
+ TAILQ_INSERT_TAIL(&intr_sources, src,
+ next);
+ }
}
}
@@ -151,7 +180,8 @@ rte_intr_callback_register(const struct rte_intr_handle *intr_handle,
/* add events to the queue. timer events are special as we need to
* re-set the timer.
*/
- if (add_event || src->intr_handle.type == RTE_INTR_HANDLE_ALARM) {
+ if (add_event || rte_intr_type_get(src->intr_handle) ==
+ RTE_INTR_HANDLE_ALARM) {
struct kevent ke;
memset(&ke, 0, sizeof(ke));
@@ -173,12 +203,13 @@ rte_intr_callback_register(const struct rte_intr_handle *intr_handle,
*/
if (errno == ENODEV)
RTE_LOG(DEBUG, EAL, "Interrupt handle %d not supported\n",
- src->intr_handle.fd);
+ rte_intr_fd_get(src->intr_handle));
else
RTE_LOG(ERR, EAL, "Error adding fd %d "
- "kevent, %s\n",
- src->intr_handle.fd,
- strerror(errno));
+ "kevent, %s\n",
+ rte_intr_fd_get(
+ src->intr_handle),
+ strerror(errno));
ret = -errno;
goto fail;
}
@@ -213,7 +244,7 @@ rte_intr_callback_unregister_pending(const struct rte_intr_handle *intr_handle,
struct rte_intr_callback *cb, *next;
/* do parameter checking first */
- if (intr_handle == NULL || intr_handle->fd < 0) {
+ if (rte_intr_fd_get(intr_handle) < 0) {
RTE_LOG(ERR, EAL,
"Unregistering with invalid input parameter\n");
return -EINVAL;
@@ -228,7 +259,8 @@ rte_intr_callback_unregister_pending(const struct rte_intr_handle *intr_handle,
/* check if the insterrupt source for the fd is existent */
TAILQ_FOREACH(src, &intr_sources, next)
- if (src->intr_handle.fd == intr_handle->fd)
+ if (rte_intr_fd_get(src->intr_handle) ==
+ rte_intr_fd_get(intr_handle))
break;
/* No interrupt source registered for the fd */
@@ -268,7 +300,7 @@ rte_intr_callback_unregister(const struct rte_intr_handle *intr_handle,
struct rte_intr_callback *cb, *next;
/* do parameter checking first */
- if (intr_handle == NULL || intr_handle->fd < 0) {
+ if (rte_intr_fd_get(intr_handle) < 0) {
RTE_LOG(ERR, EAL,
"Unregistering with invalid input parameter\n");
return -EINVAL;
@@ -282,7 +314,8 @@ rte_intr_callback_unregister(const struct rte_intr_handle *intr_handle,
/* check if the insterrupt source for the fd is existent */
TAILQ_FOREACH(src, &intr_sources, next)
- if (src->intr_handle.fd == intr_handle->fd)
+ if (rte_intr_fd_get(src->intr_handle) ==
+ rte_intr_fd_get(intr_handle))
break;
/* No interrupt source registered for the fd */
@@ -314,7 +347,8 @@ rte_intr_callback_unregister(const struct rte_intr_handle *intr_handle,
*/
if (kevent(kq, &ke, 1, NULL, 0, NULL) < 0) {
RTE_LOG(ERR, EAL, "Error removing fd %d kevent, %s\n",
- src->intr_handle.fd, strerror(errno));
+ rte_intr_fd_get(src->intr_handle),
+ strerror(errno));
/* removing non-existent even is an expected condition
* in some circumstances (e.g. oneshot events).
*/
@@ -365,17 +399,18 @@ rte_intr_enable(const struct rte_intr_handle *intr_handle)
if (intr_handle == NULL)
return -1;
- if (intr_handle->type == RTE_INTR_HANDLE_VDEV) {
+ if (rte_intr_type_get(intr_handle) == RTE_INTR_HANDLE_VDEV) {
rc = 0;
goto out;
}
- if (intr_handle->fd < 0 || intr_handle->uio_cfg_fd < 0) {
+ if (rte_intr_fd_get(intr_handle) < 0 ||
+ rte_intr_dev_fd_get(intr_handle) < 0) {
rc = -1;
goto out;
}
- switch (intr_handle->type) {
+ switch (rte_intr_type_get(intr_handle)) {
/* not used at this moment */
case RTE_INTR_HANDLE_ALARM:
rc = -1;
@@ -388,7 +423,7 @@ rte_intr_enable(const struct rte_intr_handle *intr_handle)
default:
RTE_LOG(ERR, EAL,
"Unknown handle type of fd %d\n",
- intr_handle->fd);
+ rte_intr_fd_get(intr_handle));
rc = -1;
break;
}
@@ -406,17 +441,18 @@ rte_intr_disable(const struct rte_intr_handle *intr_handle)
if (intr_handle == NULL)
return -1;
- if (intr_handle->type == RTE_INTR_HANDLE_VDEV) {
+ if (rte_intr_type_get(intr_handle) == RTE_INTR_HANDLE_VDEV) {
rc = 0;
goto out;
}
- if (intr_handle->fd < 0 || intr_handle->uio_cfg_fd < 0) {
+ if (rte_intr_fd_get(intr_handle) < 0 ||
+ rte_intr_dev_fd_get(intr_handle) < 0) {
rc = -1;
goto out;
}
- switch (intr_handle->type) {
+ switch (rte_intr_type_get(intr_handle)) {
/* not used at this moment */
case RTE_INTR_HANDLE_ALARM:
rc = -1;
@@ -429,7 +465,7 @@ rte_intr_disable(const struct rte_intr_handle *intr_handle)
default:
RTE_LOG(ERR, EAL,
"Unknown handle type of fd %d\n",
- intr_handle->fd);
+ rte_intr_fd_get(intr_handle));
rc = -1;
break;
}
@@ -441,7 +477,8 @@ rte_intr_disable(const struct rte_intr_handle *intr_handle)
int
rte_intr_ack(const struct rte_intr_handle *intr_handle)
{
- if (intr_handle && intr_handle->type == RTE_INTR_HANDLE_VDEV)
+ if (intr_handle &&
+ rte_intr_type_get(intr_handle) == RTE_INTR_HANDLE_VDEV)
return 0;
return -1;
@@ -463,7 +500,8 @@ eal_intr_process_interrupts(struct kevent *events, int nfds)
rte_spinlock_lock(&intr_lock);
TAILQ_FOREACH(src, &intr_sources, next)
- if (src->intr_handle.fd == event_fd)
+ if (rte_intr_fd_get(src->intr_handle) ==
+ event_fd)
break;
if (src == NULL) {
rte_spinlock_unlock(&intr_lock);
@@ -475,7 +513,7 @@ eal_intr_process_interrupts(struct kevent *events, int nfds)
rte_spinlock_unlock(&intr_lock);
/* set the length to be read dor different handle type */
- switch (src->intr_handle.type) {
+ switch (rte_intr_type_get(src->intr_handle)) {
case RTE_INTR_HANDLE_ALARM:
bytes_read = 0;
call = true;
@@ -546,7 +584,8 @@ eal_intr_process_interrupts(struct kevent *events, int nfds)
/* mark for deletion from the queue */
ke.flags = EV_DELETE;
- if (intr_source_to_kevent(&src->intr_handle, &ke) < 0) {
+ if (intr_source_to_kevent(src->intr_handle,
+ &ke) < 0) {
RTE_LOG(ERR, EAL, "Cannot convert to kevent\n");
rte_spinlock_unlock(&intr_lock);
return;
@@ -557,7 +596,9 @@ eal_intr_process_interrupts(struct kevent *events, int nfds)
*/
if (kevent(kq, &ke, 1, NULL, 0, NULL) < 0) {
RTE_LOG(ERR, EAL, "Error removing fd %d kevent, "
- "%s\n", src->intr_handle.fd,
+ "%s\n",
+ rte_intr_fd_get(
+ src->intr_handle),
strerror(errno));
/* removing non-existent even is an expected
* condition in some circumstances
@@ -567,7 +608,8 @@ eal_intr_process_interrupts(struct kevent *events, int nfds)
TAILQ_REMOVE(&src->callbacks, cb, next);
if (cb->ucb_fn)
- cb->ucb_fn(&src->intr_handle, cb->cb_arg);
+ cb->ucb_fn(src->intr_handle,
+ cb->cb_arg);
free(cb);
}
}
diff --git a/lib/eal/linux/eal_interrupts.c b/lib/eal/linux/eal_interrupts.c
index 22b3b7bcd9..7c8c0617bb 100644
--- a/lib/eal/linux/eal_interrupts.c
+++ b/lib/eal/linux/eal_interrupts.c
@@ -20,6 +20,7 @@
#include <stdbool.h>
#include <rte_common.h>
+#include <rte_epoll.h>
#include <rte_interrupts.h>
#include <rte_memory.h>
#include <rte_launch.h>
@@ -82,7 +83,7 @@ struct rte_intr_callback {
struct rte_intr_source {
TAILQ_ENTRY(rte_intr_source) next;
- struct rte_intr_handle intr_handle; /**< interrupt handle */
+ struct rte_intr_handle *intr_handle; /**< interrupt handle */
struct rte_intr_cb_list callbacks; /**< user callbacks */
uint32_t active;
};
@@ -112,7 +113,7 @@ static int
vfio_enable_intx(const struct rte_intr_handle *intr_handle) {
struct vfio_irq_set *irq_set;
char irq_set_buf[IRQ_SET_BUF_LEN];
- int len, ret;
+ int len, ret, vfio_dev_fd;
int *fd_ptr;
len = sizeof(irq_set_buf);
@@ -125,13 +126,14 @@ vfio_enable_intx(const struct rte_intr_handle *intr_handle) {
irq_set->index = VFIO_PCI_INTX_IRQ_INDEX;
irq_set->start = 0;
fd_ptr = (int *) &irq_set->data;
- *fd_ptr = intr_handle->fd;
+ *fd_ptr = rte_intr_fd_get(intr_handle);
- ret = ioctl(intr_handle->vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
+ vfio_dev_fd = rte_intr_dev_fd_get(intr_handle);
+ ret = ioctl(vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
if (ret) {
RTE_LOG(ERR, EAL, "Error enabling INTx interrupts for fd %d\n",
- intr_handle->fd);
+ rte_intr_fd_get(intr_handle));
return -1;
}
@@ -144,11 +146,11 @@ vfio_enable_intx(const struct rte_intr_handle *intr_handle) {
irq_set->index = VFIO_PCI_INTX_IRQ_INDEX;
irq_set->start = 0;
- ret = ioctl(intr_handle->vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
+ ret = ioctl(vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
if (ret) {
RTE_LOG(ERR, EAL, "Error unmasking INTx interrupts for fd %d\n",
- intr_handle->fd);
+ rte_intr_fd_get(intr_handle));
return -1;
}
return 0;
@@ -159,7 +161,7 @@ static int
vfio_disable_intx(const struct rte_intr_handle *intr_handle) {
struct vfio_irq_set *irq_set;
char irq_set_buf[IRQ_SET_BUF_LEN];
- int len, ret;
+ int len, ret, vfio_dev_fd;
len = sizeof(struct vfio_irq_set);
@@ -171,11 +173,12 @@ vfio_disable_intx(const struct rte_intr_handle *intr_handle) {
irq_set->index = VFIO_PCI_INTX_IRQ_INDEX;
irq_set->start = 0;
- ret = ioctl(intr_handle->vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
+ vfio_dev_fd = rte_intr_dev_fd_get(intr_handle);
+ ret = ioctl(vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
if (ret) {
RTE_LOG(ERR, EAL, "Error masking INTx interrupts for fd %d\n",
- intr_handle->fd);
+ rte_intr_fd_get(intr_handle));
return -1;
}
@@ -187,11 +190,12 @@ vfio_disable_intx(const struct rte_intr_handle *intr_handle) {
irq_set->index = VFIO_PCI_INTX_IRQ_INDEX;
irq_set->start = 0;
- ret = ioctl(intr_handle->vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
+ ret = ioctl(vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
if (ret) {
RTE_LOG(ERR, EAL,
- "Error disabling INTx interrupts for fd %d\n", intr_handle->fd);
+ "Error disabling INTx interrupts for fd %d\n",
+ rte_intr_fd_get(intr_handle));
return -1;
}
return 0;
@@ -202,6 +206,7 @@ static int
vfio_ack_intx(const struct rte_intr_handle *intr_handle)
{
struct vfio_irq_set irq_set;
+ int vfio_dev_fd;
/* unmask INTx */
memset(&irq_set, 0, sizeof(irq_set));
@@ -211,9 +216,10 @@ vfio_ack_intx(const struct rte_intr_handle *intr_handle)
irq_set.index = VFIO_PCI_INTX_IRQ_INDEX;
irq_set.start = 0;
- if (ioctl(intr_handle->vfio_dev_fd, VFIO_DEVICE_SET_IRQS, &irq_set)) {
+ vfio_dev_fd = rte_intr_dev_fd_get(intr_handle);
+ if (ioctl(vfio_dev_fd, VFIO_DEVICE_SET_IRQS, &irq_set)) {
RTE_LOG(ERR, EAL, "Error unmasking INTx interrupts for fd %d\n",
- intr_handle->fd);
+ rte_intr_fd_get(intr_handle));
return -1;
}
return 0;
@@ -225,7 +231,7 @@ vfio_enable_msi(const struct rte_intr_handle *intr_handle) {
int len, ret;
char irq_set_buf[IRQ_SET_BUF_LEN];
struct vfio_irq_set *irq_set;
- int *fd_ptr;
+ int *fd_ptr, vfio_dev_fd;
len = sizeof(irq_set_buf);
@@ -236,13 +242,14 @@ vfio_enable_msi(const struct rte_intr_handle *intr_handle) {
irq_set->index = VFIO_PCI_MSI_IRQ_INDEX;
irq_set->start = 0;
fd_ptr = (int *) &irq_set->data;
- *fd_ptr = intr_handle->fd;
+ *fd_ptr = rte_intr_fd_get(intr_handle);
- ret = ioctl(intr_handle->vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
+ vfio_dev_fd = rte_intr_dev_fd_get(intr_handle);
+ ret = ioctl(vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
if (ret) {
RTE_LOG(ERR, EAL, "Error enabling MSI interrupts for fd %d\n",
- intr_handle->fd);
+ rte_intr_fd_get(intr_handle));
return -1;
}
return 0;
@@ -253,7 +260,7 @@ static int
vfio_disable_msi(const struct rte_intr_handle *intr_handle) {
struct vfio_irq_set *irq_set;
char irq_set_buf[IRQ_SET_BUF_LEN];
- int len, ret;
+ int len, ret, vfio_dev_fd;
len = sizeof(struct vfio_irq_set);
@@ -264,11 +271,13 @@ vfio_disable_msi(const struct rte_intr_handle *intr_handle) {
irq_set->index = VFIO_PCI_MSI_IRQ_INDEX;
irq_set->start = 0;
- ret = ioctl(intr_handle->vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
+ vfio_dev_fd = rte_intr_dev_fd_get(intr_handle);
+ ret = ioctl(vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
if (ret)
RTE_LOG(ERR, EAL,
- "Error disabling MSI interrupts for fd %d\n", intr_handle->fd);
+ "Error disabling MSI interrupts for fd %d\n",
+ rte_intr_fd_get(intr_handle));
return ret;
}
@@ -279,30 +288,34 @@ vfio_enable_msix(const struct rte_intr_handle *intr_handle) {
int len, ret;
char irq_set_buf[MSIX_IRQ_SET_BUF_LEN];
struct vfio_irq_set *irq_set;
- int *fd_ptr;
+ int *fd_ptr, vfio_dev_fd, i;
len = sizeof(irq_set_buf);
irq_set = (struct vfio_irq_set *) irq_set_buf;
irq_set->argsz = len;
/* 0 < irq_set->count < RTE_MAX_RXTX_INTR_VEC_ID + 1 */
- irq_set->count = intr_handle->max_intr ?
- (intr_handle->max_intr > RTE_MAX_RXTX_INTR_VEC_ID + 1 ?
- RTE_MAX_RXTX_INTR_VEC_ID + 1 : intr_handle->max_intr) : 1;
+ irq_set->count = rte_intr_max_intr_get(intr_handle) ?
+ (rte_intr_max_intr_get(intr_handle) >
+ RTE_MAX_RXTX_INTR_VEC_ID + 1 ? RTE_MAX_RXTX_INTR_VEC_ID + 1 :
+ rte_intr_max_intr_get(intr_handle)) : 1;
+
irq_set->flags = VFIO_IRQ_SET_DATA_EVENTFD | VFIO_IRQ_SET_ACTION_TRIGGER;
irq_set->index = VFIO_PCI_MSIX_IRQ_INDEX;
irq_set->start = 0;
fd_ptr = (int *) &irq_set->data;
/* INTR vector offset 0 reserve for non-efds mapping */
- fd_ptr[RTE_INTR_VEC_ZERO_OFFSET] = intr_handle->fd;
- memcpy(&fd_ptr[RTE_INTR_VEC_RXTX_OFFSET], intr_handle->efds,
- sizeof(*intr_handle->efds) * intr_handle->nb_efd);
+ fd_ptr[RTE_INTR_VEC_ZERO_OFFSET] = rte_intr_fd_get(intr_handle);
+ for (i = 0; i < rte_intr_nb_efd_get(intr_handle); i++)
+ fd_ptr[RTE_INTR_VEC_RXTX_OFFSET + i] =
+ rte_intr_efds_index_get(intr_handle, i);
- ret = ioctl(intr_handle->vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
+ vfio_dev_fd = rte_intr_dev_fd_get(intr_handle);
+ ret = ioctl(vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
if (ret) {
RTE_LOG(ERR, EAL, "Error enabling MSI-X interrupts for fd %d\n",
- intr_handle->fd);
+ rte_intr_fd_get(intr_handle));
return -1;
}
@@ -314,7 +327,7 @@ static int
vfio_disable_msix(const struct rte_intr_handle *intr_handle) {
struct vfio_irq_set *irq_set;
char irq_set_buf[MSIX_IRQ_SET_BUF_LEN];
- int len, ret;
+ int len, ret, vfio_dev_fd;
len = sizeof(struct vfio_irq_set);
@@ -325,11 +338,13 @@ vfio_disable_msix(const struct rte_intr_handle *intr_handle) {
irq_set->index = VFIO_PCI_MSIX_IRQ_INDEX;
irq_set->start = 0;
- ret = ioctl(intr_handle->vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
+ vfio_dev_fd = rte_intr_dev_fd_get(intr_handle);
+ ret = ioctl(vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
if (ret)
RTE_LOG(ERR, EAL,
- "Error disabling MSI-X interrupts for fd %d\n", intr_handle->fd);
+ "Error disabling MSI-X interrupts for fd %d\n",
+ rte_intr_fd_get(intr_handle));
return ret;
}
@@ -342,7 +357,7 @@ vfio_enable_req(const struct rte_intr_handle *intr_handle)
int len, ret;
char irq_set_buf[IRQ_SET_BUF_LEN];
struct vfio_irq_set *irq_set;
- int *fd_ptr;
+ int *fd_ptr, vfio_dev_fd;
len = sizeof(irq_set_buf);
@@ -354,13 +369,14 @@ vfio_enable_req(const struct rte_intr_handle *intr_handle)
irq_set->index = VFIO_PCI_REQ_IRQ_INDEX;
irq_set->start = 0;
fd_ptr = (int *) &irq_set->data;
- *fd_ptr = intr_handle->fd;
+ *fd_ptr = rte_intr_fd_get(intr_handle);
- ret = ioctl(intr_handle->vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
+ vfio_dev_fd = rte_intr_dev_fd_get(intr_handle);
+ ret = ioctl(vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
if (ret) {
RTE_LOG(ERR, EAL, "Error enabling req interrupts for fd %d\n",
- intr_handle->fd);
+ rte_intr_fd_get(intr_handle));
return -1;
}
@@ -373,7 +389,7 @@ vfio_disable_req(const struct rte_intr_handle *intr_handle)
{
struct vfio_irq_set *irq_set;
char irq_set_buf[IRQ_SET_BUF_LEN];
- int len, ret;
+ int len, ret, vfio_dev_fd;
len = sizeof(struct vfio_irq_set);
@@ -384,11 +400,12 @@ vfio_disable_req(const struct rte_intr_handle *intr_handle)
irq_set->index = VFIO_PCI_REQ_IRQ_INDEX;
irq_set->start = 0;
- ret = ioctl(intr_handle->vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
+ vfio_dev_fd = rte_intr_dev_fd_get(intr_handle);
+ ret = ioctl(vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
if (ret)
RTE_LOG(ERR, EAL, "Error disabling req interrupts for fd %d\n",
- intr_handle->fd);
+ rte_intr_fd_get(intr_handle));
return ret;
}
@@ -399,20 +416,22 @@ static int
uio_intx_intr_disable(const struct rte_intr_handle *intr_handle)
{
unsigned char command_high;
+ int uio_cfg_fd;
/* use UIO config file descriptor for uio_pci_generic */
- if (pread(intr_handle->uio_cfg_fd, &command_high, 1, 5) != 1) {
+ uio_cfg_fd = rte_intr_dev_fd_get(intr_handle);
+ if (pread(uio_cfg_fd, &command_high, 1, 5) != 1) {
RTE_LOG(ERR, EAL,
"Error reading interrupts status for fd %d\n",
- intr_handle->uio_cfg_fd);
+ uio_cfg_fd);
return -1;
}
/* disable interrupts */
command_high |= 0x4;
- if (pwrite(intr_handle->uio_cfg_fd, &command_high, 1, 5) != 1) {
+ if (pwrite(uio_cfg_fd, &command_high, 1, 5) != 1) {
RTE_LOG(ERR, EAL,
"Error disabling interrupts for fd %d\n",
- intr_handle->uio_cfg_fd);
+ uio_cfg_fd);
return -1;
}
@@ -423,20 +442,22 @@ static int
uio_intx_intr_enable(const struct rte_intr_handle *intr_handle)
{
unsigned char command_high;
+ int uio_cfg_fd;
/* use UIO config file descriptor for uio_pci_generic */
- if (pread(intr_handle->uio_cfg_fd, &command_high, 1, 5) != 1) {
+ uio_cfg_fd = rte_intr_dev_fd_get(intr_handle);
+ if (pread(uio_cfg_fd, &command_high, 1, 5) != 1) {
RTE_LOG(ERR, EAL,
"Error reading interrupts status for fd %d\n",
- intr_handle->uio_cfg_fd);
+ uio_cfg_fd);
return -1;
}
/* enable interrupts */
command_high &= ~0x4;
- if (pwrite(intr_handle->uio_cfg_fd, &command_high, 1, 5) != 1) {
+ if (pwrite(uio_cfg_fd, &command_high, 1, 5) != 1) {
RTE_LOG(ERR, EAL,
"Error enabling interrupts for fd %d\n",
- intr_handle->uio_cfg_fd);
+ uio_cfg_fd);
return -1;
}
@@ -448,10 +469,11 @@ uio_intr_disable(const struct rte_intr_handle *intr_handle)
{
const int value = 0;
- if (write(intr_handle->fd, &value, sizeof(value)) < 0) {
+ if (write(rte_intr_fd_get(intr_handle), &value,
+ sizeof(value)) < 0) {
RTE_LOG(ERR, EAL,
"Error disabling interrupts for fd %d (%s)\n",
- intr_handle->fd, strerror(errno));
+ rte_intr_fd_get(intr_handle), strerror(errno));
return -1;
}
return 0;
@@ -462,10 +484,11 @@ uio_intr_enable(const struct rte_intr_handle *intr_handle)
{
const int value = 1;
- if (write(intr_handle->fd, &value, sizeof(value)) < 0) {
+ if (write(rte_intr_fd_get(intr_handle), &value,
+ sizeof(value)) < 0) {
RTE_LOG(ERR, EAL,
"Error enabling interrupts for fd %d (%s)\n",
- intr_handle->fd, strerror(errno));
+ rte_intr_fd_get(intr_handle), strerror(errno));
return -1;
}
return 0;
@@ -475,14 +498,15 @@ int
rte_intr_callback_register(const struct rte_intr_handle *intr_handle,
rte_intr_callback_fn cb, void *cb_arg)
{
- int ret, wake_thread;
+ int ret, wake_thread, is_rte_memory;
struct rte_intr_source *src;
struct rte_intr_callback *callback;
wake_thread = 0;
/* first do parameter checking */
- if (intr_handle == NULL || intr_handle->fd < 0 || cb == NULL) {
+ if (rte_intr_fd_get(intr_handle) < 0 ||
+ cb == NULL) {
RTE_LOG(ERR, EAL,
"Registering with invalid input parameter\n");
return -EINVAL;
@@ -503,7 +527,8 @@ rte_intr_callback_register(const struct rte_intr_handle *intr_handle,
/* check if there is at least one callback registered for the fd */
TAILQ_FOREACH(src, &intr_sources, next) {
- if (src->intr_handle.fd == intr_handle->fd) {
+ if (rte_intr_fd_get(src->intr_handle) ==
+ rte_intr_fd_get(intr_handle)) {
/* we had no interrupts for this */
if (TAILQ_EMPTY(&src->callbacks))
wake_thread = 1;
@@ -522,12 +547,35 @@ rte_intr_callback_register(const struct rte_intr_handle *intr_handle,
free(callback);
ret = -ENOMEM;
} else {
- src->intr_handle = *intr_handle;
- TAILQ_INIT(&src->callbacks);
- TAILQ_INSERT_TAIL(&(src->callbacks), callback, next);
- TAILQ_INSERT_TAIL(&intr_sources, src, next);
- wake_thread = 1;
- ret = 0;
+ /* The allocator used for the src->intr_handle instance
+ * depends on where the intr_handle memory was allocated from.
+ */
+ is_rte_memory =
+ !!(rte_intr_instance_alloc_flag_get(intr_handle) &
+ RTE_INTR_INSTANCE_F_SHARED);
+ if (is_rte_memory == 0)
+ src->intr_handle = rte_intr_instance_alloc(
+ RTE_INTR_INSTANCE_F_UNSHARED);
+ else if (is_rte_memory == 1)
+ src->intr_handle = rte_intr_instance_alloc(
+ RTE_INTR_INSTANCE_F_SHARED);
+ else
+ RTE_LOG(ERR, EAL, "Failed to get mem allocator\n");
+
+ if (src->intr_handle == NULL) {
+ RTE_LOG(ERR, EAL, "Can not create intr instance\n");
+ free(callback);
+ ret = -ENOMEM;
+ } else {
+ rte_intr_instance_copy(src->intr_handle,
+ intr_handle);
+ TAILQ_INIT(&src->callbacks);
+ TAILQ_INSERT_TAIL(&(src->callbacks), callback,
+ next);
+ TAILQ_INSERT_TAIL(&intr_sources, src, next);
+ wake_thread = 1;
+ ret = 0;
+ }
}
}
@@ -555,7 +603,7 @@ rte_intr_callback_unregister_pending(const struct rte_intr_handle *intr_handle,
struct rte_intr_callback *cb, *next;
/* do parameter checking first */
- if (intr_handle == NULL || intr_handle->fd < 0) {
+ if (rte_intr_fd_get(intr_handle) < 0) {
RTE_LOG(ERR, EAL,
"Unregistering with invalid input parameter\n");
return -EINVAL;
@@ -565,7 +613,8 @@ rte_intr_callback_unregister_pending(const struct rte_intr_handle *intr_handle,
/* check if the interrupt source for the fd is existent */
TAILQ_FOREACH(src, &intr_sources, next)
- if (src->intr_handle.fd == intr_handle->fd)
+ if (rte_intr_fd_get(src->intr_handle) ==
+ rte_intr_fd_get(intr_handle))
break;
/* No interrupt source registered for the fd */
@@ -605,7 +654,7 @@ rte_intr_callback_unregister(const struct rte_intr_handle *intr_handle,
struct rte_intr_callback *cb, *next;
/* do parameter checking first */
- if (intr_handle == NULL || intr_handle->fd < 0) {
+ if (rte_intr_fd_get(intr_handle) < 0) {
RTE_LOG(ERR, EAL,
"Unregistering with invalid input parameter\n");
return -EINVAL;
@@ -615,7 +664,8 @@ rte_intr_callback_unregister(const struct rte_intr_handle *intr_handle,
/* check if the interrupt source for the fd is existent */
TAILQ_FOREACH(src, &intr_sources, next)
- if (src->intr_handle.fd == intr_handle->fd)
+ if (rte_intr_fd_get(src->intr_handle) ==
+ rte_intr_fd_get(intr_handle))
break;
/* No interrupt source registered for the fd */
@@ -646,6 +696,7 @@ rte_intr_callback_unregister(const struct rte_intr_handle *intr_handle,
/* all callbacks for that source are removed. */
if (TAILQ_EMPTY(&src->callbacks)) {
TAILQ_REMOVE(&intr_sources, src, next);
+ rte_intr_instance_free(src->intr_handle);
free(src);
}
}
@@ -677,22 +728,23 @@ rte_intr_callback_unregister_sync(const struct rte_intr_handle *intr_handle,
int
rte_intr_enable(const struct rte_intr_handle *intr_handle)
{
- int rc = 0;
+ int rc = 0, uio_cfg_fd;
if (intr_handle == NULL)
return -1;
- if (intr_handle->type == RTE_INTR_HANDLE_VDEV) {
+ if (rte_intr_type_get(intr_handle) == RTE_INTR_HANDLE_VDEV) {
rc = 0;
goto out;
}
- if (intr_handle->fd < 0 || intr_handle->uio_cfg_fd < 0) {
+ uio_cfg_fd = rte_intr_dev_fd_get(intr_handle);
+ if (rte_intr_fd_get(intr_handle) < 0 || uio_cfg_fd < 0) {
rc = -1;
goto out;
}
- switch (intr_handle->type){
+ switch (rte_intr_type_get(intr_handle)) {
/* write to the uio fd to enable the interrupt */
case RTE_INTR_HANDLE_UIO:
if (uio_intr_enable(intr_handle))
@@ -734,7 +786,7 @@ rte_intr_enable(const struct rte_intr_handle *intr_handle)
default:
RTE_LOG(ERR, EAL,
"Unknown handle type of fd %d\n",
- intr_handle->fd);
+ rte_intr_fd_get(intr_handle));
rc = -1;
break;
}
@@ -757,13 +809,17 @@ rte_intr_enable(const struct rte_intr_handle *intr_handle)
int
rte_intr_ack(const struct rte_intr_handle *intr_handle)
{
- if (intr_handle && intr_handle->type == RTE_INTR_HANDLE_VDEV)
+ int uio_cfg_fd;
+
+ if (intr_handle && rte_intr_type_get(intr_handle) ==
+ RTE_INTR_HANDLE_VDEV)
return 0;
- if (!intr_handle || intr_handle->fd < 0 || intr_handle->uio_cfg_fd < 0)
+ uio_cfg_fd = rte_intr_dev_fd_get(intr_handle);
+ if (!intr_handle || rte_intr_fd_get(intr_handle) < 0 || uio_cfg_fd < 0)
return -1;
- switch (intr_handle->type) {
+ switch (rte_intr_type_get(intr_handle)) {
/* Both acking and enabling are same for UIO */
case RTE_INTR_HANDLE_UIO:
if (uio_intr_enable(intr_handle))
@@ -796,7 +852,7 @@ rte_intr_ack(const struct rte_intr_handle *intr_handle)
/* unknown handle type */
default:
RTE_LOG(ERR, EAL, "Unknown handle type of fd %d\n",
- intr_handle->fd);
+ rte_intr_fd_get(intr_handle));
return -1;
}
@@ -806,22 +862,23 @@ rte_intr_ack(const struct rte_intr_handle *intr_handle)
int
rte_intr_disable(const struct rte_intr_handle *intr_handle)
{
- int rc = 0;
+ int rc = 0, uio_cfg_fd;
if (intr_handle == NULL)
return -1;
- if (intr_handle->type == RTE_INTR_HANDLE_VDEV) {
+ if (rte_intr_type_get(intr_handle) == RTE_INTR_HANDLE_VDEV) {
rc = 0;
goto out;
}
- if (intr_handle->fd < 0 || intr_handle->uio_cfg_fd < 0) {
+ uio_cfg_fd = rte_intr_dev_fd_get(intr_handle);
+ if (rte_intr_fd_get(intr_handle) < 0 || uio_cfg_fd < 0) {
rc = -1;
goto out;
}
- switch (intr_handle->type){
+ switch (rte_intr_type_get(intr_handle)) {
/* write to the uio fd to disable the interrupt */
case RTE_INTR_HANDLE_UIO:
if (uio_intr_disable(intr_handle))
@@ -863,7 +920,7 @@ rte_intr_disable(const struct rte_intr_handle *intr_handle)
default:
RTE_LOG(ERR, EAL,
"Unknown handle type of fd %d\n",
- intr_handle->fd);
+ rte_intr_fd_get(intr_handle));
rc = -1;
break;
}
@@ -896,7 +953,7 @@ eal_intr_process_interrupts(struct epoll_event *events, int nfds)
}
rte_spinlock_lock(&intr_lock);
TAILQ_FOREACH(src, &intr_sources, next)
- if (src->intr_handle.fd ==
+ if (rte_intr_fd_get(src->intr_handle) ==
events[n].data.fd)
break;
if (src == NULL){
@@ -909,7 +966,7 @@ eal_intr_process_interrupts(struct epoll_event *events, int nfds)
rte_spinlock_unlock(&intr_lock);
/* set the length to be read for different handle type */
- switch (src->intr_handle.type) {
+ switch (rte_intr_type_get(src->intr_handle)) {
case RTE_INTR_HANDLE_UIO:
case RTE_INTR_HANDLE_UIO_INTX:
bytes_read = sizeof(buf.uio_intr_count);
@@ -973,6 +1030,7 @@ eal_intr_process_interrupts(struct epoll_event *events, int nfds)
TAILQ_REMOVE(&src->callbacks, cb, next);
free(cb);
}
+ rte_intr_instance_free(src->intr_handle);
free(src);
return -1;
} else if (bytes_read == 0)
@@ -1012,7 +1070,8 @@ eal_intr_process_interrupts(struct epoll_event *events, int nfds)
if (cb->pending_delete) {
TAILQ_REMOVE(&src->callbacks, cb, next);
if (cb->ucb_fn)
- cb->ucb_fn(&src->intr_handle, cb->cb_arg);
+ cb->ucb_fn(src->intr_handle,
+ cb->cb_arg);
free(cb);
rv++;
}
@@ -1021,6 +1080,7 @@ eal_intr_process_interrupts(struct epoll_event *events, int nfds)
/* all callbacks for that source are removed. */
if (TAILQ_EMPTY(&src->callbacks)) {
TAILQ_REMOVE(&intr_sources, src, next);
+ rte_intr_instance_free(src->intr_handle);
free(src);
}
@@ -1123,16 +1183,18 @@ eal_intr_thread_main(__rte_unused void *arg)
continue; /* skip those with no callbacks */
memset(&ev, 0, sizeof(ev));
ev.events = EPOLLIN | EPOLLPRI | EPOLLRDHUP | EPOLLHUP;
- ev.data.fd = src->intr_handle.fd;
+ ev.data.fd = rte_intr_fd_get(src->intr_handle);
/**
* add all the uio device file descriptor
* into wait list.
*/
if (epoll_ctl(pfd, EPOLL_CTL_ADD,
- src->intr_handle.fd, &ev) < 0){
+ rte_intr_fd_get(src->intr_handle),
+ &ev) < 0) {
rte_panic("Error adding fd %d epoll_ctl, %s\n",
- src->intr_handle.fd, strerror(errno));
+ rte_intr_fd_get(src->intr_handle),
+ strerror(errno));
}
else
numfds++;
@@ -1185,7 +1247,7 @@ eal_intr_proc_rxtx_intr(int fd, const struct rte_intr_handle *intr_handle)
int bytes_read = 0;
int nbytes;
- switch (intr_handle->type) {
+ switch (rte_intr_type_get(intr_handle)) {
case RTE_INTR_HANDLE_UIO:
case RTE_INTR_HANDLE_UIO_INTX:
bytes_read = sizeof(buf.uio_intr_count);
@@ -1198,7 +1260,7 @@ eal_intr_proc_rxtx_intr(int fd, const struct rte_intr_handle *intr_handle)
break;
#endif
case RTE_INTR_HANDLE_VDEV:
- bytes_read = intr_handle->efd_counter_size;
+ bytes_read = rte_intr_efd_counter_size_get(intr_handle);
/* For vdev, number of bytes to read is set by driver */
break;
case RTE_INTR_HANDLE_EXT:
@@ -1419,8 +1481,8 @@ rte_intr_rx_ctl(struct rte_intr_handle *intr_handle, int epfd,
efd_idx = (vec >= RTE_INTR_VEC_RXTX_OFFSET) ?
(vec - RTE_INTR_VEC_RXTX_OFFSET) : vec;
- if (!intr_handle || intr_handle->nb_efd == 0 ||
- efd_idx >= intr_handle->nb_efd) {
+ if (!intr_handle || rte_intr_nb_efd_get(intr_handle) == 0 ||
+ efd_idx >= (unsigned int)rte_intr_nb_efd_get(intr_handle)) {
RTE_LOG(ERR, EAL, "Wrong intr vector number.\n");
return -EPERM;
}
@@ -1428,7 +1490,7 @@ rte_intr_rx_ctl(struct rte_intr_handle *intr_handle, int epfd,
switch (op) {
case RTE_INTR_EVENT_ADD:
epfd_op = EPOLL_CTL_ADD;
- rev = &intr_handle->elist[efd_idx];
+ rev = rte_intr_elist_index_get(intr_handle, efd_idx);
if (__atomic_load_n(&rev->status,
__ATOMIC_RELAXED) != RTE_EPOLL_INVALID) {
RTE_LOG(INFO, EAL, "Event already been added.\n");
@@ -1442,7 +1504,9 @@ rte_intr_rx_ctl(struct rte_intr_handle *intr_handle, int epfd,
epdata->cb_fun = (rte_intr_event_cb_t)eal_intr_proc_rxtx_intr;
epdata->cb_arg = (void *)intr_handle;
rc = rte_epoll_ctl(epfd, epfd_op,
- intr_handle->efds[efd_idx], rev);
+ rte_intr_efds_index_get(intr_handle,
+ efd_idx),
+ rev);
if (!rc)
RTE_LOG(DEBUG, EAL,
"efd %d associated with vec %d added on epfd %d"
@@ -1452,7 +1516,7 @@ rte_intr_rx_ctl(struct rte_intr_handle *intr_handle, int epfd,
break;
case RTE_INTR_EVENT_DEL:
epfd_op = EPOLL_CTL_DEL;
- rev = &intr_handle->elist[efd_idx];
+ rev = rte_intr_elist_index_get(intr_handle, efd_idx);
if (__atomic_load_n(&rev->status,
__ATOMIC_RELAXED) == RTE_EPOLL_INVALID) {
RTE_LOG(INFO, EAL, "Event does not exist.\n");
@@ -1477,8 +1541,9 @@ rte_intr_free_epoll_fd(struct rte_intr_handle *intr_handle)
uint32_t i;
struct rte_epoll_event *rev;
- for (i = 0; i < intr_handle->nb_efd; i++) {
- rev = &intr_handle->elist[i];
+ for (i = 0; i < (uint32_t)rte_intr_nb_efd_get(intr_handle);
+ i++) {
+ rev = rte_intr_elist_index_get(intr_handle, i);
if (__atomic_load_n(&rev->status,
__ATOMIC_RELAXED) == RTE_EPOLL_INVALID)
continue;
@@ -1498,7 +1563,7 @@ rte_intr_efd_enable(struct rte_intr_handle *intr_handle, uint32_t nb_efd)
assert(nb_efd != 0);
- if (intr_handle->type == RTE_INTR_HANDLE_VFIO_MSIX) {
+ if (rte_intr_type_get(intr_handle) == RTE_INTR_HANDLE_VFIO_MSIX) {
for (i = 0; i < n; i++) {
fd = eventfd(0, EFD_NONBLOCK | EFD_CLOEXEC);
if (fd < 0) {
@@ -1507,21 +1572,32 @@ rte_intr_efd_enable(struct rte_intr_handle *intr_handle, uint32_t nb_efd)
errno, strerror(errno));
return -errno;
}
- intr_handle->efds[i] = fd;
+
+ if (rte_intr_efds_index_set(intr_handle, i, fd))
+ return -rte_errno;
}
- intr_handle->nb_efd = n;
- intr_handle->max_intr = NB_OTHER_INTR + n;
- } else if (intr_handle->type == RTE_INTR_HANDLE_VDEV) {
+
+ if (rte_intr_nb_efd_set(intr_handle, n))
+ return -rte_errno;
+
+ if (rte_intr_max_intr_set(intr_handle, NB_OTHER_INTR + n))
+ return -rte_errno;
+ } else if (rte_intr_type_get(intr_handle) == RTE_INTR_HANDLE_VDEV) {
/* only check, initialization would be done in vdev driver.*/
- if (intr_handle->efd_counter_size >
+ if ((uint64_t)rte_intr_efd_counter_size_get(intr_handle) >
sizeof(union rte_intr_read_buffer)) {
RTE_LOG(ERR, EAL, "the efd_counter_size is oversized");
return -EINVAL;
}
} else {
- intr_handle->efds[0] = intr_handle->fd;
- intr_handle->nb_efd = RTE_MIN(nb_efd, 1U);
- intr_handle->max_intr = NB_OTHER_INTR;
+ if (rte_intr_efds_index_set(intr_handle, 0,
+ rte_intr_fd_get(intr_handle)))
+ return -rte_errno;
+ if (rte_intr_nb_efd_set(intr_handle,
+ RTE_MIN(nb_efd, 1U)))
+ return -rte_errno;
+ if (rte_intr_max_intr_set(intr_handle, NB_OTHER_INTR))
+ return -rte_errno;
}
return 0;
@@ -1533,18 +1609,20 @@ rte_intr_efd_disable(struct rte_intr_handle *intr_handle)
uint32_t i;
rte_intr_free_epoll_fd(intr_handle);
- if (intr_handle->max_intr > intr_handle->nb_efd) {
- for (i = 0; i < intr_handle->nb_efd; i++)
- close(intr_handle->efds[i]);
+ if (rte_intr_max_intr_get(intr_handle) >
+ rte_intr_nb_efd_get(intr_handle)) {
+ for (i = 0; i <
+ (uint32_t)rte_intr_nb_efd_get(intr_handle); i++)
+ close(rte_intr_efds_index_get(intr_handle, i));
}
- intr_handle->nb_efd = 0;
- intr_handle->max_intr = 0;
+ rte_intr_nb_efd_set(intr_handle, 0);
+ rte_intr_max_intr_set(intr_handle, 0);
}
int
rte_intr_dp_is_en(struct rte_intr_handle *intr_handle)
{
- return !(!intr_handle->nb_efd);
+ return !(!rte_intr_nb_efd_get(intr_handle));
}
int
@@ -1553,16 +1631,17 @@ rte_intr_allow_others(struct rte_intr_handle *intr_handle)
if (!rte_intr_dp_is_en(intr_handle))
return 1;
else
- return !!(intr_handle->max_intr - intr_handle->nb_efd);
+ return !!(rte_intr_max_intr_get(intr_handle) -
+ rte_intr_nb_efd_get(intr_handle));
}
int
rte_intr_cap_multiple(struct rte_intr_handle *intr_handle)
{
- if (intr_handle->type == RTE_INTR_HANDLE_VFIO_MSIX)
+ if (rte_intr_type_get(intr_handle) == RTE_INTR_HANDLE_VFIO_MSIX)
return 1;
- if (intr_handle->type == RTE_INTR_HANDLE_VDEV)
+ if (rte_intr_type_get(intr_handle) == RTE_INTR_HANDLE_VDEV)
return 1;
return 0;
--
2.18.0
* [dpdk-dev] [PATCH v5 3/6] test/interrupt: apply get set interrupt handle APIs
2021-10-22 20:49 ` [dpdk-dev] [PATCH v5 0/6] make rte_intr_handle internal Harman Kalra
2021-10-22 20:49 ` [dpdk-dev] [PATCH v5 1/6] eal/interrupts: implement get set APIs Harman Kalra
2021-10-22 20:49 ` [dpdk-dev] [PATCH v5 2/6] eal/interrupts: avoid direct access to interrupt handle Harman Kalra
@ 2021-10-22 20:49 ` Harman Kalra
2021-10-22 20:49 ` [dpdk-dev] [PATCH v5 4/6] drivers: remove direct access to interrupt handle Harman Kalra
` (6 subsequent siblings)
9 siblings, 0 replies; 152+ messages in thread
From: Harman Kalra @ 2021-10-22 20:49 UTC (permalink / raw)
To: dev, Harman Kalra; +Cc: david.marchand, dmitry.kozliuk, mdr, thomas
Updating the interrupt test suite to make use of the
interrupt handle get/set APIs.
Signed-off-by: Harman Kalra <hkalra@marvell.com>
---
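For reference, a minimal sketch of the handle life cycle the converted
test cases follow: allocate an instance, set its fd and type through the
new accessors, use it, then free it. The helper name below is purely
illustrative; the alloc/set/free calls are the APIs introduced earlier in
this series.

#include <rte_interrupts.h>

/* Illustrative helper: wrap an existing file descriptor in an
 * interrupt handle instance using the accessor APIs.
 */
static struct rte_intr_handle *
make_test_handle(int fd)
{
	struct rte_intr_handle *handle;

	handle = rte_intr_instance_alloc(RTE_INTR_INSTANCE_F_UNSHARED);
	if (handle == NULL)
		return NULL;

	if (rte_intr_fd_set(handle, fd) ||
	    rte_intr_type_set(handle, RTE_INTR_HANDLE_UIO)) {
		rte_intr_instance_free(handle);
		return NULL;
	}

	/* The caller releases the instance with rte_intr_instance_free(). */
	return handle;
}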
app/test/test_interrupts.c | 163 ++++++++++++++++++++++---------------
1 file changed, 98 insertions(+), 65 deletions(-)
diff --git a/app/test/test_interrupts.c b/app/test/test_interrupts.c
index 233b14a70b..56c33fb9c4 100644
--- a/app/test/test_interrupts.c
+++ b/app/test/test_interrupts.c
@@ -16,7 +16,7 @@
/* predefined interrupt handle types */
enum test_interrupt_handle_type {
- TEST_INTERRUPT_HANDLE_INVALID,
+ TEST_INTERRUPT_HANDLE_INVALID = 0,
TEST_INTERRUPT_HANDLE_VALID,
TEST_INTERRUPT_HANDLE_VALID_UIO,
TEST_INTERRUPT_HANDLE_VALID_ALARM,
@@ -27,7 +27,7 @@ enum test_interrupt_handle_type {
/* flag of if callback is called */
static volatile int flag;
-static struct rte_intr_handle intr_handles[TEST_INTERRUPT_HANDLE_MAX];
+static struct rte_intr_handle *intr_handles[TEST_INTERRUPT_HANDLE_MAX];
static enum test_interrupt_handle_type test_intr_type =
TEST_INTERRUPT_HANDLE_MAX;
@@ -50,7 +50,7 @@ static union intr_pipefds pfds;
static inline int
test_interrupt_handle_sanity_check(struct rte_intr_handle *intr_handle)
{
- if (!intr_handle || intr_handle->fd < 0)
+ if (!intr_handle || rte_intr_fd_get(intr_handle) < 0)
return -1;
return 0;
@@ -62,31 +62,55 @@ test_interrupt_handle_sanity_check(struct rte_intr_handle *intr_handle)
static int
test_interrupt_init(void)
{
+ struct rte_intr_handle *test_intr_handle;
+ int i;
+
if (pipe(pfds.pipefd) < 0)
return -1;
- intr_handles[TEST_INTERRUPT_HANDLE_INVALID].fd = -1;
- intr_handles[TEST_INTERRUPT_HANDLE_INVALID].type =
- RTE_INTR_HANDLE_UNKNOWN;
+ for (i = 0; i < TEST_INTERRUPT_HANDLE_MAX; i++) {
+ intr_handles[i] =
+ rte_intr_instance_alloc(RTE_INTR_INSTANCE_F_UNSHARED);
+ if (!intr_handles[i])
+ return -1;
+ }
+
+ test_intr_handle = intr_handles[TEST_INTERRUPT_HANDLE_INVALID];
+ if (rte_intr_fd_set(test_intr_handle, -1))
+ return -1;
+ if (rte_intr_type_set(test_intr_handle, RTE_INTR_HANDLE_UNKNOWN))
+ return -1;
- intr_handles[TEST_INTERRUPT_HANDLE_VALID].fd = pfds.readfd;
- intr_handles[TEST_INTERRUPT_HANDLE_VALID].type =
- RTE_INTR_HANDLE_UNKNOWN;
+ test_intr_handle = intr_handles[TEST_INTERRUPT_HANDLE_VALID];
+ if (rte_intr_fd_set(test_intr_handle, pfds.readfd))
+ return -1;
+ if (rte_intr_type_set(test_intr_handle, RTE_INTR_HANDLE_UNKNOWN))
+ return -1;
- intr_handles[TEST_INTERRUPT_HANDLE_VALID_UIO].fd = pfds.readfd;
- intr_handles[TEST_INTERRUPT_HANDLE_VALID_UIO].type =
- RTE_INTR_HANDLE_UIO;
+ test_intr_handle = intr_handles[TEST_INTERRUPT_HANDLE_VALID_UIO];
+ if (rte_intr_fd_set(test_intr_handle, pfds.readfd))
+ return -1;
+ if (rte_intr_type_set(test_intr_handle, RTE_INTR_HANDLE_UIO))
+ return -1;
- intr_handles[TEST_INTERRUPT_HANDLE_VALID_ALARM].fd = pfds.readfd;
- intr_handles[TEST_INTERRUPT_HANDLE_VALID_ALARM].type =
- RTE_INTR_HANDLE_ALARM;
+ test_intr_handle = intr_handles[TEST_INTERRUPT_HANDLE_VALID_ALARM];
+ if (rte_intr_fd_set(test_intr_handle, pfds.readfd))
+ return -1;
+ if (rte_intr_type_set(test_intr_handle, RTE_INTR_HANDLE_ALARM))
+ return -1;
- intr_handles[TEST_INTERRUPT_HANDLE_VALID_DEV_EVENT].fd = pfds.readfd;
- intr_handles[TEST_INTERRUPT_HANDLE_VALID_DEV_EVENT].type =
- RTE_INTR_HANDLE_DEV_EVENT;
+ test_intr_handle = intr_handles[TEST_INTERRUPT_HANDLE_VALID_DEV_EVENT];
+ if (rte_intr_fd_set(test_intr_handle, pfds.readfd))
+ return -1;
+ if (rte_intr_type_set(test_intr_handle,
+ RTE_INTR_HANDLE_DEV_EVENT))
+ return -1;
- intr_handles[TEST_INTERRUPT_HANDLE_CASE1].fd = pfds.writefd;
- intr_handles[TEST_INTERRUPT_HANDLE_CASE1].type = RTE_INTR_HANDLE_UIO;
+ test_intr_handle = intr_handles[TEST_INTERRUPT_HANDLE_CASE1];
+ if (rte_intr_fd_set(test_intr_handle, pfds.writefd))
+ return -1;
+ if (rte_intr_type_set(test_intr_handle, RTE_INTR_HANDLE_UIO))
+ return -1;
return 0;
}
@@ -97,6 +121,10 @@ test_interrupt_init(void)
static int
test_interrupt_deinit(void)
{
+ int i;
+
+ for (i = 0; i < TEST_INTERRUPT_HANDLE_MAX; i++)
+ rte_intr_instance_free(intr_handles[i]);
close(pfds.pipefd[0]);
close(pfds.pipefd[1]);
@@ -125,8 +153,10 @@ test_interrupt_handle_compare(struct rte_intr_handle *intr_handle_l,
if (!intr_handle_l || !intr_handle_r)
return -1;
- if (intr_handle_l->fd != intr_handle_r->fd ||
- intr_handle_l->type != intr_handle_r->type)
+ if (rte_intr_fd_get(intr_handle_l) !=
+ rte_intr_fd_get(intr_handle_r) ||
+ rte_intr_type_get(intr_handle_l) !=
+ rte_intr_type_get(intr_handle_r))
return -1;
return 0;
@@ -178,6 +208,8 @@ static void
test_interrupt_callback(void *arg)
{
struct rte_intr_handle *intr_handle = arg;
+ struct rte_intr_handle *test_intr_handle;
+
if (test_intr_type >= TEST_INTERRUPT_HANDLE_MAX) {
printf("invalid interrupt type\n");
flag = -1;
@@ -198,8 +230,8 @@ test_interrupt_callback(void *arg)
return;
}
- if (test_interrupt_handle_compare(intr_handle,
- &(intr_handles[test_intr_type])) == 0)
+ test_intr_handle = intr_handles[test_intr_type];
+ if (test_interrupt_handle_compare(intr_handle, test_intr_handle) == 0)
flag = 1;
}
@@ -223,7 +255,7 @@ test_interrupt_callback_1(void *arg)
static int
test_interrupt_enable(void)
{
- struct rte_intr_handle test_intr_handle;
+ struct rte_intr_handle *test_intr_handle;
/* check with null intr_handle */
if (rte_intr_enable(NULL) == 0) {
@@ -233,7 +265,7 @@ test_interrupt_enable(void)
/* check with invalid intr_handle */
test_intr_handle = intr_handles[TEST_INTERRUPT_HANDLE_INVALID];
- if (rte_intr_enable(&test_intr_handle) == 0) {
+ if (rte_intr_enable(test_intr_handle) == 0) {
printf("unexpectedly enable invalid intr_handle "
"successfully\n");
return -1;
@@ -241,7 +273,7 @@ test_interrupt_enable(void)
/* check with valid intr_handle */
test_intr_handle = intr_handles[TEST_INTERRUPT_HANDLE_VALID];
- if (rte_intr_enable(&test_intr_handle) == 0) {
+ if (rte_intr_enable(test_intr_handle) == 0) {
printf("unexpectedly enable a specific intr_handle "
"successfully\n");
return -1;
@@ -249,7 +281,7 @@ test_interrupt_enable(void)
/* check with specific valid intr_handle */
test_intr_handle = intr_handles[TEST_INTERRUPT_HANDLE_VALID_ALARM];
- if (rte_intr_enable(&test_intr_handle) == 0) {
+ if (rte_intr_enable(test_intr_handle) == 0) {
printf("unexpectedly enable a specific intr_handle "
"successfully\n");
return -1;
@@ -257,7 +289,7 @@ test_interrupt_enable(void)
/* check with specific valid intr_handle */
test_intr_handle = intr_handles[TEST_INTERRUPT_HANDLE_VALID_DEV_EVENT];
- if (rte_intr_enable(&test_intr_handle) == 0) {
+ if (rte_intr_enable(test_intr_handle) == 0) {
printf("unexpectedly enable a specific intr_handle "
"successfully\n");
return -1;
@@ -265,13 +297,13 @@ test_interrupt_enable(void)
/* check with valid handler and its type */
test_intr_handle = intr_handles[TEST_INTERRUPT_HANDLE_CASE1];
- if (rte_intr_enable(&test_intr_handle) < 0) {
+ if (rte_intr_enable(test_intr_handle) < 0) {
printf("fail to enable interrupt on a simulated handler\n");
return -1;
}
test_intr_handle = intr_handles[TEST_INTERRUPT_HANDLE_VALID_UIO];
- if (rte_intr_enable(&test_intr_handle) == 0) {
+ if (rte_intr_enable(test_intr_handle) == 0) {
printf("unexpectedly enable a specific intr_handle "
"successfully\n");
return -1;
@@ -286,7 +318,7 @@ test_interrupt_enable(void)
static int
test_interrupt_disable(void)
{
- struct rte_intr_handle test_intr_handle;
+ struct rte_intr_handle *test_intr_handle;
/* check with null intr_handle */
if (rte_intr_disable(NULL) == 0) {
@@ -297,7 +329,7 @@ test_interrupt_disable(void)
/* check with invalid intr_handle */
test_intr_handle = intr_handles[TEST_INTERRUPT_HANDLE_INVALID];
- if (rte_intr_disable(&test_intr_handle) == 0) {
+ if (rte_intr_disable(test_intr_handle) == 0) {
printf("unexpectedly disable invalid intr_handle "
"successfully\n");
return -1;
@@ -305,7 +337,7 @@ test_interrupt_disable(void)
/* check with valid intr_handle */
test_intr_handle = intr_handles[TEST_INTERRUPT_HANDLE_VALID];
- if (rte_intr_disable(&test_intr_handle) == 0) {
+ if (rte_intr_disable(test_intr_handle) == 0) {
printf("unexpectedly disable a specific intr_handle "
"successfully\n");
return -1;
@@ -313,7 +345,7 @@ test_interrupt_disable(void)
/* check with specific valid intr_handle */
test_intr_handle = intr_handles[TEST_INTERRUPT_HANDLE_VALID_ALARM];
- if (rte_intr_disable(&test_intr_handle) == 0) {
+ if (rte_intr_disable(test_intr_handle) == 0) {
printf("unexpectedly disable a specific intr_handle "
"successfully\n");
return -1;
@@ -321,7 +353,7 @@ test_interrupt_disable(void)
/* check with specific valid intr_handle */
test_intr_handle = intr_handles[TEST_INTERRUPT_HANDLE_VALID_DEV_EVENT];
- if (rte_intr_disable(&test_intr_handle) == 0) {
+ if (rte_intr_disable(test_intr_handle) == 0) {
printf("unexpectedly disable a specific intr_handle "
"successfully\n");
return -1;
@@ -329,13 +361,13 @@ test_interrupt_disable(void)
/* check with valid handler and its type */
test_intr_handle = intr_handles[TEST_INTERRUPT_HANDLE_CASE1];
- if (rte_intr_disable(&test_intr_handle) < 0) {
+ if (rte_intr_disable(test_intr_handle) < 0) {
printf("fail to disable interrupt on a simulated handler\n");
return -1;
}
test_intr_handle = intr_handles[TEST_INTERRUPT_HANDLE_VALID_UIO];
- if (rte_intr_disable(&test_intr_handle) == 0) {
+ if (rte_intr_disable(test_intr_handle) == 0) {
printf("unexpectedly disable a specific intr_handle "
"successfully\n");
return -1;
@@ -351,13 +383,13 @@ static int
test_interrupt_full_path_check(enum test_interrupt_handle_type intr_type)
{
int count;
- struct rte_intr_handle test_intr_handle;
+ struct rte_intr_handle *test_intr_handle;
flag = 0;
test_intr_handle = intr_handles[intr_type];
test_intr_type = intr_type;
- if (rte_intr_callback_register(&test_intr_handle,
- test_interrupt_callback, &test_intr_handle) < 0) {
+ if (rte_intr_callback_register(test_intr_handle,
+ test_interrupt_callback, test_intr_handle) < 0) {
printf("fail to register callback\n");
return -1;
}
@@ -371,9 +403,9 @@ test_interrupt_full_path_check(enum test_interrupt_handle_type intr_type)
rte_delay_ms(TEST_INTERRUPT_CHECK_INTERVAL);
while ((count =
- rte_intr_callback_unregister(&test_intr_handle,
+ rte_intr_callback_unregister(test_intr_handle,
test_interrupt_callback,
- &test_intr_handle)) < 0) {
+ test_intr_handle)) < 0) {
if (count != -EAGAIN)
return -1;
}
@@ -396,7 +428,7 @@ static int
test_interrupt(void)
{
int ret = -1;
- struct rte_intr_handle test_intr_handle;
+ struct rte_intr_handle *test_intr_handle;
if (test_interrupt_init() < 0) {
printf("fail to initialize for testing interrupt\n");
@@ -445,8 +477,8 @@ test_interrupt(void)
/* check if it will fail to register cb with invalid intr_handle */
test_intr_handle = intr_handles[TEST_INTERRUPT_HANDLE_INVALID];
- if (rte_intr_callback_register(&test_intr_handle,
- test_interrupt_callback, &test_intr_handle) == 0) {
+ if (rte_intr_callback_register(test_intr_handle,
+ test_interrupt_callback, test_intr_handle) == 0) {
printf("unexpectedly register successfully with invalid "
"intr_handle\n");
goto out;
@@ -454,7 +486,8 @@ test_interrupt(void)
/* check if it will fail to register without callback */
test_intr_handle = intr_handles[TEST_INTERRUPT_HANDLE_VALID];
- if (rte_intr_callback_register(&test_intr_handle, NULL, &test_intr_handle) == 0) {
+ if (rte_intr_callback_register(test_intr_handle, NULL,
+ test_intr_handle) == 0) {
printf("unexpectedly register successfully with "
"null callback\n");
goto out;
@@ -470,8 +503,8 @@ test_interrupt(void)
/* check if it will fail to unregister cb with invalid intr_handle */
test_intr_handle = intr_handles[TEST_INTERRUPT_HANDLE_INVALID];
- if (rte_intr_callback_unregister(&test_intr_handle,
- test_interrupt_callback, &test_intr_handle) > 0) {
+ if (rte_intr_callback_unregister(test_intr_handle,
+ test_interrupt_callback, test_intr_handle) > 0) {
printf("unexpectedly unregister successfully with "
"invalid intr_handle\n");
goto out;
@@ -479,29 +512,29 @@ test_interrupt(void)
/* check if it is ok to register the same intr_handle twice */
test_intr_handle = intr_handles[TEST_INTERRUPT_HANDLE_VALID];
- if (rte_intr_callback_register(&test_intr_handle,
- test_interrupt_callback, &test_intr_handle) < 0) {
+ if (rte_intr_callback_register(test_intr_handle,
+ test_interrupt_callback, test_intr_handle) < 0) {
printf("it fails to register test_interrupt_callback\n");
goto out;
}
- if (rte_intr_callback_register(&test_intr_handle,
- test_interrupt_callback_1, &test_intr_handle) < 0) {
+ if (rte_intr_callback_register(test_intr_handle,
+ test_interrupt_callback_1, test_intr_handle) < 0) {
printf("it fails to register test_interrupt_callback_1\n");
goto out;
}
/* check if it will fail to unregister with invalid parameter */
- if (rte_intr_callback_unregister(&test_intr_handle,
+ if (rte_intr_callback_unregister(test_intr_handle,
test_interrupt_callback, (void *)0xff) != 0) {
printf("unexpectedly unregisters successfully with "
"invalid arg\n");
goto out;
}
- if (rte_intr_callback_unregister(&test_intr_handle,
- test_interrupt_callback, &test_intr_handle) <= 0) {
+ if (rte_intr_callback_unregister(test_intr_handle,
+ test_interrupt_callback, test_intr_handle) <= 0) {
printf("it fails to unregister test_interrupt_callback\n");
goto out;
}
- if (rte_intr_callback_unregister(&test_intr_handle,
+ if (rte_intr_callback_unregister(test_intr_handle,
test_interrupt_callback_1, (void *)-1) <= 0) {
printf("it fails to unregister test_interrupt_callback_1 "
"for all\n");
@@ -529,27 +562,27 @@ test_interrupt(void)
printf("Clearing for interrupt tests\n");
/* clear registered callbacks */
test_intr_handle = intr_handles[TEST_INTERRUPT_HANDLE_VALID];
- rte_intr_callback_unregister(&test_intr_handle,
+ rte_intr_callback_unregister(test_intr_handle,
test_interrupt_callback, (void *)-1);
- rte_intr_callback_unregister(&test_intr_handle,
+ rte_intr_callback_unregister(test_intr_handle,
test_interrupt_callback_1, (void *)-1);
test_intr_handle = intr_handles[TEST_INTERRUPT_HANDLE_VALID_UIO];
- rte_intr_callback_unregister(&test_intr_handle,
+ rte_intr_callback_unregister(test_intr_handle,
test_interrupt_callback, (void *)-1);
- rte_intr_callback_unregister(&test_intr_handle,
+ rte_intr_callback_unregister(test_intr_handle,
test_interrupt_callback_1, (void *)-1);
test_intr_handle = intr_handles[TEST_INTERRUPT_HANDLE_VALID_ALARM];
- rte_intr_callback_unregister(&test_intr_handle,
+ rte_intr_callback_unregister(test_intr_handle,
test_interrupt_callback, (void *)-1);
- rte_intr_callback_unregister(&test_intr_handle,
+ rte_intr_callback_unregister(test_intr_handle,
test_interrupt_callback_1, (void *)-1);
test_intr_handle = intr_handles[TEST_INTERRUPT_HANDLE_VALID_DEV_EVENT];
- rte_intr_callback_unregister(&test_intr_handle,
+ rte_intr_callback_unregister(test_intr_handle,
test_interrupt_callback, (void *)-1);
- rte_intr_callback_unregister(&test_intr_handle,
+ rte_intr_callback_unregister(test_intr_handle,
test_interrupt_callback_1, (void *)-1);
rte_delay_ms(2 * TEST_INTERRUPT_CHECK_INTERVAL);
--
2.18.0
* [dpdk-dev] [PATCH v5 4/6] drivers: remove direct access to interrupt handle
2021-10-22 20:49 ` [dpdk-dev] [PATCH v5 0/6] make rte_intr_handle internal Harman Kalra
` (2 preceding siblings ...)
2021-10-22 20:49 ` [dpdk-dev] [PATCH v5 3/6] test/interrupt: apply get set interrupt handle APIs Harman Kalra
@ 2021-10-22 20:49 ` Harman Kalra
2021-10-22 20:49 ` [dpdk-dev] [PATCH v5 5/6] eal/interrupts: make interrupt handle structure opaque Harman Kalra
` (5 subsequent siblings)
9 siblings, 0 replies; 152+ messages in thread
From: Harman Kalra @ 2021-10-22 20:49 UTC (permalink / raw)
To: dev, Nicolas Chautru, Parav Pandit, Xueming Li, Hemant Agrawal,
Sachin Saxena, Rosen Xu, Ferruh Yigit, Anatoly Burakov,
Stephen Hemminger, Long Li, Nithin Dabilpuram, Kiran Kumar K,
Sunil Kumar Kori, Satha Rao, Jerin Jacob, Ankur Dwivedi,
Anoob Joseph, Pavan Nikhilesh, Igor Russkikh, Steven Webster,
Matt Peters, Somalapuram Amaranath, Rasesh Mody, Shahed Shaikh,
Ajit Khaparde, Somnath Kotur, Haiyue Wang, Marcin Wojtas,
Michal Krawczyk, Shai Brandes, Evgeny Schemeilin, Igor Chauskin,
John Daley, Hyong Youb Kim, Gaetan Rivet, Qi Zhang, Xiao Wang,
Ziyang Xuan, Xiaoyun Wang, Guoyang Zhou, Min Hu (Connor),
Yisen Zhuang, Lijun Ou, Beilei Xing, Jingjing Wu, Qiming Yang,
Andrew Boyer, Jakub Grajciar, Matan Azrad, Viacheslav Ovsiienko,
Heinrich Kuhn, Jiawen Wu, Devendra Singh Rawat, Andrew Rybchenko,
Keith Wiles, Maciej Czekaj, Jian Wang, Maxime Coquelin,
Chenbo Xia, Yong Wang, Tianfei zhang, Xiaoyun Li, Guy Kaneti,
Bruce Richardson, Thomas Monjalon
Cc: david.marchand, dmitry.kozliuk, mdr, Harman Kalra
Removing direct access to the interrupt handle structure fields;
the respective get/set APIs are used instead.
All drivers and libraries that currently access the interrupt handle
fields directly are updated accordingly.
Signed-off-by: Harman Kalra <hkalra@marvell.com>
Acked-by: Hyong Youb Kim <hyonkim@cisco.com>
---
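Each driver conversion follows the same basic pattern; a condensed sketch
is below. The sketch_dev wrapper and function names are hypothetical and
only stand in for a driver's private device structure; the interrupt
handle calls are the APIs introduced by this series.

#include <errno.h>
#include <rte_errno.h>
#include <rte_interrupts.h>

/* Hypothetical device wrapper standing in for a driver's private data. */
struct sketch_dev {
	struct rte_intr_handle *intr_handle;
};

static int
sketch_dev_init(struct sketch_dev *dev, int fd)
{
	/* Previously a driver wrote dev->intr_handle.fd / .type directly;
	 * now it allocates an instance and goes through the accessors.
	 */
	dev->intr_handle =
		rte_intr_instance_alloc(RTE_INTR_INSTANCE_F_UNSHARED);
	if (dev->intr_handle == NULL)
		return -ENOMEM;

	if (rte_intr_fd_set(dev->intr_handle, fd) ||
	    rte_intr_type_set(dev->intr_handle, RTE_INTR_HANDLE_VFIO_MSIX)) {
		rte_intr_instance_free(dev->intr_handle);
		dev->intr_handle = NULL;
		return -rte_errno;
	}
	return 0;
}

static void
sketch_dev_fini(struct sketch_dev *dev)
{
	/* Free the instance during driver cleanup. */
	rte_intr_instance_free(dev->intr_handle);
	dev->intr_handle = NULL;
}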
drivers/baseband/acc100/rte_acc100_pmd.c | 18 +--
.../fpga_5gnr_fec/rte_fpga_5gnr_fec.c | 21 ++--
drivers/baseband/fpga_lte_fec/fpga_lte_fec.c | 21 ++--
drivers/bus/auxiliary/auxiliary_common.c | 2 +
drivers/bus/auxiliary/linux/auxiliary.c | 10 ++
drivers/bus/auxiliary/rte_bus_auxiliary.h | 2 +-
drivers/bus/dpaa/dpaa_bus.c | 28 ++++-
drivers/bus/dpaa/rte_dpaa_bus.h | 2 +-
drivers/bus/fslmc/fslmc_bus.c | 16 ++-
drivers/bus/fslmc/fslmc_vfio.c | 32 +++--
drivers/bus/fslmc/portal/dpaa2_hw_dpio.c | 20 ++-
drivers/bus/fslmc/portal/dpaa2_hw_pvt.h | 2 +-
drivers/bus/fslmc/rte_fslmc.h | 2 +-
drivers/bus/ifpga/ifpga_bus.c | 15 ++-
drivers/bus/ifpga/rte_bus_ifpga.h | 2 +-
drivers/bus/pci/bsd/pci.c | 21 ++--
drivers/bus/pci/linux/pci.c | 4 +-
drivers/bus/pci/linux/pci_uio.c | 73 +++++++----
drivers/bus/pci/linux/pci_vfio.c | 108 ++++++++++------
drivers/bus/pci/pci_common.c | 29 ++++-
drivers/bus/pci/pci_common_uio.c | 21 ++--
drivers/bus/pci/rte_bus_pci.h | 4 +-
drivers/bus/vmbus/linux/vmbus_bus.c | 6 +
drivers/bus/vmbus/linux/vmbus_uio.c | 37 ++++--
drivers/bus/vmbus/rte_bus_vmbus.h | 2 +-
drivers/bus/vmbus/vmbus_common_uio.c | 24 ++--
drivers/common/cnxk/roc_cpt.c | 8 +-
drivers/common/cnxk/roc_dev.c | 14 +--
drivers/common/cnxk/roc_irq.c | 108 +++++++++-------
drivers/common/cnxk/roc_nix_inl_dev_irq.c | 8 +-
drivers/common/cnxk/roc_nix_irq.c | 36 +++---
drivers/common/cnxk/roc_npa.c | 2 +-
drivers/common/cnxk/roc_platform.h | 49 ++++++--
drivers/common/cnxk/roc_sso.c | 4 +-
drivers/common/cnxk/roc_tim.c | 4 +-
drivers/common/octeontx2/otx2_dev.c | 14 +--
drivers/common/octeontx2/otx2_irq.c | 117 ++++++++++--------
.../octeontx2/otx2_cryptodev_hw_access.c | 4 +-
drivers/event/octeontx2/otx2_evdev_irq.c | 12 +-
drivers/mempool/octeontx2/otx2_mempool.c | 2 +-
drivers/net/atlantic/atl_ethdev.c | 20 ++-
drivers/net/avp/avp_ethdev.c | 8 +-
drivers/net/axgbe/axgbe_ethdev.c | 12 +-
drivers/net/axgbe/axgbe_mdio.c | 6 +-
drivers/net/bnx2x/bnx2x_ethdev.c | 10 +-
drivers/net/bnxt/bnxt_ethdev.c | 33 +++--
drivers/net/bnxt/bnxt_irq.c | 4 +-
drivers/net/dpaa/dpaa_ethdev.c | 47 ++++---
drivers/net/dpaa2/dpaa2_ethdev.c | 10 +-
drivers/net/e1000/em_ethdev.c | 23 ++--
drivers/net/e1000/igb_ethdev.c | 79 ++++++------
drivers/net/ena/ena_ethdev.c | 35 +++---
drivers/net/enic/enic_main.c | 26 ++--
drivers/net/failsafe/failsafe.c | 23 +++-
drivers/net/failsafe/failsafe_intr.c | 43 ++++---
drivers/net/failsafe/failsafe_ops.c | 19 ++-
drivers/net/failsafe/failsafe_private.h | 2 +-
drivers/net/fm10k/fm10k_ethdev.c | 32 ++---
drivers/net/hinic/hinic_pmd_ethdev.c | 10 +-
drivers/net/hns3/hns3_ethdev.c | 57 ++++-----
drivers/net/hns3/hns3_ethdev_vf.c | 64 +++++-----
drivers/net/hns3/hns3_rxtx.c | 2 +-
drivers/net/i40e/i40e_ethdev.c | 53 ++++----
drivers/net/iavf/iavf_ethdev.c | 42 +++----
drivers/net/iavf/iavf_vchnl.c | 4 +-
drivers/net/ice/ice_dcf.c | 10 +-
drivers/net/ice/ice_dcf_ethdev.c | 21 ++--
drivers/net/ice/ice_ethdev.c | 49 ++++----
drivers/net/igc/igc_ethdev.c | 45 ++++---
drivers/net/ionic/ionic_ethdev.c | 17 +--
drivers/net/ixgbe/ixgbe_ethdev.c | 66 +++++-----
drivers/net/memif/memif_socket.c | 111 ++++++++++++-----
drivers/net/memif/memif_socket.h | 4 +-
drivers/net/memif/rte_eth_memif.c | 61 +++++++--
drivers/net/memif/rte_eth_memif.h | 2 +-
drivers/net/mlx4/mlx4.c | 19 ++-
drivers/net/mlx4/mlx4.h | 2 +-
drivers/net/mlx4/mlx4_intr.c | 47 ++++---
drivers/net/mlx5/linux/mlx5_os.c | 53 +++++---
drivers/net/mlx5/linux/mlx5_socket.c | 25 ++--
drivers/net/mlx5/mlx5.h | 6 +-
drivers/net/mlx5/mlx5_rxq.c | 42 ++++---
drivers/net/mlx5/mlx5_trigger.c | 4 +-
drivers/net/mlx5/mlx5_txpp.c | 26 ++--
drivers/net/netvsc/hn_ethdev.c | 4 +-
drivers/net/nfp/nfp_common.c | 34 ++---
drivers/net/nfp/nfp_ethdev.c | 13 +-
drivers/net/nfp/nfp_ethdev_vf.c | 13 +-
drivers/net/ngbe/ngbe_ethdev.c | 29 ++---
drivers/net/octeontx2/otx2_ethdev_irq.c | 35 +++---
drivers/net/qede/qede_ethdev.c | 16 +--
drivers/net/sfc/sfc_intr.c | 30 ++---
drivers/net/tap/rte_eth_tap.c | 36 ++++--
drivers/net/tap/rte_eth_tap.h | 2 +-
drivers/net/tap/tap_intr.c | 32 +++--
drivers/net/thunderx/nicvf_ethdev.c | 12 ++
drivers/net/thunderx/nicvf_struct.h | 2 +-
drivers/net/txgbe/txgbe_ethdev.c | 38 +++---
drivers/net/txgbe/txgbe_ethdev_vf.c | 33 +++--
drivers/net/vhost/rte_eth_vhost.c | 76 +++++++-----
drivers/net/virtio/virtio_ethdev.c | 21 ++--
.../net/virtio/virtio_user/virtio_user_dev.c | 48 ++++---
drivers/net/vmxnet3/vmxnet3_ethdev.c | 43 ++++---
drivers/raw/ifpga/ifpga_rawdev.c | 62 +++++++---
drivers/raw/ntb/ntb.c | 9 +-
.../regex/octeontx2/otx2_regexdev_hw_access.c | 4 +-
drivers/vdpa/ifc/ifcvf_vdpa.c | 5 +-
drivers/vdpa/mlx5/mlx5_vdpa.c | 10 ++
drivers/vdpa/mlx5/mlx5_vdpa.h | 4 +-
drivers/vdpa/mlx5/mlx5_vdpa_event.c | 22 ++--
drivers/vdpa/mlx5/mlx5_vdpa_virtq.c | 45 ++++---
lib/bbdev/rte_bbdev.c | 4 +-
lib/eal/freebsd/eal_alarm.c | 46 ++++++-
lib/eal/include/rte_eal_trace.h | 24 +---
lib/eal/linux/eal_alarm.c | 30 +++--
lib/eal/linux/eal_dev.c | 63 ++++++----
lib/ethdev/ethdev_pci.h | 2 +-
lib/ethdev/rte_ethdev.c | 14 +--
118 files changed, 1818 insertions(+), 1221 deletions(-)
diff --git a/drivers/baseband/acc100/rte_acc100_pmd.c b/drivers/baseband/acc100/rte_acc100_pmd.c
index 05fe6f8b6f..bfeb055925 100644
--- a/drivers/baseband/acc100/rte_acc100_pmd.c
+++ b/drivers/baseband/acc100/rte_acc100_pmd.c
@@ -720,8 +720,10 @@ acc100_intr_enable(struct rte_bbdev *dev)
struct acc100_device *d = dev->data->dev_private;
/* Only MSI are currently supported */
- if (dev->intr_handle->type == RTE_INTR_HANDLE_VFIO_MSI ||
- dev->intr_handle->type == RTE_INTR_HANDLE_UIO) {
+ if (rte_intr_type_get(dev->intr_handle) ==
+ RTE_INTR_HANDLE_VFIO_MSI ||
+ rte_intr_type_get(dev->intr_handle) ==
+ RTE_INTR_HANDLE_UIO) {
ret = allocate_info_ring(dev);
if (ret < 0) {
@@ -1098,8 +1100,9 @@ acc100_queue_intr_enable(struct rte_bbdev *dev, uint16_t queue_id)
{
struct acc100_queue *q = dev->data->queues[queue_id].queue_private;
- if (dev->intr_handle->type != RTE_INTR_HANDLE_VFIO_MSI &&
- dev->intr_handle->type != RTE_INTR_HANDLE_UIO)
+ if (rte_intr_type_get(dev->intr_handle) !=
+ RTE_INTR_HANDLE_VFIO_MSI &&
+ rte_intr_type_get(dev->intr_handle) != RTE_INTR_HANDLE_UIO)
return -ENOTSUP;
q->irq_enable = 1;
@@ -1111,8 +1114,9 @@ acc100_queue_intr_disable(struct rte_bbdev *dev, uint16_t queue_id)
{
struct acc100_queue *q = dev->data->queues[queue_id].queue_private;
- if (dev->intr_handle->type != RTE_INTR_HANDLE_VFIO_MSI &&
- dev->intr_handle->type != RTE_INTR_HANDLE_UIO)
+ if (rte_intr_type_get(dev->intr_handle) !=
+ RTE_INTR_HANDLE_VFIO_MSI &&
+ rte_intr_type_get(dev->intr_handle) != RTE_INTR_HANDLE_UIO)
return -ENOTSUP;
q->irq_enable = 0;
@@ -4185,7 +4189,7 @@ static int acc100_pci_probe(struct rte_pci_driver *pci_drv,
/* Fill HW specific part of device structure */
bbdev->device = &pci_dev->device;
- bbdev->intr_handle = &pci_dev->intr_handle;
+ bbdev->intr_handle = pci_dev->intr_handle;
bbdev->data->socket_id = pci_dev->device.numa_node;
/* Invoke ACC100 device initialization function */
diff --git a/drivers/baseband/fpga_5gnr_fec/rte_fpga_5gnr_fec.c b/drivers/baseband/fpga_5gnr_fec/rte_fpga_5gnr_fec.c
index ee457f3071..65f2f80e8f 100644
--- a/drivers/baseband/fpga_5gnr_fec/rte_fpga_5gnr_fec.c
+++ b/drivers/baseband/fpga_5gnr_fec/rte_fpga_5gnr_fec.c
@@ -744,16 +744,15 @@ fpga_intr_enable(struct rte_bbdev *dev)
* invoked when any FPGA queue issues interrupt.
*/
for (i = 0; i < FPGA_NUM_INTR_VEC; ++i)
- dev->intr_handle->efds[i] = dev->intr_handle->fd;
-
- if (!dev->intr_handle->intr_vec) {
- dev->intr_handle->intr_vec = rte_zmalloc("intr_vec",
- dev->data->num_queues * sizeof(int), 0);
- if (!dev->intr_handle->intr_vec) {
- rte_bbdev_log(ERR, "Failed to allocate %u vectors",
- dev->data->num_queues);
- return -ENOMEM;
- }
+ if (rte_intr_efds_index_set(dev->intr_handle, i,
+ rte_intr_fd_get(dev->intr_handle)))
+ return -rte_errno;
+
+ if (rte_intr_vec_list_alloc(dev->intr_handle, "intr_vec",
+ dev->data->num_queues)) {
+ rte_bbdev_log(ERR, "Failed to allocate %u vectors",
+ dev->data->num_queues);
+ return -ENOMEM;
}
ret = rte_intr_enable(dev->intr_handle);
@@ -1880,7 +1879,7 @@ fpga_5gnr_fec_probe(struct rte_pci_driver *pci_drv,
/* Fill HW specific part of device structure */
bbdev->device = &pci_dev->device;
- bbdev->intr_handle = &pci_dev->intr_handle;
+ bbdev->intr_handle = pci_dev->intr_handle;
bbdev->data->socket_id = pci_dev->device.numa_node;
/* Invoke FEC FPGA device initialization function */
diff --git a/drivers/baseband/fpga_lte_fec/fpga_lte_fec.c b/drivers/baseband/fpga_lte_fec/fpga_lte_fec.c
index 703bb611a0..a3fa38b58a 100644
--- a/drivers/baseband/fpga_lte_fec/fpga_lte_fec.c
+++ b/drivers/baseband/fpga_lte_fec/fpga_lte_fec.c
@@ -1015,16 +1015,15 @@ fpga_intr_enable(struct rte_bbdev *dev)
* invoked when any FPGA queue issues interrupt.
*/
for (i = 0; i < FPGA_NUM_INTR_VEC; ++i)
- dev->intr_handle->efds[i] = dev->intr_handle->fd;
-
- if (!dev->intr_handle->intr_vec) {
- dev->intr_handle->intr_vec = rte_zmalloc("intr_vec",
- dev->data->num_queues * sizeof(int), 0);
- if (!dev->intr_handle->intr_vec) {
- rte_bbdev_log(ERR, "Failed to allocate %u vectors",
- dev->data->num_queues);
- return -ENOMEM;
- }
+ if (rte_intr_efds_index_set(dev->intr_handle, i,
+ rte_intr_fd_get(dev->intr_handle)))
+ return -rte_errno;
+
+ if (rte_intr_vec_list_alloc(dev->intr_handle, "intr_vec",
+ dev->data->num_queues)) {
+ rte_bbdev_log(ERR, "Failed to allocate %u vectors",
+ dev->data->num_queues);
+ return -ENOMEM;
}
ret = rte_intr_enable(dev->intr_handle);
@@ -2370,7 +2369,7 @@ fpga_lte_fec_probe(struct rte_pci_driver *pci_drv,
/* Fill HW specific part of device structure */
bbdev->device = &pci_dev->device;
- bbdev->intr_handle = &pci_dev->intr_handle;
+ bbdev->intr_handle = pci_dev->intr_handle;
bbdev->data->socket_id = pci_dev->device.numa_node;
/* Invoke FEC FPGA device initialization function */
diff --git a/drivers/bus/auxiliary/auxiliary_common.c b/drivers/bus/auxiliary/auxiliary_common.c
index 603b6fdc02..6d44c433b6 100644
--- a/drivers/bus/auxiliary/auxiliary_common.c
+++ b/drivers/bus/auxiliary/auxiliary_common.c
@@ -320,6 +320,8 @@ auxiliary_unplug(struct rte_device *dev)
if (ret == 0) {
rte_auxiliary_remove_device(adev);
rte_devargs_remove(dev->devargs);
+ if (adev->intr_handle)
+ rte_intr_instance_free(adev->intr_handle);
free(adev);
}
return ret;
diff --git a/drivers/bus/auxiliary/linux/auxiliary.c b/drivers/bus/auxiliary/linux/auxiliary.c
index 9bd4ee3295..bcf4c4d20e 100644
--- a/drivers/bus/auxiliary/linux/auxiliary.c
+++ b/drivers/bus/auxiliary/linux/auxiliary.c
@@ -39,6 +39,14 @@ auxiliary_scan_one(const char *dirname, const char *name)
dev->device.name = dev->name;
dev->device.bus = &auxiliary_bus.bus;
+ /* Allocate interrupt instance */
+ dev->intr_handle =
+ rte_intr_instance_alloc(RTE_INTR_INSTANCE_F_UNSHARED);
+ if (!dev->intr_handle) {
+ free(dev);
+ return -1;
+ }
+
/* Get NUMA node, default to 0 if not present */
snprintf(filename, sizeof(filename), "%s/%s/numa_node",
dirname, name);
@@ -67,6 +75,8 @@ auxiliary_scan_one(const char *dirname, const char *name)
rte_devargs_remove(dev2->device.devargs);
auxiliary_on_scan(dev2);
}
+ if (dev->intr_handle)
+ rte_intr_instance_free(dev->intr_handle);
free(dev);
}
return 0;
diff --git a/drivers/bus/auxiliary/rte_bus_auxiliary.h b/drivers/bus/auxiliary/rte_bus_auxiliary.h
index b1f5610404..93b266daf7 100644
--- a/drivers/bus/auxiliary/rte_bus_auxiliary.h
+++ b/drivers/bus/auxiliary/rte_bus_auxiliary.h
@@ -115,7 +115,7 @@ struct rte_auxiliary_device {
RTE_TAILQ_ENTRY(rte_auxiliary_device) next; /**< Next probed device. */
struct rte_device device; /**< Inherit core device */
char name[RTE_DEV_NAME_MAX_LEN + 1]; /**< ASCII device name */
- struct rte_intr_handle intr_handle; /**< Interrupt handle */
+ struct rte_intr_handle *intr_handle; /**< Interrupt handle */
struct rte_auxiliary_driver *driver; /**< Device driver */
};
diff --git a/drivers/bus/dpaa/dpaa_bus.c b/drivers/bus/dpaa/dpaa_bus.c
index 6cab2ae760..a3ce0ade6e 100644
--- a/drivers/bus/dpaa/dpaa_bus.c
+++ b/drivers/bus/dpaa/dpaa_bus.c
@@ -172,6 +172,15 @@ dpaa_create_device_list(void)
dev->device.bus = &rte_dpaa_bus.bus;
+ /* Allocate interrupt handle instance */
+ dev->intr_handle =
+ rte_intr_instance_alloc(RTE_INTR_INSTANCE_F_UNSHARED);
+ if (!dev->intr_handle) {
+ DPAA_BUS_LOG(ERR, "Failed to allocate intr handle");
+ ret = -ENOMEM;
+ goto cleanup;
+ }
+
cfg = &dpaa_netcfg->port_cfg[i];
fman_intf = cfg->fman_if;
@@ -214,6 +223,15 @@ dpaa_create_device_list(void)
goto cleanup;
}
+ /* Allocate interrupt handle instance */
+ dev->intr_handle =
+ rte_intr_instance_alloc(RTE_INTR_INSTANCE_F_UNSHARED);
+ if (!dev->intr_handle) {
+ DPAA_BUS_LOG(ERR, "Failed to allocate intr handle");
+ ret = -ENOMEM;
+ goto cleanup;
+ }
+
dev->device_type = FSL_DPAA_CRYPTO;
dev->id.dev_id = rte_dpaa_bus.device_count + i;
@@ -247,6 +265,7 @@ dpaa_clean_device_list(void)
RTE_TAILQ_FOREACH_SAFE(dev, &rte_dpaa_bus.device_list, next, tdev) {
TAILQ_REMOVE(&rte_dpaa_bus.device_list, dev, next);
+ rte_intr_instance_free(dev->intr_handle);
free(dev);
dev = NULL;
}
@@ -559,8 +578,11 @@ static int rte_dpaa_setup_intr(struct rte_intr_handle *intr_handle)
return errno;
}
- intr_handle->fd = fd;
- intr_handle->type = RTE_INTR_HANDLE_EXT;
+ if (rte_intr_fd_set(intr_handle, fd))
+ return -rte_errno;
+
+ if (rte_intr_type_set(intr_handle, RTE_INTR_HANDLE_EXT))
+ return -rte_errno;
return 0;
}
@@ -612,7 +634,7 @@ rte_dpaa_bus_probe(void)
TAILQ_FOREACH(dev, &rte_dpaa_bus.device_list, next) {
if (dev->device_type == FSL_DPAA_ETH) {
- ret = rte_dpaa_setup_intr(&dev->intr_handle);
+ ret = rte_dpaa_setup_intr(dev->intr_handle);
if (ret)
DPAA_BUS_ERR("Error setting up interrupt.\n");
}
diff --git a/drivers/bus/dpaa/rte_dpaa_bus.h b/drivers/bus/dpaa/rte_dpaa_bus.h
index ecc66387f6..97d189f9b0 100644
--- a/drivers/bus/dpaa/rte_dpaa_bus.h
+++ b/drivers/bus/dpaa/rte_dpaa_bus.h
@@ -98,7 +98,7 @@ struct rte_dpaa_device {
};
struct rte_dpaa_driver *driver;
struct dpaa_device_id id;
- struct rte_intr_handle intr_handle;
+ struct rte_intr_handle *intr_handle;
enum rte_dpaa_type device_type; /**< Ethernet or crypto type device */
char name[RTE_ETH_NAME_MAX_LEN];
};
diff --git a/drivers/bus/fslmc/fslmc_bus.c b/drivers/bus/fslmc/fslmc_bus.c
index 8c8f8a298d..ba2d38e782 100644
--- a/drivers/bus/fslmc/fslmc_bus.c
+++ b/drivers/bus/fslmc/fslmc_bus.c
@@ -47,6 +47,8 @@ cleanup_fslmc_device_list(void)
RTE_TAILQ_FOREACH_SAFE(dev, &rte_fslmc_bus.device_list, next, t_dev) {
TAILQ_REMOVE(&rte_fslmc_bus.device_list, dev, next);
+ if (dev->intr_handle)
+ rte_intr_instance_free(dev->intr_handle);
free(dev);
dev = NULL;
}
@@ -160,6 +162,15 @@ scan_one_fslmc_device(char *dev_name)
dev->device.bus = &rte_fslmc_bus.bus;
+ /* Allocate interrupt instance */
+ dev->intr_handle =
+ rte_intr_instance_alloc(RTE_INTR_INSTANCE_F_UNSHARED);
+ if (!dev->intr_handle) {
+ DPAA2_BUS_ERR("Failed to allocate intr handle");
+ ret = -ENOMEM;
+ goto cleanup;
+ }
+
/* Parse the device name and ID */
t_ptr = strtok(dup_dev_name, ".");
if (!t_ptr) {
@@ -220,8 +231,11 @@ scan_one_fslmc_device(char *dev_name)
cleanup:
if (dup_dev_name)
free(dup_dev_name);
- if (dev)
+ if (dev) {
+ if (dev->intr_handle)
+ rte_intr_instance_free(dev->intr_handle);
free(dev);
+ }
return ret;
}
diff --git a/drivers/bus/fslmc/fslmc_vfio.c b/drivers/bus/fslmc/fslmc_vfio.c
index 852fcfc4dd..c2b469a94b 100644
--- a/drivers/bus/fslmc/fslmc_vfio.c
+++ b/drivers/bus/fslmc/fslmc_vfio.c
@@ -599,7 +599,7 @@ int rte_dpaa2_intr_enable(struct rte_intr_handle *intr_handle, int index)
int len, ret;
char irq_set_buf[IRQ_SET_BUF_LEN];
struct vfio_irq_set *irq_set;
- int *fd_ptr;
+ int *fd_ptr, vfio_dev_fd;
len = sizeof(irq_set_buf);
@@ -611,12 +611,14 @@ int rte_dpaa2_intr_enable(struct rte_intr_handle *intr_handle, int index)
irq_set->index = index;
irq_set->start = 0;
fd_ptr = (int *)&irq_set->data;
- *fd_ptr = intr_handle->fd;
+ *fd_ptr = rte_intr_fd_get(intr_handle);
- ret = ioctl(intr_handle->vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
+ vfio_dev_fd = rte_intr_dev_fd_get(intr_handle);
+ ret = ioctl(vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
if (ret) {
DPAA2_BUS_ERR("Error:dpaa2 SET IRQs fd=%d, err = %d(%s)",
- intr_handle->fd, errno, strerror(errno));
+ rte_intr_fd_get(intr_handle), errno,
+ strerror(errno));
return ret;
}
@@ -627,7 +629,7 @@ int rte_dpaa2_intr_disable(struct rte_intr_handle *intr_handle, int index)
{
struct vfio_irq_set *irq_set;
char irq_set_buf[IRQ_SET_BUF_LEN];
- int len, ret;
+ int len, ret, vfio_dev_fd;
len = sizeof(struct vfio_irq_set);
@@ -638,11 +640,12 @@ int rte_dpaa2_intr_disable(struct rte_intr_handle *intr_handle, int index)
irq_set->start = 0;
irq_set->count = 0;
- ret = ioctl(intr_handle->vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
+ vfio_dev_fd = rte_intr_dev_fd_get(intr_handle);
+ ret = ioctl(vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
if (ret)
DPAA2_BUS_ERR(
"Error disabling dpaa2 interrupts for fd %d",
- intr_handle->fd);
+ rte_intr_fd_get(intr_handle));
return ret;
}
@@ -684,9 +687,16 @@ rte_dpaa2_vfio_setup_intr(struct rte_intr_handle *intr_handle,
return -1;
}
- intr_handle->fd = fd;
- intr_handle->type = RTE_INTR_HANDLE_VFIO_MSI;
- intr_handle->vfio_dev_fd = vfio_dev_fd;
+ if (rte_intr_fd_set(intr_handle, fd))
+ return -rte_errno;
+
+ if (rte_intr_type_set(intr_handle,
+ RTE_INTR_HANDLE_VFIO_MSI))
+ return -rte_errno;
+
+ if (rte_intr_dev_fd_set(intr_handle, vfio_dev_fd))
+ return -rte_errno;
+
return 0;
}
@@ -711,7 +721,7 @@ fslmc_process_iodevices(struct rte_dpaa2_device *dev)
switch (dev->dev_type) {
case DPAA2_ETH:
- rte_dpaa2_vfio_setup_intr(&dev->intr_handle, dev_fd,
+ rte_dpaa2_vfio_setup_intr(dev->intr_handle, dev_fd,
device_info.num_irqs);
break;
case DPAA2_CON:
diff --git a/drivers/bus/fslmc/portal/dpaa2_hw_dpio.c b/drivers/bus/fslmc/portal/dpaa2_hw_dpio.c
index 1a1e437ed1..4606187d2b 100644
--- a/drivers/bus/fslmc/portal/dpaa2_hw_dpio.c
+++ b/drivers/bus/fslmc/portal/dpaa2_hw_dpio.c
@@ -176,7 +176,7 @@ static int dpaa2_dpio_intr_init(struct dpaa2_dpio_dev *dpio_dev)
int threshold = 0x3, timeout = 0xFF;
dpio_epoll_fd = epoll_create(1);
- ret = rte_dpaa2_intr_enable(&dpio_dev->intr_handle, 0);
+ ret = rte_dpaa2_intr_enable(dpio_dev->intr_handle, 0);
if (ret) {
DPAA2_BUS_ERR("Interrupt registeration failed");
return -1;
@@ -195,7 +195,7 @@ static int dpaa2_dpio_intr_init(struct dpaa2_dpio_dev *dpio_dev)
qbman_swp_dqrr_thrshld_write(dpio_dev->sw_portal, threshold);
qbman_swp_intr_timeout_write(dpio_dev->sw_portal, timeout);
- eventfd = dpio_dev->intr_handle.fd;
+ eventfd = rte_intr_fd_get(dpio_dev->intr_handle);
epoll_ev.events = EPOLLIN | EPOLLPRI | EPOLLET;
epoll_ev.data.fd = eventfd;
@@ -213,7 +213,7 @@ static void dpaa2_dpio_intr_deinit(struct dpaa2_dpio_dev *dpio_dev)
{
int ret;
- ret = rte_dpaa2_intr_disable(&dpio_dev->intr_handle, 0);
+ ret = rte_dpaa2_intr_disable(dpio_dev->intr_handle, 0);
if (ret)
DPAA2_BUS_ERR("DPIO interrupt disable failed");
@@ -388,6 +388,14 @@ dpaa2_create_dpio_device(int vdev_fd,
/* Using single portal for all devices */
dpio_dev->mc_portal = dpaa2_get_mcp_ptr(MC_PORTAL_INDEX);
+ /* Allocate interrupt instance */
+ dpio_dev->intr_handle =
+ rte_intr_instance_alloc(RTE_INTR_INSTANCE_F_SHARED);
+ if (!dpio_dev->intr_handle) {
+ DPAA2_BUS_ERR("Failed to allocate intr handle");
+ goto err;
+ }
+
dpio_dev->dpio = rte_zmalloc(NULL, sizeof(struct fsl_mc_io),
RTE_CACHE_LINE_SIZE);
if (!dpio_dev->dpio) {
@@ -490,7 +498,7 @@ dpaa2_create_dpio_device(int vdev_fd,
io_space_count++;
dpio_dev->index = io_space_count;
- if (rte_dpaa2_vfio_setup_intr(&dpio_dev->intr_handle, vdev_fd, 1)) {
+ if (rte_dpaa2_vfio_setup_intr(dpio_dev->intr_handle, vdev_fd, 1)) {
DPAA2_BUS_ERR("Fail to setup interrupt for %d",
dpio_dev->hw_id);
goto err;
@@ -538,6 +546,8 @@ dpaa2_create_dpio_device(int vdev_fd,
rte_free(dpio_dev->dpio);
}
+ if (dpio_dev->intr_handle)
+ rte_intr_instance_free(dpio_dev->intr_handle);
rte_free(dpio_dev);
/* For each element in the list, cleanup */
@@ -549,6 +559,8 @@ dpaa2_create_dpio_device(int vdev_fd,
dpio_dev->token);
rte_free(dpio_dev->dpio);
}
+ if (dpio_dev->intr_handle)
+ rte_intr_instance_free(dpio_dev->intr_handle);
rte_free(dpio_dev);
}
diff --git a/drivers/bus/fslmc/portal/dpaa2_hw_pvt.h b/drivers/bus/fslmc/portal/dpaa2_hw_pvt.h
index 037c841ef5..b1bba1ac36 100644
--- a/drivers/bus/fslmc/portal/dpaa2_hw_pvt.h
+++ b/drivers/bus/fslmc/portal/dpaa2_hw_pvt.h
@@ -116,7 +116,7 @@ struct dpaa2_dpio_dev {
uintptr_t qbman_portal_ci_paddr;
/**< Physical address of Cache Inhibit Area */
uintptr_t ci_size; /**< Size of the CI region */
- struct rte_intr_handle intr_handle; /* Interrupt related info */
+ struct rte_intr_handle *intr_handle; /* Interrupt related info */
int32_t epoll_fd; /**< File descriptor created for interrupt polling */
int32_t hw_id; /**< An unique ID of this DPIO device instance */
struct dpaa2_portal_dqrr dpaa2_held_bufs;
diff --git a/drivers/bus/fslmc/rte_fslmc.h b/drivers/bus/fslmc/rte_fslmc.h
index a71cac7a9f..729f360646 100644
--- a/drivers/bus/fslmc/rte_fslmc.h
+++ b/drivers/bus/fslmc/rte_fslmc.h
@@ -122,7 +122,7 @@ struct rte_dpaa2_device {
};
enum rte_dpaa2_dev_type dev_type; /**< Device Type */
uint16_t object_id; /**< DPAA2 Object ID */
- struct rte_intr_handle intr_handle; /**< Interrupt handle */
+ struct rte_intr_handle *intr_handle; /**< Interrupt handle */
struct rte_dpaa2_driver *driver; /**< Associated driver */
char name[FSLMC_OBJECT_MAX_LEN]; /**< DPAA2 Object name*/
};
diff --git a/drivers/bus/ifpga/ifpga_bus.c b/drivers/bus/ifpga/ifpga_bus.c
index 62887da2d8..f2dceaf023 100644
--- a/drivers/bus/ifpga/ifpga_bus.c
+++ b/drivers/bus/ifpga/ifpga_bus.c
@@ -161,6 +161,14 @@ ifpga_scan_one(struct rte_rawdev *rawdev,
afu_dev->id.uuid.uuid_high = 0;
afu_dev->id.port = afu_pr_conf.afu_id.port;
+ /* Allocate interrupt instance */
+ afu_dev->intr_handle =
+ rte_intr_instance_alloc(RTE_INTR_INSTANCE_F_UNSHARED);
+ if (!afu_dev->intr_handle) {
+ IFPGA_BUS_ERR("Failed to allocate intr handle");
+ goto end;
+ }
+
if (rawdev->dev_ops && rawdev->dev_ops->dev_info_get)
rawdev->dev_ops->dev_info_get(rawdev, afu_dev, sizeof(*afu_dev));
@@ -189,8 +197,11 @@ ifpga_scan_one(struct rte_rawdev *rawdev,
rte_kvargs_free(kvlist);
if (path)
free(path);
- if (afu_dev)
+ if (afu_dev) {
+ if (afu_dev->intr_handle)
+ rte_intr_instance_free(afu_dev->intr_handle);
free(afu_dev);
+ }
return NULL;
}
@@ -396,6 +407,8 @@ ifpga_unplug(struct rte_device *dev)
TAILQ_REMOVE(&ifpga_afu_dev_list, afu_dev, next);
rte_devargs_remove(dev->devargs);
+ if (afu_dev->intr_handle)
+ rte_intr_instance_free(afu_dev->intr_handle);
free(afu_dev);
return 0;
diff --git a/drivers/bus/ifpga/rte_bus_ifpga.h b/drivers/bus/ifpga/rte_bus_ifpga.h
index a85e90d384..007ad19875 100644
--- a/drivers/bus/ifpga/rte_bus_ifpga.h
+++ b/drivers/bus/ifpga/rte_bus_ifpga.h
@@ -79,7 +79,7 @@ struct rte_afu_device {
struct rte_mem_resource mem_resource[PCI_MAX_RESOURCE];
/**< AFU Memory Resource */
struct rte_afu_shared shared;
- struct rte_intr_handle intr_handle; /**< Interrupt handle */
+ struct rte_intr_handle *intr_handle; /**< Interrupt handle */
struct rte_afu_driver *driver; /**< Associated driver */
char path[IFPGA_BUS_BITSTREAM_PATH_MAX_LEN];
} __rte_packed;
diff --git a/drivers/bus/pci/bsd/pci.c b/drivers/bus/pci/bsd/pci.c
index d189bff311..1a46553be0 100644
--- a/drivers/bus/pci/bsd/pci.c
+++ b/drivers/bus/pci/bsd/pci.c
@@ -95,10 +95,11 @@ pci_uio_free_resource(struct rte_pci_device *dev,
{
rte_free(uio_res);
- if (dev->intr_handle.fd) {
- close(dev->intr_handle.fd);
- dev->intr_handle.fd = -1;
- dev->intr_handle.type = RTE_INTR_HANDLE_UNKNOWN;
+ if (rte_intr_fd_get(dev->intr_handle)) {
+ close(rte_intr_fd_get(dev->intr_handle));
+ rte_intr_fd_set(dev->intr_handle, -1);
+ rte_intr_type_set(dev->intr_handle,
+ RTE_INTR_HANDLE_UNKNOWN);
}
}
@@ -121,13 +122,19 @@ pci_uio_alloc_resource(struct rte_pci_device *dev,
}
/* save fd if in primary process */
- dev->intr_handle.fd = open(devname, O_RDWR);
- if (dev->intr_handle.fd < 0) {
+ if (rte_intr_fd_set(dev->intr_handle, open(devname, O_RDWR))) {
+ RTE_LOG(WARNING, EAL, "Failed to save fd");
+ goto error;
+ }
+
+ if (rte_intr_fd_get(dev->intr_handle) < 0) {
RTE_LOG(ERR, EAL, "Cannot open %s: %s\n",
devname, strerror(errno));
goto error;
}
- dev->intr_handle.type = RTE_INTR_HANDLE_UIO;
+
+ if (rte_intr_type_set(dev->intr_handle, RTE_INTR_HANDLE_UIO))
+ goto error;
/* allocate the mapping details for secondary processes*/
*uio_res = rte_zmalloc("UIO_RES", sizeof(**uio_res), 0);
diff --git a/drivers/bus/pci/linux/pci.c b/drivers/bus/pci/linux/pci.c
index 4d261b55ee..e521459870 100644
--- a/drivers/bus/pci/linux/pci.c
+++ b/drivers/bus/pci/linux/pci.c
@@ -645,7 +645,7 @@ int rte_pci_read_config(const struct rte_pci_device *device,
void *buf, size_t len, off_t offset)
{
char devname[RTE_DEV_NAME_MAX_LEN] = "";
- const struct rte_intr_handle *intr_handle = &device->intr_handle;
+ const struct rte_intr_handle *intr_handle = device->intr_handle;
switch (device->kdrv) {
case RTE_PCI_KDRV_IGB_UIO:
@@ -669,7 +669,7 @@ int rte_pci_write_config(const struct rte_pci_device *device,
const void *buf, size_t len, off_t offset)
{
char devname[RTE_DEV_NAME_MAX_LEN] = "";
- const struct rte_intr_handle *intr_handle = &device->intr_handle;
+ const struct rte_intr_handle *intr_handle = device->intr_handle;
switch (device->kdrv) {
case RTE_PCI_KDRV_IGB_UIO:
diff --git a/drivers/bus/pci/linux/pci_uio.c b/drivers/bus/pci/linux/pci_uio.c
index 39ebeac2a0..5aaf604aa4 100644
--- a/drivers/bus/pci/linux/pci_uio.c
+++ b/drivers/bus/pci/linux/pci_uio.c
@@ -35,14 +35,18 @@ int
pci_uio_read_config(const struct rte_intr_handle *intr_handle,
void *buf, size_t len, off_t offset)
{
- return pread(intr_handle->uio_cfg_fd, buf, len, offset);
+ int uio_cfg_fd = rte_intr_dev_fd_get(intr_handle);
+
+ return pread(uio_cfg_fd, buf, len, offset);
}
int
pci_uio_write_config(const struct rte_intr_handle *intr_handle,
const void *buf, size_t len, off_t offset)
{
- return pwrite(intr_handle->uio_cfg_fd, buf, len, offset);
+ int uio_cfg_fd = rte_intr_dev_fd_get(intr_handle);
+
+ return pwrite(uio_cfg_fd, buf, len, offset);
}
static int
@@ -198,16 +202,20 @@ void
pci_uio_free_resource(struct rte_pci_device *dev,
struct mapped_pci_resource *uio_res)
{
+ int uio_cfg_fd = rte_intr_dev_fd_get(dev->intr_handle);
+
rte_free(uio_res);
- if (dev->intr_handle.uio_cfg_fd >= 0) {
- close(dev->intr_handle.uio_cfg_fd);
- dev->intr_handle.uio_cfg_fd = -1;
+ if (uio_cfg_fd >= 0) {
+ close(uio_cfg_fd);
+ rte_intr_dev_fd_set(dev->intr_handle, -1);
}
- if (dev->intr_handle.fd >= 0) {
- close(dev->intr_handle.fd);
- dev->intr_handle.fd = -1;
- dev->intr_handle.type = RTE_INTR_HANDLE_UNKNOWN;
+
+ if (rte_intr_fd_get(dev->intr_handle) >= 0) {
+ close(rte_intr_fd_get(dev->intr_handle));
+ rte_intr_fd_set(dev->intr_handle, -1);
+ rte_intr_type_set(dev->intr_handle,
+ RTE_INTR_HANDLE_UNKNOWN);
}
}
@@ -218,7 +226,7 @@ pci_uio_alloc_resource(struct rte_pci_device *dev,
char dirname[PATH_MAX];
char cfgname[PATH_MAX];
char devname[PATH_MAX]; /* contains the /dev/uioX */
- int uio_num;
+ int uio_num, fd, uio_cfg_fd;
struct rte_pci_addr *loc;
loc = &dev->addr;
@@ -233,29 +241,40 @@ pci_uio_alloc_resource(struct rte_pci_device *dev,
snprintf(devname, sizeof(devname), "/dev/uio%u", uio_num);
/* save fd if in primary process */
- dev->intr_handle.fd = open(devname, O_RDWR);
- if (dev->intr_handle.fd < 0) {
+ fd = open(devname, O_RDWR);
+ if (fd < 0) {
RTE_LOG(ERR, EAL, "Cannot open %s: %s\n",
devname, strerror(errno));
goto error;
}
+ if (rte_intr_fd_set(dev->intr_handle, fd))
+ goto error;
+
snprintf(cfgname, sizeof(cfgname),
"/sys/class/uio/uio%u/device/config", uio_num);
- dev->intr_handle.uio_cfg_fd = open(cfgname, O_RDWR);
- if (dev->intr_handle.uio_cfg_fd < 0) {
+
+ uio_cfg_fd = open(cfgname, O_RDWR);
+ if (uio_cfg_fd < 0) {
RTE_LOG(ERR, EAL, "Cannot open %s: %s\n",
cfgname, strerror(errno));
goto error;
}
- if (dev->kdrv == RTE_PCI_KDRV_IGB_UIO)
- dev->intr_handle.type = RTE_INTR_HANDLE_UIO;
- else {
- dev->intr_handle.type = RTE_INTR_HANDLE_UIO_INTX;
+ if (rte_intr_dev_fd_set(dev->intr_handle, uio_cfg_fd))
+ goto error;
+
+ if (dev->kdrv == RTE_PCI_KDRV_IGB_UIO) {
+ if (rte_intr_type_set(dev->intr_handle,
+ RTE_INTR_HANDLE_UIO))
+ goto error;
+ } else {
+ if (rte_intr_type_set(dev->intr_handle,
+ RTE_INTR_HANDLE_UIO_INTX))
+ goto error;
/* set bus master that is not done by uio_pci_generic */
- if (pci_uio_set_bus_master(dev->intr_handle.uio_cfg_fd)) {
+ if (pci_uio_set_bus_master(uio_cfg_fd)) {
RTE_LOG(ERR, EAL, "Cannot set up bus mastering!\n");
goto error;
}
@@ -381,7 +400,7 @@ pci_uio_ioport_map(struct rte_pci_device *dev, int bar,
char buf[BUFSIZ];
uint64_t phys_addr, end_addr, flags;
unsigned long base;
- int i;
+ int i, fd;
/* open and read addresses of the corresponding resource in sysfs */
snprintf(filename, sizeof(filename), "%s/" PCI_PRI_FMT "/resource",
@@ -427,7 +446,8 @@ pci_uio_ioport_map(struct rte_pci_device *dev, int bar,
}
/* FIXME only for primary process ? */
- if (dev->intr_handle.type == RTE_INTR_HANDLE_UNKNOWN) {
+ if (rte_intr_type_get(dev->intr_handle) ==
+ RTE_INTR_HANDLE_UNKNOWN) {
int uio_num = pci_get_uio_dev(dev, dirname, sizeof(dirname), 0);
if (uio_num < 0) {
RTE_LOG(ERR, EAL, "cannot open %s: %s\n",
@@ -436,13 +456,18 @@ pci_uio_ioport_map(struct rte_pci_device *dev, int bar,
}
snprintf(filename, sizeof(filename), "/dev/uio%u", uio_num);
- dev->intr_handle.fd = open(filename, O_RDWR);
- if (dev->intr_handle.fd < 0) {
+ fd = open(filename, O_RDWR);
+ if (fd < 0) {
RTE_LOG(ERR, EAL, "Cannot open %s: %s\n",
filename, strerror(errno));
goto error;
}
- dev->intr_handle.type = RTE_INTR_HANDLE_UIO;
+ if (rte_intr_fd_set(dev->intr_handle, fd))
+ goto error;
+
+ if (rte_intr_type_set(dev->intr_handle,
+ RTE_INTR_HANDLE_UIO))
+ goto error;
}
RTE_LOG(DEBUG, EAL, "PCI Port IO found start=0x%lx\n", base);
diff --git a/drivers/bus/pci/linux/pci_vfio.c b/drivers/bus/pci/linux/pci_vfio.c
index a024269140..c8da3e2fe8 100644
--- a/drivers/bus/pci/linux/pci_vfio.c
+++ b/drivers/bus/pci/linux/pci_vfio.c
@@ -47,7 +47,9 @@ int
pci_vfio_read_config(const struct rte_intr_handle *intr_handle,
void *buf, size_t len, off_t offs)
{
- return pread64(intr_handle->vfio_dev_fd, buf, len,
+ int vfio_dev_fd = rte_intr_dev_fd_get(intr_handle);
+
+ return pread64(vfio_dev_fd, buf, len,
VFIO_GET_REGION_ADDR(VFIO_PCI_CONFIG_REGION_INDEX) + offs);
}
@@ -55,7 +57,9 @@ int
pci_vfio_write_config(const struct rte_intr_handle *intr_handle,
const void *buf, size_t len, off_t offs)
{
- return pwrite64(intr_handle->vfio_dev_fd, buf, len,
+ int vfio_dev_fd = rte_intr_dev_fd_get(intr_handle);
+
+ return pwrite64(vfio_dev_fd, buf, len,
VFIO_GET_REGION_ADDR(VFIO_PCI_CONFIG_REGION_INDEX) + offs);
}
@@ -281,21 +285,27 @@ pci_vfio_setup_interrupts(struct rte_pci_device *dev, int vfio_dev_fd)
return -1;
}
- dev->intr_handle.fd = fd;
- dev->intr_handle.vfio_dev_fd = vfio_dev_fd;
+ if (rte_intr_fd_set(dev->intr_handle, fd))
+ return -1;
+
+ if (rte_intr_dev_fd_set(dev->intr_handle, vfio_dev_fd))
+ return -1;
switch (i) {
case VFIO_PCI_MSIX_IRQ_INDEX:
intr_mode = RTE_INTR_MODE_MSIX;
- dev->intr_handle.type = RTE_INTR_HANDLE_VFIO_MSIX;
+ rte_intr_type_set(dev->intr_handle,
+ RTE_INTR_HANDLE_VFIO_MSIX);
break;
case VFIO_PCI_MSI_IRQ_INDEX:
intr_mode = RTE_INTR_MODE_MSI;
- dev->intr_handle.type = RTE_INTR_HANDLE_VFIO_MSI;
+ rte_intr_type_set(dev->intr_handle,
+ RTE_INTR_HANDLE_VFIO_MSI);
break;
case VFIO_PCI_INTX_IRQ_INDEX:
intr_mode = RTE_INTR_MODE_LEGACY;
- dev->intr_handle.type = RTE_INTR_HANDLE_VFIO_LEGACY;
+ rte_intr_type_set(dev->intr_handle,
+ RTE_INTR_HANDLE_VFIO_LEGACY);
break;
default:
RTE_LOG(ERR, EAL, "Unknown interrupt type!\n");
@@ -362,11 +372,18 @@ pci_vfio_enable_notifier(struct rte_pci_device *dev, int vfio_dev_fd)
return -1;
}
- dev->vfio_req_intr_handle.fd = fd;
- dev->vfio_req_intr_handle.type = RTE_INTR_HANDLE_VFIO_REQ;
- dev->vfio_req_intr_handle.vfio_dev_fd = vfio_dev_fd;
+ if (rte_intr_fd_set(dev->vfio_req_intr_handle, fd))
+ return -1;
+
+ if (rte_intr_type_set(dev->vfio_req_intr_handle,
+ RTE_INTR_HANDLE_VFIO_REQ))
+ return -1;
+
+ if (rte_intr_dev_fd_set(dev->vfio_req_intr_handle, vfio_dev_fd))
+ return -1;
+
- ret = rte_intr_callback_register(&dev->vfio_req_intr_handle,
+ ret = rte_intr_callback_register(dev->vfio_req_intr_handle,
pci_vfio_req_handler,
(void *)&dev->device);
if (ret) {
@@ -374,10 +391,10 @@ pci_vfio_enable_notifier(struct rte_pci_device *dev, int vfio_dev_fd)
goto error;
}
- ret = rte_intr_enable(&dev->vfio_req_intr_handle);
+ ret = rte_intr_enable(dev->vfio_req_intr_handle);
if (ret) {
RTE_LOG(ERR, EAL, "Fail to enable req notifier.\n");
- ret = rte_intr_callback_unregister(&dev->vfio_req_intr_handle,
+ ret = rte_intr_callback_unregister(dev->vfio_req_intr_handle,
pci_vfio_req_handler,
(void *)&dev->device);
if (ret < 0)
@@ -390,9 +407,10 @@ pci_vfio_enable_notifier(struct rte_pci_device *dev, int vfio_dev_fd)
error:
close(fd);
- dev->vfio_req_intr_handle.fd = -1;
- dev->vfio_req_intr_handle.type = RTE_INTR_HANDLE_UNKNOWN;
- dev->vfio_req_intr_handle.vfio_dev_fd = -1;
+ rte_intr_fd_set(dev->vfio_req_intr_handle, -1);
+ rte_intr_type_set(dev->vfio_req_intr_handle,
+ RTE_INTR_HANDLE_UNKNOWN);
+ rte_intr_dev_fd_set(dev->vfio_req_intr_handle, -1);
return -1;
}
@@ -403,13 +421,13 @@ pci_vfio_disable_notifier(struct rte_pci_device *dev)
{
int ret;
- ret = rte_intr_disable(&dev->vfio_req_intr_handle);
+ ret = rte_intr_disable(dev->vfio_req_intr_handle);
if (ret) {
RTE_LOG(ERR, EAL, "fail to disable req notifier.\n");
return -1;
}
- ret = rte_intr_callback_unregister_sync(&dev->vfio_req_intr_handle,
+ ret = rte_intr_callback_unregister_sync(dev->vfio_req_intr_handle,
pci_vfio_req_handler,
(void *)&dev->device);
if (ret < 0) {
@@ -418,11 +436,12 @@ pci_vfio_disable_notifier(struct rte_pci_device *dev)
return -1;
}
- close(dev->vfio_req_intr_handle.fd);
+ close(rte_intr_fd_get(dev->vfio_req_intr_handle));
- dev->vfio_req_intr_handle.fd = -1;
- dev->vfio_req_intr_handle.type = RTE_INTR_HANDLE_UNKNOWN;
- dev->vfio_req_intr_handle.vfio_dev_fd = -1;
+ rte_intr_fd_set(dev->vfio_req_intr_handle, -1);
+ rte_intr_type_set(dev->vfio_req_intr_handle,
+ RTE_INTR_HANDLE_UNKNOWN);
+ rte_intr_dev_fd_set(dev->vfio_req_intr_handle, -1);
return 0;
}
@@ -705,9 +724,13 @@ pci_vfio_map_resource_primary(struct rte_pci_device *dev)
struct pci_map *maps;
- dev->intr_handle.fd = -1;
+ if (rte_intr_fd_set(dev->intr_handle, -1))
+ return -1;
+
#ifdef HAVE_VFIO_DEV_REQ_INTERFACE
- dev->vfio_req_intr_handle.fd = -1;
+ if (rte_intr_fd_set(dev->vfio_req_intr_handle, -1))
+ return -1;
+
#endif
/* store PCI address string */
@@ -854,9 +877,12 @@ pci_vfio_map_resource_secondary(struct rte_pci_device *dev)
struct pci_map *maps;
- dev->intr_handle.fd = -1;
+ if (rte_intr_fd_set(dev->intr_handle, -1))
+ return -1;
+
#ifdef HAVE_VFIO_DEV_REQ_INTERFACE
- dev->vfio_req_intr_handle.fd = -1;
+ if (rte_intr_fd_set(dev->vfio_req_intr_handle, -1))
+ return -1;
#endif
/* store PCI address string */
@@ -897,9 +923,11 @@ pci_vfio_map_resource_secondary(struct rte_pci_device *dev)
}
/* we need save vfio_dev_fd, so it can be used during release */
- dev->intr_handle.vfio_dev_fd = vfio_dev_fd;
+ if (rte_intr_dev_fd_set(dev->intr_handle, vfio_dev_fd))
+ goto err_vfio_dev_fd;
#ifdef HAVE_VFIO_DEV_REQ_INTERFACE
- dev->vfio_req_intr_handle.vfio_dev_fd = vfio_dev_fd;
+ if (rte_intr_dev_fd_set(dev->vfio_req_intr_handle, vfio_dev_fd))
+ goto err_vfio_dev_fd;
#endif
return 0;
@@ -968,7 +996,7 @@ pci_vfio_unmap_resource_primary(struct rte_pci_device *dev)
struct rte_pci_addr *loc = &dev->addr;
struct mapped_pci_resource *vfio_res = NULL;
struct mapped_pci_res_list *vfio_res_list;
- int ret;
+ int ret, vfio_dev_fd;
/* store PCI address string */
snprintf(pci_addr, sizeof(pci_addr), PCI_PRI_FMT,
@@ -982,20 +1010,21 @@ pci_vfio_unmap_resource_primary(struct rte_pci_device *dev)
}
#endif
- if (close(dev->intr_handle.fd) < 0) {
+ if (close(rte_intr_fd_get(dev->intr_handle)) < 0) {
RTE_LOG(INFO, EAL, "Error when closing eventfd file descriptor for %s\n",
pci_addr);
return -1;
}
- if (pci_vfio_set_bus_master(dev->intr_handle.vfio_dev_fd, false)) {
+ vfio_dev_fd = rte_intr_dev_fd_get(dev->intr_handle);
+ if (pci_vfio_set_bus_master(vfio_dev_fd, false)) {
RTE_LOG(ERR, EAL, "%s cannot unset bus mastering for PCI device!\n",
pci_addr);
return -1;
}
ret = rte_vfio_release_device(rte_pci_get_sysfs_path(), pci_addr,
- dev->intr_handle.vfio_dev_fd);
+ vfio_dev_fd);
if (ret < 0) {
RTE_LOG(ERR, EAL, "Cannot release VFIO device\n");
return ret;
@@ -1024,14 +1053,15 @@ pci_vfio_unmap_resource_secondary(struct rte_pci_device *dev)
struct rte_pci_addr *loc = &dev->addr;
struct mapped_pci_resource *vfio_res = NULL;
struct mapped_pci_res_list *vfio_res_list;
- int ret;
+ int ret, vfio_dev_fd;
/* store PCI address string */
snprintf(pci_addr, sizeof(pci_addr), PCI_PRI_FMT,
loc->domain, loc->bus, loc->devid, loc->function);
+ vfio_dev_fd = rte_intr_dev_fd_get(dev->intr_handle);
ret = rte_vfio_release_device(rte_pci_get_sysfs_path(), pci_addr,
- dev->intr_handle.vfio_dev_fd);
+ vfio_dev_fd);
if (ret < 0) {
RTE_LOG(ERR, EAL, "Cannot release VFIO device\n");
return ret;
@@ -1079,9 +1109,10 @@ void
pci_vfio_ioport_read(struct rte_pci_ioport *p,
void *data, size_t len, off_t offset)
{
- const struct rte_intr_handle *intr_handle = &p->dev->intr_handle;
+ const struct rte_intr_handle *intr_handle = p->dev->intr_handle;
+ int vfio_dev_fd = rte_intr_dev_fd_get(intr_handle);
- if (pread64(intr_handle->vfio_dev_fd, data,
+ if (pread64(vfio_dev_fd, data,
len, p->base + offset) <= 0)
RTE_LOG(ERR, EAL,
"Can't read from PCI bar (%" PRIu64 ") : offset (%x)\n",
@@ -1092,9 +1123,10 @@ void
pci_vfio_ioport_write(struct rte_pci_ioport *p,
const void *data, size_t len, off_t offset)
{
- const struct rte_intr_handle *intr_handle = &p->dev->intr_handle;
+ const struct rte_intr_handle *intr_handle = p->dev->intr_handle;
+ int vfio_dev_fd = rte_intr_dev_fd_get(intr_handle);
- if (pwrite64(intr_handle->vfio_dev_fd, data,
+ if (pwrite64(vfio_dev_fd, data,
len, p->base + offset) <= 0)
RTE_LOG(ERR, EAL,
"Can't write to PCI bar (%" PRIu64 ") : offset (%x)\n",
diff --git a/drivers/bus/pci/pci_common.c b/drivers/bus/pci/pci_common.c
index 3406e03b29..988d4d449c 100644
--- a/drivers/bus/pci/pci_common.c
+++ b/drivers/bus/pci/pci_common.c
@@ -230,6 +230,24 @@ rte_pci_probe_one_driver(struct rte_pci_driver *dr,
}
if (!already_probed && (dr->drv_flags & RTE_PCI_DRV_NEED_MAPPING)) {
+ /* Allocate interrupt instance for pci device */
+ dev->intr_handle =
+ rte_intr_instance_alloc(RTE_INTR_INSTANCE_F_UNSHARED);
+ if (!dev->intr_handle) {
+ RTE_LOG(ERR, EAL,
+ "Failed to create interrupt instance for %s\n",
+ dev->device.name);
+ return -ENOMEM;
+ }
+
+ dev->vfio_req_intr_handle =
+ rte_intr_instance_alloc(RTE_INTR_INSTANCE_F_UNSHARED);
+ if (!dev->vfio_req_intr_handle) {
+ RTE_LOG(ERR, EAL,
+ "Failed to create vfio req interrupt instance for %s\n",
+ dev->device.name);
+ return -ENOMEM;
+ }
/* map resources for devices that use igb_uio */
ret = rte_pci_map_device(dev);
if (ret != 0) {
@@ -253,8 +271,12 @@ rte_pci_probe_one_driver(struct rte_pci_driver *dr,
* driver needs mapped resources.
*/
!(ret > 0 &&
- (dr->drv_flags & RTE_PCI_DRV_KEEP_MAPPED_RES)))
+ (dr->drv_flags & RTE_PCI_DRV_KEEP_MAPPED_RES))) {
rte_pci_unmap_device(dev);
+ rte_intr_instance_free(dev->intr_handle);
+ rte_intr_instance_free(
+ dev->vfio_req_intr_handle);
+ }
} else {
dev->device.driver = &dr->driver;
}
@@ -296,9 +318,12 @@ rte_pci_detach_dev(struct rte_pci_device *dev)
dev->driver = NULL;
dev->device.driver = NULL;
- if (dr->drv_flags & RTE_PCI_DRV_NEED_MAPPING)
+ if (dr->drv_flags & RTE_PCI_DRV_NEED_MAPPING) {
/* unmap resources for devices that use igb_uio */
rte_pci_unmap_device(dev);
+ rte_intr_instance_free(dev->intr_handle);
+ rte_intr_instance_free(dev->vfio_req_intr_handle);
+ }
return 0;
}
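The instance allocated at probe time above is balanced by the rte_intr_instance_free() calls on the unmap and detach paths; in sketch form the expected lifecycle is (error paths and the actual mapping trimmed, example_* names are illustrative):

static int
example_probe(struct rte_pci_device *dev)
{
	dev->intr_handle =
		rte_intr_instance_alloc(RTE_INTR_INSTANCE_F_UNSHARED);
	if (dev->intr_handle == NULL)
		return -ENOMEM;
	/* rte_pci_map_device() then fills the handle via the set APIs */
	return 0;
}

static void
example_remove(struct rte_pci_device *dev)
{
	rte_intr_instance_free(dev->intr_handle);
	dev->intr_handle = NULL;
}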
diff --git a/drivers/bus/pci/pci_common_uio.c b/drivers/bus/pci/pci_common_uio.c
index 318f9a1d55..244c9a8940 100644
--- a/drivers/bus/pci/pci_common_uio.c
+++ b/drivers/bus/pci/pci_common_uio.c
@@ -90,8 +90,11 @@ pci_uio_map_resource(struct rte_pci_device *dev)
struct mapped_pci_res_list *uio_res_list =
RTE_TAILQ_CAST(rte_uio_tailq.head, mapped_pci_res_list);
- dev->intr_handle.fd = -1;
- dev->intr_handle.uio_cfg_fd = -1;
+ if (rte_intr_fd_set(dev->intr_handle, -1))
+ return -1;
+
+ if (rte_intr_dev_fd_set(dev->intr_handle, -1))
+ return -1;
/* secondary processes - use already recorded details */
if (rte_eal_process_type() != RTE_PROC_PRIMARY)
@@ -207,6 +210,7 @@ pci_uio_unmap_resource(struct rte_pci_device *dev)
struct mapped_pci_resource *uio_res;
struct mapped_pci_res_list *uio_res_list =
RTE_TAILQ_CAST(rte_uio_tailq.head, mapped_pci_res_list);
+ int uio_cfg_fd;
if (dev == NULL)
return;
@@ -229,12 +233,13 @@ pci_uio_unmap_resource(struct rte_pci_device *dev)
rte_free(uio_res);
/* close fd if in primary process */
- close(dev->intr_handle.fd);
- if (dev->intr_handle.uio_cfg_fd >= 0) {
- close(dev->intr_handle.uio_cfg_fd);
- dev->intr_handle.uio_cfg_fd = -1;
+ close(rte_intr_fd_get(dev->intr_handle));
+ uio_cfg_fd = rte_intr_dev_fd_get(dev->intr_handle);
+ if (uio_cfg_fd >= 0) {
+ close(uio_cfg_fd);
+ rte_intr_dev_fd_set(dev->intr_handle, -1);
}
- dev->intr_handle.fd = -1;
- dev->intr_handle.type = RTE_INTR_HANDLE_UNKNOWN;
+ rte_intr_fd_set(dev->intr_handle, -1);
+ rte_intr_type_set(dev->intr_handle, RTE_INTR_HANDLE_UNKNOWN);
}
diff --git a/drivers/bus/pci/rte_bus_pci.h b/drivers/bus/pci/rte_bus_pci.h
index 673a2850c1..1c6a8fdd7b 100644
--- a/drivers/bus/pci/rte_bus_pci.h
+++ b/drivers/bus/pci/rte_bus_pci.h
@@ -69,12 +69,12 @@ struct rte_pci_device {
struct rte_pci_id id; /**< PCI ID. */
struct rte_mem_resource mem_resource[PCI_MAX_RESOURCE];
/**< PCI Memory Resource */
- struct rte_intr_handle intr_handle; /**< Interrupt handle */
+ struct rte_intr_handle *intr_handle; /**< Interrupt handle */
struct rte_pci_driver *driver; /**< PCI driver used in probing */
uint16_t max_vfs; /**< sriov enable if not zero */
enum rte_pci_kernel_driver kdrv; /**< Kernel driver passthrough */
char name[PCI_PRI_STR_SIZE+1]; /**< PCI location (ASCII) */
- struct rte_intr_handle vfio_req_intr_handle;
+ struct rte_intr_handle *vfio_req_intr_handle;
/**< Handler of VFIO request interrupt */
};
diff --git a/drivers/bus/vmbus/linux/vmbus_bus.c b/drivers/bus/vmbus/linux/vmbus_bus.c
index 68f6cc5742..fb5273500d 100644
--- a/drivers/bus/vmbus/linux/vmbus_bus.c
+++ b/drivers/bus/vmbus/linux/vmbus_bus.c
@@ -299,6 +299,12 @@ vmbus_scan_one(const char *name)
dev->device.devargs = vmbus_devargs_lookup(dev);
+ /* Allocate interrupt handle instance */
+ dev->intr_handle =
+ rte_intr_instance_alloc(RTE_INTR_INSTANCE_F_UNSHARED);
+ if (!dev->intr_handle)
+ goto error;
+
/* device is valid, add in list (sorted) */
VMBUS_LOG(DEBUG, "Adding vmbus device %s", name);
diff --git a/drivers/bus/vmbus/linux/vmbus_uio.c b/drivers/bus/vmbus/linux/vmbus_uio.c
index 70b0d098e0..7792712a25 100644
--- a/drivers/bus/vmbus/linux/vmbus_uio.c
+++ b/drivers/bus/vmbus/linux/vmbus_uio.c
@@ -30,9 +30,11 @@ static void *vmbus_map_addr;
/* Control interrupts */
void vmbus_uio_irq_control(struct rte_vmbus_device *dev, int32_t onoff)
{
- if (write(dev->intr_handle.fd, &onoff, sizeof(onoff)) < 0) {
+ if (write(rte_intr_fd_get(dev->intr_handle), &onoff,
+ sizeof(onoff)) < 0) {
VMBUS_LOG(ERR, "cannot write to %d:%s",
- dev->intr_handle.fd, strerror(errno));
+ rte_intr_fd_get(dev->intr_handle),
+ strerror(errno));
}
}
@@ -41,7 +43,8 @@ int vmbus_uio_irq_read(struct rte_vmbus_device *dev)
int32_t count;
int cc;
- cc = read(dev->intr_handle.fd, &count, sizeof(count));
+ cc = read(rte_intr_fd_get(dev->intr_handle), &count,
+ sizeof(count));
if (cc < (int)sizeof(count)) {
if (cc < 0) {
VMBUS_LOG(ERR, "IRQ read failed %s",
@@ -61,15 +64,16 @@ vmbus_uio_free_resource(struct rte_vmbus_device *dev,
{
rte_free(uio_res);
- if (dev->intr_handle.uio_cfg_fd >= 0) {
- close(dev->intr_handle.uio_cfg_fd);
- dev->intr_handle.uio_cfg_fd = -1;
+ if (rte_intr_dev_fd_get(dev->intr_handle) >= 0) {
+ close(rte_intr_dev_fd_get(dev->intr_handle));
+ rte_intr_dev_fd_set(dev->intr_handle, -1);
}
- if (dev->intr_handle.fd >= 0) {
- close(dev->intr_handle.fd);
- dev->intr_handle.fd = -1;
- dev->intr_handle.type = RTE_INTR_HANDLE_UNKNOWN;
+ if (rte_intr_fd_get(dev->intr_handle) >= 0) {
+ close(rte_intr_fd_get(dev->intr_handle));
+ rte_intr_fd_set(dev->intr_handle, -1);
+ rte_intr_type_set(dev->intr_handle,
+ RTE_INTR_HANDLE_UNKNOWN);
}
}
@@ -78,16 +82,23 @@ vmbus_uio_alloc_resource(struct rte_vmbus_device *dev,
struct mapped_vmbus_resource **uio_res)
{
char devname[PATH_MAX]; /* contains the /dev/uioX */
+ int fd;
/* save fd if in primary process */
snprintf(devname, sizeof(devname), "/dev/uio%u", dev->uio_num);
- dev->intr_handle.fd = open(devname, O_RDWR);
- if (dev->intr_handle.fd < 0) {
+ fd = open(devname, O_RDWR);
+ if (fd < 0) {
VMBUS_LOG(ERR, "Cannot open %s: %s",
devname, strerror(errno));
goto error;
}
- dev->intr_handle.type = RTE_INTR_HANDLE_UIO_INTX;
+
+ if (rte_intr_fd_set(dev->intr_handle, fd))
+ goto error;
+
+ if (rte_intr_type_set(dev->intr_handle,
+ RTE_INTR_HANDLE_UIO_INTX))
+ goto error;
/* allocate the mapping details for secondary processes*/
*uio_res = rte_zmalloc("UIO_RES", sizeof(**uio_res), 0);
diff --git a/drivers/bus/vmbus/rte_bus_vmbus.h b/drivers/bus/vmbus/rte_bus_vmbus.h
index 6bcff66468..466d42d277 100644
--- a/drivers/bus/vmbus/rte_bus_vmbus.h
+++ b/drivers/bus/vmbus/rte_bus_vmbus.h
@@ -73,7 +73,7 @@ struct rte_vmbus_device {
struct vmbus_channel *primary; /**< VMBUS primary channel */
struct vmbus_mon_page *monitor_page; /**< VMBUS monitor page */
- struct rte_intr_handle intr_handle; /**< Interrupt handle */
+ struct rte_intr_handle *intr_handle; /**< Interrupt handle */
struct rte_mem_resource resource[VMBUS_MAX_RESOURCE];
};
diff --git a/drivers/bus/vmbus/vmbus_common_uio.c b/drivers/bus/vmbus/vmbus_common_uio.c
index 041712fe75..90b34004fa 100644
--- a/drivers/bus/vmbus/vmbus_common_uio.c
+++ b/drivers/bus/vmbus/vmbus_common_uio.c
@@ -171,9 +171,15 @@ vmbus_uio_map_resource(struct rte_vmbus_device *dev)
int ret;
/* TODO: handle rescind */
- dev->intr_handle.fd = -1;
- dev->intr_handle.uio_cfg_fd = -1;
- dev->intr_handle.type = RTE_INTR_HANDLE_UNKNOWN;
+ if (rte_intr_fd_set(dev->intr_handle, -1))
+ return -1;
+
+ if (rte_intr_dev_fd_set(dev->intr_handle, -1))
+ return -1;
+
+ if (rte_intr_type_set(dev->intr_handle,
+ RTE_INTR_HANDLE_UNKNOWN))
+ return -1;
/* secondary processes - use already recorded details */
if (rte_eal_process_type() != RTE_PROC_PRIMARY)
@@ -253,12 +259,12 @@ vmbus_uio_unmap_resource(struct rte_vmbus_device *dev)
rte_free(uio_res);
/* close fd if in primary process */
- close(dev->intr_handle.fd);
- if (dev->intr_handle.uio_cfg_fd >= 0) {
- close(dev->intr_handle.uio_cfg_fd);
- dev->intr_handle.uio_cfg_fd = -1;
+ close(rte_intr_fd_get(dev->intr_handle));
+ if (rte_intr_dev_fd_get(dev->intr_handle) >= 0) {
+ close(rte_intr_dev_fd_get(dev->intr_handle));
+ rte_intr_dev_fd_set(dev->intr_handle, -1);
}
- dev->intr_handle.fd = -1;
- dev->intr_handle.type = RTE_INTR_HANDLE_UNKNOWN;
+ rte_intr_fd_set(dev->intr_handle, -1);
+ rte_intr_type_set(dev->intr_handle, RTE_INTR_HANDLE_UNKNOWN);
}
diff --git a/drivers/common/cnxk/roc_cpt.c b/drivers/common/cnxk/roc_cpt.c
index 74ada6ef42..15f1aae23e 100644
--- a/drivers/common/cnxk/roc_cpt.c
+++ b/drivers/common/cnxk/roc_cpt.c
@@ -65,7 +65,7 @@ cpt_lf_register_misc_irq(struct roc_cpt_lf *lf)
struct plt_intr_handle *handle;
int rc, vec;
- handle = &pci_dev->intr_handle;
+ handle = pci_dev->intr_handle;
vec = lf->msixoff + CPT_LF_INT_VEC_MISC;
/* Clear err interrupt */
@@ -85,7 +85,7 @@ cpt_lf_unregister_misc_irq(struct roc_cpt_lf *lf)
struct plt_intr_handle *handle;
int vec;
- handle = &pci_dev->intr_handle;
+ handle = pci_dev->intr_handle;
vec = lf->msixoff + CPT_LF_INT_VEC_MISC;
/* Clear err interrupt */
@@ -129,7 +129,7 @@ cpt_lf_register_done_irq(struct roc_cpt_lf *lf)
struct plt_intr_handle *handle;
int rc, vec;
- handle = &pci_dev->intr_handle;
+ handle = pci_dev->intr_handle;
vec = lf->msixoff + CPT_LF_INT_VEC_DONE;
@@ -152,7 +152,7 @@ cpt_lf_unregister_done_irq(struct roc_cpt_lf *lf)
struct plt_intr_handle *handle;
int vec;
- handle = &pci_dev->intr_handle;
+ handle = pci_dev->intr_handle;
vec = lf->msixoff + CPT_LF_INT_VEC_DONE;
diff --git a/drivers/common/cnxk/roc_dev.c b/drivers/common/cnxk/roc_dev.c
index ce6980cbe4..926a916e44 100644
--- a/drivers/common/cnxk/roc_dev.c
+++ b/drivers/common/cnxk/roc_dev.c
@@ -641,7 +641,7 @@ roc_af_pf_mbox_irq(void *param)
static int
mbox_register_pf_irq(struct plt_pci_device *pci_dev, struct dev *dev)
{
- struct plt_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct plt_intr_handle *intr_handle = pci_dev->intr_handle;
int i, rc;
/* HW clear irq */
@@ -691,7 +691,7 @@ mbox_register_pf_irq(struct plt_pci_device *pci_dev, struct dev *dev)
static int
mbox_register_vf_irq(struct plt_pci_device *pci_dev, struct dev *dev)
{
- struct plt_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct plt_intr_handle *intr_handle = pci_dev->intr_handle;
int rc;
/* Clear irq */
@@ -724,7 +724,7 @@ mbox_register_irq(struct plt_pci_device *pci_dev, struct dev *dev)
static void
mbox_unregister_pf_irq(struct plt_pci_device *pci_dev, struct dev *dev)
{
- struct plt_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct plt_intr_handle *intr_handle = pci_dev->intr_handle;
int i;
/* HW clear irq */
@@ -755,7 +755,7 @@ mbox_unregister_pf_irq(struct plt_pci_device *pci_dev, struct dev *dev)
static void
mbox_unregister_vf_irq(struct plt_pci_device *pci_dev, struct dev *dev)
{
- struct plt_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct plt_intr_handle *intr_handle = pci_dev->intr_handle;
/* Clear irq */
plt_write64(~0ull, dev->bar2 + RVU_VF_INT_ENA_W1C);
@@ -839,7 +839,7 @@ roc_pf_vf_flr_irq(void *param)
static int
vf_flr_unregister_irqs(struct plt_pci_device *pci_dev, struct dev *dev)
{
- struct plt_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct plt_intr_handle *intr_handle = pci_dev->intr_handle;
int i;
plt_base_dbg("Unregister VF FLR interrupts for %s", pci_dev->name);
@@ -860,7 +860,7 @@ vf_flr_unregister_irqs(struct plt_pci_device *pci_dev, struct dev *dev)
static int
vf_flr_register_irqs(struct plt_pci_device *pci_dev, struct dev *dev)
{
- struct plt_intr_handle *handle = &pci_dev->intr_handle;
+ struct plt_intr_handle *handle = pci_dev->intr_handle;
int i, rc;
plt_base_dbg("Register VF FLR interrupts for %s", pci_dev->name);
@@ -1211,7 +1211,7 @@ dev_init(struct dev *dev, struct plt_pci_device *pci_dev)
int
dev_fini(struct dev *dev, struct plt_pci_device *pci_dev)
{
- struct plt_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct plt_intr_handle *intr_handle = pci_dev->intr_handle;
struct mbox *mbox;
/* Check if this dev hosts npalf and has 1+ refs */
diff --git a/drivers/common/cnxk/roc_irq.c b/drivers/common/cnxk/roc_irq.c
index 28fe691932..bc20341288 100644
--- a/drivers/common/cnxk/roc_irq.c
+++ b/drivers/common/cnxk/roc_irq.c
@@ -20,11 +20,12 @@ static int
irq_get_info(struct plt_intr_handle *intr_handle)
{
struct vfio_irq_info irq = {.argsz = sizeof(irq)};
- int rc;
+ int rc, vfio_dev_fd;
irq.index = VFIO_PCI_MSIX_IRQ_INDEX;
- rc = ioctl(intr_handle->vfio_dev_fd, VFIO_DEVICE_GET_IRQ_INFO, &irq);
+ vfio_dev_fd = plt_intr_dev_fd_get(intr_handle);
+ rc = ioctl(vfio_dev_fd, VFIO_DEVICE_GET_IRQ_INFO, &irq);
if (rc < 0) {
plt_err("Failed to get IRQ info rc=%d errno=%d", rc, errno);
return rc;
@@ -36,9 +37,11 @@ irq_get_info(struct plt_intr_handle *intr_handle)
if (irq.count > PLT_MAX_RXTX_INTR_VEC_ID) {
plt_err("HW max=%d > PLT_MAX_RXTX_INTR_VEC_ID: %d", irq.count,
PLT_MAX_RXTX_INTR_VEC_ID);
- intr_handle->max_intr = PLT_MAX_RXTX_INTR_VEC_ID;
+ plt_intr_max_intr_set(intr_handle,
+ PLT_MAX_RXTX_INTR_VEC_ID);
} else {
- intr_handle->max_intr = irq.count;
+ if (plt_intr_max_intr_set(intr_handle, irq.count))
+ return -1;
}
return 0;
@@ -49,12 +52,12 @@ irq_config(struct plt_intr_handle *intr_handle, unsigned int vec)
{
char irq_set_buf[MSIX_IRQ_SET_BUF_LEN];
struct vfio_irq_set *irq_set;
+ int len, rc, vfio_dev_fd;
int32_t *fd_ptr;
- int len, rc;
- if (vec > intr_handle->max_intr) {
+ if (vec > (uint32_t)plt_intr_max_intr_get(intr_handle)) {
plt_err("vector=%d greater than max_intr=%d", vec,
- intr_handle->max_intr);
+ plt_intr_max_intr_get(intr_handle));
return -EINVAL;
}
@@ -71,9 +74,10 @@ irq_config(struct plt_intr_handle *intr_handle, unsigned int vec)
/* Use vec fd to set interrupt vectors */
fd_ptr = (int32_t *)&irq_set->data[0];
- fd_ptr[0] = intr_handle->efds[vec];
+ fd_ptr[0] = plt_intr_efds_index_get(intr_handle, vec);
- rc = ioctl(intr_handle->vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
+ vfio_dev_fd = plt_intr_dev_fd_get(intr_handle);
+ rc = ioctl(vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
if (rc)
plt_err("Failed to set_irqs vector=0x%x rc=%d", vec, rc);
@@ -85,23 +89,25 @@ irq_init(struct plt_intr_handle *intr_handle)
{
char irq_set_buf[MSIX_IRQ_SET_BUF_LEN];
struct vfio_irq_set *irq_set;
+ int len, rc, vfio_dev_fd;
int32_t *fd_ptr;
- int len, rc;
uint32_t i;
- if (intr_handle->max_intr > PLT_MAX_RXTX_INTR_VEC_ID) {
+ if (plt_intr_max_intr_get(intr_handle) >
+ PLT_MAX_RXTX_INTR_VEC_ID) {
plt_err("Max_intr=%d greater than PLT_MAX_RXTX_INTR_VEC_ID=%d",
- intr_handle->max_intr, PLT_MAX_RXTX_INTR_VEC_ID);
+ plt_intr_max_intr_get(intr_handle),
+ PLT_MAX_RXTX_INTR_VEC_ID);
return -ERANGE;
}
len = sizeof(struct vfio_irq_set) +
- sizeof(int32_t) * intr_handle->max_intr;
+ sizeof(int32_t) * plt_intr_max_intr_get(intr_handle);
irq_set = (struct vfio_irq_set *)irq_set_buf;
irq_set->argsz = len;
irq_set->start = 0;
- irq_set->count = intr_handle->max_intr;
+ irq_set->count = plt_intr_max_intr_get(intr_handle);
irq_set->flags =
VFIO_IRQ_SET_DATA_EVENTFD | VFIO_IRQ_SET_ACTION_TRIGGER;
irq_set->index = VFIO_PCI_MSIX_IRQ_INDEX;
@@ -110,7 +116,8 @@ irq_init(struct plt_intr_handle *intr_handle)
for (i = 0; i < irq_set->count; i++)
fd_ptr[i] = -1;
- rc = ioctl(intr_handle->vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
+ vfio_dev_fd = plt_intr_dev_fd_get(intr_handle);
+ rc = ioctl(vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
if (rc)
plt_err("Failed to set irqs vector rc=%d", rc);
@@ -121,7 +128,7 @@ int
dev_irqs_disable(struct plt_intr_handle *intr_handle)
{
/* Clear max_intr to indicate re-init next time */
- intr_handle->max_intr = 0;
+ plt_intr_max_intr_set(intr_handle, 0);
return plt_intr_disable(intr_handle);
}
@@ -129,43 +136,49 @@ int
dev_irq_register(struct plt_intr_handle *intr_handle, plt_intr_callback_fn cb,
void *data, unsigned int vec)
{
- struct plt_intr_handle tmp_handle;
- int rc;
+ struct plt_intr_handle *tmp_handle;
+ uint32_t nb_efd, tmp_nb_efd;
+ int rc, fd;
/* If no max_intr read from VFIO */
- if (intr_handle->max_intr == 0) {
+ if (plt_intr_max_intr_get(intr_handle) == 0) {
irq_get_info(intr_handle);
irq_init(intr_handle);
}
- if (vec > intr_handle->max_intr || vec >= PLT_DIM(intr_handle->efds)) {
- plt_err("Vector=%d greater than max_intr=%d or "
- "max_efd=%" PRIu64,
- vec, intr_handle->max_intr, PLT_DIM(intr_handle->efds));
+ if (vec > (uint32_t)plt_intr_max_intr_get(intr_handle)) {
+ plt_err("Vector=%d greater than max_intr=%d or ",
+ vec, plt_intr_max_intr_get(intr_handle));
return -EINVAL;
}
- tmp_handle = *intr_handle;
+ tmp_handle = intr_handle;
/* Create new eventfd for interrupt vector */
- tmp_handle.fd = eventfd(0, EFD_NONBLOCK | EFD_CLOEXEC);
- if (tmp_handle.fd == -1)
+ fd = eventfd(0, EFD_NONBLOCK | EFD_CLOEXEC);
+ if (fd == -1)
return -ENODEV;
+ if (plt_intr_fd_set(tmp_handle, fd))
+ return errno;
+
/* Register vector interrupt callback */
- rc = plt_intr_callback_register(&tmp_handle, cb, data);
+ rc = plt_intr_callback_register(tmp_handle, cb, data);
if (rc) {
plt_err("Failed to register vector:0x%x irq callback.", vec);
return rc;
}
- intr_handle->efds[vec] = tmp_handle.fd;
- intr_handle->nb_efd =
- (vec > intr_handle->nb_efd) ? vec : intr_handle->nb_efd;
- if ((intr_handle->nb_efd + 1) > intr_handle->max_intr)
- intr_handle->max_intr = intr_handle->nb_efd + 1;
+ plt_intr_efds_index_set(intr_handle, vec, fd);
+ nb_efd = (vec > (uint32_t)plt_intr_nb_efd_get(intr_handle)) ?
+ vec : (uint32_t)plt_intr_nb_efd_get(intr_handle);
+ plt_intr_nb_efd_set(intr_handle, nb_efd);
+ tmp_nb_efd = plt_intr_nb_efd_get(intr_handle) + 1;
+ if (tmp_nb_efd > (uint32_t)plt_intr_max_intr_get(intr_handle))
+ plt_intr_max_intr_set(intr_handle, tmp_nb_efd);
plt_base_dbg("Enable vector:0x%x for vfio (efds: %d, max:%d)", vec,
- intr_handle->nb_efd, intr_handle->max_intr);
+ plt_intr_nb_efd_get(intr_handle),
+ plt_intr_max_intr_get(intr_handle));
/* Enable MSIX vectors to VFIO */
return irq_config(intr_handle, vec);
@@ -175,24 +188,27 @@ void
dev_irq_unregister(struct plt_intr_handle *intr_handle, plt_intr_callback_fn cb,
void *data, unsigned int vec)
{
- struct plt_intr_handle tmp_handle;
+ struct plt_intr_handle *tmp_handle;
uint8_t retries = 5; /* 5 ms */
- int rc;
+ int rc, fd;
- if (vec > intr_handle->max_intr) {
+ if (vec > (uint32_t)plt_intr_max_intr_get(intr_handle)) {
plt_err("Error unregistering MSI-X interrupts vec:%d > %d", vec,
- intr_handle->max_intr);
+ plt_intr_max_intr_get(intr_handle));
return;
}
- tmp_handle = *intr_handle;
- tmp_handle.fd = intr_handle->efds[vec];
- if (tmp_handle.fd == -1)
+ tmp_handle = intr_handle;
+ fd = plt_intr_efds_index_get(intr_handle, vec);
+ if (fd == -1)
+ return;
+
+ if (plt_intr_fd_set(tmp_handle, fd))
return;
do {
/* Un-register callback func from platform lib */
- rc = plt_intr_callback_unregister(&tmp_handle, cb, data);
+ rc = plt_intr_callback_unregister(tmp_handle, cb, data);
/* Retry only if -EAGAIN */
if (rc != -EAGAIN)
break;
@@ -206,12 +222,14 @@ dev_irq_unregister(struct plt_intr_handle *intr_handle, plt_intr_callback_fn cb,
}
plt_base_dbg("Disable vector:0x%x for vfio (efds: %d, max:%d)", vec,
- intr_handle->nb_efd, intr_handle->max_intr);
+ plt_intr_nb_efd_get(intr_handle),
+ plt_intr_max_intr_get(intr_handle));
- if (intr_handle->efds[vec] != -1)
- close(intr_handle->efds[vec]);
+ if (plt_intr_efds_index_get(intr_handle, vec) != -1)
+ close(plt_intr_efds_index_get(intr_handle, vec));
/* Disable MSIX vectors from VFIO */
- intr_handle->efds[vec] = -1;
+ plt_intr_efds_index_set(intr_handle, vec, -1);
+
irq_config(intr_handle, vec);
}
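The reworked dev_irq_register() keeps the earlier flow, only expressed through the accessors; in outline (a sketch with the max_intr bookkeeping and error checks trimmed, example_* name is illustrative):

static int
example_register_vec(struct plt_intr_handle *intr_handle,
		     plt_intr_callback_fn cb, void *data, unsigned int vec)
{
	/* one eventfd per interrupt vector */
	int fd = eventfd(0, EFD_NONBLOCK | EFD_CLOEXEC);

	if (fd == -1)
		return -ENODEV;
	plt_intr_fd_set(intr_handle, fd);
	plt_intr_callback_register(intr_handle, cb, data);
	/* remember the fd for this vector and track the highest vector seen */
	plt_intr_efds_index_set(intr_handle, vec, fd);
	if (vec > (uint32_t)plt_intr_nb_efd_get(intr_handle))
		plt_intr_nb_efd_set(intr_handle, vec);
	/* hand the eventfd to VFIO */
	return irq_config(intr_handle, vec);
}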
diff --git a/drivers/common/cnxk/roc_nix_inl_dev_irq.c b/drivers/common/cnxk/roc_nix_inl_dev_irq.c
index 25ed42f875..848523b010 100644
--- a/drivers/common/cnxk/roc_nix_inl_dev_irq.c
+++ b/drivers/common/cnxk/roc_nix_inl_dev_irq.c
@@ -99,7 +99,7 @@ nix_inl_sso_hws_irq(void *param)
int
nix_inl_sso_register_irqs(struct nix_inl_dev *inl_dev)
{
- struct plt_intr_handle *handle = &inl_dev->pci_dev->intr_handle;
+ struct plt_intr_handle *handle = inl_dev->pci_dev->intr_handle;
uintptr_t ssow_base = inl_dev->ssow_base;
uintptr_t sso_base = inl_dev->sso_base;
uint16_t sso_msixoff, ssow_msixoff;
@@ -147,7 +147,7 @@ nix_inl_sso_register_irqs(struct nix_inl_dev *inl_dev)
void
nix_inl_sso_unregister_irqs(struct nix_inl_dev *inl_dev)
{
- struct plt_intr_handle *handle = &inl_dev->pci_dev->intr_handle;
+ struct plt_intr_handle *handle = inl_dev->pci_dev->intr_handle;
uintptr_t ssow_base = inl_dev->ssow_base;
uintptr_t sso_base = inl_dev->sso_base;
uint16_t sso_msixoff, ssow_msixoff;
@@ -282,7 +282,7 @@ nix_inl_nix_err_irq(void *param)
int
nix_inl_nix_register_irqs(struct nix_inl_dev *inl_dev)
{
- struct plt_intr_handle *handle = &inl_dev->pci_dev->intr_handle;
+ struct plt_intr_handle *handle = inl_dev->pci_dev->intr_handle;
uintptr_t nix_base = inl_dev->nix_base;
uint16_t msixoff;
int rc;
@@ -331,7 +331,7 @@ nix_inl_nix_register_irqs(struct nix_inl_dev *inl_dev)
void
nix_inl_nix_unregister_irqs(struct nix_inl_dev *inl_dev)
{
- struct plt_intr_handle *handle = &inl_dev->pci_dev->intr_handle;
+ struct plt_intr_handle *handle = inl_dev->pci_dev->intr_handle;
uintptr_t nix_base = inl_dev->nix_base;
uint16_t msixoff;
diff --git a/drivers/common/cnxk/roc_nix_irq.c b/drivers/common/cnxk/roc_nix_irq.c
index 32be64a9d7..e9aa620abd 100644
--- a/drivers/common/cnxk/roc_nix_irq.c
+++ b/drivers/common/cnxk/roc_nix_irq.c
@@ -82,7 +82,7 @@ nix_lf_err_irq(void *param)
static int
nix_lf_register_err_irq(struct nix *nix)
{
- struct plt_intr_handle *handle = &nix->pci_dev->intr_handle;
+ struct plt_intr_handle *handle = nix->pci_dev->intr_handle;
int rc, vec;
vec = nix->msixoff + NIX_LF_INT_VEC_ERR_INT;
@@ -99,7 +99,7 @@ nix_lf_register_err_irq(struct nix *nix)
static void
nix_lf_unregister_err_irq(struct nix *nix)
{
- struct plt_intr_handle *handle = &nix->pci_dev->intr_handle;
+ struct plt_intr_handle *handle = nix->pci_dev->intr_handle;
int vec;
vec = nix->msixoff + NIX_LF_INT_VEC_ERR_INT;
@@ -131,7 +131,7 @@ nix_lf_ras_irq(void *param)
static int
nix_lf_register_ras_irq(struct nix *nix)
{
- struct plt_intr_handle *handle = &nix->pci_dev->intr_handle;
+ struct plt_intr_handle *handle = nix->pci_dev->intr_handle;
int rc, vec;
vec = nix->msixoff + NIX_LF_INT_VEC_POISON;
@@ -148,7 +148,7 @@ nix_lf_register_ras_irq(struct nix *nix)
static void
nix_lf_unregister_ras_irq(struct nix *nix)
{
- struct plt_intr_handle *handle = &nix->pci_dev->intr_handle;
+ struct plt_intr_handle *handle = nix->pci_dev->intr_handle;
int vec;
vec = nix->msixoff + NIX_LF_INT_VEC_POISON;
@@ -300,7 +300,7 @@ roc_nix_register_queue_irqs(struct roc_nix *roc_nix)
struct nix *nix;
nix = roc_nix_to_nix_priv(roc_nix);
- handle = &nix->pci_dev->intr_handle;
+ handle = nix->pci_dev->intr_handle;
/* Figure out max qintx required */
rqs = PLT_MIN(nix->qints, nix->nb_rx_queues);
@@ -352,7 +352,7 @@ roc_nix_unregister_queue_irqs(struct roc_nix *roc_nix)
int vec, q;
nix = roc_nix_to_nix_priv(roc_nix);
- handle = &nix->pci_dev->intr_handle;
+ handle = nix->pci_dev->intr_handle;
for (q = 0; q < nix->configured_qints; q++) {
vec = nix->msixoff + NIX_LF_INT_VEC_QINT_START + q;
@@ -382,7 +382,7 @@ roc_nix_register_cq_irqs(struct roc_nix *roc_nix)
struct nix *nix;
nix = roc_nix_to_nix_priv(roc_nix);
- handle = &nix->pci_dev->intr_handle;
+ handle = nix->pci_dev->intr_handle;
nix->configured_cints = PLT_MIN(nix->cints, nix->nb_rx_queues);
@@ -414,19 +414,19 @@ roc_nix_register_cq_irqs(struct roc_nix *roc_nix)
return rc;
}
- if (!handle->intr_vec) {
- handle->intr_vec = plt_zmalloc(
- nix->configured_cints * sizeof(int), 0);
- if (!handle->intr_vec) {
- plt_err("Failed to allocate %d rx intr_vec",
- nix->configured_cints);
- return -ENOMEM;
- }
+ rc = plt_intr_vec_list_alloc(handle, "cnxk",
+ nix->configured_cints);
+ if (rc) {
+ plt_err("Fail to allocate intr vec list, rc=%d",
+ rc);
+ return rc;
}
/* VFIO vector zero is reserved for misc interrupt so
* doing required adjustment. (b13bfab4cd)
*/
- handle->intr_vec[q] = PLT_INTR_VEC_RXTX_OFFSET + vec;
+ if (plt_intr_vec_list_index_set(handle, q,
+ PLT_INTR_VEC_RXTX_OFFSET + vec))
+ return -1;
/* Configure CQE interrupt coalescing parameters */
plt_write64(((CQ_CQE_THRESH_DEFAULT) |
@@ -450,7 +450,7 @@ roc_nix_unregister_cq_irqs(struct roc_nix *roc_nix)
int vec, q;
nix = roc_nix_to_nix_priv(roc_nix);
- handle = &nix->pci_dev->intr_handle;
+ handle = nix->pci_dev->intr_handle;
for (q = 0; q < nix->configured_cints; q++) {
vec = nix->msixoff + NIX_LF_INT_VEC_CINT_START + q;
@@ -465,6 +465,8 @@ roc_nix_unregister_cq_irqs(struct roc_nix *roc_nix)
dev_irq_unregister(handle, nix_lf_cq_irq, &nix->cints_mem[q],
vec);
}
+
+ plt_intr_vec_list_free(handle);
plt_free(nix->cints_mem);
}
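The hand-rolled handle->intr_vec allocation above is replaced by the vector-list helpers; the register/unregister paths then reduce to roughly the following fragment (sketch only, the "cnxk" string is simply the allocation tag used in the hunk):

	/* register: allocate the list once, then fill per-queue entries */
	if (plt_intr_vec_list_alloc(handle, "cnxk", nix->configured_cints))
		return -ENOMEM;
	if (plt_intr_vec_list_index_set(handle, q,
					PLT_INTR_VEC_RXTX_OFFSET + vec))
		return -1;

	/* unregister: release the list along with the cints memory */
	plt_intr_vec_list_free(handle);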
diff --git a/drivers/common/cnxk/roc_npa.c b/drivers/common/cnxk/roc_npa.c
index a0d2cc8f19..664240ab42 100644
--- a/drivers/common/cnxk/roc_npa.c
+++ b/drivers/common/cnxk/roc_npa.c
@@ -710,7 +710,7 @@ npa_lf_init(struct dev *dev, struct plt_pci_device *pci_dev)
lf->pf_func = dev->pf_func;
lf->npa_msixoff = npa_msixoff;
- lf->intr_handle = &pci_dev->intr_handle;
+ lf->intr_handle = pci_dev->intr_handle;
lf->pci_dev = pci_dev;
idev->npa_pf_func = dev->pf_func;
diff --git a/drivers/common/cnxk/roc_platform.h b/drivers/common/cnxk/roc_platform.h
index a0f01797f1..8b79c68087 100644
--- a/drivers/common/cnxk/roc_platform.h
+++ b/drivers/common/cnxk/roc_platform.h
@@ -106,6 +106,33 @@
#define plt_thread_is_intr rte_thread_is_intr
#define plt_intr_callback_fn rte_intr_callback_fn
+#define plt_intr_efd_counter_size_get rte_intr_efd_counter_size_get
+#define plt_intr_efd_counter_size_set rte_intr_efd_counter_size_set
+#define plt_intr_vec_list_index_get rte_intr_vec_list_index_get
+#define plt_intr_vec_list_index_set rte_intr_vec_list_index_set
+#define plt_intr_vec_list_alloc rte_intr_vec_list_alloc
+#define plt_intr_vec_list_free rte_intr_vec_list_free
+#define plt_intr_fd_set rte_intr_fd_set
+#define plt_intr_fd_get rte_intr_fd_get
+#define plt_intr_dev_fd_get rte_intr_dev_fd_get
+#define plt_intr_dev_fd_set rte_intr_dev_fd_set
+#define plt_intr_type_get rte_intr_type_get
+#define plt_intr_type_set rte_intr_type_set
+#define plt_intr_instance_alloc rte_intr_instance_alloc
+#define plt_intr_instance_copy rte_intr_instance_copy
+#define plt_intr_instance_free rte_intr_instance_free
+#define plt_intr_event_list_update rte_intr_event_list_update
+#define plt_intr_max_intr_get rte_intr_max_intr_get
+#define plt_intr_max_intr_set rte_intr_max_intr_set
+#define plt_intr_nb_efd_get rte_intr_nb_efd_get
+#define plt_intr_nb_efd_set rte_intr_nb_efd_set
+#define plt_intr_nb_intr_get rte_intr_nb_intr_get
+#define plt_intr_nb_intr_set rte_intr_nb_intr_set
+#define plt_intr_efds_index_get rte_intr_efds_index_get
+#define plt_intr_efds_index_set rte_intr_efds_index_set
+#define plt_intr_elist_index_get rte_intr_elist_index_get
+#define plt_intr_elist_index_set rte_intr_elist_index_set
+
#define plt_alarm_set rte_eal_alarm_set
#define plt_alarm_cancel rte_eal_alarm_cancel
@@ -183,7 +210,7 @@ extern int cnxk_logtype_tm;
#define plt_dbg(subsystem, fmt, args...) \
rte_log(RTE_LOG_DEBUG, cnxk_logtype_##subsystem, \
"[%s] %s():%u " fmt "\n", #subsystem, __func__, __LINE__, \
- ##args)
+##args)
#define plt_base_dbg(fmt, ...) plt_dbg(base, fmt, ##__VA_ARGS__)
#define plt_cpt_dbg(fmt, ...) plt_dbg(cpt, fmt, ##__VA_ARGS__)
@@ -203,18 +230,18 @@ extern int cnxk_logtype_tm;
#ifdef __cplusplus
#define CNXK_PCI_ID(subsystem_dev, dev) \
- { \
- RTE_CLASS_ANY_ID, PCI_VENDOR_ID_CAVIUM, (dev), RTE_PCI_ANY_ID, \
- (subsystem_dev), \
- }
+{ \
+ RTE_CLASS_ANY_ID, PCI_VENDOR_ID_CAVIUM, (dev), RTE_PCI_ANY_ID, \
+ (subsystem_dev), \
+}
#else
#define CNXK_PCI_ID(subsystem_dev, dev) \
- { \
- .class_id = RTE_CLASS_ANY_ID, \
- .vendor_id = PCI_VENDOR_ID_CAVIUM, .device_id = (dev), \
- .subsystem_vendor_id = RTE_PCI_ANY_ID, \
- .subsystem_device_id = (subsystem_dev), \
- }
+{ \
+ .class_id = RTE_CLASS_ANY_ID, \
+ .vendor_id = PCI_VENDOR_ID_CAVIUM, .device_id = (dev), \
+ .subsystem_vendor_id = RTE_PCI_ANY_ID, \
+ .subsystem_device_id = (subsystem_dev), \
+}
#endif
__rte_internal
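These one-to-one defines keep the common cnxk code platform neutral: a call written against the plt_intr_* names resolves to the matching rte_intr_* accessor when built on DPDK, e.g. (illustrative):

	plt_intr_max_intr_set(intr_handle, 0); /* expands to rte_intr_max_intr_set() */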
diff --git a/drivers/common/cnxk/roc_sso.c b/drivers/common/cnxk/roc_sso.c
index bdf973fc2a..762893f3dc 100644
--- a/drivers/common/cnxk/roc_sso.c
+++ b/drivers/common/cnxk/roc_sso.c
@@ -505,7 +505,7 @@ roc_sso_rsrc_init(struct roc_sso *roc_sso, uint8_t nb_hws, uint16_t nb_hwgrp)
goto sso_msix_fail;
}
- rc = sso_register_irqs_priv(roc_sso, &sso->pci_dev->intr_handle, nb_hws,
+ rc = sso_register_irqs_priv(roc_sso, sso->pci_dev->intr_handle, nb_hws,
nb_hwgrp);
if (rc < 0) {
plt_err("Failed to register SSO LF IRQs");
@@ -535,7 +535,7 @@ roc_sso_rsrc_fini(struct roc_sso *roc_sso)
if (!roc_sso->nb_hws && !roc_sso->nb_hwgrp)
return;
- sso_unregister_irqs_priv(roc_sso, &sso->pci_dev->intr_handle,
+ sso_unregister_irqs_priv(roc_sso, sso->pci_dev->intr_handle,
roc_sso->nb_hws, roc_sso->nb_hwgrp);
sso_lf_free(&sso->dev, SSO_LF_TYPE_HWS, roc_sso->nb_hws);
sso_lf_free(&sso->dev, SSO_LF_TYPE_HWGRP, roc_sso->nb_hwgrp);
diff --git a/drivers/common/cnxk/roc_tim.c b/drivers/common/cnxk/roc_tim.c
index 387164bb1d..534b697bee 100644
--- a/drivers/common/cnxk/roc_tim.c
+++ b/drivers/common/cnxk/roc_tim.c
@@ -200,7 +200,7 @@ roc_tim_lf_alloc(struct roc_tim *roc_tim, uint8_t ring_id, uint64_t *clk)
if (clk)
*clk = rsp->tenns_clk;
- rc = tim_register_irq_priv(roc_tim, &sso->pci_dev->intr_handle, ring_id,
+ rc = tim_register_irq_priv(roc_tim, sso->pci_dev->intr_handle, ring_id,
tim->tim_msix_offsets[ring_id]);
if (rc < 0) {
plt_tim_dbg("Failed to register Ring[%d] IRQ", ring_id);
@@ -223,7 +223,7 @@ roc_tim_lf_free(struct roc_tim *roc_tim, uint8_t ring_id)
struct tim_ring_req *req;
int rc = -ENOSPC;
- tim_unregister_irq_priv(roc_tim, &sso->pci_dev->intr_handle, ring_id,
+ tim_unregister_irq_priv(roc_tim, sso->pci_dev->intr_handle, ring_id,
tim->tim_msix_offsets[ring_id]);
req = mbox_alloc_msg_tim_lf_free(dev->mbox);
diff --git a/drivers/common/octeontx2/otx2_dev.c b/drivers/common/octeontx2/otx2_dev.c
index ce4f0e7ca9..08dca87848 100644
--- a/drivers/common/octeontx2/otx2_dev.c
+++ b/drivers/common/octeontx2/otx2_dev.c
@@ -643,7 +643,7 @@ otx2_af_pf_mbox_irq(void *param)
static int
mbox_register_pf_irq(struct rte_pci_device *pci_dev, struct otx2_dev *dev)
{
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
int i, rc;
/* HW clear irq */
@@ -693,7 +693,7 @@ mbox_register_pf_irq(struct rte_pci_device *pci_dev, struct otx2_dev *dev)
static int
mbox_register_vf_irq(struct rte_pci_device *pci_dev, struct otx2_dev *dev)
{
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
int rc;
/* Clear irq */
@@ -726,7 +726,7 @@ mbox_register_irq(struct rte_pci_device *pci_dev, struct otx2_dev *dev)
static void
mbox_unregister_pf_irq(struct rte_pci_device *pci_dev, struct otx2_dev *dev)
{
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
int i;
/* HW clear irq */
@@ -758,7 +758,7 @@ mbox_unregister_pf_irq(struct rte_pci_device *pci_dev, struct otx2_dev *dev)
static void
mbox_unregister_vf_irq(struct rte_pci_device *pci_dev, struct otx2_dev *dev)
{
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
/* Clear irq */
otx2_write64(~0ull, dev->bar2 + RVU_VF_INT_ENA_W1C);
@@ -841,7 +841,7 @@ otx2_pf_vf_flr_irq(void *param)
static int
vf_flr_unregister_irqs(struct rte_pci_device *pci_dev, struct otx2_dev *dev)
{
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
int i;
otx2_base_dbg("Unregister VF FLR interrupts for %s", pci_dev->name);
@@ -862,7 +862,7 @@ vf_flr_unregister_irqs(struct rte_pci_device *pci_dev, struct otx2_dev *dev)
static int
vf_flr_register_irqs(struct rte_pci_device *pci_dev, struct otx2_dev *dev)
{
- struct rte_intr_handle *handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *handle = pci_dev->intr_handle;
int i, rc;
otx2_base_dbg("Register VF FLR interrupts for %s", pci_dev->name);
@@ -1039,7 +1039,7 @@ otx2_dev_priv_init(struct rte_pci_device *pci_dev, void *otx2_dev)
void
otx2_dev_fini(struct rte_pci_device *pci_dev, void *otx2_dev)
{
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
struct otx2_dev *dev = otx2_dev;
struct otx2_idev_cfg *idev;
struct otx2_mbox *mbox;
diff --git a/drivers/common/octeontx2/otx2_irq.c b/drivers/common/octeontx2/otx2_irq.c
index c0137ff36d..93fc95c0e1 100644
--- a/drivers/common/octeontx2/otx2_irq.c
+++ b/drivers/common/octeontx2/otx2_irq.c
@@ -26,11 +26,12 @@ static int
irq_get_info(struct rte_intr_handle *intr_handle)
{
struct vfio_irq_info irq = { .argsz = sizeof(irq) };
- int rc;
+ int rc, vfio_dev_fd;
irq.index = VFIO_PCI_MSIX_IRQ_INDEX;
- rc = ioctl(intr_handle->vfio_dev_fd, VFIO_DEVICE_GET_IRQ_INFO, &irq);
+ vfio_dev_fd = rte_intr_dev_fd_get(intr_handle);
+ rc = ioctl(vfio_dev_fd, VFIO_DEVICE_GET_IRQ_INFO, &irq);
if (rc < 0) {
otx2_err("Failed to get IRQ info rc=%d errno=%d", rc, errno);
return rc;
@@ -41,10 +42,13 @@ irq_get_info(struct rte_intr_handle *intr_handle)
if (irq.count > MAX_INTR_VEC_ID) {
otx2_err("HW max=%d > MAX_INTR_VEC_ID: %d",
- intr_handle->max_intr, MAX_INTR_VEC_ID);
- intr_handle->max_intr = MAX_INTR_VEC_ID;
+ rte_intr_max_intr_get(intr_handle),
+ MAX_INTR_VEC_ID);
+ if (rte_intr_max_intr_set(intr_handle, MAX_INTR_VEC_ID))
+ return -1;
} else {
- intr_handle->max_intr = irq.count;
+ if (rte_intr_max_intr_set(intr_handle, irq.count))
+ return -1;
}
return 0;
@@ -55,12 +59,12 @@ irq_config(struct rte_intr_handle *intr_handle, unsigned int vec)
{
char irq_set_buf[MSIX_IRQ_SET_BUF_LEN];
struct vfio_irq_set *irq_set;
+ int len, rc, vfio_dev_fd;
int32_t *fd_ptr;
- int len, rc;
- if (vec > intr_handle->max_intr) {
+ if (vec > (uint32_t)rte_intr_max_intr_get(intr_handle)) {
otx2_err("vector=%d greater than max_intr=%d", vec,
- intr_handle->max_intr);
+ rte_intr_max_intr_get(intr_handle));
return -EINVAL;
}
@@ -77,9 +81,10 @@ irq_config(struct rte_intr_handle *intr_handle, unsigned int vec)
/* Use vec fd to set interrupt vectors */
fd_ptr = (int32_t *)&irq_set->data[0];
- fd_ptr[0] = intr_handle->efds[vec];
+ fd_ptr[0] = rte_intr_efds_index_get(intr_handle, vec);
- rc = ioctl(intr_handle->vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
+ vfio_dev_fd = rte_intr_dev_fd_get(intr_handle);
+ rc = ioctl(vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
if (rc)
otx2_err("Failed to set_irqs vector=0x%x rc=%d", vec, rc);
@@ -91,23 +96,24 @@ irq_init(struct rte_intr_handle *intr_handle)
{
char irq_set_buf[MSIX_IRQ_SET_BUF_LEN];
struct vfio_irq_set *irq_set;
+ int len, rc, vfio_dev_fd;
int32_t *fd_ptr;
- int len, rc;
uint32_t i;
- if (intr_handle->max_intr > MAX_INTR_VEC_ID) {
+ if (rte_intr_max_intr_get(intr_handle) > MAX_INTR_VEC_ID) {
otx2_err("Max_intr=%d greater than MAX_INTR_VEC_ID=%d",
- intr_handle->max_intr, MAX_INTR_VEC_ID);
+ rte_intr_max_intr_get(intr_handle),
+ MAX_INTR_VEC_ID);
return -ERANGE;
}
len = sizeof(struct vfio_irq_set) +
- sizeof(int32_t) * intr_handle->max_intr;
+ sizeof(int32_t) * rte_intr_max_intr_get(intr_handle);
irq_set = (struct vfio_irq_set *)irq_set_buf;
irq_set->argsz = len;
irq_set->start = 0;
- irq_set->count = intr_handle->max_intr;
+ irq_set->count = rte_intr_max_intr_get(intr_handle);
irq_set->flags = VFIO_IRQ_SET_DATA_EVENTFD |
VFIO_IRQ_SET_ACTION_TRIGGER;
irq_set->index = VFIO_PCI_MSIX_IRQ_INDEX;
@@ -116,7 +122,8 @@ irq_init(struct rte_intr_handle *intr_handle)
for (i = 0; i < irq_set->count; i++)
fd_ptr[i] = -1;
- rc = ioctl(intr_handle->vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
+ vfio_dev_fd = rte_intr_dev_fd_get(intr_handle);
+ rc = ioctl(vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
if (rc)
otx2_err("Failed to set irqs vector rc=%d", rc);
@@ -131,7 +138,8 @@ int
otx2_disable_irqs(struct rte_intr_handle *intr_handle)
{
/* Clear max_intr to indicate re-init next time */
- intr_handle->max_intr = 0;
+ if (rte_intr_max_intr_set(intr_handle, 0))
+ return -1;
return rte_intr_disable(intr_handle);
}
@@ -143,42 +151,50 @@ int
otx2_register_irq(struct rte_intr_handle *intr_handle,
rte_intr_callback_fn cb, void *data, unsigned int vec)
{
- struct rte_intr_handle tmp_handle;
- int rc;
+ struct rte_intr_handle *tmp_handle;
+ uint32_t nb_efd, tmp_nb_efd;
+ int rc, fd;
/* If no max_intr read from VFIO */
- if (intr_handle->max_intr == 0) {
+ if (rte_intr_max_intr_get(intr_handle) == 0) {
irq_get_info(intr_handle);
irq_init(intr_handle);
}
- if (vec > intr_handle->max_intr) {
+ if (vec > (uint32_t)rte_intr_max_intr_get(intr_handle)) {
otx2_err("Vector=%d greater than max_intr=%d", vec,
- intr_handle->max_intr);
+ rte_intr_max_intr_get(intr_handle));
return -EINVAL;
}
- tmp_handle = *intr_handle;
+ tmp_handle = intr_handle;
/* Create new eventfd for interrupt vector */
- tmp_handle.fd = eventfd(0, EFD_NONBLOCK | EFD_CLOEXEC);
- if (tmp_handle.fd == -1)
+ fd = eventfd(0, EFD_NONBLOCK | EFD_CLOEXEC);
+ if (fd == -1)
return -ENODEV;
+ if (rte_intr_fd_set(tmp_handle, fd))
+ return errno;
+
/* Register vector interrupt callback */
- rc = rte_intr_callback_register(&tmp_handle, cb, data);
+ rc = rte_intr_callback_register(tmp_handle, cb, data);
if (rc) {
otx2_err("Failed to register vector:0x%x irq callback.", vec);
return rc;
}
- intr_handle->efds[vec] = tmp_handle.fd;
- intr_handle->nb_efd = (vec > intr_handle->nb_efd) ?
- vec : intr_handle->nb_efd;
- if ((intr_handle->nb_efd + 1) > intr_handle->max_intr)
- intr_handle->max_intr = intr_handle->nb_efd + 1;
+ rte_intr_efds_index_set(intr_handle, vec, fd);
+ nb_efd = (vec > (uint32_t)rte_intr_nb_efd_get(intr_handle)) ?
+ vec : (uint32_t)rte_intr_nb_efd_get(intr_handle);
+ rte_intr_nb_efd_set(intr_handle, nb_efd);
+
+ tmp_nb_efd = rte_intr_nb_efd_get(intr_handle) + 1;
+ if (tmp_nb_efd > (uint32_t)rte_intr_max_intr_get(intr_handle))
+ rte_intr_max_intr_set(intr_handle, tmp_nb_efd);
- otx2_base_dbg("Enable vector:0x%x for vfio (efds: %d, max:%d)",
- vec, intr_handle->nb_efd, intr_handle->max_intr);
+ otx2_base_dbg("Enable vector:0x%x for vfio (efds: %d, max:%d)", vec,
+ rte_intr_nb_efd_get(intr_handle),
+ rte_intr_max_intr_get(intr_handle));
/* Enable MSIX vectors to VFIO */
return irq_config(intr_handle, vec);
@@ -192,24 +208,27 @@ void
otx2_unregister_irq(struct rte_intr_handle *intr_handle,
rte_intr_callback_fn cb, void *data, unsigned int vec)
{
- struct rte_intr_handle tmp_handle;
+ struct rte_intr_handle *tmp_handle;
uint8_t retries = 5; /* 5 ms */
- int rc;
+ int rc, fd;
- if (vec > intr_handle->max_intr) {
+ if (vec > (uint32_t)rte_intr_max_intr_get(intr_handle)) {
otx2_err("Error unregistering MSI-X interrupts vec:%d > %d",
- vec, intr_handle->max_intr);
+ vec, rte_intr_max_intr_get(intr_handle));
return;
}
- tmp_handle = *intr_handle;
- tmp_handle.fd = intr_handle->efds[vec];
- if (tmp_handle.fd == -1)
+ tmp_handle = intr_handle;
+ fd = rte_intr_efds_index_get(intr_handle, vec);
+ if (fd == -1)
+ return;
+
+ if (rte_intr_fd_set(tmp_handle, fd))
return;
do {
- /* Un-register callback func from eal lib */
- rc = rte_intr_callback_unregister(&tmp_handle, cb, data);
+ /* Un-register callback func from eal lib */
+ rc = rte_intr_callback_unregister(tmp_handle, cb, data);
/* Retry only if -EAGAIN */
if (rc != -EAGAIN)
break;
@@ -218,18 +237,18 @@ otx2_unregister_irq(struct rte_intr_handle *intr_handle,
} while (retries);
if (rc < 0) {
- otx2_err("Error unregistering MSI-X intr vec %d cb, rc=%d",
- vec, rc);
+ otx2_err("Error unregistering MSI-X vec %d cb, rc=%d", vec, rc);
return;
}
- otx2_base_dbg("Disable vector:0x%x for vfio (efds: %d, max:%d)",
- vec, intr_handle->nb_efd, intr_handle->max_intr);
+ otx2_base_dbg("Disable vector:0x%x for vfio (efds: %d, max:%d)", vec,
+ rte_intr_nb_efd_get(intr_handle),
+ rte_intr_max_intr_get(intr_handle));
- if (intr_handle->efds[vec] != -1)
- close(intr_handle->efds[vec]);
+ if (rte_intr_efds_index_get(intr_handle, vec) != -1)
+ close(rte_intr_efds_index_get(intr_handle, vec));
/* Disable MSIX vectors from VFIO */
- intr_handle->efds[vec] = -1;
+ rte_intr_efds_index_set(intr_handle, vec, -1);
irq_config(intr_handle, vec);
}
diff --git a/drivers/crypto/octeontx2/otx2_cryptodev_hw_access.c b/drivers/crypto/octeontx2/otx2_cryptodev_hw_access.c
index bf90d095fe..d5d6b5bad7 100644
--- a/drivers/crypto/octeontx2/otx2_cryptodev_hw_access.c
+++ b/drivers/crypto/octeontx2/otx2_cryptodev_hw_access.c
@@ -36,7 +36,7 @@ otx2_cpt_lf_err_intr_unregister(const struct rte_cryptodev *dev,
uint16_t msix_off, uintptr_t base)
{
struct rte_pci_device *pci_dev = RTE_DEV_TO_PCI(dev->device);
- struct rte_intr_handle *handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *handle = pci_dev->intr_handle;
/* Disable error interrupts */
otx2_write64(~0ull, base + OTX2_CPT_LF_MISC_INT_ENA_W1C);
@@ -65,7 +65,7 @@ otx2_cpt_lf_err_intr_register(const struct rte_cryptodev *dev,
uint16_t msix_off, uintptr_t base)
{
struct rte_pci_device *pci_dev = RTE_DEV_TO_PCI(dev->device);
- struct rte_intr_handle *handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *handle = pci_dev->intr_handle;
int ret;
/* Disable error interrupts */
diff --git a/drivers/event/octeontx2/otx2_evdev_irq.c b/drivers/event/octeontx2/otx2_evdev_irq.c
index a2033646e6..9b7ad27b04 100644
--- a/drivers/event/octeontx2/otx2_evdev_irq.c
+++ b/drivers/event/octeontx2/otx2_evdev_irq.c
@@ -29,7 +29,7 @@ sso_lf_register_irq(const struct rte_eventdev *event_dev, uint16_t ggrp_msixoff,
uintptr_t base)
{
struct rte_pci_device *pci_dev = RTE_DEV_TO_PCI(event_dev->dev);
- struct rte_intr_handle *handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *handle = pci_dev->intr_handle;
int rc, vec;
vec = ggrp_msixoff + SSO_LF_INT_VEC_GRP;
@@ -66,7 +66,7 @@ ssow_lf_register_irq(const struct rte_eventdev *event_dev, uint16_t gws_msixoff,
uintptr_t base)
{
struct rte_pci_device *pci_dev = RTE_DEV_TO_PCI(event_dev->dev);
- struct rte_intr_handle *handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *handle = pci_dev->intr_handle;
int rc, vec;
vec = gws_msixoff + SSOW_LF_INT_VEC_IOP;
@@ -86,7 +86,7 @@ sso_lf_unregister_irq(const struct rte_eventdev *event_dev,
uint16_t ggrp_msixoff, uintptr_t base)
{
struct rte_pci_device *pci_dev = RTE_DEV_TO_PCI(event_dev->dev);
- struct rte_intr_handle *handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *handle = pci_dev->intr_handle;
int vec;
vec = ggrp_msixoff + SSO_LF_INT_VEC_GRP;
@@ -101,7 +101,7 @@ ssow_lf_unregister_irq(const struct rte_eventdev *event_dev,
uint16_t gws_msixoff, uintptr_t base)
{
struct rte_pci_device *pci_dev = RTE_DEV_TO_PCI(event_dev->dev);
- struct rte_intr_handle *handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *handle = pci_dev->intr_handle;
int vec;
vec = gws_msixoff + SSOW_LF_INT_VEC_IOP;
@@ -198,7 +198,7 @@ static int
tim_lf_register_irq(struct rte_pci_device *pci_dev, uint16_t tim_msixoff,
uintptr_t base)
{
- struct rte_intr_handle *handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *handle = pci_dev->intr_handle;
int rc, vec;
vec = tim_msixoff + TIM_LF_INT_VEC_NRSPERR_INT;
@@ -226,7 +226,7 @@ static void
tim_lf_unregister_irq(struct rte_pci_device *pci_dev, uint16_t tim_msixoff,
uintptr_t base)
{
- struct rte_intr_handle *handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *handle = pci_dev->intr_handle;
int vec;
vec = tim_msixoff + TIM_LF_INT_VEC_NRSPERR_INT;
diff --git a/drivers/mempool/octeontx2/otx2_mempool.c b/drivers/mempool/octeontx2/otx2_mempool.c
index fb630fecf8..f63dc06ef2 100644
--- a/drivers/mempool/octeontx2/otx2_mempool.c
+++ b/drivers/mempool/octeontx2/otx2_mempool.c
@@ -301,7 +301,7 @@ otx2_npa_lf_init(struct rte_pci_device *pci_dev, void *otx2_dev)
lf->pf_func = dev->pf_func;
lf->npa_msixoff = npa_msixoff;
- lf->intr_handle = &pci_dev->intr_handle;
+ lf->intr_handle = pci_dev->intr_handle;
lf->pci_dev = pci_dev;
idev->npa_pf_func = dev->pf_func;
diff --git a/drivers/net/atlantic/atl_ethdev.c b/drivers/net/atlantic/atl_ethdev.c
index 5a198f53fc..8ac30b75cc 100644
--- a/drivers/net/atlantic/atl_ethdev.c
+++ b/drivers/net/atlantic/atl_ethdev.c
@@ -359,7 +359,7 @@ eth_atl_dev_init(struct rte_eth_dev *eth_dev)
{
struct atl_adapter *adapter = eth_dev->data->dev_private;
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
struct aq_hw_s *hw = ATL_DEV_PRIVATE_TO_HW(eth_dev->data->dev_private);
int err = 0;
@@ -478,7 +478,7 @@ atl_dev_start(struct rte_eth_dev *dev)
{
struct aq_hw_s *hw = ATL_DEV_PRIVATE_TO_HW(dev->data->dev_private);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
uint32_t intr_vector = 0;
int status;
int err;
@@ -524,10 +524,9 @@ atl_dev_start(struct rte_eth_dev *dev)
}
}
- if (rte_intr_dp_is_en(intr_handle) && !intr_handle->intr_vec) {
- intr_handle->intr_vec = rte_zmalloc("intr_vec",
- dev->data->nb_rx_queues * sizeof(int), 0);
- if (intr_handle->intr_vec == NULL) {
+ if (rte_intr_dp_is_en(intr_handle)) {
+ if (rte_intr_vec_list_alloc(intr_handle, "intr_vec",
+ dev->data->nb_rx_queues)) {
PMD_INIT_LOG(ERR, "Failed to allocate %d rx_queues"
" intr_vec", dev->data->nb_rx_queues);
return -ENOMEM;
@@ -607,7 +606,7 @@ atl_dev_stop(struct rte_eth_dev *dev)
struct aq_hw_s *hw =
ATL_DEV_PRIVATE_TO_HW(dev->data->dev_private);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
PMD_INIT_FUNC_TRACE();
dev->data->dev_started = 0;
@@ -637,10 +636,7 @@ atl_dev_stop(struct rte_eth_dev *dev)
/* Clean datapath event and queue/vec mapping */
rte_intr_efd_disable(intr_handle);
- if (intr_handle->intr_vec != NULL) {
- rte_free(intr_handle->intr_vec);
- intr_handle->intr_vec = NULL;
- }
+ rte_intr_vec_list_free(intr_handle);
return 0;
}
@@ -691,7 +687,7 @@ static int
atl_dev_close(struct rte_eth_dev *dev)
{
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
struct aq_hw_s *hw;
int ret;
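The atlantic conversion above is the common Rx-interrupt pattern repeated in most ethdev drivers below: instead of zmalloc'ing and freeing intr_vec by hand, the driver asks the interrupt layer to size and own the vector list. A condensed sketch of the start/stop halves, assuming intr_handle and dev as in the hunks:

	/* dev_start: allocate one vector entry per Rx queue */
	if (rte_intr_dp_is_en(intr_handle)) {
		if (rte_intr_vec_list_alloc(intr_handle, "intr_vec",
					    dev->data->nb_rx_queues))
			return -ENOMEM;
	}

	/* dev_stop: release the event fds and the vector list together */
	rte_intr_efd_disable(intr_handle);
	rte_intr_vec_list_free(intr_handle);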
diff --git a/drivers/net/avp/avp_ethdev.c b/drivers/net/avp/avp_ethdev.c
index 932ec90265..988602ee09 100644
--- a/drivers/net/avp/avp_ethdev.c
+++ b/drivers/net/avp/avp_ethdev.c
@@ -711,7 +711,7 @@ avp_dev_interrupt_handler(void *data)
status);
/* re-enable UIO interrupt handling */
- ret = rte_intr_ack(&pci_dev->intr_handle);
+ ret = rte_intr_ack(pci_dev->intr_handle);
if (ret < 0) {
PMD_DRV_LOG(ERR, "Failed to re-enable UIO interrupts, ret=%d\n",
ret);
@@ -730,7 +730,7 @@ avp_dev_enable_interrupts(struct rte_eth_dev *eth_dev)
return -EINVAL;
/* enable UIO interrupt handling */
- ret = rte_intr_enable(&pci_dev->intr_handle);
+ ret = rte_intr_enable(pci_dev->intr_handle);
if (ret < 0) {
PMD_DRV_LOG(ERR, "Failed to enable UIO interrupts, ret=%d\n",
ret);
@@ -759,7 +759,7 @@ avp_dev_disable_interrupts(struct rte_eth_dev *eth_dev)
RTE_PTR_ADD(registers, RTE_AVP_INTERRUPT_MASK_OFFSET));
/* enable UIO interrupt handling */
- ret = rte_intr_disable(&pci_dev->intr_handle);
+ ret = rte_intr_disable(pci_dev->intr_handle);
if (ret < 0) {
PMD_DRV_LOG(ERR, "Failed to disable UIO interrupts, ret=%d\n",
ret);
@@ -776,7 +776,7 @@ avp_dev_setup_interrupts(struct rte_eth_dev *eth_dev)
int ret;
/* register a callback handler with UIO for interrupt notifications */
- ret = rte_intr_callback_register(&pci_dev->intr_handle,
+ ret = rte_intr_callback_register(pci_dev->intr_handle,
avp_dev_interrupt_handler,
(void *)eth_dev);
if (ret < 0) {
diff --git a/drivers/net/axgbe/axgbe_ethdev.c b/drivers/net/axgbe/axgbe_ethdev.c
index 0250256830..a46480fc97 100644
--- a/drivers/net/axgbe/axgbe_ethdev.c
+++ b/drivers/net/axgbe/axgbe_ethdev.c
@@ -313,7 +313,7 @@ axgbe_dev_interrupt_handler(void *param)
}
}
/* Unmask interrupts since disabled after generation */
- rte_intr_ack(&pdata->pci_dev->intr_handle);
+ rte_intr_ack(pdata->pci_dev->intr_handle);
}
/*
@@ -374,7 +374,7 @@ axgbe_dev_start(struct rte_eth_dev *dev)
}
/* enable uio/vfio intr/eventfd mapping */
- rte_intr_enable(&pdata->pci_dev->intr_handle);
+ rte_intr_enable(pdata->pci_dev->intr_handle);
/* phy start*/
pdata->phy_if.phy_start(pdata);
@@ -406,7 +406,7 @@ axgbe_dev_stop(struct rte_eth_dev *dev)
PMD_INIT_FUNC_TRACE();
- rte_intr_disable(&pdata->pci_dev->intr_handle);
+ rte_intr_disable(pdata->pci_dev->intr_handle);
if (rte_bit_relaxed_get32(AXGBE_STOPPED, &pdata->dev_state))
return 0;
@@ -2311,7 +2311,7 @@ eth_axgbe_dev_init(struct rte_eth_dev *eth_dev)
return ret;
}
- rte_intr_callback_register(&pci_dev->intr_handle,
+ rte_intr_callback_register(pci_dev->intr_handle,
axgbe_dev_interrupt_handler,
(void *)eth_dev);
PMD_INIT_LOG(DEBUG, "port %d vendorID=0x%x deviceID=0x%x",
@@ -2335,8 +2335,8 @@ axgbe_dev_close(struct rte_eth_dev *eth_dev)
axgbe_dev_clear_queues(eth_dev);
/* disable uio intr before callback unregister */
- rte_intr_disable(&pci_dev->intr_handle);
- rte_intr_callback_unregister(&pci_dev->intr_handle,
+ rte_intr_disable(pci_dev->intr_handle);
+ rte_intr_callback_unregister(pci_dev->intr_handle,
axgbe_dev_interrupt_handler,
(void *)eth_dev);
diff --git a/drivers/net/axgbe/axgbe_mdio.c b/drivers/net/axgbe/axgbe_mdio.c
index 4f98e695ae..35ffda84f1 100644
--- a/drivers/net/axgbe/axgbe_mdio.c
+++ b/drivers/net/axgbe/axgbe_mdio.c
@@ -933,7 +933,7 @@ static int __axgbe_phy_config_aneg(struct axgbe_port *pdata)
}
/* Disable auto-negotiation interrupt */
- rte_intr_disable(&pdata->pci_dev->intr_handle);
+ rte_intr_disable(pdata->pci_dev->intr_handle);
/* Start auto-negotiation in a supported mode */
if (axgbe_use_mode(pdata, AXGBE_MODE_KR)) {
@@ -951,7 +951,7 @@ static int __axgbe_phy_config_aneg(struct axgbe_port *pdata)
} else if (axgbe_use_mode(pdata, AXGBE_MODE_SGMII_100)) {
axgbe_set_mode(pdata, AXGBE_MODE_SGMII_100);
} else {
- rte_intr_enable(&pdata->pci_dev->intr_handle);
+ rte_intr_enable(pdata->pci_dev->intr_handle);
return -EINVAL;
}
@@ -964,7 +964,7 @@ static int __axgbe_phy_config_aneg(struct axgbe_port *pdata)
pdata->kx_state = AXGBE_RX_BPA;
/* Re-enable auto-negotiation interrupt */
- rte_intr_enable(&pdata->pci_dev->intr_handle);
+ rte_intr_enable(pdata->pci_dev->intr_handle);
axgbe_an37_enable_interrupts(pdata);
axgbe_an_init(pdata);
diff --git a/drivers/net/bnx2x/bnx2x_ethdev.c b/drivers/net/bnx2x/bnx2x_ethdev.c
index 567ea23828..ad467eab0d 100644
--- a/drivers/net/bnx2x/bnx2x_ethdev.c
+++ b/drivers/net/bnx2x/bnx2x_ethdev.c
@@ -134,7 +134,7 @@ bnx2x_interrupt_handler(void *param)
PMD_DEBUG_PERIODIC_LOG(INFO, sc, "Interrupt handled");
bnx2x_interrupt_action(dev, 1);
- rte_intr_ack(&sc->pci_dev->intr_handle);
+ rte_intr_ack(sc->pci_dev->intr_handle);
}
static void bnx2x_periodic_start(void *param)
@@ -230,10 +230,10 @@ bnx2x_dev_start(struct rte_eth_dev *dev)
}
if (IS_PF(sc)) {
- rte_intr_callback_register(&sc->pci_dev->intr_handle,
+ rte_intr_callback_register(sc->pci_dev->intr_handle,
bnx2x_interrupt_handler, (void *)dev);
- if (rte_intr_enable(&sc->pci_dev->intr_handle))
+ if (rte_intr_enable(sc->pci_dev->intr_handle))
PMD_DRV_LOG(ERR, sc, "rte_intr_enable failed");
}
@@ -258,8 +258,8 @@ bnx2x_dev_stop(struct rte_eth_dev *dev)
bnx2x_dev_rxtx_init_dummy(dev);
if (IS_PF(sc)) {
- rte_intr_disable(&sc->pci_dev->intr_handle);
- rte_intr_callback_unregister(&sc->pci_dev->intr_handle,
+ rte_intr_disable(sc->pci_dev->intr_handle);
+ rte_intr_callback_unregister(sc->pci_dev->intr_handle,
bnx2x_interrupt_handler, (void *)dev);
/* stop the periodic callout */
diff --git a/drivers/net/bnxt/bnxt_ethdev.c b/drivers/net/bnxt/bnxt_ethdev.c
index f385723a9f..5e19f6d2ee 100644
--- a/drivers/net/bnxt/bnxt_ethdev.c
+++ b/drivers/net/bnxt/bnxt_ethdev.c
@@ -729,7 +729,7 @@ static int bnxt_alloc_prev_ring_stats(struct bnxt *bp)
static int bnxt_start_nic(struct bnxt *bp)
{
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(bp->eth_dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
uint32_t intr_vector = 0;
uint32_t queue_id, base = BNXT_MISC_VEC_ID;
uint32_t vec = BNXT_MISC_VEC_ID;
@@ -846,26 +846,24 @@ static int bnxt_start_nic(struct bnxt *bp)
return rc;
}
- if (rte_intr_dp_is_en(intr_handle) && !intr_handle->intr_vec) {
- intr_handle->intr_vec =
- rte_zmalloc("intr_vec",
- bp->eth_dev->data->nb_rx_queues *
- sizeof(int), 0);
- if (intr_handle->intr_vec == NULL) {
+ if (rte_intr_dp_is_en(intr_handle)) {
+ if (rte_intr_vec_list_alloc(intr_handle, "intr_vec",
+ bp->eth_dev->data->nb_rx_queues)) {
PMD_DRV_LOG(ERR, "Failed to allocate %d rx_queues"
" intr_vec", bp->eth_dev->data->nb_rx_queues);
rc = -ENOMEM;
goto err_out;
}
- PMD_DRV_LOG(DEBUG, "intr_handle->intr_vec = %p "
- "intr_handle->nb_efd = %d intr_handle->max_intr = %d\n",
- intr_handle->intr_vec, intr_handle->nb_efd,
- intr_handle->max_intr);
+ PMD_DRV_LOG(DEBUG, "intr_handle->nb_efd = %d "
+ "intr_handle->max_intr = %d\n",
+ rte_intr_nb_efd_get(intr_handle),
+ rte_intr_max_intr_get(intr_handle));
for (queue_id = 0; queue_id < bp->eth_dev->data->nb_rx_queues;
queue_id++) {
- intr_handle->intr_vec[queue_id] =
- vec + BNXT_RX_VEC_START;
- if (vec < base + intr_handle->nb_efd - 1)
+ rte_intr_vec_list_index_set(intr_handle,
+ queue_id, vec + BNXT_RX_VEC_START);
+ if (vec < base + rte_intr_nb_efd_get(intr_handle)
+ - 1)
vec++;
}
}
@@ -1473,7 +1471,7 @@ static int bnxt_dev_stop(struct rte_eth_dev *eth_dev)
{
struct bnxt *bp = eth_dev->data->dev_private;
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
struct rte_eth_link link;
int ret;
@@ -1515,10 +1513,7 @@ static int bnxt_dev_stop(struct rte_eth_dev *eth_dev)
/* Clean queue intr-vector mapping */
rte_intr_efd_disable(intr_handle);
- if (intr_handle->intr_vec != NULL) {
- rte_free(intr_handle->intr_vec);
- intr_handle->intr_vec = NULL;
- }
+ rte_intr_vec_list_free(intr_handle);
bnxt_hwrm_port_clr_stats(bp);
bnxt_free_tx_mbufs(bp);
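bnxt also shows the mapping loop rewritten around the accessors: the queue-to-vector table is written with rte_intr_vec_list_index_set() and the efd count is read back with rte_intr_nb_efd_get() instead of dereferencing the handle. Roughly, with queue_id/vec/base as in the hunk above and nb_rx_queues standing in for bp->eth_dev->data->nb_rx_queues:

	for (queue_id = 0; queue_id < nb_rx_queues; queue_id++) {
		rte_intr_vec_list_index_set(intr_handle, queue_id,
					    vec + BNXT_RX_VEC_START);
		/* the last efd absorbs any remaining queues */
		if (vec < base + rte_intr_nb_efd_get(intr_handle) - 1)
			vec++;
	}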
diff --git a/drivers/net/bnxt/bnxt_irq.c b/drivers/net/bnxt/bnxt_irq.c
index 122a1f9908..508abfc844 100644
--- a/drivers/net/bnxt/bnxt_irq.c
+++ b/drivers/net/bnxt/bnxt_irq.c
@@ -67,7 +67,7 @@ void bnxt_int_handler(void *param)
int bnxt_free_int(struct bnxt *bp)
{
- struct rte_intr_handle *intr_handle = &bp->pdev->intr_handle;
+ struct rte_intr_handle *intr_handle = bp->pdev->intr_handle;
struct bnxt_irq *irq = bp->irq_tbl;
int rc = 0;
@@ -170,7 +170,7 @@ int bnxt_setup_int(struct bnxt *bp)
int bnxt_request_int(struct bnxt *bp)
{
- struct rte_intr_handle *intr_handle = &bp->pdev->intr_handle;
+ struct rte_intr_handle *intr_handle = bp->pdev->intr_handle;
struct bnxt_irq *irq = bp->irq_tbl;
int rc = 0;
diff --git a/drivers/net/dpaa/dpaa_ethdev.c b/drivers/net/dpaa/dpaa_ethdev.c
index c79cdb8d8a..31f9976854 100644
--- a/drivers/net/dpaa/dpaa_ethdev.c
+++ b/drivers/net/dpaa/dpaa_ethdev.c
@@ -208,7 +208,7 @@ dpaa_eth_dev_configure(struct rte_eth_dev *dev)
PMD_INIT_FUNC_TRACE();
dpaa_dev = container_of(rdev, struct rte_dpaa_device, device);
- intr_handle = &dpaa_dev->intr_handle;
+ intr_handle = dpaa_dev->intr_handle;
__fif = container_of(fif, struct __fman_if, __if);
/* Rx offloads which are enabled by default */
@@ -255,13 +255,14 @@ dpaa_eth_dev_configure(struct rte_eth_dev *dev)
}
/* if the interrupts were configured on this devices*/
- if (intr_handle && intr_handle->fd) {
+ if (intr_handle && rte_intr_fd_get(intr_handle)) {
if (dev->data->dev_conf.intr_conf.lsc != 0)
rte_intr_callback_register(intr_handle,
dpaa_interrupt_handler,
(void *)dev);
- ret = dpaa_intr_enable(__fif->node_name, intr_handle->fd);
+ ret = dpaa_intr_enable(__fif->node_name,
+ rte_intr_fd_get(intr_handle));
if (ret) {
if (dev->data->dev_conf.intr_conf.lsc != 0) {
rte_intr_callback_unregister(intr_handle,
@@ -368,9 +369,10 @@ static void dpaa_interrupt_handler(void *param)
int bytes_read;
dpaa_dev = container_of(rdev, struct rte_dpaa_device, device);
- intr_handle = &dpaa_dev->intr_handle;
+ intr_handle = dpaa_dev->intr_handle;
- bytes_read = read(intr_handle->fd, &buf, sizeof(uint64_t));
+ bytes_read = read(rte_intr_fd_get(intr_handle), &buf,
+ sizeof(uint64_t));
if (bytes_read < 0)
DPAA_PMD_ERR("Error reading eventfd\n");
dpaa_eth_link_update(dev, 0);
@@ -440,7 +442,7 @@ static int dpaa_eth_dev_close(struct rte_eth_dev *dev)
}
dpaa_dev = container_of(rdev, struct rte_dpaa_device, device);
- intr_handle = &dpaa_dev->intr_handle;
+ intr_handle = dpaa_dev->intr_handle;
__fif = container_of(fif, struct __fman_if, __if);
ret = dpaa_eth_dev_stop(dev);
@@ -449,7 +451,7 @@ static int dpaa_eth_dev_close(struct rte_eth_dev *dev)
if (link->link_status && !link->link_autoneg)
dpaa_restart_link_autoneg(__fif->node_name);
- if (intr_handle && intr_handle->fd &&
+ if (intr_handle && rte_intr_fd_get(intr_handle) &&
dev->data->dev_conf.intr_conf.lsc != 0) {
dpaa_intr_disable(__fif->node_name);
rte_intr_callback_unregister(intr_handle,
@@ -1078,20 +1080,33 @@ int dpaa_eth_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
dpaa_dev = container_of(rdev, struct rte_dpaa_device,
device);
- dev->intr_handle = &dpaa_dev->intr_handle;
- dev->intr_handle->intr_vec = rte_zmalloc(NULL,
- dpaa_push_mode_max_queue, 0);
- if (!dev->intr_handle->intr_vec) {
+ dev->intr_handle = dpaa_dev->intr_handle;
+ if (rte_intr_vec_list_alloc(dev->intr_handle,
+ NULL, dpaa_push_mode_max_queue)) {
DPAA_PMD_ERR("intr_vec alloc failed");
return -ENOMEM;
}
- dev->intr_handle->nb_efd = dpaa_push_mode_max_queue;
- dev->intr_handle->max_intr = dpaa_push_mode_max_queue;
+ if (rte_intr_nb_efd_set(dev->intr_handle,
+ dpaa_push_mode_max_queue))
+ return -rte_errno;
+
+ if (rte_intr_max_intr_set(dev->intr_handle,
+ dpaa_push_mode_max_queue))
+ return -rte_errno;
}
- dev->intr_handle->type = RTE_INTR_HANDLE_EXT;
- dev->intr_handle->intr_vec[queue_idx] = queue_idx + 1;
- dev->intr_handle->efds[queue_idx] = q_fd;
+ if (rte_intr_type_set(dev->intr_handle,
+ RTE_INTR_HANDLE_EXT))
+ return -rte_errno;
+
+ if (rte_intr_vec_list_index_set(dev->intr_handle,
+ queue_idx, queue_idx + 1))
+ return -rte_errno;
+
+ if (rte_intr_efds_index_set(dev->intr_handle, queue_idx,
+ q_fd))
+ return -rte_errno;
+
rxq->q_fd = q_fd;
}
rxq->bp_array = rte_dpaa_bpid_info;
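dpaa is one of the few drivers that builds an external (RTE_INTR_HANDLE_EXT) handle itself, so it exercises most of the setters: type, nb_efd, max_intr, the per-queue vector and the per-queue eventfd all go through accessors, each of which can fail and is checked. A compact sketch of that sequence, using queue_idx and q_fd as in the hunk:

	if (rte_intr_type_set(dev->intr_handle, RTE_INTR_HANDLE_EXT))
		return -rte_errno;
	if (rte_intr_nb_efd_set(dev->intr_handle, dpaa_push_mode_max_queue))
		return -rte_errno;
	if (rte_intr_max_intr_set(dev->intr_handle, dpaa_push_mode_max_queue))
		return -rte_errno;
	if (rte_intr_vec_list_index_set(dev->intr_handle, queue_idx, queue_idx + 1))
		return -rte_errno;
	if (rte_intr_efds_index_set(dev->intr_handle, queue_idx, q_fd))
		return -rte_errno;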
diff --git a/drivers/net/dpaa2/dpaa2_ethdev.c b/drivers/net/dpaa2/dpaa2_ethdev.c
index a0270e7852..b4552f2b45 100644
--- a/drivers/net/dpaa2/dpaa2_ethdev.c
+++ b/drivers/net/dpaa2/dpaa2_ethdev.c
@@ -1145,7 +1145,7 @@ dpaa2_dev_start(struct rte_eth_dev *dev)
struct rte_intr_handle *intr_handle;
dpaa2_dev = container_of(rdev, struct rte_dpaa2_device, device);
- intr_handle = &dpaa2_dev->intr_handle;
+ intr_handle = dpaa2_dev->intr_handle;
PMD_INIT_FUNC_TRACE();
@@ -1216,8 +1216,8 @@ dpaa2_dev_start(struct rte_eth_dev *dev)
}
/* if the interrupts were configured on this devices*/
- if (intr_handle && (intr_handle->fd) &&
- (dev->data->dev_conf.intr_conf.lsc != 0)) {
+ if (intr_handle && rte_intr_fd_get(intr_handle) &&
+ dev->data->dev_conf.intr_conf.lsc != 0) {
/* Registering LSC interrupt handler */
rte_intr_callback_register(intr_handle,
dpaa2_interrupt_handler,
@@ -1256,8 +1256,8 @@ dpaa2_dev_stop(struct rte_eth_dev *dev)
PMD_INIT_FUNC_TRACE();
/* reset interrupt callback */
- if (intr_handle && (intr_handle->fd) &&
- (dev->data->dev_conf.intr_conf.lsc != 0)) {
+ if (intr_handle && rte_intr_fd_get(intr_handle) &&
+ dev->data->dev_conf.intr_conf.lsc != 0) {
/*disable dpni irqs */
dpaa2_eth_setup_irqs(dev, 0);
diff --git a/drivers/net/e1000/em_ethdev.c b/drivers/net/e1000/em_ethdev.c
index 73152dec6e..ec433ebcae 100644
--- a/drivers/net/e1000/em_ethdev.c
+++ b/drivers/net/e1000/em_ethdev.c
@@ -237,7 +237,7 @@ static int
eth_em_dev_init(struct rte_eth_dev *eth_dev)
{
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
struct e1000_adapter *adapter =
E1000_DEV_PRIVATE(eth_dev->data->dev_private);
struct e1000_hw *hw =
@@ -523,7 +523,7 @@ eth_em_start(struct rte_eth_dev *dev)
struct e1000_hw *hw =
E1000_DEV_PRIVATE_TO_HW(dev->data->dev_private);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
int ret, mask;
uint32_t intr_vector = 0;
uint32_t *speeds;
@@ -573,12 +573,10 @@ eth_em_start(struct rte_eth_dev *dev)
}
if (rte_intr_dp_is_en(intr_handle)) {
- intr_handle->intr_vec =
- rte_zmalloc("intr_vec",
- dev->data->nb_rx_queues * sizeof(int), 0);
- if (intr_handle->intr_vec == NULL) {
+ if (rte_intr_vec_list_alloc(intr_handle, "intr_vec",
+ dev->data->nb_rx_queues)) {
PMD_INIT_LOG(ERR, "Failed to allocate %d rx_queues"
- " intr_vec", dev->data->nb_rx_queues);
+ " intr_vec", dev->data->nb_rx_queues);
return -ENOMEM;
}
@@ -716,7 +714,7 @@ eth_em_stop(struct rte_eth_dev *dev)
struct rte_eth_link link;
struct e1000_hw *hw = E1000_DEV_PRIVATE_TO_HW(dev->data->dev_private);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
dev->data->dev_started = 0;
@@ -750,10 +748,7 @@ eth_em_stop(struct rte_eth_dev *dev)
/* Clean datapath event and queue/vec mapping */
rte_intr_efd_disable(intr_handle);
- if (intr_handle->intr_vec != NULL) {
- rte_free(intr_handle->intr_vec);
- intr_handle->intr_vec = NULL;
- }
+ rte_intr_vec_list_free(intr_handle);
return 0;
}
@@ -765,7 +760,7 @@ eth_em_close(struct rte_eth_dev *dev)
struct e1000_adapter *adapter =
E1000_DEV_PRIVATE(dev->data->dev_private);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
int ret;
if (rte_eal_process_type() != RTE_PROC_PRIMARY)
@@ -1006,7 +1001,7 @@ eth_em_rx_queue_intr_enable(struct rte_eth_dev *dev, __rte_unused uint16_t queue
{
struct e1000_hw *hw = E1000_DEV_PRIVATE_TO_HW(dev->data->dev_private);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
em_rxq_intr_enable(hw);
rte_intr_ack(intr_handle);
diff --git a/drivers/net/e1000/igb_ethdev.c b/drivers/net/e1000/igb_ethdev.c
index dbe811a1ad..a7fb140f56 100644
--- a/drivers/net/e1000/igb_ethdev.c
+++ b/drivers/net/e1000/igb_ethdev.c
@@ -515,7 +515,7 @@ igb_intr_enable(struct rte_eth_dev *dev)
struct e1000_hw *hw =
E1000_DEV_PRIVATE_TO_HW(dev->data->dev_private);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
if (rte_intr_allow_others(intr_handle) &&
dev->data->dev_conf.intr_conf.lsc != 0) {
@@ -532,7 +532,7 @@ igb_intr_disable(struct rte_eth_dev *dev)
struct e1000_hw *hw =
E1000_DEV_PRIVATE_TO_HW(dev->data->dev_private);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
if (rte_intr_allow_others(intr_handle) &&
dev->data->dev_conf.intr_conf.lsc != 0) {
@@ -851,12 +851,12 @@ eth_igb_dev_init(struct rte_eth_dev *eth_dev)
eth_dev->data->port_id, pci_dev->id.vendor_id,
pci_dev->id.device_id);
- rte_intr_callback_register(&pci_dev->intr_handle,
+ rte_intr_callback_register(pci_dev->intr_handle,
eth_igb_interrupt_handler,
(void *)eth_dev);
/* enable uio/vfio intr/eventfd mapping */
- rte_intr_enable(&pci_dev->intr_handle);
+ rte_intr_enable(pci_dev->intr_handle);
/* enable support intr */
igb_intr_enable(eth_dev);
@@ -992,7 +992,7 @@ eth_igbvf_dev_init(struct rte_eth_dev *eth_dev)
eth_dev->data->port_id, pci_dev->id.vendor_id,
pci_dev->id.device_id, "igb_mac_82576_vf");
- intr_handle = &pci_dev->intr_handle;
+ intr_handle = pci_dev->intr_handle;
rte_intr_callback_register(intr_handle,
eth_igbvf_interrupt_handler, eth_dev);
@@ -1196,7 +1196,7 @@ eth_igb_start(struct rte_eth_dev *dev)
struct e1000_adapter *adapter =
E1000_DEV_PRIVATE(dev->data->dev_private);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
int ret, mask;
uint32_t intr_vector = 0;
uint32_t ctrl_ext;
@@ -1255,11 +1255,10 @@ eth_igb_start(struct rte_eth_dev *dev)
return -1;
}
- if (rte_intr_dp_is_en(intr_handle) && !intr_handle->intr_vec) {
- intr_handle->intr_vec =
- rte_zmalloc("intr_vec",
- dev->data->nb_rx_queues * sizeof(int), 0);
- if (intr_handle->intr_vec == NULL) {
+ /* Allocate the vector list */
+ if (rte_intr_dp_is_en(intr_handle)) {
+ if (rte_intr_vec_list_alloc(intr_handle, "intr_vec",
+ dev->data->nb_rx_queues)) {
PMD_INIT_LOG(ERR, "Failed to allocate %d rx_queues"
" intr_vec", dev->data->nb_rx_queues);
return -ENOMEM;
@@ -1418,7 +1417,7 @@ eth_igb_stop(struct rte_eth_dev *dev)
struct e1000_hw *hw = E1000_DEV_PRIVATE_TO_HW(dev->data->dev_private);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
struct rte_eth_link link;
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
struct e1000_adapter *adapter =
E1000_DEV_PRIVATE(dev->data->dev_private);
@@ -1462,10 +1461,7 @@ eth_igb_stop(struct rte_eth_dev *dev)
/* Clean datapath event and queue/vec mapping */
rte_intr_efd_disable(intr_handle);
- if (intr_handle->intr_vec != NULL) {
- rte_free(intr_handle->intr_vec);
- intr_handle->intr_vec = NULL;
- }
+ rte_intr_vec_list_free(intr_handle);
adapter->stopped = true;
dev->data->dev_started = 0;
@@ -1505,7 +1501,7 @@ eth_igb_close(struct rte_eth_dev *dev)
struct e1000_hw *hw = E1000_DEV_PRIVATE_TO_HW(dev->data->dev_private);
struct rte_eth_link link;
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
struct e1000_filter_info *filter_info =
E1000_DEV_PRIVATE_TO_FILTER_INFO(dev->data->dev_private);
int ret;
@@ -1531,10 +1527,8 @@ eth_igb_close(struct rte_eth_dev *dev)
igb_dev_free_queues(dev);
- if (intr_handle->intr_vec) {
- rte_free(intr_handle->intr_vec);
- intr_handle->intr_vec = NULL;
- }
+ /* Cleanup vector list */
+ rte_intr_vec_list_free(intr_handle);
memset(&link, 0, sizeof(link));
rte_eth_linkstatus_set(dev, &link);
@@ -2771,7 +2765,7 @@ static int eth_igb_rxq_interrupt_setup(struct rte_eth_dev *dev)
struct e1000_hw *hw =
E1000_DEV_PRIVATE_TO_HW(dev->data->dev_private);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
int misc_shift = rte_intr_allow_others(intr_handle) ? 1 : 0;
struct rte_eth_dev_info dev_info;
@@ -3288,7 +3282,7 @@ igbvf_dev_start(struct rte_eth_dev *dev)
struct e1000_adapter *adapter =
E1000_DEV_PRIVATE(dev->data->dev_private);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
int ret;
uint32_t intr_vector = 0;
@@ -3319,11 +3313,10 @@ igbvf_dev_start(struct rte_eth_dev *dev)
return ret;
}
- if (rte_intr_dp_is_en(intr_handle) && !intr_handle->intr_vec) {
- intr_handle->intr_vec =
- rte_zmalloc("intr_vec",
- dev->data->nb_rx_queues * sizeof(int), 0);
- if (!intr_handle->intr_vec) {
+ /* Allocate the vector list */
+ if (rte_intr_dp_is_en(intr_handle)) {
+ if (rte_intr_vec_list_alloc(intr_handle, "intr_vec",
+ dev->data->nb_rx_queues)) {
PMD_INIT_LOG(ERR, "Failed to allocate %d rx_queues"
" intr_vec", dev->data->nb_rx_queues);
return -ENOMEM;
@@ -3345,7 +3338,7 @@ static int
igbvf_dev_stop(struct rte_eth_dev *dev)
{
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
struct e1000_adapter *adapter =
E1000_DEV_PRIVATE(dev->data->dev_private);
@@ -3369,10 +3362,9 @@ igbvf_dev_stop(struct rte_eth_dev *dev)
/* Clean datapath event and queue/vec mapping */
rte_intr_efd_disable(intr_handle);
- if (intr_handle->intr_vec) {
- rte_free(intr_handle->intr_vec);
- intr_handle->intr_vec = NULL;
- }
+
+ /* Clean vector list */
+ rte_intr_vec_list_free(intr_handle);
adapter->stopped = true;
dev->data->dev_started = 0;
@@ -3410,7 +3402,7 @@ igbvf_dev_close(struct rte_eth_dev *dev)
memset(&addr, 0, sizeof(addr));
igbvf_default_mac_addr_set(dev, &addr);
- rte_intr_callback_unregister(&pci_dev->intr_handle,
+ rte_intr_callback_unregister(pci_dev->intr_handle,
eth_igbvf_interrupt_handler,
(void *)dev);
@@ -5112,7 +5104,7 @@ eth_igb_rx_queue_intr_disable(struct rte_eth_dev *dev, uint16_t queue_id)
struct e1000_hw *hw =
E1000_DEV_PRIVATE_TO_HW(dev->data->dev_private);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
uint32_t vec = E1000_MISC_VEC_ID;
if (rte_intr_allow_others(intr_handle))
@@ -5132,7 +5124,7 @@ eth_igb_rx_queue_intr_enable(struct rte_eth_dev *dev, uint16_t queue_id)
struct e1000_hw *hw =
E1000_DEV_PRIVATE_TO_HW(dev->data->dev_private);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
uint32_t vec = E1000_MISC_VEC_ID;
if (rte_intr_allow_others(intr_handle))
@@ -5210,7 +5202,7 @@ eth_igb_configure_msix_intr(struct rte_eth_dev *dev)
uint32_t base = E1000_MISC_VEC_ID;
uint32_t misc_shift = 0;
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
/* won't configure msix register if no mapping is done
* between intr vector and event fd
@@ -5251,8 +5243,9 @@ eth_igb_configure_msix_intr(struct rte_eth_dev *dev)
E1000_WRITE_REG(hw, E1000_GPIE, E1000_GPIE_MSIX_MODE |
E1000_GPIE_PBA | E1000_GPIE_EIAME |
E1000_GPIE_NSICR);
- intr_mask = RTE_LEN2MASK(intr_handle->nb_efd, uint32_t) <<
- misc_shift;
+ intr_mask =
+ RTE_LEN2MASK(rte_intr_nb_efd_get(intr_handle),
+ uint32_t) << misc_shift;
if (dev->data->dev_conf.intr_conf.lsc != 0)
intr_mask |= (1 << IGB_MSIX_OTHER_INTR_VEC);
@@ -5270,8 +5263,8 @@ eth_igb_configure_msix_intr(struct rte_eth_dev *dev)
/* use EIAM to auto-mask when MSI-X interrupt
* is asserted, this saves a register write for every interrupt
*/
- intr_mask = RTE_LEN2MASK(intr_handle->nb_efd, uint32_t) <<
- misc_shift;
+ intr_mask = RTE_LEN2MASK(rte_intr_nb_efd_get(intr_handle),
+ uint32_t) << misc_shift;
if (dev->data->dev_conf.intr_conf.lsc != 0)
intr_mask |= (1 << IGB_MSIX_OTHER_INTR_VEC);
@@ -5281,8 +5274,8 @@ eth_igb_configure_msix_intr(struct rte_eth_dev *dev)
for (queue_id = 0; queue_id < dev->data->nb_rx_queues; queue_id++) {
eth_igb_assign_msix_vector(hw, 0, queue_id, vec);
- intr_handle->intr_vec[queue_id] = vec;
- if (vec < base + intr_handle->nb_efd - 1)
+ rte_intr_vec_list_index_set(intr_handle, queue_id, vec);
+ if (vec < base + rte_intr_nb_efd_get(intr_handle) - 1)
vec++;
}
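The igb MSI-X mask arithmetic is untouched, only the efd count now comes from rte_intr_nb_efd_get(). RTE_LEN2MASK(n, uint32_t) yields the n lowest bits set, so with four event fds and a misc_shift of 1 the result is 0xf << 1 = 0x1e, i.e. vectors 1 through 4:

	intr_mask = RTE_LEN2MASK(rte_intr_nb_efd_get(intr_handle), uint32_t)
			<< misc_shift;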
diff --git a/drivers/net/ena/ena_ethdev.c b/drivers/net/ena/ena_ethdev.c
index f3b17d70c9..0547f94596 100644
--- a/drivers/net/ena/ena_ethdev.c
+++ b/drivers/net/ena/ena_ethdev.c
@@ -495,7 +495,7 @@ static void ena_config_debug_area(struct ena_adapter *adapter)
static int ena_close(struct rte_eth_dev *dev)
{
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
struct ena_adapter *adapter = dev->data->dev_private;
int ret = 0;
@@ -955,7 +955,7 @@ static int ena_stop(struct rte_eth_dev *dev)
struct ena_adapter *adapter = dev->data->dev_private;
struct ena_com_dev *ena_dev = &adapter->ena_dev;
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
int rc;
/* Cannot free memory in secondary process */
@@ -977,10 +977,9 @@ static int ena_stop(struct rte_eth_dev *dev)
rte_intr_disable(intr_handle);
rte_intr_efd_disable(intr_handle);
- if (intr_handle->intr_vec != NULL) {
- rte_free(intr_handle->intr_vec);
- intr_handle->intr_vec = NULL;
- }
+
+ /* Cleanup vector list */
+ rte_intr_vec_list_free(intr_handle);
rte_intr_enable(intr_handle);
@@ -996,7 +995,7 @@ static int ena_create_io_queue(struct rte_eth_dev *dev, struct ena_ring *ring)
struct ena_adapter *adapter = ring->adapter;
struct ena_com_dev *ena_dev = &adapter->ena_dev;
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
struct ena_com_create_io_ctx ctx =
/* policy set to _HOST just to satisfy icc compiler */
{ ENA_ADMIN_PLACEMENT_POLICY_HOST,
@@ -1016,7 +1015,10 @@ static int ena_create_io_queue(struct rte_eth_dev *dev, struct ena_ring *ring)
ena_qid = ENA_IO_RXQ_IDX(ring->id);
ctx.direction = ENA_COM_IO_QUEUE_DIRECTION_RX;
if (rte_intr_dp_is_en(intr_handle))
- ctx.msix_vector = intr_handle->intr_vec[ring->id];
+ ctx.msix_vector =
+ rte_intr_vec_list_index_get(intr_handle,
+ ring->id);
+
for (i = 0; i < ring->ring_size; i++)
ring->empty_rx_reqs[i] = i;
}
@@ -1825,7 +1827,7 @@ static int eth_ena_dev_init(struct rte_eth_dev *eth_dev)
pci_dev->addr.devid,
pci_dev->addr.function);
- intr_handle = &pci_dev->intr_handle;
+ intr_handle = pci_dev->intr_handle;
adapter->regs = pci_dev->mem_resource[ENA_REGS_BAR].addr;
adapter->dev_mem_base = pci_dev->mem_resource[ENA_MEM_BAR].addr;
@@ -3113,7 +3115,7 @@ static int ena_parse_devargs(struct ena_adapter *adapter,
static int ena_setup_rx_intr(struct rte_eth_dev *dev)
{
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
int rc;
uint16_t vectors_nb, i;
bool rx_intr_requested = dev->data->dev_conf.intr_conf.rxq;
@@ -3140,9 +3142,9 @@ static int ena_setup_rx_intr(struct rte_eth_dev *dev)
goto enable_intr;
}
- intr_handle->intr_vec = rte_zmalloc("intr_vec",
- dev->data->nb_rx_queues * sizeof(*intr_handle->intr_vec), 0);
- if (intr_handle->intr_vec == NULL) {
+ /* Allocate the vector list */
+ if (rte_intr_vec_list_alloc(intr_handle, "intr_vec",
+ dev->data->nb_rx_queues)) {
PMD_DRV_LOG(ERR,
"Failed to allocate interrupt vector for %d queues\n",
dev->data->nb_rx_queues);
@@ -3161,7 +3163,9 @@ static int ena_setup_rx_intr(struct rte_eth_dev *dev)
}
for (i = 0; i < vectors_nb; ++i)
- intr_handle->intr_vec[i] = RTE_INTR_VEC_RXTX_OFFSET + i;
+ if (rte_intr_vec_list_index_set(intr_handle, i,
+ RTE_INTR_VEC_RXTX_OFFSET + i))
+ goto disable_intr_efd;
rte_intr_enable(intr_handle);
return 0;
@@ -3169,8 +3173,7 @@ static int ena_setup_rx_intr(struct rte_eth_dev *dev)
disable_intr_efd:
rte_intr_efd_disable(intr_handle);
free_intr_vec:
- rte_free(intr_handle->intr_vec);
- intr_handle->intr_vec = NULL;
+ rte_intr_vec_list_free(intr_handle);
enable_intr:
rte_intr_enable(intr_handle);
return rc;
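A side effect of going through setters is that formerly infallible assignments can now fail; ena handles that by branching into its existing unwind labels, which after this patch free the list with rte_intr_vec_list_free() instead of rte_free(). Condensed from the hunk above:

	for (i = 0; i < vectors_nb; ++i)
		if (rte_intr_vec_list_index_set(intr_handle, i,
						RTE_INTR_VEC_RXTX_OFFSET + i))
			/* unwind: efd disable, vector list free, re-enable base intr */
			goto disable_intr_efd;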
diff --git a/drivers/net/enic/enic_main.c b/drivers/net/enic/enic_main.c
index dfc7f5d1f9..5fb91a069a 100644
--- a/drivers/net/enic/enic_main.c
+++ b/drivers/net/enic/enic_main.c
@@ -448,7 +448,7 @@ enic_intr_handler(void *arg)
rte_eth_dev_callback_process(dev, RTE_ETH_EVENT_INTR_LSC, NULL);
enic_log_q_error(enic);
/* Re-enable irq in case of INTx */
- rte_intr_ack(&enic->pdev->intr_handle);
+ rte_intr_ack(enic->pdev->intr_handle);
}
static int enic_rxq_intr_init(struct enic *enic)
@@ -477,14 +477,16 @@ static int enic_rxq_intr_init(struct enic *enic)
" interrupts\n");
return err;
}
- intr_handle->intr_vec = rte_zmalloc("enic_intr_vec",
- rxq_intr_count * sizeof(int), 0);
- if (intr_handle->intr_vec == NULL) {
+
+ if (rte_intr_vec_list_alloc(intr_handle, "enic_intr_vec",
+ rxq_intr_count)) {
dev_err(enic, "Failed to allocate intr_vec\n");
return -ENOMEM;
}
for (i = 0; i < rxq_intr_count; i++)
- intr_handle->intr_vec[i] = i + ENICPMD_RXQ_INTR_OFFSET;
+ if (rte_intr_vec_list_index_set(intr_handle, i,
+ i + ENICPMD_RXQ_INTR_OFFSET))
+ return -rte_errno;
return 0;
}
@@ -494,10 +496,8 @@ static void enic_rxq_intr_deinit(struct enic *enic)
intr_handle = enic->rte_dev->intr_handle;
rte_intr_efd_disable(intr_handle);
- if (intr_handle->intr_vec != NULL) {
- rte_free(intr_handle->intr_vec);
- intr_handle->intr_vec = NULL;
- }
+
+ rte_intr_vec_list_free(intr_handle);
}
static void enic_prep_wq_for_simple_tx(struct enic *enic, uint16_t queue_idx)
@@ -667,10 +667,10 @@ int enic_enable(struct enic *enic)
vnic_dev_enable_wait(enic->vdev);
/* Register and enable error interrupt */
- rte_intr_callback_register(&(enic->pdev->intr_handle),
+ rte_intr_callback_register(enic->pdev->intr_handle,
enic_intr_handler, (void *)enic->rte_dev);
- rte_intr_enable(&(enic->pdev->intr_handle));
+ rte_intr_enable(enic->pdev->intr_handle);
/* Unmask LSC interrupt */
vnic_intr_unmask(&enic->intr[ENICPMD_LSC_INTR_OFFSET]);
@@ -1111,8 +1111,8 @@ int enic_disable(struct enic *enic)
(void)vnic_intr_masked(&enic->intr[i]); /* flush write */
}
enic_rxq_intr_deinit(enic);
- rte_intr_disable(&enic->pdev->intr_handle);
- rte_intr_callback_unregister(&enic->pdev->intr_handle,
+ rte_intr_disable(enic->pdev->intr_handle);
+ rte_intr_callback_unregister(enic->pdev->intr_handle,
enic_intr_handler,
(void *)enic->rte_dev);
diff --git a/drivers/net/failsafe/failsafe.c b/drivers/net/failsafe/failsafe.c
index b87c036e60..485334f4d4 100644
--- a/drivers/net/failsafe/failsafe.c
+++ b/drivers/net/failsafe/failsafe.c
@@ -264,11 +264,24 @@ fs_eth_dev_create(struct rte_vdev_device *vdev)
RTE_ETHER_ADDR_BYTES(mac));
dev->data->dev_flags |= RTE_ETH_DEV_INTR_LSC |
RTE_ETH_DEV_AUTOFILL_QUEUE_XSTATS;
- PRIV(dev)->intr_handle = (struct rte_intr_handle){
- .fd = -1,
- .type = RTE_INTR_HANDLE_EXT,
- };
+
+ /* Allocate interrupt instance */
+ PRIV(dev)->intr_handle =
+ rte_intr_instance_alloc(RTE_INTR_INSTANCE_F_SHARED);
+ if (!PRIV(dev)->intr_handle) {
+ ERROR("Failed to allocate intr handle");
+ goto cancel_alarm;
+ }
+
+ if (rte_intr_fd_set(PRIV(dev)->intr_handle, -1))
+ goto cancel_alarm;
+
+ if (rte_intr_type_set(PRIV(dev)->intr_handle,
+ RTE_INTR_HANDLE_EXT))
+ goto cancel_alarm;
+
rte_eth_dev_probing_finish(dev);
+
return 0;
cancel_alarm:
failsafe_hotplug_alarm_cancel(dev);
@@ -297,6 +310,8 @@ fs_rte_eth_free(const char *name)
return 0; /* port already released */
ret = failsafe_eth_dev_close(dev);
rte_eth_dev_release_port(dev);
+ if (PRIV(dev)->intr_handle)
+ rte_intr_instance_free(PRIV(dev)->intr_handle);
return ret;
}
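failsafe is a vdev driver that previously kept a struct rte_intr_handle by value in its private data, so it switches to an allocated instance: rte_intr_instance_alloc() with the SHARED flag at probe, the fd and type filled in through setters, and rte_intr_instance_free() when the port is released. A trimmed sketch, with priv standing in for PRIV(dev):

	priv->intr_handle = rte_intr_instance_alloc(RTE_INTR_INSTANCE_F_SHARED);
	if (priv->intr_handle == NULL)
		return -ENOMEM;
	if (rte_intr_fd_set(priv->intr_handle, -1) ||
	    rte_intr_type_set(priv->intr_handle, RTE_INTR_HANDLE_EXT))
		return -rte_errno;

	/* ... later, on port release ... */
	rte_intr_instance_free(priv->intr_handle);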
diff --git a/drivers/net/failsafe/failsafe_intr.c b/drivers/net/failsafe/failsafe_intr.c
index 602c04033c..949af61a47 100644
--- a/drivers/net/failsafe/failsafe_intr.c
+++ b/drivers/net/failsafe/failsafe_intr.c
@@ -410,12 +410,10 @@ fs_rx_intr_vec_uninstall(struct fs_priv *priv)
{
struct rte_intr_handle *intr_handle;
- intr_handle = &priv->intr_handle;
- if (intr_handle->intr_vec != NULL) {
- free(intr_handle->intr_vec);
- intr_handle->intr_vec = NULL;
- }
- intr_handle->nb_efd = 0;
+ intr_handle = priv->intr_handle;
+ rte_intr_vec_list_free(intr_handle);
+
+ rte_intr_nb_efd_set(intr_handle, 0);
}
/**
@@ -439,11 +437,9 @@ fs_rx_intr_vec_install(struct fs_priv *priv)
rxqs_n = priv->data->nb_rx_queues;
n = RTE_MIN(rxqs_n, (uint32_t)RTE_MAX_RXTX_INTR_VEC_ID);
count = 0;
- intr_handle = &priv->intr_handle;
- RTE_ASSERT(intr_handle->intr_vec == NULL);
+ intr_handle = priv->intr_handle;
/* Allocate the interrupt vector of the failsafe Rx proxy interrupts */
- intr_handle->intr_vec = malloc(n * sizeof(intr_handle->intr_vec[0]));
- if (intr_handle->intr_vec == NULL) {
+ if (rte_intr_vec_list_alloc(intr_handle, NULL, n)) {
fs_rx_intr_vec_uninstall(priv);
rte_errno = ENOMEM;
ERROR("Failed to allocate memory for interrupt vector,"
@@ -456,9 +452,9 @@ fs_rx_intr_vec_install(struct fs_priv *priv)
/* Skip queues that cannot request interrupts. */
if (rxq == NULL || rxq->event_fd < 0) {
/* Use invalid intr_vec[] index to disable entry. */
- intr_handle->intr_vec[i] =
- RTE_INTR_VEC_RXTX_OFFSET +
- RTE_MAX_RXTX_INTR_VEC_ID;
+ if (rte_intr_vec_list_index_set(intr_handle, i,
+ RTE_INTR_VEC_RXTX_OFFSET + RTE_MAX_RXTX_INTR_VEC_ID))
+ return -rte_errno;
continue;
}
if (count >= RTE_MAX_RXTX_INTR_VEC_ID) {
@@ -469,15 +465,24 @@ fs_rx_intr_vec_install(struct fs_priv *priv)
fs_rx_intr_vec_uninstall(priv);
return -rte_errno;
}
- intr_handle->intr_vec[i] = RTE_INTR_VEC_RXTX_OFFSET + count;
- intr_handle->efds[count] = rxq->event_fd;
+ if (rte_intr_vec_list_index_set(intr_handle, i,
+ RTE_INTR_VEC_RXTX_OFFSET + count))
+ return -rte_errno;
+
+ if (rte_intr_efds_index_set(intr_handle, count,
+ rxq->event_fd))
+ return -rte_errno;
count++;
}
if (count == 0) {
fs_rx_intr_vec_uninstall(priv);
} else {
- intr_handle->nb_efd = count;
- intr_handle->efd_counter_size = sizeof(uint64_t);
+ if (rte_intr_nb_efd_set(intr_handle, count))
+ return -rte_errno;
+
+ if (rte_intr_efd_counter_size_set(intr_handle,
+ sizeof(uint64_t)))
+ return -rte_errno;
}
return 0;
}
@@ -499,7 +504,7 @@ failsafe_rx_intr_uninstall(struct rte_eth_dev *dev)
struct rte_intr_handle *intr_handle;
priv = PRIV(dev);
- intr_handle = &priv->intr_handle;
+ intr_handle = priv->intr_handle;
rte_intr_free_epoll_fd(intr_handle);
fs_rx_event_proxy_uninstall(priv);
fs_rx_intr_vec_uninstall(priv);
@@ -530,6 +535,6 @@ failsafe_rx_intr_install(struct rte_eth_dev *dev)
fs_rx_intr_vec_uninstall(priv);
return -rte_errno;
}
- dev->intr_handle = &priv->intr_handle;
+ dev->intr_handle = priv->intr_handle;
return 0;
}
diff --git a/drivers/net/failsafe/failsafe_ops.c b/drivers/net/failsafe/failsafe_ops.c
index 29de39910c..85ab36d4af 100644
--- a/drivers/net/failsafe/failsafe_ops.c
+++ b/drivers/net/failsafe/failsafe_ops.c
@@ -393,15 +393,22 @@ fs_rx_queue_setup(struct rte_eth_dev *dev,
* For the time being, fake as if we are using MSIX interrupts,
* this will cause rte_intr_efd_enable to allocate an eventfd for us.
*/
- struct rte_intr_handle intr_handle = {
- .type = RTE_INTR_HANDLE_VFIO_MSIX,
- .efds = { -1, },
- };
+ struct rte_intr_handle *intr_handle;
struct sub_device *sdev;
struct rxq *rxq;
uint8_t i;
int ret;
+ intr_handle = rte_intr_instance_alloc(RTE_INTR_INSTANCE_F_UNSHARED);
+ if (!intr_handle)
+ return -ENOMEM;
+
+ if (rte_intr_type_set(intr_handle, RTE_INTR_HANDLE_VFIO_MSIX))
+ return -rte_errno;
+
+ if (rte_intr_efds_index_set(intr_handle, 0, -1))
+ return -rte_errno;
+
fs_lock(dev, 0);
if (rx_conf->rx_deferred_start) {
FOREACH_SUBDEV_STATE(sdev, i, dev, DEV_PROBED) {
@@ -435,12 +442,12 @@ fs_rx_queue_setup(struct rte_eth_dev *dev,
rxq->info.nb_desc = nb_rx_desc;
rxq->priv = PRIV(dev);
rxq->sdev = PRIV(dev)->subs;
- ret = rte_intr_efd_enable(&intr_handle, 1);
+ ret = rte_intr_efd_enable(intr_handle, 1);
if (ret < 0) {
fs_unlock(dev, 0);
return ret;
}
- rxq->event_fd = intr_handle.efds[0];
+ rxq->event_fd = rte_intr_efds_index_get(intr_handle, 0);
dev->data->rx_queues[rx_queue_id] = rxq;
FOREACH_SUBDEV_STATE(sdev, i, dev, DEV_ACTIVE) {
ret = rte_eth_rx_queue_setup(PORT_ID(sdev),
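fs_rx_queue_setup() used to build a throw-away rte_intr_handle on the stack just so rte_intr_efd_enable() would create an eventfd; with the structure opaque it now has to allocate that temporary instance. A sketch of the idea, where the trailing free is an assumption of the sketch rather than something shown in the hunk above:

	/* temporary handle whose only purpose is to obtain an eventfd */
	intr_handle = rte_intr_instance_alloc(RTE_INTR_INSTANCE_F_UNSHARED);
	if (intr_handle == NULL)
		return -ENOMEM;
	if (rte_intr_type_set(intr_handle, RTE_INTR_HANDLE_VFIO_MSIX) ||
	    rte_intr_efds_index_set(intr_handle, 0, -1))
		return -rte_errno;
	ret = rte_intr_efd_enable(intr_handle, 1);
	if (ret == 0)
		rxq->event_fd = rte_intr_efds_index_get(intr_handle, 0);
	rte_intr_instance_free(intr_handle);	/* sketch: release once the fd is copied out */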
diff --git a/drivers/net/failsafe/failsafe_private.h b/drivers/net/failsafe/failsafe_private.h
index cd39d103c6..a80f5e2caf 100644
--- a/drivers/net/failsafe/failsafe_private.h
+++ b/drivers/net/failsafe/failsafe_private.h
@@ -166,7 +166,7 @@ struct fs_priv {
struct rte_ether_addr *mcast_addrs;
/* current capabilities */
struct rte_eth_dev_owner my_owner; /* Unique owner. */
- struct rte_intr_handle intr_handle; /* Port interrupt handle. */
+ struct rte_intr_handle *intr_handle; /* Port interrupt handle. */
/*
* Fail-safe state machine.
* This level will be tracking state of the EAL and eth
diff --git a/drivers/net/fm10k/fm10k_ethdev.c b/drivers/net/fm10k/fm10k_ethdev.c
index 66f4a5c6df..68896133e6 100644
--- a/drivers/net/fm10k/fm10k_ethdev.c
+++ b/drivers/net/fm10k/fm10k_ethdev.c
@@ -32,7 +32,8 @@
#define BIT_MASK_PER_UINT32 ((1 << CHARS_PER_UINT32) - 1)
/* default 1:1 map from queue ID to interrupt vector ID */
-#define Q2V(pci_dev, queue_id) ((pci_dev)->intr_handle.intr_vec[queue_id])
+#define Q2V(pci_dev, queue_id) \
+ (rte_intr_vec_list_index_get((pci_dev)->intr_handle, queue_id))
/* First 64 Logical ports for PF/VMDQ, second 64 for Flow director */
#define MAX_LPORT_NUM 128
@@ -690,7 +691,7 @@ fm10k_dev_rx_init(struct rte_eth_dev *dev)
struct fm10k_hw *hw = FM10K_DEV_PRIVATE_TO_HW(dev->data->dev_private);
struct fm10k_macvlan_filter_info *macvlan;
struct rte_pci_device *pdev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pdev->intr_handle;
+ struct rte_intr_handle *intr_handle = pdev->intr_handle;
int i, ret;
struct fm10k_rx_queue *rxq;
uint64_t base_addr;
@@ -1158,7 +1159,7 @@ fm10k_dev_stop(struct rte_eth_dev *dev)
{
struct fm10k_hw *hw = FM10K_DEV_PRIVATE_TO_HW(dev->data->dev_private);
struct rte_pci_device *pdev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pdev->intr_handle;
+ struct rte_intr_handle *intr_handle = pdev->intr_handle;
int i;
PMD_INIT_FUNC_TRACE();
@@ -1187,8 +1188,7 @@ fm10k_dev_stop(struct rte_eth_dev *dev)
}
/* Clean datapath event and queue/vec mapping */
rte_intr_efd_disable(intr_handle);
- rte_free(intr_handle->intr_vec);
- intr_handle->intr_vec = NULL;
+ rte_intr_vec_list_free(intr_handle);
return 0;
}
@@ -2367,7 +2367,7 @@ fm10k_dev_rx_queue_intr_enable(struct rte_eth_dev *dev, uint16_t queue_id)
else
FM10K_WRITE_REG(hw, FM10K_VFITR(Q2V(pdev, queue_id)),
FM10K_ITR_AUTOMASK | FM10K_ITR_MASK_CLEAR);
- rte_intr_ack(&pdev->intr_handle);
+ rte_intr_ack(pdev->intr_handle);
return 0;
}
@@ -2392,7 +2392,7 @@ fm10k_dev_rxq_interrupt_setup(struct rte_eth_dev *dev)
{
struct fm10k_hw *hw = FM10K_DEV_PRIVATE_TO_HW(dev->data->dev_private);
struct rte_pci_device *pdev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pdev->intr_handle;
+ struct rte_intr_handle *intr_handle = pdev->intr_handle;
uint32_t intr_vector, vec;
uint16_t queue_id;
int result = 0;
@@ -2420,15 +2420,17 @@ fm10k_dev_rxq_interrupt_setup(struct rte_eth_dev *dev)
}
if (rte_intr_dp_is_en(intr_handle) && !result) {
- intr_handle->intr_vec = rte_zmalloc("intr_vec",
- dev->data->nb_rx_queues * sizeof(int), 0);
- if (intr_handle->intr_vec) {
+ if (!rte_intr_vec_list_alloc(intr_handle, "intr_vec",
+ dev->data->nb_rx_queues)) {
for (queue_id = 0, vec = FM10K_RX_VEC_START;
queue_id < dev->data->nb_rx_queues;
queue_id++) {
- intr_handle->intr_vec[queue_id] = vec;
- if (vec < intr_handle->nb_efd - 1
- + FM10K_RX_VEC_START)
+ rte_intr_vec_list_index_set(intr_handle,
+ queue_id, vec);
+ int nb_efd =
+ rte_intr_nb_efd_get(intr_handle);
+ if (vec < (uint32_t)nb_efd - 1 +
+ FM10K_RX_VEC_START)
vec++;
}
} else {
@@ -2787,7 +2789,7 @@ fm10k_dev_close(struct rte_eth_dev *dev)
{
struct fm10k_hw *hw = FM10K_DEV_PRIVATE_TO_HW(dev->data->dev_private);
struct rte_pci_device *pdev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pdev->intr_handle;
+ struct rte_intr_handle *intr_handle = pdev->intr_handle;
int ret;
PMD_INIT_FUNC_TRACE();
@@ -3053,7 +3055,7 @@ eth_fm10k_dev_init(struct rte_eth_dev *dev)
{
struct fm10k_hw *hw = FM10K_DEV_PRIVATE_TO_HW(dev->data->dev_private);
struct rte_pci_device *pdev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pdev->intr_handle;
+ struct rte_intr_handle *intr_handle = pdev->intr_handle;
int diag, i;
struct fm10k_macvlan_filter_info *macvlan;
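fm10k keeps its Q2V() convenience macro, now resolved through the accessor, so every ITR register write keeps looking up the per-queue vector the same way:

	#define Q2V(pci_dev, queue_id) \
		(rte_intr_vec_list_index_get((pci_dev)->intr_handle, queue_id))

	/* used exactly as before, e.g. when re-enabling a queue interrupt */
	FM10K_WRITE_REG(hw, FM10K_VFITR(Q2V(pdev, queue_id)),
			FM10K_ITR_AUTOMASK | FM10K_ITR_MASK_CLEAR);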
diff --git a/drivers/net/hinic/hinic_pmd_ethdev.c b/drivers/net/hinic/hinic_pmd_ethdev.c
index c2374ebb67..bb33bae0a0 100644
--- a/drivers/net/hinic/hinic_pmd_ethdev.c
+++ b/drivers/net/hinic/hinic_pmd_ethdev.c
@@ -1228,13 +1228,13 @@ static void hinic_disable_interrupt(struct rte_eth_dev *dev)
hinic_set_msix_state(nic_dev->hwdev, 0, HINIC_MSIX_DISABLE);
/* disable rte interrupt */
- ret = rte_intr_disable(&pci_dev->intr_handle);
+ ret = rte_intr_disable(pci_dev->intr_handle);
if (ret)
PMD_DRV_LOG(ERR, "Disable intr failed: %d", ret);
do {
ret =
- rte_intr_callback_unregister(&pci_dev->intr_handle,
+ rte_intr_callback_unregister(pci_dev->intr_handle,
hinic_dev_interrupt_handler, dev);
if (ret >= 0) {
break;
@@ -3118,7 +3118,7 @@ static int hinic_func_init(struct rte_eth_dev *eth_dev)
}
/* register callback func to eal lib */
- rc = rte_intr_callback_register(&pci_dev->intr_handle,
+ rc = rte_intr_callback_register(pci_dev->intr_handle,
hinic_dev_interrupt_handler,
(void *)eth_dev);
if (rc) {
@@ -3128,7 +3128,7 @@ static int hinic_func_init(struct rte_eth_dev *eth_dev)
}
/* enable uio/vfio intr/eventfd mapping */
- rc = rte_intr_enable(&pci_dev->intr_handle);
+ rc = rte_intr_enable(pci_dev->intr_handle);
if (rc) {
PMD_DRV_LOG(ERR, "Enable rte interrupt failed, dev_name: %s",
eth_dev->data->name);
@@ -3158,7 +3158,7 @@ static int hinic_func_init(struct rte_eth_dev *eth_dev)
return 0;
enable_intr_fail:
- (void)rte_intr_callback_unregister(&pci_dev->intr_handle,
+ (void)rte_intr_callback_unregister(pci_dev->intr_handle,
hinic_dev_interrupt_handler,
(void *)eth_dev);
diff --git a/drivers/net/hns3/hns3_ethdev.c b/drivers/net/hns3/hns3_ethdev.c
index 693048f587..6769a8f290 100644
--- a/drivers/net/hns3/hns3_ethdev.c
+++ b/drivers/net/hns3/hns3_ethdev.c
@@ -5224,7 +5224,7 @@ hns3_init_pf(struct rte_eth_dev *eth_dev)
hns3_config_all_msix_error(hw, true);
- ret = rte_intr_callback_register(&pci_dev->intr_handle,
+ ret = rte_intr_callback_register(pci_dev->intr_handle,
hns3_interrupt_handler,
eth_dev);
if (ret) {
@@ -5237,7 +5237,7 @@ hns3_init_pf(struct rte_eth_dev *eth_dev)
goto err_get_config;
/* Enable interrupt */
- rte_intr_enable(&pci_dev->intr_handle);
+ rte_intr_enable(pci_dev->intr_handle);
hns3_pf_enable_irq0(hw);
/* Get configuration */
@@ -5296,8 +5296,8 @@ hns3_init_pf(struct rte_eth_dev *eth_dev)
hns3_tqp_stats_uninit(hw);
err_get_config:
hns3_pf_disable_irq0(hw);
- rte_intr_disable(&pci_dev->intr_handle);
- hns3_intr_unregister(&pci_dev->intr_handle, hns3_interrupt_handler,
+ rte_intr_disable(pci_dev->intr_handle);
+ hns3_intr_unregister(pci_dev->intr_handle, hns3_interrupt_handler,
eth_dev);
err_intr_callback_register:
err_cmd_init:
@@ -5330,8 +5330,8 @@ hns3_uninit_pf(struct rte_eth_dev *eth_dev)
hns3_tqp_stats_uninit(hw);
hns3_config_mac_tnl_int(hw, false);
hns3_pf_disable_irq0(hw);
- rte_intr_disable(&pci_dev->intr_handle);
- hns3_intr_unregister(&pci_dev->intr_handle, hns3_interrupt_handler,
+ rte_intr_disable(pci_dev->intr_handle);
+ hns3_intr_unregister(pci_dev->intr_handle, hns3_interrupt_handler,
eth_dev);
hns3_config_all_msix_error(hw, false);
hns3_cmd_uninit(hw);
@@ -5665,7 +5665,7 @@ static int
hns3_map_rx_interrupt(struct rte_eth_dev *dev)
{
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
struct hns3_hw *hw = HNS3_DEV_PRIVATE_TO_HW(dev->data->dev_private);
uint16_t base = RTE_INTR_VEC_ZERO_OFFSET;
uint16_t vec = RTE_INTR_VEC_ZERO_OFFSET;
@@ -5688,16 +5688,13 @@ hns3_map_rx_interrupt(struct rte_eth_dev *dev)
if (rte_intr_efd_enable(intr_handle, intr_vector))
return -EINVAL;
- if (intr_handle->intr_vec == NULL) {
- intr_handle->intr_vec =
- rte_zmalloc("intr_vec",
- hw->used_rx_queues * sizeof(int), 0);
- if (intr_handle->intr_vec == NULL) {
- hns3_err(hw, "failed to allocate %u rx_queues intr_vec",
- hw->used_rx_queues);
- ret = -ENOMEM;
- goto alloc_intr_vec_error;
- }
+ /* Allocate vector list */
+ if (rte_intr_vec_list_alloc(intr_handle, "intr_vec",
+ hw->used_rx_queues)) {
+ hns3_err(hw, "failed to allocate %u rx_queues intr_vec",
+ hw->used_rx_queues);
+ ret = -ENOMEM;
+ goto alloc_intr_vec_error;
}
if (rte_intr_allow_others(intr_handle)) {
@@ -5710,20 +5707,21 @@ hns3_map_rx_interrupt(struct rte_eth_dev *dev)
HNS3_RING_TYPE_RX, q_id);
if (ret)
goto bind_vector_error;
- intr_handle->intr_vec[q_id] = vec;
+
+ if (rte_intr_vec_list_index_set(intr_handle, q_id, vec))
+ goto bind_vector_error;
/*
* If there are not enough efds (e.g. not enough interrupt),
* remaining queues will be bond to the last interrupt.
*/
- if (vec < base + intr_handle->nb_efd - 1)
+ if (vec < base + rte_intr_nb_efd_get(intr_handle) - 1)
vec++;
}
rte_intr_enable(intr_handle);
return 0;
bind_vector_error:
- rte_free(intr_handle->intr_vec);
- intr_handle->intr_vec = NULL;
+ rte_intr_vec_list_free(intr_handle);
alloc_intr_vec_error:
rte_intr_efd_disable(intr_handle);
return ret;
@@ -5734,7 +5732,7 @@ hns3_restore_rx_interrupt(struct hns3_hw *hw)
{
struct rte_eth_dev *dev = &rte_eth_devices[hw->data->port_id];
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
uint16_t q_id;
int ret;
@@ -5744,8 +5742,9 @@ hns3_restore_rx_interrupt(struct hns3_hw *hw)
if (rte_intr_dp_is_en(intr_handle)) {
for (q_id = 0; q_id < hw->used_rx_queues; q_id++) {
ret = hns3_bind_ring_with_vector(hw,
- intr_handle->intr_vec[q_id], true,
- HNS3_RING_TYPE_RX, q_id);
+ rte_intr_vec_list_index_get(intr_handle,
+ q_id),
+ true, HNS3_RING_TYPE_RX, q_id);
if (ret)
return ret;
}
@@ -5888,7 +5887,7 @@ static void
hns3_unmap_rx_interrupt(struct rte_eth_dev *dev)
{
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
struct hns3_adapter *hns = dev->data->dev_private;
struct hns3_hw *hw = &hns->hw;
uint8_t base = RTE_INTR_VEC_ZERO_OFFSET;
@@ -5908,16 +5907,14 @@ hns3_unmap_rx_interrupt(struct rte_eth_dev *dev)
(void)hns3_bind_ring_with_vector(hw, vec, false,
HNS3_RING_TYPE_RX,
q_id);
- if (vec < base + intr_handle->nb_efd - 1)
+ if (vec < base + rte_intr_nb_efd_get(intr_handle)
+ - 1)
vec++;
}
}
/* Clean datapath event and queue/vec mapping */
rte_intr_efd_disable(intr_handle);
- if (intr_handle->intr_vec) {
- rte_free(intr_handle->intr_vec);
- intr_handle->intr_vec = NULL;
- }
+ rte_intr_vec_list_free(intr_handle);
}
static int
diff --git a/drivers/net/hns3/hns3_ethdev_vf.c b/drivers/net/hns3/hns3_ethdev_vf.c
index 54dbd4b798..1725c6617c 100644
--- a/drivers/net/hns3/hns3_ethdev_vf.c
+++ b/drivers/net/hns3/hns3_ethdev_vf.c
@@ -1956,7 +1956,7 @@ hns3vf_init_vf(struct rte_eth_dev *eth_dev)
hns3vf_clear_event_cause(hw, 0);
- ret = rte_intr_callback_register(&pci_dev->intr_handle,
+ ret = rte_intr_callback_register(pci_dev->intr_handle,
hns3vf_interrupt_handler, eth_dev);
if (ret) {
PMD_INIT_LOG(ERR, "Failed to register intr: %d", ret);
@@ -1964,7 +1964,7 @@ hns3vf_init_vf(struct rte_eth_dev *eth_dev)
}
/* Enable interrupt */
- rte_intr_enable(&pci_dev->intr_handle);
+ rte_intr_enable(pci_dev->intr_handle);
hns3vf_enable_irq0(hw);
/* Get configuration from PF */
@@ -2016,8 +2016,8 @@ hns3vf_init_vf(struct rte_eth_dev *eth_dev)
err_get_config:
hns3vf_disable_irq0(hw);
- rte_intr_disable(&pci_dev->intr_handle);
- hns3_intr_unregister(&pci_dev->intr_handle, hns3vf_interrupt_handler,
+ rte_intr_disable(pci_dev->intr_handle);
+ hns3_intr_unregister(pci_dev->intr_handle, hns3vf_interrupt_handler,
eth_dev);
err_intr_callback_register:
err_cmd_init:
@@ -2045,8 +2045,8 @@ hns3vf_uninit_vf(struct rte_eth_dev *eth_dev)
hns3_flow_uninit(eth_dev);
hns3_tqp_stats_uninit(hw);
hns3vf_disable_irq0(hw);
- rte_intr_disable(&pci_dev->intr_handle);
- hns3_intr_unregister(&pci_dev->intr_handle, hns3vf_interrupt_handler,
+ rte_intr_disable(pci_dev->intr_handle);
+ hns3_intr_unregister(pci_dev->intr_handle, hns3vf_interrupt_handler,
eth_dev);
hns3_cmd_uninit(hw);
hns3_cmd_destroy_queue(hw);
@@ -2089,7 +2089,7 @@ hns3vf_unmap_rx_interrupt(struct rte_eth_dev *dev)
{
struct hns3_hw *hw = HNS3_DEV_PRIVATE_TO_HW(dev->data->dev_private);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
uint8_t base = RTE_INTR_VEC_ZERO_OFFSET;
uint8_t vec = RTE_INTR_VEC_ZERO_OFFSET;
uint16_t q_id;
@@ -2107,16 +2107,16 @@ hns3vf_unmap_rx_interrupt(struct rte_eth_dev *dev)
(void)hns3vf_bind_ring_with_vector(hw, vec, false,
HNS3_RING_TYPE_RX,
q_id);
- if (vec < base + intr_handle->nb_efd - 1)
+ if (vec < base + rte_intr_nb_efd_get(intr_handle)
+ - 1)
vec++;
}
}
/* Clean datapath event and queue/vec mapping */
rte_intr_efd_disable(intr_handle);
- if (intr_handle->intr_vec) {
- rte_free(intr_handle->intr_vec);
- intr_handle->intr_vec = NULL;
- }
+
+ /* Cleanup vector list */
+ rte_intr_vec_list_free(intr_handle);
}
static int
@@ -2272,7 +2272,7 @@ static int
hns3vf_map_rx_interrupt(struct rte_eth_dev *dev)
{
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
struct hns3_hw *hw = HNS3_DEV_PRIVATE_TO_HW(dev->data->dev_private);
uint8_t base = RTE_INTR_VEC_ZERO_OFFSET;
uint8_t vec = RTE_INTR_VEC_ZERO_OFFSET;
@@ -2295,16 +2295,13 @@ hns3vf_map_rx_interrupt(struct rte_eth_dev *dev)
if (rte_intr_efd_enable(intr_handle, intr_vector))
return -EINVAL;
- if (intr_handle->intr_vec == NULL) {
- intr_handle->intr_vec =
- rte_zmalloc("intr_vec",
- hw->used_rx_queues * sizeof(int), 0);
- if (intr_handle->intr_vec == NULL) {
- hns3_err(hw, "Failed to allocate %u rx_queues"
- " intr_vec", hw->used_rx_queues);
- ret = -ENOMEM;
- goto vf_alloc_intr_vec_error;
- }
+ /* Allocate vector list */
+ if (rte_intr_vec_list_alloc(intr_handle, "intr_vec",
+ hw->used_rx_queues)) {
+ hns3_err(hw, "Failed to allocate %u rx_queues"
+ " intr_vec", hw->used_rx_queues);
+ ret = -ENOMEM;
+ goto vf_alloc_intr_vec_error;
}
if (rte_intr_allow_others(intr_handle)) {
@@ -2317,20 +2314,22 @@ hns3vf_map_rx_interrupt(struct rte_eth_dev *dev)
HNS3_RING_TYPE_RX, q_id);
if (ret)
goto vf_bind_vector_error;
- intr_handle->intr_vec[q_id] = vec;
+
+ if (rte_intr_vec_list_index_set(intr_handle, q_id, vec))
+ goto vf_bind_vector_error;
+
/*
* If there are not enough efds (e.g. not enough interrupt),
* remaining queues will be bond to the last interrupt.
*/
- if (vec < base + intr_handle->nb_efd - 1)
+ if (vec < base + rte_intr_nb_efd_get(intr_handle) - 1)
vec++;
}
rte_intr_enable(intr_handle);
return 0;
vf_bind_vector_error:
- rte_free(intr_handle->intr_vec);
- intr_handle->intr_vec = NULL;
+ rte_intr_vec_list_free(intr_handle);
vf_alloc_intr_vec_error:
rte_intr_efd_disable(intr_handle);
return ret;
@@ -2341,7 +2340,7 @@ hns3vf_restore_rx_interrupt(struct hns3_hw *hw)
{
struct rte_eth_dev *dev = &rte_eth_devices[hw->data->port_id];
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
uint16_t q_id;
int ret;
@@ -2351,8 +2350,9 @@ hns3vf_restore_rx_interrupt(struct hns3_hw *hw)
if (rte_intr_dp_is_en(intr_handle)) {
for (q_id = 0; q_id < hw->used_rx_queues; q_id++) {
ret = hns3vf_bind_ring_with_vector(hw,
- intr_handle->intr_vec[q_id], true,
- HNS3_RING_TYPE_RX, q_id);
+ rte_intr_vec_list_index_get(intr_handle,
+ q_id),
+ true, HNS3_RING_TYPE_RX, q_id);
if (ret)
return ret;
}
@@ -2816,7 +2816,7 @@ hns3vf_reinit_dev(struct hns3_adapter *hns)
int ret;
if (hw->reset.level == HNS3_VF_FULL_RESET) {
- rte_intr_disable(&pci_dev->intr_handle);
+ rte_intr_disable(pci_dev->intr_handle);
ret = hns3vf_set_bus_master(pci_dev, true);
if (ret < 0) {
hns3_err(hw, "failed to set pci bus, ret = %d", ret);
@@ -2842,7 +2842,7 @@ hns3vf_reinit_dev(struct hns3_adapter *hns)
hns3_err(hw, "Failed to enable msix");
}
- rte_intr_enable(&pci_dev->intr_handle);
+ rte_intr_enable(pci_dev->intr_handle);
}
ret = hns3_reset_all_tqps(hns);
diff --git a/drivers/net/hns3/hns3_rxtx.c b/drivers/net/hns3/hns3_rxtx.c
index 602548a4f2..0e8a6258f0 100644
--- a/drivers/net/hns3/hns3_rxtx.c
+++ b/drivers/net/hns3/hns3_rxtx.c
@@ -1050,7 +1050,7 @@ int
hns3_dev_rx_queue_intr_enable(struct rte_eth_dev *dev, uint16_t queue_id)
{
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
struct hns3_hw *hw = HNS3_DEV_PRIVATE_TO_HW(dev->data->dev_private);
if (dev->data->dev_conf.intr_conf.rxq == 0)
diff --git a/drivers/net/i40e/i40e_ethdev.c b/drivers/net/i40e/i40e_ethdev.c
index 0a4db0891d..b5ea77666e 100644
--- a/drivers/net/i40e/i40e_ethdev.c
+++ b/drivers/net/i40e/i40e_ethdev.c
@@ -1440,7 +1440,7 @@ eth_i40e_dev_init(struct rte_eth_dev *dev, void *init_params __rte_unused)
}
i40e_set_default_ptype_table(dev);
pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- intr_handle = &pci_dev->intr_handle;
+ intr_handle = pci_dev->intr_handle;
rte_eth_copy_pci_info(dev, pci_dev);
@@ -1972,7 +1972,7 @@ i40e_vsi_queues_unbind_intr(struct i40e_vsi *vsi)
{
struct rte_eth_dev *dev = I40E_VSI_TO_ETH_DEV(vsi);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
struct i40e_hw *hw = I40E_VSI_TO_HW(vsi);
uint16_t msix_vect = vsi->msix_intr;
uint16_t i;
@@ -2088,10 +2088,11 @@ i40e_vsi_queues_bind_intr(struct i40e_vsi *vsi, uint16_t itr_idx)
{
struct rte_eth_dev *dev = I40E_VSI_TO_ETH_DEV(vsi);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
struct i40e_hw *hw = I40E_VSI_TO_HW(vsi);
uint16_t msix_vect = vsi->msix_intr;
- uint16_t nb_msix = RTE_MIN(vsi->nb_msix, intr_handle->nb_efd);
+ uint16_t nb_msix = RTE_MIN(vsi->nb_msix,
+ rte_intr_nb_efd_get(intr_handle));
uint16_t queue_idx = 0;
int record = 0;
int i;
@@ -2141,8 +2142,8 @@ i40e_vsi_queues_bind_intr(struct i40e_vsi *vsi, uint16_t itr_idx)
vsi->nb_used_qps - i,
itr_idx);
for (; !!record && i < vsi->nb_used_qps; i++)
- intr_handle->intr_vec[queue_idx + i] =
- msix_vect;
+ rte_intr_vec_list_index_set(intr_handle,
+ queue_idx + i, msix_vect);
break;
}
/* 1:1 queue/msix_vect mapping */
@@ -2150,7 +2151,9 @@ i40e_vsi_queues_bind_intr(struct i40e_vsi *vsi, uint16_t itr_idx)
vsi->base_queue + i, 1,
itr_idx);
if (!!record)
- intr_handle->intr_vec[queue_idx + i] = msix_vect;
+ if (rte_intr_vec_list_index_set(intr_handle,
+ queue_idx + i, msix_vect))
+ return -rte_errno;
msix_vect++;
nb_msix--;
@@ -2164,7 +2167,7 @@ i40e_vsi_enable_queues_intr(struct i40e_vsi *vsi)
{
struct rte_eth_dev *dev = I40E_VSI_TO_ETH_DEV(vsi);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
struct i40e_hw *hw = I40E_VSI_TO_HW(vsi);
struct i40e_pf *pf = I40E_VSI_TO_PF(vsi);
uint16_t msix_intr, i;
@@ -2191,7 +2194,7 @@ i40e_vsi_disable_queues_intr(struct i40e_vsi *vsi)
{
struct rte_eth_dev *dev = I40E_VSI_TO_ETH_DEV(vsi);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
struct i40e_hw *hw = I40E_VSI_TO_HW(vsi);
struct i40e_pf *pf = I40E_VSI_TO_PF(vsi);
uint16_t msix_intr, i;
@@ -2357,7 +2360,7 @@ i40e_dev_start(struct rte_eth_dev *dev)
struct i40e_vsi *main_vsi = pf->main_vsi;
int ret, i;
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
uint32_t intr_vector = 0;
struct i40e_vsi *vsi;
uint16_t nb_rxq, nb_txq;
@@ -2375,12 +2378,9 @@ i40e_dev_start(struct rte_eth_dev *dev)
return ret;
}
- if (rte_intr_dp_is_en(intr_handle) && !intr_handle->intr_vec) {
- intr_handle->intr_vec =
- rte_zmalloc("intr_vec",
- dev->data->nb_rx_queues * sizeof(int),
- 0);
- if (!intr_handle->intr_vec) {
+ if (rte_intr_dp_is_en(intr_handle)) {
+ if (rte_intr_vec_list_alloc(intr_handle, "intr_vec",
+ dev->data->nb_rx_queues)) {
PMD_INIT_LOG(ERR,
"Failed to allocate %d rx_queues intr_vec",
dev->data->nb_rx_queues);
@@ -2521,7 +2521,7 @@ i40e_dev_stop(struct rte_eth_dev *dev)
struct i40e_hw *hw = I40E_DEV_PRIVATE_TO_HW(dev->data->dev_private);
struct i40e_vsi *main_vsi = pf->main_vsi;
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
int i;
if (hw->adapter_stopped == 1)
@@ -2562,10 +2562,9 @@ i40e_dev_stop(struct rte_eth_dev *dev)
/* Clean datapath event and queue/vec mapping */
rte_intr_efd_disable(intr_handle);
- if (intr_handle->intr_vec) {
- rte_free(intr_handle->intr_vec);
- intr_handle->intr_vec = NULL;
- }
+
+ /* Cleanup vector list */
+ rte_intr_vec_list_free(intr_handle);
/* reset hierarchy commit */
pf->tm_conf.committed = false;
@@ -2584,7 +2583,7 @@ i40e_dev_close(struct rte_eth_dev *dev)
struct i40e_pf *pf = I40E_DEV_PRIVATE_TO_PF(dev->data->dev_private);
struct i40e_hw *hw = I40E_DEV_PRIVATE_TO_HW(dev->data->dev_private);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
struct i40e_filter_control_settings settings;
struct rte_flow *p_flow;
uint32_t reg;
@@ -11068,11 +11067,11 @@ static int
i40e_dev_rx_queue_intr_enable(struct rte_eth_dev *dev, uint16_t queue_id)
{
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
struct i40e_hw *hw = I40E_DEV_PRIVATE_TO_HW(dev->data->dev_private);
uint16_t msix_intr;
- msix_intr = intr_handle->intr_vec[queue_id];
+ msix_intr = rte_intr_vec_list_index_get(intr_handle, queue_id);
if (msix_intr == I40E_MISC_VEC_ID)
I40E_WRITE_REG(hw, I40E_PFINT_DYN_CTL0,
I40E_PFINT_DYN_CTL0_INTENA_MASK |
@@ -11087,7 +11086,7 @@ i40e_dev_rx_queue_intr_enable(struct rte_eth_dev *dev, uint16_t queue_id)
I40E_PFINT_DYN_CTLN_ITR_INDX_MASK);
I40E_WRITE_FLUSH(hw);
- rte_intr_ack(&pci_dev->intr_handle);
+ rte_intr_ack(pci_dev->intr_handle);
return 0;
}
@@ -11096,11 +11095,11 @@ static int
i40e_dev_rx_queue_intr_disable(struct rte_eth_dev *dev, uint16_t queue_id)
{
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
struct i40e_hw *hw = I40E_DEV_PRIVATE_TO_HW(dev->data->dev_private);
uint16_t msix_intr;
- msix_intr = intr_handle->intr_vec[queue_id];
+ msix_intr = rte_intr_vec_list_index_get(intr_handle, queue_id);
if (msix_intr == I40E_MISC_VEC_ID)
I40E_WRITE_REG(hw, I40E_PFINT_DYN_CTL0,
I40E_PFINT_DYN_CTL0_ITR_INDX_MASK);
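The i40e start/stop paths above show the allocation side of the same change: the driver-local rte_zmalloc() of intr_vec becomes rte_intr_vec_list_alloc(), and the !intr_handle->intr_vec guard disappears because the list contents are no longer visible to the driver. A sketch of the resulting start-path shape, with the queue count passed in rather than read from a specific ethdev (the function name is a placeholder):

#include <errno.h>
#include <stdint.h>
#include <rte_interrupts.h>

/* Sketch: allocate one vector slot per Rx queue when per-queue
 * (datapath) interrupts were requested.
 */
static int
pmd_alloc_rx_intr_vec(struct rte_intr_handle *intr_handle,
		      uint16_t nb_rx_queues)
{
	if (!rte_intr_dp_is_en(intr_handle))
		return 0;

	if (rte_intr_vec_list_alloc(intr_handle, "intr_vec", nb_rx_queues))
		return -ENOMEM;

	return 0;
}

The stop path is the mirror image: rte_intr_efd_disable() followed by rte_intr_vec_list_free(), as in i40e_dev_stop() above.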
diff --git a/drivers/net/iavf/iavf_ethdev.c b/drivers/net/iavf/iavf_ethdev.c
index 611f1f7722..45b917af07 100644
--- a/drivers/net/iavf/iavf_ethdev.c
+++ b/drivers/net/iavf/iavf_ethdev.c
@@ -661,17 +661,16 @@ static int iavf_config_rx_queues_irqs(struct rte_eth_dev *dev,
return -1;
}
- if (rte_intr_dp_is_en(intr_handle) && !intr_handle->intr_vec) {
- intr_handle->intr_vec =
- rte_zmalloc("intr_vec",
- dev->data->nb_rx_queues * sizeof(int), 0);
- if (!intr_handle->intr_vec) {
+ if (rte_intr_dp_is_en(intr_handle)) {
+ if (rte_intr_vec_list_alloc(intr_handle, "intr_vec",
+ dev->data->nb_rx_queues)) {
PMD_DRV_LOG(ERR, "Failed to allocate %d rx intr_vec",
dev->data->nb_rx_queues);
return -1;
}
}
+
qv_map = rte_zmalloc("qv_map",
dev->data->nb_rx_queues * sizeof(struct iavf_qv_map), 0);
if (!qv_map) {
@@ -731,7 +730,8 @@ static int iavf_config_rx_queues_irqs(struct rte_eth_dev *dev,
for (i = 0; i < dev->data->nb_rx_queues; i++) {
qv_map[i].queue_id = i;
qv_map[i].vector_id = vf->msix_base;
- intr_handle->intr_vec[i] = IAVF_MISC_VEC_ID;
+ rte_intr_vec_list_index_set(intr_handle,
+ i, IAVF_MISC_VEC_ID);
}
vf->qv_map = qv_map;
PMD_DRV_LOG(DEBUG,
@@ -741,14 +741,16 @@ static int iavf_config_rx_queues_irqs(struct rte_eth_dev *dev,
 /* If Rx interrupt is required, and we can use
* multi interrupts, then the vec is from 1
*/
- vf->nb_msix = RTE_MIN(intr_handle->nb_efd,
- (uint16_t)(vf->vf_res->max_vectors - 1));
+ vf->nb_msix =
+ RTE_MIN(rte_intr_nb_efd_get(intr_handle),
+ (uint16_t)(vf->vf_res->max_vectors - 1));
vf->msix_base = IAVF_RX_VEC_START;
vec = IAVF_RX_VEC_START;
for (i = 0; i < dev->data->nb_rx_queues; i++) {
qv_map[i].queue_id = i;
qv_map[i].vector_id = vec;
- intr_handle->intr_vec[i] = vec++;
+ rte_intr_vec_list_index_set(intr_handle,
+ i, vec++);
if (vec >= vf->nb_msix + IAVF_RX_VEC_START)
vec = IAVF_RX_VEC_START;
}
@@ -790,8 +792,7 @@ static int iavf_config_rx_queues_irqs(struct rte_eth_dev *dev,
vf->qv_map = NULL;
qv_map_alloc_err:
- rte_free(intr_handle->intr_vec);
- intr_handle->intr_vec = NULL;
+ rte_intr_vec_list_free(intr_handle);
return -1;
}
@@ -927,10 +928,7 @@ iavf_dev_stop(struct rte_eth_dev *dev)
/* Disable the interrupt for Rx */
rte_intr_efd_disable(intr_handle);
/* Rx interrupt vector mapping free */
- if (intr_handle->intr_vec) {
- rte_free(intr_handle->intr_vec);
- intr_handle->intr_vec = NULL;
- }
+ rte_intr_vec_list_free(intr_handle);
/* remove all mac addrs */
iavf_add_del_all_mac_addr(adapter, false);
@@ -1654,7 +1652,8 @@ iavf_dev_rx_queue_intr_enable(struct rte_eth_dev *dev, uint16_t queue_id)
struct iavf_info *vf = IAVF_DEV_PRIVATE_TO_VF(adapter);
uint16_t msix_intr;
- msix_intr = pci_dev->intr_handle.intr_vec[queue_id];
+ msix_intr = rte_intr_vec_list_index_get(pci_dev->intr_handle,
+ queue_id);
if (msix_intr == IAVF_MISC_VEC_ID) {
PMD_DRV_LOG(INFO, "MISC is also enabled for control");
IAVF_WRITE_REG(hw, IAVF_VFINT_DYN_CTL01,
@@ -1673,7 +1672,7 @@ iavf_dev_rx_queue_intr_enable(struct rte_eth_dev *dev, uint16_t queue_id)
IAVF_WRITE_FLUSH(hw);
if (vf->vf_res->vf_cap_flags & VIRTCHNL_VF_OFFLOAD_WB_ON_ITR)
- rte_intr_ack(&pci_dev->intr_handle);
+ rte_intr_ack(pci_dev->intr_handle);
return 0;
}
@@ -1685,7 +1684,8 @@ iavf_dev_rx_queue_intr_disable(struct rte_eth_dev *dev, uint16_t queue_id)
struct iavf_hw *hw = IAVF_DEV_PRIVATE_TO_HW(dev->data->dev_private);
uint16_t msix_intr;
- msix_intr = pci_dev->intr_handle.intr_vec[queue_id];
+ msix_intr = rte_intr_vec_list_index_get(pci_dev->intr_handle,
+ queue_id);
if (msix_intr == IAVF_MISC_VEC_ID) {
PMD_DRV_LOG(ERR, "MISC is used for control, cannot disable it");
return -EIO;
@@ -2370,12 +2370,12 @@ iavf_dev_init(struct rte_eth_dev *eth_dev)
if (vf->vf_res->vf_cap_flags & VIRTCHNL_VF_OFFLOAD_WB_ON_ITR) {
/* register callback func to eal lib */
- rte_intr_callback_register(&pci_dev->intr_handle,
+ rte_intr_callback_register(pci_dev->intr_handle,
iavf_dev_interrupt_handler,
(void *)eth_dev);
/* enable uio intr after callback register */
- rte_intr_enable(&pci_dev->intr_handle);
+ rte_intr_enable(pci_dev->intr_handle);
} else {
rte_eal_alarm_set(IAVF_ALARM_INTERVAL,
iavf_dev_alarm_handler, eth_dev);
@@ -2409,7 +2409,7 @@ iavf_dev_close(struct rte_eth_dev *dev)
{
struct iavf_hw *hw = IAVF_DEV_PRIVATE_TO_HW(dev->data->dev_private);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
struct iavf_adapter *adapter =
IAVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
struct iavf_info *vf = IAVF_DEV_PRIVATE_TO_VF(dev->data->dev_private);
diff --git a/drivers/net/iavf/iavf_vchnl.c b/drivers/net/iavf/iavf_vchnl.c
index 0f4dd21d44..bb65dbf04f 100644
--- a/drivers/net/iavf/iavf_vchnl.c
+++ b/drivers/net/iavf/iavf_vchnl.c
@@ -1685,9 +1685,9 @@ iavf_request_queues(struct rte_eth_dev *dev, uint16_t num)
/* disable interrupt to avoid the admin queue message to be read
* before iavf_read_msg_from_pf.
*/
- rte_intr_disable(&pci_dev->intr_handle);
+ rte_intr_disable(pci_dev->intr_handle);
err = iavf_execute_vf_cmd(adapter, &args);
- rte_intr_enable(&pci_dev->intr_handle);
+ rte_intr_enable(pci_dev->intr_handle);
} else {
rte_eal_alarm_cancel(iavf_dev_alarm_handler, dev);
err = iavf_execute_vf_cmd(adapter, &args);
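The iavf hunks show the write side of the queue-to-vector mapping: the number of usable event fds is read back with rte_intr_nb_efd_get() and each queue's vector is stored through rte_intr_vec_list_index_set(), which can now fail because the backing array lives inside EAL. A condensed sketch of one such loop (this variant clamps to the last vector once the event fds run out, as hns3 and igc do; iavf instead wraps back to the first data vector):

#include <stdint.h>
#include <rte_errno.h>
#include <rte_interrupts.h>

/* Sketch: map nb_rxq queues onto the available vectors. */
static int
pmd_map_rx_vectors(struct rte_intr_handle *intr_handle,
		   uint16_t nb_rxq, uint16_t vec_base)
{
	uint16_t nb_efd = rte_intr_nb_efd_get(intr_handle);
	uint16_t vec = vec_base;
	uint16_t i;

	for (i = 0; i < nb_rxq; i++) {
		if (rte_intr_vec_list_index_set(intr_handle, i, vec))
			return -rte_errno;
		if (vec < vec_base + nb_efd - 1)
			vec++;
	}
	return 0;
}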
diff --git a/drivers/net/ice/ice_dcf.c b/drivers/net/ice/ice_dcf.c
index c9c01a14e3..68c13ac48d 100644
--- a/drivers/net/ice/ice_dcf.c
+++ b/drivers/net/ice/ice_dcf.c
@@ -539,7 +539,7 @@ ice_dcf_handle_vsi_update_event(struct ice_dcf_hw *hw)
rte_spinlock_lock(&hw->vc_cmd_send_lock);
- rte_intr_disable(&pci_dev->intr_handle);
+ rte_intr_disable(pci_dev->intr_handle);
ice_dcf_disable_irq0(hw);
for (;;) {
@@ -555,7 +555,7 @@ ice_dcf_handle_vsi_update_event(struct ice_dcf_hw *hw)
rte_delay_ms(ICE_DCF_ARQ_CHECK_TIME);
}
- rte_intr_enable(&pci_dev->intr_handle);
+ rte_intr_enable(pci_dev->intr_handle);
ice_dcf_enable_irq0(hw);
rte_spinlock_unlock(&hw->vc_cmd_send_lock);
@@ -694,9 +694,9 @@ ice_dcf_init_hw(struct rte_eth_dev *eth_dev, struct ice_dcf_hw *hw)
}
hw->eth_dev = eth_dev;
- rte_intr_callback_register(&pci_dev->intr_handle,
+ rte_intr_callback_register(pci_dev->intr_handle,
ice_dcf_dev_interrupt_handler, hw);
- rte_intr_enable(&pci_dev->intr_handle);
+ rte_intr_enable(pci_dev->intr_handle);
ice_dcf_enable_irq0(hw);
return 0;
@@ -718,7 +718,7 @@ void
ice_dcf_uninit_hw(struct rte_eth_dev *eth_dev, struct ice_dcf_hw *hw)
{
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
if (hw->vf_res->vf_cap_flags & VIRTCHNL_VF_OFFLOAD_QOS)
if (hw->tm_conf.committed) {
diff --git a/drivers/net/ice/ice_dcf_ethdev.c b/drivers/net/ice/ice_dcf_ethdev.c
index b8a537cb85..9084459979 100644
--- a/drivers/net/ice/ice_dcf_ethdev.c
+++ b/drivers/net/ice/ice_dcf_ethdev.c
@@ -159,11 +159,9 @@ ice_dcf_config_rx_queues_irqs(struct rte_eth_dev *dev,
return -1;
}
- if (rte_intr_dp_is_en(intr_handle) && !intr_handle->intr_vec) {
- intr_handle->intr_vec =
- rte_zmalloc("intr_vec",
- dev->data->nb_rx_queues * sizeof(int), 0);
- if (!intr_handle->intr_vec) {
+ if (rte_intr_dp_is_en(intr_handle)) {
+ if (rte_intr_vec_list_alloc(intr_handle, "intr_vec",
+ dev->data->nb_rx_queues)) {
PMD_DRV_LOG(ERR, "Failed to allocate %d rx intr_vec",
dev->data->nb_rx_queues);
return -1;
@@ -213,7 +211,8 @@ ice_dcf_config_rx_queues_irqs(struct rte_eth_dev *dev,
hw->msix_base = IAVF_MISC_VEC_ID;
for (i = 0; i < dev->data->nb_rx_queues; i++) {
hw->rxq_map[hw->msix_base] |= 1 << i;
- intr_handle->intr_vec[i] = IAVF_MISC_VEC_ID;
+ rte_intr_vec_list_index_set(intr_handle,
+ i, IAVF_MISC_VEC_ID);
}
PMD_DRV_LOG(DEBUG,
"vector %u are mapping to all Rx queues",
@@ -223,12 +222,13 @@ ice_dcf_config_rx_queues_irqs(struct rte_eth_dev *dev,
* multi interrupts, then the vec is from 1
*/
hw->nb_msix = RTE_MIN(hw->vf_res->max_vectors,
- intr_handle->nb_efd);
+ rte_intr_nb_efd_get(intr_handle));
hw->msix_base = IAVF_MISC_VEC_ID;
vec = IAVF_MISC_VEC_ID;
for (i = 0; i < dev->data->nb_rx_queues; i++) {
hw->rxq_map[vec] |= 1 << i;
- intr_handle->intr_vec[i] = vec++;
+ rte_intr_vec_list_index_set(intr_handle,
+ i, vec++);
if (vec >= hw->nb_msix)
vec = IAVF_RX_VEC_START;
}
@@ -631,10 +631,7 @@ ice_dcf_dev_stop(struct rte_eth_dev *dev)
ice_dcf_stop_queues(dev);
rte_intr_efd_disable(intr_handle);
- if (intr_handle->intr_vec) {
- rte_free(intr_handle->intr_vec);
- intr_handle->intr_vec = NULL;
- }
+ rte_intr_vec_list_free(intr_handle);
ice_dcf_add_del_all_mac_addr(&dcf_ad->real_hw, false);
dev->data->dev_link.link_status = ETH_LINK_DOWN;
diff --git a/drivers/net/ice/ice_ethdev.c b/drivers/net/ice/ice_ethdev.c
index edbc746327..6c7ba09fb7 100644
--- a/drivers/net/ice/ice_ethdev.c
+++ b/drivers/net/ice/ice_ethdev.c
@@ -2178,7 +2178,7 @@ ice_dev_init(struct rte_eth_dev *dev)
ice_set_default_ptype_table(dev);
pci_dev = RTE_DEV_TO_PCI(dev->device);
- intr_handle = &pci_dev->intr_handle;
+ intr_handle = pci_dev->intr_handle;
pf->adapter = ICE_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
pf->dev_data = dev->data;
@@ -2375,7 +2375,7 @@ ice_vsi_disable_queues_intr(struct ice_vsi *vsi)
{
struct rte_eth_dev *dev = &rte_eth_devices[vsi->adapter->pf.dev_data->port_id];
struct rte_pci_device *pci_dev = ICE_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
struct ice_hw *hw = ICE_VSI_TO_HW(vsi);
uint16_t msix_intr, i;
@@ -2405,7 +2405,7 @@ ice_dev_stop(struct rte_eth_dev *dev)
struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
struct ice_vsi *main_vsi = pf->main_vsi;
struct rte_pci_device *pci_dev = ICE_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
uint16_t i;
/* avoid stopping again */
@@ -2430,10 +2430,7 @@ ice_dev_stop(struct rte_eth_dev *dev)
/* Clean datapath event and queue/vec mapping */
rte_intr_efd_disable(intr_handle);
- if (intr_handle->intr_vec) {
- rte_free(intr_handle->intr_vec);
- intr_handle->intr_vec = NULL;
- }
+ rte_intr_vec_list_free(intr_handle);
pf->adapter_stopped = true;
dev->data->dev_started = 0;
@@ -2447,7 +2444,7 @@ ice_dev_close(struct rte_eth_dev *dev)
struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
struct ice_hw *hw = ICE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
struct ice_adapter *ad =
ICE_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
int ret;
@@ -3345,10 +3342,11 @@ ice_vsi_queues_bind_intr(struct ice_vsi *vsi)
{
struct rte_eth_dev *dev = &rte_eth_devices[vsi->adapter->pf.dev_data->port_id];
struct rte_pci_device *pci_dev = ICE_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
struct ice_hw *hw = ICE_VSI_TO_HW(vsi);
uint16_t msix_vect = vsi->msix_intr;
- uint16_t nb_msix = RTE_MIN(vsi->nb_msix, intr_handle->nb_efd);
+ uint16_t nb_msix = RTE_MIN(vsi->nb_msix,
+ rte_intr_nb_efd_get(intr_handle));
uint16_t queue_idx = 0;
int record = 0;
int i;
@@ -3376,8 +3374,9 @@ ice_vsi_queues_bind_intr(struct ice_vsi *vsi)
vsi->nb_used_qps - i);
for (; !!record && i < vsi->nb_used_qps; i++)
- intr_handle->intr_vec[queue_idx + i] =
- msix_vect;
+ rte_intr_vec_list_index_set(intr_handle,
+ queue_idx + i, msix_vect);
+
break;
}
@@ -3386,7 +3385,9 @@ ice_vsi_queues_bind_intr(struct ice_vsi *vsi)
vsi->base_queue + i, 1);
if (!!record)
- intr_handle->intr_vec[queue_idx + i] = msix_vect;
+ rte_intr_vec_list_index_set(intr_handle,
+ queue_idx + i,
+ msix_vect);
msix_vect++;
nb_msix--;
@@ -3398,7 +3399,7 @@ ice_vsi_enable_queues_intr(struct ice_vsi *vsi)
{
struct rte_eth_dev *dev = &rte_eth_devices[vsi->adapter->pf.dev_data->port_id];
struct rte_pci_device *pci_dev = ICE_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
struct ice_hw *hw = ICE_VSI_TO_HW(vsi);
uint16_t msix_intr, i;
@@ -3424,7 +3425,7 @@ ice_rxq_intr_setup(struct rte_eth_dev *dev)
{
struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
struct rte_pci_device *pci_dev = ICE_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
struct ice_vsi *vsi = pf->main_vsi;
uint32_t intr_vector = 0;
@@ -3444,11 +3445,9 @@ ice_rxq_intr_setup(struct rte_eth_dev *dev)
return -1;
}
- if (rte_intr_dp_is_en(intr_handle) && !intr_handle->intr_vec) {
- intr_handle->intr_vec =
- rte_zmalloc(NULL, dev->data->nb_rx_queues * sizeof(int),
- 0);
- if (!intr_handle->intr_vec) {
+ if (rte_intr_dp_is_en(intr_handle)) {
+ if (rte_intr_vec_list_alloc(intr_handle, NULL,
+ dev->data->nb_rx_queues)) {
PMD_DRV_LOG(ERR,
"Failed to allocate %d rx_queues intr_vec",
dev->data->nb_rx_queues);
@@ -4755,19 +4754,19 @@ static int ice_rx_queue_intr_enable(struct rte_eth_dev *dev,
uint16_t queue_id)
{
struct rte_pci_device *pci_dev = ICE_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
struct ice_hw *hw = ICE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
uint32_t val;
uint16_t msix_intr;
- msix_intr = intr_handle->intr_vec[queue_id];
+ msix_intr = rte_intr_vec_list_index_get(intr_handle, queue_id);
val = GLINT_DYN_CTL_INTENA_M | GLINT_DYN_CTL_CLEARPBA_M |
GLINT_DYN_CTL_ITR_INDX_M;
val &= ~GLINT_DYN_CTL_WB_ON_ITR_M;
ICE_WRITE_REG(hw, GLINT_DYN_CTL(msix_intr), val);
- rte_intr_ack(&pci_dev->intr_handle);
+ rte_intr_ack(pci_dev->intr_handle);
return 0;
}
@@ -4776,11 +4775,11 @@ static int ice_rx_queue_intr_disable(struct rte_eth_dev *dev,
uint16_t queue_id)
{
struct rte_pci_device *pci_dev = ICE_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
struct ice_hw *hw = ICE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
uint16_t msix_intr;
- msix_intr = intr_handle->intr_vec[queue_id];
+ msix_intr = rte_intr_vec_list_index_get(intr_handle, queue_id);
ICE_WRITE_REG(hw, GLINT_DYN_CTL(msix_intr), GLINT_DYN_CTL_WB_ON_ITR_M);
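On the read side, shown in the i40e and ice queue interrupt enable/disable handlers, the per-queue vector is fetched with rte_intr_vec_list_index_get() instead of indexing intr_vec[] directly, and the ack still goes through the handle pointer. A sketch, with the device-specific register write reduced to a stub:

#include <stdint.h>
#include <rte_interrupts.h>

/* Placeholder standing in for GLINT_DYN_CTL/PFINT_DYN_CTLN writes. */
static void
pmd_hw_enable_vector(uint16_t msix_vect)
{
	(void)msix_vect;
}

static int
pmd_rx_queue_intr_enable(struct rte_intr_handle *intr_handle,
			 uint16_t queue_id)
{
	uint16_t msix_vect;

	/* Was: msix_vect = intr_handle->intr_vec[queue_id]; */
	msix_vect = rte_intr_vec_list_index_get(intr_handle, queue_id);
	pmd_hw_enable_vector(msix_vect);

	return rte_intr_ack(intr_handle);
}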
diff --git a/drivers/net/igc/igc_ethdev.c b/drivers/net/igc/igc_ethdev.c
index 2a1ed90b64..f234224080 100644
--- a/drivers/net/igc/igc_ethdev.c
+++ b/drivers/net/igc/igc_ethdev.c
@@ -377,7 +377,7 @@ igc_intr_other_disable(struct rte_eth_dev *dev)
{
struct igc_hw *hw = IGC_DEV_PRIVATE_HW(dev);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
if (rte_intr_allow_others(intr_handle) &&
dev->data->dev_conf.intr_conf.lsc) {
@@ -397,7 +397,7 @@ igc_intr_other_enable(struct rte_eth_dev *dev)
struct igc_interrupt *intr = IGC_DEV_PRIVATE_INTR(dev);
struct igc_hw *hw = IGC_DEV_PRIVATE_HW(dev);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
if (rte_intr_allow_others(intr_handle) &&
dev->data->dev_conf.intr_conf.lsc) {
@@ -609,7 +609,7 @@ eth_igc_stop(struct rte_eth_dev *dev)
struct igc_adapter *adapter = IGC_DEV_PRIVATE(dev);
struct igc_hw *hw = IGC_DEV_PRIVATE_HW(dev);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
struct rte_eth_link link;
dev->data->dev_started = 0;
@@ -661,10 +661,7 @@ eth_igc_stop(struct rte_eth_dev *dev)
/* Clean datapath event and queue/vec mapping */
rte_intr_efd_disable(intr_handle);
- if (intr_handle->intr_vec != NULL) {
- rte_free(intr_handle->intr_vec);
- intr_handle->intr_vec = NULL;
- }
+ rte_intr_vec_list_free(intr_handle);
return 0;
}
@@ -724,7 +721,7 @@ igc_configure_msix_intr(struct rte_eth_dev *dev)
{
struct igc_hw *hw = IGC_DEV_PRIVATE_HW(dev);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
uint32_t intr_mask;
uint32_t vec = IGC_MISC_VEC_ID;
@@ -748,8 +745,8 @@ igc_configure_msix_intr(struct rte_eth_dev *dev)
IGC_WRITE_REG(hw, IGC_GPIE, IGC_GPIE_MSIX_MODE |
IGC_GPIE_PBA | IGC_GPIE_EIAME |
IGC_GPIE_NSICR);
- intr_mask = RTE_LEN2MASK(intr_handle->nb_efd, uint32_t) <<
- misc_shift;
+ intr_mask = RTE_LEN2MASK(rte_intr_nb_efd_get(intr_handle),
+ uint32_t) << misc_shift;
if (dev->data->dev_conf.intr_conf.lsc)
intr_mask |= (1u << IGC_MSIX_OTHER_INTR_VEC);
@@ -766,8 +763,8 @@ igc_configure_msix_intr(struct rte_eth_dev *dev)
for (i = 0; i < dev->data->nb_rx_queues; i++) {
igc_write_ivar(hw, i, 0, vec);
- intr_handle->intr_vec[i] = vec;
- if (vec < base + intr_handle->nb_efd - 1)
+ rte_intr_vec_list_index_set(intr_handle, i, vec);
+ if (vec < base + rte_intr_nb_efd_get(intr_handle) - 1)
vec++;
}
@@ -803,7 +800,7 @@ igc_rxq_interrupt_setup(struct rte_eth_dev *dev)
uint32_t mask;
struct igc_hw *hw = IGC_DEV_PRIVATE_HW(dev);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
int misc_shift = rte_intr_allow_others(intr_handle) ? 1 : 0;
/* won't configure msix register if no mapping is done
@@ -812,7 +809,8 @@ igc_rxq_interrupt_setup(struct rte_eth_dev *dev)
if (!rte_intr_dp_is_en(intr_handle))
return;
- mask = RTE_LEN2MASK(intr_handle->nb_efd, uint32_t) << misc_shift;
+ mask = RTE_LEN2MASK(rte_intr_nb_efd_get(intr_handle), uint32_t)
+ << misc_shift;
IGC_WRITE_REG(hw, IGC_EIMS, mask);
}
@@ -906,7 +904,7 @@ eth_igc_start(struct rte_eth_dev *dev)
struct igc_hw *hw = IGC_DEV_PRIVATE_HW(dev);
struct igc_adapter *adapter = IGC_DEV_PRIVATE(dev);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
uint32_t *speeds;
int ret;
@@ -944,10 +942,9 @@ eth_igc_start(struct rte_eth_dev *dev)
return -1;
}
- if (rte_intr_dp_is_en(intr_handle) && !intr_handle->intr_vec) {
- intr_handle->intr_vec = rte_zmalloc("intr_vec",
- dev->data->nb_rx_queues * sizeof(int), 0);
- if (intr_handle->intr_vec == NULL) {
+ if (rte_intr_dp_is_en(intr_handle)) {
+ if (rte_intr_vec_list_alloc(intr_handle, "intr_vec",
+ dev->data->nb_rx_queues)) {
PMD_DRV_LOG(ERR,
"Failed to allocate %d rx_queues intr_vec",
dev->data->nb_rx_queues);
@@ -1162,7 +1159,7 @@ static int
eth_igc_close(struct rte_eth_dev *dev)
{
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
struct igc_hw *hw = IGC_DEV_PRIVATE_HW(dev);
struct igc_adapter *adapter = IGC_DEV_PRIVATE(dev);
int retry = 0;
@@ -1331,11 +1328,11 @@ eth_igc_dev_init(struct rte_eth_dev *dev)
dev->data->port_id, pci_dev->id.vendor_id,
pci_dev->id.device_id);
- rte_intr_callback_register(&pci_dev->intr_handle,
+ rte_intr_callback_register(pci_dev->intr_handle,
eth_igc_interrupt_handler, (void *)dev);
/* enable uio/vfio intr/eventfd mapping */
- rte_intr_enable(&pci_dev->intr_handle);
+ rte_intr_enable(pci_dev->intr_handle);
/* enable support intr */
igc_intr_other_enable(dev);
@@ -2076,7 +2073,7 @@ eth_igc_rx_queue_intr_disable(struct rte_eth_dev *dev, uint16_t queue_id)
{
struct igc_hw *hw = IGC_DEV_PRIVATE_HW(dev);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
uint32_t vec = IGC_MISC_VEC_ID;
if (rte_intr_allow_others(intr_handle))
@@ -2095,7 +2092,7 @@ eth_igc_rx_queue_intr_enable(struct rte_eth_dev *dev, uint16_t queue_id)
{
struct igc_hw *hw = IGC_DEV_PRIVATE_HW(dev);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
uint32_t vec = IGC_MISC_VEC_ID;
if (rte_intr_allow_others(intr_handle))
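igc (and ixgbe below) also derive interrupt bitmasks from the event-fd count; with the structure hidden, that count comes from rte_intr_nb_efd_get(), while the arithmetic is unchanged. For example:

#include <stdint.h>
#include <rte_common.h>
#include <rte_interrupts.h>

/* Sketch: queue-interrupt mask, shifted past the misc/control vector
 * when one is reserved (rte_intr_allow_others()).
 */
static uint32_t
pmd_rxq_intr_mask(struct rte_intr_handle *intr_handle)
{
	int misc_shift = rte_intr_allow_others(intr_handle) ? 1 : 0;

	/* Was: RTE_LEN2MASK(intr_handle->nb_efd, uint32_t) << misc_shift */
	return RTE_LEN2MASK(rte_intr_nb_efd_get(intr_handle), uint32_t)
			<< misc_shift;
}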
diff --git a/drivers/net/ionic/ionic_ethdev.c b/drivers/net/ionic/ionic_ethdev.c
index f94a1fed0a..90000dda24 100644
--- a/drivers/net/ionic/ionic_ethdev.c
+++ b/drivers/net/ionic/ionic_ethdev.c
@@ -1060,7 +1060,7 @@ static int
ionic_configure_intr(struct ionic_adapter *adapter)
{
struct rte_pci_device *pci_dev = adapter->pci_dev;
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
int err;
IONIC_PRINT(DEBUG, "Configuring %u intrs", adapter->nintrs);
@@ -1074,15 +1074,10 @@ ionic_configure_intr(struct ionic_adapter *adapter)
IONIC_PRINT(DEBUG,
"Packet I/O interrupt on datapath is enabled");
- if (!intr_handle->intr_vec) {
- intr_handle->intr_vec = rte_zmalloc("intr_vec",
- adapter->nintrs * sizeof(int), 0);
-
- if (!intr_handle->intr_vec) {
- IONIC_PRINT(ERR, "Failed to allocate %u vectors",
- adapter->nintrs);
- return -ENOMEM;
- }
+ if (rte_intr_vec_list_alloc(intr_handle, "intr_vec", adapter->nintrs)) {
+ IONIC_PRINT(ERR, "Failed to allocate %u vectors",
+ adapter->nintrs);
+ return -ENOMEM;
}
err = rte_intr_callback_register(intr_handle,
@@ -1111,7 +1106,7 @@ static void
ionic_unconfigure_intr(struct ionic_adapter *adapter)
{
struct rte_pci_device *pci_dev = adapter->pci_dev;
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
rte_intr_disable(intr_handle);
diff --git a/drivers/net/ixgbe/ixgbe_ethdev.c b/drivers/net/ixgbe/ixgbe_ethdev.c
index 46c95425ad..283e7e0ae2 100644
--- a/drivers/net/ixgbe/ixgbe_ethdev.c
+++ b/drivers/net/ixgbe/ixgbe_ethdev.c
@@ -1027,7 +1027,7 @@ eth_ixgbe_dev_init(struct rte_eth_dev *eth_dev, void *init_params __rte_unused)
{
struct ixgbe_adapter *ad = eth_dev->data->dev_private;
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
struct ixgbe_hw *hw =
IXGBE_DEV_PRIVATE_TO_HW(eth_dev->data->dev_private);
struct ixgbe_vfta *shadow_vfta =
@@ -1525,7 +1525,7 @@ eth_ixgbevf_dev_init(struct rte_eth_dev *eth_dev)
uint32_t tc, tcs;
struct ixgbe_adapter *ad = eth_dev->data->dev_private;
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
struct ixgbe_hw *hw =
IXGBE_DEV_PRIVATE_TO_HW(eth_dev->data->dev_private);
struct ixgbe_vfta *shadow_vfta =
@@ -2540,7 +2540,7 @@ ixgbe_dev_start(struct rte_eth_dev *dev)
struct ixgbe_vf_info *vfinfo =
*IXGBE_DEV_PRIVATE_TO_P_VFDATA(dev->data->dev_private);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
uint32_t intr_vector = 0;
int err;
bool link_up = false, negotiate = 0;
@@ -2595,11 +2595,9 @@ ixgbe_dev_start(struct rte_eth_dev *dev)
return -1;
}
- if (rte_intr_dp_is_en(intr_handle) && !intr_handle->intr_vec) {
- intr_handle->intr_vec =
- rte_zmalloc("intr_vec",
- dev->data->nb_rx_queues * sizeof(int), 0);
- if (intr_handle->intr_vec == NULL) {
+ if (rte_intr_dp_is_en(intr_handle)) {
+ if (rte_intr_vec_list_alloc(intr_handle, "intr_vec",
+ dev->data->nb_rx_queues)) {
PMD_INIT_LOG(ERR, "Failed to allocate %d rx_queues"
" intr_vec", dev->data->nb_rx_queues);
return -ENOMEM;
@@ -2835,7 +2833,7 @@ ixgbe_dev_stop(struct rte_eth_dev *dev)
struct ixgbe_vf_info *vfinfo =
*IXGBE_DEV_PRIVATE_TO_P_VFDATA(dev->data->dev_private);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
int vf;
struct ixgbe_tm_conf *tm_conf =
IXGBE_DEV_PRIVATE_TO_TM_CONF(dev->data->dev_private);
@@ -2886,10 +2884,7 @@ ixgbe_dev_stop(struct rte_eth_dev *dev)
/* Clean datapath event and queue/vec mapping */
rte_intr_efd_disable(intr_handle);
- if (intr_handle->intr_vec != NULL) {
- rte_free(intr_handle->intr_vec);
- intr_handle->intr_vec = NULL;
- }
+ rte_intr_vec_list_free(intr_handle);
/* reset hierarchy commit */
tm_conf->committed = false;
@@ -2973,7 +2968,7 @@ ixgbe_dev_close(struct rte_eth_dev *dev)
struct ixgbe_hw *hw =
IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
int retries = 0;
int ret;
@@ -4619,7 +4614,7 @@ ixgbe_dev_interrupt_delayed_handler(void *param)
{
struct rte_eth_dev *dev = (struct rte_eth_dev *)param;
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
struct ixgbe_interrupt *intr =
IXGBE_DEV_PRIVATE_TO_INTR(dev->data->dev_private);
struct ixgbe_hw *hw =
@@ -5291,7 +5286,7 @@ ixgbevf_dev_start(struct rte_eth_dev *dev)
IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
uint32_t intr_vector = 0;
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
int err, mask = 0;
@@ -5354,11 +5349,9 @@ ixgbevf_dev_start(struct rte_eth_dev *dev)
}
}
- if (rte_intr_dp_is_en(intr_handle) && !intr_handle->intr_vec) {
- intr_handle->intr_vec =
- rte_zmalloc("intr_vec",
- dev->data->nb_rx_queues * sizeof(int), 0);
- if (intr_handle->intr_vec == NULL) {
+ if (rte_intr_dp_is_en(intr_handle)) {
+ if (rte_intr_vec_list_alloc(intr_handle, "intr_vec",
+ dev->data->nb_rx_queues)) {
PMD_INIT_LOG(ERR, "Failed to allocate %d rx_queues"
" intr_vec", dev->data->nb_rx_queues);
ixgbe_dev_clear_queues(dev);
@@ -5398,7 +5391,7 @@ ixgbevf_dev_stop(struct rte_eth_dev *dev)
struct ixgbe_hw *hw = IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
struct ixgbe_adapter *adapter = dev->data->dev_private;
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
if (hw->adapter_stopped)
return 0;
@@ -5426,10 +5419,7 @@ ixgbevf_dev_stop(struct rte_eth_dev *dev)
/* Clean datapath event and queue/vec mapping */
rte_intr_efd_disable(intr_handle);
- if (intr_handle->intr_vec != NULL) {
- rte_free(intr_handle->intr_vec);
- intr_handle->intr_vec = NULL;
- }
+ rte_intr_vec_list_free(intr_handle);
adapter->rss_reta_updated = 0;
@@ -5441,7 +5431,7 @@ ixgbevf_dev_close(struct rte_eth_dev *dev)
{
struct ixgbe_hw *hw = IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
int ret;
PMD_INIT_FUNC_TRACE();
@@ -5739,7 +5729,7 @@ static int
ixgbevf_dev_rx_queue_intr_enable(struct rte_eth_dev *dev, uint16_t queue_id)
{
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
struct ixgbe_interrupt *intr =
IXGBE_DEV_PRIVATE_TO_INTR(dev->data->dev_private);
struct ixgbe_hw *hw =
@@ -5765,7 +5755,7 @@ ixgbevf_dev_rx_queue_intr_disable(struct rte_eth_dev *dev, uint16_t queue_id)
struct ixgbe_hw *hw =
IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
uint32_t vec = IXGBE_MISC_VEC_ID;
if (rte_intr_allow_others(intr_handle))
@@ -5781,7 +5771,7 @@ static int
ixgbe_dev_rx_queue_intr_enable(struct rte_eth_dev *dev, uint16_t queue_id)
{
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
uint32_t mask;
struct ixgbe_hw *hw =
IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
@@ -5908,7 +5898,7 @@ static void
ixgbevf_configure_msix(struct rte_eth_dev *dev)
{
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
struct ixgbe_hw *hw =
IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
uint32_t q_idx;
@@ -5935,8 +5925,10 @@ ixgbevf_configure_msix(struct rte_eth_dev *dev)
* as IXGBE_VF_MAXMSIVECOTR = 1
*/
ixgbevf_set_ivar_map(hw, 0, q_idx, vector_idx);
- intr_handle->intr_vec[q_idx] = vector_idx;
- if (vector_idx < base + intr_handle->nb_efd - 1)
+ rte_intr_vec_list_index_set(intr_handle, q_idx,
+ vector_idx);
+ if (vector_idx < base + rte_intr_nb_efd_get(intr_handle)
+ - 1)
vector_idx++;
}
@@ -5957,7 +5949,7 @@ static void
ixgbe_configure_msix(struct rte_eth_dev *dev)
{
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
struct ixgbe_hw *hw =
IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
uint32_t queue_id, base = IXGBE_MISC_VEC_ID;
@@ -6001,8 +5993,10 @@ ixgbe_configure_msix(struct rte_eth_dev *dev)
queue_id++) {
/* by default, 1:1 mapping */
ixgbe_set_ivar_map(hw, 0, queue_id, vec);
- intr_handle->intr_vec[queue_id] = vec;
- if (vec < base + intr_handle->nb_efd - 1)
+ rte_intr_vec_list_index_set(intr_handle,
+ queue_id, vec);
+ if (vec < base + rte_intr_nb_efd_get(intr_handle)
+ - 1)
vec++;
}
diff --git a/drivers/net/memif/memif_socket.c b/drivers/net/memif/memif_socket.c
index 364e818d65..af65896993 100644
--- a/drivers/net/memif/memif_socket.c
+++ b/drivers/net/memif/memif_socket.c
@@ -65,7 +65,8 @@ memif_msg_send_from_queue(struct memif_control_channel *cc)
if (e == NULL)
return 0;
- size = memif_msg_send(cc->intr_handle.fd, &e->msg, e->fd);
+ size = memif_msg_send(rte_intr_fd_get(cc->intr_handle), &e->msg,
+ e->fd);
if (size != sizeof(memif_msg_t)) {
MIF_LOG(ERR, "sendmsg fail: %s.", strerror(errno));
ret = -1;
@@ -317,7 +318,9 @@ memif_msg_receive_add_ring(struct rte_eth_dev *dev, memif_msg_t *msg, int fd)
mq = (ar->flags & MEMIF_MSG_ADD_RING_FLAG_C2S) ?
dev->data->rx_queues[ar->index] : dev->data->tx_queues[ar->index];
- mq->intr_handle.fd = fd;
+ if (rte_intr_fd_set(mq->intr_handle, fd))
+ return -1;
+
mq->log2_ring_size = ar->log2_ring_size;
mq->region = ar->region;
mq->ring_offset = ar->offset;
@@ -453,7 +456,7 @@ memif_msg_enq_add_ring(struct rte_eth_dev *dev, uint8_t idx,
dev->data->rx_queues[idx];
e->msg.type = MEMIF_MSG_TYPE_ADD_RING;
- e->fd = mq->intr_handle.fd;
+ e->fd = rte_intr_fd_get(mq->intr_handle);
ar->index = idx;
ar->offset = mq->ring_offset;
ar->region = mq->region;
@@ -505,12 +508,13 @@ memif_intr_unregister_handler(struct rte_intr_handle *intr_handle, void *arg)
struct memif_control_channel *cc = arg;
/* close control channel fd */
- close(intr_handle->fd);
+ close(rte_intr_fd_get(intr_handle));
/* clear message queue */
while ((elt = TAILQ_FIRST(&cc->msg_queue)) != NULL) {
TAILQ_REMOVE(&cc->msg_queue, elt, next);
rte_free(elt);
}
+ rte_intr_instance_free(cc->intr_handle);
/* free control channel */
rte_free(cc);
}
@@ -548,8 +552,8 @@ memif_disconnect(struct rte_eth_dev *dev)
"Unexpected message(s) in message queue.");
}
- ih = &pmd->cc->intr_handle;
- if (ih->fd > 0) {
+ ih = pmd->cc->intr_handle;
+ if (rte_intr_fd_get(ih) > 0) {
ret = rte_intr_callback_unregister(ih,
memif_intr_handler,
pmd->cc);
@@ -563,7 +567,8 @@ memif_disconnect(struct rte_eth_dev *dev)
pmd->cc,
memif_intr_unregister_handler);
} else if (ret > 0) {
- close(ih->fd);
+ close(rte_intr_fd_get(ih));
+ rte_intr_instance_free(ih);
rte_free(pmd->cc);
}
pmd->cc = NULL;
@@ -587,9 +592,10 @@ memif_disconnect(struct rte_eth_dev *dev)
else
continue;
}
- if (mq->intr_handle.fd > 0) {
- close(mq->intr_handle.fd);
- mq->intr_handle.fd = -1;
+
+ if (rte_intr_fd_get(mq->intr_handle) > 0) {
+ close(rte_intr_fd_get(mq->intr_handle));
+ rte_intr_fd_set(mq->intr_handle, -1);
}
}
for (i = 0; i < pmd->cfg.num_s2c_rings; i++) {
@@ -604,9 +610,10 @@ memif_disconnect(struct rte_eth_dev *dev)
else
continue;
}
- if (mq->intr_handle.fd > 0) {
- close(mq->intr_handle.fd);
- mq->intr_handle.fd = -1;
+
+ if (rte_intr_fd_get(mq->intr_handle) > 0) {
+ close(rte_intr_fd_get(mq->intr_handle));
+ rte_intr_fd_set(mq->intr_handle, -1);
}
}
@@ -644,7 +651,7 @@ memif_msg_receive(struct memif_control_channel *cc)
mh.msg_control = ctl;
mh.msg_controllen = sizeof(ctl);
- size = recvmsg(cc->intr_handle.fd, &mh, 0);
+ size = recvmsg(rte_intr_fd_get(cc->intr_handle), &mh, 0);
if (size != sizeof(memif_msg_t)) {
MIF_LOG(DEBUG, "Invalid message size = %zd", size);
if (size > 0)
@@ -774,7 +781,7 @@ memif_intr_handler(void *arg)
/* if driver failed to assign device */
if (cc->dev == NULL) {
memif_msg_send_from_queue(cc);
- ret = rte_intr_callback_unregister_pending(&cc->intr_handle,
+ ret = rte_intr_callback_unregister_pending(cc->intr_handle,
memif_intr_handler,
cc,
memif_intr_unregister_handler);
@@ -812,12 +819,12 @@ memif_listener_handler(void *arg)
int ret;
addr_len = sizeof(client);
- sockfd = accept(socket->intr_handle.fd, (struct sockaddr *)&client,
- (socklen_t *)&addr_len);
+ sockfd = accept(rte_intr_fd_get(socket->intr_handle),
+ (struct sockaddr *)&client, (socklen_t *)&addr_len);
if (sockfd < 0) {
MIF_LOG(ERR,
"Failed to accept connection request on socket fd %d",
- socket->intr_handle.fd);
+ rte_intr_fd_get(socket->intr_handle));
return;
}
@@ -829,13 +836,26 @@ memif_listener_handler(void *arg)
goto error;
}
- cc->intr_handle.fd = sockfd;
- cc->intr_handle.type = RTE_INTR_HANDLE_EXT;
+ /* Allocate interrupt instance */
+ cc->intr_handle =
+ rte_intr_instance_alloc(RTE_INTR_INSTANCE_F_SHARED);
+ if (!cc->intr_handle) {
+ MIF_LOG(ERR, "Failed to allocate intr handle");
+ goto error;
+ }
+
+ if (rte_intr_fd_set(cc->intr_handle, sockfd))
+ goto error;
+
+ if (rte_intr_type_set(cc->intr_handle, RTE_INTR_HANDLE_EXT))
+ goto error;
+
cc->socket = socket;
cc->dev = NULL;
TAILQ_INIT(&cc->msg_queue);
- ret = rte_intr_callback_register(&cc->intr_handle, memif_intr_handler, cc);
+ ret = rte_intr_callback_register(cc->intr_handle, memif_intr_handler,
+ cc);
if (ret < 0) {
MIF_LOG(ERR, "Failed to register control channel callback.");
goto error;
@@ -857,8 +877,11 @@ memif_listener_handler(void *arg)
close(sockfd);
sockfd = -1;
}
- if (cc != NULL)
+ if (cc != NULL) {
+ if (cc->intr_handle)
+ rte_intr_instance_free(cc->intr_handle);
rte_free(cc);
+ }
}
static struct memif_socket *
@@ -914,9 +937,22 @@ memif_socket_create(char *key, uint8_t listener, bool is_abstract)
MIF_LOG(DEBUG, "Memif listener socket %s created.", sock->filename);
- sock->intr_handle.fd = sockfd;
- sock->intr_handle.type = RTE_INTR_HANDLE_EXT;
- ret = rte_intr_callback_register(&sock->intr_handle,
+ /* Allocate interrupt instance */
+ sock->intr_handle =
+ rte_intr_instance_alloc(RTE_INTR_INSTANCE_F_SHARED);
+ if (!sock->intr_handle) {
+ MIF_LOG(ERR, "Failed to allocate intr handle");
+ goto error;
+ }
+
+ if (rte_intr_fd_set(sock->intr_handle, sockfd))
+ goto error;
+
+ if (rte_intr_type_set(sock->intr_handle,
+ RTE_INTR_HANDLE_EXT))
+ goto error;
+
+ ret = rte_intr_callback_register(sock->intr_handle,
memif_listener_handler, sock);
if (ret < 0) {
MIF_LOG(ERR, "Failed to register interrupt "
@@ -929,8 +965,10 @@ memif_socket_create(char *key, uint8_t listener, bool is_abstract)
error:
MIF_LOG(ERR, "Failed to setup socket %s: %s", key, strerror(errno));
- if (sock != NULL)
+ if (sock != NULL) {
+ rte_intr_instance_free(sock->intr_handle);
rte_free(sock);
+ }
if (sockfd >= 0)
close(sockfd);
return NULL;
@@ -1047,6 +1085,8 @@ memif_socket_remove_device(struct rte_eth_dev *dev)
MIF_LOG(ERR, "Failed to remove socket file: %s",
socket->filename);
}
+ if (pmd->role != MEMIF_ROLE_CLIENT)
+ rte_intr_instance_free(socket->intr_handle);
rte_free(socket);
}
}
@@ -1109,13 +1149,25 @@ memif_connect_client(struct rte_eth_dev *dev)
goto error;
}
- pmd->cc->intr_handle.fd = sockfd;
- pmd->cc->intr_handle.type = RTE_INTR_HANDLE_EXT;
+ /* Allocate interrupt instance */
+ pmd->cc->intr_handle =
+ rte_intr_instance_alloc(RTE_INTR_INSTANCE_F_SHARED);
+ if (!pmd->cc->intr_handle) {
+ MIF_LOG(ERR, "Failed to allocate intr handle");
+ goto error;
+ }
+
+ if (rte_intr_fd_set(pmd->cc->intr_handle, sockfd))
+ goto error;
+
+ if (rte_intr_type_set(pmd->cc->intr_handle, RTE_INTR_HANDLE_EXT))
+ goto error;
+
pmd->cc->socket = NULL;
pmd->cc->dev = dev;
TAILQ_INIT(&pmd->cc->msg_queue);
- ret = rte_intr_callback_register(&pmd->cc->intr_handle,
+ ret = rte_intr_callback_register(pmd->cc->intr_handle,
memif_intr_handler, pmd->cc);
if (ret < 0) {
MIF_LOG(ERR, "Failed to register interrupt callback for control fd");
@@ -1130,6 +1182,7 @@ memif_connect_client(struct rte_eth_dev *dev)
sockfd = -1;
}
if (pmd->cc != NULL) {
+ rte_intr_instance_free(pmd->cc->intr_handle);
rte_free(pmd->cc);
pmd->cc = NULL;
}
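memif is one of the drivers that embedded struct rte_intr_handle inside its own structures (control channel, listener socket); those fields become pointers and every owner now allocates and frees its instance explicitly, including on the error paths added above. The setup reduces to the following shape; RTE_INTR_INSTANCE_F_SHARED matches what the memif hunks request, while mlx5's socket handler further down asks for RTE_INTR_INSTANCE_F_UNSHARED:

#include <rte_interrupts.h>

/* Sketch: wire an external fd (unix socket, eventfd, ...) to a newly
 * allocated interrupt handle instance. sockfd, cb and cb_arg are
 * assumed to be provided by the caller.
 */
static struct rte_intr_handle *
pmd_attach_ctl_fd(int sockfd, rte_intr_callback_fn cb, void *cb_arg)
{
	struct rte_intr_handle *ih;

	ih = rte_intr_instance_alloc(RTE_INTR_INSTANCE_F_SHARED);
	if (ih == NULL)
		return NULL;

	if (rte_intr_fd_set(ih, sockfd) ||
	    rte_intr_type_set(ih, RTE_INTR_HANDLE_EXT) ||
	    rte_intr_callback_register(ih, cb, cb_arg) < 0) {
		/* The owner frees the instance it allocated. */
		rte_intr_instance_free(ih);
		return NULL;
	}
	return ih;
}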
diff --git a/drivers/net/memif/memif_socket.h b/drivers/net/memif/memif_socket.h
index b9b8a15178..b0decbb0a2 100644
--- a/drivers/net/memif/memif_socket.h
+++ b/drivers/net/memif/memif_socket.h
@@ -85,7 +85,7 @@ struct memif_socket_dev_list_elt {
(sizeof(struct sockaddr_un) - offsetof(struct sockaddr_un, sun_path))
struct memif_socket {
- struct rte_intr_handle intr_handle; /**< interrupt handle */
+ struct rte_intr_handle *intr_handle; /**< interrupt handle */
char filename[MEMIF_SOCKET_UN_SIZE]; /**< socket filename */
TAILQ_HEAD(, memif_socket_dev_list_elt) dev_queue;
@@ -101,7 +101,7 @@ struct memif_msg_queue_elt {
};
struct memif_control_channel {
- struct rte_intr_handle intr_handle; /**< interrupt handle */
+ struct rte_intr_handle *intr_handle; /**< interrupt handle */
TAILQ_HEAD(, memif_msg_queue_elt) msg_queue; /**< control message queue */
struct memif_socket *socket; /**< pointer to socket */
struct rte_eth_dev *dev; /**< pointer to device */
diff --git a/drivers/net/memif/rte_eth_memif.c b/drivers/net/memif/rte_eth_memif.c
index 980150293e..221dc84e5c 100644
--- a/drivers/net/memif/rte_eth_memif.c
+++ b/drivers/net/memif/rte_eth_memif.c
@@ -326,7 +326,8 @@ eth_memif_rx(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
/* consume interrupt */
if ((ring->flags & MEMIF_RING_FLAG_MASK_INT) == 0)
- size = read(mq->intr_handle.fd, &b, sizeof(b));
+ size = read(rte_intr_fd_get(mq->intr_handle), &b,
+ sizeof(b));
ring_size = 1 << mq->log2_ring_size;
mask = ring_size - 1;
@@ -462,7 +463,8 @@ eth_memif_rx_zc(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
if ((ring->flags & MEMIF_RING_FLAG_MASK_INT) == 0) {
uint64_t b;
ssize_t size __rte_unused;
- size = read(mq->intr_handle.fd, &b, sizeof(b));
+ size = read(rte_intr_fd_get(mq->intr_handle), &b,
+ sizeof(b));
}
ring_size = 1 << mq->log2_ring_size;
@@ -680,7 +682,8 @@ eth_memif_tx(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
if ((ring->flags & MEMIF_RING_FLAG_MASK_INT) == 0) {
a = 1;
- size = write(mq->intr_handle.fd, &a, sizeof(a));
+ size = write(rte_intr_fd_get(mq->intr_handle), &a,
+ sizeof(a));
if (unlikely(size < 0)) {
MIF_LOG(WARNING,
"Failed to send interrupt. %s", strerror(errno));
@@ -832,7 +835,8 @@ eth_memif_tx_zc(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
/* Send interrupt, if enabled. */
if ((ring->flags & MEMIF_RING_FLAG_MASK_INT) == 0) {
uint64_t a = 1;
- ssize_t size = write(mq->intr_handle.fd, &a, sizeof(a));
+ ssize_t size = write(rte_intr_fd_get(mq->intr_handle),
+ &a, sizeof(a));
if (unlikely(size < 0)) {
MIF_LOG(WARNING,
"Failed to send interrupt. %s", strerror(errno));
@@ -1092,8 +1096,11 @@ memif_init_queues(struct rte_eth_dev *dev)
mq->ring_offset = memif_get_ring_offset(dev, mq, MEMIF_RING_C2S, i);
mq->last_head = 0;
mq->last_tail = 0;
- mq->intr_handle.fd = eventfd(0, EFD_NONBLOCK);
- if (mq->intr_handle.fd < 0) {
+ if (rte_intr_fd_set(mq->intr_handle,
+ eventfd(0, EFD_NONBLOCK)))
+ return -rte_errno;
+
+ if (rte_intr_fd_get(mq->intr_handle) < 0) {
MIF_LOG(WARNING,
"Failed to create eventfd for tx queue %d: %s.", i,
strerror(errno));
@@ -1115,8 +1122,11 @@ memif_init_queues(struct rte_eth_dev *dev)
mq->ring_offset = memif_get_ring_offset(dev, mq, MEMIF_RING_S2C, i);
mq->last_head = 0;
mq->last_tail = 0;
- mq->intr_handle.fd = eventfd(0, EFD_NONBLOCK);
- if (mq->intr_handle.fd < 0) {
+ if (rte_intr_fd_set(mq->intr_handle,
+ eventfd(0, EFD_NONBLOCK)))
+ return -rte_errno;
+
+ if (rte_intr_fd_get(mq->intr_handle) < 0) {
MIF_LOG(WARNING,
"Failed to create eventfd for rx queue %d: %s.", i,
strerror(errno));
@@ -1310,12 +1320,25 @@ memif_tx_queue_setup(struct rte_eth_dev *dev,
return -ENOMEM;
}
+ /* Allocate interrupt instance */
+ mq->intr_handle =
+ rte_intr_instance_alloc(RTE_INTR_INSTANCE_F_SHARED);
+ if (!mq->intr_handle) {
+ MIF_LOG(ERR, "Failed to allocate intr handle");
+ return -ENOMEM;
+ }
+
mq->type =
(pmd->role == MEMIF_ROLE_CLIENT) ? MEMIF_RING_C2S : MEMIF_RING_S2C;
mq->n_pkts = 0;
mq->n_bytes = 0;
- mq->intr_handle.fd = -1;
- mq->intr_handle.type = RTE_INTR_HANDLE_EXT;
+
+ if (rte_intr_fd_set(mq->intr_handle, -1))
+ return -rte_errno;
+
+ if (rte_intr_type_set(mq->intr_handle, RTE_INTR_HANDLE_EXT))
+ return -rte_errno;
+
mq->in_port = dev->data->port_id;
dev->data->tx_queues[qid] = mq;
@@ -1339,11 +1362,24 @@ memif_rx_queue_setup(struct rte_eth_dev *dev,
return -ENOMEM;
}
+ /* Allocate interrupt instance */
+ mq->intr_handle =
+ rte_intr_instance_alloc(RTE_INTR_INSTANCE_F_SHARED);
+ if (!mq->intr_handle) {
+ MIF_LOG(ERR, "Failed to allocate intr handle");
+ return -ENOMEM;
+ }
+
mq->type = (pmd->role == MEMIF_ROLE_CLIENT) ? MEMIF_RING_S2C : MEMIF_RING_C2S;
mq->n_pkts = 0;
mq->n_bytes = 0;
- mq->intr_handle.fd = -1;
- mq->intr_handle.type = RTE_INTR_HANDLE_EXT;
+
+ if (rte_intr_fd_set(mq->intr_handle, -1))
+ return -rte_errno;
+
+ if (rte_intr_type_set(mq->intr_handle, RTE_INTR_HANDLE_EXT))
+ return -rte_errno;
+
mq->mempool = mb_pool;
mq->in_port = dev->data->port_id;
dev->data->rx_queues[qid] = mq;
@@ -1359,6 +1395,7 @@ memif_rx_queue_release(struct rte_eth_dev *dev, uint16_t qid)
if (!mq)
return;
+ rte_intr_instance_free(mq->intr_handle);
rte_free(mq);
}
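In the memif datapath the queue eventfd is read back through rte_intr_fd_get() rather than from a struct member, and setup stores the eventfd() result with rte_intr_fd_set(), checking both the setter and the resulting fd as the hunks above do. A compressed sketch:

#include <stdint.h>
#include <unistd.h>
#include <sys/eventfd.h>
#include <rte_errno.h>
#include <rte_interrupts.h>

/* Sketch: create and later drain a per-queue eventfd behind the
 * accessors; the queue context is omitted.
 */
static int
queue_eventfd_init(struct rte_intr_handle *ih)
{
	if (rte_intr_fd_set(ih, eventfd(0, EFD_NONBLOCK)))
		return -rte_errno;
	if (rte_intr_fd_get(ih) < 0)
		return -1;	/* eventfd() itself failed */
	return 0;
}

static void
queue_interrupt_consume(struct rte_intr_handle *ih)
{
	uint64_t b;

	/* Was: read(mq->intr_handle.fd, &b, sizeof(b)); */
	if (read(rte_intr_fd_get(ih), &b, sizeof(b)) < 0)
		return;	/* nothing pending or transient error; ignore */
}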
diff --git a/drivers/net/memif/rte_eth_memif.h b/drivers/net/memif/rte_eth_memif.h
index 2038bda742..a5ee23d42e 100644
--- a/drivers/net/memif/rte_eth_memif.h
+++ b/drivers/net/memif/rte_eth_memif.h
@@ -68,7 +68,7 @@ struct memif_queue {
uint64_t n_pkts; /**< number of rx/tx packets */
uint64_t n_bytes; /**< number of rx/tx bytes */
- struct rte_intr_handle intr_handle; /**< interrupt handle */
+ struct rte_intr_handle *intr_handle; /**< interrupt handle */
memif_log2_ring_size_t log2_ring_size; /**< log2 of ring size */
};
diff --git a/drivers/net/mlx4/mlx4.c b/drivers/net/mlx4/mlx4.c
index f7fe831d61..3e2ceb1fd3 100644
--- a/drivers/net/mlx4/mlx4.c
+++ b/drivers/net/mlx4/mlx4.c
@@ -1042,9 +1042,19 @@ mlx4_pci_probe(struct rte_pci_driver *pci_drv, struct rte_pci_device *pci_dev)
rte_eth_copy_pci_info(eth_dev, pci_dev);
eth_dev->data->dev_flags |= RTE_ETH_DEV_AUTOFILL_QUEUE_XSTATS;
/* Initialize local interrupt handle for current port. */
- memset(&priv->intr_handle, 0, sizeof(struct rte_intr_handle));
- priv->intr_handle.fd = -1;
- priv->intr_handle.type = RTE_INTR_HANDLE_EXT;
+ priv->intr_handle =
+ rte_intr_instance_alloc(RTE_INTR_INSTANCE_F_SHARED);
+ if (!priv->intr_handle) {
+ RTE_LOG(ERR, EAL, "Fail to allocate intr_handle\n");
+ goto port_error;
+ }
+
+ if (rte_intr_fd_set(priv->intr_handle, -1))
+ goto port_error;
+
+ if (rte_intr_type_set(priv->intr_handle,
+ RTE_INTR_HANDLE_EXT))
+ goto port_error;
/*
* Override ethdev interrupt handle pointer with private
* handle instead of that of the parent PCI device used by
@@ -1057,7 +1067,7 @@ mlx4_pci_probe(struct rte_pci_driver *pci_drv, struct rte_pci_device *pci_dev)
* besides setting up eth_dev->intr_handle, the rest is
* handled by rte_intr_rx_ctl().
*/
- eth_dev->intr_handle = &priv->intr_handle;
+ eth_dev->intr_handle = priv->intr_handle;
priv->dev_data = eth_dev->data;
eth_dev->dev_ops = &mlx4_dev_ops;
#ifdef HAVE_IBV_MLX4_BUF_ALLOCATORS
@@ -1102,6 +1112,7 @@ mlx4_pci_probe(struct rte_pci_driver *pci_drv, struct rte_pci_device *pci_dev)
prev_dev = eth_dev;
continue;
port_error:
+ rte_intr_instance_free(priv->intr_handle);
rte_free(priv);
if (eth_dev != NULL)
eth_dev->data->dev_private = NULL;
diff --git a/drivers/net/mlx4/mlx4.h b/drivers/net/mlx4/mlx4.h
index e07b1d2386..2d0c512f79 100644
--- a/drivers/net/mlx4/mlx4.h
+++ b/drivers/net/mlx4/mlx4.h
@@ -176,7 +176,7 @@ struct mlx4_priv {
uint32_t tso_max_payload_sz; /**< Max supported TSO payload size. */
uint32_t hw_rss_max_qps; /**< Max Rx Queues supported by RSS. */
uint64_t hw_rss_sup; /**< Supported RSS hash fields (Verbs format). */
- struct rte_intr_handle intr_handle; /**< Port interrupt handle. */
+ struct rte_intr_handle *intr_handle; /**< Port interrupt handle. */
struct mlx4_drop *drop; /**< Shared resources for drop flow rules. */
struct {
uint32_t dev_gen; /* Generation number to flush local caches. */
diff --git a/drivers/net/mlx4/mlx4_intr.c b/drivers/net/mlx4/mlx4_intr.c
index d56009c418..8059fb4624 100644
--- a/drivers/net/mlx4/mlx4_intr.c
+++ b/drivers/net/mlx4/mlx4_intr.c
@@ -43,12 +43,12 @@ static int mlx4_link_status_check(struct mlx4_priv *priv);
static void
mlx4_rx_intr_vec_disable(struct mlx4_priv *priv)
{
- struct rte_intr_handle *intr_handle = &priv->intr_handle;
+ struct rte_intr_handle *intr_handle = priv->intr_handle;
rte_intr_free_epoll_fd(intr_handle);
- free(intr_handle->intr_vec);
- intr_handle->nb_efd = 0;
- intr_handle->intr_vec = NULL;
+ rte_intr_vec_list_free(intr_handle);
+
+ rte_intr_nb_efd_set(intr_handle, 0);
}
/**
@@ -67,11 +67,10 @@ mlx4_rx_intr_vec_enable(struct mlx4_priv *priv)
unsigned int rxqs_n = ETH_DEV(priv)->data->nb_rx_queues;
unsigned int n = RTE_MIN(rxqs_n, (uint32_t)RTE_MAX_RXTX_INTR_VEC_ID);
unsigned int count = 0;
- struct rte_intr_handle *intr_handle = &priv->intr_handle;
+ struct rte_intr_handle *intr_handle = priv->intr_handle;
mlx4_rx_intr_vec_disable(priv);
- intr_handle->intr_vec = malloc(n * sizeof(intr_handle->intr_vec[0]));
- if (intr_handle->intr_vec == NULL) {
+ if (rte_intr_vec_list_alloc(intr_handle, NULL, n)) {
rte_errno = ENOMEM;
ERROR("failed to allocate memory for interrupt vector,"
" Rx interrupts will not be supported");
@@ -83,9 +82,9 @@ mlx4_rx_intr_vec_enable(struct mlx4_priv *priv)
/* Skip queues that cannot request interrupts. */
if (!rxq || !rxq->channel) {
/* Use invalid intr_vec[] index to disable entry. */
- intr_handle->intr_vec[i] =
- RTE_INTR_VEC_RXTX_OFFSET +
- RTE_MAX_RXTX_INTR_VEC_ID;
+ if (rte_intr_vec_list_index_set(intr_handle, i,
+ RTE_INTR_VEC_RXTX_OFFSET + RTE_MAX_RXTX_INTR_VEC_ID))
+ return -rte_errno;
continue;
}
if (count >= RTE_MAX_RXTX_INTR_VEC_ID) {
@@ -96,14 +95,22 @@ mlx4_rx_intr_vec_enable(struct mlx4_priv *priv)
mlx4_rx_intr_vec_disable(priv);
return -rte_errno;
}
- intr_handle->intr_vec[i] = RTE_INTR_VEC_RXTX_OFFSET + count;
- intr_handle->efds[count] = rxq->channel->fd;
+
+ if (rte_intr_vec_list_index_set(intr_handle, i,
+ RTE_INTR_VEC_RXTX_OFFSET + count))
+ return -rte_errno;
+
+ if (rte_intr_efds_index_set(intr_handle, count,
+ rxq->channel->fd))
+ return -rte_errno;
+
count++;
}
if (!count)
mlx4_rx_intr_vec_disable(priv);
else
- intr_handle->nb_efd = count;
+ if (rte_intr_nb_efd_set(intr_handle, count))
+ return -rte_errno;
return 0;
}
@@ -254,12 +261,13 @@ mlx4_intr_uninstall(struct mlx4_priv *priv)
{
int err = rte_errno; /* Make sure rte_errno remains unchanged. */
- if (priv->intr_handle.fd != -1) {
- rte_intr_callback_unregister(&priv->intr_handle,
+ if (rte_intr_fd_get(priv->intr_handle) != -1) {
+ rte_intr_callback_unregister(priv->intr_handle,
(void (*)(void *))
mlx4_interrupt_handler,
priv);
- priv->intr_handle.fd = -1;
+ if (rte_intr_fd_set(priv->intr_handle, -1))
+ return -rte_errno;
}
rte_eal_alarm_cancel((void (*)(void *))mlx4_link_status_alarm, priv);
priv->intr_alarm = 0;
@@ -286,8 +294,11 @@ mlx4_intr_install(struct mlx4_priv *priv)
mlx4_intr_uninstall(priv);
if (intr_conf->lsc | intr_conf->rmv) {
- priv->intr_handle.fd = priv->ctx->async_fd;
- rc = rte_intr_callback_register(&priv->intr_handle,
+ if (rte_intr_fd_set(priv->intr_handle,
+ priv->ctx->async_fd))
+ return -rte_errno;
+
+ rc = rte_intr_callback_register(priv->intr_handle,
(void (*)(void *))
mlx4_interrupt_handler,
priv);
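mlx4 fills the event-fd table itself, one completion channel fd per interrupt-capable Rx queue, so it also needs the efds and nb_efd setters; the efds slot index should follow the number of fds actually stored (count), matching the efds[count] assignment being removed. A sketch of that enable loop, with the queue lookup reduced to a stub:

#include <rte_errno.h>
#include <rte_interrupts.h>

/* Placeholder: return the queue's completion channel fd, or -1 when
 * the queue cannot raise interrupts.
 */
static int
pmd_rxq_channel_fd(unsigned int queue)
{
	(void)queue;
	return -1;
}

static int
pmd_rx_intr_vec_enable(struct rte_intr_handle *ih, unsigned int rxqs_n)
{
	unsigned int i, count = 0;

	if (rte_intr_vec_list_alloc(ih, NULL, rxqs_n))
		return -rte_errno;

	for (i = 0; i < rxqs_n; i++) {
		int fd = pmd_rxq_channel_fd(i);

		if (fd < 0) {
			/* Out-of-range vector value disables the entry. */
			if (rte_intr_vec_list_index_set(ih, i,
					RTE_INTR_VEC_RXTX_OFFSET +
					RTE_MAX_RXTX_INTR_VEC_ID))
				return -rte_errno;
			continue;
		}
		if (rte_intr_vec_list_index_set(ih, i,
				RTE_INTR_VEC_RXTX_OFFSET + count) ||
		    rte_intr_efds_index_set(ih, count, fd))
			return -rte_errno;
		count++;
	}
	return rte_intr_nb_efd_set(ih, count) ? -rte_errno : 0;
}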
diff --git a/drivers/net/mlx5/linux/mlx5_os.c b/drivers/net/mlx5/linux/mlx5_os.c
index 54e0ba9f3a..678740874e 100644
--- a/drivers/net/mlx5/linux/mlx5_os.c
+++ b/drivers/net/mlx5/linux/mlx5_os.c
@@ -2503,9 +2503,8 @@ mlx5_os_pci_probe_pf(struct mlx5_common_device *cdev,
*/
if (list[i].info.representor) {
struct rte_intr_handle *intr_handle;
- intr_handle = mlx5_malloc(MLX5_MEM_SYS | MLX5_MEM_ZERO,
- sizeof(*intr_handle), 0,
- SOCKET_ID_ANY);
+ intr_handle =
+ rte_intr_instance_alloc(RTE_INTR_INSTANCE_F_SHARED);
if (!intr_handle) {
DRV_LOG(ERR,
"port %u failed to allocate memory for interrupt handler "
@@ -2670,7 +2669,7 @@ mlx5_os_auxiliary_probe(struct mlx5_common_device *cdev)
if (eth_dev == NULL)
return -rte_errno;
/* Post create. */
- eth_dev->intr_handle = &adev->intr_handle;
+ eth_dev->intr_handle = adev->intr_handle;
if (rte_eal_process_type() == RTE_PROC_PRIMARY) {
eth_dev->data->dev_flags |= RTE_ETH_DEV_INTR_LSC;
eth_dev->data->dev_flags |= RTE_ETH_DEV_INTR_RMV;
@@ -2725,24 +2724,39 @@ mlx5_os_dev_shared_handler_install(struct mlx5_dev_ctx_shared *sh)
int flags;
struct ibv_context *ctx = sh->cdev->ctx;
- sh->intr_handle.fd = -1;
+ sh->intr_handle =
+ rte_intr_instance_alloc(RTE_INTR_INSTANCE_F_SHARED);
+ if (!sh->intr_handle) {
+ DRV_LOG(ERR, "Fail to allocate intr_handle");
+ rte_errno = ENOMEM;
+ return;
+ }
+ rte_intr_fd_set(sh->intr_handle, -1);
+
flags = fcntl(ctx->async_fd, F_GETFL);
ret = fcntl(ctx->async_fd, F_SETFL, flags | O_NONBLOCK);
if (ret) {
DRV_LOG(INFO, "failed to change file descriptor async event"
" queue");
} else {
- sh->intr_handle.fd = ctx->async_fd;
- sh->intr_handle.type = RTE_INTR_HANDLE_EXT;
- if (rte_intr_callback_register(&sh->intr_handle,
+ rte_intr_fd_set(sh->intr_handle, ctx->async_fd);
+ rte_intr_type_set(sh->intr_handle, RTE_INTR_HANDLE_EXT);
+ if (rte_intr_callback_register(sh->intr_handle,
mlx5_dev_interrupt_handler, sh)) {
DRV_LOG(INFO, "Fail to install the shared interrupt.");
- sh->intr_handle.fd = -1;
+ rte_intr_fd_set(sh->intr_handle, -1);
}
}
if (sh->devx) {
#ifdef HAVE_IBV_DEVX_ASYNC
- sh->intr_handle_devx.fd = -1;
+ sh->intr_handle_devx =
+ rte_intr_instance_alloc(RTE_INTR_INSTANCE_F_SHARED);
+ if (!sh->intr_handle_devx) {
+ DRV_LOG(ERR, "Fail to allocate intr_handle");
+ rte_errno = ENOMEM;
+ return;
+ }
+ rte_intr_fd_set(sh->intr_handle_devx, -1);
sh->devx_comp = (void *)mlx5_glue->devx_create_cmd_comp(ctx);
struct mlx5dv_devx_cmd_comp *devx_comp = sh->devx_comp;
if (!devx_comp) {
@@ -2756,13 +2770,14 @@ mlx5_os_dev_shared_handler_install(struct mlx5_dev_ctx_shared *sh)
" devx comp");
return;
}
- sh->intr_handle_devx.fd = devx_comp->fd;
- sh->intr_handle_devx.type = RTE_INTR_HANDLE_EXT;
- if (rte_intr_callback_register(&sh->intr_handle_devx,
+ rte_intr_fd_set(sh->intr_handle_devx, devx_comp->fd);
+ rte_intr_type_set(sh->intr_handle_devx,
+ RTE_INTR_HANDLE_EXT);
+ if (rte_intr_callback_register(sh->intr_handle_devx,
mlx5_dev_interrupt_handler_devx, sh)) {
DRV_LOG(INFO, "Fail to install the devx shared"
" interrupt.");
- sh->intr_handle_devx.fd = -1;
+ rte_intr_fd_set(sh->intr_handle_devx, -1);
}
#endif /* HAVE_IBV_DEVX_ASYNC */
}
@@ -2779,13 +2794,15 @@ mlx5_os_dev_shared_handler_install(struct mlx5_dev_ctx_shared *sh)
void
mlx5_os_dev_shared_handler_uninstall(struct mlx5_dev_ctx_shared *sh)
{
- if (sh->intr_handle.fd >= 0)
- mlx5_intr_callback_unregister(&sh->intr_handle,
+ if (rte_intr_fd_get(sh->intr_handle) >= 0)
+ mlx5_intr_callback_unregister(sh->intr_handle,
mlx5_dev_interrupt_handler, sh);
+ rte_intr_instance_free(sh->intr_handle);
#ifdef HAVE_IBV_DEVX_ASYNC
- if (sh->intr_handle_devx.fd >= 0)
- rte_intr_callback_unregister(&sh->intr_handle_devx,
+ if (rte_intr_fd_get(sh->intr_handle_devx) >= 0)
+ rte_intr_callback_unregister(sh->intr_handle_devx,
mlx5_dev_interrupt_handler_devx, sh);
+ rte_intr_instance_free(sh->intr_handle_devx);
if (sh->devx_comp)
mlx5_glue->devx_destroy_cmd_comp(sh->devx_comp);
#endif
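
The mlx5 shared-handler hunks above also show the full life cycle a driver now owns: the handle is obtained from the EAL with rte_intr_instance_alloc(), parked with fd -1 until the event fd is usable, and released with rte_intr_instance_free() on uninstall. A condensed sketch of that flow, error handling trimmed and my_* names as placeholders:

#include <rte_interrupts.h>

static void
my_handler(void *arg)
{
	(void)arg;
}

struct my_shared_ctx {
	struct rte_intr_handle *intr_handle;	/* was an embedded struct */
};

static void
my_shared_handler_install(struct my_shared_ctx *sh, int event_fd)
{
	sh->intr_handle = rte_intr_instance_alloc(RTE_INTR_INSTANCE_F_SHARED);
	if (sh->intr_handle == NULL)
		return;
	/* -1 marks "not armed"; uninstall checks it before unregistering */
	rte_intr_fd_set(sh->intr_handle, -1);

	rte_intr_fd_set(sh->intr_handle, event_fd);
	rte_intr_type_set(sh->intr_handle, RTE_INTR_HANDLE_EXT);
	if (rte_intr_callback_register(sh->intr_handle, my_handler, sh))
		rte_intr_fd_set(sh->intr_handle, -1);
}

static void
my_shared_handler_uninstall(struct my_shared_ctx *sh)
{
	if (sh->intr_handle == NULL)
		return;
	if (rte_intr_fd_get(sh->intr_handle) >= 0)
		rte_intr_callback_unregister(sh->intr_handle, my_handler, sh);
	rte_intr_instance_free(sh->intr_handle);	/* pairs with the alloc above */
	sh->intr_handle = NULL;
}
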
diff --git a/drivers/net/mlx5/linux/mlx5_socket.c b/drivers/net/mlx5/linux/mlx5_socket.c
index 6356b66dc4..df778d8dfc 100644
--- a/drivers/net/mlx5/linux/mlx5_socket.c
+++ b/drivers/net/mlx5/linux/mlx5_socket.c
@@ -23,7 +23,7 @@
#define MLX5_SOCKET_PATH "/var/tmp/dpdk_net_mlx5_%d"
int server_socket; /* Unix socket for primary process. */
-struct rte_intr_handle server_intr_handle; /* Interrupt handler. */
+struct rte_intr_handle *server_intr_handle; /* Interrupt handler. */
/**
* Handle server pmd socket interrupts.
@@ -145,9 +145,19 @@ static int
mlx5_pmd_interrupt_handler_install(void)
{
MLX5_ASSERT(server_socket);
- server_intr_handle.fd = server_socket;
- server_intr_handle.type = RTE_INTR_HANDLE_EXT;
- return rte_intr_callback_register(&server_intr_handle,
+ server_intr_handle =
+ rte_intr_instance_alloc(RTE_INTR_INSTANCE_F_UNSHARED);
+ if (!server_intr_handle) {
+ DRV_LOG(ERR, "Fail to allocate intr_handle");
+ return -ENOMEM;
+ }
+ if (rte_intr_fd_set(server_intr_handle, server_socket))
+ return -1;
+
+ if (rte_intr_type_set(server_intr_handle, RTE_INTR_HANDLE_EXT))
+ return -1;
+
+ return rte_intr_callback_register(server_intr_handle,
mlx5_pmd_socket_handle, NULL);
}
@@ -158,12 +168,13 @@ static void
mlx5_pmd_interrupt_handler_uninstall(void)
{
if (server_socket) {
- mlx5_intr_callback_unregister(&server_intr_handle,
+ mlx5_intr_callback_unregister(server_intr_handle,
mlx5_pmd_socket_handle,
NULL);
}
- server_intr_handle.fd = 0;
- server_intr_handle.type = RTE_INTR_HANDLE_UNKNOWN;
+ rte_intr_fd_set(server_intr_handle, 0);
+ rte_intr_type_set(server_intr_handle, RTE_INTR_HANDLE_UNKNOWN);
+ rte_intr_instance_free(server_intr_handle);
}
/**
diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h
index 6f5a78b249..96bf040594 100644
--- a/drivers/net/mlx5/mlx5.h
+++ b/drivers/net/mlx5/mlx5.h
@@ -996,7 +996,7 @@ struct mlx5_dev_txpp {
uint32_t tick; /* Completion tick duration in nanoseconds. */
uint32_t test; /* Packet pacing test mode. */
int32_t skew; /* Scheduling skew. */
- struct rte_intr_handle intr_handle; /* Periodic interrupt. */
+ struct rte_intr_handle *intr_handle; /* Periodic interrupt. */
void *echan; /* Event Channel. */
struct mlx5_txpp_wq clock_queue; /* Clock Queue. */
struct mlx5_txpp_wq rearm_queue; /* Clock Queue. */
@@ -1154,8 +1154,8 @@ struct mlx5_dev_ctx_shared {
struct mlx5_indexed_pool *ipool[MLX5_IPOOL_MAX];
struct mlx5_indexed_pool *mdh_ipools[MLX5_MAX_MODIFY_NUM];
/* Shared interrupt handler section. */
- struct rte_intr_handle intr_handle; /* Interrupt handler for device. */
- struct rte_intr_handle intr_handle_devx; /* DEVX interrupt handler. */
+ struct rte_intr_handle *intr_handle; /* Interrupt handler for device. */
+ struct rte_intr_handle *intr_handle_devx; /* DEVX interrupt handler. */
void *devx_comp; /* DEVX async comp obj. */
struct mlx5_devx_obj *tis; /* TIS object. */
struct mlx5_devx_obj *td; /* Transport domain. */
diff --git a/drivers/net/mlx5/mlx5_rxq.c b/drivers/net/mlx5/mlx5_rxq.c
index 2940c95df2..1b2e9a0e50 100644
--- a/drivers/net/mlx5/mlx5_rxq.c
+++ b/drivers/net/mlx5/mlx5_rxq.c
@@ -834,10 +834,7 @@ mlx5_rx_intr_vec_enable(struct rte_eth_dev *dev)
if (!dev->data->dev_conf.intr_conf.rxq)
return 0;
mlx5_rx_intr_vec_disable(dev);
- intr_handle->intr_vec = mlx5_malloc(0,
- n * sizeof(intr_handle->intr_vec[0]),
- 0, SOCKET_ID_ANY);
- if (intr_handle->intr_vec == NULL) {
+ if (rte_intr_vec_list_alloc(intr_handle, NULL, n)) {
DRV_LOG(ERR,
"port %u failed to allocate memory for interrupt"
" vector, Rx interrupts will not be supported",
@@ -845,7 +842,10 @@ mlx5_rx_intr_vec_enable(struct rte_eth_dev *dev)
rte_errno = ENOMEM;
return -rte_errno;
}
- intr_handle->type = RTE_INTR_HANDLE_EXT;
+
+ if (rte_intr_type_set(intr_handle, RTE_INTR_HANDLE_EXT))
+ return -rte_errno;
+
for (i = 0; i != n; ++i) {
/* This rxq obj must not be released in this function. */
struct mlx5_rxq_ctrl *rxq_ctrl = mlx5_rxq_get(dev, i);
@@ -856,9 +856,9 @@ mlx5_rx_intr_vec_enable(struct rte_eth_dev *dev)
if (!rxq_obj || (!rxq_obj->ibv_channel &&
!rxq_obj->devx_channel)) {
/* Use invalid intr_vec[] index to disable entry. */
- intr_handle->intr_vec[i] =
- RTE_INTR_VEC_RXTX_OFFSET +
- RTE_MAX_RXTX_INTR_VEC_ID;
+ if (rte_intr_vec_list_index_set(intr_handle, i,
+ RTE_INTR_VEC_RXTX_OFFSET + RTE_MAX_RXTX_INTR_VEC_ID))
+ return -rte_errno;
/* Decrease the rxq_ctrl's refcnt */
if (rxq_ctrl)
mlx5_rxq_release(dev, i);
@@ -885,14 +885,20 @@ mlx5_rx_intr_vec_enable(struct rte_eth_dev *dev)
mlx5_rx_intr_vec_disable(dev);
return -rte_errno;
}
- intr_handle->intr_vec[i] = RTE_INTR_VEC_RXTX_OFFSET + count;
- intr_handle->efds[count] = rxq_obj->fd;
+
+ if (rte_intr_vec_list_index_set(intr_handle, i,
+ RTE_INTR_VEC_RXTX_OFFSET + count))
+ return -rte_errno;
+ if (rte_intr_efds_index_set(intr_handle, count,
+ rxq_obj->fd))
+ return -rte_errno;
count++;
}
if (!count)
mlx5_rx_intr_vec_disable(dev);
else
- intr_handle->nb_efd = count;
+ if (rte_intr_nb_efd_set(intr_handle, count))
+ return -rte_errno;
return 0;
}
@@ -913,11 +919,11 @@ mlx5_rx_intr_vec_disable(struct rte_eth_dev *dev)
if (!dev->data->dev_conf.intr_conf.rxq)
return;
- if (!intr_handle->intr_vec)
+ if (rte_intr_vec_list_index_get(intr_handle, 0) < 0)
goto free;
for (i = 0; i != n; ++i) {
- if (intr_handle->intr_vec[i] == RTE_INTR_VEC_RXTX_OFFSET +
- RTE_MAX_RXTX_INTR_VEC_ID)
+ if (rte_intr_vec_list_index_get(intr_handle, i) ==
+ RTE_INTR_VEC_RXTX_OFFSET + RTE_MAX_RXTX_INTR_VEC_ID)
continue;
/**
* Need to access directly the queue to release the reference
@@ -927,10 +933,10 @@ mlx5_rx_intr_vec_disable(struct rte_eth_dev *dev)
}
free:
rte_intr_free_epoll_fd(intr_handle);
- if (intr_handle->intr_vec)
- mlx5_free(intr_handle->intr_vec);
- intr_handle->nb_efd = 0;
- intr_handle->intr_vec = NULL;
+
+ rte_intr_vec_list_free(intr_handle);
+
+ rte_intr_nb_efd_set(intr_handle, 0);
}
/**
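
The Rx interrupt vector handling rewritten above in mlx5_rxq.c has the same shape used by mlx4, tap, nfp and the other PMDs in this patch: the vector list is allocated through rte_intr_vec_list_alloc() instead of a private malloc, per-queue entries and event fds go through indexed setters, and teardown frees the list instead of the raw array. A condensed sketch of enable/disable, with the queue fds passed in as a plain array for brevity:

#include <rte_errno.h>
#include <rte_interrupts.h>

static int
my_rx_intr_vec_enable(struct rte_intr_handle *intr_handle,
		      const int *queue_fds, unsigned int n)
{
	unsigned int i, count = 0;

	if (rte_intr_vec_list_alloc(intr_handle, NULL, n))
		return -rte_errno;	/* was: intr_handle->intr_vec = malloc(...) */
	if (rte_intr_type_set(intr_handle, RTE_INTR_HANDLE_EXT))
		return -rte_errno;

	for (i = 0; i != n; ++i) {
		if (queue_fds[i] < 0) {
			/* same "invalid index disables the entry" convention */
			if (rte_intr_vec_list_index_set(intr_handle, i,
					RTE_INTR_VEC_RXTX_OFFSET +
					RTE_MAX_RXTX_INTR_VEC_ID))
				return -rte_errno;
			continue;
		}
		if (rte_intr_vec_list_index_set(intr_handle, i,
				RTE_INTR_VEC_RXTX_OFFSET + count))
			return -rte_errno;
		if (rte_intr_efds_index_set(intr_handle, count, queue_fds[i]))
			return -rte_errno;
		count++;
	}
	return rte_intr_nb_efd_set(intr_handle, count);
}

static void
my_rx_intr_vec_disable(struct rte_intr_handle *intr_handle)
{
	rte_intr_free_epoll_fd(intr_handle);
	rte_intr_vec_list_free(intr_handle);	/* replaces free()/rte_free() of intr_vec */
	rte_intr_nb_efd_set(intr_handle, 0);
}
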
diff --git a/drivers/net/mlx5/mlx5_trigger.c b/drivers/net/mlx5/mlx5_trigger.c
index 54c2893437..314b2c4465 100644
--- a/drivers/net/mlx5/mlx5_trigger.c
+++ b/drivers/net/mlx5/mlx5_trigger.c
@@ -1182,7 +1182,7 @@ mlx5_dev_start(struct rte_eth_dev *dev)
dev->rx_pkt_burst = mlx5_select_rx_function(dev);
/* Enable datapath on secondary process. */
mlx5_mp_os_req_start_rxtx(dev);
- if (priv->sh->intr_handle.fd >= 0) {
+ if (rte_intr_fd_get(priv->sh->intr_handle) >= 0) {
priv->sh->port[priv->dev_port - 1].ih_port_id =
(uint32_t)dev->data->port_id;
} else {
@@ -1191,7 +1191,7 @@ mlx5_dev_start(struct rte_eth_dev *dev)
dev->data->dev_conf.intr_conf.lsc = 0;
dev->data->dev_conf.intr_conf.rmv = 0;
}
- if (priv->sh->intr_handle_devx.fd >= 0)
+ if (rte_intr_fd_get(priv->sh->intr_handle_devx) >= 0)
priv->sh->port[priv->dev_port - 1].devx_ih_port_id =
(uint32_t)dev->data->port_id;
return 0;
diff --git a/drivers/net/mlx5/mlx5_txpp.c b/drivers/net/mlx5/mlx5_txpp.c
index 9960cc44e7..6fc8b881f4 100644
--- a/drivers/net/mlx5/mlx5_txpp.c
+++ b/drivers/net/mlx5/mlx5_txpp.c
@@ -759,11 +759,12 @@ mlx5_txpp_interrupt_handler(void *cb_arg)
static void
mlx5_txpp_stop_service(struct mlx5_dev_ctx_shared *sh)
{
- if (!sh->txpp.intr_handle.fd)
+ if (!rte_intr_fd_get(sh->txpp.intr_handle))
return;
- mlx5_intr_callback_unregister(&sh->txpp.intr_handle,
+ mlx5_intr_callback_unregister(sh->txpp.intr_handle,
mlx5_txpp_interrupt_handler, sh);
- sh->txpp.intr_handle.fd = 0;
+ rte_intr_fd_set(sh->txpp.intr_handle, 0);
+ rte_intr_instance_free(sh->txpp.intr_handle);
}
/* Attach interrupt handler and fires first request to Rearm Queue. */
@@ -787,13 +788,22 @@ mlx5_txpp_start_service(struct mlx5_dev_ctx_shared *sh)
rte_errno = errno;
return -rte_errno;
}
- memset(&sh->txpp.intr_handle, 0, sizeof(sh->txpp.intr_handle));
+ sh->txpp.intr_handle =
+ rte_intr_instance_alloc(RTE_INTR_INSTANCE_F_SHARED);
+ if (!sh->txpp.intr_handle) {
+ DRV_LOG(ERR, "Fail to allocate intr_handle");
+ return -ENOMEM;
+ }
fd = mlx5_os_get_devx_channel_fd(sh->txpp.echan);
- sh->txpp.intr_handle.fd = fd;
- sh->txpp.intr_handle.type = RTE_INTR_HANDLE_EXT;
- if (rte_intr_callback_register(&sh->txpp.intr_handle,
+ if (rte_intr_fd_set(sh->txpp.intr_handle, fd))
+ return -rte_errno;
+
+ if (rte_intr_type_set(sh->txpp.intr_handle, RTE_INTR_HANDLE_EXT))
+ return -rte_errno;
+
+ if (rte_intr_callback_register(sh->txpp.intr_handle,
mlx5_txpp_interrupt_handler, sh)) {
- sh->txpp.intr_handle.fd = 0;
+ rte_intr_fd_set(sh->txpp.intr_handle, 0);
DRV_LOG(ERR, "Failed to register CQE interrupt %d.", rte_errno);
return -rte_errno;
}
diff --git a/drivers/net/netvsc/hn_ethdev.c b/drivers/net/netvsc/hn_ethdev.c
index 9e2a405973..521c449429 100644
--- a/drivers/net/netvsc/hn_ethdev.c
+++ b/drivers/net/netvsc/hn_ethdev.c
@@ -133,9 +133,9 @@ eth_dev_vmbus_allocate(struct rte_vmbus_device *dev, size_t private_data_size)
eth_dev->device = &dev->device;
/* interrupt is simulated */
- dev->intr_handle.type = RTE_INTR_HANDLE_EXT;
+ rte_intr_type_set(dev->intr_handle, RTE_INTR_HANDLE_EXT);
eth_dev->data->dev_flags |= RTE_ETH_DEV_INTR_LSC;
- eth_dev->intr_handle = &dev->intr_handle;
+ eth_dev->intr_handle = dev->intr_handle;
return eth_dev;
}
diff --git a/drivers/net/nfp/nfp_common.c b/drivers/net/nfp/nfp_common.c
index 0003fd54dd..50bfa2cf5c 100644
--- a/drivers/net/nfp/nfp_common.c
+++ b/drivers/net/nfp/nfp_common.c
@@ -307,24 +307,21 @@ nfp_configure_rx_interrupt(struct rte_eth_dev *dev,
struct nfp_net_hw *hw;
int i;
- if (!intr_handle->intr_vec) {
- intr_handle->intr_vec =
- rte_zmalloc("intr_vec",
- dev->data->nb_rx_queues * sizeof(int), 0);
- if (!intr_handle->intr_vec) {
- PMD_INIT_LOG(ERR, "Failed to allocate %d rx_queues"
- " intr_vec", dev->data->nb_rx_queues);
- return -ENOMEM;
- }
+ if (rte_intr_vec_list_alloc(intr_handle, "intr_vec",
+ dev->data->nb_rx_queues)) {
+ PMD_INIT_LOG(ERR, "Failed to allocate %d rx_queues"
+ " intr_vec", dev->data->nb_rx_queues);
+ return -ENOMEM;
}
hw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private);
- if (intr_handle->type == RTE_INTR_HANDLE_UIO) {
+ if (rte_intr_type_get(intr_handle) == RTE_INTR_HANDLE_UIO) {
PMD_INIT_LOG(INFO, "VF: enabling RX interrupt with UIO");
/* UIO just supports one queue and no LSC*/
nn_cfg_writeb(hw, NFP_NET_CFG_RXR_VEC(0), 0);
- intr_handle->intr_vec[0] = 0;
+ if (rte_intr_vec_list_index_set(intr_handle, 0, 0))
+ return -1;
} else {
PMD_INIT_LOG(INFO, "VF: enabling RX interrupt with VFIO");
for (i = 0; i < dev->data->nb_rx_queues; i++) {
@@ -333,9 +330,12 @@ nfp_configure_rx_interrupt(struct rte_eth_dev *dev,
* efd interrupts
*/
nn_cfg_writeb(hw, NFP_NET_CFG_RXR_VEC(i), i + 1);
- intr_handle->intr_vec[i] = i + 1;
+ if (rte_intr_vec_list_index_set(intr_handle, i,
+ i + 1))
+ return -1;
PMD_INIT_LOG(DEBUG, "intr_vec[%d]= %d", i,
- intr_handle->intr_vec[i]);
+ rte_intr_vec_list_index_get(intr_handle,
+ i));
}
}
@@ -804,7 +804,8 @@ nfp_rx_queue_intr_enable(struct rte_eth_dev *dev, uint16_t queue_id)
hw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private);
pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- if (pci_dev->intr_handle.type != RTE_INTR_HANDLE_UIO)
+ if (rte_intr_type_get(pci_dev->intr_handle) !=
+ RTE_INTR_HANDLE_UIO)
base = 1;
/* Make sure all updates are written before un-masking */
@@ -824,7 +825,8 @@ nfp_rx_queue_intr_disable(struct rte_eth_dev *dev, uint16_t queue_id)
hw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private);
pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- if (pci_dev->intr_handle.type != RTE_INTR_HANDLE_UIO)
+ if (rte_intr_type_get(pci_dev->intr_handle) !=
+ RTE_INTR_HANDLE_UIO)
base = 1;
/* Make sure all updates are written before un-masking */
@@ -874,7 +876,7 @@ nfp_net_irq_unmask(struct rte_eth_dev *dev)
if (hw->ctrl & NFP_NET_CFG_CTRL_MSIXAUTO) {
/* If MSI-X auto-masking is used, clear the entry */
rte_wmb();
- rte_intr_ack(&pci_dev->intr_handle);
+ rte_intr_ack(pci_dev->intr_handle);
} else {
/* Make sure all updates are written before un-masking */
rte_wmb();
diff --git a/drivers/net/nfp/nfp_ethdev.c b/drivers/net/nfp/nfp_ethdev.c
index 1169ea77a8..fc33bb2ffa 100644
--- a/drivers/net/nfp/nfp_ethdev.c
+++ b/drivers/net/nfp/nfp_ethdev.c
@@ -82,7 +82,7 @@ static int
nfp_net_start(struct rte_eth_dev *dev)
{
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
uint32_t new_ctrl, update = 0;
struct nfp_net_hw *hw;
struct nfp_pf_dev *pf_dev;
@@ -109,12 +109,13 @@ nfp_net_start(struct rte_eth_dev *dev)
"with NFP multiport PF");
return -EINVAL;
}
- if (intr_handle->type == RTE_INTR_HANDLE_UIO) {
+ if (rte_intr_type_get(intr_handle) ==
+ RTE_INTR_HANDLE_UIO) {
/*
* Better not to share LSC with RX interrupts.
* Unregistering LSC interrupt handler
*/
- rte_intr_callback_unregister(&pci_dev->intr_handle,
+ rte_intr_callback_unregister(pci_dev->intr_handle,
nfp_net_dev_interrupt_handler, (void *)dev);
if (dev->data->nb_rx_queues > 1) {
@@ -333,10 +334,10 @@ nfp_net_close(struct rte_eth_dev *dev)
nfp_cpp_free(pf_dev->cpp);
rte_free(pf_dev);
- rte_intr_disable(&pci_dev->intr_handle);
+ rte_intr_disable(pci_dev->intr_handle);
/* unregister callback func from eal lib */
- rte_intr_callback_unregister(&pci_dev->intr_handle,
+ rte_intr_callback_unregister(pci_dev->intr_handle,
nfp_net_dev_interrupt_handler,
(void *)dev);
@@ -579,7 +580,7 @@ nfp_net_init(struct rte_eth_dev *eth_dev)
if (rte_eal_process_type() == RTE_PROC_PRIMARY) {
/* Registering LSC interrupt handler */
- rte_intr_callback_register(&pci_dev->intr_handle,
+ rte_intr_callback_register(pci_dev->intr_handle,
nfp_net_dev_interrupt_handler,
(void *)eth_dev);
/* Telling the firmware about the LSC interrupt entry */
diff --git a/drivers/net/nfp/nfp_ethdev_vf.c b/drivers/net/nfp/nfp_ethdev_vf.c
index 62cb3536e0..9c1db84733 100644
--- a/drivers/net/nfp/nfp_ethdev_vf.c
+++ b/drivers/net/nfp/nfp_ethdev_vf.c
@@ -51,7 +51,7 @@ static int
nfp_netvf_start(struct rte_eth_dev *dev)
{
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
uint32_t new_ctrl, update = 0;
struct nfp_net_hw *hw;
struct rte_eth_conf *dev_conf;
@@ -71,12 +71,13 @@ nfp_netvf_start(struct rte_eth_dev *dev)
/* check and configure queue intr-vector mapping */
if (dev->data->dev_conf.intr_conf.rxq != 0) {
- if (intr_handle->type == RTE_INTR_HANDLE_UIO) {
+ if (rte_intr_type_get(intr_handle) ==
+ RTE_INTR_HANDLE_UIO) {
/*
* Better not to share LSC with RX interrupts.
* Unregistering LSC interrupt handler
*/
- rte_intr_callback_unregister(&pci_dev->intr_handle,
+ rte_intr_callback_unregister(pci_dev->intr_handle,
nfp_net_dev_interrupt_handler, (void *)dev);
if (dev->data->nb_rx_queues > 1) {
@@ -225,10 +226,10 @@ nfp_netvf_close(struct rte_eth_dev *dev)
nfp_net_reset_rx_queue(this_rx_q);
}
- rte_intr_disable(&pci_dev->intr_handle);
+ rte_intr_disable(pci_dev->intr_handle);
/* unregister callback func from eal lib */
- rte_intr_callback_unregister(&pci_dev->intr_handle,
+ rte_intr_callback_unregister(pci_dev->intr_handle,
nfp_net_dev_interrupt_handler,
(void *)dev);
@@ -445,7 +446,7 @@ nfp_netvf_init(struct rte_eth_dev *eth_dev)
if (rte_eal_process_type() == RTE_PROC_PRIMARY) {
/* Registering LSC interrupt handler */
- rte_intr_callback_register(&pci_dev->intr_handle,
+ rte_intr_callback_register(pci_dev->intr_handle,
nfp_net_dev_interrupt_handler,
(void *)eth_dev);
/* Telling the firmware about the LSC interrupt entry */
diff --git a/drivers/net/ngbe/ngbe_ethdev.c b/drivers/net/ngbe/ngbe_ethdev.c
index 3b5c6615ad..4045fbbf00 100644
--- a/drivers/net/ngbe/ngbe_ethdev.c
+++ b/drivers/net/ngbe/ngbe_ethdev.c
@@ -129,7 +129,7 @@ eth_ngbe_dev_init(struct rte_eth_dev *eth_dev, void *init_params __rte_unused)
{
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
struct ngbe_hw *hw = ngbe_dev_hw(eth_dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
const struct rte_memzone *mz;
uint32_t ctrl_ext;
int err;
@@ -334,7 +334,7 @@ ngbe_dev_start(struct rte_eth_dev *dev)
{
struct ngbe_hw *hw = ngbe_dev_hw(dev);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
uint32_t intr_vector = 0;
int err;
bool link_up = false, negotiate = false;
@@ -372,11 +372,9 @@ ngbe_dev_start(struct rte_eth_dev *dev)
return -1;
}
- if (rte_intr_dp_is_en(intr_handle) && intr_handle->intr_vec == NULL) {
- intr_handle->intr_vec =
- rte_zmalloc("intr_vec",
- dev->data->nb_rx_queues * sizeof(int), 0);
- if (intr_handle->intr_vec == NULL) {
+ if (rte_intr_dp_is_en(intr_handle)) {
+ if (rte_intr_vec_list_alloc(intr_handle, "intr_vec",
+ dev->data->nb_rx_queues)) {
PMD_INIT_LOG(ERR,
"Failed to allocate %d rx_queues intr_vec",
dev->data->nb_rx_queues);
@@ -503,7 +501,7 @@ ngbe_dev_stop(struct rte_eth_dev *dev)
struct rte_eth_link link;
struct ngbe_hw *hw = ngbe_dev_hw(dev);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
if (hw->adapter_stopped)
return 0;
@@ -540,10 +538,7 @@ ngbe_dev_stop(struct rte_eth_dev *dev)
/* Clean datapath event and queue/vec mapping */
rte_intr_efd_disable(intr_handle);
- if (intr_handle->intr_vec != NULL) {
- rte_free(intr_handle->intr_vec);
- intr_handle->intr_vec = NULL;
- }
+ rte_intr_vec_list_free(intr_handle);
hw->adapter_stopped = true;
dev->data->dev_started = 0;
@@ -559,7 +554,7 @@ ngbe_dev_close(struct rte_eth_dev *dev)
{
struct ngbe_hw *hw = ngbe_dev_hw(dev);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
int retries = 0;
int ret;
@@ -1093,7 +1088,7 @@ static void
ngbe_configure_msix(struct rte_eth_dev *dev)
{
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
struct ngbe_hw *hw = ngbe_dev_hw(dev);
uint32_t queue_id, base = NGBE_MISC_VEC_ID;
uint32_t vec = NGBE_MISC_VEC_ID;
@@ -1128,8 +1123,10 @@ ngbe_configure_msix(struct rte_eth_dev *dev)
queue_id++) {
/* by default, 1:1 mapping */
ngbe_set_ivar_map(hw, 0, queue_id, vec);
- intr_handle->intr_vec[queue_id] = vec;
- if (vec < base + intr_handle->nb_efd - 1)
+ rte_intr_vec_list_index_set(intr_handle,
+ queue_id, vec);
+ if (vec < base + rte_intr_nb_efd_get(intr_handle)
+ - 1)
vec++;
}
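
The ngbe MSI-X mapping loop above (txgbe and its VF variant get the identical treatment later in this patch) keeps the 1:1 queue-to-vector policy and only swaps the array write and the nb_efd read for accessor calls. Roughly:

#include <stdint.h>
#include <rte_interrupts.h>

static void
my_configure_msix_map(struct rte_intr_handle *intr_handle,
		      uint32_t nb_rx_queues, uint32_t base_vec)
{
	uint32_t queue_id, vec = base_vec;

	for (queue_id = 0; queue_id < nb_rx_queues; queue_id++) {
		/* hardware IVAR programming (ngbe_set_ivar_map()) stays as-is */
		/* was: intr_handle->intr_vec[queue_id] = vec; */
		rte_intr_vec_list_index_set(intr_handle, queue_id, vec);
		/* was: if (vec < base + intr_handle->nb_efd - 1) */
		if (vec < base_vec + rte_intr_nb_efd_get(intr_handle) - 1)
			vec++;
	}
}
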
diff --git a/drivers/net/octeontx2/otx2_ethdev_irq.c b/drivers/net/octeontx2/otx2_ethdev_irq.c
index b121488faf..cc573bb2e8 100644
--- a/drivers/net/octeontx2/otx2_ethdev_irq.c
+++ b/drivers/net/octeontx2/otx2_ethdev_irq.c
@@ -34,7 +34,7 @@ static int
nix_lf_register_err_irq(struct rte_eth_dev *eth_dev)
{
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
- struct rte_intr_handle *handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *handle = pci_dev->intr_handle;
struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
int rc, vec;
@@ -54,7 +54,7 @@ static void
nix_lf_unregister_err_irq(struct rte_eth_dev *eth_dev)
{
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
- struct rte_intr_handle *handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *handle = pci_dev->intr_handle;
struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
int vec;
@@ -90,7 +90,7 @@ static int
nix_lf_register_ras_irq(struct rte_eth_dev *eth_dev)
{
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
- struct rte_intr_handle *handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *handle = pci_dev->intr_handle;
struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
int rc, vec;
@@ -110,7 +110,7 @@ static void
nix_lf_unregister_ras_irq(struct rte_eth_dev *eth_dev)
{
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
- struct rte_intr_handle *handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *handle = pci_dev->intr_handle;
struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
int vec;
@@ -263,7 +263,7 @@ int
oxt2_nix_register_queue_irqs(struct rte_eth_dev *eth_dev)
{
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
- struct rte_intr_handle *handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *handle = pci_dev->intr_handle;
struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
int vec, q, sqs, rqs, qs, rc = 0;
@@ -308,7 +308,7 @@ void
oxt2_nix_unregister_queue_irqs(struct rte_eth_dev *eth_dev)
{
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
- struct rte_intr_handle *handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *handle = pci_dev->intr_handle;
struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
int vec, q;
@@ -332,7 +332,7 @@ int
oxt2_nix_register_cq_irqs(struct rte_eth_dev *eth_dev)
{
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
- struct rte_intr_handle *handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *handle = pci_dev->intr_handle;
struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
uint8_t rc = 0, vec, q;
@@ -362,20 +362,19 @@ oxt2_nix_register_cq_irqs(struct rte_eth_dev *eth_dev)
return rc;
}
- if (!handle->intr_vec) {
- handle->intr_vec = rte_zmalloc("intr_vec",
- dev->configured_cints *
- sizeof(int), 0);
- if (!handle->intr_vec) {
- otx2_err("Failed to allocate %d rx intr_vec",
- dev->configured_cints);
- return -ENOMEM;
- }
+ rc = rte_intr_vec_list_alloc(handle, "intr_vec",
+ dev->configured_cints);
+ if (rc) {
+ otx2_err("Fail to allocate intr vec list, "
+ "rc=%d", rc);
+ return rc;
}
/* VFIO vector zero is reserved for misc interrupt so

* doing required adjustment. (b13bfab4cd)
*/
- handle->intr_vec[q] = RTE_INTR_VEC_RXTX_OFFSET + vec;
+ if (rte_intr_vec_list_index_set(handle, q,
+ RTE_INTR_VEC_RXTX_OFFSET + vec))
+ return -1;
/* Configure CQE interrupt coalescing parameters */
otx2_write64(((CQ_CQE_THRESH_DEFAULT) |
@@ -395,7 +394,7 @@ void
oxt2_nix_unregister_cq_irqs(struct rte_eth_dev *eth_dev)
{
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
- struct rte_intr_handle *handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *handle = pci_dev->intr_handle;
struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
int vec, q;
diff --git a/drivers/net/qede/qede_ethdev.c b/drivers/net/qede/qede_ethdev.c
index 27f6932dc7..013daf1ee1 100644
--- a/drivers/net/qede/qede_ethdev.c
+++ b/drivers/net/qede/qede_ethdev.c
@@ -1569,17 +1569,17 @@ static int qede_dev_close(struct rte_eth_dev *eth_dev)
qdev->ops->common->slowpath_stop(edev);
qdev->ops->common->remove(edev);
- rte_intr_disable(&pci_dev->intr_handle);
+ rte_intr_disable(pci_dev->intr_handle);
- switch (pci_dev->intr_handle.type) {
+ switch (rte_intr_type_get(pci_dev->intr_handle)) {
case RTE_INTR_HANDLE_UIO_INTX:
case RTE_INTR_HANDLE_VFIO_LEGACY:
- rte_intr_callback_unregister(&pci_dev->intr_handle,
+ rte_intr_callback_unregister(pci_dev->intr_handle,
qede_interrupt_handler_intx,
(void *)eth_dev);
break;
default:
- rte_intr_callback_unregister(&pci_dev->intr_handle,
+ rte_intr_callback_unregister(pci_dev->intr_handle,
qede_interrupt_handler,
(void *)eth_dev);
}
@@ -2554,22 +2554,22 @@ static int qede_common_dev_init(struct rte_eth_dev *eth_dev, bool is_vf)
}
qede_update_pf_params(edev);
- switch (pci_dev->intr_handle.type) {
+ switch (rte_intr_type_get(pci_dev->intr_handle)) {
case RTE_INTR_HANDLE_UIO_INTX:
case RTE_INTR_HANDLE_VFIO_LEGACY:
int_mode = ECORE_INT_MODE_INTA;
- rte_intr_callback_register(&pci_dev->intr_handle,
+ rte_intr_callback_register(pci_dev->intr_handle,
qede_interrupt_handler_intx,
(void *)eth_dev);
break;
default:
int_mode = ECORE_INT_MODE_MSIX;
- rte_intr_callback_register(&pci_dev->intr_handle,
+ rte_intr_callback_register(pci_dev->intr_handle,
qede_interrupt_handler,
(void *)eth_dev);
}
- if (rte_intr_enable(&pci_dev->intr_handle)) {
+ if (rte_intr_enable(pci_dev->intr_handle)) {
DP_ERR(edev, "rte_intr_enable() failed\n");
rc = -ENODEV;
goto err;
diff --git a/drivers/net/sfc/sfc_intr.c b/drivers/net/sfc/sfc_intr.c
index 69414fd839..ab67aa9237 100644
--- a/drivers/net/sfc/sfc_intr.c
+++ b/drivers/net/sfc/sfc_intr.c
@@ -79,7 +79,7 @@ sfc_intr_line_handler(void *cb_arg)
if (qmask & (1 << sa->mgmt_evq_index))
sfc_intr_handle_mgmt_evq(sa);
- if (rte_intr_ack(&pci_dev->intr_handle) != 0)
+ if (rte_intr_ack(pci_dev->intr_handle) != 0)
sfc_err(sa, "cannot reenable interrupts");
sfc_log_init(sa, "done");
@@ -123,7 +123,7 @@ sfc_intr_message_handler(void *cb_arg)
sfc_intr_handle_mgmt_evq(sa);
- if (rte_intr_ack(&pci_dev->intr_handle) != 0)
+ if (rte_intr_ack(pci_dev->intr_handle) != 0)
sfc_err(sa, "cannot reenable interrupts");
sfc_log_init(sa, "done");
@@ -159,7 +159,7 @@ sfc_intr_start(struct sfc_adapter *sa)
goto fail_intr_init;
pci_dev = RTE_ETH_DEV_TO_PCI(sa->eth_dev);
- intr_handle = &pci_dev->intr_handle;
+ intr_handle = pci_dev->intr_handle;
if (intr->handler != NULL) {
if (intr->rxq_intr && rte_intr_cap_multiple(intr_handle)) {
@@ -171,16 +171,15 @@ sfc_intr_start(struct sfc_adapter *sa)
goto fail_rte_intr_efd_enable;
}
if (rte_intr_dp_is_en(intr_handle)) {
- intr_handle->intr_vec =
- rte_calloc("intr_vec",
- sa->eth_dev->data->nb_rx_queues, sizeof(int),
- 0);
- if (intr_handle->intr_vec == NULL) {
+ if (rte_intr_vec_list_alloc(intr_handle,
+ "intr_vec",
+ sa->eth_dev->data->nb_rx_queues)) {
sfc_err(sa,
"Failed to allocate %d rx_queues intr_vec",
sa->eth_dev->data->nb_rx_queues);
goto fail_intr_vector_alloc;
}
+
}
sfc_log_init(sa, "rte_intr_callback_register");
@@ -214,16 +213,17 @@ sfc_intr_start(struct sfc_adapter *sa)
efx_intr_enable(sa->nic);
}
- sfc_log_init(sa, "done type=%u max_intr=%d nb_efd=%u vec=%p",
- intr_handle->type, intr_handle->max_intr,
- intr_handle->nb_efd, intr_handle->intr_vec);
+ sfc_log_init(sa, "done type=%u max_intr=%d nb_efd=%u",
+ rte_intr_type_get(intr_handle),
+ rte_intr_max_intr_get(intr_handle),
+ rte_intr_nb_efd_get(intr_handle));
return 0;
fail_rte_intr_enable:
rte_intr_callback_unregister(intr_handle, intr->handler, (void *)sa);
fail_rte_intr_cb_reg:
- rte_free(intr_handle->intr_vec);
+ rte_intr_vec_list_free(intr_handle);
fail_intr_vector_alloc:
rte_intr_efd_disable(intr_handle);
@@ -250,9 +250,9 @@ sfc_intr_stop(struct sfc_adapter *sa)
efx_intr_disable(sa->nic);
- intr_handle = &pci_dev->intr_handle;
+ intr_handle = pci_dev->intr_handle;
- rte_free(intr_handle->intr_vec);
+ rte_intr_vec_list_free(intr_handle);
rte_intr_efd_disable(intr_handle);
if (rte_intr_disable(intr_handle) != 0)
@@ -322,7 +322,7 @@ sfc_intr_attach(struct sfc_adapter *sa)
sfc_log_init(sa, "entry");
- switch (pci_dev->intr_handle.type) {
+ switch (rte_intr_type_get(pci_dev->intr_handle)) {
#ifdef RTE_EXEC_ENV_LINUX
case RTE_INTR_HANDLE_UIO_INTX:
case RTE_INTR_HANDLE_VFIO_LEGACY:
diff --git a/drivers/net/tap/rte_eth_tap.c b/drivers/net/tap/rte_eth_tap.c
index e4f1ad4521..88e67486f3 100644
--- a/drivers/net/tap/rte_eth_tap.c
+++ b/drivers/net/tap/rte_eth_tap.c
@@ -1663,7 +1663,8 @@ tap_dev_intr_handler(void *cb_arg)
struct rte_eth_dev *dev = cb_arg;
struct pmd_internals *pmd = dev->data->dev_private;
- tap_nl_recv(pmd->intr_handle.fd, tap_nl_msg_handler, dev);
+ tap_nl_recv(rte_intr_fd_get(pmd->intr_handle),
+ tap_nl_msg_handler, dev);
}
static int
@@ -1674,22 +1675,23 @@ tap_lsc_intr_handle_set(struct rte_eth_dev *dev, int set)
/* In any case, disable interrupt if the conf is no longer there. */
if (!dev->data->dev_conf.intr_conf.lsc) {
- if (pmd->intr_handle.fd != -1) {
+ if (rte_intr_fd_get(pmd->intr_handle) != -1)
goto clean;
- }
+
return 0;
}
if (set) {
- pmd->intr_handle.fd = tap_nl_init(RTMGRP_LINK);
- if (unlikely(pmd->intr_handle.fd == -1))
+ rte_intr_fd_set(pmd->intr_handle,
+ tap_nl_init(RTMGRP_LINK));
+ if (unlikely(rte_intr_fd_get(pmd->intr_handle) == -1))
return -EBADF;
return rte_intr_callback_register(
- &pmd->intr_handle, tap_dev_intr_handler, dev);
+ pmd->intr_handle, tap_dev_intr_handler, dev);
}
clean:
do {
- ret = rte_intr_callback_unregister(&pmd->intr_handle,
+ ret = rte_intr_callback_unregister(pmd->intr_handle,
tap_dev_intr_handler, dev);
if (ret >= 0) {
break;
@@ -1702,8 +1704,8 @@ tap_lsc_intr_handle_set(struct rte_eth_dev *dev, int set)
}
} while (true);
- tap_nl_final(pmd->intr_handle.fd);
- pmd->intr_handle.fd = -1;
+ tap_nl_final(rte_intr_fd_get(pmd->intr_handle));
+ rte_intr_fd_set(pmd->intr_handle, -1);
return 0;
}
@@ -1918,6 +1920,14 @@ eth_dev_tap_create(struct rte_vdev_device *vdev, const char *tap_name,
goto error_exit;
}
+ /* Allocate interrupt instance */
+ pmd->intr_handle =
+ rte_intr_instance_alloc(RTE_INTR_INSTANCE_F_SHARED);
+ if (!pmd->intr_handle) {
+ TAP_LOG(ERR, "Failed to allocate intr handle");
+ goto error_exit;
+ }
+
/* Setup some default values */
data = dev->data;
data->dev_private = pmd;
@@ -1935,9 +1945,9 @@ eth_dev_tap_create(struct rte_vdev_device *vdev, const char *tap_name,
dev->rx_pkt_burst = pmd_rx_burst;
dev->tx_pkt_burst = pmd_tx_burst;
- pmd->intr_handle.type = RTE_INTR_HANDLE_EXT;
- pmd->intr_handle.fd = -1;
- dev->intr_handle = &pmd->intr_handle;
+ rte_intr_type_set(pmd->intr_handle, RTE_INTR_HANDLE_EXT);
+ rte_intr_fd_set(pmd->intr_handle, -1);
+ dev->intr_handle = pmd->intr_handle;
/* Presetup the fds to -1 as being not valid */
for (i = 0; i < RTE_PMD_TAP_MAX_QUEUES; i++) {
@@ -2088,6 +2098,8 @@ eth_dev_tap_create(struct rte_vdev_device *vdev, const char *tap_name,
/* mac_addrs must not be freed alone because part of dev_private */
dev->data->mac_addrs = NULL;
rte_eth_dev_release_port(dev);
+ if (pmd->intr_handle)
+ rte_intr_instance_free(pmd->intr_handle);
error_exit_nodev:
TAP_LOG(ERR, "%s Unable to initialize %s",
diff --git a/drivers/net/tap/rte_eth_tap.h b/drivers/net/tap/rte_eth_tap.h
index a98ea11a33..996021e424 100644
--- a/drivers/net/tap/rte_eth_tap.h
+++ b/drivers/net/tap/rte_eth_tap.h
@@ -89,7 +89,7 @@ struct pmd_internals {
LIST_HEAD(tap_implicit_flows, rte_flow) implicit_flows;
struct rx_queue rxq[RTE_PMD_TAP_MAX_QUEUES]; /* List of RX queues */
struct tx_queue txq[RTE_PMD_TAP_MAX_QUEUES]; /* List of TX queues */
- struct rte_intr_handle intr_handle; /* LSC interrupt handle. */
+ struct rte_intr_handle *intr_handle; /* LSC interrupt handle. */
int ka_fd; /* keep-alive file descriptor */
struct rte_mempool *gso_ctx_mp; /* Mempool for GSO packets */
};
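
As the rte_eth_tap.h change above shows, private data that used to embed struct rte_intr_handle now only keeps a pointer, which is why the tap create path allocates the instance up front and the error/uninstall paths free it. The pairing, reduced to its essentials (my_* placeholders again):

#include <errno.h>
#include <rte_ethdev.h>
#include <rte_interrupts.h>

struct my_pmd_internals {
	struct rte_intr_handle *intr_handle;	/* was: struct rte_intr_handle */
};

static int
my_dev_create(struct rte_eth_dev *dev, struct my_pmd_internals *pmd)
{
	pmd->intr_handle = rte_intr_instance_alloc(RTE_INTR_INSTANCE_F_SHARED);
	if (pmd->intr_handle == NULL)
		return -ENOMEM;

	rte_intr_type_set(pmd->intr_handle, RTE_INTR_HANDLE_EXT);
	rte_intr_fd_set(pmd->intr_handle, -1);
	dev->intr_handle = pmd->intr_handle;	/* ethdev sees the same instance */
	return 0;
}

static void
my_dev_destroy(struct rte_eth_dev *dev, struct my_pmd_internals *pmd)
{
	rte_intr_instance_free(pmd->intr_handle);
	pmd->intr_handle = NULL;
	dev->intr_handle = NULL;
}

Whether the SHARED or UNSHARED flag is requested differs per driver in this patch (tap and nicvf pass RTE_INTR_INSTANCE_F_SHARED, vhost and virtio_user pass RTE_INTR_INSTANCE_F_UNSHARED); the sketch simply mirrors tap.
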
diff --git a/drivers/net/tap/tap_intr.c b/drivers/net/tap/tap_intr.c
index 1cacc15d9f..ded50ed653 100644
--- a/drivers/net/tap/tap_intr.c
+++ b/drivers/net/tap/tap_intr.c
@@ -29,12 +29,13 @@ static void
tap_rx_intr_vec_uninstall(struct rte_eth_dev *dev)
{
struct pmd_internals *pmd = dev->data->dev_private;
- struct rte_intr_handle *intr_handle = &pmd->intr_handle;
+ struct rte_intr_handle *intr_handle = pmd->intr_handle;
rte_intr_free_epoll_fd(intr_handle);
- free(intr_handle->intr_vec);
- intr_handle->intr_vec = NULL;
- intr_handle->nb_efd = 0;
+ rte_intr_vec_list_free(intr_handle);
+ rte_intr_nb_efd_set(intr_handle, 0);
+
+ rte_intr_instance_free(intr_handle);
}
/**
@@ -52,15 +53,15 @@ tap_rx_intr_vec_install(struct rte_eth_dev *dev)
struct pmd_internals *pmd = dev->data->dev_private;
struct pmd_process_private *process_private = dev->process_private;
unsigned int rxqs_n = pmd->dev->data->nb_rx_queues;
- struct rte_intr_handle *intr_handle = &pmd->intr_handle;
+ struct rte_intr_handle *intr_handle = pmd->intr_handle;
unsigned int n = RTE_MIN(rxqs_n, (uint32_t)RTE_MAX_RXTX_INTR_VEC_ID);
unsigned int i;
unsigned int count = 0;
if (!dev->data->dev_conf.intr_conf.rxq)
return 0;
- intr_handle->intr_vec = malloc(sizeof(int) * rxqs_n);
- if (intr_handle->intr_vec == NULL) {
+
+ if (rte_intr_vec_list_alloc(intr_handle, NULL, rxqs_n)) {
rte_errno = ENOMEM;
TAP_LOG(ERR,
"failed to allocate memory for interrupt vector,"
@@ -73,19 +74,24 @@ tap_rx_intr_vec_install(struct rte_eth_dev *dev)
/* Skip queues that cannot request interrupts. */
if (!rxq || process_private->rxq_fds[i] == -1) {
/* Use invalid intr_vec[] index to disable entry. */
- intr_handle->intr_vec[i] =
- RTE_INTR_VEC_RXTX_OFFSET +
- RTE_MAX_RXTX_INTR_VEC_ID;
+ if (rte_intr_vec_list_index_set(intr_handle, i,
+ RTE_INTR_VEC_RXTX_OFFSET + RTE_MAX_RXTX_INTR_VEC_ID))
+ return -rte_errno;
continue;
}
- intr_handle->intr_vec[i] = RTE_INTR_VEC_RXTX_OFFSET + count;
- intr_handle->efds[count] = process_private->rxq_fds[i];
+ if (rte_intr_vec_list_index_set(intr_handle, i,
+ RTE_INTR_VEC_RXTX_OFFSET + count))
+ return -rte_errno;
+ if (rte_intr_efds_index_set(intr_handle, count,
+ process_private->rxq_fds[i]))
+ return -rte_errno;
count++;
}
if (!count)
tap_rx_intr_vec_uninstall(dev);
else
- intr_handle->nb_efd = count;
+ if (rte_intr_nb_efd_set(intr_handle, count))
+ return -rte_errno;
return 0;
}
diff --git a/drivers/net/thunderx/nicvf_ethdev.c b/drivers/net/thunderx/nicvf_ethdev.c
index 8ce9a99dc0..6120adb007 100644
--- a/drivers/net/thunderx/nicvf_ethdev.c
+++ b/drivers/net/thunderx/nicvf_ethdev.c
@@ -1858,6 +1858,9 @@ nicvf_dev_close(struct rte_eth_dev *dev)
nicvf_periodic_alarm_stop(nicvf_vf_interrupt, nic->snicvf[i]);
}
+ if (nic->intr_handle)
+ rte_intr_instance_free(nic->intr_handle);
+
return 0;
}
@@ -2157,6 +2160,15 @@ nicvf_eth_dev_init(struct rte_eth_dev *eth_dev)
goto fail;
}
+ /* Allocate interrupt instance */
+ nic->intr_handle =
+ rte_intr_instance_alloc(RTE_INTR_INSTANCE_F_SHARED);
+ if (!nic->intr_handle) {
+ PMD_INIT_LOG(ERR, "Failed to allocate intr handle");
+ ret = -ENODEV;
+ goto fail;
+ }
+
nicvf_disable_all_interrupts(nic);
ret = nicvf_periodic_alarm_start(nicvf_interrupt, eth_dev);
diff --git a/drivers/net/thunderx/nicvf_struct.h b/drivers/net/thunderx/nicvf_struct.h
index 0ca207d0dd..c7ea13313e 100644
--- a/drivers/net/thunderx/nicvf_struct.h
+++ b/drivers/net/thunderx/nicvf_struct.h
@@ -100,7 +100,7 @@ struct nicvf {
uint16_t subsystem_vendor_id;
struct nicvf_rbdr *rbdr;
struct nicvf_rss_reta_info rss_info;
- struct rte_intr_handle intr_handle;
+ struct rte_intr_handle *intr_handle;
uint8_t cpi_alg;
uint16_t mtu;
int skip_bytes;
diff --git a/drivers/net/txgbe/txgbe_ethdev.c b/drivers/net/txgbe/txgbe_ethdev.c
index 7b46ffb686..53aa064d8a 100644
--- a/drivers/net/txgbe/txgbe_ethdev.c
+++ b/drivers/net/txgbe/txgbe_ethdev.c
@@ -548,7 +548,7 @@ eth_txgbe_dev_init(struct rte_eth_dev *eth_dev, void *init_params __rte_unused)
struct txgbe_dcb_config *dcb_config = TXGBE_DEV_DCB_CONFIG(eth_dev);
struct txgbe_filter_info *filter_info = TXGBE_DEV_FILTER(eth_dev);
struct txgbe_bw_conf *bw_conf = TXGBE_DEV_BW_CONF(eth_dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
const struct rte_memzone *mz;
uint32_t ctrl_ext;
uint16_t csum;
@@ -1620,7 +1620,7 @@ txgbe_dev_start(struct rte_eth_dev *dev)
struct txgbe_hw_stats *hw_stats = TXGBE_DEV_STATS(dev);
struct txgbe_vf_info *vfinfo = *TXGBE_DEV_VFDATA(dev);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
uint32_t intr_vector = 0;
int err;
bool link_up = false, negotiate = 0;
@@ -1670,17 +1670,14 @@ txgbe_dev_start(struct rte_eth_dev *dev)
return -1;
}
- if (rte_intr_dp_is_en(intr_handle) && !intr_handle->intr_vec) {
- intr_handle->intr_vec =
- rte_zmalloc("intr_vec",
- dev->data->nb_rx_queues * sizeof(int), 0);
- if (intr_handle->intr_vec == NULL) {
+ if (rte_intr_dp_is_en(intr_handle)) {
+ if (rte_intr_vec_list_alloc(intr_handle, "intr_vec",
+ dev->data->nb_rx_queues)) {
PMD_INIT_LOG(ERR, "Failed to allocate %d rx_queues"
" intr_vec", dev->data->nb_rx_queues);
return -ENOMEM;
}
}
-
/* configure msix for sleep until rx interrupt */

txgbe_configure_msix(dev);
@@ -1861,7 +1858,7 @@ txgbe_dev_stop(struct rte_eth_dev *dev)
struct txgbe_hw *hw = TXGBE_DEV_HW(dev);
struct txgbe_vf_info *vfinfo = *TXGBE_DEV_VFDATA(dev);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
int vf;
struct txgbe_tm_conf *tm_conf = TXGBE_DEV_TM_CONF(dev);
@@ -1911,10 +1908,7 @@ txgbe_dev_stop(struct rte_eth_dev *dev)
/* Clean datapath event and queue/vec mapping */
rte_intr_efd_disable(intr_handle);
- if (intr_handle->intr_vec != NULL) {
- rte_free(intr_handle->intr_vec);
- intr_handle->intr_vec = NULL;
- }
+ rte_intr_vec_list_free(intr_handle);
/* reset hierarchy commit */
tm_conf->committed = false;
@@ -1977,7 +1971,7 @@ txgbe_dev_close(struct rte_eth_dev *dev)
{
struct txgbe_hw *hw = TXGBE_DEV_HW(dev);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
int retries = 0;
int ret;
@@ -2936,8 +2930,8 @@ txgbe_dev_interrupt_get_status(struct rte_eth_dev *dev,
struct txgbe_hw *hw = TXGBE_DEV_HW(dev);
struct txgbe_interrupt *intr = TXGBE_DEV_INTR(dev);
- if (intr_handle->type != RTE_INTR_HANDLE_UIO &&
- intr_handle->type != RTE_INTR_HANDLE_VFIO_MSIX)
+ if (rte_intr_type_get(intr_handle) != RTE_INTR_HANDLE_UIO &&
+ rte_intr_type_get(intr_handle) != RTE_INTR_HANDLE_VFIO_MSIX)
wr32(hw, TXGBE_PX_INTA, 1);
/* clear all cause mask */
@@ -3103,7 +3097,7 @@ txgbe_dev_interrupt_delayed_handler(void *param)
{
struct rte_eth_dev *dev = (struct rte_eth_dev *)param;
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
struct txgbe_interrupt *intr = TXGBE_DEV_INTR(dev);
struct txgbe_hw *hw = TXGBE_DEV_HW(dev);
uint32_t eicr;
@@ -3623,7 +3617,7 @@ static int
txgbe_dev_rx_queue_intr_enable(struct rte_eth_dev *dev, uint16_t queue_id)
{
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
uint32_t mask;
struct txgbe_hw *hw = TXGBE_DEV_HW(dev);
@@ -3705,7 +3699,7 @@ static void
txgbe_configure_msix(struct rte_eth_dev *dev)
{
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
struct txgbe_hw *hw = TXGBE_DEV_HW(dev);
uint32_t queue_id, base = TXGBE_MISC_VEC_ID;
uint32_t vec = TXGBE_MISC_VEC_ID;
@@ -3739,8 +3733,10 @@ txgbe_configure_msix(struct rte_eth_dev *dev)
queue_id++) {
/* by default, 1:1 mapping */
txgbe_set_ivar_map(hw, 0, queue_id, vec);
- intr_handle->intr_vec[queue_id] = vec;
- if (vec < base + intr_handle->nb_efd - 1)
+ rte_intr_vec_list_index_set(intr_handle,
+ queue_id, vec);
+ if (vec < base + rte_intr_nb_efd_get(intr_handle)
+ - 1)
vec++;
}
diff --git a/drivers/net/txgbe/txgbe_ethdev_vf.c b/drivers/net/txgbe/txgbe_ethdev_vf.c
index 43dc0ed39b..b3ac392f97 100644
--- a/drivers/net/txgbe/txgbe_ethdev_vf.c
+++ b/drivers/net/txgbe/txgbe_ethdev_vf.c
@@ -166,7 +166,7 @@ eth_txgbevf_dev_init(struct rte_eth_dev *eth_dev)
int err;
uint32_t tc, tcs;
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
struct txgbe_hw *hw = TXGBE_DEV_HW(eth_dev);
struct txgbe_vfta *shadow_vfta = TXGBE_DEV_VFTA(eth_dev);
struct txgbe_hwstrip *hwstrip = TXGBE_DEV_HWSTRIP(eth_dev);
@@ -608,7 +608,7 @@ txgbevf_dev_start(struct rte_eth_dev *dev)
struct txgbe_hw *hw = TXGBE_DEV_HW(dev);
uint32_t intr_vector = 0;
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
int err, mask = 0;
@@ -669,11 +669,9 @@ txgbevf_dev_start(struct rte_eth_dev *dev)
return -1;
}
- if (rte_intr_dp_is_en(intr_handle) && !intr_handle->intr_vec) {
- intr_handle->intr_vec =
- rte_zmalloc("intr_vec",
- dev->data->nb_rx_queues * sizeof(int), 0);
- if (intr_handle->intr_vec == NULL) {
+ if (rte_intr_dp_is_en(intr_handle)) {
+ if (rte_intr_vec_list_alloc(intr_handle, "intr_vec",
+ dev->data->nb_rx_queues)) {
PMD_INIT_LOG(ERR, "Failed to allocate %d rx_queues"
" intr_vec", dev->data->nb_rx_queues);
return -ENOMEM;
@@ -712,7 +710,7 @@ txgbevf_dev_stop(struct rte_eth_dev *dev)
struct txgbe_hw *hw = TXGBE_DEV_HW(dev);
struct txgbe_adapter *adapter = TXGBE_DEV_ADAPTER(dev);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
if (hw->adapter_stopped)
return 0;
@@ -739,10 +737,7 @@ txgbevf_dev_stop(struct rte_eth_dev *dev)
/* Clean datapath event and queue/vec mapping */
rte_intr_efd_disable(intr_handle);
- if (intr_handle->intr_vec != NULL) {
- rte_free(intr_handle->intr_vec);
- intr_handle->intr_vec = NULL;
- }
+ rte_intr_vec_list_free(intr_handle);
adapter->rss_reta_updated = 0;
hw->dev_start = false;
@@ -755,7 +750,7 @@ txgbevf_dev_close(struct rte_eth_dev *dev)
{
struct txgbe_hw *hw = TXGBE_DEV_HW(dev);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
int ret;
PMD_INIT_FUNC_TRACE();
@@ -916,7 +911,7 @@ static int
txgbevf_dev_rx_queue_intr_enable(struct rte_eth_dev *dev, uint16_t queue_id)
{
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
struct txgbe_interrupt *intr = TXGBE_DEV_INTR(dev);
struct txgbe_hw *hw = TXGBE_DEV_HW(dev);
uint32_t vec = TXGBE_MISC_VEC_ID;
@@ -938,7 +933,7 @@ txgbevf_dev_rx_queue_intr_disable(struct rte_eth_dev *dev, uint16_t queue_id)
struct txgbe_interrupt *intr = TXGBE_DEV_INTR(dev);
struct txgbe_hw *hw = TXGBE_DEV_HW(dev);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
uint32_t vec = TXGBE_MISC_VEC_ID;
if (rte_intr_allow_others(intr_handle))
@@ -978,7 +973,7 @@ static void
txgbevf_configure_msix(struct rte_eth_dev *dev)
{
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
struct txgbe_hw *hw = TXGBE_DEV_HW(dev);
uint32_t q_idx;
uint32_t vector_idx = TXGBE_MISC_VEC_ID;
@@ -1004,8 +999,10 @@ txgbevf_configure_msix(struct rte_eth_dev *dev)
* as TXGBE_VF_MAXMSIVECOTR = 1
*/
txgbevf_set_ivar_map(hw, 0, q_idx, vector_idx);
- intr_handle->intr_vec[q_idx] = vector_idx;
- if (vector_idx < base + intr_handle->nb_efd - 1)
+ rte_intr_vec_list_index_set(intr_handle, q_idx,
+ vector_idx);
+ if (vector_idx < base + rte_intr_nb_efd_get(intr_handle)
+ - 1)
vector_idx++;
}
diff --git a/drivers/net/vhost/rte_eth_vhost.c b/drivers/net/vhost/rte_eth_vhost.c
index a7935a716d..b56bb00047 100644
--- a/drivers/net/vhost/rte_eth_vhost.c
+++ b/drivers/net/vhost/rte_eth_vhost.c
@@ -529,40 +529,43 @@ static int
eth_vhost_update_intr(struct rte_eth_dev *eth_dev, uint16_t rxq_idx)
{
struct rte_intr_handle *handle = eth_dev->intr_handle;
- struct rte_epoll_event rev;
+ struct rte_epoll_event rev, *elist;
int epfd, ret;
if (!handle)
return 0;
- if (handle->efds[rxq_idx] == handle->elist[rxq_idx].fd)
+ elist = rte_intr_elist_index_get(handle, rxq_idx);
+ if (rte_intr_efds_index_get(handle, rxq_idx) == elist->fd)
return 0;
VHOST_LOG(INFO, "kickfd for rxq-%d was changed, updating handler.\n",
rxq_idx);
- if (handle->elist[rxq_idx].fd != -1)
+ if (elist->fd != -1)
VHOST_LOG(ERR, "Unexpected previous kickfd value (Got %d, expected -1).\n",
- handle->elist[rxq_idx].fd);
+ elist->fd);
/*
* First remove invalid epoll event, and then install
* the new one. May be solved with a proper API in the
* future.
*/
- epfd = handle->elist[rxq_idx].epfd;
- rev = handle->elist[rxq_idx];
+ epfd = elist->epfd;
+ rev = *elist;
ret = rte_epoll_ctl(epfd, EPOLL_CTL_DEL, rev.fd,
- &handle->elist[rxq_idx]);
+ elist);
if (ret) {
VHOST_LOG(ERR, "Delete epoll event failed.\n");
return ret;
}
- rev.fd = handle->efds[rxq_idx];
- handle->elist[rxq_idx] = rev;
- ret = rte_epoll_ctl(epfd, EPOLL_CTL_ADD, rev.fd,
- &handle->elist[rxq_idx]);
+ rev.fd = rte_intr_efds_index_get(handle, rxq_idx);
+ if (rte_intr_elist_index_set(handle, rxq_idx, rev))
+ return -rte_errno;
+
+ elist = rte_intr_elist_index_get(handle, rxq_idx);
+ ret = rte_epoll_ctl(epfd, EPOLL_CTL_ADD, rev.fd, elist);
if (ret) {
VHOST_LOG(ERR, "Add epoll event failed.\n");
return ret;
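
The kickfd update above is the one place in the patch that has to replace an epoll event wholesale, so it needs the elist getter and setter around rte_epoll_ctl(). Its core, under the assumption that rte_intr_elist_index_set() copies the event by value as it is used here, and that the rte_epoll_event definitions are reachable from rte_interrupts.h/rte_epoll.h:

#include <stdint.h>
#include <sys/epoll.h>
#include <rte_errno.h>
#include <rte_interrupts.h>

static int
my_update_rxq_kickfd(struct rte_intr_handle *handle, uint16_t rxq_idx)
{
	struct rte_epoll_event rev, *elist;
	int epfd, ret;

	elist = rte_intr_elist_index_get(handle, rxq_idx);
	if (rte_intr_efds_index_get(handle, rxq_idx) == elist->fd)
		return 0;	/* kickfd unchanged, nothing to do */

	/* drop the stale event first ... */
	epfd = elist->epfd;
	rev = *elist;
	ret = rte_epoll_ctl(epfd, EPOLL_CTL_DEL, rev.fd, elist);
	if (ret)
		return ret;

	/* ... then store the new fd and re-add it */
	rev.fd = rte_intr_efds_index_get(handle, rxq_idx);
	if (rte_intr_elist_index_set(handle, rxq_idx, rev))
		return -rte_errno;
	elist = rte_intr_elist_index_get(handle, rxq_idx);
	return rte_epoll_ctl(epfd, EPOLL_CTL_ADD, rev.fd, elist);
}
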
@@ -641,9 +644,9 @@ eth_vhost_uninstall_intr(struct rte_eth_dev *dev)
struct rte_intr_handle *intr_handle = dev->intr_handle;
if (intr_handle) {
- if (intr_handle->intr_vec)
- free(intr_handle->intr_vec);
- free(intr_handle);
+ rte_intr_vec_list_free(intr_handle);
+
+ rte_intr_instance_free(intr_handle);
}
dev->intr_handle = NULL;
@@ -662,29 +665,31 @@ eth_vhost_install_intr(struct rte_eth_dev *dev)
if (dev->intr_handle)
eth_vhost_uninstall_intr(dev);
- dev->intr_handle = malloc(sizeof(*dev->intr_handle));
+ dev->intr_handle =
+ rte_intr_instance_alloc(RTE_INTR_INSTANCE_F_UNSHARED);
if (!dev->intr_handle) {
VHOST_LOG(ERR, "Fail to allocate intr_handle\n");
return -ENOMEM;
}
- memset(dev->intr_handle, 0, sizeof(*dev->intr_handle));
-
- dev->intr_handle->efd_counter_size = sizeof(uint64_t);
+ if (rte_intr_efd_counter_size_set(dev->intr_handle,
+ sizeof(uint64_t)))
+ return -rte_errno;
- dev->intr_handle->intr_vec =
- malloc(nb_rxq * sizeof(dev->intr_handle->intr_vec[0]));
-
- if (!dev->intr_handle->intr_vec) {
+ if (rte_intr_vec_list_alloc(dev->intr_handle, NULL, nb_rxq)) {
VHOST_LOG(ERR,
"Failed to allocate memory for interrupt vector\n");
- free(dev->intr_handle);
+ rte_intr_instance_free(dev->intr_handle);
return -ENOMEM;
}
+
VHOST_LOG(INFO, "Prepare intr vec\n");
for (i = 0; i < nb_rxq; i++) {
- dev->intr_handle->intr_vec[i] = RTE_INTR_VEC_RXTX_OFFSET + i;
- dev->intr_handle->efds[i] = -1;
+ if (rte_intr_vec_list_index_set(dev->intr_handle, i,
+ RTE_INTR_VEC_RXTX_OFFSET + i))
+ return -rte_errno;
+ if (rte_intr_efds_index_set(dev->intr_handle, i, -1))
+ return -rte_errno;
vq = dev->data->rx_queues[i];
if (!vq) {
VHOST_LOG(INFO, "rxq-%d not setup yet, skip!\n", i);
@@ -703,13 +708,21 @@ eth_vhost_install_intr(struct rte_eth_dev *dev)
"rxq-%d's kickfd is invalid, skip!\n", i);
continue;
}
- dev->intr_handle->efds[i] = vring.kickfd;
+
+ if (rte_intr_efds_index_set(dev->intr_handle, i,
+ vring.kickfd))
+ continue;
VHOST_LOG(INFO, "Installed intr vec for rxq-%d\n", i);
}
- dev->intr_handle->nb_efd = nb_rxq;
- dev->intr_handle->max_intr = nb_rxq + 1;
- dev->intr_handle->type = RTE_INTR_HANDLE_VDEV;
+ if (rte_intr_nb_efd_set(dev->intr_handle, nb_rxq))
+ return -rte_errno;
+
+ if (rte_intr_max_intr_set(dev->intr_handle, nb_rxq + 1))
+ return -rte_errno;
+
+ if (rte_intr_type_set(dev->intr_handle, RTE_INTR_HANDLE_VDEV))
+ return -rte_errno;
return 0;
}
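
eth_vhost_install_intr() above touches nearly all of the vdev-side accessors at once: an unshared instance replaces the bare malloc/memset, then the efd counter size, vector list, per-queue efds, nb_efd, max_intr and type are set through the API, and uninstall frees the list plus the instance. Condensed, with the per-queue kickfd lookup left out:

#include <errno.h>
#include <stdint.h>
#include <rte_errno.h>
#include <rte_ethdev.h>
#include <rte_interrupts.h>

static int
my_vdev_install_intr(struct rte_eth_dev *dev, uint16_t nb_rxq)
{
	uint16_t i;

	dev->intr_handle = rte_intr_instance_alloc(RTE_INTR_INSTANCE_F_UNSHARED);
	if (dev->intr_handle == NULL)
		return -ENOMEM;
	rte_intr_efd_counter_size_set(dev->intr_handle, sizeof(uint64_t));

	if (rte_intr_vec_list_alloc(dev->intr_handle, NULL, nb_rxq)) {
		rte_intr_instance_free(dev->intr_handle);
		dev->intr_handle = NULL;
		return -ENOMEM;
	}
	for (i = 0; i < nb_rxq; i++) {
		rte_intr_vec_list_index_set(dev->intr_handle, i,
					    RTE_INTR_VEC_RXTX_OFFSET + i);
		/* -1 until the queue's kickfd is known */
		rte_intr_efds_index_set(dev->intr_handle, i, -1);
	}
	rte_intr_nb_efd_set(dev->intr_handle, nb_rxq);
	rte_intr_max_intr_set(dev->intr_handle, nb_rxq + 1);
	if (rte_intr_type_set(dev->intr_handle, RTE_INTR_HANDLE_VDEV))
		return -rte_errno;
	return 0;
}
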
@@ -914,7 +927,10 @@ vring_conf_update(int vid, struct rte_eth_dev *eth_dev, uint16_t vring_id)
vring_id);
return ret;
}
- eth_dev->intr_handle->efds[rx_idx] = vring.kickfd;
+
+ if (rte_intr_efds_index_set(eth_dev->intr_handle, rx_idx,
+ vring.kickfd))
+ return -rte_errno;
vq = eth_dev->data->rx_queues[rx_idx];
if (!vq) {
diff --git a/drivers/net/virtio/virtio_ethdev.c b/drivers/net/virtio/virtio_ethdev.c
index 047d3f43a3..bcf2ce717b 100644
--- a/drivers/net/virtio/virtio_ethdev.c
+++ b/drivers/net/virtio/virtio_ethdev.c
@@ -731,8 +731,7 @@ virtio_dev_close(struct rte_eth_dev *dev)
if (intr_conf->lsc || intr_conf->rxq) {
virtio_intr_disable(dev);
rte_intr_efd_disable(dev->intr_handle);
- rte_free(dev->intr_handle->intr_vec);
- dev->intr_handle->intr_vec = NULL;
+ rte_intr_vec_list_free(dev->intr_handle);
}
virtio_reset(hw);
@@ -1640,7 +1639,9 @@ virtio_queues_bind_intr(struct rte_eth_dev *dev)
PMD_INIT_LOG(INFO, "queue/interrupt binding");
for (i = 0; i < dev->data->nb_rx_queues; ++i) {
- dev->intr_handle->intr_vec[i] = i + 1;
+ if (rte_intr_vec_list_index_set(dev->intr_handle, i,
+ i + 1))
+ return -rte_errno;
if (VIRTIO_OPS(hw)->set_queue_irq(hw, hw->vqs[i * 2], i + 1) ==
VIRTIO_MSI_NO_VECTOR) {
PMD_DRV_LOG(ERR, "failed to set queue vector");
@@ -1679,15 +1680,11 @@ virtio_configure_intr(struct rte_eth_dev *dev)
return -1;
}
- if (!dev->intr_handle->intr_vec) {
- dev->intr_handle->intr_vec =
- rte_zmalloc("intr_vec",
- hw->max_queue_pairs * sizeof(int), 0);
- if (!dev->intr_handle->intr_vec) {
- PMD_INIT_LOG(ERR, "Failed to allocate %u rxq vectors",
- hw->max_queue_pairs);
- return -ENOMEM;
- }
+ if (rte_intr_vec_list_alloc(dev->intr_handle, "intr_vec",
+ hw->max_queue_pairs)) {
+ PMD_INIT_LOG(ERR, "Failed to allocate %u rxq vectors",
+ hw->max_queue_pairs);
+ return -ENOMEM;
}
if (dev->data->dev_flags & RTE_ETH_DEV_INTR_LSC) {
diff --git a/drivers/net/virtio/virtio_user/virtio_user_dev.c b/drivers/net/virtio/virtio_user/virtio_user_dev.c
index 6a6145583b..460c621284 100644
--- a/drivers/net/virtio/virtio_user/virtio_user_dev.c
+++ b/drivers/net/virtio/virtio_user/virtio_user_dev.c
@@ -407,22 +407,37 @@ virtio_user_fill_intr_handle(struct virtio_user_dev *dev)
struct rte_eth_dev *eth_dev = &rte_eth_devices[dev->hw.port_id];
if (!eth_dev->intr_handle) {
- eth_dev->intr_handle = malloc(sizeof(*eth_dev->intr_handle));
+ eth_dev->intr_handle =
+ rte_intr_instance_alloc(RTE_INTR_INSTANCE_F_UNSHARED);
if (!eth_dev->intr_handle) {
PMD_DRV_LOG(ERR, "(%s) failed to allocate intr_handle", dev->path);
return -1;
}
- memset(eth_dev->intr_handle, 0, sizeof(*eth_dev->intr_handle));
}
for (i = 0; i < dev->max_queue_pairs; ++i)
- eth_dev->intr_handle->efds[i] = dev->callfds[2 * i];
- eth_dev->intr_handle->nb_efd = dev->max_queue_pairs;
- eth_dev->intr_handle->max_intr = dev->max_queue_pairs + 1;
- eth_dev->intr_handle->type = RTE_INTR_HANDLE_VDEV;
+ if (rte_intr_efds_index_set(eth_dev->intr_handle, i,
+ dev->callfds[i]))
+ return -rte_errno;
+
+ if (rte_intr_nb_efd_set(eth_dev->intr_handle,
+ dev->max_queue_pairs))
+ return -rte_errno;
+
+ if (rte_intr_max_intr_set(eth_dev->intr_handle,
+ dev->max_queue_pairs + 1))
+ return -rte_errno;
+
+ if (rte_intr_type_set(eth_dev->intr_handle,
+ RTE_INTR_HANDLE_VDEV))
+ return -rte_errno;
/* For virtio vdev, no need to read counter for clean */
- eth_dev->intr_handle->efd_counter_size = 0;
- eth_dev->intr_handle->fd = dev->ops->get_intr_fd(dev);
+ if (rte_intr_efd_counter_size_set(eth_dev->intr_handle, 0))
+ return -rte_errno;
+
+ if (rte_intr_fd_set(eth_dev->intr_handle,
+ dev->ops->get_intr_fd(dev)))
+ return -rte_errno;
return 0;
}
@@ -657,7 +672,7 @@ virtio_user_dev_uninit(struct virtio_user_dev *dev)
struct rte_eth_dev *eth_dev = &rte_eth_devices[dev->hw.port_id];
if (eth_dev->intr_handle) {
- free(eth_dev->intr_handle);
+ rte_intr_instance_free(eth_dev->intr_handle);
eth_dev->intr_handle = NULL;
}
@@ -962,7 +977,7 @@ virtio_user_dev_delayed_disconnect_handler(void *param)
return;
}
PMD_DRV_LOG(DEBUG, "Unregistering intr fd: %d",
- eth_dev->intr_handle->fd);
+ rte_intr_fd_get(eth_dev->intr_handle));
if (rte_intr_callback_unregister(eth_dev->intr_handle,
virtio_interrupt_handler,
eth_dev) != 1)
@@ -972,10 +987,11 @@ virtio_user_dev_delayed_disconnect_handler(void *param)
if (dev->ops->server_disconnect)
dev->ops->server_disconnect(dev);
- eth_dev->intr_handle->fd = dev->ops->get_intr_fd(dev);
+ rte_intr_fd_set(eth_dev->intr_handle,
+ dev->ops->get_intr_fd(dev));
PMD_DRV_LOG(DEBUG, "Registering intr fd: %d",
- eth_dev->intr_handle->fd);
+ rte_intr_fd_get(eth_dev->intr_handle));
if (rte_intr_callback_register(eth_dev->intr_handle,
virtio_interrupt_handler,
@@ -996,16 +1012,18 @@ virtio_user_dev_delayed_intr_reconfig_handler(void *param)
struct rte_eth_dev *eth_dev = &rte_eth_devices[dev->hw.port_id];
PMD_DRV_LOG(DEBUG, "Unregistering intr fd: %d",
- eth_dev->intr_handle->fd);
+ rte_intr_fd_get(eth_dev->intr_handle));
if (rte_intr_callback_unregister(eth_dev->intr_handle,
virtio_interrupt_handler,
eth_dev) != 1)
PMD_DRV_LOG(ERR, "interrupt unregister failed");
- eth_dev->intr_handle->fd = dev->ops->get_intr_fd(dev);
+ rte_intr_fd_set(eth_dev->intr_handle,
+ dev->ops->get_intr_fd(dev));
- PMD_DRV_LOG(DEBUG, "Registering intr fd: %d", eth_dev->intr_handle->fd);
+ PMD_DRV_LOG(DEBUG, "Registering intr fd: %d",
+ rte_intr_fd_get(eth_dev->intr_handle));
if (rte_intr_callback_register(eth_dev->intr_handle,
virtio_interrupt_handler, eth_dev))
diff --git a/drivers/net/vmxnet3/vmxnet3_ethdev.c b/drivers/net/vmxnet3/vmxnet3_ethdev.c
index a19895af1f..d75377598a 100644
--- a/drivers/net/vmxnet3/vmxnet3_ethdev.c
+++ b/drivers/net/vmxnet3/vmxnet3_ethdev.c
@@ -619,11 +619,9 @@ vmxnet3_configure_msix(struct rte_eth_dev *dev)
return -1;
}
- if (rte_intr_dp_is_en(intr_handle) && !intr_handle->intr_vec) {
- intr_handle->intr_vec =
- rte_zmalloc("intr_vec",
- dev->data->nb_rx_queues * sizeof(int), 0);
- if (intr_handle->intr_vec == NULL) {
+ if (rte_intr_dp_is_en(intr_handle)) {
+ if (rte_intr_vec_list_alloc(intr_handle, "intr_vec",
+ dev->data->nb_rx_queues)) {
PMD_INIT_LOG(ERR, "Failed to allocate %d Rx queues intr_vec",
dev->data->nb_rx_queues);
rte_intr_efd_disable(intr_handle);
@@ -634,8 +632,7 @@ vmxnet3_configure_msix(struct rte_eth_dev *dev)
if (!rte_intr_allow_others(intr_handle) &&
dev->data->dev_conf.intr_conf.lsc != 0) {
PMD_INIT_LOG(ERR, "not enough intr vector to support both Rx interrupt and LSC");
- rte_free(intr_handle->intr_vec);
- intr_handle->intr_vec = NULL;
+ rte_intr_vec_list_free(intr_handle);
rte_intr_efd_disable(intr_handle);
return -1;
}
@@ -643,17 +640,19 @@ vmxnet3_configure_msix(struct rte_eth_dev *dev)
/* if we cannot allocate one MSI-X vector per queue, don't enable
* interrupt mode.
*/
- if (hw->intr.num_intrs != (intr_handle->nb_efd + 1)) {
+ if (hw->intr.num_intrs !=
+ (rte_intr_nb_efd_get(intr_handle) + 1)) {
PMD_INIT_LOG(ERR, "Device configured with %d Rx intr vectors, expecting %d",
- hw->intr.num_intrs, intr_handle->nb_efd + 1);
- rte_free(intr_handle->intr_vec);
- intr_handle->intr_vec = NULL;
+ hw->intr.num_intrs,
+ rte_intr_nb_efd_get(intr_handle) + 1);
+ rte_intr_vec_list_free(intr_handle);
rte_intr_efd_disable(intr_handle);
return -1;
}
for (i = 0; i < dev->data->nb_rx_queues; i++)
- intr_handle->intr_vec[i] = i + 1;
+ if (rte_intr_vec_list_index_set(intr_handle, i, i + 1))
+ return -rte_errno;
for (i = 0; i < hw->intr.num_intrs; i++)
hw->intr.mod_levels[i] = UPT1_IML_ADAPTIVE;
@@ -801,7 +800,9 @@ vmxnet3_setup_driver_shared(struct rte_eth_dev *dev)
if (hw->intr.lsc_only)
tqd->conf.intrIdx = 1;
else
- tqd->conf.intrIdx = intr_handle->intr_vec[i];
+ tqd->conf.intrIdx =
+ rte_intr_vec_list_index_get(intr_handle,
+ i);
tqd->status.stopped = TRUE;
tqd->status.error = 0;
memset(&tqd->stats, 0, sizeof(tqd->stats));
@@ -824,7 +825,9 @@ vmxnet3_setup_driver_shared(struct rte_eth_dev *dev)
if (hw->intr.lsc_only)
rqd->conf.intrIdx = 1;
else
- rqd->conf.intrIdx = intr_handle->intr_vec[i];
+ rqd->conf.intrIdx =
+ rte_intr_vec_list_index_get(intr_handle,
+ i);
rqd->status.stopped = TRUE;
rqd->status.error = 0;
memset(&rqd->stats, 0, sizeof(rqd->stats));
@@ -1021,10 +1024,7 @@ vmxnet3_dev_stop(struct rte_eth_dev *dev)
/* Clean datapath event and queue/vector mapping */
rte_intr_efd_disable(intr_handle);
- if (intr_handle->intr_vec != NULL) {
- rte_free(intr_handle->intr_vec);
- intr_handle->intr_vec = NULL;
- }
+ rte_intr_vec_list_free(intr_handle);
/* quiesce the device first */
VMXNET3_WRITE_BAR1_REG(hw, VMXNET3_REG_CMD, VMXNET3_CMD_QUIESCE_DEV);
@@ -1670,7 +1670,9 @@ vmxnet3_dev_rx_queue_intr_enable(struct rte_eth_dev *dev, uint16_t queue_id)
{
struct vmxnet3_hw *hw = dev->data->dev_private;
- vmxnet3_enable_intr(hw, dev->intr_handle->intr_vec[queue_id]);
+ vmxnet3_enable_intr(hw,
+ rte_intr_vec_list_index_get(dev->intr_handle,
+ queue_id));
return 0;
}
@@ -1680,7 +1682,8 @@ vmxnet3_dev_rx_queue_intr_disable(struct rte_eth_dev *dev, uint16_t queue_id)
{
struct vmxnet3_hw *hw = dev->data->dev_private;
- vmxnet3_disable_intr(hw, dev->intr_handle->intr_vec[queue_id]);
+ vmxnet3_disable_intr(hw,
+ rte_intr_vec_list_index_get(dev->intr_handle, queue_id));
return 0;
}
diff --git a/drivers/raw/ifpga/ifpga_rawdev.c b/drivers/raw/ifpga/ifpga_rawdev.c
index 76e6a8530b..502e3b5d3f 100644
--- a/drivers/raw/ifpga/ifpga_rawdev.c
+++ b/drivers/raw/ifpga/ifpga_rawdev.c
@@ -73,7 +73,7 @@ static pthread_t ifpga_monitor_start_thread;
#define IFPGA_MAX_IRQ 12
/* 0 for FME interrupt, others are reserved for AFU irq */
-static struct rte_intr_handle ifpga_irq_handle[IFPGA_MAX_IRQ];
+static struct rte_intr_handle *ifpga_irq_handle[IFPGA_MAX_IRQ];
static struct ifpga_rawdev *
ifpga_rawdev_allocate(struct rte_rawdev *rawdev);
@@ -1345,17 +1345,22 @@ ifpga_unregister_msix_irq(enum ifpga_irq_type type,
int vec_start, rte_intr_callback_fn handler, void *arg)
{
struct rte_intr_handle *intr_handle;
+ int rc, i;
if (type == IFPGA_FME_IRQ)
- intr_handle = &ifpga_irq_handle[0];
+ intr_handle = ifpga_irq_handle[0];
else if (type == IFPGA_AFU_IRQ)
- intr_handle = &ifpga_irq_handle[vec_start + 1];
+ intr_handle = ifpga_irq_handle[vec_start + 1];
else
return 0;
rte_intr_efd_disable(intr_handle);
- return rte_intr_callback_unregister(intr_handle, handler, arg);
+ rc = rte_intr_callback_unregister(intr_handle, handler, arg);
+
+ for (i = 0; i < IFPGA_MAX_IRQ; i++)
+ rte_intr_instance_free(ifpga_irq_handle[i]);
+ return rc;
}
int
@@ -1369,6 +1374,14 @@ ifpga_register_msix_irq(struct rte_rawdev *dev, int port_id,
struct opae_adapter *adapter;
struct opae_manager *mgr;
struct opae_accelerator *acc;
+ int *intr_efds = NULL, nb_intr, i;
+
+ for (i = 0; i < IFPGA_MAX_IRQ; i++) {
+ ifpga_irq_handle[i] =
+ rte_intr_instance_alloc(RTE_INTR_INSTANCE_F_UNSHARED);
+ if (!ifpga_irq_handle[i])
+ return -ENOMEM;
+ }
adapter = ifpga_rawdev_get_priv(dev);
if (!adapter)
@@ -1379,29 +1392,33 @@ ifpga_register_msix_irq(struct rte_rawdev *dev, int port_id,
return -ENODEV;
if (type == IFPGA_FME_IRQ) {
- intr_handle = &ifpga_irq_handle[0];
+ intr_handle = ifpga_irq_handle[0];
count = 1;
} else if (type == IFPGA_AFU_IRQ) {
- intr_handle = &ifpga_irq_handle[vec_start + 1];
+ intr_handle = ifpga_irq_handle[vec_start + 1];
} else {
return -EINVAL;
}
- intr_handle->type = RTE_INTR_HANDLE_VFIO_MSIX;
+ if (rte_intr_type_set(intr_handle, RTE_INTR_HANDLE_VFIO_MSIX))
+ return -rte_errno;
ret = rte_intr_efd_enable(intr_handle, count);
if (ret)
return -ENODEV;
- intr_handle->fd = intr_handle->efds[0];
+ if (rte_intr_fd_set(intr_handle,
+ rte_intr_efds_index_get(intr_handle, 0)))
+ return -rte_errno;
IFPGA_RAWDEV_PMD_DEBUG("register %s irq, vfio_fd=%d, fd=%d\n",
- name, intr_handle->vfio_dev_fd,
- intr_handle->fd);
+ name, rte_intr_dev_fd_get(intr_handle),
+ rte_intr_fd_get(intr_handle));
if (type == IFPGA_FME_IRQ) {
struct fpga_fme_err_irq_set err_irq_set;
- err_irq_set.evtfd = intr_handle->efds[0];
+ err_irq_set.evtfd = rte_intr_efds_index_get(intr_handle,
+ 0);
ret = opae_manager_ifpga_set_err_irq(mgr, &err_irq_set);
if (ret)
@@ -1411,20 +1428,33 @@ ifpga_register_msix_irq(struct rte_rawdev *dev, int port_id,
if (!acc)
return -EINVAL;
- ret = opae_acc_set_irq(acc, vec_start, count,
- intr_handle->efds);
- if (ret)
+ nb_intr = rte_intr_nb_intr_get(intr_handle);
+
+ intr_efds = calloc(nb_intr, sizeof(int));
+ if (!intr_efds)
+ return -ENOMEM;
+
+ for (i = 0; i < nb_intr; i++)
+ intr_efds[i] = rte_intr_efds_index_get(intr_handle, i);
+
+ ret = opae_acc_set_irq(acc, vec_start, count, intr_efds);
+ if (ret) {
+ free(intr_efds);
return -EINVAL;
+ }
}
/* register interrupt handler using DPDK API */
ret = rte_intr_callback_register(intr_handle,
handler, (void *)arg);
- if (ret)
+ if (ret) {
+ free(intr_efds);
return -EINVAL;
+ }
IFPGA_RAWDEV_PMD_INFO("success register %s interrupt\n", name);
+ free(intr_efds);
return 0;
}
@@ -1491,7 +1521,7 @@ ifpga_rawdev_create(struct rte_pci_device *pci_dev,
data->bus = pci_dev->addr.bus;
data->devid = pci_dev->addr.devid;
data->function = pci_dev->addr.function;
- data->vfio_dev_fd = pci_dev->intr_handle.vfio_dev_fd;
+ data->vfio_dev_fd = rte_intr_dev_fd_get(pci_dev->intr_handle);
adapter = rawdev->dev_private;
/* create a opae_adapter based on above device data */
diff --git a/drivers/raw/ntb/ntb.c b/drivers/raw/ntb/ntb.c
index 78cfcd79f7..46ac02e5ab 100644
--- a/drivers/raw/ntb/ntb.c
+++ b/drivers/raw/ntb/ntb.c
@@ -1044,13 +1044,10 @@ ntb_dev_close(struct rte_rawdev *dev)
ntb_queue_release(dev, i);
hw->queue_pairs = 0;
- intr_handle = &hw->pci_dev->intr_handle;
+ intr_handle = hw->pci_dev->intr_handle;
/* Clean datapath event and vec mapping */
rte_intr_efd_disable(intr_handle);
- if (intr_handle->intr_vec) {
- rte_free(intr_handle->intr_vec);
- intr_handle->intr_vec = NULL;
- }
+ rte_intr_vec_list_free(intr_handle);
/* Disable uio intr before callback unregister */
rte_intr_disable(intr_handle);
@@ -1402,7 +1399,7 @@ ntb_init_hw(struct rte_rawdev *dev, struct rte_pci_device *pci_dev)
/* Init doorbell. */
hw->db_valid_mask = RTE_LEN2MASK(hw->db_cnt, uint64_t);
- intr_handle = &pci_dev->intr_handle;
+ intr_handle = pci_dev->intr_handle;
/* Register callback func to eal lib */
rte_intr_callback_register(intr_handle,
ntb_dev_intr_handler, dev);
diff --git a/drivers/regex/octeontx2/otx2_regexdev_hw_access.c b/drivers/regex/octeontx2/otx2_regexdev_hw_access.c
index 620d5c9122..f8031d0f72 100644
--- a/drivers/regex/octeontx2/otx2_regexdev_hw_access.c
+++ b/drivers/regex/octeontx2/otx2_regexdev_hw_access.c
@@ -31,7 +31,7 @@ ree_lf_err_intr_unregister(const struct rte_regexdev *dev, uint16_t msix_off,
uintptr_t base)
{
struct rte_pci_device *pci_dev = RTE_DEV_TO_PCI(dev->device);
- struct rte_intr_handle *handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *handle = pci_dev->intr_handle;
/* Disable error interrupts */
otx2_write64(~0ull, base + OTX2_REE_LF_MISC_INT_ENA_W1C);
@@ -61,7 +61,7 @@ ree_lf_err_intr_register(const struct rte_regexdev *dev, uint16_t msix_off,
uintptr_t base)
{
struct rte_pci_device *pci_dev = RTE_DEV_TO_PCI(dev->device);
- struct rte_intr_handle *handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *handle = pci_dev->intr_handle;
int ret;
/* Disable error interrupts */
diff --git a/drivers/vdpa/ifc/ifcvf_vdpa.c b/drivers/vdpa/ifc/ifcvf_vdpa.c
index 365da2a8b9..dd5251d382 100644
--- a/drivers/vdpa/ifc/ifcvf_vdpa.c
+++ b/drivers/vdpa/ifc/ifcvf_vdpa.c
@@ -162,7 +162,7 @@ ifcvf_vfio_setup(struct ifcvf_internal *internal)
if (rte_pci_map_device(dev))
goto err;
- internal->vfio_dev_fd = dev->intr_handle.vfio_dev_fd;
+ internal->vfio_dev_fd = rte_intr_dev_fd_get(dev->intr_handle);
for (i = 0; i < RTE_MIN(PCI_MAX_RESOURCE, IFCVF_PCI_MAX_RESOURCE);
i++) {
@@ -365,7 +365,8 @@ vdpa_enable_vfio_intr(struct ifcvf_internal *internal, bool m_rx)
irq_set->index = VFIO_PCI_MSIX_IRQ_INDEX;
irq_set->start = 0;
fd_ptr = (int *)&irq_set->data;
- fd_ptr[RTE_INTR_VEC_ZERO_OFFSET] = internal->pdev->intr_handle.fd;
+ fd_ptr[RTE_INTR_VEC_ZERO_OFFSET] =
+ rte_intr_fd_get(internal->pdev->intr_handle);
for (i = 0; i < nr_vring; i++)
internal->intr_fd[i] = -1;
diff --git a/drivers/vdpa/mlx5/mlx5_vdpa.c b/drivers/vdpa/mlx5/mlx5_vdpa.c
index 3971f2e335..e34b870ded 100644
--- a/drivers/vdpa/mlx5/mlx5_vdpa.c
+++ b/drivers/vdpa/mlx5/mlx5_vdpa.c
@@ -535,6 +535,12 @@ mlx5_vdpa_dev_probe(struct mlx5_common_device *cdev)
DRV_LOG(ERR, "Failed to allocate VAR %u.", errno);
goto error;
}
+ priv->err_intr_handle =
+ rte_intr_instance_alloc(RTE_INTR_INSTANCE_F_SHARED);
+ if (!priv->err_intr_handle) {
+ DRV_LOG(ERR, "Fail to allocate intr_handle");
+ goto error;
+ }
priv->vdev = rte_vdpa_register_device(cdev->dev, &mlx5_vdpa_ops);
if (priv->vdev == NULL) {
DRV_LOG(ERR, "Failed to register vDPA device.");
@@ -553,6 +559,8 @@ mlx5_vdpa_dev_probe(struct mlx5_common_device *cdev)
if (priv) {
if (priv->var)
mlx5_glue->dv_free_var(priv->var);
+ if (priv->err_intr_handle)
+ rte_intr_instance_free(priv->err_intr_handle);
rte_free(priv);
}
return -rte_errno;
@@ -584,6 +592,8 @@ mlx5_vdpa_dev_remove(struct mlx5_common_device *cdev)
if (priv->vdev)
rte_vdpa_unregister_device(priv->vdev);
pthread_mutex_destroy(&priv->vq_config_lock);
+ if (priv->err_intr_handle)
+ rte_intr_instance_free(priv->err_intr_handle);
rte_free(priv);
}
return 0;
diff --git a/drivers/vdpa/mlx5/mlx5_vdpa.h b/drivers/vdpa/mlx5/mlx5_vdpa.h
index 5045fea773..cf4f384fa4 100644
--- a/drivers/vdpa/mlx5/mlx5_vdpa.h
+++ b/drivers/vdpa/mlx5/mlx5_vdpa.h
@@ -89,7 +89,7 @@ struct mlx5_vdpa_virtq {
void *buf;
uint32_t size;
} umems[3];
- struct rte_intr_handle intr_handle;
+ struct rte_intr_handle *intr_handle;
uint64_t err_time[3]; /* RDTSC time of recent errors. */
uint32_t n_retry;
struct mlx5_devx_virtio_q_couners_attr reset;
@@ -137,7 +137,7 @@ struct mlx5_vdpa_priv {
struct mlx5dv_devx_event_channel *eventc;
struct mlx5dv_devx_event_channel *err_chnl;
struct mlx5dv_devx_uar *uar;
- struct rte_intr_handle err_intr_handle;
+ struct rte_intr_handle *err_intr_handle;
struct mlx5_devx_obj *td;
struct mlx5_devx_obj *tiss[16]; /* TIS list for each LAG port. */
uint16_t nr_virtqs;
diff --git a/drivers/vdpa/mlx5/mlx5_vdpa_event.c b/drivers/vdpa/mlx5/mlx5_vdpa_event.c
index 19497597e6..76b26e5ef7 100644
--- a/drivers/vdpa/mlx5/mlx5_vdpa_event.c
+++ b/drivers/vdpa/mlx5/mlx5_vdpa_event.c
@@ -411,12 +411,18 @@ mlx5_vdpa_err_event_setup(struct mlx5_vdpa_priv *priv)
DRV_LOG(ERR, "Failed to change device event channel FD.");
goto error;
}
- priv->err_intr_handle.fd = priv->err_chnl->fd;
- priv->err_intr_handle.type = RTE_INTR_HANDLE_EXT;
- if (rte_intr_callback_register(&priv->err_intr_handle,
+
+ if (rte_intr_fd_set(priv->err_intr_handle, priv->err_chnl->fd))
+ goto error;
+
+ if (rte_intr_type_set(priv->err_intr_handle,
+ RTE_INTR_HANDLE_EXT))
+ goto error;
+
+ if (rte_intr_callback_register(priv->err_intr_handle,
mlx5_vdpa_err_interrupt_handler,
priv)) {
- priv->err_intr_handle.fd = 0;
+ rte_intr_fd_set(priv->err_intr_handle, 0);
DRV_LOG(ERR, "Failed to register error interrupt for device %d.",
priv->vid);
goto error;
@@ -436,20 +442,20 @@ mlx5_vdpa_err_event_unset(struct mlx5_vdpa_priv *priv)
int retries = MLX5_VDPA_INTR_RETRIES;
int ret = -EAGAIN;
- if (!priv->err_intr_handle.fd)
+ if (!rte_intr_fd_get(priv->err_intr_handle))
return;
while (retries-- && ret == -EAGAIN) {
- ret = rte_intr_callback_unregister(&priv->err_intr_handle,
+ ret = rte_intr_callback_unregister(priv->err_intr_handle,
mlx5_vdpa_err_interrupt_handler,
priv);
if (ret == -EAGAIN) {
DRV_LOG(DEBUG, "Try again to unregister fd %d "
"of error interrupt, retries = %d.",
- priv->err_intr_handle.fd, retries);
+ rte_intr_fd_get(priv->err_intr_handle),
+ retries);
rte_pause();
}
}
- memset(&priv->err_intr_handle, 0, sizeof(priv->err_intr_handle));
if (priv->err_chnl) {
#ifdef HAVE_IBV_DEVX_EVENT
union {
diff --git a/drivers/vdpa/mlx5/mlx5_vdpa_virtq.c b/drivers/vdpa/mlx5/mlx5_vdpa_virtq.c
index cfd50d92f5..2af6da2034 100644
--- a/drivers/vdpa/mlx5/mlx5_vdpa_virtq.c
+++ b/drivers/vdpa/mlx5/mlx5_vdpa_virtq.c
@@ -24,7 +24,8 @@ mlx5_vdpa_virtq_handler(void *cb_arg)
int nbytes;
do {
- nbytes = read(virtq->intr_handle.fd, &buf, 8);
+ nbytes = read(rte_intr_fd_get(virtq->intr_handle), &buf,
+ 8);
if (nbytes < 0) {
if (errno == EINTR ||
errno == EWOULDBLOCK ||
@@ -57,21 +58,24 @@ mlx5_vdpa_virtq_unset(struct mlx5_vdpa_virtq *virtq)
int retries = MLX5_VDPA_INTR_RETRIES;
int ret = -EAGAIN;
- if (virtq->intr_handle.fd != -1) {
+ if (rte_intr_fd_get(virtq->intr_handle) != -1) {
while (retries-- && ret == -EAGAIN) {
- ret = rte_intr_callback_unregister(&virtq->intr_handle,
+ ret = rte_intr_callback_unregister(virtq->intr_handle,
mlx5_vdpa_virtq_handler,
virtq);
if (ret == -EAGAIN) {
DRV_LOG(DEBUG, "Try again to unregister fd %d "
- "of virtq %d interrupt, retries = %d.",
- virtq->intr_handle.fd,
- (int)virtq->index, retries);
+ "of virtq %d interrupt, retries = %d.",
+ rte_intr_fd_get(virtq->intr_handle),
+ (int)virtq->index, retries);
+
usleep(MLX5_VDPA_INTR_RETRIES_USEC);
}
}
- virtq->intr_handle.fd = -1;
+ rte_intr_fd_set(virtq->intr_handle, -1);
}
+ if (virtq->intr_handle)
+ rte_intr_instance_free(virtq->intr_handle);
if (virtq->virtq) {
ret = mlx5_vdpa_virtq_stop(virtq->priv, virtq->index);
if (ret)
@@ -336,21 +340,33 @@ mlx5_vdpa_virtq_setup(struct mlx5_vdpa_priv *priv, int index)
virtq->priv = priv;
rte_write32(virtq->index, priv->virtq_db_addr);
/* Setup doorbell mapping. */
- virtq->intr_handle.fd = vq.kickfd;
- if (virtq->intr_handle.fd == -1) {
+ virtq->intr_handle =
+ rte_intr_instance_alloc(RTE_INTR_INSTANCE_F_SHARED);
+ if (!virtq->intr_handle) {
+ DRV_LOG(ERR, "Fail to allocate intr_handle");
+ goto error;
+ }
+
+ if (rte_intr_fd_set(virtq->intr_handle, vq.kickfd))
+ goto error;
+
+ if (rte_intr_fd_get(virtq->intr_handle) == -1) {
DRV_LOG(WARNING, "Virtq %d kickfd is invalid.", index);
} else {
- virtq->intr_handle.type = RTE_INTR_HANDLE_EXT;
- if (rte_intr_callback_register(&virtq->intr_handle,
+ if (rte_intr_type_set(virtq->intr_handle,
+ RTE_INTR_HANDLE_EXT))
+ goto error;
+ if (rte_intr_callback_register(virtq->intr_handle,
mlx5_vdpa_virtq_handler,
virtq)) {
- virtq->intr_handle.fd = -1;
+ rte_intr_fd_set(virtq->intr_handle, -1);
DRV_LOG(ERR, "Failed to register virtq %d interrupt.",
index);
goto error;
} else {
DRV_LOG(DEBUG, "Register fd %d interrupt for virtq %d.",
- virtq->intr_handle.fd, index);
+ rte_intr_fd_get(virtq->intr_handle),
+ index);
}
}
/* Subscribe virtq error event. */
@@ -502,7 +518,8 @@ mlx5_vdpa_virtq_is_modified(struct mlx5_vdpa_priv *priv,
if (ret)
return -1;
- if (vq.size != virtq->vq_size || vq.kickfd != virtq->intr_handle.fd)
+ if (vq.size != virtq->vq_size || vq.kickfd !=
+ rte_intr_fd_get(virtq->intr_handle))
return 1;
if (virtq->eqp.cq.cq_obj.cq) {
if (vq.callfd != virtq->eqp.cq.callfd)
diff --git a/lib/bbdev/rte_bbdev.c b/lib/bbdev/rte_bbdev.c
index defddcfc28..2c6fa65020 100644
--- a/lib/bbdev/rte_bbdev.c
+++ b/lib/bbdev/rte_bbdev.c
@@ -1094,7 +1094,7 @@ rte_bbdev_queue_intr_ctl(uint16_t dev_id, uint16_t queue_id, int epfd, int op,
VALID_QUEUE_OR_RET_ERR(queue_id, dev);
intr_handle = dev->intr_handle;
- if (!intr_handle || !intr_handle->intr_vec) {
+ if (!intr_handle) {
rte_bbdev_log(ERR, "Device %u intr handle unset\n", dev_id);
return -ENOTSUP;
}
@@ -1105,7 +1105,7 @@ rte_bbdev_queue_intr_ctl(uint16_t dev_id, uint16_t queue_id, int epfd, int op,
return -ENOTSUP;
}
- vec = intr_handle->intr_vec[queue_id];
+ vec = rte_intr_vec_list_index_get(intr_handle, queue_id);
ret = rte_intr_rx_ctl(intr_handle, epfd, op, vec, data);
if (ret && (ret != -EEXIST)) {
rte_bbdev_log(ERR,
diff --git a/lib/eal/freebsd/eal_alarm.c b/lib/eal/freebsd/eal_alarm.c
index c38b2e04f8..1f2ea58175 100644
--- a/lib/eal/freebsd/eal_alarm.c
+++ b/lib/eal/freebsd/eal_alarm.c
@@ -32,7 +32,7 @@
struct alarm_entry {
LIST_ENTRY(alarm_entry) next;
- struct rte_intr_handle handle;
+ struct rte_intr_handle *handle;
struct timespec time;
rte_eal_alarm_callback cb_fn;
void *cb_arg;
@@ -43,22 +43,43 @@ struct alarm_entry {
static LIST_HEAD(alarm_list, alarm_entry) alarm_list = LIST_HEAD_INITIALIZER();
static rte_spinlock_t alarm_list_lk = RTE_SPINLOCK_INITIALIZER;
-static struct rte_intr_handle intr_handle = {.fd = -1 };
+static struct rte_intr_handle *intr_handle;
static void eal_alarm_callback(void *arg);
int
rte_eal_alarm_init(void)
{
- intr_handle.type = RTE_INTR_HANDLE_ALARM;
+ int fd;
+
+ intr_handle = rte_intr_instance_alloc(RTE_INTR_INSTANCE_F_UNSHARED);
+ if (!intr_handle) {
+ RTE_LOG(ERR, EAL, "Fail to allocate intr_handle\n");
+ goto error;
+ }
+
+ if (rte_intr_type_set(intr_handle, RTE_INTR_HANDLE_ALARM))
+ goto error;
+
+ if (rte_intr_fd_set(intr_handle, -1))
+ goto error;
/* on FreeBSD, timers don't use fd's, and their identifiers are stored
* in separate namespace from fd's, so using any value is OK. however,
* EAL interrupts handler expects fd's to be unique, so use an actual fd
* to guarantee unique timer identifier.
*/
- intr_handle.fd = open("/dev/zero", O_RDONLY);
+ fd = open("/dev/zero", O_RDONLY);
+
+ if (rte_intr_fd_set(intr_handle, fd))
+ goto error;
return 0;
+error:
+ if (intr_handle)
+ rte_intr_instance_free(intr_handle);
+
+ rte_intr_fd_set(intr_handle, -1);
+ return -1;
}
static inline int
@@ -118,7 +139,7 @@ unregister_current_callback(void)
ap = LIST_FIRST(&alarm_list);
do {
- ret = rte_intr_callback_unregister(&intr_handle,
+ ret = rte_intr_callback_unregister(intr_handle,
eal_alarm_callback, &ap->time);
} while (ret == -EAGAIN);
}
@@ -136,7 +157,7 @@ register_first_callback(void)
ap = LIST_FIRST(&alarm_list);
/* register a new callback */
- ret = rte_intr_callback_register(&intr_handle,
+ ret = rte_intr_callback_register(intr_handle,
eal_alarm_callback, &ap->time);
}
return ret;
@@ -164,6 +185,8 @@ eal_alarm_callback(void *arg __rte_unused)
rte_spinlock_lock(&alarm_list_lk);
LIST_REMOVE(ap, next);
+ if (ap->handle)
+ rte_intr_instance_free(ap->handle);
free(ap);
ap = LIST_FIRST(&alarm_list);
@@ -202,6 +225,11 @@ rte_eal_alarm_set(uint64_t us, rte_eal_alarm_callback cb_fn, void *cb_arg)
new_alarm->time.tv_nsec = (now.tv_nsec + ns) % NS_PER_S;
new_alarm->time.tv_sec = now.tv_sec + ((now.tv_nsec + ns) / NS_PER_S);
+ new_alarm->handle =
+ rte_intr_instance_alloc(RTE_INTR_INSTANCE_F_UNSHARED);
+ if (new_alarm->handle == NULL)
+ return -ENOMEM;
+
rte_spinlock_lock(&alarm_list_lk);
if (LIST_EMPTY(&alarm_list))
@@ -256,6 +284,9 @@ rte_eal_alarm_cancel(rte_eal_alarm_callback cb_fn, void *cb_arg)
if (ap->executing == 0) {
LIST_REMOVE(ap, next);
free(ap);
+ if (ap->handle)
+ rte_intr_instance_free(
+ ap->handle);
count++;
} else {
/* If calling from other context, mark that
@@ -282,6 +313,9 @@ rte_eal_alarm_cancel(rte_eal_alarm_callback cb_fn, void *cb_arg)
cb_arg == ap->cb_arg)) {
if (ap->executing == 0) {
LIST_REMOVE(ap, next);
+ if (ap->handle)
+ rte_intr_instance_free(
+ ap->handle);
free(ap);
count++;
ap = ap_prev;
diff --git a/lib/eal/include/rte_eal_trace.h b/lib/eal/include/rte_eal_trace.h
index 495ae1ee1d..792872dffd 100644
--- a/lib/eal/include/rte_eal_trace.h
+++ b/lib/eal/include/rte_eal_trace.h
@@ -149,11 +149,7 @@ RTE_TRACE_POINT(
RTE_TRACE_POINT_ARGS(const struct rte_intr_handle *handle,
rte_intr_callback_fn cb, void *cb_arg, int rc),
rte_trace_point_emit_int(rc);
- rte_trace_point_emit_int(handle->vfio_dev_fd);
- rte_trace_point_emit_int(handle->fd);
- rte_trace_point_emit_int(handle->type);
- rte_trace_point_emit_u32(handle->max_intr);
- rte_trace_point_emit_u32(handle->nb_efd);
+ rte_trace_point_emit_ptr(handle);
rte_trace_point_emit_ptr(cb);
rte_trace_point_emit_ptr(cb_arg);
)
@@ -162,11 +158,7 @@ RTE_TRACE_POINT(
RTE_TRACE_POINT_ARGS(const struct rte_intr_handle *handle,
rte_intr_callback_fn cb, void *cb_arg, int rc),
rte_trace_point_emit_int(rc);
- rte_trace_point_emit_int(handle->vfio_dev_fd);
- rte_trace_point_emit_int(handle->fd);
- rte_trace_point_emit_int(handle->type);
- rte_trace_point_emit_u32(handle->max_intr);
- rte_trace_point_emit_u32(handle->nb_efd);
+ rte_trace_point_emit_ptr(handle);
rte_trace_point_emit_ptr(cb);
rte_trace_point_emit_ptr(cb_arg);
)
@@ -174,21 +166,13 @@ RTE_TRACE_POINT(
rte_eal_trace_intr_enable,
RTE_TRACE_POINT_ARGS(const struct rte_intr_handle *handle, int rc),
rte_trace_point_emit_int(rc);
- rte_trace_point_emit_int(handle->vfio_dev_fd);
- rte_trace_point_emit_int(handle->fd);
- rte_trace_point_emit_int(handle->type);
- rte_trace_point_emit_u32(handle->max_intr);
- rte_trace_point_emit_u32(handle->nb_efd);
+ rte_trace_point_emit_ptr(handle);
)
RTE_TRACE_POINT(
rte_eal_trace_intr_disable,
RTE_TRACE_POINT_ARGS(const struct rte_intr_handle *handle, int rc),
rte_trace_point_emit_int(rc);
- rte_trace_point_emit_int(handle->vfio_dev_fd);
- rte_trace_point_emit_int(handle->fd);
- rte_trace_point_emit_int(handle->type);
- rte_trace_point_emit_u32(handle->max_intr);
- rte_trace_point_emit_u32(handle->nb_efd);
+ rte_trace_point_emit_ptr(handle);
)
/* Memory */
diff --git a/lib/eal/linux/eal_alarm.c b/lib/eal/linux/eal_alarm.c
index 3252c6fa59..3d4307686c 100644
--- a/lib/eal/linux/eal_alarm.c
+++ b/lib/eal/linux/eal_alarm.c
@@ -54,22 +54,36 @@ struct alarm_entry {
static LIST_HEAD(alarm_list, alarm_entry) alarm_list = LIST_HEAD_INITIALIZER();
static rte_spinlock_t alarm_list_lk = RTE_SPINLOCK_INITIALIZER;
-static struct rte_intr_handle intr_handle = {.fd = -1 };
+static struct rte_intr_handle *intr_handle;
static int handler_registered = 0;
static void eal_alarm_callback(void *arg);
int
rte_eal_alarm_init(void)
{
- intr_handle.type = RTE_INTR_HANDLE_ALARM;
+
+ intr_handle = rte_intr_instance_alloc(RTE_INTR_INSTANCE_F_UNSHARED);
+ if (!intr_handle) {
+ RTE_LOG(ERR, EAL, "Fail to allocate intr_handle\n");
+ goto error;
+ }
+
+ if (rte_intr_type_set(intr_handle, RTE_INTR_HANDLE_ALARM))
+ goto error;
+
/* create a timerfd file descriptor */
- intr_handle.fd = timerfd_create(CLOCK_MONOTONIC, TFD_NONBLOCK);
- if (intr_handle.fd == -1)
+ if (rte_intr_fd_set(intr_handle,
+ timerfd_create(CLOCK_MONOTONIC, TFD_NONBLOCK)))
goto error;
+ if (rte_intr_fd_get(intr_handle) == -1)
+ goto error;
return 0;
error:
+ if (intr_handle)
+ rte_intr_instance_free(intr_handle);
+
rte_errno = errno;
return -1;
}
@@ -109,7 +123,8 @@ eal_alarm_callback(void *arg __rte_unused)
atime.it_value.tv_sec -= now.tv_sec;
atime.it_value.tv_nsec -= now.tv_nsec;
- timerfd_settime(intr_handle.fd, 0, &atime, NULL);
+ timerfd_settime(rte_intr_fd_get(intr_handle), 0, &atime,
+ NULL);
}
rte_spinlock_unlock(&alarm_list_lk);
}
@@ -140,7 +155,7 @@ rte_eal_alarm_set(uint64_t us, rte_eal_alarm_callback cb_fn, void *cb_arg)
rte_spinlock_lock(&alarm_list_lk);
if (!handler_registered) {
/* registration can fail, callback can be registered later */
- if (rte_intr_callback_register(&intr_handle,
+ if (rte_intr_callback_register(intr_handle,
eal_alarm_callback, NULL) == 0)
handler_registered = 1;
}
@@ -170,7 +185,8 @@ rte_eal_alarm_set(uint64_t us, rte_eal_alarm_callback cb_fn, void *cb_arg)
.tv_nsec = (us % US_PER_S) * NS_PER_US,
},
};
- ret |= timerfd_settime(intr_handle.fd, 0, &alarm_time, NULL);
+ ret |= timerfd_settime(rte_intr_fd_get(intr_handle), 0,
+ &alarm_time, NULL);
}
rte_spinlock_unlock(&alarm_list_lk);
diff --git a/lib/eal/linux/eal_dev.c b/lib/eal/linux/eal_dev.c
index 3b905e18f5..7a64ec04b4 100644
--- a/lib/eal/linux/eal_dev.c
+++ b/lib/eal/linux/eal_dev.c
@@ -23,10 +23,7 @@
#include "eal_private.h"
-static struct rte_intr_handle intr_handle = {
- .type = RTE_INTR_HANDLE_DEV_EVENT,
- .fd = -1,
-};
+static struct rte_intr_handle *intr_handle;
static rte_rwlock_t monitor_lock = RTE_RWLOCK_INITIALIZER;
static uint32_t monitor_refcount;
static bool hotplug_handle;
@@ -109,12 +106,11 @@ static int
dev_uev_socket_fd_create(void)
{
struct sockaddr_nl addr;
- int ret;
+ int ret, fd;
- intr_handle.fd = socket(PF_NETLINK, SOCK_RAW | SOCK_CLOEXEC |
- SOCK_NONBLOCK,
- NETLINK_KOBJECT_UEVENT);
- if (intr_handle.fd < 0) {
+ fd = socket(PF_NETLINK, SOCK_RAW | SOCK_CLOEXEC | SOCK_NONBLOCK,
+ NETLINK_KOBJECT_UEVENT);
+ if (fd < 0) {
RTE_LOG(ERR, EAL, "create uevent fd failed.\n");
return -1;
}
@@ -124,16 +120,19 @@ dev_uev_socket_fd_create(void)
addr.nl_pid = 0;
addr.nl_groups = 0xffffffff;
- ret = bind(intr_handle.fd, (struct sockaddr *) &addr, sizeof(addr));
+ ret = bind(fd, (struct sockaddr *) &addr, sizeof(addr));
if (ret < 0) {
RTE_LOG(ERR, EAL, "Failed to bind uevent socket.\n");
goto err;
}
+ if (rte_intr_fd_set(intr_handle, fd))
+ goto err;
+
return 0;
err:
- close(intr_handle.fd);
- intr_handle.fd = -1;
+ close(fd);
+ fd = -1;
return ret;
}
@@ -217,9 +216,9 @@ dev_uev_parse(const char *buf, struct rte_dev_event *event, int length)
static void
dev_delayed_unregister(void *param)
{
- rte_intr_callback_unregister(&intr_handle, dev_uev_handler, param);
- close(intr_handle.fd);
- intr_handle.fd = -1;
+ rte_intr_callback_unregister(intr_handle, dev_uev_handler, param);
+ close(rte_intr_fd_get(intr_handle));
+ rte_intr_fd_set(intr_handle, -1);
}
static void
@@ -235,7 +234,8 @@ dev_uev_handler(__rte_unused void *param)
memset(&uevent, 0, sizeof(struct rte_dev_event));
memset(buf, 0, EAL_UEV_MSG_LEN);
- ret = recv(intr_handle.fd, buf, EAL_UEV_MSG_LEN, MSG_DONTWAIT);
+ ret = recv(rte_intr_fd_get(intr_handle), buf, EAL_UEV_MSG_LEN,
+ MSG_DONTWAIT);
if (ret < 0 && errno == EAGAIN)
return;
else if (ret <= 0) {
@@ -311,24 +311,38 @@ rte_dev_event_monitor_start(void)
goto exit;
}
+ intr_handle = rte_intr_instance_alloc(RTE_INTR_INSTANCE_F_UNSHARED);
+ if (!intr_handle) {
+ RTE_LOG(ERR, EAL, "Fail to allocate intr_handle\n");
+ goto exit;
+ }
+
+ if (rte_intr_type_set(intr_handle, RTE_INTR_HANDLE_DEV_EVENT))
+ goto exit;
+
+ if (rte_intr_fd_set(intr_handle, -1))
+ goto exit;
+
ret = dev_uev_socket_fd_create();
if (ret) {
RTE_LOG(ERR, EAL, "error create device event fd.\n");
goto exit;
}
- ret = rte_intr_callback_register(&intr_handle, dev_uev_handler, NULL);
+ ret = rte_intr_callback_register(intr_handle, dev_uev_handler, NULL);
if (ret) {
- RTE_LOG(ERR, EAL, "fail to register uevent callback.\n");
- close(intr_handle.fd);
- intr_handle.fd = -1;
+ close(rte_intr_fd_get(intr_handle));
goto exit;
}
monitor_refcount++;
exit:
+ if (intr_handle) {
+ rte_intr_fd_set(intr_handle, -1);
+ rte_intr_instance_free(intr_handle);
+ }
rte_rwlock_write_unlock(&monitor_lock);
return ret;
}
@@ -350,15 +364,18 @@ rte_dev_event_monitor_stop(void)
goto exit;
}
- ret = rte_intr_callback_unregister(&intr_handle, dev_uev_handler,
+ ret = rte_intr_callback_unregister(intr_handle, dev_uev_handler,
(void *)-1);
if (ret < 0) {
RTE_LOG(ERR, EAL, "fail to unregister uevent callback.\n");
goto exit;
}
- close(intr_handle.fd);
- intr_handle.fd = -1;
+ close(rte_intr_fd_get(intr_handle));
+ rte_intr_fd_set(intr_handle, -1);
+
+ if (intr_handle)
+ rte_intr_instance_free(intr_handle);
monitor_refcount--;
diff --git a/lib/ethdev/ethdev_pci.h b/lib/ethdev/ethdev_pci.h
index 8edca82ce8..eff072ac16 100644
--- a/lib/ethdev/ethdev_pci.h
+++ b/lib/ethdev/ethdev_pci.h
@@ -32,7 +32,7 @@ rte_eth_copy_pci_info(struct rte_eth_dev *eth_dev,
return;
}
- eth_dev->intr_handle = &pci_dev->intr_handle;
+ eth_dev->intr_handle = pci_dev->intr_handle;
if (rte_eal_process_type() == RTE_PROC_PRIMARY) {
eth_dev->data->dev_flags = 0;
diff --git a/lib/ethdev/rte_ethdev.c b/lib/ethdev/rte_ethdev.c
index 1f18aa916c..7894d8369d 100644
--- a/lib/ethdev/rte_ethdev.c
+++ b/lib/ethdev/rte_ethdev.c
@@ -4782,13 +4782,13 @@ rte_eth_dev_rx_intr_ctl(uint16_t port_id, int epfd, int op, void *data)
}
intr_handle = dev->intr_handle;
- if (!intr_handle->intr_vec) {
+ if (rte_intr_vec_list_index_get(intr_handle, 0) < 0) {
RTE_ETHDEV_LOG(ERR, "RX Intr vector unset\n");
return -EPERM;
}
for (qid = 0; qid < dev->data->nb_rx_queues; qid++) {
- vec = intr_handle->intr_vec[qid];
+ vec = rte_intr_vec_list_index_get(intr_handle, qid);
rc = rte_intr_rx_ctl(intr_handle, epfd, op, vec, data);
if (rc && rc != -EEXIST) {
RTE_ETHDEV_LOG(ERR,
@@ -4823,15 +4823,15 @@ rte_eth_dev_rx_intr_ctl_q_get_fd(uint16_t port_id, uint16_t queue_id)
}
intr_handle = dev->intr_handle;
- if (!intr_handle->intr_vec) {
+ if (rte_intr_vec_list_index_get(intr_handle, 0) < 0) {
RTE_ETHDEV_LOG(ERR, "RX Intr vector unset\n");
return -1;
}
- vec = intr_handle->intr_vec[queue_id];
+ vec = rte_intr_vec_list_index_get(intr_handle, queue_id);
efd_idx = (vec >= RTE_INTR_VEC_RXTX_OFFSET) ?
(vec - RTE_INTR_VEC_RXTX_OFFSET) : vec;
- fd = intr_handle->efds[efd_idx];
+ fd = rte_intr_efds_index_get(intr_handle, efd_idx);
return fd;
}
@@ -5009,12 +5009,12 @@ rte_eth_dev_rx_intr_ctl_q(uint16_t port_id, uint16_t queue_id,
}
intr_handle = dev->intr_handle;
- if (!intr_handle->intr_vec) {
+ if (rte_intr_vec_list_index_get(intr_handle, 0) < 0) {
RTE_ETHDEV_LOG(ERR, "RX Intr vector unset\n");
return -EPERM;
}
- vec = intr_handle->intr_vec[queue_id];
+ vec = rte_intr_vec_list_index_get(intr_handle, queue_id);
rc = rte_intr_rx_ctl(intr_handle, epfd, op, vec, data);
if (rc && rc != -EEXIST) {
RTE_ETHDEV_LOG(ERR,
--
2.18.0
^ permalink raw reply [flat|nested] 152+ messages in thread
* [dpdk-dev] [PATCH v5 5/6] eal/interrupts: make interrupt handle structure opaque
2021-10-22 20:49 ` [dpdk-dev] [PATCH v5 0/6] make rte_intr_handle internal Harman Kalra
` (3 preceding siblings ...)
2021-10-22 20:49 ` [dpdk-dev] [PATCH v5 4/6] drivers: remove direct access to interrupt handle Harman Kalra
@ 2021-10-22 20:49 ` Harman Kalra
2021-10-22 23:33 ` Dmitry Kozlyuk
2021-10-22 20:49 ` [dpdk-dev] [PATCH v5 6/6] eal/alarm: introduce alarm fini routine Harman Kalra
` (4 subsequent siblings)
9 siblings, 1 reply; 152+ messages in thread
From: Harman Kalra @ 2021-10-22 20:49 UTC (permalink / raw)
To: dev, Anatoly Burakov, Harman Kalra
Cc: david.marchand, dmitry.kozliuk, mdr, thomas
Moving interrupt handle structure definition inside the c file
to make its fields totally opaque to the outside world.
Dynamically allocating the efds and elist arrays of the intr_handle
structure, based on a size provided by the user. E.g. the size can be
the number of MSI-X interrupts supported by a PCI device.
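For illustration only (not part of this patch), a rough sketch of how a
bus driver could size the event lists from the device MSI-X count using
the APIs introduced here; msix_count is a placeholder value read from
the PCI capability:
	struct rte_intr_handle *h =
		rte_intr_instance_alloc(RTE_INTR_INSTANCE_F_UNSHARED);
	if (h == NULL)
		return -ENOMEM;
	/* Grow efds/elist from the default RTE_MAX_RXTX_INTR_VEC_ID
	 * slots to the number of MSI-X vectors of the device.
	 */
	if (rte_intr_event_list_update(h, msix_count)) {
		rte_intr_instance_free(h);
		return -rte_errno;
	}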
Signed-off-by: Harman Kalra <hkalra@marvell.com>
---
drivers/bus/pci/linux/pci_vfio.c | 7 +
lib/eal/common/eal_common_interrupts.c | 190 +++++++++++++++++++++++--
lib/eal/include/meson.build | 1 -
lib/eal/include/rte_eal_interrupts.h | 72 ----------
lib/eal/include/rte_interrupts.h | 22 ++-
5 files changed, 209 insertions(+), 83 deletions(-)
delete mode 100644 lib/eal/include/rte_eal_interrupts.h
diff --git a/drivers/bus/pci/linux/pci_vfio.c b/drivers/bus/pci/linux/pci_vfio.c
index c8da3e2fe8..f274aa4aab 100644
--- a/drivers/bus/pci/linux/pci_vfio.c
+++ b/drivers/bus/pci/linux/pci_vfio.c
@@ -266,6 +266,13 @@ pci_vfio_setup_interrupts(struct rte_pci_device *dev, int vfio_dev_fd)
return -1;
}
+ /* Reallocate the efds and elist fields of intr_handle based
+ * on PCI device MSIX size.
+ */
+ if (rte_intr_event_list_update(dev->intr_handle,
+ irq.count))
+ return -1;
+
/* if this vector cannot be used with eventfd, fail if we explicitly
* specified interrupt type, otherwise continue */
if ((irq.flags & VFIO_IRQ_INFO_EVENTFD) == 0) {
diff --git a/lib/eal/common/eal_common_interrupts.c b/lib/eal/common/eal_common_interrupts.c
index 618782e9cc..ea98058ac4 100644
--- a/lib/eal/common/eal_common_interrupts.c
+++ b/lib/eal/common/eal_common_interrupts.c
@@ -26,6 +26,29 @@
#define IS_RTE_MEMORY(intr_handle) \
!!(intr_handle->alloc_flag & RTE_INTR_INSTANCE_F_SHARED)
+struct rte_intr_handle {
+ RTE_STD_C11
+ union {
+ struct {
+ /** VFIO/UIO cfg device file descriptor */
+ int dev_fd;
+ int fd; /**< interrupt event file descriptor */
+ };
+ void *windows_handle; /**< device driver handle */
+ };
+ uint32_t alloc_flag; /** Interrupt instance alloc flag */
+ enum rte_intr_handle_type type; /**< handle type */
+ uint32_t max_intr; /**< max interrupt requested */
+ uint32_t nb_efd; /**< number of available efd(event fd) */
+ uint8_t efd_counter_size; /**< size of efd counter, used for vdev */
+ uint16_t nb_intr;
+ /**< Max vector count, default RTE_MAX_RXTX_INTR_VEC_ID */
+ int *efds; /**< intr vectors/efds mapping */
+ struct rte_epoll_event *elist; /**< intr vector epoll event */
+ uint16_t vec_list_size;
+ int *intr_vec; /**< intr vector number array */
+};
+
struct rte_intr_handle *rte_intr_instance_alloc(uint32_t flags)
{
struct rte_intr_handle *intr_handle;
@@ -52,16 +75,52 @@ struct rte_intr_handle *rte_intr_instance_alloc(uint32_t flags)
return NULL;
}
+ if (is_rte_memory)
+ intr_handle->efds = rte_zmalloc(NULL,
+ RTE_MAX_RXTX_INTR_VEC_ID *
+ sizeof(uint32_t), 0);
+ else
+ intr_handle->efds = calloc(RTE_MAX_RXTX_INTR_VEC_ID,
+ sizeof(uint32_t));
+ if (!intr_handle->efds) {
+ RTE_LOG(ERR, EAL, "Fail to allocate event fd list\n");
+ rte_errno = ENOMEM;
+ goto fail;
+ }
+
+ if (is_rte_memory)
+ intr_handle->elist =
+ rte_zmalloc(NULL, RTE_MAX_RXTX_INTR_VEC_ID *
+ sizeof(struct rte_epoll_event), 0);
+ else
+ intr_handle->elist = calloc(RTE_MAX_RXTX_INTR_VEC_ID,
+ sizeof(struct rte_epoll_event));
+ if (!intr_handle->elist) {
+ RTE_LOG(ERR, EAL, "fail to allocate event fd list\n");
+ rte_errno = ENOMEM;
+ goto fail;
+ }
+
intr_handle->nb_intr = RTE_MAX_RXTX_INTR_VEC_ID;
intr_handle->alloc_flag = flags;
return intr_handle;
+fail:
+ if (is_rte_memory) {
+ rte_free(intr_handle->efds);
+ rte_free(intr_handle);
+ } else {
+ free(intr_handle->efds);
+ free(intr_handle);
+ }
+ return NULL;
}
int rte_intr_instance_copy(struct rte_intr_handle *intr_handle,
const struct rte_intr_handle *src)
{
- uint16_t nb_intr;
+ struct rte_epoll_event *tmp_elist;
+ int *tmp_efds;
CHECK_VALID_INTR_HANDLE(intr_handle);
@@ -72,17 +131,51 @@ int rte_intr_instance_copy(struct rte_intr_handle *intr_handle,
}
intr_handle->fd = src->fd;
- intr_handle->vfio_dev_fd = src->vfio_dev_fd;
+ intr_handle->dev_fd = src->dev_fd;
intr_handle->type = src->type;
+ intr_handle->alloc_flag = src->alloc_flag;
intr_handle->max_intr = src->max_intr;
intr_handle->nb_efd = src->nb_efd;
intr_handle->efd_counter_size = src->efd_counter_size;
- nb_intr = RTE_MIN(src->nb_intr, intr_handle->nb_intr);
- memcpy(intr_handle->efds, src->efds, nb_intr);
- memcpy(intr_handle->elist, src->elist, nb_intr);
+ if (intr_handle->nb_intr != src->nb_intr) {
+ if (IS_RTE_MEMORY(src))
+ tmp_efds = rte_realloc(intr_handle->efds, src->nb_intr *
+ sizeof(uint32_t), 0);
+ else
+ tmp_efds = realloc(intr_handle->efds, src->nb_intr *
+ sizeof(uint32_t));
+ if (tmp_efds == NULL) {
+ RTE_LOG(ERR, EAL, "Failed to realloc the efds list");
+ rte_errno = ENOMEM;
+ goto fail;
+ }
+
+ if (IS_RTE_MEMORY(src))
+ tmp_elist = rte_realloc(intr_handle->elist,
+ src->nb_intr *
+ sizeof(struct rte_epoll_event),
+ 0);
+ else
+ tmp_elist = realloc(intr_handle->elist, src->nb_intr *
+ sizeof(struct rte_epoll_event));
+ if (tmp_elist == NULL) {
+ RTE_LOG(ERR, EAL, "Failed to realloc the event list");
+ rte_errno = ENOMEM;
+ goto up_efds;
+ }
+
+ intr_handle->efds = tmp_efds;
+ intr_handle->elist = tmp_elist;
+ intr_handle->nb_intr = src->nb_intr;
+ }
+
+ memcpy(intr_handle->efds, src->efds, src->nb_intr);
+ memcpy(intr_handle->elist, src->elist, src->nb_intr);
return 0;
+up_efds:
+ intr_handle->efds = tmp_efds;
fail:
return -rte_errno;
}
@@ -96,13 +189,68 @@ int rte_intr_instance_alloc_flag_get(const struct rte_intr_handle *intr_handle)
return -rte_errno;
}
+int rte_intr_event_list_update(struct rte_intr_handle *intr_handle,
+ int size)
+{
+ struct rte_epoll_event *tmp_elist;
+ int *tmp_efds;
+
+ CHECK_VALID_INTR_HANDLE(intr_handle);
+
+ if (size == 0) {
+ RTE_LOG(ERR, EAL, "Size can't be zero\n");
+ rte_errno = EINVAL;
+ goto fail;
+ }
+
+ if (IS_RTE_MEMORY(intr_handle))
+ tmp_efds = rte_realloc(intr_handle->efds, size *
+ sizeof(uint32_t), 0);
+ else
+ tmp_efds = realloc(intr_handle->efds, size *
+ sizeof(uint32_t));
+ if (tmp_efds == NULL) {
+ RTE_LOG(ERR, EAL, "Failed to realloc the efds list");
+ rte_errno = ENOMEM;
+ goto fail;
+ }
+
+ if (IS_RTE_MEMORY(intr_handle))
+ tmp_elist = rte_realloc(intr_handle->elist, size *
+ sizeof(struct rte_epoll_event),
+ 0);
+ else
+ tmp_elist = realloc(intr_handle->elist, size *
+ sizeof(struct rte_epoll_event));
+ if (tmp_elist == NULL) {
+ RTE_LOG(ERR, EAL, "Failed to realloc the event list");
+ rte_errno = ENOMEM;
+ goto up_efds;
+ }
+
+ intr_handle->efds = tmp_efds;
+ intr_handle->elist = tmp_elist;
+ intr_handle->nb_intr = size;
+
+ return 0;
+up_efds:
+ intr_handle->efds = tmp_efds;
+fail:
+ return -rte_errno;
+}
+
void rte_intr_instance_free(struct rte_intr_handle *intr_handle)
{
if (intr_handle != NULL) {
- if (IS_RTE_MEMORY(intr_handle) != 0)
+ if (IS_RTE_MEMORY(intr_handle)) {
+ rte_free(intr_handle->efds);
+ rte_free(intr_handle->elist);
rte_free(intr_handle);
- else
+ } else {
+ free(intr_handle->efds);
+ free(intr_handle->elist);
free(intr_handle);
+ }
}
}
@@ -152,7 +300,7 @@ int rte_intr_dev_fd_set(struct rte_intr_handle *intr_handle, int fd)
{
CHECK_VALID_INTR_HANDLE(intr_handle);
- intr_handle->vfio_dev_fd = fd;
+ intr_handle->dev_fd = fd;
return 0;
fail:
@@ -163,7 +311,7 @@ int rte_intr_dev_fd_get(const struct rte_intr_handle *intr_handle)
{
CHECK_VALID_INTR_HANDLE(intr_handle);
- return intr_handle->vfio_dev_fd;
+ return intr_handle->dev_fd;
fail:
return -1;
}
@@ -253,6 +401,12 @@ int rte_intr_efds_index_get(const struct rte_intr_handle *intr_handle,
{
CHECK_VALID_INTR_HANDLE(intr_handle);
+ if (!intr_handle->efds) {
+ RTE_LOG(ERR, EAL, "Event fd list not allocated\n");
+ rte_errno = EFAULT;
+ goto fail;
+ }
+
if (index >= intr_handle->nb_intr) {
RTE_LOG(ERR, EAL, "Invalid size %d, max limit %d\n", index,
intr_handle->nb_intr);
@@ -270,6 +424,12 @@ int rte_intr_efds_index_set(struct rte_intr_handle *intr_handle,
{
CHECK_VALID_INTR_HANDLE(intr_handle);
+ if (!intr_handle->efds) {
+ RTE_LOG(ERR, EAL, "Event fd list not allocated\n");
+ rte_errno = EFAULT;
+ goto fail;
+ }
+
if (index >= intr_handle->nb_intr) {
RTE_LOG(ERR, EAL, "Invalid size %d, max limit %d\n", index,
intr_handle->nb_intr);
@@ -289,6 +449,12 @@ struct rte_epoll_event *rte_intr_elist_index_get(
{
CHECK_VALID_INTR_HANDLE(intr_handle);
+ if (!intr_handle->elist) {
+ RTE_LOG(ERR, EAL, "Event list not allocated\n");
+ rte_errno = EFAULT;
+ goto fail;
+ }
+
if (index >= intr_handle->nb_intr) {
RTE_LOG(ERR, EAL, "Invalid index %d, max limit %d\n", index,
intr_handle->nb_intr);
@@ -306,6 +472,12 @@ int rte_intr_elist_index_set(struct rte_intr_handle *intr_handle,
{
CHECK_VALID_INTR_HANDLE(intr_handle);
+ if (!intr_handle->elist) {
+ RTE_LOG(ERR, EAL, "Event list not allocated\n");
+ rte_errno = EFAULT;
+ goto fail;
+ }
+
if (index >= intr_handle->nb_intr) {
RTE_LOG(ERR, EAL, "Invalid index %d, max limit %d\n", index,
intr_handle->nb_intr);
diff --git a/lib/eal/include/meson.build b/lib/eal/include/meson.build
index 8e258607b8..86468d1a2b 100644
--- a/lib/eal/include/meson.build
+++ b/lib/eal/include/meson.build
@@ -49,7 +49,6 @@ headers += files(
'rte_version.h',
'rte_vfio.h',
)
-indirect_headers += files('rte_eal_interrupts.h')
# special case install the generic headers, since they go in a subdir
generic_headers = files(
diff --git a/lib/eal/include/rte_eal_interrupts.h b/lib/eal/include/rte_eal_interrupts.h
deleted file mode 100644
index 26c6300826..0000000000
--- a/lib/eal/include/rte_eal_interrupts.h
+++ /dev/null
@@ -1,72 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(c) 2010-2014 Intel Corporation
- */
-
-#ifndef _RTE_INTERRUPTS_H_
-#error "don't include this file directly, please include generic <rte_interrupts.h>"
-#endif
-
-/**
- * @file rte_eal_interrupts.h
- * @internal
- *
- * Contains function prototypes exposed by the EAL for interrupt handling by
- * drivers and other DPDK internal consumers.
- */
-
-#ifndef _RTE_EAL_INTERRUPTS_H_
-#define _RTE_EAL_INTERRUPTS_H_
-
-#define RTE_MAX_RXTX_INTR_VEC_ID 512
-#define RTE_INTR_VEC_ZERO_OFFSET 0
-#define RTE_INTR_VEC_RXTX_OFFSET 1
-
-/**
- * The interrupt source type, e.g. UIO, VFIO, ALARM etc.
- */
-enum rte_intr_handle_type {
- RTE_INTR_HANDLE_UNKNOWN = 0, /**< generic unknown handle */
- RTE_INTR_HANDLE_UIO, /**< uio device handle */
- RTE_INTR_HANDLE_UIO_INTX, /**< uio generic handle */
- RTE_INTR_HANDLE_VFIO_LEGACY, /**< vfio device handle (legacy) */
- RTE_INTR_HANDLE_VFIO_MSI, /**< vfio device handle (MSI) */
- RTE_INTR_HANDLE_VFIO_MSIX, /**< vfio device handle (MSIX) */
- RTE_INTR_HANDLE_ALARM, /**< alarm handle */
- RTE_INTR_HANDLE_EXT, /**< external handler */
- RTE_INTR_HANDLE_VDEV, /**< virtual device */
- RTE_INTR_HANDLE_DEV_EVENT, /**< device event handle */
- RTE_INTR_HANDLE_VFIO_REQ, /**< VFIO request handle */
- RTE_INTR_HANDLE_MAX /**< count of elements */
-};
-
-/** Handle for interrupts. */
-struct rte_intr_handle {
- RTE_STD_C11
- union {
- struct {
- RTE_STD_C11
- union {
- /** VFIO device file descriptor */
- int vfio_dev_fd;
- /** UIO cfg file desc for uio_pci_generic */
- int uio_cfg_fd;
- };
- int fd; /**< interrupt event file descriptor */
- };
- void *windows_handle; /**< device driver handle */
- };
- uint32_t alloc_flag; /** Interrupt Instance allocation flag */
- enum rte_intr_handle_type type; /**< handle type */
- uint32_t max_intr; /**< max interrupt requested */
- uint32_t nb_efd; /**< number of available efd(event fd) */
- uint8_t efd_counter_size; /**< size of efd counter, used for vdev */
- uint16_t nb_intr;
- /**< Max vector count, default RTE_MAX_RXTX_INTR_VEC_ID */
- int efds[RTE_MAX_RXTX_INTR_VEC_ID]; /**< intr vectors/efds mapping */
- struct rte_epoll_event elist[RTE_MAX_RXTX_INTR_VEC_ID];
- /**< intr vector epoll event */
- uint16_t vec_list_size;
- int *intr_vec; /**< intr vector number array */
-};
-
-#endif /* _RTE_EAL_INTERRUPTS_H_ */
diff --git a/lib/eal/include/rte_interrupts.h b/lib/eal/include/rte_interrupts.h
index a29232e16a..fc6b2d1210 100644
--- a/lib/eal/include/rte_interrupts.h
+++ b/lib/eal/include/rte_interrupts.h
@@ -33,7 +33,27 @@ struct rte_intr_handle;
/** Interrupt instance could be shared within primary secondary process. */
#define RTE_INTR_INSTANCE_F_SHARED 0x00000002
-#include "rte_eal_interrupts.h"
+#define RTE_MAX_RXTX_INTR_VEC_ID 512
+#define RTE_INTR_VEC_ZERO_OFFSET 0
+#define RTE_INTR_VEC_RXTX_OFFSET 1
+
+/**
+ * The interrupt source type, e.g. UIO, VFIO, ALARM etc.
+ */
+enum rte_intr_handle_type {
+ RTE_INTR_HANDLE_UNKNOWN = 0, /**< generic unknown handle */
+ RTE_INTR_HANDLE_UIO, /**< uio device handle */
+ RTE_INTR_HANDLE_UIO_INTX, /**< uio generic handle */
+ RTE_INTR_HANDLE_VFIO_LEGACY, /**< vfio device handle (legacy) */
+ RTE_INTR_HANDLE_VFIO_MSI, /**< vfio device handle (MSI) */
+ RTE_INTR_HANDLE_VFIO_MSIX, /**< vfio device handle (MSIX) */
+ RTE_INTR_HANDLE_ALARM, /**< alarm handle */
+ RTE_INTR_HANDLE_EXT, /**< external handler */
+ RTE_INTR_HANDLE_VDEV, /**< virtual device */
+ RTE_INTR_HANDLE_DEV_EVENT, /**< device event handle */
+ RTE_INTR_HANDLE_VFIO_REQ, /**< VFIO request handle */
+ RTE_INTR_HANDLE_MAX /**< count of elements */
+};
/** Function to be registered for the specific interrupt */
typedef void (*rte_intr_callback_fn)(void *cb_arg);
--
2.18.0
^ permalink raw reply [flat|nested] 152+ messages in thread
* [dpdk-dev] [PATCH v5 6/6] eal/alarm: introduce alarm fini routine
2021-10-22 20:49 ` [dpdk-dev] [PATCH v5 0/6] make rte_intr_handle internal Harman Kalra
` (4 preceding siblings ...)
2021-10-22 20:49 ` [dpdk-dev] [PATCH v5 5/6] eal/interrupts: make interrupt handle structure opaque Harman Kalra
@ 2021-10-22 20:49 ` Harman Kalra
2021-10-22 23:33 ` Dmitry Kozlyuk
2021-10-24 20:04 ` [dpdk-dev] [PATCH v6 0/9] make rte_intr_handle internal David Marchand
` (3 subsequent siblings)
9 siblings, 1 reply; 152+ messages in thread
From: Harman Kalra @ 2021-10-22 20:49 UTC (permalink / raw)
To: dev, Bruce Richardson
Cc: david.marchand, dmitry.kozliuk, mdr, thomas, Harman Kalra
Implement an alarm cleanup routine where the memory allocated for the
interrupt instance can be freed.
Signed-off-by: Harman Kalra <hkalra@marvell.com>
---
lib/eal/common/eal_private.h | 11 +++++++++++
lib/eal/freebsd/eal.c | 1 +
lib/eal/freebsd/eal_alarm.c | 7 +++++++
lib/eal/linux/eal.c | 1 +
lib/eal/linux/eal_alarm.c | 7 +++++++
5 files changed, 27 insertions(+)
diff --git a/lib/eal/common/eal_private.h b/lib/eal/common/eal_private.h
index 86dab1f057..7fb9bc1324 100644
--- a/lib/eal/common/eal_private.h
+++ b/lib/eal/common/eal_private.h
@@ -163,6 +163,17 @@ int rte_eal_intr_init(void);
*/
int rte_eal_alarm_init(void);
+/**
+ * Init alarm mechanism. This is to allow a callback be called after
+ * specific time.
+ *
+ * This function is private to EAL.
+ *
+ * @return
+ * 0 on success, negative on error
+ */
+void rte_eal_alarm_fini(void);
+
/**
* Function is to check if the kernel module(like, vfio, vfio_iommu_type1,
* etc.) loaded.
diff --git a/lib/eal/freebsd/eal.c b/lib/eal/freebsd/eal.c
index 56a60f13e9..535ea687ca 100644
--- a/lib/eal/freebsd/eal.c
+++ b/lib/eal/freebsd/eal.c
@@ -977,6 +977,7 @@ rte_eal_cleanup(void)
rte_eal_memory_detach();
rte_trace_save();
eal_trace_fini();
+ rte_eal_alarm_fini();
eal_cleanup_config(internal_conf);
return 0;
}
diff --git a/lib/eal/freebsd/eal_alarm.c b/lib/eal/freebsd/eal_alarm.c
index 1f2ea58175..cf706f609f 100644
--- a/lib/eal/freebsd/eal_alarm.c
+++ b/lib/eal/freebsd/eal_alarm.c
@@ -46,6 +46,13 @@ static rte_spinlock_t alarm_list_lk = RTE_SPINLOCK_INITIALIZER;
static struct rte_intr_handle *intr_handle;
static void eal_alarm_callback(void *arg);
+void
+rte_eal_alarm_fini(void)
+{
+ if (intr_handle)
+ rte_intr_instance_free(intr_handle);
+}
+
int
rte_eal_alarm_init(void)
{
diff --git a/lib/eal/linux/eal.c b/lib/eal/linux/eal.c
index 0d0fc66668..806158f297 100644
--- a/lib/eal/linux/eal.c
+++ b/lib/eal/linux/eal.c
@@ -1370,6 +1370,7 @@ rte_eal_cleanup(void)
rte_eal_memory_detach();
rte_trace_save();
eal_trace_fini();
+ rte_eal_alarm_fini();
eal_cleanup_config(internal_conf);
return 0;
}
diff --git a/lib/eal/linux/eal_alarm.c b/lib/eal/linux/eal_alarm.c
index 3d4307686c..c3a3c943a8 100644
--- a/lib/eal/linux/eal_alarm.c
+++ b/lib/eal/linux/eal_alarm.c
@@ -58,6 +58,13 @@ static struct rte_intr_handle *intr_handle;
static int handler_registered = 0;
static void eal_alarm_callback(void *arg);
+void
+rte_eal_alarm_fini(void)
+{
+ if (intr_handle)
+ rte_intr_instance_free(intr_handle);
+}
+
int
rte_eal_alarm_init(void)
{
--
2.18.0
^ permalink raw reply [flat|nested] 152+ messages in thread
* Re: [dpdk-dev] [PATCH v5 1/6] eal/interrupts: implement get set APIs
2021-10-22 20:49 ` [dpdk-dev] [PATCH v5 1/6] eal/interrupts: implement get set APIs Harman Kalra
@ 2021-10-22 23:33 ` Dmitry Kozlyuk
0 siblings, 0 replies; 152+ messages in thread
From: Dmitry Kozlyuk @ 2021-10-22 23:33 UTC (permalink / raw)
To: Harman Kalra; +Cc: dev, Thomas Monjalon, Ray Kinsella, david.marchand
2021-10-23 02:19 (UTC+0530), Harman Kalra:
> Prototype/Implement get set APIs for interrupt handle fields.
> User wont be able to access any of the interrupt handle fields
> directly while should use these get/set APIs to access/manipulate
> them.
>
> Internal interrupt header i.e. rte_eal_interrupt.h is rearranged,
> as APIs defined are moved to rte_interrupts.h and epoll specific
> definitions are moved to a new header rte_epoll.h.
> Later in the series rte_eal_interrupt.h will be removed.
>
> Signed-off-by: Harman Kalra <hkalra@marvell.com>
> Acked-by: Ray Kinsella <mdr@ashroe.eu>
Hi Harman,
After fixing the comments below, feel free to add:
Acked-by: Dmitry Kozlyuk <dmitry.kozliuk@gmail.com>
> ---
> MAINTAINERS | 1 +
> lib/eal/common/eal_common_interrupts.c | 421 ++++++++++++++++
> lib/eal/common/meson.build | 1 +
> lib/eal/include/meson.build | 1 +
> lib/eal/include/rte_eal_interrupts.h | 209 +-------
> lib/eal/include/rte_epoll.h | 118 +++++
> lib/eal/include/rte_interrupts.h | 648 ++++++++++++++++++++++++-
> lib/eal/version.map | 46 +-
> 8 files changed, 1232 insertions(+), 213 deletions(-)
> create mode 100644 lib/eal/common/eal_common_interrupts.c
> create mode 100644 lib/eal/include/rte_epoll.h
>
> diff --git a/MAINTAINERS b/MAINTAINERS
> index 04ea23a04a..d2950400d2 100644
> --- a/MAINTAINERS
> +++ b/MAINTAINERS
> @@ -211,6 +211,7 @@ F: app/test/test_memzone.c
>
> Interrupt Subsystem
> M: Harman Kalra <hkalra@marvell.com>
> +F: lib/eal/include/rte_epoll.h
> F: lib/eal/*/*interrupts.*
> F: app/test/test_interrupts.c
>
> diff --git a/lib/eal/common/eal_common_interrupts.c b/lib/eal/common/eal_common_interrupts.c
> [...]
> +int rte_intr_efds_index_get(const struct rte_intr_handle *intr_handle,
> + int index)
> +{
> + CHECK_VALID_INTR_HANDLE(intr_handle);
> +
> + if (index >= intr_handle->nb_intr) {
> + RTE_LOG(ERR, EAL, "Invalid size %d, max limit %d\n", index,
"size" -> "index"
> + intr_handle->nb_intr);
> + rte_errno = EINVAL;
> + goto fail;
> + }
> +
> + return intr_handle->efds[index];
> +fail:
> + return -rte_errno;
> +}
> [...]
> +int rte_intr_vec_list_alloc(struct rte_intr_handle *intr_handle,
> + const char *name, int size)
> +{
> + CHECK_VALID_INTR_HANDLE(intr_handle);
> +
> + /* Vector list already allocated */
> + if (intr_handle->intr_vec != NULL)
> + return 0;
What if `size > intr_handle->vec_list_size`?
> +
> + if (size > intr_handle->nb_intr) {
> + RTE_LOG(ERR, EAL, "Invalid size %d, max limit %d\n", size,
> + intr_handle->nb_intr);
> + rte_errno = ERANGE;
> + goto fail;
> + }
> +
> + intr_handle->intr_vec = rte_zmalloc(name, size * sizeof(int), 0);
> + if (intr_handle->intr_vec == NULL) {
> + RTE_LOG(ERR, EAL, "Failed to allocate %d intr_vec", size);
> + rte_errno = ENOMEM;
> + goto fail;
> + }
> +
> + intr_handle->vec_list_size = size;
> +
> + return 0;
> +fail:
> + return -rte_errno;
> +}
> [...]
> diff --git a/lib/eal/include/rte_interrupts.h b/lib/eal/include/rte_interrupts.h
> index cc3bf45d8c..a29232e16a 100644
> --- a/lib/eal/include/rte_interrupts.h
> +++ b/lib/eal/include/rte_interrupts.h
> @@ -5,8 +5,11 @@
> #ifndef _RTE_INTERRUPTS_H_
> #define _RTE_INTERRUPTS_H_
>
> +#include <stdbool.h>
> +
> #include <rte_common.h>
> #include <rte_compat.h>
> +#include <rte_epoll.h>
>
> /**
> * @file
> @@ -22,6 +25,16 @@ extern "C" {
> /** Interrupt handle */
> struct rte_intr_handle;
>
> +/** Interrupt instance allocation flags
> + * @see rte_intr_instance_alloc
> + */
> +/** Interrupt instance would not be shared within primary secondary process. */
> +#define RTE_INTR_INSTANCE_F_UNSHARED 0x00000001
> +/** Interrupt instance could be shared within primary secondary process. */
> +#define RTE_INTR_INSTANCE_F_SHARED 0x00000002
Nits:
"would not" -> "will not"
"could" -> "will be"
"within primary secondary process" ->
"between primary and secondary processes"
You previously suggested PRIVATE instead of UNSHARED;
it sounded better, but no strong opinion.
> [...]
> +/**
> + * @internal
> + * This API is used to populate interrupt handle with src handler fields.
"handler" -> "handle"
> + *
> + * @param intr_handle
> + * Interrupt handle pointer.
> + * @param src
> + * Source interrupt handle to be cloned.
> + *
> + * @return
> + * - On success, zero.
> + * - On failure, a negative value and rte_errno is set.
> + */
> +__rte_internal
> +int
> +rte_intr_instance_copy(struct rte_intr_handle *intr_handle,
> + const struct rte_intr_handle *src);
> [...]
> +/**
> + * @internal
> + * This API returns the sources from where memory is allocated for interrupt
> + * instance.
> + *
> + * @param intr_handle
> + * pointer to the interrupt handle.
> + *
> + * @return
> + * - On success, 1 corresponds to memory allocated via DPDK allocator APIs
> + * - On success, 0 corresponds to memory allocated from traditional heap.
> + * - On failure, negative value.
> + */
> +__rte_internal
> +int
> +rte_intr_instance_alloc_flag_get(const struct rte_intr_handle *intr_handle);
It returns the flags passed on allocation,
so the return value type and description are inaccurate.
It should be uint32_t, with a reference to the flag constants.
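E.g. (a sketch only, keeping the current name):
	__rte_internal
	uint32_t
	rte_intr_instance_alloc_flag_get(const struct rte_intr_handle *intr_handle);
so that callers can test bits directly against
RTE_INTR_INSTANCE_F_SHARED / RTE_INTR_INSTANCE_F_UNSHARED.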
^ permalink raw reply [flat|nested] 152+ messages in thread
* Re: [dpdk-dev] [PATCH v5 2/6] eal/interrupts: avoid direct access to interrupt handle
2021-10-22 20:49 ` [dpdk-dev] [PATCH v5 2/6] eal/interrupts: avoid direct access to interrupt handle Harman Kalra
@ 2021-10-22 23:33 ` Dmitry Kozlyuk
0 siblings, 0 replies; 152+ messages in thread
From: Dmitry Kozlyuk @ 2021-10-22 23:33 UTC (permalink / raw)
To: Harman Kalra; +Cc: dev, Bruce Richardson, david.marchand, mdr, thomas
2021-10-23 02:19 (UTC+0530), Harman Kalra:
> Making changes to the interrupt framework to use interrupt handle
> APIs to get/set any field.
>
> Signed-off-by: Harman Kalra <hkalra@marvell.com>
> ---
> lib/eal/freebsd/eal_interrupts.c | 112 ++++++++----
> lib/eal/linux/eal_interrupts.c | 303 +++++++++++++++++++------------
> 2 files changed, 268 insertions(+), 147 deletions(-)
>
> diff --git a/lib/eal/freebsd/eal_interrupts.c b/lib/eal/freebsd/eal_interrupts.c
> [...]
> + /* src->interrupt instance memory allocated
> + * depends on from where intr_handle memory
> + * is allocated.
> + */
> + is_rte_memory =
> + !!(rte_intr_instance_alloc_flag_get(
> + intr_handle) & RTE_INTR_INSTANCE_F_SHARED);
> + if (is_rte_memory == 0)
> + src->intr_handle =
> + rte_intr_instance_alloc(
> + RTE_INTR_INSTANCE_F_UNSHARED);
> + else if (is_rte_memory == 1)
> + src->intr_handle =
> + rte_intr_instance_alloc(
> + RTE_INTR_INSTANCE_F_SHARED);
> + else
> + RTE_LOG(ERR, EAL, "Failed to get mem allocator\n");
Why not just get the flags and use them as-is to allocate a new instance?
If you care to use only these flags even if there are others,
a mask can be used.
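For example, something along these lines (just a sketch of the idea,
error handling kept as in the patch):
	src->intr_handle = rte_intr_instance_alloc(
			rte_intr_instance_alloc_flag_get(intr_handle));
	if (src->intr_handle == NULL) {
		RTE_LOG(ERR, EAL, "Can not create intr instance\n");
		free(callback);
		ret = -ENOMEM;
	}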
> +
> + if (src->intr_handle == NULL) {
> + RTE_LOG(ERR, EAL, "Can not create intr instance\n");
> + free(callback);
> + ret = -ENOMEM;
> + goto fail;
> + } else {
> + rte_intr_instance_copy(src->intr_handle,
> + intr_handle);
> + TAILQ_INIT(&src->callbacks);
> + TAILQ_INSERT_TAIL(&intr_sources, src,
> + next);
> + }
> }
> }
> [...]
> diff --git a/lib/eal/linux/eal_interrupts.c b/lib/eal/linux/eal_interrupts.c
> [...]
> @@ -522,12 +547,35 @@ rte_intr_callback_register(const struct rte_intr_handle *intr_handle,
> free(callback);
> ret = -ENOMEM;
> } else {
> - src->intr_handle = *intr_handle;
> - TAILQ_INIT(&src->callbacks);
> - TAILQ_INSERT_TAIL(&(src->callbacks), callback, next);
> - TAILQ_INSERT_TAIL(&intr_sources, src, next);
> - wake_thread = 1;
> - ret = 0;
> + /* src->interrupt instance memory allocated depends on
> + * from where intr_handle memory is allocated.
> + */
> + is_rte_memory =
> + !!(rte_intr_instance_alloc_flag_get(intr_handle) &
> + RTE_INTR_INSTANCE_F_SHARED);
> + if (is_rte_memory == 0)
> + src->intr_handle = rte_intr_instance_alloc(
> + RTE_INTR_INSTANCE_F_UNSHARED);
> + else if (is_rte_memory == 1)
> + src->intr_handle = rte_intr_instance_alloc(
> + RTE_INTR_INSTANCE_F_SHARED);
> + else
> + RTE_LOG(ERR, EAL, "Failed to get mem allocator\n");
Likewise.
> +
> + if (src->intr_handle == NULL) {
> + RTE_LOG(ERR, EAL, "Can not create intr instance\n");
> + free(callback);
> + ret = -ENOMEM;
> + } else {
> + rte_intr_instance_copy(src->intr_handle,
> + intr_handle);
> + TAILQ_INIT(&src->callbacks);
> + TAILQ_INSERT_TAIL(&(src->callbacks), callback,
> + next);
> + TAILQ_INSERT_TAIL(&intr_sources, src, next);
> + wake_thread = 1;
> + ret = 0;
> + }
> }
> }
>
[...]
> + if (intr_handle && rte_intr_type_get(intr_handle) ==
> + RTE_INTR_HANDLE_VDEV)
> return 0;
Nit: you have removed `intr_handle` condition everywhere except here.
^ permalink raw reply [flat|nested] 152+ messages in thread
* Re: [dpdk-dev] [PATCH v5 5/6] eal/interrupts: make interrupt handle structure opaque
2021-10-22 20:49 ` [dpdk-dev] [PATCH v5 5/6] eal/interrupts: make interrupt handle structure opaque Harman Kalra
@ 2021-10-22 23:33 ` Dmitry Kozlyuk
0 siblings, 0 replies; 152+ messages in thread
From: Dmitry Kozlyuk @ 2021-10-22 23:33 UTC (permalink / raw)
To: Harman Kalra, mdr; +Cc: dev, Anatoly Burakov, david.marchand, thomas
2021-10-23 02:19 (UTC+0530), Harman Kalra:
> Moving interrupt handle structure definition inside the c file
> to make its fields totally opaque to the outside world.
>
> Dynamically allocating the efds and elist array os intr_handle
> structure, based on size provided by user. Eg size can be
> MSIX interrupts supported by a PCI device.
>
> Signed-off-by: Harman Kalra <hkalra@marvell.com>
> ---
> drivers/bus/pci/linux/pci_vfio.c | 7 +
> lib/eal/common/eal_common_interrupts.c | 190 +++++++++++++++++++++++--
> lib/eal/include/meson.build | 1 -
> lib/eal/include/rte_eal_interrupts.h | 72 ----------
> lib/eal/include/rte_interrupts.h | 22 ++-
> 5 files changed, 209 insertions(+), 83 deletions(-)
> delete mode 100644 lib/eal/include/rte_eal_interrupts.h
> diff --git a/drivers/bus/pci/linux/pci_vfio.c b/drivers/bus/pci/linux/pci_vfio.c
> [...]
> diff --git a/lib/eal/include/rte_interrupts.h b/lib/eal/include/rte_interrupts.h
> index a29232e16a..fc6b2d1210 100644
> --- a/lib/eal/include/rte_interrupts.h
> +++ b/lib/eal/include/rte_interrupts.h
> @@ -33,7 +33,27 @@ struct rte_intr_handle;
> /** Interrupt instance could be shared within primary secondary process. */
> #define RTE_INTR_INSTANCE_F_SHARED 0x00000002
>
> -#include "rte_eal_interrupts.h"
> +#define RTE_MAX_RXTX_INTR_VEC_ID 512
> +#define RTE_INTR_VEC_ZERO_OFFSET 0
> +#define RTE_INTR_VEC_RXTX_OFFSET 1
> +
> +/**
> + * The interrupt source type, e.g. UIO, VFIO, ALARM etc.
> + */
> +enum rte_intr_handle_type {
> + RTE_INTR_HANDLE_UNKNOWN = 0, /**< generic unknown handle */
> + RTE_INTR_HANDLE_UIO, /**< uio device handle */
> + RTE_INTR_HANDLE_UIO_INTX, /**< uio generic handle */
> + RTE_INTR_HANDLE_VFIO_LEGACY, /**< vfio device handle (legacy) */
> + RTE_INTR_HANDLE_VFIO_MSI, /**< vfio device handle (MSI) */
> + RTE_INTR_HANDLE_VFIO_MSIX, /**< vfio device handle (MSIX) */
> + RTE_INTR_HANDLE_ALARM, /**< alarm handle */
> + RTE_INTR_HANDLE_EXT, /**< external handler */
> + RTE_INTR_HANDLE_VDEV, /**< virtual device */
> + RTE_INTR_HANDLE_DEV_EVENT, /**< device event handle */
> + RTE_INTR_HANDLE_VFIO_REQ, /**< VFIO request handle */
> + RTE_INTR_HANDLE_MAX /**< count of elements */
Wasn't this member going to be removed since v1?
Ray, do you agree?
MAX enum members have been scheduled for removal long ago.
This one even seems unlikely to be used as an array size, so removing it
should not break anything.
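For context, the usual hazard with MAX sentinels is application code sizing
tables with them, e.g. (hypothetical):

	/* the layout of this table changes whenever a new handle type is
	 * appended before RTE_INTR_HANDLE_MAX, hence the ABI concern */
	static const char *handle_type_name[RTE_INTR_HANDLE_MAX];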
> +};
>
> /** Function to be registered for the specific interrupt */
> typedef void (*rte_intr_callback_fn)(void *cb_arg);
^ permalink raw reply [flat|nested] 152+ messages in thread
* Re: [dpdk-dev] [PATCH v5 6/6] eal/alarm: introduce alarm fini routine
2021-10-22 20:49 ` [dpdk-dev] [PATCH v5 6/6] eal/alarm: introduce alarm fini routine Harman Kalra
@ 2021-10-22 23:33 ` Dmitry Kozlyuk
2021-10-22 23:37 ` Dmitry Kozlyuk
0 siblings, 1 reply; 152+ messages in thread
From: Dmitry Kozlyuk @ 2021-10-22 23:33 UTC (permalink / raw)
To: Harman Kalra; +Cc: dev, Bruce Richardson, david.marchand, mdr, thomas
2021-10-23 02:19 (UTC+0530), Harman Kalra:
> Implementing alarm cleanup routine, where the memory allocated
> for interrupt instance can be freed.
>
> Signed-off-by: Harman Kalra <hkalra@marvell.com>
> ---
> lib/eal/common/eal_private.h | 11 +++++++++++
> lib/eal/freebsd/eal.c | 1 +
> lib/eal/freebsd/eal_alarm.c | 7 +++++++
> lib/eal/linux/eal.c | 1 +
> lib/eal/linux/eal_alarm.c | 7 +++++++
> 5 files changed, 27 insertions(+)
>
> diff --git a/lib/eal/common/eal_private.h b/lib/eal/common/eal_private.h
> index 86dab1f057..7fb9bc1324 100644
> --- a/lib/eal/common/eal_private.h
> +++ b/lib/eal/common/eal_private.h
> @@ -163,6 +163,17 @@ int rte_eal_intr_init(void);
> */
> int rte_eal_alarm_init(void);
>
> +/**
> + * Init alarm mechanism. This is to allow a callback be called after
> + * specific time.
> + *
> + * This function is private to EAL.
> + *
> + * @return
> + * 0 on success, negative on error
> + */
> +void rte_eal_alarm_fini(void);
> +
The description should say the opposite.
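Presumably something along these lines was intended (sketch; the @return tag
can go, the function returns void):

 /**
  * Free up resources held by the alarm mechanism (e.g. the interrupt
  * instance allocated by rte_eal_alarm_init()).
  *
  * This function is private to EAL.
  */
 void rte_eal_alarm_fini(void);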
> /**
> * Function is to check if the kernel module(like, vfio, vfio_iommu_type1,
> * etc.) loaded.
> diff --git a/lib/eal/freebsd/eal.c b/lib/eal/freebsd/eal.c
> index 56a60f13e9..535ea687ca 100644
> --- a/lib/eal/freebsd/eal.c
> +++ b/lib/eal/freebsd/eal.c
> @@ -977,6 +977,7 @@ rte_eal_cleanup(void)
> rte_eal_memory_detach();
> rte_trace_save();
> eal_trace_fini();
> + rte_eal_alarm_fini();
Alarms are initialized after tracing, so they should be finalized after.
> eal_cleanup_config(internal_conf);
> return 0;
> }
> diff --git a/lib/eal/freebsd/eal_alarm.c b/lib/eal/freebsd/eal_alarm.c
> index 1f2ea58175..cf706f609f 100644
> --- a/lib/eal/freebsd/eal_alarm.c
> +++ b/lib/eal/freebsd/eal_alarm.c
> @@ -46,6 +46,13 @@ static rte_spinlock_t alarm_list_lk = RTE_SPINLOCK_INITIALIZER;
> static struct rte_intr_handle *intr_handle;
> static void eal_alarm_callback(void *arg);
>
> +void
> +rte_eal_alarm_fini(void)
> +{
> + if (intr_handle)
> + rte_intr_instance_free(intr_handle);
> +}
> +
> int
> rte_eal_alarm_init(void)
> {
> diff --git a/lib/eal/linux/eal.c b/lib/eal/linux/eal.c
> index 0d0fc66668..806158f297 100644
> --- a/lib/eal/linux/eal.c
> +++ b/lib/eal/linux/eal.c
> @@ -1370,6 +1370,7 @@ rte_eal_cleanup(void)
> rte_eal_memory_detach();
> rte_trace_save();
> eal_trace_fini();
> + rte_eal_alarm_fini();
Likewise.
> eal_cleanup_config(internal_conf);
> return 0;
> }
> diff --git a/lib/eal/linux/eal_alarm.c b/lib/eal/linux/eal_alarm.c
> index 3d4307686c..c3a3c943a8 100644
> --- a/lib/eal/linux/eal_alarm.c
> +++ b/lib/eal/linux/eal_alarm.c
> @@ -58,6 +58,13 @@ static struct rte_intr_handle *intr_handle;
> static int handler_registered = 0;
> static void eal_alarm_callback(void *arg);
>
> +void
> +rte_eal_alarm_fini(void)
> +{
> + if (intr_handle)
> + rte_intr_instance_free(intr_handle);
> +}
> +
> int
> rte_eal_alarm_init(void)
> {
That being fixed,
Acked-by: Dmitry Kozlyuk <dmitry.kozliuk@gmail.com>
^ permalink raw reply [flat|nested] 152+ messages in thread
* Re: [dpdk-dev] [PATCH v5 6/6] eal/alarm: introduce alarm fini routine
2021-10-22 23:33 ` Dmitry Kozlyuk
@ 2021-10-22 23:37 ` Dmitry Kozlyuk
0 siblings, 0 replies; 152+ messages in thread
From: Dmitry Kozlyuk @ 2021-10-22 23:37 UTC (permalink / raw)
To: Harman Kalra; +Cc: dev, Bruce Richardson, david.marchand, mdr, thomas
2021-10-23 02:33 (UTC+0300), Dmitry Kozlyuk:
> 2021-10-23 02:19 (UTC+0530), Harman Kalra:
> [...]
> > /**
> > * Function is to check if the kernel module(like, vfio, vfio_iommu_type1,
> > * etc.) loaded.
> > diff --git a/lib/eal/freebsd/eal.c b/lib/eal/freebsd/eal.c
> > index 56a60f13e9..535ea687ca 100644
> > --- a/lib/eal/freebsd/eal.c
> > +++ b/lib/eal/freebsd/eal.c
> > @@ -977,6 +977,7 @@ rte_eal_cleanup(void)
> > rte_eal_memory_detach();
> > rte_trace_save();
> > eal_trace_fini();
> > + rte_eal_alarm_fini();
>
> Alarms are initialized after tracing, so they should be finalized after.
"...finalized before", sorry.
^ permalink raw reply [flat|nested] 152+ messages in thread
* [dpdk-dev] [PATCH v6 0/9] make rte_intr_handle internal
2021-10-22 20:49 ` [dpdk-dev] [PATCH v5 0/6] make rte_intr_handle internal Harman Kalra
` (5 preceding siblings ...)
2021-10-22 20:49 ` [dpdk-dev] [PATCH v5 6/6] eal/alarm: introduce alarm fini routine Harman Kalra
@ 2021-10-24 20:04 ` David Marchand
2021-10-24 20:04 ` [dpdk-dev] [PATCH v6 1/9] interrupts: add allocator and accessors David Marchand
` (8 more replies)
2021-10-25 13:04 ` [dpdk-dev] [PATCH v5 0/6] make rte_intr_handle internal Raslan Darawsheh
` (2 subsequent siblings)
9 siblings, 9 replies; 152+ messages in thread
From: David Marchand @ 2021-10-24 20:04 UTC (permalink / raw)
To: hkalra, dev; +Cc: dmitry.kozliuk
Moving struct rte_intr_handle as an internal structure to
avoid any ABI breakages in future. Since this structure defines
some static arrays and changing respective macros breaks the ABI.
Eg:
Currently RTE_MAX_RXTX_INTR_VEC_ID imposes a limit of maximum 512
MSI-X interrupts that can be defined for a PCI device, while PCI
specification allows maximum 2048 MSI-X interrupts that can be used.
If some PCI device requires more than 512 vectors, either change the
RTE_MAX_RXTX_INTR_VEC_ID limit or dynamically allocate based on
PCI device MSI-X size at probe time. Either way it's an ABI breakage.
Change already included in 21.11 ABI improvement spreadsheet (item 42):
https://docs.google.com/spreadsheets/d/1betlC000ua5SsSiJIcC54mCCCJnW6voH5Dqv9UxeyfE/edit#gid=0
This series makes struct rte_intr_handle totally opaque to the outside
world by wrapping it inside a .c file and providing get/set wrapper APIs
to read or manipulate its fields. Any changes to be made to any of the
fields should be done via these get/set APIs.
Introduced a new eal_common_interrupts.c where all these APIs are defined
and also hides struct rte_intr_handle definition.
v1:
* Fixed freebsd compilation failure
* Fixed seg fault in case of memif
v2:
* Merged the prototype and implementation patch to 1.
* Restricting allocation of single interrupt instance.
* Removed base APIs, as they were exposing internally
allocated memory information.
* Fixed some memory leak issues.
* Marked some library specific APIs as internal.
v3:
* Removed the flag from the instance alloc API; instead auto detect
whether memory should be allocated using glibc malloc APIs or
rte_malloc*
* Added APIs for get/set windows handle.
* Defined macros for repeated checks.
v4:
* Rectified some typo in the APIs documentation.
* Better names for some internal variables.
v5:
* Reverted back to passing a flag to the instance alloc API, as
with auto detection some multiprocess issues existing in the
library were causing test failures.
* Rebased to top of tree.
v6:
* renamed RTE_INTR_INSTANCE_F_UNSHARED as RTE_INTR_INSTANCE_F_PRIVATE,
* changed API and removed need for alloc_flag content exposure
(see rte_intr_instance_dup() in patch 1 and 2),
* exported all symbols for Windows,
* fixed leak in unit tests in case of alloc failure,
* split (previously) patch 4 into three patches
* (now) patch 4 only concerns alarm and (previously) patch 6 cleanup bits
are squashed in it,
* (now) patch 5 concerns other libraries updates,
* (now) patch 6 concerns drivers updates:
* instance allocation is moved to probing for auxiliary,
* there might be a bug for PCI drivers not requesting
RTE_PCI_DRV_NEED_MAPPING, but code is left as in v5,
* split (previously) patch 5 into three patches
* (now) patch 7 only hides structure, but keep it in a EAL private
header, this makes it possible to keep info in tracepoints,
* (now) patch 8 deals with VFIO/UIO internal fds merge,
* (now) patch 9 extends event list,
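For reference, the practical effect of the rte_intr_instance_dup() change on
callers is roughly the following (sketch, based on the v5 code it replaces):

	/* v5: caller inspected the alloc flags and copied by hand */
	src->intr_handle = rte_intr_instance_alloc(
		rte_intr_instance_alloc_flag_get(intr_handle));
	rte_intr_instance_copy(src->intr_handle, intr_handle);

	/* v6: a single helper hides the allocation details */
	src->intr_handle = rte_intr_instance_dup(intr_handle);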
--
David Marchand
Harman Kalra (9):
interrupts: add allocator and accessors
interrupts: remove direct access to interrupt handle
test/interrupts: remove direct access to interrupt handle
alarm: remove direct access to interrupt handle
lib: remove direct access to interrupt handle
drivers: remove direct access to interrupt handle
interrupts: make interrupt handle structure opaque
interrupts: rename device specific file descriptor
interrupts: extend event list
MAINTAINERS | 1 +
app/test/test_interrupts.c | 164 +++--
drivers/baseband/acc100/rte_acc100_pmd.c | 14 +-
.../fpga_5gnr_fec/rte_fpga_5gnr_fec.c | 24 +-
drivers/baseband/fpga_lte_fec/fpga_lte_fec.c | 24 +-
drivers/bus/auxiliary/auxiliary_common.c | 17 +-
drivers/bus/auxiliary/rte_bus_auxiliary.h | 2 +-
drivers/bus/dpaa/dpaa_bus.c | 28 +-
drivers/bus/dpaa/rte_dpaa_bus.h | 2 +-
drivers/bus/fslmc/fslmc_bus.c | 14 +-
drivers/bus/fslmc/fslmc_vfio.c | 30 +-
drivers/bus/fslmc/portal/dpaa2_hw_dpio.c | 18 +-
drivers/bus/fslmc/portal/dpaa2_hw_pvt.h | 2 +-
drivers/bus/fslmc/rte_fslmc.h | 2 +-
drivers/bus/ifpga/ifpga_bus.c | 13 +-
drivers/bus/ifpga/rte_bus_ifpga.h | 2 +-
drivers/bus/pci/bsd/pci.c | 20 +-
drivers/bus/pci/linux/pci.c | 4 +-
drivers/bus/pci/linux/pci_uio.c | 69 +-
drivers/bus/pci/linux/pci_vfio.c | 108 ++-
drivers/bus/pci/pci_common.c | 28 +-
drivers/bus/pci/pci_common_uio.c | 21 +-
drivers/bus/pci/rte_bus_pci.h | 4 +-
drivers/bus/vmbus/linux/vmbus_bus.c | 6 +
drivers/bus/vmbus/linux/vmbus_uio.c | 35 +-
drivers/bus/vmbus/rte_bus_vmbus.h | 2 +-
drivers/bus/vmbus/vmbus_common_uio.c | 23 +-
drivers/common/cnxk/roc_cpt.c | 8 +-
drivers/common/cnxk/roc_dev.c | 14 +-
drivers/common/cnxk/roc_irq.c | 107 +--
drivers/common/cnxk/roc_nix_inl_dev_irq.c | 8 +-
drivers/common/cnxk/roc_nix_irq.c | 36 +-
drivers/common/cnxk/roc_npa.c | 2 +-
drivers/common/cnxk/roc_platform.h | 49 +-
drivers/common/cnxk/roc_sso.c | 4 +-
drivers/common/cnxk/roc_tim.c | 4 +-
drivers/common/octeontx2/otx2_dev.c | 14 +-
drivers/common/octeontx2/otx2_irq.c | 117 ++--
.../octeontx2/otx2_cryptodev_hw_access.c | 4 +-
drivers/event/octeontx2/otx2_evdev_irq.c | 12 +-
drivers/mempool/octeontx2/otx2_mempool.c | 2 +-
drivers/net/atlantic/atl_ethdev.c | 20 +-
drivers/net/avp/avp_ethdev.c | 8 +-
drivers/net/axgbe/axgbe_ethdev.c | 12 +-
drivers/net/axgbe/axgbe_mdio.c | 6 +-
drivers/net/bnx2x/bnx2x_ethdev.c | 10 +-
drivers/net/bnxt/bnxt_ethdev.c | 33 +-
drivers/net/bnxt/bnxt_irq.c | 4 +-
drivers/net/dpaa/dpaa_ethdev.c | 48 +-
drivers/net/dpaa2/dpaa2_ethdev.c | 10 +-
drivers/net/e1000/em_ethdev.c | 23 +-
drivers/net/e1000/igb_ethdev.c | 79 +--
drivers/net/ena/ena_ethdev.c | 35 +-
drivers/net/enic/enic_main.c | 26 +-
drivers/net/failsafe/failsafe.c | 21 +-
drivers/net/failsafe/failsafe_intr.c | 43 +-
drivers/net/failsafe/failsafe_ops.c | 19 +-
drivers/net/failsafe/failsafe_private.h | 2 +-
drivers/net/fm10k/fm10k_ethdev.c | 32 +-
drivers/net/hinic/hinic_pmd_ethdev.c | 10 +-
drivers/net/hns3/hns3_ethdev.c | 57 +-
drivers/net/hns3/hns3_ethdev_vf.c | 64 +-
drivers/net/hns3/hns3_rxtx.c | 2 +-
drivers/net/i40e/i40e_ethdev.c | 53 +-
drivers/net/iavf/iavf_ethdev.c | 42 +-
drivers/net/iavf/iavf_vchnl.c | 4 +-
drivers/net/ice/ice_dcf.c | 10 +-
drivers/net/ice/ice_dcf_ethdev.c | 21 +-
drivers/net/ice/ice_ethdev.c | 49 +-
drivers/net/igc/igc_ethdev.c | 45 +-
drivers/net/ionic/ionic_ethdev.c | 17 +-
drivers/net/ixgbe/ixgbe_ethdev.c | 66 +-
drivers/net/memif/memif_socket.c | 108 ++-
drivers/net/memif/memif_socket.h | 4 +-
drivers/net/memif/rte_eth_memif.c | 56 +-
drivers/net/memif/rte_eth_memif.h | 2 +-
drivers/net/mlx4/mlx4.c | 19 +-
drivers/net/mlx4/mlx4.h | 2 +-
drivers/net/mlx4/mlx4_intr.c | 47 +-
drivers/net/mlx5/linux/mlx5_os.c | 55 +-
drivers/net/mlx5/linux/mlx5_socket.c | 25 +-
drivers/net/mlx5/mlx5.h | 6 +-
drivers/net/mlx5/mlx5_rxq.c | 43 +-
drivers/net/mlx5/mlx5_trigger.c | 4 +-
drivers/net/mlx5/mlx5_txpp.c | 25 +-
drivers/net/netvsc/hn_ethdev.c | 4 +-
drivers/net/nfp/nfp_common.c | 34 +-
drivers/net/nfp/nfp_ethdev.c | 13 +-
drivers/net/nfp/nfp_ethdev_vf.c | 13 +-
drivers/net/ngbe/ngbe_ethdev.c | 29 +-
drivers/net/octeontx2/otx2_ethdev_irq.c | 35 +-
drivers/net/qede/qede_ethdev.c | 16 +-
drivers/net/sfc/sfc_intr.c | 30 +-
drivers/net/tap/rte_eth_tap.c | 33 +-
drivers/net/tap/rte_eth_tap.h | 2 +-
drivers/net/tap/tap_intr.c | 33 +-
drivers/net/thunderx/nicvf_ethdev.c | 10 +
drivers/net/thunderx/nicvf_struct.h | 2 +-
drivers/net/txgbe/txgbe_ethdev.c | 38 +-
drivers/net/txgbe/txgbe_ethdev_vf.c | 33 +-
drivers/net/vhost/rte_eth_vhost.c | 80 ++-
drivers/net/virtio/virtio_ethdev.c | 21 +-
.../net/virtio/virtio_user/virtio_user_dev.c | 56 +-
drivers/net/vmxnet3/vmxnet3_ethdev.c | 43 +-
drivers/raw/ifpga/ifpga_rawdev.c | 62 +-
drivers/raw/ntb/ntb.c | 9 +-
.../regex/octeontx2/otx2_regexdev_hw_access.c | 4 +-
drivers/vdpa/ifc/ifcvf_vdpa.c | 5 +-
drivers/vdpa/mlx5/mlx5_vdpa.c | 8 +
drivers/vdpa/mlx5/mlx5_vdpa.h | 4 +-
drivers/vdpa/mlx5/mlx5_vdpa_event.c | 21 +-
drivers/vdpa/mlx5/mlx5_vdpa_virtq.c | 44 +-
lib/bbdev/rte_bbdev.c | 4 +-
lib/eal/common/eal_common_interrupts.c | 528 ++++++++++++++
lib/eal/common/eal_interrupts.h | 30 +
lib/eal/common/eal_private.h | 10 +
lib/eal/common/meson.build | 1 +
lib/eal/freebsd/eal.c | 1 +
lib/eal/freebsd/eal_alarm.c | 44 +-
lib/eal/freebsd/eal_interrupts.c | 85 ++-
lib/eal/include/meson.build | 2 +-
lib/eal/include/rte_eal_interrupts.h | 269 --------
lib/eal/include/rte_eal_trace.h | 10 +-
lib/eal/include/rte_epoll.h | 118 ++++
lib/eal/include/rte_interrupts.h | 651 +++++++++++++++++-
lib/eal/linux/eal.c | 1 +
lib/eal/linux/eal_alarm.c | 32 +-
lib/eal/linux/eal_dev.c | 57 +-
lib/eal/linux/eal_interrupts.c | 304 ++++----
lib/eal/version.map | 45 +-
lib/ethdev/ethdev_pci.h | 2 +-
lib/ethdev/rte_ethdev.c | 14 +-
132 files changed, 3473 insertions(+), 1742 deletions(-)
create mode 100644 lib/eal/common/eal_common_interrupts.c
create mode 100644 lib/eal/common/eal_interrupts.h
delete mode 100644 lib/eal/include/rte_eal_interrupts.h
create mode 100644 lib/eal/include/rte_epoll.h
--
2.23.0
^ permalink raw reply [flat|nested] 152+ messages in thread
* [dpdk-dev] [PATCH v6 1/9] interrupts: add allocator and accessors
2021-10-24 20:04 ` [dpdk-dev] [PATCH v6 0/9] make rte_intr_handle internal David Marchand
@ 2021-10-24 20:04 ` David Marchand
2021-10-24 20:04 ` [dpdk-dev] [PATCH v6 2/9] interrupts: remove direct access to interrupt handle David Marchand
` (7 subsequent siblings)
8 siblings, 0 replies; 152+ messages in thread
From: David Marchand @ 2021-10-24 20:04 UTC (permalink / raw)
To: hkalra, dev; +Cc: dmitry.kozliuk, Ray Kinsella, Thomas Monjalon
From: Harman Kalra <hkalra@marvell.com>
Prototype and implement get/set APIs for the interrupt handle fields.
Users can no longer access any of the interrupt handle fields directly
and should use these get/set APIs to access/manipulate them.
The internal interrupt header, i.e. rte_eal_interrupts.h, is rearranged:
the APIs defined there are moved to rte_interrupts.h and the epoll
specific definitions are moved to a new header, rte_epoll.h.
Later in the series rte_eal_interrupts.h will be removed.
Signed-off-by: Harman Kalra <hkalra@marvell.com>
Acked-by: Ray Kinsella <mdr@ashroe.eu>
Acked-by: Dmitry Kozlyuk <dmitry.kozliuk@gmail.com>
Signed-off-by: David Marchand <david.marchand@redhat.com>
---
Changes since v5:
- renamed RTE_INTR_INSTANCE_F_UNSHARED as RTE_INTR_INSTANCE_F_PRIVATE,
- used a single bit to mark instance as shared (default is private),
- removed rte_intr_instance_copy / rte_intr_instance_alloc_flag_get
with a single rte_intr_instance_dup helper,
- made rte_intr_vec_list_alloc alloc_flags-aware,
- exported all symbols for Windows,
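For illustration, typical usage of the new allocator and accessors might look
like this (a minimal sketch; error handling mostly omitted, fd obtained
elsewhere):

	struct rte_intr_handle *ih;

	ih = rte_intr_instance_alloc(RTE_INTR_INSTANCE_F_PRIVATE);
	if (ih == NULL)
		return -rte_errno;
	rte_intr_type_set(ih, RTE_INTR_HANDLE_VFIO_MSIX);
	rte_intr_fd_set(ih, fd);
	/* ... register callbacks, enable the interrupt, etc. ... */
	rte_intr_instance_free(ih);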
---
MAINTAINERS | 1 +
lib/eal/common/eal_common_interrupts.c | 411 ++++++++++++++++
lib/eal/common/meson.build | 1 +
lib/eal/include/meson.build | 1 +
lib/eal/include/rte_eal_interrupts.h | 207 +-------
lib/eal/include/rte_epoll.h | 118 +++++
lib/eal/include/rte_interrupts.h | 627 +++++++++++++++++++++++++
lib/eal/version.map | 45 +-
8 files changed, 1201 insertions(+), 210 deletions(-)
create mode 100644 lib/eal/common/eal_common_interrupts.c
create mode 100644 lib/eal/include/rte_epoll.h
diff --git a/MAINTAINERS b/MAINTAINERS
index 587632dce0..097a57f7f6 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -211,6 +211,7 @@ F: app/test/test_memzone.c
Interrupt Subsystem
M: Harman Kalra <hkalra@marvell.com>
+F: lib/eal/include/rte_epoll.h
F: lib/eal/*/*interrupts.*
F: app/test/test_interrupts.c
diff --git a/lib/eal/common/eal_common_interrupts.c b/lib/eal/common/eal_common_interrupts.c
new file mode 100644
index 0000000000..d6e6654fbb
--- /dev/null
+++ b/lib/eal/common/eal_common_interrupts.c
@@ -0,0 +1,411 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2021 Marvell.
+ */
+
+#include <stdlib.h>
+#include <string.h>
+
+#include <rte_errno.h>
+#include <rte_interrupts.h>
+#include <rte_log.h>
+#include <rte_malloc.h>
+
+/* Macros to check for valid interrupt handle */
+#define CHECK_VALID_INTR_HANDLE(intr_handle) do { \
+ if (intr_handle == NULL) { \
+ RTE_LOG(ERR, EAL, "Interrupt instance unallocated\n"); \
+ rte_errno = EINVAL; \
+ goto fail; \
+ } \
+} while (0)
+
+#define RTE_INTR_INSTANCE_KNOWN_FLAGS (RTE_INTR_INSTANCE_F_PRIVATE \
+ | RTE_INTR_INSTANCE_F_SHARED \
+ )
+
+#define RTE_INTR_INSTANCE_USES_RTE_MEMORY(flags) \
+ !!(flags & RTE_INTR_INSTANCE_F_SHARED)
+
+struct rte_intr_handle *rte_intr_instance_alloc(uint32_t flags)
+{
+ struct rte_intr_handle *intr_handle;
+ bool uses_rte_memory;
+
+ /* Check the flag passed by user, it should be part of the
+ * defined flags.
+ */
+ if ((flags & ~RTE_INTR_INSTANCE_KNOWN_FLAGS) != 0) {
+ RTE_LOG(ERR, EAL, "Invalid alloc flag passed 0x%x\n", flags);
+ rte_errno = EINVAL;
+ return NULL;
+ }
+
+ uses_rte_memory = RTE_INTR_INSTANCE_USES_RTE_MEMORY(flags);
+ if (uses_rte_memory)
+ intr_handle = rte_zmalloc(NULL, sizeof(*intr_handle), 0);
+ else
+ intr_handle = calloc(1, sizeof(*intr_handle));
+ if (intr_handle == NULL) {
+ RTE_LOG(ERR, EAL, "Fail to allocate intr_handle\n");
+ rte_errno = ENOMEM;
+ return NULL;
+ }
+
+ intr_handle->alloc_flags = flags;
+ intr_handle->nb_intr = RTE_MAX_RXTX_INTR_VEC_ID;
+
+ return intr_handle;
+}
+
+struct rte_intr_handle *rte_intr_instance_dup(const struct rte_intr_handle *src)
+{
+ struct rte_intr_handle *intr_handle;
+
+ if (src == NULL) {
+ RTE_LOG(ERR, EAL, "Source interrupt instance unallocated\n");
+ rte_errno = EINVAL;
+ return NULL;
+ }
+
+ intr_handle = rte_intr_instance_alloc(src->alloc_flags);
+
+ intr_handle->fd = src->fd;
+ intr_handle->vfio_dev_fd = src->vfio_dev_fd;
+ intr_handle->type = src->type;
+ intr_handle->max_intr = src->max_intr;
+ intr_handle->nb_efd = src->nb_efd;
+ intr_handle->efd_counter_size = src->efd_counter_size;
+ memcpy(intr_handle->efds, src->efds, src->nb_intr);
+ memcpy(intr_handle->elist, src->elist, src->nb_intr);
+
+ return intr_handle;
+}
+
+void rte_intr_instance_free(struct rte_intr_handle *intr_handle)
+{
+ if (intr_handle == NULL)
+ return;
+ if (RTE_INTR_INSTANCE_USES_RTE_MEMORY(intr_handle->alloc_flags))
+ rte_free(intr_handle);
+ else
+ free(intr_handle);
+}
+
+int rte_intr_fd_set(struct rte_intr_handle *intr_handle, int fd)
+{
+ CHECK_VALID_INTR_HANDLE(intr_handle);
+
+ intr_handle->fd = fd;
+
+ return 0;
+fail:
+ return -rte_errno;
+}
+
+int rte_intr_fd_get(const struct rte_intr_handle *intr_handle)
+{
+ CHECK_VALID_INTR_HANDLE(intr_handle);
+
+ return intr_handle->fd;
+fail:
+ return -1;
+}
+
+int rte_intr_type_set(struct rte_intr_handle *intr_handle,
+ enum rte_intr_handle_type type)
+{
+ CHECK_VALID_INTR_HANDLE(intr_handle);
+
+ intr_handle->type = type;
+
+ return 0;
+fail:
+ return -rte_errno;
+}
+
+enum rte_intr_handle_type rte_intr_type_get(
+ const struct rte_intr_handle *intr_handle)
+{
+ CHECK_VALID_INTR_HANDLE(intr_handle);
+
+ return intr_handle->type;
+fail:
+ return RTE_INTR_HANDLE_UNKNOWN;
+}
+
+int rte_intr_dev_fd_set(struct rte_intr_handle *intr_handle, int fd)
+{
+ CHECK_VALID_INTR_HANDLE(intr_handle);
+
+ intr_handle->vfio_dev_fd = fd;
+
+ return 0;
+fail:
+ return -rte_errno;
+}
+
+int rte_intr_dev_fd_get(const struct rte_intr_handle *intr_handle)
+{
+ CHECK_VALID_INTR_HANDLE(intr_handle);
+
+ return intr_handle->vfio_dev_fd;
+fail:
+ return -1;
+}
+
+int rte_intr_max_intr_set(struct rte_intr_handle *intr_handle,
+ int max_intr)
+{
+ CHECK_VALID_INTR_HANDLE(intr_handle);
+
+ if (max_intr > intr_handle->nb_intr) {
+ RTE_LOG(ERR, EAL, "Maximum interrupt vector ID (%d) exceeds "
+ "the number of available events (%d)\n", max_intr,
+ intr_handle->nb_intr);
+ rte_errno = ERANGE;
+ goto fail;
+ }
+
+ intr_handle->max_intr = max_intr;
+
+ return 0;
+fail:
+ return -rte_errno;
+}
+
+int rte_intr_max_intr_get(const struct rte_intr_handle *intr_handle)
+{
+ CHECK_VALID_INTR_HANDLE(intr_handle);
+
+ return intr_handle->max_intr;
+fail:
+ return -rte_errno;
+}
+
+int rte_intr_nb_efd_set(struct rte_intr_handle *intr_handle, int nb_efd)
+{
+ CHECK_VALID_INTR_HANDLE(intr_handle);
+
+ intr_handle->nb_efd = nb_efd;
+
+ return 0;
+fail:
+ return -rte_errno;
+}
+
+int rte_intr_nb_efd_get(const struct rte_intr_handle *intr_handle)
+{
+ CHECK_VALID_INTR_HANDLE(intr_handle);
+
+ return intr_handle->nb_efd;
+fail:
+ return -rte_errno;
+}
+
+int rte_intr_nb_intr_get(const struct rte_intr_handle *intr_handle)
+{
+ CHECK_VALID_INTR_HANDLE(intr_handle);
+
+ return intr_handle->nb_intr;
+fail:
+ return -rte_errno;
+}
+
+int rte_intr_efd_counter_size_set(struct rte_intr_handle *intr_handle,
+ uint8_t efd_counter_size)
+{
+ CHECK_VALID_INTR_HANDLE(intr_handle);
+
+ intr_handle->efd_counter_size = efd_counter_size;
+
+ return 0;
+fail:
+ return -rte_errno;
+}
+
+int rte_intr_efd_counter_size_get(const struct rte_intr_handle *intr_handle)
+{
+ CHECK_VALID_INTR_HANDLE(intr_handle);
+
+ return intr_handle->efd_counter_size;
+fail:
+ return -rte_errno;
+}
+
+int rte_intr_efds_index_get(const struct rte_intr_handle *intr_handle,
+ int index)
+{
+ CHECK_VALID_INTR_HANDLE(intr_handle);
+
+ if (index >= intr_handle->nb_intr) {
+ RTE_LOG(ERR, EAL, "Invalid index %d, max limit %d\n", index,
+ intr_handle->nb_intr);
+ rte_errno = EINVAL;
+ goto fail;
+ }
+
+ return intr_handle->efds[index];
+fail:
+ return -rte_errno;
+}
+
+int rte_intr_efds_index_set(struct rte_intr_handle *intr_handle,
+ int index, int fd)
+{
+ CHECK_VALID_INTR_HANDLE(intr_handle);
+
+ if (index >= intr_handle->nb_intr) {
+ RTE_LOG(ERR, EAL, "Invalid size %d, max limit %d\n", index,
+ intr_handle->nb_intr);
+ rte_errno = ERANGE;
+ goto fail;
+ }
+
+ intr_handle->efds[index] = fd;
+
+ return 0;
+fail:
+ return -rte_errno;
+}
+
+struct rte_epoll_event *rte_intr_elist_index_get(
+ struct rte_intr_handle *intr_handle, int index)
+{
+ CHECK_VALID_INTR_HANDLE(intr_handle);
+
+ if (index >= intr_handle->nb_intr) {
+ RTE_LOG(ERR, EAL, "Invalid index %d, max limit %d\n", index,
+ intr_handle->nb_intr);
+ rte_errno = ERANGE;
+ goto fail;
+ }
+
+ return &intr_handle->elist[index];
+fail:
+ return NULL;
+}
+
+int rte_intr_elist_index_set(struct rte_intr_handle *intr_handle,
+ int index, struct rte_epoll_event elist)
+{
+ CHECK_VALID_INTR_HANDLE(intr_handle);
+
+ if (index >= intr_handle->nb_intr) {
+ RTE_LOG(ERR, EAL, "Invalid index %d, max limit %d\n", index,
+ intr_handle->nb_intr);
+ rte_errno = ERANGE;
+ goto fail;
+ }
+
+ intr_handle->elist[index] = elist;
+
+ return 0;
+fail:
+ return -rte_errno;
+}
+
+int rte_intr_vec_list_alloc(struct rte_intr_handle *intr_handle,
+ const char *name, int size)
+{
+ CHECK_VALID_INTR_HANDLE(intr_handle);
+
+ /* Vector list already allocated */
+ if (intr_handle->intr_vec != NULL)
+ return 0;
+
+ if (size > intr_handle->nb_intr) {
+ RTE_LOG(ERR, EAL, "Invalid size %d, max limit %d\n", size,
+ intr_handle->nb_intr);
+ rte_errno = ERANGE;
+ goto fail;
+ }
+
+ if (RTE_INTR_INSTANCE_USES_RTE_MEMORY(intr_handle->alloc_flags))
+ intr_handle->intr_vec = rte_zmalloc(name, size * sizeof(int), 0);
+ else
+ intr_handle->intr_vec = calloc(size, sizeof(int));
+ if (intr_handle->intr_vec == NULL) {
+ RTE_LOG(ERR, EAL, "Failed to allocate %d intr_vec\n", size);
+ rte_errno = ENOMEM;
+ goto fail;
+ }
+
+ intr_handle->vec_list_size = size;
+
+ return 0;
+fail:
+ return -rte_errno;
+}
+
+int rte_intr_vec_list_index_get(const struct rte_intr_handle *intr_handle,
+ int index)
+{
+ CHECK_VALID_INTR_HANDLE(intr_handle);
+
+ RTE_ASSERT(intr_handle->vec_list_size != 0);
+
+ if (index > intr_handle->vec_list_size) {
+ RTE_LOG(ERR, EAL, "Index %d greater than vec list size %d\n",
+ index, intr_handle->vec_list_size);
+ rte_errno = ERANGE;
+ goto fail;
+ }
+
+ return intr_handle->intr_vec[index];
+fail:
+ return -rte_errno;
+}
+
+int rte_intr_vec_list_index_set(struct rte_intr_handle *intr_handle,
+ int index, int vec)
+{
+ CHECK_VALID_INTR_HANDLE(intr_handle);
+
+ RTE_ASSERT(intr_handle->vec_list_size != 0);
+
+ if (index > intr_handle->vec_list_size) {
+ RTE_LOG(ERR, EAL, "Index %d greater than vec list size %d\n",
+ index, intr_handle->vec_list_size);
+ rte_errno = ERANGE;
+ goto fail;
+ }
+
+ intr_handle->intr_vec[index] = vec;
+
+ return 0;
+fail:
+ return -rte_errno;
+}
+
+void rte_intr_vec_list_free(struct rte_intr_handle *intr_handle)
+{
+ if (intr_handle == NULL)
+ return;
+ if (RTE_INTR_INSTANCE_USES_RTE_MEMORY(intr_handle->alloc_flags))
+ rte_free(intr_handle->intr_vec);
+ else
+ free(intr_handle->intr_vec);
+ intr_handle->intr_vec = NULL;
+ intr_handle->vec_list_size = 0;
+}
+
+void *rte_intr_instance_windows_handle_get(struct rte_intr_handle *intr_handle)
+{
+ CHECK_VALID_INTR_HANDLE(intr_handle);
+
+ return intr_handle->windows_handle;
+fail:
+ return NULL;
+}
+
+int rte_intr_instance_windows_handle_set(struct rte_intr_handle *intr_handle,
+ void *windows_handle)
+{
+ CHECK_VALID_INTR_HANDLE(intr_handle);
+
+ intr_handle->windows_handle = windows_handle;
+
+ return 0;
+fail:
+ return -rte_errno;
+}
diff --git a/lib/eal/common/meson.build b/lib/eal/common/meson.build
index 6d01b0f072..917758cc65 100644
--- a/lib/eal/common/meson.build
+++ b/lib/eal/common/meson.build
@@ -15,6 +15,7 @@ sources += files(
'eal_common_errno.c',
'eal_common_fbarray.c',
'eal_common_hexdump.c',
+ 'eal_common_interrupts.c',
'eal_common_launch.c',
'eal_common_lcore.c',
'eal_common_log.c',
diff --git a/lib/eal/include/meson.build b/lib/eal/include/meson.build
index 88a9eba12f..8e258607b8 100644
--- a/lib/eal/include/meson.build
+++ b/lib/eal/include/meson.build
@@ -19,6 +19,7 @@ headers += files(
'rte_eal_memconfig.h',
'rte_eal_trace.h',
'rte_errno.h',
+ 'rte_epoll.h',
'rte_fbarray.h',
'rte_hexdump.h',
'rte_hypervisor.h',
diff --git a/lib/eal/include/rte_eal_interrupts.h b/lib/eal/include/rte_eal_interrupts.h
index 00bcc19b6d..60bb60ca59 100644
--- a/lib/eal/include/rte_eal_interrupts.h
+++ b/lib/eal/include/rte_eal_interrupts.h
@@ -39,32 +39,6 @@ enum rte_intr_handle_type {
RTE_INTR_HANDLE_MAX /**< count of elements */
};
-#define RTE_INTR_EVENT_ADD 1UL
-#define RTE_INTR_EVENT_DEL 2UL
-
-typedef void (*rte_intr_event_cb_t)(int fd, void *arg);
-
-struct rte_epoll_data {
- uint32_t event; /**< event type */
- void *data; /**< User data */
- rte_intr_event_cb_t cb_fun; /**< IN: callback fun */
- void *cb_arg; /**< IN: callback arg */
-};
-
-enum {
- RTE_EPOLL_INVALID = 0,
- RTE_EPOLL_VALID,
- RTE_EPOLL_EXEC,
-};
-
-/** interrupt epoll event obj, taken by epoll_event.ptr */
-struct rte_epoll_event {
- uint32_t status; /**< OUT: event status */
- int fd; /**< OUT: event fd */
- int epfd; /**< OUT: epoll instance the ev associated with */
- struct rte_epoll_data epdata;
-};
-
/** Handle for interrupts. */
struct rte_intr_handle {
RTE_STD_C11
@@ -79,191 +53,20 @@ struct rte_intr_handle {
};
int fd; /**< interrupt event file descriptor */
};
- void *handle; /**< device driver handle (Windows) */
+ void *windows_handle; /**< device driver handle */
};
+ uint32_t alloc_flags; /**< flags passed at allocation */
enum rte_intr_handle_type type; /**< handle type */
uint32_t max_intr; /**< max interrupt requested */
uint32_t nb_efd; /**< number of available efd(event fd) */
uint8_t efd_counter_size; /**< size of efd counter, used for vdev */
+ uint16_t nb_intr;
+ /**< Max vector count, default RTE_MAX_RXTX_INTR_VEC_ID */
int efds[RTE_MAX_RXTX_INTR_VEC_ID]; /**< intr vectors/efds mapping */
struct rte_epoll_event elist[RTE_MAX_RXTX_INTR_VEC_ID];
/**< intr vector epoll event */
+ uint16_t vec_list_size;
int *intr_vec; /**< intr vector number array */
};
-#define RTE_EPOLL_PER_THREAD -1 /**< to hint using per thread epfd */
-
-/**
- * It waits for events on the epoll instance.
- * Retries if signal received.
- *
- * @param epfd
- * Epoll instance fd on which the caller wait for events.
- * @param events
- * Memory area contains the events that will be available for the caller.
- * @param maxevents
- * Up to maxevents are returned, must greater than zero.
- * @param timeout
- * Specifying a timeout of -1 causes a block indefinitely.
- * Specifying a timeout equal to zero cause to return immediately.
- * @return
- * - On success, returns the number of available event.
- * - On failure, a negative value.
- */
-int
-rte_epoll_wait(int epfd, struct rte_epoll_event *events,
- int maxevents, int timeout);
-
-/**
- * It waits for events on the epoll instance.
- * Does not retry if signal received.
- *
- * @param epfd
- * Epoll instance fd on which the caller wait for events.
- * @param events
- * Memory area contains the events that will be available for the caller.
- * @param maxevents
- * Up to maxevents are returned, must greater than zero.
- * @param timeout
- * Specifying a timeout of -1 causes a block indefinitely.
- * Specifying a timeout equal to zero cause to return immediately.
- * @return
- * - On success, returns the number of available event.
- * - On failure, a negative value.
- */
-__rte_experimental
-int
-rte_epoll_wait_interruptible(int epfd, struct rte_epoll_event *events,
- int maxevents, int timeout);
-
-/**
- * It performs control operations on epoll instance referred by the epfd.
- * It requests that the operation op be performed for the target fd.
- *
- * @param epfd
- * Epoll instance fd on which the caller perform control operations.
- * @param op
- * The operation be performed for the target fd.
- * @param fd
- * The target fd on which the control ops perform.
- * @param event
- * Describes the object linked to the fd.
- * Note: The caller must take care the object deletion after CTL_DEL.
- * @return
- * - On success, zero.
- * - On failure, a negative value.
- */
-int
-rte_epoll_ctl(int epfd, int op, int fd,
- struct rte_epoll_event *event);
-
-/**
- * The function returns the per thread epoll instance.
- *
- * @return
- * epfd the epoll instance referred to.
- */
-int
-rte_intr_tls_epfd(void);
-
-/**
- * @param intr_handle
- * Pointer to the interrupt handle.
- * @param epfd
- * Epoll instance fd which the intr vector associated to.
- * @param op
- * The operation be performed for the vector.
- * Operation type of {ADD, DEL}.
- * @param vec
- * RX intr vector number added to the epoll instance wait list.
- * @param data
- * User raw data.
- * @return
- * - On success, zero.
- * - On failure, a negative value.
- */
-int
-rte_intr_rx_ctl(struct rte_intr_handle *intr_handle,
- int epfd, int op, unsigned int vec, void *data);
-
-/**
- * It deletes registered eventfds.
- *
- * @param intr_handle
- * Pointer to the interrupt handle.
- */
-void
-rte_intr_free_epoll_fd(struct rte_intr_handle *intr_handle);
-
-/**
- * It enables the packet I/O interrupt event if it's necessary.
- * It creates event fd for each interrupt vector when MSIX is used,
- * otherwise it multiplexes a single event fd.
- *
- * @param intr_handle
- * Pointer to the interrupt handle.
- * @param nb_efd
- * Number of interrupt vector trying to enable.
- * The value 0 is not allowed.
- * @return
- * - On success, zero.
- * - On failure, a negative value.
- */
-int
-rte_intr_efd_enable(struct rte_intr_handle *intr_handle, uint32_t nb_efd);
-
-/**
- * It disables the packet I/O interrupt event.
- * It deletes registered eventfds and closes the open fds.
- *
- * @param intr_handle
- * Pointer to the interrupt handle.
- */
-void
-rte_intr_efd_disable(struct rte_intr_handle *intr_handle);
-
-/**
- * The packet I/O interrupt on datapath is enabled or not.
- *
- * @param intr_handle
- * Pointer to the interrupt handle.
- */
-int
-rte_intr_dp_is_en(struct rte_intr_handle *intr_handle);
-
-/**
- * The interrupt handle instance allows other causes or not.
- * Other causes stand for any none packet I/O interrupts.
- *
- * @param intr_handle
- * Pointer to the interrupt handle.
- */
-int
-rte_intr_allow_others(struct rte_intr_handle *intr_handle);
-
-/**
- * The multiple interrupt vector capability of interrupt handle instance.
- * It returns zero if no multiple interrupt vector support.
- *
- * @param intr_handle
- * Pointer to the interrupt handle.
- */
-int
-rte_intr_cap_multiple(struct rte_intr_handle *intr_handle);
-
-/**
- * @warning
- * @b EXPERIMENTAL: this API may change without prior notice
- *
- * @internal
- * Check if currently executing in interrupt context
- *
- * @return
- * - non zero in case of interrupt context
- * - zero in case of process context
- */
-__rte_experimental
-int
-rte_thread_is_intr(void);
-
#endif /* _RTE_EAL_INTERRUPTS_H_ */
diff --git a/lib/eal/include/rte_epoll.h b/lib/eal/include/rte_epoll.h
new file mode 100644
index 0000000000..56b7b6bad6
--- /dev/null
+++ b/lib/eal/include/rte_epoll.h
@@ -0,0 +1,118 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2021 Marvell International Ltd.
+ */
+
+#ifndef __RTE_EPOLL_H__
+#define __RTE_EPOLL_H__
+
+/**
+ * @file
+ * The rte_epoll provides interfaces functions to add delete events,
+ * wait poll for an event.
+ */
+
+#include <stdint.h>
+
+#include <rte_compat.h>
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+#define RTE_INTR_EVENT_ADD 1UL
+#define RTE_INTR_EVENT_DEL 2UL
+
+typedef void (*rte_intr_event_cb_t)(int fd, void *arg);
+
+struct rte_epoll_data {
+ uint32_t event; /**< event type */
+ void *data; /**< User data */
+ rte_intr_event_cb_t cb_fun; /**< IN: callback fun */
+ void *cb_arg; /**< IN: callback arg */
+};
+
+enum {
+ RTE_EPOLL_INVALID = 0,
+ RTE_EPOLL_VALID,
+ RTE_EPOLL_EXEC,
+};
+
+/** interrupt epoll event obj, taken by epoll_event.ptr */
+struct rte_epoll_event {
+ uint32_t status; /**< OUT: event status */
+ int fd; /**< OUT: event fd */
+ int epfd; /**< OUT: epoll instance the ev associated with */
+ struct rte_epoll_data epdata;
+};
+
+#define RTE_EPOLL_PER_THREAD -1 /**< to hint using per thread epfd */
+
+/**
+ * It waits for events on the epoll instance.
+ * Retries if signal received.
+ *
+ * @param epfd
+ * Epoll instance fd on which the caller wait for events.
+ * @param events
+ * Memory area contains the events that will be available for the caller.
+ * @param maxevents
+ * Up to maxevents are returned, must greater than zero.
+ * @param timeout
+ * Specifying a timeout of -1 causes a block indefinitely.
+ * Specifying a timeout equal to zero cause to return immediately.
+ * @return
+ * - On success, returns the number of available event.
+ * - On failure, a negative value.
+ */
+int
+rte_epoll_wait(int epfd, struct rte_epoll_event *events,
+ int maxevents, int timeout);
+
+/**
+ * It waits for events on the epoll instance.
+ * Does not retry if signal received.
+ *
+ * @param epfd
+ * Epoll instance fd on which the caller wait for events.
+ * @param events
+ * Memory area contains the events that will be available for the caller.
+ * @param maxevents
+ * Up to maxevents are returned, must greater than zero.
+ * @param timeout
+ * Specifying a timeout of -1 causes a block indefinitely.
+ * Specifying a timeout equal to zero cause to return immediately.
+ * @return
+ * - On success, returns the number of available event.
+ * - On failure, a negative value.
+ */
+__rte_experimental
+int
+rte_epoll_wait_interruptible(int epfd, struct rte_epoll_event *events,
+ int maxevents, int timeout);
+
+/**
+ * It performs control operations on epoll instance referred by the epfd.
+ * It requests that the operation op be performed for the target fd.
+ *
+ * @param epfd
+ * Epoll instance fd on which the caller perform control operations.
+ * @param op
+ * The operation be performed for the target fd.
+ * @param fd
+ * The target fd on which the control ops perform.
+ * @param event
+ * Describes the object linked to the fd.
+ * Note: The caller must take care the object deletion after CTL_DEL.
+ * @return
+ * - On success, zero.
+ * - On failure, a negative value.
+ */
+int
+rte_epoll_ctl(int epfd, int op, int fd,
+ struct rte_epoll_event *event);
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif /* __RTE_EPOLL_H__ */
diff --git a/lib/eal/include/rte_interrupts.h b/lib/eal/include/rte_interrupts.h
index cc3bf45d8c..a515a8c073 100644
--- a/lib/eal/include/rte_interrupts.h
+++ b/lib/eal/include/rte_interrupts.h
@@ -5,8 +5,12 @@
#ifndef _RTE_INTERRUPTS_H_
#define _RTE_INTERRUPTS_H_
+#include <stdbool.h>
+
+#include <rte_bitops.h>
#include <rte_common.h>
#include <rte_compat.h>
+#include <rte_epoll.h>
/**
* @file
@@ -22,6 +26,15 @@ extern "C" {
/** Interrupt handle */
struct rte_intr_handle;
+/** Interrupt instance allocation flags
+ * @see rte_intr_instance_alloc
+ */
+
+/** Interrupt instance will not be shared between primary and secondary processes. */
+#define RTE_INTR_INSTANCE_F_PRIVATE UINT32_C(0)
+/** Interrupt instance will be shared between primary and secondary processes. */
+#define RTE_INTR_INSTANCE_F_SHARED RTE_BIT32(0)
+
/** Function to be registered for the specific interrupt */
typedef void (*rte_intr_callback_fn)(void *cb_arg);
@@ -163,6 +176,620 @@ int rte_intr_disable(const struct rte_intr_handle *intr_handle);
__rte_experimental
int rte_intr_ack(const struct rte_intr_handle *intr_handle);
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Check if currently executing in interrupt context
+ *
+ * @return
+ * - non zero in case of interrupt context
+ * - zero in case of process context
+ */
+__rte_experimental
+int
+rte_thread_is_intr(void);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * It allocates memory for interrupt instance. API takes flag as an argument
+ * which define from where memory should be allocated i.e. using DPDK memory
+ * management library APIs or normal heap allocation.
+ * Default memory allocation for event fds and event list array is done which
+ * can be realloced later based on size of MSIX interrupts supported by a PCI
+ * device.
+ *
+ * This function should be called from application or driver, before calling
+ * any of the interrupt APIs.
+ *
+ * @param flags
+ * See RTE_INTR_INSTANCE_F_* flags definitions.
+ *
+ * @return
+ * - On success, address of interrupt handle.
+ * - On failure, NULL.
+ */
+__rte_experimental
+struct rte_intr_handle *
+rte_intr_instance_alloc(uint32_t flags);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * This API is used to free the memory allocated for interrupt handle
+ * resources.
+ *
+ * @param intr_handle
+ * Interrupt handle address.
+ *
+ */
+__rte_experimental
+void
+rte_intr_instance_free(struct rte_intr_handle *intr_handle);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * This API is used to set the fd field of interrupt handle with user provided
+ * file descriptor.
+ *
+ * @param intr_handle
+ * pointer to the interrupt handle.
+ * @param fd
+ * file descriptor value provided by user.
+ *
+ * @return
+ * - On success, zero.
+ * - On failure, a negative value and rte_errno is set.
+ */
+__rte_experimental
+int
+rte_intr_fd_set(struct rte_intr_handle *intr_handle, int fd);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Returns the fd field of the given interrupt handle instance.
+ *
+ * @param intr_handle
+ * pointer to the interrupt handle.
+ *
+ * @return
+ * - On success, fd field.
+ * - On failure, a negative value.
+ */
+__rte_experimental
+int
+rte_intr_fd_get(const struct rte_intr_handle *intr_handle);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * This API is used to set the type field of interrupt handle with user provided
+ * interrupt type.
+ *
+ * @param intr_handle
+ * pointer to the interrupt handle.
+ * @param type
+ * interrupt type
+ *
+ * @return
+ * - On success, zero.
+ * - On failure, a negative value and rte_errno is set.
+ */
+__rte_experimental
+int
+rte_intr_type_set(struct rte_intr_handle *intr_handle,
+ enum rte_intr_handle_type type);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Returns the type field of the given interrupt handle instance.
+ *
+ * @param intr_handle
+ * pointer to the interrupt handle.
+ *
+ * @return
+ * - On success, interrupt type
+ * - On failure, RTE_INTR_HANDLE_UNKNOWN.
+ */
+__rte_experimental
+enum rte_intr_handle_type
+rte_intr_type_get(const struct rte_intr_handle *intr_handle);
+
+/**
+ * @internal
+ * The function returns the per thread epoll instance.
+ *
+ * @return
+ * epfd the epoll instance referred to.
+ */
+__rte_internal
+int
+rte_intr_tls_epfd(void);
+
+/**
+ * @internal
+ * @param intr_handle
+ * Pointer to the interrupt handle.
+ * @param epfd
+ * Epoll instance fd which the intr vector associated to.
+ * @param op
+ * The operation be performed for the vector.
+ * Operation type of {ADD, DEL}.
+ * @param vec
+ * RX intr vector number added to the epoll instance wait list.
+ * @param data
+ * User raw data.
+ * @return
+ * - On success, zero.
+ * - On failure, a negative value.
+ */
+__rte_internal
+int
+rte_intr_rx_ctl(struct rte_intr_handle *intr_handle,
+ int epfd, int op, unsigned int vec, void *data);
+
+/**
+ * @internal
+ * It deletes registered eventfds.
+ *
+ * @param intr_handle
+ * Pointer to the interrupt handle.
+ */
+__rte_internal
+void
+rte_intr_free_epoll_fd(struct rte_intr_handle *intr_handle);
+
+/**
+ * @internal
+ * It enables the packet I/O interrupt event if it's necessary.
+ * It creates event fd for each interrupt vector when MSIX is used,
+ * otherwise it multiplexes a single event fd.
+ *
+ * @param intr_handle
+ * Pointer to the interrupt handle.
+ * @param nb_efd
+ * Number of interrupt vector trying to enable.
+ * The value 0 is not allowed.
+ * @return
+ * - On success, zero.
+ * - On failure, a negative value.
+ */
+__rte_internal
+int
+rte_intr_efd_enable(struct rte_intr_handle *intr_handle, uint32_t nb_efd);
+
+/**
+ * @internal
+ * It disables the packet I/O interrupt event.
+ * It deletes registered eventfds and closes the open fds.
+ *
+ * @param intr_handle
+ * Pointer to the interrupt handle.
+ */
+__rte_internal
+void
+rte_intr_efd_disable(struct rte_intr_handle *intr_handle);
+
+/**
+ * @internal
+ * The packet I/O interrupt on datapath is enabled or not.
+ *
+ * @param intr_handle
+ * Pointer to the interrupt handle.
+ */
+__rte_internal
+int
+rte_intr_dp_is_en(struct rte_intr_handle *intr_handle);
+
+/**
+ * @internal
+ * The interrupt handle instance allows other causes or not.
+ * Other causes stand for any none packet I/O interrupts.
+ *
+ * @param intr_handle
+ * Pointer to the interrupt handle.
+ */
+__rte_internal
+int
+rte_intr_allow_others(struct rte_intr_handle *intr_handle);
+
+/**
+ * @internal
+ * The multiple interrupt vector capability of interrupt handle instance.
+ * It returns zero if no multiple interrupt vector support.
+ *
+ * @param intr_handle
+ * Pointer to the interrupt handle.
+ */
+__rte_internal
+int
+rte_intr_cap_multiple(struct rte_intr_handle *intr_handle);
+
+/**
+ * @internal
+ * Creates a clone of src by allocating a new handle and copying src content.
+ *
+ * @param src
+ * Source interrupt handle to be cloned.
+ *
+ * @return
+ * - On success, address of interrupt handle.
+ * - On failure, NULL.
+ */
+__rte_internal
+struct rte_intr_handle *
+rte_intr_instance_dup(const struct rte_intr_handle *src);
+
+/**
+ * @internal
+ * This API is used to set the device fd field of interrupt handle with user
+ * provided dev fd. Device fd corresponds to VFIO device fd or UIO config fd.
+ *
+ * @param intr_handle
+ * pointer to the interrupt handle.
+ * @param fd
+ * interrupt type
+ *
+ * @return
+ * - On success, zero.
+ * - On failure, a negative value and rte_errno is set.
+ */
+__rte_internal
+int
+rte_intr_dev_fd_set(struct rte_intr_handle *intr_handle, int fd);
+
+/**
+ * @internal
+ * Returns the device fd field of the given interrupt handle instance.
+ *
+ * @param intr_handle
+ * pointer to the interrupt handle.
+ *
+ * @return
+ * - On success, dev fd.
+ * - On failure, a negative value and rte_errno is set.
+ */
+__rte_internal
+int
+rte_intr_dev_fd_get(const struct rte_intr_handle *intr_handle);
+
+/**
+ * @internal
+ * This API is used to set the max intr field of interrupt handle with user
+ * provided max intr value.
+ *
+ * @param intr_handle
+ * pointer to the interrupt handle.
+ * @param max_intr
+ * interrupt type
+ *
+ * @return
+ * - On success, zero.
+ * - On failure, a negative value and rte_errno is set.
+ */
+__rte_internal
+int
+rte_intr_max_intr_set(struct rte_intr_handle *intr_handle, int max_intr);
+
+/**
+ * @internal
+ * Returns the max intr field of the given interrupt handle instance.
+ *
+ * @param intr_handle
+ * pointer to the interrupt handle.
+ *
+ * @return
+ * - On success, max intr.
+ * - On failure, a negative value and rte_errno is set.
+ */
+__rte_internal
+int
+rte_intr_max_intr_get(const struct rte_intr_handle *intr_handle);
+
+/**
+ * @internal
+ * This API is used to set the number of event fd field of interrupt handle
+ * with user provided available event file descriptor value.
+ *
+ * @param intr_handle
+ * pointer to the interrupt handle.
+ * @param nb_efd
+ * Available event fd
+ *
+ * @return
+ * - On success, zero.
+ * - On failure, a negative value and rte_errno is set.
+ */
+__rte_internal
+int
+rte_intr_nb_efd_set(struct rte_intr_handle *intr_handle, int nb_efd);
+
+/**
+ * @internal
+ * Returns the number of available event fd field of the given interrupt handle
+ * instance.
+ *
+ * @param intr_handle
+ * pointer to the interrupt handle.
+ *
+ * @return
+ * - On success, nb_efd
+ * - On failure, a negative value and rte_errno is set.
+ */
+__rte_internal
+int
+rte_intr_nb_efd_get(const struct rte_intr_handle *intr_handle);
+
+/**
+ * @internal
+ * Returns the number of interrupt vector field of the given interrupt handle
+ * instance. This field is to configured on device probe time, and based on
+ * this value efds and elist arrays are dynamically allocated. By default
+ * this value is set to RTE_MAX_RXTX_INTR_VEC_ID.
+ * For eg. in case of PCI device, its msix size is queried and efds/elist
+ * arrays are allocated accordingly.
+ *
+ * @param intr_handle
+ * pointer to the interrupt handle.
+ *
+ * @return
+ * - On success, nb_intr
+ * - On failure, a negative value and rte_errno is set.
+ */
+__rte_internal
+int
+rte_intr_nb_intr_get(const struct rte_intr_handle *intr_handle);
+
+/**
+ * @internal
+ * This API is used to set the event fd counter size field of interrupt handle
+ * with user provided efd counter size.
+ *
+ * @param intr_handle
+ * pointer to the interrupt handle.
+ * @param efd_counter_size
+ * size of efd counter.
+ *
+ * @return
+ * - On success, zero.
+ * - On failure, a negative value and rte_errno is set.
+ */
+__rte_internal
+int
+rte_intr_efd_counter_size_set(struct rte_intr_handle *intr_handle,
+ uint8_t efd_counter_size);
+
+/**
+ * @internal
+ * Returns the event fd counter size field of the given interrupt handle
+ * instance.
+ *
+ * @param intr_handle
+ * pointer to the interrupt handle.
+ *
+ * @return
+ * - On success, efd_counter_size
+ * - On failure, a negative value and rte_errno is set.
+ */
+__rte_internal
+int
+rte_intr_efd_counter_size_get(const struct rte_intr_handle *intr_handle);
+
+/**
+ * @internal
+ * This API is used to set the event fd array index with the given fd.
+ *
+ * @param intr_handle
+ * pointer to the interrupt handle.
+ * @param index
+ * efds array index to be set
+ * @param fd
+ * event fd
+ *
+ * @return
+ * - On success, zero.
+ * - On failure, a negative value and rte_errno is set.
+ */
+__rte_internal
+int
+rte_intr_efds_index_set(struct rte_intr_handle *intr_handle, int index, int fd);
+
+/**
+ * @internal
+ * Returns the fd value of event fds array at a given index.
+ *
+ * @param intr_handle
+ * pointer to the interrupt handle.
+ * @param index
+ * efds array index to be returned
+ *
+ * @return
+ * - On success, fd
+ * - On failure, a negative value and rte_errno is set.
+ */
+__rte_internal
+int
+rte_intr_efds_index_get(const struct rte_intr_handle *intr_handle, int index);
+
+/**
+ * @internal
+ * This API is used to set the epoll event object array index with the given
+ * elist instance.
+ *
+ * @param intr_handle
+ * pointer to the interrupt handle.
+ * @param index
+ * elist array index to be set
+ * @param elist
+ * epoll event instance of struct rte_epoll_event
+ *
+ * @return
+ * - On success, zero.
+ * - On failure, a negative value and rte_errno is set.
+ */
+__rte_internal
+int
+rte_intr_elist_index_set(struct rte_intr_handle *intr_handle, int index,
+ struct rte_epoll_event elist);
+
+/**
+ * @internal
+ * Returns the address of epoll event instance from elist array at a given
+ * index.
+ *
+ * @param intr_handle
+ * pointer to the interrupt handle.
+ * @param index
+ * elist array index to be returned
+ *
+ * @return
+ * - On success, elist
+ * - On failure, a negative value and rte_errno is set.
+ */
+__rte_internal
+struct rte_epoll_event *
+rte_intr_elist_index_get(struct rte_intr_handle *intr_handle, int index);
+
+/**
+ * @internal
+ * Allocates the memory of interrupt vector list array, with size defining the
+ * number of elements required in the array.
+ *
+ * @param intr_handle
+ * pointer to the interrupt handle.
+ * @param name
+ * Name assigned to the allocation, or NULL.
+ * @param size
+ * Number of element required in the array.
+ *
+ * @return
+ * - On success, zero
+ * - On failure, a negative value and rte_errno is set.
+ */
+__rte_internal
+int
+rte_intr_vec_list_alloc(struct rte_intr_handle *intr_handle, const char *name,
+ int size);
+
+/**
+ * @internal
+ * Sets the vector value at given index of interrupt vector list field of given
+ * interrupt handle.
+ *
+ * @param intr_handle
+ * pointer to the interrupt handle.
+ * @param index
+ * intr_vec array index to be set
+ * @param vec
+ * Interrupt vector value.
+ *
+ * @return
+ * - On success, zero
+ * - On failure, a negative value and rte_errno is set.
+ */
+__rte_internal
+int
+rte_intr_vec_list_index_set(struct rte_intr_handle *intr_handle, int index,
+ int vec);
+
+/**
+ * @internal
+ * Returns the vector value at the given index of interrupt vector list array.
+ *
+ * @param intr_handle
+ * pointer to the interrupt handle.
+ * @param index
+ * intr_vec array index to be returned
+ *
+ * @return
+ * - On success, interrupt vector
+ * - On failure, a negative value and rte_errno is set.
+ */
+__rte_internal
+int
+rte_intr_vec_list_index_get(const struct rte_intr_handle *intr_handle,
+ int index);
+
+/**
+ * @internal
+ * Frees the memory allocated for interrupt vector list array.
+ *
+ * @param intr_handle
+ * pointer to the interrupt handle.
+ *
+ * @return
+ * - On success, zero
+ * - On failure, a negative value and rte_errno is set.
+ */
+__rte_internal
+void
+rte_intr_vec_list_free(struct rte_intr_handle *intr_handle);
+
+/**
+ * @internal
+ * Reallocates the size efds and elist array based on size provided by user.
+ * By default efds and elist array are allocated with default size
+ * RTE_MAX_RXTX_INTR_VEC_ID on interrupt handle array creation. Later on device
+ * probe, device may have capability of more interrupts than
+ * RTE_MAX_RXTX_INTR_VEC_ID. Using this API, PMDs can reallocate the arrays as
+ * per the max interrupts capability of device.
+ *
+ * @param intr_handle
+ * pointer to the interrupt handle.
+ * @param size
+ * efds and elist array size.
+ *
+ * @return
+ * - On success, zero
+ * - On failure, a negative value and rte_errno is set.
+ */
+__rte_internal
+int
+rte_intr_event_list_update(struct rte_intr_handle *intr_handle, int size);
+
+/**
+ * @internal
+ * This API returns the Windows handle of the given interrupt instance.
+ *
+ * @param intr_handle
+ * pointer to the interrupt handle.
+ *
+ * @return
+ * - On success, Windows handle.
+ * - On failure, NULL.
+ */
+__rte_internal
+void *
+rte_intr_instance_windows_handle_get(struct rte_intr_handle *intr_handle);
+
+/**
+ * @internal
+ * This API set the Windows handle for the given interrupt instance.
+ *
+ * @param intr_handle
+ * pointer to the interrupt handle.
+ * @param windows_handle
+ * Windows handle to be set.
+ *
+ * @return
+ * - On success, zero
+ * - On failure, a negative value and rte_errno is set.
+ */
+__rte_internal
+int
+rte_intr_instance_windows_handle_set(struct rte_intr_handle *intr_handle,
+ void *windows_handle);
+
#ifdef __cplusplus
}
#endif
diff --git a/lib/eal/version.map b/lib/eal/version.map
index 38f7de83e1..9d43655b66 100644
--- a/lib/eal/version.map
+++ b/lib/eal/version.map
@@ -109,18 +109,10 @@ DPDK_22 {
rte_hexdump;
rte_hypervisor_get;
rte_hypervisor_get_name; # WINDOWS_NO_EXPORT
- rte_intr_allow_others;
rte_intr_callback_register;
rte_intr_callback_unregister;
- rte_intr_cap_multiple;
rte_intr_disable;
- rte_intr_dp_is_en;
- rte_intr_efd_disable;
- rte_intr_efd_enable;
rte_intr_enable;
- rte_intr_free_epoll_fd;
- rte_intr_rx_ctl;
- rte_intr_tls_epfd;
rte_keepalive_create; # WINDOWS_NO_EXPORT
rte_keepalive_dispatch_pings; # WINDOWS_NO_EXPORT
rte_keepalive_mark_alive; # WINDOWS_NO_EXPORT
@@ -420,12 +412,49 @@ EXPERIMENTAL {
# added in 21.08
rte_power_monitor_multi; # WINDOWS_NO_EXPORT
+
+ # added in 21.11
+ rte_intr_fd_get;
+ rte_intr_fd_set;
+ rte_intr_instance_alloc;
+ rte_intr_instance_free;
+ rte_intr_type_get;
+ rte_intr_type_set;
};
INTERNAL {
global:
rte_firmware_read;
+ rte_intr_allow_others;
+ rte_intr_cap_multiple;
+ rte_intr_dev_fd_get;
+ rte_intr_dev_fd_set;
+ rte_intr_dp_is_en;
+ rte_intr_efd_counter_size_set;
+ rte_intr_efd_counter_size_get;
+ rte_intr_efd_disable;
+ rte_intr_efd_enable;
+ rte_intr_efds_index_get;
+ rte_intr_efds_index_set;
+ rte_intr_elist_index_get;
+ rte_intr_elist_index_set;
+ rte_intr_event_list_update;
+ rte_intr_free_epoll_fd;
+ rte_intr_instance_dup;
+ rte_intr_instance_windows_handle_get;
+ rte_intr_instance_windows_handle_set;
+ rte_intr_max_intr_get;
+ rte_intr_max_intr_set;
+ rte_intr_nb_efd_get;
+ rte_intr_nb_efd_set;
+ rte_intr_nb_intr_get;
+ rte_intr_rx_ctl;
+ rte_intr_tls_epfd;
+ rte_intr_vec_list_alloc;
+ rte_intr_vec_list_free;
+ rte_intr_vec_list_index_get;
+ rte_intr_vec_list_index_set;
rte_mem_lock;
rte_mem_map;
rte_mem_page_size;
--
2.23.0
^ permalink raw reply [flat|nested] 152+ messages in thread
* [dpdk-dev] [PATCH v6 2/9] interrupts: remove direct access to interrupt handle
2021-10-24 20:04 ` [dpdk-dev] [PATCH v6 0/9] make rte_intr_handle internal David Marchand
2021-10-24 20:04 ` [dpdk-dev] [PATCH v6 1/9] interrupts: add allocator and accessors David Marchand
@ 2021-10-24 20:04 ` David Marchand
2021-10-25 6:57 ` David Marchand
2021-10-24 20:04 ` [dpdk-dev] [PATCH v6 3/9] test/interrupts: " David Marchand
` (6 subsequent siblings)
8 siblings, 1 reply; 152+ messages in thread
From: David Marchand @ 2021-10-24 20:04 UTC (permalink / raw)
To: hkalra, dev; +Cc: dmitry.kozliuk, Bruce Richardson
From: Harman Kalra <hkalra@marvell.com>
Modifying the interrupt framework to use the interrupt handle get/set
APIs instead of accessing any field directly.
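As a hedged illustration of the conversion pattern (not code from this patch; the helper name is made up), a direct field read such as intr_handle->fd becomes an accessor call, and a by-value copy of the handle becomes rte_intr_instance_dup():

#include <rte_interrupts.h>
#include <rte_log.h>

/* Sketch only: shows the accessor-based pattern used throughout the patch. */
static struct rte_intr_handle *
dup_source_handle(const struct rte_intr_handle *intr_handle)
{
	struct rte_intr_handle *copy;

	/* was: intr_handle == NULL || intr_handle->fd < 0 */
	if (rte_intr_fd_get(intr_handle) < 0)
		return NULL;

	/* was: src->intr_handle = *intr_handle; (copy by value) */
	copy = rte_intr_instance_dup(intr_handle);
	if (copy == NULL)
		RTE_LOG(ERR, EAL, "Can not create intr instance\n");

	return copy;
}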
Signed-off-by: Harman Kalra <hkalra@marvell.com>
Signed-off-by: David Marchand <david.marchand@redhat.com>
---
Changes since v5:
- used new helper rte_intr_instance_dup,
---
lib/eal/freebsd/eal_interrupts.c | 85 +++++----
lib/eal/linux/eal_interrupts.c | 304 +++++++++++++++++--------------
2 files changed, 219 insertions(+), 170 deletions(-)
diff --git a/lib/eal/freebsd/eal_interrupts.c b/lib/eal/freebsd/eal_interrupts.c
index 86810845fe..d86f22c102 100644
--- a/lib/eal/freebsd/eal_interrupts.c
+++ b/lib/eal/freebsd/eal_interrupts.c
@@ -40,7 +40,7 @@ struct rte_intr_callback {
struct rte_intr_source {
TAILQ_ENTRY(rte_intr_source) next;
- struct rte_intr_handle intr_handle; /**< interrupt handle */
+ struct rte_intr_handle *intr_handle; /**< interrupt handle */
struct rte_intr_cb_list callbacks; /**< user callbacks */
uint32_t active;
};
@@ -60,7 +60,7 @@ static int
intr_source_to_kevent(const struct rte_intr_handle *ih, struct kevent *ke)
{
/* alarm callbacks are special case */
- if (ih->type == RTE_INTR_HANDLE_ALARM) {
+ if (rte_intr_type_get(ih) == RTE_INTR_HANDLE_ALARM) {
uint64_t timeout_ns;
/* get soonest alarm timeout */
@@ -75,7 +75,7 @@ intr_source_to_kevent(const struct rte_intr_handle *ih, struct kevent *ke)
} else {
ke->filter = EVFILT_READ;
}
- ke->ident = ih->fd;
+ ke->ident = rte_intr_fd_get(ih);
return 0;
}
@@ -89,7 +89,7 @@ rte_intr_callback_register(const struct rte_intr_handle *intr_handle,
int ret = 0, add_event = 0;
/* first do parameter checking */
- if (intr_handle == NULL || intr_handle->fd < 0 || cb == NULL) {
+ if (rte_intr_fd_get(intr_handle) < 0 || cb == NULL) {
RTE_LOG(ERR, EAL,
"Registering with invalid input parameter\n");
return -EINVAL;
@@ -103,7 +103,7 @@ rte_intr_callback_register(const struct rte_intr_handle *intr_handle,
/* find the source for this intr_handle */
TAILQ_FOREACH(src, &intr_sources, next) {
- if (src->intr_handle.fd == intr_handle->fd)
+ if (rte_intr_fd_get(src->intr_handle) == rte_intr_fd_get(intr_handle))
break;
}
@@ -112,8 +112,9 @@ rte_intr_callback_register(const struct rte_intr_handle *intr_handle,
* thing on the list should be eal_alarm_callback() and we may
* be called just to reset the timer.
*/
- if (src != NULL && src->intr_handle.type == RTE_INTR_HANDLE_ALARM &&
- !TAILQ_EMPTY(&src->callbacks)) {
+ if (src != NULL &&
+ rte_intr_type_get(src->intr_handle) == RTE_INTR_HANDLE_ALARM &&
+ !TAILQ_EMPTY(&src->callbacks)) {
callback = NULL;
} else {
/* allocate a new interrupt callback entity */
@@ -135,7 +136,14 @@ rte_intr_callback_register(const struct rte_intr_handle *intr_handle,
ret = -ENOMEM;
goto fail;
} else {
- src->intr_handle = *intr_handle;
+ src->intr_handle = rte_intr_instance_dup(intr_handle);
+ if (src->intr_handle == NULL) {
+ RTE_LOG(ERR, EAL, "Can not create intr instance\n");
+ ret = -ENOMEM;
+ free(src);
+ src = NULL;
+ goto fail;
+ }
TAILQ_INIT(&src->callbacks);
TAILQ_INSERT_TAIL(&intr_sources, src, next);
}
@@ -151,7 +159,8 @@ rte_intr_callback_register(const struct rte_intr_handle *intr_handle,
/* add events to the queue. timer events are special as we need to
* re-set the timer.
*/
- if (add_event || src->intr_handle.type == RTE_INTR_HANDLE_ALARM) {
+ if (add_event ||
+ rte_intr_type_get(src->intr_handle) == RTE_INTR_HANDLE_ALARM) {
struct kevent ke;
memset(&ke, 0, sizeof(ke));
@@ -173,12 +182,11 @@ rte_intr_callback_register(const struct rte_intr_handle *intr_handle,
*/
if (errno == ENODEV)
RTE_LOG(DEBUG, EAL, "Interrupt handle %d not supported\n",
- src->intr_handle.fd);
+ rte_intr_fd_get(src->intr_handle));
else
- RTE_LOG(ERR, EAL, "Error adding fd %d "
- "kevent, %s\n",
- src->intr_handle.fd,
- strerror(errno));
+ RTE_LOG(ERR, EAL, "Error adding fd %d kevent, %s\n",
+ rte_intr_fd_get(src->intr_handle),
+ strerror(errno));
ret = -errno;
goto fail;
}
@@ -213,7 +221,7 @@ rte_intr_callback_unregister_pending(const struct rte_intr_handle *intr_handle,
struct rte_intr_callback *cb, *next;
/* do parameter checking first */
- if (intr_handle == NULL || intr_handle->fd < 0) {
+ if (rte_intr_fd_get(intr_handle) < 0) {
RTE_LOG(ERR, EAL,
"Unregistering with invalid input parameter\n");
return -EINVAL;
@@ -228,7 +236,7 @@ rte_intr_callback_unregister_pending(const struct rte_intr_handle *intr_handle,
/* check if the insterrupt source for the fd is existent */
TAILQ_FOREACH(src, &intr_sources, next)
- if (src->intr_handle.fd == intr_handle->fd)
+ if (rte_intr_fd_get(src->intr_handle) == rte_intr_fd_get(intr_handle))
break;
/* No interrupt source registered for the fd */
@@ -268,7 +276,7 @@ rte_intr_callback_unregister(const struct rte_intr_handle *intr_handle,
struct rte_intr_callback *cb, *next;
/* do parameter checking first */
- if (intr_handle == NULL || intr_handle->fd < 0) {
+ if (rte_intr_fd_get(intr_handle) < 0) {
RTE_LOG(ERR, EAL,
"Unregistering with invalid input parameter\n");
return -EINVAL;
@@ -282,7 +290,7 @@ rte_intr_callback_unregister(const struct rte_intr_handle *intr_handle,
/* check if the insterrupt source for the fd is existent */
TAILQ_FOREACH(src, &intr_sources, next)
- if (src->intr_handle.fd == intr_handle->fd)
+ if (rte_intr_fd_get(src->intr_handle) == rte_intr_fd_get(intr_handle))
break;
/* No interrupt source registered for the fd */
@@ -314,7 +322,8 @@ rte_intr_callback_unregister(const struct rte_intr_handle *intr_handle,
*/
if (kevent(kq, &ke, 1, NULL, 0, NULL) < 0) {
RTE_LOG(ERR, EAL, "Error removing fd %d kevent, %s\n",
- src->intr_handle.fd, strerror(errno));
+ rte_intr_fd_get(src->intr_handle),
+ strerror(errno));
/* removing non-existent even is an expected condition
* in some circumstances (e.g. oneshot events).
*/
@@ -365,17 +374,18 @@ rte_intr_enable(const struct rte_intr_handle *intr_handle)
if (intr_handle == NULL)
return -1;
- if (intr_handle->type == RTE_INTR_HANDLE_VDEV) {
+ if (rte_intr_type_get(intr_handle) == RTE_INTR_HANDLE_VDEV) {
rc = 0;
goto out;
}
- if (intr_handle->fd < 0 || intr_handle->uio_cfg_fd < 0) {
+ if (rte_intr_fd_get(intr_handle) < 0 ||
+ rte_intr_dev_fd_get(intr_handle) < 0) {
rc = -1;
goto out;
}
- switch (intr_handle->type) {
+ switch (rte_intr_type_get(intr_handle)) {
/* not used at this moment */
case RTE_INTR_HANDLE_ALARM:
rc = -1;
@@ -386,9 +396,8 @@ rte_intr_enable(const struct rte_intr_handle *intr_handle)
break;
/* unknown handle type */
default:
- RTE_LOG(ERR, EAL,
- "Unknown handle type of fd %d\n",
- intr_handle->fd);
+ RTE_LOG(ERR, EAL, "Unknown handle type of fd %d\n",
+ rte_intr_fd_get(intr_handle));
rc = -1;
break;
}
@@ -406,17 +415,18 @@ rte_intr_disable(const struct rte_intr_handle *intr_handle)
if (intr_handle == NULL)
return -1;
- if (intr_handle->type == RTE_INTR_HANDLE_VDEV) {
+ if (rte_intr_type_get(intr_handle) == RTE_INTR_HANDLE_VDEV) {
rc = 0;
goto out;
}
- if (intr_handle->fd < 0 || intr_handle->uio_cfg_fd < 0) {
+ if (rte_intr_fd_get(intr_handle) < 0 ||
+ rte_intr_dev_fd_get(intr_handle) < 0) {
rc = -1;
goto out;
}
- switch (intr_handle->type) {
+ switch (rte_intr_type_get(intr_handle)) {
/* not used at this moment */
case RTE_INTR_HANDLE_ALARM:
rc = -1;
@@ -427,9 +437,8 @@ rte_intr_disable(const struct rte_intr_handle *intr_handle)
break;
/* unknown handle type */
default:
- RTE_LOG(ERR, EAL,
- "Unknown handle type of fd %d\n",
- intr_handle->fd);
+ RTE_LOG(ERR, EAL, "Unknown handle type of fd %d\n",
+ rte_intr_fd_get(intr_handle));
rc = -1;
break;
}
@@ -441,7 +450,7 @@ rte_intr_disable(const struct rte_intr_handle *intr_handle)
int
rte_intr_ack(const struct rte_intr_handle *intr_handle)
{
- if (intr_handle && intr_handle->type == RTE_INTR_HANDLE_VDEV)
+ if (rte_intr_type_get(intr_handle) == RTE_INTR_HANDLE_VDEV)
return 0;
return -1;
@@ -463,7 +472,7 @@ eal_intr_process_interrupts(struct kevent *events, int nfds)
rte_spinlock_lock(&intr_lock);
TAILQ_FOREACH(src, &intr_sources, next)
- if (src->intr_handle.fd == event_fd)
+ if (rte_intr_fd_get(src->intr_handle) == event_fd)
break;
if (src == NULL) {
rte_spinlock_unlock(&intr_lock);
@@ -475,7 +484,7 @@ eal_intr_process_interrupts(struct kevent *events, int nfds)
rte_spinlock_unlock(&intr_lock);
/* set the length to be read dor different handle type */
- switch (src->intr_handle.type) {
+ switch (rte_intr_type_get(src->intr_handle)) {
case RTE_INTR_HANDLE_ALARM:
bytes_read = 0;
call = true;
@@ -546,7 +555,7 @@ eal_intr_process_interrupts(struct kevent *events, int nfds)
/* mark for deletion from the queue */
ke.flags = EV_DELETE;
- if (intr_source_to_kevent(&src->intr_handle, &ke) < 0) {
+ if (intr_source_to_kevent(src->intr_handle, &ke) < 0) {
RTE_LOG(ERR, EAL, "Cannot convert to kevent\n");
rte_spinlock_unlock(&intr_lock);
return;
@@ -556,8 +565,8 @@ eal_intr_process_interrupts(struct kevent *events, int nfds)
* remove intr file descriptor from wait list.
*/
if (kevent(kq, &ke, 1, NULL, 0, NULL) < 0) {
- RTE_LOG(ERR, EAL, "Error removing fd %d kevent, "
- "%s\n", src->intr_handle.fd,
+ RTE_LOG(ERR, EAL, "Error removing fd %d kevent, %s\n"
+ rte_intr_fd_get(src->intr_handle),
strerror(errno));
/* removing non-existent even is an expected
* condition in some circumstances
@@ -567,7 +576,7 @@ eal_intr_process_interrupts(struct kevent *events, int nfds)
TAILQ_REMOVE(&src->callbacks, cb, next);
if (cb->ucb_fn)
- cb->ucb_fn(&src->intr_handle, cb->cb_arg);
+ cb->ucb_fn(src->intr_handle, cb->cb_arg);
free(cb);
}
}
diff --git a/lib/eal/linux/eal_interrupts.c b/lib/eal/linux/eal_interrupts.c
index 22b3b7bcd9..f72661e1f0 100644
--- a/lib/eal/linux/eal_interrupts.c
+++ b/lib/eal/linux/eal_interrupts.c
@@ -82,7 +82,7 @@ struct rte_intr_callback {
struct rte_intr_source {
TAILQ_ENTRY(rte_intr_source) next;
- struct rte_intr_handle intr_handle; /**< interrupt handle */
+ struct rte_intr_handle *intr_handle; /**< interrupt handle */
struct rte_intr_cb_list callbacks; /**< user callbacks */
uint32_t active;
};
@@ -112,7 +112,7 @@ static int
vfio_enable_intx(const struct rte_intr_handle *intr_handle) {
struct vfio_irq_set *irq_set;
char irq_set_buf[IRQ_SET_BUF_LEN];
- int len, ret;
+ int len, ret, vfio_dev_fd;
int *fd_ptr;
len = sizeof(irq_set_buf);
@@ -125,13 +125,14 @@ vfio_enable_intx(const struct rte_intr_handle *intr_handle) {
irq_set->index = VFIO_PCI_INTX_IRQ_INDEX;
irq_set->start = 0;
fd_ptr = (int *) &irq_set->data;
- *fd_ptr = intr_handle->fd;
+ *fd_ptr = rte_intr_fd_get(intr_handle);
- ret = ioctl(intr_handle->vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
+ vfio_dev_fd = rte_intr_dev_fd_get(intr_handle);
+ ret = ioctl(vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
if (ret) {
RTE_LOG(ERR, EAL, "Error enabling INTx interrupts for fd %d\n",
- intr_handle->fd);
+ rte_intr_fd_get(intr_handle));
return -1;
}
@@ -144,11 +145,11 @@ vfio_enable_intx(const struct rte_intr_handle *intr_handle) {
irq_set->index = VFIO_PCI_INTX_IRQ_INDEX;
irq_set->start = 0;
- ret = ioctl(intr_handle->vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
+ ret = ioctl(vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
if (ret) {
RTE_LOG(ERR, EAL, "Error unmasking INTx interrupts for fd %d\n",
- intr_handle->fd);
+ rte_intr_fd_get(intr_handle));
return -1;
}
return 0;
@@ -159,7 +160,7 @@ static int
vfio_disable_intx(const struct rte_intr_handle *intr_handle) {
struct vfio_irq_set *irq_set;
char irq_set_buf[IRQ_SET_BUF_LEN];
- int len, ret;
+ int len, ret, vfio_dev_fd;
len = sizeof(struct vfio_irq_set);
@@ -171,11 +172,12 @@ vfio_disable_intx(const struct rte_intr_handle *intr_handle) {
irq_set->index = VFIO_PCI_INTX_IRQ_INDEX;
irq_set->start = 0;
- ret = ioctl(intr_handle->vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
+ vfio_dev_fd = rte_intr_dev_fd_get(intr_handle);
+ ret = ioctl(vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
if (ret) {
RTE_LOG(ERR, EAL, "Error masking INTx interrupts for fd %d\n",
- intr_handle->fd);
+ rte_intr_fd_get(intr_handle));
return -1;
}
@@ -187,11 +189,11 @@ vfio_disable_intx(const struct rte_intr_handle *intr_handle) {
irq_set->index = VFIO_PCI_INTX_IRQ_INDEX;
irq_set->start = 0;
- ret = ioctl(intr_handle->vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
+ ret = ioctl(vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
if (ret) {
- RTE_LOG(ERR, EAL,
- "Error disabling INTx interrupts for fd %d\n", intr_handle->fd);
+ RTE_LOG(ERR, EAL, "Error disabling INTx interrupts for fd %d\n",
+ rte_intr_fd_get(intr_handle));
return -1;
}
return 0;
@@ -202,6 +204,7 @@ static int
vfio_ack_intx(const struct rte_intr_handle *intr_handle)
{
struct vfio_irq_set irq_set;
+ int vfio_dev_fd;
/* unmask INTx */
memset(&irq_set, 0, sizeof(irq_set));
@@ -211,9 +214,10 @@ vfio_ack_intx(const struct rte_intr_handle *intr_handle)
irq_set.index = VFIO_PCI_INTX_IRQ_INDEX;
irq_set.start = 0;
- if (ioctl(intr_handle->vfio_dev_fd, VFIO_DEVICE_SET_IRQS, &irq_set)) {
+ vfio_dev_fd = rte_intr_dev_fd_get(intr_handle);
+ if (ioctl(vfio_dev_fd, VFIO_DEVICE_SET_IRQS, &irq_set)) {
RTE_LOG(ERR, EAL, "Error unmasking INTx interrupts for fd %d\n",
- intr_handle->fd);
+ rte_intr_fd_get(intr_handle));
return -1;
}
return 0;
@@ -225,7 +229,7 @@ vfio_enable_msi(const struct rte_intr_handle *intr_handle) {
int len, ret;
char irq_set_buf[IRQ_SET_BUF_LEN];
struct vfio_irq_set *irq_set;
- int *fd_ptr;
+ int *fd_ptr, vfio_dev_fd;
len = sizeof(irq_set_buf);
@@ -236,13 +240,14 @@ vfio_enable_msi(const struct rte_intr_handle *intr_handle) {
irq_set->index = VFIO_PCI_MSI_IRQ_INDEX;
irq_set->start = 0;
fd_ptr = (int *) &irq_set->data;
- *fd_ptr = intr_handle->fd;
+ *fd_ptr = rte_intr_fd_get(intr_handle);
- ret = ioctl(intr_handle->vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
+ vfio_dev_fd = rte_intr_dev_fd_get(intr_handle);
+ ret = ioctl(vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
if (ret) {
RTE_LOG(ERR, EAL, "Error enabling MSI interrupts for fd %d\n",
- intr_handle->fd);
+ rte_intr_fd_get(intr_handle));
return -1;
}
return 0;
@@ -253,7 +258,7 @@ static int
vfio_disable_msi(const struct rte_intr_handle *intr_handle) {
struct vfio_irq_set *irq_set;
char irq_set_buf[IRQ_SET_BUF_LEN];
- int len, ret;
+ int len, ret, vfio_dev_fd;
len = sizeof(struct vfio_irq_set);
@@ -264,11 +269,11 @@ vfio_disable_msi(const struct rte_intr_handle *intr_handle) {
irq_set->index = VFIO_PCI_MSI_IRQ_INDEX;
irq_set->start = 0;
- ret = ioctl(intr_handle->vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
-
+ vfio_dev_fd = rte_intr_dev_fd_get(intr_handle);
+ ret = ioctl(vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
if (ret)
- RTE_LOG(ERR, EAL,
- "Error disabling MSI interrupts for fd %d\n", intr_handle->fd);
+ RTE_LOG(ERR, EAL, "Error disabling MSI interrupts for fd %d\n",
+ rte_intr_fd_get(intr_handle));
return ret;
}
@@ -279,30 +284,35 @@ vfio_enable_msix(const struct rte_intr_handle *intr_handle) {
int len, ret;
char irq_set_buf[MSIX_IRQ_SET_BUF_LEN];
struct vfio_irq_set *irq_set;
- int *fd_ptr;
+ int *fd_ptr, vfio_dev_fd, i;
len = sizeof(irq_set_buf);
irq_set = (struct vfio_irq_set *) irq_set_buf;
irq_set->argsz = len;
/* 0 < irq_set->count < RTE_MAX_RXTX_INTR_VEC_ID + 1 */
- irq_set->count = intr_handle->max_intr ?
- (intr_handle->max_intr > RTE_MAX_RXTX_INTR_VEC_ID + 1 ?
- RTE_MAX_RXTX_INTR_VEC_ID + 1 : intr_handle->max_intr) : 1;
+ irq_set->count = rte_intr_max_intr_get(intr_handle) ?
+ (rte_intr_max_intr_get(intr_handle) >
+ RTE_MAX_RXTX_INTR_VEC_ID + 1 ? RTE_MAX_RXTX_INTR_VEC_ID + 1 :
+ rte_intr_max_intr_get(intr_handle)) : 1;
+
irq_set->flags = VFIO_IRQ_SET_DATA_EVENTFD | VFIO_IRQ_SET_ACTION_TRIGGER;
irq_set->index = VFIO_PCI_MSIX_IRQ_INDEX;
irq_set->start = 0;
fd_ptr = (int *) &irq_set->data;
/* INTR vector offset 0 reserve for non-efds mapping */
- fd_ptr[RTE_INTR_VEC_ZERO_OFFSET] = intr_handle->fd;
- memcpy(&fd_ptr[RTE_INTR_VEC_RXTX_OFFSET], intr_handle->efds,
- sizeof(*intr_handle->efds) * intr_handle->nb_efd);
+ fd_ptr[RTE_INTR_VEC_ZERO_OFFSET] = rte_intr_fd_get(intr_handle);
+ for (i = 0; i < rte_intr_nb_efd_get(intr_handle); i++) {
+ fd_ptr[RTE_INTR_VEC_RXTX_OFFSET + i] =
+ rte_intr_efds_index_get(intr_handle, i);
+ }
- ret = ioctl(intr_handle->vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
+ vfio_dev_fd = rte_intr_dev_fd_get(intr_handle);
+ ret = ioctl(vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
if (ret) {
RTE_LOG(ERR, EAL, "Error enabling MSI-X interrupts for fd %d\n",
- intr_handle->fd);
+ rte_intr_fd_get(intr_handle));
return -1;
}
@@ -314,7 +324,7 @@ static int
vfio_disable_msix(const struct rte_intr_handle *intr_handle) {
struct vfio_irq_set *irq_set;
char irq_set_buf[MSIX_IRQ_SET_BUF_LEN];
- int len, ret;
+ int len, ret, vfio_dev_fd;
len = sizeof(struct vfio_irq_set);
@@ -325,11 +335,12 @@ vfio_disable_msix(const struct rte_intr_handle *intr_handle) {
irq_set->index = VFIO_PCI_MSIX_IRQ_INDEX;
irq_set->start = 0;
- ret = ioctl(intr_handle->vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
+ vfio_dev_fd = rte_intr_dev_fd_get(intr_handle);
+ ret = ioctl(vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
if (ret)
- RTE_LOG(ERR, EAL,
- "Error disabling MSI-X interrupts for fd %d\n", intr_handle->fd);
+ RTE_LOG(ERR, EAL, "Error disabling MSI-X interrupts for fd %d\n",
+ rte_intr_fd_get(intr_handle));
return ret;
}
@@ -342,7 +353,7 @@ vfio_enable_req(const struct rte_intr_handle *intr_handle)
int len, ret;
char irq_set_buf[IRQ_SET_BUF_LEN];
struct vfio_irq_set *irq_set;
- int *fd_ptr;
+ int *fd_ptr, vfio_dev_fd;
len = sizeof(irq_set_buf);
@@ -354,13 +365,14 @@ vfio_enable_req(const struct rte_intr_handle *intr_handle)
irq_set->index = VFIO_PCI_REQ_IRQ_INDEX;
irq_set->start = 0;
fd_ptr = (int *) &irq_set->data;
- *fd_ptr = intr_handle->fd;
+ *fd_ptr = rte_intr_fd_get(intr_handle);
- ret = ioctl(intr_handle->vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
+ vfio_dev_fd = rte_intr_dev_fd_get(intr_handle);
+ ret = ioctl(vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
if (ret) {
RTE_LOG(ERR, EAL, "Error enabling req interrupts for fd %d\n",
- intr_handle->fd);
+ rte_intr_fd_get(intr_handle));
return -1;
}
@@ -373,7 +385,7 @@ vfio_disable_req(const struct rte_intr_handle *intr_handle)
{
struct vfio_irq_set *irq_set;
char irq_set_buf[IRQ_SET_BUF_LEN];
- int len, ret;
+ int len, ret, vfio_dev_fd;
len = sizeof(struct vfio_irq_set);
@@ -384,11 +396,12 @@ vfio_disable_req(const struct rte_intr_handle *intr_handle)
irq_set->index = VFIO_PCI_REQ_IRQ_INDEX;
irq_set->start = 0;
- ret = ioctl(intr_handle->vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
+ vfio_dev_fd = rte_intr_dev_fd_get(intr_handle);
+ ret = ioctl(vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
if (ret)
RTE_LOG(ERR, EAL, "Error disabling req interrupts for fd %d\n",
- intr_handle->fd);
+ rte_intr_fd_get(intr_handle));
return ret;
}
@@ -399,20 +412,22 @@ static int
uio_intx_intr_disable(const struct rte_intr_handle *intr_handle)
{
unsigned char command_high;
+ int uio_cfg_fd;
/* use UIO config file descriptor for uio_pci_generic */
- if (pread(intr_handle->uio_cfg_fd, &command_high, 1, 5) != 1) {
+ uio_cfg_fd = rte_intr_dev_fd_get(intr_handle);
+ if (pread(uio_cfg_fd, &command_high, 1, 5) != 1) {
RTE_LOG(ERR, EAL,
"Error reading interrupts status for fd %d\n",
- intr_handle->uio_cfg_fd);
+ uio_cfg_fd);
return -1;
}
/* disable interrupts */
command_high |= 0x4;
- if (pwrite(intr_handle->uio_cfg_fd, &command_high, 1, 5) != 1) {
+ if (pwrite(uio_cfg_fd, &command_high, 1, 5) != 1) {
RTE_LOG(ERR, EAL,
"Error disabling interrupts for fd %d\n",
- intr_handle->uio_cfg_fd);
+ uio_cfg_fd);
return -1;
}
@@ -423,20 +438,22 @@ static int
uio_intx_intr_enable(const struct rte_intr_handle *intr_handle)
{
unsigned char command_high;
+ int uio_cfg_fd;
/* use UIO config file descriptor for uio_pci_generic */
- if (pread(intr_handle->uio_cfg_fd, &command_high, 1, 5) != 1) {
+ uio_cfg_fd = rte_intr_dev_fd_get(intr_handle);
+ if (pread(uio_cfg_fd, &command_high, 1, 5) != 1) {
RTE_LOG(ERR, EAL,
"Error reading interrupts status for fd %d\n",
- intr_handle->uio_cfg_fd);
+ uio_cfg_fd);
return -1;
}
/* enable interrupts */
command_high &= ~0x4;
- if (pwrite(intr_handle->uio_cfg_fd, &command_high, 1, 5) != 1) {
+ if (pwrite(uio_cfg_fd, &command_high, 1, 5) != 1) {
RTE_LOG(ERR, EAL,
"Error enabling interrupts for fd %d\n",
- intr_handle->uio_cfg_fd);
+ uio_cfg_fd);
return -1;
}
@@ -448,10 +465,9 @@ uio_intr_disable(const struct rte_intr_handle *intr_handle)
{
const int value = 0;
- if (write(intr_handle->fd, &value, sizeof(value)) < 0) {
- RTE_LOG(ERR, EAL,
- "Error disabling interrupts for fd %d (%s)\n",
- intr_handle->fd, strerror(errno));
+ if (write(rte_intr_fd_get(intr_handle), &value, sizeof(value)) < 0) {
+ RTE_LOG(ERR, EAL, "Error disabling interrupts for fd %d (%s)\n",
+ rte_intr_fd_get(intr_handle), strerror(errno));
return -1;
}
return 0;
@@ -462,10 +478,9 @@ uio_intr_enable(const struct rte_intr_handle *intr_handle)
{
const int value = 1;
- if (write(intr_handle->fd, &value, sizeof(value)) < 0) {
- RTE_LOG(ERR, EAL,
- "Error enabling interrupts for fd %d (%s)\n",
- intr_handle->fd, strerror(errno));
+ if (write(rte_intr_fd_get(intr_handle), &value, sizeof(value)) < 0) {
+ RTE_LOG(ERR, EAL, "Error enabling interrupts for fd %d (%s)\n",
+ rte_intr_fd_get(intr_handle), strerror(errno));
return -1;
}
return 0;
@@ -482,9 +497,8 @@ rte_intr_callback_register(const struct rte_intr_handle *intr_handle,
wake_thread = 0;
/* first do parameter checking */
- if (intr_handle == NULL || intr_handle->fd < 0 || cb == NULL) {
- RTE_LOG(ERR, EAL,
- "Registering with invalid input parameter\n");
+ if (rte_intr_fd_get(intr_handle) < 0 || cb == NULL) {
+ RTE_LOG(ERR, EAL, "Registering with invalid input parameter\n");
return -EINVAL;
}
@@ -503,7 +517,7 @@ rte_intr_callback_register(const struct rte_intr_handle *intr_handle,
/* check if there is at least one callback registered for the fd */
TAILQ_FOREACH(src, &intr_sources, next) {
- if (src->intr_handle.fd == intr_handle->fd) {
+ if (rte_intr_fd_get(src->intr_handle) == rte_intr_fd_get(intr_handle)) {
/* we had no interrupts for this */
if (TAILQ_EMPTY(&src->callbacks))
wake_thread = 1;
@@ -519,15 +533,26 @@ rte_intr_callback_register(const struct rte_intr_handle *intr_handle,
src = calloc(1, sizeof(*src));
if (src == NULL) {
RTE_LOG(ERR, EAL, "Can not allocate memory\n");
- free(callback);
ret = -ENOMEM;
+ free(callback);
+ callback = NULL;
} else {
- src->intr_handle = *intr_handle;
- TAILQ_INIT(&src->callbacks);
- TAILQ_INSERT_TAIL(&(src->callbacks), callback, next);
- TAILQ_INSERT_TAIL(&intr_sources, src, next);
- wake_thread = 1;
- ret = 0;
+ src->intr_handle = rte_intr_instance_dup(intr_handle);
+ if (src->intr_handle == NULL) {
+ RTE_LOG(ERR, EAL, "Can not create intr instance\n");
+ ret = -ENOMEM;
+ free(callback);
+ callback = NULL;
+ free(src);
+ src = NULL;
+ } else {
+ TAILQ_INIT(&src->callbacks);
+ TAILQ_INSERT_TAIL(&(src->callbacks), callback,
+ next);
+ TAILQ_INSERT_TAIL(&intr_sources, src, next);
+ wake_thread = 1;
+ ret = 0;
+ }
}
}
@@ -555,18 +580,18 @@ rte_intr_callback_unregister_pending(const struct rte_intr_handle *intr_handle,
struct rte_intr_callback *cb, *next;
/* do parameter checking first */
- if (intr_handle == NULL || intr_handle->fd < 0) {
- RTE_LOG(ERR, EAL,
- "Unregistering with invalid input parameter\n");
+ if (rte_intr_fd_get(intr_handle) < 0) {
+ RTE_LOG(ERR, EAL, "Unregistering with invalid input parameter\n");
return -EINVAL;
}
rte_spinlock_lock(&intr_lock);
/* check if the insterrupt source for the fd is existent */
- TAILQ_FOREACH(src, &intr_sources, next)
- if (src->intr_handle.fd == intr_handle->fd)
+ TAILQ_FOREACH(src, &intr_sources, next) {
+ if (rte_intr_fd_get(src->intr_handle) == rte_intr_fd_get(intr_handle))
break;
+ }
/* No interrupt source registered for the fd */
if (src == NULL) {
@@ -605,9 +630,8 @@ rte_intr_callback_unregister(const struct rte_intr_handle *intr_handle,
struct rte_intr_callback *cb, *next;
/* do parameter checking first */
- if (intr_handle == NULL || intr_handle->fd < 0) {
- RTE_LOG(ERR, EAL,
- "Unregistering with invalid input parameter\n");
+ if (rte_intr_fd_get(intr_handle) < 0) {
+ RTE_LOG(ERR, EAL, "Unregistering with invalid input parameter\n");
return -EINVAL;
}
@@ -615,7 +639,7 @@ rte_intr_callback_unregister(const struct rte_intr_handle *intr_handle,
/* check if the insterrupt source for the fd is existent */
TAILQ_FOREACH(src, &intr_sources, next)
- if (src->intr_handle.fd == intr_handle->fd)
+ if (rte_intr_fd_get(src->intr_handle) == rte_intr_fd_get(intr_handle))
break;
/* No interrupt source registered for the fd */
@@ -646,6 +670,7 @@ rte_intr_callback_unregister(const struct rte_intr_handle *intr_handle,
/* all callbacks for that source are removed. */
if (TAILQ_EMPTY(&src->callbacks)) {
TAILQ_REMOVE(&intr_sources, src, next);
+ rte_intr_instance_free(src->intr_handle);
free(src);
}
}
@@ -677,22 +702,23 @@ rte_intr_callback_unregister_sync(const struct rte_intr_handle *intr_handle,
int
rte_intr_enable(const struct rte_intr_handle *intr_handle)
{
- int rc = 0;
+ int rc = 0, uio_cfg_fd;
if (intr_handle == NULL)
return -1;
- if (intr_handle->type == RTE_INTR_HANDLE_VDEV) {
+ if (rte_intr_type_get(intr_handle) == RTE_INTR_HANDLE_VDEV) {
rc = 0;
goto out;
}
- if (intr_handle->fd < 0 || intr_handle->uio_cfg_fd < 0) {
+ uio_cfg_fd = rte_intr_dev_fd_get(intr_handle);
+ if (rte_intr_fd_get(intr_handle) < 0 || uio_cfg_fd < 0) {
rc = -1;
goto out;
}
- switch (intr_handle->type){
+ switch (rte_intr_type_get(intr_handle)) {
/* write to the uio fd to enable the interrupt */
case RTE_INTR_HANDLE_UIO:
if (uio_intr_enable(intr_handle))
@@ -732,9 +758,8 @@ rte_intr_enable(const struct rte_intr_handle *intr_handle)
break;
/* unknown handle type */
default:
- RTE_LOG(ERR, EAL,
- "Unknown handle type of fd %d\n",
- intr_handle->fd);
+ RTE_LOG(ERR, EAL, "Unknown handle type of fd %d\n",
+ rte_intr_fd_get(intr_handle));
rc = -1;
break;
}
@@ -757,13 +782,16 @@ rte_intr_enable(const struct rte_intr_handle *intr_handle)
int
rte_intr_ack(const struct rte_intr_handle *intr_handle)
{
- if (intr_handle && intr_handle->type == RTE_INTR_HANDLE_VDEV)
+ int uio_cfg_fd;
+
+ if (rte_intr_type_get(intr_handle) == RTE_INTR_HANDLE_VDEV)
return 0;
- if (!intr_handle || intr_handle->fd < 0 || intr_handle->uio_cfg_fd < 0)
+ uio_cfg_fd = rte_intr_dev_fd_get(intr_handle);
+ if (rte_intr_fd_get(intr_handle) < 0 || uio_cfg_fd < 0)
return -1;
- switch (intr_handle->type) {
+ switch (rte_intr_type_get(intr_handle)) {
/* Both acking and enabling are same for UIO */
case RTE_INTR_HANDLE_UIO:
if (uio_intr_enable(intr_handle))
@@ -796,7 +824,7 @@ rte_intr_ack(const struct rte_intr_handle *intr_handle)
/* unknown handle type */
default:
RTE_LOG(ERR, EAL, "Unknown handle type of fd %d\n",
- intr_handle->fd);
+ rte_intr_fd_get(intr_handle));
return -1;
}
@@ -806,22 +834,23 @@ rte_intr_ack(const struct rte_intr_handle *intr_handle)
int
rte_intr_disable(const struct rte_intr_handle *intr_handle)
{
- int rc = 0;
+ int rc = 0, uio_cfg_fd;
if (intr_handle == NULL)
return -1;
- if (intr_handle->type == RTE_INTR_HANDLE_VDEV) {
+ if (rte_intr_type_get(intr_handle) == RTE_INTR_HANDLE_VDEV) {
rc = 0;
goto out;
}
- if (intr_handle->fd < 0 || intr_handle->uio_cfg_fd < 0) {
+ uio_cfg_fd = rte_intr_dev_fd_get(intr_handle);
+ if (rte_intr_fd_get(intr_handle) < 0 || uio_cfg_fd < 0) {
rc = -1;
goto out;
}
- switch (intr_handle->type){
+ switch (rte_intr_type_get(intr_handle)) {
/* write to the uio fd to disable the interrupt */
case RTE_INTR_HANDLE_UIO:
if (uio_intr_disable(intr_handle))
@@ -861,9 +890,8 @@ rte_intr_disable(const struct rte_intr_handle *intr_handle)
break;
/* unknown handle type */
default:
- RTE_LOG(ERR, EAL,
- "Unknown handle type of fd %d\n",
- intr_handle->fd);
+ RTE_LOG(ERR, EAL, "Unknown handle type of fd %d\n",
+ rte_intr_fd_get(intr_handle));
rc = -1;
break;
}
@@ -896,8 +924,7 @@ eal_intr_process_interrupts(struct epoll_event *events, int nfds)
}
rte_spinlock_lock(&intr_lock);
TAILQ_FOREACH(src, &intr_sources, next)
- if (src->intr_handle.fd ==
- events[n].data.fd)
+ if (rte_intr_fd_get(src->intr_handle) == events[n].data.fd)
break;
if (src == NULL){
rte_spinlock_unlock(&intr_lock);
@@ -909,7 +936,7 @@ eal_intr_process_interrupts(struct epoll_event *events, int nfds)
rte_spinlock_unlock(&intr_lock);
/* set the length to be read dor different handle type */
- switch (src->intr_handle.type) {
+ switch (rte_intr_type_get(src->intr_handle)) {
case RTE_INTR_HANDLE_UIO:
case RTE_INTR_HANDLE_UIO_INTX:
bytes_read = sizeof(buf.uio_intr_count);
@@ -973,6 +1000,7 @@ eal_intr_process_interrupts(struct epoll_event *events, int nfds)
TAILQ_REMOVE(&src->callbacks, cb, next);
free(cb);
}
+ rte_intr_instance_free(src->intr_handle);
free(src);
return -1;
} else if (bytes_read == 0)
@@ -1012,7 +1040,7 @@ eal_intr_process_interrupts(struct epoll_event *events, int nfds)
if (cb->pending_delete) {
TAILQ_REMOVE(&src->callbacks, cb, next);
if (cb->ucb_fn)
- cb->ucb_fn(&src->intr_handle, cb->cb_arg);
+ cb->ucb_fn(src->intr_handle, cb->cb_arg);
free(cb);
rv++;
}
@@ -1021,6 +1049,7 @@ eal_intr_process_interrupts(struct epoll_event *events, int nfds)
/* all callbacks for that source are removed. */
if (TAILQ_EMPTY(&src->callbacks)) {
TAILQ_REMOVE(&intr_sources, src, next);
+ rte_intr_instance_free(src->intr_handle);
free(src);
}
@@ -1123,16 +1152,17 @@ eal_intr_thread_main(__rte_unused void *arg)
continue; /* skip those with no callbacks */
memset(&ev, 0, sizeof(ev));
ev.events = EPOLLIN | EPOLLPRI | EPOLLRDHUP | EPOLLHUP;
- ev.data.fd = src->intr_handle.fd;
+ ev.data.fd = rte_intr_fd_get(src->intr_handle);
/**
* add all the uio device file descriptor
* into wait list.
*/
if (epoll_ctl(pfd, EPOLL_CTL_ADD,
- src->intr_handle.fd, &ev) < 0){
+ rte_intr_fd_get(src->intr_handle), &ev) < 0) {
rte_panic("Error adding fd %d epoll_ctl, %s\n",
- src->intr_handle.fd, strerror(errno));
+ rte_intr_fd_get(src->intr_handle),
+ strerror(errno));
}
else
numfds++;
@@ -1185,7 +1215,7 @@ eal_intr_proc_rxtx_intr(int fd, const struct rte_intr_handle *intr_handle)
int bytes_read = 0;
int nbytes;
- switch (intr_handle->type) {
+ switch (rte_intr_type_get(intr_handle)) {
case RTE_INTR_HANDLE_UIO:
case RTE_INTR_HANDLE_UIO_INTX:
bytes_read = sizeof(buf.uio_intr_count);
@@ -1198,7 +1228,7 @@ eal_intr_proc_rxtx_intr(int fd, const struct rte_intr_handle *intr_handle)
break;
#endif
case RTE_INTR_HANDLE_VDEV:
- bytes_read = intr_handle->efd_counter_size;
+ bytes_read = rte_intr_efd_counter_size_get(intr_handle);
/* For vdev, number of bytes to read is set by driver */
break;
case RTE_INTR_HANDLE_EXT:
@@ -1419,8 +1449,8 @@ rte_intr_rx_ctl(struct rte_intr_handle *intr_handle, int epfd,
efd_idx = (vec >= RTE_INTR_VEC_RXTX_OFFSET) ?
(vec - RTE_INTR_VEC_RXTX_OFFSET) : vec;
- if (!intr_handle || intr_handle->nb_efd == 0 ||
- efd_idx >= intr_handle->nb_efd) {
+ if (intr_handle == NULL || rte_intr_nb_efd_get(intr_handle) == 0 ||
+ efd_idx >= (unsigned int)rte_intr_nb_efd_get(intr_handle)) {
RTE_LOG(ERR, EAL, "Wrong intr vector number.\n");
return -EPERM;
}
@@ -1428,7 +1458,7 @@ rte_intr_rx_ctl(struct rte_intr_handle *intr_handle, int epfd,
switch (op) {
case RTE_INTR_EVENT_ADD:
epfd_op = EPOLL_CTL_ADD;
- rev = &intr_handle->elist[efd_idx];
+ rev = rte_intr_elist_index_get(intr_handle, efd_idx);
if (__atomic_load_n(&rev->status,
__ATOMIC_RELAXED) != RTE_EPOLL_INVALID) {
RTE_LOG(INFO, EAL, "Event already been added.\n");
@@ -1442,7 +1472,7 @@ rte_intr_rx_ctl(struct rte_intr_handle *intr_handle, int epfd,
epdata->cb_fun = (rte_intr_event_cb_t)eal_intr_proc_rxtx_intr;
epdata->cb_arg = (void *)intr_handle;
rc = rte_epoll_ctl(epfd, epfd_op,
- intr_handle->efds[efd_idx], rev);
+ rte_intr_efds_index_get(intr_handle, efd_idx), rev);
if (!rc)
RTE_LOG(DEBUG, EAL,
"efd %d associated with vec %d added on epfd %d"
@@ -1452,7 +1482,7 @@ rte_intr_rx_ctl(struct rte_intr_handle *intr_handle, int epfd,
break;
case RTE_INTR_EVENT_DEL:
epfd_op = EPOLL_CTL_DEL;
- rev = &intr_handle->elist[efd_idx];
+ rev = rte_intr_elist_index_get(intr_handle, efd_idx);
if (__atomic_load_n(&rev->status,
__ATOMIC_RELAXED) == RTE_EPOLL_INVALID) {
RTE_LOG(INFO, EAL, "Event does not exist.\n");
@@ -1477,8 +1507,8 @@ rte_intr_free_epoll_fd(struct rte_intr_handle *intr_handle)
uint32_t i;
struct rte_epoll_event *rev;
- for (i = 0; i < intr_handle->nb_efd; i++) {
- rev = &intr_handle->elist[i];
+ for (i = 0; i < (uint32_t)rte_intr_nb_efd_get(intr_handle); i++) {
+ rev = rte_intr_elist_index_get(intr_handle, i);
if (__atomic_load_n(&rev->status,
__ATOMIC_RELAXED) == RTE_EPOLL_INVALID)
continue;
@@ -1498,7 +1528,7 @@ rte_intr_efd_enable(struct rte_intr_handle *intr_handle, uint32_t nb_efd)
assert(nb_efd != 0);
- if (intr_handle->type == RTE_INTR_HANDLE_VFIO_MSIX) {
+ if (rte_intr_type_get(intr_handle) == RTE_INTR_HANDLE_VFIO_MSIX) {
for (i = 0; i < n; i++) {
fd = eventfd(0, EFD_NONBLOCK | EFD_CLOEXEC);
if (fd < 0) {
@@ -1507,21 +1537,30 @@ rte_intr_efd_enable(struct rte_intr_handle *intr_handle, uint32_t nb_efd)
errno, strerror(errno));
return -errno;
}
- intr_handle->efds[i] = fd;
+
+ if (rte_intr_efds_index_set(intr_handle, i, fd))
+ return -rte_errno;
}
- intr_handle->nb_efd = n;
- intr_handle->max_intr = NB_OTHER_INTR + n;
- } else if (intr_handle->type == RTE_INTR_HANDLE_VDEV) {
+
+ if (rte_intr_nb_efd_set(intr_handle, n))
+ return -rte_errno;
+
+ if (rte_intr_max_intr_set(intr_handle, NB_OTHER_INTR + n))
+ return -rte_errno;
+ } else if (rte_intr_type_get(intr_handle) == RTE_INTR_HANDLE_VDEV) {
/* only check, initialization would be done in vdev driver.*/
- if (intr_handle->efd_counter_size >
+ if ((uint64_t)rte_intr_efd_counter_size_get(intr_handle) >
sizeof(union rte_intr_read_buffer)) {
RTE_LOG(ERR, EAL, "the efd_counter_size is oversized");
return -EINVAL;
}
} else {
- intr_handle->efds[0] = intr_handle->fd;
- intr_handle->nb_efd = RTE_MIN(nb_efd, 1U);
- intr_handle->max_intr = NB_OTHER_INTR;
+ if (rte_intr_efds_index_set(intr_handle, 0, rte_intr_fd_get(intr_handle)))
+ return -rte_errno;
+ if (rte_intr_nb_efd_set(intr_handle, RTE_MIN(nb_efd, 1U)))
+ return -rte_errno;
+ if (rte_intr_max_intr_set(intr_handle, NB_OTHER_INTR))
+ return -rte_errno;
}
return 0;
@@ -1533,18 +1572,18 @@ rte_intr_efd_disable(struct rte_intr_handle *intr_handle)
uint32_t i;
rte_intr_free_epoll_fd(intr_handle);
- if (intr_handle->max_intr > intr_handle->nb_efd) {
- for (i = 0; i < intr_handle->nb_efd; i++)
- close(intr_handle->efds[i]);
+ if (rte_intr_max_intr_get(intr_handle) > rte_intr_nb_efd_get(intr_handle)) {
+ for (i = 0; i < (uint32_t)rte_intr_nb_efd_get(intr_handle); i++)
+ close(rte_intr_efds_index_get(intr_handle, i));
}
- intr_handle->nb_efd = 0;
- intr_handle->max_intr = 0;
+ rte_intr_nb_efd_set(intr_handle, 0);
+ rte_intr_max_intr_set(intr_handle, 0);
}
int
rte_intr_dp_is_en(struct rte_intr_handle *intr_handle)
{
- return !(!intr_handle->nb_efd);
+ return !(!rte_intr_nb_efd_get(intr_handle));
}
int
@@ -1553,16 +1592,17 @@ rte_intr_allow_others(struct rte_intr_handle *intr_handle)
if (!rte_intr_dp_is_en(intr_handle))
return 1;
else
- return !!(intr_handle->max_intr - intr_handle->nb_efd);
+ return !!(rte_intr_max_intr_get(intr_handle) -
+ rte_intr_nb_efd_get(intr_handle));
}
int
rte_intr_cap_multiple(struct rte_intr_handle *intr_handle)
{
- if (intr_handle->type == RTE_INTR_HANDLE_VFIO_MSIX)
+ if (rte_intr_type_get(intr_handle) == RTE_INTR_HANDLE_VFIO_MSIX)
return 1;
- if (intr_handle->type == RTE_INTR_HANDLE_VDEV)
+ if (rte_intr_type_get(intr_handle) == RTE_INTR_HANDLE_VDEV)
return 1;
return 0;
--
2.23.0
^ permalink raw reply [flat|nested] 152+ messages in thread
* [dpdk-dev] [PATCH v6 3/9] test/interrupts: remove direct access to interrupt handle
2021-10-24 20:04 ` [dpdk-dev] [PATCH v6 0/9] make rte_intr_handle internal David Marchand
2021-10-24 20:04 ` [dpdk-dev] [PATCH v6 1/9] interrupts: add allocator and accessors David Marchand
2021-10-24 20:04 ` [dpdk-dev] [PATCH v6 2/9] interrupts: remove direct access to interrupt handle David Marchand
@ 2021-10-24 20:04 ` David Marchand
2021-10-24 20:04 ` [dpdk-dev] [PATCH v6 4/9] alarm: " David Marchand
` (5 subsequent siblings)
8 siblings, 0 replies; 152+ messages in thread
From: David Marchand @ 2021-10-24 20:04 UTC (permalink / raw)
To: hkalra, dev; +Cc: dmitry.kozliuk
From: Harman Kalra <hkalra@marvell.com>
Updating the interrupt test suite to make use of the interrupt
handle get/set APIs.
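For reference, a minimal sketch (the helper name is illustrative, not from the patch) of the setup flow the test now follows: each handle is heap allocated with rte_intr_instance_alloc() and configured through setters instead of writing to the fields of a static array entry:

#include <rte_interrupts.h>

static struct rte_intr_handle *
make_test_handle(int fd, enum rte_intr_handle_type type)
{
	struct rte_intr_handle *h;

	h = rte_intr_instance_alloc(RTE_INTR_INSTANCE_F_PRIVATE);
	if (h == NULL)
		return NULL;
	/* configure via setters; free on any failure to avoid a leak */
	if (rte_intr_fd_set(h, fd) != 0 || rte_intr_type_set(h, type) != 0) {
		rte_intr_instance_free(h);
		return NULL;
	}
	return h;
}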
Signed-off-by: Harman Kalra <hkalra@marvell.com>
Signed-off-by: David Marchand <david.marchand@redhat.com>
---
Changes since v5:
- fixed leak when some interrupt handle can't be allocated,
---
app/test/test_interrupts.c | 164 ++++++++++++++++++++++---------------
1 file changed, 98 insertions(+), 66 deletions(-)
diff --git a/app/test/test_interrupts.c b/app/test/test_interrupts.c
index 233b14a70b..2a05399f96 100644
--- a/app/test/test_interrupts.c
+++ b/app/test/test_interrupts.c
@@ -16,7 +16,7 @@
/* predefined interrupt handle types */
enum test_interrupt_handle_type {
- TEST_INTERRUPT_HANDLE_INVALID,
+ TEST_INTERRUPT_HANDLE_INVALID = 0,
TEST_INTERRUPT_HANDLE_VALID,
TEST_INTERRUPT_HANDLE_VALID_UIO,
TEST_INTERRUPT_HANDLE_VALID_ALARM,
@@ -27,7 +27,7 @@ enum test_interrupt_handle_type {
/* flag of if callback is called */
static volatile int flag;
-static struct rte_intr_handle intr_handles[TEST_INTERRUPT_HANDLE_MAX];
+static struct rte_intr_handle *intr_handles[TEST_INTERRUPT_HANDLE_MAX];
static enum test_interrupt_handle_type test_intr_type =
TEST_INTERRUPT_HANDLE_MAX;
@@ -50,7 +50,7 @@ static union intr_pipefds pfds;
static inline int
test_interrupt_handle_sanity_check(struct rte_intr_handle *intr_handle)
{
- if (!intr_handle || intr_handle->fd < 0)
+ if (!intr_handle || rte_intr_fd_get(intr_handle) < 0)
return -1;
return 0;
@@ -62,31 +62,54 @@ test_interrupt_handle_sanity_check(struct rte_intr_handle *intr_handle)
static int
test_interrupt_init(void)
{
+ struct rte_intr_handle *test_intr_handle;
+ int i;
+
if (pipe(pfds.pipefd) < 0)
return -1;
- intr_handles[TEST_INTERRUPT_HANDLE_INVALID].fd = -1;
- intr_handles[TEST_INTERRUPT_HANDLE_INVALID].type =
- RTE_INTR_HANDLE_UNKNOWN;
+ for (i = 0; i < TEST_INTERRUPT_HANDLE_MAX; i++) {
+ intr_handles[i] =
+ rte_intr_instance_alloc(RTE_INTR_INSTANCE_F_PRIVATE);
+ if (!intr_handles[i])
+ return -1;
+ }
- intr_handles[TEST_INTERRUPT_HANDLE_VALID].fd = pfds.readfd;
- intr_handles[TEST_INTERRUPT_HANDLE_VALID].type =
- RTE_INTR_HANDLE_UNKNOWN;
+ test_intr_handle = intr_handles[TEST_INTERRUPT_HANDLE_INVALID];
+ if (rte_intr_fd_set(test_intr_handle, -1))
+ return -1;
+ if (rte_intr_type_set(test_intr_handle, RTE_INTR_HANDLE_UNKNOWN))
+ return -1;
- intr_handles[TEST_INTERRUPT_HANDLE_VALID_UIO].fd = pfds.readfd;
- intr_handles[TEST_INTERRUPT_HANDLE_VALID_UIO].type =
- RTE_INTR_HANDLE_UIO;
+ test_intr_handle = intr_handles[TEST_INTERRUPT_HANDLE_VALID];
+ if (rte_intr_fd_set(test_intr_handle, pfds.readfd))
+ return -1;
+ if (rte_intr_type_set(test_intr_handle, RTE_INTR_HANDLE_UNKNOWN))
+ return -1;
+
+ test_intr_handle = intr_handles[TEST_INTERRUPT_HANDLE_VALID_UIO];
+ if (rte_intr_fd_set(test_intr_handle, pfds.readfd))
+ return -1;
+ if (rte_intr_type_set(test_intr_handle, RTE_INTR_HANDLE_UIO))
+ return -1;
- intr_handles[TEST_INTERRUPT_HANDLE_VALID_ALARM].fd = pfds.readfd;
- intr_handles[TEST_INTERRUPT_HANDLE_VALID_ALARM].type =
- RTE_INTR_HANDLE_ALARM;
+ test_intr_handle = intr_handles[TEST_INTERRUPT_HANDLE_VALID_ALARM];
+ if (rte_intr_fd_set(test_intr_handle, pfds.readfd))
+ return -1;
+ if (rte_intr_type_set(test_intr_handle, RTE_INTR_HANDLE_ALARM))
+ return -1;
- intr_handles[TEST_INTERRUPT_HANDLE_VALID_DEV_EVENT].fd = pfds.readfd;
- intr_handles[TEST_INTERRUPT_HANDLE_VALID_DEV_EVENT].type =
- RTE_INTR_HANDLE_DEV_EVENT;
+ test_intr_handle = intr_handles[TEST_INTERRUPT_HANDLE_VALID_DEV_EVENT];
+ if (rte_intr_fd_set(test_intr_handle, pfds.readfd))
+ return -1;
+ if (rte_intr_type_set(test_intr_handle, RTE_INTR_HANDLE_DEV_EVENT))
+ return -1;
- intr_handles[TEST_INTERRUPT_HANDLE_CASE1].fd = pfds.writefd;
- intr_handles[TEST_INTERRUPT_HANDLE_CASE1].type = RTE_INTR_HANDLE_UIO;
+ test_intr_handle = intr_handles[TEST_INTERRUPT_HANDLE_CASE1];
+ if (rte_intr_fd_set(test_intr_handle, pfds.writefd))
+ return -1;
+ if (rte_intr_type_set(test_intr_handle, RTE_INTR_HANDLE_UIO))
+ return -1;
return 0;
}
@@ -97,6 +120,10 @@ test_interrupt_init(void)
static int
test_interrupt_deinit(void)
{
+ int i;
+
+ for (i = 0; i < TEST_INTERRUPT_HANDLE_MAX; i++)
+ rte_intr_instance_free(intr_handles[i]);
close(pfds.pipefd[0]);
close(pfds.pipefd[1]);
@@ -125,8 +152,10 @@ test_interrupt_handle_compare(struct rte_intr_handle *intr_handle_l,
if (!intr_handle_l || !intr_handle_r)
return -1;
- if (intr_handle_l->fd != intr_handle_r->fd ||
- intr_handle_l->type != intr_handle_r->type)
+ if (rte_intr_fd_get(intr_handle_l) !=
+ rte_intr_fd_get(intr_handle_r) ||
+ rte_intr_type_get(intr_handle_l) !=
+ rte_intr_type_get(intr_handle_r))
return -1;
return 0;
@@ -178,6 +207,8 @@ static void
test_interrupt_callback(void *arg)
{
struct rte_intr_handle *intr_handle = arg;
+ struct rte_intr_handle *test_intr_handle;
+
if (test_intr_type >= TEST_INTERRUPT_HANDLE_MAX) {
printf("invalid interrupt type\n");
flag = -1;
@@ -198,8 +229,8 @@ test_interrupt_callback(void *arg)
return;
}
- if (test_interrupt_handle_compare(intr_handle,
- &(intr_handles[test_intr_type])) == 0)
+ test_intr_handle = intr_handles[test_intr_type];
+ if (test_interrupt_handle_compare(intr_handle, test_intr_handle) == 0)
flag = 1;
}
@@ -223,7 +254,7 @@ test_interrupt_callback_1(void *arg)
static int
test_interrupt_enable(void)
{
- struct rte_intr_handle test_intr_handle;
+ struct rte_intr_handle *test_intr_handle;
/* check with null intr_handle */
if (rte_intr_enable(NULL) == 0) {
@@ -233,7 +264,7 @@ test_interrupt_enable(void)
/* check with invalid intr_handle */
test_intr_handle = intr_handles[TEST_INTERRUPT_HANDLE_INVALID];
- if (rte_intr_enable(&test_intr_handle) == 0) {
+ if (rte_intr_enable(test_intr_handle) == 0) {
printf("unexpectedly enable invalid intr_handle "
"successfully\n");
return -1;
@@ -241,7 +272,7 @@ test_interrupt_enable(void)
/* check with valid intr_handle */
test_intr_handle = intr_handles[TEST_INTERRUPT_HANDLE_VALID];
- if (rte_intr_enable(&test_intr_handle) == 0) {
+ if (rte_intr_enable(test_intr_handle) == 0) {
printf("unexpectedly enable a specific intr_handle "
"successfully\n");
return -1;
@@ -249,7 +280,7 @@ test_interrupt_enable(void)
/* check with specific valid intr_handle */
test_intr_handle = intr_handles[TEST_INTERRUPT_HANDLE_VALID_ALARM];
- if (rte_intr_enable(&test_intr_handle) == 0) {
+ if (rte_intr_enable(test_intr_handle) == 0) {
printf("unexpectedly enable a specific intr_handle "
"successfully\n");
return -1;
@@ -257,7 +288,7 @@ test_interrupt_enable(void)
/* check with specific valid intr_handle */
test_intr_handle = intr_handles[TEST_INTERRUPT_HANDLE_VALID_DEV_EVENT];
- if (rte_intr_enable(&test_intr_handle) == 0) {
+ if (rte_intr_enable(test_intr_handle) == 0) {
printf("unexpectedly enable a specific intr_handle "
"successfully\n");
return -1;
@@ -265,13 +296,13 @@ test_interrupt_enable(void)
/* check with valid handler and its type */
test_intr_handle = intr_handles[TEST_INTERRUPT_HANDLE_CASE1];
- if (rte_intr_enable(&test_intr_handle) < 0) {
+ if (rte_intr_enable(test_intr_handle) < 0) {
printf("fail to enable interrupt on a simulated handler\n");
return -1;
}
test_intr_handle = intr_handles[TEST_INTERRUPT_HANDLE_VALID_UIO];
- if (rte_intr_enable(&test_intr_handle) == 0) {
+ if (rte_intr_enable(test_intr_handle) == 0) {
printf("unexpectedly enable a specific intr_handle "
"successfully\n");
return -1;
@@ -286,7 +317,7 @@ test_interrupt_enable(void)
static int
test_interrupt_disable(void)
{
- struct rte_intr_handle test_intr_handle;
+ struct rte_intr_handle *test_intr_handle;
/* check with null intr_handle */
if (rte_intr_disable(NULL) == 0) {
@@ -297,7 +328,7 @@ test_interrupt_disable(void)
/* check with invalid intr_handle */
test_intr_handle = intr_handles[TEST_INTERRUPT_HANDLE_INVALID];
- if (rte_intr_disable(&test_intr_handle) == 0) {
+ if (rte_intr_disable(test_intr_handle) == 0) {
printf("unexpectedly disable invalid intr_handle "
"successfully\n");
return -1;
@@ -305,7 +336,7 @@ test_interrupt_disable(void)
/* check with valid intr_handle */
test_intr_handle = intr_handles[TEST_INTERRUPT_HANDLE_VALID];
- if (rte_intr_disable(&test_intr_handle) == 0) {
+ if (rte_intr_disable(test_intr_handle) == 0) {
printf("unexpectedly disable a specific intr_handle "
"successfully\n");
return -1;
@@ -313,7 +344,7 @@ test_interrupt_disable(void)
/* check with specific valid intr_handle */
test_intr_handle = intr_handles[TEST_INTERRUPT_HANDLE_VALID_ALARM];
- if (rte_intr_disable(&test_intr_handle) == 0) {
+ if (rte_intr_disable(test_intr_handle) == 0) {
printf("unexpectedly disable a specific intr_handle "
"successfully\n");
return -1;
@@ -321,7 +352,7 @@ test_interrupt_disable(void)
/* check with specific valid intr_handle */
test_intr_handle = intr_handles[TEST_INTERRUPT_HANDLE_VALID_DEV_EVENT];
- if (rte_intr_disable(&test_intr_handle) == 0) {
+ if (rte_intr_disable(test_intr_handle) == 0) {
printf("unexpectedly disable a specific intr_handle "
"successfully\n");
return -1;
@@ -329,13 +360,13 @@ test_interrupt_disable(void)
/* check with valid handler and its type */
test_intr_handle = intr_handles[TEST_INTERRUPT_HANDLE_CASE1];
- if (rte_intr_disable(&test_intr_handle) < 0) {
+ if (rte_intr_disable(test_intr_handle) < 0) {
printf("fail to disable interrupt on a simulated handler\n");
return -1;
}
test_intr_handle = intr_handles[TEST_INTERRUPT_HANDLE_VALID_UIO];
- if (rte_intr_disable(&test_intr_handle) == 0) {
+ if (rte_intr_disable(test_intr_handle) == 0) {
printf("unexpectedly disable a specific intr_handle "
"successfully\n");
return -1;
@@ -351,13 +382,13 @@ static int
test_interrupt_full_path_check(enum test_interrupt_handle_type intr_type)
{
int count;
- struct rte_intr_handle test_intr_handle;
+ struct rte_intr_handle *test_intr_handle;
flag = 0;
test_intr_handle = intr_handles[intr_type];
test_intr_type = intr_type;
- if (rte_intr_callback_register(&test_intr_handle,
- test_interrupt_callback, &test_intr_handle) < 0) {
+ if (rte_intr_callback_register(test_intr_handle,
+ test_interrupt_callback, test_intr_handle) < 0) {
printf("fail to register callback\n");
return -1;
}
@@ -371,9 +402,9 @@ test_interrupt_full_path_check(enum test_interrupt_handle_type intr_type)
rte_delay_ms(TEST_INTERRUPT_CHECK_INTERVAL);
while ((count =
- rte_intr_callback_unregister(&test_intr_handle,
+ rte_intr_callback_unregister(test_intr_handle,
test_interrupt_callback,
- &test_intr_handle)) < 0) {
+ test_intr_handle)) < 0) {
if (count != -EAGAIN)
return -1;
}
@@ -396,11 +427,11 @@ static int
test_interrupt(void)
{
int ret = -1;
- struct rte_intr_handle test_intr_handle;
+ struct rte_intr_handle *test_intr_handle;
if (test_interrupt_init() < 0) {
printf("fail to initialize for testing interrupt\n");
- return -1;
+ goto out;
}
printf("Check unknown valid interrupt full path\n");
@@ -445,8 +476,8 @@ test_interrupt(void)
/* check if it will fail to register cb with invalid intr_handle */
test_intr_handle = intr_handles[TEST_INTERRUPT_HANDLE_INVALID];
- if (rte_intr_callback_register(&test_intr_handle,
- test_interrupt_callback, &test_intr_handle) == 0) {
+ if (rte_intr_callback_register(test_intr_handle,
+ test_interrupt_callback, test_intr_handle) == 0) {
printf("unexpectedly register successfully with invalid "
"intr_handle\n");
goto out;
@@ -454,7 +485,8 @@ test_interrupt(void)
/* check if it will fail to register without callback */
test_intr_handle = intr_handles[TEST_INTERRUPT_HANDLE_VALID];
- if (rte_intr_callback_register(&test_intr_handle, NULL, &test_intr_handle) == 0) {
+ if (rte_intr_callback_register(test_intr_handle, NULL,
+ test_intr_handle) == 0) {
printf("unexpectedly register successfully with "
"null callback\n");
goto out;
@@ -470,8 +502,8 @@ test_interrupt(void)
/* check if it will fail to unregister cb with invalid intr_handle */
test_intr_handle = intr_handles[TEST_INTERRUPT_HANDLE_INVALID];
- if (rte_intr_callback_unregister(&test_intr_handle,
- test_interrupt_callback, &test_intr_handle) > 0) {
+ if (rte_intr_callback_unregister(test_intr_handle,
+ test_interrupt_callback, test_intr_handle) > 0) {
printf("unexpectedly unregister successfully with "
"invalid intr_handle\n");
goto out;
@@ -479,29 +511,29 @@ test_interrupt(void)
/* check if it is ok to register the same intr_handle twice */
test_intr_handle = intr_handles[TEST_INTERRUPT_HANDLE_VALID];
- if (rte_intr_callback_register(&test_intr_handle,
- test_interrupt_callback, &test_intr_handle) < 0) {
+ if (rte_intr_callback_register(test_intr_handle,
+ test_interrupt_callback, test_intr_handle) < 0) {
printf("it fails to register test_interrupt_callback\n");
goto out;
}
- if (rte_intr_callback_register(&test_intr_handle,
- test_interrupt_callback_1, &test_intr_handle) < 0) {
+ if (rte_intr_callback_register(test_intr_handle,
+ test_interrupt_callback_1, test_intr_handle) < 0) {
printf("it fails to register test_interrupt_callback_1\n");
goto out;
}
/* check if it will fail to unregister with invalid parameter */
- if (rte_intr_callback_unregister(&test_intr_handle,
+ if (rte_intr_callback_unregister(test_intr_handle,
test_interrupt_callback, (void *)0xff) != 0) {
printf("unexpectedly unregisters successfully with "
"invalid arg\n");
goto out;
}
- if (rte_intr_callback_unregister(&test_intr_handle,
- test_interrupt_callback, &test_intr_handle) <= 0) {
+ if (rte_intr_callback_unregister(test_intr_handle,
+ test_interrupt_callback, test_intr_handle) <= 0) {
printf("it fails to unregister test_interrupt_callback\n");
goto out;
}
- if (rte_intr_callback_unregister(&test_intr_handle,
+ if (rte_intr_callback_unregister(test_intr_handle,
test_interrupt_callback_1, (void *)-1) <= 0) {
printf("it fails to unregister test_interrupt_callback_1 "
"for all\n");
@@ -529,27 +561,27 @@ test_interrupt(void)
printf("Clearing for interrupt tests\n");
/* clear registered callbacks */
test_intr_handle = intr_handles[TEST_INTERRUPT_HANDLE_VALID];
- rte_intr_callback_unregister(&test_intr_handle,
+ rte_intr_callback_unregister(test_intr_handle,
test_interrupt_callback, (void *)-1);
- rte_intr_callback_unregister(&test_intr_handle,
+ rte_intr_callback_unregister(test_intr_handle,
test_interrupt_callback_1, (void *)-1);
test_intr_handle = intr_handles[TEST_INTERRUPT_HANDLE_VALID_UIO];
- rte_intr_callback_unregister(&test_intr_handle,
+ rte_intr_callback_unregister(test_intr_handle,
test_interrupt_callback, (void *)-1);
- rte_intr_callback_unregister(&test_intr_handle,
+ rte_intr_callback_unregister(test_intr_handle,
test_interrupt_callback_1, (void *)-1);
test_intr_handle = intr_handles[TEST_INTERRUPT_HANDLE_VALID_ALARM];
- rte_intr_callback_unregister(&test_intr_handle,
+ rte_intr_callback_unregister(test_intr_handle,
test_interrupt_callback, (void *)-1);
- rte_intr_callback_unregister(&test_intr_handle,
+ rte_intr_callback_unregister(test_intr_handle,
test_interrupt_callback_1, (void *)-1);
test_intr_handle = intr_handles[TEST_INTERRUPT_HANDLE_VALID_DEV_EVENT];
- rte_intr_callback_unregister(&test_intr_handle,
+ rte_intr_callback_unregister(test_intr_handle,
test_interrupt_callback, (void *)-1);
- rte_intr_callback_unregister(&test_intr_handle,
+ rte_intr_callback_unregister(test_intr_handle,
test_interrupt_callback_1, (void *)-1);
rte_delay_ms(2 * TEST_INTERRUPT_CHECK_INTERVAL);
--
2.23.0
^ permalink raw reply [flat|nested] 152+ messages in thread
* [dpdk-dev] [PATCH v6 4/9] alarm: remove direct access to interrupt handle
2021-10-24 20:04 ` [dpdk-dev] [PATCH v6 0/9] make rte_intr_handle internal David Marchand
` (2 preceding siblings ...)
2021-10-24 20:04 ` [dpdk-dev] [PATCH v6 3/9] test/interrupts: " David Marchand
@ 2021-10-24 20:04 ` David Marchand
2021-10-25 10:49 ` Dmitry Kozlyuk
2021-10-24 20:04 ` [dpdk-dev] [PATCH v6 5/9] lib: " David Marchand
` (4 subsequent siblings)
8 siblings, 1 reply; 152+ messages in thread
From: David Marchand @ 2021-10-24 20:04 UTC (permalink / raw)
To: hkalra, dev; +Cc: dmitry.kozliuk, Bruce Richardson
From: Harman Kalra <hkalra@marvell.com>
Removing direct access to interrupt handle structure fields; the
respective get/set APIs are used instead.
Making changes to all the libraries that access the interrupt handle fields.
Implementing an alarm cleanup routine so that the memory allocated
for the interrupt instance can be freed.
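A hedged sketch of the resulting lifetime (names are illustrative): the alarm code now owns a heap-allocated instance created at init time and released by the new cleanup hook invoked from rte_eal_cleanup():

#include <rte_interrupts.h>

static struct rte_intr_handle *alarm_intr_handle;

static int
alarm_init_sketch(void)
{
	alarm_intr_handle = rte_intr_instance_alloc(RTE_INTR_INSTANCE_F_PRIVATE);
	if (alarm_intr_handle == NULL)
		return -1;
	if (rte_intr_type_set(alarm_intr_handle, RTE_INTR_HANDLE_ALARM) != 0) {
		rte_intr_instance_free(alarm_intr_handle);
		alarm_intr_handle = NULL;
		return -1;
	}
	return 0;
}

static void
alarm_cleanup_sketch(void)
{
	/* counterpart of init: free what was allocated there */
	rte_intr_instance_free(alarm_intr_handle);
	alarm_intr_handle = NULL;
}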
Signed-off-by: Harman Kalra <hkalra@marvell.com>
Signed-off-by: David Marchand <david.marchand@redhat.com>
---
Changes since v5:
- split from patch4,
- merged patch6,
- renamed rte_eal_alarm_fini as rte_eal_alarm_cleanup,
---
lib/eal/common/eal_private.h | 10 ++++++++
lib/eal/freebsd/eal.c | 1 +
lib/eal/freebsd/eal_alarm.c | 44 +++++++++++++++++++++++++++++++-----
lib/eal/linux/eal.c | 1 +
lib/eal/linux/eal_alarm.c | 32 ++++++++++++++++++++------
5 files changed, 75 insertions(+), 13 deletions(-)
diff --git a/lib/eal/common/eal_private.h b/lib/eal/common/eal_private.h
index 86dab1f057..36bcc0b5a4 100644
--- a/lib/eal/common/eal_private.h
+++ b/lib/eal/common/eal_private.h
@@ -163,6 +163,16 @@ int rte_eal_intr_init(void);
*/
int rte_eal_alarm_init(void);
+/**
+ * Alarm mechanism cleanup.
+ *
+ * This function is private to EAL.
+ *
+ * Free the interrupt instance allocated by rte_eal_alarm_init();
+ * this function returns no value.
+ */
+void rte_eal_alarm_cleanup(void);
+
/**
* Function is to check if the kernel module(like, vfio, vfio_iommu_type1,
* etc.) loaded.
diff --git a/lib/eal/freebsd/eal.c b/lib/eal/freebsd/eal.c
index 56a60f13e9..9935356ed4 100644
--- a/lib/eal/freebsd/eal.c
+++ b/lib/eal/freebsd/eal.c
@@ -975,6 +975,7 @@ rte_eal_cleanup(void)
rte_mp_channel_cleanup();
/* after this point, any DPDK pointers will become dangling */
rte_eal_memory_detach();
+ rte_eal_alarm_cleanup();
rte_trace_save();
eal_trace_fini();
eal_cleanup_config(internal_conf);
diff --git a/lib/eal/freebsd/eal_alarm.c b/lib/eal/freebsd/eal_alarm.c
index c38b2e04f8..1a8fcf24c5 100644
--- a/lib/eal/freebsd/eal_alarm.c
+++ b/lib/eal/freebsd/eal_alarm.c
@@ -32,7 +32,7 @@
struct alarm_entry {
LIST_ENTRY(alarm_entry) next;
- struct rte_intr_handle handle;
+ struct rte_intr_handle *handle;
struct timespec time;
rte_eal_alarm_callback cb_fn;
void *cb_arg;
@@ -43,22 +43,46 @@ struct alarm_entry {
static LIST_HEAD(alarm_list, alarm_entry) alarm_list = LIST_HEAD_INITIALIZER();
static rte_spinlock_t alarm_list_lk = RTE_SPINLOCK_INITIALIZER;
-static struct rte_intr_handle intr_handle = {.fd = -1 };
+static struct rte_intr_handle *intr_handle;
static void eal_alarm_callback(void *arg);
+void
+rte_eal_alarm_cleanup(void)
+{
+ rte_intr_instance_free(intr_handle);
+}
+
int
rte_eal_alarm_init(void)
{
- intr_handle.type = RTE_INTR_HANDLE_ALARM;
+ int fd;
+
+ intr_handle = rte_intr_instance_alloc(RTE_INTR_INSTANCE_F_PRIVATE);
+ if (intr_handle == NULL) {
+ RTE_LOG(ERR, EAL, "Fail to allocate intr_handle\n");
+ goto error;
+ }
+
+ if (rte_intr_type_set(intr_handle, RTE_INTR_HANDLE_ALARM))
+ goto error;
+
+ if (rte_intr_fd_set(intr_handle, -1))
+ goto error;
/* on FreeBSD, timers don't use fd's, and their identifiers are stored
* in separate namespace from fd's, so using any value is OK. however,
* EAL interrupts handler expects fd's to be unique, so use an actual fd
* to guarantee unique timer identifier.
*/
- intr_handle.fd = open("/dev/zero", O_RDONLY);
+ fd = open("/dev/zero", O_RDONLY);
+
+ if (rte_intr_fd_set(intr_handle, fd))
+ goto error;
return 0;
+error:
+ rte_intr_instance_free(intr_handle);
+ return -1;
}
static inline int
@@ -118,7 +142,7 @@ unregister_current_callback(void)
ap = LIST_FIRST(&alarm_list);
do {
- ret = rte_intr_callback_unregister(&intr_handle,
+ ret = rte_intr_callback_unregister(intr_handle,
eal_alarm_callback, &ap->time);
} while (ret == -EAGAIN);
}
@@ -136,7 +160,7 @@ register_first_callback(void)
ap = LIST_FIRST(&alarm_list);
/* register a new callback */
- ret = rte_intr_callback_register(&intr_handle,
+ ret = rte_intr_callback_register(intr_handle,
eal_alarm_callback, &ap->time);
}
return ret;
@@ -164,6 +188,7 @@ eal_alarm_callback(void *arg __rte_unused)
rte_spinlock_lock(&alarm_list_lk);
LIST_REMOVE(ap, next);
+ rte_intr_instance_free(ap->handle);
free(ap);
ap = LIST_FIRST(&alarm_list);
@@ -202,6 +227,11 @@ rte_eal_alarm_set(uint64_t us, rte_eal_alarm_callback cb_fn, void *cb_arg)
new_alarm->time.tv_nsec = (now.tv_nsec + ns) % NS_PER_S;
new_alarm->time.tv_sec = now.tv_sec + ((now.tv_nsec + ns) / NS_PER_S);
+ new_alarm->handle =
+ rte_intr_instance_alloc(RTE_INTR_INSTANCE_F_PRIVATE);
+ if (new_alarm->handle == NULL)
+ return -ENOMEM;
+
rte_spinlock_lock(&alarm_list_lk);
if (LIST_EMPTY(&alarm_list))
@@ -255,6 +285,7 @@ rte_eal_alarm_cancel(rte_eal_alarm_callback cb_fn, void *cb_arg)
break;
if (ap->executing == 0) {
LIST_REMOVE(ap, next);
+ rte_intr_instance_free(ap->handle);
free(ap);
count++;
} else {
@@ -282,6 +313,7 @@ rte_eal_alarm_cancel(rte_eal_alarm_callback cb_fn, void *cb_arg)
cb_arg == ap->cb_arg)) {
if (ap->executing == 0) {
LIST_REMOVE(ap, next);
+ rte_intr_instance_free(ap->handle);
free(ap);
count++;
ap = ap_prev;
diff --git a/lib/eal/linux/eal.c b/lib/eal/linux/eal.c
index 0d0fc66668..81fdebc6a0 100644
--- a/lib/eal/linux/eal.c
+++ b/lib/eal/linux/eal.c
@@ -1368,6 +1368,7 @@ rte_eal_cleanup(void)
rte_mp_channel_cleanup();
/* after this point, any DPDK pointers will become dangling */
rte_eal_memory_detach();
+ rte_eal_alarm_cleanup();
rte_trace_save();
eal_trace_fini();
eal_cleanup_config(internal_conf);
diff --git a/lib/eal/linux/eal_alarm.c b/lib/eal/linux/eal_alarm.c
index 3252c6fa59..3b5e894595 100644
--- a/lib/eal/linux/eal_alarm.c
+++ b/lib/eal/linux/eal_alarm.c
@@ -54,22 +54,40 @@ struct alarm_entry {
static LIST_HEAD(alarm_list, alarm_entry) alarm_list = LIST_HEAD_INITIALIZER();
static rte_spinlock_t alarm_list_lk = RTE_SPINLOCK_INITIALIZER;
-static struct rte_intr_handle intr_handle = {.fd = -1 };
+static struct rte_intr_handle *intr_handle;
static int handler_registered = 0;
static void eal_alarm_callback(void *arg);
+void
+rte_eal_alarm_cleanup(void)
+{
+ rte_intr_instance_free(intr_handle);
+}
+
int
rte_eal_alarm_init(void)
{
- intr_handle.type = RTE_INTR_HANDLE_ALARM;
+
+ intr_handle = rte_intr_instance_alloc(RTE_INTR_INSTANCE_F_PRIVATE);
+ if (intr_handle == NULL) {
+ RTE_LOG(ERR, EAL, "Fail to allocate intr_handle\n");
+ goto error;
+ }
+
+ if (rte_intr_type_set(intr_handle, RTE_INTR_HANDLE_ALARM))
+ goto error;
+
/* create a timerfd file descriptor */
- intr_handle.fd = timerfd_create(CLOCK_MONOTONIC, TFD_NONBLOCK);
- if (intr_handle.fd == -1)
+ if (rte_intr_fd_set(intr_handle,
+ timerfd_create(CLOCK_MONOTONIC, TFD_NONBLOCK)))
goto error;
+ if (rte_intr_fd_get(intr_handle) == -1)
+ goto error;
return 0;
error:
+ rte_intr_instance_free(intr_handle);
rte_errno = errno;
return -1;
}
@@ -109,7 +127,7 @@ eal_alarm_callback(void *arg __rte_unused)
atime.it_value.tv_sec -= now.tv_sec;
atime.it_value.tv_nsec -= now.tv_nsec;
- timerfd_settime(intr_handle.fd, 0, &atime, NULL);
+ timerfd_settime(rte_intr_fd_get(intr_handle), 0, &atime, NULL);
}
rte_spinlock_unlock(&alarm_list_lk);
}
@@ -140,7 +158,7 @@ rte_eal_alarm_set(uint64_t us, rte_eal_alarm_callback cb_fn, void *cb_arg)
rte_spinlock_lock(&alarm_list_lk);
if (!handler_registered) {
/* registration can fail, callback can be registered later */
- if (rte_intr_callback_register(&intr_handle,
+ if (rte_intr_callback_register(intr_handle,
eal_alarm_callback, NULL) == 0)
handler_registered = 1;
}
@@ -170,7 +188,7 @@ rte_eal_alarm_set(uint64_t us, rte_eal_alarm_callback cb_fn, void *cb_arg)
.tv_nsec = (us % US_PER_S) * NS_PER_US,
},
};
- ret |= timerfd_settime(intr_handle.fd, 0, &alarm_time, NULL);
+ ret |= timerfd_settime(rte_intr_fd_get(intr_handle), 0, &alarm_time, NULL);
}
rte_spinlock_unlock(&alarm_list_lk);
--
2.23.0
* [dpdk-dev] [PATCH v6 5/9] lib: remove direct access to interrupt handle
2021-10-24 20:04 ` [dpdk-dev] [PATCH v6 0/9] make rte_intr_handle internal David Marchand
` (3 preceding siblings ...)
2021-10-24 20:04 ` [dpdk-dev] [PATCH v6 4/9] alarm: " David Marchand
@ 2021-10-24 20:04 ` David Marchand
2021-10-24 20:04 ` [dpdk-dev] [PATCH v6 6/9] drivers: " David Marchand
` (3 subsequent siblings)
8 siblings, 0 replies; 152+ messages in thread
From: David Marchand @ 2021-10-24 20:04 UTC (permalink / raw)
To: hkalra, dev
Cc: dmitry.kozliuk, Nicolas Chautru, Thomas Monjalon, Ferruh Yigit,
Andrew Rybchenko
From: Harman Kalra <hkalra@marvell.com>
Removing direct access to the interrupt handle structure fields; the respective
get/set APIs are used instead.
All libraries that currently access the interrupt handle fields directly are
updated accordingly.
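For illustration only (not part of the patch), a minimal sketch of the
conversion pattern applied below; intr_handle, qid, epfd, op and data are the
variables from the rte_ethdev.c hunk further down:

    /* Before: poking into struct rte_intr_handle directly. */
    if (intr_handle->intr_vec == NULL)
            return -EPERM;
    vec = intr_handle->intr_vec[qid];
    rc = rte_intr_rx_ctl(intr_handle, epfd, op, vec, data);

    /* After: the structure is opaque, fields are read via accessors. */
    if (rte_intr_vec_list_index_get(intr_handle, 0) < 0)
            return -EPERM;
    vec = rte_intr_vec_list_index_get(intr_handle, qid);
    rc = rte_intr_rx_ctl(intr_handle, epfd, op, vec, data);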
Signed-off-by: Harman Kalra <hkalra@marvell.com>
Signed-off-by: David Marchand <david.marchand@redhat.com>
---
Changes since v5:
- split from patch4,
---
lib/bbdev/rte_bbdev.c | 4 +--
lib/eal/linux/eal_dev.c | 57 ++++++++++++++++++++++++-----------------
lib/ethdev/rte_ethdev.c | 14 +++++-----
3 files changed, 43 insertions(+), 32 deletions(-)
diff --git a/lib/bbdev/rte_bbdev.c b/lib/bbdev/rte_bbdev.c
index defddcfc28..b86c5fdcc0 100644
--- a/lib/bbdev/rte_bbdev.c
+++ b/lib/bbdev/rte_bbdev.c
@@ -1094,7 +1094,7 @@ rte_bbdev_queue_intr_ctl(uint16_t dev_id, uint16_t queue_id, int epfd, int op,
VALID_QUEUE_OR_RET_ERR(queue_id, dev);
intr_handle = dev->intr_handle;
- if (!intr_handle || !intr_handle->intr_vec) {
+ if (intr_handle == NULL) {
rte_bbdev_log(ERR, "Device %u intr handle unset\n", dev_id);
return -ENOTSUP;
}
@@ -1105,7 +1105,7 @@ rte_bbdev_queue_intr_ctl(uint16_t dev_id, uint16_t queue_id, int epfd, int op,
return -ENOTSUP;
}
- vec = intr_handle->intr_vec[queue_id];
+ vec = rte_intr_vec_list_index_get(intr_handle, queue_id);
ret = rte_intr_rx_ctl(intr_handle, epfd, op, vec, data);
if (ret && (ret != -EEXIST)) {
rte_bbdev_log(ERR,
diff --git a/lib/eal/linux/eal_dev.c b/lib/eal/linux/eal_dev.c
index 3b905e18f5..06820a3666 100644
--- a/lib/eal/linux/eal_dev.c
+++ b/lib/eal/linux/eal_dev.c
@@ -23,10 +23,7 @@
#include "eal_private.h"
-static struct rte_intr_handle intr_handle = {
- .type = RTE_INTR_HANDLE_DEV_EVENT,
- .fd = -1,
-};
+static struct rte_intr_handle *intr_handle;
static rte_rwlock_t monitor_lock = RTE_RWLOCK_INITIALIZER;
static uint32_t monitor_refcount;
static bool hotplug_handle;
@@ -109,12 +106,11 @@ static int
dev_uev_socket_fd_create(void)
{
struct sockaddr_nl addr;
- int ret;
+ int ret, fd;
- intr_handle.fd = socket(PF_NETLINK, SOCK_RAW | SOCK_CLOEXEC |
- SOCK_NONBLOCK,
- NETLINK_KOBJECT_UEVENT);
- if (intr_handle.fd < 0) {
+ fd = socket(PF_NETLINK, SOCK_RAW | SOCK_CLOEXEC | SOCK_NONBLOCK,
+ NETLINK_KOBJECT_UEVENT);
+ if (fd < 0) {
RTE_LOG(ERR, EAL, "create uevent fd failed.\n");
return -1;
}
@@ -124,16 +120,19 @@ dev_uev_socket_fd_create(void)
addr.nl_pid = 0;
addr.nl_groups = 0xffffffff;
- ret = bind(intr_handle.fd, (struct sockaddr *) &addr, sizeof(addr));
+ ret = bind(fd, (struct sockaddr *) &addr, sizeof(addr));
if (ret < 0) {
RTE_LOG(ERR, EAL, "Failed to bind uevent socket.\n");
goto err;
}
+ if (rte_intr_fd_set(intr_handle, fd))
+ goto err;
+
return 0;
err:
- close(intr_handle.fd);
- intr_handle.fd = -1;
+ close(fd);
+ fd = -1;
return ret;
}
@@ -217,9 +216,9 @@ dev_uev_parse(const char *buf, struct rte_dev_event *event, int length)
static void
dev_delayed_unregister(void *param)
{
- rte_intr_callback_unregister(&intr_handle, dev_uev_handler, param);
- close(intr_handle.fd);
- intr_handle.fd = -1;
+ rte_intr_callback_unregister(intr_handle, dev_uev_handler, param);
+ close(rte_intr_fd_get(intr_handle));
+ rte_intr_fd_set(intr_handle, -1);
}
static void
@@ -235,7 +234,8 @@ dev_uev_handler(__rte_unused void *param)
memset(&uevent, 0, sizeof(struct rte_dev_event));
memset(buf, 0, EAL_UEV_MSG_LEN);
- ret = recv(intr_handle.fd, buf, EAL_UEV_MSG_LEN, MSG_DONTWAIT);
+ ret = recv(rte_intr_fd_get(intr_handle), buf, EAL_UEV_MSG_LEN,
+ MSG_DONTWAIT);
if (ret < 0 && errno == EAGAIN)
return;
else if (ret <= 0) {
@@ -311,24 +311,35 @@ rte_dev_event_monitor_start(void)
goto exit;
}
+ intr_handle = rte_intr_instance_alloc(RTE_INTR_INSTANCE_F_PRIVATE);
+ if (intr_handle == NULL) {
+ RTE_LOG(ERR, EAL, "Fail to allocate intr_handle\n");
+ goto exit;
+ }
+
+ if (rte_intr_type_set(intr_handle, RTE_INTR_HANDLE_DEV_EVENT))
+ goto exit;
+
+ if (rte_intr_fd_set(intr_handle, -1))
+ goto exit;
+
ret = dev_uev_socket_fd_create();
if (ret) {
RTE_LOG(ERR, EAL, "error create device event fd.\n");
goto exit;
}
- ret = rte_intr_callback_register(&intr_handle, dev_uev_handler, NULL);
+ ret = rte_intr_callback_register(intr_handle, dev_uev_handler, NULL);
if (ret) {
- RTE_LOG(ERR, EAL, "fail to register uevent callback.\n");
- close(intr_handle.fd);
- intr_handle.fd = -1;
+ close(rte_intr_fd_get(intr_handle));
goto exit;
}
monitor_refcount++;
exit:
+ rte_intr_instance_free(intr_handle);
rte_rwlock_write_unlock(&monitor_lock);
return ret;
}
@@ -350,15 +361,15 @@ rte_dev_event_monitor_stop(void)
goto exit;
}
- ret = rte_intr_callback_unregister(&intr_handle, dev_uev_handler,
+ ret = rte_intr_callback_unregister(intr_handle, dev_uev_handler,
(void *)-1);
if (ret < 0) {
RTE_LOG(ERR, EAL, "fail to unregister uevent callback.\n");
goto exit;
}
- close(intr_handle.fd);
- intr_handle.fd = -1;
+ close(rte_intr_fd_get(intr_handle));
+ rte_intr_instance_free(intr_handle);
monitor_refcount--;
diff --git a/lib/ethdev/rte_ethdev.c b/lib/ethdev/rte_ethdev.c
index 74de29c2e0..7db84b12d0 100644
--- a/lib/ethdev/rte_ethdev.c
+++ b/lib/ethdev/rte_ethdev.c
@@ -4819,13 +4819,13 @@ rte_eth_dev_rx_intr_ctl(uint16_t port_id, int epfd, int op, void *data)
}
intr_handle = dev->intr_handle;
- if (!intr_handle->intr_vec) {
+ if (rte_intr_vec_list_index_get(intr_handle, 0) < 0) {
RTE_ETHDEV_LOG(ERR, "Rx Intr vector unset\n");
return -EPERM;
}
for (qid = 0; qid < dev->data->nb_rx_queues; qid++) {
- vec = intr_handle->intr_vec[qid];
+ vec = rte_intr_vec_list_index_get(intr_handle, qid);
rc = rte_intr_rx_ctl(intr_handle, epfd, op, vec, data);
if (rc && rc != -EEXIST) {
RTE_ETHDEV_LOG(ERR,
@@ -4860,15 +4860,15 @@ rte_eth_dev_rx_intr_ctl_q_get_fd(uint16_t port_id, uint16_t queue_id)
}
intr_handle = dev->intr_handle;
- if (!intr_handle->intr_vec) {
+ if (rte_intr_vec_list_index_get(intr_handle, 0) < 0) {
RTE_ETHDEV_LOG(ERR, "Rx Intr vector unset\n");
return -1;
}
- vec = intr_handle->intr_vec[queue_id];
+ vec = rte_intr_vec_list_index_get(intr_handle, queue_id);
efd_idx = (vec >= RTE_INTR_VEC_RXTX_OFFSET) ?
(vec - RTE_INTR_VEC_RXTX_OFFSET) : vec;
- fd = intr_handle->efds[efd_idx];
+ fd = rte_intr_efds_index_get(intr_handle, efd_idx);
return fd;
}
@@ -5046,12 +5046,12 @@ rte_eth_dev_rx_intr_ctl_q(uint16_t port_id, uint16_t queue_id,
}
intr_handle = dev->intr_handle;
- if (!intr_handle->intr_vec) {
+ if (rte_intr_vec_list_index_get(intr_handle, 0) < 0) {
RTE_ETHDEV_LOG(ERR, "Rx Intr vector unset\n");
return -EPERM;
}
- vec = intr_handle->intr_vec[queue_id];
+ vec = rte_intr_vec_list_index_get(intr_handle, queue_id);
rc = rte_intr_rx_ctl(intr_handle, epfd, op, vec, data);
if (rc && rc != -EEXIST) {
RTE_ETHDEV_LOG(ERR,
--
2.23.0
* [dpdk-dev] [PATCH v6 6/9] drivers: remove direct access to interrupt handle
2021-10-24 20:04 ` [dpdk-dev] [PATCH v6 0/9] make rte_intr_handle internal David Marchand
` (4 preceding siblings ...)
2021-10-24 20:04 ` [dpdk-dev] [PATCH v6 5/9] lib: " David Marchand
@ 2021-10-24 20:04 ` David Marchand
2021-10-24 20:04 ` [dpdk-dev] [PATCH v6 7/9] interrupts: make interrupt handle structure opaque David Marchand
` (2 subsequent siblings)
8 siblings, 0 replies; 152+ messages in thread
From: David Marchand @ 2021-10-24 20:04 UTC (permalink / raw)
To: hkalra, dev
Cc: dmitry.kozliuk, Hyong Youb Kim, Nicolas Chautru, Parav Pandit,
Xueming Li, Hemant Agrawal, Sachin Saxena, Rosen Xu,
Ferruh Yigit, Anatoly Burakov, Stephen Hemminger, Long Li,
Nithin Dabilpuram, Kiran Kumar K, Sunil Kumar Kori, Satha Rao,
Jerin Jacob, Ankur Dwivedi, Anoob Joseph, Pavan Nikhilesh,
Igor Russkikh, Steven Webster, Matt Peters, Chandubabu Namburu,
Rasesh Mody, Shahed Shaikh, Ajit Khaparde, Somnath Kotur,
Haiyue Wang, Marcin Wojtas, Michal Krawczyk, Shai Brandes,
Evgeny Schemeilin, Igor Chauskin, John Daley, Gaetan Rivet,
Qi Zhang, Xiao Wang, Ziyang Xuan, Xiaoyun Wang, Guoyang Zhou,
Min Hu (Connor),
Yisen Zhuang, Lijun Ou, Beilei Xing, Jingjing Wu, Qiming Yang,
Andrew Boyer, Jakub Grajciar, Matan Azrad, Viacheslav Ovsiienko,
Heinrich Kuhn, Jiawen Wu, Devendra Singh Rawat, Andrew Rybchenko,
Keith Wiles, Maciej Czekaj, Jian Wang, Maxime Coquelin,
Chenbo Xia, Yong Wang, Tianfei zhang, Xiaoyun Li, Guy Kaneti,
Thomas Monjalon
From: Harman Kalra <hkalra@marvell.com>
Removing direct access to the interrupt handle structure fields; the respective
get/set APIs are used instead.
All drivers that currently access the interrupt handle fields directly are
updated accordingly.
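For illustration only (not part of the patch), a rough sketch of the
allocate/fill/free lifecycle this change assumes on the driver and bus side;
the descriptor fd is a placeholder and the RTE_INTR_HANDLE_EXT type is taken
from the dpaa hunk below:

    /* Allocate an interrupt instance instead of embedding the struct. */
    struct rte_intr_handle *ih;

    ih = rte_intr_instance_alloc(RTE_INTR_INSTANCE_F_PRIVATE);
    if (ih == NULL)
            return -ENOMEM;

    /* Fill in the fields through the accessors. */
    if (rte_intr_type_set(ih, RTE_INTR_HANDLE_EXT) ||
        rte_intr_fd_set(ih, fd)) {
            rte_intr_instance_free(ih);
            return -rte_errno;
    }

    /* ... use ih with rte_intr_callback_register()/rte_intr_enable() ... */

    /* Release the instance during remove/cleanup. */
    rte_intr_instance_free(ih);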
Signed-off-by: Harman Kalra <hkalra@marvell.com>
Acked-by: Hyong Youb Kim <hyonkim@cisco.com>
Signed-off-by: David Marchand <david.marchand@redhat.com>
---
Changes since v5:
- moved instance allocation to probing for auxiliary,
- fixed dev_irq_register() return value sign on error for
drivers/common/cnxk/roc_irq.c,
---
drivers/baseband/acc100/rte_acc100_pmd.c | 14 +--
.../fpga_5gnr_fec/rte_fpga_5gnr_fec.c | 24 ++--
drivers/baseband/fpga_lte_fec/fpga_lte_fec.c | 24 ++--
drivers/bus/auxiliary/auxiliary_common.c | 17 ++-
drivers/bus/auxiliary/rte_bus_auxiliary.h | 2 +-
drivers/bus/dpaa/dpaa_bus.c | 28 ++++-
drivers/bus/dpaa/rte_dpaa_bus.h | 2 +-
drivers/bus/fslmc/fslmc_bus.c | 14 ++-
drivers/bus/fslmc/fslmc_vfio.c | 30 +++--
drivers/bus/fslmc/portal/dpaa2_hw_dpio.c | 18 ++-
drivers/bus/fslmc/portal/dpaa2_hw_pvt.h | 2 +-
drivers/bus/fslmc/rte_fslmc.h | 2 +-
drivers/bus/ifpga/ifpga_bus.c | 13 +-
drivers/bus/ifpga/rte_bus_ifpga.h | 2 +-
drivers/bus/pci/bsd/pci.c | 20 +--
drivers/bus/pci/linux/pci.c | 4 +-
drivers/bus/pci/linux/pci_uio.c | 69 +++++++----
drivers/bus/pci/linux/pci_vfio.c | 102 +++++++++------
drivers/bus/pci/pci_common.c | 28 ++++-
drivers/bus/pci/pci_common_uio.c | 21 ++--
drivers/bus/pci/rte_bus_pci.h | 4 +-
drivers/bus/vmbus/linux/vmbus_bus.c | 6 +
drivers/bus/vmbus/linux/vmbus_uio.c | 35 ++++--
drivers/bus/vmbus/rte_bus_vmbus.h | 2 +-
drivers/bus/vmbus/vmbus_common_uio.c | 23 ++--
drivers/common/cnxk/roc_cpt.c | 8 +-
drivers/common/cnxk/roc_dev.c | 14 +--
drivers/common/cnxk/roc_irq.c | 107 +++++++++-------
drivers/common/cnxk/roc_nix_inl_dev_irq.c | 8 +-
drivers/common/cnxk/roc_nix_irq.c | 36 +++---
drivers/common/cnxk/roc_npa.c | 2 +-
drivers/common/cnxk/roc_platform.h | 48 +++++--
drivers/common/cnxk/roc_sso.c | 4 +-
drivers/common/cnxk/roc_tim.c | 4 +-
drivers/common/octeontx2/otx2_dev.c | 14 +--
drivers/common/octeontx2/otx2_irq.c | 117 ++++++++++--------
.../octeontx2/otx2_cryptodev_hw_access.c | 4 +-
drivers/event/octeontx2/otx2_evdev_irq.c | 12 +-
drivers/mempool/octeontx2/otx2_mempool.c | 2 +-
drivers/net/atlantic/atl_ethdev.c | 20 ++-
drivers/net/avp/avp_ethdev.c | 8 +-
drivers/net/axgbe/axgbe_ethdev.c | 12 +-
drivers/net/axgbe/axgbe_mdio.c | 6 +-
drivers/net/bnx2x/bnx2x_ethdev.c | 10 +-
drivers/net/bnxt/bnxt_ethdev.c | 33 +++--
drivers/net/bnxt/bnxt_irq.c | 4 +-
drivers/net/dpaa/dpaa_ethdev.c | 48 ++++---
drivers/net/dpaa2/dpaa2_ethdev.c | 10 +-
drivers/net/e1000/em_ethdev.c | 23 ++--
drivers/net/e1000/igb_ethdev.c | 79 ++++++------
drivers/net/ena/ena_ethdev.c | 35 +++---
drivers/net/enic/enic_main.c | 26 ++--
drivers/net/failsafe/failsafe.c | 21 +++-
drivers/net/failsafe/failsafe_intr.c | 43 ++++---
drivers/net/failsafe/failsafe_ops.c | 19 ++-
drivers/net/failsafe/failsafe_private.h | 2 +-
drivers/net/fm10k/fm10k_ethdev.c | 32 ++---
drivers/net/hinic/hinic_pmd_ethdev.c | 10 +-
drivers/net/hns3/hns3_ethdev.c | 57 ++++-----
drivers/net/hns3/hns3_ethdev_vf.c | 64 +++++-----
drivers/net/hns3/hns3_rxtx.c | 2 +-
drivers/net/i40e/i40e_ethdev.c | 53 ++++----
drivers/net/iavf/iavf_ethdev.c | 42 +++----
drivers/net/iavf/iavf_vchnl.c | 4 +-
drivers/net/ice/ice_dcf.c | 10 +-
drivers/net/ice/ice_dcf_ethdev.c | 21 ++--
drivers/net/ice/ice_ethdev.c | 49 ++++----
drivers/net/igc/igc_ethdev.c | 45 ++++---
drivers/net/ionic/ionic_ethdev.c | 17 +--
drivers/net/ixgbe/ixgbe_ethdev.c | 66 +++++-----
drivers/net/memif/memif_socket.c | 108 +++++++++++-----
drivers/net/memif/memif_socket.h | 4 +-
drivers/net/memif/rte_eth_memif.c | 56 +++++++--
drivers/net/memif/rte_eth_memif.h | 2 +-
drivers/net/mlx4/mlx4.c | 19 ++-
drivers/net/mlx4/mlx4.h | 2 +-
drivers/net/mlx4/mlx4_intr.c | 47 ++++---
drivers/net/mlx5/linux/mlx5_os.c | 55 +++++---
drivers/net/mlx5/linux/mlx5_socket.c | 25 ++--
drivers/net/mlx5/mlx5.h | 6 +-
drivers/net/mlx5/mlx5_rxq.c | 43 ++++---
drivers/net/mlx5/mlx5_trigger.c | 4 +-
drivers/net/mlx5/mlx5_txpp.c | 25 ++--
drivers/net/netvsc/hn_ethdev.c | 4 +-
drivers/net/nfp/nfp_common.c | 34 ++---
drivers/net/nfp/nfp_ethdev.c | 13 +-
drivers/net/nfp/nfp_ethdev_vf.c | 13 +-
drivers/net/ngbe/ngbe_ethdev.c | 29 ++---
drivers/net/octeontx2/otx2_ethdev_irq.c | 35 +++---
drivers/net/qede/qede_ethdev.c | 16 +--
drivers/net/sfc/sfc_intr.c | 30 ++---
drivers/net/tap/rte_eth_tap.c | 33 +++--
drivers/net/tap/rte_eth_tap.h | 2 +-
drivers/net/tap/tap_intr.c | 33 ++---
drivers/net/thunderx/nicvf_ethdev.c | 10 ++
drivers/net/thunderx/nicvf_struct.h | 2 +-
drivers/net/txgbe/txgbe_ethdev.c | 38 +++---
drivers/net/txgbe/txgbe_ethdev_vf.c | 33 +++--
drivers/net/vhost/rte_eth_vhost.c | 80 ++++++------
drivers/net/virtio/virtio_ethdev.c | 21 ++--
.../net/virtio/virtio_user/virtio_user_dev.c | 56 +++++----
drivers/net/vmxnet3/vmxnet3_ethdev.c | 43 ++++---
drivers/raw/ifpga/ifpga_rawdev.c | 62 +++++++---
drivers/raw/ntb/ntb.c | 9 +-
.../regex/octeontx2/otx2_regexdev_hw_access.c | 4 +-
drivers/vdpa/ifc/ifcvf_vdpa.c | 5 +-
drivers/vdpa/mlx5/mlx5_vdpa.c | 8 ++
drivers/vdpa/mlx5/mlx5_vdpa.h | 4 +-
drivers/vdpa/mlx5/mlx5_vdpa_event.c | 21 ++--
drivers/vdpa/mlx5/mlx5_vdpa_virtq.c | 44 ++++---
lib/ethdev/ethdev_pci.h | 2 +-
111 files changed, 1660 insertions(+), 1177 deletions(-)
diff --git a/drivers/baseband/acc100/rte_acc100_pmd.c b/drivers/baseband/acc100/rte_acc100_pmd.c
index 05fe6f8b6f..1c6080f2f8 100644
--- a/drivers/baseband/acc100/rte_acc100_pmd.c
+++ b/drivers/baseband/acc100/rte_acc100_pmd.c
@@ -720,8 +720,8 @@ acc100_intr_enable(struct rte_bbdev *dev)
struct acc100_device *d = dev->data->dev_private;
/* Only MSI are currently supported */
- if (dev->intr_handle->type == RTE_INTR_HANDLE_VFIO_MSI ||
- dev->intr_handle->type == RTE_INTR_HANDLE_UIO) {
+ if (rte_intr_type_get(dev->intr_handle) == RTE_INTR_HANDLE_VFIO_MSI ||
+ rte_intr_type_get(dev->intr_handle) == RTE_INTR_HANDLE_UIO) {
ret = allocate_info_ring(dev);
if (ret < 0) {
@@ -1098,8 +1098,8 @@ acc100_queue_intr_enable(struct rte_bbdev *dev, uint16_t queue_id)
{
struct acc100_queue *q = dev->data->queues[queue_id].queue_private;
- if (dev->intr_handle->type != RTE_INTR_HANDLE_VFIO_MSI &&
- dev->intr_handle->type != RTE_INTR_HANDLE_UIO)
+ if (rte_intr_type_get(dev->intr_handle) != RTE_INTR_HANDLE_VFIO_MSI &&
+ rte_intr_type_get(dev->intr_handle) != RTE_INTR_HANDLE_UIO)
return -ENOTSUP;
q->irq_enable = 1;
@@ -1111,8 +1111,8 @@ acc100_queue_intr_disable(struct rte_bbdev *dev, uint16_t queue_id)
{
struct acc100_queue *q = dev->data->queues[queue_id].queue_private;
- if (dev->intr_handle->type != RTE_INTR_HANDLE_VFIO_MSI &&
- dev->intr_handle->type != RTE_INTR_HANDLE_UIO)
+ if (rte_intr_type_get(dev->intr_handle) != RTE_INTR_HANDLE_VFIO_MSI &&
+ rte_intr_type_get(dev->intr_handle) != RTE_INTR_HANDLE_UIO)
return -ENOTSUP;
q->irq_enable = 0;
@@ -4185,7 +4185,7 @@ static int acc100_pci_probe(struct rte_pci_driver *pci_drv,
/* Fill HW specific part of device structure */
bbdev->device = &pci_dev->device;
- bbdev->intr_handle = &pci_dev->intr_handle;
+ bbdev->intr_handle = pci_dev->intr_handle;
bbdev->data->socket_id = pci_dev->device.numa_node;
/* Invoke ACC100 device initialization function */
diff --git a/drivers/baseband/fpga_5gnr_fec/rte_fpga_5gnr_fec.c b/drivers/baseband/fpga_5gnr_fec/rte_fpga_5gnr_fec.c
index ee457f3071..15d23d6269 100644
--- a/drivers/baseband/fpga_5gnr_fec/rte_fpga_5gnr_fec.c
+++ b/drivers/baseband/fpga_5gnr_fec/rte_fpga_5gnr_fec.c
@@ -743,17 +743,17 @@ fpga_intr_enable(struct rte_bbdev *dev)
* It ensures that callback function assigned to that descriptor will
* invoked when any FPGA queue issues interrupt.
*/
- for (i = 0; i < FPGA_NUM_INTR_VEC; ++i)
- dev->intr_handle->efds[i] = dev->intr_handle->fd;
-
- if (!dev->intr_handle->intr_vec) {
- dev->intr_handle->intr_vec = rte_zmalloc("intr_vec",
- dev->data->num_queues * sizeof(int), 0);
- if (!dev->intr_handle->intr_vec) {
- rte_bbdev_log(ERR, "Failed to allocate %u vectors",
- dev->data->num_queues);
- return -ENOMEM;
- }
+ for (i = 0; i < FPGA_NUM_INTR_VEC; ++i) {
+ if (rte_intr_efds_index_set(dev->intr_handle, i,
+ rte_intr_fd_get(dev->intr_handle)))
+ return -rte_errno;
+ }
+
+ if (rte_intr_vec_list_alloc(dev->intr_handle, "intr_vec",
+ dev->data->num_queues)) {
+ rte_bbdev_log(ERR, "Failed to allocate %u vectors",
+ dev->data->num_queues);
+ return -ENOMEM;
}
ret = rte_intr_enable(dev->intr_handle);
@@ -1880,7 +1880,7 @@ fpga_5gnr_fec_probe(struct rte_pci_driver *pci_drv,
/* Fill HW specific part of device structure */
bbdev->device = &pci_dev->device;
- bbdev->intr_handle = &pci_dev->intr_handle;
+ bbdev->intr_handle = pci_dev->intr_handle;
bbdev->data->socket_id = pci_dev->device.numa_node;
/* Invoke FEC FPGA device initialization function */
diff --git a/drivers/baseband/fpga_lte_fec/fpga_lte_fec.c b/drivers/baseband/fpga_lte_fec/fpga_lte_fec.c
index 703bb611a0..92decc3e05 100644
--- a/drivers/baseband/fpga_lte_fec/fpga_lte_fec.c
+++ b/drivers/baseband/fpga_lte_fec/fpga_lte_fec.c
@@ -1014,17 +1014,17 @@ fpga_intr_enable(struct rte_bbdev *dev)
* It ensures that callback function assigned to that descriptor will
* invoked when any FPGA queue issues interrupt.
*/
- for (i = 0; i < FPGA_NUM_INTR_VEC; ++i)
- dev->intr_handle->efds[i] = dev->intr_handle->fd;
-
- if (!dev->intr_handle->intr_vec) {
- dev->intr_handle->intr_vec = rte_zmalloc("intr_vec",
- dev->data->num_queues * sizeof(int), 0);
- if (!dev->intr_handle->intr_vec) {
- rte_bbdev_log(ERR, "Failed to allocate %u vectors",
- dev->data->num_queues);
- return -ENOMEM;
- }
+ for (i = 0; i < FPGA_NUM_INTR_VEC; ++i) {
+ if (rte_intr_efds_index_set(dev->intr_handle, i,
+ rte_intr_fd_get(dev->intr_handle)))
+ return -rte_errno;
+ }
+
+ if (rte_intr_vec_list_alloc(dev->intr_handle, "intr_vec",
+ dev->data->num_queues)) {
+ rte_bbdev_log(ERR, "Failed to allocate %u vectors",
+ dev->data->num_queues);
+ return -ENOMEM;
}
ret = rte_intr_enable(dev->intr_handle);
@@ -2370,7 +2370,7 @@ fpga_lte_fec_probe(struct rte_pci_driver *pci_drv,
/* Fill HW specific part of device structure */
bbdev->device = &pci_dev->device;
- bbdev->intr_handle = &pci_dev->intr_handle;
+ bbdev->intr_handle = pci_dev->intr_handle;
bbdev->data->socket_id = pci_dev->device.numa_node;
/* Invoke FEC FPGA device initialization function */
diff --git a/drivers/bus/auxiliary/auxiliary_common.c b/drivers/bus/auxiliary/auxiliary_common.c
index 603b6fdc02..2cf8fe672d 100644
--- a/drivers/bus/auxiliary/auxiliary_common.c
+++ b/drivers/bus/auxiliary/auxiliary_common.c
@@ -121,15 +121,27 @@ rte_auxiliary_probe_one_driver(struct rte_auxiliary_driver *drv,
return -EINVAL;
}
+ /* Allocate interrupt instance */
+ dev->intr_handle =
+ rte_intr_instance_alloc(RTE_INTR_INSTANCE_F_PRIVATE);
+ if (dev->intr_handle == NULL) {
+ AUXILIARY_LOG(ERR, "Could not allocate interrupt instance for device %s",
+ dev->name);
+ return -ENOMEM;
+ }
+
dev->driver = drv;
AUXILIARY_LOG(INFO, "Probe auxiliary driver: %s device: %s (NUMA node %i)",
drv->driver.name, dev->name, dev->device.numa_node);
ret = drv->probe(drv, dev);
- if (ret != 0)
+ if (ret != 0) {
dev->driver = NULL;
- else
+ rte_intr_instance_free(dev->intr_handle);
+ dev->intr_handle = NULL;
+ } else {
dev->device.driver = &drv->driver;
+ }
return ret;
}
@@ -320,6 +332,7 @@ auxiliary_unplug(struct rte_device *dev)
if (ret == 0) {
rte_auxiliary_remove_device(adev);
rte_devargs_remove(dev->devargs);
+ rte_intr_instance_free(adev->intr_handle);
free(adev);
}
return ret;
diff --git a/drivers/bus/auxiliary/rte_bus_auxiliary.h b/drivers/bus/auxiliary/rte_bus_auxiliary.h
index b1f5610404..93b266daf7 100644
--- a/drivers/bus/auxiliary/rte_bus_auxiliary.h
+++ b/drivers/bus/auxiliary/rte_bus_auxiliary.h
@@ -115,7 +115,7 @@ struct rte_auxiliary_device {
RTE_TAILQ_ENTRY(rte_auxiliary_device) next; /**< Next probed device. */
struct rte_device device; /**< Inherit core device */
char name[RTE_DEV_NAME_MAX_LEN + 1]; /**< ASCII device name */
- struct rte_intr_handle intr_handle; /**< Interrupt handle */
+ struct rte_intr_handle *intr_handle; /**< Interrupt handle */
struct rte_auxiliary_driver *driver; /**< Device driver */
};
diff --git a/drivers/bus/dpaa/dpaa_bus.c b/drivers/bus/dpaa/dpaa_bus.c
index 6cab2ae760..9a53fdc1fb 100644
--- a/drivers/bus/dpaa/dpaa_bus.c
+++ b/drivers/bus/dpaa/dpaa_bus.c
@@ -172,6 +172,15 @@ dpaa_create_device_list(void)
dev->device.bus = &rte_dpaa_bus.bus;
+ /* Allocate interrupt handle instance */
+ dev->intr_handle =
+ rte_intr_instance_alloc(RTE_INTR_INSTANCE_F_PRIVATE);
+ if (dev->intr_handle == NULL) {
+ DPAA_BUS_LOG(ERR, "Failed to allocate intr handle");
+ ret = -ENOMEM;
+ goto cleanup;
+ }
+
cfg = &dpaa_netcfg->port_cfg[i];
fman_intf = cfg->fman_if;
@@ -214,6 +223,15 @@ dpaa_create_device_list(void)
goto cleanup;
}
+ /* Allocate interrupt handle instance */
+ dev->intr_handle =
+ rte_intr_instance_alloc(RTE_INTR_INSTANCE_F_PRIVATE);
+ if (dev->intr_handle == NULL) {
+ DPAA_BUS_LOG(ERR, "Failed to allocate intr handle");
+ ret = -ENOMEM;
+ goto cleanup;
+ }
+
dev->device_type = FSL_DPAA_CRYPTO;
dev->id.dev_id = rte_dpaa_bus.device_count + i;
@@ -247,6 +265,7 @@ dpaa_clean_device_list(void)
RTE_TAILQ_FOREACH_SAFE(dev, &rte_dpaa_bus.device_list, next, tdev) {
TAILQ_REMOVE(&rte_dpaa_bus.device_list, dev, next);
+ rte_intr_instance_free(dev->intr_handle);
free(dev);
dev = NULL;
}
@@ -559,8 +578,11 @@ static int rte_dpaa_setup_intr(struct rte_intr_handle *intr_handle)
return errno;
}
- intr_handle->fd = fd;
- intr_handle->type = RTE_INTR_HANDLE_EXT;
+ if (rte_intr_fd_set(intr_handle, fd))
+ return rte_errno;
+
+ if (rte_intr_type_set(intr_handle, RTE_INTR_HANDLE_EXT))
+ return rte_errno;
return 0;
}
@@ -612,7 +634,7 @@ rte_dpaa_bus_probe(void)
TAILQ_FOREACH(dev, &rte_dpaa_bus.device_list, next) {
if (dev->device_type == FSL_DPAA_ETH) {
- ret = rte_dpaa_setup_intr(&dev->intr_handle);
+ ret = rte_dpaa_setup_intr(dev->intr_handle);
if (ret)
DPAA_BUS_ERR("Error setting up interrupt.\n");
}
diff --git a/drivers/bus/dpaa/rte_dpaa_bus.h b/drivers/bus/dpaa/rte_dpaa_bus.h
index ecc66387f6..97d189f9b0 100644
--- a/drivers/bus/dpaa/rte_dpaa_bus.h
+++ b/drivers/bus/dpaa/rte_dpaa_bus.h
@@ -98,7 +98,7 @@ struct rte_dpaa_device {
};
struct rte_dpaa_driver *driver;
struct dpaa_device_id id;
- struct rte_intr_handle intr_handle;
+ struct rte_intr_handle *intr_handle;
enum rte_dpaa_type device_type; /**< Ethernet or crypto type device */
char name[RTE_ETH_NAME_MAX_LEN];
};
diff --git a/drivers/bus/fslmc/fslmc_bus.c b/drivers/bus/fslmc/fslmc_bus.c
index 8c8f8a298d..ac3cb4aa5a 100644
--- a/drivers/bus/fslmc/fslmc_bus.c
+++ b/drivers/bus/fslmc/fslmc_bus.c
@@ -47,6 +47,7 @@ cleanup_fslmc_device_list(void)
RTE_TAILQ_FOREACH_SAFE(dev, &rte_fslmc_bus.device_list, next, t_dev) {
TAILQ_REMOVE(&rte_fslmc_bus.device_list, dev, next);
+ rte_intr_instance_free(dev->intr_handle);
free(dev);
dev = NULL;
}
@@ -160,6 +161,15 @@ scan_one_fslmc_device(char *dev_name)
dev->device.bus = &rte_fslmc_bus.bus;
+ /* Allocate interrupt instance */
+ dev->intr_handle =
+ rte_intr_instance_alloc(RTE_INTR_INSTANCE_F_PRIVATE);
+ if (dev->intr_handle == NULL) {
+ DPAA2_BUS_ERR("Failed to allocate intr handle");
+ ret = -ENOMEM;
+ goto cleanup;
+ }
+
/* Parse the device name and ID */
t_ptr = strtok(dup_dev_name, ".");
if (!t_ptr) {
@@ -220,8 +230,10 @@ scan_one_fslmc_device(char *dev_name)
cleanup:
if (dup_dev_name)
free(dup_dev_name);
- if (dev)
+ if (dev) {
+ rte_intr_instance_free(dev->intr_handle);
free(dev);
+ }
return ret;
}
diff --git a/drivers/bus/fslmc/fslmc_vfio.c b/drivers/bus/fslmc/fslmc_vfio.c
index 852fcfc4dd..b4704eeae4 100644
--- a/drivers/bus/fslmc/fslmc_vfio.c
+++ b/drivers/bus/fslmc/fslmc_vfio.c
@@ -599,7 +599,7 @@ int rte_dpaa2_intr_enable(struct rte_intr_handle *intr_handle, int index)
int len, ret;
char irq_set_buf[IRQ_SET_BUF_LEN];
struct vfio_irq_set *irq_set;
- int *fd_ptr;
+ int *fd_ptr, vfio_dev_fd;
len = sizeof(irq_set_buf);
@@ -611,12 +611,14 @@ int rte_dpaa2_intr_enable(struct rte_intr_handle *intr_handle, int index)
irq_set->index = index;
irq_set->start = 0;
fd_ptr = (int *)&irq_set->data;
- *fd_ptr = intr_handle->fd;
+ *fd_ptr = rte_intr_fd_get(intr_handle);
- ret = ioctl(intr_handle->vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
+ vfio_dev_fd = rte_intr_dev_fd_get(intr_handle);
+ ret = ioctl(vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
if (ret) {
DPAA2_BUS_ERR("Error:dpaa2 SET IRQs fd=%d, err = %d(%s)",
- intr_handle->fd, errno, strerror(errno));
+ rte_intr_fd_get(intr_handle), errno,
+ strerror(errno));
return ret;
}
@@ -627,7 +629,7 @@ int rte_dpaa2_intr_disable(struct rte_intr_handle *intr_handle, int index)
{
struct vfio_irq_set *irq_set;
char irq_set_buf[IRQ_SET_BUF_LEN];
- int len, ret;
+ int len, ret, vfio_dev_fd;
len = sizeof(struct vfio_irq_set);
@@ -638,11 +640,12 @@ int rte_dpaa2_intr_disable(struct rte_intr_handle *intr_handle, int index)
irq_set->start = 0;
irq_set->count = 0;
- ret = ioctl(intr_handle->vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
+ vfio_dev_fd = rte_intr_dev_fd_get(intr_handle);
+ ret = ioctl(vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
if (ret)
DPAA2_BUS_ERR(
"Error disabling dpaa2 interrupts for fd %d",
- intr_handle->fd);
+ rte_intr_fd_get(intr_handle));
return ret;
}
@@ -684,9 +687,14 @@ rte_dpaa2_vfio_setup_intr(struct rte_intr_handle *intr_handle,
return -1;
}
- intr_handle->fd = fd;
- intr_handle->type = RTE_INTR_HANDLE_VFIO_MSI;
- intr_handle->vfio_dev_fd = vfio_dev_fd;
+ if (rte_intr_fd_set(intr_handle, fd))
+ return -rte_errno;
+
+ if (rte_intr_type_set(intr_handle, RTE_INTR_HANDLE_VFIO_MSI))
+ return -rte_errno;
+
+ if (rte_intr_dev_fd_set(intr_handle, vfio_dev_fd))
+ return -rte_errno;
return 0;
}
@@ -711,7 +719,7 @@ fslmc_process_iodevices(struct rte_dpaa2_device *dev)
switch (dev->dev_type) {
case DPAA2_ETH:
- rte_dpaa2_vfio_setup_intr(&dev->intr_handle, dev_fd,
+ rte_dpaa2_vfio_setup_intr(dev->intr_handle, dev_fd,
device_info.num_irqs);
break;
case DPAA2_CON:
diff --git a/drivers/bus/fslmc/portal/dpaa2_hw_dpio.c b/drivers/bus/fslmc/portal/dpaa2_hw_dpio.c
index 1a1e437ed1..2210a0fa4a 100644
--- a/drivers/bus/fslmc/portal/dpaa2_hw_dpio.c
+++ b/drivers/bus/fslmc/portal/dpaa2_hw_dpio.c
@@ -176,7 +176,7 @@ static int dpaa2_dpio_intr_init(struct dpaa2_dpio_dev *dpio_dev)
int threshold = 0x3, timeout = 0xFF;
dpio_epoll_fd = epoll_create(1);
- ret = rte_dpaa2_intr_enable(&dpio_dev->intr_handle, 0);
+ ret = rte_dpaa2_intr_enable(dpio_dev->intr_handle, 0);
if (ret) {
DPAA2_BUS_ERR("Interrupt registeration failed");
return -1;
@@ -195,7 +195,7 @@ static int dpaa2_dpio_intr_init(struct dpaa2_dpio_dev *dpio_dev)
qbman_swp_dqrr_thrshld_write(dpio_dev->sw_portal, threshold);
qbman_swp_intr_timeout_write(dpio_dev->sw_portal, timeout);
- eventfd = dpio_dev->intr_handle.fd;
+ eventfd = rte_intr_fd_get(dpio_dev->intr_handle);
epoll_ev.events = EPOLLIN | EPOLLPRI | EPOLLET;
epoll_ev.data.fd = eventfd;
@@ -213,7 +213,7 @@ static void dpaa2_dpio_intr_deinit(struct dpaa2_dpio_dev *dpio_dev)
{
int ret;
- ret = rte_dpaa2_intr_disable(&dpio_dev->intr_handle, 0);
+ ret = rte_dpaa2_intr_disable(dpio_dev->intr_handle, 0);
if (ret)
DPAA2_BUS_ERR("DPIO interrupt disable failed");
@@ -388,6 +388,14 @@ dpaa2_create_dpio_device(int vdev_fd,
/* Using single portal for all devices */
dpio_dev->mc_portal = dpaa2_get_mcp_ptr(MC_PORTAL_INDEX);
+ /* Allocate interrupt instance */
+ dpio_dev->intr_handle =
+ rte_intr_instance_alloc(RTE_INTR_INSTANCE_F_SHARED);
+ if (!dpio_dev->intr_handle) {
+ DPAA2_BUS_ERR("Failed to allocate intr handle");
+ goto err;
+ }
+
dpio_dev->dpio = rte_zmalloc(NULL, sizeof(struct fsl_mc_io),
RTE_CACHE_LINE_SIZE);
if (!dpio_dev->dpio) {
@@ -490,7 +498,7 @@ dpaa2_create_dpio_device(int vdev_fd,
io_space_count++;
dpio_dev->index = io_space_count;
- if (rte_dpaa2_vfio_setup_intr(&dpio_dev->intr_handle, vdev_fd, 1)) {
+ if (rte_dpaa2_vfio_setup_intr(dpio_dev->intr_handle, vdev_fd, 1)) {
DPAA2_BUS_ERR("Fail to setup interrupt for %d",
dpio_dev->hw_id);
goto err;
@@ -538,6 +546,7 @@ dpaa2_create_dpio_device(int vdev_fd,
rte_free(dpio_dev->dpio);
}
+ rte_intr_instance_free(dpio_dev->intr_handle);
rte_free(dpio_dev);
/* For each element in the list, cleanup */
@@ -549,6 +558,7 @@ dpaa2_create_dpio_device(int vdev_fd,
dpio_dev->token);
rte_free(dpio_dev->dpio);
}
+ rte_intr_instance_free(dpio_dev->intr_handle);
rte_free(dpio_dev);
}
diff --git a/drivers/bus/fslmc/portal/dpaa2_hw_pvt.h b/drivers/bus/fslmc/portal/dpaa2_hw_pvt.h
index 037c841ef5..b1bba1ac36 100644
--- a/drivers/bus/fslmc/portal/dpaa2_hw_pvt.h
+++ b/drivers/bus/fslmc/portal/dpaa2_hw_pvt.h
@@ -116,7 +116,7 @@ struct dpaa2_dpio_dev {
uintptr_t qbman_portal_ci_paddr;
/**< Physical address of Cache Inhibit Area */
uintptr_t ci_size; /**< Size of the CI region */
- struct rte_intr_handle intr_handle; /* Interrupt related info */
+ struct rte_intr_handle *intr_handle; /* Interrupt related info */
int32_t epoll_fd; /**< File descriptor created for interrupt polling */
int32_t hw_id; /**< An unique ID of this DPIO device instance */
struct dpaa2_portal_dqrr dpaa2_held_bufs;
diff --git a/drivers/bus/fslmc/rte_fslmc.h b/drivers/bus/fslmc/rte_fslmc.h
index a71cac7a9f..729f360646 100644
--- a/drivers/bus/fslmc/rte_fslmc.h
+++ b/drivers/bus/fslmc/rte_fslmc.h
@@ -122,7 +122,7 @@ struct rte_dpaa2_device {
};
enum rte_dpaa2_dev_type dev_type; /**< Device Type */
uint16_t object_id; /**< DPAA2 Object ID */
- struct rte_intr_handle intr_handle; /**< Interrupt handle */
+ struct rte_intr_handle *intr_handle; /**< Interrupt handle */
struct rte_dpaa2_driver *driver; /**< Associated driver */
char name[FSLMC_OBJECT_MAX_LEN]; /**< DPAA2 Object name*/
};
diff --git a/drivers/bus/ifpga/ifpga_bus.c b/drivers/bus/ifpga/ifpga_bus.c
index 62887da2d8..cbc6809284 100644
--- a/drivers/bus/ifpga/ifpga_bus.c
+++ b/drivers/bus/ifpga/ifpga_bus.c
@@ -161,6 +161,14 @@ ifpga_scan_one(struct rte_rawdev *rawdev,
afu_dev->id.uuid.uuid_high = 0;
afu_dev->id.port = afu_pr_conf.afu_id.port;
+ /* Allocate interrupt instance */
+ afu_dev->intr_handle =
+ rte_intr_instance_alloc(RTE_INTR_INSTANCE_F_PRIVATE);
+ if (afu_dev->intr_handle == NULL) {
+ IFPGA_BUS_ERR("Failed to allocate intr handle");
+ goto end;
+ }
+
if (rawdev->dev_ops && rawdev->dev_ops->dev_info_get)
rawdev->dev_ops->dev_info_get(rawdev, afu_dev, sizeof(*afu_dev));
@@ -189,8 +197,10 @@ ifpga_scan_one(struct rte_rawdev *rawdev,
rte_kvargs_free(kvlist);
if (path)
free(path);
- if (afu_dev)
+ if (afu_dev) {
+ rte_intr_instance_free(afu_dev->intr_handle);
free(afu_dev);
+ }
return NULL;
}
@@ -396,6 +406,7 @@ ifpga_unplug(struct rte_device *dev)
TAILQ_REMOVE(&ifpga_afu_dev_list, afu_dev, next);
rte_devargs_remove(dev->devargs);
+ rte_intr_instance_free(afu_dev->intr_handle);
free(afu_dev);
return 0;
diff --git a/drivers/bus/ifpga/rte_bus_ifpga.h b/drivers/bus/ifpga/rte_bus_ifpga.h
index a85e90d384..007ad19875 100644
--- a/drivers/bus/ifpga/rte_bus_ifpga.h
+++ b/drivers/bus/ifpga/rte_bus_ifpga.h
@@ -79,7 +79,7 @@ struct rte_afu_device {
struct rte_mem_resource mem_resource[PCI_MAX_RESOURCE];
/**< AFU Memory Resource */
struct rte_afu_shared shared;
- struct rte_intr_handle intr_handle; /**< Interrupt handle */
+ struct rte_intr_handle *intr_handle; /**< Interrupt handle */
struct rte_afu_driver *driver; /**< Associated driver */
char path[IFPGA_BUS_BITSTREAM_PATH_MAX_LEN];
} __rte_packed;
diff --git a/drivers/bus/pci/bsd/pci.c b/drivers/bus/pci/bsd/pci.c
index d189bff311..9a11f99ae3 100644
--- a/drivers/bus/pci/bsd/pci.c
+++ b/drivers/bus/pci/bsd/pci.c
@@ -95,10 +95,10 @@ pci_uio_free_resource(struct rte_pci_device *dev,
{
rte_free(uio_res);
- if (dev->intr_handle.fd) {
- close(dev->intr_handle.fd);
- dev->intr_handle.fd = -1;
- dev->intr_handle.type = RTE_INTR_HANDLE_UNKNOWN;
+ if (rte_intr_fd_get(dev->intr_handle)) {
+ close(rte_intr_fd_get(dev->intr_handle));
+ rte_intr_fd_set(dev->intr_handle, -1);
+ rte_intr_type_set(dev->intr_handle, RTE_INTR_HANDLE_UNKNOWN);
}
}
@@ -121,13 +121,19 @@ pci_uio_alloc_resource(struct rte_pci_device *dev,
}
/* save fd if in primary process */
- dev->intr_handle.fd = open(devname, O_RDWR);
- if (dev->intr_handle.fd < 0) {
+ if (rte_intr_fd_set(dev->intr_handle, open(devname, O_RDWR))) {
+ RTE_LOG(WARNING, EAL, "Failed to save fd");
+ goto error;
+ }
+
+ if (rte_intr_fd_get(dev->intr_handle) < 0) {
RTE_LOG(ERR, EAL, "Cannot open %s: %s\n",
devname, strerror(errno));
goto error;
}
- dev->intr_handle.type = RTE_INTR_HANDLE_UIO;
+
+ if (rte_intr_type_set(dev->intr_handle, RTE_INTR_HANDLE_UIO))
+ goto error;
/* allocate the mapping details for secondary processes*/
*uio_res = rte_zmalloc("UIO_RES", sizeof(**uio_res), 0);
diff --git a/drivers/bus/pci/linux/pci.c b/drivers/bus/pci/linux/pci.c
index 4d261b55ee..e521459870 100644
--- a/drivers/bus/pci/linux/pci.c
+++ b/drivers/bus/pci/linux/pci.c
@@ -645,7 +645,7 @@ int rte_pci_read_config(const struct rte_pci_device *device,
void *buf, size_t len, off_t offset)
{
char devname[RTE_DEV_NAME_MAX_LEN] = "";
- const struct rte_intr_handle *intr_handle = &device->intr_handle;
+ const struct rte_intr_handle *intr_handle = device->intr_handle;
switch (device->kdrv) {
case RTE_PCI_KDRV_IGB_UIO:
@@ -669,7 +669,7 @@ int rte_pci_write_config(const struct rte_pci_device *device,
const void *buf, size_t len, off_t offset)
{
char devname[RTE_DEV_NAME_MAX_LEN] = "";
- const struct rte_intr_handle *intr_handle = &device->intr_handle;
+ const struct rte_intr_handle *intr_handle = device->intr_handle;
switch (device->kdrv) {
case RTE_PCI_KDRV_IGB_UIO:
diff --git a/drivers/bus/pci/linux/pci_uio.c b/drivers/bus/pci/linux/pci_uio.c
index 39ebeac2a0..2ee5d04672 100644
--- a/drivers/bus/pci/linux/pci_uio.c
+++ b/drivers/bus/pci/linux/pci_uio.c
@@ -35,14 +35,18 @@ int
pci_uio_read_config(const struct rte_intr_handle *intr_handle,
void *buf, size_t len, off_t offset)
{
- return pread(intr_handle->uio_cfg_fd, buf, len, offset);
+ int uio_cfg_fd = rte_intr_dev_fd_get(intr_handle);
+
+ return pread(uio_cfg_fd, buf, len, offset);
}
int
pci_uio_write_config(const struct rte_intr_handle *intr_handle,
const void *buf, size_t len, off_t offset)
{
- return pwrite(intr_handle->uio_cfg_fd, buf, len, offset);
+ int uio_cfg_fd = rte_intr_dev_fd_get(intr_handle);
+
+ return pwrite(uio_cfg_fd, buf, len, offset);
}
static int
@@ -198,16 +202,19 @@ void
pci_uio_free_resource(struct rte_pci_device *dev,
struct mapped_pci_resource *uio_res)
{
+ int uio_cfg_fd = rte_intr_dev_fd_get(dev->intr_handle);
+
rte_free(uio_res);
- if (dev->intr_handle.uio_cfg_fd >= 0) {
- close(dev->intr_handle.uio_cfg_fd);
- dev->intr_handle.uio_cfg_fd = -1;
+ if (uio_cfg_fd >= 0) {
+ close(uio_cfg_fd);
+ rte_intr_dev_fd_set(dev->intr_handle, -1);
}
- if (dev->intr_handle.fd >= 0) {
- close(dev->intr_handle.fd);
- dev->intr_handle.fd = -1;
- dev->intr_handle.type = RTE_INTR_HANDLE_UNKNOWN;
+
+ if (rte_intr_fd_get(dev->intr_handle) >= 0) {
+ close(rte_intr_fd_get(dev->intr_handle));
+ rte_intr_fd_set(dev->intr_handle, -1);
+ rte_intr_type_set(dev->intr_handle, RTE_INTR_HANDLE_UNKNOWN);
}
}
@@ -218,7 +225,7 @@ pci_uio_alloc_resource(struct rte_pci_device *dev,
char dirname[PATH_MAX];
char cfgname[PATH_MAX];
char devname[PATH_MAX]; /* contains the /dev/uioX */
- int uio_num;
+ int uio_num, fd, uio_cfg_fd;
struct rte_pci_addr *loc;
loc = &dev->addr;
@@ -233,29 +240,38 @@ pci_uio_alloc_resource(struct rte_pci_device *dev,
snprintf(devname, sizeof(devname), "/dev/uio%u", uio_num);
/* save fd if in primary process */
- dev->intr_handle.fd = open(devname, O_RDWR);
- if (dev->intr_handle.fd < 0) {
+ fd = open(devname, O_RDWR);
+ if (fd < 0) {
RTE_LOG(ERR, EAL, "Cannot open %s: %s\n",
devname, strerror(errno));
goto error;
}
+ if (rte_intr_fd_set(dev->intr_handle, fd))
+ goto error;
+
snprintf(cfgname, sizeof(cfgname),
"/sys/class/uio/uio%u/device/config", uio_num);
- dev->intr_handle.uio_cfg_fd = open(cfgname, O_RDWR);
- if (dev->intr_handle.uio_cfg_fd < 0) {
+
+ uio_cfg_fd = open(cfgname, O_RDWR);
+ if (uio_cfg_fd < 0) {
RTE_LOG(ERR, EAL, "Cannot open %s: %s\n",
cfgname, strerror(errno));
goto error;
}
- if (dev->kdrv == RTE_PCI_KDRV_IGB_UIO)
- dev->intr_handle.type = RTE_INTR_HANDLE_UIO;
- else {
- dev->intr_handle.type = RTE_INTR_HANDLE_UIO_INTX;
+ if (rte_intr_dev_fd_set(dev->intr_handle, uio_cfg_fd))
+ goto error;
+
+ if (dev->kdrv == RTE_PCI_KDRV_IGB_UIO) {
+ if (rte_intr_type_set(dev->intr_handle, RTE_INTR_HANDLE_UIO))
+ goto error;
+ } else {
+ if (rte_intr_type_set(dev->intr_handle, RTE_INTR_HANDLE_UIO_INTX))
+ goto error;
/* set bus master that is not done by uio_pci_generic */
- if (pci_uio_set_bus_master(dev->intr_handle.uio_cfg_fd)) {
+ if (pci_uio_set_bus_master(uio_cfg_fd)) {
RTE_LOG(ERR, EAL, "Cannot set up bus mastering!\n");
goto error;
}
@@ -381,7 +397,7 @@ pci_uio_ioport_map(struct rte_pci_device *dev, int bar,
char buf[BUFSIZ];
uint64_t phys_addr, end_addr, flags;
unsigned long base;
- int i;
+ int i, fd;
/* open and read addresses of the corresponding resource in sysfs */
snprintf(filename, sizeof(filename), "%s/" PCI_PRI_FMT "/resource",
@@ -427,7 +443,8 @@ pci_uio_ioport_map(struct rte_pci_device *dev, int bar,
}
/* FIXME only for primary process ? */
- if (dev->intr_handle.type == RTE_INTR_HANDLE_UNKNOWN) {
+ if (rte_intr_type_get(dev->intr_handle) ==
+ RTE_INTR_HANDLE_UNKNOWN) {
int uio_num = pci_get_uio_dev(dev, dirname, sizeof(dirname), 0);
if (uio_num < 0) {
RTE_LOG(ERR, EAL, "cannot open %s: %s\n",
@@ -436,13 +453,17 @@ pci_uio_ioport_map(struct rte_pci_device *dev, int bar,
}
snprintf(filename, sizeof(filename), "/dev/uio%u", uio_num);
- dev->intr_handle.fd = open(filename, O_RDWR);
- if (dev->intr_handle.fd < 0) {
+ fd = open(filename, O_RDWR);
+ if (fd < 0) {
RTE_LOG(ERR, EAL, "Cannot open %s: %s\n",
filename, strerror(errno));
goto error;
}
- dev->intr_handle.type = RTE_INTR_HANDLE_UIO;
+ if (rte_intr_fd_set(dev->intr_handle, fd))
+ goto error;
+
+ if (rte_intr_type_set(dev->intr_handle, RTE_INTR_HANDLE_UIO))
+ goto error;
}
RTE_LOG(DEBUG, EAL, "PCI Port IO found start=0x%lx\n", base);
diff --git a/drivers/bus/pci/linux/pci_vfio.c b/drivers/bus/pci/linux/pci_vfio.c
index a024269140..7b2f8296c5 100644
--- a/drivers/bus/pci/linux/pci_vfio.c
+++ b/drivers/bus/pci/linux/pci_vfio.c
@@ -47,7 +47,9 @@ int
pci_vfio_read_config(const struct rte_intr_handle *intr_handle,
void *buf, size_t len, off_t offs)
{
- return pread64(intr_handle->vfio_dev_fd, buf, len,
+ int vfio_dev_fd = rte_intr_dev_fd_get(intr_handle);
+
+ return pread64(vfio_dev_fd, buf, len,
VFIO_GET_REGION_ADDR(VFIO_PCI_CONFIG_REGION_INDEX) + offs);
}
@@ -55,7 +57,9 @@ int
pci_vfio_write_config(const struct rte_intr_handle *intr_handle,
const void *buf, size_t len, off_t offs)
{
- return pwrite64(intr_handle->vfio_dev_fd, buf, len,
+ int vfio_dev_fd = rte_intr_dev_fd_get(intr_handle);
+
+ return pwrite64(vfio_dev_fd, buf, len,
VFIO_GET_REGION_ADDR(VFIO_PCI_CONFIG_REGION_INDEX) + offs);
}
@@ -281,21 +285,27 @@ pci_vfio_setup_interrupts(struct rte_pci_device *dev, int vfio_dev_fd)
return -1;
}
- dev->intr_handle.fd = fd;
- dev->intr_handle.vfio_dev_fd = vfio_dev_fd;
+ if (rte_intr_fd_set(dev->intr_handle, fd))
+ return -1;
+
+ if (rte_intr_dev_fd_set(dev->intr_handle, vfio_dev_fd))
+ return -1;
switch (i) {
case VFIO_PCI_MSIX_IRQ_INDEX:
intr_mode = RTE_INTR_MODE_MSIX;
- dev->intr_handle.type = RTE_INTR_HANDLE_VFIO_MSIX;
+ rte_intr_type_set(dev->intr_handle,
+ RTE_INTR_HANDLE_VFIO_MSIX);
break;
case VFIO_PCI_MSI_IRQ_INDEX:
intr_mode = RTE_INTR_MODE_MSI;
- dev->intr_handle.type = RTE_INTR_HANDLE_VFIO_MSI;
+ rte_intr_type_set(dev->intr_handle,
+ RTE_INTR_HANDLE_VFIO_MSI);
break;
case VFIO_PCI_INTX_IRQ_INDEX:
intr_mode = RTE_INTR_MODE_LEGACY;
- dev->intr_handle.type = RTE_INTR_HANDLE_VFIO_LEGACY;
+ rte_intr_type_set(dev->intr_handle,
+ RTE_INTR_HANDLE_VFIO_LEGACY);
break;
default:
RTE_LOG(ERR, EAL, "Unknown interrupt type!\n");
@@ -362,11 +372,16 @@ pci_vfio_enable_notifier(struct rte_pci_device *dev, int vfio_dev_fd)
return -1;
}
- dev->vfio_req_intr_handle.fd = fd;
- dev->vfio_req_intr_handle.type = RTE_INTR_HANDLE_VFIO_REQ;
- dev->vfio_req_intr_handle.vfio_dev_fd = vfio_dev_fd;
+ if (rte_intr_fd_set(dev->vfio_req_intr_handle, fd))
+ return -1;
+
+ if (rte_intr_type_set(dev->vfio_req_intr_handle, RTE_INTR_HANDLE_VFIO_REQ))
+ return -1;
+
+ if (rte_intr_dev_fd_set(dev->vfio_req_intr_handle, vfio_dev_fd))
+ return -1;
- ret = rte_intr_callback_register(&dev->vfio_req_intr_handle,
+ ret = rte_intr_callback_register(dev->vfio_req_intr_handle,
pci_vfio_req_handler,
(void *)&dev->device);
if (ret) {
@@ -374,10 +389,10 @@ pci_vfio_enable_notifier(struct rte_pci_device *dev, int vfio_dev_fd)
goto error;
}
- ret = rte_intr_enable(&dev->vfio_req_intr_handle);
+ ret = rte_intr_enable(dev->vfio_req_intr_handle);
if (ret) {
RTE_LOG(ERR, EAL, "Fail to enable req notifier.\n");
- ret = rte_intr_callback_unregister(&dev->vfio_req_intr_handle,
+ ret = rte_intr_callback_unregister(dev->vfio_req_intr_handle,
pci_vfio_req_handler,
(void *)&dev->device);
if (ret < 0)
@@ -390,9 +405,9 @@ pci_vfio_enable_notifier(struct rte_pci_device *dev, int vfio_dev_fd)
error:
close(fd);
- dev->vfio_req_intr_handle.fd = -1;
- dev->vfio_req_intr_handle.type = RTE_INTR_HANDLE_UNKNOWN;
- dev->vfio_req_intr_handle.vfio_dev_fd = -1;
+ rte_intr_fd_set(dev->vfio_req_intr_handle, -1);
+ rte_intr_type_set(dev->vfio_req_intr_handle, RTE_INTR_HANDLE_UNKNOWN);
+ rte_intr_dev_fd_set(dev->vfio_req_intr_handle, -1);
return -1;
}
@@ -403,13 +418,13 @@ pci_vfio_disable_notifier(struct rte_pci_device *dev)
{
int ret;
- ret = rte_intr_disable(&dev->vfio_req_intr_handle);
+ ret = rte_intr_disable(dev->vfio_req_intr_handle);
if (ret) {
RTE_LOG(ERR, EAL, "fail to disable req notifier.\n");
return -1;
}
- ret = rte_intr_callback_unregister_sync(&dev->vfio_req_intr_handle,
+ ret = rte_intr_callback_unregister_sync(dev->vfio_req_intr_handle,
pci_vfio_req_handler,
(void *)&dev->device);
if (ret < 0) {
@@ -418,11 +433,11 @@ pci_vfio_disable_notifier(struct rte_pci_device *dev)
return -1;
}
- close(dev->vfio_req_intr_handle.fd);
+ close(rte_intr_fd_get(dev->vfio_req_intr_handle));
- dev->vfio_req_intr_handle.fd = -1;
- dev->vfio_req_intr_handle.type = RTE_INTR_HANDLE_UNKNOWN;
- dev->vfio_req_intr_handle.vfio_dev_fd = -1;
+ rte_intr_fd_set(dev->vfio_req_intr_handle, -1);
+ rte_intr_type_set(dev->vfio_req_intr_handle, RTE_INTR_HANDLE_UNKNOWN);
+ rte_intr_dev_fd_set(dev->vfio_req_intr_handle, -1);
return 0;
}
@@ -705,9 +720,12 @@ pci_vfio_map_resource_primary(struct rte_pci_device *dev)
struct pci_map *maps;
- dev->intr_handle.fd = -1;
+ if (rte_intr_fd_set(dev->intr_handle, -1))
+ return -1;
+
#ifdef HAVE_VFIO_DEV_REQ_INTERFACE
- dev->vfio_req_intr_handle.fd = -1;
+ if (rte_intr_fd_set(dev->vfio_req_intr_handle, -1))
+ return -1;
#endif
/* store PCI address string */
@@ -854,9 +872,11 @@ pci_vfio_map_resource_secondary(struct rte_pci_device *dev)
struct pci_map *maps;
- dev->intr_handle.fd = -1;
+ if (rte_intr_fd_set(dev->intr_handle, -1))
+ return -1;
#ifdef HAVE_VFIO_DEV_REQ_INTERFACE
- dev->vfio_req_intr_handle.fd = -1;
+ if (rte_intr_fd_set(dev->vfio_req_intr_handle, -1))
+ return -1;
#endif
/* store PCI address string */
@@ -897,9 +917,11 @@ pci_vfio_map_resource_secondary(struct rte_pci_device *dev)
}
/* we need save vfio_dev_fd, so it can be used during release */
- dev->intr_handle.vfio_dev_fd = vfio_dev_fd;
+ if (rte_intr_dev_fd_set(dev->intr_handle, vfio_dev_fd))
+ goto err_vfio_dev_fd;
#ifdef HAVE_VFIO_DEV_REQ_INTERFACE
- dev->vfio_req_intr_handle.vfio_dev_fd = vfio_dev_fd;
+ if (rte_intr_dev_fd_set(dev->vfio_req_intr_handle, vfio_dev_fd))
+ goto err_vfio_dev_fd;
#endif
return 0;
@@ -968,7 +990,7 @@ pci_vfio_unmap_resource_primary(struct rte_pci_device *dev)
struct rte_pci_addr *loc = &dev->addr;
struct mapped_pci_resource *vfio_res = NULL;
struct mapped_pci_res_list *vfio_res_list;
- int ret;
+ int ret, vfio_dev_fd;
/* store PCI address string */
snprintf(pci_addr, sizeof(pci_addr), PCI_PRI_FMT,
@@ -982,20 +1004,21 @@ pci_vfio_unmap_resource_primary(struct rte_pci_device *dev)
}
#endif
- if (close(dev->intr_handle.fd) < 0) {
+ if (close(rte_intr_fd_get(dev->intr_handle)) < 0) {
RTE_LOG(INFO, EAL, "Error when closing eventfd file descriptor for %s\n",
pci_addr);
return -1;
}
- if (pci_vfio_set_bus_master(dev->intr_handle.vfio_dev_fd, false)) {
+ vfio_dev_fd = rte_intr_dev_fd_get(dev->intr_handle);
+ if (pci_vfio_set_bus_master(vfio_dev_fd, false)) {
RTE_LOG(ERR, EAL, "%s cannot unset bus mastering for PCI device!\n",
pci_addr);
return -1;
}
ret = rte_vfio_release_device(rte_pci_get_sysfs_path(), pci_addr,
- dev->intr_handle.vfio_dev_fd);
+ vfio_dev_fd);
if (ret < 0) {
RTE_LOG(ERR, EAL, "Cannot release VFIO device\n");
return ret;
@@ -1024,14 +1047,15 @@ pci_vfio_unmap_resource_secondary(struct rte_pci_device *dev)
struct rte_pci_addr *loc = &dev->addr;
struct mapped_pci_resource *vfio_res = NULL;
struct mapped_pci_res_list *vfio_res_list;
- int ret;
+ int ret, vfio_dev_fd;
/* store PCI address string */
snprintf(pci_addr, sizeof(pci_addr), PCI_PRI_FMT,
loc->domain, loc->bus, loc->devid, loc->function);
+ vfio_dev_fd = rte_intr_dev_fd_get(dev->intr_handle);
ret = rte_vfio_release_device(rte_pci_get_sysfs_path(), pci_addr,
- dev->intr_handle.vfio_dev_fd);
+ vfio_dev_fd);
if (ret < 0) {
RTE_LOG(ERR, EAL, "Cannot release VFIO device\n");
return ret;
@@ -1079,9 +1103,10 @@ void
pci_vfio_ioport_read(struct rte_pci_ioport *p,
void *data, size_t len, off_t offset)
{
- const struct rte_intr_handle *intr_handle = &p->dev->intr_handle;
+ const struct rte_intr_handle *intr_handle = p->dev->intr_handle;
+ int vfio_dev_fd = rte_intr_dev_fd_get(intr_handle);
- if (pread64(intr_handle->vfio_dev_fd, data,
+ if (pread64(vfio_dev_fd, data,
len, p->base + offset) <= 0)
RTE_LOG(ERR, EAL,
"Can't read from PCI bar (%" PRIu64 ") : offset (%x)\n",
@@ -1092,9 +1117,10 @@ void
pci_vfio_ioport_write(struct rte_pci_ioport *p,
const void *data, size_t len, off_t offset)
{
- const struct rte_intr_handle *intr_handle = &p->dev->intr_handle;
+ const struct rte_intr_handle *intr_handle = p->dev->intr_handle;
+ int vfio_dev_fd = rte_intr_dev_fd_get(intr_handle);
- if (pwrite64(intr_handle->vfio_dev_fd, data,
+ if (pwrite64(vfio_dev_fd, data,
len, p->base + offset) <= 0)
RTE_LOG(ERR, EAL,
"Can't write to PCI bar (%" PRIu64 ") : offset (%x)\n",
diff --git a/drivers/bus/pci/pci_common.c b/drivers/bus/pci/pci_common.c
index 3406e03b29..ee24bb66d4 100644
--- a/drivers/bus/pci/pci_common.c
+++ b/drivers/bus/pci/pci_common.c
@@ -230,6 +230,24 @@ rte_pci_probe_one_driver(struct rte_pci_driver *dr,
}
if (!already_probed && (dr->drv_flags & RTE_PCI_DRV_NEED_MAPPING)) {
+ /* Allocate interrupt instance for pci device */
+ dev->intr_handle =
+ rte_intr_instance_alloc(RTE_INTR_INSTANCE_F_PRIVATE);
+ if (dev->intr_handle == NULL) {
+ RTE_LOG(ERR, EAL,
+ "Failed to create interrupt instance for %s\n",
+ dev->device.name);
+ return -ENOMEM;
+ }
+
+ dev->vfio_req_intr_handle =
+ rte_intr_instance_alloc(RTE_INTR_INSTANCE_F_PRIVATE);
+ if (dev->vfio_req_intr_handle == NULL) {
+ RTE_LOG(ERR, EAL,
+ "Failed to create vfio req interrupt instance for %s\n",
+ dev->device.name);
+ return -ENOMEM;
+ }
/* map resources for devices that use igb_uio */
ret = rte_pci_map_device(dev);
if (ret != 0) {
@@ -253,8 +271,11 @@ rte_pci_probe_one_driver(struct rte_pci_driver *dr,
* driver needs mapped resources.
*/
!(ret > 0 &&
- (dr->drv_flags & RTE_PCI_DRV_KEEP_MAPPED_RES)))
+ (dr->drv_flags & RTE_PCI_DRV_KEEP_MAPPED_RES))) {
rte_pci_unmap_device(dev);
+ rte_intr_instance_free(dev->intr_handle);
+ rte_intr_instance_free(dev->vfio_req_intr_handle);
+ }
} else {
dev->device.driver = &dr->driver;
}
@@ -296,9 +317,12 @@ rte_pci_detach_dev(struct rte_pci_device *dev)
dev->driver = NULL;
dev->device.driver = NULL;
- if (dr->drv_flags & RTE_PCI_DRV_NEED_MAPPING)
+ if (dr->drv_flags & RTE_PCI_DRV_NEED_MAPPING) {
/* unmap resources for devices that use igb_uio */
rte_pci_unmap_device(dev);
+ rte_intr_instance_free(dev->intr_handle);
+ rte_intr_instance_free(dev->vfio_req_intr_handle);
+ }
return 0;
}
diff --git a/drivers/bus/pci/pci_common_uio.c b/drivers/bus/pci/pci_common_uio.c
index 318f9a1d55..244c9a8940 100644
--- a/drivers/bus/pci/pci_common_uio.c
+++ b/drivers/bus/pci/pci_common_uio.c
@@ -90,8 +90,11 @@ pci_uio_map_resource(struct rte_pci_device *dev)
struct mapped_pci_res_list *uio_res_list =
RTE_TAILQ_CAST(rte_uio_tailq.head, mapped_pci_res_list);
- dev->intr_handle.fd = -1;
- dev->intr_handle.uio_cfg_fd = -1;
+ if (rte_intr_fd_set(dev->intr_handle, -1))
+ return -1;
+
+ if (rte_intr_dev_fd_set(dev->intr_handle, -1))
+ return -1;
/* secondary processes - use already recorded details */
if (rte_eal_process_type() != RTE_PROC_PRIMARY)
@@ -207,6 +210,7 @@ pci_uio_unmap_resource(struct rte_pci_device *dev)
struct mapped_pci_resource *uio_res;
struct mapped_pci_res_list *uio_res_list =
RTE_TAILQ_CAST(rte_uio_tailq.head, mapped_pci_res_list);
+ int uio_cfg_fd;
if (dev == NULL)
return;
@@ -229,12 +233,13 @@ pci_uio_unmap_resource(struct rte_pci_device *dev)
rte_free(uio_res);
/* close fd if in primary process */
- close(dev->intr_handle.fd);
- if (dev->intr_handle.uio_cfg_fd >= 0) {
- close(dev->intr_handle.uio_cfg_fd);
- dev->intr_handle.uio_cfg_fd = -1;
+ close(rte_intr_fd_get(dev->intr_handle));
+ uio_cfg_fd = rte_intr_dev_fd_get(dev->intr_handle);
+ if (uio_cfg_fd >= 0) {
+ close(uio_cfg_fd);
+ rte_intr_dev_fd_set(dev->intr_handle, -1);
}
- dev->intr_handle.fd = -1;
- dev->intr_handle.type = RTE_INTR_HANDLE_UNKNOWN;
+ rte_intr_fd_set(dev->intr_handle, -1);
+ rte_intr_type_set(dev->intr_handle, RTE_INTR_HANDLE_UNKNOWN);
}
diff --git a/drivers/bus/pci/rte_bus_pci.h b/drivers/bus/pci/rte_bus_pci.h
index 673a2850c1..1c6a8fdd7b 100644
--- a/drivers/bus/pci/rte_bus_pci.h
+++ b/drivers/bus/pci/rte_bus_pci.h
@@ -69,12 +69,12 @@ struct rte_pci_device {
struct rte_pci_id id; /**< PCI ID. */
struct rte_mem_resource mem_resource[PCI_MAX_RESOURCE];
/**< PCI Memory Resource */
- struct rte_intr_handle intr_handle; /**< Interrupt handle */
+ struct rte_intr_handle *intr_handle; /**< Interrupt handle */
struct rte_pci_driver *driver; /**< PCI driver used in probing */
uint16_t max_vfs; /**< sriov enable if not zero */
enum rte_pci_kernel_driver kdrv; /**< Kernel driver passthrough */
char name[PCI_PRI_STR_SIZE+1]; /**< PCI location (ASCII) */
- struct rte_intr_handle vfio_req_intr_handle;
+ struct rte_intr_handle *vfio_req_intr_handle;
/**< Handler of VFIO request interrupt */
};
diff --git a/drivers/bus/vmbus/linux/vmbus_bus.c b/drivers/bus/vmbus/linux/vmbus_bus.c
index 68f6cc5742..f502783f7a 100644
--- a/drivers/bus/vmbus/linux/vmbus_bus.c
+++ b/drivers/bus/vmbus/linux/vmbus_bus.c
@@ -299,6 +299,12 @@ vmbus_scan_one(const char *name)
dev->device.devargs = vmbus_devargs_lookup(dev);
+ /* Allocate interrupt handle instance */
+ dev->intr_handle =
+ rte_intr_instance_alloc(RTE_INTR_INSTANCE_F_PRIVATE);
+ if (dev->intr_handle == NULL)
+ goto error;
+
/* device is valid, add in list (sorted) */
VMBUS_LOG(DEBUG, "Adding vmbus device %s", name);
diff --git a/drivers/bus/vmbus/linux/vmbus_uio.c b/drivers/bus/vmbus/linux/vmbus_uio.c
index 70b0d098e0..9c5c1aeca3 100644
--- a/drivers/bus/vmbus/linux/vmbus_uio.c
+++ b/drivers/bus/vmbus/linux/vmbus_uio.c
@@ -30,9 +30,11 @@ static void *vmbus_map_addr;
/* Control interrupts */
void vmbus_uio_irq_control(struct rte_vmbus_device *dev, int32_t onoff)
{
- if (write(dev->intr_handle.fd, &onoff, sizeof(onoff)) < 0) {
+ if (write(rte_intr_fd_get(dev->intr_handle), &onoff,
+ sizeof(onoff)) < 0) {
VMBUS_LOG(ERR, "cannot write to %d:%s",
- dev->intr_handle.fd, strerror(errno));
+ rte_intr_fd_get(dev->intr_handle),
+ strerror(errno));
}
}
@@ -41,7 +43,8 @@ int vmbus_uio_irq_read(struct rte_vmbus_device *dev)
int32_t count;
int cc;
- cc = read(dev->intr_handle.fd, &count, sizeof(count));
+ cc = read(rte_intr_fd_get(dev->intr_handle), &count,
+ sizeof(count));
if (cc < (int)sizeof(count)) {
if (cc < 0) {
VMBUS_LOG(ERR, "IRQ read failed %s",
@@ -61,15 +64,15 @@ vmbus_uio_free_resource(struct rte_vmbus_device *dev,
{
rte_free(uio_res);
- if (dev->intr_handle.uio_cfg_fd >= 0) {
- close(dev->intr_handle.uio_cfg_fd);
- dev->intr_handle.uio_cfg_fd = -1;
+ if (rte_intr_dev_fd_get(dev->intr_handle) >= 0) {
+ close(rte_intr_dev_fd_get(dev->intr_handle));
+ rte_intr_dev_fd_set(dev->intr_handle, -1);
}
- if (dev->intr_handle.fd >= 0) {
- close(dev->intr_handle.fd);
- dev->intr_handle.fd = -1;
- dev->intr_handle.type = RTE_INTR_HANDLE_UNKNOWN;
+ if (rte_intr_fd_get(dev->intr_handle) >= 0) {
+ close(rte_intr_fd_get(dev->intr_handle));
+ rte_intr_fd_set(dev->intr_handle, -1);
+ rte_intr_type_set(dev->intr_handle, RTE_INTR_HANDLE_UNKNOWN);
}
}
@@ -78,16 +81,22 @@ vmbus_uio_alloc_resource(struct rte_vmbus_device *dev,
struct mapped_vmbus_resource **uio_res)
{
char devname[PATH_MAX]; /* contains the /dev/uioX */
+ int fd;
/* save fd if in primary process */
snprintf(devname, sizeof(devname), "/dev/uio%u", dev->uio_num);
- dev->intr_handle.fd = open(devname, O_RDWR);
- if (dev->intr_handle.fd < 0) {
+ fd = open(devname, O_RDWR);
+ if (fd < 0) {
VMBUS_LOG(ERR, "Cannot open %s: %s",
devname, strerror(errno));
goto error;
}
- dev->intr_handle.type = RTE_INTR_HANDLE_UIO_INTX;
+
+ if (rte_intr_fd_set(dev->intr_handle, fd))
+ goto error;
+
+ if (rte_intr_type_set(dev->intr_handle, RTE_INTR_HANDLE_UIO_INTX))
+ goto error;
/* allocate the mapping details for secondary processes*/
*uio_res = rte_zmalloc("UIO_RES", sizeof(**uio_res), 0);
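
Not part of the patch: a sketch mirroring the uio open path above, assuming the new fd/type setters; my_uio_open() and the error value are illustrative only.

#include <errno.h>
#include <fcntl.h>
#include <unistd.h>
#include <rte_interrupts.h>

static int
my_uio_open(struct rte_intr_handle *intr_handle, const char *devname)
{
	/* Every former direct field write (fd, type) becomes a checked setter. */
	int fd = open(devname, O_RDWR);

	if (fd < 0)
		return -errno;

	if (rte_intr_fd_set(intr_handle, fd) ||
	    rte_intr_type_set(intr_handle, RTE_INTR_HANDLE_UIO_INTX)) {
		close(fd);
		return -EINVAL;
	}
	return 0;
}
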
diff --git a/drivers/bus/vmbus/rte_bus_vmbus.h b/drivers/bus/vmbus/rte_bus_vmbus.h
index 6bcff66468..466d42d277 100644
--- a/drivers/bus/vmbus/rte_bus_vmbus.h
+++ b/drivers/bus/vmbus/rte_bus_vmbus.h
@@ -73,7 +73,7 @@ struct rte_vmbus_device {
struct vmbus_channel *primary; /**< VMBUS primary channel */
struct vmbus_mon_page *monitor_page; /**< VMBUS monitor page */
- struct rte_intr_handle intr_handle; /**< Interrupt handle */
+ struct rte_intr_handle *intr_handle; /**< Interrupt handle */
struct rte_mem_resource resource[VMBUS_MAX_RESOURCE];
};
diff --git a/drivers/bus/vmbus/vmbus_common_uio.c b/drivers/bus/vmbus/vmbus_common_uio.c
index 041712fe75..336296d6a8 100644
--- a/drivers/bus/vmbus/vmbus_common_uio.c
+++ b/drivers/bus/vmbus/vmbus_common_uio.c
@@ -171,9 +171,14 @@ vmbus_uio_map_resource(struct rte_vmbus_device *dev)
int ret;
/* TODO: handle rescind */
- dev->intr_handle.fd = -1;
- dev->intr_handle.uio_cfg_fd = -1;
- dev->intr_handle.type = RTE_INTR_HANDLE_UNKNOWN;
+ if (rte_intr_fd_set(dev->intr_handle, -1))
+ return -1;
+
+ if (rte_intr_dev_fd_set(dev->intr_handle, -1))
+ return -1;
+
+ if (rte_intr_type_set(dev->intr_handle, RTE_INTR_HANDLE_UNKNOWN))
+ return -1;
/* secondary processes - use already recorded details */
if (rte_eal_process_type() != RTE_PROC_PRIMARY)
@@ -253,12 +258,12 @@ vmbus_uio_unmap_resource(struct rte_vmbus_device *dev)
rte_free(uio_res);
/* close fd if in primary process */
- close(dev->intr_handle.fd);
- if (dev->intr_handle.uio_cfg_fd >= 0) {
- close(dev->intr_handle.uio_cfg_fd);
- dev->intr_handle.uio_cfg_fd = -1;
+ close(rte_intr_fd_get(dev->intr_handle));
+ if (rte_intr_dev_fd_get(dev->intr_handle) >= 0) {
+ close(rte_intr_dev_fd_get(dev->intr_handle));
+ rte_intr_dev_fd_set(dev->intr_handle, -1);
}
- dev->intr_handle.fd = -1;
- dev->intr_handle.type = RTE_INTR_HANDLE_UNKNOWN;
+ rte_intr_fd_set(dev->intr_handle, -1);
+ rte_intr_type_set(dev->intr_handle, RTE_INTR_HANDLE_UNKNOWN);
}
diff --git a/drivers/common/cnxk/roc_cpt.c b/drivers/common/cnxk/roc_cpt.c
index 56744184ae..f0e52ae18f 100644
--- a/drivers/common/cnxk/roc_cpt.c
+++ b/drivers/common/cnxk/roc_cpt.c
@@ -65,7 +65,7 @@ cpt_lf_register_misc_irq(struct roc_cpt_lf *lf)
struct plt_intr_handle *handle;
int rc, vec;
- handle = &pci_dev->intr_handle;
+ handle = pci_dev->intr_handle;
vec = lf->msixoff + CPT_LF_INT_VEC_MISC;
/* Clear err interrupt */
@@ -85,7 +85,7 @@ cpt_lf_unregister_misc_irq(struct roc_cpt_lf *lf)
struct plt_intr_handle *handle;
int vec;
- handle = &pci_dev->intr_handle;
+ handle = pci_dev->intr_handle;
vec = lf->msixoff + CPT_LF_INT_VEC_MISC;
/* Clear err interrupt */
@@ -129,7 +129,7 @@ cpt_lf_register_done_irq(struct roc_cpt_lf *lf)
struct plt_intr_handle *handle;
int rc, vec;
- handle = &pci_dev->intr_handle;
+ handle = pci_dev->intr_handle;
vec = lf->msixoff + CPT_LF_INT_VEC_DONE;
@@ -152,7 +152,7 @@ cpt_lf_unregister_done_irq(struct roc_cpt_lf *lf)
struct plt_intr_handle *handle;
int vec;
- handle = &pci_dev->intr_handle;
+ handle = pci_dev->intr_handle;
vec = lf->msixoff + CPT_LF_INT_VEC_DONE;
diff --git a/drivers/common/cnxk/roc_dev.c b/drivers/common/cnxk/roc_dev.c
index ce6980cbe4..926a916e44 100644
--- a/drivers/common/cnxk/roc_dev.c
+++ b/drivers/common/cnxk/roc_dev.c
@@ -641,7 +641,7 @@ roc_af_pf_mbox_irq(void *param)
static int
mbox_register_pf_irq(struct plt_pci_device *pci_dev, struct dev *dev)
{
- struct plt_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct plt_intr_handle *intr_handle = pci_dev->intr_handle;
int i, rc;
/* HW clear irq */
@@ -691,7 +691,7 @@ mbox_register_pf_irq(struct plt_pci_device *pci_dev, struct dev *dev)
static int
mbox_register_vf_irq(struct plt_pci_device *pci_dev, struct dev *dev)
{
- struct plt_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct plt_intr_handle *intr_handle = pci_dev->intr_handle;
int rc;
/* Clear irq */
@@ -724,7 +724,7 @@ mbox_register_irq(struct plt_pci_device *pci_dev, struct dev *dev)
static void
mbox_unregister_pf_irq(struct plt_pci_device *pci_dev, struct dev *dev)
{
- struct plt_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct plt_intr_handle *intr_handle = pci_dev->intr_handle;
int i;
/* HW clear irq */
@@ -755,7 +755,7 @@ mbox_unregister_pf_irq(struct plt_pci_device *pci_dev, struct dev *dev)
static void
mbox_unregister_vf_irq(struct plt_pci_device *pci_dev, struct dev *dev)
{
- struct plt_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct plt_intr_handle *intr_handle = pci_dev->intr_handle;
/* Clear irq */
plt_write64(~0ull, dev->bar2 + RVU_VF_INT_ENA_W1C);
@@ -839,7 +839,7 @@ roc_pf_vf_flr_irq(void *param)
static int
vf_flr_unregister_irqs(struct plt_pci_device *pci_dev, struct dev *dev)
{
- struct plt_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct plt_intr_handle *intr_handle = pci_dev->intr_handle;
int i;
plt_base_dbg("Unregister VF FLR interrupts for %s", pci_dev->name);
@@ -860,7 +860,7 @@ vf_flr_unregister_irqs(struct plt_pci_device *pci_dev, struct dev *dev)
static int
vf_flr_register_irqs(struct plt_pci_device *pci_dev, struct dev *dev)
{
- struct plt_intr_handle *handle = &pci_dev->intr_handle;
+ struct plt_intr_handle *handle = pci_dev->intr_handle;
int i, rc;
plt_base_dbg("Register VF FLR interrupts for %s", pci_dev->name);
@@ -1211,7 +1211,7 @@ dev_init(struct dev *dev, struct plt_pci_device *pci_dev)
int
dev_fini(struct dev *dev, struct plt_pci_device *pci_dev)
{
- struct plt_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct plt_intr_handle *intr_handle = pci_dev->intr_handle;
struct mbox *mbox;
/* Check if this dev hosts npalf and has 1+ refs */
diff --git a/drivers/common/cnxk/roc_irq.c b/drivers/common/cnxk/roc_irq.c
index 28fe691932..c7549e2724 100644
--- a/drivers/common/cnxk/roc_irq.c
+++ b/drivers/common/cnxk/roc_irq.c
@@ -20,11 +20,12 @@ static int
irq_get_info(struct plt_intr_handle *intr_handle)
{
struct vfio_irq_info irq = {.argsz = sizeof(irq)};
- int rc;
+ int rc, vfio_dev_fd;
irq.index = VFIO_PCI_MSIX_IRQ_INDEX;
- rc = ioctl(intr_handle->vfio_dev_fd, VFIO_DEVICE_GET_IRQ_INFO, &irq);
+ vfio_dev_fd = plt_intr_dev_fd_get(intr_handle);
+ rc = ioctl(vfio_dev_fd, VFIO_DEVICE_GET_IRQ_INFO, &irq);
if (rc < 0) {
plt_err("Failed to get IRQ info rc=%d errno=%d", rc, errno);
return rc;
@@ -36,9 +37,10 @@ irq_get_info(struct plt_intr_handle *intr_handle)
if (irq.count > PLT_MAX_RXTX_INTR_VEC_ID) {
plt_err("HW max=%d > PLT_MAX_RXTX_INTR_VEC_ID: %d", irq.count,
PLT_MAX_RXTX_INTR_VEC_ID);
- intr_handle->max_intr = PLT_MAX_RXTX_INTR_VEC_ID;
+ plt_intr_max_intr_set(intr_handle, PLT_MAX_RXTX_INTR_VEC_ID);
} else {
- intr_handle->max_intr = irq.count;
+ if (plt_intr_max_intr_set(intr_handle, irq.count))
+ return -1;
}
return 0;
@@ -49,12 +51,12 @@ irq_config(struct plt_intr_handle *intr_handle, unsigned int vec)
{
char irq_set_buf[MSIX_IRQ_SET_BUF_LEN];
struct vfio_irq_set *irq_set;
+ int len, rc, vfio_dev_fd;
int32_t *fd_ptr;
- int len, rc;
- if (vec > intr_handle->max_intr) {
+ if (vec > (uint32_t)plt_intr_max_intr_get(intr_handle)) {
plt_err("vector=%d greater than max_intr=%d", vec,
- intr_handle->max_intr);
+ plt_intr_max_intr_get(intr_handle));
return -EINVAL;
}
@@ -71,9 +73,10 @@ irq_config(struct plt_intr_handle *intr_handle, unsigned int vec)
/* Use vec fd to set interrupt vectors */
fd_ptr = (int32_t *)&irq_set->data[0];
- fd_ptr[0] = intr_handle->efds[vec];
+ fd_ptr[0] = plt_intr_efds_index_get(intr_handle, vec);
- rc = ioctl(intr_handle->vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
+ vfio_dev_fd = plt_intr_dev_fd_get(intr_handle);
+ rc = ioctl(vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
if (rc)
plt_err("Failed to set_irqs vector=0x%x rc=%d", vec, rc);
@@ -85,23 +88,25 @@ irq_init(struct plt_intr_handle *intr_handle)
{
char irq_set_buf[MSIX_IRQ_SET_BUF_LEN];
struct vfio_irq_set *irq_set;
+ int len, rc, vfio_dev_fd;
int32_t *fd_ptr;
- int len, rc;
uint32_t i;
- if (intr_handle->max_intr > PLT_MAX_RXTX_INTR_VEC_ID) {
+ if (plt_intr_max_intr_get(intr_handle) >
+ PLT_MAX_RXTX_INTR_VEC_ID) {
plt_err("Max_intr=%d greater than PLT_MAX_RXTX_INTR_VEC_ID=%d",
- intr_handle->max_intr, PLT_MAX_RXTX_INTR_VEC_ID);
+ plt_intr_max_intr_get(intr_handle),
+ PLT_MAX_RXTX_INTR_VEC_ID);
return -ERANGE;
}
len = sizeof(struct vfio_irq_set) +
- sizeof(int32_t) * intr_handle->max_intr;
+ sizeof(int32_t) * plt_intr_max_intr_get(intr_handle);
irq_set = (struct vfio_irq_set *)irq_set_buf;
irq_set->argsz = len;
irq_set->start = 0;
- irq_set->count = intr_handle->max_intr;
+ irq_set->count = plt_intr_max_intr_get(intr_handle);
irq_set->flags =
VFIO_IRQ_SET_DATA_EVENTFD | VFIO_IRQ_SET_ACTION_TRIGGER;
irq_set->index = VFIO_PCI_MSIX_IRQ_INDEX;
@@ -110,7 +115,8 @@ irq_init(struct plt_intr_handle *intr_handle)
for (i = 0; i < irq_set->count; i++)
fd_ptr[i] = -1;
- rc = ioctl(intr_handle->vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
+ vfio_dev_fd = plt_intr_dev_fd_get(intr_handle);
+ rc = ioctl(vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
if (rc)
plt_err("Failed to set irqs vector rc=%d", rc);
@@ -121,7 +127,7 @@ int
dev_irqs_disable(struct plt_intr_handle *intr_handle)
{
/* Clear max_intr to indicate re-init next time */
- intr_handle->max_intr = 0;
+ plt_intr_max_intr_set(intr_handle, 0);
return plt_intr_disable(intr_handle);
}
@@ -129,43 +135,49 @@ int
dev_irq_register(struct plt_intr_handle *intr_handle, plt_intr_callback_fn cb,
void *data, unsigned int vec)
{
- struct plt_intr_handle tmp_handle;
- int rc;
+ struct plt_intr_handle *tmp_handle;
+ uint32_t nb_efd, tmp_nb_efd;
+ int rc, fd;
/* If no max_intr read from VFIO */
- if (intr_handle->max_intr == 0) {
+ if (plt_intr_max_intr_get(intr_handle) == 0) {
irq_get_info(intr_handle);
irq_init(intr_handle);
}
- if (vec > intr_handle->max_intr || vec >= PLT_DIM(intr_handle->efds)) {
- plt_err("Vector=%d greater than max_intr=%d or "
- "max_efd=%" PRIu64,
- vec, intr_handle->max_intr, PLT_DIM(intr_handle->efds));
+ if (vec > (uint32_t)plt_intr_max_intr_get(intr_handle)) {
+ plt_err("Vector=%d greater than max_intr=%d or ",
+ vec, plt_intr_max_intr_get(intr_handle));
return -EINVAL;
}
- tmp_handle = *intr_handle;
+ tmp_handle = intr_handle;
/* Create new eventfd for interrupt vector */
- tmp_handle.fd = eventfd(0, EFD_NONBLOCK | EFD_CLOEXEC);
- if (tmp_handle.fd == -1)
+ fd = eventfd(0, EFD_NONBLOCK | EFD_CLOEXEC);
+ if (fd == -1)
return -ENODEV;
+ if (plt_intr_fd_set(tmp_handle, fd))
+ return -errno;
+
/* Register vector interrupt callback */
- rc = plt_intr_callback_register(&tmp_handle, cb, data);
+ rc = plt_intr_callback_register(tmp_handle, cb, data);
if (rc) {
plt_err("Failed to register vector:0x%x irq callback.", vec);
return rc;
}
- intr_handle->efds[vec] = tmp_handle.fd;
- intr_handle->nb_efd =
- (vec > intr_handle->nb_efd) ? vec : intr_handle->nb_efd;
- if ((intr_handle->nb_efd + 1) > intr_handle->max_intr)
- intr_handle->max_intr = intr_handle->nb_efd + 1;
+ plt_intr_efds_index_set(intr_handle, vec, fd);
+ nb_efd = (vec > (uint32_t)plt_intr_nb_efd_get(intr_handle)) ?
+ vec : (uint32_t)plt_intr_nb_efd_get(intr_handle);
+ plt_intr_nb_efd_set(intr_handle, nb_efd);
+ tmp_nb_efd = plt_intr_nb_efd_get(intr_handle) + 1;
+ if (tmp_nb_efd > (uint32_t)plt_intr_max_intr_get(intr_handle))
+ plt_intr_max_intr_set(intr_handle, tmp_nb_efd);
plt_base_dbg("Enable vector:0x%x for vfio (efds: %d, max:%d)", vec,
- intr_handle->nb_efd, intr_handle->max_intr);
+ plt_intr_nb_efd_get(intr_handle),
+ plt_intr_max_intr_get(intr_handle));
/* Enable MSIX vectors to VFIO */
return irq_config(intr_handle, vec);
@@ -175,24 +187,27 @@ void
dev_irq_unregister(struct plt_intr_handle *intr_handle, plt_intr_callback_fn cb,
void *data, unsigned int vec)
{
- struct plt_intr_handle tmp_handle;
+ struct plt_intr_handle *tmp_handle;
uint8_t retries = 5; /* 5 ms */
- int rc;
+ int rc, fd;
- if (vec > intr_handle->max_intr) {
+ if (vec > (uint32_t)plt_intr_max_intr_get(intr_handle)) {
plt_err("Error unregistering MSI-X interrupts vec:%d > %d", vec,
- intr_handle->max_intr);
+ plt_intr_max_intr_get(intr_handle));
return;
}
- tmp_handle = *intr_handle;
- tmp_handle.fd = intr_handle->efds[vec];
- if (tmp_handle.fd == -1)
+ tmp_handle = intr_handle;
+ fd = plt_intr_efds_index_get(intr_handle, vec);
+ if (fd == -1)
+ return;
+
+ if (plt_intr_fd_set(tmp_handle, fd))
return;
do {
/* Un-register callback func from platform lib */
- rc = plt_intr_callback_unregister(&tmp_handle, cb, data);
+ rc = plt_intr_callback_unregister(tmp_handle, cb, data);
/* Retry only if -EAGAIN */
if (rc != -EAGAIN)
break;
@@ -206,12 +221,14 @@ dev_irq_unregister(struct plt_intr_handle *intr_handle, plt_intr_callback_fn cb,
}
plt_base_dbg("Disable vector:0x%x for vfio (efds: %d, max:%d)", vec,
- intr_handle->nb_efd, intr_handle->max_intr);
+ plt_intr_nb_efd_get(intr_handle),
+ plt_intr_max_intr_get(intr_handle));
- if (intr_handle->efds[vec] != -1)
- close(intr_handle->efds[vec]);
+ if (plt_intr_efds_index_get(intr_handle, vec) != -1)
+ close(plt_intr_efds_index_get(intr_handle, vec));
/* Disable MSIX vectors from VFIO */
- intr_handle->efds[vec] = -1;
+ plt_intr_efds_index_set(intr_handle, vec, -1);
+
irq_config(intr_handle, vec);
}
diff --git a/drivers/common/cnxk/roc_nix_inl_dev_irq.c b/drivers/common/cnxk/roc_nix_inl_dev_irq.c
index 25ed42f875..848523b010 100644
--- a/drivers/common/cnxk/roc_nix_inl_dev_irq.c
+++ b/drivers/common/cnxk/roc_nix_inl_dev_irq.c
@@ -99,7 +99,7 @@ nix_inl_sso_hws_irq(void *param)
int
nix_inl_sso_register_irqs(struct nix_inl_dev *inl_dev)
{
- struct plt_intr_handle *handle = &inl_dev->pci_dev->intr_handle;
+ struct plt_intr_handle *handle = inl_dev->pci_dev->intr_handle;
uintptr_t ssow_base = inl_dev->ssow_base;
uintptr_t sso_base = inl_dev->sso_base;
uint16_t sso_msixoff, ssow_msixoff;
@@ -147,7 +147,7 @@ nix_inl_sso_register_irqs(struct nix_inl_dev *inl_dev)
void
nix_inl_sso_unregister_irqs(struct nix_inl_dev *inl_dev)
{
- struct plt_intr_handle *handle = &inl_dev->pci_dev->intr_handle;
+ struct plt_intr_handle *handle = inl_dev->pci_dev->intr_handle;
uintptr_t ssow_base = inl_dev->ssow_base;
uintptr_t sso_base = inl_dev->sso_base;
uint16_t sso_msixoff, ssow_msixoff;
@@ -282,7 +282,7 @@ nix_inl_nix_err_irq(void *param)
int
nix_inl_nix_register_irqs(struct nix_inl_dev *inl_dev)
{
- struct plt_intr_handle *handle = &inl_dev->pci_dev->intr_handle;
+ struct plt_intr_handle *handle = inl_dev->pci_dev->intr_handle;
uintptr_t nix_base = inl_dev->nix_base;
uint16_t msixoff;
int rc;
@@ -331,7 +331,7 @@ nix_inl_nix_register_irqs(struct nix_inl_dev *inl_dev)
void
nix_inl_nix_unregister_irqs(struct nix_inl_dev *inl_dev)
{
- struct plt_intr_handle *handle = &inl_dev->pci_dev->intr_handle;
+ struct plt_intr_handle *handle = inl_dev->pci_dev->intr_handle;
uintptr_t nix_base = inl_dev->nix_base;
uint16_t msixoff;
diff --git a/drivers/common/cnxk/roc_nix_irq.c b/drivers/common/cnxk/roc_nix_irq.c
index 32be64a9d7..e9aa620abd 100644
--- a/drivers/common/cnxk/roc_nix_irq.c
+++ b/drivers/common/cnxk/roc_nix_irq.c
@@ -82,7 +82,7 @@ nix_lf_err_irq(void *param)
static int
nix_lf_register_err_irq(struct nix *nix)
{
- struct plt_intr_handle *handle = &nix->pci_dev->intr_handle;
+ struct plt_intr_handle *handle = nix->pci_dev->intr_handle;
int rc, vec;
vec = nix->msixoff + NIX_LF_INT_VEC_ERR_INT;
@@ -99,7 +99,7 @@ nix_lf_register_err_irq(struct nix *nix)
static void
nix_lf_unregister_err_irq(struct nix *nix)
{
- struct plt_intr_handle *handle = &nix->pci_dev->intr_handle;
+ struct plt_intr_handle *handle = nix->pci_dev->intr_handle;
int vec;
vec = nix->msixoff + NIX_LF_INT_VEC_ERR_INT;
@@ -131,7 +131,7 @@ nix_lf_ras_irq(void *param)
static int
nix_lf_register_ras_irq(struct nix *nix)
{
- struct plt_intr_handle *handle = &nix->pci_dev->intr_handle;
+ struct plt_intr_handle *handle = nix->pci_dev->intr_handle;
int rc, vec;
vec = nix->msixoff + NIX_LF_INT_VEC_POISON;
@@ -148,7 +148,7 @@ nix_lf_register_ras_irq(struct nix *nix)
static void
nix_lf_unregister_ras_irq(struct nix *nix)
{
- struct plt_intr_handle *handle = &nix->pci_dev->intr_handle;
+ struct plt_intr_handle *handle = nix->pci_dev->intr_handle;
int vec;
vec = nix->msixoff + NIX_LF_INT_VEC_POISON;
@@ -300,7 +300,7 @@ roc_nix_register_queue_irqs(struct roc_nix *roc_nix)
struct nix *nix;
nix = roc_nix_to_nix_priv(roc_nix);
- handle = &nix->pci_dev->intr_handle;
+ handle = nix->pci_dev->intr_handle;
/* Figure out max qintx required */
rqs = PLT_MIN(nix->qints, nix->nb_rx_queues);
@@ -352,7 +352,7 @@ roc_nix_unregister_queue_irqs(struct roc_nix *roc_nix)
int vec, q;
nix = roc_nix_to_nix_priv(roc_nix);
- handle = &nix->pci_dev->intr_handle;
+ handle = nix->pci_dev->intr_handle;
for (q = 0; q < nix->configured_qints; q++) {
vec = nix->msixoff + NIX_LF_INT_VEC_QINT_START + q;
@@ -382,7 +382,7 @@ roc_nix_register_cq_irqs(struct roc_nix *roc_nix)
struct nix *nix;
nix = roc_nix_to_nix_priv(roc_nix);
- handle = &nix->pci_dev->intr_handle;
+ handle = nix->pci_dev->intr_handle;
nix->configured_cints = PLT_MIN(nix->cints, nix->nb_rx_queues);
@@ -414,19 +414,19 @@ roc_nix_register_cq_irqs(struct roc_nix *roc_nix)
return rc;
}
- if (!handle->intr_vec) {
- handle->intr_vec = plt_zmalloc(
- nix->configured_cints * sizeof(int), 0);
- if (!handle->intr_vec) {
- plt_err("Failed to allocate %d rx intr_vec",
- nix->configured_cints);
- return -ENOMEM;
- }
+ rc = plt_intr_vec_list_alloc(handle, "cnxk",
+ nix->configured_cints);
+ if (rc) {
+ plt_err("Fail to allocate intr vec list, rc=%d",
+ rc);
+ return rc;
}
/* VFIO vector zero is resereved for misc interrupt so
* doing required adjustment. (b13bfab4cd)
*/
- handle->intr_vec[q] = PLT_INTR_VEC_RXTX_OFFSET + vec;
+ if (plt_intr_vec_list_index_set(handle, q,
+ PLT_INTR_VEC_RXTX_OFFSET + vec))
+ return -1;
/* Configure CQE interrupt coalescing parameters */
plt_write64(((CQ_CQE_THRESH_DEFAULT) |
@@ -450,7 +450,7 @@ roc_nix_unregister_cq_irqs(struct roc_nix *roc_nix)
int vec, q;
nix = roc_nix_to_nix_priv(roc_nix);
- handle = &nix->pci_dev->intr_handle;
+ handle = nix->pci_dev->intr_handle;
for (q = 0; q < nix->configured_cints; q++) {
vec = nix->msixoff + NIX_LF_INT_VEC_CINT_START + q;
@@ -465,6 +465,8 @@ roc_nix_unregister_cq_irqs(struct roc_nix *roc_nix)
dev_irq_unregister(handle, nix_lf_cq_irq, &nix->cints_mem[q],
vec);
}
+
+ plt_intr_vec_list_free(handle);
plt_free(nix->cints_mem);
}
diff --git a/drivers/common/cnxk/roc_npa.c b/drivers/common/cnxk/roc_npa.c
index a0d2cc8f19..664240ab42 100644
--- a/drivers/common/cnxk/roc_npa.c
+++ b/drivers/common/cnxk/roc_npa.c
@@ -710,7 +710,7 @@ npa_lf_init(struct dev *dev, struct plt_pci_device *pci_dev)
lf->pf_func = dev->pf_func;
lf->npa_msixoff = npa_msixoff;
- lf->intr_handle = &pci_dev->intr_handle;
+ lf->intr_handle = pci_dev->intr_handle;
lf->pci_dev = pci_dev;
idev->npa_pf_func = dev->pf_func;
diff --git a/drivers/common/cnxk/roc_platform.h b/drivers/common/cnxk/roc_platform.h
index a0f01797f1..60227b72d0 100644
--- a/drivers/common/cnxk/roc_platform.h
+++ b/drivers/common/cnxk/roc_platform.h
@@ -106,6 +106,32 @@
#define plt_thread_is_intr rte_thread_is_intr
#define plt_intr_callback_fn rte_intr_callback_fn
+#define plt_intr_efd_counter_size_get rte_intr_efd_counter_size_get
+#define plt_intr_efd_counter_size_set rte_intr_efd_counter_size_set
+#define plt_intr_vec_list_index_get rte_intr_vec_list_index_get
+#define plt_intr_vec_list_index_set rte_intr_vec_list_index_set
+#define plt_intr_vec_list_alloc rte_intr_vec_list_alloc
+#define plt_intr_vec_list_free rte_intr_vec_list_free
+#define plt_intr_fd_set rte_intr_fd_set
+#define plt_intr_fd_get rte_intr_fd_get
+#define plt_intr_dev_fd_get rte_intr_dev_fd_get
+#define plt_intr_dev_fd_set rte_intr_dev_fd_set
+#define plt_intr_type_get rte_intr_type_get
+#define plt_intr_type_set rte_intr_type_set
+#define plt_intr_instance_alloc rte_intr_instance_alloc
+#define plt_intr_instance_dup rte_intr_instance_dup
+#define plt_intr_instance_free rte_intr_instance_free
+#define plt_intr_max_intr_get rte_intr_max_intr_get
+#define plt_intr_max_intr_set rte_intr_max_intr_set
+#define plt_intr_nb_efd_get rte_intr_nb_efd_get
+#define plt_intr_nb_efd_set rte_intr_nb_efd_set
+#define plt_intr_nb_intr_get rte_intr_nb_intr_get
+#define plt_intr_nb_intr_set rte_intr_nb_intr_set
+#define plt_intr_efds_index_get rte_intr_efds_index_get
+#define plt_intr_efds_index_set rte_intr_efds_index_set
+#define plt_intr_elist_index_get rte_intr_elist_index_get
+#define plt_intr_elist_index_set rte_intr_elist_index_set
+
#define plt_alarm_set rte_eal_alarm_set
#define plt_alarm_cancel rte_eal_alarm_cancel
@@ -183,7 +209,7 @@ extern int cnxk_logtype_tm;
#define plt_dbg(subsystem, fmt, args...) \
rte_log(RTE_LOG_DEBUG, cnxk_logtype_##subsystem, \
"[%s] %s():%u " fmt "\n", #subsystem, __func__, __LINE__, \
- ##args)
+##args)
#define plt_base_dbg(fmt, ...) plt_dbg(base, fmt, ##__VA_ARGS__)
#define plt_cpt_dbg(fmt, ...) plt_dbg(cpt, fmt, ##__VA_ARGS__)
@@ -203,18 +229,18 @@ extern int cnxk_logtype_tm;
#ifdef __cplusplus
#define CNXK_PCI_ID(subsystem_dev, dev) \
- { \
- RTE_CLASS_ANY_ID, PCI_VENDOR_ID_CAVIUM, (dev), RTE_PCI_ANY_ID, \
- (subsystem_dev), \
- }
+{ \
+ RTE_CLASS_ANY_ID, PCI_VENDOR_ID_CAVIUM, (dev), RTE_PCI_ANY_ID, \
+ (subsystem_dev), \
+}
#else
#define CNXK_PCI_ID(subsystem_dev, dev) \
- { \
- .class_id = RTE_CLASS_ANY_ID, \
- .vendor_id = PCI_VENDOR_ID_CAVIUM, .device_id = (dev), \
- .subsystem_vendor_id = RTE_PCI_ANY_ID, \
- .subsystem_device_id = (subsystem_dev), \
- }
+{ \
+ .class_id = RTE_CLASS_ANY_ID, \
+ .vendor_id = PCI_VENDOR_ID_CAVIUM, .device_id = (dev), \
+ .subsystem_vendor_id = RTE_PCI_ANY_ID, \
+ .subsystem_device_id = (subsystem_dev), \
+}
#endif
__rte_internal
diff --git a/drivers/common/cnxk/roc_sso.c b/drivers/common/cnxk/roc_sso.c
index bdf973fc2a..762893f3dc 100644
--- a/drivers/common/cnxk/roc_sso.c
+++ b/drivers/common/cnxk/roc_sso.c
@@ -505,7 +505,7 @@ roc_sso_rsrc_init(struct roc_sso *roc_sso, uint8_t nb_hws, uint16_t nb_hwgrp)
goto sso_msix_fail;
}
- rc = sso_register_irqs_priv(roc_sso, &sso->pci_dev->intr_handle, nb_hws,
+ rc = sso_register_irqs_priv(roc_sso, sso->pci_dev->intr_handle, nb_hws,
nb_hwgrp);
if (rc < 0) {
plt_err("Failed to register SSO LF IRQs");
@@ -535,7 +535,7 @@ roc_sso_rsrc_fini(struct roc_sso *roc_sso)
if (!roc_sso->nb_hws && !roc_sso->nb_hwgrp)
return;
- sso_unregister_irqs_priv(roc_sso, &sso->pci_dev->intr_handle,
+ sso_unregister_irqs_priv(roc_sso, sso->pci_dev->intr_handle,
roc_sso->nb_hws, roc_sso->nb_hwgrp);
sso_lf_free(&sso->dev, SSO_LF_TYPE_HWS, roc_sso->nb_hws);
sso_lf_free(&sso->dev, SSO_LF_TYPE_HWGRP, roc_sso->nb_hwgrp);
diff --git a/drivers/common/cnxk/roc_tim.c b/drivers/common/cnxk/roc_tim.c
index 387164bb1d..534b697bee 100644
--- a/drivers/common/cnxk/roc_tim.c
+++ b/drivers/common/cnxk/roc_tim.c
@@ -200,7 +200,7 @@ roc_tim_lf_alloc(struct roc_tim *roc_tim, uint8_t ring_id, uint64_t *clk)
if (clk)
*clk = rsp->tenns_clk;
- rc = tim_register_irq_priv(roc_tim, &sso->pci_dev->intr_handle, ring_id,
+ rc = tim_register_irq_priv(roc_tim, sso->pci_dev->intr_handle, ring_id,
tim->tim_msix_offsets[ring_id]);
if (rc < 0) {
plt_tim_dbg("Failed to register Ring[%d] IRQ", ring_id);
@@ -223,7 +223,7 @@ roc_tim_lf_free(struct roc_tim *roc_tim, uint8_t ring_id)
struct tim_ring_req *req;
int rc = -ENOSPC;
- tim_unregister_irq_priv(roc_tim, &sso->pci_dev->intr_handle, ring_id,
+ tim_unregister_irq_priv(roc_tim, sso->pci_dev->intr_handle, ring_id,
tim->tim_msix_offsets[ring_id]);
req = mbox_alloc_msg_tim_lf_free(dev->mbox);
diff --git a/drivers/common/octeontx2/otx2_dev.c b/drivers/common/octeontx2/otx2_dev.c
index ce4f0e7ca9..08dca87848 100644
--- a/drivers/common/octeontx2/otx2_dev.c
+++ b/drivers/common/octeontx2/otx2_dev.c
@@ -643,7 +643,7 @@ otx2_af_pf_mbox_irq(void *param)
static int
mbox_register_pf_irq(struct rte_pci_device *pci_dev, struct otx2_dev *dev)
{
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
int i, rc;
/* HW clear irq */
@@ -693,7 +693,7 @@ mbox_register_pf_irq(struct rte_pci_device *pci_dev, struct otx2_dev *dev)
static int
mbox_register_vf_irq(struct rte_pci_device *pci_dev, struct otx2_dev *dev)
{
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
int rc;
/* Clear irq */
@@ -726,7 +726,7 @@ mbox_register_irq(struct rte_pci_device *pci_dev, struct otx2_dev *dev)
static void
mbox_unregister_pf_irq(struct rte_pci_device *pci_dev, struct otx2_dev *dev)
{
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
int i;
/* HW clear irq */
@@ -758,7 +758,7 @@ mbox_unregister_pf_irq(struct rte_pci_device *pci_dev, struct otx2_dev *dev)
static void
mbox_unregister_vf_irq(struct rte_pci_device *pci_dev, struct otx2_dev *dev)
{
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
/* Clear irq */
otx2_write64(~0ull, dev->bar2 + RVU_VF_INT_ENA_W1C);
@@ -841,7 +841,7 @@ otx2_pf_vf_flr_irq(void *param)
static int
vf_flr_unregister_irqs(struct rte_pci_device *pci_dev, struct otx2_dev *dev)
{
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
int i;
otx2_base_dbg("Unregister VF FLR interrupts for %s", pci_dev->name);
@@ -862,7 +862,7 @@ vf_flr_unregister_irqs(struct rte_pci_device *pci_dev, struct otx2_dev *dev)
static int
vf_flr_register_irqs(struct rte_pci_device *pci_dev, struct otx2_dev *dev)
{
- struct rte_intr_handle *handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *handle = pci_dev->intr_handle;
int i, rc;
otx2_base_dbg("Register VF FLR interrupts for %s", pci_dev->name);
@@ -1039,7 +1039,7 @@ otx2_dev_priv_init(struct rte_pci_device *pci_dev, void *otx2_dev)
void
otx2_dev_fini(struct rte_pci_device *pci_dev, void *otx2_dev)
{
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
struct otx2_dev *dev = otx2_dev;
struct otx2_idev_cfg *idev;
struct otx2_mbox *mbox;
diff --git a/drivers/common/octeontx2/otx2_irq.c b/drivers/common/octeontx2/otx2_irq.c
index c0137ff36d..93fc95c0e1 100644
--- a/drivers/common/octeontx2/otx2_irq.c
+++ b/drivers/common/octeontx2/otx2_irq.c
@@ -26,11 +26,12 @@ static int
irq_get_info(struct rte_intr_handle *intr_handle)
{
struct vfio_irq_info irq = { .argsz = sizeof(irq) };
- int rc;
+ int rc, vfio_dev_fd;
irq.index = VFIO_PCI_MSIX_IRQ_INDEX;
- rc = ioctl(intr_handle->vfio_dev_fd, VFIO_DEVICE_GET_IRQ_INFO, &irq);
+ vfio_dev_fd = rte_intr_dev_fd_get(intr_handle);
+ rc = ioctl(vfio_dev_fd, VFIO_DEVICE_GET_IRQ_INFO, &irq);
if (rc < 0) {
otx2_err("Failed to get IRQ info rc=%d errno=%d", rc, errno);
return rc;
@@ -41,10 +42,13 @@ irq_get_info(struct rte_intr_handle *intr_handle)
if (irq.count > MAX_INTR_VEC_ID) {
otx2_err("HW max=%d > MAX_INTR_VEC_ID: %d",
- intr_handle->max_intr, MAX_INTR_VEC_ID);
- intr_handle->max_intr = MAX_INTR_VEC_ID;
+ rte_intr_max_intr_get(intr_handle),
+ MAX_INTR_VEC_ID);
+ if (rte_intr_max_intr_set(intr_handle, MAX_INTR_VEC_ID))
+ return -1;
} else {
- intr_handle->max_intr = irq.count;
+ if (rte_intr_max_intr_set(intr_handle, irq.count))
+ return -1;
}
return 0;
@@ -55,12 +59,12 @@ irq_config(struct rte_intr_handle *intr_handle, unsigned int vec)
{
char irq_set_buf[MSIX_IRQ_SET_BUF_LEN];
struct vfio_irq_set *irq_set;
+ int len, rc, vfio_dev_fd;
int32_t *fd_ptr;
- int len, rc;
- if (vec > intr_handle->max_intr) {
+ if (vec > (uint32_t)rte_intr_max_intr_get(intr_handle)) {
otx2_err("vector=%d greater than max_intr=%d", vec,
- intr_handle->max_intr);
+ rte_intr_max_intr_get(intr_handle));
return -EINVAL;
}
@@ -77,9 +81,10 @@ irq_config(struct rte_intr_handle *intr_handle, unsigned int vec)
/* Use vec fd to set interrupt vectors */
fd_ptr = (int32_t *)&irq_set->data[0];
- fd_ptr[0] = intr_handle->efds[vec];
+ fd_ptr[0] = rte_intr_efds_index_get(intr_handle, vec);
- rc = ioctl(intr_handle->vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
+ vfio_dev_fd = rte_intr_dev_fd_get(intr_handle);
+ rc = ioctl(vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
if (rc)
otx2_err("Failed to set_irqs vector=0x%x rc=%d", vec, rc);
@@ -91,23 +96,24 @@ irq_init(struct rte_intr_handle *intr_handle)
{
char irq_set_buf[MSIX_IRQ_SET_BUF_LEN];
struct vfio_irq_set *irq_set;
+ int len, rc, vfio_dev_fd;
int32_t *fd_ptr;
- int len, rc;
uint32_t i;
- if (intr_handle->max_intr > MAX_INTR_VEC_ID) {
+ if (rte_intr_max_intr_get(intr_handle) > MAX_INTR_VEC_ID) {
otx2_err("Max_intr=%d greater than MAX_INTR_VEC_ID=%d",
- intr_handle->max_intr, MAX_INTR_VEC_ID);
+ rte_intr_max_intr_get(intr_handle),
+ MAX_INTR_VEC_ID);
return -ERANGE;
}
len = sizeof(struct vfio_irq_set) +
- sizeof(int32_t) * intr_handle->max_intr;
+ sizeof(int32_t) * rte_intr_max_intr_get(intr_handle);
irq_set = (struct vfio_irq_set *)irq_set_buf;
irq_set->argsz = len;
irq_set->start = 0;
- irq_set->count = intr_handle->max_intr;
+ irq_set->count = rte_intr_max_intr_get(intr_handle);
irq_set->flags = VFIO_IRQ_SET_DATA_EVENTFD |
VFIO_IRQ_SET_ACTION_TRIGGER;
irq_set->index = VFIO_PCI_MSIX_IRQ_INDEX;
@@ -116,7 +122,8 @@ irq_init(struct rte_intr_handle *intr_handle)
for (i = 0; i < irq_set->count; i++)
fd_ptr[i] = -1;
- rc = ioctl(intr_handle->vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
+ vfio_dev_fd = rte_intr_dev_fd_get(intr_handle);
+ rc = ioctl(vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
if (rc)
otx2_err("Failed to set irqs vector rc=%d", rc);
@@ -131,7 +138,8 @@ int
otx2_disable_irqs(struct rte_intr_handle *intr_handle)
{
/* Clear max_intr to indicate re-init next time */
- intr_handle->max_intr = 0;
+ if (rte_intr_max_intr_set(intr_handle, 0))
+ return -1;
return rte_intr_disable(intr_handle);
}
@@ -143,42 +151,50 @@ int
otx2_register_irq(struct rte_intr_handle *intr_handle,
rte_intr_callback_fn cb, void *data, unsigned int vec)
{
- struct rte_intr_handle tmp_handle;
- int rc;
+ struct rte_intr_handle *tmp_handle;
+ uint32_t nb_efd, tmp_nb_efd;
+ int rc, fd;
/* If no max_intr read from VFIO */
- if (intr_handle->max_intr == 0) {
+ if (rte_intr_max_intr_get(intr_handle) == 0) {
irq_get_info(intr_handle);
irq_init(intr_handle);
}
- if (vec > intr_handle->max_intr) {
+ if (vec > (uint32_t)rte_intr_max_intr_get(intr_handle)) {
otx2_err("Vector=%d greater than max_intr=%d", vec,
- intr_handle->max_intr);
+ rte_intr_max_intr_get(intr_handle));
return -EINVAL;
}
- tmp_handle = *intr_handle;
+ tmp_handle = intr_handle;
/* Create new eventfd for interrupt vector */
- tmp_handle.fd = eventfd(0, EFD_NONBLOCK | EFD_CLOEXEC);
- if (tmp_handle.fd == -1)
+ fd = eventfd(0, EFD_NONBLOCK | EFD_CLOEXEC);
+ if (fd == -1)
return -ENODEV;
+ if (rte_intr_fd_set(tmp_handle, fd))
+ return -errno;
+
/* Register vector interrupt callback */
- rc = rte_intr_callback_register(&tmp_handle, cb, data);
+ rc = rte_intr_callback_register(tmp_handle, cb, data);
if (rc) {
otx2_err("Failed to register vector:0x%x irq callback.", vec);
return rc;
}
- intr_handle->efds[vec] = tmp_handle.fd;
- intr_handle->nb_efd = (vec > intr_handle->nb_efd) ?
- vec : intr_handle->nb_efd;
- if ((intr_handle->nb_efd + 1) > intr_handle->max_intr)
- intr_handle->max_intr = intr_handle->nb_efd + 1;
+ rte_intr_efds_index_set(intr_handle, vec, fd);
+ nb_efd = (vec > (uint32_t)rte_intr_nb_efd_get(intr_handle)) ?
+ vec : (uint32_t)rte_intr_nb_efd_get(intr_handle);
+ rte_intr_nb_efd_set(intr_handle, nb_efd);
+
+ tmp_nb_efd = rte_intr_nb_efd_get(intr_handle) + 1;
+ if (tmp_nb_efd > (uint32_t)rte_intr_max_intr_get(intr_handle))
+ rte_intr_max_intr_set(intr_handle, tmp_nb_efd);
- otx2_base_dbg("Enable vector:0x%x for vfio (efds: %d, max:%d)",
- vec, intr_handle->nb_efd, intr_handle->max_intr);
+ otx2_base_dbg("Enable vector:0x%x for vfio (efds: %d, max:%d)", vec,
+ rte_intr_nb_efd_get(intr_handle),
+ rte_intr_max_intr_get(intr_handle));
/* Enable MSIX vectors to VFIO */
return irq_config(intr_handle, vec);
@@ -192,24 +208,27 @@ void
otx2_unregister_irq(struct rte_intr_handle *intr_handle,
rte_intr_callback_fn cb, void *data, unsigned int vec)
{
- struct rte_intr_handle tmp_handle;
+ struct rte_intr_handle *tmp_handle;
uint8_t retries = 5; /* 5 ms */
- int rc;
+ int rc, fd;
- if (vec > intr_handle->max_intr) {
+ if (vec > (uint32_t)rte_intr_max_intr_get(intr_handle)) {
otx2_err("Error unregistering MSI-X interrupts vec:%d > %d",
- vec, intr_handle->max_intr);
+ vec, rte_intr_max_intr_get(intr_handle));
return;
}
- tmp_handle = *intr_handle;
- tmp_handle.fd = intr_handle->efds[vec];
- if (tmp_handle.fd == -1)
+ tmp_handle = intr_handle;
+ fd = rte_intr_efds_index_get(intr_handle, vec);
+ if (fd == -1)
+ return;
+
+ if (rte_intr_fd_set(tmp_handle, fd))
return;
do {
- /* Un-register callback func from eal lib */
- rc = rte_intr_callback_unregister(&tmp_handle, cb, data);
+ /* Un-register callback func from platform lib */
+ rc = rte_intr_callback_unregister(tmp_handle, cb, data);
/* Retry only if -EAGAIN */
if (rc != -EAGAIN)
break;
@@ -218,18 +237,18 @@ otx2_unregister_irq(struct rte_intr_handle *intr_handle,
} while (retries);
if (rc < 0) {
- otx2_err("Error unregistering MSI-X intr vec %d cb, rc=%d",
- vec, rc);
+ otx2_err("Error unregistering MSI-X vec %d cb, rc=%d", vec, rc);
return;
}
- otx2_base_dbg("Disable vector:0x%x for vfio (efds: %d, max:%d)",
- vec, intr_handle->nb_efd, intr_handle->max_intr);
+ otx2_base_dbg("Disable vector:0x%x for vfio (efds: %d, max:%d)", vec,
+ rte_intr_nb_efd_get(intr_handle),
+ rte_intr_max_intr_get(intr_handle));
- if (intr_handle->efds[vec] != -1)
- close(intr_handle->efds[vec]);
+ if (rte_intr_efds_index_get(intr_handle, vec) != -1)
+ close(rte_intr_efds_index_get(intr_handle, vec));
/* Disable MSIX vectors from VFIO */
- intr_handle->efds[vec] = -1;
+ rte_intr_efds_index_set(intr_handle, vec, -1);
irq_config(intr_handle, vec);
}
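
Not part of the patch: a sketch of the per-vector eventfd bookkeeping done in dev_irq_register()/otx2_register_irq() above, written with the rte_ accessors that replace the removed efds[]/nb_efd/max_intr fields. dummy_vec_record() is a placeholder name and error handling is abbreviated.

#include <errno.h>
#include <stdint.h>
#include <sys/eventfd.h>
#include <rte_common.h>
#include <rte_interrupts.h>

static int
dummy_vec_record(struct rte_intr_handle *intr_handle, unsigned int vec)
{
	uint32_t nb_efd;
	int rc, fd;

	/* One eventfd per MSI-X vector, recorded through the setters. */
	fd = eventfd(0, EFD_NONBLOCK | EFD_CLOEXEC);
	if (fd == -1)
		return -ENODEV;

	rc = rte_intr_efds_index_set(intr_handle, vec, fd);
	if (rc)
		return rc;

	/* Track the highest vector seen and grow max_intr to match. */
	nb_efd = RTE_MAX(vec, (uint32_t)rte_intr_nb_efd_get(intr_handle));
	rc = rte_intr_nb_efd_set(intr_handle, nb_efd);
	if (rc)
		return rc;

	if (nb_efd + 1 > (uint32_t)rte_intr_max_intr_get(intr_handle))
		rc = rte_intr_max_intr_set(intr_handle, nb_efd + 1);

	return rc;
}
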
diff --git a/drivers/crypto/octeontx2/otx2_cryptodev_hw_access.c b/drivers/crypto/octeontx2/otx2_cryptodev_hw_access.c
index bf90d095fe..d5d6b5bad7 100644
--- a/drivers/crypto/octeontx2/otx2_cryptodev_hw_access.c
+++ b/drivers/crypto/octeontx2/otx2_cryptodev_hw_access.c
@@ -36,7 +36,7 @@ otx2_cpt_lf_err_intr_unregister(const struct rte_cryptodev *dev,
uint16_t msix_off, uintptr_t base)
{
struct rte_pci_device *pci_dev = RTE_DEV_TO_PCI(dev->device);
- struct rte_intr_handle *handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *handle = pci_dev->intr_handle;
/* Disable error interrupts */
otx2_write64(~0ull, base + OTX2_CPT_LF_MISC_INT_ENA_W1C);
@@ -65,7 +65,7 @@ otx2_cpt_lf_err_intr_register(const struct rte_cryptodev *dev,
uint16_t msix_off, uintptr_t base)
{
struct rte_pci_device *pci_dev = RTE_DEV_TO_PCI(dev->device);
- struct rte_intr_handle *handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *handle = pci_dev->intr_handle;
int ret;
/* Disable error interrupts */
diff --git a/drivers/event/octeontx2/otx2_evdev_irq.c b/drivers/event/octeontx2/otx2_evdev_irq.c
index a2033646e6..9b7ad27b04 100644
--- a/drivers/event/octeontx2/otx2_evdev_irq.c
+++ b/drivers/event/octeontx2/otx2_evdev_irq.c
@@ -29,7 +29,7 @@ sso_lf_register_irq(const struct rte_eventdev *event_dev, uint16_t ggrp_msixoff,
uintptr_t base)
{
struct rte_pci_device *pci_dev = RTE_DEV_TO_PCI(event_dev->dev);
- struct rte_intr_handle *handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *handle = pci_dev->intr_handle;
int rc, vec;
vec = ggrp_msixoff + SSO_LF_INT_VEC_GRP;
@@ -66,7 +66,7 @@ ssow_lf_register_irq(const struct rte_eventdev *event_dev, uint16_t gws_msixoff,
uintptr_t base)
{
struct rte_pci_device *pci_dev = RTE_DEV_TO_PCI(event_dev->dev);
- struct rte_intr_handle *handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *handle = pci_dev->intr_handle;
int rc, vec;
vec = gws_msixoff + SSOW_LF_INT_VEC_IOP;
@@ -86,7 +86,7 @@ sso_lf_unregister_irq(const struct rte_eventdev *event_dev,
uint16_t ggrp_msixoff, uintptr_t base)
{
struct rte_pci_device *pci_dev = RTE_DEV_TO_PCI(event_dev->dev);
- struct rte_intr_handle *handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *handle = pci_dev->intr_handle;
int vec;
vec = ggrp_msixoff + SSO_LF_INT_VEC_GRP;
@@ -101,7 +101,7 @@ ssow_lf_unregister_irq(const struct rte_eventdev *event_dev,
uint16_t gws_msixoff, uintptr_t base)
{
struct rte_pci_device *pci_dev = RTE_DEV_TO_PCI(event_dev->dev);
- struct rte_intr_handle *handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *handle = pci_dev->intr_handle;
int vec;
vec = gws_msixoff + SSOW_LF_INT_VEC_IOP;
@@ -198,7 +198,7 @@ static int
tim_lf_register_irq(struct rte_pci_device *pci_dev, uint16_t tim_msixoff,
uintptr_t base)
{
- struct rte_intr_handle *handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *handle = pci_dev->intr_handle;
int rc, vec;
vec = tim_msixoff + TIM_LF_INT_VEC_NRSPERR_INT;
@@ -226,7 +226,7 @@ static void
tim_lf_unregister_irq(struct rte_pci_device *pci_dev, uint16_t tim_msixoff,
uintptr_t base)
{
- struct rte_intr_handle *handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *handle = pci_dev->intr_handle;
int vec;
vec = tim_msixoff + TIM_LF_INT_VEC_NRSPERR_INT;
diff --git a/drivers/mempool/octeontx2/otx2_mempool.c b/drivers/mempool/octeontx2/otx2_mempool.c
index fb630fecf8..f63dc06ef2 100644
--- a/drivers/mempool/octeontx2/otx2_mempool.c
+++ b/drivers/mempool/octeontx2/otx2_mempool.c
@@ -301,7 +301,7 @@ otx2_npa_lf_init(struct rte_pci_device *pci_dev, void *otx2_dev)
lf->pf_func = dev->pf_func;
lf->npa_msixoff = npa_msixoff;
- lf->intr_handle = &pci_dev->intr_handle;
+ lf->intr_handle = pci_dev->intr_handle;
lf->pci_dev = pci_dev;
idev->npa_pf_func = dev->pf_func;
diff --git a/drivers/net/atlantic/atl_ethdev.c b/drivers/net/atlantic/atl_ethdev.c
index f7bfac796c..1c03e8bfa1 100644
--- a/drivers/net/atlantic/atl_ethdev.c
+++ b/drivers/net/atlantic/atl_ethdev.c
@@ -359,7 +359,7 @@ eth_atl_dev_init(struct rte_eth_dev *eth_dev)
{
struct atl_adapter *adapter = eth_dev->data->dev_private;
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
struct aq_hw_s *hw = ATL_DEV_PRIVATE_TO_HW(eth_dev->data->dev_private);
int err = 0;
@@ -478,7 +478,7 @@ atl_dev_start(struct rte_eth_dev *dev)
{
struct aq_hw_s *hw = ATL_DEV_PRIVATE_TO_HW(dev->data->dev_private);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
uint32_t intr_vector = 0;
int status;
int err;
@@ -524,10 +524,9 @@ atl_dev_start(struct rte_eth_dev *dev)
}
}
- if (rte_intr_dp_is_en(intr_handle) && !intr_handle->intr_vec) {
- intr_handle->intr_vec = rte_zmalloc("intr_vec",
- dev->data->nb_rx_queues * sizeof(int), 0);
- if (intr_handle->intr_vec == NULL) {
+ if (rte_intr_dp_is_en(intr_handle)) {
+ if (rte_intr_vec_list_alloc(intr_handle, "intr_vec",
+ dev->data->nb_rx_queues)) {
PMD_INIT_LOG(ERR, "Failed to allocate %d rx_queues"
" intr_vec", dev->data->nb_rx_queues);
return -ENOMEM;
@@ -607,7 +606,7 @@ atl_dev_stop(struct rte_eth_dev *dev)
struct aq_hw_s *hw =
ATL_DEV_PRIVATE_TO_HW(dev->data->dev_private);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
PMD_INIT_FUNC_TRACE();
dev->data->dev_started = 0;
@@ -637,10 +636,7 @@ atl_dev_stop(struct rte_eth_dev *dev)
/* Clean datapath event and queue/vec mapping */
rte_intr_efd_disable(intr_handle);
- if (intr_handle->intr_vec != NULL) {
- rte_free(intr_handle->intr_vec);
- intr_handle->intr_vec = NULL;
- }
+ rte_intr_vec_list_free(intr_handle);
return 0;
}
@@ -691,7 +687,7 @@ static int
atl_dev_close(struct rte_eth_dev *dev)
{
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
struct aq_hw_s *hw;
int ret;
diff --git a/drivers/net/avp/avp_ethdev.c b/drivers/net/avp/avp_ethdev.c
index 9eabdf0901..7ac55584ff 100644
--- a/drivers/net/avp/avp_ethdev.c
+++ b/drivers/net/avp/avp_ethdev.c
@@ -711,7 +711,7 @@ avp_dev_interrupt_handler(void *data)
status);
/* re-enable UIO interrupt handling */
- ret = rte_intr_ack(&pci_dev->intr_handle);
+ ret = rte_intr_ack(pci_dev->intr_handle);
if (ret < 0) {
PMD_DRV_LOG(ERR, "Failed to re-enable UIO interrupts, ret=%d\n",
ret);
@@ -730,7 +730,7 @@ avp_dev_enable_interrupts(struct rte_eth_dev *eth_dev)
return -EINVAL;
/* enable UIO interrupt handling */
- ret = rte_intr_enable(&pci_dev->intr_handle);
+ ret = rte_intr_enable(pci_dev->intr_handle);
if (ret < 0) {
PMD_DRV_LOG(ERR, "Failed to enable UIO interrupts, ret=%d\n",
ret);
@@ -759,7 +759,7 @@ avp_dev_disable_interrupts(struct rte_eth_dev *eth_dev)
RTE_PTR_ADD(registers, RTE_AVP_INTERRUPT_MASK_OFFSET));
/* enable UIO interrupt handling */
- ret = rte_intr_disable(&pci_dev->intr_handle);
+ ret = rte_intr_disable(pci_dev->intr_handle);
if (ret < 0) {
PMD_DRV_LOG(ERR, "Failed to disable UIO interrupts, ret=%d\n",
ret);
@@ -776,7 +776,7 @@ avp_dev_setup_interrupts(struct rte_eth_dev *eth_dev)
int ret;
/* register a callback handler with UIO for interrupt notifications */
- ret = rte_intr_callback_register(&pci_dev->intr_handle,
+ ret = rte_intr_callback_register(pci_dev->intr_handle,
avp_dev_interrupt_handler,
(void *)eth_dev);
if (ret < 0) {
diff --git a/drivers/net/axgbe/axgbe_ethdev.c b/drivers/net/axgbe/axgbe_ethdev.c
index dab0c6775d..7d40c18a86 100644
--- a/drivers/net/axgbe/axgbe_ethdev.c
+++ b/drivers/net/axgbe/axgbe_ethdev.c
@@ -313,7 +313,7 @@ axgbe_dev_interrupt_handler(void *param)
}
}
/* Unmask interrupts since disabled after generation */
- rte_intr_ack(&pdata->pci_dev->intr_handle);
+ rte_intr_ack(pdata->pci_dev->intr_handle);
}
/*
@@ -374,7 +374,7 @@ axgbe_dev_start(struct rte_eth_dev *dev)
}
/* enable uio/vfio intr/eventfd mapping */
- rte_intr_enable(&pdata->pci_dev->intr_handle);
+ rte_intr_enable(pdata->pci_dev->intr_handle);
/* phy start*/
pdata->phy_if.phy_start(pdata);
@@ -406,7 +406,7 @@ axgbe_dev_stop(struct rte_eth_dev *dev)
PMD_INIT_FUNC_TRACE();
- rte_intr_disable(&pdata->pci_dev->intr_handle);
+ rte_intr_disable(pdata->pci_dev->intr_handle);
if (rte_bit_relaxed_get32(AXGBE_STOPPED, &pdata->dev_state))
return 0;
@@ -2311,7 +2311,7 @@ eth_axgbe_dev_init(struct rte_eth_dev *eth_dev)
return ret;
}
- rte_intr_callback_register(&pci_dev->intr_handle,
+ rte_intr_callback_register(pci_dev->intr_handle,
axgbe_dev_interrupt_handler,
(void *)eth_dev);
PMD_INIT_LOG(DEBUG, "port %d vendorID=0x%x deviceID=0x%x",
@@ -2335,8 +2335,8 @@ axgbe_dev_close(struct rte_eth_dev *eth_dev)
axgbe_dev_clear_queues(eth_dev);
/* disable uio intr before callback unregister */
- rte_intr_disable(&pci_dev->intr_handle);
- rte_intr_callback_unregister(&pci_dev->intr_handle,
+ rte_intr_disable(pci_dev->intr_handle);
+ rte_intr_callback_unregister(pci_dev->intr_handle,
axgbe_dev_interrupt_handler,
(void *)eth_dev);
diff --git a/drivers/net/axgbe/axgbe_mdio.c b/drivers/net/axgbe/axgbe_mdio.c
index 59fa9175ad..32d8c666f9 100644
--- a/drivers/net/axgbe/axgbe_mdio.c
+++ b/drivers/net/axgbe/axgbe_mdio.c
@@ -933,7 +933,7 @@ static int __axgbe_phy_config_aneg(struct axgbe_port *pdata)
}
/* Disable auto-negotiation interrupt */
- rte_intr_disable(&pdata->pci_dev->intr_handle);
+ rte_intr_disable(pdata->pci_dev->intr_handle);
/* Start auto-negotiation in a supported mode */
if (axgbe_use_mode(pdata, AXGBE_MODE_KR)) {
@@ -951,7 +951,7 @@ static int __axgbe_phy_config_aneg(struct axgbe_port *pdata)
} else if (axgbe_use_mode(pdata, AXGBE_MODE_SGMII_100)) {
axgbe_set_mode(pdata, AXGBE_MODE_SGMII_100);
} else {
- rte_intr_enable(&pdata->pci_dev->intr_handle);
+ rte_intr_enable(pdata->pci_dev->intr_handle);
return -EINVAL;
}
@@ -964,7 +964,7 @@ static int __axgbe_phy_config_aneg(struct axgbe_port *pdata)
pdata->kx_state = AXGBE_RX_BPA;
/* Re-enable auto-negotiation interrupt */
- rte_intr_enable(&pdata->pci_dev->intr_handle);
+ rte_intr_enable(pdata->pci_dev->intr_handle);
axgbe_an37_enable_interrupts(pdata);
axgbe_an_init(pdata);
diff --git a/drivers/net/bnx2x/bnx2x_ethdev.c b/drivers/net/bnx2x/bnx2x_ethdev.c
index 78fc717ec4..f36ad30e17 100644
--- a/drivers/net/bnx2x/bnx2x_ethdev.c
+++ b/drivers/net/bnx2x/bnx2x_ethdev.c
@@ -134,7 +134,7 @@ bnx2x_interrupt_handler(void *param)
PMD_DEBUG_PERIODIC_LOG(INFO, sc, "Interrupt handled");
bnx2x_interrupt_action(dev, 1);
- rte_intr_ack(&sc->pci_dev->intr_handle);
+ rte_intr_ack(sc->pci_dev->intr_handle);
}
static void bnx2x_periodic_start(void *param)
@@ -230,10 +230,10 @@ bnx2x_dev_start(struct rte_eth_dev *dev)
}
if (IS_PF(sc)) {
- rte_intr_callback_register(&sc->pci_dev->intr_handle,
+ rte_intr_callback_register(sc->pci_dev->intr_handle,
bnx2x_interrupt_handler, (void *)dev);
- if (rte_intr_enable(&sc->pci_dev->intr_handle))
+ if (rte_intr_enable(sc->pci_dev->intr_handle))
PMD_DRV_LOG(ERR, sc, "rte_intr_enable failed");
}
@@ -258,8 +258,8 @@ bnx2x_dev_stop(struct rte_eth_dev *dev)
bnx2x_dev_rxtx_init_dummy(dev);
if (IS_PF(sc)) {
- rte_intr_disable(&sc->pci_dev->intr_handle);
- rte_intr_callback_unregister(&sc->pci_dev->intr_handle,
+ rte_intr_disable(sc->pci_dev->intr_handle);
+ rte_intr_callback_unregister(sc->pci_dev->intr_handle,
bnx2x_interrupt_handler, (void *)dev);
/* stop the periodic callout */
diff --git a/drivers/net/bnxt/bnxt_ethdev.c b/drivers/net/bnxt/bnxt_ethdev.c
index 2791a5c62d..5a34bb96d0 100644
--- a/drivers/net/bnxt/bnxt_ethdev.c
+++ b/drivers/net/bnxt/bnxt_ethdev.c
@@ -729,7 +729,7 @@ static int bnxt_alloc_prev_ring_stats(struct bnxt *bp)
static int bnxt_start_nic(struct bnxt *bp)
{
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(bp->eth_dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
uint32_t intr_vector = 0;
uint32_t queue_id, base = BNXT_MISC_VEC_ID;
uint32_t vec = BNXT_MISC_VEC_ID;
@@ -846,26 +846,24 @@ static int bnxt_start_nic(struct bnxt *bp)
return rc;
}
- if (rte_intr_dp_is_en(intr_handle) && !intr_handle->intr_vec) {
- intr_handle->intr_vec =
- rte_zmalloc("intr_vec",
- bp->eth_dev->data->nb_rx_queues *
- sizeof(int), 0);
- if (intr_handle->intr_vec == NULL) {
+ if (rte_intr_dp_is_en(intr_handle)) {
+ if (rte_intr_vec_list_alloc(intr_handle, "intr_vec",
+ bp->eth_dev->data->nb_rx_queues)) {
PMD_DRV_LOG(ERR, "Failed to allocate %d rx_queues"
" intr_vec", bp->eth_dev->data->nb_rx_queues);
rc = -ENOMEM;
goto err_out;
}
- PMD_DRV_LOG(DEBUG, "intr_handle->intr_vec = %p "
- "intr_handle->nb_efd = %d intr_handle->max_intr = %d\n",
- intr_handle->intr_vec, intr_handle->nb_efd,
- intr_handle->max_intr);
+ PMD_DRV_LOG(DEBUG, "intr_handle->nb_efd = %d "
+ "intr_handle->max_intr = %d\n",
+ rte_intr_nb_efd_get(intr_handle),
+ rte_intr_max_intr_get(intr_handle));
for (queue_id = 0; queue_id < bp->eth_dev->data->nb_rx_queues;
queue_id++) {
- intr_handle->intr_vec[queue_id] =
- vec + BNXT_RX_VEC_START;
- if (vec < base + intr_handle->nb_efd - 1)
+ rte_intr_vec_list_index_set(intr_handle,
+ queue_id, vec + BNXT_RX_VEC_START);
+ if (vec < base + rte_intr_nb_efd_get(intr_handle)
+ - 1)
vec++;
}
}
@@ -1473,7 +1471,7 @@ static int bnxt_dev_stop(struct rte_eth_dev *eth_dev)
{
struct bnxt *bp = eth_dev->data->dev_private;
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
struct rte_eth_link link;
int ret;
@@ -1515,10 +1513,7 @@ static int bnxt_dev_stop(struct rte_eth_dev *eth_dev)
/* Clean queue intr-vector mapping */
rte_intr_efd_disable(intr_handle);
- if (intr_handle->intr_vec != NULL) {
- rte_free(intr_handle->intr_vec);
- intr_handle->intr_vec = NULL;
- }
+ rte_intr_vec_list_free(intr_handle);
bnxt_hwrm_port_clr_stats(bp);
bnxt_free_tx_mbufs(bp);
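
Not part of the patch: a sketch of the Rx-queue vector list handling repeated across the ethdev diffs above (atlantic, bnxt, e1000, ...): allocate on start when data-path interrupts are enabled, populate per-queue entries, free on stop. dummy_rxq_intr_setup()/teardown() are placeholder names and nb_rxq stands in for dev->data->nb_rx_queues.

#include <errno.h>
#include <stdint.h>
#include <rte_interrupts.h>

static int
dummy_rxq_intr_setup(struct rte_intr_handle *intr_handle, uint16_t nb_rxq)
{
	uint16_t q;
	int rc;

	if (!rte_intr_dp_is_en(intr_handle))
		return 0;

	/* Replaces the open-coded rte_zmalloc() of intr_handle->intr_vec. */
	if (rte_intr_vec_list_alloc(intr_handle, "intr_vec", nb_rxq))
		return -ENOMEM;

	for (q = 0; q < nb_rxq; q++) {
		rc = rte_intr_vec_list_index_set(intr_handle, q,
				RTE_INTR_VEC_RXTX_OFFSET + q);
		if (rc)
			return rc;
	}
	return 0;
}

static void
dummy_rxq_intr_teardown(struct rte_intr_handle *intr_handle)
{
	/* Replaces the free/NULL dance on intr_handle->intr_vec. */
	rte_intr_efd_disable(intr_handle);
	rte_intr_vec_list_free(intr_handle);
}
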
diff --git a/drivers/net/bnxt/bnxt_irq.c b/drivers/net/bnxt/bnxt_irq.c
index 122a1f9908..508abfc844 100644
--- a/drivers/net/bnxt/bnxt_irq.c
+++ b/drivers/net/bnxt/bnxt_irq.c
@@ -67,7 +67,7 @@ void bnxt_int_handler(void *param)
int bnxt_free_int(struct bnxt *bp)
{
- struct rte_intr_handle *intr_handle = &bp->pdev->intr_handle;
+ struct rte_intr_handle *intr_handle = bp->pdev->intr_handle;
struct bnxt_irq *irq = bp->irq_tbl;
int rc = 0;
@@ -170,7 +170,7 @@ int bnxt_setup_int(struct bnxt *bp)
int bnxt_request_int(struct bnxt *bp)
{
- struct rte_intr_handle *intr_handle = &bp->pdev->intr_handle;
+ struct rte_intr_handle *intr_handle = bp->pdev->intr_handle;
struct bnxt_irq *irq = bp->irq_tbl;
int rc = 0;
diff --git a/drivers/net/dpaa/dpaa_ethdev.c b/drivers/net/dpaa/dpaa_ethdev.c
index 89ea7dd47c..b9bf9d2966 100644
--- a/drivers/net/dpaa/dpaa_ethdev.c
+++ b/drivers/net/dpaa/dpaa_ethdev.c
@@ -208,7 +208,7 @@ dpaa_eth_dev_configure(struct rte_eth_dev *dev)
PMD_INIT_FUNC_TRACE();
dpaa_dev = container_of(rdev, struct rte_dpaa_device, device);
- intr_handle = &dpaa_dev->intr_handle;
+ intr_handle = dpaa_dev->intr_handle;
__fif = container_of(fif, struct __fman_if, __if);
/* Rx offloads which are enabled by default */
@@ -255,13 +255,14 @@ dpaa_eth_dev_configure(struct rte_eth_dev *dev)
}
/* if the interrupts were configured on this devices*/
- if (intr_handle && intr_handle->fd) {
+ if (intr_handle && rte_intr_fd_get(intr_handle)) {
if (dev->data->dev_conf.intr_conf.lsc != 0)
rte_intr_callback_register(intr_handle,
dpaa_interrupt_handler,
(void *)dev);
- ret = dpaa_intr_enable(__fif->node_name, intr_handle->fd);
+ ret = dpaa_intr_enable(__fif->node_name,
+ rte_intr_fd_get(intr_handle));
if (ret) {
if (dev->data->dev_conf.intr_conf.lsc != 0) {
rte_intr_callback_unregister(intr_handle,
@@ -368,9 +369,10 @@ static void dpaa_interrupt_handler(void *param)
int bytes_read;
dpaa_dev = container_of(rdev, struct rte_dpaa_device, device);
- intr_handle = &dpaa_dev->intr_handle;
+ intr_handle = dpaa_dev->intr_handle;
- bytes_read = read(intr_handle->fd, &buf, sizeof(uint64_t));
+ bytes_read = read(rte_intr_fd_get(intr_handle), &buf,
+ sizeof(uint64_t));
if (bytes_read < 0)
DPAA_PMD_ERR("Error reading eventfd\n");
dpaa_eth_link_update(dev, 0);
@@ -440,7 +442,7 @@ static int dpaa_eth_dev_close(struct rte_eth_dev *dev)
}
dpaa_dev = container_of(rdev, struct rte_dpaa_device, device);
- intr_handle = &dpaa_dev->intr_handle;
+ intr_handle = dpaa_dev->intr_handle;
__fif = container_of(fif, struct __fman_if, __if);
ret = dpaa_eth_dev_stop(dev);
@@ -449,7 +451,7 @@ static int dpaa_eth_dev_close(struct rte_eth_dev *dev)
if (link->link_status && !link->link_autoneg)
dpaa_restart_link_autoneg(__fif->node_name);
- if (intr_handle && intr_handle->fd &&
+ if (intr_handle && rte_intr_fd_get(intr_handle) &&
dev->data->dev_conf.intr_conf.lsc != 0) {
dpaa_intr_disable(__fif->node_name);
rte_intr_callback_unregister(intr_handle,
@@ -1072,26 +1074,38 @@ int dpaa_eth_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
rxq->qp = qp;
/* Set up the device interrupt handler */
- if (!dev->intr_handle) {
+ if (dev->intr_handle == NULL) {
struct rte_dpaa_device *dpaa_dev;
struct rte_device *rdev = dev->device;
dpaa_dev = container_of(rdev, struct rte_dpaa_device,
device);
- dev->intr_handle = &dpaa_dev->intr_handle;
- dev->intr_handle->intr_vec = rte_zmalloc(NULL,
- dpaa_push_mode_max_queue, 0);
- if (!dev->intr_handle->intr_vec) {
+ dev->intr_handle = dpaa_dev->intr_handle;
+ if (rte_intr_vec_list_alloc(dev->intr_handle,
+ NULL, dpaa_push_mode_max_queue)) {
DPAA_PMD_ERR("intr_vec alloc failed");
return -ENOMEM;
}
- dev->intr_handle->nb_efd = dpaa_push_mode_max_queue;
- dev->intr_handle->max_intr = dpaa_push_mode_max_queue;
+ if (rte_intr_nb_efd_set(dev->intr_handle,
+ dpaa_push_mode_max_queue))
+ return -rte_errno;
+
+ if (rte_intr_max_intr_set(dev->intr_handle,
+ dpaa_push_mode_max_queue))
+ return -rte_errno;
}
- dev->intr_handle->type = RTE_INTR_HANDLE_EXT;
- dev->intr_handle->intr_vec[queue_idx] = queue_idx + 1;
- dev->intr_handle->efds[queue_idx] = q_fd;
+ if (rte_intr_type_set(dev->intr_handle, RTE_INTR_HANDLE_EXT))
+ return -rte_errno;
+
+ if (rte_intr_vec_list_index_set(dev->intr_handle,
+ queue_idx, queue_idx + 1))
+ return -rte_errno;
+
+ if (rte_intr_efds_index_set(dev->intr_handle, queue_idx,
+ q_fd))
+ return -rte_errno;
+
rxq->q_fd = q_fd;
}
rxq->bp_array = rte_dpaa_bpid_info;
diff --git a/drivers/net/dpaa2/dpaa2_ethdev.c b/drivers/net/dpaa2/dpaa2_ethdev.c
index 59e728577f..73d17f7b3c 100644
--- a/drivers/net/dpaa2/dpaa2_ethdev.c
+++ b/drivers/net/dpaa2/dpaa2_ethdev.c
@@ -1145,7 +1145,7 @@ dpaa2_dev_start(struct rte_eth_dev *dev)
struct rte_intr_handle *intr_handle;
dpaa2_dev = container_of(rdev, struct rte_dpaa2_device, device);
- intr_handle = &dpaa2_dev->intr_handle;
+ intr_handle = dpaa2_dev->intr_handle;
PMD_INIT_FUNC_TRACE();
@@ -1216,8 +1216,8 @@ dpaa2_dev_start(struct rte_eth_dev *dev)
}
/* if the interrupts were configured on this devices*/
- if (intr_handle && (intr_handle->fd) &&
- (dev->data->dev_conf.intr_conf.lsc != 0)) {
+ if (intr_handle && rte_intr_fd_get(intr_handle) &&
+ dev->data->dev_conf.intr_conf.lsc != 0) {
/* Registering LSC interrupt handler */
rte_intr_callback_register(intr_handle,
dpaa2_interrupt_handler,
@@ -1256,8 +1256,8 @@ dpaa2_dev_stop(struct rte_eth_dev *dev)
PMD_INIT_FUNC_TRACE();
/* reset interrupt callback */
- if (intr_handle && (intr_handle->fd) &&
- (dev->data->dev_conf.intr_conf.lsc != 0)) {
+ if (intr_handle && rte_intr_fd_get(intr_handle) &&
+ dev->data->dev_conf.intr_conf.lsc != 0) {
/*disable dpni irqs */
dpaa2_eth_setup_irqs(dev, 0);
diff --git a/drivers/net/e1000/em_ethdev.c b/drivers/net/e1000/em_ethdev.c
index 9da477e59d..18fea4e0ac 100644
--- a/drivers/net/e1000/em_ethdev.c
+++ b/drivers/net/e1000/em_ethdev.c
@@ -237,7 +237,7 @@ static int
eth_em_dev_init(struct rte_eth_dev *eth_dev)
{
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
struct e1000_adapter *adapter =
E1000_DEV_PRIVATE(eth_dev->data->dev_private);
struct e1000_hw *hw =
@@ -523,7 +523,7 @@ eth_em_start(struct rte_eth_dev *dev)
struct e1000_hw *hw =
E1000_DEV_PRIVATE_TO_HW(dev->data->dev_private);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
int ret, mask;
uint32_t intr_vector = 0;
uint32_t *speeds;
@@ -573,12 +573,10 @@ eth_em_start(struct rte_eth_dev *dev)
}
if (rte_intr_dp_is_en(intr_handle)) {
- intr_handle->intr_vec =
- rte_zmalloc("intr_vec",
- dev->data->nb_rx_queues * sizeof(int), 0);
- if (intr_handle->intr_vec == NULL) {
+ if (rte_intr_vec_list_alloc(intr_handle, "intr_vec",
+ dev->data->nb_rx_queues)) {
PMD_INIT_LOG(ERR, "Failed to allocate %d rx_queues"
- " intr_vec", dev->data->nb_rx_queues);
+ " intr_vec", dev->data->nb_rx_queues);
return -ENOMEM;
}
@@ -716,7 +714,7 @@ eth_em_stop(struct rte_eth_dev *dev)
struct rte_eth_link link;
struct e1000_hw *hw = E1000_DEV_PRIVATE_TO_HW(dev->data->dev_private);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
dev->data->dev_started = 0;
@@ -750,10 +748,7 @@ eth_em_stop(struct rte_eth_dev *dev)
/* Clean datapath event and queue/vec mapping */
rte_intr_efd_disable(intr_handle);
- if (intr_handle->intr_vec != NULL) {
- rte_free(intr_handle->intr_vec);
- intr_handle->intr_vec = NULL;
- }
+ rte_intr_vec_list_free(intr_handle);
return 0;
}
@@ -765,7 +760,7 @@ eth_em_close(struct rte_eth_dev *dev)
struct e1000_adapter *adapter =
E1000_DEV_PRIVATE(dev->data->dev_private);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
int ret;
if (rte_eal_process_type() != RTE_PROC_PRIMARY)
@@ -1006,7 +1001,7 @@ eth_em_rx_queue_intr_enable(struct rte_eth_dev *dev, __rte_unused uint16_t queue
{
struct e1000_hw *hw = E1000_DEV_PRIVATE_TO_HW(dev->data->dev_private);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
em_rxq_intr_enable(hw);
rte_intr_ack(intr_handle);
diff --git a/drivers/net/e1000/igb_ethdev.c b/drivers/net/e1000/igb_ethdev.c
index ae3bc4a9c2..ff06575f03 100644
--- a/drivers/net/e1000/igb_ethdev.c
+++ b/drivers/net/e1000/igb_ethdev.c
@@ -515,7 +515,7 @@ igb_intr_enable(struct rte_eth_dev *dev)
struct e1000_hw *hw =
E1000_DEV_PRIVATE_TO_HW(dev->data->dev_private);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
if (rte_intr_allow_others(intr_handle) &&
dev->data->dev_conf.intr_conf.lsc != 0) {
@@ -532,7 +532,7 @@ igb_intr_disable(struct rte_eth_dev *dev)
struct e1000_hw *hw =
E1000_DEV_PRIVATE_TO_HW(dev->data->dev_private);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
if (rte_intr_allow_others(intr_handle) &&
dev->data->dev_conf.intr_conf.lsc != 0) {
@@ -851,12 +851,12 @@ eth_igb_dev_init(struct rte_eth_dev *eth_dev)
eth_dev->data->port_id, pci_dev->id.vendor_id,
pci_dev->id.device_id);
- rte_intr_callback_register(&pci_dev->intr_handle,
+ rte_intr_callback_register(pci_dev->intr_handle,
eth_igb_interrupt_handler,
(void *)eth_dev);
/* enable uio/vfio intr/eventfd mapping */
- rte_intr_enable(&pci_dev->intr_handle);
+ rte_intr_enable(pci_dev->intr_handle);
/* enable support intr */
igb_intr_enable(eth_dev);
@@ -992,7 +992,7 @@ eth_igbvf_dev_init(struct rte_eth_dev *eth_dev)
eth_dev->data->port_id, pci_dev->id.vendor_id,
pci_dev->id.device_id, "igb_mac_82576_vf");
- intr_handle = &pci_dev->intr_handle;
+ intr_handle = pci_dev->intr_handle;
rte_intr_callback_register(intr_handle,
eth_igbvf_interrupt_handler, eth_dev);
@@ -1196,7 +1196,7 @@ eth_igb_start(struct rte_eth_dev *dev)
struct e1000_adapter *adapter =
E1000_DEV_PRIVATE(dev->data->dev_private);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
int ret, mask;
uint32_t intr_vector = 0;
uint32_t ctrl_ext;
@@ -1255,11 +1255,10 @@ eth_igb_start(struct rte_eth_dev *dev)
return -1;
}
- if (rte_intr_dp_is_en(intr_handle) && !intr_handle->intr_vec) {
- intr_handle->intr_vec =
- rte_zmalloc("intr_vec",
- dev->data->nb_rx_queues * sizeof(int), 0);
- if (intr_handle->intr_vec == NULL) {
+ /* Allocate the vector list */
+ if (rte_intr_dp_is_en(intr_handle)) {
+ if (rte_intr_vec_list_alloc(intr_handle, "intr_vec",
+ dev->data->nb_rx_queues)) {
PMD_INIT_LOG(ERR, "Failed to allocate %d rx_queues"
" intr_vec", dev->data->nb_rx_queues);
return -ENOMEM;
@@ -1418,7 +1417,7 @@ eth_igb_stop(struct rte_eth_dev *dev)
struct e1000_hw *hw = E1000_DEV_PRIVATE_TO_HW(dev->data->dev_private);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
struct rte_eth_link link;
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
struct e1000_adapter *adapter =
E1000_DEV_PRIVATE(dev->data->dev_private);
@@ -1462,10 +1461,7 @@ eth_igb_stop(struct rte_eth_dev *dev)
/* Clean datapath event and queue/vec mapping */
rte_intr_efd_disable(intr_handle);
- if (intr_handle->intr_vec != NULL) {
- rte_free(intr_handle->intr_vec);
- intr_handle->intr_vec = NULL;
- }
+ rte_intr_vec_list_free(intr_handle);
adapter->stopped = true;
dev->data->dev_started = 0;
@@ -1505,7 +1501,7 @@ eth_igb_close(struct rte_eth_dev *dev)
struct e1000_hw *hw = E1000_DEV_PRIVATE_TO_HW(dev->data->dev_private);
struct rte_eth_link link;
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
struct e1000_filter_info *filter_info =
E1000_DEV_PRIVATE_TO_FILTER_INFO(dev->data->dev_private);
int ret;
@@ -1531,10 +1527,8 @@ eth_igb_close(struct rte_eth_dev *dev)
igb_dev_free_queues(dev);
- if (intr_handle->intr_vec) {
- rte_free(intr_handle->intr_vec);
- intr_handle->intr_vec = NULL;
- }
+ /* Cleanup vector list */
+ rte_intr_vec_list_free(intr_handle);
memset(&link, 0, sizeof(link));
rte_eth_linkstatus_set(dev, &link);
@@ -2771,7 +2765,7 @@ static int eth_igb_rxq_interrupt_setup(struct rte_eth_dev *dev)
struct e1000_hw *hw =
E1000_DEV_PRIVATE_TO_HW(dev->data->dev_private);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
int misc_shift = rte_intr_allow_others(intr_handle) ? 1 : 0;
struct rte_eth_dev_info dev_info;
@@ -3288,7 +3282,7 @@ igbvf_dev_start(struct rte_eth_dev *dev)
struct e1000_adapter *adapter =
E1000_DEV_PRIVATE(dev->data->dev_private);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
int ret;
uint32_t intr_vector = 0;
@@ -3319,11 +3313,10 @@ igbvf_dev_start(struct rte_eth_dev *dev)
return ret;
}
- if (rte_intr_dp_is_en(intr_handle) && !intr_handle->intr_vec) {
- intr_handle->intr_vec =
- rte_zmalloc("intr_vec",
- dev->data->nb_rx_queues * sizeof(int), 0);
- if (!intr_handle->intr_vec) {
+ /* Allocate the vector list */
+ if (rte_intr_dp_is_en(intr_handle)) {
+ if (rte_intr_vec_list_alloc(intr_handle, "intr_vec",
+ dev->data->nb_rx_queues)) {
PMD_INIT_LOG(ERR, "Failed to allocate %d rx_queues"
" intr_vec", dev->data->nb_rx_queues);
return -ENOMEM;
@@ -3345,7 +3338,7 @@ static int
igbvf_dev_stop(struct rte_eth_dev *dev)
{
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
struct e1000_adapter *adapter =
E1000_DEV_PRIVATE(dev->data->dev_private);
@@ -3369,10 +3362,9 @@ igbvf_dev_stop(struct rte_eth_dev *dev)
/* Clean datapath event and queue/vec mapping */
rte_intr_efd_disable(intr_handle);
- if (intr_handle->intr_vec) {
- rte_free(intr_handle->intr_vec);
- intr_handle->intr_vec = NULL;
- }
+
+ /* Clean vector list */
+ rte_intr_vec_list_free(intr_handle);
adapter->stopped = true;
dev->data->dev_started = 0;
@@ -3410,7 +3402,7 @@ igbvf_dev_close(struct rte_eth_dev *dev)
memset(&addr, 0, sizeof(addr));
igbvf_default_mac_addr_set(dev, &addr);
- rte_intr_callback_unregister(&pci_dev->intr_handle,
+ rte_intr_callback_unregister(pci_dev->intr_handle,
eth_igbvf_interrupt_handler,
(void *)dev);
@@ -5112,7 +5104,7 @@ eth_igb_rx_queue_intr_disable(struct rte_eth_dev *dev, uint16_t queue_id)
struct e1000_hw *hw =
E1000_DEV_PRIVATE_TO_HW(dev->data->dev_private);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
uint32_t vec = E1000_MISC_VEC_ID;
if (rte_intr_allow_others(intr_handle))
@@ -5132,7 +5124,7 @@ eth_igb_rx_queue_intr_enable(struct rte_eth_dev *dev, uint16_t queue_id)
struct e1000_hw *hw =
E1000_DEV_PRIVATE_TO_HW(dev->data->dev_private);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
uint32_t vec = E1000_MISC_VEC_ID;
if (rte_intr_allow_others(intr_handle))
@@ -5210,7 +5202,7 @@ eth_igb_configure_msix_intr(struct rte_eth_dev *dev)
uint32_t base = E1000_MISC_VEC_ID;
uint32_t misc_shift = 0;
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
/* won't configure msix register if no mapping is done
* between intr vector and event fd
@@ -5251,8 +5243,9 @@ eth_igb_configure_msix_intr(struct rte_eth_dev *dev)
E1000_WRITE_REG(hw, E1000_GPIE, E1000_GPIE_MSIX_MODE |
E1000_GPIE_PBA | E1000_GPIE_EIAME |
E1000_GPIE_NSICR);
- intr_mask = RTE_LEN2MASK(intr_handle->nb_efd, uint32_t) <<
- misc_shift;
+ intr_mask =
+ RTE_LEN2MASK(rte_intr_nb_efd_get(intr_handle),
+ uint32_t) << misc_shift;
if (dev->data->dev_conf.intr_conf.lsc != 0)
intr_mask |= (1 << IGB_MSIX_OTHER_INTR_VEC);
@@ -5270,8 +5263,8 @@ eth_igb_configure_msix_intr(struct rte_eth_dev *dev)
/* use EIAM to auto-mask when MSI-X interrupt
* is asserted, this saves a register write for every interrupt
*/
- intr_mask = RTE_LEN2MASK(intr_handle->nb_efd, uint32_t) <<
- misc_shift;
+ intr_mask = RTE_LEN2MASK(rte_intr_nb_efd_get(intr_handle),
+ uint32_t) << misc_shift;
if (dev->data->dev_conf.intr_conf.lsc != 0)
intr_mask |= (1 << IGB_MSIX_OTHER_INTR_VEC);
@@ -5281,8 +5274,8 @@ eth_igb_configure_msix_intr(struct rte_eth_dev *dev)
for (queue_id = 0; queue_id < dev->data->nb_rx_queues; queue_id++) {
eth_igb_assign_msix_vector(hw, 0, queue_id, vec);
- intr_handle->intr_vec[queue_id] = vec;
- if (vec < base + intr_handle->nb_efd - 1)
+ rte_intr_vec_list_index_set(intr_handle, queue_id, vec);
+ if (vec < base + rte_intr_nb_efd_get(intr_handle) - 1)
vec++;
}
diff --git a/drivers/net/ena/ena_ethdev.c b/drivers/net/ena/ena_ethdev.c
index 572d7c20f9..634c97acf6 100644
--- a/drivers/net/ena/ena_ethdev.c
+++ b/drivers/net/ena/ena_ethdev.c
@@ -494,7 +494,7 @@ static void ena_config_debug_area(struct ena_adapter *adapter)
static int ena_close(struct rte_eth_dev *dev)
{
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
struct ena_adapter *adapter = dev->data->dev_private;
int ret = 0;
@@ -954,7 +954,7 @@ static int ena_stop(struct rte_eth_dev *dev)
struct ena_adapter *adapter = dev->data->dev_private;
struct ena_com_dev *ena_dev = &adapter->ena_dev;
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
int rc;
/* Cannot free memory in secondary process */
@@ -976,10 +976,9 @@ static int ena_stop(struct rte_eth_dev *dev)
rte_intr_disable(intr_handle);
rte_intr_efd_disable(intr_handle);
- if (intr_handle->intr_vec != NULL) {
- rte_free(intr_handle->intr_vec);
- intr_handle->intr_vec = NULL;
- }
+
+ /* Cleanup vector list */
+ rte_intr_vec_list_free(intr_handle);
rte_intr_enable(intr_handle);
@@ -995,7 +994,7 @@ static int ena_create_io_queue(struct rte_eth_dev *dev, struct ena_ring *ring)
struct ena_adapter *adapter = ring->adapter;
struct ena_com_dev *ena_dev = &adapter->ena_dev;
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
struct ena_com_create_io_ctx ctx =
/* policy set to _HOST just to satisfy icc compiler */
{ ENA_ADMIN_PLACEMENT_POLICY_HOST,
@@ -1015,7 +1014,10 @@ static int ena_create_io_queue(struct rte_eth_dev *dev, struct ena_ring *ring)
ena_qid = ENA_IO_RXQ_IDX(ring->id);
ctx.direction = ENA_COM_IO_QUEUE_DIRECTION_RX;
if (rte_intr_dp_is_en(intr_handle))
- ctx.msix_vector = intr_handle->intr_vec[ring->id];
+ ctx.msix_vector =
+ rte_intr_vec_list_index_get(intr_handle,
+ ring->id);
+
for (i = 0; i < ring->ring_size; i++)
ring->empty_rx_reqs[i] = i;
}
@@ -1824,7 +1826,7 @@ static int eth_ena_dev_init(struct rte_eth_dev *eth_dev)
pci_dev->addr.devid,
pci_dev->addr.function);
- intr_handle = &pci_dev->intr_handle;
+ intr_handle = pci_dev->intr_handle;
adapter->regs = pci_dev->mem_resource[ENA_REGS_BAR].addr;
adapter->dev_mem_base = pci_dev->mem_resource[ENA_MEM_BAR].addr;
@@ -3112,7 +3114,7 @@ static int ena_parse_devargs(struct ena_adapter *adapter,
static int ena_setup_rx_intr(struct rte_eth_dev *dev)
{
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
int rc;
uint16_t vectors_nb, i;
bool rx_intr_requested = dev->data->dev_conf.intr_conf.rxq;
@@ -3139,9 +3141,9 @@ static int ena_setup_rx_intr(struct rte_eth_dev *dev)
goto enable_intr;
}
- intr_handle->intr_vec = rte_zmalloc("intr_vec",
- dev->data->nb_rx_queues * sizeof(*intr_handle->intr_vec), 0);
- if (intr_handle->intr_vec == NULL) {
+ /* Allocate the vector list */
+ if (rte_intr_vec_list_alloc(intr_handle, "intr_vec",
+ dev->data->nb_rx_queues)) {
PMD_DRV_LOG(ERR,
"Failed to allocate interrupt vector for %d queues\n",
dev->data->nb_rx_queues);
@@ -3160,7 +3162,9 @@ static int ena_setup_rx_intr(struct rte_eth_dev *dev)
}
for (i = 0; i < vectors_nb; ++i)
- intr_handle->intr_vec[i] = RTE_INTR_VEC_RXTX_OFFSET + i;
+ if (rte_intr_vec_list_index_set(intr_handle, i,
+ RTE_INTR_VEC_RXTX_OFFSET + i))
+ goto disable_intr_efd;
rte_intr_enable(intr_handle);
return 0;
@@ -3168,8 +3172,7 @@ static int ena_setup_rx_intr(struct rte_eth_dev *dev)
disable_intr_efd:
rte_intr_efd_disable(intr_handle);
free_intr_vec:
- rte_free(intr_handle->intr_vec);
- intr_handle->intr_vec = NULL;
+ rte_intr_vec_list_free(intr_handle);
enable_intr:
rte_intr_enable(intr_handle);
return rc;
diff --git a/drivers/net/enic/enic_main.c b/drivers/net/enic/enic_main.c
index f7ae84767f..5cc6d9f017 100644
--- a/drivers/net/enic/enic_main.c
+++ b/drivers/net/enic/enic_main.c
@@ -448,7 +448,7 @@ enic_intr_handler(void *arg)
rte_eth_dev_callback_process(dev, RTE_ETH_EVENT_INTR_LSC, NULL);
enic_log_q_error(enic);
/* Re-enable irq in case of INTx */
- rte_intr_ack(&enic->pdev->intr_handle);
+ rte_intr_ack(enic->pdev->intr_handle);
}
static int enic_rxq_intr_init(struct enic *enic)
@@ -477,14 +477,16 @@ static int enic_rxq_intr_init(struct enic *enic)
" interrupts\n");
return err;
}
- intr_handle->intr_vec = rte_zmalloc("enic_intr_vec",
- rxq_intr_count * sizeof(int), 0);
- if (intr_handle->intr_vec == NULL) {
+
+ if (rte_intr_vec_list_alloc(intr_handle, "enic_intr_vec",
+ rxq_intr_count)) {
dev_err(enic, "Failed to allocate intr_vec\n");
return -ENOMEM;
}
for (i = 0; i < rxq_intr_count; i++)
- intr_handle->intr_vec[i] = i + ENICPMD_RXQ_INTR_OFFSET;
+ if (rte_intr_vec_list_index_set(intr_handle, i,
+ i + ENICPMD_RXQ_INTR_OFFSET))
+ return -rte_errno;
return 0;
}
@@ -494,10 +496,8 @@ static void enic_rxq_intr_deinit(struct enic *enic)
intr_handle = enic->rte_dev->intr_handle;
rte_intr_efd_disable(intr_handle);
- if (intr_handle->intr_vec != NULL) {
- rte_free(intr_handle->intr_vec);
- intr_handle->intr_vec = NULL;
- }
+
+ rte_intr_vec_list_free(intr_handle);
}
static void enic_prep_wq_for_simple_tx(struct enic *enic, uint16_t queue_idx)
@@ -667,10 +667,10 @@ int enic_enable(struct enic *enic)
vnic_dev_enable_wait(enic->vdev);
/* Register and enable error interrupt */
- rte_intr_callback_register(&(enic->pdev->intr_handle),
+ rte_intr_callback_register(enic->pdev->intr_handle,
enic_intr_handler, (void *)enic->rte_dev);
- rte_intr_enable(&(enic->pdev->intr_handle));
+ rte_intr_enable(enic->pdev->intr_handle);
/* Unmask LSC interrupt */
vnic_intr_unmask(&enic->intr[ENICPMD_LSC_INTR_OFFSET]);
@@ -1111,8 +1111,8 @@ int enic_disable(struct enic *enic)
(void)vnic_intr_masked(&enic->intr[i]); /* flush write */
}
enic_rxq_intr_deinit(enic);
- rte_intr_disable(&enic->pdev->intr_handle);
- rte_intr_callback_unregister(&enic->pdev->intr_handle,
+ rte_intr_disable(enic->pdev->intr_handle);
+ rte_intr_callback_unregister(enic->pdev->intr_handle,
enic_intr_handler,
(void *)enic->rte_dev);
diff --git a/drivers/net/failsafe/failsafe.c b/drivers/net/failsafe/failsafe.c
index 82d595b1d1..ad6b43538e 100644
--- a/drivers/net/failsafe/failsafe.c
+++ b/drivers/net/failsafe/failsafe.c
@@ -264,11 +264,23 @@ fs_eth_dev_create(struct rte_vdev_device *vdev)
RTE_ETHER_ADDR_BYTES(mac));
dev->data->dev_flags |= RTE_ETH_DEV_INTR_LSC |
RTE_ETH_DEV_AUTOFILL_QUEUE_XSTATS;
- PRIV(dev)->intr_handle = (struct rte_intr_handle){
- .fd = -1,
- .type = RTE_INTR_HANDLE_EXT,
- };
+
+ /* Allocate interrupt instance */
+ PRIV(dev)->intr_handle =
+ rte_intr_instance_alloc(RTE_INTR_INSTANCE_F_SHARED);
+ if (PRIV(dev)->intr_handle == NULL) {
+ ERROR("Failed to allocate intr handle");
+ goto cancel_alarm;
+ }
+
+ if (rte_intr_fd_set(PRIV(dev)->intr_handle, -1))
+ goto cancel_alarm;
+
+ if (rte_intr_type_set(PRIV(dev)->intr_handle, RTE_INTR_HANDLE_EXT))
+ goto cancel_alarm;
+
rte_eth_dev_probing_finish(dev);
+
return 0;
cancel_alarm:
failsafe_hotplug_alarm_cancel(dev);
@@ -297,6 +309,7 @@ fs_rte_eth_free(const char *name)
return 0; /* port already released */
ret = failsafe_eth_dev_close(dev);
rte_eth_dev_release_port(dev);
+ rte_intr_instance_free(PRIV(dev)->intr_handle);
return ret;
}
diff --git a/drivers/net/failsafe/failsafe_intr.c b/drivers/net/failsafe/failsafe_intr.c
index 5f4810051d..14b87a54ab 100644
--- a/drivers/net/failsafe/failsafe_intr.c
+++ b/drivers/net/failsafe/failsafe_intr.c
@@ -410,12 +410,10 @@ fs_rx_intr_vec_uninstall(struct fs_priv *priv)
{
struct rte_intr_handle *intr_handle;
- intr_handle = &priv->intr_handle;
- if (intr_handle->intr_vec != NULL) {
- free(intr_handle->intr_vec);
- intr_handle->intr_vec = NULL;
- }
- intr_handle->nb_efd = 0;
+ intr_handle = priv->intr_handle;
+ rte_intr_vec_list_free(intr_handle);
+
+ rte_intr_nb_efd_set(intr_handle, 0);
}
/**
@@ -439,11 +437,9 @@ fs_rx_intr_vec_install(struct fs_priv *priv)
rxqs_n = priv->data->nb_rx_queues;
n = RTE_MIN(rxqs_n, (uint32_t)RTE_MAX_RXTX_INTR_VEC_ID);
count = 0;
- intr_handle = &priv->intr_handle;
- RTE_ASSERT(intr_handle->intr_vec == NULL);
+ intr_handle = priv->intr_handle;
/* Allocate the interrupt vector of the failsafe Rx proxy interrupts */
- intr_handle->intr_vec = malloc(n * sizeof(intr_handle->intr_vec[0]));
- if (intr_handle->intr_vec == NULL) {
+ if (rte_intr_vec_list_alloc(intr_handle, NULL, n)) {
fs_rx_intr_vec_uninstall(priv);
rte_errno = ENOMEM;
ERROR("Failed to allocate memory for interrupt vector,"
@@ -456,9 +452,9 @@ fs_rx_intr_vec_install(struct fs_priv *priv)
/* Skip queues that cannot request interrupts. */
if (rxq == NULL || rxq->event_fd < 0) {
/* Use invalid intr_vec[] index to disable entry. */
- intr_handle->intr_vec[i] =
- RTE_INTR_VEC_RXTX_OFFSET +
- RTE_MAX_RXTX_INTR_VEC_ID;
+ if (rte_intr_vec_list_index_set(intr_handle, i,
+ RTE_INTR_VEC_RXTX_OFFSET + RTE_MAX_RXTX_INTR_VEC_ID))
+ return -rte_errno;
continue;
}
if (count >= RTE_MAX_RXTX_INTR_VEC_ID) {
@@ -469,15 +465,24 @@ fs_rx_intr_vec_install(struct fs_priv *priv)
fs_rx_intr_vec_uninstall(priv);
return -rte_errno;
}
- intr_handle->intr_vec[i] = RTE_INTR_VEC_RXTX_OFFSET + count;
- intr_handle->efds[count] = rxq->event_fd;
+ if (rte_intr_vec_list_index_set(intr_handle, i,
+ RTE_INTR_VEC_RXTX_OFFSET + count))
+ return -rte_errno;
+
+ if (rte_intr_efds_index_set(intr_handle, count,
+ rxq->event_fd))
+ return -rte_errno;
count++;
}
if (count == 0) {
fs_rx_intr_vec_uninstall(priv);
} else {
- intr_handle->nb_efd = count;
- intr_handle->efd_counter_size = sizeof(uint64_t);
+ if (rte_intr_nb_efd_set(intr_handle, count))
+ return -rte_errno;
+
+ if (rte_intr_efd_counter_size_set(intr_handle,
+ sizeof(uint64_t)))
+ return -rte_errno;
}
return 0;
}
@@ -499,7 +504,7 @@ failsafe_rx_intr_uninstall(struct rte_eth_dev *dev)
struct rte_intr_handle *intr_handle;
priv = PRIV(dev);
- intr_handle = &priv->intr_handle;
+ intr_handle = priv->intr_handle;
rte_intr_free_epoll_fd(intr_handle);
fs_rx_event_proxy_uninstall(priv);
fs_rx_intr_vec_uninstall(priv);
@@ -530,6 +535,6 @@ failsafe_rx_intr_install(struct rte_eth_dev *dev)
fs_rx_intr_vec_uninstall(priv);
return -rte_errno;
}
- dev->intr_handle = &priv->intr_handle;
+ dev->intr_handle = priv->intr_handle;
return 0;
}
diff --git a/drivers/net/failsafe/failsafe_ops.c b/drivers/net/failsafe/failsafe_ops.c
index a3a8a1c82e..822883bc2f 100644
--- a/drivers/net/failsafe/failsafe_ops.c
+++ b/drivers/net/failsafe/failsafe_ops.c
@@ -393,15 +393,22 @@ fs_rx_queue_setup(struct rte_eth_dev *dev,
* For the time being, fake as if we are using MSIX interrupts,
* this will cause rte_intr_efd_enable to allocate an eventfd for us.
*/
- struct rte_intr_handle intr_handle = {
- .type = RTE_INTR_HANDLE_VFIO_MSIX,
- .efds = { -1, },
- };
+ struct rte_intr_handle *intr_handle;
struct sub_device *sdev;
struct rxq *rxq;
uint8_t i;
int ret;
+ intr_handle = rte_intr_instance_alloc(RTE_INTR_INSTANCE_F_PRIVATE);
+ if (intr_handle == NULL)
+ return -ENOMEM;
+
+ if (rte_intr_type_set(intr_handle, RTE_INTR_HANDLE_VFIO_MSIX))
+ return -rte_errno;
+
+ if (rte_intr_efds_index_set(intr_handle, 0, -1))
+ return -rte_errno;
+
fs_lock(dev, 0);
if (rx_conf->rx_deferred_start) {
FOREACH_SUBDEV_STATE(sdev, i, dev, DEV_PROBED) {
@@ -435,12 +442,12 @@ fs_rx_queue_setup(struct rte_eth_dev *dev,
rxq->info.nb_desc = nb_rx_desc;
rxq->priv = PRIV(dev);
rxq->sdev = PRIV(dev)->subs;
- ret = rte_intr_efd_enable(&intr_handle, 1);
+ ret = rte_intr_efd_enable(intr_handle, 1);
if (ret < 0) {
fs_unlock(dev, 0);
return ret;
}
- rxq->event_fd = intr_handle.efds[0];
+ rxq->event_fd = rte_intr_efds_index_get(intr_handle, 0);
dev->data->rx_queues[rx_queue_id] = rxq;
FOREACH_SUBDEV_STATE(sdev, i, dev, DEV_ACTIVE) {
ret = rte_eth_rx_queue_setup(PORT_ID(sdev),
diff --git a/drivers/net/failsafe/failsafe_private.h b/drivers/net/failsafe/failsafe_private.h
index cd39d103c6..a80f5e2caf 100644
--- a/drivers/net/failsafe/failsafe_private.h
+++ b/drivers/net/failsafe/failsafe_private.h
@@ -166,7 +166,7 @@ struct fs_priv {
struct rte_ether_addr *mcast_addrs;
/* current capabilities */
struct rte_eth_dev_owner my_owner; /* Unique owner. */
- struct rte_intr_handle intr_handle; /* Port interrupt handle. */
+ struct rte_intr_handle *intr_handle; /* Port interrupt handle. */
/*
* Fail-safe state machine.
* This level will be tracking state of the EAL and eth
diff --git a/drivers/net/fm10k/fm10k_ethdev.c b/drivers/net/fm10k/fm10k_ethdev.c
index d256334bfd..c25c323140 100644
--- a/drivers/net/fm10k/fm10k_ethdev.c
+++ b/drivers/net/fm10k/fm10k_ethdev.c
@@ -32,7 +32,8 @@
#define BIT_MASK_PER_UINT32 ((1 << CHARS_PER_UINT32) - 1)
/* default 1:1 map from queue ID to interrupt vector ID */
-#define Q2V(pci_dev, queue_id) ((pci_dev)->intr_handle.intr_vec[queue_id])
+#define Q2V(pci_dev, queue_id) \
+ (rte_intr_vec_list_index_get((pci_dev)->intr_handle, queue_id))
/* First 64 Logical ports for PF/VMDQ, second 64 for Flow director */
#define MAX_LPORT_NUM 128
@@ -690,7 +691,7 @@ fm10k_dev_rx_init(struct rte_eth_dev *dev)
struct fm10k_hw *hw = FM10K_DEV_PRIVATE_TO_HW(dev->data->dev_private);
struct fm10k_macvlan_filter_info *macvlan;
struct rte_pci_device *pdev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pdev->intr_handle;
+ struct rte_intr_handle *intr_handle = pdev->intr_handle;
int i, ret;
struct fm10k_rx_queue *rxq;
uint64_t base_addr;
@@ -1158,7 +1159,7 @@ fm10k_dev_stop(struct rte_eth_dev *dev)
{
struct fm10k_hw *hw = FM10K_DEV_PRIVATE_TO_HW(dev->data->dev_private);
struct rte_pci_device *pdev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pdev->intr_handle;
+ struct rte_intr_handle *intr_handle = pdev->intr_handle;
int i;
PMD_INIT_FUNC_TRACE();
@@ -1187,8 +1188,7 @@ fm10k_dev_stop(struct rte_eth_dev *dev)
}
/* Clean datapath event and queue/vec mapping */
rte_intr_efd_disable(intr_handle);
- rte_free(intr_handle->intr_vec);
- intr_handle->intr_vec = NULL;
+ rte_intr_vec_list_free(intr_handle);
return 0;
}
@@ -2367,7 +2367,7 @@ fm10k_dev_rx_queue_intr_enable(struct rte_eth_dev *dev, uint16_t queue_id)
else
FM10K_WRITE_REG(hw, FM10K_VFITR(Q2V(pdev, queue_id)),
FM10K_ITR_AUTOMASK | FM10K_ITR_MASK_CLEAR);
- rte_intr_ack(&pdev->intr_handle);
+ rte_intr_ack(pdev->intr_handle);
return 0;
}
@@ -2392,7 +2392,7 @@ fm10k_dev_rxq_interrupt_setup(struct rte_eth_dev *dev)
{
struct fm10k_hw *hw = FM10K_DEV_PRIVATE_TO_HW(dev->data->dev_private);
struct rte_pci_device *pdev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pdev->intr_handle;
+ struct rte_intr_handle *intr_handle = pdev->intr_handle;
uint32_t intr_vector, vec;
uint16_t queue_id;
int result = 0;
@@ -2420,15 +2420,17 @@ fm10k_dev_rxq_interrupt_setup(struct rte_eth_dev *dev)
}
if (rte_intr_dp_is_en(intr_handle) && !result) {
- intr_handle->intr_vec = rte_zmalloc("intr_vec",
- dev->data->nb_rx_queues * sizeof(int), 0);
- if (intr_handle->intr_vec) {
+ if (!rte_intr_vec_list_alloc(intr_handle, "intr_vec",
+ dev->data->nb_rx_queues)) {
for (queue_id = 0, vec = FM10K_RX_VEC_START;
queue_id < dev->data->nb_rx_queues;
queue_id++) {
- intr_handle->intr_vec[queue_id] = vec;
- if (vec < intr_handle->nb_efd - 1
- + FM10K_RX_VEC_START)
+ rte_intr_vec_list_index_set(intr_handle,
+ queue_id, vec);
+ int nb_efd =
+ rte_intr_nb_efd_get(intr_handle);
+ if (vec < (uint32_t)nb_efd - 1 +
+ FM10K_RX_VEC_START)
vec++;
}
} else {
@@ -2787,7 +2789,7 @@ fm10k_dev_close(struct rte_eth_dev *dev)
{
struct fm10k_hw *hw = FM10K_DEV_PRIVATE_TO_HW(dev->data->dev_private);
struct rte_pci_device *pdev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pdev->intr_handle;
+ struct rte_intr_handle *intr_handle = pdev->intr_handle;
int ret;
PMD_INIT_FUNC_TRACE();
@@ -3053,7 +3055,7 @@ eth_fm10k_dev_init(struct rte_eth_dev *dev)
{
struct fm10k_hw *hw = FM10K_DEV_PRIVATE_TO_HW(dev->data->dev_private);
struct rte_pci_device *pdev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pdev->intr_handle;
+ struct rte_intr_handle *intr_handle = pdev->intr_handle;
int diag, i;
struct fm10k_macvlan_filter_info *macvlan;
diff --git a/drivers/net/hinic/hinic_pmd_ethdev.c b/drivers/net/hinic/hinic_pmd_ethdev.c
index 4cd5a85d5f..9cabd3e0c1 100644
--- a/drivers/net/hinic/hinic_pmd_ethdev.c
+++ b/drivers/net/hinic/hinic_pmd_ethdev.c
@@ -1228,13 +1228,13 @@ static void hinic_disable_interrupt(struct rte_eth_dev *dev)
hinic_set_msix_state(nic_dev->hwdev, 0, HINIC_MSIX_DISABLE);
/* disable rte interrupt */
- ret = rte_intr_disable(&pci_dev->intr_handle);
+ ret = rte_intr_disable(pci_dev->intr_handle);
if (ret)
PMD_DRV_LOG(ERR, "Disable intr failed: %d", ret);
do {
ret =
- rte_intr_callback_unregister(&pci_dev->intr_handle,
+ rte_intr_callback_unregister(pci_dev->intr_handle,
hinic_dev_interrupt_handler, dev);
if (ret >= 0) {
break;
@@ -3118,7 +3118,7 @@ static int hinic_func_init(struct rte_eth_dev *eth_dev)
}
/* register callback func to eal lib */
- rc = rte_intr_callback_register(&pci_dev->intr_handle,
+ rc = rte_intr_callback_register(pci_dev->intr_handle,
hinic_dev_interrupt_handler,
(void *)eth_dev);
if (rc) {
@@ -3128,7 +3128,7 @@ static int hinic_func_init(struct rte_eth_dev *eth_dev)
}
/* enable uio/vfio intr/eventfd mapping */
- rc = rte_intr_enable(&pci_dev->intr_handle);
+ rc = rte_intr_enable(pci_dev->intr_handle);
if (rc) {
PMD_DRV_LOG(ERR, "Enable rte interrupt failed, dev_name: %s",
eth_dev->data->name);
@@ -3158,7 +3158,7 @@ static int hinic_func_init(struct rte_eth_dev *eth_dev)
return 0;
enable_intr_fail:
- (void)rte_intr_callback_unregister(&pci_dev->intr_handle,
+ (void)rte_intr_callback_unregister(pci_dev->intr_handle,
hinic_dev_interrupt_handler,
(void *)eth_dev);
diff --git a/drivers/net/hns3/hns3_ethdev.c b/drivers/net/hns3/hns3_ethdev.c
index 9881659ceb..1437a07372 100644
--- a/drivers/net/hns3/hns3_ethdev.c
+++ b/drivers/net/hns3/hns3_ethdev.c
@@ -5224,7 +5224,7 @@ hns3_init_pf(struct rte_eth_dev *eth_dev)
hns3_config_all_msix_error(hw, true);
- ret = rte_intr_callback_register(&pci_dev->intr_handle,
+ ret = rte_intr_callback_register(pci_dev->intr_handle,
hns3_interrupt_handler,
eth_dev);
if (ret) {
@@ -5237,7 +5237,7 @@ hns3_init_pf(struct rte_eth_dev *eth_dev)
goto err_get_config;
/* Enable interrupt */
- rte_intr_enable(&pci_dev->intr_handle);
+ rte_intr_enable(pci_dev->intr_handle);
hns3_pf_enable_irq0(hw);
/* Get configuration */
@@ -5296,8 +5296,8 @@ hns3_init_pf(struct rte_eth_dev *eth_dev)
hns3_tqp_stats_uninit(hw);
err_get_config:
hns3_pf_disable_irq0(hw);
- rte_intr_disable(&pci_dev->intr_handle);
- hns3_intr_unregister(&pci_dev->intr_handle, hns3_interrupt_handler,
+ rte_intr_disable(pci_dev->intr_handle);
+ hns3_intr_unregister(pci_dev->intr_handle, hns3_interrupt_handler,
eth_dev);
err_intr_callback_register:
err_cmd_init:
@@ -5330,8 +5330,8 @@ hns3_uninit_pf(struct rte_eth_dev *eth_dev)
hns3_tqp_stats_uninit(hw);
hns3_config_mac_tnl_int(hw, false);
hns3_pf_disable_irq0(hw);
- rte_intr_disable(&pci_dev->intr_handle);
- hns3_intr_unregister(&pci_dev->intr_handle, hns3_interrupt_handler,
+ rte_intr_disable(pci_dev->intr_handle);
+ hns3_intr_unregister(pci_dev->intr_handle, hns3_interrupt_handler,
eth_dev);
hns3_config_all_msix_error(hw, false);
hns3_cmd_uninit(hw);
@@ -5665,7 +5665,7 @@ static int
hns3_map_rx_interrupt(struct rte_eth_dev *dev)
{
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
struct hns3_hw *hw = HNS3_DEV_PRIVATE_TO_HW(dev->data->dev_private);
uint16_t base = RTE_INTR_VEC_ZERO_OFFSET;
uint16_t vec = RTE_INTR_VEC_ZERO_OFFSET;
@@ -5688,16 +5688,13 @@ hns3_map_rx_interrupt(struct rte_eth_dev *dev)
if (rte_intr_efd_enable(intr_handle, intr_vector))
return -EINVAL;
- if (intr_handle->intr_vec == NULL) {
- intr_handle->intr_vec =
- rte_zmalloc("intr_vec",
- hw->used_rx_queues * sizeof(int), 0);
- if (intr_handle->intr_vec == NULL) {
- hns3_err(hw, "failed to allocate %u rx_queues intr_vec",
- hw->used_rx_queues);
- ret = -ENOMEM;
- goto alloc_intr_vec_error;
- }
+ /* Allocate vector list */
+ if (rte_intr_vec_list_alloc(intr_handle, "intr_vec",
+ hw->used_rx_queues)) {
+ hns3_err(hw, "failed to allocate %u rx_queues intr_vec",
+ hw->used_rx_queues);
+ ret = -ENOMEM;
+ goto alloc_intr_vec_error;
}
if (rte_intr_allow_others(intr_handle)) {
@@ -5710,20 +5707,21 @@ hns3_map_rx_interrupt(struct rte_eth_dev *dev)
HNS3_RING_TYPE_RX, q_id);
if (ret)
goto bind_vector_error;
- intr_handle->intr_vec[q_id] = vec;
+
+ if (rte_intr_vec_list_index_set(intr_handle, q_id, vec))
+ goto bind_vector_error;
/*
* If there are not enough efds (e.g. not enough interrupt),
* remaining queues will be bond to the last interrupt.
*/
- if (vec < base + intr_handle->nb_efd - 1)
+ if (vec < base + rte_intr_nb_efd_get(intr_handle) - 1)
vec++;
}
rte_intr_enable(intr_handle);
return 0;
bind_vector_error:
- rte_free(intr_handle->intr_vec);
- intr_handle->intr_vec = NULL;
+ rte_intr_vec_list_free(intr_handle);
alloc_intr_vec_error:
rte_intr_efd_disable(intr_handle);
return ret;
@@ -5734,7 +5732,7 @@ hns3_restore_rx_interrupt(struct hns3_hw *hw)
{
struct rte_eth_dev *dev = &rte_eth_devices[hw->data->port_id];
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
uint16_t q_id;
int ret;
@@ -5744,8 +5742,9 @@ hns3_restore_rx_interrupt(struct hns3_hw *hw)
if (rte_intr_dp_is_en(intr_handle)) {
for (q_id = 0; q_id < hw->used_rx_queues; q_id++) {
ret = hns3_bind_ring_with_vector(hw,
- intr_handle->intr_vec[q_id], true,
- HNS3_RING_TYPE_RX, q_id);
+ rte_intr_vec_list_index_get(intr_handle,
+ q_id),
+ true, HNS3_RING_TYPE_RX, q_id);
if (ret)
return ret;
}
@@ -5888,7 +5887,7 @@ static void
hns3_unmap_rx_interrupt(struct rte_eth_dev *dev)
{
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
struct hns3_adapter *hns = dev->data->dev_private;
struct hns3_hw *hw = &hns->hw;
uint8_t base = RTE_INTR_VEC_ZERO_OFFSET;
@@ -5908,16 +5907,14 @@ hns3_unmap_rx_interrupt(struct rte_eth_dev *dev)
(void)hns3_bind_ring_with_vector(hw, vec, false,
HNS3_RING_TYPE_RX,
q_id);
- if (vec < base + intr_handle->nb_efd - 1)
+ if (vec < base + rte_intr_nb_efd_get(intr_handle)
+ - 1)
vec++;
}
}
/* Clean datapath event and queue/vec mapping */
rte_intr_efd_disable(intr_handle);
- if (intr_handle->intr_vec) {
- rte_free(intr_handle->intr_vec);
- intr_handle->intr_vec = NULL;
- }
+ rte_intr_vec_list_free(intr_handle);
}
static int
diff --git a/drivers/net/hns3/hns3_ethdev_vf.c b/drivers/net/hns3/hns3_ethdev_vf.c
index c0c1f1c4c1..873924927c 100644
--- a/drivers/net/hns3/hns3_ethdev_vf.c
+++ b/drivers/net/hns3/hns3_ethdev_vf.c
@@ -1956,7 +1956,7 @@ hns3vf_init_vf(struct rte_eth_dev *eth_dev)
hns3vf_clear_event_cause(hw, 0);
- ret = rte_intr_callback_register(&pci_dev->intr_handle,
+ ret = rte_intr_callback_register(pci_dev->intr_handle,
hns3vf_interrupt_handler, eth_dev);
if (ret) {
PMD_INIT_LOG(ERR, "Failed to register intr: %d", ret);
@@ -1964,7 +1964,7 @@ hns3vf_init_vf(struct rte_eth_dev *eth_dev)
}
/* Enable interrupt */
- rte_intr_enable(&pci_dev->intr_handle);
+ rte_intr_enable(pci_dev->intr_handle);
hns3vf_enable_irq0(hw);
/* Get configuration from PF */
@@ -2016,8 +2016,8 @@ hns3vf_init_vf(struct rte_eth_dev *eth_dev)
err_get_config:
hns3vf_disable_irq0(hw);
- rte_intr_disable(&pci_dev->intr_handle);
- hns3_intr_unregister(&pci_dev->intr_handle, hns3vf_interrupt_handler,
+ rte_intr_disable(pci_dev->intr_handle);
+ hns3_intr_unregister(pci_dev->intr_handle, hns3vf_interrupt_handler,
eth_dev);
err_intr_callback_register:
err_cmd_init:
@@ -2045,8 +2045,8 @@ hns3vf_uninit_vf(struct rte_eth_dev *eth_dev)
hns3_flow_uninit(eth_dev);
hns3_tqp_stats_uninit(hw);
hns3vf_disable_irq0(hw);
- rte_intr_disable(&pci_dev->intr_handle);
- hns3_intr_unregister(&pci_dev->intr_handle, hns3vf_interrupt_handler,
+ rte_intr_disable(pci_dev->intr_handle);
+ hns3_intr_unregister(pci_dev->intr_handle, hns3vf_interrupt_handler,
eth_dev);
hns3_cmd_uninit(hw);
hns3_cmd_destroy_queue(hw);
@@ -2089,7 +2089,7 @@ hns3vf_unmap_rx_interrupt(struct rte_eth_dev *dev)
{
struct hns3_hw *hw = HNS3_DEV_PRIVATE_TO_HW(dev->data->dev_private);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
uint8_t base = RTE_INTR_VEC_ZERO_OFFSET;
uint8_t vec = RTE_INTR_VEC_ZERO_OFFSET;
uint16_t q_id;
@@ -2107,16 +2107,16 @@ hns3vf_unmap_rx_interrupt(struct rte_eth_dev *dev)
(void)hns3vf_bind_ring_with_vector(hw, vec, false,
HNS3_RING_TYPE_RX,
q_id);
- if (vec < base + intr_handle->nb_efd - 1)
+ if (vec < base + rte_intr_nb_efd_get(intr_handle)
+ - 1)
vec++;
}
}
/* Clean datapath event and queue/vec mapping */
rte_intr_efd_disable(intr_handle);
- if (intr_handle->intr_vec) {
- rte_free(intr_handle->intr_vec);
- intr_handle->intr_vec = NULL;
- }
+
+ /* Cleanup vector list */
+ rte_intr_vec_list_free(intr_handle);
}
static int
@@ -2272,7 +2272,7 @@ static int
hns3vf_map_rx_interrupt(struct rte_eth_dev *dev)
{
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
struct hns3_hw *hw = HNS3_DEV_PRIVATE_TO_HW(dev->data->dev_private);
uint8_t base = RTE_INTR_VEC_ZERO_OFFSET;
uint8_t vec = RTE_INTR_VEC_ZERO_OFFSET;
@@ -2295,16 +2295,13 @@ hns3vf_map_rx_interrupt(struct rte_eth_dev *dev)
if (rte_intr_efd_enable(intr_handle, intr_vector))
return -EINVAL;
- if (intr_handle->intr_vec == NULL) {
- intr_handle->intr_vec =
- rte_zmalloc("intr_vec",
- hw->used_rx_queues * sizeof(int), 0);
- if (intr_handle->intr_vec == NULL) {
- hns3_err(hw, "Failed to allocate %u rx_queues"
- " intr_vec", hw->used_rx_queues);
- ret = -ENOMEM;
- goto vf_alloc_intr_vec_error;
- }
+ /* Allocate vector list */
+ if (rte_intr_vec_list_alloc(intr_handle, "intr_vec",
+ hw->used_rx_queues)) {
+ hns3_err(hw, "Failed to allocate %u rx_queues"
+ " intr_vec", hw->used_rx_queues);
+ ret = -ENOMEM;
+ goto vf_alloc_intr_vec_error;
}
if (rte_intr_allow_others(intr_handle)) {
@@ -2317,20 +2314,22 @@ hns3vf_map_rx_interrupt(struct rte_eth_dev *dev)
HNS3_RING_TYPE_RX, q_id);
if (ret)
goto vf_bind_vector_error;
- intr_handle->intr_vec[q_id] = vec;
+
+ if (rte_intr_vec_list_index_set(intr_handle, q_id, vec))
+ goto vf_bind_vector_error;
+
/*
* If there are not enough efds (e.g. not enough interrupt),
* remaining queues will be bond to the last interrupt.
*/
- if (vec < base + intr_handle->nb_efd - 1)
+ if (vec < base + rte_intr_nb_efd_get(intr_handle) - 1)
vec++;
}
rte_intr_enable(intr_handle);
return 0;
vf_bind_vector_error:
- rte_free(intr_handle->intr_vec);
- intr_handle->intr_vec = NULL;
+ rte_intr_vec_list_free(intr_handle);
vf_alloc_intr_vec_error:
rte_intr_efd_disable(intr_handle);
return ret;
@@ -2341,7 +2340,7 @@ hns3vf_restore_rx_interrupt(struct hns3_hw *hw)
{
struct rte_eth_dev *dev = &rte_eth_devices[hw->data->port_id];
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
uint16_t q_id;
int ret;
@@ -2351,8 +2350,9 @@ hns3vf_restore_rx_interrupt(struct hns3_hw *hw)
if (rte_intr_dp_is_en(intr_handle)) {
for (q_id = 0; q_id < hw->used_rx_queues; q_id++) {
ret = hns3vf_bind_ring_with_vector(hw,
- intr_handle->intr_vec[q_id], true,
- HNS3_RING_TYPE_RX, q_id);
+ rte_intr_vec_list_index_get(intr_handle,
+ q_id),
+ true, HNS3_RING_TYPE_RX, q_id);
if (ret)
return ret;
}
@@ -2816,7 +2816,7 @@ hns3vf_reinit_dev(struct hns3_adapter *hns)
int ret;
if (hw->reset.level == HNS3_VF_FULL_RESET) {
- rte_intr_disable(&pci_dev->intr_handle);
+ rte_intr_disable(pci_dev->intr_handle);
ret = hns3vf_set_bus_master(pci_dev, true);
if (ret < 0) {
hns3_err(hw, "failed to set pci bus, ret = %d", ret);
@@ -2842,7 +2842,7 @@ hns3vf_reinit_dev(struct hns3_adapter *hns)
hns3_err(hw, "Failed to enable msix");
}
- rte_intr_enable(&pci_dev->intr_handle);
+ rte_intr_enable(pci_dev->intr_handle);
}
ret = hns3_reset_all_tqps(hns);
diff --git a/drivers/net/hns3/hns3_rxtx.c b/drivers/net/hns3/hns3_rxtx.c
index b633aabb14..ceb98025f8 100644
--- a/drivers/net/hns3/hns3_rxtx.c
+++ b/drivers/net/hns3/hns3_rxtx.c
@@ -1050,7 +1050,7 @@ int
hns3_dev_rx_queue_intr_enable(struct rte_eth_dev *dev, uint16_t queue_id)
{
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
struct hns3_hw *hw = HNS3_DEV_PRIVATE_TO_HW(dev->data->dev_private);
if (dev->data->dev_conf.intr_conf.rxq == 0)
diff --git a/drivers/net/i40e/i40e_ethdev.c b/drivers/net/i40e/i40e_ethdev.c
index 293df887bf..62e374d19e 100644
--- a/drivers/net/i40e/i40e_ethdev.c
+++ b/drivers/net/i40e/i40e_ethdev.c
@@ -1440,7 +1440,7 @@ eth_i40e_dev_init(struct rte_eth_dev *dev, void *init_params __rte_unused)
}
i40e_set_default_ptype_table(dev);
pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- intr_handle = &pci_dev->intr_handle;
+ intr_handle = pci_dev->intr_handle;
rte_eth_copy_pci_info(dev, pci_dev);
@@ -1972,7 +1972,7 @@ i40e_vsi_queues_unbind_intr(struct i40e_vsi *vsi)
{
struct rte_eth_dev *dev = I40E_VSI_TO_ETH_DEV(vsi);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
struct i40e_hw *hw = I40E_VSI_TO_HW(vsi);
uint16_t msix_vect = vsi->msix_intr;
uint16_t i;
@@ -2088,10 +2088,11 @@ i40e_vsi_queues_bind_intr(struct i40e_vsi *vsi, uint16_t itr_idx)
{
struct rte_eth_dev *dev = I40E_VSI_TO_ETH_DEV(vsi);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
struct i40e_hw *hw = I40E_VSI_TO_HW(vsi);
uint16_t msix_vect = vsi->msix_intr;
- uint16_t nb_msix = RTE_MIN(vsi->nb_msix, intr_handle->nb_efd);
+ uint16_t nb_msix = RTE_MIN(vsi->nb_msix,
+ rte_intr_nb_efd_get(intr_handle));
uint16_t queue_idx = 0;
int record = 0;
int i;
@@ -2141,8 +2142,8 @@ i40e_vsi_queues_bind_intr(struct i40e_vsi *vsi, uint16_t itr_idx)
vsi->nb_used_qps - i,
itr_idx);
for (; !!record && i < vsi->nb_used_qps; i++)
- intr_handle->intr_vec[queue_idx + i] =
- msix_vect;
+ rte_intr_vec_list_index_set(intr_handle,
+ queue_idx + i, msix_vect);
break;
}
/* 1:1 queue/msix_vect mapping */
@@ -2150,7 +2151,9 @@ i40e_vsi_queues_bind_intr(struct i40e_vsi *vsi, uint16_t itr_idx)
vsi->base_queue + i, 1,
itr_idx);
if (!!record)
- intr_handle->intr_vec[queue_idx + i] = msix_vect;
+ if (rte_intr_vec_list_index_set(intr_handle,
+ queue_idx + i, msix_vect))
+ return -rte_errno;
msix_vect++;
nb_msix--;
@@ -2164,7 +2167,7 @@ i40e_vsi_enable_queues_intr(struct i40e_vsi *vsi)
{
struct rte_eth_dev *dev = I40E_VSI_TO_ETH_DEV(vsi);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
struct i40e_hw *hw = I40E_VSI_TO_HW(vsi);
struct i40e_pf *pf = I40E_VSI_TO_PF(vsi);
uint16_t msix_intr, i;
@@ -2191,7 +2194,7 @@ i40e_vsi_disable_queues_intr(struct i40e_vsi *vsi)
{
struct rte_eth_dev *dev = I40E_VSI_TO_ETH_DEV(vsi);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
struct i40e_hw *hw = I40E_VSI_TO_HW(vsi);
struct i40e_pf *pf = I40E_VSI_TO_PF(vsi);
uint16_t msix_intr, i;
@@ -2357,7 +2360,7 @@ i40e_dev_start(struct rte_eth_dev *dev)
struct i40e_vsi *main_vsi = pf->main_vsi;
int ret, i;
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
uint32_t intr_vector = 0;
struct i40e_vsi *vsi;
uint16_t nb_rxq, nb_txq;
@@ -2375,12 +2378,9 @@ i40e_dev_start(struct rte_eth_dev *dev)
return ret;
}
- if (rte_intr_dp_is_en(intr_handle) && !intr_handle->intr_vec) {
- intr_handle->intr_vec =
- rte_zmalloc("intr_vec",
- dev->data->nb_rx_queues * sizeof(int),
- 0);
- if (!intr_handle->intr_vec) {
+ if (rte_intr_dp_is_en(intr_handle)) {
+ if (rte_intr_vec_list_alloc(intr_handle, "intr_vec",
+ dev->data->nb_rx_queues)) {
PMD_INIT_LOG(ERR,
"Failed to allocate %d rx_queues intr_vec",
dev->data->nb_rx_queues);
@@ -2521,7 +2521,7 @@ i40e_dev_stop(struct rte_eth_dev *dev)
struct i40e_hw *hw = I40E_DEV_PRIVATE_TO_HW(dev->data->dev_private);
struct i40e_vsi *main_vsi = pf->main_vsi;
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
int i;
if (hw->adapter_stopped == 1)
@@ -2562,10 +2562,9 @@ i40e_dev_stop(struct rte_eth_dev *dev)
/* Clean datapath event and queue/vec mapping */
rte_intr_efd_disable(intr_handle);
- if (intr_handle->intr_vec) {
- rte_free(intr_handle->intr_vec);
- intr_handle->intr_vec = NULL;
- }
+
+ /* Cleanup vector list */
+ rte_intr_vec_list_free(intr_handle);
/* reset hierarchy commit */
pf->tm_conf.committed = false;
@@ -2584,7 +2583,7 @@ i40e_dev_close(struct rte_eth_dev *dev)
struct i40e_pf *pf = I40E_DEV_PRIVATE_TO_PF(dev->data->dev_private);
struct i40e_hw *hw = I40E_DEV_PRIVATE_TO_HW(dev->data->dev_private);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
struct i40e_filter_control_settings settings;
struct rte_flow *p_flow;
uint32_t reg;
@@ -11068,11 +11067,11 @@ static int
i40e_dev_rx_queue_intr_enable(struct rte_eth_dev *dev, uint16_t queue_id)
{
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
struct i40e_hw *hw = I40E_DEV_PRIVATE_TO_HW(dev->data->dev_private);
uint16_t msix_intr;
- msix_intr = intr_handle->intr_vec[queue_id];
+ msix_intr = rte_intr_vec_list_index_get(intr_handle, queue_id);
if (msix_intr == I40E_MISC_VEC_ID)
I40E_WRITE_REG(hw, I40E_PFINT_DYN_CTL0,
I40E_PFINT_DYN_CTL0_INTENA_MASK |
@@ -11087,7 +11086,7 @@ i40e_dev_rx_queue_intr_enable(struct rte_eth_dev *dev, uint16_t queue_id)
I40E_PFINT_DYN_CTLN_ITR_INDX_MASK);
I40E_WRITE_FLUSH(hw);
- rte_intr_ack(&pci_dev->intr_handle);
+ rte_intr_ack(pci_dev->intr_handle);
return 0;
}
@@ -11096,11 +11095,11 @@ static int
i40e_dev_rx_queue_intr_disable(struct rte_eth_dev *dev, uint16_t queue_id)
{
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
struct i40e_hw *hw = I40E_DEV_PRIVATE_TO_HW(dev->data->dev_private);
uint16_t msix_intr;
- msix_intr = intr_handle->intr_vec[queue_id];
+ msix_intr = rte_intr_vec_list_index_get(intr_handle, queue_id);
if (msix_intr == I40E_MISC_VEC_ID)
I40E_WRITE_REG(hw, I40E_PFINT_DYN_CTL0,
I40E_PFINT_DYN_CTL0_ITR_INDX_MASK);
diff --git a/drivers/net/iavf/iavf_ethdev.c b/drivers/net/iavf/iavf_ethdev.c
index b2b413c247..f892306f18 100644
--- a/drivers/net/iavf/iavf_ethdev.c
+++ b/drivers/net/iavf/iavf_ethdev.c
@@ -646,17 +646,16 @@ static int iavf_config_rx_queues_irqs(struct rte_eth_dev *dev,
return -1;
}
- if (rte_intr_dp_is_en(intr_handle) && !intr_handle->intr_vec) {
- intr_handle->intr_vec =
- rte_zmalloc("intr_vec",
- dev->data->nb_rx_queues * sizeof(int), 0);
- if (!intr_handle->intr_vec) {
+ if (rte_intr_dp_is_en(intr_handle)) {
+ if (rte_intr_vec_list_alloc(intr_handle, "intr_vec",
+ dev->data->nb_rx_queues)) {
PMD_DRV_LOG(ERR, "Failed to allocate %d rx intr_vec",
dev->data->nb_rx_queues);
return -1;
}
}
+
qv_map = rte_zmalloc("qv_map",
dev->data->nb_rx_queues * sizeof(struct iavf_qv_map), 0);
if (!qv_map) {
@@ -716,7 +715,8 @@ static int iavf_config_rx_queues_irqs(struct rte_eth_dev *dev,
for (i = 0; i < dev->data->nb_rx_queues; i++) {
qv_map[i].queue_id = i;
qv_map[i].vector_id = vf->msix_base;
- intr_handle->intr_vec[i] = IAVF_MISC_VEC_ID;
+ rte_intr_vec_list_index_set(intr_handle,
+ i, IAVF_MISC_VEC_ID);
}
vf->qv_map = qv_map;
PMD_DRV_LOG(DEBUG,
@@ -726,14 +726,16 @@ static int iavf_config_rx_queues_irqs(struct rte_eth_dev *dev,
/* If Rx interrupt is reuquired, and we can use
* multi interrupts, then the vec is from 1
*/
- vf->nb_msix = RTE_MIN(intr_handle->nb_efd,
- (uint16_t)(vf->vf_res->max_vectors - 1));
+ vf->nb_msix =
+ RTE_MIN(rte_intr_nb_efd_get(intr_handle),
+ (uint16_t)(vf->vf_res->max_vectors - 1));
vf->msix_base = IAVF_RX_VEC_START;
vec = IAVF_RX_VEC_START;
for (i = 0; i < dev->data->nb_rx_queues; i++) {
qv_map[i].queue_id = i;
qv_map[i].vector_id = vec;
- intr_handle->intr_vec[i] = vec++;
+ rte_intr_vec_list_index_set(intr_handle,
+ i, vec++);
if (vec >= vf->nb_msix + IAVF_RX_VEC_START)
vec = IAVF_RX_VEC_START;
}
@@ -775,8 +777,7 @@ static int iavf_config_rx_queues_irqs(struct rte_eth_dev *dev,
vf->qv_map = NULL;
qv_map_alloc_err:
- rte_free(intr_handle->intr_vec);
- intr_handle->intr_vec = NULL;
+ rte_intr_vec_list_free(intr_handle);
return -1;
}
@@ -912,10 +913,7 @@ iavf_dev_stop(struct rte_eth_dev *dev)
/* Disable the interrupt for Rx */
rte_intr_efd_disable(intr_handle);
/* Rx interrupt vector mapping free */
- if (intr_handle->intr_vec) {
- rte_free(intr_handle->intr_vec);
- intr_handle->intr_vec = NULL;
- }
+ rte_intr_vec_list_free(intr_handle);
/* remove all mac addrs */
iavf_add_del_all_mac_addr(adapter, false);
@@ -1639,7 +1637,8 @@ iavf_dev_rx_queue_intr_enable(struct rte_eth_dev *dev, uint16_t queue_id)
struct iavf_info *vf = IAVF_DEV_PRIVATE_TO_VF(adapter);
uint16_t msix_intr;
- msix_intr = pci_dev->intr_handle.intr_vec[queue_id];
+ msix_intr = rte_intr_vec_list_index_get(pci_dev->intr_handle,
+ queue_id);
if (msix_intr == IAVF_MISC_VEC_ID) {
PMD_DRV_LOG(INFO, "MISC is also enabled for control");
IAVF_WRITE_REG(hw, IAVF_VFINT_DYN_CTL01,
@@ -1658,7 +1657,7 @@ iavf_dev_rx_queue_intr_enable(struct rte_eth_dev *dev, uint16_t queue_id)
IAVF_WRITE_FLUSH(hw);
if (vf->vf_res->vf_cap_flags & VIRTCHNL_VF_OFFLOAD_WB_ON_ITR)
- rte_intr_ack(&pci_dev->intr_handle);
+ rte_intr_ack(pci_dev->intr_handle);
return 0;
}
@@ -1670,7 +1669,8 @@ iavf_dev_rx_queue_intr_disable(struct rte_eth_dev *dev, uint16_t queue_id)
struct iavf_hw *hw = IAVF_DEV_PRIVATE_TO_HW(dev->data->dev_private);
uint16_t msix_intr;
- msix_intr = pci_dev->intr_handle.intr_vec[queue_id];
+ msix_intr = rte_intr_vec_list_index_get(pci_dev->intr_handle,
+ queue_id);
if (msix_intr == IAVF_MISC_VEC_ID) {
PMD_DRV_LOG(ERR, "MISC is used for control, cannot disable it");
return -EIO;
@@ -2355,12 +2355,12 @@ iavf_dev_init(struct rte_eth_dev *eth_dev)
if (vf->vf_res->vf_cap_flags & VIRTCHNL_VF_OFFLOAD_WB_ON_ITR) {
/* register callback func to eal lib */
- rte_intr_callback_register(&pci_dev->intr_handle,
+ rte_intr_callback_register(pci_dev->intr_handle,
iavf_dev_interrupt_handler,
(void *)eth_dev);
/* enable uio intr after callback register */
- rte_intr_enable(&pci_dev->intr_handle);
+ rte_intr_enable(pci_dev->intr_handle);
} else {
rte_eal_alarm_set(IAVF_ALARM_INTERVAL,
iavf_dev_alarm_handler, eth_dev);
@@ -2394,7 +2394,7 @@ iavf_dev_close(struct rte_eth_dev *dev)
{
struct iavf_hw *hw = IAVF_DEV_PRIVATE_TO_HW(dev->data->dev_private);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
struct iavf_adapter *adapter =
IAVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
struct iavf_info *vf = IAVF_DEV_PRIVATE_TO_VF(dev->data->dev_private);
diff --git a/drivers/net/iavf/iavf_vchnl.c b/drivers/net/iavf/iavf_vchnl.c
index 0f4dd21d44..bb65dbf04f 100644
--- a/drivers/net/iavf/iavf_vchnl.c
+++ b/drivers/net/iavf/iavf_vchnl.c
@@ -1685,9 +1685,9 @@ iavf_request_queues(struct rte_eth_dev *dev, uint16_t num)
/* disable interrupt to avoid the admin queue message to be read
* before iavf_read_msg_from_pf.
*/
- rte_intr_disable(&pci_dev->intr_handle);
+ rte_intr_disable(pci_dev->intr_handle);
err = iavf_execute_vf_cmd(adapter, &args);
- rte_intr_enable(&pci_dev->intr_handle);
+ rte_intr_enable(pci_dev->intr_handle);
} else {
rte_eal_alarm_cancel(iavf_dev_alarm_handler, dev);
err = iavf_execute_vf_cmd(adapter, &args);
diff --git a/drivers/net/ice/ice_dcf.c b/drivers/net/ice/ice_dcf.c
index 7b7df5eebb..084f7a53db 100644
--- a/drivers/net/ice/ice_dcf.c
+++ b/drivers/net/ice/ice_dcf.c
@@ -539,7 +539,7 @@ ice_dcf_handle_vsi_update_event(struct ice_dcf_hw *hw)
rte_spinlock_lock(&hw->vc_cmd_send_lock);
- rte_intr_disable(&pci_dev->intr_handle);
+ rte_intr_disable(pci_dev->intr_handle);
ice_dcf_disable_irq0(hw);
for (;;) {
@@ -555,7 +555,7 @@ ice_dcf_handle_vsi_update_event(struct ice_dcf_hw *hw)
rte_delay_ms(ICE_DCF_ARQ_CHECK_TIME);
}
- rte_intr_enable(&pci_dev->intr_handle);
+ rte_intr_enable(pci_dev->intr_handle);
ice_dcf_enable_irq0(hw);
rte_spinlock_unlock(&hw->vc_cmd_send_lock);
@@ -694,9 +694,9 @@ ice_dcf_init_hw(struct rte_eth_dev *eth_dev, struct ice_dcf_hw *hw)
}
hw->eth_dev = eth_dev;
- rte_intr_callback_register(&pci_dev->intr_handle,
+ rte_intr_callback_register(pci_dev->intr_handle,
ice_dcf_dev_interrupt_handler, hw);
- rte_intr_enable(&pci_dev->intr_handle);
+ rte_intr_enable(pci_dev->intr_handle);
ice_dcf_enable_irq0(hw);
return 0;
@@ -718,7 +718,7 @@ void
ice_dcf_uninit_hw(struct rte_eth_dev *eth_dev, struct ice_dcf_hw *hw)
{
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
if (hw->vf_res->vf_cap_flags & VIRTCHNL_VF_OFFLOAD_QOS)
if (hw->tm_conf.committed) {
diff --git a/drivers/net/ice/ice_dcf_ethdev.c b/drivers/net/ice/ice_dcf_ethdev.c
index 7cb8066416..7c71a48010 100644
--- a/drivers/net/ice/ice_dcf_ethdev.c
+++ b/drivers/net/ice/ice_dcf_ethdev.c
@@ -144,11 +144,9 @@ ice_dcf_config_rx_queues_irqs(struct rte_eth_dev *dev,
return -1;
}
- if (rte_intr_dp_is_en(intr_handle) && !intr_handle->intr_vec) {
- intr_handle->intr_vec =
- rte_zmalloc("intr_vec",
- dev->data->nb_rx_queues * sizeof(int), 0);
- if (!intr_handle->intr_vec) {
+ if (rte_intr_dp_is_en(intr_handle)) {
+ if (rte_intr_vec_list_alloc(intr_handle, "intr_vec",
+ dev->data->nb_rx_queues)) {
PMD_DRV_LOG(ERR, "Failed to allocate %d rx intr_vec",
dev->data->nb_rx_queues);
return -1;
@@ -198,7 +196,8 @@ ice_dcf_config_rx_queues_irqs(struct rte_eth_dev *dev,
hw->msix_base = IAVF_MISC_VEC_ID;
for (i = 0; i < dev->data->nb_rx_queues; i++) {
hw->rxq_map[hw->msix_base] |= 1 << i;
- intr_handle->intr_vec[i] = IAVF_MISC_VEC_ID;
+ rte_intr_vec_list_index_set(intr_handle,
+ i, IAVF_MISC_VEC_ID);
}
PMD_DRV_LOG(DEBUG,
"vector %u are mapping to all Rx queues",
@@ -208,12 +207,13 @@ ice_dcf_config_rx_queues_irqs(struct rte_eth_dev *dev,
* multi interrupts, then the vec is from 1
*/
hw->nb_msix = RTE_MIN(hw->vf_res->max_vectors,
- intr_handle->nb_efd);
+ rte_intr_nb_efd_get(intr_handle));
hw->msix_base = IAVF_MISC_VEC_ID;
vec = IAVF_MISC_VEC_ID;
for (i = 0; i < dev->data->nb_rx_queues; i++) {
hw->rxq_map[vec] |= 1 << i;
- intr_handle->intr_vec[i] = vec++;
+ rte_intr_vec_list_index_set(intr_handle,
+ i, vec++);
if (vec >= hw->nb_msix)
vec = IAVF_RX_VEC_START;
}
@@ -623,10 +623,7 @@ ice_dcf_dev_stop(struct rte_eth_dev *dev)
ice_dcf_stop_queues(dev);
rte_intr_efd_disable(intr_handle);
- if (intr_handle->intr_vec) {
- rte_free(intr_handle->intr_vec);
- intr_handle->intr_vec = NULL;
- }
+ rte_intr_vec_list_free(intr_handle);
ice_dcf_add_del_all_mac_addr(&dcf_ad->real_hw, false);
dev->data->dev_link.link_status = RTE_ETH_LINK_DOWN;
diff --git a/drivers/net/ice/ice_ethdev.c b/drivers/net/ice/ice_ethdev.c
index 6a6637a15a..ef6ee1c386 100644
--- a/drivers/net/ice/ice_ethdev.c
+++ b/drivers/net/ice/ice_ethdev.c
@@ -2178,7 +2178,7 @@ ice_dev_init(struct rte_eth_dev *dev)
ice_set_default_ptype_table(dev);
pci_dev = RTE_DEV_TO_PCI(dev->device);
- intr_handle = &pci_dev->intr_handle;
+ intr_handle = pci_dev->intr_handle;
pf->adapter = ICE_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
pf->dev_data = dev->data;
@@ -2375,7 +2375,7 @@ ice_vsi_disable_queues_intr(struct ice_vsi *vsi)
{
struct rte_eth_dev *dev = &rte_eth_devices[vsi->adapter->pf.dev_data->port_id];
struct rte_pci_device *pci_dev = ICE_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
struct ice_hw *hw = ICE_VSI_TO_HW(vsi);
uint16_t msix_intr, i;
@@ -2405,7 +2405,7 @@ ice_dev_stop(struct rte_eth_dev *dev)
struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
struct ice_vsi *main_vsi = pf->main_vsi;
struct rte_pci_device *pci_dev = ICE_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
uint16_t i;
/* avoid stopping again */
@@ -2430,10 +2430,7 @@ ice_dev_stop(struct rte_eth_dev *dev)
/* Clean datapath event and queue/vec mapping */
rte_intr_efd_disable(intr_handle);
- if (intr_handle->intr_vec) {
- rte_free(intr_handle->intr_vec);
- intr_handle->intr_vec = NULL;
- }
+ rte_intr_vec_list_free(intr_handle);
pf->adapter_stopped = true;
dev->data->dev_started = 0;
@@ -2447,7 +2444,7 @@ ice_dev_close(struct rte_eth_dev *dev)
struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
struct ice_hw *hw = ICE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
struct ice_adapter *ad =
ICE_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
int ret;
@@ -3345,10 +3342,11 @@ ice_vsi_queues_bind_intr(struct ice_vsi *vsi)
{
struct rte_eth_dev *dev = &rte_eth_devices[vsi->adapter->pf.dev_data->port_id];
struct rte_pci_device *pci_dev = ICE_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
struct ice_hw *hw = ICE_VSI_TO_HW(vsi);
uint16_t msix_vect = vsi->msix_intr;
- uint16_t nb_msix = RTE_MIN(vsi->nb_msix, intr_handle->nb_efd);
+ uint16_t nb_msix = RTE_MIN(vsi->nb_msix,
+ rte_intr_nb_efd_get(intr_handle));
uint16_t queue_idx = 0;
int record = 0;
int i;
@@ -3376,8 +3374,9 @@ ice_vsi_queues_bind_intr(struct ice_vsi *vsi)
vsi->nb_used_qps - i);
for (; !!record && i < vsi->nb_used_qps; i++)
- intr_handle->intr_vec[queue_idx + i] =
- msix_vect;
+ rte_intr_vec_list_index_set(intr_handle,
+ queue_idx + i, msix_vect);
+
break;
}
@@ -3386,7 +3385,9 @@ ice_vsi_queues_bind_intr(struct ice_vsi *vsi)
vsi->base_queue + i, 1);
if (!!record)
- intr_handle->intr_vec[queue_idx + i] = msix_vect;
+ rte_intr_vec_list_index_set(intr_handle,
+ queue_idx + i,
+ msix_vect);
msix_vect++;
nb_msix--;
@@ -3398,7 +3399,7 @@ ice_vsi_enable_queues_intr(struct ice_vsi *vsi)
{
struct rte_eth_dev *dev = &rte_eth_devices[vsi->adapter->pf.dev_data->port_id];
struct rte_pci_device *pci_dev = ICE_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
struct ice_hw *hw = ICE_VSI_TO_HW(vsi);
uint16_t msix_intr, i;
@@ -3424,7 +3425,7 @@ ice_rxq_intr_setup(struct rte_eth_dev *dev)
{
struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
struct rte_pci_device *pci_dev = ICE_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
struct ice_vsi *vsi = pf->main_vsi;
uint32_t intr_vector = 0;
@@ -3444,11 +3445,9 @@ ice_rxq_intr_setup(struct rte_eth_dev *dev)
return -1;
}
- if (rte_intr_dp_is_en(intr_handle) && !intr_handle->intr_vec) {
- intr_handle->intr_vec =
- rte_zmalloc(NULL, dev->data->nb_rx_queues * sizeof(int),
- 0);
- if (!intr_handle->intr_vec) {
+ if (rte_intr_dp_is_en(intr_handle)) {
+ if (rte_intr_vec_list_alloc(intr_handle, NULL,
+ dev->data->nb_rx_queues)) {
PMD_DRV_LOG(ERR,
"Failed to allocate %d rx_queues intr_vec",
dev->data->nb_rx_queues);
@@ -4755,19 +4754,19 @@ static int ice_rx_queue_intr_enable(struct rte_eth_dev *dev,
uint16_t queue_id)
{
struct rte_pci_device *pci_dev = ICE_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
struct ice_hw *hw = ICE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
uint32_t val;
uint16_t msix_intr;
- msix_intr = intr_handle->intr_vec[queue_id];
+ msix_intr = rte_intr_vec_list_index_get(intr_handle, queue_id);
val = GLINT_DYN_CTL_INTENA_M | GLINT_DYN_CTL_CLEARPBA_M |
GLINT_DYN_CTL_ITR_INDX_M;
val &= ~GLINT_DYN_CTL_WB_ON_ITR_M;
ICE_WRITE_REG(hw, GLINT_DYN_CTL(msix_intr), val);
- rte_intr_ack(&pci_dev->intr_handle);
+ rte_intr_ack(pci_dev->intr_handle);
return 0;
}
@@ -4776,11 +4775,11 @@ static int ice_rx_queue_intr_disable(struct rte_eth_dev *dev,
uint16_t queue_id)
{
struct rte_pci_device *pci_dev = ICE_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
struct ice_hw *hw = ICE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
uint16_t msix_intr;
- msix_intr = intr_handle->intr_vec[queue_id];
+ msix_intr = rte_intr_vec_list_index_get(intr_handle, queue_id);
ICE_WRITE_REG(hw, GLINT_DYN_CTL(msix_intr), GLINT_DYN_CTL_WB_ON_ITR_M);
diff --git a/drivers/net/igc/igc_ethdev.c b/drivers/net/igc/igc_ethdev.c
index 7ce80a442b..8189ad412a 100644
--- a/drivers/net/igc/igc_ethdev.c
+++ b/drivers/net/igc/igc_ethdev.c
@@ -377,7 +377,7 @@ igc_intr_other_disable(struct rte_eth_dev *dev)
{
struct igc_hw *hw = IGC_DEV_PRIVATE_HW(dev);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
if (rte_intr_allow_others(intr_handle) &&
dev->data->dev_conf.intr_conf.lsc) {
@@ -397,7 +397,7 @@ igc_intr_other_enable(struct rte_eth_dev *dev)
struct igc_interrupt *intr = IGC_DEV_PRIVATE_INTR(dev);
struct igc_hw *hw = IGC_DEV_PRIVATE_HW(dev);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
if (rte_intr_allow_others(intr_handle) &&
dev->data->dev_conf.intr_conf.lsc) {
@@ -609,7 +609,7 @@ eth_igc_stop(struct rte_eth_dev *dev)
struct igc_adapter *adapter = IGC_DEV_PRIVATE(dev);
struct igc_hw *hw = IGC_DEV_PRIVATE_HW(dev);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
struct rte_eth_link link;
dev->data->dev_started = 0;
@@ -661,10 +661,7 @@ eth_igc_stop(struct rte_eth_dev *dev)
/* Clean datapath event and queue/vec mapping */
rte_intr_efd_disable(intr_handle);
- if (intr_handle->intr_vec != NULL) {
- rte_free(intr_handle->intr_vec);
- intr_handle->intr_vec = NULL;
- }
+ rte_intr_vec_list_free(intr_handle);
return 0;
}
@@ -724,7 +721,7 @@ igc_configure_msix_intr(struct rte_eth_dev *dev)
{
struct igc_hw *hw = IGC_DEV_PRIVATE_HW(dev);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
uint32_t intr_mask;
uint32_t vec = IGC_MISC_VEC_ID;
@@ -748,8 +745,8 @@ igc_configure_msix_intr(struct rte_eth_dev *dev)
IGC_WRITE_REG(hw, IGC_GPIE, IGC_GPIE_MSIX_MODE |
IGC_GPIE_PBA | IGC_GPIE_EIAME |
IGC_GPIE_NSICR);
- intr_mask = RTE_LEN2MASK(intr_handle->nb_efd, uint32_t) <<
- misc_shift;
+ intr_mask = RTE_LEN2MASK(rte_intr_nb_efd_get(intr_handle),
+ uint32_t) << misc_shift;
if (dev->data->dev_conf.intr_conf.lsc)
intr_mask |= (1u << IGC_MSIX_OTHER_INTR_VEC);
@@ -766,8 +763,8 @@ igc_configure_msix_intr(struct rte_eth_dev *dev)
for (i = 0; i < dev->data->nb_rx_queues; i++) {
igc_write_ivar(hw, i, 0, vec);
- intr_handle->intr_vec[i] = vec;
- if (vec < base + intr_handle->nb_efd - 1)
+ rte_intr_vec_list_index_set(intr_handle, i, vec);
+ if (vec < base + rte_intr_nb_efd_get(intr_handle) - 1)
vec++;
}
@@ -803,7 +800,7 @@ igc_rxq_interrupt_setup(struct rte_eth_dev *dev)
uint32_t mask;
struct igc_hw *hw = IGC_DEV_PRIVATE_HW(dev);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
int misc_shift = rte_intr_allow_others(intr_handle) ? 1 : 0;
/* won't configure msix register if no mapping is done
@@ -812,7 +809,8 @@ igc_rxq_interrupt_setup(struct rte_eth_dev *dev)
if (!rte_intr_dp_is_en(intr_handle))
return;
- mask = RTE_LEN2MASK(intr_handle->nb_efd, uint32_t) << misc_shift;
+ mask = RTE_LEN2MASK(rte_intr_nb_efd_get(intr_handle), uint32_t)
+ << misc_shift;
IGC_WRITE_REG(hw, IGC_EIMS, mask);
}
@@ -906,7 +904,7 @@ eth_igc_start(struct rte_eth_dev *dev)
struct igc_hw *hw = IGC_DEV_PRIVATE_HW(dev);
struct igc_adapter *adapter = IGC_DEV_PRIVATE(dev);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
uint32_t *speeds;
int ret;
@@ -944,10 +942,9 @@ eth_igc_start(struct rte_eth_dev *dev)
return -1;
}
- if (rte_intr_dp_is_en(intr_handle) && !intr_handle->intr_vec) {
- intr_handle->intr_vec = rte_zmalloc("intr_vec",
- dev->data->nb_rx_queues * sizeof(int), 0);
- if (intr_handle->intr_vec == NULL) {
+ if (rte_intr_dp_is_en(intr_handle)) {
+ if (rte_intr_vec_list_alloc(intr_handle, "intr_vec",
+ dev->data->nb_rx_queues)) {
PMD_DRV_LOG(ERR,
"Failed to allocate %d rx_queues intr_vec",
dev->data->nb_rx_queues);
@@ -1162,7 +1159,7 @@ static int
eth_igc_close(struct rte_eth_dev *dev)
{
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
struct igc_hw *hw = IGC_DEV_PRIVATE_HW(dev);
struct igc_adapter *adapter = IGC_DEV_PRIVATE(dev);
int retry = 0;
@@ -1331,11 +1328,11 @@ eth_igc_dev_init(struct rte_eth_dev *dev)
dev->data->port_id, pci_dev->id.vendor_id,
pci_dev->id.device_id);
- rte_intr_callback_register(&pci_dev->intr_handle,
+ rte_intr_callback_register(pci_dev->intr_handle,
eth_igc_interrupt_handler, (void *)dev);
/* enable uio/vfio intr/eventfd mapping */
- rte_intr_enable(&pci_dev->intr_handle);
+ rte_intr_enable(pci_dev->intr_handle);
/* enable support intr */
igc_intr_other_enable(dev);
@@ -2076,7 +2073,7 @@ eth_igc_rx_queue_intr_disable(struct rte_eth_dev *dev, uint16_t queue_id)
{
struct igc_hw *hw = IGC_DEV_PRIVATE_HW(dev);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
uint32_t vec = IGC_MISC_VEC_ID;
if (rte_intr_allow_others(intr_handle))
@@ -2095,7 +2092,7 @@ eth_igc_rx_queue_intr_enable(struct rte_eth_dev *dev, uint16_t queue_id)
{
struct igc_hw *hw = IGC_DEV_PRIVATE_HW(dev);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
uint32_t vec = IGC_MISC_VEC_ID;
if (rte_intr_allow_others(intr_handle))
diff --git a/drivers/net/ionic/ionic_ethdev.c b/drivers/net/ionic/ionic_ethdev.c
index c688c3735c..28280c5377 100644
--- a/drivers/net/ionic/ionic_ethdev.c
+++ b/drivers/net/ionic/ionic_ethdev.c
@@ -1060,7 +1060,7 @@ static int
ionic_configure_intr(struct ionic_adapter *adapter)
{
struct rte_pci_device *pci_dev = adapter->pci_dev;
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
int err;
IONIC_PRINT(DEBUG, "Configuring %u intrs", adapter->nintrs);
@@ -1074,15 +1074,10 @@ ionic_configure_intr(struct ionic_adapter *adapter)
IONIC_PRINT(DEBUG,
"Packet I/O interrupt on datapath is enabled");
- if (!intr_handle->intr_vec) {
- intr_handle->intr_vec = rte_zmalloc("intr_vec",
- adapter->nintrs * sizeof(int), 0);
-
- if (!intr_handle->intr_vec) {
- IONIC_PRINT(ERR, "Failed to allocate %u vectors",
- adapter->nintrs);
- return -ENOMEM;
- }
+ if (rte_intr_vec_list_alloc(intr_handle, "intr_vec", adapter->nintrs)) {
+ IONIC_PRINT(ERR, "Failed to allocate %u vectors",
+ adapter->nintrs);
+ return -ENOMEM;
}
err = rte_intr_callback_register(intr_handle,
@@ -1111,7 +1106,7 @@ static void
ionic_unconfigure_intr(struct ionic_adapter *adapter)
{
struct rte_pci_device *pci_dev = adapter->pci_dev;
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
rte_intr_disable(intr_handle);
diff --git a/drivers/net/ixgbe/ixgbe_ethdev.c b/drivers/net/ixgbe/ixgbe_ethdev.c
index a87c607106..1911cf2fab 100644
--- a/drivers/net/ixgbe/ixgbe_ethdev.c
+++ b/drivers/net/ixgbe/ixgbe_ethdev.c
@@ -1027,7 +1027,7 @@ eth_ixgbe_dev_init(struct rte_eth_dev *eth_dev, void *init_params __rte_unused)
{
struct ixgbe_adapter *ad = eth_dev->data->dev_private;
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
struct ixgbe_hw *hw =
IXGBE_DEV_PRIVATE_TO_HW(eth_dev->data->dev_private);
struct ixgbe_vfta *shadow_vfta =
@@ -1525,7 +1525,7 @@ eth_ixgbevf_dev_init(struct rte_eth_dev *eth_dev)
uint32_t tc, tcs;
struct ixgbe_adapter *ad = eth_dev->data->dev_private;
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
struct ixgbe_hw *hw =
IXGBE_DEV_PRIVATE_TO_HW(eth_dev->data->dev_private);
struct ixgbe_vfta *shadow_vfta =
@@ -2539,7 +2539,7 @@ ixgbe_dev_start(struct rte_eth_dev *dev)
struct ixgbe_vf_info *vfinfo =
*IXGBE_DEV_PRIVATE_TO_P_VFDATA(dev->data->dev_private);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
uint32_t intr_vector = 0;
int err;
bool link_up = false, negotiate = 0;
@@ -2594,11 +2594,9 @@ ixgbe_dev_start(struct rte_eth_dev *dev)
return -1;
}
- if (rte_intr_dp_is_en(intr_handle) && !intr_handle->intr_vec) {
- intr_handle->intr_vec =
- rte_zmalloc("intr_vec",
- dev->data->nb_rx_queues * sizeof(int), 0);
- if (intr_handle->intr_vec == NULL) {
+ if (rte_intr_dp_is_en(intr_handle)) {
+ if (rte_intr_vec_list_alloc(intr_handle, "intr_vec",
+ dev->data->nb_rx_queues)) {
PMD_INIT_LOG(ERR, "Failed to allocate %d rx_queues"
" intr_vec", dev->data->nb_rx_queues);
return -ENOMEM;
@@ -2834,7 +2832,7 @@ ixgbe_dev_stop(struct rte_eth_dev *dev)
struct ixgbe_vf_info *vfinfo =
*IXGBE_DEV_PRIVATE_TO_P_VFDATA(dev->data->dev_private);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
int vf;
struct ixgbe_tm_conf *tm_conf =
IXGBE_DEV_PRIVATE_TO_TM_CONF(dev->data->dev_private);
@@ -2885,10 +2883,7 @@ ixgbe_dev_stop(struct rte_eth_dev *dev)
/* Clean datapath event and queue/vec mapping */
rte_intr_efd_disable(intr_handle);
- if (intr_handle->intr_vec != NULL) {
- rte_free(intr_handle->intr_vec);
- intr_handle->intr_vec = NULL;
- }
+ rte_intr_vec_list_free(intr_handle);
/* reset hierarchy commit */
tm_conf->committed = false;
@@ -2972,7 +2967,7 @@ ixgbe_dev_close(struct rte_eth_dev *dev)
struct ixgbe_hw *hw =
IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
int retries = 0;
int ret;
@@ -4618,7 +4613,7 @@ ixgbe_dev_interrupt_delayed_handler(void *param)
{
struct rte_eth_dev *dev = (struct rte_eth_dev *)param;
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
struct ixgbe_interrupt *intr =
IXGBE_DEV_PRIVATE_TO_INTR(dev->data->dev_private);
struct ixgbe_hw *hw =
@@ -5290,7 +5285,7 @@ ixgbevf_dev_start(struct rte_eth_dev *dev)
IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
uint32_t intr_vector = 0;
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
int err, mask = 0;
@@ -5353,11 +5348,9 @@ ixgbevf_dev_start(struct rte_eth_dev *dev)
}
}
- if (rte_intr_dp_is_en(intr_handle) && !intr_handle->intr_vec) {
- intr_handle->intr_vec =
- rte_zmalloc("intr_vec",
- dev->data->nb_rx_queues * sizeof(int), 0);
- if (intr_handle->intr_vec == NULL) {
+ if (rte_intr_dp_is_en(intr_handle)) {
+ if (rte_intr_vec_list_alloc(intr_handle, "intr_vec",
+ dev->data->nb_rx_queues)) {
PMD_INIT_LOG(ERR, "Failed to allocate %d rx_queues"
" intr_vec", dev->data->nb_rx_queues);
ixgbe_dev_clear_queues(dev);
@@ -5397,7 +5390,7 @@ ixgbevf_dev_stop(struct rte_eth_dev *dev)
struct ixgbe_hw *hw = IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
struct ixgbe_adapter *adapter = dev->data->dev_private;
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
if (hw->adapter_stopped)
return 0;
@@ -5425,10 +5418,7 @@ ixgbevf_dev_stop(struct rte_eth_dev *dev)
/* Clean datapath event and queue/vec mapping */
rte_intr_efd_disable(intr_handle);
- if (intr_handle->intr_vec != NULL) {
- rte_free(intr_handle->intr_vec);
- intr_handle->intr_vec = NULL;
- }
+ rte_intr_vec_list_free(intr_handle);
adapter->rss_reta_updated = 0;
@@ -5440,7 +5430,7 @@ ixgbevf_dev_close(struct rte_eth_dev *dev)
{
struct ixgbe_hw *hw = IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
int ret;
PMD_INIT_FUNC_TRACE();
@@ -5738,7 +5728,7 @@ static int
ixgbevf_dev_rx_queue_intr_enable(struct rte_eth_dev *dev, uint16_t queue_id)
{
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
struct ixgbe_interrupt *intr =
IXGBE_DEV_PRIVATE_TO_INTR(dev->data->dev_private);
struct ixgbe_hw *hw =
@@ -5764,7 +5754,7 @@ ixgbevf_dev_rx_queue_intr_disable(struct rte_eth_dev *dev, uint16_t queue_id)
struct ixgbe_hw *hw =
IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
uint32_t vec = IXGBE_MISC_VEC_ID;
if (rte_intr_allow_others(intr_handle))
@@ -5780,7 +5770,7 @@ static int
ixgbe_dev_rx_queue_intr_enable(struct rte_eth_dev *dev, uint16_t queue_id)
{
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
uint32_t mask;
struct ixgbe_hw *hw =
IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
@@ -5907,7 +5897,7 @@ static void
ixgbevf_configure_msix(struct rte_eth_dev *dev)
{
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
struct ixgbe_hw *hw =
IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
uint32_t q_idx;
@@ -5934,8 +5924,10 @@ ixgbevf_configure_msix(struct rte_eth_dev *dev)
* as IXGBE_VF_MAXMSIVECOTR = 1
*/
ixgbevf_set_ivar_map(hw, 0, q_idx, vector_idx);
- intr_handle->intr_vec[q_idx] = vector_idx;
- if (vector_idx < base + intr_handle->nb_efd - 1)
+ rte_intr_vec_list_index_set(intr_handle, q_idx,
+ vector_idx);
+ if (vector_idx < base + rte_intr_nb_efd_get(intr_handle)
+ - 1)
vector_idx++;
}
@@ -5956,7 +5948,7 @@ static void
ixgbe_configure_msix(struct rte_eth_dev *dev)
{
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
struct ixgbe_hw *hw =
IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
uint32_t queue_id, base = IXGBE_MISC_VEC_ID;
@@ -6000,8 +5992,10 @@ ixgbe_configure_msix(struct rte_eth_dev *dev)
queue_id++) {
/* by default, 1:1 mapping */
ixgbe_set_ivar_map(hw, 0, queue_id, vec);
- intr_handle->intr_vec[queue_id] = vec;
- if (vec < base + intr_handle->nb_efd - 1)
+ rte_intr_vec_list_index_set(intr_handle,
+ queue_id, vec);
+ if (vec < base + rte_intr_nb_efd_get(intr_handle)
+ - 1)
vec++;
}
diff --git a/drivers/net/memif/memif_socket.c b/drivers/net/memif/memif_socket.c
index 8533e39f69..d48c3685d9 100644
--- a/drivers/net/memif/memif_socket.c
+++ b/drivers/net/memif/memif_socket.c
@@ -65,7 +65,8 @@ memif_msg_send_from_queue(struct memif_control_channel *cc)
if (e == NULL)
return 0;
- size = memif_msg_send(cc->intr_handle.fd, &e->msg, e->fd);
+ size = memif_msg_send(rte_intr_fd_get(cc->intr_handle), &e->msg,
+ e->fd);
if (size != sizeof(memif_msg_t)) {
MIF_LOG(ERR, "sendmsg fail: %s.", strerror(errno));
ret = -1;
@@ -317,7 +318,9 @@ memif_msg_receive_add_ring(struct rte_eth_dev *dev, memif_msg_t *msg, int fd)
mq = (ar->flags & MEMIF_MSG_ADD_RING_FLAG_C2S) ?
dev->data->rx_queues[ar->index] : dev->data->tx_queues[ar->index];
- mq->intr_handle.fd = fd;
+ if (rte_intr_fd_set(mq->intr_handle, fd))
+ return -1;
+
mq->log2_ring_size = ar->log2_ring_size;
mq->region = ar->region;
mq->ring_offset = ar->offset;
@@ -453,7 +456,7 @@ memif_msg_enq_add_ring(struct rte_eth_dev *dev, uint8_t idx,
dev->data->rx_queues[idx];
e->msg.type = MEMIF_MSG_TYPE_ADD_RING;
- e->fd = mq->intr_handle.fd;
+ e->fd = rte_intr_fd_get(mq->intr_handle);
ar->index = idx;
ar->offset = mq->ring_offset;
ar->region = mq->region;
@@ -505,12 +508,13 @@ memif_intr_unregister_handler(struct rte_intr_handle *intr_handle, void *arg)
struct memif_control_channel *cc = arg;
/* close control channel fd */
- close(intr_handle->fd);
+ close(rte_intr_fd_get(intr_handle));
/* clear message queue */
while ((elt = TAILQ_FIRST(&cc->msg_queue)) != NULL) {
TAILQ_REMOVE(&cc->msg_queue, elt, next);
rte_free(elt);
}
+ rte_intr_instance_free(cc->intr_handle);
/* free control channel */
rte_free(cc);
}
@@ -548,8 +552,8 @@ memif_disconnect(struct rte_eth_dev *dev)
"Unexpected message(s) in message queue.");
}
- ih = &pmd->cc->intr_handle;
- if (ih->fd > 0) {
+ ih = pmd->cc->intr_handle;
+ if (rte_intr_fd_get(ih) > 0) {
ret = rte_intr_callback_unregister(ih,
memif_intr_handler,
pmd->cc);
@@ -563,7 +567,8 @@ memif_disconnect(struct rte_eth_dev *dev)
pmd->cc,
memif_intr_unregister_handler);
} else if (ret > 0) {
- close(ih->fd);
+ close(rte_intr_fd_get(ih));
+ rte_intr_instance_free(ih);
rte_free(pmd->cc);
}
pmd->cc = NULL;
@@ -587,9 +592,10 @@ memif_disconnect(struct rte_eth_dev *dev)
else
continue;
}
- if (mq->intr_handle.fd > 0) {
- close(mq->intr_handle.fd);
- mq->intr_handle.fd = -1;
+
+ if (rte_intr_fd_get(mq->intr_handle) > 0) {
+ close(rte_intr_fd_get(mq->intr_handle));
+ rte_intr_fd_set(mq->intr_handle, -1);
}
}
for (i = 0; i < pmd->cfg.num_s2c_rings; i++) {
@@ -604,9 +610,10 @@ memif_disconnect(struct rte_eth_dev *dev)
else
continue;
}
- if (mq->intr_handle.fd > 0) {
- close(mq->intr_handle.fd);
- mq->intr_handle.fd = -1;
+
+ if (rte_intr_fd_get(mq->intr_handle) > 0) {
+ close(rte_intr_fd_get(mq->intr_handle));
+ rte_intr_fd_set(mq->intr_handle, -1);
}
}
@@ -644,7 +651,7 @@ memif_msg_receive(struct memif_control_channel *cc)
mh.msg_control = ctl;
mh.msg_controllen = sizeof(ctl);
- size = recvmsg(cc->intr_handle.fd, &mh, 0);
+ size = recvmsg(rte_intr_fd_get(cc->intr_handle), &mh, 0);
if (size != sizeof(memif_msg_t)) {
MIF_LOG(DEBUG, "Invalid message size = %zd", size);
if (size > 0)
@@ -774,7 +781,7 @@ memif_intr_handler(void *arg)
/* if driver failed to assign device */
if (cc->dev == NULL) {
memif_msg_send_from_queue(cc);
- ret = rte_intr_callback_unregister_pending(&cc->intr_handle,
+ ret = rte_intr_callback_unregister_pending(cc->intr_handle,
memif_intr_handler,
cc,
memif_intr_unregister_handler);
@@ -812,12 +819,12 @@ memif_listener_handler(void *arg)
int ret;
addr_len = sizeof(client);
- sockfd = accept(socket->intr_handle.fd, (struct sockaddr *)&client,
- (socklen_t *)&addr_len);
+ sockfd = accept(rte_intr_fd_get(socket->intr_handle),
+ (struct sockaddr *)&client, (socklen_t *)&addr_len);
if (sockfd < 0) {
MIF_LOG(ERR,
"Failed to accept connection request on socket fd %d",
- socket->intr_handle.fd);
+ rte_intr_fd_get(socket->intr_handle));
return;
}
@@ -829,13 +836,25 @@ memif_listener_handler(void *arg)
goto error;
}
- cc->intr_handle.fd = sockfd;
- cc->intr_handle.type = RTE_INTR_HANDLE_EXT;
+ /* Allocate interrupt instance */
+ cc->intr_handle = rte_intr_instance_alloc(RTE_INTR_INSTANCE_F_SHARED);
+ if (cc->intr_handle == NULL) {
+ MIF_LOG(ERR, "Failed to allocate intr handle");
+ goto error;
+ }
+
+ if (rte_intr_fd_set(cc->intr_handle, sockfd))
+ goto error;
+
+ if (rte_intr_type_set(cc->intr_handle, RTE_INTR_HANDLE_EXT))
+ goto error;
+
cc->socket = socket;
cc->dev = NULL;
TAILQ_INIT(&cc->msg_queue);
- ret = rte_intr_callback_register(&cc->intr_handle, memif_intr_handler, cc);
+ ret = rte_intr_callback_register(cc->intr_handle, memif_intr_handler,
+ cc);
if (ret < 0) {
MIF_LOG(ERR, "Failed to register control channel callback.");
goto error;
@@ -857,8 +876,10 @@ memif_listener_handler(void *arg)
close(sockfd);
sockfd = -1;
}
- if (cc != NULL)
+ if (cc != NULL) {
+ rte_intr_instance_free(cc->intr_handle);
rte_free(cc);
+ }
}
static struct memif_socket *
@@ -914,9 +935,21 @@ memif_socket_create(char *key, uint8_t listener, bool is_abstract)
MIF_LOG(DEBUG, "Memif listener socket %s created.", sock->filename);
- sock->intr_handle.fd = sockfd;
- sock->intr_handle.type = RTE_INTR_HANDLE_EXT;
- ret = rte_intr_callback_register(&sock->intr_handle,
+ /* Allocate interrupt instance */
+ sock->intr_handle =
+ rte_intr_instance_alloc(RTE_INTR_INSTANCE_F_SHARED);
+ if (sock->intr_handle == NULL) {
+ MIF_LOG(ERR, "Failed to allocate intr handle");
+ goto error;
+ }
+
+ if (rte_intr_fd_set(sock->intr_handle, sockfd))
+ goto error;
+
+ if (rte_intr_type_set(sock->intr_handle, RTE_INTR_HANDLE_EXT))
+ goto error;
+
+ ret = rte_intr_callback_register(sock->intr_handle,
memif_listener_handler, sock);
if (ret < 0) {
MIF_LOG(ERR, "Failed to register interrupt "
@@ -929,8 +962,10 @@ memif_socket_create(char *key, uint8_t listener, bool is_abstract)
error:
MIF_LOG(ERR, "Failed to setup socket %s: %s", key, strerror(errno));
- if (sock != NULL)
+ if (sock != NULL) {
+ rte_intr_instance_free(sock->intr_handle);
rte_free(sock);
+ }
if (sockfd >= 0)
close(sockfd);
return NULL;
@@ -1047,6 +1082,8 @@ memif_socket_remove_device(struct rte_eth_dev *dev)
MIF_LOG(ERR, "Failed to remove socket file: %s",
socket->filename);
}
+ if (pmd->role != MEMIF_ROLE_CLIENT)
+ rte_intr_instance_free(socket->intr_handle);
rte_free(socket);
}
}
@@ -1109,13 +1146,25 @@ memif_connect_client(struct rte_eth_dev *dev)
goto error;
}
- pmd->cc->intr_handle.fd = sockfd;
- pmd->cc->intr_handle.type = RTE_INTR_HANDLE_EXT;
+ /* Allocate interrupt instance */
+ pmd->cc->intr_handle =
+ rte_intr_instance_alloc(RTE_INTR_INSTANCE_F_SHARED);
+ if (pmd->cc->intr_handle == NULL) {
+ MIF_LOG(ERR, "Failed to allocate intr handle");
+ goto error;
+ }
+
+ if (rte_intr_fd_set(pmd->cc->intr_handle, sockfd))
+ goto error;
+
+ if (rte_intr_type_set(pmd->cc->intr_handle, RTE_INTR_HANDLE_EXT))
+ goto error;
+
pmd->cc->socket = NULL;
pmd->cc->dev = dev;
TAILQ_INIT(&pmd->cc->msg_queue);
- ret = rte_intr_callback_register(&pmd->cc->intr_handle,
+ ret = rte_intr_callback_register(pmd->cc->intr_handle,
memif_intr_handler, pmd->cc);
if (ret < 0) {
MIF_LOG(ERR, "Failed to register interrupt callback for control fd");
@@ -1130,6 +1179,7 @@ memif_connect_client(struct rte_eth_dev *dev)
sockfd = -1;
}
if (pmd->cc != NULL) {
+ rte_intr_instance_free(pmd->cc->intr_handle);
rte_free(pmd->cc);
pmd->cc = NULL;
}
diff --git a/drivers/net/memif/memif_socket.h b/drivers/net/memif/memif_socket.h
index b9b8a15178..b0decbb0a2 100644
--- a/drivers/net/memif/memif_socket.h
+++ b/drivers/net/memif/memif_socket.h
@@ -85,7 +85,7 @@ struct memif_socket_dev_list_elt {
(sizeof(struct sockaddr_un) - offsetof(struct sockaddr_un, sun_path))
struct memif_socket {
- struct rte_intr_handle intr_handle; /**< interrupt handle */
+ struct rte_intr_handle *intr_handle; /**< interrupt handle */
char filename[MEMIF_SOCKET_UN_SIZE]; /**< socket filename */
TAILQ_HEAD(, memif_socket_dev_list_elt) dev_queue;
@@ -101,7 +101,7 @@ struct memif_msg_queue_elt {
};
struct memif_control_channel {
- struct rte_intr_handle intr_handle; /**< interrupt handle */
+ struct rte_intr_handle *intr_handle; /**< interrupt handle */
TAILQ_HEAD(, memif_msg_queue_elt) msg_queue; /**< control message queue */
struct memif_socket *socket; /**< pointer to socket */
struct rte_eth_dev *dev; /**< pointer to device */
diff --git a/drivers/net/memif/rte_eth_memif.c b/drivers/net/memif/rte_eth_memif.c
index 9deb7a5f13..8cec493ffd 100644
--- a/drivers/net/memif/rte_eth_memif.c
+++ b/drivers/net/memif/rte_eth_memif.c
@@ -326,7 +326,8 @@ eth_memif_rx(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
/* consume interrupt */
if ((ring->flags & MEMIF_RING_FLAG_MASK_INT) == 0)
- size = read(mq->intr_handle.fd, &b, sizeof(b));
+ size = read(rte_intr_fd_get(mq->intr_handle), &b,
+ sizeof(b));
ring_size = 1 << mq->log2_ring_size;
mask = ring_size - 1;
@@ -462,7 +463,8 @@ eth_memif_rx_zc(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
if ((ring->flags & MEMIF_RING_FLAG_MASK_INT) == 0) {
uint64_t b;
ssize_t size __rte_unused;
- size = read(mq->intr_handle.fd, &b, sizeof(b));
+ size = read(rte_intr_fd_get(mq->intr_handle), &b,
+ sizeof(b));
}
ring_size = 1 << mq->log2_ring_size;
@@ -680,7 +682,8 @@ eth_memif_tx(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
if ((ring->flags & MEMIF_RING_FLAG_MASK_INT) == 0) {
a = 1;
- size = write(mq->intr_handle.fd, &a, sizeof(a));
+ size = write(rte_intr_fd_get(mq->intr_handle), &a,
+ sizeof(a));
if (unlikely(size < 0)) {
MIF_LOG(WARNING,
"Failed to send interrupt. %s", strerror(errno));
@@ -832,7 +835,8 @@ eth_memif_tx_zc(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
/* Send interrupt, if enabled. */
if ((ring->flags & MEMIF_RING_FLAG_MASK_INT) == 0) {
uint64_t a = 1;
- ssize_t size = write(mq->intr_handle.fd, &a, sizeof(a));
+ ssize_t size = write(rte_intr_fd_get(mq->intr_handle),
+ &a, sizeof(a));
if (unlikely(size < 0)) {
MIF_LOG(WARNING,
"Failed to send interrupt. %s", strerror(errno));
@@ -1092,8 +1096,10 @@ memif_init_queues(struct rte_eth_dev *dev)
mq->ring_offset = memif_get_ring_offset(dev, mq, MEMIF_RING_C2S, i);
mq->last_head = 0;
mq->last_tail = 0;
- mq->intr_handle.fd = eventfd(0, EFD_NONBLOCK);
- if (mq->intr_handle.fd < 0) {
+ if (rte_intr_fd_set(mq->intr_handle, eventfd(0, EFD_NONBLOCK)))
+ return -rte_errno;
+
+ if (rte_intr_fd_get(mq->intr_handle) < 0) {
MIF_LOG(WARNING,
"Failed to create eventfd for tx queue %d: %s.", i,
strerror(errno));
@@ -1115,8 +1121,9 @@ memif_init_queues(struct rte_eth_dev *dev)
mq->ring_offset = memif_get_ring_offset(dev, mq, MEMIF_RING_S2C, i);
mq->last_head = 0;
mq->last_tail = 0;
- mq->intr_handle.fd = eventfd(0, EFD_NONBLOCK);
- if (mq->intr_handle.fd < 0) {
+ if (rte_intr_fd_set(mq->intr_handle, eventfd(0, EFD_NONBLOCK)))
+ return -rte_errno;
+ if (rte_intr_fd_get(mq->intr_handle) < 0) {
MIF_LOG(WARNING,
"Failed to create eventfd for rx queue %d: %s.", i,
strerror(errno));
@@ -1310,12 +1317,24 @@ memif_tx_queue_setup(struct rte_eth_dev *dev,
return -ENOMEM;
}
+ /* Allocate interrupt instance */
+ mq->intr_handle = rte_intr_instance_alloc(RTE_INTR_INSTANCE_F_SHARED);
+ if (mq->intr_handle == NULL) {
+ MIF_LOG(ERR, "Failed to allocate intr handle");
+ return -ENOMEM;
+ }
+
mq->type =
(pmd->role == MEMIF_ROLE_CLIENT) ? MEMIF_RING_C2S : MEMIF_RING_S2C;
mq->n_pkts = 0;
mq->n_bytes = 0;
- mq->intr_handle.fd = -1;
- mq->intr_handle.type = RTE_INTR_HANDLE_EXT;
+
+ if (rte_intr_fd_set(mq->intr_handle, -1))
+ return -rte_errno;
+
+ if (rte_intr_type_set(mq->intr_handle, RTE_INTR_HANDLE_EXT))
+ return -rte_errno;
+
mq->in_port = dev->data->port_id;
dev->data->tx_queues[qid] = mq;
@@ -1339,11 +1358,23 @@ memif_rx_queue_setup(struct rte_eth_dev *dev,
return -ENOMEM;
}
+ /* Allocate interrupt instance */
+ mq->intr_handle = rte_intr_instance_alloc(RTE_INTR_INSTANCE_F_SHARED);
+ if (mq->intr_handle == NULL) {
+ MIF_LOG(ERR, "Failed to allocate intr handle");
+ return -ENOMEM;
+ }
+
mq->type = (pmd->role == MEMIF_ROLE_CLIENT) ? MEMIF_RING_S2C : MEMIF_RING_C2S;
mq->n_pkts = 0;
mq->n_bytes = 0;
- mq->intr_handle.fd = -1;
- mq->intr_handle.type = RTE_INTR_HANDLE_EXT;
+
+ if (rte_intr_fd_set(mq->intr_handle, -1))
+ return -rte_errno;
+
+ if (rte_intr_type_set(mq->intr_handle, RTE_INTR_HANDLE_EXT))
+ return -rte_errno;
+
mq->mempool = mb_pool;
mq->in_port = dev->data->port_id;
dev->data->rx_queues[qid] = mq;
@@ -1359,6 +1390,7 @@ memif_rx_queue_release(struct rte_eth_dev *dev, uint16_t qid)
if (!mq)
return;
+ rte_intr_instance_free(mq->intr_handle);
rte_free(mq);
}
diff --git a/drivers/net/memif/rte_eth_memif.h b/drivers/net/memif/rte_eth_memif.h
index 2038bda742..a5ee23d42e 100644
--- a/drivers/net/memif/rte_eth_memif.h
+++ b/drivers/net/memif/rte_eth_memif.h
@@ -68,7 +68,7 @@ struct memif_queue {
uint64_t n_pkts; /**< number of rx/tx packets */
uint64_t n_bytes; /**< number of rx/tx bytes */
- struct rte_intr_handle intr_handle; /**< interrupt handle */
+ struct rte_intr_handle *intr_handle; /**< interrupt handle */
memif_log2_ring_size_t log2_ring_size; /**< log2 of ring size */
};
diff --git a/drivers/net/mlx4/mlx4.c b/drivers/net/mlx4/mlx4.c
index f7fe831d61..cccc71f757 100644
--- a/drivers/net/mlx4/mlx4.c
+++ b/drivers/net/mlx4/mlx4.c
@@ -1042,9 +1042,19 @@ mlx4_pci_probe(struct rte_pci_driver *pci_drv, struct rte_pci_device *pci_dev)
rte_eth_copy_pci_info(eth_dev, pci_dev);
eth_dev->data->dev_flags |= RTE_ETH_DEV_AUTOFILL_QUEUE_XSTATS;
/* Initialize local interrupt handle for current port. */
- memset(&priv->intr_handle, 0, sizeof(struct rte_intr_handle));
- priv->intr_handle.fd = -1;
- priv->intr_handle.type = RTE_INTR_HANDLE_EXT;
+ priv->intr_handle =
+ rte_intr_instance_alloc(RTE_INTR_INSTANCE_F_SHARED);
+ if (priv->intr_handle == NULL) {
+ RTE_LOG(ERR, EAL, "Fail to allocate intr_handle\n");
+ goto port_error;
+ }
+
+ if (rte_intr_fd_set(priv->intr_handle, -1))
+ goto port_error;
+
+ if (rte_intr_type_set(priv->intr_handle, RTE_INTR_HANDLE_EXT))
+ goto port_error;
+
/*
* Override ethdev interrupt handle pointer with private
* handle instead of that of the parent PCI device used by
@@ -1057,7 +1067,7 @@ mlx4_pci_probe(struct rte_pci_driver *pci_drv, struct rte_pci_device *pci_dev)
* besides setting up eth_dev->intr_handle, the rest is
* handled by rte_intr_rx_ctl().
*/
- eth_dev->intr_handle = &priv->intr_handle;
+ eth_dev->intr_handle = priv->intr_handle;
priv->dev_data = eth_dev->data;
eth_dev->dev_ops = &mlx4_dev_ops;
#ifdef HAVE_IBV_MLX4_BUF_ALLOCATORS
@@ -1102,6 +1112,7 @@ mlx4_pci_probe(struct rte_pci_driver *pci_drv, struct rte_pci_device *pci_dev)
prev_dev = eth_dev;
continue;
port_error:
+ rte_intr_instance_free(priv->intr_handle);
rte_free(priv);
if (eth_dev != NULL)
eth_dev->data->dev_private = NULL;
diff --git a/drivers/net/mlx4/mlx4.h b/drivers/net/mlx4/mlx4.h
index e07b1d2386..2d0c512f79 100644
--- a/drivers/net/mlx4/mlx4.h
+++ b/drivers/net/mlx4/mlx4.h
@@ -176,7 +176,7 @@ struct mlx4_priv {
uint32_t tso_max_payload_sz; /**< Max supported TSO payload size. */
uint32_t hw_rss_max_qps; /**< Max Rx Queues supported by RSS. */
uint64_t hw_rss_sup; /**< Supported RSS hash fields (Verbs format). */
- struct rte_intr_handle intr_handle; /**< Port interrupt handle. */
+ struct rte_intr_handle *intr_handle; /**< Port interrupt handle. */
struct mlx4_drop *drop; /**< Shared resources for drop flow rules. */
struct {
uint32_t dev_gen; /* Generation number to flush local caches. */
diff --git a/drivers/net/mlx4/mlx4_intr.c b/drivers/net/mlx4/mlx4_intr.c
index 2aab0f60a7..01057482ec 100644
--- a/drivers/net/mlx4/mlx4_intr.c
+++ b/drivers/net/mlx4/mlx4_intr.c
@@ -43,12 +43,12 @@ static int mlx4_link_status_check(struct mlx4_priv *priv);
static void
mlx4_rx_intr_vec_disable(struct mlx4_priv *priv)
{
- struct rte_intr_handle *intr_handle = &priv->intr_handle;
+ struct rte_intr_handle *intr_handle = priv->intr_handle;
rte_intr_free_epoll_fd(intr_handle);
- free(intr_handle->intr_vec);
- intr_handle->nb_efd = 0;
- intr_handle->intr_vec = NULL;
+ rte_intr_vec_list_free(intr_handle);
+
+ rte_intr_nb_efd_set(intr_handle, 0);
}
/**
@@ -67,11 +67,10 @@ mlx4_rx_intr_vec_enable(struct mlx4_priv *priv)
unsigned int rxqs_n = ETH_DEV(priv)->data->nb_rx_queues;
unsigned int n = RTE_MIN(rxqs_n, (uint32_t)RTE_MAX_RXTX_INTR_VEC_ID);
unsigned int count = 0;
- struct rte_intr_handle *intr_handle = &priv->intr_handle;
+ struct rte_intr_handle *intr_handle = priv->intr_handle;
mlx4_rx_intr_vec_disable(priv);
- intr_handle->intr_vec = malloc(n * sizeof(intr_handle->intr_vec[0]));
- if (intr_handle->intr_vec == NULL) {
+ if (rte_intr_vec_list_alloc(intr_handle, NULL, n)) {
rte_errno = ENOMEM;
ERROR("failed to allocate memory for interrupt vector,"
" Rx interrupts will not be supported");
@@ -83,9 +82,9 @@ mlx4_rx_intr_vec_enable(struct mlx4_priv *priv)
/* Skip queues that cannot request interrupts. */
if (!rxq || !rxq->channel) {
/* Use invalid intr_vec[] index to disable entry. */
- intr_handle->intr_vec[i] =
- RTE_INTR_VEC_RXTX_OFFSET +
- RTE_MAX_RXTX_INTR_VEC_ID;
+ if (rte_intr_vec_list_index_set(intr_handle, i,
+ RTE_INTR_VEC_RXTX_OFFSET + RTE_MAX_RXTX_INTR_VEC_ID))
+ return -rte_errno;
continue;
}
if (count >= RTE_MAX_RXTX_INTR_VEC_ID) {
@@ -96,14 +95,21 @@ mlx4_rx_intr_vec_enable(struct mlx4_priv *priv)
mlx4_rx_intr_vec_disable(priv);
return -rte_errno;
}
- intr_handle->intr_vec[i] = RTE_INTR_VEC_RXTX_OFFSET + count;
- intr_handle->efds[count] = rxq->channel->fd;
+
+ if (rte_intr_vec_list_index_set(intr_handle, i,
+ RTE_INTR_VEC_RXTX_OFFSET + count))
+ return -rte_errno;
+
+ if (rte_intr_efds_index_set(intr_handle, count,
+ rxq->channel->fd))
+ return -rte_errno;
+
count++;
}
if (!count)
mlx4_rx_intr_vec_disable(priv);
- else
- intr_handle->nb_efd = count;
+ else if (rte_intr_nb_efd_set(intr_handle, count))
+ return -rte_errno;
return 0;
}
@@ -254,12 +260,13 @@ mlx4_intr_uninstall(struct mlx4_priv *priv)
{
int err = rte_errno; /* Make sure rte_errno remains unchanged. */
- if (priv->intr_handle.fd != -1) {
- rte_intr_callback_unregister(&priv->intr_handle,
+ if (rte_intr_fd_get(priv->intr_handle) != -1) {
+ rte_intr_callback_unregister(priv->intr_handle,
(void (*)(void *))
mlx4_interrupt_handler,
priv);
- priv->intr_handle.fd = -1;
+ if (rte_intr_fd_set(priv->intr_handle, -1))
+ return -rte_errno;
}
rte_eal_alarm_cancel((void (*)(void *))mlx4_link_status_alarm, priv);
priv->intr_alarm = 0;
@@ -286,8 +293,10 @@ mlx4_intr_install(struct mlx4_priv *priv)
mlx4_intr_uninstall(priv);
if (intr_conf->lsc | intr_conf->rmv) {
- priv->intr_handle.fd = priv->ctx->async_fd;
- rc = rte_intr_callback_register(&priv->intr_handle,
+ if (rte_intr_fd_set(priv->intr_handle, priv->ctx->async_fd))
+ return -rte_errno;
+
+ rc = rte_intr_callback_register(priv->intr_handle,
(void (*)(void *))
mlx4_interrupt_handler,
priv);
diff --git a/drivers/net/mlx5/linux/mlx5_os.c b/drivers/net/mlx5/linux/mlx5_os.c
index f17e1aac3c..72bbb665cf 100644
--- a/drivers/net/mlx5/linux/mlx5_os.c
+++ b/drivers/net/mlx5/linux/mlx5_os.c
@@ -2458,11 +2458,9 @@ mlx5_os_pci_probe_pf(struct mlx5_common_device *cdev,
* Representor interrupts handle is released in mlx5_dev_stop().
*/
if (list[i].info.representor) {
- struct rte_intr_handle *intr_handle;
- intr_handle = mlx5_malloc(MLX5_MEM_SYS | MLX5_MEM_ZERO,
- sizeof(*intr_handle), 0,
- SOCKET_ID_ANY);
- if (!intr_handle) {
+ struct rte_intr_handle *intr_handle =
+ rte_intr_instance_alloc(RTE_INTR_INSTANCE_F_SHARED);
+ if (intr_handle == NULL) {
DRV_LOG(ERR,
"port %u failed to allocate memory for interrupt handler "
"Rx interrupts will not be supported",
@@ -2626,7 +2624,7 @@ mlx5_os_auxiliary_probe(struct mlx5_common_device *cdev)
if (eth_dev == NULL)
return -rte_errno;
/* Post create. */
- eth_dev->intr_handle = &adev->intr_handle;
+ eth_dev->intr_handle = adev->intr_handle;
if (rte_eal_process_type() == RTE_PROC_PRIMARY) {
eth_dev->data->dev_flags |= RTE_ETH_DEV_INTR_LSC;
eth_dev->data->dev_flags |= RTE_ETH_DEV_INTR_RMV;
@@ -2690,24 +2688,38 @@ mlx5_os_dev_shared_handler_install(struct mlx5_dev_ctx_shared *sh)
int flags;
struct ibv_context *ctx = sh->cdev->ctx;
- sh->intr_handle.fd = -1;
+ sh->intr_handle = rte_intr_instance_alloc(RTE_INTR_INSTANCE_F_SHARED);
+ if (sh->intr_handle == NULL) {
+ DRV_LOG(ERR, "Fail to allocate intr_handle");
+ rte_errno = ENOMEM;
+ return;
+ }
+ rte_intr_fd_set(sh->intr_handle, -1);
+
flags = fcntl(ctx->async_fd, F_GETFL);
ret = fcntl(ctx->async_fd, F_SETFL, flags | O_NONBLOCK);
if (ret) {
DRV_LOG(INFO, "failed to change file descriptor async event"
" queue");
} else {
- sh->intr_handle.fd = ctx->async_fd;
- sh->intr_handle.type = RTE_INTR_HANDLE_EXT;
- if (rte_intr_callback_register(&sh->intr_handle,
+ rte_intr_fd_set(sh->intr_handle, ctx->async_fd);
+ rte_intr_type_set(sh->intr_handle, RTE_INTR_HANDLE_EXT);
+ if (rte_intr_callback_register(sh->intr_handle,
mlx5_dev_interrupt_handler, sh)) {
DRV_LOG(INFO, "Fail to install the shared interrupt.");
- sh->intr_handle.fd = -1;
+ rte_intr_fd_set(sh->intr_handle, -1);
}
}
if (sh->devx) {
#ifdef HAVE_IBV_DEVX_ASYNC
- sh->intr_handle_devx.fd = -1;
+ sh->intr_handle_devx =
+ rte_intr_instance_alloc(RTE_INTR_INSTANCE_F_SHARED);
+ if (!sh->intr_handle_devx) {
+ DRV_LOG(ERR, "Fail to allocate intr_handle");
+ rte_errno = ENOMEM;
+ return;
+ }
+ rte_intr_fd_set(sh->intr_handle_devx, -1);
sh->devx_comp = (void *)mlx5_glue->devx_create_cmd_comp(ctx);
struct mlx5dv_devx_cmd_comp *devx_comp = sh->devx_comp;
if (!devx_comp) {
@@ -2721,13 +2733,14 @@ mlx5_os_dev_shared_handler_install(struct mlx5_dev_ctx_shared *sh)
" devx comp");
return;
}
- sh->intr_handle_devx.fd = devx_comp->fd;
- sh->intr_handle_devx.type = RTE_INTR_HANDLE_EXT;
- if (rte_intr_callback_register(&sh->intr_handle_devx,
+ rte_intr_fd_set(sh->intr_handle_devx, devx_comp->fd);
+ rte_intr_type_set(sh->intr_handle_devx,
+ RTE_INTR_HANDLE_EXT);
+ if (rte_intr_callback_register(sh->intr_handle_devx,
mlx5_dev_interrupt_handler_devx, sh)) {
DRV_LOG(INFO, "Fail to install the devx shared"
" interrupt.");
- sh->intr_handle_devx.fd = -1;
+ rte_intr_fd_set(sh->intr_handle_devx, -1);
}
#endif /* HAVE_IBV_DEVX_ASYNC */
}
@@ -2744,13 +2757,15 @@ mlx5_os_dev_shared_handler_install(struct mlx5_dev_ctx_shared *sh)
void
mlx5_os_dev_shared_handler_uninstall(struct mlx5_dev_ctx_shared *sh)
{
- if (sh->intr_handle.fd >= 0)
- mlx5_intr_callback_unregister(&sh->intr_handle,
+ if (rte_intr_fd_get(sh->intr_handle) >= 0)
+ mlx5_intr_callback_unregister(sh->intr_handle,
mlx5_dev_interrupt_handler, sh);
+ rte_intr_instance_free(sh->intr_handle);
#ifdef HAVE_IBV_DEVX_ASYNC
- if (sh->intr_handle_devx.fd >= 0)
- rte_intr_callback_unregister(&sh->intr_handle_devx,
+ if (rte_intr_fd_get(sh->intr_handle_devx) >= 0)
+ rte_intr_callback_unregister(sh->intr_handle_devx,
mlx5_dev_interrupt_handler_devx, sh);
+ rte_intr_instance_free(sh->intr_handle_devx);
if (sh->devx_comp)
mlx5_glue->devx_destroy_cmd_comp(sh->devx_comp);
#endif
diff --git a/drivers/net/mlx5/linux/mlx5_socket.c b/drivers/net/mlx5/linux/mlx5_socket.c
index 902b8ec934..db474f030a 100644
--- a/drivers/net/mlx5/linux/mlx5_socket.c
+++ b/drivers/net/mlx5/linux/mlx5_socket.c
@@ -23,7 +23,7 @@
#define MLX5_SOCKET_PATH "/var/tmp/dpdk_net_mlx5_%d"
int server_socket; /* Unix socket for primary process. */
-struct rte_intr_handle server_intr_handle; /* Interrupt handler. */
+struct rte_intr_handle *server_intr_handle; /* Interrupt handler. */
/**
* Handle server pmd socket interrupts.
@@ -145,9 +145,19 @@ static int
mlx5_pmd_interrupt_handler_install(void)
{
MLX5_ASSERT(server_socket);
- server_intr_handle.fd = server_socket;
- server_intr_handle.type = RTE_INTR_HANDLE_EXT;
- return rte_intr_callback_register(&server_intr_handle,
+ server_intr_handle =
+ rte_intr_instance_alloc(RTE_INTR_INSTANCE_F_PRIVATE);
+ if (server_intr_handle == NULL) {
+ DRV_LOG(ERR, "Fail to allocate intr_handle");
+ return -ENOMEM;
+ }
+ if (rte_intr_fd_set(server_intr_handle, server_socket))
+ return -rte_errno;
+
+ if (rte_intr_type_set(server_intr_handle, RTE_INTR_HANDLE_EXT))
+ return -rte_errno;
+
+ return rte_intr_callback_register(server_intr_handle,
mlx5_pmd_socket_handle, NULL);
}
@@ -158,12 +168,13 @@ static void
mlx5_pmd_interrupt_handler_uninstall(void)
{
if (server_socket) {
- mlx5_intr_callback_unregister(&server_intr_handle,
+ mlx5_intr_callback_unregister(server_intr_handle,
mlx5_pmd_socket_handle,
NULL);
}
- server_intr_handle.fd = 0;
- server_intr_handle.type = RTE_INTR_HANDLE_UNKNOWN;
+ rte_intr_fd_set(server_intr_handle, 0);
+ rte_intr_type_set(server_intr_handle, RTE_INTR_HANDLE_UNKNOWN);
+ rte_intr_instance_free(server_intr_handle);
}
/**
diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h
index 5da5ceaafe..5768b82935 100644
--- a/drivers/net/mlx5/mlx5.h
+++ b/drivers/net/mlx5/mlx5.h
@@ -996,7 +996,7 @@ struct mlx5_dev_txpp {
uint32_t tick; /* Completion tick duration in nanoseconds. */
uint32_t test; /* Packet pacing test mode. */
int32_t skew; /* Scheduling skew. */
- struct rte_intr_handle intr_handle; /* Periodic interrupt. */
+ struct rte_intr_handle *intr_handle; /* Periodic interrupt. */
void *echan; /* Event Channel. */
struct mlx5_txpp_wq clock_queue; /* Clock Queue. */
struct mlx5_txpp_wq rearm_queue; /* Clock Queue. */
@@ -1160,8 +1160,8 @@ struct mlx5_dev_ctx_shared {
struct mlx5_indexed_pool *ipool[MLX5_IPOOL_MAX];
struct mlx5_indexed_pool *mdh_ipools[MLX5_MAX_MODIFY_NUM];
/* Shared interrupt handler section. */
- struct rte_intr_handle intr_handle; /* Interrupt handler for device. */
- struct rte_intr_handle intr_handle_devx; /* DEVX interrupt handler. */
+ struct rte_intr_handle *intr_handle; /* Interrupt handler for device. */
+ struct rte_intr_handle *intr_handle_devx; /* DEVX interrupt handler. */
void *devx_comp; /* DEVX async comp obj. */
struct mlx5_devx_obj *tis[16]; /* TIS object. */
struct mlx5_devx_obj *td; /* Transport domain. */
diff --git a/drivers/net/mlx5/mlx5_rxq.c b/drivers/net/mlx5/mlx5_rxq.c
index 5fed42324d..54689a6d2f 100644
--- a/drivers/net/mlx5/mlx5_rxq.c
+++ b/drivers/net/mlx5/mlx5_rxq.c
@@ -834,10 +834,7 @@ mlx5_rx_intr_vec_enable(struct rte_eth_dev *dev)
if (!dev->data->dev_conf.intr_conf.rxq)
return 0;
mlx5_rx_intr_vec_disable(dev);
- intr_handle->intr_vec = mlx5_malloc(0,
- n * sizeof(intr_handle->intr_vec[0]),
- 0, SOCKET_ID_ANY);
- if (intr_handle->intr_vec == NULL) {
+ if (rte_intr_vec_list_alloc(intr_handle, NULL, n)) {
DRV_LOG(ERR,
"port %u failed to allocate memory for interrupt"
" vector, Rx interrupts will not be supported",
@@ -845,7 +842,10 @@ mlx5_rx_intr_vec_enable(struct rte_eth_dev *dev)
rte_errno = ENOMEM;
return -rte_errno;
}
- intr_handle->type = RTE_INTR_HANDLE_EXT;
+
+ if (rte_intr_type_set(intr_handle, RTE_INTR_HANDLE_EXT))
+ return -rte_errno;
+
for (i = 0; i != n; ++i) {
/* This rxq obj must not be released in this function. */
struct mlx5_rxq_ctrl *rxq_ctrl = mlx5_rxq_get(dev, i);
@@ -856,9 +856,9 @@ mlx5_rx_intr_vec_enable(struct rte_eth_dev *dev)
if (!rxq_obj || (!rxq_obj->ibv_channel &&
!rxq_obj->devx_channel)) {
/* Use invalid intr_vec[] index to disable entry. */
- intr_handle->intr_vec[i] =
- RTE_INTR_VEC_RXTX_OFFSET +
- RTE_MAX_RXTX_INTR_VEC_ID;
+ if (rte_intr_vec_list_index_set(intr_handle, i,
+ RTE_INTR_VEC_RXTX_OFFSET + RTE_MAX_RXTX_INTR_VEC_ID))
+ return -rte_errno;
/* Decrease the rxq_ctrl's refcnt */
if (rxq_ctrl)
mlx5_rxq_release(dev, i);
@@ -885,14 +885,19 @@ mlx5_rx_intr_vec_enable(struct rte_eth_dev *dev)
mlx5_rx_intr_vec_disable(dev);
return -rte_errno;
}
- intr_handle->intr_vec[i] = RTE_INTR_VEC_RXTX_OFFSET + count;
- intr_handle->efds[count] = rxq_obj->fd;
+
+ if (rte_intr_vec_list_index_set(intr_handle, i,
+ RTE_INTR_VEC_RXTX_OFFSET + count))
+ return -rte_errno;
+ if (rte_intr_efds_index_set(intr_handle, count,
+ rxq_obj->fd))
+ return -rte_errno;
count++;
}
if (!count)
mlx5_rx_intr_vec_disable(dev);
- else
- intr_handle->nb_efd = count;
+ else if (rte_intr_nb_efd_set(intr_handle, count))
+ return -rte_errno;
return 0;
}
@@ -913,11 +918,11 @@ mlx5_rx_intr_vec_disable(struct rte_eth_dev *dev)
if (!dev->data->dev_conf.intr_conf.rxq)
return;
- if (!intr_handle->intr_vec)
+ if (rte_intr_vec_list_index_get(intr_handle, 0) < 0)
goto free;
for (i = 0; i != n; ++i) {
- if (intr_handle->intr_vec[i] == RTE_INTR_VEC_RXTX_OFFSET +
- RTE_MAX_RXTX_INTR_VEC_ID)
+ if (rte_intr_vec_list_index_get(intr_handle, i) ==
+ RTE_INTR_VEC_RXTX_OFFSET + RTE_MAX_RXTX_INTR_VEC_ID)
continue;
/**
* Need to access directly the queue to release the reference
@@ -927,10 +932,10 @@ mlx5_rx_intr_vec_disable(struct rte_eth_dev *dev)
}
free:
rte_intr_free_epoll_fd(intr_handle);
- if (intr_handle->intr_vec)
- mlx5_free(intr_handle->intr_vec);
- intr_handle->nb_efd = 0;
- intr_handle->intr_vec = NULL;
+
+ rte_intr_vec_list_free(intr_handle);
+
+ rte_intr_nb_efd_set(intr_handle, 0);
}
/**
diff --git a/drivers/net/mlx5/mlx5_trigger.c b/drivers/net/mlx5/mlx5_trigger.c
index dacf7ff272..d916c8addc 100644
--- a/drivers/net/mlx5/mlx5_trigger.c
+++ b/drivers/net/mlx5/mlx5_trigger.c
@@ -1183,7 +1183,7 @@ mlx5_dev_start(struct rte_eth_dev *dev)
dev->rx_pkt_burst = mlx5_select_rx_function(dev);
/* Enable datapath on secondary process. */
mlx5_mp_os_req_start_rxtx(dev);
- if (priv->sh->intr_handle.fd >= 0) {
+ if (rte_intr_fd_get(priv->sh->intr_handle) >= 0) {
priv->sh->port[priv->dev_port - 1].ih_port_id =
(uint32_t)dev->data->port_id;
} else {
@@ -1192,7 +1192,7 @@ mlx5_dev_start(struct rte_eth_dev *dev)
dev->data->dev_conf.intr_conf.lsc = 0;
dev->data->dev_conf.intr_conf.rmv = 0;
}
- if (priv->sh->intr_handle_devx.fd >= 0)
+ if (rte_intr_fd_get(priv->sh->intr_handle_devx) >= 0)
priv->sh->port[priv->dev_port - 1].devx_ih_port_id =
(uint32_t)dev->data->port_id;
return 0;
diff --git a/drivers/net/mlx5/mlx5_txpp.c b/drivers/net/mlx5/mlx5_txpp.c
index 48f03fcd79..34f92faa67 100644
--- a/drivers/net/mlx5/mlx5_txpp.c
+++ b/drivers/net/mlx5/mlx5_txpp.c
@@ -759,11 +759,11 @@ mlx5_txpp_interrupt_handler(void *cb_arg)
static void
mlx5_txpp_stop_service(struct mlx5_dev_ctx_shared *sh)
{
- if (!sh->txpp.intr_handle.fd)
+ if (!rte_intr_fd_get(sh->txpp.intr_handle))
return;
- mlx5_intr_callback_unregister(&sh->txpp.intr_handle,
+ mlx5_intr_callback_unregister(sh->txpp.intr_handle,
mlx5_txpp_interrupt_handler, sh);
- sh->txpp.intr_handle.fd = 0;
+ rte_intr_instance_free(sh->txpp.intr_handle);
}
/* Attach interrupt handler and fires first request to Rearm Queue. */
@@ -787,13 +787,22 @@ mlx5_txpp_start_service(struct mlx5_dev_ctx_shared *sh)
rte_errno = errno;
return -rte_errno;
}
- memset(&sh->txpp.intr_handle, 0, sizeof(sh->txpp.intr_handle));
+ sh->txpp.intr_handle =
+ rte_intr_instance_alloc(RTE_INTR_INSTANCE_F_SHARED);
+ if (sh->txpp.intr_handle == NULL) {
+ DRV_LOG(ERR, "Fail to allocate intr_handle");
+ return -ENOMEM;
+ }
fd = mlx5_os_get_devx_channel_fd(sh->txpp.echan);
- sh->txpp.intr_handle.fd = fd;
- sh->txpp.intr_handle.type = RTE_INTR_HANDLE_EXT;
- if (rte_intr_callback_register(&sh->txpp.intr_handle,
+ if (rte_intr_fd_set(sh->txpp.intr_handle, fd))
+ return -rte_errno;
+
+ if (rte_intr_type_set(sh->txpp.intr_handle, RTE_INTR_HANDLE_EXT))
+ return -rte_errno;
+
+ if (rte_intr_callback_register(sh->txpp.intr_handle,
mlx5_txpp_interrupt_handler, sh)) {
- sh->txpp.intr_handle.fd = 0;
+ rte_intr_fd_set(sh->txpp.intr_handle, 0);
DRV_LOG(ERR, "Failed to register CQE interrupt %d.", rte_errno);
return -rte_errno;
}
diff --git a/drivers/net/netvsc/hn_ethdev.c b/drivers/net/netvsc/hn_ethdev.c
index 9c4ae80e7e..8a950403ac 100644
--- a/drivers/net/netvsc/hn_ethdev.c
+++ b/drivers/net/netvsc/hn_ethdev.c
@@ -133,9 +133,9 @@ eth_dev_vmbus_allocate(struct rte_vmbus_device *dev, size_t private_data_size)
eth_dev->device = &dev->device;
/* interrupt is simulated */
- dev->intr_handle.type = RTE_INTR_HANDLE_EXT;
+ rte_intr_type_set(dev->intr_handle, RTE_INTR_HANDLE_EXT);
eth_dev->data->dev_flags |= RTE_ETH_DEV_INTR_LSC;
- eth_dev->intr_handle = &dev->intr_handle;
+ eth_dev->intr_handle = dev->intr_handle;
return eth_dev;
}
diff --git a/drivers/net/nfp/nfp_common.c b/drivers/net/nfp/nfp_common.c
index 3ea697c544..f8978e803a 100644
--- a/drivers/net/nfp/nfp_common.c
+++ b/drivers/net/nfp/nfp_common.c
@@ -307,24 +307,21 @@ nfp_configure_rx_interrupt(struct rte_eth_dev *dev,
struct nfp_net_hw *hw;
int i;
- if (!intr_handle->intr_vec) {
- intr_handle->intr_vec =
- rte_zmalloc("intr_vec",
- dev->data->nb_rx_queues * sizeof(int), 0);
- if (!intr_handle->intr_vec) {
- PMD_INIT_LOG(ERR, "Failed to allocate %d rx_queues"
- " intr_vec", dev->data->nb_rx_queues);
- return -ENOMEM;
- }
+ if (rte_intr_vec_list_alloc(intr_handle, "intr_vec",
+ dev->data->nb_rx_queues)) {
+ PMD_INIT_LOG(ERR, "Failed to allocate %d rx_queues"
+ " intr_vec", dev->data->nb_rx_queues);
+ return -ENOMEM;
}
hw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private);
- if (intr_handle->type == RTE_INTR_HANDLE_UIO) {
+ if (rte_intr_type_get(intr_handle) == RTE_INTR_HANDLE_UIO) {
PMD_INIT_LOG(INFO, "VF: enabling RX interrupt with UIO");
/* UIO just supports one queue and no LSC*/
nn_cfg_writeb(hw, NFP_NET_CFG_RXR_VEC(0), 0);
- intr_handle->intr_vec[0] = 0;
+ if (rte_intr_vec_list_index_set(intr_handle, 0, 0))
+ return -1;
} else {
PMD_INIT_LOG(INFO, "VF: enabling RX interrupt with VFIO");
for (i = 0; i < dev->data->nb_rx_queues; i++) {
@@ -333,9 +330,12 @@ nfp_configure_rx_interrupt(struct rte_eth_dev *dev,
* efd interrupts
*/
nn_cfg_writeb(hw, NFP_NET_CFG_RXR_VEC(i), i + 1);
- intr_handle->intr_vec[i] = i + 1;
+ if (rte_intr_vec_list_index_set(intr_handle, i,
+ i + 1))
+ return -1;
PMD_INIT_LOG(DEBUG, "intr_vec[%d]= %d", i,
- intr_handle->intr_vec[i]);
+ rte_intr_vec_list_index_get(intr_handle,
+ i));
}
}
@@ -804,7 +804,8 @@ nfp_rx_queue_intr_enable(struct rte_eth_dev *dev, uint16_t queue_id)
hw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private);
pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- if (pci_dev->intr_handle.type != RTE_INTR_HANDLE_UIO)
+ if (rte_intr_type_get(pci_dev->intr_handle) !=
+ RTE_INTR_HANDLE_UIO)
base = 1;
/* Make sure all updates are written before un-masking */
@@ -824,7 +825,8 @@ nfp_rx_queue_intr_disable(struct rte_eth_dev *dev, uint16_t queue_id)
hw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private);
pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- if (pci_dev->intr_handle.type != RTE_INTR_HANDLE_UIO)
+ if (rte_intr_type_get(pci_dev->intr_handle) !=
+ RTE_INTR_HANDLE_UIO)
base = 1;
/* Make sure all updates are written before un-masking */
@@ -874,7 +876,7 @@ nfp_net_irq_unmask(struct rte_eth_dev *dev)
if (hw->ctrl & NFP_NET_CFG_CTRL_MSIXAUTO) {
/* If MSI-X auto-masking is used, clear the entry */
rte_wmb();
- rte_intr_ack(&pci_dev->intr_handle);
+ rte_intr_ack(pci_dev->intr_handle);
} else {
/* Make sure all updates are written before un-masking */
rte_wmb();
diff --git a/drivers/net/nfp/nfp_ethdev.c b/drivers/net/nfp/nfp_ethdev.c
index e08e594b04..830863af28 100644
--- a/drivers/net/nfp/nfp_ethdev.c
+++ b/drivers/net/nfp/nfp_ethdev.c
@@ -82,7 +82,7 @@ static int
nfp_net_start(struct rte_eth_dev *dev)
{
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
uint32_t new_ctrl, update = 0;
struct nfp_net_hw *hw;
struct nfp_pf_dev *pf_dev;
@@ -109,12 +109,13 @@ nfp_net_start(struct rte_eth_dev *dev)
"with NFP multiport PF");
return -EINVAL;
}
- if (intr_handle->type == RTE_INTR_HANDLE_UIO) {
+ if (rte_intr_type_get(intr_handle) ==
+ RTE_INTR_HANDLE_UIO) {
/*
* Better not to share LSC with RX interrupts.
* Unregistering LSC interrupt handler
*/
- rte_intr_callback_unregister(&pci_dev->intr_handle,
+ rte_intr_callback_unregister(pci_dev->intr_handle,
nfp_net_dev_interrupt_handler, (void *)dev);
if (dev->data->nb_rx_queues > 1) {
@@ -333,10 +334,10 @@ nfp_net_close(struct rte_eth_dev *dev)
nfp_cpp_free(pf_dev->cpp);
rte_free(pf_dev);
- rte_intr_disable(&pci_dev->intr_handle);
+ rte_intr_disable(pci_dev->intr_handle);
/* unregister callback func from eal lib */
- rte_intr_callback_unregister(&pci_dev->intr_handle,
+ rte_intr_callback_unregister(pci_dev->intr_handle,
nfp_net_dev_interrupt_handler,
(void *)dev);
@@ -579,7 +580,7 @@ nfp_net_init(struct rte_eth_dev *eth_dev)
if (rte_eal_process_type() == RTE_PROC_PRIMARY) {
/* Registering LSC interrupt handler */
- rte_intr_callback_register(&pci_dev->intr_handle,
+ rte_intr_callback_register(pci_dev->intr_handle,
nfp_net_dev_interrupt_handler,
(void *)eth_dev);
/* Telling the firmware about the LSC interrupt entry */
diff --git a/drivers/net/nfp/nfp_ethdev_vf.c b/drivers/net/nfp/nfp_ethdev_vf.c
index 817fe64dbc..5557a1e002 100644
--- a/drivers/net/nfp/nfp_ethdev_vf.c
+++ b/drivers/net/nfp/nfp_ethdev_vf.c
@@ -51,7 +51,7 @@ static int
nfp_netvf_start(struct rte_eth_dev *dev)
{
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
uint32_t new_ctrl, update = 0;
struct nfp_net_hw *hw;
struct rte_eth_conf *dev_conf;
@@ -71,12 +71,13 @@ nfp_netvf_start(struct rte_eth_dev *dev)
/* check and configure queue intr-vector mapping */
if (dev->data->dev_conf.intr_conf.rxq != 0) {
- if (intr_handle->type == RTE_INTR_HANDLE_UIO) {
+ if (rte_intr_type_get(intr_handle) ==
+ RTE_INTR_HANDLE_UIO) {
/*
* Better not to share LSC with RX interrupts.
* Unregistering LSC interrupt handler
*/
- rte_intr_callback_unregister(&pci_dev->intr_handle,
+ rte_intr_callback_unregister(pci_dev->intr_handle,
nfp_net_dev_interrupt_handler, (void *)dev);
if (dev->data->nb_rx_queues > 1) {
@@ -225,10 +226,10 @@ nfp_netvf_close(struct rte_eth_dev *dev)
nfp_net_reset_rx_queue(this_rx_q);
}
- rte_intr_disable(&pci_dev->intr_handle);
+ rte_intr_disable(pci_dev->intr_handle);
/* unregister callback func from eal lib */
- rte_intr_callback_unregister(&pci_dev->intr_handle,
+ rte_intr_callback_unregister(pci_dev->intr_handle,
nfp_net_dev_interrupt_handler,
(void *)dev);
@@ -445,7 +446,7 @@ nfp_netvf_init(struct rte_eth_dev *eth_dev)
if (rte_eal_process_type() == RTE_PROC_PRIMARY) {
/* Registering LSC interrupt handler */
- rte_intr_callback_register(&pci_dev->intr_handle,
+ rte_intr_callback_register(pci_dev->intr_handle,
nfp_net_dev_interrupt_handler,
(void *)eth_dev);
/* Telling the firmware about the LSC interrupt entry */
diff --git a/drivers/net/ngbe/ngbe_ethdev.c b/drivers/net/ngbe/ngbe_ethdev.c
index fc76b84b5b..466e089b34 100644
--- a/drivers/net/ngbe/ngbe_ethdev.c
+++ b/drivers/net/ngbe/ngbe_ethdev.c
@@ -129,7 +129,7 @@ eth_ngbe_dev_init(struct rte_eth_dev *eth_dev, void *init_params __rte_unused)
{
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
struct ngbe_hw *hw = ngbe_dev_hw(eth_dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
const struct rte_memzone *mz;
uint32_t ctrl_ext;
int err;
@@ -334,7 +334,7 @@ ngbe_dev_start(struct rte_eth_dev *dev)
{
struct ngbe_hw *hw = ngbe_dev_hw(dev);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
uint32_t intr_vector = 0;
int err;
bool link_up = false, negotiate = false;
@@ -372,11 +372,9 @@ ngbe_dev_start(struct rte_eth_dev *dev)
return -1;
}
- if (rte_intr_dp_is_en(intr_handle) && intr_handle->intr_vec == NULL) {
- intr_handle->intr_vec =
- rte_zmalloc("intr_vec",
- dev->data->nb_rx_queues * sizeof(int), 0);
- if (intr_handle->intr_vec == NULL) {
+ if (rte_intr_dp_is_en(intr_handle)) {
+ if (rte_intr_vec_list_alloc(intr_handle, "intr_vec",
+ dev->data->nb_rx_queues)) {
PMD_INIT_LOG(ERR,
"Failed to allocate %d rx_queues intr_vec",
dev->data->nb_rx_queues);
@@ -503,7 +501,7 @@ ngbe_dev_stop(struct rte_eth_dev *dev)
struct rte_eth_link link;
struct ngbe_hw *hw = ngbe_dev_hw(dev);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
if (hw->adapter_stopped)
return 0;
@@ -540,10 +538,7 @@ ngbe_dev_stop(struct rte_eth_dev *dev)
/* Clean datapath event and queue/vec mapping */
rte_intr_efd_disable(intr_handle);
- if (intr_handle->intr_vec != NULL) {
- rte_free(intr_handle->intr_vec);
- intr_handle->intr_vec = NULL;
- }
+ rte_intr_vec_list_free(intr_handle);
hw->adapter_stopped = true;
dev->data->dev_started = 0;
@@ -559,7 +554,7 @@ ngbe_dev_close(struct rte_eth_dev *dev)
{
struct ngbe_hw *hw = ngbe_dev_hw(dev);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
int retries = 0;
int ret;
@@ -1093,7 +1088,7 @@ static void
ngbe_configure_msix(struct rte_eth_dev *dev)
{
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
struct ngbe_hw *hw = ngbe_dev_hw(dev);
uint32_t queue_id, base = NGBE_MISC_VEC_ID;
uint32_t vec = NGBE_MISC_VEC_ID;
@@ -1128,8 +1123,10 @@ ngbe_configure_msix(struct rte_eth_dev *dev)
queue_id++) {
/* by default, 1:1 mapping */
ngbe_set_ivar_map(hw, 0, queue_id, vec);
- intr_handle->intr_vec[queue_id] = vec;
- if (vec < base + intr_handle->nb_efd - 1)
+ rte_intr_vec_list_index_set(intr_handle,
+ queue_id, vec);
+ if (vec < base + rte_intr_nb_efd_get(intr_handle)
+ - 1)
vec++;
}
diff --git a/drivers/net/octeontx2/otx2_ethdev_irq.c b/drivers/net/octeontx2/otx2_ethdev_irq.c
index b121488faf..cc573bb2e8 100644
--- a/drivers/net/octeontx2/otx2_ethdev_irq.c
+++ b/drivers/net/octeontx2/otx2_ethdev_irq.c
@@ -34,7 +34,7 @@ static int
nix_lf_register_err_irq(struct rte_eth_dev *eth_dev)
{
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
- struct rte_intr_handle *handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *handle = pci_dev->intr_handle;
struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
int rc, vec;
@@ -54,7 +54,7 @@ static void
nix_lf_unregister_err_irq(struct rte_eth_dev *eth_dev)
{
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
- struct rte_intr_handle *handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *handle = pci_dev->intr_handle;
struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
int vec;
@@ -90,7 +90,7 @@ static int
nix_lf_register_ras_irq(struct rte_eth_dev *eth_dev)
{
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
- struct rte_intr_handle *handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *handle = pci_dev->intr_handle;
struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
int rc, vec;
@@ -110,7 +110,7 @@ static void
nix_lf_unregister_ras_irq(struct rte_eth_dev *eth_dev)
{
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
- struct rte_intr_handle *handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *handle = pci_dev->intr_handle;
struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
int vec;
@@ -263,7 +263,7 @@ int
oxt2_nix_register_queue_irqs(struct rte_eth_dev *eth_dev)
{
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
- struct rte_intr_handle *handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *handle = pci_dev->intr_handle;
struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
int vec, q, sqs, rqs, qs, rc = 0;
@@ -308,7 +308,7 @@ void
oxt2_nix_unregister_queue_irqs(struct rte_eth_dev *eth_dev)
{
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
- struct rte_intr_handle *handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *handle = pci_dev->intr_handle;
struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
int vec, q;
@@ -332,7 +332,7 @@ int
oxt2_nix_register_cq_irqs(struct rte_eth_dev *eth_dev)
{
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
- struct rte_intr_handle *handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *handle = pci_dev->intr_handle;
struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
uint8_t rc = 0, vec, q;
@@ -362,20 +362,19 @@ oxt2_nix_register_cq_irqs(struct rte_eth_dev *eth_dev)
return rc;
}
- if (!handle->intr_vec) {
- handle->intr_vec = rte_zmalloc("intr_vec",
- dev->configured_cints *
- sizeof(int), 0);
- if (!handle->intr_vec) {
- otx2_err("Failed to allocate %d rx intr_vec",
- dev->configured_cints);
- return -ENOMEM;
- }
+ rc = rte_intr_vec_list_alloc(handle, "intr_vec",
+ dev->configured_cints);
+ if (rc) {
+ otx2_err("Fail to allocate intr vec list, "
+ "rc=%d", rc);
+ return rc;
}
/* VFIO vector zero is reserved for misc interrupt so
* doing required adjustment. (b13bfab4cd)
*/
- handle->intr_vec[q] = RTE_INTR_VEC_RXTX_OFFSET + vec;
+ if (rte_intr_vec_list_index_set(handle, q,
+ RTE_INTR_VEC_RXTX_OFFSET + vec))
+ return -1;
/* Configure CQE interrupt coalescing parameters */
otx2_write64(((CQ_CQE_THRESH_DEFAULT) |
@@ -395,7 +394,7 @@ void
oxt2_nix_unregister_cq_irqs(struct rte_eth_dev *eth_dev)
{
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
- struct rte_intr_handle *handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *handle = pci_dev->intr_handle;
struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
int vec, q;
diff --git a/drivers/net/qede/qede_ethdev.c b/drivers/net/qede/qede_ethdev.c
index c907d7fd83..8ca00e7f6c 100644
--- a/drivers/net/qede/qede_ethdev.c
+++ b/drivers/net/qede/qede_ethdev.c
@@ -1569,17 +1569,17 @@ static int qede_dev_close(struct rte_eth_dev *eth_dev)
qdev->ops->common->slowpath_stop(edev);
qdev->ops->common->remove(edev);
- rte_intr_disable(&pci_dev->intr_handle);
+ rte_intr_disable(pci_dev->intr_handle);
- switch (pci_dev->intr_handle.type) {
+ switch (rte_intr_type_get(pci_dev->intr_handle)) {
case RTE_INTR_HANDLE_UIO_INTX:
case RTE_INTR_HANDLE_VFIO_LEGACY:
- rte_intr_callback_unregister(&pci_dev->intr_handle,
+ rte_intr_callback_unregister(pci_dev->intr_handle,
qede_interrupt_handler_intx,
(void *)eth_dev);
break;
default:
- rte_intr_callback_unregister(&pci_dev->intr_handle,
+ rte_intr_callback_unregister(pci_dev->intr_handle,
qede_interrupt_handler,
(void *)eth_dev);
}
@@ -2554,22 +2554,22 @@ static int qede_common_dev_init(struct rte_eth_dev *eth_dev, bool is_vf)
}
qede_update_pf_params(edev);
- switch (pci_dev->intr_handle.type) {
+ switch (rte_intr_type_get(pci_dev->intr_handle)) {
case RTE_INTR_HANDLE_UIO_INTX:
case RTE_INTR_HANDLE_VFIO_LEGACY:
int_mode = ECORE_INT_MODE_INTA;
- rte_intr_callback_register(&pci_dev->intr_handle,
+ rte_intr_callback_register(pci_dev->intr_handle,
qede_interrupt_handler_intx,
(void *)eth_dev);
break;
default:
int_mode = ECORE_INT_MODE_MSIX;
- rte_intr_callback_register(&pci_dev->intr_handle,
+ rte_intr_callback_register(pci_dev->intr_handle,
qede_interrupt_handler,
(void *)eth_dev);
}
- if (rte_intr_enable(&pci_dev->intr_handle)) {
+ if (rte_intr_enable(pci_dev->intr_handle)) {
DP_ERR(edev, "rte_intr_enable() failed\n");
rc = -ENODEV;
goto err;
diff --git a/drivers/net/sfc/sfc_intr.c b/drivers/net/sfc/sfc_intr.c
index 69414fd839..ab67aa9237 100644
--- a/drivers/net/sfc/sfc_intr.c
+++ b/drivers/net/sfc/sfc_intr.c
@@ -79,7 +79,7 @@ sfc_intr_line_handler(void *cb_arg)
if (qmask & (1 << sa->mgmt_evq_index))
sfc_intr_handle_mgmt_evq(sa);
- if (rte_intr_ack(&pci_dev->intr_handle) != 0)
+ if (rte_intr_ack(pci_dev->intr_handle) != 0)
sfc_err(sa, "cannot reenable interrupts");
sfc_log_init(sa, "done");
@@ -123,7 +123,7 @@ sfc_intr_message_handler(void *cb_arg)
sfc_intr_handle_mgmt_evq(sa);
- if (rte_intr_ack(&pci_dev->intr_handle) != 0)
+ if (rte_intr_ack(pci_dev->intr_handle) != 0)
sfc_err(sa, "cannot reenable interrupts");
sfc_log_init(sa, "done");
@@ -159,7 +159,7 @@ sfc_intr_start(struct sfc_adapter *sa)
goto fail_intr_init;
pci_dev = RTE_ETH_DEV_TO_PCI(sa->eth_dev);
- intr_handle = &pci_dev->intr_handle;
+ intr_handle = pci_dev->intr_handle;
if (intr->handler != NULL) {
if (intr->rxq_intr && rte_intr_cap_multiple(intr_handle)) {
@@ -171,16 +171,15 @@ sfc_intr_start(struct sfc_adapter *sa)
goto fail_rte_intr_efd_enable;
}
if (rte_intr_dp_is_en(intr_handle)) {
- intr_handle->intr_vec =
- rte_calloc("intr_vec",
- sa->eth_dev->data->nb_rx_queues, sizeof(int),
- 0);
- if (intr_handle->intr_vec == NULL) {
+ if (rte_intr_vec_list_alloc(intr_handle,
+ "intr_vec",
+ sa->eth_dev->data->nb_rx_queues)) {
sfc_err(sa,
"Failed to allocate %d rx_queues intr_vec",
sa->eth_dev->data->nb_rx_queues);
goto fail_intr_vector_alloc;
}
+
}
sfc_log_init(sa, "rte_intr_callback_register");
@@ -214,16 +213,17 @@ sfc_intr_start(struct sfc_adapter *sa)
efx_intr_enable(sa->nic);
}
- sfc_log_init(sa, "done type=%u max_intr=%d nb_efd=%u vec=%p",
- intr_handle->type, intr_handle->max_intr,
- intr_handle->nb_efd, intr_handle->intr_vec);
+ sfc_log_init(sa, "done type=%u max_intr=%d nb_efd=%u",
+ rte_intr_type_get(intr_handle),
+ rte_intr_max_intr_get(intr_handle),
+ rte_intr_nb_efd_get(intr_handle));
return 0;
fail_rte_intr_enable:
rte_intr_callback_unregister(intr_handle, intr->handler, (void *)sa);
fail_rte_intr_cb_reg:
- rte_free(intr_handle->intr_vec);
+ rte_intr_vec_list_free(intr_handle);
fail_intr_vector_alloc:
rte_intr_efd_disable(intr_handle);
@@ -250,9 +250,9 @@ sfc_intr_stop(struct sfc_adapter *sa)
efx_intr_disable(sa->nic);
- intr_handle = &pci_dev->intr_handle;
+ intr_handle = pci_dev->intr_handle;
- rte_free(intr_handle->intr_vec);
+ rte_intr_vec_list_free(intr_handle);
rte_intr_efd_disable(intr_handle);
if (rte_intr_disable(intr_handle) != 0)
@@ -322,7 +322,7 @@ sfc_intr_attach(struct sfc_adapter *sa)
sfc_log_init(sa, "entry");
- switch (pci_dev->intr_handle.type) {
+ switch (rte_intr_type_get(pci_dev->intr_handle)) {
#ifdef RTE_EXEC_ENV_LINUX
case RTE_INTR_HANDLE_UIO_INTX:
case RTE_INTR_HANDLE_VFIO_LEGACY:
diff --git a/drivers/net/tap/rte_eth_tap.c b/drivers/net/tap/rte_eth_tap.c
index ef3399ee0f..a9a7658147 100644
--- a/drivers/net/tap/rte_eth_tap.c
+++ b/drivers/net/tap/rte_eth_tap.c
@@ -1663,7 +1663,8 @@ tap_dev_intr_handler(void *cb_arg)
struct rte_eth_dev *dev = cb_arg;
struct pmd_internals *pmd = dev->data->dev_private;
- tap_nl_recv(pmd->intr_handle.fd, tap_nl_msg_handler, dev);
+ tap_nl_recv(rte_intr_fd_get(pmd->intr_handle),
+ tap_nl_msg_handler, dev);
}
static int
@@ -1674,22 +1675,22 @@ tap_lsc_intr_handle_set(struct rte_eth_dev *dev, int set)
/* In any case, disable interrupt if the conf is no longer there. */
if (!dev->data->dev_conf.intr_conf.lsc) {
- if (pmd->intr_handle.fd != -1) {
+ if (rte_intr_fd_get(pmd->intr_handle) != -1)
goto clean;
- }
+
return 0;
}
if (set) {
- pmd->intr_handle.fd = tap_nl_init(RTMGRP_LINK);
- if (unlikely(pmd->intr_handle.fd == -1))
+ rte_intr_fd_set(pmd->intr_handle, tap_nl_init(RTMGRP_LINK));
+ if (unlikely(rte_intr_fd_get(pmd->intr_handle) == -1))
return -EBADF;
return rte_intr_callback_register(
- &pmd->intr_handle, tap_dev_intr_handler, dev);
+ pmd->intr_handle, tap_dev_intr_handler, dev);
}
clean:
do {
- ret = rte_intr_callback_unregister(&pmd->intr_handle,
+ ret = rte_intr_callback_unregister(pmd->intr_handle,
tap_dev_intr_handler, dev);
if (ret >= 0) {
break;
@@ -1702,8 +1703,8 @@ tap_lsc_intr_handle_set(struct rte_eth_dev *dev, int set)
}
} while (true);
- tap_nl_final(pmd->intr_handle.fd);
- pmd->intr_handle.fd = -1;
+ tap_nl_final(rte_intr_fd_get(pmd->intr_handle));
+ rte_intr_fd_set(pmd->intr_handle, -1);
return 0;
}
@@ -1918,6 +1919,13 @@ eth_dev_tap_create(struct rte_vdev_device *vdev, const char *tap_name,
goto error_exit;
}
+ /* Allocate interrupt instance */
+ pmd->intr_handle = rte_intr_instance_alloc(RTE_INTR_INSTANCE_F_SHARED);
+ if (pmd->intr_handle == NULL) {
+ TAP_LOG(ERR, "Failed to allocate intr handle");
+ goto error_exit;
+ }
+
/* Setup some default values */
data = dev->data;
data->dev_private = pmd;
@@ -1935,9 +1943,9 @@ eth_dev_tap_create(struct rte_vdev_device *vdev, const char *tap_name,
dev->rx_pkt_burst = pmd_rx_burst;
dev->tx_pkt_burst = pmd_tx_burst;
- pmd->intr_handle.type = RTE_INTR_HANDLE_EXT;
- pmd->intr_handle.fd = -1;
- dev->intr_handle = &pmd->intr_handle;
+ rte_intr_type_set(pmd->intr_handle, RTE_INTR_HANDLE_EXT);
+ rte_intr_fd_set(pmd->intr_handle, -1);
+ dev->intr_handle = pmd->intr_handle;
/* Presetup the fds to -1 as being not valid */
for (i = 0; i < RTE_PMD_TAP_MAX_QUEUES; i++) {
@@ -2088,6 +2096,7 @@ eth_dev_tap_create(struct rte_vdev_device *vdev, const char *tap_name,
/* mac_addrs must not be freed alone because part of dev_private */
dev->data->mac_addrs = NULL;
rte_eth_dev_release_port(dev);
+ rte_intr_instance_free(pmd->intr_handle);
error_exit_nodev:
TAP_LOG(ERR, "%s Unable to initialize %s",
diff --git a/drivers/net/tap/rte_eth_tap.h b/drivers/net/tap/rte_eth_tap.h
index a98ea11a33..996021e424 100644
--- a/drivers/net/tap/rte_eth_tap.h
+++ b/drivers/net/tap/rte_eth_tap.h
@@ -89,7 +89,7 @@ struct pmd_internals {
LIST_HEAD(tap_implicit_flows, rte_flow) implicit_flows;
struct rx_queue rxq[RTE_PMD_TAP_MAX_QUEUES]; /* List of RX queues */
struct tx_queue txq[RTE_PMD_TAP_MAX_QUEUES]; /* List of TX queues */
- struct rte_intr_handle intr_handle; /* LSC interrupt handle. */
+ struct rte_intr_handle *intr_handle; /* LSC interrupt handle. */
int ka_fd; /* keep-alive file descriptor */
struct rte_mempool *gso_ctx_mp; /* Mempool for GSO packets */
};
diff --git a/drivers/net/tap/tap_intr.c b/drivers/net/tap/tap_intr.c
index 1cacc15d9f..b91f6ad449 100644
--- a/drivers/net/tap/tap_intr.c
+++ b/drivers/net/tap/tap_intr.c
@@ -29,12 +29,13 @@ static void
tap_rx_intr_vec_uninstall(struct rte_eth_dev *dev)
{
struct pmd_internals *pmd = dev->data->dev_private;
- struct rte_intr_handle *intr_handle = &pmd->intr_handle;
+ struct rte_intr_handle *intr_handle = pmd->intr_handle;
rte_intr_free_epoll_fd(intr_handle);
- free(intr_handle->intr_vec);
- intr_handle->intr_vec = NULL;
- intr_handle->nb_efd = 0;
+ rte_intr_vec_list_free(intr_handle);
+ rte_intr_nb_efd_set(intr_handle, 0);
+
+ rte_intr_instance_free(intr_handle);
}
/**
@@ -52,15 +53,15 @@ tap_rx_intr_vec_install(struct rte_eth_dev *dev)
struct pmd_internals *pmd = dev->data->dev_private;
struct pmd_process_private *process_private = dev->process_private;
unsigned int rxqs_n = pmd->dev->data->nb_rx_queues;
- struct rte_intr_handle *intr_handle = &pmd->intr_handle;
+ struct rte_intr_handle *intr_handle = pmd->intr_handle;
unsigned int n = RTE_MIN(rxqs_n, (uint32_t)RTE_MAX_RXTX_INTR_VEC_ID);
unsigned int i;
unsigned int count = 0;
if (!dev->data->dev_conf.intr_conf.rxq)
return 0;
- intr_handle->intr_vec = malloc(sizeof(int) * rxqs_n);
- if (intr_handle->intr_vec == NULL) {
+
+ if (rte_intr_vec_list_alloc(intr_handle, NULL, rxqs_n)) {
rte_errno = ENOMEM;
TAP_LOG(ERR,
"failed to allocate memory for interrupt vector,"
@@ -73,19 +74,23 @@ tap_rx_intr_vec_install(struct rte_eth_dev *dev)
/* Skip queues that cannot request interrupts. */
if (!rxq || process_private->rxq_fds[i] == -1) {
/* Use invalid intr_vec[] index to disable entry. */
- intr_handle->intr_vec[i] =
- RTE_INTR_VEC_RXTX_OFFSET +
- RTE_MAX_RXTX_INTR_VEC_ID;
+ if (rte_intr_vec_list_index_set(intr_handle, i,
+ RTE_INTR_VEC_RXTX_OFFSET + RTE_MAX_RXTX_INTR_VEC_ID))
+ return -rte_errno;
continue;
}
- intr_handle->intr_vec[i] = RTE_INTR_VEC_RXTX_OFFSET + count;
- intr_handle->efds[count] = process_private->rxq_fds[i];
+ if (rte_intr_vec_list_index_set(intr_handle, i,
+ RTE_INTR_VEC_RXTX_OFFSET + count))
+ return -rte_errno;
+ if (rte_intr_efds_index_set(intr_handle, count,
+ process_private->rxq_fds[i]))
+ return -rte_errno;
count++;
}
if (!count)
tap_rx_intr_vec_uninstall(dev);
- else
- intr_handle->nb_efd = count;
+ else if (rte_intr_nb_efd_set(intr_handle, count))
+ return -rte_errno;
return 0;
}
diff --git a/drivers/net/thunderx/nicvf_ethdev.c b/drivers/net/thunderx/nicvf_ethdev.c
index 762647e3b6..fc334cf734 100644
--- a/drivers/net/thunderx/nicvf_ethdev.c
+++ b/drivers/net/thunderx/nicvf_ethdev.c
@@ -1858,6 +1858,8 @@ nicvf_dev_close(struct rte_eth_dev *dev)
nicvf_periodic_alarm_stop(nicvf_vf_interrupt, nic->snicvf[i]);
}
+ rte_intr_instance_free(nic->intr_handle);
+
return 0;
}
@@ -2157,6 +2159,14 @@ nicvf_eth_dev_init(struct rte_eth_dev *eth_dev)
goto fail;
}
+ /* Allocate interrupt instance */
+ nic->intr_handle = rte_intr_instance_alloc(RTE_INTR_INSTANCE_F_SHARED);
+ if (nic->intr_handle == NULL) {
+ PMD_INIT_LOG(ERR, "Failed to allocate intr handle");
+ ret = -ENODEV;
+ goto fail;
+ }
+
nicvf_disable_all_interrupts(nic);
ret = nicvf_periodic_alarm_start(nicvf_interrupt, eth_dev);
diff --git a/drivers/net/thunderx/nicvf_struct.h b/drivers/net/thunderx/nicvf_struct.h
index 0ca207d0dd..c7ea13313e 100644
--- a/drivers/net/thunderx/nicvf_struct.h
+++ b/drivers/net/thunderx/nicvf_struct.h
@@ -100,7 +100,7 @@ struct nicvf {
uint16_t subsystem_vendor_id;
struct nicvf_rbdr *rbdr;
struct nicvf_rss_reta_info rss_info;
- struct rte_intr_handle intr_handle;
+ struct rte_intr_handle *intr_handle;
uint8_t cpi_alg;
uint16_t mtu;
int skip_bytes;
diff --git a/drivers/net/txgbe/txgbe_ethdev.c b/drivers/net/txgbe/txgbe_ethdev.c
index 4b3b703029..169272ded5 100644
--- a/drivers/net/txgbe/txgbe_ethdev.c
+++ b/drivers/net/txgbe/txgbe_ethdev.c
@@ -548,7 +548,7 @@ eth_txgbe_dev_init(struct rte_eth_dev *eth_dev, void *init_params __rte_unused)
struct txgbe_dcb_config *dcb_config = TXGBE_DEV_DCB_CONFIG(eth_dev);
struct txgbe_filter_info *filter_info = TXGBE_DEV_FILTER(eth_dev);
struct txgbe_bw_conf *bw_conf = TXGBE_DEV_BW_CONF(eth_dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
const struct rte_memzone *mz;
uint32_t ctrl_ext;
uint16_t csum;
@@ -1620,7 +1620,7 @@ txgbe_dev_start(struct rte_eth_dev *dev)
struct txgbe_hw_stats *hw_stats = TXGBE_DEV_STATS(dev);
struct txgbe_vf_info *vfinfo = *TXGBE_DEV_VFDATA(dev);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
uint32_t intr_vector = 0;
int err;
bool link_up = false, negotiate = 0;
@@ -1670,17 +1670,14 @@ txgbe_dev_start(struct rte_eth_dev *dev)
return -1;
}
- if (rte_intr_dp_is_en(intr_handle) && !intr_handle->intr_vec) {
- intr_handle->intr_vec =
- rte_zmalloc("intr_vec",
- dev->data->nb_rx_queues * sizeof(int), 0);
- if (intr_handle->intr_vec == NULL) {
+ if (rte_intr_dp_is_en(intr_handle)) {
+ if (rte_intr_vec_list_alloc(intr_handle, "intr_vec",
+ dev->data->nb_rx_queues)) {
PMD_INIT_LOG(ERR, "Failed to allocate %d rx_queues"
" intr_vec", dev->data->nb_rx_queues);
return -ENOMEM;
}
}
-
/* configure msix for sleep until rx interrupt */
txgbe_configure_msix(dev);
@@ -1861,7 +1858,7 @@ txgbe_dev_stop(struct rte_eth_dev *dev)
struct txgbe_hw *hw = TXGBE_DEV_HW(dev);
struct txgbe_vf_info *vfinfo = *TXGBE_DEV_VFDATA(dev);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
int vf;
struct txgbe_tm_conf *tm_conf = TXGBE_DEV_TM_CONF(dev);
@@ -1911,10 +1908,7 @@ txgbe_dev_stop(struct rte_eth_dev *dev)
/* Clean datapath event and queue/vec mapping */
rte_intr_efd_disable(intr_handle);
- if (intr_handle->intr_vec != NULL) {
- rte_free(intr_handle->intr_vec);
- intr_handle->intr_vec = NULL;
- }
+ rte_intr_vec_list_free(intr_handle);
/* reset hierarchy commit */
tm_conf->committed = false;
@@ -1977,7 +1971,7 @@ txgbe_dev_close(struct rte_eth_dev *dev)
{
struct txgbe_hw *hw = TXGBE_DEV_HW(dev);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
int retries = 0;
int ret;
@@ -2936,8 +2930,8 @@ txgbe_dev_interrupt_get_status(struct rte_eth_dev *dev,
struct txgbe_hw *hw = TXGBE_DEV_HW(dev);
struct txgbe_interrupt *intr = TXGBE_DEV_INTR(dev);
- if (intr_handle->type != RTE_INTR_HANDLE_UIO &&
- intr_handle->type != RTE_INTR_HANDLE_VFIO_MSIX)
+ if (rte_intr_type_get(intr_handle) != RTE_INTR_HANDLE_UIO &&
+ rte_intr_type_get(intr_handle) != RTE_INTR_HANDLE_VFIO_MSIX)
wr32(hw, TXGBE_PX_INTA, 1);
/* clear all cause mask */
@@ -3103,7 +3097,7 @@ txgbe_dev_interrupt_delayed_handler(void *param)
{
struct rte_eth_dev *dev = (struct rte_eth_dev *)param;
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
struct txgbe_interrupt *intr = TXGBE_DEV_INTR(dev);
struct txgbe_hw *hw = TXGBE_DEV_HW(dev);
uint32_t eicr;
@@ -3623,7 +3617,7 @@ static int
txgbe_dev_rx_queue_intr_enable(struct rte_eth_dev *dev, uint16_t queue_id)
{
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
uint32_t mask;
struct txgbe_hw *hw = TXGBE_DEV_HW(dev);
@@ -3705,7 +3699,7 @@ static void
txgbe_configure_msix(struct rte_eth_dev *dev)
{
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
struct txgbe_hw *hw = TXGBE_DEV_HW(dev);
uint32_t queue_id, base = TXGBE_MISC_VEC_ID;
uint32_t vec = TXGBE_MISC_VEC_ID;
@@ -3739,8 +3733,10 @@ txgbe_configure_msix(struct rte_eth_dev *dev)
queue_id++) {
/* by default, 1:1 mapping */
txgbe_set_ivar_map(hw, 0, queue_id, vec);
- intr_handle->intr_vec[queue_id] = vec;
- if (vec < base + intr_handle->nb_efd - 1)
+ rte_intr_vec_list_index_set(intr_handle,
+ queue_id, vec);
+ if (vec < base + rte_intr_nb_efd_get(intr_handle)
+ - 1)
vec++;
}
diff --git a/drivers/net/txgbe/txgbe_ethdev_vf.c b/drivers/net/txgbe/txgbe_ethdev_vf.c
index 283b52e8f3..4dda55b0c2 100644
--- a/drivers/net/txgbe/txgbe_ethdev_vf.c
+++ b/drivers/net/txgbe/txgbe_ethdev_vf.c
@@ -166,7 +166,7 @@ eth_txgbevf_dev_init(struct rte_eth_dev *eth_dev)
int err;
uint32_t tc, tcs;
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
struct txgbe_hw *hw = TXGBE_DEV_HW(eth_dev);
struct txgbe_vfta *shadow_vfta = TXGBE_DEV_VFTA(eth_dev);
struct txgbe_hwstrip *hwstrip = TXGBE_DEV_HWSTRIP(eth_dev);
@@ -608,7 +608,7 @@ txgbevf_dev_start(struct rte_eth_dev *dev)
struct txgbe_hw *hw = TXGBE_DEV_HW(dev);
uint32_t intr_vector = 0;
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
int err, mask = 0;
@@ -669,11 +669,9 @@ txgbevf_dev_start(struct rte_eth_dev *dev)
return -1;
}
- if (rte_intr_dp_is_en(intr_handle) && !intr_handle->intr_vec) {
- intr_handle->intr_vec =
- rte_zmalloc("intr_vec",
- dev->data->nb_rx_queues * sizeof(int), 0);
- if (intr_handle->intr_vec == NULL) {
+ if (rte_intr_dp_is_en(intr_handle)) {
+ if (rte_intr_vec_list_alloc(intr_handle, "intr_vec",
+ dev->data->nb_rx_queues)) {
PMD_INIT_LOG(ERR, "Failed to allocate %d rx_queues"
" intr_vec", dev->data->nb_rx_queues);
return -ENOMEM;
@@ -712,7 +710,7 @@ txgbevf_dev_stop(struct rte_eth_dev *dev)
struct txgbe_hw *hw = TXGBE_DEV_HW(dev);
struct txgbe_adapter *adapter = TXGBE_DEV_ADAPTER(dev);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
if (hw->adapter_stopped)
return 0;
@@ -739,10 +737,7 @@ txgbevf_dev_stop(struct rte_eth_dev *dev)
/* Clean datapath event and queue/vec mapping */
rte_intr_efd_disable(intr_handle);
- if (intr_handle->intr_vec != NULL) {
- rte_free(intr_handle->intr_vec);
- intr_handle->intr_vec = NULL;
- }
+ rte_intr_vec_list_free(intr_handle);
adapter->rss_reta_updated = 0;
hw->dev_start = false;
@@ -755,7 +750,7 @@ txgbevf_dev_close(struct rte_eth_dev *dev)
{
struct txgbe_hw *hw = TXGBE_DEV_HW(dev);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
int ret;
PMD_INIT_FUNC_TRACE();
@@ -916,7 +911,7 @@ static int
txgbevf_dev_rx_queue_intr_enable(struct rte_eth_dev *dev, uint16_t queue_id)
{
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
struct txgbe_interrupt *intr = TXGBE_DEV_INTR(dev);
struct txgbe_hw *hw = TXGBE_DEV_HW(dev);
uint32_t vec = TXGBE_MISC_VEC_ID;
@@ -938,7 +933,7 @@ txgbevf_dev_rx_queue_intr_disable(struct rte_eth_dev *dev, uint16_t queue_id)
struct txgbe_interrupt *intr = TXGBE_DEV_INTR(dev);
struct txgbe_hw *hw = TXGBE_DEV_HW(dev);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
uint32_t vec = TXGBE_MISC_VEC_ID;
if (rte_intr_allow_others(intr_handle))
@@ -978,7 +973,7 @@ static void
txgbevf_configure_msix(struct rte_eth_dev *dev)
{
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
struct txgbe_hw *hw = TXGBE_DEV_HW(dev);
uint32_t q_idx;
uint32_t vector_idx = TXGBE_MISC_VEC_ID;
@@ -1004,8 +999,10 @@ txgbevf_configure_msix(struct rte_eth_dev *dev)
* as TXGBE_VF_MAXMSIVECOTR = 1
*/
txgbevf_set_ivar_map(hw, 0, q_idx, vector_idx);
- intr_handle->intr_vec[q_idx] = vector_idx;
- if (vector_idx < base + intr_handle->nb_efd - 1)
+ rte_intr_vec_list_index_set(intr_handle, q_idx,
+ vector_idx);
+ if (vector_idx < base + rte_intr_nb_efd_get(intr_handle)
+ - 1)
vector_idx++;
}
diff --git a/drivers/net/vhost/rte_eth_vhost.c b/drivers/net/vhost/rte_eth_vhost.c
index beb4b8de2d..5111304ff9 100644
--- a/drivers/net/vhost/rte_eth_vhost.c
+++ b/drivers/net/vhost/rte_eth_vhost.c
@@ -523,40 +523,43 @@ static int
eth_vhost_update_intr(struct rte_eth_dev *eth_dev, uint16_t rxq_idx)
{
struct rte_intr_handle *handle = eth_dev->intr_handle;
- struct rte_epoll_event rev;
+ struct rte_epoll_event rev, *elist;
int epfd, ret;
- if (!handle)
+ if (handle == NULL)
return 0;
- if (handle->efds[rxq_idx] == handle->elist[rxq_idx].fd)
+ elist = rte_intr_elist_index_get(handle, rxq_idx);
+ if (rte_intr_efds_index_get(handle, rxq_idx) == elist->fd)
return 0;
VHOST_LOG(INFO, "kickfd for rxq-%d was changed, updating handler.\n",
rxq_idx);
- if (handle->elist[rxq_idx].fd != -1)
+ if (elist->fd != -1)
VHOST_LOG(ERR, "Unexpected previous kickfd value (Got %d, expected -1).\n",
- handle->elist[rxq_idx].fd);
+ elist->fd);
/*
* First remove invalid epoll event, and then install
* the new one. May be solved with a proper API in the
* future.
*/
- epfd = handle->elist[rxq_idx].epfd;
- rev = handle->elist[rxq_idx];
+ epfd = elist->epfd;
+ rev = *elist;
ret = rte_epoll_ctl(epfd, EPOLL_CTL_DEL, rev.fd,
- &handle->elist[rxq_idx]);
+ elist);
if (ret) {
VHOST_LOG(ERR, "Delete epoll event failed.\n");
return ret;
}
- rev.fd = handle->efds[rxq_idx];
- handle->elist[rxq_idx] = rev;
- ret = rte_epoll_ctl(epfd, EPOLL_CTL_ADD, rev.fd,
- &handle->elist[rxq_idx]);
+ rev.fd = rte_intr_efds_index_get(handle, rxq_idx);
+ if (rte_intr_elist_index_set(handle, rxq_idx, rev))
+ return -rte_errno;
+
+ elist = rte_intr_elist_index_get(handle, rxq_idx);
+ ret = rte_epoll_ctl(epfd, EPOLL_CTL_ADD, rev.fd, elist);
if (ret) {
VHOST_LOG(ERR, "Add epoll event failed.\n");
return ret;
@@ -634,12 +637,10 @@ eth_vhost_uninstall_intr(struct rte_eth_dev *dev)
{
struct rte_intr_handle *intr_handle = dev->intr_handle;
- if (intr_handle) {
- if (intr_handle->intr_vec)
- free(intr_handle->intr_vec);
- free(intr_handle);
+ if (intr_handle != NULL) {
+ rte_intr_vec_list_free(intr_handle);
+ rte_intr_instance_free(intr_handle);
}
-
dev->intr_handle = NULL;
}
@@ -653,32 +654,31 @@ eth_vhost_install_intr(struct rte_eth_dev *dev)
int ret;
/* uninstall firstly if we are reconnecting */
- if (dev->intr_handle)
+ if (dev->intr_handle != NULL)
eth_vhost_uninstall_intr(dev);
- dev->intr_handle = malloc(sizeof(*dev->intr_handle));
- if (!dev->intr_handle) {
+ dev->intr_handle = rte_intr_instance_alloc(RTE_INTR_INSTANCE_F_PRIVATE);
+ if (dev->intr_handle == NULL) {
VHOST_LOG(ERR, "Fail to allocate intr_handle\n");
return -ENOMEM;
}
- memset(dev->intr_handle, 0, sizeof(*dev->intr_handle));
-
- dev->intr_handle->efd_counter_size = sizeof(uint64_t);
+ if (rte_intr_efd_counter_size_set(dev->intr_handle, sizeof(uint64_t)))
+ return -rte_errno;
- dev->intr_handle->intr_vec =
- malloc(nb_rxq * sizeof(dev->intr_handle->intr_vec[0]));
-
- if (!dev->intr_handle->intr_vec) {
+ if (rte_intr_vec_list_alloc(dev->intr_handle, NULL, nb_rxq)) {
VHOST_LOG(ERR,
"Failed to allocate memory for interrupt vector\n");
- free(dev->intr_handle);
+ rte_intr_instance_free(dev->intr_handle);
return -ENOMEM;
}
+
VHOST_LOG(INFO, "Prepare intr vec\n");
for (i = 0; i < nb_rxq; i++) {
- dev->intr_handle->intr_vec[i] = RTE_INTR_VEC_RXTX_OFFSET + i;
- dev->intr_handle->efds[i] = -1;
+ if (rte_intr_vec_list_index_set(dev->intr_handle, i, RTE_INTR_VEC_RXTX_OFFSET + i))
+ return -rte_errno;
+ if (rte_intr_efds_index_set(dev->intr_handle, i, -1))
+ return -rte_errno;
vq = dev->data->rx_queues[i];
if (!vq) {
VHOST_LOG(INFO, "rxq-%d not setup yet, skip!\n", i);
@@ -697,13 +697,20 @@ eth_vhost_install_intr(struct rte_eth_dev *dev)
"rxq-%d's kickfd is invalid, skip!\n", i);
continue;
}
- dev->intr_handle->efds[i] = vring.kickfd;
+
+ if (rte_intr_efds_index_set(dev->intr_handle, i, vring.kickfd))
+ continue;
VHOST_LOG(INFO, "Installed intr vec for rxq-%d\n", i);
}
- dev->intr_handle->nb_efd = nb_rxq;
- dev->intr_handle->max_intr = nb_rxq + 1;
- dev->intr_handle->type = RTE_INTR_HANDLE_VDEV;
+ if (rte_intr_nb_efd_set(dev->intr_handle, nb_rxq))
+ return -rte_errno;
+
+ if (rte_intr_max_intr_set(dev->intr_handle, nb_rxq + 1))
+ return -rte_errno;
+
+ if (rte_intr_type_set(dev->intr_handle, RTE_INTR_HANDLE_VDEV))
+ return -rte_errno;
return 0;
}
@@ -908,7 +915,10 @@ vring_conf_update(int vid, struct rte_eth_dev *eth_dev, uint16_t vring_id)
vring_id);
return ret;
}
- eth_dev->intr_handle->efds[rx_idx] = vring.kickfd;
+
+ if (rte_intr_efds_index_set(eth_dev->intr_handle, rx_idx,
+ vring.kickfd))
+ return -rte_errno;
vq = eth_dev->data->rx_queues[rx_idx];
if (!vq) {
diff --git a/drivers/net/virtio/virtio_ethdev.c b/drivers/net/virtio/virtio_ethdev.c
index 94120b3490..26de006c77 100644
--- a/drivers/net/virtio/virtio_ethdev.c
+++ b/drivers/net/virtio/virtio_ethdev.c
@@ -731,8 +731,7 @@ virtio_dev_close(struct rte_eth_dev *dev)
if (intr_conf->lsc || intr_conf->rxq) {
virtio_intr_disable(dev);
rte_intr_efd_disable(dev->intr_handle);
- rte_free(dev->intr_handle->intr_vec);
- dev->intr_handle->intr_vec = NULL;
+ rte_intr_vec_list_free(dev->intr_handle);
}
virtio_reset(hw);
@@ -1643,7 +1642,9 @@ virtio_queues_bind_intr(struct rte_eth_dev *dev)
PMD_INIT_LOG(INFO, "queue/interrupt binding");
for (i = 0; i < dev->data->nb_rx_queues; ++i) {
- dev->intr_handle->intr_vec[i] = i + 1;
+ if (rte_intr_vec_list_index_set(dev->intr_handle, i,
+ i + 1))
+ return -rte_errno;
if (VIRTIO_OPS(hw)->set_queue_irq(hw, hw->vqs[i * 2], i + 1) ==
VIRTIO_MSI_NO_VECTOR) {
PMD_DRV_LOG(ERR, "failed to set queue vector");
@@ -1682,15 +1683,11 @@ virtio_configure_intr(struct rte_eth_dev *dev)
return -1;
}
- if (!dev->intr_handle->intr_vec) {
- dev->intr_handle->intr_vec =
- rte_zmalloc("intr_vec",
- hw->max_queue_pairs * sizeof(int), 0);
- if (!dev->intr_handle->intr_vec) {
- PMD_INIT_LOG(ERR, "Failed to allocate %u rxq vectors",
- hw->max_queue_pairs);
- return -ENOMEM;
- }
+ if (rte_intr_vec_list_alloc(dev->intr_handle, "intr_vec",
+ hw->max_queue_pairs)) {
+ PMD_INIT_LOG(ERR, "Failed to allocate %u rxq vectors",
+ hw->max_queue_pairs);
+ return -ENOMEM;
}
if (dev->data->dev_flags & RTE_ETH_DEV_INTR_LSC) {
diff --git a/drivers/net/virtio/virtio_user/virtio_user_dev.c b/drivers/net/virtio/virtio_user/virtio_user_dev.c
index 6a6145583b..35aa76b1ff 100644
--- a/drivers/net/virtio/virtio_user/virtio_user_dev.c
+++ b/drivers/net/virtio/virtio_user/virtio_user_dev.c
@@ -406,23 +406,37 @@ virtio_user_fill_intr_handle(struct virtio_user_dev *dev)
uint32_t i;
struct rte_eth_dev *eth_dev = &rte_eth_devices[dev->hw.port_id];
- if (!eth_dev->intr_handle) {
- eth_dev->intr_handle = malloc(sizeof(*eth_dev->intr_handle));
- if (!eth_dev->intr_handle) {
+ if (eth_dev->intr_handle == NULL) {
+ eth_dev->intr_handle =
+ rte_intr_instance_alloc(RTE_INTR_INSTANCE_F_PRIVATE);
+ if (eth_dev->intr_handle == NULL) {
PMD_DRV_LOG(ERR, "(%s) failed to allocate intr_handle", dev->path);
return -1;
}
- memset(eth_dev->intr_handle, 0, sizeof(*eth_dev->intr_handle));
}
- for (i = 0; i < dev->max_queue_pairs; ++i)
- eth_dev->intr_handle->efds[i] = dev->callfds[2 * i];
- eth_dev->intr_handle->nb_efd = dev->max_queue_pairs;
- eth_dev->intr_handle->max_intr = dev->max_queue_pairs + 1;
- eth_dev->intr_handle->type = RTE_INTR_HANDLE_VDEV;
+ for (i = 0; i < dev->max_queue_pairs; ++i) {
+ if (rte_intr_efds_index_set(eth_dev->intr_handle, i,
+ dev->callfds[i]))
+ return -rte_errno;
+ }
+
+ if (rte_intr_nb_efd_set(eth_dev->intr_handle, dev->max_queue_pairs))
+ return -rte_errno;
+
+ if (rte_intr_max_intr_set(eth_dev->intr_handle,
+ dev->max_queue_pairs + 1))
+ return -rte_errno;
+
+ if (rte_intr_type_set(eth_dev->intr_handle, RTE_INTR_HANDLE_VDEV))
+ return -rte_errno;
+
/* For virtio vdev, no need to read counter for clean */
- eth_dev->intr_handle->efd_counter_size = 0;
- eth_dev->intr_handle->fd = dev->ops->get_intr_fd(dev);
+ if (rte_intr_efd_counter_size_set(eth_dev->intr_handle, 0))
+ return -rte_errno;
+
+ if (rte_intr_fd_set(eth_dev->intr_handle, dev->ops->get_intr_fd(dev)))
+ return -rte_errno;
return 0;
}
@@ -656,10 +670,8 @@ virtio_user_dev_uninit(struct virtio_user_dev *dev)
{
struct rte_eth_dev *eth_dev = &rte_eth_devices[dev->hw.port_id];
- if (eth_dev->intr_handle) {
- free(eth_dev->intr_handle);
- eth_dev->intr_handle = NULL;
- }
+ rte_intr_instance_free(eth_dev->intr_handle);
+ eth_dev->intr_handle = NULL;
virtio_user_stop_device(dev);
@@ -962,7 +974,7 @@ virtio_user_dev_delayed_disconnect_handler(void *param)
return;
}
PMD_DRV_LOG(DEBUG, "Unregistering intr fd: %d",
- eth_dev->intr_handle->fd);
+ rte_intr_fd_get(eth_dev->intr_handle));
if (rte_intr_callback_unregister(eth_dev->intr_handle,
virtio_interrupt_handler,
eth_dev) != 1)
@@ -972,10 +984,11 @@ virtio_user_dev_delayed_disconnect_handler(void *param)
if (dev->ops->server_disconnect)
dev->ops->server_disconnect(dev);
- eth_dev->intr_handle->fd = dev->ops->get_intr_fd(dev);
+ rte_intr_fd_set(eth_dev->intr_handle,
+ dev->ops->get_intr_fd(dev));
PMD_DRV_LOG(DEBUG, "Registering intr fd: %d",
- eth_dev->intr_handle->fd);
+ rte_intr_fd_get(eth_dev->intr_handle));
if (rte_intr_callback_register(eth_dev->intr_handle,
virtio_interrupt_handler,
@@ -996,16 +1009,17 @@ virtio_user_dev_delayed_intr_reconfig_handler(void *param)
struct rte_eth_dev *eth_dev = &rte_eth_devices[dev->hw.port_id];
PMD_DRV_LOG(DEBUG, "Unregistering intr fd: %d",
- eth_dev->intr_handle->fd);
+ rte_intr_fd_get(eth_dev->intr_handle));
if (rte_intr_callback_unregister(eth_dev->intr_handle,
virtio_interrupt_handler,
eth_dev) != 1)
PMD_DRV_LOG(ERR, "interrupt unregister failed");
- eth_dev->intr_handle->fd = dev->ops->get_intr_fd(dev);
+ rte_intr_fd_set(eth_dev->intr_handle, dev->ops->get_intr_fd(dev));
- PMD_DRV_LOG(DEBUG, "Registering intr fd: %d", eth_dev->intr_handle->fd);
+ PMD_DRV_LOG(DEBUG, "Registering intr fd: %d",
+ rte_intr_fd_get(eth_dev->intr_handle));
if (rte_intr_callback_register(eth_dev->intr_handle,
virtio_interrupt_handler, eth_dev))
diff --git a/drivers/net/vmxnet3/vmxnet3_ethdev.c b/drivers/net/vmxnet3/vmxnet3_ethdev.c
index 26d9edf531..d1ef1cad08 100644
--- a/drivers/net/vmxnet3/vmxnet3_ethdev.c
+++ b/drivers/net/vmxnet3/vmxnet3_ethdev.c
@@ -619,11 +619,9 @@ vmxnet3_configure_msix(struct rte_eth_dev *dev)
return -1;
}
- if (rte_intr_dp_is_en(intr_handle) && !intr_handle->intr_vec) {
- intr_handle->intr_vec =
- rte_zmalloc("intr_vec",
- dev->data->nb_rx_queues * sizeof(int), 0);
- if (intr_handle->intr_vec == NULL) {
+ if (rte_intr_dp_is_en(intr_handle)) {
+ if (rte_intr_vec_list_alloc(intr_handle, "intr_vec",
+ dev->data->nb_rx_queues)) {
PMD_INIT_LOG(ERR, "Failed to allocate %d Rx queues intr_vec",
dev->data->nb_rx_queues);
rte_intr_efd_disable(intr_handle);
@@ -634,8 +632,7 @@ vmxnet3_configure_msix(struct rte_eth_dev *dev)
if (!rte_intr_allow_others(intr_handle) &&
dev->data->dev_conf.intr_conf.lsc != 0) {
PMD_INIT_LOG(ERR, "not enough intr vector to support both Rx interrupt and LSC");
- rte_free(intr_handle->intr_vec);
- intr_handle->intr_vec = NULL;
+ rte_intr_vec_list_free(intr_handle);
rte_intr_efd_disable(intr_handle);
return -1;
}
@@ -643,17 +640,19 @@ vmxnet3_configure_msix(struct rte_eth_dev *dev)
/* if we cannot allocate one MSI-X vector per queue, don't enable
* interrupt mode.
*/
- if (hw->intr.num_intrs != (intr_handle->nb_efd + 1)) {
+ if (hw->intr.num_intrs !=
+ (rte_intr_nb_efd_get(intr_handle) + 1)) {
PMD_INIT_LOG(ERR, "Device configured with %d Rx intr vectors, expecting %d",
- hw->intr.num_intrs, intr_handle->nb_efd + 1);
- rte_free(intr_handle->intr_vec);
- intr_handle->intr_vec = NULL;
+ hw->intr.num_intrs,
+ rte_intr_nb_efd_get(intr_handle) + 1);
+ rte_intr_vec_list_free(intr_handle);
rte_intr_efd_disable(intr_handle);
return -1;
}
for (i = 0; i < dev->data->nb_rx_queues; i++)
- intr_handle->intr_vec[i] = i + 1;
+ if (rte_intr_vec_list_index_set(intr_handle, i, i + 1))
+ return -rte_errno;
for (i = 0; i < hw->intr.num_intrs; i++)
hw->intr.mod_levels[i] = UPT1_IML_ADAPTIVE;
@@ -801,7 +800,9 @@ vmxnet3_setup_driver_shared(struct rte_eth_dev *dev)
if (hw->intr.lsc_only)
tqd->conf.intrIdx = 1;
else
- tqd->conf.intrIdx = intr_handle->intr_vec[i];
+ tqd->conf.intrIdx =
+ rte_intr_vec_list_index_get(intr_handle,
+ i);
tqd->status.stopped = TRUE;
tqd->status.error = 0;
memset(&tqd->stats, 0, sizeof(tqd->stats));
@@ -824,7 +825,9 @@ vmxnet3_setup_driver_shared(struct rte_eth_dev *dev)
if (hw->intr.lsc_only)
rqd->conf.intrIdx = 1;
else
- rqd->conf.intrIdx = intr_handle->intr_vec[i];
+ rqd->conf.intrIdx =
+ rte_intr_vec_list_index_get(intr_handle,
+ i);
rqd->status.stopped = TRUE;
rqd->status.error = 0;
memset(&rqd->stats, 0, sizeof(rqd->stats));
@@ -1021,10 +1024,7 @@ vmxnet3_dev_stop(struct rte_eth_dev *dev)
/* Clean datapath event and queue/vector mapping */
rte_intr_efd_disable(intr_handle);
- if (intr_handle->intr_vec != NULL) {
- rte_free(intr_handle->intr_vec);
- intr_handle->intr_vec = NULL;
- }
+ rte_intr_vec_list_free(intr_handle);
/* quiesce the device first */
VMXNET3_WRITE_BAR1_REG(hw, VMXNET3_REG_CMD, VMXNET3_CMD_QUIESCE_DEV);
@@ -1670,7 +1670,9 @@ vmxnet3_dev_rx_queue_intr_enable(struct rte_eth_dev *dev, uint16_t queue_id)
{
struct vmxnet3_hw *hw = dev->data->dev_private;
- vmxnet3_enable_intr(hw, dev->intr_handle->intr_vec[queue_id]);
+ vmxnet3_enable_intr(hw,
+ rte_intr_vec_list_index_get(dev->intr_handle,
+ queue_id));
return 0;
}
@@ -1680,7 +1682,8 @@ vmxnet3_dev_rx_queue_intr_disable(struct rte_eth_dev *dev, uint16_t queue_id)
{
struct vmxnet3_hw *hw = dev->data->dev_private;
- vmxnet3_disable_intr(hw, dev->intr_handle->intr_vec[queue_id]);
+ vmxnet3_disable_intr(hw,
+ rte_intr_vec_list_index_get(dev->intr_handle, queue_id));
return 0;
}
diff --git a/drivers/raw/ifpga/ifpga_rawdev.c b/drivers/raw/ifpga/ifpga_rawdev.c
index 76e6a8530b..8d9db585a4 100644
--- a/drivers/raw/ifpga/ifpga_rawdev.c
+++ b/drivers/raw/ifpga/ifpga_rawdev.c
@@ -73,7 +73,7 @@ static pthread_t ifpga_monitor_start_thread;
#define IFPGA_MAX_IRQ 12
/* 0 for FME interrupt, others are reserved for AFU irq */
-static struct rte_intr_handle ifpga_irq_handle[IFPGA_MAX_IRQ];
+static struct rte_intr_handle *ifpga_irq_handle[IFPGA_MAX_IRQ];
static struct ifpga_rawdev *
ifpga_rawdev_allocate(struct rte_rawdev *rawdev);
@@ -1345,17 +1345,22 @@ ifpga_unregister_msix_irq(enum ifpga_irq_type type,
int vec_start, rte_intr_callback_fn handler, void *arg)
{
struct rte_intr_handle *intr_handle;
+ int rc, i;
if (type == IFPGA_FME_IRQ)
- intr_handle = &ifpga_irq_handle[0];
+ intr_handle = ifpga_irq_handle[0];
else if (type == IFPGA_AFU_IRQ)
- intr_handle = &ifpga_irq_handle[vec_start + 1];
+ intr_handle = ifpga_irq_handle[vec_start + 1];
else
return 0;
rte_intr_efd_disable(intr_handle);
- return rte_intr_callback_unregister(intr_handle, handler, arg);
+ rc = rte_intr_callback_unregister(intr_handle, handler, arg);
+
+ for (i = 0; i < IFPGA_MAX_IRQ; i++)
+ rte_intr_instance_free(ifpga_irq_handle[i]);
+ return rc;
}
int
@@ -1369,6 +1374,14 @@ ifpga_register_msix_irq(struct rte_rawdev *dev, int port_id,
struct opae_adapter *adapter;
struct opae_manager *mgr;
struct opae_accelerator *acc;
+ int *intr_efds = NULL, nb_intr, i;
+
+ for (i = 0; i < IFPGA_MAX_IRQ; i++) {
+ ifpga_irq_handle[i] =
+ rte_intr_instance_alloc(RTE_INTR_INSTANCE_F_PRIVATE);
+ if (ifpga_irq_handle[i] == NULL)
+ return -ENOMEM;
+ }
adapter = ifpga_rawdev_get_priv(dev);
if (!adapter)
@@ -1379,29 +1392,33 @@ ifpga_register_msix_irq(struct rte_rawdev *dev, int port_id,
return -ENODEV;
if (type == IFPGA_FME_IRQ) {
- intr_handle = &ifpga_irq_handle[0];
+ intr_handle = ifpga_irq_handle[0];
count = 1;
} else if (type == IFPGA_AFU_IRQ) {
- intr_handle = &ifpga_irq_handle[vec_start + 1];
+ intr_handle = ifpga_irq_handle[vec_start + 1];
} else {
return -EINVAL;
}
- intr_handle->type = RTE_INTR_HANDLE_VFIO_MSIX;
+ if (rte_intr_type_set(intr_handle, RTE_INTR_HANDLE_VFIO_MSIX))
+ return -rte_errno;
ret = rte_intr_efd_enable(intr_handle, count);
if (ret)
return -ENODEV;
- intr_handle->fd = intr_handle->efds[0];
+ if (rte_intr_fd_set(intr_handle,
+ rte_intr_efds_index_get(intr_handle, 0)))
+ return -rte_errno;
IFPGA_RAWDEV_PMD_DEBUG("register %s irq, vfio_fd=%d, fd=%d\n",
- name, intr_handle->vfio_dev_fd,
- intr_handle->fd);
+ name, rte_intr_dev_fd_get(intr_handle),
+ rte_intr_fd_get(intr_handle));
if (type == IFPGA_FME_IRQ) {
struct fpga_fme_err_irq_set err_irq_set;
- err_irq_set.evtfd = intr_handle->efds[0];
+ err_irq_set.evtfd = rte_intr_efds_index_get(intr_handle,
+ 0);
ret = opae_manager_ifpga_set_err_irq(mgr, &err_irq_set);
if (ret)
@@ -1411,20 +1428,33 @@ ifpga_register_msix_irq(struct rte_rawdev *dev, int port_id,
if (!acc)
return -EINVAL;
- ret = opae_acc_set_irq(acc, vec_start, count,
- intr_handle->efds);
- if (ret)
+ nb_intr = rte_intr_nb_intr_get(intr_handle);
+
+ intr_efds = calloc(nb_intr, sizeof(int));
+ if (!intr_efds)
+ return -ENOMEM;
+
+ for (i = 0; i < nb_intr; i++)
+ intr_efds[i] = rte_intr_efds_index_get(intr_handle, i);
+
+ ret = opae_acc_set_irq(acc, vec_start, count, intr_efds);
+ if (ret) {
+ free(intr_efds);
return -EINVAL;
+ }
}
/* register interrupt handler using DPDK API */
ret = rte_intr_callback_register(intr_handle,
handler, (void *)arg);
- if (ret)
+ if (ret) {
+ free(intr_efds);
return -EINVAL;
+ }
IFPGA_RAWDEV_PMD_INFO("success register %s interrupt\n", name);
+ free(intr_efds);
return 0;
}
@@ -1491,7 +1521,7 @@ ifpga_rawdev_create(struct rte_pci_device *pci_dev,
data->bus = pci_dev->addr.bus;
data->devid = pci_dev->addr.devid;
data->function = pci_dev->addr.function;
- data->vfio_dev_fd = pci_dev->intr_handle.vfio_dev_fd;
+ data->vfio_dev_fd = rte_intr_dev_fd_get(pci_dev->intr_handle);
adapter = rawdev->dev_private;
/* create a opae_adapter based on above device data */
diff --git a/drivers/raw/ntb/ntb.c b/drivers/raw/ntb/ntb.c
index 78cfcd79f7..46ac02e5ab 100644
--- a/drivers/raw/ntb/ntb.c
+++ b/drivers/raw/ntb/ntb.c
@@ -1044,13 +1044,10 @@ ntb_dev_close(struct rte_rawdev *dev)
ntb_queue_release(dev, i);
hw->queue_pairs = 0;
- intr_handle = &hw->pci_dev->intr_handle;
+ intr_handle = hw->pci_dev->intr_handle;
/* Clean datapath event and vec mapping */
rte_intr_efd_disable(intr_handle);
- if (intr_handle->intr_vec) {
- rte_free(intr_handle->intr_vec);
- intr_handle->intr_vec = NULL;
- }
+ rte_intr_vec_list_free(intr_handle);
/* Disable uio intr before callback unregister */
rte_intr_disable(intr_handle);
@@ -1402,7 +1399,7 @@ ntb_init_hw(struct rte_rawdev *dev, struct rte_pci_device *pci_dev)
/* Init doorbell. */
hw->db_valid_mask = RTE_LEN2MASK(hw->db_cnt, uint64_t);
- intr_handle = &pci_dev->intr_handle;
+ intr_handle = pci_dev->intr_handle;
/* Register callback func to eal lib */
rte_intr_callback_register(intr_handle,
ntb_dev_intr_handler, dev);
diff --git a/drivers/regex/octeontx2/otx2_regexdev_hw_access.c b/drivers/regex/octeontx2/otx2_regexdev_hw_access.c
index 620d5c9122..f8031d0f72 100644
--- a/drivers/regex/octeontx2/otx2_regexdev_hw_access.c
+++ b/drivers/regex/octeontx2/otx2_regexdev_hw_access.c
@@ -31,7 +31,7 @@ ree_lf_err_intr_unregister(const struct rte_regexdev *dev, uint16_t msix_off,
uintptr_t base)
{
struct rte_pci_device *pci_dev = RTE_DEV_TO_PCI(dev->device);
- struct rte_intr_handle *handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *handle = pci_dev->intr_handle;
/* Disable error interrupts */
otx2_write64(~0ull, base + OTX2_REE_LF_MISC_INT_ENA_W1C);
@@ -61,7 +61,7 @@ ree_lf_err_intr_register(const struct rte_regexdev *dev, uint16_t msix_off,
uintptr_t base)
{
struct rte_pci_device *pci_dev = RTE_DEV_TO_PCI(dev->device);
- struct rte_intr_handle *handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *handle = pci_dev->intr_handle;
int ret;
/* Disable error interrupts */
diff --git a/drivers/vdpa/ifc/ifcvf_vdpa.c b/drivers/vdpa/ifc/ifcvf_vdpa.c
index 365da2a8b9..dd5251d382 100644
--- a/drivers/vdpa/ifc/ifcvf_vdpa.c
+++ b/drivers/vdpa/ifc/ifcvf_vdpa.c
@@ -162,7 +162,7 @@ ifcvf_vfio_setup(struct ifcvf_internal *internal)
if (rte_pci_map_device(dev))
goto err;
- internal->vfio_dev_fd = dev->intr_handle.vfio_dev_fd;
+ internal->vfio_dev_fd = rte_intr_dev_fd_get(dev->intr_handle);
for (i = 0; i < RTE_MIN(PCI_MAX_RESOURCE, IFCVF_PCI_MAX_RESOURCE);
i++) {
@@ -365,7 +365,8 @@ vdpa_enable_vfio_intr(struct ifcvf_internal *internal, bool m_rx)
irq_set->index = VFIO_PCI_MSIX_IRQ_INDEX;
irq_set->start = 0;
fd_ptr = (int *)&irq_set->data;
- fd_ptr[RTE_INTR_VEC_ZERO_OFFSET] = internal->pdev->intr_handle.fd;
+ fd_ptr[RTE_INTR_VEC_ZERO_OFFSET] =
+ rte_intr_fd_get(internal->pdev->intr_handle);
for (i = 0; i < nr_vring; i++)
internal->intr_fd[i] = -1;
diff --git a/drivers/vdpa/mlx5/mlx5_vdpa.c b/drivers/vdpa/mlx5/mlx5_vdpa.c
index 9a6f64797b..b9e84dd9bf 100644
--- a/drivers/vdpa/mlx5/mlx5_vdpa.c
+++ b/drivers/vdpa/mlx5/mlx5_vdpa.c
@@ -543,6 +543,12 @@ mlx5_vdpa_dev_probe(struct mlx5_common_device *cdev)
DRV_LOG(ERR, "Failed to allocate VAR %u.", errno);
goto error;
}
+ priv->err_intr_handle =
+ rte_intr_instance_alloc(RTE_INTR_INSTANCE_F_SHARED);
+ if (priv->err_intr_handle == NULL) {
+ DRV_LOG(ERR, "Fail to allocate intr_handle");
+ goto error;
+ }
priv->vdev = rte_vdpa_register_device(cdev->dev, &mlx5_vdpa_ops);
if (priv->vdev == NULL) {
DRV_LOG(ERR, "Failed to register vDPA device.");
@@ -561,6 +567,7 @@ mlx5_vdpa_dev_probe(struct mlx5_common_device *cdev)
if (priv) {
if (priv->var)
mlx5_glue->dv_free_var(priv->var);
+ rte_intr_instance_free(priv->err_intr_handle);
rte_free(priv);
}
return -rte_errno;
@@ -592,6 +599,7 @@ mlx5_vdpa_dev_remove(struct mlx5_common_device *cdev)
if (priv->vdev)
rte_vdpa_unregister_device(priv->vdev);
pthread_mutex_destroy(&priv->vq_config_lock);
+ rte_intr_instance_free(priv->err_intr_handle);
rte_free(priv);
}
return 0;
diff --git a/drivers/vdpa/mlx5/mlx5_vdpa.h b/drivers/vdpa/mlx5/mlx5_vdpa.h
index 5045fea773..cf4f384fa4 100644
--- a/drivers/vdpa/mlx5/mlx5_vdpa.h
+++ b/drivers/vdpa/mlx5/mlx5_vdpa.h
@@ -89,7 +89,7 @@ struct mlx5_vdpa_virtq {
void *buf;
uint32_t size;
} umems[3];
- struct rte_intr_handle intr_handle;
+ struct rte_intr_handle *intr_handle;
uint64_t err_time[3]; /* RDTSC time of recent errors. */
uint32_t n_retry;
struct mlx5_devx_virtio_q_couners_attr reset;
@@ -137,7 +137,7 @@ struct mlx5_vdpa_priv {
struct mlx5dv_devx_event_channel *eventc;
struct mlx5dv_devx_event_channel *err_chnl;
struct mlx5dv_devx_uar *uar;
- struct rte_intr_handle err_intr_handle;
+ struct rte_intr_handle *err_intr_handle;
struct mlx5_devx_obj *td;
struct mlx5_devx_obj *tiss[16]; /* TIS list for each LAG port. */
uint16_t nr_virtqs;
diff --git a/drivers/vdpa/mlx5/mlx5_vdpa_event.c b/drivers/vdpa/mlx5/mlx5_vdpa_event.c
index 19497597e6..042d22777f 100644
--- a/drivers/vdpa/mlx5/mlx5_vdpa_event.c
+++ b/drivers/vdpa/mlx5/mlx5_vdpa_event.c
@@ -411,12 +411,17 @@ mlx5_vdpa_err_event_setup(struct mlx5_vdpa_priv *priv)
DRV_LOG(ERR, "Failed to change device event channel FD.");
goto error;
}
- priv->err_intr_handle.fd = priv->err_chnl->fd;
- priv->err_intr_handle.type = RTE_INTR_HANDLE_EXT;
- if (rte_intr_callback_register(&priv->err_intr_handle,
+
+ if (rte_intr_fd_set(priv->err_intr_handle, priv->err_chnl->fd))
+ goto error;
+
+ if (rte_intr_type_set(priv->err_intr_handle, RTE_INTR_HANDLE_EXT))
+ goto error;
+
+ if (rte_intr_callback_register(priv->err_intr_handle,
mlx5_vdpa_err_interrupt_handler,
priv)) {
- priv->err_intr_handle.fd = 0;
+ rte_intr_fd_set(priv->err_intr_handle, 0);
DRV_LOG(ERR, "Failed to register error interrupt for device %d.",
priv->vid);
goto error;
@@ -436,20 +441,20 @@ mlx5_vdpa_err_event_unset(struct mlx5_vdpa_priv *priv)
int retries = MLX5_VDPA_INTR_RETRIES;
int ret = -EAGAIN;
- if (!priv->err_intr_handle.fd)
+ if (!rte_intr_fd_get(priv->err_intr_handle))
return;
while (retries-- && ret == -EAGAIN) {
- ret = rte_intr_callback_unregister(&priv->err_intr_handle,
+ ret = rte_intr_callback_unregister(priv->err_intr_handle,
mlx5_vdpa_err_interrupt_handler,
priv);
if (ret == -EAGAIN) {
DRV_LOG(DEBUG, "Try again to unregister fd %d "
"of error interrupt, retries = %d.",
- priv->err_intr_handle.fd, retries);
+ rte_intr_fd_get(priv->err_intr_handle),
+ retries);
rte_pause();
}
}
- memset(&priv->err_intr_handle, 0, sizeof(priv->err_intr_handle));
if (priv->err_chnl) {
#ifdef HAVE_IBV_DEVX_EVENT
union {
diff --git a/drivers/vdpa/mlx5/mlx5_vdpa_virtq.c b/drivers/vdpa/mlx5/mlx5_vdpa_virtq.c
index c5b357a83b..cb37ba097c 100644
--- a/drivers/vdpa/mlx5/mlx5_vdpa_virtq.c
+++ b/drivers/vdpa/mlx5/mlx5_vdpa_virtq.c
@@ -25,7 +25,8 @@ mlx5_vdpa_virtq_handler(void *cb_arg)
int nbytes;
do {
- nbytes = read(virtq->intr_handle.fd, &buf, 8);
+ nbytes = read(rte_intr_fd_get(virtq->intr_handle), &buf,
+ 8);
if (nbytes < 0) {
if (errno == EINTR ||
errno == EWOULDBLOCK ||
@@ -58,21 +59,23 @@ mlx5_vdpa_virtq_unset(struct mlx5_vdpa_virtq *virtq)
int retries = MLX5_VDPA_INTR_RETRIES;
int ret = -EAGAIN;
- if (virtq->intr_handle.fd != -1) {
+ if (rte_intr_fd_get(virtq->intr_handle) != -1) {
while (retries-- && ret == -EAGAIN) {
- ret = rte_intr_callback_unregister(&virtq->intr_handle,
+ ret = rte_intr_callback_unregister(virtq->intr_handle,
mlx5_vdpa_virtq_handler,
virtq);
if (ret == -EAGAIN) {
DRV_LOG(DEBUG, "Try again to unregister fd %d "
- "of virtq %d interrupt, retries = %d.",
- virtq->intr_handle.fd,
- (int)virtq->index, retries);
+ "of virtq %d interrupt, retries = %d.",
+ rte_intr_fd_get(virtq->intr_handle),
+ (int)virtq->index, retries);
+
usleep(MLX5_VDPA_INTR_RETRIES_USEC);
}
}
- virtq->intr_handle.fd = -1;
+ rte_intr_fd_set(virtq->intr_handle, -1);
}
+ rte_intr_instance_free(virtq->intr_handle);
if (virtq->virtq) {
ret = mlx5_vdpa_virtq_stop(virtq->priv, virtq->index);
if (ret)
@@ -337,21 +340,33 @@ mlx5_vdpa_virtq_setup(struct mlx5_vdpa_priv *priv, int index)
virtq->priv = priv;
rte_write32(virtq->index, priv->virtq_db_addr);
/* Setup doorbell mapping. */
- virtq->intr_handle.fd = vq.kickfd;
- if (virtq->intr_handle.fd == -1) {
+ virtq->intr_handle =
+ rte_intr_instance_alloc(RTE_INTR_INSTANCE_F_SHARED);
+ if (virtq->intr_handle == NULL) {
+ DRV_LOG(ERR, "Fail to allocate intr_handle");
+ goto error;
+ }
+
+ if (rte_intr_fd_set(virtq->intr_handle, vq.kickfd))
+ goto error;
+
+ if (rte_intr_fd_get(virtq->intr_handle) == -1) {
DRV_LOG(WARNING, "Virtq %d kickfd is invalid.", index);
} else {
- virtq->intr_handle.type = RTE_INTR_HANDLE_EXT;
- if (rte_intr_callback_register(&virtq->intr_handle,
+ if (rte_intr_type_set(virtq->intr_handle, RTE_INTR_HANDLE_EXT))
+ goto error;
+
+ if (rte_intr_callback_register(virtq->intr_handle,
mlx5_vdpa_virtq_handler,
virtq)) {
- virtq->intr_handle.fd = -1;
+ rte_intr_fd_set(virtq->intr_handle, -1);
DRV_LOG(ERR, "Failed to register virtq %d interrupt.",
index);
goto error;
} else {
DRV_LOG(DEBUG, "Register fd %d interrupt for virtq %d.",
- virtq->intr_handle.fd, index);
+ rte_intr_fd_get(virtq->intr_handle),
+ index);
}
}
/* Subscribe virtq error event. */
@@ -506,7 +521,8 @@ mlx5_vdpa_virtq_is_modified(struct mlx5_vdpa_priv *priv,
if (ret)
return -1;
- if (vq.size != virtq->vq_size || vq.kickfd != virtq->intr_handle.fd)
+ if (vq.size != virtq->vq_size || vq.kickfd !=
+ rte_intr_fd_get(virtq->intr_handle))
return 1;
if (virtq->eqp.cq.cq_obj.cq) {
if (vq.callfd != virtq->eqp.cq.callfd)
diff --git a/lib/ethdev/ethdev_pci.h b/lib/ethdev/ethdev_pci.h
index 59c5d7b40f..71aa4b2e98 100644
--- a/lib/ethdev/ethdev_pci.h
+++ b/lib/ethdev/ethdev_pci.h
@@ -32,7 +32,7 @@ rte_eth_copy_pci_info(struct rte_eth_dev *eth_dev,
return;
}
- eth_dev->intr_handle = &pci_dev->intr_handle;
+ eth_dev->intr_handle = pci_dev->intr_handle;
if (rte_eal_process_type() == RTE_PROC_PRIMARY) {
eth_dev->data->dev_flags = 0;
--
2.23.0
^ permalink raw reply [flat|nested] 152+ messages in thread
* [dpdk-dev] [PATCH v6 7/9] interrupts: make interrupt handle structure opaque
2021-10-24 20:04 ` [dpdk-dev] [PATCH v6 0/9] make rte_intr_handle internal David Marchand
` (5 preceding siblings ...)
2021-10-24 20:04 ` [dpdk-dev] [PATCH v6 6/9] drivers: " David Marchand
@ 2021-10-24 20:04 ` David Marchand
2021-10-24 20:04 ` [dpdk-dev] [PATCH v6 8/9] interrupts: rename device specific file descriptor David Marchand
2021-10-24 20:04 ` [dpdk-dev] [PATCH v6 9/9] interrupts: extend event list David Marchand
8 siblings, 0 replies; 152+ messages in thread
From: David Marchand @ 2021-10-24 20:04 UTC (permalink / raw)
To: hkalra, dev; +Cc: dmitry.kozliuk
From: Harman Kalra <hkalra@marvell.com>
Moving interrupt handle structure definition inside an EAL private
header to make its fields totally opaque to the outside world.
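As an illustration only (not part of this patch), a minimal sketch of the
driver-side pattern once the definition is private, built from the accessor
API introduced earlier in this series; example_intr_setup, eventfd, handler
and arg are made-up names used purely for the sketch:

	#include <rte_errno.h>
	#include <rte_interrupts.h>

	/* hypothetical helper: allocate and arm an interrupt instance */
	static int
	example_intr_setup(int eventfd, rte_intr_callback_fn handler, void *arg)
	{
		struct rte_intr_handle *intr_handle;

		/* allocate an instance instead of embedding the struct */
		intr_handle = rte_intr_instance_alloc(RTE_INTR_INSTANCE_F_PRIVATE);
		if (intr_handle == NULL)
			return -ENOMEM;

		/* fields are reachable only through the get/set helpers */
		if (rte_intr_type_set(intr_handle, RTE_INTR_HANDLE_EXT) ||
		    rte_intr_fd_set(intr_handle, eventfd) ||
		    rte_intr_callback_register(intr_handle, handler, arg)) {
			rte_intr_instance_free(intr_handle);
			return -rte_errno;
		}
		return 0;
	}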
Signed-off-by: Harman Kalra <hkalra@marvell.com>
Signed-off-by: David Marchand <david.marchand@redhat.com>
---
Changes since v5:
- left rte_intr_handle fields untouched:
- split vfio / uio fd renames in a separate commit,
- split event list update in a separate commit,
- moved rte_intr_handle definition to an EAL private header,
- preserved dumping all info in interrupt tracepoints,
---
lib/eal/common/eal_common_interrupts.c | 2 +
lib/eal/common/eal_interrupts.h | 37 +++++++++++++
lib/eal/include/meson.build | 1 -
lib/eal/include/rte_eal_interrupts.h | 72 --------------------------
lib/eal/include/rte_eal_trace.h | 2 +
lib/eal/include/rte_interrupts.h | 24 ++++++++-
6 files changed, 63 insertions(+), 75 deletions(-)
create mode 100644 lib/eal/common/eal_interrupts.h
delete mode 100644 lib/eal/include/rte_eal_interrupts.h
diff --git a/lib/eal/common/eal_common_interrupts.c b/lib/eal/common/eal_common_interrupts.c
index d6e6654fbb..1337c560e4 100644
--- a/lib/eal/common/eal_common_interrupts.c
+++ b/lib/eal/common/eal_common_interrupts.c
@@ -10,6 +10,8 @@
#include <rte_log.h>
#include <rte_malloc.h>
+#include "eal_interrupts.h"
+
/* Macros to check for valid interrupt handle */
#define CHECK_VALID_INTR_HANDLE(intr_handle) do { \
if (intr_handle == NULL) { \
diff --git a/lib/eal/common/eal_interrupts.h b/lib/eal/common/eal_interrupts.h
new file mode 100644
index 0000000000..beacc04b62
--- /dev/null
+++ b/lib/eal/common/eal_interrupts.h
@@ -0,0 +1,37 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2010-2014 Intel Corporation
+ */
+
+#ifndef EAL_INTERRUPTS_H
+#define EAL_INTERRUPTS_H
+
+struct rte_intr_handle {
+ RTE_STD_C11
+ union {
+ struct {
+ RTE_STD_C11
+ union {
+ /** VFIO device file descriptor */
+ int vfio_dev_fd;
+ /** UIO cfg file desc for uio_pci_generic */
+ int uio_cfg_fd;
+ };
+ int fd; /**< interrupt event file descriptor */
+ };
+ void *windows_handle; /**< device driver handle */
+ };
+ uint32_t alloc_flags; /**< flags passed at allocation */
+ enum rte_intr_handle_type type; /**< handle type */
+ uint32_t max_intr; /**< max interrupt requested */
+ uint32_t nb_efd; /**< number of available efd(event fd) */
+ uint8_t efd_counter_size; /**< size of efd counter, used for vdev */
+ uint16_t nb_intr;
+ /**< Max vector count, default RTE_MAX_RXTX_INTR_VEC_ID */
+ int efds[RTE_MAX_RXTX_INTR_VEC_ID]; /**< intr vectors/efds mapping */
+ struct rte_epoll_event elist[RTE_MAX_RXTX_INTR_VEC_ID];
+ /**< intr vector epoll event */
+ uint16_t vec_list_size;
+ int *intr_vec; /**< intr vector number array */
+};
+
+#endif /* EAL_INTERRUPTS_H */
diff --git a/lib/eal/include/meson.build b/lib/eal/include/meson.build
index 8e258607b8..86468d1a2b 100644
--- a/lib/eal/include/meson.build
+++ b/lib/eal/include/meson.build
@@ -49,7 +49,6 @@ headers += files(
'rte_version.h',
'rte_vfio.h',
)
-indirect_headers += files('rte_eal_interrupts.h')
# special case install the generic headers, since they go in a subdir
generic_headers = files(
diff --git a/lib/eal/include/rte_eal_interrupts.h b/lib/eal/include/rte_eal_interrupts.h
deleted file mode 100644
index 60bb60ca59..0000000000
--- a/lib/eal/include/rte_eal_interrupts.h
+++ /dev/null
@@ -1,72 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(c) 2010-2014 Intel Corporation
- */
-
-#ifndef _RTE_INTERRUPTS_H_
-#error "don't include this file directly, please include generic <rte_interrupts.h>"
-#endif
-
-/**
- * @file rte_eal_interrupts.h
- * @internal
- *
- * Contains function prototypes exposed by the EAL for interrupt handling by
- * drivers and other DPDK internal consumers.
- */
-
-#ifndef _RTE_EAL_INTERRUPTS_H_
-#define _RTE_EAL_INTERRUPTS_H_
-
-#define RTE_MAX_RXTX_INTR_VEC_ID 512
-#define RTE_INTR_VEC_ZERO_OFFSET 0
-#define RTE_INTR_VEC_RXTX_OFFSET 1
-
-/**
- * The interrupt source type, e.g. UIO, VFIO, ALARM etc.
- */
-enum rte_intr_handle_type {
- RTE_INTR_HANDLE_UNKNOWN = 0, /**< generic unknown handle */
- RTE_INTR_HANDLE_UIO, /**< uio device handle */
- RTE_INTR_HANDLE_UIO_INTX, /**< uio generic handle */
- RTE_INTR_HANDLE_VFIO_LEGACY, /**< vfio device handle (legacy) */
- RTE_INTR_HANDLE_VFIO_MSI, /**< vfio device handle (MSI) */
- RTE_INTR_HANDLE_VFIO_MSIX, /**< vfio device handle (MSIX) */
- RTE_INTR_HANDLE_ALARM, /**< alarm handle */
- RTE_INTR_HANDLE_EXT, /**< external handler */
- RTE_INTR_HANDLE_VDEV, /**< virtual device */
- RTE_INTR_HANDLE_DEV_EVENT, /**< device event handle */
- RTE_INTR_HANDLE_VFIO_REQ, /**< VFIO request handle */
- RTE_INTR_HANDLE_MAX /**< count of elements */
-};
-
-/** Handle for interrupts. */
-struct rte_intr_handle {
- RTE_STD_C11
- union {
- struct {
- RTE_STD_C11
- union {
- /** VFIO device file descriptor */
- int vfio_dev_fd;
- /** UIO cfg file desc for uio_pci_generic */
- int uio_cfg_fd;
- };
- int fd; /**< interrupt event file descriptor */
- };
- void *windows_handle; /**< device driver handle */
- };
- uint32_t alloc_flags; /**< flags passed at allocation */
- enum rte_intr_handle_type type; /**< handle type */
- uint32_t max_intr; /**< max interrupt requested */
- uint32_t nb_efd; /**< number of available efd(event fd) */
- uint8_t efd_counter_size; /**< size of efd counter, used for vdev */
- uint16_t nb_intr;
- /**< Max vector count, default RTE_MAX_RXTX_INTR_VEC_ID */
- int efds[RTE_MAX_RXTX_INTR_VEC_ID]; /**< intr vectors/efds mapping */
- struct rte_epoll_event elist[RTE_MAX_RXTX_INTR_VEC_ID];
- /**< intr vector epoll event */
- uint16_t vec_list_size;
- int *intr_vec; /**< intr vector number array */
-};
-
-#endif /* _RTE_EAL_INTERRUPTS_H_ */
diff --git a/lib/eal/include/rte_eal_trace.h b/lib/eal/include/rte_eal_trace.h
index 495ae1ee1d..af7b2d0bf0 100644
--- a/lib/eal/include/rte_eal_trace.h
+++ b/lib/eal/include/rte_eal_trace.h
@@ -19,6 +19,8 @@ extern "C" {
#include <rte_interrupts.h>
#include <rte_trace_point.h>
+#include "eal_interrupts.h"
+
/* Alarm */
RTE_TRACE_POINT(
rte_eal_trace_alarm_set,
diff --git a/lib/eal/include/rte_interrupts.h b/lib/eal/include/rte_interrupts.h
index a515a8c073..edbf0faeef 100644
--- a/lib/eal/include/rte_interrupts.h
+++ b/lib/eal/include/rte_interrupts.h
@@ -35,6 +35,28 @@ struct rte_intr_handle;
/** Interrupt instance will be shared between primary and secondary processes. */
#define RTE_INTR_INSTANCE_F_SHARED RTE_BIT32(0)
+#define RTE_MAX_RXTX_INTR_VEC_ID 512
+#define RTE_INTR_VEC_ZERO_OFFSET 0
+#define RTE_INTR_VEC_RXTX_OFFSET 1
+
+/**
+ * The interrupt source type, e.g. UIO, VFIO, ALARM etc.
+ */
+enum rte_intr_handle_type {
+ RTE_INTR_HANDLE_UNKNOWN = 0, /**< generic unknown handle */
+ RTE_INTR_HANDLE_UIO, /**< uio device handle */
+ RTE_INTR_HANDLE_UIO_INTX, /**< uio generic handle */
+ RTE_INTR_HANDLE_VFIO_LEGACY, /**< vfio device handle (legacy) */
+ RTE_INTR_HANDLE_VFIO_MSI, /**< vfio device handle (MSI) */
+ RTE_INTR_HANDLE_VFIO_MSIX, /**< vfio device handle (MSIX) */
+ RTE_INTR_HANDLE_ALARM, /**< alarm handle */
+ RTE_INTR_HANDLE_EXT, /**< external handler */
+ RTE_INTR_HANDLE_VDEV, /**< virtual device */
+ RTE_INTR_HANDLE_DEV_EVENT, /**< device event handle */
+ RTE_INTR_HANDLE_VFIO_REQ, /**< VFIO request handle */
+ RTE_INTR_HANDLE_MAX /**< count of elements */
+};
+
/** Function to be registered for the specific interrupt */
typedef void (*rte_intr_callback_fn)(void *cb_arg);
@@ -45,8 +67,6 @@ typedef void (*rte_intr_callback_fn)(void *cb_arg);
typedef void (*rte_intr_unregister_callback_fn)(struct rte_intr_handle *intr_handle,
void *cb_arg);
-#include "rte_eal_interrupts.h"
-
/**
* It registers the callback for the specific interrupt. Multiple
* callbacks can be registered at the same time.
--
2.23.0
^ permalink raw reply [flat|nested] 152+ messages in thread
* [dpdk-dev] [PATCH v6 8/9] interrupts: rename device specific file descriptor
2021-10-24 20:04 ` [dpdk-dev] [PATCH v6 0/9] make rte_intr_handle internal David Marchand
` (6 preceding siblings ...)
2021-10-24 20:04 ` [dpdk-dev] [PATCH v6 7/9] interrupts: make interrupt handle structure opaque David Marchand
@ 2021-10-24 20:04 ` David Marchand
2021-10-24 20:04 ` [dpdk-dev] [PATCH v6 9/9] interrupts: extend event list David Marchand
8 siblings, 0 replies; 152+ messages in thread
From: David Marchand @ 2021-10-24 20:04 UTC (permalink / raw)
To: hkalra, dev; +Cc: dmitry.kozliuk
From: Harman Kalra <hkalra@marvell.com>
VFIO/UIO are mutually exclusive, so storing the file descriptor in a
single field is enough.
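Callers are unaffected by the rename since they already go through the dev fd
accessors; a minimal sketch (editor's illustration, hypothetical helper name):

/* Only the internal storage changes (the vfio_dev_fd/uio_cfg_fd union
 * becomes a single dev_fd field); the accessor API stays the same.
 */
static int
store_fds(struct rte_intr_handle *handle, int dev_fd, int event_fd)
{
	if (rte_intr_dev_fd_set(handle, dev_fd) != 0)
		return -1;
	return rte_intr_fd_set(handle, event_fd);
}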
Signed-off-by: Harman Kalra <hkalra@marvell.com>
Signed-off-by: David Marchand <david.marchand@redhat.com>
---
Changes since v5:
- split from patch5,
---
lib/eal/common/eal_common_interrupts.c | 6 +++---
lib/eal/common/eal_interrupts.h | 8 +-------
lib/eal/include/rte_eal_trace.h | 8 ++++----
3 files changed, 8 insertions(+), 14 deletions(-)
diff --git a/lib/eal/common/eal_common_interrupts.c b/lib/eal/common/eal_common_interrupts.c
index 1337c560e4..3285c4335f 100644
--- a/lib/eal/common/eal_common_interrupts.c
+++ b/lib/eal/common/eal_common_interrupts.c
@@ -72,7 +72,7 @@ struct rte_intr_handle *rte_intr_instance_dup(const struct rte_intr_handle *src)
intr_handle = rte_intr_instance_alloc(src->alloc_flags);
intr_handle->fd = src->fd;
- intr_handle->vfio_dev_fd = src->vfio_dev_fd;
+ intr_handle->dev_fd = src->dev_fd;
intr_handle->type = src->type;
intr_handle->max_intr = src->max_intr;
intr_handle->nb_efd = src->nb_efd;
@@ -139,7 +139,7 @@ int rte_intr_dev_fd_set(struct rte_intr_handle *intr_handle, int fd)
{
CHECK_VALID_INTR_HANDLE(intr_handle);
- intr_handle->vfio_dev_fd = fd;
+ intr_handle->dev_fd = fd;
return 0;
fail:
@@ -150,7 +150,7 @@ int rte_intr_dev_fd_get(const struct rte_intr_handle *intr_handle)
{
CHECK_VALID_INTR_HANDLE(intr_handle);
- return intr_handle->vfio_dev_fd;
+ return intr_handle->dev_fd;
fail:
return -1;
}
diff --git a/lib/eal/common/eal_interrupts.h b/lib/eal/common/eal_interrupts.h
index beacc04b62..1a4e5573b2 100644
--- a/lib/eal/common/eal_interrupts.h
+++ b/lib/eal/common/eal_interrupts.h
@@ -9,13 +9,7 @@ struct rte_intr_handle {
RTE_STD_C11
union {
struct {
- RTE_STD_C11
- union {
- /** VFIO device file descriptor */
- int vfio_dev_fd;
- /** UIO cfg file desc for uio_pci_generic */
- int uio_cfg_fd;
- };
+ int dev_fd; /**< VFIO/UIO cfg device file descriptor */
int fd; /**< interrupt event file descriptor */
};
void *windows_handle; /**< device driver handle */
diff --git a/lib/eal/include/rte_eal_trace.h b/lib/eal/include/rte_eal_trace.h
index af7b2d0bf0..5ef4398230 100644
--- a/lib/eal/include/rte_eal_trace.h
+++ b/lib/eal/include/rte_eal_trace.h
@@ -151,7 +151,7 @@ RTE_TRACE_POINT(
RTE_TRACE_POINT_ARGS(const struct rte_intr_handle *handle,
rte_intr_callback_fn cb, void *cb_arg, int rc),
rte_trace_point_emit_int(rc);
- rte_trace_point_emit_int(handle->vfio_dev_fd);
+ rte_trace_point_emit_int(handle->dev_fd);
rte_trace_point_emit_int(handle->fd);
rte_trace_point_emit_int(handle->type);
rte_trace_point_emit_u32(handle->max_intr);
@@ -164,7 +164,7 @@ RTE_TRACE_POINT(
RTE_TRACE_POINT_ARGS(const struct rte_intr_handle *handle,
rte_intr_callback_fn cb, void *cb_arg, int rc),
rte_trace_point_emit_int(rc);
- rte_trace_point_emit_int(handle->vfio_dev_fd);
+ rte_trace_point_emit_int(handle->dev_fd);
rte_trace_point_emit_int(handle->fd);
rte_trace_point_emit_int(handle->type);
rte_trace_point_emit_u32(handle->max_intr);
@@ -176,7 +176,7 @@ RTE_TRACE_POINT(
rte_eal_trace_intr_enable,
RTE_TRACE_POINT_ARGS(const struct rte_intr_handle *handle, int rc),
rte_trace_point_emit_int(rc);
- rte_trace_point_emit_int(handle->vfio_dev_fd);
+ rte_trace_point_emit_int(handle->dev_fd);
rte_trace_point_emit_int(handle->fd);
rte_trace_point_emit_int(handle->type);
rte_trace_point_emit_u32(handle->max_intr);
@@ -186,7 +186,7 @@ RTE_TRACE_POINT(
rte_eal_trace_intr_disable,
RTE_TRACE_POINT_ARGS(const struct rte_intr_handle *handle, int rc),
rte_trace_point_emit_int(rc);
- rte_trace_point_emit_int(handle->vfio_dev_fd);
+ rte_trace_point_emit_int(handle->dev_fd);
rte_trace_point_emit_int(handle->fd);
rte_trace_point_emit_int(handle->type);
rte_trace_point_emit_u32(handle->max_intr);
--
2.23.0
^ permalink raw reply [flat|nested] 152+ messages in thread
* [dpdk-dev] [PATCH v6 9/9] interrupts: extend event list
2021-10-24 20:04 ` [dpdk-dev] [PATCH v6 0/9] make rte_intr_handle internal David Marchand
` (7 preceding siblings ...)
2021-10-24 20:04 ` [dpdk-dev] [PATCH v6 8/9] interrupts: rename device specific file descriptor David Marchand
@ 2021-10-24 20:04 ` David Marchand
2021-10-25 10:49 ` Dmitry Kozlyuk
8 siblings, 1 reply; 152+ messages in thread
From: David Marchand @ 2021-10-24 20:04 UTC (permalink / raw)
To: hkalra, dev
Cc: dmitry.kozliuk, Anatoly Burakov, Nithin Dabilpuram,
Kiran Kumar K, Sunil Kumar Kori, Satha Rao
From: Harman Kalra <hkalra@marvell.com>
Dynamically allocating the efds and elist array os intr_handle
structure, based on size provided by user. Eg size can be
MSIX interrupts supported by a PCI device.
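For illustration (editor's sketch, mirroring the pci_vfio.c hunk below): the
arrays start at the default RTE_MAX_RXTX_INTR_VEC_ID size and are resized once
the device's MSI-X count is known; the probe context here is hypothetical.

static int
resize_intr_lists(struct rte_intr_handle *handle, int msix_count)
{
	/* efds/elist were allocated for RTE_MAX_RXTX_INTR_VEC_ID entries;
	 * reallocate them to match what the device actually supports.
	 */
	if (rte_intr_event_list_update(handle, msix_count) != 0)
		return -1;
	/* Indices 0 .. msix_count - 1 are now valid for the index accessors
	 * such as rte_intr_efds_index_set() and rte_intr_elist_index_get().
	 */
	return 0;
}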
Signed-off-by: Harman Kalra <hkalra@marvell.com>
Signed-off-by: David Marchand <david.marchand@redhat.com>
---
Changes since v5:
- split from patch5,
---
drivers/bus/pci/linux/pci_vfio.c | 6 ++
drivers/common/cnxk/roc_platform.h | 1 +
lib/eal/common/eal_common_interrupts.c | 119 ++++++++++++++++++++++++-
lib/eal/common/eal_interrupts.h | 5 +-
4 files changed, 126 insertions(+), 5 deletions(-)
diff --git a/drivers/bus/pci/linux/pci_vfio.c b/drivers/bus/pci/linux/pci_vfio.c
index 7b2f8296c5..f622e7f8e6 100644
--- a/drivers/bus/pci/linux/pci_vfio.c
+++ b/drivers/bus/pci/linux/pci_vfio.c
@@ -266,6 +266,12 @@ pci_vfio_setup_interrupts(struct rte_pci_device *dev, int vfio_dev_fd)
return -1;
}
+ /* Reallocate the efds and elist fields of intr_handle based
+ * on PCI device MSIX size.
+ */
+ if (rte_intr_event_list_update(dev->intr_handle, irq.count))
+ return -1;
+
/* if this vector cannot be used with eventfd, fail if we explicitly
* specified interrupt type, otherwise continue */
if ((irq.flags & VFIO_IRQ_INFO_EVENTFD) == 0) {
diff --git a/drivers/common/cnxk/roc_platform.h b/drivers/common/cnxk/roc_platform.h
index 60227b72d0..5da23fe5f8 100644
--- a/drivers/common/cnxk/roc_platform.h
+++ b/drivers/common/cnxk/roc_platform.h
@@ -121,6 +121,7 @@
#define plt_intr_instance_alloc rte_intr_instance_alloc
#define plt_intr_instance_dup rte_intr_instance_dup
#define plt_intr_instance_free rte_intr_instance_free
+#define plt_intr_event_list_update rte_intr_event_list_update
#define plt_intr_max_intr_get rte_intr_max_intr_get
#define plt_intr_max_intr_set rte_intr_max_intr_set
#define plt_intr_nb_efd_get rte_intr_nb_efd_get
diff --git a/lib/eal/common/eal_common_interrupts.c b/lib/eal/common/eal_common_interrupts.c
index 3285c4335f..7feb9da8fa 100644
--- a/lib/eal/common/eal_common_interrupts.c
+++ b/lib/eal/common/eal_common_interrupts.c
@@ -53,10 +53,46 @@ struct rte_intr_handle *rte_intr_instance_alloc(uint32_t flags)
return NULL;
}
+ if (uses_rte_memory) {
+ intr_handle->efds = rte_zmalloc(NULL,
+ RTE_MAX_RXTX_INTR_VEC_ID * sizeof(int), 0);
+ } else {
+ intr_handle->efds = calloc(RTE_MAX_RXTX_INTR_VEC_ID,
+ sizeof(int));
+ }
+ if (intr_handle->efds == NULL) {
+ RTE_LOG(ERR, EAL, "Fail to allocate event fd list\n");
+ rte_errno = ENOMEM;
+ goto fail;
+ }
+
+ if (uses_rte_memory) {
+ intr_handle->elist = rte_zmalloc(NULL,
+ RTE_MAX_RXTX_INTR_VEC_ID * sizeof(struct rte_epoll_event),
+ 0);
+ } else {
+ intr_handle->elist = calloc(RTE_MAX_RXTX_INTR_VEC_ID,
+ sizeof(struct rte_epoll_event));
+ }
+ if (intr_handle->elist == NULL) {
+ RTE_LOG(ERR, EAL, "fail to allocate event fd list\n");
+ rte_errno = ENOMEM;
+ goto fail;
+ }
+
intr_handle->alloc_flags = flags;
intr_handle->nb_intr = RTE_MAX_RXTX_INTR_VEC_ID;
return intr_handle;
+fail:
+ if (uses_rte_memory) {
+ rte_free(intr_handle->efds);
+ rte_free(intr_handle);
+ } else {
+ free(intr_handle->efds);
+ free(intr_handle);
+ }
+ return NULL;
}
struct rte_intr_handle *rte_intr_instance_dup(const struct rte_intr_handle *src)
@@ -83,14 +119,69 @@ struct rte_intr_handle *rte_intr_instance_dup(const struct rte_intr_handle *src)
return intr_handle;
}
+int rte_intr_event_list_update(struct rte_intr_handle *intr_handle, int size)
+{
+ struct rte_epoll_event *tmp_elist;
+ bool uses_rte_memory;
+ int *tmp_efds;
+
+ CHECK_VALID_INTR_HANDLE(intr_handle);
+
+ if (size == 0) {
+ RTE_LOG(ERR, EAL, "Size can't be zero\n");
+ rte_errno = EINVAL;
+ goto fail;
+ }
+
+ uses_rte_memory =
+ RTE_INTR_INSTANCE_USES_RTE_MEMORY(intr_handle->alloc_flags);
+ if (uses_rte_memory) {
+ tmp_efds = rte_realloc(intr_handle->efds, size * sizeof(int),
+ 0);
+ } else {
+ tmp_efds = realloc(intr_handle->efds, size * sizeof(int));
+ }
+ if (tmp_efds == NULL) {
+ RTE_LOG(ERR, EAL, "Failed to realloc the efds list\n");
+ rte_errno = ENOMEM;
+ goto fail;
+ }
+ intr_handle->efds = tmp_efds;
+
+ if (uses_rte_memory) {
+ tmp_elist = rte_realloc(intr_handle->elist,
+ size * sizeof(struct rte_epoll_event), 0);
+ } else {
+ tmp_elist = realloc(intr_handle->elist,
+ size * sizeof(struct rte_epoll_event));
+ }
+ if (tmp_elist == NULL) {
+ RTE_LOG(ERR, EAL, "Failed to realloc the event list\n");
+ rte_errno = ENOMEM;
+ goto fail;
+ }
+ intr_handle->elist = tmp_elist;
+
+ intr_handle->nb_intr = size;
+
+ return 0;
+fail:
+ return -rte_errno;
+}
+
void rte_intr_instance_free(struct rte_intr_handle *intr_handle)
{
if (intr_handle == NULL)
return;
- if (RTE_INTR_INSTANCE_USES_RTE_MEMORY(intr_handle->alloc_flags))
+ if (RTE_INTR_INSTANCE_USES_RTE_MEMORY(intr_handle->alloc_flags)) {
+ rte_free(intr_handle->efds);
+ rte_free(intr_handle->elist);
rte_free(intr_handle);
- else
+ } else {
+ free(intr_handle->efds);
+ free(intr_handle->elist);
free(intr_handle);
+ }
}
int rte_intr_fd_set(struct rte_intr_handle *intr_handle, int fd)
@@ -239,6 +330,12 @@ int rte_intr_efds_index_get(const struct rte_intr_handle *intr_handle,
{
CHECK_VALID_INTR_HANDLE(intr_handle);
+ if (intr_handle->efds == NULL) {
+ RTE_LOG(ERR, EAL, "Event fd list not allocated\n");
+ rte_errno = EFAULT;
+ goto fail;
+ }
+
if (index >= intr_handle->nb_intr) {
RTE_LOG(ERR, EAL, "Invalid index %d, max limit %d\n", index,
intr_handle->nb_intr);
@@ -256,6 +353,12 @@ int rte_intr_efds_index_set(struct rte_intr_handle *intr_handle,
{
CHECK_VALID_INTR_HANDLE(intr_handle);
+ if (intr_handle->efds == NULL) {
+ RTE_LOG(ERR, EAL, "Event fd list not allocated\n");
+ rte_errno = EFAULT;
+ goto fail;
+ }
+
if (index >= intr_handle->nb_intr) {
RTE_LOG(ERR, EAL, "Invalid size %d, max limit %d\n", index,
intr_handle->nb_intr);
@@ -275,6 +378,12 @@ struct rte_epoll_event *rte_intr_elist_index_get(
{
CHECK_VALID_INTR_HANDLE(intr_handle);
+ if (intr_handle->elist == NULL) {
+ RTE_LOG(ERR, EAL, "Event list not allocated\n");
+ rte_errno = EFAULT;
+ goto fail;
+ }
+
if (index >= intr_handle->nb_intr) {
RTE_LOG(ERR, EAL, "Invalid index %d, max limit %d\n", index,
intr_handle->nb_intr);
@@ -292,6 +401,12 @@ int rte_intr_elist_index_set(struct rte_intr_handle *intr_handle,
{
CHECK_VALID_INTR_HANDLE(intr_handle);
+ if (intr_handle->elist == NULL) {
+ RTE_LOG(ERR, EAL, "Event list not allocated\n");
+ rte_errno = EFAULT;
+ goto fail;
+ }
+
if (index >= intr_handle->nb_intr) {
RTE_LOG(ERR, EAL, "Invalid index %d, max limit %d\n", index,
intr_handle->nb_intr);
diff --git a/lib/eal/common/eal_interrupts.h b/lib/eal/common/eal_interrupts.h
index 1a4e5573b2..482781b862 100644
--- a/lib/eal/common/eal_interrupts.h
+++ b/lib/eal/common/eal_interrupts.h
@@ -21,9 +21,8 @@ struct rte_intr_handle {
uint8_t efd_counter_size; /**< size of efd counter, used for vdev */
uint16_t nb_intr;
/**< Max vector count, default RTE_MAX_RXTX_INTR_VEC_ID */
- int efds[RTE_MAX_RXTX_INTR_VEC_ID]; /**< intr vectors/efds mapping */
- struct rte_epoll_event elist[RTE_MAX_RXTX_INTR_VEC_ID];
- /**< intr vector epoll event */
+ int *efds; /**< intr vectors/efds mapping */
+ struct rte_epoll_event *elist; /**< intr vector epoll event */
uint16_t vec_list_size;
int *intr_vec; /**< intr vector number array */
};
--
2.23.0
^ permalink raw reply [flat|nested] 152+ messages in thread
* Re: [dpdk-dev] [PATCH v6 2/9] interrupts: remove direct access to interrupt handle
2021-10-24 20:04 ` [dpdk-dev] [PATCH v6 2/9] interrupts: remove direct access to interrupt handle David Marchand
@ 2021-10-25 6:57 ` David Marchand
0 siblings, 0 replies; 152+ messages in thread
From: David Marchand @ 2021-10-25 6:57 UTC (permalink / raw)
To: Harman Kalra, dev; +Cc: Dmitry Kozlyuk, Bruce Richardson
On Sun, Oct 24, 2021 at 10:05 PM David Marchand
<david.marchand@redhat.com> wrote:
> @@ -556,8 +565,8 @@ eal_intr_process_interrupts(struct kevent *events, int nfds)
> * remove intr file descriptor from wait list.
> */
> if (kevent(kq, &ke, 1, NULL, 0, NULL) < 0) {
> - RTE_LOG(ERR, EAL, "Error removing fd %d kevent, "
> - "%s\n", src->intr_handle.fd,
> + RTE_LOG(ERR, EAL, "Error removing fd %d kevent, %s\n"
> + rte_intr_fd_get(src->intr_handle),
> strerror(errno));
> /* removing non-existent even is an expected
> * condition in some circumstances
My fault, missing a ,
--
David Marchand.
^ permalink raw reply [flat|nested] 152+ messages in thread
* Re: [dpdk-dev] [PATCH v6 4/9] alarm: remove direct access to interrupt handle
2021-10-24 20:04 ` [dpdk-dev] [PATCH v6 4/9] alarm: " David Marchand
@ 2021-10-25 10:49 ` Dmitry Kozlyuk
2021-10-25 11:09 ` David Marchand
0 siblings, 1 reply; 152+ messages in thread
From: Dmitry Kozlyuk @ 2021-10-25 10:49 UTC (permalink / raw)
To: David Marchand; +Cc: hkalra, dev, Bruce Richardson
2021-10-24 22:04 (UTC+0200), David Marchand:
> From: Harman Kalra <hkalra@marvell.com>
>
> Removing direct access to interrupt handle structure fields,
> rather use respective get set APIs for the same.
> Making changes to all the libraries access the interrupt handle fields.
>
> Implementing alarm cleanup routine, where the memory allocated
> for interrupt instance can be freed.
>
> Signed-off-by: Harman Kalra <hkalra@marvell.com>
> Signed-off-by: David Marchand <david.marchand@redhat.com>
> ---
> Changes since v5:
> - split from patch4,
> - merged patch6,
> - renamed rte_eal_alarm_fini as rte_eal_alarm_cleanup,
>
> ---
[...]
> diff --git a/lib/eal/freebsd/eal_alarm.c b/lib/eal/freebsd/eal_alarm.c
> index c38b2e04f8..1a8fcf24c5 100644
> --- a/lib/eal/freebsd/eal_alarm.c
> +++ b/lib/eal/freebsd/eal_alarm.c
> @@ -32,7 +32,7 @@
>
> struct alarm_entry {
> LIST_ENTRY(alarm_entry) next;
> - struct rte_intr_handle handle;
> + struct rte_intr_handle *handle;
This field is never used and can be just removed.
> struct timespec time;
> rte_eal_alarm_callback cb_fn;
> void *cb_arg;
[...]
^ permalink raw reply [flat|nested] 152+ messages in thread
* Re: [dpdk-dev] [PATCH v6 9/9] interrupts: extend event list
2021-10-24 20:04 ` [dpdk-dev] [PATCH v6 9/9] interrupts: extend event list David Marchand
@ 2021-10-25 10:49 ` Dmitry Kozlyuk
2021-10-25 11:11 ` David Marchand
0 siblings, 1 reply; 152+ messages in thread
From: Dmitry Kozlyuk @ 2021-10-25 10:49 UTC (permalink / raw)
To: David Marchand
Cc: hkalra, dev, Anatoly Burakov, Nithin Dabilpuram, Kiran Kumar K,
Sunil Kumar Kori, Satha Rao
Hi David,
With some nits below,
Acked-by: Dmitry Kozlyuk <dmitry.kozliuk@gmail.com>
2021-10-24 22:04 (UTC+0200), David Marchand:
> From: Harman Kalra <hkalra@marvell.com>
>
> Dynamically allocating the efds and elist array os intr_handle
Typo: "os" -> "of"
> structure, based on size provided by user. Eg size can be
> MSIX interrupts supported by a PCI device.
>
> Signed-off-by: Harman Kalra <hkalra@marvell.com>
> Signed-off-by: David Marchand <david.marchand@redhat.com>
> ---
> Changes since v5:
> - split from patch5,
>
> ---
[...]
> diff --git a/lib/eal/common/eal_common_interrupts.c b/lib/eal/common/eal_common_interrupts.c
> index 3285c4335f..7feb9da8fa 100644
> --- a/lib/eal/common/eal_common_interrupts.c
> +++ b/lib/eal/common/eal_common_interrupts.c
[...]
> int rte_intr_fd_set(struct rte_intr_handle *intr_handle, int fd)
> @@ -239,6 +330,12 @@ int rte_intr_efds_index_get(const struct rte_intr_handle *intr_handle,
> {
> CHECK_VALID_INTR_HANDLE(intr_handle);
>
> + if (intr_handle->efds == NULL) {
> + RTE_LOG(ERR, EAL, "Event fd list not allocated\n");
> + rte_errno = EFAULT;
> + goto fail;
> + }
> +
Here and below:
The check for `nb_intr` will already catch not allocated `efds`,
because `nb_intr` is necessarily 0 in this case.
> if (index >= intr_handle->nb_intr) {
> RTE_LOG(ERR, EAL, "Invalid index %d, max limit %d\n", index,
> intr_handle->nb_intr);
> @@ -256,6 +353,12 @@ int rte_intr_efds_index_set(struct rte_intr_handle *intr_handle,
> {
> CHECK_VALID_INTR_HANDLE(intr_handle);
>
> + if (intr_handle->efds == NULL) {
> + RTE_LOG(ERR, EAL, "Event fd list not allocated\n");
> + rte_errno = EFAULT;
> + goto fail;
> + }
> +
> if (index >= intr_handle->nb_intr) {
> RTE_LOG(ERR, EAL, "Invalid size %d, max limit %d\n", index,
> intr_handle->nb_intr);
> @@ -275,6 +378,12 @@ struct rte_epoll_event *rte_intr_elist_index_get(
> {
> CHECK_VALID_INTR_HANDLE(intr_handle);
>
> + if (intr_handle->elist == NULL) {
> + RTE_LOG(ERR, EAL, "Event list not allocated\n");
> + rte_errno = EFAULT;
> + goto fail;
> + }
> +
> if (index >= intr_handle->nb_intr) {
> RTE_LOG(ERR, EAL, "Invalid index %d, max limit %d\n", index,
> intr_handle->nb_intr);
> @@ -292,6 +401,12 @@ int rte_intr_elist_index_set(struct rte_intr_handle *intr_handle,
> {
> CHECK_VALID_INTR_HANDLE(intr_handle);
>
> + if (intr_handle->elist == NULL) {
> + RTE_LOG(ERR, EAL, "Event list not allocated\n");
> + rte_errno = EFAULT;
> + goto fail;
> + }
> +
> if (index >= intr_handle->nb_intr) {
> RTE_LOG(ERR, EAL, "Invalid index %d, max limit %d\n", index,
> intr_handle->nb_intr);
[...]
^ permalink raw reply [flat|nested] 152+ messages in thread
* Re: [dpdk-dev] [PATCH v6 4/9] alarm: remove direct access to interrupt handle
2021-10-25 10:49 ` Dmitry Kozlyuk
@ 2021-10-25 11:09 ` David Marchand
0 siblings, 0 replies; 152+ messages in thread
From: David Marchand @ 2021-10-25 11:09 UTC (permalink / raw)
To: Dmitry Kozlyuk; +Cc: Harman Kalra, dev, Bruce Richardson
On Mon, Oct 25, 2021 at 12:49 PM Dmitry Kozlyuk
<dmitry.kozliuk@gmail.com> wrote:
> > diff --git a/lib/eal/freebsd/eal_alarm.c b/lib/eal/freebsd/eal_alarm.c
> > index c38b2e04f8..1a8fcf24c5 100644
> > --- a/lib/eal/freebsd/eal_alarm.c
> > +++ b/lib/eal/freebsd/eal_alarm.c
> > @@ -32,7 +32,7 @@
> >
> > struct alarm_entry {
> > LIST_ENTRY(alarm_entry) next;
> > - struct rte_intr_handle handle;
> > + struct rte_intr_handle *handle;
>
> This field is never used and can be just removed.
Indeed, removed.
>
> > struct timespec time;
> > rte_eal_alarm_callback cb_fn;
> > void *cb_arg;
> [...]
>
--
David Marchand
^ permalink raw reply [flat|nested] 152+ messages in thread
* Re: [dpdk-dev] [PATCH v6 9/9] interrupts: extend event list
2021-10-25 10:49 ` Dmitry Kozlyuk
@ 2021-10-25 11:11 ` David Marchand
0 siblings, 0 replies; 152+ messages in thread
From: David Marchand @ 2021-10-25 11:11 UTC (permalink / raw)
To: Dmitry Kozlyuk
Cc: Harman Kalra, dev, Anatoly Burakov, Nithin Dabilpuram,
Kiran Kumar K, Sunil Kumar Kori, Satha Rao
On Mon, Oct 25, 2021 at 12:49 PM Dmitry Kozlyuk
<dmitry.kozliuk@gmail.com> wrote:
> > diff --git a/lib/eal/common/eal_common_interrupts.c b/lib/eal/common/eal_common_interrupts.c
> > index 3285c4335f..7feb9da8fa 100644
> > --- a/lib/eal/common/eal_common_interrupts.c
> > +++ b/lib/eal/common/eal_common_interrupts.c
> [...]
> > int rte_intr_fd_set(struct rte_intr_handle *intr_handle, int fd)
> > @@ -239,6 +330,12 @@ int rte_intr_efds_index_get(const struct rte_intr_handle *intr_handle,
> > {
> > CHECK_VALID_INTR_HANDLE(intr_handle);
> >
> > + if (intr_handle->efds == NULL) {
> > + RTE_LOG(ERR, EAL, "Event fd list not allocated\n");
> > + rte_errno = EFAULT;
> > + goto fail;
> > + }
> > +
>
> Here and below:
> The check for `nb_intr` will already catch not allocated `efds`,
> because `nb_intr` is necessarily 0 in this case.
+1.
Thanks Dmitry.
--
David Marchand
^ permalink raw reply [flat|nested] 152+ messages in thread
* Re: [dpdk-dev] [PATCH v5 0/6] make rte_intr_handle internal
2021-10-22 20:49 ` [dpdk-dev] [PATCH v5 0/6] make rte_intr_handle internal Harman Kalra
` (6 preceding siblings ...)
2021-10-24 20:04 ` [dpdk-dev] [PATCH v6 0/9] make rte_intr_handle internal David Marchand
@ 2021-10-25 13:04 ` Raslan Darawsheh
2021-10-25 13:09 ` David Marchand
2021-10-25 13:34 ` [dpdk-dev] [PATCH v7 0/9] " David Marchand
2021-10-25 14:27 ` [dpdk-dev] [PATCH v8 0/9] make rte_intr_handle internal David Marchand
9 siblings, 1 reply; 152+ messages in thread
From: Raslan Darawsheh @ 2021-10-25 13:04 UTC (permalink / raw)
To: Harman Kalra, dev
Cc: david.marchand, dmitry.kozliuk, mdr, NBU-Contact-Thomas Monjalon
Hi,
> -----Original Message-----
> From: dev <dev-bounces@dpdk.org> On Behalf Of Harman Kalra
> Sent: Friday, October 22, 2021 11:49 PM
> To: dev@dpdk.org
> Cc: david.marchand@redhat.com; dmitry.kozliuk@gmail.com;
> mdr@ashroe.eu; NBU-Contact-Thomas Monjalon <thomas@monjalon.net>;
> Harman Kalra <hkalra@marvell.com>
> Subject: [dpdk-dev] [PATCH v5 0/6] make rte_intr_handle internal
>
> Moving struct rte_intr_handle as an internal structure to
> avoid any ABI breakages in future. Since this structure defines
> some static arrays and changing respective macros breaks the ABI.
> Eg:
> Currently RTE_MAX_RXTX_INTR_VEC_ID imposes a limit of maximum 512
> MSI-X interrupts that can be defined for a PCI device, while PCI
> specification allows maximum 2048 MSI-X interrupts that can be used.
> If some PCI device requires more than 512 vectors, either change the
> RTE_MAX_RXTX_INTR_VEC_ID limit or dynamically allocate based on
> PCI device MSI-X size on probe time. Either way its an ABI breakage.
>
> Change already included in 21.11 ABI improvement spreadsheet (item 42):
> https://nam11.safelinks.protection.outlook.com/?url=https%3A%2F%2Furld
> efense.proofpoint.com%2Fv2%2Furl%3Fu%3Dhttps-
> 3A__docs.google.com_s&data=04%7C01%7Crasland%40nvidia.com%7C
> 567d8ee2e3c842a9e59808d9959d822e%7C43083d15727340c1b7db39efd9ccc1
> 7a%7C0%7C0%7C637705326003996997%7CUnknown%7CTWFpbGZsb3d8eyJ
> WIjoiMC4wLjAwMDAiLCJQIjoiV2luMzIiLCJBTiI6Ik1haWwiLCJXVCI6Mn0%3D%
> 7C1000&sdata=7UgxpkEtH%2Fnjk7xo9qELjqWi58XLzzCH2pimeDWLzvc%
> 3D&reserved=0
> preadsheets_d_1betlC000ua5SsSiJIcC54mCCCJnW6voH5Dqv9UxeyfE_edit-
> 23gid-
> 3D0&d=DwICaQ&c=nKjWec2b6R0mOyPaz7xtfQ&r=5ESHPj7V-
> 7JdkxT_Z_SU6RrS37ys4U
> XudBQ_rrS5LRo&m=7dl3OmXU7QHMmWYB6V1hYJtq1cUkjfhXUwze2Si_48c
> &s=lh6DEGhR
> Bg1shODpAy3RQk-H-0uQx5icRfUBf9dtCp4&e=
>
> This series makes struct rte_intr_handle totally opaque to the outside
> world by wrapping it inside a .c file and providing get set wrapper APIs
> to read or manipulate its fields.. Any changes to be made to any of the
> fields should be done via these get set APIs.
> Introduced a new eal_common_interrupts.c where all these APIs are
> defined
> and also hides struct rte_intr_handle definition.
>
> Details on each patch of the series:
> Patch 1: eal/interrupts: implement get set APIs
> This patch provides prototypes and implementation of all the new
> get set APIs. Alloc APIs are implemented to allocate memory for
> interrupt handle instance. Currently most of the drivers defines
> interrupt handle instance as static but now it cant be static as
> size of rte_intr_handle is unknown to all the drivers. Drivers are
> expected to allocate interrupt instances during initialization
> and free these instances during cleanup phase.
> This patch also rearranges the headers related to interrupt
> framework. Epoll related definitions prototypes are moved into a
> new header i.e. rte_epoll.h and APIs defined in rte_eal_interrupts.h
> which were driver specific are moved to rte_interrupts.h (as anyways
> it was accessible and used outside DPDK library. Later in the series
> rte_eal_interrupts.h is removed.
>
> Patch 2: eal/interrupts: avoid direct access to interrupt handle
> Modifying the interrupt framework for linux and freebsd to use these
> get set alloc APIs as per requirement and avoid accessing the fields
> directly.
>
> Patch 3: test/interrupt: apply get set interrupt handle APIs
> Updating interrupt test suite to use interrupt handle APIs.
>
> Patch 4: drivers: remove direct access to interrupt handle fields
> Modifying all the drivers and libraries which are currently directly
> accessing the interrupt handle fields. Drivers are expected to
> allocated the interrupt instance, use get set APIs with the allocated
> interrupt handle and free it on cleanup.
>
> Patch 5: eal/interrupts: make interrupt handle structure opaque
> In this patch rte_eal_interrupt.h is removed, struct rte_intr_handle
> definition is moved to c file to make it completely opaque. As part of
> interrupt handle allocation, array like efds and elist(which are currently
> static) are dynamically allocated with default size
> (RTE_MAX_RXTX_INTR_VEC_ID). Later these arrays can be reallocated as per
> device requirement using new API rte_intr_handle_event_list_update().
> Eg, on PCI device probing MSIX size can be queried and these arrays can
> be reallocated accordingly.
>
> Patch 6: eal/alarm: introduce alarm fini routine
> Introducing alarm fini routine, as the memory allocated for alarm interrupt
> instance can be freed in alarm fini.
>
> Testing performed:
> 1. Validated the series by running interrupts and alarm test suite.
> 2. Validate l3fwd power functionality with octeontx2 and i40e intel cards,
> where interrupts are expected on packet arrival.
>
> v1:
> * Fixed freebsd compilation failure
> * Fixed seg fault in case of memif
>
> v2:
> * Merged the prototype and implementation patch to 1.
> * Restricting allocation of single interrupt instance.
> * Removed base APIs, as they were exposing internally
> allocated memory information.
> * Fixed some memory leak issues.
> * Marked some library specific APIs as internal.
>
> v3:
> * Removed flag from instance alloc API, rather auto detect
> if memory should be allocated using glibc malloc APIs or
> rte_malloc*
> * Added APIs for get/set windows handle.
> * Defined macros for repeated checks.
>
> v4:
> * Rectified some typo in the APIs documentation.
> * Better names for some internal variables.
>
> v5:
> * Reverted back to passing flag to instance alloc API, as
> with auto detect some multiprocess issues existing in the
> library were causing tests failure.
> * Rebased to top of tree.
>
> Harman Kalra (6):
> eal/interrupts: implement get set APIs
> eal/interrupts: avoid direct access to interrupt handle
> test/interrupt: apply get set interrupt handle APIs
> drivers: remove direct access to interrupt handle
> eal/interrupts: make interrupt handle structure opaque
> eal/alarm: introduce alarm fini routine
>
> MAINTAINERS | 1 +
> app/test/test_interrupts.c | 163 +++--
> drivers/baseband/acc100/rte_acc100_pmd.c | 18 +-
> .../fpga_5gnr_fec/rte_fpga_5gnr_fec.c | 21 +-
> drivers/baseband/fpga_lte_fec/fpga_lte_fec.c | 21 +-
> drivers/bus/auxiliary/auxiliary_common.c | 2 +
> drivers/bus/auxiliary/linux/auxiliary.c | 10 +
> drivers/bus/auxiliary/rte_bus_auxiliary.h | 2 +-
> drivers/bus/dpaa/dpaa_bus.c | 28 +-
> drivers/bus/dpaa/rte_dpaa_bus.h | 2 +-
> drivers/bus/fslmc/fslmc_bus.c | 16 +-
> drivers/bus/fslmc/fslmc_vfio.c | 32 +-
> drivers/bus/fslmc/portal/dpaa2_hw_dpio.c | 20 +-
> drivers/bus/fslmc/portal/dpaa2_hw_pvt.h | 2 +-
> drivers/bus/fslmc/rte_fslmc.h | 2 +-
> drivers/bus/ifpga/ifpga_bus.c | 15 +-
> drivers/bus/ifpga/rte_bus_ifpga.h | 2 +-
> drivers/bus/pci/bsd/pci.c | 21 +-
> drivers/bus/pci/linux/pci.c | 4 +-
> drivers/bus/pci/linux/pci_uio.c | 73 +-
> drivers/bus/pci/linux/pci_vfio.c | 115 ++-
> drivers/bus/pci/pci_common.c | 29 +-
> drivers/bus/pci/pci_common_uio.c | 21 +-
> drivers/bus/pci/rte_bus_pci.h | 4 +-
> drivers/bus/vmbus/linux/vmbus_bus.c | 6 +
> drivers/bus/vmbus/linux/vmbus_uio.c | 37 +-
> drivers/bus/vmbus/rte_bus_vmbus.h | 2 +-
> drivers/bus/vmbus/vmbus_common_uio.c | 24 +-
> drivers/common/cnxk/roc_cpt.c | 8 +-
> drivers/common/cnxk/roc_dev.c | 14 +-
> drivers/common/cnxk/roc_irq.c | 108 +--
> drivers/common/cnxk/roc_nix_inl_dev_irq.c | 8 +-
> drivers/common/cnxk/roc_nix_irq.c | 36 +-
> drivers/common/cnxk/roc_npa.c | 2 +-
> drivers/common/cnxk/roc_platform.h | 49 +-
> drivers/common/cnxk/roc_sso.c | 4 +-
> drivers/common/cnxk/roc_tim.c | 4 +-
> drivers/common/octeontx2/otx2_dev.c | 14 +-
> drivers/common/octeontx2/otx2_irq.c | 117 +--
> .../octeontx2/otx2_cryptodev_hw_access.c | 4 +-
> drivers/event/octeontx2/otx2_evdev_irq.c | 12 +-
> drivers/mempool/octeontx2/otx2_mempool.c | 2 +-
> drivers/net/atlantic/atl_ethdev.c | 20 +-
> drivers/net/avp/avp_ethdev.c | 8 +-
> drivers/net/axgbe/axgbe_ethdev.c | 12 +-
> drivers/net/axgbe/axgbe_mdio.c | 6 +-
> drivers/net/bnx2x/bnx2x_ethdev.c | 10 +-
> drivers/net/bnxt/bnxt_ethdev.c | 33 +-
> drivers/net/bnxt/bnxt_irq.c | 4 +-
> drivers/net/dpaa/dpaa_ethdev.c | 47 +-
> drivers/net/dpaa2/dpaa2_ethdev.c | 10 +-
> drivers/net/e1000/em_ethdev.c | 23 +-
> drivers/net/e1000/igb_ethdev.c | 79 +--
> drivers/net/ena/ena_ethdev.c | 35 +-
> drivers/net/enic/enic_main.c | 26 +-
> drivers/net/failsafe/failsafe.c | 23 +-
> drivers/net/failsafe/failsafe_intr.c | 43 +-
> drivers/net/failsafe/failsafe_ops.c | 19 +-
> drivers/net/failsafe/failsafe_private.h | 2 +-
> drivers/net/fm10k/fm10k_ethdev.c | 32 +-
> drivers/net/hinic/hinic_pmd_ethdev.c | 10 +-
> drivers/net/hns3/hns3_ethdev.c | 57 +-
> drivers/net/hns3/hns3_ethdev_vf.c | 64 +-
> drivers/net/hns3/hns3_rxtx.c | 2 +-
> drivers/net/i40e/i40e_ethdev.c | 53 +-
> drivers/net/iavf/iavf_ethdev.c | 42 +-
> drivers/net/iavf/iavf_vchnl.c | 4 +-
> drivers/net/ice/ice_dcf.c | 10 +-
> drivers/net/ice/ice_dcf_ethdev.c | 21 +-
> drivers/net/ice/ice_ethdev.c | 49 +-
> drivers/net/igc/igc_ethdev.c | 45 +-
> drivers/net/ionic/ionic_ethdev.c | 17 +-
> drivers/net/ixgbe/ixgbe_ethdev.c | 66 +-
> drivers/net/memif/memif_socket.c | 111 ++-
> drivers/net/memif/memif_socket.h | 4 +-
> drivers/net/memif/rte_eth_memif.c | 61 +-
> drivers/net/memif/rte_eth_memif.h | 2 +-
> drivers/net/mlx4/mlx4.c | 19 +-
> drivers/net/mlx4/mlx4.h | 2 +-
> drivers/net/mlx4/mlx4_intr.c | 47 +-
> drivers/net/mlx5/linux/mlx5_os.c | 53 +-
> drivers/net/mlx5/linux/mlx5_socket.c | 25 +-
> drivers/net/mlx5/mlx5.h | 6 +-
> drivers/net/mlx5/mlx5_rxq.c | 42 +-
> drivers/net/mlx5/mlx5_trigger.c | 4 +-
> drivers/net/mlx5/mlx5_txpp.c | 26 +-
> drivers/net/netvsc/hn_ethdev.c | 4 +-
> drivers/net/nfp/nfp_common.c | 34 +-
> drivers/net/nfp/nfp_ethdev.c | 13 +-
> drivers/net/nfp/nfp_ethdev_vf.c | 13 +-
> drivers/net/ngbe/ngbe_ethdev.c | 29 +-
> drivers/net/octeontx2/otx2_ethdev_irq.c | 35 +-
> drivers/net/qede/qede_ethdev.c | 16 +-
> drivers/net/sfc/sfc_intr.c | 30 +-
> drivers/net/tap/rte_eth_tap.c | 36 +-
> drivers/net/tap/rte_eth_tap.h | 2 +-
> drivers/net/tap/tap_intr.c | 32 +-
> drivers/net/thunderx/nicvf_ethdev.c | 12 +
> drivers/net/thunderx/nicvf_struct.h | 2 +-
> drivers/net/txgbe/txgbe_ethdev.c | 38 +-
> drivers/net/txgbe/txgbe_ethdev_vf.c | 33 +-
> drivers/net/vhost/rte_eth_vhost.c | 76 +-
> drivers/net/virtio/virtio_ethdev.c | 21 +-
> .../net/virtio/virtio_user/virtio_user_dev.c | 48 +-
> drivers/net/vmxnet3/vmxnet3_ethdev.c | 43 +-
> drivers/raw/ifpga/ifpga_rawdev.c | 62 +-
> drivers/raw/ntb/ntb.c | 9 +-
> .../regex/octeontx2/otx2_regexdev_hw_access.c | 4 +-
> drivers/vdpa/ifc/ifcvf_vdpa.c | 5 +-
> drivers/vdpa/mlx5/mlx5_vdpa.c | 10 +
> drivers/vdpa/mlx5/mlx5_vdpa.h | 4 +-
> drivers/vdpa/mlx5/mlx5_vdpa_event.c | 22 +-
> drivers/vdpa/mlx5/mlx5_vdpa_virtq.c | 45 +-
> lib/bbdev/rte_bbdev.c | 4 +-
> lib/eal/common/eal_common_interrupts.c | 588 +++++++++++++++
> lib/eal/common/eal_private.h | 11 +
> lib/eal/common/meson.build | 1 +
> lib/eal/freebsd/eal.c | 1 +
> lib/eal/freebsd/eal_alarm.c | 53 +-
> lib/eal/freebsd/eal_interrupts.c | 112 ++-
> lib/eal/include/meson.build | 2 +-
> lib/eal/include/rte_eal_interrupts.h | 269 -------
> lib/eal/include/rte_eal_trace.h | 24 +-
> lib/eal/include/rte_epoll.h | 118 ++++
> lib/eal/include/rte_interrupts.h | 668 +++++++++++++++++-
> lib/eal/linux/eal.c | 1 +
> lib/eal/linux/eal_alarm.c | 37 +-
> lib/eal/linux/eal_dev.c | 63 +-
> lib/eal/linux/eal_interrupts.c | 303 +++++---
> lib/eal/version.map | 46 +-
> lib/ethdev/ethdev_pci.h | 2 +-
> lib/ethdev/rte_ethdev.c | 14 +-
> 132 files changed, 3631 insertions(+), 1713 deletions(-)
> create mode 100644 lib/eal/common/eal_common_interrupts.c
> delete mode 100644 lib/eal/include/rte_eal_interrupts.h
> create mode 100644 lib/eal/include/rte_epoll.h
>
> --
> 2.18.0
This series is causing this seg fault with MLX5 pmd:
Thread 1 "dpdk-l3fwd-powe" received signal SIGSEGV, Segmentation fault.
rte_intr_free_epoll_fd (intr_handle=0x0) at ../lib/eal/linux/eal_interrupts.c:1512
1512 if (__atomic_load_n(&rev->status,
(gdb) bt
#0 rte_intr_free_epoll_fd (intr_handle=0x0) at ../lib/eal/linux/eal_interrupts.c:1512
#1 0x0000555556de7814 in mlx5_rx_intr_vec_disable (dev=0x55555b554a40 <rte_eth_devices>) at ../drivers/net/mlx5/mlx5_rxq.c:934
#2 0x0000555556de73da in mlx5_rx_intr_vec_enable (dev=0x55555b554a40 <rte_eth_devices>) at ../drivers/net/mlx5/mlx5_rxq.c:836
#3 0x0000555556e04012 in mlx5_dev_start (dev=0x55555b554a40 <rte_eth_devices>) at ../drivers/net/mlx5/mlx5_trigger.c:1146
#4 0x0000555555b82da7 in rte_eth_dev_start (port_id=0) at ../lib/ethdev/rte_ethdev.c:1823
#5 0x000055555575e66d in main (argc=7, argv=0x7fffffffe3f0) at ../examples/l3fwd-power/main.c:2811
(gdb) f 1
#1 0x0000555556de7814 in mlx5_rx_intr_vec_disable (dev=0x55555b554a40 <rte_eth_devices>) at ../drivers/net/mlx5/mlx5_rxq.c:934
934 rte_intr_free_epoll_fd(intr_handle);
It can be easily reproduced as follows:
dpdk-l3fwd-power -n 4 -a 0000:08:00.0,txq_inline_mpw=439,rx_vec_en=1 -a 0000:08:00.,txq_inline_mpw=439,rx_vec_en=1 -c 0xfffffff -- -p 0x3 -P --interrupt-only --parse-ptype --config='(0, 0, 0)(1, 0, 1)(0, 1, 2)(1, 1, 3)(0, 2, 4)(1, 2, 5)(0, 3, 6)(1, 3, 7)'
Kindest regards,
Raslan Darawsheh
^ permalink raw reply [flat|nested] 152+ messages in thread
* Re: [dpdk-dev] [PATCH v5 0/6] make rte_intr_handle internal
2021-10-25 13:04 ` [dpdk-dev] [PATCH v5 0/6] make rte_intr_handle internal Raslan Darawsheh
@ 2021-10-25 13:09 ` David Marchand
0 siblings, 0 replies; 152+ messages in thread
From: David Marchand @ 2021-10-25 13:09 UTC (permalink / raw)
To: Raslan Darawsheh
Cc: Harman Kalra, dev, dmitry.kozliuk, mdr, NBU-Contact-Thomas Monjalon
On Mon, Oct 25, 2021 at 3:04 PM Raslan Darawsheh <rasland@nvidia.com> wrote:
>
> Hi,
>
> > -----Original Message-----
> > From: dev <dev-bounces@dpdk.org> On Behalf Of Harman Kalra
> > Sent: Friday, October 22, 2021 11:49 PM
> > To: dev@dpdk.org
> > Cc: david.marchand@redhat.com; dmitry.kozliuk@gmail.com;
> > mdr@ashroe.eu; NBU-Contact-Thomas Monjalon <thomas@monjalon.net>;
> > Harman Kalra <hkalra@marvell.com>
> > Subject: [dpdk-dev] [PATCH v5 0/6] make rte_intr_handle internal
> >
> > Moving struct rte_intr_handle as an internal structure to
> > avoid any ABI breakages in future. Since this structure defines
> > some static arrays and changing respective macros breaks the ABI.
> > Eg:
> > Currently RTE_MAX_RXTX_INTR_VEC_ID imposes a limit of maximum 512
> > MSI-X interrupts that can be defined for a PCI device, while PCI
> > specification allows maximum 2048 MSI-X interrupts that can be used.
> > If some PCI device requires more than 512 vectors, either change the
> > RTE_MAX_RXTX_INTR_VEC_ID limit or dynamically allocate based on
> > PCI device MSI-X size on probe time. Either way its an ABI breakage.
> >
> > Change already included in 21.11 ABI improvement spreadsheet (item 42):
> > https://nam11.safelinks.protection.outlook.com/?url=https%3A%2F%2Furld
> > efense.proofpoint.com%2Fv2%2Furl%3Fu%3Dhttps-
> > 3A__docs.google.com_s&data=04%7C01%7Crasland%40nvidia.com%7C
> > 567d8ee2e3c842a9e59808d9959d822e%7C43083d15727340c1b7db39efd9ccc1
> > 7a%7C0%7C0%7C637705326003996997%7CUnknown%7CTWFpbGZsb3d8eyJ
> > WIjoiMC4wLjAwMDAiLCJQIjoiV2luMzIiLCJBTiI6Ik1haWwiLCJXVCI6Mn0%3D%
> > 7C1000&sdata=7UgxpkEtH%2Fnjk7xo9qELjqWi58XLzzCH2pimeDWLzvc%
> > 3D&reserved=0
> > preadsheets_d_1betlC000ua5SsSiJIcC54mCCCJnW6voH5Dqv9UxeyfE_edit-
> > 23gid-
> > 3D0&d=DwICaQ&c=nKjWec2b6R0mOyPaz7xtfQ&r=5ESHPj7V-
> > 7JdkxT_Z_SU6RrS37ys4U
> > XudBQ_rrS5LRo&m=7dl3OmXU7QHMmWYB6V1hYJtq1cUkjfhXUwze2Si_48c
> > &s=lh6DEGhR
> > Bg1shODpAy3RQk-H-0uQx5icRfUBf9dtCp4&e=
> >
> > This series makes struct rte_intr_handle totally opaque to the outside
> > world by wrapping it inside a .c file and providing get set wrapper APIs
> > to read or manipulate its fields.. Any changes to be made to any of the
> > fields should be done via these get set APIs.
> > Introduced a new eal_common_interrupts.c where all these APIs are
> > defined
> > and also hides struct rte_intr_handle definition.
> >
> > Details on each patch of the series:
> > Patch 1: eal/interrupts: implement get set APIs
> > This patch provides prototypes and implementation of all the new
> > get set APIs. Alloc APIs are implemented to allocate memory for
> > interrupt handle instance. Currently most of the drivers defines
> > interrupt handle instance as static but now it cant be static as
> > size of rte_intr_handle is unknown to all the drivers. Drivers are
> > expected to allocate interrupt instances during initialization
> > and free these instances during cleanup phase.
> > This patch also rearranges the headers related to interrupt
> > framework. Epoll related definitions prototypes are moved into a
> > new header i.e. rte_epoll.h and APIs defined in rte_eal_interrupts.h
> > which were driver specific are moved to rte_interrupts.h (as anyways
> > it was accessible and used outside DPDK library. Later in the series
> > rte_eal_interrupts.h is removed.
> >
> > Patch 2: eal/interrupts: avoid direct access to interrupt handle
> > Modifying the interrupt framework for linux and freebsd to use these
> > get set alloc APIs as per requirement and avoid accessing the fields
> > directly.
> >
> > Patch 3: test/interrupt: apply get set interrupt handle APIs
> > Updating interrupt test suite to use interrupt handle APIs.
> >
> > Patch 4: drivers: remove direct access to interrupt handle fields
> > Modifying all the drivers and libraries which are currently directly
> > accessing the interrupt handle fields. Drivers are expected to
> > allocated the interrupt instance, use get set APIs with the allocated
> > interrupt handle and free it on cleanup.
> >
> > Patch 5: eal/interrupts: make interrupt handle structure opaque
> > In this patch rte_eal_interrupt.h is removed, struct rte_intr_handle
> > definition is moved to c file to make it completely opaque. As part of
> > interrupt handle allocation, array like efds and elist(which are currently
> > static) are dynamically allocated with default size
> > (RTE_MAX_RXTX_INTR_VEC_ID). Later these arrays can be reallocated as per
> > device requirement using new API rte_intr_handle_event_list_update().
> > Eg, on PCI device probing MSIX size can be queried and these arrays can
> > be reallocated accordingly.
> >
> > Patch 6: eal/alarm: introduce alarm fini routine
> > Introducing alarm fini routine, as the memory allocated for alarm interrupt
> > instance can be freed in alarm fini.
> >
> > Testing performed:
> > 1. Validated the series by running interrupts and alarm test suite.
> > 2. Validate l3fwd power functionality with octeontx2 and i40e intel cards,
> > where interrupts are expected on packet arrival.
> >
> > v1:
> > * Fixed freebsd compilation failure
> > * Fixed seg fault in case of memif
> >
> > v2:
> > * Merged the prototype and implementation patch to 1.
> > * Restricting allocation of single interrupt instance.
> > * Removed base APIs, as they were exposing internally
> > allocated memory information.
> > * Fixed some memory leak issues.
> > * Marked some library specific APIs as internal.
> >
> > v3:
> > * Removed flag from instance alloc API, rather auto detect
> > if memory should be allocated using glibc malloc APIs or
> > rte_malloc*
> > * Added APIs for get/set windows handle.
> > * Defined macros for repeated checks.
> >
> > v4:
> > * Rectified some typo in the APIs documentation.
> > * Better names for some internal variables.
> >
> > v5:
> > * Reverted back to passing flag to instance alloc API, as
> > with auto detect some multiprocess issues existing in the
> > library were causing tests failure.
> > * Rebased to top of tree.
> >
> > Harman Kalra (6):
> > eal/interrupts: implement get set APIs
> > eal/interrupts: avoid direct access to interrupt handle
> > test/interrupt: apply get set interrupt handle APIs
> > drivers: remove direct access to interrupt handle
> > eal/interrupts: make interrupt handle structure opaque
> > eal/alarm: introduce alarm fini routine
> >
> > MAINTAINERS | 1 +
> > app/test/test_interrupts.c | 163 +++--
> > drivers/baseband/acc100/rte_acc100_pmd.c | 18 +-
> > .../fpga_5gnr_fec/rte_fpga_5gnr_fec.c | 21 +-
> > drivers/baseband/fpga_lte_fec/fpga_lte_fec.c | 21 +-
> > drivers/bus/auxiliary/auxiliary_common.c | 2 +
> > drivers/bus/auxiliary/linux/auxiliary.c | 10 +
> > drivers/bus/auxiliary/rte_bus_auxiliary.h | 2 +-
> > drivers/bus/dpaa/dpaa_bus.c | 28 +-
> > drivers/bus/dpaa/rte_dpaa_bus.h | 2 +-
> > drivers/bus/fslmc/fslmc_bus.c | 16 +-
> > drivers/bus/fslmc/fslmc_vfio.c | 32 +-
> > drivers/bus/fslmc/portal/dpaa2_hw_dpio.c | 20 +-
> > drivers/bus/fslmc/portal/dpaa2_hw_pvt.h | 2 +-
> > drivers/bus/fslmc/rte_fslmc.h | 2 +-
> > drivers/bus/ifpga/ifpga_bus.c | 15 +-
> > drivers/bus/ifpga/rte_bus_ifpga.h | 2 +-
> > drivers/bus/pci/bsd/pci.c | 21 +-
> > drivers/bus/pci/linux/pci.c | 4 +-
> > drivers/bus/pci/linux/pci_uio.c | 73 +-
> > drivers/bus/pci/linux/pci_vfio.c | 115 ++-
> > drivers/bus/pci/pci_common.c | 29 +-
> > drivers/bus/pci/pci_common_uio.c | 21 +-
> > drivers/bus/pci/rte_bus_pci.h | 4 +-
> > drivers/bus/vmbus/linux/vmbus_bus.c | 6 +
> > drivers/bus/vmbus/linux/vmbus_uio.c | 37 +-
> > drivers/bus/vmbus/rte_bus_vmbus.h | 2 +-
> > drivers/bus/vmbus/vmbus_common_uio.c | 24 +-
> > drivers/common/cnxk/roc_cpt.c | 8 +-
> > drivers/common/cnxk/roc_dev.c | 14 +-
> > drivers/common/cnxk/roc_irq.c | 108 +--
> > drivers/common/cnxk/roc_nix_inl_dev_irq.c | 8 +-
> > drivers/common/cnxk/roc_nix_irq.c | 36 +-
> > drivers/common/cnxk/roc_npa.c | 2 +-
> > drivers/common/cnxk/roc_platform.h | 49 +-
> > drivers/common/cnxk/roc_sso.c | 4 +-
> > drivers/common/cnxk/roc_tim.c | 4 +-
> > drivers/common/octeontx2/otx2_dev.c | 14 +-
> > drivers/common/octeontx2/otx2_irq.c | 117 +--
> > .../octeontx2/otx2_cryptodev_hw_access.c | 4 +-
> > drivers/event/octeontx2/otx2_evdev_irq.c | 12 +-
> > drivers/mempool/octeontx2/otx2_mempool.c | 2 +-
> > drivers/net/atlantic/atl_ethdev.c | 20 +-
> > drivers/net/avp/avp_ethdev.c | 8 +-
> > drivers/net/axgbe/axgbe_ethdev.c | 12 +-
> > drivers/net/axgbe/axgbe_mdio.c | 6 +-
> > drivers/net/bnx2x/bnx2x_ethdev.c | 10 +-
> > drivers/net/bnxt/bnxt_ethdev.c | 33 +-
> > drivers/net/bnxt/bnxt_irq.c | 4 +-
> > drivers/net/dpaa/dpaa_ethdev.c | 47 +-
> > drivers/net/dpaa2/dpaa2_ethdev.c | 10 +-
> > drivers/net/e1000/em_ethdev.c | 23 +-
> > drivers/net/e1000/igb_ethdev.c | 79 +--
> > drivers/net/ena/ena_ethdev.c | 35 +-
> > drivers/net/enic/enic_main.c | 26 +-
> > drivers/net/failsafe/failsafe.c | 23 +-
> > drivers/net/failsafe/failsafe_intr.c | 43 +-
> > drivers/net/failsafe/failsafe_ops.c | 19 +-
> > drivers/net/failsafe/failsafe_private.h | 2 +-
> > drivers/net/fm10k/fm10k_ethdev.c | 32 +-
> > drivers/net/hinic/hinic_pmd_ethdev.c | 10 +-
> > drivers/net/hns3/hns3_ethdev.c | 57 +-
> > drivers/net/hns3/hns3_ethdev_vf.c | 64 +-
> > drivers/net/hns3/hns3_rxtx.c | 2 +-
> > drivers/net/i40e/i40e_ethdev.c | 53 +-
> > drivers/net/iavf/iavf_ethdev.c | 42 +-
> > drivers/net/iavf/iavf_vchnl.c | 4 +-
> > drivers/net/ice/ice_dcf.c | 10 +-
> > drivers/net/ice/ice_dcf_ethdev.c | 21 +-
> > drivers/net/ice/ice_ethdev.c | 49 +-
> > drivers/net/igc/igc_ethdev.c | 45 +-
> > drivers/net/ionic/ionic_ethdev.c | 17 +-
> > drivers/net/ixgbe/ixgbe_ethdev.c | 66 +-
> > drivers/net/memif/memif_socket.c | 111 ++-
> > drivers/net/memif/memif_socket.h | 4 +-
> > drivers/net/memif/rte_eth_memif.c | 61 +-
> > drivers/net/memif/rte_eth_memif.h | 2 +-
> > drivers/net/mlx4/mlx4.c | 19 +-
> > drivers/net/mlx4/mlx4.h | 2 +-
> > drivers/net/mlx4/mlx4_intr.c | 47 +-
> > drivers/net/mlx5/linux/mlx5_os.c | 53 +-
> > drivers/net/mlx5/linux/mlx5_socket.c | 25 +-
> > drivers/net/mlx5/mlx5.h | 6 +-
> > drivers/net/mlx5/mlx5_rxq.c | 42 +-
> > drivers/net/mlx5/mlx5_trigger.c | 4 +-
> > drivers/net/mlx5/mlx5_txpp.c | 26 +-
> > drivers/net/netvsc/hn_ethdev.c | 4 +-
> > drivers/net/nfp/nfp_common.c | 34 +-
> > drivers/net/nfp/nfp_ethdev.c | 13 +-
> > drivers/net/nfp/nfp_ethdev_vf.c | 13 +-
> > drivers/net/ngbe/ngbe_ethdev.c | 29 +-
> > drivers/net/octeontx2/otx2_ethdev_irq.c | 35 +-
> > drivers/net/qede/qede_ethdev.c | 16 +-
> > drivers/net/sfc/sfc_intr.c | 30 +-
> > drivers/net/tap/rte_eth_tap.c | 36 +-
> > drivers/net/tap/rte_eth_tap.h | 2 +-
> > drivers/net/tap/tap_intr.c | 32 +-
> > drivers/net/thunderx/nicvf_ethdev.c | 12 +
> > drivers/net/thunderx/nicvf_struct.h | 2 +-
> > drivers/net/txgbe/txgbe_ethdev.c | 38 +-
> > drivers/net/txgbe/txgbe_ethdev_vf.c | 33 +-
> > drivers/net/vhost/rte_eth_vhost.c | 76 +-
> > drivers/net/virtio/virtio_ethdev.c | 21 +-
> > .../net/virtio/virtio_user/virtio_user_dev.c | 48 +-
> > drivers/net/vmxnet3/vmxnet3_ethdev.c | 43 +-
> > drivers/raw/ifpga/ifpga_rawdev.c | 62 +-
> > drivers/raw/ntb/ntb.c | 9 +-
> > .../regex/octeontx2/otx2_regexdev_hw_access.c | 4 +-
> > drivers/vdpa/ifc/ifcvf_vdpa.c | 5 +-
> > drivers/vdpa/mlx5/mlx5_vdpa.c | 10 +
> > drivers/vdpa/mlx5/mlx5_vdpa.h | 4 +-
> > drivers/vdpa/mlx5/mlx5_vdpa_event.c | 22 +-
> > drivers/vdpa/mlx5/mlx5_vdpa_virtq.c | 45 +-
> > lib/bbdev/rte_bbdev.c | 4 +-
> > lib/eal/common/eal_common_interrupts.c | 588 +++++++++++++++
> > lib/eal/common/eal_private.h | 11 +
> > lib/eal/common/meson.build | 1 +
> > lib/eal/freebsd/eal.c | 1 +
> > lib/eal/freebsd/eal_alarm.c | 53 +-
> > lib/eal/freebsd/eal_interrupts.c | 112 ++-
> > lib/eal/include/meson.build | 2 +-
> > lib/eal/include/rte_eal_interrupts.h | 269 -------
> > lib/eal/include/rte_eal_trace.h | 24 +-
> > lib/eal/include/rte_epoll.h | 118 ++++
> > lib/eal/include/rte_interrupts.h | 668 +++++++++++++++++-
> > lib/eal/linux/eal.c | 1 +
> > lib/eal/linux/eal_alarm.c | 37 +-
> > lib/eal/linux/eal_dev.c | 63 +-
> > lib/eal/linux/eal_interrupts.c | 303 +++++---
> > lib/eal/version.map | 46 +-
> > lib/ethdev/ethdev_pci.h | 2 +-
> > lib/ethdev/rte_ethdev.c | 14 +-
> > 132 files changed, 3631 insertions(+), 1713 deletions(-)
> > create mode 100644 lib/eal/common/eal_common_interrupts.c
> > delete mode 100644 lib/eal/include/rte_eal_interrupts.h
> > create mode 100644 lib/eal/include/rte_epoll.h
> >
> > --
> > 2.18.0
>
> This series is causing this seg fault with MLX5 pmd:
> Thread 1 "dpdk-l3fwd-powe" received signal SIGSEGV, Segmentation fault.
> rte_intr_free_epoll_fd (intr_handle=0x0) at ../lib/eal/linux/eal_interrupts.c:1512
> 1512 if (__atomic_load_n(&rev->status,
> (gdb) bt
> #0 rte_intr_free_epoll_fd (intr_handle=0x0) at ../lib/eal/linux/eal_interrupts.c:1512
> #1 0x0000555556de7814 in mlx5_rx_intr_vec_disable (dev=0x55555b554a40 <rte_eth_devices>) at ../drivers/net/mlx5/mlx5_rxq.c:934
> #2 0x0000555556de73da in mlx5_rx_intr_vec_enable (dev=0x55555b554a40 <rte_eth_devices>) at ../drivers/net/mlx5/mlx5_rxq.c:836
> #3 0x0000555556e04012 in mlx5_dev_start (dev=0x55555b554a40 <rte_eth_devices>) at ../drivers/net/mlx5/mlx5_trigger.c:1146
> #4 0x0000555555b82da7 in rte_eth_dev_start (port_id=0) at ../lib/ethdev/rte_ethdev.c:1823
> #5 0x000055555575e66d in main (argc=7, argv=0x7fffffffe3f0) at ../examples/l3fwd-power/main.c:2811
> (gdb) f 1
> #1 0x0000555556de7814 in mlx5_rx_intr_vec_disable (dev=0x55555b554a40 <rte_eth_devices>) at ../drivers/net/mlx5/mlx5_rxq.c:934
> 934 rte_intr_free_epoll_fd(intr_handle);
>
>
> It can be easily reproduced as following:
> dpdk-l3fwd-power -n 4 -a 0000:08:00.0,txq_inline_mpw=439,rx_vec_en=1 -a 0000:08:00.,txq_inline_mpw=439,rx_vec_en=1 -c 0xfffffff -- -p 0x3 -P --interrupt-only --parse-ptype --config='(0, 0, 0)(1, 0, 1)(0, 1, 2)(1, 1, 3)(0, 2, 4)(1, 2, 5)(0, 3, 6)(1, 3, 7)'
>
That confirms my suspicion on the pci bus update that looks at
RTE_PCI_DRV_NEED_MAPPING.
v7 incoming.
--
David Marchand
^ permalink raw reply [flat|nested] 152+ messages in thread
* [dpdk-dev] [PATCH v7 0/9] make rte_intr_handle internal
2021-10-22 20:49 ` [dpdk-dev] [PATCH v5 0/6] make rte_intr_handle internal Harman Kalra
` (7 preceding siblings ...)
2021-10-25 13:04 ` [dpdk-dev] [PATCH v5 0/6] make rte_intr_handle internal Raslan Darawsheh
@ 2021-10-25 13:34 ` David Marchand
2021-10-25 13:34 ` [dpdk-dev] [PATCH v7 1/9] interrupts: add allocator and accessors David Marchand
` (8 more replies)
2021-10-25 14:27 ` [dpdk-dev] [PATCH v8 0/9] make rte_intr_handle internal David Marchand
9 siblings, 9 replies; 152+ messages in thread
From: David Marchand @ 2021-10-25 13:34 UTC (permalink / raw)
To: hkalra, dev; +Cc: dmitry.kozliuk, rasland, thomas
Moving struct rte_intr_handle as an internal structure to
avoid any ABI breakages in the future, since this structure defines
some static arrays and changing the respective macros breaks the ABI.
Eg:
Currently RTE_MAX_RXTX_INTR_VEC_ID imposes a limit of maximum 512
MSI-X interrupts that can be defined for a PCI device, while PCI
specification allows maximum 2048 MSI-X interrupts that can be used.
If some PCI device requires more than 512 vectors, either change the
RTE_MAX_RXTX_INTR_VEC_ID limit or dynamically allocate based on
PCI device MSI-X size at probe time. Either way it's an ABI breakage.
Change already included in 21.11 ABI improvement spreadsheet (item 42):
https://urldefense.proofpoint.com/v2/url?u=https-3A__docs.google.com_s
preadsheets_d_1betlC000ua5SsSiJIcC54mCCCJnW6voH5Dqv9UxeyfE_edit-23gid-
3D0&d=DwICaQ&c=nKjWec2b6R0mOyPaz7xtfQ&r=5ESHPj7V-7JdkxT_Z_SU6RrS37ys4U
XudBQ_rrS5LRo&m=7dl3OmXU7QHMmWYB6V1hYJtq1cUkjfhXUwze2Si_48c&s=lh6DEGhR
Bg1shODpAy3RQk-H-0uQx5icRfUBf9dtCp4&e=
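To make the ABI point above concrete (editor's illustration, not DPDK code;
macro names and values are stand-ins): with the arrays embedded in a public
structure, bumping the vector-count macro changes the structure size seen by
applications built against the old value.

#include <stdio.h>

#define OLD_MAX_VEC 512   /* stand-in for today's RTE_MAX_RXTX_INTR_VEC_ID */
#define NEW_MAX_VEC 2048  /* stand-in for the PCI MSI-X maximum */

struct handle_built_with_old { int fd; int efds[OLD_MAX_VEC]; };
struct handle_built_with_new { int fd; int efds[NEW_MAX_VEC]; };

int main(void)
{
	/* Different sizes mean a layout mismatch across the ABI boundary,
	 * which is why the series hides the structure and sizes the arrays
	 * dynamically instead.
	 */
	printf("%zu vs %zu\n",
	       sizeof(struct handle_built_with_old),
	       sizeof(struct handle_built_with_new));
	return 0;
}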
This series makes struct rte_intr_handle totally opaque to the outside
world by wrapping it inside a .c file and providing get set wrapper APIs
to read or manipulate its fields. Any changes to be made to any of the
fields should be done via these get set APIs.
Introduced a new eal_common_interrupts.c where all these APIs are defined
and which also hides the struct rte_intr_handle definition.
v1:
* Fixed freebsd compilation failure
* Fixed seg fault in case of memif
v2:
* Merged the prototype and implementation patch to 1.
* Restricting allocation of single interrupt instance.
* Removed base APIs, as they were exposing internally
allocated memory information.
* Fixed some memory leak issues.
* Marked some library specific APIs as internal.
v3:
* Removed flag from instance alloc API, rather auto detect
if memory should be allocated using glibc malloc APIs or
rte_malloc*
* Added APIs for get/set windows handle.
* Defined macros for repeated checks.
v4:
* Rectified some typos in the API documentation.
* Better names for some internal variables.
v5:
* Reverted to passing a flag to the instance alloc API, as
with auto detection some multiprocess issues existing in the
library were causing test failures.
* Rebased to top of tree.
v6:
* renamed RTE_INTR_INSTANCE_F_UNSHARED to RTE_INTR_INSTANCE_F_PRIVATE,
* changed API and removed need for alloc_flag content exposure
(see rte_intr_instance_dup() in patch 1 and 2),
* exported all symbols for Windows,
* fixed leak in unit tests in case of alloc failure,
* split (previously) patch 4 into three patches
* (now) patch 4 only concerns the alarm code, and the cleanup bits from
(previously) patch 6 are squashed into it,
* (now) patch 5 concerns other libraries updates,
* (now) patch 6 concerns drivers updates:
* instance allocation is moved to probing for auxiliary,
* there might be a bug for PCI drivers not requesting
RTE_PCI_DRV_NEED_MAPPING, but the code is left as in v5,
* split (previously) patch 5 into three patches
* (now) patch 7 only hides the structure, but keeps it in an EAL-private
header; this makes it possible to keep info in tracepoints,
* (now) patch 8 deals with VFIO/UIO internal fds merge,
* (now) patch 9 extends event list,
v7:
* fixed compilation on FreeBSD,
* removed unused interrupt handle in FreeBSD alarm code,
* fixed interrupt handle allocation for PCI drivers without
RTE_PCI_DRV_NEED_MAPPING,
--
David Marchand
Harman Kalra (9):
interrupts: add allocator and accessors
interrupts: remove direct access to interrupt handle
test/interrupts: remove direct access to interrupt handle
alarm: remove direct access to interrupt handle
lib: remove direct access to interrupt handle
drivers: remove direct access to interrupt handle
interrupts: make interrupt handle structure opaque
interrupts: rename device specific file descriptor
interrupts: extend event list
MAINTAINERS | 1 +
app/test/test_interrupts.c | 164 +++--
drivers/baseband/acc100/rte_acc100_pmd.c | 14 +-
.../fpga_5gnr_fec/rte_fpga_5gnr_fec.c | 24 +-
drivers/baseband/fpga_lte_fec/fpga_lte_fec.c | 24 +-
drivers/bus/auxiliary/auxiliary_common.c | 17 +-
drivers/bus/auxiliary/rte_bus_auxiliary.h | 2 +-
drivers/bus/dpaa/dpaa_bus.c | 28 +-
drivers/bus/dpaa/rte_dpaa_bus.h | 2 +-
drivers/bus/fslmc/fslmc_bus.c | 14 +-
drivers/bus/fslmc/fslmc_vfio.c | 30 +-
drivers/bus/fslmc/portal/dpaa2_hw_dpio.c | 18 +-
drivers/bus/fslmc/portal/dpaa2_hw_pvt.h | 2 +-
drivers/bus/fslmc/rte_fslmc.h | 2 +-
drivers/bus/ifpga/ifpga_bus.c | 13 +-
drivers/bus/ifpga/rte_bus_ifpga.h | 2 +-
drivers/bus/pci/bsd/pci.c | 20 +-
drivers/bus/pci/linux/pci.c | 4 +-
drivers/bus/pci/linux/pci_uio.c | 69 +-
drivers/bus/pci/linux/pci_vfio.c | 108 ++-
drivers/bus/pci/pci_common.c | 47 +-
drivers/bus/pci/pci_common_uio.c | 21 +-
drivers/bus/pci/rte_bus_pci.h | 4 +-
drivers/bus/vmbus/linux/vmbus_bus.c | 6 +
drivers/bus/vmbus/linux/vmbus_uio.c | 35 +-
drivers/bus/vmbus/rte_bus_vmbus.h | 2 +-
drivers/bus/vmbus/vmbus_common_uio.c | 23 +-
drivers/common/cnxk/roc_cpt.c | 8 +-
drivers/common/cnxk/roc_dev.c | 14 +-
drivers/common/cnxk/roc_irq.c | 107 +--
drivers/common/cnxk/roc_nix_inl_dev_irq.c | 8 +-
drivers/common/cnxk/roc_nix_irq.c | 36 +-
drivers/common/cnxk/roc_npa.c | 2 +-
drivers/common/cnxk/roc_platform.h | 49 +-
drivers/common/cnxk/roc_sso.c | 4 +-
drivers/common/cnxk/roc_tim.c | 4 +-
drivers/common/octeontx2/otx2_dev.c | 14 +-
drivers/common/octeontx2/otx2_irq.c | 117 ++--
.../octeontx2/otx2_cryptodev_hw_access.c | 4 +-
drivers/event/octeontx2/otx2_evdev_irq.c | 12 +-
drivers/mempool/octeontx2/otx2_mempool.c | 2 +-
drivers/net/atlantic/atl_ethdev.c | 20 +-
drivers/net/avp/avp_ethdev.c | 8 +-
drivers/net/axgbe/axgbe_ethdev.c | 12 +-
drivers/net/axgbe/axgbe_mdio.c | 6 +-
drivers/net/bnx2x/bnx2x_ethdev.c | 10 +-
drivers/net/bnxt/bnxt_ethdev.c | 33 +-
drivers/net/bnxt/bnxt_irq.c | 4 +-
drivers/net/dpaa/dpaa_ethdev.c | 48 +-
drivers/net/dpaa2/dpaa2_ethdev.c | 10 +-
drivers/net/e1000/em_ethdev.c | 23 +-
drivers/net/e1000/igb_ethdev.c | 79 +--
drivers/net/ena/ena_ethdev.c | 35 +-
drivers/net/enic/enic_main.c | 26 +-
drivers/net/failsafe/failsafe.c | 21 +-
drivers/net/failsafe/failsafe_intr.c | 43 +-
drivers/net/failsafe/failsafe_ops.c | 19 +-
drivers/net/failsafe/failsafe_private.h | 2 +-
drivers/net/fm10k/fm10k_ethdev.c | 32 +-
drivers/net/hinic/hinic_pmd_ethdev.c | 10 +-
drivers/net/hns3/hns3_ethdev.c | 57 +-
drivers/net/hns3/hns3_ethdev_vf.c | 64 +-
drivers/net/hns3/hns3_rxtx.c | 2 +-
drivers/net/i40e/i40e_ethdev.c | 53 +-
drivers/net/iavf/iavf_ethdev.c | 42 +-
drivers/net/iavf/iavf_vchnl.c | 4 +-
drivers/net/ice/ice_dcf.c | 10 +-
drivers/net/ice/ice_dcf_ethdev.c | 21 +-
drivers/net/ice/ice_ethdev.c | 49 +-
drivers/net/igc/igc_ethdev.c | 45 +-
drivers/net/ionic/ionic_ethdev.c | 17 +-
drivers/net/ixgbe/ixgbe_ethdev.c | 66 +-
drivers/net/memif/memif_socket.c | 108 ++-
drivers/net/memif/memif_socket.h | 4 +-
drivers/net/memif/rte_eth_memif.c | 56 +-
drivers/net/memif/rte_eth_memif.h | 2 +-
drivers/net/mlx4/mlx4.c | 19 +-
drivers/net/mlx4/mlx4.h | 2 +-
drivers/net/mlx4/mlx4_intr.c | 47 +-
drivers/net/mlx5/linux/mlx5_os.c | 55 +-
drivers/net/mlx5/linux/mlx5_socket.c | 25 +-
drivers/net/mlx5/mlx5.h | 6 +-
drivers/net/mlx5/mlx5_rxq.c | 43 +-
drivers/net/mlx5/mlx5_trigger.c | 4 +-
drivers/net/mlx5/mlx5_txpp.c | 25 +-
drivers/net/netvsc/hn_ethdev.c | 4 +-
drivers/net/nfp/nfp_common.c | 34 +-
drivers/net/nfp/nfp_ethdev.c | 13 +-
drivers/net/nfp/nfp_ethdev_vf.c | 13 +-
drivers/net/ngbe/ngbe_ethdev.c | 29 +-
drivers/net/octeontx2/otx2_ethdev_irq.c | 35 +-
drivers/net/qede/qede_ethdev.c | 16 +-
drivers/net/sfc/sfc_intr.c | 30 +-
drivers/net/tap/rte_eth_tap.c | 33 +-
drivers/net/tap/rte_eth_tap.h | 2 +-
drivers/net/tap/tap_intr.c | 33 +-
drivers/net/thunderx/nicvf_ethdev.c | 10 +
drivers/net/thunderx/nicvf_struct.h | 2 +-
drivers/net/txgbe/txgbe_ethdev.c | 38 +-
drivers/net/txgbe/txgbe_ethdev_vf.c | 33 +-
drivers/net/vhost/rte_eth_vhost.c | 80 ++-
drivers/net/virtio/virtio_ethdev.c | 21 +-
.../net/virtio/virtio_user/virtio_user_dev.c | 56 +-
drivers/net/vmxnet3/vmxnet3_ethdev.c | 43 +-
drivers/raw/ifpga/ifpga_rawdev.c | 62 +-
drivers/raw/ntb/ntb.c | 9 +-
.../regex/octeontx2/otx2_regexdev_hw_access.c | 4 +-
drivers/vdpa/ifc/ifcvf_vdpa.c | 5 +-
drivers/vdpa/mlx5/mlx5_vdpa.c | 8 +
drivers/vdpa/mlx5/mlx5_vdpa.h | 4 +-
drivers/vdpa/mlx5/mlx5_vdpa_event.c | 21 +-
drivers/vdpa/mlx5/mlx5_vdpa_virtq.c | 44 +-
lib/bbdev/rte_bbdev.c | 4 +-
lib/eal/common/eal_common_interrupts.c | 504 ++++++++++++++
lib/eal/common/eal_interrupts.h | 30 +
lib/eal/common/eal_private.h | 10 +
lib/eal/common/meson.build | 1 +
lib/eal/freebsd/eal.c | 1 +
lib/eal/freebsd/eal_alarm.c | 35 +-
lib/eal/freebsd/eal_interrupts.c | 85 ++-
lib/eal/include/meson.build | 2 +-
lib/eal/include/rte_eal_interrupts.h | 269 --------
lib/eal/include/rte_eal_trace.h | 10 +-
lib/eal/include/rte_epoll.h | 118 ++++
lib/eal/include/rte_interrupts.h | 651 +++++++++++++++++-
lib/eal/linux/eal.c | 1 +
lib/eal/linux/eal_alarm.c | 32 +-
lib/eal/linux/eal_dev.c | 57 +-
lib/eal/linux/eal_interrupts.c | 304 ++++----
lib/eal/version.map | 45 +-
lib/ethdev/ethdev_pci.h | 2 +-
lib/ethdev/rte_ethdev.c | 14 +-
132 files changed, 3453 insertions(+), 1748 deletions(-)
create mode 100644 lib/eal/common/eal_common_interrupts.c
create mode 100644 lib/eal/common/eal_interrupts.h
delete mode 100644 lib/eal/include/rte_eal_interrupts.h
create mode 100644 lib/eal/include/rte_epoll.h
--
2.23.0
^ permalink raw reply [flat|nested] 152+ messages in thread
* [dpdk-dev] [PATCH v7 1/9] interrupts: add allocator and accessors
2021-10-25 13:34 ` [dpdk-dev] [PATCH v7 0/9] " David Marchand
@ 2021-10-25 13:34 ` David Marchand
2021-10-25 13:34 ` [dpdk-dev] [PATCH v7 2/9] interrupts: remove direct access to interrupt handle David Marchand
` (7 subsequent siblings)
8 siblings, 0 replies; 152+ messages in thread
From: David Marchand @ 2021-10-25 13:34 UTC (permalink / raw)
To: hkalra, dev; +Cc: dmitry.kozliuk, rasland, thomas, Ray Kinsella
From: Harman Kalra <hkalra@marvell.com>
Prototype and implement get/set APIs for interrupt handle fields.
Users won't be able to access any of the interrupt handle fields
directly and should instead use these get/set APIs to access or
manipulate them.
The internal interrupt header, i.e. rte_eal_interrupts.h, is rearranged:
the APIs defined there are moved to rte_interrupts.h and epoll specific
definitions are moved to a new header, rte_epoll.h.
Later in the series rte_eal_interrupts.h will be removed.
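For example (an illustrative fragment, not taken from this patch;
intr_handle and vec stand for a previously allocated instance and a
vector index), callers check the accessor return values, with rte_errno
reporting the cause:

        int fd, efd;

        /* Accessors return a negative value and set rte_errno on failure. */
        fd = rte_intr_fd_get(intr_handle);
        if (fd < 0)
                return -rte_errno;

        /* Per-vector event fds are reached by index instead of via efds[]. */
        efd = rte_intr_efds_index_get(intr_handle, vec);
        if (efd < 0)
                return -rte_errno;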
Signed-off-by: Harman Kalra <hkalra@marvell.com>
Acked-by: Ray Kinsella <mdr@ashroe.eu>
Acked-by: Dmitry Kozlyuk <dmitry.kozliuk@gmail.com>
Signed-off-by: David Marchand <david.marchand@redhat.com>
---
Changes since v5:
- renamed RTE_INTR_INSTANCE_F_UNSHARED to RTE_INTR_INSTANCE_F_PRIVATE,
- used a single bit to mark instance as shared (default is private),
- removed rte_intr_instance_copy / rte_intr_instance_alloc_flag_get
with a single rte_intr_instance_dup helper,
- made rte_intr_vec_list_alloc alloc_flags-aware,
- exported all symbols for Windows,
---
MAINTAINERS | 1 +
lib/eal/common/eal_common_interrupts.c | 411 ++++++++++++++++
lib/eal/common/meson.build | 1 +
lib/eal/include/meson.build | 1 +
lib/eal/include/rte_eal_interrupts.h | 207 +-------
lib/eal/include/rte_epoll.h | 118 +++++
lib/eal/include/rte_interrupts.h | 627 +++++++++++++++++++++++++
lib/eal/version.map | 45 +-
8 files changed, 1201 insertions(+), 210 deletions(-)
create mode 100644 lib/eal/common/eal_common_interrupts.c
create mode 100644 lib/eal/include/rte_epoll.h
diff --git a/MAINTAINERS b/MAINTAINERS
index 587632dce0..097a57f7f6 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -211,6 +211,7 @@ F: app/test/test_memzone.c
Interrupt Subsystem
M: Harman Kalra <hkalra@marvell.com>
+F: lib/eal/include/rte_epoll.h
F: lib/eal/*/*interrupts.*
F: app/test/test_interrupts.c
diff --git a/lib/eal/common/eal_common_interrupts.c b/lib/eal/common/eal_common_interrupts.c
new file mode 100644
index 0000000000..d6e6654fbb
--- /dev/null
+++ b/lib/eal/common/eal_common_interrupts.c
@@ -0,0 +1,411 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2021 Marvell.
+ */
+
+#include <stdlib.h>
+#include <string.h>
+
+#include <rte_errno.h>
+#include <rte_interrupts.h>
+#include <rte_log.h>
+#include <rte_malloc.h>
+
+/* Macros to check for valid interrupt handle */
+#define CHECK_VALID_INTR_HANDLE(intr_handle) do { \
+ if (intr_handle == NULL) { \
+ RTE_LOG(ERR, EAL, "Interrupt instance unallocated\n"); \
+ rte_errno = EINVAL; \
+ goto fail; \
+ } \
+} while (0)
+
+#define RTE_INTR_INSTANCE_KNOWN_FLAGS (RTE_INTR_INSTANCE_F_PRIVATE \
+ | RTE_INTR_INSTANCE_F_SHARED \
+ )
+
+#define RTE_INTR_INSTANCE_USES_RTE_MEMORY(flags) \
+ !!(flags & RTE_INTR_INSTANCE_F_SHARED)
+
+struct rte_intr_handle *rte_intr_instance_alloc(uint32_t flags)
+{
+ struct rte_intr_handle *intr_handle;
+ bool uses_rte_memory;
+
+ /* Check the flag passed by user, it should be part of the
+ * defined flags.
+ */
+ if ((flags & ~RTE_INTR_INSTANCE_KNOWN_FLAGS) != 0) {
+ RTE_LOG(ERR, EAL, "Invalid alloc flag passed 0x%x\n", flags);
+ rte_errno = EINVAL;
+ return NULL;
+ }
+
+ uses_rte_memory = RTE_INTR_INSTANCE_USES_RTE_MEMORY(flags);
+ if (uses_rte_memory)
+ intr_handle = rte_zmalloc(NULL, sizeof(*intr_handle), 0);
+ else
+ intr_handle = calloc(1, sizeof(*intr_handle));
+ if (intr_handle == NULL) {
+ RTE_LOG(ERR, EAL, "Fail to allocate intr_handle\n");
+ rte_errno = ENOMEM;
+ return NULL;
+ }
+
+ intr_handle->alloc_flags = flags;
+ intr_handle->nb_intr = RTE_MAX_RXTX_INTR_VEC_ID;
+
+ return intr_handle;
+}
+
+struct rte_intr_handle *rte_intr_instance_dup(const struct rte_intr_handle *src)
+{
+ struct rte_intr_handle *intr_handle;
+
+ if (src == NULL) {
+ RTE_LOG(ERR, EAL, "Source interrupt instance unallocated\n");
+ rte_errno = EINVAL;
+ return NULL;
+ }
+
+ intr_handle = rte_intr_instance_alloc(src->alloc_flags);
+
+ intr_handle->fd = src->fd;
+ intr_handle->vfio_dev_fd = src->vfio_dev_fd;
+ intr_handle->type = src->type;
+ intr_handle->max_intr = src->max_intr;
+ intr_handle->nb_efd = src->nb_efd;
+ intr_handle->efd_counter_size = src->efd_counter_size;
+ memcpy(intr_handle->efds, src->efds, src->nb_intr);
+ memcpy(intr_handle->elist, src->elist, src->nb_intr);
+
+ return intr_handle;
+}
+
+void rte_intr_instance_free(struct rte_intr_handle *intr_handle)
+{
+ if (intr_handle == NULL)
+ return;
+ if (RTE_INTR_INSTANCE_USES_RTE_MEMORY(intr_handle->alloc_flags))
+ rte_free(intr_handle);
+ else
+ free(intr_handle);
+}
+
+int rte_intr_fd_set(struct rte_intr_handle *intr_handle, int fd)
+{
+ CHECK_VALID_INTR_HANDLE(intr_handle);
+
+ intr_handle->fd = fd;
+
+ return 0;
+fail:
+ return -rte_errno;
+}
+
+int rte_intr_fd_get(const struct rte_intr_handle *intr_handle)
+{
+ CHECK_VALID_INTR_HANDLE(intr_handle);
+
+ return intr_handle->fd;
+fail:
+ return -1;
+}
+
+int rte_intr_type_set(struct rte_intr_handle *intr_handle,
+ enum rte_intr_handle_type type)
+{
+ CHECK_VALID_INTR_HANDLE(intr_handle);
+
+ intr_handle->type = type;
+
+ return 0;
+fail:
+ return -rte_errno;
+}
+
+enum rte_intr_handle_type rte_intr_type_get(
+ const struct rte_intr_handle *intr_handle)
+{
+ CHECK_VALID_INTR_HANDLE(intr_handle);
+
+ return intr_handle->type;
+fail:
+ return RTE_INTR_HANDLE_UNKNOWN;
+}
+
+int rte_intr_dev_fd_set(struct rte_intr_handle *intr_handle, int fd)
+{
+ CHECK_VALID_INTR_HANDLE(intr_handle);
+
+ intr_handle->vfio_dev_fd = fd;
+
+ return 0;
+fail:
+ return -rte_errno;
+}
+
+int rte_intr_dev_fd_get(const struct rte_intr_handle *intr_handle)
+{
+ CHECK_VALID_INTR_HANDLE(intr_handle);
+
+ return intr_handle->vfio_dev_fd;
+fail:
+ return -1;
+}
+
+int rte_intr_max_intr_set(struct rte_intr_handle *intr_handle,
+ int max_intr)
+{
+ CHECK_VALID_INTR_HANDLE(intr_handle);
+
+ if (max_intr > intr_handle->nb_intr) {
+ RTE_LOG(ERR, EAL, "Maximum interrupt vector ID (%d) exceeds "
+ "the number of available events (%d)\n", max_intr,
+ intr_handle->nb_intr);
+ rte_errno = ERANGE;
+ goto fail;
+ }
+
+ intr_handle->max_intr = max_intr;
+
+ return 0;
+fail:
+ return -rte_errno;
+}
+
+int rte_intr_max_intr_get(const struct rte_intr_handle *intr_handle)
+{
+ CHECK_VALID_INTR_HANDLE(intr_handle);
+
+ return intr_handle->max_intr;
+fail:
+ return -rte_errno;
+}
+
+int rte_intr_nb_efd_set(struct rte_intr_handle *intr_handle, int nb_efd)
+{
+ CHECK_VALID_INTR_HANDLE(intr_handle);
+
+ intr_handle->nb_efd = nb_efd;
+
+ return 0;
+fail:
+ return -rte_errno;
+}
+
+int rte_intr_nb_efd_get(const struct rte_intr_handle *intr_handle)
+{
+ CHECK_VALID_INTR_HANDLE(intr_handle);
+
+ return intr_handle->nb_efd;
+fail:
+ return -rte_errno;
+}
+
+int rte_intr_nb_intr_get(const struct rte_intr_handle *intr_handle)
+{
+ CHECK_VALID_INTR_HANDLE(intr_handle);
+
+ return intr_handle->nb_intr;
+fail:
+ return -rte_errno;
+}
+
+int rte_intr_efd_counter_size_set(struct rte_intr_handle *intr_handle,
+ uint8_t efd_counter_size)
+{
+ CHECK_VALID_INTR_HANDLE(intr_handle);
+
+ intr_handle->efd_counter_size = efd_counter_size;
+
+ return 0;
+fail:
+ return -rte_errno;
+}
+
+int rte_intr_efd_counter_size_get(const struct rte_intr_handle *intr_handle)
+{
+ CHECK_VALID_INTR_HANDLE(intr_handle);
+
+ return intr_handle->efd_counter_size;
+fail:
+ return -rte_errno;
+}
+
+int rte_intr_efds_index_get(const struct rte_intr_handle *intr_handle,
+ int index)
+{
+ CHECK_VALID_INTR_HANDLE(intr_handle);
+
+ if (index >= intr_handle->nb_intr) {
+ RTE_LOG(ERR, EAL, "Invalid index %d, max limit %d\n", index,
+ intr_handle->nb_intr);
+ rte_errno = EINVAL;
+ goto fail;
+ }
+
+ return intr_handle->efds[index];
+fail:
+ return -rte_errno;
+}
+
+int rte_intr_efds_index_set(struct rte_intr_handle *intr_handle,
+ int index, int fd)
+{
+ CHECK_VALID_INTR_HANDLE(intr_handle);
+
+ if (index >= intr_handle->nb_intr) {
+ RTE_LOG(ERR, EAL, "Invalid size %d, max limit %d\n", index,
+ intr_handle->nb_intr);
+ rte_errno = ERANGE;
+ goto fail;
+ }
+
+ intr_handle->efds[index] = fd;
+
+ return 0;
+fail:
+ return -rte_errno;
+}
+
+struct rte_epoll_event *rte_intr_elist_index_get(
+ struct rte_intr_handle *intr_handle, int index)
+{
+ CHECK_VALID_INTR_HANDLE(intr_handle);
+
+ if (index >= intr_handle->nb_intr) {
+ RTE_LOG(ERR, EAL, "Invalid index %d, max limit %d\n", index,
+ intr_handle->nb_intr);
+ rte_errno = ERANGE;
+ goto fail;
+ }
+
+ return &intr_handle->elist[index];
+fail:
+ return NULL;
+}
+
+int rte_intr_elist_index_set(struct rte_intr_handle *intr_handle,
+ int index, struct rte_epoll_event elist)
+{
+ CHECK_VALID_INTR_HANDLE(intr_handle);
+
+ if (index >= intr_handle->nb_intr) {
+ RTE_LOG(ERR, EAL, "Invalid index %d, max limit %d\n", index,
+ intr_handle->nb_intr);
+ rte_errno = ERANGE;
+ goto fail;
+ }
+
+ intr_handle->elist[index] = elist;
+
+ return 0;
+fail:
+ return -rte_errno;
+}
+
+int rte_intr_vec_list_alloc(struct rte_intr_handle *intr_handle,
+ const char *name, int size)
+{
+ CHECK_VALID_INTR_HANDLE(intr_handle);
+
+ /* Vector list already allocated */
+ if (intr_handle->intr_vec != NULL)
+ return 0;
+
+ if (size > intr_handle->nb_intr) {
+ RTE_LOG(ERR, EAL, "Invalid size %d, max limit %d\n", size,
+ intr_handle->nb_intr);
+ rte_errno = ERANGE;
+ goto fail;
+ }
+
+ if (RTE_INTR_INSTANCE_USES_RTE_MEMORY(intr_handle->alloc_flags))
+ intr_handle->intr_vec = rte_zmalloc(name, size * sizeof(int), 0);
+ else
+ intr_handle->intr_vec = calloc(size, sizeof(int));
+ if (intr_handle->intr_vec == NULL) {
+ RTE_LOG(ERR, EAL, "Failed to allocate %d intr_vec\n", size);
+ rte_errno = ENOMEM;
+ goto fail;
+ }
+
+ intr_handle->vec_list_size = size;
+
+ return 0;
+fail:
+ return -rte_errno;
+}
+
+int rte_intr_vec_list_index_get(const struct rte_intr_handle *intr_handle,
+ int index)
+{
+ CHECK_VALID_INTR_HANDLE(intr_handle);
+
+ RTE_ASSERT(intr_handle->vec_list_size != 0);
+
+ if (index > intr_handle->vec_list_size) {
+ RTE_LOG(ERR, EAL, "Index %d greater than vec list size %d\n",
+ index, intr_handle->vec_list_size);
+ rte_errno = ERANGE;
+ goto fail;
+ }
+
+ return intr_handle->intr_vec[index];
+fail:
+ return -rte_errno;
+}
+
+int rte_intr_vec_list_index_set(struct rte_intr_handle *intr_handle,
+ int index, int vec)
+{
+ CHECK_VALID_INTR_HANDLE(intr_handle);
+
+ RTE_ASSERT(intr_handle->vec_list_size != 0);
+
+ if (index > intr_handle->vec_list_size) {
+ RTE_LOG(ERR, EAL, "Index %d greater than vec list size %d\n",
+ index, intr_handle->vec_list_size);
+ rte_errno = ERANGE;
+ goto fail;
+ }
+
+ intr_handle->intr_vec[index] = vec;
+
+ return 0;
+fail:
+ return -rte_errno;
+}
+
+void rte_intr_vec_list_free(struct rte_intr_handle *intr_handle)
+{
+ if (intr_handle == NULL)
+ return;
+ if (RTE_INTR_INSTANCE_USES_RTE_MEMORY(intr_handle->alloc_flags))
+ rte_free(intr_handle->intr_vec);
+ else
+ free(intr_handle->intr_vec);
+ intr_handle->intr_vec = NULL;
+ intr_handle->vec_list_size = 0;
+}
+
+void *rte_intr_instance_windows_handle_get(struct rte_intr_handle *intr_handle)
+{
+ CHECK_VALID_INTR_HANDLE(intr_handle);
+
+ return intr_handle->windows_handle;
+fail:
+ return NULL;
+}
+
+int rte_intr_instance_windows_handle_set(struct rte_intr_handle *intr_handle,
+ void *windows_handle)
+{
+ CHECK_VALID_INTR_HANDLE(intr_handle);
+
+ intr_handle->windows_handle = windows_handle;
+
+ return 0;
+fail:
+ return -rte_errno;
+}
diff --git a/lib/eal/common/meson.build b/lib/eal/common/meson.build
index 6d01b0f072..917758cc65 100644
--- a/lib/eal/common/meson.build
+++ b/lib/eal/common/meson.build
@@ -15,6 +15,7 @@ sources += files(
'eal_common_errno.c',
'eal_common_fbarray.c',
'eal_common_hexdump.c',
+ 'eal_common_interrupts.c',
'eal_common_launch.c',
'eal_common_lcore.c',
'eal_common_log.c',
diff --git a/lib/eal/include/meson.build b/lib/eal/include/meson.build
index 88a9eba12f..8e258607b8 100644
--- a/lib/eal/include/meson.build
+++ b/lib/eal/include/meson.build
@@ -19,6 +19,7 @@ headers += files(
'rte_eal_memconfig.h',
'rte_eal_trace.h',
'rte_errno.h',
+ 'rte_epoll.h',
'rte_fbarray.h',
'rte_hexdump.h',
'rte_hypervisor.h',
diff --git a/lib/eal/include/rte_eal_interrupts.h b/lib/eal/include/rte_eal_interrupts.h
index 00bcc19b6d..60bb60ca59 100644
--- a/lib/eal/include/rte_eal_interrupts.h
+++ b/lib/eal/include/rte_eal_interrupts.h
@@ -39,32 +39,6 @@ enum rte_intr_handle_type {
RTE_INTR_HANDLE_MAX /**< count of elements */
};
-#define RTE_INTR_EVENT_ADD 1UL
-#define RTE_INTR_EVENT_DEL 2UL
-
-typedef void (*rte_intr_event_cb_t)(int fd, void *arg);
-
-struct rte_epoll_data {
- uint32_t event; /**< event type */
- void *data; /**< User data */
- rte_intr_event_cb_t cb_fun; /**< IN: callback fun */
- void *cb_arg; /**< IN: callback arg */
-};
-
-enum {
- RTE_EPOLL_INVALID = 0,
- RTE_EPOLL_VALID,
- RTE_EPOLL_EXEC,
-};
-
-/** interrupt epoll event obj, taken by epoll_event.ptr */
-struct rte_epoll_event {
- uint32_t status; /**< OUT: event status */
- int fd; /**< OUT: event fd */
- int epfd; /**< OUT: epoll instance the ev associated with */
- struct rte_epoll_data epdata;
-};
-
/** Handle for interrupts. */
struct rte_intr_handle {
RTE_STD_C11
@@ -79,191 +53,20 @@ struct rte_intr_handle {
};
int fd; /**< interrupt event file descriptor */
};
- void *handle; /**< device driver handle (Windows) */
+ void *windows_handle; /**< device driver handle */
};
+ uint32_t alloc_flags; /**< flags passed at allocation */
enum rte_intr_handle_type type; /**< handle type */
uint32_t max_intr; /**< max interrupt requested */
uint32_t nb_efd; /**< number of available efd(event fd) */
uint8_t efd_counter_size; /**< size of efd counter, used for vdev */
+ uint16_t nb_intr;
+ /**< Max vector count, default RTE_MAX_RXTX_INTR_VEC_ID */
int efds[RTE_MAX_RXTX_INTR_VEC_ID]; /**< intr vectors/efds mapping */
struct rte_epoll_event elist[RTE_MAX_RXTX_INTR_VEC_ID];
/**< intr vector epoll event */
+ uint16_t vec_list_size;
int *intr_vec; /**< intr vector number array */
};
-#define RTE_EPOLL_PER_THREAD -1 /**< to hint using per thread epfd */
-
-/**
- * It waits for events on the epoll instance.
- * Retries if signal received.
- *
- * @param epfd
- * Epoll instance fd on which the caller wait for events.
- * @param events
- * Memory area contains the events that will be available for the caller.
- * @param maxevents
- * Up to maxevents are returned, must greater than zero.
- * @param timeout
- * Specifying a timeout of -1 causes a block indefinitely.
- * Specifying a timeout equal to zero cause to return immediately.
- * @return
- * - On success, returns the number of available event.
- * - On failure, a negative value.
- */
-int
-rte_epoll_wait(int epfd, struct rte_epoll_event *events,
- int maxevents, int timeout);
-
-/**
- * It waits for events on the epoll instance.
- * Does not retry if signal received.
- *
- * @param epfd
- * Epoll instance fd on which the caller wait for events.
- * @param events
- * Memory area contains the events that will be available for the caller.
- * @param maxevents
- * Up to maxevents are returned, must greater than zero.
- * @param timeout
- * Specifying a timeout of -1 causes a block indefinitely.
- * Specifying a timeout equal to zero cause to return immediately.
- * @return
- * - On success, returns the number of available event.
- * - On failure, a negative value.
- */
-__rte_experimental
-int
-rte_epoll_wait_interruptible(int epfd, struct rte_epoll_event *events,
- int maxevents, int timeout);
-
-/**
- * It performs control operations on epoll instance referred by the epfd.
- * It requests that the operation op be performed for the target fd.
- *
- * @param epfd
- * Epoll instance fd on which the caller perform control operations.
- * @param op
- * The operation be performed for the target fd.
- * @param fd
- * The target fd on which the control ops perform.
- * @param event
- * Describes the object linked to the fd.
- * Note: The caller must take care the object deletion after CTL_DEL.
- * @return
- * - On success, zero.
- * - On failure, a negative value.
- */
-int
-rte_epoll_ctl(int epfd, int op, int fd,
- struct rte_epoll_event *event);
-
-/**
- * The function returns the per thread epoll instance.
- *
- * @return
- * epfd the epoll instance referred to.
- */
-int
-rte_intr_tls_epfd(void);
-
-/**
- * @param intr_handle
- * Pointer to the interrupt handle.
- * @param epfd
- * Epoll instance fd which the intr vector associated to.
- * @param op
- * The operation be performed for the vector.
- * Operation type of {ADD, DEL}.
- * @param vec
- * RX intr vector number added to the epoll instance wait list.
- * @param data
- * User raw data.
- * @return
- * - On success, zero.
- * - On failure, a negative value.
- */
-int
-rte_intr_rx_ctl(struct rte_intr_handle *intr_handle,
- int epfd, int op, unsigned int vec, void *data);
-
-/**
- * It deletes registered eventfds.
- *
- * @param intr_handle
- * Pointer to the interrupt handle.
- */
-void
-rte_intr_free_epoll_fd(struct rte_intr_handle *intr_handle);
-
-/**
- * It enables the packet I/O interrupt event if it's necessary.
- * It creates event fd for each interrupt vector when MSIX is used,
- * otherwise it multiplexes a single event fd.
- *
- * @param intr_handle
- * Pointer to the interrupt handle.
- * @param nb_efd
- * Number of interrupt vector trying to enable.
- * The value 0 is not allowed.
- * @return
- * - On success, zero.
- * - On failure, a negative value.
- */
-int
-rte_intr_efd_enable(struct rte_intr_handle *intr_handle, uint32_t nb_efd);
-
-/**
- * It disables the packet I/O interrupt event.
- * It deletes registered eventfds and closes the open fds.
- *
- * @param intr_handle
- * Pointer to the interrupt handle.
- */
-void
-rte_intr_efd_disable(struct rte_intr_handle *intr_handle);
-
-/**
- * The packet I/O interrupt on datapath is enabled or not.
- *
- * @param intr_handle
- * Pointer to the interrupt handle.
- */
-int
-rte_intr_dp_is_en(struct rte_intr_handle *intr_handle);
-
-/**
- * The interrupt handle instance allows other causes or not.
- * Other causes stand for any none packet I/O interrupts.
- *
- * @param intr_handle
- * Pointer to the interrupt handle.
- */
-int
-rte_intr_allow_others(struct rte_intr_handle *intr_handle);
-
-/**
- * The multiple interrupt vector capability of interrupt handle instance.
- * It returns zero if no multiple interrupt vector support.
- *
- * @param intr_handle
- * Pointer to the interrupt handle.
- */
-int
-rte_intr_cap_multiple(struct rte_intr_handle *intr_handle);
-
-/**
- * @warning
- * @b EXPERIMENTAL: this API may change without prior notice
- *
- * @internal
- * Check if currently executing in interrupt context
- *
- * @return
- * - non zero in case of interrupt context
- * - zero in case of process context
- */
-__rte_experimental
-int
-rte_thread_is_intr(void);
-
#endif /* _RTE_EAL_INTERRUPTS_H_ */
diff --git a/lib/eal/include/rte_epoll.h b/lib/eal/include/rte_epoll.h
new file mode 100644
index 0000000000..56b7b6bad6
--- /dev/null
+++ b/lib/eal/include/rte_epoll.h
@@ -0,0 +1,118 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2021 Marvell International Ltd.
+ */
+
+#ifndef __RTE_EPOLL_H__
+#define __RTE_EPOLL_H__
+
+/**
+ * @file
+ * The rte_epoll interface provides functions to add and delete events,
+ * and to wait/poll for an event.
+ */
+
+#include <stdint.h>
+
+#include <rte_compat.h>
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+#define RTE_INTR_EVENT_ADD 1UL
+#define RTE_INTR_EVENT_DEL 2UL
+
+typedef void (*rte_intr_event_cb_t)(int fd, void *arg);
+
+struct rte_epoll_data {
+ uint32_t event; /**< event type */
+ void *data; /**< User data */
+ rte_intr_event_cb_t cb_fun; /**< IN: callback fun */
+ void *cb_arg; /**< IN: callback arg */
+};
+
+enum {
+ RTE_EPOLL_INVALID = 0,
+ RTE_EPOLL_VALID,
+ RTE_EPOLL_EXEC,
+};
+
+/** interrupt epoll event obj, taken by epoll_event.ptr */
+struct rte_epoll_event {
+ uint32_t status; /**< OUT: event status */
+ int fd; /**< OUT: event fd */
+ int epfd; /**< OUT: epoll instance the ev associated with */
+ struct rte_epoll_data epdata;
+};
+
+#define RTE_EPOLL_PER_THREAD -1 /**< to hint using per thread epfd */
+
+/**
+ * It waits for events on the epoll instance.
+ * Retries if signal received.
+ *
+ * @param epfd
+ * Epoll instance fd on which the caller wait for events.
+ * @param events
+ * Memory area contains the events that will be available for the caller.
+ * @param maxevents
+ * Up to maxevents are returned, must greater than zero.
+ * @param timeout
+ * Specifying a timeout of -1 causes a block indefinitely.
+ * Specifying a timeout equal to zero cause to return immediately.
+ * @return
+ * - On success, returns the number of available event.
+ * - On failure, a negative value.
+ */
+int
+rte_epoll_wait(int epfd, struct rte_epoll_event *events,
+ int maxevents, int timeout);
+
+/**
+ * It waits for events on the epoll instance.
+ * Does not retry if signal received.
+ *
+ * @param epfd
+ * Epoll instance fd on which the caller wait for events.
+ * @param events
+ * Memory area contains the events that will be available for the caller.
+ * @param maxevents
+ * Up to maxevents are returned, must greater than zero.
+ * @param timeout
+ * Specifying a timeout of -1 causes a block indefinitely.
+ * Specifying a timeout equal to zero cause to return immediately.
+ * @return
+ * - On success, returns the number of available event.
+ * - On failure, a negative value.
+ */
+__rte_experimental
+int
+rte_epoll_wait_interruptible(int epfd, struct rte_epoll_event *events,
+ int maxevents, int timeout);
+
+/**
+ * It performs control operations on epoll instance referred by the epfd.
+ * It requests that the operation op be performed for the target fd.
+ *
+ * @param epfd
+ * Epoll instance fd on which the caller perform control operations.
+ * @param op
+ * The operation be performed for the target fd.
+ * @param fd
+ * The target fd on which the control ops perform.
+ * @param event
+ * Describes the object linked to the fd.
+ * Note: The caller must take care the object deletion after CTL_DEL.
+ * @return
+ * - On success, zero.
+ * - On failure, a negative value.
+ */
+int
+rte_epoll_ctl(int epfd, int op, int fd,
+ struct rte_epoll_event *event);
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif /* __RTE_EPOLL_H__ */
diff --git a/lib/eal/include/rte_interrupts.h b/lib/eal/include/rte_interrupts.h
index cc3bf45d8c..a515a8c073 100644
--- a/lib/eal/include/rte_interrupts.h
+++ b/lib/eal/include/rte_interrupts.h
@@ -5,8 +5,12 @@
#ifndef _RTE_INTERRUPTS_H_
#define _RTE_INTERRUPTS_H_
+#include <stdbool.h>
+
+#include <rte_bitops.h>
#include <rte_common.h>
#include <rte_compat.h>
+#include <rte_epoll.h>
/**
* @file
@@ -22,6 +26,15 @@ extern "C" {
/** Interrupt handle */
struct rte_intr_handle;
+/** Interrupt instance allocation flags
+ * @see rte_intr_instance_alloc
+ */
+
+/** Interrupt instance will not be shared between primary and secondary processes. */
+#define RTE_INTR_INSTANCE_F_PRIVATE UINT32_C(0)
+/** Interrupt instance will be shared between primary and secondary processes. */
+#define RTE_INTR_INSTANCE_F_SHARED RTE_BIT32(0)
+
/** Function to be registered for the specific interrupt */
typedef void (*rte_intr_callback_fn)(void *cb_arg);
@@ -163,6 +176,620 @@ int rte_intr_disable(const struct rte_intr_handle *intr_handle);
__rte_experimental
int rte_intr_ack(const struct rte_intr_handle *intr_handle);
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Check if currently executing in interrupt context
+ *
+ * @return
+ * - non zero in case of interrupt context
+ * - zero in case of process context
+ */
+__rte_experimental
+int
+rte_thread_is_intr(void);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * It allocates memory for interrupt instance. API takes flag as an argument
+ * which define from where memory should be allocated i.e. using DPDK memory
+ * management library APIs or normal heap allocation.
+ * Default memory allocation for event fds and event list array is done which
+ * can be realloced later based on size of MSIX interrupts supported by a PCI
+ * device.
+ *
+ * This function should be called from application or driver, before calling
+ * any of the interrupt APIs.
+ *
+ * @param flags
+ * See RTE_INTR_INSTANCE_F_* flags definitions.
+ *
+ * @return
+ * - On success, address of interrupt handle.
+ * - On failure, NULL.
+ */
+__rte_experimental
+struct rte_intr_handle *
+rte_intr_instance_alloc(uint32_t flags);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * This API is used to free the memory allocated for interrupt handle
+ * resources.
+ *
+ * @param intr_handle
+ * Interrupt handle address.
+ *
+ */
+__rte_experimental
+void
+rte_intr_instance_free(struct rte_intr_handle *intr_handle);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * This API is used to set the fd field of interrupt handle with user provided
+ * file descriptor.
+ *
+ * @param intr_handle
+ * pointer to the interrupt handle.
+ * @param fd
+ * file descriptor value provided by user.
+ *
+ * @return
+ * - On success, zero.
+ * - On failure, a negative value and rte_errno is set.
+ */
+__rte_experimental
+int
+rte_intr_fd_set(struct rte_intr_handle *intr_handle, int fd);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Returns the fd field of the given interrupt handle instance.
+ *
+ * @param intr_handle
+ * pointer to the interrupt handle.
+ *
+ * @return
+ * - On success, fd field.
+ * - On failure, a negative value.
+ */
+__rte_experimental
+int
+rte_intr_fd_get(const struct rte_intr_handle *intr_handle);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * This API is used to set the type field of interrupt handle with user provided
+ * interrupt type.
+ *
+ * @param intr_handle
+ * pointer to the interrupt handle.
+ * @param type
+ * interrupt type
+ *
+ * @return
+ * - On success, zero.
+ * - On failure, a negative value and rte_errno is set.
+ */
+__rte_experimental
+int
+rte_intr_type_set(struct rte_intr_handle *intr_handle,
+ enum rte_intr_handle_type type);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Returns the type field of the given interrupt handle instance.
+ *
+ * @param intr_handle
+ * pointer to the interrupt handle.
+ *
+ * @return
+ * - On success, interrupt type
+ * - On failure, RTE_INTR_HANDLE_UNKNOWN.
+ */
+__rte_experimental
+enum rte_intr_handle_type
+rte_intr_type_get(const struct rte_intr_handle *intr_handle);
+
+/**
+ * @internal
+ * The function returns the per thread epoll instance.
+ *
+ * @return
+ * epfd the epoll instance referred to.
+ */
+__rte_internal
+int
+rte_intr_tls_epfd(void);
+
+/**
+ * @internal
+ * @param intr_handle
+ * Pointer to the interrupt handle.
+ * @param epfd
+ * Epoll instance fd which the intr vector associated to.
+ * @param op
+ * The operation be performed for the vector.
+ * Operation type of {ADD, DEL}.
+ * @param vec
+ * RX intr vector number added to the epoll instance wait list.
+ * @param data
+ * User raw data.
+ * @return
+ * - On success, zero.
+ * - On failure, a negative value.
+ */
+__rte_internal
+int
+rte_intr_rx_ctl(struct rte_intr_handle *intr_handle,
+ int epfd, int op, unsigned int vec, void *data);
+
+/**
+ * @internal
+ * It deletes registered eventfds.
+ *
+ * @param intr_handle
+ * Pointer to the interrupt handle.
+ */
+__rte_internal
+void
+rte_intr_free_epoll_fd(struct rte_intr_handle *intr_handle);
+
+/**
+ * @internal
+ * It enables the packet I/O interrupt event if it's necessary.
+ * It creates event fd for each interrupt vector when MSIX is used,
+ * otherwise it multiplexes a single event fd.
+ *
+ * @param intr_handle
+ * Pointer to the interrupt handle.
+ * @param nb_efd
+ * Number of interrupt vector trying to enable.
+ * The value 0 is not allowed.
+ * @return
+ * - On success, zero.
+ * - On failure, a negative value.
+ */
+__rte_internal
+int
+rte_intr_efd_enable(struct rte_intr_handle *intr_handle, uint32_t nb_efd);
+
+/**
+ * @internal
+ * It disables the packet I/O interrupt event.
+ * It deletes registered eventfds and closes the open fds.
+ *
+ * @param intr_handle
+ * Pointer to the interrupt handle.
+ */
+__rte_internal
+void
+rte_intr_efd_disable(struct rte_intr_handle *intr_handle);
+
+/**
+ * @internal
+ * The packet I/O interrupt on datapath is enabled or not.
+ *
+ * @param intr_handle
+ * Pointer to the interrupt handle.
+ */
+__rte_internal
+int
+rte_intr_dp_is_en(struct rte_intr_handle *intr_handle);
+
+/**
+ * @internal
+ * The interrupt handle instance allows other causes or not.
+ * Other causes stand for any none packet I/O interrupts.
+ *
+ * @param intr_handle
+ * Pointer to the interrupt handle.
+ */
+__rte_internal
+int
+rte_intr_allow_others(struct rte_intr_handle *intr_handle);
+
+/**
+ * @internal
+ * The multiple interrupt vector capability of interrupt handle instance.
+ * It returns zero if no multiple interrupt vector support.
+ *
+ * @param intr_handle
+ * Pointer to the interrupt handle.
+ */
+__rte_internal
+int
+rte_intr_cap_multiple(struct rte_intr_handle *intr_handle);
+
+/**
+ * @internal
+ * Creates a clone of src by allocating a new handle and copying src content.
+ *
+ * @param src
+ * Source interrupt handle to be cloned.
+ *
+ * @return
+ * - On success, address of interrupt handle.
+ * - On failure, NULL.
+ */
+__rte_internal
+struct rte_intr_handle *
+rte_intr_instance_dup(const struct rte_intr_handle *src);
+
+/**
+ * @internal
+ * This API is used to set the device fd field of interrupt handle with user
+ * provided dev fd. Device fd corresponds to VFIO device fd or UIO config fd.
+ *
+ * @param intr_handle
+ * pointer to the interrupt handle.
+ * @param fd
+ * interrupt type
+ *
+ * @return
+ * - On success, zero.
+ * - On failure, a negative value and rte_errno is set.
+ */
+__rte_internal
+int
+rte_intr_dev_fd_set(struct rte_intr_handle *intr_handle, int fd);
+
+/**
+ * @internal
+ * Returns the device fd field of the given interrupt handle instance.
+ *
+ * @param intr_handle
+ * pointer to the interrupt handle.
+ *
+ * @return
+ * - On success, dev fd.
+ * - On failure, a negative value and rte_errno is set.
+ */
+__rte_internal
+int
+rte_intr_dev_fd_get(const struct rte_intr_handle *intr_handle);
+
+/**
+ * @internal
+ * This API is used to set the max intr field of interrupt handle with user
+ * provided max intr value.
+ *
+ * @param intr_handle
+ * pointer to the interrupt handle.
+ * @param max_intr
+ * interrupt type
+ *
+ * @return
+ * - On success, zero.
+ * - On failure, a negative value and rte_errno is set.
+ */
+__rte_internal
+int
+rte_intr_max_intr_set(struct rte_intr_handle *intr_handle, int max_intr);
+
+/**
+ * @internal
+ * Returns the max intr field of the given interrupt handle instance.
+ *
+ * @param intr_handle
+ * pointer to the interrupt handle.
+ *
+ * @return
+ * - On success, max intr.
+ * - On failure, a negative value and rte_errno is set.
+ */
+__rte_internal
+int
+rte_intr_max_intr_get(const struct rte_intr_handle *intr_handle);
+
+/**
+ * @internal
+ * This API is used to set the number of event fd field of interrupt handle
+ * with user provided available event file descriptor value.
+ *
+ * @param intr_handle
+ * pointer to the interrupt handle.
+ * @param nb_efd
+ * Available event fd
+ *
+ * @return
+ * - On success, zero.
+ * - On failure, a negative value and rte_errno is set.
+ */
+__rte_internal
+int
+rte_intr_nb_efd_set(struct rte_intr_handle *intr_handle, int nb_efd);
+
+/**
+ * @internal
+ * Returns the number of available event fd field of the given interrupt handle
+ * instance.
+ *
+ * @param intr_handle
+ * pointer to the interrupt handle.
+ *
+ * @return
+ * - On success, nb_efd
+ * - On failure, a negative value and rte_errno is set.
+ */
+__rte_internal
+int
+rte_intr_nb_efd_get(const struct rte_intr_handle *intr_handle);
+
+/**
+ * @internal
+ * Returns the number of interrupt vector field of the given interrupt handle
+ * instance. This field is to configured on device probe time, and based on
+ * this value efds and elist arrays are dynamically allocated. By default
+ * this value is set to RTE_MAX_RXTX_INTR_VEC_ID.
+ * For eg. in case of PCI device, its msix size is queried and efds/elist
+ * arrays are allocated accordingly.
+ *
+ * @param intr_handle
+ * pointer to the interrupt handle.
+ *
+ * @return
+ * - On success, nb_intr
+ * - On failure, a negative value and rte_errno is set.
+ */
+__rte_internal
+int
+rte_intr_nb_intr_get(const struct rte_intr_handle *intr_handle);
+
+/**
+ * @internal
+ * This API is used to set the event fd counter size field of interrupt handle
+ * with user provided efd counter size.
+ *
+ * @param intr_handle
+ * pointer to the interrupt handle.
+ * @param efd_counter_size
+ * size of efd counter.
+ *
+ * @return
+ * - On success, zero.
+ * - On failure, a negative value and rte_errno is set.
+ */
+__rte_internal
+int
+rte_intr_efd_counter_size_set(struct rte_intr_handle *intr_handle,
+ uint8_t efd_counter_size);
+
+/**
+ * @internal
+ * Returns the event fd counter size field of the given interrupt handle
+ * instance.
+ *
+ * @param intr_handle
+ * pointer to the interrupt handle.
+ *
+ * @return
+ * - On success, efd_counter_size
+ * - On failure, a negative value and rte_errno is set.
+ */
+__rte_internal
+int
+rte_intr_efd_counter_size_get(const struct rte_intr_handle *intr_handle);
+
+/**
+ * @internal
+ * This API is used to set the event fd array index with the given fd.
+ *
+ * @param intr_handle
+ * pointer to the interrupt handle.
+ * @param index
+ * efds array index to be set
+ * @param fd
+ * event fd
+ *
+ * @return
+ * - On success, zero.
+ * - On failure, a negative value and rte_errno is set.
+ */
+__rte_internal
+int
+rte_intr_efds_index_set(struct rte_intr_handle *intr_handle, int index, int fd);
+
+/**
+ * @internal
+ * Returns the fd value of event fds array at a given index.
+ *
+ * @param intr_handle
+ * pointer to the interrupt handle.
+ * @param index
+ * efds array index to be returned
+ *
+ * @return
+ * - On success, fd
+ * - On failure, a negative value and rte_errno is set.
+ */
+__rte_internal
+int
+rte_intr_efds_index_get(const struct rte_intr_handle *intr_handle, int index);
+
+/**
+ * @internal
+ * This API is used to set the epoll event object array index with the given
+ * elist instance.
+ *
+ * @param intr_handle
+ * pointer to the interrupt handle.
+ * @param index
+ * elist array index to be set
+ * @param elist
+ * epoll event instance of struct rte_epoll_event
+ *
+ * @return
+ * - On success, zero.
+ * - On failure, a negative value and rte_errno is set.
+ */
+__rte_internal
+int
+rte_intr_elist_index_set(struct rte_intr_handle *intr_handle, int index,
+ struct rte_epoll_event elist);
+
+/**
+ * @internal
+ * Returns the address of epoll event instance from elist array at a given
+ * index.
+ *
+ * @param intr_handle
+ * pointer to the interrupt handle.
+ * @param index
+ * elist array index to be returned
+ *
+ * @return
+ * - On success, elist
+ * - On failure, a negative value and rte_errno is set.
+ */
+__rte_internal
+struct rte_epoll_event *
+rte_intr_elist_index_get(struct rte_intr_handle *intr_handle, int index);
+
+/**
+ * @internal
+ * Allocates the memory of interrupt vector list array, with size defining the
+ * number of elements required in the array.
+ *
+ * @param intr_handle
+ * pointer to the interrupt handle.
+ * @param name
+ * Name assigned to the allocation, or NULL.
+ * @param size
+ * Number of element required in the array.
+ *
+ * @return
+ * - On success, zero
+ * - On failure, a negative value and rte_errno is set.
+ */
+__rte_internal
+int
+rte_intr_vec_list_alloc(struct rte_intr_handle *intr_handle, const char *name,
+ int size);
+
+/**
+ * @internal
+ * Sets the vector value at given index of interrupt vector list field of given
+ * interrupt handle.
+ *
+ * @param intr_handle
+ * pointer to the interrupt handle.
+ * @param index
+ * intr_vec array index to be set
+ * @param vec
+ * Interrupt vector value.
+ *
+ * @return
+ * - On success, zero
+ * - On failure, a negative value and rte_errno is set.
+ */
+__rte_internal
+int
+rte_intr_vec_list_index_set(struct rte_intr_handle *intr_handle, int index,
+ int vec);
+
+/**
+ * @internal
+ * Returns the vector value at the given index of interrupt vector list array.
+ *
+ * @param intr_handle
+ * pointer to the interrupt handle.
+ * @param index
+ * intr_vec array index to be returned
+ *
+ * @return
+ * - On success, interrupt vector
+ * - On failure, a negative value and rte_errno is set.
+ */
+__rte_internal
+int
+rte_intr_vec_list_index_get(const struct rte_intr_handle *intr_handle,
+ int index);
+
+/**
+ * @internal
+ * Frees the memory allocated for interrupt vector list array.
+ *
+ * @param intr_handle
+ * pointer to the interrupt handle.
+ *
+ * @return
+ * - On success, zero
+ * - On failure, a negative value and rte_errno is set.
+ */
+__rte_internal
+void
+rte_intr_vec_list_free(struct rte_intr_handle *intr_handle);
+
+/**
+ * @internal
+ * Reallocates the size efds and elist array based on size provided by user.
+ * By default efds and elist array are allocated with default size
+ * RTE_MAX_RXTX_INTR_VEC_ID on interrupt handle array creation. Later on device
+ * probe, device may have capability of more interrupts than
+ * RTE_MAX_RXTX_INTR_VEC_ID. Using this API, PMDs can reallocate the arrays as
+ * per the max interrupts capability of device.
+ *
+ * @param intr_handle
+ * pointer to the interrupt handle.
+ * @param size
+ * efds and elist array size.
+ *
+ * @return
+ * - On success, zero
+ * - On failure, a negative value and rte_errno is set.
+ */
+__rte_internal
+int
+rte_intr_event_list_update(struct rte_intr_handle *intr_handle, int size);
+
+/**
+ * @internal
+ * This API returns the Windows handle of the given interrupt instance.
+ *
+ * @param intr_handle
+ * pointer to the interrupt handle.
+ *
+ * @return
+ * - On success, Windows handle.
+ * - On failure, NULL.
+ */
+__rte_internal
+void *
+rte_intr_instance_windows_handle_get(struct rte_intr_handle *intr_handle);
+
+/**
+ * @internal
+ * This API set the Windows handle for the given interrupt instance.
+ *
+ * @param intr_handle
+ * pointer to the interrupt handle.
+ * @param windows_handle
+ * Windows handle to be set.
+ *
+ * @return
+ * - On success, zero
+ * - On failure, a negative value and rte_errno is set.
+ */
+__rte_internal
+int
+rte_intr_instance_windows_handle_set(struct rte_intr_handle *intr_handle,
+ void *windows_handle);
+
#ifdef __cplusplus
}
#endif
diff --git a/lib/eal/version.map b/lib/eal/version.map
index 38f7de83e1..9d43655b66 100644
--- a/lib/eal/version.map
+++ b/lib/eal/version.map
@@ -109,18 +109,10 @@ DPDK_22 {
rte_hexdump;
rte_hypervisor_get;
rte_hypervisor_get_name; # WINDOWS_NO_EXPORT
- rte_intr_allow_others;
rte_intr_callback_register;
rte_intr_callback_unregister;
- rte_intr_cap_multiple;
rte_intr_disable;
- rte_intr_dp_is_en;
- rte_intr_efd_disable;
- rte_intr_efd_enable;
rte_intr_enable;
- rte_intr_free_epoll_fd;
- rte_intr_rx_ctl;
- rte_intr_tls_epfd;
rte_keepalive_create; # WINDOWS_NO_EXPORT
rte_keepalive_dispatch_pings; # WINDOWS_NO_EXPORT
rte_keepalive_mark_alive; # WINDOWS_NO_EXPORT
@@ -420,12 +412,49 @@ EXPERIMENTAL {
# added in 21.08
rte_power_monitor_multi; # WINDOWS_NO_EXPORT
+
+ # added in 21.11
+ rte_intr_fd_get;
+ rte_intr_fd_set;
+ rte_intr_instance_alloc;
+ rte_intr_instance_free;
+ rte_intr_type_get;
+ rte_intr_type_set;
};
INTERNAL {
global:
rte_firmware_read;
+ rte_intr_allow_others;
+ rte_intr_cap_multiple;
+ rte_intr_dev_fd_get;
+ rte_intr_dev_fd_set;
+ rte_intr_dp_is_en;
+ rte_intr_efd_counter_size_set;
+ rte_intr_efd_counter_size_get;
+ rte_intr_efd_disable;
+ rte_intr_efd_enable;
+ rte_intr_efds_index_get;
+ rte_intr_efds_index_set;
+ rte_intr_elist_index_get;
+ rte_intr_elist_index_set;
+ rte_intr_event_list_update;
+ rte_intr_free_epoll_fd;
+ rte_intr_instance_dup;
+ rte_intr_instance_windows_handle_get;
+ rte_intr_instance_windows_handle_set;
+ rte_intr_max_intr_get;
+ rte_intr_max_intr_set;
+ rte_intr_nb_efd_get;
+ rte_intr_nb_efd_set;
+ rte_intr_nb_intr_get;
+ rte_intr_rx_ctl;
+ rte_intr_tls_epfd;
+ rte_intr_vec_list_alloc;
+ rte_intr_vec_list_free;
+ rte_intr_vec_list_index_get;
+ rte_intr_vec_list_index_set;
rte_mem_lock;
rte_mem_map;
rte_mem_page_size;
--
2.23.0
^ permalink raw reply [flat|nested] 152+ messages in thread
* [dpdk-dev] [PATCH v7 2/9] interrupts: remove direct access to interrupt handle
2021-10-25 13:34 ` [dpdk-dev] [PATCH v7 0/9] " David Marchand
2021-10-25 13:34 ` [dpdk-dev] [PATCH v7 1/9] interrupts: add allocator and accessors David Marchand
@ 2021-10-25 13:34 ` David Marchand
2021-10-25 13:34 ` [dpdk-dev] [PATCH v7 3/9] test/interrupts: " David Marchand
` (6 subsequent siblings)
8 siblings, 0 replies; 152+ messages in thread
From: David Marchand @ 2021-10-25 13:34 UTC (permalink / raw)
To: hkalra, dev; +Cc: dmitry.kozliuk, rasland, thomas, Bruce Richardson
From: Harman Kalra <hkalra@marvell.com>
Change the interrupt framework to use the interrupt handle APIs to
get/set any field.
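The conversion is mechanical; for instance, a comparison that previously
dereferenced the embedded handle now goes through the accessor on a
pointer to the opaque handle (a representative fragment, see the hunks
below):

        /* before: embedded struct, direct field access */
        if (src->intr_handle.fd == intr_handle->fd)
                break;

        /* after: pointer to an opaque handle, accessor call */
        if (rte_intr_fd_get(src->intr_handle) == rte_intr_fd_get(intr_handle))
                break;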
Signed-off-by: Harman Kalra <hkalra@marvell.com>
Signed-off-by: David Marchand <david.marchand@redhat.com>
---
Changes since v6:
- fixed compilation on FreeBSD,
Changes since v5:
- used new helper rte_intr_instance_dup,
---
lib/eal/freebsd/eal_interrupts.c | 85 +++++----
lib/eal/linux/eal_interrupts.c | 304 +++++++++++++++++--------------
2 files changed, 219 insertions(+), 170 deletions(-)
diff --git a/lib/eal/freebsd/eal_interrupts.c b/lib/eal/freebsd/eal_interrupts.c
index 86810845fe..10aa91cc09 100644
--- a/lib/eal/freebsd/eal_interrupts.c
+++ b/lib/eal/freebsd/eal_interrupts.c
@@ -40,7 +40,7 @@ struct rte_intr_callback {
struct rte_intr_source {
TAILQ_ENTRY(rte_intr_source) next;
- struct rte_intr_handle intr_handle; /**< interrupt handle */
+ struct rte_intr_handle *intr_handle; /**< interrupt handle */
struct rte_intr_cb_list callbacks; /**< user callbacks */
uint32_t active;
};
@@ -60,7 +60,7 @@ static int
intr_source_to_kevent(const struct rte_intr_handle *ih, struct kevent *ke)
{
/* alarm callbacks are special case */
- if (ih->type == RTE_INTR_HANDLE_ALARM) {
+ if (rte_intr_type_get(ih) == RTE_INTR_HANDLE_ALARM) {
uint64_t timeout_ns;
/* get soonest alarm timeout */
@@ -75,7 +75,7 @@ intr_source_to_kevent(const struct rte_intr_handle *ih, struct kevent *ke)
} else {
ke->filter = EVFILT_READ;
}
- ke->ident = ih->fd;
+ ke->ident = rte_intr_fd_get(ih);
return 0;
}
@@ -89,7 +89,7 @@ rte_intr_callback_register(const struct rte_intr_handle *intr_handle,
int ret = 0, add_event = 0;
/* first do parameter checking */
- if (intr_handle == NULL || intr_handle->fd < 0 || cb == NULL) {
+ if (rte_intr_fd_get(intr_handle) < 0 || cb == NULL) {
RTE_LOG(ERR, EAL,
"Registering with invalid input parameter\n");
return -EINVAL;
@@ -103,7 +103,7 @@ rte_intr_callback_register(const struct rte_intr_handle *intr_handle,
/* find the source for this intr_handle */
TAILQ_FOREACH(src, &intr_sources, next) {
- if (src->intr_handle.fd == intr_handle->fd)
+ if (rte_intr_fd_get(src->intr_handle) == rte_intr_fd_get(intr_handle))
break;
}
@@ -112,8 +112,9 @@ rte_intr_callback_register(const struct rte_intr_handle *intr_handle,
* thing on the list should be eal_alarm_callback() and we may
* be called just to reset the timer.
*/
- if (src != NULL && src->intr_handle.type == RTE_INTR_HANDLE_ALARM &&
- !TAILQ_EMPTY(&src->callbacks)) {
+ if (src != NULL &&
+ rte_intr_type_get(src->intr_handle) == RTE_INTR_HANDLE_ALARM &&
+ !TAILQ_EMPTY(&src->callbacks)) {
callback = NULL;
} else {
/* allocate a new interrupt callback entity */
@@ -135,7 +136,14 @@ rte_intr_callback_register(const struct rte_intr_handle *intr_handle,
ret = -ENOMEM;
goto fail;
} else {
- src->intr_handle = *intr_handle;
+ src->intr_handle = rte_intr_instance_dup(intr_handle);
+ if (src->intr_handle == NULL) {
+ RTE_LOG(ERR, EAL, "Can not create intr instance\n");
+ ret = -ENOMEM;
+ free(src);
+ src = NULL;
+ goto fail;
+ }
TAILQ_INIT(&src->callbacks);
TAILQ_INSERT_TAIL(&intr_sources, src, next);
}
@@ -151,7 +159,8 @@ rte_intr_callback_register(const struct rte_intr_handle *intr_handle,
/* add events to the queue. timer events are special as we need to
* re-set the timer.
*/
- if (add_event || src->intr_handle.type == RTE_INTR_HANDLE_ALARM) {
+ if (add_event ||
+ rte_intr_type_get(src->intr_handle) == RTE_INTR_HANDLE_ALARM) {
struct kevent ke;
memset(&ke, 0, sizeof(ke));
@@ -173,12 +182,11 @@ rte_intr_callback_register(const struct rte_intr_handle *intr_handle,
*/
if (errno == ENODEV)
RTE_LOG(DEBUG, EAL, "Interrupt handle %d not supported\n",
- src->intr_handle.fd);
+ rte_intr_fd_get(src->intr_handle));
else
- RTE_LOG(ERR, EAL, "Error adding fd %d "
- "kevent, %s\n",
- src->intr_handle.fd,
- strerror(errno));
+ RTE_LOG(ERR, EAL, "Error adding fd %d kevent, %s\n",
+ rte_intr_fd_get(src->intr_handle),
+ strerror(errno));
ret = -errno;
goto fail;
}
@@ -213,7 +221,7 @@ rte_intr_callback_unregister_pending(const struct rte_intr_handle *intr_handle,
struct rte_intr_callback *cb, *next;
/* do parameter checking first */
- if (intr_handle == NULL || intr_handle->fd < 0) {
+ if (rte_intr_fd_get(intr_handle) < 0) {
RTE_LOG(ERR, EAL,
"Unregistering with invalid input parameter\n");
return -EINVAL;
@@ -228,7 +236,7 @@ rte_intr_callback_unregister_pending(const struct rte_intr_handle *intr_handle,
/* check if the insterrupt source for the fd is existent */
TAILQ_FOREACH(src, &intr_sources, next)
- if (src->intr_handle.fd == intr_handle->fd)
+ if (rte_intr_fd_get(src->intr_handle) == rte_intr_fd_get(intr_handle))
break;
/* No interrupt source registered for the fd */
@@ -268,7 +276,7 @@ rte_intr_callback_unregister(const struct rte_intr_handle *intr_handle,
struct rte_intr_callback *cb, *next;
/* do parameter checking first */
- if (intr_handle == NULL || intr_handle->fd < 0) {
+ if (rte_intr_fd_get(intr_handle) < 0) {
RTE_LOG(ERR, EAL,
"Unregistering with invalid input parameter\n");
return -EINVAL;
@@ -282,7 +290,7 @@ rte_intr_callback_unregister(const struct rte_intr_handle *intr_handle,
/* check if the insterrupt source for the fd is existent */
TAILQ_FOREACH(src, &intr_sources, next)
- if (src->intr_handle.fd == intr_handle->fd)
+ if (rte_intr_fd_get(src->intr_handle) == rte_intr_fd_get(intr_handle))
break;
/* No interrupt source registered for the fd */
@@ -314,7 +322,8 @@ rte_intr_callback_unregister(const struct rte_intr_handle *intr_handle,
*/
if (kevent(kq, &ke, 1, NULL, 0, NULL) < 0) {
RTE_LOG(ERR, EAL, "Error removing fd %d kevent, %s\n",
- src->intr_handle.fd, strerror(errno));
+ rte_intr_fd_get(src->intr_handle),
+ strerror(errno));
/* removing non-existent even is an expected condition
* in some circumstances (e.g. oneshot events).
*/
@@ -365,17 +374,18 @@ rte_intr_enable(const struct rte_intr_handle *intr_handle)
if (intr_handle == NULL)
return -1;
- if (intr_handle->type == RTE_INTR_HANDLE_VDEV) {
+ if (rte_intr_type_get(intr_handle) == RTE_INTR_HANDLE_VDEV) {
rc = 0;
goto out;
}
- if (intr_handle->fd < 0 || intr_handle->uio_cfg_fd < 0) {
+ if (rte_intr_fd_get(intr_handle) < 0 ||
+ rte_intr_dev_fd_get(intr_handle) < 0) {
rc = -1;
goto out;
}
- switch (intr_handle->type) {
+ switch (rte_intr_type_get(intr_handle)) {
/* not used at this moment */
case RTE_INTR_HANDLE_ALARM:
rc = -1;
@@ -386,9 +396,8 @@ rte_intr_enable(const struct rte_intr_handle *intr_handle)
break;
/* unknown handle type */
default:
- RTE_LOG(ERR, EAL,
- "Unknown handle type of fd %d\n",
- intr_handle->fd);
+ RTE_LOG(ERR, EAL, "Unknown handle type of fd %d\n",
+ rte_intr_fd_get(intr_handle));
rc = -1;
break;
}
@@ -406,17 +415,18 @@ rte_intr_disable(const struct rte_intr_handle *intr_handle)
if (intr_handle == NULL)
return -1;
- if (intr_handle->type == RTE_INTR_HANDLE_VDEV) {
+ if (rte_intr_type_get(intr_handle) == RTE_INTR_HANDLE_VDEV) {
rc = 0;
goto out;
}
- if (intr_handle->fd < 0 || intr_handle->uio_cfg_fd < 0) {
+ if (rte_intr_fd_get(intr_handle) < 0 ||
+ rte_intr_dev_fd_get(intr_handle) < 0) {
rc = -1;
goto out;
}
- switch (intr_handle->type) {
+ switch (rte_intr_type_get(intr_handle)) {
/* not used at this moment */
case RTE_INTR_HANDLE_ALARM:
rc = -1;
@@ -427,9 +437,8 @@ rte_intr_disable(const struct rte_intr_handle *intr_handle)
break;
/* unknown handle type */
default:
- RTE_LOG(ERR, EAL,
- "Unknown handle type of fd %d\n",
- intr_handle->fd);
+ RTE_LOG(ERR, EAL, "Unknown handle type of fd %d\n",
+ rte_intr_fd_get(intr_handle));
rc = -1;
break;
}
@@ -441,7 +450,7 @@ rte_intr_disable(const struct rte_intr_handle *intr_handle)
int
rte_intr_ack(const struct rte_intr_handle *intr_handle)
{
- if (intr_handle && intr_handle->type == RTE_INTR_HANDLE_VDEV)
+ if (rte_intr_type_get(intr_handle) == RTE_INTR_HANDLE_VDEV)
return 0;
return -1;
@@ -463,7 +472,7 @@ eal_intr_process_interrupts(struct kevent *events, int nfds)
rte_spinlock_lock(&intr_lock);
TAILQ_FOREACH(src, &intr_sources, next)
- if (src->intr_handle.fd == event_fd)
+ if (rte_intr_fd_get(src->intr_handle) == event_fd)
break;
if (src == NULL) {
rte_spinlock_unlock(&intr_lock);
@@ -475,7 +484,7 @@ eal_intr_process_interrupts(struct kevent *events, int nfds)
rte_spinlock_unlock(&intr_lock);
/* set the length to be read dor different handle type */
- switch (src->intr_handle.type) {
+ switch (rte_intr_type_get(src->intr_handle)) {
case RTE_INTR_HANDLE_ALARM:
bytes_read = 0;
call = true;
@@ -546,7 +555,7 @@ eal_intr_process_interrupts(struct kevent *events, int nfds)
/* mark for deletion from the queue */
ke.flags = EV_DELETE;
- if (intr_source_to_kevent(&src->intr_handle, &ke) < 0) {
+ if (intr_source_to_kevent(src->intr_handle, &ke) < 0) {
RTE_LOG(ERR, EAL, "Cannot convert to kevent\n");
rte_spinlock_unlock(&intr_lock);
return;
@@ -556,8 +565,8 @@ eal_intr_process_interrupts(struct kevent *events, int nfds)
* remove intr file descriptor from wait list.
*/
if (kevent(kq, &ke, 1, NULL, 0, NULL) < 0) {
- RTE_LOG(ERR, EAL, "Error removing fd %d kevent, "
- "%s\n", src->intr_handle.fd,
+ RTE_LOG(ERR, EAL, "Error removing fd %d kevent, %s\n",
+ rte_intr_fd_get(src->intr_handle),
strerror(errno));
/* removing non-existent even is an expected
* condition in some circumstances
@@ -567,7 +576,7 @@ eal_intr_process_interrupts(struct kevent *events, int nfds)
TAILQ_REMOVE(&src->callbacks, cb, next);
if (cb->ucb_fn)
- cb->ucb_fn(&src->intr_handle, cb->cb_arg);
+ cb->ucb_fn(src->intr_handle, cb->cb_arg);
free(cb);
}
}
diff --git a/lib/eal/linux/eal_interrupts.c b/lib/eal/linux/eal_interrupts.c
index 22b3b7bcd9..f72661e1f0 100644
--- a/lib/eal/linux/eal_interrupts.c
+++ b/lib/eal/linux/eal_interrupts.c
@@ -82,7 +82,7 @@ struct rte_intr_callback {
struct rte_intr_source {
TAILQ_ENTRY(rte_intr_source) next;
- struct rte_intr_handle intr_handle; /**< interrupt handle */
+ struct rte_intr_handle *intr_handle; /**< interrupt handle */
struct rte_intr_cb_list callbacks; /**< user callbacks */
uint32_t active;
};
@@ -112,7 +112,7 @@ static int
vfio_enable_intx(const struct rte_intr_handle *intr_handle) {
struct vfio_irq_set *irq_set;
char irq_set_buf[IRQ_SET_BUF_LEN];
- int len, ret;
+ int len, ret, vfio_dev_fd;
int *fd_ptr;
len = sizeof(irq_set_buf);
@@ -125,13 +125,14 @@ vfio_enable_intx(const struct rte_intr_handle *intr_handle) {
irq_set->index = VFIO_PCI_INTX_IRQ_INDEX;
irq_set->start = 0;
fd_ptr = (int *) &irq_set->data;
- *fd_ptr = intr_handle->fd;
+ *fd_ptr = rte_intr_fd_get(intr_handle);
- ret = ioctl(intr_handle->vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
+ vfio_dev_fd = rte_intr_dev_fd_get(intr_handle);
+ ret = ioctl(vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
if (ret) {
RTE_LOG(ERR, EAL, "Error enabling INTx interrupts for fd %d\n",
- intr_handle->fd);
+ rte_intr_fd_get(intr_handle));
return -1;
}
@@ -144,11 +145,11 @@ vfio_enable_intx(const struct rte_intr_handle *intr_handle) {
irq_set->index = VFIO_PCI_INTX_IRQ_INDEX;
irq_set->start = 0;
- ret = ioctl(intr_handle->vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
+ ret = ioctl(vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
if (ret) {
RTE_LOG(ERR, EAL, "Error unmasking INTx interrupts for fd %d\n",
- intr_handle->fd);
+ rte_intr_fd_get(intr_handle));
return -1;
}
return 0;
@@ -159,7 +160,7 @@ static int
vfio_disable_intx(const struct rte_intr_handle *intr_handle) {
struct vfio_irq_set *irq_set;
char irq_set_buf[IRQ_SET_BUF_LEN];
- int len, ret;
+ int len, ret, vfio_dev_fd;
len = sizeof(struct vfio_irq_set);
@@ -171,11 +172,12 @@ vfio_disable_intx(const struct rte_intr_handle *intr_handle) {
irq_set->index = VFIO_PCI_INTX_IRQ_INDEX;
irq_set->start = 0;
- ret = ioctl(intr_handle->vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
+ vfio_dev_fd = rte_intr_dev_fd_get(intr_handle);
+ ret = ioctl(vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
if (ret) {
RTE_LOG(ERR, EAL, "Error masking INTx interrupts for fd %d\n",
- intr_handle->fd);
+ rte_intr_fd_get(intr_handle));
return -1;
}
@@ -187,11 +189,11 @@ vfio_disable_intx(const struct rte_intr_handle *intr_handle) {
irq_set->index = VFIO_PCI_INTX_IRQ_INDEX;
irq_set->start = 0;
- ret = ioctl(intr_handle->vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
+ ret = ioctl(vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
if (ret) {
- RTE_LOG(ERR, EAL,
- "Error disabling INTx interrupts for fd %d\n", intr_handle->fd);
+ RTE_LOG(ERR, EAL, "Error disabling INTx interrupts for fd %d\n",
+ rte_intr_fd_get(intr_handle));
return -1;
}
return 0;
@@ -202,6 +204,7 @@ static int
vfio_ack_intx(const struct rte_intr_handle *intr_handle)
{
struct vfio_irq_set irq_set;
+ int vfio_dev_fd;
/* unmask INTx */
memset(&irq_set, 0, sizeof(irq_set));
@@ -211,9 +214,10 @@ vfio_ack_intx(const struct rte_intr_handle *intr_handle)
irq_set.index = VFIO_PCI_INTX_IRQ_INDEX;
irq_set.start = 0;
- if (ioctl(intr_handle->vfio_dev_fd, VFIO_DEVICE_SET_IRQS, &irq_set)) {
+ vfio_dev_fd = rte_intr_dev_fd_get(intr_handle);
+ if (ioctl(vfio_dev_fd, VFIO_DEVICE_SET_IRQS, &irq_set)) {
RTE_LOG(ERR, EAL, "Error unmasking INTx interrupts for fd %d\n",
- intr_handle->fd);
+ rte_intr_fd_get(intr_handle));
return -1;
}
return 0;
@@ -225,7 +229,7 @@ vfio_enable_msi(const struct rte_intr_handle *intr_handle) {
int len, ret;
char irq_set_buf[IRQ_SET_BUF_LEN];
struct vfio_irq_set *irq_set;
- int *fd_ptr;
+ int *fd_ptr, vfio_dev_fd;
len = sizeof(irq_set_buf);
@@ -236,13 +240,14 @@ vfio_enable_msi(const struct rte_intr_handle *intr_handle) {
irq_set->index = VFIO_PCI_MSI_IRQ_INDEX;
irq_set->start = 0;
fd_ptr = (int *) &irq_set->data;
- *fd_ptr = intr_handle->fd;
+ *fd_ptr = rte_intr_fd_get(intr_handle);
- ret = ioctl(intr_handle->vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
+ vfio_dev_fd = rte_intr_dev_fd_get(intr_handle);
+ ret = ioctl(vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
if (ret) {
RTE_LOG(ERR, EAL, "Error enabling MSI interrupts for fd %d\n",
- intr_handle->fd);
+ rte_intr_fd_get(intr_handle));
return -1;
}
return 0;
@@ -253,7 +258,7 @@ static int
vfio_disable_msi(const struct rte_intr_handle *intr_handle) {
struct vfio_irq_set *irq_set;
char irq_set_buf[IRQ_SET_BUF_LEN];
- int len, ret;
+ int len, ret, vfio_dev_fd;
len = sizeof(struct vfio_irq_set);
@@ -264,11 +269,11 @@ vfio_disable_msi(const struct rte_intr_handle *intr_handle) {
irq_set->index = VFIO_PCI_MSI_IRQ_INDEX;
irq_set->start = 0;
- ret = ioctl(intr_handle->vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
-
+ vfio_dev_fd = rte_intr_dev_fd_get(intr_handle);
+ ret = ioctl(vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
if (ret)
- RTE_LOG(ERR, EAL,
- "Error disabling MSI interrupts for fd %d\n", intr_handle->fd);
+ RTE_LOG(ERR, EAL, "Error disabling MSI interrupts for fd %d\n",
+ rte_intr_fd_get(intr_handle));
return ret;
}
@@ -279,30 +284,35 @@ vfio_enable_msix(const struct rte_intr_handle *intr_handle) {
int len, ret;
char irq_set_buf[MSIX_IRQ_SET_BUF_LEN];
struct vfio_irq_set *irq_set;
- int *fd_ptr;
+ int *fd_ptr, vfio_dev_fd, i;
len = sizeof(irq_set_buf);
irq_set = (struct vfio_irq_set *) irq_set_buf;
irq_set->argsz = len;
/* 0 < irq_set->count < RTE_MAX_RXTX_INTR_VEC_ID + 1 */
- irq_set->count = intr_handle->max_intr ?
- (intr_handle->max_intr > RTE_MAX_RXTX_INTR_VEC_ID + 1 ?
- RTE_MAX_RXTX_INTR_VEC_ID + 1 : intr_handle->max_intr) : 1;
+ irq_set->count = rte_intr_max_intr_get(intr_handle) ?
+ (rte_intr_max_intr_get(intr_handle) >
+ RTE_MAX_RXTX_INTR_VEC_ID + 1 ? RTE_MAX_RXTX_INTR_VEC_ID + 1 :
+ rte_intr_max_intr_get(intr_handle)) : 1;
+
irq_set->flags = VFIO_IRQ_SET_DATA_EVENTFD | VFIO_IRQ_SET_ACTION_TRIGGER;
irq_set->index = VFIO_PCI_MSIX_IRQ_INDEX;
irq_set->start = 0;
fd_ptr = (int *) &irq_set->data;
/* INTR vector offset 0 reserve for non-efds mapping */
- fd_ptr[RTE_INTR_VEC_ZERO_OFFSET] = intr_handle->fd;
- memcpy(&fd_ptr[RTE_INTR_VEC_RXTX_OFFSET], intr_handle->efds,
- sizeof(*intr_handle->efds) * intr_handle->nb_efd);
+ fd_ptr[RTE_INTR_VEC_ZERO_OFFSET] = rte_intr_fd_get(intr_handle);
+ for (i = 0; i < rte_intr_nb_efd_get(intr_handle); i++) {
+ fd_ptr[RTE_INTR_VEC_RXTX_OFFSET + i] =
+ rte_intr_efds_index_get(intr_handle, i);
+ }
- ret = ioctl(intr_handle->vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
+ vfio_dev_fd = rte_intr_dev_fd_get(intr_handle);
+ ret = ioctl(vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
if (ret) {
RTE_LOG(ERR, EAL, "Error enabling MSI-X interrupts for fd %d\n",
- intr_handle->fd);
+ rte_intr_fd_get(intr_handle));
return -1;
}
@@ -314,7 +324,7 @@ static int
vfio_disable_msix(const struct rte_intr_handle *intr_handle) {
struct vfio_irq_set *irq_set;
char irq_set_buf[MSIX_IRQ_SET_BUF_LEN];
- int len, ret;
+ int len, ret, vfio_dev_fd;
len = sizeof(struct vfio_irq_set);
@@ -325,11 +335,12 @@ vfio_disable_msix(const struct rte_intr_handle *intr_handle) {
irq_set->index = VFIO_PCI_MSIX_IRQ_INDEX;
irq_set->start = 0;
- ret = ioctl(intr_handle->vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
+ vfio_dev_fd = rte_intr_dev_fd_get(intr_handle);
+ ret = ioctl(vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
if (ret)
- RTE_LOG(ERR, EAL,
- "Error disabling MSI-X interrupts for fd %d\n", intr_handle->fd);
+ RTE_LOG(ERR, EAL, "Error disabling MSI-X interrupts for fd %d\n",
+ rte_intr_fd_get(intr_handle));
return ret;
}
@@ -342,7 +353,7 @@ vfio_enable_req(const struct rte_intr_handle *intr_handle)
int len, ret;
char irq_set_buf[IRQ_SET_BUF_LEN];
struct vfio_irq_set *irq_set;
- int *fd_ptr;
+ int *fd_ptr, vfio_dev_fd;
len = sizeof(irq_set_buf);
@@ -354,13 +365,14 @@ vfio_enable_req(const struct rte_intr_handle *intr_handle)
irq_set->index = VFIO_PCI_REQ_IRQ_INDEX;
irq_set->start = 0;
fd_ptr = (int *) &irq_set->data;
- *fd_ptr = intr_handle->fd;
+ *fd_ptr = rte_intr_fd_get(intr_handle);
- ret = ioctl(intr_handle->vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
+ vfio_dev_fd = rte_intr_dev_fd_get(intr_handle);
+ ret = ioctl(vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
if (ret) {
RTE_LOG(ERR, EAL, "Error enabling req interrupts for fd %d\n",
- intr_handle->fd);
+ rte_intr_fd_get(intr_handle));
return -1;
}
@@ -373,7 +385,7 @@ vfio_disable_req(const struct rte_intr_handle *intr_handle)
{
struct vfio_irq_set *irq_set;
char irq_set_buf[IRQ_SET_BUF_LEN];
- int len, ret;
+ int len, ret, vfio_dev_fd;
len = sizeof(struct vfio_irq_set);
@@ -384,11 +396,12 @@ vfio_disable_req(const struct rte_intr_handle *intr_handle)
irq_set->index = VFIO_PCI_REQ_IRQ_INDEX;
irq_set->start = 0;
- ret = ioctl(intr_handle->vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
+ vfio_dev_fd = rte_intr_dev_fd_get(intr_handle);
+ ret = ioctl(vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
if (ret)
RTE_LOG(ERR, EAL, "Error disabling req interrupts for fd %d\n",
- intr_handle->fd);
+ rte_intr_fd_get(intr_handle));
return ret;
}
@@ -399,20 +412,22 @@ static int
uio_intx_intr_disable(const struct rte_intr_handle *intr_handle)
{
unsigned char command_high;
+ int uio_cfg_fd;
/* use UIO config file descriptor for uio_pci_generic */
- if (pread(intr_handle->uio_cfg_fd, &command_high, 1, 5) != 1) {
+ uio_cfg_fd = rte_intr_dev_fd_get(intr_handle);
+ if (pread(uio_cfg_fd, &command_high, 1, 5) != 1) {
RTE_LOG(ERR, EAL,
"Error reading interrupts status for fd %d\n",
- intr_handle->uio_cfg_fd);
+ uio_cfg_fd);
return -1;
}
/* disable interrupts */
command_high |= 0x4;
- if (pwrite(intr_handle->uio_cfg_fd, &command_high, 1, 5) != 1) {
+ if (pwrite(uio_cfg_fd, &command_high, 1, 5) != 1) {
RTE_LOG(ERR, EAL,
"Error disabling interrupts for fd %d\n",
- intr_handle->uio_cfg_fd);
+ uio_cfg_fd);
return -1;
}
@@ -423,20 +438,22 @@ static int
uio_intx_intr_enable(const struct rte_intr_handle *intr_handle)
{
unsigned char command_high;
+ int uio_cfg_fd;
/* use UIO config file descriptor for uio_pci_generic */
- if (pread(intr_handle->uio_cfg_fd, &command_high, 1, 5) != 1) {
+ uio_cfg_fd = rte_intr_dev_fd_get(intr_handle);
+ if (pread(uio_cfg_fd, &command_high, 1, 5) != 1) {
RTE_LOG(ERR, EAL,
"Error reading interrupts status for fd %d\n",
- intr_handle->uio_cfg_fd);
+ uio_cfg_fd);
return -1;
}
/* enable interrupts */
command_high &= ~0x4;
- if (pwrite(intr_handle->uio_cfg_fd, &command_high, 1, 5) != 1) {
+ if (pwrite(uio_cfg_fd, &command_high, 1, 5) != 1) {
RTE_LOG(ERR, EAL,
"Error enabling interrupts for fd %d\n",
- intr_handle->uio_cfg_fd);
+ uio_cfg_fd);
return -1;
}
@@ -448,10 +465,9 @@ uio_intr_disable(const struct rte_intr_handle *intr_handle)
{
const int value = 0;
- if (write(intr_handle->fd, &value, sizeof(value)) < 0) {
- RTE_LOG(ERR, EAL,
- "Error disabling interrupts for fd %d (%s)\n",
- intr_handle->fd, strerror(errno));
+ if (write(rte_intr_fd_get(intr_handle), &value, sizeof(value)) < 0) {
+ RTE_LOG(ERR, EAL, "Error disabling interrupts for fd %d (%s)\n",
+ rte_intr_fd_get(intr_handle), strerror(errno));
return -1;
}
return 0;
@@ -462,10 +478,9 @@ uio_intr_enable(const struct rte_intr_handle *intr_handle)
{
const int value = 1;
- if (write(intr_handle->fd, &value, sizeof(value)) < 0) {
- RTE_LOG(ERR, EAL,
- "Error enabling interrupts for fd %d (%s)\n",
- intr_handle->fd, strerror(errno));
+ if (write(rte_intr_fd_get(intr_handle), &value, sizeof(value)) < 0) {
+ RTE_LOG(ERR, EAL, "Error enabling interrupts for fd %d (%s)\n",
+ rte_intr_fd_get(intr_handle), strerror(errno));
return -1;
}
return 0;
@@ -482,9 +497,8 @@ rte_intr_callback_register(const struct rte_intr_handle *intr_handle,
wake_thread = 0;
/* first do parameter checking */
- if (intr_handle == NULL || intr_handle->fd < 0 || cb == NULL) {
- RTE_LOG(ERR, EAL,
- "Registering with invalid input parameter\n");
+ if (rte_intr_fd_get(intr_handle) < 0 || cb == NULL) {
+ RTE_LOG(ERR, EAL, "Registering with invalid input parameter\n");
return -EINVAL;
}
@@ -503,7 +517,7 @@ rte_intr_callback_register(const struct rte_intr_handle *intr_handle,
/* check if there is at least one callback registered for the fd */
TAILQ_FOREACH(src, &intr_sources, next) {
- if (src->intr_handle.fd == intr_handle->fd) {
+ if (rte_intr_fd_get(src->intr_handle) == rte_intr_fd_get(intr_handle)) {
/* we had no interrupts for this */
if (TAILQ_EMPTY(&src->callbacks))
wake_thread = 1;
@@ -519,15 +533,26 @@ rte_intr_callback_register(const struct rte_intr_handle *intr_handle,
src = calloc(1, sizeof(*src));
if (src == NULL) {
RTE_LOG(ERR, EAL, "Can not allocate memory\n");
- free(callback);
ret = -ENOMEM;
+ free(callback);
+ callback = NULL;
} else {
- src->intr_handle = *intr_handle;
- TAILQ_INIT(&src->callbacks);
- TAILQ_INSERT_TAIL(&(src->callbacks), callback, next);
- TAILQ_INSERT_TAIL(&intr_sources, src, next);
- wake_thread = 1;
- ret = 0;
+ src->intr_handle = rte_intr_instance_dup(intr_handle);
+ if (src->intr_handle == NULL) {
+ RTE_LOG(ERR, EAL, "Can not create intr instance\n");
+ ret = -ENOMEM;
+ free(callback);
+ callback = NULL;
+ free(src);
+ src = NULL;
+ } else {
+ TAILQ_INIT(&src->callbacks);
+ TAILQ_INSERT_TAIL(&(src->callbacks), callback,
+ next);
+ TAILQ_INSERT_TAIL(&intr_sources, src, next);
+ wake_thread = 1;
+ ret = 0;
+ }
}
}
@@ -555,18 +580,18 @@ rte_intr_callback_unregister_pending(const struct rte_intr_handle *intr_handle,
struct rte_intr_callback *cb, *next;
/* do parameter checking first */
- if (intr_handle == NULL || intr_handle->fd < 0) {
- RTE_LOG(ERR, EAL,
- "Unregistering with invalid input parameter\n");
+ if (rte_intr_fd_get(intr_handle) < 0) {
+ RTE_LOG(ERR, EAL, "Unregistering with invalid input parameter\n");
return -EINVAL;
}
rte_spinlock_lock(&intr_lock);
/* check if the insterrupt source for the fd is existent */
- TAILQ_FOREACH(src, &intr_sources, next)
- if (src->intr_handle.fd == intr_handle->fd)
+ TAILQ_FOREACH(src, &intr_sources, next) {
+ if (rte_intr_fd_get(src->intr_handle) == rte_intr_fd_get(intr_handle))
break;
+ }
/* No interrupt source registered for the fd */
if (src == NULL) {
@@ -605,9 +630,8 @@ rte_intr_callback_unregister(const struct rte_intr_handle *intr_handle,
struct rte_intr_callback *cb, *next;
/* do parameter checking first */
- if (intr_handle == NULL || intr_handle->fd < 0) {
- RTE_LOG(ERR, EAL,
- "Unregistering with invalid input parameter\n");
+ if (rte_intr_fd_get(intr_handle) < 0) {
+ RTE_LOG(ERR, EAL, "Unregistering with invalid input parameter\n");
return -EINVAL;
}
@@ -615,7 +639,7 @@ rte_intr_callback_unregister(const struct rte_intr_handle *intr_handle,
/* check if the insterrupt source for the fd is existent */
TAILQ_FOREACH(src, &intr_sources, next)
- if (src->intr_handle.fd == intr_handle->fd)
+ if (rte_intr_fd_get(src->intr_handle) == rte_intr_fd_get(intr_handle))
break;
/* No interrupt source registered for the fd */
@@ -646,6 +670,7 @@ rte_intr_callback_unregister(const struct rte_intr_handle *intr_handle,
/* all callbacks for that source are removed. */
if (TAILQ_EMPTY(&src->callbacks)) {
TAILQ_REMOVE(&intr_sources, src, next);
+ rte_intr_instance_free(src->intr_handle);
free(src);
}
}
@@ -677,22 +702,23 @@ rte_intr_callback_unregister_sync(const struct rte_intr_handle *intr_handle,
int
rte_intr_enable(const struct rte_intr_handle *intr_handle)
{
- int rc = 0;
+ int rc = 0, uio_cfg_fd;
if (intr_handle == NULL)
return -1;
- if (intr_handle->type == RTE_INTR_HANDLE_VDEV) {
+ if (rte_intr_type_get(intr_handle) == RTE_INTR_HANDLE_VDEV) {
rc = 0;
goto out;
}
- if (intr_handle->fd < 0 || intr_handle->uio_cfg_fd < 0) {
+ uio_cfg_fd = rte_intr_dev_fd_get(intr_handle);
+ if (rte_intr_fd_get(intr_handle) < 0 || uio_cfg_fd < 0) {
rc = -1;
goto out;
}
- switch (intr_handle->type){
+ switch (rte_intr_type_get(intr_handle)) {
/* write to the uio fd to enable the interrupt */
case RTE_INTR_HANDLE_UIO:
if (uio_intr_enable(intr_handle))
@@ -732,9 +758,8 @@ rte_intr_enable(const struct rte_intr_handle *intr_handle)
break;
/* unknown handle type */
default:
- RTE_LOG(ERR, EAL,
- "Unknown handle type of fd %d\n",
- intr_handle->fd);
+ RTE_LOG(ERR, EAL, "Unknown handle type of fd %d\n",
+ rte_intr_fd_get(intr_handle));
rc = -1;
break;
}
@@ -757,13 +782,16 @@ rte_intr_enable(const struct rte_intr_handle *intr_handle)
int
rte_intr_ack(const struct rte_intr_handle *intr_handle)
{
- if (intr_handle && intr_handle->type == RTE_INTR_HANDLE_VDEV)
+ int uio_cfg_fd;
+
+ if (rte_intr_type_get(intr_handle) == RTE_INTR_HANDLE_VDEV)
return 0;
- if (!intr_handle || intr_handle->fd < 0 || intr_handle->uio_cfg_fd < 0)
+ uio_cfg_fd = rte_intr_dev_fd_get(intr_handle);
+ if (rte_intr_fd_get(intr_handle) < 0 || uio_cfg_fd < 0)
return -1;
- switch (intr_handle->type) {
+ switch (rte_intr_type_get(intr_handle)) {
/* Both acking and enabling are same for UIO */
case RTE_INTR_HANDLE_UIO:
if (uio_intr_enable(intr_handle))
@@ -796,7 +824,7 @@ rte_intr_ack(const struct rte_intr_handle *intr_handle)
/* unknown handle type */
default:
RTE_LOG(ERR, EAL, "Unknown handle type of fd %d\n",
- intr_handle->fd);
+ rte_intr_fd_get(intr_handle));
return -1;
}
@@ -806,22 +834,23 @@ rte_intr_ack(const struct rte_intr_handle *intr_handle)
int
rte_intr_disable(const struct rte_intr_handle *intr_handle)
{
- int rc = 0;
+ int rc = 0, uio_cfg_fd;
if (intr_handle == NULL)
return -1;
- if (intr_handle->type == RTE_INTR_HANDLE_VDEV) {
+ if (rte_intr_type_get(intr_handle) == RTE_INTR_HANDLE_VDEV) {
rc = 0;
goto out;
}
- if (intr_handle->fd < 0 || intr_handle->uio_cfg_fd < 0) {
+ uio_cfg_fd = rte_intr_dev_fd_get(intr_handle);
+ if (rte_intr_fd_get(intr_handle) < 0 || uio_cfg_fd < 0) {
rc = -1;
goto out;
}
- switch (intr_handle->type){
+ switch (rte_intr_type_get(intr_handle)) {
/* write to the uio fd to disable the interrupt */
case RTE_INTR_HANDLE_UIO:
if (uio_intr_disable(intr_handle))
@@ -861,9 +890,8 @@ rte_intr_disable(const struct rte_intr_handle *intr_handle)
break;
/* unknown handle type */
default:
- RTE_LOG(ERR, EAL,
- "Unknown handle type of fd %d\n",
- intr_handle->fd);
+ RTE_LOG(ERR, EAL, "Unknown handle type of fd %d\n",
+ rte_intr_fd_get(intr_handle));
rc = -1;
break;
}
@@ -896,8 +924,7 @@ eal_intr_process_interrupts(struct epoll_event *events, int nfds)
}
rte_spinlock_lock(&intr_lock);
TAILQ_FOREACH(src, &intr_sources, next)
- if (src->intr_handle.fd ==
- events[n].data.fd)
+ if (rte_intr_fd_get(src->intr_handle) == events[n].data.fd)
break;
if (src == NULL){
rte_spinlock_unlock(&intr_lock);
@@ -909,7 +936,7 @@ eal_intr_process_interrupts(struct epoll_event *events, int nfds)
rte_spinlock_unlock(&intr_lock);
/* set the length to be read dor different handle type */
- switch (src->intr_handle.type) {
+ switch (rte_intr_type_get(src->intr_handle)) {
case RTE_INTR_HANDLE_UIO:
case RTE_INTR_HANDLE_UIO_INTX:
bytes_read = sizeof(buf.uio_intr_count);
@@ -973,6 +1000,7 @@ eal_intr_process_interrupts(struct epoll_event *events, int nfds)
TAILQ_REMOVE(&src->callbacks, cb, next);
free(cb);
}
+ rte_intr_instance_free(src->intr_handle);
free(src);
return -1;
} else if (bytes_read == 0)
@@ -1012,7 +1040,7 @@ eal_intr_process_interrupts(struct epoll_event *events, int nfds)
if (cb->pending_delete) {
TAILQ_REMOVE(&src->callbacks, cb, next);
if (cb->ucb_fn)
- cb->ucb_fn(&src->intr_handle, cb->cb_arg);
+ cb->ucb_fn(src->intr_handle, cb->cb_arg);
free(cb);
rv++;
}
@@ -1021,6 +1049,7 @@ eal_intr_process_interrupts(struct epoll_event *events, int nfds)
/* all callbacks for that source are removed. */
if (TAILQ_EMPTY(&src->callbacks)) {
TAILQ_REMOVE(&intr_sources, src, next);
+ rte_intr_instance_free(src->intr_handle);
free(src);
}
@@ -1123,16 +1152,17 @@ eal_intr_thread_main(__rte_unused void *arg)
continue; /* skip those with no callbacks */
memset(&ev, 0, sizeof(ev));
ev.events = EPOLLIN | EPOLLPRI | EPOLLRDHUP | EPOLLHUP;
- ev.data.fd = src->intr_handle.fd;
+ ev.data.fd = rte_intr_fd_get(src->intr_handle);
/**
* add all the uio device file descriptor
* into wait list.
*/
if (epoll_ctl(pfd, EPOLL_CTL_ADD,
- src->intr_handle.fd, &ev) < 0){
+ rte_intr_fd_get(src->intr_handle), &ev) < 0) {
rte_panic("Error adding fd %d epoll_ctl, %s\n",
- src->intr_handle.fd, strerror(errno));
+ rte_intr_fd_get(src->intr_handle),
+ strerror(errno));
}
else
numfds++;
@@ -1185,7 +1215,7 @@ eal_intr_proc_rxtx_intr(int fd, const struct rte_intr_handle *intr_handle)
int bytes_read = 0;
int nbytes;
- switch (intr_handle->type) {
+ switch (rte_intr_type_get(intr_handle)) {
case RTE_INTR_HANDLE_UIO:
case RTE_INTR_HANDLE_UIO_INTX:
bytes_read = sizeof(buf.uio_intr_count);
@@ -1198,7 +1228,7 @@ eal_intr_proc_rxtx_intr(int fd, const struct rte_intr_handle *intr_handle)
break;
#endif
case RTE_INTR_HANDLE_VDEV:
- bytes_read = intr_handle->efd_counter_size;
+ bytes_read = rte_intr_efd_counter_size_get(intr_handle);
/* For vdev, number of bytes to read is set by driver */
break;
case RTE_INTR_HANDLE_EXT:
@@ -1419,8 +1449,8 @@ rte_intr_rx_ctl(struct rte_intr_handle *intr_handle, int epfd,
efd_idx = (vec >= RTE_INTR_VEC_RXTX_OFFSET) ?
(vec - RTE_INTR_VEC_RXTX_OFFSET) : vec;
- if (!intr_handle || intr_handle->nb_efd == 0 ||
- efd_idx >= intr_handle->nb_efd) {
+ if (intr_handle == NULL || rte_intr_nb_efd_get(intr_handle) == 0 ||
+ efd_idx >= (unsigned int)rte_intr_nb_efd_get(intr_handle)) {
RTE_LOG(ERR, EAL, "Wrong intr vector number.\n");
return -EPERM;
}
@@ -1428,7 +1458,7 @@ rte_intr_rx_ctl(struct rte_intr_handle *intr_handle, int epfd,
switch (op) {
case RTE_INTR_EVENT_ADD:
epfd_op = EPOLL_CTL_ADD;
- rev = &intr_handle->elist[efd_idx];
+ rev = rte_intr_elist_index_get(intr_handle, efd_idx);
if (__atomic_load_n(&rev->status,
__ATOMIC_RELAXED) != RTE_EPOLL_INVALID) {
RTE_LOG(INFO, EAL, "Event already been added.\n");
@@ -1442,7 +1472,7 @@ rte_intr_rx_ctl(struct rte_intr_handle *intr_handle, int epfd,
epdata->cb_fun = (rte_intr_event_cb_t)eal_intr_proc_rxtx_intr;
epdata->cb_arg = (void *)intr_handle;
rc = rte_epoll_ctl(epfd, epfd_op,
- intr_handle->efds[efd_idx], rev);
+ rte_intr_efds_index_get(intr_handle, efd_idx), rev);
if (!rc)
RTE_LOG(DEBUG, EAL,
"efd %d associated with vec %d added on epfd %d"
@@ -1452,7 +1482,7 @@ rte_intr_rx_ctl(struct rte_intr_handle *intr_handle, int epfd,
break;
case RTE_INTR_EVENT_DEL:
epfd_op = EPOLL_CTL_DEL;
- rev = &intr_handle->elist[efd_idx];
+ rev = rte_intr_elist_index_get(intr_handle, efd_idx);
if (__atomic_load_n(&rev->status,
__ATOMIC_RELAXED) == RTE_EPOLL_INVALID) {
RTE_LOG(INFO, EAL, "Event does not exist.\n");
@@ -1477,8 +1507,8 @@ rte_intr_free_epoll_fd(struct rte_intr_handle *intr_handle)
uint32_t i;
struct rte_epoll_event *rev;
- for (i = 0; i < intr_handle->nb_efd; i++) {
- rev = &intr_handle->elist[i];
+ for (i = 0; i < (uint32_t)rte_intr_nb_efd_get(intr_handle); i++) {
+ rev = rte_intr_elist_index_get(intr_handle, i);
if (__atomic_load_n(&rev->status,
__ATOMIC_RELAXED) == RTE_EPOLL_INVALID)
continue;
@@ -1498,7 +1528,7 @@ rte_intr_efd_enable(struct rte_intr_handle *intr_handle, uint32_t nb_efd)
assert(nb_efd != 0);
- if (intr_handle->type == RTE_INTR_HANDLE_VFIO_MSIX) {
+ if (rte_intr_type_get(intr_handle) == RTE_INTR_HANDLE_VFIO_MSIX) {
for (i = 0; i < n; i++) {
fd = eventfd(0, EFD_NONBLOCK | EFD_CLOEXEC);
if (fd < 0) {
@@ -1507,21 +1537,30 @@ rte_intr_efd_enable(struct rte_intr_handle *intr_handle, uint32_t nb_efd)
errno, strerror(errno));
return -errno;
}
- intr_handle->efds[i] = fd;
+
+ if (rte_intr_efds_index_set(intr_handle, i, fd))
+ return -rte_errno;
}
- intr_handle->nb_efd = n;
- intr_handle->max_intr = NB_OTHER_INTR + n;
- } else if (intr_handle->type == RTE_INTR_HANDLE_VDEV) {
+
+ if (rte_intr_nb_efd_set(intr_handle, n))
+ return -rte_errno;
+
+ if (rte_intr_max_intr_set(intr_handle, NB_OTHER_INTR + n))
+ return -rte_errno;
+ } else if (rte_intr_type_get(intr_handle) == RTE_INTR_HANDLE_VDEV) {
/* only check, initialization would be done in vdev driver.*/
- if (intr_handle->efd_counter_size >
+ if ((uint64_t)rte_intr_efd_counter_size_get(intr_handle) >
sizeof(union rte_intr_read_buffer)) {
RTE_LOG(ERR, EAL, "the efd_counter_size is oversized");
return -EINVAL;
}
} else {
- intr_handle->efds[0] = intr_handle->fd;
- intr_handle->nb_efd = RTE_MIN(nb_efd, 1U);
- intr_handle->max_intr = NB_OTHER_INTR;
+ if (rte_intr_efds_index_set(intr_handle, 0, rte_intr_fd_get(intr_handle)))
+ return -rte_errno;
+ if (rte_intr_nb_efd_set(intr_handle, RTE_MIN(nb_efd, 1U)))
+ return -rte_errno;
+ if (rte_intr_max_intr_set(intr_handle, NB_OTHER_INTR))
+ return -rte_errno;
}
return 0;
@@ -1533,18 +1572,18 @@ rte_intr_efd_disable(struct rte_intr_handle *intr_handle)
uint32_t i;
rte_intr_free_epoll_fd(intr_handle);
- if (intr_handle->max_intr > intr_handle->nb_efd) {
- for (i = 0; i < intr_handle->nb_efd; i++)
- close(intr_handle->efds[i]);
+ if (rte_intr_max_intr_get(intr_handle) > rte_intr_nb_efd_get(intr_handle)) {
+ for (i = 0; i < (uint32_t)rte_intr_nb_efd_get(intr_handle); i++)
+ close(rte_intr_efds_index_get(intr_handle, i));
}
- intr_handle->nb_efd = 0;
- intr_handle->max_intr = 0;
+ rte_intr_nb_efd_set(intr_handle, 0);
+ rte_intr_max_intr_set(intr_handle, 0);
}
int
rte_intr_dp_is_en(struct rte_intr_handle *intr_handle)
{
- return !(!intr_handle->nb_efd);
+ return !(!rte_intr_nb_efd_get(intr_handle));
}
int
@@ -1553,16 +1592,17 @@ rte_intr_allow_others(struct rte_intr_handle *intr_handle)
if (!rte_intr_dp_is_en(intr_handle))
return 1;
else
- return !!(intr_handle->max_intr - intr_handle->nb_efd);
+ return !!(rte_intr_max_intr_get(intr_handle) -
+ rte_intr_nb_efd_get(intr_handle));
}
int
rte_intr_cap_multiple(struct rte_intr_handle *intr_handle)
{
- if (intr_handle->type == RTE_INTR_HANDLE_VFIO_MSIX)
+ if (rte_intr_type_get(intr_handle) == RTE_INTR_HANDLE_VFIO_MSIX)
return 1;
- if (intr_handle->type == RTE_INTR_HANDLE_VDEV)
+ if (rte_intr_type_get(intr_handle) == RTE_INTR_HANDLE_VDEV)
return 1;
return 0;
--
2.23.0
^ permalink raw reply [flat|nested] 152+ messages in thread
* [dpdk-dev] [PATCH v7 3/9] test/interrupts: remove direct access to interrupt handle
2021-10-25 13:34 ` [dpdk-dev] [PATCH v7 0/9] " David Marchand
2021-10-25 13:34 ` [dpdk-dev] [PATCH v7 1/9] interrupts: add allocator and accessors David Marchand
2021-10-25 13:34 ` [dpdk-dev] [PATCH v7 2/9] interrupts: remove direct access to interrupt handle David Marchand
@ 2021-10-25 13:34 ` David Marchand
2021-10-25 13:34 ` [dpdk-dev] [PATCH v7 4/9] alarm: " David Marchand
` (5 subsequent siblings)
8 siblings, 0 replies; 152+ messages in thread
From: David Marchand @ 2021-10-25 13:34 UTC (permalink / raw)
To: hkalra, dev; +Cc: dmitry.kozliuk, rasland, thomas
From: Harman Kalra <hkalra@marvell.com>
Update the interrupt test suite to use the interrupt handle
get/set APIs instead of accessing the handle fields directly.
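For illustration, the conversion pattern used throughout the test is
roughly the following sketch; it is simplified (single handle, read fd
taken from the test's pipe) and is not the exact test code:

    struct rte_intr_handle *handle;

    /* handles are now allocated, the structure size is hidden */
    handle = rte_intr_instance_alloc(RTE_INTR_INSTANCE_F_PRIVATE);
    if (handle == NULL)
            return -1;
    /* describe the handle through accessors instead of field writes */
    if (rte_intr_fd_set(handle, pfds.readfd) ||
                    rte_intr_type_set(handle, RTE_INTR_HANDLE_UIO)) {
            rte_intr_instance_free(handle);
            return -1;
    }
    /* ... exercise rte_intr_enable()/rte_intr_disable() with 'handle' ... */
    rte_intr_instance_free(handle);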
Signed-off-by: Harman Kalra <hkalra@marvell.com>
Signed-off-by: David Marchand <david.marchand@redhat.com>
---
Changes since v5:
- fixed a leak when some interrupt handle can't be allocated,
---
app/test/test_interrupts.c | 164 ++++++++++++++++++++++---------------
1 file changed, 98 insertions(+), 66 deletions(-)
diff --git a/app/test/test_interrupts.c b/app/test/test_interrupts.c
index 233b14a70b..2a05399f96 100644
--- a/app/test/test_interrupts.c
+++ b/app/test/test_interrupts.c
@@ -16,7 +16,7 @@
/* predefined interrupt handle types */
enum test_interrupt_handle_type {
- TEST_INTERRUPT_HANDLE_INVALID,
+ TEST_INTERRUPT_HANDLE_INVALID = 0,
TEST_INTERRUPT_HANDLE_VALID,
TEST_INTERRUPT_HANDLE_VALID_UIO,
TEST_INTERRUPT_HANDLE_VALID_ALARM,
@@ -27,7 +27,7 @@ enum test_interrupt_handle_type {
/* flag of if callback is called */
static volatile int flag;
-static struct rte_intr_handle intr_handles[TEST_INTERRUPT_HANDLE_MAX];
+static struct rte_intr_handle *intr_handles[TEST_INTERRUPT_HANDLE_MAX];
static enum test_interrupt_handle_type test_intr_type =
TEST_INTERRUPT_HANDLE_MAX;
@@ -50,7 +50,7 @@ static union intr_pipefds pfds;
static inline int
test_interrupt_handle_sanity_check(struct rte_intr_handle *intr_handle)
{
- if (!intr_handle || intr_handle->fd < 0)
+ if (!intr_handle || rte_intr_fd_get(intr_handle) < 0)
return -1;
return 0;
@@ -62,31 +62,54 @@ test_interrupt_handle_sanity_check(struct rte_intr_handle *intr_handle)
static int
test_interrupt_init(void)
{
+ struct rte_intr_handle *test_intr_handle;
+ int i;
+
if (pipe(pfds.pipefd) < 0)
return -1;
- intr_handles[TEST_INTERRUPT_HANDLE_INVALID].fd = -1;
- intr_handles[TEST_INTERRUPT_HANDLE_INVALID].type =
- RTE_INTR_HANDLE_UNKNOWN;
+ for (i = 0; i < TEST_INTERRUPT_HANDLE_MAX; i++) {
+ intr_handles[i] =
+ rte_intr_instance_alloc(RTE_INTR_INSTANCE_F_PRIVATE);
+ if (!intr_handles[i])
+ return -1;
+ }
- intr_handles[TEST_INTERRUPT_HANDLE_VALID].fd = pfds.readfd;
- intr_handles[TEST_INTERRUPT_HANDLE_VALID].type =
- RTE_INTR_HANDLE_UNKNOWN;
+ test_intr_handle = intr_handles[TEST_INTERRUPT_HANDLE_INVALID];
+ if (rte_intr_fd_set(test_intr_handle, -1))
+ return -1;
+ if (rte_intr_type_set(test_intr_handle, RTE_INTR_HANDLE_UNKNOWN))
+ return -1;
- intr_handles[TEST_INTERRUPT_HANDLE_VALID_UIO].fd = pfds.readfd;
- intr_handles[TEST_INTERRUPT_HANDLE_VALID_UIO].type =
- RTE_INTR_HANDLE_UIO;
+ test_intr_handle = intr_handles[TEST_INTERRUPT_HANDLE_VALID];
+ if (rte_intr_fd_set(test_intr_handle, pfds.readfd))
+ return -1;
+ if (rte_intr_type_set(test_intr_handle, RTE_INTR_HANDLE_UNKNOWN))
+ return -1;
+
+ test_intr_handle = intr_handles[TEST_INTERRUPT_HANDLE_VALID_UIO];
+ if (rte_intr_fd_set(test_intr_handle, pfds.readfd))
+ return -1;
+ if (rte_intr_type_set(test_intr_handle, RTE_INTR_HANDLE_UIO))
+ return -1;
- intr_handles[TEST_INTERRUPT_HANDLE_VALID_ALARM].fd = pfds.readfd;
- intr_handles[TEST_INTERRUPT_HANDLE_VALID_ALARM].type =
- RTE_INTR_HANDLE_ALARM;
+ test_intr_handle = intr_handles[TEST_INTERRUPT_HANDLE_VALID_ALARM];
+ if (rte_intr_fd_set(test_intr_handle, pfds.readfd))
+ return -1;
+ if (rte_intr_type_set(test_intr_handle, RTE_INTR_HANDLE_ALARM))
+ return -1;
- intr_handles[TEST_INTERRUPT_HANDLE_VALID_DEV_EVENT].fd = pfds.readfd;
- intr_handles[TEST_INTERRUPT_HANDLE_VALID_DEV_EVENT].type =
- RTE_INTR_HANDLE_DEV_EVENT;
+ test_intr_handle = intr_handles[TEST_INTERRUPT_HANDLE_VALID_DEV_EVENT];
+ if (rte_intr_fd_set(test_intr_handle, pfds.readfd))
+ return -1;
+ if (rte_intr_type_set(test_intr_handle, RTE_INTR_HANDLE_DEV_EVENT))
+ return -1;
- intr_handles[TEST_INTERRUPT_HANDLE_CASE1].fd = pfds.writefd;
- intr_handles[TEST_INTERRUPT_HANDLE_CASE1].type = RTE_INTR_HANDLE_UIO;
+ test_intr_handle = intr_handles[TEST_INTERRUPT_HANDLE_CASE1];
+ if (rte_intr_fd_set(test_intr_handle, pfds.writefd))
+ return -1;
+ if (rte_intr_type_set(test_intr_handle, RTE_INTR_HANDLE_UIO))
+ return -1;
return 0;
}
@@ -97,6 +120,10 @@ test_interrupt_init(void)
static int
test_interrupt_deinit(void)
{
+ int i;
+
+ for (i = 0; i < TEST_INTERRUPT_HANDLE_MAX; i++)
+ rte_intr_instance_free(intr_handles[i]);
close(pfds.pipefd[0]);
close(pfds.pipefd[1]);
@@ -125,8 +152,10 @@ test_interrupt_handle_compare(struct rte_intr_handle *intr_handle_l,
if (!intr_handle_l || !intr_handle_r)
return -1;
- if (intr_handle_l->fd != intr_handle_r->fd ||
- intr_handle_l->type != intr_handle_r->type)
+ if (rte_intr_fd_get(intr_handle_l) !=
+ rte_intr_fd_get(intr_handle_r) ||
+ rte_intr_type_get(intr_handle_l) !=
+ rte_intr_type_get(intr_handle_r))
return -1;
return 0;
@@ -178,6 +207,8 @@ static void
test_interrupt_callback(void *arg)
{
struct rte_intr_handle *intr_handle = arg;
+ struct rte_intr_handle *test_intr_handle;
+
if (test_intr_type >= TEST_INTERRUPT_HANDLE_MAX) {
printf("invalid interrupt type\n");
flag = -1;
@@ -198,8 +229,8 @@ test_interrupt_callback(void *arg)
return;
}
- if (test_interrupt_handle_compare(intr_handle,
- &(intr_handles[test_intr_type])) == 0)
+ test_intr_handle = intr_handles[test_intr_type];
+ if (test_interrupt_handle_compare(intr_handle, test_intr_handle) == 0)
flag = 1;
}
@@ -223,7 +254,7 @@ test_interrupt_callback_1(void *arg)
static int
test_interrupt_enable(void)
{
- struct rte_intr_handle test_intr_handle;
+ struct rte_intr_handle *test_intr_handle;
/* check with null intr_handle */
if (rte_intr_enable(NULL) == 0) {
@@ -233,7 +264,7 @@ test_interrupt_enable(void)
/* check with invalid intr_handle */
test_intr_handle = intr_handles[TEST_INTERRUPT_HANDLE_INVALID];
- if (rte_intr_enable(&test_intr_handle) == 0) {
+ if (rte_intr_enable(test_intr_handle) == 0) {
printf("unexpectedly enable invalid intr_handle "
"successfully\n");
return -1;
@@ -241,7 +272,7 @@ test_interrupt_enable(void)
/* check with valid intr_handle */
test_intr_handle = intr_handles[TEST_INTERRUPT_HANDLE_VALID];
- if (rte_intr_enable(&test_intr_handle) == 0) {
+ if (rte_intr_enable(test_intr_handle) == 0) {
printf("unexpectedly enable a specific intr_handle "
"successfully\n");
return -1;
@@ -249,7 +280,7 @@ test_interrupt_enable(void)
/* check with specific valid intr_handle */
test_intr_handle = intr_handles[TEST_INTERRUPT_HANDLE_VALID_ALARM];
- if (rte_intr_enable(&test_intr_handle) == 0) {
+ if (rte_intr_enable(test_intr_handle) == 0) {
printf("unexpectedly enable a specific intr_handle "
"successfully\n");
return -1;
@@ -257,7 +288,7 @@ test_interrupt_enable(void)
/* check with specific valid intr_handle */
test_intr_handle = intr_handles[TEST_INTERRUPT_HANDLE_VALID_DEV_EVENT];
- if (rte_intr_enable(&test_intr_handle) == 0) {
+ if (rte_intr_enable(test_intr_handle) == 0) {
printf("unexpectedly enable a specific intr_handle "
"successfully\n");
return -1;
@@ -265,13 +296,13 @@ test_interrupt_enable(void)
/* check with valid handler and its type */
test_intr_handle = intr_handles[TEST_INTERRUPT_HANDLE_CASE1];
- if (rte_intr_enable(&test_intr_handle) < 0) {
+ if (rte_intr_enable(test_intr_handle) < 0) {
printf("fail to enable interrupt on a simulated handler\n");
return -1;
}
test_intr_handle = intr_handles[TEST_INTERRUPT_HANDLE_VALID_UIO];
- if (rte_intr_enable(&test_intr_handle) == 0) {
+ if (rte_intr_enable(test_intr_handle) == 0) {
printf("unexpectedly enable a specific intr_handle "
"successfully\n");
return -1;
@@ -286,7 +317,7 @@ test_interrupt_enable(void)
static int
test_interrupt_disable(void)
{
- struct rte_intr_handle test_intr_handle;
+ struct rte_intr_handle *test_intr_handle;
/* check with null intr_handle */
if (rte_intr_disable(NULL) == 0) {
@@ -297,7 +328,7 @@ test_interrupt_disable(void)
/* check with invalid intr_handle */
test_intr_handle = intr_handles[TEST_INTERRUPT_HANDLE_INVALID];
- if (rte_intr_disable(&test_intr_handle) == 0) {
+ if (rte_intr_disable(test_intr_handle) == 0) {
printf("unexpectedly disable invalid intr_handle "
"successfully\n");
return -1;
@@ -305,7 +336,7 @@ test_interrupt_disable(void)
/* check with valid intr_handle */
test_intr_handle = intr_handles[TEST_INTERRUPT_HANDLE_VALID];
- if (rte_intr_disable(&test_intr_handle) == 0) {
+ if (rte_intr_disable(test_intr_handle) == 0) {
printf("unexpectedly disable a specific intr_handle "
"successfully\n");
return -1;
@@ -313,7 +344,7 @@ test_interrupt_disable(void)
/* check with specific valid intr_handle */
test_intr_handle = intr_handles[TEST_INTERRUPT_HANDLE_VALID_ALARM];
- if (rte_intr_disable(&test_intr_handle) == 0) {
+ if (rte_intr_disable(test_intr_handle) == 0) {
printf("unexpectedly disable a specific intr_handle "
"successfully\n");
return -1;
@@ -321,7 +352,7 @@ test_interrupt_disable(void)
/* check with specific valid intr_handle */
test_intr_handle = intr_handles[TEST_INTERRUPT_HANDLE_VALID_DEV_EVENT];
- if (rte_intr_disable(&test_intr_handle) == 0) {
+ if (rte_intr_disable(test_intr_handle) == 0) {
printf("unexpectedly disable a specific intr_handle "
"successfully\n");
return -1;
@@ -329,13 +360,13 @@ test_interrupt_disable(void)
/* check with valid handler and its type */
test_intr_handle = intr_handles[TEST_INTERRUPT_HANDLE_CASE1];
- if (rte_intr_disable(&test_intr_handle) < 0) {
+ if (rte_intr_disable(test_intr_handle) < 0) {
printf("fail to disable interrupt on a simulated handler\n");
return -1;
}
test_intr_handle = intr_handles[TEST_INTERRUPT_HANDLE_VALID_UIO];
- if (rte_intr_disable(&test_intr_handle) == 0) {
+ if (rte_intr_disable(test_intr_handle) == 0) {
printf("unexpectedly disable a specific intr_handle "
"successfully\n");
return -1;
@@ -351,13 +382,13 @@ static int
test_interrupt_full_path_check(enum test_interrupt_handle_type intr_type)
{
int count;
- struct rte_intr_handle test_intr_handle;
+ struct rte_intr_handle *test_intr_handle;
flag = 0;
test_intr_handle = intr_handles[intr_type];
test_intr_type = intr_type;
- if (rte_intr_callback_register(&test_intr_handle,
- test_interrupt_callback, &test_intr_handle) < 0) {
+ if (rte_intr_callback_register(test_intr_handle,
+ test_interrupt_callback, test_intr_handle) < 0) {
printf("fail to register callback\n");
return -1;
}
@@ -371,9 +402,9 @@ test_interrupt_full_path_check(enum test_interrupt_handle_type intr_type)
rte_delay_ms(TEST_INTERRUPT_CHECK_INTERVAL);
while ((count =
- rte_intr_callback_unregister(&test_intr_handle,
+ rte_intr_callback_unregister(test_intr_handle,
test_interrupt_callback,
- &test_intr_handle)) < 0) {
+ test_intr_handle)) < 0) {
if (count != -EAGAIN)
return -1;
}
@@ -396,11 +427,11 @@ static int
test_interrupt(void)
{
int ret = -1;
- struct rte_intr_handle test_intr_handle;
+ struct rte_intr_handle *test_intr_handle;
if (test_interrupt_init() < 0) {
printf("fail to initialize for testing interrupt\n");
- return -1;
+ goto out;
}
printf("Check unknown valid interrupt full path\n");
@@ -445,8 +476,8 @@ test_interrupt(void)
/* check if it will fail to register cb with invalid intr_handle */
test_intr_handle = intr_handles[TEST_INTERRUPT_HANDLE_INVALID];
- if (rte_intr_callback_register(&test_intr_handle,
- test_interrupt_callback, &test_intr_handle) == 0) {
+ if (rte_intr_callback_register(test_intr_handle,
+ test_interrupt_callback, test_intr_handle) == 0) {
printf("unexpectedly register successfully with invalid "
"intr_handle\n");
goto out;
@@ -454,7 +485,8 @@ test_interrupt(void)
/* check if it will fail to register without callback */
test_intr_handle = intr_handles[TEST_INTERRUPT_HANDLE_VALID];
- if (rte_intr_callback_register(&test_intr_handle, NULL, &test_intr_handle) == 0) {
+ if (rte_intr_callback_register(test_intr_handle, NULL,
+ test_intr_handle) == 0) {
printf("unexpectedly register successfully with "
"null callback\n");
goto out;
@@ -470,8 +502,8 @@ test_interrupt(void)
/* check if it will fail to unregister cb with invalid intr_handle */
test_intr_handle = intr_handles[TEST_INTERRUPT_HANDLE_INVALID];
- if (rte_intr_callback_unregister(&test_intr_handle,
- test_interrupt_callback, &test_intr_handle) > 0) {
+ if (rte_intr_callback_unregister(test_intr_handle,
+ test_interrupt_callback, test_intr_handle) > 0) {
printf("unexpectedly unregister successfully with "
"invalid intr_handle\n");
goto out;
@@ -479,29 +511,29 @@ test_interrupt(void)
/* check if it is ok to register the same intr_handle twice */
test_intr_handle = intr_handles[TEST_INTERRUPT_HANDLE_VALID];
- if (rte_intr_callback_register(&test_intr_handle,
- test_interrupt_callback, &test_intr_handle) < 0) {
+ if (rte_intr_callback_register(test_intr_handle,
+ test_interrupt_callback, test_intr_handle) < 0) {
printf("it fails to register test_interrupt_callback\n");
goto out;
}
- if (rte_intr_callback_register(&test_intr_handle,
- test_interrupt_callback_1, &test_intr_handle) < 0) {
+ if (rte_intr_callback_register(test_intr_handle,
+ test_interrupt_callback_1, test_intr_handle) < 0) {
printf("it fails to register test_interrupt_callback_1\n");
goto out;
}
/* check if it will fail to unregister with invalid parameter */
- if (rte_intr_callback_unregister(&test_intr_handle,
+ if (rte_intr_callback_unregister(test_intr_handle,
test_interrupt_callback, (void *)0xff) != 0) {
printf("unexpectedly unregisters successfully with "
"invalid arg\n");
goto out;
}
- if (rte_intr_callback_unregister(&test_intr_handle,
- test_interrupt_callback, &test_intr_handle) <= 0) {
+ if (rte_intr_callback_unregister(test_intr_handle,
+ test_interrupt_callback, test_intr_handle) <= 0) {
printf("it fails to unregister test_interrupt_callback\n");
goto out;
}
- if (rte_intr_callback_unregister(&test_intr_handle,
+ if (rte_intr_callback_unregister(test_intr_handle,
test_interrupt_callback_1, (void *)-1) <= 0) {
printf("it fails to unregister test_interrupt_callback_1 "
"for all\n");
@@ -529,27 +561,27 @@ test_interrupt(void)
printf("Clearing for interrupt tests\n");
/* clear registered callbacks */
test_intr_handle = intr_handles[TEST_INTERRUPT_HANDLE_VALID];
- rte_intr_callback_unregister(&test_intr_handle,
+ rte_intr_callback_unregister(test_intr_handle,
test_interrupt_callback, (void *)-1);
- rte_intr_callback_unregister(&test_intr_handle,
+ rte_intr_callback_unregister(test_intr_handle,
test_interrupt_callback_1, (void *)-1);
test_intr_handle = intr_handles[TEST_INTERRUPT_HANDLE_VALID_UIO];
- rte_intr_callback_unregister(&test_intr_handle,
+ rte_intr_callback_unregister(test_intr_handle,
test_interrupt_callback, (void *)-1);
- rte_intr_callback_unregister(&test_intr_handle,
+ rte_intr_callback_unregister(test_intr_handle,
test_interrupt_callback_1, (void *)-1);
test_intr_handle = intr_handles[TEST_INTERRUPT_HANDLE_VALID_ALARM];
- rte_intr_callback_unregister(&test_intr_handle,
+ rte_intr_callback_unregister(test_intr_handle,
test_interrupt_callback, (void *)-1);
- rte_intr_callback_unregister(&test_intr_handle,
+ rte_intr_callback_unregister(test_intr_handle,
test_interrupt_callback_1, (void *)-1);
test_intr_handle = intr_handles[TEST_INTERRUPT_HANDLE_VALID_DEV_EVENT];
- rte_intr_callback_unregister(&test_intr_handle,
+ rte_intr_callback_unregister(test_intr_handle,
test_interrupt_callback, (void *)-1);
- rte_intr_callback_unregister(&test_intr_handle,
+ rte_intr_callback_unregister(test_intr_handle,
test_interrupt_callback_1, (void *)-1);
rte_delay_ms(2 * TEST_INTERRUPT_CHECK_INTERVAL);
--
2.23.0
^ permalink raw reply [flat|nested] 152+ messages in thread
* [dpdk-dev] [PATCH v7 4/9] alarm: remove direct access to interrupt handle
2021-10-25 13:34 ` [dpdk-dev] [PATCH v7 0/9] " David Marchand
` (2 preceding siblings ...)
2021-10-25 13:34 ` [dpdk-dev] [PATCH v7 3/9] test/interrupts: " David Marchand
@ 2021-10-25 13:34 ` David Marchand
2021-10-25 13:34 ` [dpdk-dev] [PATCH v7 5/9] lib: " David Marchand
` (4 subsequent siblings)
8 siblings, 0 replies; 152+ messages in thread
From: David Marchand @ 2021-10-25 13:34 UTC (permalink / raw)
To: hkalra, dev; +Cc: dmitry.kozliuk, rasland, thomas, Bruce Richardson
From: Harman Kalra <hkalra@marvell.com>
Remove direct access to the interrupt handle structure fields and use
the respective get/set APIs instead.
Update all the libraries that access the interrupt handle fields.
Implement an alarm cleanup routine so that the memory allocated for
the interrupt instance can be freed.
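In condensed form, the resulting init/cleanup pairing looks like the
sketch below (Linux variant, error handling trimmed; this summarizes
the change and is not the literal patch contents):

    static struct rte_intr_handle *intr_handle;

    int
    rte_eal_alarm_init(void)
    {
            intr_handle = rte_intr_instance_alloc(RTE_INTR_INSTANCE_F_PRIVATE);
            if (intr_handle == NULL)
                    goto error;
            if (rte_intr_type_set(intr_handle, RTE_INTR_HANDLE_ALARM))
                    goto error;
            /* create a timerfd file descriptor */
            if (rte_intr_fd_set(intr_handle,
                            timerfd_create(CLOCK_MONOTONIC, TFD_NONBLOCK)))
                    goto error;
            return 0;
    error:
            rte_intr_instance_free(intr_handle);
            rte_errno = errno;
            return -1;
    }

    void
    rte_eal_alarm_cleanup(void)
    {
            /* free the instance allocated at init time */
            rte_intr_instance_free(intr_handle);
    }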
Signed-off-by: Harman Kalra <hkalra@marvell.com>
Signed-off-by: David Marchand <david.marchand@redhat.com>
---
Changes since v6:
- removed unused interrupt handle in FreeBSD alarm code,
Changes since v5:
- split from patch4,
- merged patch6,
- renamed rte_eal_alarm_fini as rte_eal_alarm_cleanup,
---
lib/eal/common/eal_private.h | 10 ++++++++++
lib/eal/freebsd/eal.c | 1 +
lib/eal/freebsd/eal_alarm.c | 35 +++++++++++++++++++++++++++++------
lib/eal/linux/eal.c | 1 +
lib/eal/linux/eal_alarm.c | 32 +++++++++++++++++++++++++-------
5 files changed, 66 insertions(+), 13 deletions(-)
diff --git a/lib/eal/common/eal_private.h b/lib/eal/common/eal_private.h
index 86dab1f057..36bcc0b5a4 100644
--- a/lib/eal/common/eal_private.h
+++ b/lib/eal/common/eal_private.h
@@ -163,6 +163,16 @@ int rte_eal_intr_init(void);
*/
int rte_eal_alarm_init(void);
+/**
+ * Alarm mechanism cleanup.
+ *
+ * This function is private to EAL.
+ *
+ * @return
+ * 0 on success, negative on error
+ */
+void rte_eal_alarm_cleanup(void);
+
/**
* Function is to check if the kernel module(like, vfio, vfio_iommu_type1,
* etc.) loaded.
diff --git a/lib/eal/freebsd/eal.c b/lib/eal/freebsd/eal.c
index 56a60f13e9..9935356ed4 100644
--- a/lib/eal/freebsd/eal.c
+++ b/lib/eal/freebsd/eal.c
@@ -975,6 +975,7 @@ rte_eal_cleanup(void)
rte_mp_channel_cleanup();
/* after this point, any DPDK pointers will become dangling */
rte_eal_memory_detach();
+ rte_eal_alarm_cleanup();
rte_trace_save();
eal_trace_fini();
eal_cleanup_config(internal_conf);
diff --git a/lib/eal/freebsd/eal_alarm.c b/lib/eal/freebsd/eal_alarm.c
index c38b2e04f8..1023c32937 100644
--- a/lib/eal/freebsd/eal_alarm.c
+++ b/lib/eal/freebsd/eal_alarm.c
@@ -32,7 +32,6 @@
struct alarm_entry {
LIST_ENTRY(alarm_entry) next;
- struct rte_intr_handle handle;
struct timespec time;
rte_eal_alarm_callback cb_fn;
void *cb_arg;
@@ -43,22 +42,46 @@ struct alarm_entry {
static LIST_HEAD(alarm_list, alarm_entry) alarm_list = LIST_HEAD_INITIALIZER();
static rte_spinlock_t alarm_list_lk = RTE_SPINLOCK_INITIALIZER;
-static struct rte_intr_handle intr_handle = {.fd = -1 };
+static struct rte_intr_handle *intr_handle;
static void eal_alarm_callback(void *arg);
+void
+rte_eal_alarm_cleanup(void)
+{
+ rte_intr_instance_free(intr_handle);
+}
+
int
rte_eal_alarm_init(void)
{
- intr_handle.type = RTE_INTR_HANDLE_ALARM;
+ int fd;
+
+ intr_handle = rte_intr_instance_alloc(RTE_INTR_INSTANCE_F_PRIVATE);
+ if (intr_handle == NULL) {
+ RTE_LOG(ERR, EAL, "Fail to allocate intr_handle\n");
+ goto error;
+ }
+
+ if (rte_intr_type_set(intr_handle, RTE_INTR_HANDLE_ALARM))
+ goto error;
+
+ if (rte_intr_fd_set(intr_handle, -1))
+ goto error;
/* on FreeBSD, timers don't use fd's, and their identifiers are stored
* in separate namespace from fd's, so using any value is OK. however,
* EAL interrupts handler expects fd's to be unique, so use an actual fd
* to guarantee unique timer identifier.
*/
- intr_handle.fd = open("/dev/zero", O_RDONLY);
+ fd = open("/dev/zero", O_RDONLY);
+
+ if (rte_intr_fd_set(intr_handle, fd))
+ goto error;
return 0;
+error:
+ rte_intr_instance_free(intr_handle);
+ return -1;
}
static inline int
@@ -118,7 +141,7 @@ unregister_current_callback(void)
ap = LIST_FIRST(&alarm_list);
do {
- ret = rte_intr_callback_unregister(&intr_handle,
+ ret = rte_intr_callback_unregister(intr_handle,
eal_alarm_callback, &ap->time);
} while (ret == -EAGAIN);
}
@@ -136,7 +159,7 @@ register_first_callback(void)
ap = LIST_FIRST(&alarm_list);
/* register a new callback */
- ret = rte_intr_callback_register(&intr_handle,
+ ret = rte_intr_callback_register(intr_handle,
eal_alarm_callback, &ap->time);
}
return ret;
diff --git a/lib/eal/linux/eal.c b/lib/eal/linux/eal.c
index 0d0fc66668..81fdebc6a0 100644
--- a/lib/eal/linux/eal.c
+++ b/lib/eal/linux/eal.c
@@ -1368,6 +1368,7 @@ rte_eal_cleanup(void)
rte_mp_channel_cleanup();
/* after this point, any DPDK pointers will become dangling */
rte_eal_memory_detach();
+ rte_eal_alarm_cleanup();
rte_trace_save();
eal_trace_fini();
eal_cleanup_config(internal_conf);
diff --git a/lib/eal/linux/eal_alarm.c b/lib/eal/linux/eal_alarm.c
index 3252c6fa59..3b5e894595 100644
--- a/lib/eal/linux/eal_alarm.c
+++ b/lib/eal/linux/eal_alarm.c
@@ -54,22 +54,40 @@ struct alarm_entry {
static LIST_HEAD(alarm_list, alarm_entry) alarm_list = LIST_HEAD_INITIALIZER();
static rte_spinlock_t alarm_list_lk = RTE_SPINLOCK_INITIALIZER;
-static struct rte_intr_handle intr_handle = {.fd = -1 };
+static struct rte_intr_handle *intr_handle;
static int handler_registered = 0;
static void eal_alarm_callback(void *arg);
+void
+rte_eal_alarm_cleanup(void)
+{
+ rte_intr_instance_free(intr_handle);
+}
+
int
rte_eal_alarm_init(void)
{
- intr_handle.type = RTE_INTR_HANDLE_ALARM;
+
+ intr_handle = rte_intr_instance_alloc(RTE_INTR_INSTANCE_F_PRIVATE);
+ if (intr_handle == NULL) {
+ RTE_LOG(ERR, EAL, "Fail to allocate intr_handle\n");
+ goto error;
+ }
+
+ if (rte_intr_type_set(intr_handle, RTE_INTR_HANDLE_ALARM))
+ goto error;
+
/* create a timerfd file descriptor */
- intr_handle.fd = timerfd_create(CLOCK_MONOTONIC, TFD_NONBLOCK);
- if (intr_handle.fd == -1)
+ if (rte_intr_fd_set(intr_handle,
+ timerfd_create(CLOCK_MONOTONIC, TFD_NONBLOCK)))
goto error;
+ if (rte_intr_fd_get(intr_handle) == -1)
+ goto error;
return 0;
error:
+ rte_intr_instance_free(intr_handle);
rte_errno = errno;
return -1;
}
@@ -109,7 +127,7 @@ eal_alarm_callback(void *arg __rte_unused)
atime.it_value.tv_sec -= now.tv_sec;
atime.it_value.tv_nsec -= now.tv_nsec;
- timerfd_settime(intr_handle.fd, 0, &atime, NULL);
+ timerfd_settime(rte_intr_fd_get(intr_handle), 0, &atime, NULL);
}
rte_spinlock_unlock(&alarm_list_lk);
}
@@ -140,7 +158,7 @@ rte_eal_alarm_set(uint64_t us, rte_eal_alarm_callback cb_fn, void *cb_arg)
rte_spinlock_lock(&alarm_list_lk);
if (!handler_registered) {
/* registration can fail, callback can be registered later */
- if (rte_intr_callback_register(&intr_handle,
+ if (rte_intr_callback_register(intr_handle,
eal_alarm_callback, NULL) == 0)
handler_registered = 1;
}
@@ -170,7 +188,7 @@ rte_eal_alarm_set(uint64_t us, rte_eal_alarm_callback cb_fn, void *cb_arg)
.tv_nsec = (us % US_PER_S) * NS_PER_US,
},
};
- ret |= timerfd_settime(intr_handle.fd, 0, &alarm_time, NULL);
+ ret |= timerfd_settime(rte_intr_fd_get(intr_handle), 0, &alarm_time, NULL);
}
rte_spinlock_unlock(&alarm_list_lk);
--
2.23.0
^ permalink raw reply [flat|nested] 152+ messages in thread
* [dpdk-dev] [PATCH v7 5/9] lib: remove direct access to interrupt handle
2021-10-25 13:34 ` [dpdk-dev] [PATCH v7 0/9] " David Marchand
` (3 preceding siblings ...)
2021-10-25 13:34 ` [dpdk-dev] [PATCH v7 4/9] alarm: " David Marchand
@ 2021-10-25 13:34 ` David Marchand
2021-10-28 6:14 ` Jiang, YuX
2021-10-25 13:34 ` [dpdk-dev] [PATCH v7 6/9] drivers: " David Marchand
` (3 subsequent siblings)
8 siblings, 1 reply; 152+ messages in thread
From: David Marchand @ 2021-10-25 13:34 UTC (permalink / raw)
To: hkalra, dev
Cc: dmitry.kozliuk, rasland, thomas, Nicolas Chautru, Ferruh Yigit,
Andrew Rybchenko
From: Harman Kalra <hkalra@marvell.com>
Remove direct access to the interrupt handle structure fields and use
the respective get/set APIs instead.
Update all the libraries that access the interrupt handle fields.
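The conversions are mechanical field-access replacements, for example
in ethdev (illustrative before/after only; see the hunks below for the
exact changes):

    /* before */
    vec = intr_handle->intr_vec[queue_id];
    fd = intr_handle->efds[efd_idx];

    /* after */
    vec = rte_intr_vec_list_index_get(intr_handle, queue_id);
    fd = rte_intr_efds_index_get(intr_handle, efd_idx);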
Signed-off-by: Harman Kalra <hkalra@marvell.com>
Signed-off-by: David Marchand <david.marchand@redhat.com>
---
Changes since v5:
- split from patch4,
---
lib/bbdev/rte_bbdev.c | 4 +--
lib/eal/linux/eal_dev.c | 57 ++++++++++++++++++++++++-----------------
lib/ethdev/rte_ethdev.c | 14 +++++-----
3 files changed, 43 insertions(+), 32 deletions(-)
diff --git a/lib/bbdev/rte_bbdev.c b/lib/bbdev/rte_bbdev.c
index defddcfc28..b86c5fdcc0 100644
--- a/lib/bbdev/rte_bbdev.c
+++ b/lib/bbdev/rte_bbdev.c
@@ -1094,7 +1094,7 @@ rte_bbdev_queue_intr_ctl(uint16_t dev_id, uint16_t queue_id, int epfd, int op,
VALID_QUEUE_OR_RET_ERR(queue_id, dev);
intr_handle = dev->intr_handle;
- if (!intr_handle || !intr_handle->intr_vec) {
+ if (intr_handle == NULL) {
rte_bbdev_log(ERR, "Device %u intr handle unset\n", dev_id);
return -ENOTSUP;
}
@@ -1105,7 +1105,7 @@ rte_bbdev_queue_intr_ctl(uint16_t dev_id, uint16_t queue_id, int epfd, int op,
return -ENOTSUP;
}
- vec = intr_handle->intr_vec[queue_id];
+ vec = rte_intr_vec_list_index_get(intr_handle, queue_id);
ret = rte_intr_rx_ctl(intr_handle, epfd, op, vec, data);
if (ret && (ret != -EEXIST)) {
rte_bbdev_log(ERR,
diff --git a/lib/eal/linux/eal_dev.c b/lib/eal/linux/eal_dev.c
index 3b905e18f5..06820a3666 100644
--- a/lib/eal/linux/eal_dev.c
+++ b/lib/eal/linux/eal_dev.c
@@ -23,10 +23,7 @@
#include "eal_private.h"
-static struct rte_intr_handle intr_handle = {
- .type = RTE_INTR_HANDLE_DEV_EVENT,
- .fd = -1,
-};
+static struct rte_intr_handle *intr_handle;
static rte_rwlock_t monitor_lock = RTE_RWLOCK_INITIALIZER;
static uint32_t monitor_refcount;
static bool hotplug_handle;
@@ -109,12 +106,11 @@ static int
dev_uev_socket_fd_create(void)
{
struct sockaddr_nl addr;
- int ret;
+ int ret, fd;
- intr_handle.fd = socket(PF_NETLINK, SOCK_RAW | SOCK_CLOEXEC |
- SOCK_NONBLOCK,
- NETLINK_KOBJECT_UEVENT);
- if (intr_handle.fd < 0) {
+ fd = socket(PF_NETLINK, SOCK_RAW | SOCK_CLOEXEC | SOCK_NONBLOCK,
+ NETLINK_KOBJECT_UEVENT);
+ if (fd < 0) {
RTE_LOG(ERR, EAL, "create uevent fd failed.\n");
return -1;
}
@@ -124,16 +120,19 @@ dev_uev_socket_fd_create(void)
addr.nl_pid = 0;
addr.nl_groups = 0xffffffff;
- ret = bind(intr_handle.fd, (struct sockaddr *) &addr, sizeof(addr));
+ ret = bind(fd, (struct sockaddr *) &addr, sizeof(addr));
if (ret < 0) {
RTE_LOG(ERR, EAL, "Failed to bind uevent socket.\n");
goto err;
}
+ if (rte_intr_fd_set(intr_handle, fd))
+ goto err;
+
return 0;
err:
- close(intr_handle.fd);
- intr_handle.fd = -1;
+ close(fd);
+ fd = -1;
return ret;
}
@@ -217,9 +216,9 @@ dev_uev_parse(const char *buf, struct rte_dev_event *event, int length)
static void
dev_delayed_unregister(void *param)
{
- rte_intr_callback_unregister(&intr_handle, dev_uev_handler, param);
- close(intr_handle.fd);
- intr_handle.fd = -1;
+ rte_intr_callback_unregister(intr_handle, dev_uev_handler, param);
+ close(rte_intr_fd_get(intr_handle));
+ rte_intr_fd_set(intr_handle, -1);
}
static void
@@ -235,7 +234,8 @@ dev_uev_handler(__rte_unused void *param)
memset(&uevent, 0, sizeof(struct rte_dev_event));
memset(buf, 0, EAL_UEV_MSG_LEN);
- ret = recv(intr_handle.fd, buf, EAL_UEV_MSG_LEN, MSG_DONTWAIT);
+ ret = recv(rte_intr_fd_get(intr_handle), buf, EAL_UEV_MSG_LEN,
+ MSG_DONTWAIT);
if (ret < 0 && errno == EAGAIN)
return;
else if (ret <= 0) {
@@ -311,24 +311,35 @@ rte_dev_event_monitor_start(void)
goto exit;
}
+ intr_handle = rte_intr_instance_alloc(RTE_INTR_INSTANCE_F_PRIVATE);
+ if (intr_handle == NULL) {
+ RTE_LOG(ERR, EAL, "Fail to allocate intr_handle\n");
+ goto exit;
+ }
+
+ if (rte_intr_type_set(intr_handle, RTE_INTR_HANDLE_DEV_EVENT))
+ goto exit;
+
+ if (rte_intr_fd_set(intr_handle, -1))
+ goto exit;
+
ret = dev_uev_socket_fd_create();
if (ret) {
RTE_LOG(ERR, EAL, "error create device event fd.\n");
goto exit;
}
- ret = rte_intr_callback_register(&intr_handle, dev_uev_handler, NULL);
+ ret = rte_intr_callback_register(intr_handle, dev_uev_handler, NULL);
if (ret) {
- RTE_LOG(ERR, EAL, "fail to register uevent callback.\n");
- close(intr_handle.fd);
- intr_handle.fd = -1;
+ close(rte_intr_fd_get(intr_handle));
goto exit;
}
monitor_refcount++;
exit:
+ rte_intr_instance_free(intr_handle);
rte_rwlock_write_unlock(&monitor_lock);
return ret;
}
@@ -350,15 +361,15 @@ rte_dev_event_monitor_stop(void)
goto exit;
}
- ret = rte_intr_callback_unregister(&intr_handle, dev_uev_handler,
+ ret = rte_intr_callback_unregister(intr_handle, dev_uev_handler,
(void *)-1);
if (ret < 0) {
RTE_LOG(ERR, EAL, "fail to unregister uevent callback.\n");
goto exit;
}
- close(intr_handle.fd);
- intr_handle.fd = -1;
+ close(rte_intr_fd_get(intr_handle));
+ rte_intr_instance_free(intr_handle);
monitor_refcount--;
diff --git a/lib/ethdev/rte_ethdev.c b/lib/ethdev/rte_ethdev.c
index 74de29c2e0..7db84b12d0 100644
--- a/lib/ethdev/rte_ethdev.c
+++ b/lib/ethdev/rte_ethdev.c
@@ -4819,13 +4819,13 @@ rte_eth_dev_rx_intr_ctl(uint16_t port_id, int epfd, int op, void *data)
}
intr_handle = dev->intr_handle;
- if (!intr_handle->intr_vec) {
+ if (rte_intr_vec_list_index_get(intr_handle, 0) < 0) {
RTE_ETHDEV_LOG(ERR, "Rx Intr vector unset\n");
return -EPERM;
}
for (qid = 0; qid < dev->data->nb_rx_queues; qid++) {
- vec = intr_handle->intr_vec[qid];
+ vec = rte_intr_vec_list_index_get(intr_handle, qid);
rc = rte_intr_rx_ctl(intr_handle, epfd, op, vec, data);
if (rc && rc != -EEXIST) {
RTE_ETHDEV_LOG(ERR,
@@ -4860,15 +4860,15 @@ rte_eth_dev_rx_intr_ctl_q_get_fd(uint16_t port_id, uint16_t queue_id)
}
intr_handle = dev->intr_handle;
- if (!intr_handle->intr_vec) {
+ if (rte_intr_vec_list_index_get(intr_handle, 0) < 0) {
RTE_ETHDEV_LOG(ERR, "Rx Intr vector unset\n");
return -1;
}
- vec = intr_handle->intr_vec[queue_id];
+ vec = rte_intr_vec_list_index_get(intr_handle, queue_id);
efd_idx = (vec >= RTE_INTR_VEC_RXTX_OFFSET) ?
(vec - RTE_INTR_VEC_RXTX_OFFSET) : vec;
- fd = intr_handle->efds[efd_idx];
+ fd = rte_intr_efds_index_get(intr_handle, efd_idx);
return fd;
}
@@ -5046,12 +5046,12 @@ rte_eth_dev_rx_intr_ctl_q(uint16_t port_id, uint16_t queue_id,
}
intr_handle = dev->intr_handle;
- if (!intr_handle->intr_vec) {
+ if (rte_intr_vec_list_index_get(intr_handle, 0) < 0) {
RTE_ETHDEV_LOG(ERR, "Rx Intr vector unset\n");
return -EPERM;
}
- vec = intr_handle->intr_vec[queue_id];
+ vec = rte_intr_vec_list_index_get(intr_handle, queue_id);
rc = rte_intr_rx_ctl(intr_handle, epfd, op, vec, data);
if (rc && rc != -EEXIST) {
RTE_ETHDEV_LOG(ERR,
--
2.23.0
* [dpdk-dev] [PATCH v7 6/9] drivers: remove direct access to interrupt handle
2021-10-25 13:34 ` [dpdk-dev] [PATCH v7 0/9] " David Marchand
` (4 preceding siblings ...)
2021-10-25 13:34 ` [dpdk-dev] [PATCH v7 5/9] lib: " David Marchand
@ 2021-10-25 13:34 ` David Marchand
2021-10-25 13:34 ` [dpdk-dev] [PATCH v7 7/9] interrupts: make interrupt handle structure opaque David Marchand
` (2 subsequent siblings)
8 siblings, 0 replies; 152+ messages in thread
From: David Marchand @ 2021-10-25 13:34 UTC (permalink / raw)
To: hkalra, dev
Cc: dmitry.kozliuk, rasland, thomas, Hyong Youb Kim, Nicolas Chautru,
Parav Pandit, Xueming Li, Hemant Agrawal, Sachin Saxena,
Rosen Xu, Ferruh Yigit, Anatoly Burakov, Stephen Hemminger,
Long Li, Nithin Dabilpuram, Kiran Kumar K, Sunil Kumar Kori,
Satha Rao, Jerin Jacob, Ankur Dwivedi, Anoob Joseph,
Pavan Nikhilesh, Igor Russkikh, Steven Webster, Matt Peters,
Chandubabu Namburu, Rasesh Mody, Shahed Shaikh, Ajit Khaparde,
Somnath Kotur, Haiyue Wang, Marcin Wojtas, Michal Krawczyk,
Shai Brandes, Evgeny Schemeilin, Igor Chauskin, John Daley,
Gaetan Rivet, Qi Zhang, Xiao Wang, Ziyang Xuan, Xiaoyun Wang,
Guoyang Zhou, Min Hu (Connor),
Yisen Zhuang, Lijun Ou, Beilei Xing, Jingjing Wu, Qiming Yang,
Andrew Boyer, Jakub Grajciar, Matan Azrad, Viacheslav Ovsiienko,
Heinrich Kuhn, Jiawen Wu, Devendra Singh Rawat, Andrew Rybchenko,
Keith Wiles, Maciej Czekaj, Jian Wang, Maxime Coquelin,
Chenbo Xia, Yong Wang, Tianfei zhang, Xiaoyun Li, Guy Kaneti
From: Harman Kalra <hkalra@marvell.com>
Remove direct access to the interrupt handle structure fields and use
the respective get/set APIs instead.
Update all drivers that currently access the interrupt handle fields
directly.
Signed-off-by: Harman Kalra <hkalra@marvell.com>
Acked-by: Hyong Youb Kim <hyonkim@cisco.com>
Signed-off-by: David Marchand <david.marchand@redhat.com>
---
Changes since v6:
- fixed interrupt handle allocation for drivers without
RTE_PCI_DRV_NEED_MAPPING,
Changes since v5:
- moved instance allocation to probing for auxiliary,
- fixed dev_irq_register() return value sign on error for
drivers/common/cnxk/roc_irq.c,
---
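For orientation before the per-driver hunks, here is a minimal, self-contained
sketch of the conversion pattern this patch applies. The example_* names and
the vector count are hypothetical, error handling is trimmed, and the error
path assumes the accessors leave rte_errno set on failure, as the hunks below
rely on.

#include <errno.h>
#include <unistd.h>

#include <rte_errno.h>
#include <rte_interrupts.h>

/* Hypothetical driver-private state; only the handle pointer matters here. */
struct example_dev {
	struct rte_intr_handle *intr_handle;
};

static int
example_dev_init(struct example_dev *dev, int fd)
{
	/* The handle can no longer be embedded statically (its size is
	 * hidden), so an instance is allocated during init. */
	dev->intr_handle = rte_intr_instance_alloc(RTE_INTR_INSTANCE_F_PRIVATE);
	if (dev->intr_handle == NULL)
		return -ENOMEM;

	/* Field writes go through the set wrappers instead of ->fd / ->type. */
	if (rte_intr_fd_set(dev->intr_handle, fd) ||
	    rte_intr_type_set(dev->intr_handle, RTE_INTR_HANDLE_VFIO_MSIX))
		goto error;

	/* The old intr_vec[] array becomes an internally managed list. */
	if (rte_intr_vec_list_alloc(dev->intr_handle, "intr_vec", 4))
		goto error;

	if (rte_intr_enable(dev->intr_handle))
		goto error;

	return 0;

error:
	rte_intr_instance_free(dev->intr_handle);
	dev->intr_handle = NULL;
	return -rte_errno;
}

static void
example_dev_fini(struct example_dev *dev)
{
	/* Reads go through the get wrappers; the instance is freed on
	 * cleanup rather than left embedded in the device struct. */
	if (rte_intr_fd_get(dev->intr_handle) >= 0)
		close(rte_intr_fd_get(dev->intr_handle));
	rte_intr_instance_free(dev->intr_handle);
	dev->intr_handle = NULL;
}

Per-queue code then reads vectors and eventfds back with
rte_intr_vec_list_index_get() and rte_intr_efds_index_get(), as the ethdev
hunks earlier in the series already show.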
drivers/baseband/acc100/rte_acc100_pmd.c | 14 +--
.../fpga_5gnr_fec/rte_fpga_5gnr_fec.c | 24 ++--
drivers/baseband/fpga_lte_fec/fpga_lte_fec.c | 24 ++--
drivers/bus/auxiliary/auxiliary_common.c | 17 ++-
drivers/bus/auxiliary/rte_bus_auxiliary.h | 2 +-
drivers/bus/dpaa/dpaa_bus.c | 28 ++++-
drivers/bus/dpaa/rte_dpaa_bus.h | 2 +-
drivers/bus/fslmc/fslmc_bus.c | 14 ++-
drivers/bus/fslmc/fslmc_vfio.c | 30 +++--
drivers/bus/fslmc/portal/dpaa2_hw_dpio.c | 18 ++-
drivers/bus/fslmc/portal/dpaa2_hw_pvt.h | 2 +-
drivers/bus/fslmc/rte_fslmc.h | 2 +-
drivers/bus/ifpga/ifpga_bus.c | 13 +-
drivers/bus/ifpga/rte_bus_ifpga.h | 2 +-
drivers/bus/pci/bsd/pci.c | 20 +--
drivers/bus/pci/linux/pci.c | 4 +-
drivers/bus/pci/linux/pci_uio.c | 69 +++++++----
drivers/bus/pci/linux/pci_vfio.c | 102 +++++++++------
drivers/bus/pci/pci_common.c | 47 +++++--
drivers/bus/pci/pci_common_uio.c | 21 ++--
drivers/bus/pci/rte_bus_pci.h | 4 +-
drivers/bus/vmbus/linux/vmbus_bus.c | 6 +
drivers/bus/vmbus/linux/vmbus_uio.c | 35 ++++--
drivers/bus/vmbus/rte_bus_vmbus.h | 2 +-
drivers/bus/vmbus/vmbus_common_uio.c | 23 ++--
drivers/common/cnxk/roc_cpt.c | 8 +-
drivers/common/cnxk/roc_dev.c | 14 +--
drivers/common/cnxk/roc_irq.c | 107 +++++++++-------
drivers/common/cnxk/roc_nix_inl_dev_irq.c | 8 +-
drivers/common/cnxk/roc_nix_irq.c | 36 +++---
drivers/common/cnxk/roc_npa.c | 2 +-
drivers/common/cnxk/roc_platform.h | 48 +++++--
drivers/common/cnxk/roc_sso.c | 4 +-
drivers/common/cnxk/roc_tim.c | 4 +-
drivers/common/octeontx2/otx2_dev.c | 14 +--
drivers/common/octeontx2/otx2_irq.c | 117 ++++++++++--------
.../octeontx2/otx2_cryptodev_hw_access.c | 4 +-
drivers/event/octeontx2/otx2_evdev_irq.c | 12 +-
drivers/mempool/octeontx2/otx2_mempool.c | 2 +-
drivers/net/atlantic/atl_ethdev.c | 20 ++-
drivers/net/avp/avp_ethdev.c | 8 +-
drivers/net/axgbe/axgbe_ethdev.c | 12 +-
drivers/net/axgbe/axgbe_mdio.c | 6 +-
drivers/net/bnx2x/bnx2x_ethdev.c | 10 +-
drivers/net/bnxt/bnxt_ethdev.c | 33 +++--
drivers/net/bnxt/bnxt_irq.c | 4 +-
drivers/net/dpaa/dpaa_ethdev.c | 48 ++++---
drivers/net/dpaa2/dpaa2_ethdev.c | 10 +-
drivers/net/e1000/em_ethdev.c | 23 ++--
drivers/net/e1000/igb_ethdev.c | 79 ++++++------
drivers/net/ena/ena_ethdev.c | 35 +++---
drivers/net/enic/enic_main.c | 26 ++--
drivers/net/failsafe/failsafe.c | 21 +++-
drivers/net/failsafe/failsafe_intr.c | 43 ++++---
drivers/net/failsafe/failsafe_ops.c | 19 ++-
drivers/net/failsafe/failsafe_private.h | 2 +-
drivers/net/fm10k/fm10k_ethdev.c | 32 ++---
drivers/net/hinic/hinic_pmd_ethdev.c | 10 +-
drivers/net/hns3/hns3_ethdev.c | 57 ++++-----
drivers/net/hns3/hns3_ethdev_vf.c | 64 +++++-----
drivers/net/hns3/hns3_rxtx.c | 2 +-
drivers/net/i40e/i40e_ethdev.c | 53 ++++----
drivers/net/iavf/iavf_ethdev.c | 42 +++----
drivers/net/iavf/iavf_vchnl.c | 4 +-
drivers/net/ice/ice_dcf.c | 10 +-
drivers/net/ice/ice_dcf_ethdev.c | 21 ++--
drivers/net/ice/ice_ethdev.c | 49 ++++----
drivers/net/igc/igc_ethdev.c | 45 ++++---
drivers/net/ionic/ionic_ethdev.c | 17 +--
drivers/net/ixgbe/ixgbe_ethdev.c | 66 +++++-----
drivers/net/memif/memif_socket.c | 108 +++++++++++-----
drivers/net/memif/memif_socket.h | 4 +-
drivers/net/memif/rte_eth_memif.c | 56 +++++++--
drivers/net/memif/rte_eth_memif.h | 2 +-
drivers/net/mlx4/mlx4.c | 19 ++-
drivers/net/mlx4/mlx4.h | 2 +-
drivers/net/mlx4/mlx4_intr.c | 47 ++++---
drivers/net/mlx5/linux/mlx5_os.c | 55 +++++---
drivers/net/mlx5/linux/mlx5_socket.c | 25 ++--
drivers/net/mlx5/mlx5.h | 6 +-
drivers/net/mlx5/mlx5_rxq.c | 43 ++++---
drivers/net/mlx5/mlx5_trigger.c | 4 +-
drivers/net/mlx5/mlx5_txpp.c | 25 ++--
drivers/net/netvsc/hn_ethdev.c | 4 +-
drivers/net/nfp/nfp_common.c | 34 ++---
drivers/net/nfp/nfp_ethdev.c | 13 +-
drivers/net/nfp/nfp_ethdev_vf.c | 13 +-
drivers/net/ngbe/ngbe_ethdev.c | 29 ++---
drivers/net/octeontx2/otx2_ethdev_irq.c | 35 +++---
drivers/net/qede/qede_ethdev.c | 16 +--
drivers/net/sfc/sfc_intr.c | 30 ++---
drivers/net/tap/rte_eth_tap.c | 33 +++--
drivers/net/tap/rte_eth_tap.h | 2 +-
drivers/net/tap/tap_intr.c | 33 ++---
drivers/net/thunderx/nicvf_ethdev.c | 10 ++
drivers/net/thunderx/nicvf_struct.h | 2 +-
drivers/net/txgbe/txgbe_ethdev.c | 38 +++---
drivers/net/txgbe/txgbe_ethdev_vf.c | 33 +++--
drivers/net/vhost/rte_eth_vhost.c | 80 ++++++------
drivers/net/virtio/virtio_ethdev.c | 21 ++--
.../net/virtio/virtio_user/virtio_user_dev.c | 56 +++++----
drivers/net/vmxnet3/vmxnet3_ethdev.c | 43 ++++---
drivers/raw/ifpga/ifpga_rawdev.c | 62 +++++++---
drivers/raw/ntb/ntb.c | 9 +-
.../regex/octeontx2/otx2_regexdev_hw_access.c | 4 +-
drivers/vdpa/ifc/ifcvf_vdpa.c | 5 +-
drivers/vdpa/mlx5/mlx5_vdpa.c | 8 ++
drivers/vdpa/mlx5/mlx5_vdpa.h | 4 +-
drivers/vdpa/mlx5/mlx5_vdpa_event.c | 21 ++--
drivers/vdpa/mlx5/mlx5_vdpa_virtq.c | 44 ++++---
lib/ethdev/ethdev_pci.h | 2 +-
111 files changed, 1673 insertions(+), 1183 deletions(-)
diff --git a/drivers/baseband/acc100/rte_acc100_pmd.c b/drivers/baseband/acc100/rte_acc100_pmd.c
index 05fe6f8b6f..1c6080f2f8 100644
--- a/drivers/baseband/acc100/rte_acc100_pmd.c
+++ b/drivers/baseband/acc100/rte_acc100_pmd.c
@@ -720,8 +720,8 @@ acc100_intr_enable(struct rte_bbdev *dev)
struct acc100_device *d = dev->data->dev_private;
/* Only MSI are currently supported */
- if (dev->intr_handle->type == RTE_INTR_HANDLE_VFIO_MSI ||
- dev->intr_handle->type == RTE_INTR_HANDLE_UIO) {
+ if (rte_intr_type_get(dev->intr_handle) == RTE_INTR_HANDLE_VFIO_MSI ||
+ rte_intr_type_get(dev->intr_handle) == RTE_INTR_HANDLE_UIO) {
ret = allocate_info_ring(dev);
if (ret < 0) {
@@ -1098,8 +1098,8 @@ acc100_queue_intr_enable(struct rte_bbdev *dev, uint16_t queue_id)
{
struct acc100_queue *q = dev->data->queues[queue_id].queue_private;
- if (dev->intr_handle->type != RTE_INTR_HANDLE_VFIO_MSI &&
- dev->intr_handle->type != RTE_INTR_HANDLE_UIO)
+ if (rte_intr_type_get(dev->intr_handle) != RTE_INTR_HANDLE_VFIO_MSI &&
+ rte_intr_type_get(dev->intr_handle) != RTE_INTR_HANDLE_UIO)
return -ENOTSUP;
q->irq_enable = 1;
@@ -1111,8 +1111,8 @@ acc100_queue_intr_disable(struct rte_bbdev *dev, uint16_t queue_id)
{
struct acc100_queue *q = dev->data->queues[queue_id].queue_private;
- if (dev->intr_handle->type != RTE_INTR_HANDLE_VFIO_MSI &&
- dev->intr_handle->type != RTE_INTR_HANDLE_UIO)
+ if (rte_intr_type_get(dev->intr_handle) != RTE_INTR_HANDLE_VFIO_MSI &&
+ rte_intr_type_get(dev->intr_handle) != RTE_INTR_HANDLE_UIO)
return -ENOTSUP;
q->irq_enable = 0;
@@ -4185,7 +4185,7 @@ static int acc100_pci_probe(struct rte_pci_driver *pci_drv,
/* Fill HW specific part of device structure */
bbdev->device = &pci_dev->device;
- bbdev->intr_handle = &pci_dev->intr_handle;
+ bbdev->intr_handle = pci_dev->intr_handle;
bbdev->data->socket_id = pci_dev->device.numa_node;
/* Invoke ACC100 device initialization function */
diff --git a/drivers/baseband/fpga_5gnr_fec/rte_fpga_5gnr_fec.c b/drivers/baseband/fpga_5gnr_fec/rte_fpga_5gnr_fec.c
index ee457f3071..15d23d6269 100644
--- a/drivers/baseband/fpga_5gnr_fec/rte_fpga_5gnr_fec.c
+++ b/drivers/baseband/fpga_5gnr_fec/rte_fpga_5gnr_fec.c
@@ -743,17 +743,17 @@ fpga_intr_enable(struct rte_bbdev *dev)
* It ensures that callback function assigned to that descriptor will
* invoked when any FPGA queue issues interrupt.
*/
- for (i = 0; i < FPGA_NUM_INTR_VEC; ++i)
- dev->intr_handle->efds[i] = dev->intr_handle->fd;
-
- if (!dev->intr_handle->intr_vec) {
- dev->intr_handle->intr_vec = rte_zmalloc("intr_vec",
- dev->data->num_queues * sizeof(int), 0);
- if (!dev->intr_handle->intr_vec) {
- rte_bbdev_log(ERR, "Failed to allocate %u vectors",
- dev->data->num_queues);
- return -ENOMEM;
- }
+ for (i = 0; i < FPGA_NUM_INTR_VEC; ++i) {
+ if (rte_intr_efds_index_set(dev->intr_handle, i,
+ rte_intr_fd_get(dev->intr_handle)))
+ return -rte_errno;
+ }
+
+ if (rte_intr_vec_list_alloc(dev->intr_handle, "intr_vec",
+ dev->data->num_queues)) {
+ rte_bbdev_log(ERR, "Failed to allocate %u vectors",
+ dev->data->num_queues);
+ return -ENOMEM;
}
ret = rte_intr_enable(dev->intr_handle);
@@ -1880,7 +1880,7 @@ fpga_5gnr_fec_probe(struct rte_pci_driver *pci_drv,
/* Fill HW specific part of device structure */
bbdev->device = &pci_dev->device;
- bbdev->intr_handle = &pci_dev->intr_handle;
+ bbdev->intr_handle = pci_dev->intr_handle;
bbdev->data->socket_id = pci_dev->device.numa_node;
/* Invoke FEC FPGA device initialization function */
diff --git a/drivers/baseband/fpga_lte_fec/fpga_lte_fec.c b/drivers/baseband/fpga_lte_fec/fpga_lte_fec.c
index 703bb611a0..92decc3e05 100644
--- a/drivers/baseband/fpga_lte_fec/fpga_lte_fec.c
+++ b/drivers/baseband/fpga_lte_fec/fpga_lte_fec.c
@@ -1014,17 +1014,17 @@ fpga_intr_enable(struct rte_bbdev *dev)
* It ensures that callback function assigned to that descriptor will
* invoked when any FPGA queue issues interrupt.
*/
- for (i = 0; i < FPGA_NUM_INTR_VEC; ++i)
- dev->intr_handle->efds[i] = dev->intr_handle->fd;
-
- if (!dev->intr_handle->intr_vec) {
- dev->intr_handle->intr_vec = rte_zmalloc("intr_vec",
- dev->data->num_queues * sizeof(int), 0);
- if (!dev->intr_handle->intr_vec) {
- rte_bbdev_log(ERR, "Failed to allocate %u vectors",
- dev->data->num_queues);
- return -ENOMEM;
- }
+ for (i = 0; i < FPGA_NUM_INTR_VEC; ++i) {
+ if (rte_intr_efds_index_set(dev->intr_handle, i,
+ rte_intr_fd_get(dev->intr_handle)))
+ return -rte_errno;
+ }
+
+ if (rte_intr_vec_list_alloc(dev->intr_handle, "intr_vec",
+ dev->data->num_queues)) {
+ rte_bbdev_log(ERR, "Failed to allocate %u vectors",
+ dev->data->num_queues);
+ return -ENOMEM;
}
ret = rte_intr_enable(dev->intr_handle);
@@ -2370,7 +2370,7 @@ fpga_lte_fec_probe(struct rte_pci_driver *pci_drv,
/* Fill HW specific part of device structure */
bbdev->device = &pci_dev->device;
- bbdev->intr_handle = &pci_dev->intr_handle;
+ bbdev->intr_handle = pci_dev->intr_handle;
bbdev->data->socket_id = pci_dev->device.numa_node;
/* Invoke FEC FPGA device initialization function */
diff --git a/drivers/bus/auxiliary/auxiliary_common.c b/drivers/bus/auxiliary/auxiliary_common.c
index 603b6fdc02..2cf8fe672d 100644
--- a/drivers/bus/auxiliary/auxiliary_common.c
+++ b/drivers/bus/auxiliary/auxiliary_common.c
@@ -121,15 +121,27 @@ rte_auxiliary_probe_one_driver(struct rte_auxiliary_driver *drv,
return -EINVAL;
}
+ /* Allocate interrupt instance */
+ dev->intr_handle =
+ rte_intr_instance_alloc(RTE_INTR_INSTANCE_F_PRIVATE);
+ if (dev->intr_handle == NULL) {
+ AUXILIARY_LOG(ERR, "Could not allocate interrupt instance for device %s",
+ dev->name);
+ return -ENOMEM;
+ }
+
dev->driver = drv;
AUXILIARY_LOG(INFO, "Probe auxiliary driver: %s device: %s (NUMA node %i)",
drv->driver.name, dev->name, dev->device.numa_node);
ret = drv->probe(drv, dev);
- if (ret != 0)
+ if (ret != 0) {
dev->driver = NULL;
- else
+ rte_intr_instance_free(dev->intr_handle);
+ dev->intr_handle = NULL;
+ } else {
dev->device.driver = &drv->driver;
+ }
return ret;
}
@@ -320,6 +332,7 @@ auxiliary_unplug(struct rte_device *dev)
if (ret == 0) {
rte_auxiliary_remove_device(adev);
rte_devargs_remove(dev->devargs);
+ rte_intr_instance_free(adev->intr_handle);
free(adev);
}
return ret;
diff --git a/drivers/bus/auxiliary/rte_bus_auxiliary.h b/drivers/bus/auxiliary/rte_bus_auxiliary.h
index b1f5610404..93b266daf7 100644
--- a/drivers/bus/auxiliary/rte_bus_auxiliary.h
+++ b/drivers/bus/auxiliary/rte_bus_auxiliary.h
@@ -115,7 +115,7 @@ struct rte_auxiliary_device {
RTE_TAILQ_ENTRY(rte_auxiliary_device) next; /**< Next probed device. */
struct rte_device device; /**< Inherit core device */
char name[RTE_DEV_NAME_MAX_LEN + 1]; /**< ASCII device name */
- struct rte_intr_handle intr_handle; /**< Interrupt handle */
+ struct rte_intr_handle *intr_handle; /**< Interrupt handle */
struct rte_auxiliary_driver *driver; /**< Device driver */
};
diff --git a/drivers/bus/dpaa/dpaa_bus.c b/drivers/bus/dpaa/dpaa_bus.c
index 6cab2ae760..9a53fdc1fb 100644
--- a/drivers/bus/dpaa/dpaa_bus.c
+++ b/drivers/bus/dpaa/dpaa_bus.c
@@ -172,6 +172,15 @@ dpaa_create_device_list(void)
dev->device.bus = &rte_dpaa_bus.bus;
+ /* Allocate interrupt handle instance */
+ dev->intr_handle =
+ rte_intr_instance_alloc(RTE_INTR_INSTANCE_F_PRIVATE);
+ if (dev->intr_handle == NULL) {
+ DPAA_BUS_LOG(ERR, "Failed to allocate intr handle");
+ ret = -ENOMEM;
+ goto cleanup;
+ }
+
cfg = &dpaa_netcfg->port_cfg[i];
fman_intf = cfg->fman_if;
@@ -214,6 +223,15 @@ dpaa_create_device_list(void)
goto cleanup;
}
+ /* Allocate interrupt handle instance */
+ dev->intr_handle =
+ rte_intr_instance_alloc(RTE_INTR_INSTANCE_F_PRIVATE);
+ if (dev->intr_handle == NULL) {
+ DPAA_BUS_LOG(ERR, "Failed to allocate intr handle");
+ ret = -ENOMEM;
+ goto cleanup;
+ }
+
dev->device_type = FSL_DPAA_CRYPTO;
dev->id.dev_id = rte_dpaa_bus.device_count + i;
@@ -247,6 +265,7 @@ dpaa_clean_device_list(void)
RTE_TAILQ_FOREACH_SAFE(dev, &rte_dpaa_bus.device_list, next, tdev) {
TAILQ_REMOVE(&rte_dpaa_bus.device_list, dev, next);
+ rte_intr_instance_free(dev->intr_handle);
free(dev);
dev = NULL;
}
@@ -559,8 +578,11 @@ static int rte_dpaa_setup_intr(struct rte_intr_handle *intr_handle)
return errno;
}
- intr_handle->fd = fd;
- intr_handle->type = RTE_INTR_HANDLE_EXT;
+ if (rte_intr_fd_set(intr_handle, fd))
+ return rte_errno;
+
+ if (rte_intr_type_set(intr_handle, RTE_INTR_HANDLE_EXT))
+ return rte_errno;
return 0;
}
@@ -612,7 +634,7 @@ rte_dpaa_bus_probe(void)
TAILQ_FOREACH(dev, &rte_dpaa_bus.device_list, next) {
if (dev->device_type == FSL_DPAA_ETH) {
- ret = rte_dpaa_setup_intr(&dev->intr_handle);
+ ret = rte_dpaa_setup_intr(dev->intr_handle);
if (ret)
DPAA_BUS_ERR("Error setting up interrupt.\n");
}
diff --git a/drivers/bus/dpaa/rte_dpaa_bus.h b/drivers/bus/dpaa/rte_dpaa_bus.h
index ecc66387f6..97d189f9b0 100644
--- a/drivers/bus/dpaa/rte_dpaa_bus.h
+++ b/drivers/bus/dpaa/rte_dpaa_bus.h
@@ -98,7 +98,7 @@ struct rte_dpaa_device {
};
struct rte_dpaa_driver *driver;
struct dpaa_device_id id;
- struct rte_intr_handle intr_handle;
+ struct rte_intr_handle *intr_handle;
enum rte_dpaa_type device_type; /**< Ethernet or crypto type device */
char name[RTE_ETH_NAME_MAX_LEN];
};
diff --git a/drivers/bus/fslmc/fslmc_bus.c b/drivers/bus/fslmc/fslmc_bus.c
index 8c8f8a298d..ac3cb4aa5a 100644
--- a/drivers/bus/fslmc/fslmc_bus.c
+++ b/drivers/bus/fslmc/fslmc_bus.c
@@ -47,6 +47,7 @@ cleanup_fslmc_device_list(void)
RTE_TAILQ_FOREACH_SAFE(dev, &rte_fslmc_bus.device_list, next, t_dev) {
TAILQ_REMOVE(&rte_fslmc_bus.device_list, dev, next);
+ rte_intr_instance_free(dev->intr_handle);
free(dev);
dev = NULL;
}
@@ -160,6 +161,15 @@ scan_one_fslmc_device(char *dev_name)
dev->device.bus = &rte_fslmc_bus.bus;
+ /* Allocate interrupt instance */
+ dev->intr_handle =
+ rte_intr_instance_alloc(RTE_INTR_INSTANCE_F_PRIVATE);
+ if (dev->intr_handle == NULL) {
+ DPAA2_BUS_ERR("Failed to allocate intr handle");
+ ret = -ENOMEM;
+ goto cleanup;
+ }
+
/* Parse the device name and ID */
t_ptr = strtok(dup_dev_name, ".");
if (!t_ptr) {
@@ -220,8 +230,10 @@ scan_one_fslmc_device(char *dev_name)
cleanup:
if (dup_dev_name)
free(dup_dev_name);
- if (dev)
+ if (dev) {
+ rte_intr_instance_free(dev->intr_handle);
free(dev);
+ }
return ret;
}
diff --git a/drivers/bus/fslmc/fslmc_vfio.c b/drivers/bus/fslmc/fslmc_vfio.c
index 852fcfc4dd..b4704eeae4 100644
--- a/drivers/bus/fslmc/fslmc_vfio.c
+++ b/drivers/bus/fslmc/fslmc_vfio.c
@@ -599,7 +599,7 @@ int rte_dpaa2_intr_enable(struct rte_intr_handle *intr_handle, int index)
int len, ret;
char irq_set_buf[IRQ_SET_BUF_LEN];
struct vfio_irq_set *irq_set;
- int *fd_ptr;
+ int *fd_ptr, vfio_dev_fd;
len = sizeof(irq_set_buf);
@@ -611,12 +611,14 @@ int rte_dpaa2_intr_enable(struct rte_intr_handle *intr_handle, int index)
irq_set->index = index;
irq_set->start = 0;
fd_ptr = (int *)&irq_set->data;
- *fd_ptr = intr_handle->fd;
+ *fd_ptr = rte_intr_fd_get(intr_handle);
- ret = ioctl(intr_handle->vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
+ vfio_dev_fd = rte_intr_dev_fd_get(intr_handle);
+ ret = ioctl(vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
if (ret) {
DPAA2_BUS_ERR("Error:dpaa2 SET IRQs fd=%d, err = %d(%s)",
- intr_handle->fd, errno, strerror(errno));
+ rte_intr_fd_get(intr_handle), errno,
+ strerror(errno));
return ret;
}
@@ -627,7 +629,7 @@ int rte_dpaa2_intr_disable(struct rte_intr_handle *intr_handle, int index)
{
struct vfio_irq_set *irq_set;
char irq_set_buf[IRQ_SET_BUF_LEN];
- int len, ret;
+ int len, ret, vfio_dev_fd;
len = sizeof(struct vfio_irq_set);
@@ -638,11 +640,12 @@ int rte_dpaa2_intr_disable(struct rte_intr_handle *intr_handle, int index)
irq_set->start = 0;
irq_set->count = 0;
- ret = ioctl(intr_handle->vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
+ vfio_dev_fd = rte_intr_dev_fd_get(intr_handle);
+ ret = ioctl(vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
if (ret)
DPAA2_BUS_ERR(
"Error disabling dpaa2 interrupts for fd %d",
- intr_handle->fd);
+ rte_intr_fd_get(intr_handle));
return ret;
}
@@ -684,9 +687,14 @@ rte_dpaa2_vfio_setup_intr(struct rte_intr_handle *intr_handle,
return -1;
}
- intr_handle->fd = fd;
- intr_handle->type = RTE_INTR_HANDLE_VFIO_MSI;
- intr_handle->vfio_dev_fd = vfio_dev_fd;
+ if (rte_intr_fd_set(intr_handle, fd))
+ return -rte_errno;
+
+ if (rte_intr_type_set(intr_handle, RTE_INTR_HANDLE_VFIO_MSI))
+ return -rte_errno;
+
+ if (rte_intr_dev_fd_set(intr_handle, vfio_dev_fd))
+ return -rte_errno;
return 0;
}
@@ -711,7 +719,7 @@ fslmc_process_iodevices(struct rte_dpaa2_device *dev)
switch (dev->dev_type) {
case DPAA2_ETH:
- rte_dpaa2_vfio_setup_intr(&dev->intr_handle, dev_fd,
+ rte_dpaa2_vfio_setup_intr(dev->intr_handle, dev_fd,
device_info.num_irqs);
break;
case DPAA2_CON:
diff --git a/drivers/bus/fslmc/portal/dpaa2_hw_dpio.c b/drivers/bus/fslmc/portal/dpaa2_hw_dpio.c
index 1a1e437ed1..2210a0fa4a 100644
--- a/drivers/bus/fslmc/portal/dpaa2_hw_dpio.c
+++ b/drivers/bus/fslmc/portal/dpaa2_hw_dpio.c
@@ -176,7 +176,7 @@ static int dpaa2_dpio_intr_init(struct dpaa2_dpio_dev *dpio_dev)
int threshold = 0x3, timeout = 0xFF;
dpio_epoll_fd = epoll_create(1);
- ret = rte_dpaa2_intr_enable(&dpio_dev->intr_handle, 0);
+ ret = rte_dpaa2_intr_enable(dpio_dev->intr_handle, 0);
if (ret) {
DPAA2_BUS_ERR("Interrupt registeration failed");
return -1;
@@ -195,7 +195,7 @@ static int dpaa2_dpio_intr_init(struct dpaa2_dpio_dev *dpio_dev)
qbman_swp_dqrr_thrshld_write(dpio_dev->sw_portal, threshold);
qbman_swp_intr_timeout_write(dpio_dev->sw_portal, timeout);
- eventfd = dpio_dev->intr_handle.fd;
+ eventfd = rte_intr_fd_get(dpio_dev->intr_handle);
epoll_ev.events = EPOLLIN | EPOLLPRI | EPOLLET;
epoll_ev.data.fd = eventfd;
@@ -213,7 +213,7 @@ static void dpaa2_dpio_intr_deinit(struct dpaa2_dpio_dev *dpio_dev)
{
int ret;
- ret = rte_dpaa2_intr_disable(&dpio_dev->intr_handle, 0);
+ ret = rte_dpaa2_intr_disable(dpio_dev->intr_handle, 0);
if (ret)
DPAA2_BUS_ERR("DPIO interrupt disable failed");
@@ -388,6 +388,14 @@ dpaa2_create_dpio_device(int vdev_fd,
/* Using single portal for all devices */
dpio_dev->mc_portal = dpaa2_get_mcp_ptr(MC_PORTAL_INDEX);
+ /* Allocate interrupt instance */
+ dpio_dev->intr_handle =
+ rte_intr_instance_alloc(RTE_INTR_INSTANCE_F_SHARED);
+ if (!dpio_dev->intr_handle) {
+ DPAA2_BUS_ERR("Failed to allocate intr handle");
+ goto err;
+ }
+
dpio_dev->dpio = rte_zmalloc(NULL, sizeof(struct fsl_mc_io),
RTE_CACHE_LINE_SIZE);
if (!dpio_dev->dpio) {
@@ -490,7 +498,7 @@ dpaa2_create_dpio_device(int vdev_fd,
io_space_count++;
dpio_dev->index = io_space_count;
- if (rte_dpaa2_vfio_setup_intr(&dpio_dev->intr_handle, vdev_fd, 1)) {
+ if (rte_dpaa2_vfio_setup_intr(dpio_dev->intr_handle, vdev_fd, 1)) {
DPAA2_BUS_ERR("Fail to setup interrupt for %d",
dpio_dev->hw_id);
goto err;
@@ -538,6 +546,7 @@ dpaa2_create_dpio_device(int vdev_fd,
rte_free(dpio_dev->dpio);
}
+ rte_intr_instance_free(dpio_dev->intr_handle);
rte_free(dpio_dev);
/* For each element in the list, cleanup */
@@ -549,6 +558,7 @@ dpaa2_create_dpio_device(int vdev_fd,
dpio_dev->token);
rte_free(dpio_dev->dpio);
}
+ rte_intr_instance_free(dpio_dev->intr_handle);
rte_free(dpio_dev);
}
diff --git a/drivers/bus/fslmc/portal/dpaa2_hw_pvt.h b/drivers/bus/fslmc/portal/dpaa2_hw_pvt.h
index 037c841ef5..b1bba1ac36 100644
--- a/drivers/bus/fslmc/portal/dpaa2_hw_pvt.h
+++ b/drivers/bus/fslmc/portal/dpaa2_hw_pvt.h
@@ -116,7 +116,7 @@ struct dpaa2_dpio_dev {
uintptr_t qbman_portal_ci_paddr;
/**< Physical address of Cache Inhibit Area */
uintptr_t ci_size; /**< Size of the CI region */
- struct rte_intr_handle intr_handle; /* Interrupt related info */
+ struct rte_intr_handle *intr_handle; /* Interrupt related info */
int32_t epoll_fd; /**< File descriptor created for interrupt polling */
int32_t hw_id; /**< An unique ID of this DPIO device instance */
struct dpaa2_portal_dqrr dpaa2_held_bufs;
diff --git a/drivers/bus/fslmc/rte_fslmc.h b/drivers/bus/fslmc/rte_fslmc.h
index a71cac7a9f..729f360646 100644
--- a/drivers/bus/fslmc/rte_fslmc.h
+++ b/drivers/bus/fslmc/rte_fslmc.h
@@ -122,7 +122,7 @@ struct rte_dpaa2_device {
};
enum rte_dpaa2_dev_type dev_type; /**< Device Type */
uint16_t object_id; /**< DPAA2 Object ID */
- struct rte_intr_handle intr_handle; /**< Interrupt handle */
+ struct rte_intr_handle *intr_handle; /**< Interrupt handle */
struct rte_dpaa2_driver *driver; /**< Associated driver */
char name[FSLMC_OBJECT_MAX_LEN]; /**< DPAA2 Object name*/
};
diff --git a/drivers/bus/ifpga/ifpga_bus.c b/drivers/bus/ifpga/ifpga_bus.c
index 62887da2d8..cbc6809284 100644
--- a/drivers/bus/ifpga/ifpga_bus.c
+++ b/drivers/bus/ifpga/ifpga_bus.c
@@ -161,6 +161,14 @@ ifpga_scan_one(struct rte_rawdev *rawdev,
afu_dev->id.uuid.uuid_high = 0;
afu_dev->id.port = afu_pr_conf.afu_id.port;
+ /* Allocate interrupt instance */
+ afu_dev->intr_handle =
+ rte_intr_instance_alloc(RTE_INTR_INSTANCE_F_PRIVATE);
+ if (afu_dev->intr_handle == NULL) {
+ IFPGA_BUS_ERR("Failed to allocate intr handle");
+ goto end;
+ }
+
if (rawdev->dev_ops && rawdev->dev_ops->dev_info_get)
rawdev->dev_ops->dev_info_get(rawdev, afu_dev, sizeof(*afu_dev));
@@ -189,8 +197,10 @@ ifpga_scan_one(struct rte_rawdev *rawdev,
rte_kvargs_free(kvlist);
if (path)
free(path);
- if (afu_dev)
+ if (afu_dev) {
+ rte_intr_instance_free(afu_dev->intr_handle);
free(afu_dev);
+ }
return NULL;
}
@@ -396,6 +406,7 @@ ifpga_unplug(struct rte_device *dev)
TAILQ_REMOVE(&ifpga_afu_dev_list, afu_dev, next);
rte_devargs_remove(dev->devargs);
+ rte_intr_instance_free(afu_dev->intr_handle);
free(afu_dev);
return 0;
diff --git a/drivers/bus/ifpga/rte_bus_ifpga.h b/drivers/bus/ifpga/rte_bus_ifpga.h
index a85e90d384..007ad19875 100644
--- a/drivers/bus/ifpga/rte_bus_ifpga.h
+++ b/drivers/bus/ifpga/rte_bus_ifpga.h
@@ -79,7 +79,7 @@ struct rte_afu_device {
struct rte_mem_resource mem_resource[PCI_MAX_RESOURCE];
/**< AFU Memory Resource */
struct rte_afu_shared shared;
- struct rte_intr_handle intr_handle; /**< Interrupt handle */
+ struct rte_intr_handle *intr_handle; /**< Interrupt handle */
struct rte_afu_driver *driver; /**< Associated driver */
char path[IFPGA_BUS_BITSTREAM_PATH_MAX_LEN];
} __rte_packed;
diff --git a/drivers/bus/pci/bsd/pci.c b/drivers/bus/pci/bsd/pci.c
index d189bff311..9a11f99ae3 100644
--- a/drivers/bus/pci/bsd/pci.c
+++ b/drivers/bus/pci/bsd/pci.c
@@ -95,10 +95,10 @@ pci_uio_free_resource(struct rte_pci_device *dev,
{
rte_free(uio_res);
- if (dev->intr_handle.fd) {
- close(dev->intr_handle.fd);
- dev->intr_handle.fd = -1;
- dev->intr_handle.type = RTE_INTR_HANDLE_UNKNOWN;
+ if (rte_intr_fd_get(dev->intr_handle)) {
+ close(rte_intr_fd_get(dev->intr_handle));
+ rte_intr_fd_set(dev->intr_handle, -1);
+ rte_intr_type_set(dev->intr_handle, RTE_INTR_HANDLE_UNKNOWN);
}
}
@@ -121,13 +121,19 @@ pci_uio_alloc_resource(struct rte_pci_device *dev,
}
/* save fd if in primary process */
- dev->intr_handle.fd = open(devname, O_RDWR);
- if (dev->intr_handle.fd < 0) {
+ if (rte_intr_fd_set(dev->intr_handle, open(devname, O_RDWR))) {
+ RTE_LOG(WARNING, EAL, "Failed to save fd");
+ goto error;
+ }
+
+ if (rte_intr_fd_get(dev->intr_handle) < 0) {
RTE_LOG(ERR, EAL, "Cannot open %s: %s\n",
devname, strerror(errno));
goto error;
}
- dev->intr_handle.type = RTE_INTR_HANDLE_UIO;
+
+ if (rte_intr_type_set(dev->intr_handle, RTE_INTR_HANDLE_UIO))
+ goto error;
/* allocate the mapping details for secondary processes*/
*uio_res = rte_zmalloc("UIO_RES", sizeof(**uio_res), 0);
diff --git a/drivers/bus/pci/linux/pci.c b/drivers/bus/pci/linux/pci.c
index 4d261b55ee..e521459870 100644
--- a/drivers/bus/pci/linux/pci.c
+++ b/drivers/bus/pci/linux/pci.c
@@ -645,7 +645,7 @@ int rte_pci_read_config(const struct rte_pci_device *device,
void *buf, size_t len, off_t offset)
{
char devname[RTE_DEV_NAME_MAX_LEN] = "";
- const struct rte_intr_handle *intr_handle = &device->intr_handle;
+ const struct rte_intr_handle *intr_handle = device->intr_handle;
switch (device->kdrv) {
case RTE_PCI_KDRV_IGB_UIO:
@@ -669,7 +669,7 @@ int rte_pci_write_config(const struct rte_pci_device *device,
const void *buf, size_t len, off_t offset)
{
char devname[RTE_DEV_NAME_MAX_LEN] = "";
- const struct rte_intr_handle *intr_handle = &device->intr_handle;
+ const struct rte_intr_handle *intr_handle = device->intr_handle;
switch (device->kdrv) {
case RTE_PCI_KDRV_IGB_UIO:
diff --git a/drivers/bus/pci/linux/pci_uio.c b/drivers/bus/pci/linux/pci_uio.c
index 39ebeac2a0..2ee5d04672 100644
--- a/drivers/bus/pci/linux/pci_uio.c
+++ b/drivers/bus/pci/linux/pci_uio.c
@@ -35,14 +35,18 @@ int
pci_uio_read_config(const struct rte_intr_handle *intr_handle,
void *buf, size_t len, off_t offset)
{
- return pread(intr_handle->uio_cfg_fd, buf, len, offset);
+ int uio_cfg_fd = rte_intr_dev_fd_get(intr_handle);
+
+ return pread(uio_cfg_fd, buf, len, offset);
}
int
pci_uio_write_config(const struct rte_intr_handle *intr_handle,
const void *buf, size_t len, off_t offset)
{
- return pwrite(intr_handle->uio_cfg_fd, buf, len, offset);
+ int uio_cfg_fd = rte_intr_dev_fd_get(intr_handle);
+
+ return pwrite(uio_cfg_fd, buf, len, offset);
}
static int
@@ -198,16 +202,19 @@ void
pci_uio_free_resource(struct rte_pci_device *dev,
struct mapped_pci_resource *uio_res)
{
+ int uio_cfg_fd = rte_intr_dev_fd_get(dev->intr_handle);
+
rte_free(uio_res);
- if (dev->intr_handle.uio_cfg_fd >= 0) {
- close(dev->intr_handle.uio_cfg_fd);
- dev->intr_handle.uio_cfg_fd = -1;
+ if (uio_cfg_fd >= 0) {
+ close(uio_cfg_fd);
+ rte_intr_dev_fd_set(dev->intr_handle, -1);
}
- if (dev->intr_handle.fd >= 0) {
- close(dev->intr_handle.fd);
- dev->intr_handle.fd = -1;
- dev->intr_handle.type = RTE_INTR_HANDLE_UNKNOWN;
+
+ if (rte_intr_fd_get(dev->intr_handle) >= 0) {
+ close(rte_intr_fd_get(dev->intr_handle));
+ rte_intr_fd_set(dev->intr_handle, -1);
+ rte_intr_type_set(dev->intr_handle, RTE_INTR_HANDLE_UNKNOWN);
}
}
@@ -218,7 +225,7 @@ pci_uio_alloc_resource(struct rte_pci_device *dev,
char dirname[PATH_MAX];
char cfgname[PATH_MAX];
char devname[PATH_MAX]; /* contains the /dev/uioX */
- int uio_num;
+ int uio_num, fd, uio_cfg_fd;
struct rte_pci_addr *loc;
loc = &dev->addr;
@@ -233,29 +240,38 @@ pci_uio_alloc_resource(struct rte_pci_device *dev,
snprintf(devname, sizeof(devname), "/dev/uio%u", uio_num);
/* save fd if in primary process */
- dev->intr_handle.fd = open(devname, O_RDWR);
- if (dev->intr_handle.fd < 0) {
+ fd = open(devname, O_RDWR);
+ if (fd < 0) {
RTE_LOG(ERR, EAL, "Cannot open %s: %s\n",
devname, strerror(errno));
goto error;
}
+ if (rte_intr_fd_set(dev->intr_handle, fd))
+ goto error;
+
snprintf(cfgname, sizeof(cfgname),
"/sys/class/uio/uio%u/device/config", uio_num);
- dev->intr_handle.uio_cfg_fd = open(cfgname, O_RDWR);
- if (dev->intr_handle.uio_cfg_fd < 0) {
+
+ uio_cfg_fd = open(cfgname, O_RDWR);
+ if (uio_cfg_fd < 0) {
RTE_LOG(ERR, EAL, "Cannot open %s: %s\n",
cfgname, strerror(errno));
goto error;
}
- if (dev->kdrv == RTE_PCI_KDRV_IGB_UIO)
- dev->intr_handle.type = RTE_INTR_HANDLE_UIO;
- else {
- dev->intr_handle.type = RTE_INTR_HANDLE_UIO_INTX;
+ if (rte_intr_dev_fd_set(dev->intr_handle, uio_cfg_fd))
+ goto error;
+
+ if (dev->kdrv == RTE_PCI_KDRV_IGB_UIO) {
+ if (rte_intr_type_set(dev->intr_handle, RTE_INTR_HANDLE_UIO))
+ goto error;
+ } else {
+ if (rte_intr_type_set(dev->intr_handle, RTE_INTR_HANDLE_UIO_INTX))
+ goto error;
/* set bus master that is not done by uio_pci_generic */
- if (pci_uio_set_bus_master(dev->intr_handle.uio_cfg_fd)) {
+ if (pci_uio_set_bus_master(uio_cfg_fd)) {
RTE_LOG(ERR, EAL, "Cannot set up bus mastering!\n");
goto error;
}
@@ -381,7 +397,7 @@ pci_uio_ioport_map(struct rte_pci_device *dev, int bar,
char buf[BUFSIZ];
uint64_t phys_addr, end_addr, flags;
unsigned long base;
- int i;
+ int i, fd;
/* open and read addresses of the corresponding resource in sysfs */
snprintf(filename, sizeof(filename), "%s/" PCI_PRI_FMT "/resource",
@@ -427,7 +443,8 @@ pci_uio_ioport_map(struct rte_pci_device *dev, int bar,
}
/* FIXME only for primary process ? */
- if (dev->intr_handle.type == RTE_INTR_HANDLE_UNKNOWN) {
+ if (rte_intr_type_get(dev->intr_handle) ==
+ RTE_INTR_HANDLE_UNKNOWN) {
int uio_num = pci_get_uio_dev(dev, dirname, sizeof(dirname), 0);
if (uio_num < 0) {
RTE_LOG(ERR, EAL, "cannot open %s: %s\n",
@@ -436,13 +453,17 @@ pci_uio_ioport_map(struct rte_pci_device *dev, int bar,
}
snprintf(filename, sizeof(filename), "/dev/uio%u", uio_num);
- dev->intr_handle.fd = open(filename, O_RDWR);
- if (dev->intr_handle.fd < 0) {
+ fd = open(filename, O_RDWR);
+ if (fd < 0) {
RTE_LOG(ERR, EAL, "Cannot open %s: %s\n",
filename, strerror(errno));
goto error;
}
- dev->intr_handle.type = RTE_INTR_HANDLE_UIO;
+ if (rte_intr_fd_set(dev->intr_handle, fd))
+ goto error;
+
+ if (rte_intr_type_set(dev->intr_handle, RTE_INTR_HANDLE_UIO))
+ goto error;
}
RTE_LOG(DEBUG, EAL, "PCI Port IO found start=0x%lx\n", base);
diff --git a/drivers/bus/pci/linux/pci_vfio.c b/drivers/bus/pci/linux/pci_vfio.c
index a024269140..7b2f8296c5 100644
--- a/drivers/bus/pci/linux/pci_vfio.c
+++ b/drivers/bus/pci/linux/pci_vfio.c
@@ -47,7 +47,9 @@ int
pci_vfio_read_config(const struct rte_intr_handle *intr_handle,
void *buf, size_t len, off_t offs)
{
- return pread64(intr_handle->vfio_dev_fd, buf, len,
+ int vfio_dev_fd = rte_intr_dev_fd_get(intr_handle);
+
+ return pread64(vfio_dev_fd, buf, len,
VFIO_GET_REGION_ADDR(VFIO_PCI_CONFIG_REGION_INDEX) + offs);
}
@@ -55,7 +57,9 @@ int
pci_vfio_write_config(const struct rte_intr_handle *intr_handle,
const void *buf, size_t len, off_t offs)
{
- return pwrite64(intr_handle->vfio_dev_fd, buf, len,
+ int vfio_dev_fd = rte_intr_dev_fd_get(intr_handle);
+
+ return pwrite64(vfio_dev_fd, buf, len,
VFIO_GET_REGION_ADDR(VFIO_PCI_CONFIG_REGION_INDEX) + offs);
}
@@ -281,21 +285,27 @@ pci_vfio_setup_interrupts(struct rte_pci_device *dev, int vfio_dev_fd)
return -1;
}
- dev->intr_handle.fd = fd;
- dev->intr_handle.vfio_dev_fd = vfio_dev_fd;
+ if (rte_intr_fd_set(dev->intr_handle, fd))
+ return -1;
+
+ if (rte_intr_dev_fd_set(dev->intr_handle, vfio_dev_fd))
+ return -1;
switch (i) {
case VFIO_PCI_MSIX_IRQ_INDEX:
intr_mode = RTE_INTR_MODE_MSIX;
- dev->intr_handle.type = RTE_INTR_HANDLE_VFIO_MSIX;
+ rte_intr_type_set(dev->intr_handle,
+ RTE_INTR_HANDLE_VFIO_MSIX);
break;
case VFIO_PCI_MSI_IRQ_INDEX:
intr_mode = RTE_INTR_MODE_MSI;
- dev->intr_handle.type = RTE_INTR_HANDLE_VFIO_MSI;
+ rte_intr_type_set(dev->intr_handle,
+ RTE_INTR_HANDLE_VFIO_MSI);
break;
case VFIO_PCI_INTX_IRQ_INDEX:
intr_mode = RTE_INTR_MODE_LEGACY;
- dev->intr_handle.type = RTE_INTR_HANDLE_VFIO_LEGACY;
+ rte_intr_type_set(dev->intr_handle,
+ RTE_INTR_HANDLE_VFIO_LEGACY);
break;
default:
RTE_LOG(ERR, EAL, "Unknown interrupt type!\n");
@@ -362,11 +372,16 @@ pci_vfio_enable_notifier(struct rte_pci_device *dev, int vfio_dev_fd)
return -1;
}
- dev->vfio_req_intr_handle.fd = fd;
- dev->vfio_req_intr_handle.type = RTE_INTR_HANDLE_VFIO_REQ;
- dev->vfio_req_intr_handle.vfio_dev_fd = vfio_dev_fd;
+ if (rte_intr_fd_set(dev->vfio_req_intr_handle, fd))
+ return -1;
+
+ if (rte_intr_type_set(dev->vfio_req_intr_handle, RTE_INTR_HANDLE_VFIO_REQ))
+ return -1;
+
+ if (rte_intr_dev_fd_set(dev->vfio_req_intr_handle, vfio_dev_fd))
+ return -1;
- ret = rte_intr_callback_register(&dev->vfio_req_intr_handle,
+ ret = rte_intr_callback_register(dev->vfio_req_intr_handle,
pci_vfio_req_handler,
(void *)&dev->device);
if (ret) {
@@ -374,10 +389,10 @@ pci_vfio_enable_notifier(struct rte_pci_device *dev, int vfio_dev_fd)
goto error;
}
- ret = rte_intr_enable(&dev->vfio_req_intr_handle);
+ ret = rte_intr_enable(dev->vfio_req_intr_handle);
if (ret) {
RTE_LOG(ERR, EAL, "Fail to enable req notifier.\n");
- ret = rte_intr_callback_unregister(&dev->vfio_req_intr_handle,
+ ret = rte_intr_callback_unregister(dev->vfio_req_intr_handle,
pci_vfio_req_handler,
(void *)&dev->device);
if (ret < 0)
@@ -390,9 +405,9 @@ pci_vfio_enable_notifier(struct rte_pci_device *dev, int vfio_dev_fd)
error:
close(fd);
- dev->vfio_req_intr_handle.fd = -1;
- dev->vfio_req_intr_handle.type = RTE_INTR_HANDLE_UNKNOWN;
- dev->vfio_req_intr_handle.vfio_dev_fd = -1;
+ rte_intr_fd_set(dev->vfio_req_intr_handle, -1);
+ rte_intr_type_set(dev->vfio_req_intr_handle, RTE_INTR_HANDLE_UNKNOWN);
+ rte_intr_dev_fd_set(dev->vfio_req_intr_handle, -1);
return -1;
}
@@ -403,13 +418,13 @@ pci_vfio_disable_notifier(struct rte_pci_device *dev)
{
int ret;
- ret = rte_intr_disable(&dev->vfio_req_intr_handle);
+ ret = rte_intr_disable(dev->vfio_req_intr_handle);
if (ret) {
RTE_LOG(ERR, EAL, "fail to disable req notifier.\n");
return -1;
}
- ret = rte_intr_callback_unregister_sync(&dev->vfio_req_intr_handle,
+ ret = rte_intr_callback_unregister_sync(dev->vfio_req_intr_handle,
pci_vfio_req_handler,
(void *)&dev->device);
if (ret < 0) {
@@ -418,11 +433,11 @@ pci_vfio_disable_notifier(struct rte_pci_device *dev)
return -1;
}
- close(dev->vfio_req_intr_handle.fd);
+ close(rte_intr_fd_get(dev->vfio_req_intr_handle));
- dev->vfio_req_intr_handle.fd = -1;
- dev->vfio_req_intr_handle.type = RTE_INTR_HANDLE_UNKNOWN;
- dev->vfio_req_intr_handle.vfio_dev_fd = -1;
+ rte_intr_fd_set(dev->vfio_req_intr_handle, -1);
+ rte_intr_type_set(dev->vfio_req_intr_handle, RTE_INTR_HANDLE_UNKNOWN);
+ rte_intr_dev_fd_set(dev->vfio_req_intr_handle, -1);
return 0;
}
@@ -705,9 +720,12 @@ pci_vfio_map_resource_primary(struct rte_pci_device *dev)
struct pci_map *maps;
- dev->intr_handle.fd = -1;
+ if (rte_intr_fd_set(dev->intr_handle, -1))
+ return -1;
+
#ifdef HAVE_VFIO_DEV_REQ_INTERFACE
- dev->vfio_req_intr_handle.fd = -1;
+ if (rte_intr_fd_set(dev->vfio_req_intr_handle, -1))
+ return -1;
#endif
/* store PCI address string */
@@ -854,9 +872,11 @@ pci_vfio_map_resource_secondary(struct rte_pci_device *dev)
struct pci_map *maps;
- dev->intr_handle.fd = -1;
+ if (rte_intr_fd_set(dev->intr_handle, -1))
+ return -1;
#ifdef HAVE_VFIO_DEV_REQ_INTERFACE
- dev->vfio_req_intr_handle.fd = -1;
+ if (rte_intr_fd_set(dev->vfio_req_intr_handle, -1))
+ return -1;
#endif
/* store PCI address string */
@@ -897,9 +917,11 @@ pci_vfio_map_resource_secondary(struct rte_pci_device *dev)
}
/* we need save vfio_dev_fd, so it can be used during release */
- dev->intr_handle.vfio_dev_fd = vfio_dev_fd;
+ if (rte_intr_dev_fd_set(dev->intr_handle, vfio_dev_fd))
+ goto err_vfio_dev_fd;
#ifdef HAVE_VFIO_DEV_REQ_INTERFACE
- dev->vfio_req_intr_handle.vfio_dev_fd = vfio_dev_fd;
+ if (rte_intr_dev_fd_set(dev->vfio_req_intr_handle, vfio_dev_fd))
+ goto err_vfio_dev_fd;
#endif
return 0;
@@ -968,7 +990,7 @@ pci_vfio_unmap_resource_primary(struct rte_pci_device *dev)
struct rte_pci_addr *loc = &dev->addr;
struct mapped_pci_resource *vfio_res = NULL;
struct mapped_pci_res_list *vfio_res_list;
- int ret;
+ int ret, vfio_dev_fd;
/* store PCI address string */
snprintf(pci_addr, sizeof(pci_addr), PCI_PRI_FMT,
@@ -982,20 +1004,21 @@ pci_vfio_unmap_resource_primary(struct rte_pci_device *dev)
}
#endif
- if (close(dev->intr_handle.fd) < 0) {
+ if (close(rte_intr_fd_get(dev->intr_handle)) < 0) {
RTE_LOG(INFO, EAL, "Error when closing eventfd file descriptor for %s\n",
pci_addr);
return -1;
}
- if (pci_vfio_set_bus_master(dev->intr_handle.vfio_dev_fd, false)) {
+ vfio_dev_fd = rte_intr_dev_fd_get(dev->intr_handle);
+ if (pci_vfio_set_bus_master(vfio_dev_fd, false)) {
RTE_LOG(ERR, EAL, "%s cannot unset bus mastering for PCI device!\n",
pci_addr);
return -1;
}
ret = rte_vfio_release_device(rte_pci_get_sysfs_path(), pci_addr,
- dev->intr_handle.vfio_dev_fd);
+ vfio_dev_fd);
if (ret < 0) {
RTE_LOG(ERR, EAL, "Cannot release VFIO device\n");
return ret;
@@ -1024,14 +1047,15 @@ pci_vfio_unmap_resource_secondary(struct rte_pci_device *dev)
struct rte_pci_addr *loc = &dev->addr;
struct mapped_pci_resource *vfio_res = NULL;
struct mapped_pci_res_list *vfio_res_list;
- int ret;
+ int ret, vfio_dev_fd;
/* store PCI address string */
snprintf(pci_addr, sizeof(pci_addr), PCI_PRI_FMT,
loc->domain, loc->bus, loc->devid, loc->function);
+ vfio_dev_fd = rte_intr_dev_fd_get(dev->intr_handle);
ret = rte_vfio_release_device(rte_pci_get_sysfs_path(), pci_addr,
- dev->intr_handle.vfio_dev_fd);
+ vfio_dev_fd);
if (ret < 0) {
RTE_LOG(ERR, EAL, "Cannot release VFIO device\n");
return ret;
@@ -1079,9 +1103,10 @@ void
pci_vfio_ioport_read(struct rte_pci_ioport *p,
void *data, size_t len, off_t offset)
{
- const struct rte_intr_handle *intr_handle = &p->dev->intr_handle;
+ const struct rte_intr_handle *intr_handle = p->dev->intr_handle;
+ int vfio_dev_fd = rte_intr_dev_fd_get(intr_handle);
- if (pread64(intr_handle->vfio_dev_fd, data,
+ if (pread64(vfio_dev_fd, data,
len, p->base + offset) <= 0)
RTE_LOG(ERR, EAL,
"Can't read from PCI bar (%" PRIu64 ") : offset (%x)\n",
@@ -1092,9 +1117,10 @@ void
pci_vfio_ioport_write(struct rte_pci_ioport *p,
const void *data, size_t len, off_t offset)
{
- const struct rte_intr_handle *intr_handle = &p->dev->intr_handle;
+ const struct rte_intr_handle *intr_handle = p->dev->intr_handle;
+ int vfio_dev_fd = rte_intr_dev_fd_get(intr_handle);
- if (pwrite64(intr_handle->vfio_dev_fd, data,
+ if (pwrite64(vfio_dev_fd, data,
len, p->base + offset) <= 0)
RTE_LOG(ERR, EAL,
"Can't write to PCI bar (%" PRIu64 ") : offset (%x)\n",
diff --git a/drivers/bus/pci/pci_common.c b/drivers/bus/pci/pci_common.c
index 3406e03b29..f8fff2c98e 100644
--- a/drivers/bus/pci/pci_common.c
+++ b/drivers/bus/pci/pci_common.c
@@ -226,16 +226,39 @@ rte_pci_probe_one_driver(struct rte_pci_driver *dr,
return -EINVAL;
}
- dev->driver = dr;
- }
+ /* Allocate interrupt instance for pci device */
+ dev->intr_handle =
+ rte_intr_instance_alloc(RTE_INTR_INSTANCE_F_PRIVATE);
+ if (dev->intr_handle == NULL) {
+ RTE_LOG(ERR, EAL,
+ "Failed to create interrupt instance for %s\n",
+ dev->device.name);
+ return -ENOMEM;
+ }
- if (!already_probed && (dr->drv_flags & RTE_PCI_DRV_NEED_MAPPING)) {
- /* map resources for devices that use igb_uio */
- ret = rte_pci_map_device(dev);
- if (ret != 0) {
- dev->driver = NULL;
- return ret;
+ dev->vfio_req_intr_handle =
+ rte_intr_instance_alloc(RTE_INTR_INSTANCE_F_PRIVATE);
+ if (dev->vfio_req_intr_handle == NULL) {
+ rte_intr_instance_free(dev->intr_handle);
+ dev->intr_handle = NULL;
+ RTE_LOG(ERR, EAL,
+ "Failed to create vfio req interrupt instance for %s\n",
+ dev->device.name);
+ return -ENOMEM;
+ }
+
+ if (dr->drv_flags & RTE_PCI_DRV_NEED_MAPPING) {
+ ret = rte_pci_map_device(dev);
+ if (ret != 0) {
+ rte_intr_instance_free(dev->vfio_req_intr_handle);
+ dev->vfio_req_intr_handle = NULL;
+ rte_intr_instance_free(dev->intr_handle);
+ dev->intr_handle = NULL;
+ return ret;
+ }
}
+
+ dev->driver = dr;
}
RTE_LOG(INFO, EAL, "Probe PCI driver: %s (%x:%x) device: "PCI_PRI_FMT" (socket %i)\n",
@@ -248,6 +271,10 @@ rte_pci_probe_one_driver(struct rte_pci_driver *dr,
return ret; /* no rollback if already succeeded earlier */
if (ret) {
dev->driver = NULL;
+ rte_intr_instance_free(dev->vfio_req_intr_handle);
+ dev->vfio_req_intr_handle = NULL;
+ rte_intr_instance_free(dev->intr_handle);
+ dev->intr_handle = NULL;
if ((dr->drv_flags & RTE_PCI_DRV_NEED_MAPPING) &&
/* Don't unmap if device is unsupported and
* driver needs mapped resources.
@@ -295,6 +322,10 @@ rte_pci_detach_dev(struct rte_pci_device *dev)
/* clear driver structure */
dev->driver = NULL;
dev->device.driver = NULL;
+ rte_intr_instance_free(dev->intr_handle);
+ dev->intr_handle = NULL;
+ rte_intr_instance_free(dev->vfio_req_intr_handle);
+ dev->vfio_req_intr_handle = NULL;
if (dr->drv_flags & RTE_PCI_DRV_NEED_MAPPING)
/* unmap resources for devices that use igb_uio */
diff --git a/drivers/bus/pci/pci_common_uio.c b/drivers/bus/pci/pci_common_uio.c
index 318f9a1d55..244c9a8940 100644
--- a/drivers/bus/pci/pci_common_uio.c
+++ b/drivers/bus/pci/pci_common_uio.c
@@ -90,8 +90,11 @@ pci_uio_map_resource(struct rte_pci_device *dev)
struct mapped_pci_res_list *uio_res_list =
RTE_TAILQ_CAST(rte_uio_tailq.head, mapped_pci_res_list);
- dev->intr_handle.fd = -1;
- dev->intr_handle.uio_cfg_fd = -1;
+ if (rte_intr_fd_set(dev->intr_handle, -1))
+ return -1;
+
+ if (rte_intr_dev_fd_set(dev->intr_handle, -1))
+ return -1;
/* secondary processes - use already recorded details */
if (rte_eal_process_type() != RTE_PROC_PRIMARY)
@@ -207,6 +210,7 @@ pci_uio_unmap_resource(struct rte_pci_device *dev)
struct mapped_pci_resource *uio_res;
struct mapped_pci_res_list *uio_res_list =
RTE_TAILQ_CAST(rte_uio_tailq.head, mapped_pci_res_list);
+ int uio_cfg_fd;
if (dev == NULL)
return;
@@ -229,12 +233,13 @@ pci_uio_unmap_resource(struct rte_pci_device *dev)
rte_free(uio_res);
/* close fd if in primary process */
- close(dev->intr_handle.fd);
- if (dev->intr_handle.uio_cfg_fd >= 0) {
- close(dev->intr_handle.uio_cfg_fd);
- dev->intr_handle.uio_cfg_fd = -1;
+ close(rte_intr_fd_get(dev->intr_handle));
+ uio_cfg_fd = rte_intr_dev_fd_get(dev->intr_handle);
+ if (uio_cfg_fd >= 0) {
+ close(uio_cfg_fd);
+ rte_intr_dev_fd_set(dev->intr_handle, -1);
}
- dev->intr_handle.fd = -1;
- dev->intr_handle.type = RTE_INTR_HANDLE_UNKNOWN;
+ rte_intr_fd_set(dev->intr_handle, -1);
+ rte_intr_type_set(dev->intr_handle, RTE_INTR_HANDLE_UNKNOWN);
}
diff --git a/drivers/bus/pci/rte_bus_pci.h b/drivers/bus/pci/rte_bus_pci.h
index 673a2850c1..1c6a8fdd7b 100644
--- a/drivers/bus/pci/rte_bus_pci.h
+++ b/drivers/bus/pci/rte_bus_pci.h
@@ -69,12 +69,12 @@ struct rte_pci_device {
struct rte_pci_id id; /**< PCI ID. */
struct rte_mem_resource mem_resource[PCI_MAX_RESOURCE];
/**< PCI Memory Resource */
- struct rte_intr_handle intr_handle; /**< Interrupt handle */
+ struct rte_intr_handle *intr_handle; /**< Interrupt handle */
struct rte_pci_driver *driver; /**< PCI driver used in probing */
uint16_t max_vfs; /**< sriov enable if not zero */
enum rte_pci_kernel_driver kdrv; /**< Kernel driver passthrough */
char name[PCI_PRI_STR_SIZE+1]; /**< PCI location (ASCII) */
- struct rte_intr_handle vfio_req_intr_handle;
+ struct rte_intr_handle *vfio_req_intr_handle;
/**< Handler of VFIO request interrupt */
};
diff --git a/drivers/bus/vmbus/linux/vmbus_bus.c b/drivers/bus/vmbus/linux/vmbus_bus.c
index 68f6cc5742..f502783f7a 100644
--- a/drivers/bus/vmbus/linux/vmbus_bus.c
+++ b/drivers/bus/vmbus/linux/vmbus_bus.c
@@ -299,6 +299,12 @@ vmbus_scan_one(const char *name)
dev->device.devargs = vmbus_devargs_lookup(dev);
+ /* Allocate interrupt handle instance */
+ dev->intr_handle =
+ rte_intr_instance_alloc(RTE_INTR_INSTANCE_F_PRIVATE);
+ if (dev->intr_handle == NULL)
+ goto error;
+
/* device is valid, add in list (sorted) */
VMBUS_LOG(DEBUG, "Adding vmbus device %s", name);
diff --git a/drivers/bus/vmbus/linux/vmbus_uio.c b/drivers/bus/vmbus/linux/vmbus_uio.c
index 70b0d098e0..9c5c1aeca3 100644
--- a/drivers/bus/vmbus/linux/vmbus_uio.c
+++ b/drivers/bus/vmbus/linux/vmbus_uio.c
@@ -30,9 +30,11 @@ static void *vmbus_map_addr;
/* Control interrupts */
void vmbus_uio_irq_control(struct rte_vmbus_device *dev, int32_t onoff)
{
- if (write(dev->intr_handle.fd, &onoff, sizeof(onoff)) < 0) {
+ if (write(rte_intr_fd_get(dev->intr_handle), &onoff,
+ sizeof(onoff)) < 0) {
VMBUS_LOG(ERR, "cannot write to %d:%s",
- dev->intr_handle.fd, strerror(errno));
+ rte_intr_fd_get(dev->intr_handle),
+ strerror(errno));
}
}
@@ -41,7 +43,8 @@ int vmbus_uio_irq_read(struct rte_vmbus_device *dev)
int32_t count;
int cc;
- cc = read(dev->intr_handle.fd, &count, sizeof(count));
+ cc = read(rte_intr_fd_get(dev->intr_handle), &count,
+ sizeof(count));
if (cc < (int)sizeof(count)) {
if (cc < 0) {
VMBUS_LOG(ERR, "IRQ read failed %s",
@@ -61,15 +64,15 @@ vmbus_uio_free_resource(struct rte_vmbus_device *dev,
{
rte_free(uio_res);
- if (dev->intr_handle.uio_cfg_fd >= 0) {
- close(dev->intr_handle.uio_cfg_fd);
- dev->intr_handle.uio_cfg_fd = -1;
+ if (rte_intr_dev_fd_get(dev->intr_handle) >= 0) {
+ close(rte_intr_dev_fd_get(dev->intr_handle));
+ rte_intr_dev_fd_set(dev->intr_handle, -1);
}
- if (dev->intr_handle.fd >= 0) {
- close(dev->intr_handle.fd);
- dev->intr_handle.fd = -1;
- dev->intr_handle.type = RTE_INTR_HANDLE_UNKNOWN;
+ if (rte_intr_fd_get(dev->intr_handle) >= 0) {
+ close(rte_intr_fd_get(dev->intr_handle));
+ rte_intr_fd_set(dev->intr_handle, -1);
+ rte_intr_type_set(dev->intr_handle, RTE_INTR_HANDLE_UNKNOWN);
}
}
@@ -78,16 +81,22 @@ vmbus_uio_alloc_resource(struct rte_vmbus_device *dev,
struct mapped_vmbus_resource **uio_res)
{
char devname[PATH_MAX]; /* contains the /dev/uioX */
+ int fd;
/* save fd if in primary process */
snprintf(devname, sizeof(devname), "/dev/uio%u", dev->uio_num);
- dev->intr_handle.fd = open(devname, O_RDWR);
- if (dev->intr_handle.fd < 0) {
+ fd = open(devname, O_RDWR);
+ if (fd < 0) {
VMBUS_LOG(ERR, "Cannot open %s: %s",
devname, strerror(errno));
goto error;
}
- dev->intr_handle.type = RTE_INTR_HANDLE_UIO_INTX;
+
+ if (rte_intr_fd_set(dev->intr_handle, fd))
+ goto error;
+
+ if (rte_intr_type_set(dev->intr_handle, RTE_INTR_HANDLE_UIO_INTX))
+ goto error;
/* allocate the mapping details for secondary processes*/
*uio_res = rte_zmalloc("UIO_RES", sizeof(**uio_res), 0);
diff --git a/drivers/bus/vmbus/rte_bus_vmbus.h b/drivers/bus/vmbus/rte_bus_vmbus.h
index 6bcff66468..466d42d277 100644
--- a/drivers/bus/vmbus/rte_bus_vmbus.h
+++ b/drivers/bus/vmbus/rte_bus_vmbus.h
@@ -73,7 +73,7 @@ struct rte_vmbus_device {
struct vmbus_channel *primary; /**< VMBUS primary channel */
struct vmbus_mon_page *monitor_page; /**< VMBUS monitor page */
- struct rte_intr_handle intr_handle; /**< Interrupt handle */
+ struct rte_intr_handle *intr_handle; /**< Interrupt handle */
struct rte_mem_resource resource[VMBUS_MAX_RESOURCE];
};
diff --git a/drivers/bus/vmbus/vmbus_common_uio.c b/drivers/bus/vmbus/vmbus_common_uio.c
index 041712fe75..336296d6a8 100644
--- a/drivers/bus/vmbus/vmbus_common_uio.c
+++ b/drivers/bus/vmbus/vmbus_common_uio.c
@@ -171,9 +171,14 @@ vmbus_uio_map_resource(struct rte_vmbus_device *dev)
int ret;
/* TODO: handle rescind */
- dev->intr_handle.fd = -1;
- dev->intr_handle.uio_cfg_fd = -1;
- dev->intr_handle.type = RTE_INTR_HANDLE_UNKNOWN;
+ if (rte_intr_fd_set(dev->intr_handle, -1))
+ return -1;
+
+ if (rte_intr_dev_fd_set(dev->intr_handle, -1))
+ return -1;
+
+ if (rte_intr_type_set(dev->intr_handle, RTE_INTR_HANDLE_UNKNOWN))
+ return -1;
/* secondary processes - use already recorded details */
if (rte_eal_process_type() != RTE_PROC_PRIMARY)
@@ -253,12 +258,12 @@ vmbus_uio_unmap_resource(struct rte_vmbus_device *dev)
rte_free(uio_res);
/* close fd if in primary process */
- close(dev->intr_handle.fd);
- if (dev->intr_handle.uio_cfg_fd >= 0) {
- close(dev->intr_handle.uio_cfg_fd);
- dev->intr_handle.uio_cfg_fd = -1;
+ close(rte_intr_fd_get(dev->intr_handle));
+ if (rte_intr_dev_fd_get(dev->intr_handle) >= 0) {
+ close(rte_intr_dev_fd_get(dev->intr_handle));
+ rte_intr_dev_fd_set(dev->intr_handle, -1);
}
- dev->intr_handle.fd = -1;
- dev->intr_handle.type = RTE_INTR_HANDLE_UNKNOWN;
+ rte_intr_fd_set(dev->intr_handle, -1);
+ rte_intr_type_set(dev->intr_handle, RTE_INTR_HANDLE_UNKNOWN);
}
diff --git a/drivers/common/cnxk/roc_cpt.c b/drivers/common/cnxk/roc_cpt.c
index 56744184ae..f0e52ae18f 100644
--- a/drivers/common/cnxk/roc_cpt.c
+++ b/drivers/common/cnxk/roc_cpt.c
@@ -65,7 +65,7 @@ cpt_lf_register_misc_irq(struct roc_cpt_lf *lf)
struct plt_intr_handle *handle;
int rc, vec;
- handle = &pci_dev->intr_handle;
+ handle = pci_dev->intr_handle;
vec = lf->msixoff + CPT_LF_INT_VEC_MISC;
/* Clear err interrupt */
@@ -85,7 +85,7 @@ cpt_lf_unregister_misc_irq(struct roc_cpt_lf *lf)
struct plt_intr_handle *handle;
int vec;
- handle = &pci_dev->intr_handle;
+ handle = pci_dev->intr_handle;
vec = lf->msixoff + CPT_LF_INT_VEC_MISC;
/* Clear err interrupt */
@@ -129,7 +129,7 @@ cpt_lf_register_done_irq(struct roc_cpt_lf *lf)
struct plt_intr_handle *handle;
int rc, vec;
- handle = &pci_dev->intr_handle;
+ handle = pci_dev->intr_handle;
vec = lf->msixoff + CPT_LF_INT_VEC_DONE;
@@ -152,7 +152,7 @@ cpt_lf_unregister_done_irq(struct roc_cpt_lf *lf)
struct plt_intr_handle *handle;
int vec;
- handle = &pci_dev->intr_handle;
+ handle = pci_dev->intr_handle;
vec = lf->msixoff + CPT_LF_INT_VEC_DONE;
diff --git a/drivers/common/cnxk/roc_dev.c b/drivers/common/cnxk/roc_dev.c
index ce6980cbe4..926a916e44 100644
--- a/drivers/common/cnxk/roc_dev.c
+++ b/drivers/common/cnxk/roc_dev.c
@@ -641,7 +641,7 @@ roc_af_pf_mbox_irq(void *param)
static int
mbox_register_pf_irq(struct plt_pci_device *pci_dev, struct dev *dev)
{
- struct plt_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct plt_intr_handle *intr_handle = pci_dev->intr_handle;
int i, rc;
/* HW clear irq */
@@ -691,7 +691,7 @@ mbox_register_pf_irq(struct plt_pci_device *pci_dev, struct dev *dev)
static int
mbox_register_vf_irq(struct plt_pci_device *pci_dev, struct dev *dev)
{
- struct plt_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct plt_intr_handle *intr_handle = pci_dev->intr_handle;
int rc;
/* Clear irq */
@@ -724,7 +724,7 @@ mbox_register_irq(struct plt_pci_device *pci_dev, struct dev *dev)
static void
mbox_unregister_pf_irq(struct plt_pci_device *pci_dev, struct dev *dev)
{
- struct plt_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct plt_intr_handle *intr_handle = pci_dev->intr_handle;
int i;
/* HW clear irq */
@@ -755,7 +755,7 @@ mbox_unregister_pf_irq(struct plt_pci_device *pci_dev, struct dev *dev)
static void
mbox_unregister_vf_irq(struct plt_pci_device *pci_dev, struct dev *dev)
{
- struct plt_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct plt_intr_handle *intr_handle = pci_dev->intr_handle;
/* Clear irq */
plt_write64(~0ull, dev->bar2 + RVU_VF_INT_ENA_W1C);
@@ -839,7 +839,7 @@ roc_pf_vf_flr_irq(void *param)
static int
vf_flr_unregister_irqs(struct plt_pci_device *pci_dev, struct dev *dev)
{
- struct plt_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct plt_intr_handle *intr_handle = pci_dev->intr_handle;
int i;
plt_base_dbg("Unregister VF FLR interrupts for %s", pci_dev->name);
@@ -860,7 +860,7 @@ vf_flr_unregister_irqs(struct plt_pci_device *pci_dev, struct dev *dev)
static int
vf_flr_register_irqs(struct plt_pci_device *pci_dev, struct dev *dev)
{
- struct plt_intr_handle *handle = &pci_dev->intr_handle;
+ struct plt_intr_handle *handle = pci_dev->intr_handle;
int i, rc;
plt_base_dbg("Register VF FLR interrupts for %s", pci_dev->name);
@@ -1211,7 +1211,7 @@ dev_init(struct dev *dev, struct plt_pci_device *pci_dev)
int
dev_fini(struct dev *dev, struct plt_pci_device *pci_dev)
{
- struct plt_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct plt_intr_handle *intr_handle = pci_dev->intr_handle;
struct mbox *mbox;
/* Check if this dev hosts npalf and has 1+ refs */
diff --git a/drivers/common/cnxk/roc_irq.c b/drivers/common/cnxk/roc_irq.c
index 28fe691932..3b34467b96 100644
--- a/drivers/common/cnxk/roc_irq.c
+++ b/drivers/common/cnxk/roc_irq.c
@@ -20,11 +20,12 @@ static int
irq_get_info(struct plt_intr_handle *intr_handle)
{
struct vfio_irq_info irq = {.argsz = sizeof(irq)};
- int rc;
+ int rc, vfio_dev_fd;
irq.index = VFIO_PCI_MSIX_IRQ_INDEX;
- rc = ioctl(intr_handle->vfio_dev_fd, VFIO_DEVICE_GET_IRQ_INFO, &irq);
+ vfio_dev_fd = plt_intr_dev_fd_get(intr_handle);
+ rc = ioctl(vfio_dev_fd, VFIO_DEVICE_GET_IRQ_INFO, &irq);
if (rc < 0) {
plt_err("Failed to get IRQ info rc=%d errno=%d", rc, errno);
return rc;
@@ -36,9 +37,10 @@ irq_get_info(struct plt_intr_handle *intr_handle)
if (irq.count > PLT_MAX_RXTX_INTR_VEC_ID) {
plt_err("HW max=%d > PLT_MAX_RXTX_INTR_VEC_ID: %d", irq.count,
PLT_MAX_RXTX_INTR_VEC_ID);
- intr_handle->max_intr = PLT_MAX_RXTX_INTR_VEC_ID;
+ plt_intr_max_intr_set(intr_handle, PLT_MAX_RXTX_INTR_VEC_ID);
} else {
- intr_handle->max_intr = irq.count;
+ if (plt_intr_max_intr_set(intr_handle, irq.count))
+ return -1;
}
return 0;
@@ -49,12 +51,12 @@ irq_config(struct plt_intr_handle *intr_handle, unsigned int vec)
{
char irq_set_buf[MSIX_IRQ_SET_BUF_LEN];
struct vfio_irq_set *irq_set;
+ int len, rc, vfio_dev_fd;
int32_t *fd_ptr;
- int len, rc;
- if (vec > intr_handle->max_intr) {
+ if (vec > (uint32_t)plt_intr_max_intr_get(intr_handle)) {
plt_err("vector=%d greater than max_intr=%d", vec,
- intr_handle->max_intr);
+ plt_intr_max_intr_get(intr_handle));
return -EINVAL;
}
@@ -71,9 +73,10 @@ irq_config(struct plt_intr_handle *intr_handle, unsigned int vec)
/* Use vec fd to set interrupt vectors */
fd_ptr = (int32_t *)&irq_set->data[0];
- fd_ptr[0] = intr_handle->efds[vec];
+ fd_ptr[0] = plt_intr_efds_index_get(intr_handle, vec);
- rc = ioctl(intr_handle->vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
+ vfio_dev_fd = plt_intr_dev_fd_get(intr_handle);
+ rc = ioctl(vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
if (rc)
plt_err("Failed to set_irqs vector=0x%x rc=%d", vec, rc);
@@ -85,23 +88,25 @@ irq_init(struct plt_intr_handle *intr_handle)
{
char irq_set_buf[MSIX_IRQ_SET_BUF_LEN];
struct vfio_irq_set *irq_set;
+ int len, rc, vfio_dev_fd;
int32_t *fd_ptr;
- int len, rc;
uint32_t i;
- if (intr_handle->max_intr > PLT_MAX_RXTX_INTR_VEC_ID) {
+ if (plt_intr_max_intr_get(intr_handle) >
+ PLT_MAX_RXTX_INTR_VEC_ID) {
plt_err("Max_intr=%d greater than PLT_MAX_RXTX_INTR_VEC_ID=%d",
- intr_handle->max_intr, PLT_MAX_RXTX_INTR_VEC_ID);
+ plt_intr_max_intr_get(intr_handle),
+ PLT_MAX_RXTX_INTR_VEC_ID);
return -ERANGE;
}
len = sizeof(struct vfio_irq_set) +
- sizeof(int32_t) * intr_handle->max_intr;
+ sizeof(int32_t) * plt_intr_max_intr_get(intr_handle);
irq_set = (struct vfio_irq_set *)irq_set_buf;
irq_set->argsz = len;
irq_set->start = 0;
- irq_set->count = intr_handle->max_intr;
+ irq_set->count = plt_intr_max_intr_get(intr_handle);
irq_set->flags =
VFIO_IRQ_SET_DATA_EVENTFD | VFIO_IRQ_SET_ACTION_TRIGGER;
irq_set->index = VFIO_PCI_MSIX_IRQ_INDEX;
@@ -110,7 +115,8 @@ irq_init(struct plt_intr_handle *intr_handle)
for (i = 0; i < irq_set->count; i++)
fd_ptr[i] = -1;
- rc = ioctl(intr_handle->vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
+ vfio_dev_fd = plt_intr_dev_fd_get(intr_handle);
+ rc = ioctl(vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
if (rc)
plt_err("Failed to set irqs vector rc=%d", rc);
@@ -121,7 +127,7 @@ int
dev_irqs_disable(struct plt_intr_handle *intr_handle)
{
/* Clear max_intr to indicate re-init next time */
- intr_handle->max_intr = 0;
+ plt_intr_max_intr_set(intr_handle, 0);
return plt_intr_disable(intr_handle);
}
@@ -129,43 +135,49 @@ int
dev_irq_register(struct plt_intr_handle *intr_handle, plt_intr_callback_fn cb,
void *data, unsigned int vec)
{
- struct plt_intr_handle tmp_handle;
- int rc;
+ struct plt_intr_handle *tmp_handle;
+ uint32_t nb_efd, tmp_nb_efd;
+ int rc, fd;
/* If no max_intr read from VFIO */
- if (intr_handle->max_intr == 0) {
+ if (plt_intr_max_intr_get(intr_handle) == 0) {
irq_get_info(intr_handle);
irq_init(intr_handle);
}
- if (vec > intr_handle->max_intr || vec >= PLT_DIM(intr_handle->efds)) {
- plt_err("Vector=%d greater than max_intr=%d or "
- "max_efd=%" PRIu64,
- vec, intr_handle->max_intr, PLT_DIM(intr_handle->efds));
+ if (vec > (uint32_t)plt_intr_max_intr_get(intr_handle)) {
+ plt_err("Vector=%d greater than max_intr=%d or ",
+ vec, plt_intr_max_intr_get(intr_handle));
return -EINVAL;
}
- tmp_handle = *intr_handle;
+ tmp_handle = intr_handle;
/* Create new eventfd for interrupt vector */
- tmp_handle.fd = eventfd(0, EFD_NONBLOCK | EFD_CLOEXEC);
- if (tmp_handle.fd == -1)
+ fd = eventfd(0, EFD_NONBLOCK | EFD_CLOEXEC);
+ if (fd == -1)
return -ENODEV;
+ if (plt_intr_fd_set(tmp_handle, fd))
+ return -errno;
+
/* Register vector interrupt callback */
- rc = plt_intr_callback_register(&tmp_handle, cb, data);
+ rc = plt_intr_callback_register(tmp_handle, cb, data);
if (rc) {
plt_err("Failed to register vector:0x%x irq callback.", vec);
return rc;
}
- intr_handle->efds[vec] = tmp_handle.fd;
- intr_handle->nb_efd =
- (vec > intr_handle->nb_efd) ? vec : intr_handle->nb_efd;
- if ((intr_handle->nb_efd + 1) > intr_handle->max_intr)
- intr_handle->max_intr = intr_handle->nb_efd + 1;
+ plt_intr_efds_index_set(intr_handle, vec, fd);
+ nb_efd = (vec > (uint32_t)plt_intr_nb_efd_get(intr_handle)) ?
+ vec : (uint32_t)plt_intr_nb_efd_get(intr_handle);
+ plt_intr_nb_efd_set(intr_handle, nb_efd);
+ tmp_nb_efd = plt_intr_nb_efd_get(intr_handle) + 1;
+ if (tmp_nb_efd > (uint32_t)plt_intr_max_intr_get(intr_handle))
+ plt_intr_max_intr_set(intr_handle, tmp_nb_efd);
plt_base_dbg("Enable vector:0x%x for vfio (efds: %d, max:%d)", vec,
- intr_handle->nb_efd, intr_handle->max_intr);
+ plt_intr_nb_efd_get(intr_handle),
+ plt_intr_max_intr_get(intr_handle));
/* Enable MSIX vectors to VFIO */
return irq_config(intr_handle, vec);
@@ -175,24 +187,27 @@ void
dev_irq_unregister(struct plt_intr_handle *intr_handle, plt_intr_callback_fn cb,
void *data, unsigned int vec)
{
- struct plt_intr_handle tmp_handle;
+ struct plt_intr_handle *tmp_handle;
uint8_t retries = 5; /* 5 ms */
- int rc;
+ int rc, fd;
- if (vec > intr_handle->max_intr) {
+ if (vec > (uint32_t)plt_intr_max_intr_get(intr_handle)) {
plt_err("Error unregistering MSI-X interrupts vec:%d > %d", vec,
- intr_handle->max_intr);
+ plt_intr_max_intr_get(intr_handle));
return;
}
- tmp_handle = *intr_handle;
- tmp_handle.fd = intr_handle->efds[vec];
- if (tmp_handle.fd == -1)
+ tmp_handle = intr_handle;
+ fd = plt_intr_efds_index_get(intr_handle, vec);
+ if (fd == -1)
+ return;
+
+ if (plt_intr_fd_set(tmp_handle, fd))
return;
do {
/* Un-register callback func from platform lib */
- rc = plt_intr_callback_unregister(&tmp_handle, cb, data);
+ rc = plt_intr_callback_unregister(tmp_handle, cb, data);
/* Retry only if -EAGAIN */
if (rc != -EAGAIN)
break;
@@ -206,12 +221,14 @@ dev_irq_unregister(struct plt_intr_handle *intr_handle, plt_intr_callback_fn cb,
}
plt_base_dbg("Disable vector:0x%x for vfio (efds: %d, max:%d)", vec,
- intr_handle->nb_efd, intr_handle->max_intr);
+ plt_intr_nb_efd_get(intr_handle),
+ plt_intr_max_intr_get(intr_handle));
- if (intr_handle->efds[vec] != -1)
- close(intr_handle->efds[vec]);
+ if (plt_intr_efds_index_get(intr_handle, vec) != -1)
+ close(plt_intr_efds_index_get(intr_handle, vec));
/* Disable MSIX vectors from VFIO */
- intr_handle->efds[vec] = -1;
+ plt_intr_efds_index_set(intr_handle, vec, -1);
+
irq_config(intr_handle, vec);
}
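For readers skimming the roc_irq.c hunks above: the converted dev_irq_register() path reduces to the accessor pattern sketched below. This is an illustrative condensation only, not part of the patch; the example_register_vec() name is hypothetical, and it assumes <sys/eventfd.h> plus the plt_intr_* wrappers from roc_platform.h are in scope and that intr_handle is the instance owned by the PCI device.

static int
example_register_vec(struct plt_intr_handle *intr_handle,
                     plt_intr_callback_fn cb, void *data, unsigned int vec)
{
        uint32_t nb_efd;
        int fd, rc;

        /* One eventfd per MSI-X vector, stored through setters instead of
         * writing intr_handle->fd / efds[] directly.
         */
        fd = eventfd(0, EFD_NONBLOCK | EFD_CLOEXEC);
        if (fd == -1)
                return -ENODEV;
        if (plt_intr_fd_set(intr_handle, fd))
                return -errno;

        rc = plt_intr_callback_register(intr_handle, cb, data);
        if (rc)
                return rc;

        /* Bookkeeping formerly done on the efds[], nb_efd and max_intr fields */
        plt_intr_efds_index_set(intr_handle, vec, fd);
        nb_efd = (vec > (uint32_t)plt_intr_nb_efd_get(intr_handle)) ?
                 vec : (uint32_t)plt_intr_nb_efd_get(intr_handle);
        plt_intr_nb_efd_set(intr_handle, nb_efd);
        if (nb_efd + 1 > (uint32_t)plt_intr_max_intr_get(intr_handle))
                plt_intr_max_intr_set(intr_handle, nb_efd + 1);

        /* Finally program the MSI-X vector into VFIO, as in the hunk above */
        return irq_config(intr_handle, vec);
}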
diff --git a/drivers/common/cnxk/roc_nix_inl_dev_irq.c b/drivers/common/cnxk/roc_nix_inl_dev_irq.c
index 25ed42f875..848523b010 100644
--- a/drivers/common/cnxk/roc_nix_inl_dev_irq.c
+++ b/drivers/common/cnxk/roc_nix_inl_dev_irq.c
@@ -99,7 +99,7 @@ nix_inl_sso_hws_irq(void *param)
int
nix_inl_sso_register_irqs(struct nix_inl_dev *inl_dev)
{
- struct plt_intr_handle *handle = &inl_dev->pci_dev->intr_handle;
+ struct plt_intr_handle *handle = inl_dev->pci_dev->intr_handle;
uintptr_t ssow_base = inl_dev->ssow_base;
uintptr_t sso_base = inl_dev->sso_base;
uint16_t sso_msixoff, ssow_msixoff;
@@ -147,7 +147,7 @@ nix_inl_sso_register_irqs(struct nix_inl_dev *inl_dev)
void
nix_inl_sso_unregister_irqs(struct nix_inl_dev *inl_dev)
{
- struct plt_intr_handle *handle = &inl_dev->pci_dev->intr_handle;
+ struct plt_intr_handle *handle = inl_dev->pci_dev->intr_handle;
uintptr_t ssow_base = inl_dev->ssow_base;
uintptr_t sso_base = inl_dev->sso_base;
uint16_t sso_msixoff, ssow_msixoff;
@@ -282,7 +282,7 @@ nix_inl_nix_err_irq(void *param)
int
nix_inl_nix_register_irqs(struct nix_inl_dev *inl_dev)
{
- struct plt_intr_handle *handle = &inl_dev->pci_dev->intr_handle;
+ struct plt_intr_handle *handle = inl_dev->pci_dev->intr_handle;
uintptr_t nix_base = inl_dev->nix_base;
uint16_t msixoff;
int rc;
@@ -331,7 +331,7 @@ nix_inl_nix_register_irqs(struct nix_inl_dev *inl_dev)
void
nix_inl_nix_unregister_irqs(struct nix_inl_dev *inl_dev)
{
- struct plt_intr_handle *handle = &inl_dev->pci_dev->intr_handle;
+ struct plt_intr_handle *handle = inl_dev->pci_dev->intr_handle;
uintptr_t nix_base = inl_dev->nix_base;
uint16_t msixoff;
diff --git a/drivers/common/cnxk/roc_nix_irq.c b/drivers/common/cnxk/roc_nix_irq.c
index 32be64a9d7..e9aa620abd 100644
--- a/drivers/common/cnxk/roc_nix_irq.c
+++ b/drivers/common/cnxk/roc_nix_irq.c
@@ -82,7 +82,7 @@ nix_lf_err_irq(void *param)
static int
nix_lf_register_err_irq(struct nix *nix)
{
- struct plt_intr_handle *handle = &nix->pci_dev->intr_handle;
+ struct plt_intr_handle *handle = nix->pci_dev->intr_handle;
int rc, vec;
vec = nix->msixoff + NIX_LF_INT_VEC_ERR_INT;
@@ -99,7 +99,7 @@ nix_lf_register_err_irq(struct nix *nix)
static void
nix_lf_unregister_err_irq(struct nix *nix)
{
- struct plt_intr_handle *handle = &nix->pci_dev->intr_handle;
+ struct plt_intr_handle *handle = nix->pci_dev->intr_handle;
int vec;
vec = nix->msixoff + NIX_LF_INT_VEC_ERR_INT;
@@ -131,7 +131,7 @@ nix_lf_ras_irq(void *param)
static int
nix_lf_register_ras_irq(struct nix *nix)
{
- struct plt_intr_handle *handle = &nix->pci_dev->intr_handle;
+ struct plt_intr_handle *handle = nix->pci_dev->intr_handle;
int rc, vec;
vec = nix->msixoff + NIX_LF_INT_VEC_POISON;
@@ -148,7 +148,7 @@ nix_lf_register_ras_irq(struct nix *nix)
static void
nix_lf_unregister_ras_irq(struct nix *nix)
{
- struct plt_intr_handle *handle = &nix->pci_dev->intr_handle;
+ struct plt_intr_handle *handle = nix->pci_dev->intr_handle;
int vec;
vec = nix->msixoff + NIX_LF_INT_VEC_POISON;
@@ -300,7 +300,7 @@ roc_nix_register_queue_irqs(struct roc_nix *roc_nix)
struct nix *nix;
nix = roc_nix_to_nix_priv(roc_nix);
- handle = &nix->pci_dev->intr_handle;
+ handle = nix->pci_dev->intr_handle;
/* Figure out max qintx required */
rqs = PLT_MIN(nix->qints, nix->nb_rx_queues);
@@ -352,7 +352,7 @@ roc_nix_unregister_queue_irqs(struct roc_nix *roc_nix)
int vec, q;
nix = roc_nix_to_nix_priv(roc_nix);
- handle = &nix->pci_dev->intr_handle;
+ handle = nix->pci_dev->intr_handle;
for (q = 0; q < nix->configured_qints; q++) {
vec = nix->msixoff + NIX_LF_INT_VEC_QINT_START + q;
@@ -382,7 +382,7 @@ roc_nix_register_cq_irqs(struct roc_nix *roc_nix)
struct nix *nix;
nix = roc_nix_to_nix_priv(roc_nix);
- handle = &nix->pci_dev->intr_handle;
+ handle = nix->pci_dev->intr_handle;
nix->configured_cints = PLT_MIN(nix->cints, nix->nb_rx_queues);
@@ -414,19 +414,19 @@ roc_nix_register_cq_irqs(struct roc_nix *roc_nix)
return rc;
}
- if (!handle->intr_vec) {
- handle->intr_vec = plt_zmalloc(
- nix->configured_cints * sizeof(int), 0);
- if (!handle->intr_vec) {
- plt_err("Failed to allocate %d rx intr_vec",
- nix->configured_cints);
- return -ENOMEM;
- }
+ rc = plt_intr_vec_list_alloc(handle, "cnxk",
+ nix->configured_cints);
+ if (rc) {
+ plt_err("Fail to allocate intr vec list, rc=%d",
+ rc);
+ return rc;
}
/* VFIO vector zero is reserved for misc interrupt so
* doing required adjustment. (b13bfab4cd)
*/
- handle->intr_vec[q] = PLT_INTR_VEC_RXTX_OFFSET + vec;
+ if (plt_intr_vec_list_index_set(handle, q,
+ PLT_INTR_VEC_RXTX_OFFSET + vec))
+ return -1;
/* Configure CQE interrupt coalescing parameters */
plt_write64(((CQ_CQE_THRESH_DEFAULT) |
@@ -450,7 +450,7 @@ roc_nix_unregister_cq_irqs(struct roc_nix *roc_nix)
int vec, q;
nix = roc_nix_to_nix_priv(roc_nix);
- handle = &nix->pci_dev->intr_handle;
+ handle = nix->pci_dev->intr_handle;
for (q = 0; q < nix->configured_cints; q++) {
vec = nix->msixoff + NIX_LF_INT_VEC_CINT_START + q;
@@ -465,6 +465,8 @@ roc_nix_unregister_cq_irqs(struct roc_nix *roc_nix)
dev_irq_unregister(handle, nix_lf_cq_irq, &nix->cints_mem[q],
vec);
}
+
+ plt_intr_vec_list_free(handle);
plt_free(nix->cints_mem);
}
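The roc_nix_irq.c hunks above all replace the open-coded intr_vec array handling with the new vector-list helpers. The sketch below shows that recurring shape; it is an illustration only (the example_setup_cq_vectors() name is hypothetical and nb_cints stands in for nix->configured_cints), not patch content.

static int
example_setup_cq_vectors(struct plt_intr_handle *handle, int nb_cints,
                         uint16_t msixoff)
{
        int q, vec, rc;

        /* Replaces: handle->intr_vec = plt_zmalloc(nb_cints * sizeof(int), 0) */
        rc = plt_intr_vec_list_alloc(handle, "cnxk", nb_cints);
        if (rc)
                return rc;

        for (q = 0; q < nb_cints; q++) {
                vec = msixoff + NIX_LF_INT_VEC_CINT_START + q;
                /* Vector zero is reserved for the misc interrupt, hence the
                 * PLT_INTR_VEC_RXTX_OFFSET adjustment, as in the hunk above.
                 */
                if (plt_intr_vec_list_index_set(handle, q,
                                                PLT_INTR_VEC_RXTX_OFFSET + vec))
                        return -1;
        }
        return 0;
}

/* Teardown replaces plt_free(handle->intr_vec) with: */
/* plt_intr_vec_list_free(handle); */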
diff --git a/drivers/common/cnxk/roc_npa.c b/drivers/common/cnxk/roc_npa.c
index a0d2cc8f19..664240ab42 100644
--- a/drivers/common/cnxk/roc_npa.c
+++ b/drivers/common/cnxk/roc_npa.c
@@ -710,7 +710,7 @@ npa_lf_init(struct dev *dev, struct plt_pci_device *pci_dev)
lf->pf_func = dev->pf_func;
lf->npa_msixoff = npa_msixoff;
- lf->intr_handle = &pci_dev->intr_handle;
+ lf->intr_handle = pci_dev->intr_handle;
lf->pci_dev = pci_dev;
idev->npa_pf_func = dev->pf_func;
diff --git a/drivers/common/cnxk/roc_platform.h b/drivers/common/cnxk/roc_platform.h
index a0f01797f1..60227b72d0 100644
--- a/drivers/common/cnxk/roc_platform.h
+++ b/drivers/common/cnxk/roc_platform.h
@@ -106,6 +106,32 @@
#define plt_thread_is_intr rte_thread_is_intr
#define plt_intr_callback_fn rte_intr_callback_fn
+#define plt_intr_efd_counter_size_get rte_intr_efd_counter_size_get
+#define plt_intr_efd_counter_size_set rte_intr_efd_counter_size_set
+#define plt_intr_vec_list_index_get rte_intr_vec_list_index_get
+#define plt_intr_vec_list_index_set rte_intr_vec_list_index_set
+#define plt_intr_vec_list_alloc rte_intr_vec_list_alloc
+#define plt_intr_vec_list_free rte_intr_vec_list_free
+#define plt_intr_fd_set rte_intr_fd_set
+#define plt_intr_fd_get rte_intr_fd_get
+#define plt_intr_dev_fd_get rte_intr_dev_fd_get
+#define plt_intr_dev_fd_set rte_intr_dev_fd_set
+#define plt_intr_type_get rte_intr_type_get
+#define plt_intr_type_set rte_intr_type_set
+#define plt_intr_instance_alloc rte_intr_instance_alloc
+#define plt_intr_instance_dup rte_intr_instance_dup
+#define plt_intr_instance_free rte_intr_instance_free
+#define plt_intr_max_intr_get rte_intr_max_intr_get
+#define plt_intr_max_intr_set rte_intr_max_intr_set
+#define plt_intr_nb_efd_get rte_intr_nb_efd_get
+#define plt_intr_nb_efd_set rte_intr_nb_efd_set
+#define plt_intr_nb_intr_get rte_intr_nb_intr_get
+#define plt_intr_nb_intr_set rte_intr_nb_intr_set
+#define plt_intr_efds_index_get rte_intr_efds_index_get
+#define plt_intr_efds_index_set rte_intr_efds_index_set
+#define plt_intr_elist_index_get rte_intr_elist_index_get
+#define plt_intr_elist_index_set rte_intr_elist_index_set
+
#define plt_alarm_set rte_eal_alarm_set
#define plt_alarm_cancel rte_eal_alarm_cancel
@@ -183,7 +209,7 @@ extern int cnxk_logtype_tm;
#define plt_dbg(subsystem, fmt, args...) \
rte_log(RTE_LOG_DEBUG, cnxk_logtype_##subsystem, \
"[%s] %s():%u " fmt "\n", #subsystem, __func__, __LINE__, \
- ##args)
+##args)
#define plt_base_dbg(fmt, ...) plt_dbg(base, fmt, ##__VA_ARGS__)
#define plt_cpt_dbg(fmt, ...) plt_dbg(cpt, fmt, ##__VA_ARGS__)
@@ -203,18 +229,18 @@ extern int cnxk_logtype_tm;
#ifdef __cplusplus
#define CNXK_PCI_ID(subsystem_dev, dev) \
- { \
- RTE_CLASS_ANY_ID, PCI_VENDOR_ID_CAVIUM, (dev), RTE_PCI_ANY_ID, \
- (subsystem_dev), \
- }
+{ \
+ RTE_CLASS_ANY_ID, PCI_VENDOR_ID_CAVIUM, (dev), RTE_PCI_ANY_ID, \
+ (subsystem_dev), \
+}
#else
#define CNXK_PCI_ID(subsystem_dev, dev) \
- { \
- .class_id = RTE_CLASS_ANY_ID, \
- .vendor_id = PCI_VENDOR_ID_CAVIUM, .device_id = (dev), \
- .subsystem_vendor_id = RTE_PCI_ANY_ID, \
- .subsystem_device_id = (subsystem_dev), \
- }
+{ \
+ .class_id = RTE_CLASS_ANY_ID, \
+ .vendor_id = PCI_VENDOR_ID_CAVIUM, .device_id = (dev), \
+ .subsystem_vendor_id = RTE_PCI_ANY_ID, \
+ .subsystem_device_id = (subsystem_dev), \
+}
#endif
__rte_internal
diff --git a/drivers/common/cnxk/roc_sso.c b/drivers/common/cnxk/roc_sso.c
index bdf973fc2a..762893f3dc 100644
--- a/drivers/common/cnxk/roc_sso.c
+++ b/drivers/common/cnxk/roc_sso.c
@@ -505,7 +505,7 @@ roc_sso_rsrc_init(struct roc_sso *roc_sso, uint8_t nb_hws, uint16_t nb_hwgrp)
goto sso_msix_fail;
}
- rc = sso_register_irqs_priv(roc_sso, &sso->pci_dev->intr_handle, nb_hws,
+ rc = sso_register_irqs_priv(roc_sso, sso->pci_dev->intr_handle, nb_hws,
nb_hwgrp);
if (rc < 0) {
plt_err("Failed to register SSO LF IRQs");
@@ -535,7 +535,7 @@ roc_sso_rsrc_fini(struct roc_sso *roc_sso)
if (!roc_sso->nb_hws && !roc_sso->nb_hwgrp)
return;
- sso_unregister_irqs_priv(roc_sso, &sso->pci_dev->intr_handle,
+ sso_unregister_irqs_priv(roc_sso, sso->pci_dev->intr_handle,
roc_sso->nb_hws, roc_sso->nb_hwgrp);
sso_lf_free(&sso->dev, SSO_LF_TYPE_HWS, roc_sso->nb_hws);
sso_lf_free(&sso->dev, SSO_LF_TYPE_HWGRP, roc_sso->nb_hwgrp);
diff --git a/drivers/common/cnxk/roc_tim.c b/drivers/common/cnxk/roc_tim.c
index 387164bb1d..534b697bee 100644
--- a/drivers/common/cnxk/roc_tim.c
+++ b/drivers/common/cnxk/roc_tim.c
@@ -200,7 +200,7 @@ roc_tim_lf_alloc(struct roc_tim *roc_tim, uint8_t ring_id, uint64_t *clk)
if (clk)
*clk = rsp->tenns_clk;
- rc = tim_register_irq_priv(roc_tim, &sso->pci_dev->intr_handle, ring_id,
+ rc = tim_register_irq_priv(roc_tim, sso->pci_dev->intr_handle, ring_id,
tim->tim_msix_offsets[ring_id]);
if (rc < 0) {
plt_tim_dbg("Failed to register Ring[%d] IRQ", ring_id);
@@ -223,7 +223,7 @@ roc_tim_lf_free(struct roc_tim *roc_tim, uint8_t ring_id)
struct tim_ring_req *req;
int rc = -ENOSPC;
- tim_unregister_irq_priv(roc_tim, &sso->pci_dev->intr_handle, ring_id,
+ tim_unregister_irq_priv(roc_tim, sso->pci_dev->intr_handle, ring_id,
tim->tim_msix_offsets[ring_id]);
req = mbox_alloc_msg_tim_lf_free(dev->mbox);
diff --git a/drivers/common/octeontx2/otx2_dev.c b/drivers/common/octeontx2/otx2_dev.c
index ce4f0e7ca9..08dca87848 100644
--- a/drivers/common/octeontx2/otx2_dev.c
+++ b/drivers/common/octeontx2/otx2_dev.c
@@ -643,7 +643,7 @@ otx2_af_pf_mbox_irq(void *param)
static int
mbox_register_pf_irq(struct rte_pci_device *pci_dev, struct otx2_dev *dev)
{
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
int i, rc;
/* HW clear irq */
@@ -693,7 +693,7 @@ mbox_register_pf_irq(struct rte_pci_device *pci_dev, struct otx2_dev *dev)
static int
mbox_register_vf_irq(struct rte_pci_device *pci_dev, struct otx2_dev *dev)
{
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
int rc;
/* Clear irq */
@@ -726,7 +726,7 @@ mbox_register_irq(struct rte_pci_device *pci_dev, struct otx2_dev *dev)
static void
mbox_unregister_pf_irq(struct rte_pci_device *pci_dev, struct otx2_dev *dev)
{
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
int i;
/* HW clear irq */
@@ -758,7 +758,7 @@ mbox_unregister_pf_irq(struct rte_pci_device *pci_dev, struct otx2_dev *dev)
static void
mbox_unregister_vf_irq(struct rte_pci_device *pci_dev, struct otx2_dev *dev)
{
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
/* Clear irq */
otx2_write64(~0ull, dev->bar2 + RVU_VF_INT_ENA_W1C);
@@ -841,7 +841,7 @@ otx2_pf_vf_flr_irq(void *param)
static int
vf_flr_unregister_irqs(struct rte_pci_device *pci_dev, struct otx2_dev *dev)
{
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
int i;
otx2_base_dbg("Unregister VF FLR interrupts for %s", pci_dev->name);
@@ -862,7 +862,7 @@ vf_flr_unregister_irqs(struct rte_pci_device *pci_dev, struct otx2_dev *dev)
static int
vf_flr_register_irqs(struct rte_pci_device *pci_dev, struct otx2_dev *dev)
{
- struct rte_intr_handle *handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *handle = pci_dev->intr_handle;
int i, rc;
otx2_base_dbg("Register VF FLR interrupts for %s", pci_dev->name);
@@ -1039,7 +1039,7 @@ otx2_dev_priv_init(struct rte_pci_device *pci_dev, void *otx2_dev)
void
otx2_dev_fini(struct rte_pci_device *pci_dev, void *otx2_dev)
{
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
struct otx2_dev *dev = otx2_dev;
struct otx2_idev_cfg *idev;
struct otx2_mbox *mbox;
diff --git a/drivers/common/octeontx2/otx2_irq.c b/drivers/common/octeontx2/otx2_irq.c
index c0137ff36d..93fc95c0e1 100644
--- a/drivers/common/octeontx2/otx2_irq.c
+++ b/drivers/common/octeontx2/otx2_irq.c
@@ -26,11 +26,12 @@ static int
irq_get_info(struct rte_intr_handle *intr_handle)
{
struct vfio_irq_info irq = { .argsz = sizeof(irq) };
- int rc;
+ int rc, vfio_dev_fd;
irq.index = VFIO_PCI_MSIX_IRQ_INDEX;
- rc = ioctl(intr_handle->vfio_dev_fd, VFIO_DEVICE_GET_IRQ_INFO, &irq);
+ vfio_dev_fd = rte_intr_dev_fd_get(intr_handle);
+ rc = ioctl(vfio_dev_fd, VFIO_DEVICE_GET_IRQ_INFO, &irq);
if (rc < 0) {
otx2_err("Failed to get IRQ info rc=%d errno=%d", rc, errno);
return rc;
@@ -41,10 +42,13 @@ irq_get_info(struct rte_intr_handle *intr_handle)
if (irq.count > MAX_INTR_VEC_ID) {
otx2_err("HW max=%d > MAX_INTR_VEC_ID: %d",
- intr_handle->max_intr, MAX_INTR_VEC_ID);
- intr_handle->max_intr = MAX_INTR_VEC_ID;
+ rte_intr_max_intr_get(intr_handle),
+ MAX_INTR_VEC_ID);
+ if (rte_intr_max_intr_set(intr_handle, MAX_INTR_VEC_ID))
+ return -1;
} else {
- intr_handle->max_intr = irq.count;
+ if (rte_intr_max_intr_set(intr_handle, irq.count))
+ return -1;
}
return 0;
@@ -55,12 +59,12 @@ irq_config(struct rte_intr_handle *intr_handle, unsigned int vec)
{
char irq_set_buf[MSIX_IRQ_SET_BUF_LEN];
struct vfio_irq_set *irq_set;
+ int len, rc, vfio_dev_fd;
int32_t *fd_ptr;
- int len, rc;
- if (vec > intr_handle->max_intr) {
+ if (vec > (uint32_t)rte_intr_max_intr_get(intr_handle)) {
otx2_err("vector=%d greater than max_intr=%d", vec,
- intr_handle->max_intr);
+ rte_intr_max_intr_get(intr_handle));
return -EINVAL;
}
@@ -77,9 +81,10 @@ irq_config(struct rte_intr_handle *intr_handle, unsigned int vec)
/* Use vec fd to set interrupt vectors */
fd_ptr = (int32_t *)&irq_set->data[0];
- fd_ptr[0] = intr_handle->efds[vec];
+ fd_ptr[0] = rte_intr_efds_index_get(intr_handle, vec);
- rc = ioctl(intr_handle->vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
+ vfio_dev_fd = rte_intr_dev_fd_get(intr_handle);
+ rc = ioctl(vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
if (rc)
otx2_err("Failed to set_irqs vector=0x%x rc=%d", vec, rc);
@@ -91,23 +96,24 @@ irq_init(struct rte_intr_handle *intr_handle)
{
char irq_set_buf[MSIX_IRQ_SET_BUF_LEN];
struct vfio_irq_set *irq_set;
+ int len, rc, vfio_dev_fd;
int32_t *fd_ptr;
- int len, rc;
uint32_t i;
- if (intr_handle->max_intr > MAX_INTR_VEC_ID) {
+ if (rte_intr_max_intr_get(intr_handle) > MAX_INTR_VEC_ID) {
otx2_err("Max_intr=%d greater than MAX_INTR_VEC_ID=%d",
- intr_handle->max_intr, MAX_INTR_VEC_ID);
+ rte_intr_max_intr_get(intr_handle),
+ MAX_INTR_VEC_ID);
return -ERANGE;
}
len = sizeof(struct vfio_irq_set) +
- sizeof(int32_t) * intr_handle->max_intr;
+ sizeof(int32_t) * rte_intr_max_intr_get(intr_handle);
irq_set = (struct vfio_irq_set *)irq_set_buf;
irq_set->argsz = len;
irq_set->start = 0;
- irq_set->count = intr_handle->max_intr;
+ irq_set->count = rte_intr_max_intr_get(intr_handle);
irq_set->flags = VFIO_IRQ_SET_DATA_EVENTFD |
VFIO_IRQ_SET_ACTION_TRIGGER;
irq_set->index = VFIO_PCI_MSIX_IRQ_INDEX;
@@ -116,7 +122,8 @@ irq_init(struct rte_intr_handle *intr_handle)
for (i = 0; i < irq_set->count; i++)
fd_ptr[i] = -1;
- rc = ioctl(intr_handle->vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
+ vfio_dev_fd = rte_intr_dev_fd_get(intr_handle);
+ rc = ioctl(vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
if (rc)
otx2_err("Failed to set irqs vector rc=%d", rc);
@@ -131,7 +138,8 @@ int
otx2_disable_irqs(struct rte_intr_handle *intr_handle)
{
/* Clear max_intr to indicate re-init next time */
- intr_handle->max_intr = 0;
+ if (rte_intr_max_intr_set(intr_handle, 0))
+ return -1;
return rte_intr_disable(intr_handle);
}
@@ -143,42 +151,50 @@ int
otx2_register_irq(struct rte_intr_handle *intr_handle,
rte_intr_callback_fn cb, void *data, unsigned int vec)
{
- struct rte_intr_handle tmp_handle;
- int rc;
+ struct rte_intr_handle *tmp_handle;
+ uint32_t nb_efd, tmp_nb_efd;
+ int rc, fd;
/* If no max_intr read from VFIO */
- if (intr_handle->max_intr == 0) {
+ if (rte_intr_max_intr_get(intr_handle) == 0) {
irq_get_info(intr_handle);
irq_init(intr_handle);
}
- if (vec > intr_handle->max_intr) {
+ if (vec > (uint32_t)rte_intr_max_intr_get(intr_handle)) {
otx2_err("Vector=%d greater than max_intr=%d", vec,
- intr_handle->max_intr);
+ rte_intr_max_intr_get(intr_handle));
return -EINVAL;
}
- tmp_handle = *intr_handle;
+ tmp_handle = intr_handle;
/* Create new eventfd for interrupt vector */
- tmp_handle.fd = eventfd(0, EFD_NONBLOCK | EFD_CLOEXEC);
- if (tmp_handle.fd == -1)
+ fd = eventfd(0, EFD_NONBLOCK | EFD_CLOEXEC);
+ if (fd == -1)
return -ENODEV;
+ if (rte_intr_fd_set(tmp_handle, fd))
+ return -errno;
+
/* Register vector interrupt callback */
- rc = rte_intr_callback_register(&tmp_handle, cb, data);
+ rc = rte_intr_callback_register(tmp_handle, cb, data);
if (rc) {
otx2_err("Failed to register vector:0x%x irq callback.", vec);
return rc;
}
- intr_handle->efds[vec] = tmp_handle.fd;
- intr_handle->nb_efd = (vec > intr_handle->nb_efd) ?
- vec : intr_handle->nb_efd;
- if ((intr_handle->nb_efd + 1) > intr_handle->max_intr)
- intr_handle->max_intr = intr_handle->nb_efd + 1;
+ rte_intr_efds_index_set(intr_handle, vec, fd);
+ nb_efd = (vec > (uint32_t)rte_intr_nb_efd_get(intr_handle)) ?
+ vec : (uint32_t)rte_intr_nb_efd_get(intr_handle);
+ rte_intr_nb_efd_set(intr_handle, nb_efd);
+
+ tmp_nb_efd = rte_intr_nb_efd_get(intr_handle) + 1;
+ if (tmp_nb_efd > (uint32_t)rte_intr_max_intr_get(intr_handle))
+ rte_intr_max_intr_set(intr_handle, tmp_nb_efd);
- otx2_base_dbg("Enable vector:0x%x for vfio (efds: %d, max:%d)",
- vec, intr_handle->nb_efd, intr_handle->max_intr);
+ otx2_base_dbg("Enable vector:0x%x for vfio (efds: %d, max:%d)", vec,
+ rte_intr_nb_efd_get(intr_handle),
+ rte_intr_max_intr_get(intr_handle));
/* Enable MSIX vectors to VFIO */
return irq_config(intr_handle, vec);
@@ -192,24 +208,27 @@ void
otx2_unregister_irq(struct rte_intr_handle *intr_handle,
rte_intr_callback_fn cb, void *data, unsigned int vec)
{
- struct rte_intr_handle tmp_handle;
+ struct rte_intr_handle *tmp_handle;
uint8_t retries = 5; /* 5 ms */
- int rc;
+ int rc, fd;
- if (vec > intr_handle->max_intr) {
+ if (vec > (uint32_t)rte_intr_max_intr_get(intr_handle)) {
otx2_err("Error unregistering MSI-X interrupts vec:%d > %d",
- vec, intr_handle->max_intr);
+ vec, rte_intr_max_intr_get(intr_handle));
return;
}
- tmp_handle = *intr_handle;
- tmp_handle.fd = intr_handle->efds[vec];
- if (tmp_handle.fd == -1)
+ tmp_handle = intr_handle;
+ fd = rte_intr_efds_index_get(intr_handle, vec);
+ if (fd == -1)
+ return;
+
+ if (rte_intr_fd_set(tmp_handle, fd))
return;
do {
- /* Un-register callback func from eal lib */
- rc = rte_intr_callback_unregister(&tmp_handle, cb, data);
+ /* Un-register callback func from platform lib */
+ rc = rte_intr_callback_unregister(tmp_handle, cb, data);
/* Retry only if -EAGAIN */
if (rc != -EAGAIN)
break;
@@ -218,18 +237,18 @@ otx2_unregister_irq(struct rte_intr_handle *intr_handle,
} while (retries);
if (rc < 0) {
- otx2_err("Error unregistering MSI-X intr vec %d cb, rc=%d",
- vec, rc);
+ otx2_err("Error unregistering MSI-X vec %d cb, rc=%d", vec, rc);
return;
}
- otx2_base_dbg("Disable vector:0x%x for vfio (efds: %d, max:%d)",
- vec, intr_handle->nb_efd, intr_handle->max_intr);
+ otx2_base_dbg("Disable vector:0x%x for vfio (efds: %d, max:%d)", vec,
+ rte_intr_nb_efd_get(intr_handle),
+ rte_intr_max_intr_get(intr_handle));
- if (intr_handle->efds[vec] != -1)
- close(intr_handle->efds[vec]);
+ if (rte_intr_efds_index_get(intr_handle, vec) != -1)
+ close(rte_intr_efds_index_get(intr_handle, vec));
/* Disable MSIX vectors from VFIO */
- intr_handle->efds[vec] = -1;
+ rte_intr_efds_index_set(intr_handle, vec, -1);
irq_config(intr_handle, vec);
}
diff --git a/drivers/crypto/octeontx2/otx2_cryptodev_hw_access.c b/drivers/crypto/octeontx2/otx2_cryptodev_hw_access.c
index bf90d095fe..d5d6b5bad7 100644
--- a/drivers/crypto/octeontx2/otx2_cryptodev_hw_access.c
+++ b/drivers/crypto/octeontx2/otx2_cryptodev_hw_access.c
@@ -36,7 +36,7 @@ otx2_cpt_lf_err_intr_unregister(const struct rte_cryptodev *dev,
uint16_t msix_off, uintptr_t base)
{
struct rte_pci_device *pci_dev = RTE_DEV_TO_PCI(dev->device);
- struct rte_intr_handle *handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *handle = pci_dev->intr_handle;
/* Disable error interrupts */
otx2_write64(~0ull, base + OTX2_CPT_LF_MISC_INT_ENA_W1C);
@@ -65,7 +65,7 @@ otx2_cpt_lf_err_intr_register(const struct rte_cryptodev *dev,
uint16_t msix_off, uintptr_t base)
{
struct rte_pci_device *pci_dev = RTE_DEV_TO_PCI(dev->device);
- struct rte_intr_handle *handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *handle = pci_dev->intr_handle;
int ret;
/* Disable error interrupts */
diff --git a/drivers/event/octeontx2/otx2_evdev_irq.c b/drivers/event/octeontx2/otx2_evdev_irq.c
index a2033646e6..9b7ad27b04 100644
--- a/drivers/event/octeontx2/otx2_evdev_irq.c
+++ b/drivers/event/octeontx2/otx2_evdev_irq.c
@@ -29,7 +29,7 @@ sso_lf_register_irq(const struct rte_eventdev *event_dev, uint16_t ggrp_msixoff,
uintptr_t base)
{
struct rte_pci_device *pci_dev = RTE_DEV_TO_PCI(event_dev->dev);
- struct rte_intr_handle *handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *handle = pci_dev->intr_handle;
int rc, vec;
vec = ggrp_msixoff + SSO_LF_INT_VEC_GRP;
@@ -66,7 +66,7 @@ ssow_lf_register_irq(const struct rte_eventdev *event_dev, uint16_t gws_msixoff,
uintptr_t base)
{
struct rte_pci_device *pci_dev = RTE_DEV_TO_PCI(event_dev->dev);
- struct rte_intr_handle *handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *handle = pci_dev->intr_handle;
int rc, vec;
vec = gws_msixoff + SSOW_LF_INT_VEC_IOP;
@@ -86,7 +86,7 @@ sso_lf_unregister_irq(const struct rte_eventdev *event_dev,
uint16_t ggrp_msixoff, uintptr_t base)
{
struct rte_pci_device *pci_dev = RTE_DEV_TO_PCI(event_dev->dev);
- struct rte_intr_handle *handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *handle = pci_dev->intr_handle;
int vec;
vec = ggrp_msixoff + SSO_LF_INT_VEC_GRP;
@@ -101,7 +101,7 @@ ssow_lf_unregister_irq(const struct rte_eventdev *event_dev,
uint16_t gws_msixoff, uintptr_t base)
{
struct rte_pci_device *pci_dev = RTE_DEV_TO_PCI(event_dev->dev);
- struct rte_intr_handle *handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *handle = pci_dev->intr_handle;
int vec;
vec = gws_msixoff + SSOW_LF_INT_VEC_IOP;
@@ -198,7 +198,7 @@ static int
tim_lf_register_irq(struct rte_pci_device *pci_dev, uint16_t tim_msixoff,
uintptr_t base)
{
- struct rte_intr_handle *handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *handle = pci_dev->intr_handle;
int rc, vec;
vec = tim_msixoff + TIM_LF_INT_VEC_NRSPERR_INT;
@@ -226,7 +226,7 @@ static void
tim_lf_unregister_irq(struct rte_pci_device *pci_dev, uint16_t tim_msixoff,
uintptr_t base)
{
- struct rte_intr_handle *handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *handle = pci_dev->intr_handle;
int vec;
vec = tim_msixoff + TIM_LF_INT_VEC_NRSPERR_INT;
diff --git a/drivers/mempool/octeontx2/otx2_mempool.c b/drivers/mempool/octeontx2/otx2_mempool.c
index fb630fecf8..f63dc06ef2 100644
--- a/drivers/mempool/octeontx2/otx2_mempool.c
+++ b/drivers/mempool/octeontx2/otx2_mempool.c
@@ -301,7 +301,7 @@ otx2_npa_lf_init(struct rte_pci_device *pci_dev, void *otx2_dev)
lf->pf_func = dev->pf_func;
lf->npa_msixoff = npa_msixoff;
- lf->intr_handle = &pci_dev->intr_handle;
+ lf->intr_handle = pci_dev->intr_handle;
lf->pci_dev = pci_dev;
idev->npa_pf_func = dev->pf_func;
diff --git a/drivers/net/atlantic/atl_ethdev.c b/drivers/net/atlantic/atl_ethdev.c
index f7bfac796c..1c03e8bfa1 100644
--- a/drivers/net/atlantic/atl_ethdev.c
+++ b/drivers/net/atlantic/atl_ethdev.c
@@ -359,7 +359,7 @@ eth_atl_dev_init(struct rte_eth_dev *eth_dev)
{
struct atl_adapter *adapter = eth_dev->data->dev_private;
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
struct aq_hw_s *hw = ATL_DEV_PRIVATE_TO_HW(eth_dev->data->dev_private);
int err = 0;
@@ -478,7 +478,7 @@ atl_dev_start(struct rte_eth_dev *dev)
{
struct aq_hw_s *hw = ATL_DEV_PRIVATE_TO_HW(dev->data->dev_private);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
uint32_t intr_vector = 0;
int status;
int err;
@@ -524,10 +524,9 @@ atl_dev_start(struct rte_eth_dev *dev)
}
}
- if (rte_intr_dp_is_en(intr_handle) && !intr_handle->intr_vec) {
- intr_handle->intr_vec = rte_zmalloc("intr_vec",
- dev->data->nb_rx_queues * sizeof(int), 0);
- if (intr_handle->intr_vec == NULL) {
+ if (rte_intr_dp_is_en(intr_handle)) {
+ if (rte_intr_vec_list_alloc(intr_handle, "intr_vec",
+ dev->data->nb_rx_queues)) {
PMD_INIT_LOG(ERR, "Failed to allocate %d rx_queues"
" intr_vec", dev->data->nb_rx_queues);
return -ENOMEM;
@@ -607,7 +606,7 @@ atl_dev_stop(struct rte_eth_dev *dev)
struct aq_hw_s *hw =
ATL_DEV_PRIVATE_TO_HW(dev->data->dev_private);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
PMD_INIT_FUNC_TRACE();
dev->data->dev_started = 0;
@@ -637,10 +636,7 @@ atl_dev_stop(struct rte_eth_dev *dev)
/* Clean datapath event and queue/vec mapping */
rte_intr_efd_disable(intr_handle);
- if (intr_handle->intr_vec != NULL) {
- rte_free(intr_handle->intr_vec);
- intr_handle->intr_vec = NULL;
- }
+ rte_intr_vec_list_free(intr_handle);
return 0;
}
@@ -691,7 +687,7 @@ static int
atl_dev_close(struct rte_eth_dev *dev)
{
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
struct aq_hw_s *hw;
int ret;
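The same conversion repeats across the ethdev drivers below: intr_vec is no longer allocated or freed by hand. A minimal sketch of the start/stop shape these hunks implement follows; it is illustrative only, the example_* helpers are hypothetical, and the call names match those visible in the hunks.

/* dev_start path */
static int
example_rxq_intr_setup(struct rte_eth_dev *dev,
                       struct rte_intr_handle *intr_handle)
{
        if (rte_intr_dp_is_en(intr_handle)) {
                /* Replaces rte_zmalloc("intr_vec", nb_rx_queues * sizeof(int), 0)
                 * plus the NULL check.
                 */
                if (rte_intr_vec_list_alloc(intr_handle, "intr_vec",
                                            dev->data->nb_rx_queues))
                        return -ENOMEM;
        }
        return 0;
}

/* dev_stop/close path: replaces rte_free(intr_handle->intr_vec) and the
 * NULL reset with a single call.
 */
static void
example_rxq_intr_teardown(struct rte_intr_handle *intr_handle)
{
        rte_intr_efd_disable(intr_handle);
        rte_intr_vec_list_free(intr_handle);
}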
diff --git a/drivers/net/avp/avp_ethdev.c b/drivers/net/avp/avp_ethdev.c
index 9eabdf0901..7ac55584ff 100644
--- a/drivers/net/avp/avp_ethdev.c
+++ b/drivers/net/avp/avp_ethdev.c
@@ -711,7 +711,7 @@ avp_dev_interrupt_handler(void *data)
status);
/* re-enable UIO interrupt handling */
- ret = rte_intr_ack(&pci_dev->intr_handle);
+ ret = rte_intr_ack(pci_dev->intr_handle);
if (ret < 0) {
PMD_DRV_LOG(ERR, "Failed to re-enable UIO interrupts, ret=%d\n",
ret);
@@ -730,7 +730,7 @@ avp_dev_enable_interrupts(struct rte_eth_dev *eth_dev)
return -EINVAL;
/* enable UIO interrupt handling */
- ret = rte_intr_enable(&pci_dev->intr_handle);
+ ret = rte_intr_enable(pci_dev->intr_handle);
if (ret < 0) {
PMD_DRV_LOG(ERR, "Failed to enable UIO interrupts, ret=%d\n",
ret);
@@ -759,7 +759,7 @@ avp_dev_disable_interrupts(struct rte_eth_dev *eth_dev)
RTE_PTR_ADD(registers, RTE_AVP_INTERRUPT_MASK_OFFSET));
/* enable UIO interrupt handling */
- ret = rte_intr_disable(&pci_dev->intr_handle);
+ ret = rte_intr_disable(pci_dev->intr_handle);
if (ret < 0) {
PMD_DRV_LOG(ERR, "Failed to disable UIO interrupts, ret=%d\n",
ret);
@@ -776,7 +776,7 @@ avp_dev_setup_interrupts(struct rte_eth_dev *eth_dev)
int ret;
/* register a callback handler with UIO for interrupt notifications */
- ret = rte_intr_callback_register(&pci_dev->intr_handle,
+ ret = rte_intr_callback_register(pci_dev->intr_handle,
avp_dev_interrupt_handler,
(void *)eth_dev);
if (ret < 0) {
diff --git a/drivers/net/axgbe/axgbe_ethdev.c b/drivers/net/axgbe/axgbe_ethdev.c
index dab0c6775d..7d40c18a86 100644
--- a/drivers/net/axgbe/axgbe_ethdev.c
+++ b/drivers/net/axgbe/axgbe_ethdev.c
@@ -313,7 +313,7 @@ axgbe_dev_interrupt_handler(void *param)
}
}
/* Unmask interrupts since disabled after generation */
- rte_intr_ack(&pdata->pci_dev->intr_handle);
+ rte_intr_ack(pdata->pci_dev->intr_handle);
}
/*
@@ -374,7 +374,7 @@ axgbe_dev_start(struct rte_eth_dev *dev)
}
/* enable uio/vfio intr/eventfd mapping */
- rte_intr_enable(&pdata->pci_dev->intr_handle);
+ rte_intr_enable(pdata->pci_dev->intr_handle);
/* phy start*/
pdata->phy_if.phy_start(pdata);
@@ -406,7 +406,7 @@ axgbe_dev_stop(struct rte_eth_dev *dev)
PMD_INIT_FUNC_TRACE();
- rte_intr_disable(&pdata->pci_dev->intr_handle);
+ rte_intr_disable(pdata->pci_dev->intr_handle);
if (rte_bit_relaxed_get32(AXGBE_STOPPED, &pdata->dev_state))
return 0;
@@ -2311,7 +2311,7 @@ eth_axgbe_dev_init(struct rte_eth_dev *eth_dev)
return ret;
}
- rte_intr_callback_register(&pci_dev->intr_handle,
+ rte_intr_callback_register(pci_dev->intr_handle,
axgbe_dev_interrupt_handler,
(void *)eth_dev);
PMD_INIT_LOG(DEBUG, "port %d vendorID=0x%x deviceID=0x%x",
@@ -2335,8 +2335,8 @@ axgbe_dev_close(struct rte_eth_dev *eth_dev)
axgbe_dev_clear_queues(eth_dev);
/* disable uio intr before callback unregister */
- rte_intr_disable(&pci_dev->intr_handle);
- rte_intr_callback_unregister(&pci_dev->intr_handle,
+ rte_intr_disable(pci_dev->intr_handle);
+ rte_intr_callback_unregister(pci_dev->intr_handle,
axgbe_dev_interrupt_handler,
(void *)eth_dev);
diff --git a/drivers/net/axgbe/axgbe_mdio.c b/drivers/net/axgbe/axgbe_mdio.c
index 59fa9175ad..32d8c666f9 100644
--- a/drivers/net/axgbe/axgbe_mdio.c
+++ b/drivers/net/axgbe/axgbe_mdio.c
@@ -933,7 +933,7 @@ static int __axgbe_phy_config_aneg(struct axgbe_port *pdata)
}
/* Disable auto-negotiation interrupt */
- rte_intr_disable(&pdata->pci_dev->intr_handle);
+ rte_intr_disable(pdata->pci_dev->intr_handle);
/* Start auto-negotiation in a supported mode */
if (axgbe_use_mode(pdata, AXGBE_MODE_KR)) {
@@ -951,7 +951,7 @@ static int __axgbe_phy_config_aneg(struct axgbe_port *pdata)
} else if (axgbe_use_mode(pdata, AXGBE_MODE_SGMII_100)) {
axgbe_set_mode(pdata, AXGBE_MODE_SGMII_100);
} else {
- rte_intr_enable(&pdata->pci_dev->intr_handle);
+ rte_intr_enable(pdata->pci_dev->intr_handle);
return -EINVAL;
}
@@ -964,7 +964,7 @@ static int __axgbe_phy_config_aneg(struct axgbe_port *pdata)
pdata->kx_state = AXGBE_RX_BPA;
/* Re-enable auto-negotiation interrupt */
- rte_intr_enable(&pdata->pci_dev->intr_handle);
+ rte_intr_enable(pdata->pci_dev->intr_handle);
axgbe_an37_enable_interrupts(pdata);
axgbe_an_init(pdata);
diff --git a/drivers/net/bnx2x/bnx2x_ethdev.c b/drivers/net/bnx2x/bnx2x_ethdev.c
index 78fc717ec4..f36ad30e17 100644
--- a/drivers/net/bnx2x/bnx2x_ethdev.c
+++ b/drivers/net/bnx2x/bnx2x_ethdev.c
@@ -134,7 +134,7 @@ bnx2x_interrupt_handler(void *param)
PMD_DEBUG_PERIODIC_LOG(INFO, sc, "Interrupt handled");
bnx2x_interrupt_action(dev, 1);
- rte_intr_ack(&sc->pci_dev->intr_handle);
+ rte_intr_ack(sc->pci_dev->intr_handle);
}
static void bnx2x_periodic_start(void *param)
@@ -230,10 +230,10 @@ bnx2x_dev_start(struct rte_eth_dev *dev)
}
if (IS_PF(sc)) {
- rte_intr_callback_register(&sc->pci_dev->intr_handle,
+ rte_intr_callback_register(sc->pci_dev->intr_handle,
bnx2x_interrupt_handler, (void *)dev);
- if (rte_intr_enable(&sc->pci_dev->intr_handle))
+ if (rte_intr_enable(sc->pci_dev->intr_handle))
PMD_DRV_LOG(ERR, sc, "rte_intr_enable failed");
}
@@ -258,8 +258,8 @@ bnx2x_dev_stop(struct rte_eth_dev *dev)
bnx2x_dev_rxtx_init_dummy(dev);
if (IS_PF(sc)) {
- rte_intr_disable(&sc->pci_dev->intr_handle);
- rte_intr_callback_unregister(&sc->pci_dev->intr_handle,
+ rte_intr_disable(sc->pci_dev->intr_handle);
+ rte_intr_callback_unregister(sc->pci_dev->intr_handle,
bnx2x_interrupt_handler, (void *)dev);
/* stop the periodic callout */
diff --git a/drivers/net/bnxt/bnxt_ethdev.c b/drivers/net/bnxt/bnxt_ethdev.c
index 2791a5c62d..5a34bb96d0 100644
--- a/drivers/net/bnxt/bnxt_ethdev.c
+++ b/drivers/net/bnxt/bnxt_ethdev.c
@@ -729,7 +729,7 @@ static int bnxt_alloc_prev_ring_stats(struct bnxt *bp)
static int bnxt_start_nic(struct bnxt *bp)
{
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(bp->eth_dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
uint32_t intr_vector = 0;
uint32_t queue_id, base = BNXT_MISC_VEC_ID;
uint32_t vec = BNXT_MISC_VEC_ID;
@@ -846,26 +846,24 @@ static int bnxt_start_nic(struct bnxt *bp)
return rc;
}
- if (rte_intr_dp_is_en(intr_handle) && !intr_handle->intr_vec) {
- intr_handle->intr_vec =
- rte_zmalloc("intr_vec",
- bp->eth_dev->data->nb_rx_queues *
- sizeof(int), 0);
- if (intr_handle->intr_vec == NULL) {
+ if (rte_intr_dp_is_en(intr_handle)) {
+ if (rte_intr_vec_list_alloc(intr_handle, "intr_vec",
+ bp->eth_dev->data->nb_rx_queues)) {
PMD_DRV_LOG(ERR, "Failed to allocate %d rx_queues"
" intr_vec", bp->eth_dev->data->nb_rx_queues);
rc = -ENOMEM;
goto err_out;
}
- PMD_DRV_LOG(DEBUG, "intr_handle->intr_vec = %p "
- "intr_handle->nb_efd = %d intr_handle->max_intr = %d\n",
- intr_handle->intr_vec, intr_handle->nb_efd,
- intr_handle->max_intr);
+ PMD_DRV_LOG(DEBUG, "intr_handle->nb_efd = %d "
+ "intr_handle->max_intr = %d\n",
+ rte_intr_nb_efd_get(intr_handle),
+ rte_intr_max_intr_get(intr_handle));
for (queue_id = 0; queue_id < bp->eth_dev->data->nb_rx_queues;
queue_id++) {
- intr_handle->intr_vec[queue_id] =
- vec + BNXT_RX_VEC_START;
- if (vec < base + intr_handle->nb_efd - 1)
+ rte_intr_vec_list_index_set(intr_handle,
+ queue_id, vec + BNXT_RX_VEC_START);
+ if (vec < base + rte_intr_nb_efd_get(intr_handle)
+ - 1)
vec++;
}
}
@@ -1473,7 +1471,7 @@ static int bnxt_dev_stop(struct rte_eth_dev *eth_dev)
{
struct bnxt *bp = eth_dev->data->dev_private;
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
struct rte_eth_link link;
int ret;
@@ -1515,10 +1513,7 @@ static int bnxt_dev_stop(struct rte_eth_dev *eth_dev)
/* Clean queue intr-vector mapping */
rte_intr_efd_disable(intr_handle);
- if (intr_handle->intr_vec != NULL) {
- rte_free(intr_handle->intr_vec);
- intr_handle->intr_vec = NULL;
- }
+ rte_intr_vec_list_free(intr_handle);
bnxt_hwrm_port_clr_stats(bp);
bnxt_free_tx_mbufs(bp);
diff --git a/drivers/net/bnxt/bnxt_irq.c b/drivers/net/bnxt/bnxt_irq.c
index 122a1f9908..508abfc844 100644
--- a/drivers/net/bnxt/bnxt_irq.c
+++ b/drivers/net/bnxt/bnxt_irq.c
@@ -67,7 +67,7 @@ void bnxt_int_handler(void *param)
int bnxt_free_int(struct bnxt *bp)
{
- struct rte_intr_handle *intr_handle = &bp->pdev->intr_handle;
+ struct rte_intr_handle *intr_handle = bp->pdev->intr_handle;
struct bnxt_irq *irq = bp->irq_tbl;
int rc = 0;
@@ -170,7 +170,7 @@ int bnxt_setup_int(struct bnxt *bp)
int bnxt_request_int(struct bnxt *bp)
{
- struct rte_intr_handle *intr_handle = &bp->pdev->intr_handle;
+ struct rte_intr_handle *intr_handle = bp->pdev->intr_handle;
struct bnxt_irq *irq = bp->irq_tbl;
int rc = 0;
diff --git a/drivers/net/dpaa/dpaa_ethdev.c b/drivers/net/dpaa/dpaa_ethdev.c
index 89ea7dd47c..b9bf9d2966 100644
--- a/drivers/net/dpaa/dpaa_ethdev.c
+++ b/drivers/net/dpaa/dpaa_ethdev.c
@@ -208,7 +208,7 @@ dpaa_eth_dev_configure(struct rte_eth_dev *dev)
PMD_INIT_FUNC_TRACE();
dpaa_dev = container_of(rdev, struct rte_dpaa_device, device);
- intr_handle = &dpaa_dev->intr_handle;
+ intr_handle = dpaa_dev->intr_handle;
__fif = container_of(fif, struct __fman_if, __if);
/* Rx offloads which are enabled by default */
@@ -255,13 +255,14 @@ dpaa_eth_dev_configure(struct rte_eth_dev *dev)
}
/* if the interrupts were configured on this devices*/
- if (intr_handle && intr_handle->fd) {
+ if (intr_handle && rte_intr_fd_get(intr_handle)) {
if (dev->data->dev_conf.intr_conf.lsc != 0)
rte_intr_callback_register(intr_handle,
dpaa_interrupt_handler,
(void *)dev);
- ret = dpaa_intr_enable(__fif->node_name, intr_handle->fd);
+ ret = dpaa_intr_enable(__fif->node_name,
+ rte_intr_fd_get(intr_handle));
if (ret) {
if (dev->data->dev_conf.intr_conf.lsc != 0) {
rte_intr_callback_unregister(intr_handle,
@@ -368,9 +369,10 @@ static void dpaa_interrupt_handler(void *param)
int bytes_read;
dpaa_dev = container_of(rdev, struct rte_dpaa_device, device);
- intr_handle = &dpaa_dev->intr_handle;
+ intr_handle = dpaa_dev->intr_handle;
- bytes_read = read(intr_handle->fd, &buf, sizeof(uint64_t));
+ bytes_read = read(rte_intr_fd_get(intr_handle), &buf,
+ sizeof(uint64_t));
if (bytes_read < 0)
DPAA_PMD_ERR("Error reading eventfd\n");
dpaa_eth_link_update(dev, 0);
@@ -440,7 +442,7 @@ static int dpaa_eth_dev_close(struct rte_eth_dev *dev)
}
dpaa_dev = container_of(rdev, struct rte_dpaa_device, device);
- intr_handle = &dpaa_dev->intr_handle;
+ intr_handle = dpaa_dev->intr_handle;
__fif = container_of(fif, struct __fman_if, __if);
ret = dpaa_eth_dev_stop(dev);
@@ -449,7 +451,7 @@ static int dpaa_eth_dev_close(struct rte_eth_dev *dev)
if (link->link_status && !link->link_autoneg)
dpaa_restart_link_autoneg(__fif->node_name);
- if (intr_handle && intr_handle->fd &&
+ if (intr_handle && rte_intr_fd_get(intr_handle) &&
dev->data->dev_conf.intr_conf.lsc != 0) {
dpaa_intr_disable(__fif->node_name);
rte_intr_callback_unregister(intr_handle,
@@ -1072,26 +1074,38 @@ int dpaa_eth_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
rxq->qp = qp;
/* Set up the device interrupt handler */
- if (!dev->intr_handle) {
+ if (dev->intr_handle == NULL) {
struct rte_dpaa_device *dpaa_dev;
struct rte_device *rdev = dev->device;
dpaa_dev = container_of(rdev, struct rte_dpaa_device,
device);
- dev->intr_handle = &dpaa_dev->intr_handle;
- dev->intr_handle->intr_vec = rte_zmalloc(NULL,
- dpaa_push_mode_max_queue, 0);
- if (!dev->intr_handle->intr_vec) {
+ dev->intr_handle = dpaa_dev->intr_handle;
+ if (rte_intr_vec_list_alloc(dev->intr_handle,
+ NULL, dpaa_push_mode_max_queue)) {
DPAA_PMD_ERR("intr_vec alloc failed");
return -ENOMEM;
}
- dev->intr_handle->nb_efd = dpaa_push_mode_max_queue;
- dev->intr_handle->max_intr = dpaa_push_mode_max_queue;
+ if (rte_intr_nb_efd_set(dev->intr_handle,
+ dpaa_push_mode_max_queue))
+ return -rte_errno;
+
+ if (rte_intr_max_intr_set(dev->intr_handle,
+ dpaa_push_mode_max_queue))
+ return -rte_errno;
}
- dev->intr_handle->type = RTE_INTR_HANDLE_EXT;
- dev->intr_handle->intr_vec[queue_idx] = queue_idx + 1;
- dev->intr_handle->efds[queue_idx] = q_fd;
+ if (rte_intr_type_set(dev->intr_handle, RTE_INTR_HANDLE_EXT))
+ return -rte_errno;
+
+ if (rte_intr_vec_list_index_set(dev->intr_handle,
+ queue_idx, queue_idx + 1))
+ return -rte_errno;
+
+ if (rte_intr_efds_index_set(dev->intr_handle, queue_idx,
+ q_fd))
+ return -rte_errno;
+
rxq->q_fd = q_fd;
}
rxq->bp_array = rte_dpaa_bpid_info;
diff --git a/drivers/net/dpaa2/dpaa2_ethdev.c b/drivers/net/dpaa2/dpaa2_ethdev.c
index 59e728577f..73d17f7b3c 100644
--- a/drivers/net/dpaa2/dpaa2_ethdev.c
+++ b/drivers/net/dpaa2/dpaa2_ethdev.c
@@ -1145,7 +1145,7 @@ dpaa2_dev_start(struct rte_eth_dev *dev)
struct rte_intr_handle *intr_handle;
dpaa2_dev = container_of(rdev, struct rte_dpaa2_device, device);
- intr_handle = &dpaa2_dev->intr_handle;
+ intr_handle = dpaa2_dev->intr_handle;
PMD_INIT_FUNC_TRACE();
@@ -1216,8 +1216,8 @@ dpaa2_dev_start(struct rte_eth_dev *dev)
}
/* if the interrupts were configured on this devices*/
- if (intr_handle && (intr_handle->fd) &&
- (dev->data->dev_conf.intr_conf.lsc != 0)) {
+ if (intr_handle && rte_intr_fd_get(intr_handle) &&
+ dev->data->dev_conf.intr_conf.lsc != 0) {
/* Registering LSC interrupt handler */
rte_intr_callback_register(intr_handle,
dpaa2_interrupt_handler,
@@ -1256,8 +1256,8 @@ dpaa2_dev_stop(struct rte_eth_dev *dev)
PMD_INIT_FUNC_TRACE();
/* reset interrupt callback */
- if (intr_handle && (intr_handle->fd) &&
- (dev->data->dev_conf.intr_conf.lsc != 0)) {
+ if (intr_handle && rte_intr_fd_get(intr_handle) &&
+ dev->data->dev_conf.intr_conf.lsc != 0) {
/*disable dpni irqs */
dpaa2_eth_setup_irqs(dev, 0);
diff --git a/drivers/net/e1000/em_ethdev.c b/drivers/net/e1000/em_ethdev.c
index 9da477e59d..18fea4e0ac 100644
--- a/drivers/net/e1000/em_ethdev.c
+++ b/drivers/net/e1000/em_ethdev.c
@@ -237,7 +237,7 @@ static int
eth_em_dev_init(struct rte_eth_dev *eth_dev)
{
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
struct e1000_adapter *adapter =
E1000_DEV_PRIVATE(eth_dev->data->dev_private);
struct e1000_hw *hw =
@@ -523,7 +523,7 @@ eth_em_start(struct rte_eth_dev *dev)
struct e1000_hw *hw =
E1000_DEV_PRIVATE_TO_HW(dev->data->dev_private);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
int ret, mask;
uint32_t intr_vector = 0;
uint32_t *speeds;
@@ -573,12 +573,10 @@ eth_em_start(struct rte_eth_dev *dev)
}
if (rte_intr_dp_is_en(intr_handle)) {
- intr_handle->intr_vec =
- rte_zmalloc("intr_vec",
- dev->data->nb_rx_queues * sizeof(int), 0);
- if (intr_handle->intr_vec == NULL) {
+ if (rte_intr_vec_list_alloc(intr_handle, "intr_vec",
+ dev->data->nb_rx_queues)) {
PMD_INIT_LOG(ERR, "Failed to allocate %d rx_queues"
- " intr_vec", dev->data->nb_rx_queues);
+ " intr_vec", dev->data->nb_rx_queues);
return -ENOMEM;
}
@@ -716,7 +714,7 @@ eth_em_stop(struct rte_eth_dev *dev)
struct rte_eth_link link;
struct e1000_hw *hw = E1000_DEV_PRIVATE_TO_HW(dev->data->dev_private);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
dev->data->dev_started = 0;
@@ -750,10 +748,7 @@ eth_em_stop(struct rte_eth_dev *dev)
/* Clean datapath event and queue/vec mapping */
rte_intr_efd_disable(intr_handle);
- if (intr_handle->intr_vec != NULL) {
- rte_free(intr_handle->intr_vec);
- intr_handle->intr_vec = NULL;
- }
+ rte_intr_vec_list_free(intr_handle);
return 0;
}
@@ -765,7 +760,7 @@ eth_em_close(struct rte_eth_dev *dev)
struct e1000_adapter *adapter =
E1000_DEV_PRIVATE(dev->data->dev_private);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
int ret;
if (rte_eal_process_type() != RTE_PROC_PRIMARY)
@@ -1006,7 +1001,7 @@ eth_em_rx_queue_intr_enable(struct rte_eth_dev *dev, __rte_unused uint16_t queue
{
struct e1000_hw *hw = E1000_DEV_PRIVATE_TO_HW(dev->data->dev_private);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
em_rxq_intr_enable(hw);
rte_intr_ack(intr_handle);
diff --git a/drivers/net/e1000/igb_ethdev.c b/drivers/net/e1000/igb_ethdev.c
index ae3bc4a9c2..ff06575f03 100644
--- a/drivers/net/e1000/igb_ethdev.c
+++ b/drivers/net/e1000/igb_ethdev.c
@@ -515,7 +515,7 @@ igb_intr_enable(struct rte_eth_dev *dev)
struct e1000_hw *hw =
E1000_DEV_PRIVATE_TO_HW(dev->data->dev_private);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
if (rte_intr_allow_others(intr_handle) &&
dev->data->dev_conf.intr_conf.lsc != 0) {
@@ -532,7 +532,7 @@ igb_intr_disable(struct rte_eth_dev *dev)
struct e1000_hw *hw =
E1000_DEV_PRIVATE_TO_HW(dev->data->dev_private);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
if (rte_intr_allow_others(intr_handle) &&
dev->data->dev_conf.intr_conf.lsc != 0) {
@@ -851,12 +851,12 @@ eth_igb_dev_init(struct rte_eth_dev *eth_dev)
eth_dev->data->port_id, pci_dev->id.vendor_id,
pci_dev->id.device_id);
- rte_intr_callback_register(&pci_dev->intr_handle,
+ rte_intr_callback_register(pci_dev->intr_handle,
eth_igb_interrupt_handler,
(void *)eth_dev);
/* enable uio/vfio intr/eventfd mapping */
- rte_intr_enable(&pci_dev->intr_handle);
+ rte_intr_enable(pci_dev->intr_handle);
/* enable support intr */
igb_intr_enable(eth_dev);
@@ -992,7 +992,7 @@ eth_igbvf_dev_init(struct rte_eth_dev *eth_dev)
eth_dev->data->port_id, pci_dev->id.vendor_id,
pci_dev->id.device_id, "igb_mac_82576_vf");
- intr_handle = &pci_dev->intr_handle;
+ intr_handle = pci_dev->intr_handle;
rte_intr_callback_register(intr_handle,
eth_igbvf_interrupt_handler, eth_dev);
@@ -1196,7 +1196,7 @@ eth_igb_start(struct rte_eth_dev *dev)
struct e1000_adapter *adapter =
E1000_DEV_PRIVATE(dev->data->dev_private);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
int ret, mask;
uint32_t intr_vector = 0;
uint32_t ctrl_ext;
@@ -1255,11 +1255,10 @@ eth_igb_start(struct rte_eth_dev *dev)
return -1;
}
- if (rte_intr_dp_is_en(intr_handle) && !intr_handle->intr_vec) {
- intr_handle->intr_vec =
- rte_zmalloc("intr_vec",
- dev->data->nb_rx_queues * sizeof(int), 0);
- if (intr_handle->intr_vec == NULL) {
+ /* Allocate the vector list */
+ if (rte_intr_dp_is_en(intr_handle)) {
+ if (rte_intr_vec_list_alloc(intr_handle, "intr_vec",
+ dev->data->nb_rx_queues)) {
PMD_INIT_LOG(ERR, "Failed to allocate %d rx_queues"
" intr_vec", dev->data->nb_rx_queues);
return -ENOMEM;
@@ -1418,7 +1417,7 @@ eth_igb_stop(struct rte_eth_dev *dev)
struct e1000_hw *hw = E1000_DEV_PRIVATE_TO_HW(dev->data->dev_private);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
struct rte_eth_link link;
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
struct e1000_adapter *adapter =
E1000_DEV_PRIVATE(dev->data->dev_private);
@@ -1462,10 +1461,7 @@ eth_igb_stop(struct rte_eth_dev *dev)
/* Clean datapath event and queue/vec mapping */
rte_intr_efd_disable(intr_handle);
- if (intr_handle->intr_vec != NULL) {
- rte_free(intr_handle->intr_vec);
- intr_handle->intr_vec = NULL;
- }
+ rte_intr_vec_list_free(intr_handle);
adapter->stopped = true;
dev->data->dev_started = 0;
@@ -1505,7 +1501,7 @@ eth_igb_close(struct rte_eth_dev *dev)
struct e1000_hw *hw = E1000_DEV_PRIVATE_TO_HW(dev->data->dev_private);
struct rte_eth_link link;
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
struct e1000_filter_info *filter_info =
E1000_DEV_PRIVATE_TO_FILTER_INFO(dev->data->dev_private);
int ret;
@@ -1531,10 +1527,8 @@ eth_igb_close(struct rte_eth_dev *dev)
igb_dev_free_queues(dev);
- if (intr_handle->intr_vec) {
- rte_free(intr_handle->intr_vec);
- intr_handle->intr_vec = NULL;
- }
+ /* Cleanup vector list */
+ rte_intr_vec_list_free(intr_handle);
memset(&link, 0, sizeof(link));
rte_eth_linkstatus_set(dev, &link);
@@ -2771,7 +2765,7 @@ static int eth_igb_rxq_interrupt_setup(struct rte_eth_dev *dev)
struct e1000_hw *hw =
E1000_DEV_PRIVATE_TO_HW(dev->data->dev_private);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
int misc_shift = rte_intr_allow_others(intr_handle) ? 1 : 0;
struct rte_eth_dev_info dev_info;
@@ -3288,7 +3282,7 @@ igbvf_dev_start(struct rte_eth_dev *dev)
struct e1000_adapter *adapter =
E1000_DEV_PRIVATE(dev->data->dev_private);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
int ret;
uint32_t intr_vector = 0;
@@ -3319,11 +3313,10 @@ igbvf_dev_start(struct rte_eth_dev *dev)
return ret;
}
- if (rte_intr_dp_is_en(intr_handle) && !intr_handle->intr_vec) {
- intr_handle->intr_vec =
- rte_zmalloc("intr_vec",
- dev->data->nb_rx_queues * sizeof(int), 0);
- if (!intr_handle->intr_vec) {
+ /* Allocate the vector list */
+ if (rte_intr_dp_is_en(intr_handle)) {
+ if (rte_intr_vec_list_alloc(intr_handle, "intr_vec",
+ dev->data->nb_rx_queues)) {
PMD_INIT_LOG(ERR, "Failed to allocate %d rx_queues"
" intr_vec", dev->data->nb_rx_queues);
return -ENOMEM;
@@ -3345,7 +3338,7 @@ static int
igbvf_dev_stop(struct rte_eth_dev *dev)
{
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
struct e1000_adapter *adapter =
E1000_DEV_PRIVATE(dev->data->dev_private);
@@ -3369,10 +3362,9 @@ igbvf_dev_stop(struct rte_eth_dev *dev)
/* Clean datapath event and queue/vec mapping */
rte_intr_efd_disable(intr_handle);
- if (intr_handle->intr_vec) {
- rte_free(intr_handle->intr_vec);
- intr_handle->intr_vec = NULL;
- }
+
+ /* Clean vector list */
+ rte_intr_vec_list_free(intr_handle);
adapter->stopped = true;
dev->data->dev_started = 0;
@@ -3410,7 +3402,7 @@ igbvf_dev_close(struct rte_eth_dev *dev)
memset(&addr, 0, sizeof(addr));
igbvf_default_mac_addr_set(dev, &addr);
- rte_intr_callback_unregister(&pci_dev->intr_handle,
+ rte_intr_callback_unregister(pci_dev->intr_handle,
eth_igbvf_interrupt_handler,
(void *)dev);
@@ -5112,7 +5104,7 @@ eth_igb_rx_queue_intr_disable(struct rte_eth_dev *dev, uint16_t queue_id)
struct e1000_hw *hw =
E1000_DEV_PRIVATE_TO_HW(dev->data->dev_private);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
uint32_t vec = E1000_MISC_VEC_ID;
if (rte_intr_allow_others(intr_handle))
@@ -5132,7 +5124,7 @@ eth_igb_rx_queue_intr_enable(struct rte_eth_dev *dev, uint16_t queue_id)
struct e1000_hw *hw =
E1000_DEV_PRIVATE_TO_HW(dev->data->dev_private);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
uint32_t vec = E1000_MISC_VEC_ID;
if (rte_intr_allow_others(intr_handle))
@@ -5210,7 +5202,7 @@ eth_igb_configure_msix_intr(struct rte_eth_dev *dev)
uint32_t base = E1000_MISC_VEC_ID;
uint32_t misc_shift = 0;
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
/* won't configure msix register if no mapping is done
* between intr vector and event fd
@@ -5251,8 +5243,9 @@ eth_igb_configure_msix_intr(struct rte_eth_dev *dev)
E1000_WRITE_REG(hw, E1000_GPIE, E1000_GPIE_MSIX_MODE |
E1000_GPIE_PBA | E1000_GPIE_EIAME |
E1000_GPIE_NSICR);
- intr_mask = RTE_LEN2MASK(intr_handle->nb_efd, uint32_t) <<
- misc_shift;
+ intr_mask =
+ RTE_LEN2MASK(rte_intr_nb_efd_get(intr_handle),
+ uint32_t) << misc_shift;
if (dev->data->dev_conf.intr_conf.lsc != 0)
intr_mask |= (1 << IGB_MSIX_OTHER_INTR_VEC);
@@ -5270,8 +5263,8 @@ eth_igb_configure_msix_intr(struct rte_eth_dev *dev)
/* use EIAM to auto-mask when MSI-X interrupt
* is asserted, this saves a register write for every interrupt
*/
- intr_mask = RTE_LEN2MASK(intr_handle->nb_efd, uint32_t) <<
- misc_shift;
+ intr_mask = RTE_LEN2MASK(rte_intr_nb_efd_get(intr_handle),
+ uint32_t) << misc_shift;
if (dev->data->dev_conf.intr_conf.lsc != 0)
intr_mask |= (1 << IGB_MSIX_OTHER_INTR_VEC);
@@ -5281,8 +5274,8 @@ eth_igb_configure_msix_intr(struct rte_eth_dev *dev)
for (queue_id = 0; queue_id < dev->data->nb_rx_queues; queue_id++) {
eth_igb_assign_msix_vector(hw, 0, queue_id, vec);
- intr_handle->intr_vec[queue_id] = vec;
- if (vec < base + intr_handle->nb_efd - 1)
+ rte_intr_vec_list_index_set(intr_handle, queue_id, vec);
+ if (vec < base + rte_intr_nb_efd_get(intr_handle) - 1)
vec++;
}
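
For reference, the hunks above all apply the same conversion used throughout
this patch: the driver no longer rte_zmalloc()s and frees intr_vec itself but
goes through the vector list accessors. A minimal sketch of that lifecycle,
assuming a generic PMD start/stop pair (function names are illustrative and
not part of this patch):

	static int example_dev_start(struct rte_eth_dev *dev)
	{
		struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
		struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
		uint16_t q;

		/* Replaces rte_zmalloc("intr_vec", nb_rx_queues * sizeof(int), 0) */
		if (rte_intr_dp_is_en(intr_handle) &&
		    rte_intr_vec_list_alloc(intr_handle, "intr_vec",
					    dev->data->nb_rx_queues))
			return -ENOMEM;

		/* Replaces intr_handle->intr_vec[q] = vec */
		for (q = 0; q < dev->data->nb_rx_queues; q++)
			if (rte_intr_vec_list_index_set(intr_handle, q,
					RTE_INTR_VEC_RXTX_OFFSET + q))
				return -rte_errno;

		return 0;
	}

	static int example_dev_stop(struct rte_eth_dev *dev)
	{
		struct rte_intr_handle *intr_handle =
			RTE_ETH_DEV_TO_PCI(dev)->intr_handle;

		rte_intr_efd_disable(intr_handle);
		/* Replaces rte_free(intr_handle->intr_vec); intr_vec = NULL */
		rte_intr_vec_list_free(intr_handle);
		return 0;
	}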
diff --git a/drivers/net/ena/ena_ethdev.c b/drivers/net/ena/ena_ethdev.c
index 572d7c20f9..634c97acf6 100644
--- a/drivers/net/ena/ena_ethdev.c
+++ b/drivers/net/ena/ena_ethdev.c
@@ -494,7 +494,7 @@ static void ena_config_debug_area(struct ena_adapter *adapter)
static int ena_close(struct rte_eth_dev *dev)
{
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
struct ena_adapter *adapter = dev->data->dev_private;
int ret = 0;
@@ -954,7 +954,7 @@ static int ena_stop(struct rte_eth_dev *dev)
struct ena_adapter *adapter = dev->data->dev_private;
struct ena_com_dev *ena_dev = &adapter->ena_dev;
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
int rc;
/* Cannot free memory in secondary process */
@@ -976,10 +976,9 @@ static int ena_stop(struct rte_eth_dev *dev)
rte_intr_disable(intr_handle);
rte_intr_efd_disable(intr_handle);
- if (intr_handle->intr_vec != NULL) {
- rte_free(intr_handle->intr_vec);
- intr_handle->intr_vec = NULL;
- }
+
+ /* Cleanup vector list */
+ rte_intr_vec_list_free(intr_handle);
rte_intr_enable(intr_handle);
@@ -995,7 +994,7 @@ static int ena_create_io_queue(struct rte_eth_dev *dev, struct ena_ring *ring)
struct ena_adapter *adapter = ring->adapter;
struct ena_com_dev *ena_dev = &adapter->ena_dev;
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
struct ena_com_create_io_ctx ctx =
/* policy set to _HOST just to satisfy icc compiler */
{ ENA_ADMIN_PLACEMENT_POLICY_HOST,
@@ -1015,7 +1014,10 @@ static int ena_create_io_queue(struct rte_eth_dev *dev, struct ena_ring *ring)
ena_qid = ENA_IO_RXQ_IDX(ring->id);
ctx.direction = ENA_COM_IO_QUEUE_DIRECTION_RX;
if (rte_intr_dp_is_en(intr_handle))
- ctx.msix_vector = intr_handle->intr_vec[ring->id];
+ ctx.msix_vector =
+ rte_intr_vec_list_index_get(intr_handle,
+ ring->id);
+
for (i = 0; i < ring->ring_size; i++)
ring->empty_rx_reqs[i] = i;
}
@@ -1824,7 +1826,7 @@ static int eth_ena_dev_init(struct rte_eth_dev *eth_dev)
pci_dev->addr.devid,
pci_dev->addr.function);
- intr_handle = &pci_dev->intr_handle;
+ intr_handle = pci_dev->intr_handle;
adapter->regs = pci_dev->mem_resource[ENA_REGS_BAR].addr;
adapter->dev_mem_base = pci_dev->mem_resource[ENA_MEM_BAR].addr;
@@ -3112,7 +3114,7 @@ static int ena_parse_devargs(struct ena_adapter *adapter,
static int ena_setup_rx_intr(struct rte_eth_dev *dev)
{
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
int rc;
uint16_t vectors_nb, i;
bool rx_intr_requested = dev->data->dev_conf.intr_conf.rxq;
@@ -3139,9 +3141,9 @@ static int ena_setup_rx_intr(struct rte_eth_dev *dev)
goto enable_intr;
}
- intr_handle->intr_vec = rte_zmalloc("intr_vec",
- dev->data->nb_rx_queues * sizeof(*intr_handle->intr_vec), 0);
- if (intr_handle->intr_vec == NULL) {
+ /* Allocate the vector list */
+ if (rte_intr_vec_list_alloc(intr_handle, "intr_vec",
+ dev->data->nb_rx_queues)) {
PMD_DRV_LOG(ERR,
"Failed to allocate interrupt vector for %d queues\n",
dev->data->nb_rx_queues);
@@ -3160,7 +3162,9 @@ static int ena_setup_rx_intr(struct rte_eth_dev *dev)
}
for (i = 0; i < vectors_nb; ++i)
- intr_handle->intr_vec[i] = RTE_INTR_VEC_RXTX_OFFSET + i;
+ if (rte_intr_vec_list_index_set(intr_handle, i,
+ RTE_INTR_VEC_RXTX_OFFSET + i))
+ goto disable_intr_efd;
rte_intr_enable(intr_handle);
return 0;
@@ -3168,8 +3172,7 @@ static int ena_setup_rx_intr(struct rte_eth_dev *dev)
disable_intr_efd:
rte_intr_efd_disable(intr_handle);
free_intr_vec:
- rte_free(intr_handle->intr_vec);
- intr_handle->intr_vec = NULL;
+ rte_intr_vec_list_free(intr_handle);
enable_intr:
rte_intr_enable(intr_handle);
return rc;
diff --git a/drivers/net/enic/enic_main.c b/drivers/net/enic/enic_main.c
index f7ae84767f..5cc6d9f017 100644
--- a/drivers/net/enic/enic_main.c
+++ b/drivers/net/enic/enic_main.c
@@ -448,7 +448,7 @@ enic_intr_handler(void *arg)
rte_eth_dev_callback_process(dev, RTE_ETH_EVENT_INTR_LSC, NULL);
enic_log_q_error(enic);
/* Re-enable irq in case of INTx */
- rte_intr_ack(&enic->pdev->intr_handle);
+ rte_intr_ack(enic->pdev->intr_handle);
}
static int enic_rxq_intr_init(struct enic *enic)
@@ -477,14 +477,16 @@ static int enic_rxq_intr_init(struct enic *enic)
" interrupts\n");
return err;
}
- intr_handle->intr_vec = rte_zmalloc("enic_intr_vec",
- rxq_intr_count * sizeof(int), 0);
- if (intr_handle->intr_vec == NULL) {
+
+ if (rte_intr_vec_list_alloc(intr_handle, "enic_intr_vec",
+ rxq_intr_count)) {
dev_err(enic, "Failed to allocate intr_vec\n");
return -ENOMEM;
}
for (i = 0; i < rxq_intr_count; i++)
- intr_handle->intr_vec[i] = i + ENICPMD_RXQ_INTR_OFFSET;
+ if (rte_intr_vec_list_index_set(intr_handle, i,
+ i + ENICPMD_RXQ_INTR_OFFSET))
+ return -rte_errno;
return 0;
}
@@ -494,10 +496,8 @@ static void enic_rxq_intr_deinit(struct enic *enic)
intr_handle = enic->rte_dev->intr_handle;
rte_intr_efd_disable(intr_handle);
- if (intr_handle->intr_vec != NULL) {
- rte_free(intr_handle->intr_vec);
- intr_handle->intr_vec = NULL;
- }
+
+ rte_intr_vec_list_free(intr_handle);
}
static void enic_prep_wq_for_simple_tx(struct enic *enic, uint16_t queue_idx)
@@ -667,10 +667,10 @@ int enic_enable(struct enic *enic)
vnic_dev_enable_wait(enic->vdev);
/* Register and enable error interrupt */
- rte_intr_callback_register(&(enic->pdev->intr_handle),
+ rte_intr_callback_register(enic->pdev->intr_handle,
enic_intr_handler, (void *)enic->rte_dev);
- rte_intr_enable(&(enic->pdev->intr_handle));
+ rte_intr_enable(enic->pdev->intr_handle);
/* Unmask LSC interrupt */
vnic_intr_unmask(&enic->intr[ENICPMD_LSC_INTR_OFFSET]);
@@ -1111,8 +1111,8 @@ int enic_disable(struct enic *enic)
(void)vnic_intr_masked(&enic->intr[i]); /* flush write */
}
enic_rxq_intr_deinit(enic);
- rte_intr_disable(&enic->pdev->intr_handle);
- rte_intr_callback_unregister(&enic->pdev->intr_handle,
+ rte_intr_disable(enic->pdev->intr_handle);
+ rte_intr_callback_unregister(enic->pdev->intr_handle,
enic_intr_handler,
(void *)enic->rte_dev);
diff --git a/drivers/net/failsafe/failsafe.c b/drivers/net/failsafe/failsafe.c
index 82d595b1d1..ad6b43538e 100644
--- a/drivers/net/failsafe/failsafe.c
+++ b/drivers/net/failsafe/failsafe.c
@@ -264,11 +264,23 @@ fs_eth_dev_create(struct rte_vdev_device *vdev)
RTE_ETHER_ADDR_BYTES(mac));
dev->data->dev_flags |= RTE_ETH_DEV_INTR_LSC |
RTE_ETH_DEV_AUTOFILL_QUEUE_XSTATS;
- PRIV(dev)->intr_handle = (struct rte_intr_handle){
- .fd = -1,
- .type = RTE_INTR_HANDLE_EXT,
- };
+
+ /* Allocate interrupt instance */
+ PRIV(dev)->intr_handle =
+ rte_intr_instance_alloc(RTE_INTR_INSTANCE_F_SHARED);
+ if (PRIV(dev)->intr_handle == NULL) {
+ ERROR("Failed to allocate intr handle");
+ goto cancel_alarm;
+ }
+
+ if (rte_intr_fd_set(PRIV(dev)->intr_handle, -1))
+ goto cancel_alarm;
+
+ if (rte_intr_type_set(PRIV(dev)->intr_handle, RTE_INTR_HANDLE_EXT))
+ goto cancel_alarm;
+
rte_eth_dev_probing_finish(dev);
+
return 0;
cancel_alarm:
failsafe_hotplug_alarm_cancel(dev);
@@ -297,6 +309,7 @@ fs_rte_eth_free(const char *name)
return 0; /* port already released */
ret = failsafe_eth_dev_close(dev);
rte_eth_dev_release_port(dev);
+ rte_intr_instance_free(PRIV(dev)->intr_handle);
return ret;
}
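
Drivers that used to embed struct rte_intr_handle by value in their private
data (as failsafe did above) now have to allocate an instance and release it
on teardown, since the structure size is no longer visible to them. A sketch
of that pattern, with illustrative function names and the same flag as the
hunk above:

	static int example_alloc_intr_handle(struct fs_priv *priv)
	{
		struct rte_intr_handle *handle;

		handle = rte_intr_instance_alloc(RTE_INTR_INSTANCE_F_SHARED);
		if (handle == NULL)
			return -ENOMEM;

		if (rte_intr_fd_set(handle, -1) ||
		    rte_intr_type_set(handle, RTE_INTR_HANDLE_EXT)) {
			rte_intr_instance_free(handle);
			return -rte_errno;
		}

		priv->intr_handle = handle;
		return 0;
	}

	static void example_free_intr_handle(struct fs_priv *priv)
	{
		rte_intr_instance_free(priv->intr_handle);
		priv->intr_handle = NULL;
	}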
diff --git a/drivers/net/failsafe/failsafe_intr.c b/drivers/net/failsafe/failsafe_intr.c
index 5f4810051d..14b87a54ab 100644
--- a/drivers/net/failsafe/failsafe_intr.c
+++ b/drivers/net/failsafe/failsafe_intr.c
@@ -410,12 +410,10 @@ fs_rx_intr_vec_uninstall(struct fs_priv *priv)
{
struct rte_intr_handle *intr_handle;
- intr_handle = &priv->intr_handle;
- if (intr_handle->intr_vec != NULL) {
- free(intr_handle->intr_vec);
- intr_handle->intr_vec = NULL;
- }
- intr_handle->nb_efd = 0;
+ intr_handle = priv->intr_handle;
+ rte_intr_vec_list_free(intr_handle);
+
+ rte_intr_nb_efd_set(intr_handle, 0);
}
/**
@@ -439,11 +437,9 @@ fs_rx_intr_vec_install(struct fs_priv *priv)
rxqs_n = priv->data->nb_rx_queues;
n = RTE_MIN(rxqs_n, (uint32_t)RTE_MAX_RXTX_INTR_VEC_ID);
count = 0;
- intr_handle = &priv->intr_handle;
- RTE_ASSERT(intr_handle->intr_vec == NULL);
+ intr_handle = priv->intr_handle;
/* Allocate the interrupt vector of the failsafe Rx proxy interrupts */
- intr_handle->intr_vec = malloc(n * sizeof(intr_handle->intr_vec[0]));
- if (intr_handle->intr_vec == NULL) {
+ if (rte_intr_vec_list_alloc(intr_handle, NULL, n)) {
fs_rx_intr_vec_uninstall(priv);
rte_errno = ENOMEM;
ERROR("Failed to allocate memory for interrupt vector,"
@@ -456,9 +452,9 @@ fs_rx_intr_vec_install(struct fs_priv *priv)
/* Skip queues that cannot request interrupts. */
if (rxq == NULL || rxq->event_fd < 0) {
/* Use invalid intr_vec[] index to disable entry. */
- intr_handle->intr_vec[i] =
- RTE_INTR_VEC_RXTX_OFFSET +
- RTE_MAX_RXTX_INTR_VEC_ID;
+ if (rte_intr_vec_list_index_set(intr_handle, i,
+ RTE_INTR_VEC_RXTX_OFFSET + RTE_MAX_RXTX_INTR_VEC_ID))
+ return -rte_errno;
continue;
}
if (count >= RTE_MAX_RXTX_INTR_VEC_ID) {
@@ -469,15 +465,24 @@ fs_rx_intr_vec_install(struct fs_priv *priv)
fs_rx_intr_vec_uninstall(priv);
return -rte_errno;
}
- intr_handle->intr_vec[i] = RTE_INTR_VEC_RXTX_OFFSET + count;
- intr_handle->efds[count] = rxq->event_fd;
+ if (rte_intr_vec_list_index_set(intr_handle, i,
+ RTE_INTR_VEC_RXTX_OFFSET + count))
+ return -rte_errno;
+
+ if (rte_intr_efds_index_set(intr_handle, count,
+ rxq->event_fd))
+ return -rte_errno;
count++;
}
if (count == 0) {
fs_rx_intr_vec_uninstall(priv);
} else {
- intr_handle->nb_efd = count;
- intr_handle->efd_counter_size = sizeof(uint64_t);
+ if (rte_intr_nb_efd_set(intr_handle, count))
+ return -rte_errno;
+
+ if (rte_intr_efd_counter_size_set(intr_handle,
+ sizeof(uint64_t)))
+ return -rte_errno;
}
return 0;
}
@@ -499,7 +504,7 @@ failsafe_rx_intr_uninstall(struct rte_eth_dev *dev)
struct rte_intr_handle *intr_handle;
priv = PRIV(dev);
- intr_handle = &priv->intr_handle;
+ intr_handle = priv->intr_handle;
rte_intr_free_epoll_fd(intr_handle);
fs_rx_event_proxy_uninstall(priv);
fs_rx_intr_vec_uninstall(priv);
@@ -530,6 +535,6 @@ failsafe_rx_intr_install(struct rte_eth_dev *dev)
fs_rx_intr_vec_uninstall(priv);
return -rte_errno;
}
- dev->intr_handle = &priv->intr_handle;
+ dev->intr_handle = priv->intr_handle;
return 0;
}
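
The efds[], nb_efd and efd_counter_size fields move behind setters in the
same way, as the Rx proxy code above shows. A condensed, illustrative sketch
of filling the event fd table for n queues:

	static int example_fill_efds(struct rte_intr_handle *handle,
				     const int *event_fds, uint32_t n)
	{
		uint32_t i;

		for (i = 0; i < n; i++)
			if (rte_intr_efds_index_set(handle, i, event_fds[i]))
				return -rte_errno;

		if (rte_intr_nb_efd_set(handle, n))
			return -rte_errno;

		/* eventfd counters are read as 8 byte values */
		return rte_intr_efd_counter_size_set(handle, sizeof(uint64_t));
	}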
diff --git a/drivers/net/failsafe/failsafe_ops.c b/drivers/net/failsafe/failsafe_ops.c
index a3a8a1c82e..822883bc2f 100644
--- a/drivers/net/failsafe/failsafe_ops.c
+++ b/drivers/net/failsafe/failsafe_ops.c
@@ -393,15 +393,22 @@ fs_rx_queue_setup(struct rte_eth_dev *dev,
* For the time being, fake as if we are using MSIX interrupts,
* this will cause rte_intr_efd_enable to allocate an eventfd for us.
*/
- struct rte_intr_handle intr_handle = {
- .type = RTE_INTR_HANDLE_VFIO_MSIX,
- .efds = { -1, },
- };
+ struct rte_intr_handle *intr_handle;
struct sub_device *sdev;
struct rxq *rxq;
uint8_t i;
int ret;
+ intr_handle = rte_intr_instance_alloc(RTE_INTR_INSTANCE_F_PRIVATE);
+ if (intr_handle == NULL)
+ return -ENOMEM;
+
+ if (rte_intr_type_set(intr_handle, RTE_INTR_HANDLE_VFIO_MSIX))
+ return -rte_errno;
+
+ if (rte_intr_efds_index_set(intr_handle, 0, -1))
+ return -rte_errno;
+
fs_lock(dev, 0);
if (rx_conf->rx_deferred_start) {
FOREACH_SUBDEV_STATE(sdev, i, dev, DEV_PROBED) {
@@ -435,12 +442,12 @@ fs_rx_queue_setup(struct rte_eth_dev *dev,
rxq->info.nb_desc = nb_rx_desc;
rxq->priv = PRIV(dev);
rxq->sdev = PRIV(dev)->subs;
- ret = rte_intr_efd_enable(&intr_handle, 1);
+ ret = rte_intr_efd_enable(intr_handle, 1);
if (ret < 0) {
fs_unlock(dev, 0);
return ret;
}
- rxq->event_fd = intr_handle.efds[0];
+ rxq->event_fd = rte_intr_efds_index_get(intr_handle, 0);
dev->data->rx_queues[rx_queue_id] = rxq;
FOREACH_SUBDEV_STATE(sdev, i, dev, DEV_ACTIVE) {
ret = rte_eth_rx_queue_setup(PORT_ID(sdev),
diff --git a/drivers/net/failsafe/failsafe_private.h b/drivers/net/failsafe/failsafe_private.h
index cd39d103c6..a80f5e2caf 100644
--- a/drivers/net/failsafe/failsafe_private.h
+++ b/drivers/net/failsafe/failsafe_private.h
@@ -166,7 +166,7 @@ struct fs_priv {
struct rte_ether_addr *mcast_addrs;
/* current capabilities */
struct rte_eth_dev_owner my_owner; /* Unique owner. */
- struct rte_intr_handle intr_handle; /* Port interrupt handle. */
+ struct rte_intr_handle *intr_handle; /* Port interrupt handle. */
/*
* Fail-safe state machine.
* This level will be tracking state of the EAL and eth
diff --git a/drivers/net/fm10k/fm10k_ethdev.c b/drivers/net/fm10k/fm10k_ethdev.c
index d256334bfd..c25c323140 100644
--- a/drivers/net/fm10k/fm10k_ethdev.c
+++ b/drivers/net/fm10k/fm10k_ethdev.c
@@ -32,7 +32,8 @@
#define BIT_MASK_PER_UINT32 ((1 << CHARS_PER_UINT32) - 1)
/* default 1:1 map from queue ID to interrupt vector ID */
-#define Q2V(pci_dev, queue_id) ((pci_dev)->intr_handle.intr_vec[queue_id])
+#define Q2V(pci_dev, queue_id) \
+ (rte_intr_vec_list_index_get((pci_dev)->intr_handle, queue_id))
/* First 64 Logical ports for PF/VMDQ, second 64 for Flow director */
#define MAX_LPORT_NUM 128
@@ -690,7 +691,7 @@ fm10k_dev_rx_init(struct rte_eth_dev *dev)
struct fm10k_hw *hw = FM10K_DEV_PRIVATE_TO_HW(dev->data->dev_private);
struct fm10k_macvlan_filter_info *macvlan;
struct rte_pci_device *pdev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pdev->intr_handle;
+ struct rte_intr_handle *intr_handle = pdev->intr_handle;
int i, ret;
struct fm10k_rx_queue *rxq;
uint64_t base_addr;
@@ -1158,7 +1159,7 @@ fm10k_dev_stop(struct rte_eth_dev *dev)
{
struct fm10k_hw *hw = FM10K_DEV_PRIVATE_TO_HW(dev->data->dev_private);
struct rte_pci_device *pdev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pdev->intr_handle;
+ struct rte_intr_handle *intr_handle = pdev->intr_handle;
int i;
PMD_INIT_FUNC_TRACE();
@@ -1187,8 +1188,7 @@ fm10k_dev_stop(struct rte_eth_dev *dev)
}
/* Clean datapath event and queue/vec mapping */
rte_intr_efd_disable(intr_handle);
- rte_free(intr_handle->intr_vec);
- intr_handle->intr_vec = NULL;
+ rte_intr_vec_list_free(intr_handle);
return 0;
}
@@ -2367,7 +2367,7 @@ fm10k_dev_rx_queue_intr_enable(struct rte_eth_dev *dev, uint16_t queue_id)
else
FM10K_WRITE_REG(hw, FM10K_VFITR(Q2V(pdev, queue_id)),
FM10K_ITR_AUTOMASK | FM10K_ITR_MASK_CLEAR);
- rte_intr_ack(&pdev->intr_handle);
+ rte_intr_ack(pdev->intr_handle);
return 0;
}
@@ -2392,7 +2392,7 @@ fm10k_dev_rxq_interrupt_setup(struct rte_eth_dev *dev)
{
struct fm10k_hw *hw = FM10K_DEV_PRIVATE_TO_HW(dev->data->dev_private);
struct rte_pci_device *pdev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pdev->intr_handle;
+ struct rte_intr_handle *intr_handle = pdev->intr_handle;
uint32_t intr_vector, vec;
uint16_t queue_id;
int result = 0;
@@ -2420,15 +2420,17 @@ fm10k_dev_rxq_interrupt_setup(struct rte_eth_dev *dev)
}
if (rte_intr_dp_is_en(intr_handle) && !result) {
- intr_handle->intr_vec = rte_zmalloc("intr_vec",
- dev->data->nb_rx_queues * sizeof(int), 0);
- if (intr_handle->intr_vec) {
+ if (!rte_intr_vec_list_alloc(intr_handle, "intr_vec",
+ dev->data->nb_rx_queues)) {
for (queue_id = 0, vec = FM10K_RX_VEC_START;
queue_id < dev->data->nb_rx_queues;
queue_id++) {
- intr_handle->intr_vec[queue_id] = vec;
- if (vec < intr_handle->nb_efd - 1
- + FM10K_RX_VEC_START)
+ rte_intr_vec_list_index_set(intr_handle,
+ queue_id, vec);
+ int nb_efd =
+ rte_intr_nb_efd_get(intr_handle);
+ if (vec < (uint32_t)nb_efd - 1 +
+ FM10K_RX_VEC_START)
vec++;
}
} else {
@@ -2787,7 +2789,7 @@ fm10k_dev_close(struct rte_eth_dev *dev)
{
struct fm10k_hw *hw = FM10K_DEV_PRIVATE_TO_HW(dev->data->dev_private);
struct rte_pci_device *pdev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pdev->intr_handle;
+ struct rte_intr_handle *intr_handle = pdev->intr_handle;
int ret;
PMD_INIT_FUNC_TRACE();
@@ -3053,7 +3055,7 @@ eth_fm10k_dev_init(struct rte_eth_dev *dev)
{
struct fm10k_hw *hw = FM10K_DEV_PRIVATE_TO_HW(dev->data->dev_private);
struct rte_pci_device *pdev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pdev->intr_handle;
+ struct rte_intr_handle *intr_handle = pdev->intr_handle;
int diag, i;
struct fm10k_macvlan_filter_info *macvlan;
diff --git a/drivers/net/hinic/hinic_pmd_ethdev.c b/drivers/net/hinic/hinic_pmd_ethdev.c
index 4cd5a85d5f..9cabd3e0c1 100644
--- a/drivers/net/hinic/hinic_pmd_ethdev.c
+++ b/drivers/net/hinic/hinic_pmd_ethdev.c
@@ -1228,13 +1228,13 @@ static void hinic_disable_interrupt(struct rte_eth_dev *dev)
hinic_set_msix_state(nic_dev->hwdev, 0, HINIC_MSIX_DISABLE);
/* disable rte interrupt */
- ret = rte_intr_disable(&pci_dev->intr_handle);
+ ret = rte_intr_disable(pci_dev->intr_handle);
if (ret)
PMD_DRV_LOG(ERR, "Disable intr failed: %d", ret);
do {
ret =
- rte_intr_callback_unregister(&pci_dev->intr_handle,
+ rte_intr_callback_unregister(pci_dev->intr_handle,
hinic_dev_interrupt_handler, dev);
if (ret >= 0) {
break;
@@ -3118,7 +3118,7 @@ static int hinic_func_init(struct rte_eth_dev *eth_dev)
}
/* register callback func to eal lib */
- rc = rte_intr_callback_register(&pci_dev->intr_handle,
+ rc = rte_intr_callback_register(pci_dev->intr_handle,
hinic_dev_interrupt_handler,
(void *)eth_dev);
if (rc) {
@@ -3128,7 +3128,7 @@ static int hinic_func_init(struct rte_eth_dev *eth_dev)
}
/* enable uio/vfio intr/eventfd mapping */
- rc = rte_intr_enable(&pci_dev->intr_handle);
+ rc = rte_intr_enable(pci_dev->intr_handle);
if (rc) {
PMD_DRV_LOG(ERR, "Enable rte interrupt failed, dev_name: %s",
eth_dev->data->name);
@@ -3158,7 +3158,7 @@ static int hinic_func_init(struct rte_eth_dev *eth_dev)
return 0;
enable_intr_fail:
- (void)rte_intr_callback_unregister(&pci_dev->intr_handle,
+ (void)rte_intr_callback_unregister(pci_dev->intr_handle,
hinic_dev_interrupt_handler,
(void *)eth_dev);
diff --git a/drivers/net/hns3/hns3_ethdev.c b/drivers/net/hns3/hns3_ethdev.c
index 9881659ceb..1437a07372 100644
--- a/drivers/net/hns3/hns3_ethdev.c
+++ b/drivers/net/hns3/hns3_ethdev.c
@@ -5224,7 +5224,7 @@ hns3_init_pf(struct rte_eth_dev *eth_dev)
hns3_config_all_msix_error(hw, true);
- ret = rte_intr_callback_register(&pci_dev->intr_handle,
+ ret = rte_intr_callback_register(pci_dev->intr_handle,
hns3_interrupt_handler,
eth_dev);
if (ret) {
@@ -5237,7 +5237,7 @@ hns3_init_pf(struct rte_eth_dev *eth_dev)
goto err_get_config;
/* Enable interrupt */
- rte_intr_enable(&pci_dev->intr_handle);
+ rte_intr_enable(pci_dev->intr_handle);
hns3_pf_enable_irq0(hw);
/* Get configuration */
@@ -5296,8 +5296,8 @@ hns3_init_pf(struct rte_eth_dev *eth_dev)
hns3_tqp_stats_uninit(hw);
err_get_config:
hns3_pf_disable_irq0(hw);
- rte_intr_disable(&pci_dev->intr_handle);
- hns3_intr_unregister(&pci_dev->intr_handle, hns3_interrupt_handler,
+ rte_intr_disable(pci_dev->intr_handle);
+ hns3_intr_unregister(pci_dev->intr_handle, hns3_interrupt_handler,
eth_dev);
err_intr_callback_register:
err_cmd_init:
@@ -5330,8 +5330,8 @@ hns3_uninit_pf(struct rte_eth_dev *eth_dev)
hns3_tqp_stats_uninit(hw);
hns3_config_mac_tnl_int(hw, false);
hns3_pf_disable_irq0(hw);
- rte_intr_disable(&pci_dev->intr_handle);
- hns3_intr_unregister(&pci_dev->intr_handle, hns3_interrupt_handler,
+ rte_intr_disable(pci_dev->intr_handle);
+ hns3_intr_unregister(pci_dev->intr_handle, hns3_interrupt_handler,
eth_dev);
hns3_config_all_msix_error(hw, false);
hns3_cmd_uninit(hw);
@@ -5665,7 +5665,7 @@ static int
hns3_map_rx_interrupt(struct rte_eth_dev *dev)
{
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
struct hns3_hw *hw = HNS3_DEV_PRIVATE_TO_HW(dev->data->dev_private);
uint16_t base = RTE_INTR_VEC_ZERO_OFFSET;
uint16_t vec = RTE_INTR_VEC_ZERO_OFFSET;
@@ -5688,16 +5688,13 @@ hns3_map_rx_interrupt(struct rte_eth_dev *dev)
if (rte_intr_efd_enable(intr_handle, intr_vector))
return -EINVAL;
- if (intr_handle->intr_vec == NULL) {
- intr_handle->intr_vec =
- rte_zmalloc("intr_vec",
- hw->used_rx_queues * sizeof(int), 0);
- if (intr_handle->intr_vec == NULL) {
- hns3_err(hw, "failed to allocate %u rx_queues intr_vec",
- hw->used_rx_queues);
- ret = -ENOMEM;
- goto alloc_intr_vec_error;
- }
+ /* Allocate vector list */
+ if (rte_intr_vec_list_alloc(intr_handle, "intr_vec",
+ hw->used_rx_queues)) {
+ hns3_err(hw, "failed to allocate %u rx_queues intr_vec",
+ hw->used_rx_queues);
+ ret = -ENOMEM;
+ goto alloc_intr_vec_error;
}
if (rte_intr_allow_others(intr_handle)) {
@@ -5710,20 +5707,21 @@ hns3_map_rx_interrupt(struct rte_eth_dev *dev)
HNS3_RING_TYPE_RX, q_id);
if (ret)
goto bind_vector_error;
- intr_handle->intr_vec[q_id] = vec;
+
+ if (rte_intr_vec_list_index_set(intr_handle, q_id, vec))
+ goto bind_vector_error;
/*
* If there are not enough efds (e.g. not enough interrupt),
* remaining queues will be bond to the last interrupt.
*/
- if (vec < base + intr_handle->nb_efd - 1)
+ if (vec < base + rte_intr_nb_efd_get(intr_handle) - 1)
vec++;
}
rte_intr_enable(intr_handle);
return 0;
bind_vector_error:
- rte_free(intr_handle->intr_vec);
- intr_handle->intr_vec = NULL;
+ rte_intr_vec_list_free(intr_handle);
alloc_intr_vec_error:
rte_intr_efd_disable(intr_handle);
return ret;
@@ -5734,7 +5732,7 @@ hns3_restore_rx_interrupt(struct hns3_hw *hw)
{
struct rte_eth_dev *dev = &rte_eth_devices[hw->data->port_id];
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
uint16_t q_id;
int ret;
@@ -5744,8 +5742,9 @@ hns3_restore_rx_interrupt(struct hns3_hw *hw)
if (rte_intr_dp_is_en(intr_handle)) {
for (q_id = 0; q_id < hw->used_rx_queues; q_id++) {
ret = hns3_bind_ring_with_vector(hw,
- intr_handle->intr_vec[q_id], true,
- HNS3_RING_TYPE_RX, q_id);
+ rte_intr_vec_list_index_get(intr_handle,
+ q_id),
+ true, HNS3_RING_TYPE_RX, q_id);
if (ret)
return ret;
}
@@ -5888,7 +5887,7 @@ static void
hns3_unmap_rx_interrupt(struct rte_eth_dev *dev)
{
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
struct hns3_adapter *hns = dev->data->dev_private;
struct hns3_hw *hw = &hns->hw;
uint8_t base = RTE_INTR_VEC_ZERO_OFFSET;
@@ -5908,16 +5907,14 @@ hns3_unmap_rx_interrupt(struct rte_eth_dev *dev)
(void)hns3_bind_ring_with_vector(hw, vec, false,
HNS3_RING_TYPE_RX,
q_id);
- if (vec < base + intr_handle->nb_efd - 1)
+ if (vec < base + rte_intr_nb_efd_get(intr_handle)
+ - 1)
vec++;
}
}
/* Clean datapath event and queue/vec mapping */
rte_intr_efd_disable(intr_handle);
- if (intr_handle->intr_vec) {
- rte_free(intr_handle->intr_vec);
- intr_handle->intr_vec = NULL;
- }
+ rte_intr_vec_list_free(intr_handle);
}
static int
diff --git a/drivers/net/hns3/hns3_ethdev_vf.c b/drivers/net/hns3/hns3_ethdev_vf.c
index c0c1f1c4c1..873924927c 100644
--- a/drivers/net/hns3/hns3_ethdev_vf.c
+++ b/drivers/net/hns3/hns3_ethdev_vf.c
@@ -1956,7 +1956,7 @@ hns3vf_init_vf(struct rte_eth_dev *eth_dev)
hns3vf_clear_event_cause(hw, 0);
- ret = rte_intr_callback_register(&pci_dev->intr_handle,
+ ret = rte_intr_callback_register(pci_dev->intr_handle,
hns3vf_interrupt_handler, eth_dev);
if (ret) {
PMD_INIT_LOG(ERR, "Failed to register intr: %d", ret);
@@ -1964,7 +1964,7 @@ hns3vf_init_vf(struct rte_eth_dev *eth_dev)
}
/* Enable interrupt */
- rte_intr_enable(&pci_dev->intr_handle);
+ rte_intr_enable(pci_dev->intr_handle);
hns3vf_enable_irq0(hw);
/* Get configuration from PF */
@@ -2016,8 +2016,8 @@ hns3vf_init_vf(struct rte_eth_dev *eth_dev)
err_get_config:
hns3vf_disable_irq0(hw);
- rte_intr_disable(&pci_dev->intr_handle);
- hns3_intr_unregister(&pci_dev->intr_handle, hns3vf_interrupt_handler,
+ rte_intr_disable(pci_dev->intr_handle);
+ hns3_intr_unregister(pci_dev->intr_handle, hns3vf_interrupt_handler,
eth_dev);
err_intr_callback_register:
err_cmd_init:
@@ -2045,8 +2045,8 @@ hns3vf_uninit_vf(struct rte_eth_dev *eth_dev)
hns3_flow_uninit(eth_dev);
hns3_tqp_stats_uninit(hw);
hns3vf_disable_irq0(hw);
- rte_intr_disable(&pci_dev->intr_handle);
- hns3_intr_unregister(&pci_dev->intr_handle, hns3vf_interrupt_handler,
+ rte_intr_disable(pci_dev->intr_handle);
+ hns3_intr_unregister(pci_dev->intr_handle, hns3vf_interrupt_handler,
eth_dev);
hns3_cmd_uninit(hw);
hns3_cmd_destroy_queue(hw);
@@ -2089,7 +2089,7 @@ hns3vf_unmap_rx_interrupt(struct rte_eth_dev *dev)
{
struct hns3_hw *hw = HNS3_DEV_PRIVATE_TO_HW(dev->data->dev_private);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
uint8_t base = RTE_INTR_VEC_ZERO_OFFSET;
uint8_t vec = RTE_INTR_VEC_ZERO_OFFSET;
uint16_t q_id;
@@ -2107,16 +2107,16 @@ hns3vf_unmap_rx_interrupt(struct rte_eth_dev *dev)
(void)hns3vf_bind_ring_with_vector(hw, vec, false,
HNS3_RING_TYPE_RX,
q_id);
- if (vec < base + intr_handle->nb_efd - 1)
+ if (vec < base + rte_intr_nb_efd_get(intr_handle)
+ - 1)
vec++;
}
}
/* Clean datapath event and queue/vec mapping */
rte_intr_efd_disable(intr_handle);
- if (intr_handle->intr_vec) {
- rte_free(intr_handle->intr_vec);
- intr_handle->intr_vec = NULL;
- }
+
+ /* Cleanup vector list */
+ rte_intr_vec_list_free(intr_handle);
}
static int
@@ -2272,7 +2272,7 @@ static int
hns3vf_map_rx_interrupt(struct rte_eth_dev *dev)
{
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
struct hns3_hw *hw = HNS3_DEV_PRIVATE_TO_HW(dev->data->dev_private);
uint8_t base = RTE_INTR_VEC_ZERO_OFFSET;
uint8_t vec = RTE_INTR_VEC_ZERO_OFFSET;
@@ -2295,16 +2295,13 @@ hns3vf_map_rx_interrupt(struct rte_eth_dev *dev)
if (rte_intr_efd_enable(intr_handle, intr_vector))
return -EINVAL;
- if (intr_handle->intr_vec == NULL) {
- intr_handle->intr_vec =
- rte_zmalloc("intr_vec",
- hw->used_rx_queues * sizeof(int), 0);
- if (intr_handle->intr_vec == NULL) {
- hns3_err(hw, "Failed to allocate %u rx_queues"
- " intr_vec", hw->used_rx_queues);
- ret = -ENOMEM;
- goto vf_alloc_intr_vec_error;
- }
+ /* Allocate vector list */
+ if (rte_intr_vec_list_alloc(intr_handle, "intr_vec",
+ hw->used_rx_queues)) {
+ hns3_err(hw, "Failed to allocate %u rx_queues"
+ " intr_vec", hw->used_rx_queues);
+ ret = -ENOMEM;
+ goto vf_alloc_intr_vec_error;
}
if (rte_intr_allow_others(intr_handle)) {
@@ -2317,20 +2314,22 @@ hns3vf_map_rx_interrupt(struct rte_eth_dev *dev)
HNS3_RING_TYPE_RX, q_id);
if (ret)
goto vf_bind_vector_error;
- intr_handle->intr_vec[q_id] = vec;
+
+ if (rte_intr_vec_list_index_set(intr_handle, q_id, vec))
+ goto vf_bind_vector_error;
+
/*
* If there are not enough efds (e.g. not enough interrupt),
* remaining queues will be bond to the last interrupt.
*/
- if (vec < base + intr_handle->nb_efd - 1)
+ if (vec < base + rte_intr_nb_efd_get(intr_handle) - 1)
vec++;
}
rte_intr_enable(intr_handle);
return 0;
vf_bind_vector_error:
- rte_free(intr_handle->intr_vec);
- intr_handle->intr_vec = NULL;
+ rte_intr_vec_list_free(intr_handle);
vf_alloc_intr_vec_error:
rte_intr_efd_disable(intr_handle);
return ret;
@@ -2341,7 +2340,7 @@ hns3vf_restore_rx_interrupt(struct hns3_hw *hw)
{
struct rte_eth_dev *dev = &rte_eth_devices[hw->data->port_id];
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
uint16_t q_id;
int ret;
@@ -2351,8 +2350,9 @@ hns3vf_restore_rx_interrupt(struct hns3_hw *hw)
if (rte_intr_dp_is_en(intr_handle)) {
for (q_id = 0; q_id < hw->used_rx_queues; q_id++) {
ret = hns3vf_bind_ring_with_vector(hw,
- intr_handle->intr_vec[q_id], true,
- HNS3_RING_TYPE_RX, q_id);
+ rte_intr_vec_list_index_get(intr_handle,
+ q_id),
+ true, HNS3_RING_TYPE_RX, q_id);
if (ret)
return ret;
}
@@ -2816,7 +2816,7 @@ hns3vf_reinit_dev(struct hns3_adapter *hns)
int ret;
if (hw->reset.level == HNS3_VF_FULL_RESET) {
- rte_intr_disable(&pci_dev->intr_handle);
+ rte_intr_disable(pci_dev->intr_handle);
ret = hns3vf_set_bus_master(pci_dev, true);
if (ret < 0) {
hns3_err(hw, "failed to set pci bus, ret = %d", ret);
@@ -2842,7 +2842,7 @@ hns3vf_reinit_dev(struct hns3_adapter *hns)
hns3_err(hw, "Failed to enable msix");
}
- rte_intr_enable(&pci_dev->intr_handle);
+ rte_intr_enable(pci_dev->intr_handle);
}
ret = hns3_reset_all_tqps(hns);
diff --git a/drivers/net/hns3/hns3_rxtx.c b/drivers/net/hns3/hns3_rxtx.c
index b633aabb14..ceb98025f8 100644
--- a/drivers/net/hns3/hns3_rxtx.c
+++ b/drivers/net/hns3/hns3_rxtx.c
@@ -1050,7 +1050,7 @@ int
hns3_dev_rx_queue_intr_enable(struct rte_eth_dev *dev, uint16_t queue_id)
{
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
struct hns3_hw *hw = HNS3_DEV_PRIVATE_TO_HW(dev->data->dev_private);
if (dev->data->dev_conf.intr_conf.rxq == 0)
diff --git a/drivers/net/i40e/i40e_ethdev.c b/drivers/net/i40e/i40e_ethdev.c
index 293df887bf..62e374d19e 100644
--- a/drivers/net/i40e/i40e_ethdev.c
+++ b/drivers/net/i40e/i40e_ethdev.c
@@ -1440,7 +1440,7 @@ eth_i40e_dev_init(struct rte_eth_dev *dev, void *init_params __rte_unused)
}
i40e_set_default_ptype_table(dev);
pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- intr_handle = &pci_dev->intr_handle;
+ intr_handle = pci_dev->intr_handle;
rte_eth_copy_pci_info(dev, pci_dev);
@@ -1972,7 +1972,7 @@ i40e_vsi_queues_unbind_intr(struct i40e_vsi *vsi)
{
struct rte_eth_dev *dev = I40E_VSI_TO_ETH_DEV(vsi);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
struct i40e_hw *hw = I40E_VSI_TO_HW(vsi);
uint16_t msix_vect = vsi->msix_intr;
uint16_t i;
@@ -2088,10 +2088,11 @@ i40e_vsi_queues_bind_intr(struct i40e_vsi *vsi, uint16_t itr_idx)
{
struct rte_eth_dev *dev = I40E_VSI_TO_ETH_DEV(vsi);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
struct i40e_hw *hw = I40E_VSI_TO_HW(vsi);
uint16_t msix_vect = vsi->msix_intr;
- uint16_t nb_msix = RTE_MIN(vsi->nb_msix, intr_handle->nb_efd);
+ uint16_t nb_msix = RTE_MIN(vsi->nb_msix,
+ rte_intr_nb_efd_get(intr_handle));
uint16_t queue_idx = 0;
int record = 0;
int i;
@@ -2141,8 +2142,8 @@ i40e_vsi_queues_bind_intr(struct i40e_vsi *vsi, uint16_t itr_idx)
vsi->nb_used_qps - i,
itr_idx);
for (; !!record && i < vsi->nb_used_qps; i++)
- intr_handle->intr_vec[queue_idx + i] =
- msix_vect;
+ rte_intr_vec_list_index_set(intr_handle,
+ queue_idx + i, msix_vect);
break;
}
/* 1:1 queue/msix_vect mapping */
@@ -2150,7 +2151,9 @@ i40e_vsi_queues_bind_intr(struct i40e_vsi *vsi, uint16_t itr_idx)
vsi->base_queue + i, 1,
itr_idx);
if (!!record)
- intr_handle->intr_vec[queue_idx + i] = msix_vect;
+ if (rte_intr_vec_list_index_set(intr_handle,
+ queue_idx + i, msix_vect))
+ return -rte_errno;
msix_vect++;
nb_msix--;
@@ -2164,7 +2167,7 @@ i40e_vsi_enable_queues_intr(struct i40e_vsi *vsi)
{
struct rte_eth_dev *dev = I40E_VSI_TO_ETH_DEV(vsi);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
struct i40e_hw *hw = I40E_VSI_TO_HW(vsi);
struct i40e_pf *pf = I40E_VSI_TO_PF(vsi);
uint16_t msix_intr, i;
@@ -2191,7 +2194,7 @@ i40e_vsi_disable_queues_intr(struct i40e_vsi *vsi)
{
struct rte_eth_dev *dev = I40E_VSI_TO_ETH_DEV(vsi);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
struct i40e_hw *hw = I40E_VSI_TO_HW(vsi);
struct i40e_pf *pf = I40E_VSI_TO_PF(vsi);
uint16_t msix_intr, i;
@@ -2357,7 +2360,7 @@ i40e_dev_start(struct rte_eth_dev *dev)
struct i40e_vsi *main_vsi = pf->main_vsi;
int ret, i;
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
uint32_t intr_vector = 0;
struct i40e_vsi *vsi;
uint16_t nb_rxq, nb_txq;
@@ -2375,12 +2378,9 @@ i40e_dev_start(struct rte_eth_dev *dev)
return ret;
}
- if (rte_intr_dp_is_en(intr_handle) && !intr_handle->intr_vec) {
- intr_handle->intr_vec =
- rte_zmalloc("intr_vec",
- dev->data->nb_rx_queues * sizeof(int),
- 0);
- if (!intr_handle->intr_vec) {
+ if (rte_intr_dp_is_en(intr_handle)) {
+ if (rte_intr_vec_list_alloc(intr_handle, "intr_vec",
+ dev->data->nb_rx_queues)) {
PMD_INIT_LOG(ERR,
"Failed to allocate %d rx_queues intr_vec",
dev->data->nb_rx_queues);
@@ -2521,7 +2521,7 @@ i40e_dev_stop(struct rte_eth_dev *dev)
struct i40e_hw *hw = I40E_DEV_PRIVATE_TO_HW(dev->data->dev_private);
struct i40e_vsi *main_vsi = pf->main_vsi;
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
int i;
if (hw->adapter_stopped == 1)
@@ -2562,10 +2562,9 @@ i40e_dev_stop(struct rte_eth_dev *dev)
/* Clean datapath event and queue/vec mapping */
rte_intr_efd_disable(intr_handle);
- if (intr_handle->intr_vec) {
- rte_free(intr_handle->intr_vec);
- intr_handle->intr_vec = NULL;
- }
+
+ /* Cleanup vector list */
+ rte_intr_vec_list_free(intr_handle);
/* reset hierarchy commit */
pf->tm_conf.committed = false;
@@ -2584,7 +2583,7 @@ i40e_dev_close(struct rte_eth_dev *dev)
struct i40e_pf *pf = I40E_DEV_PRIVATE_TO_PF(dev->data->dev_private);
struct i40e_hw *hw = I40E_DEV_PRIVATE_TO_HW(dev->data->dev_private);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
struct i40e_filter_control_settings settings;
struct rte_flow *p_flow;
uint32_t reg;
@@ -11068,11 +11067,11 @@ static int
i40e_dev_rx_queue_intr_enable(struct rte_eth_dev *dev, uint16_t queue_id)
{
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
struct i40e_hw *hw = I40E_DEV_PRIVATE_TO_HW(dev->data->dev_private);
uint16_t msix_intr;
- msix_intr = intr_handle->intr_vec[queue_id];
+ msix_intr = rte_intr_vec_list_index_get(intr_handle, queue_id);
if (msix_intr == I40E_MISC_VEC_ID)
I40E_WRITE_REG(hw, I40E_PFINT_DYN_CTL0,
I40E_PFINT_DYN_CTL0_INTENA_MASK |
@@ -11087,7 +11086,7 @@ i40e_dev_rx_queue_intr_enable(struct rte_eth_dev *dev, uint16_t queue_id)
I40E_PFINT_DYN_CTLN_ITR_INDX_MASK);
I40E_WRITE_FLUSH(hw);
- rte_intr_ack(&pci_dev->intr_handle);
+ rte_intr_ack(pci_dev->intr_handle);
return 0;
}
@@ -11096,11 +11095,11 @@ static int
i40e_dev_rx_queue_intr_disable(struct rte_eth_dev *dev, uint16_t queue_id)
{
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
struct i40e_hw *hw = I40E_DEV_PRIVATE_TO_HW(dev->data->dev_private);
uint16_t msix_intr;
- msix_intr = intr_handle->intr_vec[queue_id];
+ msix_intr = rte_intr_vec_list_index_get(intr_handle, queue_id);
if (msix_intr == I40E_MISC_VEC_ID)
I40E_WRITE_REG(hw, I40E_PFINT_DYN_CTL0,
I40E_PFINT_DYN_CTL0_ITR_INDX_MASK);
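
Reading a vector back is symmetrical: rte_intr_vec_list_index_get() replaces
the intr_vec[] dereference, as in the i40e Rx queue interrupt enable/disable
hunks above. A stripped-down sketch with the register programming omitted
(names are illustrative):

	static int example_rx_queue_intr_enable(struct rte_eth_dev *dev,
						uint16_t queue_id)
	{
		struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
		struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
		int msix_intr;

		/* Replaces intr_handle->intr_vec[queue_id] */
		msix_intr = rte_intr_vec_list_index_get(intr_handle, queue_id);
		if (msix_intr < 0)
			return msix_intr;

		/* ... enable the queue interrupt for msix_intr here ... */

		/* The ack now takes the handle pointer directly */
		rte_intr_ack(intr_handle);
		return 0;
	}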
diff --git a/drivers/net/iavf/iavf_ethdev.c b/drivers/net/iavf/iavf_ethdev.c
index b2b413c247..f892306f18 100644
--- a/drivers/net/iavf/iavf_ethdev.c
+++ b/drivers/net/iavf/iavf_ethdev.c
@@ -646,17 +646,16 @@ static int iavf_config_rx_queues_irqs(struct rte_eth_dev *dev,
return -1;
}
- if (rte_intr_dp_is_en(intr_handle) && !intr_handle->intr_vec) {
- intr_handle->intr_vec =
- rte_zmalloc("intr_vec",
- dev->data->nb_rx_queues * sizeof(int), 0);
- if (!intr_handle->intr_vec) {
+ if (rte_intr_dp_is_en(intr_handle)) {
+ if (rte_intr_vec_list_alloc(intr_handle, "intr_vec",
+ dev->data->nb_rx_queues)) {
PMD_DRV_LOG(ERR, "Failed to allocate %d rx intr_vec",
dev->data->nb_rx_queues);
return -1;
}
}
+
qv_map = rte_zmalloc("qv_map",
dev->data->nb_rx_queues * sizeof(struct iavf_qv_map), 0);
if (!qv_map) {
@@ -716,7 +715,8 @@ static int iavf_config_rx_queues_irqs(struct rte_eth_dev *dev,
for (i = 0; i < dev->data->nb_rx_queues; i++) {
qv_map[i].queue_id = i;
qv_map[i].vector_id = vf->msix_base;
- intr_handle->intr_vec[i] = IAVF_MISC_VEC_ID;
+ rte_intr_vec_list_index_set(intr_handle,
+ i, IAVF_MISC_VEC_ID);
}
vf->qv_map = qv_map;
PMD_DRV_LOG(DEBUG,
@@ -726,14 +726,16 @@ static int iavf_config_rx_queues_irqs(struct rte_eth_dev *dev,
/* If Rx interrupt is reuquired, and we can use
* multi interrupts, then the vec is from 1
*/
- vf->nb_msix = RTE_MIN(intr_handle->nb_efd,
- (uint16_t)(vf->vf_res->max_vectors - 1));
+ vf->nb_msix =
+ RTE_MIN(rte_intr_nb_efd_get(intr_handle),
+ (uint16_t)(vf->vf_res->max_vectors - 1));
vf->msix_base = IAVF_RX_VEC_START;
vec = IAVF_RX_VEC_START;
for (i = 0; i < dev->data->nb_rx_queues; i++) {
qv_map[i].queue_id = i;
qv_map[i].vector_id = vec;
- intr_handle->intr_vec[i] = vec++;
+ rte_intr_vec_list_index_set(intr_handle,
+ i, vec++);
if (vec >= vf->nb_msix + IAVF_RX_VEC_START)
vec = IAVF_RX_VEC_START;
}
@@ -775,8 +777,7 @@ static int iavf_config_rx_queues_irqs(struct rte_eth_dev *dev,
vf->qv_map = NULL;
qv_map_alloc_err:
- rte_free(intr_handle->intr_vec);
- intr_handle->intr_vec = NULL;
+ rte_intr_vec_list_free(intr_handle);
return -1;
}
@@ -912,10 +913,7 @@ iavf_dev_stop(struct rte_eth_dev *dev)
/* Disable the interrupt for Rx */
rte_intr_efd_disable(intr_handle);
/* Rx interrupt vector mapping free */
- if (intr_handle->intr_vec) {
- rte_free(intr_handle->intr_vec);
- intr_handle->intr_vec = NULL;
- }
+ rte_intr_vec_list_free(intr_handle);
/* remove all mac addrs */
iavf_add_del_all_mac_addr(adapter, false);
@@ -1639,7 +1637,8 @@ iavf_dev_rx_queue_intr_enable(struct rte_eth_dev *dev, uint16_t queue_id)
struct iavf_info *vf = IAVF_DEV_PRIVATE_TO_VF(adapter);
uint16_t msix_intr;
- msix_intr = pci_dev->intr_handle.intr_vec[queue_id];
+ msix_intr = rte_intr_vec_list_index_get(pci_dev->intr_handle,
+ queue_id);
if (msix_intr == IAVF_MISC_VEC_ID) {
PMD_DRV_LOG(INFO, "MISC is also enabled for control");
IAVF_WRITE_REG(hw, IAVF_VFINT_DYN_CTL01,
@@ -1658,7 +1657,7 @@ iavf_dev_rx_queue_intr_enable(struct rte_eth_dev *dev, uint16_t queue_id)
IAVF_WRITE_FLUSH(hw);
if (vf->vf_res->vf_cap_flags & VIRTCHNL_VF_OFFLOAD_WB_ON_ITR)
- rte_intr_ack(&pci_dev->intr_handle);
+ rte_intr_ack(pci_dev->intr_handle);
return 0;
}
@@ -1670,7 +1669,8 @@ iavf_dev_rx_queue_intr_disable(struct rte_eth_dev *dev, uint16_t queue_id)
struct iavf_hw *hw = IAVF_DEV_PRIVATE_TO_HW(dev->data->dev_private);
uint16_t msix_intr;
- msix_intr = pci_dev->intr_handle.intr_vec[queue_id];
+ msix_intr = rte_intr_vec_list_index_get(pci_dev->intr_handle,
+ queue_id);
if (msix_intr == IAVF_MISC_VEC_ID) {
PMD_DRV_LOG(ERR, "MISC is used for control, cannot disable it");
return -EIO;
@@ -2355,12 +2355,12 @@ iavf_dev_init(struct rte_eth_dev *eth_dev)
if (vf->vf_res->vf_cap_flags & VIRTCHNL_VF_OFFLOAD_WB_ON_ITR) {
/* register callback func to eal lib */
- rte_intr_callback_register(&pci_dev->intr_handle,
+ rte_intr_callback_register(pci_dev->intr_handle,
iavf_dev_interrupt_handler,
(void *)eth_dev);
/* enable uio intr after callback register */
- rte_intr_enable(&pci_dev->intr_handle);
+ rte_intr_enable(pci_dev->intr_handle);
} else {
rte_eal_alarm_set(IAVF_ALARM_INTERVAL,
iavf_dev_alarm_handler, eth_dev);
@@ -2394,7 +2394,7 @@ iavf_dev_close(struct rte_eth_dev *dev)
{
struct iavf_hw *hw = IAVF_DEV_PRIVATE_TO_HW(dev->data->dev_private);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
struct iavf_adapter *adapter =
IAVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
struct iavf_info *vf = IAVF_DEV_PRIVATE_TO_VF(dev->data->dev_private);
diff --git a/drivers/net/iavf/iavf_vchnl.c b/drivers/net/iavf/iavf_vchnl.c
index 0f4dd21d44..bb65dbf04f 100644
--- a/drivers/net/iavf/iavf_vchnl.c
+++ b/drivers/net/iavf/iavf_vchnl.c
@@ -1685,9 +1685,9 @@ iavf_request_queues(struct rte_eth_dev *dev, uint16_t num)
/* disable interrupt to avoid the admin queue message to be read
* before iavf_read_msg_from_pf.
*/
- rte_intr_disable(&pci_dev->intr_handle);
+ rte_intr_disable(pci_dev->intr_handle);
err = iavf_execute_vf_cmd(adapter, &args);
- rte_intr_enable(&pci_dev->intr_handle);
+ rte_intr_enable(pci_dev->intr_handle);
} else {
rte_eal_alarm_cancel(iavf_dev_alarm_handler, dev);
err = iavf_execute_vf_cmd(adapter, &args);
diff --git a/drivers/net/ice/ice_dcf.c b/drivers/net/ice/ice_dcf.c
index 7b7df5eebb..084f7a53db 100644
--- a/drivers/net/ice/ice_dcf.c
+++ b/drivers/net/ice/ice_dcf.c
@@ -539,7 +539,7 @@ ice_dcf_handle_vsi_update_event(struct ice_dcf_hw *hw)
rte_spinlock_lock(&hw->vc_cmd_send_lock);
- rte_intr_disable(&pci_dev->intr_handle);
+ rte_intr_disable(pci_dev->intr_handle);
ice_dcf_disable_irq0(hw);
for (;;) {
@@ -555,7 +555,7 @@ ice_dcf_handle_vsi_update_event(struct ice_dcf_hw *hw)
rte_delay_ms(ICE_DCF_ARQ_CHECK_TIME);
}
- rte_intr_enable(&pci_dev->intr_handle);
+ rte_intr_enable(pci_dev->intr_handle);
ice_dcf_enable_irq0(hw);
rte_spinlock_unlock(&hw->vc_cmd_send_lock);
@@ -694,9 +694,9 @@ ice_dcf_init_hw(struct rte_eth_dev *eth_dev, struct ice_dcf_hw *hw)
}
hw->eth_dev = eth_dev;
- rte_intr_callback_register(&pci_dev->intr_handle,
+ rte_intr_callback_register(pci_dev->intr_handle,
ice_dcf_dev_interrupt_handler, hw);
- rte_intr_enable(&pci_dev->intr_handle);
+ rte_intr_enable(pci_dev->intr_handle);
ice_dcf_enable_irq0(hw);
return 0;
@@ -718,7 +718,7 @@ void
ice_dcf_uninit_hw(struct rte_eth_dev *eth_dev, struct ice_dcf_hw *hw)
{
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
if (hw->vf_res->vf_cap_flags & VIRTCHNL_VF_OFFLOAD_QOS)
if (hw->tm_conf.committed) {
diff --git a/drivers/net/ice/ice_dcf_ethdev.c b/drivers/net/ice/ice_dcf_ethdev.c
index 7cb8066416..7c71a48010 100644
--- a/drivers/net/ice/ice_dcf_ethdev.c
+++ b/drivers/net/ice/ice_dcf_ethdev.c
@@ -144,11 +144,9 @@ ice_dcf_config_rx_queues_irqs(struct rte_eth_dev *dev,
return -1;
}
- if (rte_intr_dp_is_en(intr_handle) && !intr_handle->intr_vec) {
- intr_handle->intr_vec =
- rte_zmalloc("intr_vec",
- dev->data->nb_rx_queues * sizeof(int), 0);
- if (!intr_handle->intr_vec) {
+ if (rte_intr_dp_is_en(intr_handle)) {
+ if (rte_intr_vec_list_alloc(intr_handle, "intr_vec",
+ dev->data->nb_rx_queues)) {
PMD_DRV_LOG(ERR, "Failed to allocate %d rx intr_vec",
dev->data->nb_rx_queues);
return -1;
@@ -198,7 +196,8 @@ ice_dcf_config_rx_queues_irqs(struct rte_eth_dev *dev,
hw->msix_base = IAVF_MISC_VEC_ID;
for (i = 0; i < dev->data->nb_rx_queues; i++) {
hw->rxq_map[hw->msix_base] |= 1 << i;
- intr_handle->intr_vec[i] = IAVF_MISC_VEC_ID;
+ rte_intr_vec_list_index_set(intr_handle,
+ i, IAVF_MISC_VEC_ID);
}
PMD_DRV_LOG(DEBUG,
"vector %u are mapping to all Rx queues",
@@ -208,12 +207,13 @@ ice_dcf_config_rx_queues_irqs(struct rte_eth_dev *dev,
* multi interrupts, then the vec is from 1
*/
hw->nb_msix = RTE_MIN(hw->vf_res->max_vectors,
- intr_handle->nb_efd);
+ rte_intr_nb_efd_get(intr_handle));
hw->msix_base = IAVF_MISC_VEC_ID;
vec = IAVF_MISC_VEC_ID;
for (i = 0; i < dev->data->nb_rx_queues; i++) {
hw->rxq_map[vec] |= 1 << i;
- intr_handle->intr_vec[i] = vec++;
+ rte_intr_vec_list_index_set(intr_handle,
+ i, vec++);
if (vec >= hw->nb_msix)
vec = IAVF_RX_VEC_START;
}
@@ -623,10 +623,7 @@ ice_dcf_dev_stop(struct rte_eth_dev *dev)
ice_dcf_stop_queues(dev);
rte_intr_efd_disable(intr_handle);
- if (intr_handle->intr_vec) {
- rte_free(intr_handle->intr_vec);
- intr_handle->intr_vec = NULL;
- }
+ rte_intr_vec_list_free(intr_handle);
ice_dcf_add_del_all_mac_addr(&dcf_ad->real_hw, false);
dev->data->dev_link.link_status = RTE_ETH_LINK_DOWN;
diff --git a/drivers/net/ice/ice_ethdev.c b/drivers/net/ice/ice_ethdev.c
index 6a6637a15a..ef6ee1c386 100644
--- a/drivers/net/ice/ice_ethdev.c
+++ b/drivers/net/ice/ice_ethdev.c
@@ -2178,7 +2178,7 @@ ice_dev_init(struct rte_eth_dev *dev)
ice_set_default_ptype_table(dev);
pci_dev = RTE_DEV_TO_PCI(dev->device);
- intr_handle = &pci_dev->intr_handle;
+ intr_handle = pci_dev->intr_handle;
pf->adapter = ICE_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
pf->dev_data = dev->data;
@@ -2375,7 +2375,7 @@ ice_vsi_disable_queues_intr(struct ice_vsi *vsi)
{
struct rte_eth_dev *dev = &rte_eth_devices[vsi->adapter->pf.dev_data->port_id];
struct rte_pci_device *pci_dev = ICE_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
struct ice_hw *hw = ICE_VSI_TO_HW(vsi);
uint16_t msix_intr, i;
@@ -2405,7 +2405,7 @@ ice_dev_stop(struct rte_eth_dev *dev)
struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
struct ice_vsi *main_vsi = pf->main_vsi;
struct rte_pci_device *pci_dev = ICE_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
uint16_t i;
/* avoid stopping again */
@@ -2430,10 +2430,7 @@ ice_dev_stop(struct rte_eth_dev *dev)
/* Clean datapath event and queue/vec mapping */
rte_intr_efd_disable(intr_handle);
- if (intr_handle->intr_vec) {
- rte_free(intr_handle->intr_vec);
- intr_handle->intr_vec = NULL;
- }
+ rte_intr_vec_list_free(intr_handle);
pf->adapter_stopped = true;
dev->data->dev_started = 0;
@@ -2447,7 +2444,7 @@ ice_dev_close(struct rte_eth_dev *dev)
struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
struct ice_hw *hw = ICE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
struct ice_adapter *ad =
ICE_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
int ret;
@@ -3345,10 +3342,11 @@ ice_vsi_queues_bind_intr(struct ice_vsi *vsi)
{
struct rte_eth_dev *dev = &rte_eth_devices[vsi->adapter->pf.dev_data->port_id];
struct rte_pci_device *pci_dev = ICE_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
struct ice_hw *hw = ICE_VSI_TO_HW(vsi);
uint16_t msix_vect = vsi->msix_intr;
- uint16_t nb_msix = RTE_MIN(vsi->nb_msix, intr_handle->nb_efd);
+ uint16_t nb_msix = RTE_MIN(vsi->nb_msix,
+ rte_intr_nb_efd_get(intr_handle));
uint16_t queue_idx = 0;
int record = 0;
int i;
@@ -3376,8 +3374,9 @@ ice_vsi_queues_bind_intr(struct ice_vsi *vsi)
vsi->nb_used_qps - i);
for (; !!record && i < vsi->nb_used_qps; i++)
- intr_handle->intr_vec[queue_idx + i] =
- msix_vect;
+ rte_intr_vec_list_index_set(intr_handle,
+ queue_idx + i, msix_vect);
+
break;
}
@@ -3386,7 +3385,9 @@ ice_vsi_queues_bind_intr(struct ice_vsi *vsi)
vsi->base_queue + i, 1);
if (!!record)
- intr_handle->intr_vec[queue_idx + i] = msix_vect;
+ rte_intr_vec_list_index_set(intr_handle,
+ queue_idx + i,
+ msix_vect);
msix_vect++;
nb_msix--;
@@ -3398,7 +3399,7 @@ ice_vsi_enable_queues_intr(struct ice_vsi *vsi)
{
struct rte_eth_dev *dev = &rte_eth_devices[vsi->adapter->pf.dev_data->port_id];
struct rte_pci_device *pci_dev = ICE_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
struct ice_hw *hw = ICE_VSI_TO_HW(vsi);
uint16_t msix_intr, i;
@@ -3424,7 +3425,7 @@ ice_rxq_intr_setup(struct rte_eth_dev *dev)
{
struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
struct rte_pci_device *pci_dev = ICE_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
struct ice_vsi *vsi = pf->main_vsi;
uint32_t intr_vector = 0;
@@ -3444,11 +3445,9 @@ ice_rxq_intr_setup(struct rte_eth_dev *dev)
return -1;
}
- if (rte_intr_dp_is_en(intr_handle) && !intr_handle->intr_vec) {
- intr_handle->intr_vec =
- rte_zmalloc(NULL, dev->data->nb_rx_queues * sizeof(int),
- 0);
- if (!intr_handle->intr_vec) {
+ if (rte_intr_dp_is_en(intr_handle)) {
+ if (rte_intr_vec_list_alloc(intr_handle, NULL,
+ dev->data->nb_rx_queues)) {
PMD_DRV_LOG(ERR,
"Failed to allocate %d rx_queues intr_vec",
dev->data->nb_rx_queues);
@@ -4755,19 +4754,19 @@ static int ice_rx_queue_intr_enable(struct rte_eth_dev *dev,
uint16_t queue_id)
{
struct rte_pci_device *pci_dev = ICE_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
struct ice_hw *hw = ICE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
uint32_t val;
uint16_t msix_intr;
- msix_intr = intr_handle->intr_vec[queue_id];
+ msix_intr = rte_intr_vec_list_index_get(intr_handle, queue_id);
val = GLINT_DYN_CTL_INTENA_M | GLINT_DYN_CTL_CLEARPBA_M |
GLINT_DYN_CTL_ITR_INDX_M;
val &= ~GLINT_DYN_CTL_WB_ON_ITR_M;
ICE_WRITE_REG(hw, GLINT_DYN_CTL(msix_intr), val);
- rte_intr_ack(&pci_dev->intr_handle);
+ rte_intr_ack(pci_dev->intr_handle);
return 0;
}
@@ -4776,11 +4775,11 @@ static int ice_rx_queue_intr_disable(struct rte_eth_dev *dev,
uint16_t queue_id)
{
struct rte_pci_device *pci_dev = ICE_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
struct ice_hw *hw = ICE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
uint16_t msix_intr;
- msix_intr = intr_handle->intr_vec[queue_id];
+ msix_intr = rte_intr_vec_list_index_get(intr_handle, queue_id);
ICE_WRITE_REG(hw, GLINT_DYN_CTL(msix_intr), GLINT_DYN_CTL_WB_ON_ITR_M);
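
Where a driver previously read nb_efd directly to size an interrupt mask or
an MSI-X count (igb, fm10k, igc and ice above), rte_intr_nb_efd_get() is used
instead. A trivial sketch of the mask computation, reusing the same
RTE_LEN2MASK idiom (helper name is illustrative):

	static uint32_t example_rxq_intr_mask(struct rte_intr_handle *intr_handle)
	{
		/* Shift by one when a separate misc/LSC vector is in use */
		int misc_shift = rte_intr_allow_others(intr_handle) ? 1 : 0;

		return RTE_LEN2MASK(rte_intr_nb_efd_get(intr_handle), uint32_t)
				<< misc_shift;
	}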
diff --git a/drivers/net/igc/igc_ethdev.c b/drivers/net/igc/igc_ethdev.c
index 7ce80a442b..8189ad412a 100644
--- a/drivers/net/igc/igc_ethdev.c
+++ b/drivers/net/igc/igc_ethdev.c
@@ -377,7 +377,7 @@ igc_intr_other_disable(struct rte_eth_dev *dev)
{
struct igc_hw *hw = IGC_DEV_PRIVATE_HW(dev);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
if (rte_intr_allow_others(intr_handle) &&
dev->data->dev_conf.intr_conf.lsc) {
@@ -397,7 +397,7 @@ igc_intr_other_enable(struct rte_eth_dev *dev)
struct igc_interrupt *intr = IGC_DEV_PRIVATE_INTR(dev);
struct igc_hw *hw = IGC_DEV_PRIVATE_HW(dev);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
if (rte_intr_allow_others(intr_handle) &&
dev->data->dev_conf.intr_conf.lsc) {
@@ -609,7 +609,7 @@ eth_igc_stop(struct rte_eth_dev *dev)
struct igc_adapter *adapter = IGC_DEV_PRIVATE(dev);
struct igc_hw *hw = IGC_DEV_PRIVATE_HW(dev);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
struct rte_eth_link link;
dev->data->dev_started = 0;
@@ -661,10 +661,7 @@ eth_igc_stop(struct rte_eth_dev *dev)
/* Clean datapath event and queue/vec mapping */
rte_intr_efd_disable(intr_handle);
- if (intr_handle->intr_vec != NULL) {
- rte_free(intr_handle->intr_vec);
- intr_handle->intr_vec = NULL;
- }
+ rte_intr_vec_list_free(intr_handle);
return 0;
}
@@ -724,7 +721,7 @@ igc_configure_msix_intr(struct rte_eth_dev *dev)
{
struct igc_hw *hw = IGC_DEV_PRIVATE_HW(dev);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
uint32_t intr_mask;
uint32_t vec = IGC_MISC_VEC_ID;
@@ -748,8 +745,8 @@ igc_configure_msix_intr(struct rte_eth_dev *dev)
IGC_WRITE_REG(hw, IGC_GPIE, IGC_GPIE_MSIX_MODE |
IGC_GPIE_PBA | IGC_GPIE_EIAME |
IGC_GPIE_NSICR);
- intr_mask = RTE_LEN2MASK(intr_handle->nb_efd, uint32_t) <<
- misc_shift;
+ intr_mask = RTE_LEN2MASK(rte_intr_nb_efd_get(intr_handle),
+ uint32_t) << misc_shift;
if (dev->data->dev_conf.intr_conf.lsc)
intr_mask |= (1u << IGC_MSIX_OTHER_INTR_VEC);
@@ -766,8 +763,8 @@ igc_configure_msix_intr(struct rte_eth_dev *dev)
for (i = 0; i < dev->data->nb_rx_queues; i++) {
igc_write_ivar(hw, i, 0, vec);
- intr_handle->intr_vec[i] = vec;
- if (vec < base + intr_handle->nb_efd - 1)
+ rte_intr_vec_list_index_set(intr_handle, i, vec);
+ if (vec < base + rte_intr_nb_efd_get(intr_handle) - 1)
vec++;
}
@@ -803,7 +800,7 @@ igc_rxq_interrupt_setup(struct rte_eth_dev *dev)
uint32_t mask;
struct igc_hw *hw = IGC_DEV_PRIVATE_HW(dev);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
int misc_shift = rte_intr_allow_others(intr_handle) ? 1 : 0;
/* won't configure msix register if no mapping is done
@@ -812,7 +809,8 @@ igc_rxq_interrupt_setup(struct rte_eth_dev *dev)
if (!rte_intr_dp_is_en(intr_handle))
return;
- mask = RTE_LEN2MASK(intr_handle->nb_efd, uint32_t) << misc_shift;
+ mask = RTE_LEN2MASK(rte_intr_nb_efd_get(intr_handle), uint32_t)
+ << misc_shift;
IGC_WRITE_REG(hw, IGC_EIMS, mask);
}
@@ -906,7 +904,7 @@ eth_igc_start(struct rte_eth_dev *dev)
struct igc_hw *hw = IGC_DEV_PRIVATE_HW(dev);
struct igc_adapter *adapter = IGC_DEV_PRIVATE(dev);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
uint32_t *speeds;
int ret;
@@ -944,10 +942,9 @@ eth_igc_start(struct rte_eth_dev *dev)
return -1;
}
- if (rte_intr_dp_is_en(intr_handle) && !intr_handle->intr_vec) {
- intr_handle->intr_vec = rte_zmalloc("intr_vec",
- dev->data->nb_rx_queues * sizeof(int), 0);
- if (intr_handle->intr_vec == NULL) {
+ if (rte_intr_dp_is_en(intr_handle)) {
+ if (rte_intr_vec_list_alloc(intr_handle, "intr_vec",
+ dev->data->nb_rx_queues)) {
PMD_DRV_LOG(ERR,
"Failed to allocate %d rx_queues intr_vec",
dev->data->nb_rx_queues);
@@ -1162,7 +1159,7 @@ static int
eth_igc_close(struct rte_eth_dev *dev)
{
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
struct igc_hw *hw = IGC_DEV_PRIVATE_HW(dev);
struct igc_adapter *adapter = IGC_DEV_PRIVATE(dev);
int retry = 0;
@@ -1331,11 +1328,11 @@ eth_igc_dev_init(struct rte_eth_dev *dev)
dev->data->port_id, pci_dev->id.vendor_id,
pci_dev->id.device_id);
- rte_intr_callback_register(&pci_dev->intr_handle,
+ rte_intr_callback_register(pci_dev->intr_handle,
eth_igc_interrupt_handler, (void *)dev);
/* enable uio/vfio intr/eventfd mapping */
- rte_intr_enable(&pci_dev->intr_handle);
+ rte_intr_enable(pci_dev->intr_handle);
/* enable support intr */
igc_intr_other_enable(dev);
@@ -2076,7 +2073,7 @@ eth_igc_rx_queue_intr_disable(struct rte_eth_dev *dev, uint16_t queue_id)
{
struct igc_hw *hw = IGC_DEV_PRIVATE_HW(dev);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
uint32_t vec = IGC_MISC_VEC_ID;
if (rte_intr_allow_others(intr_handle))
@@ -2095,7 +2092,7 @@ eth_igc_rx_queue_intr_enable(struct rte_eth_dev *dev, uint16_t queue_id)
{
struct igc_hw *hw = IGC_DEV_PRIVATE_HW(dev);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
uint32_t vec = IGC_MISC_VEC_ID;
if (rte_intr_allow_others(intr_handle))
diff --git a/drivers/net/ionic/ionic_ethdev.c b/drivers/net/ionic/ionic_ethdev.c
index c688c3735c..28280c5377 100644
--- a/drivers/net/ionic/ionic_ethdev.c
+++ b/drivers/net/ionic/ionic_ethdev.c
@@ -1060,7 +1060,7 @@ static int
ionic_configure_intr(struct ionic_adapter *adapter)
{
struct rte_pci_device *pci_dev = adapter->pci_dev;
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
int err;
IONIC_PRINT(DEBUG, "Configuring %u intrs", adapter->nintrs);
@@ -1074,15 +1074,10 @@ ionic_configure_intr(struct ionic_adapter *adapter)
IONIC_PRINT(DEBUG,
"Packet I/O interrupt on datapath is enabled");
- if (!intr_handle->intr_vec) {
- intr_handle->intr_vec = rte_zmalloc("intr_vec",
- adapter->nintrs * sizeof(int), 0);
-
- if (!intr_handle->intr_vec) {
- IONIC_PRINT(ERR, "Failed to allocate %u vectors",
- adapter->nintrs);
- return -ENOMEM;
- }
+ if (rte_intr_vec_list_alloc(intr_handle, "intr_vec", adapter->nintrs)) {
+ IONIC_PRINT(ERR, "Failed to allocate %u vectors",
+ adapter->nintrs);
+ return -ENOMEM;
}
err = rte_intr_callback_register(intr_handle,
@@ -1111,7 +1106,7 @@ static void
ionic_unconfigure_intr(struct ionic_adapter *adapter)
{
struct rte_pci_device *pci_dev = adapter->pci_dev;
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
rte_intr_disable(intr_handle);
diff --git a/drivers/net/ixgbe/ixgbe_ethdev.c b/drivers/net/ixgbe/ixgbe_ethdev.c
index a87c607106..1911cf2fab 100644
--- a/drivers/net/ixgbe/ixgbe_ethdev.c
+++ b/drivers/net/ixgbe/ixgbe_ethdev.c
@@ -1027,7 +1027,7 @@ eth_ixgbe_dev_init(struct rte_eth_dev *eth_dev, void *init_params __rte_unused)
{
struct ixgbe_adapter *ad = eth_dev->data->dev_private;
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
struct ixgbe_hw *hw =
IXGBE_DEV_PRIVATE_TO_HW(eth_dev->data->dev_private);
struct ixgbe_vfta *shadow_vfta =
@@ -1525,7 +1525,7 @@ eth_ixgbevf_dev_init(struct rte_eth_dev *eth_dev)
uint32_t tc, tcs;
struct ixgbe_adapter *ad = eth_dev->data->dev_private;
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
struct ixgbe_hw *hw =
IXGBE_DEV_PRIVATE_TO_HW(eth_dev->data->dev_private);
struct ixgbe_vfta *shadow_vfta =
@@ -2539,7 +2539,7 @@ ixgbe_dev_start(struct rte_eth_dev *dev)
struct ixgbe_vf_info *vfinfo =
*IXGBE_DEV_PRIVATE_TO_P_VFDATA(dev->data->dev_private);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
uint32_t intr_vector = 0;
int err;
bool link_up = false, negotiate = 0;
@@ -2594,11 +2594,9 @@ ixgbe_dev_start(struct rte_eth_dev *dev)
return -1;
}
- if (rte_intr_dp_is_en(intr_handle) && !intr_handle->intr_vec) {
- intr_handle->intr_vec =
- rte_zmalloc("intr_vec",
- dev->data->nb_rx_queues * sizeof(int), 0);
- if (intr_handle->intr_vec == NULL) {
+ if (rte_intr_dp_is_en(intr_handle)) {
+ if (rte_intr_vec_list_alloc(intr_handle, "intr_vec",
+ dev->data->nb_rx_queues)) {
PMD_INIT_LOG(ERR, "Failed to allocate %d rx_queues"
" intr_vec", dev->data->nb_rx_queues);
return -ENOMEM;
@@ -2834,7 +2832,7 @@ ixgbe_dev_stop(struct rte_eth_dev *dev)
struct ixgbe_vf_info *vfinfo =
*IXGBE_DEV_PRIVATE_TO_P_VFDATA(dev->data->dev_private);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
int vf;
struct ixgbe_tm_conf *tm_conf =
IXGBE_DEV_PRIVATE_TO_TM_CONF(dev->data->dev_private);
@@ -2885,10 +2883,7 @@ ixgbe_dev_stop(struct rte_eth_dev *dev)
/* Clean datapath event and queue/vec mapping */
rte_intr_efd_disable(intr_handle);
- if (intr_handle->intr_vec != NULL) {
- rte_free(intr_handle->intr_vec);
- intr_handle->intr_vec = NULL;
- }
+ rte_intr_vec_list_free(intr_handle);
/* reset hierarchy commit */
tm_conf->committed = false;
@@ -2972,7 +2967,7 @@ ixgbe_dev_close(struct rte_eth_dev *dev)
struct ixgbe_hw *hw =
IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
int retries = 0;
int ret;
@@ -4618,7 +4613,7 @@ ixgbe_dev_interrupt_delayed_handler(void *param)
{
struct rte_eth_dev *dev = (struct rte_eth_dev *)param;
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
struct ixgbe_interrupt *intr =
IXGBE_DEV_PRIVATE_TO_INTR(dev->data->dev_private);
struct ixgbe_hw *hw =
@@ -5290,7 +5285,7 @@ ixgbevf_dev_start(struct rte_eth_dev *dev)
IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
uint32_t intr_vector = 0;
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
int err, mask = 0;
@@ -5353,11 +5348,9 @@ ixgbevf_dev_start(struct rte_eth_dev *dev)
}
}
- if (rte_intr_dp_is_en(intr_handle) && !intr_handle->intr_vec) {
- intr_handle->intr_vec =
- rte_zmalloc("intr_vec",
- dev->data->nb_rx_queues * sizeof(int), 0);
- if (intr_handle->intr_vec == NULL) {
+ if (rte_intr_dp_is_en(intr_handle)) {
+ if (rte_intr_vec_list_alloc(intr_handle, "intr_vec",
+ dev->data->nb_rx_queues)) {
PMD_INIT_LOG(ERR, "Failed to allocate %d rx_queues"
" intr_vec", dev->data->nb_rx_queues);
ixgbe_dev_clear_queues(dev);
@@ -5397,7 +5390,7 @@ ixgbevf_dev_stop(struct rte_eth_dev *dev)
struct ixgbe_hw *hw = IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
struct ixgbe_adapter *adapter = dev->data->dev_private;
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
if (hw->adapter_stopped)
return 0;
@@ -5425,10 +5418,7 @@ ixgbevf_dev_stop(struct rte_eth_dev *dev)
/* Clean datapath event and queue/vec mapping */
rte_intr_efd_disable(intr_handle);
- if (intr_handle->intr_vec != NULL) {
- rte_free(intr_handle->intr_vec);
- intr_handle->intr_vec = NULL;
- }
+ rte_intr_vec_list_free(intr_handle);
adapter->rss_reta_updated = 0;
@@ -5440,7 +5430,7 @@ ixgbevf_dev_close(struct rte_eth_dev *dev)
{
struct ixgbe_hw *hw = IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
int ret;
PMD_INIT_FUNC_TRACE();
@@ -5738,7 +5728,7 @@ static int
ixgbevf_dev_rx_queue_intr_enable(struct rte_eth_dev *dev, uint16_t queue_id)
{
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
struct ixgbe_interrupt *intr =
IXGBE_DEV_PRIVATE_TO_INTR(dev->data->dev_private);
struct ixgbe_hw *hw =
@@ -5764,7 +5754,7 @@ ixgbevf_dev_rx_queue_intr_disable(struct rte_eth_dev *dev, uint16_t queue_id)
struct ixgbe_hw *hw =
IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
uint32_t vec = IXGBE_MISC_VEC_ID;
if (rte_intr_allow_others(intr_handle))
@@ -5780,7 +5770,7 @@ static int
ixgbe_dev_rx_queue_intr_enable(struct rte_eth_dev *dev, uint16_t queue_id)
{
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
uint32_t mask;
struct ixgbe_hw *hw =
IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
@@ -5907,7 +5897,7 @@ static void
ixgbevf_configure_msix(struct rte_eth_dev *dev)
{
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
struct ixgbe_hw *hw =
IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
uint32_t q_idx;
@@ -5934,8 +5924,10 @@ ixgbevf_configure_msix(struct rte_eth_dev *dev)
* as IXGBE_VF_MAXMSIVECOTR = 1
*/
ixgbevf_set_ivar_map(hw, 0, q_idx, vector_idx);
- intr_handle->intr_vec[q_idx] = vector_idx;
- if (vector_idx < base + intr_handle->nb_efd - 1)
+ rte_intr_vec_list_index_set(intr_handle, q_idx,
+ vector_idx);
+ if (vector_idx < base + rte_intr_nb_efd_get(intr_handle)
+ - 1)
vector_idx++;
}
@@ -5956,7 +5948,7 @@ static void
ixgbe_configure_msix(struct rte_eth_dev *dev)
{
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
struct ixgbe_hw *hw =
IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
uint32_t queue_id, base = IXGBE_MISC_VEC_ID;
@@ -6000,8 +5992,10 @@ ixgbe_configure_msix(struct rte_eth_dev *dev)
queue_id++) {
/* by default, 1:1 mapping */
ixgbe_set_ivar_map(hw, 0, queue_id, vec);
- intr_handle->intr_vec[queue_id] = vec;
- if (vec < base + intr_handle->nb_efd - 1)
+ rte_intr_vec_list_index_set(intr_handle,
+ queue_id, vec);
+ if (vec < base + rte_intr_nb_efd_get(intr_handle)
+ - 1)
vec++;
}
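
The ixgbe MSI-X configuration above shows the other half of the vector-list
conversion: queue-to-vector entries are written with
rte_intr_vec_list_index_set() and the event-fd count is queried with
rte_intr_nb_efd_get() instead of reading nb_efd directly. A condensed sketch of
that mapping loop follows; set_ivar_map() is a placeholder standing in for the
device-specific IVAR programming.

    #include <rte_ethdev.h>
    #include <rte_interrupts.h>

    /* Placeholder for the device-specific IVAR/MSI-X table programming. */
    static void
    set_ivar_map(uint16_t queue_id, uint32_t vec)
    {
    	(void)queue_id;
    	(void)vec;
    }

    static void
    example_configure_msix(struct rte_eth_dev *dev,
    		       struct rte_intr_handle *intr_handle)
    {
    	uint32_t base = 1;	/* vector 0 stays reserved for misc/LSC */
    	uint32_t vec = base;
    	uint16_t queue_id;

    	if (!rte_intr_dp_is_en(intr_handle))
    		return;

    	for (queue_id = 0; queue_id < dev->data->nb_rx_queues; queue_id++) {
    		/* default 1:1 queue-to-vector mapping */
    		set_ivar_map(queue_id, vec);
    		rte_intr_vec_list_index_set(intr_handle, queue_id, vec);
    		/* stop advancing once every event fd has been assigned */
    		if (vec < base + rte_intr_nb_efd_get(intr_handle) - 1)
    			vec++;
    	}
    }
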
diff --git a/drivers/net/memif/memif_socket.c b/drivers/net/memif/memif_socket.c
index 8533e39f69..d48c3685d9 100644
--- a/drivers/net/memif/memif_socket.c
+++ b/drivers/net/memif/memif_socket.c
@@ -65,7 +65,8 @@ memif_msg_send_from_queue(struct memif_control_channel *cc)
if (e == NULL)
return 0;
- size = memif_msg_send(cc->intr_handle.fd, &e->msg, e->fd);
+ size = memif_msg_send(rte_intr_fd_get(cc->intr_handle), &e->msg,
+ e->fd);
if (size != sizeof(memif_msg_t)) {
MIF_LOG(ERR, "sendmsg fail: %s.", strerror(errno));
ret = -1;
@@ -317,7 +318,9 @@ memif_msg_receive_add_ring(struct rte_eth_dev *dev, memif_msg_t *msg, int fd)
mq = (ar->flags & MEMIF_MSG_ADD_RING_FLAG_C2S) ?
dev->data->rx_queues[ar->index] : dev->data->tx_queues[ar->index];
- mq->intr_handle.fd = fd;
+ if (rte_intr_fd_set(mq->intr_handle, fd))
+ return -1;
+
mq->log2_ring_size = ar->log2_ring_size;
mq->region = ar->region;
mq->ring_offset = ar->offset;
@@ -453,7 +456,7 @@ memif_msg_enq_add_ring(struct rte_eth_dev *dev, uint8_t idx,
dev->data->rx_queues[idx];
e->msg.type = MEMIF_MSG_TYPE_ADD_RING;
- e->fd = mq->intr_handle.fd;
+ e->fd = rte_intr_fd_get(mq->intr_handle);
ar->index = idx;
ar->offset = mq->ring_offset;
ar->region = mq->region;
@@ -505,12 +508,13 @@ memif_intr_unregister_handler(struct rte_intr_handle *intr_handle, void *arg)
struct memif_control_channel *cc = arg;
/* close control channel fd */
- close(intr_handle->fd);
+ close(rte_intr_fd_get(intr_handle));
/* clear message queue */
while ((elt = TAILQ_FIRST(&cc->msg_queue)) != NULL) {
TAILQ_REMOVE(&cc->msg_queue, elt, next);
rte_free(elt);
}
+ rte_intr_instance_free(cc->intr_handle);
/* free control channel */
rte_free(cc);
}
@@ -548,8 +552,8 @@ memif_disconnect(struct rte_eth_dev *dev)
"Unexpected message(s) in message queue.");
}
- ih = &pmd->cc->intr_handle;
- if (ih->fd > 0) {
+ ih = pmd->cc->intr_handle;
+ if (rte_intr_fd_get(ih) > 0) {
ret = rte_intr_callback_unregister(ih,
memif_intr_handler,
pmd->cc);
@@ -563,7 +567,8 @@ memif_disconnect(struct rte_eth_dev *dev)
pmd->cc,
memif_intr_unregister_handler);
} else if (ret > 0) {
- close(ih->fd);
+ close(rte_intr_fd_get(ih));
+ rte_intr_instance_free(ih);
rte_free(pmd->cc);
}
pmd->cc = NULL;
@@ -587,9 +592,10 @@ memif_disconnect(struct rte_eth_dev *dev)
else
continue;
}
- if (mq->intr_handle.fd > 0) {
- close(mq->intr_handle.fd);
- mq->intr_handle.fd = -1;
+
+ if (rte_intr_fd_get(mq->intr_handle) > 0) {
+ close(rte_intr_fd_get(mq->intr_handle));
+ rte_intr_fd_set(mq->intr_handle, -1);
}
}
for (i = 0; i < pmd->cfg.num_s2c_rings; i++) {
@@ -604,9 +610,10 @@ memif_disconnect(struct rte_eth_dev *dev)
else
continue;
}
- if (mq->intr_handle.fd > 0) {
- close(mq->intr_handle.fd);
- mq->intr_handle.fd = -1;
+
+ if (rte_intr_fd_get(mq->intr_handle) > 0) {
+ close(rte_intr_fd_get(mq->intr_handle));
+ rte_intr_fd_set(mq->intr_handle, -1);
}
}
@@ -644,7 +651,7 @@ memif_msg_receive(struct memif_control_channel *cc)
mh.msg_control = ctl;
mh.msg_controllen = sizeof(ctl);
- size = recvmsg(cc->intr_handle.fd, &mh, 0);
+ size = recvmsg(rte_intr_fd_get(cc->intr_handle), &mh, 0);
if (size != sizeof(memif_msg_t)) {
MIF_LOG(DEBUG, "Invalid message size = %zd", size);
if (size > 0)
@@ -774,7 +781,7 @@ memif_intr_handler(void *arg)
/* if driver failed to assign device */
if (cc->dev == NULL) {
memif_msg_send_from_queue(cc);
- ret = rte_intr_callback_unregister_pending(&cc->intr_handle,
+ ret = rte_intr_callback_unregister_pending(cc->intr_handle,
memif_intr_handler,
cc,
memif_intr_unregister_handler);
@@ -812,12 +819,12 @@ memif_listener_handler(void *arg)
int ret;
addr_len = sizeof(client);
- sockfd = accept(socket->intr_handle.fd, (struct sockaddr *)&client,
- (socklen_t *)&addr_len);
+ sockfd = accept(rte_intr_fd_get(socket->intr_handle),
+ (struct sockaddr *)&client, (socklen_t *)&addr_len);
if (sockfd < 0) {
MIF_LOG(ERR,
"Failed to accept connection request on socket fd %d",
- socket->intr_handle.fd);
+ rte_intr_fd_get(socket->intr_handle));
return;
}
@@ -829,13 +836,25 @@ memif_listener_handler(void *arg)
goto error;
}
- cc->intr_handle.fd = sockfd;
- cc->intr_handle.type = RTE_INTR_HANDLE_EXT;
+ /* Allocate interrupt instance */
+ cc->intr_handle = rte_intr_instance_alloc(RTE_INTR_INSTANCE_F_SHARED);
+ if (cc->intr_handle == NULL) {
+ MIF_LOG(ERR, "Failed to allocate intr handle");
+ goto error;
+ }
+
+ if (rte_intr_fd_set(cc->intr_handle, sockfd))
+ goto error;
+
+ if (rte_intr_type_set(cc->intr_handle, RTE_INTR_HANDLE_EXT))
+ goto error;
+
cc->socket = socket;
cc->dev = NULL;
TAILQ_INIT(&cc->msg_queue);
- ret = rte_intr_callback_register(&cc->intr_handle, memif_intr_handler, cc);
+ ret = rte_intr_callback_register(cc->intr_handle, memif_intr_handler,
+ cc);
if (ret < 0) {
MIF_LOG(ERR, "Failed to register control channel callback.");
goto error;
@@ -857,8 +876,10 @@ memif_listener_handler(void *arg)
close(sockfd);
sockfd = -1;
}
- if (cc != NULL)
+ if (cc != NULL) {
+ rte_intr_instance_free(cc->intr_handle);
rte_free(cc);
+ }
}
static struct memif_socket *
@@ -914,9 +935,21 @@ memif_socket_create(char *key, uint8_t listener, bool is_abstract)
MIF_LOG(DEBUG, "Memif listener socket %s created.", sock->filename);
- sock->intr_handle.fd = sockfd;
- sock->intr_handle.type = RTE_INTR_HANDLE_EXT;
- ret = rte_intr_callback_register(&sock->intr_handle,
+ /* Allocate interrupt instance */
+ sock->intr_handle =
+ rte_intr_instance_alloc(RTE_INTR_INSTANCE_F_SHARED);
+ if (sock->intr_handle == NULL) {
+ MIF_LOG(ERR, "Failed to allocate intr handle");
+ goto error;
+ }
+
+ if (rte_intr_fd_set(sock->intr_handle, sockfd))
+ goto error;
+
+ if (rte_intr_type_set(sock->intr_handle, RTE_INTR_HANDLE_EXT))
+ goto error;
+
+ ret = rte_intr_callback_register(sock->intr_handle,
memif_listener_handler, sock);
if (ret < 0) {
MIF_LOG(ERR, "Failed to register interrupt "
@@ -929,8 +962,10 @@ memif_socket_create(char *key, uint8_t listener, bool is_abstract)
error:
MIF_LOG(ERR, "Failed to setup socket %s: %s", key, strerror(errno));
- if (sock != NULL)
+ if (sock != NULL) {
+ rte_intr_instance_free(sock->intr_handle);
rte_free(sock);
+ }
if (sockfd >= 0)
close(sockfd);
return NULL;
@@ -1047,6 +1082,8 @@ memif_socket_remove_device(struct rte_eth_dev *dev)
MIF_LOG(ERR, "Failed to remove socket file: %s",
socket->filename);
}
+ if (pmd->role != MEMIF_ROLE_CLIENT)
+ rte_intr_instance_free(socket->intr_handle);
rte_free(socket);
}
}
@@ -1109,13 +1146,25 @@ memif_connect_client(struct rte_eth_dev *dev)
goto error;
}
- pmd->cc->intr_handle.fd = sockfd;
- pmd->cc->intr_handle.type = RTE_INTR_HANDLE_EXT;
+ /* Allocate interrupt instance */
+ pmd->cc->intr_handle =
+ rte_intr_instance_alloc(RTE_INTR_INSTANCE_F_SHARED);
+ if (pmd->cc->intr_handle == NULL) {
+ MIF_LOG(ERR, "Failed to allocate intr handle");
+ goto error;
+ }
+
+ if (rte_intr_fd_set(pmd->cc->intr_handle, sockfd))
+ goto error;
+
+ if (rte_intr_type_set(pmd->cc->intr_handle, RTE_INTR_HANDLE_EXT))
+ goto error;
+
pmd->cc->socket = NULL;
pmd->cc->dev = dev;
TAILQ_INIT(&pmd->cc->msg_queue);
- ret = rte_intr_callback_register(&pmd->cc->intr_handle,
+ ret = rte_intr_callback_register(pmd->cc->intr_handle,
memif_intr_handler, pmd->cc);
if (ret < 0) {
MIF_LOG(ERR, "Failed to register interrupt callback for control fd");
@@ -1130,6 +1179,7 @@ memif_connect_client(struct rte_eth_dev *dev)
sockfd = -1;
}
if (pmd->cc != NULL) {
+ rte_intr_instance_free(pmd->cc->intr_handle);
rte_free(pmd->cc);
pmd->cc = NULL;
}
diff --git a/drivers/net/memif/memif_socket.h b/drivers/net/memif/memif_socket.h
index b9b8a15178..b0decbb0a2 100644
--- a/drivers/net/memif/memif_socket.h
+++ b/drivers/net/memif/memif_socket.h
@@ -85,7 +85,7 @@ struct memif_socket_dev_list_elt {
(sizeof(struct sockaddr_un) - offsetof(struct sockaddr_un, sun_path))
struct memif_socket {
- struct rte_intr_handle intr_handle; /**< interrupt handle */
+ struct rte_intr_handle *intr_handle; /**< interrupt handle */
char filename[MEMIF_SOCKET_UN_SIZE]; /**< socket filename */
TAILQ_HEAD(, memif_socket_dev_list_elt) dev_queue;
@@ -101,7 +101,7 @@ struct memif_msg_queue_elt {
};
struct memif_control_channel {
- struct rte_intr_handle intr_handle; /**< interrupt handle */
+ struct rte_intr_handle *intr_handle; /**< interrupt handle */
TAILQ_HEAD(, memif_msg_queue_elt) msg_queue; /**< control message queue */
struct memif_socket *socket; /**< pointer to socket */
struct rte_eth_dev *dev; /**< pointer to device */
diff --git a/drivers/net/memif/rte_eth_memif.c b/drivers/net/memif/rte_eth_memif.c
index 9deb7a5f13..8cec493ffd 100644
--- a/drivers/net/memif/rte_eth_memif.c
+++ b/drivers/net/memif/rte_eth_memif.c
@@ -326,7 +326,8 @@ eth_memif_rx(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
/* consume interrupt */
if ((ring->flags & MEMIF_RING_FLAG_MASK_INT) == 0)
- size = read(mq->intr_handle.fd, &b, sizeof(b));
+ size = read(rte_intr_fd_get(mq->intr_handle), &b,
+ sizeof(b));
ring_size = 1 << mq->log2_ring_size;
mask = ring_size - 1;
@@ -462,7 +463,8 @@ eth_memif_rx_zc(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
if ((ring->flags & MEMIF_RING_FLAG_MASK_INT) == 0) {
uint64_t b;
ssize_t size __rte_unused;
- size = read(mq->intr_handle.fd, &b, sizeof(b));
+ size = read(rte_intr_fd_get(mq->intr_handle), &b,
+ sizeof(b));
}
ring_size = 1 << mq->log2_ring_size;
@@ -680,7 +682,8 @@ eth_memif_tx(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
if ((ring->flags & MEMIF_RING_FLAG_MASK_INT) == 0) {
a = 1;
- size = write(mq->intr_handle.fd, &a, sizeof(a));
+ size = write(rte_intr_fd_get(mq->intr_handle), &a,
+ sizeof(a));
if (unlikely(size < 0)) {
MIF_LOG(WARNING,
"Failed to send interrupt. %s", strerror(errno));
@@ -832,7 +835,8 @@ eth_memif_tx_zc(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
/* Send interrupt, if enabled. */
if ((ring->flags & MEMIF_RING_FLAG_MASK_INT) == 0) {
uint64_t a = 1;
- ssize_t size = write(mq->intr_handle.fd, &a, sizeof(a));
+ ssize_t size = write(rte_intr_fd_get(mq->intr_handle),
+ &a, sizeof(a));
if (unlikely(size < 0)) {
MIF_LOG(WARNING,
"Failed to send interrupt. %s", strerror(errno));
@@ -1092,8 +1096,10 @@ memif_init_queues(struct rte_eth_dev *dev)
mq->ring_offset = memif_get_ring_offset(dev, mq, MEMIF_RING_C2S, i);
mq->last_head = 0;
mq->last_tail = 0;
- mq->intr_handle.fd = eventfd(0, EFD_NONBLOCK);
- if (mq->intr_handle.fd < 0) {
+ if (rte_intr_fd_set(mq->intr_handle, eventfd(0, EFD_NONBLOCK)))
+ return -rte_errno;
+
+ if (rte_intr_fd_get(mq->intr_handle) < 0) {
MIF_LOG(WARNING,
"Failed to create eventfd for tx queue %d: %s.", i,
strerror(errno));
@@ -1115,8 +1121,9 @@ memif_init_queues(struct rte_eth_dev *dev)
mq->ring_offset = memif_get_ring_offset(dev, mq, MEMIF_RING_S2C, i);
mq->last_head = 0;
mq->last_tail = 0;
- mq->intr_handle.fd = eventfd(0, EFD_NONBLOCK);
- if (mq->intr_handle.fd < 0) {
+ if (rte_intr_fd_set(mq->intr_handle, eventfd(0, EFD_NONBLOCK)))
+ return -rte_errno;
+ if (rte_intr_fd_get(mq->intr_handle) < 0) {
MIF_LOG(WARNING,
"Failed to create eventfd for rx queue %d: %s.", i,
strerror(errno));
@@ -1310,12 +1317,24 @@ memif_tx_queue_setup(struct rte_eth_dev *dev,
return -ENOMEM;
}
+ /* Allocate interrupt instance */
+ mq->intr_handle = rte_intr_instance_alloc(RTE_INTR_INSTANCE_F_SHARED);
+ if (mq->intr_handle == NULL) {
+ MIF_LOG(ERR, "Failed to allocate intr handle");
+ return -ENOMEM;
+ }
+
mq->type =
(pmd->role == MEMIF_ROLE_CLIENT) ? MEMIF_RING_C2S : MEMIF_RING_S2C;
mq->n_pkts = 0;
mq->n_bytes = 0;
- mq->intr_handle.fd = -1;
- mq->intr_handle.type = RTE_INTR_HANDLE_EXT;
+
+ if (rte_intr_fd_set(mq->intr_handle, -1))
+ return -rte_errno;
+
+ if (rte_intr_type_set(mq->intr_handle, RTE_INTR_HANDLE_EXT))
+ return -rte_errno;
+
mq->in_port = dev->data->port_id;
dev->data->tx_queues[qid] = mq;
@@ -1339,11 +1358,23 @@ memif_rx_queue_setup(struct rte_eth_dev *dev,
return -ENOMEM;
}
+ /* Allocate interrupt instance */
+ mq->intr_handle = rte_intr_instance_alloc(RTE_INTR_INSTANCE_F_SHARED);
+ if (mq->intr_handle == NULL) {
+ MIF_LOG(ERR, "Failed to allocate intr handle");
+ return -ENOMEM;
+ }
+
mq->type = (pmd->role == MEMIF_ROLE_CLIENT) ? MEMIF_RING_S2C : MEMIF_RING_C2S;
mq->n_pkts = 0;
mq->n_bytes = 0;
- mq->intr_handle.fd = -1;
- mq->intr_handle.type = RTE_INTR_HANDLE_EXT;
+
+ if (rte_intr_fd_set(mq->intr_handle, -1))
+ return -rte_errno;
+
+ if (rte_intr_type_set(mq->intr_handle, RTE_INTR_HANDLE_EXT))
+ return -rte_errno;
+
mq->mempool = mb_pool;
mq->in_port = dev->data->port_id;
dev->data->rx_queues[qid] = mq;
@@ -1359,6 +1390,7 @@ memif_rx_queue_release(struct rte_eth_dev *dev, uint16_t qid)
if (!mq)
return;
+ rte_intr_instance_free(mq->intr_handle);
rte_free(mq);
}
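
The memif changes above illustrate the full lifecycle a vdev driver now
follows: allocate an interrupt instance, set fd and type through the wrappers,
register the callback, and free the instance on teardown instead of resetting
an embedded struct. A minimal sketch of that lifecycle, assuming a socket fd is
already open (the example_* names are placeholders):

    #include <errno.h>
    #include <rte_errno.h>
    #include <rte_interrupts.h>

    /* Sketch: control-channel handle lifecycle for a vdev driver. */
    static int
    example_setup_channel(int sockfd, rte_intr_callback_fn cb, void *cb_arg,
    		      struct rte_intr_handle **out)
    {
    	struct rte_intr_handle *ih;
    	int ret;

    	/* the instance is allocated by EAL; the driver never sees its layout */
    	ih = rte_intr_instance_alloc(RTE_INTR_INSTANCE_F_SHARED);
    	if (ih == NULL)
    		return -ENOMEM;

    	if (rte_intr_fd_set(ih, sockfd) ||
    	    rte_intr_type_set(ih, RTE_INTR_HANDLE_EXT)) {
    		rte_intr_instance_free(ih);
    		return -rte_errno;
    	}

    	ret = rte_intr_callback_register(ih, cb, cb_arg);
    	if (ret < 0) {
    		rte_intr_instance_free(ih);
    		return ret;
    	}

    	*out = ih;
    	return 0;
    }

    static void
    example_teardown_channel(struct rte_intr_handle *ih, rte_intr_callback_fn cb,
    			 void *cb_arg)
    {
    	if (rte_intr_fd_get(ih) >= 0)
    		rte_intr_callback_unregister(ih, cb, cb_arg);
    	/* pairs with rte_intr_instance_alloc(); replaces the embedded struct */
    	rte_intr_instance_free(ih);
    }
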
diff --git a/drivers/net/memif/rte_eth_memif.h b/drivers/net/memif/rte_eth_memif.h
index 2038bda742..a5ee23d42e 100644
--- a/drivers/net/memif/rte_eth_memif.h
+++ b/drivers/net/memif/rte_eth_memif.h
@@ -68,7 +68,7 @@ struct memif_queue {
uint64_t n_pkts; /**< number of rx/tx packets */
uint64_t n_bytes; /**< number of rx/tx bytes */
- struct rte_intr_handle intr_handle; /**< interrupt handle */
+ struct rte_intr_handle *intr_handle; /**< interrupt handle */
memif_log2_ring_size_t log2_ring_size; /**< log2 of ring size */
};
diff --git a/drivers/net/mlx4/mlx4.c b/drivers/net/mlx4/mlx4.c
index f7fe831d61..cccc71f757 100644
--- a/drivers/net/mlx4/mlx4.c
+++ b/drivers/net/mlx4/mlx4.c
@@ -1042,9 +1042,19 @@ mlx4_pci_probe(struct rte_pci_driver *pci_drv, struct rte_pci_device *pci_dev)
rte_eth_copy_pci_info(eth_dev, pci_dev);
eth_dev->data->dev_flags |= RTE_ETH_DEV_AUTOFILL_QUEUE_XSTATS;
/* Initialize local interrupt handle for current port. */
- memset(&priv->intr_handle, 0, sizeof(struct rte_intr_handle));
- priv->intr_handle.fd = -1;
- priv->intr_handle.type = RTE_INTR_HANDLE_EXT;
+ priv->intr_handle =
+ rte_intr_instance_alloc(RTE_INTR_INSTANCE_F_SHARED);
+ if (priv->intr_handle == NULL) {
+ RTE_LOG(ERR, EAL, "Fail to allocate intr_handle\n");
+ goto port_error;
+ }
+
+ if (rte_intr_fd_set(priv->intr_handle, -1))
+ goto port_error;
+
+ if (rte_intr_type_set(priv->intr_handle, RTE_INTR_HANDLE_EXT))
+ goto port_error;
+
/*
* Override ethdev interrupt handle pointer with private
* handle instead of that of the parent PCI device used by
@@ -1057,7 +1067,7 @@ mlx4_pci_probe(struct rte_pci_driver *pci_drv, struct rte_pci_device *pci_dev)
* besides setting up eth_dev->intr_handle, the rest is
* handled by rte_intr_rx_ctl().
*/
- eth_dev->intr_handle = &priv->intr_handle;
+ eth_dev->intr_handle = priv->intr_handle;
priv->dev_data = eth_dev->data;
eth_dev->dev_ops = &mlx4_dev_ops;
#ifdef HAVE_IBV_MLX4_BUF_ALLOCATORS
@@ -1102,6 +1112,7 @@ mlx4_pci_probe(struct rte_pci_driver *pci_drv, struct rte_pci_device *pci_dev)
prev_dev = eth_dev;
continue;
port_error:
+ rte_intr_instance_free(priv->intr_handle);
rte_free(priv);
if (eth_dev != NULL)
eth_dev->data->dev_private = NULL;
diff --git a/drivers/net/mlx4/mlx4.h b/drivers/net/mlx4/mlx4.h
index e07b1d2386..2d0c512f79 100644
--- a/drivers/net/mlx4/mlx4.h
+++ b/drivers/net/mlx4/mlx4.h
@@ -176,7 +176,7 @@ struct mlx4_priv {
uint32_t tso_max_payload_sz; /**< Max supported TSO payload size. */
uint32_t hw_rss_max_qps; /**< Max Rx Queues supported by RSS. */
uint64_t hw_rss_sup; /**< Supported RSS hash fields (Verbs format). */
- struct rte_intr_handle intr_handle; /**< Port interrupt handle. */
+ struct rte_intr_handle *intr_handle; /**< Port interrupt handle. */
struct mlx4_drop *drop; /**< Shared resources for drop flow rules. */
struct {
uint32_t dev_gen; /* Generation number to flush local caches. */
diff --git a/drivers/net/mlx4/mlx4_intr.c b/drivers/net/mlx4/mlx4_intr.c
index 2aab0f60a7..01057482ec 100644
--- a/drivers/net/mlx4/mlx4_intr.c
+++ b/drivers/net/mlx4/mlx4_intr.c
@@ -43,12 +43,12 @@ static int mlx4_link_status_check(struct mlx4_priv *priv);
static void
mlx4_rx_intr_vec_disable(struct mlx4_priv *priv)
{
- struct rte_intr_handle *intr_handle = &priv->intr_handle;
+ struct rte_intr_handle *intr_handle = priv->intr_handle;
rte_intr_free_epoll_fd(intr_handle);
- free(intr_handle->intr_vec);
- intr_handle->nb_efd = 0;
- intr_handle->intr_vec = NULL;
+ rte_intr_vec_list_free(intr_handle);
+
+ rte_intr_nb_efd_set(intr_handle, 0);
}
/**
@@ -67,11 +67,10 @@ mlx4_rx_intr_vec_enable(struct mlx4_priv *priv)
unsigned int rxqs_n = ETH_DEV(priv)->data->nb_rx_queues;
unsigned int n = RTE_MIN(rxqs_n, (uint32_t)RTE_MAX_RXTX_INTR_VEC_ID);
unsigned int count = 0;
- struct rte_intr_handle *intr_handle = &priv->intr_handle;
+ struct rte_intr_handle *intr_handle = priv->intr_handle;
mlx4_rx_intr_vec_disable(priv);
- intr_handle->intr_vec = malloc(n * sizeof(intr_handle->intr_vec[0]));
- if (intr_handle->intr_vec == NULL) {
+ if (rte_intr_vec_list_alloc(intr_handle, NULL, n)) {
rte_errno = ENOMEM;
ERROR("failed to allocate memory for interrupt vector,"
" Rx interrupts will not be supported");
@@ -83,9 +82,9 @@ mlx4_rx_intr_vec_enable(struct mlx4_priv *priv)
/* Skip queues that cannot request interrupts. */
if (!rxq || !rxq->channel) {
/* Use invalid intr_vec[] index to disable entry. */
- intr_handle->intr_vec[i] =
- RTE_INTR_VEC_RXTX_OFFSET +
- RTE_MAX_RXTX_INTR_VEC_ID;
+ if (rte_intr_vec_list_index_set(intr_handle, i,
+ RTE_INTR_VEC_RXTX_OFFSET + RTE_MAX_RXTX_INTR_VEC_ID))
+ return -rte_errno;
continue;
}
if (count >= RTE_MAX_RXTX_INTR_VEC_ID) {
@@ -96,14 +95,21 @@ mlx4_rx_intr_vec_enable(struct mlx4_priv *priv)
mlx4_rx_intr_vec_disable(priv);
return -rte_errno;
}
- intr_handle->intr_vec[i] = RTE_INTR_VEC_RXTX_OFFSET + count;
- intr_handle->efds[count] = rxq->channel->fd;
+
+ if (rte_intr_vec_list_index_set(intr_handle, i,
+ RTE_INTR_VEC_RXTX_OFFSET + count))
+ return -rte_errno;
+
+ if (rte_intr_efds_index_set(intr_handle, i,
+ rxq->channel->fd))
+ return -rte_errno;
+
count++;
}
if (!count)
mlx4_rx_intr_vec_disable(priv);
- else
- intr_handle->nb_efd = count;
+ else if (rte_intr_nb_efd_set(intr_handle, count))
+ return -rte_errno;
return 0;
}
@@ -254,12 +260,13 @@ mlx4_intr_uninstall(struct mlx4_priv *priv)
{
int err = rte_errno; /* Make sure rte_errno remains unchanged. */
- if (priv->intr_handle.fd != -1) {
- rte_intr_callback_unregister(&priv->intr_handle,
+ if (rte_intr_fd_get(priv->intr_handle) != -1) {
+ rte_intr_callback_unregister(priv->intr_handle,
(void (*)(void *))
mlx4_interrupt_handler,
priv);
- priv->intr_handle.fd = -1;
+ if (rte_intr_fd_set(priv->intr_handle, -1))
+ return -rte_errno;
}
rte_eal_alarm_cancel((void (*)(void *))mlx4_link_status_alarm, priv);
priv->intr_alarm = 0;
@@ -286,8 +293,10 @@ mlx4_intr_install(struct mlx4_priv *priv)
mlx4_intr_uninstall(priv);
if (intr_conf->lsc | intr_conf->rmv) {
- priv->intr_handle.fd = priv->ctx->async_fd;
- rc = rte_intr_callback_register(&priv->intr_handle,
+ if (rte_intr_fd_set(priv->intr_handle, priv->ctx->async_fd))
+ return -rte_errno;
+
+ rc = rte_intr_callback_register(priv->intr_handle,
(void (*)(void *))
mlx4_interrupt_handler,
priv);
diff --git a/drivers/net/mlx5/linux/mlx5_os.c b/drivers/net/mlx5/linux/mlx5_os.c
index f17e1aac3c..72bbb665cf 100644
--- a/drivers/net/mlx5/linux/mlx5_os.c
+++ b/drivers/net/mlx5/linux/mlx5_os.c
@@ -2458,11 +2458,9 @@ mlx5_os_pci_probe_pf(struct mlx5_common_device *cdev,
* Representor interrupts handle is released in mlx5_dev_stop().
*/
if (list[i].info.representor) {
- struct rte_intr_handle *intr_handle;
- intr_handle = mlx5_malloc(MLX5_MEM_SYS | MLX5_MEM_ZERO,
- sizeof(*intr_handle), 0,
- SOCKET_ID_ANY);
- if (!intr_handle) {
+ struct rte_intr_handle *intr_handle =
+ rte_intr_instance_alloc(RTE_INTR_INSTANCE_F_SHARED);
+ if (intr_handle == NULL) {
DRV_LOG(ERR,
"port %u failed to allocate memory for interrupt handler "
"Rx interrupts will not be supported",
@@ -2626,7 +2624,7 @@ mlx5_os_auxiliary_probe(struct mlx5_common_device *cdev)
if (eth_dev == NULL)
return -rte_errno;
/* Post create. */
- eth_dev->intr_handle = &adev->intr_handle;
+ eth_dev->intr_handle = adev->intr_handle;
if (rte_eal_process_type() == RTE_PROC_PRIMARY) {
eth_dev->data->dev_flags |= RTE_ETH_DEV_INTR_LSC;
eth_dev->data->dev_flags |= RTE_ETH_DEV_INTR_RMV;
@@ -2690,24 +2688,38 @@ mlx5_os_dev_shared_handler_install(struct mlx5_dev_ctx_shared *sh)
int flags;
struct ibv_context *ctx = sh->cdev->ctx;
- sh->intr_handle.fd = -1;
+ sh->intr_handle = rte_intr_instance_alloc(RTE_INTR_INSTANCE_F_SHARED);
+ if (sh->intr_handle == NULL) {
+ DRV_LOG(ERR, "Fail to allocate intr_handle");
+ rte_errno = ENOMEM;
+ return;
+ }
+ rte_intr_fd_set(sh->intr_handle, -1);
+
flags = fcntl(ctx->async_fd, F_GETFL);
ret = fcntl(ctx->async_fd, F_SETFL, flags | O_NONBLOCK);
if (ret) {
DRV_LOG(INFO, "failed to change file descriptor async event"
" queue");
} else {
- sh->intr_handle.fd = ctx->async_fd;
- sh->intr_handle.type = RTE_INTR_HANDLE_EXT;
- if (rte_intr_callback_register(&sh->intr_handle,
+ rte_intr_fd_set(sh->intr_handle, ctx->async_fd);
+ rte_intr_type_set(sh->intr_handle, RTE_INTR_HANDLE_EXT);
+ if (rte_intr_callback_register(sh->intr_handle,
mlx5_dev_interrupt_handler, sh)) {
DRV_LOG(INFO, "Fail to install the shared interrupt.");
- sh->intr_handle.fd = -1;
+ rte_intr_fd_set(sh->intr_handle, -1);
}
}
if (sh->devx) {
#ifdef HAVE_IBV_DEVX_ASYNC
- sh->intr_handle_devx.fd = -1;
+ sh->intr_handle_devx =
+ rte_intr_instance_alloc(RTE_INTR_INSTANCE_F_SHARED);
+ if (!sh->intr_handle_devx) {
+ DRV_LOG(ERR, "Fail to allocate intr_handle");
+ rte_errno = ENOMEM;
+ return;
+ }
+ rte_intr_fd_set(sh->intr_handle_devx, -1);
sh->devx_comp = (void *)mlx5_glue->devx_create_cmd_comp(ctx);
struct mlx5dv_devx_cmd_comp *devx_comp = sh->devx_comp;
if (!devx_comp) {
@@ -2721,13 +2733,14 @@ mlx5_os_dev_shared_handler_install(struct mlx5_dev_ctx_shared *sh)
" devx comp");
return;
}
- sh->intr_handle_devx.fd = devx_comp->fd;
- sh->intr_handle_devx.type = RTE_INTR_HANDLE_EXT;
- if (rte_intr_callback_register(&sh->intr_handle_devx,
+ rte_intr_fd_set(sh->intr_handle_devx, devx_comp->fd);
+ rte_intr_type_set(sh->intr_handle_devx,
+ RTE_INTR_HANDLE_EXT);
+ if (rte_intr_callback_register(sh->intr_handle_devx,
mlx5_dev_interrupt_handler_devx, sh)) {
DRV_LOG(INFO, "Fail to install the devx shared"
" interrupt.");
- sh->intr_handle_devx.fd = -1;
+ rte_intr_fd_set(sh->intr_handle_devx, -1);
}
#endif /* HAVE_IBV_DEVX_ASYNC */
}
@@ -2744,13 +2757,15 @@ mlx5_os_dev_shared_handler_install(struct mlx5_dev_ctx_shared *sh)
void
mlx5_os_dev_shared_handler_uninstall(struct mlx5_dev_ctx_shared *sh)
{
- if (sh->intr_handle.fd >= 0)
- mlx5_intr_callback_unregister(&sh->intr_handle,
+ if (rte_intr_fd_get(sh->intr_handle) >= 0)
+ mlx5_intr_callback_unregister(sh->intr_handle,
mlx5_dev_interrupt_handler, sh);
+ rte_intr_instance_free(sh->intr_handle);
#ifdef HAVE_IBV_DEVX_ASYNC
- if (sh->intr_handle_devx.fd >= 0)
- rte_intr_callback_unregister(&sh->intr_handle_devx,
+ if (rte_intr_fd_get(sh->intr_handle_devx) >= 0)
+ rte_intr_callback_unregister(sh->intr_handle_devx,
mlx5_dev_interrupt_handler_devx, sh);
+ rte_intr_instance_free(sh->intr_handle_devx);
if (sh->devx_comp)
mlx5_glue->devx_destroy_cmd_comp(sh->devx_comp);
#endif
diff --git a/drivers/net/mlx5/linux/mlx5_socket.c b/drivers/net/mlx5/linux/mlx5_socket.c
index 902b8ec934..db474f030a 100644
--- a/drivers/net/mlx5/linux/mlx5_socket.c
+++ b/drivers/net/mlx5/linux/mlx5_socket.c
@@ -23,7 +23,7 @@
#define MLX5_SOCKET_PATH "/var/tmp/dpdk_net_mlx5_%d"
int server_socket; /* Unix socket for primary process. */
-struct rte_intr_handle server_intr_handle; /* Interrupt handler. */
+struct rte_intr_handle *server_intr_handle; /* Interrupt handler. */
/**
* Handle server pmd socket interrupts.
@@ -145,9 +145,19 @@ static int
mlx5_pmd_interrupt_handler_install(void)
{
MLX5_ASSERT(server_socket);
- server_intr_handle.fd = server_socket;
- server_intr_handle.type = RTE_INTR_HANDLE_EXT;
- return rte_intr_callback_register(&server_intr_handle,
+ server_intr_handle =
+ rte_intr_instance_alloc(RTE_INTR_INSTANCE_F_PRIVATE);
+ if (server_intr_handle == NULL) {
+ DRV_LOG(ERR, "Fail to allocate intr_handle");
+ return -ENOMEM;
+ }
+ if (rte_intr_fd_set(server_intr_handle, server_socket))
+ return -rte_errno;
+
+ if (rte_intr_type_set(server_intr_handle, RTE_INTR_HANDLE_EXT))
+ return -rte_errno;
+
+ return rte_intr_callback_register(server_intr_handle,
mlx5_pmd_socket_handle, NULL);
}
@@ -158,12 +168,13 @@ static void
mlx5_pmd_interrupt_handler_uninstall(void)
{
if (server_socket) {
- mlx5_intr_callback_unregister(&server_intr_handle,
+ mlx5_intr_callback_unregister(server_intr_handle,
mlx5_pmd_socket_handle,
NULL);
}
- server_intr_handle.fd = 0;
- server_intr_handle.type = RTE_INTR_HANDLE_UNKNOWN;
+ rte_intr_fd_set(server_intr_handle, 0);
+ rte_intr_type_set(server_intr_handle, RTE_INTR_HANDLE_UNKNOWN);
+ rte_intr_instance_free(server_intr_handle);
}
/**
diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h
index 5da5ceaafe..5768b82935 100644
--- a/drivers/net/mlx5/mlx5.h
+++ b/drivers/net/mlx5/mlx5.h
@@ -996,7 +996,7 @@ struct mlx5_dev_txpp {
uint32_t tick; /* Completion tick duration in nanoseconds. */
uint32_t test; /* Packet pacing test mode. */
int32_t skew; /* Scheduling skew. */
- struct rte_intr_handle intr_handle; /* Periodic interrupt. */
+ struct rte_intr_handle *intr_handle; /* Periodic interrupt. */
void *echan; /* Event Channel. */
struct mlx5_txpp_wq clock_queue; /* Clock Queue. */
struct mlx5_txpp_wq rearm_queue; /* Clock Queue. */
@@ -1160,8 +1160,8 @@ struct mlx5_dev_ctx_shared {
struct mlx5_indexed_pool *ipool[MLX5_IPOOL_MAX];
struct mlx5_indexed_pool *mdh_ipools[MLX5_MAX_MODIFY_NUM];
/* Shared interrupt handler section. */
- struct rte_intr_handle intr_handle; /* Interrupt handler for device. */
- struct rte_intr_handle intr_handle_devx; /* DEVX interrupt handler. */
+ struct rte_intr_handle *intr_handle; /* Interrupt handler for device. */
+ struct rte_intr_handle *intr_handle_devx; /* DEVX interrupt handler. */
void *devx_comp; /* DEVX async comp obj. */
struct mlx5_devx_obj *tis[16]; /* TIS object. */
struct mlx5_devx_obj *td; /* Transport domain. */
diff --git a/drivers/net/mlx5/mlx5_rxq.c b/drivers/net/mlx5/mlx5_rxq.c
index 5fed42324d..4f02fe02b9 100644
--- a/drivers/net/mlx5/mlx5_rxq.c
+++ b/drivers/net/mlx5/mlx5_rxq.c
@@ -834,10 +834,7 @@ mlx5_rx_intr_vec_enable(struct rte_eth_dev *dev)
if (!dev->data->dev_conf.intr_conf.rxq)
return 0;
mlx5_rx_intr_vec_disable(dev);
- intr_handle->intr_vec = mlx5_malloc(0,
- n * sizeof(intr_handle->intr_vec[0]),
- 0, SOCKET_ID_ANY);
- if (intr_handle->intr_vec == NULL) {
+ if (rte_intr_vec_list_alloc(intr_handle, NULL, n)) {
DRV_LOG(ERR,
"port %u failed to allocate memory for interrupt"
" vector, Rx interrupts will not be supported",
@@ -845,7 +842,10 @@ mlx5_rx_intr_vec_enable(struct rte_eth_dev *dev)
rte_errno = ENOMEM;
return -rte_errno;
}
- intr_handle->type = RTE_INTR_HANDLE_EXT;
+
+ if (rte_intr_type_set(intr_handle, RTE_INTR_HANDLE_EXT))
+ return -rte_errno;
+
for (i = 0; i != n; ++i) {
/* This rxq obj must not be released in this function. */
struct mlx5_rxq_ctrl *rxq_ctrl = mlx5_rxq_get(dev, i);
@@ -856,9 +856,9 @@ mlx5_rx_intr_vec_enable(struct rte_eth_dev *dev)
if (!rxq_obj || (!rxq_obj->ibv_channel &&
!rxq_obj->devx_channel)) {
/* Use invalid intr_vec[] index to disable entry. */
- intr_handle->intr_vec[i] =
- RTE_INTR_VEC_RXTX_OFFSET +
- RTE_MAX_RXTX_INTR_VEC_ID;
+ if (rte_intr_vec_list_index_set(intr_handle, i,
+ RTE_INTR_VEC_RXTX_OFFSET + RTE_MAX_RXTX_INTR_VEC_ID))
+ return -rte_errno;
/* Decrease the rxq_ctrl's refcnt */
if (rxq_ctrl)
mlx5_rxq_release(dev, i);
@@ -885,14 +885,19 @@ mlx5_rx_intr_vec_enable(struct rte_eth_dev *dev)
mlx5_rx_intr_vec_disable(dev);
return -rte_errno;
}
- intr_handle->intr_vec[i] = RTE_INTR_VEC_RXTX_OFFSET + count;
- intr_handle->efds[count] = rxq_obj->fd;
+
+ if (rte_intr_vec_list_index_set(intr_handle, i,
+ RTE_INTR_VEC_RXTX_OFFSET + count))
+ return -rte_errno;
+ if (rte_intr_efds_index_set(intr_handle, count,
+ rxq_obj->fd))
+ return -rte_errno;
count++;
}
if (!count)
mlx5_rx_intr_vec_disable(dev);
- else
- intr_handle->nb_efd = count;
+ else if (rte_intr_nb_efd_set(intr_handle, count))
+ return -rte_errno;
return 0;
}
@@ -913,11 +918,11 @@ mlx5_rx_intr_vec_disable(struct rte_eth_dev *dev)
if (!dev->data->dev_conf.intr_conf.rxq)
return;
- if (!intr_handle->intr_vec)
+ if (rte_intr_vec_list_index_get(intr_handle, 0) < 0)
goto free;
for (i = 0; i != n; ++i) {
- if (intr_handle->intr_vec[i] == RTE_INTR_VEC_RXTX_OFFSET +
- RTE_MAX_RXTX_INTR_VEC_ID)
+ if (rte_intr_vec_list_index_get(intr_handle, i) ==
+ RTE_INTR_VEC_RXTX_OFFSET + RTE_MAX_RXTX_INTR_VEC_ID)
continue;
/**
* Need to access directly the queue to release the reference
@@ -927,10 +932,10 @@ mlx5_rx_intr_vec_disable(struct rte_eth_dev *dev)
}
free:
rte_intr_free_epoll_fd(intr_handle);
- if (intr_handle->intr_vec)
- mlx5_free(intr_handle->intr_vec);
- intr_handle->nb_efd = 0;
- intr_handle->intr_vec = NULL;
+
+ rte_intr_vec_list_free(intr_handle);
+
+ rte_intr_nb_efd_set(intr_handle, 0);
}
/**
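
The mlx4/mlx5 Rx interrupt vector code above follows a slightly different
pattern, since the event fds come from Verbs/DevX completion channels rather
than from VFIO: each usable queue gets an intr_vec entry and an efds[] slot,
and the resulting count is committed with rte_intr_nb_efd_set(). A compact
sketch of that logic, assuming a hypothetical queue_fds[] array of channel fds
where -1 marks a queue without interrupt support:

    #include <rte_errno.h>
    #include <rte_interrupts.h>

    static int
    example_fill_rx_vec(struct rte_intr_handle *intr_handle,
    		    const int *queue_fds, unsigned int n)
    {
    	unsigned int i, count = 0;

    	if (rte_intr_vec_list_alloc(intr_handle, NULL, n))
    		return -rte_errno;
    	if (rte_intr_type_set(intr_handle, RTE_INTR_HANDLE_EXT))
    		return -rte_errno;

    	for (i = 0; i != n; ++i) {
    		if (queue_fds[i] < 0) {
    			/* out-of-range index disables this entry */
    			if (rte_intr_vec_list_index_set(intr_handle, i,
    					RTE_INTR_VEC_RXTX_OFFSET +
    					RTE_MAX_RXTX_INTR_VEC_ID))
    				return -rte_errno;
    			continue;
    		}
    		if (rte_intr_vec_list_index_set(intr_handle, i,
    				RTE_INTR_VEC_RXTX_OFFSET + count))
    			return -rte_errno;
    		if (rte_intr_efds_index_set(intr_handle, count, queue_fds[i]))
    			return -rte_errno;
    		count++;
    	}
    	return rte_intr_nb_efd_set(intr_handle, count);
    }
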
diff --git a/drivers/net/mlx5/mlx5_trigger.c b/drivers/net/mlx5/mlx5_trigger.c
index dacf7ff272..d916c8addc 100644
--- a/drivers/net/mlx5/mlx5_trigger.c
+++ b/drivers/net/mlx5/mlx5_trigger.c
@@ -1183,7 +1183,7 @@ mlx5_dev_start(struct rte_eth_dev *dev)
dev->rx_pkt_burst = mlx5_select_rx_function(dev);
/* Enable datapath on secondary process. */
mlx5_mp_os_req_start_rxtx(dev);
- if (priv->sh->intr_handle.fd >= 0) {
+ if (rte_intr_fd_get(priv->sh->intr_handle) >= 0) {
priv->sh->port[priv->dev_port - 1].ih_port_id =
(uint32_t)dev->data->port_id;
} else {
@@ -1192,7 +1192,7 @@ mlx5_dev_start(struct rte_eth_dev *dev)
dev->data->dev_conf.intr_conf.lsc = 0;
dev->data->dev_conf.intr_conf.rmv = 0;
}
- if (priv->sh->intr_handle_devx.fd >= 0)
+ if (rte_intr_fd_get(priv->sh->intr_handle_devx) >= 0)
priv->sh->port[priv->dev_port - 1].devx_ih_port_id =
(uint32_t)dev->data->port_id;
return 0;
diff --git a/drivers/net/mlx5/mlx5_txpp.c b/drivers/net/mlx5/mlx5_txpp.c
index 48f03fcd79..34f92faa67 100644
--- a/drivers/net/mlx5/mlx5_txpp.c
+++ b/drivers/net/mlx5/mlx5_txpp.c
@@ -759,11 +759,11 @@ mlx5_txpp_interrupt_handler(void *cb_arg)
static void
mlx5_txpp_stop_service(struct mlx5_dev_ctx_shared *sh)
{
- if (!sh->txpp.intr_handle.fd)
+ if (!rte_intr_fd_get(sh->txpp.intr_handle))
return;
- mlx5_intr_callback_unregister(&sh->txpp.intr_handle,
+ mlx5_intr_callback_unregister(sh->txpp.intr_handle,
mlx5_txpp_interrupt_handler, sh);
- sh->txpp.intr_handle.fd = 0;
+ rte_intr_instance_free(sh->txpp.intr_handle);
}
/* Attach interrupt handler and fires first request to Rearm Queue. */
@@ -787,13 +787,22 @@ mlx5_txpp_start_service(struct mlx5_dev_ctx_shared *sh)
rte_errno = errno;
return -rte_errno;
}
- memset(&sh->txpp.intr_handle, 0, sizeof(sh->txpp.intr_handle));
+ sh->txpp.intr_handle =
+ rte_intr_instance_alloc(RTE_INTR_INSTANCE_F_SHARED);
+ if (sh->txpp.intr_handle == NULL) {
+ DRV_LOG(ERR, "Fail to allocate intr_handle");
+ return -ENOMEM;
+ }
fd = mlx5_os_get_devx_channel_fd(sh->txpp.echan);
- sh->txpp.intr_handle.fd = fd;
- sh->txpp.intr_handle.type = RTE_INTR_HANDLE_EXT;
- if (rte_intr_callback_register(&sh->txpp.intr_handle,
+ if (rte_intr_fd_set(sh->txpp.intr_handle, fd))
+ return -rte_errno;
+
+ if (rte_intr_type_set(sh->txpp.intr_handle, RTE_INTR_HANDLE_EXT))
+ return -rte_errno;
+
+ if (rte_intr_callback_register(sh->txpp.intr_handle,
mlx5_txpp_interrupt_handler, sh)) {
- sh->txpp.intr_handle.fd = 0;
+ rte_intr_fd_set(sh->txpp.intr_handle, 0);
DRV_LOG(ERR, "Failed to register CQE interrupt %d.", rte_errno);
return -rte_errno;
}
diff --git a/drivers/net/netvsc/hn_ethdev.c b/drivers/net/netvsc/hn_ethdev.c
index 9c4ae80e7e..8a950403ac 100644
--- a/drivers/net/netvsc/hn_ethdev.c
+++ b/drivers/net/netvsc/hn_ethdev.c
@@ -133,9 +133,9 @@ eth_dev_vmbus_allocate(struct rte_vmbus_device *dev, size_t private_data_size)
eth_dev->device = &dev->device;
/* interrupt is simulated */
- dev->intr_handle.type = RTE_INTR_HANDLE_EXT;
+ rte_intr_type_set(dev->intr_handle, RTE_INTR_HANDLE_EXT);
eth_dev->data->dev_flags |= RTE_ETH_DEV_INTR_LSC;
- eth_dev->intr_handle = &dev->intr_handle;
+ eth_dev->intr_handle = dev->intr_handle;
return eth_dev;
}
diff --git a/drivers/net/nfp/nfp_common.c b/drivers/net/nfp/nfp_common.c
index 3ea697c544..f8978e803a 100644
--- a/drivers/net/nfp/nfp_common.c
+++ b/drivers/net/nfp/nfp_common.c
@@ -307,24 +307,21 @@ nfp_configure_rx_interrupt(struct rte_eth_dev *dev,
struct nfp_net_hw *hw;
int i;
- if (!intr_handle->intr_vec) {
- intr_handle->intr_vec =
- rte_zmalloc("intr_vec",
- dev->data->nb_rx_queues * sizeof(int), 0);
- if (!intr_handle->intr_vec) {
- PMD_INIT_LOG(ERR, "Failed to allocate %d rx_queues"
- " intr_vec", dev->data->nb_rx_queues);
- return -ENOMEM;
- }
+ if (rte_intr_vec_list_alloc(intr_handle, "intr_vec",
+ dev->data->nb_rx_queues)) {
+ PMD_INIT_LOG(ERR, "Failed to allocate %d rx_queues"
+ " intr_vec", dev->data->nb_rx_queues);
+ return -ENOMEM;
}
hw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private);
- if (intr_handle->type == RTE_INTR_HANDLE_UIO) {
+ if (rte_intr_type_get(intr_handle) == RTE_INTR_HANDLE_UIO) {
PMD_INIT_LOG(INFO, "VF: enabling RX interrupt with UIO");
/* UIO just supports one queue and no LSC*/
nn_cfg_writeb(hw, NFP_NET_CFG_RXR_VEC(0), 0);
- intr_handle->intr_vec[0] = 0;
+ if (rte_intr_vec_list_index_set(intr_handle, 0, 0))
+ return -1;
} else {
PMD_INIT_LOG(INFO, "VF: enabling RX interrupt with VFIO");
for (i = 0; i < dev->data->nb_rx_queues; i++) {
@@ -333,9 +330,12 @@ nfp_configure_rx_interrupt(struct rte_eth_dev *dev,
* efd interrupts
*/
nn_cfg_writeb(hw, NFP_NET_CFG_RXR_VEC(i), i + 1);
- intr_handle->intr_vec[i] = i + 1;
+ if (rte_intr_vec_list_index_set(intr_handle, i,
+ i + 1))
+ return -1;
PMD_INIT_LOG(DEBUG, "intr_vec[%d]= %d", i,
- intr_handle->intr_vec[i]);
+ rte_intr_vec_list_index_get(intr_handle,
+ i));
}
}
@@ -804,7 +804,8 @@ nfp_rx_queue_intr_enable(struct rte_eth_dev *dev, uint16_t queue_id)
hw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private);
pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- if (pci_dev->intr_handle.type != RTE_INTR_HANDLE_UIO)
+ if (rte_intr_type_get(pci_dev->intr_handle) !=
+ RTE_INTR_HANDLE_UIO)
base = 1;
/* Make sure all updates are written before un-masking */
@@ -824,7 +825,8 @@ nfp_rx_queue_intr_disable(struct rte_eth_dev *dev, uint16_t queue_id)
hw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private);
pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- if (pci_dev->intr_handle.type != RTE_INTR_HANDLE_UIO)
+ if (rte_intr_type_get(pci_dev->intr_handle) !=
+ RTE_INTR_HANDLE_UIO)
base = 1;
/* Make sure all updates are written before un-masking */
@@ -874,7 +876,7 @@ nfp_net_irq_unmask(struct rte_eth_dev *dev)
if (hw->ctrl & NFP_NET_CFG_CTRL_MSIXAUTO) {
/* If MSI-X auto-masking is used, clear the entry */
rte_wmb();
- rte_intr_ack(&pci_dev->intr_handle);
+ rte_intr_ack(pci_dev->intr_handle);
} else {
/* Make sure all updates are written before un-masking */
rte_wmb();
diff --git a/drivers/net/nfp/nfp_ethdev.c b/drivers/net/nfp/nfp_ethdev.c
index e08e594b04..830863af28 100644
--- a/drivers/net/nfp/nfp_ethdev.c
+++ b/drivers/net/nfp/nfp_ethdev.c
@@ -82,7 +82,7 @@ static int
nfp_net_start(struct rte_eth_dev *dev)
{
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
uint32_t new_ctrl, update = 0;
struct nfp_net_hw *hw;
struct nfp_pf_dev *pf_dev;
@@ -109,12 +109,13 @@ nfp_net_start(struct rte_eth_dev *dev)
"with NFP multiport PF");
return -EINVAL;
}
- if (intr_handle->type == RTE_INTR_HANDLE_UIO) {
+ if (rte_intr_type_get(intr_handle) ==
+ RTE_INTR_HANDLE_UIO) {
/*
* Better not to share LSC with RX interrupts.
* Unregistering LSC interrupt handler
*/
- rte_intr_callback_unregister(&pci_dev->intr_handle,
+ rte_intr_callback_unregister(pci_dev->intr_handle,
nfp_net_dev_interrupt_handler, (void *)dev);
if (dev->data->nb_rx_queues > 1) {
@@ -333,10 +334,10 @@ nfp_net_close(struct rte_eth_dev *dev)
nfp_cpp_free(pf_dev->cpp);
rte_free(pf_dev);
- rte_intr_disable(&pci_dev->intr_handle);
+ rte_intr_disable(pci_dev->intr_handle);
/* unregister callback func from eal lib */
- rte_intr_callback_unregister(&pci_dev->intr_handle,
+ rte_intr_callback_unregister(pci_dev->intr_handle,
nfp_net_dev_interrupt_handler,
(void *)dev);
@@ -579,7 +580,7 @@ nfp_net_init(struct rte_eth_dev *eth_dev)
if (rte_eal_process_type() == RTE_PROC_PRIMARY) {
/* Registering LSC interrupt handler */
- rte_intr_callback_register(&pci_dev->intr_handle,
+ rte_intr_callback_register(pci_dev->intr_handle,
nfp_net_dev_interrupt_handler,
(void *)eth_dev);
/* Telling the firmware about the LSC interrupt entry */
diff --git a/drivers/net/nfp/nfp_ethdev_vf.c b/drivers/net/nfp/nfp_ethdev_vf.c
index 817fe64dbc..5557a1e002 100644
--- a/drivers/net/nfp/nfp_ethdev_vf.c
+++ b/drivers/net/nfp/nfp_ethdev_vf.c
@@ -51,7 +51,7 @@ static int
nfp_netvf_start(struct rte_eth_dev *dev)
{
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
uint32_t new_ctrl, update = 0;
struct nfp_net_hw *hw;
struct rte_eth_conf *dev_conf;
@@ -71,12 +71,13 @@ nfp_netvf_start(struct rte_eth_dev *dev)
/* check and configure queue intr-vector mapping */
if (dev->data->dev_conf.intr_conf.rxq != 0) {
- if (intr_handle->type == RTE_INTR_HANDLE_UIO) {
+ if (rte_intr_type_get(intr_handle) ==
+ RTE_INTR_HANDLE_UIO) {
/*
* Better not to share LSC with RX interrupts.
* Unregistering LSC interrupt handler
*/
- rte_intr_callback_unregister(&pci_dev->intr_handle,
+ rte_intr_callback_unregister(pci_dev->intr_handle,
nfp_net_dev_interrupt_handler, (void *)dev);
if (dev->data->nb_rx_queues > 1) {
@@ -225,10 +226,10 @@ nfp_netvf_close(struct rte_eth_dev *dev)
nfp_net_reset_rx_queue(this_rx_q);
}
- rte_intr_disable(&pci_dev->intr_handle);
+ rte_intr_disable(pci_dev->intr_handle);
/* unregister callback func from eal lib */
- rte_intr_callback_unregister(&pci_dev->intr_handle,
+ rte_intr_callback_unregister(pci_dev->intr_handle,
nfp_net_dev_interrupt_handler,
(void *)dev);
@@ -445,7 +446,7 @@ nfp_netvf_init(struct rte_eth_dev *eth_dev)
if (rte_eal_process_type() == RTE_PROC_PRIMARY) {
/* Registering LSC interrupt handler */
- rte_intr_callback_register(&pci_dev->intr_handle,
+ rte_intr_callback_register(pci_dev->intr_handle,
nfp_net_dev_interrupt_handler,
(void *)eth_dev);
/* Telling the firmware about the LSC interrupt entry */
diff --git a/drivers/net/ngbe/ngbe_ethdev.c b/drivers/net/ngbe/ngbe_ethdev.c
index fc76b84b5b..466e089b34 100644
--- a/drivers/net/ngbe/ngbe_ethdev.c
+++ b/drivers/net/ngbe/ngbe_ethdev.c
@@ -129,7 +129,7 @@ eth_ngbe_dev_init(struct rte_eth_dev *eth_dev, void *init_params __rte_unused)
{
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
struct ngbe_hw *hw = ngbe_dev_hw(eth_dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
const struct rte_memzone *mz;
uint32_t ctrl_ext;
int err;
@@ -334,7 +334,7 @@ ngbe_dev_start(struct rte_eth_dev *dev)
{
struct ngbe_hw *hw = ngbe_dev_hw(dev);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
uint32_t intr_vector = 0;
int err;
bool link_up = false, negotiate = false;
@@ -372,11 +372,9 @@ ngbe_dev_start(struct rte_eth_dev *dev)
return -1;
}
- if (rte_intr_dp_is_en(intr_handle) && intr_handle->intr_vec == NULL) {
- intr_handle->intr_vec =
- rte_zmalloc("intr_vec",
- dev->data->nb_rx_queues * sizeof(int), 0);
- if (intr_handle->intr_vec == NULL) {
+ if (rte_intr_dp_is_en(intr_handle)) {
+ if (rte_intr_vec_list_alloc(intr_handle, "intr_vec",
+ dev->data->nb_rx_queues)) {
PMD_INIT_LOG(ERR,
"Failed to allocate %d rx_queues intr_vec",
dev->data->nb_rx_queues);
@@ -503,7 +501,7 @@ ngbe_dev_stop(struct rte_eth_dev *dev)
struct rte_eth_link link;
struct ngbe_hw *hw = ngbe_dev_hw(dev);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
if (hw->adapter_stopped)
return 0;
@@ -540,10 +538,7 @@ ngbe_dev_stop(struct rte_eth_dev *dev)
/* Clean datapath event and queue/vec mapping */
rte_intr_efd_disable(intr_handle);
- if (intr_handle->intr_vec != NULL) {
- rte_free(intr_handle->intr_vec);
- intr_handle->intr_vec = NULL;
- }
+ rte_intr_vec_list_free(intr_handle);
hw->adapter_stopped = true;
dev->data->dev_started = 0;
@@ -559,7 +554,7 @@ ngbe_dev_close(struct rte_eth_dev *dev)
{
struct ngbe_hw *hw = ngbe_dev_hw(dev);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
int retries = 0;
int ret;
@@ -1093,7 +1088,7 @@ static void
ngbe_configure_msix(struct rte_eth_dev *dev)
{
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
struct ngbe_hw *hw = ngbe_dev_hw(dev);
uint32_t queue_id, base = NGBE_MISC_VEC_ID;
uint32_t vec = NGBE_MISC_VEC_ID;
@@ -1128,8 +1123,10 @@ ngbe_configure_msix(struct rte_eth_dev *dev)
queue_id++) {
/* by default, 1:1 mapping */
ngbe_set_ivar_map(hw, 0, queue_id, vec);
- intr_handle->intr_vec[queue_id] = vec;
- if (vec < base + intr_handle->nb_efd - 1)
+ rte_intr_vec_list_index_set(intr_handle,
+ queue_id, vec);
+ if (vec < base + rte_intr_nb_efd_get(intr_handle)
+ - 1)
vec++;
}
diff --git a/drivers/net/octeontx2/otx2_ethdev_irq.c b/drivers/net/octeontx2/otx2_ethdev_irq.c
index b121488faf..cc573bb2e8 100644
--- a/drivers/net/octeontx2/otx2_ethdev_irq.c
+++ b/drivers/net/octeontx2/otx2_ethdev_irq.c
@@ -34,7 +34,7 @@ static int
nix_lf_register_err_irq(struct rte_eth_dev *eth_dev)
{
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
- struct rte_intr_handle *handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *handle = pci_dev->intr_handle;
struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
int rc, vec;
@@ -54,7 +54,7 @@ static void
nix_lf_unregister_err_irq(struct rte_eth_dev *eth_dev)
{
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
- struct rte_intr_handle *handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *handle = pci_dev->intr_handle;
struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
int vec;
@@ -90,7 +90,7 @@ static int
nix_lf_register_ras_irq(struct rte_eth_dev *eth_dev)
{
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
- struct rte_intr_handle *handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *handle = pci_dev->intr_handle;
struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
int rc, vec;
@@ -110,7 +110,7 @@ static void
nix_lf_unregister_ras_irq(struct rte_eth_dev *eth_dev)
{
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
- struct rte_intr_handle *handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *handle = pci_dev->intr_handle;
struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
int vec;
@@ -263,7 +263,7 @@ int
oxt2_nix_register_queue_irqs(struct rte_eth_dev *eth_dev)
{
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
- struct rte_intr_handle *handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *handle = pci_dev->intr_handle;
struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
int vec, q, sqs, rqs, qs, rc = 0;
@@ -308,7 +308,7 @@ void
oxt2_nix_unregister_queue_irqs(struct rte_eth_dev *eth_dev)
{
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
- struct rte_intr_handle *handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *handle = pci_dev->intr_handle;
struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
int vec, q;
@@ -332,7 +332,7 @@ int
oxt2_nix_register_cq_irqs(struct rte_eth_dev *eth_dev)
{
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
- struct rte_intr_handle *handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *handle = pci_dev->intr_handle;
struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
uint8_t rc = 0, vec, q;
@@ -362,20 +362,19 @@ oxt2_nix_register_cq_irqs(struct rte_eth_dev *eth_dev)
return rc;
}
- if (!handle->intr_vec) {
- handle->intr_vec = rte_zmalloc("intr_vec",
- dev->configured_cints *
- sizeof(int), 0);
- if (!handle->intr_vec) {
- otx2_err("Failed to allocate %d rx intr_vec",
- dev->configured_cints);
- return -ENOMEM;
- }
+ rc = rte_intr_vec_list_alloc(handle, "intr_vec",
+ dev->configured_cints);
+ if (rc) {
+ otx2_err("Fail to allocate intr vec list, "
+ "rc=%d", rc);
+ return rc;
}
/* VFIO vector zero is resereved for misc interrupt so
* doing required adjustment. (b13bfab4cd)
*/
- handle->intr_vec[q] = RTE_INTR_VEC_RXTX_OFFSET + vec;
+ if (rte_intr_vec_list_index_set(handle, q,
+ RTE_INTR_VEC_RXTX_OFFSET + vec))
+ return -1;
/* Configure CQE interrupt coalescing parameters */
otx2_write64(((CQ_CQE_THRESH_DEFAULT) |
@@ -395,7 +394,7 @@ void
oxt2_nix_unregister_cq_irqs(struct rte_eth_dev *eth_dev)
{
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
- struct rte_intr_handle *handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *handle = pci_dev->intr_handle;
struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
int vec, q;
diff --git a/drivers/net/qede/qede_ethdev.c b/drivers/net/qede/qede_ethdev.c
index c907d7fd83..8ca00e7f6c 100644
--- a/drivers/net/qede/qede_ethdev.c
+++ b/drivers/net/qede/qede_ethdev.c
@@ -1569,17 +1569,17 @@ static int qede_dev_close(struct rte_eth_dev *eth_dev)
qdev->ops->common->slowpath_stop(edev);
qdev->ops->common->remove(edev);
- rte_intr_disable(&pci_dev->intr_handle);
+ rte_intr_disable(pci_dev->intr_handle);
- switch (pci_dev->intr_handle.type) {
+ switch (rte_intr_type_get(pci_dev->intr_handle)) {
case RTE_INTR_HANDLE_UIO_INTX:
case RTE_INTR_HANDLE_VFIO_LEGACY:
- rte_intr_callback_unregister(&pci_dev->intr_handle,
+ rte_intr_callback_unregister(pci_dev->intr_handle,
qede_interrupt_handler_intx,
(void *)eth_dev);
break;
default:
- rte_intr_callback_unregister(&pci_dev->intr_handle,
+ rte_intr_callback_unregister(pci_dev->intr_handle,
qede_interrupt_handler,
(void *)eth_dev);
}
@@ -2554,22 +2554,22 @@ static int qede_common_dev_init(struct rte_eth_dev *eth_dev, bool is_vf)
}
qede_update_pf_params(edev);
- switch (pci_dev->intr_handle.type) {
+ switch (rte_intr_type_get(pci_dev->intr_handle)) {
case RTE_INTR_HANDLE_UIO_INTX:
case RTE_INTR_HANDLE_VFIO_LEGACY:
int_mode = ECORE_INT_MODE_INTA;
- rte_intr_callback_register(&pci_dev->intr_handle,
+ rte_intr_callback_register(pci_dev->intr_handle,
qede_interrupt_handler_intx,
(void *)eth_dev);
break;
default:
int_mode = ECORE_INT_MODE_MSIX;
- rte_intr_callback_register(&pci_dev->intr_handle,
+ rte_intr_callback_register(pci_dev->intr_handle,
qede_interrupt_handler,
(void *)eth_dev);
}
- if (rte_intr_enable(&pci_dev->intr_handle)) {
+ if (rte_intr_enable(pci_dev->intr_handle)) {
DP_ERR(edev, "rte_intr_enable() failed\n");
rc = -ENODEV;
goto err;
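
The qede hunks above show how drivers that branch on the interrupt type now
query it through rte_intr_type_get() rather than dereferencing the handle. A
small sketch of that selection, with hypothetical example_handler_* callbacks:

    #include <rte_interrupts.h>

    /* Hypothetical handlers standing in for the INTx/MSI-X callbacks. */
    static void example_handler_intx(void *arg) { (void)arg; }
    static void example_handler_msix(void *arg) { (void)arg; }

    static int
    example_register_isr(struct rte_intr_handle *intr_handle, void *cb_arg)
    {
    	/* the handle type is read through a getter instead of ->type */
    	switch (rte_intr_type_get(intr_handle)) {
    	case RTE_INTR_HANDLE_UIO_INTX:
    	case RTE_INTR_HANDLE_VFIO_LEGACY:
    		return rte_intr_callback_register(intr_handle,
    						  example_handler_intx, cb_arg);
    	default:
    		return rte_intr_callback_register(intr_handle,
    						  example_handler_msix, cb_arg);
    	}
    }
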
diff --git a/drivers/net/sfc/sfc_intr.c b/drivers/net/sfc/sfc_intr.c
index 69414fd839..ab67aa9237 100644
--- a/drivers/net/sfc/sfc_intr.c
+++ b/drivers/net/sfc/sfc_intr.c
@@ -79,7 +79,7 @@ sfc_intr_line_handler(void *cb_arg)
if (qmask & (1 << sa->mgmt_evq_index))
sfc_intr_handle_mgmt_evq(sa);
- if (rte_intr_ack(&pci_dev->intr_handle) != 0)
+ if (rte_intr_ack(pci_dev->intr_handle) != 0)
sfc_err(sa, "cannot reenable interrupts");
sfc_log_init(sa, "done");
@@ -123,7 +123,7 @@ sfc_intr_message_handler(void *cb_arg)
sfc_intr_handle_mgmt_evq(sa);
- if (rte_intr_ack(&pci_dev->intr_handle) != 0)
+ if (rte_intr_ack(pci_dev->intr_handle) != 0)
sfc_err(sa, "cannot reenable interrupts");
sfc_log_init(sa, "done");
@@ -159,7 +159,7 @@ sfc_intr_start(struct sfc_adapter *sa)
goto fail_intr_init;
pci_dev = RTE_ETH_DEV_TO_PCI(sa->eth_dev);
- intr_handle = &pci_dev->intr_handle;
+ intr_handle = pci_dev->intr_handle;
if (intr->handler != NULL) {
if (intr->rxq_intr && rte_intr_cap_multiple(intr_handle)) {
@@ -171,16 +171,15 @@ sfc_intr_start(struct sfc_adapter *sa)
goto fail_rte_intr_efd_enable;
}
if (rte_intr_dp_is_en(intr_handle)) {
- intr_handle->intr_vec =
- rte_calloc("intr_vec",
- sa->eth_dev->data->nb_rx_queues, sizeof(int),
- 0);
- if (intr_handle->intr_vec == NULL) {
+ if (rte_intr_vec_list_alloc(intr_handle,
+ "intr_vec",
+ sa->eth_dev->data->nb_rx_queues)) {
sfc_err(sa,
"Failed to allocate %d rx_queues intr_vec",
sa->eth_dev->data->nb_rx_queues);
goto fail_intr_vector_alloc;
}
+
}
sfc_log_init(sa, "rte_intr_callback_register");
@@ -214,16 +213,17 @@ sfc_intr_start(struct sfc_adapter *sa)
efx_intr_enable(sa->nic);
}
- sfc_log_init(sa, "done type=%u max_intr=%d nb_efd=%u vec=%p",
- intr_handle->type, intr_handle->max_intr,
- intr_handle->nb_efd, intr_handle->intr_vec);
+ sfc_log_init(sa, "done type=%u max_intr=%d nb_efd=%u",
+ rte_intr_type_get(intr_handle),
+ rte_intr_max_intr_get(intr_handle),
+ rte_intr_nb_efd_get(intr_handle));
return 0;
fail_rte_intr_enable:
rte_intr_callback_unregister(intr_handle, intr->handler, (void *)sa);
fail_rte_intr_cb_reg:
- rte_free(intr_handle->intr_vec);
+ rte_intr_vec_list_free(intr_handle);
fail_intr_vector_alloc:
rte_intr_efd_disable(intr_handle);
@@ -250,9 +250,9 @@ sfc_intr_stop(struct sfc_adapter *sa)
efx_intr_disable(sa->nic);
- intr_handle = &pci_dev->intr_handle;
+ intr_handle = pci_dev->intr_handle;
- rte_free(intr_handle->intr_vec);
+ rte_intr_vec_list_free(intr_handle);
rte_intr_efd_disable(intr_handle);
if (rte_intr_disable(intr_handle) != 0)
@@ -322,7 +322,7 @@ sfc_intr_attach(struct sfc_adapter *sa)
sfc_log_init(sa, "entry");
- switch (pci_dev->intr_handle.type) {
+ switch (rte_intr_type_get(pci_dev->intr_handle)) {
#ifdef RTE_EXEC_ENV_LINUX
case RTE_INTR_HANDLE_UIO_INTX:
case RTE_INTR_HANDLE_VFIO_LEGACY:
diff --git a/drivers/net/tap/rte_eth_tap.c b/drivers/net/tap/rte_eth_tap.c
index ef3399ee0f..a9a7658147 100644
--- a/drivers/net/tap/rte_eth_tap.c
+++ b/drivers/net/tap/rte_eth_tap.c
@@ -1663,7 +1663,8 @@ tap_dev_intr_handler(void *cb_arg)
struct rte_eth_dev *dev = cb_arg;
struct pmd_internals *pmd = dev->data->dev_private;
- tap_nl_recv(pmd->intr_handle.fd, tap_nl_msg_handler, dev);
+ tap_nl_recv(rte_intr_fd_get(pmd->intr_handle),
+ tap_nl_msg_handler, dev);
}
static int
@@ -1674,22 +1675,22 @@ tap_lsc_intr_handle_set(struct rte_eth_dev *dev, int set)
/* In any case, disable interrupt if the conf is no longer there. */
if (!dev->data->dev_conf.intr_conf.lsc) {
- if (pmd->intr_handle.fd != -1) {
+ if (rte_intr_fd_get(pmd->intr_handle) != -1)
goto clean;
- }
+
return 0;
}
if (set) {
- pmd->intr_handle.fd = tap_nl_init(RTMGRP_LINK);
- if (unlikely(pmd->intr_handle.fd == -1))
+ rte_intr_fd_set(pmd->intr_handle, tap_nl_init(RTMGRP_LINK));
+ if (unlikely(rte_intr_fd_get(pmd->intr_handle) == -1))
return -EBADF;
return rte_intr_callback_register(
- &pmd->intr_handle, tap_dev_intr_handler, dev);
+ pmd->intr_handle, tap_dev_intr_handler, dev);
}
clean:
do {
- ret = rte_intr_callback_unregister(&pmd->intr_handle,
+ ret = rte_intr_callback_unregister(pmd->intr_handle,
tap_dev_intr_handler, dev);
if (ret >= 0) {
break;
@@ -1702,8 +1703,8 @@ tap_lsc_intr_handle_set(struct rte_eth_dev *dev, int set)
}
} while (true);
- tap_nl_final(pmd->intr_handle.fd);
- pmd->intr_handle.fd = -1;
+ tap_nl_final(rte_intr_fd_get(pmd->intr_handle));
+ rte_intr_fd_set(pmd->intr_handle, -1);
return 0;
}
@@ -1918,6 +1919,13 @@ eth_dev_tap_create(struct rte_vdev_device *vdev, const char *tap_name,
goto error_exit;
}
+ /* Allocate interrupt instance */
+ pmd->intr_handle = rte_intr_instance_alloc(RTE_INTR_INSTANCE_F_SHARED);
+ if (pmd->intr_handle == NULL) {
+ TAP_LOG(ERR, "Failed to allocate intr handle");
+ goto error_exit;
+ }
+
/* Setup some default values */
data = dev->data;
data->dev_private = pmd;
@@ -1935,9 +1943,9 @@ eth_dev_tap_create(struct rte_vdev_device *vdev, const char *tap_name,
dev->rx_pkt_burst = pmd_rx_burst;
dev->tx_pkt_burst = pmd_tx_burst;
- pmd->intr_handle.type = RTE_INTR_HANDLE_EXT;
- pmd->intr_handle.fd = -1;
- dev->intr_handle = &pmd->intr_handle;
+ rte_intr_type_set(pmd->intr_handle, RTE_INTR_HANDLE_EXT);
+ rte_intr_fd_set(pmd->intr_handle, -1);
+ dev->intr_handle = pmd->intr_handle;
/* Presetup the fds to -1 as being not valid */
for (i = 0; i < RTE_PMD_TAP_MAX_QUEUES; i++) {
@@ -2088,6 +2096,7 @@ eth_dev_tap_create(struct rte_vdev_device *vdev, const char *tap_name,
/* mac_addrs must not be freed alone because part of dev_private */
dev->data->mac_addrs = NULL;
rte_eth_dev_release_port(dev);
+ rte_intr_instance_free(pmd->intr_handle);
error_exit_nodev:
TAP_LOG(ERR, "%s Unable to initialize %s",
diff --git a/drivers/net/tap/rte_eth_tap.h b/drivers/net/tap/rte_eth_tap.h
index a98ea11a33..996021e424 100644
--- a/drivers/net/tap/rte_eth_tap.h
+++ b/drivers/net/tap/rte_eth_tap.h
@@ -89,7 +89,7 @@ struct pmd_internals {
LIST_HEAD(tap_implicit_flows, rte_flow) implicit_flows;
struct rx_queue rxq[RTE_PMD_TAP_MAX_QUEUES]; /* List of RX queues */
struct tx_queue txq[RTE_PMD_TAP_MAX_QUEUES]; /* List of TX queues */
- struct rte_intr_handle intr_handle; /* LSC interrupt handle. */
+ struct rte_intr_handle *intr_handle; /* LSC interrupt handle. */
int ka_fd; /* keep-alive file descriptor */
struct rte_mempool *gso_ctx_mp; /* Mempool for GSO packets */
};
diff --git a/drivers/net/tap/tap_intr.c b/drivers/net/tap/tap_intr.c
index 1cacc15d9f..56c343acea 100644
--- a/drivers/net/tap/tap_intr.c
+++ b/drivers/net/tap/tap_intr.c
@@ -29,12 +29,13 @@ static void
tap_rx_intr_vec_uninstall(struct rte_eth_dev *dev)
{
struct pmd_internals *pmd = dev->data->dev_private;
- struct rte_intr_handle *intr_handle = &pmd->intr_handle;
+ struct rte_intr_handle *intr_handle = pmd->intr_handle;
rte_intr_free_epoll_fd(intr_handle);
- free(intr_handle->intr_vec);
- intr_handle->intr_vec = NULL;
- intr_handle->nb_efd = 0;
+ rte_intr_vec_list_free(intr_handle);
+ rte_intr_nb_efd_set(intr_handle, 0);
+
+ rte_intr_instance_free(intr_handle);
}
/**
@@ -52,15 +53,15 @@ tap_rx_intr_vec_install(struct rte_eth_dev *dev)
struct pmd_internals *pmd = dev->data->dev_private;
struct pmd_process_private *process_private = dev->process_private;
unsigned int rxqs_n = pmd->dev->data->nb_rx_queues;
- struct rte_intr_handle *intr_handle = &pmd->intr_handle;
+ struct rte_intr_handle *intr_handle = pmd->intr_handle;
unsigned int n = RTE_MIN(rxqs_n, (uint32_t)RTE_MAX_RXTX_INTR_VEC_ID);
unsigned int i;
unsigned int count = 0;
if (!dev->data->dev_conf.intr_conf.rxq)
return 0;
- intr_handle->intr_vec = malloc(sizeof(int) * rxqs_n);
- if (intr_handle->intr_vec == NULL) {
+
+ if (rte_intr_vec_list_alloc(intr_handle, NULL, rxqs_n)) {
rte_errno = ENOMEM;
TAP_LOG(ERR,
"failed to allocate memory for interrupt vector,"
@@ -73,19 +74,23 @@ tap_rx_intr_vec_install(struct rte_eth_dev *dev)
/* Skip queues that cannot request interrupts. */
if (!rxq || process_private->rxq_fds[i] == -1) {
/* Use invalid intr_vec[] index to disable entry. */
- intr_handle->intr_vec[i] =
- RTE_INTR_VEC_RXTX_OFFSET +
- RTE_MAX_RXTX_INTR_VEC_ID;
+ if (rte_intr_vec_list_index_set(intr_handle, i,
+ RTE_INTR_VEC_RXTX_OFFSET + RTE_MAX_RXTX_INTR_VEC_ID))
+ return -rte_errno;
continue;
}
- intr_handle->intr_vec[i] = RTE_INTR_VEC_RXTX_OFFSET + count;
- intr_handle->efds[count] = process_private->rxq_fds[i];
+ if (rte_intr_vec_list_index_set(intr_handle, i,
+ RTE_INTR_VEC_RXTX_OFFSET + count))
+ return -rte_errno;
+ if (rte_intr_efds_index_set(intr_handle, count,
+ process_private->rxq_fds[i]))
+ return -rte_errno;
count++;
}
if (!count)
tap_rx_intr_vec_uninstall(dev);
- else
- intr_handle->nb_efd = count;
+ else if (rte_intr_nb_efd_set(intr_handle, count))
+ return -rte_errno;
return 0;
}
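The sequence above is the same Rx-vector setup most PMDs in this patch now share. A condensed sketch of it, using only the accessors introduced by this series (nb_rxq and rxq_fd[] are placeholder names for the driver's own queue count and per-queue event fds, not symbols from the tree):

#include <errno.h>
#include <rte_errno.h>
#include <rte_interrupts.h>

/* Sketch only: map nb_rxq Rx queues 1:1 onto interrupt vectors. */
static int
install_rxq_vectors(struct rte_intr_handle *handle, int nb_rxq,
		const int *rxq_fd)
{
	int i;

	if (rte_intr_vec_list_alloc(handle, "intr_vec", nb_rxq))
		return -ENOMEM;

	for (i = 0; i < nb_rxq; i++) {
		if (rte_intr_vec_list_index_set(handle, i,
				RTE_INTR_VEC_RXTX_OFFSET + i) ||
		    rte_intr_efds_index_set(handle, i, rxq_fd[i]))
			goto fail;
	}

	if (rte_intr_nb_efd_set(handle, nb_rxq))
		goto fail;

	return 0;
fail:
	rte_intr_vec_list_free(handle);
	return -rte_errno;
}

On teardown the driver calls rte_intr_vec_list_free() and rte_intr_nb_efd_set(handle, 0), exactly as tap_rx_intr_vec_uninstall() does above.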
diff --git a/drivers/net/thunderx/nicvf_ethdev.c b/drivers/net/thunderx/nicvf_ethdev.c
index 762647e3b6..fc334cf734 100644
--- a/drivers/net/thunderx/nicvf_ethdev.c
+++ b/drivers/net/thunderx/nicvf_ethdev.c
@@ -1858,6 +1858,8 @@ nicvf_dev_close(struct rte_eth_dev *dev)
nicvf_periodic_alarm_stop(nicvf_vf_interrupt, nic->snicvf[i]);
}
+ rte_intr_instance_free(nic->intr_handle);
+
return 0;
}
@@ -2157,6 +2159,14 @@ nicvf_eth_dev_init(struct rte_eth_dev *eth_dev)
goto fail;
}
+ /* Allocate interrupt instance */
+ nic->intr_handle = rte_intr_instance_alloc(RTE_INTR_INSTANCE_F_SHARED);
+ if (nic->intr_handle == NULL) {
+ PMD_INIT_LOG(ERR, "Failed to allocate intr handle");
+ ret = -ENODEV;
+ goto fail;
+ }
+
nicvf_disable_all_interrupts(nic);
ret = nicvf_periodic_alarm_start(nicvf_interrupt, eth_dev);
diff --git a/drivers/net/thunderx/nicvf_struct.h b/drivers/net/thunderx/nicvf_struct.h
index 0ca207d0dd..c7ea13313e 100644
--- a/drivers/net/thunderx/nicvf_struct.h
+++ b/drivers/net/thunderx/nicvf_struct.h
@@ -100,7 +100,7 @@ struct nicvf {
uint16_t subsystem_vendor_id;
struct nicvf_rbdr *rbdr;
struct nicvf_rss_reta_info rss_info;
- struct rte_intr_handle intr_handle;
+ struct rte_intr_handle *intr_handle;
uint8_t cpi_alg;
uint16_t mtu;
int skip_bytes;
diff --git a/drivers/net/txgbe/txgbe_ethdev.c b/drivers/net/txgbe/txgbe_ethdev.c
index 4b3b703029..169272ded5 100644
--- a/drivers/net/txgbe/txgbe_ethdev.c
+++ b/drivers/net/txgbe/txgbe_ethdev.c
@@ -548,7 +548,7 @@ eth_txgbe_dev_init(struct rte_eth_dev *eth_dev, void *init_params __rte_unused)
struct txgbe_dcb_config *dcb_config = TXGBE_DEV_DCB_CONFIG(eth_dev);
struct txgbe_filter_info *filter_info = TXGBE_DEV_FILTER(eth_dev);
struct txgbe_bw_conf *bw_conf = TXGBE_DEV_BW_CONF(eth_dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
const struct rte_memzone *mz;
uint32_t ctrl_ext;
uint16_t csum;
@@ -1620,7 +1620,7 @@ txgbe_dev_start(struct rte_eth_dev *dev)
struct txgbe_hw_stats *hw_stats = TXGBE_DEV_STATS(dev);
struct txgbe_vf_info *vfinfo = *TXGBE_DEV_VFDATA(dev);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
uint32_t intr_vector = 0;
int err;
bool link_up = false, negotiate = 0;
@@ -1670,17 +1670,14 @@ txgbe_dev_start(struct rte_eth_dev *dev)
return -1;
}
- if (rte_intr_dp_is_en(intr_handle) && !intr_handle->intr_vec) {
- intr_handle->intr_vec =
- rte_zmalloc("intr_vec",
- dev->data->nb_rx_queues * sizeof(int), 0);
- if (intr_handle->intr_vec == NULL) {
+ if (rte_intr_dp_is_en(intr_handle)) {
+ if (rte_intr_vec_list_alloc(intr_handle, "intr_vec",
+ dev->data->nb_rx_queues)) {
PMD_INIT_LOG(ERR, "Failed to allocate %d rx_queues"
" intr_vec", dev->data->nb_rx_queues);
return -ENOMEM;
}
}
-
/* confiugre msix for sleep until rx interrupt */
txgbe_configure_msix(dev);
@@ -1861,7 +1858,7 @@ txgbe_dev_stop(struct rte_eth_dev *dev)
struct txgbe_hw *hw = TXGBE_DEV_HW(dev);
struct txgbe_vf_info *vfinfo = *TXGBE_DEV_VFDATA(dev);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
int vf;
struct txgbe_tm_conf *tm_conf = TXGBE_DEV_TM_CONF(dev);
@@ -1911,10 +1908,7 @@ txgbe_dev_stop(struct rte_eth_dev *dev)
/* Clean datapath event and queue/vec mapping */
rte_intr_efd_disable(intr_handle);
- if (intr_handle->intr_vec != NULL) {
- rte_free(intr_handle->intr_vec);
- intr_handle->intr_vec = NULL;
- }
+ rte_intr_vec_list_free(intr_handle);
/* reset hierarchy commit */
tm_conf->committed = false;
@@ -1977,7 +1971,7 @@ txgbe_dev_close(struct rte_eth_dev *dev)
{
struct txgbe_hw *hw = TXGBE_DEV_HW(dev);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
int retries = 0;
int ret;
@@ -2936,8 +2930,8 @@ txgbe_dev_interrupt_get_status(struct rte_eth_dev *dev,
struct txgbe_hw *hw = TXGBE_DEV_HW(dev);
struct txgbe_interrupt *intr = TXGBE_DEV_INTR(dev);
- if (intr_handle->type != RTE_INTR_HANDLE_UIO &&
- intr_handle->type != RTE_INTR_HANDLE_VFIO_MSIX)
+ if (rte_intr_type_get(intr_handle) != RTE_INTR_HANDLE_UIO &&
+ rte_intr_type_get(intr_handle) != RTE_INTR_HANDLE_VFIO_MSIX)
wr32(hw, TXGBE_PX_INTA, 1);
/* clear all cause mask */
@@ -3103,7 +3097,7 @@ txgbe_dev_interrupt_delayed_handler(void *param)
{
struct rte_eth_dev *dev = (struct rte_eth_dev *)param;
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
struct txgbe_interrupt *intr = TXGBE_DEV_INTR(dev);
struct txgbe_hw *hw = TXGBE_DEV_HW(dev);
uint32_t eicr;
@@ -3623,7 +3617,7 @@ static int
txgbe_dev_rx_queue_intr_enable(struct rte_eth_dev *dev, uint16_t queue_id)
{
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
uint32_t mask;
struct txgbe_hw *hw = TXGBE_DEV_HW(dev);
@@ -3705,7 +3699,7 @@ static void
txgbe_configure_msix(struct rte_eth_dev *dev)
{
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
struct txgbe_hw *hw = TXGBE_DEV_HW(dev);
uint32_t queue_id, base = TXGBE_MISC_VEC_ID;
uint32_t vec = TXGBE_MISC_VEC_ID;
@@ -3739,8 +3733,10 @@ txgbe_configure_msix(struct rte_eth_dev *dev)
queue_id++) {
/* by default, 1:1 mapping */
txgbe_set_ivar_map(hw, 0, queue_id, vec);
- intr_handle->intr_vec[queue_id] = vec;
- if (vec < base + intr_handle->nb_efd - 1)
+ rte_intr_vec_list_index_set(intr_handle,
+ queue_id, vec);
+ if (vec < base + rte_intr_nb_efd_get(intr_handle)
+ - 1)
vec++;
}
diff --git a/drivers/net/txgbe/txgbe_ethdev_vf.c b/drivers/net/txgbe/txgbe_ethdev_vf.c
index 283b52e8f3..4dda55b0c2 100644
--- a/drivers/net/txgbe/txgbe_ethdev_vf.c
+++ b/drivers/net/txgbe/txgbe_ethdev_vf.c
@@ -166,7 +166,7 @@ eth_txgbevf_dev_init(struct rte_eth_dev *eth_dev)
int err;
uint32_t tc, tcs;
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
struct txgbe_hw *hw = TXGBE_DEV_HW(eth_dev);
struct txgbe_vfta *shadow_vfta = TXGBE_DEV_VFTA(eth_dev);
struct txgbe_hwstrip *hwstrip = TXGBE_DEV_HWSTRIP(eth_dev);
@@ -608,7 +608,7 @@ txgbevf_dev_start(struct rte_eth_dev *dev)
struct txgbe_hw *hw = TXGBE_DEV_HW(dev);
uint32_t intr_vector = 0;
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
int err, mask = 0;
@@ -669,11 +669,9 @@ txgbevf_dev_start(struct rte_eth_dev *dev)
return -1;
}
- if (rte_intr_dp_is_en(intr_handle) && !intr_handle->intr_vec) {
- intr_handle->intr_vec =
- rte_zmalloc("intr_vec",
- dev->data->nb_rx_queues * sizeof(int), 0);
- if (intr_handle->intr_vec == NULL) {
+ if (rte_intr_dp_is_en(intr_handle)) {
+ if (rte_intr_vec_list_alloc(intr_handle, "intr_vec",
+ dev->data->nb_rx_queues)) {
PMD_INIT_LOG(ERR, "Failed to allocate %d rx_queues"
" intr_vec", dev->data->nb_rx_queues);
return -ENOMEM;
@@ -712,7 +710,7 @@ txgbevf_dev_stop(struct rte_eth_dev *dev)
struct txgbe_hw *hw = TXGBE_DEV_HW(dev);
struct txgbe_adapter *adapter = TXGBE_DEV_ADAPTER(dev);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
if (hw->adapter_stopped)
return 0;
@@ -739,10 +737,7 @@ txgbevf_dev_stop(struct rte_eth_dev *dev)
/* Clean datapath event and queue/vec mapping */
rte_intr_efd_disable(intr_handle);
- if (intr_handle->intr_vec != NULL) {
- rte_free(intr_handle->intr_vec);
- intr_handle->intr_vec = NULL;
- }
+ rte_intr_vec_list_free(intr_handle);
adapter->rss_reta_updated = 0;
hw->dev_start = false;
@@ -755,7 +750,7 @@ txgbevf_dev_close(struct rte_eth_dev *dev)
{
struct txgbe_hw *hw = TXGBE_DEV_HW(dev);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
int ret;
PMD_INIT_FUNC_TRACE();
@@ -916,7 +911,7 @@ static int
txgbevf_dev_rx_queue_intr_enable(struct rte_eth_dev *dev, uint16_t queue_id)
{
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
struct txgbe_interrupt *intr = TXGBE_DEV_INTR(dev);
struct txgbe_hw *hw = TXGBE_DEV_HW(dev);
uint32_t vec = TXGBE_MISC_VEC_ID;
@@ -938,7 +933,7 @@ txgbevf_dev_rx_queue_intr_disable(struct rte_eth_dev *dev, uint16_t queue_id)
struct txgbe_interrupt *intr = TXGBE_DEV_INTR(dev);
struct txgbe_hw *hw = TXGBE_DEV_HW(dev);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
uint32_t vec = TXGBE_MISC_VEC_ID;
if (rte_intr_allow_others(intr_handle))
@@ -978,7 +973,7 @@ static void
txgbevf_configure_msix(struct rte_eth_dev *dev)
{
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
struct txgbe_hw *hw = TXGBE_DEV_HW(dev);
uint32_t q_idx;
uint32_t vector_idx = TXGBE_MISC_VEC_ID;
@@ -1004,8 +999,10 @@ txgbevf_configure_msix(struct rte_eth_dev *dev)
* as TXGBE_VF_MAXMSIVECOTR = 1
*/
txgbevf_set_ivar_map(hw, 0, q_idx, vector_idx);
- intr_handle->intr_vec[q_idx] = vector_idx;
- if (vector_idx < base + intr_handle->nb_efd - 1)
+ rte_intr_vec_list_index_set(intr_handle, q_idx,
+ vector_idx);
+ if (vector_idx < base + rte_intr_nb_efd_get(intr_handle)
+ - 1)
vector_idx++;
}
diff --git a/drivers/net/vhost/rte_eth_vhost.c b/drivers/net/vhost/rte_eth_vhost.c
index beb4b8de2d..5111304ff9 100644
--- a/drivers/net/vhost/rte_eth_vhost.c
+++ b/drivers/net/vhost/rte_eth_vhost.c
@@ -523,40 +523,43 @@ static int
eth_vhost_update_intr(struct rte_eth_dev *eth_dev, uint16_t rxq_idx)
{
struct rte_intr_handle *handle = eth_dev->intr_handle;
- struct rte_epoll_event rev;
+ struct rte_epoll_event rev, *elist;
int epfd, ret;
- if (!handle)
+ if (handle == NULL)
return 0;
- if (handle->efds[rxq_idx] == handle->elist[rxq_idx].fd)
+ elist = rte_intr_elist_index_get(handle, rxq_idx);
+ if (rte_intr_efds_index_get(handle, rxq_idx) == elist->fd)
return 0;
VHOST_LOG(INFO, "kickfd for rxq-%d was changed, updating handler.\n",
rxq_idx);
- if (handle->elist[rxq_idx].fd != -1)
+ if (elist->fd != -1)
VHOST_LOG(ERR, "Unexpected previous kickfd value (Got %d, expected -1).\n",
- handle->elist[rxq_idx].fd);
+ elist->fd);
/*
* First remove invalid epoll event, and then install
* the new one. May be solved with a proper API in the
* future.
*/
- epfd = handle->elist[rxq_idx].epfd;
- rev = handle->elist[rxq_idx];
+ epfd = elist->epfd;
+ rev = *elist;
ret = rte_epoll_ctl(epfd, EPOLL_CTL_DEL, rev.fd,
- &handle->elist[rxq_idx]);
+ elist);
if (ret) {
VHOST_LOG(ERR, "Delete epoll event failed.\n");
return ret;
}
- rev.fd = handle->efds[rxq_idx];
- handle->elist[rxq_idx] = rev;
- ret = rte_epoll_ctl(epfd, EPOLL_CTL_ADD, rev.fd,
- &handle->elist[rxq_idx]);
+ rev.fd = rte_intr_efds_index_get(handle, rxq_idx);
+ if (rte_intr_elist_index_set(handle, rxq_idx, rev))
+ return -rte_errno;
+
+ elist = rte_intr_elist_index_get(handle, rxq_idx);
+ ret = rte_epoll_ctl(epfd, EPOLL_CTL_ADD, rev.fd, elist);
if (ret) {
VHOST_LOG(ERR, "Add epoll event failed.\n");
return ret;
@@ -634,12 +637,10 @@ eth_vhost_uninstall_intr(struct rte_eth_dev *dev)
{
struct rte_intr_handle *intr_handle = dev->intr_handle;
- if (intr_handle) {
- if (intr_handle->intr_vec)
- free(intr_handle->intr_vec);
- free(intr_handle);
+ if (intr_handle != NULL) {
+ rte_intr_vec_list_free(intr_handle);
+ rte_intr_instance_free(intr_handle);
}
-
dev->intr_handle = NULL;
}
@@ -653,32 +654,31 @@ eth_vhost_install_intr(struct rte_eth_dev *dev)
int ret;
/* uninstall firstly if we are reconnecting */
- if (dev->intr_handle)
+ if (dev->intr_handle != NULL)
eth_vhost_uninstall_intr(dev);
- dev->intr_handle = malloc(sizeof(*dev->intr_handle));
- if (!dev->intr_handle) {
+ dev->intr_handle = rte_intr_instance_alloc(RTE_INTR_INSTANCE_F_PRIVATE);
+ if (dev->intr_handle == NULL) {
VHOST_LOG(ERR, "Fail to allocate intr_handle\n");
return -ENOMEM;
}
- memset(dev->intr_handle, 0, sizeof(*dev->intr_handle));
-
- dev->intr_handle->efd_counter_size = sizeof(uint64_t);
+ if (rte_intr_efd_counter_size_set(dev->intr_handle, sizeof(uint64_t)))
+ return -rte_errno;
- dev->intr_handle->intr_vec =
- malloc(nb_rxq * sizeof(dev->intr_handle->intr_vec[0]));
-
- if (!dev->intr_handle->intr_vec) {
+ if (rte_intr_vec_list_alloc(dev->intr_handle, NULL, nb_rxq)) {
VHOST_LOG(ERR,
"Failed to allocate memory for interrupt vector\n");
- free(dev->intr_handle);
+ rte_intr_instance_free(dev->intr_handle);
return -ENOMEM;
}
+
VHOST_LOG(INFO, "Prepare intr vec\n");
for (i = 0; i < nb_rxq; i++) {
- dev->intr_handle->intr_vec[i] = RTE_INTR_VEC_RXTX_OFFSET + i;
- dev->intr_handle->efds[i] = -1;
+ if (rte_intr_vec_list_index_set(dev->intr_handle, i, RTE_INTR_VEC_RXTX_OFFSET + i))
+ return -rte_errno;
+ if (rte_intr_efds_index_set(dev->intr_handle, i, -1))
+ return -rte_errno;
vq = dev->data->rx_queues[i];
if (!vq) {
VHOST_LOG(INFO, "rxq-%d not setup yet, skip!\n", i);
@@ -697,13 +697,20 @@ eth_vhost_install_intr(struct rte_eth_dev *dev)
"rxq-%d's kickfd is invalid, skip!\n", i);
continue;
}
- dev->intr_handle->efds[i] = vring.kickfd;
+
+ if (rte_intr_efds_index_set(dev->intr_handle, i, vring.kickfd))
+ continue;
VHOST_LOG(INFO, "Installed intr vec for rxq-%d\n", i);
}
- dev->intr_handle->nb_efd = nb_rxq;
- dev->intr_handle->max_intr = nb_rxq + 1;
- dev->intr_handle->type = RTE_INTR_HANDLE_VDEV;
+ if (rte_intr_nb_efd_set(dev->intr_handle, nb_rxq))
+ return -rte_errno;
+
+ if (rte_intr_max_intr_set(dev->intr_handle, nb_rxq + 1))
+ return -rte_errno;
+
+ if (rte_intr_type_set(dev->intr_handle, RTE_INTR_HANDLE_VDEV))
+ return -rte_errno;
return 0;
}
@@ -908,7 +915,10 @@ vring_conf_update(int vid, struct rte_eth_dev *eth_dev, uint16_t vring_id)
vring_id);
return ret;
}
- eth_dev->intr_handle->efds[rx_idx] = vring.kickfd;
+
+ if (rte_intr_efds_index_set(eth_dev->intr_handle, rx_idx,
+ vring.kickfd))
+ return -rte_errno;
vq = eth_dev->data->rx_queues[rx_idx];
if (!vq) {
diff --git a/drivers/net/virtio/virtio_ethdev.c b/drivers/net/virtio/virtio_ethdev.c
index 94120b3490..26de006c77 100644
--- a/drivers/net/virtio/virtio_ethdev.c
+++ b/drivers/net/virtio/virtio_ethdev.c
@@ -731,8 +731,7 @@ virtio_dev_close(struct rte_eth_dev *dev)
if (intr_conf->lsc || intr_conf->rxq) {
virtio_intr_disable(dev);
rte_intr_efd_disable(dev->intr_handle);
- rte_free(dev->intr_handle->intr_vec);
- dev->intr_handle->intr_vec = NULL;
+ rte_intr_vec_list_free(dev->intr_handle);
}
virtio_reset(hw);
@@ -1643,7 +1642,9 @@ virtio_queues_bind_intr(struct rte_eth_dev *dev)
PMD_INIT_LOG(INFO, "queue/interrupt binding");
for (i = 0; i < dev->data->nb_rx_queues; ++i) {
- dev->intr_handle->intr_vec[i] = i + 1;
+ if (rte_intr_vec_list_index_set(dev->intr_handle, i,
+ i + 1))
+ return -rte_errno;
if (VIRTIO_OPS(hw)->set_queue_irq(hw, hw->vqs[i * 2], i + 1) ==
VIRTIO_MSI_NO_VECTOR) {
PMD_DRV_LOG(ERR, "failed to set queue vector");
@@ -1682,15 +1683,11 @@ virtio_configure_intr(struct rte_eth_dev *dev)
return -1;
}
- if (!dev->intr_handle->intr_vec) {
- dev->intr_handle->intr_vec =
- rte_zmalloc("intr_vec",
- hw->max_queue_pairs * sizeof(int), 0);
- if (!dev->intr_handle->intr_vec) {
- PMD_INIT_LOG(ERR, "Failed to allocate %u rxq vectors",
- hw->max_queue_pairs);
- return -ENOMEM;
- }
+ if (rte_intr_vec_list_alloc(dev->intr_handle, "intr_vec",
+ hw->max_queue_pairs)) {
+ PMD_INIT_LOG(ERR, "Failed to allocate %u rxq vectors",
+ hw->max_queue_pairs);
+ return -ENOMEM;
}
if (dev->data->dev_flags & RTE_ETH_DEV_INTR_LSC) {
diff --git a/drivers/net/virtio/virtio_user/virtio_user_dev.c b/drivers/net/virtio/virtio_user/virtio_user_dev.c
index 6a6145583b..35aa76b1ff 100644
--- a/drivers/net/virtio/virtio_user/virtio_user_dev.c
+++ b/drivers/net/virtio/virtio_user/virtio_user_dev.c
@@ -406,23 +406,37 @@ virtio_user_fill_intr_handle(struct virtio_user_dev *dev)
uint32_t i;
struct rte_eth_dev *eth_dev = &rte_eth_devices[dev->hw.port_id];
- if (!eth_dev->intr_handle) {
- eth_dev->intr_handle = malloc(sizeof(*eth_dev->intr_handle));
- if (!eth_dev->intr_handle) {
+ if (eth_dev->intr_handle == NULL) {
+ eth_dev->intr_handle =
+ rte_intr_instance_alloc(RTE_INTR_INSTANCE_F_PRIVATE);
+ if (eth_dev->intr_handle == NULL) {
PMD_DRV_LOG(ERR, "(%s) failed to allocate intr_handle", dev->path);
return -1;
}
- memset(eth_dev->intr_handle, 0, sizeof(*eth_dev->intr_handle));
}
- for (i = 0; i < dev->max_queue_pairs; ++i)
- eth_dev->intr_handle->efds[i] = dev->callfds[2 * i];
- eth_dev->intr_handle->nb_efd = dev->max_queue_pairs;
- eth_dev->intr_handle->max_intr = dev->max_queue_pairs + 1;
- eth_dev->intr_handle->type = RTE_INTR_HANDLE_VDEV;
+ for (i = 0; i < dev->max_queue_pairs; ++i) {
+ if (rte_intr_efds_index_set(eth_dev->intr_handle, i,
+ dev->callfds[i]))
+ return -rte_errno;
+ }
+
+ if (rte_intr_nb_efd_set(eth_dev->intr_handle, dev->max_queue_pairs))
+ return -rte_errno;
+
+ if (rte_intr_max_intr_set(eth_dev->intr_handle,
+ dev->max_queue_pairs + 1))
+ return -rte_errno;
+
+ if (rte_intr_type_set(eth_dev->intr_handle, RTE_INTR_HANDLE_VDEV))
+ return -rte_errno;
+
/* For virtio vdev, no need to read counter for clean */
- eth_dev->intr_handle->efd_counter_size = 0;
- eth_dev->intr_handle->fd = dev->ops->get_intr_fd(dev);
+ if (rte_intr_efd_counter_size_set(eth_dev->intr_handle, 0))
+ return -rte_errno;
+
+ if (rte_intr_fd_set(eth_dev->intr_handle, dev->ops->get_intr_fd(dev)))
+ return -rte_errno;
return 0;
}
@@ -656,10 +670,8 @@ virtio_user_dev_uninit(struct virtio_user_dev *dev)
{
struct rte_eth_dev *eth_dev = &rte_eth_devices[dev->hw.port_id];
- if (eth_dev->intr_handle) {
- free(eth_dev->intr_handle);
- eth_dev->intr_handle = NULL;
- }
+ rte_intr_instance_free(eth_dev->intr_handle);
+ eth_dev->intr_handle = NULL;
virtio_user_stop_device(dev);
@@ -962,7 +974,7 @@ virtio_user_dev_delayed_disconnect_handler(void *param)
return;
}
PMD_DRV_LOG(DEBUG, "Unregistering intr fd: %d",
- eth_dev->intr_handle->fd);
+ rte_intr_fd_get(eth_dev->intr_handle));
if (rte_intr_callback_unregister(eth_dev->intr_handle,
virtio_interrupt_handler,
eth_dev) != 1)
@@ -972,10 +984,11 @@ virtio_user_dev_delayed_disconnect_handler(void *param)
if (dev->ops->server_disconnect)
dev->ops->server_disconnect(dev);
- eth_dev->intr_handle->fd = dev->ops->get_intr_fd(dev);
+ rte_intr_fd_set(eth_dev->intr_handle,
+ dev->ops->get_intr_fd(dev));
PMD_DRV_LOG(DEBUG, "Registering intr fd: %d",
- eth_dev->intr_handle->fd);
+ rte_intr_fd_get(eth_dev->intr_handle));
if (rte_intr_callback_register(eth_dev->intr_handle,
virtio_interrupt_handler,
@@ -996,16 +1009,17 @@ virtio_user_dev_delayed_intr_reconfig_handler(void *param)
struct rte_eth_dev *eth_dev = &rte_eth_devices[dev->hw.port_id];
PMD_DRV_LOG(DEBUG, "Unregistering intr fd: %d",
- eth_dev->intr_handle->fd);
+ rte_intr_fd_get(eth_dev->intr_handle));
if (rte_intr_callback_unregister(eth_dev->intr_handle,
virtio_interrupt_handler,
eth_dev) != 1)
PMD_DRV_LOG(ERR, "interrupt unregister failed");
- eth_dev->intr_handle->fd = dev->ops->get_intr_fd(dev);
+ rte_intr_fd_set(eth_dev->intr_handle, dev->ops->get_intr_fd(dev));
- PMD_DRV_LOG(DEBUG, "Registering intr fd: %d", eth_dev->intr_handle->fd);
+ PMD_DRV_LOG(DEBUG, "Registering intr fd: %d",
+ rte_intr_fd_get(eth_dev->intr_handle));
if (rte_intr_callback_register(eth_dev->intr_handle,
virtio_interrupt_handler, eth_dev))
diff --git a/drivers/net/vmxnet3/vmxnet3_ethdev.c b/drivers/net/vmxnet3/vmxnet3_ethdev.c
index 26d9edf531..d1ef1cad08 100644
--- a/drivers/net/vmxnet3/vmxnet3_ethdev.c
+++ b/drivers/net/vmxnet3/vmxnet3_ethdev.c
@@ -619,11 +619,9 @@ vmxnet3_configure_msix(struct rte_eth_dev *dev)
return -1;
}
- if (rte_intr_dp_is_en(intr_handle) && !intr_handle->intr_vec) {
- intr_handle->intr_vec =
- rte_zmalloc("intr_vec",
- dev->data->nb_rx_queues * sizeof(int), 0);
- if (intr_handle->intr_vec == NULL) {
+ if (rte_intr_dp_is_en(intr_handle)) {
+ if (rte_intr_vec_list_alloc(intr_handle, "intr_vec",
+ dev->data->nb_rx_queues)) {
PMD_INIT_LOG(ERR, "Failed to allocate %d Rx queues intr_vec",
dev->data->nb_rx_queues);
rte_intr_efd_disable(intr_handle);
@@ -634,8 +632,7 @@ vmxnet3_configure_msix(struct rte_eth_dev *dev)
if (!rte_intr_allow_others(intr_handle) &&
dev->data->dev_conf.intr_conf.lsc != 0) {
PMD_INIT_LOG(ERR, "not enough intr vector to support both Rx interrupt and LSC");
- rte_free(intr_handle->intr_vec);
- intr_handle->intr_vec = NULL;
+ rte_intr_vec_list_free(intr_handle);
rte_intr_efd_disable(intr_handle);
return -1;
}
@@ -643,17 +640,19 @@ vmxnet3_configure_msix(struct rte_eth_dev *dev)
/* if we cannot allocate one MSI-X vector per queue, don't enable
* interrupt mode.
*/
- if (hw->intr.num_intrs != (intr_handle->nb_efd + 1)) {
+ if (hw->intr.num_intrs !=
+ (rte_intr_nb_efd_get(intr_handle) + 1)) {
PMD_INIT_LOG(ERR, "Device configured with %d Rx intr vectors, expecting %d",
- hw->intr.num_intrs, intr_handle->nb_efd + 1);
- rte_free(intr_handle->intr_vec);
- intr_handle->intr_vec = NULL;
+ hw->intr.num_intrs,
+ rte_intr_nb_efd_get(intr_handle) + 1);
+ rte_intr_vec_list_free(intr_handle);
rte_intr_efd_disable(intr_handle);
return -1;
}
for (i = 0; i < dev->data->nb_rx_queues; i++)
- intr_handle->intr_vec[i] = i + 1;
+ if (rte_intr_vec_list_index_set(intr_handle, i, i + 1))
+ return -rte_errno;
for (i = 0; i < hw->intr.num_intrs; i++)
hw->intr.mod_levels[i] = UPT1_IML_ADAPTIVE;
@@ -801,7 +800,9 @@ vmxnet3_setup_driver_shared(struct rte_eth_dev *dev)
if (hw->intr.lsc_only)
tqd->conf.intrIdx = 1;
else
- tqd->conf.intrIdx = intr_handle->intr_vec[i];
+ tqd->conf.intrIdx =
+ rte_intr_vec_list_index_get(intr_handle,
+ i);
tqd->status.stopped = TRUE;
tqd->status.error = 0;
memset(&tqd->stats, 0, sizeof(tqd->stats));
@@ -824,7 +825,9 @@ vmxnet3_setup_driver_shared(struct rte_eth_dev *dev)
if (hw->intr.lsc_only)
rqd->conf.intrIdx = 1;
else
- rqd->conf.intrIdx = intr_handle->intr_vec[i];
+ rqd->conf.intrIdx =
+ rte_intr_vec_list_index_get(intr_handle,
+ i);
rqd->status.stopped = TRUE;
rqd->status.error = 0;
memset(&rqd->stats, 0, sizeof(rqd->stats));
@@ -1021,10 +1024,7 @@ vmxnet3_dev_stop(struct rte_eth_dev *dev)
/* Clean datapath event and queue/vector mapping */
rte_intr_efd_disable(intr_handle);
- if (intr_handle->intr_vec != NULL) {
- rte_free(intr_handle->intr_vec);
- intr_handle->intr_vec = NULL;
- }
+ rte_intr_vec_list_free(intr_handle);
/* quiesce the device first */
VMXNET3_WRITE_BAR1_REG(hw, VMXNET3_REG_CMD, VMXNET3_CMD_QUIESCE_DEV);
@@ -1670,7 +1670,9 @@ vmxnet3_dev_rx_queue_intr_enable(struct rte_eth_dev *dev, uint16_t queue_id)
{
struct vmxnet3_hw *hw = dev->data->dev_private;
- vmxnet3_enable_intr(hw, dev->intr_handle->intr_vec[queue_id]);
+ vmxnet3_enable_intr(hw,
+ rte_intr_vec_list_index_get(dev->intr_handle,
+ queue_id));
return 0;
}
@@ -1680,7 +1682,8 @@ vmxnet3_dev_rx_queue_intr_disable(struct rte_eth_dev *dev, uint16_t queue_id)
{
struct vmxnet3_hw *hw = dev->data->dev_private;
- vmxnet3_disable_intr(hw, dev->intr_handle->intr_vec[queue_id]);
+ vmxnet3_disable_intr(hw,
+ rte_intr_vec_list_index_get(dev->intr_handle, queue_id));
return 0;
}
diff --git a/drivers/raw/ifpga/ifpga_rawdev.c b/drivers/raw/ifpga/ifpga_rawdev.c
index 76e6a8530b..8d9db585a4 100644
--- a/drivers/raw/ifpga/ifpga_rawdev.c
+++ b/drivers/raw/ifpga/ifpga_rawdev.c
@@ -73,7 +73,7 @@ static pthread_t ifpga_monitor_start_thread;
#define IFPGA_MAX_IRQ 12
/* 0 for FME interrupt, others are reserved for AFU irq */
-static struct rte_intr_handle ifpga_irq_handle[IFPGA_MAX_IRQ];
+static struct rte_intr_handle *ifpga_irq_handle[IFPGA_MAX_IRQ];
static struct ifpga_rawdev *
ifpga_rawdev_allocate(struct rte_rawdev *rawdev);
@@ -1345,17 +1345,22 @@ ifpga_unregister_msix_irq(enum ifpga_irq_type type,
int vec_start, rte_intr_callback_fn handler, void *arg)
{
struct rte_intr_handle *intr_handle;
+ int rc, i;
if (type == IFPGA_FME_IRQ)
- intr_handle = &ifpga_irq_handle[0];
+ intr_handle = ifpga_irq_handle[0];
else if (type == IFPGA_AFU_IRQ)
- intr_handle = &ifpga_irq_handle[vec_start + 1];
+ intr_handle = ifpga_irq_handle[vec_start + 1];
else
return 0;
rte_intr_efd_disable(intr_handle);
- return rte_intr_callback_unregister(intr_handle, handler, arg);
+ rc = rte_intr_callback_unregister(intr_handle, handler, arg);
+
+ for (i = 0; i < IFPGA_MAX_IRQ; i++)
+ rte_intr_instance_free(ifpga_irq_handle[i]);
+ return rc;
}
int
@@ -1369,6 +1374,14 @@ ifpga_register_msix_irq(struct rte_rawdev *dev, int port_id,
struct opae_adapter *adapter;
struct opae_manager *mgr;
struct opae_accelerator *acc;
+ int *intr_efds = NULL, nb_intr, i;
+
+ for (i = 0; i < IFPGA_MAX_IRQ; i++) {
+ ifpga_irq_handle[i] =
+ rte_intr_instance_alloc(RTE_INTR_INSTANCE_F_PRIVATE);
+ if (ifpga_irq_handle[i] == NULL)
+ return -ENOMEM;
+ }
adapter = ifpga_rawdev_get_priv(dev);
if (!adapter)
@@ -1379,29 +1392,33 @@ ifpga_register_msix_irq(struct rte_rawdev *dev, int port_id,
return -ENODEV;
if (type == IFPGA_FME_IRQ) {
- intr_handle = &ifpga_irq_handle[0];
+ intr_handle = ifpga_irq_handle[0];
count = 1;
} else if (type == IFPGA_AFU_IRQ) {
- intr_handle = &ifpga_irq_handle[vec_start + 1];
+ intr_handle = ifpga_irq_handle[vec_start + 1];
} else {
return -EINVAL;
}
- intr_handle->type = RTE_INTR_HANDLE_VFIO_MSIX;
+ if (rte_intr_type_set(intr_handle, RTE_INTR_HANDLE_VFIO_MSIX))
+ return -rte_errno;
ret = rte_intr_efd_enable(intr_handle, count);
if (ret)
return -ENODEV;
- intr_handle->fd = intr_handle->efds[0];
+ if (rte_intr_fd_set(intr_handle,
+ rte_intr_efds_index_get(intr_handle, 0)))
+ return -rte_errno;
IFPGA_RAWDEV_PMD_DEBUG("register %s irq, vfio_fd=%d, fd=%d\n",
- name, intr_handle->vfio_dev_fd,
- intr_handle->fd);
+ name, rte_intr_dev_fd_get(intr_handle),
+ rte_intr_fd_get(intr_handle));
if (type == IFPGA_FME_IRQ) {
struct fpga_fme_err_irq_set err_irq_set;
- err_irq_set.evtfd = intr_handle->efds[0];
+ err_irq_set.evtfd = rte_intr_efds_index_get(intr_handle,
+ 0);
ret = opae_manager_ifpga_set_err_irq(mgr, &err_irq_set);
if (ret)
@@ -1411,20 +1428,33 @@ ifpga_register_msix_irq(struct rte_rawdev *dev, int port_id,
if (!acc)
return -EINVAL;
- ret = opae_acc_set_irq(acc, vec_start, count,
- intr_handle->efds);
- if (ret)
+ nb_intr = rte_intr_nb_intr_get(intr_handle);
+
+ intr_efds = calloc(nb_intr, sizeof(int));
+ if (!intr_efds)
+ return -ENOMEM;
+
+ for (i = 0; i < nb_intr; i++)
+ intr_efds[i] = rte_intr_efds_index_get(intr_handle, i);
+
+ ret = opae_acc_set_irq(acc, vec_start, count, intr_efds);
+ if (ret) {
+ free(intr_efds);
return -EINVAL;
+ }
}
/* register interrupt handler using DPDK API */
ret = rte_intr_callback_register(intr_handle,
handler, (void *)arg);
- if (ret)
+ if (ret) {
+ free(intr_efds);
return -EINVAL;
+ }
IFPGA_RAWDEV_PMD_INFO("success register %s interrupt\n", name);
+ free(intr_efds);
return 0;
}
@@ -1491,7 +1521,7 @@ ifpga_rawdev_create(struct rte_pci_device *pci_dev,
data->bus = pci_dev->addr.bus;
data->devid = pci_dev->addr.devid;
data->function = pci_dev->addr.function;
- data->vfio_dev_fd = pci_dev->intr_handle.vfio_dev_fd;
+ data->vfio_dev_fd = rte_intr_dev_fd_get(pci_dev->intr_handle);
adapter = rawdev->dev_private;
/* create a opae_adapter based on above device data */
diff --git a/drivers/raw/ntb/ntb.c b/drivers/raw/ntb/ntb.c
index 78cfcd79f7..46ac02e5ab 100644
--- a/drivers/raw/ntb/ntb.c
+++ b/drivers/raw/ntb/ntb.c
@@ -1044,13 +1044,10 @@ ntb_dev_close(struct rte_rawdev *dev)
ntb_queue_release(dev, i);
hw->queue_pairs = 0;
- intr_handle = &hw->pci_dev->intr_handle;
+ intr_handle = hw->pci_dev->intr_handle;
/* Clean datapath event and vec mapping */
rte_intr_efd_disable(intr_handle);
- if (intr_handle->intr_vec) {
- rte_free(intr_handle->intr_vec);
- intr_handle->intr_vec = NULL;
- }
+ rte_intr_vec_list_free(intr_handle);
/* Disable uio intr before callback unregister */
rte_intr_disable(intr_handle);
@@ -1402,7 +1399,7 @@ ntb_init_hw(struct rte_rawdev *dev, struct rte_pci_device *pci_dev)
/* Init doorbell. */
hw->db_valid_mask = RTE_LEN2MASK(hw->db_cnt, uint64_t);
- intr_handle = &pci_dev->intr_handle;
+ intr_handle = pci_dev->intr_handle;
/* Register callback func to eal lib */
rte_intr_callback_register(intr_handle,
ntb_dev_intr_handler, dev);
diff --git a/drivers/regex/octeontx2/otx2_regexdev_hw_access.c b/drivers/regex/octeontx2/otx2_regexdev_hw_access.c
index 620d5c9122..f8031d0f72 100644
--- a/drivers/regex/octeontx2/otx2_regexdev_hw_access.c
+++ b/drivers/regex/octeontx2/otx2_regexdev_hw_access.c
@@ -31,7 +31,7 @@ ree_lf_err_intr_unregister(const struct rte_regexdev *dev, uint16_t msix_off,
uintptr_t base)
{
struct rte_pci_device *pci_dev = RTE_DEV_TO_PCI(dev->device);
- struct rte_intr_handle *handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *handle = pci_dev->intr_handle;
/* Disable error interrupts */
otx2_write64(~0ull, base + OTX2_REE_LF_MISC_INT_ENA_W1C);
@@ -61,7 +61,7 @@ ree_lf_err_intr_register(const struct rte_regexdev *dev, uint16_t msix_off,
uintptr_t base)
{
struct rte_pci_device *pci_dev = RTE_DEV_TO_PCI(dev->device);
- struct rte_intr_handle *handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *handle = pci_dev->intr_handle;
int ret;
/* Disable error interrupts */
diff --git a/drivers/vdpa/ifc/ifcvf_vdpa.c b/drivers/vdpa/ifc/ifcvf_vdpa.c
index 365da2a8b9..dd5251d382 100644
--- a/drivers/vdpa/ifc/ifcvf_vdpa.c
+++ b/drivers/vdpa/ifc/ifcvf_vdpa.c
@@ -162,7 +162,7 @@ ifcvf_vfio_setup(struct ifcvf_internal *internal)
if (rte_pci_map_device(dev))
goto err;
- internal->vfio_dev_fd = dev->intr_handle.vfio_dev_fd;
+ internal->vfio_dev_fd = rte_intr_dev_fd_get(dev->intr_handle);
for (i = 0; i < RTE_MIN(PCI_MAX_RESOURCE, IFCVF_PCI_MAX_RESOURCE);
i++) {
@@ -365,7 +365,8 @@ vdpa_enable_vfio_intr(struct ifcvf_internal *internal, bool m_rx)
irq_set->index = VFIO_PCI_MSIX_IRQ_INDEX;
irq_set->start = 0;
fd_ptr = (int *)&irq_set->data;
- fd_ptr[RTE_INTR_VEC_ZERO_OFFSET] = internal->pdev->intr_handle.fd;
+ fd_ptr[RTE_INTR_VEC_ZERO_OFFSET] =
+ rte_intr_fd_get(internal->pdev->intr_handle);
for (i = 0; i < nr_vring; i++)
internal->intr_fd[i] = -1;
diff --git a/drivers/vdpa/mlx5/mlx5_vdpa.c b/drivers/vdpa/mlx5/mlx5_vdpa.c
index 9a6f64797b..b9e84dd9bf 100644
--- a/drivers/vdpa/mlx5/mlx5_vdpa.c
+++ b/drivers/vdpa/mlx5/mlx5_vdpa.c
@@ -543,6 +543,12 @@ mlx5_vdpa_dev_probe(struct mlx5_common_device *cdev)
DRV_LOG(ERR, "Failed to allocate VAR %u.", errno);
goto error;
}
+ priv->err_intr_handle =
+ rte_intr_instance_alloc(RTE_INTR_INSTANCE_F_SHARED);
+ if (priv->err_intr_handle == NULL) {
+ DRV_LOG(ERR, "Fail to allocate intr_handle");
+ goto error;
+ }
priv->vdev = rte_vdpa_register_device(cdev->dev, &mlx5_vdpa_ops);
if (priv->vdev == NULL) {
DRV_LOG(ERR, "Failed to register vDPA device.");
@@ -561,6 +567,7 @@ mlx5_vdpa_dev_probe(struct mlx5_common_device *cdev)
if (priv) {
if (priv->var)
mlx5_glue->dv_free_var(priv->var);
+ rte_intr_instance_free(priv->err_intr_handle);
rte_free(priv);
}
return -rte_errno;
@@ -592,6 +599,7 @@ mlx5_vdpa_dev_remove(struct mlx5_common_device *cdev)
if (priv->vdev)
rte_vdpa_unregister_device(priv->vdev);
pthread_mutex_destroy(&priv->vq_config_lock);
+ rte_intr_instance_free(priv->err_intr_handle);
rte_free(priv);
}
return 0;
diff --git a/drivers/vdpa/mlx5/mlx5_vdpa.h b/drivers/vdpa/mlx5/mlx5_vdpa.h
index 5045fea773..cf4f384fa4 100644
--- a/drivers/vdpa/mlx5/mlx5_vdpa.h
+++ b/drivers/vdpa/mlx5/mlx5_vdpa.h
@@ -89,7 +89,7 @@ struct mlx5_vdpa_virtq {
void *buf;
uint32_t size;
} umems[3];
- struct rte_intr_handle intr_handle;
+ struct rte_intr_handle *intr_handle;
uint64_t err_time[3]; /* RDTSC time of recent errors. */
uint32_t n_retry;
struct mlx5_devx_virtio_q_couners_attr reset;
@@ -137,7 +137,7 @@ struct mlx5_vdpa_priv {
struct mlx5dv_devx_event_channel *eventc;
struct mlx5dv_devx_event_channel *err_chnl;
struct mlx5dv_devx_uar *uar;
- struct rte_intr_handle err_intr_handle;
+ struct rte_intr_handle *err_intr_handle;
struct mlx5_devx_obj *td;
struct mlx5_devx_obj *tiss[16]; /* TIS list for each LAG port. */
uint16_t nr_virtqs;
diff --git a/drivers/vdpa/mlx5/mlx5_vdpa_event.c b/drivers/vdpa/mlx5/mlx5_vdpa_event.c
index 19497597e6..042d22777f 100644
--- a/drivers/vdpa/mlx5/mlx5_vdpa_event.c
+++ b/drivers/vdpa/mlx5/mlx5_vdpa_event.c
@@ -411,12 +411,17 @@ mlx5_vdpa_err_event_setup(struct mlx5_vdpa_priv *priv)
DRV_LOG(ERR, "Failed to change device event channel FD.");
goto error;
}
- priv->err_intr_handle.fd = priv->err_chnl->fd;
- priv->err_intr_handle.type = RTE_INTR_HANDLE_EXT;
- if (rte_intr_callback_register(&priv->err_intr_handle,
+
+ if (rte_intr_fd_set(priv->err_intr_handle, priv->err_chnl->fd))
+ goto error;
+
+ if (rte_intr_type_set(priv->err_intr_handle, RTE_INTR_HANDLE_EXT))
+ goto error;
+
+ if (rte_intr_callback_register(priv->err_intr_handle,
mlx5_vdpa_err_interrupt_handler,
priv)) {
- priv->err_intr_handle.fd = 0;
+ rte_intr_fd_set(priv->err_intr_handle, 0);
DRV_LOG(ERR, "Failed to register error interrupt for device %d.",
priv->vid);
goto error;
@@ -436,20 +441,20 @@ mlx5_vdpa_err_event_unset(struct mlx5_vdpa_priv *priv)
int retries = MLX5_VDPA_INTR_RETRIES;
int ret = -EAGAIN;
- if (!priv->err_intr_handle.fd)
+ if (!rte_intr_fd_get(priv->err_intr_handle))
return;
while (retries-- && ret == -EAGAIN) {
- ret = rte_intr_callback_unregister(&priv->err_intr_handle,
+ ret = rte_intr_callback_unregister(priv->err_intr_handle,
mlx5_vdpa_err_interrupt_handler,
priv);
if (ret == -EAGAIN) {
DRV_LOG(DEBUG, "Try again to unregister fd %d "
"of error interrupt, retries = %d.",
- priv->err_intr_handle.fd, retries);
+ rte_intr_fd_get(priv->err_intr_handle),
+ retries);
rte_pause();
}
}
- memset(&priv->err_intr_handle, 0, sizeof(priv->err_intr_handle));
if (priv->err_chnl) {
#ifdef HAVE_IBV_DEVX_EVENT
union {
diff --git a/drivers/vdpa/mlx5/mlx5_vdpa_virtq.c b/drivers/vdpa/mlx5/mlx5_vdpa_virtq.c
index c5b357a83b..cb37ba097c 100644
--- a/drivers/vdpa/mlx5/mlx5_vdpa_virtq.c
+++ b/drivers/vdpa/mlx5/mlx5_vdpa_virtq.c
@@ -25,7 +25,8 @@ mlx5_vdpa_virtq_handler(void *cb_arg)
int nbytes;
do {
- nbytes = read(virtq->intr_handle.fd, &buf, 8);
+ nbytes = read(rte_intr_fd_get(virtq->intr_handle), &buf,
+ 8);
if (nbytes < 0) {
if (errno == EINTR ||
errno == EWOULDBLOCK ||
@@ -58,21 +59,23 @@ mlx5_vdpa_virtq_unset(struct mlx5_vdpa_virtq *virtq)
int retries = MLX5_VDPA_INTR_RETRIES;
int ret = -EAGAIN;
- if (virtq->intr_handle.fd != -1) {
+ if (rte_intr_fd_get(virtq->intr_handle) != -1) {
while (retries-- && ret == -EAGAIN) {
- ret = rte_intr_callback_unregister(&virtq->intr_handle,
+ ret = rte_intr_callback_unregister(virtq->intr_handle,
mlx5_vdpa_virtq_handler,
virtq);
if (ret == -EAGAIN) {
DRV_LOG(DEBUG, "Try again to unregister fd %d "
- "of virtq %d interrupt, retries = %d.",
- virtq->intr_handle.fd,
- (int)virtq->index, retries);
+ "of virtq %d interrupt, retries = %d.",
+ rte_intr_fd_get(virtq->intr_handle),
+ (int)virtq->index, retries);
+
usleep(MLX5_VDPA_INTR_RETRIES_USEC);
}
}
- virtq->intr_handle.fd = -1;
+ rte_intr_fd_set(virtq->intr_handle, -1);
}
+ rte_intr_instance_free(virtq->intr_handle);
if (virtq->virtq) {
ret = mlx5_vdpa_virtq_stop(virtq->priv, virtq->index);
if (ret)
@@ -337,21 +340,33 @@ mlx5_vdpa_virtq_setup(struct mlx5_vdpa_priv *priv, int index)
virtq->priv = priv;
rte_write32(virtq->index, priv->virtq_db_addr);
/* Setup doorbell mapping. */
- virtq->intr_handle.fd = vq.kickfd;
- if (virtq->intr_handle.fd == -1) {
+ virtq->intr_handle =
+ rte_intr_instance_alloc(RTE_INTR_INSTANCE_F_SHARED);
+ if (virtq->intr_handle == NULL) {
+ DRV_LOG(ERR, "Fail to allocate intr_handle");
+ goto error;
+ }
+
+ if (rte_intr_fd_set(virtq->intr_handle, vq.kickfd))
+ goto error;
+
+ if (rte_intr_fd_get(virtq->intr_handle) == -1) {
DRV_LOG(WARNING, "Virtq %d kickfd is invalid.", index);
} else {
- virtq->intr_handle.type = RTE_INTR_HANDLE_EXT;
- if (rte_intr_callback_register(&virtq->intr_handle,
+ if (rte_intr_type_set(virtq->intr_handle, RTE_INTR_HANDLE_EXT))
+ goto error;
+
+ if (rte_intr_callback_register(virtq->intr_handle,
mlx5_vdpa_virtq_handler,
virtq)) {
- virtq->intr_handle.fd = -1;
+ rte_intr_fd_set(virtq->intr_handle, -1);
DRV_LOG(ERR, "Failed to register virtq %d interrupt.",
index);
goto error;
} else {
DRV_LOG(DEBUG, "Register fd %d interrupt for virtq %d.",
- virtq->intr_handle.fd, index);
+ rte_intr_fd_get(virtq->intr_handle),
+ index);
}
}
/* Subscribe virtq error event. */
@@ -506,7 +521,8 @@ mlx5_vdpa_virtq_is_modified(struct mlx5_vdpa_priv *priv,
if (ret)
return -1;
- if (vq.size != virtq->vq_size || vq.kickfd != virtq->intr_handle.fd)
+ if (vq.size != virtq->vq_size || vq.kickfd !=
+ rte_intr_fd_get(virtq->intr_handle))
return 1;
if (virtq->eqp.cq.cq_obj.cq) {
if (vq.callfd != virtq->eqp.cq.callfd)
diff --git a/lib/ethdev/ethdev_pci.h b/lib/ethdev/ethdev_pci.h
index 59c5d7b40f..71aa4b2e98 100644
--- a/lib/ethdev/ethdev_pci.h
+++ b/lib/ethdev/ethdev_pci.h
@@ -32,7 +32,7 @@ rte_eth_copy_pci_info(struct rte_eth_dev *eth_dev,
return;
}
- eth_dev->intr_handle = &pci_dev->intr_handle;
+ eth_dev->intr_handle = pci_dev->intr_handle;
if (rte_eal_process_type() == RTE_PROC_PRIMARY) {
eth_dev->data->dev_flags = 0;
--
2.23.0
^ permalink raw reply [flat|nested] 152+ messages in thread
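Stepping back from the per-driver hunks, the conversion in this patch follows one lifecycle everywhere: the handle embedded in the driver's private structure becomes a pointer, an instance is allocated during init, populated through the setter APIs, and freed on teardown or error. A minimal sketch of that lifecycle under the new APIs (the private structure, handler and fd source below are illustrative placeholders, not names from the tree):

#include <errno.h>
#include <rte_interrupts.h>

struct example_priv {
	struct rte_intr_handle *intr_handle; /* was an embedded struct */
};

static void example_intr_handler(void *cb_arg); /* placeholder handler */

static int
example_init(struct example_priv *priv, int event_fd)
{
	priv->intr_handle = rte_intr_instance_alloc(RTE_INTR_INSTANCE_F_SHARED);
	if (priv->intr_handle == NULL)
		return -ENOMEM;

	if (rte_intr_type_set(priv->intr_handle, RTE_INTR_HANDLE_EXT) ||
	    rte_intr_fd_set(priv->intr_handle, event_fd))
		goto fail;

	if (rte_intr_callback_register(priv->intr_handle,
			example_intr_handler, priv))
		goto fail;

	return 0;
fail:
	rte_intr_instance_free(priv->intr_handle);
	priv->intr_handle = NULL;
	return -1;
}

The matching close path unregisters the callback and calls rte_intr_instance_free(), mirroring what the tap, thunderx and vhost changes above do.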
* [dpdk-dev] [PATCH v7 7/9] interrupts: make interrupt handle structure opaque
2021-10-25 13:34 ` [dpdk-dev] [PATCH v7 0/9] " David Marchand
` (5 preceding siblings ...)
2021-10-25 13:34 ` [dpdk-dev] [PATCH v7 6/9] drivers: " David Marchand
@ 2021-10-25 13:34 ` David Marchand
2021-10-25 13:34 ` [dpdk-dev] [PATCH v7 8/9] interrupts: rename device specific file descriptor David Marchand
2021-10-25 13:34 ` [dpdk-dev] [PATCH v7 9/9] interrupts: extend event list David Marchand
8 siblings, 0 replies; 152+ messages in thread
From: David Marchand @ 2021-10-25 13:34 UTC (permalink / raw)
To: hkalra, dev; +Cc: dmitry.kozliuk, rasland, thomas
From: Harman Kalra <hkalra@marvell.com>
Moving the interrupt handle structure definition inside an EAL private
header to make its fields totally opaque to the outside world.
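Once the definition lives in the private header, code outside lib/eal can no longer dereference the handle; every field access has to go through the getters and setters added earlier in the series. A small illustrative sketch of what a consumer is limited to (the diagnostic helper name is hypothetical):

#include <stdio.h>
#include <rte_interrupts.h>

/* Sketch: inspect a handle through the public getters only. */
static void
dump_intr_handle(struct rte_intr_handle *handle)
{
	printf("type=%d fd=%d max_intr=%d nb_efd=%d\n",
		rte_intr_type_get(handle), rte_intr_fd_get(handle),
		rte_intr_max_intr_get(handle), rte_intr_nb_efd_get(handle));
}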
Signed-off-by: Harman Kalra <hkalra@marvell.com>
Signed-off-by: David Marchand <david.marchand@redhat.com>
---
Changes since v5:
- left rte_intr_handle fields untouched:
- split vfio / uio fd renames in a separate commit,
- split event list update in a separate commit,
- moved rte_intr_handle definition to an EAL private header,
- preserved dumping all info in interrupt tracepoints,
---
lib/eal/common/eal_common_interrupts.c | 2 +
lib/eal/common/eal_interrupts.h | 37 +++++++++++++
lib/eal/include/meson.build | 1 -
lib/eal/include/rte_eal_interrupts.h | 72 --------------------------
lib/eal/include/rte_eal_trace.h | 2 +
lib/eal/include/rte_interrupts.h | 24 ++++++++-
6 files changed, 63 insertions(+), 75 deletions(-)
create mode 100644 lib/eal/common/eal_interrupts.h
delete mode 100644 lib/eal/include/rte_eal_interrupts.h
diff --git a/lib/eal/common/eal_common_interrupts.c b/lib/eal/common/eal_common_interrupts.c
index d6e6654fbb..1337c560e4 100644
--- a/lib/eal/common/eal_common_interrupts.c
+++ b/lib/eal/common/eal_common_interrupts.c
@@ -10,6 +10,8 @@
#include <rte_log.h>
#include <rte_malloc.h>
+#include "eal_interrupts.h"
+
/* Macros to check for valid interrupt handle */
#define CHECK_VALID_INTR_HANDLE(intr_handle) do { \
if (intr_handle == NULL) { \
diff --git a/lib/eal/common/eal_interrupts.h b/lib/eal/common/eal_interrupts.h
new file mode 100644
index 0000000000..beacc04b62
--- /dev/null
+++ b/lib/eal/common/eal_interrupts.h
@@ -0,0 +1,37 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2010-2014 Intel Corporation
+ */
+
+#ifndef EAL_INTERRUPTS_H
+#define EAL_INTERRUPTS_H
+
+struct rte_intr_handle {
+ RTE_STD_C11
+ union {
+ struct {
+ RTE_STD_C11
+ union {
+ /** VFIO device file descriptor */
+ int vfio_dev_fd;
+ /** UIO cfg file desc for uio_pci_generic */
+ int uio_cfg_fd;
+ };
+ int fd; /**< interrupt event file descriptor */
+ };
+ void *windows_handle; /**< device driver handle */
+ };
+ uint32_t alloc_flags; /**< flags passed at allocation */
+ enum rte_intr_handle_type type; /**< handle type */
+ uint32_t max_intr; /**< max interrupt requested */
+ uint32_t nb_efd; /**< number of available efd(event fd) */
+ uint8_t efd_counter_size; /**< size of efd counter, used for vdev */
+ uint16_t nb_intr;
+ /**< Max vector count, default RTE_MAX_RXTX_INTR_VEC_ID */
+ int efds[RTE_MAX_RXTX_INTR_VEC_ID]; /**< intr vectors/efds mapping */
+ struct rte_epoll_event elist[RTE_MAX_RXTX_INTR_VEC_ID];
+ /**< intr vector epoll event */
+ uint16_t vec_list_size;
+ int *intr_vec; /**< intr vector number array */
+};
+
+#endif /* EAL_INTERRUPTS_H */
diff --git a/lib/eal/include/meson.build b/lib/eal/include/meson.build
index 8e258607b8..86468d1a2b 100644
--- a/lib/eal/include/meson.build
+++ b/lib/eal/include/meson.build
@@ -49,7 +49,6 @@ headers += files(
'rte_version.h',
'rte_vfio.h',
)
-indirect_headers += files('rte_eal_interrupts.h')
# special case install the generic headers, since they go in a subdir
generic_headers = files(
diff --git a/lib/eal/include/rte_eal_interrupts.h b/lib/eal/include/rte_eal_interrupts.h
deleted file mode 100644
index 60bb60ca59..0000000000
--- a/lib/eal/include/rte_eal_interrupts.h
+++ /dev/null
@@ -1,72 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(c) 2010-2014 Intel Corporation
- */
-
-#ifndef _RTE_INTERRUPTS_H_
-#error "don't include this file directly, please include generic <rte_interrupts.h>"
-#endif
-
-/**
- * @file rte_eal_interrupts.h
- * @internal
- *
- * Contains function prototypes exposed by the EAL for interrupt handling by
- * drivers and other DPDK internal consumers.
- */
-
-#ifndef _RTE_EAL_INTERRUPTS_H_
-#define _RTE_EAL_INTERRUPTS_H_
-
-#define RTE_MAX_RXTX_INTR_VEC_ID 512
-#define RTE_INTR_VEC_ZERO_OFFSET 0
-#define RTE_INTR_VEC_RXTX_OFFSET 1
-
-/**
- * The interrupt source type, e.g. UIO, VFIO, ALARM etc.
- */
-enum rte_intr_handle_type {
- RTE_INTR_HANDLE_UNKNOWN = 0, /**< generic unknown handle */
- RTE_INTR_HANDLE_UIO, /**< uio device handle */
- RTE_INTR_HANDLE_UIO_INTX, /**< uio generic handle */
- RTE_INTR_HANDLE_VFIO_LEGACY, /**< vfio device handle (legacy) */
- RTE_INTR_HANDLE_VFIO_MSI, /**< vfio device handle (MSI) */
- RTE_INTR_HANDLE_VFIO_MSIX, /**< vfio device handle (MSIX) */
- RTE_INTR_HANDLE_ALARM, /**< alarm handle */
- RTE_INTR_HANDLE_EXT, /**< external handler */
- RTE_INTR_HANDLE_VDEV, /**< virtual device */
- RTE_INTR_HANDLE_DEV_EVENT, /**< device event handle */
- RTE_INTR_HANDLE_VFIO_REQ, /**< VFIO request handle */
- RTE_INTR_HANDLE_MAX /**< count of elements */
-};
-
-/** Handle for interrupts. */
-struct rte_intr_handle {
- RTE_STD_C11
- union {
- struct {
- RTE_STD_C11
- union {
- /** VFIO device file descriptor */
- int vfio_dev_fd;
- /** UIO cfg file desc for uio_pci_generic */
- int uio_cfg_fd;
- };
- int fd; /**< interrupt event file descriptor */
- };
- void *windows_handle; /**< device driver handle */
- };
- uint32_t alloc_flags; /**< flags passed at allocation */
- enum rte_intr_handle_type type; /**< handle type */
- uint32_t max_intr; /**< max interrupt requested */
- uint32_t nb_efd; /**< number of available efd(event fd) */
- uint8_t efd_counter_size; /**< size of efd counter, used for vdev */
- uint16_t nb_intr;
- /**< Max vector count, default RTE_MAX_RXTX_INTR_VEC_ID */
- int efds[RTE_MAX_RXTX_INTR_VEC_ID]; /**< intr vectors/efds mapping */
- struct rte_epoll_event elist[RTE_MAX_RXTX_INTR_VEC_ID];
- /**< intr vector epoll event */
- uint16_t vec_list_size;
- int *intr_vec; /**< intr vector number array */
-};
-
-#endif /* _RTE_EAL_INTERRUPTS_H_ */
diff --git a/lib/eal/include/rte_eal_trace.h b/lib/eal/include/rte_eal_trace.h
index 495ae1ee1d..af7b2d0bf0 100644
--- a/lib/eal/include/rte_eal_trace.h
+++ b/lib/eal/include/rte_eal_trace.h
@@ -19,6 +19,8 @@ extern "C" {
#include <rte_interrupts.h>
#include <rte_trace_point.h>
+#include "eal_interrupts.h"
+
/* Alarm */
RTE_TRACE_POINT(
rte_eal_trace_alarm_set,
diff --git a/lib/eal/include/rte_interrupts.h b/lib/eal/include/rte_interrupts.h
index a515a8c073..edbf0faeef 100644
--- a/lib/eal/include/rte_interrupts.h
+++ b/lib/eal/include/rte_interrupts.h
@@ -35,6 +35,28 @@ struct rte_intr_handle;
/** Interrupt instance will be shared between primary and secondary processes. */
#define RTE_INTR_INSTANCE_F_SHARED RTE_BIT32(0)
+#define RTE_MAX_RXTX_INTR_VEC_ID 512
+#define RTE_INTR_VEC_ZERO_OFFSET 0
+#define RTE_INTR_VEC_RXTX_OFFSET 1
+
+/**
+ * The interrupt source type, e.g. UIO, VFIO, ALARM etc.
+ */
+enum rte_intr_handle_type {
+ RTE_INTR_HANDLE_UNKNOWN = 0, /**< generic unknown handle */
+ RTE_INTR_HANDLE_UIO, /**< uio device handle */
+ RTE_INTR_HANDLE_UIO_INTX, /**< uio generic handle */
+ RTE_INTR_HANDLE_VFIO_LEGACY, /**< vfio device handle (legacy) */
+ RTE_INTR_HANDLE_VFIO_MSI, /**< vfio device handle (MSI) */
+ RTE_INTR_HANDLE_VFIO_MSIX, /**< vfio device handle (MSIX) */
+ RTE_INTR_HANDLE_ALARM, /**< alarm handle */
+ RTE_INTR_HANDLE_EXT, /**< external handler */
+ RTE_INTR_HANDLE_VDEV, /**< virtual device */
+ RTE_INTR_HANDLE_DEV_EVENT, /**< device event handle */
+ RTE_INTR_HANDLE_VFIO_REQ, /**< VFIO request handle */
+ RTE_INTR_HANDLE_MAX /**< count of elements */
+};
+
/** Function to be registered for the specific interrupt */
typedef void (*rte_intr_callback_fn)(void *cb_arg);
@@ -45,8 +67,6 @@ typedef void (*rte_intr_callback_fn)(void *cb_arg);
typedef void (*rte_intr_unregister_callback_fn)(struct rte_intr_handle *intr_handle,
void *cb_arg);
-#include "rte_eal_interrupts.h"
-
/**
* It registers the callback for the specific interrupt. Multiple
* callbacks can be registered at the same time.
--
2.23.0
^ permalink raw reply [flat|nested] 152+ messages in thread
* [dpdk-dev] [PATCH v7 8/9] interrupts: rename device specific file descriptor
2021-10-25 13:34 ` [dpdk-dev] [PATCH v7 0/9] " David Marchand
` (6 preceding siblings ...)
2021-10-25 13:34 ` [dpdk-dev] [PATCH v7 7/9] interrupts: make interrupt handle structure opaque David Marchand
@ 2021-10-25 13:34 ` David Marchand
2021-10-25 13:34 ` [dpdk-dev] [PATCH v7 9/9] interrupts: extend event list David Marchand
8 siblings, 0 replies; 152+ messages in thread
From: David Marchand @ 2021-10-25 13:34 UTC (permalink / raw)
To: hkalra, dev; +Cc: dmitry.kozliuk, rasland, thomas
From: Harman Kalra <hkalra@marvell.com>
VFIO and UIO are mutually exclusive, so storing the file descriptor in a
single field is enough.
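In practice this means the bus and driver sides now share a single pair of accessors for the config descriptor. A sketch of both sides (helper names are illustrative):

#include <rte_interrupts.h>

/* Bus side: record the VFIO or UIO config fd once. */
static int
store_device_fd(struct rte_intr_handle *handle, int fd)
{
	return rte_intr_dev_fd_set(handle, fd);
}

/* Driver side: read it back; returns -1 on an invalid handle. */
static int
read_device_fd(const struct rte_intr_handle *handle)
{
	return rte_intr_dev_fd_get(handle);
}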
Signed-off-by: Harman Kalra <hkalra@marvell.com>
Signed-off-by: David Marchand <david.marchand@redhat.com>
---
Changes since v5:
- split from patch5,
---
lib/eal/common/eal_common_interrupts.c | 6 +++---
lib/eal/common/eal_interrupts.h | 8 +-------
lib/eal/include/rte_eal_trace.h | 8 ++++----
3 files changed, 8 insertions(+), 14 deletions(-)
diff --git a/lib/eal/common/eal_common_interrupts.c b/lib/eal/common/eal_common_interrupts.c
index 1337c560e4..3285c4335f 100644
--- a/lib/eal/common/eal_common_interrupts.c
+++ b/lib/eal/common/eal_common_interrupts.c
@@ -72,7 +72,7 @@ struct rte_intr_handle *rte_intr_instance_dup(const struct rte_intr_handle *src)
intr_handle = rte_intr_instance_alloc(src->alloc_flags);
intr_handle->fd = src->fd;
- intr_handle->vfio_dev_fd = src->vfio_dev_fd;
+ intr_handle->dev_fd = src->dev_fd;
intr_handle->type = src->type;
intr_handle->max_intr = src->max_intr;
intr_handle->nb_efd = src->nb_efd;
@@ -139,7 +139,7 @@ int rte_intr_dev_fd_set(struct rte_intr_handle *intr_handle, int fd)
{
CHECK_VALID_INTR_HANDLE(intr_handle);
- intr_handle->vfio_dev_fd = fd;
+ intr_handle->dev_fd = fd;
return 0;
fail:
@@ -150,7 +150,7 @@ int rte_intr_dev_fd_get(const struct rte_intr_handle *intr_handle)
{
CHECK_VALID_INTR_HANDLE(intr_handle);
- return intr_handle->vfio_dev_fd;
+ return intr_handle->dev_fd;
fail:
return -1;
}
diff --git a/lib/eal/common/eal_interrupts.h b/lib/eal/common/eal_interrupts.h
index beacc04b62..1a4e5573b2 100644
--- a/lib/eal/common/eal_interrupts.h
+++ b/lib/eal/common/eal_interrupts.h
@@ -9,13 +9,7 @@ struct rte_intr_handle {
RTE_STD_C11
union {
struct {
- RTE_STD_C11
- union {
- /** VFIO device file descriptor */
- int vfio_dev_fd;
- /** UIO cfg file desc for uio_pci_generic */
- int uio_cfg_fd;
- };
+ int dev_fd; /**< VFIO/UIO cfg device file descriptor */
int fd; /**< interrupt event file descriptor */
};
void *windows_handle; /**< device driver handle */
diff --git a/lib/eal/include/rte_eal_trace.h b/lib/eal/include/rte_eal_trace.h
index af7b2d0bf0..5ef4398230 100644
--- a/lib/eal/include/rte_eal_trace.h
+++ b/lib/eal/include/rte_eal_trace.h
@@ -151,7 +151,7 @@ RTE_TRACE_POINT(
RTE_TRACE_POINT_ARGS(const struct rte_intr_handle *handle,
rte_intr_callback_fn cb, void *cb_arg, int rc),
rte_trace_point_emit_int(rc);
- rte_trace_point_emit_int(handle->vfio_dev_fd);
+ rte_trace_point_emit_int(handle->dev_fd);
rte_trace_point_emit_int(handle->fd);
rte_trace_point_emit_int(handle->type);
rte_trace_point_emit_u32(handle->max_intr);
@@ -164,7 +164,7 @@ RTE_TRACE_POINT(
RTE_TRACE_POINT_ARGS(const struct rte_intr_handle *handle,
rte_intr_callback_fn cb, void *cb_arg, int rc),
rte_trace_point_emit_int(rc);
- rte_trace_point_emit_int(handle->vfio_dev_fd);
+ rte_trace_point_emit_int(handle->dev_fd);
rte_trace_point_emit_int(handle->fd);
rte_trace_point_emit_int(handle->type);
rte_trace_point_emit_u32(handle->max_intr);
@@ -176,7 +176,7 @@ RTE_TRACE_POINT(
rte_eal_trace_intr_enable,
RTE_TRACE_POINT_ARGS(const struct rte_intr_handle *handle, int rc),
rte_trace_point_emit_int(rc);
- rte_trace_point_emit_int(handle->vfio_dev_fd);
+ rte_trace_point_emit_int(handle->dev_fd);
rte_trace_point_emit_int(handle->fd);
rte_trace_point_emit_int(handle->type);
rte_trace_point_emit_u32(handle->max_intr);
@@ -186,7 +186,7 @@ RTE_TRACE_POINT(
rte_eal_trace_intr_disable,
RTE_TRACE_POINT_ARGS(const struct rte_intr_handle *handle, int rc),
rte_trace_point_emit_int(rc);
- rte_trace_point_emit_int(handle->vfio_dev_fd);
+ rte_trace_point_emit_int(handle->dev_fd);
rte_trace_point_emit_int(handle->fd);
rte_trace_point_emit_int(handle->type);
rte_trace_point_emit_u32(handle->max_intr);
--
2.23.0
^ permalink raw reply [flat|nested] 152+ messages in thread
* [dpdk-dev] [PATCH v7 9/9] interrupts: extend event list
2021-10-25 13:34 ` [dpdk-dev] [PATCH v7 0/9] " David Marchand
` (7 preceding siblings ...)
2021-10-25 13:34 ` [dpdk-dev] [PATCH v7 8/9] interrupts: rename device specific file descriptor David Marchand
@ 2021-10-25 13:34 ` David Marchand
8 siblings, 0 replies; 152+ messages in thread
From: David Marchand @ 2021-10-25 13:34 UTC (permalink / raw)
To: hkalra, dev
Cc: dmitry.kozliuk, rasland, thomas, Anatoly Burakov,
Nithin Dabilpuram, Kiran Kumar K, Sunil Kumar Kori, Satha Rao
From: Harman Kalra <hkalra@marvell.com>
Dynamically allocating the efds and elist arrays of the intr_handle
structure, based on a size provided by the user, e.g. the number of MSI-X
interrupts supported by a PCI device.
Signed-off-by: Harman Kalra <hkalra@marvell.com>
Signed-off-by: David Marchand <david.marchand@redhat.com>
Acked-by: Dmitry Kozlyuk <dmitry.kozliuk@gmail.com>
---
Changes since v6:
- removed unneeded checks on elist/efds array initialisation,
Changes since v5:
- split from patch5,
---
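A short usage sketch, mirroring the pci_vfio.c hunk below; msix_count stands
in for the interrupt count reported by the device and is not a variable taken
from the patch:

	/* Resize the efds/elist arrays to the device's MSI-X capability
	 * before programming per-queue interrupts. */
	if (rte_intr_event_list_update(intr_handle, msix_count) < 0)
		return -rte_errno;
	/* Indexes up to msix_count - 1 may now be used with
	 * rte_intr_efds_index_set() and rte_intr_elist_index_get(). */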
drivers/bus/pci/linux/pci_vfio.c | 6 ++
drivers/common/cnxk/roc_platform.h | 1 +
lib/eal/common/eal_common_interrupts.c | 95 +++++++++++++++++++++++++-
lib/eal/common/eal_interrupts.h | 5 +-
4 files changed, 102 insertions(+), 5 deletions(-)
diff --git a/drivers/bus/pci/linux/pci_vfio.c b/drivers/bus/pci/linux/pci_vfio.c
index 7b2f8296c5..f622e7f8e6 100644
--- a/drivers/bus/pci/linux/pci_vfio.c
+++ b/drivers/bus/pci/linux/pci_vfio.c
@@ -266,6 +266,12 @@ pci_vfio_setup_interrupts(struct rte_pci_device *dev, int vfio_dev_fd)
return -1;
}
+ /* Reallocate the efds and elist fields of intr_handle based
+ * on PCI device MSIX size.
+ */
+ if (rte_intr_event_list_update(dev->intr_handle, irq.count))
+ return -1;
+
/* if this vector cannot be used with eventfd, fail if we explicitly
* specified interrupt type, otherwise continue */
if ((irq.flags & VFIO_IRQ_INFO_EVENTFD) == 0) {
diff --git a/drivers/common/cnxk/roc_platform.h b/drivers/common/cnxk/roc_platform.h
index 60227b72d0..5da23fe5f8 100644
--- a/drivers/common/cnxk/roc_platform.h
+++ b/drivers/common/cnxk/roc_platform.h
@@ -121,6 +121,7 @@
#define plt_intr_instance_alloc rte_intr_instance_alloc
#define plt_intr_instance_dup rte_intr_instance_dup
#define plt_intr_instance_free rte_intr_instance_free
+#define plt_intr_event_list_update rte_intr_event_list_update
#define plt_intr_max_intr_get rte_intr_max_intr_get
#define plt_intr_max_intr_set rte_intr_max_intr_set
#define plt_intr_nb_efd_get rte_intr_nb_efd_get
diff --git a/lib/eal/common/eal_common_interrupts.c b/lib/eal/common/eal_common_interrupts.c
index 3285c4335f..636bbfce72 100644
--- a/lib/eal/common/eal_common_interrupts.c
+++ b/lib/eal/common/eal_common_interrupts.c
@@ -53,10 +53,46 @@ struct rte_intr_handle *rte_intr_instance_alloc(uint32_t flags)
return NULL;
}
+ if (uses_rte_memory) {
+ intr_handle->efds = rte_zmalloc(NULL,
+ RTE_MAX_RXTX_INTR_VEC_ID * sizeof(int), 0);
+ } else {
+ intr_handle->efds = calloc(RTE_MAX_RXTX_INTR_VEC_ID,
+ sizeof(int));
+ }
+ if (intr_handle->efds == NULL) {
+ RTE_LOG(ERR, EAL, "Fail to allocate event fd list\n");
+ rte_errno = ENOMEM;
+ goto fail;
+ }
+
+ if (uses_rte_memory) {
+ intr_handle->elist = rte_zmalloc(NULL,
+ RTE_MAX_RXTX_INTR_VEC_ID * sizeof(struct rte_epoll_event),
+ 0);
+ } else {
+ intr_handle->elist = calloc(RTE_MAX_RXTX_INTR_VEC_ID,
+ sizeof(struct rte_epoll_event));
+ }
+ if (intr_handle->elist == NULL) {
+ RTE_LOG(ERR, EAL, "fail to allocate event fd list\n");
+ rte_errno = ENOMEM;
+ goto fail;
+ }
+
intr_handle->alloc_flags = flags;
intr_handle->nb_intr = RTE_MAX_RXTX_INTR_VEC_ID;
return intr_handle;
+fail:
+ if (uses_rte_memory) {
+ rte_free(intr_handle->efds);
+ rte_free(intr_handle);
+ } else {
+ free(intr_handle->efds);
+ free(intr_handle);
+ }
+ return NULL;
}
struct rte_intr_handle *rte_intr_instance_dup(const struct rte_intr_handle *src)
@@ -83,14 +119,69 @@ struct rte_intr_handle *rte_intr_instance_dup(const struct rte_intr_handle *src)
return intr_handle;
}
+int rte_intr_event_list_update(struct rte_intr_handle *intr_handle, int size)
+{
+ struct rte_epoll_event *tmp_elist;
+ bool uses_rte_memory;
+ int *tmp_efds;
+
+ CHECK_VALID_INTR_HANDLE(intr_handle);
+
+ if (size == 0) {
+ RTE_LOG(ERR, EAL, "Size can't be zero\n");
+ rte_errno = EINVAL;
+ goto fail;
+ }
+
+ uses_rte_memory =
+ RTE_INTR_INSTANCE_USES_RTE_MEMORY(intr_handle->alloc_flags);
+ if (uses_rte_memory) {
+ tmp_efds = rte_realloc(intr_handle->efds, size * sizeof(int),
+ 0);
+ } else {
+ tmp_efds = realloc(intr_handle->efds, size * sizeof(int));
+ }
+ if (tmp_efds == NULL) {
+ RTE_LOG(ERR, EAL, "Failed to realloc the efds list\n");
+ rte_errno = ENOMEM;
+ goto fail;
+ }
+ intr_handle->efds = tmp_efds;
+
+ if (uses_rte_memory) {
+ tmp_elist = rte_realloc(intr_handle->elist,
+ size * sizeof(struct rte_epoll_event), 0);
+ } else {
+ tmp_elist = realloc(intr_handle->elist,
+ size * sizeof(struct rte_epoll_event));
+ }
+ if (tmp_elist == NULL) {
+ RTE_LOG(ERR, EAL, "Failed to realloc the event list\n");
+ rte_errno = ENOMEM;
+ goto fail;
+ }
+ intr_handle->elist = tmp_elist;
+
+ intr_handle->nb_intr = size;
+
+ return 0;
+fail:
+ return -rte_errno;
+}
+
void rte_intr_instance_free(struct rte_intr_handle *intr_handle)
{
if (intr_handle == NULL)
return;
- if (RTE_INTR_INSTANCE_USES_RTE_MEMORY(intr_handle->alloc_flags))
+ if (RTE_INTR_INSTANCE_USES_RTE_MEMORY(intr_handle->alloc_flags)) {
+ rte_free(intr_handle->efds);
+ rte_free(intr_handle->elist);
rte_free(intr_handle);
- else
+ } else {
+ free(intr_handle->efds);
+ free(intr_handle->elist);
free(intr_handle);
+ }
}
int rte_intr_fd_set(struct rte_intr_handle *intr_handle, int fd)
diff --git a/lib/eal/common/eal_interrupts.h b/lib/eal/common/eal_interrupts.h
index 1a4e5573b2..482781b862 100644
--- a/lib/eal/common/eal_interrupts.h
+++ b/lib/eal/common/eal_interrupts.h
@@ -21,9 +21,8 @@ struct rte_intr_handle {
uint8_t efd_counter_size; /**< size of efd counter, used for vdev */
uint16_t nb_intr;
/**< Max vector count, default RTE_MAX_RXTX_INTR_VEC_ID */
- int efds[RTE_MAX_RXTX_INTR_VEC_ID]; /**< intr vectors/efds mapping */
- struct rte_epoll_event elist[RTE_MAX_RXTX_INTR_VEC_ID];
- /**< intr vector epoll event */
+ int *efds; /**< intr vectors/efds mapping */
+ struct rte_epoll_event *elist; /**< intr vector epoll event */
uint16_t vec_list_size;
int *intr_vec; /**< intr vector number array */
};
--
2.23.0
^ permalink raw reply [flat|nested] 152+ messages in thread
* [dpdk-dev] [PATCH v8 0/9] make rte_intr_handle internal
2021-10-22 20:49 ` [dpdk-dev] [PATCH v5 0/6] make rte_intr_handle internal Harman Kalra
` (8 preceding siblings ...)
2021-10-25 13:34 ` [dpdk-dev] [PATCH v7 0/9] " David Marchand
@ 2021-10-25 14:27 ` David Marchand
2021-10-25 14:27 ` [dpdk-dev] [PATCH v8 1/9] interrupts: add allocator and accessors David Marchand
` (10 more replies)
9 siblings, 11 replies; 152+ messages in thread
From: David Marchand @ 2021-10-25 14:27 UTC (permalink / raw)
To: hkalra, dev; +Cc: dmitry.kozliuk, rasland, thomas
Moving struct rte_intr_handle as an internal structure to
avoid any ABI breakages in future. Since this structure defines
some static arrays and changing respective macros breaks the ABI.
Eg:
Currently RTE_MAX_RXTX_INTR_VEC_ID imposes a limit of maximum 512
MSI-X interrupts that can be defined for a PCI device, while PCI
specification allows maximum 2048 MSI-X interrupts that can be used.
If some PCI device requires more than 512 vectors, either change the
RTE_MAX_RXTX_INTR_VEC_ID limit or dynamically allocate based on
PCI device MSI-X size at probe time. Either way, it's an ABI breakage.
Change already included in 21.11 ABI improvement spreadsheet (item 42):
https://docs.google.com/spreadsheets/d/1betlC000ua5SsSiJIcC54mCCCJnW6voH5Dqv9UxeyfE/edit#gid=0
This series makes struct rte_intr_handle totally opaque to the outside
world by wrapping it inside a .c file and providing get/set wrapper APIs
to read or manipulate its fields. Any changes to be made to any of the
fields should be done via these get/set APIs.
Introduced a new eal_common_interrupts.c where all these APIs are defined
and also hides struct rte_intr_handle definition.
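As a minimal illustration of the resulting driver-facing flow (a sketch only:
the RTE_INTR_HANDLE_EXT type and the efd parameter are placeholders chosen
for the example, and error handling is trimmed):

#include <rte_errno.h>
#include <rte_interrupts.h>

/* Allocate a private interrupt instance, set its type and event fd through
 * the accessors, and hand it back to the caller. */
static int
setup_intr_handle(int efd, struct rte_intr_handle **out)
{
	struct rte_intr_handle *ih;

	ih = rte_intr_instance_alloc(RTE_INTR_INSTANCE_F_PRIVATE);
	if (ih == NULL)
		return -rte_errno;
	if (rte_intr_type_set(ih, RTE_INTR_HANDLE_EXT) < 0 ||
			rte_intr_fd_set(ih, efd) < 0) {
		rte_intr_instance_free(ih);
		return -rte_errno;
	}
	*out = ih;
	return 0;
}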
v1:
* Fixed freebsd compilation failure
* Fixed seg fault in case of memif
v2:
* Merged the prototype and implementation patches into one.
* Restricting allocation of single interrupt instance.
* Removed base APIs, as they were exposing internally
allocated memory information.
* Fixed some memory leak issues.
* Marked some library specific APIs as internal.
v3:
* Removed the flag from the instance alloc API; instead, auto-detect
whether memory should be allocated using glibc malloc APIs or
rte_malloc*.
* Added APIs for get/set windows handle.
* Defined macros for repeated checks.
v4:
* Rectified some typos in the API documentation.
* Better names for some internal variables.
v5:
* Reverted to passing a flag to the instance alloc API, as with
auto-detection some existing multiprocess issues in the
library were causing test failures.
* Rebased to top of tree.
v6:
* renamed RTE_INTR_INSTANCE_F_UNSHARED as RTE_INTR_INSTANCE_F_PRIVATE,
* changed API and removed need for alloc_flag content exposure
(see rte_intr_instance_dup() in patch 1 and 2),
* exported all symbols for Windows,
* fixed leak in unit tests in case of alloc failure,
* split (previously) patch 4 into three patches
* (now) patch 4 only concerns alarm and (previously) patch 6 cleanup bits
are squashed in it,
* (now) patch 5 concerns other libraries updates,
* (now) patch 6 concerns drivers updates:
* instance allocation is moved to probing for auxiliary,
* there might be a bug for PCI drivers not requesting
RTE_PCI_DRV_NEED_MAPPING, but the code is left as in v5,
* split (previously) patch 5 into three patches
* (now) patch 7 only hides structure, but keep it in a EAL private
header, this makes it possible to keep info in tracepoints,
* (now) patch 8 deals with VFIO/UIO internal fds merge,
* (now) patch 9 extends event list,
v7:
* fixed compilation on FreeBSD,
* removed unused interrupt handle in FreeBSD alarm code,
* fixed interrupt handle allocation for PCI drivers without
RTE_PCI_DRV_NEED_MAPPING,
v8:
* lowered logs level to DEBUG in sanity checks,
* fixed corner case with vector list access,
--
David Marchand
Harman Kalra (9):
interrupts: add allocator and accessors
interrupts: remove direct access to interrupt handle
test/interrupts: remove direct access to interrupt handle
alarm: remove direct access to interrupt handle
lib: remove direct access to interrupt handle
drivers: remove direct access to interrupt handle
interrupts: make interrupt handle structure opaque
interrupts: rename device specific file descriptor
interrupts: extend event list
MAINTAINERS | 1 +
app/test/test_interrupts.c | 164 +++--
drivers/baseband/acc100/rte_acc100_pmd.c | 14 +-
.../fpga_5gnr_fec/rte_fpga_5gnr_fec.c | 24 +-
drivers/baseband/fpga_lte_fec/fpga_lte_fec.c | 24 +-
drivers/bus/auxiliary/auxiliary_common.c | 17 +-
drivers/bus/auxiliary/rte_bus_auxiliary.h | 2 +-
drivers/bus/dpaa/dpaa_bus.c | 28 +-
drivers/bus/dpaa/rte_dpaa_bus.h | 2 +-
drivers/bus/fslmc/fslmc_bus.c | 14 +-
drivers/bus/fslmc/fslmc_vfio.c | 30 +-
drivers/bus/fslmc/portal/dpaa2_hw_dpio.c | 18 +-
drivers/bus/fslmc/portal/dpaa2_hw_pvt.h | 2 +-
drivers/bus/fslmc/rte_fslmc.h | 2 +-
drivers/bus/ifpga/ifpga_bus.c | 13 +-
drivers/bus/ifpga/rte_bus_ifpga.h | 2 +-
drivers/bus/pci/bsd/pci.c | 20 +-
drivers/bus/pci/linux/pci.c | 4 +-
drivers/bus/pci/linux/pci_uio.c | 69 +-
drivers/bus/pci/linux/pci_vfio.c | 108 ++-
drivers/bus/pci/pci_common.c | 47 +-
drivers/bus/pci/pci_common_uio.c | 21 +-
drivers/bus/pci/rte_bus_pci.h | 4 +-
drivers/bus/vmbus/linux/vmbus_bus.c | 6 +
drivers/bus/vmbus/linux/vmbus_uio.c | 35 +-
drivers/bus/vmbus/rte_bus_vmbus.h | 2 +-
drivers/bus/vmbus/vmbus_common_uio.c | 23 +-
drivers/common/cnxk/roc_cpt.c | 8 +-
drivers/common/cnxk/roc_dev.c | 14 +-
drivers/common/cnxk/roc_irq.c | 107 +--
drivers/common/cnxk/roc_nix_inl_dev_irq.c | 8 +-
drivers/common/cnxk/roc_nix_irq.c | 36 +-
drivers/common/cnxk/roc_npa.c | 2 +-
drivers/common/cnxk/roc_platform.h | 49 +-
drivers/common/cnxk/roc_sso.c | 4 +-
drivers/common/cnxk/roc_tim.c | 4 +-
drivers/common/octeontx2/otx2_dev.c | 14 +-
drivers/common/octeontx2/otx2_irq.c | 117 ++--
.../octeontx2/otx2_cryptodev_hw_access.c | 4 +-
drivers/event/octeontx2/otx2_evdev_irq.c | 12 +-
drivers/mempool/octeontx2/otx2_mempool.c | 2 +-
drivers/net/atlantic/atl_ethdev.c | 20 +-
drivers/net/avp/avp_ethdev.c | 8 +-
drivers/net/axgbe/axgbe_ethdev.c | 12 +-
drivers/net/axgbe/axgbe_mdio.c | 6 +-
drivers/net/bnx2x/bnx2x_ethdev.c | 10 +-
drivers/net/bnxt/bnxt_ethdev.c | 33 +-
drivers/net/bnxt/bnxt_irq.c | 4 +-
drivers/net/dpaa/dpaa_ethdev.c | 48 +-
drivers/net/dpaa2/dpaa2_ethdev.c | 10 +-
drivers/net/e1000/em_ethdev.c | 23 +-
drivers/net/e1000/igb_ethdev.c | 79 +--
drivers/net/ena/ena_ethdev.c | 35 +-
drivers/net/enic/enic_main.c | 26 +-
drivers/net/failsafe/failsafe.c | 21 +-
drivers/net/failsafe/failsafe_intr.c | 43 +-
drivers/net/failsafe/failsafe_ops.c | 19 +-
drivers/net/failsafe/failsafe_private.h | 2 +-
drivers/net/fm10k/fm10k_ethdev.c | 32 +-
drivers/net/hinic/hinic_pmd_ethdev.c | 10 +-
drivers/net/hns3/hns3_ethdev.c | 57 +-
drivers/net/hns3/hns3_ethdev_vf.c | 64 +-
drivers/net/hns3/hns3_rxtx.c | 2 +-
drivers/net/i40e/i40e_ethdev.c | 53 +-
drivers/net/iavf/iavf_ethdev.c | 42 +-
drivers/net/iavf/iavf_vchnl.c | 4 +-
drivers/net/ice/ice_dcf.c | 10 +-
drivers/net/ice/ice_dcf_ethdev.c | 21 +-
drivers/net/ice/ice_ethdev.c | 49 +-
drivers/net/igc/igc_ethdev.c | 45 +-
drivers/net/ionic/ionic_ethdev.c | 17 +-
drivers/net/ixgbe/ixgbe_ethdev.c | 66 +-
drivers/net/memif/memif_socket.c | 108 ++-
drivers/net/memif/memif_socket.h | 4 +-
drivers/net/memif/rte_eth_memif.c | 56 +-
drivers/net/memif/rte_eth_memif.h | 2 +-
drivers/net/mlx4/mlx4.c | 19 +-
drivers/net/mlx4/mlx4.h | 2 +-
drivers/net/mlx4/mlx4_intr.c | 47 +-
drivers/net/mlx5/linux/mlx5_os.c | 55 +-
drivers/net/mlx5/linux/mlx5_socket.c | 25 +-
drivers/net/mlx5/mlx5.h | 6 +-
drivers/net/mlx5/mlx5_rxq.c | 43 +-
drivers/net/mlx5/mlx5_trigger.c | 4 +-
drivers/net/mlx5/mlx5_txpp.c | 25 +-
drivers/net/netvsc/hn_ethdev.c | 4 +-
drivers/net/nfp/nfp_common.c | 34 +-
drivers/net/nfp/nfp_ethdev.c | 13 +-
drivers/net/nfp/nfp_ethdev_vf.c | 13 +-
drivers/net/ngbe/ngbe_ethdev.c | 29 +-
drivers/net/octeontx2/otx2_ethdev_irq.c | 35 +-
drivers/net/qede/qede_ethdev.c | 16 +-
drivers/net/sfc/sfc_intr.c | 30 +-
drivers/net/tap/rte_eth_tap.c | 33 +-
drivers/net/tap/rte_eth_tap.h | 2 +-
drivers/net/tap/tap_intr.c | 33 +-
drivers/net/thunderx/nicvf_ethdev.c | 10 +
drivers/net/thunderx/nicvf_struct.h | 2 +-
drivers/net/txgbe/txgbe_ethdev.c | 38 +-
drivers/net/txgbe/txgbe_ethdev_vf.c | 33 +-
drivers/net/vhost/rte_eth_vhost.c | 80 ++-
drivers/net/virtio/virtio_ethdev.c | 21 +-
.../net/virtio/virtio_user/virtio_user_dev.c | 56 +-
drivers/net/vmxnet3/vmxnet3_ethdev.c | 43 +-
drivers/raw/ifpga/ifpga_rawdev.c | 62 +-
drivers/raw/ntb/ntb.c | 9 +-
.../regex/octeontx2/otx2_regexdev_hw_access.c | 4 +-
drivers/vdpa/ifc/ifcvf_vdpa.c | 5 +-
drivers/vdpa/mlx5/mlx5_vdpa.c | 8 +
drivers/vdpa/mlx5/mlx5_vdpa.h | 4 +-
drivers/vdpa/mlx5/mlx5_vdpa_event.c | 21 +-
drivers/vdpa/mlx5/mlx5_vdpa_virtq.c | 44 +-
lib/bbdev/rte_bbdev.c | 4 +-
lib/eal/common/eal_common_interrupts.c | 500 ++++++++++++++
lib/eal/common/eal_interrupts.h | 30 +
lib/eal/common/eal_private.h | 10 +
lib/eal/common/meson.build | 1 +
lib/eal/freebsd/eal.c | 1 +
lib/eal/freebsd/eal_alarm.c | 35 +-
lib/eal/freebsd/eal_interrupts.c | 85 ++-
lib/eal/include/meson.build | 2 +-
lib/eal/include/rte_eal_interrupts.h | 269 --------
lib/eal/include/rte_eal_trace.h | 10 +-
lib/eal/include/rte_epoll.h | 118 ++++
lib/eal/include/rte_interrupts.h | 651 +++++++++++++++++-
lib/eal/linux/eal.c | 1 +
lib/eal/linux/eal_alarm.c | 32 +-
lib/eal/linux/eal_dev.c | 57 +-
lib/eal/linux/eal_interrupts.c | 304 ++++----
lib/eal/version.map | 45 +-
lib/ethdev/ethdev_pci.h | 2 +-
lib/ethdev/rte_ethdev.c | 14 +-
132 files changed, 3449 insertions(+), 1748 deletions(-)
create mode 100644 lib/eal/common/eal_common_interrupts.c
create mode 100644 lib/eal/common/eal_interrupts.h
delete mode 100644 lib/eal/include/rte_eal_interrupts.h
create mode 100644 lib/eal/include/rte_epoll.h
--
2.23.0
^ permalink raw reply [flat|nested] 152+ messages in thread
* [dpdk-dev] [PATCH v8 1/9] interrupts: add allocator and accessors
2021-10-25 14:27 ` [dpdk-dev] [PATCH v8 0/9] make rte_intr_handle internal David Marchand
@ 2021-10-25 14:27 ` David Marchand
2021-10-25 14:27 ` [dpdk-dev] [PATCH v8 2/9] interrupts: remove direct access to interrupt handle David Marchand
` (9 subsequent siblings)
10 siblings, 0 replies; 152+ messages in thread
From: David Marchand @ 2021-10-25 14:27 UTC (permalink / raw)
To: hkalra, dev; +Cc: dmitry.kozliuk, rasland, thomas, Ray Kinsella
From: Harman Kalra <hkalra@marvell.com>
Prototype and implement get/set APIs for the interrupt handle fields.
Users won't be able to access any of the interrupt handle fields
directly and should instead use these get/set APIs to access or
manipulate them.
The internal interrupt header, i.e. rte_eal_interrupts.h, is rearranged:
the APIs defined there are moved to rte_interrupts.h and epoll-specific
definitions are moved to a new header, rte_epoll.h.
Later in the series rte_eal_interrupts.h will be removed.
Signed-off-by: Harman Kalra <hkalra@marvell.com>
Acked-by: Ray Kinsella <mdr@ashroe.eu>
Acked-by: Dmitry Kozlyuk <dmitry.kozliuk@gmail.com>
Signed-off-by: David Marchand <david.marchand@redhat.com>
---
Changes since v7:
- lowered checks log level to DEBUG,
- removed asserts on vector list size, and fixed check on list size
for drivers like mlx5 which expect the list is not initialized,
Changes since v5:
- renamed RTE_INTR_INSTANCE_F_UNSHARED as RTE_INTR_INSTANCE_F_PRIVATE,
- used a single bit to mark instance as shared (default is private),
- removed rte_intr_instance_copy / rte_intr_instance_alloc_flag_get
with a single rte_intr_instance_dup helper,
- made rte_intr_vec_list_alloc alloc_flags-aware,
- exported all symbols for Windows,
---
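As a hedged sketch of the accessor style introduced here (the helper name,
queue count, and vector base are illustrative values, not taken from the
patch):

#include <stdint.h>

#include <rte_errno.h>
#include <rte_interrupts.h>

/* Build an Rx interrupt vector mapping through the accessors instead of
 * writing intr_handle->intr_vec[] directly. */
static int
map_rx_vectors(struct rte_intr_handle *ih, uint16_t nb_rxq, int vec_base)
{
	uint16_t i;

	if (rte_intr_vec_list_alloc(ih, "intr_vec", nb_rxq) < 0)
		return -rte_errno;
	for (i = 0; i < nb_rxq; i++)
		if (rte_intr_vec_list_index_set(ih, i, vec_base + i) < 0)
			return -rte_errno;
	return 0;
}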
MAINTAINERS | 1 +
lib/eal/common/eal_common_interrupts.c | 407 ++++++++++++++++
lib/eal/common/meson.build | 1 +
lib/eal/include/meson.build | 1 +
lib/eal/include/rte_eal_interrupts.h | 207 +-------
lib/eal/include/rte_epoll.h | 118 +++++
lib/eal/include/rte_interrupts.h | 627 +++++++++++++++++++++++++
lib/eal/version.map | 45 +-
8 files changed, 1197 insertions(+), 210 deletions(-)
create mode 100644 lib/eal/common/eal_common_interrupts.c
create mode 100644 lib/eal/include/rte_epoll.h
diff --git a/MAINTAINERS b/MAINTAINERS
index 587632dce0..097a57f7f6 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -211,6 +211,7 @@ F: app/test/test_memzone.c
Interrupt Subsystem
M: Harman Kalra <hkalra@marvell.com>
+F: lib/eal/include/rte_epoll.h
F: lib/eal/*/*interrupts.*
F: app/test/test_interrupts.c
diff --git a/lib/eal/common/eal_common_interrupts.c b/lib/eal/common/eal_common_interrupts.c
new file mode 100644
index 0000000000..46064870f4
--- /dev/null
+++ b/lib/eal/common/eal_common_interrupts.c
@@ -0,0 +1,407 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2021 Marvell.
+ */
+
+#include <stdlib.h>
+#include <string.h>
+
+#include <rte_errno.h>
+#include <rte_interrupts.h>
+#include <rte_log.h>
+#include <rte_malloc.h>
+
+/* Macros to check for valid interrupt handle */
+#define CHECK_VALID_INTR_HANDLE(intr_handle) do { \
+ if (intr_handle == NULL) { \
+ RTE_LOG(DEBUG, EAL, "Interrupt instance unallocated\n"); \
+ rte_errno = EINVAL; \
+ goto fail; \
+ } \
+} while (0)
+
+#define RTE_INTR_INSTANCE_KNOWN_FLAGS (RTE_INTR_INSTANCE_F_PRIVATE \
+ | RTE_INTR_INSTANCE_F_SHARED \
+ )
+
+#define RTE_INTR_INSTANCE_USES_RTE_MEMORY(flags) \
+ !!(flags & RTE_INTR_INSTANCE_F_SHARED)
+
+struct rte_intr_handle *rte_intr_instance_alloc(uint32_t flags)
+{
+ struct rte_intr_handle *intr_handle;
+ bool uses_rte_memory;
+
+ /* Check the flag passed by user, it should be part of the
+ * defined flags.
+ */
+ if ((flags & ~RTE_INTR_INSTANCE_KNOWN_FLAGS) != 0) {
+ RTE_LOG(DEBUG, EAL, "Invalid alloc flag passed 0x%x\n", flags);
+ rte_errno = EINVAL;
+ return NULL;
+ }
+
+ uses_rte_memory = RTE_INTR_INSTANCE_USES_RTE_MEMORY(flags);
+ if (uses_rte_memory)
+ intr_handle = rte_zmalloc(NULL, sizeof(*intr_handle), 0);
+ else
+ intr_handle = calloc(1, sizeof(*intr_handle));
+ if (intr_handle == NULL) {
+ RTE_LOG(ERR, EAL, "Failed to allocate intr_handle\n");
+ rte_errno = ENOMEM;
+ return NULL;
+ }
+
+ intr_handle->alloc_flags = flags;
+ intr_handle->nb_intr = RTE_MAX_RXTX_INTR_VEC_ID;
+
+ return intr_handle;
+}
+
+struct rte_intr_handle *rte_intr_instance_dup(const struct rte_intr_handle *src)
+{
+ struct rte_intr_handle *intr_handle;
+
+ if (src == NULL) {
+ RTE_LOG(DEBUG, EAL, "Source interrupt instance unallocated\n");
+ rte_errno = EINVAL;
+ return NULL;
+ }
+
+ intr_handle = rte_intr_instance_alloc(src->alloc_flags);
+
+ intr_handle->fd = src->fd;
+ intr_handle->vfio_dev_fd = src->vfio_dev_fd;
+ intr_handle->type = src->type;
+ intr_handle->max_intr = src->max_intr;
+ intr_handle->nb_efd = src->nb_efd;
+ intr_handle->efd_counter_size = src->efd_counter_size;
+ memcpy(intr_handle->efds, src->efds, src->nb_intr);
+ memcpy(intr_handle->elist, src->elist, src->nb_intr);
+
+ return intr_handle;
+}
+
+void rte_intr_instance_free(struct rte_intr_handle *intr_handle)
+{
+ if (intr_handle == NULL)
+ return;
+ if (RTE_INTR_INSTANCE_USES_RTE_MEMORY(intr_handle->alloc_flags))
+ rte_free(intr_handle);
+ else
+ free(intr_handle);
+}
+
+int rte_intr_fd_set(struct rte_intr_handle *intr_handle, int fd)
+{
+ CHECK_VALID_INTR_HANDLE(intr_handle);
+
+ intr_handle->fd = fd;
+
+ return 0;
+fail:
+ return -rte_errno;
+}
+
+int rte_intr_fd_get(const struct rte_intr_handle *intr_handle)
+{
+ CHECK_VALID_INTR_HANDLE(intr_handle);
+
+ return intr_handle->fd;
+fail:
+ return -1;
+}
+
+int rte_intr_type_set(struct rte_intr_handle *intr_handle,
+ enum rte_intr_handle_type type)
+{
+ CHECK_VALID_INTR_HANDLE(intr_handle);
+
+ intr_handle->type = type;
+
+ return 0;
+fail:
+ return -rte_errno;
+}
+
+enum rte_intr_handle_type rte_intr_type_get(
+ const struct rte_intr_handle *intr_handle)
+{
+ CHECK_VALID_INTR_HANDLE(intr_handle);
+
+ return intr_handle->type;
+fail:
+ return RTE_INTR_HANDLE_UNKNOWN;
+}
+
+int rte_intr_dev_fd_set(struct rte_intr_handle *intr_handle, int fd)
+{
+ CHECK_VALID_INTR_HANDLE(intr_handle);
+
+ intr_handle->vfio_dev_fd = fd;
+
+ return 0;
+fail:
+ return -rte_errno;
+}
+
+int rte_intr_dev_fd_get(const struct rte_intr_handle *intr_handle)
+{
+ CHECK_VALID_INTR_HANDLE(intr_handle);
+
+ return intr_handle->vfio_dev_fd;
+fail:
+ return -1;
+}
+
+int rte_intr_max_intr_set(struct rte_intr_handle *intr_handle,
+ int max_intr)
+{
+ CHECK_VALID_INTR_HANDLE(intr_handle);
+
+ if (max_intr > intr_handle->nb_intr) {
+ RTE_LOG(DEBUG, EAL, "Maximum interrupt vector ID (%d) exceeds "
+ "the number of available events (%d)\n", max_intr,
+ intr_handle->nb_intr);
+ rte_errno = ERANGE;
+ goto fail;
+ }
+
+ intr_handle->max_intr = max_intr;
+
+ return 0;
+fail:
+ return -rte_errno;
+}
+
+int rte_intr_max_intr_get(const struct rte_intr_handle *intr_handle)
+{
+ CHECK_VALID_INTR_HANDLE(intr_handle);
+
+ return intr_handle->max_intr;
+fail:
+ return -rte_errno;
+}
+
+int rte_intr_nb_efd_set(struct rte_intr_handle *intr_handle, int nb_efd)
+{
+ CHECK_VALID_INTR_HANDLE(intr_handle);
+
+ intr_handle->nb_efd = nb_efd;
+
+ return 0;
+fail:
+ return -rte_errno;
+}
+
+int rte_intr_nb_efd_get(const struct rte_intr_handle *intr_handle)
+{
+ CHECK_VALID_INTR_HANDLE(intr_handle);
+
+ return intr_handle->nb_efd;
+fail:
+ return -rte_errno;
+}
+
+int rte_intr_nb_intr_get(const struct rte_intr_handle *intr_handle)
+{
+ CHECK_VALID_INTR_HANDLE(intr_handle);
+
+ return intr_handle->nb_intr;
+fail:
+ return -rte_errno;
+}
+
+int rte_intr_efd_counter_size_set(struct rte_intr_handle *intr_handle,
+ uint8_t efd_counter_size)
+{
+ CHECK_VALID_INTR_HANDLE(intr_handle);
+
+ intr_handle->efd_counter_size = efd_counter_size;
+
+ return 0;
+fail:
+ return -rte_errno;
+}
+
+int rte_intr_efd_counter_size_get(const struct rte_intr_handle *intr_handle)
+{
+ CHECK_VALID_INTR_HANDLE(intr_handle);
+
+ return intr_handle->efd_counter_size;
+fail:
+ return -rte_errno;
+}
+
+int rte_intr_efds_index_get(const struct rte_intr_handle *intr_handle,
+ int index)
+{
+ CHECK_VALID_INTR_HANDLE(intr_handle);
+
+ if (index >= intr_handle->nb_intr) {
+ RTE_LOG(DEBUG, EAL, "Invalid index %d, max limit %d\n", index,
+ intr_handle->nb_intr);
+ rte_errno = EINVAL;
+ goto fail;
+ }
+
+ return intr_handle->efds[index];
+fail:
+ return -rte_errno;
+}
+
+int rte_intr_efds_index_set(struct rte_intr_handle *intr_handle,
+ int index, int fd)
+{
+ CHECK_VALID_INTR_HANDLE(intr_handle);
+
+ if (index >= intr_handle->nb_intr) {
+ RTE_LOG(DEBUG, EAL, "Invalid index %d, max limit %d\n", index,
+ intr_handle->nb_intr);
+ rte_errno = ERANGE;
+ goto fail;
+ }
+
+ intr_handle->efds[index] = fd;
+
+ return 0;
+fail:
+ return -rte_errno;
+}
+
+struct rte_epoll_event *rte_intr_elist_index_get(
+ struct rte_intr_handle *intr_handle, int index)
+{
+ CHECK_VALID_INTR_HANDLE(intr_handle);
+
+ if (index >= intr_handle->nb_intr) {
+ RTE_LOG(DEBUG, EAL, "Invalid index %d, max limit %d\n", index,
+ intr_handle->nb_intr);
+ rte_errno = ERANGE;
+ goto fail;
+ }
+
+ return &intr_handle->elist[index];
+fail:
+ return NULL;
+}
+
+int rte_intr_elist_index_set(struct rte_intr_handle *intr_handle,
+ int index, struct rte_epoll_event elist)
+{
+ CHECK_VALID_INTR_HANDLE(intr_handle);
+
+ if (index >= intr_handle->nb_intr) {
+ RTE_LOG(DEBUG, EAL, "Invalid index %d, max limit %d\n", index,
+ intr_handle->nb_intr);
+ rte_errno = ERANGE;
+ goto fail;
+ }
+
+ intr_handle->elist[index] = elist;
+
+ return 0;
+fail:
+ return -rte_errno;
+}
+
+int rte_intr_vec_list_alloc(struct rte_intr_handle *intr_handle,
+ const char *name, int size)
+{
+ CHECK_VALID_INTR_HANDLE(intr_handle);
+
+ /* Vector list already allocated */
+ if (intr_handle->intr_vec != NULL)
+ return 0;
+
+ if (size > intr_handle->nb_intr) {
+ RTE_LOG(DEBUG, EAL, "Invalid size %d, max limit %d\n", size,
+ intr_handle->nb_intr);
+ rte_errno = ERANGE;
+ goto fail;
+ }
+
+ if (RTE_INTR_INSTANCE_USES_RTE_MEMORY(intr_handle->alloc_flags))
+ intr_handle->intr_vec = rte_zmalloc(name, size * sizeof(int), 0);
+ else
+ intr_handle->intr_vec = calloc(size, sizeof(int));
+ if (intr_handle->intr_vec == NULL) {
+ RTE_LOG(ERR, EAL, "Failed to allocate %d intr_vec\n", size);
+ rte_errno = ENOMEM;
+ goto fail;
+ }
+
+ intr_handle->vec_list_size = size;
+
+ return 0;
+fail:
+ return -rte_errno;
+}
+
+int rte_intr_vec_list_index_get(const struct rte_intr_handle *intr_handle,
+ int index)
+{
+ CHECK_VALID_INTR_HANDLE(intr_handle);
+
+ if (index >= intr_handle->vec_list_size) {
+ RTE_LOG(DEBUG, EAL, "Index %d greater than vec list size %d\n",
+ index, intr_handle->vec_list_size);
+ rte_errno = ERANGE;
+ goto fail;
+ }
+
+ return intr_handle->intr_vec[index];
+fail:
+ return -rte_errno;
+}
+
+int rte_intr_vec_list_index_set(struct rte_intr_handle *intr_handle,
+ int index, int vec)
+{
+ CHECK_VALID_INTR_HANDLE(intr_handle);
+
+ if (index >= intr_handle->vec_list_size) {
+ RTE_LOG(DEBUG, EAL, "Index %d greater than vec list size %d\n",
+ index, intr_handle->vec_list_size);
+ rte_errno = ERANGE;
+ goto fail;
+ }
+
+ intr_handle->intr_vec[index] = vec;
+
+ return 0;
+fail:
+ return -rte_errno;
+}
+
+void rte_intr_vec_list_free(struct rte_intr_handle *intr_handle)
+{
+ if (intr_handle == NULL)
+ return;
+ if (RTE_INTR_INSTANCE_USES_RTE_MEMORY(intr_handle->alloc_flags))
+ rte_free(intr_handle->intr_vec);
+ else
+ free(intr_handle->intr_vec);
+ intr_handle->intr_vec = NULL;
+ intr_handle->vec_list_size = 0;
+}
+
+void *rte_intr_instance_windows_handle_get(struct rte_intr_handle *intr_handle)
+{
+ CHECK_VALID_INTR_HANDLE(intr_handle);
+
+ return intr_handle->windows_handle;
+fail:
+ return NULL;
+}
+
+int rte_intr_instance_windows_handle_set(struct rte_intr_handle *intr_handle,
+ void *windows_handle)
+{
+ CHECK_VALID_INTR_HANDLE(intr_handle);
+
+ intr_handle->windows_handle = windows_handle;
+
+ return 0;
+fail:
+ return -rte_errno;
+}
diff --git a/lib/eal/common/meson.build b/lib/eal/common/meson.build
index 6d01b0f072..917758cc65 100644
--- a/lib/eal/common/meson.build
+++ b/lib/eal/common/meson.build
@@ -15,6 +15,7 @@ sources += files(
'eal_common_errno.c',
'eal_common_fbarray.c',
'eal_common_hexdump.c',
+ 'eal_common_interrupts.c',
'eal_common_launch.c',
'eal_common_lcore.c',
'eal_common_log.c',
diff --git a/lib/eal/include/meson.build b/lib/eal/include/meson.build
index 88a9eba12f..8e258607b8 100644
--- a/lib/eal/include/meson.build
+++ b/lib/eal/include/meson.build
@@ -19,6 +19,7 @@ headers += files(
'rte_eal_memconfig.h',
'rte_eal_trace.h',
'rte_errno.h',
+ 'rte_epoll.h',
'rte_fbarray.h',
'rte_hexdump.h',
'rte_hypervisor.h',
diff --git a/lib/eal/include/rte_eal_interrupts.h b/lib/eal/include/rte_eal_interrupts.h
index 00bcc19b6d..60bb60ca59 100644
--- a/lib/eal/include/rte_eal_interrupts.h
+++ b/lib/eal/include/rte_eal_interrupts.h
@@ -39,32 +39,6 @@ enum rte_intr_handle_type {
RTE_INTR_HANDLE_MAX /**< count of elements */
};
-#define RTE_INTR_EVENT_ADD 1UL
-#define RTE_INTR_EVENT_DEL 2UL
-
-typedef void (*rte_intr_event_cb_t)(int fd, void *arg);
-
-struct rte_epoll_data {
- uint32_t event; /**< event type */
- void *data; /**< User data */
- rte_intr_event_cb_t cb_fun; /**< IN: callback fun */
- void *cb_arg; /**< IN: callback arg */
-};
-
-enum {
- RTE_EPOLL_INVALID = 0,
- RTE_EPOLL_VALID,
- RTE_EPOLL_EXEC,
-};
-
-/** interrupt epoll event obj, taken by epoll_event.ptr */
-struct rte_epoll_event {
- uint32_t status; /**< OUT: event status */
- int fd; /**< OUT: event fd */
- int epfd; /**< OUT: epoll instance the ev associated with */
- struct rte_epoll_data epdata;
-};
-
/** Handle for interrupts. */
struct rte_intr_handle {
RTE_STD_C11
@@ -79,191 +53,20 @@ struct rte_intr_handle {
};
int fd; /**< interrupt event file descriptor */
};
- void *handle; /**< device driver handle (Windows) */
+ void *windows_handle; /**< device driver handle */
};
+ uint32_t alloc_flags; /**< flags passed at allocation */
enum rte_intr_handle_type type; /**< handle type */
uint32_t max_intr; /**< max interrupt requested */
uint32_t nb_efd; /**< number of available efd(event fd) */
uint8_t efd_counter_size; /**< size of efd counter, used for vdev */
+ uint16_t nb_intr;
+ /**< Max vector count, default RTE_MAX_RXTX_INTR_VEC_ID */
int efds[RTE_MAX_RXTX_INTR_VEC_ID]; /**< intr vectors/efds mapping */
struct rte_epoll_event elist[RTE_MAX_RXTX_INTR_VEC_ID];
/**< intr vector epoll event */
+ uint16_t vec_list_size;
int *intr_vec; /**< intr vector number array */
};
-#define RTE_EPOLL_PER_THREAD -1 /**< to hint using per thread epfd */
-
-/**
- * It waits for events on the epoll instance.
- * Retries if signal received.
- *
- * @param epfd
- * Epoll instance fd on which the caller wait for events.
- * @param events
- * Memory area contains the events that will be available for the caller.
- * @param maxevents
- * Up to maxevents are returned, must greater than zero.
- * @param timeout
- * Specifying a timeout of -1 causes a block indefinitely.
- * Specifying a timeout equal to zero cause to return immediately.
- * @return
- * - On success, returns the number of available event.
- * - On failure, a negative value.
- */
-int
-rte_epoll_wait(int epfd, struct rte_epoll_event *events,
- int maxevents, int timeout);
-
-/**
- * It waits for events on the epoll instance.
- * Does not retry if signal received.
- *
- * @param epfd
- * Epoll instance fd on which the caller wait for events.
- * @param events
- * Memory area contains the events that will be available for the caller.
- * @param maxevents
- * Up to maxevents are returned, must greater than zero.
- * @param timeout
- * Specifying a timeout of -1 causes a block indefinitely.
- * Specifying a timeout equal to zero cause to return immediately.
- * @return
- * - On success, returns the number of available event.
- * - On failure, a negative value.
- */
-__rte_experimental
-int
-rte_epoll_wait_interruptible(int epfd, struct rte_epoll_event *events,
- int maxevents, int timeout);
-
-/**
- * It performs control operations on epoll instance referred by the epfd.
- * It requests that the operation op be performed for the target fd.
- *
- * @param epfd
- * Epoll instance fd on which the caller perform control operations.
- * @param op
- * The operation be performed for the target fd.
- * @param fd
- * The target fd on which the control ops perform.
- * @param event
- * Describes the object linked to the fd.
- * Note: The caller must take care the object deletion after CTL_DEL.
- * @return
- * - On success, zero.
- * - On failure, a negative value.
- */
-int
-rte_epoll_ctl(int epfd, int op, int fd,
- struct rte_epoll_event *event);
-
-/**
- * The function returns the per thread epoll instance.
- *
- * @return
- * epfd the epoll instance referred to.
- */
-int
-rte_intr_tls_epfd(void);
-
-/**
- * @param intr_handle
- * Pointer to the interrupt handle.
- * @param epfd
- * Epoll instance fd which the intr vector associated to.
- * @param op
- * The operation be performed for the vector.
- * Operation type of {ADD, DEL}.
- * @param vec
- * RX intr vector number added to the epoll instance wait list.
- * @param data
- * User raw data.
- * @return
- * - On success, zero.
- * - On failure, a negative value.
- */
-int
-rte_intr_rx_ctl(struct rte_intr_handle *intr_handle,
- int epfd, int op, unsigned int vec, void *data);
-
-/**
- * It deletes registered eventfds.
- *
- * @param intr_handle
- * Pointer to the interrupt handle.
- */
-void
-rte_intr_free_epoll_fd(struct rte_intr_handle *intr_handle);
-
-/**
- * It enables the packet I/O interrupt event if it's necessary.
- * It creates event fd for each interrupt vector when MSIX is used,
- * otherwise it multiplexes a single event fd.
- *
- * @param intr_handle
- * Pointer to the interrupt handle.
- * @param nb_efd
- * Number of interrupt vector trying to enable.
- * The value 0 is not allowed.
- * @return
- * - On success, zero.
- * - On failure, a negative value.
- */
-int
-rte_intr_efd_enable(struct rte_intr_handle *intr_handle, uint32_t nb_efd);
-
-/**
- * It disables the packet I/O interrupt event.
- * It deletes registered eventfds and closes the open fds.
- *
- * @param intr_handle
- * Pointer to the interrupt handle.
- */
-void
-rte_intr_efd_disable(struct rte_intr_handle *intr_handle);
-
-/**
- * The packet I/O interrupt on datapath is enabled or not.
- *
- * @param intr_handle
- * Pointer to the interrupt handle.
- */
-int
-rte_intr_dp_is_en(struct rte_intr_handle *intr_handle);
-
-/**
- * The interrupt handle instance allows other causes or not.
- * Other causes stand for any none packet I/O interrupts.
- *
- * @param intr_handle
- * Pointer to the interrupt handle.
- */
-int
-rte_intr_allow_others(struct rte_intr_handle *intr_handle);
-
-/**
- * The multiple interrupt vector capability of interrupt handle instance.
- * It returns zero if no multiple interrupt vector support.
- *
- * @param intr_handle
- * Pointer to the interrupt handle.
- */
-int
-rte_intr_cap_multiple(struct rte_intr_handle *intr_handle);
-
-/**
- * @warning
- * @b EXPERIMENTAL: this API may change without prior notice
- *
- * @internal
- * Check if currently executing in interrupt context
- *
- * @return
- * - non zero in case of interrupt context
- * - zero in case of process context
- */
-__rte_experimental
-int
-rte_thread_is_intr(void);
-
#endif /* _RTE_EAL_INTERRUPTS_H_ */
diff --git a/lib/eal/include/rte_epoll.h b/lib/eal/include/rte_epoll.h
new file mode 100644
index 0000000000..56b7b6bad6
--- /dev/null
+++ b/lib/eal/include/rte_epoll.h
@@ -0,0 +1,118 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2021 Marvell International Ltd.
+ */
+
+#ifndef __RTE_EPOLL_H__
+#define __RTE_EPOLL_H__
+
+/**
+ * @file
+ * The rte_epoll provides interfaces functions to add delete events,
+ * wait poll for an event.
+ */
+
+#include <stdint.h>
+
+#include <rte_compat.h>
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+#define RTE_INTR_EVENT_ADD 1UL
+#define RTE_INTR_EVENT_DEL 2UL
+
+typedef void (*rte_intr_event_cb_t)(int fd, void *arg);
+
+struct rte_epoll_data {
+ uint32_t event; /**< event type */
+ void *data; /**< User data */
+ rte_intr_event_cb_t cb_fun; /**< IN: callback fun */
+ void *cb_arg; /**< IN: callback arg */
+};
+
+enum {
+ RTE_EPOLL_INVALID = 0,
+ RTE_EPOLL_VALID,
+ RTE_EPOLL_EXEC,
+};
+
+/** interrupt epoll event obj, taken by epoll_event.ptr */
+struct rte_epoll_event {
+ uint32_t status; /**< OUT: event status */
+ int fd; /**< OUT: event fd */
+ int epfd; /**< OUT: epoll instance the ev associated with */
+ struct rte_epoll_data epdata;
+};
+
+#define RTE_EPOLL_PER_THREAD -1 /**< to hint using per thread epfd */
+
+/**
+ * It waits for events on the epoll instance.
+ * Retries if signal received.
+ *
+ * @param epfd
+ * Epoll instance fd on which the caller wait for events.
+ * @param events
+ * Memory area contains the events that will be available for the caller.
+ * @param maxevents
+ * Up to maxevents are returned, must greater than zero.
+ * @param timeout
+ * Specifying a timeout of -1 causes a block indefinitely.
+ * Specifying a timeout equal to zero cause to return immediately.
+ * @return
+ * - On success, returns the number of available event.
+ * - On failure, a negative value.
+ */
+int
+rte_epoll_wait(int epfd, struct rte_epoll_event *events,
+ int maxevents, int timeout);
+
+/**
+ * It waits for events on the epoll instance.
+ * Does not retry if signal received.
+ *
+ * @param epfd
+ * Epoll instance fd on which the caller wait for events.
+ * @param events
+ * Memory area contains the events that will be available for the caller.
+ * @param maxevents
+ * Up to maxevents are returned, must greater than zero.
+ * @param timeout
+ * Specifying a timeout of -1 causes a block indefinitely.
+ * Specifying a timeout equal to zero cause to return immediately.
+ * @return
+ * - On success, returns the number of available event.
+ * - On failure, a negative value.
+ */
+__rte_experimental
+int
+rte_epoll_wait_interruptible(int epfd, struct rte_epoll_event *events,
+ int maxevents, int timeout);
+
+/**
+ * It performs control operations on epoll instance referred by the epfd.
+ * It requests that the operation op be performed for the target fd.
+ *
+ * @param epfd
+ * Epoll instance fd on which the caller perform control operations.
+ * @param op
+ * The operation be performed for the target fd.
+ * @param fd
+ * The target fd on which the control ops perform.
+ * @param event
+ * Describes the object linked to the fd.
+ * Note: The caller must take care the object deletion after CTL_DEL.
+ * @return
+ * - On success, zero.
+ * - On failure, a negative value.
+ */
+int
+rte_epoll_ctl(int epfd, int op, int fd,
+ struct rte_epoll_event *event);
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif /* __RTE_EPOLL_H__ */
diff --git a/lib/eal/include/rte_interrupts.h b/lib/eal/include/rte_interrupts.h
index cc3bf45d8c..a515a8c073 100644
--- a/lib/eal/include/rte_interrupts.h
+++ b/lib/eal/include/rte_interrupts.h
@@ -5,8 +5,12 @@
#ifndef _RTE_INTERRUPTS_H_
#define _RTE_INTERRUPTS_H_
+#include <stdbool.h>
+
+#include <rte_bitops.h>
#include <rte_common.h>
#include <rte_compat.h>
+#include <rte_epoll.h>
/**
* @file
@@ -22,6 +26,15 @@ extern "C" {
/** Interrupt handle */
struct rte_intr_handle;
+/** Interrupt instance allocation flags
+ * @see rte_intr_instance_alloc
+ */
+
+/** Interrupt instance will not be shared between primary and secondary processes. */
+#define RTE_INTR_INSTANCE_F_PRIVATE UINT32_C(0)
+/** Interrupt instance will be shared between primary and secondary processes. */
+#define RTE_INTR_INSTANCE_F_SHARED RTE_BIT32(0)
+
/** Function to be registered for the specific interrupt */
typedef void (*rte_intr_callback_fn)(void *cb_arg);
@@ -163,6 +176,620 @@ int rte_intr_disable(const struct rte_intr_handle *intr_handle);
__rte_experimental
int rte_intr_ack(const struct rte_intr_handle *intr_handle);
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Check if currently executing in interrupt context
+ *
+ * @return
+ * - non zero in case of interrupt context
+ * - zero in case of process context
+ */
+__rte_experimental
+int
+rte_thread_is_intr(void);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * It allocates memory for interrupt instance. API takes flag as an argument
+ * which define from where memory should be allocated i.e. using DPDK memory
+ * management library APIs or normal heap allocation.
+ * Default memory allocation for event fds and event list array is done which
+ * can be realloced later based on size of MSIX interrupts supported by a PCI
+ * device.
+ *
+ * This function should be called from application or driver, before calling
+ * any of the interrupt APIs.
+ *
+ * @param flags
+ * See RTE_INTR_INSTANCE_F_* flags definitions.
+ *
+ * @return
+ * - On success, address of interrupt handle.
+ * - On failure, NULL.
+ */
+__rte_experimental
+struct rte_intr_handle *
+rte_intr_instance_alloc(uint32_t flags);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * This API is used to free the memory allocated for interrupt handle
+ * resources.
+ *
+ * @param intr_handle
+ * Interrupt handle address.
+ *
+ */
+__rte_experimental
+void
+rte_intr_instance_free(struct rte_intr_handle *intr_handle);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * This API is used to set the fd field of interrupt handle with user provided
+ * file descriptor.
+ *
+ * @param intr_handle
+ * pointer to the interrupt handle.
+ * @param fd
+ * file descriptor value provided by user.
+ *
+ * @return
+ * - On success, zero.
+ * - On failure, a negative value and rte_errno is set.
+ */
+__rte_experimental
+int
+rte_intr_fd_set(struct rte_intr_handle *intr_handle, int fd);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Returns the fd field of the given interrupt handle instance.
+ *
+ * @param intr_handle
+ * pointer to the interrupt handle.
+ *
+ * @return
+ * - On success, fd field.
+ * - On failure, a negative value.
+ */
+__rte_experimental
+int
+rte_intr_fd_get(const struct rte_intr_handle *intr_handle);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * This API is used to set the type field of interrupt handle with user provided
+ * interrupt type.
+ *
+ * @param intr_handle
+ * pointer to the interrupt handle.
+ * @param type
+ * interrupt type
+ *
+ * @return
+ * - On success, zero.
+ * - On failure, a negative value and rte_errno is set.
+ */
+__rte_experimental
+int
+rte_intr_type_set(struct rte_intr_handle *intr_handle,
+ enum rte_intr_handle_type type);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Returns the type field of the given interrupt handle instance.
+ *
+ * @param intr_handle
+ * pointer to the interrupt handle.
+ *
+ * @return
+ * - On success, interrupt type
+ * - On failure, RTE_INTR_HANDLE_UNKNOWN.
+ */
+__rte_experimental
+enum rte_intr_handle_type
+rte_intr_type_get(const struct rte_intr_handle *intr_handle);
+
+/**
+ * @internal
+ * The function returns the per thread epoll instance.
+ *
+ * @return
+ * epfd the epoll instance referred to.
+ */
+__rte_internal
+int
+rte_intr_tls_epfd(void);
+
+/**
+ * @internal
+ * @param intr_handle
+ * Pointer to the interrupt handle.
+ * @param epfd
+ * Epoll instance fd which the intr vector associated to.
+ * @param op
+ * The operation be performed for the vector.
+ * Operation type of {ADD, DEL}.
+ * @param vec
+ * RX intr vector number added to the epoll instance wait list.
+ * @param data
+ * User raw data.
+ * @return
+ * - On success, zero.
+ * - On failure, a negative value.
+ */
+__rte_internal
+int
+rte_intr_rx_ctl(struct rte_intr_handle *intr_handle,
+ int epfd, int op, unsigned int vec, void *data);
+
+/**
+ * @internal
+ * It deletes registered eventfds.
+ *
+ * @param intr_handle
+ * Pointer to the interrupt handle.
+ */
+__rte_internal
+void
+rte_intr_free_epoll_fd(struct rte_intr_handle *intr_handle);
+
+/**
+ * @internal
+ * It enables the packet I/O interrupt event if it's necessary.
+ * It creates event fd for each interrupt vector when MSIX is used,
+ * otherwise it multiplexes a single event fd.
+ *
+ * @param intr_handle
+ * Pointer to the interrupt handle.
+ * @param nb_efd
+ * Number of interrupt vector trying to enable.
+ * The value 0 is not allowed.
+ * @return
+ * - On success, zero.
+ * - On failure, a negative value.
+ */
+__rte_internal
+int
+rte_intr_efd_enable(struct rte_intr_handle *intr_handle, uint32_t nb_efd);
+
+/**
+ * @internal
+ * It disables the packet I/O interrupt event.
+ * It deletes registered eventfds and closes the open fds.
+ *
+ * @param intr_handle
+ * Pointer to the interrupt handle.
+ */
+__rte_internal
+void
+rte_intr_efd_disable(struct rte_intr_handle *intr_handle);
+
+/**
+ * @internal
+ * The packet I/O interrupt on datapath is enabled or not.
+ *
+ * @param intr_handle
+ * Pointer to the interrupt handle.
+ */
+__rte_internal
+int
+rte_intr_dp_is_en(struct rte_intr_handle *intr_handle);
+
+/**
+ * @internal
+ * The interrupt handle instance allows other causes or not.
+ * Other causes stand for any none packet I/O interrupts.
+ *
+ * @param intr_handle
+ * Pointer to the interrupt handle.
+ */
+__rte_internal
+int
+rte_intr_allow_others(struct rte_intr_handle *intr_handle);
+
+/**
+ * @internal
+ * The multiple interrupt vector capability of interrupt handle instance.
+ * It returns zero if no multiple interrupt vector support.
+ *
+ * @param intr_handle
+ * Pointer to the interrupt handle.
+ */
+__rte_internal
+int
+rte_intr_cap_multiple(struct rte_intr_handle *intr_handle);
+
+/**
+ * @internal
+ * Creates a clone of src by allocating a new handle and copying src content.
+ *
+ * @param src
+ * Source interrupt handle to be cloned.
+ *
+ * @return
+ * - On success, address of interrupt handle.
+ * - On failure, NULL.
+ */
+__rte_internal
+struct rte_intr_handle *
+rte_intr_instance_dup(const struct rte_intr_handle *src);
+
+/**
+ * @internal
+ * This API is used to set the device fd field of interrupt handle with user
+ * provided dev fd. Device fd corresponds to VFIO device fd or UIO config fd.
+ *
+ * @param intr_handle
+ * pointer to the interrupt handle.
+ * @param fd
+ * interrupt type
+ *
+ * @return
+ * - On success, zero.
+ * - On failure, a negative value and rte_errno is set.
+ */
+__rte_internal
+int
+rte_intr_dev_fd_set(struct rte_intr_handle *intr_handle, int fd);
+
+/**
+ * @internal
+ * Returns the device fd field of the given interrupt handle instance.
+ *
+ * @param intr_handle
+ * pointer to the interrupt handle.
+ *
+ * @return
+ * - On success, dev fd.
+ * - On failure, a negative value and rte_errno is set.
+ */
+__rte_internal
+int
+rte_intr_dev_fd_get(const struct rte_intr_handle *intr_handle);
+
+/**
+ * @internal
+ * This API is used to set the max intr field of interrupt handle with user
+ * provided max intr value.
+ *
+ * @param intr_handle
+ * pointer to the interrupt handle.
+ * @param max_intr
+ * interrupt type
+ *
+ * @return
+ * - On success, zero.
+ * - On failure, a negative value and rte_errno is set.
+ */
+__rte_internal
+int
+rte_intr_max_intr_set(struct rte_intr_handle *intr_handle, int max_intr);
+
+/**
+ * @internal
+ * Returns the max intr field of the given interrupt handle instance.
+ *
+ * @param intr_handle
+ * pointer to the interrupt handle.
+ *
+ * @return
+ * - On success, max intr.
+ * - On failure, a negative value and rte_errno is set.
+ */
+__rte_internal
+int
+rte_intr_max_intr_get(const struct rte_intr_handle *intr_handle);
+
+/**
+ * @internal
+ * This API is used to set the number of event fd field of interrupt handle
+ * with user provided available event file descriptor value.
+ *
+ * @param intr_handle
+ * pointer to the interrupt handle.
+ * @param nb_efd
+ * Available event fd
+ *
+ * @return
+ * - On success, zero.
+ * - On failure, a negative value and rte_errno is set.
+ */
+__rte_internal
+int
+rte_intr_nb_efd_set(struct rte_intr_handle *intr_handle, int nb_efd);
+
+/**
+ * @internal
+ * Returns the number of available event fd field of the given interrupt handle
+ * instance.
+ *
+ * @param intr_handle
+ * pointer to the interrupt handle.
+ *
+ * @return
+ * - On success, nb_efd
+ * - On failure, a negative value and rte_errno is set.
+ */
+__rte_internal
+int
+rte_intr_nb_efd_get(const struct rte_intr_handle *intr_handle);
+
+/**
+ * @internal
+ * Returns the number of interrupt vector field of the given interrupt handle
+ * instance. This field is to configured on device probe time, and based on
+ * this value efds and elist arrays are dynamically allocated. By default
+ * this value is set to RTE_MAX_RXTX_INTR_VEC_ID.
+ * For eg. in case of PCI device, its msix size is queried and efds/elist
+ * arrays are allocated accordingly.
+ *
+ * @param intr_handle
+ * pointer to the interrupt handle.
+ *
+ * @return
+ * - On success, nb_intr
+ * - On failure, a negative value and rte_errno is set.
+ */
+__rte_internal
+int
+rte_intr_nb_intr_get(const struct rte_intr_handle *intr_handle);
+
+/**
+ * @internal
+ * This API is used to set the event fd counter size field of interrupt handle
+ * with user provided efd counter size.
+ *
+ * @param intr_handle
+ * pointer to the interrupt handle.
+ * @param efd_counter_size
+ * size of efd counter.
+ *
+ * @return
+ * - On success, zero.
+ * - On failure, a negative value and rte_errno is set.
+ */
+__rte_internal
+int
+rte_intr_efd_counter_size_set(struct rte_intr_handle *intr_handle,
+ uint8_t efd_counter_size);
+
+/**
+ * @internal
+ * Returns the event fd counter size field of the given interrupt handle
+ * instance.
+ *
+ * @param intr_handle
+ * pointer to the interrupt handle.
+ *
+ * @return
+ * - On success, efd_counter_size
+ * - On failure, a negative value and rte_errno is set.
+ */
+__rte_internal
+int
+rte_intr_efd_counter_size_get(const struct rte_intr_handle *intr_handle);
+
+/**
+ * @internal
+ * This API is used to set the event fd array index with the given fd.
+ *
+ * @param intr_handle
+ * pointer to the interrupt handle.
+ * @param index
+ * efds array index to be set
+ * @param fd
+ * event fd
+ *
+ * @return
+ * - On success, zero.
+ * - On failure, a negative value and rte_errno is set.
+ */
+__rte_internal
+int
+rte_intr_efds_index_set(struct rte_intr_handle *intr_handle, int index, int fd);
+
+/**
+ * @internal
+ * Returns the fd value of event fds array at a given index.
+ *
+ * @param intr_handle
+ * pointer to the interrupt handle.
+ * @param index
+ * efds array index to be returned
+ *
+ * @return
+ * - On success, fd
+ * - On failure, a negative value and rte_errno is set.
+ */
+__rte_internal
+int
+rte_intr_efds_index_get(const struct rte_intr_handle *intr_handle, int index);
+
+/**
+ * @internal
+ * This API is used to set the epoll event object array index with the given
+ * elist instance.
+ *
+ * @param intr_handle
+ * pointer to the interrupt handle.
+ * @param index
+ * elist array index to be set
+ * @param elist
+ * epoll event instance of struct rte_epoll_event
+ *
+ * @return
+ * - On success, zero.
+ * - On failure, a negative value and rte_errno is set.
+ */
+__rte_internal
+int
+rte_intr_elist_index_set(struct rte_intr_handle *intr_handle, int index,
+ struct rte_epoll_event elist);
+
+/**
+ * @internal
+ * Returns the address of epoll event instance from elist array at a given
+ * index.
+ *
+ * @param intr_handle
+ * pointer to the interrupt handle.
+ * @param index
+ * elist array index to be returned
+ *
+ * @return
+ * - On success, elist
+ * - On failure, a negative value and rte_errno is set.
+ */
+__rte_internal
+struct rte_epoll_event *
+rte_intr_elist_index_get(struct rte_intr_handle *intr_handle, int index);
+
+/**
+ * @internal
+ * Allocates the memory of interrupt vector list array, with size defining the
+ * number of elements required in the array.
+ *
+ * @param intr_handle
+ * pointer to the interrupt handle.
+ * @param name
+ * Name assigned to the allocation, or NULL.
+ * @param size
+ * Number of element required in the array.
+ *
+ * @return
+ * - On success, zero
+ * - On failure, a negative value and rte_errno is set.
+ */
+__rte_internal
+int
+rte_intr_vec_list_alloc(struct rte_intr_handle *intr_handle, const char *name,
+ int size);
+
+/**
+ * @internal
+ * Sets the vector value at given index of interrupt vector list field of given
+ * interrupt handle.
+ *
+ * @param intr_handle
+ * pointer to the interrupt handle.
+ * @param index
+ * intr_vec array index to be set
+ * @param vec
+ * Interrupt vector value.
+ *
+ * @return
+ * - On success, zero
+ * - On failure, a negative value and rte_errno is set.
+ */
+__rte_internal
+int
+rte_intr_vec_list_index_set(struct rte_intr_handle *intr_handle, int index,
+ int vec);
+
+/**
+ * @internal
+ * Returns the vector value at the given index of interrupt vector list array.
+ *
+ * @param intr_handle
+ * pointer to the interrupt handle.
+ * @param index
+ * intr_vec array index to be returned
+ *
+ * @return
+ * - On success, interrupt vector
+ * - On failure, a negative value and rte_errno is set.
+ */
+__rte_internal
+int
+rte_intr_vec_list_index_get(const struct rte_intr_handle *intr_handle,
+ int index);
+
+/**
+ * @internal
+ * Frees the memory allocated for interrupt vector list array.
+ *
+ * @param intr_handle
+ * pointer to the interrupt handle.
+ *
+ * @return
+ * - On success, zero
+ * - On failure, a negative value and rte_errno is set.
+ */
+__rte_internal
+void
+rte_intr_vec_list_free(struct rte_intr_handle *intr_handle);
+
+/**
+ * @internal
+ * Reallocates the size efds and elist array based on size provided by user.
+ * By default efds and elist array are allocated with default size
+ * RTE_MAX_RXTX_INTR_VEC_ID on interrupt handle array creation. Later on device
+ * probe, device may have capability of more interrupts than
+ * RTE_MAX_RXTX_INTR_VEC_ID. Using this API, PMDs can reallocate the arrays as
+ * per the max interrupts capability of device.
+ *
+ * @param intr_handle
+ * pointer to the interrupt handle.
+ * @param size
+ * efds and elist array size.
+ *
+ * @return
+ * - On success, zero
+ * - On failure, a negative value and rte_errno is set.
+ */
+__rte_internal
+int
+rte_intr_event_list_update(struct rte_intr_handle *intr_handle, int size);
+
+/**
+ * @internal
+ * This API returns the Windows handle of the given interrupt instance.
+ *
+ * @param intr_handle
+ * pointer to the interrupt handle.
+ *
+ * @return
+ * - On success, Windows handle.
+ * - On failure, NULL.
+ */
+__rte_internal
+void *
+rte_intr_instance_windows_handle_get(struct rte_intr_handle *intr_handle);
+
+/**
+ * @internal
+ * This API sets the Windows handle for the given interrupt instance.
+ *
+ * @param intr_handle
+ * pointer to the interrupt handle.
+ * @param windows_handle
+ * Windows handle to be set.
+ *
+ * @return
+ * - On success, zero
+ * - On failure, a negative value and rte_errno is set.
+ */
+__rte_internal
+int
+rte_intr_instance_windows_handle_set(struct rte_intr_handle *intr_handle,
+ void *windows_handle);
+
#ifdef __cplusplus
}
#endif
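To illustrate how a PMD is expected to use the vector list helpers declared
above, a condensed, hypothetical sketch follows (intr_handle, nb_rxq and the
error handling are illustrative only, not taken from any driver in this
series):

	/* size the intr_vec array to the number of Rx queues */
	if (rte_intr_vec_list_alloc(intr_handle, "intr_vec", nb_rxq))
		return -rte_errno;

	/* map each Rx queue to its interrupt vector */
	for (i = 0; i < nb_rxq; i++) {
		if (rte_intr_vec_list_index_set(intr_handle, i,
				RTE_INTR_VEC_RXTX_OFFSET + i))
			return -rte_errno;
	}

	/* on device close, release the array */
	rte_intr_vec_list_free(intr_handle);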
diff --git a/lib/eal/version.map b/lib/eal/version.map
index 38f7de83e1..9d43655b66 100644
--- a/lib/eal/version.map
+++ b/lib/eal/version.map
@@ -109,18 +109,10 @@ DPDK_22 {
rte_hexdump;
rte_hypervisor_get;
rte_hypervisor_get_name; # WINDOWS_NO_EXPORT
- rte_intr_allow_others;
rte_intr_callback_register;
rte_intr_callback_unregister;
- rte_intr_cap_multiple;
rte_intr_disable;
- rte_intr_dp_is_en;
- rte_intr_efd_disable;
- rte_intr_efd_enable;
rte_intr_enable;
- rte_intr_free_epoll_fd;
- rte_intr_rx_ctl;
- rte_intr_tls_epfd;
rte_keepalive_create; # WINDOWS_NO_EXPORT
rte_keepalive_dispatch_pings; # WINDOWS_NO_EXPORT
rte_keepalive_mark_alive; # WINDOWS_NO_EXPORT
@@ -420,12 +412,49 @@ EXPERIMENTAL {
# added in 21.08
rte_power_monitor_multi; # WINDOWS_NO_EXPORT
+
+ # added in 21.11
+ rte_intr_fd_get;
+ rte_intr_fd_set;
+ rte_intr_instance_alloc;
+ rte_intr_instance_free;
+ rte_intr_type_get;
+ rte_intr_type_set;
};
INTERNAL {
global:
rte_firmware_read;
+ rte_intr_allow_others;
+ rte_intr_cap_multiple;
+ rte_intr_dev_fd_get;
+ rte_intr_dev_fd_set;
+ rte_intr_dp_is_en;
+ rte_intr_efd_counter_size_set;
+ rte_intr_efd_counter_size_get;
+ rte_intr_efd_disable;
+ rte_intr_efd_enable;
+ rte_intr_efds_index_get;
+ rte_intr_efds_index_set;
+ rte_intr_elist_index_get;
+ rte_intr_elist_index_set;
+ rte_intr_event_list_update;
+ rte_intr_free_epoll_fd;
+ rte_intr_instance_dup;
+ rte_intr_instance_windows_handle_get;
+ rte_intr_instance_windows_handle_set;
+ rte_intr_max_intr_get;
+ rte_intr_max_intr_set;
+ rte_intr_nb_efd_get;
+ rte_intr_nb_efd_set;
+ rte_intr_nb_intr_get;
+ rte_intr_rx_ctl;
+ rte_intr_tls_epfd;
+ rte_intr_vec_list_alloc;
+ rte_intr_vec_list_free;
+ rte_intr_vec_list_index_get;
+ rte_intr_vec_list_index_set;
rte_mem_lock;
rte_mem_map;
rte_mem_page_size;
--
2.23.0
^ permalink raw reply [flat|nested] 152+ messages in thread
* [dpdk-dev] [PATCH v8 2/9] interrupts: remove direct access to interrupt handle
2021-10-25 14:27 ` [dpdk-dev] [PATCH v8 0/9] make rte_intr_handle internal David Marchand
2021-10-25 14:27 ` [dpdk-dev] [PATCH v8 1/9] interrupts: add allocator and accessors David Marchand
@ 2021-10-25 14:27 ` David Marchand
2021-10-25 14:27 ` [dpdk-dev] [PATCH v8 3/9] test/interrupts: " David Marchand
` (8 subsequent siblings)
10 siblings, 0 replies; 152+ messages in thread
From: David Marchand @ 2021-10-25 14:27 UTC (permalink / raw)
To: hkalra, dev; +Cc: dmitry.kozliuk, rasland, thomas, Bruce Richardson
From: Harman Kalra <hkalra@marvell.com>
Making changes to the interrupt framework to use interrupt handle
APIs to get/set any field.
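The mechanical conversion applied in both files follows the same shape; a
condensed before/after sketch (not an actual hunk from this patch):

	/* before: direct access to struct rte_intr_handle fields */
	if (intr_handle->fd < 0 || intr_handle->uio_cfg_fd < 0)
		return -1;

	/* after: the same check through the accessor APIs */
	if (rte_intr_fd_get(intr_handle) < 0 ||
			rte_intr_dev_fd_get(intr_handle) < 0)
		return -1;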
Signed-off-by: Harman Kalra <hkalra@marvell.com>
Signed-off-by: David Marchand <david.marchand@redhat.com>
---
Changes since v6:
- fixed compilation on FreeBSD,
Changes since v5:
- used new helper rte_intr_instance_dup,
---
lib/eal/freebsd/eal_interrupts.c | 85 +++++----
lib/eal/linux/eal_interrupts.c | 304 +++++++++++++++++--------------
2 files changed, 219 insertions(+), 170 deletions(-)
diff --git a/lib/eal/freebsd/eal_interrupts.c b/lib/eal/freebsd/eal_interrupts.c
index 86810845fe..10aa91cc09 100644
--- a/lib/eal/freebsd/eal_interrupts.c
+++ b/lib/eal/freebsd/eal_interrupts.c
@@ -40,7 +40,7 @@ struct rte_intr_callback {
struct rte_intr_source {
TAILQ_ENTRY(rte_intr_source) next;
- struct rte_intr_handle intr_handle; /**< interrupt handle */
+ struct rte_intr_handle *intr_handle; /**< interrupt handle */
struct rte_intr_cb_list callbacks; /**< user callbacks */
uint32_t active;
};
@@ -60,7 +60,7 @@ static int
intr_source_to_kevent(const struct rte_intr_handle *ih, struct kevent *ke)
{
/* alarm callbacks are special case */
- if (ih->type == RTE_INTR_HANDLE_ALARM) {
+ if (rte_intr_type_get(ih) == RTE_INTR_HANDLE_ALARM) {
uint64_t timeout_ns;
/* get soonest alarm timeout */
@@ -75,7 +75,7 @@ intr_source_to_kevent(const struct rte_intr_handle *ih, struct kevent *ke)
} else {
ke->filter = EVFILT_READ;
}
- ke->ident = ih->fd;
+ ke->ident = rte_intr_fd_get(ih);
return 0;
}
@@ -89,7 +89,7 @@ rte_intr_callback_register(const struct rte_intr_handle *intr_handle,
int ret = 0, add_event = 0;
/* first do parameter checking */
- if (intr_handle == NULL || intr_handle->fd < 0 || cb == NULL) {
+ if (rte_intr_fd_get(intr_handle) < 0 || cb == NULL) {
RTE_LOG(ERR, EAL,
"Registering with invalid input parameter\n");
return -EINVAL;
@@ -103,7 +103,7 @@ rte_intr_callback_register(const struct rte_intr_handle *intr_handle,
/* find the source for this intr_handle */
TAILQ_FOREACH(src, &intr_sources, next) {
- if (src->intr_handle.fd == intr_handle->fd)
+ if (rte_intr_fd_get(src->intr_handle) == rte_intr_fd_get(intr_handle))
break;
}
@@ -112,8 +112,9 @@ rte_intr_callback_register(const struct rte_intr_handle *intr_handle,
* thing on the list should be eal_alarm_callback() and we may
* be called just to reset the timer.
*/
- if (src != NULL && src->intr_handle.type == RTE_INTR_HANDLE_ALARM &&
- !TAILQ_EMPTY(&src->callbacks)) {
+ if (src != NULL &&
+ rte_intr_type_get(src->intr_handle) == RTE_INTR_HANDLE_ALARM &&
+ !TAILQ_EMPTY(&src->callbacks)) {
callback = NULL;
} else {
/* allocate a new interrupt callback entity */
@@ -135,7 +136,14 @@ rte_intr_callback_register(const struct rte_intr_handle *intr_handle,
ret = -ENOMEM;
goto fail;
} else {
- src->intr_handle = *intr_handle;
+ src->intr_handle = rte_intr_instance_dup(intr_handle);
+ if (src->intr_handle == NULL) {
+ RTE_LOG(ERR, EAL, "Can not create intr instance\n");
+ ret = -ENOMEM;
+ free(src);
+ src = NULL;
+ goto fail;
+ }
TAILQ_INIT(&src->callbacks);
TAILQ_INSERT_TAIL(&intr_sources, src, next);
}
@@ -151,7 +159,8 @@ rte_intr_callback_register(const struct rte_intr_handle *intr_handle,
/* add events to the queue. timer events are special as we need to
* re-set the timer.
*/
- if (add_event || src->intr_handle.type == RTE_INTR_HANDLE_ALARM) {
+ if (add_event ||
+ rte_intr_type_get(src->intr_handle) == RTE_INTR_HANDLE_ALARM) {
struct kevent ke;
memset(&ke, 0, sizeof(ke));
@@ -173,12 +182,11 @@ rte_intr_callback_register(const struct rte_intr_handle *intr_handle,
*/
if (errno == ENODEV)
RTE_LOG(DEBUG, EAL, "Interrupt handle %d not supported\n",
- src->intr_handle.fd);
+ rte_intr_fd_get(src->intr_handle));
else
- RTE_LOG(ERR, EAL, "Error adding fd %d "
- "kevent, %s\n",
- src->intr_handle.fd,
- strerror(errno));
+ RTE_LOG(ERR, EAL, "Error adding fd %d kevent, %s\n",
+ rte_intr_fd_get(src->intr_handle),
+ strerror(errno));
ret = -errno;
goto fail;
}
@@ -213,7 +221,7 @@ rte_intr_callback_unregister_pending(const struct rte_intr_handle *intr_handle,
struct rte_intr_callback *cb, *next;
/* do parameter checking first */
- if (intr_handle == NULL || intr_handle->fd < 0) {
+ if (rte_intr_fd_get(intr_handle) < 0) {
RTE_LOG(ERR, EAL,
"Unregistering with invalid input parameter\n");
return -EINVAL;
@@ -228,7 +236,7 @@ rte_intr_callback_unregister_pending(const struct rte_intr_handle *intr_handle,
/* check if the insterrupt source for the fd is existent */
TAILQ_FOREACH(src, &intr_sources, next)
- if (src->intr_handle.fd == intr_handle->fd)
+ if (rte_intr_fd_get(src->intr_handle) == rte_intr_fd_get(intr_handle))
break;
/* No interrupt source registered for the fd */
@@ -268,7 +276,7 @@ rte_intr_callback_unregister(const struct rte_intr_handle *intr_handle,
struct rte_intr_callback *cb, *next;
/* do parameter checking first */
- if (intr_handle == NULL || intr_handle->fd < 0) {
+ if (rte_intr_fd_get(intr_handle) < 0) {
RTE_LOG(ERR, EAL,
"Unregistering with invalid input parameter\n");
return -EINVAL;
@@ -282,7 +290,7 @@ rte_intr_callback_unregister(const struct rte_intr_handle *intr_handle,
/* check if the insterrupt source for the fd is existent */
TAILQ_FOREACH(src, &intr_sources, next)
- if (src->intr_handle.fd == intr_handle->fd)
+ if (rte_intr_fd_get(src->intr_handle) == rte_intr_fd_get(intr_handle))
break;
/* No interrupt source registered for the fd */
@@ -314,7 +322,8 @@ rte_intr_callback_unregister(const struct rte_intr_handle *intr_handle,
*/
if (kevent(kq, &ke, 1, NULL, 0, NULL) < 0) {
RTE_LOG(ERR, EAL, "Error removing fd %d kevent, %s\n",
- src->intr_handle.fd, strerror(errno));
+ rte_intr_fd_get(src->intr_handle),
+ strerror(errno));
/* removing non-existent even is an expected condition
* in some circumstances (e.g. oneshot events).
*/
@@ -365,17 +374,18 @@ rte_intr_enable(const struct rte_intr_handle *intr_handle)
if (intr_handle == NULL)
return -1;
- if (intr_handle->type == RTE_INTR_HANDLE_VDEV) {
+ if (rte_intr_type_get(intr_handle) == RTE_INTR_HANDLE_VDEV) {
rc = 0;
goto out;
}
- if (intr_handle->fd < 0 || intr_handle->uio_cfg_fd < 0) {
+ if (rte_intr_fd_get(intr_handle) < 0 ||
+ rte_intr_dev_fd_get(intr_handle) < 0) {
rc = -1;
goto out;
}
- switch (intr_handle->type) {
+ switch (rte_intr_type_get(intr_handle)) {
/* not used at this moment */
case RTE_INTR_HANDLE_ALARM:
rc = -1;
@@ -386,9 +396,8 @@ rte_intr_enable(const struct rte_intr_handle *intr_handle)
break;
/* unknown handle type */
default:
- RTE_LOG(ERR, EAL,
- "Unknown handle type of fd %d\n",
- intr_handle->fd);
+ RTE_LOG(ERR, EAL, "Unknown handle type of fd %d\n",
+ rte_intr_fd_get(intr_handle));
rc = -1;
break;
}
@@ -406,17 +415,18 @@ rte_intr_disable(const struct rte_intr_handle *intr_handle)
if (intr_handle == NULL)
return -1;
- if (intr_handle->type == RTE_INTR_HANDLE_VDEV) {
+ if (rte_intr_type_get(intr_handle) == RTE_INTR_HANDLE_VDEV) {
rc = 0;
goto out;
}
- if (intr_handle->fd < 0 || intr_handle->uio_cfg_fd < 0) {
+ if (rte_intr_fd_get(intr_handle) < 0 ||
+ rte_intr_dev_fd_get(intr_handle) < 0) {
rc = -1;
goto out;
}
- switch (intr_handle->type) {
+ switch (rte_intr_type_get(intr_handle)) {
/* not used at this moment */
case RTE_INTR_HANDLE_ALARM:
rc = -1;
@@ -427,9 +437,8 @@ rte_intr_disable(const struct rte_intr_handle *intr_handle)
break;
/* unknown handle type */
default:
- RTE_LOG(ERR, EAL,
- "Unknown handle type of fd %d\n",
- intr_handle->fd);
+ RTE_LOG(ERR, EAL, "Unknown handle type of fd %d\n",
+ rte_intr_fd_get(intr_handle));
rc = -1;
break;
}
@@ -441,7 +450,7 @@ rte_intr_disable(const struct rte_intr_handle *intr_handle)
int
rte_intr_ack(const struct rte_intr_handle *intr_handle)
{
- if (intr_handle && intr_handle->type == RTE_INTR_HANDLE_VDEV)
+ if (rte_intr_type_get(intr_handle) == RTE_INTR_HANDLE_VDEV)
return 0;
return -1;
@@ -463,7 +472,7 @@ eal_intr_process_interrupts(struct kevent *events, int nfds)
rte_spinlock_lock(&intr_lock);
TAILQ_FOREACH(src, &intr_sources, next)
- if (src->intr_handle.fd == event_fd)
+ if (rte_intr_fd_get(src->intr_handle) == event_fd)
break;
if (src == NULL) {
rte_spinlock_unlock(&intr_lock);
@@ -475,7 +484,7 @@ eal_intr_process_interrupts(struct kevent *events, int nfds)
rte_spinlock_unlock(&intr_lock);
/* set the length to be read dor different handle type */
- switch (src->intr_handle.type) {
+ switch (rte_intr_type_get(src->intr_handle)) {
case RTE_INTR_HANDLE_ALARM:
bytes_read = 0;
call = true;
@@ -546,7 +555,7 @@ eal_intr_process_interrupts(struct kevent *events, int nfds)
/* mark for deletion from the queue */
ke.flags = EV_DELETE;
- if (intr_source_to_kevent(&src->intr_handle, &ke) < 0) {
+ if (intr_source_to_kevent(src->intr_handle, &ke) < 0) {
RTE_LOG(ERR, EAL, "Cannot convert to kevent\n");
rte_spinlock_unlock(&intr_lock);
return;
@@ -556,8 +565,8 @@ eal_intr_process_interrupts(struct kevent *events, int nfds)
* remove intr file descriptor from wait list.
*/
if (kevent(kq, &ke, 1, NULL, 0, NULL) < 0) {
- RTE_LOG(ERR, EAL, "Error removing fd %d kevent, "
- "%s\n", src->intr_handle.fd,
+ RTE_LOG(ERR, EAL, "Error removing fd %d kevent, %s\n",
+ rte_intr_fd_get(src->intr_handle),
strerror(errno));
/* removing non-existent even is an expected
* condition in some circumstances
@@ -567,7 +576,7 @@ eal_intr_process_interrupts(struct kevent *events, int nfds)
TAILQ_REMOVE(&src->callbacks, cb, next);
if (cb->ucb_fn)
- cb->ucb_fn(&src->intr_handle, cb->cb_arg);
+ cb->ucb_fn(src->intr_handle, cb->cb_arg);
free(cb);
}
}
diff --git a/lib/eal/linux/eal_interrupts.c b/lib/eal/linux/eal_interrupts.c
index 22b3b7bcd9..f72661e1f0 100644
--- a/lib/eal/linux/eal_interrupts.c
+++ b/lib/eal/linux/eal_interrupts.c
@@ -82,7 +82,7 @@ struct rte_intr_callback {
struct rte_intr_source {
TAILQ_ENTRY(rte_intr_source) next;
- struct rte_intr_handle intr_handle; /**< interrupt handle */
+ struct rte_intr_handle *intr_handle; /**< interrupt handle */
struct rte_intr_cb_list callbacks; /**< user callbacks */
uint32_t active;
};
@@ -112,7 +112,7 @@ static int
vfio_enable_intx(const struct rte_intr_handle *intr_handle) {
struct vfio_irq_set *irq_set;
char irq_set_buf[IRQ_SET_BUF_LEN];
- int len, ret;
+ int len, ret, vfio_dev_fd;
int *fd_ptr;
len = sizeof(irq_set_buf);
@@ -125,13 +125,14 @@ vfio_enable_intx(const struct rte_intr_handle *intr_handle) {
irq_set->index = VFIO_PCI_INTX_IRQ_INDEX;
irq_set->start = 0;
fd_ptr = (int *) &irq_set->data;
- *fd_ptr = intr_handle->fd;
+ *fd_ptr = rte_intr_fd_get(intr_handle);
- ret = ioctl(intr_handle->vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
+ vfio_dev_fd = rte_intr_dev_fd_get(intr_handle);
+ ret = ioctl(vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
if (ret) {
RTE_LOG(ERR, EAL, "Error enabling INTx interrupts for fd %d\n",
- intr_handle->fd);
+ rte_intr_fd_get(intr_handle));
return -1;
}
@@ -144,11 +145,11 @@ vfio_enable_intx(const struct rte_intr_handle *intr_handle) {
irq_set->index = VFIO_PCI_INTX_IRQ_INDEX;
irq_set->start = 0;
- ret = ioctl(intr_handle->vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
+ ret = ioctl(vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
if (ret) {
RTE_LOG(ERR, EAL, "Error unmasking INTx interrupts for fd %d\n",
- intr_handle->fd);
+ rte_intr_fd_get(intr_handle));
return -1;
}
return 0;
@@ -159,7 +160,7 @@ static int
vfio_disable_intx(const struct rte_intr_handle *intr_handle) {
struct vfio_irq_set *irq_set;
char irq_set_buf[IRQ_SET_BUF_LEN];
- int len, ret;
+ int len, ret, vfio_dev_fd;
len = sizeof(struct vfio_irq_set);
@@ -171,11 +172,12 @@ vfio_disable_intx(const struct rte_intr_handle *intr_handle) {
irq_set->index = VFIO_PCI_INTX_IRQ_INDEX;
irq_set->start = 0;
- ret = ioctl(intr_handle->vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
+ vfio_dev_fd = rte_intr_dev_fd_get(intr_handle);
+ ret = ioctl(vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
if (ret) {
RTE_LOG(ERR, EAL, "Error masking INTx interrupts for fd %d\n",
- intr_handle->fd);
+ rte_intr_fd_get(intr_handle));
return -1;
}
@@ -187,11 +189,11 @@ vfio_disable_intx(const struct rte_intr_handle *intr_handle) {
irq_set->index = VFIO_PCI_INTX_IRQ_INDEX;
irq_set->start = 0;
- ret = ioctl(intr_handle->vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
+ ret = ioctl(vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
if (ret) {
- RTE_LOG(ERR, EAL,
- "Error disabling INTx interrupts for fd %d\n", intr_handle->fd);
+ RTE_LOG(ERR, EAL, "Error disabling INTx interrupts for fd %d\n",
+ rte_intr_fd_get(intr_handle));
return -1;
}
return 0;
@@ -202,6 +204,7 @@ static int
vfio_ack_intx(const struct rte_intr_handle *intr_handle)
{
struct vfio_irq_set irq_set;
+ int vfio_dev_fd;
/* unmask INTx */
memset(&irq_set, 0, sizeof(irq_set));
@@ -211,9 +214,10 @@ vfio_ack_intx(const struct rte_intr_handle *intr_handle)
irq_set.index = VFIO_PCI_INTX_IRQ_INDEX;
irq_set.start = 0;
- if (ioctl(intr_handle->vfio_dev_fd, VFIO_DEVICE_SET_IRQS, &irq_set)) {
+ vfio_dev_fd = rte_intr_dev_fd_get(intr_handle);
+ if (ioctl(vfio_dev_fd, VFIO_DEVICE_SET_IRQS, &irq_set)) {
RTE_LOG(ERR, EAL, "Error unmasking INTx interrupts for fd %d\n",
- intr_handle->fd);
+ rte_intr_fd_get(intr_handle));
return -1;
}
return 0;
@@ -225,7 +229,7 @@ vfio_enable_msi(const struct rte_intr_handle *intr_handle) {
int len, ret;
char irq_set_buf[IRQ_SET_BUF_LEN];
struct vfio_irq_set *irq_set;
- int *fd_ptr;
+ int *fd_ptr, vfio_dev_fd;
len = sizeof(irq_set_buf);
@@ -236,13 +240,14 @@ vfio_enable_msi(const struct rte_intr_handle *intr_handle) {
irq_set->index = VFIO_PCI_MSI_IRQ_INDEX;
irq_set->start = 0;
fd_ptr = (int *) &irq_set->data;
- *fd_ptr = intr_handle->fd;
+ *fd_ptr = rte_intr_fd_get(intr_handle);
- ret = ioctl(intr_handle->vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
+ vfio_dev_fd = rte_intr_dev_fd_get(intr_handle);
+ ret = ioctl(vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
if (ret) {
RTE_LOG(ERR, EAL, "Error enabling MSI interrupts for fd %d\n",
- intr_handle->fd);
+ rte_intr_fd_get(intr_handle));
return -1;
}
return 0;
@@ -253,7 +258,7 @@ static int
vfio_disable_msi(const struct rte_intr_handle *intr_handle) {
struct vfio_irq_set *irq_set;
char irq_set_buf[IRQ_SET_BUF_LEN];
- int len, ret;
+ int len, ret, vfio_dev_fd;
len = sizeof(struct vfio_irq_set);
@@ -264,11 +269,11 @@ vfio_disable_msi(const struct rte_intr_handle *intr_handle) {
irq_set->index = VFIO_PCI_MSI_IRQ_INDEX;
irq_set->start = 0;
- ret = ioctl(intr_handle->vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
-
+ vfio_dev_fd = rte_intr_dev_fd_get(intr_handle);
+ ret = ioctl(vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
if (ret)
- RTE_LOG(ERR, EAL,
- "Error disabling MSI interrupts for fd %d\n", intr_handle->fd);
+ RTE_LOG(ERR, EAL, "Error disabling MSI interrupts for fd %d\n",
+ rte_intr_fd_get(intr_handle));
return ret;
}
@@ -279,30 +284,35 @@ vfio_enable_msix(const struct rte_intr_handle *intr_handle) {
int len, ret;
char irq_set_buf[MSIX_IRQ_SET_BUF_LEN];
struct vfio_irq_set *irq_set;
- int *fd_ptr;
+ int *fd_ptr, vfio_dev_fd, i;
len = sizeof(irq_set_buf);
irq_set = (struct vfio_irq_set *) irq_set_buf;
irq_set->argsz = len;
/* 0 < irq_set->count < RTE_MAX_RXTX_INTR_VEC_ID + 1 */
- irq_set->count = intr_handle->max_intr ?
- (intr_handle->max_intr > RTE_MAX_RXTX_INTR_VEC_ID + 1 ?
- RTE_MAX_RXTX_INTR_VEC_ID + 1 : intr_handle->max_intr) : 1;
+ irq_set->count = rte_intr_max_intr_get(intr_handle) ?
+ (rte_intr_max_intr_get(intr_handle) >
+ RTE_MAX_RXTX_INTR_VEC_ID + 1 ? RTE_MAX_RXTX_INTR_VEC_ID + 1 :
+ rte_intr_max_intr_get(intr_handle)) : 1;
+
irq_set->flags = VFIO_IRQ_SET_DATA_EVENTFD | VFIO_IRQ_SET_ACTION_TRIGGER;
irq_set->index = VFIO_PCI_MSIX_IRQ_INDEX;
irq_set->start = 0;
fd_ptr = (int *) &irq_set->data;
/* INTR vector offset 0 reserve for non-efds mapping */
- fd_ptr[RTE_INTR_VEC_ZERO_OFFSET] = intr_handle->fd;
- memcpy(&fd_ptr[RTE_INTR_VEC_RXTX_OFFSET], intr_handle->efds,
- sizeof(*intr_handle->efds) * intr_handle->nb_efd);
+ fd_ptr[RTE_INTR_VEC_ZERO_OFFSET] = rte_intr_fd_get(intr_handle);
+ for (i = 0; i < rte_intr_nb_efd_get(intr_handle); i++) {
+ fd_ptr[RTE_INTR_VEC_RXTX_OFFSET + i] =
+ rte_intr_efds_index_get(intr_handle, i);
+ }
- ret = ioctl(intr_handle->vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
+ vfio_dev_fd = rte_intr_dev_fd_get(intr_handle);
+ ret = ioctl(vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
if (ret) {
RTE_LOG(ERR, EAL, "Error enabling MSI-X interrupts for fd %d\n",
- intr_handle->fd);
+ rte_intr_fd_get(intr_handle));
return -1;
}
@@ -314,7 +324,7 @@ static int
vfio_disable_msix(const struct rte_intr_handle *intr_handle) {
struct vfio_irq_set *irq_set;
char irq_set_buf[MSIX_IRQ_SET_BUF_LEN];
- int len, ret;
+ int len, ret, vfio_dev_fd;
len = sizeof(struct vfio_irq_set);
@@ -325,11 +335,12 @@ vfio_disable_msix(const struct rte_intr_handle *intr_handle) {
irq_set->index = VFIO_PCI_MSIX_IRQ_INDEX;
irq_set->start = 0;
- ret = ioctl(intr_handle->vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
+ vfio_dev_fd = rte_intr_dev_fd_get(intr_handle);
+ ret = ioctl(vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
if (ret)
- RTE_LOG(ERR, EAL,
- "Error disabling MSI-X interrupts for fd %d\n", intr_handle->fd);
+ RTE_LOG(ERR, EAL, "Error disabling MSI-X interrupts for fd %d\n",
+ rte_intr_fd_get(intr_handle));
return ret;
}
@@ -342,7 +353,7 @@ vfio_enable_req(const struct rte_intr_handle *intr_handle)
int len, ret;
char irq_set_buf[IRQ_SET_BUF_LEN];
struct vfio_irq_set *irq_set;
- int *fd_ptr;
+ int *fd_ptr, vfio_dev_fd;
len = sizeof(irq_set_buf);
@@ -354,13 +365,14 @@ vfio_enable_req(const struct rte_intr_handle *intr_handle)
irq_set->index = VFIO_PCI_REQ_IRQ_INDEX;
irq_set->start = 0;
fd_ptr = (int *) &irq_set->data;
- *fd_ptr = intr_handle->fd;
+ *fd_ptr = rte_intr_fd_get(intr_handle);
- ret = ioctl(intr_handle->vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
+ vfio_dev_fd = rte_intr_dev_fd_get(intr_handle);
+ ret = ioctl(vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
if (ret) {
RTE_LOG(ERR, EAL, "Error enabling req interrupts for fd %d\n",
- intr_handle->fd);
+ rte_intr_fd_get(intr_handle));
return -1;
}
@@ -373,7 +385,7 @@ vfio_disable_req(const struct rte_intr_handle *intr_handle)
{
struct vfio_irq_set *irq_set;
char irq_set_buf[IRQ_SET_BUF_LEN];
- int len, ret;
+ int len, ret, vfio_dev_fd;
len = sizeof(struct vfio_irq_set);
@@ -384,11 +396,12 @@ vfio_disable_req(const struct rte_intr_handle *intr_handle)
irq_set->index = VFIO_PCI_REQ_IRQ_INDEX;
irq_set->start = 0;
- ret = ioctl(intr_handle->vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
+ vfio_dev_fd = rte_intr_dev_fd_get(intr_handle);
+ ret = ioctl(vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
if (ret)
RTE_LOG(ERR, EAL, "Error disabling req interrupts for fd %d\n",
- intr_handle->fd);
+ rte_intr_fd_get(intr_handle));
return ret;
}
@@ -399,20 +412,22 @@ static int
uio_intx_intr_disable(const struct rte_intr_handle *intr_handle)
{
unsigned char command_high;
+ int uio_cfg_fd;
/* use UIO config file descriptor for uio_pci_generic */
- if (pread(intr_handle->uio_cfg_fd, &command_high, 1, 5) != 1) {
+ uio_cfg_fd = rte_intr_dev_fd_get(intr_handle);
+ if (pread(uio_cfg_fd, &command_high, 1, 5) != 1) {
RTE_LOG(ERR, EAL,
"Error reading interrupts status for fd %d\n",
- intr_handle->uio_cfg_fd);
+ uio_cfg_fd);
return -1;
}
/* disable interrupts */
command_high |= 0x4;
- if (pwrite(intr_handle->uio_cfg_fd, &command_high, 1, 5) != 1) {
+ if (pwrite(uio_cfg_fd, &command_high, 1, 5) != 1) {
RTE_LOG(ERR, EAL,
"Error disabling interrupts for fd %d\n",
- intr_handle->uio_cfg_fd);
+ uio_cfg_fd);
return -1;
}
@@ -423,20 +438,22 @@ static int
uio_intx_intr_enable(const struct rte_intr_handle *intr_handle)
{
unsigned char command_high;
+ int uio_cfg_fd;
/* use UIO config file descriptor for uio_pci_generic */
- if (pread(intr_handle->uio_cfg_fd, &command_high, 1, 5) != 1) {
+ uio_cfg_fd = rte_intr_dev_fd_get(intr_handle);
+ if (pread(uio_cfg_fd, &command_high, 1, 5) != 1) {
RTE_LOG(ERR, EAL,
"Error reading interrupts status for fd %d\n",
- intr_handle->uio_cfg_fd);
+ uio_cfg_fd);
return -1;
}
/* enable interrupts */
command_high &= ~0x4;
- if (pwrite(intr_handle->uio_cfg_fd, &command_high, 1, 5) != 1) {
+ if (pwrite(uio_cfg_fd, &command_high, 1, 5) != 1) {
RTE_LOG(ERR, EAL,
"Error enabling interrupts for fd %d\n",
- intr_handle->uio_cfg_fd);
+ uio_cfg_fd);
return -1;
}
@@ -448,10 +465,9 @@ uio_intr_disable(const struct rte_intr_handle *intr_handle)
{
const int value = 0;
- if (write(intr_handle->fd, &value, sizeof(value)) < 0) {
- RTE_LOG(ERR, EAL,
- "Error disabling interrupts for fd %d (%s)\n",
- intr_handle->fd, strerror(errno));
+ if (write(rte_intr_fd_get(intr_handle), &value, sizeof(value)) < 0) {
+ RTE_LOG(ERR, EAL, "Error disabling interrupts for fd %d (%s)\n",
+ rte_intr_fd_get(intr_handle), strerror(errno));
return -1;
}
return 0;
@@ -462,10 +478,9 @@ uio_intr_enable(const struct rte_intr_handle *intr_handle)
{
const int value = 1;
- if (write(intr_handle->fd, &value, sizeof(value)) < 0) {
- RTE_LOG(ERR, EAL,
- "Error enabling interrupts for fd %d (%s)\n",
- intr_handle->fd, strerror(errno));
+ if (write(rte_intr_fd_get(intr_handle), &value, sizeof(value)) < 0) {
+ RTE_LOG(ERR, EAL, "Error enabling interrupts for fd %d (%s)\n",
+ rte_intr_fd_get(intr_handle), strerror(errno));
return -1;
}
return 0;
@@ -482,9 +497,8 @@ rte_intr_callback_register(const struct rte_intr_handle *intr_handle,
wake_thread = 0;
/* first do parameter checking */
- if (intr_handle == NULL || intr_handle->fd < 0 || cb == NULL) {
- RTE_LOG(ERR, EAL,
- "Registering with invalid input parameter\n");
+ if (rte_intr_fd_get(intr_handle) < 0 || cb == NULL) {
+ RTE_LOG(ERR, EAL, "Registering with invalid input parameter\n");
return -EINVAL;
}
@@ -503,7 +517,7 @@ rte_intr_callback_register(const struct rte_intr_handle *intr_handle,
/* check if there is at least one callback registered for the fd */
TAILQ_FOREACH(src, &intr_sources, next) {
- if (src->intr_handle.fd == intr_handle->fd) {
+ if (rte_intr_fd_get(src->intr_handle) == rte_intr_fd_get(intr_handle)) {
/* we had no interrupts for this */
if (TAILQ_EMPTY(&src->callbacks))
wake_thread = 1;
@@ -519,15 +533,26 @@ rte_intr_callback_register(const struct rte_intr_handle *intr_handle,
src = calloc(1, sizeof(*src));
if (src == NULL) {
RTE_LOG(ERR, EAL, "Can not allocate memory\n");
- free(callback);
ret = -ENOMEM;
+ free(callback);
+ callback = NULL;
} else {
- src->intr_handle = *intr_handle;
- TAILQ_INIT(&src->callbacks);
- TAILQ_INSERT_TAIL(&(src->callbacks), callback, next);
- TAILQ_INSERT_TAIL(&intr_sources, src, next);
- wake_thread = 1;
- ret = 0;
+ src->intr_handle = rte_intr_instance_dup(intr_handle);
+ if (src->intr_handle == NULL) {
+ RTE_LOG(ERR, EAL, "Can not create intr instance\n");
+ ret = -ENOMEM;
+ free(callback);
+ callback = NULL;
+ free(src);
+ src = NULL;
+ } else {
+ TAILQ_INIT(&src->callbacks);
+ TAILQ_INSERT_TAIL(&(src->callbacks), callback,
+ next);
+ TAILQ_INSERT_TAIL(&intr_sources, src, next);
+ wake_thread = 1;
+ ret = 0;
+ }
}
}
@@ -555,18 +580,18 @@ rte_intr_callback_unregister_pending(const struct rte_intr_handle *intr_handle,
struct rte_intr_callback *cb, *next;
/* do parameter checking first */
- if (intr_handle == NULL || intr_handle->fd < 0) {
- RTE_LOG(ERR, EAL,
- "Unregistering with invalid input parameter\n");
+ if (rte_intr_fd_get(intr_handle) < 0) {
+ RTE_LOG(ERR, EAL, "Unregistering with invalid input parameter\n");
return -EINVAL;
}
rte_spinlock_lock(&intr_lock);
/* check if the insterrupt source for the fd is existent */
- TAILQ_FOREACH(src, &intr_sources, next)
- if (src->intr_handle.fd == intr_handle->fd)
+ TAILQ_FOREACH(src, &intr_sources, next) {
+ if (rte_intr_fd_get(src->intr_handle) == rte_intr_fd_get(intr_handle))
break;
+ }
/* No interrupt source registered for the fd */
if (src == NULL) {
@@ -605,9 +630,8 @@ rte_intr_callback_unregister(const struct rte_intr_handle *intr_handle,
struct rte_intr_callback *cb, *next;
/* do parameter checking first */
- if (intr_handle == NULL || intr_handle->fd < 0) {
- RTE_LOG(ERR, EAL,
- "Unregistering with invalid input parameter\n");
+ if (rte_intr_fd_get(intr_handle) < 0) {
+ RTE_LOG(ERR, EAL, "Unregistering with invalid input parameter\n");
return -EINVAL;
}
@@ -615,7 +639,7 @@ rte_intr_callback_unregister(const struct rte_intr_handle *intr_handle,
/* check if the insterrupt source for the fd is existent */
TAILQ_FOREACH(src, &intr_sources, next)
- if (src->intr_handle.fd == intr_handle->fd)
+ if (rte_intr_fd_get(src->intr_handle) == rte_intr_fd_get(intr_handle))
break;
/* No interrupt source registered for the fd */
@@ -646,6 +670,7 @@ rte_intr_callback_unregister(const struct rte_intr_handle *intr_handle,
/* all callbacks for that source are removed. */
if (TAILQ_EMPTY(&src->callbacks)) {
TAILQ_REMOVE(&intr_sources, src, next);
+ rte_intr_instance_free(src->intr_handle);
free(src);
}
}
@@ -677,22 +702,23 @@ rte_intr_callback_unregister_sync(const struct rte_intr_handle *intr_handle,
int
rte_intr_enable(const struct rte_intr_handle *intr_handle)
{
- int rc = 0;
+ int rc = 0, uio_cfg_fd;
if (intr_handle == NULL)
return -1;
- if (intr_handle->type == RTE_INTR_HANDLE_VDEV) {
+ if (rte_intr_type_get(intr_handle) == RTE_INTR_HANDLE_VDEV) {
rc = 0;
goto out;
}
- if (intr_handle->fd < 0 || intr_handle->uio_cfg_fd < 0) {
+ uio_cfg_fd = rte_intr_dev_fd_get(intr_handle);
+ if (rte_intr_fd_get(intr_handle) < 0 || uio_cfg_fd < 0) {
rc = -1;
goto out;
}
- switch (intr_handle->type){
+ switch (rte_intr_type_get(intr_handle)) {
/* write to the uio fd to enable the interrupt */
case RTE_INTR_HANDLE_UIO:
if (uio_intr_enable(intr_handle))
@@ -732,9 +758,8 @@ rte_intr_enable(const struct rte_intr_handle *intr_handle)
break;
/* unknown handle type */
default:
- RTE_LOG(ERR, EAL,
- "Unknown handle type of fd %d\n",
- intr_handle->fd);
+ RTE_LOG(ERR, EAL, "Unknown handle type of fd %d\n",
+ rte_intr_fd_get(intr_handle));
rc = -1;
break;
}
@@ -757,13 +782,16 @@ rte_intr_enable(const struct rte_intr_handle *intr_handle)
int
rte_intr_ack(const struct rte_intr_handle *intr_handle)
{
- if (intr_handle && intr_handle->type == RTE_INTR_HANDLE_VDEV)
+ int uio_cfg_fd;
+
+ if (rte_intr_type_get(intr_handle) == RTE_INTR_HANDLE_VDEV)
return 0;
- if (!intr_handle || intr_handle->fd < 0 || intr_handle->uio_cfg_fd < 0)
+ uio_cfg_fd = rte_intr_dev_fd_get(intr_handle);
+ if (rte_intr_fd_get(intr_handle) < 0 || uio_cfg_fd < 0)
return -1;
- switch (intr_handle->type) {
+ switch (rte_intr_type_get(intr_handle)) {
/* Both acking and enabling are same for UIO */
case RTE_INTR_HANDLE_UIO:
if (uio_intr_enable(intr_handle))
@@ -796,7 +824,7 @@ rte_intr_ack(const struct rte_intr_handle *intr_handle)
/* unknown handle type */
default:
RTE_LOG(ERR, EAL, "Unknown handle type of fd %d\n",
- intr_handle->fd);
+ rte_intr_fd_get(intr_handle));
return -1;
}
@@ -806,22 +834,23 @@ rte_intr_ack(const struct rte_intr_handle *intr_handle)
int
rte_intr_disable(const struct rte_intr_handle *intr_handle)
{
- int rc = 0;
+ int rc = 0, uio_cfg_fd;
if (intr_handle == NULL)
return -1;
- if (intr_handle->type == RTE_INTR_HANDLE_VDEV) {
+ if (rte_intr_type_get(intr_handle) == RTE_INTR_HANDLE_VDEV) {
rc = 0;
goto out;
}
- if (intr_handle->fd < 0 || intr_handle->uio_cfg_fd < 0) {
+ uio_cfg_fd = rte_intr_dev_fd_get(intr_handle);
+ if (rte_intr_fd_get(intr_handle) < 0 || uio_cfg_fd < 0) {
rc = -1;
goto out;
}
- switch (intr_handle->type){
+ switch (rte_intr_type_get(intr_handle)) {
/* write to the uio fd to disable the interrupt */
case RTE_INTR_HANDLE_UIO:
if (uio_intr_disable(intr_handle))
@@ -861,9 +890,8 @@ rte_intr_disable(const struct rte_intr_handle *intr_handle)
break;
/* unknown handle type */
default:
- RTE_LOG(ERR, EAL,
- "Unknown handle type of fd %d\n",
- intr_handle->fd);
+ RTE_LOG(ERR, EAL, "Unknown handle type of fd %d\n",
+ rte_intr_fd_get(intr_handle));
rc = -1;
break;
}
@@ -896,8 +924,7 @@ eal_intr_process_interrupts(struct epoll_event *events, int nfds)
}
rte_spinlock_lock(&intr_lock);
TAILQ_FOREACH(src, &intr_sources, next)
- if (src->intr_handle.fd ==
- events[n].data.fd)
+ if (rte_intr_fd_get(src->intr_handle) == events[n].data.fd)
break;
if (src == NULL){
rte_spinlock_unlock(&intr_lock);
@@ -909,7 +936,7 @@ eal_intr_process_interrupts(struct epoll_event *events, int nfds)
rte_spinlock_unlock(&intr_lock);
/* set the length to be read dor different handle type */
- switch (src->intr_handle.type) {
+ switch (rte_intr_type_get(src->intr_handle)) {
case RTE_INTR_HANDLE_UIO:
case RTE_INTR_HANDLE_UIO_INTX:
bytes_read = sizeof(buf.uio_intr_count);
@@ -973,6 +1000,7 @@ eal_intr_process_interrupts(struct epoll_event *events, int nfds)
TAILQ_REMOVE(&src->callbacks, cb, next);
free(cb);
}
+ rte_intr_instance_free(src->intr_handle);
free(src);
return -1;
} else if (bytes_read == 0)
@@ -1012,7 +1040,7 @@ eal_intr_process_interrupts(struct epoll_event *events, int nfds)
if (cb->pending_delete) {
TAILQ_REMOVE(&src->callbacks, cb, next);
if (cb->ucb_fn)
- cb->ucb_fn(&src->intr_handle, cb->cb_arg);
+ cb->ucb_fn(src->intr_handle, cb->cb_arg);
free(cb);
rv++;
}
@@ -1021,6 +1049,7 @@ eal_intr_process_interrupts(struct epoll_event *events, int nfds)
/* all callbacks for that source are removed. */
if (TAILQ_EMPTY(&src->callbacks)) {
TAILQ_REMOVE(&intr_sources, src, next);
+ rte_intr_instance_free(src->intr_handle);
free(src);
}
@@ -1123,16 +1152,17 @@ eal_intr_thread_main(__rte_unused void *arg)
continue; /* skip those with no callbacks */
memset(&ev, 0, sizeof(ev));
ev.events = EPOLLIN | EPOLLPRI | EPOLLRDHUP | EPOLLHUP;
- ev.data.fd = src->intr_handle.fd;
+ ev.data.fd = rte_intr_fd_get(src->intr_handle);
/**
* add all the uio device file descriptor
* into wait list.
*/
if (epoll_ctl(pfd, EPOLL_CTL_ADD,
- src->intr_handle.fd, &ev) < 0){
+ rte_intr_fd_get(src->intr_handle), &ev) < 0) {
rte_panic("Error adding fd %d epoll_ctl, %s\n",
- src->intr_handle.fd, strerror(errno));
+ rte_intr_fd_get(src->intr_handle),
+ strerror(errno));
}
else
numfds++;
@@ -1185,7 +1215,7 @@ eal_intr_proc_rxtx_intr(int fd, const struct rte_intr_handle *intr_handle)
int bytes_read = 0;
int nbytes;
- switch (intr_handle->type) {
+ switch (rte_intr_type_get(intr_handle)) {
case RTE_INTR_HANDLE_UIO:
case RTE_INTR_HANDLE_UIO_INTX:
bytes_read = sizeof(buf.uio_intr_count);
@@ -1198,7 +1228,7 @@ eal_intr_proc_rxtx_intr(int fd, const struct rte_intr_handle *intr_handle)
break;
#endif
case RTE_INTR_HANDLE_VDEV:
- bytes_read = intr_handle->efd_counter_size;
+ bytes_read = rte_intr_efd_counter_size_get(intr_handle);
/* For vdev, number of bytes to read is set by driver */
break;
case RTE_INTR_HANDLE_EXT:
@@ -1419,8 +1449,8 @@ rte_intr_rx_ctl(struct rte_intr_handle *intr_handle, int epfd,
efd_idx = (vec >= RTE_INTR_VEC_RXTX_OFFSET) ?
(vec - RTE_INTR_VEC_RXTX_OFFSET) : vec;
- if (!intr_handle || intr_handle->nb_efd == 0 ||
- efd_idx >= intr_handle->nb_efd) {
+ if (intr_handle == NULL || rte_intr_nb_efd_get(intr_handle) == 0 ||
+ efd_idx >= (unsigned int)rte_intr_nb_efd_get(intr_handle)) {
RTE_LOG(ERR, EAL, "Wrong intr vector number.\n");
return -EPERM;
}
@@ -1428,7 +1458,7 @@ rte_intr_rx_ctl(struct rte_intr_handle *intr_handle, int epfd,
switch (op) {
case RTE_INTR_EVENT_ADD:
epfd_op = EPOLL_CTL_ADD;
- rev = &intr_handle->elist[efd_idx];
+ rev = rte_intr_elist_index_get(intr_handle, efd_idx);
if (__atomic_load_n(&rev->status,
__ATOMIC_RELAXED) != RTE_EPOLL_INVALID) {
RTE_LOG(INFO, EAL, "Event already been added.\n");
@@ -1442,7 +1472,7 @@ rte_intr_rx_ctl(struct rte_intr_handle *intr_handle, int epfd,
epdata->cb_fun = (rte_intr_event_cb_t)eal_intr_proc_rxtx_intr;
epdata->cb_arg = (void *)intr_handle;
rc = rte_epoll_ctl(epfd, epfd_op,
- intr_handle->efds[efd_idx], rev);
+ rte_intr_efds_index_get(intr_handle, efd_idx), rev);
if (!rc)
RTE_LOG(DEBUG, EAL,
"efd %d associated with vec %d added on epfd %d"
@@ -1452,7 +1482,7 @@ rte_intr_rx_ctl(struct rte_intr_handle *intr_handle, int epfd,
break;
case RTE_INTR_EVENT_DEL:
epfd_op = EPOLL_CTL_DEL;
- rev = &intr_handle->elist[efd_idx];
+ rev = rte_intr_elist_index_get(intr_handle, efd_idx);
if (__atomic_load_n(&rev->status,
__ATOMIC_RELAXED) == RTE_EPOLL_INVALID) {
RTE_LOG(INFO, EAL, "Event does not exist.\n");
@@ -1477,8 +1507,8 @@ rte_intr_free_epoll_fd(struct rte_intr_handle *intr_handle)
uint32_t i;
struct rte_epoll_event *rev;
- for (i = 0; i < intr_handle->nb_efd; i++) {
- rev = &intr_handle->elist[i];
+ for (i = 0; i < (uint32_t)rte_intr_nb_efd_get(intr_handle); i++) {
+ rev = rte_intr_elist_index_get(intr_handle, i);
if (__atomic_load_n(&rev->status,
__ATOMIC_RELAXED) == RTE_EPOLL_INVALID)
continue;
@@ -1498,7 +1528,7 @@ rte_intr_efd_enable(struct rte_intr_handle *intr_handle, uint32_t nb_efd)
assert(nb_efd != 0);
- if (intr_handle->type == RTE_INTR_HANDLE_VFIO_MSIX) {
+ if (rte_intr_type_get(intr_handle) == RTE_INTR_HANDLE_VFIO_MSIX) {
for (i = 0; i < n; i++) {
fd = eventfd(0, EFD_NONBLOCK | EFD_CLOEXEC);
if (fd < 0) {
@@ -1507,21 +1537,30 @@ rte_intr_efd_enable(struct rte_intr_handle *intr_handle, uint32_t nb_efd)
errno, strerror(errno));
return -errno;
}
- intr_handle->efds[i] = fd;
+
+ if (rte_intr_efds_index_set(intr_handle, i, fd))
+ return -rte_errno;
}
- intr_handle->nb_efd = n;
- intr_handle->max_intr = NB_OTHER_INTR + n;
- } else if (intr_handle->type == RTE_INTR_HANDLE_VDEV) {
+
+ if (rte_intr_nb_efd_set(intr_handle, n))
+ return -rte_errno;
+
+ if (rte_intr_max_intr_set(intr_handle, NB_OTHER_INTR + n))
+ return -rte_errno;
+ } else if (rte_intr_type_get(intr_handle) == RTE_INTR_HANDLE_VDEV) {
/* only check, initialization would be done in vdev driver.*/
- if (intr_handle->efd_counter_size >
+ if ((uint64_t)rte_intr_efd_counter_size_get(intr_handle) >
sizeof(union rte_intr_read_buffer)) {
RTE_LOG(ERR, EAL, "the efd_counter_size is oversized");
return -EINVAL;
}
} else {
- intr_handle->efds[0] = intr_handle->fd;
- intr_handle->nb_efd = RTE_MIN(nb_efd, 1U);
- intr_handle->max_intr = NB_OTHER_INTR;
+ if (rte_intr_efds_index_set(intr_handle, 0, rte_intr_fd_get(intr_handle)))
+ return -rte_errno;
+ if (rte_intr_nb_efd_set(intr_handle, RTE_MIN(nb_efd, 1U)))
+ return -rte_errno;
+ if (rte_intr_max_intr_set(intr_handle, NB_OTHER_INTR))
+ return -rte_errno;
}
return 0;
@@ -1533,18 +1572,18 @@ rte_intr_efd_disable(struct rte_intr_handle *intr_handle)
uint32_t i;
rte_intr_free_epoll_fd(intr_handle);
- if (intr_handle->max_intr > intr_handle->nb_efd) {
- for (i = 0; i < intr_handle->nb_efd; i++)
- close(intr_handle->efds[i]);
+ if (rte_intr_max_intr_get(intr_handle) > rte_intr_nb_efd_get(intr_handle)) {
+ for (i = 0; i < (uint32_t)rte_intr_nb_efd_get(intr_handle); i++)
+ close(rte_intr_efds_index_get(intr_handle, i));
}
- intr_handle->nb_efd = 0;
- intr_handle->max_intr = 0;
+ rte_intr_nb_efd_set(intr_handle, 0);
+ rte_intr_max_intr_set(intr_handle, 0);
}
int
rte_intr_dp_is_en(struct rte_intr_handle *intr_handle)
{
- return !(!intr_handle->nb_efd);
+ return !(!rte_intr_nb_efd_get(intr_handle));
}
int
@@ -1553,16 +1592,17 @@ rte_intr_allow_others(struct rte_intr_handle *intr_handle)
if (!rte_intr_dp_is_en(intr_handle))
return 1;
else
- return !!(intr_handle->max_intr - intr_handle->nb_efd);
+ return !!(rte_intr_max_intr_get(intr_handle) -
+ rte_intr_nb_efd_get(intr_handle));
}
int
rte_intr_cap_multiple(struct rte_intr_handle *intr_handle)
{
- if (intr_handle->type == RTE_INTR_HANDLE_VFIO_MSIX)
+ if (rte_intr_type_get(intr_handle) == RTE_INTR_HANDLE_VFIO_MSIX)
return 1;
- if (intr_handle->type == RTE_INTR_HANDLE_VDEV)
+ if (rte_intr_type_get(intr_handle) == RTE_INTR_HANDLE_VDEV)
return 1;
return 0;
--
2.23.0
^ permalink raw reply [flat|nested] 152+ messages in thread
* [dpdk-dev] [PATCH v8 3/9] test/interrupts: remove direct access to interrupt handle
2021-10-25 14:27 ` [dpdk-dev] [PATCH v8 0/9] make rte_intr_handle internal David Marchand
2021-10-25 14:27 ` [dpdk-dev] [PATCH v8 1/9] interrupts: add allocator and accessors David Marchand
2021-10-25 14:27 ` [dpdk-dev] [PATCH v8 2/9] interrupts: remove direct access to interrupt handle David Marchand
@ 2021-10-25 14:27 ` David Marchand
2021-10-25 14:27 ` [dpdk-dev] [PATCH v8 4/9] alarm: " David Marchand
` (7 subsequent siblings)
10 siblings, 0 replies; 152+ messages in thread
From: David Marchand @ 2021-10-25 14:27 UTC (permalink / raw)
To: hkalra, dev; +Cc: dmitry.kozliuk, rasland, thomas
From: Harman Kalra <hkalra@marvell.com>
Updating the interrupt test suite to make use of the interrupt
handle get/set APIs.
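The test suite now owns its handles through the instance allocator; a
condensed sketch of the pattern used below (the fd and type values are
whatever the given test case needs):

	struct rte_intr_handle *handle;

	handle = rte_intr_instance_alloc(RTE_INTR_INSTANCE_F_PRIVATE);
	if (handle == NULL)
		return -1;
	if (rte_intr_fd_set(handle, pfds.readfd) ||
			rte_intr_type_set(handle, RTE_INTR_HANDLE_UIO))
		return -1;
	/* ... run the test ... */
	rte_intr_instance_free(handle);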
Signed-off-by: Harman Kalra <hkalra@marvell.com>
Signed-off-by: David Marchand <david.marchand@redhat.com>
---
Changes since v5:
- fixed leak when an interrupt handle can't be allocated,
---
app/test/test_interrupts.c | 164 ++++++++++++++++++++++---------------
1 file changed, 98 insertions(+), 66 deletions(-)
diff --git a/app/test/test_interrupts.c b/app/test/test_interrupts.c
index 233b14a70b..2a05399f96 100644
--- a/app/test/test_interrupts.c
+++ b/app/test/test_interrupts.c
@@ -16,7 +16,7 @@
/* predefined interrupt handle types */
enum test_interrupt_handle_type {
- TEST_INTERRUPT_HANDLE_INVALID,
+ TEST_INTERRUPT_HANDLE_INVALID = 0,
TEST_INTERRUPT_HANDLE_VALID,
TEST_INTERRUPT_HANDLE_VALID_UIO,
TEST_INTERRUPT_HANDLE_VALID_ALARM,
@@ -27,7 +27,7 @@ enum test_interrupt_handle_type {
/* flag of if callback is called */
static volatile int flag;
-static struct rte_intr_handle intr_handles[TEST_INTERRUPT_HANDLE_MAX];
+static struct rte_intr_handle *intr_handles[TEST_INTERRUPT_HANDLE_MAX];
static enum test_interrupt_handle_type test_intr_type =
TEST_INTERRUPT_HANDLE_MAX;
@@ -50,7 +50,7 @@ static union intr_pipefds pfds;
static inline int
test_interrupt_handle_sanity_check(struct rte_intr_handle *intr_handle)
{
- if (!intr_handle || intr_handle->fd < 0)
+ if (!intr_handle || rte_intr_fd_get(intr_handle) < 0)
return -1;
return 0;
@@ -62,31 +62,54 @@ test_interrupt_handle_sanity_check(struct rte_intr_handle *intr_handle)
static int
test_interrupt_init(void)
{
+ struct rte_intr_handle *test_intr_handle;
+ int i;
+
if (pipe(pfds.pipefd) < 0)
return -1;
- intr_handles[TEST_INTERRUPT_HANDLE_INVALID].fd = -1;
- intr_handles[TEST_INTERRUPT_HANDLE_INVALID].type =
- RTE_INTR_HANDLE_UNKNOWN;
+ for (i = 0; i < TEST_INTERRUPT_HANDLE_MAX; i++) {
+ intr_handles[i] =
+ rte_intr_instance_alloc(RTE_INTR_INSTANCE_F_PRIVATE);
+ if (!intr_handles[i])
+ return -1;
+ }
- intr_handles[TEST_INTERRUPT_HANDLE_VALID].fd = pfds.readfd;
- intr_handles[TEST_INTERRUPT_HANDLE_VALID].type =
- RTE_INTR_HANDLE_UNKNOWN;
+ test_intr_handle = intr_handles[TEST_INTERRUPT_HANDLE_INVALID];
+ if (rte_intr_fd_set(test_intr_handle, -1))
+ return -1;
+ if (rte_intr_type_set(test_intr_handle, RTE_INTR_HANDLE_UNKNOWN))
+ return -1;
- intr_handles[TEST_INTERRUPT_HANDLE_VALID_UIO].fd = pfds.readfd;
- intr_handles[TEST_INTERRUPT_HANDLE_VALID_UIO].type =
- RTE_INTR_HANDLE_UIO;
+ test_intr_handle = intr_handles[TEST_INTERRUPT_HANDLE_VALID];
+ if (rte_intr_fd_set(test_intr_handle, pfds.readfd))
+ return -1;
+ if (rte_intr_type_set(test_intr_handle, RTE_INTR_HANDLE_UNKNOWN))
+ return -1;
+
+ test_intr_handle = intr_handles[TEST_INTERRUPT_HANDLE_VALID_UIO];
+ if (rte_intr_fd_set(test_intr_handle, pfds.readfd))
+ return -1;
+ if (rte_intr_type_set(test_intr_handle, RTE_INTR_HANDLE_UIO))
+ return -1;
- intr_handles[TEST_INTERRUPT_HANDLE_VALID_ALARM].fd = pfds.readfd;
- intr_handles[TEST_INTERRUPT_HANDLE_VALID_ALARM].type =
- RTE_INTR_HANDLE_ALARM;
+ test_intr_handle = intr_handles[TEST_INTERRUPT_HANDLE_VALID_ALARM];
+ if (rte_intr_fd_set(test_intr_handle, pfds.readfd))
+ return -1;
+ if (rte_intr_type_set(test_intr_handle, RTE_INTR_HANDLE_ALARM))
+ return -1;
- intr_handles[TEST_INTERRUPT_HANDLE_VALID_DEV_EVENT].fd = pfds.readfd;
- intr_handles[TEST_INTERRUPT_HANDLE_VALID_DEV_EVENT].type =
- RTE_INTR_HANDLE_DEV_EVENT;
+ test_intr_handle = intr_handles[TEST_INTERRUPT_HANDLE_VALID_DEV_EVENT];
+ if (rte_intr_fd_set(test_intr_handle, pfds.readfd))
+ return -1;
+ if (rte_intr_type_set(test_intr_handle, RTE_INTR_HANDLE_DEV_EVENT))
+ return -1;
- intr_handles[TEST_INTERRUPT_HANDLE_CASE1].fd = pfds.writefd;
- intr_handles[TEST_INTERRUPT_HANDLE_CASE1].type = RTE_INTR_HANDLE_UIO;
+ test_intr_handle = intr_handles[TEST_INTERRUPT_HANDLE_CASE1];
+ if (rte_intr_fd_set(test_intr_handle, pfds.writefd))
+ return -1;
+ if (rte_intr_type_set(test_intr_handle, RTE_INTR_HANDLE_UIO))
+ return -1;
return 0;
}
@@ -97,6 +120,10 @@ test_interrupt_init(void)
static int
test_interrupt_deinit(void)
{
+ int i;
+
+ for (i = 0; i < TEST_INTERRUPT_HANDLE_MAX; i++)
+ rte_intr_instance_free(intr_handles[i]);
close(pfds.pipefd[0]);
close(pfds.pipefd[1]);
@@ -125,8 +152,10 @@ test_interrupt_handle_compare(struct rte_intr_handle *intr_handle_l,
if (!intr_handle_l || !intr_handle_r)
return -1;
- if (intr_handle_l->fd != intr_handle_r->fd ||
- intr_handle_l->type != intr_handle_r->type)
+ if (rte_intr_fd_get(intr_handle_l) !=
+ rte_intr_fd_get(intr_handle_r) ||
+ rte_intr_type_get(intr_handle_l) !=
+ rte_intr_type_get(intr_handle_r))
return -1;
return 0;
@@ -178,6 +207,8 @@ static void
test_interrupt_callback(void *arg)
{
struct rte_intr_handle *intr_handle = arg;
+ struct rte_intr_handle *test_intr_handle;
+
if (test_intr_type >= TEST_INTERRUPT_HANDLE_MAX) {
printf("invalid interrupt type\n");
flag = -1;
@@ -198,8 +229,8 @@ test_interrupt_callback(void *arg)
return;
}
- if (test_interrupt_handle_compare(intr_handle,
- &(intr_handles[test_intr_type])) == 0)
+ test_intr_handle = intr_handles[test_intr_type];
+ if (test_interrupt_handle_compare(intr_handle, test_intr_handle) == 0)
flag = 1;
}
@@ -223,7 +254,7 @@ test_interrupt_callback_1(void *arg)
static int
test_interrupt_enable(void)
{
- struct rte_intr_handle test_intr_handle;
+ struct rte_intr_handle *test_intr_handle;
/* check with null intr_handle */
if (rte_intr_enable(NULL) == 0) {
@@ -233,7 +264,7 @@ test_interrupt_enable(void)
/* check with invalid intr_handle */
test_intr_handle = intr_handles[TEST_INTERRUPT_HANDLE_INVALID];
- if (rte_intr_enable(&test_intr_handle) == 0) {
+ if (rte_intr_enable(test_intr_handle) == 0) {
printf("unexpectedly enable invalid intr_handle "
"successfully\n");
return -1;
@@ -241,7 +272,7 @@ test_interrupt_enable(void)
/* check with valid intr_handle */
test_intr_handle = intr_handles[TEST_INTERRUPT_HANDLE_VALID];
- if (rte_intr_enable(&test_intr_handle) == 0) {
+ if (rte_intr_enable(test_intr_handle) == 0) {
printf("unexpectedly enable a specific intr_handle "
"successfully\n");
return -1;
@@ -249,7 +280,7 @@ test_interrupt_enable(void)
/* check with specific valid intr_handle */
test_intr_handle = intr_handles[TEST_INTERRUPT_HANDLE_VALID_ALARM];
- if (rte_intr_enable(&test_intr_handle) == 0) {
+ if (rte_intr_enable(test_intr_handle) == 0) {
printf("unexpectedly enable a specific intr_handle "
"successfully\n");
return -1;
@@ -257,7 +288,7 @@ test_interrupt_enable(void)
/* check with specific valid intr_handle */
test_intr_handle = intr_handles[TEST_INTERRUPT_HANDLE_VALID_DEV_EVENT];
- if (rte_intr_enable(&test_intr_handle) == 0) {
+ if (rte_intr_enable(test_intr_handle) == 0) {
printf("unexpectedly enable a specific intr_handle "
"successfully\n");
return -1;
@@ -265,13 +296,13 @@ test_interrupt_enable(void)
/* check with valid handler and its type */
test_intr_handle = intr_handles[TEST_INTERRUPT_HANDLE_CASE1];
- if (rte_intr_enable(&test_intr_handle) < 0) {
+ if (rte_intr_enable(test_intr_handle) < 0) {
printf("fail to enable interrupt on a simulated handler\n");
return -1;
}
test_intr_handle = intr_handles[TEST_INTERRUPT_HANDLE_VALID_UIO];
- if (rte_intr_enable(&test_intr_handle) == 0) {
+ if (rte_intr_enable(test_intr_handle) == 0) {
printf("unexpectedly enable a specific intr_handle "
"successfully\n");
return -1;
@@ -286,7 +317,7 @@ test_interrupt_enable(void)
static int
test_interrupt_disable(void)
{
- struct rte_intr_handle test_intr_handle;
+ struct rte_intr_handle *test_intr_handle;
/* check with null intr_handle */
if (rte_intr_disable(NULL) == 0) {
@@ -297,7 +328,7 @@ test_interrupt_disable(void)
/* check with invalid intr_handle */
test_intr_handle = intr_handles[TEST_INTERRUPT_HANDLE_INVALID];
- if (rte_intr_disable(&test_intr_handle) == 0) {
+ if (rte_intr_disable(test_intr_handle) == 0) {
printf("unexpectedly disable invalid intr_handle "
"successfully\n");
return -1;
@@ -305,7 +336,7 @@ test_interrupt_disable(void)
/* check with valid intr_handle */
test_intr_handle = intr_handles[TEST_INTERRUPT_HANDLE_VALID];
- if (rte_intr_disable(&test_intr_handle) == 0) {
+ if (rte_intr_disable(test_intr_handle) == 0) {
printf("unexpectedly disable a specific intr_handle "
"successfully\n");
return -1;
@@ -313,7 +344,7 @@ test_interrupt_disable(void)
/* check with specific valid intr_handle */
test_intr_handle = intr_handles[TEST_INTERRUPT_HANDLE_VALID_ALARM];
- if (rte_intr_disable(&test_intr_handle) == 0) {
+ if (rte_intr_disable(test_intr_handle) == 0) {
printf("unexpectedly disable a specific intr_handle "
"successfully\n");
return -1;
@@ -321,7 +352,7 @@ test_interrupt_disable(void)
/* check with specific valid intr_handle */
test_intr_handle = intr_handles[TEST_INTERRUPT_HANDLE_VALID_DEV_EVENT];
- if (rte_intr_disable(&test_intr_handle) == 0) {
+ if (rte_intr_disable(test_intr_handle) == 0) {
printf("unexpectedly disable a specific intr_handle "
"successfully\n");
return -1;
@@ -329,13 +360,13 @@ test_interrupt_disable(void)
/* check with valid handler and its type */
test_intr_handle = intr_handles[TEST_INTERRUPT_HANDLE_CASE1];
- if (rte_intr_disable(&test_intr_handle) < 0) {
+ if (rte_intr_disable(test_intr_handle) < 0) {
printf("fail to disable interrupt on a simulated handler\n");
return -1;
}
test_intr_handle = intr_handles[TEST_INTERRUPT_HANDLE_VALID_UIO];
- if (rte_intr_disable(&test_intr_handle) == 0) {
+ if (rte_intr_disable(test_intr_handle) == 0) {
printf("unexpectedly disable a specific intr_handle "
"successfully\n");
return -1;
@@ -351,13 +382,13 @@ static int
test_interrupt_full_path_check(enum test_interrupt_handle_type intr_type)
{
int count;
- struct rte_intr_handle test_intr_handle;
+ struct rte_intr_handle *test_intr_handle;
flag = 0;
test_intr_handle = intr_handles[intr_type];
test_intr_type = intr_type;
- if (rte_intr_callback_register(&test_intr_handle,
- test_interrupt_callback, &test_intr_handle) < 0) {
+ if (rte_intr_callback_register(test_intr_handle,
+ test_interrupt_callback, test_intr_handle) < 0) {
printf("fail to register callback\n");
return -1;
}
@@ -371,9 +402,9 @@ test_interrupt_full_path_check(enum test_interrupt_handle_type intr_type)
rte_delay_ms(TEST_INTERRUPT_CHECK_INTERVAL);
while ((count =
- rte_intr_callback_unregister(&test_intr_handle,
+ rte_intr_callback_unregister(test_intr_handle,
test_interrupt_callback,
- &test_intr_handle)) < 0) {
+ test_intr_handle)) < 0) {
if (count != -EAGAIN)
return -1;
}
@@ -396,11 +427,11 @@ static int
test_interrupt(void)
{
int ret = -1;
- struct rte_intr_handle test_intr_handle;
+ struct rte_intr_handle *test_intr_handle;
if (test_interrupt_init() < 0) {
printf("fail to initialize for testing interrupt\n");
- return -1;
+ goto out;
}
printf("Check unknown valid interrupt full path\n");
@@ -445,8 +476,8 @@ test_interrupt(void)
/* check if it will fail to register cb with invalid intr_handle */
test_intr_handle = intr_handles[TEST_INTERRUPT_HANDLE_INVALID];
- if (rte_intr_callback_register(&test_intr_handle,
- test_interrupt_callback, &test_intr_handle) == 0) {
+ if (rte_intr_callback_register(test_intr_handle,
+ test_interrupt_callback, test_intr_handle) == 0) {
printf("unexpectedly register successfully with invalid "
"intr_handle\n");
goto out;
@@ -454,7 +485,8 @@ test_interrupt(void)
/* check if it will fail to register without callback */
test_intr_handle = intr_handles[TEST_INTERRUPT_HANDLE_VALID];
- if (rte_intr_callback_register(&test_intr_handle, NULL, &test_intr_handle) == 0) {
+ if (rte_intr_callback_register(test_intr_handle, NULL,
+ test_intr_handle) == 0) {
printf("unexpectedly register successfully with "
"null callback\n");
goto out;
@@ -470,8 +502,8 @@ test_interrupt(void)
/* check if it will fail to unregister cb with invalid intr_handle */
test_intr_handle = intr_handles[TEST_INTERRUPT_HANDLE_INVALID];
- if (rte_intr_callback_unregister(&test_intr_handle,
- test_interrupt_callback, &test_intr_handle) > 0) {
+ if (rte_intr_callback_unregister(test_intr_handle,
+ test_interrupt_callback, test_intr_handle) > 0) {
printf("unexpectedly unregister successfully with "
"invalid intr_handle\n");
goto out;
@@ -479,29 +511,29 @@ test_interrupt(void)
/* check if it is ok to register the same intr_handle twice */
test_intr_handle = intr_handles[TEST_INTERRUPT_HANDLE_VALID];
- if (rte_intr_callback_register(&test_intr_handle,
- test_interrupt_callback, &test_intr_handle) < 0) {
+ if (rte_intr_callback_register(test_intr_handle,
+ test_interrupt_callback, test_intr_handle) < 0) {
printf("it fails to register test_interrupt_callback\n");
goto out;
}
- if (rte_intr_callback_register(&test_intr_handle,
- test_interrupt_callback_1, &test_intr_handle) < 0) {
+ if (rte_intr_callback_register(test_intr_handle,
+ test_interrupt_callback_1, test_intr_handle) < 0) {
printf("it fails to register test_interrupt_callback_1\n");
goto out;
}
/* check if it will fail to unregister with invalid parameter */
- if (rte_intr_callback_unregister(&test_intr_handle,
+ if (rte_intr_callback_unregister(test_intr_handle,
test_interrupt_callback, (void *)0xff) != 0) {
printf("unexpectedly unregisters successfully with "
"invalid arg\n");
goto out;
}
- if (rte_intr_callback_unregister(&test_intr_handle,
- test_interrupt_callback, &test_intr_handle) <= 0) {
+ if (rte_intr_callback_unregister(test_intr_handle,
+ test_interrupt_callback, test_intr_handle) <= 0) {
printf("it fails to unregister test_interrupt_callback\n");
goto out;
}
- if (rte_intr_callback_unregister(&test_intr_handle,
+ if (rte_intr_callback_unregister(test_intr_handle,
test_interrupt_callback_1, (void *)-1) <= 0) {
printf("it fails to unregister test_interrupt_callback_1 "
"for all\n");
@@ -529,27 +561,27 @@ test_interrupt(void)
printf("Clearing for interrupt tests\n");
/* clear registered callbacks */
test_intr_handle = intr_handles[TEST_INTERRUPT_HANDLE_VALID];
- rte_intr_callback_unregister(&test_intr_handle,
+ rte_intr_callback_unregister(test_intr_handle,
test_interrupt_callback, (void *)-1);
- rte_intr_callback_unregister(&test_intr_handle,
+ rte_intr_callback_unregister(test_intr_handle,
test_interrupt_callback_1, (void *)-1);
test_intr_handle = intr_handles[TEST_INTERRUPT_HANDLE_VALID_UIO];
- rte_intr_callback_unregister(&test_intr_handle,
+ rte_intr_callback_unregister(test_intr_handle,
test_interrupt_callback, (void *)-1);
- rte_intr_callback_unregister(&test_intr_handle,
+ rte_intr_callback_unregister(test_intr_handle,
test_interrupt_callback_1, (void *)-1);
test_intr_handle = intr_handles[TEST_INTERRUPT_HANDLE_VALID_ALARM];
- rte_intr_callback_unregister(&test_intr_handle,
+ rte_intr_callback_unregister(test_intr_handle,
test_interrupt_callback, (void *)-1);
- rte_intr_callback_unregister(&test_intr_handle,
+ rte_intr_callback_unregister(test_intr_handle,
test_interrupt_callback_1, (void *)-1);
test_intr_handle = intr_handles[TEST_INTERRUPT_HANDLE_VALID_DEV_EVENT];
- rte_intr_callback_unregister(&test_intr_handle,
+ rte_intr_callback_unregister(test_intr_handle,
test_interrupt_callback, (void *)-1);
- rte_intr_callback_unregister(&test_intr_handle,
+ rte_intr_callback_unregister(test_intr_handle,
test_interrupt_callback_1, (void *)-1);
rte_delay_ms(2 * TEST_INTERRUPT_CHECK_INTERVAL);
--
2.23.0
^ permalink raw reply [flat|nested] 152+ messages in thread
* [dpdk-dev] [PATCH v8 4/9] alarm: remove direct access to interrupt handle
2021-10-25 14:27 ` [dpdk-dev] [PATCH v8 0/9] make rte_intr_handle internal David Marchand
` (2 preceding siblings ...)
2021-10-25 14:27 ` [dpdk-dev] [PATCH v8 3/9] test/interrupts: " David Marchand
@ 2021-10-25 14:27 ` David Marchand
2021-10-25 14:27 ` [dpdk-dev] [PATCH v8 5/9] lib: " David Marchand
` (6 subsequent siblings)
10 siblings, 0 replies; 152+ messages in thread
From: David Marchand @ 2021-10-25 14:27 UTC (permalink / raw)
To: hkalra, dev; +Cc: dmitry.kozliuk, rasland, thomas, Bruce Richardson
From: Harman Kalra <hkalra@marvell.com>
Removing direct access to the interrupt handle structure fields;
the respective get/set APIs are used instead.
Making changes to all the libraries that access the interrupt handle fields.
Implementing an alarm cleanup routine where the memory allocated
for the interrupt instance can be freed.
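The resulting lifecycle is: allocate the alarm interrupt instance in
rte_eal_alarm_init() and release it from the new rte_eal_alarm_cleanup() hook
invoked by rte_eal_cleanup(); a condensed sketch (Linux side, error paths
trimmed):

	/* rte_eal_alarm_init() */
	intr_handle = rte_intr_instance_alloc(RTE_INTR_INSTANCE_F_PRIVATE);
	if (intr_handle == NULL ||
			rte_intr_type_set(intr_handle, RTE_INTR_HANDLE_ALARM))
		goto error;
	/* create the timerfd and store it with rte_intr_fd_set() */

	/* rte_eal_cleanup() -> rte_eal_alarm_cleanup() */
	rte_intr_instance_free(intr_handle);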
Signed-off-by: Harman Kalra <hkalra@marvell.com>
Signed-off-by: David Marchand <david.marchand@redhat.com>
---
Changes since v6:
- removed unused interrupt handle in FreeBSD alarm code,
Changes since v5:
- split from patch4,
- merged patch6,
- renamed rte_eal_alarm_fini as rte_eal_alarm_cleanup,
---
lib/eal/common/eal_private.h | 10 ++++++++++
lib/eal/freebsd/eal.c | 1 +
lib/eal/freebsd/eal_alarm.c | 35 +++++++++++++++++++++++++++++------
lib/eal/linux/eal.c | 1 +
lib/eal/linux/eal_alarm.c | 32 +++++++++++++++++++++++++-------
5 files changed, 66 insertions(+), 13 deletions(-)
diff --git a/lib/eal/common/eal_private.h b/lib/eal/common/eal_private.h
index 86dab1f057..36bcc0b5a4 100644
--- a/lib/eal/common/eal_private.h
+++ b/lib/eal/common/eal_private.h
@@ -163,6 +163,16 @@ int rte_eal_intr_init(void);
*/
int rte_eal_alarm_init(void);
+/**
+ * Alarm mechanism cleanup.
+ *
+ * This function is private to EAL.
+ */
+void rte_eal_alarm_cleanup(void);
+
/**
* Function is to check if the kernel module(like, vfio, vfio_iommu_type1,
* etc.) loaded.
diff --git a/lib/eal/freebsd/eal.c b/lib/eal/freebsd/eal.c
index 56a60f13e9..9935356ed4 100644
--- a/lib/eal/freebsd/eal.c
+++ b/lib/eal/freebsd/eal.c
@@ -975,6 +975,7 @@ rte_eal_cleanup(void)
rte_mp_channel_cleanup();
/* after this point, any DPDK pointers will become dangling */
rte_eal_memory_detach();
+ rte_eal_alarm_cleanup();
rte_trace_save();
eal_trace_fini();
eal_cleanup_config(internal_conf);
diff --git a/lib/eal/freebsd/eal_alarm.c b/lib/eal/freebsd/eal_alarm.c
index c38b2e04f8..1023c32937 100644
--- a/lib/eal/freebsd/eal_alarm.c
+++ b/lib/eal/freebsd/eal_alarm.c
@@ -32,7 +32,6 @@
struct alarm_entry {
LIST_ENTRY(alarm_entry) next;
- struct rte_intr_handle handle;
struct timespec time;
rte_eal_alarm_callback cb_fn;
void *cb_arg;
@@ -43,22 +42,46 @@ struct alarm_entry {
static LIST_HEAD(alarm_list, alarm_entry) alarm_list = LIST_HEAD_INITIALIZER();
static rte_spinlock_t alarm_list_lk = RTE_SPINLOCK_INITIALIZER;
-static struct rte_intr_handle intr_handle = {.fd = -1 };
+static struct rte_intr_handle *intr_handle;
static void eal_alarm_callback(void *arg);
+void
+rte_eal_alarm_cleanup(void)
+{
+ rte_intr_instance_free(intr_handle);
+}
+
int
rte_eal_alarm_init(void)
{
- intr_handle.type = RTE_INTR_HANDLE_ALARM;
+ int fd;
+
+ intr_handle = rte_intr_instance_alloc(RTE_INTR_INSTANCE_F_PRIVATE);
+ if (intr_handle == NULL) {
+ RTE_LOG(ERR, EAL, "Fail to allocate intr_handle\n");
+ goto error;
+ }
+
+ if (rte_intr_type_set(intr_handle, RTE_INTR_HANDLE_ALARM))
+ goto error;
+
+ if (rte_intr_fd_set(intr_handle, -1))
+ goto error;
/* on FreeBSD, timers don't use fd's, and their identifiers are stored
* in separate namespace from fd's, so using any value is OK. however,
* EAL interrupts handler expects fd's to be unique, so use an actual fd
* to guarantee unique timer identifier.
*/
- intr_handle.fd = open("/dev/zero", O_RDONLY);
+ fd = open("/dev/zero", O_RDONLY);
+
+ if (rte_intr_fd_set(intr_handle, fd))
+ goto error;
return 0;
+error:
+ rte_intr_instance_free(intr_handle);
+ return -1;
}
static inline int
@@ -118,7 +141,7 @@ unregister_current_callback(void)
ap = LIST_FIRST(&alarm_list);
do {
- ret = rte_intr_callback_unregister(&intr_handle,
+ ret = rte_intr_callback_unregister(intr_handle,
eal_alarm_callback, &ap->time);
} while (ret == -EAGAIN);
}
@@ -136,7 +159,7 @@ register_first_callback(void)
ap = LIST_FIRST(&alarm_list);
/* register a new callback */
- ret = rte_intr_callback_register(&intr_handle,
+ ret = rte_intr_callback_register(intr_handle,
eal_alarm_callback, &ap->time);
}
return ret;
diff --git a/lib/eal/linux/eal.c b/lib/eal/linux/eal.c
index 0d0fc66668..81fdebc6a0 100644
--- a/lib/eal/linux/eal.c
+++ b/lib/eal/linux/eal.c
@@ -1368,6 +1368,7 @@ rte_eal_cleanup(void)
rte_mp_channel_cleanup();
/* after this point, any DPDK pointers will become dangling */
rte_eal_memory_detach();
+ rte_eal_alarm_cleanup();
rte_trace_save();
eal_trace_fini();
eal_cleanup_config(internal_conf);
diff --git a/lib/eal/linux/eal_alarm.c b/lib/eal/linux/eal_alarm.c
index 3252c6fa59..3b5e894595 100644
--- a/lib/eal/linux/eal_alarm.c
+++ b/lib/eal/linux/eal_alarm.c
@@ -54,22 +54,40 @@ struct alarm_entry {
static LIST_HEAD(alarm_list, alarm_entry) alarm_list = LIST_HEAD_INITIALIZER();
static rte_spinlock_t alarm_list_lk = RTE_SPINLOCK_INITIALIZER;
-static struct rte_intr_handle intr_handle = {.fd = -1 };
+static struct rte_intr_handle *intr_handle;
static int handler_registered = 0;
static void eal_alarm_callback(void *arg);
+void
+rte_eal_alarm_cleanup(void)
+{
+ rte_intr_instance_free(intr_handle);
+}
+
int
rte_eal_alarm_init(void)
{
- intr_handle.type = RTE_INTR_HANDLE_ALARM;
+
+ intr_handle = rte_intr_instance_alloc(RTE_INTR_INSTANCE_F_PRIVATE);
+ if (intr_handle == NULL) {
+ RTE_LOG(ERR, EAL, "Fail to allocate intr_handle\n");
+ goto error;
+ }
+
+ if (rte_intr_type_set(intr_handle, RTE_INTR_HANDLE_ALARM))
+ goto error;
+
/* create a timerfd file descriptor */
- intr_handle.fd = timerfd_create(CLOCK_MONOTONIC, TFD_NONBLOCK);
- if (intr_handle.fd == -1)
+ if (rte_intr_fd_set(intr_handle,
+ timerfd_create(CLOCK_MONOTONIC, TFD_NONBLOCK)))
goto error;
+ if (rte_intr_fd_get(intr_handle) == -1)
+ goto error;
return 0;
error:
+ rte_intr_instance_free(intr_handle);
rte_errno = errno;
return -1;
}
@@ -109,7 +127,7 @@ eal_alarm_callback(void *arg __rte_unused)
atime.it_value.tv_sec -= now.tv_sec;
atime.it_value.tv_nsec -= now.tv_nsec;
- timerfd_settime(intr_handle.fd, 0, &atime, NULL);
+ timerfd_settime(rte_intr_fd_get(intr_handle), 0, &atime, NULL);
}
rte_spinlock_unlock(&alarm_list_lk);
}
@@ -140,7 +158,7 @@ rte_eal_alarm_set(uint64_t us, rte_eal_alarm_callback cb_fn, void *cb_arg)
rte_spinlock_lock(&alarm_list_lk);
if (!handler_registered) {
/* registration can fail, callback can be registered later */
- if (rte_intr_callback_register(&intr_handle,
+ if (rte_intr_callback_register(intr_handle,
eal_alarm_callback, NULL) == 0)
handler_registered = 1;
}
@@ -170,7 +188,7 @@ rte_eal_alarm_set(uint64_t us, rte_eal_alarm_callback cb_fn, void *cb_arg)
.tv_nsec = (us % US_PER_S) * NS_PER_US,
},
};
- ret |= timerfd_settime(intr_handle.fd, 0, &alarm_time, NULL);
+ ret |= timerfd_settime(rte_intr_fd_get(intr_handle), 0, &alarm_time, NULL);
}
rte_spinlock_unlock(&alarm_list_lk);
--
2.23.0
^ permalink raw reply [flat|nested] 152+ messages in thread
* [dpdk-dev] [PATCH v8 5/9] lib: remove direct access to interrupt handle
2021-10-25 14:27 ` [dpdk-dev] [PATCH v8 0/9] make rte_intr_handle internal David Marchand
` (3 preceding siblings ...)
2021-10-25 14:27 ` [dpdk-dev] [PATCH v8 4/9] alarm: " David Marchand
@ 2021-10-25 14:27 ` David Marchand
2021-10-25 14:27 ` [dpdk-dev] [PATCH v8 6/9] drivers: " David Marchand
` (5 subsequent siblings)
10 siblings, 0 replies; 152+ messages in thread
From: David Marchand @ 2021-10-25 14:27 UTC (permalink / raw)
To: hkalra, dev
Cc: dmitry.kozliuk, rasland, thomas, Nicolas Chautru, Ferruh Yigit,
Andrew Rybchenko
From: Harman Kalra <hkalra@marvell.com>
Remove direct access to interrupt handle structure fields; use the
respective get/set APIs instead.
Make all libraries that access interrupt handle fields use these APIs.
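For illustration, a minimal sketch of the read-side conversion a library goes
through (the helper name and queue parameter are invented; the accessors and
the RTE_INTR_VEC_RXTX_OFFSET adjustment mirror the rte_ethdev.c hunk later in
this patch):

#include <rte_interrupts.h>

/* Hypothetical helper: fetch the event fd backing a given Rx queue. */
static int
queue_event_fd(const struct rte_intr_handle *intr_handle, uint16_t queue_id)
{
	int vec, efd_idx;

	/* Formerly: vec = intr_handle->intr_vec[queue_id]; */
	vec = rte_intr_vec_list_index_get(intr_handle, queue_id);
	if (vec < 0)
		return -1; /* vector list not set up */

	efd_idx = (vec >= RTE_INTR_VEC_RXTX_OFFSET) ?
			(vec - RTE_INTR_VEC_RXTX_OFFSET) : vec;

	/* Formerly: return intr_handle->efds[efd_idx]; */
	return rte_intr_efds_index_get(intr_handle, efd_idx);
}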
Signed-off-by: Harman Kalra <hkalra@marvell.com>
Signed-off-by: David Marchand <david.marchand@redhat.com>
---
Changes since v5:
- split from patch4,
---
lib/bbdev/rte_bbdev.c | 4 +--
lib/eal/linux/eal_dev.c | 57 ++++++++++++++++++++++++-----------------
lib/ethdev/rte_ethdev.c | 14 +++++-----
3 files changed, 43 insertions(+), 32 deletions(-)
diff --git a/lib/bbdev/rte_bbdev.c b/lib/bbdev/rte_bbdev.c
index defddcfc28..b86c5fdcc0 100644
--- a/lib/bbdev/rte_bbdev.c
+++ b/lib/bbdev/rte_bbdev.c
@@ -1094,7 +1094,7 @@ rte_bbdev_queue_intr_ctl(uint16_t dev_id, uint16_t queue_id, int epfd, int op,
VALID_QUEUE_OR_RET_ERR(queue_id, dev);
intr_handle = dev->intr_handle;
- if (!intr_handle || !intr_handle->intr_vec) {
+ if (intr_handle == NULL) {
rte_bbdev_log(ERR, "Device %u intr handle unset\n", dev_id);
return -ENOTSUP;
}
@@ -1105,7 +1105,7 @@ rte_bbdev_queue_intr_ctl(uint16_t dev_id, uint16_t queue_id, int epfd, int op,
return -ENOTSUP;
}
- vec = intr_handle->intr_vec[queue_id];
+ vec = rte_intr_vec_list_index_get(intr_handle, queue_id);
ret = rte_intr_rx_ctl(intr_handle, epfd, op, vec, data);
if (ret && (ret != -EEXIST)) {
rte_bbdev_log(ERR,
diff --git a/lib/eal/linux/eal_dev.c b/lib/eal/linux/eal_dev.c
index 3b905e18f5..06820a3666 100644
--- a/lib/eal/linux/eal_dev.c
+++ b/lib/eal/linux/eal_dev.c
@@ -23,10 +23,7 @@
#include "eal_private.h"
-static struct rte_intr_handle intr_handle = {
- .type = RTE_INTR_HANDLE_DEV_EVENT,
- .fd = -1,
-};
+static struct rte_intr_handle *intr_handle;
static rte_rwlock_t monitor_lock = RTE_RWLOCK_INITIALIZER;
static uint32_t monitor_refcount;
static bool hotplug_handle;
@@ -109,12 +106,11 @@ static int
dev_uev_socket_fd_create(void)
{
struct sockaddr_nl addr;
- int ret;
+ int ret, fd;
- intr_handle.fd = socket(PF_NETLINK, SOCK_RAW | SOCK_CLOEXEC |
- SOCK_NONBLOCK,
- NETLINK_KOBJECT_UEVENT);
- if (intr_handle.fd < 0) {
+ fd = socket(PF_NETLINK, SOCK_RAW | SOCK_CLOEXEC | SOCK_NONBLOCK,
+ NETLINK_KOBJECT_UEVENT);
+ if (fd < 0) {
RTE_LOG(ERR, EAL, "create uevent fd failed.\n");
return -1;
}
@@ -124,16 +120,19 @@ dev_uev_socket_fd_create(void)
addr.nl_pid = 0;
addr.nl_groups = 0xffffffff;
- ret = bind(intr_handle.fd, (struct sockaddr *) &addr, sizeof(addr));
+ ret = bind(fd, (struct sockaddr *) &addr, sizeof(addr));
if (ret < 0) {
RTE_LOG(ERR, EAL, "Failed to bind uevent socket.\n");
goto err;
}
+ if (rte_intr_fd_set(intr_handle, fd))
+ goto err;
+
return 0;
err:
- close(intr_handle.fd);
- intr_handle.fd = -1;
+ close(fd);
+ fd = -1;
return ret;
}
@@ -217,9 +216,9 @@ dev_uev_parse(const char *buf, struct rte_dev_event *event, int length)
static void
dev_delayed_unregister(void *param)
{
- rte_intr_callback_unregister(&intr_handle, dev_uev_handler, param);
- close(intr_handle.fd);
- intr_handle.fd = -1;
+ rte_intr_callback_unregister(intr_handle, dev_uev_handler, param);
+ close(rte_intr_fd_get(intr_handle));
+ rte_intr_fd_set(intr_handle, -1);
}
static void
@@ -235,7 +234,8 @@ dev_uev_handler(__rte_unused void *param)
memset(&uevent, 0, sizeof(struct rte_dev_event));
memset(buf, 0, EAL_UEV_MSG_LEN);
- ret = recv(intr_handle.fd, buf, EAL_UEV_MSG_LEN, MSG_DONTWAIT);
+ ret = recv(rte_intr_fd_get(intr_handle), buf, EAL_UEV_MSG_LEN,
+ MSG_DONTWAIT);
if (ret < 0 && errno == EAGAIN)
return;
else if (ret <= 0) {
@@ -311,24 +311,35 @@ rte_dev_event_monitor_start(void)
goto exit;
}
+ intr_handle = rte_intr_instance_alloc(RTE_INTR_INSTANCE_F_PRIVATE);
+ if (intr_handle == NULL) {
+ RTE_LOG(ERR, EAL, "Fail to allocate intr_handle\n");
+ goto exit;
+ }
+
+ if (rte_intr_type_set(intr_handle, RTE_INTR_HANDLE_DEV_EVENT))
+ goto exit;
+
+ if (rte_intr_fd_set(intr_handle, -1))
+ goto exit;
+
ret = dev_uev_socket_fd_create();
if (ret) {
RTE_LOG(ERR, EAL, "error create device event fd.\n");
goto exit;
}
- ret = rte_intr_callback_register(&intr_handle, dev_uev_handler, NULL);
+ ret = rte_intr_callback_register(intr_handle, dev_uev_handler, NULL);
if (ret) {
- RTE_LOG(ERR, EAL, "fail to register uevent callback.\n");
- close(intr_handle.fd);
- intr_handle.fd = -1;
+ close(rte_intr_fd_get(intr_handle));
goto exit;
}
monitor_refcount++;
exit:
+ rte_intr_instance_free(intr_handle);
rte_rwlock_write_unlock(&monitor_lock);
return ret;
}
@@ -350,15 +361,15 @@ rte_dev_event_monitor_stop(void)
goto exit;
}
- ret = rte_intr_callback_unregister(&intr_handle, dev_uev_handler,
+ ret = rte_intr_callback_unregister(intr_handle, dev_uev_handler,
(void *)-1);
if (ret < 0) {
RTE_LOG(ERR, EAL, "fail to unregister uevent callback.\n");
goto exit;
}
- close(intr_handle.fd);
- intr_handle.fd = -1;
+ close(rte_intr_fd_get(intr_handle));
+ rte_intr_instance_free(intr_handle);
monitor_refcount--;
diff --git a/lib/ethdev/rte_ethdev.c b/lib/ethdev/rte_ethdev.c
index 74de29c2e0..7db84b12d0 100644
--- a/lib/ethdev/rte_ethdev.c
+++ b/lib/ethdev/rte_ethdev.c
@@ -4819,13 +4819,13 @@ rte_eth_dev_rx_intr_ctl(uint16_t port_id, int epfd, int op, void *data)
}
intr_handle = dev->intr_handle;
- if (!intr_handle->intr_vec) {
+ if (rte_intr_vec_list_index_get(intr_handle, 0) < 0) {
RTE_ETHDEV_LOG(ERR, "Rx Intr vector unset\n");
return -EPERM;
}
for (qid = 0; qid < dev->data->nb_rx_queues; qid++) {
- vec = intr_handle->intr_vec[qid];
+ vec = rte_intr_vec_list_index_get(intr_handle, qid);
rc = rte_intr_rx_ctl(intr_handle, epfd, op, vec, data);
if (rc && rc != -EEXIST) {
RTE_ETHDEV_LOG(ERR,
@@ -4860,15 +4860,15 @@ rte_eth_dev_rx_intr_ctl_q_get_fd(uint16_t port_id, uint16_t queue_id)
}
intr_handle = dev->intr_handle;
- if (!intr_handle->intr_vec) {
+ if (rte_intr_vec_list_index_get(intr_handle, 0) < 0) {
RTE_ETHDEV_LOG(ERR, "Rx Intr vector unset\n");
return -1;
}
- vec = intr_handle->intr_vec[queue_id];
+ vec = rte_intr_vec_list_index_get(intr_handle, queue_id);
efd_idx = (vec >= RTE_INTR_VEC_RXTX_OFFSET) ?
(vec - RTE_INTR_VEC_RXTX_OFFSET) : vec;
- fd = intr_handle->efds[efd_idx];
+ fd = rte_intr_efds_index_get(intr_handle, efd_idx);
return fd;
}
@@ -5046,12 +5046,12 @@ rte_eth_dev_rx_intr_ctl_q(uint16_t port_id, uint16_t queue_id,
}
intr_handle = dev->intr_handle;
- if (!intr_handle->intr_vec) {
+ if (rte_intr_vec_list_index_get(intr_handle, 0) < 0) {
RTE_ETHDEV_LOG(ERR, "Rx Intr vector unset\n");
return -EPERM;
}
- vec = intr_handle->intr_vec[queue_id];
+ vec = rte_intr_vec_list_index_get(intr_handle, queue_id);
rc = rte_intr_rx_ctl(intr_handle, epfd, op, vec, data);
if (rc && rc != -EEXIST) {
RTE_ETHDEV_LOG(ERR,
--
2.23.0
^ permalink raw reply [flat|nested] 152+ messages in thread
* [dpdk-dev] [PATCH v8 6/9] drivers: remove direct access to interrupt handle
2021-10-25 14:27 ` [dpdk-dev] [PATCH v8 0/9] make rte_intr_handle internal David Marchand
` (4 preceding siblings ...)
2021-10-25 14:27 ` [dpdk-dev] [PATCH v8 5/9] lib: " David Marchand
@ 2021-10-25 14:27 ` David Marchand
2021-10-25 14:27 ` [dpdk-dev] [PATCH v8 7/9] interrupts: make interrupt handle structure opaque David Marchand
` (4 subsequent siblings)
10 siblings, 0 replies; 152+ messages in thread
From: David Marchand @ 2021-10-25 14:27 UTC (permalink / raw)
To: hkalra, dev
Cc: dmitry.kozliuk, rasland, thomas, Hyong Youb Kim, Nicolas Chautru,
Parav Pandit, Xueming Li, Hemant Agrawal, Sachin Saxena,
Rosen Xu, Ferruh Yigit, Anatoly Burakov, Stephen Hemminger,
Long Li, Nithin Dabilpuram, Kiran Kumar K, Sunil Kumar Kori,
Satha Rao, Jerin Jacob, Ankur Dwivedi, Anoob Joseph,
Pavan Nikhilesh, Igor Russkikh, Steven Webster, Matt Peters,
Chandubabu Namburu, Rasesh Mody, Shahed Shaikh, Ajit Khaparde,
Somnath Kotur, Haiyue Wang, Marcin Wojtas, Michal Krawczyk,
Shai Brandes, Evgeny Schemeilin, Igor Chauskin, John Daley,
Gaetan Rivet, Qi Zhang, Xiao Wang, Ziyang Xuan, Xiaoyun Wang,
Guoyang Zhou, Min Hu (Connor),
Yisen Zhuang, Lijun Ou, Beilei Xing, Jingjing Wu, Qiming Yang,
Andrew Boyer, Jakub Grajciar, Matan Azrad, Viacheslav Ovsiienko,
Heinrich Kuhn, Jiawen Wu, Devendra Singh Rawat, Andrew Rybchenko,
Keith Wiles, Maciej Czekaj, Jian Wang, Maxime Coquelin,
Chenbo Xia, Yong Wang, Tianfei zhang, Xiaoyun Li, Guy Kaneti
From: Harman Kalra <hkalra@marvell.com>
Remove direct access to interrupt handle structure fields; use the
respective get/set APIs instead.
Make all drivers that access interrupt handle fields use these APIs.
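A hedged sketch of the allocation pattern the bus and device drivers now
follow (the device structure and function names below are placeholders, not
code from this patch): the interrupt instance is allocated at probe time,
torn down on every error path, and freed again at detach.
RTE_INTR_INSTANCE_F_SHARED is only needed when the backing fds must be
reachable from secondary processes, as in the fslmc DPIO case below.

#include <errno.h>
#include <rte_errno.h>
#include <rte_interrupts.h>

struct fake_device {                  /* stand-in for a bus device struct */
	struct rte_intr_handle *intr_handle;
};

static int
fake_probe(struct fake_device *dev)
{
	dev->intr_handle = rte_intr_instance_alloc(RTE_INTR_INSTANCE_F_PRIVATE);
	if (dev->intr_handle == NULL)
		return -ENOMEM;

	if (rte_intr_type_set(dev->intr_handle, RTE_INTR_HANDLE_VFIO_MSIX) != 0 ||
	    rte_intr_fd_set(dev->intr_handle, -1) != 0) {
		rte_intr_instance_free(dev->intr_handle);
		dev->intr_handle = NULL;
		return -rte_errno;
	}
	return 0;
}

static void
fake_remove(struct fake_device *dev)
{
	rte_intr_instance_free(dev->intr_handle);
	dev->intr_handle = NULL;
}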
Signed-off-by: Harman Kalra <hkalra@marvell.com>
Acked-by: Hyong Youb Kim <hyonkim@cisco.com>
Signed-off-by: David Marchand <david.marchand@redhat.com>
---
Changes since v6:
- fixed interrupt handle allocation for drivers without
RTE_PCI_DRV_NEED_MAPPING,
Changes since v5:
- moved instance allocation to probing for auxiliary,
- fixed dev_irq_register() return value sign on error for
drivers/common/cnxk/roc_irq.c,
---
drivers/baseband/acc100/rte_acc100_pmd.c | 14 +--
.../fpga_5gnr_fec/rte_fpga_5gnr_fec.c | 24 ++--
drivers/baseband/fpga_lte_fec/fpga_lte_fec.c | 24 ++--
drivers/bus/auxiliary/auxiliary_common.c | 17 ++-
drivers/bus/auxiliary/rte_bus_auxiliary.h | 2 +-
drivers/bus/dpaa/dpaa_bus.c | 28 ++++-
drivers/bus/dpaa/rte_dpaa_bus.h | 2 +-
drivers/bus/fslmc/fslmc_bus.c | 14 ++-
drivers/bus/fslmc/fslmc_vfio.c | 30 +++--
drivers/bus/fslmc/portal/dpaa2_hw_dpio.c | 18 ++-
drivers/bus/fslmc/portal/dpaa2_hw_pvt.h | 2 +-
drivers/bus/fslmc/rte_fslmc.h | 2 +-
drivers/bus/ifpga/ifpga_bus.c | 13 +-
drivers/bus/ifpga/rte_bus_ifpga.h | 2 +-
drivers/bus/pci/bsd/pci.c | 20 +--
drivers/bus/pci/linux/pci.c | 4 +-
drivers/bus/pci/linux/pci_uio.c | 69 +++++++----
drivers/bus/pci/linux/pci_vfio.c | 102 +++++++++------
drivers/bus/pci/pci_common.c | 47 +++++--
drivers/bus/pci/pci_common_uio.c | 21 ++--
drivers/bus/pci/rte_bus_pci.h | 4 +-
drivers/bus/vmbus/linux/vmbus_bus.c | 6 +
drivers/bus/vmbus/linux/vmbus_uio.c | 35 ++++--
drivers/bus/vmbus/rte_bus_vmbus.h | 2 +-
drivers/bus/vmbus/vmbus_common_uio.c | 23 ++--
drivers/common/cnxk/roc_cpt.c | 8 +-
drivers/common/cnxk/roc_dev.c | 14 +--
drivers/common/cnxk/roc_irq.c | 107 +++++++++-------
drivers/common/cnxk/roc_nix_inl_dev_irq.c | 8 +-
drivers/common/cnxk/roc_nix_irq.c | 36 +++---
drivers/common/cnxk/roc_npa.c | 2 +-
drivers/common/cnxk/roc_platform.h | 48 +++++--
drivers/common/cnxk/roc_sso.c | 4 +-
drivers/common/cnxk/roc_tim.c | 4 +-
drivers/common/octeontx2/otx2_dev.c | 14 +--
drivers/common/octeontx2/otx2_irq.c | 117 ++++++++++--------
.../octeontx2/otx2_cryptodev_hw_access.c | 4 +-
drivers/event/octeontx2/otx2_evdev_irq.c | 12 +-
drivers/mempool/octeontx2/otx2_mempool.c | 2 +-
drivers/net/atlantic/atl_ethdev.c | 20 ++-
drivers/net/avp/avp_ethdev.c | 8 +-
drivers/net/axgbe/axgbe_ethdev.c | 12 +-
drivers/net/axgbe/axgbe_mdio.c | 6 +-
drivers/net/bnx2x/bnx2x_ethdev.c | 10 +-
drivers/net/bnxt/bnxt_ethdev.c | 33 +++--
drivers/net/bnxt/bnxt_irq.c | 4 +-
drivers/net/dpaa/dpaa_ethdev.c | 48 ++++---
drivers/net/dpaa2/dpaa2_ethdev.c | 10 +-
drivers/net/e1000/em_ethdev.c | 23 ++--
drivers/net/e1000/igb_ethdev.c | 79 ++++++------
drivers/net/ena/ena_ethdev.c | 35 +++---
drivers/net/enic/enic_main.c | 26 ++--
drivers/net/failsafe/failsafe.c | 21 +++-
drivers/net/failsafe/failsafe_intr.c | 43 ++++---
drivers/net/failsafe/failsafe_ops.c | 19 ++-
drivers/net/failsafe/failsafe_private.h | 2 +-
drivers/net/fm10k/fm10k_ethdev.c | 32 ++---
drivers/net/hinic/hinic_pmd_ethdev.c | 10 +-
drivers/net/hns3/hns3_ethdev.c | 57 ++++-----
drivers/net/hns3/hns3_ethdev_vf.c | 64 +++++-----
drivers/net/hns3/hns3_rxtx.c | 2 +-
drivers/net/i40e/i40e_ethdev.c | 53 ++++----
drivers/net/iavf/iavf_ethdev.c | 42 +++----
drivers/net/iavf/iavf_vchnl.c | 4 +-
drivers/net/ice/ice_dcf.c | 10 +-
drivers/net/ice/ice_dcf_ethdev.c | 21 ++--
drivers/net/ice/ice_ethdev.c | 49 ++++----
drivers/net/igc/igc_ethdev.c | 45 ++++---
drivers/net/ionic/ionic_ethdev.c | 17 +--
drivers/net/ixgbe/ixgbe_ethdev.c | 66 +++++-----
drivers/net/memif/memif_socket.c | 108 +++++++++++-----
drivers/net/memif/memif_socket.h | 4 +-
drivers/net/memif/rte_eth_memif.c | 56 +++++++--
drivers/net/memif/rte_eth_memif.h | 2 +-
drivers/net/mlx4/mlx4.c | 19 ++-
drivers/net/mlx4/mlx4.h | 2 +-
drivers/net/mlx4/mlx4_intr.c | 47 ++++---
drivers/net/mlx5/linux/mlx5_os.c | 55 +++++---
drivers/net/mlx5/linux/mlx5_socket.c | 25 ++--
drivers/net/mlx5/mlx5.h | 6 +-
drivers/net/mlx5/mlx5_rxq.c | 43 ++++---
drivers/net/mlx5/mlx5_trigger.c | 4 +-
drivers/net/mlx5/mlx5_txpp.c | 25 ++--
drivers/net/netvsc/hn_ethdev.c | 4 +-
drivers/net/nfp/nfp_common.c | 34 ++---
drivers/net/nfp/nfp_ethdev.c | 13 +-
drivers/net/nfp/nfp_ethdev_vf.c | 13 +-
drivers/net/ngbe/ngbe_ethdev.c | 29 ++---
drivers/net/octeontx2/otx2_ethdev_irq.c | 35 +++---
drivers/net/qede/qede_ethdev.c | 16 +--
drivers/net/sfc/sfc_intr.c | 30 ++---
drivers/net/tap/rte_eth_tap.c | 33 +++--
drivers/net/tap/rte_eth_tap.h | 2 +-
drivers/net/tap/tap_intr.c | 33 ++---
drivers/net/thunderx/nicvf_ethdev.c | 10 ++
drivers/net/thunderx/nicvf_struct.h | 2 +-
drivers/net/txgbe/txgbe_ethdev.c | 38 +++---
drivers/net/txgbe/txgbe_ethdev_vf.c | 33 +++--
drivers/net/vhost/rte_eth_vhost.c | 80 ++++++------
drivers/net/virtio/virtio_ethdev.c | 21 ++--
.../net/virtio/virtio_user/virtio_user_dev.c | 56 +++++----
drivers/net/vmxnet3/vmxnet3_ethdev.c | 43 ++++---
drivers/raw/ifpga/ifpga_rawdev.c | 62 +++++++---
drivers/raw/ntb/ntb.c | 9 +-
.../regex/octeontx2/otx2_regexdev_hw_access.c | 4 +-
drivers/vdpa/ifc/ifcvf_vdpa.c | 5 +-
drivers/vdpa/mlx5/mlx5_vdpa.c | 8 ++
drivers/vdpa/mlx5/mlx5_vdpa.h | 4 +-
drivers/vdpa/mlx5/mlx5_vdpa_event.c | 21 ++--
drivers/vdpa/mlx5/mlx5_vdpa_virtq.c | 44 ++++---
lib/ethdev/ethdev_pci.h | 2 +-
111 files changed, 1673 insertions(+), 1183 deletions(-)
diff --git a/drivers/baseband/acc100/rte_acc100_pmd.c b/drivers/baseband/acc100/rte_acc100_pmd.c
index 05fe6f8b6f..1c6080f2f8 100644
--- a/drivers/baseband/acc100/rte_acc100_pmd.c
+++ b/drivers/baseband/acc100/rte_acc100_pmd.c
@@ -720,8 +720,8 @@ acc100_intr_enable(struct rte_bbdev *dev)
struct acc100_device *d = dev->data->dev_private;
/* Only MSI are currently supported */
- if (dev->intr_handle->type == RTE_INTR_HANDLE_VFIO_MSI ||
- dev->intr_handle->type == RTE_INTR_HANDLE_UIO) {
+ if (rte_intr_type_get(dev->intr_handle) == RTE_INTR_HANDLE_VFIO_MSI ||
+ rte_intr_type_get(dev->intr_handle) == RTE_INTR_HANDLE_UIO) {
ret = allocate_info_ring(dev);
if (ret < 0) {
@@ -1098,8 +1098,8 @@ acc100_queue_intr_enable(struct rte_bbdev *dev, uint16_t queue_id)
{
struct acc100_queue *q = dev->data->queues[queue_id].queue_private;
- if (dev->intr_handle->type != RTE_INTR_HANDLE_VFIO_MSI &&
- dev->intr_handle->type != RTE_INTR_HANDLE_UIO)
+ if (rte_intr_type_get(dev->intr_handle) != RTE_INTR_HANDLE_VFIO_MSI &&
+ rte_intr_type_get(dev->intr_handle) != RTE_INTR_HANDLE_UIO)
return -ENOTSUP;
q->irq_enable = 1;
@@ -1111,8 +1111,8 @@ acc100_queue_intr_disable(struct rte_bbdev *dev, uint16_t queue_id)
{
struct acc100_queue *q = dev->data->queues[queue_id].queue_private;
- if (dev->intr_handle->type != RTE_INTR_HANDLE_VFIO_MSI &&
- dev->intr_handle->type != RTE_INTR_HANDLE_UIO)
+ if (rte_intr_type_get(dev->intr_handle) != RTE_INTR_HANDLE_VFIO_MSI &&
+ rte_intr_type_get(dev->intr_handle) != RTE_INTR_HANDLE_UIO)
return -ENOTSUP;
q->irq_enable = 0;
@@ -4185,7 +4185,7 @@ static int acc100_pci_probe(struct rte_pci_driver *pci_drv,
/* Fill HW specific part of device structure */
bbdev->device = &pci_dev->device;
- bbdev->intr_handle = &pci_dev->intr_handle;
+ bbdev->intr_handle = pci_dev->intr_handle;
bbdev->data->socket_id = pci_dev->device.numa_node;
/* Invoke ACC100 device initialization function */
diff --git a/drivers/baseband/fpga_5gnr_fec/rte_fpga_5gnr_fec.c b/drivers/baseband/fpga_5gnr_fec/rte_fpga_5gnr_fec.c
index ee457f3071..15d23d6269 100644
--- a/drivers/baseband/fpga_5gnr_fec/rte_fpga_5gnr_fec.c
+++ b/drivers/baseband/fpga_5gnr_fec/rte_fpga_5gnr_fec.c
@@ -743,17 +743,17 @@ fpga_intr_enable(struct rte_bbdev *dev)
* It ensures that callback function assigned to that descriptor will
* invoked when any FPGA queue issues interrupt.
*/
- for (i = 0; i < FPGA_NUM_INTR_VEC; ++i)
- dev->intr_handle->efds[i] = dev->intr_handle->fd;
-
- if (!dev->intr_handle->intr_vec) {
- dev->intr_handle->intr_vec = rte_zmalloc("intr_vec",
- dev->data->num_queues * sizeof(int), 0);
- if (!dev->intr_handle->intr_vec) {
- rte_bbdev_log(ERR, "Failed to allocate %u vectors",
- dev->data->num_queues);
- return -ENOMEM;
- }
+ for (i = 0; i < FPGA_NUM_INTR_VEC; ++i) {
+ if (rte_intr_efds_index_set(dev->intr_handle, i,
+ rte_intr_fd_get(dev->intr_handle)))
+ return -rte_errno;
+ }
+
+ if (rte_intr_vec_list_alloc(dev->intr_handle, "intr_vec",
+ dev->data->num_queues)) {
+ rte_bbdev_log(ERR, "Failed to allocate %u vectors",
+ dev->data->num_queues);
+ return -ENOMEM;
}
ret = rte_intr_enable(dev->intr_handle);
@@ -1880,7 +1880,7 @@ fpga_5gnr_fec_probe(struct rte_pci_driver *pci_drv,
/* Fill HW specific part of device structure */
bbdev->device = &pci_dev->device;
- bbdev->intr_handle = &pci_dev->intr_handle;
+ bbdev->intr_handle = pci_dev->intr_handle;
bbdev->data->socket_id = pci_dev->device.numa_node;
/* Invoke FEC FPGA device initialization function */
diff --git a/drivers/baseband/fpga_lte_fec/fpga_lte_fec.c b/drivers/baseband/fpga_lte_fec/fpga_lte_fec.c
index 703bb611a0..92decc3e05 100644
--- a/drivers/baseband/fpga_lte_fec/fpga_lte_fec.c
+++ b/drivers/baseband/fpga_lte_fec/fpga_lte_fec.c
@@ -1014,17 +1014,17 @@ fpga_intr_enable(struct rte_bbdev *dev)
* It ensures that callback function assigned to that descriptor will
* invoked when any FPGA queue issues interrupt.
*/
- for (i = 0; i < FPGA_NUM_INTR_VEC; ++i)
- dev->intr_handle->efds[i] = dev->intr_handle->fd;
-
- if (!dev->intr_handle->intr_vec) {
- dev->intr_handle->intr_vec = rte_zmalloc("intr_vec",
- dev->data->num_queues * sizeof(int), 0);
- if (!dev->intr_handle->intr_vec) {
- rte_bbdev_log(ERR, "Failed to allocate %u vectors",
- dev->data->num_queues);
- return -ENOMEM;
- }
+ for (i = 0; i < FPGA_NUM_INTR_VEC; ++i) {
+ if (rte_intr_efds_index_set(dev->intr_handle, i,
+ rte_intr_fd_get(dev->intr_handle)))
+ return -rte_errno;
+ }
+
+ if (rte_intr_vec_list_alloc(dev->intr_handle, "intr_vec",
+ dev->data->num_queues)) {
+ rte_bbdev_log(ERR, "Failed to allocate %u vectors",
+ dev->data->num_queues);
+ return -ENOMEM;
}
ret = rte_intr_enable(dev->intr_handle);
@@ -2370,7 +2370,7 @@ fpga_lte_fec_probe(struct rte_pci_driver *pci_drv,
/* Fill HW specific part of device structure */
bbdev->device = &pci_dev->device;
- bbdev->intr_handle = &pci_dev->intr_handle;
+ bbdev->intr_handle = pci_dev->intr_handle;
bbdev->data->socket_id = pci_dev->device.numa_node;
/* Invoke FEC FPGA device initialization function */
diff --git a/drivers/bus/auxiliary/auxiliary_common.c b/drivers/bus/auxiliary/auxiliary_common.c
index 603b6fdc02..2cf8fe672d 100644
--- a/drivers/bus/auxiliary/auxiliary_common.c
+++ b/drivers/bus/auxiliary/auxiliary_common.c
@@ -121,15 +121,27 @@ rte_auxiliary_probe_one_driver(struct rte_auxiliary_driver *drv,
return -EINVAL;
}
+ /* Allocate interrupt instance */
+ dev->intr_handle =
+ rte_intr_instance_alloc(RTE_INTR_INSTANCE_F_PRIVATE);
+ if (dev->intr_handle == NULL) {
+ AUXILIARY_LOG(ERR, "Could not allocate interrupt instance for device %s",
+ dev->name);
+ return -ENOMEM;
+ }
+
dev->driver = drv;
AUXILIARY_LOG(INFO, "Probe auxiliary driver: %s device: %s (NUMA node %i)",
drv->driver.name, dev->name, dev->device.numa_node);
ret = drv->probe(drv, dev);
- if (ret != 0)
+ if (ret != 0) {
dev->driver = NULL;
- else
+ rte_intr_instance_free(dev->intr_handle);
+ dev->intr_handle = NULL;
+ } else {
dev->device.driver = &drv->driver;
+ }
return ret;
}
@@ -320,6 +332,7 @@ auxiliary_unplug(struct rte_device *dev)
if (ret == 0) {
rte_auxiliary_remove_device(adev);
rte_devargs_remove(dev->devargs);
+ rte_intr_instance_free(adev->intr_handle);
free(adev);
}
return ret;
diff --git a/drivers/bus/auxiliary/rte_bus_auxiliary.h b/drivers/bus/auxiliary/rte_bus_auxiliary.h
index b1f5610404..93b266daf7 100644
--- a/drivers/bus/auxiliary/rte_bus_auxiliary.h
+++ b/drivers/bus/auxiliary/rte_bus_auxiliary.h
@@ -115,7 +115,7 @@ struct rte_auxiliary_device {
RTE_TAILQ_ENTRY(rte_auxiliary_device) next; /**< Next probed device. */
struct rte_device device; /**< Inherit core device */
char name[RTE_DEV_NAME_MAX_LEN + 1]; /**< ASCII device name */
- struct rte_intr_handle intr_handle; /**< Interrupt handle */
+ struct rte_intr_handle *intr_handle; /**< Interrupt handle */
struct rte_auxiliary_driver *driver; /**< Device driver */
};
diff --git a/drivers/bus/dpaa/dpaa_bus.c b/drivers/bus/dpaa/dpaa_bus.c
index 6cab2ae760..9a53fdc1fb 100644
--- a/drivers/bus/dpaa/dpaa_bus.c
+++ b/drivers/bus/dpaa/dpaa_bus.c
@@ -172,6 +172,15 @@ dpaa_create_device_list(void)
dev->device.bus = &rte_dpaa_bus.bus;
+ /* Allocate interrupt handle instance */
+ dev->intr_handle =
+ rte_intr_instance_alloc(RTE_INTR_INSTANCE_F_PRIVATE);
+ if (dev->intr_handle == NULL) {
+ DPAA_BUS_LOG(ERR, "Failed to allocate intr handle");
+ ret = -ENOMEM;
+ goto cleanup;
+ }
+
cfg = &dpaa_netcfg->port_cfg[i];
fman_intf = cfg->fman_if;
@@ -214,6 +223,15 @@ dpaa_create_device_list(void)
goto cleanup;
}
+ /* Allocate interrupt handle instance */
+ dev->intr_handle =
+ rte_intr_instance_alloc(RTE_INTR_INSTANCE_F_PRIVATE);
+ if (dev->intr_handle == NULL) {
+ DPAA_BUS_LOG(ERR, "Failed to allocate intr handle");
+ ret = -ENOMEM;
+ goto cleanup;
+ }
+
dev->device_type = FSL_DPAA_CRYPTO;
dev->id.dev_id = rte_dpaa_bus.device_count + i;
@@ -247,6 +265,7 @@ dpaa_clean_device_list(void)
RTE_TAILQ_FOREACH_SAFE(dev, &rte_dpaa_bus.device_list, next, tdev) {
TAILQ_REMOVE(&rte_dpaa_bus.device_list, dev, next);
+ rte_intr_instance_free(dev->intr_handle);
free(dev);
dev = NULL;
}
@@ -559,8 +578,11 @@ static int rte_dpaa_setup_intr(struct rte_intr_handle *intr_handle)
return errno;
}
- intr_handle->fd = fd;
- intr_handle->type = RTE_INTR_HANDLE_EXT;
+ if (rte_intr_fd_set(intr_handle, fd))
+ return rte_errno;
+
+ if (rte_intr_type_set(intr_handle, RTE_INTR_HANDLE_EXT))
+ return rte_errno;
return 0;
}
@@ -612,7 +634,7 @@ rte_dpaa_bus_probe(void)
TAILQ_FOREACH(dev, &rte_dpaa_bus.device_list, next) {
if (dev->device_type == FSL_DPAA_ETH) {
- ret = rte_dpaa_setup_intr(&dev->intr_handle);
+ ret = rte_dpaa_setup_intr(dev->intr_handle);
if (ret)
DPAA_BUS_ERR("Error setting up interrupt.\n");
}
diff --git a/drivers/bus/dpaa/rte_dpaa_bus.h b/drivers/bus/dpaa/rte_dpaa_bus.h
index ecc66387f6..97d189f9b0 100644
--- a/drivers/bus/dpaa/rte_dpaa_bus.h
+++ b/drivers/bus/dpaa/rte_dpaa_bus.h
@@ -98,7 +98,7 @@ struct rte_dpaa_device {
};
struct rte_dpaa_driver *driver;
struct dpaa_device_id id;
- struct rte_intr_handle intr_handle;
+ struct rte_intr_handle *intr_handle;
enum rte_dpaa_type device_type; /**< Ethernet or crypto type device */
char name[RTE_ETH_NAME_MAX_LEN];
};
diff --git a/drivers/bus/fslmc/fslmc_bus.c b/drivers/bus/fslmc/fslmc_bus.c
index 8c8f8a298d..ac3cb4aa5a 100644
--- a/drivers/bus/fslmc/fslmc_bus.c
+++ b/drivers/bus/fslmc/fslmc_bus.c
@@ -47,6 +47,7 @@ cleanup_fslmc_device_list(void)
RTE_TAILQ_FOREACH_SAFE(dev, &rte_fslmc_bus.device_list, next, t_dev) {
TAILQ_REMOVE(&rte_fslmc_bus.device_list, dev, next);
+ rte_intr_instance_free(dev->intr_handle);
free(dev);
dev = NULL;
}
@@ -160,6 +161,15 @@ scan_one_fslmc_device(char *dev_name)
dev->device.bus = &rte_fslmc_bus.bus;
+ /* Allocate interrupt instance */
+ dev->intr_handle =
+ rte_intr_instance_alloc(RTE_INTR_INSTANCE_F_PRIVATE);
+ if (dev->intr_handle == NULL) {
+ DPAA2_BUS_ERR("Failed to allocate intr handle");
+ ret = -ENOMEM;
+ goto cleanup;
+ }
+
/* Parse the device name and ID */
t_ptr = strtok(dup_dev_name, ".");
if (!t_ptr) {
@@ -220,8 +230,10 @@ scan_one_fslmc_device(char *dev_name)
cleanup:
if (dup_dev_name)
free(dup_dev_name);
- if (dev)
+ if (dev) {
+ rte_intr_instance_free(dev->intr_handle);
free(dev);
+ }
return ret;
}
diff --git a/drivers/bus/fslmc/fslmc_vfio.c b/drivers/bus/fslmc/fslmc_vfio.c
index 852fcfc4dd..b4704eeae4 100644
--- a/drivers/bus/fslmc/fslmc_vfio.c
+++ b/drivers/bus/fslmc/fslmc_vfio.c
@@ -599,7 +599,7 @@ int rte_dpaa2_intr_enable(struct rte_intr_handle *intr_handle, int index)
int len, ret;
char irq_set_buf[IRQ_SET_BUF_LEN];
struct vfio_irq_set *irq_set;
- int *fd_ptr;
+ int *fd_ptr, vfio_dev_fd;
len = sizeof(irq_set_buf);
@@ -611,12 +611,14 @@ int rte_dpaa2_intr_enable(struct rte_intr_handle *intr_handle, int index)
irq_set->index = index;
irq_set->start = 0;
fd_ptr = (int *)&irq_set->data;
- *fd_ptr = intr_handle->fd;
+ *fd_ptr = rte_intr_fd_get(intr_handle);
- ret = ioctl(intr_handle->vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
+ vfio_dev_fd = rte_intr_dev_fd_get(intr_handle);
+ ret = ioctl(vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
if (ret) {
DPAA2_BUS_ERR("Error:dpaa2 SET IRQs fd=%d, err = %d(%s)",
- intr_handle->fd, errno, strerror(errno));
+ rte_intr_fd_get(intr_handle), errno,
+ strerror(errno));
return ret;
}
@@ -627,7 +629,7 @@ int rte_dpaa2_intr_disable(struct rte_intr_handle *intr_handle, int index)
{
struct vfio_irq_set *irq_set;
char irq_set_buf[IRQ_SET_BUF_LEN];
- int len, ret;
+ int len, ret, vfio_dev_fd;
len = sizeof(struct vfio_irq_set);
@@ -638,11 +640,12 @@ int rte_dpaa2_intr_disable(struct rte_intr_handle *intr_handle, int index)
irq_set->start = 0;
irq_set->count = 0;
- ret = ioctl(intr_handle->vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
+ vfio_dev_fd = rte_intr_dev_fd_get(intr_handle);
+ ret = ioctl(vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
if (ret)
DPAA2_BUS_ERR(
"Error disabling dpaa2 interrupts for fd %d",
- intr_handle->fd);
+ rte_intr_fd_get(intr_handle));
return ret;
}
@@ -684,9 +687,14 @@ rte_dpaa2_vfio_setup_intr(struct rte_intr_handle *intr_handle,
return -1;
}
- intr_handle->fd = fd;
- intr_handle->type = RTE_INTR_HANDLE_VFIO_MSI;
- intr_handle->vfio_dev_fd = vfio_dev_fd;
+ if (rte_intr_fd_set(intr_handle, fd))
+ return -rte_errno;
+
+ if (rte_intr_type_set(intr_handle, RTE_INTR_HANDLE_VFIO_MSI))
+ return -rte_errno;
+
+ if (rte_intr_dev_fd_set(intr_handle, vfio_dev_fd))
+ return -rte_errno;
return 0;
}
@@ -711,7 +719,7 @@ fslmc_process_iodevices(struct rte_dpaa2_device *dev)
switch (dev->dev_type) {
case DPAA2_ETH:
- rte_dpaa2_vfio_setup_intr(&dev->intr_handle, dev_fd,
+ rte_dpaa2_vfio_setup_intr(dev->intr_handle, dev_fd,
device_info.num_irqs);
break;
case DPAA2_CON:
diff --git a/drivers/bus/fslmc/portal/dpaa2_hw_dpio.c b/drivers/bus/fslmc/portal/dpaa2_hw_dpio.c
index 1a1e437ed1..2210a0fa4a 100644
--- a/drivers/bus/fslmc/portal/dpaa2_hw_dpio.c
+++ b/drivers/bus/fslmc/portal/dpaa2_hw_dpio.c
@@ -176,7 +176,7 @@ static int dpaa2_dpio_intr_init(struct dpaa2_dpio_dev *dpio_dev)
int threshold = 0x3, timeout = 0xFF;
dpio_epoll_fd = epoll_create(1);
- ret = rte_dpaa2_intr_enable(&dpio_dev->intr_handle, 0);
+ ret = rte_dpaa2_intr_enable(dpio_dev->intr_handle, 0);
if (ret) {
DPAA2_BUS_ERR("Interrupt registeration failed");
return -1;
@@ -195,7 +195,7 @@ static int dpaa2_dpio_intr_init(struct dpaa2_dpio_dev *dpio_dev)
qbman_swp_dqrr_thrshld_write(dpio_dev->sw_portal, threshold);
qbman_swp_intr_timeout_write(dpio_dev->sw_portal, timeout);
- eventfd = dpio_dev->intr_handle.fd;
+ eventfd = rte_intr_fd_get(dpio_dev->intr_handle);
epoll_ev.events = EPOLLIN | EPOLLPRI | EPOLLET;
epoll_ev.data.fd = eventfd;
@@ -213,7 +213,7 @@ static void dpaa2_dpio_intr_deinit(struct dpaa2_dpio_dev *dpio_dev)
{
int ret;
- ret = rte_dpaa2_intr_disable(&dpio_dev->intr_handle, 0);
+ ret = rte_dpaa2_intr_disable(dpio_dev->intr_handle, 0);
if (ret)
DPAA2_BUS_ERR("DPIO interrupt disable failed");
@@ -388,6 +388,14 @@ dpaa2_create_dpio_device(int vdev_fd,
/* Using single portal for all devices */
dpio_dev->mc_portal = dpaa2_get_mcp_ptr(MC_PORTAL_INDEX);
+ /* Allocate interrupt instance */
+ dpio_dev->intr_handle =
+ rte_intr_instance_alloc(RTE_INTR_INSTANCE_F_SHARED);
+ if (!dpio_dev->intr_handle) {
+ DPAA2_BUS_ERR("Failed to allocate intr handle");
+ goto err;
+ }
+
dpio_dev->dpio = rte_zmalloc(NULL, sizeof(struct fsl_mc_io),
RTE_CACHE_LINE_SIZE);
if (!dpio_dev->dpio) {
@@ -490,7 +498,7 @@ dpaa2_create_dpio_device(int vdev_fd,
io_space_count++;
dpio_dev->index = io_space_count;
- if (rte_dpaa2_vfio_setup_intr(&dpio_dev->intr_handle, vdev_fd, 1)) {
+ if (rte_dpaa2_vfio_setup_intr(dpio_dev->intr_handle, vdev_fd, 1)) {
DPAA2_BUS_ERR("Fail to setup interrupt for %d",
dpio_dev->hw_id);
goto err;
@@ -538,6 +546,7 @@ dpaa2_create_dpio_device(int vdev_fd,
rte_free(dpio_dev->dpio);
}
+ rte_intr_instance_free(dpio_dev->intr_handle);
rte_free(dpio_dev);
/* For each element in the list, cleanup */
@@ -549,6 +558,7 @@ dpaa2_create_dpio_device(int vdev_fd,
dpio_dev->token);
rte_free(dpio_dev->dpio);
}
+ rte_intr_instance_free(dpio_dev->intr_handle);
rte_free(dpio_dev);
}
diff --git a/drivers/bus/fslmc/portal/dpaa2_hw_pvt.h b/drivers/bus/fslmc/portal/dpaa2_hw_pvt.h
index 037c841ef5..b1bba1ac36 100644
--- a/drivers/bus/fslmc/portal/dpaa2_hw_pvt.h
+++ b/drivers/bus/fslmc/portal/dpaa2_hw_pvt.h
@@ -116,7 +116,7 @@ struct dpaa2_dpio_dev {
uintptr_t qbman_portal_ci_paddr;
/**< Physical address of Cache Inhibit Area */
uintptr_t ci_size; /**< Size of the CI region */
- struct rte_intr_handle intr_handle; /* Interrupt related info */
+ struct rte_intr_handle *intr_handle; /* Interrupt related info */
int32_t epoll_fd; /**< File descriptor created for interrupt polling */
int32_t hw_id; /**< An unique ID of this DPIO device instance */
struct dpaa2_portal_dqrr dpaa2_held_bufs;
diff --git a/drivers/bus/fslmc/rte_fslmc.h b/drivers/bus/fslmc/rte_fslmc.h
index a71cac7a9f..729f360646 100644
--- a/drivers/bus/fslmc/rte_fslmc.h
+++ b/drivers/bus/fslmc/rte_fslmc.h
@@ -122,7 +122,7 @@ struct rte_dpaa2_device {
};
enum rte_dpaa2_dev_type dev_type; /**< Device Type */
uint16_t object_id; /**< DPAA2 Object ID */
- struct rte_intr_handle intr_handle; /**< Interrupt handle */
+ struct rte_intr_handle *intr_handle; /**< Interrupt handle */
struct rte_dpaa2_driver *driver; /**< Associated driver */
char name[FSLMC_OBJECT_MAX_LEN]; /**< DPAA2 Object name*/
};
diff --git a/drivers/bus/ifpga/ifpga_bus.c b/drivers/bus/ifpga/ifpga_bus.c
index 62887da2d8..cbc6809284 100644
--- a/drivers/bus/ifpga/ifpga_bus.c
+++ b/drivers/bus/ifpga/ifpga_bus.c
@@ -161,6 +161,14 @@ ifpga_scan_one(struct rte_rawdev *rawdev,
afu_dev->id.uuid.uuid_high = 0;
afu_dev->id.port = afu_pr_conf.afu_id.port;
+ /* Allocate interrupt instance */
+ afu_dev->intr_handle =
+ rte_intr_instance_alloc(RTE_INTR_INSTANCE_F_PRIVATE);
+ if (afu_dev->intr_handle == NULL) {
+ IFPGA_BUS_ERR("Failed to allocate intr handle");
+ goto end;
+ }
+
if (rawdev->dev_ops && rawdev->dev_ops->dev_info_get)
rawdev->dev_ops->dev_info_get(rawdev, afu_dev, sizeof(*afu_dev));
@@ -189,8 +197,10 @@ ifpga_scan_one(struct rte_rawdev *rawdev,
rte_kvargs_free(kvlist);
if (path)
free(path);
- if (afu_dev)
+ if (afu_dev) {
+ rte_intr_instance_free(afu_dev->intr_handle);
free(afu_dev);
+ }
return NULL;
}
@@ -396,6 +406,7 @@ ifpga_unplug(struct rte_device *dev)
TAILQ_REMOVE(&ifpga_afu_dev_list, afu_dev, next);
rte_devargs_remove(dev->devargs);
+ rte_intr_instance_free(afu_dev->intr_handle);
free(afu_dev);
return 0;
diff --git a/drivers/bus/ifpga/rte_bus_ifpga.h b/drivers/bus/ifpga/rte_bus_ifpga.h
index a85e90d384..007ad19875 100644
--- a/drivers/bus/ifpga/rte_bus_ifpga.h
+++ b/drivers/bus/ifpga/rte_bus_ifpga.h
@@ -79,7 +79,7 @@ struct rte_afu_device {
struct rte_mem_resource mem_resource[PCI_MAX_RESOURCE];
/**< AFU Memory Resource */
struct rte_afu_shared shared;
- struct rte_intr_handle intr_handle; /**< Interrupt handle */
+ struct rte_intr_handle *intr_handle; /**< Interrupt handle */
struct rte_afu_driver *driver; /**< Associated driver */
char path[IFPGA_BUS_BITSTREAM_PATH_MAX_LEN];
} __rte_packed;
diff --git a/drivers/bus/pci/bsd/pci.c b/drivers/bus/pci/bsd/pci.c
index d189bff311..9a11f99ae3 100644
--- a/drivers/bus/pci/bsd/pci.c
+++ b/drivers/bus/pci/bsd/pci.c
@@ -95,10 +95,10 @@ pci_uio_free_resource(struct rte_pci_device *dev,
{
rte_free(uio_res);
- if (dev->intr_handle.fd) {
- close(dev->intr_handle.fd);
- dev->intr_handle.fd = -1;
- dev->intr_handle.type = RTE_INTR_HANDLE_UNKNOWN;
+ if (rte_intr_fd_get(dev->intr_handle)) {
+ close(rte_intr_fd_get(dev->intr_handle));
+ rte_intr_fd_set(dev->intr_handle, -1);
+ rte_intr_type_set(dev->intr_handle, RTE_INTR_HANDLE_UNKNOWN);
}
}
@@ -121,13 +121,19 @@ pci_uio_alloc_resource(struct rte_pci_device *dev,
}
/* save fd if in primary process */
- dev->intr_handle.fd = open(devname, O_RDWR);
- if (dev->intr_handle.fd < 0) {
+ if (rte_intr_fd_set(dev->intr_handle, open(devname, O_RDWR))) {
+ RTE_LOG(WARNING, EAL, "Failed to save fd");
+ goto error;
+ }
+
+ if (rte_intr_fd_get(dev->intr_handle) < 0) {
RTE_LOG(ERR, EAL, "Cannot open %s: %s\n",
devname, strerror(errno));
goto error;
}
- dev->intr_handle.type = RTE_INTR_HANDLE_UIO;
+
+ if (rte_intr_type_set(dev->intr_handle, RTE_INTR_HANDLE_UIO))
+ goto error;
/* allocate the mapping details for secondary processes*/
*uio_res = rte_zmalloc("UIO_RES", sizeof(**uio_res), 0);
diff --git a/drivers/bus/pci/linux/pci.c b/drivers/bus/pci/linux/pci.c
index 4d261b55ee..e521459870 100644
--- a/drivers/bus/pci/linux/pci.c
+++ b/drivers/bus/pci/linux/pci.c
@@ -645,7 +645,7 @@ int rte_pci_read_config(const struct rte_pci_device *device,
void *buf, size_t len, off_t offset)
{
char devname[RTE_DEV_NAME_MAX_LEN] = "";
- const struct rte_intr_handle *intr_handle = &device->intr_handle;
+ const struct rte_intr_handle *intr_handle = device->intr_handle;
switch (device->kdrv) {
case RTE_PCI_KDRV_IGB_UIO:
@@ -669,7 +669,7 @@ int rte_pci_write_config(const struct rte_pci_device *device,
const void *buf, size_t len, off_t offset)
{
char devname[RTE_DEV_NAME_MAX_LEN] = "";
- const struct rte_intr_handle *intr_handle = &device->intr_handle;
+ const struct rte_intr_handle *intr_handle = device->intr_handle;
switch (device->kdrv) {
case RTE_PCI_KDRV_IGB_UIO:
diff --git a/drivers/bus/pci/linux/pci_uio.c b/drivers/bus/pci/linux/pci_uio.c
index 39ebeac2a0..2ee5d04672 100644
--- a/drivers/bus/pci/linux/pci_uio.c
+++ b/drivers/bus/pci/linux/pci_uio.c
@@ -35,14 +35,18 @@ int
pci_uio_read_config(const struct rte_intr_handle *intr_handle,
void *buf, size_t len, off_t offset)
{
- return pread(intr_handle->uio_cfg_fd, buf, len, offset);
+ int uio_cfg_fd = rte_intr_dev_fd_get(intr_handle);
+
+ return pread(uio_cfg_fd, buf, len, offset);
}
int
pci_uio_write_config(const struct rte_intr_handle *intr_handle,
const void *buf, size_t len, off_t offset)
{
- return pwrite(intr_handle->uio_cfg_fd, buf, len, offset);
+ int uio_cfg_fd = rte_intr_dev_fd_get(intr_handle);
+
+ return pwrite(uio_cfg_fd, buf, len, offset);
}
static int
@@ -198,16 +202,19 @@ void
pci_uio_free_resource(struct rte_pci_device *dev,
struct mapped_pci_resource *uio_res)
{
+ int uio_cfg_fd = rte_intr_dev_fd_get(dev->intr_handle);
+
rte_free(uio_res);
- if (dev->intr_handle.uio_cfg_fd >= 0) {
- close(dev->intr_handle.uio_cfg_fd);
- dev->intr_handle.uio_cfg_fd = -1;
+ if (uio_cfg_fd >= 0) {
+ close(uio_cfg_fd);
+ rte_intr_dev_fd_set(dev->intr_handle, -1);
}
- if (dev->intr_handle.fd >= 0) {
- close(dev->intr_handle.fd);
- dev->intr_handle.fd = -1;
- dev->intr_handle.type = RTE_INTR_HANDLE_UNKNOWN;
+
+ if (rte_intr_fd_get(dev->intr_handle) >= 0) {
+ close(rte_intr_fd_get(dev->intr_handle));
+ rte_intr_fd_set(dev->intr_handle, -1);
+ rte_intr_type_set(dev->intr_handle, RTE_INTR_HANDLE_UNKNOWN);
}
}
@@ -218,7 +225,7 @@ pci_uio_alloc_resource(struct rte_pci_device *dev,
char dirname[PATH_MAX];
char cfgname[PATH_MAX];
char devname[PATH_MAX]; /* contains the /dev/uioX */
- int uio_num;
+ int uio_num, fd, uio_cfg_fd;
struct rte_pci_addr *loc;
loc = &dev->addr;
@@ -233,29 +240,38 @@ pci_uio_alloc_resource(struct rte_pci_device *dev,
snprintf(devname, sizeof(devname), "/dev/uio%u", uio_num);
/* save fd if in primary process */
- dev->intr_handle.fd = open(devname, O_RDWR);
- if (dev->intr_handle.fd < 0) {
+ fd = open(devname, O_RDWR);
+ if (fd < 0) {
RTE_LOG(ERR, EAL, "Cannot open %s: %s\n",
devname, strerror(errno));
goto error;
}
+ if (rte_intr_fd_set(dev->intr_handle, fd))
+ goto error;
+
snprintf(cfgname, sizeof(cfgname),
"/sys/class/uio/uio%u/device/config", uio_num);
- dev->intr_handle.uio_cfg_fd = open(cfgname, O_RDWR);
- if (dev->intr_handle.uio_cfg_fd < 0) {
+
+ uio_cfg_fd = open(cfgname, O_RDWR);
+ if (uio_cfg_fd < 0) {
RTE_LOG(ERR, EAL, "Cannot open %s: %s\n",
cfgname, strerror(errno));
goto error;
}
- if (dev->kdrv == RTE_PCI_KDRV_IGB_UIO)
- dev->intr_handle.type = RTE_INTR_HANDLE_UIO;
- else {
- dev->intr_handle.type = RTE_INTR_HANDLE_UIO_INTX;
+ if (rte_intr_dev_fd_set(dev->intr_handle, uio_cfg_fd))
+ goto error;
+
+ if (dev->kdrv == RTE_PCI_KDRV_IGB_UIO) {
+ if (rte_intr_type_set(dev->intr_handle, RTE_INTR_HANDLE_UIO))
+ goto error;
+ } else {
+ if (rte_intr_type_set(dev->intr_handle, RTE_INTR_HANDLE_UIO_INTX))
+ goto error;
/* set bus master that is not done by uio_pci_generic */
- if (pci_uio_set_bus_master(dev->intr_handle.uio_cfg_fd)) {
+ if (pci_uio_set_bus_master(uio_cfg_fd)) {
RTE_LOG(ERR, EAL, "Cannot set up bus mastering!\n");
goto error;
}
@@ -381,7 +397,7 @@ pci_uio_ioport_map(struct rte_pci_device *dev, int bar,
char buf[BUFSIZ];
uint64_t phys_addr, end_addr, flags;
unsigned long base;
- int i;
+ int i, fd;
/* open and read addresses of the corresponding resource in sysfs */
snprintf(filename, sizeof(filename), "%s/" PCI_PRI_FMT "/resource",
@@ -427,7 +443,8 @@ pci_uio_ioport_map(struct rte_pci_device *dev, int bar,
}
/* FIXME only for primary process ? */
- if (dev->intr_handle.type == RTE_INTR_HANDLE_UNKNOWN) {
+ if (rte_intr_type_get(dev->intr_handle) ==
+ RTE_INTR_HANDLE_UNKNOWN) {
int uio_num = pci_get_uio_dev(dev, dirname, sizeof(dirname), 0);
if (uio_num < 0) {
RTE_LOG(ERR, EAL, "cannot open %s: %s\n",
@@ -436,13 +453,17 @@ pci_uio_ioport_map(struct rte_pci_device *dev, int bar,
}
snprintf(filename, sizeof(filename), "/dev/uio%u", uio_num);
- dev->intr_handle.fd = open(filename, O_RDWR);
- if (dev->intr_handle.fd < 0) {
+ fd = open(filename, O_RDWR);
+ if (fd < 0) {
RTE_LOG(ERR, EAL, "Cannot open %s: %s\n",
filename, strerror(errno));
goto error;
}
- dev->intr_handle.type = RTE_INTR_HANDLE_UIO;
+ if (rte_intr_fd_set(dev->intr_handle, fd))
+ goto error;
+
+ if (rte_intr_type_set(dev->intr_handle, RTE_INTR_HANDLE_UIO))
+ goto error;
}
RTE_LOG(DEBUG, EAL, "PCI Port IO found start=0x%lx\n", base);
diff --git a/drivers/bus/pci/linux/pci_vfio.c b/drivers/bus/pci/linux/pci_vfio.c
index a024269140..7b2f8296c5 100644
--- a/drivers/bus/pci/linux/pci_vfio.c
+++ b/drivers/bus/pci/linux/pci_vfio.c
@@ -47,7 +47,9 @@ int
pci_vfio_read_config(const struct rte_intr_handle *intr_handle,
void *buf, size_t len, off_t offs)
{
- return pread64(intr_handle->vfio_dev_fd, buf, len,
+ int vfio_dev_fd = rte_intr_dev_fd_get(intr_handle);
+
+ return pread64(vfio_dev_fd, buf, len,
VFIO_GET_REGION_ADDR(VFIO_PCI_CONFIG_REGION_INDEX) + offs);
}
@@ -55,7 +57,9 @@ int
pci_vfio_write_config(const struct rte_intr_handle *intr_handle,
const void *buf, size_t len, off_t offs)
{
- return pwrite64(intr_handle->vfio_dev_fd, buf, len,
+ int vfio_dev_fd = rte_intr_dev_fd_get(intr_handle);
+
+ return pwrite64(vfio_dev_fd, buf, len,
VFIO_GET_REGION_ADDR(VFIO_PCI_CONFIG_REGION_INDEX) + offs);
}
@@ -281,21 +285,27 @@ pci_vfio_setup_interrupts(struct rte_pci_device *dev, int vfio_dev_fd)
return -1;
}
- dev->intr_handle.fd = fd;
- dev->intr_handle.vfio_dev_fd = vfio_dev_fd;
+ if (rte_intr_fd_set(dev->intr_handle, fd))
+ return -1;
+
+ if (rte_intr_dev_fd_set(dev->intr_handle, vfio_dev_fd))
+ return -1;
switch (i) {
case VFIO_PCI_MSIX_IRQ_INDEX:
intr_mode = RTE_INTR_MODE_MSIX;
- dev->intr_handle.type = RTE_INTR_HANDLE_VFIO_MSIX;
+ rte_intr_type_set(dev->intr_handle,
+ RTE_INTR_HANDLE_VFIO_MSIX);
break;
case VFIO_PCI_MSI_IRQ_INDEX:
intr_mode = RTE_INTR_MODE_MSI;
- dev->intr_handle.type = RTE_INTR_HANDLE_VFIO_MSI;
+ rte_intr_type_set(dev->intr_handle,
+ RTE_INTR_HANDLE_VFIO_MSI);
break;
case VFIO_PCI_INTX_IRQ_INDEX:
intr_mode = RTE_INTR_MODE_LEGACY;
- dev->intr_handle.type = RTE_INTR_HANDLE_VFIO_LEGACY;
+ rte_intr_type_set(dev->intr_handle,
+ RTE_INTR_HANDLE_VFIO_LEGACY);
break;
default:
RTE_LOG(ERR, EAL, "Unknown interrupt type!\n");
@@ -362,11 +372,16 @@ pci_vfio_enable_notifier(struct rte_pci_device *dev, int vfio_dev_fd)
return -1;
}
- dev->vfio_req_intr_handle.fd = fd;
- dev->vfio_req_intr_handle.type = RTE_INTR_HANDLE_VFIO_REQ;
- dev->vfio_req_intr_handle.vfio_dev_fd = vfio_dev_fd;
+ if (rte_intr_fd_set(dev->vfio_req_intr_handle, fd))
+ return -1;
+
+ if (rte_intr_type_set(dev->vfio_req_intr_handle, RTE_INTR_HANDLE_VFIO_REQ))
+ return -1;
+
+ if (rte_intr_dev_fd_set(dev->vfio_req_intr_handle, vfio_dev_fd))
+ return -1;
- ret = rte_intr_callback_register(&dev->vfio_req_intr_handle,
+ ret = rte_intr_callback_register(dev->vfio_req_intr_handle,
pci_vfio_req_handler,
(void *)&dev->device);
if (ret) {
@@ -374,10 +389,10 @@ pci_vfio_enable_notifier(struct rte_pci_device *dev, int vfio_dev_fd)
goto error;
}
- ret = rte_intr_enable(&dev->vfio_req_intr_handle);
+ ret = rte_intr_enable(dev->vfio_req_intr_handle);
if (ret) {
RTE_LOG(ERR, EAL, "Fail to enable req notifier.\n");
- ret = rte_intr_callback_unregister(&dev->vfio_req_intr_handle,
+ ret = rte_intr_callback_unregister(dev->vfio_req_intr_handle,
pci_vfio_req_handler,
(void *)&dev->device);
if (ret < 0)
@@ -390,9 +405,9 @@ pci_vfio_enable_notifier(struct rte_pci_device *dev, int vfio_dev_fd)
error:
close(fd);
- dev->vfio_req_intr_handle.fd = -1;
- dev->vfio_req_intr_handle.type = RTE_INTR_HANDLE_UNKNOWN;
- dev->vfio_req_intr_handle.vfio_dev_fd = -1;
+ rte_intr_fd_set(dev->vfio_req_intr_handle, -1);
+ rte_intr_type_set(dev->vfio_req_intr_handle, RTE_INTR_HANDLE_UNKNOWN);
+ rte_intr_dev_fd_set(dev->vfio_req_intr_handle, -1);
return -1;
}
@@ -403,13 +418,13 @@ pci_vfio_disable_notifier(struct rte_pci_device *dev)
{
int ret;
- ret = rte_intr_disable(&dev->vfio_req_intr_handle);
+ ret = rte_intr_disable(dev->vfio_req_intr_handle);
if (ret) {
RTE_LOG(ERR, EAL, "fail to disable req notifier.\n");
return -1;
}
- ret = rte_intr_callback_unregister_sync(&dev->vfio_req_intr_handle,
+ ret = rte_intr_callback_unregister_sync(dev->vfio_req_intr_handle,
pci_vfio_req_handler,
(void *)&dev->device);
if (ret < 0) {
@@ -418,11 +433,11 @@ pci_vfio_disable_notifier(struct rte_pci_device *dev)
return -1;
}
- close(dev->vfio_req_intr_handle.fd);
+ close(rte_intr_fd_get(dev->vfio_req_intr_handle));
- dev->vfio_req_intr_handle.fd = -1;
- dev->vfio_req_intr_handle.type = RTE_INTR_HANDLE_UNKNOWN;
- dev->vfio_req_intr_handle.vfio_dev_fd = -1;
+ rte_intr_fd_set(dev->vfio_req_intr_handle, -1);
+ rte_intr_type_set(dev->vfio_req_intr_handle, RTE_INTR_HANDLE_UNKNOWN);
+ rte_intr_dev_fd_set(dev->vfio_req_intr_handle, -1);
return 0;
}
@@ -705,9 +720,12 @@ pci_vfio_map_resource_primary(struct rte_pci_device *dev)
struct pci_map *maps;
- dev->intr_handle.fd = -1;
+ if (rte_intr_fd_set(dev->intr_handle, -1))
+ return -1;
+
#ifdef HAVE_VFIO_DEV_REQ_INTERFACE
- dev->vfio_req_intr_handle.fd = -1;
+ if (rte_intr_fd_set(dev->vfio_req_intr_handle, -1))
+ return -1;
#endif
/* store PCI address string */
@@ -854,9 +872,11 @@ pci_vfio_map_resource_secondary(struct rte_pci_device *dev)
struct pci_map *maps;
- dev->intr_handle.fd = -1;
+ if (rte_intr_fd_set(dev->intr_handle, -1))
+ return -1;
#ifdef HAVE_VFIO_DEV_REQ_INTERFACE
- dev->vfio_req_intr_handle.fd = -1;
+ if (rte_intr_fd_set(dev->vfio_req_intr_handle, -1))
+ return -1;
#endif
/* store PCI address string */
@@ -897,9 +917,11 @@ pci_vfio_map_resource_secondary(struct rte_pci_device *dev)
}
/* we need save vfio_dev_fd, so it can be used during release */
- dev->intr_handle.vfio_dev_fd = vfio_dev_fd;
+ if (rte_intr_dev_fd_set(dev->intr_handle, vfio_dev_fd))
+ goto err_vfio_dev_fd;
#ifdef HAVE_VFIO_DEV_REQ_INTERFACE
- dev->vfio_req_intr_handle.vfio_dev_fd = vfio_dev_fd;
+ if (rte_intr_dev_fd_set(dev->vfio_req_intr_handle, vfio_dev_fd))
+ goto err_vfio_dev_fd;
#endif
return 0;
@@ -968,7 +990,7 @@ pci_vfio_unmap_resource_primary(struct rte_pci_device *dev)
struct rte_pci_addr *loc = &dev->addr;
struct mapped_pci_resource *vfio_res = NULL;
struct mapped_pci_res_list *vfio_res_list;
- int ret;
+ int ret, vfio_dev_fd;
/* store PCI address string */
snprintf(pci_addr, sizeof(pci_addr), PCI_PRI_FMT,
@@ -982,20 +1004,21 @@ pci_vfio_unmap_resource_primary(struct rte_pci_device *dev)
}
#endif
- if (close(dev->intr_handle.fd) < 0) {
+ if (close(rte_intr_fd_get(dev->intr_handle)) < 0) {
RTE_LOG(INFO, EAL, "Error when closing eventfd file descriptor for %s\n",
pci_addr);
return -1;
}
- if (pci_vfio_set_bus_master(dev->intr_handle.vfio_dev_fd, false)) {
+ vfio_dev_fd = rte_intr_dev_fd_get(dev->intr_handle);
+ if (pci_vfio_set_bus_master(vfio_dev_fd, false)) {
RTE_LOG(ERR, EAL, "%s cannot unset bus mastering for PCI device!\n",
pci_addr);
return -1;
}
ret = rte_vfio_release_device(rte_pci_get_sysfs_path(), pci_addr,
- dev->intr_handle.vfio_dev_fd);
+ vfio_dev_fd);
if (ret < 0) {
RTE_LOG(ERR, EAL, "Cannot release VFIO device\n");
return ret;
@@ -1024,14 +1047,15 @@ pci_vfio_unmap_resource_secondary(struct rte_pci_device *dev)
struct rte_pci_addr *loc = &dev->addr;
struct mapped_pci_resource *vfio_res = NULL;
struct mapped_pci_res_list *vfio_res_list;
- int ret;
+ int ret, vfio_dev_fd;
/* store PCI address string */
snprintf(pci_addr, sizeof(pci_addr), PCI_PRI_FMT,
loc->domain, loc->bus, loc->devid, loc->function);
+ vfio_dev_fd = rte_intr_dev_fd_get(dev->intr_handle);
ret = rte_vfio_release_device(rte_pci_get_sysfs_path(), pci_addr,
- dev->intr_handle.vfio_dev_fd);
+ vfio_dev_fd);
if (ret < 0) {
RTE_LOG(ERR, EAL, "Cannot release VFIO device\n");
return ret;
@@ -1079,9 +1103,10 @@ void
pci_vfio_ioport_read(struct rte_pci_ioport *p,
void *data, size_t len, off_t offset)
{
- const struct rte_intr_handle *intr_handle = &p->dev->intr_handle;
+ const struct rte_intr_handle *intr_handle = p->dev->intr_handle;
+ int vfio_dev_fd = rte_intr_dev_fd_get(intr_handle);
- if (pread64(intr_handle->vfio_dev_fd, data,
+ if (pread64(vfio_dev_fd, data,
len, p->base + offset) <= 0)
RTE_LOG(ERR, EAL,
"Can't read from PCI bar (%" PRIu64 ") : offset (%x)\n",
@@ -1092,9 +1117,10 @@ void
pci_vfio_ioport_write(struct rte_pci_ioport *p,
const void *data, size_t len, off_t offset)
{
- const struct rte_intr_handle *intr_handle = &p->dev->intr_handle;
+ const struct rte_intr_handle *intr_handle = p->dev->intr_handle;
+ int vfio_dev_fd = rte_intr_dev_fd_get(intr_handle);
- if (pwrite64(intr_handle->vfio_dev_fd, data,
+ if (pwrite64(vfio_dev_fd, data,
len, p->base + offset) <= 0)
RTE_LOG(ERR, EAL,
"Can't write to PCI bar (%" PRIu64 ") : offset (%x)\n",
diff --git a/drivers/bus/pci/pci_common.c b/drivers/bus/pci/pci_common.c
index 3406e03b29..f8fff2c98e 100644
--- a/drivers/bus/pci/pci_common.c
+++ b/drivers/bus/pci/pci_common.c
@@ -226,16 +226,39 @@ rte_pci_probe_one_driver(struct rte_pci_driver *dr,
return -EINVAL;
}
- dev->driver = dr;
- }
+ /* Allocate interrupt instance for pci device */
+ dev->intr_handle =
+ rte_intr_instance_alloc(RTE_INTR_INSTANCE_F_PRIVATE);
+ if (dev->intr_handle == NULL) {
+ RTE_LOG(ERR, EAL,
+ "Failed to create interrupt instance for %s\n",
+ dev->device.name);
+ return -ENOMEM;
+ }
- if (!already_probed && (dr->drv_flags & RTE_PCI_DRV_NEED_MAPPING)) {
- /* map resources for devices that use igb_uio */
- ret = rte_pci_map_device(dev);
- if (ret != 0) {
- dev->driver = NULL;
- return ret;
+ dev->vfio_req_intr_handle =
+ rte_intr_instance_alloc(RTE_INTR_INSTANCE_F_PRIVATE);
+ if (dev->vfio_req_intr_handle == NULL) {
+ rte_intr_instance_free(dev->intr_handle);
+ dev->intr_handle = NULL;
+ RTE_LOG(ERR, EAL,
+ "Failed to create vfio req interrupt instance for %s\n",
+ dev->device.name);
+ return -ENOMEM;
+ }
+
+ if (dr->drv_flags & RTE_PCI_DRV_NEED_MAPPING) {
+ ret = rte_pci_map_device(dev);
+ if (ret != 0) {
+ rte_intr_instance_free(dev->vfio_req_intr_handle);
+ dev->vfio_req_intr_handle = NULL;
+ rte_intr_instance_free(dev->intr_handle);
+ dev->intr_handle = NULL;
+ return ret;
+ }
}
+
+ dev->driver = dr;
}
RTE_LOG(INFO, EAL, "Probe PCI driver: %s (%x:%x) device: "PCI_PRI_FMT" (socket %i)\n",
@@ -248,6 +271,10 @@ rte_pci_probe_one_driver(struct rte_pci_driver *dr,
return ret; /* no rollback if already succeeded earlier */
if (ret) {
dev->driver = NULL;
+ rte_intr_instance_free(dev->vfio_req_intr_handle);
+ dev->vfio_req_intr_handle = NULL;
+ rte_intr_instance_free(dev->intr_handle);
+ dev->intr_handle = NULL;
if ((dr->drv_flags & RTE_PCI_DRV_NEED_MAPPING) &&
/* Don't unmap if device is unsupported and
* driver needs mapped resources.
@@ -295,6 +322,10 @@ rte_pci_detach_dev(struct rte_pci_device *dev)
/* clear driver structure */
dev->driver = NULL;
dev->device.driver = NULL;
+ rte_intr_instance_free(dev->intr_handle);
+ dev->intr_handle = NULL;
+ rte_intr_instance_free(dev->vfio_req_intr_handle);
+ dev->vfio_req_intr_handle = NULL;
if (dr->drv_flags & RTE_PCI_DRV_NEED_MAPPING)
/* unmap resources for devices that use igb_uio */
diff --git a/drivers/bus/pci/pci_common_uio.c b/drivers/bus/pci/pci_common_uio.c
index 318f9a1d55..244c9a8940 100644
--- a/drivers/bus/pci/pci_common_uio.c
+++ b/drivers/bus/pci/pci_common_uio.c
@@ -90,8 +90,11 @@ pci_uio_map_resource(struct rte_pci_device *dev)
struct mapped_pci_res_list *uio_res_list =
RTE_TAILQ_CAST(rte_uio_tailq.head, mapped_pci_res_list);
- dev->intr_handle.fd = -1;
- dev->intr_handle.uio_cfg_fd = -1;
+ if (rte_intr_fd_set(dev->intr_handle, -1))
+ return -1;
+
+ if (rte_intr_dev_fd_set(dev->intr_handle, -1))
+ return -1;
/* secondary processes - use already recorded details */
if (rte_eal_process_type() != RTE_PROC_PRIMARY)
@@ -207,6 +210,7 @@ pci_uio_unmap_resource(struct rte_pci_device *dev)
struct mapped_pci_resource *uio_res;
struct mapped_pci_res_list *uio_res_list =
RTE_TAILQ_CAST(rte_uio_tailq.head, mapped_pci_res_list);
+ int uio_cfg_fd;
if (dev == NULL)
return;
@@ -229,12 +233,13 @@ pci_uio_unmap_resource(struct rte_pci_device *dev)
rte_free(uio_res);
/* close fd if in primary process */
- close(dev->intr_handle.fd);
- if (dev->intr_handle.uio_cfg_fd >= 0) {
- close(dev->intr_handle.uio_cfg_fd);
- dev->intr_handle.uio_cfg_fd = -1;
+ close(rte_intr_fd_get(dev->intr_handle));
+ uio_cfg_fd = rte_intr_dev_fd_get(dev->intr_handle);
+ if (uio_cfg_fd >= 0) {
+ close(uio_cfg_fd);
+ rte_intr_dev_fd_set(dev->intr_handle, -1);
}
- dev->intr_handle.fd = -1;
- dev->intr_handle.type = RTE_INTR_HANDLE_UNKNOWN;
+ rte_intr_fd_set(dev->intr_handle, -1);
+ rte_intr_type_set(dev->intr_handle, RTE_INTR_HANDLE_UNKNOWN);
}
diff --git a/drivers/bus/pci/rte_bus_pci.h b/drivers/bus/pci/rte_bus_pci.h
index 673a2850c1..1c6a8fdd7b 100644
--- a/drivers/bus/pci/rte_bus_pci.h
+++ b/drivers/bus/pci/rte_bus_pci.h
@@ -69,12 +69,12 @@ struct rte_pci_device {
struct rte_pci_id id; /**< PCI ID. */
struct rte_mem_resource mem_resource[PCI_MAX_RESOURCE];
/**< PCI Memory Resource */
- struct rte_intr_handle intr_handle; /**< Interrupt handle */
+ struct rte_intr_handle *intr_handle; /**< Interrupt handle */
struct rte_pci_driver *driver; /**< PCI driver used in probing */
uint16_t max_vfs; /**< sriov enable if not zero */
enum rte_pci_kernel_driver kdrv; /**< Kernel driver passthrough */
char name[PCI_PRI_STR_SIZE+1]; /**< PCI location (ASCII) */
- struct rte_intr_handle vfio_req_intr_handle;
+ struct rte_intr_handle *vfio_req_intr_handle;
/**< Handler of VFIO request interrupt */
};
diff --git a/drivers/bus/vmbus/linux/vmbus_bus.c b/drivers/bus/vmbus/linux/vmbus_bus.c
index 68f6cc5742..f502783f7a 100644
--- a/drivers/bus/vmbus/linux/vmbus_bus.c
+++ b/drivers/bus/vmbus/linux/vmbus_bus.c
@@ -299,6 +299,12 @@ vmbus_scan_one(const char *name)
dev->device.devargs = vmbus_devargs_lookup(dev);
+ /* Allocate interrupt handle instance */
+ dev->intr_handle =
+ rte_intr_instance_alloc(RTE_INTR_INSTANCE_F_PRIVATE);
+ if (dev->intr_handle == NULL)
+ goto error;
+
/* device is valid, add in list (sorted) */
VMBUS_LOG(DEBUG, "Adding vmbus device %s", name);
diff --git a/drivers/bus/vmbus/linux/vmbus_uio.c b/drivers/bus/vmbus/linux/vmbus_uio.c
index 70b0d098e0..9c5c1aeca3 100644
--- a/drivers/bus/vmbus/linux/vmbus_uio.c
+++ b/drivers/bus/vmbus/linux/vmbus_uio.c
@@ -30,9 +30,11 @@ static void *vmbus_map_addr;
/* Control interrupts */
void vmbus_uio_irq_control(struct rte_vmbus_device *dev, int32_t onoff)
{
- if (write(dev->intr_handle.fd, &onoff, sizeof(onoff)) < 0) {
+ if (write(rte_intr_fd_get(dev->intr_handle), &onoff,
+ sizeof(onoff)) < 0) {
VMBUS_LOG(ERR, "cannot write to %d:%s",
- dev->intr_handle.fd, strerror(errno));
+ rte_intr_fd_get(dev->intr_handle),
+ strerror(errno));
}
}
@@ -41,7 +43,8 @@ int vmbus_uio_irq_read(struct rte_vmbus_device *dev)
int32_t count;
int cc;
- cc = read(dev->intr_handle.fd, &count, sizeof(count));
+ cc = read(rte_intr_fd_get(dev->intr_handle), &count,
+ sizeof(count));
if (cc < (int)sizeof(count)) {
if (cc < 0) {
VMBUS_LOG(ERR, "IRQ read failed %s",
@@ -61,15 +64,15 @@ vmbus_uio_free_resource(struct rte_vmbus_device *dev,
{
rte_free(uio_res);
- if (dev->intr_handle.uio_cfg_fd >= 0) {
- close(dev->intr_handle.uio_cfg_fd);
- dev->intr_handle.uio_cfg_fd = -1;
+ if (rte_intr_dev_fd_get(dev->intr_handle) >= 0) {
+ close(rte_intr_dev_fd_get(dev->intr_handle));
+ rte_intr_dev_fd_set(dev->intr_handle, -1);
}
- if (dev->intr_handle.fd >= 0) {
- close(dev->intr_handle.fd);
- dev->intr_handle.fd = -1;
- dev->intr_handle.type = RTE_INTR_HANDLE_UNKNOWN;
+ if (rte_intr_fd_get(dev->intr_handle) >= 0) {
+ close(rte_intr_fd_get(dev->intr_handle));
+ rte_intr_fd_set(dev->intr_handle, -1);
+ rte_intr_type_set(dev->intr_handle, RTE_INTR_HANDLE_UNKNOWN);
}
}
@@ -78,16 +81,22 @@ vmbus_uio_alloc_resource(struct rte_vmbus_device *dev,
struct mapped_vmbus_resource **uio_res)
{
char devname[PATH_MAX]; /* contains the /dev/uioX */
+ int fd;
/* save fd if in primary process */
snprintf(devname, sizeof(devname), "/dev/uio%u", dev->uio_num);
- dev->intr_handle.fd = open(devname, O_RDWR);
- if (dev->intr_handle.fd < 0) {
+ fd = open(devname, O_RDWR);
+ if (fd < 0) {
VMBUS_LOG(ERR, "Cannot open %s: %s",
devname, strerror(errno));
goto error;
}
- dev->intr_handle.type = RTE_INTR_HANDLE_UIO_INTX;
+
+ if (rte_intr_fd_set(dev->intr_handle, fd))
+ goto error;
+
+ if (rte_intr_type_set(dev->intr_handle, RTE_INTR_HANDLE_UIO_INTX))
+ goto error;
/* allocate the mapping details for secondary processes*/
*uio_res = rte_zmalloc("UIO_RES", sizeof(**uio_res), 0);
diff --git a/drivers/bus/vmbus/rte_bus_vmbus.h b/drivers/bus/vmbus/rte_bus_vmbus.h
index 6bcff66468..466d42d277 100644
--- a/drivers/bus/vmbus/rte_bus_vmbus.h
+++ b/drivers/bus/vmbus/rte_bus_vmbus.h
@@ -73,7 +73,7 @@ struct rte_vmbus_device {
struct vmbus_channel *primary; /**< VMBUS primary channel */
struct vmbus_mon_page *monitor_page; /**< VMBUS monitor page */
- struct rte_intr_handle intr_handle; /**< Interrupt handle */
+ struct rte_intr_handle *intr_handle; /**< Interrupt handle */
struct rte_mem_resource resource[VMBUS_MAX_RESOURCE];
};
diff --git a/drivers/bus/vmbus/vmbus_common_uio.c b/drivers/bus/vmbus/vmbus_common_uio.c
index 041712fe75..336296d6a8 100644
--- a/drivers/bus/vmbus/vmbus_common_uio.c
+++ b/drivers/bus/vmbus/vmbus_common_uio.c
@@ -171,9 +171,14 @@ vmbus_uio_map_resource(struct rte_vmbus_device *dev)
int ret;
/* TODO: handle rescind */
- dev->intr_handle.fd = -1;
- dev->intr_handle.uio_cfg_fd = -1;
- dev->intr_handle.type = RTE_INTR_HANDLE_UNKNOWN;
+ if (rte_intr_fd_set(dev->intr_handle, -1))
+ return -1;
+
+ if (rte_intr_dev_fd_set(dev->intr_handle, -1))
+ return -1;
+
+ if (rte_intr_type_set(dev->intr_handle, RTE_INTR_HANDLE_UNKNOWN))
+ return -1;
/* secondary processes - use already recorded details */
if (rte_eal_process_type() != RTE_PROC_PRIMARY)
@@ -253,12 +258,12 @@ vmbus_uio_unmap_resource(struct rte_vmbus_device *dev)
rte_free(uio_res);
/* close fd if in primary process */
- close(dev->intr_handle.fd);
- if (dev->intr_handle.uio_cfg_fd >= 0) {
- close(dev->intr_handle.uio_cfg_fd);
- dev->intr_handle.uio_cfg_fd = -1;
+ close(rte_intr_fd_get(dev->intr_handle));
+ if (rte_intr_dev_fd_get(dev->intr_handle) >= 0) {
+ close(rte_intr_dev_fd_get(dev->intr_handle));
+ rte_intr_dev_fd_set(dev->intr_handle, -1);
}
- dev->intr_handle.fd = -1;
- dev->intr_handle.type = RTE_INTR_HANDLE_UNKNOWN;
+ rte_intr_fd_set(dev->intr_handle, -1);
+ rte_intr_type_set(dev->intr_handle, RTE_INTR_HANDLE_UNKNOWN);
}
diff --git a/drivers/common/cnxk/roc_cpt.c b/drivers/common/cnxk/roc_cpt.c
index 56744184ae..f0e52ae18f 100644
--- a/drivers/common/cnxk/roc_cpt.c
+++ b/drivers/common/cnxk/roc_cpt.c
@@ -65,7 +65,7 @@ cpt_lf_register_misc_irq(struct roc_cpt_lf *lf)
struct plt_intr_handle *handle;
int rc, vec;
- handle = &pci_dev->intr_handle;
+ handle = pci_dev->intr_handle;
vec = lf->msixoff + CPT_LF_INT_VEC_MISC;
/* Clear err interrupt */
@@ -85,7 +85,7 @@ cpt_lf_unregister_misc_irq(struct roc_cpt_lf *lf)
struct plt_intr_handle *handle;
int vec;
- handle = &pci_dev->intr_handle;
+ handle = pci_dev->intr_handle;
vec = lf->msixoff + CPT_LF_INT_VEC_MISC;
/* Clear err interrupt */
@@ -129,7 +129,7 @@ cpt_lf_register_done_irq(struct roc_cpt_lf *lf)
struct plt_intr_handle *handle;
int rc, vec;
- handle = &pci_dev->intr_handle;
+ handle = pci_dev->intr_handle;
vec = lf->msixoff + CPT_LF_INT_VEC_DONE;
@@ -152,7 +152,7 @@ cpt_lf_unregister_done_irq(struct roc_cpt_lf *lf)
struct plt_intr_handle *handle;
int vec;
- handle = &pci_dev->intr_handle;
+ handle = pci_dev->intr_handle;
vec = lf->msixoff + CPT_LF_INT_VEC_DONE;
diff --git a/drivers/common/cnxk/roc_dev.c b/drivers/common/cnxk/roc_dev.c
index ce6980cbe4..926a916e44 100644
--- a/drivers/common/cnxk/roc_dev.c
+++ b/drivers/common/cnxk/roc_dev.c
@@ -641,7 +641,7 @@ roc_af_pf_mbox_irq(void *param)
static int
mbox_register_pf_irq(struct plt_pci_device *pci_dev, struct dev *dev)
{
- struct plt_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct plt_intr_handle *intr_handle = pci_dev->intr_handle;
int i, rc;
/* HW clear irq */
@@ -691,7 +691,7 @@ mbox_register_pf_irq(struct plt_pci_device *pci_dev, struct dev *dev)
static int
mbox_register_vf_irq(struct plt_pci_device *pci_dev, struct dev *dev)
{
- struct plt_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct plt_intr_handle *intr_handle = pci_dev->intr_handle;
int rc;
/* Clear irq */
@@ -724,7 +724,7 @@ mbox_register_irq(struct plt_pci_device *pci_dev, struct dev *dev)
static void
mbox_unregister_pf_irq(struct plt_pci_device *pci_dev, struct dev *dev)
{
- struct plt_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct plt_intr_handle *intr_handle = pci_dev->intr_handle;
int i;
/* HW clear irq */
@@ -755,7 +755,7 @@ mbox_unregister_pf_irq(struct plt_pci_device *pci_dev, struct dev *dev)
static void
mbox_unregister_vf_irq(struct plt_pci_device *pci_dev, struct dev *dev)
{
- struct plt_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct plt_intr_handle *intr_handle = pci_dev->intr_handle;
/* Clear irq */
plt_write64(~0ull, dev->bar2 + RVU_VF_INT_ENA_W1C);
@@ -839,7 +839,7 @@ roc_pf_vf_flr_irq(void *param)
static int
vf_flr_unregister_irqs(struct plt_pci_device *pci_dev, struct dev *dev)
{
- struct plt_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct plt_intr_handle *intr_handle = pci_dev->intr_handle;
int i;
plt_base_dbg("Unregister VF FLR interrupts for %s", pci_dev->name);
@@ -860,7 +860,7 @@ vf_flr_unregister_irqs(struct plt_pci_device *pci_dev, struct dev *dev)
static int
vf_flr_register_irqs(struct plt_pci_device *pci_dev, struct dev *dev)
{
- struct plt_intr_handle *handle = &pci_dev->intr_handle;
+ struct plt_intr_handle *handle = pci_dev->intr_handle;
int i, rc;
plt_base_dbg("Register VF FLR interrupts for %s", pci_dev->name);
@@ -1211,7 +1211,7 @@ dev_init(struct dev *dev, struct plt_pci_device *pci_dev)
int
dev_fini(struct dev *dev, struct plt_pci_device *pci_dev)
{
- struct plt_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct plt_intr_handle *intr_handle = pci_dev->intr_handle;
struct mbox *mbox;
/* Check if this dev hosts npalf and has 1+ refs */
diff --git a/drivers/common/cnxk/roc_irq.c b/drivers/common/cnxk/roc_irq.c
index 28fe691932..3b34467b96 100644
--- a/drivers/common/cnxk/roc_irq.c
+++ b/drivers/common/cnxk/roc_irq.c
@@ -20,11 +20,12 @@ static int
irq_get_info(struct plt_intr_handle *intr_handle)
{
struct vfio_irq_info irq = {.argsz = sizeof(irq)};
- int rc;
+ int rc, vfio_dev_fd;
irq.index = VFIO_PCI_MSIX_IRQ_INDEX;
- rc = ioctl(intr_handle->vfio_dev_fd, VFIO_DEVICE_GET_IRQ_INFO, &irq);
+ vfio_dev_fd = plt_intr_dev_fd_get(intr_handle);
+ rc = ioctl(vfio_dev_fd, VFIO_DEVICE_GET_IRQ_INFO, &irq);
if (rc < 0) {
plt_err("Failed to get IRQ info rc=%d errno=%d", rc, errno);
return rc;
@@ -36,9 +37,10 @@ irq_get_info(struct plt_intr_handle *intr_handle)
if (irq.count > PLT_MAX_RXTX_INTR_VEC_ID) {
plt_err("HW max=%d > PLT_MAX_RXTX_INTR_VEC_ID: %d", irq.count,
PLT_MAX_RXTX_INTR_VEC_ID);
- intr_handle->max_intr = PLT_MAX_RXTX_INTR_VEC_ID;
+ plt_intr_max_intr_set(intr_handle, PLT_MAX_RXTX_INTR_VEC_ID);
} else {
- intr_handle->max_intr = irq.count;
+ if (plt_intr_max_intr_set(intr_handle, irq.count))
+ return -1;
}
return 0;
@@ -49,12 +51,12 @@ irq_config(struct plt_intr_handle *intr_handle, unsigned int vec)
{
char irq_set_buf[MSIX_IRQ_SET_BUF_LEN];
struct vfio_irq_set *irq_set;
+ int len, rc, vfio_dev_fd;
int32_t *fd_ptr;
- int len, rc;
- if (vec > intr_handle->max_intr) {
+ if (vec > (uint32_t)plt_intr_max_intr_get(intr_handle)) {
plt_err("vector=%d greater than max_intr=%d", vec,
- intr_handle->max_intr);
+ plt_intr_max_intr_get(intr_handle));
return -EINVAL;
}
@@ -71,9 +73,10 @@ irq_config(struct plt_intr_handle *intr_handle, unsigned int vec)
/* Use vec fd to set interrupt vectors */
fd_ptr = (int32_t *)&irq_set->data[0];
- fd_ptr[0] = intr_handle->efds[vec];
+ fd_ptr[0] = plt_intr_efds_index_get(intr_handle, vec);
- rc = ioctl(intr_handle->vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
+ vfio_dev_fd = plt_intr_dev_fd_get(intr_handle);
+ rc = ioctl(vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
if (rc)
plt_err("Failed to set_irqs vector=0x%x rc=%d", vec, rc);
@@ -85,23 +88,25 @@ irq_init(struct plt_intr_handle *intr_handle)
{
char irq_set_buf[MSIX_IRQ_SET_BUF_LEN];
struct vfio_irq_set *irq_set;
+ int len, rc, vfio_dev_fd;
int32_t *fd_ptr;
- int len, rc;
uint32_t i;
- if (intr_handle->max_intr > PLT_MAX_RXTX_INTR_VEC_ID) {
+ if (plt_intr_max_intr_get(intr_handle) >
+ PLT_MAX_RXTX_INTR_VEC_ID) {
plt_err("Max_intr=%d greater than PLT_MAX_RXTX_INTR_VEC_ID=%d",
- intr_handle->max_intr, PLT_MAX_RXTX_INTR_VEC_ID);
+ plt_intr_max_intr_get(intr_handle),
+ PLT_MAX_RXTX_INTR_VEC_ID);
return -ERANGE;
}
len = sizeof(struct vfio_irq_set) +
- sizeof(int32_t) * intr_handle->max_intr;
+ sizeof(int32_t) * plt_intr_max_intr_get(intr_handle);
irq_set = (struct vfio_irq_set *)irq_set_buf;
irq_set->argsz = len;
irq_set->start = 0;
- irq_set->count = intr_handle->max_intr;
+ irq_set->count = plt_intr_max_intr_get(intr_handle);
irq_set->flags =
VFIO_IRQ_SET_DATA_EVENTFD | VFIO_IRQ_SET_ACTION_TRIGGER;
irq_set->index = VFIO_PCI_MSIX_IRQ_INDEX;
@@ -110,7 +115,8 @@ irq_init(struct plt_intr_handle *intr_handle)
for (i = 0; i < irq_set->count; i++)
fd_ptr[i] = -1;
- rc = ioctl(intr_handle->vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
+ vfio_dev_fd = plt_intr_dev_fd_get(intr_handle);
+ rc = ioctl(vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
if (rc)
plt_err("Failed to set irqs vector rc=%d", rc);
@@ -121,7 +127,7 @@ int
dev_irqs_disable(struct plt_intr_handle *intr_handle)
{
/* Clear max_intr to indicate re-init next time */
- intr_handle->max_intr = 0;
+ plt_intr_max_intr_set(intr_handle, 0);
return plt_intr_disable(intr_handle);
}
@@ -129,43 +135,49 @@ int
dev_irq_register(struct plt_intr_handle *intr_handle, plt_intr_callback_fn cb,
void *data, unsigned int vec)
{
- struct plt_intr_handle tmp_handle;
- int rc;
+ struct plt_intr_handle *tmp_handle;
+ uint32_t nb_efd, tmp_nb_efd;
+ int rc, fd;
/* If no max_intr read from VFIO */
- if (intr_handle->max_intr == 0) {
+ if (plt_intr_max_intr_get(intr_handle) == 0) {
irq_get_info(intr_handle);
irq_init(intr_handle);
}
- if (vec > intr_handle->max_intr || vec >= PLT_DIM(intr_handle->efds)) {
- plt_err("Vector=%d greater than max_intr=%d or "
- "max_efd=%" PRIu64,
- vec, intr_handle->max_intr, PLT_DIM(intr_handle->efds));
+ if (vec > (uint32_t)plt_intr_max_intr_get(intr_handle)) {
+ plt_err("Vector=%d greater than max_intr=%d or ",
+ vec, plt_intr_max_intr_get(intr_handle));
return -EINVAL;
}
- tmp_handle = *intr_handle;
+ tmp_handle = intr_handle;
/* Create new eventfd for interrupt vector */
- tmp_handle.fd = eventfd(0, EFD_NONBLOCK | EFD_CLOEXEC);
- if (tmp_handle.fd == -1)
+ fd = eventfd(0, EFD_NONBLOCK | EFD_CLOEXEC);
+ if (fd == -1)
return -ENODEV;
+ if (plt_intr_fd_set(tmp_handle, fd))
+ return -errno;
+
/* Register vector interrupt callback */
- rc = plt_intr_callback_register(&tmp_handle, cb, data);
+ rc = plt_intr_callback_register(tmp_handle, cb, data);
if (rc) {
plt_err("Failed to register vector:0x%x irq callback.", vec);
return rc;
}
- intr_handle->efds[vec] = tmp_handle.fd;
- intr_handle->nb_efd =
- (vec > intr_handle->nb_efd) ? vec : intr_handle->nb_efd;
- if ((intr_handle->nb_efd + 1) > intr_handle->max_intr)
- intr_handle->max_intr = intr_handle->nb_efd + 1;
+ plt_intr_efds_index_set(intr_handle, vec, fd);
+ nb_efd = (vec > (uint32_t)plt_intr_nb_efd_get(intr_handle)) ?
+ vec : (uint32_t)plt_intr_nb_efd_get(intr_handle);
+ plt_intr_nb_efd_set(intr_handle, nb_efd);
+ tmp_nb_efd = plt_intr_nb_efd_get(intr_handle) + 1;
+ if (tmp_nb_efd > (uint32_t)plt_intr_max_intr_get(intr_handle))
+ plt_intr_max_intr_set(intr_handle, tmp_nb_efd);
plt_base_dbg("Enable vector:0x%x for vfio (efds: %d, max:%d)", vec,
- intr_handle->nb_efd, intr_handle->max_intr);
+ plt_intr_nb_efd_get(intr_handle),
+ plt_intr_max_intr_get(intr_handle));
/* Enable MSIX vectors to VFIO */
return irq_config(intr_handle, vec);
@@ -175,24 +187,27 @@ void
dev_irq_unregister(struct plt_intr_handle *intr_handle, plt_intr_callback_fn cb,
void *data, unsigned int vec)
{
- struct plt_intr_handle tmp_handle;
+ struct plt_intr_handle *tmp_handle;
uint8_t retries = 5; /* 5 ms */
- int rc;
+ int rc, fd;
- if (vec > intr_handle->max_intr) {
+ if (vec > (uint32_t)plt_intr_max_intr_get(intr_handle)) {
plt_err("Error unregistering MSI-X interrupts vec:%d > %d", vec,
- intr_handle->max_intr);
+ plt_intr_max_intr_get(intr_handle));
return;
}
- tmp_handle = *intr_handle;
- tmp_handle.fd = intr_handle->efds[vec];
- if (tmp_handle.fd == -1)
+ tmp_handle = intr_handle;
+ fd = plt_intr_efds_index_get(intr_handle, vec);
+ if (fd == -1)
+ return;
+
+ if (plt_intr_fd_set(tmp_handle, fd))
return;
do {
/* Un-register callback func from platform lib */
- rc = plt_intr_callback_unregister(&tmp_handle, cb, data);
+ rc = plt_intr_callback_unregister(tmp_handle, cb, data);
/* Retry only if -EAGAIN */
if (rc != -EAGAIN)
break;
@@ -206,12 +221,14 @@ dev_irq_unregister(struct plt_intr_handle *intr_handle, plt_intr_callback_fn cb,
}
plt_base_dbg("Disable vector:0x%x for vfio (efds: %d, max:%d)", vec,
- intr_handle->nb_efd, intr_handle->max_intr);
+ plt_intr_nb_efd_get(intr_handle),
+ plt_intr_max_intr_get(intr_handle));
- if (intr_handle->efds[vec] != -1)
- close(intr_handle->efds[vec]);
+ if (plt_intr_efds_index_get(intr_handle, vec) != -1)
+ close(plt_intr_efds_index_get(intr_handle, vec));
/* Disable MSIX vectors from VFIO */
- intr_handle->efds[vec] = -1;
+ plt_intr_efds_index_set(intr_handle, vec, -1);
+
irq_config(intr_handle, vec);
}
diff --git a/drivers/common/cnxk/roc_nix_inl_dev_irq.c b/drivers/common/cnxk/roc_nix_inl_dev_irq.c
index 25ed42f875..848523b010 100644
--- a/drivers/common/cnxk/roc_nix_inl_dev_irq.c
+++ b/drivers/common/cnxk/roc_nix_inl_dev_irq.c
@@ -99,7 +99,7 @@ nix_inl_sso_hws_irq(void *param)
int
nix_inl_sso_register_irqs(struct nix_inl_dev *inl_dev)
{
- struct plt_intr_handle *handle = &inl_dev->pci_dev->intr_handle;
+ struct plt_intr_handle *handle = inl_dev->pci_dev->intr_handle;
uintptr_t ssow_base = inl_dev->ssow_base;
uintptr_t sso_base = inl_dev->sso_base;
uint16_t sso_msixoff, ssow_msixoff;
@@ -147,7 +147,7 @@ nix_inl_sso_register_irqs(struct nix_inl_dev *inl_dev)
void
nix_inl_sso_unregister_irqs(struct nix_inl_dev *inl_dev)
{
- struct plt_intr_handle *handle = &inl_dev->pci_dev->intr_handle;
+ struct plt_intr_handle *handle = inl_dev->pci_dev->intr_handle;
uintptr_t ssow_base = inl_dev->ssow_base;
uintptr_t sso_base = inl_dev->sso_base;
uint16_t sso_msixoff, ssow_msixoff;
@@ -282,7 +282,7 @@ nix_inl_nix_err_irq(void *param)
int
nix_inl_nix_register_irqs(struct nix_inl_dev *inl_dev)
{
- struct plt_intr_handle *handle = &inl_dev->pci_dev->intr_handle;
+ struct plt_intr_handle *handle = inl_dev->pci_dev->intr_handle;
uintptr_t nix_base = inl_dev->nix_base;
uint16_t msixoff;
int rc;
@@ -331,7 +331,7 @@ nix_inl_nix_register_irqs(struct nix_inl_dev *inl_dev)
void
nix_inl_nix_unregister_irqs(struct nix_inl_dev *inl_dev)
{
- struct plt_intr_handle *handle = &inl_dev->pci_dev->intr_handle;
+ struct plt_intr_handle *handle = inl_dev->pci_dev->intr_handle;
uintptr_t nix_base = inl_dev->nix_base;
uint16_t msixoff;
diff --git a/drivers/common/cnxk/roc_nix_irq.c b/drivers/common/cnxk/roc_nix_irq.c
index 32be64a9d7..e9aa620abd 100644
--- a/drivers/common/cnxk/roc_nix_irq.c
+++ b/drivers/common/cnxk/roc_nix_irq.c
@@ -82,7 +82,7 @@ nix_lf_err_irq(void *param)
static int
nix_lf_register_err_irq(struct nix *nix)
{
- struct plt_intr_handle *handle = &nix->pci_dev->intr_handle;
+ struct plt_intr_handle *handle = nix->pci_dev->intr_handle;
int rc, vec;
vec = nix->msixoff + NIX_LF_INT_VEC_ERR_INT;
@@ -99,7 +99,7 @@ nix_lf_register_err_irq(struct nix *nix)
static void
nix_lf_unregister_err_irq(struct nix *nix)
{
- struct plt_intr_handle *handle = &nix->pci_dev->intr_handle;
+ struct plt_intr_handle *handle = nix->pci_dev->intr_handle;
int vec;
vec = nix->msixoff + NIX_LF_INT_VEC_ERR_INT;
@@ -131,7 +131,7 @@ nix_lf_ras_irq(void *param)
static int
nix_lf_register_ras_irq(struct nix *nix)
{
- struct plt_intr_handle *handle = &nix->pci_dev->intr_handle;
+ struct plt_intr_handle *handle = nix->pci_dev->intr_handle;
int rc, vec;
vec = nix->msixoff + NIX_LF_INT_VEC_POISON;
@@ -148,7 +148,7 @@ nix_lf_register_ras_irq(struct nix *nix)
static void
nix_lf_unregister_ras_irq(struct nix *nix)
{
- struct plt_intr_handle *handle = &nix->pci_dev->intr_handle;
+ struct plt_intr_handle *handle = nix->pci_dev->intr_handle;
int vec;
vec = nix->msixoff + NIX_LF_INT_VEC_POISON;
@@ -300,7 +300,7 @@ roc_nix_register_queue_irqs(struct roc_nix *roc_nix)
struct nix *nix;
nix = roc_nix_to_nix_priv(roc_nix);
- handle = &nix->pci_dev->intr_handle;
+ handle = nix->pci_dev->intr_handle;
/* Figure out max qintx required */
rqs = PLT_MIN(nix->qints, nix->nb_rx_queues);
@@ -352,7 +352,7 @@ roc_nix_unregister_queue_irqs(struct roc_nix *roc_nix)
int vec, q;
nix = roc_nix_to_nix_priv(roc_nix);
- handle = &nix->pci_dev->intr_handle;
+ handle = nix->pci_dev->intr_handle;
for (q = 0; q < nix->configured_qints; q++) {
vec = nix->msixoff + NIX_LF_INT_VEC_QINT_START + q;
@@ -382,7 +382,7 @@ roc_nix_register_cq_irqs(struct roc_nix *roc_nix)
struct nix *nix;
nix = roc_nix_to_nix_priv(roc_nix);
- handle = &nix->pci_dev->intr_handle;
+ handle = nix->pci_dev->intr_handle;
nix->configured_cints = PLT_MIN(nix->cints, nix->nb_rx_queues);
@@ -414,19 +414,19 @@ roc_nix_register_cq_irqs(struct roc_nix *roc_nix)
return rc;
}
- if (!handle->intr_vec) {
- handle->intr_vec = plt_zmalloc(
- nix->configured_cints * sizeof(int), 0);
- if (!handle->intr_vec) {
- plt_err("Failed to allocate %d rx intr_vec",
- nix->configured_cints);
- return -ENOMEM;
- }
+ rc = plt_intr_vec_list_alloc(handle, "cnxk",
+ nix->configured_cints);
+ if (rc) {
+ plt_err("Fail to allocate intr vec list, rc=%d",
+ rc);
+ return rc;
}
/* VFIO vector zero is resereved for misc interrupt so
* doing required adjustment. (b13bfab4cd)
*/
- handle->intr_vec[q] = PLT_INTR_VEC_RXTX_OFFSET + vec;
+ if (plt_intr_vec_list_index_set(handle, q,
+ PLT_INTR_VEC_RXTX_OFFSET + vec))
+ return -1;
/* Configure CQE interrupt coalescing parameters */
plt_write64(((CQ_CQE_THRESH_DEFAULT) |
@@ -450,7 +450,7 @@ roc_nix_unregister_cq_irqs(struct roc_nix *roc_nix)
int vec, q;
nix = roc_nix_to_nix_priv(roc_nix);
- handle = &nix->pci_dev->intr_handle;
+ handle = nix->pci_dev->intr_handle;
for (q = 0; q < nix->configured_cints; q++) {
vec = nix->msixoff + NIX_LF_INT_VEC_CINT_START + q;
@@ -465,6 +465,8 @@ roc_nix_unregister_cq_irqs(struct roc_nix *roc_nix)
dev_irq_unregister(handle, nix_lf_cq_irq, &nix->cints_mem[q],
vec);
}
+
+ plt_intr_vec_list_free(handle);
plt_free(nix->cints_mem);
}
diff --git a/drivers/common/cnxk/roc_npa.c b/drivers/common/cnxk/roc_npa.c
index a0d2cc8f19..664240ab42 100644
--- a/drivers/common/cnxk/roc_npa.c
+++ b/drivers/common/cnxk/roc_npa.c
@@ -710,7 +710,7 @@ npa_lf_init(struct dev *dev, struct plt_pci_device *pci_dev)
lf->pf_func = dev->pf_func;
lf->npa_msixoff = npa_msixoff;
- lf->intr_handle = &pci_dev->intr_handle;
+ lf->intr_handle = pci_dev->intr_handle;
lf->pci_dev = pci_dev;
idev->npa_pf_func = dev->pf_func;
diff --git a/drivers/common/cnxk/roc_platform.h b/drivers/common/cnxk/roc_platform.h
index a0f01797f1..60227b72d0 100644
--- a/drivers/common/cnxk/roc_platform.h
+++ b/drivers/common/cnxk/roc_platform.h
@@ -106,6 +106,32 @@
#define plt_thread_is_intr rte_thread_is_intr
#define plt_intr_callback_fn rte_intr_callback_fn
+#define plt_intr_efd_counter_size_get rte_intr_efd_counter_size_get
+#define plt_intr_efd_counter_size_set rte_intr_efd_counter_size_set
+#define plt_intr_vec_list_index_get rte_intr_vec_list_index_get
+#define plt_intr_vec_list_index_set rte_intr_vec_list_index_set
+#define plt_intr_vec_list_alloc rte_intr_vec_list_alloc
+#define plt_intr_vec_list_free rte_intr_vec_list_free
+#define plt_intr_fd_set rte_intr_fd_set
+#define plt_intr_fd_get rte_intr_fd_get
+#define plt_intr_dev_fd_get rte_intr_dev_fd_get
+#define plt_intr_dev_fd_set rte_intr_dev_fd_set
+#define plt_intr_type_get rte_intr_type_get
+#define plt_intr_type_set rte_intr_type_set
+#define plt_intr_instance_alloc rte_intr_instance_alloc
+#define plt_intr_instance_dup rte_intr_instance_dup
+#define plt_intr_instance_free rte_intr_instance_free
+#define plt_intr_max_intr_get rte_intr_max_intr_get
+#define plt_intr_max_intr_set rte_intr_max_intr_set
+#define plt_intr_nb_efd_get rte_intr_nb_efd_get
+#define plt_intr_nb_efd_set rte_intr_nb_efd_set
+#define plt_intr_nb_intr_get rte_intr_nb_intr_get
+#define plt_intr_nb_intr_set rte_intr_nb_intr_set
+#define plt_intr_efds_index_get rte_intr_efds_index_get
+#define plt_intr_efds_index_set rte_intr_efds_index_set
+#define plt_intr_elist_index_get rte_intr_elist_index_get
+#define plt_intr_elist_index_set rte_intr_elist_index_set
+
#define plt_alarm_set rte_eal_alarm_set
#define plt_alarm_cancel rte_eal_alarm_cancel
@@ -183,7 +209,7 @@ extern int cnxk_logtype_tm;
#define plt_dbg(subsystem, fmt, args...) \
rte_log(RTE_LOG_DEBUG, cnxk_logtype_##subsystem, \
"[%s] %s():%u " fmt "\n", #subsystem, __func__, __LINE__, \
- ##args)
+##args)
#define plt_base_dbg(fmt, ...) plt_dbg(base, fmt, ##__VA_ARGS__)
#define plt_cpt_dbg(fmt, ...) plt_dbg(cpt, fmt, ##__VA_ARGS__)
@@ -203,18 +229,18 @@ extern int cnxk_logtype_tm;
#ifdef __cplusplus
#define CNXK_PCI_ID(subsystem_dev, dev) \
- { \
- RTE_CLASS_ANY_ID, PCI_VENDOR_ID_CAVIUM, (dev), RTE_PCI_ANY_ID, \
- (subsystem_dev), \
- }
+{ \
+ RTE_CLASS_ANY_ID, PCI_VENDOR_ID_CAVIUM, (dev), RTE_PCI_ANY_ID, \
+ (subsystem_dev), \
+}
#else
#define CNXK_PCI_ID(subsystem_dev, dev) \
- { \
- .class_id = RTE_CLASS_ANY_ID, \
- .vendor_id = PCI_VENDOR_ID_CAVIUM, .device_id = (dev), \
- .subsystem_vendor_id = RTE_PCI_ANY_ID, \
- .subsystem_device_id = (subsystem_dev), \
- }
+{ \
+ .class_id = RTE_CLASS_ANY_ID, \
+ .vendor_id = PCI_VENDOR_ID_CAVIUM, .device_id = (dev), \
+ .subsystem_vendor_id = RTE_PCI_ANY_ID, \
+ .subsystem_device_id = (subsystem_dev), \
+}
#endif
__rte_internal
diff --git a/drivers/common/cnxk/roc_sso.c b/drivers/common/cnxk/roc_sso.c
index bdf973fc2a..762893f3dc 100644
--- a/drivers/common/cnxk/roc_sso.c
+++ b/drivers/common/cnxk/roc_sso.c
@@ -505,7 +505,7 @@ roc_sso_rsrc_init(struct roc_sso *roc_sso, uint8_t nb_hws, uint16_t nb_hwgrp)
goto sso_msix_fail;
}
- rc = sso_register_irqs_priv(roc_sso, &sso->pci_dev->intr_handle, nb_hws,
+ rc = sso_register_irqs_priv(roc_sso, sso->pci_dev->intr_handle, nb_hws,
nb_hwgrp);
if (rc < 0) {
plt_err("Failed to register SSO LF IRQs");
@@ -535,7 +535,7 @@ roc_sso_rsrc_fini(struct roc_sso *roc_sso)
if (!roc_sso->nb_hws && !roc_sso->nb_hwgrp)
return;
- sso_unregister_irqs_priv(roc_sso, &sso->pci_dev->intr_handle,
+ sso_unregister_irqs_priv(roc_sso, sso->pci_dev->intr_handle,
roc_sso->nb_hws, roc_sso->nb_hwgrp);
sso_lf_free(&sso->dev, SSO_LF_TYPE_HWS, roc_sso->nb_hws);
sso_lf_free(&sso->dev, SSO_LF_TYPE_HWGRP, roc_sso->nb_hwgrp);
diff --git a/drivers/common/cnxk/roc_tim.c b/drivers/common/cnxk/roc_tim.c
index 387164bb1d..534b697bee 100644
--- a/drivers/common/cnxk/roc_tim.c
+++ b/drivers/common/cnxk/roc_tim.c
@@ -200,7 +200,7 @@ roc_tim_lf_alloc(struct roc_tim *roc_tim, uint8_t ring_id, uint64_t *clk)
if (clk)
*clk = rsp->tenns_clk;
- rc = tim_register_irq_priv(roc_tim, &sso->pci_dev->intr_handle, ring_id,
+ rc = tim_register_irq_priv(roc_tim, sso->pci_dev->intr_handle, ring_id,
tim->tim_msix_offsets[ring_id]);
if (rc < 0) {
plt_tim_dbg("Failed to register Ring[%d] IRQ", ring_id);
@@ -223,7 +223,7 @@ roc_tim_lf_free(struct roc_tim *roc_tim, uint8_t ring_id)
struct tim_ring_req *req;
int rc = -ENOSPC;
- tim_unregister_irq_priv(roc_tim, &sso->pci_dev->intr_handle, ring_id,
+ tim_unregister_irq_priv(roc_tim, sso->pci_dev->intr_handle, ring_id,
tim->tim_msix_offsets[ring_id]);
req = mbox_alloc_msg_tim_lf_free(dev->mbox);
diff --git a/drivers/common/octeontx2/otx2_dev.c b/drivers/common/octeontx2/otx2_dev.c
index ce4f0e7ca9..08dca87848 100644
--- a/drivers/common/octeontx2/otx2_dev.c
+++ b/drivers/common/octeontx2/otx2_dev.c
@@ -643,7 +643,7 @@ otx2_af_pf_mbox_irq(void *param)
static int
mbox_register_pf_irq(struct rte_pci_device *pci_dev, struct otx2_dev *dev)
{
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
int i, rc;
/* HW clear irq */
@@ -693,7 +693,7 @@ mbox_register_pf_irq(struct rte_pci_device *pci_dev, struct otx2_dev *dev)
static int
mbox_register_vf_irq(struct rte_pci_device *pci_dev, struct otx2_dev *dev)
{
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
int rc;
/* Clear irq */
@@ -726,7 +726,7 @@ mbox_register_irq(struct rte_pci_device *pci_dev, struct otx2_dev *dev)
static void
mbox_unregister_pf_irq(struct rte_pci_device *pci_dev, struct otx2_dev *dev)
{
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
int i;
/* HW clear irq */
@@ -758,7 +758,7 @@ mbox_unregister_pf_irq(struct rte_pci_device *pci_dev, struct otx2_dev *dev)
static void
mbox_unregister_vf_irq(struct rte_pci_device *pci_dev, struct otx2_dev *dev)
{
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
/* Clear irq */
otx2_write64(~0ull, dev->bar2 + RVU_VF_INT_ENA_W1C);
@@ -841,7 +841,7 @@ otx2_pf_vf_flr_irq(void *param)
static int
vf_flr_unregister_irqs(struct rte_pci_device *pci_dev, struct otx2_dev *dev)
{
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
int i;
otx2_base_dbg("Unregister VF FLR interrupts for %s", pci_dev->name);
@@ -862,7 +862,7 @@ vf_flr_unregister_irqs(struct rte_pci_device *pci_dev, struct otx2_dev *dev)
static int
vf_flr_register_irqs(struct rte_pci_device *pci_dev, struct otx2_dev *dev)
{
- struct rte_intr_handle *handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *handle = pci_dev->intr_handle;
int i, rc;
otx2_base_dbg("Register VF FLR interrupts for %s", pci_dev->name);
@@ -1039,7 +1039,7 @@ otx2_dev_priv_init(struct rte_pci_device *pci_dev, void *otx2_dev)
void
otx2_dev_fini(struct rte_pci_device *pci_dev, void *otx2_dev)
{
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
struct otx2_dev *dev = otx2_dev;
struct otx2_idev_cfg *idev;
struct otx2_mbox *mbox;
diff --git a/drivers/common/octeontx2/otx2_irq.c b/drivers/common/octeontx2/otx2_irq.c
index c0137ff36d..93fc95c0e1 100644
--- a/drivers/common/octeontx2/otx2_irq.c
+++ b/drivers/common/octeontx2/otx2_irq.c
@@ -26,11 +26,12 @@ static int
irq_get_info(struct rte_intr_handle *intr_handle)
{
struct vfio_irq_info irq = { .argsz = sizeof(irq) };
- int rc;
+ int rc, vfio_dev_fd;
irq.index = VFIO_PCI_MSIX_IRQ_INDEX;
- rc = ioctl(intr_handle->vfio_dev_fd, VFIO_DEVICE_GET_IRQ_INFO, &irq);
+ vfio_dev_fd = rte_intr_dev_fd_get(intr_handle);
+ rc = ioctl(vfio_dev_fd, VFIO_DEVICE_GET_IRQ_INFO, &irq);
if (rc < 0) {
otx2_err("Failed to get IRQ info rc=%d errno=%d", rc, errno);
return rc;
@@ -41,10 +42,13 @@ irq_get_info(struct rte_intr_handle *intr_handle)
if (irq.count > MAX_INTR_VEC_ID) {
otx2_err("HW max=%d > MAX_INTR_VEC_ID: %d",
- intr_handle->max_intr, MAX_INTR_VEC_ID);
- intr_handle->max_intr = MAX_INTR_VEC_ID;
+ rte_intr_max_intr_get(intr_handle),
+ MAX_INTR_VEC_ID);
+ if (rte_intr_max_intr_set(intr_handle, MAX_INTR_VEC_ID))
+ return -1;
} else {
- intr_handle->max_intr = irq.count;
+ if (rte_intr_max_intr_set(intr_handle, irq.count))
+ return -1;
}
return 0;
@@ -55,12 +59,12 @@ irq_config(struct rte_intr_handle *intr_handle, unsigned int vec)
{
char irq_set_buf[MSIX_IRQ_SET_BUF_LEN];
struct vfio_irq_set *irq_set;
+ int len, rc, vfio_dev_fd;
int32_t *fd_ptr;
- int len, rc;
- if (vec > intr_handle->max_intr) {
+ if (vec > (uint32_t)rte_intr_max_intr_get(intr_handle)) {
otx2_err("vector=%d greater than max_intr=%d", vec,
- intr_handle->max_intr);
+ rte_intr_max_intr_get(intr_handle));
return -EINVAL;
}
@@ -77,9 +81,10 @@ irq_config(struct rte_intr_handle *intr_handle, unsigned int vec)
/* Use vec fd to set interrupt vectors */
fd_ptr = (int32_t *)&irq_set->data[0];
- fd_ptr[0] = intr_handle->efds[vec];
+ fd_ptr[0] = rte_intr_efds_index_get(intr_handle, vec);
- rc = ioctl(intr_handle->vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
+ vfio_dev_fd = rte_intr_dev_fd_get(intr_handle);
+ rc = ioctl(vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
if (rc)
otx2_err("Failed to set_irqs vector=0x%x rc=%d", vec, rc);
@@ -91,23 +96,24 @@ irq_init(struct rte_intr_handle *intr_handle)
{
char irq_set_buf[MSIX_IRQ_SET_BUF_LEN];
struct vfio_irq_set *irq_set;
+ int len, rc, vfio_dev_fd;
int32_t *fd_ptr;
- int len, rc;
uint32_t i;
- if (intr_handle->max_intr > MAX_INTR_VEC_ID) {
+ if (rte_intr_max_intr_get(intr_handle) > MAX_INTR_VEC_ID) {
otx2_err("Max_intr=%d greater than MAX_INTR_VEC_ID=%d",
- intr_handle->max_intr, MAX_INTR_VEC_ID);
+ rte_intr_max_intr_get(intr_handle),
+ MAX_INTR_VEC_ID);
return -ERANGE;
}
len = sizeof(struct vfio_irq_set) +
- sizeof(int32_t) * intr_handle->max_intr;
+ sizeof(int32_t) * rte_intr_max_intr_get(intr_handle);
irq_set = (struct vfio_irq_set *)irq_set_buf;
irq_set->argsz = len;
irq_set->start = 0;
- irq_set->count = intr_handle->max_intr;
+ irq_set->count = rte_intr_max_intr_get(intr_handle);
irq_set->flags = VFIO_IRQ_SET_DATA_EVENTFD |
VFIO_IRQ_SET_ACTION_TRIGGER;
irq_set->index = VFIO_PCI_MSIX_IRQ_INDEX;
@@ -116,7 +122,8 @@ irq_init(struct rte_intr_handle *intr_handle)
for (i = 0; i < irq_set->count; i++)
fd_ptr[i] = -1;
- rc = ioctl(intr_handle->vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
+ vfio_dev_fd = rte_intr_dev_fd_get(intr_handle);
+ rc = ioctl(vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
if (rc)
otx2_err("Failed to set irqs vector rc=%d", rc);
@@ -131,7 +138,8 @@ int
otx2_disable_irqs(struct rte_intr_handle *intr_handle)
{
/* Clear max_intr to indicate re-init next time */
- intr_handle->max_intr = 0;
+ if (rte_intr_max_intr_set(intr_handle, 0))
+ return -1;
return rte_intr_disable(intr_handle);
}
@@ -143,42 +151,50 @@ int
otx2_register_irq(struct rte_intr_handle *intr_handle,
rte_intr_callback_fn cb, void *data, unsigned int vec)
{
- struct rte_intr_handle tmp_handle;
- int rc;
+ struct rte_intr_handle *tmp_handle;
+ uint32_t nb_efd, tmp_nb_efd;
+ int rc, fd;
/* If no max_intr read from VFIO */
- if (intr_handle->max_intr == 0) {
+ if (rte_intr_max_intr_get(intr_handle) == 0) {
irq_get_info(intr_handle);
irq_init(intr_handle);
}
- if (vec > intr_handle->max_intr) {
+ if (vec > (uint32_t)rte_intr_max_intr_get(intr_handle)) {
otx2_err("Vector=%d greater than max_intr=%d", vec,
- intr_handle->max_intr);
+ rte_intr_max_intr_get(intr_handle));
return -EINVAL;
}
- tmp_handle = *intr_handle;
+ tmp_handle = intr_handle;
/* Create new eventfd for interrupt vector */
- tmp_handle.fd = eventfd(0, EFD_NONBLOCK | EFD_CLOEXEC);
- if (tmp_handle.fd == -1)
+ fd = eventfd(0, EFD_NONBLOCK | EFD_CLOEXEC);
+ if (fd == -1)
return -ENODEV;
+ if (rte_intr_fd_set(tmp_handle, fd))
+ return -errno;
+
/* Register vector interrupt callback */
- rc = rte_intr_callback_register(&tmp_handle, cb, data);
+ rc = rte_intr_callback_register(tmp_handle, cb, data);
if (rc) {
otx2_err("Failed to register vector:0x%x irq callback.", vec);
return rc;
}
- intr_handle->efds[vec] = tmp_handle.fd;
- intr_handle->nb_efd = (vec > intr_handle->nb_efd) ?
- vec : intr_handle->nb_efd;
- if ((intr_handle->nb_efd + 1) > intr_handle->max_intr)
- intr_handle->max_intr = intr_handle->nb_efd + 1;
+ rte_intr_efds_index_set(intr_handle, vec, fd);
+ nb_efd = (vec > (uint32_t)rte_intr_nb_efd_get(intr_handle)) ?
+ vec : (uint32_t)rte_intr_nb_efd_get(intr_handle);
+ rte_intr_nb_efd_set(intr_handle, nb_efd);
+
+ tmp_nb_efd = rte_intr_nb_efd_get(intr_handle) + 1;
+ if (tmp_nb_efd > (uint32_t)rte_intr_max_intr_get(intr_handle))
+ rte_intr_max_intr_set(intr_handle, tmp_nb_efd);
- otx2_base_dbg("Enable vector:0x%x for vfio (efds: %d, max:%d)",
- vec, intr_handle->nb_efd, intr_handle->max_intr);
+ otx2_base_dbg("Enable vector:0x%x for vfio (efds: %d, max:%d)", vec,
+ rte_intr_nb_efd_get(intr_handle),
+ rte_intr_max_intr_get(intr_handle));
/* Enable MSIX vectors to VFIO */
return irq_config(intr_handle, vec);
@@ -192,24 +208,27 @@ void
otx2_unregister_irq(struct rte_intr_handle *intr_handle,
rte_intr_callback_fn cb, void *data, unsigned int vec)
{
- struct rte_intr_handle tmp_handle;
+ struct rte_intr_handle *tmp_handle;
uint8_t retries = 5; /* 5 ms */
- int rc;
+ int rc, fd;
- if (vec > intr_handle->max_intr) {
+ if (vec > (uint32_t)rte_intr_max_intr_get(intr_handle)) {
otx2_err("Error unregistering MSI-X interrupts vec:%d > %d",
- vec, intr_handle->max_intr);
+ vec, rte_intr_max_intr_get(intr_handle));
return;
}
- tmp_handle = *intr_handle;
- tmp_handle.fd = intr_handle->efds[vec];
- if (tmp_handle.fd == -1)
+ tmp_handle = intr_handle;
+ fd = rte_intr_efds_index_get(intr_handle, vec);
+ if (fd == -1)
+ return;
+
+ if (rte_intr_fd_set(tmp_handle, fd))
return;
do {
- /* Un-register callback func from eal lib */
- rc = rte_intr_callback_unregister(&tmp_handle, cb, data);
+ /* Un-register callback func from platform lib */
+ rc = rte_intr_callback_unregister(tmp_handle, cb, data);
/* Retry only if -EAGAIN */
if (rc != -EAGAIN)
break;
@@ -218,18 +237,18 @@ otx2_unregister_irq(struct rte_intr_handle *intr_handle,
} while (retries);
if (rc < 0) {
- otx2_err("Error unregistering MSI-X intr vec %d cb, rc=%d",
- vec, rc);
+ otx2_err("Error unregistering MSI-X vec %d cb, rc=%d", vec, rc);
return;
}
- otx2_base_dbg("Disable vector:0x%x for vfio (efds: %d, max:%d)",
- vec, intr_handle->nb_efd, intr_handle->max_intr);
+ otx2_base_dbg("Disable vector:0x%x for vfio (efds: %d, max:%d)", vec,
+ rte_intr_nb_efd_get(intr_handle),
+ rte_intr_max_intr_get(intr_handle));
- if (intr_handle->efds[vec] != -1)
- close(intr_handle->efds[vec]);
+ if (rte_intr_efds_index_get(intr_handle, vec) != -1)
+ close(rte_intr_efds_index_get(intr_handle, vec));
/* Disable MSIX vectors from VFIO */
- intr_handle->efds[vec] = -1;
+ rte_intr_efds_index_set(intr_handle, vec, -1);
irq_config(intr_handle, vec);
}
diff --git a/drivers/crypto/octeontx2/otx2_cryptodev_hw_access.c b/drivers/crypto/octeontx2/otx2_cryptodev_hw_access.c
index bf90d095fe..d5d6b5bad7 100644
--- a/drivers/crypto/octeontx2/otx2_cryptodev_hw_access.c
+++ b/drivers/crypto/octeontx2/otx2_cryptodev_hw_access.c
@@ -36,7 +36,7 @@ otx2_cpt_lf_err_intr_unregister(const struct rte_cryptodev *dev,
uint16_t msix_off, uintptr_t base)
{
struct rte_pci_device *pci_dev = RTE_DEV_TO_PCI(dev->device);
- struct rte_intr_handle *handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *handle = pci_dev->intr_handle;
/* Disable error interrupts */
otx2_write64(~0ull, base + OTX2_CPT_LF_MISC_INT_ENA_W1C);
@@ -65,7 +65,7 @@ otx2_cpt_lf_err_intr_register(const struct rte_cryptodev *dev,
uint16_t msix_off, uintptr_t base)
{
struct rte_pci_device *pci_dev = RTE_DEV_TO_PCI(dev->device);
- struct rte_intr_handle *handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *handle = pci_dev->intr_handle;
int ret;
/* Disable error interrupts */
diff --git a/drivers/event/octeontx2/otx2_evdev_irq.c b/drivers/event/octeontx2/otx2_evdev_irq.c
index a2033646e6..9b7ad27b04 100644
--- a/drivers/event/octeontx2/otx2_evdev_irq.c
+++ b/drivers/event/octeontx2/otx2_evdev_irq.c
@@ -29,7 +29,7 @@ sso_lf_register_irq(const struct rte_eventdev *event_dev, uint16_t ggrp_msixoff,
uintptr_t base)
{
struct rte_pci_device *pci_dev = RTE_DEV_TO_PCI(event_dev->dev);
- struct rte_intr_handle *handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *handle = pci_dev->intr_handle;
int rc, vec;
vec = ggrp_msixoff + SSO_LF_INT_VEC_GRP;
@@ -66,7 +66,7 @@ ssow_lf_register_irq(const struct rte_eventdev *event_dev, uint16_t gws_msixoff,
uintptr_t base)
{
struct rte_pci_device *pci_dev = RTE_DEV_TO_PCI(event_dev->dev);
- struct rte_intr_handle *handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *handle = pci_dev->intr_handle;
int rc, vec;
vec = gws_msixoff + SSOW_LF_INT_VEC_IOP;
@@ -86,7 +86,7 @@ sso_lf_unregister_irq(const struct rte_eventdev *event_dev,
uint16_t ggrp_msixoff, uintptr_t base)
{
struct rte_pci_device *pci_dev = RTE_DEV_TO_PCI(event_dev->dev);
- struct rte_intr_handle *handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *handle = pci_dev->intr_handle;
int vec;
vec = ggrp_msixoff + SSO_LF_INT_VEC_GRP;
@@ -101,7 +101,7 @@ ssow_lf_unregister_irq(const struct rte_eventdev *event_dev,
uint16_t gws_msixoff, uintptr_t base)
{
struct rte_pci_device *pci_dev = RTE_DEV_TO_PCI(event_dev->dev);
- struct rte_intr_handle *handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *handle = pci_dev->intr_handle;
int vec;
vec = gws_msixoff + SSOW_LF_INT_VEC_IOP;
@@ -198,7 +198,7 @@ static int
tim_lf_register_irq(struct rte_pci_device *pci_dev, uint16_t tim_msixoff,
uintptr_t base)
{
- struct rte_intr_handle *handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *handle = pci_dev->intr_handle;
int rc, vec;
vec = tim_msixoff + TIM_LF_INT_VEC_NRSPERR_INT;
@@ -226,7 +226,7 @@ static void
tim_lf_unregister_irq(struct rte_pci_device *pci_dev, uint16_t tim_msixoff,
uintptr_t base)
{
- struct rte_intr_handle *handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *handle = pci_dev->intr_handle;
int vec;
vec = tim_msixoff + TIM_LF_INT_VEC_NRSPERR_INT;
diff --git a/drivers/mempool/octeontx2/otx2_mempool.c b/drivers/mempool/octeontx2/otx2_mempool.c
index fb630fecf8..f63dc06ef2 100644
--- a/drivers/mempool/octeontx2/otx2_mempool.c
+++ b/drivers/mempool/octeontx2/otx2_mempool.c
@@ -301,7 +301,7 @@ otx2_npa_lf_init(struct rte_pci_device *pci_dev, void *otx2_dev)
lf->pf_func = dev->pf_func;
lf->npa_msixoff = npa_msixoff;
- lf->intr_handle = &pci_dev->intr_handle;
+ lf->intr_handle = pci_dev->intr_handle;
lf->pci_dev = pci_dev;
idev->npa_pf_func = dev->pf_func;
diff --git a/drivers/net/atlantic/atl_ethdev.c b/drivers/net/atlantic/atl_ethdev.c
index f7bfac796c..1c03e8bfa1 100644
--- a/drivers/net/atlantic/atl_ethdev.c
+++ b/drivers/net/atlantic/atl_ethdev.c
@@ -359,7 +359,7 @@ eth_atl_dev_init(struct rte_eth_dev *eth_dev)
{
struct atl_adapter *adapter = eth_dev->data->dev_private;
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
struct aq_hw_s *hw = ATL_DEV_PRIVATE_TO_HW(eth_dev->data->dev_private);
int err = 0;
@@ -478,7 +478,7 @@ atl_dev_start(struct rte_eth_dev *dev)
{
struct aq_hw_s *hw = ATL_DEV_PRIVATE_TO_HW(dev->data->dev_private);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
uint32_t intr_vector = 0;
int status;
int err;
@@ -524,10 +524,9 @@ atl_dev_start(struct rte_eth_dev *dev)
}
}
- if (rte_intr_dp_is_en(intr_handle) && !intr_handle->intr_vec) {
- intr_handle->intr_vec = rte_zmalloc("intr_vec",
- dev->data->nb_rx_queues * sizeof(int), 0);
- if (intr_handle->intr_vec == NULL) {
+ if (rte_intr_dp_is_en(intr_handle)) {
+ if (rte_intr_vec_list_alloc(intr_handle, "intr_vec",
+ dev->data->nb_rx_queues)) {
PMD_INIT_LOG(ERR, "Failed to allocate %d rx_queues"
" intr_vec", dev->data->nb_rx_queues);
return -ENOMEM;
@@ -607,7 +606,7 @@ atl_dev_stop(struct rte_eth_dev *dev)
struct aq_hw_s *hw =
ATL_DEV_PRIVATE_TO_HW(dev->data->dev_private);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
PMD_INIT_FUNC_TRACE();
dev->data->dev_started = 0;
@@ -637,10 +636,7 @@ atl_dev_stop(struct rte_eth_dev *dev)
/* Clean datapath event and queue/vec mapping */
rte_intr_efd_disable(intr_handle);
- if (intr_handle->intr_vec != NULL) {
- rte_free(intr_handle->intr_vec);
- intr_handle->intr_vec = NULL;
- }
+ rte_intr_vec_list_free(intr_handle);
return 0;
}
@@ -691,7 +687,7 @@ static int
atl_dev_close(struct rte_eth_dev *dev)
{
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
struct aq_hw_s *hw;
int ret;
diff --git a/drivers/net/avp/avp_ethdev.c b/drivers/net/avp/avp_ethdev.c
index 9eabdf0901..7ac55584ff 100644
--- a/drivers/net/avp/avp_ethdev.c
+++ b/drivers/net/avp/avp_ethdev.c
@@ -711,7 +711,7 @@ avp_dev_interrupt_handler(void *data)
status);
/* re-enable UIO interrupt handling */
- ret = rte_intr_ack(&pci_dev->intr_handle);
+ ret = rte_intr_ack(pci_dev->intr_handle);
if (ret < 0) {
PMD_DRV_LOG(ERR, "Failed to re-enable UIO interrupts, ret=%d\n",
ret);
@@ -730,7 +730,7 @@ avp_dev_enable_interrupts(struct rte_eth_dev *eth_dev)
return -EINVAL;
/* enable UIO interrupt handling */
- ret = rte_intr_enable(&pci_dev->intr_handle);
+ ret = rte_intr_enable(pci_dev->intr_handle);
if (ret < 0) {
PMD_DRV_LOG(ERR, "Failed to enable UIO interrupts, ret=%d\n",
ret);
@@ -759,7 +759,7 @@ avp_dev_disable_interrupts(struct rte_eth_dev *eth_dev)
RTE_PTR_ADD(registers, RTE_AVP_INTERRUPT_MASK_OFFSET));
/* enable UIO interrupt handling */
- ret = rte_intr_disable(&pci_dev->intr_handle);
+ ret = rte_intr_disable(pci_dev->intr_handle);
if (ret < 0) {
PMD_DRV_LOG(ERR, "Failed to disable UIO interrupts, ret=%d\n",
ret);
@@ -776,7 +776,7 @@ avp_dev_setup_interrupts(struct rte_eth_dev *eth_dev)
int ret;
/* register a callback handler with UIO for interrupt notifications */
- ret = rte_intr_callback_register(&pci_dev->intr_handle,
+ ret = rte_intr_callback_register(pci_dev->intr_handle,
avp_dev_interrupt_handler,
(void *)eth_dev);
if (ret < 0) {
diff --git a/drivers/net/axgbe/axgbe_ethdev.c b/drivers/net/axgbe/axgbe_ethdev.c
index dab0c6775d..7d40c18a86 100644
--- a/drivers/net/axgbe/axgbe_ethdev.c
+++ b/drivers/net/axgbe/axgbe_ethdev.c
@@ -313,7 +313,7 @@ axgbe_dev_interrupt_handler(void *param)
}
}
/* Unmask interrupts since disabled after generation */
- rte_intr_ack(&pdata->pci_dev->intr_handle);
+ rte_intr_ack(pdata->pci_dev->intr_handle);
}
/*
@@ -374,7 +374,7 @@ axgbe_dev_start(struct rte_eth_dev *dev)
}
/* enable uio/vfio intr/eventfd mapping */
- rte_intr_enable(&pdata->pci_dev->intr_handle);
+ rte_intr_enable(pdata->pci_dev->intr_handle);
/* phy start*/
pdata->phy_if.phy_start(pdata);
@@ -406,7 +406,7 @@ axgbe_dev_stop(struct rte_eth_dev *dev)
PMD_INIT_FUNC_TRACE();
- rte_intr_disable(&pdata->pci_dev->intr_handle);
+ rte_intr_disable(pdata->pci_dev->intr_handle);
if (rte_bit_relaxed_get32(AXGBE_STOPPED, &pdata->dev_state))
return 0;
@@ -2311,7 +2311,7 @@ eth_axgbe_dev_init(struct rte_eth_dev *eth_dev)
return ret;
}
- rte_intr_callback_register(&pci_dev->intr_handle,
+ rte_intr_callback_register(pci_dev->intr_handle,
axgbe_dev_interrupt_handler,
(void *)eth_dev);
PMD_INIT_LOG(DEBUG, "port %d vendorID=0x%x deviceID=0x%x",
@@ -2335,8 +2335,8 @@ axgbe_dev_close(struct rte_eth_dev *eth_dev)
axgbe_dev_clear_queues(eth_dev);
/* disable uio intr before callback unregister */
- rte_intr_disable(&pci_dev->intr_handle);
- rte_intr_callback_unregister(&pci_dev->intr_handle,
+ rte_intr_disable(pci_dev->intr_handle);
+ rte_intr_callback_unregister(pci_dev->intr_handle,
axgbe_dev_interrupt_handler,
(void *)eth_dev);
diff --git a/drivers/net/axgbe/axgbe_mdio.c b/drivers/net/axgbe/axgbe_mdio.c
index 59fa9175ad..32d8c666f9 100644
--- a/drivers/net/axgbe/axgbe_mdio.c
+++ b/drivers/net/axgbe/axgbe_mdio.c
@@ -933,7 +933,7 @@ static int __axgbe_phy_config_aneg(struct axgbe_port *pdata)
}
/* Disable auto-negotiation interrupt */
- rte_intr_disable(&pdata->pci_dev->intr_handle);
+ rte_intr_disable(pdata->pci_dev->intr_handle);
/* Start auto-negotiation in a supported mode */
if (axgbe_use_mode(pdata, AXGBE_MODE_KR)) {
@@ -951,7 +951,7 @@ static int __axgbe_phy_config_aneg(struct axgbe_port *pdata)
} else if (axgbe_use_mode(pdata, AXGBE_MODE_SGMII_100)) {
axgbe_set_mode(pdata, AXGBE_MODE_SGMII_100);
} else {
- rte_intr_enable(&pdata->pci_dev->intr_handle);
+ rte_intr_enable(pdata->pci_dev->intr_handle);
return -EINVAL;
}
@@ -964,7 +964,7 @@ static int __axgbe_phy_config_aneg(struct axgbe_port *pdata)
pdata->kx_state = AXGBE_RX_BPA;
/* Re-enable auto-negotiation interrupt */
- rte_intr_enable(&pdata->pci_dev->intr_handle);
+ rte_intr_enable(pdata->pci_dev->intr_handle);
axgbe_an37_enable_interrupts(pdata);
axgbe_an_init(pdata);
diff --git a/drivers/net/bnx2x/bnx2x_ethdev.c b/drivers/net/bnx2x/bnx2x_ethdev.c
index 78fc717ec4..f36ad30e17 100644
--- a/drivers/net/bnx2x/bnx2x_ethdev.c
+++ b/drivers/net/bnx2x/bnx2x_ethdev.c
@@ -134,7 +134,7 @@ bnx2x_interrupt_handler(void *param)
PMD_DEBUG_PERIODIC_LOG(INFO, sc, "Interrupt handled");
bnx2x_interrupt_action(dev, 1);
- rte_intr_ack(&sc->pci_dev->intr_handle);
+ rte_intr_ack(sc->pci_dev->intr_handle);
}
static void bnx2x_periodic_start(void *param)
@@ -230,10 +230,10 @@ bnx2x_dev_start(struct rte_eth_dev *dev)
}
if (IS_PF(sc)) {
- rte_intr_callback_register(&sc->pci_dev->intr_handle,
+ rte_intr_callback_register(sc->pci_dev->intr_handle,
bnx2x_interrupt_handler, (void *)dev);
- if (rte_intr_enable(&sc->pci_dev->intr_handle))
+ if (rte_intr_enable(sc->pci_dev->intr_handle))
PMD_DRV_LOG(ERR, sc, "rte_intr_enable failed");
}
@@ -258,8 +258,8 @@ bnx2x_dev_stop(struct rte_eth_dev *dev)
bnx2x_dev_rxtx_init_dummy(dev);
if (IS_PF(sc)) {
- rte_intr_disable(&sc->pci_dev->intr_handle);
- rte_intr_callback_unregister(&sc->pci_dev->intr_handle,
+ rte_intr_disable(sc->pci_dev->intr_handle);
+ rte_intr_callback_unregister(sc->pci_dev->intr_handle,
bnx2x_interrupt_handler, (void *)dev);
/* stop the periodic callout */
diff --git a/drivers/net/bnxt/bnxt_ethdev.c b/drivers/net/bnxt/bnxt_ethdev.c
index 2791a5c62d..5a34bb96d0 100644
--- a/drivers/net/bnxt/bnxt_ethdev.c
+++ b/drivers/net/bnxt/bnxt_ethdev.c
@@ -729,7 +729,7 @@ static int bnxt_alloc_prev_ring_stats(struct bnxt *bp)
static int bnxt_start_nic(struct bnxt *bp)
{
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(bp->eth_dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
uint32_t intr_vector = 0;
uint32_t queue_id, base = BNXT_MISC_VEC_ID;
uint32_t vec = BNXT_MISC_VEC_ID;
@@ -846,26 +846,24 @@ static int bnxt_start_nic(struct bnxt *bp)
return rc;
}
- if (rte_intr_dp_is_en(intr_handle) && !intr_handle->intr_vec) {
- intr_handle->intr_vec =
- rte_zmalloc("intr_vec",
- bp->eth_dev->data->nb_rx_queues *
- sizeof(int), 0);
- if (intr_handle->intr_vec == NULL) {
+ if (rte_intr_dp_is_en(intr_handle)) {
+ if (rte_intr_vec_list_alloc(intr_handle, "intr_vec",
+ bp->eth_dev->data->nb_rx_queues)) {
PMD_DRV_LOG(ERR, "Failed to allocate %d rx_queues"
" intr_vec", bp->eth_dev->data->nb_rx_queues);
rc = -ENOMEM;
goto err_out;
}
- PMD_DRV_LOG(DEBUG, "intr_handle->intr_vec = %p "
- "intr_handle->nb_efd = %d intr_handle->max_intr = %d\n",
- intr_handle->intr_vec, intr_handle->nb_efd,
- intr_handle->max_intr);
+ PMD_DRV_LOG(DEBUG, "intr_handle->nb_efd = %d "
+ "intr_handle->max_intr = %d\n",
+ rte_intr_nb_efd_get(intr_handle),
+ rte_intr_max_intr_get(intr_handle));
for (queue_id = 0; queue_id < bp->eth_dev->data->nb_rx_queues;
queue_id++) {
- intr_handle->intr_vec[queue_id] =
- vec + BNXT_RX_VEC_START;
- if (vec < base + intr_handle->nb_efd - 1)
+ rte_intr_vec_list_index_set(intr_handle,
+ queue_id, vec + BNXT_RX_VEC_START);
+ if (vec < base + rte_intr_nb_efd_get(intr_handle)
+ - 1)
vec++;
}
}
@@ -1473,7 +1471,7 @@ static int bnxt_dev_stop(struct rte_eth_dev *eth_dev)
{
struct bnxt *bp = eth_dev->data->dev_private;
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
struct rte_eth_link link;
int ret;
@@ -1515,10 +1513,7 @@ static int bnxt_dev_stop(struct rte_eth_dev *eth_dev)
/* Clean queue intr-vector mapping */
rte_intr_efd_disable(intr_handle);
- if (intr_handle->intr_vec != NULL) {
- rte_free(intr_handle->intr_vec);
- intr_handle->intr_vec = NULL;
- }
+ rte_intr_vec_list_free(intr_handle);
bnxt_hwrm_port_clr_stats(bp);
bnxt_free_tx_mbufs(bp);
diff --git a/drivers/net/bnxt/bnxt_irq.c b/drivers/net/bnxt/bnxt_irq.c
index 122a1f9908..508abfc844 100644
--- a/drivers/net/bnxt/bnxt_irq.c
+++ b/drivers/net/bnxt/bnxt_irq.c
@@ -67,7 +67,7 @@ void bnxt_int_handler(void *param)
int bnxt_free_int(struct bnxt *bp)
{
- struct rte_intr_handle *intr_handle = &bp->pdev->intr_handle;
+ struct rte_intr_handle *intr_handle = bp->pdev->intr_handle;
struct bnxt_irq *irq = bp->irq_tbl;
int rc = 0;
@@ -170,7 +170,7 @@ int bnxt_setup_int(struct bnxt *bp)
int bnxt_request_int(struct bnxt *bp)
{
- struct rte_intr_handle *intr_handle = &bp->pdev->intr_handle;
+ struct rte_intr_handle *intr_handle = bp->pdev->intr_handle;
struct bnxt_irq *irq = bp->irq_tbl;
int rc = 0;
diff --git a/drivers/net/dpaa/dpaa_ethdev.c b/drivers/net/dpaa/dpaa_ethdev.c
index 89ea7dd47c..b9bf9d2966 100644
--- a/drivers/net/dpaa/dpaa_ethdev.c
+++ b/drivers/net/dpaa/dpaa_ethdev.c
@@ -208,7 +208,7 @@ dpaa_eth_dev_configure(struct rte_eth_dev *dev)
PMD_INIT_FUNC_TRACE();
dpaa_dev = container_of(rdev, struct rte_dpaa_device, device);
- intr_handle = &dpaa_dev->intr_handle;
+ intr_handle = dpaa_dev->intr_handle;
__fif = container_of(fif, struct __fman_if, __if);
/* Rx offloads which are enabled by default */
@@ -255,13 +255,14 @@ dpaa_eth_dev_configure(struct rte_eth_dev *dev)
}
/* if the interrupts were configured on this devices*/
- if (intr_handle && intr_handle->fd) {
+ if (intr_handle && rte_intr_fd_get(intr_handle)) {
if (dev->data->dev_conf.intr_conf.lsc != 0)
rte_intr_callback_register(intr_handle,
dpaa_interrupt_handler,
(void *)dev);
- ret = dpaa_intr_enable(__fif->node_name, intr_handle->fd);
+ ret = dpaa_intr_enable(__fif->node_name,
+ rte_intr_fd_get(intr_handle));
if (ret) {
if (dev->data->dev_conf.intr_conf.lsc != 0) {
rte_intr_callback_unregister(intr_handle,
@@ -368,9 +369,10 @@ static void dpaa_interrupt_handler(void *param)
int bytes_read;
dpaa_dev = container_of(rdev, struct rte_dpaa_device, device);
- intr_handle = &dpaa_dev->intr_handle;
+ intr_handle = dpaa_dev->intr_handle;
- bytes_read = read(intr_handle->fd, &buf, sizeof(uint64_t));
+ bytes_read = read(rte_intr_fd_get(intr_handle), &buf,
+ sizeof(uint64_t));
if (bytes_read < 0)
DPAA_PMD_ERR("Error reading eventfd\n");
dpaa_eth_link_update(dev, 0);
@@ -440,7 +442,7 @@ static int dpaa_eth_dev_close(struct rte_eth_dev *dev)
}
dpaa_dev = container_of(rdev, struct rte_dpaa_device, device);
- intr_handle = &dpaa_dev->intr_handle;
+ intr_handle = dpaa_dev->intr_handle;
__fif = container_of(fif, struct __fman_if, __if);
ret = dpaa_eth_dev_stop(dev);
@@ -449,7 +451,7 @@ static int dpaa_eth_dev_close(struct rte_eth_dev *dev)
if (link->link_status && !link->link_autoneg)
dpaa_restart_link_autoneg(__fif->node_name);
- if (intr_handle && intr_handle->fd &&
+ if (intr_handle && rte_intr_fd_get(intr_handle) &&
dev->data->dev_conf.intr_conf.lsc != 0) {
dpaa_intr_disable(__fif->node_name);
rte_intr_callback_unregister(intr_handle,
@@ -1072,26 +1074,38 @@ int dpaa_eth_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
rxq->qp = qp;
/* Set up the device interrupt handler */
- if (!dev->intr_handle) {
+ if (dev->intr_handle == NULL) {
struct rte_dpaa_device *dpaa_dev;
struct rte_device *rdev = dev->device;
dpaa_dev = container_of(rdev, struct rte_dpaa_device,
device);
- dev->intr_handle = &dpaa_dev->intr_handle;
- dev->intr_handle->intr_vec = rte_zmalloc(NULL,
- dpaa_push_mode_max_queue, 0);
- if (!dev->intr_handle->intr_vec) {
+ dev->intr_handle = dpaa_dev->intr_handle;
+ if (rte_intr_vec_list_alloc(dev->intr_handle,
+ NULL, dpaa_push_mode_max_queue)) {
DPAA_PMD_ERR("intr_vec alloc failed");
return -ENOMEM;
}
- dev->intr_handle->nb_efd = dpaa_push_mode_max_queue;
- dev->intr_handle->max_intr = dpaa_push_mode_max_queue;
+ if (rte_intr_nb_efd_set(dev->intr_handle,
+ dpaa_push_mode_max_queue))
+ return -rte_errno;
+
+ if (rte_intr_max_intr_set(dev->intr_handle,
+ dpaa_push_mode_max_queue))
+ return -rte_errno;
}
- dev->intr_handle->type = RTE_INTR_HANDLE_EXT;
- dev->intr_handle->intr_vec[queue_idx] = queue_idx + 1;
- dev->intr_handle->efds[queue_idx] = q_fd;
+ if (rte_intr_type_set(dev->intr_handle, RTE_INTR_HANDLE_EXT))
+ return -rte_errno;
+
+ if (rte_intr_vec_list_index_set(dev->intr_handle,
+ queue_idx, queue_idx + 1))
+ return -rte_errno;
+
+ if (rte_intr_efds_index_set(dev->intr_handle, queue_idx,
+ q_fd))
+ return -rte_errno;
+
rxq->q_fd = q_fd;
}
rxq->bp_array = rte_dpaa_bpid_info;
diff --git a/drivers/net/dpaa2/dpaa2_ethdev.c b/drivers/net/dpaa2/dpaa2_ethdev.c
index 59e728577f..73d17f7b3c 100644
--- a/drivers/net/dpaa2/dpaa2_ethdev.c
+++ b/drivers/net/dpaa2/dpaa2_ethdev.c
@@ -1145,7 +1145,7 @@ dpaa2_dev_start(struct rte_eth_dev *dev)
struct rte_intr_handle *intr_handle;
dpaa2_dev = container_of(rdev, struct rte_dpaa2_device, device);
- intr_handle = &dpaa2_dev->intr_handle;
+ intr_handle = dpaa2_dev->intr_handle;
PMD_INIT_FUNC_TRACE();
@@ -1216,8 +1216,8 @@ dpaa2_dev_start(struct rte_eth_dev *dev)
}
/* if the interrupts were configured on this devices*/
- if (intr_handle && (intr_handle->fd) &&
- (dev->data->dev_conf.intr_conf.lsc != 0)) {
+ if (intr_handle && rte_intr_fd_get(intr_handle) &&
+ dev->data->dev_conf.intr_conf.lsc != 0) {
/* Registering LSC interrupt handler */
rte_intr_callback_register(intr_handle,
dpaa2_interrupt_handler,
@@ -1256,8 +1256,8 @@ dpaa2_dev_stop(struct rte_eth_dev *dev)
PMD_INIT_FUNC_TRACE();
/* reset interrupt callback */
- if (intr_handle && (intr_handle->fd) &&
- (dev->data->dev_conf.intr_conf.lsc != 0)) {
+ if (intr_handle && rte_intr_fd_get(intr_handle) &&
+ dev->data->dev_conf.intr_conf.lsc != 0) {
/*disable dpni irqs */
dpaa2_eth_setup_irqs(dev, 0);
diff --git a/drivers/net/e1000/em_ethdev.c b/drivers/net/e1000/em_ethdev.c
index 9da477e59d..18fea4e0ac 100644
--- a/drivers/net/e1000/em_ethdev.c
+++ b/drivers/net/e1000/em_ethdev.c
@@ -237,7 +237,7 @@ static int
eth_em_dev_init(struct rte_eth_dev *eth_dev)
{
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
struct e1000_adapter *adapter =
E1000_DEV_PRIVATE(eth_dev->data->dev_private);
struct e1000_hw *hw =
@@ -523,7 +523,7 @@ eth_em_start(struct rte_eth_dev *dev)
struct e1000_hw *hw =
E1000_DEV_PRIVATE_TO_HW(dev->data->dev_private);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
int ret, mask;
uint32_t intr_vector = 0;
uint32_t *speeds;
@@ -573,12 +573,10 @@ eth_em_start(struct rte_eth_dev *dev)
}
if (rte_intr_dp_is_en(intr_handle)) {
- intr_handle->intr_vec =
- rte_zmalloc("intr_vec",
- dev->data->nb_rx_queues * sizeof(int), 0);
- if (intr_handle->intr_vec == NULL) {
+ if (rte_intr_vec_list_alloc(intr_handle, "intr_vec",
+ dev->data->nb_rx_queues)) {
PMD_INIT_LOG(ERR, "Failed to allocate %d rx_queues"
- " intr_vec", dev->data->nb_rx_queues);
+ " intr_vec", dev->data->nb_rx_queues);
return -ENOMEM;
}
@@ -716,7 +714,7 @@ eth_em_stop(struct rte_eth_dev *dev)
struct rte_eth_link link;
struct e1000_hw *hw = E1000_DEV_PRIVATE_TO_HW(dev->data->dev_private);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
dev->data->dev_started = 0;
@@ -750,10 +748,7 @@ eth_em_stop(struct rte_eth_dev *dev)
/* Clean datapath event and queue/vec mapping */
rte_intr_efd_disable(intr_handle);
- if (intr_handle->intr_vec != NULL) {
- rte_free(intr_handle->intr_vec);
- intr_handle->intr_vec = NULL;
- }
+ rte_intr_vec_list_free(intr_handle);
return 0;
}
@@ -765,7 +760,7 @@ eth_em_close(struct rte_eth_dev *dev)
struct e1000_adapter *adapter =
E1000_DEV_PRIVATE(dev->data->dev_private);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
int ret;
if (rte_eal_process_type() != RTE_PROC_PRIMARY)
@@ -1006,7 +1001,7 @@ eth_em_rx_queue_intr_enable(struct rte_eth_dev *dev, __rte_unused uint16_t queue
{
struct e1000_hw *hw = E1000_DEV_PRIVATE_TO_HW(dev->data->dev_private);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
em_rxq_intr_enable(hw);
rte_intr_ack(intr_handle);
diff --git a/drivers/net/e1000/igb_ethdev.c b/drivers/net/e1000/igb_ethdev.c
index ae3bc4a9c2..ff06575f03 100644
--- a/drivers/net/e1000/igb_ethdev.c
+++ b/drivers/net/e1000/igb_ethdev.c
@@ -515,7 +515,7 @@ igb_intr_enable(struct rte_eth_dev *dev)
struct e1000_hw *hw =
E1000_DEV_PRIVATE_TO_HW(dev->data->dev_private);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
if (rte_intr_allow_others(intr_handle) &&
dev->data->dev_conf.intr_conf.lsc != 0) {
@@ -532,7 +532,7 @@ igb_intr_disable(struct rte_eth_dev *dev)
struct e1000_hw *hw =
E1000_DEV_PRIVATE_TO_HW(dev->data->dev_private);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
if (rte_intr_allow_others(intr_handle) &&
dev->data->dev_conf.intr_conf.lsc != 0) {
@@ -851,12 +851,12 @@ eth_igb_dev_init(struct rte_eth_dev *eth_dev)
eth_dev->data->port_id, pci_dev->id.vendor_id,
pci_dev->id.device_id);
- rte_intr_callback_register(&pci_dev->intr_handle,
+ rte_intr_callback_register(pci_dev->intr_handle,
eth_igb_interrupt_handler,
(void *)eth_dev);
/* enable uio/vfio intr/eventfd mapping */
- rte_intr_enable(&pci_dev->intr_handle);
+ rte_intr_enable(pci_dev->intr_handle);
/* enable support intr */
igb_intr_enable(eth_dev);
@@ -992,7 +992,7 @@ eth_igbvf_dev_init(struct rte_eth_dev *eth_dev)
eth_dev->data->port_id, pci_dev->id.vendor_id,
pci_dev->id.device_id, "igb_mac_82576_vf");
- intr_handle = &pci_dev->intr_handle;
+ intr_handle = pci_dev->intr_handle;
rte_intr_callback_register(intr_handle,
eth_igbvf_interrupt_handler, eth_dev);
@@ -1196,7 +1196,7 @@ eth_igb_start(struct rte_eth_dev *dev)
struct e1000_adapter *adapter =
E1000_DEV_PRIVATE(dev->data->dev_private);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
int ret, mask;
uint32_t intr_vector = 0;
uint32_t ctrl_ext;
@@ -1255,11 +1255,10 @@ eth_igb_start(struct rte_eth_dev *dev)
return -1;
}
- if (rte_intr_dp_is_en(intr_handle) && !intr_handle->intr_vec) {
- intr_handle->intr_vec =
- rte_zmalloc("intr_vec",
- dev->data->nb_rx_queues * sizeof(int), 0);
- if (intr_handle->intr_vec == NULL) {
+ /* Allocate the vector list */
+ if (rte_intr_dp_is_en(intr_handle)) {
+ if (rte_intr_vec_list_alloc(intr_handle, "intr_vec",
+ dev->data->nb_rx_queues)) {
PMD_INIT_LOG(ERR, "Failed to allocate %d rx_queues"
" intr_vec", dev->data->nb_rx_queues);
return -ENOMEM;
@@ -1418,7 +1417,7 @@ eth_igb_stop(struct rte_eth_dev *dev)
struct e1000_hw *hw = E1000_DEV_PRIVATE_TO_HW(dev->data->dev_private);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
struct rte_eth_link link;
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
struct e1000_adapter *adapter =
E1000_DEV_PRIVATE(dev->data->dev_private);
@@ -1462,10 +1461,7 @@ eth_igb_stop(struct rte_eth_dev *dev)
/* Clean datapath event and queue/vec mapping */
rte_intr_efd_disable(intr_handle);
- if (intr_handle->intr_vec != NULL) {
- rte_free(intr_handle->intr_vec);
- intr_handle->intr_vec = NULL;
- }
+ rte_intr_vec_list_free(intr_handle);
adapter->stopped = true;
dev->data->dev_started = 0;
@@ -1505,7 +1501,7 @@ eth_igb_close(struct rte_eth_dev *dev)
struct e1000_hw *hw = E1000_DEV_PRIVATE_TO_HW(dev->data->dev_private);
struct rte_eth_link link;
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
struct e1000_filter_info *filter_info =
E1000_DEV_PRIVATE_TO_FILTER_INFO(dev->data->dev_private);
int ret;
@@ -1531,10 +1527,8 @@ eth_igb_close(struct rte_eth_dev *dev)
igb_dev_free_queues(dev);
- if (intr_handle->intr_vec) {
- rte_free(intr_handle->intr_vec);
- intr_handle->intr_vec = NULL;
- }
+ /* Cleanup vector list */
+ rte_intr_vec_list_free(intr_handle);
memset(&link, 0, sizeof(link));
rte_eth_linkstatus_set(dev, &link);
@@ -2771,7 +2765,7 @@ static int eth_igb_rxq_interrupt_setup(struct rte_eth_dev *dev)
struct e1000_hw *hw =
E1000_DEV_PRIVATE_TO_HW(dev->data->dev_private);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
int misc_shift = rte_intr_allow_others(intr_handle) ? 1 : 0;
struct rte_eth_dev_info dev_info;
@@ -3288,7 +3282,7 @@ igbvf_dev_start(struct rte_eth_dev *dev)
struct e1000_adapter *adapter =
E1000_DEV_PRIVATE(dev->data->dev_private);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
int ret;
uint32_t intr_vector = 0;
@@ -3319,11 +3313,10 @@ igbvf_dev_start(struct rte_eth_dev *dev)
return ret;
}
- if (rte_intr_dp_is_en(intr_handle) && !intr_handle->intr_vec) {
- intr_handle->intr_vec =
- rte_zmalloc("intr_vec",
- dev->data->nb_rx_queues * sizeof(int), 0);
- if (!intr_handle->intr_vec) {
+ /* Allocate the vector list */
+ if (rte_intr_dp_is_en(intr_handle)) {
+ if (rte_intr_vec_list_alloc(intr_handle, "intr_vec",
+ dev->data->nb_rx_queues)) {
PMD_INIT_LOG(ERR, "Failed to allocate %d rx_queues"
" intr_vec", dev->data->nb_rx_queues);
return -ENOMEM;
@@ -3345,7 +3338,7 @@ static int
igbvf_dev_stop(struct rte_eth_dev *dev)
{
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
struct e1000_adapter *adapter =
E1000_DEV_PRIVATE(dev->data->dev_private);
@@ -3369,10 +3362,9 @@ igbvf_dev_stop(struct rte_eth_dev *dev)
/* Clean datapath event and queue/vec mapping */
rte_intr_efd_disable(intr_handle);
- if (intr_handle->intr_vec) {
- rte_free(intr_handle->intr_vec);
- intr_handle->intr_vec = NULL;
- }
+
+ /* Clean vector list */
+ rte_intr_vec_list_free(intr_handle);
adapter->stopped = true;
dev->data->dev_started = 0;
@@ -3410,7 +3402,7 @@ igbvf_dev_close(struct rte_eth_dev *dev)
memset(&addr, 0, sizeof(addr));
igbvf_default_mac_addr_set(dev, &addr);
- rte_intr_callback_unregister(&pci_dev->intr_handle,
+ rte_intr_callback_unregister(pci_dev->intr_handle,
eth_igbvf_interrupt_handler,
(void *)dev);
@@ -5112,7 +5104,7 @@ eth_igb_rx_queue_intr_disable(struct rte_eth_dev *dev, uint16_t queue_id)
struct e1000_hw *hw =
E1000_DEV_PRIVATE_TO_HW(dev->data->dev_private);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
uint32_t vec = E1000_MISC_VEC_ID;
if (rte_intr_allow_others(intr_handle))
@@ -5132,7 +5124,7 @@ eth_igb_rx_queue_intr_enable(struct rte_eth_dev *dev, uint16_t queue_id)
struct e1000_hw *hw =
E1000_DEV_PRIVATE_TO_HW(dev->data->dev_private);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
uint32_t vec = E1000_MISC_VEC_ID;
if (rte_intr_allow_others(intr_handle))
@@ -5210,7 +5202,7 @@ eth_igb_configure_msix_intr(struct rte_eth_dev *dev)
uint32_t base = E1000_MISC_VEC_ID;
uint32_t misc_shift = 0;
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
/* won't configure msix register if no mapping is done
* between intr vector and event fd
@@ -5251,8 +5243,9 @@ eth_igb_configure_msix_intr(struct rte_eth_dev *dev)
E1000_WRITE_REG(hw, E1000_GPIE, E1000_GPIE_MSIX_MODE |
E1000_GPIE_PBA | E1000_GPIE_EIAME |
E1000_GPIE_NSICR);
- intr_mask = RTE_LEN2MASK(intr_handle->nb_efd, uint32_t) <<
- misc_shift;
+ intr_mask =
+ RTE_LEN2MASK(rte_intr_nb_efd_get(intr_handle),
+ uint32_t) << misc_shift;
if (dev->data->dev_conf.intr_conf.lsc != 0)
intr_mask |= (1 << IGB_MSIX_OTHER_INTR_VEC);
@@ -5270,8 +5263,8 @@ eth_igb_configure_msix_intr(struct rte_eth_dev *dev)
/* use EIAM to auto-mask when MSI-X interrupt
* is asserted, this saves a register write for every interrupt
*/
- intr_mask = RTE_LEN2MASK(intr_handle->nb_efd, uint32_t) <<
- misc_shift;
+ intr_mask = RTE_LEN2MASK(rte_intr_nb_efd_get(intr_handle),
+ uint32_t) << misc_shift;
if (dev->data->dev_conf.intr_conf.lsc != 0)
intr_mask |= (1 << IGB_MSIX_OTHER_INTR_VEC);
@@ -5281,8 +5274,8 @@ eth_igb_configure_msix_intr(struct rte_eth_dev *dev)
for (queue_id = 0; queue_id < dev->data->nb_rx_queues; queue_id++) {
eth_igb_assign_msix_vector(hw, 0, queue_id, vec);
- intr_handle->intr_vec[queue_id] = vec;
- if (vec < base + intr_handle->nb_efd - 1)
+ rte_intr_vec_list_index_set(intr_handle, queue_id, vec);
+ if (vec < base + rte_intr_nb_efd_get(intr_handle) - 1)
vec++;
}
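
For the igb MSI-X programming above only the field reads change; the mask
computation and the 1:1 queue to vector walk stay as they were. A condensed
sketch of that loop (register writes omitted, names illustrative):

    #include <stdint.h>
    #include <rte_common.h>
    #include <rte_interrupts.h>

    static uint32_t
    example_assign_rx_vectors(struct rte_intr_handle *intr_handle,
                              uint16_t nb_rx_queues, uint32_t base,
                              uint32_t misc_shift)
    {
            uint32_t vec = base, intr_mask;
            uint16_t queue_id;

            /* One mask bit per event fd, shifted past the misc vector */
            intr_mask = RTE_LEN2MASK(rte_intr_nb_efd_get(intr_handle),
                                     uint32_t) << misc_shift;

            /* 1:1 queue to vector mapping; the last vector takes the rest */
            for (queue_id = 0; queue_id < nb_rx_queues; queue_id++) {
                    rte_intr_vec_list_index_set(intr_handle, queue_id, vec);
                    if (vec < base + rte_intr_nb_efd_get(intr_handle) - 1)
                            vec++;
            }

            return intr_mask;
    }
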
diff --git a/drivers/net/ena/ena_ethdev.c b/drivers/net/ena/ena_ethdev.c
index 572d7c20f9..634c97acf6 100644
--- a/drivers/net/ena/ena_ethdev.c
+++ b/drivers/net/ena/ena_ethdev.c
@@ -494,7 +494,7 @@ static void ena_config_debug_area(struct ena_adapter *adapter)
static int ena_close(struct rte_eth_dev *dev)
{
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
struct ena_adapter *adapter = dev->data->dev_private;
int ret = 0;
@@ -954,7 +954,7 @@ static int ena_stop(struct rte_eth_dev *dev)
struct ena_adapter *adapter = dev->data->dev_private;
struct ena_com_dev *ena_dev = &adapter->ena_dev;
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
int rc;
/* Cannot free memory in secondary process */
@@ -976,10 +976,9 @@ static int ena_stop(struct rte_eth_dev *dev)
rte_intr_disable(intr_handle);
rte_intr_efd_disable(intr_handle);
- if (intr_handle->intr_vec != NULL) {
- rte_free(intr_handle->intr_vec);
- intr_handle->intr_vec = NULL;
- }
+
+ /* Cleanup vector list */
+ rte_intr_vec_list_free(intr_handle);
rte_intr_enable(intr_handle);
@@ -995,7 +994,7 @@ static int ena_create_io_queue(struct rte_eth_dev *dev, struct ena_ring *ring)
struct ena_adapter *adapter = ring->adapter;
struct ena_com_dev *ena_dev = &adapter->ena_dev;
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
struct ena_com_create_io_ctx ctx =
/* policy set to _HOST just to satisfy icc compiler */
{ ENA_ADMIN_PLACEMENT_POLICY_HOST,
@@ -1015,7 +1014,10 @@ static int ena_create_io_queue(struct rte_eth_dev *dev, struct ena_ring *ring)
ena_qid = ENA_IO_RXQ_IDX(ring->id);
ctx.direction = ENA_COM_IO_QUEUE_DIRECTION_RX;
if (rte_intr_dp_is_en(intr_handle))
- ctx.msix_vector = intr_handle->intr_vec[ring->id];
+ ctx.msix_vector =
+ rte_intr_vec_list_index_get(intr_handle,
+ ring->id);
+
for (i = 0; i < ring->ring_size; i++)
ring->empty_rx_reqs[i] = i;
}
@@ -1824,7 +1826,7 @@ static int eth_ena_dev_init(struct rte_eth_dev *eth_dev)
pci_dev->addr.devid,
pci_dev->addr.function);
- intr_handle = &pci_dev->intr_handle;
+ intr_handle = pci_dev->intr_handle;
adapter->regs = pci_dev->mem_resource[ENA_REGS_BAR].addr;
adapter->dev_mem_base = pci_dev->mem_resource[ENA_MEM_BAR].addr;
@@ -3112,7 +3114,7 @@ static int ena_parse_devargs(struct ena_adapter *adapter,
static int ena_setup_rx_intr(struct rte_eth_dev *dev)
{
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
int rc;
uint16_t vectors_nb, i;
bool rx_intr_requested = dev->data->dev_conf.intr_conf.rxq;
@@ -3139,9 +3141,9 @@ static int ena_setup_rx_intr(struct rte_eth_dev *dev)
goto enable_intr;
}
- intr_handle->intr_vec = rte_zmalloc("intr_vec",
- dev->data->nb_rx_queues * sizeof(*intr_handle->intr_vec), 0);
- if (intr_handle->intr_vec == NULL) {
+ /* Allocate the vector list */
+ if (rte_intr_vec_list_alloc(intr_handle, "intr_vec",
+ dev->data->nb_rx_queues)) {
PMD_DRV_LOG(ERR,
"Failed to allocate interrupt vector for %d queues\n",
dev->data->nb_rx_queues);
@@ -3160,7 +3162,9 @@ static int ena_setup_rx_intr(struct rte_eth_dev *dev)
}
for (i = 0; i < vectors_nb; ++i)
- intr_handle->intr_vec[i] = RTE_INTR_VEC_RXTX_OFFSET + i;
+ if (rte_intr_vec_list_index_set(intr_handle, i,
+ RTE_INTR_VEC_RXTX_OFFSET + i))
+ goto disable_intr_efd;
rte_intr_enable(intr_handle);
return 0;
@@ -3168,8 +3172,7 @@ static int ena_setup_rx_intr(struct rte_eth_dev *dev)
disable_intr_efd:
rte_intr_efd_disable(intr_handle);
free_intr_vec:
- rte_free(intr_handle->intr_vec);
- intr_handle->intr_vec = NULL;
+ rte_intr_vec_list_free(intr_handle);
enable_intr:
rte_intr_enable(intr_handle);
return rc;
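
On the stop/close side the conversion is equally mechanical: the rte_free()
plus NULL-reset idiom for intr_vec collapses into one call, and the drivers
above invoke it unconditionally, so the free is expected to cope with a handle
that never had a vector list. Sketch:

    #include <rte_interrupts.h>

    static void
    example_rx_intr_teardown(struct rte_intr_handle *intr_handle)
    {
            /* Drop the queue/vector event fds ... */
            rte_intr_efd_disable(intr_handle);

            /* ... and release the vector list owned by the handle */
            rte_intr_vec_list_free(intr_handle);
    }
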
diff --git a/drivers/net/enic/enic_main.c b/drivers/net/enic/enic_main.c
index f7ae84767f..5cc6d9f017 100644
--- a/drivers/net/enic/enic_main.c
+++ b/drivers/net/enic/enic_main.c
@@ -448,7 +448,7 @@ enic_intr_handler(void *arg)
rte_eth_dev_callback_process(dev, RTE_ETH_EVENT_INTR_LSC, NULL);
enic_log_q_error(enic);
/* Re-enable irq in case of INTx */
- rte_intr_ack(&enic->pdev->intr_handle);
+ rte_intr_ack(enic->pdev->intr_handle);
}
static int enic_rxq_intr_init(struct enic *enic)
@@ -477,14 +477,16 @@ static int enic_rxq_intr_init(struct enic *enic)
" interrupts\n");
return err;
}
- intr_handle->intr_vec = rte_zmalloc("enic_intr_vec",
- rxq_intr_count * sizeof(int), 0);
- if (intr_handle->intr_vec == NULL) {
+
+ if (rte_intr_vec_list_alloc(intr_handle, "enic_intr_vec",
+ rxq_intr_count)) {
dev_err(enic, "Failed to allocate intr_vec\n");
return -ENOMEM;
}
for (i = 0; i < rxq_intr_count; i++)
- intr_handle->intr_vec[i] = i + ENICPMD_RXQ_INTR_OFFSET;
+ if (rte_intr_vec_list_index_set(intr_handle, i,
+ i + ENICPMD_RXQ_INTR_OFFSET))
+ return -rte_errno;
return 0;
}
@@ -494,10 +496,8 @@ static void enic_rxq_intr_deinit(struct enic *enic)
intr_handle = enic->rte_dev->intr_handle;
rte_intr_efd_disable(intr_handle);
- if (intr_handle->intr_vec != NULL) {
- rte_free(intr_handle->intr_vec);
- intr_handle->intr_vec = NULL;
- }
+
+ rte_intr_vec_list_free(intr_handle);
}
static void enic_prep_wq_for_simple_tx(struct enic *enic, uint16_t queue_idx)
@@ -667,10 +667,10 @@ int enic_enable(struct enic *enic)
vnic_dev_enable_wait(enic->vdev);
/* Register and enable error interrupt */
- rte_intr_callback_register(&(enic->pdev->intr_handle),
+ rte_intr_callback_register(enic->pdev->intr_handle,
enic_intr_handler, (void *)enic->rte_dev);
- rte_intr_enable(&(enic->pdev->intr_handle));
+ rte_intr_enable(enic->pdev->intr_handle);
/* Unmask LSC interrupt */
vnic_intr_unmask(&enic->intr[ENICPMD_LSC_INTR_OFFSET]);
@@ -1111,8 +1111,8 @@ int enic_disable(struct enic *enic)
(void)vnic_intr_masked(&enic->intr[i]); /* flush write */
}
enic_rxq_intr_deinit(enic);
- rte_intr_disable(&enic->pdev->intr_handle);
- rte_intr_callback_unregister(&enic->pdev->intr_handle,
+ rte_intr_disable(enic->pdev->intr_handle);
+ rte_intr_callback_unregister(enic->pdev->intr_handle,
enic_intr_handler,
(void *)enic->rte_dev);
diff --git a/drivers/net/failsafe/failsafe.c b/drivers/net/failsafe/failsafe.c
index 82d595b1d1..ad6b43538e 100644
--- a/drivers/net/failsafe/failsafe.c
+++ b/drivers/net/failsafe/failsafe.c
@@ -264,11 +264,23 @@ fs_eth_dev_create(struct rte_vdev_device *vdev)
RTE_ETHER_ADDR_BYTES(mac));
dev->data->dev_flags |= RTE_ETH_DEV_INTR_LSC |
RTE_ETH_DEV_AUTOFILL_QUEUE_XSTATS;
- PRIV(dev)->intr_handle = (struct rte_intr_handle){
- .fd = -1,
- .type = RTE_INTR_HANDLE_EXT,
- };
+
+ /* Allocate interrupt instance */
+ PRIV(dev)->intr_handle =
+ rte_intr_instance_alloc(RTE_INTR_INSTANCE_F_SHARED);
+ if (PRIV(dev)->intr_handle == NULL) {
+ ERROR("Failed to allocate intr handle");
+ goto cancel_alarm;
+ }
+
+ if (rte_intr_fd_set(PRIV(dev)->intr_handle, -1))
+ goto cancel_alarm;
+
+ if (rte_intr_type_set(PRIV(dev)->intr_handle, RTE_INTR_HANDLE_EXT))
+ goto cancel_alarm;
+
rte_eth_dev_probing_finish(dev);
+
return 0;
cancel_alarm:
failsafe_hotplug_alarm_cancel(dev);
@@ -297,6 +309,7 @@ fs_rte_eth_free(const char *name)
return 0; /* port already released */
ret = failsafe_eth_dev_close(dev);
rte_eth_dev_release_port(dev);
+ rte_intr_instance_free(PRIV(dev)->intr_handle);
return ret;
}
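
Because the handle is no longer embedded by value in the driver's private
data, a port now owns a pointer and has to manage its lifetime explicitly,
which is what the failsafe change above does. A minimal sketch of that pattern
(the example_priv container is illustrative):

    #include <errno.h>
    #include <rte_errno.h>
    #include <rte_interrupts.h>

    struct example_priv {
            struct rte_intr_handle *intr_handle;  /* was a by-value field */
    };

    static int
    example_probe(struct example_priv *priv)
    {
            priv->intr_handle =
                    rte_intr_instance_alloc(RTE_INTR_INSTANCE_F_SHARED);
            if (priv->intr_handle == NULL)
                    return -ENOMEM;

            /* Same initial state the old designated initializer provided */
            if (rte_intr_fd_set(priv->intr_handle, -1) ||
                rte_intr_type_set(priv->intr_handle, RTE_INTR_HANDLE_EXT))
                    return -rte_errno;

            return 0;
    }

    static void
    example_remove(struct example_priv *priv)
    {
            rte_intr_instance_free(priv->intr_handle);
            priv->intr_handle = NULL;
    }
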
diff --git a/drivers/net/failsafe/failsafe_intr.c b/drivers/net/failsafe/failsafe_intr.c
index 5f4810051d..14b87a54ab 100644
--- a/drivers/net/failsafe/failsafe_intr.c
+++ b/drivers/net/failsafe/failsafe_intr.c
@@ -410,12 +410,10 @@ fs_rx_intr_vec_uninstall(struct fs_priv *priv)
{
struct rte_intr_handle *intr_handle;
- intr_handle = &priv->intr_handle;
- if (intr_handle->intr_vec != NULL) {
- free(intr_handle->intr_vec);
- intr_handle->intr_vec = NULL;
- }
- intr_handle->nb_efd = 0;
+ intr_handle = priv->intr_handle;
+ rte_intr_vec_list_free(intr_handle);
+
+ rte_intr_nb_efd_set(intr_handle, 0);
}
/**
@@ -439,11 +437,9 @@ fs_rx_intr_vec_install(struct fs_priv *priv)
rxqs_n = priv->data->nb_rx_queues;
n = RTE_MIN(rxqs_n, (uint32_t)RTE_MAX_RXTX_INTR_VEC_ID);
count = 0;
- intr_handle = &priv->intr_handle;
- RTE_ASSERT(intr_handle->intr_vec == NULL);
+ intr_handle = priv->intr_handle;
/* Allocate the interrupt vector of the failsafe Rx proxy interrupts */
- intr_handle->intr_vec = malloc(n * sizeof(intr_handle->intr_vec[0]));
- if (intr_handle->intr_vec == NULL) {
+ if (rte_intr_vec_list_alloc(intr_handle, NULL, n)) {
fs_rx_intr_vec_uninstall(priv);
rte_errno = ENOMEM;
ERROR("Failed to allocate memory for interrupt vector,"
@@ -456,9 +452,9 @@ fs_rx_intr_vec_install(struct fs_priv *priv)
/* Skip queues that cannot request interrupts. */
if (rxq == NULL || rxq->event_fd < 0) {
/* Use invalid intr_vec[] index to disable entry. */
- intr_handle->intr_vec[i] =
- RTE_INTR_VEC_RXTX_OFFSET +
- RTE_MAX_RXTX_INTR_VEC_ID;
+ if (rte_intr_vec_list_index_set(intr_handle, i,
+ RTE_INTR_VEC_RXTX_OFFSET + RTE_MAX_RXTX_INTR_VEC_ID))
+ return -rte_errno;
continue;
}
if (count >= RTE_MAX_RXTX_INTR_VEC_ID) {
@@ -469,15 +465,24 @@ fs_rx_intr_vec_install(struct fs_priv *priv)
fs_rx_intr_vec_uninstall(priv);
return -rte_errno;
}
- intr_handle->intr_vec[i] = RTE_INTR_VEC_RXTX_OFFSET + count;
- intr_handle->efds[count] = rxq->event_fd;
+ if (rte_intr_vec_list_index_set(intr_handle, i,
+ RTE_INTR_VEC_RXTX_OFFSET + count))
+ return -rte_errno;
+
+ if (rte_intr_efds_index_set(intr_handle, count,
+ rxq->event_fd))
+ return -rte_errno;
count++;
}
if (count == 0) {
fs_rx_intr_vec_uninstall(priv);
} else {
- intr_handle->nb_efd = count;
- intr_handle->efd_counter_size = sizeof(uint64_t);
+ if (rte_intr_nb_efd_set(intr_handle, count))
+ return -rte_errno;
+
+ if (rte_intr_efd_counter_size_set(intr_handle,
+ sizeof(uint64_t)))
+ return -rte_errno;
}
return 0;
}
@@ -499,7 +504,7 @@ failsafe_rx_intr_uninstall(struct rte_eth_dev *dev)
struct rte_intr_handle *intr_handle;
priv = PRIV(dev);
- intr_handle = &priv->intr_handle;
+ intr_handle = priv->intr_handle;
rte_intr_free_epoll_fd(intr_handle);
fs_rx_event_proxy_uninstall(priv);
fs_rx_intr_vec_uninstall(priv);
@@ -530,6 +535,6 @@ failsafe_rx_intr_install(struct rte_eth_dev *dev)
fs_rx_intr_vec_uninstall(priv);
return -rte_errno;
}
- dev->intr_handle = &priv->intr_handle;
+ dev->intr_handle = priv->intr_handle;
return 0;
}
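
The Rx proxy install above shows the event fd side of the API: each usable fd
is recorded through rte_intr_efds_index_set(), and the count and counter size
go through their setters instead of writing nb_efd/efd_counter_size directly.
A condensed sketch, assuming the vector list was already allocated on the
handle:

    #include <stdint.h>
    #include <rte_errno.h>
    #include <rte_interrupts.h>

    static int
    example_install_efds(struct rte_intr_handle *handle,
                         const int *event_fds, uint32_t n)
    {
            uint32_t i, count = 0;

            for (i = 0; i < n; i++) {
                    if (event_fds[i] < 0)
                            continue;

                    if (rte_intr_vec_list_index_set(handle, i,
                                    RTE_INTR_VEC_RXTX_OFFSET + count))
                            return -rte_errno;

                    if (rte_intr_efds_index_set(handle, count, event_fds[i]))
                            return -rte_errno;
                    count++;
            }

            if (rte_intr_nb_efd_set(handle, count))
                    return -rte_errno;

            /* eventfd counters are 64 bits wide */
            if (rte_intr_efd_counter_size_set(handle, sizeof(uint64_t)))
                    return -rte_errno;

            return 0;
    }
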
diff --git a/drivers/net/failsafe/failsafe_ops.c b/drivers/net/failsafe/failsafe_ops.c
index a3a8a1c82e..822883bc2f 100644
--- a/drivers/net/failsafe/failsafe_ops.c
+++ b/drivers/net/failsafe/failsafe_ops.c
@@ -393,15 +393,22 @@ fs_rx_queue_setup(struct rte_eth_dev *dev,
* For the time being, fake as if we are using MSIX interrupts,
* this will cause rte_intr_efd_enable to allocate an eventfd for us.
*/
- struct rte_intr_handle intr_handle = {
- .type = RTE_INTR_HANDLE_VFIO_MSIX,
- .efds = { -1, },
- };
+ struct rte_intr_handle *intr_handle;
struct sub_device *sdev;
struct rxq *rxq;
uint8_t i;
int ret;
+ intr_handle = rte_intr_instance_alloc(RTE_INTR_INSTANCE_F_PRIVATE);
+ if (intr_handle == NULL)
+ return -ENOMEM;
+
+ if (rte_intr_type_set(intr_handle, RTE_INTR_HANDLE_VFIO_MSIX))
+ return -rte_errno;
+
+ if (rte_intr_efds_index_set(intr_handle, 0, -1))
+ return -rte_errno;
+
fs_lock(dev, 0);
if (rx_conf->rx_deferred_start) {
FOREACH_SUBDEV_STATE(sdev, i, dev, DEV_PROBED) {
@@ -435,12 +442,12 @@ fs_rx_queue_setup(struct rte_eth_dev *dev,
rxq->info.nb_desc = nb_rx_desc;
rxq->priv = PRIV(dev);
rxq->sdev = PRIV(dev)->subs;
- ret = rte_intr_efd_enable(&intr_handle, 1);
+ ret = rte_intr_efd_enable(intr_handle, 1);
if (ret < 0) {
fs_unlock(dev, 0);
return ret;
}
- rxq->event_fd = intr_handle.efds[0];
+ rxq->event_fd = rte_intr_efds_index_get(intr_handle, 0);
dev->data->rx_queues[rx_queue_id] = rxq;
FOREACH_SUBDEV_STATE(sdev, i, dev, DEV_ACTIVE) {
ret = rte_eth_rx_queue_setup(PORT_ID(sdev),
diff --git a/drivers/net/failsafe/failsafe_private.h b/drivers/net/failsafe/failsafe_private.h
index cd39d103c6..a80f5e2caf 100644
--- a/drivers/net/failsafe/failsafe_private.h
+++ b/drivers/net/failsafe/failsafe_private.h
@@ -166,7 +166,7 @@ struct fs_priv {
struct rte_ether_addr *mcast_addrs;
/* current capabilities */
struct rte_eth_dev_owner my_owner; /* Unique owner. */
- struct rte_intr_handle intr_handle; /* Port interrupt handle. */
+ struct rte_intr_handle *intr_handle; /* Port interrupt handle. */
/*
* Fail-safe state machine.
* This level will be tracking state of the EAL and eth
diff --git a/drivers/net/fm10k/fm10k_ethdev.c b/drivers/net/fm10k/fm10k_ethdev.c
index d256334bfd..c25c323140 100644
--- a/drivers/net/fm10k/fm10k_ethdev.c
+++ b/drivers/net/fm10k/fm10k_ethdev.c
@@ -32,7 +32,8 @@
#define BIT_MASK_PER_UINT32 ((1 << CHARS_PER_UINT32) - 1)
/* default 1:1 map from queue ID to interrupt vector ID */
-#define Q2V(pci_dev, queue_id) ((pci_dev)->intr_handle.intr_vec[queue_id])
+#define Q2V(pci_dev, queue_id) \
+ (rte_intr_vec_list_index_get((pci_dev)->intr_handle, queue_id))
/* First 64 Logical ports for PF/VMDQ, second 64 for Flow director */
#define MAX_LPORT_NUM 128
@@ -690,7 +691,7 @@ fm10k_dev_rx_init(struct rte_eth_dev *dev)
struct fm10k_hw *hw = FM10K_DEV_PRIVATE_TO_HW(dev->data->dev_private);
struct fm10k_macvlan_filter_info *macvlan;
struct rte_pci_device *pdev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pdev->intr_handle;
+ struct rte_intr_handle *intr_handle = pdev->intr_handle;
int i, ret;
struct fm10k_rx_queue *rxq;
uint64_t base_addr;
@@ -1158,7 +1159,7 @@ fm10k_dev_stop(struct rte_eth_dev *dev)
{
struct fm10k_hw *hw = FM10K_DEV_PRIVATE_TO_HW(dev->data->dev_private);
struct rte_pci_device *pdev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pdev->intr_handle;
+ struct rte_intr_handle *intr_handle = pdev->intr_handle;
int i;
PMD_INIT_FUNC_TRACE();
@@ -1187,8 +1188,7 @@ fm10k_dev_stop(struct rte_eth_dev *dev)
}
/* Clean datapath event and queue/vec mapping */
rte_intr_efd_disable(intr_handle);
- rte_free(intr_handle->intr_vec);
- intr_handle->intr_vec = NULL;
+ rte_intr_vec_list_free(intr_handle);
return 0;
}
@@ -2367,7 +2367,7 @@ fm10k_dev_rx_queue_intr_enable(struct rte_eth_dev *dev, uint16_t queue_id)
else
FM10K_WRITE_REG(hw, FM10K_VFITR(Q2V(pdev, queue_id)),
FM10K_ITR_AUTOMASK | FM10K_ITR_MASK_CLEAR);
- rte_intr_ack(&pdev->intr_handle);
+ rte_intr_ack(pdev->intr_handle);
return 0;
}
@@ -2392,7 +2392,7 @@ fm10k_dev_rxq_interrupt_setup(struct rte_eth_dev *dev)
{
struct fm10k_hw *hw = FM10K_DEV_PRIVATE_TO_HW(dev->data->dev_private);
struct rte_pci_device *pdev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pdev->intr_handle;
+ struct rte_intr_handle *intr_handle = pdev->intr_handle;
uint32_t intr_vector, vec;
uint16_t queue_id;
int result = 0;
@@ -2420,15 +2420,17 @@ fm10k_dev_rxq_interrupt_setup(struct rte_eth_dev *dev)
}
if (rte_intr_dp_is_en(intr_handle) && !result) {
- intr_handle->intr_vec = rte_zmalloc("intr_vec",
- dev->data->nb_rx_queues * sizeof(int), 0);
- if (intr_handle->intr_vec) {
+ if (!rte_intr_vec_list_alloc(intr_handle, "intr_vec",
+ dev->data->nb_rx_queues)) {
for (queue_id = 0, vec = FM10K_RX_VEC_START;
queue_id < dev->data->nb_rx_queues;
queue_id++) {
- intr_handle->intr_vec[queue_id] = vec;
- if (vec < intr_handle->nb_efd - 1
- + FM10K_RX_VEC_START)
+ rte_intr_vec_list_index_set(intr_handle,
+ queue_id, vec);
+ int nb_efd =
+ rte_intr_nb_efd_get(intr_handle);
+ if (vec < (uint32_t)nb_efd - 1 +
+ FM10K_RX_VEC_START)
vec++;
}
} else {
@@ -2787,7 +2789,7 @@ fm10k_dev_close(struct rte_eth_dev *dev)
{
struct fm10k_hw *hw = FM10K_DEV_PRIVATE_TO_HW(dev->data->dev_private);
struct rte_pci_device *pdev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pdev->intr_handle;
+ struct rte_intr_handle *intr_handle = pdev->intr_handle;
int ret;
PMD_INIT_FUNC_TRACE();
@@ -3053,7 +3055,7 @@ eth_fm10k_dev_init(struct rte_eth_dev *dev)
{
struct fm10k_hw *hw = FM10K_DEV_PRIVATE_TO_HW(dev->data->dev_private);
struct rte_pci_device *pdev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pdev->intr_handle;
+ struct rte_intr_handle *intr_handle = pdev->intr_handle;
int diag, i;
struct fm10k_macvlan_filter_info *macvlan;
diff --git a/drivers/net/hinic/hinic_pmd_ethdev.c b/drivers/net/hinic/hinic_pmd_ethdev.c
index 4cd5a85d5f..9cabd3e0c1 100644
--- a/drivers/net/hinic/hinic_pmd_ethdev.c
+++ b/drivers/net/hinic/hinic_pmd_ethdev.c
@@ -1228,13 +1228,13 @@ static void hinic_disable_interrupt(struct rte_eth_dev *dev)
hinic_set_msix_state(nic_dev->hwdev, 0, HINIC_MSIX_DISABLE);
/* disable rte interrupt */
- ret = rte_intr_disable(&pci_dev->intr_handle);
+ ret = rte_intr_disable(pci_dev->intr_handle);
if (ret)
PMD_DRV_LOG(ERR, "Disable intr failed: %d", ret);
do {
ret =
- rte_intr_callback_unregister(&pci_dev->intr_handle,
+ rte_intr_callback_unregister(pci_dev->intr_handle,
hinic_dev_interrupt_handler, dev);
if (ret >= 0) {
break;
@@ -3118,7 +3118,7 @@ static int hinic_func_init(struct rte_eth_dev *eth_dev)
}
/* register callback func to eal lib */
- rc = rte_intr_callback_register(&pci_dev->intr_handle,
+ rc = rte_intr_callback_register(pci_dev->intr_handle,
hinic_dev_interrupt_handler,
(void *)eth_dev);
if (rc) {
@@ -3128,7 +3128,7 @@ static int hinic_func_init(struct rte_eth_dev *eth_dev)
}
/* enable uio/vfio intr/eventfd mapping */
- rc = rte_intr_enable(&pci_dev->intr_handle);
+ rc = rte_intr_enable(pci_dev->intr_handle);
if (rc) {
PMD_DRV_LOG(ERR, "Enable rte interrupt failed, dev_name: %s",
eth_dev->data->name);
@@ -3158,7 +3158,7 @@ static int hinic_func_init(struct rte_eth_dev *eth_dev)
return 0;
enable_intr_fail:
- (void)rte_intr_callback_unregister(&pci_dev->intr_handle,
+ (void)rte_intr_callback_unregister(pci_dev->intr_handle,
hinic_dev_interrupt_handler,
(void *)eth_dev);
diff --git a/drivers/net/hns3/hns3_ethdev.c b/drivers/net/hns3/hns3_ethdev.c
index 9881659ceb..1437a07372 100644
--- a/drivers/net/hns3/hns3_ethdev.c
+++ b/drivers/net/hns3/hns3_ethdev.c
@@ -5224,7 +5224,7 @@ hns3_init_pf(struct rte_eth_dev *eth_dev)
hns3_config_all_msix_error(hw, true);
- ret = rte_intr_callback_register(&pci_dev->intr_handle,
+ ret = rte_intr_callback_register(pci_dev->intr_handle,
hns3_interrupt_handler,
eth_dev);
if (ret) {
@@ -5237,7 +5237,7 @@ hns3_init_pf(struct rte_eth_dev *eth_dev)
goto err_get_config;
/* Enable interrupt */
- rte_intr_enable(&pci_dev->intr_handle);
+ rte_intr_enable(pci_dev->intr_handle);
hns3_pf_enable_irq0(hw);
/* Get configuration */
@@ -5296,8 +5296,8 @@ hns3_init_pf(struct rte_eth_dev *eth_dev)
hns3_tqp_stats_uninit(hw);
err_get_config:
hns3_pf_disable_irq0(hw);
- rte_intr_disable(&pci_dev->intr_handle);
- hns3_intr_unregister(&pci_dev->intr_handle, hns3_interrupt_handler,
+ rte_intr_disable(pci_dev->intr_handle);
+ hns3_intr_unregister(pci_dev->intr_handle, hns3_interrupt_handler,
eth_dev);
err_intr_callback_register:
err_cmd_init:
@@ -5330,8 +5330,8 @@ hns3_uninit_pf(struct rte_eth_dev *eth_dev)
hns3_tqp_stats_uninit(hw);
hns3_config_mac_tnl_int(hw, false);
hns3_pf_disable_irq0(hw);
- rte_intr_disable(&pci_dev->intr_handle);
- hns3_intr_unregister(&pci_dev->intr_handle, hns3_interrupt_handler,
+ rte_intr_disable(pci_dev->intr_handle);
+ hns3_intr_unregister(pci_dev->intr_handle, hns3_interrupt_handler,
eth_dev);
hns3_config_all_msix_error(hw, false);
hns3_cmd_uninit(hw);
@@ -5665,7 +5665,7 @@ static int
hns3_map_rx_interrupt(struct rte_eth_dev *dev)
{
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
struct hns3_hw *hw = HNS3_DEV_PRIVATE_TO_HW(dev->data->dev_private);
uint16_t base = RTE_INTR_VEC_ZERO_OFFSET;
uint16_t vec = RTE_INTR_VEC_ZERO_OFFSET;
@@ -5688,16 +5688,13 @@ hns3_map_rx_interrupt(struct rte_eth_dev *dev)
if (rte_intr_efd_enable(intr_handle, intr_vector))
return -EINVAL;
- if (intr_handle->intr_vec == NULL) {
- intr_handle->intr_vec =
- rte_zmalloc("intr_vec",
- hw->used_rx_queues * sizeof(int), 0);
- if (intr_handle->intr_vec == NULL) {
- hns3_err(hw, "failed to allocate %u rx_queues intr_vec",
- hw->used_rx_queues);
- ret = -ENOMEM;
- goto alloc_intr_vec_error;
- }
+ /* Allocate vector list */
+ if (rte_intr_vec_list_alloc(intr_handle, "intr_vec",
+ hw->used_rx_queues)) {
+ hns3_err(hw, "failed to allocate %u rx_queues intr_vec",
+ hw->used_rx_queues);
+ ret = -ENOMEM;
+ goto alloc_intr_vec_error;
}
if (rte_intr_allow_others(intr_handle)) {
@@ -5710,20 +5707,21 @@ hns3_map_rx_interrupt(struct rte_eth_dev *dev)
HNS3_RING_TYPE_RX, q_id);
if (ret)
goto bind_vector_error;
- intr_handle->intr_vec[q_id] = vec;
+
+ if (rte_intr_vec_list_index_set(intr_handle, q_id, vec))
+ goto bind_vector_error;
/*
* If there are not enough efds (e.g. not enough interrupt),
* remaining queues will be bond to the last interrupt.
*/
- if (vec < base + intr_handle->nb_efd - 1)
+ if (vec < base + rte_intr_nb_efd_get(intr_handle) - 1)
vec++;
}
rte_intr_enable(intr_handle);
return 0;
bind_vector_error:
- rte_free(intr_handle->intr_vec);
- intr_handle->intr_vec = NULL;
+ rte_intr_vec_list_free(intr_handle);
alloc_intr_vec_error:
rte_intr_efd_disable(intr_handle);
return ret;
@@ -5734,7 +5732,7 @@ hns3_restore_rx_interrupt(struct hns3_hw *hw)
{
struct rte_eth_dev *dev = &rte_eth_devices[hw->data->port_id];
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
uint16_t q_id;
int ret;
@@ -5744,8 +5742,9 @@ hns3_restore_rx_interrupt(struct hns3_hw *hw)
if (rte_intr_dp_is_en(intr_handle)) {
for (q_id = 0; q_id < hw->used_rx_queues; q_id++) {
ret = hns3_bind_ring_with_vector(hw,
- intr_handle->intr_vec[q_id], true,
- HNS3_RING_TYPE_RX, q_id);
+ rte_intr_vec_list_index_get(intr_handle,
+ q_id),
+ true, HNS3_RING_TYPE_RX, q_id);
if (ret)
return ret;
}
@@ -5888,7 +5887,7 @@ static void
hns3_unmap_rx_interrupt(struct rte_eth_dev *dev)
{
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
struct hns3_adapter *hns = dev->data->dev_private;
struct hns3_hw *hw = &hns->hw;
uint8_t base = RTE_INTR_VEC_ZERO_OFFSET;
@@ -5908,16 +5907,14 @@ hns3_unmap_rx_interrupt(struct rte_eth_dev *dev)
(void)hns3_bind_ring_with_vector(hw, vec, false,
HNS3_RING_TYPE_RX,
q_id);
- if (vec < base + intr_handle->nb_efd - 1)
+ if (vec < base + rte_intr_nb_efd_get(intr_handle)
+ - 1)
vec++;
}
}
/* Clean datapath event and queue/vec mapping */
rte_intr_efd_disable(intr_handle);
- if (intr_handle->intr_vec) {
- rte_free(intr_handle->intr_vec);
- intr_handle->intr_vec = NULL;
- }
+ rte_intr_vec_list_free(intr_handle);
}
static int
diff --git a/drivers/net/hns3/hns3_ethdev_vf.c b/drivers/net/hns3/hns3_ethdev_vf.c
index c0c1f1c4c1..873924927c 100644
--- a/drivers/net/hns3/hns3_ethdev_vf.c
+++ b/drivers/net/hns3/hns3_ethdev_vf.c
@@ -1956,7 +1956,7 @@ hns3vf_init_vf(struct rte_eth_dev *eth_dev)
hns3vf_clear_event_cause(hw, 0);
- ret = rte_intr_callback_register(&pci_dev->intr_handle,
+ ret = rte_intr_callback_register(pci_dev->intr_handle,
hns3vf_interrupt_handler, eth_dev);
if (ret) {
PMD_INIT_LOG(ERR, "Failed to register intr: %d", ret);
@@ -1964,7 +1964,7 @@ hns3vf_init_vf(struct rte_eth_dev *eth_dev)
}
/* Enable interrupt */
- rte_intr_enable(&pci_dev->intr_handle);
+ rte_intr_enable(pci_dev->intr_handle);
hns3vf_enable_irq0(hw);
/* Get configuration from PF */
@@ -2016,8 +2016,8 @@ hns3vf_init_vf(struct rte_eth_dev *eth_dev)
err_get_config:
hns3vf_disable_irq0(hw);
- rte_intr_disable(&pci_dev->intr_handle);
- hns3_intr_unregister(&pci_dev->intr_handle, hns3vf_interrupt_handler,
+ rte_intr_disable(pci_dev->intr_handle);
+ hns3_intr_unregister(pci_dev->intr_handle, hns3vf_interrupt_handler,
eth_dev);
err_intr_callback_register:
err_cmd_init:
@@ -2045,8 +2045,8 @@ hns3vf_uninit_vf(struct rte_eth_dev *eth_dev)
hns3_flow_uninit(eth_dev);
hns3_tqp_stats_uninit(hw);
hns3vf_disable_irq0(hw);
- rte_intr_disable(&pci_dev->intr_handle);
- hns3_intr_unregister(&pci_dev->intr_handle, hns3vf_interrupt_handler,
+ rte_intr_disable(pci_dev->intr_handle);
+ hns3_intr_unregister(pci_dev->intr_handle, hns3vf_interrupt_handler,
eth_dev);
hns3_cmd_uninit(hw);
hns3_cmd_destroy_queue(hw);
@@ -2089,7 +2089,7 @@ hns3vf_unmap_rx_interrupt(struct rte_eth_dev *dev)
{
struct hns3_hw *hw = HNS3_DEV_PRIVATE_TO_HW(dev->data->dev_private);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
uint8_t base = RTE_INTR_VEC_ZERO_OFFSET;
uint8_t vec = RTE_INTR_VEC_ZERO_OFFSET;
uint16_t q_id;
@@ -2107,16 +2107,16 @@ hns3vf_unmap_rx_interrupt(struct rte_eth_dev *dev)
(void)hns3vf_bind_ring_with_vector(hw, vec, false,
HNS3_RING_TYPE_RX,
q_id);
- if (vec < base + intr_handle->nb_efd - 1)
+ if (vec < base + rte_intr_nb_efd_get(intr_handle)
+ - 1)
vec++;
}
}
/* Clean datapath event and queue/vec mapping */
rte_intr_efd_disable(intr_handle);
- if (intr_handle->intr_vec) {
- rte_free(intr_handle->intr_vec);
- intr_handle->intr_vec = NULL;
- }
+
+ /* Cleanup vector list */
+ rte_intr_vec_list_free(intr_handle);
}
static int
@@ -2272,7 +2272,7 @@ static int
hns3vf_map_rx_interrupt(struct rte_eth_dev *dev)
{
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
struct hns3_hw *hw = HNS3_DEV_PRIVATE_TO_HW(dev->data->dev_private);
uint8_t base = RTE_INTR_VEC_ZERO_OFFSET;
uint8_t vec = RTE_INTR_VEC_ZERO_OFFSET;
@@ -2295,16 +2295,13 @@ hns3vf_map_rx_interrupt(struct rte_eth_dev *dev)
if (rte_intr_efd_enable(intr_handle, intr_vector))
return -EINVAL;
- if (intr_handle->intr_vec == NULL) {
- intr_handle->intr_vec =
- rte_zmalloc("intr_vec",
- hw->used_rx_queues * sizeof(int), 0);
- if (intr_handle->intr_vec == NULL) {
- hns3_err(hw, "Failed to allocate %u rx_queues"
- " intr_vec", hw->used_rx_queues);
- ret = -ENOMEM;
- goto vf_alloc_intr_vec_error;
- }
+ /* Allocate vector list */
+ if (rte_intr_vec_list_alloc(intr_handle, "intr_vec",
+ hw->used_rx_queues)) {
+ hns3_err(hw, "Failed to allocate %u rx_queues"
+ " intr_vec", hw->used_rx_queues);
+ ret = -ENOMEM;
+ goto vf_alloc_intr_vec_error;
}
if (rte_intr_allow_others(intr_handle)) {
@@ -2317,20 +2314,22 @@ hns3vf_map_rx_interrupt(struct rte_eth_dev *dev)
HNS3_RING_TYPE_RX, q_id);
if (ret)
goto vf_bind_vector_error;
- intr_handle->intr_vec[q_id] = vec;
+
+ if (rte_intr_vec_list_index_set(intr_handle, q_id, vec))
+ goto vf_bind_vector_error;
+
/*
* If there are not enough efds (e.g. not enough interrupt),
* remaining queues will be bond to the last interrupt.
*/
- if (vec < base + intr_handle->nb_efd - 1)
+ if (vec < base + rte_intr_nb_efd_get(intr_handle) - 1)
vec++;
}
rte_intr_enable(intr_handle);
return 0;
vf_bind_vector_error:
- rte_free(intr_handle->intr_vec);
- intr_handle->intr_vec = NULL;
+ rte_intr_vec_list_free(intr_handle);
vf_alloc_intr_vec_error:
rte_intr_efd_disable(intr_handle);
return ret;
@@ -2341,7 +2340,7 @@ hns3vf_restore_rx_interrupt(struct hns3_hw *hw)
{
struct rte_eth_dev *dev = &rte_eth_devices[hw->data->port_id];
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
uint16_t q_id;
int ret;
@@ -2351,8 +2350,9 @@ hns3vf_restore_rx_interrupt(struct hns3_hw *hw)
if (rte_intr_dp_is_en(intr_handle)) {
for (q_id = 0; q_id < hw->used_rx_queues; q_id++) {
ret = hns3vf_bind_ring_with_vector(hw,
- intr_handle->intr_vec[q_id], true,
- HNS3_RING_TYPE_RX, q_id);
+ rte_intr_vec_list_index_get(intr_handle,
+ q_id),
+ true, HNS3_RING_TYPE_RX, q_id);
if (ret)
return ret;
}
@@ -2816,7 +2816,7 @@ hns3vf_reinit_dev(struct hns3_adapter *hns)
int ret;
if (hw->reset.level == HNS3_VF_FULL_RESET) {
- rte_intr_disable(&pci_dev->intr_handle);
+ rte_intr_disable(pci_dev->intr_handle);
ret = hns3vf_set_bus_master(pci_dev, true);
if (ret < 0) {
hns3_err(hw, "failed to set pci bus, ret = %d", ret);
@@ -2842,7 +2842,7 @@ hns3vf_reinit_dev(struct hns3_adapter *hns)
hns3_err(hw, "Failed to enable msix");
}
- rte_intr_enable(&pci_dev->intr_handle);
+ rte_intr_enable(pci_dev->intr_handle);
}
ret = hns3_reset_all_tqps(hns);
diff --git a/drivers/net/hns3/hns3_rxtx.c b/drivers/net/hns3/hns3_rxtx.c
index b633aabb14..ceb98025f8 100644
--- a/drivers/net/hns3/hns3_rxtx.c
+++ b/drivers/net/hns3/hns3_rxtx.c
@@ -1050,7 +1050,7 @@ int
hns3_dev_rx_queue_intr_enable(struct rte_eth_dev *dev, uint16_t queue_id)
{
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
struct hns3_hw *hw = HNS3_DEV_PRIVATE_TO_HW(dev->data->dev_private);
if (dev->data->dev_conf.intr_conf.rxq == 0)
diff --git a/drivers/net/i40e/i40e_ethdev.c b/drivers/net/i40e/i40e_ethdev.c
index 293df887bf..62e374d19e 100644
--- a/drivers/net/i40e/i40e_ethdev.c
+++ b/drivers/net/i40e/i40e_ethdev.c
@@ -1440,7 +1440,7 @@ eth_i40e_dev_init(struct rte_eth_dev *dev, void *init_params __rte_unused)
}
i40e_set_default_ptype_table(dev);
pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- intr_handle = &pci_dev->intr_handle;
+ intr_handle = pci_dev->intr_handle;
rte_eth_copy_pci_info(dev, pci_dev);
@@ -1972,7 +1972,7 @@ i40e_vsi_queues_unbind_intr(struct i40e_vsi *vsi)
{
struct rte_eth_dev *dev = I40E_VSI_TO_ETH_DEV(vsi);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
struct i40e_hw *hw = I40E_VSI_TO_HW(vsi);
uint16_t msix_vect = vsi->msix_intr;
uint16_t i;
@@ -2088,10 +2088,11 @@ i40e_vsi_queues_bind_intr(struct i40e_vsi *vsi, uint16_t itr_idx)
{
struct rte_eth_dev *dev = I40E_VSI_TO_ETH_DEV(vsi);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
struct i40e_hw *hw = I40E_VSI_TO_HW(vsi);
uint16_t msix_vect = vsi->msix_intr;
- uint16_t nb_msix = RTE_MIN(vsi->nb_msix, intr_handle->nb_efd);
+ uint16_t nb_msix = RTE_MIN(vsi->nb_msix,
+ rte_intr_nb_efd_get(intr_handle));
uint16_t queue_idx = 0;
int record = 0;
int i;
@@ -2141,8 +2142,8 @@ i40e_vsi_queues_bind_intr(struct i40e_vsi *vsi, uint16_t itr_idx)
vsi->nb_used_qps - i,
itr_idx);
for (; !!record && i < vsi->nb_used_qps; i++)
- intr_handle->intr_vec[queue_idx + i] =
- msix_vect;
+ rte_intr_vec_list_index_set(intr_handle,
+ queue_idx + i, msix_vect);
break;
}
/* 1:1 queue/msix_vect mapping */
@@ -2150,7 +2151,9 @@ i40e_vsi_queues_bind_intr(struct i40e_vsi *vsi, uint16_t itr_idx)
vsi->base_queue + i, 1,
itr_idx);
if (!!record)
- intr_handle->intr_vec[queue_idx + i] = msix_vect;
+ if (rte_intr_vec_list_index_set(intr_handle,
+ queue_idx + i, msix_vect))
+ return -rte_errno;
msix_vect++;
nb_msix--;
@@ -2164,7 +2167,7 @@ i40e_vsi_enable_queues_intr(struct i40e_vsi *vsi)
{
struct rte_eth_dev *dev = I40E_VSI_TO_ETH_DEV(vsi);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
struct i40e_hw *hw = I40E_VSI_TO_HW(vsi);
struct i40e_pf *pf = I40E_VSI_TO_PF(vsi);
uint16_t msix_intr, i;
@@ -2191,7 +2194,7 @@ i40e_vsi_disable_queues_intr(struct i40e_vsi *vsi)
{
struct rte_eth_dev *dev = I40E_VSI_TO_ETH_DEV(vsi);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
struct i40e_hw *hw = I40E_VSI_TO_HW(vsi);
struct i40e_pf *pf = I40E_VSI_TO_PF(vsi);
uint16_t msix_intr, i;
@@ -2357,7 +2360,7 @@ i40e_dev_start(struct rte_eth_dev *dev)
struct i40e_vsi *main_vsi = pf->main_vsi;
int ret, i;
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
uint32_t intr_vector = 0;
struct i40e_vsi *vsi;
uint16_t nb_rxq, nb_txq;
@@ -2375,12 +2378,9 @@ i40e_dev_start(struct rte_eth_dev *dev)
return ret;
}
- if (rte_intr_dp_is_en(intr_handle) && !intr_handle->intr_vec) {
- intr_handle->intr_vec =
- rte_zmalloc("intr_vec",
- dev->data->nb_rx_queues * sizeof(int),
- 0);
- if (!intr_handle->intr_vec) {
+ if (rte_intr_dp_is_en(intr_handle)) {
+ if (rte_intr_vec_list_alloc(intr_handle, "intr_vec",
+ dev->data->nb_rx_queues)) {
PMD_INIT_LOG(ERR,
"Failed to allocate %d rx_queues intr_vec",
dev->data->nb_rx_queues);
@@ -2521,7 +2521,7 @@ i40e_dev_stop(struct rte_eth_dev *dev)
struct i40e_hw *hw = I40E_DEV_PRIVATE_TO_HW(dev->data->dev_private);
struct i40e_vsi *main_vsi = pf->main_vsi;
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
int i;
if (hw->adapter_stopped == 1)
@@ -2562,10 +2562,9 @@ i40e_dev_stop(struct rte_eth_dev *dev)
/* Clean datapath event and queue/vec mapping */
rte_intr_efd_disable(intr_handle);
- if (intr_handle->intr_vec) {
- rte_free(intr_handle->intr_vec);
- intr_handle->intr_vec = NULL;
- }
+
+ /* Cleanup vector list */
+ rte_intr_vec_list_free(intr_handle);
/* reset hierarchy commit */
pf->tm_conf.committed = false;
@@ -2584,7 +2583,7 @@ i40e_dev_close(struct rte_eth_dev *dev)
struct i40e_pf *pf = I40E_DEV_PRIVATE_TO_PF(dev->data->dev_private);
struct i40e_hw *hw = I40E_DEV_PRIVATE_TO_HW(dev->data->dev_private);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
struct i40e_filter_control_settings settings;
struct rte_flow *p_flow;
uint32_t reg;
@@ -11068,11 +11067,11 @@ static int
i40e_dev_rx_queue_intr_enable(struct rte_eth_dev *dev, uint16_t queue_id)
{
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
struct i40e_hw *hw = I40E_DEV_PRIVATE_TO_HW(dev->data->dev_private);
uint16_t msix_intr;
- msix_intr = intr_handle->intr_vec[queue_id];
+ msix_intr = rte_intr_vec_list_index_get(intr_handle, queue_id);
if (msix_intr == I40E_MISC_VEC_ID)
I40E_WRITE_REG(hw, I40E_PFINT_DYN_CTL0,
I40E_PFINT_DYN_CTL0_INTENA_MASK |
@@ -11087,7 +11086,7 @@ i40e_dev_rx_queue_intr_enable(struct rte_eth_dev *dev, uint16_t queue_id)
I40E_PFINT_DYN_CTLN_ITR_INDX_MASK);
I40E_WRITE_FLUSH(hw);
- rte_intr_ack(&pci_dev->intr_handle);
+ rte_intr_ack(pci_dev->intr_handle);
return 0;
}
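
On the read side, as in the i40e Rx interrupt enable path above, the only
change is that intr_vec[queue_id] becomes a getter call; acking the interrupt
is untouched apart from taking the pointer. Sketch, with the register
programming elided:

    #include <stdint.h>
    #include <rte_interrupts.h>

    static int
    example_rxq_intr_enable(struct rte_intr_handle *intr_handle,
                            uint16_t queue_id)
    {
            uint16_t msix_intr;

            /* Vector bound to this queue, previously intr_vec[queue_id] */
            msix_intr = rte_intr_vec_list_index_get(intr_handle, queue_id);
            (void)msix_intr;  /* used to program the enable register */

            rte_intr_ack(intr_handle);
            return 0;
    }
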
@@ -11096,11 +11095,11 @@ static int
i40e_dev_rx_queue_intr_disable(struct rte_eth_dev *dev, uint16_t queue_id)
{
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
struct i40e_hw *hw = I40E_DEV_PRIVATE_TO_HW(dev->data->dev_private);
uint16_t msix_intr;
- msix_intr = intr_handle->intr_vec[queue_id];
+ msix_intr = rte_intr_vec_list_index_get(intr_handle, queue_id);
if (msix_intr == I40E_MISC_VEC_ID)
I40E_WRITE_REG(hw, I40E_PFINT_DYN_CTL0,
I40E_PFINT_DYN_CTL0_ITR_INDX_MASK);
diff --git a/drivers/net/iavf/iavf_ethdev.c b/drivers/net/iavf/iavf_ethdev.c
index b2b413c247..f892306f18 100644
--- a/drivers/net/iavf/iavf_ethdev.c
+++ b/drivers/net/iavf/iavf_ethdev.c
@@ -646,17 +646,16 @@ static int iavf_config_rx_queues_irqs(struct rte_eth_dev *dev,
return -1;
}
- if (rte_intr_dp_is_en(intr_handle) && !intr_handle->intr_vec) {
- intr_handle->intr_vec =
- rte_zmalloc("intr_vec",
- dev->data->nb_rx_queues * sizeof(int), 0);
- if (!intr_handle->intr_vec) {
+ if (rte_intr_dp_is_en(intr_handle)) {
+ if (rte_intr_vec_list_alloc(intr_handle, "intr_vec",
+ dev->data->nb_rx_queues)) {
PMD_DRV_LOG(ERR, "Failed to allocate %d rx intr_vec",
dev->data->nb_rx_queues);
return -1;
}
}
+
qv_map = rte_zmalloc("qv_map",
dev->data->nb_rx_queues * sizeof(struct iavf_qv_map), 0);
if (!qv_map) {
@@ -716,7 +715,8 @@ static int iavf_config_rx_queues_irqs(struct rte_eth_dev *dev,
for (i = 0; i < dev->data->nb_rx_queues; i++) {
qv_map[i].queue_id = i;
qv_map[i].vector_id = vf->msix_base;
- intr_handle->intr_vec[i] = IAVF_MISC_VEC_ID;
+ rte_intr_vec_list_index_set(intr_handle,
+ i, IAVF_MISC_VEC_ID);
}
vf->qv_map = qv_map;
PMD_DRV_LOG(DEBUG,
@@ -726,14 +726,16 @@ static int iavf_config_rx_queues_irqs(struct rte_eth_dev *dev,
/* If Rx interrupt is reuquired, and we can use
* multi interrupts, then the vec is from 1
*/
- vf->nb_msix = RTE_MIN(intr_handle->nb_efd,
- (uint16_t)(vf->vf_res->max_vectors - 1));
+ vf->nb_msix =
+ RTE_MIN(rte_intr_nb_efd_get(intr_handle),
+ (uint16_t)(vf->vf_res->max_vectors - 1));
vf->msix_base = IAVF_RX_VEC_START;
vec = IAVF_RX_VEC_START;
for (i = 0; i < dev->data->nb_rx_queues; i++) {
qv_map[i].queue_id = i;
qv_map[i].vector_id = vec;
- intr_handle->intr_vec[i] = vec++;
+ rte_intr_vec_list_index_set(intr_handle,
+ i, vec++);
if (vec >= vf->nb_msix + IAVF_RX_VEC_START)
vec = IAVF_RX_VEC_START;
}
@@ -775,8 +777,7 @@ static int iavf_config_rx_queues_irqs(struct rte_eth_dev *dev,
vf->qv_map = NULL;
qv_map_alloc_err:
- rte_free(intr_handle->intr_vec);
- intr_handle->intr_vec = NULL;
+ rte_intr_vec_list_free(intr_handle);
return -1;
}
@@ -912,10 +913,7 @@ iavf_dev_stop(struct rte_eth_dev *dev)
/* Disable the interrupt for Rx */
rte_intr_efd_disable(intr_handle);
/* Rx interrupt vector mapping free */
- if (intr_handle->intr_vec) {
- rte_free(intr_handle->intr_vec);
- intr_handle->intr_vec = NULL;
- }
+ rte_intr_vec_list_free(intr_handle);
/* remove all mac addrs */
iavf_add_del_all_mac_addr(adapter, false);
@@ -1639,7 +1637,8 @@ iavf_dev_rx_queue_intr_enable(struct rte_eth_dev *dev, uint16_t queue_id)
struct iavf_info *vf = IAVF_DEV_PRIVATE_TO_VF(adapter);
uint16_t msix_intr;
- msix_intr = pci_dev->intr_handle.intr_vec[queue_id];
+ msix_intr = rte_intr_vec_list_index_get(pci_dev->intr_handle,
+ queue_id);
if (msix_intr == IAVF_MISC_VEC_ID) {
PMD_DRV_LOG(INFO, "MISC is also enabled for control");
IAVF_WRITE_REG(hw, IAVF_VFINT_DYN_CTL01,
@@ -1658,7 +1657,7 @@ iavf_dev_rx_queue_intr_enable(struct rte_eth_dev *dev, uint16_t queue_id)
IAVF_WRITE_FLUSH(hw);
if (vf->vf_res->vf_cap_flags & VIRTCHNL_VF_OFFLOAD_WB_ON_ITR)
- rte_intr_ack(&pci_dev->intr_handle);
+ rte_intr_ack(pci_dev->intr_handle);
return 0;
}
@@ -1670,7 +1669,8 @@ iavf_dev_rx_queue_intr_disable(struct rte_eth_dev *dev, uint16_t queue_id)
struct iavf_hw *hw = IAVF_DEV_PRIVATE_TO_HW(dev->data->dev_private);
uint16_t msix_intr;
- msix_intr = pci_dev->intr_handle.intr_vec[queue_id];
+ msix_intr = rte_intr_vec_list_index_get(pci_dev->intr_handle,
+ queue_id);
if (msix_intr == IAVF_MISC_VEC_ID) {
PMD_DRV_LOG(ERR, "MISC is used for control, cannot disable it");
return -EIO;
@@ -2355,12 +2355,12 @@ iavf_dev_init(struct rte_eth_dev *eth_dev)
if (vf->vf_res->vf_cap_flags & VIRTCHNL_VF_OFFLOAD_WB_ON_ITR) {
/* register callback func to eal lib */
- rte_intr_callback_register(&pci_dev->intr_handle,
+ rte_intr_callback_register(pci_dev->intr_handle,
iavf_dev_interrupt_handler,
(void *)eth_dev);
/* enable uio intr after callback register */
- rte_intr_enable(&pci_dev->intr_handle);
+ rte_intr_enable(pci_dev->intr_handle);
} else {
rte_eal_alarm_set(IAVF_ALARM_INTERVAL,
iavf_dev_alarm_handler, eth_dev);
@@ -2394,7 +2394,7 @@ iavf_dev_close(struct rte_eth_dev *dev)
{
struct iavf_hw *hw = IAVF_DEV_PRIVATE_TO_HW(dev->data->dev_private);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
struct iavf_adapter *adapter =
IAVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
struct iavf_info *vf = IAVF_DEV_PRIVATE_TO_VF(dev->data->dev_private);
diff --git a/drivers/net/iavf/iavf_vchnl.c b/drivers/net/iavf/iavf_vchnl.c
index 0f4dd21d44..bb65dbf04f 100644
--- a/drivers/net/iavf/iavf_vchnl.c
+++ b/drivers/net/iavf/iavf_vchnl.c
@@ -1685,9 +1685,9 @@ iavf_request_queues(struct rte_eth_dev *dev, uint16_t num)
/* disable interrupt to avoid the admin queue message to be read
* before iavf_read_msg_from_pf.
*/
- rte_intr_disable(&pci_dev->intr_handle);
+ rte_intr_disable(pci_dev->intr_handle);
err = iavf_execute_vf_cmd(adapter, &args);
- rte_intr_enable(&pci_dev->intr_handle);
+ rte_intr_enable(pci_dev->intr_handle);
} else {
rte_eal_alarm_cancel(iavf_dev_alarm_handler, dev);
err = iavf_execute_vf_cmd(adapter, &args);
diff --git a/drivers/net/ice/ice_dcf.c b/drivers/net/ice/ice_dcf.c
index 7b7df5eebb..084f7a53db 100644
--- a/drivers/net/ice/ice_dcf.c
+++ b/drivers/net/ice/ice_dcf.c
@@ -539,7 +539,7 @@ ice_dcf_handle_vsi_update_event(struct ice_dcf_hw *hw)
rte_spinlock_lock(&hw->vc_cmd_send_lock);
- rte_intr_disable(&pci_dev->intr_handle);
+ rte_intr_disable(pci_dev->intr_handle);
ice_dcf_disable_irq0(hw);
for (;;) {
@@ -555,7 +555,7 @@ ice_dcf_handle_vsi_update_event(struct ice_dcf_hw *hw)
rte_delay_ms(ICE_DCF_ARQ_CHECK_TIME);
}
- rte_intr_enable(&pci_dev->intr_handle);
+ rte_intr_enable(pci_dev->intr_handle);
ice_dcf_enable_irq0(hw);
rte_spinlock_unlock(&hw->vc_cmd_send_lock);
@@ -694,9 +694,9 @@ ice_dcf_init_hw(struct rte_eth_dev *eth_dev, struct ice_dcf_hw *hw)
}
hw->eth_dev = eth_dev;
- rte_intr_callback_register(&pci_dev->intr_handle,
+ rte_intr_callback_register(pci_dev->intr_handle,
ice_dcf_dev_interrupt_handler, hw);
- rte_intr_enable(&pci_dev->intr_handle);
+ rte_intr_enable(pci_dev->intr_handle);
ice_dcf_enable_irq0(hw);
return 0;
@@ -718,7 +718,7 @@ void
ice_dcf_uninit_hw(struct rte_eth_dev *eth_dev, struct ice_dcf_hw *hw)
{
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
if (hw->vf_res->vf_cap_flags & VIRTCHNL_VF_OFFLOAD_QOS)
if (hw->tm_conf.committed) {
diff --git a/drivers/net/ice/ice_dcf_ethdev.c b/drivers/net/ice/ice_dcf_ethdev.c
index 7cb8066416..7c71a48010 100644
--- a/drivers/net/ice/ice_dcf_ethdev.c
+++ b/drivers/net/ice/ice_dcf_ethdev.c
@@ -144,11 +144,9 @@ ice_dcf_config_rx_queues_irqs(struct rte_eth_dev *dev,
return -1;
}
- if (rte_intr_dp_is_en(intr_handle) && !intr_handle->intr_vec) {
- intr_handle->intr_vec =
- rte_zmalloc("intr_vec",
- dev->data->nb_rx_queues * sizeof(int), 0);
- if (!intr_handle->intr_vec) {
+ if (rte_intr_dp_is_en(intr_handle)) {
+ if (rte_intr_vec_list_alloc(intr_handle, "intr_vec",
+ dev->data->nb_rx_queues)) {
PMD_DRV_LOG(ERR, "Failed to allocate %d rx intr_vec",
dev->data->nb_rx_queues);
return -1;
@@ -198,7 +196,8 @@ ice_dcf_config_rx_queues_irqs(struct rte_eth_dev *dev,
hw->msix_base = IAVF_MISC_VEC_ID;
for (i = 0; i < dev->data->nb_rx_queues; i++) {
hw->rxq_map[hw->msix_base] |= 1 << i;
- intr_handle->intr_vec[i] = IAVF_MISC_VEC_ID;
+ rte_intr_vec_list_index_set(intr_handle,
+ i, IAVF_MISC_VEC_ID);
}
PMD_DRV_LOG(DEBUG,
"vector %u are mapping to all Rx queues",
@@ -208,12 +207,13 @@ ice_dcf_config_rx_queues_irqs(struct rte_eth_dev *dev,
* multi interrupts, then the vec is from 1
*/
hw->nb_msix = RTE_MIN(hw->vf_res->max_vectors,
- intr_handle->nb_efd);
+ rte_intr_nb_efd_get(intr_handle));
hw->msix_base = IAVF_MISC_VEC_ID;
vec = IAVF_MISC_VEC_ID;
for (i = 0; i < dev->data->nb_rx_queues; i++) {
hw->rxq_map[vec] |= 1 << i;
- intr_handle->intr_vec[i] = vec++;
+ rte_intr_vec_list_index_set(intr_handle,
+ i, vec++);
if (vec >= hw->nb_msix)
vec = IAVF_RX_VEC_START;
}
@@ -623,10 +623,7 @@ ice_dcf_dev_stop(struct rte_eth_dev *dev)
ice_dcf_stop_queues(dev);
rte_intr_efd_disable(intr_handle);
- if (intr_handle->intr_vec) {
- rte_free(intr_handle->intr_vec);
- intr_handle->intr_vec = NULL;
- }
+ rte_intr_vec_list_free(intr_handle);
ice_dcf_add_del_all_mac_addr(&dcf_ad->real_hw, false);
dev->data->dev_link.link_status = RTE_ETH_LINK_DOWN;
diff --git a/drivers/net/ice/ice_ethdev.c b/drivers/net/ice/ice_ethdev.c
index 6a6637a15a..ef6ee1c386 100644
--- a/drivers/net/ice/ice_ethdev.c
+++ b/drivers/net/ice/ice_ethdev.c
@@ -2178,7 +2178,7 @@ ice_dev_init(struct rte_eth_dev *dev)
ice_set_default_ptype_table(dev);
pci_dev = RTE_DEV_TO_PCI(dev->device);
- intr_handle = &pci_dev->intr_handle;
+ intr_handle = pci_dev->intr_handle;
pf->adapter = ICE_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
pf->dev_data = dev->data;
@@ -2375,7 +2375,7 @@ ice_vsi_disable_queues_intr(struct ice_vsi *vsi)
{
struct rte_eth_dev *dev = &rte_eth_devices[vsi->adapter->pf.dev_data->port_id];
struct rte_pci_device *pci_dev = ICE_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
struct ice_hw *hw = ICE_VSI_TO_HW(vsi);
uint16_t msix_intr, i;
@@ -2405,7 +2405,7 @@ ice_dev_stop(struct rte_eth_dev *dev)
struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
struct ice_vsi *main_vsi = pf->main_vsi;
struct rte_pci_device *pci_dev = ICE_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
uint16_t i;
/* avoid stopping again */
@@ -2430,10 +2430,7 @@ ice_dev_stop(struct rte_eth_dev *dev)
/* Clean datapath event and queue/vec mapping */
rte_intr_efd_disable(intr_handle);
- if (intr_handle->intr_vec) {
- rte_free(intr_handle->intr_vec);
- intr_handle->intr_vec = NULL;
- }
+ rte_intr_vec_list_free(intr_handle);
pf->adapter_stopped = true;
dev->data->dev_started = 0;
@@ -2447,7 +2444,7 @@ ice_dev_close(struct rte_eth_dev *dev)
struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
struct ice_hw *hw = ICE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
struct ice_adapter *ad =
ICE_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
int ret;
@@ -3345,10 +3342,11 @@ ice_vsi_queues_bind_intr(struct ice_vsi *vsi)
{
struct rte_eth_dev *dev = &rte_eth_devices[vsi->adapter->pf.dev_data->port_id];
struct rte_pci_device *pci_dev = ICE_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
struct ice_hw *hw = ICE_VSI_TO_HW(vsi);
uint16_t msix_vect = vsi->msix_intr;
- uint16_t nb_msix = RTE_MIN(vsi->nb_msix, intr_handle->nb_efd);
+ uint16_t nb_msix = RTE_MIN(vsi->nb_msix,
+ rte_intr_nb_efd_get(intr_handle));
uint16_t queue_idx = 0;
int record = 0;
int i;
@@ -3376,8 +3374,9 @@ ice_vsi_queues_bind_intr(struct ice_vsi *vsi)
vsi->nb_used_qps - i);
for (; !!record && i < vsi->nb_used_qps; i++)
- intr_handle->intr_vec[queue_idx + i] =
- msix_vect;
+ rte_intr_vec_list_index_set(intr_handle,
+ queue_idx + i, msix_vect);
+
break;
}
@@ -3386,7 +3385,9 @@ ice_vsi_queues_bind_intr(struct ice_vsi *vsi)
vsi->base_queue + i, 1);
if (!!record)
- intr_handle->intr_vec[queue_idx + i] = msix_vect;
+ rte_intr_vec_list_index_set(intr_handle,
+ queue_idx + i,
+ msix_vect);
msix_vect++;
nb_msix--;
@@ -3398,7 +3399,7 @@ ice_vsi_enable_queues_intr(struct ice_vsi *vsi)
{
struct rte_eth_dev *dev = &rte_eth_devices[vsi->adapter->pf.dev_data->port_id];
struct rte_pci_device *pci_dev = ICE_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
struct ice_hw *hw = ICE_VSI_TO_HW(vsi);
uint16_t msix_intr, i;
@@ -3424,7 +3425,7 @@ ice_rxq_intr_setup(struct rte_eth_dev *dev)
{
struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
struct rte_pci_device *pci_dev = ICE_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
struct ice_vsi *vsi = pf->main_vsi;
uint32_t intr_vector = 0;
@@ -3444,11 +3445,9 @@ ice_rxq_intr_setup(struct rte_eth_dev *dev)
return -1;
}
- if (rte_intr_dp_is_en(intr_handle) && !intr_handle->intr_vec) {
- intr_handle->intr_vec =
- rte_zmalloc(NULL, dev->data->nb_rx_queues * sizeof(int),
- 0);
- if (!intr_handle->intr_vec) {
+ if (rte_intr_dp_is_en(intr_handle)) {
+ if (rte_intr_vec_list_alloc(intr_handle, NULL,
+ dev->data->nb_rx_queues)) {
PMD_DRV_LOG(ERR,
"Failed to allocate %d rx_queues intr_vec",
dev->data->nb_rx_queues);
@@ -4755,19 +4754,19 @@ static int ice_rx_queue_intr_enable(struct rte_eth_dev *dev,
uint16_t queue_id)
{
struct rte_pci_device *pci_dev = ICE_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
struct ice_hw *hw = ICE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
uint32_t val;
uint16_t msix_intr;
- msix_intr = intr_handle->intr_vec[queue_id];
+ msix_intr = rte_intr_vec_list_index_get(intr_handle, queue_id);
val = GLINT_DYN_CTL_INTENA_M | GLINT_DYN_CTL_CLEARPBA_M |
GLINT_DYN_CTL_ITR_INDX_M;
val &= ~GLINT_DYN_CTL_WB_ON_ITR_M;
ICE_WRITE_REG(hw, GLINT_DYN_CTL(msix_intr), val);
- rte_intr_ack(&pci_dev->intr_handle);
+ rte_intr_ack(pci_dev->intr_handle);
return 0;
}
@@ -4776,11 +4775,11 @@ static int ice_rx_queue_intr_disable(struct rte_eth_dev *dev,
uint16_t queue_id)
{
struct rte_pci_device *pci_dev = ICE_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
struct ice_hw *hw = ICE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
uint16_t msix_intr;
- msix_intr = intr_handle->intr_vec[queue_id];
+ msix_intr = rte_intr_vec_list_index_get(intr_handle, queue_id);
ICE_WRITE_REG(hw, GLINT_DYN_CTL(msix_intr), GLINT_DYN_CTL_WB_ON_ITR_M);
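Note: the PCI PMD conversions above all follow the same pattern: the handle becomes a
pointer taken from the PCI device, the intr_vec array is allocated/freed via the
vector-list helpers, and individual entries go through the index accessors. A minimal
sketch of the resulting start/stop flow is shown below (illustrative fragment only, the
xxx_ driver names are made up and headers/error paths are omitted):
	static int
	xxx_dev_start(struct rte_eth_dev *dev)
	{
		struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
		struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
		uint16_t i, vec = 1;
		if (rte_intr_dp_is_en(intr_handle)) {
			/* replaces the old rte_zmalloc() of intr_handle->intr_vec */
			if (rte_intr_vec_list_alloc(intr_handle, "intr_vec",
						    dev->data->nb_rx_queues))
				return -ENOMEM;
			for (i = 0; i < dev->data->nb_rx_queues; i++)
				/* replaces intr_handle->intr_vec[i] = vec */
				rte_intr_vec_list_index_set(intr_handle, i, vec++);
		}
		return rte_intr_enable(intr_handle);
	}
	static int
	xxx_dev_stop(struct rte_eth_dev *dev)
	{
		struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
		struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
		rte_intr_efd_disable(intr_handle);
		/* replaces the rte_free()/NULL handling of intr_handle->intr_vec */
		rte_intr_vec_list_free(intr_handle);
		return 0;
	}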
diff --git a/drivers/net/igc/igc_ethdev.c b/drivers/net/igc/igc_ethdev.c
index 7ce80a442b..8189ad412a 100644
--- a/drivers/net/igc/igc_ethdev.c
+++ b/drivers/net/igc/igc_ethdev.c
@@ -377,7 +377,7 @@ igc_intr_other_disable(struct rte_eth_dev *dev)
{
struct igc_hw *hw = IGC_DEV_PRIVATE_HW(dev);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
if (rte_intr_allow_others(intr_handle) &&
dev->data->dev_conf.intr_conf.lsc) {
@@ -397,7 +397,7 @@ igc_intr_other_enable(struct rte_eth_dev *dev)
struct igc_interrupt *intr = IGC_DEV_PRIVATE_INTR(dev);
struct igc_hw *hw = IGC_DEV_PRIVATE_HW(dev);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
if (rte_intr_allow_others(intr_handle) &&
dev->data->dev_conf.intr_conf.lsc) {
@@ -609,7 +609,7 @@ eth_igc_stop(struct rte_eth_dev *dev)
struct igc_adapter *adapter = IGC_DEV_PRIVATE(dev);
struct igc_hw *hw = IGC_DEV_PRIVATE_HW(dev);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
struct rte_eth_link link;
dev->data->dev_started = 0;
@@ -661,10 +661,7 @@ eth_igc_stop(struct rte_eth_dev *dev)
/* Clean datapath event and queue/vec mapping */
rte_intr_efd_disable(intr_handle);
- if (intr_handle->intr_vec != NULL) {
- rte_free(intr_handle->intr_vec);
- intr_handle->intr_vec = NULL;
- }
+ rte_intr_vec_list_free(intr_handle);
return 0;
}
@@ -724,7 +721,7 @@ igc_configure_msix_intr(struct rte_eth_dev *dev)
{
struct igc_hw *hw = IGC_DEV_PRIVATE_HW(dev);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
uint32_t intr_mask;
uint32_t vec = IGC_MISC_VEC_ID;
@@ -748,8 +745,8 @@ igc_configure_msix_intr(struct rte_eth_dev *dev)
IGC_WRITE_REG(hw, IGC_GPIE, IGC_GPIE_MSIX_MODE |
IGC_GPIE_PBA | IGC_GPIE_EIAME |
IGC_GPIE_NSICR);
- intr_mask = RTE_LEN2MASK(intr_handle->nb_efd, uint32_t) <<
- misc_shift;
+ intr_mask = RTE_LEN2MASK(rte_intr_nb_efd_get(intr_handle),
+ uint32_t) << misc_shift;
if (dev->data->dev_conf.intr_conf.lsc)
intr_mask |= (1u << IGC_MSIX_OTHER_INTR_VEC);
@@ -766,8 +763,8 @@ igc_configure_msix_intr(struct rte_eth_dev *dev)
for (i = 0; i < dev->data->nb_rx_queues; i++) {
igc_write_ivar(hw, i, 0, vec);
- intr_handle->intr_vec[i] = vec;
- if (vec < base + intr_handle->nb_efd - 1)
+ rte_intr_vec_list_index_set(intr_handle, i, vec);
+ if (vec < base + rte_intr_nb_efd_get(intr_handle) - 1)
vec++;
}
@@ -803,7 +800,7 @@ igc_rxq_interrupt_setup(struct rte_eth_dev *dev)
uint32_t mask;
struct igc_hw *hw = IGC_DEV_PRIVATE_HW(dev);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
int misc_shift = rte_intr_allow_others(intr_handle) ? 1 : 0;
/* won't configure msix register if no mapping is done
@@ -812,7 +809,8 @@ igc_rxq_interrupt_setup(struct rte_eth_dev *dev)
if (!rte_intr_dp_is_en(intr_handle))
return;
- mask = RTE_LEN2MASK(intr_handle->nb_efd, uint32_t) << misc_shift;
+ mask = RTE_LEN2MASK(rte_intr_nb_efd_get(intr_handle), uint32_t)
+ << misc_shift;
IGC_WRITE_REG(hw, IGC_EIMS, mask);
}
@@ -906,7 +904,7 @@ eth_igc_start(struct rte_eth_dev *dev)
struct igc_hw *hw = IGC_DEV_PRIVATE_HW(dev);
struct igc_adapter *adapter = IGC_DEV_PRIVATE(dev);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
uint32_t *speeds;
int ret;
@@ -944,10 +942,9 @@ eth_igc_start(struct rte_eth_dev *dev)
return -1;
}
- if (rte_intr_dp_is_en(intr_handle) && !intr_handle->intr_vec) {
- intr_handle->intr_vec = rte_zmalloc("intr_vec",
- dev->data->nb_rx_queues * sizeof(int), 0);
- if (intr_handle->intr_vec == NULL) {
+ if (rte_intr_dp_is_en(intr_handle)) {
+ if (rte_intr_vec_list_alloc(intr_handle, "intr_vec",
+ dev->data->nb_rx_queues)) {
PMD_DRV_LOG(ERR,
"Failed to allocate %d rx_queues intr_vec",
dev->data->nb_rx_queues);
@@ -1162,7 +1159,7 @@ static int
eth_igc_close(struct rte_eth_dev *dev)
{
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
struct igc_hw *hw = IGC_DEV_PRIVATE_HW(dev);
struct igc_adapter *adapter = IGC_DEV_PRIVATE(dev);
int retry = 0;
@@ -1331,11 +1328,11 @@ eth_igc_dev_init(struct rte_eth_dev *dev)
dev->data->port_id, pci_dev->id.vendor_id,
pci_dev->id.device_id);
- rte_intr_callback_register(&pci_dev->intr_handle,
+ rte_intr_callback_register(pci_dev->intr_handle,
eth_igc_interrupt_handler, (void *)dev);
/* enable uio/vfio intr/eventfd mapping */
- rte_intr_enable(&pci_dev->intr_handle);
+ rte_intr_enable(pci_dev->intr_handle);
/* enable support intr */
igc_intr_other_enable(dev);
@@ -2076,7 +2073,7 @@ eth_igc_rx_queue_intr_disable(struct rte_eth_dev *dev, uint16_t queue_id)
{
struct igc_hw *hw = IGC_DEV_PRIVATE_HW(dev);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
uint32_t vec = IGC_MISC_VEC_ID;
if (rte_intr_allow_others(intr_handle))
@@ -2095,7 +2092,7 @@ eth_igc_rx_queue_intr_enable(struct rte_eth_dev *dev, uint16_t queue_id)
{
struct igc_hw *hw = IGC_DEV_PRIVATE_HW(dev);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
uint32_t vec = IGC_MISC_VEC_ID;
if (rte_intr_allow_others(intr_handle))
diff --git a/drivers/net/ionic/ionic_ethdev.c b/drivers/net/ionic/ionic_ethdev.c
index c688c3735c..28280c5377 100644
--- a/drivers/net/ionic/ionic_ethdev.c
+++ b/drivers/net/ionic/ionic_ethdev.c
@@ -1060,7 +1060,7 @@ static int
ionic_configure_intr(struct ionic_adapter *adapter)
{
struct rte_pci_device *pci_dev = adapter->pci_dev;
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
int err;
IONIC_PRINT(DEBUG, "Configuring %u intrs", adapter->nintrs);
@@ -1074,15 +1074,10 @@ ionic_configure_intr(struct ionic_adapter *adapter)
IONIC_PRINT(DEBUG,
"Packet I/O interrupt on datapath is enabled");
- if (!intr_handle->intr_vec) {
- intr_handle->intr_vec = rte_zmalloc("intr_vec",
- adapter->nintrs * sizeof(int), 0);
-
- if (!intr_handle->intr_vec) {
- IONIC_PRINT(ERR, "Failed to allocate %u vectors",
- adapter->nintrs);
- return -ENOMEM;
- }
+ if (rte_intr_vec_list_alloc(intr_handle, "intr_vec", adapter->nintrs)) {
+ IONIC_PRINT(ERR, "Failed to allocate %u vectors",
+ adapter->nintrs);
+ return -ENOMEM;
}
err = rte_intr_callback_register(intr_handle,
@@ -1111,7 +1106,7 @@ static void
ionic_unconfigure_intr(struct ionic_adapter *adapter)
{
struct rte_pci_device *pci_dev = adapter->pci_dev;
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
rte_intr_disable(intr_handle);
diff --git a/drivers/net/ixgbe/ixgbe_ethdev.c b/drivers/net/ixgbe/ixgbe_ethdev.c
index a87c607106..1911cf2fab 100644
--- a/drivers/net/ixgbe/ixgbe_ethdev.c
+++ b/drivers/net/ixgbe/ixgbe_ethdev.c
@@ -1027,7 +1027,7 @@ eth_ixgbe_dev_init(struct rte_eth_dev *eth_dev, void *init_params __rte_unused)
{
struct ixgbe_adapter *ad = eth_dev->data->dev_private;
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
struct ixgbe_hw *hw =
IXGBE_DEV_PRIVATE_TO_HW(eth_dev->data->dev_private);
struct ixgbe_vfta *shadow_vfta =
@@ -1525,7 +1525,7 @@ eth_ixgbevf_dev_init(struct rte_eth_dev *eth_dev)
uint32_t tc, tcs;
struct ixgbe_adapter *ad = eth_dev->data->dev_private;
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
struct ixgbe_hw *hw =
IXGBE_DEV_PRIVATE_TO_HW(eth_dev->data->dev_private);
struct ixgbe_vfta *shadow_vfta =
@@ -2539,7 +2539,7 @@ ixgbe_dev_start(struct rte_eth_dev *dev)
struct ixgbe_vf_info *vfinfo =
*IXGBE_DEV_PRIVATE_TO_P_VFDATA(dev->data->dev_private);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
uint32_t intr_vector = 0;
int err;
bool link_up = false, negotiate = 0;
@@ -2594,11 +2594,9 @@ ixgbe_dev_start(struct rte_eth_dev *dev)
return -1;
}
- if (rte_intr_dp_is_en(intr_handle) && !intr_handle->intr_vec) {
- intr_handle->intr_vec =
- rte_zmalloc("intr_vec",
- dev->data->nb_rx_queues * sizeof(int), 0);
- if (intr_handle->intr_vec == NULL) {
+ if (rte_intr_dp_is_en(intr_handle)) {
+ if (rte_intr_vec_list_alloc(intr_handle, "intr_vec",
+ dev->data->nb_rx_queues)) {
PMD_INIT_LOG(ERR, "Failed to allocate %d rx_queues"
" intr_vec", dev->data->nb_rx_queues);
return -ENOMEM;
@@ -2834,7 +2832,7 @@ ixgbe_dev_stop(struct rte_eth_dev *dev)
struct ixgbe_vf_info *vfinfo =
*IXGBE_DEV_PRIVATE_TO_P_VFDATA(dev->data->dev_private);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
int vf;
struct ixgbe_tm_conf *tm_conf =
IXGBE_DEV_PRIVATE_TO_TM_CONF(dev->data->dev_private);
@@ -2885,10 +2883,7 @@ ixgbe_dev_stop(struct rte_eth_dev *dev)
/* Clean datapath event and queue/vec mapping */
rte_intr_efd_disable(intr_handle);
- if (intr_handle->intr_vec != NULL) {
- rte_free(intr_handle->intr_vec);
- intr_handle->intr_vec = NULL;
- }
+ rte_intr_vec_list_free(intr_handle);
/* reset hierarchy commit */
tm_conf->committed = false;
@@ -2972,7 +2967,7 @@ ixgbe_dev_close(struct rte_eth_dev *dev)
struct ixgbe_hw *hw =
IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
int retries = 0;
int ret;
@@ -4618,7 +4613,7 @@ ixgbe_dev_interrupt_delayed_handler(void *param)
{
struct rte_eth_dev *dev = (struct rte_eth_dev *)param;
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
struct ixgbe_interrupt *intr =
IXGBE_DEV_PRIVATE_TO_INTR(dev->data->dev_private);
struct ixgbe_hw *hw =
@@ -5290,7 +5285,7 @@ ixgbevf_dev_start(struct rte_eth_dev *dev)
IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
uint32_t intr_vector = 0;
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
int err, mask = 0;
@@ -5353,11 +5348,9 @@ ixgbevf_dev_start(struct rte_eth_dev *dev)
}
}
- if (rte_intr_dp_is_en(intr_handle) && !intr_handle->intr_vec) {
- intr_handle->intr_vec =
- rte_zmalloc("intr_vec",
- dev->data->nb_rx_queues * sizeof(int), 0);
- if (intr_handle->intr_vec == NULL) {
+ if (rte_intr_dp_is_en(intr_handle)) {
+ if (rte_intr_vec_list_alloc(intr_handle, "intr_vec",
+ dev->data->nb_rx_queues)) {
PMD_INIT_LOG(ERR, "Failed to allocate %d rx_queues"
" intr_vec", dev->data->nb_rx_queues);
ixgbe_dev_clear_queues(dev);
@@ -5397,7 +5390,7 @@ ixgbevf_dev_stop(struct rte_eth_dev *dev)
struct ixgbe_hw *hw = IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
struct ixgbe_adapter *adapter = dev->data->dev_private;
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
if (hw->adapter_stopped)
return 0;
@@ -5425,10 +5418,7 @@ ixgbevf_dev_stop(struct rte_eth_dev *dev)
/* Clean datapath event and queue/vec mapping */
rte_intr_efd_disable(intr_handle);
- if (intr_handle->intr_vec != NULL) {
- rte_free(intr_handle->intr_vec);
- intr_handle->intr_vec = NULL;
- }
+ rte_intr_vec_list_free(intr_handle);
adapter->rss_reta_updated = 0;
@@ -5440,7 +5430,7 @@ ixgbevf_dev_close(struct rte_eth_dev *dev)
{
struct ixgbe_hw *hw = IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
int ret;
PMD_INIT_FUNC_TRACE();
@@ -5738,7 +5728,7 @@ static int
ixgbevf_dev_rx_queue_intr_enable(struct rte_eth_dev *dev, uint16_t queue_id)
{
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
struct ixgbe_interrupt *intr =
IXGBE_DEV_PRIVATE_TO_INTR(dev->data->dev_private);
struct ixgbe_hw *hw =
@@ -5764,7 +5754,7 @@ ixgbevf_dev_rx_queue_intr_disable(struct rte_eth_dev *dev, uint16_t queue_id)
struct ixgbe_hw *hw =
IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
uint32_t vec = IXGBE_MISC_VEC_ID;
if (rte_intr_allow_others(intr_handle))
@@ -5780,7 +5770,7 @@ static int
ixgbe_dev_rx_queue_intr_enable(struct rte_eth_dev *dev, uint16_t queue_id)
{
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
uint32_t mask;
struct ixgbe_hw *hw =
IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
@@ -5907,7 +5897,7 @@ static void
ixgbevf_configure_msix(struct rte_eth_dev *dev)
{
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
struct ixgbe_hw *hw =
IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
uint32_t q_idx;
@@ -5934,8 +5924,10 @@ ixgbevf_configure_msix(struct rte_eth_dev *dev)
* as IXGBE_VF_MAXMSIVECOTR = 1
*/
ixgbevf_set_ivar_map(hw, 0, q_idx, vector_idx);
- intr_handle->intr_vec[q_idx] = vector_idx;
- if (vector_idx < base + intr_handle->nb_efd - 1)
+ rte_intr_vec_list_index_set(intr_handle, q_idx,
+ vector_idx);
+ if (vector_idx < base + rte_intr_nb_efd_get(intr_handle)
+ - 1)
vector_idx++;
}
@@ -5956,7 +5948,7 @@ static void
ixgbe_configure_msix(struct rte_eth_dev *dev)
{
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
struct ixgbe_hw *hw =
IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
uint32_t queue_id, base = IXGBE_MISC_VEC_ID;
@@ -6000,8 +5992,10 @@ ixgbe_configure_msix(struct rte_eth_dev *dev)
queue_id++) {
/* by default, 1:1 mapping */
ixgbe_set_ivar_map(hw, 0, queue_id, vec);
- intr_handle->intr_vec[queue_id] = vec;
- if (vec < base + intr_handle->nb_efd - 1)
+ rte_intr_vec_list_index_set(intr_handle,
+ queue_id, vec);
+ if (vec < base + rte_intr_nb_efd_get(intr_handle)
+ - 1)
vec++;
}
diff --git a/drivers/net/memif/memif_socket.c b/drivers/net/memif/memif_socket.c
index 8533e39f69..d48c3685d9 100644
--- a/drivers/net/memif/memif_socket.c
+++ b/drivers/net/memif/memif_socket.c
@@ -65,7 +65,8 @@ memif_msg_send_from_queue(struct memif_control_channel *cc)
if (e == NULL)
return 0;
- size = memif_msg_send(cc->intr_handle.fd, &e->msg, e->fd);
+ size = memif_msg_send(rte_intr_fd_get(cc->intr_handle), &e->msg,
+ e->fd);
if (size != sizeof(memif_msg_t)) {
MIF_LOG(ERR, "sendmsg fail: %s.", strerror(errno));
ret = -1;
@@ -317,7 +318,9 @@ memif_msg_receive_add_ring(struct rte_eth_dev *dev, memif_msg_t *msg, int fd)
mq = (ar->flags & MEMIF_MSG_ADD_RING_FLAG_C2S) ?
dev->data->rx_queues[ar->index] : dev->data->tx_queues[ar->index];
- mq->intr_handle.fd = fd;
+ if (rte_intr_fd_set(mq->intr_handle, fd))
+ return -1;
+
mq->log2_ring_size = ar->log2_ring_size;
mq->region = ar->region;
mq->ring_offset = ar->offset;
@@ -453,7 +456,7 @@ memif_msg_enq_add_ring(struct rte_eth_dev *dev, uint8_t idx,
dev->data->rx_queues[idx];
e->msg.type = MEMIF_MSG_TYPE_ADD_RING;
- e->fd = mq->intr_handle.fd;
+ e->fd = rte_intr_fd_get(mq->intr_handle);
ar->index = idx;
ar->offset = mq->ring_offset;
ar->region = mq->region;
@@ -505,12 +508,13 @@ memif_intr_unregister_handler(struct rte_intr_handle *intr_handle, void *arg)
struct memif_control_channel *cc = arg;
/* close control channel fd */
- close(intr_handle->fd);
+ close(rte_intr_fd_get(intr_handle));
/* clear message queue */
while ((elt = TAILQ_FIRST(&cc->msg_queue)) != NULL) {
TAILQ_REMOVE(&cc->msg_queue, elt, next);
rte_free(elt);
}
+ rte_intr_instance_free(cc->intr_handle);
/* free control channel */
rte_free(cc);
}
@@ -548,8 +552,8 @@ memif_disconnect(struct rte_eth_dev *dev)
"Unexpected message(s) in message queue.");
}
- ih = &pmd->cc->intr_handle;
- if (ih->fd > 0) {
+ ih = pmd->cc->intr_handle;
+ if (rte_intr_fd_get(ih) > 0) {
ret = rte_intr_callback_unregister(ih,
memif_intr_handler,
pmd->cc);
@@ -563,7 +567,8 @@ memif_disconnect(struct rte_eth_dev *dev)
pmd->cc,
memif_intr_unregister_handler);
} else if (ret > 0) {
- close(ih->fd);
+ close(rte_intr_fd_get(ih));
+ rte_intr_instance_free(ih);
rte_free(pmd->cc);
}
pmd->cc = NULL;
@@ -587,9 +592,10 @@ memif_disconnect(struct rte_eth_dev *dev)
else
continue;
}
- if (mq->intr_handle.fd > 0) {
- close(mq->intr_handle.fd);
- mq->intr_handle.fd = -1;
+
+ if (rte_intr_fd_get(mq->intr_handle) > 0) {
+ close(rte_intr_fd_get(mq->intr_handle));
+ rte_intr_fd_set(mq->intr_handle, -1);
}
}
for (i = 0; i < pmd->cfg.num_s2c_rings; i++) {
@@ -604,9 +610,10 @@ memif_disconnect(struct rte_eth_dev *dev)
else
continue;
}
- if (mq->intr_handle.fd > 0) {
- close(mq->intr_handle.fd);
- mq->intr_handle.fd = -1;
+
+ if (rte_intr_fd_get(mq->intr_handle) > 0) {
+ close(rte_intr_fd_get(mq->intr_handle));
+ rte_intr_fd_set(mq->intr_handle, -1);
}
}
@@ -644,7 +651,7 @@ memif_msg_receive(struct memif_control_channel *cc)
mh.msg_control = ctl;
mh.msg_controllen = sizeof(ctl);
- size = recvmsg(cc->intr_handle.fd, &mh, 0);
+ size = recvmsg(rte_intr_fd_get(cc->intr_handle), &mh, 0);
if (size != sizeof(memif_msg_t)) {
MIF_LOG(DEBUG, "Invalid message size = %zd", size);
if (size > 0)
@@ -774,7 +781,7 @@ memif_intr_handler(void *arg)
/* if driver failed to assign device */
if (cc->dev == NULL) {
memif_msg_send_from_queue(cc);
- ret = rte_intr_callback_unregister_pending(&cc->intr_handle,
+ ret = rte_intr_callback_unregister_pending(cc->intr_handle,
memif_intr_handler,
cc,
memif_intr_unregister_handler);
@@ -812,12 +819,12 @@ memif_listener_handler(void *arg)
int ret;
addr_len = sizeof(client);
- sockfd = accept(socket->intr_handle.fd, (struct sockaddr *)&client,
- (socklen_t *)&addr_len);
+ sockfd = accept(rte_intr_fd_get(socket->intr_handle),
+ (struct sockaddr *)&client, (socklen_t *)&addr_len);
if (sockfd < 0) {
MIF_LOG(ERR,
"Failed to accept connection request on socket fd %d",
- socket->intr_handle.fd);
+ rte_intr_fd_get(socket->intr_handle));
return;
}
@@ -829,13 +836,25 @@ memif_listener_handler(void *arg)
goto error;
}
- cc->intr_handle.fd = sockfd;
- cc->intr_handle.type = RTE_INTR_HANDLE_EXT;
+ /* Allocate interrupt instance */
+ cc->intr_handle = rte_intr_instance_alloc(RTE_INTR_INSTANCE_F_SHARED);
+ if (cc->intr_handle == NULL) {
+ MIF_LOG(ERR, "Failed to allocate intr handle");
+ goto error;
+ }
+
+ if (rte_intr_fd_set(cc->intr_handle, sockfd))
+ goto error;
+
+ if (rte_intr_type_set(cc->intr_handle, RTE_INTR_HANDLE_EXT))
+ goto error;
+
cc->socket = socket;
cc->dev = NULL;
TAILQ_INIT(&cc->msg_queue);
- ret = rte_intr_callback_register(&cc->intr_handle, memif_intr_handler, cc);
+ ret = rte_intr_callback_register(cc->intr_handle, memif_intr_handler,
+ cc);
if (ret < 0) {
MIF_LOG(ERR, "Failed to register control channel callback.");
goto error;
@@ -857,8 +876,10 @@ memif_listener_handler(void *arg)
close(sockfd);
sockfd = -1;
}
- if (cc != NULL)
+ if (cc != NULL) {
+ rte_intr_instance_free(cc->intr_handle);
rte_free(cc);
+ }
}
static struct memif_socket *
@@ -914,9 +935,21 @@ memif_socket_create(char *key, uint8_t listener, bool is_abstract)
MIF_LOG(DEBUG, "Memif listener socket %s created.", sock->filename);
- sock->intr_handle.fd = sockfd;
- sock->intr_handle.type = RTE_INTR_HANDLE_EXT;
- ret = rte_intr_callback_register(&sock->intr_handle,
+ /* Allocate interrupt instance */
+ sock->intr_handle =
+ rte_intr_instance_alloc(RTE_INTR_INSTANCE_F_SHARED);
+ if (sock->intr_handle == NULL) {
+ MIF_LOG(ERR, "Failed to allocate intr handle");
+ goto error;
+ }
+
+ if (rte_intr_fd_set(sock->intr_handle, sockfd))
+ goto error;
+
+ if (rte_intr_type_set(sock->intr_handle, RTE_INTR_HANDLE_EXT))
+ goto error;
+
+ ret = rte_intr_callback_register(sock->intr_handle,
memif_listener_handler, sock);
if (ret < 0) {
MIF_LOG(ERR, "Failed to register interrupt "
@@ -929,8 +962,10 @@ memif_socket_create(char *key, uint8_t listener, bool is_abstract)
error:
MIF_LOG(ERR, "Failed to setup socket %s: %s", key, strerror(errno));
- if (sock != NULL)
+ if (sock != NULL) {
+ rte_intr_instance_free(sock->intr_handle);
rte_free(sock);
+ }
if (sockfd >= 0)
close(sockfd);
return NULL;
@@ -1047,6 +1082,8 @@ memif_socket_remove_device(struct rte_eth_dev *dev)
MIF_LOG(ERR, "Failed to remove socket file: %s",
socket->filename);
}
+ if (pmd->role != MEMIF_ROLE_CLIENT)
+ rte_intr_instance_free(socket->intr_handle);
rte_free(socket);
}
}
@@ -1109,13 +1146,25 @@ memif_connect_client(struct rte_eth_dev *dev)
goto error;
}
- pmd->cc->intr_handle.fd = sockfd;
- pmd->cc->intr_handle.type = RTE_INTR_HANDLE_EXT;
+ /* Allocate interrupt instance */
+ pmd->cc->intr_handle =
+ rte_intr_instance_alloc(RTE_INTR_INSTANCE_F_SHARED);
+ if (pmd->cc->intr_handle == NULL) {
+ MIF_LOG(ERR, "Failed to allocate intr handle");
+ goto error;
+ }
+
+ if (rte_intr_fd_set(pmd->cc->intr_handle, sockfd))
+ goto error;
+
+ if (rte_intr_type_set(pmd->cc->intr_handle, RTE_INTR_HANDLE_EXT))
+ goto error;
+
pmd->cc->socket = NULL;
pmd->cc->dev = dev;
TAILQ_INIT(&pmd->cc->msg_queue);
- ret = rte_intr_callback_register(&pmd->cc->intr_handle,
+ ret = rte_intr_callback_register(pmd->cc->intr_handle,
memif_intr_handler, pmd->cc);
if (ret < 0) {
MIF_LOG(ERR, "Failed to register interrupt callback for control fd");
@@ -1130,6 +1179,7 @@ memif_connect_client(struct rte_eth_dev *dev)
sockfd = -1;
}
if (pmd->cc != NULL) {
+ rte_intr_instance_free(pmd->cc->intr_handle);
rte_free(pmd->cc);
pmd->cc = NULL;
}
diff --git a/drivers/net/memif/memif_socket.h b/drivers/net/memif/memif_socket.h
index b9b8a15178..b0decbb0a2 100644
--- a/drivers/net/memif/memif_socket.h
+++ b/drivers/net/memif/memif_socket.h
@@ -85,7 +85,7 @@ struct memif_socket_dev_list_elt {
(sizeof(struct sockaddr_un) - offsetof(struct sockaddr_un, sun_path))
struct memif_socket {
- struct rte_intr_handle intr_handle; /**< interrupt handle */
+ struct rte_intr_handle *intr_handle; /**< interrupt handle */
char filename[MEMIF_SOCKET_UN_SIZE]; /**< socket filename */
TAILQ_HEAD(, memif_socket_dev_list_elt) dev_queue;
@@ -101,7 +101,7 @@ struct memif_msg_queue_elt {
};
struct memif_control_channel {
- struct rte_intr_handle intr_handle; /**< interrupt handle */
+ struct rte_intr_handle *intr_handle; /**< interrupt handle */
TAILQ_HEAD(, memif_msg_queue_elt) msg_queue; /**< control message queue */
struct memif_socket *socket; /**< pointer to socket */
struct rte_eth_dev *dev; /**< pointer to device */
diff --git a/drivers/net/memif/rte_eth_memif.c b/drivers/net/memif/rte_eth_memif.c
index 9deb7a5f13..8cec493ffd 100644
--- a/drivers/net/memif/rte_eth_memif.c
+++ b/drivers/net/memif/rte_eth_memif.c
@@ -326,7 +326,8 @@ eth_memif_rx(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
/* consume interrupt */
if ((ring->flags & MEMIF_RING_FLAG_MASK_INT) == 0)
- size = read(mq->intr_handle.fd, &b, sizeof(b));
+ size = read(rte_intr_fd_get(mq->intr_handle), &b,
+ sizeof(b));
ring_size = 1 << mq->log2_ring_size;
mask = ring_size - 1;
@@ -462,7 +463,8 @@ eth_memif_rx_zc(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
if ((ring->flags & MEMIF_RING_FLAG_MASK_INT) == 0) {
uint64_t b;
ssize_t size __rte_unused;
- size = read(mq->intr_handle.fd, &b, sizeof(b));
+ size = read(rte_intr_fd_get(mq->intr_handle), &b,
+ sizeof(b));
}
ring_size = 1 << mq->log2_ring_size;
@@ -680,7 +682,8 @@ eth_memif_tx(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
if ((ring->flags & MEMIF_RING_FLAG_MASK_INT) == 0) {
a = 1;
- size = write(mq->intr_handle.fd, &a, sizeof(a));
+ size = write(rte_intr_fd_get(mq->intr_handle), &a,
+ sizeof(a));
if (unlikely(size < 0)) {
MIF_LOG(WARNING,
"Failed to send interrupt. %s", strerror(errno));
@@ -832,7 +835,8 @@ eth_memif_tx_zc(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
/* Send interrupt, if enabled. */
if ((ring->flags & MEMIF_RING_FLAG_MASK_INT) == 0) {
uint64_t a = 1;
- ssize_t size = write(mq->intr_handle.fd, &a, sizeof(a));
+ ssize_t size = write(rte_intr_fd_get(mq->intr_handle),
+ &a, sizeof(a));
if (unlikely(size < 0)) {
MIF_LOG(WARNING,
"Failed to send interrupt. %s", strerror(errno));
@@ -1092,8 +1096,10 @@ memif_init_queues(struct rte_eth_dev *dev)
mq->ring_offset = memif_get_ring_offset(dev, mq, MEMIF_RING_C2S, i);
mq->last_head = 0;
mq->last_tail = 0;
- mq->intr_handle.fd = eventfd(0, EFD_NONBLOCK);
- if (mq->intr_handle.fd < 0) {
+ if (rte_intr_fd_set(mq->intr_handle, eventfd(0, EFD_NONBLOCK)))
+ return -rte_errno;
+
+ if (rte_intr_fd_get(mq->intr_handle) < 0) {
MIF_LOG(WARNING,
"Failed to create eventfd for tx queue %d: %s.", i,
strerror(errno));
@@ -1115,8 +1121,9 @@ memif_init_queues(struct rte_eth_dev *dev)
mq->ring_offset = memif_get_ring_offset(dev, mq, MEMIF_RING_S2C, i);
mq->last_head = 0;
mq->last_tail = 0;
- mq->intr_handle.fd = eventfd(0, EFD_NONBLOCK);
- if (mq->intr_handle.fd < 0) {
+ if (rte_intr_fd_set(mq->intr_handle, eventfd(0, EFD_NONBLOCK)))
+ return -rte_errno;
+ if (rte_intr_fd_get(mq->intr_handle) < 0) {
MIF_LOG(WARNING,
"Failed to create eventfd for rx queue %d: %s.", i,
strerror(errno));
@@ -1310,12 +1317,24 @@ memif_tx_queue_setup(struct rte_eth_dev *dev,
return -ENOMEM;
}
+ /* Allocate interrupt instance */
+ mq->intr_handle = rte_intr_instance_alloc(RTE_INTR_INSTANCE_F_SHARED);
+ if (mq->intr_handle == NULL) {
+ MIF_LOG(ERR, "Failed to allocate intr handle");
+ return -ENOMEM;
+ }
+
mq->type =
(pmd->role == MEMIF_ROLE_CLIENT) ? MEMIF_RING_C2S : MEMIF_RING_S2C;
mq->n_pkts = 0;
mq->n_bytes = 0;
- mq->intr_handle.fd = -1;
- mq->intr_handle.type = RTE_INTR_HANDLE_EXT;
+
+ if (rte_intr_fd_set(mq->intr_handle, -1))
+ return -rte_errno;
+
+ if (rte_intr_type_set(mq->intr_handle, RTE_INTR_HANDLE_EXT))
+ return -rte_errno;
+
mq->in_port = dev->data->port_id;
dev->data->tx_queues[qid] = mq;
@@ -1339,11 +1358,23 @@ memif_rx_queue_setup(struct rte_eth_dev *dev,
return -ENOMEM;
}
+ /* Allocate interrupt instance */
+ mq->intr_handle = rte_intr_instance_alloc(RTE_INTR_INSTANCE_F_SHARED);
+ if (mq->intr_handle == NULL) {
+ MIF_LOG(ERR, "Failed to allocate intr handle");
+ return -ENOMEM;
+ }
+
mq->type = (pmd->role == MEMIF_ROLE_CLIENT) ? MEMIF_RING_S2C : MEMIF_RING_C2S;
mq->n_pkts = 0;
mq->n_bytes = 0;
- mq->intr_handle.fd = -1;
- mq->intr_handle.type = RTE_INTR_HANDLE_EXT;
+
+ if (rte_intr_fd_set(mq->intr_handle, -1))
+ return -rte_errno;
+
+ if (rte_intr_type_set(mq->intr_handle, RTE_INTR_HANDLE_EXT))
+ return -rte_errno;
+
mq->mempool = mb_pool;
mq->in_port = dev->data->port_id;
dev->data->rx_queues[qid] = mq;
@@ -1359,6 +1390,7 @@ memif_rx_queue_release(struct rte_eth_dev *dev, uint16_t qid)
if (!mq)
return;
+ rte_intr_instance_free(mq->intr_handle);
rte_free(mq);
}
diff --git a/drivers/net/memif/rte_eth_memif.h b/drivers/net/memif/rte_eth_memif.h
index 2038bda742..a5ee23d42e 100644
--- a/drivers/net/memif/rte_eth_memif.h
+++ b/drivers/net/memif/rte_eth_memif.h
@@ -68,7 +68,7 @@ struct memif_queue {
uint64_t n_pkts; /**< number of rx/tx packets */
uint64_t n_bytes; /**< number of rx/tx bytes */
- struct rte_intr_handle intr_handle; /**< interrupt handle */
+ struct rte_intr_handle *intr_handle; /**< interrupt handle */
memif_log2_ring_size_t log2_ring_size; /**< log2 of ring size */
};
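Note: for the virtual drivers above (memif), the handle can no longer be embedded by
value because struct rte_intr_handle is now opaque, so each queue and control channel
owns a pointer that is explicitly allocated, initialised and released. Roughly, the
lifecycle looks like the fragment below (illustrative only, the my_queue names are
made up):
	struct my_queue {
		struct rte_intr_handle *intr_handle; /* was: struct rte_intr_handle intr_handle */
	};
	static int
	my_queue_setup(struct my_queue *mq, int fd)
	{
		mq->intr_handle = rte_intr_instance_alloc(RTE_INTR_INSTANCE_F_SHARED);
		if (mq->intr_handle == NULL)
			return -ENOMEM;
		if (rte_intr_fd_set(mq->intr_handle, fd) ||
		    rte_intr_type_set(mq->intr_handle, RTE_INTR_HANDLE_EXT))
			return -rte_errno;
		return 0;
	}
	static void
	my_queue_release(struct my_queue *mq)
	{
		if (rte_intr_fd_get(mq->intr_handle) > 0)
			close(rte_intr_fd_get(mq->intr_handle));
		rte_intr_instance_free(mq->intr_handle);
	}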
diff --git a/drivers/net/mlx4/mlx4.c b/drivers/net/mlx4/mlx4.c
index f7fe831d61..cccc71f757 100644
--- a/drivers/net/mlx4/mlx4.c
+++ b/drivers/net/mlx4/mlx4.c
@@ -1042,9 +1042,19 @@ mlx4_pci_probe(struct rte_pci_driver *pci_drv, struct rte_pci_device *pci_dev)
rte_eth_copy_pci_info(eth_dev, pci_dev);
eth_dev->data->dev_flags |= RTE_ETH_DEV_AUTOFILL_QUEUE_XSTATS;
/* Initialize local interrupt handle for current port. */
- memset(&priv->intr_handle, 0, sizeof(struct rte_intr_handle));
- priv->intr_handle.fd = -1;
- priv->intr_handle.type = RTE_INTR_HANDLE_EXT;
+ priv->intr_handle =
+ rte_intr_instance_alloc(RTE_INTR_INSTANCE_F_SHARED);
+ if (priv->intr_handle == NULL) {
+ RTE_LOG(ERR, EAL, "Fail to allocate intr_handle\n");
+ goto port_error;
+ }
+
+ if (rte_intr_fd_set(priv->intr_handle, -1))
+ goto port_error;
+
+ if (rte_intr_type_set(priv->intr_handle, RTE_INTR_HANDLE_EXT))
+ goto port_error;
+
/*
* Override ethdev interrupt handle pointer with private
* handle instead of that of the parent PCI device used by
@@ -1057,7 +1067,7 @@ mlx4_pci_probe(struct rte_pci_driver *pci_drv, struct rte_pci_device *pci_dev)
* besides setting up eth_dev->intr_handle, the rest is
* handled by rte_intr_rx_ctl().
*/
- eth_dev->intr_handle = &priv->intr_handle;
+ eth_dev->intr_handle = priv->intr_handle;
priv->dev_data = eth_dev->data;
eth_dev->dev_ops = &mlx4_dev_ops;
#ifdef HAVE_IBV_MLX4_BUF_ALLOCATORS
@@ -1102,6 +1112,7 @@ mlx4_pci_probe(struct rte_pci_driver *pci_drv, struct rte_pci_device *pci_dev)
prev_dev = eth_dev;
continue;
port_error:
+ rte_intr_instance_free(priv->intr_handle);
rte_free(priv);
if (eth_dev != NULL)
eth_dev->data->dev_private = NULL;
diff --git a/drivers/net/mlx4/mlx4.h b/drivers/net/mlx4/mlx4.h
index e07b1d2386..2d0c512f79 100644
--- a/drivers/net/mlx4/mlx4.h
+++ b/drivers/net/mlx4/mlx4.h
@@ -176,7 +176,7 @@ struct mlx4_priv {
uint32_t tso_max_payload_sz; /**< Max supported TSO payload size. */
uint32_t hw_rss_max_qps; /**< Max Rx Queues supported by RSS. */
uint64_t hw_rss_sup; /**< Supported RSS hash fields (Verbs format). */
- struct rte_intr_handle intr_handle; /**< Port interrupt handle. */
+ struct rte_intr_handle *intr_handle; /**< Port interrupt handle. */
struct mlx4_drop *drop; /**< Shared resources for drop flow rules. */
struct {
uint32_t dev_gen; /* Generation number to flush local caches. */
diff --git a/drivers/net/mlx4/mlx4_intr.c b/drivers/net/mlx4/mlx4_intr.c
index 2aab0f60a7..01057482ec 100644
--- a/drivers/net/mlx4/mlx4_intr.c
+++ b/drivers/net/mlx4/mlx4_intr.c
@@ -43,12 +43,12 @@ static int mlx4_link_status_check(struct mlx4_priv *priv);
static void
mlx4_rx_intr_vec_disable(struct mlx4_priv *priv)
{
- struct rte_intr_handle *intr_handle = &priv->intr_handle;
+ struct rte_intr_handle *intr_handle = priv->intr_handle;
rte_intr_free_epoll_fd(intr_handle);
- free(intr_handle->intr_vec);
- intr_handle->nb_efd = 0;
- intr_handle->intr_vec = NULL;
+ rte_intr_vec_list_free(intr_handle);
+
+ rte_intr_nb_efd_set(intr_handle, 0);
}
/**
@@ -67,11 +67,10 @@ mlx4_rx_intr_vec_enable(struct mlx4_priv *priv)
unsigned int rxqs_n = ETH_DEV(priv)->data->nb_rx_queues;
unsigned int n = RTE_MIN(rxqs_n, (uint32_t)RTE_MAX_RXTX_INTR_VEC_ID);
unsigned int count = 0;
- struct rte_intr_handle *intr_handle = &priv->intr_handle;
+ struct rte_intr_handle *intr_handle = priv->intr_handle;
mlx4_rx_intr_vec_disable(priv);
- intr_handle->intr_vec = malloc(n * sizeof(intr_handle->intr_vec[0]));
- if (intr_handle->intr_vec == NULL) {
+ if (rte_intr_vec_list_alloc(intr_handle, NULL, n)) {
rte_errno = ENOMEM;
ERROR("failed to allocate memory for interrupt vector,"
" Rx interrupts will not be supported");
@@ -83,9 +82,9 @@ mlx4_rx_intr_vec_enable(struct mlx4_priv *priv)
/* Skip queues that cannot request interrupts. */
if (!rxq || !rxq->channel) {
/* Use invalid intr_vec[] index to disable entry. */
- intr_handle->intr_vec[i] =
- RTE_INTR_VEC_RXTX_OFFSET +
- RTE_MAX_RXTX_INTR_VEC_ID;
+ if (rte_intr_vec_list_index_set(intr_handle, i,
+ RTE_INTR_VEC_RXTX_OFFSET + RTE_MAX_RXTX_INTR_VEC_ID))
+ return -rte_errno;
continue;
}
if (count >= RTE_MAX_RXTX_INTR_VEC_ID) {
@@ -96,14 +95,21 @@ mlx4_rx_intr_vec_enable(struct mlx4_priv *priv)
mlx4_rx_intr_vec_disable(priv);
return -rte_errno;
}
- intr_handle->intr_vec[i] = RTE_INTR_VEC_RXTX_OFFSET + count;
- intr_handle->efds[count] = rxq->channel->fd;
+
+ if (rte_intr_vec_list_index_set(intr_handle, i,
+ RTE_INTR_VEC_RXTX_OFFSET + count))
+ return -rte_errno;
+
+ if (rte_intr_efds_index_set(intr_handle, i,
+ rxq->channel->fd))
+ return -rte_errno;
+
count++;
}
if (!count)
mlx4_rx_intr_vec_disable(priv);
- else
- intr_handle->nb_efd = count;
+ else if (rte_intr_nb_efd_set(intr_handle, count))
+ return -rte_errno;
return 0;
}
@@ -254,12 +260,13 @@ mlx4_intr_uninstall(struct mlx4_priv *priv)
{
int err = rte_errno; /* Make sure rte_errno remains unchanged. */
- if (priv->intr_handle.fd != -1) {
- rte_intr_callback_unregister(&priv->intr_handle,
+ if (rte_intr_fd_get(priv->intr_handle) != -1) {
+ rte_intr_callback_unregister(priv->intr_handle,
(void (*)(void *))
mlx4_interrupt_handler,
priv);
- priv->intr_handle.fd = -1;
+ if (rte_intr_fd_set(priv->intr_handle, -1))
+ return -rte_errno;
}
rte_eal_alarm_cancel((void (*)(void *))mlx4_link_status_alarm, priv);
priv->intr_alarm = 0;
@@ -286,8 +293,10 @@ mlx4_intr_install(struct mlx4_priv *priv)
mlx4_intr_uninstall(priv);
if (intr_conf->lsc | intr_conf->rmv) {
- priv->intr_handle.fd = priv->ctx->async_fd;
- rc = rte_intr_callback_register(&priv->intr_handle,
+ if (rte_intr_fd_set(priv->intr_handle, priv->ctx->async_fd))
+ return -rte_errno;
+
+ rc = rte_intr_callback_register(priv->intr_handle,
(void (*)(void *))
mlx4_interrupt_handler,
priv);
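Note: the mlx4/mlx5 Rx interrupt vector setup exercises the remaining accessors: event
fds and their count are no longer written into the handle directly but stored through
the efds/nb_efd setters and read back with rte_intr_nb_efd_get(). A condensed sketch of
the enable path follows (illustrative only; my_rx_intr_vec_enable and channel_fds are
hypothetical, queue lookup and cleanup on error are omitted):
	static int
	my_rx_intr_vec_enable(struct rte_intr_handle *intr_handle,
			      unsigned int n, const int *channel_fds)
	{
		unsigned int i, count = 0;
		if (rte_intr_vec_list_alloc(intr_handle, NULL, n))
			return -ENOMEM;
		if (rte_intr_type_set(intr_handle, RTE_INTR_HANDLE_EXT))
			return -rte_errno;
		for (i = 0; i != n; ++i) {
			if (channel_fds[i] < 0) {
				/* invalid index disables the entry, as before */
				if (rte_intr_vec_list_index_set(intr_handle, i,
						RTE_INTR_VEC_RXTX_OFFSET +
						RTE_MAX_RXTX_INTR_VEC_ID))
					return -rte_errno;
				continue;
			}
			if (rte_intr_vec_list_index_set(intr_handle, i,
					RTE_INTR_VEC_RXTX_OFFSET + count))
				return -rte_errno;
			if (rte_intr_efds_index_set(intr_handle, count,
					channel_fds[i]))
				return -rte_errno;
			count++;
		}
		return rte_intr_nb_efd_set(intr_handle, count);
	}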
diff --git a/drivers/net/mlx5/linux/mlx5_os.c b/drivers/net/mlx5/linux/mlx5_os.c
index f17e1aac3c..72bbb665cf 100644
--- a/drivers/net/mlx5/linux/mlx5_os.c
+++ b/drivers/net/mlx5/linux/mlx5_os.c
@@ -2458,11 +2458,9 @@ mlx5_os_pci_probe_pf(struct mlx5_common_device *cdev,
* Representor interrupts handle is released in mlx5_dev_stop().
*/
if (list[i].info.representor) {
- struct rte_intr_handle *intr_handle;
- intr_handle = mlx5_malloc(MLX5_MEM_SYS | MLX5_MEM_ZERO,
- sizeof(*intr_handle), 0,
- SOCKET_ID_ANY);
- if (!intr_handle) {
+ struct rte_intr_handle *intr_handle =
+ rte_intr_instance_alloc(RTE_INTR_INSTANCE_F_SHARED);
+ if (intr_handle == NULL) {
DRV_LOG(ERR,
"port %u failed to allocate memory for interrupt handler "
"Rx interrupts will not be supported",
@@ -2626,7 +2624,7 @@ mlx5_os_auxiliary_probe(struct mlx5_common_device *cdev)
if (eth_dev == NULL)
return -rte_errno;
/* Post create. */
- eth_dev->intr_handle = &adev->intr_handle;
+ eth_dev->intr_handle = adev->intr_handle;
if (rte_eal_process_type() == RTE_PROC_PRIMARY) {
eth_dev->data->dev_flags |= RTE_ETH_DEV_INTR_LSC;
eth_dev->data->dev_flags |= RTE_ETH_DEV_INTR_RMV;
@@ -2690,24 +2688,38 @@ mlx5_os_dev_shared_handler_install(struct mlx5_dev_ctx_shared *sh)
int flags;
struct ibv_context *ctx = sh->cdev->ctx;
- sh->intr_handle.fd = -1;
+ sh->intr_handle = rte_intr_instance_alloc(RTE_INTR_INSTANCE_F_SHARED);
+ if (sh->intr_handle == NULL) {
+ DRV_LOG(ERR, "Fail to allocate intr_handle");
+ rte_errno = ENOMEM;
+ return;
+ }
+ rte_intr_fd_set(sh->intr_handle, -1);
+
flags = fcntl(ctx->async_fd, F_GETFL);
ret = fcntl(ctx->async_fd, F_SETFL, flags | O_NONBLOCK);
if (ret) {
DRV_LOG(INFO, "failed to change file descriptor async event"
" queue");
} else {
- sh->intr_handle.fd = ctx->async_fd;
- sh->intr_handle.type = RTE_INTR_HANDLE_EXT;
- if (rte_intr_callback_register(&sh->intr_handle,
+ rte_intr_fd_set(sh->intr_handle, ctx->async_fd);
+ rte_intr_type_set(sh->intr_handle, RTE_INTR_HANDLE_EXT);
+ if (rte_intr_callback_register(sh->intr_handle,
mlx5_dev_interrupt_handler, sh)) {
DRV_LOG(INFO, "Fail to install the shared interrupt.");
- sh->intr_handle.fd = -1;
+ rte_intr_fd_set(sh->intr_handle, -1);
}
}
if (sh->devx) {
#ifdef HAVE_IBV_DEVX_ASYNC
- sh->intr_handle_devx.fd = -1;
+ sh->intr_handle_devx =
+ rte_intr_instance_alloc(RTE_INTR_INSTANCE_F_SHARED);
+ if (!sh->intr_handle_devx) {
+ DRV_LOG(ERR, "Fail to allocate intr_handle");
+ rte_errno = ENOMEM;
+ return;
+ }
+ rte_intr_fd_set(sh->intr_handle_devx, -1);
sh->devx_comp = (void *)mlx5_glue->devx_create_cmd_comp(ctx);
struct mlx5dv_devx_cmd_comp *devx_comp = sh->devx_comp;
if (!devx_comp) {
@@ -2721,13 +2733,14 @@ mlx5_os_dev_shared_handler_install(struct mlx5_dev_ctx_shared *sh)
" devx comp");
return;
}
- sh->intr_handle_devx.fd = devx_comp->fd;
- sh->intr_handle_devx.type = RTE_INTR_HANDLE_EXT;
- if (rte_intr_callback_register(&sh->intr_handle_devx,
+ rte_intr_fd_set(sh->intr_handle_devx, devx_comp->fd);
+ rte_intr_type_set(sh->intr_handle_devx,
+ RTE_INTR_HANDLE_EXT);
+ if (rte_intr_callback_register(sh->intr_handle_devx,
mlx5_dev_interrupt_handler_devx, sh)) {
DRV_LOG(INFO, "Fail to install the devx shared"
" interrupt.");
- sh->intr_handle_devx.fd = -1;
+ rte_intr_fd_set(sh->intr_handle_devx, -1);
}
#endif /* HAVE_IBV_DEVX_ASYNC */
}
@@ -2744,13 +2757,15 @@ mlx5_os_dev_shared_handler_install(struct mlx5_dev_ctx_shared *sh)
void
mlx5_os_dev_shared_handler_uninstall(struct mlx5_dev_ctx_shared *sh)
{
- if (sh->intr_handle.fd >= 0)
- mlx5_intr_callback_unregister(&sh->intr_handle,
+ if (rte_intr_fd_get(sh->intr_handle) >= 0)
+ mlx5_intr_callback_unregister(sh->intr_handle,
mlx5_dev_interrupt_handler, sh);
+ rte_intr_instance_free(sh->intr_handle);
#ifdef HAVE_IBV_DEVX_ASYNC
- if (sh->intr_handle_devx.fd >= 0)
- rte_intr_callback_unregister(&sh->intr_handle_devx,
+ if (rte_intr_fd_get(sh->intr_handle_devx) >= 0)
+ rte_intr_callback_unregister(sh->intr_handle_devx,
mlx5_dev_interrupt_handler_devx, sh);
+ rte_intr_instance_free(sh->intr_handle_devx);
if (sh->devx_comp)
mlx5_glue->devx_destroy_cmd_comp(sh->devx_comp);
#endif
diff --git a/drivers/net/mlx5/linux/mlx5_socket.c b/drivers/net/mlx5/linux/mlx5_socket.c
index 902b8ec934..db474f030a 100644
--- a/drivers/net/mlx5/linux/mlx5_socket.c
+++ b/drivers/net/mlx5/linux/mlx5_socket.c
@@ -23,7 +23,7 @@
#define MLX5_SOCKET_PATH "/var/tmp/dpdk_net_mlx5_%d"
int server_socket; /* Unix socket for primary process. */
-struct rte_intr_handle server_intr_handle; /* Interrupt handler. */
+struct rte_intr_handle *server_intr_handle; /* Interrupt handler. */
/**
* Handle server pmd socket interrupts.
@@ -145,9 +145,19 @@ static int
mlx5_pmd_interrupt_handler_install(void)
{
MLX5_ASSERT(server_socket);
- server_intr_handle.fd = server_socket;
- server_intr_handle.type = RTE_INTR_HANDLE_EXT;
- return rte_intr_callback_register(&server_intr_handle,
+ server_intr_handle =
+ rte_intr_instance_alloc(RTE_INTR_INSTANCE_F_PRIVATE);
+ if (server_intr_handle == NULL) {
+ DRV_LOG(ERR, "Fail to allocate intr_handle");
+ return -ENOMEM;
+ }
+ if (rte_intr_fd_set(server_intr_handle, server_socket))
+ return -rte_errno;
+
+ if (rte_intr_type_set(server_intr_handle, RTE_INTR_HANDLE_EXT))
+ return -rte_errno;
+
+ return rte_intr_callback_register(server_intr_handle,
mlx5_pmd_socket_handle, NULL);
}
@@ -158,12 +168,13 @@ static void
mlx5_pmd_interrupt_handler_uninstall(void)
{
if (server_socket) {
- mlx5_intr_callback_unregister(&server_intr_handle,
+ mlx5_intr_callback_unregister(server_intr_handle,
mlx5_pmd_socket_handle,
NULL);
}
- server_intr_handle.fd = 0;
- server_intr_handle.type = RTE_INTR_HANDLE_UNKNOWN;
+ rte_intr_fd_set(server_intr_handle, 0);
+ rte_intr_type_set(server_intr_handle, RTE_INTR_HANDLE_UNKNOWN);
+ rte_intr_instance_free(server_intr_handle);
}
/**
diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h
index 5da5ceaafe..5768b82935 100644
--- a/drivers/net/mlx5/mlx5.h
+++ b/drivers/net/mlx5/mlx5.h
@@ -996,7 +996,7 @@ struct mlx5_dev_txpp {
uint32_t tick; /* Completion tick duration in nanoseconds. */
uint32_t test; /* Packet pacing test mode. */
int32_t skew; /* Scheduling skew. */
- struct rte_intr_handle intr_handle; /* Periodic interrupt. */
+ struct rte_intr_handle *intr_handle; /* Periodic interrupt. */
void *echan; /* Event Channel. */
struct mlx5_txpp_wq clock_queue; /* Clock Queue. */
struct mlx5_txpp_wq rearm_queue; /* Clock Queue. */
@@ -1160,8 +1160,8 @@ struct mlx5_dev_ctx_shared {
struct mlx5_indexed_pool *ipool[MLX5_IPOOL_MAX];
struct mlx5_indexed_pool *mdh_ipools[MLX5_MAX_MODIFY_NUM];
/* Shared interrupt handler section. */
- struct rte_intr_handle intr_handle; /* Interrupt handler for device. */
- struct rte_intr_handle intr_handle_devx; /* DEVX interrupt handler. */
+ struct rte_intr_handle *intr_handle; /* Interrupt handler for device. */
+ struct rte_intr_handle *intr_handle_devx; /* DEVX interrupt handler. */
void *devx_comp; /* DEVX async comp obj. */
struct mlx5_devx_obj *tis[16]; /* TIS object. */
struct mlx5_devx_obj *td; /* Transport domain. */
diff --git a/drivers/net/mlx5/mlx5_rxq.c b/drivers/net/mlx5/mlx5_rxq.c
index 5fed42324d..4f02fe02b9 100644
--- a/drivers/net/mlx5/mlx5_rxq.c
+++ b/drivers/net/mlx5/mlx5_rxq.c
@@ -834,10 +834,7 @@ mlx5_rx_intr_vec_enable(struct rte_eth_dev *dev)
if (!dev->data->dev_conf.intr_conf.rxq)
return 0;
mlx5_rx_intr_vec_disable(dev);
- intr_handle->intr_vec = mlx5_malloc(0,
- n * sizeof(intr_handle->intr_vec[0]),
- 0, SOCKET_ID_ANY);
- if (intr_handle->intr_vec == NULL) {
+ if (rte_intr_vec_list_alloc(intr_handle, NULL, n)) {
DRV_LOG(ERR,
"port %u failed to allocate memory for interrupt"
" vector, Rx interrupts will not be supported",
@@ -845,7 +842,10 @@ mlx5_rx_intr_vec_enable(struct rte_eth_dev *dev)
rte_errno = ENOMEM;
return -rte_errno;
}
- intr_handle->type = RTE_INTR_HANDLE_EXT;
+
+ if (rte_intr_type_set(intr_handle, RTE_INTR_HANDLE_EXT))
+ return -rte_errno;
+
for (i = 0; i != n; ++i) {
/* This rxq obj must not be released in this function. */
struct mlx5_rxq_ctrl *rxq_ctrl = mlx5_rxq_get(dev, i);
@@ -856,9 +856,9 @@ mlx5_rx_intr_vec_enable(struct rte_eth_dev *dev)
if (!rxq_obj || (!rxq_obj->ibv_channel &&
!rxq_obj->devx_channel)) {
/* Use invalid intr_vec[] index to disable entry. */
- intr_handle->intr_vec[i] =
- RTE_INTR_VEC_RXTX_OFFSET +
- RTE_MAX_RXTX_INTR_VEC_ID;
+ if (rte_intr_vec_list_index_set(intr_handle, i,
+ RTE_INTR_VEC_RXTX_OFFSET + RTE_MAX_RXTX_INTR_VEC_ID))
+ return -rte_errno;
/* Decrease the rxq_ctrl's refcnt */
if (rxq_ctrl)
mlx5_rxq_release(dev, i);
@@ -885,14 +885,19 @@ mlx5_rx_intr_vec_enable(struct rte_eth_dev *dev)
mlx5_rx_intr_vec_disable(dev);
return -rte_errno;
}
- intr_handle->intr_vec[i] = RTE_INTR_VEC_RXTX_OFFSET + count;
- intr_handle->efds[count] = rxq_obj->fd;
+
+ if (rte_intr_vec_list_index_set(intr_handle, i,
+ RTE_INTR_VEC_RXTX_OFFSET + count))
+ return -rte_errno;
+ if (rte_intr_efds_index_set(intr_handle, count,
+ rxq_obj->fd))
+ return -rte_errno;
count++;
}
if (!count)
mlx5_rx_intr_vec_disable(dev);
- else
- intr_handle->nb_efd = count;
+ else if (rte_intr_nb_efd_set(intr_handle, count))
+ return -rte_errno;
return 0;
}
@@ -913,11 +918,11 @@ mlx5_rx_intr_vec_disable(struct rte_eth_dev *dev)
if (!dev->data->dev_conf.intr_conf.rxq)
return;
- if (!intr_handle->intr_vec)
+ if (rte_intr_vec_list_index_get(intr_handle, 0) < 0)
goto free;
for (i = 0; i != n; ++i) {
- if (intr_handle->intr_vec[i] == RTE_INTR_VEC_RXTX_OFFSET +
- RTE_MAX_RXTX_INTR_VEC_ID)
+ if (rte_intr_vec_list_index_get(intr_handle, i) ==
+ RTE_INTR_VEC_RXTX_OFFSET + RTE_MAX_RXTX_INTR_VEC_ID)
continue;
/**
* Need to access directly the queue to release the reference
@@ -927,10 +932,10 @@ mlx5_rx_intr_vec_disable(struct rte_eth_dev *dev)
}
free:
rte_intr_free_epoll_fd(intr_handle);
- if (intr_handle->intr_vec)
- mlx5_free(intr_handle->intr_vec);
- intr_handle->nb_efd = 0;
- intr_handle->intr_vec = NULL;
+
+ rte_intr_vec_list_free(intr_handle);
+
+ rte_intr_nb_efd_set(intr_handle, 0);
}
/**
diff --git a/drivers/net/mlx5/mlx5_trigger.c b/drivers/net/mlx5/mlx5_trigger.c
index dacf7ff272..d916c8addc 100644
--- a/drivers/net/mlx5/mlx5_trigger.c
+++ b/drivers/net/mlx5/mlx5_trigger.c
@@ -1183,7 +1183,7 @@ mlx5_dev_start(struct rte_eth_dev *dev)
dev->rx_pkt_burst = mlx5_select_rx_function(dev);
/* Enable datapath on secondary process. */
mlx5_mp_os_req_start_rxtx(dev);
- if (priv->sh->intr_handle.fd >= 0) {
+ if (rte_intr_fd_get(priv->sh->intr_handle) >= 0) {
priv->sh->port[priv->dev_port - 1].ih_port_id =
(uint32_t)dev->data->port_id;
} else {
@@ -1192,7 +1192,7 @@ mlx5_dev_start(struct rte_eth_dev *dev)
dev->data->dev_conf.intr_conf.lsc = 0;
dev->data->dev_conf.intr_conf.rmv = 0;
}
- if (priv->sh->intr_handle_devx.fd >= 0)
+ if (rte_intr_fd_get(priv->sh->intr_handle_devx) >= 0)
priv->sh->port[priv->dev_port - 1].devx_ih_port_id =
(uint32_t)dev->data->port_id;
return 0;
diff --git a/drivers/net/mlx5/mlx5_txpp.c b/drivers/net/mlx5/mlx5_txpp.c
index 48f03fcd79..34f92faa67 100644
--- a/drivers/net/mlx5/mlx5_txpp.c
+++ b/drivers/net/mlx5/mlx5_txpp.c
@@ -759,11 +759,11 @@ mlx5_txpp_interrupt_handler(void *cb_arg)
static void
mlx5_txpp_stop_service(struct mlx5_dev_ctx_shared *sh)
{
- if (!sh->txpp.intr_handle.fd)
+ if (!rte_intr_fd_get(sh->txpp.intr_handle))
return;
- mlx5_intr_callback_unregister(&sh->txpp.intr_handle,
+ mlx5_intr_callback_unregister(sh->txpp.intr_handle,
mlx5_txpp_interrupt_handler, sh);
- sh->txpp.intr_handle.fd = 0;
+ rte_intr_instance_free(sh->txpp.intr_handle);
}
/* Attach interrupt handler and fires first request to Rearm Queue. */
@@ -787,13 +787,22 @@ mlx5_txpp_start_service(struct mlx5_dev_ctx_shared *sh)
rte_errno = errno;
return -rte_errno;
}
- memset(&sh->txpp.intr_handle, 0, sizeof(sh->txpp.intr_handle));
+ sh->txpp.intr_handle =
+ rte_intr_instance_alloc(RTE_INTR_INSTANCE_F_SHARED);
+ if (sh->txpp.intr_handle == NULL) {
+ DRV_LOG(ERR, "Fail to allocate intr_handle");
+ return -ENOMEM;
+ }
fd = mlx5_os_get_devx_channel_fd(sh->txpp.echan);
- sh->txpp.intr_handle.fd = fd;
- sh->txpp.intr_handle.type = RTE_INTR_HANDLE_EXT;
- if (rte_intr_callback_register(&sh->txpp.intr_handle,
+ if (rte_intr_fd_set(sh->txpp.intr_handle, fd))
+ return -rte_errno;
+
+ if (rte_intr_type_set(sh->txpp.intr_handle, RTE_INTR_HANDLE_EXT))
+ return -rte_errno;
+
+ if (rte_intr_callback_register(sh->txpp.intr_handle,
mlx5_txpp_interrupt_handler, sh)) {
- sh->txpp.intr_handle.fd = 0;
+ rte_intr_fd_set(sh->txpp.intr_handle, 0);
DRV_LOG(ERR, "Failed to register CQE interrupt %d.", rte_errno);
return -rte_errno;
}
diff --git a/drivers/net/netvsc/hn_ethdev.c b/drivers/net/netvsc/hn_ethdev.c
index 9c4ae80e7e..8a950403ac 100644
--- a/drivers/net/netvsc/hn_ethdev.c
+++ b/drivers/net/netvsc/hn_ethdev.c
@@ -133,9 +133,9 @@ eth_dev_vmbus_allocate(struct rte_vmbus_device *dev, size_t private_data_size)
eth_dev->device = &dev->device;
/* interrupt is simulated */
- dev->intr_handle.type = RTE_INTR_HANDLE_EXT;
+ rte_intr_type_set(dev->intr_handle, RTE_INTR_HANDLE_EXT);
eth_dev->data->dev_flags |= RTE_ETH_DEV_INTR_LSC;
- eth_dev->intr_handle = &dev->intr_handle;
+ eth_dev->intr_handle = dev->intr_handle;
return eth_dev;
}
diff --git a/drivers/net/nfp/nfp_common.c b/drivers/net/nfp/nfp_common.c
index 3ea697c544..f8978e803a 100644
--- a/drivers/net/nfp/nfp_common.c
+++ b/drivers/net/nfp/nfp_common.c
@@ -307,24 +307,21 @@ nfp_configure_rx_interrupt(struct rte_eth_dev *dev,
struct nfp_net_hw *hw;
int i;
- if (!intr_handle->intr_vec) {
- intr_handle->intr_vec =
- rte_zmalloc("intr_vec",
- dev->data->nb_rx_queues * sizeof(int), 0);
- if (!intr_handle->intr_vec) {
- PMD_INIT_LOG(ERR, "Failed to allocate %d rx_queues"
- " intr_vec", dev->data->nb_rx_queues);
- return -ENOMEM;
- }
+ if (rte_intr_vec_list_alloc(intr_handle, "intr_vec",
+ dev->data->nb_rx_queues)) {
+ PMD_INIT_LOG(ERR, "Failed to allocate %d rx_queues"
+ " intr_vec", dev->data->nb_rx_queues);
+ return -ENOMEM;
}
hw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private);
- if (intr_handle->type == RTE_INTR_HANDLE_UIO) {
+ if (rte_intr_type_get(intr_handle) == RTE_INTR_HANDLE_UIO) {
PMD_INIT_LOG(INFO, "VF: enabling RX interrupt with UIO");
/* UIO just supports one queue and no LSC*/
nn_cfg_writeb(hw, NFP_NET_CFG_RXR_VEC(0), 0);
- intr_handle->intr_vec[0] = 0;
+ if (rte_intr_vec_list_index_set(intr_handle, 0, 0))
+ return -1;
} else {
PMD_INIT_LOG(INFO, "VF: enabling RX interrupt with VFIO");
for (i = 0; i < dev->data->nb_rx_queues; i++) {
@@ -333,9 +330,12 @@ nfp_configure_rx_interrupt(struct rte_eth_dev *dev,
* efd interrupts
*/
nn_cfg_writeb(hw, NFP_NET_CFG_RXR_VEC(i), i + 1);
- intr_handle->intr_vec[i] = i + 1;
+ if (rte_intr_vec_list_index_set(intr_handle, i,
+ i + 1))
+ return -1;
PMD_INIT_LOG(DEBUG, "intr_vec[%d]= %d", i,
- intr_handle->intr_vec[i]);
+ rte_intr_vec_list_index_get(intr_handle,
+ i));
}
}
@@ -804,7 +804,8 @@ nfp_rx_queue_intr_enable(struct rte_eth_dev *dev, uint16_t queue_id)
hw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private);
pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- if (pci_dev->intr_handle.type != RTE_INTR_HANDLE_UIO)
+ if (rte_intr_type_get(pci_dev->intr_handle) !=
+ RTE_INTR_HANDLE_UIO)
base = 1;
/* Make sure all updates are written before un-masking */
@@ -824,7 +825,8 @@ nfp_rx_queue_intr_disable(struct rte_eth_dev *dev, uint16_t queue_id)
hw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private);
pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- if (pci_dev->intr_handle.type != RTE_INTR_HANDLE_UIO)
+ if (rte_intr_type_get(pci_dev->intr_handle) !=
+ RTE_INTR_HANDLE_UIO)
base = 1;
/* Make sure all updates are written before un-masking */
@@ -874,7 +876,7 @@ nfp_net_irq_unmask(struct rte_eth_dev *dev)
if (hw->ctrl & NFP_NET_CFG_CTRL_MSIXAUTO) {
/* If MSI-X auto-masking is used, clear the entry */
rte_wmb();
- rte_intr_ack(&pci_dev->intr_handle);
+ rte_intr_ack(pci_dev->intr_handle);
} else {
/* Make sure all updates are written before un-masking */
rte_wmb();
diff --git a/drivers/net/nfp/nfp_ethdev.c b/drivers/net/nfp/nfp_ethdev.c
index e08e594b04..830863af28 100644
--- a/drivers/net/nfp/nfp_ethdev.c
+++ b/drivers/net/nfp/nfp_ethdev.c
@@ -82,7 +82,7 @@ static int
nfp_net_start(struct rte_eth_dev *dev)
{
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
uint32_t new_ctrl, update = 0;
struct nfp_net_hw *hw;
struct nfp_pf_dev *pf_dev;
@@ -109,12 +109,13 @@ nfp_net_start(struct rte_eth_dev *dev)
"with NFP multiport PF");
return -EINVAL;
}
- if (intr_handle->type == RTE_INTR_HANDLE_UIO) {
+ if (rte_intr_type_get(intr_handle) ==
+ RTE_INTR_HANDLE_UIO) {
/*
* Better not to share LSC with RX interrupts.
* Unregistering LSC interrupt handler
*/
- rte_intr_callback_unregister(&pci_dev->intr_handle,
+ rte_intr_callback_unregister(pci_dev->intr_handle,
nfp_net_dev_interrupt_handler, (void *)dev);
if (dev->data->nb_rx_queues > 1) {
@@ -333,10 +334,10 @@ nfp_net_close(struct rte_eth_dev *dev)
nfp_cpp_free(pf_dev->cpp);
rte_free(pf_dev);
- rte_intr_disable(&pci_dev->intr_handle);
+ rte_intr_disable(pci_dev->intr_handle);
/* unregister callback func from eal lib */
- rte_intr_callback_unregister(&pci_dev->intr_handle,
+ rte_intr_callback_unregister(pci_dev->intr_handle,
nfp_net_dev_interrupt_handler,
(void *)dev);
@@ -579,7 +580,7 @@ nfp_net_init(struct rte_eth_dev *eth_dev)
if (rte_eal_process_type() == RTE_PROC_PRIMARY) {
/* Registering LSC interrupt handler */
- rte_intr_callback_register(&pci_dev->intr_handle,
+ rte_intr_callback_register(pci_dev->intr_handle,
nfp_net_dev_interrupt_handler,
(void *)eth_dev);
/* Telling the firmware about the LSC interrupt entry */
diff --git a/drivers/net/nfp/nfp_ethdev_vf.c b/drivers/net/nfp/nfp_ethdev_vf.c
index 817fe64dbc..5557a1e002 100644
--- a/drivers/net/nfp/nfp_ethdev_vf.c
+++ b/drivers/net/nfp/nfp_ethdev_vf.c
@@ -51,7 +51,7 @@ static int
nfp_netvf_start(struct rte_eth_dev *dev)
{
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
uint32_t new_ctrl, update = 0;
struct nfp_net_hw *hw;
struct rte_eth_conf *dev_conf;
@@ -71,12 +71,13 @@ nfp_netvf_start(struct rte_eth_dev *dev)
/* check and configure queue intr-vector mapping */
if (dev->data->dev_conf.intr_conf.rxq != 0) {
- if (intr_handle->type == RTE_INTR_HANDLE_UIO) {
+ if (rte_intr_type_get(intr_handle) ==
+ RTE_INTR_HANDLE_UIO) {
/*
* Better not to share LSC with RX interrupts.
* Unregistering LSC interrupt handler
*/
- rte_intr_callback_unregister(&pci_dev->intr_handle,
+ rte_intr_callback_unregister(pci_dev->intr_handle,
nfp_net_dev_interrupt_handler, (void *)dev);
if (dev->data->nb_rx_queues > 1) {
@@ -225,10 +226,10 @@ nfp_netvf_close(struct rte_eth_dev *dev)
nfp_net_reset_rx_queue(this_rx_q);
}
- rte_intr_disable(&pci_dev->intr_handle);
+ rte_intr_disable(pci_dev->intr_handle);
/* unregister callback func from eal lib */
- rte_intr_callback_unregister(&pci_dev->intr_handle,
+ rte_intr_callback_unregister(pci_dev->intr_handle,
nfp_net_dev_interrupt_handler,
(void *)dev);
@@ -445,7 +446,7 @@ nfp_netvf_init(struct rte_eth_dev *eth_dev)
if (rte_eal_process_type() == RTE_PROC_PRIMARY) {
/* Registering LSC interrupt handler */
- rte_intr_callback_register(&pci_dev->intr_handle,
+ rte_intr_callback_register(pci_dev->intr_handle,
nfp_net_dev_interrupt_handler,
(void *)eth_dev);
/* Telling the firmware about the LSC interrupt entry */
diff --git a/drivers/net/ngbe/ngbe_ethdev.c b/drivers/net/ngbe/ngbe_ethdev.c
index fc76b84b5b..466e089b34 100644
--- a/drivers/net/ngbe/ngbe_ethdev.c
+++ b/drivers/net/ngbe/ngbe_ethdev.c
@@ -129,7 +129,7 @@ eth_ngbe_dev_init(struct rte_eth_dev *eth_dev, void *init_params __rte_unused)
{
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
struct ngbe_hw *hw = ngbe_dev_hw(eth_dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
const struct rte_memzone *mz;
uint32_t ctrl_ext;
int err;
@@ -334,7 +334,7 @@ ngbe_dev_start(struct rte_eth_dev *dev)
{
struct ngbe_hw *hw = ngbe_dev_hw(dev);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
uint32_t intr_vector = 0;
int err;
bool link_up = false, negotiate = false;
@@ -372,11 +372,9 @@ ngbe_dev_start(struct rte_eth_dev *dev)
return -1;
}
- if (rte_intr_dp_is_en(intr_handle) && intr_handle->intr_vec == NULL) {
- intr_handle->intr_vec =
- rte_zmalloc("intr_vec",
- dev->data->nb_rx_queues * sizeof(int), 0);
- if (intr_handle->intr_vec == NULL) {
+ if (rte_intr_dp_is_en(intr_handle)) {
+ if (rte_intr_vec_list_alloc(intr_handle, "intr_vec",
+ dev->data->nb_rx_queues)) {
PMD_INIT_LOG(ERR,
"Failed to allocate %d rx_queues intr_vec",
dev->data->nb_rx_queues);
@@ -503,7 +501,7 @@ ngbe_dev_stop(struct rte_eth_dev *dev)
struct rte_eth_link link;
struct ngbe_hw *hw = ngbe_dev_hw(dev);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
if (hw->adapter_stopped)
return 0;
@@ -540,10 +538,7 @@ ngbe_dev_stop(struct rte_eth_dev *dev)
/* Clean datapath event and queue/vec mapping */
rte_intr_efd_disable(intr_handle);
- if (intr_handle->intr_vec != NULL) {
- rte_free(intr_handle->intr_vec);
- intr_handle->intr_vec = NULL;
- }
+ rte_intr_vec_list_free(intr_handle);
hw->adapter_stopped = true;
dev->data->dev_started = 0;
@@ -559,7 +554,7 @@ ngbe_dev_close(struct rte_eth_dev *dev)
{
struct ngbe_hw *hw = ngbe_dev_hw(dev);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
int retries = 0;
int ret;
@@ -1093,7 +1088,7 @@ static void
ngbe_configure_msix(struct rte_eth_dev *dev)
{
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
struct ngbe_hw *hw = ngbe_dev_hw(dev);
uint32_t queue_id, base = NGBE_MISC_VEC_ID;
uint32_t vec = NGBE_MISC_VEC_ID;
@@ -1128,8 +1123,10 @@ ngbe_configure_msix(struct rte_eth_dev *dev)
queue_id++) {
/* by default, 1:1 mapping */
ngbe_set_ivar_map(hw, 0, queue_id, vec);
- intr_handle->intr_vec[queue_id] = vec;
- if (vec < base + intr_handle->nb_efd - 1)
+ rte_intr_vec_list_index_set(intr_handle,
+ queue_id, vec);
+ if (vec < base + rte_intr_nb_efd_get(intr_handle)
+ - 1)
vec++;
}
diff --git a/drivers/net/octeontx2/otx2_ethdev_irq.c b/drivers/net/octeontx2/otx2_ethdev_irq.c
index b121488faf..cc573bb2e8 100644
--- a/drivers/net/octeontx2/otx2_ethdev_irq.c
+++ b/drivers/net/octeontx2/otx2_ethdev_irq.c
@@ -34,7 +34,7 @@ static int
nix_lf_register_err_irq(struct rte_eth_dev *eth_dev)
{
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
- struct rte_intr_handle *handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *handle = pci_dev->intr_handle;
struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
int rc, vec;
@@ -54,7 +54,7 @@ static void
nix_lf_unregister_err_irq(struct rte_eth_dev *eth_dev)
{
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
- struct rte_intr_handle *handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *handle = pci_dev->intr_handle;
struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
int vec;
@@ -90,7 +90,7 @@ static int
nix_lf_register_ras_irq(struct rte_eth_dev *eth_dev)
{
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
- struct rte_intr_handle *handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *handle = pci_dev->intr_handle;
struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
int rc, vec;
@@ -110,7 +110,7 @@ static void
nix_lf_unregister_ras_irq(struct rte_eth_dev *eth_dev)
{
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
- struct rte_intr_handle *handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *handle = pci_dev->intr_handle;
struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
int vec;
@@ -263,7 +263,7 @@ int
oxt2_nix_register_queue_irqs(struct rte_eth_dev *eth_dev)
{
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
- struct rte_intr_handle *handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *handle = pci_dev->intr_handle;
struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
int vec, q, sqs, rqs, qs, rc = 0;
@@ -308,7 +308,7 @@ void
oxt2_nix_unregister_queue_irqs(struct rte_eth_dev *eth_dev)
{
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
- struct rte_intr_handle *handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *handle = pci_dev->intr_handle;
struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
int vec, q;
@@ -332,7 +332,7 @@ int
oxt2_nix_register_cq_irqs(struct rte_eth_dev *eth_dev)
{
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
- struct rte_intr_handle *handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *handle = pci_dev->intr_handle;
struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
uint8_t rc = 0, vec, q;
@@ -362,20 +362,19 @@ oxt2_nix_register_cq_irqs(struct rte_eth_dev *eth_dev)
return rc;
}
- if (!handle->intr_vec) {
- handle->intr_vec = rte_zmalloc("intr_vec",
- dev->configured_cints *
- sizeof(int), 0);
- if (!handle->intr_vec) {
- otx2_err("Failed to allocate %d rx intr_vec",
- dev->configured_cints);
- return -ENOMEM;
- }
+ rc = rte_intr_vec_list_alloc(handle, "intr_vec",
+ dev->configured_cints);
+ if (rc) {
+ otx2_err("Failed to allocate intr vec list, "
+ "rc=%d", rc);
+ return rc;
}
/* VFIO vector zero is reserved for misc interrupt so
* doing required adjustment. (b13bfab4cd)
*/
- handle->intr_vec[q] = RTE_INTR_VEC_RXTX_OFFSET + vec;
+ if (rte_intr_vec_list_index_set(handle, q,
+ RTE_INTR_VEC_RXTX_OFFSET + vec))
+ return -1;
/* Configure CQE interrupt coalescing parameters */
otx2_write64(((CQ_CQE_THRESH_DEFAULT) |
@@ -395,7 +394,7 @@ void
oxt2_nix_unregister_cq_irqs(struct rte_eth_dev *eth_dev)
{
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
- struct rte_intr_handle *handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *handle = pci_dev->intr_handle;
struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
int vec, q;
diff --git a/drivers/net/qede/qede_ethdev.c b/drivers/net/qede/qede_ethdev.c
index c907d7fd83..8ca00e7f6c 100644
--- a/drivers/net/qede/qede_ethdev.c
+++ b/drivers/net/qede/qede_ethdev.c
@@ -1569,17 +1569,17 @@ static int qede_dev_close(struct rte_eth_dev *eth_dev)
qdev->ops->common->slowpath_stop(edev);
qdev->ops->common->remove(edev);
- rte_intr_disable(&pci_dev->intr_handle);
+ rte_intr_disable(pci_dev->intr_handle);
- switch (pci_dev->intr_handle.type) {
+ switch (rte_intr_type_get(pci_dev->intr_handle)) {
case RTE_INTR_HANDLE_UIO_INTX:
case RTE_INTR_HANDLE_VFIO_LEGACY:
- rte_intr_callback_unregister(&pci_dev->intr_handle,
+ rte_intr_callback_unregister(pci_dev->intr_handle,
qede_interrupt_handler_intx,
(void *)eth_dev);
break;
default:
- rte_intr_callback_unregister(&pci_dev->intr_handle,
+ rte_intr_callback_unregister(pci_dev->intr_handle,
qede_interrupt_handler,
(void *)eth_dev);
}
@@ -2554,22 +2554,22 @@ static int qede_common_dev_init(struct rte_eth_dev *eth_dev, bool is_vf)
}
qede_update_pf_params(edev);
- switch (pci_dev->intr_handle.type) {
+ switch (rte_intr_type_get(pci_dev->intr_handle)) {
case RTE_INTR_HANDLE_UIO_INTX:
case RTE_INTR_HANDLE_VFIO_LEGACY:
int_mode = ECORE_INT_MODE_INTA;
- rte_intr_callback_register(&pci_dev->intr_handle,
+ rte_intr_callback_register(pci_dev->intr_handle,
qede_interrupt_handler_intx,
(void *)eth_dev);
break;
default:
int_mode = ECORE_INT_MODE_MSIX;
- rte_intr_callback_register(&pci_dev->intr_handle,
+ rte_intr_callback_register(pci_dev->intr_handle,
qede_interrupt_handler,
(void *)eth_dev);
}
- if (rte_intr_enable(&pci_dev->intr_handle)) {
+ if (rte_intr_enable(pci_dev->intr_handle)) {
DP_ERR(edev, "rte_intr_enable() failed\n");
rc = -ENODEV;
goto err;
diff --git a/drivers/net/sfc/sfc_intr.c b/drivers/net/sfc/sfc_intr.c
index 69414fd839..ab67aa9237 100644
--- a/drivers/net/sfc/sfc_intr.c
+++ b/drivers/net/sfc/sfc_intr.c
@@ -79,7 +79,7 @@ sfc_intr_line_handler(void *cb_arg)
if (qmask & (1 << sa->mgmt_evq_index))
sfc_intr_handle_mgmt_evq(sa);
- if (rte_intr_ack(&pci_dev->intr_handle) != 0)
+ if (rte_intr_ack(pci_dev->intr_handle) != 0)
sfc_err(sa, "cannot reenable interrupts");
sfc_log_init(sa, "done");
@@ -123,7 +123,7 @@ sfc_intr_message_handler(void *cb_arg)
sfc_intr_handle_mgmt_evq(sa);
- if (rte_intr_ack(&pci_dev->intr_handle) != 0)
+ if (rte_intr_ack(pci_dev->intr_handle) != 0)
sfc_err(sa, "cannot reenable interrupts");
sfc_log_init(sa, "done");
@@ -159,7 +159,7 @@ sfc_intr_start(struct sfc_adapter *sa)
goto fail_intr_init;
pci_dev = RTE_ETH_DEV_TO_PCI(sa->eth_dev);
- intr_handle = &pci_dev->intr_handle;
+ intr_handle = pci_dev->intr_handle;
if (intr->handler != NULL) {
if (intr->rxq_intr && rte_intr_cap_multiple(intr_handle)) {
@@ -171,16 +171,15 @@ sfc_intr_start(struct sfc_adapter *sa)
goto fail_rte_intr_efd_enable;
}
if (rte_intr_dp_is_en(intr_handle)) {
- intr_handle->intr_vec =
- rte_calloc("intr_vec",
- sa->eth_dev->data->nb_rx_queues, sizeof(int),
- 0);
- if (intr_handle->intr_vec == NULL) {
+ if (rte_intr_vec_list_alloc(intr_handle,
+ "intr_vec",
+ sa->eth_dev->data->nb_rx_queues)) {
sfc_err(sa,
"Failed to allocate %d rx_queues intr_vec",
sa->eth_dev->data->nb_rx_queues);
goto fail_intr_vector_alloc;
}
+
}
sfc_log_init(sa, "rte_intr_callback_register");
@@ -214,16 +213,17 @@ sfc_intr_start(struct sfc_adapter *sa)
efx_intr_enable(sa->nic);
}
- sfc_log_init(sa, "done type=%u max_intr=%d nb_efd=%u vec=%p",
- intr_handle->type, intr_handle->max_intr,
- intr_handle->nb_efd, intr_handle->intr_vec);
+ sfc_log_init(sa, "done type=%u max_intr=%d nb_efd=%u",
+ rte_intr_type_get(intr_handle),
+ rte_intr_max_intr_get(intr_handle),
+ rte_intr_nb_efd_get(intr_handle));
return 0;
fail_rte_intr_enable:
rte_intr_callback_unregister(intr_handle, intr->handler, (void *)sa);
fail_rte_intr_cb_reg:
- rte_free(intr_handle->intr_vec);
+ rte_intr_vec_list_free(intr_handle);
fail_intr_vector_alloc:
rte_intr_efd_disable(intr_handle);
@@ -250,9 +250,9 @@ sfc_intr_stop(struct sfc_adapter *sa)
efx_intr_disable(sa->nic);
- intr_handle = &pci_dev->intr_handle;
+ intr_handle = pci_dev->intr_handle;
- rte_free(intr_handle->intr_vec);
+ rte_intr_vec_list_free(intr_handle);
rte_intr_efd_disable(intr_handle);
if (rte_intr_disable(intr_handle) != 0)
@@ -322,7 +322,7 @@ sfc_intr_attach(struct sfc_adapter *sa)
sfc_log_init(sa, "entry");
- switch (pci_dev->intr_handle.type) {
+ switch (rte_intr_type_get(pci_dev->intr_handle)) {
#ifdef RTE_EXEC_ENV_LINUX
case RTE_INTR_HANDLE_UIO_INTX:
case RTE_INTR_HANDLE_VFIO_LEGACY:
diff --git a/drivers/net/tap/rte_eth_tap.c b/drivers/net/tap/rte_eth_tap.c
index ef3399ee0f..a9a7658147 100644
--- a/drivers/net/tap/rte_eth_tap.c
+++ b/drivers/net/tap/rte_eth_tap.c
@@ -1663,7 +1663,8 @@ tap_dev_intr_handler(void *cb_arg)
struct rte_eth_dev *dev = cb_arg;
struct pmd_internals *pmd = dev->data->dev_private;
- tap_nl_recv(pmd->intr_handle.fd, tap_nl_msg_handler, dev);
+ tap_nl_recv(rte_intr_fd_get(pmd->intr_handle),
+ tap_nl_msg_handler, dev);
}
static int
@@ -1674,22 +1675,22 @@ tap_lsc_intr_handle_set(struct rte_eth_dev *dev, int set)
/* In any case, disable interrupt if the conf is no longer there. */
if (!dev->data->dev_conf.intr_conf.lsc) {
- if (pmd->intr_handle.fd != -1) {
+ if (rte_intr_fd_get(pmd->intr_handle) != -1)
goto clean;
- }
+
return 0;
}
if (set) {
- pmd->intr_handle.fd = tap_nl_init(RTMGRP_LINK);
- if (unlikely(pmd->intr_handle.fd == -1))
+ rte_intr_fd_set(pmd->intr_handle, tap_nl_init(RTMGRP_LINK));
+ if (unlikely(rte_intr_fd_get(pmd->intr_handle) == -1))
return -EBADF;
return rte_intr_callback_register(
- &pmd->intr_handle, tap_dev_intr_handler, dev);
+ pmd->intr_handle, tap_dev_intr_handler, dev);
}
clean:
do {
- ret = rte_intr_callback_unregister(&pmd->intr_handle,
+ ret = rte_intr_callback_unregister(pmd->intr_handle,
tap_dev_intr_handler, dev);
if (ret >= 0) {
break;
@@ -1702,8 +1703,8 @@ tap_lsc_intr_handle_set(struct rte_eth_dev *dev, int set)
}
} while (true);
- tap_nl_final(pmd->intr_handle.fd);
- pmd->intr_handle.fd = -1;
+ tap_nl_final(rte_intr_fd_get(pmd->intr_handle));
+ rte_intr_fd_set(pmd->intr_handle, -1);
return 0;
}
@@ -1918,6 +1919,13 @@ eth_dev_tap_create(struct rte_vdev_device *vdev, const char *tap_name,
goto error_exit;
}
+ /* Allocate interrupt instance */
+ pmd->intr_handle = rte_intr_instance_alloc(RTE_INTR_INSTANCE_F_SHARED);
+ if (pmd->intr_handle == NULL) {
+ TAP_LOG(ERR, "Failed to allocate intr handle");
+ goto error_exit;
+ }
+
/* Setup some default values */
data = dev->data;
data->dev_private = pmd;
@@ -1935,9 +1943,9 @@ eth_dev_tap_create(struct rte_vdev_device *vdev, const char *tap_name,
dev->rx_pkt_burst = pmd_rx_burst;
dev->tx_pkt_burst = pmd_tx_burst;
- pmd->intr_handle.type = RTE_INTR_HANDLE_EXT;
- pmd->intr_handle.fd = -1;
- dev->intr_handle = &pmd->intr_handle;
+ rte_intr_type_set(pmd->intr_handle, RTE_INTR_HANDLE_EXT);
+ rte_intr_fd_set(pmd->intr_handle, -1);
+ dev->intr_handle = pmd->intr_handle;
/* Presetup the fds to -1 as being not valid */
for (i = 0; i < RTE_PMD_TAP_MAX_QUEUES; i++) {
@@ -2088,6 +2096,7 @@ eth_dev_tap_create(struct rte_vdev_device *vdev, const char *tap_name,
/* mac_addrs must not be freed alone because part of dev_private */
dev->data->mac_addrs = NULL;
rte_eth_dev_release_port(dev);
+ rte_intr_instance_free(pmd->intr_handle);
error_exit_nodev:
TAP_LOG(ERR, "%s Unable to initialize %s",
diff --git a/drivers/net/tap/rte_eth_tap.h b/drivers/net/tap/rte_eth_tap.h
index a98ea11a33..996021e424 100644
--- a/drivers/net/tap/rte_eth_tap.h
+++ b/drivers/net/tap/rte_eth_tap.h
@@ -89,7 +89,7 @@ struct pmd_internals {
LIST_HEAD(tap_implicit_flows, rte_flow) implicit_flows;
struct rx_queue rxq[RTE_PMD_TAP_MAX_QUEUES]; /* List of RX queues */
struct tx_queue txq[RTE_PMD_TAP_MAX_QUEUES]; /* List of TX queues */
- struct rte_intr_handle intr_handle; /* LSC interrupt handle. */
+ struct rte_intr_handle *intr_handle; /* LSC interrupt handle. */
int ka_fd; /* keep-alive file descriptor */
struct rte_mempool *gso_ctx_mp; /* Mempool for GSO packets */
};
diff --git a/drivers/net/tap/tap_intr.c b/drivers/net/tap/tap_intr.c
index 1cacc15d9f..56c343acea 100644
--- a/drivers/net/tap/tap_intr.c
+++ b/drivers/net/tap/tap_intr.c
@@ -29,12 +29,13 @@ static void
tap_rx_intr_vec_uninstall(struct rte_eth_dev *dev)
{
struct pmd_internals *pmd = dev->data->dev_private;
- struct rte_intr_handle *intr_handle = &pmd->intr_handle;
+ struct rte_intr_handle *intr_handle = pmd->intr_handle;
rte_intr_free_epoll_fd(intr_handle);
- free(intr_handle->intr_vec);
- intr_handle->intr_vec = NULL;
- intr_handle->nb_efd = 0;
+ rte_intr_vec_list_free(intr_handle);
+ rte_intr_nb_efd_set(intr_handle, 0);
+
+ rte_intr_instance_free(intr_handle);
}
/**
@@ -52,15 +53,15 @@ tap_rx_intr_vec_install(struct rte_eth_dev *dev)
struct pmd_internals *pmd = dev->data->dev_private;
struct pmd_process_private *process_private = dev->process_private;
unsigned int rxqs_n = pmd->dev->data->nb_rx_queues;
- struct rte_intr_handle *intr_handle = &pmd->intr_handle;
+ struct rte_intr_handle *intr_handle = pmd->intr_handle;
unsigned int n = RTE_MIN(rxqs_n, (uint32_t)RTE_MAX_RXTX_INTR_VEC_ID);
unsigned int i;
unsigned int count = 0;
if (!dev->data->dev_conf.intr_conf.rxq)
return 0;
- intr_handle->intr_vec = malloc(sizeof(int) * rxqs_n);
- if (intr_handle->intr_vec == NULL) {
+
+ if (rte_intr_vec_list_alloc(intr_handle, NULL, rxqs_n)) {
rte_errno = ENOMEM;
TAP_LOG(ERR,
"failed to allocate memory for interrupt vector,"
@@ -73,19 +74,23 @@ tap_rx_intr_vec_install(struct rte_eth_dev *dev)
/* Skip queues that cannot request interrupts. */
if (!rxq || process_private->rxq_fds[i] == -1) {
/* Use invalid intr_vec[] index to disable entry. */
- intr_handle->intr_vec[i] =
- RTE_INTR_VEC_RXTX_OFFSET +
- RTE_MAX_RXTX_INTR_VEC_ID;
+ if (rte_intr_vec_list_index_set(intr_handle, i,
+ RTE_INTR_VEC_RXTX_OFFSET + RTE_MAX_RXTX_INTR_VEC_ID))
+ return -rte_errno;
continue;
}
- intr_handle->intr_vec[i] = RTE_INTR_VEC_RXTX_OFFSET + count;
- intr_handle->efds[count] = process_private->rxq_fds[i];
+ if (rte_intr_vec_list_index_set(intr_handle, i,
+ RTE_INTR_VEC_RXTX_OFFSET + count))
+ return -rte_errno;
+ if (rte_intr_efds_index_set(intr_handle, count,
+ process_private->rxq_fds[i]))
+ return -rte_errno;
count++;
}
if (!count)
tap_rx_intr_vec_uninstall(dev);
- else
- intr_handle->nb_efd = count;
+ else if (rte_intr_nb_efd_set(intr_handle, count))
+ return -rte_errno;
return 0;
}
diff --git a/drivers/net/thunderx/nicvf_ethdev.c b/drivers/net/thunderx/nicvf_ethdev.c
index 762647e3b6..fc334cf734 100644
--- a/drivers/net/thunderx/nicvf_ethdev.c
+++ b/drivers/net/thunderx/nicvf_ethdev.c
@@ -1858,6 +1858,8 @@ nicvf_dev_close(struct rte_eth_dev *dev)
nicvf_periodic_alarm_stop(nicvf_vf_interrupt, nic->snicvf[i]);
}
+ rte_intr_instance_free(nic->intr_handle);
+
return 0;
}
@@ -2157,6 +2159,14 @@ nicvf_eth_dev_init(struct rte_eth_dev *eth_dev)
goto fail;
}
+ /* Allocate interrupt instance */
+ nic->intr_handle = rte_intr_instance_alloc(RTE_INTR_INSTANCE_F_SHARED);
+ if (nic->intr_handle == NULL) {
+ PMD_INIT_LOG(ERR, "Failed to allocate intr handle");
+ ret = -ENODEV;
+ goto fail;
+ }
+
nicvf_disable_all_interrupts(nic);
ret = nicvf_periodic_alarm_start(nicvf_interrupt, eth_dev);
diff --git a/drivers/net/thunderx/nicvf_struct.h b/drivers/net/thunderx/nicvf_struct.h
index 0ca207d0dd..c7ea13313e 100644
--- a/drivers/net/thunderx/nicvf_struct.h
+++ b/drivers/net/thunderx/nicvf_struct.h
@@ -100,7 +100,7 @@ struct nicvf {
uint16_t subsystem_vendor_id;
struct nicvf_rbdr *rbdr;
struct nicvf_rss_reta_info rss_info;
- struct rte_intr_handle intr_handle;
+ struct rte_intr_handle *intr_handle;
uint8_t cpi_alg;
uint16_t mtu;
int skip_bytes;
diff --git a/drivers/net/txgbe/txgbe_ethdev.c b/drivers/net/txgbe/txgbe_ethdev.c
index 4b3b703029..169272ded5 100644
--- a/drivers/net/txgbe/txgbe_ethdev.c
+++ b/drivers/net/txgbe/txgbe_ethdev.c
@@ -548,7 +548,7 @@ eth_txgbe_dev_init(struct rte_eth_dev *eth_dev, void *init_params __rte_unused)
struct txgbe_dcb_config *dcb_config = TXGBE_DEV_DCB_CONFIG(eth_dev);
struct txgbe_filter_info *filter_info = TXGBE_DEV_FILTER(eth_dev);
struct txgbe_bw_conf *bw_conf = TXGBE_DEV_BW_CONF(eth_dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
const struct rte_memzone *mz;
uint32_t ctrl_ext;
uint16_t csum;
@@ -1620,7 +1620,7 @@ txgbe_dev_start(struct rte_eth_dev *dev)
struct txgbe_hw_stats *hw_stats = TXGBE_DEV_STATS(dev);
struct txgbe_vf_info *vfinfo = *TXGBE_DEV_VFDATA(dev);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
uint32_t intr_vector = 0;
int err;
bool link_up = false, negotiate = 0;
@@ -1670,17 +1670,14 @@ txgbe_dev_start(struct rte_eth_dev *dev)
return -1;
}
- if (rte_intr_dp_is_en(intr_handle) && !intr_handle->intr_vec) {
- intr_handle->intr_vec =
- rte_zmalloc("intr_vec",
- dev->data->nb_rx_queues * sizeof(int), 0);
- if (intr_handle->intr_vec == NULL) {
+ if (rte_intr_dp_is_en(intr_handle)) {
+ if (rte_intr_vec_list_alloc(intr_handle, "intr_vec",
+ dev->data->nb_rx_queues)) {
PMD_INIT_LOG(ERR, "Failed to allocate %d rx_queues"
" intr_vec", dev->data->nb_rx_queues);
return -ENOMEM;
}
}
-
/* configure msix for sleep until rx interrupt */
txgbe_configure_msix(dev);
@@ -1861,7 +1858,7 @@ txgbe_dev_stop(struct rte_eth_dev *dev)
struct txgbe_hw *hw = TXGBE_DEV_HW(dev);
struct txgbe_vf_info *vfinfo = *TXGBE_DEV_VFDATA(dev);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
int vf;
struct txgbe_tm_conf *tm_conf = TXGBE_DEV_TM_CONF(dev);
@@ -1911,10 +1908,7 @@ txgbe_dev_stop(struct rte_eth_dev *dev)
/* Clean datapath event and queue/vec mapping */
rte_intr_efd_disable(intr_handle);
- if (intr_handle->intr_vec != NULL) {
- rte_free(intr_handle->intr_vec);
- intr_handle->intr_vec = NULL;
- }
+ rte_intr_vec_list_free(intr_handle);
/* reset hierarchy commit */
tm_conf->committed = false;
@@ -1977,7 +1971,7 @@ txgbe_dev_close(struct rte_eth_dev *dev)
{
struct txgbe_hw *hw = TXGBE_DEV_HW(dev);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
int retries = 0;
int ret;
@@ -2936,8 +2930,8 @@ txgbe_dev_interrupt_get_status(struct rte_eth_dev *dev,
struct txgbe_hw *hw = TXGBE_DEV_HW(dev);
struct txgbe_interrupt *intr = TXGBE_DEV_INTR(dev);
- if (intr_handle->type != RTE_INTR_HANDLE_UIO &&
- intr_handle->type != RTE_INTR_HANDLE_VFIO_MSIX)
+ if (rte_intr_type_get(intr_handle) != RTE_INTR_HANDLE_UIO &&
+ rte_intr_type_get(intr_handle) != RTE_INTR_HANDLE_VFIO_MSIX)
wr32(hw, TXGBE_PX_INTA, 1);
/* clear all cause mask */
@@ -3103,7 +3097,7 @@ txgbe_dev_interrupt_delayed_handler(void *param)
{
struct rte_eth_dev *dev = (struct rte_eth_dev *)param;
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
struct txgbe_interrupt *intr = TXGBE_DEV_INTR(dev);
struct txgbe_hw *hw = TXGBE_DEV_HW(dev);
uint32_t eicr;
@@ -3623,7 +3617,7 @@ static int
txgbe_dev_rx_queue_intr_enable(struct rte_eth_dev *dev, uint16_t queue_id)
{
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
uint32_t mask;
struct txgbe_hw *hw = TXGBE_DEV_HW(dev);
@@ -3705,7 +3699,7 @@ static void
txgbe_configure_msix(struct rte_eth_dev *dev)
{
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
struct txgbe_hw *hw = TXGBE_DEV_HW(dev);
uint32_t queue_id, base = TXGBE_MISC_VEC_ID;
uint32_t vec = TXGBE_MISC_VEC_ID;
@@ -3739,8 +3733,10 @@ txgbe_configure_msix(struct rte_eth_dev *dev)
queue_id++) {
/* by default, 1:1 mapping */
txgbe_set_ivar_map(hw, 0, queue_id, vec);
- intr_handle->intr_vec[queue_id] = vec;
- if (vec < base + intr_handle->nb_efd - 1)
+ rte_intr_vec_list_index_set(intr_handle,
+ queue_id, vec);
+ if (vec < base + rte_intr_nb_efd_get(intr_handle)
+ - 1)
vec++;
}
diff --git a/drivers/net/txgbe/txgbe_ethdev_vf.c b/drivers/net/txgbe/txgbe_ethdev_vf.c
index 283b52e8f3..4dda55b0c2 100644
--- a/drivers/net/txgbe/txgbe_ethdev_vf.c
+++ b/drivers/net/txgbe/txgbe_ethdev_vf.c
@@ -166,7 +166,7 @@ eth_txgbevf_dev_init(struct rte_eth_dev *eth_dev)
int err;
uint32_t tc, tcs;
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
struct txgbe_hw *hw = TXGBE_DEV_HW(eth_dev);
struct txgbe_vfta *shadow_vfta = TXGBE_DEV_VFTA(eth_dev);
struct txgbe_hwstrip *hwstrip = TXGBE_DEV_HWSTRIP(eth_dev);
@@ -608,7 +608,7 @@ txgbevf_dev_start(struct rte_eth_dev *dev)
struct txgbe_hw *hw = TXGBE_DEV_HW(dev);
uint32_t intr_vector = 0;
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
int err, mask = 0;
@@ -669,11 +669,9 @@ txgbevf_dev_start(struct rte_eth_dev *dev)
return -1;
}
- if (rte_intr_dp_is_en(intr_handle) && !intr_handle->intr_vec) {
- intr_handle->intr_vec =
- rte_zmalloc("intr_vec",
- dev->data->nb_rx_queues * sizeof(int), 0);
- if (intr_handle->intr_vec == NULL) {
+ if (rte_intr_dp_is_en(intr_handle)) {
+ if (rte_intr_vec_list_alloc(intr_handle, "intr_vec",
+ dev->data->nb_rx_queues)) {
PMD_INIT_LOG(ERR, "Failed to allocate %d rx_queues"
" intr_vec", dev->data->nb_rx_queues);
return -ENOMEM;
@@ -712,7 +710,7 @@ txgbevf_dev_stop(struct rte_eth_dev *dev)
struct txgbe_hw *hw = TXGBE_DEV_HW(dev);
struct txgbe_adapter *adapter = TXGBE_DEV_ADAPTER(dev);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
if (hw->adapter_stopped)
return 0;
@@ -739,10 +737,7 @@ txgbevf_dev_stop(struct rte_eth_dev *dev)
/* Clean datapath event and queue/vec mapping */
rte_intr_efd_disable(intr_handle);
- if (intr_handle->intr_vec != NULL) {
- rte_free(intr_handle->intr_vec);
- intr_handle->intr_vec = NULL;
- }
+ rte_intr_vec_list_free(intr_handle);
adapter->rss_reta_updated = 0;
hw->dev_start = false;
@@ -755,7 +750,7 @@ txgbevf_dev_close(struct rte_eth_dev *dev)
{
struct txgbe_hw *hw = TXGBE_DEV_HW(dev);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
int ret;
PMD_INIT_FUNC_TRACE();
@@ -916,7 +911,7 @@ static int
txgbevf_dev_rx_queue_intr_enable(struct rte_eth_dev *dev, uint16_t queue_id)
{
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
struct txgbe_interrupt *intr = TXGBE_DEV_INTR(dev);
struct txgbe_hw *hw = TXGBE_DEV_HW(dev);
uint32_t vec = TXGBE_MISC_VEC_ID;
@@ -938,7 +933,7 @@ txgbevf_dev_rx_queue_intr_disable(struct rte_eth_dev *dev, uint16_t queue_id)
struct txgbe_interrupt *intr = TXGBE_DEV_INTR(dev);
struct txgbe_hw *hw = TXGBE_DEV_HW(dev);
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
uint32_t vec = TXGBE_MISC_VEC_ID;
if (rte_intr_allow_others(intr_handle))
@@ -978,7 +973,7 @@ static void
txgbevf_configure_msix(struct rte_eth_dev *dev)
{
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
- struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
struct txgbe_hw *hw = TXGBE_DEV_HW(dev);
uint32_t q_idx;
uint32_t vector_idx = TXGBE_MISC_VEC_ID;
@@ -1004,8 +999,10 @@ txgbevf_configure_msix(struct rte_eth_dev *dev)
* as TXGBE_VF_MAXMSIVECOTR = 1
*/
txgbevf_set_ivar_map(hw, 0, q_idx, vector_idx);
- intr_handle->intr_vec[q_idx] = vector_idx;
- if (vector_idx < base + intr_handle->nb_efd - 1)
+ rte_intr_vec_list_index_set(intr_handle, q_idx,
+ vector_idx);
+ if (vector_idx < base + rte_intr_nb_efd_get(intr_handle)
+ - 1)
vector_idx++;
}
diff --git a/drivers/net/vhost/rte_eth_vhost.c b/drivers/net/vhost/rte_eth_vhost.c
index beb4b8de2d..5111304ff9 100644
--- a/drivers/net/vhost/rte_eth_vhost.c
+++ b/drivers/net/vhost/rte_eth_vhost.c
@@ -523,40 +523,43 @@ static int
eth_vhost_update_intr(struct rte_eth_dev *eth_dev, uint16_t rxq_idx)
{
struct rte_intr_handle *handle = eth_dev->intr_handle;
- struct rte_epoll_event rev;
+ struct rte_epoll_event rev, *elist;
int epfd, ret;
- if (!handle)
+ if (handle == NULL)
return 0;
- if (handle->efds[rxq_idx] == handle->elist[rxq_idx].fd)
+ elist = rte_intr_elist_index_get(handle, rxq_idx);
+ if (rte_intr_efds_index_get(handle, rxq_idx) == elist->fd)
return 0;
VHOST_LOG(INFO, "kickfd for rxq-%d was changed, updating handler.\n",
rxq_idx);
- if (handle->elist[rxq_idx].fd != -1)
+ if (elist->fd != -1)
VHOST_LOG(ERR, "Unexpected previous kickfd value (Got %d, expected -1).\n",
- handle->elist[rxq_idx].fd);
+ elist->fd);
/*
* First remove invalid epoll event, and then install
* the new one. May be solved with a proper API in the
* future.
*/
- epfd = handle->elist[rxq_idx].epfd;
- rev = handle->elist[rxq_idx];
+ epfd = elist->epfd;
+ rev = *elist;
ret = rte_epoll_ctl(epfd, EPOLL_CTL_DEL, rev.fd,
- &handle->elist[rxq_idx]);
+ elist);
if (ret) {
VHOST_LOG(ERR, "Delete epoll event failed.\n");
return ret;
}
- rev.fd = handle->efds[rxq_idx];
- handle->elist[rxq_idx] = rev;
- ret = rte_epoll_ctl(epfd, EPOLL_CTL_ADD, rev.fd,
- &handle->elist[rxq_idx]);
+ rev.fd = rte_intr_efds_index_get(handle, rxq_idx);
+ if (rte_intr_elist_index_set(handle, rxq_idx, rev))
+ return -rte_errno;
+
+ elist = rte_intr_elist_index_get(handle, rxq_idx);
+ ret = rte_epoll_ctl(epfd, EPOLL_CTL_ADD, rev.fd, elist);
if (ret) {
VHOST_LOG(ERR, "Add epoll event failed.\n");
return ret;
@@ -634,12 +637,10 @@ eth_vhost_uninstall_intr(struct rte_eth_dev *dev)
{
struct rte_intr_handle *intr_handle = dev->intr_handle;
- if (intr_handle) {
- if (intr_handle->intr_vec)
- free(intr_handle->intr_vec);
- free(intr_handle);
+ if (intr_handle != NULL) {
+ rte_intr_vec_list_free(intr_handle);
+ rte_intr_instance_free(intr_handle);
}
-
dev->intr_handle = NULL;
}
@@ -653,32 +654,31 @@ eth_vhost_install_intr(struct rte_eth_dev *dev)
int ret;
/* uninstall firstly if we are reconnecting */
- if (dev->intr_handle)
+ if (dev->intr_handle != NULL)
eth_vhost_uninstall_intr(dev);
- dev->intr_handle = malloc(sizeof(*dev->intr_handle));
- if (!dev->intr_handle) {
+ dev->intr_handle = rte_intr_instance_alloc(RTE_INTR_INSTANCE_F_PRIVATE);
+ if (dev->intr_handle == NULL) {
VHOST_LOG(ERR, "Fail to allocate intr_handle\n");
return -ENOMEM;
}
- memset(dev->intr_handle, 0, sizeof(*dev->intr_handle));
-
- dev->intr_handle->efd_counter_size = sizeof(uint64_t);
+ if (rte_intr_efd_counter_size_set(dev->intr_handle, sizeof(uint64_t)))
+ return -rte_errno;
- dev->intr_handle->intr_vec =
- malloc(nb_rxq * sizeof(dev->intr_handle->intr_vec[0]));
-
- if (!dev->intr_handle->intr_vec) {
+ if (rte_intr_vec_list_alloc(dev->intr_handle, NULL, nb_rxq)) {
VHOST_LOG(ERR,
"Failed to allocate memory for interrupt vector\n");
- free(dev->intr_handle);
+ rte_intr_instance_free(dev->intr_handle);
return -ENOMEM;
}
+
VHOST_LOG(INFO, "Prepare intr vec\n");
for (i = 0; i < nb_rxq; i++) {
- dev->intr_handle->intr_vec[i] = RTE_INTR_VEC_RXTX_OFFSET + i;
- dev->intr_handle->efds[i] = -1;
+ if (rte_intr_vec_list_index_set(dev->intr_handle, i, RTE_INTR_VEC_RXTX_OFFSET + i))
+ return -rte_errno;
+ if (rte_intr_efds_index_set(dev->intr_handle, i, -1))
+ return -rte_errno;
vq = dev->data->rx_queues[i];
if (!vq) {
VHOST_LOG(INFO, "rxq-%d not setup yet, skip!\n", i);
@@ -697,13 +697,20 @@ eth_vhost_install_intr(struct rte_eth_dev *dev)
"rxq-%d's kickfd is invalid, skip!\n", i);
continue;
}
- dev->intr_handle->efds[i] = vring.kickfd;
+
+ if (rte_intr_efds_index_set(dev->intr_handle, i, vring.kickfd))
+ continue;
VHOST_LOG(INFO, "Installed intr vec for rxq-%d\n", i);
}
- dev->intr_handle->nb_efd = nb_rxq;
- dev->intr_handle->max_intr = nb_rxq + 1;
- dev->intr_handle->type = RTE_INTR_HANDLE_VDEV;
+ if (rte_intr_nb_efd_set(dev->intr_handle, nb_rxq))
+ return -rte_errno;
+
+ if (rte_intr_max_intr_set(dev->intr_handle, nb_rxq + 1))
+ return -rte_errno;
+
+ if (rte_intr_type_set(dev->intr_handle, RTE_INTR_HANDLE_VDEV))
+ return -rte_errno;
return 0;
}
@@ -908,7 +915,10 @@ vring_conf_update(int vid, struct rte_eth_dev *eth_dev, uint16_t vring_id)
vring_id);
return ret;
}
- eth_dev->intr_handle->efds[rx_idx] = vring.kickfd;
+
+ if (rte_intr_efds_index_set(eth_dev->intr_handle, rx_idx,
+ vring.kickfd))
+ return -rte_errno;
vq = eth_dev->data->rx_queues[rx_idx];
if (!vq) {
diff --git a/drivers/net/virtio/virtio_ethdev.c b/drivers/net/virtio/virtio_ethdev.c
index 94120b3490..26de006c77 100644
--- a/drivers/net/virtio/virtio_ethdev.c
+++ b/drivers/net/virtio/virtio_ethdev.c
@@ -731,8 +731,7 @@ virtio_dev_close(struct rte_eth_dev *dev)
if (intr_conf->lsc || intr_conf->rxq) {
virtio_intr_disable(dev);
rte_intr_efd_disable(dev->intr_handle);
- rte_free(dev->intr_handle->intr_vec);
- dev->intr_handle->intr_vec = NULL;
+ rte_intr_vec_list_free(dev->intr_handle);
}
virtio_reset(hw);
@@ -1643,7 +1642,9 @@ virtio_queues_bind_intr(struct rte_eth_dev *dev)
PMD_INIT_LOG(INFO, "queue/interrupt binding");
for (i = 0; i < dev->data->nb_rx_queues; ++i) {
- dev->intr_handle->intr_vec[i] = i + 1;
+ if (rte_intr_vec_list_index_set(dev->intr_handle, i,
+ i + 1))
+ return -rte_errno;
if (VIRTIO_OPS(hw)->set_queue_irq(hw, hw->vqs[i * 2], i + 1) ==
VIRTIO_MSI_NO_VECTOR) {
PMD_DRV_LOG(ERR, "failed to set queue vector");
@@ -1682,15 +1683,11 @@ virtio_configure_intr(struct rte_eth_dev *dev)
return -1;
}
- if (!dev->intr_handle->intr_vec) {
- dev->intr_handle->intr_vec =
- rte_zmalloc("intr_vec",
- hw->max_queue_pairs * sizeof(int), 0);
- if (!dev->intr_handle->intr_vec) {
- PMD_INIT_LOG(ERR, "Failed to allocate %u rxq vectors",
- hw->max_queue_pairs);
- return -ENOMEM;
- }
+ if (rte_intr_vec_list_alloc(dev->intr_handle, "intr_vec",
+ hw->max_queue_pairs)) {
+ PMD_INIT_LOG(ERR, "Failed to allocate %u rxq vectors",
+ hw->max_queue_pairs);
+ return -ENOMEM;
}
if (dev->data->dev_flags & RTE_ETH_DEV_INTR_LSC) {
diff --git a/drivers/net/virtio/virtio_user/virtio_user_dev.c b/drivers/net/virtio/virtio_user/virtio_user_dev.c
index 6a6145583b..35aa76b1ff 100644
--- a/drivers/net/virtio/virtio_user/virtio_user_dev.c
+++ b/drivers/net/virtio/virtio_user/virtio_user_dev.c
@@ -406,23 +406,37 @@ virtio_user_fill_intr_handle(struct virtio_user_dev *dev)
uint32_t i;
struct rte_eth_dev *eth_dev = &rte_eth_devices[dev->hw.port_id];
- if (!eth_dev->intr_handle) {
- eth_dev->intr_handle = malloc(sizeof(*eth_dev->intr_handle));
- if (!eth_dev->intr_handle) {
+ if (eth_dev->intr_handle == NULL) {
+ eth_dev->intr_handle =
+ rte_intr_instance_alloc(RTE_INTR_INSTANCE_F_PRIVATE);
+ if (eth_dev->intr_handle == NULL) {
PMD_DRV_LOG(ERR, "(%s) failed to allocate intr_handle", dev->path);
return -1;
}
- memset(eth_dev->intr_handle, 0, sizeof(*eth_dev->intr_handle));
}
- for (i = 0; i < dev->max_queue_pairs; ++i)
- eth_dev->intr_handle->efds[i] = dev->callfds[2 * i];
- eth_dev->intr_handle->nb_efd = dev->max_queue_pairs;
- eth_dev->intr_handle->max_intr = dev->max_queue_pairs + 1;
- eth_dev->intr_handle->type = RTE_INTR_HANDLE_VDEV;
+ for (i = 0; i < dev->max_queue_pairs; ++i) {
+ if (rte_intr_efds_index_set(eth_dev->intr_handle, i,
+ dev->callfds[i]))
+ return -rte_errno;
+ }
+
+ if (rte_intr_nb_efd_set(eth_dev->intr_handle, dev->max_queue_pairs))
+ return -rte_errno;
+
+ if (rte_intr_max_intr_set(eth_dev->intr_handle,
+ dev->max_queue_pairs + 1))
+ return -rte_errno;
+
+ if (rte_intr_type_set(eth_dev->intr_handle, RTE_INTR_HANDLE_VDEV))
+ return -rte_errno;
+
/* For virtio vdev, no need to read counter for clean */
- eth_dev->intr_handle->efd_counter_size = 0;
- eth_dev->intr_handle->fd = dev->ops->get_intr_fd(dev);
+ if (rte_intr_efd_counter_size_set(eth_dev->intr_handle, 0))
+ return -rte_errno;
+
+ if (rte_intr_fd_set(eth_dev->intr_handle, dev->ops->get_intr_fd(dev)))
+ return -rte_errno;
return 0;
}
@@ -656,10 +670,8 @@ virtio_user_dev_uninit(struct virtio_user_dev *dev)
{
struct rte_eth_dev *eth_dev = &rte_eth_devices[dev->hw.port_id];
- if (eth_dev->intr_handle) {
- free(eth_dev->intr_handle);
- eth_dev->intr_handle = NULL;
- }
+ rte_intr_instance_free(eth_dev->intr_handle);
+ eth_dev->intr_handle = NULL;
virtio_user_stop_device(dev);
@@ -962,7 +974,7 @@ virtio_user_dev_delayed_disconnect_handler(void *param)
return;
}
PMD_DRV_LOG(DEBUG, "Unregistering intr fd: %d",
- eth_dev->intr_handle->fd);
+ rte_intr_fd_get(eth_dev->intr_handle));
if (rte_intr_callback_unregister(eth_dev->intr_handle,
virtio_interrupt_handler,
eth_dev) != 1)
@@ -972,10 +984,11 @@ virtio_user_dev_delayed_disconnect_handler(void *param)
if (dev->ops->server_disconnect)
dev->ops->server_disconnect(dev);
- eth_dev->intr_handle->fd = dev->ops->get_intr_fd(dev);
+ rte_intr_fd_set(eth_dev->intr_handle,
+ dev->ops->get_intr_fd(dev));
PMD_DRV_LOG(DEBUG, "Registering intr fd: %d",
- eth_dev->intr_handle->fd);
+ rte_intr_fd_get(eth_dev->intr_handle));
if (rte_intr_callback_register(eth_dev->intr_handle,
virtio_interrupt_handler,
@@ -996,16 +1009,17 @@ virtio_user_dev_delayed_intr_reconfig_handler(void *param)
struct rte_eth_dev *eth_dev = &rte_eth_devices[dev->hw.port_id];
PMD_DRV_LOG(DEBUG, "Unregistering intr fd: %d",
- eth_dev->intr_handle->fd);
+ rte_intr_fd_get(eth_dev->intr_handle));
if (rte_intr_callback_unregister(eth_dev->intr_handle,
virtio_interrupt_handler,
eth_dev) != 1)
PMD_DRV_LOG(ERR, "interrupt unregister failed");
- eth_dev->intr_handle->fd = dev->ops->get_intr_fd(dev);
+ rte_intr_fd_set(eth_dev->intr_handle, dev->ops->get_intr_fd(dev));
- PMD_DRV_LOG(DEBUG, "Registering intr fd: %d", eth_dev->intr_handle->fd);
+ PMD_DRV_LOG(DEBUG, "Registering intr fd: %d",
+ rte_intr_fd_get(eth_dev->intr_handle));
if (rte_intr_callback_register(eth_dev->intr_handle,
virtio_interrupt_handler, eth_dev))
diff --git a/drivers/net/vmxnet3/vmxnet3_ethdev.c b/drivers/net/vmxnet3/vmxnet3_ethdev.c
index 26d9edf531..d1ef1cad08 100644
--- a/drivers/net/vmxnet3/vmxnet3_ethdev.c
+++ b/drivers/net/vmxnet3/vmxnet3_ethdev.c
@@ -619,11 +619,9 @@ vmxnet3_configure_msix(struct rte_eth_dev *dev)
return -1;
}
- if (rte_intr_dp_is_en(intr_handle) && !intr_handle->intr_vec) {
- intr_handle->intr_vec =
- rte_zmalloc("intr_vec",
- dev->data->nb_rx_queues * sizeof(int), 0);
- if (intr_handle->intr_vec == NULL) {
+ if (rte_intr_dp_is_en(intr_handle)) {
+ if (rte_intr_vec_list_alloc(intr_handle, "intr_vec",
+ dev->data->nb_rx_queues)) {
PMD_INIT_LOG(ERR, "Failed to allocate %d Rx queues intr_vec",
dev->data->nb_rx_queues);
rte_intr_efd_disable(intr_handle);
@@ -634,8 +632,7 @@ vmxnet3_configure_msix(struct rte_eth_dev *dev)
if (!rte_intr_allow_others(intr_handle) &&
dev->data->dev_conf.intr_conf.lsc != 0) {
PMD_INIT_LOG(ERR, "not enough intr vector to support both Rx interrupt and LSC");
- rte_free(intr_handle->intr_vec);
- intr_handle->intr_vec = NULL;
+ rte_intr_vec_list_free(intr_handle);
rte_intr_efd_disable(intr_handle);
return -1;
}
@@ -643,17 +640,19 @@ vmxnet3_configure_msix(struct rte_eth_dev *dev)
/* if we cannot allocate one MSI-X vector per queue, don't enable
* interrupt mode.
*/
- if (hw->intr.num_intrs != (intr_handle->nb_efd + 1)) {
+ if (hw->intr.num_intrs !=
+ (rte_intr_nb_efd_get(intr_handle) + 1)) {
PMD_INIT_LOG(ERR, "Device configured with %d Rx intr vectors, expecting %d",
- hw->intr.num_intrs, intr_handle->nb_efd + 1);
- rte_free(intr_handle->intr_vec);
- intr_handle->intr_vec = NULL;
+ hw->intr.num_intrs,
+ rte_intr_nb_efd_get(intr_handle) + 1);
+ rte_intr_vec_list_free(intr_handle);
rte_intr_efd_disable(intr_handle);
return -1;
}
for (i = 0; i < dev->data->nb_rx_queues; i++)
- intr_handle->intr_vec[i] = i + 1;
+ if (rte_intr_vec_list_index_set(intr_handle, i, i + 1))
+ return -rte_errno;
for (i = 0; i < hw->intr.num_intrs; i++)
hw->intr.mod_levels[i] = UPT1_IML_ADAPTIVE;
@@ -801,7 +800,9 @@ vmxnet3_setup_driver_shared(struct rte_eth_dev *dev)
if (hw->intr.lsc_only)
tqd->conf.intrIdx = 1;
else
- tqd->conf.intrIdx = intr_handle->intr_vec[i];
+ tqd->conf.intrIdx =
+ rte_intr_vec_list_index_get(intr_handle,
+ i);
tqd->status.stopped = TRUE;
tqd->status.error = 0;
memset(&tqd->stats, 0, sizeof(tqd->stats));
@@ -824,7 +825,9 @@ vmxnet3_setup_driver_shared(struct rte_eth_dev *dev)
if (hw->intr.lsc_only)
rqd->conf.intrIdx = 1;
else
- rqd->conf.intrIdx = intr_handle->intr_vec[i];
+ rqd->conf.intrIdx =
+ rte_intr_vec_list_index_get(intr_handle,
+ i);
rqd->status.stopped = TRUE;
rqd->status.error = 0;
memset(&rqd->stats, 0, sizeof(rqd->stats));
@@ -1021,10 +1024,7 @@ vmxnet3_dev_stop(struct rte_eth_dev *dev)
/* Clean datapath event and queue/vector mapping */
rte_intr_efd_disable(intr_handle);
- if (intr_handle->intr_vec != NULL) {
- rte_free(intr_handle->intr_vec);
- intr_handle->intr_vec = NULL;
- }
+ rte_intr_vec_list_free(intr_handle);
/* quiesce the device first */
VMXNET3_WRITE_BAR1_REG(hw, VMXNET3_REG_CMD, VMXNET3_CMD_QUIESCE_DEV);
@@ -1670,7 +1670,9 @@ vmxnet3_dev_rx_queue_intr_enable(struct rte_eth_dev *dev, uint16_t queue_id)
{
struct vmxnet3_hw *hw = dev->data->dev_private;
- vmxnet3_enable_intr(hw, dev->intr_handle->intr_vec[queue_id]);
+ vmxnet3_enable_intr(hw,
+ rte_intr_vec_list_index_get(dev->intr_handle,
+ queue_id));
return 0;
}
@@ -1680,7 +1682,8 @@ vmxnet3_dev_rx_queue_intr_disable(struct rte_eth_dev *dev, uint16_t queue_id)
{
struct vmxnet3_hw *hw = dev->data->dev_private;
- vmxnet3_disable_intr(hw, dev->intr_handle->intr_vec[queue_id]);
+ vmxnet3_disable_intr(hw,
+ rte_intr_vec_list_index_get(dev->intr_handle, queue_id));
return 0;
}
diff --git a/drivers/raw/ifpga/ifpga_rawdev.c b/drivers/raw/ifpga/ifpga_rawdev.c
index 76e6a8530b..8d9db585a4 100644
--- a/drivers/raw/ifpga/ifpga_rawdev.c
+++ b/drivers/raw/ifpga/ifpga_rawdev.c
@@ -73,7 +73,7 @@ static pthread_t ifpga_monitor_start_thread;
#define IFPGA_MAX_IRQ 12
/* 0 for FME interrupt, others are reserved for AFU irq */
-static struct rte_intr_handle ifpga_irq_handle[IFPGA_MAX_IRQ];
+static struct rte_intr_handle *ifpga_irq_handle[IFPGA_MAX_IRQ];
static struct ifpga_rawdev *
ifpga_rawdev_allocate(struct rte_rawdev *rawdev);
@@ -1345,17 +1345,22 @@ ifpga_unregister_msix_irq(enum ifpga_irq_type type,
int vec_start, rte_intr_callback_fn handler, void *arg)
{
struct rte_intr_handle *intr_handle;
+ int rc, i;
if (type == IFPGA_FME_IRQ)
- intr_handle = &ifpga_irq_handle[0];
+ intr_handle = ifpga_irq_handle[0];
else if (type == IFPGA_AFU_IRQ)
- intr_handle = &ifpga_irq_handle[vec_start + 1];
+ intr_handle = ifpga_irq_handle[vec_start + 1];
else
return 0;
rte_intr_efd_disable(intr_handle);
- return rte_intr_callback_unregister(intr_handle, handler, arg);
+ rc = rte_intr_callback_unregister(intr_handle, handler, arg);
+
+ for (i = 0; i < IFPGA_MAX_IRQ; i++)
+ rte_intr_instance_free(ifpga_irq_handle[i]);
+ return rc;
}
int
@@ -1369,6 +1374,14 @@ ifpga_register_msix_irq(struct rte_rawdev *dev, int port_id,
struct opae_adapter *adapter;
struct opae_manager *mgr;
struct opae_accelerator *acc;
+ int *intr_efds = NULL, nb_intr, i;
+
+ for (i = 0; i < IFPGA_MAX_IRQ; i++) {
+ ifpga_irq_handle[i] =
+ rte_intr_instance_alloc(RTE_INTR_INSTANCE_F_PRIVATE);
+ if (ifpga_irq_handle[i] == NULL)
+ return -ENOMEM;
+ }
adapter = ifpga_rawdev_get_priv(dev);
if (!adapter)
@@ -1379,29 +1392,33 @@ ifpga_register_msix_irq(struct rte_rawdev *dev, int port_id,
return -ENODEV;
if (type == IFPGA_FME_IRQ) {
- intr_handle = &ifpga_irq_handle[0];
+ intr_handle = ifpga_irq_handle[0];
count = 1;
} else if (type == IFPGA_AFU_IRQ) {
- intr_handle = &ifpga_irq_handle[vec_start + 1];
+ intr_handle = ifpga_irq_handle[vec_start + 1];
} else {
return -EINVAL;
}
- intr_handle->type = RTE_INTR_HANDLE_VFIO_MSIX;
+ if (rte_intr_type_set(intr_handle, RTE_INTR_HANDLE_VFIO_MSIX))
+ return -rte_errno;
ret = rte_intr_efd_enable(intr_handle, count);
if (ret)
return -ENODEV;
- intr_handle->fd = intr_handle->efds[0];
+ if (rte_intr_fd_set(intr_handle,
+ rte_intr_efds_index_get(intr_handle, 0)))
+ return -rte_errno;
IFPGA_RAWDEV_PMD_DEBUG("register %s irq, vfio_fd=%d, fd=%d\n",
- name, intr_handle->vfio_dev_fd,
- intr_handle->fd);
+ name, rte_intr_dev_fd_get(intr_handle),
+ rte_intr_fd_get(intr_handle));
if (type == IFPGA_FME_IRQ) {
struct fpga_fme_err_irq_set err_irq_set;
- err_irq_set.evtfd = intr_handle->efds[0];
+ err_irq_set.evtfd = rte_intr_efds_index_get(intr_handle,
+ 0);
ret = opae_manager_ifpga_set_err_irq(mgr, &err_irq_set);
if (ret)
@@ -1411,20 +1428,33 @@ ifpga_register_msix_irq(struct rte_rawdev *dev, int port_id,
if (!acc)
return -EINVAL;
- ret = opae_acc_set_irq(acc, vec_start, count,
- intr_handle->efds);
- if (ret)
+ nb_intr = rte_intr_nb_intr_get(intr_handle);
+
+ intr_efds = calloc(nb_intr, sizeof(int));
+ if (!intr_efds)
+ return -ENOMEM;
+
+ for (i = 0; i < nb_intr; i++)
+ intr_efds[i] = rte_intr_efds_index_get(intr_handle, i);
+
+ ret = opae_acc_set_irq(acc, vec_start, count, intr_efds);
+ if (ret) {
+ free(intr_efds);
return -EINVAL;
+ }
}
/* register interrupt handler using DPDK API */
ret = rte_intr_callback_register(intr_handle,
handler, (void *)arg);
- if (ret)
+ if (ret) {
+ free(intr_efds);
return -EINVAL;
+ }
IFPGA_RAWDEV_PMD_INFO("success register %s interrupt\n", name);
+ free(intr_efds);
return 0;
}
@@ -1491,7 +1521,7 @@ ifpga_rawdev_create(struct rte_pci_device *pci_dev,
data->bus = pci_dev->addr.bus;
data->devid = pci_dev->addr.devid;
data->function = pci_dev->addr.function;
- data->vfio_dev_fd = pci_dev->intr_handle.vfio_dev_fd;
+ data->vfio_dev_fd = rte_intr_dev_fd_get(pci_dev->intr_handle);
adapter = rawdev->dev_private;
/* create a opae_adapter based on above device data */
diff --git a/drivers/raw/ntb/ntb.c b/drivers/raw/ntb/ntb.c
index 78cfcd79f7..46ac02e5ab 100644
--- a/drivers/raw/ntb/ntb.c
+++ b/drivers/raw/ntb/ntb.c
@@ -1044,13 +1044,10 @@ ntb_dev_close(struct rte_rawdev *dev)
ntb_queue_release(dev, i);
hw->queue_pairs = 0;
- intr_handle = &hw->pci_dev->intr_handle;
+ intr_handle = hw->pci_dev->intr_handle;
/* Clean datapath event and vec mapping */
rte_intr_efd_disable(intr_handle);
- if (intr_handle->intr_vec) {
- rte_free(intr_handle->intr_vec);
- intr_handle->intr_vec = NULL;
- }
+ rte_intr_vec_list_free(intr_handle);
/* Disable uio intr before callback unregister */
rte_intr_disable(intr_handle);
@@ -1402,7 +1399,7 @@ ntb_init_hw(struct rte_rawdev *dev, struct rte_pci_device *pci_dev)
/* Init doorbell. */
hw->db_valid_mask = RTE_LEN2MASK(hw->db_cnt, uint64_t);
- intr_handle = &pci_dev->intr_handle;
+ intr_handle = pci_dev->intr_handle;
/* Register callback func to eal lib */
rte_intr_callback_register(intr_handle,
ntb_dev_intr_handler, dev);
diff --git a/drivers/regex/octeontx2/otx2_regexdev_hw_access.c b/drivers/regex/octeontx2/otx2_regexdev_hw_access.c
index 620d5c9122..f8031d0f72 100644
--- a/drivers/regex/octeontx2/otx2_regexdev_hw_access.c
+++ b/drivers/regex/octeontx2/otx2_regexdev_hw_access.c
@@ -31,7 +31,7 @@ ree_lf_err_intr_unregister(const struct rte_regexdev *dev, uint16_t msix_off,
uintptr_t base)
{
struct rte_pci_device *pci_dev = RTE_DEV_TO_PCI(dev->device);
- struct rte_intr_handle *handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *handle = pci_dev->intr_handle;
/* Disable error interrupts */
otx2_write64(~0ull, base + OTX2_REE_LF_MISC_INT_ENA_W1C);
@@ -61,7 +61,7 @@ ree_lf_err_intr_register(const struct rte_regexdev *dev, uint16_t msix_off,
uintptr_t base)
{
struct rte_pci_device *pci_dev = RTE_DEV_TO_PCI(dev->device);
- struct rte_intr_handle *handle = &pci_dev->intr_handle;
+ struct rte_intr_handle *handle = pci_dev->intr_handle;
int ret;
/* Disable error interrupts */
diff --git a/drivers/vdpa/ifc/ifcvf_vdpa.c b/drivers/vdpa/ifc/ifcvf_vdpa.c
index 365da2a8b9..dd5251d382 100644
--- a/drivers/vdpa/ifc/ifcvf_vdpa.c
+++ b/drivers/vdpa/ifc/ifcvf_vdpa.c
@@ -162,7 +162,7 @@ ifcvf_vfio_setup(struct ifcvf_internal *internal)
if (rte_pci_map_device(dev))
goto err;
- internal->vfio_dev_fd = dev->intr_handle.vfio_dev_fd;
+ internal->vfio_dev_fd = rte_intr_dev_fd_get(dev->intr_handle);
for (i = 0; i < RTE_MIN(PCI_MAX_RESOURCE, IFCVF_PCI_MAX_RESOURCE);
i++) {
@@ -365,7 +365,8 @@ vdpa_enable_vfio_intr(struct ifcvf_internal *internal, bool m_rx)
irq_set->index = VFIO_PCI_MSIX_IRQ_INDEX;
irq_set->start = 0;
fd_ptr = (int *)&irq_set->data;
- fd_ptr[RTE_INTR_VEC_ZERO_OFFSET] = internal->pdev->intr_handle.fd;
+ fd_ptr[RTE_INTR_VEC_ZERO_OFFSET] =
+ rte_intr_fd_get(internal->pdev->intr_handle);
for (i = 0; i < nr_vring; i++)
internal->intr_fd[i] = -1;
diff --git a/drivers/vdpa/mlx5/mlx5_vdpa.c b/drivers/vdpa/mlx5/mlx5_vdpa.c
index 9a6f64797b..b9e84dd9bf 100644
--- a/drivers/vdpa/mlx5/mlx5_vdpa.c
+++ b/drivers/vdpa/mlx5/mlx5_vdpa.c
@@ -543,6 +543,12 @@ mlx5_vdpa_dev_probe(struct mlx5_common_device *cdev)
DRV_LOG(ERR, "Failed to allocate VAR %u.", errno);
goto error;
}
+ priv->err_intr_handle =
+ rte_intr_instance_alloc(RTE_INTR_INSTANCE_F_SHARED);
+ if (priv->err_intr_handle == NULL) {
+ DRV_LOG(ERR, "Fail to allocate intr_handle");
+ goto error;
+ }
priv->vdev = rte_vdpa_register_device(cdev->dev, &mlx5_vdpa_ops);
if (priv->vdev == NULL) {
DRV_LOG(ERR, "Failed to register vDPA device.");
@@ -561,6 +567,7 @@ mlx5_vdpa_dev_probe(struct mlx5_common_device *cdev)
if (priv) {
if (priv->var)
mlx5_glue->dv_free_var(priv->var);
+ rte_intr_instance_free(priv->err_intr_handle);
rte_free(priv);
}
return -rte_errno;
@@ -592,6 +599,7 @@ mlx5_vdpa_dev_remove(struct mlx5_common_device *cdev)
if (priv->vdev)
rte_vdpa_unregister_device(priv->vdev);
pthread_mutex_destroy(&priv->vq_config_lock);
+ rte_intr_instance_free(priv->err_intr_handle);
rte_free(priv);
}
return 0;
diff --git a/drivers/vdpa/mlx5/mlx5_vdpa.h b/drivers/vdpa/mlx5/mlx5_vdpa.h
index 5045fea773..cf4f384fa4 100644
--- a/drivers/vdpa/mlx5/mlx5_vdpa.h
+++ b/drivers/vdpa/mlx5/mlx5_vdpa.h
@@ -89,7 +89,7 @@ struct mlx5_vdpa_virtq {
void *buf;
uint32_t size;
} umems[3];
- struct rte_intr_handle intr_handle;
+ struct rte_intr_handle *intr_handle;
uint64_t err_time[3]; /* RDTSC time of recent errors. */
uint32_t n_retry;
struct mlx5_devx_virtio_q_couners_attr reset;
@@ -137,7 +137,7 @@ struct mlx5_vdpa_priv {
struct mlx5dv_devx_event_channel *eventc;
struct mlx5dv_devx_event_channel *err_chnl;
struct mlx5dv_devx_uar *uar;
- struct rte_intr_handle err_intr_handle;
+ struct rte_intr_handle *err_intr_handle;
struct mlx5_devx_obj *td;
struct mlx5_devx_obj *tiss[16]; /* TIS list for each LAG port. */
uint16_t nr_virtqs;
diff --git a/drivers/vdpa/mlx5/mlx5_vdpa_event.c b/drivers/vdpa/mlx5/mlx5_vdpa_event.c
index 19497597e6..042d22777f 100644
--- a/drivers/vdpa/mlx5/mlx5_vdpa_event.c
+++ b/drivers/vdpa/mlx5/mlx5_vdpa_event.c
@@ -411,12 +411,17 @@ mlx5_vdpa_err_event_setup(struct mlx5_vdpa_priv *priv)
DRV_LOG(ERR, "Failed to change device event channel FD.");
goto error;
}
- priv->err_intr_handle.fd = priv->err_chnl->fd;
- priv->err_intr_handle.type = RTE_INTR_HANDLE_EXT;
- if (rte_intr_callback_register(&priv->err_intr_handle,
+
+ if (rte_intr_fd_set(priv->err_intr_handle, priv->err_chnl->fd))
+ goto error;
+
+ if (rte_intr_type_set(priv->err_intr_handle, RTE_INTR_HANDLE_EXT))
+ goto error;
+
+ if (rte_intr_callback_register(priv->err_intr_handle,
mlx5_vdpa_err_interrupt_handler,
priv)) {
- priv->err_intr_handle.fd = 0;
+ rte_intr_fd_set(priv->err_intr_handle, 0);
DRV_LOG(ERR, "Failed to register error interrupt for device %d.",
priv->vid);
goto error;
@@ -436,20 +441,20 @@ mlx5_vdpa_err_event_unset(struct mlx5_vdpa_priv *priv)
int retries = MLX5_VDPA_INTR_RETRIES;
int ret = -EAGAIN;
- if (!priv->err_intr_handle.fd)
+ if (!rte_intr_fd_get(priv->err_intr_handle))
return;
while (retries-- && ret == -EAGAIN) {
- ret = rte_intr_callback_unregister(&priv->err_intr_handle,
+ ret = rte_intr_callback_unregister(priv->err_intr_handle,
mlx5_vdpa_err_interrupt_handler,
priv);
if (ret == -EAGAIN) {
DRV_LOG(DEBUG, "Try again to unregister fd %d "
"of error interrupt, retries = %d.",
- priv->err_intr_handle.fd, retries);
+ rte_intr_fd_get(priv->err_intr_handle),
+ retries);
rte_pause();
}
}
- memset(&priv->err_intr_handle, 0, sizeof(priv->err_intr_handle));
if (priv->err_chnl) {
#ifdef HAVE_IBV_DEVX_EVENT
union {
diff --git a/drivers/vdpa/mlx5/mlx5_vdpa_virtq.c b/drivers/vdpa/mlx5/mlx5_vdpa_virtq.c
index c5b357a83b..cb37ba097c 100644
--- a/drivers/vdpa/mlx5/mlx5_vdpa_virtq.c
+++ b/drivers/vdpa/mlx5/mlx5_vdpa_virtq.c
@@ -25,7 +25,8 @@ mlx5_vdpa_virtq_handler(void *cb_arg)
int nbytes;
do {
- nbytes = read(virtq->intr_handle.fd, &buf, 8);
+ nbytes = read(rte_intr_fd_get(virtq->intr_handle), &buf,
+ 8);
if (nbytes < 0) {
if (errno == EINTR ||
errno == EWOULDBLOCK ||
@@ -58,21 +59,23 @@ mlx5_vdpa_virtq_unset(struct mlx5_vdpa_virtq *virtq)
int retries = MLX5_VDPA_INTR_RETRIES;
int ret = -EAGAIN;
- if (virtq->intr_handle.fd != -1) {
+ if (rte_intr_fd_get(virtq->intr_handle) != -1) {
while (retries-- && ret == -EAGAIN) {
- ret = rte_intr_callback_unregister(&virtq->intr_handle,
+ ret = rte_intr_callback_unregister(virtq->intr_handle,
mlx5_vdpa_virtq_handler,
virtq);
if (ret == -EAGAIN) {
DRV_LOG(DEBUG, "Try again to unregister fd %d "
- "of virtq %d interrupt, retries = %d.",
- virtq->intr_handle.fd,
- (int)virtq->index, retries);
+ "of virtq %d interrupt, retries = %d.",
+ rte_intr_fd_get(virtq->intr_handle),
+ (int)virtq->index, retries);
+
usleep(MLX5_VDPA_INTR_RETRIES_USEC);
}
}
- virtq->intr_handle.fd = -1;
+ rte_intr_fd_set(virtq->intr_handle, -1);
}
+ rte_intr_instance_free(virtq->intr_handle);
if (virtq->virtq) {
ret = mlx5_vdpa_virtq_stop(virtq->priv, virtq->index);
if (ret)
@@ -337,21 +340,33 @@ mlx5_vdpa_virtq_setup(struct mlx5_vdpa_priv *priv, int index)
virtq->priv = priv;
rte_write32(virtq->index, priv->virtq_db_addr);
/* Setup doorbell mapping. */
- virtq->intr_handle.fd = vq.kickfd;
- if (virtq->intr_handle.fd == -1) {
+ virtq->intr_handle =
+ rte_intr_instance_alloc(RTE_INTR_INSTANCE_F_SHARED);
+ if (virtq->intr_handle == NULL) {
+ DRV_LOG(ERR, "Fail to allocate intr_handle");
+ goto error;
+ }
+
+ if (rte_intr_fd_set(virtq->intr_handle, vq.kickfd))
+ goto error;
+
+ if (rte_intr_fd_get(virtq->intr_handle) == -1) {
DRV_LOG(WARNING, "Virtq %d kickfd is invalid.", index);
} else {
- virtq->intr_handle.type = RTE_INTR_HANDLE_EXT;
- if (rte_intr_callback_register(&virtq->intr_handle,
+ if (rte_intr_type_set(virtq->intr_handle, RTE_INTR_HANDLE_EXT))
+ goto error;
+
+ if (rte_intr_callback_register(virtq->intr_handle,
mlx5_vdpa_virtq_handler,
virtq)) {
- virtq->intr_handle.fd = -1;
+ rte_intr_fd_set(virtq->intr_handle, -1);
DRV_LOG(ERR, "Failed to register virtq %d interrupt.",
index);
goto error;
} else {
DRV_LOG(DEBUG, "Register fd %d interrupt for virtq %d.",
- virtq->intr_handle.fd, index);
+ rte_intr_fd_get(virtq->intr_handle),
+ index);
}
}
/* Subscribe virtq error event. */
@@ -506,7 +521,8 @@ mlx5_vdpa_virtq_is_modified(struct mlx5_vdpa_priv *priv,
if (ret)
return -1;
- if (vq.size != virtq->vq_size || vq.kickfd != virtq->intr_handle.fd)
+ if (vq.size != virtq->vq_size || vq.kickfd !=
+ rte_intr_fd_get(virtq->intr_handle))
return 1;
if (virtq->eqp.cq.cq_obj.cq) {
if (vq.callfd != virtq->eqp.cq.callfd)
diff --git a/lib/ethdev/ethdev_pci.h b/lib/ethdev/ethdev_pci.h
index 59c5d7b40f..71aa4b2e98 100644
--- a/lib/ethdev/ethdev_pci.h
+++ b/lib/ethdev/ethdev_pci.h
@@ -32,7 +32,7 @@ rte_eth_copy_pci_info(struct rte_eth_dev *eth_dev,
return;
}
- eth_dev->intr_handle = &pci_dev->intr_handle;
+ eth_dev->intr_handle = pci_dev->intr_handle;
if (rte_eal_process_type() == RTE_PROC_PRIMARY) {
eth_dev->data->dev_flags = 0;
--
2.23.0
^ permalink raw reply [flat|nested] 152+ messages in thread
* [dpdk-dev] [PATCH v8 7/9] interrupts: make interrupt handle structure opaque
2021-10-25 14:27 ` [dpdk-dev] [PATCH v8 0/9] make rte_intr_handle internal David Marchand
` (5 preceding siblings ...)
2021-10-25 14:27 ` [dpdk-dev] [PATCH v8 6/9] drivers: " David Marchand
@ 2021-10-25 14:27 ` David Marchand
2021-10-25 14:27 ` [dpdk-dev] [PATCH v8 8/9] interrupts: rename device specific file descriptor David Marchand
` (3 subsequent siblings)
10 siblings, 0 replies; 152+ messages in thread
From: David Marchand @ 2021-10-25 14:27 UTC (permalink / raw)
To: hkalra, dev; +Cc: dmitry.kozliuk, rasland, thomas
From: Harman Kalra <hkalra@marvell.com>
Moving the interrupt handle structure definition inside an EAL-private
header to make its fields totally opaque to the outside world.
Signed-off-by: Harman Kalra <hkalra@marvell.com>
Signed-off-by: David Marchand <david.marchand@redhat.com>
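To illustrate the effect (a sketch, not code from this patch): once the definition lives in an EAL-private header, code outside EAL that still dereferences the structure stops compiling and has to go through the accessors. The helper name below is hypothetical.

#include <stdio.h>
#include <rte_interrupts.h>

static void
dump_intr_fd(const struct rte_intr_handle *handle)
{
	/* int fd = handle->fd;            <-- field no longer visible outside EAL */
	int fd = rte_intr_fd_get(handle);  /* accessor keeps working */
	printf("interrupt event fd: %d\n", fd);
}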
---
Changes since v5:
- let rte_intr_handle fields untouched:
- split vfio / uio fd renames in a separate commit,
- split event list update in a separate commit,
- moved rte_intr_handle definition to an EAL private header,
- preserved dumping all info in interrupt tracepoints,
---
lib/eal/common/eal_common_interrupts.c | 2 +
lib/eal/common/eal_interrupts.h | 37 +++++++++++++
lib/eal/include/meson.build | 1 -
lib/eal/include/rte_eal_interrupts.h | 72 --------------------------
lib/eal/include/rte_eal_trace.h | 2 +
lib/eal/include/rte_interrupts.h | 24 ++++++++-
6 files changed, 63 insertions(+), 75 deletions(-)
create mode 100644 lib/eal/common/eal_interrupts.h
delete mode 100644 lib/eal/include/rte_eal_interrupts.h
diff --git a/lib/eal/common/eal_common_interrupts.c b/lib/eal/common/eal_common_interrupts.c
index 46064870f4..5886376d84 100644
--- a/lib/eal/common/eal_common_interrupts.c
+++ b/lib/eal/common/eal_common_interrupts.c
@@ -10,6 +10,8 @@
#include <rte_log.h>
#include <rte_malloc.h>
+#include "eal_interrupts.h"
+
/* Macros to check for valid interrupt handle */
#define CHECK_VALID_INTR_HANDLE(intr_handle) do { \
if (intr_handle == NULL) { \
diff --git a/lib/eal/common/eal_interrupts.h b/lib/eal/common/eal_interrupts.h
new file mode 100644
index 0000000000..beacc04b62
--- /dev/null
+++ b/lib/eal/common/eal_interrupts.h
@@ -0,0 +1,37 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2010-2014 Intel Corporation
+ */
+
+#ifndef EAL_INTERRUPTS_H
+#define EAL_INTERRUPTS_H
+
+struct rte_intr_handle {
+ RTE_STD_C11
+ union {
+ struct {
+ RTE_STD_C11
+ union {
+ /** VFIO device file descriptor */
+ int vfio_dev_fd;
+ /** UIO cfg file desc for uio_pci_generic */
+ int uio_cfg_fd;
+ };
+ int fd; /**< interrupt event file descriptor */
+ };
+ void *windows_handle; /**< device driver handle */
+ };
+ uint32_t alloc_flags; /**< flags passed at allocation */
+ enum rte_intr_handle_type type; /**< handle type */
+ uint32_t max_intr; /**< max interrupt requested */
+ uint32_t nb_efd; /**< number of available efd(event fd) */
+ uint8_t efd_counter_size; /**< size of efd counter, used for vdev */
+ uint16_t nb_intr;
+ /**< Max vector count, default RTE_MAX_RXTX_INTR_VEC_ID */
+ int efds[RTE_MAX_RXTX_INTR_VEC_ID]; /**< intr vectors/efds mapping */
+ struct rte_epoll_event elist[RTE_MAX_RXTX_INTR_VEC_ID];
+ /**< intr vector epoll event */
+ uint16_t vec_list_size;
+ int *intr_vec; /**< intr vector number array */
+};
+
+#endif /* EAL_INTERRUPTS_H */
diff --git a/lib/eal/include/meson.build b/lib/eal/include/meson.build
index 8e258607b8..86468d1a2b 100644
--- a/lib/eal/include/meson.build
+++ b/lib/eal/include/meson.build
@@ -49,7 +49,6 @@ headers += files(
'rte_version.h',
'rte_vfio.h',
)
-indirect_headers += files('rte_eal_interrupts.h')
# special case install the generic headers, since they go in a subdir
generic_headers = files(
diff --git a/lib/eal/include/rte_eal_interrupts.h b/lib/eal/include/rte_eal_interrupts.h
deleted file mode 100644
index 60bb60ca59..0000000000
--- a/lib/eal/include/rte_eal_interrupts.h
+++ /dev/null
@@ -1,72 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(c) 2010-2014 Intel Corporation
- */
-
-#ifndef _RTE_INTERRUPTS_H_
-#error "don't include this file directly, please include generic <rte_interrupts.h>"
-#endif
-
-/**
- * @file rte_eal_interrupts.h
- * @internal
- *
- * Contains function prototypes exposed by the EAL for interrupt handling by
- * drivers and other DPDK internal consumers.
- */
-
-#ifndef _RTE_EAL_INTERRUPTS_H_
-#define _RTE_EAL_INTERRUPTS_H_
-
-#define RTE_MAX_RXTX_INTR_VEC_ID 512
-#define RTE_INTR_VEC_ZERO_OFFSET 0
-#define RTE_INTR_VEC_RXTX_OFFSET 1
-
-/**
- * The interrupt source type, e.g. UIO, VFIO, ALARM etc.
- */
-enum rte_intr_handle_type {
- RTE_INTR_HANDLE_UNKNOWN = 0, /**< generic unknown handle */
- RTE_INTR_HANDLE_UIO, /**< uio device handle */
- RTE_INTR_HANDLE_UIO_INTX, /**< uio generic handle */
- RTE_INTR_HANDLE_VFIO_LEGACY, /**< vfio device handle (legacy) */
- RTE_INTR_HANDLE_VFIO_MSI, /**< vfio device handle (MSI) */
- RTE_INTR_HANDLE_VFIO_MSIX, /**< vfio device handle (MSIX) */
- RTE_INTR_HANDLE_ALARM, /**< alarm handle */
- RTE_INTR_HANDLE_EXT, /**< external handler */
- RTE_INTR_HANDLE_VDEV, /**< virtual device */
- RTE_INTR_HANDLE_DEV_EVENT, /**< device event handle */
- RTE_INTR_HANDLE_VFIO_REQ, /**< VFIO request handle */
- RTE_INTR_HANDLE_MAX /**< count of elements */
-};
-
-/** Handle for interrupts. */
-struct rte_intr_handle {
- RTE_STD_C11
- union {
- struct {
- RTE_STD_C11
- union {
- /** VFIO device file descriptor */
- int vfio_dev_fd;
- /** UIO cfg file desc for uio_pci_generic */
- int uio_cfg_fd;
- };
- int fd; /**< interrupt event file descriptor */
- };
- void *windows_handle; /**< device driver handle */
- };
- uint32_t alloc_flags; /**< flags passed at allocation */
- enum rte_intr_handle_type type; /**< handle type */
- uint32_t max_intr; /**< max interrupt requested */
- uint32_t nb_efd; /**< number of available efd(event fd) */
- uint8_t efd_counter_size; /**< size of efd counter, used for vdev */
- uint16_t nb_intr;
- /**< Max vector count, default RTE_MAX_RXTX_INTR_VEC_ID */
- int efds[RTE_MAX_RXTX_INTR_VEC_ID]; /**< intr vectors/efds mapping */
- struct rte_epoll_event elist[RTE_MAX_RXTX_INTR_VEC_ID];
- /**< intr vector epoll event */
- uint16_t vec_list_size;
- int *intr_vec; /**< intr vector number array */
-};
-
-#endif /* _RTE_EAL_INTERRUPTS_H_ */
diff --git a/lib/eal/include/rte_eal_trace.h b/lib/eal/include/rte_eal_trace.h
index 495ae1ee1d..af7b2d0bf0 100644
--- a/lib/eal/include/rte_eal_trace.h
+++ b/lib/eal/include/rte_eal_trace.h
@@ -19,6 +19,8 @@ extern "C" {
#include <rte_interrupts.h>
#include <rte_trace_point.h>
+#include "eal_interrupts.h"
+
/* Alarm */
RTE_TRACE_POINT(
rte_eal_trace_alarm_set,
diff --git a/lib/eal/include/rte_interrupts.h b/lib/eal/include/rte_interrupts.h
index a515a8c073..edbf0faeef 100644
--- a/lib/eal/include/rte_interrupts.h
+++ b/lib/eal/include/rte_interrupts.h
@@ -35,6 +35,28 @@ struct rte_intr_handle;
/** Interrupt instance will be shared between primary and secondary processes. */
#define RTE_INTR_INSTANCE_F_SHARED RTE_BIT32(0)
+#define RTE_MAX_RXTX_INTR_VEC_ID 512
+#define RTE_INTR_VEC_ZERO_OFFSET 0
+#define RTE_INTR_VEC_RXTX_OFFSET 1
+
+/**
+ * The interrupt source type, e.g. UIO, VFIO, ALARM etc.
+ */
+enum rte_intr_handle_type {
+ RTE_INTR_HANDLE_UNKNOWN = 0, /**< generic unknown handle */
+ RTE_INTR_HANDLE_UIO, /**< uio device handle */
+ RTE_INTR_HANDLE_UIO_INTX, /**< uio generic handle */
+ RTE_INTR_HANDLE_VFIO_LEGACY, /**< vfio device handle (legacy) */
+ RTE_INTR_HANDLE_VFIO_MSI, /**< vfio device handle (MSI) */
+ RTE_INTR_HANDLE_VFIO_MSIX, /**< vfio device handle (MSIX) */
+ RTE_INTR_HANDLE_ALARM, /**< alarm handle */
+ RTE_INTR_HANDLE_EXT, /**< external handler */
+ RTE_INTR_HANDLE_VDEV, /**< virtual device */
+ RTE_INTR_HANDLE_DEV_EVENT, /**< device event handle */
+ RTE_INTR_HANDLE_VFIO_REQ, /**< VFIO request handle */
+ RTE_INTR_HANDLE_MAX /**< count of elements */
+};
+
/** Function to be registered for the specific interrupt */
typedef void (*rte_intr_callback_fn)(void *cb_arg);
@@ -45,8 +67,6 @@ typedef void (*rte_intr_callback_fn)(void *cb_arg);
typedef void (*rte_intr_unregister_callback_fn)(struct rte_intr_handle *intr_handle,
void *cb_arg);
-#include "rte_eal_interrupts.h"
-
/**
* It registers the callback for the specific interrupt. Multiple
* callbacks can be registered at the same time.
--
2.23.0
^ permalink raw reply [flat|nested] 152+ messages in thread
* [dpdk-dev] [PATCH v8 8/9] interrupts: rename device specific file descriptor
2021-10-25 14:27 ` [dpdk-dev] [PATCH v8 0/9] make rte_intr_handle internal David Marchand
` (6 preceding siblings ...)
2021-10-25 14:27 ` [dpdk-dev] [PATCH v8 7/9] interrupts: make interrupt handle structure opaque David Marchand
@ 2021-10-25 14:27 ` David Marchand
2021-10-25 14:27 ` [dpdk-dev] [PATCH v8 9/9] interrupts: extend event list David Marchand
` (2 subsequent siblings)
10 siblings, 0 replies; 152+ messages in thread
From: David Marchand @ 2021-10-25 14:27 UTC (permalink / raw)
To: hkalra, dev; +Cc: dmitry.kozliuk, rasland, thomas
From: Harman Kalra <hkalra@marvell.com>
VFIO and UIO are mutually exclusive, so storing the file descriptor in a
single field is enough.
Signed-off-by: Harman Kalra <hkalra@marvell.com>
Signed-off-by: David Marchand <david.marchand@redhat.com>
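A minimal sketch of what this merge means for callers (illustrative only, the helper names are not from the patch): both the VFIO and UIO code paths now store and read the same field through one accessor pair.

#include <rte_interrupts.h>

static int
store_device_fd(struct rte_intr_handle *handle, int fd)
{
	/* previously one of handle->vfio_dev_fd or handle->uio_cfg_fd */
	return rte_intr_dev_fd_set(handle, fd);
}

static int
load_device_fd(const struct rte_intr_handle *handle)
{
	return rte_intr_dev_fd_get(handle);
}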
---
Changes since v5:
- split from patch5,
---
lib/eal/common/eal_common_interrupts.c | 6 +++---
lib/eal/common/eal_interrupts.h | 8 +-------
lib/eal/include/rte_eal_trace.h | 8 ++++----
3 files changed, 8 insertions(+), 14 deletions(-)
diff --git a/lib/eal/common/eal_common_interrupts.c b/lib/eal/common/eal_common_interrupts.c
index 5886376d84..2146b933bb 100644
--- a/lib/eal/common/eal_common_interrupts.c
+++ b/lib/eal/common/eal_common_interrupts.c
@@ -72,7 +72,7 @@ struct rte_intr_handle *rte_intr_instance_dup(const struct rte_intr_handle *src)
intr_handle = rte_intr_instance_alloc(src->alloc_flags);
intr_handle->fd = src->fd;
- intr_handle->vfio_dev_fd = src->vfio_dev_fd;
+ intr_handle->dev_fd = src->dev_fd;
intr_handle->type = src->type;
intr_handle->max_intr = src->max_intr;
intr_handle->nb_efd = src->nb_efd;
@@ -139,7 +139,7 @@ int rte_intr_dev_fd_set(struct rte_intr_handle *intr_handle, int fd)
{
CHECK_VALID_INTR_HANDLE(intr_handle);
- intr_handle->vfio_dev_fd = fd;
+ intr_handle->dev_fd = fd;
return 0;
fail:
@@ -150,7 +150,7 @@ int rte_intr_dev_fd_get(const struct rte_intr_handle *intr_handle)
{
CHECK_VALID_INTR_HANDLE(intr_handle);
- return intr_handle->vfio_dev_fd;
+ return intr_handle->dev_fd;
fail:
return -1;
}
diff --git a/lib/eal/common/eal_interrupts.h b/lib/eal/common/eal_interrupts.h
index beacc04b62..1a4e5573b2 100644
--- a/lib/eal/common/eal_interrupts.h
+++ b/lib/eal/common/eal_interrupts.h
@@ -9,13 +9,7 @@ struct rte_intr_handle {
RTE_STD_C11
union {
struct {
- RTE_STD_C11
- union {
- /** VFIO device file descriptor */
- int vfio_dev_fd;
- /** UIO cfg file desc for uio_pci_generic */
- int uio_cfg_fd;
- };
+ int dev_fd; /**< VFIO/UIO cfg device file descriptor */
int fd; /**< interrupt event file descriptor */
};
void *windows_handle; /**< device driver handle */
diff --git a/lib/eal/include/rte_eal_trace.h b/lib/eal/include/rte_eal_trace.h
index af7b2d0bf0..5ef4398230 100644
--- a/lib/eal/include/rte_eal_trace.h
+++ b/lib/eal/include/rte_eal_trace.h
@@ -151,7 +151,7 @@ RTE_TRACE_POINT(
RTE_TRACE_POINT_ARGS(const struct rte_intr_handle *handle,
rte_intr_callback_fn cb, void *cb_arg, int rc),
rte_trace_point_emit_int(rc);
- rte_trace_point_emit_int(handle->vfio_dev_fd);
+ rte_trace_point_emit_int(handle->dev_fd);
rte_trace_point_emit_int(handle->fd);
rte_trace_point_emit_int(handle->type);
rte_trace_point_emit_u32(handle->max_intr);
@@ -164,7 +164,7 @@ RTE_TRACE_POINT(
RTE_TRACE_POINT_ARGS(const struct rte_intr_handle *handle,
rte_intr_callback_fn cb, void *cb_arg, int rc),
rte_trace_point_emit_int(rc);
- rte_trace_point_emit_int(handle->vfio_dev_fd);
+ rte_trace_point_emit_int(handle->dev_fd);
rte_trace_point_emit_int(handle->fd);
rte_trace_point_emit_int(handle->type);
rte_trace_point_emit_u32(handle->max_intr);
@@ -176,7 +176,7 @@ RTE_TRACE_POINT(
rte_eal_trace_intr_enable,
RTE_TRACE_POINT_ARGS(const struct rte_intr_handle *handle, int rc),
rte_trace_point_emit_int(rc);
- rte_trace_point_emit_int(handle->vfio_dev_fd);
+ rte_trace_point_emit_int(handle->dev_fd);
rte_trace_point_emit_int(handle->fd);
rte_trace_point_emit_int(handle->type);
rte_trace_point_emit_u32(handle->max_intr);
@@ -186,7 +186,7 @@ RTE_TRACE_POINT(
rte_eal_trace_intr_disable,
RTE_TRACE_POINT_ARGS(const struct rte_intr_handle *handle, int rc),
rte_trace_point_emit_int(rc);
- rte_trace_point_emit_int(handle->vfio_dev_fd);
+ rte_trace_point_emit_int(handle->dev_fd);
rte_trace_point_emit_int(handle->fd);
rte_trace_point_emit_int(handle->type);
rte_trace_point_emit_u32(handle->max_intr);
--
2.23.0
^ permalink raw reply [flat|nested] 152+ messages in thread
* [dpdk-dev] [PATCH v8 9/9] interrupts: extend event list
2021-10-25 14:27 ` [dpdk-dev] [PATCH v8 0/9] make rte_intr_handle internal David Marchand
` (7 preceding siblings ...)
2021-10-25 14:27 ` [dpdk-dev] [PATCH v8 8/9] interrupts: rename device specific file descriptor David Marchand
@ 2021-10-25 14:27 ` David Marchand
2021-10-28 15:58 ` Ji, Kai
2021-10-25 14:32 ` [dpdk-dev] [PATCH v8 0/9] make rte_intr_handle internal Raslan Darawsheh
2021-10-25 19:24 ` David Marchand
10 siblings, 1 reply; 152+ messages in thread
From: David Marchand @ 2021-10-25 14:27 UTC (permalink / raw)
To: hkalra, dev
Cc: dmitry.kozliuk, rasland, thomas, Anatoly Burakov,
Nithin Dabilpuram, Kiran Kumar K, Sunil Kumar Kori, Satha Rao
From: Harman Kalra <hkalra@marvell.com>
Dynamically allocate the efds and elist arrays of the intr_handle
structure, based on a size provided by the caller, e.g. the number of
MSI-X interrupts supported by a PCI device.
Signed-off-by: Harman Kalra <hkalra@marvell.com>
Signed-off-by: David Marchand <david.marchand@redhat.com>
Acked-by: Dmitry Kozlyuk <dmitry.kozliuk@gmail.com>
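A short sketch of the expected usage at probe time, mirroring the pci_vfio.c hunk below; msix_count and the wrapper name are illustrative stand-ins for the MSI-X vector count read from the device.

#include <rte_interrupts.h>

static int
size_event_list(struct rte_intr_handle *handle, int msix_count)
{
	/* resize efds/elist beyond the default RTE_MAX_RXTX_INTR_VEC_ID sizing */
	if (rte_intr_event_list_update(handle, msix_count) != 0)
		return -1;
	return 0;
}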
---
Changes since v6:
- removed unneeded checks on elist/efds array initialisation,
Changes since v5:
- split from patch5,
---
drivers/bus/pci/linux/pci_vfio.c | 6 ++
drivers/common/cnxk/roc_platform.h | 1 +
lib/eal/common/eal_common_interrupts.c | 95 +++++++++++++++++++++++++-
lib/eal/common/eal_interrupts.h | 5 +-
4 files changed, 102 insertions(+), 5 deletions(-)
diff --git a/drivers/bus/pci/linux/pci_vfio.c b/drivers/bus/pci/linux/pci_vfio.c
index 7b2f8296c5..f622e7f8e6 100644
--- a/drivers/bus/pci/linux/pci_vfio.c
+++ b/drivers/bus/pci/linux/pci_vfio.c
@@ -266,6 +266,12 @@ pci_vfio_setup_interrupts(struct rte_pci_device *dev, int vfio_dev_fd)
return -1;
}
+ /* Reallocate the efds and elist fields of intr_handle based
+ * on PCI device MSIX size.
+ */
+ if (rte_intr_event_list_update(dev->intr_handle, irq.count))
+ return -1;
+
/* if this vector cannot be used with eventfd, fail if we explicitly
* specified interrupt type, otherwise continue */
if ((irq.flags & VFIO_IRQ_INFO_EVENTFD) == 0) {
diff --git a/drivers/common/cnxk/roc_platform.h b/drivers/common/cnxk/roc_platform.h
index 60227b72d0..5da23fe5f8 100644
--- a/drivers/common/cnxk/roc_platform.h
+++ b/drivers/common/cnxk/roc_platform.h
@@ -121,6 +121,7 @@
#define plt_intr_instance_alloc rte_intr_instance_alloc
#define plt_intr_instance_dup rte_intr_instance_dup
#define plt_intr_instance_free rte_intr_instance_free
+#define plt_intr_event_list_update rte_intr_event_list_update
#define plt_intr_max_intr_get rte_intr_max_intr_get
#define plt_intr_max_intr_set rte_intr_max_intr_set
#define plt_intr_nb_efd_get rte_intr_nb_efd_get
diff --git a/lib/eal/common/eal_common_interrupts.c b/lib/eal/common/eal_common_interrupts.c
index 2146b933bb..da3ab006b8 100644
--- a/lib/eal/common/eal_common_interrupts.c
+++ b/lib/eal/common/eal_common_interrupts.c
@@ -53,10 +53,46 @@ struct rte_intr_handle *rte_intr_instance_alloc(uint32_t flags)
return NULL;
}
+ if (uses_rte_memory) {
+ intr_handle->efds = rte_zmalloc(NULL,
+ RTE_MAX_RXTX_INTR_VEC_ID * sizeof(int), 0);
+ } else {
+ intr_handle->efds = calloc(RTE_MAX_RXTX_INTR_VEC_ID,
+ sizeof(int));
+ }
+ if (intr_handle->efds == NULL) {
+ RTE_LOG(ERR, EAL, "Fail to allocate event fd list\n");
+ rte_errno = ENOMEM;
+ goto fail;
+ }
+
+ if (uses_rte_memory) {
+ intr_handle->elist = rte_zmalloc(NULL,
+ RTE_MAX_RXTX_INTR_VEC_ID * sizeof(struct rte_epoll_event),
+ 0);
+ } else {
+ intr_handle->elist = calloc(RTE_MAX_RXTX_INTR_VEC_ID,
+ sizeof(struct rte_epoll_event));
+ }
+ if (intr_handle->elist == NULL) {
+ RTE_LOG(ERR, EAL, "fail to allocate event fd list\n");
+ rte_errno = ENOMEM;
+ goto fail;
+ }
+
intr_handle->alloc_flags = flags;
intr_handle->nb_intr = RTE_MAX_RXTX_INTR_VEC_ID;
return intr_handle;
+fail:
+ if (uses_rte_memory) {
+ rte_free(intr_handle->efds);
+ rte_free(intr_handle);
+ } else {
+ free(intr_handle->efds);
+ free(intr_handle);
+ }
+ return NULL;
}
struct rte_intr_handle *rte_intr_instance_dup(const struct rte_intr_handle *src)
@@ -83,14 +119,69 @@ struct rte_intr_handle *rte_intr_instance_dup(const struct rte_intr_handle *src)
return intr_handle;
}
+int rte_intr_event_list_update(struct rte_intr_handle *intr_handle, int size)
+{
+ struct rte_epoll_event *tmp_elist;
+ bool uses_rte_memory;
+ int *tmp_efds;
+
+ CHECK_VALID_INTR_HANDLE(intr_handle);
+
+ if (size == 0) {
+ RTE_LOG(DEBUG, EAL, "Size can't be zero\n");
+ rte_errno = EINVAL;
+ goto fail;
+ }
+
+ uses_rte_memory =
+ RTE_INTR_INSTANCE_USES_RTE_MEMORY(intr_handle->alloc_flags);
+ if (uses_rte_memory) {
+ tmp_efds = rte_realloc(intr_handle->efds, size * sizeof(int),
+ 0);
+ } else {
+ tmp_efds = realloc(intr_handle->efds, size * sizeof(int));
+ }
+ if (tmp_efds == NULL) {
+ RTE_LOG(ERR, EAL, "Failed to realloc the efds list\n");
+ rte_errno = ENOMEM;
+ goto fail;
+ }
+ intr_handle->efds = tmp_efds;
+
+ if (uses_rte_memory) {
+ tmp_elist = rte_realloc(intr_handle->elist,
+ size * sizeof(struct rte_epoll_event), 0);
+ } else {
+ tmp_elist = realloc(intr_handle->elist,
+ size * sizeof(struct rte_epoll_event));
+ }
+ if (tmp_elist == NULL) {
+ RTE_LOG(ERR, EAL, "Failed to realloc the event list\n");
+ rte_errno = ENOMEM;
+ goto fail;
+ }
+ intr_handle->elist = tmp_elist;
+
+ intr_handle->nb_intr = size;
+
+ return 0;
+fail:
+ return -rte_errno;
+}
+
void rte_intr_instance_free(struct rte_intr_handle *intr_handle)
{
if (intr_handle == NULL)
return;
- if (RTE_INTR_INSTANCE_USES_RTE_MEMORY(intr_handle->alloc_flags))
+ if (RTE_INTR_INSTANCE_USES_RTE_MEMORY(intr_handle->alloc_flags)) {
+ rte_free(intr_handle->efds);
+ rte_free(intr_handle->elist);
rte_free(intr_handle);
- else
+ } else {
+ free(intr_handle->efds);
+ free(intr_handle->elist);
free(intr_handle);
+ }
}
int rte_intr_fd_set(struct rte_intr_handle *intr_handle, int fd)
diff --git a/lib/eal/common/eal_interrupts.h b/lib/eal/common/eal_interrupts.h
index 1a4e5573b2..482781b862 100644
--- a/lib/eal/common/eal_interrupts.h
+++ b/lib/eal/common/eal_interrupts.h
@@ -21,9 +21,8 @@ struct rte_intr_handle {
uint8_t efd_counter_size; /**< size of efd counter, used for vdev */
uint16_t nb_intr;
/**< Max vector count, default RTE_MAX_RXTX_INTR_VEC_ID */
- int efds[RTE_MAX_RXTX_INTR_VEC_ID]; /**< intr vectors/efds mapping */
- struct rte_epoll_event elist[RTE_MAX_RXTX_INTR_VEC_ID];
- /**< intr vector epoll event */
+ int *efds; /**< intr vectors/efds mapping */
+ struct rte_epoll_event *elist; /**< intr vector epoll event */
uint16_t vec_list_size;
int *intr_vec; /**< intr vector number array */
};
--
2.23.0
^ permalink raw reply [flat|nested] 152+ messages in thread
* Re: [dpdk-dev] [PATCH v8 0/9] make rte_intr_handle internal
2021-10-25 14:27 ` [dpdk-dev] [PATCH v8 0/9] make rte_intr_handle internal David Marchand
` (8 preceding siblings ...)
2021-10-25 14:27 ` [dpdk-dev] [PATCH v8 9/9] interrupts: extend event list David Marchand
@ 2021-10-25 14:32 ` Raslan Darawsheh
2021-10-25 19:24 ` David Marchand
10 siblings, 0 replies; 152+ messages in thread
From: Raslan Darawsheh @ 2021-10-25 14:32 UTC (permalink / raw)
To: David Marchand, hkalra, dev; +Cc: dmitry.kozliuk, NBU-Contact-Thomas Monjalon
Hi,
> -----Original Message-----
> From: David Marchand <david.marchand@redhat.com>
> Sent: Monday, October 25, 2021 5:27 PM
> To: hkalra@marvell.com; dev@dpdk.org
> Cc: dmitry.kozliuk@gmail.com; Raslan Darawsheh <rasland@nvidia.com>;
> NBU-Contact-Thomas Monjalon <thomas@monjalon.net>
> Subject: [PATCH v8 0/9] make rte_intr_handle internal
>
> Moving struct rte_intr_handle as an internal structure to avoid any ABI
> breakages in future. Since this structure defines some static arrays and
> changing respective macros breaks the ABI.
> Eg:
> Currently RTE_MAX_RXTX_INTR_VEC_ID imposes a limit of maximum 512
> MSI-X interrupts that can be defined for a PCI device, while PCI specification
> allows maximum 2048 MSI-X interrupts that can be used.
> If some PCI device requires more than 512 vectors, either change the
> RTE_MAX_RXTX_INTR_VEC_ID limit or dynamically allocate based on PCI
> device MSI-X size on probe time. Either way its an ABI breakage.
>
> Change already included in 21.11 ABI improvement spreadsheet (item 42):
> https://docs.google.com/spreadsheets/d/1betlC000ua5SsSiJIcC54mCCCJnW6voH5Dqv9UxeyfE/edit#gid=0
>
> This series makes struct rte_intr_handle totally opaque to the outside world
> by wrapping it inside a .c file and providing get set wrapper APIs to read or
> manipulate its fields.. Any changes to be made to any of the fields should be
> done via these get set APIs.
> Introduced a new eal_common_interrupts.c where all these APIs are
> defined and also hides struct rte_intr_handle definition.
>
> v1:
> * Fixed freebsd compilation failure
> * Fixed seg fault in case of memif
>
> v2:
> * Merged the prototype and implementation patch to 1.
> * Restricting allocation of single interrupt instance.
> * Removed base APIs, as they were exposing internally allocated memory
> information.
> * Fixed some memory leak issues.
> * Marked some library specific APIs as internal.
>
> v3:
> * Removed flag from instance alloc API, rather auto detect if memory should
> be allocated using glibc malloc APIs or
> rte_malloc*
> * Added APIs for get/set windows handle.
> * Defined macros for repeated checks.
>
> v4:
> * Rectified some typo in the APIs documentation.
> * Better names for some internal variables.
>
> v5:
> * Reverted back to passing flag to instance alloc API, as with auto detect
> some multiprocess issues existing in the library were causing tests failure.
> * Rebased to top of tree.
>
> v6:
> * renamed RTE_INTR_INSTANCE_F_UNSHARED as
> RTE_INTR_INSTANCE_F_PRIVATE,
> * changed API and removed need for alloc_flag content exposure
> (see rte_intr_instance_dup() in patch 1 and 2),
> * exported all symbols for Windows,
> * fixed leak in unit tests in case of alloc failure,
> * split (previously) patch 4 into three patches
> * (now) patch 4 only concerns alarm and (previously) patch 6 cleanup bits
> are squashed in it,
> * (now) patch 5 concerns other libraries updates,
> * (now) patch 6 concerns drivers updates:
> * instance allocation is moved to probing for auxiliary,
> * there might be a bug for PCI drivers non requesting
> RTE_PCI_DRV_NEED_MAPPING, but code is left as v5,
> * split (previously) patch 5 into three patches
> * (now) patch 7 only hides structure, but keep it in a EAL private
> header, this makes it possible to keep info in tracepoints,
> * (now) patch 8 deals with VFIO/UIO internal fds merge,
> * (now) patch 9 extends event list,
>
> v7:
> * fixed compilation on FreeBSD,
> * removed unused interrupt handle in FreeBSD alarm code,
> * fixed interrupt handle allocation for PCI drivers without
> RTE_PCI_DRV_NEED_MAPPING,
>
> v8:
> * lowered logs level to DEBUG in sanity checks,
> * fixed corner case with vector list access,
>
> --
> David Marchand
>
> Harman Kalra (9):
> interrupts: add allocator and accessors
> interrupts: remove direct access to interrupt handle
> test/interrupts: remove direct access to interrupt handle
> alarm: remove direct access to interrupt handle
> lib: remove direct access to interrupt handle
> drivers: remove direct access to interrupt handle
> interrupts: make interrupt handle structure opaque
> interrupts: rename device specific file descriptor
> interrupts: extend event list
>
> MAINTAINERS | 1 +
> app/test/test_interrupts.c | 164 +++--
> drivers/baseband/acc100/rte_acc100_pmd.c | 14 +-
> .../fpga_5gnr_fec/rte_fpga_5gnr_fec.c | 24 +-
> drivers/baseband/fpga_lte_fec/fpga_lte_fec.c | 24 +-
> drivers/bus/auxiliary/auxiliary_common.c | 17 +-
> drivers/bus/auxiliary/rte_bus_auxiliary.h | 2 +-
> drivers/bus/dpaa/dpaa_bus.c | 28 +-
> drivers/bus/dpaa/rte_dpaa_bus.h | 2 +-
> drivers/bus/fslmc/fslmc_bus.c | 14 +-
> drivers/bus/fslmc/fslmc_vfio.c | 30 +-
> drivers/bus/fslmc/portal/dpaa2_hw_dpio.c | 18 +-
> drivers/bus/fslmc/portal/dpaa2_hw_pvt.h | 2 +-
> drivers/bus/fslmc/rte_fslmc.h | 2 +-
> drivers/bus/ifpga/ifpga_bus.c | 13 +-
> drivers/bus/ifpga/rte_bus_ifpga.h | 2 +-
> drivers/bus/pci/bsd/pci.c | 20 +-
> drivers/bus/pci/linux/pci.c | 4 +-
> drivers/bus/pci/linux/pci_uio.c | 69 +-
> drivers/bus/pci/linux/pci_vfio.c | 108 ++-
> drivers/bus/pci/pci_common.c | 47 +-
> drivers/bus/pci/pci_common_uio.c | 21 +-
> drivers/bus/pci/rte_bus_pci.h | 4 +-
> drivers/bus/vmbus/linux/vmbus_bus.c | 6 +
> drivers/bus/vmbus/linux/vmbus_uio.c | 35 +-
> drivers/bus/vmbus/rte_bus_vmbus.h | 2 +-
> drivers/bus/vmbus/vmbus_common_uio.c | 23 +-
> drivers/common/cnxk/roc_cpt.c | 8 +-
> drivers/common/cnxk/roc_dev.c | 14 +-
> drivers/common/cnxk/roc_irq.c | 107 +--
> drivers/common/cnxk/roc_nix_inl_dev_irq.c | 8 +-
> drivers/common/cnxk/roc_nix_irq.c | 36 +-
> drivers/common/cnxk/roc_npa.c | 2 +-
> drivers/common/cnxk/roc_platform.h | 49 +-
> drivers/common/cnxk/roc_sso.c | 4 +-
> drivers/common/cnxk/roc_tim.c | 4 +-
> drivers/common/octeontx2/otx2_dev.c | 14 +-
> drivers/common/octeontx2/otx2_irq.c | 117 ++--
> .../octeontx2/otx2_cryptodev_hw_access.c | 4 +-
> drivers/event/octeontx2/otx2_evdev_irq.c | 12 +-
> drivers/mempool/octeontx2/otx2_mempool.c | 2 +-
> drivers/net/atlantic/atl_ethdev.c | 20 +-
> drivers/net/avp/avp_ethdev.c | 8 +-
> drivers/net/axgbe/axgbe_ethdev.c | 12 +-
> drivers/net/axgbe/axgbe_mdio.c | 6 +-
> drivers/net/bnx2x/bnx2x_ethdev.c | 10 +-
> drivers/net/bnxt/bnxt_ethdev.c | 33 +-
> drivers/net/bnxt/bnxt_irq.c | 4 +-
> drivers/net/dpaa/dpaa_ethdev.c | 48 +-
> drivers/net/dpaa2/dpaa2_ethdev.c | 10 +-
> drivers/net/e1000/em_ethdev.c | 23 +-
> drivers/net/e1000/igb_ethdev.c | 79 +--
> drivers/net/ena/ena_ethdev.c | 35 +-
> drivers/net/enic/enic_main.c | 26 +-
> drivers/net/failsafe/failsafe.c | 21 +-
> drivers/net/failsafe/failsafe_intr.c | 43 +-
> drivers/net/failsafe/failsafe_ops.c | 19 +-
> drivers/net/failsafe/failsafe_private.h | 2 +-
> drivers/net/fm10k/fm10k_ethdev.c | 32 +-
> drivers/net/hinic/hinic_pmd_ethdev.c | 10 +-
> drivers/net/hns3/hns3_ethdev.c | 57 +-
> drivers/net/hns3/hns3_ethdev_vf.c | 64 +-
> drivers/net/hns3/hns3_rxtx.c | 2 +-
> drivers/net/i40e/i40e_ethdev.c | 53 +-
> drivers/net/iavf/iavf_ethdev.c | 42 +-
> drivers/net/iavf/iavf_vchnl.c | 4 +-
> drivers/net/ice/ice_dcf.c | 10 +-
> drivers/net/ice/ice_dcf_ethdev.c | 21 +-
> drivers/net/ice/ice_ethdev.c | 49 +-
> drivers/net/igc/igc_ethdev.c | 45 +-
> drivers/net/ionic/ionic_ethdev.c | 17 +-
> drivers/net/ixgbe/ixgbe_ethdev.c | 66 +-
> drivers/net/memif/memif_socket.c | 108 ++-
> drivers/net/memif/memif_socket.h | 4 +-
> drivers/net/memif/rte_eth_memif.c | 56 +-
> drivers/net/memif/rte_eth_memif.h | 2 +-
> drivers/net/mlx4/mlx4.c | 19 +-
> drivers/net/mlx4/mlx4.h | 2 +-
> drivers/net/mlx4/mlx4_intr.c | 47 +-
> drivers/net/mlx5/linux/mlx5_os.c | 55 +-
> drivers/net/mlx5/linux/mlx5_socket.c | 25 +-
> drivers/net/mlx5/mlx5.h | 6 +-
> drivers/net/mlx5/mlx5_rxq.c | 43 +-
> drivers/net/mlx5/mlx5_trigger.c | 4 +-
> drivers/net/mlx5/mlx5_txpp.c | 25 +-
> drivers/net/netvsc/hn_ethdev.c | 4 +-
> drivers/net/nfp/nfp_common.c | 34 +-
> drivers/net/nfp/nfp_ethdev.c | 13 +-
> drivers/net/nfp/nfp_ethdev_vf.c | 13 +-
> drivers/net/ngbe/ngbe_ethdev.c | 29 +-
> drivers/net/octeontx2/otx2_ethdev_irq.c | 35 +-
> drivers/net/qede/qede_ethdev.c | 16 +-
> drivers/net/sfc/sfc_intr.c | 30 +-
> drivers/net/tap/rte_eth_tap.c | 33 +-
> drivers/net/tap/rte_eth_tap.h | 2 +-
> drivers/net/tap/tap_intr.c | 33 +-
> drivers/net/thunderx/nicvf_ethdev.c | 10 +
> drivers/net/thunderx/nicvf_struct.h | 2 +-
> drivers/net/txgbe/txgbe_ethdev.c | 38 +-
> drivers/net/txgbe/txgbe_ethdev_vf.c | 33 +-
> drivers/net/vhost/rte_eth_vhost.c | 80 ++-
> drivers/net/virtio/virtio_ethdev.c | 21 +-
> .../net/virtio/virtio_user/virtio_user_dev.c | 56 +-
> drivers/net/vmxnet3/vmxnet3_ethdev.c | 43 +-
> drivers/raw/ifpga/ifpga_rawdev.c | 62 +-
> drivers/raw/ntb/ntb.c | 9 +-
> .../regex/octeontx2/otx2_regexdev_hw_access.c | 4 +-
> drivers/vdpa/ifc/ifcvf_vdpa.c | 5 +-
> drivers/vdpa/mlx5/mlx5_vdpa.c | 8 +
> drivers/vdpa/mlx5/mlx5_vdpa.h | 4 +-
> drivers/vdpa/mlx5/mlx5_vdpa_event.c | 21 +-
> drivers/vdpa/mlx5/mlx5_vdpa_virtq.c | 44 +-
> lib/bbdev/rte_bbdev.c | 4 +-
> lib/eal/common/eal_common_interrupts.c | 500 ++++++++++++++
> lib/eal/common/eal_interrupts.h | 30 +
> lib/eal/common/eal_private.h | 10 +
> lib/eal/common/meson.build | 1 +
> lib/eal/freebsd/eal.c | 1 +
> lib/eal/freebsd/eal_alarm.c | 35 +-
> lib/eal/freebsd/eal_interrupts.c | 85 ++-
> lib/eal/include/meson.build | 2 +-
> lib/eal/include/rte_eal_interrupts.h | 269 --------
> lib/eal/include/rte_eal_trace.h | 10 +-
> lib/eal/include/rte_epoll.h | 118 ++++
> lib/eal/include/rte_interrupts.h | 651 +++++++++++++++++-
> lib/eal/linux/eal.c | 1 +
> lib/eal/linux/eal_alarm.c | 32 +-
> lib/eal/linux/eal_dev.c | 57 +-
> lib/eal/linux/eal_interrupts.c | 304 ++++----
> lib/eal/version.map | 45 +-
> lib/ethdev/ethdev_pci.h | 2 +-
> lib/ethdev/rte_ethdev.c | 14 +-
> 132 files changed, 3449 insertions(+), 1748 deletions(-)
> create mode 100644 lib/eal/common/eal_common_interrupts.c
> create mode 100644 lib/eal/common/eal_interrupts.h
> delete mode 100644 lib/eal/include/rte_eal_interrupts.h
> create mode 100644 lib/eal/include/rte_epoll.h
>
> --
> 2.23.0
Tested-by: Raslan Darawsheh <rasland@nvidia.com>
Kindest regards,
Raslan Darawsheh
^ permalink raw reply [flat|nested] 152+ messages in thread
* Re: [dpdk-dev] [PATCH v8 0/9] make rte_intr_handle internal
2021-10-25 14:27 ` [dpdk-dev] [PATCH v8 0/9] make rte_intr_handle internal David Marchand
` (9 preceding siblings ...)
2021-10-25 14:32 ` [dpdk-dev] [PATCH v8 0/9] make rte_intr_handle internal Raslan Darawsheh
@ 2021-10-25 19:24 ` David Marchand
10 siblings, 0 replies; 152+ messages in thread
From: David Marchand @ 2021-10-25 19:24 UTC (permalink / raw)
To: Harman Kalra, dev; +Cc: Dmitry Kozlyuk, Raslan Darawsheh, Thomas Monjalon
On Mon, Oct 25, 2021 at 4:27 PM David Marchand
<david.marchand@redhat.com> wrote:
>
> Moving struct rte_intr_handle as an internal structure to
> avoid any ABI breakages in future. Since this structure defines
> some static arrays and changing respective macros breaks the ABI.
> Eg:
> Currently RTE_MAX_RXTX_INTR_VEC_ID imposes a limit of maximum 512
> MSI-X interrupts that can be defined for a PCI device, while PCI
> specification allows maximum 2048 MSI-X interrupts that can be used.
> If some PCI device requires more than 512 vectors, either change the
> RTE_MAX_RXTX_INTR_VEC_ID limit or dynamically allocate based on
> PCI device MSI-X size on probe time. Either way its an ABI breakage.
>
> Change already included in 21.11 ABI improvement spreadsheet (item 42):
> https://docs.google.com/spreadsheets/d/1betlC000ua5SsSiJIcC54mCCCJnW6voH5Dqv9UxeyfE/edit#gid=0
>
> This series makes struct rte_intr_handle totally opaque to the outside
> world by wrapping it inside a .c file and providing get set wrapper APIs
> to read or manipulate its fields.. Any changes to be made to any of the
> fields should be done via these get set APIs.
> Introduced a new eal_common_interrupts.c where all these APIs are defined
> and also hides struct rte_intr_handle definition.
>
> v1:
> * Fixed freebsd compilation failure
> * Fixed seg fault in case of memif
>
> v2:
> * Merged the prototype and implementation patch to 1.
> * Restricting allocation of single interrupt instance.
> * Removed base APIs, as they were exposing internally
> allocated memory information.
> * Fixed some memory leak issues.
> * Marked some library specific APIs as internal.
>
> v3:
> * Removed flag from instance alloc API, rather auto detect
> if memory should be allocated using glibc malloc APIs or
> rte_malloc*
> * Added APIs for get/set windows handle.
> * Defined macros for repeated checks.
>
> v4:
> * Rectified some typo in the APIs documentation.
> * Better names for some internal variables.
>
> v5:
> * Reverted back to passing flag to instance alloc API, as
> with auto detect some multiprocess issues existing in the
> library were causing tests failure.
> * Rebased to top of tree.
>
> v6:
> * renamed RTE_INTR_INSTANCE_F_UNSHARED as RTE_INTR_INSTANCE_F_PRIVATE,
> * changed API and removed need for alloc_flag content exposure
> (see rte_intr_instance_dup() in patch 1 and 2),
> * exported all symbols for Windows,
> * fixed leak in unit tests in case of alloc failure,
> * split (previously) patch 4 into three patches
> * (now) patch 4 only concerns alarm and (previously) patch 6 cleanup bits
> are squashed in it,
> * (now) patch 5 concerns other libraries updates,
> * (now) patch 6 concerns drivers updates:
> * instance allocation is moved to probing for auxiliary,
> * there might be a bug for PCI drivers non requesting
> RTE_PCI_DRV_NEED_MAPPING, but code is left as v5,
> * split (previously) patch 5 into three patches
> * (now) patch 7 only hides structure, but keep it in a EAL private
> header, this makes it possible to keep info in tracepoints,
> * (now) patch 8 deals with VFIO/UIO internal fds merge,
> * (now) patch 9 extends event list,
>
> v7:
> * fixed compilation on FreeBSD,
> * removed unused interrupt handle in FreeBSD alarm code,
> * fixed interrupt handle allocation for PCI drivers without
> RTE_PCI_DRV_NEED_MAPPING,
>
> v8:
> * lowered logs level to DEBUG in sanity checks,
> * fixed corner case with vector list access,
>
> --
> David Marchand
>
> Harman Kalra (9):
> interrupts: add allocator and accessors
> interrupts: remove direct access to interrupt handle
> test/interrupts: remove direct access to interrupt handle
> alarm: remove direct access to interrupt handle
> lib: remove direct access to interrupt handle
> drivers: remove direct access to interrupt handle
> interrupts: make interrupt handle structure opaque
> interrupts: rename device specific file descriptor
> interrupts: extend event list
Series applied, thanks.
--
David Marchand
^ permalink raw reply [flat|nested] 152+ messages in thread
* Re: [dpdk-dev] [PATCH v7 5/9] lib: remove direct access to interrupt handle
2021-10-25 13:34 ` [dpdk-dev] [PATCH v7 5/9] lib: " David Marchand
@ 2021-10-28 6:14 ` Jiang, YuX
0 siblings, 0 replies; 152+ messages in thread
From: Jiang, YuX @ 2021-10-28 6:14 UTC (permalink / raw)
To: David Marchand, hkalra, dev
Cc: dmitry.kozliuk, rasland, thomas, Chautru, Nicolas, Yigit, Ferruh,
Andrew Rybchenko
> -----Original Message-----
> From: dev <dev-bounces@dpdk.org> On Behalf Of David Marchand
> Sent: Monday, October 25, 2021 9:35 PM
> To: hkalra@marvell.com; dev@dpdk.org
> Cc: dmitry.kozliuk@gmail.com; rasland@nvidia.com; thomas@monjalon.net;
> Chautru, Nicolas <nicolas.chautru@intel.com>; Yigit, Ferruh
> <ferruh.yigit@intel.com>; Andrew Rybchenko
> <andrew.rybchenko@oktetlabs.ru>
> Subject: [dpdk-dev] [PATCH v7 5/9] lib: remove direct access to interrupt
> handle
>
> From: Harman Kalra <hkalra@marvell.com>
>
> Removing direct access to interrupt handle structure fields, rather use
> respective get set APIs for the same.
> Making changes to all the libraries access the interrupt handle fields.
>
> Signed-off-by: Harman Kalra <hkalra@marvell.com>
> Signed-off-by: David Marchand <david.marchand@redhat.com>
> ---
> Changes since v5:
> - split from patch4,
>
> ---
Hi Harman,
While testing dpdk-21.11-rc1 we found a bug, https://bugs.dpdk.org/show_bug.cgi?id=845; the bad commit is c2bd9367e18f5b00c1a3c5eb281a512ef52c5dfd (Author: Harman Kalra <hkalra@marvell.com>).
Could you please have a look?
^ permalink raw reply [flat|nested] 152+ messages in thread
* Re: [dpdk-dev] [PATCH v8 9/9] interrupts: extend event list
2021-10-25 14:27 ` [dpdk-dev] [PATCH v8 9/9] interrupts: extend event list David Marchand
@ 2021-10-28 15:58 ` Ji, Kai
2021-10-28 17:16 ` David Marchand
0 siblings, 1 reply; 152+ messages in thread
From: Ji, Kai @ 2021-10-28 15:58 UTC (permalink / raw)
To: David Marchand, hkalra, dev
Cc: dmitry.kozliuk, rasland, thomas, Burakov, Anatoly,
Nithin Dabilpuram, Kiran Kumar K, Sunil Kumar Kori, Satha Rao,
Mcnamara, John, Yigit, Ferruh, Ananyev, Konstantin
Hi Harman,
This patch causes QAT to fail during interrupt init: the event list update does not support an interrupt count of zero, which is the QAT case.
There is also a Bugzilla entry related to this issue: https://bugs.dpdk.org/show_bug.cgi?id=843
Regards
Kai
> -----Original Message-----
> From: dev <dev-bounces@dpdk.org> On Behalf Of David Marchand
> Sent: Monday, October 25, 2021 3:27 PM
> To: hkalra@marvell.com; dev@dpdk.org
> Cc: dmitry.kozliuk@gmail.com; rasland@nvidia.com; thomas@monjalon.net;
> Burakov, Anatoly <anatoly.burakov@intel.com>; Nithin Dabilpuram
> <ndabilpuram@marvell.com>; Kiran Kumar K <kirankumark@marvell.com>;
> Sunil Kumar Kori <skori@marvell.com>; Satha Rao
> <skoteshwar@marvell.com>
> Subject: [dpdk-dev] [PATCH v8 9/9] interrupts: extend event list
>
> From: Harman Kalra <hkalra@marvell.com>
>
> Dynamically allocating the efds and elist array of intr_handle structure, based
> on size provided by user. Eg size can be MSIX interrupts supported by a PCI
> device.
>
> Signed-off-by: Harman Kalra <hkalra@marvell.com>
> Signed-off-by: David Marchand <david.marchand@redhat.com>
> Acked-by: Dmitry Kozlyuk <dmitry.kozliuk@gmail.com>
> ---
> Changes since v6:
> - removed unneeded checks on elist/efds array initialisation,
>
> Changes since v5:
> - split from patch5,
>
> ---
^ permalink raw reply [flat|nested] 152+ messages in thread
* Re: [dpdk-dev] [PATCH v8 9/9] interrupts: extend event list
2021-10-28 15:58 ` Ji, Kai
@ 2021-10-28 17:16 ` David Marchand
0 siblings, 0 replies; 152+ messages in thread
From: David Marchand @ 2021-10-28 17:16 UTC (permalink / raw)
To: Ji, Kai, hkalra
Cc: dev, dmitry.kozliuk, rasland, thomas, Burakov, Anatoly,
Nithin Dabilpuram, Kiran Kumar K, Sunil Kumar Kori, Satha Rao,
Mcnamara, John, Yigit, Ferruh, Ananyev, Konstantin
On Thu, Oct 28, 2021 at 5:58 PM Ji, Kai <kai.ji@intel.com> wrote:
> This patch is causing QAT failed during interrupt init, the event list does not support interrupt count size zero in QAT case.
>
> There is also Bugzilla relate to this issue: https://bugs.dpdk.org/show_bug.cgi?id=843
(We could avoid updating the event list if it is already large enough, but)
your problem must be that QAT does not have MSI-X.
Can you try this quick fix?
diff --git a/drivers/bus/pci/linux/pci_vfio.c b/drivers/bus/pci/linux/pci_vfio.c
index f622e7f8e6..13733d03f3 100644
--- a/drivers/bus/pci/linux/pci_vfio.c
+++ b/drivers/bus/pci/linux/pci_vfio.c
@@ -269,7 +269,8 @@ pci_vfio_setup_interrupts(struct rte_pci_device *dev, int vfio_dev_fd)
 /* Reallocate the efds and elist fields of intr_handle based
 * on PCI device MSIX size.
 */
- if (rte_intr_event_list_update(dev->intr_handle, irq.count))
+ if (i == VFIO_PCI_MSIX_IRQ_INDEX &&
+ rte_intr_event_list_update(dev->intr_handle, irq.count))
 return -1;
 /* if this vector cannot be used with eventfd, fail if we explicitly
--
David Marchand
^ permalink raw reply [flat|nested] 152+ messages in thread
Thread overview: 152+ messages
2021-08-26 14:57 [dpdk-dev] [RFC 0/7] make rte_intr_handle internal Harman Kalra
2021-08-26 14:57 ` [dpdk-dev] [RFC 1/7] eal: interrupt handle API prototypes Harman Kalra
2021-08-31 15:52 ` Kinsella, Ray
2021-08-26 14:57 ` [dpdk-dev] [RFC 2/7] eal/interrupts: implement get set APIs Harman Kalra
2021-08-31 15:53 ` Kinsella, Ray
2021-08-26 14:57 ` [dpdk-dev] [RFC 3/7] eal/interrupts: avoid direct access to interrupt handle Harman Kalra
2021-08-26 14:57 ` [dpdk-dev] [RFC 4/7] test/interrupt: apply get set interrupt handle APIs Harman Kalra
2021-08-26 14:57 ` [dpdk-dev] [RFC 5/7] drivers: remove direct access to interrupt handle fields Harman Kalra
2021-08-26 14:57 ` [dpdk-dev] [RFC 6/7] eal/interrupts: make interrupt handle structure opaque Harman Kalra
2021-08-26 14:57 ` [dpdk-dev] [RFC 7/7] eal/alarm: introduce alarm fini routine Harman Kalra
2021-09-03 12:40 ` [dpdk-dev] [PATCH v1 0/7] make rte_intr_handle internal Harman Kalra
2021-09-03 12:40 ` [dpdk-dev] [PATCH v1 1/7] eal: interrupt handle API prototypes Harman Kalra
2021-09-03 12:40 ` [dpdk-dev] [PATCH v1 2/7] eal/interrupts: implement get set APIs Harman Kalra
2021-09-28 15:46 ` David Marchand
2021-10-04 8:51 ` [dpdk-dev] [EXT] " Harman Kalra
2021-10-04 9:57 ` David Marchand
2021-10-12 15:22 ` Thomas Monjalon
2021-10-13 17:54 ` Harman Kalra
2021-10-13 17:57 ` Harman Kalra
2021-10-13 18:52 ` Thomas Monjalon
2021-10-14 8:22 ` Thomas Monjalon
2021-10-14 9:31 ` Harman Kalra
2021-10-14 9:37 ` David Marchand
2021-10-14 9:41 ` Thomas Monjalon
2021-10-14 10:31 ` Harman Kalra
2021-10-14 10:35 ` Thomas Monjalon
2021-10-14 10:44 ` Harman Kalra
2021-10-14 12:04 ` Thomas Monjalon
2021-10-14 10:25 ` Dmitry Kozlyuk
2021-10-03 18:05 ` [dpdk-dev] " Dmitry Kozlyuk
2021-10-04 10:37 ` [dpdk-dev] [EXT] " Harman Kalra
2021-10-04 11:18 ` Dmitry Kozlyuk
2021-10-04 14:03 ` Harman Kalra
2021-09-03 12:40 ` [dpdk-dev] [PATCH v1 3/7] eal/interrupts: avoid direct access to interrupt handle Harman Kalra
2021-09-03 12:40 ` [dpdk-dev] [PATCH v1 4/7] test/interrupt: apply get set interrupt handle APIs Harman Kalra
2021-09-03 12:41 ` [dpdk-dev] [PATCH v1 5/7] drivers: remove direct access to interrupt handle fields Harman Kalra
2021-09-03 12:41 ` [dpdk-dev] [PATCH v1 6/7] eal/interrupts: make interrupt handle structure opaque Harman Kalra
2021-10-03 18:16 ` Dmitry Kozlyuk
2021-10-04 14:09 ` [dpdk-dev] [EXT] " Harman Kalra
2021-09-03 12:41 ` [dpdk-dev] [PATCH v1 7/7] eal/alarm: introduce alarm fini routine Harman Kalra
2021-09-15 14:13 ` [dpdk-dev] [PATCH v1 0/7] make rte_intr_handle internal Harman Kalra
2021-09-23 8:20 ` David Marchand
2021-10-05 12:14 ` [dpdk-dev] [PATCH v2 0/6] " Harman Kalra
2021-10-05 12:14 ` [dpdk-dev] [PATCH v2 1/6] eal/interrupts: implement get set APIs Harman Kalra
2021-10-14 0:58 ` Dmitry Kozlyuk
2021-10-14 17:15 ` [dpdk-dev] [EXT] " Harman Kalra
2021-10-14 17:53 ` Dmitry Kozlyuk
2021-10-15 7:53 ` Thomas Monjalon
2021-10-14 7:31 ` [dpdk-dev] " David Marchand
2021-10-05 12:14 ` [dpdk-dev] [PATCH v2 2/6] eal/interrupts: avoid direct access to interrupt handle Harman Kalra
2021-10-14 0:59 ` Dmitry Kozlyuk
2021-10-14 17:31 ` [dpdk-dev] [EXT] " Harman Kalra
2021-10-14 17:53 ` Dmitry Kozlyuk
2021-10-05 12:14 ` [dpdk-dev] [PATCH v2 3/6] test/interrupt: apply get set interrupt handle APIs Harman Kalra
2021-10-05 12:15 ` [dpdk-dev] [PATCH v2 4/6] drivers: remove direct access to interrupt handle Harman Kalra
2021-10-05 12:15 ` [dpdk-dev] [PATCH v2 5/6] eal/interrupts: make interrupt handle structure opaque Harman Kalra
2021-10-05 12:15 ` [dpdk-dev] [PATCH v2 6/6] eal/alarm: introduce alarm fini routine Harman Kalra
2021-10-05 16:07 ` [dpdk-dev] [RFC 0/7] make rte_intr_handle internal Stephen Hemminger
2021-10-07 10:57 ` [dpdk-dev] [EXT] " Harman Kalra
2021-10-18 19:37 ` [dpdk-dev] [PATCH v3 " Harman Kalra
2021-10-18 19:37 ` [dpdk-dev] [PATCH v3 1/7] malloc: introduce malloc is ready API Harman Kalra
2021-10-19 15:53 ` Thomas Monjalon
2021-10-18 19:37 ` [dpdk-dev] [PATCH v3 2/7] eal/interrupts: implement get set APIs Harman Kalra
2021-10-18 22:07 ` Dmitry Kozlyuk
2021-10-19 8:50 ` [dpdk-dev] [EXT] " Harman Kalra
2021-10-19 18:44 ` Harman Kalra
2021-10-18 22:56 ` [dpdk-dev] " Stephen Hemminger
2021-10-19 8:32 ` [dpdk-dev] [EXT] " Harman Kalra
2021-10-19 15:58 ` Thomas Monjalon
2021-10-20 15:30 ` Dmitry Kozlyuk
2021-10-21 9:16 ` Harman Kalra
2021-10-21 12:33 ` Dmitry Kozlyuk
2021-10-21 13:32 ` David Marchand
2021-10-21 16:05 ` Harman Kalra
2021-10-18 19:37 ` [dpdk-dev] [PATCH v3 3/7] eal/interrupts: avoid direct access to interrupt handle Harman Kalra
2021-10-18 19:37 ` [dpdk-dev] [PATCH v3 4/7] test/interrupt: apply get set interrupt handle APIs Harman Kalra
2021-10-18 19:37 ` [dpdk-dev] [PATCH v3 5/7] drivers: remove direct access to interrupt handle Harman Kalra
2021-10-18 19:37 ` [dpdk-dev] [PATCH v3 6/7] eal/interrupts: make interrupt handle structure opaque Harman Kalra
2021-10-18 19:37 ` [dpdk-dev] [PATCH v3 7/7] eal/alarm: introduce alarm fini routine Harman Kalra
2021-10-19 18:35 ` [dpdk-dev] [PATCH v4 0/7] make rte_intr_handle internal Harman Kalra
2021-10-19 18:35 ` [dpdk-dev] [PATCH v4 1/7] malloc: introduce malloc is ready API Harman Kalra
2021-10-19 22:01 ` Dmitry Kozlyuk
2021-10-19 22:04 ` Dmitry Kozlyuk
2021-10-20 9:01 ` [dpdk-dev] [EXT] " Harman Kalra
2021-10-19 18:35 ` [dpdk-dev] [PATCH v4 2/7] eal/interrupts: implement get set APIs Harman Kalra
2021-10-20 6:14 ` David Marchand
2021-10-20 14:29 ` Dmitry Kozlyuk
2021-10-20 16:15 ` Dmitry Kozlyuk
2021-10-19 18:35 ` [dpdk-dev] [PATCH v4 3/7] eal/interrupts: avoid direct access to interrupt handle Harman Kalra
2021-10-19 21:27 ` Dmitry Kozlyuk
2021-10-20 9:25 ` [dpdk-dev] [EXT] " Harman Kalra
2021-10-20 9:52 ` Dmitry Kozlyuk
2021-10-19 18:35 ` [dpdk-dev] [PATCH v4 4/7] test/interrupt: apply get set interrupt handle APIs Harman Kalra
2021-10-19 18:35 ` [dpdk-dev] [PATCH v4 5/7] drivers: remove direct access to interrupt handle Harman Kalra
2021-10-20 1:57 ` Hyong Youb Kim (hyonkim)
2021-10-19 18:35 ` [dpdk-dev] [PATCH v4 6/7] eal/interrupts: make interrupt handle structure opaque Harman Kalra
2021-10-19 18:35 ` [dpdk-dev] [PATCH v4 7/7] eal/alarm: introduce alarm fini routine Harman Kalra
2021-10-19 21:39 ` Dmitry Kozlyuk
2021-10-22 20:49 ` [dpdk-dev] [PATCH v5 0/6] make rte_intr_handle internal Harman Kalra
2021-10-22 20:49 ` [dpdk-dev] [PATCH v5 1/6] eal/interrupts: implement get set APIs Harman Kalra
2021-10-22 23:33 ` Dmitry Kozlyuk
2021-10-22 20:49 ` [dpdk-dev] [PATCH v5 2/6] eal/interrupts: avoid direct access to interrupt handle Harman Kalra
2021-10-22 23:33 ` Dmitry Kozlyuk
2021-10-22 20:49 ` [dpdk-dev] [PATCH v5 3/6] test/interrupt: apply get set interrupt handle APIs Harman Kalra
2021-10-22 20:49 ` [dpdk-dev] [PATCH v5 4/6] drivers: remove direct access to interrupt handle Harman Kalra
2021-10-22 20:49 ` [dpdk-dev] [PATCH v5 5/6] eal/interrupts: make interrupt handle structure opaque Harman Kalra
2021-10-22 23:33 ` Dmitry Kozlyuk
2021-10-22 20:49 ` [dpdk-dev] [PATCH v5 6/6] eal/alarm: introduce alarm fini routine Harman Kalra
2021-10-22 23:33 ` Dmitry Kozlyuk
2021-10-22 23:37 ` Dmitry Kozlyuk
2021-10-24 20:04 ` [dpdk-dev] [PATCH v6 0/9] make rte_intr_handle internal David Marchand
2021-10-24 20:04 ` [dpdk-dev] [PATCH v6 1/9] interrupts: add allocator and accessors David Marchand
2021-10-24 20:04 ` [dpdk-dev] [PATCH v6 2/9] interrupts: remove direct access to interrupt handle David Marchand
2021-10-25 6:57 ` David Marchand
2021-10-24 20:04 ` [dpdk-dev] [PATCH v6 3/9] test/interrupts: " David Marchand
2021-10-24 20:04 ` [dpdk-dev] [PATCH v6 4/9] alarm: " David Marchand
2021-10-25 10:49 ` Dmitry Kozlyuk
2021-10-25 11:09 ` David Marchand
2021-10-24 20:04 ` [dpdk-dev] [PATCH v6 5/9] lib: " David Marchand
2021-10-24 20:04 ` [dpdk-dev] [PATCH v6 6/9] drivers: " David Marchand
2021-10-24 20:04 ` [dpdk-dev] [PATCH v6 7/9] interrupts: make interrupt handle structure opaque David Marchand
2021-10-24 20:04 ` [dpdk-dev] [PATCH v6 8/9] interrupts: rename device specific file descriptor David Marchand
2021-10-24 20:04 ` [dpdk-dev] [PATCH v6 9/9] interrupts: extend event list David Marchand
2021-10-25 10:49 ` Dmitry Kozlyuk
2021-10-25 11:11 ` David Marchand
2021-10-25 13:04 ` [dpdk-dev] [PATCH v5 0/6] make rte_intr_handle internal Raslan Darawsheh
2021-10-25 13:09 ` David Marchand
2021-10-25 13:34 ` [dpdk-dev] [PATCH v7 0/9] " David Marchand
2021-10-25 13:34 ` [dpdk-dev] [PATCH v7 1/9] interrupts: add allocator and accessors David Marchand
2021-10-25 13:34 ` [dpdk-dev] [PATCH v7 2/9] interrupts: remove direct access to interrupt handle David Marchand
2021-10-25 13:34 ` [dpdk-dev] [PATCH v7 3/9] test/interrupts: " David Marchand
2021-10-25 13:34 ` [dpdk-dev] [PATCH v7 4/9] alarm: " David Marchand
2021-10-25 13:34 ` [dpdk-dev] [PATCH v7 5/9] lib: " David Marchand
2021-10-28 6:14 ` Jiang, YuX
2021-10-25 13:34 ` [dpdk-dev] [PATCH v7 6/9] drivers: " David Marchand
2021-10-25 13:34 ` [dpdk-dev] [PATCH v7 7/9] interrupts: make interrupt handle structure opaque David Marchand
2021-10-25 13:34 ` [dpdk-dev] [PATCH v7 8/9] interrupts: rename device specific file descriptor David Marchand
2021-10-25 13:34 ` [dpdk-dev] [PATCH v7 9/9] interrupts: extend event list David Marchand
2021-10-25 14:27 ` [dpdk-dev] [PATCH v8 0/9] make rte_intr_handle internal David Marchand
2021-10-25 14:27 ` [dpdk-dev] [PATCH v8 1/9] interrupts: add allocator and accessors David Marchand
2021-10-25 14:27 ` [dpdk-dev] [PATCH v8 2/9] interrupts: remove direct access to interrupt handle David Marchand
2021-10-25 14:27 ` [dpdk-dev] [PATCH v8 3/9] test/interrupts: " David Marchand
2021-10-25 14:27 ` [dpdk-dev] [PATCH v8 4/9] alarm: " David Marchand
2021-10-25 14:27 ` [dpdk-dev] [PATCH v8 5/9] lib: " David Marchand
2021-10-25 14:27 ` [dpdk-dev] [PATCH v8 6/9] drivers: " David Marchand
2021-10-25 14:27 ` [dpdk-dev] [PATCH v8 7/9] interrupts: make interrupt handle structure opaque David Marchand
2021-10-25 14:27 ` [dpdk-dev] [PATCH v8 8/9] interrupts: rename device specific file descriptor David Marchand
2021-10-25 14:27 ` [dpdk-dev] [PATCH v8 9/9] interrupts: extend event list David Marchand
2021-10-28 15:58 ` Ji, Kai
2021-10-28 17:16 ` David Marchand
2021-10-25 14:32 ` [dpdk-dev] [PATCH v8 0/9] make rte_intr_handle internal Raslan Darawsheh
2021-10-25 19:24 ` David Marchand