From: Pavan Nikhilesh Bhagavatula <pbhagavatula@marvell.com>
To: "Timothy McDaniel" <timothy.mcdaniel@intel.com>,
"Hemant Agrawal" <hemant.agrawal@nxp.com>,
"Nipun Gupta" <nipun.gupta@nxp.com>,
"Mattias Rönnblom" <mattias.ronnblom@ericsson.com>,
"Jerin Jacob Kollanukkaran" <jerinj@marvell.com>,
"Liang Ma" <liang.j.ma@intel.com>,
"Peter Mccarthy" <peter.mccarthy@intel.com>,
"Harry van Haaren" <harry.van.haaren@intel.com>,
"Nikhil Rao" <nikhil.rao@intel.com>,
"Ray Kinsella" <mdr@ashroe.eu>,
"Neil Horman" <nhorman@tuxdriver.com>
Cc: "dev@dpdk.org" <dev@dpdk.org>,
"erik.g.carrillo@intel.com" <erik.g.carrillo@intel.com>,
"gage.eads@intel.com" <gage.eads@intel.com>
Subject: Re: [dpdk-dev] [EXT] [PATCH v2 1/2] eventdev: eventdev: express DLB/DLB2 PMD constraints
Date: Mon, 12 Oct 2020 19:06:18 +0000 [thread overview]
Message-ID: <BN6PR18MB114052A22890AC83A13C3FF4DE070@BN6PR18MB1140.namprd18.prod.outlook.com> (raw)
In-Reply-To: <1601929674-27662-2-git-send-email-timothy.mcdaniel@intel.com>
>This commit implements the eventdev ABI changes required by
>the DLB PMD.
>
>Signed-off-by: Timothy McDaniel <timothy.mcdaniel@intel.com>
For octeontx/octeontx2
Acked-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
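A quick illustration for application writers before the diff: with the new RTE_EVENT_DEV_CAP_CARRY_FLOW_ID capability advertised, the flow ID of a dequeued event can be relied upon to match what was enqueued. The sketch below is a standalone mock (the struct and function are illustrative, not the DPDK headers; only the flag name and bit position come from the patch):

```c
#include <assert.h>
#include <stdint.h>

/* Mock of the new capability bit added by this patch. In a real
 * application the flag and the info struct come from <rte_eventdev.h>
 * via rte_event_dev_info_get(); this sketch only mirrors the flag. */
#define RTE_EVENT_DEV_CAP_CARRY_FLOW_ID (1ULL << 9)

struct mock_dev_info {
	uint32_t event_dev_cap; /* RTE_EVENT_DEV_CAP_* flags */
};

/* Returns 1 when the device preserves event flow IDs across
 * enqueue/dequeue, 0 when the field is implementation dependent. */
static int carries_flow_id(const struct mock_dev_info *info)
{
	return !!(info->event_dev_cap & RTE_EVENT_DEV_CAP_CARRY_FLOW_ID);
}
```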
>---
> drivers/event/dpaa/dpaa_eventdev.c | 3 +-
> drivers/event/dpaa2/dpaa2_eventdev.c | 5 +-
> drivers/event/dsw/dsw_evdev.c | 3 +-
> drivers/event/octeontx/ssovf_evdev.c | 5 +-
> drivers/event/octeontx2/otx2_evdev.c | 3 +-
> drivers/event/opdl/opdl_evdev.c | 3 +-
> drivers/event/skeleton/skeleton_eventdev.c | 5 +-
> drivers/event/sw/sw_evdev.c | 8 ++--
> drivers/event/sw/sw_evdev_selftest.c | 6 +--
> lib/librte_eventdev/rte_event_eth_tx_adapter.c | 2 +-
> lib/librte_eventdev/rte_eventdev.c             | 66 +++++++++++++++++++++++---
> lib/librte_eventdev/rte_eventdev.h | 51 ++++++++++++++++----
> lib/librte_eventdev/rte_eventdev_pmd_pci.h | 1 -
> lib/librte_eventdev/rte_eventdev_trace.h | 7 +--
> lib/librte_eventdev/rte_eventdev_version.map | 4 +-
> 15 files changed, 134 insertions(+), 38 deletions(-)
>
>diff --git a/drivers/event/dpaa/dpaa_eventdev.c b/drivers/event/dpaa/dpaa_eventdev.c
>index b5ae87a..07cd079 100644
>--- a/drivers/event/dpaa/dpaa_eventdev.c
>+++ b/drivers/event/dpaa/dpaa_eventdev.c
>@@ -355,7 +355,8 @@ dpaa_event_dev_info_get(struct rte_eventdev *dev,
> 		RTE_EVENT_DEV_CAP_DISTRIBUTED_SCHED |
> 		RTE_EVENT_DEV_CAP_BURST_MODE |
> 		RTE_EVENT_DEV_CAP_MULTIPLE_QUEUE_PORT |
>-		RTE_EVENT_DEV_CAP_NONSEQ_MODE;
>+		RTE_EVENT_DEV_CAP_NONSEQ_MODE |
>+		RTE_EVENT_DEV_CAP_CARRY_FLOW_ID;
> }
>
> static int
>diff --git a/drivers/event/dpaa2/dpaa2_eventdev.c b/drivers/event/dpaa2/dpaa2_eventdev.c
>index 3ae4441..712db6c 100644
>--- a/drivers/event/dpaa2/dpaa2_eventdev.c
>+++ b/drivers/event/dpaa2/dpaa2_eventdev.c
>@@ -406,7 +406,8 @@ dpaa2_eventdev_info_get(struct rte_eventdev *dev,
> 		RTE_EVENT_DEV_CAP_RUNTIME_PORT_LINK |
> 		RTE_EVENT_DEV_CAP_MULTIPLE_QUEUE_PORT |
> 		RTE_EVENT_DEV_CAP_NONSEQ_MODE |
>-		RTE_EVENT_DEV_CAP_QUEUE_ALL_TYPES;
>+		RTE_EVENT_DEV_CAP_QUEUE_ALL_TYPES |
>+		RTE_EVENT_DEV_CAP_CARRY_FLOW_ID;
>
> }
>
>@@ -536,7 +537,7 @@ dpaa2_eventdev_port_def_conf(struct rte_eventdev *dev, uint8_t port_id,
> 		DPAA2_EVENT_MAX_PORT_DEQUEUE_DEPTH;
> 	port_conf->enqueue_depth =
> 		DPAA2_EVENT_MAX_PORT_ENQUEUE_DEPTH;
>-	port_conf->disable_implicit_release = 0;
>+	port_conf->event_port_cfg = 0;
> }
>
> static int
>diff --git a/drivers/event/dsw/dsw_evdev.c b/drivers/event/dsw/dsw_evdev.c
>index e796975..933a5a5 100644
>--- a/drivers/event/dsw/dsw_evdev.c
>+++ b/drivers/event/dsw/dsw_evdev.c
>@@ -224,7 +224,8 @@ dsw_info_get(struct rte_eventdev *dev __rte_unused,
> 		.event_dev_cap = RTE_EVENT_DEV_CAP_BURST_MODE|
> 		RTE_EVENT_DEV_CAP_DISTRIBUTED_SCHED|
> 		RTE_EVENT_DEV_CAP_NONSEQ_MODE|
>-		RTE_EVENT_DEV_CAP_MULTIPLE_QUEUE_PORT
>+		RTE_EVENT_DEV_CAP_MULTIPLE_QUEUE_PORT|
>+		RTE_EVENT_DEV_CAP_CARRY_FLOW_ID
> 	};
> }
>
>diff --git a/drivers/event/octeontx/ssovf_evdev.c b/drivers/event/octeontx/ssovf_evdev.c
>index 4fc4e8f..1c6bcca 100644
>--- a/drivers/event/octeontx/ssovf_evdev.c
>+++ b/drivers/event/octeontx/ssovf_evdev.c
>@@ -152,7 +152,8 @@ ssovf_info_get(struct rte_eventdev *dev, struct rte_event_dev_info *dev_info)
> 					RTE_EVENT_DEV_CAP_QUEUE_ALL_TYPES|
> 					RTE_EVENT_DEV_CAP_RUNTIME_PORT_LINK |
> 					RTE_EVENT_DEV_CAP_MULTIPLE_QUEUE_PORT |
>-					RTE_EVENT_DEV_CAP_NONSEQ_MODE;
>+					RTE_EVENT_DEV_CAP_NONSEQ_MODE |
>+					RTE_EVENT_DEV_CAP_CARRY_FLOW_ID;
>
> }
>
>@@ -218,7 +219,7 @@ ssovf_port_def_conf(struct rte_eventdev *dev, uint8_t port_id,
> 	port_conf->new_event_threshold = edev->max_num_events;
> 	port_conf->dequeue_depth = 1;
> 	port_conf->enqueue_depth = 1;
>-	port_conf->disable_implicit_release = 0;
>+	port_conf->event_port_cfg = 0;
> }
>
> static void
>diff --git a/drivers/event/octeontx2/otx2_evdev.c b/drivers/event/octeontx2/otx2_evdev.c
>index b8b57c3..ae35bb5 100644
>--- a/drivers/event/octeontx2/otx2_evdev.c
>+++ b/drivers/event/octeontx2/otx2_evdev.c
>@@ -501,7 +501,8 @@ otx2_sso_info_get(struct rte_eventdev *event_dev,
> 					RTE_EVENT_DEV_CAP_QUEUE_ALL_TYPES |
> 					RTE_EVENT_DEV_CAP_RUNTIME_PORT_LINK |
> 					RTE_EVENT_DEV_CAP_MULTIPLE_QUEUE_PORT |
>-					RTE_EVENT_DEV_CAP_NONSEQ_MODE;
>+					RTE_EVENT_DEV_CAP_NONSEQ_MODE |
>+					RTE_EVENT_DEV_CAP_CARRY_FLOW_ID;
> }
>
> static void
>diff --git a/drivers/event/opdl/opdl_evdev.c b/drivers/event/opdl/opdl_evdev.c
>index 9b2f75f..3050578 100644
>--- a/drivers/event/opdl/opdl_evdev.c
>+++ b/drivers/event/opdl/opdl_evdev.c
>@@ -374,7 +374,8 @@ opdl_info_get(struct rte_eventdev *dev, struct rte_event_dev_info *info)
> 		.max_event_port_dequeue_depth = MAX_OPDL_CONS_Q_DEPTH,
> 		.max_event_port_enqueue_depth = MAX_OPDL_CONS_Q_DEPTH,
> 		.max_num_events = OPDL_INFLIGHT_EVENTS_TOTAL,
>-		.event_dev_cap = RTE_EVENT_DEV_CAP_BURST_MODE,
>+		.event_dev_cap = RTE_EVENT_DEV_CAP_BURST_MODE |
>+				 RTE_EVENT_DEV_CAP_CARRY_FLOW_ID,
> 	};
>
> *info = evdev_opdl_info;
>diff --git a/drivers/event/skeleton/skeleton_eventdev.c b/drivers/event/skeleton/skeleton_eventdev.c
>index c889220..6fd1102 100644
>--- a/drivers/event/skeleton/skeleton_eventdev.c
>+++ b/drivers/event/skeleton/skeleton_eventdev.c
>@@ -101,7 +101,8 @@ skeleton_eventdev_info_get(struct rte_eventdev *dev,
> 	dev_info->max_num_events = (1ULL << 20);
> 	dev_info->event_dev_cap = RTE_EVENT_DEV_CAP_QUEUE_QOS |
> 					RTE_EVENT_DEV_CAP_BURST_MODE |
>-					RTE_EVENT_DEV_CAP_EVENT_QOS;
>+					RTE_EVENT_DEV_CAP_EVENT_QOS |
>+					RTE_EVENT_DEV_CAP_CARRY_FLOW_ID;
> }
>
> static int
>@@ -209,7 +210,7 @@ skeleton_eventdev_port_def_conf(struct rte_eventdev *dev, uint8_t port_id,
> 	port_conf->new_event_threshold = 32 * 1024;
> 	port_conf->dequeue_depth = 16;
> 	port_conf->enqueue_depth = 16;
>-	port_conf->disable_implicit_release = 0;
>+	port_conf->event_port_cfg = 0;
> }
>
> static void
>diff --git a/drivers/event/sw/sw_evdev.c b/drivers/event/sw/sw_evdev.c
>index 98dae71..058f568 100644
>--- a/drivers/event/sw/sw_evdev.c
>+++ b/drivers/event/sw/sw_evdev.c
>@@ -175,7 +175,8 @@ sw_port_setup(struct rte_eventdev *dev, uint8_t port_id,
> 	}
>
> 	p->inflight_max = conf->new_event_threshold;
>-	p->implicit_release = !conf->disable_implicit_release;
>+	p->implicit_release = !(conf->event_port_cfg &
>+				RTE_EVENT_PORT_CFG_DISABLE_IMPL_REL);
>
> 	/* check if ring exists, same as rx_worker above */
> 	snprintf(buf, sizeof(buf), "sw%d_p%u, %s", dev->data->dev_id,
>@@ -508,7 +509,7 @@ sw_port_def_conf(struct rte_eventdev *dev, uint8_t port_id,
> 	port_conf->new_event_threshold = 1024;
> 	port_conf->dequeue_depth = 16;
> 	port_conf->enqueue_depth = 16;
>-	port_conf->disable_implicit_release = 0;
>+	port_conf->event_port_cfg = 0;
> }
>
> static int
>@@ -615,7 +616,8 @@ sw_info_get(struct rte_eventdev *dev, struct rte_event_dev_info *info)
> 				RTE_EVENT_DEV_CAP_IMPLICIT_RELEASE_DISABLE|
> 				RTE_EVENT_DEV_CAP_RUNTIME_PORT_LINK |
> 				RTE_EVENT_DEV_CAP_MULTIPLE_QUEUE_PORT |
>-				RTE_EVENT_DEV_CAP_NONSEQ_MODE),
>+				RTE_EVENT_DEV_CAP_NONSEQ_MODE |
>+				RTE_EVENT_DEV_CAP_CARRY_FLOW_ID),
> 	};
>
> *info = evdev_sw_info;
>diff --git a/drivers/event/sw/sw_evdev_selftest.c b/drivers/event/sw/sw_evdev_selftest.c
>index 38c21fa..4a7d823 100644
>--- a/drivers/event/sw/sw_evdev_selftest.c
>+++ b/drivers/event/sw/sw_evdev_selftest.c
>@@ -172,7 +172,6 @@ create_ports(struct test *t, int num_ports)
> 			.new_event_threshold = 1024,
> 			.dequeue_depth = 32,
> 			.enqueue_depth = 64,
>-			.disable_implicit_release = 0,
> 	};
> 	if (num_ports > MAX_PORTS)
> 		return -1;
>@@ -1227,7 +1226,6 @@ port_reconfig_credits(struct test *t)
> 			.new_event_threshold = 128,
> 			.dequeue_depth = 32,
> 			.enqueue_depth = 64,
>-			.disable_implicit_release = 0,
> 	};
> 	if (rte_event_port_setup(evdev, 0, &port_conf) < 0) {
> 		printf("%d Error setting up port\n", __LINE__);
>@@ -1317,7 +1315,6 @@ port_single_lb_reconfig(struct test *t)
> 		.new_event_threshold = 128,
> 		.dequeue_depth = 32,
> 		.enqueue_depth = 64,
>-		.disable_implicit_release = 0,
> 	};
> 	if (rte_event_port_setup(evdev, 0, &port_conf) < 0) {
> 		printf("%d Error setting up port\n", __LINE__);
>@@ -3079,7 +3076,8 @@ worker_loopback(struct test *t, uint8_t disable_implicit_release)
> 	 * only be initialized once - and this needs to be set for multiple runs
> 	 */
> 	conf.new_event_threshold = 512;
>-	conf.disable_implicit_release = disable_implicit_release;
>+	conf.event_port_cfg = disable_implicit_release ?
>+		RTE_EVENT_PORT_CFG_DISABLE_IMPL_REL : 0;
>
> 	if (rte_event_port_setup(evdev, 0, &conf) < 0) {
> 		printf("Error setting up RX port\n");
>diff --git a/lib/librte_eventdev/rte_event_eth_tx_adapter.c b/lib/librte_eventdev/rte_event_eth_tx_adapter.c
>index bb21dc4..8a72256 100644
>--- a/lib/librte_eventdev/rte_event_eth_tx_adapter.c
>+++ b/lib/librte_eventdev/rte_event_eth_tx_adapter.c
>@@ -286,7 +286,7 @@ txa_service_conf_cb(uint8_t __rte_unused id, uint8_t dev_id,
> 		return ret;
> 	}
>
>-	pc->disable_implicit_release = 0;
>+	pc->event_port_cfg = 0;
> 	ret = rte_event_port_setup(dev_id, port_id, pc);
> 	if (ret) {
> 		RTE_EDEV_LOG_ERR("failed to setup event port %u\n",
>diff --git a/lib/librte_eventdev/rte_eventdev.c b/lib/librte_eventdev/rte_eventdev.c
>index 82c177c..3a5b738 100644
>--- a/lib/librte_eventdev/rte_eventdev.c
>+++ b/lib/librte_eventdev/rte_eventdev.c
>@@ -32,6 +32,7 @@
> #include <rte_ethdev.h>
> #include <rte_cryptodev.h>
> #include <rte_cryptodev_pmd.h>
>+#include <rte_compat.h>
>
> #include "rte_eventdev.h"
> #include "rte_eventdev_pmd.h"
>@@ -437,9 +438,29 @@ rte_event_dev_configure(uint8_t dev_id,
> 			dev_id);
> 		return -EINVAL;
> 	}
>-	if (dev_conf->nb_event_queues > info.max_event_queues) {
>-		RTE_EDEV_LOG_ERR("%d nb_event_queues=%d > max_event_queues=%d",
>-		dev_id, dev_conf->nb_event_queues, info.max_event_queues);
>+	if (dev_conf->nb_event_queues > info.max_event_queues +
>+			info.max_single_link_event_port_queue_pairs) {
>+		RTE_EDEV_LOG_ERR("%d nb_event_queues=%d > max_event_queues=%d + max_single_link_event_port_queue_pairs=%d",
>+				 dev_id, dev_conf->nb_event_queues,
>+				 info.max_event_queues,
>+				 info.max_single_link_event_port_queue_pairs);
>+		return -EINVAL;
>+	}
>+	if (dev_conf->nb_event_queues -
>+			dev_conf->nb_single_link_event_port_queues >
>+				info.max_event_queues) {
>+		RTE_EDEV_LOG_ERR("id%d nb_event_queues=%d - nb_single_link_event_port_queues=%d > max_event_queues=%d",
>+				 dev_id, dev_conf->nb_event_queues,
>+				 dev_conf->nb_single_link_event_port_queues,
>+				 info.max_event_queues);
>+		return -EINVAL;
>+	}
>+	if (dev_conf->nb_single_link_event_port_queues >
>+			dev_conf->nb_event_queues) {
>+		RTE_EDEV_LOG_ERR("dev%d nb_single_link_event_port_queues=%d > nb_event_queues=%d",
>+				 dev_id,
>+				 dev_conf->nb_single_link_event_port_queues,
>+				 dev_conf->nb_event_queues);
> 		return -EINVAL;
> 	}
>
>@@ -448,9 +469,31 @@ rte_event_dev_configure(uint8_t dev_id,
> 		RTE_EDEV_LOG_ERR("dev%d nb_event_ports cannot be zero", dev_id);
> 		return -EINVAL;
> 	}
>-	if (dev_conf->nb_event_ports > info.max_event_ports) {
>-		RTE_EDEV_LOG_ERR("id%d nb_event_ports=%d > max_event_ports= %d",
>-		dev_id, dev_conf->nb_event_ports, info.max_event_ports);
>+	if (dev_conf->nb_event_ports > info.max_event_ports +
>+			info.max_single_link_event_port_queue_pairs) {
>+		RTE_EDEV_LOG_ERR("id%d nb_event_ports=%d > max_event_ports=%d + max_single_link_event_port_queue_pairs=%d",
>+				 dev_id, dev_conf->nb_event_ports,
>+				 info.max_event_ports,
>+				 info.max_single_link_event_port_queue_pairs);
>+		return -EINVAL;
>+	}
>+	if (dev_conf->nb_event_ports -
>+			dev_conf->nb_single_link_event_port_queues
>+			> info.max_event_ports) {
>+		RTE_EDEV_LOG_ERR("id%d nb_event_ports=%d - nb_single_link_event_port_queues=%d > max_event_ports=%d",
>+				 dev_id, dev_conf->nb_event_ports,
>+				 dev_conf->nb_single_link_event_port_queues,
>+				 info.max_event_ports);
>+		return -EINVAL;
>+	}
>+
>+	if (dev_conf->nb_single_link_event_port_queues >
>+	    dev_conf->nb_event_ports) {
>+		RTE_EDEV_LOG_ERR(
>+			"dev%d nb_single_link_event_port_queues=%d > nb_event_ports=%d",
>+			dev_id,
>+			dev_conf->nb_single_link_event_port_queues,
>+			dev_conf->nb_event_ports);
> 		return -EINVAL;
> 	}
>
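To make the new accounting easier to review: the queue-side checks above reduce to the standalone sketch below. The struct and function names are mocks that mirror rte_event_dev_info / rte_event_dev_config field names from the patch; this is an illustration of the check logic, not the DPDK code itself. (The port-side checks are symmetric.)

```c
#include <assert.h>
#include <errno.h>
#include <stdint.h>

/* Mock info/config structs mirroring the patched field names. */
struct mock_info {
	uint8_t max_event_queues;
	uint8_t max_single_link_event_port_queue_pairs;
};

struct mock_conf {
	uint8_t nb_event_queues;
	uint8_t nb_single_link_event_port_queues;
};

static int check_queue_counts(const struct mock_info *info,
			      const struct mock_conf *conf)
{
	/* Total queues may draw on the single-link pool on top of
	 * max_event_queues. */
	if (conf->nb_event_queues > info->max_event_queues +
			info->max_single_link_event_port_queue_pairs)
		return -EINVAL;
	/* The non-single-link remainder must fit in max_event_queues. */
	if (conf->nb_event_queues - conf->nb_single_link_event_port_queues >
			info->max_event_queues)
		return -EINVAL;
	/* Single-link queues are a subset of all queues. */
	if (conf->nb_single_link_event_port_queues > conf->nb_event_queues)
		return -EINVAL;
	return 0;
}
```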
>@@ -737,7 +780,8 @@ rte_event_port_setup(uint8_t dev_id, uint8_t port_id,
> 		return -EINVAL;
> 	}
>
>-	if (port_conf && port_conf->disable_implicit_release &&
>+	if (port_conf &&
>+	    (port_conf->event_port_cfg & RTE_EVENT_PORT_CFG_DISABLE_IMPL_REL) &&
> 	    !(dev->data->event_dev_cap &
> 	      RTE_EVENT_DEV_CAP_IMPLICIT_RELEASE_DISABLE)) {
> 		RTE_EDEV_LOG_ERR(
>@@ -830,6 +874,14 @@ rte_event_port_attr_get(uint8_t dev_id, uint8_t port_id, uint32_t attr_id,
> 	case RTE_EVENT_PORT_ATTR_NEW_EVENT_THRESHOLD:
> 		*attr_value = dev->data->ports_cfg[port_id].new_event_threshold;
> 		break;
>+	case RTE_EVENT_PORT_ATTR_IMPLICIT_RELEASE_DISABLE:
>+	{
>+		uint32_t config;
>+
>+		config = dev->data->ports_cfg[port_id].event_port_cfg;
>+		*attr_value = !!(config & RTE_EVENT_PORT_CFG_DISABLE_IMPL_REL);
>+		break;
>+	}
> 	default:
> 		return -EINVAL;
> 	};
>diff --git a/lib/librte_eventdev/rte_eventdev.h b/lib/librte_eventdev/rte_eventdev.h
>index 7dc8323..ce1fc2c 100644
>--- a/lib/librte_eventdev/rte_eventdev.h
>+++ b/lib/librte_eventdev/rte_eventdev.h
>@@ -291,6 +291,12 @@ struct rte_event;
>  * single queue to each port or map a single queue to many port.
>  */
>
>+#define RTE_EVENT_DEV_CAP_CARRY_FLOW_ID (1ULL << 9)
>+/**< Event device preserves the flow ID from the enqueued
>+ * event to the dequeued event if the flag is set. Otherwise,
>+ * the content of this field is implementation dependent.
>+ */
>+
> /* Event device priority levels */
> #define RTE_EVENT_DEV_PRIORITY_HIGHEST 0
> /**< Highest priority expressed across eventdev subsystem
>@@ -380,6 +386,10 @@ struct rte_event_dev_info {
> 	 * event port by this device.
> 	 * A device that does not support bulk enqueue will set this as 1.
> 	 */
>+	uint8_t max_event_port_links;
>+	/**< Maximum number of queues that can be linked to a single event
>+	 * port by this device.
>+	 */
> 	int32_t max_num_events;
> 	/**< A *closed system* event dev has a limit on the number of events it
> 	 * can manage at a time. An *open system* event dev does not have a
>@@ -387,6 +397,12 @@ struct rte_event_dev_info {
> 	 */
> 	uint32_t event_dev_cap;
> 	/**< Event device capabilities(RTE_EVENT_DEV_CAP_)*/
>+	uint8_t max_single_link_event_port_queue_pairs;
>+	/**< Maximum number of event ports and queues that are optimized for
>+	 * (and only capable of) single-link configurations supported by this
>+	 * device. These ports and queues are not accounted for in
>+	 * max_event_ports or max_event_queues.
>+	 */
> };
>
> /**
>@@ -494,6 +510,14 @@ struct rte_event_dev_config {
> 	 */
> 	uint32_t event_dev_cfg;
> 	/**< Event device config flags(RTE_EVENT_DEV_CFG_)*/
>+	uint8_t nb_single_link_event_port_queues;
>+	/**< Number of event ports and queues that will be singly-linked to
>+	 * each other. These are a subset of the overall event ports and
>+	 * queues; this value cannot exceed *nb_event_ports* or
>+	 * *nb_event_queues*. If the device has ports and queues that are
>+	 * optimized for single-link usage, this field is a hint for how many
>+	 * to allocate; otherwise, regular event ports and queues can be used.
>+	 */
> };
>
> /**
>@@ -519,7 +543,6 @@ int
> rte_event_dev_configure(uint8_t dev_id,
> 			const struct rte_event_dev_config *dev_conf);
>
>-
> /* Event queue specific APIs */
>
> /* Event queue configuration bitmap flags */
>@@ -671,6 +694,20 @@ rte_event_queue_attr_get(uint8_t dev_id, uint8_t queue_id, uint32_t attr_id,
>
> /* Event port specific APIs */
>
>+/* Event port configuration bitmap flags */
>+#define RTE_EVENT_PORT_CFG_DISABLE_IMPL_REL (1ULL << 0)
>+/**< Configure the port not to release outstanding events in
>+ * rte_event_dev_dequeue_burst(). If set, all events received through
>+ * the port must be explicitly released with RTE_EVENT_OP_RELEASE or
>+ * RTE_EVENT_OP_FORWARD. Must be unset if the device is not
>+ * RTE_EVENT_DEV_CAP_IMPLICIT_RELEASE_DISABLE capable.
>+ */
>+#define RTE_EVENT_PORT_CFG_SINGLE_LINK (1ULL << 1)
>+/**< This event port links only to a single event queue.
>+ *
>+ * @see rte_event_port_setup(), rte_event_port_link()
>+ */
>+
> /** Event port configuration structure */
> struct rte_event_port_conf {
> 	int32_t new_event_threshold;
>@@ -698,13 +735,7 @@ struct rte_event_port_conf {
> 	 * which previously supplied to rte_event_dev_configure().
> 	 * Ignored when device is not RTE_EVENT_DEV_CAP_BURST_MODE capable.
> 	 */
>-	uint8_t disable_implicit_release;
>-	/**< Configure the port not to release outstanding events in
>-	 * rte_event_dev_dequeue_burst(). If true, all events received through
>-	 * the port must be explicitly released with RTE_EVENT_OP_RELEASE or
>-	 * RTE_EVENT_OP_FORWARD. Must be false when the device is not
>-	 * RTE_EVENT_DEV_CAP_IMPLICIT_RELEASE_DISABLE capable.
>-	 */
>+	uint32_t event_port_cfg; /**< Port cfg flags(EVENT_PORT_CFG_) */
> };
>
> /**
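For applications migrating from the old field, a minimal sketch of the new layout (mock definitions mirroring the flag names and bit positions from this patch, not the real header): the disable_implicit_release boolean becomes bit 0 of the event_port_cfg flags word, and PMDs such as sw recover the old boolean view with a mask, as the sw_port_setup() change earlier in this patch does.

```c
#include <assert.h>
#include <stdint.h>

/* Mock of the new port config flags; values mirror the patch. */
#define RTE_EVENT_PORT_CFG_DISABLE_IMPL_REL (1ULL << 0)
#define RTE_EVENT_PORT_CFG_SINGLE_LINK      (1ULL << 1)

struct mock_port_conf {
	uint32_t event_port_cfg; /* RTE_EVENT_PORT_CFG_* flags */
};

/* Old-style boolean view: implicit release stays enabled unless the
 * DISABLE_IMPL_REL bit is set. */
static int implicit_release_enabled(const struct mock_port_conf *conf)
{
	return !(conf->event_port_cfg & RTE_EVENT_PORT_CFG_DISABLE_IMPL_REL);
}
```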
>@@ -769,6 +800,10 @@ rte_event_port_setup(uint8_t dev_id, uint8_t port_id,
>  * The new event threshold of the port
>  */
> #define RTE_EVENT_PORT_ATTR_NEW_EVENT_THRESHOLD 2
>+/**
>+ * The implicit release disable attribute of the port
>+ */
>+#define RTE_EVENT_PORT_ATTR_IMPLICIT_RELEASE_DISABLE 3
>
> /**
> * Get an attribute from a port.
>diff --git a/lib/librte_eventdev/rte_eventdev_pmd_pci.h b/lib/librte_eventdev/rte_eventdev_pmd_pci.h
>index 443cd38..a3f9244 100644
>--- a/lib/librte_eventdev/rte_eventdev_pmd_pci.h
>+++ b/lib/librte_eventdev/rte_eventdev_pmd_pci.h
>@@ -88,7 +88,6 @@ rte_event_pmd_pci_probe(struct rte_pci_driver *pci_drv,
> 	return -ENXIO;
> }
>
>-
> /**
>  * @internal
>  * Wrapper for use by pci drivers as a .remove function to detach a event
>diff --git a/lib/librte_eventdev/rte_eventdev_trace.h b/lib/librte_eventdev/rte_eventdev_trace.h
>index 4de6341..5ec43d8 100644
>--- a/lib/librte_eventdev/rte_eventdev_trace.h
>+++ b/lib/librte_eventdev/rte_eventdev_trace.h
>@@ -34,6 +34,7 @@ RTE_TRACE_POINT(
> 	rte_trace_point_emit_u32(dev_conf->nb_event_port_dequeue_depth);
> 	rte_trace_point_emit_u32(dev_conf->nb_event_port_enqueue_depth);
> 	rte_trace_point_emit_u32(dev_conf->event_dev_cfg);
>+	rte_trace_point_emit_u8(dev_conf->nb_single_link_event_port_queues);
> 	rte_trace_point_emit_int(rc);
> )
>
>@@ -59,7 +60,7 @@ RTE_TRACE_POINT(
> 	rte_trace_point_emit_i32(port_conf->new_event_threshold);
> 	rte_trace_point_emit_u16(port_conf->dequeue_depth);
> 	rte_trace_point_emit_u16(port_conf->enqueue_depth);
>-	rte_trace_point_emit_u8(port_conf->disable_implicit_release);
>+	rte_trace_point_emit_u32(port_conf->event_port_cfg);
> 	rte_trace_point_emit_int(rc);
> )
>
>@@ -165,7 +166,7 @@ RTE_TRACE_POINT(
> 	rte_trace_point_emit_i32(port_conf->new_event_threshold);
> 	rte_trace_point_emit_u16(port_conf->dequeue_depth);
> 	rte_trace_point_emit_u16(port_conf->enqueue_depth);
>-	rte_trace_point_emit_u8(port_conf->disable_implicit_release);
>+	rte_trace_point_emit_u32(port_conf->event_port_cfg);
> 	rte_trace_point_emit_ptr(conf_cb);
> 	rte_trace_point_emit_int(rc);
> )
>
>@@ -257,7 +258,7 @@ RTE_TRACE_POINT(
> 	rte_trace_point_emit_i32(port_conf->new_event_threshold);
> 	rte_trace_point_emit_u16(port_conf->dequeue_depth);
> 	rte_trace_point_emit_u16(port_conf->enqueue_depth);
>-	rte_trace_point_emit_u8(port_conf->disable_implicit_release);
>+	rte_trace_point_emit_u32(port_conf->event_port_cfg);
> )
>
> RTE_TRACE_POINT(
>diff --git a/lib/librte_eventdev/rte_eventdev_version.map b/lib/librte_eventdev/rte_eventdev_version.map
>index 3d9d0ca..2846d04 100644
>--- a/lib/librte_eventdev/rte_eventdev_version.map
>+++ b/lib/librte_eventdev/rte_eventdev_version.map
>@@ -100,7 +100,6 @@ EXPERIMENTAL {
> # added in 20.05
> __rte_eventdev_trace_configure;
> __rte_eventdev_trace_queue_setup;
>- __rte_eventdev_trace_port_setup;
> __rte_eventdev_trace_port_link;
> __rte_eventdev_trace_port_unlink;
> __rte_eventdev_trace_start;
>@@ -134,4 +133,7 @@ EXPERIMENTAL {
> __rte_eventdev_trace_crypto_adapter_queue_pair_del;
> __rte_eventdev_trace_crypto_adapter_start;
> __rte_eventdev_trace_crypto_adapter_stop;
>+
>+ # changed in 20.11
>+ __rte_eventdev_trace_port_setup;
> };
>--
>2.6.4
Thread overview: 8+ messages
2020-10-05 20:27 [dpdk-dev] [PATCH v2 0/2] Eventdev ABI changes for DLB/DLB2 Timothy McDaniel
2020-10-05 20:27 ` [dpdk-dev] [PATCH v2 1/2] eventdev: eventdev: express DLB/DLB2 PMD constraints Timothy McDaniel
2020-10-06 8:15 ` Van Haaren, Harry
2020-10-12 19:06 ` Pavan Nikhilesh Bhagavatula [this message]
2020-10-05 20:27 ` [dpdk-dev] [PATCH v2 2/2] eventdev: update app and examples for new eventdev ABI Timothy McDaniel
2020-10-06 8:26 ` Van Haaren, Harry
2020-10-12 19:09 ` Pavan Nikhilesh Bhagavatula
2020-10-13 19:20 ` Jerin Jacob