* [dpdk-dev] [PATCH 0/6] event/dpaa: Support for eventdev
@ 2017-12-15 12:59 Sunil Kumar Kori
2017-12-15 12:59 ` [dpdk-dev] [PATCH 1/6] bus/dpaa: added event dequeue and consumption support Sunil Kumar Kori
` (7 more replies)
0 siblings, 8 replies; 12+ messages in thread
From: Sunil Kumar Kori @ 2017-12-15 12:59 UTC (permalink / raw)
To: jerin.jacob; +Cc: dev, hemant.agrawal
Event device support for atomic and parallel queues.
This patch set includes the following changes:
1. Configuration of atomic and parallel queues with a given event device.
2. Retention of the previous dequeue method via poll-mode queues.
3. Rx functions to dequeue data from the portal.
4. DCA consumption logic for atomic queues.
5. Dynamic logging macros for the event device.
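The compilation switch added by patch 5 can be sketched as a config fragment. The option name matches the one used in drivers/event/Makefile; the exact defconfig placement is an assumption here:

```
CONFIG_RTE_LIBRTE_PMD_DPAA_EVENTDEV=y
```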
Sunil Kumar Kori (6):
bus/dpaa: added event dequeue and consumption support
bus/dpaa: add dpaa eventdev dynamic log support
net/dpaa: ethdev Rx queue configurations with eventdev
event/dpaa: add eventdev PMD
config: enabling compilation of DPAA eventdev PMD
doc: add DPAA eventdev guide
config/common_base | 4 +
config/defconfig_arm64-dpaa-linuxapp-gcc | 3 +
doc/guides/eventdevs/dpaa.rst | 144 +++++
doc/guides/eventdevs/index.rst | 1 +
drivers/bus/dpaa/base/qbman/qman.c | 90 ++-
drivers/bus/dpaa/dpaa_bus.c | 6 +
drivers/bus/dpaa/include/fsl_qman.h | 26 +-
drivers/bus/dpaa/rte_bus_dpaa_version.map | 6 +
drivers/bus/dpaa/rte_dpaa_bus.h | 14 +
drivers/bus/dpaa/rte_dpaa_logs.h | 16 +
drivers/event/Makefile | 1 +
drivers/event/dpaa/Makefile | 37 ++
drivers/event/dpaa/dpaa_eventdev.c | 652 ++++++++++++++++++++++
drivers/event/dpaa/dpaa_eventdev.h | 86 +++
drivers/event/dpaa/rte_pmd_dpaa_event_version.map | 4 +
drivers/net/dpaa/Makefile | 2 +
drivers/net/dpaa/dpaa_ethdev.c | 110 +++-
drivers/net/dpaa/dpaa_ethdev.h | 29 +
drivers/net/dpaa/dpaa_rxtx.c | 79 ++-
drivers/net/dpaa/rte_pmd_dpaa_version.map | 2 +
mk/rte.app.mk | 1 +
21 files changed, 1296 insertions(+), 17 deletions(-)
create mode 100644 doc/guides/eventdevs/dpaa.rst
create mode 100644 drivers/event/dpaa/Makefile
create mode 100644 drivers/event/dpaa/dpaa_eventdev.c
create mode 100644 drivers/event/dpaa/dpaa_eventdev.h
create mode 100644 drivers/event/dpaa/rte_pmd_dpaa_event_version.map
--
2.9.3
^ permalink raw reply [flat|nested] 12+ messages in thread
* [dpdk-dev] [PATCH 1/6] bus/dpaa: added event dequeue and consumption support
2017-12-15 12:59 [dpdk-dev] [PATCH 0/6] event/dpaa: Support for eventdev Sunil Kumar Kori
@ 2017-12-15 12:59 ` Sunil Kumar Kori
2017-12-15 12:59 ` [dpdk-dev] [PATCH 2/6] bus/dpaa: add dpaa eventdev dynamic log support Sunil Kumar Kori
` (6 subsequent siblings)
7 siblings, 0 replies; 12+ messages in thread
From: Sunil Kumar Kori @ 2017-12-15 12:59 UTC (permalink / raw)
To: jerin.jacob; +Cc: dev, hemant.agrawal
To receive events from a given event port, a
corresponding function is added that dequeues events
from the portal. A function is also added to consume
received events based on their DQRR entry index.
Signed-off-by: Sunil Kumar Kori <sunil.kori@nxp.com>
---
drivers/bus/dpaa/base/qbman/qman.c | 90 +++++++++++++++++++++++++++++--
drivers/bus/dpaa/dpaa_bus.c | 1 +
drivers/bus/dpaa/include/fsl_qman.h | 26 +++++++--
drivers/bus/dpaa/rte_bus_dpaa_version.map | 5 ++
drivers/bus/dpaa/rte_dpaa_bus.h | 14 +++++
drivers/net/dpaa/dpaa_rxtx.c | 1 +
6 files changed, 128 insertions(+), 9 deletions(-)
diff --git a/drivers/bus/dpaa/base/qbman/qman.c b/drivers/bus/dpaa/base/qbman/qman.c
index 42d509d..f39e618 100644
--- a/drivers/bus/dpaa/base/qbman/qman.c
+++ b/drivers/bus/dpaa/base/qbman/qman.c
@@ -41,6 +41,7 @@
#include "qman.h"
#include <rte_branch_prediction.h>
#include <rte_dpaa_bus.h>
+#include <rte_eventdev.h>
/* Compilation constants */
#define DQRR_MAXFILL 15
@@ -1144,6 +1145,74 @@ unsigned int qman_portal_poll_rx(unsigned int poll_limit,
return limit;
}
+u32 qman_portal_dequeue(struct rte_event ev[], unsigned int poll_limit,
+ void **bufs)
+{
+ const struct qm_dqrr_entry *dq;
+ struct qman_fq *fq;
+ enum qman_cb_dqrr_result res;
+ unsigned int limit = 0;
+ struct qman_portal *p = get_affine_portal();
+#if __BYTE_ORDER__ == __ORDER_LITTLE_ENDIAN__
+ struct qm_dqrr_entry *shadow;
+#endif
+ unsigned int rx_number = 0;
+
+ do {
+ qm_dqrr_pvb_update(&p->p);
+ dq = qm_dqrr_current(&p->p);
+ if (!dq)
+ break;
+#if __BYTE_ORDER__ == __ORDER_LITTLE_ENDIAN__
+ /*
+ * If running on an LE system, the fields of the
+ * dequeue entry must be byte-swapped. Because the
+ * QMan HW will ignore writes, the DQRR entry is
+ * copied and the index is stored within the copy.
+ */
+ shadow = &p->shadow_dqrr[DQRR_PTR2IDX(dq)];
+ *shadow = *dq;
+ dq = shadow;
+ shadow->fqid = be32_to_cpu(shadow->fqid);
+ shadow->contextB = be32_to_cpu(shadow->contextB);
+ shadow->seqnum = be16_to_cpu(shadow->seqnum);
+ hw_fd_to_cpu(&shadow->fd);
+#endif
+
+ /* SDQCR: context_b points to the FQ */
+#ifdef CONFIG_FSL_QMAN_FQ_LOOKUP
+ fq = get_fq_table_entry(dq->contextB);
+#else
+ fq = (void *)(uintptr_t)dq->contextB;
+#endif
+ /* Now let the callback do its stuff */
+ res = fq->cb.dqrr_dpdk_cb(&ev[rx_number], p, fq,
+ dq, &bufs[rx_number]);
+ rx_number++;
+ /* Interpret 'dq' from a driver perspective. */
+ /*
+ * Parking isn't possible unless HELDACTIVE was set. NB,
+ * FORCEELIGIBLE implies HELDACTIVE, so we only need to
+ * check for HELDACTIVE to cover both.
+ */
+ DPAA_ASSERT((dq->stat & QM_DQRR_STAT_FQ_HELDACTIVE) ||
+ (res != qman_cb_dqrr_park));
+ if (res != qman_cb_dqrr_defer)
+ qm_dqrr_cdc_consume_1ptr(&p->p, dq,
+ res == qman_cb_dqrr_park);
+ /* Move forward */
+ qm_dqrr_next(&p->p);
+ /*
+ * Entry processed and consumed, increment our counter. The
+ * callback can request that we exit after consuming the
+ * entry, and we also exit if we reach our processing limit,
+ * so loop back only if neither of these conditions is met.
+ */
+ } while (++limit < poll_limit);
+
+ return limit;
+}
+
struct qm_dqrr_entry *qman_dequeue(struct qman_fq *fq)
{
struct qman_portal *p = get_affine_portal();
@@ -1262,13 +1331,20 @@ u32 qman_static_dequeue_get(struct qman_portal *qp)
return p->sdqcr;
}
-void qman_dca(struct qm_dqrr_entry *dq, int park_request)
+void qman_dca(const struct qm_dqrr_entry *dq, int park_request)
{
struct qman_portal *p = get_affine_portal();
qm_dqrr_cdc_consume_1ptr(&p->p, dq, park_request);
}
+void qman_dca_index(u8 index, int park_request)
+{
+ struct qman_portal *p = get_affine_portal();
+
+ qm_dqrr_cdc_consume_1(&p->p, index, park_request);
+}
+
/* Frame queue API */
static const char *mcr_result_str(u8 result)
{
@@ -2116,8 +2192,8 @@ int qman_enqueue(struct qman_fq *fq, const struct qm_fd *fd, u32 flags)
}
int qman_enqueue_multi(struct qman_fq *fq,
- const struct qm_fd *fd,
- int frames_to_send)
+ const struct qm_fd *fd, u32 *flags,
+ int frames_to_send)
{
struct qman_portal *p = get_affine_portal();
struct qm_portal *portal = &p->p;
@@ -2125,7 +2201,7 @@ int qman_enqueue_multi(struct qman_fq *fq,
register struct qm_eqcr *eqcr = &portal->eqcr;
struct qm_eqcr_entry *eq = eqcr->cursor, *prev_eq;
- u8 i, diff, old_ci, sent = 0;
+ u8 i = 0, diff, old_ci, sent = 0;
/* Update the available entries if no entry is free */
if (!eqcr->available) {
@@ -2149,7 +2225,11 @@ int qman_enqueue_multi(struct qman_fq *fq,
eq->fd.addr = cpu_to_be40(fd->addr);
eq->fd.status = cpu_to_be32(fd->status);
eq->fd.opaque = cpu_to_be32(fd->opaque);
-
+ if (flags[i] & QMAN_ENQUEUE_FLAG_DCA) {
+ eq->dca = QM_EQCR_DCA_ENABLE |
+ ((flags[i] >> 8) & QM_EQCR_DCA_IDXMASK);
+ }
+ i++;
eq = (void *)((unsigned long)(eq + 1) &
(~(unsigned long)(QM_EQCR_SIZE << 6)));
eqcr->available--;
diff --git a/drivers/bus/dpaa/dpaa_bus.c b/drivers/bus/dpaa/dpaa_bus.c
index 8d74643..01b332a 100644
--- a/drivers/bus/dpaa/dpaa_bus.c
+++ b/drivers/bus/dpaa/dpaa_bus.c
@@ -80,6 +80,7 @@ pthread_key_t dpaa_portal_key;
unsigned int dpaa_svr_family;
RTE_DEFINE_PER_LCORE(bool, _dpaa_io);
+RTE_DEFINE_PER_LCORE(struct dpaa_portal_dqrr, held_bufs);
static inline void
dpaa_add_to_device_list(struct rte_dpaa_device *dev)
diff --git a/drivers/bus/dpaa/include/fsl_qman.h b/drivers/bus/dpaa/include/fsl_qman.h
index 7ec07ee..145c1c1 100644
--- a/drivers/bus/dpaa/include/fsl_qman.h
+++ b/drivers/bus/dpaa/include/fsl_qman.h
@@ -45,6 +45,7 @@ extern "C" {
#endif
#include <dpaa_rbtree.h>
+#include <rte_eventdev.h>
/* FQ lookups (turn this on for 64bit user-space) */
#if (__WORDSIZE == 64)
@@ -1239,6 +1240,7 @@ struct qman_fq {
/* DPDK Interface */
void *dpaa_intf;
+ struct rte_event ev;
/* affined portal in case of static queue */
struct qman_portal *qp;
@@ -1329,6 +1331,9 @@ struct qman_cgr {
*/
int qman_get_portal_index(void);
+u32 qman_portal_dequeue(struct rte_event ev[], unsigned int poll_limit,
+ void **bufs);
+
/**
* qman_affine_channel - return the channel ID of a portal
* @cpu: the cpu whose affine portal is the subject of the query
@@ -1462,7 +1467,21 @@ u32 qman_static_dequeue_get(struct qman_portal *qp);
* function must be called from the same CPU as that which processed the DQRR
* entry in the first place.
*/
-void qman_dca(struct qm_dqrr_entry *dq, int park_request);
+void qman_dca(const struct qm_dqrr_entry *dq, int park_request);
+
+/**
+ * qman_dca_index - Perform a Discrete Consumption Acknowledgment
+ * @index: the DQRR index to be consumed
+ * @park_request: indicates whether the held-active @fq should be parked
+ *
+ * Only allowed in DCA-mode portals, for DQRR entries whose handler callback had
+ * previously returned 'qman_cb_dqrr_defer'. NB, as with the other APIs, this
+ * does not take a 'portal' argument but implies the core affine portal from the
+ * cpu that is currently executing the function. For reasons of locking, this
+ * function must be called from the same CPU as that which processed the DQRR
+ * entry in the first place.
+ */
+void qman_dca_index(u8 index, int park_request);
/**
* qman_eqcr_is_empty - Determine if portal's EQCR is empty
@@ -1730,9 +1749,8 @@ int qman_volatile_dequeue(struct qman_fq *fq, u32 flags, u32 vdqcr);
*/
int qman_enqueue(struct qman_fq *fq, const struct qm_fd *fd, u32 flags);
-int qman_enqueue_multi(struct qman_fq *fq,
- const struct qm_fd *fd,
- int frames_to_send);
+int qman_enqueue_multi(struct qman_fq *fq, const struct qm_fd *fd, u32 *flags,
+ int frames_to_send);
typedef int (*qman_cb_precommit) (void *arg);
diff --git a/drivers/bus/dpaa/rte_bus_dpaa_version.map b/drivers/bus/dpaa/rte_bus_dpaa_version.map
index 460cfbf..afc40bc 100644
--- a/drivers/bus/dpaa/rte_bus_dpaa_version.map
+++ b/drivers/bus/dpaa/rte_bus_dpaa_version.map
@@ -69,14 +69,19 @@ DPDK_18.02 {
global:
dpaa_svr_family;
+ per_lcore_held_bufs;
+ qm_channel_pool1;
qman_alloc_cgrid_range;
qman_alloc_pool_range;
qman_create_cgr;
+ qman_dca_index;
qman_delete_cgr;
qman_modify_cgr;
+ qman_portal_dequeue;
qman_portal_poll_rx;
qman_query_fq_frm_cnt;
qman_release_cgrid_range;
+ qman_static_dequeue_add;
rte_dpaa_portal_fq_close;
rte_dpaa_portal_fq_init;
diff --git a/drivers/bus/dpaa/rte_dpaa_bus.h b/drivers/bus/dpaa/rte_dpaa_bus.h
index b0f7d48..6aa9e60 100644
--- a/drivers/bus/dpaa/rte_dpaa_bus.h
+++ b/drivers/bus/dpaa/rte_dpaa_bus.h
@@ -181,6 +181,20 @@ static void dpaainitfn_ ##nm(void) \
} \
RTE_PMD_EXPORT_NAME(nm, __COUNTER__)
+/* Create storage for dqrr entries per lcore */
+#define DPAA_PORTAL_DEQUEUE_DEPTH 16
+struct dpaa_portal_dqrr {
+ void *mbuf[DPAA_PORTAL_DEQUEUE_DEPTH];
+ uint64_t dqrr_held;
+ uint8_t dqrr_size;
+};
+
+RTE_DECLARE_PER_LCORE(struct dpaa_portal_dqrr, held_bufs);
+
+#define DPAA_PER_LCORE_DQRR_SIZE RTE_PER_LCORE(held_bufs).dqrr_size
+#define DPAA_PER_LCORE_DQRR_HELD RTE_PER_LCORE(held_bufs).dqrr_held
+#define DPAA_PER_LCORE_DQRR_MBUF(i) RTE_PER_LCORE(held_bufs).mbuf[i]
+
#ifdef __cplusplus
}
#endif
diff --git a/drivers/net/dpaa/dpaa_rxtx.c b/drivers/net/dpaa/dpaa_rxtx.c
index 088fbe1..1caecf2 100644
--- a/drivers/net/dpaa/dpaa_rxtx.c
+++ b/drivers/net/dpaa/dpaa_rxtx.c
@@ -800,6 +800,7 @@ dpaa_eth_queue_tx(void *q, struct rte_mbuf **bufs, uint16_t nb_bufs)
loop = 0;
while (loop < frames_to_send) {
loop += qman_enqueue_multi(q, &fd_arr[loop],
+ NULL,
frames_to_send - loop);
}
nb_bufs -= frames_to_send;
--
2.9.3
* [dpdk-dev] [PATCH 2/6] bus/dpaa: add dpaa eventdev dynamic log support
2017-12-15 12:59 [dpdk-dev] [PATCH 0/6] event/dpaa: Support for eventdev Sunil Kumar Kori
2017-12-15 12:59 ` [dpdk-dev] [PATCH 1/6] bus/dpaa: added event dequeue and consumption support Sunil Kumar Kori
@ 2017-12-15 12:59 ` Sunil Kumar Kori
2017-12-15 12:59 ` [dpdk-dev] [PATCH 3/6] net/dpaa: ethdev Rx queue configurations with eventdev Sunil Kumar Kori
` (5 subsequent siblings)
7 siblings, 0 replies; 12+ messages in thread
From: Sunil Kumar Kori @ 2017-12-15 12:59 UTC (permalink / raw)
To: jerin.jacob; +Cc: dev, hemant.agrawal
Add a dynamic log type and logging macros for the DPAA
event device driver.
Signed-off-by: Sunil Kumar Kori <sunil.kori@nxp.com>
---
drivers/bus/dpaa/dpaa_bus.c | 5 +++++
drivers/bus/dpaa/rte_bus_dpaa_version.map | 1 +
drivers/bus/dpaa/rte_dpaa_logs.h | 16 ++++++++++++++++
3 files changed, 22 insertions(+)
diff --git a/drivers/bus/dpaa/dpaa_bus.c b/drivers/bus/dpaa/dpaa_bus.c
index 01b332a..60a1ad5 100644
--- a/drivers/bus/dpaa/dpaa_bus.c
+++ b/drivers/bus/dpaa/dpaa_bus.c
@@ -70,6 +70,7 @@
int dpaa_logtype_bus;
int dpaa_logtype_mempool;
int dpaa_logtype_pmd;
+int dpaa_logtype_eventdev;
struct rte_dpaa_bus rte_dpaa_bus;
struct netcfg_info *dpaa_netcfg;
@@ -562,4 +563,8 @@ dpaa_init_log(void)
dpaa_logtype_pmd = rte_log_register("pmd.dpaa");
if (dpaa_logtype_pmd >= 0)
rte_log_set_level(dpaa_logtype_pmd, RTE_LOG_NOTICE);
+
+ dpaa_logtype_eventdev = rte_log_register("eventdev.dpaa");
+ if (dpaa_logtype_eventdev >= 0)
+ rte_log_set_level(dpaa_logtype_eventdev, RTE_LOG_NOTICE);
}
diff --git a/drivers/bus/dpaa/rte_bus_dpaa_version.map b/drivers/bus/dpaa/rte_bus_dpaa_version.map
index afc40bc..93cd118 100644
--- a/drivers/bus/dpaa/rte_bus_dpaa_version.map
+++ b/drivers/bus/dpaa/rte_bus_dpaa_version.map
@@ -68,6 +68,7 @@ DPDK_17.11 {
DPDK_18.02 {
global:
+ dpaa_logtype_eventdev;
dpaa_svr_family;
per_lcore_held_bufs;
qm_channel_pool1;
diff --git a/drivers/bus/dpaa/rte_dpaa_logs.h b/drivers/bus/dpaa/rte_dpaa_logs.h
index 037c96b..f36aac1 100644
--- a/drivers/bus/dpaa/rte_dpaa_logs.h
+++ b/drivers/bus/dpaa/rte_dpaa_logs.h
@@ -38,6 +38,7 @@
extern int dpaa_logtype_bus;
extern int dpaa_logtype_mempool;
extern int dpaa_logtype_pmd;
+extern int dpaa_logtype_eventdev;
#define DPAA_BUS_LOG(level, fmt, args...) \
rte_log(RTE_LOG_ ## level, dpaa_logtype_bus, "%s(): " fmt "\n", \
@@ -100,6 +101,21 @@ extern int dpaa_logtype_pmd;
#define DPAA_PMD_WARN(fmt, args...) \
DPAA_PMD_LOG(WARNING, fmt, ## args)
+#define DPAA_EVENTDEV_LOG(level, fmt, args...) \
+ rte_log(RTE_LOG_ ## level, dpaa_logtype_eventdev, "%s(): " fmt "\n", \
+ __func__, ##args)
+
+#define EVENTDEV_INIT_FUNC_TRACE() DPAA_EVENTDEV_LOG(DEBUG, " >>")
+
+#define DPAA_EVENTDEV_DEBUG(fmt, args...) \
+ DPAA_EVENTDEV_LOG(DEBUG, fmt, ## args)
+#define DPAA_EVENTDEV_ERR(fmt, args...) \
+ DPAA_EVENTDEV_LOG(ERR, fmt, ## args)
+#define DPAA_EVENTDEV_INFO(fmt, args...) \
+ DPAA_EVENTDEV_LOG(INFO, fmt, ## args)
+#define DPAA_EVENTDEV_WARN(fmt, args...) \
+ DPAA_EVENTDEV_LOG(WARNING, fmt, ## args)
+
/* DP Logs, toggled out at compile time if level lower than current level */
#define DPAA_DP_LOG(level, fmt, args...) \
RTE_LOG_DP(level, PMD, fmt, ## args)
--
2.9.3
* [dpdk-dev] [PATCH 3/6] net/dpaa: ethdev Rx queue configurations with eventdev
2017-12-15 12:59 [dpdk-dev] [PATCH 0/6] event/dpaa: Support for eventdev Sunil Kumar Kori
2017-12-15 12:59 ` [dpdk-dev] [PATCH 1/6] bus/dpaa: added event dequeue and consumption support Sunil Kumar Kori
2017-12-15 12:59 ` [dpdk-dev] [PATCH 2/6] bus/dpaa: add dpaa eventdev dynamic log support Sunil Kumar Kori
@ 2017-12-15 12:59 ` Sunil Kumar Kori
2017-12-15 12:59 ` [dpdk-dev] [PATCH 4/6] event/dpaa: add eventdev PMD Sunil Kumar Kori
` (4 subsequent siblings)
7 siblings, 0 replies; 12+ messages in thread
From: Sunil Kumar Kori @ 2017-12-15 12:59 UTC (permalink / raw)
To: jerin.jacob; +Cc: dev, hemant.agrawal
A given Ethernet Rx queue can be attached to an event
queue in parallel or atomic mode. This patch implements
the Rx queue configuration and the corresponding
callbacks to handle events from the respective queues.
Signed-off-by: Sunil Kumar Kori <sunil.kori@nxp.com>
---
drivers/net/dpaa/Makefile | 2 +
drivers/net/dpaa/dpaa_ethdev.c | 110 ++++++++++++++++++++++++++++--
drivers/net/dpaa/dpaa_ethdev.h | 29 ++++++++
drivers/net/dpaa/dpaa_rxtx.c | 80 +++++++++++++++++++++-
drivers/net/dpaa/rte_pmd_dpaa_version.map | 2 +
5 files changed, 214 insertions(+), 9 deletions(-)
diff --git a/drivers/net/dpaa/Makefile b/drivers/net/dpaa/Makefile
index a99d1ee..c644353 100644
--- a/drivers/net/dpaa/Makefile
+++ b/drivers/net/dpaa/Makefile
@@ -43,7 +43,9 @@ CFLAGS += -I$(RTE_SDK_DPAA)/
CFLAGS += -I$(RTE_SDK_DPAA)/include
CFLAGS += -I$(RTE_SDK)/drivers/bus/dpaa
CFLAGS += -I$(RTE_SDK)/drivers/bus/dpaa/include/
+CFLAGS += -I$(RTE_SDK)/drivers/bus/dpaa/base/qbman
CFLAGS += -I$(RTE_SDK)/drivers/mempool/dpaa
+CFLAGS += -I$(RTE_SDK)/drivers/event/dpaa
CFLAGS += -I$(RTE_SDK)/lib/librte_eal/common/include
CFLAGS += -I$(RTE_SDK)/lib/librte_eal/linuxapp/eal/include
diff --git a/drivers/net/dpaa/dpaa_ethdev.c b/drivers/net/dpaa/dpaa_ethdev.c
index 7798994..457e421 100644
--- a/drivers/net/dpaa/dpaa_ethdev.c
+++ b/drivers/net/dpaa/dpaa_ethdev.c
@@ -121,6 +121,21 @@ static const struct rte_dpaa_xstats_name_off dpaa_xstats_strings[] = {
static struct rte_dpaa_driver rte_dpaa_pmd;
+static inline void
+dpaa_poll_queue_default_config(struct qm_mcc_initfq *opts)
+{
+ memset(opts, 0, sizeof(struct qm_mcc_initfq));
+ opts->we_mask = QM_INITFQ_WE_FQCTRL | QM_INITFQ_WE_CONTEXTA;
+ opts->fqd.fq_ctrl = QM_FQCTRL_AVOIDBLOCK | QM_FQCTRL_CTXASTASHING |
+ QM_FQCTRL_PREFERINCACHE;
+ opts->fqd.context_a.stashing.exclusive = 0;
+ if (dpaa_svr_family != SVR_LS1046A_FAMILY)
+ opts->fqd.context_a.stashing.annotation_cl =
+ DPAA_IF_RX_ANNOTATION_STASH;
+ opts->fqd.context_a.stashing.data_cl = DPAA_IF_RX_DATA_STASH;
+ opts->fqd.context_a.stashing.context_cl = DPAA_IF_RX_CONTEXT_STASH;
+}
+
static int
dpaa_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
{
@@ -561,6 +576,92 @@ int dpaa_eth_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
return 0;
}
+int dpaa_eth_eventq_attach(const struct rte_eth_dev *dev,
+ int eth_rx_queue_id,
+ u16 ch_id,
+ const struct rte_event_eth_rx_adapter_queue_conf *queue_conf)
+{
+ int ret;
+ u32 flags = 0;
+ struct dpaa_if *dpaa_intf = dev->data->dev_private;
+ struct qman_fq *rxq = &dpaa_intf->rx_queues[eth_rx_queue_id];
+ struct qm_mcc_initfq opts = {0};
+
+ dpaa_poll_queue_default_config(&opts);
+
+ switch (queue_conf->ev.sched_type) {
+ case RTE_SCHED_TYPE_ATOMIC:
+ opts.fqd.fq_ctrl |= QM_FQCTRL_HOLDACTIVE;
+ /* Reset the FQCTRL_AVOIDBLOCK bit, as it is an unnecessary
+ * configuration with the HOLD_ACTIVE setting
+ */
+ opts.fqd.fq_ctrl &= (~QM_FQCTRL_AVOIDBLOCK);
+ rxq->cb.dqrr_dpdk_cb = dpaa_rx_cb_atomic;
+ break;
+ case RTE_SCHED_TYPE_ORDERED:
+ DPAA_PMD_ERR("Ordered queue schedule type is not supported\n");
+ return -1;
+ default:
+ opts.fqd.fq_ctrl |= QM_FQCTRL_AVOIDBLOCK;
+ rxq->cb.dqrr_dpdk_cb = dpaa_rx_cb_parallel;
+ break;
+ }
+
+ opts.we_mask = opts.we_mask | QM_INITFQ_WE_DESTWQ;
+ opts.fqd.dest.channel = ch_id;
+ opts.fqd.dest.wq = queue_conf->ev.priority;
+
+ if (dpaa_intf->cgr_rx) {
+ opts.we_mask |= QM_INITFQ_WE_CGID;
+ opts.fqd.cgid = dpaa_intf->cgr_rx[eth_rx_queue_id].cgrid;
+ opts.fqd.fq_ctrl |= QM_FQCTRL_CGE;
+ }
+
+ flags = QMAN_INITFQ_FLAG_SCHED;
+
+ ret = qman_init_fq(rxq, flags, &opts);
+ if (ret) {
+ DPAA_PMD_ERR("Channel/Queue association failed. fqid %d ret:%d",
+ rxq->fqid, ret);
+ return ret;
+ }
+
+ /* copy configuration which needs to be filled during dequeue */
+ memcpy(&rxq->ev, &queue_conf->ev, sizeof(struct rte_event));
+ dev->data->rx_queues[eth_rx_queue_id] = rxq;
+
+ return ret;
+}
+
+int dpaa_eth_eventq_detach(const struct rte_eth_dev *dev,
+ int eth_rx_queue_id)
+{
+ struct qm_mcc_initfq opts;
+ int ret;
+ u32 flags = 0;
+ struct dpaa_if *dpaa_intf = dev->data->dev_private;
+ struct qman_fq *rxq = &dpaa_intf->rx_queues[eth_rx_queue_id];
+
+ dpaa_poll_queue_default_config(&opts);
+
+ if (dpaa_intf->cgr_rx) {
+ opts.we_mask |= QM_INITFQ_WE_CGID;
+ opts.fqd.cgid = dpaa_intf->cgr_rx[eth_rx_queue_id].cgrid;
+ opts.fqd.fq_ctrl |= QM_FQCTRL_CGE;
+ }
+
+ ret = qman_init_fq(rxq, flags, &opts);
+ if (ret) {
+ DPAA_PMD_ERR("init rx fqid %d failed with ret: %d",
+ rxq->fqid, ret);
+ }
+
+ rxq->cb.dqrr_dpdk_cb = NULL;
+ dev->data->rx_queues[eth_rx_queue_id] = NULL;
+
+ return 0;
+}
+
static
void dpaa_eth_rx_queue_release(void *rxq __rte_unused)
{
@@ -881,13 +982,8 @@ static int dpaa_rx_queue_init(struct qman_fq *fq, struct qman_cgr *cgr_rx,
return ret;
}
fq->is_static = false;
- opts.we_mask = QM_INITFQ_WE_FQCTRL | QM_INITFQ_WE_CONTEXTA;
- opts.fqd.fq_ctrl = QM_FQCTRL_AVOIDBLOCK | QM_FQCTRL_CTXASTASHING |
- QM_FQCTRL_PREFERINCACHE;
- opts.fqd.context_a.stashing.exclusive = 0;
- opts.fqd.context_a.stashing.annotation_cl = DPAA_IF_RX_ANNOTATION_STASH;
- opts.fqd.context_a.stashing.data_cl = DPAA_IF_RX_DATA_STASH;
- opts.fqd.context_a.stashing.context_cl = DPAA_IF_RX_CONTEXT_STASH;
+
+ dpaa_poll_queue_default_config(&opts);
if (cgr_rx) {
/* Enable tail drop with cgr on this queue */
diff --git a/drivers/net/dpaa/dpaa_ethdev.h b/drivers/net/dpaa/dpaa_ethdev.h
index c0a8430..b81522a 100644
--- a/drivers/net/dpaa/dpaa_ethdev.h
+++ b/drivers/net/dpaa/dpaa_ethdev.h
@@ -36,6 +36,7 @@
/* System headers */
#include <stdbool.h>
#include <rte_ethdev.h>
+#include <rte_event_eth_rx_adapter.h>
#include <fsl_usd.h>
#include <fsl_qman.h>
@@ -50,6 +51,13 @@
#error "Annotation requirement is more than RTE_PKTMBUF_HEADROOM"
#endif
+/* mbuf->seqn will be used to store the event entry index for
+ * driver-specific usage. For parallel mode queues, an invalid
+ * index is set; for atomic mode queues, a valid value in the
+ * range 1 to 16 is set.
+ */
+#define DPAA_INVALID_MBUF_SEQN 0
+
/* we will re-use the HEADROOM for annotation in RX */
#define DPAA_HW_BUF_RESERVE 0
#define DPAA_PACKET_LAYOUT_ALIGN 64
@@ -178,4 +186,25 @@ struct dpaa_if_stats {
uint64_t tund; /**<Tx Undersized */
};
+int dpaa_eth_eventq_attach(const struct rte_eth_dev *dev,
+ int eth_rx_queue_id,
+ u16 ch_id,
+ const struct rte_event_eth_rx_adapter_queue_conf *queue_conf);
+
+int dpaa_eth_eventq_detach(const struct rte_eth_dev *dev,
+ int eth_rx_queue_id);
+
+enum qman_cb_dqrr_result
+dpaa_rx_cb_parallel(void *event,
+ struct qman_portal *qm __always_unused,
+ struct qman_fq *fq,
+ const struct qm_dqrr_entry *dqrr,
+ void **bufs);
+enum qman_cb_dqrr_result
+dpaa_rx_cb_atomic(void *event,
+ struct qman_portal *qm __always_unused,
+ struct qman_fq *fq,
+ const struct qm_dqrr_entry *dqrr,
+ void **bufs);
+
#endif
diff --git a/drivers/net/dpaa/dpaa_rxtx.c b/drivers/net/dpaa/dpaa_rxtx.c
index 1caecf2..53be8a9 100644
--- a/drivers/net/dpaa/dpaa_rxtx.c
+++ b/drivers/net/dpaa/dpaa_rxtx.c
@@ -59,12 +59,14 @@
#include <rte_tcp.h>
#include <rte_udp.h>
#include <rte_net.h>
+#include <rte_eventdev.h>
#include "dpaa_ethdev.h"
#include "dpaa_rxtx.h"
#include <rte_dpaa_bus.h>
#include <dpaa_mempool.h>
+#include <qman.h>
#include <fsl_usd.h>
#include <fsl_qman.h>
#include <fsl_bman.h>
@@ -451,6 +453,67 @@ dpaa_eth_queue_portal_rx(struct qman_fq *fq,
return qman_portal_poll_rx(nb_bufs, (void **)bufs, fq->qp);
}
+enum qman_cb_dqrr_result
+dpaa_rx_cb_parallel(void *event,
+ struct qman_portal *qm __always_unused,
+ struct qman_fq *fq,
+ const struct qm_dqrr_entry *dqrr,
+ void **bufs)
+{
+ u32 ifid = ((struct dpaa_if *)fq->dpaa_intf)->ifid;
+ struct rte_mbuf *mbuf;
+ struct rte_event *ev = (struct rte_event *)event;
+
+ mbuf = dpaa_eth_fd_to_mbuf(&dqrr->fd, ifid);
+ ev->event_ptr = (void *)mbuf;
+ ev->flow_id = fq->ev.flow_id;
+ ev->sub_event_type = fq->ev.sub_event_type;
+ ev->event_type = RTE_EVENT_TYPE_ETHDEV;
+ ev->op = RTE_EVENT_OP_NEW;
+ ev->sched_type = fq->ev.sched_type;
+ ev->queue_id = fq->ev.queue_id;
+ ev->priority = fq->ev.priority;
+ ev->impl_opaque = (uint8_t)DPAA_INVALID_MBUF_SEQN;
+ mbuf->seqn = DPAA_INVALID_MBUF_SEQN;
+ *bufs = mbuf;
+
+ return qman_cb_dqrr_consume;
+}
+
+enum qman_cb_dqrr_result
+dpaa_rx_cb_atomic(void *event,
+ struct qman_portal *qm __always_unused,
+ struct qman_fq *fq,
+ const struct qm_dqrr_entry *dqrr,
+ void **bufs)
+{
+ u8 index;
+ u32 ifid = ((struct dpaa_if *)fq->dpaa_intf)->ifid;
+ struct rte_mbuf *mbuf;
+ struct rte_event *ev = (struct rte_event *)event;
+
+ mbuf = dpaa_eth_fd_to_mbuf(&dqrr->fd, ifid);
+ ev->event_ptr = (void *)mbuf;
+ ev->flow_id = fq->ev.flow_id;
+ ev->sub_event_type = fq->ev.sub_event_type;
+ ev->event_type = RTE_EVENT_TYPE_ETHDEV;
+ ev->op = RTE_EVENT_OP_NEW;
+ ev->sched_type = fq->ev.sched_type;
+ ev->queue_id = fq->ev.queue_id;
+ ev->priority = fq->ev.priority;
+
+ /* Save active dqrr entries */
+ index = DQRR_PTR2IDX(dqrr);
+ DPAA_PER_LCORE_DQRR_SIZE++;
+ DPAA_PER_LCORE_DQRR_HELD |= 1 << index;
+ DPAA_PER_LCORE_DQRR_MBUF(index) = mbuf;
+ ev->impl_opaque = index + 1;
+ mbuf->seqn = (uint32_t)index + 1;
+ *bufs = mbuf;
+
+ return qman_cb_dqrr_defer;
+}
+
uint16_t dpaa_eth_queue_rx(void *q,
struct rte_mbuf **bufs,
uint16_t nb_bufs)
@@ -734,6 +797,7 @@ dpaa_eth_queue_tx(void *q, struct rte_mbuf **bufs, uint16_t nb_bufs)
uint32_t frames_to_send, loop, sent = 0;
uint16_t state;
int ret;
+ uint32_t seqn, index, flags[DPAA_TX_BURST_SIZE] = {0};
ret = rte_dpaa_portal_init((void *)0);
if (ret) {
@@ -794,14 +858,26 @@ dpaa_eth_queue_tx(void *q, struct rte_mbuf **bufs, uint16_t nb_bufs)
goto send_pkts;
}
}
+ seqn = mbuf->seqn;
+ if (seqn != DPAA_INVALID_MBUF_SEQN) {
+ index = seqn - 1;
+ if (DPAA_PER_LCORE_DQRR_HELD & (1 << index)) {
+ flags[loop] =
+ ((index & QM_EQCR_DCA_IDXMASK) << 8);
+ flags[loop] |= QMAN_ENQUEUE_FLAG_DCA;
+ DPAA_PER_LCORE_DQRR_SIZE--;
+ DPAA_PER_LCORE_DQRR_HELD &=
+ ~(1 << index);
+ }
+ }
}
send_pkts:
loop = 0;
while (loop < frames_to_send) {
loop += qman_enqueue_multi(q, &fd_arr[loop],
- NULL,
- frames_to_send - loop);
+ &flags[loop],
+ frames_to_send - loop);
}
nb_bufs -= frames_to_send;
sent += frames_to_send;
diff --git a/drivers/net/dpaa/rte_pmd_dpaa_version.map b/drivers/net/dpaa/rte_pmd_dpaa_version.map
index d76acbd..888f203 100644
--- a/drivers/net/dpaa/rte_pmd_dpaa_version.map
+++ b/drivers/net/dpaa/rte_pmd_dpaa_version.map
@@ -6,6 +6,8 @@ DPDK_17.11 {
DPDK_18.02 {
global:
+ dpaa_eth_eventq_attach;
+ dpaa_eth_eventq_detach;
rte_pmd_dpaa_set_tx_loopback;
local: *;
--
2.9.3
* [dpdk-dev] [PATCH 4/6] event/dpaa: add eventdev PMD
2017-12-15 12:59 [dpdk-dev] [PATCH 0/6] event/dpaa: Support for eventdev Sunil Kumar Kori
` (2 preceding siblings ...)
2017-12-15 12:59 ` [dpdk-dev] [PATCH 3/6] net/dpaa: ethdev Rx queue configurations with eventdev Sunil Kumar Kori
@ 2017-12-15 12:59 ` Sunil Kumar Kori
2017-12-15 12:59 ` [dpdk-dev] [PATCH 5/6] config: enabling compilation of DPAA " Sunil Kumar Kori
` (3 subsequent siblings)
7 siblings, 0 replies; 12+ messages in thread
From: Sunil Kumar Kori @ 2017-12-15 12:59 UTC (permalink / raw)
To: jerin.jacob; +Cc: dev, hemant.agrawal
Signed-off-by: Sunil Kumar Kori <sunil.kori@nxp.com>
---
drivers/event/Makefile | 1 +
drivers/event/dpaa/Makefile | 37 ++
drivers/event/dpaa/dpaa_eventdev.c | 652 ++++++++++++++++++++++
drivers/event/dpaa/dpaa_eventdev.h | 86 +++
drivers/event/dpaa/rte_pmd_dpaa_event_version.map | 4 +
5 files changed, 780 insertions(+)
create mode 100644 drivers/event/dpaa/Makefile
create mode 100644 drivers/event/dpaa/dpaa_eventdev.c
create mode 100644 drivers/event/dpaa/dpaa_eventdev.h
create mode 100644 drivers/event/dpaa/rte_pmd_dpaa_event_version.map
diff --git a/drivers/event/Makefile b/drivers/event/Makefile
index 1f9c0ba..c726234 100644
--- a/drivers/event/Makefile
+++ b/drivers/event/Makefile
@@ -35,5 +35,6 @@ DIRS-$(CONFIG_RTE_LIBRTE_PMD_SKELETON_EVENTDEV) += skeleton
DIRS-$(CONFIG_RTE_LIBRTE_PMD_SW_EVENTDEV) += sw
DIRS-$(CONFIG_RTE_LIBRTE_PMD_OCTEONTX_SSOVF) += octeontx
DIRS-$(CONFIG_RTE_LIBRTE_PMD_DPAA2_EVENTDEV) += dpaa2
+DIRS-$(CONFIG_RTE_LIBRTE_PMD_DPAA_EVENTDEV) += dpaa
include $(RTE_SDK)/mk/rte.subdir.mk
diff --git a/drivers/event/dpaa/Makefile b/drivers/event/dpaa/Makefile
new file mode 100644
index 0000000..bd0b6c9
--- /dev/null
+++ b/drivers/event/dpaa/Makefile
@@ -0,0 +1,37 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright 2017 NXP
+#
+
+include $(RTE_SDK)/mk/rte.vars.mk
+RTE_SDK_DPAA=$(RTE_SDK)/drivers/net/dpaa
+
+#
+# library name
+#
+LIB = librte_pmd_dpaa_event.a
+
+CFLAGS := -I$(SRCDIR) $(CFLAGS)
+CFLAGS += -O3 $(WERROR_FLAGS)
+CFLAGS += -Wno-pointer-arith
+CFLAGS += -I$(RTE_SDK_DPAA)/
+CFLAGS += -I$(RTE_SDK_DPAA)/include
+CFLAGS += -I$(RTE_SDK)/drivers/bus/dpaa
+CFLAGS += -I$(RTE_SDK)/drivers/bus/dpaa/include/
+CFLAGS += -I$(RTE_SDK)/drivers/mempool/dpaa
+CFLAGS += -I$(RTE_SDK)/lib/librte_eal/common/include
+CFLAGS += -I$(RTE_SDK)/lib/librte_eal/linuxapp/eal/include
+
+EXPORT_MAP := rte_pmd_dpaa_event_version.map
+
+LIBABIVER := 1
+
+# Interfaces with DPDK
+SRCS-$(CONFIG_RTE_LIBRTE_PMD_DPAA_EVENTDEV) += dpaa_eventdev.c
+
+LDLIBS += -lrte_bus_dpaa
+LDLIBS += -lrte_mempool_dpaa
+LDLIBS += -lrte_eal -lrte_mbuf -lrte_mempool -lrte_ring
+LDLIBS += -lrte_ethdev -lrte_net -lrte_kvargs
+LDLIBS += -lrte_eventdev -lrte_pmd_dpaa -lrte_bus_vdev
+
+include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/drivers/event/dpaa/dpaa_eventdev.c b/drivers/event/dpaa/dpaa_eventdev.c
new file mode 100644
index 0000000..371599f
--- /dev/null
+++ b/drivers/event/dpaa/dpaa_eventdev.c
@@ -0,0 +1,652 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright 2017 NXP
+ */
+
+#include <assert.h>
+#include <stdio.h>
+#include <stdbool.h>
+#include <errno.h>
+#include <stdint.h>
+#include <string.h>
+#include <sys/epoll.h>
+
+#include <rte_atomic.h>
+#include <rte_byteorder.h>
+#include <rte_common.h>
+#include <rte_debug.h>
+#include <rte_dev.h>
+#include <rte_eal.h>
+#include <rte_lcore.h>
+#include <rte_log.h>
+#include <rte_malloc.h>
+#include <rte_memcpy.h>
+#include <rte_memory.h>
+#include <rte_memzone.h>
+#include <rte_pci.h>
+#include <rte_eventdev.h>
+#include <rte_eventdev_pmd_vdev.h>
+#include <rte_ethdev.h>
+#include <rte_event_eth_rx_adapter.h>
+#include <rte_dpaa_bus.h>
+#include <rte_dpaa_logs.h>
+#include <rte_cycles_64.h>
+
+#include <dpaa_ethdev.h>
+#include "dpaa_eventdev.h"
+#include <dpaa_mempool.h>
+
+/*
+ * Clarifications
+ * Eventdev = Virtual Instance for SoC
+ * Eventport = Portal Instance
+ * Eventqueue = Channel Instance
+ * 1 Eventdev can have N Eventqueues
+ */
+
+static int
+dpaa_event_dequeue_timeout_ticks(struct rte_eventdev *dev, uint64_t ns,
+ uint64_t *timeout_ticks)
+{
+ uint64_t cycles_per_second;
+
+ EVENTDEV_DRV_FUNC_TRACE();
+
+ RTE_SET_USED(dev);
+
+ cycles_per_second = rte_get_timer_hz();
+ *timeout_ticks = ns * (cycles_per_second / NS_PER_S);
+
+ return 0;
+}
+
+static void
+dpaa_eventq_portal_add(u16 ch_id)
+{
+ uint32_t sdqcr;
+
+ sdqcr = QM_SDQCR_CHANNELS_POOL_CONV(ch_id);
+ qman_static_dequeue_add(sdqcr, NULL);
+}
+
+static uint16_t
+dpaa_event_enqueue_burst(void *port, const struct rte_event ev[],
+ uint16_t nb_events)
+{
+ uint16_t i;
+ struct rte_mbuf *mbuf;
+
+ RTE_SET_USED(port);
+ /* Release all the contexts saved previously */
+ for (i = 0; i < nb_events; i++) {
+ switch (ev[i].op) {
+ case RTE_EVENT_OP_RELEASE:
+ qman_dca_index(ev[i].impl_opaque - 1, 0);
+ mbuf = DPAA_PER_LCORE_DQRR_MBUF(ev[i].impl_opaque - 1);
+ mbuf->seqn = DPAA_INVALID_MBUF_SEQN;
+ DPAA_PER_LCORE_DQRR_HELD &=
+ ~(1 << (ev[i].impl_opaque - 1));
+ DPAA_PER_LCORE_DQRR_SIZE--;
+ break;
+ default:
+ break;
+ }
+ }
+
+ return nb_events;
+}
+
+static uint16_t
+dpaa_event_enqueue(void *port, const struct rte_event *ev)
+{
+ return dpaa_event_enqueue_burst(port, ev, 1);
+}
+
+static uint16_t
+dpaa_event_dequeue_burst(void *port, struct rte_event ev[],
+ uint16_t nb_events, uint64_t timeout_ticks)
+{
+ int ret;
+ u16 ch_id;
+ void *buffers[8];
+ u32 num_frames, i;
+ uint64_t wait_time, cur_ticks, start_ticks;
+ struct dpaa_port *portal = (struct dpaa_port *)port;
+ struct rte_mbuf *mbuf;
+
+ /* Affine current thread context to a qman portal */
+ ret = rte_dpaa_portal_init((void *)0);
+ if (ret) {
+ DPAA_EVENTDEV_ERR("Unable to initialize portal");
+ return ret;
+ }
+
+ if (unlikely(!portal->is_port_linked)) {
+ /*
+ * Affine event queue for current thread context
+ * to a qman portal.
+ */
+ for (i = 0; i < portal->num_linked_evq; i++) {
+ ch_id = portal->evq_info[i].ch_id;
+ dpaa_eventq_portal_add(ch_id);
+ }
+ portal->is_port_linked = true;
+ }
+
+ /* Check if there are atomic contexts to be released */
+ i = 0;
+ while (DPAA_PER_LCORE_DQRR_SIZE) {
+ if (DPAA_PER_LCORE_DQRR_HELD & (1 << i)) {
+ qman_dca_index(i, 0);
+ mbuf = DPAA_PER_LCORE_DQRR_MBUF(i);
+ mbuf->seqn = DPAA_INVALID_MBUF_SEQN;
+ DPAA_PER_LCORE_DQRR_HELD &= ~(1 << i);
+ DPAA_PER_LCORE_DQRR_SIZE--;
+ }
+ i++;
+ }
+ DPAA_PER_LCORE_DQRR_HELD = 0;
+
+ if (portal->timeout == DPAA_EVENT_PORT_DEQUEUE_TIMEOUT_INVALID)
+ wait_time = timeout_ticks;
+ else
+ wait_time = portal->timeout;
+
+ /* Let's dequeue the frames */
+ start_ticks = rte_get_timer_cycles();
+ wait_time += start_ticks;
+ do {
+ num_frames = qman_portal_dequeue(ev, nb_events, buffers);
+ if (num_frames != 0)
+ break;
+ cur_ticks = rte_get_timer_cycles();
+ } while (cur_ticks < wait_time);
+
+ return num_frames;
+}
+
+static uint16_t
+dpaa_event_dequeue(void *port, struct rte_event *ev, uint64_t timeout_ticks)
+{
+ return dpaa_event_dequeue_burst(port, ev, 1, timeout_ticks);
+}
+
+static void
+dpaa_event_dev_info_get(struct rte_eventdev *dev,
+ struct rte_event_dev_info *dev_info)
+{
+ EVENTDEV_DRV_FUNC_TRACE();
+
+ RTE_SET_USED(dev);
+ dev_info->driver_name = "event_dpaa";
+ dev_info->min_dequeue_timeout_ns =
+ DPAA_EVENT_MIN_DEQUEUE_TIMEOUT;
+ dev_info->max_dequeue_timeout_ns =
+ DPAA_EVENT_MAX_DEQUEUE_TIMEOUT;
+ dev_info->dequeue_timeout_ns =
+ DPAA_EVENT_MIN_DEQUEUE_TIMEOUT;
+ dev_info->max_event_queues =
+ DPAA_EVENT_MAX_QUEUES;
+ dev_info->max_event_queue_flows =
+ DPAA_EVENT_MAX_QUEUE_FLOWS;
+ dev_info->max_event_queue_priority_levels =
+ DPAA_EVENT_MAX_QUEUE_PRIORITY_LEVELS;
+ dev_info->max_event_priority_levels =
+ DPAA_EVENT_MAX_EVENT_PRIORITY_LEVELS;
+ dev_info->max_event_ports =
+ DPAA_EVENT_MAX_EVENT_PORT;
+ dev_info->max_event_port_dequeue_depth =
+ DPAA_EVENT_MAX_PORT_DEQUEUE_DEPTH;
+ dev_info->max_event_port_enqueue_depth =
+ DPAA_EVENT_MAX_PORT_ENQUEUE_DEPTH;
+ /*
+ * TODO: Need to find out how to fetch this info
+ * from the kernel or somewhere else.
+ */
+ dev_info->max_num_events =
+ DPAA_EVENT_MAX_NUM_EVENTS;
+ dev_info->event_dev_cap =
+ RTE_EVENT_DEV_CAP_DISTRIBUTED_SCHED |
+ RTE_EVENT_DEV_CAP_BURST_MODE;
+}
+
+static int
+dpaa_event_dev_configure(const struct rte_eventdev *dev)
+{
+ struct dpaa_eventdev *priv = dev->data->dev_private;
+ struct rte_event_dev_config *conf = &dev->data->dev_conf;
+ int ret, i;
+ uint32_t *ch_id;
+
+ EVENTDEV_DRV_FUNC_TRACE();
+
+ priv->dequeue_timeout_ns = conf->dequeue_timeout_ns;
+ priv->nb_events_limit = conf->nb_events_limit;
+ priv->nb_event_queues = conf->nb_event_queues;
+ priv->nb_event_ports = conf->nb_event_ports;
+ priv->nb_event_queue_flows = conf->nb_event_queue_flows;
+ priv->nb_event_port_dequeue_depth = conf->nb_event_port_dequeue_depth;
+ priv->nb_event_port_enqueue_depth = conf->nb_event_port_enqueue_depth;
+ priv->event_dev_cfg = conf->event_dev_cfg;
+
+ /* Check dequeue timeout method is per dequeue or global */
+ if (priv->event_dev_cfg & RTE_EVENT_DEV_CFG_PER_DEQUEUE_TIMEOUT) {
+ /*
+ * Use timeout value as given in dequeue operation.
+ * So invalidating this timeout value.
+ */
+ priv->dequeue_timeout_ns = 0;
+ }
+
+ ch_id = rte_malloc("dpaa-channels",
+ sizeof(uint32_t) * priv->nb_event_queues,
+ RTE_CACHE_LINE_SIZE);
+ if (ch_id == NULL) {
+ EVENTDEV_DRV_ERR("Failed to allocate memory for dpaa channels\n");
+ return -ENOMEM;
+ }
+ /* Create requested event queues within the given event device */
+ ret = qman_alloc_pool_range(ch_id, priv->nb_event_queues, 1, 0);
+ if (ret < 0) {
+ EVENTDEV_DRV_ERR("Failed to create internal channel\n");
+ rte_free(ch_id);
+ return ret;
+ }
+ for (i = 0; i < priv->nb_event_queues; i++)
+ priv->evq_info[i].ch_id = (u16)ch_id[i];
+
+ /* Let's prepare event ports */
+ memset(&priv->ports[0], 0,
+ sizeof(struct dpaa_port) * priv->nb_event_ports);
+ if (priv->event_dev_cfg & RTE_EVENT_DEV_CFG_PER_DEQUEUE_TIMEOUT) {
+ for (i = 0; i < priv->nb_event_ports; i++) {
+ priv->ports[i].timeout =
+ DPAA_EVENT_PORT_DEQUEUE_TIMEOUT_INVALID;
+ }
+ } else if (priv->dequeue_timeout_ns == 0) {
+ for (i = 0; i < priv->nb_event_ports; i++) {
+ dpaa_event_dequeue_timeout_ticks(NULL,
+ DPAA_EVENT_PORT_DEQUEUE_TIMEOUT_NS,
+ &priv->ports[i].timeout);
+ }
+ } else {
+ for (i = 0; i < priv->nb_event_ports; i++) {
+ dpaa_event_dequeue_timeout_ticks(NULL,
+ priv->dequeue_timeout_ns,
+ &priv->ports[i].timeout);
+ }
+ }
+ /*
+ * TODO: Currently portals are affined with threads. The maximum
+ * number of threads that can be created equals the number of lcores.
+ */
+ rte_free(ch_id);
+ EVENTDEV_DRV_LOG(DEBUG, "Configured eventdev devid=%d",
+ dev->data->dev_id);
+
+ return 0;
+}
+
+static int
+dpaa_event_dev_start(struct rte_eventdev *dev)
+{
+ EVENTDEV_DRV_FUNC_TRACE();
+ RTE_SET_USED(dev);
+
+ return 0;
+}
+
+static void
+dpaa_event_dev_stop(struct rte_eventdev *dev)
+{
+ EVENTDEV_DRV_FUNC_TRACE();
+ RTE_SET_USED(dev);
+}
+
+static int
+dpaa_event_dev_close(struct rte_eventdev *dev)
+{
+ EVENTDEV_DRV_FUNC_TRACE();
+ RTE_SET_USED(dev);
+
+ return 0;
+}
+
+static void
+dpaa_event_queue_def_conf(struct rte_eventdev *dev, uint8_t queue_id,
+ struct rte_event_queue_conf *queue_conf)
+{
+ EVENTDEV_DRV_FUNC_TRACE();
+
+ RTE_SET_USED(dev);
+ RTE_SET_USED(queue_id);
+
+ memset(queue_conf, 0, sizeof(struct rte_event_queue_conf));
+ queue_conf->schedule_type = RTE_SCHED_TYPE_PARALLEL;
+ queue_conf->priority = RTE_EVENT_DEV_PRIORITY_HIGHEST;
+}
+
+static int
+dpaa_event_queue_setup(struct rte_eventdev *dev, uint8_t queue_id,
+ const struct rte_event_queue_conf *queue_conf)
+{
+ struct dpaa_eventdev *priv = dev->data->dev_private;
+ struct dpaa_eventq *evq_info = &priv->evq_info[queue_id];
+
+ EVENTDEV_DRV_FUNC_TRACE();
+
+ switch (queue_conf->schedule_type) {
+ case RTE_SCHED_TYPE_PARALLEL:
+ case RTE_SCHED_TYPE_ATOMIC:
+ break;
+ case RTE_SCHED_TYPE_ORDERED:
+ EVENTDEV_DRV_ERR("Schedule type is not supported.");
+ return -1;
+ }
+ evq_info->event_queue_cfg = queue_conf->event_queue_cfg;
+ evq_info->event_queue_id = queue_id;
+
+ return 0;
+}
+
+static void
+dpaa_event_queue_release(struct rte_eventdev *dev, uint8_t queue_id)
+{
+ EVENTDEV_DRV_FUNC_TRACE();
+
+ RTE_SET_USED(dev);
+ RTE_SET_USED(queue_id);
+}
+
+static void
+dpaa_event_port_default_conf_get(struct rte_eventdev *dev, uint8_t port_id,
+ struct rte_event_port_conf *port_conf)
+{
+ EVENTDEV_DRV_FUNC_TRACE();
+
+ RTE_SET_USED(dev);
+ RTE_SET_USED(port_id);
+
+ port_conf->new_event_threshold = DPAA_EVENT_MAX_NUM_EVENTS;
+ port_conf->dequeue_depth = DPAA_EVENT_MAX_PORT_DEQUEUE_DEPTH;
+ port_conf->enqueue_depth = DPAA_EVENT_MAX_PORT_ENQUEUE_DEPTH;
+}
+
+static int
+dpaa_event_port_setup(struct rte_eventdev *dev, uint8_t port_id,
+ const struct rte_event_port_conf *port_conf)
+{
+ struct dpaa_eventdev *eventdev = dev->data->dev_private;
+
+ EVENTDEV_DRV_FUNC_TRACE();
+
+ RTE_SET_USED(port_conf);
+ dev->data->ports[port_id] = &eventdev->ports[port_id];
+
+ return 0;
+}
+
+static void
+dpaa_event_port_release(void *port)
+{
+ EVENTDEV_DRV_FUNC_TRACE();
+
+ RTE_SET_USED(port);
+}
+
+static int
+dpaa_event_port_link(struct rte_eventdev *dev, void *port,
+ const uint8_t queues[], const uint8_t priorities[],
+ uint16_t nb_links)
+{
+ struct dpaa_eventdev *priv = dev->data->dev_private;
+ struct dpaa_port *event_port = (struct dpaa_port *)port;
+ struct dpaa_eventq *event_queue;
+ uint8_t eventq_id;
+ int i;
+
+ RTE_SET_USED(dev);
+ RTE_SET_USED(priorities);
+
+ /* First check that input configuration are valid */
+ for (i = 0; i < nb_links; i++) {
+ eventq_id = queues[i];
+ event_queue = &priv->evq_info[eventq_id];
+ if ((event_queue->event_queue_cfg
+ & RTE_EVENT_QUEUE_CFG_SINGLE_LINK)
+ && (event_queue->event_port)) {
+ return -EINVAL;
+ }
+ }
+
+ for (i = 0; i < nb_links; i++) {
+ eventq_id = queues[i];
+ event_queue = &priv->evq_info[eventq_id];
+ event_port->evq_info[i].event_queue_id = eventq_id;
+ event_port->evq_info[i].ch_id = event_queue->ch_id;
+ event_queue->event_port = port;
+ }
+
+ event_port->num_linked_evq = event_port->num_linked_evq + i;
+
+ return (int)i;
+}
+
+static int
+dpaa_event_port_unlink(struct rte_eventdev *dev, void *port,
+ uint8_t queues[], uint16_t nb_links)
+{
+ int i;
+ uint8_t eventq_id;
+ struct dpaa_eventq *event_queue;
+ struct dpaa_eventdev *priv = dev->data->dev_private;
+ struct dpaa_port *event_port = (struct dpaa_port *)port;
+
+ if (!event_port->num_linked_evq)
+ return nb_links;
+
+ for (i = 0; i < nb_links; i++) {
+ eventq_id = queues[i];
+ event_port->evq_info[eventq_id].event_queue_id = -1;
+ event_port->evq_info[eventq_id].ch_id = 0;
+ event_queue = &priv->evq_info[eventq_id];
+ event_queue->event_port = NULL;
+ }
+
+ event_port->num_linked_evq = event_port->num_linked_evq - i;
+
+ return (int)i;
+}
+
+static int
+dpaa_event_eth_rx_adapter_caps_get(const struct rte_eventdev *dev,
+ const struct rte_eth_dev *eth_dev,
+ uint32_t *caps)
+{
+ const char *ethdev_driver = eth_dev->device->driver->name;
+
+ EVENTDEV_DRV_FUNC_TRACE();
+
+ RTE_SET_USED(dev);
+
+ if (!strcmp(ethdev_driver, "net_dpaa"))
+ *caps = RTE_EVENT_ETH_RX_ADAPTER_DPAA_CAP;
+ else
+ *caps = RTE_EVENT_ETH_RX_ADAPTER_SW_CAP;
+
+ return 0;
+}
+
+static int
+dpaa_event_eth_rx_adapter_queue_add(
+ const struct rte_eventdev *dev,
+ const struct rte_eth_dev *eth_dev,
+ int32_t rx_queue_id,
+ const struct rte_event_eth_rx_adapter_queue_conf *queue_conf)
+{
+ struct dpaa_eventdev *eventdev = dev->data->dev_private;
+ uint8_t ev_qid = queue_conf->ev.queue_id;
+ u16 ch_id = eventdev->evq_info[ev_qid].ch_id;
+ struct dpaa_if *dpaa_intf = eth_dev->data->dev_private;
+ int ret, i;
+
+ EVENTDEV_DRV_FUNC_TRACE();
+
+ if (rx_queue_id == -1) {
+ for (i = 0; i < dpaa_intf->nb_rx_queues; i++) {
+ ret = dpaa_eth_eventq_attach(eth_dev, i, ch_id,
+ queue_conf);
+ if (ret) {
+ EVENTDEV_DRV_ERR(
+ "Event Queue attach failed:%d\n", ret);
+ goto detach_configured_queues;
+ }
+ }
+ return 0;
+ }
+
+ ret = dpaa_eth_eventq_attach(eth_dev, rx_queue_id, ch_id, queue_conf);
+ if (ret)
+ EVENTDEV_DRV_ERR("dpaa_eth_eventq_attach failed:%d\n", ret);
+ return ret;
+
+detach_configured_queues:
+
+ for (i = (i - 1); i >= 0 ; i--)
+ dpaa_eth_eventq_detach(eth_dev, i);
+
+ return ret;
+}
+
+static int
+dpaa_event_eth_rx_adapter_queue_del(const struct rte_eventdev *dev,
+ const struct rte_eth_dev *eth_dev,
+ int32_t rx_queue_id)
+{
+ int ret, i;
+ struct dpaa_if *dpaa_intf = eth_dev->data->dev_private;
+
+ EVENTDEV_DRV_FUNC_TRACE();
+
+ RTE_SET_USED(dev);
+ if (rx_queue_id == -1) {
+ for (i = 0; i < dpaa_intf->nb_rx_queues; i++) {
+ ret = dpaa_eth_eventq_detach(eth_dev, i);
+ if (ret)
+ EVENTDEV_DRV_ERR(
+ "Event Queue detach failed:%d\n", ret);
+ }
+
+ return 0;
+ }
+
+ ret = dpaa_eth_eventq_detach(eth_dev, rx_queue_id);
+ if (ret)
+ EVENTDEV_DRV_ERR("dpaa_eth_eventq_detach failed:%d\n", ret);
+ return ret;
+}
+
+static int
+dpaa_event_eth_rx_adapter_start(const struct rte_eventdev *dev,
+ const struct rte_eth_dev *eth_dev)
+{
+ EVENTDEV_DRV_FUNC_TRACE();
+
+ RTE_SET_USED(dev);
+ RTE_SET_USED(eth_dev);
+
+ return 0;
+}
+
+static int
+dpaa_event_eth_rx_adapter_stop(const struct rte_eventdev *dev,
+ const struct rte_eth_dev *eth_dev)
+{
+ EVENTDEV_DRV_FUNC_TRACE();
+
+ RTE_SET_USED(dev);
+ RTE_SET_USED(eth_dev);
+
+ return 0;
+}
+
+static const struct rte_eventdev_ops dpaa_eventdev_ops = {
+ .dev_infos_get = dpaa_event_dev_info_get,
+ .dev_configure = dpaa_event_dev_configure,
+ .dev_start = dpaa_event_dev_start,
+ .dev_stop = dpaa_event_dev_stop,
+ .dev_close = dpaa_event_dev_close,
+ .queue_def_conf = dpaa_event_queue_def_conf,
+ .queue_setup = dpaa_event_queue_setup,
+ .queue_release = dpaa_event_queue_release,
+ .port_def_conf = dpaa_event_port_default_conf_get,
+ .port_setup = dpaa_event_port_setup,
+ .port_release = dpaa_event_port_release,
+ .port_link = dpaa_event_port_link,
+ .port_unlink = dpaa_event_port_unlink,
+ .timeout_ticks = dpaa_event_dequeue_timeout_ticks,
+ .eth_rx_adapter_caps_get = dpaa_event_eth_rx_adapter_caps_get,
+ .eth_rx_adapter_queue_add = dpaa_event_eth_rx_adapter_queue_add,
+ .eth_rx_adapter_queue_del = dpaa_event_eth_rx_adapter_queue_del,
+ .eth_rx_adapter_start = dpaa_event_eth_rx_adapter_start,
+ .eth_rx_adapter_stop = dpaa_event_eth_rx_adapter_stop,
+};
+
+static int
+dpaa_event_dev_create(const char *name)
+{
+ struct rte_eventdev *eventdev;
+ struct dpaa_eventdev *priv;
+
+ eventdev = rte_event_pmd_vdev_init(name,
+ sizeof(struct dpaa_eventdev),
+ rte_socket_id());
+ if (eventdev == NULL) {
+ EVENTDEV_DRV_ERR("Failed to create eventdev vdev %s", name);
+ goto fail;
+ }
+
+ eventdev->dev_ops = &dpaa_eventdev_ops;
+ eventdev->enqueue = dpaa_event_enqueue;
+ eventdev->enqueue_burst = dpaa_event_enqueue_burst;
+ eventdev->dequeue = dpaa_event_dequeue;
+ eventdev->dequeue_burst = dpaa_event_dequeue_burst;
+
+ /* For secondary processes, the primary has done all the work */
+ if (rte_eal_process_type() != RTE_PROC_PRIMARY)
+ return 0;
+
+ priv = eventdev->data->dev_private;
+ priv->max_event_queues = DPAA_EVENT_MAX_QUEUES;
+
+ return 0;
+fail:
+ return -EFAULT;
+}
+
+static int
+dpaa_event_dev_probe(struct rte_vdev_device *vdev)
+{
+ const char *name;
+
+ name = rte_vdev_device_name(vdev);
+ EVENTDEV_DRV_LOG(INFO, "Initializing %s", name);
+
+ return dpaa_event_dev_create(name);
+}
+
+static int
+dpaa_event_dev_remove(struct rte_vdev_device *vdev)
+{
+ const char *name;
+
+ name = rte_vdev_device_name(vdev);
+ EVENTDEV_DRV_LOG(INFO, "Closing %s", name);
+
+ return rte_event_pmd_vdev_uninit(name);
+}
+
+static struct rte_vdev_driver vdev_eventdev_dpaa_pmd = {
+ .probe = dpaa_event_dev_probe,
+ .remove = dpaa_event_dev_remove
+};
+
+RTE_PMD_REGISTER_VDEV(EVENTDEV_NAME_DPAA_PMD, vdev_eventdev_dpaa_pmd);
diff --git a/drivers/event/dpaa/dpaa_eventdev.h b/drivers/event/dpaa/dpaa_eventdev.h
new file mode 100644
index 0000000..965c1fd
--- /dev/null
+++ b/drivers/event/dpaa/dpaa_eventdev.h
@@ -0,0 +1,86 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright 2017 NXP
+ */
+
+#ifndef __DPAA_EVENTDEV_H__
+#define __DPAA_EVENTDEV_H__
+
+#include <rte_eventdev_pmd.h>
+#include <rte_eventdev_pmd_vdev.h>
+#include <rte_atomic.h>
+#include <rte_per_lcore.h>
+
+#define EVENTDEV_NAME_DPAA_PMD event_dpaa
+
+#ifdef RTE_LIBRTE_PMD_DPAA_EVENTDEV_DEBUG
+#define EVENTDEV_DRV_LOG(level, fmt, args...) \
+ RTE_LOG(level, EVENTDEV, "%s(): " fmt "\n", __func__, ## args)
+#define EVENTDEV_DRV_FUNC_TRACE() EVENTDEV_DRV_LOG(DEBUG, ">>")
+#else
+#define EVENTDEV_DRV_LOG(level, fmt, args...) do { } while (0)
+#define EVENTDEV_DRV_FUNC_TRACE() do { } while (0)
+#endif
+
+#define EVENTDEV_DRV_ERR(fmt, args...) \
+ RTE_LOG(ERR, EVENTDEV, "%s(): " fmt "\n", __func__, ## args)
+
+#define DPAA_EVENT_MAX_PORTS 8
+#define DPAA_EVENT_MAX_QUEUES 16
+#define DPAA_EVENT_MIN_DEQUEUE_TIMEOUT 1
+#define DPAA_EVENT_MAX_DEQUEUE_TIMEOUT (UINT32_MAX - 1)
+#define DPAA_EVENT_MAX_QUEUE_FLOWS 2048
+#define DPAA_EVENT_MAX_QUEUE_PRIORITY_LEVELS 8
+#define DPAA_EVENT_MAX_EVENT_PRIORITY_LEVELS 0
+#define DPAA_EVENT_MAX_EVENT_PORT RTE_MAX_LCORE
+#define DPAA_EVENT_MAX_PORT_DEQUEUE_DEPTH 8
+#define DPAA_EVENT_PORT_DEQUEUE_TIMEOUT_NS 100UL
+#define DPAA_EVENT_PORT_DEQUEUE_TIMEOUT_INVALID ((uint64_t)-1)
+#define DPAA_EVENT_MAX_PORT_ENQUEUE_DEPTH 1
+#define DPAA_EVENT_MAX_NUM_EVENTS (INT32_MAX - 1)
+
+#define DPAA_EVENT_DEV_CAP \
+ (RTE_EVENT_DEV_CAP_DISTRIBUTED_SCHED | \
+ RTE_EVENT_DEV_CAP_BURST_MODE)
+
+#define DPAA_EVENT_QUEUE_ATOMIC_FLOWS 0
+#define DPAA_EVENT_QUEUE_ORDER_SEQUENCES 2048
+
+#define RTE_EVENT_ETH_RX_ADAPTER_DPAA_CAP \
+ (RTE_EVENT_ETH_RX_ADAPTER_CAP_INTERNAL_PORT | \
+ RTE_EVENT_ETH_RX_ADAPTER_CAP_MULTI_EVENTQ | \
+ RTE_EVENT_ETH_RX_ADAPTER_CAP_OVERRIDE_FLOW_ID)
+
+struct dpaa_eventq {
+ /* Channel Id */
+ uint16_t ch_id;
+ /* Configuration provided by the user */
+ uint32_t event_queue_cfg;
+ uint32_t event_queue_id;
+ /* Event port */
+ void *event_port;
+};
+
+struct dpaa_port {
+ struct dpaa_eventq evq_info[DPAA_EVENT_MAX_QUEUES];
+ uint8_t num_linked_evq;
+ uint8_t is_port_linked;
+ uint64_t timeout;
+};
+
+struct dpaa_eventdev {
+ struct dpaa_eventq evq_info[DPAA_EVENT_MAX_QUEUES];
+ struct dpaa_port ports[DPAA_EVENT_MAX_PORTS];
+ uint32_t dequeue_timeout_ns;
+ uint32_t nb_events_limit;
+ uint8_t max_event_queues;
+ uint8_t nb_event_queues;
+ uint8_t nb_event_ports;
+ uint8_t resvd;
+ uint32_t nb_event_queue_flows;
+ uint32_t nb_event_port_dequeue_depth;
+ uint32_t nb_event_port_enqueue_depth;
+ uint32_t event_dev_cfg;
+};
+#endif /* __DPAA_EVENTDEV_H__ */
diff --git a/drivers/event/dpaa/rte_pmd_dpaa_event_version.map b/drivers/event/dpaa/rte_pmd_dpaa_event_version.map
new file mode 100644
index 0000000..179140f
--- /dev/null
+++ b/drivers/event/dpaa/rte_pmd_dpaa_event_version.map
@@ -0,0 +1,4 @@
+DPDK_18.02 {
+
+ local: *;
+};
--
2.9.3
^ permalink raw reply [flat|nested] 12+ messages in thread
* [dpdk-dev] [PATCH 5/6] config: enabling compilation of DPAA eventdev PMD
2017-12-15 12:59 [dpdk-dev] [PATCH 0/6] event/dpaa: Support for eventdev Sunil Kumar Kori
` (3 preceding siblings ...)
2017-12-15 12:59 ` [dpdk-dev] [PATCH 4/6] event/dpaa: add eventdev PMD Sunil Kumar Kori
@ 2017-12-15 12:59 ` Sunil Kumar Kori
2017-12-15 12:59 ` [dpdk-dev] [PATCH 6/6] doc: add DPAA eventdev guide Sunil Kumar Kori
` (2 subsequent siblings)
7 siblings, 0 replies; 12+ messages in thread
From: Sunil Kumar Kori @ 2017-12-15 12:59 UTC (permalink / raw)
To: jerin.jacob; +Cc: dev, hemant.agrawal
Signed-off-by: Sunil Kumar Kori <sunil.kori@nxp.com>
---
config/common_base | 4 ++++
config/defconfig_arm64-dpaa-linuxapp-gcc | 3 +++
mk/rte.app.mk | 1 +
3 files changed, 8 insertions(+)
diff --git a/config/common_base b/config/common_base
index e74febe..6fc7366 100644
--- a/config/common_base
+++ b/config/common_base
@@ -332,6 +332,10 @@ CONFIG_RTE_LIBRTE_DPAA_BUS=n
CONFIG_RTE_LIBRTE_DPAA_MEMPOOL=n
CONFIG_RTE_LIBRTE_DPAA_PMD=n
+# Compile software NXP DPAA Event Dev PMD
+CONFIG_RTE_LIBRTE_PMD_DPAA_EVENTDEV=n
+CONFIG_RTE_LIBRTE_PMD_DPAA_EVENTDEV_DEBUG=n
+
#
# Compile burst-oriented Cavium OCTEONTX network PMD driver
#
diff --git a/config/defconfig_arm64-dpaa-linuxapp-gcc b/config/defconfig_arm64-dpaa-linuxapp-gcc
index e577432..c163f9d 100644
--- a/config/defconfig_arm64-dpaa-linuxapp-gcc
+++ b/config/defconfig_arm64-dpaa-linuxapp-gcc
@@ -58,6 +58,9 @@ CONFIG_RTE_MBUF_DEFAULT_MEMPOOL_OPS="dpaa"
# Compile software NXP DPAA PMD
CONFIG_RTE_LIBRTE_DPAA_PMD=y
+# Compile software NXP DPAA Event Dev PMD
+CONFIG_RTE_LIBRTE_PMD_DPAA_EVENTDEV=y
+
#
# FSL DPAA caam - crypto driver
#
diff --git a/mk/rte.app.mk b/mk/rte.app.mk
index 6a6a745..22512fc 100644
--- a/mk/rte.app.mk
+++ b/mk/rte.app.mk
@@ -198,6 +198,7 @@ _LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_SKELETON_EVENTDEV) += -lrte_pmd_skeleton_event
_LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_SW_EVENTDEV) += -lrte_pmd_sw_event
_LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_OCTEONTX_SSOVF) += -lrte_pmd_octeontx_ssovf
_LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_DPAA2_EVENTDEV) += -lrte_pmd_dpaa2_event
+_LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_DPAA_EVENTDEV) += -lrte_pmd_dpaa_event
_LDLIBS-$(CONFIG_RTE_LIBRTE_OCTEONTX_MEMPOOL) += -lrte_mempool_octeontx
_LDLIBS-$(CONFIG_RTE_LIBRTE_OCTEONTX_PMD) += -lrte_pmd_octeontx
endif # CONFIG_RTE_LIBRTE_EVENTDEV
--
2.9.3
^ permalink raw reply [flat|nested] 12+ messages in thread
* [dpdk-dev] [PATCH 6/6] doc: add DPAA eventdev guide
2017-12-15 12:59 [dpdk-dev] [PATCH 0/6] event/dpaa: Support for eventdev Sunil Kumar Kori
` (4 preceding siblings ...)
2017-12-15 12:59 ` [dpdk-dev] [PATCH 5/6] config: enabling compilation of DPAA " Sunil Kumar Kori
@ 2017-12-15 12:59 ` Sunil Kumar Kori
2017-12-15 15:09 ` Kovacevic, Marko
2017-12-18 9:09 ` [dpdk-dev] [PATCH 0/6] event/dpaa: Support for eventdev Jerin Jacob
2017-12-19 11:28 ` Hemant Agrawal
7 siblings, 1 reply; 12+ messages in thread
From: Sunil Kumar Kori @ 2017-12-15 12:59 UTC (permalink / raw)
To: jerin.jacob; +Cc: dev, hemant.agrawal
Signed-off-by: Sunil Kumar Kori <sunil.kori@nxp.com>
---
doc/guides/eventdevs/dpaa.rst | 144 +++++++++++++++++++++++++++++++++++++++++
doc/guides/eventdevs/index.rst | 1 +
2 files changed, 145 insertions(+)
create mode 100644 doc/guides/eventdevs/dpaa.rst
diff --git a/doc/guides/eventdevs/dpaa.rst b/doc/guides/eventdevs/dpaa.rst
new file mode 100644
index 0000000..9224ebc
--- /dev/null
+++ b/doc/guides/eventdevs/dpaa.rst
@@ -0,0 +1,144 @@
+.. SPDX-License-Identifier: BSD-3-Clause
+ Copyright 2017 NXP
+
+NXP DPAA Eventdev Driver
+=========================
+
+The DPAA eventdev is an implementation of the eventdev API that provides a
+wide range of the eventdev features. It relies on a DPAA-based
+platform to perform event scheduling.
+
+More information can be found at `NXP Official Website
+<http://www.nxp.com/products/microcontrollers-and-processors/arm-processors/qoriq-arm-processors:QORIQ-ARM>`_.
+
+Features
+--------
+
+The DPAA EVENTDEV implements many features of the eventdev API:
+
+- Hardware based event scheduler
+- 4 event ports
+- 4 event queues
+- Parallel flows
+- Atomic flows
+
+Supported DPAA SoCs
+--------------------
+
+- LS1046A
+- LS1043A
+
+Prerequisites
+-------------
+
+There are following pre-requisities for executing EVENTDEV on a DPAA compatible
+platform:
+
+1. **ARM 64 Tool Chain**
+
+ For example, the `*aarch64* Linaro Toolchain <https://releases.linaro.org/components/toolchain/binaries/6.4-2017.08/aarch64-linux-gnu/>`_.
+
+2. **Linux Kernel**
+
+ It can be obtained from `NXP's Github hosting <https://github.com/qoriq-open-source/linux>`_.
+
+3. **Rootfile system**
+
+ Any *aarch64* supporting filesystem can be used. For example,
+ Ubuntu 15.10 (Wily) or 16.04 LTS (Xenial) userland which can be obtained
+ from `here <http://cdimage.ubuntu.com/ubuntu-base/releases/16.04/release/ubuntu-base-16.04.1-base-arm64.tar.gz>`_.
+
+As an alternative method, DPAA EVENTDEV can also be executed using images provided
+as part of SDK from NXP. The SDK includes all the above prerequisites necessary
+to bring up a DPAA board.
+
+The following dependencies are not part of DPDK and must be installed
+separately:
+
+- **NXP Linux SDK**
+
+ NXP Linux software development kit (SDK) includes support for family
+ of QorIQ® ARM-Architecture-based system on chip (SoC) processors
+ and corresponding boards.
+
+ It includes the Linux board support packages (BSPs) for NXP SoCs,
+ a fully operational tool chain, kernel and board specific modules.
+
+ SDK and related information can be obtained from: `NXP QorIQ SDK <http://www.nxp.com/products/software-and-tools/run-time-software/linux-sdk/linux-sdk-for-qoriq-processors:SDKLINUX>`_.
+
+- **DPDK Extra Scripts**
+
+ DPAA based resources can be configured easily with the help of ready to use
+ xml files as provided in the DPDK Extra repository.
+
+ `DPDK Extras Scripts <https://github.com/qoriq-open-source/dpdk-extras>`_.
+
+Currently supported by DPDK:
+
+- NXP SDK **2.0+** or LSDK **17.09+**
+- Supported architectures: **arm64 LE**.
+
+- Follow the DPDK :ref:`Getting Started Guide for Linux <linux_gsg>` to setup the basic DPDK environment.
+
+Pre-Installation Configuration
+------------------------------
+
+Config File Options
+~~~~~~~~~~~~~~~~~~~
+
+The following options can be modified in the ``config`` file.
+Please note that enabling debugging options may affect system performance.
+
+- ``CONFIG_RTE_LIBRTE_PMD_DPAA_EVENTDEV`` (default ``y``)
+
+ Toggle compilation of the ``librte_pmd_dpaa_event`` driver.
+
+- ``CONFIG_RTE_LIBRTE_PMD_DPAA_EVENTDEV_DEBUG`` (default ``n``)
+
+ Toggle display of generic debugging messages
+
+Driver Compilation
+~~~~~~~~~~~~~~~~~~
+
+To compile the DPAA EVENTDEV PMD for Linux arm64 gcc target, run the
+following ``make`` command:
+
+.. code-block:: console
+
+ cd <DPDK-source-directory>
+ make config T=arm64-dpaa-linuxapp-gcc install
+
+Initialization
+--------------
+
+The DPAA eventdev is exposed as a vdev device which consists of a set of channels
+and queues. On EAL initialization, DPAA components will be
+probed, after which the vdev device can be created from the application code by:
+
+* Invoking ``rte_vdev_init("event_dpaa")`` from the application
+
+* Using ``--vdev="event_dpaa"`` in the EAL options, which will call
+ rte_vdev_init() internally
+
+Example:
+
+.. code-block:: console
+
+ ./your_eventdev_application --vdev="event_dpaa"
+
+Limitations
+-----------
+
+1. DPAA eventdev cannot work with DPAA PUSH mode queues configured for ethdev.
+   Please configure the environment with ``export DPAA_NUM_PUSH_QUEUES=0``.
+
+Platform Requirement
+~~~~~~~~~~~~~~~~~~~~
+
+DPAA drivers for DPDK can only work on NXP SoCs as listed in the
+``Supported DPAA SoCs``.
+
+Port-core binding
+~~~~~~~~~~~~~~~~~
+
+DPAA EVENTDEV driver requires event port 'x' to be used on core 'x'.
diff --git a/doc/guides/eventdevs/index.rst b/doc/guides/eventdevs/index.rst
index ba2048c..22f6480 100644
--- a/doc/guides/eventdevs/index.rst
+++ b/doc/guides/eventdevs/index.rst
@@ -38,5 +38,6 @@ application trough the eventdev API.
:numbered:
dpaa2
+ dpaa
sw
octeontx
--
2.9.3
^ permalink raw reply [flat|nested] 12+ messages in thread
* Re: [dpdk-dev] [PATCH 6/6] doc: add DPAA eventdev guide
2017-12-15 12:59 ` [dpdk-dev] [PATCH 6/6] doc: add DPAA eventdev guide Sunil Kumar Kori
@ 2017-12-15 15:09 ` Kovacevic, Marko
0 siblings, 0 replies; 12+ messages in thread
From: Kovacevic, Marko @ 2017-12-15 15:09 UTC (permalink / raw)
To: Sunil Kumar Kori, jerin.jacob; +Cc: dev, hemant.agrawal
> -----Original Message-----
> From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Sunil Kumar Kori
> Sent: Friday, December 15, 2017 1:00 PM
> To: jerin.jacob@caviumnetworks.com
> Cc: dev@dpdk.org; hemant.agrawal@nxp.com
> Subject: [dpdk-dev] [PATCH 6/6] doc: add DPAA eventdev guide
>
> Signed-off-by: Sunil Kumar Kori <sunil.kori@nxp.com>
> +There are following pre-requisities for executing EVENTDEV on a DPAA
Small typo requisities / requisites
Maybe just small changes with these headers to keep consistency with the other headers
Rootfile system/ Rootfile System
Port-core binding/ Port-core Binding
Marko Kovacevic
^ permalink raw reply [flat|nested] 12+ messages in thread
* Re: [dpdk-dev] [PATCH 0/6] event/dpaa: Support for eventdev
2017-12-15 12:59 [dpdk-dev] [PATCH 0/6] event/dpaa: Support for eventdev Sunil Kumar Kori
` (5 preceding siblings ...)
2017-12-15 12:59 ` [dpdk-dev] [PATCH 6/6] doc: add DPAA eventdev guide Sunil Kumar Kori
@ 2017-12-18 9:09 ` Jerin Jacob
2017-12-19 11:28 ` Hemant Agrawal
7 siblings, 0 replies; 12+ messages in thread
From: Jerin Jacob @ 2017-12-18 9:09 UTC (permalink / raw)
To: Sunil Kumar Kori; +Cc: dev, hemant.agrawal
-----Original Message-----
> Date: Fri, 15 Dec 2017 18:29:27 +0530
> From: Sunil Kumar Kori <sunil.kori@nxp.com>
> To: jerin.jacob@caviumnetworks.com
> CC: dev@dpdk.org, hemant.agrawal@nxp.com
> Subject: [PATCH 0/6] event/dpaa: Support for eventdev
> X-Mailer: git-send-email 2.9.3
>
> Event device support for atomic and parallel queues.
>
> This patch set includes following changes:
> 1. Configuration of atomic and parallel queues with given event device.
> 2. Also maintains previous dequeue method, via poll mode queues.
> 3. Added Rx functions to dequeue data from portal.
> 4. DCA consumption logic for atomic queues.
> 5. Dynamic Logging macros for event device
>
> Sunil Kumar Kori (6):
> bus/dpaa: added event dequeue and consumption support
> bus/dpaa: add dpaa eventdev dynamic log support
> net/dpaa: ethdev Rx queue configurations with eventdev
> event/dpaa: add eventdev PMD
> config: enabling compilation of DPAA eventdev PMD
> doc: add DPAA eventdev guide
>
> config/common_base | 4 +
> config/defconfig_arm64-dpaa-linuxapp-gcc | 3 +
> doc/guides/eventdevs/dpaa.rst | 144 +++++
> doc/guides/eventdevs/index.rst | 1 +
> drivers/bus/dpaa/base/qbman/qman.c | 90 ++-
> drivers/bus/dpaa/dpaa_bus.c | 6 +
> drivers/bus/dpaa/include/fsl_qman.h | 26 +-
> drivers/bus/dpaa/rte_bus_dpaa_version.map | 6 +
> drivers/bus/dpaa/rte_dpaa_bus.h | 14 +
> drivers/bus/dpaa/rte_dpaa_logs.h | 16 +
> drivers/event/Makefile | 1 +
> drivers/event/dpaa/Makefile | 37 ++
> drivers/event/dpaa/dpaa_eventdev.c | 652 ++++++++++++++++++++++
> drivers/event/dpaa/dpaa_eventdev.h | 86 +++
> drivers/event/dpaa/rte_pmd_dpaa_event_version.map | 4 +
> drivers/net/dpaa/Makefile | 2 +
> drivers/net/dpaa/dpaa_ethdev.c | 110 +++-
> drivers/net/dpaa/dpaa_ethdev.h | 29 +
> drivers/net/dpaa/dpaa_rxtx.c | 79 ++-
> drivers/net/dpaa/rte_pmd_dpaa_version.map | 2 +
> mk/rte.app.mk | 1 +
Please update the MAINTAINERS file and release notes.
> 21 files changed, 1296 insertions(+), 17 deletions(-)
> create mode 100644 doc/guides/eventdevs/dpaa.rst
> create mode 100644 drivers/event/dpaa/Makefile
> create mode 100644 drivers/event/dpaa/dpaa_eventdev.c
> create mode 100644 drivers/event/dpaa/dpaa_eventdev.h
> create mode 100644 drivers/event/dpaa/rte_pmd_dpaa_event_version.map
>
> --
> 2.9.3
>
^ permalink raw reply [flat|nested] 12+ messages in thread
* Re: [dpdk-dev] [PATCH 0/6] event/dpaa: Support for eventdev
2017-12-15 12:59 [dpdk-dev] [PATCH 0/6] event/dpaa: Support for eventdev Sunil Kumar Kori
` (6 preceding siblings ...)
2017-12-18 9:09 ` [dpdk-dev] [PATCH 0/6] event/dpaa: Support for eventdev Jerin Jacob
@ 2017-12-19 11:28 ` Hemant Agrawal
7 siblings, 0 replies; 12+ messages in thread
From: Hemant Agrawal @ 2017-12-19 11:28 UTC (permalink / raw)
To: Sunil Kumar Kori, jerin.jacob; +Cc: dev
On 12/15/2017 6:29 PM, Sunil Kumar Kori wrote:
> Event device support for atomic and parallel queues.
>
> This patch set includes following changes:
> 1. Configuration of atomic and parallel queues with given event device.
> 2. Also maintains previous dequeue method, via poll mode queues.
> 3. Added Rx functions to dequeue data from portal.
> 4. DCA consumption logic for atomic queues.
> 5. Dynamic Logging macros for event device
>
Please also specify dependency on these patches if any on other patch
series.
^ permalink raw reply [flat|nested] 12+ messages in thread
* Re: [dpdk-dev] [PATCH 0/6] event/dpaa: Support for eventdev
2017-12-15 13:08 Sunil Kumar Kori
@ 2017-12-16 12:31 ` Jerin Jacob
0 siblings, 0 replies; 12+ messages in thread
From: Jerin Jacob @ 2017-12-16 12:31 UTC (permalink / raw)
To: Sunil Kumar Kori; +Cc: dev, hemant.agrawal
-----Original Message-----
> Date: Fri, 15 Dec 2017 18:38:22 +0530
> From: Sunil Kumar Kori <sunil.kori@nxp.com>
> To: jerin.jacob@caviumnetworks.com
> CC: dev@dpdk.org, hemant.agrawal@nxp.com
> Subject: [PATCH 0/6] event/dpaa: Support for eventdev
> X-Mailer: git-send-email 2.9.3
>
> Event device support for atomic and parallel queues.
>
> This patch set includes following changes:
> 1. Configuration of atomic and parallel queues with given event device.
> 2. Also maintains previous dequeue method, via poll mode queues.
> 3. Added Rx functions to dequeue data from portal.
> 4. DCA consumption logic for atomic queues.
> 5. Dynamic Logging macros for event device
>
> Sunil Kumar Kori (6):
> bus/dpaa: added event dequeue and consumption support
> bus/dpaa: add dpaa eventdev dynamic log support
> net/dpaa: ethdev Rx queue configurations with eventdev
> event/dpaa: add eventdev PMD
> config: enabling compilation of DPAA eventdev PMD
> doc: add DPAA eventdev guide
Looks like you have sent the same series twice.
Please update the patchwork's patch status to reflect case.
^ permalink raw reply [flat|nested] 12+ messages in thread
* [dpdk-dev] [PATCH 0/6] event/dpaa: Support for eventdev
@ 2017-12-15 13:08 Sunil Kumar Kori
2017-12-16 12:31 ` Jerin Jacob
0 siblings, 1 reply; 12+ messages in thread
From: Sunil Kumar Kori @ 2017-12-15 13:08 UTC (permalink / raw)
To: jerin.jacob; +Cc: dev, hemant.agrawal
Event device support for atomic and parallel queues.
This patch set includes the following changes:
1. Configuration of atomic and parallel queues with a given event device.
2. Retains the previous dequeue method via poll-mode queues.
3. Adds Rx functions to dequeue data from the portal.
4. Adds DCA consumption logic for atomic queues.
5. Adds dynamic logging macros for the event device.
Sunil Kumar Kori (6):
bus/dpaa: added event dequeue and consumption support
bus/dpaa: add dpaa eventdev dynamic log support
net/dpaa: ethdev Rx queue configurations with eventdev
event/dpaa: add eventdev PMD
config: enabling compilation of DPAA eventdev PMD
doc: add DPAA eventdev guide
config/common_base | 4 +
config/defconfig_arm64-dpaa-linuxapp-gcc | 3 +
doc/guides/eventdevs/dpaa.rst | 144 +++++
doc/guides/eventdevs/index.rst | 1 +
drivers/bus/dpaa/base/qbman/qman.c | 90 ++-
drivers/bus/dpaa/dpaa_bus.c | 6 +
drivers/bus/dpaa/include/fsl_qman.h | 26 +-
drivers/bus/dpaa/rte_bus_dpaa_version.map | 6 +
drivers/bus/dpaa/rte_dpaa_bus.h | 14 +
drivers/bus/dpaa/rte_dpaa_logs.h | 16 +
drivers/event/Makefile | 1 +
drivers/event/dpaa/Makefile | 37 ++
drivers/event/dpaa/dpaa_eventdev.c | 652 ++++++++++++++++++++++
drivers/event/dpaa/dpaa_eventdev.h | 86 +++
drivers/event/dpaa/rte_pmd_dpaa_event_version.map | 4 +
drivers/net/dpaa/Makefile | 2 +
drivers/net/dpaa/dpaa_ethdev.c | 110 +++-
drivers/net/dpaa/dpaa_ethdev.h | 29 +
drivers/net/dpaa/dpaa_rxtx.c | 79 ++-
drivers/net/dpaa/rte_pmd_dpaa_version.map | 2 +
mk/rte.app.mk | 1 +
21 files changed, 1296 insertions(+), 17 deletions(-)
create mode 100644 doc/guides/eventdevs/dpaa.rst
create mode 100644 drivers/event/dpaa/Makefile
create mode 100644 drivers/event/dpaa/dpaa_eventdev.c
create mode 100644 drivers/event/dpaa/dpaa_eventdev.h
create mode 100644 drivers/event/dpaa/rte_pmd_dpaa_event_version.map
--
2.9.3
end of thread, other threads:[~2017-12-19 11:28 UTC | newest]
Thread overview: 12+ messages
-- links below jump to the message on this page --
2017-12-15 12:59 [dpdk-dev] [PATCH 0/6] event/dpaa: Support for eventdev Sunil Kumar Kori
2017-12-15 12:59 ` [dpdk-dev] [PATCH 1/6] bus/dpaa: added event dequeue and consumption support Sunil Kumar Kori
2017-12-15 12:59 ` [dpdk-dev] [PATCH 2/6] bus/dpaa: add dpaa eventdev dynamic log support Sunil Kumar Kori
2017-12-15 12:59 ` [dpdk-dev] [PATCH 3/6] net/dpaa: ethdev Rx queue configurations with eventdev Sunil Kumar Kori
2017-12-15 12:59 ` [dpdk-dev] [PATCH 4/6] event/dpaa: add eventdev PMD Sunil Kumar Kori
2017-12-15 12:59 ` [dpdk-dev] [PATCH 5/6] config: enabling compilation of DPAA " Sunil Kumar Kori
2017-12-15 12:59 ` [dpdk-dev] [PATCH 6/6] doc: add DPAA eventdev guide Sunil Kumar Kori
2017-12-15 15:09 ` Kovacevic, Marko
2017-12-18 9:09 ` [dpdk-dev] [PATCH 0/6] event/dpaa: Support for eventdev Jerin Jacob
2017-12-19 11:28 ` Hemant Agrawal
2017-12-15 13:08 Sunil Kumar Kori
2017-12-16 12:31 ` Jerin Jacob