* [PATCH v2 01/18] bus/dpaa: fix PFDRs leaks due to FQRNIs
2024-08-23 7:32 ` [PATCH v2 00/18] " Hemant Agrawal
@ 2024-08-23 7:32 ` Hemant Agrawal
2024-08-23 7:32 ` [PATCH v2 02/18] net/dpaa: fix typecasting ch ID to u32 Hemant Agrawal
` (19 subsequent siblings)
20 siblings, 0 replies; 129+ messages in thread
From: Hemant Agrawal @ 2024-08-23 7:32 UTC (permalink / raw)
To: dev; +Cc: ferruh.yigit, Gagandeep Singh, stable
From: Gagandeep Singh <g.singh@nxp.com>
When a Retire FQ command is executed on an FQ in the
Tentatively Scheduled or Parked state, the FQ is retired
immediately and an FQRNI (Frame Queue Retirement
Notification Immediate) message is generated. Software
must read this message from the MR (Message Ring) and
consume it to free the memory it occupies.
The RM does not state which memory FQRNIs use, but
experiments show that they can consume PFDRs. If these
messages are allowed to build up indefinitely, PFDR
resources can become exhausted and enqueues can stall.
Therefore software must consume these MR messages on a
regular basis to avoid depleting the available PFDR
resources.
This is the PFDR leak a user can experience while using
the DPDK crypto driver and repeatedly creating and
destroying sessions. On a session destroy, DPDK calls
qman_retire_fq() for each FQ used by the session, but it
does not handle the generated FQRNIs and allows them to
build up indefinitely in the MR.
This patch fixes the issue by consuming the FQRNIs received
on the MR immediately after FQ retirement, by calling
drain_mr_fqrni(). Note that drain_mr_fqrni() only looks for
FQRNI-type messages to consume. If other message types such
as FQRN, FQRL, FQPN or ERN also arrive on the MR, they need
to be handled separately.
Fixes: c47ff048b99a ("bus/dpaa: add QMAN driver core routines")
Cc: stable@dpdk.org
Signed-off-by: Gagandeep Singh <g.singh@nxp.com>
---
drivers/bus/dpaa/base/qbman/qman.c | 46 ++++++++++++++++--------------
1 file changed, 25 insertions(+), 21 deletions(-)
diff --git a/drivers/bus/dpaa/base/qbman/qman.c b/drivers/bus/dpaa/base/qbman/qman.c
index 301057723e..9c90ee25a6 100644
--- a/drivers/bus/dpaa/base/qbman/qman.c
+++ b/drivers/bus/dpaa/base/qbman/qman.c
@@ -292,10 +292,32 @@ static inline void qman_stop_dequeues_ex(struct qman_portal *p)
qm_dqrr_set_maxfill(&p->p, 0);
}
+static inline void qm_mr_pvb_update(struct qm_portal *portal)
+{
+ register struct qm_mr *mr = &portal->mr;
+ const struct qm_mr_entry *res = qm_cl(mr->ring, mr->pi);
+
+#ifdef RTE_LIBRTE_DPAA_HWDEBUG
+ DPAA_ASSERT(mr->pmode == qm_mr_pvb);
+#endif
+ /* when accessing 'verb', use __raw_readb() to ensure that compiler
+ * inlining doesn't try to optimise out "excess reads".
+ */
+ if ((__raw_readb(&res->ern.verb) & QM_MR_VERB_VBIT) == mr->vbit) {
+ mr->pi = (mr->pi + 1) & (QM_MR_SIZE - 1);
+ if (!mr->pi)
+ mr->vbit ^= QM_MR_VERB_VBIT;
+ mr->fill++;
+ res = MR_INC(res);
+ }
+ dcbit_ro(res);
+}
+
static int drain_mr_fqrni(struct qm_portal *p)
{
const struct qm_mr_entry *msg;
loop:
+ qm_mr_pvb_update(p);
msg = qm_mr_current(p);
if (!msg) {
/*
@@ -317,6 +339,7 @@ static int drain_mr_fqrni(struct qm_portal *p)
do {
now = mfatb();
} while ((then + 10000) > now);
+ qm_mr_pvb_update(p);
msg = qm_mr_current(p);
if (!msg)
return 0;
@@ -479,27 +502,6 @@ static inline int qm_mr_init(struct qm_portal *portal,
return 0;
}
-static inline void qm_mr_pvb_update(struct qm_portal *portal)
-{
- register struct qm_mr *mr = &portal->mr;
- const struct qm_mr_entry *res = qm_cl(mr->ring, mr->pi);
-
-#ifdef RTE_LIBRTE_DPAA_HWDEBUG
- DPAA_ASSERT(mr->pmode == qm_mr_pvb);
-#endif
- /* when accessing 'verb', use __raw_readb() to ensure that compiler
- * inlining doesn't try to optimise out "excess reads".
- */
- if ((__raw_readb(&res->ern.verb) & QM_MR_VERB_VBIT) == mr->vbit) {
- mr->pi = (mr->pi + 1) & (QM_MR_SIZE - 1);
- if (!mr->pi)
- mr->vbit ^= QM_MR_VERB_VBIT;
- mr->fill++;
- res = MR_INC(res);
- }
- dcbit_ro(res);
-}
-
struct qman_portal *
qman_init_portal(struct qman_portal *portal,
const struct qm_portal_config *c,
@@ -1794,6 +1796,8 @@ int qman_retire_fq(struct qman_fq *fq, u32 *flags)
}
out:
FQUNLOCK(fq);
+ /* Draining FQRNIs, if any */
+ drain_mr_fqrni(&p->p);
return rval;
}
--
2.25.1
^ permalink raw reply [flat|nested] 129+ messages in thread
* [PATCH v2 02/18] net/dpaa: fix typecasting ch ID to u32
2024-08-23 7:32 ` [PATCH v2 00/18] " Hemant Agrawal
2024-08-23 7:32 ` [PATCH v2 01/18] bus/dpaa: fix PFDRs leaks due to FQRNIs Hemant Agrawal
@ 2024-08-23 7:32 ` Hemant Agrawal
2024-08-23 7:32 ` [PATCH v2 03/18] bus/dpaa: fix VSP for 1G fm1-mac9 and 10 Hemant Agrawal
` (18 subsequent siblings)
20 siblings, 0 replies; 129+ messages in thread
From: Hemant Agrawal @ 2024-08-23 7:32 UTC (permalink / raw)
To: dev; +Cc: ferruh.yigit, Rohit Raj, hemant.agrawal, stable
From: Rohit Raj <rohit.raj@nxp.com>
Avoid typecasting ch_id to u32 and passing it to another API, since
that can corrupt adjacent data. Instead, create a new u32 variable and
typecast it back to u16 after it is updated by the API.
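As a hedged illustration (hypothetical struct layout, not the driver's),
the bug class and the fixed pattern look like this:

    /* Hypothetical layout, illustration only */
    struct example_rxq {
            uint16_t ch_id;
            uint16_t next_field;  /* clobbered if a u32 is written through &ch_id */
    };

    static void safe_alloc(struct example_rxq *rxq)
    {
            uint32_t ch_id;

            /* let the API write a full u32, then narrow explicitly */
            qman_alloc_pool_range(&ch_id, 1, 1, 0);
            rxq->ch_id = (uint16_t)ch_id;
    }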
Fixes: 0c504f6950b6 ("net/dpaa: support push mode")
Cc: hemant.agrawal@nxp.com
Cc: stable@dpdk.org
Signed-off-by: Rohit Raj <rohit.raj@nxp.com>
---
drivers/net/dpaa/dpaa_ethdev.c | 6 ++++--
1 file changed, 4 insertions(+), 2 deletions(-)
diff --git a/drivers/net/dpaa/dpaa_ethdev.c b/drivers/net/dpaa/dpaa_ethdev.c
index 060b8c678f..1a2de5240f 100644
--- a/drivers/net/dpaa/dpaa_ethdev.c
+++ b/drivers/net/dpaa/dpaa_ethdev.c
@@ -972,7 +972,7 @@ int dpaa_eth_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
struct fman_if *fif = dev->process_private;
struct qman_fq *rxq = &dpaa_intf->rx_queues[queue_idx];
struct qm_mcc_initfq opts = {0};
- u32 flags = 0;
+ u32 ch_id, flags = 0;
int ret;
u32 buffsz = rte_pktmbuf_data_room_size(mp) - RTE_PKTMBUF_HEADROOM;
uint32_t max_rx_pktlen;
@@ -1096,7 +1096,9 @@ int dpaa_eth_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
DPAA_IF_RX_CONTEXT_STASH;
/*Create a channel and associate given queue with the channel*/
- qman_alloc_pool_range((u32 *)&rxq->ch_id, 1, 1, 0);
+ qman_alloc_pool_range(&ch_id, 1, 1, 0);
+ rxq->ch_id = (u16)ch_id;
+
opts.we_mask = opts.we_mask | QM_INITFQ_WE_DESTWQ;
opts.fqd.dest.channel = rxq->ch_id;
opts.fqd.dest.wq = DPAA_IF_RX_PRIORITY;
--
2.25.1
^ permalink raw reply [flat|nested] 129+ messages in thread
* [PATCH v2 03/18] bus/dpaa: fix VSP for 1G fm1-mac9 and 10
2024-08-23 7:32 ` [PATCH v2 00/18] " Hemant Agrawal
2024-08-23 7:32 ` [PATCH v2 01/18] bus/dpaa: fix PFDRs leaks due to FQRNIs Hemant Agrawal
2024-08-23 7:32 ` [PATCH v2 02/18] net/dpaa: fix typecasting ch ID to u32 Hemant Agrawal
@ 2024-08-23 7:32 ` Hemant Agrawal
2024-08-23 7:32 ` [PATCH v2 04/18] bus/dpaa: fix the fman details status Hemant Agrawal
` (17 subsequent siblings)
20 siblings, 0 replies; 129+ messages in thread
From: Hemant Agrawal @ 2024-08-23 7:32 UTC (permalink / raw)
To: dev; +Cc: ferruh.yigit, stable
There is no need to classify the interface separately for 1G and 10G.
Note that VSP (Virtual Storage Profile) is the DPAA equivalent of an
SR-IOV configuration, used to logically divide a physical port into
virtual ports.
Fixes: e0718bb2ca95 ("bus/dpaa: add virtual storage profile port init")
Cc: stable@dpdk.org
Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
---
drivers/bus/dpaa/base/fman/fman.c | 29 +++++++++++++++++++++++++++--
1 file changed, 27 insertions(+), 2 deletions(-)
diff --git a/drivers/bus/dpaa/base/fman/fman.c b/drivers/bus/dpaa/base/fman/fman.c
index 41195eb0a7..beeb03dbf2 100644
--- a/drivers/bus/dpaa/base/fman/fman.c
+++ b/drivers/bus/dpaa/base/fman/fman.c
@@ -153,7 +153,7 @@ static void fman_if_vsp_init(struct __fman_if *__if)
size_t lenp;
const uint8_t mac_idx[] = {-1, 0, 1, 2, 3, 4, 5, 6, 7, 0, 1};
- if (__if->__if.mac_type == fman_mac_1g) {
+ if (__if->__if.mac_idx <= 8) {
for_each_compatible_node(dev, NULL,
"fsl,fman-port-1g-rx-extended-args") {
prop = of_get_property(dev, "cell-index", &lenp);
@@ -176,7 +176,32 @@ static void fman_if_vsp_init(struct __fman_if *__if)
}
}
}
- } else if (__if->__if.mac_type == fman_mac_10g) {
+
+ for_each_compatible_node(dev, NULL,
+ "fsl,fman-port-op-extended-args") {
+ prop = of_get_property(dev, "cell-index", &lenp);
+
+ if (prop) {
+ cell_index = of_read_number(&prop[0],
+ lenp / sizeof(phandle));
+
+ if (cell_index == __if->__if.mac_idx) {
+ prop = of_get_property(dev,
+ "vsp-window",
+ &lenp);
+
+ if (prop) {
+ __if->__if.num_profiles =
+ of_read_number(&prop[0],
+ 1);
+ __if->__if.base_profile_id =
+ of_read_number(&prop[1],
+ 1);
+ }
+ }
+ }
+ }
+ } else {
for_each_compatible_node(dev, NULL,
"fsl,fman-port-10g-rx-extended-args") {
prop = of_get_property(dev, "cell-index", &lenp);
--
2.25.1
^ permalink raw reply [flat|nested] 129+ messages in thread
* [PATCH v2 04/18] bus/dpaa: fix the fman details status
2024-08-23 7:32 ` [PATCH v2 00/18] " Hemant Agrawal
` (2 preceding siblings ...)
2024-08-23 7:32 ` [PATCH v2 03/18] bus/dpaa: fix VSP for 1G fm1-mac9 and 10 Hemant Agrawal
@ 2024-08-23 7:32 ` Hemant Agrawal
2024-08-23 7:32 ` [PATCH v2 05/18] bus/dpaa: add port buffer manager stats Hemant Agrawal
` (16 subsequent siblings)
20 siblings, 0 replies; 129+ messages in thread
From: Hemant Agrawal @ 2024-08-23 7:32 UTC (permalink / raw)
To: dev; +Cc: ferruh.yigit, stable
Fix the incorrect placement of brackets when calculating stats.
This corrects "(a | b) << 32" to "a | (b << 32)".
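A small stand-alone example (illustration only) of why the parenthesisation
matters when assembling a 64-bit counter from two 32-bit registers:

    #include <inttypes.h>
    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
            uint64_t lo = 0x11111111, hi = 0x00000002;

            /* old: the low word is shifted away entirely */
            printf("old: 0x%016" PRIx64 "\n", (lo | hi) << 32);
            /* new: 0x0000000211111111, the intended 64-bit value */
            printf("new: 0x%016" PRIx64 "\n", lo | (hi << 32));
            return 0;
    }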
Fixes: e62a3f4183f1 ("bus/dpaa: fix statistics reading")
Cc: stable@dpdk.org
Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
---
drivers/bus/dpaa/base/fman/fman_hw.c | 9 +++++----
1 file changed, 5 insertions(+), 4 deletions(-)
diff --git a/drivers/bus/dpaa/base/fman/fman_hw.c b/drivers/bus/dpaa/base/fman/fman_hw.c
index 24a99f7235..97e792806f 100644
--- a/drivers/bus/dpaa/base/fman/fman_hw.c
+++ b/drivers/bus/dpaa/base/fman/fman_hw.c
@@ -243,10 +243,11 @@ fman_if_stats_get_all(struct fman_if *p, uint64_t *value, int n)
int i;
uint64_t base_offset = offsetof(struct memac_regs, reoct_l);
- for (i = 0; i < n; i++)
- value[i] = (((u64)in_be32((char *)regs + base_offset + 8 * i) |
- (u64)in_be32((char *)regs + base_offset +
- 8 * i + 4)) << 32);
+ for (i = 0; i < n; i++) {
+ uint64_t a = in_be32((char *)regs + base_offset + 8 * i);
+ uint64_t b = in_be32((char *)regs + base_offset + 8 * i + 4);
+ value[i] = a | b << 32;
+ }
}
void
--
2.25.1
^ permalink raw reply [flat|nested] 129+ messages in thread
* [PATCH v2 05/18] bus/dpaa: add port buffer manager stats
2024-08-23 7:32 ` [PATCH v2 00/18] " Hemant Agrawal
` (3 preceding siblings ...)
2024-08-23 7:32 ` [PATCH v2 04/18] bus/dpaa: fix the fman details status Hemant Agrawal
@ 2024-08-23 7:32 ` Hemant Agrawal
2024-08-23 7:32 ` [PATCH v2 06/18] net/dpaa: support Tx confirmation to enable PTP Hemant Agrawal
` (15 subsequent siblings)
20 siblings, 0 replies; 129+ messages in thread
From: Hemant Agrawal @ 2024-08-23 7:32 UTC (permalink / raw)
To: dev; +Cc: ferruh.yigit, Gagandeep Singh
Add BMI statistics and improve the existing extended
statistics.
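As a hedged usage sketch (standard ethdev xstats API, error handling
trimmed), an application can read the new BMI counters like any other
extended statistic:

    #include <inttypes.h>
    #include <stdio.h>
    #include <rte_ethdev.h>

    static void dump_xstats(uint16_t port_id)
    {
            int i, n = rte_eth_xstats_get(port_id, NULL, 0); /* query count */
            struct rte_eth_xstat xstats[n];
            struct rte_eth_xstat_name names[n];

            rte_eth_xstats_get_names(port_id, names, n);
            rte_eth_xstats_get(port_id, xstats, n);
            for (i = 0; i < n; i++)
                    printf("%s: %" PRIu64 "\n", names[i].name, xstats[i].value);
    }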
Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Signed-off-by: Gagandeep Singh <g.singh@nxp.com>
---
drivers/bus/dpaa/base/fman/fman_hw.c | 61 ++++++++++++++++++++++++++++
drivers/bus/dpaa/include/fman.h | 4 +-
drivers/bus/dpaa/include/fsl_fman.h | 12 ++++++
drivers/bus/dpaa/version.map | 4 ++
drivers/net/dpaa/dpaa_ethdev.c | 46 ++++++++++++++++++---
drivers/net/dpaa/dpaa_ethdev.h | 12 ++++++
6 files changed, 132 insertions(+), 7 deletions(-)
diff --git a/drivers/bus/dpaa/base/fman/fman_hw.c b/drivers/bus/dpaa/base/fman/fman_hw.c
index 97e792806f..124c69edb4 100644
--- a/drivers/bus/dpaa/base/fman/fman_hw.c
+++ b/drivers/bus/dpaa/base/fman/fman_hw.c
@@ -267,6 +267,67 @@ fman_if_stats_reset(struct fman_if *p)
;
}
+void
+fman_if_bmi_stats_enable(struct fman_if *p)
+{
+ struct __fman_if *m = container_of(p, struct __fman_if, __if);
+ struct rx_bmi_regs *regs = (struct rx_bmi_regs *)m->bmi_map;
+ uint32_t tmp;
+
+ tmp = in_be32(&regs->fmbm_rstc);
+
+ tmp |= FMAN_BMI_COUNTERS_EN;
+
+ out_be32(&regs->fmbm_rstc, tmp);
+}
+
+void
+fman_if_bmi_stats_disable(struct fman_if *p)
+{
+ struct __fman_if *m = container_of(p, struct __fman_if, __if);
+ struct rx_bmi_regs *regs = (struct rx_bmi_regs *)m->bmi_map;
+ uint32_t tmp;
+
+ tmp = in_be32(&regs->fmbm_rstc);
+
+ tmp &= ~FMAN_BMI_COUNTERS_EN;
+
+ out_be32(&regs->fmbm_rstc, tmp);
+}
+
+void
+fman_if_bmi_stats_get_all(struct fman_if *p, uint64_t *value)
+{
+ struct __fman_if *m = container_of(p, struct __fman_if, __if);
+ struct rx_bmi_regs *regs = (struct rx_bmi_regs *)m->bmi_map;
+ int i = 0;
+
+ value[i++] = (u32)in_be32(&regs->fmbm_rfrc);
+ value[i++] = (u32)in_be32(&regs->fmbm_rfbc);
+ value[i++] = (u32)in_be32(&regs->fmbm_rlfc);
+ value[i++] = (u32)in_be32(&regs->fmbm_rffc);
+ value[i++] = (u32)in_be32(&regs->fmbm_rfdc);
+ value[i++] = (u32)in_be32(&regs->fmbm_rfldec);
+ value[i++] = (u32)in_be32(&regs->fmbm_rodc);
+ value[i++] = (u32)in_be32(&regs->fmbm_rbdc);
+}
+
+void
+fman_if_bmi_stats_reset(struct fman_if *p)
+{
+ struct __fman_if *m = container_of(p, struct __fman_if, __if);
+ struct rx_bmi_regs *regs = (struct rx_bmi_regs *)m->bmi_map;
+
+ out_be32(&regs->fmbm_rfrc, 0);
+ out_be32(&regs->fmbm_rfbc, 0);
+ out_be32(&regs->fmbm_rlfc, 0);
+ out_be32(&regs->fmbm_rffc, 0);
+ out_be32(&regs->fmbm_rfdc, 0);
+ out_be32(&regs->fmbm_rfldec, 0);
+ out_be32(&regs->fmbm_rodc, 0);
+ out_be32(&regs->fmbm_rbdc, 0);
+}
+
void
fman_if_promiscuous_enable(struct fman_if *p)
{
diff --git a/drivers/bus/dpaa/include/fman.h b/drivers/bus/dpaa/include/fman.h
index 3a6dd555a7..60681068ea 100644
--- a/drivers/bus/dpaa/include/fman.h
+++ b/drivers/bus/dpaa/include/fman.h
@@ -56,6 +56,8 @@
#define FMAN_PORT_BMI_FIFO_UNITS 0x100
#define FMAN_PORT_IC_OFFSET_UNITS 0x10
+#define FMAN_BMI_COUNTERS_EN 0x80000000
+
#define FMAN_ENABLE_BPOOL_DEPLETION 0xF00000F0
#define HASH_CTRL_MCAST_EN 0x00000100
@@ -260,7 +262,7 @@ struct rx_bmi_regs {
/**< Buffer Manager pool Information-*/
uint32_t fmbm_acnt[FMAN_PORT_MAX_EXT_POOLS_NUM];
/**< Allocate Counter-*/
- uint32_t reserved0130[8];
+ uint32_t reserved0120[16];
/**< 0x130/0x140 - 0x15F reserved -*/
uint32_t fmbm_rcgm[FMAN_PORT_CG_MAP_NUM];
/**< Congestion Group Map*/
diff --git a/drivers/bus/dpaa/include/fsl_fman.h b/drivers/bus/dpaa/include/fsl_fman.h
index 20690f8329..5a9750ad0c 100644
--- a/drivers/bus/dpaa/include/fsl_fman.h
+++ b/drivers/bus/dpaa/include/fsl_fman.h
@@ -60,6 +60,18 @@ void fman_if_stats_reset(struct fman_if *p);
__rte_internal
void fman_if_stats_get_all(struct fman_if *p, uint64_t *value, int n);
+__rte_internal
+void fman_if_bmi_stats_enable(struct fman_if *p);
+
+__rte_internal
+void fman_if_bmi_stats_disable(struct fman_if *p);
+
+__rte_internal
+void fman_if_bmi_stats_get_all(struct fman_if *p, uint64_t *value);
+
+__rte_internal
+void fman_if_bmi_stats_reset(struct fman_if *p);
+
/* Set ignore pause option for a specific interface */
void fman_if_set_rx_ignore_pause_frames(struct fman_if *p, bool enable);
diff --git a/drivers/bus/dpaa/version.map b/drivers/bus/dpaa/version.map
index 3f547f75cf..a17d57632e 100644
--- a/drivers/bus/dpaa/version.map
+++ b/drivers/bus/dpaa/version.map
@@ -24,6 +24,10 @@ INTERNAL {
fman_dealloc_bufs_mask_hi;
fman_dealloc_bufs_mask_lo;
fman_if_add_mac_addr;
+ fman_if_bmi_stats_enable;
+ fman_if_bmi_stats_disable;
+ fman_if_bmi_stats_get_all;
+ fman_if_bmi_stats_reset;
fman_if_clear_mac_addr;
fman_if_disable_rx;
fman_if_discard_rx_errors;
diff --git a/drivers/net/dpaa/dpaa_ethdev.c b/drivers/net/dpaa/dpaa_ethdev.c
index 1a2de5240f..90b34e42f2 100644
--- a/drivers/net/dpaa/dpaa_ethdev.c
+++ b/drivers/net/dpaa/dpaa_ethdev.c
@@ -131,6 +131,22 @@ static const struct rte_dpaa_xstats_name_off dpaa_xstats_strings[] = {
offsetof(struct dpaa_if_stats, tvlan)},
{"rx_undersized",
offsetof(struct dpaa_if_stats, tund)},
+ {"rx_frame_counter",
+ offsetof(struct dpaa_if_rx_bmi_stats, fmbm_rfrc)},
+ {"rx_bad_frames_count",
+ offsetof(struct dpaa_if_rx_bmi_stats, fmbm_rfbc)},
+ {"rx_large_frames_count",
+ offsetof(struct dpaa_if_rx_bmi_stats, fmbm_rlfc)},
+ {"rx_filter_frames_count",
+ offsetof(struct dpaa_if_rx_bmi_stats, fmbm_rffc)},
+ {"rx_frame_discrad_count",
+ offsetof(struct dpaa_if_rx_bmi_stats, fmbm_rfdc)},
+ {"rx_frame_list_dma_err_count",
+ offsetof(struct dpaa_if_rx_bmi_stats, fmbm_rfldec)},
+ {"rx_out_of_buffer_discard ",
+ offsetof(struct dpaa_if_rx_bmi_stats, fmbm_rodc)},
+ {"rx_buf_diallocate",
+ offsetof(struct dpaa_if_rx_bmi_stats, fmbm_rbdc)},
};
static struct rte_dpaa_driver rte_dpaa_pmd;
@@ -430,6 +446,7 @@ static void dpaa_interrupt_handler(void *param)
static int dpaa_eth_dev_start(struct rte_eth_dev *dev)
{
struct dpaa_if *dpaa_intf = dev->data->dev_private;
+ struct fman_if *fif = dev->process_private;
uint16_t i;
PMD_INIT_FUNC_TRACE();
@@ -443,7 +460,9 @@ static int dpaa_eth_dev_start(struct rte_eth_dev *dev)
else
dev->tx_pkt_burst = dpaa_eth_queue_tx;
- fman_if_enable_rx(dev->process_private);
+ fman_if_bmi_stats_enable(fif);
+ fman_if_bmi_stats_reset(fif);
+ fman_if_enable_rx(fif);
for (i = 0; i < dev->data->nb_rx_queues; i++)
dev->data->rx_queue_state[i] = RTE_ETH_QUEUE_STATE_STARTED;
@@ -461,8 +480,10 @@ static int dpaa_eth_dev_stop(struct rte_eth_dev *dev)
PMD_INIT_FUNC_TRACE();
dev->data->dev_started = 0;
- if (!fif->is_shared_mac)
+ if (!fif->is_shared_mac) {
+ fman_if_bmi_stats_disable(fif);
fman_if_disable_rx(fif);
+ }
dev->tx_pkt_burst = dpaa_eth_tx_drop_all;
for (i = 0; i < dev->data->nb_rx_queues; i++)
@@ -769,6 +790,7 @@ static int dpaa_eth_stats_reset(struct rte_eth_dev *dev)
PMD_INIT_FUNC_TRACE();
fman_if_stats_reset(dev->process_private);
+ fman_if_bmi_stats_reset(dev->process_private);
return 0;
}
@@ -777,8 +799,9 @@ static int
dpaa_dev_xstats_get(struct rte_eth_dev *dev, struct rte_eth_xstat *xstats,
unsigned int n)
{
- unsigned int i = 0, num = RTE_DIM(dpaa_xstats_strings);
+ unsigned int i = 0, j, num = RTE_DIM(dpaa_xstats_strings);
uint64_t values[sizeof(struct dpaa_if_stats) / 8];
+ unsigned int bmi_count = sizeof(struct dpaa_if_rx_bmi_stats) / 4;
if (n < num)
return num;
@@ -789,10 +812,16 @@ dpaa_dev_xstats_get(struct rte_eth_dev *dev, struct rte_eth_xstat *xstats,
fman_if_stats_get_all(dev->process_private, values,
sizeof(struct dpaa_if_stats) / 8);
- for (i = 0; i < num; i++) {
+ for (i = 0; i < num - (bmi_count - 1); i++) {
xstats[i].id = i;
xstats[i].value = values[dpaa_xstats_strings[i].offset / 8];
}
+ fman_if_bmi_stats_get_all(dev->process_private, values);
+ for (j = 0; i < num; i++, j++) {
+ xstats[i].id = i;
+ xstats[i].value = values[j];
+ }
+
return i;
}
@@ -819,8 +848,9 @@ static int
dpaa_xstats_get_by_id(struct rte_eth_dev *dev, const uint64_t *ids,
uint64_t *values, unsigned int n)
{
- unsigned int i, stat_cnt = RTE_DIM(dpaa_xstats_strings);
+ unsigned int i, j, stat_cnt = RTE_DIM(dpaa_xstats_strings);
uint64_t values_copy[sizeof(struct dpaa_if_stats) / 8];
+ unsigned int bmi_count = sizeof(struct dpaa_if_rx_bmi_stats) / 4;
if (!ids) {
if (n < stat_cnt)
@@ -832,10 +862,14 @@ dpaa_xstats_get_by_id(struct rte_eth_dev *dev, const uint64_t *ids,
fman_if_stats_get_all(dev->process_private, values_copy,
sizeof(struct dpaa_if_stats) / 8);
- for (i = 0; i < stat_cnt; i++)
+ for (i = 0; i < stat_cnt - (bmi_count - 1); i++)
values[i] =
values_copy[dpaa_xstats_strings[i].offset / 8];
+ fman_if_bmi_stats_get_all(dev->process_private, values);
+ for (j = 0; i < stat_cnt; i++, j++)
+ values[i] = values_copy[j];
+
return stat_cnt;
}
diff --git a/drivers/net/dpaa/dpaa_ethdev.h b/drivers/net/dpaa/dpaa_ethdev.h
index b6c61b8b6b..261a5a3ca7 100644
--- a/drivers/net/dpaa/dpaa_ethdev.h
+++ b/drivers/net/dpaa/dpaa_ethdev.h
@@ -212,6 +212,18 @@ dpaa_rx_cb_atomic(void *event,
const struct qm_dqrr_entry *dqrr,
void **bufs);
+struct dpaa_if_rx_bmi_stats {
+ uint32_t fmbm_rstc; /**< Rx Statistics Counters*/
+ uint32_t fmbm_rfrc; /**< Rx Frame Counter*/
+ uint32_t fmbm_rfbc; /**< Rx Bad Frames Counter*/
+ uint32_t fmbm_rlfc; /**< Rx Large Frames Counter*/
+ uint32_t fmbm_rffc; /**< Rx Filter Frames Counter*/
+ uint32_t fmbm_rfdc; /**< Rx Frame Discard Counter*/
+ uint32_t fmbm_rfldec; /**< Rx Frames List DMA Error Counter*/
+ uint32_t fmbm_rodc; /**< Rx Out of Buffers Discard nntr*/
+ uint32_t fmbm_rbdc; /**< Rx Buffers Deallocate Counter*/
+};
+
/* PMD related logs */
extern int dpaa_logtype_pmd;
#define RTE_LOGTYPE_DPAA_PMD dpaa_logtype_pmd
--
2.25.1
^ permalink raw reply [flat|nested] 129+ messages in thread
* [PATCH v2 06/18] net/dpaa: support Tx confirmation to enable PTP
2024-08-23 7:32 ` [PATCH v2 00/18] " Hemant Agrawal
` (4 preceding siblings ...)
2024-08-23 7:32 ` [PATCH v2 05/18] bus/dpaa: add port buffer manager stats Hemant Agrawal
@ 2024-08-23 7:32 ` Hemant Agrawal
2024-08-23 7:32 ` [PATCH v2 07/18] net/dpaa: add support to separate Tx conf queues Hemant Agrawal
` (14 subsequent siblings)
20 siblings, 0 replies; 129+ messages in thread
From: Hemant Agrawal @ 2024-08-23 7:32 UTC (permalink / raw)
To: dev; +Cc: ferruh.yigit, Vanshika Shukla
From: Vanshika Shukla <vanshika.shukla@nxp.com>
Tx confirmation provides dedicated confirmation
queues for transmitted packets. These queues are
used by software to get the status and release
transmitted packet buffers.
This patch also changes the IEEE 1588 support to a devargs option.
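Conceptually, the data-path change looks like the sketch below (names follow
the driver, but this is a simplified illustration, not the exact patch):

    /* Simplified illustration of the Tx-confirmation flow */
    if (dpaa_ieee_1588)
            /* tag the frame descriptor with the confirmation FQ */
            fd_arr[loop].cmd |= DPAA_FD_CMD_FCO | qman_fq_fqid(fq_txconf);

    /* ... enqueue the burst ... */

    if (dpaa_ieee_1588)
            /* drain the confirmation FQ and free the transmitted buffers */
            dpaa_eth_tx_conf(fq_txconf);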
Signed-off-by: Vanshika Shukla <vanshika.shukla@nxp.com>
Acked-by: Hemant Agrawal <hemant.agrawal@nxp.com>
---
doc/guides/nics/dpaa.rst | 3 +
drivers/net/dpaa/dpaa_ethdev.c | 124 ++++++++++++++++++++++++++-------
drivers/net/dpaa/dpaa_ethdev.h | 4 +-
drivers/net/dpaa/dpaa_rxtx.c | 49 +++++++++++++
drivers/net/dpaa/dpaa_rxtx.h | 2 +
5 files changed, 154 insertions(+), 28 deletions(-)
diff --git a/doc/guides/nics/dpaa.rst b/doc/guides/nics/dpaa.rst
index e8402dff52..acf4daab02 100644
--- a/doc/guides/nics/dpaa.rst
+++ b/doc/guides/nics/dpaa.rst
@@ -264,6 +264,9 @@ for details.
Done
testpmd>
+* Use dev arg option ``drv_ieee1588=1`` to enable ieee 1588 support at
+ driver level. e.g. ``dpaa:fm1-mac3,drv_ieee1588=1``
+
FMAN Config
-----------
diff --git a/drivers/net/dpaa/dpaa_ethdev.c b/drivers/net/dpaa/dpaa_ethdev.c
index 90b34e42f2..bba305cfb1 100644
--- a/drivers/net/dpaa/dpaa_ethdev.c
+++ b/drivers/net/dpaa/dpaa_ethdev.c
@@ -1,7 +1,7 @@
/* SPDX-License-Identifier: BSD-3-Clause
*
* Copyright 2016 Freescale Semiconductor, Inc. All rights reserved.
- * Copyright 2017-2020 NXP
+ * Copyright 2017-2020,2022-2024 NXP
*
*/
/* System headers */
@@ -30,6 +30,7 @@
#include <rte_eal.h>
#include <rte_alarm.h>
#include <rte_ether.h>
+#include <rte_kvargs.h>
#include <ethdev_driver.h>
#include <rte_malloc.h>
#include <rte_ring.h>
@@ -50,6 +51,7 @@
#include <process.h>
#include <fmlib/fm_ext.h>
+#define DRIVER_IEEE1588 "drv_ieee1588"
#define CHECK_INTERVAL 100 /* 100ms */
#define MAX_REPEAT_TIME 90 /* 9s (90 * 100ms) in total */
@@ -83,6 +85,7 @@ static uint64_t dev_tx_offloads_nodis =
static int is_global_init;
static int fmc_q = 1; /* Indicates the use of static fmc for distribution */
static int default_q; /* use default queue - FMC is not executed*/
+int dpaa_ieee_1588; /* use to indicate if IEEE 1588 is enabled for the driver */
/* At present we only allow up to 4 push mode queues as default - as each of
* this queue need dedicated portal and we are short of portals.
*/
@@ -1826,9 +1829,15 @@ static int dpaa_tx_queue_init(struct qman_fq *fq,
opts.fqd.dest.wq = DPAA_IF_TX_PRIORITY;
opts.fqd.fq_ctrl = QM_FQCTRL_PREFERINCACHE;
opts.fqd.context_b = 0;
- /* no tx-confirmation */
- opts.fqd.context_a.hi = 0x80000000 | fman_dealloc_bufs_mask_hi;
- opts.fqd.context_a.lo = 0 | fman_dealloc_bufs_mask_lo;
+ if (dpaa_ieee_1588) {
+ opts.fqd.context_a.lo = 0;
+ opts.fqd.context_a.hi = fman_dealloc_bufs_mask_hi;
+ } else {
+ /* no tx-confirmation */
+ opts.fqd.context_a.lo = fman_dealloc_bufs_mask_lo;
+ opts.fqd.context_a.hi = 0x80000000 | fman_dealloc_bufs_mask_hi;
+ }
+
if (fman_ip_rev >= FMAN_V3) {
/* Set B0V bit in contextA to set ASPID to 0 */
opts.fqd.context_a.hi |= 0x04000000;
@@ -1861,9 +1870,10 @@ static int dpaa_tx_queue_init(struct qman_fq *fq,
return ret;
}
-#ifdef RTE_LIBRTE_DPAA_DEBUG_DRIVER
-/* Initialise a DEBUG FQ ([rt]x_error, rx_default). */
-static int dpaa_debug_queue_init(struct qman_fq *fq, uint32_t fqid)
+/* Initialise a DEBUG FQ ([rt]x_error, rx_default) and DPAA TX CONFIRM queue
+ * to support PTP
+ */
+static int dpaa_def_queue_init(struct qman_fq *fq, uint32_t fqid)
{
struct qm_mcc_initfq opts = {0};
int ret;
@@ -1872,15 +1882,15 @@ static int dpaa_debug_queue_init(struct qman_fq *fq, uint32_t fqid)
ret = qman_reserve_fqid(fqid);
if (ret) {
- DPAA_PMD_ERR("Reserve debug fqid %d failed with ret: %d",
+ DPAA_PMD_ERR("Reserve fqid %d failed with ret: %d",
fqid, ret);
return -EINVAL;
}
/* "map" this Rx FQ to one of the interfaces Tx FQID */
- DPAA_PMD_DEBUG("Creating debug fq %p, fqid %d", fq, fqid);
+ DPAA_PMD_DEBUG("Creating fq %p, fqid %d", fq, fqid);
ret = qman_create_fq(fqid, QMAN_FQ_FLAG_NO_ENQUEUE, fq);
if (ret) {
- DPAA_PMD_ERR("create debug fqid %d failed with ret: %d",
+ DPAA_PMD_ERR("create fqid %d failed with ret: %d",
fqid, ret);
return ret;
}
@@ -1888,11 +1898,10 @@ static int dpaa_debug_queue_init(struct qman_fq *fq, uint32_t fqid)
opts.fqd.dest.wq = DPAA_IF_DEBUG_PRIORITY;
ret = qman_init_fq(fq, 0, &opts);
if (ret)
- DPAA_PMD_ERR("init debug fqid %d failed with ret: %d",
+ DPAA_PMD_ERR("init fqid %d failed with ret: %d",
fqid, ret);
return ret;
}
-#endif
/* Initialise a network interface */
static int
@@ -1927,6 +1936,43 @@ dpaa_dev_init_secondary(struct rte_eth_dev *eth_dev)
return 0;
}
+static int
+check_devargs_handler(__rte_unused const char *key, const char *value,
+ __rte_unused void *opaque)
+{
+ if (strcmp(value, "1"))
+ return -1;
+
+ return 0;
+}
+
+static int
+dpaa_get_devargs(struct rte_devargs *devargs, const char *key)
+{
+ struct rte_kvargs *kvlist;
+
+ if (!devargs)
+ return 0;
+
+ kvlist = rte_kvargs_parse(devargs->args, NULL);
+ if (!kvlist)
+ return 0;
+
+ if (!rte_kvargs_count(kvlist, key)) {
+ rte_kvargs_free(kvlist);
+ return 0;
+ }
+
+ if (rte_kvargs_process(kvlist, key,
+ check_devargs_handler, NULL) < 0) {
+ rte_kvargs_free(kvlist);
+ return 0;
+ }
+ rte_kvargs_free(kvlist);
+
+ return 1;
+}
+
/* Initialise a network interface */
static int
dpaa_dev_init(struct rte_eth_dev *eth_dev)
@@ -1944,6 +1990,7 @@ dpaa_dev_init(struct rte_eth_dev *eth_dev)
uint32_t dev_rx_fqids[DPAA_MAX_NUM_PCD_QUEUES];
int8_t dev_vspids[DPAA_MAX_NUM_PCD_QUEUES];
int8_t vsp_id = -1;
+ struct rte_device *dev = eth_dev->device;
PMD_INIT_FUNC_TRACE();
@@ -1960,6 +2007,9 @@ dpaa_dev_init(struct rte_eth_dev *eth_dev)
dpaa_intf->ifid = dev_id;
dpaa_intf->cfg = cfg;
+ if (dpaa_get_devargs(dev->devargs, DRIVER_IEEE1588))
+ dpaa_ieee_1588 = 1;
+
memset((char *)dev_rx_fqids, 0,
sizeof(uint32_t) * DPAA_MAX_NUM_PCD_QUEUES);
@@ -2079,6 +2129,14 @@ dpaa_dev_init(struct rte_eth_dev *eth_dev)
goto free_rx;
}
+ dpaa_intf->tx_conf_queues = rte_zmalloc(NULL, sizeof(struct qman_fq) *
+ MAX_DPAA_CORES, MAX_CACHELINE);
+ if (!dpaa_intf->tx_conf_queues) {
+ DPAA_PMD_ERR("Failed to alloc mem for TX conf queues\n");
+ ret = -ENOMEM;
+ goto free_rx;
+ }
+
/* If congestion control is enabled globally*/
if (td_tx_threshold) {
dpaa_intf->cgr_tx = rte_zmalloc(NULL,
@@ -2115,22 +2173,32 @@ dpaa_dev_init(struct rte_eth_dev *eth_dev)
}
dpaa_intf->nb_tx_queues = MAX_DPAA_CORES;
-#ifdef RTE_LIBRTE_DPAA_DEBUG_DRIVER
- ret = dpaa_debug_queue_init(&dpaa_intf->debug_queues
- [DPAA_DEBUG_FQ_RX_ERROR], fman_intf->fqid_rx_err);
- if (ret) {
- DPAA_PMD_ERR("DPAA RX ERROR queue init failed!");
- goto free_tx;
- }
- dpaa_intf->debug_queues[DPAA_DEBUG_FQ_RX_ERROR].dpaa_intf = dpaa_intf;
- ret = dpaa_debug_queue_init(&dpaa_intf->debug_queues
- [DPAA_DEBUG_FQ_TX_ERROR], fman_intf->fqid_tx_err);
- if (ret) {
- DPAA_PMD_ERR("DPAA TX ERROR queue init failed!");
- goto free_tx;
- }
- dpaa_intf->debug_queues[DPAA_DEBUG_FQ_TX_ERROR].dpaa_intf = dpaa_intf;
+#if !defined(RTE_LIBRTE_DPAA_DEBUG_DRIVER)
+ if (dpaa_ieee_1588)
#endif
+ {
+ ret = dpaa_def_queue_init(&dpaa_intf->debug_queues
+ [DPAA_DEBUG_FQ_RX_ERROR], fman_intf->fqid_rx_err);
+ if (ret) {
+ DPAA_PMD_ERR("DPAA RX ERROR queue init failed!");
+ goto free_tx;
+ }
+ dpaa_intf->debug_queues[DPAA_DEBUG_FQ_RX_ERROR].dpaa_intf = dpaa_intf;
+ ret = dpaa_def_queue_init(&dpaa_intf->debug_queues
+ [DPAA_DEBUG_FQ_TX_ERROR], fman_intf->fqid_tx_err);
+ if (ret) {
+ DPAA_PMD_ERR("DPAA TX ERROR queue init failed!");
+ goto free_tx;
+ }
+ dpaa_intf->debug_queues[DPAA_DEBUG_FQ_TX_ERROR].dpaa_intf = dpaa_intf;
+ ret = dpaa_def_queue_init(dpaa_intf->tx_conf_queues,
+ fman_intf->fqid_tx_confirm);
+ if (ret) {
+ DPAA_PMD_ERR("DPAA TX CONFIRM queue init failed!");
+ goto free_tx;
+ }
+ dpaa_intf->tx_conf_queues->dpaa_intf = dpaa_intf;
+ }
DPAA_PMD_DEBUG("All frame queues created");
@@ -2388,4 +2456,6 @@ static struct rte_dpaa_driver rte_dpaa_pmd = {
};
RTE_PMD_REGISTER_DPAA(net_dpaa, rte_dpaa_pmd);
+RTE_PMD_REGISTER_PARAM_STRING(net_dpaa,
+ DRIVER_IEEE1588 "=<int>");
RTE_LOG_REGISTER_DEFAULT(dpaa_logtype_pmd, NOTICE);
diff --git a/drivers/net/dpaa/dpaa_ethdev.h b/drivers/net/dpaa/dpaa_ethdev.h
index 261a5a3ca7..b427b29cb6 100644
--- a/drivers/net/dpaa/dpaa_ethdev.h
+++ b/drivers/net/dpaa/dpaa_ethdev.h
@@ -1,7 +1,7 @@
/* SPDX-License-Identifier: BSD-3-Clause
*
* Copyright (c) 2014-2016 Freescale Semiconductor, Inc. All rights reserved.
- * Copyright 2017-2019 NXP
+ * Copyright 2017-2024 NXP
*
*/
#ifndef __DPAA_ETHDEV_H__
@@ -112,6 +112,7 @@
#define FMC_FILE "/tmp/fmc.bin"
extern struct rte_mempool *dpaa_tx_sg_pool;
+extern int dpaa_ieee_1588;
/* structure to free external and indirect
* buffers.
@@ -131,6 +132,7 @@ struct dpaa_if {
struct qman_fq *rx_queues;
struct qman_cgr *cgr_rx;
struct qman_fq *tx_queues;
+ struct qman_fq *tx_conf_queues;
struct qman_cgr *cgr_tx;
struct qman_fq debug_queues[2];
uint16_t nb_rx_queues;
diff --git a/drivers/net/dpaa/dpaa_rxtx.c b/drivers/net/dpaa/dpaa_rxtx.c
index c2579d65ee..8593e20200 100644
--- a/drivers/net/dpaa/dpaa_rxtx.c
+++ b/drivers/net/dpaa/dpaa_rxtx.c
@@ -1082,6 +1082,9 @@ dpaa_eth_queue_tx(void *q, struct rte_mbuf **bufs, uint16_t nb_bufs)
uint32_t seqn, index, flags[DPAA_TX_BURST_SIZE] = {0};
struct dpaa_sw_buf_free buf_to_free[DPAA_MAX_SGS * DPAA_MAX_DEQUEUE_NUM_FRAMES];
uint32_t free_count = 0;
+ struct qman_fq *fq = q;
+ struct dpaa_if *dpaa_intf = fq->dpaa_intf;
+ struct qman_fq *fq_txconf = dpaa_intf->tx_conf_queues;
if (unlikely(!DPAA_PER_LCORE_PORTAL)) {
ret = rte_dpaa_portal_init((void *)0);
@@ -1162,6 +1165,10 @@ dpaa_eth_queue_tx(void *q, struct rte_mbuf **bufs, uint16_t nb_bufs)
mbuf = temp_mbuf;
realloc_mbuf = 0;
}
+
+ if (dpaa_ieee_1588)
+ fd_arr[loop].cmd |= DPAA_FD_CMD_FCO | qman_fq_fqid(fq_txconf);
+
indirect_buf:
state = tx_on_dpaa_pool(mbuf, bp_info,
&fd_arr[loop],
@@ -1190,6 +1197,9 @@ dpaa_eth_queue_tx(void *q, struct rte_mbuf **bufs, uint16_t nb_bufs)
sent += frames_to_send;
}
+ if (dpaa_ieee_1588)
+ dpaa_eth_tx_conf(fq_txconf);
+
DPAA_DP_LOG(DEBUG, "Transmitted %d buffers on queue: %p", sent, q);
for (loop = 0; loop < free_count; loop++) {
@@ -1200,6 +1210,45 @@ dpaa_eth_queue_tx(void *q, struct rte_mbuf **bufs, uint16_t nb_bufs)
return sent;
}
+void
+dpaa_eth_tx_conf(void *q)
+{
+ struct qman_fq *fq = q;
+ struct qm_dqrr_entry *dq;
+ int num_tx_conf, ret, dq_num;
+ uint32_t vdqcr_flags = 0;
+
+ if (unlikely(rte_dpaa_bpid_info == NULL &&
+ rte_eal_process_type() == RTE_PROC_SECONDARY))
+ rte_dpaa_bpid_info = fq->bp_array;
+
+ if (unlikely(!DPAA_PER_LCORE_PORTAL)) {
+ ret = rte_dpaa_portal_init((void *)0);
+ if (ret) {
+ DPAA_PMD_ERR("Failure in affining portal");
+ return;
+ }
+ }
+
+ num_tx_conf = DPAA_MAX_DEQUEUE_NUM_FRAMES - 2;
+
+ do {
+ dq_num = 0;
+ ret = qman_set_vdq(fq, num_tx_conf, vdqcr_flags);
+ if (ret)
+ return;
+ do {
+ dq = qman_dequeue(fq);
+ if (!dq)
+ continue;
+ dq_num++;
+ dpaa_display_frame_info(&dq->fd, fq->fqid, true);
+ qman_dqrr_consume(fq, dq);
+ dpaa_free_mbuf(&dq->fd);
+ } while (fq->flags & QMAN_FQ_STATE_VDQCR);
+ } while (dq_num == num_tx_conf);
+}
+
uint16_t
dpaa_eth_queue_tx_slow(void *q, struct rte_mbuf **bufs, uint16_t nb_bufs)
{
diff --git a/drivers/net/dpaa/dpaa_rxtx.h b/drivers/net/dpaa/dpaa_rxtx.h
index b2d7c0f2a3..042602e087 100644
--- a/drivers/net/dpaa/dpaa_rxtx.h
+++ b/drivers/net/dpaa/dpaa_rxtx.h
@@ -281,6 +281,8 @@ uint16_t dpaa_eth_queue_tx_slow(void *q, struct rte_mbuf **bufs,
uint16_t nb_bufs);
uint16_t dpaa_eth_queue_tx(void *q, struct rte_mbuf **bufs, uint16_t nb_bufs);
+void dpaa_eth_tx_conf(void *q);
+
uint16_t dpaa_eth_tx_drop_all(void *q __rte_unused,
struct rte_mbuf **bufs __rte_unused,
uint16_t nb_bufs __rte_unused);
--
2.25.1
^ permalink raw reply [flat|nested] 129+ messages in thread
* [PATCH v2 07/18] net/dpaa: add support to separate Tx conf queues
2024-08-23 7:32 ` [PATCH v2 00/18] " Hemant Agrawal
` (5 preceding siblings ...)
2024-08-23 7:32 ` [PATCH v2 06/18] net/dpaa: support Tx confirmation to enable PTP Hemant Agrawal
@ 2024-08-23 7:32 ` Hemant Agrawal
2024-08-23 7:32 ` [PATCH v2 08/18] net/dpaa: share MAC FMC scheme and CC parse Hemant Agrawal
` (13 subsequent siblings)
20 siblings, 0 replies; 129+ messages in thread
From: Hemant Agrawal @ 2024-08-23 7:32 UTC (permalink / raw)
To: dev; +Cc: ferruh.yigit, Vanshika Shukla
From: Vanshika Shukla <vanshika.shukla@nxp.com>
This patch separates Tx confirmation queues for kernel
and DPDK so as to support the VSP case.
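In outline, each Tx queue gets its own confirmation queue at init time, and
the fast path reaches it via fq->tx_conf_queue (a simplified sketch of the
association this patch adds, illustration only):

    for (loop = 0; loop < MAX_DPAA_CORES; loop++) {
            ret = dpaa_tx_conf_queue_init(&dpaa_intf->tx_conf_queues[loop]);
            if (ret)
                    goto free_tx;
            dpaa_intf->tx_conf_queues[loop].dpaa_intf = dpaa_intf;
            /* per-queue link used by the Tx fast path */
            dpaa_intf->tx_queues[loop].tx_conf_queue =
                    &dpaa_intf->tx_conf_queues[loop];
    }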
Signed-off-by: Vanshika Shukla <vanshika.shukla@nxp.com>
Acked-by: Hemant Agrawal <hemant.agrawal@nxp.com>
---
drivers/bus/dpaa/include/fsl_qman.h | 4 ++-
drivers/net/dpaa/dpaa_ethdev.c | 45 +++++++++++++++++++++--------
drivers/net/dpaa/dpaa_rxtx.c | 3 +-
3 files changed, 37 insertions(+), 15 deletions(-)
diff --git a/drivers/bus/dpaa/include/fsl_qman.h b/drivers/bus/dpaa/include/fsl_qman.h
index c0677976e8..db14dfb839 100644
--- a/drivers/bus/dpaa/include/fsl_qman.h
+++ b/drivers/bus/dpaa/include/fsl_qman.h
@@ -1,7 +1,7 @@
/* SPDX-License-Identifier: (BSD-3-Clause OR GPL-2.0)
*
* Copyright 2008-2012 Freescale Semiconductor, Inc.
- * Copyright 2019 NXP
+ * Copyright 2019-2022 NXP
*
*/
@@ -1237,6 +1237,8 @@ struct qman_fq {
/* DPDK Interface */
void *dpaa_intf;
+ /*to store tx_conf_queue corresponding to tx_queue*/
+ struct qman_fq *tx_conf_queue;
struct rte_event ev;
/* affined portal in case of static queue */
diff --git a/drivers/net/dpaa/dpaa_ethdev.c b/drivers/net/dpaa/dpaa_ethdev.c
index bba305cfb1..3ee3029729 100644
--- a/drivers/net/dpaa/dpaa_ethdev.c
+++ b/drivers/net/dpaa/dpaa_ethdev.c
@@ -1870,9 +1870,30 @@ static int dpaa_tx_queue_init(struct qman_fq *fq,
return ret;
}
-/* Initialise a DEBUG FQ ([rt]x_error, rx_default) and DPAA TX CONFIRM queue
- * to support PTP
- */
+static int
+dpaa_tx_conf_queue_init(struct qman_fq *fq)
+{
+ struct qm_mcc_initfq opts = {0};
+ int ret;
+
+ PMD_INIT_FUNC_TRACE();
+
+ ret = qman_create_fq(0, QMAN_FQ_FLAG_DYNAMIC_FQID, fq);
+ if (ret) {
+ DPAA_PMD_ERR("create Tx_conf failed with ret: %d", ret);
+ return ret;
+ }
+
+ opts.we_mask = QM_INITFQ_WE_DESTWQ | QM_INITFQ_WE_FQCTRL;
+ opts.fqd.dest.wq = DPAA_IF_DEBUG_PRIORITY;
+ ret = qman_init_fq(fq, 0, &opts);
+ if (ret)
+ DPAA_PMD_ERR("init Tx_conf fqid %d failed with ret: %d",
+ fq->fqid, ret);
+ return ret;
+}
+
+/* Initialise a DEBUG FQ ([rt]x_error, rx_default) */
static int dpaa_def_queue_init(struct qman_fq *fq, uint32_t fqid)
{
struct qm_mcc_initfq opts = {0};
@@ -2170,6 +2191,15 @@ dpaa_dev_init(struct rte_eth_dev *eth_dev)
if (ret)
goto free_tx;
dpaa_intf->tx_queues[loop].dpaa_intf = dpaa_intf;
+
+ if (dpaa_ieee_1588) {
+ ret = dpaa_tx_conf_queue_init(&dpaa_intf->tx_conf_queues[loop]);
+ if (ret)
+ goto free_tx;
+
+ dpaa_intf->tx_conf_queues[loop].dpaa_intf = dpaa_intf;
+ dpaa_intf->tx_queues[loop].tx_conf_queue = &dpaa_intf->tx_conf_queues[loop];
+ }
}
dpaa_intf->nb_tx_queues = MAX_DPAA_CORES;
@@ -2190,16 +2220,7 @@ dpaa_dev_init(struct rte_eth_dev *eth_dev)
DPAA_PMD_ERR("DPAA TX ERROR queue init failed!");
goto free_tx;
}
- dpaa_intf->debug_queues[DPAA_DEBUG_FQ_TX_ERROR].dpaa_intf = dpaa_intf;
- ret = dpaa_def_queue_init(dpaa_intf->tx_conf_queues,
- fman_intf->fqid_tx_confirm);
- if (ret) {
- DPAA_PMD_ERR("DPAA TX CONFIRM queue init failed!");
- goto free_tx;
- }
- dpaa_intf->tx_conf_queues->dpaa_intf = dpaa_intf;
}
-
DPAA_PMD_DEBUG("All frame queues created");
/* Get the initial configuration for flow control */
diff --git a/drivers/net/dpaa/dpaa_rxtx.c b/drivers/net/dpaa/dpaa_rxtx.c
index 8593e20200..3bd35c7a0e 100644
--- a/drivers/net/dpaa/dpaa_rxtx.c
+++ b/drivers/net/dpaa/dpaa_rxtx.c
@@ -1083,8 +1083,7 @@ dpaa_eth_queue_tx(void *q, struct rte_mbuf **bufs, uint16_t nb_bufs)
struct dpaa_sw_buf_free buf_to_free[DPAA_MAX_SGS * DPAA_MAX_DEQUEUE_NUM_FRAMES];
uint32_t free_count = 0;
struct qman_fq *fq = q;
- struct dpaa_if *dpaa_intf = fq->dpaa_intf;
- struct qman_fq *fq_txconf = dpaa_intf->tx_conf_queues;
+ struct qman_fq *fq_txconf = fq->tx_conf_queue;
if (unlikely(!DPAA_PER_LCORE_PORTAL)) {
ret = rte_dpaa_portal_init((void *)0);
--
2.25.1
^ permalink raw reply [flat|nested] 129+ messages in thread
* [PATCH v2 08/18] net/dpaa: share MAC FMC scheme and CC parse
2024-08-23 7:32 ` [PATCH v2 00/18] " Hemant Agrawal
` (6 preceding siblings ...)
2024-08-23 7:32 ` [PATCH v2 07/18] net/dpaa: add support to separate Tx conf queues Hemant Agrawal
@ 2024-08-23 7:32 ` Hemant Agrawal
2024-08-23 7:32 ` [PATCH v2 09/18] net/dpaa: support Rx/Tx timestamp read Hemant Agrawal
` (12 subsequent siblings)
20 siblings, 0 replies; 129+ messages in thread
From: Hemant Agrawal @ 2024-08-23 7:32 UTC (permalink / raw)
To: dev; +Cc: ferruh.yigit, Jun Yang
From: Jun Yang <jun.yang@nxp.com>
For a shared MAC:
1) Allocate RXQs from the VSP (Virtual Storage Profile) scheme.
2) Allocate RXQs from coarse classification (CC) rules to the VSP.
3) Remove allocated RXQs which are reconfigured without VSP.
4) Don't allocate the default queue and error queues.
Signed-off-by: Jun Yang <jun.yang@nxp.com>
Acked-by: Hemant Agrawal <hemant.agrawal@nxp.com>
---
drivers/bus/dpaa/base/fman/fman.c | 2 +-
drivers/bus/dpaa/include/fman.h | 3 +-
drivers/net/dpaa/dpaa_ethdev.c | 60 +++--
drivers/net/dpaa/dpaa_ethdev.h | 13 +-
drivers/net/dpaa/dpaa_flow.c | 8 +-
drivers/net/dpaa/dpaa_fmc.c | 421 +++++++++++++++++++-----------
drivers/net/dpaa/dpaa_rxtx.c | 20 +-
7 files changed, 348 insertions(+), 179 deletions(-)
diff --git a/drivers/bus/dpaa/base/fman/fman.c b/drivers/bus/dpaa/base/fman/fman.c
index beeb03dbf2..bf41a3ed96 100644
--- a/drivers/bus/dpaa/base/fman/fman.c
+++ b/drivers/bus/dpaa/base/fman/fman.c
@@ -42,7 +42,7 @@ if_destructor(struct __fman_if *__if)
if (!__if)
return;
- if (__if->__if.mac_type == fman_offline)
+ if (__if->__if.mac_type == fman_offline_internal)
goto cleanup;
list_for_each_entry_safe(bp, tmpbp, &__if->__if.bpool_list, node) {
diff --git a/drivers/bus/dpaa/include/fman.h b/drivers/bus/dpaa/include/fman.h
index 60681068ea..3642b43be7 100644
--- a/drivers/bus/dpaa/include/fman.h
+++ b/drivers/bus/dpaa/include/fman.h
@@ -72,10 +72,11 @@ TAILQ_HEAD(rte_fman_if_list, __fman_if);
/* Represents the different flavour of network interface */
enum fman_mac_type {
- fman_offline = 0,
+ fman_offline_internal = 0,
fman_mac_1g,
fman_mac_10g,
fman_mac_2_5g,
+ fman_onic,
};
struct mac_addr {
diff --git a/drivers/net/dpaa/dpaa_ethdev.c b/drivers/net/dpaa/dpaa_ethdev.c
index 3ee3029729..bf14d73433 100644
--- a/drivers/net/dpaa/dpaa_ethdev.c
+++ b/drivers/net/dpaa/dpaa_ethdev.c
@@ -255,7 +255,6 @@ dpaa_eth_dev_configure(struct rte_eth_dev *dev)
DPAA_PMD_ERR("Cannot open IF socket");
return -errno;
}
-
strncpy(ifr.ifr_name, dpaa_intf->name, IFNAMSIZ - 1);
if (ioctl(socket_fd, SIOCGIFMTU, &ifr) < 0) {
@@ -1893,6 +1892,7 @@ dpaa_tx_conf_queue_init(struct qman_fq *fq)
return ret;
}
+#if defined(RTE_LIBRTE_DPAA_DEBUG_DRIVER)
/* Initialise a DEBUG FQ ([rt]x_error, rx_default) */
static int dpaa_def_queue_init(struct qman_fq *fq, uint32_t fqid)
{
@@ -1923,6 +1923,7 @@ static int dpaa_def_queue_init(struct qman_fq *fq, uint32_t fqid)
fqid, ret);
return ret;
}
+#endif
/* Initialise a network interface */
static int
@@ -1957,6 +1958,41 @@ dpaa_dev_init_secondary(struct rte_eth_dev *eth_dev)
return 0;
}
+#ifdef RTE_LIBRTE_DPAA_DEBUG_DRIVER
+static int
+dpaa_error_queue_init(struct dpaa_if *dpaa_intf,
+ struct fman_if *fman_intf)
+{
+ int i, ret;
+ struct qman_fq *err_queues = dpaa_intf->debug_queues;
+ uint32_t err_fqid = 0;
+
+ if (fman_intf->is_shared_mac) {
+ DPAA_PMD_DEBUG("Shared MAC's err queues are handled in kernel");
+ return 0;
+ }
+
+ for (i = 0; i < DPAA_DEBUG_FQ_MAX_NUM; i++) {
+ if (i == DPAA_DEBUG_FQ_RX_ERROR)
+ err_fqid = fman_intf->fqid_rx_err;
+ else if (i == DPAA_DEBUG_FQ_TX_ERROR)
+ err_fqid = fman_intf->fqid_tx_err;
+ else
+ continue;
+ ret = dpaa_def_queue_init(&err_queues[i], err_fqid);
+ if (ret) {
+ DPAA_PMD_ERR("DPAA %s ERROR queue init failed!",
+ i == DPAA_DEBUG_FQ_RX_ERROR ?
+ "RX" : "TX");
+ return ret;
+ }
+ err_queues[i].dpaa_intf = dpaa_intf;
+ }
+
+ return 0;
+}
+#endif
+
static int
check_devargs_handler(__rte_unused const char *key, const char *value,
__rte_unused void *opaque)
@@ -2202,25 +2238,11 @@ dpaa_dev_init(struct rte_eth_dev *eth_dev)
}
}
dpaa_intf->nb_tx_queues = MAX_DPAA_CORES;
-
-#if !defined(RTE_LIBRTE_DPAA_DEBUG_DRIVER)
- if (dpaa_ieee_1588)
+#if defined(RTE_LIBRTE_DPAA_DEBUG_DRIVER)
+ ret = dpaa_error_queue_init(dpaa_intf, fman_intf);
+ if (ret)
+ goto free_tx;
#endif
- {
- ret = dpaa_def_queue_init(&dpaa_intf->debug_queues
- [DPAA_DEBUG_FQ_RX_ERROR], fman_intf->fqid_rx_err);
- if (ret) {
- DPAA_PMD_ERR("DPAA RX ERROR queue init failed!");
- goto free_tx;
- }
- dpaa_intf->debug_queues[DPAA_DEBUG_FQ_RX_ERROR].dpaa_intf = dpaa_intf;
- ret = dpaa_def_queue_init(&dpaa_intf->debug_queues
- [DPAA_DEBUG_FQ_TX_ERROR], fman_intf->fqid_tx_err);
- if (ret) {
- DPAA_PMD_ERR("DPAA TX ERROR queue init failed!");
- goto free_tx;
- }
- }
DPAA_PMD_DEBUG("All frame queues created");
/* Get the initial configuration for flow control */
diff --git a/drivers/net/dpaa/dpaa_ethdev.h b/drivers/net/dpaa/dpaa_ethdev.h
index b427b29cb6..0a1ceb376a 100644
--- a/drivers/net/dpaa/dpaa_ethdev.h
+++ b/drivers/net/dpaa/dpaa_ethdev.h
@@ -78,8 +78,11 @@
#define DPAA_IF_RX_CONTEXT_STASH 0
/* Each "debug" FQ is represented by one of these */
-#define DPAA_DEBUG_FQ_RX_ERROR 0
-#define DPAA_DEBUG_FQ_TX_ERROR 1
+enum {
+ DPAA_DEBUG_FQ_RX_ERROR,
+ DPAA_DEBUG_FQ_TX_ERROR,
+ DPAA_DEBUG_FQ_MAX_NUM
+};
#define DPAA_RSS_OFFLOAD_ALL ( \
RTE_ETH_RSS_L2_PAYLOAD | \
@@ -107,6 +110,10 @@
#define DPAA_FD_CMD_CFQ 0x00ffffff
/**< Confirmation Frame Queue */
+#define DPAA_1G_MAC_START_IDX 1
+#define DPAA_10G_MAC_START_IDX 9
+#define DPAA_2_5G_MAC_START_IDX DPAA_10G_MAC_START_IDX
+
#define DPAA_DEFAULT_RXQ_VSP_ID 1
#define FMC_FILE "/tmp/fmc.bin"
@@ -134,7 +141,7 @@ struct dpaa_if {
struct qman_fq *tx_queues;
struct qman_fq *tx_conf_queues;
struct qman_cgr *cgr_tx;
- struct qman_fq debug_queues[2];
+ struct qman_fq debug_queues[DPAA_DEBUG_FQ_MAX_NUM];
uint16_t nb_rx_queues;
uint16_t nb_tx_queues;
uint32_t ifid;
diff --git a/drivers/net/dpaa/dpaa_flow.c b/drivers/net/dpaa/dpaa_flow.c
index 02aca78d05..082bd5d014 100644
--- a/drivers/net/dpaa/dpaa_flow.c
+++ b/drivers/net/dpaa/dpaa_flow.c
@@ -651,7 +651,13 @@ static inline int set_pcd_netenv_scheme(struct dpaa_if *dpaa_intf,
static inline int get_port_type(struct fman_if *fif)
{
- if (fif->mac_type == fman_mac_1g)
+ /* For 1G fm-mac9 and fm-mac10 ports, configure the VSP as 10G
+ * ports so that kernel can configure correct port.
+ */
+ if (fif->mac_type == fman_mac_1g &&
+ fif->mac_idx >= DPAA_10G_MAC_START_IDX)
+ return e_FM_PORT_TYPE_RX_10G;
+ else if (fif->mac_type == fman_mac_1g)
return e_FM_PORT_TYPE_RX;
else if (fif->mac_type == fman_mac_2_5g)
return e_FM_PORT_TYPE_RX_2_5G;
diff --git a/drivers/net/dpaa/dpaa_fmc.c b/drivers/net/dpaa/dpaa_fmc.c
index f8c9360311..7dc42f6e23 100644
--- a/drivers/net/dpaa/dpaa_fmc.c
+++ b/drivers/net/dpaa/dpaa_fmc.c
@@ -1,5 +1,5 @@
/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright 2017-2021 NXP
+ * Copyright 2017-2023 NXP
*/
/* System headers */
@@ -204,139 +204,258 @@ struct fmc_model_t {
struct fmc_model_t *g_fmc_model;
-static int dpaa_port_fmc_port_parse(struct fman_if *fif,
- const struct fmc_model_t *fmc_model,
- int apply_idx)
+static int
+dpaa_port_fmc_port_parse(struct fman_if *fif,
+ const struct fmc_model_t *fmc_model,
+ int apply_idx)
{
int current_port = fmc_model->apply_order[apply_idx].index;
const fmc_port *pport = &fmc_model->port[current_port];
- const uint8_t mac_idx[] = {-1, 0, 1, 2, 3, 4, 5, 6, 7, 0, 1};
- const uint8_t mac_type[] = {1, 1, 1, 1, 1, 1, 1, 1, 1, 2, 2};
+ uint32_t num;
+
+ if (pport->type == e_FM_PORT_TYPE_OH_OFFLINE_PARSING &&
+ pport->number == fif->mac_idx &&
+ (fif->mac_type == fman_offline_internal ||
+ fif->mac_type == fman_onic))
+ return current_port;
+
+ if (fif->mac_type == fman_mac_1g) {
+ if (pport->type != e_FM_PORT_TYPE_RX)
+ return -ENODEV;
+ num = pport->number + DPAA_1G_MAC_START_IDX;
+ if (fif->mac_idx == num)
+ return current_port;
+
+ return -ENODEV;
+ }
+
+ if (fif->mac_type == fman_mac_2_5g) {
+ if (pport->type != e_FM_PORT_TYPE_RX_2_5G)
+ return -ENODEV;
+ num = pport->number + DPAA_2_5G_MAC_START_IDX;
+ if (fif->mac_idx == num)
+ return current_port;
+
+ return -ENODEV;
+ }
+
+ if (fif->mac_type == fman_mac_10g) {
+ if (pport->type != e_FM_PORT_TYPE_RX_10G)
+ return -ENODEV;
+ num = pport->number + DPAA_10G_MAC_START_IDX;
+ if (fif->mac_idx == num)
+ return current_port;
+
+ return -ENODEV;
+ }
+
+ DPAA_PMD_ERR("Invalid MAC(mac_idx=%d) type(%d)",
+ fif->mac_idx, fif->mac_type);
+
+ return -EINVAL;
+}
+
+static int
+dpaa_fq_is_in_kernel(uint32_t fqid,
+ struct fman_if *fif)
+{
+ if (!fif->is_shared_mac)
+ return false;
+
+ if ((fqid == fif->fqid_rx_def ||
+ (fqid >= fif->fqid_rx_pcd &&
+ fqid < (fif->fqid_rx_pcd + fif->fqid_rx_pcd_count)) ||
+ fqid == fif->fqid_rx_err ||
+ fqid == fif->fqid_tx_err))
+ return true;
+
+ return false;
+}
+
+static int
+dpaa_vsp_id_is_in_kernel(uint8_t vsp_id,
+ struct fman_if *fif)
+{
+ if (!fif->is_shared_mac)
+ return false;
+
+ if (vsp_id == fif->base_profile_id)
+ return true;
+
+ return false;
+}
+
+static uint8_t
+dpaa_enqueue_vsp_id(struct fman_if *fif,
+ const struct ioc_fm_pcd_cc_next_enqueue_params_t *eq_param)
+{
+ if (eq_param->override_fqid)
+ return eq_param->new_relative_storage_profile_id;
+
+ return fif->base_profile_id;
+}
- if (mac_idx[fif->mac_idx] != pport->number ||
- mac_type[fif->mac_idx] != pport->type)
- return -1;
+static int
+dpaa_kg_storage_is_in_kernel(struct fman_if *fif,
+ const struct ioc_fm_pcd_kg_storage_profile_t *kg_storage)
+{
+ if (!fif->is_shared_mac)
+ return false;
+
+ if (!kg_storage->direct ||
+ (kg_storage->direct &&
+ kg_storage->profile_select.direct_relative_profile_id ==
+ fif->base_profile_id))
+ return true;
- return current_port;
+ return false;
}
-static int dpaa_port_fmc_scheme_parse(struct fman_if *fif,
- const struct fmc_model_t *fmc,
- int apply_idx,
- uint16_t *rxq_idx, int max_nb_rxq,
- uint32_t *fqids, int8_t *vspids)
+static void
+dpaa_fmc_remove_fq_from_allocated(uint32_t *fqids,
+ uint16_t *rxq_idx, uint32_t rm_fqid)
{
- int idx = fmc->apply_order[apply_idx].index;
uint32_t i;
- if (!fmc->scheme[idx].override_storage_profile &&
- fif->is_shared_mac) {
- DPAA_PMD_WARN("No VSP assigned to scheme %d for sharemac %d!",
- idx, fif->mac_idx);
- DPAA_PMD_WARN("Risk to receive pkts from skb pool to CRASH!");
+ for (i = 0; i < (*rxq_idx); i++) {
+ if (fqids[i] != rm_fqid)
+ continue;
+ DPAA_PMD_WARN("Remove fq(0x%08x) allocated.",
+ rm_fqid);
+ if ((*rxq_idx) > (i + 1)) {
+ memmove(&fqids[i], &fqids[i + 1],
+ ((*rxq_idx) - (i + 1)) * sizeof(uint32_t));
+ }
+ (*rxq_idx)--;
+ break;
}
+}
- if (e_IOC_FM_PCD_DONE ==
- fmc->scheme[idx].next_engine) {
- for (i = 0; i < fmc->scheme[idx]
- .key_ext_and_hash.hash_dist_num_of_fqids; i++) {
- uint32_t fqid = fmc->scheme[idx].base_fqid + i;
- int k, found = 0;
-
- if (fqid == fif->fqid_rx_def ||
- (fqid >= fif->fqid_rx_pcd &&
- fqid < (fif->fqid_rx_pcd +
- fif->fqid_rx_pcd_count))) {
- if (fif->is_shared_mac &&
- fmc->scheme[idx].override_storage_profile &&
- fmc->scheme[idx].storage_profile.direct &&
- fmc->scheme[idx].storage_profile
- .profile_select.direct_relative_profile_id !=
- fif->base_profile_id) {
- DPAA_PMD_ERR("Def RXQ must be associated with def VSP on sharemac!");
-
- return -1;
- }
- continue;
+static int
+dpaa_port_fmc_scheme_parse(struct fman_if *fif,
+ const struct fmc_model_t *fmc,
+ int apply_idx,
+ uint16_t *rxq_idx, int max_nb_rxq,
+ uint32_t *fqids, int8_t *vspids)
+{
+ int scheme_idx = fmc->apply_order[apply_idx].index;
+ int k, found = 0;
+ uint32_t i, num_rxq, fqid, rxq_idx_start = *rxq_idx;
+ const struct fm_pcd_kg_scheme_params_t *scheme;
+ const struct ioc_fm_pcd_kg_key_extract_and_hash_params_t *params;
+ const struct ioc_fm_pcd_kg_storage_profile_t *kg_storage;
+ uint8_t vsp_id;
+
+ scheme = &fmc->scheme[scheme_idx];
+ params = &scheme->key_ext_and_hash;
+ num_rxq = params->hash_dist_num_of_fqids;
+ kg_storage = &scheme->storage_profile;
+
+ if (scheme->override_storage_profile && kg_storage->direct)
+ vsp_id = kg_storage->profile_select.direct_relative_profile_id;
+ else
+ vsp_id = fif->base_profile_id;
+
+ if (dpaa_kg_storage_is_in_kernel(fif, kg_storage)) {
+ DPAA_PMD_WARN("Scheme[%d]'s VSP is in kernel",
+ scheme_idx);
+ /* The FQ may be allocated from previous CC or scheme,
+ * find and remove it.
+ */
+ for (i = 0; i < num_rxq; i++) {
+ fqid = scheme->base_fqid + i;
+ DPAA_PMD_WARN("Removed fqid(0x%08x) of Scheme[%d]",
+ fqid, scheme_idx);
+ dpaa_fmc_remove_fq_from_allocated(fqids,
+ rxq_idx, fqid);
+ if (!dpaa_fq_is_in_kernel(fqid, fif)) {
+ char reason_msg[128];
+ char result_msg[128];
+
+ sprintf(reason_msg,
+ "NOT handled in kernel");
+ sprintf(result_msg,
+ "will DRAIN kernel pool!");
+ DPAA_PMD_WARN("Traffic to FQ(%08x)(%s) %s",
+ fqid, reason_msg, result_msg);
}
+ }
- if (fif->is_shared_mac &&
- !fmc->scheme[idx].override_storage_profile) {
- DPAA_PMD_ERR("RXQ to DPDK must be associated with VSP on sharemac!");
- return -1;
- }
+ return 0;
+ }
- if (fif->is_shared_mac &&
- fmc->scheme[idx].override_storage_profile &&
- fmc->scheme[idx].storage_profile.direct &&
- fmc->scheme[idx].storage_profile
- .profile_select.direct_relative_profile_id ==
- fif->base_profile_id) {
- DPAA_PMD_ERR("RXQ can't be associated with default VSP on sharemac!");
+ if (e_IOC_FM_PCD_DONE != scheme->next_engine) {
+ /* Do nothing.*/
+ DPAA_PMD_DEBUG("Will parse scheme[%d]'s next engine(%d)",
+ scheme_idx, scheme->next_engine);
+ return 0;
+ }
- return -1;
- }
+ for (i = 0; i < num_rxq; i++) {
+ fqid = scheme->base_fqid + i;
+ found = 0;
- if ((*rxq_idx) >= max_nb_rxq) {
- DPAA_PMD_DEBUG("Too many queues in FMC policy"
- "%d overflow %d",
- (*rxq_idx), max_nb_rxq);
+ if (dpaa_fq_is_in_kernel(fqid, fif)) {
+ DPAA_PMD_WARN("FQ(0x%08x) is handled in kernel.",
+ fqid);
+ /* The FQ may be allocated from previous CC or scheme,
+ * remove it.
+ */
+ dpaa_fmc_remove_fq_from_allocated(fqids,
+ rxq_idx, fqid);
+ continue;
+ }
- continue;
- }
+ if ((*rxq_idx) >= max_nb_rxq) {
+ DPAA_PMD_WARN("Too many queues(%d) >= MAX number(%d)",
+ (*rxq_idx), max_nb_rxq);
- for (k = 0; k < (*rxq_idx); k++) {
- if (fqids[k] == fqid) {
- found = 1;
- break;
- }
- }
+ break;
+ }
- if (found)
- continue;
- fqids[(*rxq_idx)] = fqid;
- if (fmc->scheme[idx].override_storage_profile) {
- if (fmc->scheme[idx].storage_profile.direct) {
- vspids[(*rxq_idx)] =
- fmc->scheme[idx].storage_profile
- .profile_select
- .direct_relative_profile_id;
- } else {
- vspids[(*rxq_idx)] = -1;
- }
- } else {
- vspids[(*rxq_idx)] = -1;
+ for (k = 0; k < (*rxq_idx); k++) {
+ if (fqids[k] == fqid) {
+ found = 1;
+ break;
}
- (*rxq_idx)++;
}
+
+ if (found)
+ continue;
+ fqids[(*rxq_idx)] = fqid;
+ vspids[(*rxq_idx)] = vsp_id;
+
+ (*rxq_idx)++;
}
- return 0;
+ return (*rxq_idx) - rxq_idx_start;
}
-static int dpaa_port_fmc_ccnode_parse(struct fman_if *fif,
- const struct fmc_model_t *fmc_model,
- int apply_idx,
- uint16_t *rxq_idx, int max_nb_rxq,
- uint32_t *fqids, int8_t *vspids)
+static int
+dpaa_port_fmc_ccnode_parse(struct fman_if *fif,
+ const struct fmc_model_t *fmc,
+ int apply_idx,
+ uint16_t *rxq_idx, int max_nb_rxq,
+ uint32_t *fqids, int8_t *vspids)
{
uint16_t j, k, found = 0;
const struct ioc_keys_params_t *keys_params;
- uint32_t fqid, cc_idx = fmc_model->apply_order[apply_idx].index;
-
- keys_params = &fmc_model->ccnode[cc_idx].keys_params;
+ const struct ioc_fm_pcd_cc_next_engine_params_t *params;
+ uint32_t fqid, cc_idx = fmc->apply_order[apply_idx].index;
+ uint32_t rxq_idx_start = *rxq_idx;
+ uint8_t vsp_id;
- if ((*rxq_idx) >= max_nb_rxq) {
- DPAA_PMD_WARN("Too many queues in FMC policy %d overflow %d",
- (*rxq_idx), max_nb_rxq);
-
- return 0;
- }
+ keys_params = &fmc->ccnode[cc_idx].keys_params;
for (j = 0; j < keys_params->num_of_keys; ++j) {
+ if ((*rxq_idx) >= max_nb_rxq) {
+ DPAA_PMD_WARN("Too many queues(%d) >= MAX number(%d)",
+ (*rxq_idx), max_nb_rxq);
+
+ break;
+ }
found = 0;
- fqid = keys_params->key_params[j].cc_next_engine_params
- .params.enqueue_params.new_fqid;
+ params = &keys_params->key_params[j].cc_next_engine_params;
/* We read DPDK queue from last classification rule present in
* FMC policy file. Hence, this check is required here.
@@ -344,15 +463,30 @@ static int dpaa_port_fmc_ccnode_parse(struct fman_if *fif,
* have userspace queue so that it can be used by DPDK
* application.
*/
- if (keys_params->key_params[j].cc_next_engine_params
- .next_engine != e_IOC_FM_PCD_DONE) {
- DPAA_PMD_WARN("FMC CC next engine not support");
+ if (params->next_engine != e_IOC_FM_PCD_DONE) {
+ DPAA_PMD_WARN("CC next engine(%d) not support",
+ params->next_engine);
continue;
}
- if (keys_params->key_params[j].cc_next_engine_params
- .params.enqueue_params.action !=
+ if (params->params.enqueue_params.action !=
e_IOC_FM_PCD_ENQ_FRAME)
continue;
+
+ fqid = params->params.enqueue_params.new_fqid;
+ vsp_id = dpaa_enqueue_vsp_id(fif,
+ &params->params.enqueue_params);
+ if (dpaa_fq_is_in_kernel(fqid, fif) ||
+ dpaa_vsp_id_is_in_kernel(vsp_id, fif)) {
+ DPAA_PMD_DEBUG("FQ(0x%08x)/VSP(%d) is in kernel.",
+ fqid, vsp_id);
+ /* The FQ may be allocated from previous CC or scheme,
+ * remove it.
+ */
+ dpaa_fmc_remove_fq_from_allocated(fqids,
+ rxq_idx, fqid);
+ continue;
+ }
+
for (k = 0; k < (*rxq_idx); k++) {
if (fqids[k] == fqid) {
found = 1;
@@ -362,38 +496,22 @@ static int dpaa_port_fmc_ccnode_parse(struct fman_if *fif,
if (found)
continue;
- if ((*rxq_idx) >= max_nb_rxq) {
- DPAA_PMD_WARN("Too many queues in FMC policy %d overflow %d",
- (*rxq_idx), max_nb_rxq);
-
- return 0;
- }
-
fqids[(*rxq_idx)] = fqid;
- vspids[(*rxq_idx)] =
- keys_params->key_params[j].cc_next_engine_params
- .params.enqueue_params
- .new_relative_storage_profile_id;
-
- if (vspids[(*rxq_idx)] == fif->base_profile_id &&
- fif->is_shared_mac) {
- DPAA_PMD_ERR("VSP %d can NOT be used on DPDK.",
- vspids[(*rxq_idx)]);
- DPAA_PMD_ERR("It is associated to skb pool of shared interface.");
- return -1;
- }
+ vspids[(*rxq_idx)] = vsp_id;
+
(*rxq_idx)++;
}
- return 0;
+ return (*rxq_idx) - rxq_idx_start;
}
-int dpaa_port_fmc_init(struct fman_if *fif,
- uint32_t *fqids, int8_t *vspids, int max_nb_rxq)
+int
+dpaa_port_fmc_init(struct fman_if *fif,
+ uint32_t *fqids, int8_t *vspids, int max_nb_rxq)
{
int current_port = -1, ret;
uint16_t rxq_idx = 0;
- const struct fmc_model_t *fmc_model;
+ const struct fmc_model_t *fmc;
uint32_t i;
if (!g_fmc_model) {
@@ -402,14 +520,14 @@ int dpaa_port_fmc_init(struct fman_if *fif,
if (!fp) {
DPAA_PMD_ERR("%s not exists", FMC_FILE);
- return -1;
+ return -ENOENT;
}
g_fmc_model = rte_malloc(NULL, sizeof(struct fmc_model_t), 64);
if (!g_fmc_model) {
DPAA_PMD_ERR("FMC memory alloc failed");
fclose(fp);
- return -1;
+ return -ENOBUFS;
}
bytes_read = fread(g_fmc_model,
@@ -419,25 +537,28 @@ int dpaa_port_fmc_init(struct fman_if *fif,
fclose(fp);
rte_free(g_fmc_model);
g_fmc_model = NULL;
- return -1;
+ return -EIO;
}
fclose(fp);
}
- fmc_model = g_fmc_model;
+ fmc = g_fmc_model;
- if (fmc_model->format_version != FMC_OUTPUT_FORMAT_VER)
- return -1;
+ if (fmc->format_version != FMC_OUTPUT_FORMAT_VER) {
+ DPAA_PMD_ERR("FMC version(0x%08x) != Supported ver(0x%08x)",
+ fmc->format_version, FMC_OUTPUT_FORMAT_VER);
+ return -EINVAL;
+ }
- for (i = 0; i < fmc_model->apply_order_count; i++) {
- switch (fmc_model->apply_order[i].type) {
+ for (i = 0; i < fmc->apply_order_count; i++) {
+ switch (fmc->apply_order[i].type) {
case fmcengine_start:
break;
case fmcengine_end:
break;
case fmcport_start:
current_port = dpaa_port_fmc_port_parse(fif,
- fmc_model, i);
+ fmc, i);
break;
case fmcport_end:
break;
@@ -445,24 +566,24 @@ int dpaa_port_fmc_init(struct fman_if *fif,
if (current_port < 0)
break;
- ret = dpaa_port_fmc_scheme_parse(fif, fmc_model,
- i, &rxq_idx,
- max_nb_rxq,
- fqids, vspids);
- if (ret)
- return ret;
+ ret = dpaa_port_fmc_scheme_parse(fif, fmc,
+ i, &rxq_idx, max_nb_rxq, fqids, vspids);
+ DPAA_PMD_INFO("%s %d RXQ(s) from scheme[%d]",
+ ret >= 0 ? "Alloc" : "Remove",
+ ret >= 0 ? ret : -ret,
+ fmc->apply_order[i].index);
break;
case fmcccnode:
if (current_port < 0)
break;
- ret = dpaa_port_fmc_ccnode_parse(fif, fmc_model,
- i, &rxq_idx,
- max_nb_rxq, fqids,
- vspids);
- if (ret)
- return ret;
+ ret = dpaa_port_fmc_ccnode_parse(fif, fmc,
+ i, &rxq_idx, max_nb_rxq, fqids, vspids);
+ DPAA_PMD_INFO("%s %d RXQ(s) from cc[%d]",
+ ret >= 0 ? "Alloc" : "Remove",
+ ret >= 0 ? ret : -ret,
+ fmc->apply_order[i].index);
break;
case fmchtnode:
diff --git a/drivers/net/dpaa/dpaa_rxtx.c b/drivers/net/dpaa/dpaa_rxtx.c
index 3bd35c7a0e..d1338d1654 100644
--- a/drivers/net/dpaa/dpaa_rxtx.c
+++ b/drivers/net/dpaa/dpaa_rxtx.c
@@ -693,13 +693,26 @@ dpaa_rx_cb_atomic(void *event,
}
#ifdef RTE_LIBRTE_DPAA_DEBUG_DRIVER
-static inline void dpaa_eth_err_queue(struct dpaa_if *dpaa_intf)
+static inline void
+dpaa_eth_err_queue(struct qman_fq *fq)
{
struct rte_mbuf *mbuf;
struct qman_fq *debug_fq;
int ret, i;
struct qm_dqrr_entry *dq;
struct qm_fd *fd;
+ struct dpaa_if *dpaa_intf;
+
+ dpaa_intf = fq->dpaa_intf;
+ if (fq != &dpaa_intf->rx_queues[0]) {
+ /* Associate error queues to the first RXQ.*/
+ return;
+ }
+
+ if (dpaa_intf->cfg->fman_if->is_shared_mac) {
+ /* Error queues of shared MAC are handled in kernel. */
+ return;
+ }
if (unlikely(!RTE_PER_LCORE(dpaa_io))) {
ret = rte_dpaa_portal_init((void *)0);
@@ -708,7 +721,7 @@ static inline void dpaa_eth_err_queue(struct dpaa_if *dpaa_intf)
return;
}
}
- for (i = 0; i <= DPAA_DEBUG_FQ_TX_ERROR; i++) {
+ for (i = 0; i < DPAA_DEBUG_FQ_MAX_NUM; i++) {
debug_fq = &dpaa_intf->debug_queues[i];
ret = qman_set_vdq(debug_fq, 4, QM_VDQCR_EXACT);
if (ret)
@@ -751,8 +764,7 @@ uint16_t dpaa_eth_queue_rx(void *q,
rte_dpaa_bpid_info = fq->bp_array;
#ifdef RTE_LIBRTE_DPAA_DEBUG_DRIVER
- if (fq->fqid == ((struct dpaa_if *)fq->dpaa_intf)->rx_queues[0].fqid)
- dpaa_eth_err_queue((struct dpaa_if *)fq->dpaa_intf);
+ dpaa_eth_err_queue(fq);
#endif
if (likely(fq->is_static))
--
2.25.1
* [PATCH v2 09/18] net/dpaa: support Rx/Tx timestamp read
2024-08-23 7:32 ` [PATCH v2 00/18] " Hemant Agrawal
` (7 preceding siblings ...)
2024-08-23 7:32 ` [PATCH v2 08/18] net/dpaa: share MAC FMC scheme and CC parse Hemant Agrawal
@ 2024-08-23 7:32 ` Hemant Agrawal
2024-08-23 7:32 ` [PATCH v2 10/18] net/dpaa: support IEEE 1588 PTP Hemant Agrawal
` (11 subsequent siblings)
20 siblings, 0 replies; 129+ messages in thread
From: Hemant Agrawal @ 2024-08-23 7:32 UTC (permalink / raw)
To: dev; +Cc: ferruh.yigit, Vanshika Shukla
From: Vanshika Shukla <vanshika.shukla@nxp.com>
This patch implements Rx/Tx timestamp read operations
for the DPAA1 platform.
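As an illustration only (not part of the patch), an application would normally
reach these new dev_ops through the generic ethdev timesync calls, assuming
IEEE 1588 timestamping is enabled in the PMD; the helper name, port_id handling
and printouts below are assumptions:

#include <stdio.h>
#include <time.h>
#include <rte_ethdev.h>

static void show_dpaa_timestamps(uint16_t port_id)
{
	struct timespec rx_ts, tx_ts;

	/* Backed by dpaa_timesync_read_rx_timestamp(): timestamp captured
	 * for the last frame received on this device.
	 */
	if (rte_eth_timesync_read_rx_timestamp(port_id, &rx_ts, 0) == 0)
		printf("Rx: %ld.%09ld\n", (long)rx_ts.tv_sec, rx_ts.tv_nsec);

	/* Backed by dpaa_timesync_read_tx_timestamp(): polls the Tx
	 * confirmation queue until the last Tx timestamp is available.
	 */
	if (rte_eth_timesync_read_tx_timestamp(port_id, &tx_ts) == 0)
		printf("Tx: %ld.%09ld\n", (long)tx_ts.tv_sec, tx_ts.tv_nsec);
}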
Signed-off-by: Vanshika Shukla <vanshika.shukla@nxp.com>
---
doc/guides/nics/features/dpaa.ini | 1 +
drivers/bus/dpaa/base/fman/fman.c | 21 +++++++-
drivers/bus/dpaa/base/fman/fman_hw.c | 6 ++-
drivers/bus/dpaa/include/fman.h | 18 ++++++-
drivers/net/dpaa/dpaa_ethdev.c | 2 +
drivers/net/dpaa/dpaa_ethdev.h | 17 +++++++
drivers/net/dpaa/dpaa_ptp.c | 43 +++++++++++++++++
drivers/net/dpaa/dpaa_rxtx.c | 71 ++++++++++++++++++++++++----
drivers/net/dpaa/dpaa_rxtx.h | 4 +-
drivers/net/dpaa/meson.build | 1 +
10 files changed, 169 insertions(+), 15 deletions(-)
create mode 100644 drivers/net/dpaa/dpaa_ptp.c
diff --git a/doc/guides/nics/features/dpaa.ini b/doc/guides/nics/features/dpaa.ini
index b136ed191a..4196dd800c 100644
--- a/doc/guides/nics/features/dpaa.ini
+++ b/doc/guides/nics/features/dpaa.ini
@@ -19,6 +19,7 @@ Flow control = Y
L3 checksum offload = Y
L4 checksum offload = Y
Packet type parsing = Y
+Timestamp offload = Y
Basic stats = Y
Extended stats = Y
FW version = Y
diff --git a/drivers/bus/dpaa/base/fman/fman.c b/drivers/bus/dpaa/base/fman/fman.c
index bf41a3ed96..89786636d9 100644
--- a/drivers/bus/dpaa/base/fman/fman.c
+++ b/drivers/bus/dpaa/base/fman/fman.c
@@ -1,7 +1,7 @@
/* SPDX-License-Identifier: (BSD-3-Clause OR GPL-2.0)
*
* Copyright 2010-2016 Freescale Semiconductor Inc.
- * Copyright 2017-2020 NXP
+ * Copyright 2017-2024 NXP
*
*/
@@ -520,6 +520,25 @@ fman_if_init(const struct device_node *dpa_node)
goto err;
}
+ regs_addr = of_get_address(tx_node, 0, &__if->regs_size, NULL);
+ if (!regs_addr) {
+ FMAN_ERR(-EINVAL, "of_get_address(%s)\n", mname);
+ goto err;
+ }
+ phys_addr = of_translate_address(tx_node, regs_addr);
+ if (!phys_addr) {
+ FMAN_ERR(-EINVAL, "of_translate_address(%s, %p)\n",
+ mname, regs_addr);
+ goto err;
+ }
+ __if->tx_bmi_map = mmap(NULL, __if->regs_size,
+ PROT_READ | PROT_WRITE, MAP_SHARED,
+ fman_ccsr_map_fd, phys_addr);
+ if (__if->tx_bmi_map == MAP_FAILED) {
+ FMAN_ERR(-errno, "mmap(0x%"PRIx64")\n", phys_addr);
+ goto err;
+ }
+
/* No channel ID for MAC-less */
assert(lenp == sizeof(*tx_channel_id));
na = of_n_addr_cells(mac_node);
diff --git a/drivers/bus/dpaa/base/fman/fman_hw.c b/drivers/bus/dpaa/base/fman/fman_hw.c
index 124c69edb4..4fc41c1ae9 100644
--- a/drivers/bus/dpaa/base/fman/fman_hw.c
+++ b/drivers/bus/dpaa/base/fman/fman_hw.c
@@ -1,6 +1,6 @@
/* SPDX-License-Identifier: BSD-3-Clause
*
- * Copyright 2017,2020 NXP
+ * Copyright 2017,2020,2022 NXP
*
*/
@@ -565,6 +565,10 @@ fman_if_set_ic_params(struct fman_if *fm_if,
&((struct rx_bmi_regs *)__if->bmi_map)->fmbm_ricp;
out_be32(fmbm_ricp, val);
+ unsigned int *fmbm_ticp =
+ &((struct tx_bmi_regs *)__if->tx_bmi_map)->fmbm_ticp;
+ out_be32(fmbm_ticp, val);
+
return 0;
}
diff --git a/drivers/bus/dpaa/include/fman.h b/drivers/bus/dpaa/include/fman.h
index 3642b43be7..857eef3d2f 100644
--- a/drivers/bus/dpaa/include/fman.h
+++ b/drivers/bus/dpaa/include/fman.h
@@ -2,7 +2,7 @@
*
* Copyright 2010-2012 Freescale Semiconductor, Inc.
* All rights reserved.
- * Copyright 2019-2021 NXP
+ * Copyright 2019-2022 NXP
*
*/
@@ -292,6 +292,21 @@ struct rx_bmi_regs {
uint32_t fmbm_rdbg; /**< Rx Debug-*/
};
+struct tx_bmi_regs {
+ uint32_t fmbm_tcfg; /**< Tx Configuration*/
+ uint32_t fmbm_tst; /**< Tx Status*/
+ uint32_t fmbm_tda; /**< Tx DMA attributes*/
+ uint32_t fmbm_tfp; /**< Tx FIFO Parameters*/
+ uint32_t fmbm_tfed; /**< Tx Frame End Data*/
+ uint32_t fmbm_ticp; /**< Tx Internal Context Parameters*/
+ uint32_t fmbm_tfdne; /**< Tx Frame Dequeue Next Engine*/
+ uint32_t fmbm_tfca; /**< Tx Frame Attributes*/
+ uint32_t fmbm_tcfqid; /**< Tx Confirmation Frame Queue ID*/
+ uint32_t fmbm_tefqid; /**< Tx Error Frame Queue ID*/
+ uint32_t fmbm_tfene; /**< Tx Frame Enqueue Next Engine*/
+ uint32_t fmbm_trlmts; /**< Tx Rate Limiter Scale*/
+ uint32_t fmbm_trlmt; /**< Tx Rate Limiter*/
+};
struct fman_port_qmi_regs {
uint32_t fmqm_pnc; /**< PortID n Configuration Register */
uint32_t fmqm_pns; /**< PortID n Status Register */
@@ -380,6 +395,7 @@ struct __fman_if {
uint64_t regs_size;
void *ccsr_map;
void *bmi_map;
+ void *tx_bmi_map;
void *qmi_map;
struct list_head node;
};
diff --git a/drivers/net/dpaa/dpaa_ethdev.c b/drivers/net/dpaa/dpaa_ethdev.c
index bf14d73433..682cb1c77e 100644
--- a/drivers/net/dpaa/dpaa_ethdev.c
+++ b/drivers/net/dpaa/dpaa_ethdev.c
@@ -1673,6 +1673,8 @@ static struct eth_dev_ops dpaa_devops = {
.rx_queue_intr_disable = dpaa_dev_queue_intr_disable,
.rss_hash_update = dpaa_dev_rss_hash_update,
.rss_hash_conf_get = dpaa_dev_rss_hash_conf_get,
+ .timesync_read_rx_timestamp = dpaa_timesync_read_rx_timestamp,
+ .timesync_read_tx_timestamp = dpaa_timesync_read_tx_timestamp,
};
static bool
diff --git a/drivers/net/dpaa/dpaa_ethdev.h b/drivers/net/dpaa/dpaa_ethdev.h
index 0a1ceb376a..bbdb0936c0 100644
--- a/drivers/net/dpaa/dpaa_ethdev.h
+++ b/drivers/net/dpaa/dpaa_ethdev.h
@@ -151,6 +151,14 @@ struct dpaa_if {
void *netenv_handle;
void *scheme_handle[2];
uint32_t scheme_count;
+ /*stores timestamp of last received packet on dev*/
+ uint64_t rx_timestamp;
+ /*stores timestamp of last received tx confirmation packet on dev*/
+ uint64_t tx_timestamp;
+ /* stores pointer to next tx_conf queue that should be processed,
+ * it corresponds to last packet transmitted
+ */
+ struct qman_fq *next_tx_conf_queue;
void *vsp_handle[DPAA_VSP_PROFILE_MAX_NUM];
uint32_t vsp_bpid[DPAA_VSP_PROFILE_MAX_NUM];
@@ -233,6 +241,15 @@ struct dpaa_if_rx_bmi_stats {
uint32_t fmbm_rbdc; /**< Rx Buffers Deallocate Counter*/
};
+int
+dpaa_timesync_read_tx_timestamp(struct rte_eth_dev *dev,
+ struct timespec *timestamp);
+
+int
+dpaa_timesync_read_rx_timestamp(struct rte_eth_dev *dev,
+ struct timespec *timestamp,
+ uint32_t flags __rte_unused);
+
/* PMD related logs */
extern int dpaa_logtype_pmd;
#define RTE_LOGTYPE_DPAA_PMD dpaa_logtype_pmd
diff --git a/drivers/net/dpaa/dpaa_ptp.c b/drivers/net/dpaa/dpaa_ptp.c
new file mode 100644
index 0000000000..df6df1ddf2
--- /dev/null
+++ b/drivers/net/dpaa/dpaa_ptp.c
@@ -0,0 +1,43 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright 2022-2024 NXP
+ */
+
+/* System headers */
+#include <stdio.h>
+#include <inttypes.h>
+#include <unistd.h>
+
+#include <rte_ethdev.h>
+#include <rte_log.h>
+#include <rte_eth_ctrl.h>
+#include <rte_malloc.h>
+#include <rte_time.h>
+
+#include <dpaa_ethdev.h>
+#include <dpaa_rxtx.h>
+
+int dpaa_timesync_read_tx_timestamp(struct rte_eth_dev *dev,
+ struct timespec *timestamp)
+{
+ struct dpaa_if *dpaa_intf = dev->data->dev_private;
+
+ if (dpaa_intf->next_tx_conf_queue) {
+ while (!dpaa_intf->tx_timestamp)
+ dpaa_eth_tx_conf(dpaa_intf->next_tx_conf_queue);
+ } else {
+ return -1;
+ }
+ *timestamp = rte_ns_to_timespec(dpaa_intf->tx_timestamp);
+
+ return 0;
+}
+
+int dpaa_timesync_read_rx_timestamp(struct rte_eth_dev *dev,
+ struct timespec *timestamp,
+ uint32_t flags __rte_unused)
+{
+ struct dpaa_if *dpaa_intf = dev->data->dev_private;
+ *timestamp = rte_ns_to_timespec(dpaa_intf->rx_timestamp);
+ return 0;
+}
+
diff --git a/drivers/net/dpaa/dpaa_rxtx.c b/drivers/net/dpaa/dpaa_rxtx.c
index d1338d1654..e3b4bb14ab 100644
--- a/drivers/net/dpaa/dpaa_rxtx.c
+++ b/drivers/net/dpaa/dpaa_rxtx.c
@@ -1,7 +1,7 @@
/* SPDX-License-Identifier: BSD-3-Clause
*
* Copyright 2016 Freescale Semiconductor, Inc. All rights reserved.
- * Copyright 2017,2019-2021 NXP
+ * Copyright 2017,2019-2024 NXP
*
*/
@@ -49,7 +49,6 @@
#define DPAA_MBUF_TO_CONTIG_FD(_mbuf, _fd, _bpid) \
do { \
- (_fd)->cmd = 0; \
(_fd)->opaque_addr = 0; \
(_fd)->opaque = QM_FD_CONTIG << DPAA_FD_FORMAT_SHIFT; \
(_fd)->opaque |= ((_mbuf)->data_off) << DPAA_FD_OFFSET_SHIFT; \
@@ -122,6 +121,8 @@ static inline void dpaa_eth_packet_info(struct rte_mbuf *m, void *fd_virt_addr)
{
struct annotations_t *annot = GET_ANNOTATIONS(fd_virt_addr);
uint64_t prs = *((uintptr_t *)(&annot->parse)) & DPAA_PARSE_MASK;
+ struct rte_ether_hdr *eth_hdr =
+ rte_pktmbuf_mtod(m, struct rte_ether_hdr *);
DPAA_DP_LOG(DEBUG, " Parsing mbuf: %p with annotations: %p", m, annot);
@@ -241,6 +242,11 @@ static inline void dpaa_eth_packet_info(struct rte_mbuf *m, void *fd_virt_addr)
if (prs & DPAA_PARSE_VLAN_MASK)
m->ol_flags |= RTE_MBUF_F_RX_VLAN;
/* Packet received without stripping the vlan */
+
+ if (eth_hdr->ether_type == htons(RTE_ETHER_TYPE_1588)) {
+ m->ol_flags |= RTE_MBUF_F_RX_IEEE1588_PTP;
+ m->ol_flags |= RTE_MBUF_F_RX_IEEE1588_TMST;
+ }
}
static inline void dpaa_checksum(struct rte_mbuf *mbuf)
@@ -317,7 +323,7 @@ static inline void dpaa_checksum_offload(struct rte_mbuf *mbuf,
prs->ip_off[0] = mbuf->l2_len;
prs->l4_off = mbuf->l3_len + mbuf->l2_len;
/* Enable L3 (and L4, if TCP or UDP) HW checksum*/
- fd->cmd = DPAA_FD_CMD_RPD | DPAA_FD_CMD_DTC;
+ fd->cmd |= DPAA_FD_CMD_RPD | DPAA_FD_CMD_DTC;
}
static inline void
@@ -513,6 +519,7 @@ dpaa_rx_cb_no_prefetch(struct qman_fq **fq, struct qm_dqrr_entry **dqrr,
uint16_t offset, i;
uint32_t length;
uint8_t format;
+ struct annotations_t *annot;
bp_info = DPAA_BPID_TO_POOL_INFO(dqrr[0]->fd.bpid);
ptr = rte_dpaa_mem_ptov(qm_fd_addr(&dqrr[0]->fd));
@@ -554,6 +561,11 @@ dpaa_rx_cb_no_prefetch(struct qman_fq **fq, struct qm_dqrr_entry **dqrr,
rte_mbuf_refcnt_set(mbuf, 1);
dpaa_eth_packet_info(mbuf, mbuf->buf_addr);
dpaa_display_frame_info(fd, fq[0]->fqid, true);
+ if (dpaa_ieee_1588) {
+ annot = GET_ANNOTATIONS(mbuf->buf_addr);
+ dpaa_intf->rx_timestamp =
+ rte_cpu_to_be_64(annot->timestamp);
+ }
}
}
@@ -567,6 +579,7 @@ dpaa_rx_cb(struct qman_fq **fq, struct qm_dqrr_entry **dqrr,
uint16_t offset, i;
uint32_t length;
uint8_t format;
+ struct annotations_t *annot;
for (i = 0; i < num_bufs; i++) {
fd = &dqrr[i]->fd;
@@ -594,6 +607,11 @@ dpaa_rx_cb(struct qman_fq **fq, struct qm_dqrr_entry **dqrr,
rte_mbuf_refcnt_set(mbuf, 1);
dpaa_eth_packet_info(mbuf, mbuf->buf_addr);
dpaa_display_frame_info(fd, fq[0]->fqid, true);
+ if (dpaa_ieee_1588) {
+ annot = GET_ANNOTATIONS(mbuf->buf_addr);
+ dpaa_intf->rx_timestamp =
+ rte_cpu_to_be_64(annot->timestamp);
+ }
}
}
@@ -758,6 +776,8 @@ uint16_t dpaa_eth_queue_rx(void *q,
uint32_t num_rx = 0, ifid = ((struct dpaa_if *)fq->dpaa_intf)->ifid;
int num_rx_bufs, ret;
uint32_t vdqcr_flags = 0;
+ struct annotations_t *annot;
+ struct dpaa_if *dpaa_intf = fq->dpaa_intf;
if (unlikely(rte_dpaa_bpid_info == NULL &&
rte_eal_process_type() == RTE_PROC_SECONDARY))
@@ -800,6 +820,10 @@ uint16_t dpaa_eth_queue_rx(void *q,
continue;
bufs[num_rx++] = dpaa_eth_fd_to_mbuf(&dq->fd, ifid);
dpaa_display_frame_info(&dq->fd, fq->fqid, true);
+ if (dpaa_ieee_1588) {
+ annot = GET_ANNOTATIONS(bufs[num_rx - 1]->buf_addr);
+ dpaa_intf->rx_timestamp = rte_cpu_to_be_64(annot->timestamp);
+ }
qman_dqrr_consume(fq, dq);
} while (fq->flags & QMAN_FQ_STATE_VDQCR);
@@ -1095,6 +1119,7 @@ dpaa_eth_queue_tx(void *q, struct rte_mbuf **bufs, uint16_t nb_bufs)
struct dpaa_sw_buf_free buf_to_free[DPAA_MAX_SGS * DPAA_MAX_DEQUEUE_NUM_FRAMES];
uint32_t free_count = 0;
struct qman_fq *fq = q;
+ struct dpaa_if *dpaa_intf = fq->dpaa_intf;
struct qman_fq *fq_txconf = fq->tx_conf_queue;
if (unlikely(!DPAA_PER_LCORE_PORTAL)) {
@@ -1107,6 +1132,12 @@ dpaa_eth_queue_tx(void *q, struct rte_mbuf **bufs, uint16_t nb_bufs)
DPAA_DP_LOG(DEBUG, "Transmitting %d buffers on queue: %p", nb_bufs, q);
+ if (dpaa_ieee_1588) {
+ dpaa_intf->next_tx_conf_queue = fq_txconf;
+ dpaa_eth_tx_conf(fq_txconf);
+ dpaa_intf->tx_timestamp = 0;
+ }
+
while (nb_bufs) {
frames_to_send = (nb_bufs > DPAA_TX_BURST_SIZE) ?
DPAA_TX_BURST_SIZE : nb_bufs;
@@ -1119,6 +1150,14 @@ dpaa_eth_queue_tx(void *q, struct rte_mbuf **bufs, uint16_t nb_bufs)
if (dpaa_svr_family == SVR_LS1043A_FAMILY &&
(mbuf->data_off & 0x7F) != 0x0)
realloc_mbuf = 1;
+
+ fd_arr[loop].cmd = 0;
+ if (dpaa_ieee_1588) {
+ fd_arr[loop].cmd |= DPAA_FD_CMD_FCO |
+ qman_fq_fqid(fq_txconf);
+ fd_arr[loop].cmd |= DPAA_FD_CMD_RPD |
+ DPAA_FD_CMD_UPD;
+ }
seqn = *dpaa_seqn(mbuf);
if (seqn != DPAA_INVALID_MBUF_SEQN) {
index = seqn - 1;
@@ -1176,10 +1215,6 @@ dpaa_eth_queue_tx(void *q, struct rte_mbuf **bufs, uint16_t nb_bufs)
mbuf = temp_mbuf;
realloc_mbuf = 0;
}
-
- if (dpaa_ieee_1588)
- fd_arr[loop].cmd |= DPAA_FD_CMD_FCO | qman_fq_fqid(fq_txconf);
-
indirect_buf:
state = tx_on_dpaa_pool(mbuf, bp_info,
&fd_arr[loop],
@@ -1208,9 +1243,6 @@ dpaa_eth_queue_tx(void *q, struct rte_mbuf **bufs, uint16_t nb_bufs)
sent += frames_to_send;
}
- if (dpaa_ieee_1588)
- dpaa_eth_tx_conf(fq_txconf);
-
DPAA_DP_LOG(DEBUG, "Transmitted %d buffers on queue: %p", sent, q);
for (loop = 0; loop < free_count; loop++) {
@@ -1228,6 +1260,12 @@ dpaa_eth_tx_conf(void *q)
struct qm_dqrr_entry *dq;
int num_tx_conf, ret, dq_num;
uint32_t vdqcr_flags = 0;
+ struct dpaa_if *dpaa_intf = fq->dpaa_intf;
+ struct qm_dqrr_entry *dqrr;
+ struct dpaa_bp_info *bp_info;
+ struct rte_mbuf *mbuf;
+ void *ptr;
+ struct annotations_t *annot;
if (unlikely(rte_dpaa_bpid_info == NULL &&
rte_eal_process_type() == RTE_PROC_SECONDARY))
@@ -1252,7 +1290,20 @@ dpaa_eth_tx_conf(void *q)
dq = qman_dequeue(fq);
if (!dq)
continue;
+ dqrr = dq;
dq_num++;
+ bp_info = DPAA_BPID_TO_POOL_INFO(dqrr->fd.bpid);
+ ptr = rte_dpaa_mem_ptov(qm_fd_addr(&dqrr->fd));
+ rte_prefetch0((void *)((uint8_t *)ptr
+ + DEFAULT_RX_ICEOF));
+ mbuf = (struct rte_mbuf *)
+ ((char *)ptr - bp_info->meta_data_size);
+
+ if (mbuf->ol_flags & RTE_MBUF_F_TX_IEEE1588_TMST) {
+ annot = GET_ANNOTATIONS(mbuf->buf_addr);
+ dpaa_intf->tx_timestamp =
+ rte_cpu_to_be_64(annot->timestamp);
+ }
dpaa_display_frame_info(&dq->fd, fq->fqid, true);
qman_dqrr_consume(fq, dq);
dpaa_free_mbuf(&dq->fd);
diff --git a/drivers/net/dpaa/dpaa_rxtx.h b/drivers/net/dpaa/dpaa_rxtx.h
index 042602e087..1048e86d41 100644
--- a/drivers/net/dpaa/dpaa_rxtx.h
+++ b/drivers/net/dpaa/dpaa_rxtx.h
@@ -1,7 +1,7 @@
/* SPDX-License-Identifier: BSD-3-Clause
*
* Copyright 2016 Freescale Semiconductor, Inc. All rights reserved.
- * Copyright 2017,2020-2021 NXP
+ * Copyright 2017,2020-2022 NXP
*
*/
@@ -260,7 +260,7 @@ struct dpaa_eth_parse_results_t {
struct annotations_t {
uint8_t reserved[DEFAULT_RX_ICEOF];
struct dpaa_eth_parse_results_t parse; /**< Pointer to Parsed result*/
- uint64_t reserved1;
+ uint64_t timestamp;
uint64_t hash; /**< Hash Result */
};
diff --git a/drivers/net/dpaa/meson.build b/drivers/net/dpaa/meson.build
index 42e1f8c2e2..239858adda 100644
--- a/drivers/net/dpaa/meson.build
+++ b/drivers/net/dpaa/meson.build
@@ -14,6 +14,7 @@ sources = files(
'dpaa_flow.c',
'dpaa_rxtx.c',
'dpaa_fmc.c',
+ 'dpaa_ptp.c',
)
if cc.has_argument('-Wno-pointer-arith')
--
2.25.1
* [PATCH v2 10/18] net/dpaa: support IEEE 1588 PTP
2024-08-23 7:32 ` [PATCH v2 00/18] " Hemant Agrawal
` (8 preceding siblings ...)
2024-08-23 7:32 ` [PATCH v2 09/18] net/dpaa: support Rx/Tx timestamp read Hemant Agrawal
@ 2024-08-23 7:32 ` Hemant Agrawal
2024-08-23 7:32 ` [PATCH v2 11/18] net/dpaa: implement detailed packet parsing Hemant Agrawal
` (10 subsequent siblings)
20 siblings, 0 replies; 129+ messages in thread
From: Hemant Agrawal @ 2024-08-23 7:32 UTC (permalink / raw)
To: dev; +Cc: ferruh.yigit, Vanshika Shukla
From: Vanshika Shukla <vanshika.shukla@nxp.com>
This patch adds support for the ethdev APIs
to enable/disable and read/write/adjust IEEE 1588
PTP timestamps on the DPAA platform.
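As a usage sketch (the helper name and the 1 us adjustment are only examples),
the new dev_ops are reached through the standard ethdev timesync API:

#include <time.h>
#include <rte_ethdev.h>

static int adjust_fman_rtc(uint16_t port_id)
{
	struct timespec now;
	int ret;

	ret = rte_eth_timesync_enable(port_id);	/* dpaa_timesync_enable() */
	if (ret < 0)
		return ret;

	/* dpaa_timesync_read_time(): reads the FMan RTC counter (tmr_cnt_h/l). */
	ret = rte_eth_timesync_read_time(port_id, &now);
	if (ret < 0)
		return ret;

	/* dpaa_timesync_adjust_time(): shift the clock forward by 1 us. */
	return rte_eth_timesync_adjust_time(port_id, 1000);
}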
Signed-off-by: Vanshika Shukla <vanshika.shukla@nxp.com>
---
doc/guides/nics/dpaa.rst | 1 +
doc/guides/nics/features/dpaa.ini | 2 +
drivers/bus/dpaa/base/fman/fman.c | 15 ++++++
drivers/bus/dpaa/include/fman.h | 45 +++++++++++++++++
drivers/net/dpaa/dpaa_ethdev.c | 5 ++
drivers/net/dpaa/dpaa_ethdev.h | 16 +++++++
drivers/net/dpaa/dpaa_ptp.c | 80 ++++++++++++++++++++++++++++++-
7 files changed, 162 insertions(+), 2 deletions(-)
diff --git a/doc/guides/nics/dpaa.rst b/doc/guides/nics/dpaa.rst
index acf4daab02..ea86e6146c 100644
--- a/doc/guides/nics/dpaa.rst
+++ b/doc/guides/nics/dpaa.rst
@@ -148,6 +148,7 @@ Features
- Packet type information
- Checksum offload
- Promiscuous mode
+ - IEEE1588 PTP
DPAA Mempool Driver
~~~~~~~~~~~~~~~~~~~
diff --git a/doc/guides/nics/features/dpaa.ini b/doc/guides/nics/features/dpaa.ini
index 4196dd800c..2c2e79dcb5 100644
--- a/doc/guides/nics/features/dpaa.ini
+++ b/doc/guides/nics/features/dpaa.ini
@@ -19,9 +19,11 @@ Flow control = Y
L3 checksum offload = Y
L4 checksum offload = Y
Packet type parsing = Y
+Timesync = Y
Timestamp offload = Y
Basic stats = Y
Extended stats = Y
FW version = Y
ARMv8 = Y
Usage doc = Y
+
diff --git a/drivers/bus/dpaa/base/fman/fman.c b/drivers/bus/dpaa/base/fman/fman.c
index 89786636d9..a79b0b75dd 100644
--- a/drivers/bus/dpaa/base/fman/fman.c
+++ b/drivers/bus/dpaa/base/fman/fman.c
@@ -28,6 +28,7 @@ u32 fman_dealloc_bufs_mask_lo;
int fman_ccsr_map_fd = -1;
static COMPAT_LIST_HEAD(__ifs);
+void *rtc_map;
/* This is the (const) global variable that callers have read-only access to.
* Internally, we have read-write access directly to __ifs.
@@ -539,6 +540,20 @@ fman_if_init(const struct device_node *dpa_node)
goto err;
}
+ if (!rtc_map) {
+ __if->rtc_map = mmap(NULL, FMAN_IEEE_1588_SIZE,
+ PROT_READ | PROT_WRITE, MAP_SHARED,
+ fman_ccsr_map_fd, FMAN_IEEE_1588_OFFSET);
+ if (__if->rtc_map == MAP_FAILED) {
+ pr_err("Can not map FMan RTC regs base\n");
+ _errno = -EINVAL;
+ goto err;
+ }
+ rtc_map = __if->rtc_map;
+ } else {
+ __if->rtc_map = rtc_map;
+ }
+
/* No channel ID for MAC-less */
assert(lenp == sizeof(*tx_channel_id));
na = of_n_addr_cells(mac_node);
diff --git a/drivers/bus/dpaa/include/fman.h b/drivers/bus/dpaa/include/fman.h
index 857eef3d2f..109c1a4a22 100644
--- a/drivers/bus/dpaa/include/fman.h
+++ b/drivers/bus/dpaa/include/fman.h
@@ -64,6 +64,12 @@
#define GROUP_ADDRESS 0x0000010000000000LL
#define HASH_CTRL_ADDR_MASK 0x0000003F
+#define FMAN_RTC_MAX_NUM_OF_ALARMS 3
+#define FMAN_RTC_MAX_NUM_OF_PERIODIC_PULSES 4
+#define FMAN_RTC_MAX_NUM_OF_EXT_TRIGGERS 3
+#define FMAN_IEEE_1588_OFFSET 0X1AFE000
+#define FMAN_IEEE_1588_SIZE 4096
+
/* Pre definitions of FMAN interface and Bpool structures */
struct __fman_if;
struct fman_if_bpool;
@@ -307,6 +313,44 @@ struct tx_bmi_regs {
uint32_t fmbm_trlmts; /**< Tx Rate Limiter Scale*/
uint32_t fmbm_trlmt; /**< Tx Rate Limiter*/
};
+
+/* Description FM RTC timer alarm */
+struct t_tmr_alarm {
+ uint32_t tmr_alarm_h;
+ uint32_t tmr_alarm_l;
+};
+
+/* Description FM RTC timer Ex trigger */
+struct t_tmr_ext_trigger {
+ uint32_t tmr_etts_h;
+ uint32_t tmr_etts_l;
+};
+
+struct rtc_regs {
+ uint32_t tmr_id; /* 0x000 Module ID register */
+ uint32_t tmr_id2; /* 0x004 Controller ID register */
+ uint32_t reserved0008[30];
+ uint32_t tmr_ctrl; /* 0x0080 timer control register */
+ uint32_t tmr_tevent; /* 0x0084 timer event register */
+ uint32_t tmr_temask; /* 0x0088 timer event mask register */
+ uint32_t reserved008c[3];
+ uint32_t tmr_cnt_h; /* 0x0098 timer counter high register */
+ uint32_t tmr_cnt_l; /* 0x009c timer counter low register */
+ uint32_t tmr_add; /* 0x00a0 timer drift compensation addend register */
+ uint32_t tmr_acc; /* 0x00a4 timer accumulator register */
+ uint32_t tmr_prsc; /* 0x00a8 timer prescale */
+ uint32_t reserved00ac;
+ uint32_t tmr_off_h; /* 0x00b0 timer offset high */
+ uint32_t tmr_off_l; /* 0x00b4 timer offset low */
+ struct t_tmr_alarm tmr_alarm[FMAN_RTC_MAX_NUM_OF_ALARMS];
+ /* 0x00b8 timer alarm */
+ uint32_t tmr_fiper[FMAN_RTC_MAX_NUM_OF_PERIODIC_PULSES];
+ /* 0x00d0 timer fixed period interval */
+ struct t_tmr_ext_trigger tmr_etts[FMAN_RTC_MAX_NUM_OF_EXT_TRIGGERS];
+ /* 0x00e0 time stamp general purpose external */
+ uint32_t reserved00f0[4];
+};
+
struct fman_port_qmi_regs {
uint32_t fmqm_pnc; /**< PortID n Configuration Register */
uint32_t fmqm_pns; /**< PortID n Status Register */
@@ -396,6 +440,7 @@ struct __fman_if {
void *ccsr_map;
void *bmi_map;
void *tx_bmi_map;
+ void *rtc_map;
void *qmi_map;
struct list_head node;
};
diff --git a/drivers/net/dpaa/dpaa_ethdev.c b/drivers/net/dpaa/dpaa_ethdev.c
index 682cb1c77e..82d1960356 100644
--- a/drivers/net/dpaa/dpaa_ethdev.c
+++ b/drivers/net/dpaa/dpaa_ethdev.c
@@ -1673,6 +1673,11 @@ static struct eth_dev_ops dpaa_devops = {
.rx_queue_intr_disable = dpaa_dev_queue_intr_disable,
.rss_hash_update = dpaa_dev_rss_hash_update,
.rss_hash_conf_get = dpaa_dev_rss_hash_conf_get,
+ .timesync_enable = dpaa_timesync_enable,
+ .timesync_disable = dpaa_timesync_disable,
+ .timesync_read_time = dpaa_timesync_read_time,
+ .timesync_write_time = dpaa_timesync_write_time,
+ .timesync_adjust_time = dpaa_timesync_adjust_time,
.timesync_read_rx_timestamp = dpaa_timesync_read_rx_timestamp,
.timesync_read_tx_timestamp = dpaa_timesync_read_tx_timestamp,
};
diff --git a/drivers/net/dpaa/dpaa_ethdev.h b/drivers/net/dpaa/dpaa_ethdev.h
index bbdb0936c0..7884cc034c 100644
--- a/drivers/net/dpaa/dpaa_ethdev.h
+++ b/drivers/net/dpaa/dpaa_ethdev.h
@@ -245,6 +245,22 @@ int
dpaa_timesync_read_tx_timestamp(struct rte_eth_dev *dev,
struct timespec *timestamp);
+int
+dpaa_timesync_enable(struct rte_eth_dev *dev);
+
+int
+dpaa_timesync_disable(struct rte_eth_dev *dev);
+
+int
+dpaa_timesync_read_time(struct rte_eth_dev *dev,
+ struct timespec *timestamp);
+
+int
+dpaa_timesync_write_time(struct rte_eth_dev *dev,
+ const struct timespec *timestamp);
+int
+dpaa_timesync_adjust_time(struct rte_eth_dev *dev, int64_t delta);
+
int
dpaa_timesync_read_rx_timestamp(struct rte_eth_dev *dev,
struct timespec *timestamp,
diff --git a/drivers/net/dpaa/dpaa_ptp.c b/drivers/net/dpaa/dpaa_ptp.c
index df6df1ddf2..f9337a9468 100644
--- a/drivers/net/dpaa/dpaa_ptp.c
+++ b/drivers/net/dpaa/dpaa_ptp.c
@@ -16,7 +16,82 @@
#include <dpaa_ethdev.h>
#include <dpaa_rxtx.h>
-int dpaa_timesync_read_tx_timestamp(struct rte_eth_dev *dev,
+int
+dpaa_timesync_enable(struct rte_eth_dev *dev __rte_unused)
+{
+ return 0;
+}
+
+int
+dpaa_timesync_disable(struct rte_eth_dev *dev __rte_unused)
+{
+ return 0;
+}
+
+int
+dpaa_timesync_read_time(struct rte_eth_dev *dev,
+ struct timespec *timestamp)
+{
+ uint32_t *tmr_cnt_h, *tmr_cnt_l;
+ struct __fman_if *__fif;
+ struct fman_if *fif;
+ uint64_t time;
+
+ fif = dev->process_private;
+ __fif = container_of(fif, struct __fman_if, __if);
+
+ tmr_cnt_h = &((struct rtc_regs *)__fif->rtc_map)->tmr_cnt_h;
+ tmr_cnt_l = &((struct rtc_regs *)__fif->rtc_map)->tmr_cnt_l;
+
+ time = (uint64_t)in_be32(tmr_cnt_l);
+ time |= ((uint64_t)in_be32(tmr_cnt_h) << 32);
+
+ *timestamp = rte_ns_to_timespec(time);
+ return 0;
+}
+
+int
+dpaa_timesync_write_time(struct rte_eth_dev *dev,
+ const struct timespec *ts)
+{
+ uint32_t *tmr_cnt_h, *tmr_cnt_l;
+ struct __fman_if *__fif;
+ struct fman_if *fif;
+ uint64_t time;
+
+ fif = dev->process_private;
+ __fif = container_of(fif, struct __fman_if, __if);
+
+ tmr_cnt_h = &((struct rtc_regs *)__fif->rtc_map)->tmr_cnt_h;
+ tmr_cnt_l = &((struct rtc_regs *)__fif->rtc_map)->tmr_cnt_l;
+
+ time = rte_timespec_to_ns(ts);
+
+ out_be32(tmr_cnt_l, (uint32_t)time);
+ out_be32(tmr_cnt_h, (uint32_t)(time >> 32));
+
+ return 0;
+}
+
+int
+dpaa_timesync_adjust_time(struct rte_eth_dev *dev, int64_t delta)
+{
+ struct timespec ts = {0, 0}, *timestamp = &ts;
+ uint64_t ns;
+
+ dpaa_timesync_read_time(dev, timestamp);
+
+ ns = rte_timespec_to_ns(timestamp);
+ ns += delta;
+ *timestamp = rte_ns_to_timespec(ns);
+
+ dpaa_timesync_write_time(dev, timestamp);
+
+ return 0;
+}
+
+int
+dpaa_timesync_read_tx_timestamp(struct rte_eth_dev *dev,
struct timespec *timestamp)
{
struct dpaa_if *dpaa_intf = dev->data->dev_private;
@@ -32,7 +107,8 @@ int dpaa_timesync_read_tx_timestamp(struct rte_eth_dev *dev,
return 0;
}
-int dpaa_timesync_read_rx_timestamp(struct rte_eth_dev *dev,
+int
+dpaa_timesync_read_rx_timestamp(struct rte_eth_dev *dev,
struct timespec *timestamp,
uint32_t flags __rte_unused)
{
--
2.25.1
* [PATCH v2 11/18] net/dpaa: implement detailed packet parsing
2024-08-23 7:32 ` [PATCH v2 00/18] " Hemant Agrawal
` (9 preceding siblings ...)
2024-08-23 7:32 ` [PATCH v2 10/18] net/dpaa: support IEEE 1588 PTP Hemant Agrawal
@ 2024-08-23 7:32 ` Hemant Agrawal
2024-08-23 7:32 ` [PATCH v2 12/18] net/dpaa: enhance DPAA frame display Hemant Agrawal
` (9 subsequent siblings)
20 siblings, 0 replies; 129+ messages in thread
From: Hemant Agrawal @ 2024-08-23 7:32 UTC (permalink / raw)
To: dev; +Cc: ferruh.yigit, Jun Yang
This patch implements detailed packet parsing using
the annotation info from the hardware.
The parser result is decoded in dpaa_slow_parsing() to set the Rx mbuf packet type.
Support is added to identify IPsec ESP, GRE and SCTP packets.
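To show how the result is consumed, a hypothetical receive-path check on the
packet types filled in by dpaa_slow_parsing() could look like this (the helper
names are assumptions, not part of the patch):

#include <rte_mbuf.h>
#include <rte_mbuf_ptype.h>

/* True when the FMan parser flagged an IPsec ESP frame
 * (l4_type == DPAA_PR_L4_IPSEC_TYPE with esp_sum set).
 */
static inline int dpaa_pkt_is_esp(const struct rte_mbuf *m)
{
	return (m->packet_type & RTE_PTYPE_TUNNEL_MASK) == RTE_PTYPE_TUNNEL_ESP;
}

/* True for GRE tunnels detected by the parser. */
static inline int dpaa_pkt_is_gre(const struct rte_mbuf *m)
{
	return (m->packet_type & RTE_PTYPE_TUNNEL_MASK) == RTE_PTYPE_TUNNEL_GRE;
}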
Signed-off-by: Jun Yang <jun.yang@nxp.com>
Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
---
drivers/net/dpaa/dpaa_ethdev.c | 1 +
drivers/net/dpaa/dpaa_rxtx.c | 35 +++++++-
drivers/net/dpaa/dpaa_rxtx.h | 143 ++++++++++++++-------------------
3 files changed, 93 insertions(+), 86 deletions(-)
diff --git a/drivers/net/dpaa/dpaa_ethdev.c b/drivers/net/dpaa/dpaa_ethdev.c
index 82d1960356..a302b24be6 100644
--- a/drivers/net/dpaa/dpaa_ethdev.c
+++ b/drivers/net/dpaa/dpaa_ethdev.c
@@ -411,6 +411,7 @@ dpaa_supported_ptypes_get(struct rte_eth_dev *dev, size_t *no_of_elements)
RTE_PTYPE_L4_UDP,
RTE_PTYPE_L4_SCTP,
RTE_PTYPE_TUNNEL_ESP,
+ RTE_PTYPE_TUNNEL_GRE,
};
PMD_INIT_FUNC_TRACE();
diff --git a/drivers/net/dpaa/dpaa_rxtx.c b/drivers/net/dpaa/dpaa_rxtx.c
index e3b4bb14ab..99fc3f1b43 100644
--- a/drivers/net/dpaa/dpaa_rxtx.c
+++ b/drivers/net/dpaa/dpaa_rxtx.c
@@ -110,11 +110,38 @@ static void dpaa_display_frame_info(const struct qm_fd *fd,
#define dpaa_display_frame_info(a, b, c)
#endif
-static inline void dpaa_slow_parsing(struct rte_mbuf *m __rte_unused,
- uint64_t prs __rte_unused)
+static inline void
+dpaa_slow_parsing(struct rte_mbuf *m,
+ const struct annotations_t *annot)
{
+ const struct dpaa_eth_parse_results_t *parse;
+
DPAA_DP_LOG(DEBUG, "Slow parsing");
- /*TBD:XXX: to be implemented*/
+ parse = &annot->parse;
+
+ if (parse->ethernet)
+ m->packet_type |= RTE_PTYPE_L2_ETHER;
+ if (parse->vlan)
+ m->packet_type |= RTE_PTYPE_L2_ETHER_VLAN;
+ if (parse->first_ipv4)
+ m->packet_type |= RTE_PTYPE_L3_IPV4;
+ if (parse->first_ipv6)
+ m->packet_type |= RTE_PTYPE_L3_IPV6;
+ if (parse->gre)
+ m->packet_type |= RTE_PTYPE_TUNNEL_GRE;
+ if (parse->last_ipv4)
+ m->packet_type |= RTE_PTYPE_L3_IPV4_EXT;
+ if (parse->last_ipv6)
+ m->packet_type |= RTE_PTYPE_L3_IPV6_EXT;
+ if (parse->l4_type == DPAA_PR_L4_TCP_TYPE)
+ m->packet_type |= RTE_PTYPE_L4_TCP;
+ else if (parse->l4_type == DPAA_PR_L4_UDP_TYPE)
+ m->packet_type |= RTE_PTYPE_L4_UDP;
+ else if (parse->l4_type == DPAA_PR_L4_IPSEC_TYPE &&
+ !parse->l4_info_err && parse->esp_sum)
+ m->packet_type |= RTE_PTYPE_TUNNEL_ESP;
+ else if (parse->l4_type == DPAA_PR_L4_SCTP_TYPE)
+ m->packet_type |= RTE_PTYPE_L4_SCTP;
}
static inline void dpaa_eth_packet_info(struct rte_mbuf *m, void *fd_virt_addr)
@@ -228,7 +255,7 @@ static inline void dpaa_eth_packet_info(struct rte_mbuf *m, void *fd_virt_addr)
break;
/* More switch cases can be added */
default:
- dpaa_slow_parsing(m, prs);
+ dpaa_slow_parsing(m, annot);
}
m->tx_offload = annot->parse.ip_off[0];
diff --git a/drivers/net/dpaa/dpaa_rxtx.h b/drivers/net/dpaa/dpaa_rxtx.h
index 1048e86d41..215bdeaf7f 100644
--- a/drivers/net/dpaa/dpaa_rxtx.h
+++ b/drivers/net/dpaa/dpaa_rxtx.h
@@ -1,7 +1,7 @@
/* SPDX-License-Identifier: BSD-3-Clause
*
* Copyright 2016 Freescale Semiconductor, Inc. All rights reserved.
- * Copyright 2017,2020-2022 NXP
+ * Copyright 2017,2020-2024 NXP
*
*/
@@ -162,98 +162,77 @@
#define DPAA_PKT_L3_LEN_SHIFT 7
+enum dpaa_parse_result_l4_type {
+ DPAA_PR_L4_TCP_TYPE = 1,
+ DPAA_PR_L4_UDP_TYPE = 2,
+ DPAA_PR_L4_IPSEC_TYPE = 3,
+ DPAA_PR_L4_SCTP_TYPE = 4,
+ DPAA_PR_L4_DCCP_TYPE = 5
+};
+
/**
* FMan parse result array
*/
struct dpaa_eth_parse_results_t {
- uint8_t lpid; /**< Logical port id */
- uint8_t shimr; /**< Shim header result */
- union {
- uint16_t l2r; /**< Layer 2 result */
+ uint8_t lpid; /**< Logical port id */
+ uint8_t shimr; /**< Shim header result */
+ union {
+ uint16_t l2r; /**< Layer 2 result */
struct {
-#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
- uint16_t ethernet:1;
- uint16_t vlan:1;
- uint16_t llc_snap:1;
- uint16_t mpls:1;
- uint16_t ppoe_ppp:1;
- uint16_t unused_1:3;
- uint16_t unknown_eth_proto:1;
- uint16_t eth_frame_type:2;
- uint16_t l2r_err:5;
+ uint16_t unused_1:3;
+ uint16_t ppoe_ppp:1;
+ uint16_t mpls:1;
+ uint16_t llc_snap:1;
+ uint16_t vlan:1;
+ uint16_t ethernet:1;
+
+ uint16_t l2r_err:5;
+ uint16_t eth_frame_type:2;
/*00-unicast, 01-multicast, 11-broadcast*/
-#else
- uint16_t l2r_err:5;
- uint16_t eth_frame_type:2;
- uint16_t unknown_eth_proto:1;
- uint16_t unused_1:3;
- uint16_t ppoe_ppp:1;
- uint16_t mpls:1;
- uint16_t llc_snap:1;
- uint16_t vlan:1;
- uint16_t ethernet:1;
-#endif
+ uint16_t unknown_eth_proto:1;
} __rte_packed;
- } __rte_packed;
- union {
- uint16_t l3r; /**< Layer 3 result */
+ } __rte_packed;
+ union {
+ uint16_t l3r; /**< Layer 3 result */
struct {
-#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
- uint16_t first_ipv4:1;
- uint16_t first_ipv6:1;
- uint16_t gre:1;
- uint16_t min_enc:1;
- uint16_t last_ipv4:1;
- uint16_t last_ipv6:1;
- uint16_t first_info_err:1;/*0 info, 1 error*/
- uint16_t first_ip_err_code:5;
- uint16_t last_info_err:1; /*0 info, 1 error*/
- uint16_t last_ip_err_code:3;
-#else
- uint16_t last_ip_err_code:3;
- uint16_t last_info_err:1; /*0 info, 1 error*/
- uint16_t first_ip_err_code:5;
- uint16_t first_info_err:1;/*0 info, 1 error*/
- uint16_t last_ipv6:1;
- uint16_t last_ipv4:1;
- uint16_t min_enc:1;
- uint16_t gre:1;
- uint16_t first_ipv6:1;
- uint16_t first_ipv4:1;
-#endif
+ uint16_t unused_2:1;
+ uint16_t l3_err:1;
+ uint16_t last_ipv6:1;
+ uint16_t last_ipv4:1;
+ uint16_t min_enc:1;
+ uint16_t gre:1;
+ uint16_t first_ipv6:1;
+ uint16_t first_ipv4:1;
+
+ uint16_t unused_3:8;
} __rte_packed;
- } __rte_packed;
- union {
- uint8_t l4r; /**< Layer 4 result */
+ } __rte_packed;
+ union {
+ uint8_t l4r; /**< Layer 4 result */
struct{
-#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
- uint8_t l4_type:3;
- uint8_t l4_info_err:1;
- uint8_t l4_result:4;
- /* if type IPSec: 1 ESP, 2 AH */
-#else
- uint8_t l4_result:4;
- /* if type IPSec: 1 ESP, 2 AH */
- uint8_t l4_info_err:1;
- uint8_t l4_type:3;
-#endif
+ uint8_t l4cv:1;
+ uint8_t unused_4:1;
+ uint8_t ah:1;
+ uint8_t esp_sum:1;
+ uint8_t l4_info_err:1;
+ uint8_t l4_type:3;
} __rte_packed;
- } __rte_packed;
- uint8_t cplan; /**< Classification plan id */
- uint16_t nxthdr; /**< Next Header */
- uint16_t cksum; /**< Checksum */
- uint32_t lcv; /**< LCV */
- uint8_t shim_off[3]; /**< Shim offset */
- uint8_t eth_off; /**< ETH offset */
- uint8_t llc_snap_off; /**< LLC_SNAP offset */
- uint8_t vlan_off[2]; /**< VLAN offset */
- uint8_t etype_off; /**< ETYPE offset */
- uint8_t pppoe_off; /**< PPP offset */
- uint8_t mpls_off[2]; /**< MPLS offset */
- uint8_t ip_off[2]; /**< IP offset */
- uint8_t gre_off; /**< GRE offset */
- uint8_t l4_off; /**< Layer 4 offset */
- uint8_t nxthdr_off; /**< Parser end point */
+ } __rte_packed;
+ uint8_t cplan; /**< Classification plan id */
+ uint16_t nxthdr; /**< Next Header */
+ uint16_t cksum; /**< Checksum */
+ uint32_t lcv; /**< LCV */
+ uint8_t shim_off[3]; /**< Shim offset */
+ uint8_t eth_off; /**< ETH offset */
+ uint8_t llc_snap_off; /**< LLC_SNAP offset */
+ uint8_t vlan_off[2]; /**< VLAN offset */
+ uint8_t etype_off; /**< ETYPE offset */
+ uint8_t pppoe_off; /**< PPP offset */
+ uint8_t mpls_off[2]; /**< MPLS offset */
+ uint8_t ip_off[2]; /**< IP offset */
+ uint8_t gre_off; /**< GRE offset */
+ uint8_t l4_off; /**< Layer 4 offset */
+ uint8_t nxthdr_off; /**< Parser end point */
} __rte_packed;
/* The structure is the Prepended Data to the Frame which is used by FMAN */
--
2.25.1
* [PATCH v2 12/18] net/dpaa: enhance DPAA frame display
2024-08-23 7:32 ` [PATCH v2 00/18] " Hemant Agrawal
` (10 preceding siblings ...)
2024-08-23 7:32 ` [PATCH v2 11/18] net/dpaa: implement detailed packet parsing Hemant Agrawal
@ 2024-08-23 7:32 ` Hemant Agrawal
2024-08-23 7:32 ` [PATCH v2 13/18] net/dpaa: support mempool debug Hemant Agrawal
` (8 subsequent siblings)
20 siblings, 0 replies; 129+ messages in thread
From: Hemant Agrawal @ 2024-08-23 7:32 UTC (permalink / raw)
To: dev; +Cc: ferruh.yigit, Jun Yang
This patch enhances the received packet debugging capability
by displaying the full packet parsing output.
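Since the flag is read with getenv() in dpaa_dev_init() (see the diff below),
it can also be set programmatically; a minimal sketch, only meaningful when the
PMD is built with RTE_LIBRTE_DPAA_DEBUG_DRIVER:

#include <stdlib.h>
#include <rte_eal.h>

int main(int argc, char **argv)
{
	/* Must be set before rte_eal_init() probes the DPAA ports;
	 * any non-zero value forces the frame/parser dump for all frames,
	 * not only for frames with a non-zero FD status.
	 */
	setenv("DPAA_DISPLAY_FRAME_AND_PARSER_RESULT", "1", 1);

	return rte_eal_init(argc, argv);
}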
Signed-off-by: Jun Yang <jun.yang@nxp.com>
Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
---
doc/guides/nics/dpaa.rst | 5 ++
drivers/net/dpaa/dpaa_ethdev.c | 9 +++
drivers/net/dpaa/dpaa_rxtx.c | 138 +++++++++++++++++++++++++++------
drivers/net/dpaa/dpaa_rxtx.h | 5 ++
4 files changed, 133 insertions(+), 24 deletions(-)
diff --git a/doc/guides/nics/dpaa.rst b/doc/guides/nics/dpaa.rst
index ea86e6146c..edf7a7e350 100644
--- a/doc/guides/nics/dpaa.rst
+++ b/doc/guides/nics/dpaa.rst
@@ -227,6 +227,11 @@ state during application initialization:
application want to use eventdev with DPAA device.
Currently these queues are not used for LS1023/LS1043 platform by default.
+- ``DPAA_DISPLAY_FRAME_AND_PARSER_RESULT`` (default 0)
+
+ This defines the debug flag, whether to dump the detailed frame and packet
+ parsing result for the incoming packets.
+
Driver compilation and testing
------------------------------
diff --git a/drivers/net/dpaa/dpaa_ethdev.c b/drivers/net/dpaa/dpaa_ethdev.c
index a302b24be6..4ead890278 100644
--- a/drivers/net/dpaa/dpaa_ethdev.c
+++ b/drivers/net/dpaa/dpaa_ethdev.c
@@ -2056,6 +2056,9 @@ dpaa_dev_init(struct rte_eth_dev *eth_dev)
int8_t dev_vspids[DPAA_MAX_NUM_PCD_QUEUES];
int8_t vsp_id = -1;
struct rte_device *dev = eth_dev->device;
+#ifdef RTE_LIBRTE_DPAA_DEBUG_DRIVER
+ char *penv;
+#endif
PMD_INIT_FUNC_TRACE();
@@ -2135,6 +2138,12 @@ dpaa_dev_init(struct rte_eth_dev *eth_dev)
td_tx_threshold = CGR_RX_PERFQ_THRESH;
}
+#ifdef RTE_LIBRTE_DPAA_DEBUG_DRIVER
+ penv = getenv("DPAA_DISPLAY_FRAME_AND_PARSER_RESULT");
+ if (penv)
+ dpaa_force_display_frame_set(atoi(penv));
+#endif
+
/* If congestion control is enabled globally*/
if (num_rx_fqs > 0 && td_threshold) {
dpaa_intf->cgr_rx = rte_zmalloc(NULL,
diff --git a/drivers/net/dpaa/dpaa_rxtx.c b/drivers/net/dpaa/dpaa_rxtx.c
index 99fc3f1b43..945c84ab10 100644
--- a/drivers/net/dpaa/dpaa_rxtx.c
+++ b/drivers/net/dpaa/dpaa_rxtx.c
@@ -47,6 +47,10 @@
#include <dpaa_of.h>
#include <netcfg.h>
+#ifdef RTE_LIBRTE_DPAA_DEBUG_DRIVER
+static int s_force_display_frm;
+#endif
+
#define DPAA_MBUF_TO_CONTIG_FD(_mbuf, _fd, _bpid) \
do { \
(_fd)->opaque_addr = 0; \
@@ -58,37 +62,122 @@
} while (0)
#ifdef RTE_LIBRTE_DPAA_DEBUG_DRIVER
+void
+dpaa_force_display_frame_set(int set)
+{
+ s_force_display_frm = set;
+}
+
#define DISPLAY_PRINT printf
-static void dpaa_display_frame_info(const struct qm_fd *fd,
- uint32_t fqid, bool rx)
+static void
+dpaa_display_frame_info(const struct qm_fd *fd,
+ uint32_t fqid, bool rx)
{
- int ii;
- char *ptr;
+ int pos, offset = 0;
+ char *ptr, info[1024];
struct annotations_t *annot = rte_dpaa_mem_ptov(fd->addr);
uint8_t format;
+ const struct dpaa_eth_parse_results_t *psr;
- if (!fd->status) {
- /* Do not display correct packets.*/
+ if (!fd->status && !s_force_display_frm) {
+ /* Do not display correct packets unless force display.*/
return;
}
+ psr = &annot->parse;
- format = (fd->opaque & DPAA_FD_FORMAT_MASK) >>
- DPAA_FD_FORMAT_SHIFT;
-
- DISPLAY_PRINT("fqid %d bpid %d addr 0x%lx, format %d\r\n",
- fqid, fd->bpid, (unsigned long)fd->addr, fd->format);
- DISPLAY_PRINT("off %d, len %d stat 0x%x\r\n",
- fd->offset, fd->length20, fd->status);
+ format = (fd->opaque & DPAA_FD_FORMAT_MASK) >> DPAA_FD_FORMAT_SHIFT;
+ if (format == qm_fd_contig)
+ sprintf(info, "simple");
+ else if (format == qm_fd_sg)
+ sprintf(info, "sg");
+ else
+ sprintf(info, "unknown format(%d)", format);
+
+ DISPLAY_PRINT("%s: fqid=%08x, bpid=%d, phy addr=0x%lx ",
+ rx ? "RX" : "TX", fqid, fd->bpid, (unsigned long)fd->addr);
+ DISPLAY_PRINT("format=%s offset=%d, len=%d, stat=0x%x\r\n",
+ info, fd->offset, fd->length20, fd->status);
if (rx) {
- ptr = (char *)&annot->parse;
- DISPLAY_PRINT("RX parser result:\r\n");
- for (ii = 0; ii < (int)sizeof(struct dpaa_eth_parse_results_t);
- ii++) {
- DISPLAY_PRINT("%02x ", ptr[ii]);
- if (((ii + 1) % 16) == 0)
- DISPLAY_PRINT("\n");
+ DISPLAY_PRINT("Display usual RX parser result:\r\n");
+ if (psr->eth_frame_type == 0)
+ offset += sprintf(&info[offset], "unicast");
+ else if (psr->eth_frame_type == 1)
+ offset += sprintf(&info[offset], "multicast");
+ else if (psr->eth_frame_type == 3)
+ offset += sprintf(&info[offset], "broadcast");
+ else
+ offset += sprintf(&info[offset], "unknown eth type(%d)",
+ psr->eth_frame_type);
+ if (psr->l2r_err) {
+ offset += sprintf(&info[offset], " L2 error(%d)",
+ psr->l2r_err);
+ } else {
+ offset += sprintf(&info[offset], " L2 non error");
}
- DISPLAY_PRINT("\n");
+ DISPLAY_PRINT("L2: %s, %s, ethernet type:%s\r\n",
+ psr->ethernet ? "is ethernet" : "non ethernet",
+ psr->vlan ? "is vlan" : "non vlan", info);
+
+ offset = 0;
+ DISPLAY_PRINT("L3: %s/%s, %s/%s, %s, %s\r\n",
+ psr->first_ipv4 ? "first IPv4" : "non first IPv4",
+ psr->last_ipv4 ? "last IPv4" : "non last IPv4",
+ psr->first_ipv6 ? "first IPv6" : "non first IPv6",
+ psr->last_ipv6 ? "last IPv6" : "non last IPv6",
+ psr->gre ? "GRE" : "non GRE",
+ psr->l3_err ? "L3 has error" : "L3 non error");
+
+ if (psr->l4_type == DPAA_PR_L4_TCP_TYPE) {
+ offset += sprintf(&info[offset], "tcp");
+ } else if (psr->l4_type == DPAA_PR_L4_UDP_TYPE) {
+ offset += sprintf(&info[offset], "udp");
+ } else if (psr->l4_type == DPAA_PR_L4_IPSEC_TYPE) {
+ offset += sprintf(&info[offset], "IPSec ");
+ if (psr->esp_sum)
+ offset += sprintf(&info[offset], "ESP");
+ if (psr->ah)
+ offset += sprintf(&info[offset], "AH");
+ } else if (psr->l4_type == DPAA_PR_L4_SCTP_TYPE) {
+ offset += sprintf(&info[offset], "sctp");
+ } else if (psr->l4_type == DPAA_PR_L4_DCCP_TYPE) {
+ offset += sprintf(&info[offset], "dccp");
+ } else {
+ offset += sprintf(&info[offset], "unknown l4 type(%d)",
+ psr->l4_type);
+ }
+ DISPLAY_PRINT("L4: type:%s, L4 validation %s\r\n",
+ info, psr->l4cv ? "Performed" : "NOT performed");
+
+ offset = 0;
+ if (psr->ethernet) {
+ offset += sprintf(&info[offset],
+ "Eth offset=%d, ethtype offset=%d, ",
+ psr->eth_off, psr->etype_off);
+ }
+ if (psr->vlan) {
+ offset += sprintf(&info[offset], "vLAN offset=%d, ",
+ psr->vlan_off[0]);
+ }
+ if (psr->first_ipv4 || psr->first_ipv6) {
+ offset += sprintf(&info[offset], "first IP offset=%d, ",
+ psr->ip_off[0]);
+ }
+ if (psr->last_ipv4 || psr->last_ipv6) {
+ offset += sprintf(&info[offset], "last IP offset=%d, ",
+ psr->ip_off[1]);
+ }
+ if (psr->gre) {
+ offset += sprintf(&info[offset], "GRE offset=%d, ",
+ psr->gre_off);
+ }
+ if (psr->l4_type >= DPAA_PR_L4_TCP_TYPE) {
+ offset += sprintf(&info[offset], "L4 offset=%d, ",
+ psr->l4_off);
+ }
+ offset += sprintf(&info[offset], "Next HDR(0x%04x) offset=%d.",
+ rte_be_to_cpu_16(psr->nxthdr), psr->nxthdr_off);
+
+ DISPLAY_PRINT("%s\r\n", info);
}
if (unlikely(format == qm_fd_sg)) {
@@ -99,13 +188,14 @@ static void dpaa_display_frame_info(const struct qm_fd *fd,
DISPLAY_PRINT("Frame payload:\r\n");
ptr = (char *)annot;
ptr += fd->offset;
- for (ii = 0; ii < fd->length20; ii++) {
- DISPLAY_PRINT("%02x ", ptr[ii]);
- if (((ii + 1) % 16) == 0)
+ for (pos = 0; pos < fd->length20; pos++) {
+ DISPLAY_PRINT("%02x ", ptr[pos]);
+ if (((pos + 1) % 16) == 0)
DISPLAY_PRINT("\n");
}
DISPLAY_PRINT("\n");
}
+
#else
#define dpaa_display_frame_info(a, b, c)
#endif
diff --git a/drivers/net/dpaa/dpaa_rxtx.h b/drivers/net/dpaa/dpaa_rxtx.h
index 215bdeaf7f..392926e286 100644
--- a/drivers/net/dpaa/dpaa_rxtx.h
+++ b/drivers/net/dpaa/dpaa_rxtx.h
@@ -274,4 +274,9 @@ void dpaa_rx_cb_prepare(struct qm_dqrr_entry *dq, void **bufs);
void dpaa_rx_cb_no_prefetch(struct qman_fq **fq,
struct qm_dqrr_entry **dqrr, void **bufs, int num_bufs);
+#ifdef RTE_LIBRTE_DPAA_DEBUG_DRIVER
+void
+dpaa_force_display_frame_set(int set);
+#endif
+
#endif
--
2.25.1
* [PATCH v2 13/18] net/dpaa: support mempool debug
2024-08-23 7:32 ` [PATCH v2 00/18] " Hemant Agrawal
` (11 preceding siblings ...)
2024-08-23 7:32 ` [PATCH v2 12/18] net/dpaa: enhance DPAA frame display Hemant Agrawal
@ 2024-08-23 7:32 ` Hemant Agrawal
2024-08-23 7:32 ` [PATCH v2 14/18] net/dpaa: add Tx rate limiting DPAA PMD API Hemant Agrawal
` (7 subsequent siblings)
20 siblings, 0 replies; 129+ messages in thread
From: Hemant Agrawal @ 2024-08-23 7:32 UTC (permalink / raw)
To: dev; +Cc: ferruh.yigit, Gagandeep Singh
From: Gagandeep Singh <g.singh@nxp.com>
This patch adds compile-time support (RTE_LIBRTE_MEMPOOL_DEBUG)
for debugging mempool corruption in the dpaa driver.
Signed-off-by: Gagandeep Singh <g.singh@nxp.com>
---
drivers/net/dpaa/dpaa_rxtx.c | 40 ++++++++++++++++++++++++++++++++++++
1 file changed, 40 insertions(+)
diff --git a/drivers/net/dpaa/dpaa_rxtx.c b/drivers/net/dpaa/dpaa_rxtx.c
index 945c84ab10..d82c6f3be2 100644
--- a/drivers/net/dpaa/dpaa_rxtx.c
+++ b/drivers/net/dpaa/dpaa_rxtx.c
@@ -494,6 +494,10 @@ dpaa_eth_sg_to_mbuf(const struct qm_fd *fd, uint32_t ifid)
first_seg->data_len = sg_temp->length;
first_seg->pkt_len = sg_temp->length;
rte_mbuf_refcnt_set(first_seg, 1);
+#ifdef RTE_LIBRTE_MEMPOOL_DEBUG
+ rte_mempool_check_cookies(rte_mempool_from_obj((void *)first_seg),
+ (void **)&first_seg, 1, 1);
+#endif
first_seg->port = ifid;
first_seg->nb_segs = 1;
@@ -511,6 +515,10 @@ dpaa_eth_sg_to_mbuf(const struct qm_fd *fd, uint32_t ifid)
first_seg->pkt_len += sg_temp->length;
first_seg->nb_segs += 1;
rte_mbuf_refcnt_set(cur_seg, 1);
+#ifdef RTE_LIBRTE_MEMPOOL_DEBUG
+ rte_mempool_check_cookies(rte_mempool_from_obj((void *)cur_seg),
+ (void **)&cur_seg, 1, 1);
+#endif
prev_seg->next = cur_seg;
if (sg_temp->final) {
cur_seg->next = NULL;
@@ -522,6 +530,10 @@ dpaa_eth_sg_to_mbuf(const struct qm_fd *fd, uint32_t ifid)
first_seg->pkt_len, first_seg->nb_segs);
dpaa_eth_packet_info(first_seg, vaddr);
+#ifdef RTE_LIBRTE_MEMPOOL_DEBUG
+ rte_mempool_check_cookies(rte_mempool_from_obj((void *)temp),
+ (void **)&temp, 1, 1);
+#endif
rte_pktmbuf_free_seg(temp);
return first_seg;
@@ -562,6 +574,10 @@ dpaa_eth_fd_to_mbuf(const struct qm_fd *fd, uint32_t ifid)
mbuf->ol_flags = 0;
mbuf->next = NULL;
rte_mbuf_refcnt_set(mbuf, 1);
+#ifdef RTE_LIBRTE_MEMPOOL_DEBUG
+ rte_mempool_check_cookies(rte_mempool_from_obj((void *)mbuf),
+ (void **)&mbuf, 1, 1);
+#endif
dpaa_eth_packet_info(mbuf, mbuf->buf_addr);
return mbuf;
@@ -676,6 +692,10 @@ dpaa_rx_cb_no_prefetch(struct qman_fq **fq, struct qm_dqrr_entry **dqrr,
mbuf->ol_flags = 0;
mbuf->next = NULL;
rte_mbuf_refcnt_set(mbuf, 1);
+#ifdef RTE_LIBRTE_MEMPOOL_DEBUG
+ rte_mempool_check_cookies(rte_mempool_from_obj((void *)mbuf),
+ (void **)&mbuf, 1, 1);
+#endif
dpaa_eth_packet_info(mbuf, mbuf->buf_addr);
dpaa_display_frame_info(fd, fq[0]->fqid, true);
if (dpaa_ieee_1588) {
@@ -722,6 +742,10 @@ dpaa_rx_cb(struct qman_fq **fq, struct qm_dqrr_entry **dqrr,
mbuf->ol_flags = 0;
mbuf->next = NULL;
rte_mbuf_refcnt_set(mbuf, 1);
+#ifdef RTE_LIBRTE_MEMPOOL_DEBUG
+ rte_mempool_check_cookies(rte_mempool_from_obj((void *)mbuf),
+ (void **)&mbuf, 1, 1);
+#endif
dpaa_eth_packet_info(mbuf, mbuf->buf_addr);
dpaa_display_frame_info(fd, fq[0]->fqid, true);
if (dpaa_ieee_1588) {
@@ -972,6 +996,10 @@ dpaa_eth_mbuf_to_sg_fd(struct rte_mbuf *mbuf,
return -1;
}
+#ifdef RTE_LIBRTE_MEMPOOL_DEBUG
+ rte_mempool_check_cookies(rte_mempool_from_obj((void *)temp),
+ (void **)&temp, 1, 0);
+#endif
fd->cmd = 0;
fd->opaque_addr = 0;
@@ -1017,6 +1045,10 @@ dpaa_eth_mbuf_to_sg_fd(struct rte_mbuf *mbuf,
} else {
sg_temp->bpid =
DPAA_MEMPOOL_TO_BPID(cur_seg->pool);
+#ifdef RTE_LIBRTE_MEMPOOL_DEBUG
+ rte_mempool_check_cookies(rte_mempool_from_obj((void *)cur_seg),
+ (void **)&cur_seg, 1, 0);
+#endif
}
} else if (RTE_MBUF_HAS_EXTBUF(cur_seg)) {
free_buf[*free_count].seg = cur_seg;
@@ -1074,6 +1106,10 @@ tx_on_dpaa_pool_unsegmented(struct rte_mbuf *mbuf,
* released by BMAN.
*/
DPAA_MBUF_TO_CONTIG_FD(mbuf, fd_arr, bp_info->bpid);
+#ifdef RTE_LIBRTE_MEMPOOL_DEBUG
+ rte_mempool_check_cookies(rte_mempool_from_obj((void *)mbuf),
+ (void **)&mbuf, 1, 0);
+#endif
}
} else if (RTE_MBUF_HAS_EXTBUF(mbuf)) {
buf_to_free[*free_count].seg = mbuf;
@@ -1302,6 +1338,10 @@ dpaa_eth_queue_tx(void *q, struct rte_mbuf **bufs, uint16_t nb_bufs)
DPAA_TX_CKSUM_OFFLOAD_MASK)
dpaa_unsegmented_checksum(mbuf,
&fd_arr[loop]);
+#ifdef RTE_LIBRTE_MEMPOOL_DEBUG
+ rte_mempool_check_cookies(rte_mempool_from_obj((void *)mbuf),
+ (void **)&mbuf, 1, 0);
+#endif
continue;
}
} else {
--
2.25.1
* [PATCH v2 14/18] net/dpaa: add Tx rate limiting DPAA PMD API
2024-08-23 7:32 ` [PATCH v2 00/18] " Hemant Agrawal
` (12 preceding siblings ...)
2024-08-23 7:32 ` [PATCH v2 13/18] net/dpaa: support mempool debug Hemant Agrawal
@ 2024-08-23 7:32 ` Hemant Agrawal
2024-09-22 3:14 ` Ferruh Yigit
2024-08-23 7:32 ` [PATCH v2 15/18] bus/dpaa: add OH port mode for dpaa eth Hemant Agrawal
` (6 subsequent siblings)
20 siblings, 1 reply; 129+ messages in thread
From: Hemant Agrawal @ 2024-08-23 7:32 UTC (permalink / raw)
To: dev; +Cc: ferruh.yigit, Vinod Pullabhatla, Rohit Raj
From: Vinod Pullabhatla <vinod.pullabhatla@nxp.com>
Add support to set the Tx rate on the DPAA platform through a PMD API.
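For illustration, an application could cap a port's Tx rate with the new API as
follows (the helper name and the burst/rate values are arbitrary examples):

#include <stdio.h>
#include <rte_pmd_dpaa.h>

static int cap_tx_rate(uint16_t port_id)
{
	/* Example values: 64 KB max burst, 1000000 Kb/s (~1 Gb/s) max rate.
	 * Calling again with burst == 0 or rate == 0 removes the limit.
	 */
	int ret = rte_pmd_dpaa_port_set_rate_limit(port_id, 64, 1000000);

	if (ret < 0)
		printf("Tx rate limit failed: %d\n", ret);
	return ret;
}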
Signed-off-by: Vinod Pullabhatla <vinod.pullabhatla@nxp.com>
Signed-off-by: Rohit Raj <rohit.raj@nxp.com>
---
.mailmap | 1 +
drivers/net/dpaa/dpaa_flow.c | 95 +++++++++++++++++++++++++---
drivers/net/dpaa/fmlib/fm_lib.c | 32 +++++++++-
drivers/net/dpaa/fmlib/fm_port_ext.h | 2 +-
drivers/net/dpaa/rte_pmd_dpaa.h | 25 +++++++-
drivers/net/dpaa/version.map | 7 ++
6 files changed, 151 insertions(+), 11 deletions(-)
diff --git a/.mailmap b/.mailmap
index 4a508bafad..cb0fd52404 100644
--- a/.mailmap
+++ b/.mailmap
@@ -1562,6 +1562,7 @@ Vincent Jardin <vincent.jardin@6wind.com>
Vincent Li <vincent.mc.li@gmail.com>
Vincent S. Cojot <vcojot@redhat.com>
Vinh Tran <vinh.t.tran10@gmail.com>
+Vinod Pullabhatla <vinod.pullabhatla@nxp.com>
Vipin Padmam Ramesh <vipinp@vmware.com>
Vipin Varghese <vipin.varghese@amd.com> <vipin.varghese@intel.com>
Vipul Ashri <vipul.ashri@oracle.com>
diff --git a/drivers/net/dpaa/dpaa_flow.c b/drivers/net/dpaa/dpaa_flow.c
index 082bd5d014..dfc81e4e43 100644
--- a/drivers/net/dpaa/dpaa_flow.c
+++ b/drivers/net/dpaa/dpaa_flow.c
@@ -13,6 +13,7 @@
#include <rte_dpaa_logs.h>
#include <fmlib/fm_port_ext.h>
#include <fmlib/fm_vsp_ext.h>
+#include <rte_pmd_dpaa.h>
#define DPAA_MAX_NUM_ETH_DEV 8
@@ -29,6 +30,11 @@ return &scheme_params->param.key_ext_and_hash.extract_array[hdr_idx];
#define SCH_EXT_FULL_FLD(scheme_params, hdr_idx) \
SCH_EXT_HDR(scheme_params, hdr_idx).extract_by_hdr_type.full_field
+/* FMAN mac indexes mappings (0 is unused, first 8 are for 1G, next for 10G
+ * ports).
+ */
+const uint8_t mac_idx[] = {-1, 0, 1, 2, 3, 4, 5, 6, 7, 0, 1};
+
/* FM global info */
struct dpaa_fm_info {
t_handle fman_handle;
@@ -649,7 +655,7 @@ static inline int set_pcd_netenv_scheme(struct dpaa_if *dpaa_intf,
}
-static inline int get_port_type(struct fman_if *fif)
+static inline int get_rx_port_type(struct fman_if *fif)
{
/* For 1G fm-mac9 and fm-mac10 ports, configure the VSP as 10G
* ports so that kernel can configure correct port.
@@ -668,6 +674,19 @@ static inline int get_port_type(struct fman_if *fif)
return -1;
}
+static inline int get_tx_port_type(struct fman_if *fif)
+{
+ if (fif->mac_type == fman_mac_1g)
+ return e_FM_PORT_TYPE_TX;
+ else if (fif->mac_type == fman_mac_2_5g)
+ return e_FM_PORT_TYPE_TX_2_5G;
+ else if (fif->mac_type == fman_mac_10g)
+ return e_FM_PORT_TYPE_TX_10G;
+
+ DPAA_PMD_ERR("MAC type unsupported");
+ return -1;
+}
+
static inline int set_fm_port_handle(struct dpaa_if *dpaa_intf,
uint64_t req_dist_set,
struct fman_if *fif)
@@ -676,17 +695,12 @@ static inline int set_fm_port_handle(struct dpaa_if *dpaa_intf,
ioc_fm_pcd_net_env_params_t dist_units;
PMD_INIT_FUNC_TRACE();
- /* FMAN mac indexes mappings (0 is unused,
- * first 8 are for 1G, next for 10G ports
- */
- uint8_t mac_idx[] = {-1, 0, 1, 2, 3, 4, 5, 6, 7, 0, 1};
-
/* Memset FM port params */
memset(&fm_port_params, 0, sizeof(fm_port_params));
/* Set FM port params */
fm_port_params.h_fm = fm_info.fman_handle;
- fm_port_params.port_type = get_port_type(fif);
+ fm_port_params.port_type = get_rx_port_type(fif);
fm_port_params.port_id = mac_idx[fif->mac_idx];
/* FM PORT Open */
@@ -949,7 +963,6 @@ static int dpaa_port_vsp_configure(struct dpaa_if *dpaa_intf,
{
t_fm_vsp_params vsp_params;
t_fm_buffer_prefix_content buf_prefix_cont;
- uint8_t mac_idx[] = {-1, 0, 1, 2, 3, 4, 5, 6, 7, 0, 1};
uint8_t idx = mac_idx[fif->mac_idx];
int ret;
@@ -1079,3 +1092,69 @@ int dpaa_port_vsp_cleanup(struct dpaa_if *dpaa_intf, struct fman_if *fif)
return E_OK;
}
+
+int rte_pmd_dpaa_port_set_rate_limit(uint16_t port_id, uint16_t burst,
+ uint32_t rate)
+{
+ t_fm_port_rate_limit port_rate_limit;
+ bool port_handle_exists = true;
+ void *handle;
+ uint32_t ret;
+ struct rte_eth_dev *dev;
+ struct dpaa_if *dpaa_intf;
+ struct fman_if *fif;
+
+ RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
+ dev = &rte_eth_devices[port_id];
+ dpaa_intf = dev->data->dev_private;
+ fif = dev->process_private;
+
+ memset(&port_rate_limit, 0, sizeof(port_rate_limit));
+ port_rate_limit.max_burst_size = burst;
+ port_rate_limit.rate_limit = rate;
+
+ DPAA_PMD_DEBUG("Setting Rate Limiter for port:%s Max Burst =%u Max Rate =%u\n",
+ dpaa_intf->name, burst, rate);
+
+ if (!dpaa_intf->port_handle) {
+ t_fm_port_params fm_port_params;
+
+ /* Memset FM port params */
+ memset(&fm_port_params, 0, sizeof(fm_port_params));
+
+ /* Set FM port params */
+ fm_port_params.h_fm = fm_open(0);
+ fm_port_params.port_type = get_tx_port_type(fif);
+ fm_port_params.port_id = mac_idx[fif->mac_idx];
+
+ /* FM PORT Open */
+ handle = fm_port_open(&fm_port_params);
+ fm_close(fm_port_params.h_fm);
+ if (!handle) {
+ DPAA_PMD_ERR("Can't open handle %p\n",
+ fm_info.fman_handle);
+ return -ENODEV;
+ }
+
+ port_handle_exists = false;
+ } else {
+ handle = dpaa_intf->port_handle;
+ }
+
+ if (burst == 0 || rate == 0)
+ ret = fm_port_delete_rate_limit(handle);
+ else
+ ret = fm_port_set_rate_limit(handle, &port_rate_limit);
+
+ if (ret) {
+ DPAA_PMD_ERR("Failed to set rate limit ret = %#x\n", -ret);
+ return -ret;
+ }
+
+ DPAA_PMD_DEBUG("FM_PORT_SetRateLimit ret = %#x\n", -ret);
+
+ if (!port_handle_exists)
+ fm_port_close(handle);
+
+ return 0;
+}
diff --git a/drivers/net/dpaa/fmlib/fm_lib.c b/drivers/net/dpaa/fmlib/fm_lib.c
index 68b519ff8a..4b7dd38496 100644
--- a/drivers/net/dpaa/fmlib/fm_lib.c
+++ b/drivers/net/dpaa/fmlib/fm_lib.c
@@ -1,6 +1,6 @@
/* SPDX-License-Identifier: BSD-3-Clause
* Copyright 2008-2016 Freescale Semiconductor Inc.
- * Copyright 2017-2020 NXP
+ * Copyright 2017-2020,2022 NXP
*/
#include <stdio.h>
@@ -558,3 +558,33 @@ get_device_id(t_handle h_dev)
return (t_handle)p_dev->id;
}
+
+uint32_t
+fm_port_delete_rate_limit(t_handle h_fm_port)
+{
+ t_device *p_dev = (t_device *)h_fm_port;
+
+ _fml_dbg("Calling...\n");
+
+ if (ioctl(p_dev->fd, FM_PORT_IOC_REMOVE_RATE_LIMIT))
+ RETURN_ERROR(MINOR, E_INVALID_OPERATION, NO_MSG);
+
+ _fml_dbg("Finishing.\n");
+
+ return E_OK;
+}
+
+uint32_t
+fm_port_set_rate_limit(t_handle h_fm_port, t_fm_port_rate_limit *p_rate_limit)
+{
+ t_device *p_dev = (t_device *)h_fm_port;
+
+ _fml_dbg("Calling...\n");
+
+ if (ioctl(p_dev->fd, FM_PORT_IOC_SET_RATE_LIMIT, p_rate_limit))
+ RETURN_ERROR(MINOR, E_INVALID_OPERATION, NO_MSG);
+
+ _fml_dbg("Finishing.\n");
+
+ return E_OK;
+}
diff --git a/drivers/net/dpaa/fmlib/fm_port_ext.h b/drivers/net/dpaa/fmlib/fm_port_ext.h
index bb2e00222e..f1cbf37de3 100644
--- a/drivers/net/dpaa/fmlib/fm_port_ext.h
+++ b/drivers/net/dpaa/fmlib/fm_port_ext.h
@@ -274,7 +274,7 @@ typedef struct ioc_fm_port_congestion_groups_t {
* @Return 0 on success; error code otherwise.
*/
#define FM_PORT_IOC_SET_RATE_LIMIT \
- IOW(FM_IOC_TYPE_BASE, FM_PORT_IOC_NUM(3), ioc_fm_port_rate_limit_t)
+ _IOW(FM_IOC_TYPE_BASE, FM_PORT_IOC_NUM(3), ioc_fm_port_rate_limit_t)
/*
* @Function fm_port_delete_rate_limit
diff --git a/drivers/net/dpaa/rte_pmd_dpaa.h b/drivers/net/dpaa/rte_pmd_dpaa.h
index ec45633ba2..b48adff570 100644
--- a/drivers/net/dpaa/rte_pmd_dpaa.h
+++ b/drivers/net/dpaa/rte_pmd_dpaa.h
@@ -1,5 +1,5 @@
/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright 2018 NXP
+ * Copyright 2018,2022 NXP
*/
#ifndef _PMD_DPAA_H_
@@ -31,4 +31,27 @@
int
rte_pmd_dpaa_set_tx_loopback(uint16_t port, uint8_t on);
+/**
+ * Set TX rate limit
+ *
+ * @param port_id
+ * The port identifier of the Ethernet device.
+ * @param burst
+ * Max burst size(KBytes) of the Ethernet device.
+ * 0 - Disable TX rate limit.
+ * @param rate
+ * Max rate(Kb/sec) of the Ethernet device.
+ * 0 - Disable TX rate limit.
+ * @return
+ * 0 - if successful.
+ * <0 - if failed, with proper error code.
+ *
+ * @warning
+ * @b EXPERIMENTAL: this API may change, or be removed, without prior notice
+ */
+__rte_experimental
+int
+rte_pmd_dpaa_port_set_rate_limit(uint16_t port_id, uint16_t burst,
+ uint32_t rate);
+
#endif /* _PMD_DPAA_H_ */
diff --git a/drivers/net/dpaa/version.map b/drivers/net/dpaa/version.map
index 3fdb63caf3..24a28ce649 100644
--- a/drivers/net/dpaa/version.map
+++ b/drivers/net/dpaa/version.map
@@ -6,6 +6,13 @@ DPDK_25 {
local: *;
};
+EXPERIMENTAL {
+ global:
+
+ # added in 24.11
+ rte_pmd_dpaa_port_set_rate_limit;
+};
+
INTERNAL {
global:
--
2.25.1
* Re: [PATCH v2 14/18] net/dpaa: add Tx rate limiting DPAA PMD API
2024-08-23 7:32 ` [PATCH v2 14/18] net/dpaa: add Tx rate limiting DPAA PMD API Hemant Agrawal
@ 2024-09-22 3:14 ` Ferruh Yigit
2024-09-22 4:40 ` Hemant Agrawal
0 siblings, 1 reply; 129+ messages in thread
From: Ferruh Yigit @ 2024-09-22 3:14 UTC (permalink / raw)
To: Hemant Agrawal, dev; +Cc: ferruh.yigit, Vinod Pullabhatla, Rohit Raj
On 8/23/2024 8:32 AM, Hemant Agrawal wrote:
> diff --git a/drivers/net/dpaa/version.map b/drivers/net/dpaa/version.map
> index 3fdb63caf3..24a28ce649 100644
> --- a/drivers/net/dpaa/version.map
> +++ b/drivers/net/dpaa/version.map
> @@ -6,6 +6,13 @@ DPDK_25 {
> local: *;
> };
>
> +EXPERIMENTAL {
> + global:
> +
> + # added in 24.11
> + rte_pmd_dpaa_port_set_rate_limit;
> +};
> +
> INTERNAL {
> global:
>
A PMD-specific API needs to be justified. Can't we use the TM framework for
this; does TM need to be improved to add this support?
What do you think about sending the rest of the set without this patch, so
the others can progress and this one can be discussed separately (assuming
there is no dependency)?
^ permalink raw reply [flat|nested] 129+ messages in thread
* RE: [PATCH v2 14/18] net/dpaa: add Tx rate limiting DPAA PMD API
2024-09-22 3:14 ` Ferruh Yigit
@ 2024-09-22 4:40 ` Hemant Agrawal
2024-09-22 13:10 ` Ferruh Yigit
2024-09-22 13:27 ` Ferruh Yigit
0 siblings, 2 replies; 129+ messages in thread
From: Hemant Agrawal @ 2024-09-22 4:40 UTC (permalink / raw)
To: Ferruh Yigit, dev; +Cc: ferruh.yigit, Vinod Pullabhatla, Rohit Raj
Hi Ferruh,
> -----Original Message-----
> From: Ferruh Yigit <ferruh.yigit@amd.com>
> Sent: Sunday, September 22, 2024 8:44 AM
> To: Hemant Agrawal <hemant.agrawal@nxp.com>; dev@dpdk.org
> Cc: ferruh.yigit@intel.com; Vinod Pullabhatla <vinod.pullabhatla@nxp.com>;
> Rohit Raj <rohit.raj@nxp.com>
> Subject: Re: [PATCH v2 14/18] net/dpaa: add Tx rate limiting DPAA PMD API
> Importance: High
>
> On 8/23/2024 8:32 AM, Hemant Agrawal wrote:
> > diff --git a/drivers/net/dpaa/version.map
> > b/drivers/net/dpaa/version.map index 3fdb63caf3..24a28ce649 100644
> > --- a/drivers/net/dpaa/version.map
> > +++ b/drivers/net/dpaa/version.map
> > @@ -6,6 +6,13 @@ DPDK_25 {
> > local: *;
> > };
> >
> > +EXPERIMENTAL {
> > + global:
> > +
> > + # added in 24.11
> > + rte_pmd_dpaa_port_set_rate_limit;
> > +};
> > +
> > INTERNAL {
> > global:
> >
>
> PMD specific API needs to be justified, can't we use TM framework for this,
> does TM needs to be improved for this support?
>
> What do you think to send the rest of the set without this patch, so they can
> progress, and this one can be discussed separately (assuming there is no
> dependency).
[Hemant] I think I replied to your earlier concerns.
We are yet to implement the TM framework for DPAA1, but that involves mostly egress QoS.
This one is an additional capability to rate-limit the ingress port, a kind of policing on the Rx side.
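For reference, application usage would look roughly like the sketch below;
the port ID, burst and rate values are purely illustrative:

#include <rte_ethdev.h>
#include <rte_pmd_dpaa.h>

static int
limit_dpaa_port(uint16_t port_id)
{
	int ret;

	/* Illustrative values: 64 KB burst, ~1 Gb/s (1000000 Kb/sec) rate. */
	ret = rte_pmd_dpaa_port_set_rate_limit(port_id, 64, 1000 * 1000);
	if (ret < 0)
		return ret;

	/* Passing 0 for burst and rate disables the limit again. */
	return rte_pmd_dpaa_port_set_rate_limit(port_id, 0, 0);
}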
However, if you still disagree, please apply the series without this patch.
^ permalink raw reply [flat|nested] 129+ messages in thread
* Re: [PATCH v2 14/18] net/dpaa: add Tx rate limiting DPAA PMD API
2024-09-22 4:40 ` Hemant Agrawal
@ 2024-09-22 13:10 ` Ferruh Yigit
2024-09-22 13:27 ` Ferruh Yigit
1 sibling, 0 replies; 129+ messages in thread
From: Ferruh Yigit @ 2024-09-22 13:10 UTC (permalink / raw)
To: Hemant Agrawal, dev; +Cc: ferruh.yigit, Vinod Pullabhatla, Rohit Raj
On 9/22/2024 5:40 AM, Hemant Agrawal wrote:
> HI Ferruh,
>
>> -----Original Message-----
>> From: Ferruh Yigit <ferruh.yigit@amd.com>
>> Sent: Sunday, September 22, 2024 8:44 AM
>> To: Hemant Agrawal <hemant.agrawal@nxp.com>; dev@dpdk.org
>> Cc: ferruh.yigit@intel.com; Vinod Pullabhatla <vinod.pullabhatla@nxp.com>;
>> Rohit Raj <rohit.raj@nxp.com>
>> Subject: Re: [PATCH v2 14/18] net/dpaa: add Tx rate limiting DPAA PMD API
>> Importance: High
>>
>> On 8/23/2024 8:32 AM, Hemant Agrawal wrote:
>>> diff --git a/drivers/net/dpaa/version.map
>>> b/drivers/net/dpaa/version.map index 3fdb63caf3..24a28ce649 100644
>>> --- a/drivers/net/dpaa/version.map
>>> +++ b/drivers/net/dpaa/version.map
>>> @@ -6,6 +6,13 @@ DPDK_25 {
>>> local: *;
>>> };
>>>
>>> +EXPERIMENTAL {
>>> + global:
>>> +
>>> + # added in 24.11
>>> + rte_pmd_dpaa_port_set_rate_limit;
>>> +};
>>> +
>>> INTERNAL {
>>> global:
>>>
>>
>> PMD specific API needs to be justified, can't we use TM framework for this,
>> does TM needs to be improved for this support?
>>
>> What do you think to send the rest of the set without this patch, so they can
>> progress, and this one can be discussed separately (assuming there is no
>> dependency).
>
> [Hemant] I think, I replied to your earlier concerns.
>
> We are yet to implement TM framework for DPAA1. But that involves more of egress QoS.
>
> This one is additional capability to limit the ingress port. Kind of policing in Rx side.
>
> However, if you still disagree. Please apply the series without this patch.
>
Let's first identify what is missing in the ethdev layer for this, and what
effort is required to cover it in ethdev. Based on the findings, we can
continue with a PMD API as a last resort.
I will continue with the patch series without this patch.
^ permalink raw reply [flat|nested] 129+ messages in thread
* Re: [PATCH v2 14/18] net/dpaa: add Tx rate limiting DPAA PMD API
2024-09-22 4:40 ` Hemant Agrawal
2024-09-22 13:10 ` Ferruh Yigit
@ 2024-09-22 13:27 ` Ferruh Yigit
2024-09-22 15:23 ` Ferruh Yigit
1 sibling, 1 reply; 129+ messages in thread
From: Ferruh Yigit @ 2024-09-22 13:27 UTC (permalink / raw)
To: Hemant Agrawal, dev; +Cc: ferruh.yigit, Vinod Pullabhatla, Rohit Raj
On 9/22/2024 5:40 AM, Hemant Agrawal wrote:
> HI Ferruh,
>
>> -----Original Message-----
>> From: Ferruh Yigit <ferruh.yigit@amd.com>
>> Sent: Sunday, September 22, 2024 8:44 AM
>> To: Hemant Agrawal <hemant.agrawal@nxp.com>; dev@dpdk.org
>> Cc: ferruh.yigit@intel.com; Vinod Pullabhatla <vinod.pullabhatla@nxp.com>;
>> Rohit Raj <rohit.raj@nxp.com>
>> Subject: Re: [PATCH v2 14/18] net/dpaa: add Tx rate limiting DPAA PMD API
>> Importance: High
>>
>> On 8/23/2024 8:32 AM, Hemant Agrawal wrote:
>>> diff --git a/drivers/net/dpaa/version.map
>>> b/drivers/net/dpaa/version.map index 3fdb63caf3..24a28ce649 100644
>>> --- a/drivers/net/dpaa/version.map
>>> +++ b/drivers/net/dpaa/version.map
>>> @@ -6,6 +6,13 @@ DPDK_25 {
>>> local: *;
>>> };
>>>
>>> +EXPERIMENTAL {
>>> + global:
>>> +
>>> + # added in 24.11
>>> + rte_pmd_dpaa_port_set_rate_limit;
>>> +};
>>> +
>>> INTERNAL {
>>> global:
>>>
>>
>> PMD specific API needs to be justified, can't we use TM framework for this,
>> does TM needs to be improved for this support?
>>
>> What do you think to send the rest of the set without this patch, so they can
>> progress, and this one can be discussed separately (assuming there is no
>> dependency).
>
> [Hemant] I think, I replied to your earlier concerns.
>
> We are yet to implement TM framework for DPAA1. But that involves more of egress QoS.
>
> This one is additional capability to limit the ingress port. Kind of policing in Rx side.
>
> However, if you still disagree. Please apply the series without this patch.
>
Let's first identify what is missing in the ethdev layer for this, and what
effort is required to cover it in ethdev. Based on the findings, we can
continue with a PMD API as a last resort.
I will continue with the patch series without this patch.
^ permalink raw reply [flat|nested] 129+ messages in thread
* Re: [PATCH v2 14/18] net/dpaa: add Tx rate limiting DPAA PMD API
2024-09-22 13:27 ` Ferruh Yigit
@ 2024-09-22 15:23 ` Ferruh Yigit
0 siblings, 0 replies; 129+ messages in thread
From: Ferruh Yigit @ 2024-09-22 15:23 UTC (permalink / raw)
To: Hemant Agrawal, dev; +Cc: ferruh.yigit, Vinod Pullabhatla, Rohit Raj
On 9/22/2024 2:27 PM, Ferruh Yigit wrote:
> On 9/22/2024 5:40 AM, Hemant Agrawal wrote:
>> HI Ferruh,
>>
>>> -----Original Message-----
>>> From: Ferruh Yigit <ferruh.yigit@amd.com>
>>> Sent: Sunday, September 22, 2024 8:44 AM
>>> To: Hemant Agrawal <hemant.agrawal@nxp.com>; dev@dpdk.org
>>> Cc: ferruh.yigit@intel.com; Vinod Pullabhatla <vinod.pullabhatla@nxp.com>;
>>> Rohit Raj <rohit.raj@nxp.com>
>>> Subject: Re: [PATCH v2 14/18] net/dpaa: add Tx rate limiting DPAA PMD API
>>> Importance: High
>>>
>>> On 8/23/2024 8:32 AM, Hemant Agrawal wrote:
>>>> diff --git a/drivers/net/dpaa/version.map
>>>> b/drivers/net/dpaa/version.map index 3fdb63caf3..24a28ce649 100644
>>>> --- a/drivers/net/dpaa/version.map
>>>> +++ b/drivers/net/dpaa/version.map
>>>> @@ -6,6 +6,13 @@ DPDK_25 {
>>>> local: *;
>>>> };
>>>>
>>>> +EXPERIMENTAL {
>>>> + global:
>>>> +
>>>> + # added in 24.11
>>>> + rte_pmd_dpaa_port_set_rate_limit;
>>>> +};
>>>> +
>>>> INTERNAL {
>>>> global:
>>>>
>>>
>>> PMD specific API needs to be justified, can't we use TM framework for this,
>>> does TM needs to be improved for this support?
>>>
>>> What do you think to send the rest of the set without this patch, so they can
>>> progress, and this one can be discussed separately (assuming there is no
>>> dependency).
>>
>> [Hemant] I think, I replied to your earlier concerns.
>>
>> We are yet to implement TM framework for DPAA1. But that involves more of egress QoS.
>>
>> This one is additional capability to limit the ingress port. Kind of policing in Rx side.
>>
>> However, if you still disagree. Please apply the series without this patch.
>>
>
> Let's detect what is missing in ethdev layer for it, and what is the
> required effort to cover it in ethdev, first. Based on findings, we can
> continue with PMD API as last resort.
>
> I will continue with patch series without this path.
>
Hi Hemant,
I removed commit 14/18 from the set, but it impacts other patches. I can
fix the build but can't verify the functionality, so it is probably better
if you send a new version without patch 14/18.
Btw, while resolving the conflict I noticed that patch 15/18 renames
'fman_offline_internal' -> 'fman_offline', and the next patch (16/18)
renames it back to 'fman_offline_internal'. This creates a lot of noise in
both patches; can it be prevented? I will comment on the patch.
^ permalink raw reply [flat|nested] 129+ messages in thread
* [PATCH v2 15/18] bus/dpaa: add OH port mode for dpaa eth
2024-08-23 7:32 ` [PATCH v2 00/18] " Hemant Agrawal
` (13 preceding siblings ...)
2024-08-23 7:32 ` [PATCH v2 14/18] net/dpaa: add Tx rate limiting DPAA PMD API Hemant Agrawal
@ 2024-08-23 7:32 ` Hemant Agrawal
2024-09-22 15:24 ` Ferruh Yigit
2024-08-23 7:32 ` [PATCH v2 16/18] bus/dpaa: add ONIC port mode for the DPAA eth Hemant Agrawal
` (5 subsequent siblings)
20 siblings, 1 reply; 129+ messages in thread
From: Hemant Agrawal @ 2024-08-23 7:32 UTC (permalink / raw)
To: dev; +Cc: ferruh.yigit, Rohit Raj
From: Rohit Raj <rohit.raj@nxp.com>
The NXP DPAA architecture supports the concept of a DPAA
port as an Offline Port, meaning it is not connected to an actual MAC.
This is a hardware-assisted IPC mechanism for communicating between
two applications.
An Offline (O/H) port is a type of hardware port which is able to dequeue
from and enqueue to a QMan queue. The FMan applies a Parse Classify
Distribute (PCD) flow and (if configured to do so) enqueues the frame
back into a QMan queue.
The FMan is able to copy the frame into new buffers and enqueue it back
to the QMan. This means these ports can be used to send and receive
packets between two applications.
An O/H port has two queues: one to receive and one to send packets.
It loops back on the Tx queue all the packets which are received
on the Rx queue.
This property is completely driven by the device tree. During the
DPAA bus scan, based on the platform device properties in the
device tree, the port can be classified as an OH port.
This patch adds support in the driver to use a dpaa eth port
in OH mode with DPDK applications.
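Since an OH port is exposed as a regular ethdev on the DPAA bus (named
fm<fman>-oh<port>, e.g. "fm1-oh1"), no new API is needed on the application
side. A rough usage sketch, with the device name and queue counts being
illustrative only:

#include <string.h>
#include <rte_ethdev.h>

static int
setup_oh_port(void)
{
	struct rte_eth_conf conf;
	uint16_t port_id;
	int ret;

	memset(&conf, 0, sizeof(conf));

	/* Name follows the fm<fman>-oh<port> convention used by the DPAA bus. */
	ret = rte_eth_dev_get_port_by_name("fm1-oh1", &port_id);
	if (ret < 0)
		return ret;

	ret = rte_eth_dev_configure(port_id, 1, 1, &conf);
	if (ret < 0)
		return ret;

	/* Rx/Tx queue setup and rte_eth_dev_start() follow as for any other
	 * port; frames sent on the Tx queue come back on the Rx queue.
	 */
	return 0;
}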
Signed-off-by: Rohit Raj <rohit.raj@nxp.com>
---
doc/guides/nics/dpaa.rst | 26 ++-
drivers/bus/dpaa/base/fman/fman.c | 261 ++++++++++++++--------
drivers/bus/dpaa/base/fman/fman_hw.c | 24 +-
drivers/bus/dpaa/base/fman/netcfg_layer.c | 19 +-
drivers/bus/dpaa/dpaa_bus.c | 23 +-
drivers/bus/dpaa/include/fman.h | 33 ++-
drivers/net/dpaa/dpaa_ethdev.c | 85 ++++++-
drivers/net/dpaa/dpaa_ethdev.h | 6 +
drivers/net/dpaa/dpaa_flow.c | 39 ++--
drivers/net/dpaa/dpaa_fmc.c | 3 +-
10 files changed, 376 insertions(+), 143 deletions(-)
diff --git a/doc/guides/nics/dpaa.rst b/doc/guides/nics/dpaa.rst
index edf7a7e350..47dcce334c 100644
--- a/doc/guides/nics/dpaa.rst
+++ b/doc/guides/nics/dpaa.rst
@@ -1,5 +1,5 @@
.. SPDX-License-Identifier: BSD-3-Clause
- Copyright 2017,2020 NXP
+ Copyright 2017,2020-2024 NXP
DPAA Poll Mode Driver
@@ -136,6 +136,8 @@ RTE framework and DPAA internal components/drivers.
The Ethernet driver is bound to a FMAN port and implements the interfaces
needed to connect the DPAA network interface to the network stack.
Each FMAN Port corresponds to a DPDK network interface.
+- PMD also support OH mode, where the port works as a HW assisted
+ virtual port without actually connecting to a Physical MAC.
Features
@@ -149,6 +151,8 @@ Features
- Checksum offload
- Promiscuous mode
- IEEE1588 PTP
+ - OH Port for inter application communication
+
DPAA Mempool Driver
~~~~~~~~~~~~~~~~~~~
@@ -326,6 +330,26 @@ FMLIB
`Kernel FMD Driver
<https://source.codeaurora.org/external/qoriq/qoriq-components/linux/tree/drivers/net/ethernet/freescale/sdk_fman?h=linux-4.19-rt>`_.
+OH Port
+~~~~~~~
+ Offline(O/H) port is a type of hardware port which is able to dequeue and
+ enqueue from/to a QMan queue. The FMan applies a Parse Classify Distribute (PCD)
+ flow and (if configured to do so) enqueues the frame back in a QMan queue.
+
+ The FMan is able to copy the frame into new buffers and enqueue back to the
+ QMan. This means these ports can be used to send and receive packets between two
+ applications as well.
+
+ An O/H port have two queues. One to receive and one to send the packets. It will
+ loopback all the packets on Tx queue which are received on Rx queue.
+
+
+ -------- Tx Packets ---------
+ | App | - - - - - - - - - > | O/H |
+ | | < - - - - - - - - - | Port |
+ -------- Rx Packets ---------
+
+
VSP (Virtual Storage Profile)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The storage profiled are means to provide virtualized interface. A ranges of
diff --git a/drivers/bus/dpaa/base/fman/fman.c b/drivers/bus/dpaa/base/fman/fman.c
index a79b0b75dd..f817305ab7 100644
--- a/drivers/bus/dpaa/base/fman/fman.c
+++ b/drivers/bus/dpaa/base/fman/fman.c
@@ -43,7 +43,7 @@ if_destructor(struct __fman_if *__if)
if (!__if)
return;
- if (__if->__if.mac_type == fman_offline_internal)
+ if (__if->__if.mac_type == fman_offline)
goto cleanup;
list_for_each_entry_safe(bp, tmpbp, &__if->__if.bpool_list, node) {
@@ -246,26 +246,34 @@ fman_if_init(const struct device_node *dpa_node)
uint64_t port_cell_idx_val = 0;
uint64_t ext_args_cell_idx_val = 0;
- const struct device_node *mac_node = NULL, *tx_node, *ext_args_node;
- const struct device_node *pool_node, *fman_node, *rx_node;
+ const struct device_node *mac_node = NULL, *ext_args_node;
+ const struct device_node *pool_node, *fman_node;
+ const struct device_node *rx_node = NULL, *tx_node = NULL;
+ const struct device_node *oh_node = NULL;
const uint32_t *regs_addr = NULL;
const char *mname, *fname;
const char *dname = dpa_node->full_name;
size_t lenp;
- int _errno, is_shared = 0;
+ int _errno, is_shared = 0, is_offline = 0;
const char *char_prop;
uint32_t na;
if (of_device_is_available(dpa_node) == false)
return 0;
- if (!of_device_is_compatible(dpa_node, "fsl,dpa-ethernet-init") &&
- !of_device_is_compatible(dpa_node, "fsl,dpa-ethernet")) {
+ if (of_device_is_compatible(dpa_node, "fsl,dpa-oh"))
+ is_offline = 1;
+
+ if (!of_device_is_compatible(dpa_node, "fsl,dpa-oh") &&
+ !of_device_is_compatible(dpa_node, "fsl,dpa-ethernet-init") &&
+ !of_device_is_compatible(dpa_node, "fsl,dpa-ethernet")) {
return 0;
}
- rprop = "fsl,qman-frame-queues-rx";
- mprop = "fsl,fman-mac";
+ rprop = is_offline ? "fsl,qman-frame-queues-oh" :
+ "fsl,qman-frame-queues-rx";
+ mprop = is_offline ? "fsl,fman-oh-port" :
+ "fsl,fman-mac";
/* Obtain the MAC node used by this interface except macless */
mac_phandle = of_get_property(dpa_node, mprop, &lenp);
@@ -281,27 +289,43 @@ fman_if_init(const struct device_node *dpa_node)
}
mname = mac_node->full_name;
- /* Extract the Rx and Tx ports */
- ports_phandle = of_get_property(mac_node, "fsl,port-handles",
- &lenp);
- if (!ports_phandle)
- ports_phandle = of_get_property(mac_node, "fsl,fman-ports",
+ if (!is_offline) {
+ /* Extract the Rx and Tx ports */
+ ports_phandle = of_get_property(mac_node, "fsl,port-handles",
&lenp);
- if (!ports_phandle) {
- FMAN_ERR(-EINVAL, "%s: no fsl,port-handles",
- mname);
- return -EINVAL;
- }
- assert(lenp == (2 * sizeof(phandle)));
- rx_node = of_find_node_by_phandle(ports_phandle[0]);
- if (!rx_node) {
- FMAN_ERR(-ENXIO, "%s: bad fsl,port-handle[0]", mname);
- return -ENXIO;
- }
- tx_node = of_find_node_by_phandle(ports_phandle[1]);
- if (!tx_node) {
- FMAN_ERR(-ENXIO, "%s: bad fsl,port-handle[1]", mname);
- return -ENXIO;
+ if (!ports_phandle)
+ ports_phandle = of_get_property(mac_node, "fsl,fman-ports",
+ &lenp);
+ if (!ports_phandle) {
+ FMAN_ERR(-EINVAL, "%s: no fsl,port-handles",
+ mname);
+ return -EINVAL;
+ }
+ assert(lenp == (2 * sizeof(phandle)));
+ rx_node = of_find_node_by_phandle(ports_phandle[0]);
+ if (!rx_node) {
+ FMAN_ERR(-ENXIO, "%s: bad fsl,port-handle[0]", mname);
+ return -ENXIO;
+ }
+ tx_node = of_find_node_by_phandle(ports_phandle[1]);
+ if (!tx_node) {
+ FMAN_ERR(-ENXIO, "%s: bad fsl,port-handle[1]", mname);
+ return -ENXIO;
+ }
+ } else {
+ /* Extract the OH ports */
+ ports_phandle = of_get_property(dpa_node, "fsl,fman-oh-port",
+ &lenp);
+ if (!ports_phandle) {
+ FMAN_ERR(-EINVAL, "%s: no fsl,fman-oh-port", dname);
+ return -EINVAL;
+ }
+ assert(lenp == (sizeof(phandle)));
+ oh_node = of_find_node_by_phandle(ports_phandle[0]);
+ if (!oh_node) {
+ FMAN_ERR(-ENXIO, "%s: bad fsl,port-handle[0]", mname);
+ return -ENXIO;
+ }
}
/* Check if the port is shared interface */
@@ -430,17 +454,19 @@ fman_if_init(const struct device_node *dpa_node)
* Set A2V, OVOM, EBD bits in contextA to allow external
* buffer deallocation by fman.
*/
- fman_dealloc_bufs_mask_hi = FMAN_V3_CONTEXTA_EN_A2V |
- FMAN_V3_CONTEXTA_EN_OVOM;
- fman_dealloc_bufs_mask_lo = FMAN_V3_CONTEXTA_EN_EBD;
+ fman_dealloc_bufs_mask_hi = DPAA_FQD_CTX_A_A2_FIELD_VALID |
+ DPAA_FQD_CTX_A_OVERRIDE_OMB;
+ fman_dealloc_bufs_mask_lo = DPAA_FQD_CTX_A2_EBD_BIT;
} else {
fman_dealloc_bufs_mask_hi = 0;
fman_dealloc_bufs_mask_lo = 0;
}
- /* Is the MAC node 1G, 2.5G, 10G? */
+ /* Is the MAC node 1G, 2.5G, 10G or offline? */
__if->__if.is_memac = 0;
- if (of_device_is_compatible(mac_node, "fsl,fman-1g-mac"))
+ if (is_offline)
+ __if->__if.mac_type = fman_offline;
+ else if (of_device_is_compatible(mac_node, "fsl,fman-1g-mac"))
__if->__if.mac_type = fman_mac_1g;
else if (of_device_is_compatible(mac_node, "fsl,fman-10g-mac"))
__if->__if.mac_type = fman_mac_10g;
@@ -468,46 +494,81 @@ fman_if_init(const struct device_node *dpa_node)
goto err;
}
- /*
- * For MAC ports, we cannot rely on cell-index. In
- * T2080, two of the 10G ports on single FMAN have same
- * duplicate cell-indexes as the other two 10G ports on
- * same FMAN. Hence, we now rely upon addresses of the
- * ports from device tree to deduce the index.
- */
+ if (!is_offline) {
+ /*
+ * For MAC ports, we cannot rely on cell-index. In
+ * T2080, two of the 10G ports on single FMAN have same
+ * duplicate cell-indexes as the other two 10G ports on
+ * same FMAN. Hence, we now rely upon addresses of the
+ * ports from device tree to deduce the index.
+ */
- _errno = fman_get_mac_index(regs_addr_host, &__if->__if.mac_idx);
- if (_errno) {
- FMAN_ERR(-EINVAL, "Invalid register address: %" PRIx64,
- regs_addr_host);
- goto err;
- }
+ _errno = fman_get_mac_index(regs_addr_host, &__if->__if.mac_idx);
+ if (_errno) {
+ FMAN_ERR(-EINVAL, "Invalid register address: %" PRIx64,
+ regs_addr_host);
+ goto err;
+ }
+ } else {
+ cell_idx = of_get_property(oh_node, "cell-index", &lenp);
+ if (!cell_idx) {
+ FMAN_ERR(-ENXIO, "%s: no cell-index)\n",
+ oh_node->full_name);
+ goto err;
+ }
+ assert(lenp == sizeof(*cell_idx));
+ cell_idx_host = of_read_number(cell_idx,
+ lenp / sizeof(phandle));
- /* Extract the MAC address for private and shared interfaces */
- mac_addr = of_get_property(mac_node, "local-mac-address",
- &lenp);
- if (!mac_addr) {
- FMAN_ERR(-EINVAL, "%s: no local-mac-address",
- mname);
- goto err;
+ __if->__if.mac_idx = cell_idx_host;
}
- memcpy(&__if->__if.mac_addr, mac_addr, ETHER_ADDR_LEN);
- /* Extract the channel ID (from tx-port-handle) */
- tx_channel_id = of_get_property(tx_node, "fsl,qman-channel-id",
- &lenp);
- if (!tx_channel_id) {
- FMAN_ERR(-EINVAL, "%s: no fsl-qman-channel-id",
- tx_node->full_name);
- goto err;
+ if (!is_offline) {
+ /* Extract the MAC address for private and shared interfaces */
+ mac_addr = of_get_property(mac_node, "local-mac-address",
+ &lenp);
+ if (!mac_addr) {
+ FMAN_ERR(-EINVAL, "%s: no local-mac-address",
+ mname);
+ goto err;
+ }
+ memcpy(&__if->__if.mac_addr, mac_addr, ETHER_ADDR_LEN);
+
+ /* Extract the channel ID (from tx-port-handle) */
+ tx_channel_id = of_get_property(tx_node, "fsl,qman-channel-id",
+ &lenp);
+ if (!tx_channel_id) {
+ FMAN_ERR(-EINVAL, "%s: no fsl-qman-channel-id",
+ tx_node->full_name);
+ goto err;
+ }
+ } else {
+ /* Extract the channel ID (from mac) */
+ tx_channel_id = of_get_property(mac_node, "fsl,qman-channel-id",
+ &lenp);
+ if (!tx_channel_id) {
+ FMAN_ERR(-EINVAL, "%s: no fsl-qman-channel-id",
+ tx_node->full_name);
+ goto err;
+ }
}
- regs_addr = of_get_address(rx_node, 0, &__if->regs_size, NULL);
+ na = of_n_addr_cells(mac_node);
+ __if->__if.tx_channel_id = of_read_number(tx_channel_id, na);
+
+ if (!is_offline)
+ regs_addr = of_get_address(rx_node, 0, &__if->regs_size, NULL);
+ else
+ regs_addr = of_get_address(oh_node, 0, &__if->regs_size, NULL);
if (!regs_addr) {
FMAN_ERR(-EINVAL, "of_get_address(%s)", mname);
goto err;
}
- phys_addr = of_translate_address(rx_node, regs_addr);
+
+ if (!is_offline)
+ phys_addr = of_translate_address(rx_node, regs_addr);
+ else
+ phys_addr = of_translate_address(oh_node, regs_addr);
if (!phys_addr) {
FMAN_ERR(-EINVAL, "of_translate_address(%s, %p)",
mname, regs_addr);
@@ -521,23 +582,27 @@ fman_if_init(const struct device_node *dpa_node)
goto err;
}
- regs_addr = of_get_address(tx_node, 0, &__if->regs_size, NULL);
- if (!regs_addr) {
- FMAN_ERR(-EINVAL, "of_get_address(%s)\n", mname);
- goto err;
- }
- phys_addr = of_translate_address(tx_node, regs_addr);
- if (!phys_addr) {
- FMAN_ERR(-EINVAL, "of_translate_address(%s, %p)\n",
- mname, regs_addr);
- goto err;
- }
- __if->tx_bmi_map = mmap(NULL, __if->regs_size,
- PROT_READ | PROT_WRITE, MAP_SHARED,
- fman_ccsr_map_fd, phys_addr);
- if (__if->tx_bmi_map == MAP_FAILED) {
- FMAN_ERR(-errno, "mmap(0x%"PRIx64")\n", phys_addr);
- goto err;
+ if (!is_offline) {
+ regs_addr = of_get_address(tx_node, 0, &__if->regs_size, NULL);
+ if (!regs_addr) {
+ FMAN_ERR(-EINVAL, "of_get_address(%s)\n", mname);
+ goto err;
+ }
+
+ phys_addr = of_translate_address(tx_node, regs_addr);
+ if (!phys_addr) {
+ FMAN_ERR(-EINVAL, "of_translate_address(%s, %p)\n",
+ mname, regs_addr);
+ goto err;
+ }
+
+ __if->tx_bmi_map = mmap(NULL, __if->regs_size,
+ PROT_READ | PROT_WRITE, MAP_SHARED,
+ fman_ccsr_map_fd, phys_addr);
+ if (__if->tx_bmi_map == MAP_FAILED) {
+ FMAN_ERR(-errno, "mmap(0x%"PRIx64")\n", phys_addr);
+ goto err;
+ }
}
if (!rtc_map) {
@@ -554,11 +619,6 @@ fman_if_init(const struct device_node *dpa_node)
__if->rtc_map = rtc_map;
}
- /* No channel ID for MAC-less */
- assert(lenp == sizeof(*tx_channel_id));
- na = of_n_addr_cells(mac_node);
- __if->__if.tx_channel_id = of_read_number(tx_channel_id, na);
-
/* Extract the Rx FQIDs. (Note, the device representation is silly,
* there are "counts" that must always be 1.)
*/
@@ -568,13 +628,26 @@ fman_if_init(const struct device_node *dpa_node)
goto err;
}
- /* Check if "fsl,qman-frame-queues-rx" in dtb file is valid entry or
- * not. A valid entry contains at least 4 entries, rx_error_queue,
- * rx_error_queue_count, fqid_rx_def and rx_error_queue_count.
+ /*
+ * Check if "fsl,qman-frame-queues-rx/oh" in dtb file is valid entry or
+ * not.
+ *
+ * A valid rx entry contains either 4 or 6 entries. Mandatory entries
+ * are rx_error_queue, rx_error_queue_count, fqid_rx_def and
+ * fqid_rx_def_count. Optional entries are fqid_rx_pcd and
+ * fqid_rx_pcd_count.
+ *
+ * A valid oh entry contains 4 entries. Those entries are
+ * rx_error_queue, rx_error_queue_count, fqid_rx_def and
+ * fqid_rx_def_count.
*/
- assert(lenp >= (4 * sizeof(phandle)));
- na = of_n_addr_cells(mac_node);
+ if (!is_offline)
+ assert(lenp == (4 * sizeof(phandle)) ||
+ lenp == (6 * sizeof(phandle)));
+ else
+ assert(lenp == (4 * sizeof(phandle)));
+
/* Get rid of endianness (issues). Convert to host byte order */
rx_phandle_host[0] = of_read_number(&rx_phandle[0], na);
rx_phandle_host[1] = of_read_number(&rx_phandle[1], na);
@@ -595,6 +668,9 @@ fman_if_init(const struct device_node *dpa_node)
__if->__if.fqid_rx_pcd_count = rx_phandle_host[5];
}
+ if (is_offline)
+ goto oh_init_done;
+
/* Extract the Tx FQIDs */
tx_phandle = of_get_property(dpa_node,
"fsl,qman-frame-queues-tx", &lenp);
@@ -706,6 +782,7 @@ fman_if_init(const struct device_node *dpa_node)
if (is_shared)
__if->__if.is_shared_mac = 1;
+oh_init_done:
fman_if_vsp_init(__if);
/* Parsing of the network interface is complete, add it to the list */
@@ -769,6 +846,10 @@ fman_finish(void)
list_for_each_entry_safe(__if, tmpif, &__ifs, __if.node) {
int _errno;
+ /* No need to disable Offline port */
+ if (__if->__if.mac_type == fman_offline)
+ continue;
+
/* disable Rx and Tx */
if ((__if->__if.mac_type == fman_mac_1g) &&
(!__if->__if.is_memac))
diff --git a/drivers/bus/dpaa/base/fman/fman_hw.c b/drivers/bus/dpaa/base/fman/fman_hw.c
index 4fc41c1ae9..1f61ae406b 100644
--- a/drivers/bus/dpaa/base/fman/fman_hw.c
+++ b/drivers/bus/dpaa/base/fman/fman_hw.c
@@ -1,6 +1,6 @@
/* SPDX-License-Identifier: BSD-3-Clause
*
- * Copyright 2017,2020,2022 NXP
+ * Copyright 2017,2020,2022-2023 NXP
*
*/
@@ -88,6 +88,10 @@ fman_if_add_hash_mac_addr(struct fman_if *p, uint8_t *eth)
struct __fman_if *__if = container_of(p, struct __fman_if, __if);
+ /* Add hash mac addr not supported on Offline port */
+ if (__if->__if.mac_type == fman_offline)
+ return 0;
+
eth_addr = ETH_ADDR_TO_UINT64(eth);
if (!(eth_addr & GROUP_ADDRESS))
@@ -109,6 +113,15 @@ fman_if_get_primary_mac_addr(struct fman_if *p, uint8_t *eth)
void *mac_reg =
&((struct memac_regs *)__if->ccsr_map)->mac_addr0.mac_addr_l;
u32 val = in_be32(mac_reg);
+ int i;
+
+ /* Get mac addr not supported on Offline port */
+ /* Return NULL mac address */
+ if (__if->__if.mac_type == fman_offline) {
+ for (i = 0; i < 6; i++)
+ eth[i] = 0x0;
+ return 0;
+ }
eth[0] = (val & 0x000000ff) >> 0;
eth[1] = (val & 0x0000ff00) >> 8;
@@ -130,6 +143,10 @@ fman_if_clear_mac_addr(struct fman_if *p, uint8_t addr_num)
struct __fman_if *m = container_of(p, struct __fman_if, __if);
void *reg;
+ /* Clear mac addr not supported on Offline port */
+ if (m->__if.mac_type == fman_offline)
+ return;
+
if (addr_num) {
reg = &((struct memac_regs *)m->ccsr_map)->
mac_addr[addr_num-1].mac_addr_l;
@@ -149,10 +166,13 @@ int
fman_if_add_mac_addr(struct fman_if *p, uint8_t *eth, uint8_t addr_num)
{
struct __fman_if *m = container_of(p, struct __fman_if, __if);
-
void *reg;
u32 val;
+ /* Set mac addr not supported on Offline port */
+ if (m->__if.mac_type == fman_offline)
+ return 0;
+
memcpy(&m->__if.mac_addr, eth, ETHER_ADDR_LEN);
if (addr_num)
diff --git a/drivers/bus/dpaa/base/fman/netcfg_layer.c b/drivers/bus/dpaa/base/fman/netcfg_layer.c
index 57d87afcb0..e6a6ed1eb6 100644
--- a/drivers/bus/dpaa/base/fman/netcfg_layer.c
+++ b/drivers/bus/dpaa/base/fman/netcfg_layer.c
@@ -1,7 +1,7 @@
/* SPDX-License-Identifier: (BSD-3-Clause OR GPL-2.0)
*
* Copyright 2010-2016 Freescale Semiconductor Inc.
- * Copyright 2017-2019 NXP
+ * Copyright 2017-2019,2023 NXP
*
*/
#include <inttypes.h>
@@ -44,6 +44,7 @@ dump_netcfg(struct netcfg_info *cfg_ptr, FILE *f)
fprintf(f, "\n+ Fman %d, MAC %d (%s);\n",
__if->fman_idx, __if->mac_idx,
+ (__if->mac_type == fman_offline) ? "OFFLINE" :
(__if->mac_type == fman_mac_1g) ? "1G" :
(__if->mac_type == fman_mac_2_5g) ? "2.5G" : "10G");
@@ -56,13 +57,15 @@ dump_netcfg(struct netcfg_info *cfg_ptr, FILE *f)
fprintf(f, "\tfqid_rx_def: 0x%x\n", p_cfg->rx_def);
fprintf(f, "\tfqid_rx_err: 0x%x\n", __if->fqid_rx_err);
- fprintf(f, "\tfqid_tx_err: 0x%x\n", __if->fqid_tx_err);
- fprintf(f, "\tfqid_tx_confirm: 0x%x\n", __if->fqid_tx_confirm);
- fman_if_for_each_bpool(bpool, __if)
- fprintf(f, "\tbuffer pool: (bpid=%d, count=%"PRId64
- " size=%"PRId64", addr=0x%"PRIx64")\n",
- bpool->bpid, bpool->count, bpool->size,
- bpool->addr);
+ if (__if->mac_type != fman_offline) {
+ fprintf(f, "\tfqid_tx_err: 0x%x\n", __if->fqid_tx_err);
+ fprintf(f, "\tfqid_tx_confirm: 0x%x\n", __if->fqid_tx_confirm);
+ fman_if_for_each_bpool(bpool, __if)
+ fprintf(f, "\tbuffer pool: (bpid=%d, count=%"PRId64
+ " size=%"PRId64", addr=0x%"PRIx64")\n",
+ bpool->bpid, bpool->count, bpool->size,
+ bpool->addr);
+ }
}
}
#endif /* RTE_LIBRTE_DPAA_DEBUG_DRIVER */
diff --git a/drivers/bus/dpaa/dpaa_bus.c b/drivers/bus/dpaa/dpaa_bus.c
index 1f6997c77e..6e4ec90670 100644
--- a/drivers/bus/dpaa/dpaa_bus.c
+++ b/drivers/bus/dpaa/dpaa_bus.c
@@ -43,6 +43,7 @@
#include <fsl_qman.h>
#include <fsl_bman.h>
#include <netcfg.h>
+#include <fman.h>
struct rte_dpaa_bus {
struct rte_bus bus;
@@ -203,9 +204,12 @@ dpaa_create_device_list(void)
/* Create device name */
memset(dev->name, 0, RTE_ETH_NAME_MAX_LEN);
- sprintf(dev->name, "fm%d-mac%d", (fman_intf->fman_idx + 1),
- fman_intf->mac_idx);
- DPAA_BUS_LOG(INFO, "%s netdev added", dev->name);
+ if (fman_intf->mac_type == fman_offline)
+ sprintf(dev->name, "fm%d-oh%d",
+ (fman_intf->fman_idx + 1), fman_intf->mac_idx);
+ else
+ sprintf(dev->name, "fm%d-mac%d",
+ (fman_intf->fman_idx + 1), fman_intf->mac_idx);
dev->device.name = dev->name;
dev->device.devargs = dpaa_devargs_lookup(dev);
@@ -441,7 +445,7 @@ static int
rte_dpaa_bus_parse(const char *name, void *out)
{
unsigned int i, j;
- size_t delta;
+ size_t delta, dev_delta;
size_t max_name_len;
/* There are two ways of passing device name, with and without
@@ -458,16 +462,25 @@ rte_dpaa_bus_parse(const char *name, void *out)
delta = 5;
}
+ /* dev_delta points to the dev name (mac/oh/onic). Not valid for
+ * dpaa_sec.
+ */
+ dev_delta = delta + sizeof("fm.-") - 1;
+
if (strncmp("dpaa_sec", &name[delta], 8) == 0) {
if (sscanf(&name[delta], "dpaa_sec-%u", &i) != 1 ||
i < 1 || i > 4)
return -EINVAL;
max_name_len = sizeof("dpaa_sec-.") - 1;
+ } else if (strncmp("oh", &name[dev_delta], 2) == 0) {
+ if (sscanf(&name[delta], "fm%u-oh%u", &i, &j) != 2 ||
+ i >= 2 || j >= 16)
+ return -EINVAL;
+ max_name_len = sizeof("fm.-oh..") - 1;
} else {
if (sscanf(&name[delta], "fm%u-mac%u", &i, &j) != 2 ||
i >= 2 || j >= 16)
return -EINVAL;
-
max_name_len = sizeof("fm.-mac..") - 1;
}
diff --git a/drivers/bus/dpaa/include/fman.h b/drivers/bus/dpaa/include/fman.h
index 109c1a4a22..377f73bf0d 100644
--- a/drivers/bus/dpaa/include/fman.h
+++ b/drivers/bus/dpaa/include/fman.h
@@ -2,7 +2,7 @@
*
* Copyright 2010-2012 Freescale Semiconductor, Inc.
* All rights reserved.
- * Copyright 2019-2022 NXP
+ * Copyright 2019-2023 NXP
*
*/
@@ -78,7 +78,7 @@ TAILQ_HEAD(rte_fman_if_list, __fman_if);
/* Represents the different flavour of network interface */
enum fman_mac_type {
- fman_offline_internal = 0,
+ fman_offline = 0,
fman_mac_1g,
fman_mac_10g,
fman_mac_2_5g,
@@ -474,11 +474,30 @@ extern int fman_ccsr_map_fd;
#define FMAN_IP_REV_1_MAJOR_MASK 0x0000FF00
#define FMAN_IP_REV_1_MAJOR_SHIFT 8
#define FMAN_V3 0x06
-#define FMAN_V3_CONTEXTA_EN_A2V 0x10000000
-#define FMAN_V3_CONTEXTA_EN_OVOM 0x02000000
-#define FMAN_V3_CONTEXTA_EN_EBD 0x80000000
-#define FMAN_CONTEXTA_DIS_CHECKSUM 0x7ull
-#define FMAN_CONTEXTA_SET_OPCODE11 0x2000000b00000000
+
+#define DPAA_FQD_CTX_A_SHIFT_BITS 24
+#define DPAA_FQD_CTX_B_SHIFT_BITS 24
+
+/* Following flags are used to set in context A hi field of FQD */
+#define DPAA_FQD_CTX_A_OVERRIDE_FQ (0x80 << DPAA_FQD_CTX_A_SHIFT_BITS)
+#define DPAA_FQD_CTX_A_IGNORE_CMD (0x40 << DPAA_FQD_CTX_A_SHIFT_BITS)
+#define DPAA_FQD_CTX_A_A1_FIELD_VALID (0x20 << DPAA_FQD_CTX_A_SHIFT_BITS)
+#define DPAA_FQD_CTX_A_A2_FIELD_VALID (0x10 << DPAA_FQD_CTX_A_SHIFT_BITS)
+#define DPAA_FQD_CTX_A_A0_FIELD_VALID (0x08 << DPAA_FQD_CTX_A_SHIFT_BITS)
+#define DPAA_FQD_CTX_A_B0_FIELD_VALID (0x04 << DPAA_FQD_CTX_A_SHIFT_BITS)
+#define DPAA_FQD_CTX_A_OVERRIDE_OMB (0x02 << DPAA_FQD_CTX_A_SHIFT_BITS)
+#define DPAA_FQD_CTX_A_RESERVED (0x01 << DPAA_FQD_CTX_A_SHIFT_BITS)
+
+/* Following flags are used to set in context A lo field of FQD */
+#define DPAA_FQD_CTX_A2_EBD_BIT (0x80 << DPAA_FQD_CTX_A_SHIFT_BITS)
+#define DPAA_FQD_CTX_A2_EBAD_BIT (0x40 << DPAA_FQD_CTX_A_SHIFT_BITS)
+#define DPAA_FQD_CTX_A2_FWD_BIT (0x20 << DPAA_FQD_CTX_A_SHIFT_BITS)
+#define DPAA_FQD_CTX_A2_NL_BIT (0x10 << DPAA_FQD_CTX_A_SHIFT_BITS)
+#define DPAA_FQD_CTX_A2_CWD_BIT (0x08 << DPAA_FQD_CTX_A_SHIFT_BITS)
+#define DPAA_FQD_CTX_A2_NENQ_BIT (0x04 << DPAA_FQD_CTX_A_SHIFT_BITS)
+#define DPAA_FQD_CTX_A2_RESERVED_BIT (0x02 << DPAA_FQD_CTX_A_SHIFT_BITS)
+#define DPAA_FQD_CTX_A2_VSPE_BIT (0x01 << DPAA_FQD_CTX_A_SHIFT_BITS)
+
extern u16 fman_ip_rev;
extern u32 fman_dealloc_bufs_mask_hi;
extern u32 fman_dealloc_bufs_mask_lo;
diff --git a/drivers/net/dpaa/dpaa_ethdev.c b/drivers/net/dpaa/dpaa_ethdev.c
index 4ead890278..f8196ddd14 100644
--- a/drivers/net/dpaa/dpaa_ethdev.c
+++ b/drivers/net/dpaa/dpaa_ethdev.c
@@ -295,7 +295,7 @@ dpaa_eth_dev_configure(struct rte_eth_dev *dev)
max_rx_pktlen = DPAA_MAX_RX_PKT_LEN;
}
- if (!fif->is_shared_mac)
+ if (fif->mac_type != fman_offline)
fman_if_set_maxfrm(dev->process_private, max_rx_pktlen);
if (rx_offloads & RTE_ETH_RX_OFFLOAD_SCATTER) {
@@ -314,6 +314,10 @@ dpaa_eth_dev_configure(struct rte_eth_dev *dev)
dpaa_write_fm_config_to_file();
}
+ /* Disable interrupt support on offline port*/
+ if (fif->mac_type == fman_offline)
+ return 0;
+
/* if the interrupts were configured on this devices*/
if (intr_handle && rte_intr_fd_get(intr_handle)) {
if (dev->data->dev_conf.intr_conf.lsc != 0)
@@ -531,6 +535,9 @@ static int dpaa_eth_dev_close(struct rte_eth_dev *dev)
ret = dpaa_eth_dev_stop(dev);
+ if (fif->mac_type == fman_offline)
+ return 0;
+
/* Reset link to autoneg */
if (link->link_status && !link->link_autoneg)
dpaa_restart_link_autoneg(__fif->node_name);
@@ -644,6 +651,11 @@ static int dpaa_eth_dev_info(struct rte_eth_dev *dev,
| RTE_ETH_LINK_SPEED_1G
| RTE_ETH_LINK_SPEED_2_5G
| RTE_ETH_LINK_SPEED_10G;
+ } else if (fif->mac_type == fman_offline) {
+ dev_info->speed_capa = RTE_ETH_LINK_SPEED_10M_HD
+ | RTE_ETH_LINK_SPEED_10M
+ | RTE_ETH_LINK_SPEED_100M_HD
+ | RTE_ETH_LINK_SPEED_100M;
} else {
DPAA_PMD_ERR("invalid link_speed: %s, %d",
dpaa_intf->name, fif->mac_type);
@@ -744,7 +756,8 @@ static int dpaa_eth_link_update(struct rte_eth_dev *dev,
ioctl_version = dpaa_get_ioctl_version_number();
- if (dev->data->dev_flags & RTE_ETH_DEV_INTR_LSC) {
+ if (dev->data->dev_flags & RTE_ETH_DEV_INTR_LSC &&
+ fif->mac_type != fman_offline) {
for (count = 0; count <= MAX_REPEAT_TIME; count++) {
ret = dpaa_get_link_status(__fif->node_name, link);
if (ret)
@@ -757,6 +770,11 @@ static int dpaa_eth_link_update(struct rte_eth_dev *dev,
}
} else {
link->link_status = dpaa_intf->valid;
+ if (fif->mac_type == fman_offline) {
+ /*Max supported rate for O/H port is 3.75Mpps*/
+ link->link_speed = RTE_ETH_SPEED_NUM_2_5G;
+ link->link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
+ }
}
if (ioctl_version < 2) {
@@ -1077,7 +1095,7 @@ int dpaa_eth_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
dpaa_intf->bp_info = DPAA_MEMPOOL_TO_POOL_INFO(mp);
/* For shared interface, it's done in kernel, skip.*/
- if (!fif->is_shared_mac)
+ if (!fif->is_shared_mac && fif->mac_type != fman_offline)
dpaa_fman_if_pool_setup(dev);
if (fif->num_profiles) {
@@ -1222,8 +1240,11 @@ int dpaa_eth_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
rxq->fqid, ret);
}
}
+
/* Enable main queue to receive error packets also by default */
- fman_if_set_err_fqid(fif, rxq->fqid);
+ if (fif->mac_type != fman_offline)
+ fman_if_set_err_fqid(fif, rxq->fqid);
+
return 0;
}
@@ -1372,7 +1393,8 @@ static int dpaa_link_down(struct rte_eth_dev *dev)
__fif = container_of(fif, struct __fman_if, __if);
- if (dev->data->dev_flags & RTE_ETH_DEV_INTR_LSC)
+ if (dev->data->dev_flags & RTE_ETH_DEV_INTR_LSC &&
+ fif->mac_type != fman_offline)
dpaa_update_link_status(__fif->node_name, RTE_ETH_LINK_DOWN);
else
return dpaa_eth_dev_stop(dev);
@@ -1388,7 +1410,8 @@ static int dpaa_link_up(struct rte_eth_dev *dev)
__fif = container_of(fif, struct __fman_if, __if);
- if (dev->data->dev_flags & RTE_ETH_DEV_INTR_LSC)
+ if (dev->data->dev_flags & RTE_ETH_DEV_INTR_LSC &&
+ fif->mac_type != fman_offline)
dpaa_update_link_status(__fif->node_name, RTE_ETH_LINK_UP);
else
dpaa_eth_dev_start(dev);
@@ -1483,9 +1506,15 @@ dpaa_dev_add_mac_addr(struct rte_eth_dev *dev,
__rte_unused uint32_t pool)
{
int ret;
+ struct fman_if *fif = dev->process_private;
PMD_INIT_FUNC_TRACE();
+ if (fif->mac_type == fman_offline) {
+ DPAA_PMD_DEBUG("Add MAC Address not supported on O/H port");
+ return 0;
+ }
+
ret = fman_if_add_mac_addr(dev->process_private,
addr->addr_bytes, index);
@@ -1498,8 +1527,15 @@ static void
dpaa_dev_remove_mac_addr(struct rte_eth_dev *dev,
uint32_t index)
{
+ struct fman_if *fif = dev->process_private;
+
PMD_INIT_FUNC_TRACE();
+ if (fif->mac_type == fman_offline) {
+ DPAA_PMD_DEBUG("Remove MAC Address not supported on O/H port");
+ return;
+ }
+
fman_if_clear_mac_addr(dev->process_private, index);
}
@@ -1508,9 +1544,15 @@ dpaa_dev_set_mac_addr(struct rte_eth_dev *dev,
struct rte_ether_addr *addr)
{
int ret;
+ struct fman_if *fif = dev->process_private;
PMD_INIT_FUNC_TRACE();
+ if (fif->mac_type == fman_offline) {
+ DPAA_PMD_DEBUG("Set MAC Address not supported on O/H port");
+ return 0;
+ }
+
ret = fman_if_add_mac_addr(dev->process_private, addr->addr_bytes, 0);
if (ret)
DPAA_PMD_ERR("Setting the MAC ADDR failed %d", ret);
@@ -1807,6 +1849,17 @@ static int dpaa_rx_queue_init(struct qman_fq *fq, struct qman_cgr *cgr_rx,
return ret;
}
+uint8_t fm_default_vsp_id(struct fman_if *fif)
+{
+ /* Avoid being same as base profile which could be used
+ * for kernel interface of shared mac.
+ */
+ if (fif->base_profile_id)
+ return 0;
+ else
+ return DPAA_DEFAULT_RXQ_VSP_ID;
+}
+
/* Initialise a Tx FQ */
static int dpaa_tx_queue_init(struct qman_fq *fq,
struct fman_if *fman_intf,
@@ -1842,13 +1895,20 @@ static int dpaa_tx_queue_init(struct qman_fq *fq,
} else {
/* no tx-confirmation */
opts.fqd.context_a.lo = fman_dealloc_bufs_mask_lo;
- opts.fqd.context_a.hi = 0x80000000 | fman_dealloc_bufs_mask_hi;
+ opts.fqd.context_a.hi = DPAA_FQD_CTX_A_OVERRIDE_FQ |
+ fman_dealloc_bufs_mask_hi;
}
- if (fman_ip_rev >= FMAN_V3) {
+ if (fman_ip_rev >= FMAN_V3)
/* Set B0V bit in contextA to set ASPID to 0 */
- opts.fqd.context_a.hi |= 0x04000000;
+ opts.fqd.context_a.hi |= DPAA_FQD_CTX_A_B0_FIELD_VALID;
+
+ if (fman_intf->mac_type == fman_offline) {
+ opts.fqd.context_a.lo |= DPAA_FQD_CTX_A2_VSPE_BIT;
+ opts.fqd.context_b = fm_default_vsp_id(fman_intf) <<
+ DPAA_FQD_CTX_B_SHIFT_BITS;
}
+
DPAA_PMD_DEBUG("init tx fq %p, fqid 0x%x", fq, fq->fqid);
if (cgr_tx) {
@@ -2263,7 +2323,8 @@ dpaa_dev_init(struct rte_eth_dev *eth_dev)
DPAA_PMD_DEBUG("All frame queues created");
/* Get the initial configuration for flow control */
- dpaa_fc_set_default(dpaa_intf, fman_intf);
+ if (fman_intf->mac_type != fman_offline)
+ dpaa_fc_set_default(dpaa_intf, fman_intf);
/* reset bpool list, initialize bpool dynamically */
list_for_each_entry_safe(bp, tmp_bp, &cfg->fman_if->bpool_list, node) {
@@ -2294,10 +2355,10 @@ dpaa_dev_init(struct rte_eth_dev *eth_dev)
DPAA_PMD_INFO("net: dpaa: %s: " RTE_ETHER_ADDR_PRT_FMT,
dpaa_device->name, RTE_ETHER_ADDR_BYTES(&fman_intf->mac_addr));
- if (!fman_intf->is_shared_mac) {
+ if (!fman_intf->is_shared_mac && fman_intf->mac_type != fman_offline) {
/* Configure error packet handling */
fman_if_receive_rx_errors(fman_intf,
- FM_FD_RX_STATUS_ERR_MASK);
+ FM_FD_RX_STATUS_ERR_MASK);
/* Disable RX mode */
fman_if_disable_rx(fman_intf);
/* Disable promiscuous mode */
diff --git a/drivers/net/dpaa/dpaa_ethdev.h b/drivers/net/dpaa/dpaa_ethdev.h
index 7884cc034c..8ec5155cfc 100644
--- a/drivers/net/dpaa/dpaa_ethdev.h
+++ b/drivers/net/dpaa/dpaa_ethdev.h
@@ -121,6 +121,9 @@ enum {
extern struct rte_mempool *dpaa_tx_sg_pool;
extern int dpaa_ieee_1588;
+/* PMD related logs */
+extern int dpaa_logtype_pmd;
+
/* structure to free external and indirect
* buffers.
*/
@@ -266,6 +269,9 @@ dpaa_timesync_read_rx_timestamp(struct rte_eth_dev *dev,
struct timespec *timestamp,
uint32_t flags __rte_unused);
+uint8_t
+fm_default_vsp_id(struct fman_if *fif);
+
/* PMD related logs */
extern int dpaa_logtype_pmd;
#define RTE_LOGTYPE_DPAA_PMD dpaa_logtype_pmd
diff --git a/drivers/net/dpaa/dpaa_flow.c b/drivers/net/dpaa/dpaa_flow.c
index dfc81e4e43..b43c3b1b86 100644
--- a/drivers/net/dpaa/dpaa_flow.c
+++ b/drivers/net/dpaa/dpaa_flow.c
@@ -54,17 +54,6 @@ static struct dpaa_fm_info fm_info;
static struct dpaa_fm_model fm_model;
static const char *fm_log = "/tmp/fmdpdk.bin";
-static inline uint8_t fm_default_vsp_id(struct fman_if *fif)
-{
- /* Avoid being same as base profile which could be used
- * for kernel interface of shared mac.
- */
- if (fif->base_profile_id)
- return 0;
- else
- return DPAA_DEFAULT_RXQ_VSP_ID;
-}
-
static void fm_prev_cleanup(void)
{
uint32_t fman_id = 0, i = 0, devid;
@@ -660,7 +649,9 @@ static inline int get_rx_port_type(struct fman_if *fif)
/* For 1G fm-mac9 and fm-mac10 ports, configure the VSP as 10G
* ports so that kernel can configure correct port.
*/
- if (fif->mac_type == fman_mac_1g &&
+ if (fif->mac_type == fman_offline)
+ return e_FM_PORT_TYPE_OH_OFFLINE_PARSING;
+ else if (fif->mac_type == fman_mac_1g &&
fif->mac_idx >= DPAA_10G_MAC_START_IDX)
return e_FM_PORT_TYPE_RX_10G;
else if (fif->mac_type == fman_mac_1g)
@@ -671,12 +662,14 @@ static inline int get_rx_port_type(struct fman_if *fif)
return e_FM_PORT_TYPE_RX_10G;
DPAA_PMD_ERR("MAC type unsupported");
- return -1;
+ return e_FM_PORT_TYPE_DUMMY;
}
static inline int get_tx_port_type(struct fman_if *fif)
{
- if (fif->mac_type == fman_mac_1g)
+ if (fif->mac_type == fman_offline)
+ return e_FM_PORT_TYPE_OH_OFFLINE_PARSING;
+ else if (fif->mac_type == fman_mac_1g)
return e_FM_PORT_TYPE_TX;
else if (fif->mac_type == fman_mac_2_5g)
return e_FM_PORT_TYPE_TX_2_5G;
@@ -684,7 +677,7 @@ static inline int get_tx_port_type(struct fman_if *fif)
return e_FM_PORT_TYPE_TX_10G;
DPAA_PMD_ERR("MAC type unsupported");
- return -1;
+ return e_FM_PORT_TYPE_DUMMY;
}
static inline int set_fm_port_handle(struct dpaa_if *dpaa_intf,
@@ -983,17 +976,31 @@ static int dpaa_port_vsp_configure(struct dpaa_if *dpaa_intf,
memset(&vsp_params, 0, sizeof(vsp_params));
vsp_params.h_fm = fman_handle;
vsp_params.relative_profile_id = vsp_id;
- vsp_params.port_params.port_id = idx;
+ if (fif->mac_type == fman_offline)
+ vsp_params.port_params.port_id = fif->mac_idx;
+ else
+ vsp_params.port_params.port_id = idx;
+
if (fif->mac_type == fman_mac_1g) {
vsp_params.port_params.port_type = e_FM_PORT_TYPE_RX;
} else if (fif->mac_type == fman_mac_2_5g) {
vsp_params.port_params.port_type = e_FM_PORT_TYPE_RX_2_5G;
} else if (fif->mac_type == fman_mac_10g) {
vsp_params.port_params.port_type = e_FM_PORT_TYPE_RX_10G;
+ } else if (fif->mac_type == fman_offline) {
+ vsp_params.port_params.port_type =
+ e_FM_PORT_TYPE_OH_OFFLINE_PARSING;
} else {
DPAA_PMD_ERR("Mac type %d error", fif->mac_type);
return -1;
}
+
+ vsp_params.port_params.port_type = get_rx_port_type(fif);
+ if (vsp_params.port_params.port_type == e_FM_PORT_TYPE_DUMMY) {
+ DPAA_PMD_ERR("Mac type %d error", fif->mac_type);
+ return -1;
+ }
+
vsp_params.ext_buf_pools.num_of_pools_used = 1;
vsp_params.ext_buf_pools.ext_buf_pool[0].id = dpaa_intf->vsp_bpid[vsp_id];
vsp_params.ext_buf_pools.ext_buf_pool[0].size = mbuf_data_room_size;
diff --git a/drivers/net/dpaa/dpaa_fmc.c b/drivers/net/dpaa/dpaa_fmc.c
index 7dc42f6e23..c9a25a98db 100644
--- a/drivers/net/dpaa/dpaa_fmc.c
+++ b/drivers/net/dpaa/dpaa_fmc.c
@@ -215,8 +215,7 @@ dpaa_port_fmc_port_parse(struct fman_if *fif,
if (pport->type == e_FM_PORT_TYPE_OH_OFFLINE_PARSING &&
pport->number == fif->mac_idx &&
- (fif->mac_type == fman_offline_internal ||
- fif->mac_type == fman_onic))
+ fif->mac_type == fman_offline)
return current_port;
if (fif->mac_type == fman_mac_1g) {
--
2.25.1
^ permalink raw reply [flat|nested] 129+ messages in thread
* Re: [PATCH v2 15/18] bus/dpaa: add OH port mode for dpaa eth
2024-08-23 7:32 ` [PATCH v2 15/18] bus/dpaa: add OH port mode for dpaa eth Hemant Agrawal
@ 2024-09-22 15:24 ` Ferruh Yigit
0 siblings, 0 replies; 129+ messages in thread
From: Ferruh Yigit @ 2024-09-22 15:24 UTC (permalink / raw)
To: Hemant Agrawal, dev; +Cc: ferruh.yigit, Rohit Raj
On 8/23/2024 8:32 AM, Hemant Agrawal wrote:
> @@ -78,7 +78,7 @@ TAILQ_HEAD(rte_fman_if_list, __fman_if);
>
> /* Represents the different flavour of network interface */
> enum fman_mac_type {
> - fman_offline_internal = 0,
> + fman_offline = 0,
> fman_mac_1g,
> fman_mac_10g,
> fman_mac_2_5g,
>
Is this rename required, since the next patch renames it back to
'fman_offline_internal'?
^ permalink raw reply [flat|nested] 129+ messages in thread
* [PATCH v2 16/18] bus/dpaa: add ONIC port mode for the DPAA eth
2024-08-23 7:32 ` [PATCH v2 00/18] " Hemant Agrawal
` (14 preceding siblings ...)
2024-08-23 7:32 ` [PATCH v2 15/18] bus/dpaa: add OH port mode for dpaa eth Hemant Agrawal
@ 2024-08-23 7:32 ` Hemant Agrawal
2024-08-23 7:32 ` [PATCH v2 17/18] net/dpaa: improve the dpaa port cleanup Hemant Agrawal
` (4 subsequent siblings)
20 siblings, 0 replies; 129+ messages in thread
From: Hemant Agrawal @ 2024-08-23 7:32 UTC (permalink / raw)
To: dev; +Cc: ferruh.yigit, Rohit Raj
From: Rohit Raj <rohit.raj@nxp.com>
The OH ports can also be used by two applications or processing contexts
to communicate with each other.
This patch enables this mode for the dpaa-eth OH port as an ONIC port,
so that applications can use dpaa-eth to communicate with each
other on the same SoC.
Again, this property is driven by the system device-tree variables.
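From the application's point of view an ONIC port is just another ethdev;
one process transmits on its ONIC port and the peer process receives the
same frames on the paired ONIC port. A minimal sketch (port IDs and burst
size are illustrative only):

#include <rte_ethdev.h>
#include <rte_mbuf.h>

#define ONIC_BURST 32

/* Process A: push packets towards the peer over its own ONIC port. */
static uint16_t
onic_tx(uint16_t onic_port, struct rte_mbuf **pkts, uint16_t nb)
{
	return rte_eth_tx_burst(onic_port, 0, pkts, nb);
}

/* Process B: the same frames show up on the Rx queue of its ONIC port. */
static uint16_t
onic_rx(uint16_t onic_port, struct rte_mbuf **pkts)
{
	return rte_eth_rx_burst(onic_port, 0, pkts, ONIC_BURST);
}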
Signed-off-by: Rohit Raj <rohit.raj@nxp.com>
---
doc/guides/nics/dpaa.rst | 33 ++-
drivers/bus/dpaa/base/fman/fman.c | 299 +++++++++++++++++++++-
drivers/bus/dpaa/base/fman/fman_hw.c | 20 +-
drivers/bus/dpaa/base/fman/netcfg_layer.c | 4 +-
drivers/bus/dpaa/dpaa_bus.c | 10 +-
drivers/bus/dpaa/include/fman.h | 15 +-
drivers/net/dpaa/dpaa_ethdev.c | 114 +++++++--
drivers/net/dpaa/dpaa_flow.c | 28 +-
drivers/net/dpaa/dpaa_fmc.c | 3 +-
9 files changed, 467 insertions(+), 59 deletions(-)
diff --git a/doc/guides/nics/dpaa.rst b/doc/guides/nics/dpaa.rst
index 47dcce334c..529d5b74f4 100644
--- a/doc/guides/nics/dpaa.rst
+++ b/doc/guides/nics/dpaa.rst
@@ -136,7 +136,7 @@ RTE framework and DPAA internal components/drivers.
The Ethernet driver is bound to a FMAN port and implements the interfaces
needed to connect the DPAA network interface to the network stack.
Each FMAN Port corresponds to a DPDK network interface.
-- PMD also support OH mode, where the port works as a HW assisted
+- PMD also support OH/ONIC mode, where the port works as a HW assisted
virtual port without actually connecting to a Physical MAC.
@@ -152,7 +152,7 @@ Features
- Promiscuous mode
- IEEE1588 PTP
- OH Port for inter application communication
-
+ - ONIC virtual port support
DPAA Mempool Driver
~~~~~~~~~~~~~~~~~~~
@@ -350,6 +350,35 @@ OH Port
-------- Rx Packets ---------
+ONIC
+~~~~
+ To use OH port to communicate between two applications, we can assign Rx port
+ of an O/H port to Application 1 and Tx port to Application 2 so that
+ Application 1 can send packets to Application 2. Similarly, we can assign Tx
+ port of another O/H port to Application 1 and Rx port to Application 2 so that
+ Applicaiton 2 can send packets to Application 1.
+
+ ONIC is logically defined to achieve it. Internally it will use one Rx queue
+ of an O/H port and one Tx queue of another O/H port.
+ For application, it will behave as single O/H port.
+
+ +------+ +------+ +------+ +------+ +------+
+ | | Tx | | Rx | O/H | Tx | | Rx | |
+ | | - - - > | | - - > | Port | - - > | | - - > | |
+ | | | | | 1 | | | | |
+ | | | | +------+ | | | |
+ | App | | ONIC | | ONIC | | App |
+ | 1 | | Port | | Port | | 2 |
+ | | | 1 | +------+ | 2 | | |
+ | | Rx | | Tx | O/H | Rx | | Tx | |
+ | | < - - - | | < - - -| Port | < - - -| | < - - -| |
+ | | | | | 2 | | | | |
+ +------+ +------+ +------+ +------+ +------+
+
+ All the packets received by ONIC port 1 will be send to ONIC port 2 and vice
+ versa. These ports can be used by DPDK applications just like physical ports.
+
+
VSP (Virtual Storage Profile)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The storage profiled are means to provide virtualized interface. A ranges of
diff --git a/drivers/bus/dpaa/base/fman/fman.c b/drivers/bus/dpaa/base/fman/fman.c
index f817305ab7..efe6eab4a9 100644
--- a/drivers/bus/dpaa/base/fman/fman.c
+++ b/drivers/bus/dpaa/base/fman/fman.c
@@ -43,7 +43,7 @@ if_destructor(struct __fman_if *__if)
if (!__if)
return;
- if (__if->__if.mac_type == fman_offline)
+ if (__if->__if.mac_type == fman_offline_internal)
goto cleanup;
list_for_each_entry_safe(bp, tmpbp, &__if->__if.bpool_list, node) {
@@ -465,7 +465,7 @@ fman_if_init(const struct device_node *dpa_node)
__if->__if.is_memac = 0;
if (is_offline)
- __if->__if.mac_type = fman_offline;
+ __if->__if.mac_type = fman_offline_internal;
else if (of_device_is_compatible(mac_node, "fsl,fman-1g-mac"))
__if->__if.mac_type = fman_mac_1g;
else if (of_device_is_compatible(mac_node, "fsl,fman-10g-mac"))
@@ -791,6 +791,292 @@ fman_if_init(const struct device_node *dpa_node)
dname, __if->__if.tx_channel_id, __if->__if.fman_idx,
__if->__if.mac_idx);
+ /* Don't add OH port to the port list since they will be used by ONIC
+ * ports.
+ */
+ if (!is_offline)
+ list_add_tail(&__if->__if.node, &__ifs);
+
+ return 0;
+err:
+ if_destructor(__if);
+ return _errno;
+}
+
+static int fman_if_init_onic(const struct device_node *dpa_node)
+{
+ struct __fman_if *__if;
+ struct fman_if_bpool *bpool;
+ const phandle *tx_pools_phandle;
+ const phandle *tx_channel_id, *mac_addr, *cell_idx;
+ const phandle *rx_phandle;
+ const struct device_node *pool_node;
+ size_t lenp;
+ int _errno;
+ const phandle *p_onic_oh_nodes = NULL;
+ const struct device_node *rx_oh_node = NULL;
+ const struct device_node *tx_oh_node = NULL;
+ const phandle *p_fman_rx_oh_node = NULL, *p_fman_tx_oh_node = NULL;
+ const struct device_node *fman_rx_oh_node = NULL;
+ const struct device_node *fman_tx_oh_node = NULL;
+ const struct device_node *fman_node;
+ uint32_t na = OF_DEFAULT_NA;
+ uint64_t rx_phandle_host[4] = {0};
+ uint64_t cell_idx_host = 0;
+
+ if (of_device_is_available(dpa_node) == false)
+ return 0;
+
+ if (!of_device_is_compatible(dpa_node, "fsl,dpa-ethernet-generic"))
+ return 0;
+
+ /* Allocate an object for this network interface */
+ __if = rte_malloc(NULL, sizeof(*__if), RTE_CACHE_LINE_SIZE);
+ if (!__if) {
+ FMAN_ERR(-ENOMEM, "malloc(%zu)\n", sizeof(*__if));
+ goto err;
+ }
+ memset(__if, 0, sizeof(*__if));
+
+ INIT_LIST_HEAD(&__if->__if.bpool_list);
+
+ strlcpy(__if->node_name, dpa_node->name, IF_NAME_MAX_LEN - 1);
+ __if->node_name[IF_NAME_MAX_LEN - 1] = '\0';
+
+ strlcpy(__if->node_path, dpa_node->full_name, PATH_MAX - 1);
+ __if->node_path[PATH_MAX - 1] = '\0';
+
+ /* Mac node is onic */
+ __if->__if.is_memac = 0;
+ __if->__if.mac_type = fman_onic;
+
+ /* Extract the MAC address for linux peer */
+ mac_addr = of_get_property(dpa_node, "local-mac-address", &lenp);
+ if (!mac_addr) {
+ FMAN_ERR(-EINVAL, "%s: no local-mac-address\n",
+ dpa_node->full_name);
+ goto err;
+ }
+
+ memcpy(&__if->__if.onic_info.peer_mac, mac_addr, ETHER_ADDR_LEN);
+
+ /* Extract the Rx port (it's the first of the two port handles)
+ * and get its channel ID.
+ */
+ p_onic_oh_nodes = of_get_property(dpa_node, "fsl,oh-ports", &lenp);
+ if (!p_onic_oh_nodes) {
+ FMAN_ERR(-EINVAL, "%s: couldn't get p_onic_oh_nodes\n",
+ dpa_node->full_name);
+ goto err;
+ }
+
+ rx_oh_node = of_find_node_by_phandle(p_onic_oh_nodes[0]);
+ if (!rx_oh_node) {
+ FMAN_ERR(-EINVAL, "%s: couldn't get rx_oh_node\n",
+ dpa_node->full_name);
+ goto err;
+ }
+
+ p_fman_rx_oh_node = of_get_property(rx_oh_node, "fsl,fman-oh-port",
+ &lenp);
+ if (!p_fman_rx_oh_node) {
+ FMAN_ERR(-EINVAL, "%s: couldn't get p_fman_rx_oh_node\n",
+ rx_oh_node->full_name);
+ goto err;
+ }
+
+ fman_rx_oh_node = of_find_node_by_phandle(*p_fman_rx_oh_node);
+ if (!fman_rx_oh_node) {
+ FMAN_ERR(-EINVAL, "%s: couldn't get fman_rx_oh_node\n",
+ rx_oh_node->full_name);
+ goto err;
+ }
+
+ tx_channel_id = of_get_property(fman_rx_oh_node, "fsl,qman-channel-id",
+ &lenp);
+ if (!tx_channel_id) {
+ FMAN_ERR(-EINVAL, "%s: no fsl-qman-channel-id\n",
+ rx_oh_node->full_name);
+ goto err;
+ }
+ assert(lenp == sizeof(*tx_channel_id));
+
+ __if->__if.tx_channel_id = of_read_number(tx_channel_id, na);
+
+ /* Extract the FQs from which oNIC driver in Linux is dequeuing */
+ rx_phandle = of_get_property(rx_oh_node, "fsl,qman-frame-queues-oh",
+ &lenp);
+ if (!rx_phandle) {
+ FMAN_ERR(-EINVAL, "%s: no fsl,qman-frame-queues-oh\n",
+ rx_oh_node->full_name);
+ goto err;
+ }
+ assert(lenp == (4 * sizeof(phandle)));
+
+ __if->__if.onic_info.rx_start = of_read_number(&rx_phandle[2], na);
+ __if->__if.onic_info.rx_count = of_read_number(&rx_phandle[3], na);
+
+ /* Extract the Rx FQIDs */
+ tx_oh_node = of_find_node_by_phandle(p_onic_oh_nodes[1]);
+ if (!tx_oh_node) {
+ FMAN_ERR(-EINVAL, "%s: couldn't get tx_oh_node\n",
+ dpa_node->full_name);
+ goto err;
+ }
+
+ p_fman_tx_oh_node = of_get_property(tx_oh_node, "fsl,fman-oh-port",
+ &lenp);
+ if (!p_fman_tx_oh_node) {
+ FMAN_ERR(-EINVAL, "%s: couldn't get p_fman_tx_oh_node\n",
+ tx_oh_node->full_name);
+ goto err;
+ }
+
+ fman_tx_oh_node = of_find_node_by_phandle(*p_fman_tx_oh_node);
+ if (!fman_tx_oh_node) {
+ FMAN_ERR(-EINVAL, "%s: couldn't get fman_tx_oh_node\n",
+ tx_oh_node->full_name);
+ goto err;
+ }
+
+ cell_idx = of_get_property(fman_tx_oh_node, "cell-index", &lenp);
+ if (!cell_idx) {
+ FMAN_ERR(-ENXIO, "%s: no cell-index)\n", tx_oh_node->full_name);
+ goto err;
+ }
+ assert(lenp == sizeof(*cell_idx));
+
+ cell_idx_host = of_read_number(cell_idx, lenp / sizeof(phandle));
+ __if->__if.mac_idx = cell_idx_host;
+
+ fman_node = of_get_parent(fman_tx_oh_node);
+ cell_idx = of_get_property(fman_node, "cell-index", &lenp);
+ if (!cell_idx) {
+ FMAN_ERR(-ENXIO, "%s: no cell-index)\n", tx_oh_node->full_name);
+ goto err;
+ }
+ assert(lenp == sizeof(*cell_idx));
+
+ cell_idx_host = of_read_number(cell_idx, lenp / sizeof(phandle));
+ __if->__if.fman_idx = cell_idx_host;
+
+ rx_phandle = of_get_property(tx_oh_node, "fsl,qman-frame-queues-oh",
+ &lenp);
+ if (!rx_phandle) {
+ FMAN_ERR(-EINVAL, "%s: no fsl,qman-frame-queues-oh\n",
+ dpa_node->full_name);
+ goto err;
+ }
+ assert(lenp == (4 * sizeof(phandle)));
+
+ rx_phandle_host[0] = of_read_number(&rx_phandle[0], na);
+ rx_phandle_host[1] = of_read_number(&rx_phandle[1], na);
+ rx_phandle_host[2] = of_read_number(&rx_phandle[2], na);
+ rx_phandle_host[3] = of_read_number(&rx_phandle[3], na);
+
+ assert((rx_phandle_host[1] == 1) && (rx_phandle_host[3] == 1));
+
+ __if->__if.fqid_rx_err = rx_phandle_host[0];
+ __if->__if.fqid_rx_def = rx_phandle_host[2];
+
+ /* Don't Extract the Tx FQIDs */
+ __if->__if.fqid_tx_err = 0;
+ __if->__if.fqid_tx_confirm = 0;
+
+ /* Obtain the buffer pool nodes used by Tx OH port */
+ tx_pools_phandle = of_get_property(tx_oh_node, "fsl,bman-buffer-pools",
+ &lenp);
+ if (!tx_pools_phandle) {
+ FMAN_ERR(-EINVAL, "%s: no fsl,bman-buffer-pools\n",
+ tx_oh_node->full_name);
+ goto err;
+ }
+ assert(lenp && !(lenp % sizeof(phandle)));
+
+ /* For each pool, parse the corresponding node and add a pool object to
+ * the interface's "bpool_list".
+ */
+
+ while (lenp) {
+ size_t proplen;
+ const phandle *prop;
+ uint64_t bpool_host[6] = {0};
+
+ /* Allocate an object for the pool */
+ bpool = rte_malloc(NULL, sizeof(*bpool), RTE_CACHE_LINE_SIZE);
+ if (!bpool) {
+ FMAN_ERR(-ENOMEM, "malloc(%zu)\n", sizeof(*bpool));
+ goto err;
+ }
+
+ /* Find the pool node */
+ pool_node = of_find_node_by_phandle(*tx_pools_phandle);
+ if (!pool_node) {
+ FMAN_ERR(-ENXIO, "%s: bad fsl,bman-buffer-pools\n",
+ tx_oh_node->full_name);
+ rte_free(bpool);
+ goto err;
+ }
+
+ /* Extract the BPID property */
+ prop = of_get_property(pool_node, "fsl,bpid", &proplen);
+ if (!prop) {
+ FMAN_ERR(-EINVAL, "%s: no fsl,bpid\n",
+ pool_node->full_name);
+ rte_free(bpool);
+ goto err;
+ }
+ assert(proplen == sizeof(*prop));
+
+ bpool->bpid = of_read_number(prop, na);
+
+ /* Extract the cfg property (count/size/addr). "fsl,bpool-cfg"
+ * indicates for the Bman driver to seed the pool.
+ * "fsl,bpool-ethernet-cfg" is used by the network driver. The
+ * two are mutually exclusive, so check for either of them.
+ */
+
+ prop = of_get_property(pool_node, "fsl,bpool-cfg", &proplen);
+ if (!prop)
+ prop = of_get_property(pool_node,
+ "fsl,bpool-ethernet-cfg",
+ &proplen);
+ if (!prop) {
+ /* It's OK for there to be no bpool-cfg */
+ bpool->count = bpool->size = bpool->addr = 0;
+ } else {
+ assert(proplen == (6 * sizeof(*prop)));
+
+ bpool_host[0] = of_read_number(&prop[0], na);
+ bpool_host[1] = of_read_number(&prop[1], na);
+ bpool_host[2] = of_read_number(&prop[2], na);
+ bpool_host[3] = of_read_number(&prop[3], na);
+ bpool_host[4] = of_read_number(&prop[4], na);
+ bpool_host[5] = of_read_number(&prop[5], na);
+
+ bpool->count = ((uint64_t)bpool_host[0] << 32) |
+ bpool_host[1];
+ bpool->size = ((uint64_t)bpool_host[2] << 32) |
+ bpool_host[3];
+ bpool->addr = ((uint64_t)bpool_host[4] << 32) |
+ bpool_host[5];
+ }
+
+ /* Parsing of the pool is complete, add it to the interface
+ * list.
+ */
+ list_add_tail(&bpool->node, &__if->__if.bpool_list);
+ lenp -= sizeof(phandle);
+ tx_pools_phandle++;
+ }
+
+ fman_if_vsp_init(__if);
+
+ /* Parsing of the network interface is complete, add it to the list. */
+ DPAA_BUS_DEBUG("Found %s, Tx Channel = %x, FMAN = %x, Port ID = %x",
+ dpa_node->full_name, __if->__if.tx_channel_id,
+ __if->__if.fman_idx, __if->__if.mac_idx);
+
list_add_tail(&__if->__if.node, &__ifs);
return 0;
err:
@@ -830,6 +1116,13 @@ fman_init(void)
}
}
+ for_each_compatible_node(dpa_node, NULL, "fsl,dpa-ethernet-generic") {
+ /* it is a oNIC interface */
+ _errno = fman_if_init_onic(dpa_node);
+ if (_errno)
+ FMAN_ERR(_errno, "if_init(%s)\n", dpa_node->full_name);
+ }
+
return 0;
err:
fman_finish();
@@ -847,7 +1140,7 @@ fman_finish(void)
int _errno;
/* No need to disable Offline port */
- if (__if->__if.mac_type == fman_offline)
+ if (__if->__if.mac_type == fman_offline_internal)
continue;
/* disable Rx and Tx */
diff --git a/drivers/bus/dpaa/base/fman/fman_hw.c b/drivers/bus/dpaa/base/fman/fman_hw.c
index 1f61ae406b..cbb0491d70 100644
--- a/drivers/bus/dpaa/base/fman/fman_hw.c
+++ b/drivers/bus/dpaa/base/fman/fman_hw.c
@@ -88,8 +88,9 @@ fman_if_add_hash_mac_addr(struct fman_if *p, uint8_t *eth)
struct __fman_if *__if = container_of(p, struct __fman_if, __if);
- /* Add hash mac addr not supported on Offline port */
- if (__if->__if.mac_type == fman_offline)
+ /* Add hash mac addr not supported on Offline port and onic port */
+ if (__if->__if.mac_type == fman_offline_internal ||
+ __if->__if.mac_type == fman_onic)
return 0;
eth_addr = ETH_ADDR_TO_UINT64(eth);
@@ -115,9 +116,10 @@ fman_if_get_primary_mac_addr(struct fman_if *p, uint8_t *eth)
u32 val = in_be32(mac_reg);
int i;
- /* Get mac addr not supported on Offline port */
+ /* Get mac addr not supported on Offline port and onic port */
/* Return NULL mac address */
- if (__if->__if.mac_type == fman_offline) {
+ if (__if->__if.mac_type == fman_offline_internal ||
+ __if->__if.mac_type == fman_onic) {
for (i = 0; i < 6; i++)
eth[i] = 0x0;
return 0;
@@ -143,8 +145,9 @@ fman_if_clear_mac_addr(struct fman_if *p, uint8_t addr_num)
struct __fman_if *m = container_of(p, struct __fman_if, __if);
void *reg;
- /* Clear mac addr not supported on Offline port */
- if (m->__if.mac_type == fman_offline)
+ /* Clear mac addr not supported on Offline port and onic port */
+ if (m->__if.mac_type == fman_offline_internal ||
+ m->__if.mac_type == fman_onic)
return;
if (addr_num) {
@@ -169,8 +172,9 @@ fman_if_add_mac_addr(struct fman_if *p, uint8_t *eth, uint8_t addr_num)
void *reg;
u32 val;
- /* Set mac addr not supported on Offline port */
- if (m->__if.mac_type == fman_offline)
+ /* Set mac addr not supported on Offline port and onic port */
+ if (m->__if.mac_type == fman_offline_internal ||
+ m->__if.mac_type == fman_onic)
return 0;
memcpy(&m->__if.mac_addr, eth, ETHER_ADDR_LEN);
diff --git a/drivers/bus/dpaa/base/fman/netcfg_layer.c b/drivers/bus/dpaa/base/fman/netcfg_layer.c
index e6a6ed1eb6..ffb37825c2 100644
--- a/drivers/bus/dpaa/base/fman/netcfg_layer.c
+++ b/drivers/bus/dpaa/base/fman/netcfg_layer.c
@@ -44,7 +44,7 @@ dump_netcfg(struct netcfg_info *cfg_ptr, FILE *f)
fprintf(f, "\n+ Fman %d, MAC %d (%s);\n",
__if->fman_idx, __if->mac_idx,
- (__if->mac_type == fman_offline) ? "OFFLINE" :
+ (__if->mac_type == fman_offline_internal) ? "OFFLINE" :
(__if->mac_type == fman_mac_1g) ? "1G" :
(__if->mac_type == fman_mac_2_5g) ? "2.5G" : "10G");
@@ -57,7 +57,7 @@ dump_netcfg(struct netcfg_info *cfg_ptr, FILE *f)
fprintf(f, "\tfqid_rx_def: 0x%x\n", p_cfg->rx_def);
fprintf(f, "\tfqid_rx_err: 0x%x\n", __if->fqid_rx_err);
- if (__if->mac_type != fman_offline) {
+ if (__if->mac_type != fman_offline_internal) {
fprintf(f, "\tfqid_tx_err: 0x%x\n", __if->fqid_tx_err);
fprintf(f, "\tfqid_tx_confirm: 0x%x\n", __if->fqid_tx_confirm);
fman_if_for_each_bpool(bpool, __if)
diff --git a/drivers/bus/dpaa/dpaa_bus.c b/drivers/bus/dpaa/dpaa_bus.c
index 6e4ec90670..a6c89c4514 100644
--- a/drivers/bus/dpaa/dpaa_bus.c
+++ b/drivers/bus/dpaa/dpaa_bus.c
@@ -204,9 +204,12 @@ dpaa_create_device_list(void)
/* Create device name */
memset(dev->name, 0, RTE_ETH_NAME_MAX_LEN);
- if (fman_intf->mac_type == fman_offline)
+ if (fman_intf->mac_type == fman_offline_internal)
sprintf(dev->name, "fm%d-oh%d",
(fman_intf->fman_idx + 1), fman_intf->mac_idx);
+ else if (fman_intf->mac_type == fman_onic)
+ sprintf(dev->name, "fm%d-onic%d",
+ (fman_intf->fman_idx + 1), fman_intf->mac_idx);
else
sprintf(dev->name, "fm%d-mac%d",
(fman_intf->fman_idx + 1), fman_intf->mac_idx);
@@ -477,6 +480,11 @@ rte_dpaa_bus_parse(const char *name, void *out)
i >= 2 || j >= 16)
return -EINVAL;
max_name_len = sizeof("fm.-oh..") - 1;
+ } else if (strncmp("onic", &name[dev_delta], 4) == 0) {
+ if (sscanf(&name[delta], "fm%u-onic%u", &i, &j) != 2 ||
+ i >= 2 || j >= 16)
+ return -EINVAL;
+ max_name_len = sizeof("fm.-onic..") - 1;
} else {
if (sscanf(&name[delta], "fm%u-mac%u", &i, &j) != 2 ||
i >= 2 || j >= 16)
diff --git a/drivers/bus/dpaa/include/fman.h b/drivers/bus/dpaa/include/fman.h
index 377f73bf0d..01556cf2a8 100644
--- a/drivers/bus/dpaa/include/fman.h
+++ b/drivers/bus/dpaa/include/fman.h
@@ -78,7 +78,7 @@ TAILQ_HEAD(rte_fman_if_list, __fman_if);
/* Represents the different flavour of network interface */
enum fman_mac_type {
- fman_offline = 0,
+ fman_offline_internal = 0,
fman_mac_1g,
fman_mac_10g,
fman_mac_2_5g,
@@ -366,6 +366,16 @@ struct fman_port_qmi_regs {
uint32_t fmqm_pndcc; /**< PortID n Dequeue Confirm Counter */
};
+struct onic_port_cfg {
+ char macless_name[IF_NAME_MAX_LEN];
+ uint32_t rx_start;
+ uint32_t rx_count;
+ uint32_t tx_start;
+ uint32_t tx_count;
+ struct rte_ether_addr src_mac;
+ struct rte_ether_addr peer_mac;
+};
+
/* This struct exports parameters about an Fman network interface, determined
* from the device-tree.
*/
@@ -401,6 +411,9 @@ struct fman_if {
uint32_t fqid_tx_err;
uint32_t fqid_tx_confirm;
+ /* oNIC port info */
+ struct onic_port_cfg onic_info;
+
struct list_head bpool_list;
/* The node for linking this interface into "fman_if_list" */
struct list_head node;
diff --git a/drivers/net/dpaa/dpaa_ethdev.c b/drivers/net/dpaa/dpaa_ethdev.c
index f8196ddd14..133fbd5bc9 100644
--- a/drivers/net/dpaa/dpaa_ethdev.c
+++ b/drivers/net/dpaa/dpaa_ethdev.c
@@ -295,7 +295,8 @@ dpaa_eth_dev_configure(struct rte_eth_dev *dev)
max_rx_pktlen = DPAA_MAX_RX_PKT_LEN;
}
- if (fif->mac_type != fman_offline)
+ if (!fif->is_shared_mac && fif->mac_type != fman_offline_internal &&
+ fif->mac_type != fman_onic)
fman_if_set_maxfrm(dev->process_private, max_rx_pktlen);
if (rx_offloads & RTE_ETH_RX_OFFLOAD_SCATTER) {
@@ -315,7 +316,8 @@ dpaa_eth_dev_configure(struct rte_eth_dev *dev)
}
/* Disable interrupt support on offline port*/
- if (fif->mac_type == fman_offline)
+ if (fif->mac_type == fman_offline_internal ||
+ fif->mac_type == fman_onic)
return 0;
/* if the interrupts were configured on this devices*/
@@ -467,10 +469,11 @@ static int dpaa_eth_dev_start(struct rte_eth_dev *dev)
else
dev->tx_pkt_burst = dpaa_eth_queue_tx;
- fman_if_bmi_stats_enable(fif);
- fman_if_bmi_stats_reset(fif);
- fman_if_enable_rx(fif);
-
+ if (fif->mac_type != fman_onic) {
+ fman_if_bmi_stats_enable(fif);
+ fman_if_bmi_stats_reset(fif);
+ fman_if_enable_rx(fif);
+ }
for (i = 0; i < dev->data->nb_rx_queues; i++)
dev->data->rx_queue_state[i] = RTE_ETH_QUEUE_STATE_STARTED;
for (i = 0; i < dev->data->nb_tx_queues; i++)
@@ -535,7 +538,8 @@ static int dpaa_eth_dev_close(struct rte_eth_dev *dev)
ret = dpaa_eth_dev_stop(dev);
- if (fif->mac_type == fman_offline)
+ if (fif->mac_type == fman_offline_internal ||
+ fif->mac_type == fman_onic)
return 0;
/* Reset link to autoneg */
@@ -651,11 +655,14 @@ static int dpaa_eth_dev_info(struct rte_eth_dev *dev,
| RTE_ETH_LINK_SPEED_1G
| RTE_ETH_LINK_SPEED_2_5G
| RTE_ETH_LINK_SPEED_10G;
- } else if (fif->mac_type == fman_offline) {
+ } else if (fif->mac_type == fman_offline_internal ||
+ fif->mac_type == fman_onic) {
dev_info->speed_capa = RTE_ETH_LINK_SPEED_10M_HD
| RTE_ETH_LINK_SPEED_10M
| RTE_ETH_LINK_SPEED_100M_HD
- | RTE_ETH_LINK_SPEED_100M;
+ | RTE_ETH_LINK_SPEED_100M
+ | RTE_ETH_LINK_SPEED_1G
+ | RTE_ETH_LINK_SPEED_2_5G;
} else {
DPAA_PMD_ERR("invalid link_speed: %s, %d",
dpaa_intf->name, fif->mac_type);
@@ -757,7 +764,8 @@ static int dpaa_eth_link_update(struct rte_eth_dev *dev,
ioctl_version = dpaa_get_ioctl_version_number();
if (dev->data->dev_flags & RTE_ETH_DEV_INTR_LSC &&
- fif->mac_type != fman_offline) {
+ fif->mac_type != fman_offline_internal &&
+ fif->mac_type != fman_onic) {
for (count = 0; count <= MAX_REPEAT_TIME; count++) {
ret = dpaa_get_link_status(__fif->node_name, link);
if (ret)
@@ -770,7 +778,8 @@ static int dpaa_eth_link_update(struct rte_eth_dev *dev,
}
} else {
link->link_status = dpaa_intf->valid;
- if (fif->mac_type == fman_offline) {
+ if (fif->mac_type == fman_offline_internal ||
+ fif->mac_type == fman_onic) {
/*Max supported rate for O/H port is 3.75Mpps*/
link->link_speed = RTE_ETH_SPEED_NUM_2_5G;
link->link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
@@ -933,8 +942,16 @@ dpaa_xstats_get_names_by_id(
static int dpaa_eth_promiscuous_enable(struct rte_eth_dev *dev)
{
+ struct fman_if *fif = dev->process_private;
+
PMD_INIT_FUNC_TRACE();
+ if (fif->mac_type == fman_onic) {
+ DPAA_PMD_INFO("Enable promiscuous mode not supported on ONIC "
+ "port");
+ return 0;
+ }
+
fman_if_promiscuous_enable(dev->process_private);
return 0;
@@ -942,8 +959,16 @@ static int dpaa_eth_promiscuous_enable(struct rte_eth_dev *dev)
static int dpaa_eth_promiscuous_disable(struct rte_eth_dev *dev)
{
+ struct fman_if *fif = dev->process_private;
+
PMD_INIT_FUNC_TRACE();
+ if (fif->mac_type == fman_onic) {
+ DPAA_PMD_INFO("Disable promiscuous mode not supported on ONIC "
+ "port");
+ return 0;
+ }
+
fman_if_promiscuous_disable(dev->process_private);
return 0;
@@ -951,8 +976,15 @@ static int dpaa_eth_promiscuous_disable(struct rte_eth_dev *dev)
static int dpaa_eth_multicast_enable(struct rte_eth_dev *dev)
{
+ struct fman_if *fif = dev->process_private;
+
PMD_INIT_FUNC_TRACE();
+ if (fif->mac_type == fman_onic) {
+ DPAA_PMD_INFO("Enable Multicast not supported on ONIC port");
+ return 0;
+ }
+
fman_if_set_mcast_filter_table(dev->process_private);
return 0;
@@ -960,8 +992,15 @@ static int dpaa_eth_multicast_enable(struct rte_eth_dev *dev)
static int dpaa_eth_multicast_disable(struct rte_eth_dev *dev)
{
+ struct fman_if *fif = dev->process_private;
+
PMD_INIT_FUNC_TRACE();
+ if (fif->mac_type == fman_onic) {
+ DPAA_PMD_INFO("Disable Multicast not supported on ONIC port");
+ return 0;
+ }
+
fman_if_reset_mcast_filter_table(dev->process_private);
return 0;
@@ -1095,7 +1134,8 @@ int dpaa_eth_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
dpaa_intf->bp_info = DPAA_MEMPOOL_TO_POOL_INFO(mp);
/* For shared interface, it's done in kernel, skip.*/
- if (!fif->is_shared_mac && fif->mac_type != fman_offline)
+ if (!fif->is_shared_mac && fif->mac_type != fman_offline_internal &&
+ fif->mac_type != fman_onic)
dpaa_fman_if_pool_setup(dev);
if (fif->num_profiles) {
@@ -1126,8 +1166,9 @@ int dpaa_eth_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
}
dpaa_intf->valid = 1;
- DPAA_PMD_DEBUG("if:%s sg_on = %d, max_frm =%d", dpaa_intf->name,
- fman_if_get_sg_enable(fif), max_rx_pktlen);
+ if (fif->mac_type != fman_onic)
+ DPAA_PMD_DEBUG("if:%s sg_on = %d, max_frm =%d", dpaa_intf->name,
+ fman_if_get_sg_enable(fif), max_rx_pktlen);
/* checking if push mode only, no error check for now */
if (!rxq->is_static &&
dpaa_push_mode_max_queue > dpaa_push_queue_idx) {
@@ -1242,7 +1283,8 @@ int dpaa_eth_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
}
/* Enable main queue to receive error packets also by default */
- if (fif->mac_type != fman_offline)
+ if (fif->mac_type != fman_offline_internal &&
+ fif->mac_type != fman_onic)
fman_if_set_err_fqid(fif, rxq->fqid);
return 0;
@@ -1394,7 +1436,8 @@ static int dpaa_link_down(struct rte_eth_dev *dev)
__fif = container_of(fif, struct __fman_if, __if);
if (dev->data->dev_flags & RTE_ETH_DEV_INTR_LSC &&
- fif->mac_type != fman_offline)
+ fif->mac_type != fman_offline_internal &&
+ fif->mac_type != fman_onic)
dpaa_update_link_status(__fif->node_name, RTE_ETH_LINK_DOWN);
else
return dpaa_eth_dev_stop(dev);
@@ -1411,7 +1454,8 @@ static int dpaa_link_up(struct rte_eth_dev *dev)
__fif = container_of(fif, struct __fman_if, __if);
if (dev->data->dev_flags & RTE_ETH_DEV_INTR_LSC &&
- fif->mac_type != fman_offline)
+ fif->mac_type != fman_offline_internal &&
+ fif->mac_type != fman_onic)
dpaa_update_link_status(__fif->node_name, RTE_ETH_LINK_UP);
else
dpaa_eth_dev_start(dev);
@@ -1510,11 +1554,16 @@ dpaa_dev_add_mac_addr(struct rte_eth_dev *dev,
PMD_INIT_FUNC_TRACE();
- if (fif->mac_type == fman_offline) {
+ if (fif->mac_type == fman_offline_internal) {
DPAA_PMD_DEBUG("Add MAC Address not supported on O/H port");
return 0;
}
+ if (fif->mac_type == fman_onic) {
+ DPAA_PMD_INFO("Add MAC Address not supported on ONIC port");
+ return 0;
+ }
+
ret = fman_if_add_mac_addr(dev->process_private,
addr->addr_bytes, index);
@@ -1531,11 +1580,16 @@ dpaa_dev_remove_mac_addr(struct rte_eth_dev *dev,
PMD_INIT_FUNC_TRACE();
- if (fif->mac_type == fman_offline) {
+ if (fif->mac_type == fman_offline_internal) {
DPAA_PMD_DEBUG("Remove MAC Address not supported on O/H port");
return;
}
+ if (fif->mac_type == fman_onic) {
+ DPAA_PMD_INFO("Remove MAC Address not supported on ONIC port");
+ return;
+ }
+
fman_if_clear_mac_addr(dev->process_private, index);
}
@@ -1548,11 +1602,16 @@ dpaa_dev_set_mac_addr(struct rte_eth_dev *dev,
PMD_INIT_FUNC_TRACE();
- if (fif->mac_type == fman_offline) {
+ if (fif->mac_type == fman_offline_internal) {
DPAA_PMD_DEBUG("Set MAC Address not supported on O/H port");
return 0;
}
+ if (fif->mac_type == fman_onic) {
+ DPAA_PMD_INFO("Set MAC Address not supported on ONIC port");
+ return 0;
+ }
+
ret = fman_if_add_mac_addr(dev->process_private, addr->addr_bytes, 0);
if (ret)
DPAA_PMD_ERR("Setting the MAC ADDR failed %d", ret);
@@ -1903,7 +1962,8 @@ static int dpaa_tx_queue_init(struct qman_fq *fq,
/* Set B0V bit in contextA to set ASPID to 0 */
opts.fqd.context_a.hi |= DPAA_FQD_CTX_A_B0_FIELD_VALID;
- if (fman_intf->mac_type == fman_offline) {
+ if (fman_intf->mac_type == fman_offline_internal ||
+ fman_intf->mac_type == fman_onic) {
opts.fqd.context_a.lo |= DPAA_FQD_CTX_A2_VSPE_BIT;
opts.fqd.context_b = fm_default_vsp_id(fman_intf) <<
DPAA_FQD_CTX_B_SHIFT_BITS;
@@ -2156,6 +2216,11 @@ dpaa_dev_init(struct rte_eth_dev *eth_dev)
goto free_rx;
}
if (!num_rx_fqs) {
+ if (fman_intf->mac_type == fman_offline_internal ||
+ fman_intf->mac_type == fman_onic) {
+ ret = -ENODEV;
+ goto free_rx;
+ }
DPAA_PMD_WARN("%s is not configured by FMC.",
dpaa_intf->name);
}
@@ -2323,7 +2388,8 @@ dpaa_dev_init(struct rte_eth_dev *eth_dev)
DPAA_PMD_DEBUG("All frame queues created");
/* Get the initial configuration for flow control */
- if (fman_intf->mac_type != fman_offline)
+ if (fman_intf->mac_type != fman_offline_internal &&
+ fman_intf->mac_type != fman_onic)
dpaa_fc_set_default(dpaa_intf, fman_intf);
/* reset bpool list, initialize bpool dynamically */
@@ -2355,7 +2421,9 @@ dpaa_dev_init(struct rte_eth_dev *eth_dev)
DPAA_PMD_INFO("net: dpaa: %s: " RTE_ETHER_ADDR_PRT_FMT,
dpaa_device->name, RTE_ETHER_ADDR_BYTES(&fman_intf->mac_addr));
- if (!fman_intf->is_shared_mac && fman_intf->mac_type != fman_offline) {
+ if (!fman_intf->is_shared_mac &&
+ fman_intf->mac_type != fman_offline_internal &&
+ fman_intf->mac_type != fman_onic) {
/* Configure error packet handling */
fman_if_receive_rx_errors(fman_intf,
FM_FD_RX_STATUS_ERR_MASK);
diff --git a/drivers/net/dpaa/dpaa_flow.c b/drivers/net/dpaa/dpaa_flow.c
index b43c3b1b86..810b187405 100644
--- a/drivers/net/dpaa/dpaa_flow.c
+++ b/drivers/net/dpaa/dpaa_flow.c
@@ -646,11 +646,15 @@ static inline int set_pcd_netenv_scheme(struct dpaa_if *dpaa_intf,
static inline int get_rx_port_type(struct fman_if *fif)
{
+ /* For onic ports, configure the VSP as offline ports so that
+ * kernel can configure correct port.
+ */
+ if (fif->mac_type == fman_offline_internal ||
+ fif->mac_type == fman_onic)
+ return e_FM_PORT_TYPE_OH_OFFLINE_PARSING;
/* For 1G fm-mac9 and fm-mac10 ports, configure the VSP as 10G
* ports so that kernel can configure correct port.
*/
- if (fif->mac_type == fman_offline)
- return e_FM_PORT_TYPE_OH_OFFLINE_PARSING;
else if (fif->mac_type == fman_mac_1g &&
fif->mac_idx >= DPAA_10G_MAC_START_IDX)
return e_FM_PORT_TYPE_RX_10G;
@@ -667,7 +671,8 @@ static inline int get_rx_port_type(struct fman_if *fif)
static inline int get_tx_port_type(struct fman_if *fif)
{
- if (fif->mac_type == fman_offline)
+ if (fif->mac_type == fman_offline_internal ||
+ fif->mac_type == fman_onic)
return e_FM_PORT_TYPE_OH_OFFLINE_PARSING;
else if (fif->mac_type == fman_mac_1g)
return e_FM_PORT_TYPE_TX;
@@ -976,25 +981,12 @@ static int dpaa_port_vsp_configure(struct dpaa_if *dpaa_intf,
memset(&vsp_params, 0, sizeof(vsp_params));
vsp_params.h_fm = fman_handle;
vsp_params.relative_profile_id = vsp_id;
- if (fif->mac_type == fman_offline)
+ if (fif->mac_type == fman_offline_internal ||
+ fif->mac_type == fman_onic)
vsp_params.port_params.port_id = fif->mac_idx;
else
vsp_params.port_params.port_id = idx;
- if (fif->mac_type == fman_mac_1g) {
- vsp_params.port_params.port_type = e_FM_PORT_TYPE_RX;
- } else if (fif->mac_type == fman_mac_2_5g) {
- vsp_params.port_params.port_type = e_FM_PORT_TYPE_RX_2_5G;
- } else if (fif->mac_type == fman_mac_10g) {
- vsp_params.port_params.port_type = e_FM_PORT_TYPE_RX_10G;
- } else if (fif->mac_type == fman_offline) {
- vsp_params.port_params.port_type =
- e_FM_PORT_TYPE_OH_OFFLINE_PARSING;
- } else {
- DPAA_PMD_ERR("Mac type %d error", fif->mac_type);
- return -1;
- }
-
vsp_params.port_params.port_type = get_rx_port_type(fif);
if (vsp_params.port_params.port_type == e_FM_PORT_TYPE_DUMMY) {
DPAA_PMD_ERR("Mac type %d error", fif->mac_type);
diff --git a/drivers/net/dpaa/dpaa_fmc.c b/drivers/net/dpaa/dpaa_fmc.c
index c9a25a98db..7dc42f6e23 100644
--- a/drivers/net/dpaa/dpaa_fmc.c
+++ b/drivers/net/dpaa/dpaa_fmc.c
@@ -215,7 +215,8 @@ dpaa_port_fmc_port_parse(struct fman_if *fif,
if (pport->type == e_FM_PORT_TYPE_OH_OFFLINE_PARSING &&
pport->number == fif->mac_idx &&
- fif->mac_type == fman_offline)
+ (fif->mac_type == fman_offline_internal ||
+ fif->mac_type == fman_onic))
return current_port;
if (fif->mac_type == fman_mac_1g) {
--
2.25.1
^ permalink raw reply [flat|nested] 129+ messages in thread
* [PATCH v2 17/18] net/dpaa: improve the dpaa port cleanup
2024-08-23 7:32 ` [PATCH v2 00/18] " Hemant Agrawal
` (15 preceding siblings ...)
2024-08-23 7:32 ` [PATCH v2 16/18] bus/dpaa: add ONIC port mode for the DPAA eth Hemant Agrawal
@ 2024-08-23 7:32 ` Hemant Agrawal
2024-08-23 7:32 ` [PATCH v2 18/18] net/dpaa: improve dpaa errata A010022 handling Hemant Agrawal
` (3 subsequent siblings)
20 siblings, 0 replies; 129+ messages in thread
From: Hemant Agrawal @ 2024-08-23 7:32 UTC (permalink / raw)
To: dev; +Cc: ferruh.yigit, Gagandeep Singh
From: Gagandeep Singh <g.singh@nxp.com>
During DPAA cleanup in FMCLESS mode, the application can hit a
segmentation fault in the device close API and in the DPAA
destructor.
The fault in device close happens because the driver reduces the
number of queues initialised during device configuration without
releasing the actual queues.
The fault in the DPAA destructor happens because it tries to access
RTE* devices whose memory has already been released by the
application's rte_eal_cleanup() call.
This patch fixes both by moving the per-port FM deconfig and VSP
cleanup into the device close path.
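For illustration only (standard ethdev/EAL calls, not part of this
patch), the application-side teardown order the driver now relies on:

	rte_eth_dev_stop(port_id);
	rte_eth_dev_close(port_id); /* per-port FM deconfig and VSP cleanup run here */
	rte_eal_cleanup();          /* rte_eth devices must not be touched after this */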
Signed-off-by: Gagandeep Singh <g.singh@nxp.com>
---
drivers/net/dpaa/dpaa_ethdev.c | 33 +++++++++++----------------------
drivers/net/dpaa/dpaa_flow.c | 8 ++++----
2 files changed, 15 insertions(+), 26 deletions(-)
diff --git a/drivers/net/dpaa/dpaa_ethdev.c b/drivers/net/dpaa/dpaa_ethdev.c
index 133fbd5bc9..41ae033c75 100644
--- a/drivers/net/dpaa/dpaa_ethdev.c
+++ b/drivers/net/dpaa/dpaa_ethdev.c
@@ -561,10 +561,10 @@ static int dpaa_eth_dev_close(struct rte_eth_dev *dev)
if (dpaa_intf->cgr_rx) {
for (loop = 0; loop < dpaa_intf->nb_rx_queues; loop++)
qman_delete_cgr(&dpaa_intf->cgr_rx[loop]);
+ rte_free(dpaa_intf->cgr_rx);
+ dpaa_intf->cgr_rx = NULL;
}
- rte_free(dpaa_intf->cgr_rx);
- dpaa_intf->cgr_rx = NULL;
/* Release TX congestion Groups */
if (dpaa_intf->cgr_tx) {
for (loop = 0; loop < MAX_DPAA_CORES; loop++)
@@ -578,6 +578,15 @@ static int dpaa_eth_dev_close(struct rte_eth_dev *dev)
rte_free(dpaa_intf->tx_queues);
dpaa_intf->tx_queues = NULL;
+ if (dpaa_intf->port_handle) {
+ if (dpaa_fm_deconfig(dpaa_intf, fif))
+ DPAA_PMD_WARN("DPAA FM "
+ "deconfig failed\n");
+ }
+ if (fif->num_profiles) {
+ if (dpaa_port_vsp_cleanup(dpaa_intf, fif))
+ DPAA_PMD_WARN("DPAA FM vsp cleanup failed\n");
+ }
return ret;
}
@@ -2607,26 +2616,6 @@ static void __attribute__((destructor(102))) dpaa_finish(void)
return;
if (!(default_q || fmc_q)) {
- unsigned int i;
-
- for (i = 0; i < RTE_MAX_ETHPORTS; i++) {
- if (rte_eth_devices[i].dev_ops == &dpaa_devops) {
- struct rte_eth_dev *dev = &rte_eth_devices[i];
- struct dpaa_if *dpaa_intf =
- dev->data->dev_private;
- struct fman_if *fif =
- dev->process_private;
- if (dpaa_intf->port_handle)
- if (dpaa_fm_deconfig(dpaa_intf, fif))
- DPAA_PMD_WARN("DPAA FM "
- "deconfig failed");
- if (fif->num_profiles) {
- if (dpaa_port_vsp_cleanup(dpaa_intf,
- fif))
- DPAA_PMD_WARN("DPAA FM vsp cleanup failed");
- }
- }
- }
if (is_global_init)
if (dpaa_fm_term())
DPAA_PMD_WARN("DPAA FM term failed");
diff --git a/drivers/net/dpaa/dpaa_flow.c b/drivers/net/dpaa/dpaa_flow.c
index 810b187405..2240f8d27c 100644
--- a/drivers/net/dpaa/dpaa_flow.c
+++ b/drivers/net/dpaa/dpaa_flow.c
@@ -1,5 +1,5 @@
/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright 2017-2019,2021 NXP
+ * Copyright 2017-2019,2021-2023 NXP
*/
/* System headers */
@@ -812,8 +812,6 @@ int dpaa_fm_config(struct rte_eth_dev *dev, uint64_t req_dist_set)
return -1;
}
- dpaa_intf->nb_rx_queues = dev->data->nb_rx_queues;
-
/* Open FM Port and set it in port info */
ret = set_fm_port_handle(dpaa_intf, req_dist_set, fif);
if (ret) {
@@ -822,7 +820,7 @@ int dpaa_fm_config(struct rte_eth_dev *dev, uint64_t req_dist_set)
}
if (fif->num_profiles) {
- for (i = 0; i < dpaa_intf->nb_rx_queues; i++)
+ for (i = 0; i < dev->data->nb_rx_queues; i++)
dpaa_intf->rx_queues[i].vsp_id =
fm_default_vsp_id(fif);
@@ -1147,6 +1145,8 @@ int rte_pmd_dpaa_port_set_rate_limit(uint16_t port_id, uint16_t burst,
if (ret) {
DPAA_PMD_ERR("Failed to set rate limit ret = %#x\n", -ret);
+ if (!port_handle_exists)
+ fm_port_close(handle);
return -ret;
}
--
2.25.1
^ permalink raw reply [flat|nested] 129+ messages in thread
* [PATCH v2 18/18] net/dpaa: improve dpaa errata A010022 handling
2024-08-23 7:32 ` [PATCH v2 00/18] " Hemant Agrawal
` (16 preceding siblings ...)
2024-08-23 7:32 ` [PATCH v2 17/18] net/dpaa: improve the dpaa port cleanup Hemant Agrawal
@ 2024-08-23 7:32 ` Hemant Agrawal
2024-09-22 3:12 ` [PATCH v2 00/18] NXP DPAA ETH driver enhancement and fixes Ferruh Yigit
` (2 subsequent siblings)
20 siblings, 0 replies; 129+ messages in thread
From: Hemant Agrawal @ 2024-08-23 7:32 UTC (permalink / raw)
To: dev; +Cc: ferruh.yigit, Jun Yang
From: Jun Yang <jun.yang@nxp.com>
This patch improves the handling of errata
"RTE_LIBRTE_DPAA_ERRATA_LS1043_A010022": the per-mbuf check is moved
into a dedicated helper that also validates the 16-byte alignment of
the data length of each non-last segment in a chained mbuf, and the
check is compiled only when the errata macro is defined.
Signed-off-by: Jun Yang <jun.yang@nxp.com>
---
drivers/net/dpaa/dpaa_rxtx.c | 40 ++++++++++++++++++++++++++++--------
1 file changed, 32 insertions(+), 8 deletions(-)
diff --git a/drivers/net/dpaa/dpaa_rxtx.c b/drivers/net/dpaa/dpaa_rxtx.c
index d82c6f3be2..1d7efdef88 100644
--- a/drivers/net/dpaa/dpaa_rxtx.c
+++ b/drivers/net/dpaa/dpaa_rxtx.c
@@ -1258,6 +1258,35 @@ reallocate_mbuf(struct qman_fq *txq, struct rte_mbuf *mbuf)
return new_mbufs[0];
}
+#ifdef RTE_LIBRTE_DPAA_ERRATA_LS1043_A010022
+/* In case the data offset is not multiple of 16,
+ * FMAN can stall because of an errata. So reallocate
+ * the buffer in such case.
+ */
+static inline int
+dpaa_eth_ls1043a_mbuf_realloc(struct rte_mbuf *mbuf)
+{
+ uint64_t len, offset;
+
+ if (dpaa_svr_family != SVR_LS1043A_FAMILY)
+ return 0;
+
+ while (mbuf) {
+ len = mbuf->data_len;
+ offset = mbuf->data_off;
+ if ((mbuf->next &&
+ !rte_is_aligned((void *)len, 16)) ||
+ !rte_is_aligned((void *)offset, 16)) {
+ DPAA_PMD_DEBUG("Errata condition hit");
+
+ return 1;
+ }
+ mbuf = mbuf->next;
+ }
+ return 0;
+}
+#endif
+
uint16_t
dpaa_eth_queue_tx(void *q, struct rte_mbuf **bufs, uint16_t nb_bufs)
{
@@ -1296,14 +1325,6 @@ dpaa_eth_queue_tx(void *q, struct rte_mbuf **bufs, uint16_t nb_bufs)
DPAA_TX_BURST_SIZE : nb_bufs;
for (loop = 0; loop < frames_to_send; loop++) {
mbuf = *(bufs++);
- /* In case the data offset is not multiple of 16,
- * FMAN can stall because of an errata. So reallocate
- * the buffer in such case.
- */
- if (dpaa_svr_family == SVR_LS1043A_FAMILY &&
- (mbuf->data_off & 0x7F) != 0x0)
- realloc_mbuf = 1;
-
fd_arr[loop].cmd = 0;
if (dpaa_ieee_1588) {
fd_arr[loop].cmd |= DPAA_FD_CMD_FCO |
@@ -1311,6 +1332,9 @@ dpaa_eth_queue_tx(void *q, struct rte_mbuf **bufs, uint16_t nb_bufs)
fd_arr[loop].cmd |= DPAA_FD_CMD_RPD |
DPAA_FD_CMD_UPD;
}
+#ifdef RTE_LIBRTE_DPAA_ERRATA_LS1043_A010022
+ realloc_mbuf = dpaa_eth_ls1043a_mbuf_realloc(mbuf);
+#endif
seqn = *dpaa_seqn(mbuf);
if (seqn != DPAA_INVALID_MBUF_SEQN) {
index = seqn - 1;
--
2.25.1
^ permalink raw reply [flat|nested] 129+ messages in thread
* Re: [PATCH v2 00/18] NXP DPAA ETH driver enhancement and fixes
2024-08-23 7:32 ` [PATCH v2 00/18] " Hemant Agrawal
` (17 preceding siblings ...)
2024-08-23 7:32 ` [PATCH v2 18/18] net/dpaa: improve dpaa errata A010022 handling Hemant Agrawal
@ 2024-09-22 3:12 ` Ferruh Yigit
2024-09-22 4:38 ` Hemant Agrawal
2024-09-30 10:29 ` [PATCH v3 " Hemant Agrawal
2024-09-30 12:15 ` [PATCH v4 00/18] NXP DPAA ETH driver enhancement and fixes Hemant Agrawal
20 siblings, 1 reply; 129+ messages in thread
From: Ferruh Yigit @ 2024-09-22 3:12 UTC (permalink / raw)
To: Hemant Agrawal, dev; +Cc: ferruh.yigit
On 8/23/2024 8:32 AM, Hemant Agrawal wrote:
> v2: address review comments
> - improve commit message
> - add documentarion for new functions
> - make IEEE1588 config runtime
>
> This series adds several enhancement to the NXP DPAA Ethernet driver.
>
> Primarily:
> 1. timestamp and IEEE 1588 support
> 2. OH and ONIC based virtual port config in DPAA
> 3. frame display and debugging infra
>
> Gagandeep Singh (3):
> bus/dpaa: fix PFDRs leaks due to FQRNIs
> net/dpaa: support mempool debug
> net/dpaa: improve the dpaa port cleanup
>
> Hemant Agrawal (5):
> bus/dpaa: fix VSP for 1G fm1-mac9 and 10
> bus/dpaa: fix the fman details status
> bus/dpaa: add port buffer manager stats
> net/dpaa: implement detailed packet parsing
> net/dpaa: enhance DPAA frame display
>
> Jun Yang (2):
> net/dpaa: share MAC FMC scheme and CC parse
> net/dpaa: improve dpaa errata A010022 handling
>
> Rohit Raj (3):
> net/dpaa: fix typecasting ch ID to u32
> bus/dpaa: add OH port mode for dpaa eth
> bus/dpaa: add ONIC port mode for the DPAA eth
>
> Vanshika Shukla (4):
> net/dpaa: support Tx confirmation to enable PTP
> net/dpaa: add support to separate Tx conf queues
> net/dpaa: support Rx/Tx timestamp read
> net/dpaa: support IEEE 1588 PTP
>
> Vinod Pullabhatla (1):
> net/dpaa: add Tx rate limiting DPAA PMD API
>
Hi Hemant,
Can you please ack/review series to proceed?
^ permalink raw reply [flat|nested] 129+ messages in thread
* RE: [PATCH v2 00/18] NXP DPAA ETH driver enhancement and fixes
2024-09-22 3:12 ` [PATCH v2 00/18] NXP DPAA ETH driver enhancement and fixes Ferruh Yigit
@ 2024-09-22 4:38 ` Hemant Agrawal
0 siblings, 0 replies; 129+ messages in thread
From: Hemant Agrawal @ 2024-09-22 4:38 UTC (permalink / raw)
To: Ferruh Yigit, dev; +Cc: ferruh.yigit
Series
Acked-by: Hemant Agrawal <hemant.agrawal@nxp.com>
> -----Original Message-----
> From: Ferruh Yigit <ferruh.yigit@amd.com>
> Sent: Sunday, September 22, 2024 8:43 AM
> To: Hemant Agrawal <hemant.agrawal@nxp.com>; dev@dpdk.org
> Cc: ferruh.yigit@intel.com
> Subject: Re: [PATCH v2 00/18] NXP DPAA ETH driver enhancement and fixes
>
> On 8/23/2024 8:32 AM, Hemant Agrawal wrote:
> > v2: address review comments
> > - improve commit message
> > - add documentarion for new functions
> > - make IEEE1588 config runtime
> >
> > This series adds several enhancement to the NXP DPAA Ethernet driver.
> >
> > Primarily:
> > 1. timestamp and IEEE 1588 support
> > 2. OH and ONIC based virtual port config in DPAA 3. frame display and
> > debugging infra
> >
> > Gagandeep Singh (3):
> > bus/dpaa: fix PFDRs leaks due to FQRNIs
> > net/dpaa: support mempool debug
> > net/dpaa: improve the dpaa port cleanup
> >
> > Hemant Agrawal (5):
> > bus/dpaa: fix VSP for 1G fm1-mac9 and 10
> > bus/dpaa: fix the fman details status
> > bus/dpaa: add port buffer manager stats
> > net/dpaa: implement detailed packet parsing
> > net/dpaa: enhance DPAA frame display
> >
> > Jun Yang (2):
> > net/dpaa: share MAC FMC scheme and CC parse
> > net/dpaa: improve dpaa errata A010022 handling
> >
> > Rohit Raj (3):
> > net/dpaa: fix typecasting ch ID to u32
> > bus/dpaa: add OH port mode for dpaa eth
> > bus/dpaa: add ONIC port mode for the DPAA eth
> >
> > Vanshika Shukla (4):
> > net/dpaa: support Tx confirmation to enable PTP
> > net/dpaa: add support to separate Tx conf queues
> > net/dpaa: support Rx/Tx timestamp read
> > net/dpaa: support IEEE 1588 PTP
> >
> > Vinod Pullabhatla (1):
> > net/dpaa: add Tx rate limiting DPAA PMD API
> >
>
> Hi Hemant,
>
> Can you please ack/review series to proceed?
^ permalink raw reply [flat|nested] 129+ messages in thread
* [PATCH v3 00/18] NXP DPAA ETH driver enhancement and fixes
2024-08-23 7:32 ` [PATCH v2 00/18] " Hemant Agrawal
` (18 preceding siblings ...)
2024-09-22 3:12 ` [PATCH v2 00/18] NXP DPAA ETH driver enhancement and fixes Ferruh Yigit
@ 2024-09-30 10:29 ` Hemant Agrawal
2024-09-30 10:29 ` [PATCH v3 01/18] bus/dpaa: fix PFDRs leaks due to FQRNIs Hemant Agrawal
` (17 more replies)
2024-09-30 12:15 ` [PATCH v4 00/18] NXP DPAA ETH driver enhancement and fixes Hemant Agrawal
20 siblings, 18 replies; 129+ messages in thread
From: Hemant Agrawal @ 2024-09-30 10:29 UTC (permalink / raw)
To: dev; +Cc: ferruh.yigit
v3: addressed Ferruh's comments
- dropped Tx rate limit API patch
- added one small bug fix
- fixed removal/add of fman_offline type
v2: address review comments
- improve commit message
- add documentation for new functions
- make IEEE1588 config runtime
This series adds several enhancements to the NXP DPAA Ethernet driver.
Primarily:
1. timestamp and IEEE 1588 support
2. OH and ONIC based virtual port config in DPAA
3. frame display and debugging infra
Gagandeep Singh (3):
bus/dpaa: fix PFDRs leaks due to FQRNIs
net/dpaa: support mempool debug
net/dpaa: improve the dpaa port cleanup
Hemant Agrawal (5):
bus/dpaa: fix VSP for 1G fm1-mac9 and 10
bus/dpaa: fix the fman details status
bus/dpaa: add port buffer manager stats
net/dpaa: implement detailed packet parsing
net/dpaa: enhance DPAA frame display
Jun Yang (2):
net/dpaa: share MAC FMC scheme and CC parse
net/dpaa: improve dpaa errata A010022 handling
Rohit Raj (3):
net/dpaa: fix typecasting ch ID to u32
bus/dpaa: add OH port mode for dpaa eth
bus/dpaa: add ONIC port mode for the DPAA eth
Vanshika Shukla (5):
net/dpaa: support Tx confirmation to enable PTP
net/dpaa: add support to separate Tx conf queues
net/dpaa: support Rx/Tx timestamp read
net/dpaa: support IEEE 1588 PTP
net/dpaa: fix reallocate_mbuf handling
doc/guides/nics/dpaa.rst | 64 ++-
doc/guides/nics/features/dpaa.ini | 2 +
drivers/bus/dpaa/base/fman/fman.c | 583 +++++++++++++++++++---
drivers/bus/dpaa/base/fman/fman_hw.c | 102 +++-
drivers/bus/dpaa/base/fman/netcfg_layer.c | 19 +-
drivers/bus/dpaa/base/qbman/qman.c | 46 +-
drivers/bus/dpaa/dpaa_bus.c | 37 +-
drivers/bus/dpaa/include/fman.h | 112 ++++-
drivers/bus/dpaa/include/fsl_fman.h | 12 +
drivers/bus/dpaa/include/fsl_qman.h | 4 +-
drivers/bus/dpaa/version.map | 4 +
drivers/net/dpaa/dpaa_ethdev.c | 428 +++++++++++++---
drivers/net/dpaa/dpaa_ethdev.h | 68 ++-
drivers/net/dpaa/dpaa_flow.c | 82 +--
drivers/net/dpaa/dpaa_fmc.c | 421 ++++++++++------
drivers/net/dpaa/dpaa_ptp.c | 118 +++++
drivers/net/dpaa/dpaa_rxtx.c | 378 ++++++++++++--
drivers/net/dpaa/dpaa_rxtx.h | 152 +++---
drivers/net/dpaa/meson.build | 1 +
19 files changed, 2121 insertions(+), 512 deletions(-)
create mode 100644 drivers/net/dpaa/dpaa_ptp.c
--
2.25.1
^ permalink raw reply [flat|nested] 129+ messages in thread
* [PATCH v3 01/18] bus/dpaa: fix PFDRs leaks due to FQRNIs
2024-09-30 10:29 ` [PATCH v3 " Hemant Agrawal
@ 2024-09-30 10:29 ` Hemant Agrawal
2024-09-30 10:29 ` [PATCH v3 02/18] net/dpaa: fix typecasting ch ID to u32 Hemant Agrawal
` (16 subsequent siblings)
17 siblings, 0 replies; 129+ messages in thread
From: Hemant Agrawal @ 2024-09-30 10:29 UTC (permalink / raw)
To: dev; +Cc: ferruh.yigit, Gagandeep Singh, stable
From: Gagandeep Singh <g.singh@nxp.com>
When a Retire FQ command is executed on a FQ in the
Tentatively Scheduled or Parked states, in that case FQ
is retired immediately and a FQRNI (Frame Queue Retirement
Notification Immediate) message is generated. Software
must read this message from MR and consume it to free
the memory used by it.
Although it is not mentioned about which memory to be used
by FQRNIs in the RM but through experiments it is proven
that it can use PFDRs. So if these messages are allowed to
build up indefinitely then PFDR resources can become exhausted
and cause enqueues to stall. Therefore software must consume
these MR messages on a regular basis to avoid depleting
the available PFDR resources.
This is the PFDRs leak issue which user can experienace while
using the DPDK crypto driver and creating and destroying the
sessions multiple times. On a session destroy, DPDK calls the
qman_retire_fq() for each FQ used by the session, but it does
not handle the FQRNIs generated and allowed them to build up
indefinitely in MR.
This patch fixes this issue by consuming the FQRNIs received
from MR immediately after FQ retire by calling drain_mr_fqrni().
Please note that this drain_mr_fqrni() only look for
FQRNI type messages to consume. If there are other type of messages
like FQRN, FQRL, FQPN, ERN etc. also coming on MR then those
messages need to be handled separately.
Fixes: c47ff048b99a ("bus/dpaa: add QMAN driver core routines")
Cc: stable@dpdk.org
Signed-off-by: Gagandeep Singh <g.singh@nxp.com>
---
drivers/bus/dpaa/base/qbman/qman.c | 46 ++++++++++++++++--------------
1 file changed, 25 insertions(+), 21 deletions(-)
diff --git a/drivers/bus/dpaa/base/qbman/qman.c b/drivers/bus/dpaa/base/qbman/qman.c
index 301057723e..9c90ee25a6 100644
--- a/drivers/bus/dpaa/base/qbman/qman.c
+++ b/drivers/bus/dpaa/base/qbman/qman.c
@@ -292,10 +292,32 @@ static inline void qman_stop_dequeues_ex(struct qman_portal *p)
qm_dqrr_set_maxfill(&p->p, 0);
}
+static inline void qm_mr_pvb_update(struct qm_portal *portal)
+{
+ register struct qm_mr *mr = &portal->mr;
+ const struct qm_mr_entry *res = qm_cl(mr->ring, mr->pi);
+
+#ifdef RTE_LIBRTE_DPAA_HWDEBUG
+ DPAA_ASSERT(mr->pmode == qm_mr_pvb);
+#endif
+ /* when accessing 'verb', use __raw_readb() to ensure that compiler
+ * inlining doesn't try to optimise out "excess reads".
+ */
+ if ((__raw_readb(&res->ern.verb) & QM_MR_VERB_VBIT) == mr->vbit) {
+ mr->pi = (mr->pi + 1) & (QM_MR_SIZE - 1);
+ if (!mr->pi)
+ mr->vbit ^= QM_MR_VERB_VBIT;
+ mr->fill++;
+ res = MR_INC(res);
+ }
+ dcbit_ro(res);
+}
+
static int drain_mr_fqrni(struct qm_portal *p)
{
const struct qm_mr_entry *msg;
loop:
+ qm_mr_pvb_update(p);
msg = qm_mr_current(p);
if (!msg) {
/*
@@ -317,6 +339,7 @@ static int drain_mr_fqrni(struct qm_portal *p)
do {
now = mfatb();
} while ((then + 10000) > now);
+ qm_mr_pvb_update(p);
msg = qm_mr_current(p);
if (!msg)
return 0;
@@ -479,27 +502,6 @@ static inline int qm_mr_init(struct qm_portal *portal,
return 0;
}
-static inline void qm_mr_pvb_update(struct qm_portal *portal)
-{
- register struct qm_mr *mr = &portal->mr;
- const struct qm_mr_entry *res = qm_cl(mr->ring, mr->pi);
-
-#ifdef RTE_LIBRTE_DPAA_HWDEBUG
- DPAA_ASSERT(mr->pmode == qm_mr_pvb);
-#endif
- /* when accessing 'verb', use __raw_readb() to ensure that compiler
- * inlining doesn't try to optimise out "excess reads".
- */
- if ((__raw_readb(&res->ern.verb) & QM_MR_VERB_VBIT) == mr->vbit) {
- mr->pi = (mr->pi + 1) & (QM_MR_SIZE - 1);
- if (!mr->pi)
- mr->vbit ^= QM_MR_VERB_VBIT;
- mr->fill++;
- res = MR_INC(res);
- }
- dcbit_ro(res);
-}
-
struct qman_portal *
qman_init_portal(struct qman_portal *portal,
const struct qm_portal_config *c,
@@ -1794,6 +1796,8 @@ int qman_retire_fq(struct qman_fq *fq, u32 *flags)
}
out:
FQUNLOCK(fq);
+ /* Draining FQRNIs, if any */
+ drain_mr_fqrni(&p->p);
return rval;
}
--
2.25.1
^ permalink raw reply [flat|nested] 129+ messages in thread
* [PATCH v3 02/18] net/dpaa: fix typecasting ch ID to u32
2024-09-30 10:29 ` [PATCH v3 " Hemant Agrawal
2024-09-30 10:29 ` [PATCH v3 01/18] bus/dpaa: fix PFDRs leaks due to FQRNIs Hemant Agrawal
@ 2024-09-30 10:29 ` Hemant Agrawal
2024-09-30 10:29 ` [PATCH v3 03/18] bus/dpaa: fix VSP for 1G fm1-mac9 and 10 Hemant Agrawal
` (15 subsequent siblings)
17 siblings, 0 replies; 129+ messages in thread
From: Hemant Agrawal @ 2024-09-30 10:29 UTC (permalink / raw)
To: dev; +Cc: ferruh.yigit, Rohit Raj, hemant.agrawal, stable
From: Rohit Raj <rohit.raj@nxp.com>
Avoid typecasting ch_id to u32 and passing it to another API, since
that can corrupt other data. Instead, create a new u32 variable and
typecast it back to u16 after it gets updated by the API.
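Illustrative sketch of the problem and the fix (mirrors the change
below): writing through a u32 pointer that aliases a u16 field
clobbers the adjacent two bytes, so a temporary is used instead.

	u32 ch_id;

	/* was: qman_alloc_pool_range((u32 *)&rxq->ch_id, 1, 1, 0); */
	qman_alloc_pool_range(&ch_id, 1, 1, 0);
	rxq->ch_id = (u16)ch_id;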
Fixes: 0c504f6950b6 ("net/dpaa: support push mode")
Cc: hemant.agrawal@nxp.com
Cc: stable@dpdk.org
Signed-off-by: Rohit Raj <rohit.raj@nxp.com>
---
drivers/net/dpaa/dpaa_ethdev.c | 6 ++++--
1 file changed, 4 insertions(+), 2 deletions(-)
diff --git a/drivers/net/dpaa/dpaa_ethdev.c b/drivers/net/dpaa/dpaa_ethdev.c
index 060b8c678f..1a2de5240f 100644
--- a/drivers/net/dpaa/dpaa_ethdev.c
+++ b/drivers/net/dpaa/dpaa_ethdev.c
@@ -972,7 +972,7 @@ int dpaa_eth_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
struct fman_if *fif = dev->process_private;
struct qman_fq *rxq = &dpaa_intf->rx_queues[queue_idx];
struct qm_mcc_initfq opts = {0};
- u32 flags = 0;
+ u32 ch_id, flags = 0;
int ret;
u32 buffsz = rte_pktmbuf_data_room_size(mp) - RTE_PKTMBUF_HEADROOM;
uint32_t max_rx_pktlen;
@@ -1096,7 +1096,9 @@ int dpaa_eth_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
DPAA_IF_RX_CONTEXT_STASH;
/*Create a channel and associate given queue with the channel*/
- qman_alloc_pool_range((u32 *)&rxq->ch_id, 1, 1, 0);
+ qman_alloc_pool_range(&ch_id, 1, 1, 0);
+ rxq->ch_id = (u16)ch_id;
+
opts.we_mask = opts.we_mask | QM_INITFQ_WE_DESTWQ;
opts.fqd.dest.channel = rxq->ch_id;
opts.fqd.dest.wq = DPAA_IF_RX_PRIORITY;
--
2.25.1
^ permalink raw reply [flat|nested] 129+ messages in thread
* [PATCH v3 03/18] bus/dpaa: fix VSP for 1G fm1-mac9 and 10
2024-09-30 10:29 ` [PATCH v3 " Hemant Agrawal
2024-09-30 10:29 ` [PATCH v3 01/18] bus/dpaa: fix PFDRs leaks due to FQRNIs Hemant Agrawal
2024-09-30 10:29 ` [PATCH v3 02/18] net/dpaa: fix typecasting ch ID to u32 Hemant Agrawal
@ 2024-09-30 10:29 ` Hemant Agrawal
2024-09-30 10:29 ` [PATCH v3 04/18] bus/dpaa: fix the fman details status Hemant Agrawal
` (14 subsequent siblings)
17 siblings, 0 replies; 129+ messages in thread
From: Hemant Agrawal @ 2024-09-30 10:29 UTC (permalink / raw)
To: dev; +Cc: ferruh.yigit, stable
There is no need to classify the interface separately for 1G and 10G.
Note that VSP (Virtual Storage Profile) is the DPAA equivalent of an
SR-IOV config, used to logically divide a physical port into virtual
ports.
Fixes: e0718bb2ca95 ("bus/dpaa: add virtual storage profile port init")
Cc: stable@dpdk.org
Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
---
drivers/bus/dpaa/base/fman/fman.c | 29 +++++++++++++++++++++++++++--
1 file changed, 27 insertions(+), 2 deletions(-)
diff --git a/drivers/bus/dpaa/base/fman/fman.c b/drivers/bus/dpaa/base/fman/fman.c
index 41195eb0a7..beeb03dbf2 100644
--- a/drivers/bus/dpaa/base/fman/fman.c
+++ b/drivers/bus/dpaa/base/fman/fman.c
@@ -153,7 +153,7 @@ static void fman_if_vsp_init(struct __fman_if *__if)
size_t lenp;
const uint8_t mac_idx[] = {-1, 0, 1, 2, 3, 4, 5, 6, 7, 0, 1};
- if (__if->__if.mac_type == fman_mac_1g) {
+ if (__if->__if.mac_idx <= 8) {
for_each_compatible_node(dev, NULL,
"fsl,fman-port-1g-rx-extended-args") {
prop = of_get_property(dev, "cell-index", &lenp);
@@ -176,7 +176,32 @@ static void fman_if_vsp_init(struct __fman_if *__if)
}
}
}
- } else if (__if->__if.mac_type == fman_mac_10g) {
+
+ for_each_compatible_node(dev, NULL,
+ "fsl,fman-port-op-extended-args") {
+ prop = of_get_property(dev, "cell-index", &lenp);
+
+ if (prop) {
+ cell_index = of_read_number(&prop[0],
+ lenp / sizeof(phandle));
+
+ if (cell_index == __if->__if.mac_idx) {
+ prop = of_get_property(dev,
+ "vsp-window",
+ &lenp);
+
+ if (prop) {
+ __if->__if.num_profiles =
+ of_read_number(&prop[0],
+ 1);
+ __if->__if.base_profile_id =
+ of_read_number(&prop[1],
+ 1);
+ }
+ }
+ }
+ }
+ } else {
for_each_compatible_node(dev, NULL,
"fsl,fman-port-10g-rx-extended-args") {
prop = of_get_property(dev, "cell-index", &lenp);
--
2.25.1
^ permalink raw reply [flat|nested] 129+ messages in thread
* [PATCH v3 04/18] bus/dpaa: fix the fman details status
2024-09-30 10:29 ` [PATCH v3 " Hemant Agrawal
` (2 preceding siblings ...)
2024-09-30 10:29 ` [PATCH v3 03/18] bus/dpaa: fix VSP for 1G fm1-mac9 and 10 Hemant Agrawal
@ 2024-09-30 10:29 ` Hemant Agrawal
2024-09-30 10:29 ` [PATCH v3 05/18] bus/dpaa: add port buffer manager stats Hemant Agrawal
` (13 subsequent siblings)
17 siblings, 0 replies; 129+ messages in thread
From: Hemant Agrawal @ 2024-09-30 10:29 UTC (permalink / raw)
To: dev; +Cc: ferruh.yigit, stable
Fix the incorrect placement of parentheses when calculating stats.
This corrects "(a | b) << 32" to "a | (b << 32)".
Fixes: e62a3f4183f1 ("bus/dpaa: fix statistics reading")
Cc: stable@dpdk.org
Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
---
drivers/bus/dpaa/base/fman/fman_hw.c | 9 +++++----
1 file changed, 5 insertions(+), 4 deletions(-)
diff --git a/drivers/bus/dpaa/base/fman/fman_hw.c b/drivers/bus/dpaa/base/fman/fman_hw.c
index 24a99f7235..97e792806f 100644
--- a/drivers/bus/dpaa/base/fman/fman_hw.c
+++ b/drivers/bus/dpaa/base/fman/fman_hw.c
@@ -243,10 +243,11 @@ fman_if_stats_get_all(struct fman_if *p, uint64_t *value, int n)
int i;
uint64_t base_offset = offsetof(struct memac_regs, reoct_l);
- for (i = 0; i < n; i++)
- value[i] = (((u64)in_be32((char *)regs + base_offset + 8 * i) |
- (u64)in_be32((char *)regs + base_offset +
- 8 * i + 4)) << 32);
+ for (i = 0; i < n; i++) {
+ uint64_t a = in_be32((char *)regs + base_offset + 8 * i);
+ uint64_t b = in_be32((char *)regs + base_offset + 8 * i + 4);
+ value[i] = a | b << 32;
+ }
}
void
--
2.25.1
^ permalink raw reply [flat|nested] 129+ messages in thread
* [PATCH v3 05/18] bus/dpaa: add port buffer manager stats
2024-09-30 10:29 ` [PATCH v3 " Hemant Agrawal
` (3 preceding siblings ...)
2024-09-30 10:29 ` [PATCH v3 04/18] bus/dpaa: fix the fman details status Hemant Agrawal
@ 2024-09-30 10:29 ` Hemant Agrawal
2024-09-30 10:29 ` [PATCH v3 06/18] net/dpaa: support Tx confirmation to enable PTP Hemant Agrawal
` (12 subsequent siblings)
17 siblings, 0 replies; 129+ messages in thread
From: Hemant Agrawal @ 2024-09-30 10:29 UTC (permalink / raw)
To: dev; +Cc: ferruh.yigit, Gagandeep Singh
Add BMI statistics and improve the existing extended
statistics.
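For illustration, the new counters are exposed through the standard
xstats API; a hypothetical application-side read (port_id and array
sizes are placeholders, usual ethdev/inttypes includes assumed):

	struct rte_eth_xstat_name names[128];
	struct rte_eth_xstat xstats[128];
	int n = rte_eth_xstats_get_names(port_id, names, 128);
	int m = rte_eth_xstats_get(port_id, xstats, 128);

	for (int i = 0; i < m && i < n; i++)
		printf("%s: %" PRIu64 "\n",
		       names[xstats[i].id].name, xstats[i].value);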
Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Signed-off-by: Gagandeep Singh <g.singh@nxp.com>
---
drivers/bus/dpaa/base/fman/fman_hw.c | 61 ++++++++++++++++++++++++++++
drivers/bus/dpaa/include/fman.h | 4 +-
drivers/bus/dpaa/include/fsl_fman.h | 12 ++++++
drivers/bus/dpaa/version.map | 4 ++
drivers/net/dpaa/dpaa_ethdev.c | 46 ++++++++++++++++++---
drivers/net/dpaa/dpaa_ethdev.h | 12 ++++++
6 files changed, 132 insertions(+), 7 deletions(-)
diff --git a/drivers/bus/dpaa/base/fman/fman_hw.c b/drivers/bus/dpaa/base/fman/fman_hw.c
index 97e792806f..124c69edb4 100644
--- a/drivers/bus/dpaa/base/fman/fman_hw.c
+++ b/drivers/bus/dpaa/base/fman/fman_hw.c
@@ -267,6 +267,67 @@ fman_if_stats_reset(struct fman_if *p)
;
}
+void
+fman_if_bmi_stats_enable(struct fman_if *p)
+{
+ struct __fman_if *m = container_of(p, struct __fman_if, __if);
+ struct rx_bmi_regs *regs = (struct rx_bmi_regs *)m->bmi_map;
+ uint32_t tmp;
+
+ tmp = in_be32(&regs->fmbm_rstc);
+
+ tmp |= FMAN_BMI_COUNTERS_EN;
+
+ out_be32(&regs->fmbm_rstc, tmp);
+}
+
+void
+fman_if_bmi_stats_disable(struct fman_if *p)
+{
+ struct __fman_if *m = container_of(p, struct __fman_if, __if);
+ struct rx_bmi_regs *regs = (struct rx_bmi_regs *)m->bmi_map;
+ uint32_t tmp;
+
+ tmp = in_be32(&regs->fmbm_rstc);
+
+ tmp &= ~FMAN_BMI_COUNTERS_EN;
+
+ out_be32(&regs->fmbm_rstc, tmp);
+}
+
+void
+fman_if_bmi_stats_get_all(struct fman_if *p, uint64_t *value)
+{
+ struct __fman_if *m = container_of(p, struct __fman_if, __if);
+ struct rx_bmi_regs *regs = (struct rx_bmi_regs *)m->bmi_map;
+ int i = 0;
+
+ value[i++] = (u32)in_be32(&regs->fmbm_rfrc);
+ value[i++] = (u32)in_be32(&regs->fmbm_rfbc);
+ value[i++] = (u32)in_be32(&regs->fmbm_rlfc);
+ value[i++] = (u32)in_be32(&regs->fmbm_rffc);
+ value[i++] = (u32)in_be32(&regs->fmbm_rfdc);
+ value[i++] = (u32)in_be32(&regs->fmbm_rfldec);
+ value[i++] = (u32)in_be32(&regs->fmbm_rodc);
+ value[i++] = (u32)in_be32(&regs->fmbm_rbdc);
+}
+
+void
+fman_if_bmi_stats_reset(struct fman_if *p)
+{
+ struct __fman_if *m = container_of(p, struct __fman_if, __if);
+ struct rx_bmi_regs *regs = (struct rx_bmi_regs *)m->bmi_map;
+
+ out_be32(&regs->fmbm_rfrc, 0);
+ out_be32(&regs->fmbm_rfbc, 0);
+ out_be32(&regs->fmbm_rlfc, 0);
+ out_be32(&regs->fmbm_rffc, 0);
+ out_be32(&regs->fmbm_rfdc, 0);
+ out_be32(&regs->fmbm_rfldec, 0);
+ out_be32(&regs->fmbm_rodc, 0);
+ out_be32(&regs->fmbm_rbdc, 0);
+}
+
void
fman_if_promiscuous_enable(struct fman_if *p)
{
diff --git a/drivers/bus/dpaa/include/fman.h b/drivers/bus/dpaa/include/fman.h
index 3a6dd555a7..60681068ea 100644
--- a/drivers/bus/dpaa/include/fman.h
+++ b/drivers/bus/dpaa/include/fman.h
@@ -56,6 +56,8 @@
#define FMAN_PORT_BMI_FIFO_UNITS 0x100
#define FMAN_PORT_IC_OFFSET_UNITS 0x10
+#define FMAN_BMI_COUNTERS_EN 0x80000000
+
#define FMAN_ENABLE_BPOOL_DEPLETION 0xF00000F0
#define HASH_CTRL_MCAST_EN 0x00000100
@@ -260,7 +262,7 @@ struct rx_bmi_regs {
/**< Buffer Manager pool Information-*/
uint32_t fmbm_acnt[FMAN_PORT_MAX_EXT_POOLS_NUM];
/**< Allocate Counter-*/
- uint32_t reserved0130[8];
+ uint32_t reserved0120[16];
/**< 0x130/0x140 - 0x15F reserved -*/
uint32_t fmbm_rcgm[FMAN_PORT_CG_MAP_NUM];
/**< Congestion Group Map*/
diff --git a/drivers/bus/dpaa/include/fsl_fman.h b/drivers/bus/dpaa/include/fsl_fman.h
index 20690f8329..5a9750ad0c 100644
--- a/drivers/bus/dpaa/include/fsl_fman.h
+++ b/drivers/bus/dpaa/include/fsl_fman.h
@@ -60,6 +60,18 @@ void fman_if_stats_reset(struct fman_if *p);
__rte_internal
void fman_if_stats_get_all(struct fman_if *p, uint64_t *value, int n);
+__rte_internal
+void fman_if_bmi_stats_enable(struct fman_if *p);
+
+__rte_internal
+void fman_if_bmi_stats_disable(struct fman_if *p);
+
+__rte_internal
+void fman_if_bmi_stats_get_all(struct fman_if *p, uint64_t *value);
+
+__rte_internal
+void fman_if_bmi_stats_reset(struct fman_if *p);
+
/* Set ignore pause option for a specific interface */
void fman_if_set_rx_ignore_pause_frames(struct fman_if *p, bool enable);
diff --git a/drivers/bus/dpaa/version.map b/drivers/bus/dpaa/version.map
index 3f547f75cf..a17d57632e 100644
--- a/drivers/bus/dpaa/version.map
+++ b/drivers/bus/dpaa/version.map
@@ -24,6 +24,10 @@ INTERNAL {
fman_dealloc_bufs_mask_hi;
fman_dealloc_bufs_mask_lo;
fman_if_add_mac_addr;
+ fman_if_bmi_stats_enable;
+ fman_if_bmi_stats_disable;
+ fman_if_bmi_stats_get_all;
+ fman_if_bmi_stats_reset;
fman_if_clear_mac_addr;
fman_if_disable_rx;
fman_if_discard_rx_errors;
diff --git a/drivers/net/dpaa/dpaa_ethdev.c b/drivers/net/dpaa/dpaa_ethdev.c
index 1a2de5240f..90b34e42f2 100644
--- a/drivers/net/dpaa/dpaa_ethdev.c
+++ b/drivers/net/dpaa/dpaa_ethdev.c
@@ -131,6 +131,22 @@ static const struct rte_dpaa_xstats_name_off dpaa_xstats_strings[] = {
offsetof(struct dpaa_if_stats, tvlan)},
{"rx_undersized",
offsetof(struct dpaa_if_stats, tund)},
+ {"rx_frame_counter",
+ offsetof(struct dpaa_if_rx_bmi_stats, fmbm_rfrc)},
+ {"rx_bad_frames_count",
+ offsetof(struct dpaa_if_rx_bmi_stats, fmbm_rfbc)},
+ {"rx_large_frames_count",
+ offsetof(struct dpaa_if_rx_bmi_stats, fmbm_rlfc)},
+ {"rx_filter_frames_count",
+ offsetof(struct dpaa_if_rx_bmi_stats, fmbm_rffc)},
+ {"rx_frame_discrad_count",
+ offsetof(struct dpaa_if_rx_bmi_stats, fmbm_rfdc)},
+ {"rx_frame_list_dma_err_count",
+ offsetof(struct dpaa_if_rx_bmi_stats, fmbm_rfldec)},
+ {"rx_out_of_buffer_discard ",
+ offsetof(struct dpaa_if_rx_bmi_stats, fmbm_rodc)},
+ {"rx_buf_diallocate",
+ offsetof(struct dpaa_if_rx_bmi_stats, fmbm_rbdc)},
};
static struct rte_dpaa_driver rte_dpaa_pmd;
@@ -430,6 +446,7 @@ static void dpaa_interrupt_handler(void *param)
static int dpaa_eth_dev_start(struct rte_eth_dev *dev)
{
struct dpaa_if *dpaa_intf = dev->data->dev_private;
+ struct fman_if *fif = dev->process_private;
uint16_t i;
PMD_INIT_FUNC_TRACE();
@@ -443,7 +460,9 @@ static int dpaa_eth_dev_start(struct rte_eth_dev *dev)
else
dev->tx_pkt_burst = dpaa_eth_queue_tx;
- fman_if_enable_rx(dev->process_private);
+ fman_if_bmi_stats_enable(fif);
+ fman_if_bmi_stats_reset(fif);
+ fman_if_enable_rx(fif);
for (i = 0; i < dev->data->nb_rx_queues; i++)
dev->data->rx_queue_state[i] = RTE_ETH_QUEUE_STATE_STARTED;
@@ -461,8 +480,10 @@ static int dpaa_eth_dev_stop(struct rte_eth_dev *dev)
PMD_INIT_FUNC_TRACE();
dev->data->dev_started = 0;
- if (!fif->is_shared_mac)
+ if (!fif->is_shared_mac) {
+ fman_if_bmi_stats_disable(fif);
fman_if_disable_rx(fif);
+ }
dev->tx_pkt_burst = dpaa_eth_tx_drop_all;
for (i = 0; i < dev->data->nb_rx_queues; i++)
@@ -769,6 +790,7 @@ static int dpaa_eth_stats_reset(struct rte_eth_dev *dev)
PMD_INIT_FUNC_TRACE();
fman_if_stats_reset(dev->process_private);
+ fman_if_bmi_stats_reset(dev->process_private);
return 0;
}
@@ -777,8 +799,9 @@ static int
dpaa_dev_xstats_get(struct rte_eth_dev *dev, struct rte_eth_xstat *xstats,
unsigned int n)
{
- unsigned int i = 0, num = RTE_DIM(dpaa_xstats_strings);
+ unsigned int i = 0, j, num = RTE_DIM(dpaa_xstats_strings);
uint64_t values[sizeof(struct dpaa_if_stats) / 8];
+ unsigned int bmi_count = sizeof(struct dpaa_if_rx_bmi_stats) / 4;
if (n < num)
return num;
@@ -789,10 +812,16 @@ dpaa_dev_xstats_get(struct rte_eth_dev *dev, struct rte_eth_xstat *xstats,
fman_if_stats_get_all(dev->process_private, values,
sizeof(struct dpaa_if_stats) / 8);
- for (i = 0; i < num; i++) {
+ for (i = 0; i < num - (bmi_count - 1); i++) {
xstats[i].id = i;
xstats[i].value = values[dpaa_xstats_strings[i].offset / 8];
}
+ fman_if_bmi_stats_get_all(dev->process_private, values);
+ for (j = 0; i < num; i++, j++) {
+ xstats[i].id = i;
+ xstats[i].value = values[j];
+ }
+
return i;
}
@@ -819,8 +848,9 @@ static int
dpaa_xstats_get_by_id(struct rte_eth_dev *dev, const uint64_t *ids,
uint64_t *values, unsigned int n)
{
- unsigned int i, stat_cnt = RTE_DIM(dpaa_xstats_strings);
+ unsigned int i, j, stat_cnt = RTE_DIM(dpaa_xstats_strings);
uint64_t values_copy[sizeof(struct dpaa_if_stats) / 8];
+ unsigned int bmi_count = sizeof(struct dpaa_if_rx_bmi_stats) / 4;
if (!ids) {
if (n < stat_cnt)
@@ -832,10 +862,14 @@ dpaa_xstats_get_by_id(struct rte_eth_dev *dev, const uint64_t *ids,
fman_if_stats_get_all(dev->process_private, values_copy,
sizeof(struct dpaa_if_stats) / 8);
- for (i = 0; i < stat_cnt; i++)
+ for (i = 0; i < stat_cnt - (bmi_count - 1); i++)
values[i] =
values_copy[dpaa_xstats_strings[i].offset / 8];
+ fman_if_bmi_stats_get_all(dev->process_private, values);
+ for (j = 0; i < stat_cnt; i++, j++)
+ values[i] = values_copy[j];
+
return stat_cnt;
}
diff --git a/drivers/net/dpaa/dpaa_ethdev.h b/drivers/net/dpaa/dpaa_ethdev.h
index b6c61b8b6b..261a5a3ca7 100644
--- a/drivers/net/dpaa/dpaa_ethdev.h
+++ b/drivers/net/dpaa/dpaa_ethdev.h
@@ -212,6 +212,18 @@ dpaa_rx_cb_atomic(void *event,
const struct qm_dqrr_entry *dqrr,
void **bufs);
+struct dpaa_if_rx_bmi_stats {
+ uint32_t fmbm_rstc; /**< Rx Statistics Counters*/
+ uint32_t fmbm_rfrc; /**< Rx Frame Counter*/
+ uint32_t fmbm_rfbc; /**< Rx Bad Frames Counter*/
+ uint32_t fmbm_rlfc; /**< Rx Large Frames Counter*/
+ uint32_t fmbm_rffc; /**< Rx Filter Frames Counter*/
+ uint32_t fmbm_rfdc; /**< Rx Frame Discard Counter*/
+ uint32_t fmbm_rfldec; /**< Rx Frames List DMA Error Counter*/
+ uint32_t fmbm_rodc; /**< Rx Out of Buffers Discard nntr*/
+ uint32_t fmbm_rbdc; /**< Rx Buffers Deallocate Counter*/
+};
+
/* PMD related logs */
extern int dpaa_logtype_pmd;
#define RTE_LOGTYPE_DPAA_PMD dpaa_logtype_pmd
--
2.25.1
^ permalink raw reply [flat|nested] 129+ messages in thread
* [PATCH v3 06/18] net/dpaa: support Tx confirmation to enable PTP
2024-09-30 10:29 ` [PATCH v3 " Hemant Agrawal
` (4 preceding siblings ...)
2024-09-30 10:29 ` [PATCH v3 05/18] bus/dpaa: add port buffer manager stats Hemant Agrawal
@ 2024-09-30 10:29 ` Hemant Agrawal
2024-09-30 10:29 ` [PATCH v3 07/18] net/dpaa: add support to separate Tx conf queues Hemant Agrawal
` (11 subsequent siblings)
17 siblings, 0 replies; 129+ messages in thread
From: Hemant Agrawal @ 2024-09-30 10:29 UTC (permalink / raw)
To: dev; +Cc: ferruh.yigit, Vanshika Shukla
From: Vanshika Shukla <vanshika.shukla@nxp.com>
TX confirmation provides dedicated confirmation
queues for transmitted packets. These queues are
used by software to get the status and to release
the transmitted packet buffers.
This patch also makes the IEEE 1588 support a runtime
devargs option.
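For example (illustrative command line, assuming the allow-list
syntax shown in the documentation update below):

	dpdk-testpmd -a dpaa:fm1-mac3,drv_ieee1588=1 -- -i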
Signed-off-by: Vanshika Shukla <vanshika.shukla@nxp.com>
Acked-by: Hemant Agrawal <hemant.agrawal@nxp.com>
---
doc/guides/nics/dpaa.rst | 3 +
drivers/net/dpaa/dpaa_ethdev.c | 124 ++++++++++++++++++++++++++-------
drivers/net/dpaa/dpaa_ethdev.h | 4 +-
drivers/net/dpaa/dpaa_rxtx.c | 49 +++++++++++++
drivers/net/dpaa/dpaa_rxtx.h | 2 +
5 files changed, 154 insertions(+), 28 deletions(-)
diff --git a/doc/guides/nics/dpaa.rst b/doc/guides/nics/dpaa.rst
index e8402dff52..acf4daab02 100644
--- a/doc/guides/nics/dpaa.rst
+++ b/doc/guides/nics/dpaa.rst
@@ -264,6 +264,9 @@ for details.
Done
testpmd>
+* Use dev arg option ``drv_ieee1588=1`` to enable ieee 1588 support at
+ driver level. e.g. ``dpaa:fm1-mac3,drv_ieee1588=1``
+
FMAN Config
-----------
diff --git a/drivers/net/dpaa/dpaa_ethdev.c b/drivers/net/dpaa/dpaa_ethdev.c
index 90b34e42f2..bba305cfb1 100644
--- a/drivers/net/dpaa/dpaa_ethdev.c
+++ b/drivers/net/dpaa/dpaa_ethdev.c
@@ -1,7 +1,7 @@
/* SPDX-License-Identifier: BSD-3-Clause
*
* Copyright 2016 Freescale Semiconductor, Inc. All rights reserved.
- * Copyright 2017-2020 NXP
+ * Copyright 2017-2020,2022-2024 NXP
*
*/
/* System headers */
@@ -30,6 +30,7 @@
#include <rte_eal.h>
#include <rte_alarm.h>
#include <rte_ether.h>
+#include <rte_kvargs.h>
#include <ethdev_driver.h>
#include <rte_malloc.h>
#include <rte_ring.h>
@@ -50,6 +51,7 @@
#include <process.h>
#include <fmlib/fm_ext.h>
+#define DRIVER_IEEE1588 "drv_ieee1588"
#define CHECK_INTERVAL 100 /* 100ms */
#define MAX_REPEAT_TIME 90 /* 9s (90 * 100ms) in total */
@@ -83,6 +85,7 @@ static uint64_t dev_tx_offloads_nodis =
static int is_global_init;
static int fmc_q = 1; /* Indicates the use of static fmc for distribution */
static int default_q; /* use default queue - FMC is not executed*/
+int dpaa_ieee_1588; /* use to indicate if IEEE 1588 is enabled for the driver */
/* At present we only allow up to 4 push mode queues as default - as each of
* this queue need dedicated portal and we are short of portals.
*/
@@ -1826,9 +1829,15 @@ static int dpaa_tx_queue_init(struct qman_fq *fq,
opts.fqd.dest.wq = DPAA_IF_TX_PRIORITY;
opts.fqd.fq_ctrl = QM_FQCTRL_PREFERINCACHE;
opts.fqd.context_b = 0;
- /* no tx-confirmation */
- opts.fqd.context_a.hi = 0x80000000 | fman_dealloc_bufs_mask_hi;
- opts.fqd.context_a.lo = 0 | fman_dealloc_bufs_mask_lo;
+ if (dpaa_ieee_1588) {
+ opts.fqd.context_a.lo = 0;
+ opts.fqd.context_a.hi = fman_dealloc_bufs_mask_hi;
+ } else {
+ /* no tx-confirmation */
+ opts.fqd.context_a.lo = fman_dealloc_bufs_mask_lo;
+ opts.fqd.context_a.hi = 0x80000000 | fman_dealloc_bufs_mask_hi;
+ }
+
if (fman_ip_rev >= FMAN_V3) {
/* Set B0V bit in contextA to set ASPID to 0 */
opts.fqd.context_a.hi |= 0x04000000;
@@ -1861,9 +1870,10 @@ static int dpaa_tx_queue_init(struct qman_fq *fq,
return ret;
}
-#ifdef RTE_LIBRTE_DPAA_DEBUG_DRIVER
-/* Initialise a DEBUG FQ ([rt]x_error, rx_default). */
-static int dpaa_debug_queue_init(struct qman_fq *fq, uint32_t fqid)
+/* Initialise a DEBUG FQ ([rt]x_error, rx_default) and DPAA TX CONFIRM queue
+ * to support PTP
+ */
+static int dpaa_def_queue_init(struct qman_fq *fq, uint32_t fqid)
{
struct qm_mcc_initfq opts = {0};
int ret;
@@ -1872,15 +1882,15 @@ static int dpaa_debug_queue_init(struct qman_fq *fq, uint32_t fqid)
ret = qman_reserve_fqid(fqid);
if (ret) {
- DPAA_PMD_ERR("Reserve debug fqid %d failed with ret: %d",
+ DPAA_PMD_ERR("Reserve fqid %d failed with ret: %d",
fqid, ret);
return -EINVAL;
}
/* "map" this Rx FQ to one of the interfaces Tx FQID */
- DPAA_PMD_DEBUG("Creating debug fq %p, fqid %d", fq, fqid);
+ DPAA_PMD_DEBUG("Creating fq %p, fqid %d", fq, fqid);
ret = qman_create_fq(fqid, QMAN_FQ_FLAG_NO_ENQUEUE, fq);
if (ret) {
- DPAA_PMD_ERR("create debug fqid %d failed with ret: %d",
+ DPAA_PMD_ERR("create fqid %d failed with ret: %d",
fqid, ret);
return ret;
}
@@ -1888,11 +1898,10 @@ static int dpaa_debug_queue_init(struct qman_fq *fq, uint32_t fqid)
opts.fqd.dest.wq = DPAA_IF_DEBUG_PRIORITY;
ret = qman_init_fq(fq, 0, &opts);
if (ret)
- DPAA_PMD_ERR("init debug fqid %d failed with ret: %d",
+ DPAA_PMD_ERR("init fqid %d failed with ret: %d",
fqid, ret);
return ret;
}
-#endif
/* Initialise a network interface */
static int
@@ -1927,6 +1936,43 @@ dpaa_dev_init_secondary(struct rte_eth_dev *eth_dev)
return 0;
}
+static int
+check_devargs_handler(__rte_unused const char *key, const char *value,
+ __rte_unused void *opaque)
+{
+ if (strcmp(value, "1"))
+ return -1;
+
+ return 0;
+}
+
+static int
+dpaa_get_devargs(struct rte_devargs *devargs, const char *key)
+{
+ struct rte_kvargs *kvlist;
+
+ if (!devargs)
+ return 0;
+
+ kvlist = rte_kvargs_parse(devargs->args, NULL);
+ if (!kvlist)
+ return 0;
+
+ if (!rte_kvargs_count(kvlist, key)) {
+ rte_kvargs_free(kvlist);
+ return 0;
+ }
+
+ if (rte_kvargs_process(kvlist, key,
+ check_devargs_handler, NULL) < 0) {
+ rte_kvargs_free(kvlist);
+ return 0;
+ }
+ rte_kvargs_free(kvlist);
+
+ return 1;
+}
+
/* Initialise a network interface */
static int
dpaa_dev_init(struct rte_eth_dev *eth_dev)
@@ -1944,6 +1990,7 @@ dpaa_dev_init(struct rte_eth_dev *eth_dev)
uint32_t dev_rx_fqids[DPAA_MAX_NUM_PCD_QUEUES];
int8_t dev_vspids[DPAA_MAX_NUM_PCD_QUEUES];
int8_t vsp_id = -1;
+ struct rte_device *dev = eth_dev->device;
PMD_INIT_FUNC_TRACE();
@@ -1960,6 +2007,9 @@ dpaa_dev_init(struct rte_eth_dev *eth_dev)
dpaa_intf->ifid = dev_id;
dpaa_intf->cfg = cfg;
+ if (dpaa_get_devargs(dev->devargs, DRIVER_IEEE1588))
+ dpaa_ieee_1588 = 1;
+
memset((char *)dev_rx_fqids, 0,
sizeof(uint32_t) * DPAA_MAX_NUM_PCD_QUEUES);
@@ -2079,6 +2129,14 @@ dpaa_dev_init(struct rte_eth_dev *eth_dev)
goto free_rx;
}
+ dpaa_intf->tx_conf_queues = rte_zmalloc(NULL, sizeof(struct qman_fq) *
+ MAX_DPAA_CORES, MAX_CACHELINE);
+ if (!dpaa_intf->tx_conf_queues) {
+ DPAA_PMD_ERR("Failed to alloc mem for TX conf queues\n");
+ ret = -ENOMEM;
+ goto free_rx;
+ }
+
/* If congestion control is enabled globally*/
if (td_tx_threshold) {
dpaa_intf->cgr_tx = rte_zmalloc(NULL,
@@ -2115,22 +2173,32 @@ dpaa_dev_init(struct rte_eth_dev *eth_dev)
}
dpaa_intf->nb_tx_queues = MAX_DPAA_CORES;
-#ifdef RTE_LIBRTE_DPAA_DEBUG_DRIVER
- ret = dpaa_debug_queue_init(&dpaa_intf->debug_queues
- [DPAA_DEBUG_FQ_RX_ERROR], fman_intf->fqid_rx_err);
- if (ret) {
- DPAA_PMD_ERR("DPAA RX ERROR queue init failed!");
- goto free_tx;
- }
- dpaa_intf->debug_queues[DPAA_DEBUG_FQ_RX_ERROR].dpaa_intf = dpaa_intf;
- ret = dpaa_debug_queue_init(&dpaa_intf->debug_queues
- [DPAA_DEBUG_FQ_TX_ERROR], fman_intf->fqid_tx_err);
- if (ret) {
- DPAA_PMD_ERR("DPAA TX ERROR queue init failed!");
- goto free_tx;
- }
- dpaa_intf->debug_queues[DPAA_DEBUG_FQ_TX_ERROR].dpaa_intf = dpaa_intf;
+#if !defined(RTE_LIBRTE_DPAA_DEBUG_DRIVER)
+ if (dpaa_ieee_1588)
#endif
+ {
+ ret = dpaa_def_queue_init(&dpaa_intf->debug_queues
+ [DPAA_DEBUG_FQ_RX_ERROR], fman_intf->fqid_rx_err);
+ if (ret) {
+ DPAA_PMD_ERR("DPAA RX ERROR queue init failed!");
+ goto free_tx;
+ }
+ dpaa_intf->debug_queues[DPAA_DEBUG_FQ_RX_ERROR].dpaa_intf = dpaa_intf;
+ ret = dpaa_def_queue_init(&dpaa_intf->debug_queues
+ [DPAA_DEBUG_FQ_TX_ERROR], fman_intf->fqid_tx_err);
+ if (ret) {
+ DPAA_PMD_ERR("DPAA TX ERROR queue init failed!");
+ goto free_tx;
+ }
+ dpaa_intf->debug_queues[DPAA_DEBUG_FQ_TX_ERROR].dpaa_intf = dpaa_intf;
+ ret = dpaa_def_queue_init(dpaa_intf->tx_conf_queues,
+ fman_intf->fqid_tx_confirm);
+ if (ret) {
+ DPAA_PMD_ERR("DPAA TX CONFIRM queue init failed!");
+ goto free_tx;
+ }
+ dpaa_intf->tx_conf_queues->dpaa_intf = dpaa_intf;
+ }
DPAA_PMD_DEBUG("All frame queues created");
@@ -2388,4 +2456,6 @@ static struct rte_dpaa_driver rte_dpaa_pmd = {
};
RTE_PMD_REGISTER_DPAA(net_dpaa, rte_dpaa_pmd);
+RTE_PMD_REGISTER_PARAM_STRING(net_dpaa,
+ DRIVER_IEEE1588 "=<int>");
RTE_LOG_REGISTER_DEFAULT(dpaa_logtype_pmd, NOTICE);
diff --git a/drivers/net/dpaa/dpaa_ethdev.h b/drivers/net/dpaa/dpaa_ethdev.h
index 261a5a3ca7..b427b29cb6 100644
--- a/drivers/net/dpaa/dpaa_ethdev.h
+++ b/drivers/net/dpaa/dpaa_ethdev.h
@@ -1,7 +1,7 @@
/* SPDX-License-Identifier: BSD-3-Clause
*
* Copyright (c) 2014-2016 Freescale Semiconductor, Inc. All rights reserved.
- * Copyright 2017-2019 NXP
+ * Copyright 2017-2024 NXP
*
*/
#ifndef __DPAA_ETHDEV_H__
@@ -112,6 +112,7 @@
#define FMC_FILE "/tmp/fmc.bin"
extern struct rte_mempool *dpaa_tx_sg_pool;
+extern int dpaa_ieee_1588;
/* structure to free external and indirect
* buffers.
@@ -131,6 +132,7 @@ struct dpaa_if {
struct qman_fq *rx_queues;
struct qman_cgr *cgr_rx;
struct qman_fq *tx_queues;
+ struct qman_fq *tx_conf_queues;
struct qman_cgr *cgr_tx;
struct qman_fq debug_queues[2];
uint16_t nb_rx_queues;
diff --git a/drivers/net/dpaa/dpaa_rxtx.c b/drivers/net/dpaa/dpaa_rxtx.c
index c2579d65ee..8593e20200 100644
--- a/drivers/net/dpaa/dpaa_rxtx.c
+++ b/drivers/net/dpaa/dpaa_rxtx.c
@@ -1082,6 +1082,9 @@ dpaa_eth_queue_tx(void *q, struct rte_mbuf **bufs, uint16_t nb_bufs)
uint32_t seqn, index, flags[DPAA_TX_BURST_SIZE] = {0};
struct dpaa_sw_buf_free buf_to_free[DPAA_MAX_SGS * DPAA_MAX_DEQUEUE_NUM_FRAMES];
uint32_t free_count = 0;
+ struct qman_fq *fq = q;
+ struct dpaa_if *dpaa_intf = fq->dpaa_intf;
+ struct qman_fq *fq_txconf = dpaa_intf->tx_conf_queues;
if (unlikely(!DPAA_PER_LCORE_PORTAL)) {
ret = rte_dpaa_portal_init((void *)0);
@@ -1162,6 +1165,10 @@ dpaa_eth_queue_tx(void *q, struct rte_mbuf **bufs, uint16_t nb_bufs)
mbuf = temp_mbuf;
realloc_mbuf = 0;
}
+
+ if (dpaa_ieee_1588)
+ fd_arr[loop].cmd |= DPAA_FD_CMD_FCO | qman_fq_fqid(fq_txconf);
+
indirect_buf:
state = tx_on_dpaa_pool(mbuf, bp_info,
&fd_arr[loop],
@@ -1190,6 +1197,9 @@ dpaa_eth_queue_tx(void *q, struct rte_mbuf **bufs, uint16_t nb_bufs)
sent += frames_to_send;
}
+ if (dpaa_ieee_1588)
+ dpaa_eth_tx_conf(fq_txconf);
+
DPAA_DP_LOG(DEBUG, "Transmitted %d buffers on queue: %p", sent, q);
for (loop = 0; loop < free_count; loop++) {
@@ -1200,6 +1210,45 @@ dpaa_eth_queue_tx(void *q, struct rte_mbuf **bufs, uint16_t nb_bufs)
return sent;
}
+void
+dpaa_eth_tx_conf(void *q)
+{
+ struct qman_fq *fq = q;
+ struct qm_dqrr_entry *dq;
+ int num_tx_conf, ret, dq_num;
+ uint32_t vdqcr_flags = 0;
+
+ if (unlikely(rte_dpaa_bpid_info == NULL &&
+ rte_eal_process_type() == RTE_PROC_SECONDARY))
+ rte_dpaa_bpid_info = fq->bp_array;
+
+ if (unlikely(!DPAA_PER_LCORE_PORTAL)) {
+ ret = rte_dpaa_portal_init((void *)0);
+ if (ret) {
+ DPAA_PMD_ERR("Failure in affining portal");
+ return;
+ }
+ }
+
+ num_tx_conf = DPAA_MAX_DEQUEUE_NUM_FRAMES - 2;
+
+ do {
+ dq_num = 0;
+ ret = qman_set_vdq(fq, num_tx_conf, vdqcr_flags);
+ if (ret)
+ return;
+ do {
+ dq = qman_dequeue(fq);
+ if (!dq)
+ continue;
+ dq_num++;
+ dpaa_display_frame_info(&dq->fd, fq->fqid, true);
+ qman_dqrr_consume(fq, dq);
+ dpaa_free_mbuf(&dq->fd);
+ } while (fq->flags & QMAN_FQ_STATE_VDQCR);
+ } while (dq_num == num_tx_conf);
+}
+
uint16_t
dpaa_eth_queue_tx_slow(void *q, struct rte_mbuf **bufs, uint16_t nb_bufs)
{
diff --git a/drivers/net/dpaa/dpaa_rxtx.h b/drivers/net/dpaa/dpaa_rxtx.h
index b2d7c0f2a3..042602e087 100644
--- a/drivers/net/dpaa/dpaa_rxtx.h
+++ b/drivers/net/dpaa/dpaa_rxtx.h
@@ -281,6 +281,8 @@ uint16_t dpaa_eth_queue_tx_slow(void *q, struct rte_mbuf **bufs,
uint16_t nb_bufs);
uint16_t dpaa_eth_queue_tx(void *q, struct rte_mbuf **bufs, uint16_t nb_bufs);
+void dpaa_eth_tx_conf(void *q);
+
uint16_t dpaa_eth_tx_drop_all(void *q __rte_unused,
struct rte_mbuf **bufs __rte_unused,
uint16_t nb_bufs __rte_unused);
--
2.25.1
^ permalink raw reply [flat|nested] 129+ messages in thread
* [PATCH v3 07/18] net/dpaa: add support to separate Tx conf queues
2024-09-30 10:29 ` [PATCH v3 " Hemant Agrawal
` (5 preceding siblings ...)
2024-09-30 10:29 ` [PATCH v3 06/18] net/dpaa: support Tx confirmation to enable PTP Hemant Agrawal
@ 2024-09-30 10:29 ` Hemant Agrawal
2024-09-30 10:29 ` [PATCH v3 08/18] net/dpaa: share MAC FMC scheme and CC parse Hemant Agrawal
` (10 subsequent siblings)
17 siblings, 0 replies; 129+ messages in thread
From: Hemant Agrawal @ 2024-09-30 10:29 UTC (permalink / raw)
To: dev; +Cc: ferruh.yigit, Vanshika Shukla
From: Vanshika Shukla <vanshika.shukla@nxp.com>
This patch separates the Tx confirmation queues used by the
kernel and by DPDK so as to support the VSP (Virtual Storage
Profile) case.
Signed-off-by: Vanshika Shukla <vanshika.shukla@nxp.com>
Acked-by: Hemant Agrawal <hemant.agrawal@nxp.com>
---
drivers/bus/dpaa/include/fsl_qman.h | 4 ++-
drivers/net/dpaa/dpaa_ethdev.c | 45 +++++++++++++++++++++--------
drivers/net/dpaa/dpaa_rxtx.c | 3 +-
3 files changed, 37 insertions(+), 15 deletions(-)
diff --git a/drivers/bus/dpaa/include/fsl_qman.h b/drivers/bus/dpaa/include/fsl_qman.h
index c0677976e8..db14dfb839 100644
--- a/drivers/bus/dpaa/include/fsl_qman.h
+++ b/drivers/bus/dpaa/include/fsl_qman.h
@@ -1,7 +1,7 @@
/* SPDX-License-Identifier: (BSD-3-Clause OR GPL-2.0)
*
* Copyright 2008-2012 Freescale Semiconductor, Inc.
- * Copyright 2019 NXP
+ * Copyright 2019-2022 NXP
*
*/
@@ -1237,6 +1237,8 @@ struct qman_fq {
/* DPDK Interface */
void *dpaa_intf;
+ /*to store tx_conf_queue corresponding to tx_queue*/
+ struct qman_fq *tx_conf_queue;
struct rte_event ev;
/* affined portal in case of static queue */
diff --git a/drivers/net/dpaa/dpaa_ethdev.c b/drivers/net/dpaa/dpaa_ethdev.c
index bba305cfb1..3ee3029729 100644
--- a/drivers/net/dpaa/dpaa_ethdev.c
+++ b/drivers/net/dpaa/dpaa_ethdev.c
@@ -1870,9 +1870,30 @@ static int dpaa_tx_queue_init(struct qman_fq *fq,
return ret;
}
-/* Initialise a DEBUG FQ ([rt]x_error, rx_default) and DPAA TX CONFIRM queue
- * to support PTP
- */
+static int
+dpaa_tx_conf_queue_init(struct qman_fq *fq)
+{
+ struct qm_mcc_initfq opts = {0};
+ int ret;
+
+ PMD_INIT_FUNC_TRACE();
+
+ ret = qman_create_fq(0, QMAN_FQ_FLAG_DYNAMIC_FQID, fq);
+ if (ret) {
+ DPAA_PMD_ERR("create Tx_conf failed with ret: %d", ret);
+ return ret;
+ }
+
+ opts.we_mask = QM_INITFQ_WE_DESTWQ | QM_INITFQ_WE_FQCTRL;
+ opts.fqd.dest.wq = DPAA_IF_DEBUG_PRIORITY;
+ ret = qman_init_fq(fq, 0, &opts);
+ if (ret)
+ DPAA_PMD_ERR("init Tx_conf fqid %d failed with ret: %d",
+ fq->fqid, ret);
+ return ret;
+}
+
+/* Initialise a DEBUG FQ ([rt]x_error, rx_default) */
static int dpaa_def_queue_init(struct qman_fq *fq, uint32_t fqid)
{
struct qm_mcc_initfq opts = {0};
@@ -2170,6 +2191,15 @@ dpaa_dev_init(struct rte_eth_dev *eth_dev)
if (ret)
goto free_tx;
dpaa_intf->tx_queues[loop].dpaa_intf = dpaa_intf;
+
+ if (dpaa_ieee_1588) {
+ ret = dpaa_tx_conf_queue_init(&dpaa_intf->tx_conf_queues[loop]);
+ if (ret)
+ goto free_tx;
+
+ dpaa_intf->tx_conf_queues[loop].dpaa_intf = dpaa_intf;
+ dpaa_intf->tx_queues[loop].tx_conf_queue = &dpaa_intf->tx_conf_queues[loop];
+ }
}
dpaa_intf->nb_tx_queues = MAX_DPAA_CORES;
@@ -2190,16 +2220,7 @@ dpaa_dev_init(struct rte_eth_dev *eth_dev)
DPAA_PMD_ERR("DPAA TX ERROR queue init failed!");
goto free_tx;
}
- dpaa_intf->debug_queues[DPAA_DEBUG_FQ_TX_ERROR].dpaa_intf = dpaa_intf;
- ret = dpaa_def_queue_init(dpaa_intf->tx_conf_queues,
- fman_intf->fqid_tx_confirm);
- if (ret) {
- DPAA_PMD_ERR("DPAA TX CONFIRM queue init failed!");
- goto free_tx;
- }
- dpaa_intf->tx_conf_queues->dpaa_intf = dpaa_intf;
}
-
DPAA_PMD_DEBUG("All frame queues created");
/* Get the initial configuration for flow control */
diff --git a/drivers/net/dpaa/dpaa_rxtx.c b/drivers/net/dpaa/dpaa_rxtx.c
index 8593e20200..3bd35c7a0e 100644
--- a/drivers/net/dpaa/dpaa_rxtx.c
+++ b/drivers/net/dpaa/dpaa_rxtx.c
@@ -1083,8 +1083,7 @@ dpaa_eth_queue_tx(void *q, struct rte_mbuf **bufs, uint16_t nb_bufs)
struct dpaa_sw_buf_free buf_to_free[DPAA_MAX_SGS * DPAA_MAX_DEQUEUE_NUM_FRAMES];
uint32_t free_count = 0;
struct qman_fq *fq = q;
- struct dpaa_if *dpaa_intf = fq->dpaa_intf;
- struct qman_fq *fq_txconf = dpaa_intf->tx_conf_queues;
+ struct qman_fq *fq_txconf = fq->tx_conf_queue;
if (unlikely(!DPAA_PER_LCORE_PORTAL)) {
ret = rte_dpaa_portal_init((void *)0);
--
2.25.1
^ permalink raw reply [flat|nested] 129+ messages in thread
* [PATCH v3 08/18] net/dpaa: share MAC FMC scheme and CC parse
2024-09-30 10:29 ` [PATCH v3 " Hemant Agrawal
` (6 preceding siblings ...)
2024-09-30 10:29 ` [PATCH v3 07/18] net/dpaa: add support to separate Tx conf queues Hemant Agrawal
@ 2024-09-30 10:29 ` Hemant Agrawal
2024-09-30 10:29 ` [PATCH v3 09/18] net/dpaa: support Rx/Tx timestamp read Hemant Agrawal
` (9 subsequent siblings)
17 siblings, 0 replies; 129+ messages in thread
From: Hemant Agrawal @ 2024-09-30 10:29 UTC (permalink / raw)
To: dev; +Cc: ferruh.yigit, Jun Yang
From: Jun Yang <jun.yang@nxp.com>
For shared MAC:
1) Allocate RXQs from the VSP (Virtual Storage Profile) scheme.
2) Allocate RXQs from coarse classification (CC) rules pointing to a VSP.
3) Remove RXQs that were allocated but are reconfigured without a VSP
(see the sketch after this list).
4) Do not allocate the default queue and the error queues.
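An illustrative sketch (plain C, not part of the patch) of the compaction
used in step 3: when a previously collected RXQ turns out to be owned by
the kernel, its FQID is dropped from the array again. It mirrors
dpaa_fmc_remove_fq_from_allocated() added in the diff below.

#include <stdint.h>
#include <string.h>

/* Drop rm_fqid from fqids[0..*rxq_idx) and compact the array. */
static void
remove_fqid(uint32_t *fqids, uint16_t *rxq_idx, uint32_t rm_fqid)
{
	uint16_t i;

	for (i = 0; i < *rxq_idx; i++) {
		if (fqids[i] != rm_fqid)
			continue;
		/* Shift the tail left by one entry and shrink the count. */
		memmove(&fqids[i], &fqids[i + 1],
			((*rxq_idx) - (i + 1)) * sizeof(uint32_t));
		(*rxq_idx)--;
		break;
	}
}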
Signed-off-by: Jun Yang <jun.yang@nxp.com>
Acked-by: Hemant Agrawal <hemant.agrawal@nxp.com>
---
drivers/bus/dpaa/include/fman.h | 1 +
drivers/net/dpaa/dpaa_ethdev.c | 60 +++--
drivers/net/dpaa/dpaa_ethdev.h | 13 +-
drivers/net/dpaa/dpaa_flow.c | 8 +-
drivers/net/dpaa/dpaa_fmc.c | 421 ++++++++++++++++++++------------
drivers/net/dpaa/dpaa_rxtx.c | 20 +-
6 files changed, 346 insertions(+), 177 deletions(-)
diff --git a/drivers/bus/dpaa/include/fman.h b/drivers/bus/dpaa/include/fman.h
index 60681068ea..6b2a1893f9 100644
--- a/drivers/bus/dpaa/include/fman.h
+++ b/drivers/bus/dpaa/include/fman.h
@@ -76,6 +76,7 @@ enum fman_mac_type {
fman_mac_1g,
fman_mac_10g,
fman_mac_2_5g,
+ fman_onic,
};
struct mac_addr {
diff --git a/drivers/net/dpaa/dpaa_ethdev.c b/drivers/net/dpaa/dpaa_ethdev.c
index 3ee3029729..bf14d73433 100644
--- a/drivers/net/dpaa/dpaa_ethdev.c
+++ b/drivers/net/dpaa/dpaa_ethdev.c
@@ -255,7 +255,6 @@ dpaa_eth_dev_configure(struct rte_eth_dev *dev)
DPAA_PMD_ERR("Cannot open IF socket");
return -errno;
}
-
strncpy(ifr.ifr_name, dpaa_intf->name, IFNAMSIZ - 1);
if (ioctl(socket_fd, SIOCGIFMTU, &ifr) < 0) {
@@ -1893,6 +1892,7 @@ dpaa_tx_conf_queue_init(struct qman_fq *fq)
return ret;
}
+#if defined(RTE_LIBRTE_DPAA_DEBUG_DRIVER)
/* Initialise a DEBUG FQ ([rt]x_error, rx_default) */
static int dpaa_def_queue_init(struct qman_fq *fq, uint32_t fqid)
{
@@ -1923,6 +1923,7 @@ static int dpaa_def_queue_init(struct qman_fq *fq, uint32_t fqid)
fqid, ret);
return ret;
}
+#endif
/* Initialise a network interface */
static int
@@ -1957,6 +1958,41 @@ dpaa_dev_init_secondary(struct rte_eth_dev *eth_dev)
return 0;
}
+#ifdef RTE_LIBRTE_DPAA_DEBUG_DRIVER
+static int
+dpaa_error_queue_init(struct dpaa_if *dpaa_intf,
+ struct fman_if *fman_intf)
+{
+ int i, ret;
+ struct qman_fq *err_queues = dpaa_intf->debug_queues;
+ uint32_t err_fqid = 0;
+
+ if (fman_intf->is_shared_mac) {
+ DPAA_PMD_DEBUG("Shared MAC's err queues are handled in kernel");
+ return 0;
+ }
+
+ for (i = 0; i < DPAA_DEBUG_FQ_MAX_NUM; i++) {
+ if (i == DPAA_DEBUG_FQ_RX_ERROR)
+ err_fqid = fman_intf->fqid_rx_err;
+ else if (i == DPAA_DEBUG_FQ_TX_ERROR)
+ err_fqid = fman_intf->fqid_tx_err;
+ else
+ continue;
+ ret = dpaa_def_queue_init(&err_queues[i], err_fqid);
+ if (ret) {
+ DPAA_PMD_ERR("DPAA %s ERROR queue init failed!",
+ i == DPAA_DEBUG_FQ_RX_ERROR ?
+ "RX" : "TX");
+ return ret;
+ }
+ err_queues[i].dpaa_intf = dpaa_intf;
+ }
+
+ return 0;
+}
+#endif
+
static int
check_devargs_handler(__rte_unused const char *key, const char *value,
__rte_unused void *opaque)
@@ -2202,25 +2238,11 @@ dpaa_dev_init(struct rte_eth_dev *eth_dev)
}
}
dpaa_intf->nb_tx_queues = MAX_DPAA_CORES;
-
-#if !defined(RTE_LIBRTE_DPAA_DEBUG_DRIVER)
- if (dpaa_ieee_1588)
+#if defined(RTE_LIBRTE_DPAA_DEBUG_DRIVER)
+ ret = dpaa_error_queue_init(dpaa_intf, fman_intf);
+ if (ret)
+ goto free_tx;
#endif
- {
- ret = dpaa_def_queue_init(&dpaa_intf->debug_queues
- [DPAA_DEBUG_FQ_RX_ERROR], fman_intf->fqid_rx_err);
- if (ret) {
- DPAA_PMD_ERR("DPAA RX ERROR queue init failed!");
- goto free_tx;
- }
- dpaa_intf->debug_queues[DPAA_DEBUG_FQ_RX_ERROR].dpaa_intf = dpaa_intf;
- ret = dpaa_def_queue_init(&dpaa_intf->debug_queues
- [DPAA_DEBUG_FQ_TX_ERROR], fman_intf->fqid_tx_err);
- if (ret) {
- DPAA_PMD_ERR("DPAA TX ERROR queue init failed!");
- goto free_tx;
- }
- }
DPAA_PMD_DEBUG("All frame queues created");
/* Get the initial configuration for flow control */
diff --git a/drivers/net/dpaa/dpaa_ethdev.h b/drivers/net/dpaa/dpaa_ethdev.h
index b427b29cb6..0a1ceb376a 100644
--- a/drivers/net/dpaa/dpaa_ethdev.h
+++ b/drivers/net/dpaa/dpaa_ethdev.h
@@ -78,8 +78,11 @@
#define DPAA_IF_RX_CONTEXT_STASH 0
/* Each "debug" FQ is represented by one of these */
-#define DPAA_DEBUG_FQ_RX_ERROR 0
-#define DPAA_DEBUG_FQ_TX_ERROR 1
+enum {
+ DPAA_DEBUG_FQ_RX_ERROR,
+ DPAA_DEBUG_FQ_TX_ERROR,
+ DPAA_DEBUG_FQ_MAX_NUM
+};
#define DPAA_RSS_OFFLOAD_ALL ( \
RTE_ETH_RSS_L2_PAYLOAD | \
@@ -107,6 +110,10 @@
#define DPAA_FD_CMD_CFQ 0x00ffffff
/**< Confirmation Frame Queue */
+#define DPAA_1G_MAC_START_IDX 1
+#define DPAA_10G_MAC_START_IDX 9
+#define DPAA_2_5G_MAC_START_IDX DPAA_10G_MAC_START_IDX
+
#define DPAA_DEFAULT_RXQ_VSP_ID 1
#define FMC_FILE "/tmp/fmc.bin"
@@ -134,7 +141,7 @@ struct dpaa_if {
struct qman_fq *tx_queues;
struct qman_fq *tx_conf_queues;
struct qman_cgr *cgr_tx;
- struct qman_fq debug_queues[2];
+ struct qman_fq debug_queues[DPAA_DEBUG_FQ_MAX_NUM];
uint16_t nb_rx_queues;
uint16_t nb_tx_queues;
uint32_t ifid;
diff --git a/drivers/net/dpaa/dpaa_flow.c b/drivers/net/dpaa/dpaa_flow.c
index 02aca78d05..082bd5d014 100644
--- a/drivers/net/dpaa/dpaa_flow.c
+++ b/drivers/net/dpaa/dpaa_flow.c
@@ -651,7 +651,13 @@ static inline int set_pcd_netenv_scheme(struct dpaa_if *dpaa_intf,
static inline int get_port_type(struct fman_if *fif)
{
- if (fif->mac_type == fman_mac_1g)
+ /* For 1G fm-mac9 and fm-mac10 ports, configure the VSP as 10G
+ * ports so that kernel can configure correct port.
+ */
+ if (fif->mac_type == fman_mac_1g &&
+ fif->mac_idx >= DPAA_10G_MAC_START_IDX)
+ return e_FM_PORT_TYPE_RX_10G;
+ else if (fif->mac_type == fman_mac_1g)
return e_FM_PORT_TYPE_RX;
else if (fif->mac_type == fman_mac_2_5g)
return e_FM_PORT_TYPE_RX_2_5G;
diff --git a/drivers/net/dpaa/dpaa_fmc.c b/drivers/net/dpaa/dpaa_fmc.c
index f8c9360311..d80ea1010a 100644
--- a/drivers/net/dpaa/dpaa_fmc.c
+++ b/drivers/net/dpaa/dpaa_fmc.c
@@ -1,5 +1,5 @@
/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright 2017-2021 NXP
+ * Copyright 2017-2023 NXP
*/
/* System headers */
@@ -204,139 +204,258 @@ struct fmc_model_t {
struct fmc_model_t *g_fmc_model;
-static int dpaa_port_fmc_port_parse(struct fman_if *fif,
- const struct fmc_model_t *fmc_model,
- int apply_idx)
+static int
+dpaa_port_fmc_port_parse(struct fman_if *fif,
+ const struct fmc_model_t *fmc_model,
+ int apply_idx)
{
int current_port = fmc_model->apply_order[apply_idx].index;
const fmc_port *pport = &fmc_model->port[current_port];
- const uint8_t mac_idx[] = {-1, 0, 1, 2, 3, 4, 5, 6, 7, 0, 1};
- const uint8_t mac_type[] = {1, 1, 1, 1, 1, 1, 1, 1, 1, 2, 2};
+ uint32_t num;
+
+ if (pport->type == e_FM_PORT_TYPE_OH_OFFLINE_PARSING &&
+ pport->number == fif->mac_idx &&
+ (fif->mac_type == fman_offline ||
+ fif->mac_type == fman_onic))
+ return current_port;
+
+ if (fif->mac_type == fman_mac_1g) {
+ if (pport->type != e_FM_PORT_TYPE_RX)
+ return -ENODEV;
+ num = pport->number + DPAA_1G_MAC_START_IDX;
+ if (fif->mac_idx == num)
+ return current_port;
+
+ return -ENODEV;
+ }
+
+ if (fif->mac_type == fman_mac_2_5g) {
+ if (pport->type != e_FM_PORT_TYPE_RX_2_5G)
+ return -ENODEV;
+ num = pport->number + DPAA_2_5G_MAC_START_IDX;
+ if (fif->mac_idx == num)
+ return current_port;
+
+ return -ENODEV;
+ }
+
+ if (fif->mac_type == fman_mac_10g) {
+ if (pport->type != e_FM_PORT_TYPE_RX_10G)
+ return -ENODEV;
+ num = pport->number + DPAA_10G_MAC_START_IDX;
+ if (fif->mac_idx == num)
+ return current_port;
+
+ return -ENODEV;
+ }
+
+ DPAA_PMD_ERR("Invalid MAC(mac_idx=%d) type(%d)",
+ fif->mac_idx, fif->mac_type);
+
+ return -EINVAL;
+}
+
+static int
+dpaa_fq_is_in_kernel(uint32_t fqid,
+ struct fman_if *fif)
+{
+ if (!fif->is_shared_mac)
+ return false;
+
+ if ((fqid == fif->fqid_rx_def ||
+ (fqid >= fif->fqid_rx_pcd &&
+ fqid < (fif->fqid_rx_pcd + fif->fqid_rx_pcd_count)) ||
+ fqid == fif->fqid_rx_err ||
+ fqid == fif->fqid_tx_err))
+ return true;
+
+ return false;
+}
+
+static int
+dpaa_vsp_id_is_in_kernel(uint8_t vsp_id,
+ struct fman_if *fif)
+{
+ if (!fif->is_shared_mac)
+ return false;
+
+ if (vsp_id == fif->base_profile_id)
+ return true;
+
+ return false;
+}
+
+static uint8_t
+dpaa_enqueue_vsp_id(struct fman_if *fif,
+ const struct ioc_fm_pcd_cc_next_enqueue_params_t *eq_param)
+{
+ if (eq_param->override_fqid)
+ return eq_param->new_relative_storage_profile_id;
+
+ return fif->base_profile_id;
+}
- if (mac_idx[fif->mac_idx] != pport->number ||
- mac_type[fif->mac_idx] != pport->type)
- return -1;
+static int
+dpaa_kg_storage_is_in_kernel(struct fman_if *fif,
+ const struct ioc_fm_pcd_kg_storage_profile_t *kg_storage)
+{
+ if (!fif->is_shared_mac)
+ return false;
+
+ if (!kg_storage->direct ||
+ (kg_storage->direct &&
+ kg_storage->profile_select.direct_relative_profile_id ==
+ fif->base_profile_id))
+ return true;
- return current_port;
+ return false;
}
-static int dpaa_port_fmc_scheme_parse(struct fman_if *fif,
- const struct fmc_model_t *fmc,
- int apply_idx,
- uint16_t *rxq_idx, int max_nb_rxq,
- uint32_t *fqids, int8_t *vspids)
+static void
+dpaa_fmc_remove_fq_from_allocated(uint32_t *fqids,
+ uint16_t *rxq_idx, uint32_t rm_fqid)
{
- int idx = fmc->apply_order[apply_idx].index;
uint32_t i;
- if (!fmc->scheme[idx].override_storage_profile &&
- fif->is_shared_mac) {
- DPAA_PMD_WARN("No VSP assigned to scheme %d for sharemac %d!",
- idx, fif->mac_idx);
- DPAA_PMD_WARN("Risk to receive pkts from skb pool to CRASH!");
+ for (i = 0; i < (*rxq_idx); i++) {
+ if (fqids[i] != rm_fqid)
+ continue;
+ DPAA_PMD_WARN("Remove fq(0x%08x) allocated.",
+ rm_fqid);
+ if ((*rxq_idx) > (i + 1)) {
+ memmove(&fqids[i], &fqids[i + 1],
+ ((*rxq_idx) - (i + 1)) * sizeof(uint32_t));
+ }
+ (*rxq_idx)--;
+ break;
}
+}
- if (e_IOC_FM_PCD_DONE ==
- fmc->scheme[idx].next_engine) {
- for (i = 0; i < fmc->scheme[idx]
- .key_ext_and_hash.hash_dist_num_of_fqids; i++) {
- uint32_t fqid = fmc->scheme[idx].base_fqid + i;
- int k, found = 0;
-
- if (fqid == fif->fqid_rx_def ||
- (fqid >= fif->fqid_rx_pcd &&
- fqid < (fif->fqid_rx_pcd +
- fif->fqid_rx_pcd_count))) {
- if (fif->is_shared_mac &&
- fmc->scheme[idx].override_storage_profile &&
- fmc->scheme[idx].storage_profile.direct &&
- fmc->scheme[idx].storage_profile
- .profile_select.direct_relative_profile_id !=
- fif->base_profile_id) {
- DPAA_PMD_ERR("Def RXQ must be associated with def VSP on sharemac!");
-
- return -1;
- }
- continue;
+static int
+dpaa_port_fmc_scheme_parse(struct fman_if *fif,
+ const struct fmc_model_t *fmc,
+ int apply_idx,
+ uint16_t *rxq_idx, int max_nb_rxq,
+ uint32_t *fqids, int8_t *vspids)
+{
+ int scheme_idx = fmc->apply_order[apply_idx].index;
+ int k, found = 0;
+ uint32_t i, num_rxq, fqid, rxq_idx_start = *rxq_idx;
+ const struct fm_pcd_kg_scheme_params_t *scheme;
+ const struct ioc_fm_pcd_kg_key_extract_and_hash_params_t *params;
+ const struct ioc_fm_pcd_kg_storage_profile_t *kg_storage;
+ uint8_t vsp_id;
+
+ scheme = &fmc->scheme[scheme_idx];
+ params = &scheme->key_ext_and_hash;
+ num_rxq = params->hash_dist_num_of_fqids;
+ kg_storage = &scheme->storage_profile;
+
+ if (scheme->override_storage_profile && kg_storage->direct)
+ vsp_id = kg_storage->profile_select.direct_relative_profile_id;
+ else
+ vsp_id = fif->base_profile_id;
+
+ if (dpaa_kg_storage_is_in_kernel(fif, kg_storage)) {
+ DPAA_PMD_WARN("Scheme[%d]'s VSP is in kernel",
+ scheme_idx);
+ /* The FQ may be allocated from previous CC or scheme,
+ * find and remove it.
+ */
+ for (i = 0; i < num_rxq; i++) {
+ fqid = scheme->base_fqid + i;
+ DPAA_PMD_WARN("Removed fqid(0x%08x) of Scheme[%d]",
+ fqid, scheme_idx);
+ dpaa_fmc_remove_fq_from_allocated(fqids,
+ rxq_idx, fqid);
+ if (!dpaa_fq_is_in_kernel(fqid, fif)) {
+ char reason_msg[128];
+ char result_msg[128];
+
+ sprintf(reason_msg,
+ "NOT handled in kernel");
+ sprintf(result_msg,
+ "will DRAIN kernel pool!");
+ DPAA_PMD_WARN("Traffic to FQ(%08x)(%s) %s",
+ fqid, reason_msg, result_msg);
}
+ }
- if (fif->is_shared_mac &&
- !fmc->scheme[idx].override_storage_profile) {
- DPAA_PMD_ERR("RXQ to DPDK must be associated with VSP on sharemac!");
- return -1;
- }
+ return 0;
+ }
- if (fif->is_shared_mac &&
- fmc->scheme[idx].override_storage_profile &&
- fmc->scheme[idx].storage_profile.direct &&
- fmc->scheme[idx].storage_profile
- .profile_select.direct_relative_profile_id ==
- fif->base_profile_id) {
- DPAA_PMD_ERR("RXQ can't be associated with default VSP on sharemac!");
+ if (e_IOC_FM_PCD_DONE != scheme->next_engine) {
+ /* Do nothing.*/
+ DPAA_PMD_DEBUG("Will parse scheme[%d]'s next engine(%d)",
+ scheme_idx, scheme->next_engine);
+ return 0;
+ }
- return -1;
- }
+ for (i = 0; i < num_rxq; i++) {
+ fqid = scheme->base_fqid + i;
+ found = 0;
- if ((*rxq_idx) >= max_nb_rxq) {
- DPAA_PMD_DEBUG("Too many queues in FMC policy"
- "%d overflow %d",
- (*rxq_idx), max_nb_rxq);
+ if (dpaa_fq_is_in_kernel(fqid, fif)) {
+ DPAA_PMD_WARN("FQ(0x%08x) is handled in kernel.",
+ fqid);
+ /* The FQ may be allocated from previous CC or scheme,
+ * remove it.
+ */
+ dpaa_fmc_remove_fq_from_allocated(fqids,
+ rxq_idx, fqid);
+ continue;
+ }
- continue;
- }
+ if ((*rxq_idx) >= max_nb_rxq) {
+ DPAA_PMD_WARN("Too many queues(%d) >= MAX number(%d)",
+ (*rxq_idx), max_nb_rxq);
- for (k = 0; k < (*rxq_idx); k++) {
- if (fqids[k] == fqid) {
- found = 1;
- break;
- }
- }
+ break;
+ }
- if (found)
- continue;
- fqids[(*rxq_idx)] = fqid;
- if (fmc->scheme[idx].override_storage_profile) {
- if (fmc->scheme[idx].storage_profile.direct) {
- vspids[(*rxq_idx)] =
- fmc->scheme[idx].storage_profile
- .profile_select
- .direct_relative_profile_id;
- } else {
- vspids[(*rxq_idx)] = -1;
- }
- } else {
- vspids[(*rxq_idx)] = -1;
+ for (k = 0; k < (*rxq_idx); k++) {
+ if (fqids[k] == fqid) {
+ found = 1;
+ break;
}
- (*rxq_idx)++;
}
+
+ if (found)
+ continue;
+ fqids[(*rxq_idx)] = fqid;
+ vspids[(*rxq_idx)] = vsp_id;
+
+ (*rxq_idx)++;
}
- return 0;
+ return (*rxq_idx) - rxq_idx_start;
}
-static int dpaa_port_fmc_ccnode_parse(struct fman_if *fif,
- const struct fmc_model_t *fmc_model,
- int apply_idx,
- uint16_t *rxq_idx, int max_nb_rxq,
- uint32_t *fqids, int8_t *vspids)
+static int
+dpaa_port_fmc_ccnode_parse(struct fman_if *fif,
+ const struct fmc_model_t *fmc,
+ int apply_idx,
+ uint16_t *rxq_idx, int max_nb_rxq,
+ uint32_t *fqids, int8_t *vspids)
{
uint16_t j, k, found = 0;
const struct ioc_keys_params_t *keys_params;
- uint32_t fqid, cc_idx = fmc_model->apply_order[apply_idx].index;
-
- keys_params = &fmc_model->ccnode[cc_idx].keys_params;
+ const struct ioc_fm_pcd_cc_next_engine_params_t *params;
+ uint32_t fqid, cc_idx = fmc->apply_order[apply_idx].index;
+ uint32_t rxq_idx_start = *rxq_idx;
+ uint8_t vsp_id;
- if ((*rxq_idx) >= max_nb_rxq) {
- DPAA_PMD_WARN("Too many queues in FMC policy %d overflow %d",
- (*rxq_idx), max_nb_rxq);
-
- return 0;
- }
+ keys_params = &fmc->ccnode[cc_idx].keys_params;
for (j = 0; j < keys_params->num_of_keys; ++j) {
+ if ((*rxq_idx) >= max_nb_rxq) {
+ DPAA_PMD_WARN("Too many queues(%d) >= MAX number(%d)",
+ (*rxq_idx), max_nb_rxq);
+
+ break;
+ }
found = 0;
- fqid = keys_params->key_params[j].cc_next_engine_params
- .params.enqueue_params.new_fqid;
+ params = &keys_params->key_params[j].cc_next_engine_params;
/* We read DPDK queue from last classification rule present in
* FMC policy file. Hence, this check is required here.
@@ -344,15 +463,30 @@ static int dpaa_port_fmc_ccnode_parse(struct fman_if *fif,
* have userspace queue so that it can be used by DPDK
* application.
*/
- if (keys_params->key_params[j].cc_next_engine_params
- .next_engine != e_IOC_FM_PCD_DONE) {
- DPAA_PMD_WARN("FMC CC next engine not support");
+ if (params->next_engine != e_IOC_FM_PCD_DONE) {
+ DPAA_PMD_WARN("CC next engine(%d) not support",
+ params->next_engine);
continue;
}
- if (keys_params->key_params[j].cc_next_engine_params
- .params.enqueue_params.action !=
+ if (params->params.enqueue_params.action !=
e_IOC_FM_PCD_ENQ_FRAME)
continue;
+
+ fqid = params->params.enqueue_params.new_fqid;
+ vsp_id = dpaa_enqueue_vsp_id(fif,
+ ¶ms->params.enqueue_params);
+ if (dpaa_fq_is_in_kernel(fqid, fif) ||
+ dpaa_vsp_id_is_in_kernel(vsp_id, fif)) {
+ DPAA_PMD_DEBUG("FQ(0x%08x)/VSP(%d) is in kernel.",
+ fqid, vsp_id);
+ /* The FQ may be allocated from previous CC or scheme,
+ * remove it.
+ */
+ dpaa_fmc_remove_fq_from_allocated(fqids,
+ rxq_idx, fqid);
+ continue;
+ }
+
for (k = 0; k < (*rxq_idx); k++) {
if (fqids[k] == fqid) {
found = 1;
@@ -362,38 +496,22 @@ static int dpaa_port_fmc_ccnode_parse(struct fman_if *fif,
if (found)
continue;
- if ((*rxq_idx) >= max_nb_rxq) {
- DPAA_PMD_WARN("Too many queues in FMC policy %d overflow %d",
- (*rxq_idx), max_nb_rxq);
-
- return 0;
- }
-
fqids[(*rxq_idx)] = fqid;
- vspids[(*rxq_idx)] =
- keys_params->key_params[j].cc_next_engine_params
- .params.enqueue_params
- .new_relative_storage_profile_id;
-
- if (vspids[(*rxq_idx)] == fif->base_profile_id &&
- fif->is_shared_mac) {
- DPAA_PMD_ERR("VSP %d can NOT be used on DPDK.",
- vspids[(*rxq_idx)]);
- DPAA_PMD_ERR("It is associated to skb pool of shared interface.");
- return -1;
- }
+ vspids[(*rxq_idx)] = vsp_id;
+
(*rxq_idx)++;
}
- return 0;
+ return (*rxq_idx) - rxq_idx_start;
}
-int dpaa_port_fmc_init(struct fman_if *fif,
- uint32_t *fqids, int8_t *vspids, int max_nb_rxq)
+int
+dpaa_port_fmc_init(struct fman_if *fif,
+ uint32_t *fqids, int8_t *vspids, int max_nb_rxq)
{
int current_port = -1, ret;
uint16_t rxq_idx = 0;
- const struct fmc_model_t *fmc_model;
+ const struct fmc_model_t *fmc;
uint32_t i;
if (!g_fmc_model) {
@@ -402,14 +520,14 @@ int dpaa_port_fmc_init(struct fman_if *fif,
if (!fp) {
DPAA_PMD_ERR("%s not exists", FMC_FILE);
- return -1;
+ return -ENOENT;
}
g_fmc_model = rte_malloc(NULL, sizeof(struct fmc_model_t), 64);
if (!g_fmc_model) {
DPAA_PMD_ERR("FMC memory alloc failed");
fclose(fp);
- return -1;
+ return -ENOBUFS;
}
bytes_read = fread(g_fmc_model,
@@ -419,25 +537,28 @@ int dpaa_port_fmc_init(struct fman_if *fif,
fclose(fp);
rte_free(g_fmc_model);
g_fmc_model = NULL;
- return -1;
+ return -EIO;
}
fclose(fp);
}
- fmc_model = g_fmc_model;
+ fmc = g_fmc_model;
- if (fmc_model->format_version != FMC_OUTPUT_FORMAT_VER)
- return -1;
+ if (fmc->format_version != FMC_OUTPUT_FORMAT_VER) {
+ DPAA_PMD_ERR("FMC version(0x%08x) != Supported ver(0x%08x)",
+ fmc->format_version, FMC_OUTPUT_FORMAT_VER);
+ return -EINVAL;
+ }
- for (i = 0; i < fmc_model->apply_order_count; i++) {
- switch (fmc_model->apply_order[i].type) {
+ for (i = 0; i < fmc->apply_order_count; i++) {
+ switch (fmc->apply_order[i].type) {
case fmcengine_start:
break;
case fmcengine_end:
break;
case fmcport_start:
current_port = dpaa_port_fmc_port_parse(fif,
- fmc_model, i);
+ fmc, i);
break;
case fmcport_end:
break;
@@ -445,24 +566,24 @@ int dpaa_port_fmc_init(struct fman_if *fif,
if (current_port < 0)
break;
- ret = dpaa_port_fmc_scheme_parse(fif, fmc_model,
- i, &rxq_idx,
- max_nb_rxq,
- fqids, vspids);
- if (ret)
- return ret;
+ ret = dpaa_port_fmc_scheme_parse(fif, fmc,
+ i, &rxq_idx, max_nb_rxq, fqids, vspids);
+ DPAA_PMD_INFO("%s %d RXQ(s) from scheme[%d]",
+ ret >= 0 ? "Alloc" : "Remove",
+ ret >= 0 ? ret : -ret,
+ fmc->apply_order[i].index);
break;
case fmcccnode:
if (current_port < 0)
break;
- ret = dpaa_port_fmc_ccnode_parse(fif, fmc_model,
- i, &rxq_idx,
- max_nb_rxq, fqids,
- vspids);
- if (ret)
- return ret;
+ ret = dpaa_port_fmc_ccnode_parse(fif, fmc,
+ i, &rxq_idx, max_nb_rxq, fqids, vspids);
+ DPAA_PMD_INFO("%s %d RXQ(s) from cc[%d]",
+ ret >= 0 ? "Alloc" : "Remove",
+ ret >= 0 ? ret : -ret,
+ fmc->apply_order[i].index);
break;
case fmchtnode:
diff --git a/drivers/net/dpaa/dpaa_rxtx.c b/drivers/net/dpaa/dpaa_rxtx.c
index 3bd35c7a0e..d1338d1654 100644
--- a/drivers/net/dpaa/dpaa_rxtx.c
+++ b/drivers/net/dpaa/dpaa_rxtx.c
@@ -693,13 +693,26 @@ dpaa_rx_cb_atomic(void *event,
}
#ifdef RTE_LIBRTE_DPAA_DEBUG_DRIVER
-static inline void dpaa_eth_err_queue(struct dpaa_if *dpaa_intf)
+static inline void
+dpaa_eth_err_queue(struct qman_fq *fq)
{
struct rte_mbuf *mbuf;
struct qman_fq *debug_fq;
int ret, i;
struct qm_dqrr_entry *dq;
struct qm_fd *fd;
+ struct dpaa_if *dpaa_intf;
+
+ dpaa_intf = fq->dpaa_intf;
+ if (fq != &dpaa_intf->rx_queues[0]) {
+ /* Associate error queues to the first RXQ.*/
+ return;
+ }
+
+ if (dpaa_intf->cfg->fman_if->is_shared_mac) {
+ /* Error queues of shared MAC are handled in kernel. */
+ return;
+ }
if (unlikely(!RTE_PER_LCORE(dpaa_io))) {
ret = rte_dpaa_portal_init((void *)0);
@@ -708,7 +721,7 @@ static inline void dpaa_eth_err_queue(struct dpaa_if *dpaa_intf)
return;
}
}
- for (i = 0; i <= DPAA_DEBUG_FQ_TX_ERROR; i++) {
+ for (i = 0; i < DPAA_DEBUG_FQ_MAX_NUM; i++) {
debug_fq = &dpaa_intf->debug_queues[i];
ret = qman_set_vdq(debug_fq, 4, QM_VDQCR_EXACT);
if (ret)
@@ -751,8 +764,7 @@ uint16_t dpaa_eth_queue_rx(void *q,
rte_dpaa_bpid_info = fq->bp_array;
#ifdef RTE_LIBRTE_DPAA_DEBUG_DRIVER
- if (fq->fqid == ((struct dpaa_if *)fq->dpaa_intf)->rx_queues[0].fqid)
- dpaa_eth_err_queue((struct dpaa_if *)fq->dpaa_intf);
+ dpaa_eth_err_queue(fq);
#endif
if (likely(fq->is_static))
--
2.25.1
^ permalink raw reply [flat|nested] 129+ messages in thread
* [PATCH v3 09/18] net/dpaa: support Rx/Tx timestamp read
2024-09-30 10:29 ` [PATCH v3 " Hemant Agrawal
` (7 preceding siblings ...)
2024-09-30 10:29 ` [PATCH v3 08/18] net/dpaa: share MAC FMC scheme and CC parse Hemant Agrawal
@ 2024-09-30 10:29 ` Hemant Agrawal
2024-09-30 10:29 ` [PATCH v3 10/18] net/dpaa: support IEEE 1588 PTP Hemant Agrawal
` (8 subsequent siblings)
17 siblings, 0 replies; 129+ messages in thread
From: Hemant Agrawal @ 2024-09-30 10:29 UTC (permalink / raw)
To: dev; +Cc: ferruh.yigit, Vanshika Shukla
From: Vanshika Shukla <vanshika.shukla@nxp.com>
This patch implements Rx/Tx timestamp read operations
for the DPAA1 platform.
Signed-off-by: Vanshika Shukla <vanshika.shukla@nxp.com>
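As a usage sketch (not part of the patch), an application reads both
timestamps through the generic ethdev timesync calls; error checks are
trimmed, and the Tx timestamp assumes the mbuf was transmitted with
RTE_MBUF_F_TX_IEEE1588_TMST set.

#include <stdio.h>
#include <rte_ethdev.h>

static void
read_ptp_timestamps(uint16_t port_id)
{
	struct timespec rx_ts, tx_ts;

	/* Timestamp of the last received PTP frame. */
	if (rte_eth_timesync_read_rx_timestamp(port_id, &rx_ts, 0) == 0)
		printf("Rx timestamp: %ld.%09ld\n",
		       (long)rx_ts.tv_sec, rx_ts.tv_nsec);

	/* Timestamp of the last transmitted PTP frame. */
	if (rte_eth_timesync_read_tx_timestamp(port_id, &tx_ts) == 0)
		printf("Tx timestamp: %ld.%09ld\n",
		       (long)tx_ts.tv_sec, tx_ts.tv_nsec);
}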
---
doc/guides/nics/features/dpaa.ini | 1 +
drivers/bus/dpaa/base/fman/fman.c | 21 +++++++-
drivers/bus/dpaa/base/fman/fman_hw.c | 6 ++-
drivers/bus/dpaa/include/fman.h | 18 ++++++-
drivers/net/dpaa/dpaa_ethdev.c | 2 +
drivers/net/dpaa/dpaa_ethdev.h | 17 +++++++
drivers/net/dpaa/dpaa_ptp.c | 42 ++++++++++++++++
drivers/net/dpaa/dpaa_rxtx.c | 71 ++++++++++++++++++++++++----
drivers/net/dpaa/dpaa_rxtx.h | 4 +-
drivers/net/dpaa/meson.build | 1 +
10 files changed, 168 insertions(+), 15 deletions(-)
create mode 100644 drivers/net/dpaa/dpaa_ptp.c
diff --git a/doc/guides/nics/features/dpaa.ini b/doc/guides/nics/features/dpaa.ini
index b136ed191a..4196dd800c 100644
--- a/doc/guides/nics/features/dpaa.ini
+++ b/doc/guides/nics/features/dpaa.ini
@@ -19,6 +19,7 @@ Flow control = Y
L3 checksum offload = Y
L4 checksum offload = Y
Packet type parsing = Y
+Timestamp offload = Y
Basic stats = Y
Extended stats = Y
FW version = Y
diff --git a/drivers/bus/dpaa/base/fman/fman.c b/drivers/bus/dpaa/base/fman/fman.c
index beeb03dbf2..e39bc8c252 100644
--- a/drivers/bus/dpaa/base/fman/fman.c
+++ b/drivers/bus/dpaa/base/fman/fman.c
@@ -1,7 +1,7 @@
/* SPDX-License-Identifier: (BSD-3-Clause OR GPL-2.0)
*
* Copyright 2010-2016 Freescale Semiconductor Inc.
- * Copyright 2017-2020 NXP
+ * Copyright 2017-2024 NXP
*
*/
@@ -520,6 +520,25 @@ fman_if_init(const struct device_node *dpa_node)
goto err;
}
+ regs_addr = of_get_address(tx_node, 0, &__if->regs_size, NULL);
+ if (!regs_addr) {
+ FMAN_ERR(-EINVAL, "of_get_address(%s)\n", mname);
+ goto err;
+ }
+ phys_addr = of_translate_address(tx_node, regs_addr);
+ if (!phys_addr) {
+ FMAN_ERR(-EINVAL, "of_translate_address(%s, %p)\n",
+ mname, regs_addr);
+ goto err;
+ }
+ __if->tx_bmi_map = mmap(NULL, __if->regs_size,
+ PROT_READ | PROT_WRITE, MAP_SHARED,
+ fman_ccsr_map_fd, phys_addr);
+ if (__if->tx_bmi_map == MAP_FAILED) {
+ FMAN_ERR(-errno, "mmap(0x%"PRIx64")\n", phys_addr);
+ goto err;
+ }
+
/* No channel ID for MAC-less */
assert(lenp == sizeof(*tx_channel_id));
na = of_n_addr_cells(mac_node);
diff --git a/drivers/bus/dpaa/base/fman/fman_hw.c b/drivers/bus/dpaa/base/fman/fman_hw.c
index 124c69edb4..4fc41c1ae9 100644
--- a/drivers/bus/dpaa/base/fman/fman_hw.c
+++ b/drivers/bus/dpaa/base/fman/fman_hw.c
@@ -1,6 +1,6 @@
/* SPDX-License-Identifier: BSD-3-Clause
*
- * Copyright 2017,2020 NXP
+ * Copyright 2017,2020,2022 NXP
*
*/
@@ -565,6 +565,10 @@ fman_if_set_ic_params(struct fman_if *fm_if,
&((struct rx_bmi_regs *)__if->bmi_map)->fmbm_ricp;
out_be32(fmbm_ricp, val);
+ unsigned int *fmbm_ticp =
+ &((struct tx_bmi_regs *)__if->tx_bmi_map)->fmbm_ticp;
+ out_be32(fmbm_ticp, val);
+
return 0;
}
diff --git a/drivers/bus/dpaa/include/fman.h b/drivers/bus/dpaa/include/fman.h
index 6b2a1893f9..09d1ddb897 100644
--- a/drivers/bus/dpaa/include/fman.h
+++ b/drivers/bus/dpaa/include/fman.h
@@ -2,7 +2,7 @@
*
* Copyright 2010-2012 Freescale Semiconductor, Inc.
* All rights reserved.
- * Copyright 2019-2021 NXP
+ * Copyright 2019-2022 NXP
*
*/
@@ -292,6 +292,21 @@ struct rx_bmi_regs {
uint32_t fmbm_rdbg; /**< Rx Debug-*/
};
+struct tx_bmi_regs {
+ uint32_t fmbm_tcfg; /**< Tx Configuration*/
+ uint32_t fmbm_tst; /**< Tx Status*/
+ uint32_t fmbm_tda; /**< Tx DMA attributes*/
+ uint32_t fmbm_tfp; /**< Tx FIFO Parameters*/
+ uint32_t fmbm_tfed; /**< Tx Frame End Data*/
+ uint32_t fmbm_ticp; /**< Tx Internal Context Parameters*/
+ uint32_t fmbm_tfdne; /**< Tx Frame Dequeue Next Engine*/
+ uint32_t fmbm_tfca; /**< Tx Frame Attributes*/
+ uint32_t fmbm_tcfqid; /**< Tx Confirmation Frame Queue ID*/
+ uint32_t fmbm_tefqid; /**< Tx Error Frame Queue ID*/
+ uint32_t fmbm_tfene; /**< Tx Frame Enqueue Next Engine*/
+ uint32_t fmbm_trlmts; /**< Tx Rate Limiter Scale*/
+ uint32_t fmbm_trlmt; /**< Tx Rate Limiter*/
+};
struct fman_port_qmi_regs {
uint32_t fmqm_pnc; /**< PortID n Configuration Register */
uint32_t fmqm_pns; /**< PortID n Status Register */
@@ -380,6 +395,7 @@ struct __fman_if {
uint64_t regs_size;
void *ccsr_map;
void *bmi_map;
+ void *tx_bmi_map;
void *qmi_map;
struct list_head node;
};
diff --git a/drivers/net/dpaa/dpaa_ethdev.c b/drivers/net/dpaa/dpaa_ethdev.c
index bf14d73433..682cb1c77e 100644
--- a/drivers/net/dpaa/dpaa_ethdev.c
+++ b/drivers/net/dpaa/dpaa_ethdev.c
@@ -1673,6 +1673,8 @@ static struct eth_dev_ops dpaa_devops = {
.rx_queue_intr_disable = dpaa_dev_queue_intr_disable,
.rss_hash_update = dpaa_dev_rss_hash_update,
.rss_hash_conf_get = dpaa_dev_rss_hash_conf_get,
+ .timesync_read_rx_timestamp = dpaa_timesync_read_rx_timestamp,
+ .timesync_read_tx_timestamp = dpaa_timesync_read_tx_timestamp,
};
static bool
diff --git a/drivers/net/dpaa/dpaa_ethdev.h b/drivers/net/dpaa/dpaa_ethdev.h
index 0a1ceb376a..bbdb0936c0 100644
--- a/drivers/net/dpaa/dpaa_ethdev.h
+++ b/drivers/net/dpaa/dpaa_ethdev.h
@@ -151,6 +151,14 @@ struct dpaa_if {
void *netenv_handle;
void *scheme_handle[2];
uint32_t scheme_count;
+ /*stores timestamp of last received packet on dev*/
+ uint64_t rx_timestamp;
+ /*stores timestamp of last received tx confirmation packet on dev*/
+ uint64_t tx_timestamp;
+ /* stores pointer to next tx_conf queue that should be processed,
+ * it corresponds to last packet transmitted
+ */
+ struct qman_fq *next_tx_conf_queue;
void *vsp_handle[DPAA_VSP_PROFILE_MAX_NUM];
uint32_t vsp_bpid[DPAA_VSP_PROFILE_MAX_NUM];
@@ -233,6 +241,15 @@ struct dpaa_if_rx_bmi_stats {
uint32_t fmbm_rbdc; /**< Rx Buffers Deallocate Counter*/
};
+int
+dpaa_timesync_read_tx_timestamp(struct rte_eth_dev *dev,
+ struct timespec *timestamp);
+
+int
+dpaa_timesync_read_rx_timestamp(struct rte_eth_dev *dev,
+ struct timespec *timestamp,
+ uint32_t flags __rte_unused);
+
/* PMD related logs */
extern int dpaa_logtype_pmd;
#define RTE_LOGTYPE_DPAA_PMD dpaa_logtype_pmd
diff --git a/drivers/net/dpaa/dpaa_ptp.c b/drivers/net/dpaa/dpaa_ptp.c
new file mode 100644
index 0000000000..2ecdda6db0
--- /dev/null
+++ b/drivers/net/dpaa/dpaa_ptp.c
@@ -0,0 +1,42 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright 2022-2024 NXP
+ */
+
+/* System headers */
+#include <stdio.h>
+#include <inttypes.h>
+#include <unistd.h>
+
+#include <rte_ethdev.h>
+#include <rte_log.h>
+#include <rte_eth_ctrl.h>
+#include <rte_malloc.h>
+#include <rte_time.h>
+
+#include <dpaa_ethdev.h>
+#include <dpaa_rxtx.h>
+
+int dpaa_timesync_read_tx_timestamp(struct rte_eth_dev *dev,
+ struct timespec *timestamp)
+{
+ struct dpaa_if *dpaa_intf = dev->data->dev_private;
+
+ if (dpaa_intf->next_tx_conf_queue) {
+ while (!dpaa_intf->tx_timestamp)
+ dpaa_eth_tx_conf(dpaa_intf->next_tx_conf_queue);
+ } else {
+ return -1;
+ }
+ *timestamp = rte_ns_to_timespec(dpaa_intf->tx_timestamp);
+
+ return 0;
+}
+
+int dpaa_timesync_read_rx_timestamp(struct rte_eth_dev *dev,
+ struct timespec *timestamp,
+ uint32_t flags __rte_unused)
+{
+ struct dpaa_if *dpaa_intf = dev->data->dev_private;
+ *timestamp = rte_ns_to_timespec(dpaa_intf->rx_timestamp);
+ return 0;
+}
diff --git a/drivers/net/dpaa/dpaa_rxtx.c b/drivers/net/dpaa/dpaa_rxtx.c
index d1338d1654..e3b4bb14ab 100644
--- a/drivers/net/dpaa/dpaa_rxtx.c
+++ b/drivers/net/dpaa/dpaa_rxtx.c
@@ -1,7 +1,7 @@
/* SPDX-License-Identifier: BSD-3-Clause
*
* Copyright 2016 Freescale Semiconductor, Inc. All rights reserved.
- * Copyright 2017,2019-2021 NXP
+ * Copyright 2017,2019-2024 NXP
*
*/
@@ -49,7 +49,6 @@
#define DPAA_MBUF_TO_CONTIG_FD(_mbuf, _fd, _bpid) \
do { \
- (_fd)->cmd = 0; \
(_fd)->opaque_addr = 0; \
(_fd)->opaque = QM_FD_CONTIG << DPAA_FD_FORMAT_SHIFT; \
(_fd)->opaque |= ((_mbuf)->data_off) << DPAA_FD_OFFSET_SHIFT; \
@@ -122,6 +121,8 @@ static inline void dpaa_eth_packet_info(struct rte_mbuf *m, void *fd_virt_addr)
{
struct annotations_t *annot = GET_ANNOTATIONS(fd_virt_addr);
uint64_t prs = *((uintptr_t *)(&annot->parse)) & DPAA_PARSE_MASK;
+ struct rte_ether_hdr *eth_hdr =
+ rte_pktmbuf_mtod(m, struct rte_ether_hdr *);
DPAA_DP_LOG(DEBUG, " Parsing mbuf: %p with annotations: %p", m, annot);
@@ -241,6 +242,11 @@ static inline void dpaa_eth_packet_info(struct rte_mbuf *m, void *fd_virt_addr)
if (prs & DPAA_PARSE_VLAN_MASK)
m->ol_flags |= RTE_MBUF_F_RX_VLAN;
/* Packet received without stripping the vlan */
+
+ if (eth_hdr->ether_type == htons(RTE_ETHER_TYPE_1588)) {
+ m->ol_flags |= RTE_MBUF_F_RX_IEEE1588_PTP;
+ m->ol_flags |= RTE_MBUF_F_RX_IEEE1588_TMST;
+ }
}
static inline void dpaa_checksum(struct rte_mbuf *mbuf)
@@ -317,7 +323,7 @@ static inline void dpaa_checksum_offload(struct rte_mbuf *mbuf,
prs->ip_off[0] = mbuf->l2_len;
prs->l4_off = mbuf->l3_len + mbuf->l2_len;
/* Enable L3 (and L4, if TCP or UDP) HW checksum*/
- fd->cmd = DPAA_FD_CMD_RPD | DPAA_FD_CMD_DTC;
+ fd->cmd |= DPAA_FD_CMD_RPD | DPAA_FD_CMD_DTC;
}
static inline void
@@ -513,6 +519,7 @@ dpaa_rx_cb_no_prefetch(struct qman_fq **fq, struct qm_dqrr_entry **dqrr,
uint16_t offset, i;
uint32_t length;
uint8_t format;
+ struct annotations_t *annot;
bp_info = DPAA_BPID_TO_POOL_INFO(dqrr[0]->fd.bpid);
ptr = rte_dpaa_mem_ptov(qm_fd_addr(&dqrr[0]->fd));
@@ -554,6 +561,11 @@ dpaa_rx_cb_no_prefetch(struct qman_fq **fq, struct qm_dqrr_entry **dqrr,
rte_mbuf_refcnt_set(mbuf, 1);
dpaa_eth_packet_info(mbuf, mbuf->buf_addr);
dpaa_display_frame_info(fd, fq[0]->fqid, true);
+ if (dpaa_ieee_1588) {
+ annot = GET_ANNOTATIONS(mbuf->buf_addr);
+ dpaa_intf->rx_timestamp =
+ rte_cpu_to_be_64(annot->timestamp);
+ }
}
}
@@ -567,6 +579,7 @@ dpaa_rx_cb(struct qman_fq **fq, struct qm_dqrr_entry **dqrr,
uint16_t offset, i;
uint32_t length;
uint8_t format;
+ struct annotations_t *annot;
for (i = 0; i < num_bufs; i++) {
fd = &dqrr[i]->fd;
@@ -594,6 +607,11 @@ dpaa_rx_cb(struct qman_fq **fq, struct qm_dqrr_entry **dqrr,
rte_mbuf_refcnt_set(mbuf, 1);
dpaa_eth_packet_info(mbuf, mbuf->buf_addr);
dpaa_display_frame_info(fd, fq[0]->fqid, true);
+ if (dpaa_ieee_1588) {
+ annot = GET_ANNOTATIONS(mbuf->buf_addr);
+ dpaa_intf->rx_timestamp =
+ rte_cpu_to_be_64(annot->timestamp);
+ }
}
}
@@ -758,6 +776,8 @@ uint16_t dpaa_eth_queue_rx(void *q,
uint32_t num_rx = 0, ifid = ((struct dpaa_if *)fq->dpaa_intf)->ifid;
int num_rx_bufs, ret;
uint32_t vdqcr_flags = 0;
+ struct annotations_t *annot;
+ struct dpaa_if *dpaa_intf = fq->dpaa_intf;
if (unlikely(rte_dpaa_bpid_info == NULL &&
rte_eal_process_type() == RTE_PROC_SECONDARY))
@@ -800,6 +820,10 @@ uint16_t dpaa_eth_queue_rx(void *q,
continue;
bufs[num_rx++] = dpaa_eth_fd_to_mbuf(&dq->fd, ifid);
dpaa_display_frame_info(&dq->fd, fq->fqid, true);
+ if (dpaa_ieee_1588) {
+ annot = GET_ANNOTATIONS(bufs[num_rx - 1]->buf_addr);
+ dpaa_intf->rx_timestamp = rte_cpu_to_be_64(annot->timestamp);
+ }
qman_dqrr_consume(fq, dq);
} while (fq->flags & QMAN_FQ_STATE_VDQCR);
@@ -1095,6 +1119,7 @@ dpaa_eth_queue_tx(void *q, struct rte_mbuf **bufs, uint16_t nb_bufs)
struct dpaa_sw_buf_free buf_to_free[DPAA_MAX_SGS * DPAA_MAX_DEQUEUE_NUM_FRAMES];
uint32_t free_count = 0;
struct qman_fq *fq = q;
+ struct dpaa_if *dpaa_intf = fq->dpaa_intf;
struct qman_fq *fq_txconf = fq->tx_conf_queue;
if (unlikely(!DPAA_PER_LCORE_PORTAL)) {
@@ -1107,6 +1132,12 @@ dpaa_eth_queue_tx(void *q, struct rte_mbuf **bufs, uint16_t nb_bufs)
DPAA_DP_LOG(DEBUG, "Transmitting %d buffers on queue: %p", nb_bufs, q);
+ if (dpaa_ieee_1588) {
+ dpaa_intf->next_tx_conf_queue = fq_txconf;
+ dpaa_eth_tx_conf(fq_txconf);
+ dpaa_intf->tx_timestamp = 0;
+ }
+
while (nb_bufs) {
frames_to_send = (nb_bufs > DPAA_TX_BURST_SIZE) ?
DPAA_TX_BURST_SIZE : nb_bufs;
@@ -1119,6 +1150,14 @@ dpaa_eth_queue_tx(void *q, struct rte_mbuf **bufs, uint16_t nb_bufs)
if (dpaa_svr_family == SVR_LS1043A_FAMILY &&
(mbuf->data_off & 0x7F) != 0x0)
realloc_mbuf = 1;
+
+ fd_arr[loop].cmd = 0;
+ if (dpaa_ieee_1588) {
+ fd_arr[loop].cmd |= DPAA_FD_CMD_FCO |
+ qman_fq_fqid(fq_txconf);
+ fd_arr[loop].cmd |= DPAA_FD_CMD_RPD |
+ DPAA_FD_CMD_UPD;
+ }
seqn = *dpaa_seqn(mbuf);
if (seqn != DPAA_INVALID_MBUF_SEQN) {
index = seqn - 1;
@@ -1176,10 +1215,6 @@ dpaa_eth_queue_tx(void *q, struct rte_mbuf **bufs, uint16_t nb_bufs)
mbuf = temp_mbuf;
realloc_mbuf = 0;
}
-
- if (dpaa_ieee_1588)
- fd_arr[loop].cmd |= DPAA_FD_CMD_FCO | qman_fq_fqid(fq_txconf);
-
indirect_buf:
state = tx_on_dpaa_pool(mbuf, bp_info,
&fd_arr[loop],
@@ -1208,9 +1243,6 @@ dpaa_eth_queue_tx(void *q, struct rte_mbuf **bufs, uint16_t nb_bufs)
sent += frames_to_send;
}
- if (dpaa_ieee_1588)
- dpaa_eth_tx_conf(fq_txconf);
-
DPAA_DP_LOG(DEBUG, "Transmitted %d buffers on queue: %p", sent, q);
for (loop = 0; loop < free_count; loop++) {
@@ -1228,6 +1260,12 @@ dpaa_eth_tx_conf(void *q)
struct qm_dqrr_entry *dq;
int num_tx_conf, ret, dq_num;
uint32_t vdqcr_flags = 0;
+ struct dpaa_if *dpaa_intf = fq->dpaa_intf;
+ struct qm_dqrr_entry *dqrr;
+ struct dpaa_bp_info *bp_info;
+ struct rte_mbuf *mbuf;
+ void *ptr;
+ struct annotations_t *annot;
if (unlikely(rte_dpaa_bpid_info == NULL &&
rte_eal_process_type() == RTE_PROC_SECONDARY))
@@ -1252,7 +1290,20 @@ dpaa_eth_tx_conf(void *q)
dq = qman_dequeue(fq);
if (!dq)
continue;
+ dqrr = dq;
dq_num++;
+ bp_info = DPAA_BPID_TO_POOL_INFO(dqrr->fd.bpid);
+ ptr = rte_dpaa_mem_ptov(qm_fd_addr(&dqrr->fd));
+ rte_prefetch0((void *)((uint8_t *)ptr
+ + DEFAULT_RX_ICEOF));
+ mbuf = (struct rte_mbuf *)
+ ((char *)ptr - bp_info->meta_data_size);
+
+ if (mbuf->ol_flags & RTE_MBUF_F_TX_IEEE1588_TMST) {
+ annot = GET_ANNOTATIONS(mbuf->buf_addr);
+ dpaa_intf->tx_timestamp =
+ rte_cpu_to_be_64(annot->timestamp);
+ }
dpaa_display_frame_info(&dq->fd, fq->fqid, true);
qman_dqrr_consume(fq, dq);
dpaa_free_mbuf(&dq->fd);
diff --git a/drivers/net/dpaa/dpaa_rxtx.h b/drivers/net/dpaa/dpaa_rxtx.h
index 042602e087..1048e86d41 100644
--- a/drivers/net/dpaa/dpaa_rxtx.h
+++ b/drivers/net/dpaa/dpaa_rxtx.h
@@ -1,7 +1,7 @@
/* SPDX-License-Identifier: BSD-3-Clause
*
* Copyright 2016 Freescale Semiconductor, Inc. All rights reserved.
- * Copyright 2017,2020-2021 NXP
+ * Copyright 2017,2020-2022 NXP
*
*/
@@ -260,7 +260,7 @@ struct dpaa_eth_parse_results_t {
struct annotations_t {
uint8_t reserved[DEFAULT_RX_ICEOF];
struct dpaa_eth_parse_results_t parse; /**< Pointer to Parsed result*/
- uint64_t reserved1;
+ uint64_t timestamp;
uint64_t hash; /**< Hash Result */
};
diff --git a/drivers/net/dpaa/meson.build b/drivers/net/dpaa/meson.build
index 42e1f8c2e2..239858adda 100644
--- a/drivers/net/dpaa/meson.build
+++ b/drivers/net/dpaa/meson.build
@@ -14,6 +14,7 @@ sources = files(
'dpaa_flow.c',
'dpaa_rxtx.c',
'dpaa_fmc.c',
+ 'dpaa_ptp.c',
)
if cc.has_argument('-Wno-pointer-arith')
--
2.25.1
^ permalink raw reply [flat|nested] 129+ messages in thread
* [PATCH v3 10/18] net/dpaa: support IEEE 1588 PTP
2024-09-30 10:29 ` [PATCH v3 " Hemant Agrawal
` (8 preceding siblings ...)
2024-09-30 10:29 ` [PATCH v3 09/18] net/dpaa: support Rx/Tx timestamp read Hemant Agrawal
@ 2024-09-30 10:29 ` Hemant Agrawal
2024-09-30 10:29 ` [PATCH v3 11/18] net/dpaa: implement detailed packet parsing Hemant Agrawal
` (7 subsequent siblings)
17 siblings, 0 replies; 129+ messages in thread
From: Hemant Agrawal @ 2024-09-30 10:29 UTC (permalink / raw)
To: dev; +Cc: ferruh.yigit, Vanshika Shukla
From: Vanshika Shukla <vanshika.shukla@nxp.com>
This patch adds support for the ethdev APIs
to enable/disable and to read/write/adjust IEEE 1588
PTP timestamps on the DPAA platform.
Signed-off-by: Vanshika Shukla <vanshika.shukla@nxp.com>
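A hedged usage sketch of the new ops through the generic ethdev API
(a valid port_id is assumed, return codes ignored for brevity):

#include <rte_ethdev.h>

static void
adjust_fman_rtc(uint16_t port_id)
{
	struct timespec now;

	rte_eth_timesync_enable(port_id);

	/* Read the current RTC time, ... */
	rte_eth_timesync_read_time(port_id, &now);

	/* ... nudge it by +1 microsecond, ... */
	rte_eth_timesync_adjust_time(port_id, 1000);

	/* ... or set it outright. */
	now.tv_sec += 1;
	rte_eth_timesync_write_time(port_id, &now);

	rte_eth_timesync_disable(port_id);
}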
---
doc/guides/nics/dpaa.rst | 1 +
doc/guides/nics/features/dpaa.ini | 1 +
drivers/bus/dpaa/base/fman/fman.c | 15 ++++++
drivers/bus/dpaa/include/fman.h | 45 +++++++++++++++++
drivers/net/dpaa/dpaa_ethdev.c | 5 ++
drivers/net/dpaa/dpaa_ethdev.h | 16 +++++++
drivers/net/dpaa/dpaa_ptp.c | 80 ++++++++++++++++++++++++++++++-
7 files changed, 161 insertions(+), 2 deletions(-)
diff --git a/doc/guides/nics/dpaa.rst b/doc/guides/nics/dpaa.rst
index acf4daab02..ea86e6146c 100644
--- a/doc/guides/nics/dpaa.rst
+++ b/doc/guides/nics/dpaa.rst
@@ -148,6 +148,7 @@ Features
- Packet type information
- Checksum offload
- Promiscuous mode
+ - IEEE1588 PTP
DPAA Mempool Driver
~~~~~~~~~~~~~~~~~~~
diff --git a/doc/guides/nics/features/dpaa.ini b/doc/guides/nics/features/dpaa.ini
index 4196dd800c..4f31b61de1 100644
--- a/doc/guides/nics/features/dpaa.ini
+++ b/doc/guides/nics/features/dpaa.ini
@@ -19,6 +19,7 @@ Flow control = Y
L3 checksum offload = Y
L4 checksum offload = Y
Packet type parsing = Y
+Timesync = Y
Timestamp offload = Y
Basic stats = Y
Extended stats = Y
diff --git a/drivers/bus/dpaa/base/fman/fman.c b/drivers/bus/dpaa/base/fman/fman.c
index e39bc8c252..e2b7120237 100644
--- a/drivers/bus/dpaa/base/fman/fman.c
+++ b/drivers/bus/dpaa/base/fman/fman.c
@@ -28,6 +28,7 @@ u32 fman_dealloc_bufs_mask_lo;
int fman_ccsr_map_fd = -1;
static COMPAT_LIST_HEAD(__ifs);
+void *rtc_map;
/* This is the (const) global variable that callers have read-only access to.
* Internally, we have read-write access directly to __ifs.
@@ -539,6 +540,20 @@ fman_if_init(const struct device_node *dpa_node)
goto err;
}
+ if (!rtc_map) {
+ __if->rtc_map = mmap(NULL, FMAN_IEEE_1588_SIZE,
+ PROT_READ | PROT_WRITE, MAP_SHARED,
+ fman_ccsr_map_fd, FMAN_IEEE_1588_OFFSET);
+ if (__if->rtc_map == MAP_FAILED) {
+ pr_err("Can not map FMan RTC regs base\n");
+ _errno = -EINVAL;
+ goto err;
+ }
+ rtc_map = __if->rtc_map;
+ } else {
+ __if->rtc_map = rtc_map;
+ }
+
/* No channel ID for MAC-less */
assert(lenp == sizeof(*tx_channel_id));
na = of_n_addr_cells(mac_node);
diff --git a/drivers/bus/dpaa/include/fman.h b/drivers/bus/dpaa/include/fman.h
index 09d1ddb897..e8bc913943 100644
--- a/drivers/bus/dpaa/include/fman.h
+++ b/drivers/bus/dpaa/include/fman.h
@@ -64,6 +64,12 @@
#define GROUP_ADDRESS 0x0000010000000000LL
#define HASH_CTRL_ADDR_MASK 0x0000003F
+#define FMAN_RTC_MAX_NUM_OF_ALARMS 3
+#define FMAN_RTC_MAX_NUM_OF_PERIODIC_PULSES 4
+#define FMAN_RTC_MAX_NUM_OF_EXT_TRIGGERS 3
+#define FMAN_IEEE_1588_OFFSET 0X1AFE000
+#define FMAN_IEEE_1588_SIZE 4096
+
/* Pre definitions of FMAN interface and Bpool structures */
struct __fman_if;
struct fman_if_bpool;
@@ -307,6 +313,44 @@ struct tx_bmi_regs {
uint32_t fmbm_trlmts; /**< Tx Rate Limiter Scale*/
uint32_t fmbm_trlmt; /**< Tx Rate Limiter*/
};
+
+/* Description FM RTC timer alarm */
+struct t_tmr_alarm {
+ uint32_t tmr_alarm_h;
+ uint32_t tmr_alarm_l;
+};
+
+/* Description FM RTC timer Ex trigger */
+struct t_tmr_ext_trigger {
+ uint32_t tmr_etts_h;
+ uint32_t tmr_etts_l;
+};
+
+struct rtc_regs {
+ uint32_t tmr_id; /* 0x000 Module ID register */
+ uint32_t tmr_id2; /* 0x004 Controller ID register */
+ uint32_t reserved0008[30];
+ uint32_t tmr_ctrl; /* 0x0080 timer control register */
+ uint32_t tmr_tevent; /* 0x0084 timer event register */
+ uint32_t tmr_temask; /* 0x0088 timer event mask register */
+ uint32_t reserved008c[3];
+ uint32_t tmr_cnt_h; /* 0x0098 timer counter high register */
+ uint32_t tmr_cnt_l; /* 0x009c timer counter low register */
+ uint32_t tmr_add; /* 0x00a0 timer drift compensation addend register */
+ uint32_t tmr_acc; /* 0x00a4 timer accumulator register */
+ uint32_t tmr_prsc; /* 0x00a8 timer prescale */
+ uint32_t reserved00ac;
+ uint32_t tmr_off_h; /* 0x00b0 timer offset high */
+ uint32_t tmr_off_l; /* 0x00b4 timer offset low */
+ struct t_tmr_alarm tmr_alarm[FMAN_RTC_MAX_NUM_OF_ALARMS];
+ /* 0x00b8 timer alarm */
+ uint32_t tmr_fiper[FMAN_RTC_MAX_NUM_OF_PERIODIC_PULSES];
+ /* 0x00d0 timer fixed period interval */
+ struct t_tmr_ext_trigger tmr_etts[FMAN_RTC_MAX_NUM_OF_EXT_TRIGGERS];
+ /* 0x00e0 time stamp general purpose external */
+ uint32_t reserved00f0[4];
+};
+
struct fman_port_qmi_regs {
uint32_t fmqm_pnc; /**< PortID n Configuration Register */
uint32_t fmqm_pns; /**< PortID n Status Register */
@@ -396,6 +440,7 @@ struct __fman_if {
void *ccsr_map;
void *bmi_map;
void *tx_bmi_map;
+ void *rtc_map;
void *qmi_map;
struct list_head node;
};
diff --git a/drivers/net/dpaa/dpaa_ethdev.c b/drivers/net/dpaa/dpaa_ethdev.c
index 682cb1c77e..82d1960356 100644
--- a/drivers/net/dpaa/dpaa_ethdev.c
+++ b/drivers/net/dpaa/dpaa_ethdev.c
@@ -1673,6 +1673,11 @@ static struct eth_dev_ops dpaa_devops = {
.rx_queue_intr_disable = dpaa_dev_queue_intr_disable,
.rss_hash_update = dpaa_dev_rss_hash_update,
.rss_hash_conf_get = dpaa_dev_rss_hash_conf_get,
+ .timesync_enable = dpaa_timesync_enable,
+ .timesync_disable = dpaa_timesync_disable,
+ .timesync_read_time = dpaa_timesync_read_time,
+ .timesync_write_time = dpaa_timesync_write_time,
+ .timesync_adjust_time = dpaa_timesync_adjust_time,
.timesync_read_rx_timestamp = dpaa_timesync_read_rx_timestamp,
.timesync_read_tx_timestamp = dpaa_timesync_read_tx_timestamp,
};
diff --git a/drivers/net/dpaa/dpaa_ethdev.h b/drivers/net/dpaa/dpaa_ethdev.h
index bbdb0936c0..7884cc034c 100644
--- a/drivers/net/dpaa/dpaa_ethdev.h
+++ b/drivers/net/dpaa/dpaa_ethdev.h
@@ -245,6 +245,22 @@ int
dpaa_timesync_read_tx_timestamp(struct rte_eth_dev *dev,
struct timespec *timestamp);
+int
+dpaa_timesync_enable(struct rte_eth_dev *dev);
+
+int
+dpaa_timesync_disable(struct rte_eth_dev *dev);
+
+int
+dpaa_timesync_read_time(struct rte_eth_dev *dev,
+ struct timespec *timestamp);
+
+int
+dpaa_timesync_write_time(struct rte_eth_dev *dev,
+ const struct timespec *timestamp);
+int
+dpaa_timesync_adjust_time(struct rte_eth_dev *dev, int64_t delta);
+
int
dpaa_timesync_read_rx_timestamp(struct rte_eth_dev *dev,
struct timespec *timestamp,
diff --git a/drivers/net/dpaa/dpaa_ptp.c b/drivers/net/dpaa/dpaa_ptp.c
index 2ecdda6db0..48e29e22eb 100644
--- a/drivers/net/dpaa/dpaa_ptp.c
+++ b/drivers/net/dpaa/dpaa_ptp.c
@@ -16,7 +16,82 @@
#include <dpaa_ethdev.h>
#include <dpaa_rxtx.h>
-int dpaa_timesync_read_tx_timestamp(struct rte_eth_dev *dev,
+int
+dpaa_timesync_enable(struct rte_eth_dev *dev __rte_unused)
+{
+ return 0;
+}
+
+int
+dpaa_timesync_disable(struct rte_eth_dev *dev __rte_unused)
+{
+ return 0;
+}
+
+int
+dpaa_timesync_read_time(struct rte_eth_dev *dev,
+ struct timespec *timestamp)
+{
+ uint32_t *tmr_cnt_h, *tmr_cnt_l;
+ struct __fman_if *__fif;
+ struct fman_if *fif;
+ uint64_t time;
+
+ fif = dev->process_private;
+ __fif = container_of(fif, struct __fman_if, __if);
+
+ tmr_cnt_h = &((struct rtc_regs *)__fif->rtc_map)->tmr_cnt_h;
+ tmr_cnt_l = &((struct rtc_regs *)__fif->rtc_map)->tmr_cnt_l;
+
+ time = (uint64_t)in_be32(tmr_cnt_l);
+ time |= ((uint64_t)in_be32(tmr_cnt_h) << 32);
+
+ *timestamp = rte_ns_to_timespec(time);
+ return 0;
+}
+
+int
+dpaa_timesync_write_time(struct rte_eth_dev *dev,
+ const struct timespec *ts)
+{
+ uint32_t *tmr_cnt_h, *tmr_cnt_l;
+ struct __fman_if *__fif;
+ struct fman_if *fif;
+ uint64_t time;
+
+ fif = dev->process_private;
+ __fif = container_of(fif, struct __fman_if, __if);
+
+ tmr_cnt_h = &((struct rtc_regs *)__fif->rtc_map)->tmr_cnt_h;
+ tmr_cnt_l = &((struct rtc_regs *)__fif->rtc_map)->tmr_cnt_l;
+
+ time = rte_timespec_to_ns(ts);
+
+ out_be32(tmr_cnt_l, (uint32_t)time);
+ out_be32(tmr_cnt_h, (uint32_t)(time >> 32));
+
+ return 0;
+}
+
+int
+dpaa_timesync_adjust_time(struct rte_eth_dev *dev, int64_t delta)
+{
+ struct timespec ts = {0, 0}, *timestamp = &ts;
+ uint64_t ns;
+
+ dpaa_timesync_read_time(dev, timestamp);
+
+ ns = rte_timespec_to_ns(timestamp);
+ ns += delta;
+ *timestamp = rte_ns_to_timespec(ns);
+
+ dpaa_timesync_write_time(dev, timestamp);
+
+ return 0;
+}
+
+int
+dpaa_timesync_read_tx_timestamp(struct rte_eth_dev *dev,
struct timespec *timestamp)
{
struct dpaa_if *dpaa_intf = dev->data->dev_private;
@@ -32,7 +107,8 @@ int dpaa_timesync_read_tx_timestamp(struct rte_eth_dev *dev,
return 0;
}
-int dpaa_timesync_read_rx_timestamp(struct rte_eth_dev *dev,
+int
+dpaa_timesync_read_rx_timestamp(struct rte_eth_dev *dev,
struct timespec *timestamp,
uint32_t flags __rte_unused)
{
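
For reference, a minimal application-level sketch (not part of the patch)
of how these new dev_ops are reached through the generic ethdev timesync
API; the port id and drift value below are placeholders supplied by the
application:

#include <stdio.h>
#include <inttypes.h>
#include <time.h>
#include <rte_ethdev.h>

static int
sync_rtc_example(uint16_t port_id, int64_t drift_ns)
{
	struct timespec ts;
	int ret;

	ret = rte_eth_timesync_enable(port_id);      /* dpaa_timesync_enable() */
	if (ret != 0)
		return ret;

	/* Reads the free-running FMan RTC counter (tmr_cnt_h/tmr_cnt_l). */
	ret = rte_eth_timesync_read_time(port_id, &ts);
	if (ret != 0)
		return ret;
	printf("RTC time: %jd.%09ld\n", (intmax_t)ts.tv_sec, ts.tv_nsec);

	/* Applies a signed correction; the driver reads, adds delta, writes back. */
	return rte_eth_timesync_adjust_time(port_id, drift_ns);
}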
--
2.25.1
^ permalink raw reply [flat|nested] 129+ messages in thread
* [PATCH v3 11/18] net/dpaa: implement detailed packet parsing
2024-09-30 10:29 ` [PATCH v3 " Hemant Agrawal
` (9 preceding siblings ...)
2024-09-30 10:29 ` [PATCH v3 10/18] net/dpaa: support IEEE 1588 PTP Hemant Agrawal
@ 2024-09-30 10:29 ` Hemant Agrawal
2024-09-30 10:29 ` [PATCH v3 12/18] net/dpaa: enhance DPAA frame display Hemant Agrawal
` (6 subsequent siblings)
17 siblings, 0 replies; 129+ messages in thread
From: Hemant Agrawal @ 2024-09-30 10:29 UTC (permalink / raw)
To: dev; +Cc: ferruh.yigit, Jun Yang
This patch implements detailed packet parsing using
the annotation info from the hardware.
It decodes the parser results in dpaa_slow_parsing() to set
the Rx mbuf packet type, adding support for identifying
IPsec ESP, GRE and SCTP packets.
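As a rough illustration (not part of the patch), an application can act on
the newly reported types directly from the received mbuf, for example:

#include <rte_mbuf.h>
#include <rte_mbuf_ptype.h>

/* "m" is assumed to come from rte_eth_rx_burst() on a dpaa port. */
static void
classify_rx_mbuf(const struct rte_mbuf *m)
{
	uint32_t ptype = m->packet_type;

	if ((ptype & RTE_PTYPE_TUNNEL_MASK) == RTE_PTYPE_TUNNEL_ESP) {
		/* IPsec ESP detected by the FMan parser */
	} else if ((ptype & RTE_PTYPE_TUNNEL_MASK) == RTE_PTYPE_TUNNEL_GRE) {
		/* GRE tunnel */
	} else if ((ptype & RTE_PTYPE_L4_MASK) == RTE_PTYPE_L4_SCTP) {
		/* SCTP payload */
	}
}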
Signed-off-by: Jun Yang <jun.yang@nxp.com>
Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
---
drivers/net/dpaa/dpaa_ethdev.c | 1 +
drivers/net/dpaa/dpaa_rxtx.c | 35 +++++++-
drivers/net/dpaa/dpaa_rxtx.h | 143 ++++++++++++++-------------------
3 files changed, 93 insertions(+), 86 deletions(-)
diff --git a/drivers/net/dpaa/dpaa_ethdev.c b/drivers/net/dpaa/dpaa_ethdev.c
index 82d1960356..a302b24be6 100644
--- a/drivers/net/dpaa/dpaa_ethdev.c
+++ b/drivers/net/dpaa/dpaa_ethdev.c
@@ -411,6 +411,7 @@ dpaa_supported_ptypes_get(struct rte_eth_dev *dev, size_t *no_of_elements)
RTE_PTYPE_L4_UDP,
RTE_PTYPE_L4_SCTP,
RTE_PTYPE_TUNNEL_ESP,
+ RTE_PTYPE_TUNNEL_GRE,
};
PMD_INIT_FUNC_TRACE();
diff --git a/drivers/net/dpaa/dpaa_rxtx.c b/drivers/net/dpaa/dpaa_rxtx.c
index e3b4bb14ab..99fc3f1b43 100644
--- a/drivers/net/dpaa/dpaa_rxtx.c
+++ b/drivers/net/dpaa/dpaa_rxtx.c
@@ -110,11 +110,38 @@ static void dpaa_display_frame_info(const struct qm_fd *fd,
#define dpaa_display_frame_info(a, b, c)
#endif
-static inline void dpaa_slow_parsing(struct rte_mbuf *m __rte_unused,
- uint64_t prs __rte_unused)
+static inline void
+dpaa_slow_parsing(struct rte_mbuf *m,
+ const struct annotations_t *annot)
{
+ const struct dpaa_eth_parse_results_t *parse;
+
DPAA_DP_LOG(DEBUG, "Slow parsing");
- /*TBD:XXX: to be implemented*/
+ parse = &annot->parse;
+
+ if (parse->ethernet)
+ m->packet_type |= RTE_PTYPE_L2_ETHER;
+ if (parse->vlan)
+ m->packet_type |= RTE_PTYPE_L2_ETHER_VLAN;
+ if (parse->first_ipv4)
+ m->packet_type |= RTE_PTYPE_L3_IPV4;
+ if (parse->first_ipv6)
+ m->packet_type |= RTE_PTYPE_L3_IPV6;
+ if (parse->gre)
+ m->packet_type |= RTE_PTYPE_TUNNEL_GRE;
+ if (parse->last_ipv4)
+ m->packet_type |= RTE_PTYPE_L3_IPV4_EXT;
+ if (parse->last_ipv6)
+ m->packet_type |= RTE_PTYPE_L3_IPV6_EXT;
+ if (parse->l4_type == DPAA_PR_L4_TCP_TYPE)
+ m->packet_type |= RTE_PTYPE_L4_TCP;
+ else if (parse->l4_type == DPAA_PR_L4_UDP_TYPE)
+ m->packet_type |= RTE_PTYPE_L4_UDP;
+ else if (parse->l4_type == DPAA_PR_L4_IPSEC_TYPE &&
+ !parse->l4_info_err && parse->esp_sum)
+ m->packet_type |= RTE_PTYPE_TUNNEL_ESP;
+ else if (parse->l4_type == DPAA_PR_L4_SCTP_TYPE)
+ m->packet_type |= RTE_PTYPE_L4_SCTP;
}
static inline void dpaa_eth_packet_info(struct rte_mbuf *m, void *fd_virt_addr)
@@ -228,7 +255,7 @@ static inline void dpaa_eth_packet_info(struct rte_mbuf *m, void *fd_virt_addr)
break;
/* More switch cases can be added */
default:
- dpaa_slow_parsing(m, prs);
+ dpaa_slow_parsing(m, annot);
}
m->tx_offload = annot->parse.ip_off[0];
diff --git a/drivers/net/dpaa/dpaa_rxtx.h b/drivers/net/dpaa/dpaa_rxtx.h
index 1048e86d41..215bdeaf7f 100644
--- a/drivers/net/dpaa/dpaa_rxtx.h
+++ b/drivers/net/dpaa/dpaa_rxtx.h
@@ -1,7 +1,7 @@
/* SPDX-License-Identifier: BSD-3-Clause
*
* Copyright 2016 Freescale Semiconductor, Inc. All rights reserved.
- * Copyright 2017,2020-2022 NXP
+ * Copyright 2017,2020-2024 NXP
*
*/
@@ -162,98 +162,77 @@
#define DPAA_PKT_L3_LEN_SHIFT 7
+enum dpaa_parse_result_l4_type {
+ DPAA_PR_L4_TCP_TYPE = 1,
+ DPAA_PR_L4_UDP_TYPE = 2,
+ DPAA_PR_L4_IPSEC_TYPE = 3,
+ DPAA_PR_L4_SCTP_TYPE = 4,
+ DPAA_PR_L4_DCCP_TYPE = 5
+};
+
/**
* FMan parse result array
*/
struct dpaa_eth_parse_results_t {
- uint8_t lpid; /**< Logical port id */
- uint8_t shimr; /**< Shim header result */
- union {
- uint16_t l2r; /**< Layer 2 result */
+ uint8_t lpid; /**< Logical port id */
+ uint8_t shimr; /**< Shim header result */
+ union {
+ uint16_t l2r; /**< Layer 2 result */
struct {
-#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
- uint16_t ethernet:1;
- uint16_t vlan:1;
- uint16_t llc_snap:1;
- uint16_t mpls:1;
- uint16_t ppoe_ppp:1;
- uint16_t unused_1:3;
- uint16_t unknown_eth_proto:1;
- uint16_t eth_frame_type:2;
- uint16_t l2r_err:5;
+ uint16_t unused_1:3;
+ uint16_t ppoe_ppp:1;
+ uint16_t mpls:1;
+ uint16_t llc_snap:1;
+ uint16_t vlan:1;
+ uint16_t ethernet:1;
+
+ uint16_t l2r_err:5;
+ uint16_t eth_frame_type:2;
/*00-unicast, 01-multicast, 11-broadcast*/
-#else
- uint16_t l2r_err:5;
- uint16_t eth_frame_type:2;
- uint16_t unknown_eth_proto:1;
- uint16_t unused_1:3;
- uint16_t ppoe_ppp:1;
- uint16_t mpls:1;
- uint16_t llc_snap:1;
- uint16_t vlan:1;
- uint16_t ethernet:1;
-#endif
+ uint16_t unknown_eth_proto:1;
} __rte_packed;
- } __rte_packed;
- union {
- uint16_t l3r; /**< Layer 3 result */
+ } __rte_packed;
+ union {
+ uint16_t l3r; /**< Layer 3 result */
struct {
-#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
- uint16_t first_ipv4:1;
- uint16_t first_ipv6:1;
- uint16_t gre:1;
- uint16_t min_enc:1;
- uint16_t last_ipv4:1;
- uint16_t last_ipv6:1;
- uint16_t first_info_err:1;/*0 info, 1 error*/
- uint16_t first_ip_err_code:5;
- uint16_t last_info_err:1; /*0 info, 1 error*/
- uint16_t last_ip_err_code:3;
-#else
- uint16_t last_ip_err_code:3;
- uint16_t last_info_err:1; /*0 info, 1 error*/
- uint16_t first_ip_err_code:5;
- uint16_t first_info_err:1;/*0 info, 1 error*/
- uint16_t last_ipv6:1;
- uint16_t last_ipv4:1;
- uint16_t min_enc:1;
- uint16_t gre:1;
- uint16_t first_ipv6:1;
- uint16_t first_ipv4:1;
-#endif
+ uint16_t unused_2:1;
+ uint16_t l3_err:1;
+ uint16_t last_ipv6:1;
+ uint16_t last_ipv4:1;
+ uint16_t min_enc:1;
+ uint16_t gre:1;
+ uint16_t first_ipv6:1;
+ uint16_t first_ipv4:1;
+
+ uint16_t unused_3:8;
} __rte_packed;
- } __rte_packed;
- union {
- uint8_t l4r; /**< Layer 4 result */
+ } __rte_packed;
+ union {
+ uint8_t l4r; /**< Layer 4 result */
struct{
-#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
- uint8_t l4_type:3;
- uint8_t l4_info_err:1;
- uint8_t l4_result:4;
- /* if type IPSec: 1 ESP, 2 AH */
-#else
- uint8_t l4_result:4;
- /* if type IPSec: 1 ESP, 2 AH */
- uint8_t l4_info_err:1;
- uint8_t l4_type:3;
-#endif
+ uint8_t l4cv:1;
+ uint8_t unused_4:1;
+ uint8_t ah:1;
+ uint8_t esp_sum:1;
+ uint8_t l4_info_err:1;
+ uint8_t l4_type:3;
} __rte_packed;
- } __rte_packed;
- uint8_t cplan; /**< Classification plan id */
- uint16_t nxthdr; /**< Next Header */
- uint16_t cksum; /**< Checksum */
- uint32_t lcv; /**< LCV */
- uint8_t shim_off[3]; /**< Shim offset */
- uint8_t eth_off; /**< ETH offset */
- uint8_t llc_snap_off; /**< LLC_SNAP offset */
- uint8_t vlan_off[2]; /**< VLAN offset */
- uint8_t etype_off; /**< ETYPE offset */
- uint8_t pppoe_off; /**< PPP offset */
- uint8_t mpls_off[2]; /**< MPLS offset */
- uint8_t ip_off[2]; /**< IP offset */
- uint8_t gre_off; /**< GRE offset */
- uint8_t l4_off; /**< Layer 4 offset */
- uint8_t nxthdr_off; /**< Parser end point */
+ } __rte_packed;
+ uint8_t cplan; /**< Classification plan id */
+ uint16_t nxthdr; /**< Next Header */
+ uint16_t cksum; /**< Checksum */
+ uint32_t lcv; /**< LCV */
+ uint8_t shim_off[3]; /**< Shim offset */
+ uint8_t eth_off; /**< ETH offset */
+ uint8_t llc_snap_off; /**< LLC_SNAP offset */
+ uint8_t vlan_off[2]; /**< VLAN offset */
+ uint8_t etype_off; /**< ETYPE offset */
+ uint8_t pppoe_off; /**< PPP offset */
+ uint8_t mpls_off[2]; /**< MPLS offset */
+ uint8_t ip_off[2]; /**< IP offset */
+ uint8_t gre_off; /**< GRE offset */
+ uint8_t l4_off; /**< Layer 4 offset */
+ uint8_t nxthdr_off; /**< Parser end point */
} __rte_packed;
/* The structure is the Prepended Data to the Frame which is used by FMAN */
--
2.25.1
^ permalink raw reply [flat|nested] 129+ messages in thread
* [PATCH v3 12/18] net/dpaa: enhance DPAA frame display
2024-09-30 10:29 ` [PATCH v3 " Hemant Agrawal
` (10 preceding siblings ...)
2024-09-30 10:29 ` [PATCH v3 11/18] net/dpaa: implement detailed packet parsing Hemant Agrawal
@ 2024-09-30 10:29 ` Hemant Agrawal
2024-09-30 10:29 ` [PATCH v3 13/18] net/dpaa: support mempool debug Hemant Agrawal
` (5 subsequent siblings)
17 siblings, 0 replies; 129+ messages in thread
From: Hemant Agrawal @ 2024-09-30 10:29 UTC (permalink / raw)
To: dev; +Cc: ferruh.yigit, Jun Yang
This patch enhances the received packet debugging capability
by displaying the full packet parsing output.
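As a hedged usage sketch (not part of the patch): with the driver built with
RTE_LIBRTE_DPAA_DEBUG_DRIVER, the dump can also be force-enabled for good
packets by setting the environment variable before the dpaa ports are probed,
e.g. programmatically before rte_eal_init():

#include <stdlib.h>
#include <rte_eal.h>

int
main(int argc, char **argv)
{
	/* Equivalent to "export DPAA_DISPLAY_FRAME_AND_PARSER_RESULT=1" */
	setenv("DPAA_DISPLAY_FRAME_AND_PARSER_RESULT", "1", 1);

	if (rte_eal_init(argc, argv) < 0)
		return -1;
	/* ... normal rx/tx path; dpaa_display_frame_info() now dumps the
	 * frame and parser result for every received packet. */
	return 0;
}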
Signed-off-by: Jun Yang <jun.yang@nxp.com>
Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
---
doc/guides/nics/dpaa.rst | 5 ++
drivers/net/dpaa/dpaa_ethdev.c | 9 +++
drivers/net/dpaa/dpaa_rxtx.c | 138 +++++++++++++++++++++++++++------
drivers/net/dpaa/dpaa_rxtx.h | 5 ++
4 files changed, 133 insertions(+), 24 deletions(-)
diff --git a/doc/guides/nics/dpaa.rst b/doc/guides/nics/dpaa.rst
index ea86e6146c..edf7a7e350 100644
--- a/doc/guides/nics/dpaa.rst
+++ b/doc/guides/nics/dpaa.rst
@@ -227,6 +227,11 @@ state during application initialization:
application want to use eventdev with DPAA device.
Currently these queues are not used for LS1023/LS1043 platform by default.
+- ``DPAA_DISPLAY_FRAME_AND_PARSER_RESULT`` (default 0)
+
+ This defines the debug flag, whether to dump the detailed frame and packet
+ parsing result for the incoming packets.
+
Driver compilation and testing
------------------------------
diff --git a/drivers/net/dpaa/dpaa_ethdev.c b/drivers/net/dpaa/dpaa_ethdev.c
index a302b24be6..4ead890278 100644
--- a/drivers/net/dpaa/dpaa_ethdev.c
+++ b/drivers/net/dpaa/dpaa_ethdev.c
@@ -2056,6 +2056,9 @@ dpaa_dev_init(struct rte_eth_dev *eth_dev)
int8_t dev_vspids[DPAA_MAX_NUM_PCD_QUEUES];
int8_t vsp_id = -1;
struct rte_device *dev = eth_dev->device;
+#ifdef RTE_LIBRTE_DPAA_DEBUG_DRIVER
+ char *penv;
+#endif
PMD_INIT_FUNC_TRACE();
@@ -2135,6 +2138,12 @@ dpaa_dev_init(struct rte_eth_dev *eth_dev)
td_tx_threshold = CGR_RX_PERFQ_THRESH;
}
+#ifdef RTE_LIBRTE_DPAA_DEBUG_DRIVER
+ penv = getenv("DPAA_DISPLAY_FRAME_AND_PARSER_RESULT");
+ if (penv)
+ dpaa_force_display_frame_set(atoi(penv));
+#endif
+
/* If congestion control is enabled globally*/
if (num_rx_fqs > 0 && td_threshold) {
dpaa_intf->cgr_rx = rte_zmalloc(NULL,
diff --git a/drivers/net/dpaa/dpaa_rxtx.c b/drivers/net/dpaa/dpaa_rxtx.c
index 99fc3f1b43..945c84ab10 100644
--- a/drivers/net/dpaa/dpaa_rxtx.c
+++ b/drivers/net/dpaa/dpaa_rxtx.c
@@ -47,6 +47,10 @@
#include <dpaa_of.h>
#include <netcfg.h>
+#ifdef RTE_LIBRTE_DPAA_DEBUG_DRIVER
+static int s_force_display_frm;
+#endif
+
#define DPAA_MBUF_TO_CONTIG_FD(_mbuf, _fd, _bpid) \
do { \
(_fd)->opaque_addr = 0; \
@@ -58,37 +62,122 @@
} while (0)
#ifdef RTE_LIBRTE_DPAA_DEBUG_DRIVER
+void
+dpaa_force_display_frame_set(int set)
+{
+ s_force_display_frm = set;
+}
+
#define DISPLAY_PRINT printf
-static void dpaa_display_frame_info(const struct qm_fd *fd,
- uint32_t fqid, bool rx)
+static void
+dpaa_display_frame_info(const struct qm_fd *fd,
+ uint32_t fqid, bool rx)
{
- int ii;
- char *ptr;
+ int pos, offset = 0;
+ char *ptr, info[1024];
struct annotations_t *annot = rte_dpaa_mem_ptov(fd->addr);
uint8_t format;
+ const struct dpaa_eth_parse_results_t *psr;
- if (!fd->status) {
- /* Do not display correct packets.*/
+ if (!fd->status && !s_force_display_frm) {
+ /* Do not display correct packets unless force display.*/
return;
}
+ psr = &annot->parse;
- format = (fd->opaque & DPAA_FD_FORMAT_MASK) >>
- DPAA_FD_FORMAT_SHIFT;
-
- DISPLAY_PRINT("fqid %d bpid %d addr 0x%lx, format %d\r\n",
- fqid, fd->bpid, (unsigned long)fd->addr, fd->format);
- DISPLAY_PRINT("off %d, len %d stat 0x%x\r\n",
- fd->offset, fd->length20, fd->status);
+ format = (fd->opaque & DPAA_FD_FORMAT_MASK) >> DPAA_FD_FORMAT_SHIFT;
+ if (format == qm_fd_contig)
+ sprintf(info, "simple");
+ else if (format == qm_fd_sg)
+ sprintf(info, "sg");
+ else
+ sprintf(info, "unknown format(%d)", format);
+
+ DISPLAY_PRINT("%s: fqid=%08x, bpid=%d, phy addr=0x%lx ",
+ rx ? "RX" : "TX", fqid, fd->bpid, (unsigned long)fd->addr);
+ DISPLAY_PRINT("format=%s offset=%d, len=%d, stat=0x%x\r\n",
+ info, fd->offset, fd->length20, fd->status);
if (rx) {
- ptr = (char *)&annot->parse;
- DISPLAY_PRINT("RX parser result:\r\n");
- for (ii = 0; ii < (int)sizeof(struct dpaa_eth_parse_results_t);
- ii++) {
- DISPLAY_PRINT("%02x ", ptr[ii]);
- if (((ii + 1) % 16) == 0)
- DISPLAY_PRINT("\n");
+ DISPLAY_PRINT("Display usual RX parser result:\r\n");
+ if (psr->eth_frame_type == 0)
+ offset += sprintf(&info[offset], "unicast");
+ else if (psr->eth_frame_type == 1)
+ offset += sprintf(&info[offset], "multicast");
+ else if (psr->eth_frame_type == 3)
+ offset += sprintf(&info[offset], "broadcast");
+ else
+ offset += sprintf(&info[offset], "unknown eth type(%d)",
+ psr->eth_frame_type);
+ if (psr->l2r_err) {
+ offset += sprintf(&info[offset], " L2 error(%d)",
+ psr->l2r_err);
+ } else {
+ offset += sprintf(&info[offset], " L2 non error");
}
- DISPLAY_PRINT("\n");
+ DISPLAY_PRINT("L2: %s, %s, ethernet type:%s\r\n",
+ psr->ethernet ? "is ethernet" : "non ethernet",
+ psr->vlan ? "is vlan" : "non vlan", info);
+
+ offset = 0;
+ DISPLAY_PRINT("L3: %s/%s, %s/%s, %s, %s\r\n",
+ psr->first_ipv4 ? "first IPv4" : "non first IPv4",
+ psr->last_ipv4 ? "last IPv4" : "non last IPv4",
+ psr->first_ipv6 ? "first IPv6" : "non first IPv6",
+ psr->last_ipv6 ? "last IPv6" : "non last IPv6",
+ psr->gre ? "GRE" : "non GRE",
+ psr->l3_err ? "L3 has error" : "L3 non error");
+
+ if (psr->l4_type == DPAA_PR_L4_TCP_TYPE) {
+ offset += sprintf(&info[offset], "tcp");
+ } else if (psr->l4_type == DPAA_PR_L4_UDP_TYPE) {
+ offset += sprintf(&info[offset], "udp");
+ } else if (psr->l4_type == DPAA_PR_L4_IPSEC_TYPE) {
+ offset += sprintf(&info[offset], "IPSec ");
+ if (psr->esp_sum)
+ offset += sprintf(&info[offset], "ESP");
+ if (psr->ah)
+ offset += sprintf(&info[offset], "AH");
+ } else if (psr->l4_type == DPAA_PR_L4_SCTP_TYPE) {
+ offset += sprintf(&info[offset], "sctp");
+ } else if (psr->l4_type == DPAA_PR_L4_DCCP_TYPE) {
+ offset += sprintf(&info[offset], "dccp");
+ } else {
+ offset += sprintf(&info[offset], "unknown l4 type(%d)",
+ psr->l4_type);
+ }
+ DISPLAY_PRINT("L4: type:%s, L4 validation %s\r\n",
+ info, psr->l4cv ? "Performed" : "NOT performed");
+
+ offset = 0;
+ if (psr->ethernet) {
+ offset += sprintf(&info[offset],
+ "Eth offset=%d, ethtype offset=%d, ",
+ psr->eth_off, psr->etype_off);
+ }
+ if (psr->vlan) {
+ offset += sprintf(&info[offset], "vLAN offset=%d, ",
+ psr->vlan_off[0]);
+ }
+ if (psr->first_ipv4 || psr->first_ipv6) {
+ offset += sprintf(&info[offset], "first IP offset=%d, ",
+ psr->ip_off[0]);
+ }
+ if (psr->last_ipv4 || psr->last_ipv6) {
+ offset += sprintf(&info[offset], "last IP offset=%d, ",
+ psr->ip_off[1]);
+ }
+ if (psr->gre) {
+ offset += sprintf(&info[offset], "GRE offset=%d, ",
+ psr->gre_off);
+ }
+ if (psr->l4_type >= DPAA_PR_L4_TCP_TYPE) {
+ offset += sprintf(&info[offset], "L4 offset=%d, ",
+ psr->l4_off);
+ }
+ offset += sprintf(&info[offset], "Next HDR(0x%04x) offset=%d.",
+ rte_be_to_cpu_16(psr->nxthdr), psr->nxthdr_off);
+
+ DISPLAY_PRINT("%s\r\n", info);
}
if (unlikely(format == qm_fd_sg)) {
@@ -99,13 +188,14 @@ static void dpaa_display_frame_info(const struct qm_fd *fd,
DISPLAY_PRINT("Frame payload:\r\n");
ptr = (char *)annot;
ptr += fd->offset;
- for (ii = 0; ii < fd->length20; ii++) {
- DISPLAY_PRINT("%02x ", ptr[ii]);
- if (((ii + 1) % 16) == 0)
+ for (pos = 0; pos < fd->length20; pos++) {
+ DISPLAY_PRINT("%02x ", ptr[pos]);
+ if (((pos + 1) % 16) == 0)
DISPLAY_PRINT("\n");
}
DISPLAY_PRINT("\n");
}
+
#else
#define dpaa_display_frame_info(a, b, c)
#endif
diff --git a/drivers/net/dpaa/dpaa_rxtx.h b/drivers/net/dpaa/dpaa_rxtx.h
index 215bdeaf7f..392926e286 100644
--- a/drivers/net/dpaa/dpaa_rxtx.h
+++ b/drivers/net/dpaa/dpaa_rxtx.h
@@ -274,4 +274,9 @@ void dpaa_rx_cb_prepare(struct qm_dqrr_entry *dq, void **bufs);
void dpaa_rx_cb_no_prefetch(struct qman_fq **fq,
struct qm_dqrr_entry **dqrr, void **bufs, int num_bufs);
+#ifdef RTE_LIBRTE_DPAA_DEBUG_DRIVER
+void
+dpaa_force_display_frame_set(int set);
+#endif
+
#endif
--
2.25.1
^ permalink raw reply [flat|nested] 129+ messages in thread
* [PATCH v3 13/18] net/dpaa: support mempool debug
2024-09-30 10:29 ` [PATCH v3 " Hemant Agrawal
` (11 preceding siblings ...)
2024-09-30 10:29 ` [PATCH v3 12/18] net/dpaa: enhance DPAA frame display Hemant Agrawal
@ 2024-09-30 10:29 ` Hemant Agrawal
2024-09-30 10:29 ` [PATCH v3 14/18] bus/dpaa: add OH port mode for dpaa eth Hemant Agrawal
` (4 subsequent siblings)
17 siblings, 0 replies; 129+ messages in thread
From: Hemant Agrawal @ 2024-09-30 10:29 UTC (permalink / raw)
To: dev; +Cc: ferruh.yigit, Gagandeep Singh
From: Gagandeep Singh <g.singh@nxp.com>
This patch adds compile-time support for debugging mempool
corruptions in the dpaa driver.
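A simplified sketch of the pattern this patch adds (helper name is
illustrative only): each time an mbuf crosses the driver/hardware boundary,
its mempool cookies are verified when RTE_LIBRTE_MEMPOOL_DEBUG is enabled,
so buffer corruption is caught close to where it happens:

#include <rte_common.h>
#include <rte_mbuf.h>
#include <rte_mempool.h>

static inline void
dbg_check_mbuf(struct rte_mbuf *m, int expect_free)
{
#ifdef RTE_LIBRTE_MEMPOOL_DEBUG
	/* Panics with a diagnostic if the header/trailer cookies are corrupted. */
	rte_mempool_check_cookies(rte_mempool_from_obj((void *)m),
				  (void **)&m, 1, expect_free);
#else
	RTE_SET_USED(m);
	RTE_SET_USED(expect_free);
#endif
}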
Signed-off-by: Gagandeep Singh <g.singh@nxp.com>
---
drivers/net/dpaa/dpaa_rxtx.c | 40 ++++++++++++++++++++++++++++++++++++
1 file changed, 40 insertions(+)
diff --git a/drivers/net/dpaa/dpaa_rxtx.c b/drivers/net/dpaa/dpaa_rxtx.c
index 945c84ab10..d82c6f3be2 100644
--- a/drivers/net/dpaa/dpaa_rxtx.c
+++ b/drivers/net/dpaa/dpaa_rxtx.c
@@ -494,6 +494,10 @@ dpaa_eth_sg_to_mbuf(const struct qm_fd *fd, uint32_t ifid)
first_seg->data_len = sg_temp->length;
first_seg->pkt_len = sg_temp->length;
rte_mbuf_refcnt_set(first_seg, 1);
+#ifdef RTE_LIBRTE_MEMPOOL_DEBUG
+ rte_mempool_check_cookies(rte_mempool_from_obj((void *)first_seg),
+ (void **)&first_seg, 1, 1);
+#endif
first_seg->port = ifid;
first_seg->nb_segs = 1;
@@ -511,6 +515,10 @@ dpaa_eth_sg_to_mbuf(const struct qm_fd *fd, uint32_t ifid)
first_seg->pkt_len += sg_temp->length;
first_seg->nb_segs += 1;
rte_mbuf_refcnt_set(cur_seg, 1);
+#ifdef RTE_LIBRTE_MEMPOOL_DEBUG
+ rte_mempool_check_cookies(rte_mempool_from_obj((void *)cur_seg),
+ (void **)&cur_seg, 1, 1);
+#endif
prev_seg->next = cur_seg;
if (sg_temp->final) {
cur_seg->next = NULL;
@@ -522,6 +530,10 @@ dpaa_eth_sg_to_mbuf(const struct qm_fd *fd, uint32_t ifid)
first_seg->pkt_len, first_seg->nb_segs);
dpaa_eth_packet_info(first_seg, vaddr);
+#ifdef RTE_LIBRTE_MEMPOOL_DEBUG
+ rte_mempool_check_cookies(rte_mempool_from_obj((void *)temp),
+ (void **)&temp, 1, 1);
+#endif
rte_pktmbuf_free_seg(temp);
return first_seg;
@@ -562,6 +574,10 @@ dpaa_eth_fd_to_mbuf(const struct qm_fd *fd, uint32_t ifid)
mbuf->ol_flags = 0;
mbuf->next = NULL;
rte_mbuf_refcnt_set(mbuf, 1);
+#ifdef RTE_LIBRTE_MEMPOOL_DEBUG
+ rte_mempool_check_cookies(rte_mempool_from_obj((void *)mbuf),
+ (void **)&mbuf, 1, 1);
+#endif
dpaa_eth_packet_info(mbuf, mbuf->buf_addr);
return mbuf;
@@ -676,6 +692,10 @@ dpaa_rx_cb_no_prefetch(struct qman_fq **fq, struct qm_dqrr_entry **dqrr,
mbuf->ol_flags = 0;
mbuf->next = NULL;
rte_mbuf_refcnt_set(mbuf, 1);
+#ifdef RTE_LIBRTE_MEMPOOL_DEBUG
+ rte_mempool_check_cookies(rte_mempool_from_obj((void *)mbuf),
+ (void **)&mbuf, 1, 1);
+#endif
dpaa_eth_packet_info(mbuf, mbuf->buf_addr);
dpaa_display_frame_info(fd, fq[0]->fqid, true);
if (dpaa_ieee_1588) {
@@ -722,6 +742,10 @@ dpaa_rx_cb(struct qman_fq **fq, struct qm_dqrr_entry **dqrr,
mbuf->ol_flags = 0;
mbuf->next = NULL;
rte_mbuf_refcnt_set(mbuf, 1);
+#ifdef RTE_LIBRTE_MEMPOOL_DEBUG
+ rte_mempool_check_cookies(rte_mempool_from_obj((void *)mbuf),
+ (void **)&mbuf, 1, 1);
+#endif
dpaa_eth_packet_info(mbuf, mbuf->buf_addr);
dpaa_display_frame_info(fd, fq[0]->fqid, true);
if (dpaa_ieee_1588) {
@@ -972,6 +996,10 @@ dpaa_eth_mbuf_to_sg_fd(struct rte_mbuf *mbuf,
return -1;
}
+#ifdef RTE_LIBRTE_MEMPOOL_DEBUG
+ rte_mempool_check_cookies(rte_mempool_from_obj((void *)temp),
+ (void **)&temp, 1, 0);
+#endif
fd->cmd = 0;
fd->opaque_addr = 0;
@@ -1017,6 +1045,10 @@ dpaa_eth_mbuf_to_sg_fd(struct rte_mbuf *mbuf,
} else {
sg_temp->bpid =
DPAA_MEMPOOL_TO_BPID(cur_seg->pool);
+#ifdef RTE_LIBRTE_MEMPOOL_DEBUG
+ rte_mempool_check_cookies(rte_mempool_from_obj((void *)cur_seg),
+ (void **)&cur_seg, 1, 0);
+#endif
}
} else if (RTE_MBUF_HAS_EXTBUF(cur_seg)) {
free_buf[*free_count].seg = cur_seg;
@@ -1074,6 +1106,10 @@ tx_on_dpaa_pool_unsegmented(struct rte_mbuf *mbuf,
* released by BMAN.
*/
DPAA_MBUF_TO_CONTIG_FD(mbuf, fd_arr, bp_info->bpid);
+#ifdef RTE_LIBRTE_MEMPOOL_DEBUG
+ rte_mempool_check_cookies(rte_mempool_from_obj((void *)mbuf),
+ (void **)&mbuf, 1, 0);
+#endif
}
} else if (RTE_MBUF_HAS_EXTBUF(mbuf)) {
buf_to_free[*free_count].seg = mbuf;
@@ -1302,6 +1338,10 @@ dpaa_eth_queue_tx(void *q, struct rte_mbuf **bufs, uint16_t nb_bufs)
DPAA_TX_CKSUM_OFFLOAD_MASK)
dpaa_unsegmented_checksum(mbuf,
&fd_arr[loop]);
+#ifdef RTE_LIBRTE_MEMPOOL_DEBUG
+ rte_mempool_check_cookies(rte_mempool_from_obj((void *)mbuf),
+ (void **)&mbuf, 1, 0);
+#endif
continue;
}
} else {
--
2.25.1
^ permalink raw reply [flat|nested] 129+ messages in thread
* [PATCH v3 14/18] bus/dpaa: add OH port mode for dpaa eth
2024-09-30 10:29 ` [PATCH v3 " Hemant Agrawal
` (12 preceding siblings ...)
2024-09-30 10:29 ` [PATCH v3 13/18] net/dpaa: support mempool debug Hemant Agrawal
@ 2024-09-30 10:29 ` Hemant Agrawal
2024-09-30 10:29 ` [PATCH v3 15/18] bus/dpaa: add ONIC port mode for the DPAA eth Hemant Agrawal
` (3 subsequent siblings)
17 siblings, 0 replies; 129+ messages in thread
From: Hemant Agrawal @ 2024-09-30 10:29 UTC (permalink / raw)
To: dev; +Cc: ferruh.yigit, Rohit Raj
From: Rohit Raj <rohit.raj@nxp.com>
The NXP DPAA architecture supports the concept of a DPAA
port as an Offline Port - meaning it is not connected to an actual MAC.
This is a hardware-assisted IPC mechanism for communicating between two
applications.
An Offline (O/H) port is a type of hardware port which is able to dequeue
and enqueue from/to a QMan queue. The FMan applies a Parse Classify
Distribute (PCD) flow and (if configured to do so) enqueues the frame
back in a QMan queue.
The FMan is able to copy the frame into new buffers and enqueue them back
to the QMan. This means these ports can be used to send and receive packets
between two applications.
An O/H port has two queues: one to receive and one to send packets.
It loops back all packets received on its Rx queue onto its Tx queue.
This property is completely driven by the device-tree. During the
DPAA bus scan, based on the platform device properties in the
device-tree, the port can be classified as an OH port.
This patch adds support in the driver to use a dpaa eth port
in OH mode as well with DPDK applications.
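As a hedged sketch (not in the patch), an OH port is exposed as a normal
ethdev; the "fm%d-oh%d" name format comes from the dpaa_bus.c change below,
and the specific name used here is just an example. Frames transmitted
towards the O/H port are processed by the FMan and looped back, so a burst
can be read back on the receive side:

#include <rte_ethdev.h>
#include <rte_mbuf.h>

static uint16_t
oh_loopback_once(const char *oh_name, struct rte_mbuf **tx, uint16_t n,
		 struct rte_mbuf **rx, uint16_t rx_sz)
{
	uint16_t port_id;

	if (rte_eth_dev_get_port_by_name(oh_name, &port_id) != 0)
		return 0;

	/* Enqueue towards the O/H port, e.g. named "fm1-oh2" ... */
	rte_eth_tx_burst(port_id, 0, tx, n);
	/* ... and read back the frames the FMan looped around. */
	return rte_eth_rx_burst(port_id, 0, rx, rx_sz);
}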
Signed-off-by: Rohit Raj <rohit.raj@nxp.com>
---
doc/guides/nics/dpaa.rst | 26 ++-
drivers/bus/dpaa/base/fman/fman.c | 259 ++++++++++++++--------
drivers/bus/dpaa/base/fman/fman_hw.c | 24 +-
drivers/bus/dpaa/base/fman/netcfg_layer.c | 19 +-
drivers/bus/dpaa/dpaa_bus.c | 23 +-
drivers/bus/dpaa/include/fman.h | 31 ++-
drivers/net/dpaa/dpaa_ethdev.c | 85 ++++++-
drivers/net/dpaa/dpaa_ethdev.h | 6 +
drivers/net/dpaa/dpaa_flow.c | 66 ++++--
9 files changed, 393 insertions(+), 146 deletions(-)
diff --git a/doc/guides/nics/dpaa.rst b/doc/guides/nics/dpaa.rst
index edf7a7e350..47dcce334c 100644
--- a/doc/guides/nics/dpaa.rst
+++ b/doc/guides/nics/dpaa.rst
@@ -1,5 +1,5 @@
.. SPDX-License-Identifier: BSD-3-Clause
- Copyright 2017,2020 NXP
+ Copyright 2017,2020-2024 NXP
DPAA Poll Mode Driver
@@ -136,6 +136,8 @@ RTE framework and DPAA internal components/drivers.
The Ethernet driver is bound to a FMAN port and implements the interfaces
needed to connect the DPAA network interface to the network stack.
Each FMAN Port corresponds to a DPDK network interface.
+- PMD also support OH mode, where the port works as a HW assisted
+ virtual port without actually connecting to a Physical MAC.
Features
@@ -149,6 +151,8 @@ Features
- Checksum offload
- Promiscuous mode
- IEEE1588 PTP
+ - OH Port for inter application communication
+
DPAA Mempool Driver
~~~~~~~~~~~~~~~~~~~
@@ -326,6 +330,26 @@ FMLIB
`Kernel FMD Driver
<https://source.codeaurora.org/external/qoriq/qoriq-components/linux/tree/drivers/net/ethernet/freescale/sdk_fman?h=linux-4.19-rt>`_.
+OH Port
+~~~~~~~
+ Offline(O/H) port is a type of hardware port which is able to dequeue and
+ enqueue from/to a QMan queue. The FMan applies a Parse Classify Distribute (PCD)
+ flow and (if configured to do so) enqueues the frame back in a QMan queue.
+
+ The FMan is able to copy the frame into new buffers and enqueue back to the
+ QMan. This means these ports can be used to send and receive packets between two
+ applications as well.
+
+ An O/H port have two queues. One to receive and one to send the packets. It will
+ loopback all the packets on Tx queue which are received on Rx queue.
+
+
+ -------- Tx Packets ---------
+ | App | - - - - - - - - - > | O/H |
+ | | < - - - - - - - - - | Port |
+ -------- Rx Packets ---------
+
+
VSP (Virtual Storage Profile)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The storage profiled are means to provide virtualized interface. A ranges of
diff --git a/drivers/bus/dpaa/base/fman/fman.c b/drivers/bus/dpaa/base/fman/fman.c
index e2b7120237..f817305ab7 100644
--- a/drivers/bus/dpaa/base/fman/fman.c
+++ b/drivers/bus/dpaa/base/fman/fman.c
@@ -246,26 +246,34 @@ fman_if_init(const struct device_node *dpa_node)
uint64_t port_cell_idx_val = 0;
uint64_t ext_args_cell_idx_val = 0;
- const struct device_node *mac_node = NULL, *tx_node, *ext_args_node;
- const struct device_node *pool_node, *fman_node, *rx_node;
+ const struct device_node *mac_node = NULL, *ext_args_node;
+ const struct device_node *pool_node, *fman_node;
+ const struct device_node *rx_node = NULL, *tx_node = NULL;
+ const struct device_node *oh_node = NULL;
const uint32_t *regs_addr = NULL;
const char *mname, *fname;
const char *dname = dpa_node->full_name;
size_t lenp;
- int _errno, is_shared = 0;
+ int _errno, is_shared = 0, is_offline = 0;
const char *char_prop;
uint32_t na;
if (of_device_is_available(dpa_node) == false)
return 0;
- if (!of_device_is_compatible(dpa_node, "fsl,dpa-ethernet-init") &&
- !of_device_is_compatible(dpa_node, "fsl,dpa-ethernet")) {
+ if (of_device_is_compatible(dpa_node, "fsl,dpa-oh"))
+ is_offline = 1;
+
+ if (!of_device_is_compatible(dpa_node, "fsl,dpa-oh") &&
+ !of_device_is_compatible(dpa_node, "fsl,dpa-ethernet-init") &&
+ !of_device_is_compatible(dpa_node, "fsl,dpa-ethernet")) {
return 0;
}
- rprop = "fsl,qman-frame-queues-rx";
- mprop = "fsl,fman-mac";
+ rprop = is_offline ? "fsl,qman-frame-queues-oh" :
+ "fsl,qman-frame-queues-rx";
+ mprop = is_offline ? "fsl,fman-oh-port" :
+ "fsl,fman-mac";
/* Obtain the MAC node used by this interface except macless */
mac_phandle = of_get_property(dpa_node, mprop, &lenp);
@@ -281,27 +289,43 @@ fman_if_init(const struct device_node *dpa_node)
}
mname = mac_node->full_name;
- /* Extract the Rx and Tx ports */
- ports_phandle = of_get_property(mac_node, "fsl,port-handles",
- &lenp);
- if (!ports_phandle)
- ports_phandle = of_get_property(mac_node, "fsl,fman-ports",
+ if (!is_offline) {
+ /* Extract the Rx and Tx ports */
+ ports_phandle = of_get_property(mac_node, "fsl,port-handles",
&lenp);
- if (!ports_phandle) {
- FMAN_ERR(-EINVAL, "%s: no fsl,port-handles",
- mname);
- return -EINVAL;
- }
- assert(lenp == (2 * sizeof(phandle)));
- rx_node = of_find_node_by_phandle(ports_phandle[0]);
- if (!rx_node) {
- FMAN_ERR(-ENXIO, "%s: bad fsl,port-handle[0]", mname);
- return -ENXIO;
- }
- tx_node = of_find_node_by_phandle(ports_phandle[1]);
- if (!tx_node) {
- FMAN_ERR(-ENXIO, "%s: bad fsl,port-handle[1]", mname);
- return -ENXIO;
+ if (!ports_phandle)
+ ports_phandle = of_get_property(mac_node, "fsl,fman-ports",
+ &lenp);
+ if (!ports_phandle) {
+ FMAN_ERR(-EINVAL, "%s: no fsl,port-handles",
+ mname);
+ return -EINVAL;
+ }
+ assert(lenp == (2 * sizeof(phandle)));
+ rx_node = of_find_node_by_phandle(ports_phandle[0]);
+ if (!rx_node) {
+ FMAN_ERR(-ENXIO, "%s: bad fsl,port-handle[0]", mname);
+ return -ENXIO;
+ }
+ tx_node = of_find_node_by_phandle(ports_phandle[1]);
+ if (!tx_node) {
+ FMAN_ERR(-ENXIO, "%s: bad fsl,port-handle[1]", mname);
+ return -ENXIO;
+ }
+ } else {
+ /* Extract the OH ports */
+ ports_phandle = of_get_property(dpa_node, "fsl,fman-oh-port",
+ &lenp);
+ if (!ports_phandle) {
+ FMAN_ERR(-EINVAL, "%s: no fsl,fman-oh-port", dname);
+ return -EINVAL;
+ }
+ assert(lenp == (sizeof(phandle)));
+ oh_node = of_find_node_by_phandle(ports_phandle[0]);
+ if (!oh_node) {
+ FMAN_ERR(-ENXIO, "%s: bad fsl,port-handle[0]", mname);
+ return -ENXIO;
+ }
}
/* Check if the port is shared interface */
@@ -430,17 +454,19 @@ fman_if_init(const struct device_node *dpa_node)
* Set A2V, OVOM, EBD bits in contextA to allow external
* buffer deallocation by fman.
*/
- fman_dealloc_bufs_mask_hi = FMAN_V3_CONTEXTA_EN_A2V |
- FMAN_V3_CONTEXTA_EN_OVOM;
- fman_dealloc_bufs_mask_lo = FMAN_V3_CONTEXTA_EN_EBD;
+ fman_dealloc_bufs_mask_hi = DPAA_FQD_CTX_A_A2_FIELD_VALID |
+ DPAA_FQD_CTX_A_OVERRIDE_OMB;
+ fman_dealloc_bufs_mask_lo = DPAA_FQD_CTX_A2_EBD_BIT;
} else {
fman_dealloc_bufs_mask_hi = 0;
fman_dealloc_bufs_mask_lo = 0;
}
- /* Is the MAC node 1G, 2.5G, 10G? */
+ /* Is the MAC node 1G, 2.5G, 10G or offline? */
__if->__if.is_memac = 0;
- if (of_device_is_compatible(mac_node, "fsl,fman-1g-mac"))
+ if (is_offline)
+ __if->__if.mac_type = fman_offline;
+ else if (of_device_is_compatible(mac_node, "fsl,fman-1g-mac"))
__if->__if.mac_type = fman_mac_1g;
else if (of_device_is_compatible(mac_node, "fsl,fman-10g-mac"))
__if->__if.mac_type = fman_mac_10g;
@@ -468,46 +494,81 @@ fman_if_init(const struct device_node *dpa_node)
goto err;
}
- /*
- * For MAC ports, we cannot rely on cell-index. In
- * T2080, two of the 10G ports on single FMAN have same
- * duplicate cell-indexes as the other two 10G ports on
- * same FMAN. Hence, we now rely upon addresses of the
- * ports from device tree to deduce the index.
- */
+ if (!is_offline) {
+ /*
+ * For MAC ports, we cannot rely on cell-index. In
+ * T2080, two of the 10G ports on single FMAN have same
+ * duplicate cell-indexes as the other two 10G ports on
+ * same FMAN. Hence, we now rely upon addresses of the
+ * ports from device tree to deduce the index.
+ */
- _errno = fman_get_mac_index(regs_addr_host, &__if->__if.mac_idx);
- if (_errno) {
- FMAN_ERR(-EINVAL, "Invalid register address: %" PRIx64,
- regs_addr_host);
- goto err;
- }
+ _errno = fman_get_mac_index(regs_addr_host, &__if->__if.mac_idx);
+ if (_errno) {
+ FMAN_ERR(-EINVAL, "Invalid register address: %" PRIx64,
+ regs_addr_host);
+ goto err;
+ }
+ } else {
+ cell_idx = of_get_property(oh_node, "cell-index", &lenp);
+ if (!cell_idx) {
+ FMAN_ERR(-ENXIO, "%s: no cell-index)\n",
+ oh_node->full_name);
+ goto err;
+ }
+ assert(lenp == sizeof(*cell_idx));
+ cell_idx_host = of_read_number(cell_idx,
+ lenp / sizeof(phandle));
- /* Extract the MAC address for private and shared interfaces */
- mac_addr = of_get_property(mac_node, "local-mac-address",
- &lenp);
- if (!mac_addr) {
- FMAN_ERR(-EINVAL, "%s: no local-mac-address",
- mname);
- goto err;
+ __if->__if.mac_idx = cell_idx_host;
}
- memcpy(&__if->__if.mac_addr, mac_addr, ETHER_ADDR_LEN);
- /* Extract the channel ID (from tx-port-handle) */
- tx_channel_id = of_get_property(tx_node, "fsl,qman-channel-id",
- &lenp);
- if (!tx_channel_id) {
- FMAN_ERR(-EINVAL, "%s: no fsl-qman-channel-id",
- tx_node->full_name);
- goto err;
+ if (!is_offline) {
+ /* Extract the MAC address for private and shared interfaces */
+ mac_addr = of_get_property(mac_node, "local-mac-address",
+ &lenp);
+ if (!mac_addr) {
+ FMAN_ERR(-EINVAL, "%s: no local-mac-address",
+ mname);
+ goto err;
+ }
+ memcpy(&__if->__if.mac_addr, mac_addr, ETHER_ADDR_LEN);
+
+ /* Extract the channel ID (from tx-port-handle) */
+ tx_channel_id = of_get_property(tx_node, "fsl,qman-channel-id",
+ &lenp);
+ if (!tx_channel_id) {
+ FMAN_ERR(-EINVAL, "%s: no fsl-qman-channel-id",
+ tx_node->full_name);
+ goto err;
+ }
+ } else {
+ /* Extract the channel ID (from mac) */
+ tx_channel_id = of_get_property(mac_node, "fsl,qman-channel-id",
+ &lenp);
+ if (!tx_channel_id) {
+ FMAN_ERR(-EINVAL, "%s: no fsl-qman-channel-id",
+ tx_node->full_name);
+ goto err;
+ }
}
- regs_addr = of_get_address(rx_node, 0, &__if->regs_size, NULL);
+ na = of_n_addr_cells(mac_node);
+ __if->__if.tx_channel_id = of_read_number(tx_channel_id, na);
+
+ if (!is_offline)
+ regs_addr = of_get_address(rx_node, 0, &__if->regs_size, NULL);
+ else
+ regs_addr = of_get_address(oh_node, 0, &__if->regs_size, NULL);
if (!regs_addr) {
FMAN_ERR(-EINVAL, "of_get_address(%s)", mname);
goto err;
}
- phys_addr = of_translate_address(rx_node, regs_addr);
+
+ if (!is_offline)
+ phys_addr = of_translate_address(rx_node, regs_addr);
+ else
+ phys_addr = of_translate_address(oh_node, regs_addr);
if (!phys_addr) {
FMAN_ERR(-EINVAL, "of_translate_address(%s, %p)",
mname, regs_addr);
@@ -521,23 +582,27 @@ fman_if_init(const struct device_node *dpa_node)
goto err;
}
- regs_addr = of_get_address(tx_node, 0, &__if->regs_size, NULL);
- if (!regs_addr) {
- FMAN_ERR(-EINVAL, "of_get_address(%s)\n", mname);
- goto err;
- }
- phys_addr = of_translate_address(tx_node, regs_addr);
- if (!phys_addr) {
- FMAN_ERR(-EINVAL, "of_translate_address(%s, %p)\n",
- mname, regs_addr);
- goto err;
- }
- __if->tx_bmi_map = mmap(NULL, __if->regs_size,
- PROT_READ | PROT_WRITE, MAP_SHARED,
- fman_ccsr_map_fd, phys_addr);
- if (__if->tx_bmi_map == MAP_FAILED) {
- FMAN_ERR(-errno, "mmap(0x%"PRIx64")\n", phys_addr);
- goto err;
+ if (!is_offline) {
+ regs_addr = of_get_address(tx_node, 0, &__if->regs_size, NULL);
+ if (!regs_addr) {
+ FMAN_ERR(-EINVAL, "of_get_address(%s)\n", mname);
+ goto err;
+ }
+
+ phys_addr = of_translate_address(tx_node, regs_addr);
+ if (!phys_addr) {
+ FMAN_ERR(-EINVAL, "of_translate_address(%s, %p)\n",
+ mname, regs_addr);
+ goto err;
+ }
+
+ __if->tx_bmi_map = mmap(NULL, __if->regs_size,
+ PROT_READ | PROT_WRITE, MAP_SHARED,
+ fman_ccsr_map_fd, phys_addr);
+ if (__if->tx_bmi_map == MAP_FAILED) {
+ FMAN_ERR(-errno, "mmap(0x%"PRIx64")\n", phys_addr);
+ goto err;
+ }
}
if (!rtc_map) {
@@ -554,11 +619,6 @@ fman_if_init(const struct device_node *dpa_node)
__if->rtc_map = rtc_map;
}
- /* No channel ID for MAC-less */
- assert(lenp == sizeof(*tx_channel_id));
- na = of_n_addr_cells(mac_node);
- __if->__if.tx_channel_id = of_read_number(tx_channel_id, na);
-
/* Extract the Rx FQIDs. (Note, the device representation is silly,
* there are "counts" that must always be 1.)
*/
@@ -568,13 +628,26 @@ fman_if_init(const struct device_node *dpa_node)
goto err;
}
- /* Check if "fsl,qman-frame-queues-rx" in dtb file is valid entry or
- * not. A valid entry contains at least 4 entries, rx_error_queue,
- * rx_error_queue_count, fqid_rx_def and rx_error_queue_count.
+ /*
+ * Check if "fsl,qman-frame-queues-rx/oh" in dtb file is valid entry or
+ * not.
+ *
+ * A valid rx entry contains either 4 or 6 entries. Mandatory entries
+ * are rx_error_queue, rx_error_queue_count, fqid_rx_def and
+ * fqid_rx_def_count. Optional entries are fqid_rx_pcd and
+ * fqid_rx_pcd_count.
+ *
+ * A valid oh entry contains 4 entries. Those entries are
+ * rx_error_queue, rx_error_queue_count, fqid_rx_def and
+ * fqid_rx_def_count.
*/
- assert(lenp >= (4 * sizeof(phandle)));
- na = of_n_addr_cells(mac_node);
+ if (!is_offline)
+ assert(lenp == (4 * sizeof(phandle)) ||
+ lenp == (6 * sizeof(phandle)));
+ else
+ assert(lenp == (4 * sizeof(phandle)));
+
/* Get rid of endianness (issues). Convert to host byte order */
rx_phandle_host[0] = of_read_number(&rx_phandle[0], na);
rx_phandle_host[1] = of_read_number(&rx_phandle[1], na);
@@ -595,6 +668,9 @@ fman_if_init(const struct device_node *dpa_node)
__if->__if.fqid_rx_pcd_count = rx_phandle_host[5];
}
+ if (is_offline)
+ goto oh_init_done;
+
/* Extract the Tx FQIDs */
tx_phandle = of_get_property(dpa_node,
"fsl,qman-frame-queues-tx", &lenp);
@@ -706,6 +782,7 @@ fman_if_init(const struct device_node *dpa_node)
if (is_shared)
__if->__if.is_shared_mac = 1;
+oh_init_done:
fman_if_vsp_init(__if);
/* Parsing of the network interface is complete, add it to the list */
@@ -769,6 +846,10 @@ fman_finish(void)
list_for_each_entry_safe(__if, tmpif, &__ifs, __if.node) {
int _errno;
+ /* No need to disable Offline port */
+ if (__if->__if.mac_type == fman_offline)
+ continue;
+
/* disable Rx and Tx */
if ((__if->__if.mac_type == fman_mac_1g) &&
(!__if->__if.is_memac))
diff --git a/drivers/bus/dpaa/base/fman/fman_hw.c b/drivers/bus/dpaa/base/fman/fman_hw.c
index 4fc41c1ae9..1f61ae406b 100644
--- a/drivers/bus/dpaa/base/fman/fman_hw.c
+++ b/drivers/bus/dpaa/base/fman/fman_hw.c
@@ -1,6 +1,6 @@
/* SPDX-License-Identifier: BSD-3-Clause
*
- * Copyright 2017,2020,2022 NXP
+ * Copyright 2017,2020,2022-2023 NXP
*
*/
@@ -88,6 +88,10 @@ fman_if_add_hash_mac_addr(struct fman_if *p, uint8_t *eth)
struct __fman_if *__if = container_of(p, struct __fman_if, __if);
+ /* Add hash mac addr not supported on Offline port */
+ if (__if->__if.mac_type == fman_offline)
+ return 0;
+
eth_addr = ETH_ADDR_TO_UINT64(eth);
if (!(eth_addr & GROUP_ADDRESS))
@@ -109,6 +113,15 @@ fman_if_get_primary_mac_addr(struct fman_if *p, uint8_t *eth)
void *mac_reg =
&((struct memac_regs *)__if->ccsr_map)->mac_addr0.mac_addr_l;
u32 val = in_be32(mac_reg);
+ int i;
+
+ /* Get mac addr not supported on Offline port */
+ /* Return NULL mac address */
+ if (__if->__if.mac_type == fman_offline) {
+ for (i = 0; i < 6; i++)
+ eth[i] = 0x0;
+ return 0;
+ }
eth[0] = (val & 0x000000ff) >> 0;
eth[1] = (val & 0x0000ff00) >> 8;
@@ -130,6 +143,10 @@ fman_if_clear_mac_addr(struct fman_if *p, uint8_t addr_num)
struct __fman_if *m = container_of(p, struct __fman_if, __if);
void *reg;
+ /* Clear mac addr not supported on Offline port */
+ if (m->__if.mac_type == fman_offline)
+ return;
+
if (addr_num) {
reg = &((struct memac_regs *)m->ccsr_map)->
mac_addr[addr_num-1].mac_addr_l;
@@ -149,10 +166,13 @@ int
fman_if_add_mac_addr(struct fman_if *p, uint8_t *eth, uint8_t addr_num)
{
struct __fman_if *m = container_of(p, struct __fman_if, __if);
-
void *reg;
u32 val;
+ /* Set mac addr not supported on Offline port */
+ if (m->__if.mac_type == fman_offline)
+ return 0;
+
memcpy(&m->__if.mac_addr, eth, ETHER_ADDR_LEN);
if (addr_num)
diff --git a/drivers/bus/dpaa/base/fman/netcfg_layer.c b/drivers/bus/dpaa/base/fman/netcfg_layer.c
index 57d87afcb0..e6a6ed1eb6 100644
--- a/drivers/bus/dpaa/base/fman/netcfg_layer.c
+++ b/drivers/bus/dpaa/base/fman/netcfg_layer.c
@@ -1,7 +1,7 @@
/* SPDX-License-Identifier: (BSD-3-Clause OR GPL-2.0)
*
* Copyright 2010-2016 Freescale Semiconductor Inc.
- * Copyright 2017-2019 NXP
+ * Copyright 2017-2019,2023 NXP
*
*/
#include <inttypes.h>
@@ -44,6 +44,7 @@ dump_netcfg(struct netcfg_info *cfg_ptr, FILE *f)
fprintf(f, "\n+ Fman %d, MAC %d (%s);\n",
__if->fman_idx, __if->mac_idx,
+ (__if->mac_type == fman_offline) ? "OFFLINE" :
(__if->mac_type == fman_mac_1g) ? "1G" :
(__if->mac_type == fman_mac_2_5g) ? "2.5G" : "10G");
@@ -56,13 +57,15 @@ dump_netcfg(struct netcfg_info *cfg_ptr, FILE *f)
fprintf(f, "\tfqid_rx_def: 0x%x\n", p_cfg->rx_def);
fprintf(f, "\tfqid_rx_err: 0x%x\n", __if->fqid_rx_err);
- fprintf(f, "\tfqid_tx_err: 0x%x\n", __if->fqid_tx_err);
- fprintf(f, "\tfqid_tx_confirm: 0x%x\n", __if->fqid_tx_confirm);
- fman_if_for_each_bpool(bpool, __if)
- fprintf(f, "\tbuffer pool: (bpid=%d, count=%"PRId64
- " size=%"PRId64", addr=0x%"PRIx64")\n",
- bpool->bpid, bpool->count, bpool->size,
- bpool->addr);
+ if (__if->mac_type != fman_offline) {
+ fprintf(f, "\tfqid_tx_err: 0x%x\n", __if->fqid_tx_err);
+ fprintf(f, "\tfqid_tx_confirm: 0x%x\n", __if->fqid_tx_confirm);
+ fman_if_for_each_bpool(bpool, __if)
+ fprintf(f, "\tbuffer pool: (bpid=%d, count=%"PRId64
+ " size=%"PRId64", addr=0x%"PRIx64")\n",
+ bpool->bpid, bpool->count, bpool->size,
+ bpool->addr);
+ }
}
}
#endif /* RTE_LIBRTE_DPAA_DEBUG_DRIVER */
diff --git a/drivers/bus/dpaa/dpaa_bus.c b/drivers/bus/dpaa/dpaa_bus.c
index 1f6997c77e..6e4ec90670 100644
--- a/drivers/bus/dpaa/dpaa_bus.c
+++ b/drivers/bus/dpaa/dpaa_bus.c
@@ -43,6 +43,7 @@
#include <fsl_qman.h>
#include <fsl_bman.h>
#include <netcfg.h>
+#include <fman.h>
struct rte_dpaa_bus {
struct rte_bus bus;
@@ -203,9 +204,12 @@ dpaa_create_device_list(void)
/* Create device name */
memset(dev->name, 0, RTE_ETH_NAME_MAX_LEN);
- sprintf(dev->name, "fm%d-mac%d", (fman_intf->fman_idx + 1),
- fman_intf->mac_idx);
- DPAA_BUS_LOG(INFO, "%s netdev added", dev->name);
+ if (fman_intf->mac_type == fman_offline)
+ sprintf(dev->name, "fm%d-oh%d",
+ (fman_intf->fman_idx + 1), fman_intf->mac_idx);
+ else
+ sprintf(dev->name, "fm%d-mac%d",
+ (fman_intf->fman_idx + 1), fman_intf->mac_idx);
dev->device.name = dev->name;
dev->device.devargs = dpaa_devargs_lookup(dev);
@@ -441,7 +445,7 @@ static int
rte_dpaa_bus_parse(const char *name, void *out)
{
unsigned int i, j;
- size_t delta;
+ size_t delta, dev_delta;
size_t max_name_len;
/* There are two ways of passing device name, with and without
@@ -458,16 +462,25 @@ rte_dpaa_bus_parse(const char *name, void *out)
delta = 5;
}
+ /* dev_delta points to the dev name (mac/oh/onic). Not valid for
+ * dpaa_sec.
+ */
+ dev_delta = delta + sizeof("fm.-") - 1;
+
if (strncmp("dpaa_sec", &name[delta], 8) == 0) {
if (sscanf(&name[delta], "dpaa_sec-%u", &i) != 1 ||
i < 1 || i > 4)
return -EINVAL;
max_name_len = sizeof("dpaa_sec-.") - 1;
+ } else if (strncmp("oh", &name[dev_delta], 2) == 0) {
+ if (sscanf(&name[delta], "fm%u-oh%u", &i, &j) != 2 ||
+ i >= 2 || j >= 16)
+ return -EINVAL;
+ max_name_len = sizeof("fm.-oh..") - 1;
} else {
if (sscanf(&name[delta], "fm%u-mac%u", &i, &j) != 2 ||
i >= 2 || j >= 16)
return -EINVAL;
-
max_name_len = sizeof("fm.-mac..") - 1;
}
diff --git a/drivers/bus/dpaa/include/fman.h b/drivers/bus/dpaa/include/fman.h
index e8bc913943..377f73bf0d 100644
--- a/drivers/bus/dpaa/include/fman.h
+++ b/drivers/bus/dpaa/include/fman.h
@@ -2,7 +2,7 @@
*
* Copyright 2010-2012 Freescale Semiconductor, Inc.
* All rights reserved.
- * Copyright 2019-2022 NXP
+ * Copyright 2019-2023 NXP
*
*/
@@ -474,11 +474,30 @@ extern int fman_ccsr_map_fd;
#define FMAN_IP_REV_1_MAJOR_MASK 0x0000FF00
#define FMAN_IP_REV_1_MAJOR_SHIFT 8
#define FMAN_V3 0x06
-#define FMAN_V3_CONTEXTA_EN_A2V 0x10000000
-#define FMAN_V3_CONTEXTA_EN_OVOM 0x02000000
-#define FMAN_V3_CONTEXTA_EN_EBD 0x80000000
-#define FMAN_CONTEXTA_DIS_CHECKSUM 0x7ull
-#define FMAN_CONTEXTA_SET_OPCODE11 0x2000000b00000000
+
+#define DPAA_FQD_CTX_A_SHIFT_BITS 24
+#define DPAA_FQD_CTX_B_SHIFT_BITS 24
+
+/* Following flags are used to set in context A hi field of FQD */
+#define DPAA_FQD_CTX_A_OVERRIDE_FQ (0x80 << DPAA_FQD_CTX_A_SHIFT_BITS)
+#define DPAA_FQD_CTX_A_IGNORE_CMD (0x40 << DPAA_FQD_CTX_A_SHIFT_BITS)
+#define DPAA_FQD_CTX_A_A1_FIELD_VALID (0x20 << DPAA_FQD_CTX_A_SHIFT_BITS)
+#define DPAA_FQD_CTX_A_A2_FIELD_VALID (0x10 << DPAA_FQD_CTX_A_SHIFT_BITS)
+#define DPAA_FQD_CTX_A_A0_FIELD_VALID (0x08 << DPAA_FQD_CTX_A_SHIFT_BITS)
+#define DPAA_FQD_CTX_A_B0_FIELD_VALID (0x04 << DPAA_FQD_CTX_A_SHIFT_BITS)
+#define DPAA_FQD_CTX_A_OVERRIDE_OMB (0x02 << DPAA_FQD_CTX_A_SHIFT_BITS)
+#define DPAA_FQD_CTX_A_RESERVED (0x01 << DPAA_FQD_CTX_A_SHIFT_BITS)
+
+/* Following flags are used to set in context A lo field of FQD */
+#define DPAA_FQD_CTX_A2_EBD_BIT (0x80 << DPAA_FQD_CTX_A_SHIFT_BITS)
+#define DPAA_FQD_CTX_A2_EBAD_BIT (0x40 << DPAA_FQD_CTX_A_SHIFT_BITS)
+#define DPAA_FQD_CTX_A2_FWD_BIT (0x20 << DPAA_FQD_CTX_A_SHIFT_BITS)
+#define DPAA_FQD_CTX_A2_NL_BIT (0x10 << DPAA_FQD_CTX_A_SHIFT_BITS)
+#define DPAA_FQD_CTX_A2_CWD_BIT (0x08 << DPAA_FQD_CTX_A_SHIFT_BITS)
+#define DPAA_FQD_CTX_A2_NENQ_BIT (0x04 << DPAA_FQD_CTX_A_SHIFT_BITS)
+#define DPAA_FQD_CTX_A2_RESERVED_BIT (0x02 << DPAA_FQD_CTX_A_SHIFT_BITS)
+#define DPAA_FQD_CTX_A2_VSPE_BIT (0x01 << DPAA_FQD_CTX_A_SHIFT_BITS)
+
extern u16 fman_ip_rev;
extern u32 fman_dealloc_bufs_mask_hi;
extern u32 fman_dealloc_bufs_mask_lo;
diff --git a/drivers/net/dpaa/dpaa_ethdev.c b/drivers/net/dpaa/dpaa_ethdev.c
index 4ead890278..f8196ddd14 100644
--- a/drivers/net/dpaa/dpaa_ethdev.c
+++ b/drivers/net/dpaa/dpaa_ethdev.c
@@ -295,7 +295,7 @@ dpaa_eth_dev_configure(struct rte_eth_dev *dev)
max_rx_pktlen = DPAA_MAX_RX_PKT_LEN;
}
- if (!fif->is_shared_mac)
+ if (fif->mac_type != fman_offline)
fman_if_set_maxfrm(dev->process_private, max_rx_pktlen);
if (rx_offloads & RTE_ETH_RX_OFFLOAD_SCATTER) {
@@ -314,6 +314,10 @@ dpaa_eth_dev_configure(struct rte_eth_dev *dev)
dpaa_write_fm_config_to_file();
}
+ /* Disable interrupt support on offline port*/
+ if (fif->mac_type == fman_offline)
+ return 0;
+
/* if the interrupts were configured on this devices*/
if (intr_handle && rte_intr_fd_get(intr_handle)) {
if (dev->data->dev_conf.intr_conf.lsc != 0)
@@ -531,6 +535,9 @@ static int dpaa_eth_dev_close(struct rte_eth_dev *dev)
ret = dpaa_eth_dev_stop(dev);
+ if (fif->mac_type == fman_offline)
+ return 0;
+
/* Reset link to autoneg */
if (link->link_status && !link->link_autoneg)
dpaa_restart_link_autoneg(__fif->node_name);
@@ -644,6 +651,11 @@ static int dpaa_eth_dev_info(struct rte_eth_dev *dev,
| RTE_ETH_LINK_SPEED_1G
| RTE_ETH_LINK_SPEED_2_5G
| RTE_ETH_LINK_SPEED_10G;
+ } else if (fif->mac_type == fman_offline) {
+ dev_info->speed_capa = RTE_ETH_LINK_SPEED_10M_HD
+ | RTE_ETH_LINK_SPEED_10M
+ | RTE_ETH_LINK_SPEED_100M_HD
+ | RTE_ETH_LINK_SPEED_100M;
} else {
DPAA_PMD_ERR("invalid link_speed: %s, %d",
dpaa_intf->name, fif->mac_type);
@@ -744,7 +756,8 @@ static int dpaa_eth_link_update(struct rte_eth_dev *dev,
ioctl_version = dpaa_get_ioctl_version_number();
- if (dev->data->dev_flags & RTE_ETH_DEV_INTR_LSC) {
+ if (dev->data->dev_flags & RTE_ETH_DEV_INTR_LSC &&
+ fif->mac_type != fman_offline) {
for (count = 0; count <= MAX_REPEAT_TIME; count++) {
ret = dpaa_get_link_status(__fif->node_name, link);
if (ret)
@@ -757,6 +770,11 @@ static int dpaa_eth_link_update(struct rte_eth_dev *dev,
}
} else {
link->link_status = dpaa_intf->valid;
+ if (fif->mac_type == fman_offline) {
+ /*Max supported rate for O/H port is 3.75Mpps*/
+ link->link_speed = RTE_ETH_SPEED_NUM_2_5G;
+ link->link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
+ }
}
if (ioctl_version < 2) {
@@ -1077,7 +1095,7 @@ int dpaa_eth_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
dpaa_intf->bp_info = DPAA_MEMPOOL_TO_POOL_INFO(mp);
/* For shared interface, it's done in kernel, skip.*/
- if (!fif->is_shared_mac)
+ if (!fif->is_shared_mac && fif->mac_type != fman_offline)
dpaa_fman_if_pool_setup(dev);
if (fif->num_profiles) {
@@ -1222,8 +1240,11 @@ int dpaa_eth_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
rxq->fqid, ret);
}
}
+
/* Enable main queue to receive error packets also by default */
- fman_if_set_err_fqid(fif, rxq->fqid);
+ if (fif->mac_type != fman_offline)
+ fman_if_set_err_fqid(fif, rxq->fqid);
+
return 0;
}
@@ -1372,7 +1393,8 @@ static int dpaa_link_down(struct rte_eth_dev *dev)
__fif = container_of(fif, struct __fman_if, __if);
- if (dev->data->dev_flags & RTE_ETH_DEV_INTR_LSC)
+ if (dev->data->dev_flags & RTE_ETH_DEV_INTR_LSC &&
+ fif->mac_type != fman_offline)
dpaa_update_link_status(__fif->node_name, RTE_ETH_LINK_DOWN);
else
return dpaa_eth_dev_stop(dev);
@@ -1388,7 +1410,8 @@ static int dpaa_link_up(struct rte_eth_dev *dev)
__fif = container_of(fif, struct __fman_if, __if);
- if (dev->data->dev_flags & RTE_ETH_DEV_INTR_LSC)
+ if (dev->data->dev_flags & RTE_ETH_DEV_INTR_LSC &&
+ fif->mac_type != fman_offline)
dpaa_update_link_status(__fif->node_name, RTE_ETH_LINK_UP);
else
dpaa_eth_dev_start(dev);
@@ -1483,9 +1506,15 @@ dpaa_dev_add_mac_addr(struct rte_eth_dev *dev,
__rte_unused uint32_t pool)
{
int ret;
+ struct fman_if *fif = dev->process_private;
PMD_INIT_FUNC_TRACE();
+ if (fif->mac_type == fman_offline) {
+ DPAA_PMD_DEBUG("Add MAC Address not supported on O/H port");
+ return 0;
+ }
+
ret = fman_if_add_mac_addr(dev->process_private,
addr->addr_bytes, index);
@@ -1498,8 +1527,15 @@ static void
dpaa_dev_remove_mac_addr(struct rte_eth_dev *dev,
uint32_t index)
{
+ struct fman_if *fif = dev->process_private;
+
PMD_INIT_FUNC_TRACE();
+ if (fif->mac_type == fman_offline) {
+ DPAA_PMD_DEBUG("Remove MAC Address not supported on O/H port");
+ return;
+ }
+
fman_if_clear_mac_addr(dev->process_private, index);
}
@@ -1508,9 +1544,15 @@ dpaa_dev_set_mac_addr(struct rte_eth_dev *dev,
struct rte_ether_addr *addr)
{
int ret;
+ struct fman_if *fif = dev->process_private;
PMD_INIT_FUNC_TRACE();
+ if (fif->mac_type == fman_offline) {
+ DPAA_PMD_DEBUG("Set MAC Address not supported on O/H port");
+ return 0;
+ }
+
ret = fman_if_add_mac_addr(dev->process_private, addr->addr_bytes, 0);
if (ret)
DPAA_PMD_ERR("Setting the MAC ADDR failed %d", ret);
@@ -1807,6 +1849,17 @@ static int dpaa_rx_queue_init(struct qman_fq *fq, struct qman_cgr *cgr_rx,
return ret;
}
+uint8_t fm_default_vsp_id(struct fman_if *fif)
+{
+ /* Avoid being same as base profile which could be used
+ * for kernel interface of shared mac.
+ */
+ if (fif->base_profile_id)
+ return 0;
+ else
+ return DPAA_DEFAULT_RXQ_VSP_ID;
+}
+
/* Initialise a Tx FQ */
static int dpaa_tx_queue_init(struct qman_fq *fq,
struct fman_if *fman_intf,
@@ -1842,13 +1895,20 @@ static int dpaa_tx_queue_init(struct qman_fq *fq,
} else {
/* no tx-confirmation */
opts.fqd.context_a.lo = fman_dealloc_bufs_mask_lo;
- opts.fqd.context_a.hi = 0x80000000 | fman_dealloc_bufs_mask_hi;
+ opts.fqd.context_a.hi = DPAA_FQD_CTX_A_OVERRIDE_FQ |
+ fman_dealloc_bufs_mask_hi;
}
- if (fman_ip_rev >= FMAN_V3) {
+ if (fman_ip_rev >= FMAN_V3)
/* Set B0V bit in contextA to set ASPID to 0 */
- opts.fqd.context_a.hi |= 0x04000000;
+ opts.fqd.context_a.hi |= DPAA_FQD_CTX_A_B0_FIELD_VALID;
+
+ if (fman_intf->mac_type == fman_offline) {
+ opts.fqd.context_a.lo |= DPAA_FQD_CTX_A2_VSPE_BIT;
+ opts.fqd.context_b = fm_default_vsp_id(fman_intf) <<
+ DPAA_FQD_CTX_B_SHIFT_BITS;
}
+
DPAA_PMD_DEBUG("init tx fq %p, fqid 0x%x", fq, fq->fqid);
if (cgr_tx) {
@@ -2263,7 +2323,8 @@ dpaa_dev_init(struct rte_eth_dev *eth_dev)
DPAA_PMD_DEBUG("All frame queues created");
/* Get the initial configuration for flow control */
- dpaa_fc_set_default(dpaa_intf, fman_intf);
+ if (fman_intf->mac_type != fman_offline)
+ dpaa_fc_set_default(dpaa_intf, fman_intf);
/* reset bpool list, initialize bpool dynamically */
list_for_each_entry_safe(bp, tmp_bp, &cfg->fman_if->bpool_list, node) {
@@ -2294,10 +2355,10 @@ dpaa_dev_init(struct rte_eth_dev *eth_dev)
DPAA_PMD_INFO("net: dpaa: %s: " RTE_ETHER_ADDR_PRT_FMT,
dpaa_device->name, RTE_ETHER_ADDR_BYTES(&fman_intf->mac_addr));
- if (!fman_intf->is_shared_mac) {
+ if (!fman_intf->is_shared_mac && fman_intf->mac_type != fman_offline) {
/* Configure error packet handling */
fman_if_receive_rx_errors(fman_intf,
- FM_FD_RX_STATUS_ERR_MASK);
+ FM_FD_RX_STATUS_ERR_MASK);
/* Disable RX mode */
fman_if_disable_rx(fman_intf);
/* Disable promiscuous mode */
diff --git a/drivers/net/dpaa/dpaa_ethdev.h b/drivers/net/dpaa/dpaa_ethdev.h
index 7884cc034c..8ec5155cfc 100644
--- a/drivers/net/dpaa/dpaa_ethdev.h
+++ b/drivers/net/dpaa/dpaa_ethdev.h
@@ -121,6 +121,9 @@ enum {
extern struct rte_mempool *dpaa_tx_sg_pool;
extern int dpaa_ieee_1588;
+/* PMD related logs */
+extern int dpaa_logtype_pmd;
+
/* structure to free external and indirect
* buffers.
*/
@@ -266,6 +269,9 @@ dpaa_timesync_read_rx_timestamp(struct rte_eth_dev *dev,
struct timespec *timestamp,
uint32_t flags __rte_unused);
+uint8_t
+fm_default_vsp_id(struct fman_if *fif);
+
/* PMD related logs */
extern int dpaa_logtype_pmd;
#define RTE_LOGTYPE_DPAA_PMD dpaa_logtype_pmd
diff --git a/drivers/net/dpaa/dpaa_flow.c b/drivers/net/dpaa/dpaa_flow.c
index 082bd5d014..97879b8e4c 100644
--- a/drivers/net/dpaa/dpaa_flow.c
+++ b/drivers/net/dpaa/dpaa_flow.c
@@ -1,5 +1,5 @@
/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright 2017-2019,2021 NXP
+ * Copyright 2017-2019,2021-2024 NXP
*/
/* System headers */
@@ -29,6 +29,11 @@ return &scheme_params->param.key_ext_and_hash.extract_array[hdr_idx];
#define SCH_EXT_FULL_FLD(scheme_params, hdr_idx) \
SCH_EXT_HDR(scheme_params, hdr_idx).extract_by_hdr_type.full_field
+/* FMAN mac indexes mappings (0 is unused, first 8 are for 1G, next for 10G
+ * ports).
+ */
+const uint8_t mac_idx[] = {-1, 0, 1, 2, 3, 4, 5, 6, 7, 0, 1};
+
/* FM global info */
struct dpaa_fm_info {
t_handle fman_handle;
@@ -48,17 +53,6 @@ static struct dpaa_fm_info fm_info;
static struct dpaa_fm_model fm_model;
static const char *fm_log = "/tmp/fmdpdk.bin";
-static inline uint8_t fm_default_vsp_id(struct fman_if *fif)
-{
- /* Avoid being same as base profile which could be used
- * for kernel interface of shared mac.
- */
- if (fif->base_profile_id)
- return 0;
- else
- return DPAA_DEFAULT_RXQ_VSP_ID;
-}
-
static void fm_prev_cleanup(void)
{
uint32_t fman_id = 0, i = 0, devid;
@@ -649,12 +643,15 @@ static inline int set_pcd_netenv_scheme(struct dpaa_if *dpaa_intf,
}
-static inline int get_port_type(struct fman_if *fif)
+static inline int get_rx_port_type(struct fman_if *fif)
{
+
+ if (fif->mac_type == fman_offline_internal)
+ return e_FM_PORT_TYPE_OH_OFFLINE_PARSING;
/* For 1G fm-mac9 and fm-mac10 ports, configure the VSP as 10G
* ports so that kernel can configure correct port.
*/
- if (fif->mac_type == fman_mac_1g &&
+ else if (fif->mac_type == fman_mac_1g &&
fif->mac_idx >= DPAA_10G_MAC_START_IDX)
return e_FM_PORT_TYPE_RX_10G;
else if (fif->mac_type == fman_mac_1g)
@@ -665,7 +662,22 @@ static inline int get_port_type(struct fman_if *fif)
return e_FM_PORT_TYPE_RX_10G;
DPAA_PMD_ERR("MAC type unsupported");
- return -1;
+ return e_FM_PORT_TYPE_DUMMY;
+}
+
+static inline int get_tx_port_type(struct fman_if *fif)
+{
+ if (fif->mac_type == fman_offline_internal)
+ return e_FM_PORT_TYPE_OH_OFFLINE_PARSING;
+ else if (fif->mac_type == fman_mac_1g)
+ return e_FM_PORT_TYPE_TX;
+ else if (fif->mac_type == fman_mac_2_5g)
+ return e_FM_PORT_TYPE_TX_2_5G;
+ else if (fif->mac_type == fman_mac_10g)
+ return e_FM_PORT_TYPE_TX_10G;
+
+ DPAA_PMD_ERR("MAC type unsupported");
+ return e_FM_PORT_TYPE_DUMMY;
}
static inline int set_fm_port_handle(struct dpaa_if *dpaa_intf,
@@ -676,17 +688,12 @@ static inline int set_fm_port_handle(struct dpaa_if *dpaa_intf,
ioc_fm_pcd_net_env_params_t dist_units;
PMD_INIT_FUNC_TRACE();
- /* FMAN mac indexes mappings (0 is unused,
- * first 8 are for 1G, next for 10G ports
- */
- uint8_t mac_idx[] = {-1, 0, 1, 2, 3, 4, 5, 6, 7, 0, 1};
-
/* Memset FM port params */
memset(&fm_port_params, 0, sizeof(fm_port_params));
/* Set FM port params */
fm_port_params.h_fm = fm_info.fman_handle;
- fm_port_params.port_type = get_port_type(fif);
+ fm_port_params.port_type = get_rx_port_type(fif);
fm_port_params.port_id = mac_idx[fif->mac_idx];
/* FM PORT Open */
@@ -949,7 +956,6 @@ static int dpaa_port_vsp_configure(struct dpaa_if *dpaa_intf,
{
t_fm_vsp_params vsp_params;
t_fm_buffer_prefix_content buf_prefix_cont;
- uint8_t mac_idx[] = {-1, 0, 1, 2, 3, 4, 5, 6, 7, 0, 1};
uint8_t idx = mac_idx[fif->mac_idx];
int ret;
@@ -970,17 +976,31 @@ static int dpaa_port_vsp_configure(struct dpaa_if *dpaa_intf,
memset(&vsp_params, 0, sizeof(vsp_params));
vsp_params.h_fm = fman_handle;
vsp_params.relative_profile_id = vsp_id;
- vsp_params.port_params.port_id = idx;
+ if (fif->mac_type == fman_offline_internal)
+ vsp_params.port_params.port_id = fif->mac_idx;
+ else
+ vsp_params.port_params.port_id = idx;
+
if (fif->mac_type == fman_mac_1g) {
vsp_params.port_params.port_type = e_FM_PORT_TYPE_RX;
} else if (fif->mac_type == fman_mac_2_5g) {
vsp_params.port_params.port_type = e_FM_PORT_TYPE_RX_2_5G;
} else if (fif->mac_type == fman_mac_10g) {
vsp_params.port_params.port_type = e_FM_PORT_TYPE_RX_10G;
+ } else if (fif->mac_type == fman_offline) {
+ vsp_params.port_params.port_type =
+ e_FM_PORT_TYPE_OH_OFFLINE_PARSING;
} else {
DPAA_PMD_ERR("Mac type %d error", fif->mac_type);
return -1;
}
+
+ vsp_params.port_params.port_type = get_rx_port_type(fif);
+ if (vsp_params.port_params.port_type == e_FM_PORT_TYPE_DUMMY) {
+ DPAA_PMD_ERR("Mac type %d error", fif->mac_type);
+ return -1;
+ }
+
vsp_params.ext_buf_pools.num_of_pools_used = 1;
vsp_params.ext_buf_pools.ext_buf_pool[0].id = dpaa_intf->vsp_bpid[vsp_id];
vsp_params.ext_buf_pools.ext_buf_pool[0].size = mbuf_data_room_size;
--
2.25.1
^ permalink raw reply [flat|nested] 129+ messages in thread
* [PATCH v3 15/18] bus/dpaa: add ONIC port mode for the DPAA eth
2024-09-30 10:29 ` [PATCH v3 " Hemant Agrawal
` (13 preceding siblings ...)
2024-09-30 10:29 ` [PATCH v3 14/18] bus/dpaa: add OH port mode for dpaa eth Hemant Agrawal
@ 2024-09-30 10:29 ` Hemant Agrawal
2024-09-30 10:29 ` [PATCH v3 16/18] net/dpaa: improve the dpaa port cleanup Hemant Agrawal
` (2 subsequent siblings)
17 siblings, 0 replies; 129+ messages in thread
From: Hemant Agrawal @ 2024-09-30 10:29 UTC (permalink / raw)
To: dev; +Cc: ferruh.yigit, Rohit Raj
From: Rohit Raj <rohit.raj@nxp.com>
The OH ports can also be used by two applications or processing contexts
to communicate with each other.
This patch enables this mode for the dpaa-eth OH port as an ONIC port,
so that applications can use dpaa-eth to communicate with each
other on the same SoC.
Again, this property is driven by the system device-tree variables.
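As an informal illustration (not part of this patch), an application could
look up such an ONIC port by the name the DPAA bus generates
("fm<fman>-onic<idx>") and exchange traffic on it like any other port. The
sketch below is only an assumption-laden example; the port name "fm1-onic1",
queue 0 and burst size are placeholders:

    #include <rte_ethdev.h>
    #include <rte_mbuf.h>

    /* Hypothetical sketch: locate an ONIC port by its bus-generated name
     * and forward whatever the peer application sent through the paired
     * O/H port.  In real code, unsent mbufs would also need freeing.
     */
    static int onic_forward(void)
    {
        uint16_t port_id, nb;
        struct rte_mbuf *pkts[32];

        if (rte_eth_dev_get_port_by_name("fm1-onic1", &port_id) != 0)
            return -1;

        nb = rte_eth_rx_burst(port_id, 0, pkts, 32);
        return rte_eth_tx_burst(port_id, 0, pkts, nb);
    }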
Signed-off-by: Rohit Raj <rohit.raj@nxp.com>
---
doc/guides/nics/dpaa.rst | 33 ++-
drivers/bus/dpaa/base/fman/fman.c | 299 +++++++++++++++++++++-
drivers/bus/dpaa/base/fman/fman_hw.c | 20 +-
drivers/bus/dpaa/base/fman/netcfg_layer.c | 4 +-
drivers/bus/dpaa/dpaa_bus.c | 16 +-
drivers/bus/dpaa/include/fman.h | 15 +-
drivers/net/dpaa/dpaa_ethdev.c | 114 +++++++--
drivers/net/dpaa/dpaa_flow.c | 27 +-
drivers/net/dpaa/dpaa_fmc.c | 2 +-
9 files changed, 469 insertions(+), 61 deletions(-)
diff --git a/doc/guides/nics/dpaa.rst b/doc/guides/nics/dpaa.rst
index 47dcce334c..529d5b74f4 100644
--- a/doc/guides/nics/dpaa.rst
+++ b/doc/guides/nics/dpaa.rst
@@ -136,7 +136,7 @@ RTE framework and DPAA internal components/drivers.
The Ethernet driver is bound to a FMAN port and implements the interfaces
needed to connect the DPAA network interface to the network stack.
Each FMAN Port corresponds to a DPDK network interface.
-- PMD also support OH mode, where the port works as a HW assisted
+- PMD also supports OH/ONIC mode, where the port works as a HW assisted
virtual port without actually connecting to a Physical MAC.
@@ -152,7 +152,7 @@ Features
- Promiscuous mode
- IEEE1588 PTP
- OH Port for inter application communication
-
+ - ONIC virtual port support
DPAA Mempool Driver
~~~~~~~~~~~~~~~~~~~
@@ -350,6 +350,35 @@ OH Port
-------- Rx Packets ---------
+ONIC
+~~~~
+ To use OH port to communicate between two applications, we can assign Rx port
+ of an O/H port to Application 1 and Tx port to Application 2 so that
+ Application 1 can send packets to Application 2. Similarly, we can assign Tx
+ port of another O/H port to Application 1 and Rx port to Application 2 so that
+ Application 2 can send packets to Application 1.
+
+ ONIC is logically defined to achieve it. Internally it will use one Rx queue
+ of an O/H port and one Tx queue of another O/H port.
+ For application, it will behave as single O/H port.
+
+ +------+ +------+ +------+ +------+ +------+
+ | | Tx | | Rx | O/H | Tx | | Rx | |
+ | | - - - > | | - - > | Port | - - > | | - - > | |
+ | | | | | 1 | | | | |
+ | | | | +------+ | | | |
+ | App | | ONIC | | ONIC | | App |
+ | 1 | | Port | | Port | | 2 |
+ | | | 1 | +------+ | 2 | | |
+ | | Rx | | Tx | O/H | Rx | | Tx | |
+ | | < - - - | | < - - -| Port | < - - -| | < - - -| |
+ | | | | | 2 | | | | |
+ +------+ +------+ +------+ +------+ +------+
+
+ All the packets received by ONIC port 1 will be sent to ONIC port 2 and vice
+ versa. These ports can be used by DPDK applications just like physical ports.
+
+
VSP (Virtual Storage Profile)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The storage profiled are means to provide virtualized interface. A ranges of
diff --git a/drivers/bus/dpaa/base/fman/fman.c b/drivers/bus/dpaa/base/fman/fman.c
index f817305ab7..efe6eab4a9 100644
--- a/drivers/bus/dpaa/base/fman/fman.c
+++ b/drivers/bus/dpaa/base/fman/fman.c
@@ -43,7 +43,7 @@ if_destructor(struct __fman_if *__if)
if (!__if)
return;
- if (__if->__if.mac_type == fman_offline)
+ if (__if->__if.mac_type == fman_offline_internal)
goto cleanup;
list_for_each_entry_safe(bp, tmpbp, &__if->__if.bpool_list, node) {
@@ -465,7 +465,7 @@ fman_if_init(const struct device_node *dpa_node)
__if->__if.is_memac = 0;
if (is_offline)
- __if->__if.mac_type = fman_offline;
+ __if->__if.mac_type = fman_offline_internal;
else if (of_device_is_compatible(mac_node, "fsl,fman-1g-mac"))
__if->__if.mac_type = fman_mac_1g;
else if (of_device_is_compatible(mac_node, "fsl,fman-10g-mac"))
@@ -791,6 +791,292 @@ fman_if_init(const struct device_node *dpa_node)
dname, __if->__if.tx_channel_id, __if->__if.fman_idx,
__if->__if.mac_idx);
+ /* Don't add OH port to the port list since they will be used by ONIC
+ * ports.
+ */
+ if (!is_offline)
+ list_add_tail(&__if->__if.node, &__ifs);
+
+ return 0;
+err:
+ if_destructor(__if);
+ return _errno;
+}
+
+static int fman_if_init_onic(const struct device_node *dpa_node)
+{
+ struct __fman_if *__if;
+ struct fman_if_bpool *bpool;
+ const phandle *tx_pools_phandle;
+ const phandle *tx_channel_id, *mac_addr, *cell_idx;
+ const phandle *rx_phandle;
+ const struct device_node *pool_node;
+ size_t lenp;
+ int _errno;
+ const phandle *p_onic_oh_nodes = NULL;
+ const struct device_node *rx_oh_node = NULL;
+ const struct device_node *tx_oh_node = NULL;
+ const phandle *p_fman_rx_oh_node = NULL, *p_fman_tx_oh_node = NULL;
+ const struct device_node *fman_rx_oh_node = NULL;
+ const struct device_node *fman_tx_oh_node = NULL;
+ const struct device_node *fman_node;
+ uint32_t na = OF_DEFAULT_NA;
+ uint64_t rx_phandle_host[4] = {0};
+ uint64_t cell_idx_host = 0;
+
+ if (of_device_is_available(dpa_node) == false)
+ return 0;
+
+ if (!of_device_is_compatible(dpa_node, "fsl,dpa-ethernet-generic"))
+ return 0;
+
+ /* Allocate an object for this network interface */
+ __if = rte_malloc(NULL, sizeof(*__if), RTE_CACHE_LINE_SIZE);
+ if (!__if) {
+ FMAN_ERR(-ENOMEM, "malloc(%zu)\n", sizeof(*__if));
+ goto err;
+ }
+ memset(__if, 0, sizeof(*__if));
+
+ INIT_LIST_HEAD(&__if->__if.bpool_list);
+
+ strlcpy(__if->node_name, dpa_node->name, IF_NAME_MAX_LEN - 1);
+ __if->node_name[IF_NAME_MAX_LEN - 1] = '\0';
+
+ strlcpy(__if->node_path, dpa_node->full_name, PATH_MAX - 1);
+ __if->node_path[PATH_MAX - 1] = '\0';
+
+ /* Mac node is onic */
+ __if->__if.is_memac = 0;
+ __if->__if.mac_type = fman_onic;
+
+ /* Extract the MAC address for linux peer */
+ mac_addr = of_get_property(dpa_node, "local-mac-address", &lenp);
+ if (!mac_addr) {
+ FMAN_ERR(-EINVAL, "%s: no local-mac-address\n",
+ dpa_node->full_name);
+ goto err;
+ }
+
+ memcpy(&__if->__if.onic_info.peer_mac, mac_addr, ETHER_ADDR_LEN);
+
+ /* Extract the Rx port (it's the first of the two port handles)
+ * and get its channel ID.
+ */
+ p_onic_oh_nodes = of_get_property(dpa_node, "fsl,oh-ports", &lenp);
+ if (!p_onic_oh_nodes) {
+ FMAN_ERR(-EINVAL, "%s: couldn't get p_onic_oh_nodes\n",
+ dpa_node->full_name);
+ goto err;
+ }
+
+ rx_oh_node = of_find_node_by_phandle(p_onic_oh_nodes[0]);
+ if (!rx_oh_node) {
+ FMAN_ERR(-EINVAL, "%s: couldn't get rx_oh_node\n",
+ dpa_node->full_name);
+ goto err;
+ }
+
+ p_fman_rx_oh_node = of_get_property(rx_oh_node, "fsl,fman-oh-port",
+ &lenp);
+ if (!p_fman_rx_oh_node) {
+ FMAN_ERR(-EINVAL, "%s: couldn't get p_fman_rx_oh_node\n",
+ rx_oh_node->full_name);
+ goto err;
+ }
+
+ fman_rx_oh_node = of_find_node_by_phandle(*p_fman_rx_oh_node);
+ if (!fman_rx_oh_node) {
+ FMAN_ERR(-EINVAL, "%s: couldn't get fman_rx_oh_node\n",
+ rx_oh_node->full_name);
+ goto err;
+ }
+
+ tx_channel_id = of_get_property(fman_rx_oh_node, "fsl,qman-channel-id",
+ &lenp);
+ if (!tx_channel_id) {
+ FMAN_ERR(-EINVAL, "%s: no fsl-qman-channel-id\n",
+ rx_oh_node->full_name);
+ goto err;
+ }
+ assert(lenp == sizeof(*tx_channel_id));
+
+ __if->__if.tx_channel_id = of_read_number(tx_channel_id, na);
+
+ /* Extract the FQs from which oNIC driver in Linux is dequeuing */
+ rx_phandle = of_get_property(rx_oh_node, "fsl,qman-frame-queues-oh",
+ &lenp);
+ if (!rx_phandle) {
+ FMAN_ERR(-EINVAL, "%s: no fsl,qman-frame-queues-oh\n",
+ rx_oh_node->full_name);
+ goto err;
+ }
+ assert(lenp == (4 * sizeof(phandle)));
+
+ __if->__if.onic_info.rx_start = of_read_number(&rx_phandle[2], na);
+ __if->__if.onic_info.rx_count = of_read_number(&rx_phandle[3], na);
+
+ /* Extract the Rx FQIDs */
+ tx_oh_node = of_find_node_by_phandle(p_onic_oh_nodes[1]);
+ if (!tx_oh_node) {
+ FMAN_ERR(-EINVAL, "%s: couldn't get tx_oh_node\n",
+ dpa_node->full_name);
+ goto err;
+ }
+
+ p_fman_tx_oh_node = of_get_property(tx_oh_node, "fsl,fman-oh-port",
+ &lenp);
+ if (!p_fman_tx_oh_node) {
+ FMAN_ERR(-EINVAL, "%s: couldn't get p_fman_tx_oh_node\n",
+ tx_oh_node->full_name);
+ goto err;
+ }
+
+ fman_tx_oh_node = of_find_node_by_phandle(*p_fman_tx_oh_node);
+ if (!fman_tx_oh_node) {
+ FMAN_ERR(-EINVAL, "%s: couldn't get fman_tx_oh_node\n",
+ tx_oh_node->full_name);
+ goto err;
+ }
+
+ cell_idx = of_get_property(fman_tx_oh_node, "cell-index", &lenp);
+ if (!cell_idx) {
+ FMAN_ERR(-ENXIO, "%s: no cell-index)\n", tx_oh_node->full_name);
+ goto err;
+ }
+ assert(lenp == sizeof(*cell_idx));
+
+ cell_idx_host = of_read_number(cell_idx, lenp / sizeof(phandle));
+ __if->__if.mac_idx = cell_idx_host;
+
+ fman_node = of_get_parent(fman_tx_oh_node);
+ cell_idx = of_get_property(fman_node, "cell-index", &lenp);
+ if (!cell_idx) {
+ FMAN_ERR(-ENXIO, "%s: no cell-index)\n", tx_oh_node->full_name);
+ goto err;
+ }
+ assert(lenp == sizeof(*cell_idx));
+
+ cell_idx_host = of_read_number(cell_idx, lenp / sizeof(phandle));
+ __if->__if.fman_idx = cell_idx_host;
+
+ rx_phandle = of_get_property(tx_oh_node, "fsl,qman-frame-queues-oh",
+ &lenp);
+ if (!rx_phandle) {
+ FMAN_ERR(-EINVAL, "%s: no fsl,qman-frame-queues-oh\n",
+ dpa_node->full_name);
+ goto err;
+ }
+ assert(lenp == (4 * sizeof(phandle)));
+
+ rx_phandle_host[0] = of_read_number(&rx_phandle[0], na);
+ rx_phandle_host[1] = of_read_number(&rx_phandle[1], na);
+ rx_phandle_host[2] = of_read_number(&rx_phandle[2], na);
+ rx_phandle_host[3] = of_read_number(&rx_phandle[3], na);
+
+ assert((rx_phandle_host[1] == 1) && (rx_phandle_host[3] == 1));
+
+ __if->__if.fqid_rx_err = rx_phandle_host[0];
+ __if->__if.fqid_rx_def = rx_phandle_host[2];
+
+ /* Don't Extract the Tx FQIDs */
+ __if->__if.fqid_tx_err = 0;
+ __if->__if.fqid_tx_confirm = 0;
+
+ /* Obtain the buffer pool nodes used by Tx OH port */
+ tx_pools_phandle = of_get_property(tx_oh_node, "fsl,bman-buffer-pools",
+ &lenp);
+ if (!tx_pools_phandle) {
+ FMAN_ERR(-EINVAL, "%s: no fsl,bman-buffer-pools\n",
+ tx_oh_node->full_name);
+ goto err;
+ }
+ assert(lenp && !(lenp % sizeof(phandle)));
+
+ /* For each pool, parse the corresponding node and add a pool object to
+ * the interface's "bpool_list".
+ */
+
+ while (lenp) {
+ size_t proplen;
+ const phandle *prop;
+ uint64_t bpool_host[6] = {0};
+
+ /* Allocate an object for the pool */
+ bpool = rte_malloc(NULL, sizeof(*bpool), RTE_CACHE_LINE_SIZE);
+ if (!bpool) {
+ FMAN_ERR(-ENOMEM, "malloc(%zu)\n", sizeof(*bpool));
+ goto err;
+ }
+
+ /* Find the pool node */
+ pool_node = of_find_node_by_phandle(*tx_pools_phandle);
+ if (!pool_node) {
+ FMAN_ERR(-ENXIO, "%s: bad fsl,bman-buffer-pools\n",
+ tx_oh_node->full_name);
+ rte_free(bpool);
+ goto err;
+ }
+
+ /* Extract the BPID property */
+ prop = of_get_property(pool_node, "fsl,bpid", &proplen);
+ if (!prop) {
+ FMAN_ERR(-EINVAL, "%s: no fsl,bpid\n",
+ pool_node->full_name);
+ rte_free(bpool);
+ goto err;
+ }
+ assert(proplen == sizeof(*prop));
+
+ bpool->bpid = of_read_number(prop, na);
+
+ /* Extract the cfg property (count/size/addr). "fsl,bpool-cfg"
+ * indicates for the Bman driver to seed the pool.
+ * "fsl,bpool-ethernet-cfg" is used by the network driver. The
+ * two are mutually exclusive, so check for either of them.
+ */
+
+ prop = of_get_property(pool_node, "fsl,bpool-cfg", &proplen);
+ if (!prop)
+ prop = of_get_property(pool_node,
+ "fsl,bpool-ethernet-cfg",
+ &proplen);
+ if (!prop) {
+ /* It's OK for there to be no bpool-cfg */
+ bpool->count = bpool->size = bpool->addr = 0;
+ } else {
+ assert(proplen == (6 * sizeof(*prop)));
+
+ bpool_host[0] = of_read_number(&prop[0], na);
+ bpool_host[1] = of_read_number(&prop[1], na);
+ bpool_host[2] = of_read_number(&prop[2], na);
+ bpool_host[3] = of_read_number(&prop[3], na);
+ bpool_host[4] = of_read_number(&prop[4], na);
+ bpool_host[5] = of_read_number(&prop[5], na);
+
+ bpool->count = ((uint64_t)bpool_host[0] << 32) |
+ bpool_host[1];
+ bpool->size = ((uint64_t)bpool_host[2] << 32) |
+ bpool_host[3];
+ bpool->addr = ((uint64_t)bpool_host[4] << 32) |
+ bpool_host[5];
+ }
+
+ /* Parsing of the pool is complete, add it to the interface
+ * list.
+ */
+ list_add_tail(&bpool->node, &__if->__if.bpool_list);
+ lenp -= sizeof(phandle);
+ tx_pools_phandle++;
+ }
+
+ fman_if_vsp_init(__if);
+
+ /* Parsing of the network interface is complete, add it to the list. */
+ DPAA_BUS_DEBUG("Found %s, Tx Channel = %x, FMAN = %x, Port ID = %x",
+ dpa_node->full_name, __if->__if.tx_channel_id,
+ __if->__if.fman_idx, __if->__if.mac_idx);
+
list_add_tail(&__if->__if.node, &__ifs);
return 0;
err:
@@ -830,6 +1116,13 @@ fman_init(void)
}
}
+ for_each_compatible_node(dpa_node, NULL, "fsl,dpa-ethernet-generic") {
+ /* it is a oNIC interface */
+ _errno = fman_if_init_onic(dpa_node);
+ if (_errno)
+ FMAN_ERR(_errno, "if_init(%s)\n", dpa_node->full_name);
+ }
+
return 0;
err:
fman_finish();
@@ -847,7 +1140,7 @@ fman_finish(void)
int _errno;
/* No need to disable Offline port */
- if (__if->__if.mac_type == fman_offline)
+ if (__if->__if.mac_type == fman_offline_internal)
continue;
/* disable Rx and Tx */
diff --git a/drivers/bus/dpaa/base/fman/fman_hw.c b/drivers/bus/dpaa/base/fman/fman_hw.c
index 1f61ae406b..cbb0491d70 100644
--- a/drivers/bus/dpaa/base/fman/fman_hw.c
+++ b/drivers/bus/dpaa/base/fman/fman_hw.c
@@ -88,8 +88,9 @@ fman_if_add_hash_mac_addr(struct fman_if *p, uint8_t *eth)
struct __fman_if *__if = container_of(p, struct __fman_if, __if);
- /* Add hash mac addr not supported on Offline port */
- if (__if->__if.mac_type == fman_offline)
+ /* Add hash mac addr not supported on Offline port and onic port */
+ if (__if->__if.mac_type == fman_offline_internal ||
+ __if->__if.mac_type == fman_onic)
return 0;
eth_addr = ETH_ADDR_TO_UINT64(eth);
@@ -115,9 +116,10 @@ fman_if_get_primary_mac_addr(struct fman_if *p, uint8_t *eth)
u32 val = in_be32(mac_reg);
int i;
- /* Get mac addr not supported on Offline port */
+ /* Get mac addr not supported on Offline port and onic port */
/* Return NULL mac address */
- if (__if->__if.mac_type == fman_offline) {
+ if (__if->__if.mac_type == fman_offline_internal ||
+ __if->__if.mac_type == fman_onic) {
for (i = 0; i < 6; i++)
eth[i] = 0x0;
return 0;
@@ -143,8 +145,9 @@ fman_if_clear_mac_addr(struct fman_if *p, uint8_t addr_num)
struct __fman_if *m = container_of(p, struct __fman_if, __if);
void *reg;
- /* Clear mac addr not supported on Offline port */
- if (m->__if.mac_type == fman_offline)
+ /* Clear mac addr not supported on Offline port and onic port */
+ if (m->__if.mac_type == fman_offline_internal ||
+ m->__if.mac_type == fman_onic)
return;
if (addr_num) {
@@ -169,8 +172,9 @@ fman_if_add_mac_addr(struct fman_if *p, uint8_t *eth, uint8_t addr_num)
void *reg;
u32 val;
- /* Set mac addr not supported on Offline port */
- if (m->__if.mac_type == fman_offline)
+ /* Set mac addr not supported on Offline port and onic port */
+ if (m->__if.mac_type == fman_offline_internal ||
+ m->__if.mac_type == fman_onic)
return 0;
memcpy(&m->__if.mac_addr, eth, ETHER_ADDR_LEN);
diff --git a/drivers/bus/dpaa/base/fman/netcfg_layer.c b/drivers/bus/dpaa/base/fman/netcfg_layer.c
index e6a6ed1eb6..ffb37825c2 100644
--- a/drivers/bus/dpaa/base/fman/netcfg_layer.c
+++ b/drivers/bus/dpaa/base/fman/netcfg_layer.c
@@ -44,7 +44,7 @@ dump_netcfg(struct netcfg_info *cfg_ptr, FILE *f)
fprintf(f, "\n+ Fman %d, MAC %d (%s);\n",
__if->fman_idx, __if->mac_idx,
- (__if->mac_type == fman_offline) ? "OFFLINE" :
+ (__if->mac_type == fman_offline_internal) ? "OFFLINE" :
(__if->mac_type == fman_mac_1g) ? "1G" :
(__if->mac_type == fman_mac_2_5g) ? "2.5G" : "10G");
@@ -57,7 +57,7 @@ dump_netcfg(struct netcfg_info *cfg_ptr, FILE *f)
fprintf(f, "\tfqid_rx_def: 0x%x\n", p_cfg->rx_def);
fprintf(f, "\tfqid_rx_err: 0x%x\n", __if->fqid_rx_err);
- if (__if->mac_type != fman_offline) {
+ if (__if->mac_type != fman_offline_internal) {
fprintf(f, "\tfqid_tx_err: 0x%x\n", __if->fqid_tx_err);
fprintf(f, "\tfqid_tx_confirm: 0x%x\n", __if->fqid_tx_confirm);
fman_if_for_each_bpool(bpool, __if)
diff --git a/drivers/bus/dpaa/dpaa_bus.c b/drivers/bus/dpaa/dpaa_bus.c
index 6e4ec90670..9ffbe07c93 100644
--- a/drivers/bus/dpaa/dpaa_bus.c
+++ b/drivers/bus/dpaa/dpaa_bus.c
@@ -171,8 +171,10 @@ dpaa_create_device_list(void)
struct fm_eth_port_cfg *cfg;
struct fman_if *fman_intf;
+ rte_dpaa_bus.device_count = 0;
+
/* Creating Ethernet Devices */
- for (i = 0; i < dpaa_netcfg->num_ethports; i++) {
+ for (i = 0; dpaa_netcfg && (i < dpaa_netcfg->num_ethports); i++) {
dev = calloc(1, sizeof(struct rte_dpaa_device));
if (!dev) {
DPAA_BUS_LOG(ERR, "Failed to allocate ETH devices");
@@ -204,9 +206,12 @@ dpaa_create_device_list(void)
/* Create device name */
memset(dev->name, 0, RTE_ETH_NAME_MAX_LEN);
- if (fman_intf->mac_type == fman_offline)
+ if (fman_intf->mac_type == fman_offline_internal)
sprintf(dev->name, "fm%d-oh%d",
(fman_intf->fman_idx + 1), fman_intf->mac_idx);
+ else if (fman_intf->mac_type == fman_onic)
+ sprintf(dev->name, "fm%d-onic%d",
+ (fman_intf->fman_idx + 1), fman_intf->mac_idx);
else
sprintf(dev->name, "fm%d-mac%d",
(fman_intf->fman_idx + 1), fman_intf->mac_idx);
@@ -216,7 +221,7 @@ dpaa_create_device_list(void)
dpaa_add_to_device_list(dev);
}
- rte_dpaa_bus.device_count = i;
+ rte_dpaa_bus.device_count += i;
/* Unlike case of ETH, RTE_LIBRTE_DPAA_MAX_CRYPTODEV SEC devices are
* constantly created only if "sec" property is found in the device
@@ -477,6 +482,11 @@ rte_dpaa_bus_parse(const char *name, void *out)
i >= 2 || j >= 16)
return -EINVAL;
max_name_len = sizeof("fm.-oh..") - 1;
+ } else if (strncmp("onic", &name[dev_delta], 4) == 0) {
+ if (sscanf(&name[delta], "fm%u-onic%u", &i, &j) != 2 ||
+ i >= 2 || j >= 16)
+ return -EINVAL;
+ max_name_len = sizeof("fm.-onic..") - 1;
} else {
if (sscanf(&name[delta], "fm%u-mac%u", &i, &j) != 2 ||
i >= 2 || j >= 16)
diff --git a/drivers/bus/dpaa/include/fman.h b/drivers/bus/dpaa/include/fman.h
index 377f73bf0d..01556cf2a8 100644
--- a/drivers/bus/dpaa/include/fman.h
+++ b/drivers/bus/dpaa/include/fman.h
@@ -78,7 +78,7 @@ TAILQ_HEAD(rte_fman_if_list, __fman_if);
/* Represents the different flavour of network interface */
enum fman_mac_type {
- fman_offline = 0,
+ fman_offline_internal = 0,
fman_mac_1g,
fman_mac_10g,
fman_mac_2_5g,
@@ -366,6 +366,16 @@ struct fman_port_qmi_regs {
uint32_t fmqm_pndcc; /**< PortID n Dequeue Confirm Counter */
};
+struct onic_port_cfg {
+ char macless_name[IF_NAME_MAX_LEN];
+ uint32_t rx_start;
+ uint32_t rx_count;
+ uint32_t tx_start;
+ uint32_t tx_count;
+ struct rte_ether_addr src_mac;
+ struct rte_ether_addr peer_mac;
+};
+
/* This struct exports parameters about an Fman network interface, determined
* from the device-tree.
*/
@@ -401,6 +411,9 @@ struct fman_if {
uint32_t fqid_tx_err;
uint32_t fqid_tx_confirm;
+ /* oNIC port info */
+ struct onic_port_cfg onic_info;
+
struct list_head bpool_list;
/* The node for linking this interface into "fman_if_list" */
struct list_head node;
diff --git a/drivers/net/dpaa/dpaa_ethdev.c b/drivers/net/dpaa/dpaa_ethdev.c
index f8196ddd14..133fbd5bc9 100644
--- a/drivers/net/dpaa/dpaa_ethdev.c
+++ b/drivers/net/dpaa/dpaa_ethdev.c
@@ -295,7 +295,8 @@ dpaa_eth_dev_configure(struct rte_eth_dev *dev)
max_rx_pktlen = DPAA_MAX_RX_PKT_LEN;
}
- if (fif->mac_type != fman_offline)
+ if (!fif->is_shared_mac && fif->mac_type != fman_offline_internal &&
+ fif->mac_type != fman_onic)
fman_if_set_maxfrm(dev->process_private, max_rx_pktlen);
if (rx_offloads & RTE_ETH_RX_OFFLOAD_SCATTER) {
@@ -315,7 +316,8 @@ dpaa_eth_dev_configure(struct rte_eth_dev *dev)
}
/* Disable interrupt support on offline port*/
- if (fif->mac_type == fman_offline)
+ if (fif->mac_type == fman_offline_internal ||
+ fif->mac_type == fman_onic)
return 0;
/* if the interrupts were configured on this devices*/
@@ -467,10 +469,11 @@ static int dpaa_eth_dev_start(struct rte_eth_dev *dev)
else
dev->tx_pkt_burst = dpaa_eth_queue_tx;
- fman_if_bmi_stats_enable(fif);
- fman_if_bmi_stats_reset(fif);
- fman_if_enable_rx(fif);
-
+ if (fif->mac_type != fman_onic) {
+ fman_if_bmi_stats_enable(fif);
+ fman_if_bmi_stats_reset(fif);
+ fman_if_enable_rx(fif);
+ }
for (i = 0; i < dev->data->nb_rx_queues; i++)
dev->data->rx_queue_state[i] = RTE_ETH_QUEUE_STATE_STARTED;
for (i = 0; i < dev->data->nb_tx_queues; i++)
@@ -535,7 +538,8 @@ static int dpaa_eth_dev_close(struct rte_eth_dev *dev)
ret = dpaa_eth_dev_stop(dev);
- if (fif->mac_type == fman_offline)
+ if (fif->mac_type == fman_offline_internal ||
+ fif->mac_type == fman_onic)
return 0;
/* Reset link to autoneg */
@@ -651,11 +655,14 @@ static int dpaa_eth_dev_info(struct rte_eth_dev *dev,
| RTE_ETH_LINK_SPEED_1G
| RTE_ETH_LINK_SPEED_2_5G
| RTE_ETH_LINK_SPEED_10G;
- } else if (fif->mac_type == fman_offline) {
+ } else if (fif->mac_type == fman_offline_internal ||
+ fif->mac_type == fman_onic) {
dev_info->speed_capa = RTE_ETH_LINK_SPEED_10M_HD
| RTE_ETH_LINK_SPEED_10M
| RTE_ETH_LINK_SPEED_100M_HD
- | RTE_ETH_LINK_SPEED_100M;
+ | RTE_ETH_LINK_SPEED_100M
+ | RTE_ETH_LINK_SPEED_1G
+ | RTE_ETH_LINK_SPEED_2_5G;
} else {
DPAA_PMD_ERR("invalid link_speed: %s, %d",
dpaa_intf->name, fif->mac_type);
@@ -757,7 +764,8 @@ static int dpaa_eth_link_update(struct rte_eth_dev *dev,
ioctl_version = dpaa_get_ioctl_version_number();
if (dev->data->dev_flags & RTE_ETH_DEV_INTR_LSC &&
- fif->mac_type != fman_offline) {
+ fif->mac_type != fman_offline_internal &&
+ fif->mac_type != fman_onic) {
for (count = 0; count <= MAX_REPEAT_TIME; count++) {
ret = dpaa_get_link_status(__fif->node_name, link);
if (ret)
@@ -770,7 +778,8 @@ static int dpaa_eth_link_update(struct rte_eth_dev *dev,
}
} else {
link->link_status = dpaa_intf->valid;
- if (fif->mac_type == fman_offline) {
+ if (fif->mac_type == fman_offline_internal ||
+ fif->mac_type == fman_onic) {
/*Max supported rate for O/H port is 3.75Mpps*/
link->link_speed = RTE_ETH_SPEED_NUM_2_5G;
link->link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
@@ -933,8 +942,16 @@ dpaa_xstats_get_names_by_id(
static int dpaa_eth_promiscuous_enable(struct rte_eth_dev *dev)
{
+ struct fman_if *fif = dev->process_private;
+
PMD_INIT_FUNC_TRACE();
+ if (fif->mac_type == fman_onic) {
+ DPAA_PMD_INFO("Enable promiscuous mode not supported on ONIC "
+ "port");
+ return 0;
+ }
+
fman_if_promiscuous_enable(dev->process_private);
return 0;
@@ -942,8 +959,16 @@ static int dpaa_eth_promiscuous_enable(struct rte_eth_dev *dev)
static int dpaa_eth_promiscuous_disable(struct rte_eth_dev *dev)
{
+ struct fman_if *fif = dev->process_private;
+
PMD_INIT_FUNC_TRACE();
+ if (fif->mac_type == fman_onic) {
+ DPAA_PMD_INFO("Disable promiscuous mode not supported on ONIC "
+ "port");
+ return 0;
+ }
+
fman_if_promiscuous_disable(dev->process_private);
return 0;
@@ -951,8 +976,15 @@ static int dpaa_eth_promiscuous_disable(struct rte_eth_dev *dev)
static int dpaa_eth_multicast_enable(struct rte_eth_dev *dev)
{
+ struct fman_if *fif = dev->process_private;
+
PMD_INIT_FUNC_TRACE();
+ if (fif->mac_type == fman_onic) {
+ DPAA_PMD_INFO("Enable Multicast not supported on ONIC port");
+ return 0;
+ }
+
fman_if_set_mcast_filter_table(dev->process_private);
return 0;
@@ -960,8 +992,15 @@ static int dpaa_eth_multicast_enable(struct rte_eth_dev *dev)
static int dpaa_eth_multicast_disable(struct rte_eth_dev *dev)
{
+ struct fman_if *fif = dev->process_private;
+
PMD_INIT_FUNC_TRACE();
+ if (fif->mac_type == fman_onic) {
+ DPAA_PMD_INFO("Disable Multicast not supported on ONIC port");
+ return 0;
+ }
+
fman_if_reset_mcast_filter_table(dev->process_private);
return 0;
@@ -1095,7 +1134,8 @@ int dpaa_eth_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
dpaa_intf->bp_info = DPAA_MEMPOOL_TO_POOL_INFO(mp);
/* For shared interface, it's done in kernel, skip.*/
- if (!fif->is_shared_mac && fif->mac_type != fman_offline)
+ if (!fif->is_shared_mac && fif->mac_type != fman_offline_internal &&
+ fif->mac_type != fman_onic)
dpaa_fman_if_pool_setup(dev);
if (fif->num_profiles) {
@@ -1126,8 +1166,9 @@ int dpaa_eth_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
}
dpaa_intf->valid = 1;
- DPAA_PMD_DEBUG("if:%s sg_on = %d, max_frm =%d", dpaa_intf->name,
- fman_if_get_sg_enable(fif), max_rx_pktlen);
+ if (fif->mac_type != fman_onic)
+ DPAA_PMD_DEBUG("if:%s sg_on = %d, max_frm =%d", dpaa_intf->name,
+ fman_if_get_sg_enable(fif), max_rx_pktlen);
/* checking if push mode only, no error check for now */
if (!rxq->is_static &&
dpaa_push_mode_max_queue > dpaa_push_queue_idx) {
@@ -1242,7 +1283,8 @@ int dpaa_eth_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
}
/* Enable main queue to receive error packets also by default */
- if (fif->mac_type != fman_offline)
+ if (fif->mac_type != fman_offline_internal &&
+ fif->mac_type != fman_onic)
fman_if_set_err_fqid(fif, rxq->fqid);
return 0;
@@ -1394,7 +1436,8 @@ static int dpaa_link_down(struct rte_eth_dev *dev)
__fif = container_of(fif, struct __fman_if, __if);
if (dev->data->dev_flags & RTE_ETH_DEV_INTR_LSC &&
- fif->mac_type != fman_offline)
+ fif->mac_type != fman_offline_internal &&
+ fif->mac_type != fman_onic)
dpaa_update_link_status(__fif->node_name, RTE_ETH_LINK_DOWN);
else
return dpaa_eth_dev_stop(dev);
@@ -1411,7 +1454,8 @@ static int dpaa_link_up(struct rte_eth_dev *dev)
__fif = container_of(fif, struct __fman_if, __if);
if (dev->data->dev_flags & RTE_ETH_DEV_INTR_LSC &&
- fif->mac_type != fman_offline)
+ fif->mac_type != fman_offline_internal &&
+ fif->mac_type != fman_onic)
dpaa_update_link_status(__fif->node_name, RTE_ETH_LINK_UP);
else
dpaa_eth_dev_start(dev);
@@ -1510,11 +1554,16 @@ dpaa_dev_add_mac_addr(struct rte_eth_dev *dev,
PMD_INIT_FUNC_TRACE();
- if (fif->mac_type == fman_offline) {
+ if (fif->mac_type == fman_offline_internal) {
DPAA_PMD_DEBUG("Add MAC Address not supported on O/H port");
return 0;
}
+ if (fif->mac_type == fman_onic) {
+ DPAA_PMD_INFO("Add MAC Address not supported on ONIC port");
+ return 0;
+ }
+
ret = fman_if_add_mac_addr(dev->process_private,
addr->addr_bytes, index);
@@ -1531,11 +1580,16 @@ dpaa_dev_remove_mac_addr(struct rte_eth_dev *dev,
PMD_INIT_FUNC_TRACE();
- if (fif->mac_type == fman_offline) {
+ if (fif->mac_type == fman_offline_internal) {
DPAA_PMD_DEBUG("Remove MAC Address not supported on O/H port");
return;
}
+ if (fif->mac_type == fman_onic) {
+ DPAA_PMD_INFO("Remove MAC Address not supported on ONIC port");
+ return;
+ }
+
fman_if_clear_mac_addr(dev->process_private, index);
}
@@ -1548,11 +1602,16 @@ dpaa_dev_set_mac_addr(struct rte_eth_dev *dev,
PMD_INIT_FUNC_TRACE();
- if (fif->mac_type == fman_offline) {
+ if (fif->mac_type == fman_offline_internal) {
DPAA_PMD_DEBUG("Set MAC Address not supported on O/H port");
return 0;
}
+ if (fif->mac_type == fman_onic) {
+ DPAA_PMD_INFO("Set MAC Address not supported on ONIC port");
+ return 0;
+ }
+
ret = fman_if_add_mac_addr(dev->process_private, addr->addr_bytes, 0);
if (ret)
DPAA_PMD_ERR("Setting the MAC ADDR failed %d", ret);
@@ -1903,7 +1962,8 @@ static int dpaa_tx_queue_init(struct qman_fq *fq,
/* Set B0V bit in contextA to set ASPID to 0 */
opts.fqd.context_a.hi |= DPAA_FQD_CTX_A_B0_FIELD_VALID;
- if (fman_intf->mac_type == fman_offline) {
+ if (fman_intf->mac_type == fman_offline_internal ||
+ fman_intf->mac_type == fman_onic) {
opts.fqd.context_a.lo |= DPAA_FQD_CTX_A2_VSPE_BIT;
opts.fqd.context_b = fm_default_vsp_id(fman_intf) <<
DPAA_FQD_CTX_B_SHIFT_BITS;
@@ -2156,6 +2216,11 @@ dpaa_dev_init(struct rte_eth_dev *eth_dev)
goto free_rx;
}
if (!num_rx_fqs) {
+ if (fman_intf->mac_type == fman_offline_internal ||
+ fman_intf->mac_type == fman_onic) {
+ ret = -ENODEV;
+ goto free_rx;
+ }
DPAA_PMD_WARN("%s is not configured by FMC.",
dpaa_intf->name);
}
@@ -2323,7 +2388,8 @@ dpaa_dev_init(struct rte_eth_dev *eth_dev)
DPAA_PMD_DEBUG("All frame queues created");
/* Get the initial configuration for flow control */
- if (fman_intf->mac_type != fman_offline)
+ if (fman_intf->mac_type != fman_offline_internal &&
+ fman_intf->mac_type != fman_onic)
dpaa_fc_set_default(dpaa_intf, fman_intf);
/* reset bpool list, initialize bpool dynamically */
@@ -2355,7 +2421,9 @@ dpaa_dev_init(struct rte_eth_dev *eth_dev)
DPAA_PMD_INFO("net: dpaa: %s: " RTE_ETHER_ADDR_PRT_FMT,
dpaa_device->name, RTE_ETHER_ADDR_BYTES(&fman_intf->mac_addr));
- if (!fman_intf->is_shared_mac && fman_intf->mac_type != fman_offline) {
+ if (!fman_intf->is_shared_mac &&
+ fman_intf->mac_type != fman_offline_internal &&
+ fman_intf->mac_type != fman_onic) {
/* Configure error packet handling */
fman_if_receive_rx_errors(fman_intf,
FM_FD_RX_STATUS_ERR_MASK);
diff --git a/drivers/net/dpaa/dpaa_flow.c b/drivers/net/dpaa/dpaa_flow.c
index 97879b8e4c..b9cd11efa1 100644
--- a/drivers/net/dpaa/dpaa_flow.c
+++ b/drivers/net/dpaa/dpaa_flow.c
@@ -645,8 +645,11 @@ static inline int set_pcd_netenv_scheme(struct dpaa_if *dpaa_intf,
static inline int get_rx_port_type(struct fman_if *fif)
{
-
- if (fif->mac_type == fman_offline_internal)
+ /* For onic ports, configure the VSP as offline ports so that
+ * kernel can configure correct port.
+ */
+ if (fif->mac_type == fman_offline_internal ||
+ fif->mac_type == fman_onic)
return e_FM_PORT_TYPE_OH_OFFLINE_PARSING;
/* For 1G fm-mac9 and fm-mac10 ports, configure the VSP as 10G
* ports so that kernel can configure correct port.
@@ -667,7 +670,8 @@ static inline int get_rx_port_type(struct fman_if *fif)
static inline int get_tx_port_type(struct fman_if *fif)
{
- if (fif->mac_type == fman_offline_internal)
+ if (fif->mac_type == fman_offline_internal ||
+ fif->mac_type == fman_onic)
return e_FM_PORT_TYPE_OH_OFFLINE_PARSING;
else if (fif->mac_type == fman_mac_1g)
return e_FM_PORT_TYPE_TX;
@@ -976,25 +980,12 @@ static int dpaa_port_vsp_configure(struct dpaa_if *dpaa_intf,
memset(&vsp_params, 0, sizeof(vsp_params));
vsp_params.h_fm = fman_handle;
vsp_params.relative_profile_id = vsp_id;
- if (fif->mac_type == fman_offline_internal)
+ if (fif->mac_type == fman_offline_internal ||
+ fif->mac_type == fman_onic)
vsp_params.port_params.port_id = fif->mac_idx;
else
vsp_params.port_params.port_id = idx;
- if (fif->mac_type == fman_mac_1g) {
- vsp_params.port_params.port_type = e_FM_PORT_TYPE_RX;
- } else if (fif->mac_type == fman_mac_2_5g) {
- vsp_params.port_params.port_type = e_FM_PORT_TYPE_RX_2_5G;
- } else if (fif->mac_type == fman_mac_10g) {
- vsp_params.port_params.port_type = e_FM_PORT_TYPE_RX_10G;
- } else if (fif->mac_type == fman_offline) {
- vsp_params.port_params.port_type =
- e_FM_PORT_TYPE_OH_OFFLINE_PARSING;
- } else {
- DPAA_PMD_ERR("Mac type %d error", fif->mac_type);
- return -1;
- }
-
vsp_params.port_params.port_type = get_rx_port_type(fif);
if (vsp_params.port_params.port_type == e_FM_PORT_TYPE_DUMMY) {
DPAA_PMD_ERR("Mac type %d error", fif->mac_type);
diff --git a/drivers/net/dpaa/dpaa_fmc.c b/drivers/net/dpaa/dpaa_fmc.c
index d80ea1010a..7dc42f6e23 100644
--- a/drivers/net/dpaa/dpaa_fmc.c
+++ b/drivers/net/dpaa/dpaa_fmc.c
@@ -215,7 +215,7 @@ dpaa_port_fmc_port_parse(struct fman_if *fif,
if (pport->type == e_FM_PORT_TYPE_OH_OFFLINE_PARSING &&
pport->number == fif->mac_idx &&
- (fif->mac_type == fman_offline ||
+ (fif->mac_type == fman_offline_internal ||
fif->mac_type == fman_onic))
return current_port;
--
2.25.1
^ permalink raw reply [flat|nested] 129+ messages in thread
* [PATCH v3 16/18] net/dpaa: improve the dpaa port cleanup
2024-09-30 10:29 ` [PATCH v3 " Hemant Agrawal
` (14 preceding siblings ...)
2024-09-30 10:29 ` [PATCH v3 15/18] bus/dpaa: add ONIC port mode for the DPAA eth Hemant Agrawal
@ 2024-09-30 10:29 ` Hemant Agrawal
2024-09-30 10:29 ` [PATCH v3 17/18] net/dpaa: improve dpaa errata A010022 handling Hemant Agrawal
2024-09-30 10:29 ` [PATCH v3 18/18] net/dpaa: fix reallocate_mbuf handling Hemant Agrawal
17 siblings, 0 replies; 129+ messages in thread
From: Hemant Agrawal @ 2024-09-30 10:29 UTC (permalink / raw)
To: dev; +Cc: ferruh.yigit, Gagandeep Singh
From: Gagandeep Singh <g.singh@nxp.com>
During DPAA cleanup in FMCLESS mode, the application can
see a segmentation fault in the device close API and in the
DPAA destructor execution.
The segmentation fault in device close happens because the
driver reduces the number of queues initialised during device
configuration without releasing the actual queues.
The segmentation fault in the DPAA destructor happens because
it tries to access RTE ethdev devices whose memory has already
been released by the application's rte_eal_cleanup() call.
This patch improves the cleanup behavior.
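For context, a minimal application-side shutdown sequence that avoids the
destructor-time fault is to close every port before tearing down EAL. This is
a hedged sketch, not part of the patch:

    #include <rte_ethdev.h>
    #include <rte_eal.h>

    /* Assumed teardown order: closing each ethdev port lets the PMD
     * release its queues and FM resources while the rte_eth_devices
     * memory is still valid; EAL is cleaned up only afterwards.
     */
    static void app_teardown(void)
    {
        uint16_t pid;

        RTE_ETH_FOREACH_DEV(pid) {
            rte_eth_dev_stop(pid);
            rte_eth_dev_close(pid);
        }
        rte_eal_cleanup();
    }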
Signed-off-by: Gagandeep Singh <g.singh@nxp.com>
---
drivers/net/dpaa/dpaa_ethdev.c | 33 +++++++++++----------------------
drivers/net/dpaa/dpaa_flow.c | 5 ++---
2 files changed, 13 insertions(+), 25 deletions(-)
diff --git a/drivers/net/dpaa/dpaa_ethdev.c b/drivers/net/dpaa/dpaa_ethdev.c
index 133fbd5bc9..41ae033c75 100644
--- a/drivers/net/dpaa/dpaa_ethdev.c
+++ b/drivers/net/dpaa/dpaa_ethdev.c
@@ -561,10 +561,10 @@ static int dpaa_eth_dev_close(struct rte_eth_dev *dev)
if (dpaa_intf->cgr_rx) {
for (loop = 0; loop < dpaa_intf->nb_rx_queues; loop++)
qman_delete_cgr(&dpaa_intf->cgr_rx[loop]);
+ rte_free(dpaa_intf->cgr_rx);
+ dpaa_intf->cgr_rx = NULL;
}
- rte_free(dpaa_intf->cgr_rx);
- dpaa_intf->cgr_rx = NULL;
/* Release TX congestion Groups */
if (dpaa_intf->cgr_tx) {
for (loop = 0; loop < MAX_DPAA_CORES; loop++)
@@ -578,6 +578,15 @@ static int dpaa_eth_dev_close(struct rte_eth_dev *dev)
rte_free(dpaa_intf->tx_queues);
dpaa_intf->tx_queues = NULL;
+ if (dpaa_intf->port_handle) {
+ if (dpaa_fm_deconfig(dpaa_intf, fif))
+ DPAA_PMD_WARN("DPAA FM "
+ "deconfig failed\n");
+ }
+ if (fif->num_profiles) {
+ if (dpaa_port_vsp_cleanup(dpaa_intf, fif))
+ DPAA_PMD_WARN("DPAA FM vsp cleanup failed\n");
+ }
return ret;
}
@@ -2607,26 +2616,6 @@ static void __attribute__((destructor(102))) dpaa_finish(void)
return;
if (!(default_q || fmc_q)) {
- unsigned int i;
-
- for (i = 0; i < RTE_MAX_ETHPORTS; i++) {
- if (rte_eth_devices[i].dev_ops == &dpaa_devops) {
- struct rte_eth_dev *dev = &rte_eth_devices[i];
- struct dpaa_if *dpaa_intf =
- dev->data->dev_private;
- struct fman_if *fif =
- dev->process_private;
- if (dpaa_intf->port_handle)
- if (dpaa_fm_deconfig(dpaa_intf, fif))
- DPAA_PMD_WARN("DPAA FM "
- "deconfig failed");
- if (fif->num_profiles) {
- if (dpaa_port_vsp_cleanup(dpaa_intf,
- fif))
- DPAA_PMD_WARN("DPAA FM vsp cleanup failed");
- }
- }
- }
if (is_global_init)
if (dpaa_fm_term())
DPAA_PMD_WARN("DPAA FM term failed");
diff --git a/drivers/net/dpaa/dpaa_flow.c b/drivers/net/dpaa/dpaa_flow.c
index b9cd11efa1..c01d8eaca1 100644
--- a/drivers/net/dpaa/dpaa_flow.c
+++ b/drivers/net/dpaa/dpaa_flow.c
@@ -13,6 +13,7 @@
#include <rte_dpaa_logs.h>
#include <fmlib/fm_port_ext.h>
#include <fmlib/fm_vsp_ext.h>
+#include <rte_pmd_dpaa.h>
#define DPAA_MAX_NUM_ETH_DEV 8
@@ -811,8 +812,6 @@ int dpaa_fm_config(struct rte_eth_dev *dev, uint64_t req_dist_set)
return -1;
}
- dpaa_intf->nb_rx_queues = dev->data->nb_rx_queues;
-
/* Open FM Port and set it in port info */
ret = set_fm_port_handle(dpaa_intf, req_dist_set, fif);
if (ret) {
@@ -821,7 +820,7 @@ int dpaa_fm_config(struct rte_eth_dev *dev, uint64_t req_dist_set)
}
if (fif->num_profiles) {
- for (i = 0; i < dpaa_intf->nb_rx_queues; i++)
+ for (i = 0; i < dev->data->nb_rx_queues; i++)
dpaa_intf->rx_queues[i].vsp_id =
fm_default_vsp_id(fif);
--
2.25.1
^ permalink raw reply [flat|nested] 129+ messages in thread
* [PATCH v3 17/18] net/dpaa: improve dpaa errata A010022 handling
2024-09-30 10:29 ` [PATCH v3 " Hemant Agrawal
` (15 preceding siblings ...)
2024-09-30 10:29 ` [PATCH v3 16/18] net/dpaa: improve the dpaa port cleanup Hemant Agrawal
@ 2024-09-30 10:29 ` Hemant Agrawal
2024-09-30 10:29 ` [PATCH v3 18/18] net/dpaa: fix reallocate_mbuf handling Hemant Agrawal
17 siblings, 0 replies; 129+ messages in thread
From: Hemant Agrawal @ 2024-09-30 10:29 UTC (permalink / raw)
To: dev; +Cc: ferruh.yigit, Jun Yang
From: Jun Yang <jun.yang@nxp.com>
This patch improves the errata handling for
"RTE_LIBRTE_DPAA_ERRATA_LS1043_A010022"
Signed-off-by: Jun Yang <jun.yang@nxp.com>
---
drivers/net/dpaa/dpaa_rxtx.c | 40 ++++++++++++++++++++++++++++--------
1 file changed, 32 insertions(+), 8 deletions(-)
diff --git a/drivers/net/dpaa/dpaa_rxtx.c b/drivers/net/dpaa/dpaa_rxtx.c
index d82c6f3be2..1d7efdef88 100644
--- a/drivers/net/dpaa/dpaa_rxtx.c
+++ b/drivers/net/dpaa/dpaa_rxtx.c
@@ -1258,6 +1258,35 @@ reallocate_mbuf(struct qman_fq *txq, struct rte_mbuf *mbuf)
return new_mbufs[0];
}
+#ifdef RTE_LIBRTE_DPAA_ERRATA_LS1043_A010022
+/* In case the data offset is not multiple of 16,
+ * FMAN can stall because of an errata. So reallocate
+ * the buffer in such case.
+ */
+static inline int
+dpaa_eth_ls1043a_mbuf_realloc(struct rte_mbuf *mbuf)
+{
+ uint64_t len, offset;
+
+ if (dpaa_svr_family != SVR_LS1043A_FAMILY)
+ return 0;
+
+ while (mbuf) {
+ len = mbuf->data_len;
+ offset = mbuf->data_off;
+ if ((mbuf->next &&
+ !rte_is_aligned((void *)len, 16)) ||
+ !rte_is_aligned((void *)offset, 16)) {
+ DPAA_PMD_DEBUG("Errata condition hit");
+
+ return 1;
+ }
+ mbuf = mbuf->next;
+ }
+ return 0;
+}
+#endif
+
uint16_t
dpaa_eth_queue_tx(void *q, struct rte_mbuf **bufs, uint16_t nb_bufs)
{
@@ -1296,14 +1325,6 @@ dpaa_eth_queue_tx(void *q, struct rte_mbuf **bufs, uint16_t nb_bufs)
DPAA_TX_BURST_SIZE : nb_bufs;
for (loop = 0; loop < frames_to_send; loop++) {
mbuf = *(bufs++);
- /* In case the data offset is not multiple of 16,
- * FMAN can stall because of an errata. So reallocate
- * the buffer in such case.
- */
- if (dpaa_svr_family == SVR_LS1043A_FAMILY &&
- (mbuf->data_off & 0x7F) != 0x0)
- realloc_mbuf = 1;
-
fd_arr[loop].cmd = 0;
if (dpaa_ieee_1588) {
fd_arr[loop].cmd |= DPAA_FD_CMD_FCO |
@@ -1311,6 +1332,9 @@ dpaa_eth_queue_tx(void *q, struct rte_mbuf **bufs, uint16_t nb_bufs)
fd_arr[loop].cmd |= DPAA_FD_CMD_RPD |
DPAA_FD_CMD_UPD;
}
+#ifdef RTE_LIBRTE_DPAA_ERRATA_LS1043_A010022
+ realloc_mbuf = dpaa_eth_ls1043a_mbuf_realloc(mbuf);
+#endif
seqn = *dpaa_seqn(mbuf);
if (seqn != DPAA_INVALID_MBUF_SEQN) {
index = seqn - 1;
--
2.25.1
^ permalink raw reply [flat|nested] 129+ messages in thread
* [PATCH v3 18/18] net/dpaa: fix reallocate_mbuf handling
2024-09-30 10:29 ` [PATCH v3 " Hemant Agrawal
` (16 preceding siblings ...)
2024-09-30 10:29 ` [PATCH v3 17/18] net/dpaa: improve dpaa errata A010022 handling Hemant Agrawal
@ 2024-09-30 10:29 ` Hemant Agrawal
17 siblings, 0 replies; 129+ messages in thread
From: Hemant Agrawal @ 2024-09-30 10:29 UTC (permalink / raw)
To: dev; +Cc: ferruh.yigit, Vanshika Shukla, stable
From: Vanshika Shukla <vanshika.shukla@nxp.com>
This patch fixes a bug in the reallocate_mbuf handling: the
source location is corrected when copying the data into the
new mbuf.
Fixes: f8c7a17a48c9 ("net/dpaa: support Tx scatter gather for non-DPAA buffer")
Cc: stable@dpdk.org
Signed-off-by: Vanshika Shukla <vanshika.shukla@nxp.com>
---
drivers/net/dpaa/dpaa_rxtx.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/net/dpaa/dpaa_rxtx.c b/drivers/net/dpaa/dpaa_rxtx.c
index 1d7efdef88..247e7b92ba 100644
--- a/drivers/net/dpaa/dpaa_rxtx.c
+++ b/drivers/net/dpaa/dpaa_rxtx.c
@@ -1223,7 +1223,7 @@ reallocate_mbuf(struct qman_fq *txq, struct rte_mbuf *mbuf)
/* Copy the data */
data = rte_pktmbuf_append(new_mbufs[0], bytes_to_copy);
- rte_memcpy((uint8_t *)data, rte_pktmbuf_mtod_offset(mbuf,
+ rte_memcpy((uint8_t *)data, rte_pktmbuf_mtod_offset(temp_mbuf,
void *, offset1), bytes_to_copy);
/* Set new offsets and the temp buffers */
--
2.25.1
^ permalink raw reply [flat|nested] 129+ messages in thread
* [PATCH v4 00/18] NXP DPAA ETH driver enhancement and fixes
2024-08-23 7:32 ` [PATCH v2 00/18] " Hemant Agrawal
` (19 preceding siblings ...)
2024-09-30 10:29 ` [PATCH v3 " Hemant Agrawal
@ 2024-09-30 12:15 ` Hemant Agrawal
2024-09-30 12:15 ` [PATCH v4 01/18] bus/dpaa: fix PFDRs leaks due to FQRNIs Hemant Agrawal
` (19 more replies)
20 siblings, 20 replies; 129+ messages in thread
From: Hemant Agrawal @ 2024-09-30 12:15 UTC (permalink / raw)
To: dev; +Cc: ferruh.yigit
v4: fix clang compilation issues
v3: addressed Ferruh's comments
- dropped Tx rate limit API patch
- added one small bug fix
- fixed removal/add of fman_offline type
v2: address review comments
- improve commit message
- add documentation for new functions
- make IEEE1588 config runtime
This series adds several enhancements to the NXP DPAA Ethernet driver.
Primarily:
1. timestamp and IEEE 1588 support
2. OH and ONIC based virtual port config in DPAA
3. frame display and debugging infra
Gagandeep Singh (3):
bus/dpaa: fix PFDRs leaks due to FQRNIs
net/dpaa: support mempool debug
net/dpaa: improve the dpaa port cleanup
Hemant Agrawal (5):
bus/dpaa: fix VSP for 1G fm1-mac9 and 10
bus/dpaa: fix the fman details status
bus/dpaa: add port buffer manager stats
net/dpaa: implement detailed packet parsing
net/dpaa: enhance DPAA frame display
Jun Yang (2):
net/dpaa: share MAC FMC scheme and CC parse
net/dpaa: improve dpaa errata A010022 handling
Rohit Raj (3):
net/dpaa: fix typecasting ch ID to u32
bus/dpaa: add OH port mode for dpaa eth
bus/dpaa: add ONIC port mode for the DPAA eth
Vanshika Shukla (5):
net/dpaa: support Tx confirmation to enable PTP
net/dpaa: add support to separate Tx conf queues
net/dpaa: support Rx/Tx timestamp read
net/dpaa: support IEEE 1588 PTP
net/dpaa: fix reallocate_mbuf handling
doc/guides/nics/dpaa.rst | 64 ++-
doc/guides/nics/features/dpaa.ini | 2 +
drivers/bus/dpaa/base/fman/fman.c | 583 +++++++++++++++++++---
drivers/bus/dpaa/base/fman/fman_hw.c | 102 +++-
drivers/bus/dpaa/base/fman/netcfg_layer.c | 19 +-
drivers/bus/dpaa/base/qbman/qman.c | 46 +-
drivers/bus/dpaa/dpaa_bus.c | 37 +-
drivers/bus/dpaa/include/fman.h | 112 ++++-
drivers/bus/dpaa/include/fsl_fman.h | 12 +
drivers/bus/dpaa/include/fsl_qman.h | 4 +-
drivers/bus/dpaa/version.map | 4 +
drivers/net/dpaa/dpaa_ethdev.c | 428 +++++++++++++---
drivers/net/dpaa/dpaa_ethdev.h | 68 ++-
drivers/net/dpaa/dpaa_flow.c | 66 +--
drivers/net/dpaa/dpaa_fmc.c | 421 ++++++++++------
drivers/net/dpaa/dpaa_ptp.c | 118 +++++
drivers/net/dpaa/dpaa_rxtx.c | 378 ++++++++++++--
drivers/net/dpaa/dpaa_rxtx.h | 152 +++---
drivers/net/dpaa/meson.build | 1 +
19 files changed, 2105 insertions(+), 512 deletions(-)
create mode 100644 drivers/net/dpaa/dpaa_ptp.c
--
2.25.1
^ permalink raw reply [flat|nested] 129+ messages in thread
* [PATCH v4 01/18] bus/dpaa: fix PFDRs leaks due to FQRNIs
2024-09-30 12:15 ` [PATCH v4 00/18] NXP DPAA ETH driver enhancement and fixes Hemant Agrawal
@ 2024-09-30 12:15 ` Hemant Agrawal
2024-09-30 12:15 ` [PATCH v4 02/18] net/dpaa: fix typecasting ch ID to u32 Hemant Agrawal
` (18 subsequent siblings)
19 siblings, 0 replies; 129+ messages in thread
From: Hemant Agrawal @ 2024-09-30 12:15 UTC (permalink / raw)
To: dev; +Cc: ferruh.yigit, Gagandeep Singh, stable
From: Gagandeep Singh <g.singh@nxp.com>
When a Retire FQ command is executed on a FQ in the
Tentatively Scheduled or Parked states, in that case FQ
is retired immediately and a FQRNI (Frame Queue Retirement
Notification Immediate) message is generated. Software
must read this message from MR and consume it to free
the memory used by it.
Although it is not mentioned about which memory to be used
by FQRNIs in the RM but through experiments it is proven
that it can use PFDRs. So if these messages are allowed to
build up indefinitely then PFDR resources can become exhausted
and cause enqueues to stall. Therefore software must consume
these MR messages on a regular basis to avoid depleting
the available PFDR resources.
This is the PFDRs leak issue which user can experienace while
using the DPDK crypto driver and creating and destroying the
sessions multiple times. On a session destroy, DPDK calls the
qman_retire_fq() for each FQ used by the session, but it does
not handle the FQRNIs generated and allowed them to build up
indefinitely in MR.
This patch fixes this issue by consuming the FQRNIs received
from MR immediately after FQ retire by calling drain_mr_fqrni().
Please note that this drain_mr_fqrni() only look for
FQRNI type messages to consume. If there are other type of messages
like FQRN, FQRL, FQPN, ERN etc. also coming on MR then those
messages need to be handled separately.
Fixes: c47ff048b99a ("bus/dpaa: add QMAN driver core routines")
Cc: stable@dpdk.org
Signed-off-by: Gagandeep Singh <g.singh@nxp.com>
---
drivers/bus/dpaa/base/qbman/qman.c | 46 ++++++++++++++++--------------
1 file changed, 25 insertions(+), 21 deletions(-)
diff --git a/drivers/bus/dpaa/base/qbman/qman.c b/drivers/bus/dpaa/base/qbman/qman.c
index 301057723e..9c90ee25a6 100644
--- a/drivers/bus/dpaa/base/qbman/qman.c
+++ b/drivers/bus/dpaa/base/qbman/qman.c
@@ -292,10 +292,32 @@ static inline void qman_stop_dequeues_ex(struct qman_portal *p)
qm_dqrr_set_maxfill(&p->p, 0);
}
+static inline void qm_mr_pvb_update(struct qm_portal *portal)
+{
+ register struct qm_mr *mr = &portal->mr;
+ const struct qm_mr_entry *res = qm_cl(mr->ring, mr->pi);
+
+#ifdef RTE_LIBRTE_DPAA_HWDEBUG
+ DPAA_ASSERT(mr->pmode == qm_mr_pvb);
+#endif
+ /* when accessing 'verb', use __raw_readb() to ensure that compiler
+ * inlining doesn't try to optimise out "excess reads".
+ */
+ if ((__raw_readb(&res->ern.verb) & QM_MR_VERB_VBIT) == mr->vbit) {
+ mr->pi = (mr->pi + 1) & (QM_MR_SIZE - 1);
+ if (!mr->pi)
+ mr->vbit ^= QM_MR_VERB_VBIT;
+ mr->fill++;
+ res = MR_INC(res);
+ }
+ dcbit_ro(res);
+}
+
static int drain_mr_fqrni(struct qm_portal *p)
{
const struct qm_mr_entry *msg;
loop:
+ qm_mr_pvb_update(p);
msg = qm_mr_current(p);
if (!msg) {
/*
@@ -317,6 +339,7 @@ static int drain_mr_fqrni(struct qm_portal *p)
do {
now = mfatb();
} while ((then + 10000) > now);
+ qm_mr_pvb_update(p);
msg = qm_mr_current(p);
if (!msg)
return 0;
@@ -479,27 +502,6 @@ static inline int qm_mr_init(struct qm_portal *portal,
return 0;
}
-static inline void qm_mr_pvb_update(struct qm_portal *portal)
-{
- register struct qm_mr *mr = &portal->mr;
- const struct qm_mr_entry *res = qm_cl(mr->ring, mr->pi);
-
-#ifdef RTE_LIBRTE_DPAA_HWDEBUG
- DPAA_ASSERT(mr->pmode == qm_mr_pvb);
-#endif
- /* when accessing 'verb', use __raw_readb() to ensure that compiler
- * inlining doesn't try to optimise out "excess reads".
- */
- if ((__raw_readb(&res->ern.verb) & QM_MR_VERB_VBIT) == mr->vbit) {
- mr->pi = (mr->pi + 1) & (QM_MR_SIZE - 1);
- if (!mr->pi)
- mr->vbit ^= QM_MR_VERB_VBIT;
- mr->fill++;
- res = MR_INC(res);
- }
- dcbit_ro(res);
-}
-
struct qman_portal *
qman_init_portal(struct qman_portal *portal,
const struct qm_portal_config *c,
@@ -1794,6 +1796,8 @@ int qman_retire_fq(struct qman_fq *fq, u32 *flags)
}
out:
FQUNLOCK(fq);
+ /* Draining FQRNIs, if any */
+ drain_mr_fqrni(&p->p);
return rval;
}
--
2.25.1
^ permalink raw reply [flat|nested] 129+ messages in thread
* [PATCH v4 02/18] net/dpaa: fix typecasting ch ID to u32
2024-09-30 12:15 ` [PATCH v4 00/18] NXP DPAA ETH driver enhancement and fixes Hemant Agrawal
2024-09-30 12:15 ` [PATCH v4 01/18] bus/dpaa: fix PFDRs leaks due to FQRNIs Hemant Agrawal
@ 2024-09-30 12:15 ` Hemant Agrawal
2024-09-30 12:15 ` [PATCH v4 03/18] bus/dpaa: fix VSP for 1G fm1-mac9 and 10 Hemant Agrawal
` (17 subsequent siblings)
19 siblings, 0 replies; 129+ messages in thread
From: Hemant Agrawal @ 2024-09-30 12:15 UTC (permalink / raw)
To: dev; +Cc: ferruh.yigit, Rohit Raj, hemant.agrawal, stable
From: Rohit Raj <rohit.raj@nxp.com>
Avoid typecasting ch_id to u32 and passing it to another API, since
writing through the wider pointer can corrupt adjacent data. Instead,
create a new u32 variable and typecast it back to u16 after it gets
updated by the API.
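A small illustrative sketch of the failure mode and the fix, outside the
driver context (all names here are hypothetical):

    #include <stdint.h>

    /* Writing through a "u32 *" that really points at a u16 member
     * clobbers the two bytes that follow it; using a genuine u32
     * temporary and narrowing afterwards avoids this.
     */
    struct example {
        uint16_t ch_id;
        uint16_t neighbour;   /* would be corrupted by the old cast */
    };

    static void api_fills_u32(uint32_t *out)
    {
        *out = 0x12345678;
    }

    static void safe_update(struct example *e)
    {
        uint32_t tmp;

        api_fills_u32(&tmp);        /* API updates a real u32 */
        e->ch_id = (uint16_t)tmp;   /* narrow it back afterwards */
    }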
Fixes: 0c504f6950b6 ("net/dpaa: support push mode")
Cc: hemant.agrawal@nxp.com
Cc: stable@dpdk.org
Signed-off-by: Rohit Raj <rohit.raj@nxp.com>
---
drivers/net/dpaa/dpaa_ethdev.c | 6 ++++--
1 file changed, 4 insertions(+), 2 deletions(-)
diff --git a/drivers/net/dpaa/dpaa_ethdev.c b/drivers/net/dpaa/dpaa_ethdev.c
index 060b8c678f..1a2de5240f 100644
--- a/drivers/net/dpaa/dpaa_ethdev.c
+++ b/drivers/net/dpaa/dpaa_ethdev.c
@@ -972,7 +972,7 @@ int dpaa_eth_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
struct fman_if *fif = dev->process_private;
struct qman_fq *rxq = &dpaa_intf->rx_queues[queue_idx];
struct qm_mcc_initfq opts = {0};
- u32 flags = 0;
+ u32 ch_id, flags = 0;
int ret;
u32 buffsz = rte_pktmbuf_data_room_size(mp) - RTE_PKTMBUF_HEADROOM;
uint32_t max_rx_pktlen;
@@ -1096,7 +1096,9 @@ int dpaa_eth_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
DPAA_IF_RX_CONTEXT_STASH;
/*Create a channel and associate given queue with the channel*/
- qman_alloc_pool_range((u32 *)&rxq->ch_id, 1, 1, 0);
+ qman_alloc_pool_range(&ch_id, 1, 1, 0);
+ rxq->ch_id = (u16)ch_id;
+
opts.we_mask = opts.we_mask | QM_INITFQ_WE_DESTWQ;
opts.fqd.dest.channel = rxq->ch_id;
opts.fqd.dest.wq = DPAA_IF_RX_PRIORITY;
--
2.25.1
^ permalink raw reply [flat|nested] 129+ messages in thread
* [PATCH v4 03/18] bus/dpaa: fix VSP for 1G fm1-mac9 and 10
2024-09-30 12:15 ` [PATCH v4 00/18] NXP DPAA ETH driver enhancement and fixes Hemant Agrawal
2024-09-30 12:15 ` [PATCH v4 01/18] bus/dpaa: fix PFDRs leaks due to FQRNIs Hemant Agrawal
2024-09-30 12:15 ` [PATCH v4 02/18] net/dpaa: fix typecasting ch ID to u32 Hemant Agrawal
@ 2024-09-30 12:15 ` Hemant Agrawal
2024-09-30 12:15 ` [PATCH v4 04/18] bus/dpaa: fix the fman details status Hemant Agrawal
` (16 subsequent siblings)
19 siblings, 0 replies; 129+ messages in thread
From: Hemant Agrawal @ 2024-09-30 12:15 UTC (permalink / raw)
To: dev; +Cc: ferruh.yigit, stable
There is no need to classify the interface separately for 1G and 10G.
Note that VSP (Virtual Storage Profile) is the DPAA equivalent of an
SRIOV configuration, used to logically divide a physical port into
virtual ports.
Fixes: e0718bb2ca95 ("bus/dpaa: add virtual storage profile port init")
Cc: stable@dpdk.org
Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
---
drivers/bus/dpaa/base/fman/fman.c | 29 +++++++++++++++++++++++++++--
1 file changed, 27 insertions(+), 2 deletions(-)
diff --git a/drivers/bus/dpaa/base/fman/fman.c b/drivers/bus/dpaa/base/fman/fman.c
index 41195eb0a7..beeb03dbf2 100644
--- a/drivers/bus/dpaa/base/fman/fman.c
+++ b/drivers/bus/dpaa/base/fman/fman.c
@@ -153,7 +153,7 @@ static void fman_if_vsp_init(struct __fman_if *__if)
size_t lenp;
const uint8_t mac_idx[] = {-1, 0, 1, 2, 3, 4, 5, 6, 7, 0, 1};
- if (__if->__if.mac_type == fman_mac_1g) {
+ if (__if->__if.mac_idx <= 8) {
for_each_compatible_node(dev, NULL,
"fsl,fman-port-1g-rx-extended-args") {
prop = of_get_property(dev, "cell-index", &lenp);
@@ -176,7 +176,32 @@ static void fman_if_vsp_init(struct __fman_if *__if)
}
}
}
- } else if (__if->__if.mac_type == fman_mac_10g) {
+
+ for_each_compatible_node(dev, NULL,
+ "fsl,fman-port-op-extended-args") {
+ prop = of_get_property(dev, "cell-index", &lenp);
+
+ if (prop) {
+ cell_index = of_read_number(&prop[0],
+ lenp / sizeof(phandle));
+
+ if (cell_index == __if->__if.mac_idx) {
+ prop = of_get_property(dev,
+ "vsp-window",
+ &lenp);
+
+ if (prop) {
+ __if->__if.num_profiles =
+ of_read_number(&prop[0],
+ 1);
+ __if->__if.base_profile_id =
+ of_read_number(&prop[1],
+ 1);
+ }
+ }
+ }
+ }
+ } else {
for_each_compatible_node(dev, NULL,
"fsl,fman-port-10g-rx-extended-args") {
prop = of_get_property(dev, "cell-index", &lenp);
--
2.25.1
^ permalink raw reply [flat|nested] 129+ messages in thread
* [PATCH v4 04/18] bus/dpaa: fix the fman details status
2024-09-30 12:15 ` [PATCH v4 00/18] NXP DPAA ETH driver enhancement and fixes Hemant Agrawal
` (2 preceding siblings ...)
2024-09-30 12:15 ` [PATCH v4 03/18] bus/dpaa: fix VSP for 1G fm1-mac9 and 10 Hemant Agrawal
@ 2024-09-30 12:15 ` Hemant Agrawal
2024-09-30 12:15 ` [PATCH v4 05/18] bus/dpaa: add port buffer manager stats Hemant Agrawal
` (15 subsequent siblings)
19 siblings, 0 replies; 129+ messages in thread
From: Hemant Agrawal @ 2024-09-30 12:15 UTC (permalink / raw)
To: dev; +Cc: ferruh.yigit, stable
Fix the incorrect bracket placement in the statistics calculation.
This corrects "(a | b) << 32" to "a | (b << 32)".
Fixes: e62a3f4183f1 ("bus/dpaa: fix statistics reading")
Cc: stable@dpdk.org
Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
---
drivers/bus/dpaa/base/fman/fman_hw.c | 9 +++++----
1 file changed, 5 insertions(+), 4 deletions(-)
diff --git a/drivers/bus/dpaa/base/fman/fman_hw.c b/drivers/bus/dpaa/base/fman/fman_hw.c
index 24a99f7235..97e792806f 100644
--- a/drivers/bus/dpaa/base/fman/fman_hw.c
+++ b/drivers/bus/dpaa/base/fman/fman_hw.c
@@ -243,10 +243,11 @@ fman_if_stats_get_all(struct fman_if *p, uint64_t *value, int n)
int i;
uint64_t base_offset = offsetof(struct memac_regs, reoct_l);
- for (i = 0; i < n; i++)
- value[i] = (((u64)in_be32((char *)regs + base_offset + 8 * i) |
- (u64)in_be32((char *)regs + base_offset +
- 8 * i + 4)) << 32);
+ for (i = 0; i < n; i++) {
+ uint64_t a = in_be32((char *)regs + base_offset + 8 * i);
+ uint64_t b = in_be32((char *)regs + base_offset + 8 * i + 4);
+ value[i] = a | b << 32;
+ }
}
void
--
2.25.1
^ permalink raw reply [flat|nested] 129+ messages in thread
* [PATCH v4 05/18] bus/dpaa: add port buffer manager stats
2024-09-30 12:15 ` [PATCH v4 00/18] NXP DPAA ETH driver enhancement and fixes Hemant Agrawal
` (3 preceding siblings ...)
2024-09-30 12:15 ` [PATCH v4 04/18] bus/dpaa: fix the fman details status Hemant Agrawal
@ 2024-09-30 12:15 ` Hemant Agrawal
2024-09-30 12:15 ` [PATCH v4 06/18] net/dpaa: support Tx confirmation to enable PTP Hemant Agrawal
` (14 subsequent siblings)
19 siblings, 0 replies; 129+ messages in thread
From: Hemant Agrawal @ 2024-09-30 12:15 UTC (permalink / raw)
To: dev; +Cc: ferruh.yigit, Gagandeep Singh
Add BMI statistics and improve the existing extended
statistics.
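Once exposed by the PMD, the new counters (for example "rx_frame_counter")
appear through the standard xstats API; a hedged application-side sketch
for reading them, assuming an already-configured port:

    #include <stdio.h>
    #include <stdlib.h>
    #include <inttypes.h>
    #include <rte_ethdev.h>

    /* Dump all extended stats of a port, including the BMI counters
     * added by this patch.
     */
    static void dump_xstats(uint16_t port_id)
    {
        int i, n = rte_eth_xstats_get(port_id, NULL, 0);
        struct rte_eth_xstat *vals;
        struct rte_eth_xstat_name *names;

        if (n <= 0)
            return;
        vals = calloc(n, sizeof(*vals));
        names = calloc(n, sizeof(*names));
        if (!vals || !names)
            goto out;
        rte_eth_xstats_get_names(port_id, names, n);
        rte_eth_xstats_get(port_id, vals, n);
        for (i = 0; i < n; i++)
            printf("%s: %" PRIu64 "\n", names[i].name, vals[i].value);
    out:
        free(vals);
        free(names);
    }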
Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Signed-off-by: Gagandeep Singh <g.singh@nxp.com>
---
drivers/bus/dpaa/base/fman/fman_hw.c | 61 ++++++++++++++++++++++++++++
drivers/bus/dpaa/include/fman.h | 4 +-
drivers/bus/dpaa/include/fsl_fman.h | 12 ++++++
drivers/bus/dpaa/version.map | 4 ++
drivers/net/dpaa/dpaa_ethdev.c | 46 ++++++++++++++++++---
drivers/net/dpaa/dpaa_ethdev.h | 12 ++++++
6 files changed, 132 insertions(+), 7 deletions(-)
diff --git a/drivers/bus/dpaa/base/fman/fman_hw.c b/drivers/bus/dpaa/base/fman/fman_hw.c
index 97e792806f..124c69edb4 100644
--- a/drivers/bus/dpaa/base/fman/fman_hw.c
+++ b/drivers/bus/dpaa/base/fman/fman_hw.c
@@ -267,6 +267,67 @@ fman_if_stats_reset(struct fman_if *p)
;
}
+void
+fman_if_bmi_stats_enable(struct fman_if *p)
+{
+ struct __fman_if *m = container_of(p, struct __fman_if, __if);
+ struct rx_bmi_regs *regs = (struct rx_bmi_regs *)m->bmi_map;
+ uint32_t tmp;
+
+ tmp = in_be32(&regs->fmbm_rstc);
+
+ tmp |= FMAN_BMI_COUNTERS_EN;
+
+ out_be32(&regs->fmbm_rstc, tmp);
+}
+
+void
+fman_if_bmi_stats_disable(struct fman_if *p)
+{
+ struct __fman_if *m = container_of(p, struct __fman_if, __if);
+ struct rx_bmi_regs *regs = (struct rx_bmi_regs *)m->bmi_map;
+ uint32_t tmp;
+
+ tmp = in_be32(&regs->fmbm_rstc);
+
+ tmp &= ~FMAN_BMI_COUNTERS_EN;
+
+ out_be32(&regs->fmbm_rstc, tmp);
+}
+
+void
+fman_if_bmi_stats_get_all(struct fman_if *p, uint64_t *value)
+{
+ struct __fman_if *m = container_of(p, struct __fman_if, __if);
+ struct rx_bmi_regs *regs = (struct rx_bmi_regs *)m->bmi_map;
+ int i = 0;
+
+ value[i++] = (u32)in_be32(&regs->fmbm_rfrc);
+ value[i++] = (u32)in_be32(&regs->fmbm_rfbc);
+ value[i++] = (u32)in_be32(&regs->fmbm_rlfc);
+ value[i++] = (u32)in_be32(&regs->fmbm_rffc);
+ value[i++] = (u32)in_be32(&regs->fmbm_rfdc);
+ value[i++] = (u32)in_be32(&regs->fmbm_rfldec);
+ value[i++] = (u32)in_be32(&regs->fmbm_rodc);
+ value[i++] = (u32)in_be32(&regs->fmbm_rbdc);
+}
+
+void
+fman_if_bmi_stats_reset(struct fman_if *p)
+{
+ struct __fman_if *m = container_of(p, struct __fman_if, __if);
+ struct rx_bmi_regs *regs = (struct rx_bmi_regs *)m->bmi_map;
+
+ out_be32(&regs->fmbm_rfrc, 0);
+ out_be32(&regs->fmbm_rfbc, 0);
+ out_be32(&regs->fmbm_rlfc, 0);
+ out_be32(&regs->fmbm_rffc, 0);
+ out_be32(&regs->fmbm_rfdc, 0);
+ out_be32(&regs->fmbm_rfldec, 0);
+ out_be32(&regs->fmbm_rodc, 0);
+ out_be32(&regs->fmbm_rbdc, 0);
+}
+
void
fman_if_promiscuous_enable(struct fman_if *p)
{
diff --git a/drivers/bus/dpaa/include/fman.h b/drivers/bus/dpaa/include/fman.h
index 3a6dd555a7..60681068ea 100644
--- a/drivers/bus/dpaa/include/fman.h
+++ b/drivers/bus/dpaa/include/fman.h
@@ -56,6 +56,8 @@
#define FMAN_PORT_BMI_FIFO_UNITS 0x100
#define FMAN_PORT_IC_OFFSET_UNITS 0x10
+#define FMAN_BMI_COUNTERS_EN 0x80000000
+
#define FMAN_ENABLE_BPOOL_DEPLETION 0xF00000F0
#define HASH_CTRL_MCAST_EN 0x00000100
@@ -260,7 +262,7 @@ struct rx_bmi_regs {
/**< Buffer Manager pool Information-*/
uint32_t fmbm_acnt[FMAN_PORT_MAX_EXT_POOLS_NUM];
/**< Allocate Counter-*/
- uint32_t reserved0130[8];
+ uint32_t reserved0120[16];
/**< 0x130/0x140 - 0x15F reserved -*/
uint32_t fmbm_rcgm[FMAN_PORT_CG_MAP_NUM];
/**< Congestion Group Map*/
diff --git a/drivers/bus/dpaa/include/fsl_fman.h b/drivers/bus/dpaa/include/fsl_fman.h
index 20690f8329..5a9750ad0c 100644
--- a/drivers/bus/dpaa/include/fsl_fman.h
+++ b/drivers/bus/dpaa/include/fsl_fman.h
@@ -60,6 +60,18 @@ void fman_if_stats_reset(struct fman_if *p);
__rte_internal
void fman_if_stats_get_all(struct fman_if *p, uint64_t *value, int n);
+__rte_internal
+void fman_if_bmi_stats_enable(struct fman_if *p);
+
+__rte_internal
+void fman_if_bmi_stats_disable(struct fman_if *p);
+
+__rte_internal
+void fman_if_bmi_stats_get_all(struct fman_if *p, uint64_t *value);
+
+__rte_internal
+void fman_if_bmi_stats_reset(struct fman_if *p);
+
/* Set ignore pause option for a specific interface */
void fman_if_set_rx_ignore_pause_frames(struct fman_if *p, bool enable);
diff --git a/drivers/bus/dpaa/version.map b/drivers/bus/dpaa/version.map
index 3f547f75cf..a17d57632e 100644
--- a/drivers/bus/dpaa/version.map
+++ b/drivers/bus/dpaa/version.map
@@ -24,6 +24,10 @@ INTERNAL {
fman_dealloc_bufs_mask_hi;
fman_dealloc_bufs_mask_lo;
fman_if_add_mac_addr;
+ fman_if_bmi_stats_enable;
+ fman_if_bmi_stats_disable;
+ fman_if_bmi_stats_get_all;
+ fman_if_bmi_stats_reset;
fman_if_clear_mac_addr;
fman_if_disable_rx;
fman_if_discard_rx_errors;
diff --git a/drivers/net/dpaa/dpaa_ethdev.c b/drivers/net/dpaa/dpaa_ethdev.c
index 1a2de5240f..90b34e42f2 100644
--- a/drivers/net/dpaa/dpaa_ethdev.c
+++ b/drivers/net/dpaa/dpaa_ethdev.c
@@ -131,6 +131,22 @@ static const struct rte_dpaa_xstats_name_off dpaa_xstats_strings[] = {
offsetof(struct dpaa_if_stats, tvlan)},
{"rx_undersized",
offsetof(struct dpaa_if_stats, tund)},
+ {"rx_frame_counter",
+ offsetof(struct dpaa_if_rx_bmi_stats, fmbm_rfrc)},
+ {"rx_bad_frames_count",
+ offsetof(struct dpaa_if_rx_bmi_stats, fmbm_rfbc)},
+ {"rx_large_frames_count",
+ offsetof(struct dpaa_if_rx_bmi_stats, fmbm_rlfc)},
+ {"rx_filter_frames_count",
+ offsetof(struct dpaa_if_rx_bmi_stats, fmbm_rffc)},
+ {"rx_frame_discrad_count",
+ offsetof(struct dpaa_if_rx_bmi_stats, fmbm_rfdc)},
+ {"rx_frame_list_dma_err_count",
+ offsetof(struct dpaa_if_rx_bmi_stats, fmbm_rfldec)},
+ {"rx_out_of_buffer_discard ",
+ offsetof(struct dpaa_if_rx_bmi_stats, fmbm_rodc)},
+ {"rx_buf_diallocate",
+ offsetof(struct dpaa_if_rx_bmi_stats, fmbm_rbdc)},
};
static struct rte_dpaa_driver rte_dpaa_pmd;
@@ -430,6 +446,7 @@ static void dpaa_interrupt_handler(void *param)
static int dpaa_eth_dev_start(struct rte_eth_dev *dev)
{
struct dpaa_if *dpaa_intf = dev->data->dev_private;
+ struct fman_if *fif = dev->process_private;
uint16_t i;
PMD_INIT_FUNC_TRACE();
@@ -443,7 +460,9 @@ static int dpaa_eth_dev_start(struct rte_eth_dev *dev)
else
dev->tx_pkt_burst = dpaa_eth_queue_tx;
- fman_if_enable_rx(dev->process_private);
+ fman_if_bmi_stats_enable(fif);
+ fman_if_bmi_stats_reset(fif);
+ fman_if_enable_rx(fif);
for (i = 0; i < dev->data->nb_rx_queues; i++)
dev->data->rx_queue_state[i] = RTE_ETH_QUEUE_STATE_STARTED;
@@ -461,8 +480,10 @@ static int dpaa_eth_dev_stop(struct rte_eth_dev *dev)
PMD_INIT_FUNC_TRACE();
dev->data->dev_started = 0;
- if (!fif->is_shared_mac)
+ if (!fif->is_shared_mac) {
+ fman_if_bmi_stats_disable(fif);
fman_if_disable_rx(fif);
+ }
dev->tx_pkt_burst = dpaa_eth_tx_drop_all;
for (i = 0; i < dev->data->nb_rx_queues; i++)
@@ -769,6 +790,7 @@ static int dpaa_eth_stats_reset(struct rte_eth_dev *dev)
PMD_INIT_FUNC_TRACE();
fman_if_stats_reset(dev->process_private);
+ fman_if_bmi_stats_reset(dev->process_private);
return 0;
}
@@ -777,8 +799,9 @@ static int
dpaa_dev_xstats_get(struct rte_eth_dev *dev, struct rte_eth_xstat *xstats,
unsigned int n)
{
- unsigned int i = 0, num = RTE_DIM(dpaa_xstats_strings);
+ unsigned int i = 0, j, num = RTE_DIM(dpaa_xstats_strings);
uint64_t values[sizeof(struct dpaa_if_stats) / 8];
+ unsigned int bmi_count = sizeof(struct dpaa_if_rx_bmi_stats) / 4;
if (n < num)
return num;
@@ -789,10 +812,16 @@ dpaa_dev_xstats_get(struct rte_eth_dev *dev, struct rte_eth_xstat *xstats,
fman_if_stats_get_all(dev->process_private, values,
sizeof(struct dpaa_if_stats) / 8);
- for (i = 0; i < num; i++) {
+ for (i = 0; i < num - (bmi_count - 1); i++) {
xstats[i].id = i;
xstats[i].value = values[dpaa_xstats_strings[i].offset / 8];
}
+ fman_if_bmi_stats_get_all(dev->process_private, values);
+ for (j = 0; i < num; i++, j++) {
+ xstats[i].id = i;
+ xstats[i].value = values[j];
+ }
+
return i;
}
@@ -819,8 +848,9 @@ static int
dpaa_xstats_get_by_id(struct rte_eth_dev *dev, const uint64_t *ids,
uint64_t *values, unsigned int n)
{
- unsigned int i, stat_cnt = RTE_DIM(dpaa_xstats_strings);
+ unsigned int i, j, stat_cnt = RTE_DIM(dpaa_xstats_strings);
uint64_t values_copy[sizeof(struct dpaa_if_stats) / 8];
+ unsigned int bmi_count = sizeof(struct dpaa_if_rx_bmi_stats) / 4;
if (!ids) {
if (n < stat_cnt)
@@ -832,10 +862,14 @@ dpaa_xstats_get_by_id(struct rte_eth_dev *dev, const uint64_t *ids,
fman_if_stats_get_all(dev->process_private, values_copy,
sizeof(struct dpaa_if_stats) / 8);
- for (i = 0; i < stat_cnt; i++)
+ for (i = 0; i < stat_cnt - (bmi_count - 1); i++)
values[i] =
values_copy[dpaa_xstats_strings[i].offset / 8];
+ fman_if_bmi_stats_get_all(dev->process_private, values);
+ for (j = 0; i < stat_cnt; i++, j++)
+ values[i] = values_copy[j];
+
return stat_cnt;
}
diff --git a/drivers/net/dpaa/dpaa_ethdev.h b/drivers/net/dpaa/dpaa_ethdev.h
index b6c61b8b6b..261a5a3ca7 100644
--- a/drivers/net/dpaa/dpaa_ethdev.h
+++ b/drivers/net/dpaa/dpaa_ethdev.h
@@ -212,6 +212,18 @@ dpaa_rx_cb_atomic(void *event,
const struct qm_dqrr_entry *dqrr,
void **bufs);
+struct dpaa_if_rx_bmi_stats {
+ uint32_t fmbm_rstc; /**< Rx Statistics Counters*/
+ uint32_t fmbm_rfrc; /**< Rx Frame Counter*/
+ uint32_t fmbm_rfbc; /**< Rx Bad Frames Counter*/
+ uint32_t fmbm_rlfc; /**< Rx Large Frames Counter*/
+ uint32_t fmbm_rffc; /**< Rx Filter Frames Counter*/
+ uint32_t fmbm_rfdc; /**< Rx Frame Discard Counter*/
+ uint32_t fmbm_rfldec; /**< Rx Frames List DMA Error Counter*/
+ uint32_t fmbm_rodc; /**< Rx Out of Buffers Discard Counter*/
+ uint32_t fmbm_rbdc; /**< Rx Buffers Deallocate Counter*/
+};
+
/* PMD related logs */
extern int dpaa_logtype_pmd;
#define RTE_LOGTYPE_DPAA_PMD dpaa_logtype_pmd
--
2.25.1
* [PATCH v4 06/18] net/dpaa: support Tx confirmation to enable PTP
2024-09-30 12:15 ` [PATCH v4 00/18] NXP DPAA ETH driver enhancement and fixes Hemant Agrawal
` (4 preceding siblings ...)
2024-09-30 12:15 ` [PATCH v4 05/18] bus/dpaa: add port buffer manager stats Hemant Agrawal
@ 2024-09-30 12:15 ` Hemant Agrawal
2024-09-30 12:15 ` [PATCH v4 07/18] net/dpaa: add support to separate Tx conf queues Hemant Agrawal
` (13 subsequent siblings)
19 siblings, 0 replies; 129+ messages in thread
From: Hemant Agrawal @ 2024-09-30 12:15 UTC (permalink / raw)
To: dev; +Cc: ferruh.yigit, Vanshika Shukla
From: Vanshika Shukla <vanshika.shukla@nxp.com>
TX confirmation provides dedicated confirmation
queues for transmitted packets. These queues are
used by software to get the transmit status and to
release the buffers of transmitted packets.
This patch also makes IEEE 1588 support a devargs option.
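As a sketch of how such a flag is consumed from devargs (standalone
illustration using librte_kvargs; the key name follows this patch,
everything else is assumed):

#include <string.h>
#include <rte_kvargs.h>

/* Return 1 when "key=1" (e.g. drv_ieee1588=1) is present in a devargs
 * string such as "fm1-mac3,drv_ieee1588=1"; illustrative only.
 */
static int
match_one(const char *key, const char *value, void *opaque)
{
	int *hit = opaque;

	(void)key;
	if (strcmp(value, "1") == 0)
		*hit = 1;
	return 0;
}

static int
devargs_flag_set(const char *args, const char *key)
{
	struct rte_kvargs *kvlist = rte_kvargs_parse(args, NULL);
	int hit = 0;

	if (kvlist == NULL)
		return 0;
	if (rte_kvargs_count(kvlist, key))
		rte_kvargs_process(kvlist, key, match_one, &hit);
	rte_kvargs_free(kvlist);
	return hit;
}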
Signed-off-by: Vanshika Shukla <vanshika.shukla@nxp.com>
Acked-by: Hemant Agrawal <hemant.agrawal@nxp.com>
---
doc/guides/nics/dpaa.rst | 3 +
drivers/net/dpaa/dpaa_ethdev.c | 124 ++++++++++++++++++++++++++-------
drivers/net/dpaa/dpaa_ethdev.h | 4 +-
drivers/net/dpaa/dpaa_rxtx.c | 49 +++++++++++++
drivers/net/dpaa/dpaa_rxtx.h | 2 +
5 files changed, 154 insertions(+), 28 deletions(-)
diff --git a/doc/guides/nics/dpaa.rst b/doc/guides/nics/dpaa.rst
index e8402dff52..acf4daab02 100644
--- a/doc/guides/nics/dpaa.rst
+++ b/doc/guides/nics/dpaa.rst
@@ -264,6 +264,9 @@ for details.
Done
testpmd>
+* Use dev arg option ``drv_ieee1588=1`` to enable ieee 1588 support at
+ driver level. e.g. ``dpaa:fm1-mac3,drv_ieee1588=1``
+
FMAN Config
-----------
diff --git a/drivers/net/dpaa/dpaa_ethdev.c b/drivers/net/dpaa/dpaa_ethdev.c
index 90b34e42f2..bba305cfb1 100644
--- a/drivers/net/dpaa/dpaa_ethdev.c
+++ b/drivers/net/dpaa/dpaa_ethdev.c
@@ -1,7 +1,7 @@
/* SPDX-License-Identifier: BSD-3-Clause
*
* Copyright 2016 Freescale Semiconductor, Inc. All rights reserved.
- * Copyright 2017-2020 NXP
+ * Copyright 2017-2020,2022-2024 NXP
*
*/
/* System headers */
@@ -30,6 +30,7 @@
#include <rte_eal.h>
#include <rte_alarm.h>
#include <rte_ether.h>
+#include <rte_kvargs.h>
#include <ethdev_driver.h>
#include <rte_malloc.h>
#include <rte_ring.h>
@@ -50,6 +51,7 @@
#include <process.h>
#include <fmlib/fm_ext.h>
+#define DRIVER_IEEE1588 "drv_ieee1588"
#define CHECK_INTERVAL 100 /* 100ms */
#define MAX_REPEAT_TIME 90 /* 9s (90 * 100ms) in total */
@@ -83,6 +85,7 @@ static uint64_t dev_tx_offloads_nodis =
static int is_global_init;
static int fmc_q = 1; /* Indicates the use of static fmc for distribution */
static int default_q; /* use default queue - FMC is not executed*/
+int dpaa_ieee_1588; /* use to indicate if IEEE 1588 is enabled for the driver */
/* At present we only allow up to 4 push mode queues as default - as each of
* this queue need dedicated portal and we are short of portals.
*/
@@ -1826,9 +1829,15 @@ static int dpaa_tx_queue_init(struct qman_fq *fq,
opts.fqd.dest.wq = DPAA_IF_TX_PRIORITY;
opts.fqd.fq_ctrl = QM_FQCTRL_PREFERINCACHE;
opts.fqd.context_b = 0;
- /* no tx-confirmation */
- opts.fqd.context_a.hi = 0x80000000 | fman_dealloc_bufs_mask_hi;
- opts.fqd.context_a.lo = 0 | fman_dealloc_bufs_mask_lo;
+ if (dpaa_ieee_1588) {
+ opts.fqd.context_a.lo = 0;
+ opts.fqd.context_a.hi = fman_dealloc_bufs_mask_hi;
+ } else {
+ /* no tx-confirmation */
+ opts.fqd.context_a.lo = fman_dealloc_bufs_mask_lo;
+ opts.fqd.context_a.hi = 0x80000000 | fman_dealloc_bufs_mask_hi;
+ }
+
if (fman_ip_rev >= FMAN_V3) {
/* Set B0V bit in contextA to set ASPID to 0 */
opts.fqd.context_a.hi |= 0x04000000;
@@ -1861,9 +1870,10 @@ static int dpaa_tx_queue_init(struct qman_fq *fq,
return ret;
}
-#ifdef RTE_LIBRTE_DPAA_DEBUG_DRIVER
-/* Initialise a DEBUG FQ ([rt]x_error, rx_default). */
-static int dpaa_debug_queue_init(struct qman_fq *fq, uint32_t fqid)
+/* Initialise a DEBUG FQ ([rt]x_error, rx_default) and DPAA TX CONFIRM queue
+ * to support PTP
+ */
+static int dpaa_def_queue_init(struct qman_fq *fq, uint32_t fqid)
{
struct qm_mcc_initfq opts = {0};
int ret;
@@ -1872,15 +1882,15 @@ static int dpaa_debug_queue_init(struct qman_fq *fq, uint32_t fqid)
ret = qman_reserve_fqid(fqid);
if (ret) {
- DPAA_PMD_ERR("Reserve debug fqid %d failed with ret: %d",
+ DPAA_PMD_ERR("Reserve fqid %d failed with ret: %d",
fqid, ret);
return -EINVAL;
}
/* "map" this Rx FQ to one of the interfaces Tx FQID */
- DPAA_PMD_DEBUG("Creating debug fq %p, fqid %d", fq, fqid);
+ DPAA_PMD_DEBUG("Creating fq %p, fqid %d", fq, fqid);
ret = qman_create_fq(fqid, QMAN_FQ_FLAG_NO_ENQUEUE, fq);
if (ret) {
- DPAA_PMD_ERR("create debug fqid %d failed with ret: %d",
+ DPAA_PMD_ERR("create fqid %d failed with ret: %d",
fqid, ret);
return ret;
}
@@ -1888,11 +1898,10 @@ static int dpaa_debug_queue_init(struct qman_fq *fq, uint32_t fqid)
opts.fqd.dest.wq = DPAA_IF_DEBUG_PRIORITY;
ret = qman_init_fq(fq, 0, &opts);
if (ret)
- DPAA_PMD_ERR("init debug fqid %d failed with ret: %d",
+ DPAA_PMD_ERR("init fqid %d failed with ret: %d",
fqid, ret);
return ret;
}
-#endif
/* Initialise a network interface */
static int
@@ -1927,6 +1936,43 @@ dpaa_dev_init_secondary(struct rte_eth_dev *eth_dev)
return 0;
}
+static int
+check_devargs_handler(__rte_unused const char *key, const char *value,
+ __rte_unused void *opaque)
+{
+ if (strcmp(value, "1"))
+ return -1;
+
+ return 0;
+}
+
+static int
+dpaa_get_devargs(struct rte_devargs *devargs, const char *key)
+{
+ struct rte_kvargs *kvlist;
+
+ if (!devargs)
+ return 0;
+
+ kvlist = rte_kvargs_parse(devargs->args, NULL);
+ if (!kvlist)
+ return 0;
+
+ if (!rte_kvargs_count(kvlist, key)) {
+ rte_kvargs_free(kvlist);
+ return 0;
+ }
+
+ if (rte_kvargs_process(kvlist, key,
+ check_devargs_handler, NULL) < 0) {
+ rte_kvargs_free(kvlist);
+ return 0;
+ }
+ rte_kvargs_free(kvlist);
+
+ return 1;
+}
+
/* Initialise a network interface */
static int
dpaa_dev_init(struct rte_eth_dev *eth_dev)
@@ -1944,6 +1990,7 @@ dpaa_dev_init(struct rte_eth_dev *eth_dev)
uint32_t dev_rx_fqids[DPAA_MAX_NUM_PCD_QUEUES];
int8_t dev_vspids[DPAA_MAX_NUM_PCD_QUEUES];
int8_t vsp_id = -1;
+ struct rte_device *dev = eth_dev->device;
PMD_INIT_FUNC_TRACE();
@@ -1960,6 +2007,9 @@ dpaa_dev_init(struct rte_eth_dev *eth_dev)
dpaa_intf->ifid = dev_id;
dpaa_intf->cfg = cfg;
+ if (dpaa_get_devargs(dev->devargs, DRIVER_IEEE1588))
+ dpaa_ieee_1588 = 1;
+
memset((char *)dev_rx_fqids, 0,
sizeof(uint32_t) * DPAA_MAX_NUM_PCD_QUEUES);
@@ -2079,6 +2129,14 @@ dpaa_dev_init(struct rte_eth_dev *eth_dev)
goto free_rx;
}
+ dpaa_intf->tx_conf_queues = rte_zmalloc(NULL, sizeof(struct qman_fq) *
+ MAX_DPAA_CORES, MAX_CACHELINE);
+ if (!dpaa_intf->tx_conf_queues) {
+ DPAA_PMD_ERR("Failed to alloc mem for TX conf queues\n");
+ ret = -ENOMEM;
+ goto free_rx;
+ }
+
/* If congestion control is enabled globally*/
if (td_tx_threshold) {
dpaa_intf->cgr_tx = rte_zmalloc(NULL,
@@ -2115,22 +2173,32 @@ dpaa_dev_init(struct rte_eth_dev *eth_dev)
}
dpaa_intf->nb_tx_queues = MAX_DPAA_CORES;
-#ifdef RTE_LIBRTE_DPAA_DEBUG_DRIVER
- ret = dpaa_debug_queue_init(&dpaa_intf->debug_queues
- [DPAA_DEBUG_FQ_RX_ERROR], fman_intf->fqid_rx_err);
- if (ret) {
- DPAA_PMD_ERR("DPAA RX ERROR queue init failed!");
- goto free_tx;
- }
- dpaa_intf->debug_queues[DPAA_DEBUG_FQ_RX_ERROR].dpaa_intf = dpaa_intf;
- ret = dpaa_debug_queue_init(&dpaa_intf->debug_queues
- [DPAA_DEBUG_FQ_TX_ERROR], fman_intf->fqid_tx_err);
- if (ret) {
- DPAA_PMD_ERR("DPAA TX ERROR queue init failed!");
- goto free_tx;
- }
- dpaa_intf->debug_queues[DPAA_DEBUG_FQ_TX_ERROR].dpaa_intf = dpaa_intf;
+#if !defined(RTE_LIBRTE_DPAA_DEBUG_DRIVER)
+ if (dpaa_ieee_1588)
#endif
+ {
+ ret = dpaa_def_queue_init(&dpaa_intf->debug_queues
+ [DPAA_DEBUG_FQ_RX_ERROR], fman_intf->fqid_rx_err);
+ if (ret) {
+ DPAA_PMD_ERR("DPAA RX ERROR queue init failed!");
+ goto free_tx;
+ }
+ dpaa_intf->debug_queues[DPAA_DEBUG_FQ_RX_ERROR].dpaa_intf = dpaa_intf;
+ ret = dpaa_def_queue_init(&dpaa_intf->debug_queues
+ [DPAA_DEBUG_FQ_TX_ERROR], fman_intf->fqid_tx_err);
+ if (ret) {
+ DPAA_PMD_ERR("DPAA TX ERROR queue init failed!");
+ goto free_tx;
+ }
+ dpaa_intf->debug_queues[DPAA_DEBUG_FQ_TX_ERROR].dpaa_intf = dpaa_intf;
+ ret = dpaa_def_queue_init(dpaa_intf->tx_conf_queues,
+ fman_intf->fqid_tx_confirm);
+ if (ret) {
+ DPAA_PMD_ERR("DPAA TX CONFIRM queue init failed!");
+ goto free_tx;
+ }
+ dpaa_intf->tx_conf_queues->dpaa_intf = dpaa_intf;
+ }
DPAA_PMD_DEBUG("All frame queues created");
@@ -2388,4 +2456,6 @@ static struct rte_dpaa_driver rte_dpaa_pmd = {
};
RTE_PMD_REGISTER_DPAA(net_dpaa, rte_dpaa_pmd);
+RTE_PMD_REGISTER_PARAM_STRING(net_dpaa,
+ DRIVER_IEEE1588 "=<int>");
RTE_LOG_REGISTER_DEFAULT(dpaa_logtype_pmd, NOTICE);
diff --git a/drivers/net/dpaa/dpaa_ethdev.h b/drivers/net/dpaa/dpaa_ethdev.h
index 261a5a3ca7..b427b29cb6 100644
--- a/drivers/net/dpaa/dpaa_ethdev.h
+++ b/drivers/net/dpaa/dpaa_ethdev.h
@@ -1,7 +1,7 @@
/* SPDX-License-Identifier: BSD-3-Clause
*
* Copyright (c) 2014-2016 Freescale Semiconductor, Inc. All rights reserved.
- * Copyright 2017-2019 NXP
+ * Copyright 2017-2024 NXP
*
*/
#ifndef __DPAA_ETHDEV_H__
@@ -112,6 +112,7 @@
#define FMC_FILE "/tmp/fmc.bin"
extern struct rte_mempool *dpaa_tx_sg_pool;
+extern int dpaa_ieee_1588;
/* structure to free external and indirect
* buffers.
@@ -131,6 +132,7 @@ struct dpaa_if {
struct qman_fq *rx_queues;
struct qman_cgr *cgr_rx;
struct qman_fq *tx_queues;
+ struct qman_fq *tx_conf_queues;
struct qman_cgr *cgr_tx;
struct qman_fq debug_queues[2];
uint16_t nb_rx_queues;
diff --git a/drivers/net/dpaa/dpaa_rxtx.c b/drivers/net/dpaa/dpaa_rxtx.c
index c2579d65ee..8593e20200 100644
--- a/drivers/net/dpaa/dpaa_rxtx.c
+++ b/drivers/net/dpaa/dpaa_rxtx.c
@@ -1082,6 +1082,9 @@ dpaa_eth_queue_tx(void *q, struct rte_mbuf **bufs, uint16_t nb_bufs)
uint32_t seqn, index, flags[DPAA_TX_BURST_SIZE] = {0};
struct dpaa_sw_buf_free buf_to_free[DPAA_MAX_SGS * DPAA_MAX_DEQUEUE_NUM_FRAMES];
uint32_t free_count = 0;
+ struct qman_fq *fq = q;
+ struct dpaa_if *dpaa_intf = fq->dpaa_intf;
+ struct qman_fq *fq_txconf = dpaa_intf->tx_conf_queues;
if (unlikely(!DPAA_PER_LCORE_PORTAL)) {
ret = rte_dpaa_portal_init((void *)0);
@@ -1162,6 +1165,10 @@ dpaa_eth_queue_tx(void *q, struct rte_mbuf **bufs, uint16_t nb_bufs)
mbuf = temp_mbuf;
realloc_mbuf = 0;
}
+
+ if (dpaa_ieee_1588)
+ fd_arr[loop].cmd |= DPAA_FD_CMD_FCO | qman_fq_fqid(fq_txconf);
+
indirect_buf:
state = tx_on_dpaa_pool(mbuf, bp_info,
&fd_arr[loop],
@@ -1190,6 +1197,9 @@ dpaa_eth_queue_tx(void *q, struct rte_mbuf **bufs, uint16_t nb_bufs)
sent += frames_to_send;
}
+ if (dpaa_ieee_1588)
+ dpaa_eth_tx_conf(fq_txconf);
+
DPAA_DP_LOG(DEBUG, "Transmitted %d buffers on queue: %p", sent, q);
for (loop = 0; loop < free_count; loop++) {
@@ -1200,6 +1210,45 @@ dpaa_eth_queue_tx(void *q, struct rte_mbuf **bufs, uint16_t nb_bufs)
return sent;
}
+void
+dpaa_eth_tx_conf(void *q)
+{
+ struct qman_fq *fq = q;
+ struct qm_dqrr_entry *dq;
+ int num_tx_conf, ret, dq_num;
+ uint32_t vdqcr_flags = 0;
+
+ if (unlikely(rte_dpaa_bpid_info == NULL &&
+ rte_eal_process_type() == RTE_PROC_SECONDARY))
+ rte_dpaa_bpid_info = fq->bp_array;
+
+ if (unlikely(!DPAA_PER_LCORE_PORTAL)) {
+ ret = rte_dpaa_portal_init((void *)0);
+ if (ret) {
+ DPAA_PMD_ERR("Failure in affining portal");
+ return;
+ }
+ }
+
+ num_tx_conf = DPAA_MAX_DEQUEUE_NUM_FRAMES - 2;
+
+ do {
+ dq_num = 0;
+ ret = qman_set_vdq(fq, num_tx_conf, vdqcr_flags);
+ if (ret)
+ return;
+ do {
+ dq = qman_dequeue(fq);
+ if (!dq)
+ continue;
+ dq_num++;
+ dpaa_display_frame_info(&dq->fd, fq->fqid, true);
+ qman_dqrr_consume(fq, dq);
+ dpaa_free_mbuf(&dq->fd);
+ } while (fq->flags & QMAN_FQ_STATE_VDQCR);
+ } while (dq_num == num_tx_conf);
+}
+
uint16_t
dpaa_eth_queue_tx_slow(void *q, struct rte_mbuf **bufs, uint16_t nb_bufs)
{
diff --git a/drivers/net/dpaa/dpaa_rxtx.h b/drivers/net/dpaa/dpaa_rxtx.h
index b2d7c0f2a3..042602e087 100644
--- a/drivers/net/dpaa/dpaa_rxtx.h
+++ b/drivers/net/dpaa/dpaa_rxtx.h
@@ -281,6 +281,8 @@ uint16_t dpaa_eth_queue_tx_slow(void *q, struct rte_mbuf **bufs,
uint16_t nb_bufs);
uint16_t dpaa_eth_queue_tx(void *q, struct rte_mbuf **bufs, uint16_t nb_bufs);
+void dpaa_eth_tx_conf(void *q);
+
uint16_t dpaa_eth_tx_drop_all(void *q __rte_unused,
struct rte_mbuf **bufs __rte_unused,
uint16_t nb_bufs __rte_unused);
--
2.25.1
* [PATCH v4 07/18] net/dpaa: add support to separate Tx conf queues
2024-09-30 12:15 ` [PATCH v4 00/18] NXP DPAA ETH driver enhancement and fixes Hemant Agrawal
` (5 preceding siblings ...)
2024-09-30 12:15 ` [PATCH v4 06/18] net/dpaa: support Tx confirmation to enable PTP Hemant Agrawal
@ 2024-09-30 12:15 ` Hemant Agrawal
2024-09-30 12:15 ` [PATCH v4 08/18] net/dpaa: share MAC FMC scheme and CC parse Hemant Agrawal
` (12 subsequent siblings)
19 siblings, 0 replies; 129+ messages in thread
From: Hemant Agrawal @ 2024-09-30 12:15 UTC (permalink / raw)
To: dev; +Cc: ferruh.yigit, Vanshika Shukla
From: Vanshika Shukla <vanshika.shukla@nxp.com>
This patch separates Tx confirmation queues for kernel
and DPDK so as to support the VSP case.
Signed-off-by: Vanshika Shukla <vanshika.shukla@nxp.com>
Acked-by: Hemant Agrawal <hemant.agrawal@nxp.com>
---
drivers/bus/dpaa/include/fsl_qman.h | 4 ++-
drivers/net/dpaa/dpaa_ethdev.c | 45 +++++++++++++++++++++--------
drivers/net/dpaa/dpaa_rxtx.c | 3 +-
3 files changed, 37 insertions(+), 15 deletions(-)
diff --git a/drivers/bus/dpaa/include/fsl_qman.h b/drivers/bus/dpaa/include/fsl_qman.h
index c0677976e8..db14dfb839 100644
--- a/drivers/bus/dpaa/include/fsl_qman.h
+++ b/drivers/bus/dpaa/include/fsl_qman.h
@@ -1,7 +1,7 @@
/* SPDX-License-Identifier: (BSD-3-Clause OR GPL-2.0)
*
* Copyright 2008-2012 Freescale Semiconductor, Inc.
- * Copyright 2019 NXP
+ * Copyright 2019-2022 NXP
*
*/
@@ -1237,6 +1237,8 @@ struct qman_fq {
/* DPDK Interface */
void *dpaa_intf;
+ /*to store tx_conf_queue corresponding to tx_queue*/
+ struct qman_fq *tx_conf_queue;
struct rte_event ev;
/* affined portal in case of static queue */
diff --git a/drivers/net/dpaa/dpaa_ethdev.c b/drivers/net/dpaa/dpaa_ethdev.c
index bba305cfb1..3ee3029729 100644
--- a/drivers/net/dpaa/dpaa_ethdev.c
+++ b/drivers/net/dpaa/dpaa_ethdev.c
@@ -1870,9 +1870,30 @@ static int dpaa_tx_queue_init(struct qman_fq *fq,
return ret;
}
-/* Initialise a DEBUG FQ ([rt]x_error, rx_default) and DPAA TX CONFIRM queue
- * to support PTP
- */
+static int
+dpaa_tx_conf_queue_init(struct qman_fq *fq)
+{
+ struct qm_mcc_initfq opts = {0};
+ int ret;
+
+ PMD_INIT_FUNC_TRACE();
+
+ ret = qman_create_fq(0, QMAN_FQ_FLAG_DYNAMIC_FQID, fq);
+ if (ret) {
+ DPAA_PMD_ERR("create Tx_conf failed with ret: %d", ret);
+ return ret;
+ }
+
+ opts.we_mask = QM_INITFQ_WE_DESTWQ | QM_INITFQ_WE_FQCTRL;
+ opts.fqd.dest.wq = DPAA_IF_DEBUG_PRIORITY;
+ ret = qman_init_fq(fq, 0, &opts);
+ if (ret)
+ DPAA_PMD_ERR("init Tx_conf fqid %d failed with ret: %d",
+ fq->fqid, ret);
+ return ret;
+}
+
+/* Initialise a DEBUG FQ ([rt]x_error, rx_default) */
static int dpaa_def_queue_init(struct qman_fq *fq, uint32_t fqid)
{
struct qm_mcc_initfq opts = {0};
@@ -2170,6 +2191,15 @@ dpaa_dev_init(struct rte_eth_dev *eth_dev)
if (ret)
goto free_tx;
dpaa_intf->tx_queues[loop].dpaa_intf = dpaa_intf;
+
+ if (dpaa_ieee_1588) {
+ ret = dpaa_tx_conf_queue_init(&dpaa_intf->tx_conf_queues[loop]);
+ if (ret)
+ goto free_tx;
+
+ dpaa_intf->tx_conf_queues[loop].dpaa_intf = dpaa_intf;
+ dpaa_intf->tx_queues[loop].tx_conf_queue = &dpaa_intf->tx_conf_queues[loop];
+ }
}
dpaa_intf->nb_tx_queues = MAX_DPAA_CORES;
@@ -2190,16 +2220,7 @@ dpaa_dev_init(struct rte_eth_dev *eth_dev)
DPAA_PMD_ERR("DPAA TX ERROR queue init failed!");
goto free_tx;
}
- dpaa_intf->debug_queues[DPAA_DEBUG_FQ_TX_ERROR].dpaa_intf = dpaa_intf;
- ret = dpaa_def_queue_init(dpaa_intf->tx_conf_queues,
- fman_intf->fqid_tx_confirm);
- if (ret) {
- DPAA_PMD_ERR("DPAA TX CONFIRM queue init failed!");
- goto free_tx;
- }
- dpaa_intf->tx_conf_queues->dpaa_intf = dpaa_intf;
}
-
DPAA_PMD_DEBUG("All frame queues created");
/* Get the initial configuration for flow control */
diff --git a/drivers/net/dpaa/dpaa_rxtx.c b/drivers/net/dpaa/dpaa_rxtx.c
index 8593e20200..3bd35c7a0e 100644
--- a/drivers/net/dpaa/dpaa_rxtx.c
+++ b/drivers/net/dpaa/dpaa_rxtx.c
@@ -1083,8 +1083,7 @@ dpaa_eth_queue_tx(void *q, struct rte_mbuf **bufs, uint16_t nb_bufs)
struct dpaa_sw_buf_free buf_to_free[DPAA_MAX_SGS * DPAA_MAX_DEQUEUE_NUM_FRAMES];
uint32_t free_count = 0;
struct qman_fq *fq = q;
- struct dpaa_if *dpaa_intf = fq->dpaa_intf;
- struct qman_fq *fq_txconf = dpaa_intf->tx_conf_queues;
+ struct qman_fq *fq_txconf = fq->tx_conf_queue;
if (unlikely(!DPAA_PER_LCORE_PORTAL)) {
ret = rte_dpaa_portal_init((void *)0);
--
2.25.1
* [PATCH v4 08/18] net/dpaa: share MAC FMC scheme and CC parse
2024-09-30 12:15 ` [PATCH v4 00/18] NXP DPAA ETH driver enhancement and fixes Hemant Agrawal
` (6 preceding siblings ...)
2024-09-30 12:15 ` [PATCH v4 07/18] net/dpaa: add support to separate Tx conf queues Hemant Agrawal
@ 2024-09-30 12:15 ` Hemant Agrawal
2024-09-30 12:15 ` [PATCH v4 09/18] net/dpaa: support Rx/Tx timestamp read Hemant Agrawal
` (11 subsequent siblings)
19 siblings, 0 replies; 129+ messages in thread
From: Hemant Agrawal @ 2024-09-30 12:15 UTC (permalink / raw)
To: dev; +Cc: ferruh.yigit, Jun Yang
From: Jun Yang <jun.yang@nxp.com>
For shared MAC:
1) Allocate RXQs from the VSP (Virtual Storage Profile) scheme.
2) Allocate RXQs from coarse classification (CC) rules to the VSP.
3) Remove RXQs that were allocated but are reconfigured without a VSP
(see the sketch after this list).
4) Don't allocate the default queue and error queues.
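A small standalone sketch (assumed semantics, mirroring the FQ
bookkeeping added in the diff below) of dropping one already-collected
FQID when it turns out to be owned by the kernel:

#include <stdint.h>
#include <string.h>

/* Remove rm_fqid from an array of collected RX queue IDs by shifting
 * the tail of the array down over the removed slot; illustrative only.
 */
static void
remove_fqid(uint32_t *fqids, uint16_t *count, uint32_t rm_fqid)
{
	uint16_t i;

	for (i = 0; i < *count; i++) {
		if (fqids[i] != rm_fqid)
			continue;
		if (*count > (uint16_t)(i + 1))
			memmove(&fqids[i], &fqids[i + 1],
				(*count - (i + 1)) * sizeof(uint32_t));
		(*count)--;
		break;
	}
}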
Signed-off-by: Jun Yang <jun.yang@nxp.com>
Acked-by: Hemant Agrawal <hemant.agrawal@nxp.com>
---
drivers/bus/dpaa/include/fman.h | 1 +
drivers/net/dpaa/dpaa_ethdev.c | 60 +++--
drivers/net/dpaa/dpaa_ethdev.h | 13 +-
drivers/net/dpaa/dpaa_flow.c | 8 +-
drivers/net/dpaa/dpaa_fmc.c | 421 ++++++++++++++++++++------------
drivers/net/dpaa/dpaa_rxtx.c | 20 +-
6 files changed, 346 insertions(+), 177 deletions(-)
diff --git a/drivers/bus/dpaa/include/fman.h b/drivers/bus/dpaa/include/fman.h
index 60681068ea..6b2a1893f9 100644
--- a/drivers/bus/dpaa/include/fman.h
+++ b/drivers/bus/dpaa/include/fman.h
@@ -76,6 +76,7 @@ enum fman_mac_type {
fman_mac_1g,
fman_mac_10g,
fman_mac_2_5g,
+ fman_onic,
};
struct mac_addr {
diff --git a/drivers/net/dpaa/dpaa_ethdev.c b/drivers/net/dpaa/dpaa_ethdev.c
index 3ee3029729..bf14d73433 100644
--- a/drivers/net/dpaa/dpaa_ethdev.c
+++ b/drivers/net/dpaa/dpaa_ethdev.c
@@ -255,7 +255,6 @@ dpaa_eth_dev_configure(struct rte_eth_dev *dev)
DPAA_PMD_ERR("Cannot open IF socket");
return -errno;
}
-
strncpy(ifr.ifr_name, dpaa_intf->name, IFNAMSIZ - 1);
if (ioctl(socket_fd, SIOCGIFMTU, &ifr) < 0) {
@@ -1893,6 +1892,7 @@ dpaa_tx_conf_queue_init(struct qman_fq *fq)
return ret;
}
+#if defined(RTE_LIBRTE_DPAA_DEBUG_DRIVER)
/* Initialise a DEBUG FQ ([rt]x_error, rx_default) */
static int dpaa_def_queue_init(struct qman_fq *fq, uint32_t fqid)
{
@@ -1923,6 +1923,7 @@ static int dpaa_def_queue_init(struct qman_fq *fq, uint32_t fqid)
fqid, ret);
return ret;
}
+#endif
/* Initialise a network interface */
static int
@@ -1957,6 +1958,41 @@ dpaa_dev_init_secondary(struct rte_eth_dev *eth_dev)
return 0;
}
+#ifdef RTE_LIBRTE_DPAA_DEBUG_DRIVER
+static int
+dpaa_error_queue_init(struct dpaa_if *dpaa_intf,
+ struct fman_if *fman_intf)
+{
+ int i, ret;
+ struct qman_fq *err_queues = dpaa_intf->debug_queues;
+ uint32_t err_fqid = 0;
+
+ if (fman_intf->is_shared_mac) {
+ DPAA_PMD_DEBUG("Shared MAC's err queues are handled in kernel");
+ return 0;
+ }
+
+ for (i = 0; i < DPAA_DEBUG_FQ_MAX_NUM; i++) {
+ if (i == DPAA_DEBUG_FQ_RX_ERROR)
+ err_fqid = fman_intf->fqid_rx_err;
+ else if (i == DPAA_DEBUG_FQ_TX_ERROR)
+ err_fqid = fman_intf->fqid_tx_err;
+ else
+ continue;
+ ret = dpaa_def_queue_init(&err_queues[i], err_fqid);
+ if (ret) {
+ DPAA_PMD_ERR("DPAA %s ERROR queue init failed!",
+ i == DPAA_DEBUG_FQ_RX_ERROR ?
+ "RX" : "TX");
+ return ret;
+ }
+ err_queues[i].dpaa_intf = dpaa_intf;
+ }
+
+ return 0;
+}
+#endif
+
static int
check_devargs_handler(__rte_unused const char *key, const char *value,
__rte_unused void *opaque)
@@ -2202,25 +2238,11 @@ dpaa_dev_init(struct rte_eth_dev *eth_dev)
}
}
dpaa_intf->nb_tx_queues = MAX_DPAA_CORES;
-
-#if !defined(RTE_LIBRTE_DPAA_DEBUG_DRIVER)
- if (dpaa_ieee_1588)
+#if defined(RTE_LIBRTE_DPAA_DEBUG_DRIVER)
+ ret = dpaa_error_queue_init(dpaa_intf, fman_intf);
+ if (ret)
+ goto free_tx;
#endif
- {
- ret = dpaa_def_queue_init(&dpaa_intf->debug_queues
- [DPAA_DEBUG_FQ_RX_ERROR], fman_intf->fqid_rx_err);
- if (ret) {
- DPAA_PMD_ERR("DPAA RX ERROR queue init failed!");
- goto free_tx;
- }
- dpaa_intf->debug_queues[DPAA_DEBUG_FQ_RX_ERROR].dpaa_intf = dpaa_intf;
- ret = dpaa_def_queue_init(&dpaa_intf->debug_queues
- [DPAA_DEBUG_FQ_TX_ERROR], fman_intf->fqid_tx_err);
- if (ret) {
- DPAA_PMD_ERR("DPAA TX ERROR queue init failed!");
- goto free_tx;
- }
- }
DPAA_PMD_DEBUG("All frame queues created");
/* Get the initial configuration for flow control */
diff --git a/drivers/net/dpaa/dpaa_ethdev.h b/drivers/net/dpaa/dpaa_ethdev.h
index b427b29cb6..0a1ceb376a 100644
--- a/drivers/net/dpaa/dpaa_ethdev.h
+++ b/drivers/net/dpaa/dpaa_ethdev.h
@@ -78,8 +78,11 @@
#define DPAA_IF_RX_CONTEXT_STASH 0
/* Each "debug" FQ is represented by one of these */
-#define DPAA_DEBUG_FQ_RX_ERROR 0
-#define DPAA_DEBUG_FQ_TX_ERROR 1
+enum {
+ DPAA_DEBUG_FQ_RX_ERROR,
+ DPAA_DEBUG_FQ_TX_ERROR,
+ DPAA_DEBUG_FQ_MAX_NUM
+};
#define DPAA_RSS_OFFLOAD_ALL ( \
RTE_ETH_RSS_L2_PAYLOAD | \
@@ -107,6 +110,10 @@
#define DPAA_FD_CMD_CFQ 0x00ffffff
/**< Confirmation Frame Queue */
+#define DPAA_1G_MAC_START_IDX 1
+#define DPAA_10G_MAC_START_IDX 9
+#define DPAA_2_5G_MAC_START_IDX DPAA_10G_MAC_START_IDX
+
#define DPAA_DEFAULT_RXQ_VSP_ID 1
#define FMC_FILE "/tmp/fmc.bin"
@@ -134,7 +141,7 @@ struct dpaa_if {
struct qman_fq *tx_queues;
struct qman_fq *tx_conf_queues;
struct qman_cgr *cgr_tx;
- struct qman_fq debug_queues[2];
+ struct qman_fq debug_queues[DPAA_DEBUG_FQ_MAX_NUM];
uint16_t nb_rx_queues;
uint16_t nb_tx_queues;
uint32_t ifid;
diff --git a/drivers/net/dpaa/dpaa_flow.c b/drivers/net/dpaa/dpaa_flow.c
index 02aca78d05..082bd5d014 100644
--- a/drivers/net/dpaa/dpaa_flow.c
+++ b/drivers/net/dpaa/dpaa_flow.c
@@ -651,7 +651,13 @@ static inline int set_pcd_netenv_scheme(struct dpaa_if *dpaa_intf,
static inline int get_port_type(struct fman_if *fif)
{
- if (fif->mac_type == fman_mac_1g)
+ /* For 1G fm-mac9 and fm-mac10 ports, configure the VSP as 10G
+ * ports so that kernel can configure correct port.
+ */
+ if (fif->mac_type == fman_mac_1g &&
+ fif->mac_idx >= DPAA_10G_MAC_START_IDX)
+ return e_FM_PORT_TYPE_RX_10G;
+ else if (fif->mac_type == fman_mac_1g)
return e_FM_PORT_TYPE_RX;
else if (fif->mac_type == fman_mac_2_5g)
return e_FM_PORT_TYPE_RX_2_5G;
diff --git a/drivers/net/dpaa/dpaa_fmc.c b/drivers/net/dpaa/dpaa_fmc.c
index f8c9360311..d80ea1010a 100644
--- a/drivers/net/dpaa/dpaa_fmc.c
+++ b/drivers/net/dpaa/dpaa_fmc.c
@@ -1,5 +1,5 @@
/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright 2017-2021 NXP
+ * Copyright 2017-2023 NXP
*/
/* System headers */
@@ -204,139 +204,258 @@ struct fmc_model_t {
struct fmc_model_t *g_fmc_model;
-static int dpaa_port_fmc_port_parse(struct fman_if *fif,
- const struct fmc_model_t *fmc_model,
- int apply_idx)
+static int
+dpaa_port_fmc_port_parse(struct fman_if *fif,
+ const struct fmc_model_t *fmc_model,
+ int apply_idx)
{
int current_port = fmc_model->apply_order[apply_idx].index;
const fmc_port *pport = &fmc_model->port[current_port];
- const uint8_t mac_idx[] = {-1, 0, 1, 2, 3, 4, 5, 6, 7, 0, 1};
- const uint8_t mac_type[] = {1, 1, 1, 1, 1, 1, 1, 1, 1, 2, 2};
+ uint32_t num;
+
+ if (pport->type == e_FM_PORT_TYPE_OH_OFFLINE_PARSING &&
+ pport->number == fif->mac_idx &&
+ (fif->mac_type == fman_offline ||
+ fif->mac_type == fman_onic))
+ return current_port;
+
+ if (fif->mac_type == fman_mac_1g) {
+ if (pport->type != e_FM_PORT_TYPE_RX)
+ return -ENODEV;
+ num = pport->number + DPAA_1G_MAC_START_IDX;
+ if (fif->mac_idx == num)
+ return current_port;
+
+ return -ENODEV;
+ }
+
+ if (fif->mac_type == fman_mac_2_5g) {
+ if (pport->type != e_FM_PORT_TYPE_RX_2_5G)
+ return -ENODEV;
+ num = pport->number + DPAA_2_5G_MAC_START_IDX;
+ if (fif->mac_idx == num)
+ return current_port;
+
+ return -ENODEV;
+ }
+
+ if (fif->mac_type == fman_mac_10g) {
+ if (pport->type != e_FM_PORT_TYPE_RX_10G)
+ return -ENODEV;
+ num = pport->number + DPAA_10G_MAC_START_IDX;
+ if (fif->mac_idx == num)
+ return current_port;
+
+ return -ENODEV;
+ }
+
+ DPAA_PMD_ERR("Invalid MAC(mac_idx=%d) type(%d)",
+ fif->mac_idx, fif->mac_type);
+
+ return -EINVAL;
+}
+
+static int
+dpaa_fq_is_in_kernel(uint32_t fqid,
+ struct fman_if *fif)
+{
+ if (!fif->is_shared_mac)
+ return false;
+
+ if ((fqid == fif->fqid_rx_def ||
+ (fqid >= fif->fqid_rx_pcd &&
+ fqid < (fif->fqid_rx_pcd + fif->fqid_rx_pcd_count)) ||
+ fqid == fif->fqid_rx_err ||
+ fqid == fif->fqid_tx_err))
+ return true;
+
+ return false;
+}
+
+static int
+dpaa_vsp_id_is_in_kernel(uint8_t vsp_id,
+ struct fman_if *fif)
+{
+ if (!fif->is_shared_mac)
+ return false;
+
+ if (vsp_id == fif->base_profile_id)
+ return true;
+
+ return false;
+}
+
+static uint8_t
+dpaa_enqueue_vsp_id(struct fman_if *fif,
+ const struct ioc_fm_pcd_cc_next_enqueue_params_t *eq_param)
+{
+ if (eq_param->override_fqid)
+ return eq_param->new_relative_storage_profile_id;
+
+ return fif->base_profile_id;
+}
- if (mac_idx[fif->mac_idx] != pport->number ||
- mac_type[fif->mac_idx] != pport->type)
- return -1;
+static int
+dpaa_kg_storage_is_in_kernel(struct fman_if *fif,
+ const struct ioc_fm_pcd_kg_storage_profile_t *kg_storage)
+{
+ if (!fif->is_shared_mac)
+ return false;
+
+ if (!kg_storage->direct ||
+ (kg_storage->direct &&
+ kg_storage->profile_select.direct_relative_profile_id ==
+ fif->base_profile_id))
+ return true;
- return current_port;
+ return false;
}
-static int dpaa_port_fmc_scheme_parse(struct fman_if *fif,
- const struct fmc_model_t *fmc,
- int apply_idx,
- uint16_t *rxq_idx, int max_nb_rxq,
- uint32_t *fqids, int8_t *vspids)
+static void
+dpaa_fmc_remove_fq_from_allocated(uint32_t *fqids,
+ uint16_t *rxq_idx, uint32_t rm_fqid)
{
- int idx = fmc->apply_order[apply_idx].index;
uint32_t i;
- if (!fmc->scheme[idx].override_storage_profile &&
- fif->is_shared_mac) {
- DPAA_PMD_WARN("No VSP assigned to scheme %d for sharemac %d!",
- idx, fif->mac_idx);
- DPAA_PMD_WARN("Risk to receive pkts from skb pool to CRASH!");
+ for (i = 0; i < (*rxq_idx); i++) {
+ if (fqids[i] != rm_fqid)
+ continue;
+ DPAA_PMD_WARN("Remove fq(0x%08x) allocated.",
+ rm_fqid);
+ if ((*rxq_idx) > (i + 1)) {
+ memmove(&fqids[i], &fqids[i + 1],
+ ((*rxq_idx) - (i + 1)) * sizeof(uint32_t));
+ }
+ (*rxq_idx)--;
+ break;
}
+}
- if (e_IOC_FM_PCD_DONE ==
- fmc->scheme[idx].next_engine) {
- for (i = 0; i < fmc->scheme[idx]
- .key_ext_and_hash.hash_dist_num_of_fqids; i++) {
- uint32_t fqid = fmc->scheme[idx].base_fqid + i;
- int k, found = 0;
-
- if (fqid == fif->fqid_rx_def ||
- (fqid >= fif->fqid_rx_pcd &&
- fqid < (fif->fqid_rx_pcd +
- fif->fqid_rx_pcd_count))) {
- if (fif->is_shared_mac &&
- fmc->scheme[idx].override_storage_profile &&
- fmc->scheme[idx].storage_profile.direct &&
- fmc->scheme[idx].storage_profile
- .profile_select.direct_relative_profile_id !=
- fif->base_profile_id) {
- DPAA_PMD_ERR("Def RXQ must be associated with def VSP on sharemac!");
-
- return -1;
- }
- continue;
+static int
+dpaa_port_fmc_scheme_parse(struct fman_if *fif,
+ const struct fmc_model_t *fmc,
+ int apply_idx,
+ uint16_t *rxq_idx, int max_nb_rxq,
+ uint32_t *fqids, int8_t *vspids)
+{
+ int scheme_idx = fmc->apply_order[apply_idx].index;
+ int k, found = 0;
+ uint32_t i, num_rxq, fqid, rxq_idx_start = *rxq_idx;
+ const struct fm_pcd_kg_scheme_params_t *scheme;
+ const struct ioc_fm_pcd_kg_key_extract_and_hash_params_t *params;
+ const struct ioc_fm_pcd_kg_storage_profile_t *kg_storage;
+ uint8_t vsp_id;
+
+ scheme = &fmc->scheme[scheme_idx];
+ params = &scheme->key_ext_and_hash;
+ num_rxq = params->hash_dist_num_of_fqids;
+ kg_storage = &scheme->storage_profile;
+
+ if (scheme->override_storage_profile && kg_storage->direct)
+ vsp_id = kg_storage->profile_select.direct_relative_profile_id;
+ else
+ vsp_id = fif->base_profile_id;
+
+ if (dpaa_kg_storage_is_in_kernel(fif, kg_storage)) {
+ DPAA_PMD_WARN("Scheme[%d]'s VSP is in kernel",
+ scheme_idx);
+ /* The FQ may be allocated from previous CC or scheme,
+ * find and remove it.
+ */
+ for (i = 0; i < num_rxq; i++) {
+ fqid = scheme->base_fqid + i;
+ DPAA_PMD_WARN("Removed fqid(0x%08x) of Scheme[%d]",
+ fqid, scheme_idx);
+ dpaa_fmc_remove_fq_from_allocated(fqids,
+ rxq_idx, fqid);
+ if (!dpaa_fq_is_in_kernel(fqid, fif)) {
+ char reason_msg[128];
+ char result_msg[128];
+
+ sprintf(reason_msg,
+ "NOT handled in kernel");
+ sprintf(result_msg,
+ "will DRAIN kernel pool!");
+ DPAA_PMD_WARN("Traffic to FQ(%08x)(%s) %s",
+ fqid, reason_msg, result_msg);
}
+ }
- if (fif->is_shared_mac &&
- !fmc->scheme[idx].override_storage_profile) {
- DPAA_PMD_ERR("RXQ to DPDK must be associated with VSP on sharemac!");
- return -1;
- }
+ return 0;
+ }
- if (fif->is_shared_mac &&
- fmc->scheme[idx].override_storage_profile &&
- fmc->scheme[idx].storage_profile.direct &&
- fmc->scheme[idx].storage_profile
- .profile_select.direct_relative_profile_id ==
- fif->base_profile_id) {
- DPAA_PMD_ERR("RXQ can't be associated with default VSP on sharemac!");
+ if (e_IOC_FM_PCD_DONE != scheme->next_engine) {
+ /* Do nothing.*/
+ DPAA_PMD_DEBUG("Will parse scheme[%d]'s next engine(%d)",
+ scheme_idx, scheme->next_engine);
+ return 0;
+ }
- return -1;
- }
+ for (i = 0; i < num_rxq; i++) {
+ fqid = scheme->base_fqid + i;
+ found = 0;
- if ((*rxq_idx) >= max_nb_rxq) {
- DPAA_PMD_DEBUG("Too many queues in FMC policy"
- "%d overflow %d",
- (*rxq_idx), max_nb_rxq);
+ if (dpaa_fq_is_in_kernel(fqid, fif)) {
+ DPAA_PMD_WARN("FQ(0x%08x) is handled in kernel.",
+ fqid);
+ /* The FQ may be allocated from previous CC or scheme,
+ * remove it.
+ */
+ dpaa_fmc_remove_fq_from_allocated(fqids,
+ rxq_idx, fqid);
+ continue;
+ }
- continue;
- }
+ if ((*rxq_idx) >= max_nb_rxq) {
+ DPAA_PMD_WARN("Too many queues(%d) >= MAX number(%d)",
+ (*rxq_idx), max_nb_rxq);
- for (k = 0; k < (*rxq_idx); k++) {
- if (fqids[k] == fqid) {
- found = 1;
- break;
- }
- }
+ break;
+ }
- if (found)
- continue;
- fqids[(*rxq_idx)] = fqid;
- if (fmc->scheme[idx].override_storage_profile) {
- if (fmc->scheme[idx].storage_profile.direct) {
- vspids[(*rxq_idx)] =
- fmc->scheme[idx].storage_profile
- .profile_select
- .direct_relative_profile_id;
- } else {
- vspids[(*rxq_idx)] = -1;
- }
- } else {
- vspids[(*rxq_idx)] = -1;
+ for (k = 0; k < (*rxq_idx); k++) {
+ if (fqids[k] == fqid) {
+ found = 1;
+ break;
}
- (*rxq_idx)++;
}
+
+ if (found)
+ continue;
+ fqids[(*rxq_idx)] = fqid;
+ vspids[(*rxq_idx)] = vsp_id;
+
+ (*rxq_idx)++;
}
- return 0;
+ return (*rxq_idx) - rxq_idx_start;
}
-static int dpaa_port_fmc_ccnode_parse(struct fman_if *fif,
- const struct fmc_model_t *fmc_model,
- int apply_idx,
- uint16_t *rxq_idx, int max_nb_rxq,
- uint32_t *fqids, int8_t *vspids)
+static int
+dpaa_port_fmc_ccnode_parse(struct fman_if *fif,
+ const struct fmc_model_t *fmc,
+ int apply_idx,
+ uint16_t *rxq_idx, int max_nb_rxq,
+ uint32_t *fqids, int8_t *vspids)
{
uint16_t j, k, found = 0;
const struct ioc_keys_params_t *keys_params;
- uint32_t fqid, cc_idx = fmc_model->apply_order[apply_idx].index;
-
- keys_params = &fmc_model->ccnode[cc_idx].keys_params;
+ const struct ioc_fm_pcd_cc_next_engine_params_t *params;
+ uint32_t fqid, cc_idx = fmc->apply_order[apply_idx].index;
+ uint32_t rxq_idx_start = *rxq_idx;
+ uint8_t vsp_id;
- if ((*rxq_idx) >= max_nb_rxq) {
- DPAA_PMD_WARN("Too many queues in FMC policy %d overflow %d",
- (*rxq_idx), max_nb_rxq);
-
- return 0;
- }
+ keys_params = &fmc->ccnode[cc_idx].keys_params;
for (j = 0; j < keys_params->num_of_keys; ++j) {
+ if ((*rxq_idx) >= max_nb_rxq) {
+ DPAA_PMD_WARN("Too many queues(%d) >= MAX number(%d)",
+ (*rxq_idx), max_nb_rxq);
+
+ break;
+ }
found = 0;
- fqid = keys_params->key_params[j].cc_next_engine_params
- .params.enqueue_params.new_fqid;
+ params = &keys_params->key_params[j].cc_next_engine_params;
/* We read DPDK queue from last classification rule present in
* FMC policy file. Hence, this check is required here.
@@ -344,15 +463,30 @@ static int dpaa_port_fmc_ccnode_parse(struct fman_if *fif,
* have userspace queue so that it can be used by DPDK
* application.
*/
- if (keys_params->key_params[j].cc_next_engine_params
- .next_engine != e_IOC_FM_PCD_DONE) {
- DPAA_PMD_WARN("FMC CC next engine not support");
+ if (params->next_engine != e_IOC_FM_PCD_DONE) {
+ DPAA_PMD_WARN("CC next engine(%d) not support",
+ params->next_engine);
continue;
}
- if (keys_params->key_params[j].cc_next_engine_params
- .params.enqueue_params.action !=
+ if (params->params.enqueue_params.action !=
e_IOC_FM_PCD_ENQ_FRAME)
continue;
+
+ fqid = params->params.enqueue_params.new_fqid;
+ vsp_id = dpaa_enqueue_vsp_id(fif,
+ &params->params.enqueue_params);
+ if (dpaa_fq_is_in_kernel(fqid, fif) ||
+ dpaa_vsp_id_is_in_kernel(vsp_id, fif)) {
+ DPAA_PMD_DEBUG("FQ(0x%08x)/VSP(%d) is in kernel.",
+ fqid, vsp_id);
+ /* The FQ may be allocated from previous CC or scheme,
+ * remove it.
+ */
+ dpaa_fmc_remove_fq_from_allocated(fqids,
+ rxq_idx, fqid);
+ continue;
+ }
+
for (k = 0; k < (*rxq_idx); k++) {
if (fqids[k] == fqid) {
found = 1;
@@ -362,38 +496,22 @@ static int dpaa_port_fmc_ccnode_parse(struct fman_if *fif,
if (found)
continue;
- if ((*rxq_idx) >= max_nb_rxq) {
- DPAA_PMD_WARN("Too many queues in FMC policy %d overflow %d",
- (*rxq_idx), max_nb_rxq);
-
- return 0;
- }
-
fqids[(*rxq_idx)] = fqid;
- vspids[(*rxq_idx)] =
- keys_params->key_params[j].cc_next_engine_params
- .params.enqueue_params
- .new_relative_storage_profile_id;
-
- if (vspids[(*rxq_idx)] == fif->base_profile_id &&
- fif->is_shared_mac) {
- DPAA_PMD_ERR("VSP %d can NOT be used on DPDK.",
- vspids[(*rxq_idx)]);
- DPAA_PMD_ERR("It is associated to skb pool of shared interface.");
- return -1;
- }
+ vspids[(*rxq_idx)] = vsp_id;
+
(*rxq_idx)++;
}
- return 0;
+ return (*rxq_idx) - rxq_idx_start;
}
-int dpaa_port_fmc_init(struct fman_if *fif,
- uint32_t *fqids, int8_t *vspids, int max_nb_rxq)
+int
+dpaa_port_fmc_init(struct fman_if *fif,
+ uint32_t *fqids, int8_t *vspids, int max_nb_rxq)
{
int current_port = -1, ret;
uint16_t rxq_idx = 0;
- const struct fmc_model_t *fmc_model;
+ const struct fmc_model_t *fmc;
uint32_t i;
if (!g_fmc_model) {
@@ -402,14 +520,14 @@ int dpaa_port_fmc_init(struct fman_if *fif,
if (!fp) {
DPAA_PMD_ERR("%s not exists", FMC_FILE);
- return -1;
+ return -ENOENT;
}
g_fmc_model = rte_malloc(NULL, sizeof(struct fmc_model_t), 64);
if (!g_fmc_model) {
DPAA_PMD_ERR("FMC memory alloc failed");
fclose(fp);
- return -1;
+ return -ENOBUFS;
}
bytes_read = fread(g_fmc_model,
@@ -419,25 +537,28 @@ int dpaa_port_fmc_init(struct fman_if *fif,
fclose(fp);
rte_free(g_fmc_model);
g_fmc_model = NULL;
- return -1;
+ return -EIO;
}
fclose(fp);
}
- fmc_model = g_fmc_model;
+ fmc = g_fmc_model;
- if (fmc_model->format_version != FMC_OUTPUT_FORMAT_VER)
- return -1;
+ if (fmc->format_version != FMC_OUTPUT_FORMAT_VER) {
+ DPAA_PMD_ERR("FMC version(0x%08x) != Supported ver(0x%08x)",
+ fmc->format_version, FMC_OUTPUT_FORMAT_VER);
+ return -EINVAL;
+ }
- for (i = 0; i < fmc_model->apply_order_count; i++) {
- switch (fmc_model->apply_order[i].type) {
+ for (i = 0; i < fmc->apply_order_count; i++) {
+ switch (fmc->apply_order[i].type) {
case fmcengine_start:
break;
case fmcengine_end:
break;
case fmcport_start:
current_port = dpaa_port_fmc_port_parse(fif,
- fmc_model, i);
+ fmc, i);
break;
case fmcport_end:
break;
@@ -445,24 +566,24 @@ int dpaa_port_fmc_init(struct fman_if *fif,
if (current_port < 0)
break;
- ret = dpaa_port_fmc_scheme_parse(fif, fmc_model,
- i, &rxq_idx,
- max_nb_rxq,
- fqids, vspids);
- if (ret)
- return ret;
+ ret = dpaa_port_fmc_scheme_parse(fif, fmc,
+ i, &rxq_idx, max_nb_rxq, fqids, vspids);
+ DPAA_PMD_INFO("%s %d RXQ(s) from scheme[%d]",
+ ret >= 0 ? "Alloc" : "Remove",
+ ret >= 0 ? ret : -ret,
+ fmc->apply_order[i].index);
break;
case fmcccnode:
if (current_port < 0)
break;
- ret = dpaa_port_fmc_ccnode_parse(fif, fmc_model,
- i, &rxq_idx,
- max_nb_rxq, fqids,
- vspids);
- if (ret)
- return ret;
+ ret = dpaa_port_fmc_ccnode_parse(fif, fmc,
+ i, &rxq_idx, max_nb_rxq, fqids, vspids);
+ DPAA_PMD_INFO("%s %d RXQ(s) from cc[%d]",
+ ret >= 0 ? "Alloc" : "Remove",
+ ret >= 0 ? ret : -ret,
+ fmc->apply_order[i].index);
break;
case fmchtnode:
diff --git a/drivers/net/dpaa/dpaa_rxtx.c b/drivers/net/dpaa/dpaa_rxtx.c
index 3bd35c7a0e..d1338d1654 100644
--- a/drivers/net/dpaa/dpaa_rxtx.c
+++ b/drivers/net/dpaa/dpaa_rxtx.c
@@ -693,13 +693,26 @@ dpaa_rx_cb_atomic(void *event,
}
#ifdef RTE_LIBRTE_DPAA_DEBUG_DRIVER
-static inline void dpaa_eth_err_queue(struct dpaa_if *dpaa_intf)
+static inline void
+dpaa_eth_err_queue(struct qman_fq *fq)
{
struct rte_mbuf *mbuf;
struct qman_fq *debug_fq;
int ret, i;
struct qm_dqrr_entry *dq;
struct qm_fd *fd;
+ struct dpaa_if *dpaa_intf;
+
+ dpaa_intf = fq->dpaa_intf;
+ if (fq != &dpaa_intf->rx_queues[0]) {
+ /* Associate error queues to the first RXQ.*/
+ return;
+ }
+
+ if (dpaa_intf->cfg->fman_if->is_shared_mac) {
+ /* Error queues of shared MAC are handled in kernel. */
+ return;
+ }
if (unlikely(!RTE_PER_LCORE(dpaa_io))) {
ret = rte_dpaa_portal_init((void *)0);
@@ -708,7 +721,7 @@ static inline void dpaa_eth_err_queue(struct dpaa_if *dpaa_intf)
return;
}
}
- for (i = 0; i <= DPAA_DEBUG_FQ_TX_ERROR; i++) {
+ for (i = 0; i < DPAA_DEBUG_FQ_MAX_NUM; i++) {
debug_fq = &dpaa_intf->debug_queues[i];
ret = qman_set_vdq(debug_fq, 4, QM_VDQCR_EXACT);
if (ret)
@@ -751,8 +764,7 @@ uint16_t dpaa_eth_queue_rx(void *q,
rte_dpaa_bpid_info = fq->bp_array;
#ifdef RTE_LIBRTE_DPAA_DEBUG_DRIVER
- if (fq->fqid == ((struct dpaa_if *)fq->dpaa_intf)->rx_queues[0].fqid)
- dpaa_eth_err_queue((struct dpaa_if *)fq->dpaa_intf);
+ dpaa_eth_err_queue(fq);
#endif
if (likely(fq->is_static))
--
2.25.1
* [PATCH v4 09/18] net/dpaa: support Rx/Tx timestamp read
2024-09-30 12:15 ` [PATCH v4 00/18] NXP DPAA ETH driver enhancement and fixes Hemant Agrawal
` (7 preceding siblings ...)
2024-09-30 12:15 ` [PATCH v4 08/18] net/dpaa: share MAC FMC scheme and CC parse Hemant Agrawal
@ 2024-09-30 12:15 ` Hemant Agrawal
2024-09-30 12:15 ` [PATCH v4 10/18] net/dpaa: support IEEE 1588 PTP Hemant Agrawal
` (10 subsequent siblings)
19 siblings, 0 replies; 129+ messages in thread
From: Hemant Agrawal @ 2024-09-30 12:15 UTC (permalink / raw)
To: dev; +Cc: ferruh.yigit, Vanshika Shukla
From: Vanshika Shukla <vanshika.shukla@nxp.com>
This patch implements Rx/Tx timestamp read operations
for the DPAA1 platform.
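From the application side, the new operations are reached through the
standard ethdev timesync calls; a hedged usage sketch (port id handling
is illustrative, and PTP is assumed to be enabled via the drv_ieee1588
devargs added earlier in this series):

#include <stdio.h>
#include <time.h>
#include <rte_ethdev.h>

/* Illustrative only: print the last Rx/Tx timestamps recorded by the PMD. */
static void print_ptp_timestamps(uint16_t port_id)
{
	struct timespec rx_ts, tx_ts;

	if (rte_eth_timesync_read_rx_timestamp(port_id, &rx_ts, 0) == 0)
		printf("last Rx timestamp: %ld.%09ld\n",
		       (long)rx_ts.tv_sec, (long)rx_ts.tv_nsec);

	if (rte_eth_timesync_read_tx_timestamp(port_id, &tx_ts) == 0)
		printf("last Tx timestamp: %ld.%09ld\n",
		       (long)tx_ts.tv_sec, (long)tx_ts.tv_nsec);
}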
Signed-off-by: Vanshika Shukla <vanshika.shukla@nxp.com>
---
doc/guides/nics/features/dpaa.ini | 1 +
drivers/bus/dpaa/base/fman/fman.c | 21 +++++++-
drivers/bus/dpaa/base/fman/fman_hw.c | 6 ++-
drivers/bus/dpaa/include/fman.h | 18 ++++++-
drivers/net/dpaa/dpaa_ethdev.c | 2 +
drivers/net/dpaa/dpaa_ethdev.h | 17 +++++++
drivers/net/dpaa/dpaa_ptp.c | 42 ++++++++++++++++
drivers/net/dpaa/dpaa_rxtx.c | 71 ++++++++++++++++++++++++----
drivers/net/dpaa/dpaa_rxtx.h | 4 +-
drivers/net/dpaa/meson.build | 1 +
10 files changed, 168 insertions(+), 15 deletions(-)
create mode 100644 drivers/net/dpaa/dpaa_ptp.c
diff --git a/doc/guides/nics/features/dpaa.ini b/doc/guides/nics/features/dpaa.ini
index b136ed191a..4196dd800c 100644
--- a/doc/guides/nics/features/dpaa.ini
+++ b/doc/guides/nics/features/dpaa.ini
@@ -19,6 +19,7 @@ Flow control = Y
L3 checksum offload = Y
L4 checksum offload = Y
Packet type parsing = Y
+Timestamp offload = Y
Basic stats = Y
Extended stats = Y
FW version = Y
diff --git a/drivers/bus/dpaa/base/fman/fman.c b/drivers/bus/dpaa/base/fman/fman.c
index beeb03dbf2..e39bc8c252 100644
--- a/drivers/bus/dpaa/base/fman/fman.c
+++ b/drivers/bus/dpaa/base/fman/fman.c
@@ -1,7 +1,7 @@
/* SPDX-License-Identifier: (BSD-3-Clause OR GPL-2.0)
*
* Copyright 2010-2016 Freescale Semiconductor Inc.
- * Copyright 2017-2020 NXP
+ * Copyright 2017-2024 NXP
*
*/
@@ -520,6 +520,25 @@ fman_if_init(const struct device_node *dpa_node)
goto err;
}
+ regs_addr = of_get_address(tx_node, 0, &__if->regs_size, NULL);
+ if (!regs_addr) {
+ FMAN_ERR(-EINVAL, "of_get_address(%s)\n", mname);
+ goto err;
+ }
+ phys_addr = of_translate_address(tx_node, regs_addr);
+ if (!phys_addr) {
+ FMAN_ERR(-EINVAL, "of_translate_address(%s, %p)\n",
+ mname, regs_addr);
+ goto err;
+ }
+ __if->tx_bmi_map = mmap(NULL, __if->regs_size,
+ PROT_READ | PROT_WRITE, MAP_SHARED,
+ fman_ccsr_map_fd, phys_addr);
+ if (__if->tx_bmi_map == MAP_FAILED) {
+ FMAN_ERR(-errno, "mmap(0x%"PRIx64")\n", phys_addr);
+ goto err;
+ }
+
/* No channel ID for MAC-less */
assert(lenp == sizeof(*tx_channel_id));
na = of_n_addr_cells(mac_node);
diff --git a/drivers/bus/dpaa/base/fman/fman_hw.c b/drivers/bus/dpaa/base/fman/fman_hw.c
index 124c69edb4..4fc41c1ae9 100644
--- a/drivers/bus/dpaa/base/fman/fman_hw.c
+++ b/drivers/bus/dpaa/base/fman/fman_hw.c
@@ -1,6 +1,6 @@
/* SPDX-License-Identifier: BSD-3-Clause
*
- * Copyright 2017,2020 NXP
+ * Copyright 2017,2020,2022 NXP
*
*/
@@ -565,6 +565,10 @@ fman_if_set_ic_params(struct fman_if *fm_if,
&((struct rx_bmi_regs *)__if->bmi_map)->fmbm_ricp;
out_be32(fmbm_ricp, val);
+ unsigned int *fmbm_ticp =
+ &((struct tx_bmi_regs *)__if->tx_bmi_map)->fmbm_ticp;
+ out_be32(fmbm_ticp, val);
+
return 0;
}
diff --git a/drivers/bus/dpaa/include/fman.h b/drivers/bus/dpaa/include/fman.h
index 6b2a1893f9..09d1ddb897 100644
--- a/drivers/bus/dpaa/include/fman.h
+++ b/drivers/bus/dpaa/include/fman.h
@@ -2,7 +2,7 @@
*
* Copyright 2010-2012 Freescale Semiconductor, Inc.
* All rights reserved.
- * Copyright 2019-2021 NXP
+ * Copyright 2019-2022 NXP
*
*/
@@ -292,6 +292,21 @@ struct rx_bmi_regs {
uint32_t fmbm_rdbg; /**< Rx Debug-*/
};
+struct tx_bmi_regs {
+ uint32_t fmbm_tcfg; /**< Tx Configuration*/
+ uint32_t fmbm_tst; /**< Tx Status*/
+ uint32_t fmbm_tda; /**< Tx DMA attributes*/
+ uint32_t fmbm_tfp; /**< Tx FIFO Parameters*/
+ uint32_t fmbm_tfed; /**< Tx Frame End Data*/
+ uint32_t fmbm_ticp; /**< Tx Internal Context Parameters*/
+ uint32_t fmbm_tfdne; /**< Tx Frame Dequeue Next Engine*/
+ uint32_t fmbm_tfca; /**< Tx Frame Attributes*/
+ uint32_t fmbm_tcfqid; /**< Tx Confirmation Frame Queue ID*/
+ uint32_t fmbm_tefqid; /**< Tx Error Frame Queue ID*/
+ uint32_t fmbm_tfene; /**< Tx Frame Enqueue Next Engine*/
+ uint32_t fmbm_trlmts; /**< Tx Rate Limiter Scale*/
+ uint32_t fmbm_trlmt; /**< Tx Rate Limiter*/
+};
struct fman_port_qmi_regs {
uint32_t fmqm_pnc; /**< PortID n Configuration Register */
uint32_t fmqm_pns; /**< PortID n Status Register */
@@ -380,6 +395,7 @@ struct __fman_if {
uint64_t regs_size;
void *ccsr_map;
void *bmi_map;
+ void *tx_bmi_map;
void *qmi_map;
struct list_head node;
};
diff --git a/drivers/net/dpaa/dpaa_ethdev.c b/drivers/net/dpaa/dpaa_ethdev.c
index bf14d73433..682cb1c77e 100644
--- a/drivers/net/dpaa/dpaa_ethdev.c
+++ b/drivers/net/dpaa/dpaa_ethdev.c
@@ -1673,6 +1673,8 @@ static struct eth_dev_ops dpaa_devops = {
.rx_queue_intr_disable = dpaa_dev_queue_intr_disable,
.rss_hash_update = dpaa_dev_rss_hash_update,
.rss_hash_conf_get = dpaa_dev_rss_hash_conf_get,
+ .timesync_read_rx_timestamp = dpaa_timesync_read_rx_timestamp,
+ .timesync_read_tx_timestamp = dpaa_timesync_read_tx_timestamp,
};
static bool
diff --git a/drivers/net/dpaa/dpaa_ethdev.h b/drivers/net/dpaa/dpaa_ethdev.h
index 0a1ceb376a..bbdb0936c0 100644
--- a/drivers/net/dpaa/dpaa_ethdev.h
+++ b/drivers/net/dpaa/dpaa_ethdev.h
@@ -151,6 +151,14 @@ struct dpaa_if {
void *netenv_handle;
void *scheme_handle[2];
uint32_t scheme_count;
+ /*stores timestamp of last received packet on dev*/
+ uint64_t rx_timestamp;
+ /*stores timestamp of last received tx confirmation packet on dev*/
+ uint64_t tx_timestamp;
+ /* stores pointer to next tx_conf queue that should be processed,
+ * it corresponds to last packet transmitted
+ */
+ struct qman_fq *next_tx_conf_queue;
void *vsp_handle[DPAA_VSP_PROFILE_MAX_NUM];
uint32_t vsp_bpid[DPAA_VSP_PROFILE_MAX_NUM];
@@ -233,6 +241,15 @@ struct dpaa_if_rx_bmi_stats {
uint32_t fmbm_rbdc; /**< Rx Buffers Deallocate Counter*/
};
+int
+dpaa_timesync_read_tx_timestamp(struct rte_eth_dev *dev,
+ struct timespec *timestamp);
+
+int
+dpaa_timesync_read_rx_timestamp(struct rte_eth_dev *dev,
+ struct timespec *timestamp,
+ uint32_t flags __rte_unused);
+
/* PMD related logs */
extern int dpaa_logtype_pmd;
#define RTE_LOGTYPE_DPAA_PMD dpaa_logtype_pmd
diff --git a/drivers/net/dpaa/dpaa_ptp.c b/drivers/net/dpaa/dpaa_ptp.c
new file mode 100644
index 0000000000..2ecdda6db0
--- /dev/null
+++ b/drivers/net/dpaa/dpaa_ptp.c
@@ -0,0 +1,42 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright 2022-2024 NXP
+ */
+
+/* System headers */
+#include <stdio.h>
+#include <inttypes.h>
+#include <unistd.h>
+
+#include <rte_ethdev.h>
+#include <rte_log.h>
+#include <rte_eth_ctrl.h>
+#include <rte_malloc.h>
+#include <rte_time.h>
+
+#include <dpaa_ethdev.h>
+#include <dpaa_rxtx.h>
+
+int dpaa_timesync_read_tx_timestamp(struct rte_eth_dev *dev,
+ struct timespec *timestamp)
+{
+ struct dpaa_if *dpaa_intf = dev->data->dev_private;
+
+ if (dpaa_intf->next_tx_conf_queue) {
+ while (!dpaa_intf->tx_timestamp)
+ dpaa_eth_tx_conf(dpaa_intf->next_tx_conf_queue);
+ } else {
+ return -1;
+ }
+ *timestamp = rte_ns_to_timespec(dpaa_intf->tx_timestamp);
+
+ return 0;
+}
+
+int dpaa_timesync_read_rx_timestamp(struct rte_eth_dev *dev,
+ struct timespec *timestamp,
+ uint32_t flags __rte_unused)
+{
+ struct dpaa_if *dpaa_intf = dev->data->dev_private;
+ *timestamp = rte_ns_to_timespec(dpaa_intf->rx_timestamp);
+ return 0;
+}
diff --git a/drivers/net/dpaa/dpaa_rxtx.c b/drivers/net/dpaa/dpaa_rxtx.c
index d1338d1654..e3b4bb14ab 100644
--- a/drivers/net/dpaa/dpaa_rxtx.c
+++ b/drivers/net/dpaa/dpaa_rxtx.c
@@ -1,7 +1,7 @@
/* SPDX-License-Identifier: BSD-3-Clause
*
* Copyright 2016 Freescale Semiconductor, Inc. All rights reserved.
- * Copyright 2017,2019-2021 NXP
+ * Copyright 2017,2019-2024 NXP
*
*/
@@ -49,7 +49,6 @@
#define DPAA_MBUF_TO_CONTIG_FD(_mbuf, _fd, _bpid) \
do { \
- (_fd)->cmd = 0; \
(_fd)->opaque_addr = 0; \
(_fd)->opaque = QM_FD_CONTIG << DPAA_FD_FORMAT_SHIFT; \
(_fd)->opaque |= ((_mbuf)->data_off) << DPAA_FD_OFFSET_SHIFT; \
@@ -122,6 +121,8 @@ static inline void dpaa_eth_packet_info(struct rte_mbuf *m, void *fd_virt_addr)
{
struct annotations_t *annot = GET_ANNOTATIONS(fd_virt_addr);
uint64_t prs = *((uintptr_t *)(&annot->parse)) & DPAA_PARSE_MASK;
+ struct rte_ether_hdr *eth_hdr =
+ rte_pktmbuf_mtod(m, struct rte_ether_hdr *);
DPAA_DP_LOG(DEBUG, " Parsing mbuf: %p with annotations: %p", m, annot);
@@ -241,6 +242,11 @@ static inline void dpaa_eth_packet_info(struct rte_mbuf *m, void *fd_virt_addr)
if (prs & DPAA_PARSE_VLAN_MASK)
m->ol_flags |= RTE_MBUF_F_RX_VLAN;
/* Packet received without stripping the vlan */
+
+ if (eth_hdr->ether_type == htons(RTE_ETHER_TYPE_1588)) {
+ m->ol_flags |= RTE_MBUF_F_RX_IEEE1588_PTP;
+ m->ol_flags |= RTE_MBUF_F_RX_IEEE1588_TMST;
+ }
}
static inline void dpaa_checksum(struct rte_mbuf *mbuf)
@@ -317,7 +323,7 @@ static inline void dpaa_checksum_offload(struct rte_mbuf *mbuf,
prs->ip_off[0] = mbuf->l2_len;
prs->l4_off = mbuf->l3_len + mbuf->l2_len;
/* Enable L3 (and L4, if TCP or UDP) HW checksum*/
- fd->cmd = DPAA_FD_CMD_RPD | DPAA_FD_CMD_DTC;
+ fd->cmd |= DPAA_FD_CMD_RPD | DPAA_FD_CMD_DTC;
}
static inline void
@@ -513,6 +519,7 @@ dpaa_rx_cb_no_prefetch(struct qman_fq **fq, struct qm_dqrr_entry **dqrr,
uint16_t offset, i;
uint32_t length;
uint8_t format;
+ struct annotations_t *annot;
bp_info = DPAA_BPID_TO_POOL_INFO(dqrr[0]->fd.bpid);
ptr = rte_dpaa_mem_ptov(qm_fd_addr(&dqrr[0]->fd));
@@ -554,6 +561,11 @@ dpaa_rx_cb_no_prefetch(struct qman_fq **fq, struct qm_dqrr_entry **dqrr,
rte_mbuf_refcnt_set(mbuf, 1);
dpaa_eth_packet_info(mbuf, mbuf->buf_addr);
dpaa_display_frame_info(fd, fq[0]->fqid, true);
+ if (dpaa_ieee_1588) {
+ annot = GET_ANNOTATIONS(mbuf->buf_addr);
+ dpaa_intf->rx_timestamp =
+ rte_cpu_to_be_64(annot->timestamp);
+ }
}
}
@@ -567,6 +579,7 @@ dpaa_rx_cb(struct qman_fq **fq, struct qm_dqrr_entry **dqrr,
uint16_t offset, i;
uint32_t length;
uint8_t format;
+ struct annotations_t *annot;
for (i = 0; i < num_bufs; i++) {
fd = &dqrr[i]->fd;
@@ -594,6 +607,11 @@ dpaa_rx_cb(struct qman_fq **fq, struct qm_dqrr_entry **dqrr,
rte_mbuf_refcnt_set(mbuf, 1);
dpaa_eth_packet_info(mbuf, mbuf->buf_addr);
dpaa_display_frame_info(fd, fq[0]->fqid, true);
+ if (dpaa_ieee_1588) {
+ annot = GET_ANNOTATIONS(mbuf->buf_addr);
+ dpaa_intf->rx_timestamp =
+ rte_cpu_to_be_64(annot->timestamp);
+ }
}
}
@@ -758,6 +776,8 @@ uint16_t dpaa_eth_queue_rx(void *q,
uint32_t num_rx = 0, ifid = ((struct dpaa_if *)fq->dpaa_intf)->ifid;
int num_rx_bufs, ret;
uint32_t vdqcr_flags = 0;
+ struct annotations_t *annot;
+ struct dpaa_if *dpaa_intf = fq->dpaa_intf;
if (unlikely(rte_dpaa_bpid_info == NULL &&
rte_eal_process_type() == RTE_PROC_SECONDARY))
@@ -800,6 +820,10 @@ uint16_t dpaa_eth_queue_rx(void *q,
continue;
bufs[num_rx++] = dpaa_eth_fd_to_mbuf(&dq->fd, ifid);
dpaa_display_frame_info(&dq->fd, fq->fqid, true);
+ if (dpaa_ieee_1588) {
+ annot = GET_ANNOTATIONS(bufs[num_rx - 1]->buf_addr);
+ dpaa_intf->rx_timestamp = rte_cpu_to_be_64(annot->timestamp);
+ }
qman_dqrr_consume(fq, dq);
} while (fq->flags & QMAN_FQ_STATE_VDQCR);
@@ -1095,6 +1119,7 @@ dpaa_eth_queue_tx(void *q, struct rte_mbuf **bufs, uint16_t nb_bufs)
struct dpaa_sw_buf_free buf_to_free[DPAA_MAX_SGS * DPAA_MAX_DEQUEUE_NUM_FRAMES];
uint32_t free_count = 0;
struct qman_fq *fq = q;
+ struct dpaa_if *dpaa_intf = fq->dpaa_intf;
struct qman_fq *fq_txconf = fq->tx_conf_queue;
if (unlikely(!DPAA_PER_LCORE_PORTAL)) {
@@ -1107,6 +1132,12 @@ dpaa_eth_queue_tx(void *q, struct rte_mbuf **bufs, uint16_t nb_bufs)
DPAA_DP_LOG(DEBUG, "Transmitting %d buffers on queue: %p", nb_bufs, q);
+ if (dpaa_ieee_1588) {
+ dpaa_intf->next_tx_conf_queue = fq_txconf;
+ dpaa_eth_tx_conf(fq_txconf);
+ dpaa_intf->tx_timestamp = 0;
+ }
+
while (nb_bufs) {
frames_to_send = (nb_bufs > DPAA_TX_BURST_SIZE) ?
DPAA_TX_BURST_SIZE : nb_bufs;
@@ -1119,6 +1150,14 @@ dpaa_eth_queue_tx(void *q, struct rte_mbuf **bufs, uint16_t nb_bufs)
if (dpaa_svr_family == SVR_LS1043A_FAMILY &&
(mbuf->data_off & 0x7F) != 0x0)
realloc_mbuf = 1;
+
+ fd_arr[loop].cmd = 0;
+ if (dpaa_ieee_1588) {
+ fd_arr[loop].cmd |= DPAA_FD_CMD_FCO |
+ qman_fq_fqid(fq_txconf);
+ fd_arr[loop].cmd |= DPAA_FD_CMD_RPD |
+ DPAA_FD_CMD_UPD;
+ }
seqn = *dpaa_seqn(mbuf);
if (seqn != DPAA_INVALID_MBUF_SEQN) {
index = seqn - 1;
@@ -1176,10 +1215,6 @@ dpaa_eth_queue_tx(void *q, struct rte_mbuf **bufs, uint16_t nb_bufs)
mbuf = temp_mbuf;
realloc_mbuf = 0;
}
-
- if (dpaa_ieee_1588)
- fd_arr[loop].cmd |= DPAA_FD_CMD_FCO | qman_fq_fqid(fq_txconf);
-
indirect_buf:
state = tx_on_dpaa_pool(mbuf, bp_info,
&fd_arr[loop],
@@ -1208,9 +1243,6 @@ dpaa_eth_queue_tx(void *q, struct rte_mbuf **bufs, uint16_t nb_bufs)
sent += frames_to_send;
}
- if (dpaa_ieee_1588)
- dpaa_eth_tx_conf(fq_txconf);
-
DPAA_DP_LOG(DEBUG, "Transmitted %d buffers on queue: %p", sent, q);
for (loop = 0; loop < free_count; loop++) {
@@ -1228,6 +1260,12 @@ dpaa_eth_tx_conf(void *q)
struct qm_dqrr_entry *dq;
int num_tx_conf, ret, dq_num;
uint32_t vdqcr_flags = 0;
+ struct dpaa_if *dpaa_intf = fq->dpaa_intf;
+ struct qm_dqrr_entry *dqrr;
+ struct dpaa_bp_info *bp_info;
+ struct rte_mbuf *mbuf;
+ void *ptr;
+ struct annotations_t *annot;
if (unlikely(rte_dpaa_bpid_info == NULL &&
rte_eal_process_type() == RTE_PROC_SECONDARY))
@@ -1252,7 +1290,20 @@ dpaa_eth_tx_conf(void *q)
dq = qman_dequeue(fq);
if (!dq)
continue;
+ dqrr = dq;
dq_num++;
+ bp_info = DPAA_BPID_TO_POOL_INFO(dqrr->fd.bpid);
+ ptr = rte_dpaa_mem_ptov(qm_fd_addr(&dqrr->fd));
+ rte_prefetch0((void *)((uint8_t *)ptr
+ + DEFAULT_RX_ICEOF));
+ mbuf = (struct rte_mbuf *)
+ ((char *)ptr - bp_info->meta_data_size);
+
+ if (mbuf->ol_flags & RTE_MBUF_F_TX_IEEE1588_TMST) {
+ annot = GET_ANNOTATIONS(mbuf->buf_addr);
+ dpaa_intf->tx_timestamp =
+ rte_cpu_to_be_64(annot->timestamp);
+ }
dpaa_display_frame_info(&dq->fd, fq->fqid, true);
qman_dqrr_consume(fq, dq);
dpaa_free_mbuf(&dq->fd);
diff --git a/drivers/net/dpaa/dpaa_rxtx.h b/drivers/net/dpaa/dpaa_rxtx.h
index 042602e087..1048e86d41 100644
--- a/drivers/net/dpaa/dpaa_rxtx.h
+++ b/drivers/net/dpaa/dpaa_rxtx.h
@@ -1,7 +1,7 @@
/* SPDX-License-Identifier: BSD-3-Clause
*
* Copyright 2016 Freescale Semiconductor, Inc. All rights reserved.
- * Copyright 2017,2020-2021 NXP
+ * Copyright 2017,2020-2022 NXP
*
*/
@@ -260,7 +260,7 @@ struct dpaa_eth_parse_results_t {
struct annotations_t {
uint8_t reserved[DEFAULT_RX_ICEOF];
struct dpaa_eth_parse_results_t parse; /**< Pointer to Parsed result*/
- uint64_t reserved1;
+ uint64_t timestamp;
uint64_t hash; /**< Hash Result */
};
diff --git a/drivers/net/dpaa/meson.build b/drivers/net/dpaa/meson.build
index 42e1f8c2e2..239858adda 100644
--- a/drivers/net/dpaa/meson.build
+++ b/drivers/net/dpaa/meson.build
@@ -14,6 +14,7 @@ sources = files(
'dpaa_flow.c',
'dpaa_rxtx.c',
'dpaa_fmc.c',
+ 'dpaa_ptp.c',
)
if cc.has_argument('-Wno-pointer-arith')
--
2.25.1
^ permalink raw reply [flat|nested] 129+ messages in thread
* [PATCH v4 10/18] net/dpaa: support IEEE 1588 PTP
2024-09-30 12:15 ` [PATCH v4 00/18] NXP DPAA ETH driver enhancement and fixes Hemant Agrawal
` (8 preceding siblings ...)
2024-09-30 12:15 ` [PATCH v4 09/18] net/dpaa: support Rx/Tx timestamp read Hemant Agrawal
@ 2024-09-30 12:15 ` Hemant Agrawal
2024-09-30 12:15 ` [PATCH v4 11/18] net/dpaa: implement detailed packet parsing Hemant Agrawal
` (9 subsequent siblings)
19 siblings, 0 replies; 129+ messages in thread
From: Hemant Agrawal @ 2024-09-30 12:15 UTC (permalink / raw)
To: dev; +Cc: ferruh.yigit, Vanshika Shukla
From: Vanshika Shukla <vanshika.shukla@nxp.com>
This patch adds support for the ethdev APIs to enable/disable
and to read/write/adjust IEEE 1588 PTP timestamps on the DPAA
platform.
Signed-off-by: Vanshika Shukla <vanshika.shukla@nxp.com>
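[Editor's note] For context, a minimal application-side sketch (not part of
this patch) of how the timesync ops wired up here are typically exercised
through the generic ethdev API; the port id, helper name and adjustment
value are illustrative:

#include <time.h>
#include <rte_ethdev.h>

/* Illustrative only: exercise the timesync ops added by this patch. */
static int
ptp_clock_demo(uint16_t port_id)
{
	struct timespec ts;
	int ret;

	ret = rte_eth_timesync_enable(port_id);
	if (ret)
		return ret;

	/* Read the FMan RTC through the new timesync_read_time op. */
	ret = rte_eth_timesync_read_time(port_id, &ts);
	if (ret)
		return ret;

	/* Nudge the clock forward by 1 us via timesync_adjust_time. */
	ret = rte_eth_timesync_adjust_time(port_id, 1000);
	if (ret)
		return ret;

	return rte_eth_timesync_disable(port_id);
}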
---
doc/guides/nics/dpaa.rst | 1 +
doc/guides/nics/features/dpaa.ini | 1 +
drivers/bus/dpaa/base/fman/fman.c | 15 ++++++
drivers/bus/dpaa/include/fman.h | 45 +++++++++++++++++
drivers/net/dpaa/dpaa_ethdev.c | 5 ++
drivers/net/dpaa/dpaa_ethdev.h | 16 +++++++
drivers/net/dpaa/dpaa_ptp.c | 80 ++++++++++++++++++++++++++++++-
7 files changed, 161 insertions(+), 2 deletions(-)
diff --git a/doc/guides/nics/dpaa.rst b/doc/guides/nics/dpaa.rst
index acf4daab02..ea86e6146c 100644
--- a/doc/guides/nics/dpaa.rst
+++ b/doc/guides/nics/dpaa.rst
@@ -148,6 +148,7 @@ Features
- Packet type information
- Checksum offload
- Promiscuous mode
+ - IEEE1588 PTP
DPAA Mempool Driver
~~~~~~~~~~~~~~~~~~~
diff --git a/doc/guides/nics/features/dpaa.ini b/doc/guides/nics/features/dpaa.ini
index 4196dd800c..4f31b61de1 100644
--- a/doc/guides/nics/features/dpaa.ini
+++ b/doc/guides/nics/features/dpaa.ini
@@ -19,6 +19,7 @@ Flow control = Y
L3 checksum offload = Y
L4 checksum offload = Y
Packet type parsing = Y
+Timesync = Y
Timestamp offload = Y
Basic stats = Y
Extended stats = Y
diff --git a/drivers/bus/dpaa/base/fman/fman.c b/drivers/bus/dpaa/base/fman/fman.c
index e39bc8c252..e2b7120237 100644
--- a/drivers/bus/dpaa/base/fman/fman.c
+++ b/drivers/bus/dpaa/base/fman/fman.c
@@ -28,6 +28,7 @@ u32 fman_dealloc_bufs_mask_lo;
int fman_ccsr_map_fd = -1;
static COMPAT_LIST_HEAD(__ifs);
+void *rtc_map;
/* This is the (const) global variable that callers have read-only access to.
* Internally, we have read-write access directly to __ifs.
@@ -539,6 +540,20 @@ fman_if_init(const struct device_node *dpa_node)
goto err;
}
+ if (!rtc_map) {
+ __if->rtc_map = mmap(NULL, FMAN_IEEE_1588_SIZE,
+ PROT_READ | PROT_WRITE, MAP_SHARED,
+ fman_ccsr_map_fd, FMAN_IEEE_1588_OFFSET);
+ if (__if->rtc_map == MAP_FAILED) {
+ pr_err("Can not map FMan RTC regs base\n");
+ _errno = -EINVAL;
+ goto err;
+ }
+ rtc_map = __if->rtc_map;
+ } else {
+ __if->rtc_map = rtc_map;
+ }
+
/* No channel ID for MAC-less */
assert(lenp == sizeof(*tx_channel_id));
na = of_n_addr_cells(mac_node);
diff --git a/drivers/bus/dpaa/include/fman.h b/drivers/bus/dpaa/include/fman.h
index 09d1ddb897..e8bc913943 100644
--- a/drivers/bus/dpaa/include/fman.h
+++ b/drivers/bus/dpaa/include/fman.h
@@ -64,6 +64,12 @@
#define GROUP_ADDRESS 0x0000010000000000LL
#define HASH_CTRL_ADDR_MASK 0x0000003F
+#define FMAN_RTC_MAX_NUM_OF_ALARMS 3
+#define FMAN_RTC_MAX_NUM_OF_PERIODIC_PULSES 4
+#define FMAN_RTC_MAX_NUM_OF_EXT_TRIGGERS 3
+#define FMAN_IEEE_1588_OFFSET 0X1AFE000
+#define FMAN_IEEE_1588_SIZE 4096
+
/* Pre definitions of FMAN interface and Bpool structures */
struct __fman_if;
struct fman_if_bpool;
@@ -307,6 +313,44 @@ struct tx_bmi_regs {
uint32_t fmbm_trlmts; /**< Tx Rate Limiter Scale*/
uint32_t fmbm_trlmt; /**< Tx Rate Limiter*/
};
+
+/* Description FM RTC timer alarm */
+struct t_tmr_alarm {
+ uint32_t tmr_alarm_h;
+ uint32_t tmr_alarm_l;
+};
+
+/* Description FM RTC timer Ex trigger */
+struct t_tmr_ext_trigger {
+ uint32_t tmr_etts_h;
+ uint32_t tmr_etts_l;
+};
+
+struct rtc_regs {
+ uint32_t tmr_id; /* 0x000 Module ID register */
+ uint32_t tmr_id2; /* 0x004 Controller ID register */
+ uint32_t reserved0008[30];
+ uint32_t tmr_ctrl; /* 0x0080 timer control register */
+ uint32_t tmr_tevent; /* 0x0084 timer event register */
+ uint32_t tmr_temask; /* 0x0088 timer event mask register */
+ uint32_t reserved008c[3];
+ uint32_t tmr_cnt_h; /* 0x0098 timer counter high register */
+ uint32_t tmr_cnt_l; /* 0x009c timer counter low register */
+ uint32_t tmr_add; /* 0x00a0 timer drift compensation addend register */
+ uint32_t tmr_acc; /* 0x00a4 timer accumulator register */
+ uint32_t tmr_prsc; /* 0x00a8 timer prescale */
+ uint32_t reserved00ac;
+ uint32_t tmr_off_h; /* 0x00b0 timer offset high */
+ uint32_t tmr_off_l; /* 0x00b4 timer offset low */
+ struct t_tmr_alarm tmr_alarm[FMAN_RTC_MAX_NUM_OF_ALARMS];
+ /* 0x00b8 timer alarm */
+ uint32_t tmr_fiper[FMAN_RTC_MAX_NUM_OF_PERIODIC_PULSES];
+ /* 0x00d0 timer fixed period interval */
+ struct t_tmr_ext_trigger tmr_etts[FMAN_RTC_MAX_NUM_OF_EXT_TRIGGERS];
+ /* 0x00e0 time stamp general purpose external */
+ uint32_t reserved00f0[4];
+};
+
struct fman_port_qmi_regs {
uint32_t fmqm_pnc; /**< PortID n Configuration Register */
uint32_t fmqm_pns; /**< PortID n Status Register */
@@ -396,6 +440,7 @@ struct __fman_if {
void *ccsr_map;
void *bmi_map;
void *tx_bmi_map;
+ void *rtc_map;
void *qmi_map;
struct list_head node;
};
diff --git a/drivers/net/dpaa/dpaa_ethdev.c b/drivers/net/dpaa/dpaa_ethdev.c
index 682cb1c77e..82d1960356 100644
--- a/drivers/net/dpaa/dpaa_ethdev.c
+++ b/drivers/net/dpaa/dpaa_ethdev.c
@@ -1673,6 +1673,11 @@ static struct eth_dev_ops dpaa_devops = {
.rx_queue_intr_disable = dpaa_dev_queue_intr_disable,
.rss_hash_update = dpaa_dev_rss_hash_update,
.rss_hash_conf_get = dpaa_dev_rss_hash_conf_get,
+ .timesync_enable = dpaa_timesync_enable,
+ .timesync_disable = dpaa_timesync_disable,
+ .timesync_read_time = dpaa_timesync_read_time,
+ .timesync_write_time = dpaa_timesync_write_time,
+ .timesync_adjust_time = dpaa_timesync_adjust_time,
.timesync_read_rx_timestamp = dpaa_timesync_read_rx_timestamp,
.timesync_read_tx_timestamp = dpaa_timesync_read_tx_timestamp,
};
diff --git a/drivers/net/dpaa/dpaa_ethdev.h b/drivers/net/dpaa/dpaa_ethdev.h
index bbdb0936c0..7884cc034c 100644
--- a/drivers/net/dpaa/dpaa_ethdev.h
+++ b/drivers/net/dpaa/dpaa_ethdev.h
@@ -245,6 +245,22 @@ int
dpaa_timesync_read_tx_timestamp(struct rte_eth_dev *dev,
struct timespec *timestamp);
+int
+dpaa_timesync_enable(struct rte_eth_dev *dev);
+
+int
+dpaa_timesync_disable(struct rte_eth_dev *dev);
+
+int
+dpaa_timesync_read_time(struct rte_eth_dev *dev,
+ struct timespec *timestamp);
+
+int
+dpaa_timesync_write_time(struct rte_eth_dev *dev,
+ const struct timespec *timestamp);
+int
+dpaa_timesync_adjust_time(struct rte_eth_dev *dev, int64_t delta);
+
int
dpaa_timesync_read_rx_timestamp(struct rte_eth_dev *dev,
struct timespec *timestamp,
diff --git a/drivers/net/dpaa/dpaa_ptp.c b/drivers/net/dpaa/dpaa_ptp.c
index 2ecdda6db0..48e29e22eb 100644
--- a/drivers/net/dpaa/dpaa_ptp.c
+++ b/drivers/net/dpaa/dpaa_ptp.c
@@ -16,7 +16,82 @@
#include <dpaa_ethdev.h>
#include <dpaa_rxtx.h>
-int dpaa_timesync_read_tx_timestamp(struct rte_eth_dev *dev,
+int
+dpaa_timesync_enable(struct rte_eth_dev *dev __rte_unused)
+{
+ return 0;
+}
+
+int
+dpaa_timesync_disable(struct rte_eth_dev *dev __rte_unused)
+{
+ return 0;
+}
+
+int
+dpaa_timesync_read_time(struct rte_eth_dev *dev,
+ struct timespec *timestamp)
+{
+ uint32_t *tmr_cnt_h, *tmr_cnt_l;
+ struct __fman_if *__fif;
+ struct fman_if *fif;
+ uint64_t time;
+
+ fif = dev->process_private;
+ __fif = container_of(fif, struct __fman_if, __if);
+
+ tmr_cnt_h = &((struct rtc_regs *)__fif->rtc_map)->tmr_cnt_h;
+ tmr_cnt_l = &((struct rtc_regs *)__fif->rtc_map)->tmr_cnt_l;
+
+ time = (uint64_t)in_be32(tmr_cnt_l);
+ time |= ((uint64_t)in_be32(tmr_cnt_h) << 32);
+
+ *timestamp = rte_ns_to_timespec(time);
+ return 0;
+}
+
+int
+dpaa_timesync_write_time(struct rte_eth_dev *dev,
+ const struct timespec *ts)
+{
+ uint32_t *tmr_cnt_h, *tmr_cnt_l;
+ struct __fman_if *__fif;
+ struct fman_if *fif;
+ uint64_t time;
+
+ fif = dev->process_private;
+ __fif = container_of(fif, struct __fman_if, __if);
+
+ tmr_cnt_h = &((struct rtc_regs *)__fif->rtc_map)->tmr_cnt_h;
+ tmr_cnt_l = &((struct rtc_regs *)__fif->rtc_map)->tmr_cnt_l;
+
+ time = rte_timespec_to_ns(ts);
+
+ out_be32(tmr_cnt_l, (uint32_t)time);
+ out_be32(tmr_cnt_h, (uint32_t)(time >> 32));
+
+ return 0;
+}
+
+int
+dpaa_timesync_adjust_time(struct rte_eth_dev *dev, int64_t delta)
+{
+ struct timespec ts = {0, 0}, *timestamp = &ts;
+ uint64_t ns;
+
+ dpaa_timesync_read_time(dev, timestamp);
+
+ ns = rte_timespec_to_ns(timestamp);
+ ns += delta;
+ *timestamp = rte_ns_to_timespec(ns);
+
+ dpaa_timesync_write_time(dev, timestamp);
+
+ return 0;
+}
+
+int
+dpaa_timesync_read_tx_timestamp(struct rte_eth_dev *dev,
struct timespec *timestamp)
{
struct dpaa_if *dpaa_intf = dev->data->dev_private;
@@ -32,7 +107,8 @@ int dpaa_timesync_read_tx_timestamp(struct rte_eth_dev *dev,
return 0;
}
-int dpaa_timesync_read_rx_timestamp(struct rte_eth_dev *dev,
+int
+dpaa_timesync_read_rx_timestamp(struct rte_eth_dev *dev,
struct timespec *timestamp,
uint32_t flags __rte_unused)
{
--
2.25.1
^ permalink raw reply [flat|nested] 129+ messages in thread
* [PATCH v4 11/18] net/dpaa: implement detailed packet parsing
2024-09-30 12:15 ` [PATCH v4 00/18] NXP DPAA ETH driver enhancement and fixes Hemant Agrawal
` (9 preceding siblings ...)
2024-09-30 12:15 ` [PATCH v4 10/18] net/dpaa: support IEEE 1588 PTP Hemant Agrawal
@ 2024-09-30 12:15 ` Hemant Agrawal
2024-09-30 12:15 ` [PATCH v4 12/18] net/dpaa: enhance DPAA frame display Hemant Agrawal
` (8 subsequent siblings)
19 siblings, 0 replies; 129+ messages in thread
From: Hemant Agrawal @ 2024-09-30 12:15 UTC (permalink / raw)
To: dev; +Cc: ferruh.yigit, Jun Yang
This patch implements detailed packet parsing using the
annotation information from the hardware.
The parser result is decoded in dpaa_slow_parsing() to set the
Rx mbuf packet type, adding support to identify IPsec ESP, GRE
and SCTP packets.
Signed-off-by: Jun Yang <jun.yang@nxp.com>
Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
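[Editor's note] As background (not part of this patch), an application
consumes the packet types set by dpaa_slow_parsing() via the standard mbuf
ptype fields; the helper name below is illustrative:

#include <stdio.h>
#include <rte_mbuf.h>
#include <rte_mbuf_ptype.h>

/* Illustrative only: read the ptype bits filled in by the FMan parse result. */
static void
show_ptype(const struct rte_mbuf *m)
{
	uint32_t ptype = m->packet_type;

	if (RTE_ETH_IS_IPV4_HDR(ptype))
		printf("IPv4\n");
	if ((ptype & RTE_PTYPE_TUNNEL_MASK) == RTE_PTYPE_TUNNEL_GRE)
		printf("GRE tunnel\n");
	if ((ptype & RTE_PTYPE_TUNNEL_MASK) == RTE_PTYPE_TUNNEL_ESP)
		printf("IPsec ESP\n");
	if ((ptype & RTE_PTYPE_L4_MASK) == RTE_PTYPE_L4_SCTP)
		printf("SCTP\n");
}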
---
drivers/net/dpaa/dpaa_ethdev.c | 1 +
drivers/net/dpaa/dpaa_rxtx.c | 35 +++++++-
drivers/net/dpaa/dpaa_rxtx.h | 143 ++++++++++++++-------------------
3 files changed, 93 insertions(+), 86 deletions(-)
diff --git a/drivers/net/dpaa/dpaa_ethdev.c b/drivers/net/dpaa/dpaa_ethdev.c
index 82d1960356..a302b24be6 100644
--- a/drivers/net/dpaa/dpaa_ethdev.c
+++ b/drivers/net/dpaa/dpaa_ethdev.c
@@ -411,6 +411,7 @@ dpaa_supported_ptypes_get(struct rte_eth_dev *dev, size_t *no_of_elements)
RTE_PTYPE_L4_UDP,
RTE_PTYPE_L4_SCTP,
RTE_PTYPE_TUNNEL_ESP,
+ RTE_PTYPE_TUNNEL_GRE,
};
PMD_INIT_FUNC_TRACE();
diff --git a/drivers/net/dpaa/dpaa_rxtx.c b/drivers/net/dpaa/dpaa_rxtx.c
index e3b4bb14ab..99fc3f1b43 100644
--- a/drivers/net/dpaa/dpaa_rxtx.c
+++ b/drivers/net/dpaa/dpaa_rxtx.c
@@ -110,11 +110,38 @@ static void dpaa_display_frame_info(const struct qm_fd *fd,
#define dpaa_display_frame_info(a, b, c)
#endif
-static inline void dpaa_slow_parsing(struct rte_mbuf *m __rte_unused,
- uint64_t prs __rte_unused)
+static inline void
+dpaa_slow_parsing(struct rte_mbuf *m,
+ const struct annotations_t *annot)
{
+ const struct dpaa_eth_parse_results_t *parse;
+
DPAA_DP_LOG(DEBUG, "Slow parsing");
- /*TBD:XXX: to be implemented*/
+ parse = &annot->parse;
+
+ if (parse->ethernet)
+ m->packet_type |= RTE_PTYPE_L2_ETHER;
+ if (parse->vlan)
+ m->packet_type |= RTE_PTYPE_L2_ETHER_VLAN;
+ if (parse->first_ipv4)
+ m->packet_type |= RTE_PTYPE_L3_IPV4;
+ if (parse->first_ipv6)
+ m->packet_type |= RTE_PTYPE_L3_IPV6;
+ if (parse->gre)
+ m->packet_type |= RTE_PTYPE_TUNNEL_GRE;
+ if (parse->last_ipv4)
+ m->packet_type |= RTE_PTYPE_L3_IPV4_EXT;
+ if (parse->last_ipv6)
+ m->packet_type |= RTE_PTYPE_L3_IPV6_EXT;
+ if (parse->l4_type == DPAA_PR_L4_TCP_TYPE)
+ m->packet_type |= RTE_PTYPE_L4_TCP;
+ else if (parse->l4_type == DPAA_PR_L4_UDP_TYPE)
+ m->packet_type |= RTE_PTYPE_L4_UDP;
+ else if (parse->l4_type == DPAA_PR_L4_IPSEC_TYPE &&
+ !parse->l4_info_err && parse->esp_sum)
+ m->packet_type |= RTE_PTYPE_TUNNEL_ESP;
+ else if (parse->l4_type == DPAA_PR_L4_SCTP_TYPE)
+ m->packet_type |= RTE_PTYPE_L4_SCTP;
}
static inline void dpaa_eth_packet_info(struct rte_mbuf *m, void *fd_virt_addr)
@@ -228,7 +255,7 @@ static inline void dpaa_eth_packet_info(struct rte_mbuf *m, void *fd_virt_addr)
break;
/* More switch cases can be added */
default:
- dpaa_slow_parsing(m, prs);
+ dpaa_slow_parsing(m, annot);
}
m->tx_offload = annot->parse.ip_off[0];
diff --git a/drivers/net/dpaa/dpaa_rxtx.h b/drivers/net/dpaa/dpaa_rxtx.h
index 1048e86d41..215bdeaf7f 100644
--- a/drivers/net/dpaa/dpaa_rxtx.h
+++ b/drivers/net/dpaa/dpaa_rxtx.h
@@ -1,7 +1,7 @@
/* SPDX-License-Identifier: BSD-3-Clause
*
* Copyright 2016 Freescale Semiconductor, Inc. All rights reserved.
- * Copyright 2017,2020-2022 NXP
+ * Copyright 2017,2020-2024 NXP
*
*/
@@ -162,98 +162,77 @@
#define DPAA_PKT_L3_LEN_SHIFT 7
+enum dpaa_parse_result_l4_type {
+ DPAA_PR_L4_TCP_TYPE = 1,
+ DPAA_PR_L4_UDP_TYPE = 2,
+ DPAA_PR_L4_IPSEC_TYPE = 3,
+ DPAA_PR_L4_SCTP_TYPE = 4,
+ DPAA_PR_L4_DCCP_TYPE = 5
+};
+
/**
* FMan parse result array
*/
struct dpaa_eth_parse_results_t {
- uint8_t lpid; /**< Logical port id */
- uint8_t shimr; /**< Shim header result */
- union {
- uint16_t l2r; /**< Layer 2 result */
+ uint8_t lpid; /**< Logical port id */
+ uint8_t shimr; /**< Shim header result */
+ union {
+ uint16_t l2r; /**< Layer 2 result */
struct {
-#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
- uint16_t ethernet:1;
- uint16_t vlan:1;
- uint16_t llc_snap:1;
- uint16_t mpls:1;
- uint16_t ppoe_ppp:1;
- uint16_t unused_1:3;
- uint16_t unknown_eth_proto:1;
- uint16_t eth_frame_type:2;
- uint16_t l2r_err:5;
+ uint16_t unused_1:3;
+ uint16_t ppoe_ppp:1;
+ uint16_t mpls:1;
+ uint16_t llc_snap:1;
+ uint16_t vlan:1;
+ uint16_t ethernet:1;
+
+ uint16_t l2r_err:5;
+ uint16_t eth_frame_type:2;
/*00-unicast, 01-multicast, 11-broadcast*/
-#else
- uint16_t l2r_err:5;
- uint16_t eth_frame_type:2;
- uint16_t unknown_eth_proto:1;
- uint16_t unused_1:3;
- uint16_t ppoe_ppp:1;
- uint16_t mpls:1;
- uint16_t llc_snap:1;
- uint16_t vlan:1;
- uint16_t ethernet:1;
-#endif
+ uint16_t unknown_eth_proto:1;
} __rte_packed;
- } __rte_packed;
- union {
- uint16_t l3r; /**< Layer 3 result */
+ } __rte_packed;
+ union {
+ uint16_t l3r; /**< Layer 3 result */
struct {
-#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
- uint16_t first_ipv4:1;
- uint16_t first_ipv6:1;
- uint16_t gre:1;
- uint16_t min_enc:1;
- uint16_t last_ipv4:1;
- uint16_t last_ipv6:1;
- uint16_t first_info_err:1;/*0 info, 1 error*/
- uint16_t first_ip_err_code:5;
- uint16_t last_info_err:1; /*0 info, 1 error*/
- uint16_t last_ip_err_code:3;
-#else
- uint16_t last_ip_err_code:3;
- uint16_t last_info_err:1; /*0 info, 1 error*/
- uint16_t first_ip_err_code:5;
- uint16_t first_info_err:1;/*0 info, 1 error*/
- uint16_t last_ipv6:1;
- uint16_t last_ipv4:1;
- uint16_t min_enc:1;
- uint16_t gre:1;
- uint16_t first_ipv6:1;
- uint16_t first_ipv4:1;
-#endif
+ uint16_t unused_2:1;
+ uint16_t l3_err:1;
+ uint16_t last_ipv6:1;
+ uint16_t last_ipv4:1;
+ uint16_t min_enc:1;
+ uint16_t gre:1;
+ uint16_t first_ipv6:1;
+ uint16_t first_ipv4:1;
+
+ uint16_t unused_3:8;
} __rte_packed;
- } __rte_packed;
- union {
- uint8_t l4r; /**< Layer 4 result */
+ } __rte_packed;
+ union {
+ uint8_t l4r; /**< Layer 4 result */
struct{
-#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
- uint8_t l4_type:3;
- uint8_t l4_info_err:1;
- uint8_t l4_result:4;
- /* if type IPSec: 1 ESP, 2 AH */
-#else
- uint8_t l4_result:4;
- /* if type IPSec: 1 ESP, 2 AH */
- uint8_t l4_info_err:1;
- uint8_t l4_type:3;
-#endif
+ uint8_t l4cv:1;
+ uint8_t unused_4:1;
+ uint8_t ah:1;
+ uint8_t esp_sum:1;
+ uint8_t l4_info_err:1;
+ uint8_t l4_type:3;
} __rte_packed;
- } __rte_packed;
- uint8_t cplan; /**< Classification plan id */
- uint16_t nxthdr; /**< Next Header */
- uint16_t cksum; /**< Checksum */
- uint32_t lcv; /**< LCV */
- uint8_t shim_off[3]; /**< Shim offset */
- uint8_t eth_off; /**< ETH offset */
- uint8_t llc_snap_off; /**< LLC_SNAP offset */
- uint8_t vlan_off[2]; /**< VLAN offset */
- uint8_t etype_off; /**< ETYPE offset */
- uint8_t pppoe_off; /**< PPP offset */
- uint8_t mpls_off[2]; /**< MPLS offset */
- uint8_t ip_off[2]; /**< IP offset */
- uint8_t gre_off; /**< GRE offset */
- uint8_t l4_off; /**< Layer 4 offset */
- uint8_t nxthdr_off; /**< Parser end point */
+ } __rte_packed;
+ uint8_t cplan; /**< Classification plan id */
+ uint16_t nxthdr; /**< Next Header */
+ uint16_t cksum; /**< Checksum */
+ uint32_t lcv; /**< LCV */
+ uint8_t shim_off[3]; /**< Shim offset */
+ uint8_t eth_off; /**< ETH offset */
+ uint8_t llc_snap_off; /**< LLC_SNAP offset */
+ uint8_t vlan_off[2]; /**< VLAN offset */
+ uint8_t etype_off; /**< ETYPE offset */
+ uint8_t pppoe_off; /**< PPP offset */
+ uint8_t mpls_off[2]; /**< MPLS offset */
+ uint8_t ip_off[2]; /**< IP offset */
+ uint8_t gre_off; /**< GRE offset */
+ uint8_t l4_off; /**< Layer 4 offset */
+ uint8_t nxthdr_off; /**< Parser end point */
} __rte_packed;
/* The structure is the Prepended Data to the Frame which is used by FMAN */
--
2.25.1
^ permalink raw reply [flat|nested] 129+ messages in thread
* [PATCH v4 12/18] net/dpaa: enhance DPAA frame display
2024-09-30 12:15 ` [PATCH v4 00/18] NXP DPAA ETH driver enhancement and fixes Hemant Agrawal
` (10 preceding siblings ...)
2024-09-30 12:15 ` [PATCH v4 11/18] net/dpaa: implement detailed packet parsing Hemant Agrawal
@ 2024-09-30 12:15 ` Hemant Agrawal
2024-09-30 12:15 ` [PATCH v4 13/18] net/dpaa: support mempool debug Hemant Agrawal
` (7 subsequent siblings)
19 siblings, 0 replies; 129+ messages in thread
From: Hemant Agrawal @ 2024-09-30 12:15 UTC (permalink / raw)
To: dev; +Cc: ferruh.yigit, Jun Yang
This patch enhances the received packet debugging capability.
It helps display the full packet parsing output.
Signed-off-by: Jun Yang <jun.yang@nxp.com>
Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
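[Editor's note] As a usage sketch (not part of this patch), the debug dump is
driven by the environment variable documented in the dpaa.rst hunk below; it
can be exported in the shell or, as in this illustrative snippet, set from the
application before EAL init:

#include <stdlib.h>
#include <rte_eal.h>

int
main(int argc, char **argv)
{
	/* Only honoured when the PMD is built with RTE_LIBRTE_DPAA_DEBUG_DRIVER;
	 * a non-zero value forces every frame to be dumped, not only errored ones.
	 */
	setenv("DPAA_DISPLAY_FRAME_AND_PARSER_RESULT", "1", 1);

	if (rte_eal_init(argc, argv) < 0)
		return -1;

	/* ... normal Rx/Tx processing ... */
	return 0;
}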
---
doc/guides/nics/dpaa.rst | 5 ++
drivers/net/dpaa/dpaa_ethdev.c | 9 +++
drivers/net/dpaa/dpaa_rxtx.c | 138 +++++++++++++++++++++++++++------
drivers/net/dpaa/dpaa_rxtx.h | 5 ++
4 files changed, 133 insertions(+), 24 deletions(-)
diff --git a/doc/guides/nics/dpaa.rst b/doc/guides/nics/dpaa.rst
index ea86e6146c..edf7a7e350 100644
--- a/doc/guides/nics/dpaa.rst
+++ b/doc/guides/nics/dpaa.rst
@@ -227,6 +227,11 @@ state during application initialization:
application want to use eventdev with DPAA device.
Currently these queues are not used for LS1023/LS1043 platform by default.
+- ``DPAA_DISPLAY_FRAME_AND_PARSER_RESULT`` (default 0)
+
+ This defines the debug flag, whether to dump the detailed frame and packet
+ parsing result for the incoming packets.
+
Driver compilation and testing
------------------------------
diff --git a/drivers/net/dpaa/dpaa_ethdev.c b/drivers/net/dpaa/dpaa_ethdev.c
index a302b24be6..4ead890278 100644
--- a/drivers/net/dpaa/dpaa_ethdev.c
+++ b/drivers/net/dpaa/dpaa_ethdev.c
@@ -2056,6 +2056,9 @@ dpaa_dev_init(struct rte_eth_dev *eth_dev)
int8_t dev_vspids[DPAA_MAX_NUM_PCD_QUEUES];
int8_t vsp_id = -1;
struct rte_device *dev = eth_dev->device;
+#ifdef RTE_LIBRTE_DPAA_DEBUG_DRIVER
+ char *penv;
+#endif
PMD_INIT_FUNC_TRACE();
@@ -2135,6 +2138,12 @@ dpaa_dev_init(struct rte_eth_dev *eth_dev)
td_tx_threshold = CGR_RX_PERFQ_THRESH;
}
+#ifdef RTE_LIBRTE_DPAA_DEBUG_DRIVER
+ penv = getenv("DPAA_DISPLAY_FRAME_AND_PARSER_RESULT");
+ if (penv)
+ dpaa_force_display_frame_set(atoi(penv));
+#endif
+
/* If congestion control is enabled globally*/
if (num_rx_fqs > 0 && td_threshold) {
dpaa_intf->cgr_rx = rte_zmalloc(NULL,
diff --git a/drivers/net/dpaa/dpaa_rxtx.c b/drivers/net/dpaa/dpaa_rxtx.c
index 99fc3f1b43..945c84ab10 100644
--- a/drivers/net/dpaa/dpaa_rxtx.c
+++ b/drivers/net/dpaa/dpaa_rxtx.c
@@ -47,6 +47,10 @@
#include <dpaa_of.h>
#include <netcfg.h>
+#ifdef RTE_LIBRTE_DPAA_DEBUG_DRIVER
+static int s_force_display_frm;
+#endif
+
#define DPAA_MBUF_TO_CONTIG_FD(_mbuf, _fd, _bpid) \
do { \
(_fd)->opaque_addr = 0; \
@@ -58,37 +62,122 @@
} while (0)
#ifdef RTE_LIBRTE_DPAA_DEBUG_DRIVER
+void
+dpaa_force_display_frame_set(int set)
+{
+ s_force_display_frm = set;
+}
+
#define DISPLAY_PRINT printf
-static void dpaa_display_frame_info(const struct qm_fd *fd,
- uint32_t fqid, bool rx)
+static void
+dpaa_display_frame_info(const struct qm_fd *fd,
+ uint32_t fqid, bool rx)
{
- int ii;
- char *ptr;
+ int pos, offset = 0;
+ char *ptr, info[1024];
struct annotations_t *annot = rte_dpaa_mem_ptov(fd->addr);
uint8_t format;
+ const struct dpaa_eth_parse_results_t *psr;
- if (!fd->status) {
- /* Do not display correct packets.*/
+ if (!fd->status && !s_force_display_frm) {
+ /* Do not display correct packets unless force display.*/
return;
}
+ psr = &annot->parse;
- format = (fd->opaque & DPAA_FD_FORMAT_MASK) >>
- DPAA_FD_FORMAT_SHIFT;
-
- DISPLAY_PRINT("fqid %d bpid %d addr 0x%lx, format %d\r\n",
- fqid, fd->bpid, (unsigned long)fd->addr, fd->format);
- DISPLAY_PRINT("off %d, len %d stat 0x%x\r\n",
- fd->offset, fd->length20, fd->status);
+ format = (fd->opaque & DPAA_FD_FORMAT_MASK) >> DPAA_FD_FORMAT_SHIFT;
+ if (format == qm_fd_contig)
+ sprintf(info, "simple");
+ else if (format == qm_fd_sg)
+ sprintf(info, "sg");
+ else
+ sprintf(info, "unknown format(%d)", format);
+
+ DISPLAY_PRINT("%s: fqid=%08x, bpid=%d, phy addr=0x%lx ",
+ rx ? "RX" : "TX", fqid, fd->bpid, (unsigned long)fd->addr);
+ DISPLAY_PRINT("format=%s offset=%d, len=%d, stat=0x%x\r\n",
+ info, fd->offset, fd->length20, fd->status);
if (rx) {
- ptr = (char *)&annot->parse;
- DISPLAY_PRINT("RX parser result:\r\n");
- for (ii = 0; ii < (int)sizeof(struct dpaa_eth_parse_results_t);
- ii++) {
- DISPLAY_PRINT("%02x ", ptr[ii]);
- if (((ii + 1) % 16) == 0)
- DISPLAY_PRINT("\n");
+ DISPLAY_PRINT("Display usual RX parser result:\r\n");
+ if (psr->eth_frame_type == 0)
+ offset += sprintf(&info[offset], "unicast");
+ else if (psr->eth_frame_type == 1)
+ offset += sprintf(&info[offset], "multicast");
+ else if (psr->eth_frame_type == 3)
+ offset += sprintf(&info[offset], "broadcast");
+ else
+ offset += sprintf(&info[offset], "unknown eth type(%d)",
+ psr->eth_frame_type);
+ if (psr->l2r_err) {
+ offset += sprintf(&info[offset], " L2 error(%d)",
+ psr->l2r_err);
+ } else {
+ offset += sprintf(&info[offset], " L2 non error");
}
- DISPLAY_PRINT("\n");
+ DISPLAY_PRINT("L2: %s, %s, ethernet type:%s\r\n",
+ psr->ethernet ? "is ethernet" : "non ethernet",
+ psr->vlan ? "is vlan" : "non vlan", info);
+
+ offset = 0;
+ DISPLAY_PRINT("L3: %s/%s, %s/%s, %s, %s\r\n",
+ psr->first_ipv4 ? "first IPv4" : "non first IPv4",
+ psr->last_ipv4 ? "last IPv4" : "non last IPv4",
+ psr->first_ipv6 ? "first IPv6" : "non first IPv6",
+ psr->last_ipv6 ? "last IPv6" : "non last IPv6",
+ psr->gre ? "GRE" : "non GRE",
+ psr->l3_err ? "L3 has error" : "L3 non error");
+
+ if (psr->l4_type == DPAA_PR_L4_TCP_TYPE) {
+ offset += sprintf(&info[offset], "tcp");
+ } else if (psr->l4_type == DPAA_PR_L4_UDP_TYPE) {
+ offset += sprintf(&info[offset], "udp");
+ } else if (psr->l4_type == DPAA_PR_L4_IPSEC_TYPE) {
+ offset += sprintf(&info[offset], "IPSec ");
+ if (psr->esp_sum)
+ offset += sprintf(&info[offset], "ESP");
+ if (psr->ah)
+ offset += sprintf(&info[offset], "AH");
+ } else if (psr->l4_type == DPAA_PR_L4_SCTP_TYPE) {
+ offset += sprintf(&info[offset], "sctp");
+ } else if (psr->l4_type == DPAA_PR_L4_DCCP_TYPE) {
+ offset += sprintf(&info[offset], "dccp");
+ } else {
+ offset += sprintf(&info[offset], "unknown l4 type(%d)",
+ psr->l4_type);
+ }
+ DISPLAY_PRINT("L4: type:%s, L4 validation %s\r\n",
+ info, psr->l4cv ? "Performed" : "NOT performed");
+
+ offset = 0;
+ if (psr->ethernet) {
+ offset += sprintf(&info[offset],
+ "Eth offset=%d, ethtype offset=%d, ",
+ psr->eth_off, psr->etype_off);
+ }
+ if (psr->vlan) {
+ offset += sprintf(&info[offset], "vLAN offset=%d, ",
+ psr->vlan_off[0]);
+ }
+ if (psr->first_ipv4 || psr->first_ipv6) {
+ offset += sprintf(&info[offset], "first IP offset=%d, ",
+ psr->ip_off[0]);
+ }
+ if (psr->last_ipv4 || psr->last_ipv6) {
+ offset += sprintf(&info[offset], "last IP offset=%d, ",
+ psr->ip_off[1]);
+ }
+ if (psr->gre) {
+ offset += sprintf(&info[offset], "GRE offset=%d, ",
+ psr->gre_off);
+ }
+ if (psr->l4_type >= DPAA_PR_L4_TCP_TYPE) {
+ offset += sprintf(&info[offset], "L4 offset=%d, ",
+ psr->l4_off);
+ }
+ offset += sprintf(&info[offset], "Next HDR(0x%04x) offset=%d.",
+ rte_be_to_cpu_16(psr->nxthdr), psr->nxthdr_off);
+
+ DISPLAY_PRINT("%s\r\n", info);
}
if (unlikely(format == qm_fd_sg)) {
@@ -99,13 +188,14 @@ static void dpaa_display_frame_info(const struct qm_fd *fd,
DISPLAY_PRINT("Frame payload:\r\n");
ptr = (char *)annot;
ptr += fd->offset;
- for (ii = 0; ii < fd->length20; ii++) {
- DISPLAY_PRINT("%02x ", ptr[ii]);
- if (((ii + 1) % 16) == 0)
+ for (pos = 0; pos < fd->length20; pos++) {
+ DISPLAY_PRINT("%02x ", ptr[pos]);
+ if (((pos + 1) % 16) == 0)
DISPLAY_PRINT("\n");
}
DISPLAY_PRINT("\n");
}
+
#else
#define dpaa_display_frame_info(a, b, c)
#endif
diff --git a/drivers/net/dpaa/dpaa_rxtx.h b/drivers/net/dpaa/dpaa_rxtx.h
index 215bdeaf7f..392926e286 100644
--- a/drivers/net/dpaa/dpaa_rxtx.h
+++ b/drivers/net/dpaa/dpaa_rxtx.h
@@ -274,4 +274,9 @@ void dpaa_rx_cb_prepare(struct qm_dqrr_entry *dq, void **bufs);
void dpaa_rx_cb_no_prefetch(struct qman_fq **fq,
struct qm_dqrr_entry **dqrr, void **bufs, int num_bufs);
+#ifdef RTE_LIBRTE_DPAA_DEBUG_DRIVER
+void
+dpaa_force_display_frame_set(int set);
+#endif
+
#endif
--
2.25.1
^ permalink raw reply [flat|nested] 129+ messages in thread
* [PATCH v4 13/18] net/dpaa: support mempool debug
2024-09-30 12:15 ` [PATCH v4 00/18] NXP DPAA ETH driver enhancement and fixes Hemant Agrawal
` (11 preceding siblings ...)
2024-09-30 12:15 ` [PATCH v4 12/18] net/dpaa: enhance DPAA frame display Hemant Agrawal
@ 2024-09-30 12:15 ` Hemant Agrawal
2024-09-30 12:15 ` [PATCH v4 14/18] bus/dpaa: add OH port mode for dpaa eth Hemant Agrawal
` (6 subsequent siblings)
19 siblings, 0 replies; 129+ messages in thread
From: Hemant Agrawal @ 2024-09-30 12:15 UTC (permalink / raw)
To: dev; +Cc: ferruh.yigit, Gagandeep Singh
From: Gagandeep Singh <g.singh@nxp.com>
This patch adds compile-time support to debug mempool
corruptions in the dpaa driver.
Signed-off-by: Gagandeep Singh <g.singh@nxp.com>
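[Editor's note] For reference (not part of this patch), the guarded checks
added below follow the usual mempool-debug cookie pattern; the helper name is
hypothetical and simply mirrors the calls in the diff:

#include <rte_common.h>
#include <rte_mbuf.h>
#include <rte_mempool.h>

/* Hypothetical helper mirroring the guarded checks in this patch:
 * 'to_free' is 1 when the mbuf is handed to the application (Rx path)
 * and 0 when it is handed to the hardware (Tx path).
 */
static inline void
dpaa_dbg_check_mbuf(struct rte_mbuf *m, int to_free)
{
#ifdef RTE_LIBRTE_MEMPOOL_DEBUG
	rte_mempool_check_cookies(rte_mempool_from_obj((void *)m),
				  (void **)&m, 1, to_free);
#else
	RTE_SET_USED(m);
	RTE_SET_USED(to_free);
#endif
}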
---
drivers/net/dpaa/dpaa_rxtx.c | 40 ++++++++++++++++++++++++++++++++++++
1 file changed, 40 insertions(+)
diff --git a/drivers/net/dpaa/dpaa_rxtx.c b/drivers/net/dpaa/dpaa_rxtx.c
index 945c84ab10..d82c6f3be2 100644
--- a/drivers/net/dpaa/dpaa_rxtx.c
+++ b/drivers/net/dpaa/dpaa_rxtx.c
@@ -494,6 +494,10 @@ dpaa_eth_sg_to_mbuf(const struct qm_fd *fd, uint32_t ifid)
first_seg->data_len = sg_temp->length;
first_seg->pkt_len = sg_temp->length;
rte_mbuf_refcnt_set(first_seg, 1);
+#ifdef RTE_LIBRTE_MEMPOOL_DEBUG
+ rte_mempool_check_cookies(rte_mempool_from_obj((void *)first_seg),
+ (void **)&first_seg, 1, 1);
+#endif
first_seg->port = ifid;
first_seg->nb_segs = 1;
@@ -511,6 +515,10 @@ dpaa_eth_sg_to_mbuf(const struct qm_fd *fd, uint32_t ifid)
first_seg->pkt_len += sg_temp->length;
first_seg->nb_segs += 1;
rte_mbuf_refcnt_set(cur_seg, 1);
+#ifdef RTE_LIBRTE_MEMPOOL_DEBUG
+ rte_mempool_check_cookies(rte_mempool_from_obj((void *)cur_seg),
+ (void **)&cur_seg, 1, 1);
+#endif
prev_seg->next = cur_seg;
if (sg_temp->final) {
cur_seg->next = NULL;
@@ -522,6 +530,10 @@ dpaa_eth_sg_to_mbuf(const struct qm_fd *fd, uint32_t ifid)
first_seg->pkt_len, first_seg->nb_segs);
dpaa_eth_packet_info(first_seg, vaddr);
+#ifdef RTE_LIBRTE_MEMPOOL_DEBUG
+ rte_mempool_check_cookies(rte_mempool_from_obj((void *)temp),
+ (void **)&temp, 1, 1);
+#endif
rte_pktmbuf_free_seg(temp);
return first_seg;
@@ -562,6 +574,10 @@ dpaa_eth_fd_to_mbuf(const struct qm_fd *fd, uint32_t ifid)
mbuf->ol_flags = 0;
mbuf->next = NULL;
rte_mbuf_refcnt_set(mbuf, 1);
+#ifdef RTE_LIBRTE_MEMPOOL_DEBUG
+ rte_mempool_check_cookies(rte_mempool_from_obj((void *)mbuf),
+ (void **)&mbuf, 1, 1);
+#endif
dpaa_eth_packet_info(mbuf, mbuf->buf_addr);
return mbuf;
@@ -676,6 +692,10 @@ dpaa_rx_cb_no_prefetch(struct qman_fq **fq, struct qm_dqrr_entry **dqrr,
mbuf->ol_flags = 0;
mbuf->next = NULL;
rte_mbuf_refcnt_set(mbuf, 1);
+#ifdef RTE_LIBRTE_MEMPOOL_DEBUG
+ rte_mempool_check_cookies(rte_mempool_from_obj((void *)mbuf),
+ (void **)&mbuf, 1, 1);
+#endif
dpaa_eth_packet_info(mbuf, mbuf->buf_addr);
dpaa_display_frame_info(fd, fq[0]->fqid, true);
if (dpaa_ieee_1588) {
@@ -722,6 +742,10 @@ dpaa_rx_cb(struct qman_fq **fq, struct qm_dqrr_entry **dqrr,
mbuf->ol_flags = 0;
mbuf->next = NULL;
rte_mbuf_refcnt_set(mbuf, 1);
+#ifdef RTE_LIBRTE_MEMPOOL_DEBUG
+ rte_mempool_check_cookies(rte_mempool_from_obj((void *)mbuf),
+ (void **)&mbuf, 1, 1);
+#endif
dpaa_eth_packet_info(mbuf, mbuf->buf_addr);
dpaa_display_frame_info(fd, fq[0]->fqid, true);
if (dpaa_ieee_1588) {
@@ -972,6 +996,10 @@ dpaa_eth_mbuf_to_sg_fd(struct rte_mbuf *mbuf,
return -1;
}
+#ifdef RTE_LIBRTE_MEMPOOL_DEBUG
+ rte_mempool_check_cookies(rte_mempool_from_obj((void *)temp),
+ (void **)&temp, 1, 0);
+#endif
fd->cmd = 0;
fd->opaque_addr = 0;
@@ -1017,6 +1045,10 @@ dpaa_eth_mbuf_to_sg_fd(struct rte_mbuf *mbuf,
} else {
sg_temp->bpid =
DPAA_MEMPOOL_TO_BPID(cur_seg->pool);
+#ifdef RTE_LIBRTE_MEMPOOL_DEBUG
+ rte_mempool_check_cookies(rte_mempool_from_obj((void *)cur_seg),
+ (void **)&cur_seg, 1, 0);
+#endif
}
} else if (RTE_MBUF_HAS_EXTBUF(cur_seg)) {
free_buf[*free_count].seg = cur_seg;
@@ -1074,6 +1106,10 @@ tx_on_dpaa_pool_unsegmented(struct rte_mbuf *mbuf,
* released by BMAN.
*/
DPAA_MBUF_TO_CONTIG_FD(mbuf, fd_arr, bp_info->bpid);
+#ifdef RTE_LIBRTE_MEMPOOL_DEBUG
+ rte_mempool_check_cookies(rte_mempool_from_obj((void *)mbuf),
+ (void **)&mbuf, 1, 0);
+#endif
}
} else if (RTE_MBUF_HAS_EXTBUF(mbuf)) {
buf_to_free[*free_count].seg = mbuf;
@@ -1302,6 +1338,10 @@ dpaa_eth_queue_tx(void *q, struct rte_mbuf **bufs, uint16_t nb_bufs)
DPAA_TX_CKSUM_OFFLOAD_MASK)
dpaa_unsegmented_checksum(mbuf,
&fd_arr[loop]);
+#ifdef RTE_LIBRTE_MEMPOOL_DEBUG
+ rte_mempool_check_cookies(rte_mempool_from_obj((void *)mbuf),
+ (void **)&mbuf, 1, 0);
+#endif
continue;
}
} else {
--
2.25.1
^ permalink raw reply [flat|nested] 129+ messages in thread
* [PATCH v4 14/18] bus/dpaa: add OH port mode for dpaa eth
2024-09-30 12:15 ` [PATCH v4 00/18] NXP DPAA ETH driver enhancement and fixes Hemant Agrawal
` (12 preceding siblings ...)
2024-09-30 12:15 ` [PATCH v4 13/18] net/dpaa: support mempool debug Hemant Agrawal
@ 2024-09-30 12:15 ` Hemant Agrawal
2024-09-30 12:15 ` [PATCH v4 15/18] bus/dpaa: add ONIC port mode for the DPAA eth Hemant Agrawal
` (5 subsequent siblings)
19 siblings, 0 replies; 129+ messages in thread
From: Hemant Agrawal @ 2024-09-30 12:15 UTC (permalink / raw)
To: dev; +Cc: ferruh.yigit, Rohit Raj
From: Rohit Raj <rohit.raj@nxp.com>
The NXP DPAA architecture supports the concept of a DPAA
port as an Offline port - meaning a port that is not connected
to an actual MAC.
This is a hardware-assisted IPC mechanism for communicating
between two applications.
An Offline (O/H) port is a type of hardware port which is able to dequeue
from and enqueue to a QMan queue. The FMan applies a Parse Classify
Distribute (PCD) flow and (if configured to do so) enqueues the frame back
into a QMan queue.
The FMan is able to copy the frame into new buffers and enqueue it back to
the QMan. This means these ports can also be used to send and receive
packets between two applications.
An O/H port has two queues: one to receive and one to send packets.
It loops back onto the Tx queue all the packets which are received
on the Rx queue.
This property is driven entirely by the device tree. During the
DPAA bus scan, the port is classified as an OH port based on the
platform device properties in the device tree.
This patch adds support in the driver to use a dpaa eth port
in OH mode with DPDK applications.
Signed-off-by: Rohit Raj <rohit.raj@nxp.com>
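[Editor's note] As an illustration (not part of this patch), from a single
application's point of view an OH port behaves like a loopback device, so a
plain tx/rx burst sequence on it can serve as a sanity check; the port id,
helper name and burst sizes are illustrative:

#include <rte_ethdev.h>
#include <rte_mbuf.h>

/* Illustrative only: frames transmitted on an OH port are looped back
 * and can be read from the same port's Rx queue.
 */
static uint16_t
oh_loopback_check(uint16_t oh_port_id, struct rte_mbuf **tx_pkts, uint16_t n)
{
	struct rte_mbuf *rx_pkts[32];
	uint16_t sent, got = 0;

	sent = rte_eth_tx_burst(oh_port_id, 0, tx_pkts, n);
	while (got < sent && got < 32)
		got += rte_eth_rx_burst(oh_port_id, 0, &rx_pkts[got],
					32 - got);
	/* rx_pkts[0..got-1] must eventually be freed by the caller. */
	return got;
}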
---
doc/guides/nics/dpaa.rst | 26 ++-
drivers/bus/dpaa/base/fman/fman.c | 259 ++++++++++++++--------
drivers/bus/dpaa/base/fman/fman_hw.c | 24 +-
drivers/bus/dpaa/base/fman/netcfg_layer.c | 19 +-
drivers/bus/dpaa/dpaa_bus.c | 23 +-
drivers/bus/dpaa/include/fman.h | 31 ++-
drivers/net/dpaa/dpaa_ethdev.c | 85 ++++++-
drivers/net/dpaa/dpaa_ethdev.h | 6 +
drivers/net/dpaa/dpaa_flow.c | 51 +++--
9 files changed, 378 insertions(+), 146 deletions(-)
diff --git a/doc/guides/nics/dpaa.rst b/doc/guides/nics/dpaa.rst
index edf7a7e350..47dcce334c 100644
--- a/doc/guides/nics/dpaa.rst
+++ b/doc/guides/nics/dpaa.rst
@@ -1,5 +1,5 @@
.. SPDX-License-Identifier: BSD-3-Clause
- Copyright 2017,2020 NXP
+ Copyright 2017,2020-2024 NXP
DPAA Poll Mode Driver
@@ -136,6 +136,8 @@ RTE framework and DPAA internal components/drivers.
The Ethernet driver is bound to a FMAN port and implements the interfaces
needed to connect the DPAA network interface to the network stack.
Each FMAN Port corresponds to a DPDK network interface.
+- PMD also supports OH mode, where the port works as a HW assisted
+ virtual port without actually connecting to a Physical MAC.
Features
@@ -149,6 +151,8 @@ Features
- Checksum offload
- Promiscuous mode
- IEEE1588 PTP
+ - OH Port for inter application communication
+
DPAA Mempool Driver
~~~~~~~~~~~~~~~~~~~
@@ -326,6 +330,26 @@ FMLIB
`Kernel FMD Driver
<https://source.codeaurora.org/external/qoriq/qoriq-components/linux/tree/drivers/net/ethernet/freescale/sdk_fman?h=linux-4.19-rt>`_.
+OH Port
+~~~~~~~
+ Offline(O/H) port is a type of hardware port which is able to dequeue and
+ enqueue from/to a QMan queue. The FMan applies a Parse Classify Distribute (PCD)
+ flow and (if configured to do so) enqueues the frame back in a QMan queue.
+
+ The FMan is able to copy the frame into new buffers and enqueue back to the
+ QMan. This means these ports can be used to send and receive packets between two
+ applications as well.
+
+ An O/H port has two queues. One to receive and one to send the packets. It will
+ loopback all the packets on Tx queue which are received on Rx queue.
+
+
+ -------- Tx Packets ---------
+ | App | - - - - - - - - - > | O/H |
+ | | < - - - - - - - - - | Port |
+ -------- Rx Packets ---------
+
+
VSP (Virtual Storage Profile)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The storage profiled are means to provide virtualized interface. A ranges of
diff --git a/drivers/bus/dpaa/base/fman/fman.c b/drivers/bus/dpaa/base/fman/fman.c
index e2b7120237..f817305ab7 100644
--- a/drivers/bus/dpaa/base/fman/fman.c
+++ b/drivers/bus/dpaa/base/fman/fman.c
@@ -246,26 +246,34 @@ fman_if_init(const struct device_node *dpa_node)
uint64_t port_cell_idx_val = 0;
uint64_t ext_args_cell_idx_val = 0;
- const struct device_node *mac_node = NULL, *tx_node, *ext_args_node;
- const struct device_node *pool_node, *fman_node, *rx_node;
+ const struct device_node *mac_node = NULL, *ext_args_node;
+ const struct device_node *pool_node, *fman_node;
+ const struct device_node *rx_node = NULL, *tx_node = NULL;
+ const struct device_node *oh_node = NULL;
const uint32_t *regs_addr = NULL;
const char *mname, *fname;
const char *dname = dpa_node->full_name;
size_t lenp;
- int _errno, is_shared = 0;
+ int _errno, is_shared = 0, is_offline = 0;
const char *char_prop;
uint32_t na;
if (of_device_is_available(dpa_node) == false)
return 0;
- if (!of_device_is_compatible(dpa_node, "fsl,dpa-ethernet-init") &&
- !of_device_is_compatible(dpa_node, "fsl,dpa-ethernet")) {
+ if (of_device_is_compatible(dpa_node, "fsl,dpa-oh"))
+ is_offline = 1;
+
+ if (!of_device_is_compatible(dpa_node, "fsl,dpa-oh") &&
+ !of_device_is_compatible(dpa_node, "fsl,dpa-ethernet-init") &&
+ !of_device_is_compatible(dpa_node, "fsl,dpa-ethernet")) {
return 0;
}
- rprop = "fsl,qman-frame-queues-rx";
- mprop = "fsl,fman-mac";
+ rprop = is_offline ? "fsl,qman-frame-queues-oh" :
+ "fsl,qman-frame-queues-rx";
+ mprop = is_offline ? "fsl,fman-oh-port" :
+ "fsl,fman-mac";
/* Obtain the MAC node used by this interface except macless */
mac_phandle = of_get_property(dpa_node, mprop, &lenp);
@@ -281,27 +289,43 @@ fman_if_init(const struct device_node *dpa_node)
}
mname = mac_node->full_name;
- /* Extract the Rx and Tx ports */
- ports_phandle = of_get_property(mac_node, "fsl,port-handles",
- &lenp);
- if (!ports_phandle)
- ports_phandle = of_get_property(mac_node, "fsl,fman-ports",
+ if (!is_offline) {
+ /* Extract the Rx and Tx ports */
+ ports_phandle = of_get_property(mac_node, "fsl,port-handles",
&lenp);
- if (!ports_phandle) {
- FMAN_ERR(-EINVAL, "%s: no fsl,port-handles",
- mname);
- return -EINVAL;
- }
- assert(lenp == (2 * sizeof(phandle)));
- rx_node = of_find_node_by_phandle(ports_phandle[0]);
- if (!rx_node) {
- FMAN_ERR(-ENXIO, "%s: bad fsl,port-handle[0]", mname);
- return -ENXIO;
- }
- tx_node = of_find_node_by_phandle(ports_phandle[1]);
- if (!tx_node) {
- FMAN_ERR(-ENXIO, "%s: bad fsl,port-handle[1]", mname);
- return -ENXIO;
+ if (!ports_phandle)
+ ports_phandle = of_get_property(mac_node, "fsl,fman-ports",
+ &lenp);
+ if (!ports_phandle) {
+ FMAN_ERR(-EINVAL, "%s: no fsl,port-handles",
+ mname);
+ return -EINVAL;
+ }
+ assert(lenp == (2 * sizeof(phandle)));
+ rx_node = of_find_node_by_phandle(ports_phandle[0]);
+ if (!rx_node) {
+ FMAN_ERR(-ENXIO, "%s: bad fsl,port-handle[0]", mname);
+ return -ENXIO;
+ }
+ tx_node = of_find_node_by_phandle(ports_phandle[1]);
+ if (!tx_node) {
+ FMAN_ERR(-ENXIO, "%s: bad fsl,port-handle[1]", mname);
+ return -ENXIO;
+ }
+ } else {
+ /* Extract the OH ports */
+ ports_phandle = of_get_property(dpa_node, "fsl,fman-oh-port",
+ &lenp);
+ if (!ports_phandle) {
+ FMAN_ERR(-EINVAL, "%s: no fsl,fman-oh-port", dname);
+ return -EINVAL;
+ }
+ assert(lenp == (sizeof(phandle)));
+ oh_node = of_find_node_by_phandle(ports_phandle[0]);
+ if (!oh_node) {
+ FMAN_ERR(-ENXIO, "%s: bad fsl,port-handle[0]", mname);
+ return -ENXIO;
+ }
}
/* Check if the port is shared interface */
@@ -430,17 +454,19 @@ fman_if_init(const struct device_node *dpa_node)
* Set A2V, OVOM, EBD bits in contextA to allow external
* buffer deallocation by fman.
*/
- fman_dealloc_bufs_mask_hi = FMAN_V3_CONTEXTA_EN_A2V |
- FMAN_V3_CONTEXTA_EN_OVOM;
- fman_dealloc_bufs_mask_lo = FMAN_V3_CONTEXTA_EN_EBD;
+ fman_dealloc_bufs_mask_hi = DPAA_FQD_CTX_A_A2_FIELD_VALID |
+ DPAA_FQD_CTX_A_OVERRIDE_OMB;
+ fman_dealloc_bufs_mask_lo = DPAA_FQD_CTX_A2_EBD_BIT;
} else {
fman_dealloc_bufs_mask_hi = 0;
fman_dealloc_bufs_mask_lo = 0;
}
- /* Is the MAC node 1G, 2.5G, 10G? */
+ /* Is the MAC node 1G, 2.5G, 10G or offline? */
__if->__if.is_memac = 0;
- if (of_device_is_compatible(mac_node, "fsl,fman-1g-mac"))
+ if (is_offline)
+ __if->__if.mac_type = fman_offline;
+ else if (of_device_is_compatible(mac_node, "fsl,fman-1g-mac"))
__if->__if.mac_type = fman_mac_1g;
else if (of_device_is_compatible(mac_node, "fsl,fman-10g-mac"))
__if->__if.mac_type = fman_mac_10g;
@@ -468,46 +494,81 @@ fman_if_init(const struct device_node *dpa_node)
goto err;
}
- /*
- * For MAC ports, we cannot rely on cell-index. In
- * T2080, two of the 10G ports on single FMAN have same
- * duplicate cell-indexes as the other two 10G ports on
- * same FMAN. Hence, we now rely upon addresses of the
- * ports from device tree to deduce the index.
- */
+ if (!is_offline) {
+ /*
+ * For MAC ports, we cannot rely on cell-index. In
+ * T2080, two of the 10G ports on single FMAN have same
+ * duplicate cell-indexes as the other two 10G ports on
+ * same FMAN. Hence, we now rely upon addresses of the
+ * ports from device tree to deduce the index.
+ */
- _errno = fman_get_mac_index(regs_addr_host, &__if->__if.mac_idx);
- if (_errno) {
- FMAN_ERR(-EINVAL, "Invalid register address: %" PRIx64,
- regs_addr_host);
- goto err;
- }
+ _errno = fman_get_mac_index(regs_addr_host, &__if->__if.mac_idx);
+ if (_errno) {
+ FMAN_ERR(-EINVAL, "Invalid register address: %" PRIx64,
+ regs_addr_host);
+ goto err;
+ }
+ } else {
+ cell_idx = of_get_property(oh_node, "cell-index", &lenp);
+ if (!cell_idx) {
+ FMAN_ERR(-ENXIO, "%s: no cell-index)\n",
+ oh_node->full_name);
+ goto err;
+ }
+ assert(lenp == sizeof(*cell_idx));
+ cell_idx_host = of_read_number(cell_idx,
+ lenp / sizeof(phandle));
- /* Extract the MAC address for private and shared interfaces */
- mac_addr = of_get_property(mac_node, "local-mac-address",
- &lenp);
- if (!mac_addr) {
- FMAN_ERR(-EINVAL, "%s: no local-mac-address",
- mname);
- goto err;
+ __if->__if.mac_idx = cell_idx_host;
}
- memcpy(&__if->__if.mac_addr, mac_addr, ETHER_ADDR_LEN);
- /* Extract the channel ID (from tx-port-handle) */
- tx_channel_id = of_get_property(tx_node, "fsl,qman-channel-id",
- &lenp);
- if (!tx_channel_id) {
- FMAN_ERR(-EINVAL, "%s: no fsl-qman-channel-id",
- tx_node->full_name);
- goto err;
+ if (!is_offline) {
+ /* Extract the MAC address for private and shared interfaces */
+ mac_addr = of_get_property(mac_node, "local-mac-address",
+ &lenp);
+ if (!mac_addr) {
+ FMAN_ERR(-EINVAL, "%s: no local-mac-address",
+ mname);
+ goto err;
+ }
+ memcpy(&__if->__if.mac_addr, mac_addr, ETHER_ADDR_LEN);
+
+ /* Extract the channel ID (from tx-port-handle) */
+ tx_channel_id = of_get_property(tx_node, "fsl,qman-channel-id",
+ &lenp);
+ if (!tx_channel_id) {
+ FMAN_ERR(-EINVAL, "%s: no fsl-qman-channel-id",
+ tx_node->full_name);
+ goto err;
+ }
+ } else {
+ /* Extract the channel ID (from mac) */
+ tx_channel_id = of_get_property(mac_node, "fsl,qman-channel-id",
+ &lenp);
+ if (!tx_channel_id) {
+ FMAN_ERR(-EINVAL, "%s: no fsl-qman-channel-id",
+ tx_node->full_name);
+ goto err;
+ }
}
- regs_addr = of_get_address(rx_node, 0, &__if->regs_size, NULL);
+ na = of_n_addr_cells(mac_node);
+ __if->__if.tx_channel_id = of_read_number(tx_channel_id, na);
+
+ if (!is_offline)
+ regs_addr = of_get_address(rx_node, 0, &__if->regs_size, NULL);
+ else
+ regs_addr = of_get_address(oh_node, 0, &__if->regs_size, NULL);
if (!regs_addr) {
FMAN_ERR(-EINVAL, "of_get_address(%s)", mname);
goto err;
}
- phys_addr = of_translate_address(rx_node, regs_addr);
+
+ if (!is_offline)
+ phys_addr = of_translate_address(rx_node, regs_addr);
+ else
+ phys_addr = of_translate_address(oh_node, regs_addr);
if (!phys_addr) {
FMAN_ERR(-EINVAL, "of_translate_address(%s, %p)",
mname, regs_addr);
@@ -521,23 +582,27 @@ fman_if_init(const struct device_node *dpa_node)
goto err;
}
- regs_addr = of_get_address(tx_node, 0, &__if->regs_size, NULL);
- if (!regs_addr) {
- FMAN_ERR(-EINVAL, "of_get_address(%s)\n", mname);
- goto err;
- }
- phys_addr = of_translate_address(tx_node, regs_addr);
- if (!phys_addr) {
- FMAN_ERR(-EINVAL, "of_translate_address(%s, %p)\n",
- mname, regs_addr);
- goto err;
- }
- __if->tx_bmi_map = mmap(NULL, __if->regs_size,
- PROT_READ | PROT_WRITE, MAP_SHARED,
- fman_ccsr_map_fd, phys_addr);
- if (__if->tx_bmi_map == MAP_FAILED) {
- FMAN_ERR(-errno, "mmap(0x%"PRIx64")\n", phys_addr);
- goto err;
+ if (!is_offline) {
+ regs_addr = of_get_address(tx_node, 0, &__if->regs_size, NULL);
+ if (!regs_addr) {
+ FMAN_ERR(-EINVAL, "of_get_address(%s)\n", mname);
+ goto err;
+ }
+
+ phys_addr = of_translate_address(tx_node, regs_addr);
+ if (!phys_addr) {
+ FMAN_ERR(-EINVAL, "of_translate_address(%s, %p)\n",
+ mname, regs_addr);
+ goto err;
+ }
+
+ __if->tx_bmi_map = mmap(NULL, __if->regs_size,
+ PROT_READ | PROT_WRITE, MAP_SHARED,
+ fman_ccsr_map_fd, phys_addr);
+ if (__if->tx_bmi_map == MAP_FAILED) {
+ FMAN_ERR(-errno, "mmap(0x%"PRIx64")\n", phys_addr);
+ goto err;
+ }
}
if (!rtc_map) {
@@ -554,11 +619,6 @@ fman_if_init(const struct device_node *dpa_node)
__if->rtc_map = rtc_map;
}
- /* No channel ID for MAC-less */
- assert(lenp == sizeof(*tx_channel_id));
- na = of_n_addr_cells(mac_node);
- __if->__if.tx_channel_id = of_read_number(tx_channel_id, na);
-
/* Extract the Rx FQIDs. (Note, the device representation is silly,
* there are "counts" that must always be 1.)
*/
@@ -568,13 +628,26 @@ fman_if_init(const struct device_node *dpa_node)
goto err;
}
- /* Check if "fsl,qman-frame-queues-rx" in dtb file is valid entry or
- * not. A valid entry contains at least 4 entries, rx_error_queue,
- * rx_error_queue_count, fqid_rx_def and rx_error_queue_count.
+ /*
+ * Check if "fsl,qman-frame-queues-rx/oh" in dtb file is valid entry or
+ * not.
+ *
+ * A valid rx entry contains either 4 or 6 entries. Mandatory entries
+ * are rx_error_queue, rx_error_queue_count, fqid_rx_def and
+ * fqid_rx_def_count. Optional entries are fqid_rx_pcd and
+ * fqid_rx_pcd_count.
+ *
+ * A valid oh entry contains 4 entries. Those entries are
+ * rx_error_queue, rx_error_queue_count, fqid_rx_def and
+ * fqid_rx_def_count.
*/
- assert(lenp >= (4 * sizeof(phandle)));
- na = of_n_addr_cells(mac_node);
+ if (!is_offline)
+ assert(lenp == (4 * sizeof(phandle)) ||
+ lenp == (6 * sizeof(phandle)));
+ else
+ assert(lenp == (4 * sizeof(phandle)));
+
/* Get rid of endianness (issues). Convert to host byte order */
rx_phandle_host[0] = of_read_number(&rx_phandle[0], na);
rx_phandle_host[1] = of_read_number(&rx_phandle[1], na);
@@ -595,6 +668,9 @@ fman_if_init(const struct device_node *dpa_node)
__if->__if.fqid_rx_pcd_count = rx_phandle_host[5];
}
+ if (is_offline)
+ goto oh_init_done;
+
/* Extract the Tx FQIDs */
tx_phandle = of_get_property(dpa_node,
"fsl,qman-frame-queues-tx", &lenp);
@@ -706,6 +782,7 @@ fman_if_init(const struct device_node *dpa_node)
if (is_shared)
__if->__if.is_shared_mac = 1;
+oh_init_done:
fman_if_vsp_init(__if);
/* Parsing of the network interface is complete, add it to the list */
@@ -769,6 +846,10 @@ fman_finish(void)
list_for_each_entry_safe(__if, tmpif, &__ifs, __if.node) {
int _errno;
+ /* No need to disable Offline port */
+ if (__if->__if.mac_type == fman_offline)
+ continue;
+
/* disable Rx and Tx */
if ((__if->__if.mac_type == fman_mac_1g) &&
(!__if->__if.is_memac))
diff --git a/drivers/bus/dpaa/base/fman/fman_hw.c b/drivers/bus/dpaa/base/fman/fman_hw.c
index 4fc41c1ae9..1f61ae406b 100644
--- a/drivers/bus/dpaa/base/fman/fman_hw.c
+++ b/drivers/bus/dpaa/base/fman/fman_hw.c
@@ -1,6 +1,6 @@
/* SPDX-License-Identifier: BSD-3-Clause
*
- * Copyright 2017,2020,2022 NXP
+ * Copyright 2017,2020,2022-2023 NXP
*
*/
@@ -88,6 +88,10 @@ fman_if_add_hash_mac_addr(struct fman_if *p, uint8_t *eth)
struct __fman_if *__if = container_of(p, struct __fman_if, __if);
+ /* Add hash mac addr not supported on Offline port */
+ if (__if->__if.mac_type == fman_offline)
+ return 0;
+
eth_addr = ETH_ADDR_TO_UINT64(eth);
if (!(eth_addr & GROUP_ADDRESS))
@@ -109,6 +113,15 @@ fman_if_get_primary_mac_addr(struct fman_if *p, uint8_t *eth)
void *mac_reg =
&((struct memac_regs *)__if->ccsr_map)->mac_addr0.mac_addr_l;
u32 val = in_be32(mac_reg);
+ int i;
+
+ /* Get mac addr not supported on Offline port */
+ /* Return NULL mac address */
+ if (__if->__if.mac_type == fman_offline) {
+ for (i = 0; i < 6; i++)
+ eth[i] = 0x0;
+ return 0;
+ }
eth[0] = (val & 0x000000ff) >> 0;
eth[1] = (val & 0x0000ff00) >> 8;
@@ -130,6 +143,10 @@ fman_if_clear_mac_addr(struct fman_if *p, uint8_t addr_num)
struct __fman_if *m = container_of(p, struct __fman_if, __if);
void *reg;
+ /* Clear mac addr not supported on Offline port */
+ if (m->__if.mac_type == fman_offline)
+ return;
+
if (addr_num) {
reg = &((struct memac_regs *)m->ccsr_map)->
mac_addr[addr_num-1].mac_addr_l;
@@ -149,10 +166,13 @@ int
fman_if_add_mac_addr(struct fman_if *p, uint8_t *eth, uint8_t addr_num)
{
struct __fman_if *m = container_of(p, struct __fman_if, __if);
-
void *reg;
u32 val;
+ /* Set mac addr not supported on Offline port */
+ if (m->__if.mac_type == fman_offline)
+ return 0;
+
memcpy(&m->__if.mac_addr, eth, ETHER_ADDR_LEN);
if (addr_num)
diff --git a/drivers/bus/dpaa/base/fman/netcfg_layer.c b/drivers/bus/dpaa/base/fman/netcfg_layer.c
index 57d87afcb0..e6a6ed1eb6 100644
--- a/drivers/bus/dpaa/base/fman/netcfg_layer.c
+++ b/drivers/bus/dpaa/base/fman/netcfg_layer.c
@@ -1,7 +1,7 @@
/* SPDX-License-Identifier: (BSD-3-Clause OR GPL-2.0)
*
* Copyright 2010-2016 Freescale Semiconductor Inc.
- * Copyright 2017-2019 NXP
+ * Copyright 2017-2019,2023 NXP
*
*/
#include <inttypes.h>
@@ -44,6 +44,7 @@ dump_netcfg(struct netcfg_info *cfg_ptr, FILE *f)
fprintf(f, "\n+ Fman %d, MAC %d (%s);\n",
__if->fman_idx, __if->mac_idx,
+ (__if->mac_type == fman_offline) ? "OFFLINE" :
(__if->mac_type == fman_mac_1g) ? "1G" :
(__if->mac_type == fman_mac_2_5g) ? "2.5G" : "10G");
@@ -56,13 +57,15 @@ dump_netcfg(struct netcfg_info *cfg_ptr, FILE *f)
fprintf(f, "\tfqid_rx_def: 0x%x\n", p_cfg->rx_def);
fprintf(f, "\tfqid_rx_err: 0x%x\n", __if->fqid_rx_err);
- fprintf(f, "\tfqid_tx_err: 0x%x\n", __if->fqid_tx_err);
- fprintf(f, "\tfqid_tx_confirm: 0x%x\n", __if->fqid_tx_confirm);
- fman_if_for_each_bpool(bpool, __if)
- fprintf(f, "\tbuffer pool: (bpid=%d, count=%"PRId64
- " size=%"PRId64", addr=0x%"PRIx64")\n",
- bpool->bpid, bpool->count, bpool->size,
- bpool->addr);
+ if (__if->mac_type != fman_offline) {
+ fprintf(f, "\tfqid_tx_err: 0x%x\n", __if->fqid_tx_err);
+ fprintf(f, "\tfqid_tx_confirm: 0x%x\n", __if->fqid_tx_confirm);
+ fman_if_for_each_bpool(bpool, __if)
+ fprintf(f, "\tbuffer pool: (bpid=%d, count=%"PRId64
+ " size=%"PRId64", addr=0x%"PRIx64")\n",
+ bpool->bpid, bpool->count, bpool->size,
+ bpool->addr);
+ }
}
}
#endif /* RTE_LIBRTE_DPAA_DEBUG_DRIVER */
diff --git a/drivers/bus/dpaa/dpaa_bus.c b/drivers/bus/dpaa/dpaa_bus.c
index 1f6997c77e..6e4ec90670 100644
--- a/drivers/bus/dpaa/dpaa_bus.c
+++ b/drivers/bus/dpaa/dpaa_bus.c
@@ -43,6 +43,7 @@
#include <fsl_qman.h>
#include <fsl_bman.h>
#include <netcfg.h>
+#include <fman.h>
struct rte_dpaa_bus {
struct rte_bus bus;
@@ -203,9 +204,12 @@ dpaa_create_device_list(void)
/* Create device name */
memset(dev->name, 0, RTE_ETH_NAME_MAX_LEN);
- sprintf(dev->name, "fm%d-mac%d", (fman_intf->fman_idx + 1),
- fman_intf->mac_idx);
- DPAA_BUS_LOG(INFO, "%s netdev added", dev->name);
+ if (fman_intf->mac_type == fman_offline)
+ sprintf(dev->name, "fm%d-oh%d",
+ (fman_intf->fman_idx + 1), fman_intf->mac_idx);
+ else
+ sprintf(dev->name, "fm%d-mac%d",
+ (fman_intf->fman_idx + 1), fman_intf->mac_idx);
dev->device.name = dev->name;
dev->device.devargs = dpaa_devargs_lookup(dev);
@@ -441,7 +445,7 @@ static int
rte_dpaa_bus_parse(const char *name, void *out)
{
unsigned int i, j;
- size_t delta;
+ size_t delta, dev_delta;
size_t max_name_len;
/* There are two ways of passing device name, with and without
@@ -458,16 +462,25 @@ rte_dpaa_bus_parse(const char *name, void *out)
delta = 5;
}
+ /* dev_delta points to the dev name (mac/oh/onic). Not valid for
+ * dpaa_sec.
+ */
+ dev_delta = delta + sizeof("fm.-") - 1;
+
if (strncmp("dpaa_sec", &name[delta], 8) == 0) {
if (sscanf(&name[delta], "dpaa_sec-%u", &i) != 1 ||
i < 1 || i > 4)
return -EINVAL;
max_name_len = sizeof("dpaa_sec-.") - 1;
+ } else if (strncmp("oh", &name[dev_delta], 2) == 0) {
+ if (sscanf(&name[delta], "fm%u-oh%u", &i, &j) != 2 ||
+ i >= 2 || j >= 16)
+ return -EINVAL;
+ max_name_len = sizeof("fm.-oh..") - 1;
} else {
if (sscanf(&name[delta], "fm%u-mac%u", &i, &j) != 2 ||
i >= 2 || j >= 16)
return -EINVAL;
-
max_name_len = sizeof("fm.-mac..") - 1;
}
diff --git a/drivers/bus/dpaa/include/fman.h b/drivers/bus/dpaa/include/fman.h
index e8bc913943..377f73bf0d 100644
--- a/drivers/bus/dpaa/include/fman.h
+++ b/drivers/bus/dpaa/include/fman.h
@@ -2,7 +2,7 @@
*
* Copyright 2010-2012 Freescale Semiconductor, Inc.
* All rights reserved.
- * Copyright 2019-2022 NXP
+ * Copyright 2019-2023 NXP
*
*/
@@ -474,11 +474,30 @@ extern int fman_ccsr_map_fd;
#define FMAN_IP_REV_1_MAJOR_MASK 0x0000FF00
#define FMAN_IP_REV_1_MAJOR_SHIFT 8
#define FMAN_V3 0x06
-#define FMAN_V3_CONTEXTA_EN_A2V 0x10000000
-#define FMAN_V3_CONTEXTA_EN_OVOM 0x02000000
-#define FMAN_V3_CONTEXTA_EN_EBD 0x80000000
-#define FMAN_CONTEXTA_DIS_CHECKSUM 0x7ull
-#define FMAN_CONTEXTA_SET_OPCODE11 0x2000000b00000000
+
+#define DPAA_FQD_CTX_A_SHIFT_BITS 24
+#define DPAA_FQD_CTX_B_SHIFT_BITS 24
+
+/* Following flags are used to set in context A hi field of FQD */
+#define DPAA_FQD_CTX_A_OVERRIDE_FQ (0x80 << DPAA_FQD_CTX_A_SHIFT_BITS)
+#define DPAA_FQD_CTX_A_IGNORE_CMD (0x40 << DPAA_FQD_CTX_A_SHIFT_BITS)
+#define DPAA_FQD_CTX_A_A1_FIELD_VALID (0x20 << DPAA_FQD_CTX_A_SHIFT_BITS)
+#define DPAA_FQD_CTX_A_A2_FIELD_VALID (0x10 << DPAA_FQD_CTX_A_SHIFT_BITS)
+#define DPAA_FQD_CTX_A_A0_FIELD_VALID (0x08 << DPAA_FQD_CTX_A_SHIFT_BITS)
+#define DPAA_FQD_CTX_A_B0_FIELD_VALID (0x04 << DPAA_FQD_CTX_A_SHIFT_BITS)
+#define DPAA_FQD_CTX_A_OVERRIDE_OMB (0x02 << DPAA_FQD_CTX_A_SHIFT_BITS)
+#define DPAA_FQD_CTX_A_RESERVED (0x01 << DPAA_FQD_CTX_A_SHIFT_BITS)
+
+/* Following flags are used to set in context A lo field of FQD */
+#define DPAA_FQD_CTX_A2_EBD_BIT (0x80 << DPAA_FQD_CTX_A_SHIFT_BITS)
+#define DPAA_FQD_CTX_A2_EBAD_BIT (0x40 << DPAA_FQD_CTX_A_SHIFT_BITS)
+#define DPAA_FQD_CTX_A2_FWD_BIT (0x20 << DPAA_FQD_CTX_A_SHIFT_BITS)
+#define DPAA_FQD_CTX_A2_NL_BIT (0x10 << DPAA_FQD_CTX_A_SHIFT_BITS)
+#define DPAA_FQD_CTX_A2_CWD_BIT (0x08 << DPAA_FQD_CTX_A_SHIFT_BITS)
+#define DPAA_FQD_CTX_A2_NENQ_BIT (0x04 << DPAA_FQD_CTX_A_SHIFT_BITS)
+#define DPAA_FQD_CTX_A2_RESERVED_BIT (0x02 << DPAA_FQD_CTX_A_SHIFT_BITS)
+#define DPAA_FQD_CTX_A2_VSPE_BIT (0x01 << DPAA_FQD_CTX_A_SHIFT_BITS)
+
extern u16 fman_ip_rev;
extern u32 fman_dealloc_bufs_mask_hi;
extern u32 fman_dealloc_bufs_mask_lo;
diff --git a/drivers/net/dpaa/dpaa_ethdev.c b/drivers/net/dpaa/dpaa_ethdev.c
index 4ead890278..f8196ddd14 100644
--- a/drivers/net/dpaa/dpaa_ethdev.c
+++ b/drivers/net/dpaa/dpaa_ethdev.c
@@ -295,7 +295,7 @@ dpaa_eth_dev_configure(struct rte_eth_dev *dev)
max_rx_pktlen = DPAA_MAX_RX_PKT_LEN;
}
- if (!fif->is_shared_mac)
+ if (fif->mac_type != fman_offline)
fman_if_set_maxfrm(dev->process_private, max_rx_pktlen);
if (rx_offloads & RTE_ETH_RX_OFFLOAD_SCATTER) {
@@ -314,6 +314,10 @@ dpaa_eth_dev_configure(struct rte_eth_dev *dev)
dpaa_write_fm_config_to_file();
}
+ /* Disable interrupt support on offline port*/
+ if (fif->mac_type == fman_offline)
+ return 0;
+
/* if the interrupts were configured on this devices*/
if (intr_handle && rte_intr_fd_get(intr_handle)) {
if (dev->data->dev_conf.intr_conf.lsc != 0)
@@ -531,6 +535,9 @@ static int dpaa_eth_dev_close(struct rte_eth_dev *dev)
ret = dpaa_eth_dev_stop(dev);
+ if (fif->mac_type == fman_offline)
+ return 0;
+
/* Reset link to autoneg */
if (link->link_status && !link->link_autoneg)
dpaa_restart_link_autoneg(__fif->node_name);
@@ -644,6 +651,11 @@ static int dpaa_eth_dev_info(struct rte_eth_dev *dev,
| RTE_ETH_LINK_SPEED_1G
| RTE_ETH_LINK_SPEED_2_5G
| RTE_ETH_LINK_SPEED_10G;
+ } else if (fif->mac_type == fman_offline) {
+ dev_info->speed_capa = RTE_ETH_LINK_SPEED_10M_HD
+ | RTE_ETH_LINK_SPEED_10M
+ | RTE_ETH_LINK_SPEED_100M_HD
+ | RTE_ETH_LINK_SPEED_100M;
} else {
DPAA_PMD_ERR("invalid link_speed: %s, %d",
dpaa_intf->name, fif->mac_type);
@@ -744,7 +756,8 @@ static int dpaa_eth_link_update(struct rte_eth_dev *dev,
ioctl_version = dpaa_get_ioctl_version_number();
- if (dev->data->dev_flags & RTE_ETH_DEV_INTR_LSC) {
+ if (dev->data->dev_flags & RTE_ETH_DEV_INTR_LSC &&
+ fif->mac_type != fman_offline) {
for (count = 0; count <= MAX_REPEAT_TIME; count++) {
ret = dpaa_get_link_status(__fif->node_name, link);
if (ret)
@@ -757,6 +770,11 @@ static int dpaa_eth_link_update(struct rte_eth_dev *dev,
}
} else {
link->link_status = dpaa_intf->valid;
+ if (fif->mac_type == fman_offline) {
+ /*Max supported rate for O/H port is 3.75Mpps*/
+ link->link_speed = RTE_ETH_SPEED_NUM_2_5G;
+ link->link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
+ }
}
if (ioctl_version < 2) {
@@ -1077,7 +1095,7 @@ int dpaa_eth_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
dpaa_intf->bp_info = DPAA_MEMPOOL_TO_POOL_INFO(mp);
/* For shared interface, it's done in kernel, skip.*/
- if (!fif->is_shared_mac)
+ if (!fif->is_shared_mac && fif->mac_type != fman_offline)
dpaa_fman_if_pool_setup(dev);
if (fif->num_profiles) {
@@ -1222,8 +1240,11 @@ int dpaa_eth_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
rxq->fqid, ret);
}
}
+
/* Enable main queue to receive error packets also by default */
- fman_if_set_err_fqid(fif, rxq->fqid);
+ if (fif->mac_type != fman_offline)
+ fman_if_set_err_fqid(fif, rxq->fqid);
+
return 0;
}
@@ -1372,7 +1393,8 @@ static int dpaa_link_down(struct rte_eth_dev *dev)
__fif = container_of(fif, struct __fman_if, __if);
- if (dev->data->dev_flags & RTE_ETH_DEV_INTR_LSC)
+ if (dev->data->dev_flags & RTE_ETH_DEV_INTR_LSC &&
+ fif->mac_type != fman_offline)
dpaa_update_link_status(__fif->node_name, RTE_ETH_LINK_DOWN);
else
return dpaa_eth_dev_stop(dev);
@@ -1388,7 +1410,8 @@ static int dpaa_link_up(struct rte_eth_dev *dev)
__fif = container_of(fif, struct __fman_if, __if);
- if (dev->data->dev_flags & RTE_ETH_DEV_INTR_LSC)
+ if (dev->data->dev_flags & RTE_ETH_DEV_INTR_LSC &&
+ fif->mac_type != fman_offline)
dpaa_update_link_status(__fif->node_name, RTE_ETH_LINK_UP);
else
dpaa_eth_dev_start(dev);
@@ -1483,9 +1506,15 @@ dpaa_dev_add_mac_addr(struct rte_eth_dev *dev,
__rte_unused uint32_t pool)
{
int ret;
+ struct fman_if *fif = dev->process_private;
PMD_INIT_FUNC_TRACE();
+ if (fif->mac_type == fman_offline) {
+ DPAA_PMD_DEBUG("Add MAC Address not supported on O/H port");
+ return 0;
+ }
+
ret = fman_if_add_mac_addr(dev->process_private,
addr->addr_bytes, index);
@@ -1498,8 +1527,15 @@ static void
dpaa_dev_remove_mac_addr(struct rte_eth_dev *dev,
uint32_t index)
{
+ struct fman_if *fif = dev->process_private;
+
PMD_INIT_FUNC_TRACE();
+ if (fif->mac_type == fman_offline) {
+ DPAA_PMD_DEBUG("Remove MAC Address not supported on O/H port");
+ return;
+ }
+
fman_if_clear_mac_addr(dev->process_private, index);
}
@@ -1508,9 +1544,15 @@ dpaa_dev_set_mac_addr(struct rte_eth_dev *dev,
struct rte_ether_addr *addr)
{
int ret;
+ struct fman_if *fif = dev->process_private;
PMD_INIT_FUNC_TRACE();
+ if (fif->mac_type == fman_offline) {
+ DPAA_PMD_DEBUG("Set MAC Address not supported on O/H port");
+ return 0;
+ }
+
ret = fman_if_add_mac_addr(dev->process_private, addr->addr_bytes, 0);
if (ret)
DPAA_PMD_ERR("Setting the MAC ADDR failed %d", ret);
@@ -1807,6 +1849,17 @@ static int dpaa_rx_queue_init(struct qman_fq *fq, struct qman_cgr *cgr_rx,
return ret;
}
+uint8_t fm_default_vsp_id(struct fman_if *fif)
+{
+ /* Avoid being same as base profile which could be used
+ * for kernel interface of shared mac.
+ */
+ if (fif->base_profile_id)
+ return 0;
+ else
+ return DPAA_DEFAULT_RXQ_VSP_ID;
+}
+
/* Initialise a Tx FQ */
static int dpaa_tx_queue_init(struct qman_fq *fq,
struct fman_if *fman_intf,
@@ -1842,13 +1895,20 @@ static int dpaa_tx_queue_init(struct qman_fq *fq,
} else {
/* no tx-confirmation */
opts.fqd.context_a.lo = fman_dealloc_bufs_mask_lo;
- opts.fqd.context_a.hi = 0x80000000 | fman_dealloc_bufs_mask_hi;
+ opts.fqd.context_a.hi = DPAA_FQD_CTX_A_OVERRIDE_FQ |
+ fman_dealloc_bufs_mask_hi;
}
- if (fman_ip_rev >= FMAN_V3) {
+ if (fman_ip_rev >= FMAN_V3)
/* Set B0V bit in contextA to set ASPID to 0 */
- opts.fqd.context_a.hi |= 0x04000000;
+ opts.fqd.context_a.hi |= DPAA_FQD_CTX_A_B0_FIELD_VALID;
+
+ if (fman_intf->mac_type == fman_offline) {
+ opts.fqd.context_a.lo |= DPAA_FQD_CTX_A2_VSPE_BIT;
+ opts.fqd.context_b = fm_default_vsp_id(fman_intf) <<
+ DPAA_FQD_CTX_B_SHIFT_BITS;
}
+
DPAA_PMD_DEBUG("init tx fq %p, fqid 0x%x", fq, fq->fqid);
if (cgr_tx) {
@@ -2263,7 +2323,8 @@ dpaa_dev_init(struct rte_eth_dev *eth_dev)
DPAA_PMD_DEBUG("All frame queues created");
/* Get the initial configuration for flow control */
- dpaa_fc_set_default(dpaa_intf, fman_intf);
+ if (fman_intf->mac_type != fman_offline)
+ dpaa_fc_set_default(dpaa_intf, fman_intf);
/* reset bpool list, initialize bpool dynamically */
list_for_each_entry_safe(bp, tmp_bp, &cfg->fman_if->bpool_list, node) {
@@ -2294,10 +2355,10 @@ dpaa_dev_init(struct rte_eth_dev *eth_dev)
DPAA_PMD_INFO("net: dpaa: %s: " RTE_ETHER_ADDR_PRT_FMT,
dpaa_device->name, RTE_ETHER_ADDR_BYTES(&fman_intf->mac_addr));
- if (!fman_intf->is_shared_mac) {
+ if (!fman_intf->is_shared_mac && fman_intf->mac_type != fman_offline) {
/* Configure error packet handling */
fman_if_receive_rx_errors(fman_intf,
- FM_FD_RX_STATUS_ERR_MASK);
+ FM_FD_RX_STATUS_ERR_MASK);
/* Disable RX mode */
fman_if_disable_rx(fman_intf);
/* Disable promiscuous mode */
diff --git a/drivers/net/dpaa/dpaa_ethdev.h b/drivers/net/dpaa/dpaa_ethdev.h
index 7884cc034c..8ec5155cfc 100644
--- a/drivers/net/dpaa/dpaa_ethdev.h
+++ b/drivers/net/dpaa/dpaa_ethdev.h
@@ -121,6 +121,9 @@ enum {
extern struct rte_mempool *dpaa_tx_sg_pool;
extern int dpaa_ieee_1588;
+/* PMD related logs */
+extern int dpaa_logtype_pmd;
+
/* structure to free external and indirect
* buffers.
*/
@@ -266,6 +269,9 @@ dpaa_timesync_read_rx_timestamp(struct rte_eth_dev *dev,
struct timespec *timestamp,
uint32_t flags __rte_unused);
+uint8_t
+fm_default_vsp_id(struct fman_if *fif);
+
/* PMD related logs */
extern int dpaa_logtype_pmd;
#define RTE_LOGTYPE_DPAA_PMD dpaa_logtype_pmd
diff --git a/drivers/net/dpaa/dpaa_flow.c b/drivers/net/dpaa/dpaa_flow.c
index 082bd5d014..9ef0cce38a 100644
--- a/drivers/net/dpaa/dpaa_flow.c
+++ b/drivers/net/dpaa/dpaa_flow.c
@@ -1,5 +1,5 @@
/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright 2017-2019,2021 NXP
+ * Copyright 2017-2019,2021-2024 NXP
*/
/* System headers */
@@ -29,6 +29,11 @@ return &scheme_params->param.key_ext_and_hash.extract_array[hdr_idx];
#define SCH_EXT_FULL_FLD(scheme_params, hdr_idx) \
SCH_EXT_HDR(scheme_params, hdr_idx).extract_by_hdr_type.full_field
+/* FMAN mac indexes mappings (0 is unused, first 8 are for 1G, next for 10G
+ * ports).
+ */
+const uint8_t mac_idx[] = {-1, 0, 1, 2, 3, 4, 5, 6, 7, 0, 1};
+
/* FM global info */
struct dpaa_fm_info {
t_handle fman_handle;
@@ -48,17 +53,6 @@ static struct dpaa_fm_info fm_info;
static struct dpaa_fm_model fm_model;
static const char *fm_log = "/tmp/fmdpdk.bin";
-static inline uint8_t fm_default_vsp_id(struct fman_if *fif)
-{
- /* Avoid being same as base profile which could be used
- * for kernel interface of shared mac.
- */
- if (fif->base_profile_id)
- return 0;
- else
- return DPAA_DEFAULT_RXQ_VSP_ID;
-}
-
static void fm_prev_cleanup(void)
{
uint32_t fman_id = 0, i = 0, devid;
@@ -649,12 +643,15 @@ static inline int set_pcd_netenv_scheme(struct dpaa_if *dpaa_intf,
}
-static inline int get_port_type(struct fman_if *fif)
+static inline int get_rx_port_type(struct fman_if *fif)
{
+
+ if (fif->mac_type == fman_offline_internal)
+ return e_FM_PORT_TYPE_OH_OFFLINE_PARSING;
/* For 1G fm-mac9 and fm-mac10 ports, configure the VSP as 10G
* ports so that kernel can configure correct port.
*/
- if (fif->mac_type == fman_mac_1g &&
+ else if (fif->mac_type == fman_mac_1g &&
fif->mac_idx >= DPAA_10G_MAC_START_IDX)
return e_FM_PORT_TYPE_RX_10G;
else if (fif->mac_type == fman_mac_1g)
@@ -665,7 +662,7 @@ static inline int get_port_type(struct fman_if *fif)
return e_FM_PORT_TYPE_RX_10G;
DPAA_PMD_ERR("MAC type unsupported");
- return -1;
+ return e_FM_PORT_TYPE_DUMMY;
}
static inline int set_fm_port_handle(struct dpaa_if *dpaa_intf,
@@ -676,17 +673,12 @@ static inline int set_fm_port_handle(struct dpaa_if *dpaa_intf,
ioc_fm_pcd_net_env_params_t dist_units;
PMD_INIT_FUNC_TRACE();
- /* FMAN mac indexes mappings (0 is unused,
- * first 8 are for 1G, next for 10G ports
- */
- uint8_t mac_idx[] = {-1, 0, 1, 2, 3, 4, 5, 6, 7, 0, 1};
-
/* Memset FM port params */
memset(&fm_port_params, 0, sizeof(fm_port_params));
/* Set FM port params */
fm_port_params.h_fm = fm_info.fman_handle;
- fm_port_params.port_type = get_port_type(fif);
+ fm_port_params.port_type = get_rx_port_type(fif);
fm_port_params.port_id = mac_idx[fif->mac_idx];
/* FM PORT Open */
@@ -949,7 +941,6 @@ static int dpaa_port_vsp_configure(struct dpaa_if *dpaa_intf,
{
t_fm_vsp_params vsp_params;
t_fm_buffer_prefix_content buf_prefix_cont;
- uint8_t mac_idx[] = {-1, 0, 1, 2, 3, 4, 5, 6, 7, 0, 1};
uint8_t idx = mac_idx[fif->mac_idx];
int ret;
@@ -970,17 +961,31 @@ static int dpaa_port_vsp_configure(struct dpaa_if *dpaa_intf,
memset(&vsp_params, 0, sizeof(vsp_params));
vsp_params.h_fm = fman_handle;
vsp_params.relative_profile_id = vsp_id;
- vsp_params.port_params.port_id = idx;
+ if (fif->mac_type == fman_offline_internal)
+ vsp_params.port_params.port_id = fif->mac_idx;
+ else
+ vsp_params.port_params.port_id = idx;
+
if (fif->mac_type == fman_mac_1g) {
vsp_params.port_params.port_type = e_FM_PORT_TYPE_RX;
} else if (fif->mac_type == fman_mac_2_5g) {
vsp_params.port_params.port_type = e_FM_PORT_TYPE_RX_2_5G;
} else if (fif->mac_type == fman_mac_10g) {
vsp_params.port_params.port_type = e_FM_PORT_TYPE_RX_10G;
+ } else if (fif->mac_type == fman_offline) {
+ vsp_params.port_params.port_type =
+ e_FM_PORT_TYPE_OH_OFFLINE_PARSING;
} else {
DPAA_PMD_ERR("Mac type %d error", fif->mac_type);
return -1;
}
+
+ vsp_params.port_params.port_type = get_rx_port_type(fif);
+ if (vsp_params.port_params.port_type == e_FM_PORT_TYPE_DUMMY) {
+ DPAA_PMD_ERR("Mac type %d error", fif->mac_type);
+ return -1;
+ }
+
vsp_params.ext_buf_pools.num_of_pools_used = 1;
vsp_params.ext_buf_pools.ext_buf_pool[0].id = dpaa_intf->vsp_bpid[vsp_id];
vsp_params.ext_buf_pools.ext_buf_pool[0].size = mbuf_data_room_size;
--
2.25.1
^ permalink raw reply [flat|nested] 129+ messages in thread
* [PATCH v4 15/18] bus/dpaa: add ONIC port mode for the DPAA eth
2024-09-30 12:15 ` [PATCH v4 00/18] NXP DPAA ETH driver enhancement and fixes Hemant Agrawal
` (13 preceding siblings ...)
2024-09-30 12:15 ` [PATCH v4 14/18] bus/dpaa: add OH port mode for dpaa eth Hemant Agrawal
@ 2024-09-30 12:15 ` Hemant Agrawal
2024-09-30 12:15 ` [PATCH v4 16/18] net/dpaa: improve the dpaa port cleanup Hemant Agrawal
` (4 subsequent siblings)
19 siblings, 0 replies; 129+ messages in thread
From: Hemant Agrawal @ 2024-09-30 12:15 UTC (permalink / raw)
To: dev; +Cc: ferruh.yigit, Rohit Raj
From: Rohit Raj <rohit.raj@nxp.com>
The OH ports can also be used by two applications or processing
contexts to communicate with each other.
This patch enables this mode for the dpaa-eth OH port as an ONIC port,
so that applications can use dpaa-eth to communicate with each
other on the same SoC.
Again, this property is driven by the system device-tree variables.
Signed-off-by: Rohit Raj <rohit.raj@nxp.com>
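For illustration, a minimal sketch of how an application might drive two
such ONIC ports through the generic ethdev API; the port ids, ring sizes
and the single-process forwarding loop below are assumptions, not part of
this patch:

#include <rte_ethdev.h>
#include <rte_lcore.h>

/* Hypothetical port ids; on a real system they come from the dpaa bus
 * probe (e.g. devices named fm1-onic1/fm1-onic2).
 */
#define ONIC_PORT_A 0
#define ONIC_PORT_B 1

static void
onic_forward_once(struct rte_mempool *pool)
{
	struct rte_eth_conf conf = {0};
	struct rte_mbuf *pkts[32];
	uint16_t nb, port;

	for (port = ONIC_PORT_A; port <= ONIC_PORT_B; port++) {
		/* One Rx and one Tx queue per port, default settings. */
		rte_eth_dev_configure(port, 1, 1, &conf);
		rte_eth_rx_queue_setup(port, 0, 256, rte_socket_id(),
				       NULL, pool);
		rte_eth_tx_queue_setup(port, 0, 256, rte_socket_id(), NULL);
		rte_eth_dev_start(port);
	}

	/* Whatever one ONIC port receives is sent out of the other one;
	 * the peer application does the mirror image of this.
	 */
	nb = rte_eth_rx_burst(ONIC_PORT_A, 0, pkts, 32);
	if (nb)
		rte_eth_tx_burst(ONIC_PORT_B, 0, pkts, nb);
}

Error checking is omitted; the calls above are the stock rte_ethdev ones,
nothing DPAA-specific.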
---
doc/guides/nics/dpaa.rst | 33 ++-
drivers/bus/dpaa/base/fman/fman.c | 299 +++++++++++++++++++++-
drivers/bus/dpaa/base/fman/fman_hw.c | 20 +-
drivers/bus/dpaa/base/fman/netcfg_layer.c | 4 +-
drivers/bus/dpaa/dpaa_bus.c | 16 +-
drivers/bus/dpaa/include/fman.h | 15 +-
drivers/net/dpaa/dpaa_ethdev.c | 114 +++++++--
drivers/net/dpaa/dpaa_flow.c | 24 +-
drivers/net/dpaa/dpaa_fmc.c | 2 +-
9 files changed, 467 insertions(+), 60 deletions(-)
diff --git a/doc/guides/nics/dpaa.rst b/doc/guides/nics/dpaa.rst
index 47dcce334c..a266e71a5b 100644
--- a/doc/guides/nics/dpaa.rst
+++ b/doc/guides/nics/dpaa.rst
@@ -136,7 +136,7 @@ RTE framework and DPAA internal components/drivers.
The Ethernet driver is bound to a FMAN port and implements the interfaces
needed to connect the DPAA network interface to the network stack.
Each FMAN Port corresponds to a DPDK network interface.
-- PMD also support OH mode, where the port works as a HW assisted
+- PMD also support OH/ONIC mode, where the port works as a HW assisted
virtual port without actually connecting to a Physical MAC.
@@ -152,7 +152,7 @@ Features
- Promiscuous mode
- IEEE1588 PTP
- OH Port for inter application communication
-
+ - ONIC virtual port support
DPAA Mempool Driver
~~~~~~~~~~~~~~~~~~~
@@ -350,6 +350,35 @@ OH Port
-------- Rx Packets ---------
+ONIC
+~~~~
+ To use OH port to communicate between two applications, we can assign Rx port
+ of an O/H port to Application 1 and Tx port to Application 2 so that
+ Application 1 can send packets to Application 2. Similarly, we can assign Tx
+ port of another O/H port to Application 1 and Rx port to Application 2 so that
+ Application 2 can send packets to Application 1.
+
+ ONIC is logically defined to achieve it. Internally it will use one Rx queue
+ of an O/H port and one Tx queue of another O/H port.
+ For the application, it will behave as a single O/H port.
+
+ +------+ +------+ +------+ +------+ +------+
+ | | Tx | | Rx | O/H | Tx | | Rx | |
+ | | - - - > | | - - > | Port | - - > | | - - > | |
+ | | | | | 1 | | | | |
+ | | | | +------+ | | | |
+ | App | | ONIC | | ONIC | | App |
+ | 1 | | Port | | Port | | 2 |
+ | | | 1 | +------+ | 2 | | |
+ | | Rx | | Tx | O/H | Rx | | Tx | |
+ | | < - - - | | < - - -| Port | < - - -| | < - - -| |
+ | | | | | 2 | | | | |
+ +------+ +------+ +------+ +------+ +------+
+
+ All the packets received by ONIC port 1 will be sent to ONIC port 2 and vice
+ versa. These ports can be used by DPDK applications just like physical ports.
+
+
VSP (Virtual Storage Profile)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The storage profiled are means to provide virtualized interface. A ranges of
diff --git a/drivers/bus/dpaa/base/fman/fman.c b/drivers/bus/dpaa/base/fman/fman.c
index f817305ab7..efe6eab4a9 100644
--- a/drivers/bus/dpaa/base/fman/fman.c
+++ b/drivers/bus/dpaa/base/fman/fman.c
@@ -43,7 +43,7 @@ if_destructor(struct __fman_if *__if)
if (!__if)
return;
- if (__if->__if.mac_type == fman_offline)
+ if (__if->__if.mac_type == fman_offline_internal)
goto cleanup;
list_for_each_entry_safe(bp, tmpbp, &__if->__if.bpool_list, node) {
@@ -465,7 +465,7 @@ fman_if_init(const struct device_node *dpa_node)
__if->__if.is_memac = 0;
if (is_offline)
- __if->__if.mac_type = fman_offline;
+ __if->__if.mac_type = fman_offline_internal;
else if (of_device_is_compatible(mac_node, "fsl,fman-1g-mac"))
__if->__if.mac_type = fman_mac_1g;
else if (of_device_is_compatible(mac_node, "fsl,fman-10g-mac"))
@@ -791,6 +791,292 @@ fman_if_init(const struct device_node *dpa_node)
dname, __if->__if.tx_channel_id, __if->__if.fman_idx,
__if->__if.mac_idx);
+ /* Don't add OH port to the port list since they will be used by ONIC
+ * ports.
+ */
+ if (!is_offline)
+ list_add_tail(&__if->__if.node, &__ifs);
+
+ return 0;
+err:
+ if_destructor(__if);
+ return _errno;
+}
+
+static int fman_if_init_onic(const struct device_node *dpa_node)
+{
+ struct __fman_if *__if;
+ struct fman_if_bpool *bpool;
+ const phandle *tx_pools_phandle;
+ const phandle *tx_channel_id, *mac_addr, *cell_idx;
+ const phandle *rx_phandle;
+ const struct device_node *pool_node;
+ size_t lenp;
+ int _errno;
+ const phandle *p_onic_oh_nodes = NULL;
+ const struct device_node *rx_oh_node = NULL;
+ const struct device_node *tx_oh_node = NULL;
+ const phandle *p_fman_rx_oh_node = NULL, *p_fman_tx_oh_node = NULL;
+ const struct device_node *fman_rx_oh_node = NULL;
+ const struct device_node *fman_tx_oh_node = NULL;
+ const struct device_node *fman_node;
+ uint32_t na = OF_DEFAULT_NA;
+ uint64_t rx_phandle_host[4] = {0};
+ uint64_t cell_idx_host = 0;
+
+ if (of_device_is_available(dpa_node) == false)
+ return 0;
+
+ if (!of_device_is_compatible(dpa_node, "fsl,dpa-ethernet-generic"))
+ return 0;
+
+ /* Allocate an object for this network interface */
+ __if = rte_malloc(NULL, sizeof(*__if), RTE_CACHE_LINE_SIZE);
+ if (!__if) {
+ FMAN_ERR(-ENOMEM, "malloc(%zu)\n", sizeof(*__if));
+ goto err;
+ }
+ memset(__if, 0, sizeof(*__if));
+
+ INIT_LIST_HEAD(&__if->__if.bpool_list);
+
+ strlcpy(__if->node_name, dpa_node->name, IF_NAME_MAX_LEN - 1);
+ __if->node_name[IF_NAME_MAX_LEN - 1] = '\0';
+
+ strlcpy(__if->node_path, dpa_node->full_name, PATH_MAX - 1);
+ __if->node_path[PATH_MAX - 1] = '\0';
+
+ /* Mac node is onic */
+ __if->__if.is_memac = 0;
+ __if->__if.mac_type = fman_onic;
+
+ /* Extract the MAC address for linux peer */
+ mac_addr = of_get_property(dpa_node, "local-mac-address", &lenp);
+ if (!mac_addr) {
+ FMAN_ERR(-EINVAL, "%s: no local-mac-address\n",
+ dpa_node->full_name);
+ goto err;
+ }
+
+ memcpy(&__if->__if.onic_info.peer_mac, mac_addr, ETHER_ADDR_LEN);
+
+ /* Extract the Rx port (it's the first of the two port handles)
+ * and get its channel ID.
+ */
+ p_onic_oh_nodes = of_get_property(dpa_node, "fsl,oh-ports", &lenp);
+ if (!p_onic_oh_nodes) {
+ FMAN_ERR(-EINVAL, "%s: couldn't get p_onic_oh_nodes\n",
+ dpa_node->full_name);
+ goto err;
+ }
+
+ rx_oh_node = of_find_node_by_phandle(p_onic_oh_nodes[0]);
+ if (!rx_oh_node) {
+ FMAN_ERR(-EINVAL, "%s: couldn't get rx_oh_node\n",
+ dpa_node->full_name);
+ goto err;
+ }
+
+ p_fman_rx_oh_node = of_get_property(rx_oh_node, "fsl,fman-oh-port",
+ &lenp);
+ if (!p_fman_rx_oh_node) {
+ FMAN_ERR(-EINVAL, "%s: couldn't get p_fman_rx_oh_node\n",
+ rx_oh_node->full_name);
+ goto err;
+ }
+
+ fman_rx_oh_node = of_find_node_by_phandle(*p_fman_rx_oh_node);
+ if (!fman_rx_oh_node) {
+ FMAN_ERR(-EINVAL, "%s: couldn't get fman_rx_oh_node\n",
+ rx_oh_node->full_name);
+ goto err;
+ }
+
+ tx_channel_id = of_get_property(fman_rx_oh_node, "fsl,qman-channel-id",
+ &lenp);
+ if (!tx_channel_id) {
+ FMAN_ERR(-EINVAL, "%s: no fsl-qman-channel-id\n",
+ rx_oh_node->full_name);
+ goto err;
+ }
+ assert(lenp == sizeof(*tx_channel_id));
+
+ __if->__if.tx_channel_id = of_read_number(tx_channel_id, na);
+
+ /* Extract the FQs from which oNIC driver in Linux is dequeuing */
+ rx_phandle = of_get_property(rx_oh_node, "fsl,qman-frame-queues-oh",
+ &lenp);
+ if (!rx_phandle) {
+ FMAN_ERR(-EINVAL, "%s: no fsl,qman-frame-queues-oh\n",
+ rx_oh_node->full_name);
+ goto err;
+ }
+ assert(lenp == (4 * sizeof(phandle)));
+
+ __if->__if.onic_info.rx_start = of_read_number(&rx_phandle[2], na);
+ __if->__if.onic_info.rx_count = of_read_number(&rx_phandle[3], na);
+
+ /* Extract the Rx FQIDs */
+ tx_oh_node = of_find_node_by_phandle(p_onic_oh_nodes[1]);
+ if (!tx_oh_node) {
+ FMAN_ERR(-EINVAL, "%s: couldn't get tx_oh_node\n",
+ dpa_node->full_name);
+ goto err;
+ }
+
+ p_fman_tx_oh_node = of_get_property(tx_oh_node, "fsl,fman-oh-port",
+ &lenp);
+ if (!p_fman_tx_oh_node) {
+ FMAN_ERR(-EINVAL, "%s: couldn't get p_fman_tx_oh_node\n",
+ tx_oh_node->full_name);
+ goto err;
+ }
+
+ fman_tx_oh_node = of_find_node_by_phandle(*p_fman_tx_oh_node);
+ if (!fman_tx_oh_node) {
+ FMAN_ERR(-EINVAL, "%s: couldn't get fman_tx_oh_node\n",
+ tx_oh_node->full_name);
+ goto err;
+ }
+
+ cell_idx = of_get_property(fman_tx_oh_node, "cell-index", &lenp);
+ if (!cell_idx) {
+ FMAN_ERR(-ENXIO, "%s: no cell-index)\n", tx_oh_node->full_name);
+ goto err;
+ }
+ assert(lenp == sizeof(*cell_idx));
+
+ cell_idx_host = of_read_number(cell_idx, lenp / sizeof(phandle));
+ __if->__if.mac_idx = cell_idx_host;
+
+ fman_node = of_get_parent(fman_tx_oh_node);
+ cell_idx = of_get_property(fman_node, "cell-index", &lenp);
+ if (!cell_idx) {
+ FMAN_ERR(-ENXIO, "%s: no cell-index)\n", tx_oh_node->full_name);
+ goto err;
+ }
+ assert(lenp == sizeof(*cell_idx));
+
+ cell_idx_host = of_read_number(cell_idx, lenp / sizeof(phandle));
+ __if->__if.fman_idx = cell_idx_host;
+
+ rx_phandle = of_get_property(tx_oh_node, "fsl,qman-frame-queues-oh",
+ &lenp);
+ if (!rx_phandle) {
+ FMAN_ERR(-EINVAL, "%s: no fsl,qman-frame-queues-oh\n",
+ dpa_node->full_name);
+ goto err;
+ }
+ assert(lenp == (4 * sizeof(phandle)));
+
+ rx_phandle_host[0] = of_read_number(&rx_phandle[0], na);
+ rx_phandle_host[1] = of_read_number(&rx_phandle[1], na);
+ rx_phandle_host[2] = of_read_number(&rx_phandle[2], na);
+ rx_phandle_host[3] = of_read_number(&rx_phandle[3], na);
+
+ assert((rx_phandle_host[1] == 1) && (rx_phandle_host[3] == 1));
+
+ __if->__if.fqid_rx_err = rx_phandle_host[0];
+ __if->__if.fqid_rx_def = rx_phandle_host[2];
+
+ /* Don't Extract the Tx FQIDs */
+ __if->__if.fqid_tx_err = 0;
+ __if->__if.fqid_tx_confirm = 0;
+
+ /* Obtain the buffer pool nodes used by Tx OH port */
+ tx_pools_phandle = of_get_property(tx_oh_node, "fsl,bman-buffer-pools",
+ &lenp);
+ if (!tx_pools_phandle) {
+ FMAN_ERR(-EINVAL, "%s: no fsl,bman-buffer-pools\n",
+ tx_oh_node->full_name);
+ goto err;
+ }
+ assert(lenp && !(lenp % sizeof(phandle)));
+
+ /* For each pool, parse the corresponding node and add a pool object to
+ * the interface's "bpool_list".
+ */
+
+ while (lenp) {
+ size_t proplen;
+ const phandle *prop;
+ uint64_t bpool_host[6] = {0};
+
+ /* Allocate an object for the pool */
+ bpool = rte_malloc(NULL, sizeof(*bpool), RTE_CACHE_LINE_SIZE);
+ if (!bpool) {
+ FMAN_ERR(-ENOMEM, "malloc(%zu)\n", sizeof(*bpool));
+ goto err;
+ }
+
+ /* Find the pool node */
+ pool_node = of_find_node_by_phandle(*tx_pools_phandle);
+ if (!pool_node) {
+ FMAN_ERR(-ENXIO, "%s: bad fsl,bman-buffer-pools\n",
+ tx_oh_node->full_name);
+ rte_free(bpool);
+ goto err;
+ }
+
+ /* Extract the BPID property */
+ prop = of_get_property(pool_node, "fsl,bpid", &proplen);
+ if (!prop) {
+ FMAN_ERR(-EINVAL, "%s: no fsl,bpid\n",
+ pool_node->full_name);
+ rte_free(bpool);
+ goto err;
+ }
+ assert(proplen == sizeof(*prop));
+
+ bpool->bpid = of_read_number(prop, na);
+
+ /* Extract the cfg property (count/size/addr). "fsl,bpool-cfg"
+ * indicates for the Bman driver to seed the pool.
+ * "fsl,bpool-ethernet-cfg" is used by the network driver. The
+ * two are mutually exclusive, so check for either of them.
+ */
+
+ prop = of_get_property(pool_node, "fsl,bpool-cfg", &proplen);
+ if (!prop)
+ prop = of_get_property(pool_node,
+ "fsl,bpool-ethernet-cfg",
+ &proplen);
+ if (!prop) {
+ /* It's OK for there to be no bpool-cfg */
+ bpool->count = bpool->size = bpool->addr = 0;
+ } else {
+ assert(proplen == (6 * sizeof(*prop)));
+
+ bpool_host[0] = of_read_number(&prop[0], na);
+ bpool_host[1] = of_read_number(&prop[1], na);
+ bpool_host[2] = of_read_number(&prop[2], na);
+ bpool_host[3] = of_read_number(&prop[3], na);
+ bpool_host[4] = of_read_number(&prop[4], na);
+ bpool_host[5] = of_read_number(&prop[5], na);
+
+ bpool->count = ((uint64_t)bpool_host[0] << 32) |
+ bpool_host[1];
+ bpool->size = ((uint64_t)bpool_host[2] << 32) |
+ bpool_host[3];
+ bpool->addr = ((uint64_t)bpool_host[4] << 32) |
+ bpool_host[5];
+ }
+
+ /* Parsing of the pool is complete, add it to the interface
+ * list.
+ */
+ list_add_tail(&bpool->node, &__if->__if.bpool_list);
+ lenp -= sizeof(phandle);
+ tx_pools_phandle++;
+ }
+
+ fman_if_vsp_init(__if);
+
+ /* Parsing of the network interface is complete, add it to the list. */
+ DPAA_BUS_DEBUG("Found %s, Tx Channel = %x, FMAN = %x, Port ID = %x",
+ dpa_node->full_name, __if->__if.tx_channel_id,
+ __if->__if.fman_idx, __if->__if.mac_idx);
+
list_add_tail(&__if->__if.node, &__ifs);
return 0;
err:
@@ -830,6 +1116,13 @@ fman_init(void)
}
}
+ for_each_compatible_node(dpa_node, NULL, "fsl,dpa-ethernet-generic") {
+ /* it is a oNIC interface */
+ _errno = fman_if_init_onic(dpa_node);
+ if (_errno)
+ FMAN_ERR(_errno, "if_init(%s)\n", dpa_node->full_name);
+ }
+
return 0;
err:
fman_finish();
@@ -847,7 +1140,7 @@ fman_finish(void)
int _errno;
/* No need to disable Offline port */
- if (__if->__if.mac_type == fman_offline)
+ if (__if->__if.mac_type == fman_offline_internal)
continue;
/* disable Rx and Tx */
diff --git a/drivers/bus/dpaa/base/fman/fman_hw.c b/drivers/bus/dpaa/base/fman/fman_hw.c
index 1f61ae406b..cbb0491d70 100644
--- a/drivers/bus/dpaa/base/fman/fman_hw.c
+++ b/drivers/bus/dpaa/base/fman/fman_hw.c
@@ -88,8 +88,9 @@ fman_if_add_hash_mac_addr(struct fman_if *p, uint8_t *eth)
struct __fman_if *__if = container_of(p, struct __fman_if, __if);
- /* Add hash mac addr not supported on Offline port */
- if (__if->__if.mac_type == fman_offline)
+ /* Add hash mac addr not supported on Offline port and onic port */
+ if (__if->__if.mac_type == fman_offline_internal ||
+ __if->__if.mac_type == fman_onic)
return 0;
eth_addr = ETH_ADDR_TO_UINT64(eth);
@@ -115,9 +116,10 @@ fman_if_get_primary_mac_addr(struct fman_if *p, uint8_t *eth)
u32 val = in_be32(mac_reg);
int i;
- /* Get mac addr not supported on Offline port */
+ /* Get mac addr not supported on Offline port and onic port */
/* Return NULL mac address */
- if (__if->__if.mac_type == fman_offline) {
+ if (__if->__if.mac_type == fman_offline_internal ||
+ __if->__if.mac_type == fman_onic) {
for (i = 0; i < 6; i++)
eth[i] = 0x0;
return 0;
@@ -143,8 +145,9 @@ fman_if_clear_mac_addr(struct fman_if *p, uint8_t addr_num)
struct __fman_if *m = container_of(p, struct __fman_if, __if);
void *reg;
- /* Clear mac addr not supported on Offline port */
- if (m->__if.mac_type == fman_offline)
+ /* Clear mac addr not supported on Offline port and onic port */
+ if (m->__if.mac_type == fman_offline_internal ||
+ m->__if.mac_type == fman_onic)
return;
if (addr_num) {
@@ -169,8 +172,9 @@ fman_if_add_mac_addr(struct fman_if *p, uint8_t *eth, uint8_t addr_num)
void *reg;
u32 val;
- /* Set mac addr not supported on Offline port */
- if (m->__if.mac_type == fman_offline)
+ /* Set mac addr not supported on Offline port and onic port */
+ if (m->__if.mac_type == fman_offline_internal ||
+ m->__if.mac_type == fman_onic)
return 0;
memcpy(&m->__if.mac_addr, eth, ETHER_ADDR_LEN);
diff --git a/drivers/bus/dpaa/base/fman/netcfg_layer.c b/drivers/bus/dpaa/base/fman/netcfg_layer.c
index e6a6ed1eb6..ffb37825c2 100644
--- a/drivers/bus/dpaa/base/fman/netcfg_layer.c
+++ b/drivers/bus/dpaa/base/fman/netcfg_layer.c
@@ -44,7 +44,7 @@ dump_netcfg(struct netcfg_info *cfg_ptr, FILE *f)
fprintf(f, "\n+ Fman %d, MAC %d (%s);\n",
__if->fman_idx, __if->mac_idx,
- (__if->mac_type == fman_offline) ? "OFFLINE" :
+ (__if->mac_type == fman_offline_internal) ? "OFFLINE" :
(__if->mac_type == fman_mac_1g) ? "1G" :
(__if->mac_type == fman_mac_2_5g) ? "2.5G" : "10G");
@@ -57,7 +57,7 @@ dump_netcfg(struct netcfg_info *cfg_ptr, FILE *f)
fprintf(f, "\tfqid_rx_def: 0x%x\n", p_cfg->rx_def);
fprintf(f, "\tfqid_rx_err: 0x%x\n", __if->fqid_rx_err);
- if (__if->mac_type != fman_offline) {
+ if (__if->mac_type != fman_offline_internal) {
fprintf(f, "\tfqid_tx_err: 0x%x\n", __if->fqid_tx_err);
fprintf(f, "\tfqid_tx_confirm: 0x%x\n", __if->fqid_tx_confirm);
fman_if_for_each_bpool(bpool, __if)
diff --git a/drivers/bus/dpaa/dpaa_bus.c b/drivers/bus/dpaa/dpaa_bus.c
index 6e4ec90670..9ffbe07c93 100644
--- a/drivers/bus/dpaa/dpaa_bus.c
+++ b/drivers/bus/dpaa/dpaa_bus.c
@@ -171,8 +171,10 @@ dpaa_create_device_list(void)
struct fm_eth_port_cfg *cfg;
struct fman_if *fman_intf;
+ rte_dpaa_bus.device_count = 0;
+
/* Creating Ethernet Devices */
- for (i = 0; i < dpaa_netcfg->num_ethports; i++) {
+ for (i = 0; dpaa_netcfg && (i < dpaa_netcfg->num_ethports); i++) {
dev = calloc(1, sizeof(struct rte_dpaa_device));
if (!dev) {
DPAA_BUS_LOG(ERR, "Failed to allocate ETH devices");
@@ -204,9 +206,12 @@ dpaa_create_device_list(void)
/* Create device name */
memset(dev->name, 0, RTE_ETH_NAME_MAX_LEN);
- if (fman_intf->mac_type == fman_offline)
+ if (fman_intf->mac_type == fman_offline_internal)
sprintf(dev->name, "fm%d-oh%d",
(fman_intf->fman_idx + 1), fman_intf->mac_idx);
+ else if (fman_intf->mac_type == fman_onic)
+ sprintf(dev->name, "fm%d-onic%d",
+ (fman_intf->fman_idx + 1), fman_intf->mac_idx);
else
sprintf(dev->name, "fm%d-mac%d",
(fman_intf->fman_idx + 1), fman_intf->mac_idx);
@@ -216,7 +221,7 @@ dpaa_create_device_list(void)
dpaa_add_to_device_list(dev);
}
- rte_dpaa_bus.device_count = i;
+ rte_dpaa_bus.device_count += i;
/* Unlike case of ETH, RTE_LIBRTE_DPAA_MAX_CRYPTODEV SEC devices are
* constantly created only if "sec" property is found in the device
@@ -477,6 +482,11 @@ rte_dpaa_bus_parse(const char *name, void *out)
i >= 2 || j >= 16)
return -EINVAL;
max_name_len = sizeof("fm.-oh..") - 1;
+ } else if (strncmp("onic", &name[dev_delta], 4) == 0) {
+ if (sscanf(&name[delta], "fm%u-onic%u", &i, &j) != 2 ||
+ i >= 2 || j >= 16)
+ return -EINVAL;
+ max_name_len = sizeof("fm.-onic..") - 1;
} else {
if (sscanf(&name[delta], "fm%u-mac%u", &i, &j) != 2 ||
i >= 2 || j >= 16)
diff --git a/drivers/bus/dpaa/include/fman.h b/drivers/bus/dpaa/include/fman.h
index 377f73bf0d..01556cf2a8 100644
--- a/drivers/bus/dpaa/include/fman.h
+++ b/drivers/bus/dpaa/include/fman.h
@@ -78,7 +78,7 @@ TAILQ_HEAD(rte_fman_if_list, __fman_if);
/* Represents the different flavour of network interface */
enum fman_mac_type {
- fman_offline = 0,
+ fman_offline_internal = 0,
fman_mac_1g,
fman_mac_10g,
fman_mac_2_5g,
@@ -366,6 +366,16 @@ struct fman_port_qmi_regs {
uint32_t fmqm_pndcc; /**< PortID n Dequeue Confirm Counter */
};
+struct onic_port_cfg {
+ char macless_name[IF_NAME_MAX_LEN];
+ uint32_t rx_start;
+ uint32_t rx_count;
+ uint32_t tx_start;
+ uint32_t tx_count;
+ struct rte_ether_addr src_mac;
+ struct rte_ether_addr peer_mac;
+};
+
/* This struct exports parameters about an Fman network interface, determined
* from the device-tree.
*/
@@ -401,6 +411,9 @@ struct fman_if {
uint32_t fqid_tx_err;
uint32_t fqid_tx_confirm;
+ /* oNIC port info */
+ struct onic_port_cfg onic_info;
+
struct list_head bpool_list;
/* The node for linking this interface into "fman_if_list" */
struct list_head node;
diff --git a/drivers/net/dpaa/dpaa_ethdev.c b/drivers/net/dpaa/dpaa_ethdev.c
index f8196ddd14..133fbd5bc9 100644
--- a/drivers/net/dpaa/dpaa_ethdev.c
+++ b/drivers/net/dpaa/dpaa_ethdev.c
@@ -295,7 +295,8 @@ dpaa_eth_dev_configure(struct rte_eth_dev *dev)
max_rx_pktlen = DPAA_MAX_RX_PKT_LEN;
}
- if (fif->mac_type != fman_offline)
+ if (!fif->is_shared_mac && fif->mac_type != fman_offline_internal &&
+ fif->mac_type != fman_onic)
fman_if_set_maxfrm(dev->process_private, max_rx_pktlen);
if (rx_offloads & RTE_ETH_RX_OFFLOAD_SCATTER) {
@@ -315,7 +316,8 @@ dpaa_eth_dev_configure(struct rte_eth_dev *dev)
}
/* Disable interrupt support on offline port*/
- if (fif->mac_type == fman_offline)
+ if (fif->mac_type == fman_offline_internal ||
+ fif->mac_type == fman_onic)
return 0;
/* if the interrupts were configured on this devices*/
@@ -467,10 +469,11 @@ static int dpaa_eth_dev_start(struct rte_eth_dev *dev)
else
dev->tx_pkt_burst = dpaa_eth_queue_tx;
- fman_if_bmi_stats_enable(fif);
- fman_if_bmi_stats_reset(fif);
- fman_if_enable_rx(fif);
-
+ if (fif->mac_type != fman_onic) {
+ fman_if_bmi_stats_enable(fif);
+ fman_if_bmi_stats_reset(fif);
+ fman_if_enable_rx(fif);
+ }
for (i = 0; i < dev->data->nb_rx_queues; i++)
dev->data->rx_queue_state[i] = RTE_ETH_QUEUE_STATE_STARTED;
for (i = 0; i < dev->data->nb_tx_queues; i++)
@@ -535,7 +538,8 @@ static int dpaa_eth_dev_close(struct rte_eth_dev *dev)
ret = dpaa_eth_dev_stop(dev);
- if (fif->mac_type == fman_offline)
+ if (fif->mac_type == fman_offline_internal ||
+ fif->mac_type == fman_onic)
return 0;
/* Reset link to autoneg */
@@ -651,11 +655,14 @@ static int dpaa_eth_dev_info(struct rte_eth_dev *dev,
| RTE_ETH_LINK_SPEED_1G
| RTE_ETH_LINK_SPEED_2_5G
| RTE_ETH_LINK_SPEED_10G;
- } else if (fif->mac_type == fman_offline) {
+ } else if (fif->mac_type == fman_offline_internal ||
+ fif->mac_type == fman_onic) {
dev_info->speed_capa = RTE_ETH_LINK_SPEED_10M_HD
| RTE_ETH_LINK_SPEED_10M
| RTE_ETH_LINK_SPEED_100M_HD
- | RTE_ETH_LINK_SPEED_100M;
+ | RTE_ETH_LINK_SPEED_100M
+ | RTE_ETH_LINK_SPEED_1G
+ | RTE_ETH_LINK_SPEED_2_5G;
} else {
DPAA_PMD_ERR("invalid link_speed: %s, %d",
dpaa_intf->name, fif->mac_type);
@@ -757,7 +764,8 @@ static int dpaa_eth_link_update(struct rte_eth_dev *dev,
ioctl_version = dpaa_get_ioctl_version_number();
if (dev->data->dev_flags & RTE_ETH_DEV_INTR_LSC &&
- fif->mac_type != fman_offline) {
+ fif->mac_type != fman_offline_internal &&
+ fif->mac_type != fman_onic) {
for (count = 0; count <= MAX_REPEAT_TIME; count++) {
ret = dpaa_get_link_status(__fif->node_name, link);
if (ret)
@@ -770,7 +778,8 @@ static int dpaa_eth_link_update(struct rte_eth_dev *dev,
}
} else {
link->link_status = dpaa_intf->valid;
- if (fif->mac_type == fman_offline) {
+ if (fif->mac_type == fman_offline_internal ||
+ fif->mac_type == fman_onic) {
/*Max supported rate for O/H port is 3.75Mpps*/
link->link_speed = RTE_ETH_SPEED_NUM_2_5G;
link->link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
@@ -933,8 +942,16 @@ dpaa_xstats_get_names_by_id(
static int dpaa_eth_promiscuous_enable(struct rte_eth_dev *dev)
{
+ struct fman_if *fif = dev->process_private;
+
PMD_INIT_FUNC_TRACE();
+ if (fif->mac_type == fman_onic) {
+ DPAA_PMD_INFO("Enable promiscuous mode not supported on ONIC "
+ "port");
+ return 0;
+ }
+
fman_if_promiscuous_enable(dev->process_private);
return 0;
@@ -942,8 +959,16 @@ static int dpaa_eth_promiscuous_enable(struct rte_eth_dev *dev)
static int dpaa_eth_promiscuous_disable(struct rte_eth_dev *dev)
{
+ struct fman_if *fif = dev->process_private;
+
PMD_INIT_FUNC_TRACE();
+ if (fif->mac_type == fman_onic) {
+ DPAA_PMD_INFO("Disable promiscuous mode not supported on ONIC "
+ "port");
+ return 0;
+ }
+
fman_if_promiscuous_disable(dev->process_private);
return 0;
@@ -951,8 +976,15 @@ static int dpaa_eth_promiscuous_disable(struct rte_eth_dev *dev)
static int dpaa_eth_multicast_enable(struct rte_eth_dev *dev)
{
+ struct fman_if *fif = dev->process_private;
+
PMD_INIT_FUNC_TRACE();
+ if (fif->mac_type == fman_onic) {
+ DPAA_PMD_INFO("Enable Multicast not supported on ONIC port");
+ return 0;
+ }
+
fman_if_set_mcast_filter_table(dev->process_private);
return 0;
@@ -960,8 +992,15 @@ static int dpaa_eth_multicast_enable(struct rte_eth_dev *dev)
static int dpaa_eth_multicast_disable(struct rte_eth_dev *dev)
{
+ struct fman_if *fif = dev->process_private;
+
PMD_INIT_FUNC_TRACE();
+ if (fif->mac_type == fman_onic) {
+ DPAA_PMD_INFO("Disable Multicast not supported on ONIC port");
+ return 0;
+ }
+
fman_if_reset_mcast_filter_table(dev->process_private);
return 0;
@@ -1095,7 +1134,8 @@ int dpaa_eth_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
dpaa_intf->bp_info = DPAA_MEMPOOL_TO_POOL_INFO(mp);
/* For shared interface, it's done in kernel, skip.*/
- if (!fif->is_shared_mac && fif->mac_type != fman_offline)
+ if (!fif->is_shared_mac && fif->mac_type != fman_offline_internal &&
+ fif->mac_type != fman_onic)
dpaa_fman_if_pool_setup(dev);
if (fif->num_profiles) {
@@ -1126,8 +1166,9 @@ int dpaa_eth_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
}
dpaa_intf->valid = 1;
- DPAA_PMD_DEBUG("if:%s sg_on = %d, max_frm =%d", dpaa_intf->name,
- fman_if_get_sg_enable(fif), max_rx_pktlen);
+ if (fif->mac_type != fman_onic)
+ DPAA_PMD_DEBUG("if:%s sg_on = %d, max_frm =%d", dpaa_intf->name,
+ fman_if_get_sg_enable(fif), max_rx_pktlen);
/* checking if push mode only, no error check for now */
if (!rxq->is_static &&
dpaa_push_mode_max_queue > dpaa_push_queue_idx) {
@@ -1242,7 +1283,8 @@ int dpaa_eth_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
}
/* Enable main queue to receive error packets also by default */
- if (fif->mac_type != fman_offline)
+ if (fif->mac_type != fman_offline_internal &&
+ fif->mac_type != fman_onic)
fman_if_set_err_fqid(fif, rxq->fqid);
return 0;
@@ -1394,7 +1436,8 @@ static int dpaa_link_down(struct rte_eth_dev *dev)
__fif = container_of(fif, struct __fman_if, __if);
if (dev->data->dev_flags & RTE_ETH_DEV_INTR_LSC &&
- fif->mac_type != fman_offline)
+ fif->mac_type != fman_offline_internal &&
+ fif->mac_type != fman_onic)
dpaa_update_link_status(__fif->node_name, RTE_ETH_LINK_DOWN);
else
return dpaa_eth_dev_stop(dev);
@@ -1411,7 +1454,8 @@ static int dpaa_link_up(struct rte_eth_dev *dev)
__fif = container_of(fif, struct __fman_if, __if);
if (dev->data->dev_flags & RTE_ETH_DEV_INTR_LSC &&
- fif->mac_type != fman_offline)
+ fif->mac_type != fman_offline_internal &&
+ fif->mac_type != fman_onic)
dpaa_update_link_status(__fif->node_name, RTE_ETH_LINK_UP);
else
dpaa_eth_dev_start(dev);
@@ -1510,11 +1554,16 @@ dpaa_dev_add_mac_addr(struct rte_eth_dev *dev,
PMD_INIT_FUNC_TRACE();
- if (fif->mac_type == fman_offline) {
+ if (fif->mac_type == fman_offline_internal) {
DPAA_PMD_DEBUG("Add MAC Address not supported on O/H port");
return 0;
}
+ if (fif->mac_type == fman_onic) {
+ DPAA_PMD_INFO("Add MAC Address not supported on ONIC port");
+ return 0;
+ }
+
ret = fman_if_add_mac_addr(dev->process_private,
addr->addr_bytes, index);
@@ -1531,11 +1580,16 @@ dpaa_dev_remove_mac_addr(struct rte_eth_dev *dev,
PMD_INIT_FUNC_TRACE();
- if (fif->mac_type == fman_offline) {
+ if (fif->mac_type == fman_offline_internal) {
DPAA_PMD_DEBUG("Remove MAC Address not supported on O/H port");
return;
}
+ if (fif->mac_type == fman_onic) {
+ DPAA_PMD_INFO("Remove MAC Address not supported on ONIC port");
+ return;
+ }
+
fman_if_clear_mac_addr(dev->process_private, index);
}
@@ -1548,11 +1602,16 @@ dpaa_dev_set_mac_addr(struct rte_eth_dev *dev,
PMD_INIT_FUNC_TRACE();
- if (fif->mac_type == fman_offline) {
+ if (fif->mac_type == fman_offline_internal) {
DPAA_PMD_DEBUG("Set MAC Address not supported on O/H port");
return 0;
}
+ if (fif->mac_type == fman_onic) {
+ DPAA_PMD_INFO("Set MAC Address not supported on ONIC port");
+ return 0;
+ }
+
ret = fman_if_add_mac_addr(dev->process_private, addr->addr_bytes, 0);
if (ret)
DPAA_PMD_ERR("Setting the MAC ADDR failed %d", ret);
@@ -1903,7 +1962,8 @@ static int dpaa_tx_queue_init(struct qman_fq *fq,
/* Set B0V bit in contextA to set ASPID to 0 */
opts.fqd.context_a.hi |= DPAA_FQD_CTX_A_B0_FIELD_VALID;
- if (fman_intf->mac_type == fman_offline) {
+ if (fman_intf->mac_type == fman_offline_internal ||
+ fman_intf->mac_type == fman_onic) {
opts.fqd.context_a.lo |= DPAA_FQD_CTX_A2_VSPE_BIT;
opts.fqd.context_b = fm_default_vsp_id(fman_intf) <<
DPAA_FQD_CTX_B_SHIFT_BITS;
@@ -2156,6 +2216,11 @@ dpaa_dev_init(struct rte_eth_dev *eth_dev)
goto free_rx;
}
if (!num_rx_fqs) {
+ if (fman_intf->mac_type == fman_offline_internal ||
+ fman_intf->mac_type == fman_onic) {
+ ret = -ENODEV;
+ goto free_rx;
+ }
DPAA_PMD_WARN("%s is not configured by FMC.",
dpaa_intf->name);
}
@@ -2323,7 +2388,8 @@ dpaa_dev_init(struct rte_eth_dev *eth_dev)
DPAA_PMD_DEBUG("All frame queues created");
/* Get the initial configuration for flow control */
- if (fman_intf->mac_type != fman_offline)
+ if (fman_intf->mac_type != fman_offline_internal &&
+ fman_intf->mac_type != fman_onic)
dpaa_fc_set_default(dpaa_intf, fman_intf);
/* reset bpool list, initialize bpool dynamically */
@@ -2355,7 +2421,9 @@ dpaa_dev_init(struct rte_eth_dev *eth_dev)
DPAA_PMD_INFO("net: dpaa: %s: " RTE_ETHER_ADDR_PRT_FMT,
dpaa_device->name, RTE_ETHER_ADDR_BYTES(&fman_intf->mac_addr));
- if (!fman_intf->is_shared_mac && fman_intf->mac_type != fman_offline) {
+ if (!fman_intf->is_shared_mac &&
+ fman_intf->mac_type != fman_offline_internal &&
+ fman_intf->mac_type != fman_onic) {
/* Configure error packet handling */
fman_if_receive_rx_errors(fman_intf,
FM_FD_RX_STATUS_ERR_MASK);
diff --git a/drivers/net/dpaa/dpaa_flow.c b/drivers/net/dpaa/dpaa_flow.c
index 9ef0cce38a..62a2884172 100644
--- a/drivers/net/dpaa/dpaa_flow.c
+++ b/drivers/net/dpaa/dpaa_flow.c
@@ -645,8 +645,11 @@ static inline int set_pcd_netenv_scheme(struct dpaa_if *dpaa_intf,
static inline int get_rx_port_type(struct fman_if *fif)
{
-
- if (fif->mac_type == fman_offline_internal)
+ /* For onic ports, configure the VSP as offline ports so that
+ * kernel can configure correct port.
+ */
+ if (fif->mac_type == fman_offline_internal ||
+ fif->mac_type == fman_onic)
return e_FM_PORT_TYPE_OH_OFFLINE_PARSING;
/* For 1G fm-mac9 and fm-mac10 ports, configure the VSP as 10G
* ports so that kernel can configure correct port.
@@ -961,25 +964,12 @@ static int dpaa_port_vsp_configure(struct dpaa_if *dpaa_intf,
memset(&vsp_params, 0, sizeof(vsp_params));
vsp_params.h_fm = fman_handle;
vsp_params.relative_profile_id = vsp_id;
- if (fif->mac_type == fman_offline_internal)
+ if (fif->mac_type == fman_offline_internal ||
+ fif->mac_type == fman_onic)
vsp_params.port_params.port_id = fif->mac_idx;
else
vsp_params.port_params.port_id = idx;
- if (fif->mac_type == fman_mac_1g) {
- vsp_params.port_params.port_type = e_FM_PORT_TYPE_RX;
- } else if (fif->mac_type == fman_mac_2_5g) {
- vsp_params.port_params.port_type = e_FM_PORT_TYPE_RX_2_5G;
- } else if (fif->mac_type == fman_mac_10g) {
- vsp_params.port_params.port_type = e_FM_PORT_TYPE_RX_10G;
- } else if (fif->mac_type == fman_offline) {
- vsp_params.port_params.port_type =
- e_FM_PORT_TYPE_OH_OFFLINE_PARSING;
- } else {
- DPAA_PMD_ERR("Mac type %d error", fif->mac_type);
- return -1;
- }
-
vsp_params.port_params.port_type = get_rx_port_type(fif);
if (vsp_params.port_params.port_type == e_FM_PORT_TYPE_DUMMY) {
DPAA_PMD_ERR("Mac type %d error", fif->mac_type);
diff --git a/drivers/net/dpaa/dpaa_fmc.c b/drivers/net/dpaa/dpaa_fmc.c
index d80ea1010a..7dc42f6e23 100644
--- a/drivers/net/dpaa/dpaa_fmc.c
+++ b/drivers/net/dpaa/dpaa_fmc.c
@@ -215,7 +215,7 @@ dpaa_port_fmc_port_parse(struct fman_if *fif,
if (pport->type == e_FM_PORT_TYPE_OH_OFFLINE_PARSING &&
pport->number == fif->mac_idx &&
- (fif->mac_type == fman_offline ||
+ (fif->mac_type == fman_offline_internal ||
fif->mac_type == fman_onic))
return current_port;
--
2.25.1
^ permalink raw reply [flat|nested] 129+ messages in thread
* [PATCH v4 16/18] net/dpaa: improve the dpaa port cleanup
2024-09-30 12:15 ` [PATCH v4 00/18] NXP DPAA ETH driver enhancement and fixes Hemant Agrawal
` (14 preceding siblings ...)
2024-09-30 12:15 ` [PATCH v4 15/18] bus/dpaa: add ONIC port mode for the DPAA eth Hemant Agrawal
@ 2024-09-30 12:15 ` Hemant Agrawal
2024-09-30 12:15 ` [PATCH v4 17/18] net/dpaa: improve dpaa errata A010022 handling Hemant Agrawal
` (3 subsequent siblings)
19 siblings, 0 replies; 129+ messages in thread
From: Hemant Agrawal @ 2024-09-30 12:15 UTC (permalink / raw)
To: dev; +Cc: ferruh.yigit, Gagandeep Singh
From: Gagandeep Singh <g.singh@nxp.com>
During DPAA cleanup in FMCLESS mode, the application can
hit a segmentation fault in the device close API and in the
DPAA destructor execution.
The segmentation fault in device close happens because the
driver reduces the number of queues initialised during device
configuration without releasing the actual queues.
The segmentation fault in the DPAA destructor happens because
it tries to access RTE_* devices whose memory has already been
released by the application's rte_eal_cleanup() call.
This patch improves the behavior.
Signed-off-by: Gagandeep Singh <g.singh@nxp.com>
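The fix relies on the normal ethdev teardown path running while the EAL is
still initialised, rather than on the driver destructor after
rte_eal_cleanup(). As a rough sketch of the application-side ordering this
assumes (standard ethdev calls only, nothing DPAA-specific):

#include <rte_ethdev.h>
#include <rte_eal.h>

static void
app_teardown(void)
{
	uint16_t portid;

	/* Stop and close every port while EAL resources are still
	 * valid; only then tear the EAL down. Doing port cleanup from
	 * a destructor after rte_eal_cleanup() is what used to fault.
	 */
	RTE_ETH_FOREACH_DEV(portid) {
		rte_eth_dev_stop(portid);
		rte_eth_dev_close(portid);
	}
	rte_eal_cleanup();
}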
---
drivers/net/dpaa/dpaa_ethdev.c | 33 +++++++++++----------------------
drivers/net/dpaa/dpaa_flow.c | 5 ++---
2 files changed, 13 insertions(+), 25 deletions(-)
diff --git a/drivers/net/dpaa/dpaa_ethdev.c b/drivers/net/dpaa/dpaa_ethdev.c
index 133fbd5bc9..41ae033c75 100644
--- a/drivers/net/dpaa/dpaa_ethdev.c
+++ b/drivers/net/dpaa/dpaa_ethdev.c
@@ -561,10 +561,10 @@ static int dpaa_eth_dev_close(struct rte_eth_dev *dev)
if (dpaa_intf->cgr_rx) {
for (loop = 0; loop < dpaa_intf->nb_rx_queues; loop++)
qman_delete_cgr(&dpaa_intf->cgr_rx[loop]);
+ rte_free(dpaa_intf->cgr_rx);
+ dpaa_intf->cgr_rx = NULL;
}
- rte_free(dpaa_intf->cgr_rx);
- dpaa_intf->cgr_rx = NULL;
/* Release TX congestion Groups */
if (dpaa_intf->cgr_tx) {
for (loop = 0; loop < MAX_DPAA_CORES; loop++)
@@ -578,6 +578,15 @@ static int dpaa_eth_dev_close(struct rte_eth_dev *dev)
rte_free(dpaa_intf->tx_queues);
dpaa_intf->tx_queues = NULL;
+ if (dpaa_intf->port_handle) {
+ if (dpaa_fm_deconfig(dpaa_intf, fif))
+ DPAA_PMD_WARN("DPAA FM "
+ "deconfig failed\n");
+ }
+ if (fif->num_profiles) {
+ if (dpaa_port_vsp_cleanup(dpaa_intf, fif))
+ DPAA_PMD_WARN("DPAA FM vsp cleanup failed\n");
+ }
return ret;
}
@@ -2607,26 +2616,6 @@ static void __attribute__((destructor(102))) dpaa_finish(void)
return;
if (!(default_q || fmc_q)) {
- unsigned int i;
-
- for (i = 0; i < RTE_MAX_ETHPORTS; i++) {
- if (rte_eth_devices[i].dev_ops == &dpaa_devops) {
- struct rte_eth_dev *dev = &rte_eth_devices[i];
- struct dpaa_if *dpaa_intf =
- dev->data->dev_private;
- struct fman_if *fif =
- dev->process_private;
- if (dpaa_intf->port_handle)
- if (dpaa_fm_deconfig(dpaa_intf, fif))
- DPAA_PMD_WARN("DPAA FM "
- "deconfig failed");
- if (fif->num_profiles) {
- if (dpaa_port_vsp_cleanup(dpaa_intf,
- fif))
- DPAA_PMD_WARN("DPAA FM vsp cleanup failed");
- }
- }
- }
if (is_global_init)
if (dpaa_fm_term())
DPAA_PMD_WARN("DPAA FM term failed");
diff --git a/drivers/net/dpaa/dpaa_flow.c b/drivers/net/dpaa/dpaa_flow.c
index 62a2884172..2a22b23c8f 100644
--- a/drivers/net/dpaa/dpaa_flow.c
+++ b/drivers/net/dpaa/dpaa_flow.c
@@ -13,6 +13,7 @@
#include <rte_dpaa_logs.h>
#include <fmlib/fm_port_ext.h>
#include <fmlib/fm_vsp_ext.h>
+#include <rte_pmd_dpaa.h>
#define DPAA_MAX_NUM_ETH_DEV 8
@@ -795,8 +796,6 @@ int dpaa_fm_config(struct rte_eth_dev *dev, uint64_t req_dist_set)
return -1;
}
- dpaa_intf->nb_rx_queues = dev->data->nb_rx_queues;
-
/* Open FM Port and set it in port info */
ret = set_fm_port_handle(dpaa_intf, req_dist_set, fif);
if (ret) {
@@ -805,7 +804,7 @@ int dpaa_fm_config(struct rte_eth_dev *dev, uint64_t req_dist_set)
}
if (fif->num_profiles) {
- for (i = 0; i < dpaa_intf->nb_rx_queues; i++)
+ for (i = 0; i < dev->data->nb_rx_queues; i++)
dpaa_intf->rx_queues[i].vsp_id =
fm_default_vsp_id(fif);
--
2.25.1
^ permalink raw reply [flat|nested] 129+ messages in thread
* [PATCH v4 17/18] net/dpaa: improve dpaa errata A010022 handling
2024-09-30 12:15 ` [PATCH v4 00/18] NXP DPAA ETH driver enhancement and fixes Hemant Agrawal
` (15 preceding siblings ...)
2024-09-30 12:15 ` [PATCH v4 16/18] net/dpaa: improve the dpaa port cleanup Hemant Agrawal
@ 2024-09-30 12:15 ` Hemant Agrawal
2024-09-30 12:15 ` [PATCH v4 18/18] net/dpaa: fix reallocate_mbuf handling Hemant Agrawal
` (2 subsequent siblings)
19 siblings, 0 replies; 129+ messages in thread
From: Hemant Agrawal @ 2024-09-30 12:15 UTC (permalink / raw)
To: dev; +Cc: ferruh.yigit, Jun Yang
From: Jun Yang <jun.yang@nxp.com>
This patch improves the handling of errata
"RTE_LIBRTE_DPAA_ERRATA_LS1043_A010022": the alignment check now walks
the whole mbuf chain (data offset of every segment and length of every
non-final segment) instead of only checking the first mbuf's data offset.
Signed-off-by: Jun Yang <jun.yang@nxp.com>
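For reference, the errata condition as this patch now checks it, restated
as a self-contained helper (a sketch assuming the same 16-byte rule as
dpaa_eth_ls1043a_mbuf_realloc() in the diff below, without the SVR family
check):

#include <rte_mbuf.h>
#include <rte_common.h>

/* Returns non-zero when an mbuf chain would need reallocation on
 * LS1043A: every segment's data offset, and the length of every
 * non-final segment, must be a multiple of 16 bytes.
 */
static inline int
a010022_needs_realloc(const struct rte_mbuf *m)
{
	while (m) {
		if (!rte_is_aligned((void *)(uintptr_t)m->data_off, 16) ||
		    (m->next &&
		     !rte_is_aligned((void *)(uintptr_t)m->data_len, 16)))
			return 1;
		m = m->next;
	}
	return 0;
}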
---
drivers/net/dpaa/dpaa_rxtx.c | 40 ++++++++++++++++++++++++++++--------
1 file changed, 32 insertions(+), 8 deletions(-)
diff --git a/drivers/net/dpaa/dpaa_rxtx.c b/drivers/net/dpaa/dpaa_rxtx.c
index d82c6f3be2..1d7efdef88 100644
--- a/drivers/net/dpaa/dpaa_rxtx.c
+++ b/drivers/net/dpaa/dpaa_rxtx.c
@@ -1258,6 +1258,35 @@ reallocate_mbuf(struct qman_fq *txq, struct rte_mbuf *mbuf)
return new_mbufs[0];
}
+#ifdef RTE_LIBRTE_DPAA_ERRATA_LS1043_A010022
+/* In case the data offset is not multiple of 16,
+ * FMAN can stall because of an errata. So reallocate
+ * the buffer in such case.
+ */
+static inline int
+dpaa_eth_ls1043a_mbuf_realloc(struct rte_mbuf *mbuf)
+{
+ uint64_t len, offset;
+
+ if (dpaa_svr_family != SVR_LS1043A_FAMILY)
+ return 0;
+
+ while (mbuf) {
+ len = mbuf->data_len;
+ offset = mbuf->data_off;
+ if ((mbuf->next &&
+ !rte_is_aligned((void *)len, 16)) ||
+ !rte_is_aligned((void *)offset, 16)) {
+ DPAA_PMD_DEBUG("Errata condition hit");
+
+ return 1;
+ }
+ mbuf = mbuf->next;
+ }
+ return 0;
+}
+#endif
+
uint16_t
dpaa_eth_queue_tx(void *q, struct rte_mbuf **bufs, uint16_t nb_bufs)
{
@@ -1296,14 +1325,6 @@ dpaa_eth_queue_tx(void *q, struct rte_mbuf **bufs, uint16_t nb_bufs)
DPAA_TX_BURST_SIZE : nb_bufs;
for (loop = 0; loop < frames_to_send; loop++) {
mbuf = *(bufs++);
- /* In case the data offset is not multiple of 16,
- * FMAN can stall because of an errata. So reallocate
- * the buffer in such case.
- */
- if (dpaa_svr_family == SVR_LS1043A_FAMILY &&
- (mbuf->data_off & 0x7F) != 0x0)
- realloc_mbuf = 1;
-
fd_arr[loop].cmd = 0;
if (dpaa_ieee_1588) {
fd_arr[loop].cmd |= DPAA_FD_CMD_FCO |
@@ -1311,6 +1332,9 @@ dpaa_eth_queue_tx(void *q, struct rte_mbuf **bufs, uint16_t nb_bufs)
fd_arr[loop].cmd |= DPAA_FD_CMD_RPD |
DPAA_FD_CMD_UPD;
}
+#ifdef RTE_LIBRTE_DPAA_ERRATA_LS1043_A010022
+ realloc_mbuf = dpaa_eth_ls1043a_mbuf_realloc(mbuf);
+#endif
seqn = *dpaa_seqn(mbuf);
if (seqn != DPAA_INVALID_MBUF_SEQN) {
index = seqn - 1;
--
2.25.1
^ permalink raw reply [flat|nested] 129+ messages in thread
* [PATCH v4 18/18] net/dpaa: fix reallocate_mbuf handling
2024-09-30 12:15 ` [PATCH v4 00/18] NXP DPAA ETH driver enhancement and fixes Hemant Agrawal
` (16 preceding siblings ...)
2024-09-30 12:15 ` [PATCH v4 17/18] net/dpaa: improve dpaa errata A010022 handling Hemant Agrawal
@ 2024-09-30 12:15 ` Hemant Agrawal
2024-10-01 8:15 ` [PATCH v4 00/18] NXP DPAA ETH driver enhancement and fixes Ferruh Yigit
2024-10-01 11:03 ` [PATCH v5 " Hemant Agrawal
19 siblings, 0 replies; 129+ messages in thread
From: Hemant Agrawal @ 2024-09-30 12:15 UTC (permalink / raw)
To: dev; +Cc: ferruh.yigit, Vanshika Shukla, stable
From: Vanshika Shukla <vanshika.shukla@nxp.com>
This patch fixes a bug in the reallocate_mbuf handling: the copy
source is corrected to read from the segment currently being walked
(temp_mbuf) instead of the head mbuf when copying data into the new
mbuf.
Fixes: f8c7a17a48c9 ("net/dpaa: support Tx scatter gather for non-DPAA buffer")
Cc: stable@dpdk.org
Signed-off-by: Vanshika Shukla <vanshika.shukla@nxp.com>
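To make the corrected copy direction concrete, a minimal standalone sketch
(not the driver code) of flattening a segmented mbuf into one destination
buffer; the point is that the copy source must follow the segment being
walked, not stay on the head mbuf:

#include <rte_mbuf.h>
#include <rte_memcpy.h>

static int
copy_chain(const struct rte_mbuf *head, struct rte_mbuf *dst)
{
	const struct rte_mbuf *seg = head;
	char *data;

	while (seg) {
		data = rte_pktmbuf_append(dst, seg->data_len);
		if (data == NULL)
			return -1; /* not enough tailroom in dst */
		/* Source is the current segment 'seg'; using 'head'
		 * here would re-copy the first segment's bytes every
		 * time, which is the class of bug this patch fixes.
		 */
		rte_memcpy(data, rte_pktmbuf_mtod(seg, const void *),
			   seg->data_len);
		seg = seg->next;
	}
	return 0;
}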
---
drivers/net/dpaa/dpaa_rxtx.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/net/dpaa/dpaa_rxtx.c b/drivers/net/dpaa/dpaa_rxtx.c
index 1d7efdef88..247e7b92ba 100644
--- a/drivers/net/dpaa/dpaa_rxtx.c
+++ b/drivers/net/dpaa/dpaa_rxtx.c
@@ -1223,7 +1223,7 @@ reallocate_mbuf(struct qman_fq *txq, struct rte_mbuf *mbuf)
/* Copy the data */
data = rte_pktmbuf_append(new_mbufs[0], bytes_to_copy);
- rte_memcpy((uint8_t *)data, rte_pktmbuf_mtod_offset(mbuf,
+ rte_memcpy((uint8_t *)data, rte_pktmbuf_mtod_offset(temp_mbuf,
void *, offset1), bytes_to_copy);
/* Set new offsets and the temp buffers */
--
2.25.1
^ permalink raw reply [flat|nested] 129+ messages in thread
* Re: [PATCH v4 00/18] NXP DPAA ETH driver enhancement and fixes
2024-09-30 12:15 ` [PATCH v4 00/18] NXP DPAA ETH driver enhancement and fixes Hemant Agrawal
` (17 preceding siblings ...)
2024-09-30 12:15 ` [PATCH v4 18/18] net/dpaa: fix reallocate_mbuf handling Hemant Agrawal
@ 2024-10-01 8:15 ` Ferruh Yigit
2024-10-01 11:03 ` [PATCH v5 " Hemant Agrawal
19 siblings, 0 replies; 129+ messages in thread
From: Ferruh Yigit @ 2024-10-01 8:15 UTC (permalink / raw)
To: Hemant Agrawal, dev
On 9/30/2024 1:15 PM, Hemant Agrawal wrote:
> v4: fix clan compilation issues
> v3: addressed Ferruh's comments
> - dropped Tx rate limit API patch
> - added one small bug fix
> - fixed removal/add of fman_offline type
>
> v2: address review comments
> - improve commit message
> - add documentarion for new functions
> - make IEEE1588 config runtime
>
> This series adds several enhancement to the NXP DPAA Ethernet driver.
>
> Primarily:
> 1. timestamp and IEEE 1588 support
> 2. OH and ONIC based virtual port config in DPAA
> 3. frame display and debugging infra
>
>
> Gagandeep Singh (3):
> bus/dpaa: fix PFDRs leaks due to FQRNIs
> net/dpaa: support mempool debug
> net/dpaa: improve the dpaa port cleanup
>
> Hemant Agrawal (5):
> bus/dpaa: fix VSP for 1G fm1-mac9 and 10
> bus/dpaa: fix the fman details status
> bus/dpaa: add port buffer manager stats
> net/dpaa: implement detailed packet parsing
> net/dpaa: enhance DPAA frame display
>
> Jun Yang (2):
> net/dpaa: share MAC FMC scheme and CC parse
> net/dpaa: improve dpaa errata A010022 handling
>
> Rohit Raj (3):
> net/dpaa: fix typecasting ch ID to u32
> bus/dpaa: add OH port mode for dpaa eth
> bus/dpaa: add ONIC port mode for the DPAA eth
>
> Vanshika Shukla (5):
> net/dpaa: support Tx confirmation to enable PTP
> net/dpaa: add support to separate Tx conf queues
> net/dpaa: support Rx/Tx timestamp read
> net/dpaa: support IEEE 1588 PTP
> net/dpaa: fix reallocate_mbuf handling
>
Patch-by-patch build fails [1] on the following patch:
[PATCH v4 14/18] bus/dpaa: add OH port mode for dpaa eth
Also there are some valid checkpatch warnings, can you please fix them:
https://mails.dpdk.org/archives/test-report/2024-September/801309.html
WARNING:TYPO_SPELLING: 'assited' may be misspelled - perhaps 'assisted'?
#8:
This is an hardware assited IPC mechanism for communiting between two
^^^^^^^
[1]
../drivers/net/dpaa/dpaa_flow.c: In function ‘get_rx_port_type’:
../drivers/net/dpaa/dpaa_flow.c:648:30: error: ‘fman_offline_internal’
undeclared (first use in this function)
648 | if (fif->mac_type == fman_offline_internal)
| ^~~~~~~~~~~~~~~~~~~~~
../drivers/net/dpaa/dpaa_flow.c:648:30: note: each undeclared identifier
is reported only once for each function it appears in
../drivers/net/dpaa/dpaa_flow.c: In function ‘dpaa_port_vsp_configure’:
../drivers/net/dpaa/dpaa_flow.c:963:30: error: ‘fman_offline_internal’
undeclared (first use in this function)
963 | if (fif->mac_type == fman_offline_internal)
| ^~~~~~~~~~~~~~~~~~~~~
^ permalink raw reply [flat|nested] 129+ messages in thread
* [PATCH v5 00/18] NXP DPAA ETH driver enhancement and fixes
2024-09-30 12:15 ` [PATCH v4 00/18] NXP DPAA ETH driver enhancement and fixes Hemant Agrawal
` (18 preceding siblings ...)
2024-10-01 8:15 ` [PATCH v4 00/18] NXP DPAA ETH driver enhancement and fixes Ferruh Yigit
@ 2024-10-01 11:03 ` Hemant Agrawal
2024-10-01 11:03 ` [PATCH v5 01/18] bus/dpaa: fix PFDRs leaks due to FQRNIs Hemant Agrawal
` (19 more replies)
19 siblings, 20 replies; 129+ messages in thread
From: Hemant Agrawal @ 2024-10-01 11:03 UTC (permalink / raw)
To: dev; +Cc: ferruh.yigit
v5: fix individual patch compilation and checkpatch warnings
v4: fix clang compilation issues
v3: addressed Ferruh's comments
- dropped Tx rate limit API patch
- added one small bug fix
- fixed removal/add of fman_offline type
v2: address review comments
- improve commit message
- add documentation for new functions
- make IEEE1588 config runtime
This series adds several enhancements to the NXP DPAA Ethernet driver.
Primarily:
1. timestamp and IEEE 1588 support
2. OH and ONIC based virtual port config in DPAA
3. frame display and debugging infra
Gagandeep Singh (3):
bus/dpaa: fix PFDRs leaks due to FQRNIs
net/dpaa: support mempool debug
net/dpaa: improve the dpaa port cleanup
Hemant Agrawal (5):
bus/dpaa: fix VSP for 1G fm1-mac9 and 10
bus/dpaa: fix the fman details status
bus/dpaa: add port buffer manager stats
net/dpaa: implement detailed packet parsing
net/dpaa: enhance DPAA frame display
Jun Yang (2):
net/dpaa: share MAC FMC scheme and CC parse
net/dpaa: improve dpaa errata A010022 handling
Rohit Raj (3):
net/dpaa: fix typecasting ch ID to u32
bus/dpaa: add OH port mode for dpaa eth
bus/dpaa: add ONIC port mode for the DPAA eth
Vanshika Shukla (5):
net/dpaa: support Tx confirmation to enable PTP
net/dpaa: add support to separate Tx conf queues
net/dpaa: support Rx/Tx timestamp read
net/dpaa: support IEEE 1588 PTP
net/dpaa: fix reallocate_mbuf handling
doc/guides/nics/dpaa.rst | 64 ++-
doc/guides/nics/features/dpaa.ini | 2 +
drivers/bus/dpaa/base/fman/fman.c | 583 +++++++++++++++++++---
drivers/bus/dpaa/base/fman/fman_hw.c | 102 +++-
drivers/bus/dpaa/base/fman/netcfg_layer.c | 19 +-
drivers/bus/dpaa/base/qbman/qman.c | 46 +-
drivers/bus/dpaa/dpaa_bus.c | 37 +-
drivers/bus/dpaa/include/fman.h | 112 ++++-
drivers/bus/dpaa/include/fsl_fman.h | 12 +
drivers/bus/dpaa/include/fsl_qman.h | 4 +-
drivers/bus/dpaa/version.map | 4 +
drivers/net/dpaa/dpaa_ethdev.c | 428 +++++++++++++---
drivers/net/dpaa/dpaa_ethdev.h | 68 ++-
drivers/net/dpaa/dpaa_flow.c | 66 +--
drivers/net/dpaa/dpaa_fmc.c | 421 ++++++++++------
drivers/net/dpaa/dpaa_ptp.c | 118 +++++
drivers/net/dpaa/dpaa_rxtx.c | 378 ++++++++++++--
drivers/net/dpaa/dpaa_rxtx.h | 152 +++---
drivers/net/dpaa/meson.build | 1 +
19 files changed, 2105 insertions(+), 512 deletions(-)
create mode 100644 drivers/net/dpaa/dpaa_ptp.c
--
2.25.1
* [PATCH v5 01/18] bus/dpaa: fix PFDRs leaks due to FQRNIs
2024-10-01 11:03 ` [PATCH v5 " Hemant Agrawal
@ 2024-10-01 11:03 ` Hemant Agrawal
2024-10-01 11:03 ` [PATCH v5 02/18] net/dpaa: fix typecasting ch ID to u32 Hemant Agrawal
` (18 subsequent siblings)
19 siblings, 0 replies; 129+ messages in thread
From: Hemant Agrawal @ 2024-10-01 11:03 UTC (permalink / raw)
To: dev; +Cc: ferruh.yigit, Gagandeep Singh, stable
From: Gagandeep Singh <g.singh@nxp.com>
When a Retire FQ command is executed on an FQ in the
Tentatively Scheduled or Parked state, the FQ is retired
immediately and an FQRNI (Frame Queue Retirement
Notification Immediate) message is generated. Software
must read this message from the MR and consume it to free
the memory it uses.
The RM does not state which memory FQRNIs use, but
experiments show that they can consume PFDRs. If these
messages are allowed to build up indefinitely, PFDR
resources can become exhausted and enqueues can stall.
Therefore software must consume these MR messages regularly
to avoid depleting the available PFDR resources.
This is the PFDR leak that users can experience while
using the DPDK crypto driver and creating and destroying
sessions repeatedly. On a session destroy, DPDK calls
qman_retire_fq() for each FQ used by the session, but it
does not handle the FQRNIs generated and allows them to
build up indefinitely in the MR.
This patch fixes the issue by consuming the FQRNIs received
from the MR immediately after FQ retirement, by calling drain_mr_fqrni().
Please note that drain_mr_fqrni() only looks for
FQRNI-type messages to consume. If other types of messages,
such as FQRN, FQRL, FQPN, ERN etc., also arrive on the MR, those
messages need to be handled separately.
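As background on the MR consumption this relies on: the message ring is a
valid-bit ring, where the consumer advances its index while an entry's valid
bit matches the expected value and flips the expected bit on wrap-around.
Below is a minimal, standalone sketch of only that arithmetic, with a made-up
ring size and entry layout (it is not driver code; it only mirrors the
pi/vbit/fill handling of qm_mr_pvb_update() in the diff below):

#include <stdint.h>
#include <stdio.h>

#define RING_SIZE 8   /* hypothetical; the driver uses QM_MR_SIZE */
#define VBIT 0x80     /* hypothetical stand-in for QM_MR_VERB_VBIT */

struct mr_entry { uint8_t verb; };

int main(void)
{
	struct mr_entry ring[RING_SIZE] = {0};
	uint8_t pi = 0, vbit = VBIT, fill = 0;
	int i;

	/* Pretend hardware produced three entries with the current valid bit. */
	for (i = 0; i < 3; i++)
		ring[i].verb = VBIT;

	/* Consumer: advance pi while the valid bit matches, toggling the
	 * expected bit each time the index wraps back to zero.
	 */
	while ((ring[pi].verb & VBIT) == vbit) {
		pi = (pi + 1) & (RING_SIZE - 1);
		if (!pi)
			vbit ^= VBIT;
		fill++;
	}
	printf("consumed %u entries, next index %u\n",
	       (unsigned int)fill, (unsigned int)pi);
	return 0;
}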
Fixes: c47ff048b99a ("bus/dpaa: add QMAN driver core routines")
Cc: stable@dpdk.org
Signed-off-by: Gagandeep Singh <g.singh@nxp.com>
---
drivers/bus/dpaa/base/qbman/qman.c | 46 ++++++++++++++++--------------
1 file changed, 25 insertions(+), 21 deletions(-)
diff --git a/drivers/bus/dpaa/base/qbman/qman.c b/drivers/bus/dpaa/base/qbman/qman.c
index 301057723e..9c90ee25a6 100644
--- a/drivers/bus/dpaa/base/qbman/qman.c
+++ b/drivers/bus/dpaa/base/qbman/qman.c
@@ -292,10 +292,32 @@ static inline void qman_stop_dequeues_ex(struct qman_portal *p)
qm_dqrr_set_maxfill(&p->p, 0);
}
+static inline void qm_mr_pvb_update(struct qm_portal *portal)
+{
+ register struct qm_mr *mr = &portal->mr;
+ const struct qm_mr_entry *res = qm_cl(mr->ring, mr->pi);
+
+#ifdef RTE_LIBRTE_DPAA_HWDEBUG
+ DPAA_ASSERT(mr->pmode == qm_mr_pvb);
+#endif
+ /* when accessing 'verb', use __raw_readb() to ensure that compiler
+ * inlining doesn't try to optimise out "excess reads".
+ */
+ if ((__raw_readb(&res->ern.verb) & QM_MR_VERB_VBIT) == mr->vbit) {
+ mr->pi = (mr->pi + 1) & (QM_MR_SIZE - 1);
+ if (!mr->pi)
+ mr->vbit ^= QM_MR_VERB_VBIT;
+ mr->fill++;
+ res = MR_INC(res);
+ }
+ dcbit_ro(res);
+}
+
static int drain_mr_fqrni(struct qm_portal *p)
{
const struct qm_mr_entry *msg;
loop:
+ qm_mr_pvb_update(p);
msg = qm_mr_current(p);
if (!msg) {
/*
@@ -317,6 +339,7 @@ static int drain_mr_fqrni(struct qm_portal *p)
do {
now = mfatb();
} while ((then + 10000) > now);
+ qm_mr_pvb_update(p);
msg = qm_mr_current(p);
if (!msg)
return 0;
@@ -479,27 +502,6 @@ static inline int qm_mr_init(struct qm_portal *portal,
return 0;
}
-static inline void qm_mr_pvb_update(struct qm_portal *portal)
-{
- register struct qm_mr *mr = &portal->mr;
- const struct qm_mr_entry *res = qm_cl(mr->ring, mr->pi);
-
-#ifdef RTE_LIBRTE_DPAA_HWDEBUG
- DPAA_ASSERT(mr->pmode == qm_mr_pvb);
-#endif
- /* when accessing 'verb', use __raw_readb() to ensure that compiler
- * inlining doesn't try to optimise out "excess reads".
- */
- if ((__raw_readb(&res->ern.verb) & QM_MR_VERB_VBIT) == mr->vbit) {
- mr->pi = (mr->pi + 1) & (QM_MR_SIZE - 1);
- if (!mr->pi)
- mr->vbit ^= QM_MR_VERB_VBIT;
- mr->fill++;
- res = MR_INC(res);
- }
- dcbit_ro(res);
-}
-
struct qman_portal *
qman_init_portal(struct qman_portal *portal,
const struct qm_portal_config *c,
@@ -1794,6 +1796,8 @@ int qman_retire_fq(struct qman_fq *fq, u32 *flags)
}
out:
FQUNLOCK(fq);
+ /* Draining FQRNIs, if any */
+ drain_mr_fqrni(&p->p);
return rval;
}
--
2.25.1
* [PATCH v5 02/18] net/dpaa: fix typecasting ch ID to u32
2024-10-01 11:03 ` [PATCH v5 " Hemant Agrawal
2024-10-01 11:03 ` [PATCH v5 01/18] bus/dpaa: fix PFDRs leaks due to FQRNIs Hemant Agrawal
@ 2024-10-01 11:03 ` Hemant Agrawal
2024-10-01 11:03 ` [PATCH v5 03/18] bus/dpaa: fix VSP for 1G fm1-mac9 and 10 Hemant Agrawal
` (17 subsequent siblings)
19 siblings, 0 replies; 129+ messages in thread
From: Hemant Agrawal @ 2024-10-01 11:03 UTC (permalink / raw)
To: dev; +Cc: ferruh.yigit, Rohit Raj, hemant.agrawal, stable
From: Rohit Raj <rohit.raj@nxp.com>
Avoid typecasting ch_id to u32 and passing it to another API, since
that can corrupt adjacent data. Instead, create a new u32 variable and
typecast it back to u16 after it gets updated by the API.
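A minimal, self-contained sketch of the corruption being avoided (the
struct layout and the write_u32() helper below are hypothetical; in the
driver, qman_alloc_pool_range() is the call that writes a full u32 through
the pointer it receives):

#include <stdint.h>
#include <stdio.h>

/* Hypothetical layout: a u16 field followed by another u16 field. */
struct rxq_like {
	uint16_t ch_id;
	uint16_t neighbour; /* clobbered by the buggy cast below */
};

/* Stand-in for an API that writes 32 bits through its output pointer. */
static void write_u32(uint32_t *out) { *out = 0x12345678; }

int main(void)
{
	struct rxq_like rxq = { .ch_id = 0, .neighbour = 0xAAAA };
	uint32_t ch_id;

	/* Buggy pattern (undefined behaviour in general): four bytes are
	 * written into a two-byte field, so the neighbouring field is
	 * overwritten as well.
	 */
	write_u32((uint32_t *)(void *)&rxq.ch_id);
	printf("after buggy cast, neighbour = 0x%04x\n", rxq.neighbour);

	/* Fixed pattern: let the API fill a real u32, then narrow it. */
	rxq.neighbour = 0xAAAA;
	write_u32(&ch_id);
	rxq.ch_id = (uint16_t)ch_id;
	printf("after fix, neighbour = 0x%04x\n", rxq.neighbour);
	return 0;
}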
Fixes: 0c504f6950b6 ("net/dpaa: support push mode")
Cc: hemant.agrawal@nxp.com
Cc: stable@dpdk.org
Signed-off-by: Rohit Raj <rohit.raj@nxp.com>
---
drivers/net/dpaa/dpaa_ethdev.c | 6 ++++--
1 file changed, 4 insertions(+), 2 deletions(-)
diff --git a/drivers/net/dpaa/dpaa_ethdev.c b/drivers/net/dpaa/dpaa_ethdev.c
index 060b8c678f..1a2de5240f 100644
--- a/drivers/net/dpaa/dpaa_ethdev.c
+++ b/drivers/net/dpaa/dpaa_ethdev.c
@@ -972,7 +972,7 @@ int dpaa_eth_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
struct fman_if *fif = dev->process_private;
struct qman_fq *rxq = &dpaa_intf->rx_queues[queue_idx];
struct qm_mcc_initfq opts = {0};
- u32 flags = 0;
+ u32 ch_id, flags = 0;
int ret;
u32 buffsz = rte_pktmbuf_data_room_size(mp) - RTE_PKTMBUF_HEADROOM;
uint32_t max_rx_pktlen;
@@ -1096,7 +1096,9 @@ int dpaa_eth_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
DPAA_IF_RX_CONTEXT_STASH;
/*Create a channel and associate given queue with the channel*/
- qman_alloc_pool_range((u32 *)&rxq->ch_id, 1, 1, 0);
+ qman_alloc_pool_range(&ch_id, 1, 1, 0);
+ rxq->ch_id = (u16)ch_id;
+
opts.we_mask = opts.we_mask | QM_INITFQ_WE_DESTWQ;
opts.fqd.dest.channel = rxq->ch_id;
opts.fqd.dest.wq = DPAA_IF_RX_PRIORITY;
--
2.25.1
* [PATCH v5 03/18] bus/dpaa: fix VSP for 1G fm1-mac9 and 10
2024-10-01 11:03 ` [PATCH v5 " Hemant Agrawal
2024-10-01 11:03 ` [PATCH v5 01/18] bus/dpaa: fix PFDRs leaks due to FQRNIs Hemant Agrawal
2024-10-01 11:03 ` [PATCH v5 02/18] net/dpaa: fix typecasting ch ID to u32 Hemant Agrawal
@ 2024-10-01 11:03 ` Hemant Agrawal
2024-10-01 11:03 ` [PATCH v5 04/18] bus/dpaa: fix the fman details status Hemant Agrawal
` (16 subsequent siblings)
19 siblings, 0 replies; 129+ messages in thread
From: Hemant Agrawal @ 2024-10-01 11:03 UTC (permalink / raw)
To: dev; +Cc: ferruh.yigit, stable
There is no need to classify the interface separately for 1G and 10G.
Note that a VSP (Virtual Storage Profile) is the DPAA equivalent of an
SR-IOV configuration, used to logically divide a physical port into
virtual ports.
Fixes: e0718bb2ca95 ("bus/dpaa: add virtual storage profile port init")
Cc: stable@dpdk.org
Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
---
drivers/bus/dpaa/base/fman/fman.c | 29 +++++++++++++++++++++++++++--
1 file changed, 27 insertions(+), 2 deletions(-)
diff --git a/drivers/bus/dpaa/base/fman/fman.c b/drivers/bus/dpaa/base/fman/fman.c
index 41195eb0a7..beeb03dbf2 100644
--- a/drivers/bus/dpaa/base/fman/fman.c
+++ b/drivers/bus/dpaa/base/fman/fman.c
@@ -153,7 +153,7 @@ static void fman_if_vsp_init(struct __fman_if *__if)
size_t lenp;
const uint8_t mac_idx[] = {-1, 0, 1, 2, 3, 4, 5, 6, 7, 0, 1};
- if (__if->__if.mac_type == fman_mac_1g) {
+ if (__if->__if.mac_idx <= 8) {
for_each_compatible_node(dev, NULL,
"fsl,fman-port-1g-rx-extended-args") {
prop = of_get_property(dev, "cell-index", &lenp);
@@ -176,7 +176,32 @@ static void fman_if_vsp_init(struct __fman_if *__if)
}
}
}
- } else if (__if->__if.mac_type == fman_mac_10g) {
+
+ for_each_compatible_node(dev, NULL,
+ "fsl,fman-port-op-extended-args") {
+ prop = of_get_property(dev, "cell-index", &lenp);
+
+ if (prop) {
+ cell_index = of_read_number(&prop[0],
+ lenp / sizeof(phandle));
+
+ if (cell_index == __if->__if.mac_idx) {
+ prop = of_get_property(dev,
+ "vsp-window",
+ &lenp);
+
+ if (prop) {
+ __if->__if.num_profiles =
+ of_read_number(&prop[0],
+ 1);
+ __if->__if.base_profile_id =
+ of_read_number(&prop[1],
+ 1);
+ }
+ }
+ }
+ }
+ } else {
for_each_compatible_node(dev, NULL,
"fsl,fman-port-10g-rx-extended-args") {
prop = of_get_property(dev, "cell-index", &lenp);
--
2.25.1
* [PATCH v5 04/18] bus/dpaa: fix the fman details status
2024-10-01 11:03 ` [PATCH v5 " Hemant Agrawal
` (2 preceding siblings ...)
2024-10-01 11:03 ` [PATCH v5 03/18] bus/dpaa: fix VSP for 1G fm1-mac9 and 10 Hemant Agrawal
@ 2024-10-01 11:03 ` Hemant Agrawal
2024-10-01 11:03 ` [PATCH v5 05/18] bus/dpaa: add port buffer manager stats Hemant Agrawal
` (15 subsequent siblings)
19 siblings, 0 replies; 129+ messages in thread
From: Hemant Agrawal @ 2024-10-01 11:03 UTC (permalink / raw)
To: dev; +Cc: ferruh.yigit, stable
Fix the incorrect placement of brackets when calculating stats.
This corrects "(a | b) << 32" to "a | (b << 32)".
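A standalone illustration of the precedence issue (the two halves below are
made-up values; the actual registers are read with in_be32() as in the diff):

#include <stdint.h>
#include <stdio.h>

int main(void)
{
	/* Hypothetical low and high 32-bit halves of one 64-bit counter. */
	uint64_t a = 0x11111111ULL; /* low word */
	uint64_t b = 0x00000002ULL; /* high word */

	/* Buggy form: the OR happens first and the whole result is shifted,
	 * so the low word is lost and the value is wrong.
	 */
	uint64_t wrong = (a | b) << 32;

	/* Fixed form: only the high word is shifted before the OR. */
	uint64_t right = a | (b << 32);

	printf("wrong = 0x%016llx\n", (unsigned long long)wrong);
	printf("right = 0x%016llx\n", (unsigned long long)right);
	return 0;
}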
Fixes: e62a3f4183f1 ("bus/dpaa: fix statistics reading")
Cc: stable@dpdk.org
Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
---
drivers/bus/dpaa/base/fman/fman_hw.c | 9 +++++----
1 file changed, 5 insertions(+), 4 deletions(-)
diff --git a/drivers/bus/dpaa/base/fman/fman_hw.c b/drivers/bus/dpaa/base/fman/fman_hw.c
index 24a99f7235..97e792806f 100644
--- a/drivers/bus/dpaa/base/fman/fman_hw.c
+++ b/drivers/bus/dpaa/base/fman/fman_hw.c
@@ -243,10 +243,11 @@ fman_if_stats_get_all(struct fman_if *p, uint64_t *value, int n)
int i;
uint64_t base_offset = offsetof(struct memac_regs, reoct_l);
- for (i = 0; i < n; i++)
- value[i] = (((u64)in_be32((char *)regs + base_offset + 8 * i) |
- (u64)in_be32((char *)regs + base_offset +
- 8 * i + 4)) << 32);
+ for (i = 0; i < n; i++) {
+ uint64_t a = in_be32((char *)regs + base_offset + 8 * i);
+ uint64_t b = in_be32((char *)regs + base_offset + 8 * i + 4);
+ value[i] = a | b << 32;
+ }
}
void
--
2.25.1
* [PATCH v5 05/18] bus/dpaa: add port buffer manager stats
2024-10-01 11:03 ` [PATCH v5 " Hemant Agrawal
` (3 preceding siblings ...)
2024-10-01 11:03 ` [PATCH v5 04/18] bus/dpaa: fix the fman details status Hemant Agrawal
@ 2024-10-01 11:03 ` Hemant Agrawal
2024-10-01 11:03 ` [PATCH v5 06/18] net/dpaa: support Tx confirmation to enable PTP Hemant Agrawal
` (14 subsequent siblings)
19 siblings, 0 replies; 129+ messages in thread
From: Hemant Agrawal @ 2024-10-01 11:03 UTC (permalink / raw)
To: dev; +Cc: ferruh.yigit, Gagandeep Singh
Add BMI statistics and improve the existing extended statistics.
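As a usage sketch (hedged: port_id is assumed to be a started DPAA port;
the calls are the generic ethdev xstats API, through which the new BMI
counters show up alongside the existing extended statistics):

#include <stdio.h>
#include <stdlib.h>
#include <inttypes.h>
#include <rte_ethdev.h>

static void dump_xstats(uint16_t port_id)
{
	int i, cnt = rte_eth_xstats_get(port_id, NULL, 0); /* query count */
	struct rte_eth_xstat *vals;
	struct rte_eth_xstat_name *names;

	if (cnt <= 0)
		return;
	vals = malloc(cnt * sizeof(*vals));
	names = malloc(cnt * sizeof(*names));
	if (vals && names &&
	    rte_eth_xstats_get_names(port_id, names, cnt) == cnt &&
	    rte_eth_xstats_get(port_id, vals, cnt) == cnt) {
		for (i = 0; i < cnt; i++)
			printf("%s: %" PRIu64 "\n",
			       names[i].name, vals[i].value);
	}
	free(vals);
	free(names);
}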
Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Signed-off-by: Gagandeep Singh <g.singh@nxp.com>
---
drivers/bus/dpaa/base/fman/fman_hw.c | 61 ++++++++++++++++++++++++++++
drivers/bus/dpaa/include/fman.h | 4 +-
drivers/bus/dpaa/include/fsl_fman.h | 12 ++++++
drivers/bus/dpaa/version.map | 4 ++
drivers/net/dpaa/dpaa_ethdev.c | 46 ++++++++++++++++++---
drivers/net/dpaa/dpaa_ethdev.h | 12 ++++++
6 files changed, 132 insertions(+), 7 deletions(-)
diff --git a/drivers/bus/dpaa/base/fman/fman_hw.c b/drivers/bus/dpaa/base/fman/fman_hw.c
index 97e792806f..124c69edb4 100644
--- a/drivers/bus/dpaa/base/fman/fman_hw.c
+++ b/drivers/bus/dpaa/base/fman/fman_hw.c
@@ -267,6 +267,67 @@ fman_if_stats_reset(struct fman_if *p)
;
}
+void
+fman_if_bmi_stats_enable(struct fman_if *p)
+{
+ struct __fman_if *m = container_of(p, struct __fman_if, __if);
+ struct rx_bmi_regs *regs = (struct rx_bmi_regs *)m->bmi_map;
+ uint32_t tmp;
+
+ tmp = in_be32(&regs->fmbm_rstc);
+
+ tmp |= FMAN_BMI_COUNTERS_EN;
+
+ out_be32(&regs->fmbm_rstc, tmp);
+}
+
+void
+fman_if_bmi_stats_disable(struct fman_if *p)
+{
+ struct __fman_if *m = container_of(p, struct __fman_if, __if);
+ struct rx_bmi_regs *regs = (struct rx_bmi_regs *)m->bmi_map;
+ uint32_t tmp;
+
+ tmp = in_be32(&regs->fmbm_rstc);
+
+ tmp &= ~FMAN_BMI_COUNTERS_EN;
+
+ out_be32(&regs->fmbm_rstc, tmp);
+}
+
+void
+fman_if_bmi_stats_get_all(struct fman_if *p, uint64_t *value)
+{
+ struct __fman_if *m = container_of(p, struct __fman_if, __if);
+ struct rx_bmi_regs *regs = (struct rx_bmi_regs *)m->bmi_map;
+ int i = 0;
+
+ value[i++] = (u32)in_be32(&regs->fmbm_rfrc);
+ value[i++] = (u32)in_be32(&regs->fmbm_rfbc);
+ value[i++] = (u32)in_be32(&regs->fmbm_rlfc);
+ value[i++] = (u32)in_be32(&regs->fmbm_rffc);
+ value[i++] = (u32)in_be32(&regs->fmbm_rfdc);
+ value[i++] = (u32)in_be32(&regs->fmbm_rfldec);
+ value[i++] = (u32)in_be32(&regs->fmbm_rodc);
+ value[i++] = (u32)in_be32(&regs->fmbm_rbdc);
+}
+
+void
+fman_if_bmi_stats_reset(struct fman_if *p)
+{
+ struct __fman_if *m = container_of(p, struct __fman_if, __if);
+ struct rx_bmi_regs *regs = (struct rx_bmi_regs *)m->bmi_map;
+
+ out_be32(&regs->fmbm_rfrc, 0);
+ out_be32(&regs->fmbm_rfbc, 0);
+ out_be32(&regs->fmbm_rlfc, 0);
+ out_be32(&regs->fmbm_rffc, 0);
+ out_be32(&regs->fmbm_rfdc, 0);
+ out_be32(&regs->fmbm_rfldec, 0);
+ out_be32(&regs->fmbm_rodc, 0);
+ out_be32(&regs->fmbm_rbdc, 0);
+}
+
void
fman_if_promiscuous_enable(struct fman_if *p)
{
diff --git a/drivers/bus/dpaa/include/fman.h b/drivers/bus/dpaa/include/fman.h
index 3a6dd555a7..60681068ea 100644
--- a/drivers/bus/dpaa/include/fman.h
+++ b/drivers/bus/dpaa/include/fman.h
@@ -56,6 +56,8 @@
#define FMAN_PORT_BMI_FIFO_UNITS 0x100
#define FMAN_PORT_IC_OFFSET_UNITS 0x10
+#define FMAN_BMI_COUNTERS_EN 0x80000000
+
#define FMAN_ENABLE_BPOOL_DEPLETION 0xF00000F0
#define HASH_CTRL_MCAST_EN 0x00000100
@@ -260,7 +262,7 @@ struct rx_bmi_regs {
/**< Buffer Manager pool Information-*/
uint32_t fmbm_acnt[FMAN_PORT_MAX_EXT_POOLS_NUM];
/**< Allocate Counter-*/
- uint32_t reserved0130[8];
+ uint32_t reserved0120[16];
/**< 0x130/0x140 - 0x15F reserved -*/
uint32_t fmbm_rcgm[FMAN_PORT_CG_MAP_NUM];
/**< Congestion Group Map*/
diff --git a/drivers/bus/dpaa/include/fsl_fman.h b/drivers/bus/dpaa/include/fsl_fman.h
index 20690f8329..5a9750ad0c 100644
--- a/drivers/bus/dpaa/include/fsl_fman.h
+++ b/drivers/bus/dpaa/include/fsl_fman.h
@@ -60,6 +60,18 @@ void fman_if_stats_reset(struct fman_if *p);
__rte_internal
void fman_if_stats_get_all(struct fman_if *p, uint64_t *value, int n);
+__rte_internal
+void fman_if_bmi_stats_enable(struct fman_if *p);
+
+__rte_internal
+void fman_if_bmi_stats_disable(struct fman_if *p);
+
+__rte_internal
+void fman_if_bmi_stats_get_all(struct fman_if *p, uint64_t *value);
+
+__rte_internal
+void fman_if_bmi_stats_reset(struct fman_if *p);
+
/* Set ignore pause option for a specific interface */
void fman_if_set_rx_ignore_pause_frames(struct fman_if *p, bool enable);
diff --git a/drivers/bus/dpaa/version.map b/drivers/bus/dpaa/version.map
index 3f547f75cf..a17d57632e 100644
--- a/drivers/bus/dpaa/version.map
+++ b/drivers/bus/dpaa/version.map
@@ -24,6 +24,10 @@ INTERNAL {
fman_dealloc_bufs_mask_hi;
fman_dealloc_bufs_mask_lo;
fman_if_add_mac_addr;
+ fman_if_bmi_stats_enable;
+ fman_if_bmi_stats_disable;
+ fman_if_bmi_stats_get_all;
+ fman_if_bmi_stats_reset;
fman_if_clear_mac_addr;
fman_if_disable_rx;
fman_if_discard_rx_errors;
diff --git a/drivers/net/dpaa/dpaa_ethdev.c b/drivers/net/dpaa/dpaa_ethdev.c
index 1a2de5240f..90b34e42f2 100644
--- a/drivers/net/dpaa/dpaa_ethdev.c
+++ b/drivers/net/dpaa/dpaa_ethdev.c
@@ -131,6 +131,22 @@ static const struct rte_dpaa_xstats_name_off dpaa_xstats_strings[] = {
offsetof(struct dpaa_if_stats, tvlan)},
{"rx_undersized",
offsetof(struct dpaa_if_stats, tund)},
+ {"rx_frame_counter",
+ offsetof(struct dpaa_if_rx_bmi_stats, fmbm_rfrc)},
+ {"rx_bad_frames_count",
+ offsetof(struct dpaa_if_rx_bmi_stats, fmbm_rfbc)},
+ {"rx_large_frames_count",
+ offsetof(struct dpaa_if_rx_bmi_stats, fmbm_rlfc)},
+ {"rx_filter_frames_count",
+ offsetof(struct dpaa_if_rx_bmi_stats, fmbm_rffc)},
+ {"rx_frame_discrad_count",
+ offsetof(struct dpaa_if_rx_bmi_stats, fmbm_rfdc)},
+ {"rx_frame_list_dma_err_count",
+ offsetof(struct dpaa_if_rx_bmi_stats, fmbm_rfldec)},
+ {"rx_out_of_buffer_discard ",
+ offsetof(struct dpaa_if_rx_bmi_stats, fmbm_rodc)},
+ {"rx_buf_diallocate",
+ offsetof(struct dpaa_if_rx_bmi_stats, fmbm_rbdc)},
};
static struct rte_dpaa_driver rte_dpaa_pmd;
@@ -430,6 +446,7 @@ static void dpaa_interrupt_handler(void *param)
static int dpaa_eth_dev_start(struct rte_eth_dev *dev)
{
struct dpaa_if *dpaa_intf = dev->data->dev_private;
+ struct fman_if *fif = dev->process_private;
uint16_t i;
PMD_INIT_FUNC_TRACE();
@@ -443,7 +460,9 @@ static int dpaa_eth_dev_start(struct rte_eth_dev *dev)
else
dev->tx_pkt_burst = dpaa_eth_queue_tx;
- fman_if_enable_rx(dev->process_private);
+ fman_if_bmi_stats_enable(fif);
+ fman_if_bmi_stats_reset(fif);
+ fman_if_enable_rx(fif);
for (i = 0; i < dev->data->nb_rx_queues; i++)
dev->data->rx_queue_state[i] = RTE_ETH_QUEUE_STATE_STARTED;
@@ -461,8 +480,10 @@ static int dpaa_eth_dev_stop(struct rte_eth_dev *dev)
PMD_INIT_FUNC_TRACE();
dev->data->dev_started = 0;
- if (!fif->is_shared_mac)
+ if (!fif->is_shared_mac) {
+ fman_if_bmi_stats_disable(fif);
fman_if_disable_rx(fif);
+ }
dev->tx_pkt_burst = dpaa_eth_tx_drop_all;
for (i = 0; i < dev->data->nb_rx_queues; i++)
@@ -769,6 +790,7 @@ static int dpaa_eth_stats_reset(struct rte_eth_dev *dev)
PMD_INIT_FUNC_TRACE();
fman_if_stats_reset(dev->process_private);
+ fman_if_bmi_stats_reset(dev->process_private);
return 0;
}
@@ -777,8 +799,9 @@ static int
dpaa_dev_xstats_get(struct rte_eth_dev *dev, struct rte_eth_xstat *xstats,
unsigned int n)
{
- unsigned int i = 0, num = RTE_DIM(dpaa_xstats_strings);
+ unsigned int i = 0, j, num = RTE_DIM(dpaa_xstats_strings);
uint64_t values[sizeof(struct dpaa_if_stats) / 8];
+ unsigned int bmi_count = sizeof(struct dpaa_if_rx_bmi_stats) / 4;
if (n < num)
return num;
@@ -789,10 +812,16 @@ dpaa_dev_xstats_get(struct rte_eth_dev *dev, struct rte_eth_xstat *xstats,
fman_if_stats_get_all(dev->process_private, values,
sizeof(struct dpaa_if_stats) / 8);
- for (i = 0; i < num; i++) {
+ for (i = 0; i < num - (bmi_count - 1); i++) {
xstats[i].id = i;
xstats[i].value = values[dpaa_xstats_strings[i].offset / 8];
}
+ fman_if_bmi_stats_get_all(dev->process_private, values);
+ for (j = 0; i < num; i++, j++) {
+ xstats[i].id = i;
+ xstats[i].value = values[j];
+ }
+
return i;
}
@@ -819,8 +848,9 @@ static int
dpaa_xstats_get_by_id(struct rte_eth_dev *dev, const uint64_t *ids,
uint64_t *values, unsigned int n)
{
- unsigned int i, stat_cnt = RTE_DIM(dpaa_xstats_strings);
+ unsigned int i, j, stat_cnt = RTE_DIM(dpaa_xstats_strings);
uint64_t values_copy[sizeof(struct dpaa_if_stats) / 8];
+ unsigned int bmi_count = sizeof(struct dpaa_if_rx_bmi_stats) / 4;
if (!ids) {
if (n < stat_cnt)
@@ -832,10 +862,14 @@ dpaa_xstats_get_by_id(struct rte_eth_dev *dev, const uint64_t *ids,
fman_if_stats_get_all(dev->process_private, values_copy,
sizeof(struct dpaa_if_stats) / 8);
- for (i = 0; i < stat_cnt; i++)
+ for (i = 0; i < stat_cnt - (bmi_count - 1); i++)
values[i] =
values_copy[dpaa_xstats_strings[i].offset / 8];
+ fman_if_bmi_stats_get_all(dev->process_private, values);
+ for (j = 0; i < stat_cnt; i++, j++)
+ values[i] = values_copy[j];
+
return stat_cnt;
}
diff --git a/drivers/net/dpaa/dpaa_ethdev.h b/drivers/net/dpaa/dpaa_ethdev.h
index b6c61b8b6b..261a5a3ca7 100644
--- a/drivers/net/dpaa/dpaa_ethdev.h
+++ b/drivers/net/dpaa/dpaa_ethdev.h
@@ -212,6 +212,18 @@ dpaa_rx_cb_atomic(void *event,
const struct qm_dqrr_entry *dqrr,
void **bufs);
+struct dpaa_if_rx_bmi_stats {
+ uint32_t fmbm_rstc; /**< Rx Statistics Counters*/
+ uint32_t fmbm_rfrc; /**< Rx Frame Counter*/
+ uint32_t fmbm_rfbc; /**< Rx Bad Frames Counter*/
+ uint32_t fmbm_rlfc; /**< Rx Large Frames Counter*/
+ uint32_t fmbm_rffc; /**< Rx Filter Frames Counter*/
+ uint32_t fmbm_rfdc; /**< Rx Frame Discard Counter*/
+ uint32_t fmbm_rfldec; /**< Rx Frames List DMA Error Counter*/
+ uint32_t fmbm_rodc; /**< Rx Out of Buffers Discard nntr*/
+ uint32_t fmbm_rbdc; /**< Rx Buffers Deallocate Counter*/
+};
+
/* PMD related logs */
extern int dpaa_logtype_pmd;
#define RTE_LOGTYPE_DPAA_PMD dpaa_logtype_pmd
--
2.25.1
* [PATCH v5 06/18] net/dpaa: support Tx confirmation to enable PTP
2024-10-01 11:03 ` [PATCH v5 " Hemant Agrawal
` (4 preceding siblings ...)
2024-10-01 11:03 ` [PATCH v5 05/18] bus/dpaa: add port buffer manager stats Hemant Agrawal
@ 2024-10-01 11:03 ` Hemant Agrawal
2024-10-04 14:01 ` David Marchand
2024-10-01 11:03 ` [PATCH v5 07/18] net/dpaa: add support to separate Tx conf queues Hemant Agrawal
` (13 subsequent siblings)
19 siblings, 1 reply; 129+ messages in thread
From: Hemant Agrawal @ 2024-10-01 11:03 UTC (permalink / raw)
To: dev; +Cc: ferruh.yigit, Vanshika Shukla
From: Vanshika Shukla <vanshika.shukla@nxp.com>
Tx confirmation provides dedicated confirmation queues for
transmitted packets. Software uses these queues to get the
transmit status and to release the transmitted packet buffers.
This patch also changes the IEEE 1588 support into a devargs option.
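As a usage note (hedged: the port name fm1-mac3 is just the example used in
the documentation update below, and the allow-list form shown here is one
common way to pass bus device arguments), the new option would be supplied
roughly as:

	dpdk-testpmd -a dpaa:fm1-mac3,drv_ieee1588=1 -- -i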
Signed-off-by: Vanshika Shukla <vanshika.shukla@nxp.com>
Acked-by: Hemant Agrawal <hemant.agrawal@nxp.com>
---
doc/guides/nics/dpaa.rst | 3 +
drivers/net/dpaa/dpaa_ethdev.c | 124 ++++++++++++++++++++++++++-------
drivers/net/dpaa/dpaa_ethdev.h | 4 +-
drivers/net/dpaa/dpaa_rxtx.c | 49 +++++++++++++
drivers/net/dpaa/dpaa_rxtx.h | 2 +
5 files changed, 154 insertions(+), 28 deletions(-)
diff --git a/doc/guides/nics/dpaa.rst b/doc/guides/nics/dpaa.rst
index e8402dff52..acf4daab02 100644
--- a/doc/guides/nics/dpaa.rst
+++ b/doc/guides/nics/dpaa.rst
@@ -264,6 +264,9 @@ for details.
Done
testpmd>
+* Use dev arg option ``drv_ieee1588=1`` to enable ieee 1588 support at
+ driver level. e.g. ``dpaa:fm1-mac3,drv_ieee1588=1``
+
FMAN Config
-----------
diff --git a/drivers/net/dpaa/dpaa_ethdev.c b/drivers/net/dpaa/dpaa_ethdev.c
index 90b34e42f2..bba305cfb1 100644
--- a/drivers/net/dpaa/dpaa_ethdev.c
+++ b/drivers/net/dpaa/dpaa_ethdev.c
@@ -1,7 +1,7 @@
/* SPDX-License-Identifier: BSD-3-Clause
*
* Copyright 2016 Freescale Semiconductor, Inc. All rights reserved.
- * Copyright 2017-2020 NXP
+ * Copyright 2017-2020,2022-2024 NXP
*
*/
/* System headers */
@@ -30,6 +30,7 @@
#include <rte_eal.h>
#include <rte_alarm.h>
#include <rte_ether.h>
+#include <rte_kvargs.h>
#include <ethdev_driver.h>
#include <rte_malloc.h>
#include <rte_ring.h>
@@ -50,6 +51,7 @@
#include <process.h>
#include <fmlib/fm_ext.h>
+#define DRIVER_IEEE1588 "drv_ieee1588"
#define CHECK_INTERVAL 100 /* 100ms */
#define MAX_REPEAT_TIME 90 /* 9s (90 * 100ms) in total */
@@ -83,6 +85,7 @@ static uint64_t dev_tx_offloads_nodis =
static int is_global_init;
static int fmc_q = 1; /* Indicates the use of static fmc for distribution */
static int default_q; /* use default queue - FMC is not executed*/
+int dpaa_ieee_1588; /* use to indicate if IEEE 1588 is enabled for the driver */
/* At present we only allow up to 4 push mode queues as default - as each of
* this queue need dedicated portal and we are short of portals.
*/
@@ -1826,9 +1829,15 @@ static int dpaa_tx_queue_init(struct qman_fq *fq,
opts.fqd.dest.wq = DPAA_IF_TX_PRIORITY;
opts.fqd.fq_ctrl = QM_FQCTRL_PREFERINCACHE;
opts.fqd.context_b = 0;
- /* no tx-confirmation */
- opts.fqd.context_a.hi = 0x80000000 | fman_dealloc_bufs_mask_hi;
- opts.fqd.context_a.lo = 0 | fman_dealloc_bufs_mask_lo;
+ if (dpaa_ieee_1588) {
+ opts.fqd.context_a.lo = 0;
+ opts.fqd.context_a.hi = fman_dealloc_bufs_mask_hi;
+ } else {
+ /* no tx-confirmation */
+ opts.fqd.context_a.lo = fman_dealloc_bufs_mask_lo;
+ opts.fqd.context_a.hi = 0x80000000 | fman_dealloc_bufs_mask_hi;
+ }
+
if (fman_ip_rev >= FMAN_V3) {
/* Set B0V bit in contextA to set ASPID to 0 */
opts.fqd.context_a.hi |= 0x04000000;
@@ -1861,9 +1870,10 @@ static int dpaa_tx_queue_init(struct qman_fq *fq,
return ret;
}
-#ifdef RTE_LIBRTE_DPAA_DEBUG_DRIVER
-/* Initialise a DEBUG FQ ([rt]x_error, rx_default). */
-static int dpaa_debug_queue_init(struct qman_fq *fq, uint32_t fqid)
+/* Initialise a DEBUG FQ ([rt]x_error, rx_default) and DPAA TX CONFIRM queue
+ * to support PTP
+ */
+static int dpaa_def_queue_init(struct qman_fq *fq, uint32_t fqid)
{
struct qm_mcc_initfq opts = {0};
int ret;
@@ -1872,15 +1882,15 @@ static int dpaa_debug_queue_init(struct qman_fq *fq, uint32_t fqid)
ret = qman_reserve_fqid(fqid);
if (ret) {
- DPAA_PMD_ERR("Reserve debug fqid %d failed with ret: %d",
+ DPAA_PMD_ERR("Reserve fqid %d failed with ret: %d",
fqid, ret);
return -EINVAL;
}
/* "map" this Rx FQ to one of the interfaces Tx FQID */
- DPAA_PMD_DEBUG("Creating debug fq %p, fqid %d", fq, fqid);
+ DPAA_PMD_DEBUG("Creating fq %p, fqid %d", fq, fqid);
ret = qman_create_fq(fqid, QMAN_FQ_FLAG_NO_ENQUEUE, fq);
if (ret) {
- DPAA_PMD_ERR("create debug fqid %d failed with ret: %d",
+ DPAA_PMD_ERR("create fqid %d failed with ret: %d",
fqid, ret);
return ret;
}
@@ -1888,11 +1898,10 @@ static int dpaa_debug_queue_init(struct qman_fq *fq, uint32_t fqid)
opts.fqd.dest.wq = DPAA_IF_DEBUG_PRIORITY;
ret = qman_init_fq(fq, 0, &opts);
if (ret)
- DPAA_PMD_ERR("init debug fqid %d failed with ret: %d",
+ DPAA_PMD_ERR("init fqid %d failed with ret: %d",
fqid, ret);
return ret;
}
-#endif
/* Initialise a network interface */
static int
@@ -1927,6 +1936,43 @@ dpaa_dev_init_secondary(struct rte_eth_dev *eth_dev)
return 0;
}
+static int
+check_devargs_handler(__rte_unused const char *key, const char *value,
+ __rte_unused void *opaque)
+{
+ if (strcmp(value, "1"))
+ return -1;
+
+ return 0;
+}
+
+static int
+dpaa_get_devargs(struct rte_devargs *devargs, const char *key)
+{
+ struct rte_kvargs *kvlist;
+
+ if (!devargs)
+ return 0;
+
+ kvlist = rte_kvargs_parse(devargs->args, NULL);
+ if (!kvlist)
+ return 0;
+
+ if (!rte_kvargs_count(kvlist, key)) {
+ rte_kvargs_free(kvlist);
+ return 0;
+ }
+
+ if (rte_kvargs_process(kvlist, key,
+ check_devargs_handler, NULL) < 0) {
+ rte_kvargs_free(kvlist);
+ return 0;
+ }
+ rte_kvargs_free(kvlist);
+
+ return 1;
+}
+
/* Initialise a network interface */
static int
dpaa_dev_init(struct rte_eth_dev *eth_dev)
@@ -1944,6 +1990,7 @@ dpaa_dev_init(struct rte_eth_dev *eth_dev)
uint32_t dev_rx_fqids[DPAA_MAX_NUM_PCD_QUEUES];
int8_t dev_vspids[DPAA_MAX_NUM_PCD_QUEUES];
int8_t vsp_id = -1;
+ struct rte_device *dev = eth_dev->device;
PMD_INIT_FUNC_TRACE();
@@ -1960,6 +2007,9 @@ dpaa_dev_init(struct rte_eth_dev *eth_dev)
dpaa_intf->ifid = dev_id;
dpaa_intf->cfg = cfg;
+ if (dpaa_get_devargs(dev->devargs, DRIVER_IEEE1588))
+ dpaa_ieee_1588 = 1;
+
memset((char *)dev_rx_fqids, 0,
sizeof(uint32_t) * DPAA_MAX_NUM_PCD_QUEUES);
@@ -2079,6 +2129,14 @@ dpaa_dev_init(struct rte_eth_dev *eth_dev)
goto free_rx;
}
+ dpaa_intf->tx_conf_queues = rte_zmalloc(NULL, sizeof(struct qman_fq) *
+ MAX_DPAA_CORES, MAX_CACHELINE);
+ if (!dpaa_intf->tx_conf_queues) {
+ DPAA_PMD_ERR("Failed to alloc mem for TX conf queues\n");
+ ret = -ENOMEM;
+ goto free_rx;
+ }
+
/* If congestion control is enabled globally*/
if (td_tx_threshold) {
dpaa_intf->cgr_tx = rte_zmalloc(NULL,
@@ -2115,22 +2173,32 @@ dpaa_dev_init(struct rte_eth_dev *eth_dev)
}
dpaa_intf->nb_tx_queues = MAX_DPAA_CORES;
-#ifdef RTE_LIBRTE_DPAA_DEBUG_DRIVER
- ret = dpaa_debug_queue_init(&dpaa_intf->debug_queues
- [DPAA_DEBUG_FQ_RX_ERROR], fman_intf->fqid_rx_err);
- if (ret) {
- DPAA_PMD_ERR("DPAA RX ERROR queue init failed!");
- goto free_tx;
- }
- dpaa_intf->debug_queues[DPAA_DEBUG_FQ_RX_ERROR].dpaa_intf = dpaa_intf;
- ret = dpaa_debug_queue_init(&dpaa_intf->debug_queues
- [DPAA_DEBUG_FQ_TX_ERROR], fman_intf->fqid_tx_err);
- if (ret) {
- DPAA_PMD_ERR("DPAA TX ERROR queue init failed!");
- goto free_tx;
- }
- dpaa_intf->debug_queues[DPAA_DEBUG_FQ_TX_ERROR].dpaa_intf = dpaa_intf;
+#if !defined(RTE_LIBRTE_DPAA_DEBUG_DRIVER)
+ if (dpaa_ieee_1588)
#endif
+ {
+ ret = dpaa_def_queue_init(&dpaa_intf->debug_queues
+ [DPAA_DEBUG_FQ_RX_ERROR], fman_intf->fqid_rx_err);
+ if (ret) {
+ DPAA_PMD_ERR("DPAA RX ERROR queue init failed!");
+ goto free_tx;
+ }
+ dpaa_intf->debug_queues[DPAA_DEBUG_FQ_RX_ERROR].dpaa_intf = dpaa_intf;
+ ret = dpaa_def_queue_init(&dpaa_intf->debug_queues
+ [DPAA_DEBUG_FQ_TX_ERROR], fman_intf->fqid_tx_err);
+ if (ret) {
+ DPAA_PMD_ERR("DPAA TX ERROR queue init failed!");
+ goto free_tx;
+ }
+ dpaa_intf->debug_queues[DPAA_DEBUG_FQ_TX_ERROR].dpaa_intf = dpaa_intf;
+ ret = dpaa_def_queue_init(dpaa_intf->tx_conf_queues,
+ fman_intf->fqid_tx_confirm);
+ if (ret) {
+ DPAA_PMD_ERR("DPAA TX CONFIRM queue init failed!");
+ goto free_tx;
+ }
+ dpaa_intf->tx_conf_queues->dpaa_intf = dpaa_intf;
+ }
DPAA_PMD_DEBUG("All frame queues created");
@@ -2388,4 +2456,6 @@ static struct rte_dpaa_driver rte_dpaa_pmd = {
};
RTE_PMD_REGISTER_DPAA(net_dpaa, rte_dpaa_pmd);
+RTE_PMD_REGISTER_PARAM_STRING(net_dpaa,
+ DRIVER_IEEE1588 "=<int>");
RTE_LOG_REGISTER_DEFAULT(dpaa_logtype_pmd, NOTICE);
diff --git a/drivers/net/dpaa/dpaa_ethdev.h b/drivers/net/dpaa/dpaa_ethdev.h
index 261a5a3ca7..b427b29cb6 100644
--- a/drivers/net/dpaa/dpaa_ethdev.h
+++ b/drivers/net/dpaa/dpaa_ethdev.h
@@ -1,7 +1,7 @@
/* SPDX-License-Identifier: BSD-3-Clause
*
* Copyright (c) 2014-2016 Freescale Semiconductor, Inc. All rights reserved.
- * Copyright 2017-2019 NXP
+ * Copyright 2017-2024 NXP
*
*/
#ifndef __DPAA_ETHDEV_H__
@@ -112,6 +112,7 @@
#define FMC_FILE "/tmp/fmc.bin"
extern struct rte_mempool *dpaa_tx_sg_pool;
+extern int dpaa_ieee_1588;
/* structure to free external and indirect
* buffers.
@@ -131,6 +132,7 @@ struct dpaa_if {
struct qman_fq *rx_queues;
struct qman_cgr *cgr_rx;
struct qman_fq *tx_queues;
+ struct qman_fq *tx_conf_queues;
struct qman_cgr *cgr_tx;
struct qman_fq debug_queues[2];
uint16_t nb_rx_queues;
diff --git a/drivers/net/dpaa/dpaa_rxtx.c b/drivers/net/dpaa/dpaa_rxtx.c
index c2579d65ee..8593e20200 100644
--- a/drivers/net/dpaa/dpaa_rxtx.c
+++ b/drivers/net/dpaa/dpaa_rxtx.c
@@ -1082,6 +1082,9 @@ dpaa_eth_queue_tx(void *q, struct rte_mbuf **bufs, uint16_t nb_bufs)
uint32_t seqn, index, flags[DPAA_TX_BURST_SIZE] = {0};
struct dpaa_sw_buf_free buf_to_free[DPAA_MAX_SGS * DPAA_MAX_DEQUEUE_NUM_FRAMES];
uint32_t free_count = 0;
+ struct qman_fq *fq = q;
+ struct dpaa_if *dpaa_intf = fq->dpaa_intf;
+ struct qman_fq *fq_txconf = dpaa_intf->tx_conf_queues;
if (unlikely(!DPAA_PER_LCORE_PORTAL)) {
ret = rte_dpaa_portal_init((void *)0);
@@ -1162,6 +1165,10 @@ dpaa_eth_queue_tx(void *q, struct rte_mbuf **bufs, uint16_t nb_bufs)
mbuf = temp_mbuf;
realloc_mbuf = 0;
}
+
+ if (dpaa_ieee_1588)
+ fd_arr[loop].cmd |= DPAA_FD_CMD_FCO | qman_fq_fqid(fq_txconf);
+
indirect_buf:
state = tx_on_dpaa_pool(mbuf, bp_info,
&fd_arr[loop],
@@ -1190,6 +1197,9 @@ dpaa_eth_queue_tx(void *q, struct rte_mbuf **bufs, uint16_t nb_bufs)
sent += frames_to_send;
}
+ if (dpaa_ieee_1588)
+ dpaa_eth_tx_conf(fq_txconf);
+
DPAA_DP_LOG(DEBUG, "Transmitted %d buffers on queue: %p", sent, q);
for (loop = 0; loop < free_count; loop++) {
@@ -1200,6 +1210,45 @@ dpaa_eth_queue_tx(void *q, struct rte_mbuf **bufs, uint16_t nb_bufs)
return sent;
}
+void
+dpaa_eth_tx_conf(void *q)
+{
+ struct qman_fq *fq = q;
+ struct qm_dqrr_entry *dq;
+ int num_tx_conf, ret, dq_num;
+ uint32_t vdqcr_flags = 0;
+
+ if (unlikely(rte_dpaa_bpid_info == NULL &&
+ rte_eal_process_type() == RTE_PROC_SECONDARY))
+ rte_dpaa_bpid_info = fq->bp_array;
+
+ if (unlikely(!DPAA_PER_LCORE_PORTAL)) {
+ ret = rte_dpaa_portal_init((void *)0);
+ if (ret) {
+ DPAA_PMD_ERR("Failure in affining portal");
+ return;
+ }
+ }
+
+ num_tx_conf = DPAA_MAX_DEQUEUE_NUM_FRAMES - 2;
+
+ do {
+ dq_num = 0;
+ ret = qman_set_vdq(fq, num_tx_conf, vdqcr_flags);
+ if (ret)
+ return;
+ do {
+ dq = qman_dequeue(fq);
+ if (!dq)
+ continue;
+ dq_num++;
+ dpaa_display_frame_info(&dq->fd, fq->fqid, true);
+ qman_dqrr_consume(fq, dq);
+ dpaa_free_mbuf(&dq->fd);
+ } while (fq->flags & QMAN_FQ_STATE_VDQCR);
+ } while (dq_num == num_tx_conf);
+}
+
uint16_t
dpaa_eth_queue_tx_slow(void *q, struct rte_mbuf **bufs, uint16_t nb_bufs)
{
diff --git a/drivers/net/dpaa/dpaa_rxtx.h b/drivers/net/dpaa/dpaa_rxtx.h
index b2d7c0f2a3..042602e087 100644
--- a/drivers/net/dpaa/dpaa_rxtx.h
+++ b/drivers/net/dpaa/dpaa_rxtx.h
@@ -281,6 +281,8 @@ uint16_t dpaa_eth_queue_tx_slow(void *q, struct rte_mbuf **bufs,
uint16_t nb_bufs);
uint16_t dpaa_eth_queue_tx(void *q, struct rte_mbuf **bufs, uint16_t nb_bufs);
+void dpaa_eth_tx_conf(void *q);
+
uint16_t dpaa_eth_tx_drop_all(void *q __rte_unused,
struct rte_mbuf **bufs __rte_unused,
uint16_t nb_bufs __rte_unused);
--
2.25.1
* Re: [PATCH v5 06/18] net/dpaa: support Tx confirmation to enable PTP
2024-10-01 11:03 ` [PATCH v5 06/18] net/dpaa: support Tx confirmation to enable PTP Hemant Agrawal
@ 2024-10-04 14:01 ` David Marchand
0 siblings, 0 replies; 129+ messages in thread
From: David Marchand @ 2024-10-04 14:01 UTC (permalink / raw)
To: Hemant Agrawal; +Cc: dev, ferruh.yigit, Vanshika Shukla
On Tue, Oct 1, 2024 at 1:04 PM Hemant Agrawal <hemant.agrawal@nxp.com> wrote:
> diff --git a/drivers/net/dpaa/dpaa_ethdev.c b/drivers/net/dpaa/dpaa_ethdev.c
> index 90b34e42f2..bba305cfb1 100644
> --- a/drivers/net/dpaa/dpaa_ethdev.c
> +++ b/drivers/net/dpaa/dpaa_ethdev.c
[snip]
> @@ -2079,6 +2129,14 @@ dpaa_dev_init(struct rte_eth_dev *eth_dev)
> goto free_rx;
> }
>
> + dpaa_intf->tx_conf_queues = rte_zmalloc(NULL, sizeof(struct qman_fq) *
> + MAX_DPAA_CORES, MAX_CACHELINE);
> + if (!dpaa_intf->tx_conf_queues) {
> + DPAA_PMD_ERR("Failed to alloc mem for TX conf queues\n");
No need for \n.
Please send a fix against next-net.
> + ret = -ENOMEM;
> + goto free_rx;
> + }
> +
--
David Marchand
* [PATCH v5 07/18] net/dpaa: add support to separate Tx conf queues
2024-10-01 11:03 ` [PATCH v5 " Hemant Agrawal
` (5 preceding siblings ...)
2024-10-01 11:03 ` [PATCH v5 06/18] net/dpaa: support Tx confirmation to enable PTP Hemant Agrawal
@ 2024-10-01 11:03 ` Hemant Agrawal
2024-10-01 11:03 ` [PATCH v5 08/18] net/dpaa: share MAC FMC scheme and CC parse Hemant Agrawal
` (12 subsequent siblings)
19 siblings, 0 replies; 129+ messages in thread
From: Hemant Agrawal @ 2024-10-01 11:03 UTC (permalink / raw)
To: dev; +Cc: ferruh.yigit, Vanshika Shukla
From: Vanshika Shukla <vanshika.shukla@nxp.com>
This patch separates the Tx confirmation queues for the kernel
and DPDK so as to support the VSP case.
Signed-off-by: Vanshika Shukla <vanshika.shukla@nxp.com>
Acked-by: Hemant Agrawal <hemant.agrawal@nxp.com>
---
drivers/bus/dpaa/include/fsl_qman.h | 4 ++-
drivers/net/dpaa/dpaa_ethdev.c | 45 +++++++++++++++++++++--------
drivers/net/dpaa/dpaa_rxtx.c | 3 +-
3 files changed, 37 insertions(+), 15 deletions(-)
diff --git a/drivers/bus/dpaa/include/fsl_qman.h b/drivers/bus/dpaa/include/fsl_qman.h
index c0677976e8..db14dfb839 100644
--- a/drivers/bus/dpaa/include/fsl_qman.h
+++ b/drivers/bus/dpaa/include/fsl_qman.h
@@ -1,7 +1,7 @@
/* SPDX-License-Identifier: (BSD-3-Clause OR GPL-2.0)
*
* Copyright 2008-2012 Freescale Semiconductor, Inc.
- * Copyright 2019 NXP
+ * Copyright 2019-2022 NXP
*
*/
@@ -1237,6 +1237,8 @@ struct qman_fq {
/* DPDK Interface */
void *dpaa_intf;
+ /*to store tx_conf_queue corresponding to tx_queue*/
+ struct qman_fq *tx_conf_queue;
struct rte_event ev;
/* affined portal in case of static queue */
diff --git a/drivers/net/dpaa/dpaa_ethdev.c b/drivers/net/dpaa/dpaa_ethdev.c
index bba305cfb1..3ee3029729 100644
--- a/drivers/net/dpaa/dpaa_ethdev.c
+++ b/drivers/net/dpaa/dpaa_ethdev.c
@@ -1870,9 +1870,30 @@ static int dpaa_tx_queue_init(struct qman_fq *fq,
return ret;
}
-/* Initialise a DEBUG FQ ([rt]x_error, rx_default) and DPAA TX CONFIRM queue
- * to support PTP
- */
+static int
+dpaa_tx_conf_queue_init(struct qman_fq *fq)
+{
+ struct qm_mcc_initfq opts = {0};
+ int ret;
+
+ PMD_INIT_FUNC_TRACE();
+
+ ret = qman_create_fq(0, QMAN_FQ_FLAG_DYNAMIC_FQID, fq);
+ if (ret) {
+ DPAA_PMD_ERR("create Tx_conf failed with ret: %d", ret);
+ return ret;
+ }
+
+ opts.we_mask = QM_INITFQ_WE_DESTWQ | QM_INITFQ_WE_FQCTRL;
+ opts.fqd.dest.wq = DPAA_IF_DEBUG_PRIORITY;
+ ret = qman_init_fq(fq, 0, &opts);
+ if (ret)
+ DPAA_PMD_ERR("init Tx_conf fqid %d failed with ret: %d",
+ fq->fqid, ret);
+ return ret;
+}
+
+/* Initialise a DEBUG FQ ([rt]x_error, rx_default) */
static int dpaa_def_queue_init(struct qman_fq *fq, uint32_t fqid)
{
struct qm_mcc_initfq opts = {0};
@@ -2170,6 +2191,15 @@ dpaa_dev_init(struct rte_eth_dev *eth_dev)
if (ret)
goto free_tx;
dpaa_intf->tx_queues[loop].dpaa_intf = dpaa_intf;
+
+ if (dpaa_ieee_1588) {
+ ret = dpaa_tx_conf_queue_init(&dpaa_intf->tx_conf_queues[loop]);
+ if (ret)
+ goto free_tx;
+
+ dpaa_intf->tx_conf_queues[loop].dpaa_intf = dpaa_intf;
+ dpaa_intf->tx_queues[loop].tx_conf_queue = &dpaa_intf->tx_conf_queues[loop];
+ }
}
dpaa_intf->nb_tx_queues = MAX_DPAA_CORES;
@@ -2190,16 +2220,7 @@ dpaa_dev_init(struct rte_eth_dev *eth_dev)
DPAA_PMD_ERR("DPAA TX ERROR queue init failed!");
goto free_tx;
}
- dpaa_intf->debug_queues[DPAA_DEBUG_FQ_TX_ERROR].dpaa_intf = dpaa_intf;
- ret = dpaa_def_queue_init(dpaa_intf->tx_conf_queues,
- fman_intf->fqid_tx_confirm);
- if (ret) {
- DPAA_PMD_ERR("DPAA TX CONFIRM queue init failed!");
- goto free_tx;
- }
- dpaa_intf->tx_conf_queues->dpaa_intf = dpaa_intf;
}
-
DPAA_PMD_DEBUG("All frame queues created");
/* Get the initial configuration for flow control */
diff --git a/drivers/net/dpaa/dpaa_rxtx.c b/drivers/net/dpaa/dpaa_rxtx.c
index 8593e20200..3bd35c7a0e 100644
--- a/drivers/net/dpaa/dpaa_rxtx.c
+++ b/drivers/net/dpaa/dpaa_rxtx.c
@@ -1083,8 +1083,7 @@ dpaa_eth_queue_tx(void *q, struct rte_mbuf **bufs, uint16_t nb_bufs)
struct dpaa_sw_buf_free buf_to_free[DPAA_MAX_SGS * DPAA_MAX_DEQUEUE_NUM_FRAMES];
uint32_t free_count = 0;
struct qman_fq *fq = q;
- struct dpaa_if *dpaa_intf = fq->dpaa_intf;
- struct qman_fq *fq_txconf = dpaa_intf->tx_conf_queues;
+ struct qman_fq *fq_txconf = fq->tx_conf_queue;
if (unlikely(!DPAA_PER_LCORE_PORTAL)) {
ret = rte_dpaa_portal_init((void *)0);
--
2.25.1
* [PATCH v5 08/18] net/dpaa: share MAC FMC scheme and CC parse
2024-10-01 11:03 ` [PATCH v5 " Hemant Agrawal
` (6 preceding siblings ...)
2024-10-01 11:03 ` [PATCH v5 07/18] net/dpaa: add support to separate Tx conf queues Hemant Agrawal
@ 2024-10-01 11:03 ` Hemant Agrawal
2024-10-01 11:03 ` [PATCH v5 09/18] net/dpaa: support Rx/Tx timestamp read Hemant Agrawal
` (11 subsequent siblings)
19 siblings, 0 replies; 129+ messages in thread
From: Hemant Agrawal @ 2024-10-01 11:03 UTC (permalink / raw)
To: dev; +Cc: ferruh.yigit, Jun Yang
From: Jun Yang <jun.yang@nxp.com>
For shared MAC:
1) Allocate RXQs from the VSP (Virtual Storage Profile) scheme.
2) Allocate RXQs from coarse classification (CC) rules to the VSP.
3) Remove allocated RXQs that are reconfigured without a VSP.
4) Don't allocate the default queue and error queues.
Signed-off-by: Jun Yang <jun.yang@nxp.com>
Acked-by: Hemant Agrawal <hemant.agrawal@nxp.com>
---
drivers/bus/dpaa/include/fman.h | 1 +
drivers/net/dpaa/dpaa_ethdev.c | 60 +++--
drivers/net/dpaa/dpaa_ethdev.h | 13 +-
drivers/net/dpaa/dpaa_flow.c | 8 +-
drivers/net/dpaa/dpaa_fmc.c | 421 ++++++++++++++++++++------------
drivers/net/dpaa/dpaa_rxtx.c | 20 +-
6 files changed, 346 insertions(+), 177 deletions(-)
diff --git a/drivers/bus/dpaa/include/fman.h b/drivers/bus/dpaa/include/fman.h
index 60681068ea..6b2a1893f9 100644
--- a/drivers/bus/dpaa/include/fman.h
+++ b/drivers/bus/dpaa/include/fman.h
@@ -76,6 +76,7 @@ enum fman_mac_type {
fman_mac_1g,
fman_mac_10g,
fman_mac_2_5g,
+ fman_onic,
};
struct mac_addr {
diff --git a/drivers/net/dpaa/dpaa_ethdev.c b/drivers/net/dpaa/dpaa_ethdev.c
index 3ee3029729..bf14d73433 100644
--- a/drivers/net/dpaa/dpaa_ethdev.c
+++ b/drivers/net/dpaa/dpaa_ethdev.c
@@ -255,7 +255,6 @@ dpaa_eth_dev_configure(struct rte_eth_dev *dev)
DPAA_PMD_ERR("Cannot open IF socket");
return -errno;
}
-
strncpy(ifr.ifr_name, dpaa_intf->name, IFNAMSIZ - 1);
if (ioctl(socket_fd, SIOCGIFMTU, &ifr) < 0) {
@@ -1893,6 +1892,7 @@ dpaa_tx_conf_queue_init(struct qman_fq *fq)
return ret;
}
+#if defined(RTE_LIBRTE_DPAA_DEBUG_DRIVER)
/* Initialise a DEBUG FQ ([rt]x_error, rx_default) */
static int dpaa_def_queue_init(struct qman_fq *fq, uint32_t fqid)
{
@@ -1923,6 +1923,7 @@ static int dpaa_def_queue_init(struct qman_fq *fq, uint32_t fqid)
fqid, ret);
return ret;
}
+#endif
/* Initialise a network interface */
static int
@@ -1957,6 +1958,41 @@ dpaa_dev_init_secondary(struct rte_eth_dev *eth_dev)
return 0;
}
+#ifdef RTE_LIBRTE_DPAA_DEBUG_DRIVER
+static int
+dpaa_error_queue_init(struct dpaa_if *dpaa_intf,
+ struct fman_if *fman_intf)
+{
+ int i, ret;
+ struct qman_fq *err_queues = dpaa_intf->debug_queues;
+ uint32_t err_fqid = 0;
+
+ if (fman_intf->is_shared_mac) {
+ DPAA_PMD_DEBUG("Shared MAC's err queues are handled in kernel");
+ return 0;
+ }
+
+ for (i = 0; i < DPAA_DEBUG_FQ_MAX_NUM; i++) {
+ if (i == DPAA_DEBUG_FQ_RX_ERROR)
+ err_fqid = fman_intf->fqid_rx_err;
+ else if (i == DPAA_DEBUG_FQ_TX_ERROR)
+ err_fqid = fman_intf->fqid_tx_err;
+ else
+ continue;
+ ret = dpaa_def_queue_init(&err_queues[i], err_fqid);
+ if (ret) {
+ DPAA_PMD_ERR("DPAA %s ERROR queue init failed!",
+ i == DPAA_DEBUG_FQ_RX_ERROR ?
+ "RX" : "TX");
+ return ret;
+ }
+ err_queues[i].dpaa_intf = dpaa_intf;
+ }
+
+ return 0;
+}
+#endif
+
static int
check_devargs_handler(__rte_unused const char *key, const char *value,
__rte_unused void *opaque)
@@ -2202,25 +2238,11 @@ dpaa_dev_init(struct rte_eth_dev *eth_dev)
}
}
dpaa_intf->nb_tx_queues = MAX_DPAA_CORES;
-
-#if !defined(RTE_LIBRTE_DPAA_DEBUG_DRIVER)
- if (dpaa_ieee_1588)
+#if defined(RTE_LIBRTE_DPAA_DEBUG_DRIVER)
+ ret = dpaa_error_queue_init(dpaa_intf, fman_intf);
+ if (ret)
+ goto free_tx;
#endif
- {
- ret = dpaa_def_queue_init(&dpaa_intf->debug_queues
- [DPAA_DEBUG_FQ_RX_ERROR], fman_intf->fqid_rx_err);
- if (ret) {
- DPAA_PMD_ERR("DPAA RX ERROR queue init failed!");
- goto free_tx;
- }
- dpaa_intf->debug_queues[DPAA_DEBUG_FQ_RX_ERROR].dpaa_intf = dpaa_intf;
- ret = dpaa_def_queue_init(&dpaa_intf->debug_queues
- [DPAA_DEBUG_FQ_TX_ERROR], fman_intf->fqid_tx_err);
- if (ret) {
- DPAA_PMD_ERR("DPAA TX ERROR queue init failed!");
- goto free_tx;
- }
- }
DPAA_PMD_DEBUG("All frame queues created");
/* Get the initial configuration for flow control */
diff --git a/drivers/net/dpaa/dpaa_ethdev.h b/drivers/net/dpaa/dpaa_ethdev.h
index b427b29cb6..0a1ceb376a 100644
--- a/drivers/net/dpaa/dpaa_ethdev.h
+++ b/drivers/net/dpaa/dpaa_ethdev.h
@@ -78,8 +78,11 @@
#define DPAA_IF_RX_CONTEXT_STASH 0
/* Each "debug" FQ is represented by one of these */
-#define DPAA_DEBUG_FQ_RX_ERROR 0
-#define DPAA_DEBUG_FQ_TX_ERROR 1
+enum {
+ DPAA_DEBUG_FQ_RX_ERROR,
+ DPAA_DEBUG_FQ_TX_ERROR,
+ DPAA_DEBUG_FQ_MAX_NUM
+};
#define DPAA_RSS_OFFLOAD_ALL ( \
RTE_ETH_RSS_L2_PAYLOAD | \
@@ -107,6 +110,10 @@
#define DPAA_FD_CMD_CFQ 0x00ffffff
/**< Confirmation Frame Queue */
+#define DPAA_1G_MAC_START_IDX 1
+#define DPAA_10G_MAC_START_IDX 9
+#define DPAA_2_5G_MAC_START_IDX DPAA_10G_MAC_START_IDX
+
#define DPAA_DEFAULT_RXQ_VSP_ID 1
#define FMC_FILE "/tmp/fmc.bin"
@@ -134,7 +141,7 @@ struct dpaa_if {
struct qman_fq *tx_queues;
struct qman_fq *tx_conf_queues;
struct qman_cgr *cgr_tx;
- struct qman_fq debug_queues[2];
+ struct qman_fq debug_queues[DPAA_DEBUG_FQ_MAX_NUM];
uint16_t nb_rx_queues;
uint16_t nb_tx_queues;
uint32_t ifid;
diff --git a/drivers/net/dpaa/dpaa_flow.c b/drivers/net/dpaa/dpaa_flow.c
index 02aca78d05..082bd5d014 100644
--- a/drivers/net/dpaa/dpaa_flow.c
+++ b/drivers/net/dpaa/dpaa_flow.c
@@ -651,7 +651,13 @@ static inline int set_pcd_netenv_scheme(struct dpaa_if *dpaa_intf,
static inline int get_port_type(struct fman_if *fif)
{
- if (fif->mac_type == fman_mac_1g)
+ /* For 1G fm-mac9 and fm-mac10 ports, configure the VSP as 10G
+ * ports so that kernel can configure correct port.
+ */
+ if (fif->mac_type == fman_mac_1g &&
+ fif->mac_idx >= DPAA_10G_MAC_START_IDX)
+ return e_FM_PORT_TYPE_RX_10G;
+ else if (fif->mac_type == fman_mac_1g)
return e_FM_PORT_TYPE_RX;
else if (fif->mac_type == fman_mac_2_5g)
return e_FM_PORT_TYPE_RX_2_5G;
diff --git a/drivers/net/dpaa/dpaa_fmc.c b/drivers/net/dpaa/dpaa_fmc.c
index f8c9360311..d80ea1010a 100644
--- a/drivers/net/dpaa/dpaa_fmc.c
+++ b/drivers/net/dpaa/dpaa_fmc.c
@@ -1,5 +1,5 @@
/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright 2017-2021 NXP
+ * Copyright 2017-2023 NXP
*/
/* System headers */
@@ -204,139 +204,258 @@ struct fmc_model_t {
struct fmc_model_t *g_fmc_model;
-static int dpaa_port_fmc_port_parse(struct fman_if *fif,
- const struct fmc_model_t *fmc_model,
- int apply_idx)
+static int
+dpaa_port_fmc_port_parse(struct fman_if *fif,
+ const struct fmc_model_t *fmc_model,
+ int apply_idx)
{
int current_port = fmc_model->apply_order[apply_idx].index;
const fmc_port *pport = &fmc_model->port[current_port];
- const uint8_t mac_idx[] = {-1, 0, 1, 2, 3, 4, 5, 6, 7, 0, 1};
- const uint8_t mac_type[] = {1, 1, 1, 1, 1, 1, 1, 1, 1, 2, 2};
+ uint32_t num;
+
+ if (pport->type == e_FM_PORT_TYPE_OH_OFFLINE_PARSING &&
+ pport->number == fif->mac_idx &&
+ (fif->mac_type == fman_offline ||
+ fif->mac_type == fman_onic))
+ return current_port;
+
+ if (fif->mac_type == fman_mac_1g) {
+ if (pport->type != e_FM_PORT_TYPE_RX)
+ return -ENODEV;
+ num = pport->number + DPAA_1G_MAC_START_IDX;
+ if (fif->mac_idx == num)
+ return current_port;
+
+ return -ENODEV;
+ }
+
+ if (fif->mac_type == fman_mac_2_5g) {
+ if (pport->type != e_FM_PORT_TYPE_RX_2_5G)
+ return -ENODEV;
+ num = pport->number + DPAA_2_5G_MAC_START_IDX;
+ if (fif->mac_idx == num)
+ return current_port;
+
+ return -ENODEV;
+ }
+
+ if (fif->mac_type == fman_mac_10g) {
+ if (pport->type != e_FM_PORT_TYPE_RX_10G)
+ return -ENODEV;
+ num = pport->number + DPAA_10G_MAC_START_IDX;
+ if (fif->mac_idx == num)
+ return current_port;
+
+ return -ENODEV;
+ }
+
+ DPAA_PMD_ERR("Invalid MAC(mac_idx=%d) type(%d)",
+ fif->mac_idx, fif->mac_type);
+
+ return -EINVAL;
+}
+
+static int
+dpaa_fq_is_in_kernel(uint32_t fqid,
+ struct fman_if *fif)
+{
+ if (!fif->is_shared_mac)
+ return false;
+
+ if ((fqid == fif->fqid_rx_def ||
+ (fqid >= fif->fqid_rx_pcd &&
+ fqid < (fif->fqid_rx_pcd + fif->fqid_rx_pcd_count)) ||
+ fqid == fif->fqid_rx_err ||
+ fqid == fif->fqid_tx_err))
+ return true;
+
+ return false;
+}
+
+static int
+dpaa_vsp_id_is_in_kernel(uint8_t vsp_id,
+ struct fman_if *fif)
+{
+ if (!fif->is_shared_mac)
+ return false;
+
+ if (vsp_id == fif->base_profile_id)
+ return true;
+
+ return false;
+}
+
+static uint8_t
+dpaa_enqueue_vsp_id(struct fman_if *fif,
+ const struct ioc_fm_pcd_cc_next_enqueue_params_t *eq_param)
+{
+ if (eq_param->override_fqid)
+ return eq_param->new_relative_storage_profile_id;
+
+ return fif->base_profile_id;
+}
- if (mac_idx[fif->mac_idx] != pport->number ||
- mac_type[fif->mac_idx] != pport->type)
- return -1;
+static int
+dpaa_kg_storage_is_in_kernel(struct fman_if *fif,
+ const struct ioc_fm_pcd_kg_storage_profile_t *kg_storage)
+{
+ if (!fif->is_shared_mac)
+ return false;
+
+ if (!kg_storage->direct ||
+ (kg_storage->direct &&
+ kg_storage->profile_select.direct_relative_profile_id ==
+ fif->base_profile_id))
+ return true;
- return current_port;
+ return false;
}
-static int dpaa_port_fmc_scheme_parse(struct fman_if *fif,
- const struct fmc_model_t *fmc,
- int apply_idx,
- uint16_t *rxq_idx, int max_nb_rxq,
- uint32_t *fqids, int8_t *vspids)
+static void
+dpaa_fmc_remove_fq_from_allocated(uint32_t *fqids,
+ uint16_t *rxq_idx, uint32_t rm_fqid)
{
- int idx = fmc->apply_order[apply_idx].index;
uint32_t i;
- if (!fmc->scheme[idx].override_storage_profile &&
- fif->is_shared_mac) {
- DPAA_PMD_WARN("No VSP assigned to scheme %d for sharemac %d!",
- idx, fif->mac_idx);
- DPAA_PMD_WARN("Risk to receive pkts from skb pool to CRASH!");
+ for (i = 0; i < (*rxq_idx); i++) {
+ if (fqids[i] != rm_fqid)
+ continue;
+ DPAA_PMD_WARN("Remove fq(0x%08x) allocated.",
+ rm_fqid);
+ if ((*rxq_idx) > (i + 1)) {
+ memmove(&fqids[i], &fqids[i + 1],
+ ((*rxq_idx) - (i + 1)) * sizeof(uint32_t));
+ }
+ (*rxq_idx)--;
+ break;
}
+}
- if (e_IOC_FM_PCD_DONE ==
- fmc->scheme[idx].next_engine) {
- for (i = 0; i < fmc->scheme[idx]
- .key_ext_and_hash.hash_dist_num_of_fqids; i++) {
- uint32_t fqid = fmc->scheme[idx].base_fqid + i;
- int k, found = 0;
-
- if (fqid == fif->fqid_rx_def ||
- (fqid >= fif->fqid_rx_pcd &&
- fqid < (fif->fqid_rx_pcd +
- fif->fqid_rx_pcd_count))) {
- if (fif->is_shared_mac &&
- fmc->scheme[idx].override_storage_profile &&
- fmc->scheme[idx].storage_profile.direct &&
- fmc->scheme[idx].storage_profile
- .profile_select.direct_relative_profile_id !=
- fif->base_profile_id) {
- DPAA_PMD_ERR("Def RXQ must be associated with def VSP on sharemac!");
-
- return -1;
- }
- continue;
+static int
+dpaa_port_fmc_scheme_parse(struct fman_if *fif,
+ const struct fmc_model_t *fmc,
+ int apply_idx,
+ uint16_t *rxq_idx, int max_nb_rxq,
+ uint32_t *fqids, int8_t *vspids)
+{
+ int scheme_idx = fmc->apply_order[apply_idx].index;
+ int k, found = 0;
+ uint32_t i, num_rxq, fqid, rxq_idx_start = *rxq_idx;
+ const struct fm_pcd_kg_scheme_params_t *scheme;
+ const struct ioc_fm_pcd_kg_key_extract_and_hash_params_t *params;
+ const struct ioc_fm_pcd_kg_storage_profile_t *kg_storage;
+ uint8_t vsp_id;
+
+ scheme = &fmc->scheme[scheme_idx];
+ params = &scheme->key_ext_and_hash;
+ num_rxq = params->hash_dist_num_of_fqids;
+ kg_storage = &scheme->storage_profile;
+
+ if (scheme->override_storage_profile && kg_storage->direct)
+ vsp_id = kg_storage->profile_select.direct_relative_profile_id;
+ else
+ vsp_id = fif->base_profile_id;
+
+ if (dpaa_kg_storage_is_in_kernel(fif, kg_storage)) {
+ DPAA_PMD_WARN("Scheme[%d]'s VSP is in kernel",
+ scheme_idx);
+ /* The FQ may be allocated from previous CC or scheme,
+ * find and remove it.
+ */
+ for (i = 0; i < num_rxq; i++) {
+ fqid = scheme->base_fqid + i;
+ DPAA_PMD_WARN("Removed fqid(0x%08x) of Scheme[%d]",
+ fqid, scheme_idx);
+ dpaa_fmc_remove_fq_from_allocated(fqids,
+ rxq_idx, fqid);
+ if (!dpaa_fq_is_in_kernel(fqid, fif)) {
+ char reason_msg[128];
+ char result_msg[128];
+
+ sprintf(reason_msg,
+ "NOT handled in kernel");
+ sprintf(result_msg,
+ "will DRAIN kernel pool!");
+ DPAA_PMD_WARN("Traffic to FQ(%08x)(%s) %s",
+ fqid, reason_msg, result_msg);
}
+ }
- if (fif->is_shared_mac &&
- !fmc->scheme[idx].override_storage_profile) {
- DPAA_PMD_ERR("RXQ to DPDK must be associated with VSP on sharemac!");
- return -1;
- }
+ return 0;
+ }
- if (fif->is_shared_mac &&
- fmc->scheme[idx].override_storage_profile &&
- fmc->scheme[idx].storage_profile.direct &&
- fmc->scheme[idx].storage_profile
- .profile_select.direct_relative_profile_id ==
- fif->base_profile_id) {
- DPAA_PMD_ERR("RXQ can't be associated with default VSP on sharemac!");
+ if (e_IOC_FM_PCD_DONE != scheme->next_engine) {
+ /* Do nothing.*/
+ DPAA_PMD_DEBUG("Will parse scheme[%d]'s next engine(%d)",
+ scheme_idx, scheme->next_engine);
+ return 0;
+ }
- return -1;
- }
+ for (i = 0; i < num_rxq; i++) {
+ fqid = scheme->base_fqid + i;
+ found = 0;
- if ((*rxq_idx) >= max_nb_rxq) {
- DPAA_PMD_DEBUG("Too many queues in FMC policy"
- "%d overflow %d",
- (*rxq_idx), max_nb_rxq);
+ if (dpaa_fq_is_in_kernel(fqid, fif)) {
+ DPAA_PMD_WARN("FQ(0x%08x) is handled in kernel.",
+ fqid);
+ /* The FQ may be allocated from previous CC or scheme,
+ * remove it.
+ */
+ dpaa_fmc_remove_fq_from_allocated(fqids,
+ rxq_idx, fqid);
+ continue;
+ }
- continue;
- }
+ if ((*rxq_idx) >= max_nb_rxq) {
+ DPAA_PMD_WARN("Too many queues(%d) >= MAX number(%d)",
+ (*rxq_idx), max_nb_rxq);
- for (k = 0; k < (*rxq_idx); k++) {
- if (fqids[k] == fqid) {
- found = 1;
- break;
- }
- }
+ break;
+ }
- if (found)
- continue;
- fqids[(*rxq_idx)] = fqid;
- if (fmc->scheme[idx].override_storage_profile) {
- if (fmc->scheme[idx].storage_profile.direct) {
- vspids[(*rxq_idx)] =
- fmc->scheme[idx].storage_profile
- .profile_select
- .direct_relative_profile_id;
- } else {
- vspids[(*rxq_idx)] = -1;
- }
- } else {
- vspids[(*rxq_idx)] = -1;
+ for (k = 0; k < (*rxq_idx); k++) {
+ if (fqids[k] == fqid) {
+ found = 1;
+ break;
}
- (*rxq_idx)++;
}
+
+ if (found)
+ continue;
+ fqids[(*rxq_idx)] = fqid;
+ vspids[(*rxq_idx)] = vsp_id;
+
+ (*rxq_idx)++;
}
- return 0;
+ return (*rxq_idx) - rxq_idx_start;
}
-static int dpaa_port_fmc_ccnode_parse(struct fman_if *fif,
- const struct fmc_model_t *fmc_model,
- int apply_idx,
- uint16_t *rxq_idx, int max_nb_rxq,
- uint32_t *fqids, int8_t *vspids)
+static int
+dpaa_port_fmc_ccnode_parse(struct fman_if *fif,
+ const struct fmc_model_t *fmc,
+ int apply_idx,
+ uint16_t *rxq_idx, int max_nb_rxq,
+ uint32_t *fqids, int8_t *vspids)
{
uint16_t j, k, found = 0;
const struct ioc_keys_params_t *keys_params;
- uint32_t fqid, cc_idx = fmc_model->apply_order[apply_idx].index;
-
- keys_params = &fmc_model->ccnode[cc_idx].keys_params;
+ const struct ioc_fm_pcd_cc_next_engine_params_t *params;
+ uint32_t fqid, cc_idx = fmc->apply_order[apply_idx].index;
+ uint32_t rxq_idx_start = *rxq_idx;
+ uint8_t vsp_id;
- if ((*rxq_idx) >= max_nb_rxq) {
- DPAA_PMD_WARN("Too many queues in FMC policy %d overflow %d",
- (*rxq_idx), max_nb_rxq);
-
- return 0;
- }
+ keys_params = &fmc->ccnode[cc_idx].keys_params;
for (j = 0; j < keys_params->num_of_keys; ++j) {
+ if ((*rxq_idx) >= max_nb_rxq) {
+ DPAA_PMD_WARN("Too many queues(%d) >= MAX number(%d)",
+ (*rxq_idx), max_nb_rxq);
+
+ break;
+ }
found = 0;
- fqid = keys_params->key_params[j].cc_next_engine_params
- .params.enqueue_params.new_fqid;
+ params = &keys_params->key_params[j].cc_next_engine_params;
/* We read DPDK queue from last classification rule present in
* FMC policy file. Hence, this check is required here.
@@ -344,15 +463,30 @@ static int dpaa_port_fmc_ccnode_parse(struct fman_if *fif,
* have userspace queue so that it can be used by DPDK
* application.
*/
- if (keys_params->key_params[j].cc_next_engine_params
- .next_engine != e_IOC_FM_PCD_DONE) {
- DPAA_PMD_WARN("FMC CC next engine not support");
+ if (params->next_engine != e_IOC_FM_PCD_DONE) {
+ DPAA_PMD_WARN("CC next engine(%d) not support",
+ params->next_engine);
continue;
}
- if (keys_params->key_params[j].cc_next_engine_params
- .params.enqueue_params.action !=
+ if (params->params.enqueue_params.action !=
e_IOC_FM_PCD_ENQ_FRAME)
continue;
+
+ fqid = params->params.enqueue_params.new_fqid;
+ vsp_id = dpaa_enqueue_vsp_id(fif,
+ &params->params.enqueue_params);
+ if (dpaa_fq_is_in_kernel(fqid, fif) ||
+ dpaa_vsp_id_is_in_kernel(vsp_id, fif)) {
+ DPAA_PMD_DEBUG("FQ(0x%08x)/VSP(%d) is in kernel.",
+ fqid, vsp_id);
+ /* The FQ may be allocated from previous CC or scheme,
+ * remove it.
+ */
+ dpaa_fmc_remove_fq_from_allocated(fqids,
+ rxq_idx, fqid);
+ continue;
+ }
+
for (k = 0; k < (*rxq_idx); k++) {
if (fqids[k] == fqid) {
found = 1;
@@ -362,38 +496,22 @@ static int dpaa_port_fmc_ccnode_parse(struct fman_if *fif,
if (found)
continue;
- if ((*rxq_idx) >= max_nb_rxq) {
- DPAA_PMD_WARN("Too many queues in FMC policy %d overflow %d",
- (*rxq_idx), max_nb_rxq);
-
- return 0;
- }
-
fqids[(*rxq_idx)] = fqid;
- vspids[(*rxq_idx)] =
- keys_params->key_params[j].cc_next_engine_params
- .params.enqueue_params
- .new_relative_storage_profile_id;
-
- if (vspids[(*rxq_idx)] == fif->base_profile_id &&
- fif->is_shared_mac) {
- DPAA_PMD_ERR("VSP %d can NOT be used on DPDK.",
- vspids[(*rxq_idx)]);
- DPAA_PMD_ERR("It is associated to skb pool of shared interface.");
- return -1;
- }
+ vspids[(*rxq_idx)] = vsp_id;
+
(*rxq_idx)++;
}
- return 0;
+ return (*rxq_idx) - rxq_idx_start;
}
-int dpaa_port_fmc_init(struct fman_if *fif,
- uint32_t *fqids, int8_t *vspids, int max_nb_rxq)
+int
+dpaa_port_fmc_init(struct fman_if *fif,
+ uint32_t *fqids, int8_t *vspids, int max_nb_rxq)
{
int current_port = -1, ret;
uint16_t rxq_idx = 0;
- const struct fmc_model_t *fmc_model;
+ const struct fmc_model_t *fmc;
uint32_t i;
if (!g_fmc_model) {
@@ -402,14 +520,14 @@ int dpaa_port_fmc_init(struct fman_if *fif,
if (!fp) {
DPAA_PMD_ERR("%s not exists", FMC_FILE);
- return -1;
+ return -ENOENT;
}
g_fmc_model = rte_malloc(NULL, sizeof(struct fmc_model_t), 64);
if (!g_fmc_model) {
DPAA_PMD_ERR("FMC memory alloc failed");
fclose(fp);
- return -1;
+ return -ENOBUFS;
}
bytes_read = fread(g_fmc_model,
@@ -419,25 +537,28 @@ int dpaa_port_fmc_init(struct fman_if *fif,
fclose(fp);
rte_free(g_fmc_model);
g_fmc_model = NULL;
- return -1;
+ return -EIO;
}
fclose(fp);
}
- fmc_model = g_fmc_model;
+ fmc = g_fmc_model;
- if (fmc_model->format_version != FMC_OUTPUT_FORMAT_VER)
- return -1;
+ if (fmc->format_version != FMC_OUTPUT_FORMAT_VER) {
+ DPAA_PMD_ERR("FMC version(0x%08x) != Supported ver(0x%08x)",
+ fmc->format_version, FMC_OUTPUT_FORMAT_VER);
+ return -EINVAL;
+ }
- for (i = 0; i < fmc_model->apply_order_count; i++) {
- switch (fmc_model->apply_order[i].type) {
+ for (i = 0; i < fmc->apply_order_count; i++) {
+ switch (fmc->apply_order[i].type) {
case fmcengine_start:
break;
case fmcengine_end:
break;
case fmcport_start:
current_port = dpaa_port_fmc_port_parse(fif,
- fmc_model, i);
+ fmc, i);
break;
case fmcport_end:
break;
@@ -445,24 +566,24 @@ int dpaa_port_fmc_init(struct fman_if *fif,
if (current_port < 0)
break;
- ret = dpaa_port_fmc_scheme_parse(fif, fmc_model,
- i, &rxq_idx,
- max_nb_rxq,
- fqids, vspids);
- if (ret)
- return ret;
+ ret = dpaa_port_fmc_scheme_parse(fif, fmc,
+ i, &rxq_idx, max_nb_rxq, fqids, vspids);
+ DPAA_PMD_INFO("%s %d RXQ(s) from scheme[%d]",
+ ret >= 0 ? "Alloc" : "Remove",
+ ret >= 0 ? ret : -ret,
+ fmc->apply_order[i].index);
break;
case fmcccnode:
if (current_port < 0)
break;
- ret = dpaa_port_fmc_ccnode_parse(fif, fmc_model,
- i, &rxq_idx,
- max_nb_rxq, fqids,
- vspids);
- if (ret)
- return ret;
+ ret = dpaa_port_fmc_ccnode_parse(fif, fmc,
+ i, &rxq_idx, max_nb_rxq, fqids, vspids);
+ DPAA_PMD_INFO("%s %d RXQ(s) from cc[%d]",
+ ret >= 0 ? "Alloc" : "Remove",
+ ret >= 0 ? ret : -ret,
+ fmc->apply_order[i].index);
break;
case fmchtnode:
diff --git a/drivers/net/dpaa/dpaa_rxtx.c b/drivers/net/dpaa/dpaa_rxtx.c
index 3bd35c7a0e..d1338d1654 100644
--- a/drivers/net/dpaa/dpaa_rxtx.c
+++ b/drivers/net/dpaa/dpaa_rxtx.c
@@ -693,13 +693,26 @@ dpaa_rx_cb_atomic(void *event,
}
#ifdef RTE_LIBRTE_DPAA_DEBUG_DRIVER
-static inline void dpaa_eth_err_queue(struct dpaa_if *dpaa_intf)
+static inline void
+dpaa_eth_err_queue(struct qman_fq *fq)
{
struct rte_mbuf *mbuf;
struct qman_fq *debug_fq;
int ret, i;
struct qm_dqrr_entry *dq;
struct qm_fd *fd;
+ struct dpaa_if *dpaa_intf;
+
+ dpaa_intf = fq->dpaa_intf;
+ if (fq != &dpaa_intf->rx_queues[0]) {
+ /* Associate error queues to the first RXQ.*/
+ return;
+ }
+
+ if (dpaa_intf->cfg->fman_if->is_shared_mac) {
+ /* Error queues of shared MAC are handled in kernel. */
+ return;
+ }
if (unlikely(!RTE_PER_LCORE(dpaa_io))) {
ret = rte_dpaa_portal_init((void *)0);
@@ -708,7 +721,7 @@ static inline void dpaa_eth_err_queue(struct dpaa_if *dpaa_intf)
return;
}
}
- for (i = 0; i <= DPAA_DEBUG_FQ_TX_ERROR; i++) {
+ for (i = 0; i < DPAA_DEBUG_FQ_MAX_NUM; i++) {
debug_fq = &dpaa_intf->debug_queues[i];
ret = qman_set_vdq(debug_fq, 4, QM_VDQCR_EXACT);
if (ret)
@@ -751,8 +764,7 @@ uint16_t dpaa_eth_queue_rx(void *q,
rte_dpaa_bpid_info = fq->bp_array;
#ifdef RTE_LIBRTE_DPAA_DEBUG_DRIVER
- if (fq->fqid == ((struct dpaa_if *)fq->dpaa_intf)->rx_queues[0].fqid)
- dpaa_eth_err_queue((struct dpaa_if *)fq->dpaa_intf);
+ dpaa_eth_err_queue(fq);
#endif
if (likely(fq->is_static))
--
2.25.1
* [PATCH v5 09/18] net/dpaa: support Rx/Tx timestamp read
2024-10-01 11:03 ` [PATCH v5 " Hemant Agrawal
` (7 preceding siblings ...)
2024-10-01 11:03 ` [PATCH v5 08/18] net/dpaa: share MAC FMC scheme and CC parse Hemant Agrawal
@ 2024-10-01 11:03 ` Hemant Agrawal
2024-10-01 11:03 ` [PATCH v5 10/18] net/dpaa: support IEEE 1588 PTP Hemant Agrawal
` (10 subsequent siblings)
19 siblings, 0 replies; 129+ messages in thread
From: Hemant Agrawal @ 2024-10-01 11:03 UTC (permalink / raw)
To: dev; +Cc: ferruh.yigit, Vanshika Shukla
From: Vanshika Shukla <vanshika.shukla@nxp.com>
This patch implements the Rx/Tx timestamp read operations
for the DPAA1 platform.
Signed-off-by: Vanshika Shukla <vanshika.shukla@nxp.com>
---
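Illustration only (not part of this patch): with this support in place, an
application could read the last Rx/Tx timestamps through the generic ethdev
timesync API. The helper name, port_id and error handling below are
assumptions for the sketch:

	#include <stdio.h>
	#include <time.h>
	#include <rte_ethdev.h>

	/* Sketch: dump the last Rx/Tx PTP timestamps seen on a port. */
	static void
	show_last_timestamps(uint16_t port_id)
	{
		struct timespec rx_ts, tx_ts;

		/* Timestamp of the last received PTP frame. */
		if (rte_eth_timesync_read_rx_timestamp(port_id, &rx_ts, 0) == 0)
			printf("Rx ts: %ld.%09ld\n",
			       (long)rx_ts.tv_sec, rx_ts.tv_nsec);

		/* Timestamp of the last transmitted PTP frame; the PMD polls
		 * the Tx confirmation queue until it is available.
		 */
		if (rte_eth_timesync_read_tx_timestamp(port_id, &tx_ts) == 0)
			printf("Tx ts: %ld.%09ld\n",
			       (long)tx_ts.tv_sec, tx_ts.tv_nsec);
	}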
doc/guides/nics/features/dpaa.ini | 1 +
drivers/bus/dpaa/base/fman/fman.c | 21 +++++++-
drivers/bus/dpaa/base/fman/fman_hw.c | 6 ++-
drivers/bus/dpaa/include/fman.h | 18 ++++++-
drivers/net/dpaa/dpaa_ethdev.c | 2 +
drivers/net/dpaa/dpaa_ethdev.h | 17 +++++++
drivers/net/dpaa/dpaa_ptp.c | 42 ++++++++++++++++
drivers/net/dpaa/dpaa_rxtx.c | 71 ++++++++++++++++++++++++----
drivers/net/dpaa/dpaa_rxtx.h | 4 +-
drivers/net/dpaa/meson.build | 1 +
10 files changed, 168 insertions(+), 15 deletions(-)
create mode 100644 drivers/net/dpaa/dpaa_ptp.c
diff --git a/doc/guides/nics/features/dpaa.ini b/doc/guides/nics/features/dpaa.ini
index b136ed191a..4196dd800c 100644
--- a/doc/guides/nics/features/dpaa.ini
+++ b/doc/guides/nics/features/dpaa.ini
@@ -19,6 +19,7 @@ Flow control = Y
L3 checksum offload = Y
L4 checksum offload = Y
Packet type parsing = Y
+Timestamp offload = Y
Basic stats = Y
Extended stats = Y
FW version = Y
diff --git a/drivers/bus/dpaa/base/fman/fman.c b/drivers/bus/dpaa/base/fman/fman.c
index beeb03dbf2..e39bc8c252 100644
--- a/drivers/bus/dpaa/base/fman/fman.c
+++ b/drivers/bus/dpaa/base/fman/fman.c
@@ -1,7 +1,7 @@
/* SPDX-License-Identifier: (BSD-3-Clause OR GPL-2.0)
*
* Copyright 2010-2016 Freescale Semiconductor Inc.
- * Copyright 2017-2020 NXP
+ * Copyright 2017-2024 NXP
*
*/
@@ -520,6 +520,25 @@ fman_if_init(const struct device_node *dpa_node)
goto err;
}
+ regs_addr = of_get_address(tx_node, 0, &__if->regs_size, NULL);
+ if (!regs_addr) {
+ FMAN_ERR(-EINVAL, "of_get_address(%s)\n", mname);
+ goto err;
+ }
+ phys_addr = of_translate_address(tx_node, regs_addr);
+ if (!phys_addr) {
+ FMAN_ERR(-EINVAL, "of_translate_address(%s, %p)\n",
+ mname, regs_addr);
+ goto err;
+ }
+ __if->tx_bmi_map = mmap(NULL, __if->regs_size,
+ PROT_READ | PROT_WRITE, MAP_SHARED,
+ fman_ccsr_map_fd, phys_addr);
+ if (__if->tx_bmi_map == MAP_FAILED) {
+ FMAN_ERR(-errno, "mmap(0x%"PRIx64")\n", phys_addr);
+ goto err;
+ }
+
/* No channel ID for MAC-less */
assert(lenp == sizeof(*tx_channel_id));
na = of_n_addr_cells(mac_node);
diff --git a/drivers/bus/dpaa/base/fman/fman_hw.c b/drivers/bus/dpaa/base/fman/fman_hw.c
index 124c69edb4..4fc41c1ae9 100644
--- a/drivers/bus/dpaa/base/fman/fman_hw.c
+++ b/drivers/bus/dpaa/base/fman/fman_hw.c
@@ -1,6 +1,6 @@
/* SPDX-License-Identifier: BSD-3-Clause
*
- * Copyright 2017,2020 NXP
+ * Copyright 2017,2020,2022 NXP
*
*/
@@ -565,6 +565,10 @@ fman_if_set_ic_params(struct fman_if *fm_if,
&((struct rx_bmi_regs *)__if->bmi_map)->fmbm_ricp;
out_be32(fmbm_ricp, val);
+ unsigned int *fmbm_ticp =
+ &((struct tx_bmi_regs *)__if->tx_bmi_map)->fmbm_ticp;
+ out_be32(fmbm_ticp, val);
+
return 0;
}
diff --git a/drivers/bus/dpaa/include/fman.h b/drivers/bus/dpaa/include/fman.h
index 6b2a1893f9..09d1ddb897 100644
--- a/drivers/bus/dpaa/include/fman.h
+++ b/drivers/bus/dpaa/include/fman.h
@@ -2,7 +2,7 @@
*
* Copyright 2010-2012 Freescale Semiconductor, Inc.
* All rights reserved.
- * Copyright 2019-2021 NXP
+ * Copyright 2019-2022 NXP
*
*/
@@ -292,6 +292,21 @@ struct rx_bmi_regs {
uint32_t fmbm_rdbg; /**< Rx Debug-*/
};
+struct tx_bmi_regs {
+ uint32_t fmbm_tcfg; /**< Tx Configuration*/
+ uint32_t fmbm_tst; /**< Tx Status*/
+ uint32_t fmbm_tda; /**< Tx DMA attributes*/
+ uint32_t fmbm_tfp; /**< Tx FIFO Parameters*/
+ uint32_t fmbm_tfed; /**< Tx Frame End Data*/
+ uint32_t fmbm_ticp; /**< Tx Internal Context Parameters*/
+ uint32_t fmbm_tfdne; /**< Tx Frame Dequeue Next Engine*/
+ uint32_t fmbm_tfca; /**< Tx Frame Attributes*/
+ uint32_t fmbm_tcfqid; /**< Tx Confirmation Frame Queue ID*/
+ uint32_t fmbm_tefqid; /**< Tx Error Frame Queue ID*/
+ uint32_t fmbm_tfene; /**< Tx Frame Enqueue Next Engine*/
+ uint32_t fmbm_trlmts; /**< Tx Rate Limiter Scale*/
+ uint32_t fmbm_trlmt; /**< Tx Rate Limiter*/
+};
struct fman_port_qmi_regs {
uint32_t fmqm_pnc; /**< PortID n Configuration Register */
uint32_t fmqm_pns; /**< PortID n Status Register */
@@ -380,6 +395,7 @@ struct __fman_if {
uint64_t regs_size;
void *ccsr_map;
void *bmi_map;
+ void *tx_bmi_map;
void *qmi_map;
struct list_head node;
};
diff --git a/drivers/net/dpaa/dpaa_ethdev.c b/drivers/net/dpaa/dpaa_ethdev.c
index bf14d73433..682cb1c77e 100644
--- a/drivers/net/dpaa/dpaa_ethdev.c
+++ b/drivers/net/dpaa/dpaa_ethdev.c
@@ -1673,6 +1673,8 @@ static struct eth_dev_ops dpaa_devops = {
.rx_queue_intr_disable = dpaa_dev_queue_intr_disable,
.rss_hash_update = dpaa_dev_rss_hash_update,
.rss_hash_conf_get = dpaa_dev_rss_hash_conf_get,
+ .timesync_read_rx_timestamp = dpaa_timesync_read_rx_timestamp,
+ .timesync_read_tx_timestamp = dpaa_timesync_read_tx_timestamp,
};
static bool
diff --git a/drivers/net/dpaa/dpaa_ethdev.h b/drivers/net/dpaa/dpaa_ethdev.h
index 0a1ceb376a..bbdb0936c0 100644
--- a/drivers/net/dpaa/dpaa_ethdev.h
+++ b/drivers/net/dpaa/dpaa_ethdev.h
@@ -151,6 +151,14 @@ struct dpaa_if {
void *netenv_handle;
void *scheme_handle[2];
uint32_t scheme_count;
+ /*stores timestamp of last received packet on dev*/
+ uint64_t rx_timestamp;
+ /*stores timestamp of last received tx confirmation packet on dev*/
+ uint64_t tx_timestamp;
+ /* stores pointer to next tx_conf queue that should be processed,
+ * it corresponds to last packet transmitted
+ */
+ struct qman_fq *next_tx_conf_queue;
void *vsp_handle[DPAA_VSP_PROFILE_MAX_NUM];
uint32_t vsp_bpid[DPAA_VSP_PROFILE_MAX_NUM];
@@ -233,6 +241,15 @@ struct dpaa_if_rx_bmi_stats {
uint32_t fmbm_rbdc; /**< Rx Buffers Deallocate Counter*/
};
+int
+dpaa_timesync_read_tx_timestamp(struct rte_eth_dev *dev,
+ struct timespec *timestamp);
+
+int
+dpaa_timesync_read_rx_timestamp(struct rte_eth_dev *dev,
+ struct timespec *timestamp,
+ uint32_t flags __rte_unused);
+
/* PMD related logs */
extern int dpaa_logtype_pmd;
#define RTE_LOGTYPE_DPAA_PMD dpaa_logtype_pmd
diff --git a/drivers/net/dpaa/dpaa_ptp.c b/drivers/net/dpaa/dpaa_ptp.c
new file mode 100644
index 0000000000..2ecdda6db0
--- /dev/null
+++ b/drivers/net/dpaa/dpaa_ptp.c
@@ -0,0 +1,42 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright 2022-2024 NXP
+ */
+
+/* System headers */
+#include <stdio.h>
+#include <inttypes.h>
+#include <unistd.h>
+
+#include <rte_ethdev.h>
+#include <rte_log.h>
+#include <rte_eth_ctrl.h>
+#include <rte_malloc.h>
+#include <rte_time.h>
+
+#include <dpaa_ethdev.h>
+#include <dpaa_rxtx.h>
+
+int dpaa_timesync_read_tx_timestamp(struct rte_eth_dev *dev,
+ struct timespec *timestamp)
+{
+ struct dpaa_if *dpaa_intf = dev->data->dev_private;
+
+ if (dpaa_intf->next_tx_conf_queue) {
+ while (!dpaa_intf->tx_timestamp)
+ dpaa_eth_tx_conf(dpaa_intf->next_tx_conf_queue);
+ } else {
+ return -1;
+ }
+ *timestamp = rte_ns_to_timespec(dpaa_intf->tx_timestamp);
+
+ return 0;
+}
+
+int dpaa_timesync_read_rx_timestamp(struct rte_eth_dev *dev,
+ struct timespec *timestamp,
+ uint32_t flags __rte_unused)
+{
+ struct dpaa_if *dpaa_intf = dev->data->dev_private;
+ *timestamp = rte_ns_to_timespec(dpaa_intf->rx_timestamp);
+ return 0;
+}
diff --git a/drivers/net/dpaa/dpaa_rxtx.c b/drivers/net/dpaa/dpaa_rxtx.c
index d1338d1654..e3b4bb14ab 100644
--- a/drivers/net/dpaa/dpaa_rxtx.c
+++ b/drivers/net/dpaa/dpaa_rxtx.c
@@ -1,7 +1,7 @@
/* SPDX-License-Identifier: BSD-3-Clause
*
* Copyright 2016 Freescale Semiconductor, Inc. All rights reserved.
- * Copyright 2017,2019-2021 NXP
+ * Copyright 2017,2019-2024 NXP
*
*/
@@ -49,7 +49,6 @@
#define DPAA_MBUF_TO_CONTIG_FD(_mbuf, _fd, _bpid) \
do { \
- (_fd)->cmd = 0; \
(_fd)->opaque_addr = 0; \
(_fd)->opaque = QM_FD_CONTIG << DPAA_FD_FORMAT_SHIFT; \
(_fd)->opaque |= ((_mbuf)->data_off) << DPAA_FD_OFFSET_SHIFT; \
@@ -122,6 +121,8 @@ static inline void dpaa_eth_packet_info(struct rte_mbuf *m, void *fd_virt_addr)
{
struct annotations_t *annot = GET_ANNOTATIONS(fd_virt_addr);
uint64_t prs = *((uintptr_t *)(&annot->parse)) & DPAA_PARSE_MASK;
+ struct rte_ether_hdr *eth_hdr =
+ rte_pktmbuf_mtod(m, struct rte_ether_hdr *);
DPAA_DP_LOG(DEBUG, " Parsing mbuf: %p with annotations: %p", m, annot);
@@ -241,6 +242,11 @@ static inline void dpaa_eth_packet_info(struct rte_mbuf *m, void *fd_virt_addr)
if (prs & DPAA_PARSE_VLAN_MASK)
m->ol_flags |= RTE_MBUF_F_RX_VLAN;
/* Packet received without stripping the vlan */
+
+ if (eth_hdr->ether_type == htons(RTE_ETHER_TYPE_1588)) {
+ m->ol_flags |= RTE_MBUF_F_RX_IEEE1588_PTP;
+ m->ol_flags |= RTE_MBUF_F_RX_IEEE1588_TMST;
+ }
}
static inline void dpaa_checksum(struct rte_mbuf *mbuf)
@@ -317,7 +323,7 @@ static inline void dpaa_checksum_offload(struct rte_mbuf *mbuf,
prs->ip_off[0] = mbuf->l2_len;
prs->l4_off = mbuf->l3_len + mbuf->l2_len;
/* Enable L3 (and L4, if TCP or UDP) HW checksum*/
- fd->cmd = DPAA_FD_CMD_RPD | DPAA_FD_CMD_DTC;
+ fd->cmd |= DPAA_FD_CMD_RPD | DPAA_FD_CMD_DTC;
}
static inline void
@@ -513,6 +519,7 @@ dpaa_rx_cb_no_prefetch(struct qman_fq **fq, struct qm_dqrr_entry **dqrr,
uint16_t offset, i;
uint32_t length;
uint8_t format;
+ struct annotations_t *annot;
bp_info = DPAA_BPID_TO_POOL_INFO(dqrr[0]->fd.bpid);
ptr = rte_dpaa_mem_ptov(qm_fd_addr(&dqrr[0]->fd));
@@ -554,6 +561,11 @@ dpaa_rx_cb_no_prefetch(struct qman_fq **fq, struct qm_dqrr_entry **dqrr,
rte_mbuf_refcnt_set(mbuf, 1);
dpaa_eth_packet_info(mbuf, mbuf->buf_addr);
dpaa_display_frame_info(fd, fq[0]->fqid, true);
+ if (dpaa_ieee_1588) {
+ annot = GET_ANNOTATIONS(mbuf->buf_addr);
+ dpaa_intf->rx_timestamp =
+ rte_cpu_to_be_64(annot->timestamp);
+ }
}
}
@@ -567,6 +579,7 @@ dpaa_rx_cb(struct qman_fq **fq, struct qm_dqrr_entry **dqrr,
uint16_t offset, i;
uint32_t length;
uint8_t format;
+ struct annotations_t *annot;
for (i = 0; i < num_bufs; i++) {
fd = &dqrr[i]->fd;
@@ -594,6 +607,11 @@ dpaa_rx_cb(struct qman_fq **fq, struct qm_dqrr_entry **dqrr,
rte_mbuf_refcnt_set(mbuf, 1);
dpaa_eth_packet_info(mbuf, mbuf->buf_addr);
dpaa_display_frame_info(fd, fq[0]->fqid, true);
+ if (dpaa_ieee_1588) {
+ annot = GET_ANNOTATIONS(mbuf->buf_addr);
+ dpaa_intf->rx_timestamp =
+ rte_cpu_to_be_64(annot->timestamp);
+ }
}
}
@@ -758,6 +776,8 @@ uint16_t dpaa_eth_queue_rx(void *q,
uint32_t num_rx = 0, ifid = ((struct dpaa_if *)fq->dpaa_intf)->ifid;
int num_rx_bufs, ret;
uint32_t vdqcr_flags = 0;
+ struct annotations_t *annot;
+ struct dpaa_if *dpaa_intf = fq->dpaa_intf;
if (unlikely(rte_dpaa_bpid_info == NULL &&
rte_eal_process_type() == RTE_PROC_SECONDARY))
@@ -800,6 +820,10 @@ uint16_t dpaa_eth_queue_rx(void *q,
continue;
bufs[num_rx++] = dpaa_eth_fd_to_mbuf(&dq->fd, ifid);
dpaa_display_frame_info(&dq->fd, fq->fqid, true);
+ if (dpaa_ieee_1588) {
+ annot = GET_ANNOTATIONS(bufs[num_rx - 1]->buf_addr);
+ dpaa_intf->rx_timestamp = rte_cpu_to_be_64(annot->timestamp);
+ }
qman_dqrr_consume(fq, dq);
} while (fq->flags & QMAN_FQ_STATE_VDQCR);
@@ -1095,6 +1119,7 @@ dpaa_eth_queue_tx(void *q, struct rte_mbuf **bufs, uint16_t nb_bufs)
struct dpaa_sw_buf_free buf_to_free[DPAA_MAX_SGS * DPAA_MAX_DEQUEUE_NUM_FRAMES];
uint32_t free_count = 0;
struct qman_fq *fq = q;
+ struct dpaa_if *dpaa_intf = fq->dpaa_intf;
struct qman_fq *fq_txconf = fq->tx_conf_queue;
if (unlikely(!DPAA_PER_LCORE_PORTAL)) {
@@ -1107,6 +1132,12 @@ dpaa_eth_queue_tx(void *q, struct rte_mbuf **bufs, uint16_t nb_bufs)
DPAA_DP_LOG(DEBUG, "Transmitting %d buffers on queue: %p", nb_bufs, q);
+ if (dpaa_ieee_1588) {
+ dpaa_intf->next_tx_conf_queue = fq_txconf;
+ dpaa_eth_tx_conf(fq_txconf);
+ dpaa_intf->tx_timestamp = 0;
+ }
+
while (nb_bufs) {
frames_to_send = (nb_bufs > DPAA_TX_BURST_SIZE) ?
DPAA_TX_BURST_SIZE : nb_bufs;
@@ -1119,6 +1150,14 @@ dpaa_eth_queue_tx(void *q, struct rte_mbuf **bufs, uint16_t nb_bufs)
if (dpaa_svr_family == SVR_LS1043A_FAMILY &&
(mbuf->data_off & 0x7F) != 0x0)
realloc_mbuf = 1;
+
+ fd_arr[loop].cmd = 0;
+ if (dpaa_ieee_1588) {
+ fd_arr[loop].cmd |= DPAA_FD_CMD_FCO |
+ qman_fq_fqid(fq_txconf);
+ fd_arr[loop].cmd |= DPAA_FD_CMD_RPD |
+ DPAA_FD_CMD_UPD;
+ }
seqn = *dpaa_seqn(mbuf);
if (seqn != DPAA_INVALID_MBUF_SEQN) {
index = seqn - 1;
@@ -1176,10 +1215,6 @@ dpaa_eth_queue_tx(void *q, struct rte_mbuf **bufs, uint16_t nb_bufs)
mbuf = temp_mbuf;
realloc_mbuf = 0;
}
-
- if (dpaa_ieee_1588)
- fd_arr[loop].cmd |= DPAA_FD_CMD_FCO | qman_fq_fqid(fq_txconf);
-
indirect_buf:
state = tx_on_dpaa_pool(mbuf, bp_info,
&fd_arr[loop],
@@ -1208,9 +1243,6 @@ dpaa_eth_queue_tx(void *q, struct rte_mbuf **bufs, uint16_t nb_bufs)
sent += frames_to_send;
}
- if (dpaa_ieee_1588)
- dpaa_eth_tx_conf(fq_txconf);
-
DPAA_DP_LOG(DEBUG, "Transmitted %d buffers on queue: %p", sent, q);
for (loop = 0; loop < free_count; loop++) {
@@ -1228,6 +1260,12 @@ dpaa_eth_tx_conf(void *q)
struct qm_dqrr_entry *dq;
int num_tx_conf, ret, dq_num;
uint32_t vdqcr_flags = 0;
+ struct dpaa_if *dpaa_intf = fq->dpaa_intf;
+ struct qm_dqrr_entry *dqrr;
+ struct dpaa_bp_info *bp_info;
+ struct rte_mbuf *mbuf;
+ void *ptr;
+ struct annotations_t *annot;
if (unlikely(rte_dpaa_bpid_info == NULL &&
rte_eal_process_type() == RTE_PROC_SECONDARY))
@@ -1252,7 +1290,20 @@ dpaa_eth_tx_conf(void *q)
dq = qman_dequeue(fq);
if (!dq)
continue;
+ dqrr = dq;
dq_num++;
+ bp_info = DPAA_BPID_TO_POOL_INFO(dqrr->fd.bpid);
+ ptr = rte_dpaa_mem_ptov(qm_fd_addr(&dqrr->fd));
+ rte_prefetch0((void *)((uint8_t *)ptr
+ + DEFAULT_RX_ICEOF));
+ mbuf = (struct rte_mbuf *)
+ ((char *)ptr - bp_info->meta_data_size);
+
+ if (mbuf->ol_flags & RTE_MBUF_F_TX_IEEE1588_TMST) {
+ annot = GET_ANNOTATIONS(mbuf->buf_addr);
+ dpaa_intf->tx_timestamp =
+ rte_cpu_to_be_64(annot->timestamp);
+ }
dpaa_display_frame_info(&dq->fd, fq->fqid, true);
qman_dqrr_consume(fq, dq);
dpaa_free_mbuf(&dq->fd);
diff --git a/drivers/net/dpaa/dpaa_rxtx.h b/drivers/net/dpaa/dpaa_rxtx.h
index 042602e087..1048e86d41 100644
--- a/drivers/net/dpaa/dpaa_rxtx.h
+++ b/drivers/net/dpaa/dpaa_rxtx.h
@@ -1,7 +1,7 @@
/* SPDX-License-Identifier: BSD-3-Clause
*
* Copyright 2016 Freescale Semiconductor, Inc. All rights reserved.
- * Copyright 2017,2020-2021 NXP
+ * Copyright 2017,2020-2022 NXP
*
*/
@@ -260,7 +260,7 @@ struct dpaa_eth_parse_results_t {
struct annotations_t {
uint8_t reserved[DEFAULT_RX_ICEOF];
struct dpaa_eth_parse_results_t parse; /**< Pointer to Parsed result*/
- uint64_t reserved1;
+ uint64_t timestamp;
uint64_t hash; /**< Hash Result */
};
diff --git a/drivers/net/dpaa/meson.build b/drivers/net/dpaa/meson.build
index 42e1f8c2e2..239858adda 100644
--- a/drivers/net/dpaa/meson.build
+++ b/drivers/net/dpaa/meson.build
@@ -14,6 +14,7 @@ sources = files(
'dpaa_flow.c',
'dpaa_rxtx.c',
'dpaa_fmc.c',
+ 'dpaa_ptp.c',
)
if cc.has_argument('-Wno-pointer-arith')
--
2.25.1
* [PATCH v5 10/18] net/dpaa: support IEEE 1588 PTP
2024-10-01 11:03 ` [PATCH v5 " Hemant Agrawal
` (8 preceding siblings ...)
2024-10-01 11:03 ` [PATCH v5 09/18] net/dpaa: support Rx/Tx timestamp read Hemant Agrawal
@ 2024-10-01 11:03 ` Hemant Agrawal
2024-10-01 11:03 ` [PATCH v5 11/18] net/dpaa: implement detailed packet parsing Hemant Agrawal
` (9 subsequent siblings)
19 siblings, 0 replies; 129+ messages in thread
From: Hemant Agrawal @ 2024-10-01 11:03 UTC (permalink / raw)
To: dev; +Cc: ferruh.yigit, Vanshika Shukla
From: Vanshika Shukla <vanshika.shukla@nxp.com>
This patch adds support for the ethdev APIs to enable/disable
and to read/write/adjust IEEE 1588 PTP timestamps on the DPAA
platform.
Signed-off-by: Vanshika Shukla <vanshika.shukla@nxp.com>
---
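Illustration only (not part of this patch): a minimal sketch of how an
application could drive these new timesync ops through the generic ethdev
API; the function name and error handling are assumptions:

	#include <time.h>
	#include <rte_ethdev.h>

	/* Enable timesync, shift the FMan RTC by delta_ns and read it back. */
	static int
	ptp_adjust_sketch(uint16_t port_id, int64_t delta_ns)
	{
		struct timespec now;
		int ret;

		ret = rte_eth_timesync_enable(port_id);
		if (ret)
			return ret;

		/* Apply a signed nanosecond offset to the device clock. */
		ret = rte_eth_timesync_adjust_time(port_id, delta_ns);
		if (ret)
			return ret;

		/* Read back the adjusted device time. */
		return rte_eth_timesync_read_time(port_id, &now);
	}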
doc/guides/nics/dpaa.rst | 1 +
doc/guides/nics/features/dpaa.ini | 1 +
drivers/bus/dpaa/base/fman/fman.c | 15 ++++++
drivers/bus/dpaa/include/fman.h | 45 +++++++++++++++++
drivers/net/dpaa/dpaa_ethdev.c | 5 ++
drivers/net/dpaa/dpaa_ethdev.h | 16 +++++++
drivers/net/dpaa/dpaa_ptp.c | 80 ++++++++++++++++++++++++++++++-
7 files changed, 161 insertions(+), 2 deletions(-)
diff --git a/doc/guides/nics/dpaa.rst b/doc/guides/nics/dpaa.rst
index acf4daab02..ea86e6146c 100644
--- a/doc/guides/nics/dpaa.rst
+++ b/doc/guides/nics/dpaa.rst
@@ -148,6 +148,7 @@ Features
- Packet type information
- Checksum offload
- Promiscuous mode
+ - IEEE1588 PTP
DPAA Mempool Driver
~~~~~~~~~~~~~~~~~~~
diff --git a/doc/guides/nics/features/dpaa.ini b/doc/guides/nics/features/dpaa.ini
index 4196dd800c..4f31b61de1 100644
--- a/doc/guides/nics/features/dpaa.ini
+++ b/doc/guides/nics/features/dpaa.ini
@@ -19,6 +19,7 @@ Flow control = Y
L3 checksum offload = Y
L4 checksum offload = Y
Packet type parsing = Y
+Timesync = Y
Timestamp offload = Y
Basic stats = Y
Extended stats = Y
diff --git a/drivers/bus/dpaa/base/fman/fman.c b/drivers/bus/dpaa/base/fman/fman.c
index e39bc8c252..e2b7120237 100644
--- a/drivers/bus/dpaa/base/fman/fman.c
+++ b/drivers/bus/dpaa/base/fman/fman.c
@@ -28,6 +28,7 @@ u32 fman_dealloc_bufs_mask_lo;
int fman_ccsr_map_fd = -1;
static COMPAT_LIST_HEAD(__ifs);
+void *rtc_map;
/* This is the (const) global variable that callers have read-only access to.
* Internally, we have read-write access directly to __ifs.
@@ -539,6 +540,20 @@ fman_if_init(const struct device_node *dpa_node)
goto err;
}
+ if (!rtc_map) {
+ __if->rtc_map = mmap(NULL, FMAN_IEEE_1588_SIZE,
+ PROT_READ | PROT_WRITE, MAP_SHARED,
+ fman_ccsr_map_fd, FMAN_IEEE_1588_OFFSET);
+ if (__if->rtc_map == MAP_FAILED) {
+ pr_err("Can not map FMan RTC regs base\n");
+ _errno = -EINVAL;
+ goto err;
+ }
+ rtc_map = __if->rtc_map;
+ } else {
+ __if->rtc_map = rtc_map;
+ }
+
/* No channel ID for MAC-less */
assert(lenp == sizeof(*tx_channel_id));
na = of_n_addr_cells(mac_node);
diff --git a/drivers/bus/dpaa/include/fman.h b/drivers/bus/dpaa/include/fman.h
index 09d1ddb897..e8bc913943 100644
--- a/drivers/bus/dpaa/include/fman.h
+++ b/drivers/bus/dpaa/include/fman.h
@@ -64,6 +64,12 @@
#define GROUP_ADDRESS 0x0000010000000000LL
#define HASH_CTRL_ADDR_MASK 0x0000003F
+#define FMAN_RTC_MAX_NUM_OF_ALARMS 3
+#define FMAN_RTC_MAX_NUM_OF_PERIODIC_PULSES 4
+#define FMAN_RTC_MAX_NUM_OF_EXT_TRIGGERS 3
+#define FMAN_IEEE_1588_OFFSET 0X1AFE000
+#define FMAN_IEEE_1588_SIZE 4096
+
/* Pre definitions of FMAN interface and Bpool structures */
struct __fman_if;
struct fman_if_bpool;
@@ -307,6 +313,44 @@ struct tx_bmi_regs {
uint32_t fmbm_trlmts; /**< Tx Rate Limiter Scale*/
uint32_t fmbm_trlmt; /**< Tx Rate Limiter*/
};
+
+/* Description FM RTC timer alarm */
+struct t_tmr_alarm {
+ uint32_t tmr_alarm_h;
+ uint32_t tmr_alarm_l;
+};
+
+/* Description FM RTC timer Ex trigger */
+struct t_tmr_ext_trigger {
+ uint32_t tmr_etts_h;
+ uint32_t tmr_etts_l;
+};
+
+struct rtc_regs {
+ uint32_t tmr_id; /* 0x000 Module ID register */
+ uint32_t tmr_id2; /* 0x004 Controller ID register */
+ uint32_t reserved0008[30];
+ uint32_t tmr_ctrl; /* 0x0080 timer control register */
+ uint32_t tmr_tevent; /* 0x0084 timer event register */
+ uint32_t tmr_temask; /* 0x0088 timer event mask register */
+ uint32_t reserved008c[3];
+ uint32_t tmr_cnt_h; /* 0x0098 timer counter high register */
+ uint32_t tmr_cnt_l; /* 0x009c timer counter low register */
+ uint32_t tmr_add; /* 0x00a0 timer drift compensation addend register */
+ uint32_t tmr_acc; /* 0x00a4 timer accumulator register */
+ uint32_t tmr_prsc; /* 0x00a8 timer prescale */
+ uint32_t reserved00ac;
+ uint32_t tmr_off_h; /* 0x00b0 timer offset high */
+ uint32_t tmr_off_l; /* 0x00b4 timer offset low */
+ struct t_tmr_alarm tmr_alarm[FMAN_RTC_MAX_NUM_OF_ALARMS];
+ /* 0x00b8 timer alarm */
+ uint32_t tmr_fiper[FMAN_RTC_MAX_NUM_OF_PERIODIC_PULSES];
+ /* 0x00d0 timer fixed period interval */
+ struct t_tmr_ext_trigger tmr_etts[FMAN_RTC_MAX_NUM_OF_EXT_TRIGGERS];
+ /* 0x00e0 time stamp general purpose external */
+ uint32_t reserved00f0[4];
+};
+
struct fman_port_qmi_regs {
uint32_t fmqm_pnc; /**< PortID n Configuration Register */
uint32_t fmqm_pns; /**< PortID n Status Register */
@@ -396,6 +440,7 @@ struct __fman_if {
void *ccsr_map;
void *bmi_map;
void *tx_bmi_map;
+ void *rtc_map;
void *qmi_map;
struct list_head node;
};
diff --git a/drivers/net/dpaa/dpaa_ethdev.c b/drivers/net/dpaa/dpaa_ethdev.c
index 682cb1c77e..82d1960356 100644
--- a/drivers/net/dpaa/dpaa_ethdev.c
+++ b/drivers/net/dpaa/dpaa_ethdev.c
@@ -1673,6 +1673,11 @@ static struct eth_dev_ops dpaa_devops = {
.rx_queue_intr_disable = dpaa_dev_queue_intr_disable,
.rss_hash_update = dpaa_dev_rss_hash_update,
.rss_hash_conf_get = dpaa_dev_rss_hash_conf_get,
+ .timesync_enable = dpaa_timesync_enable,
+ .timesync_disable = dpaa_timesync_disable,
+ .timesync_read_time = dpaa_timesync_read_time,
+ .timesync_write_time = dpaa_timesync_write_time,
+ .timesync_adjust_time = dpaa_timesync_adjust_time,
.timesync_read_rx_timestamp = dpaa_timesync_read_rx_timestamp,
.timesync_read_tx_timestamp = dpaa_timesync_read_tx_timestamp,
};
diff --git a/drivers/net/dpaa/dpaa_ethdev.h b/drivers/net/dpaa/dpaa_ethdev.h
index bbdb0936c0..7884cc034c 100644
--- a/drivers/net/dpaa/dpaa_ethdev.h
+++ b/drivers/net/dpaa/dpaa_ethdev.h
@@ -245,6 +245,22 @@ int
dpaa_timesync_read_tx_timestamp(struct rte_eth_dev *dev,
struct timespec *timestamp);
+int
+dpaa_timesync_enable(struct rte_eth_dev *dev);
+
+int
+dpaa_timesync_disable(struct rte_eth_dev *dev);
+
+int
+dpaa_timesync_read_time(struct rte_eth_dev *dev,
+ struct timespec *timestamp);
+
+int
+dpaa_timesync_write_time(struct rte_eth_dev *dev,
+ const struct timespec *timestamp);
+int
+dpaa_timesync_adjust_time(struct rte_eth_dev *dev, int64_t delta);
+
int
dpaa_timesync_read_rx_timestamp(struct rte_eth_dev *dev,
struct timespec *timestamp,
diff --git a/drivers/net/dpaa/dpaa_ptp.c b/drivers/net/dpaa/dpaa_ptp.c
index 2ecdda6db0..48e29e22eb 100644
--- a/drivers/net/dpaa/dpaa_ptp.c
+++ b/drivers/net/dpaa/dpaa_ptp.c
@@ -16,7 +16,82 @@
#include <dpaa_ethdev.h>
#include <dpaa_rxtx.h>
-int dpaa_timesync_read_tx_timestamp(struct rte_eth_dev *dev,
+int
+dpaa_timesync_enable(struct rte_eth_dev *dev __rte_unused)
+{
+ return 0;
+}
+
+int
+dpaa_timesync_disable(struct rte_eth_dev *dev __rte_unused)
+{
+ return 0;
+}
+
+int
+dpaa_timesync_read_time(struct rte_eth_dev *dev,
+ struct timespec *timestamp)
+{
+ uint32_t *tmr_cnt_h, *tmr_cnt_l;
+ struct __fman_if *__fif;
+ struct fman_if *fif;
+ uint64_t time;
+
+ fif = dev->process_private;
+ __fif = container_of(fif, struct __fman_if, __if);
+
+ tmr_cnt_h = &((struct rtc_regs *)__fif->rtc_map)->tmr_cnt_h;
+ tmr_cnt_l = &((struct rtc_regs *)__fif->rtc_map)->tmr_cnt_l;
+
+ time = (uint64_t)in_be32(tmr_cnt_l);
+ time |= ((uint64_t)in_be32(tmr_cnt_h) << 32);
+
+ *timestamp = rte_ns_to_timespec(time);
+ return 0;
+}
+
+int
+dpaa_timesync_write_time(struct rte_eth_dev *dev,
+ const struct timespec *ts)
+{
+ uint32_t *tmr_cnt_h, *tmr_cnt_l;
+ struct __fman_if *__fif;
+ struct fman_if *fif;
+ uint64_t time;
+
+ fif = dev->process_private;
+ __fif = container_of(fif, struct __fman_if, __if);
+
+ tmr_cnt_h = &((struct rtc_regs *)__fif->rtc_map)->tmr_cnt_h;
+ tmr_cnt_l = &((struct rtc_regs *)__fif->rtc_map)->tmr_cnt_l;
+
+ time = rte_timespec_to_ns(ts);
+
+ out_be32(tmr_cnt_l, (uint32_t)time);
+ out_be32(tmr_cnt_h, (uint32_t)(time >> 32));
+
+ return 0;
+}
+
+int
+dpaa_timesync_adjust_time(struct rte_eth_dev *dev, int64_t delta)
+{
+ struct timespec ts = {0, 0}, *timestamp = &ts;
+ uint64_t ns;
+
+ dpaa_timesync_read_time(dev, timestamp);
+
+ ns = rte_timespec_to_ns(timestamp);
+ ns += delta;
+ *timestamp = rte_ns_to_timespec(ns);
+
+ dpaa_timesync_write_time(dev, timestamp);
+
+ return 0;
+}
+
+int
+dpaa_timesync_read_tx_timestamp(struct rte_eth_dev *dev,
struct timespec *timestamp)
{
struct dpaa_if *dpaa_intf = dev->data->dev_private;
@@ -32,7 +107,8 @@ int dpaa_timesync_read_tx_timestamp(struct rte_eth_dev *dev,
return 0;
}
-int dpaa_timesync_read_rx_timestamp(struct rte_eth_dev *dev,
+int
+dpaa_timesync_read_rx_timestamp(struct rte_eth_dev *dev,
struct timespec *timestamp,
uint32_t flags __rte_unused)
{
--
2.25.1
* [PATCH v5 11/18] net/dpaa: implement detailed packet parsing
2024-10-01 11:03 ` [PATCH v5 " Hemant Agrawal
` (9 preceding siblings ...)
2024-10-01 11:03 ` [PATCH v5 10/18] net/dpaa: support IEEE 1588 PTP Hemant Agrawal
@ 2024-10-01 11:03 ` Hemant Agrawal
2024-10-01 11:03 ` [PATCH v5 12/18] net/dpaa: enhance DPAA frame display Hemant Agrawal
` (8 subsequent siblings)
19 siblings, 0 replies; 129+ messages in thread
From: Hemant Agrawal @ 2024-10-01 11:03 UTC (permalink / raw)
To: dev; +Cc: ferruh.yigit, Jun Yang
This patch implements detailed packet parsing using the
annotation info from the hardware.
The parse results are decoded in dpaa_slow_parsing() to set the
Rx mbuf packet type.
It adds support to identify IPsec ESP, GRE and SCTP packets.
Signed-off-by: Jun Yang <jun.yang@nxp.com>
Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
---
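Illustration only (not part of this patch): once the PMD fills in
mbuf->packet_type from the FMan parse results, an application can classify
frames without touching the headers. The helper below is a hypothetical
sketch:

	#include <rte_mbuf.h>
	#include <rte_mbuf_ptype.h>

	/* Count ESP/GRE/SCTP frames in a received burst using the packet
	 * type decoded by the PMD.
	 */
	static void
	count_tunnels(struct rte_mbuf **pkts, uint16_t nb,
		      uint64_t *esp, uint64_t *gre, uint64_t *sctp)
	{
		uint16_t i;

		for (i = 0; i < nb; i++) {
			uint32_t pt = pkts[i]->packet_type;

			if ((pt & RTE_PTYPE_TUNNEL_MASK) == RTE_PTYPE_TUNNEL_ESP)
				(*esp)++;
			else if ((pt & RTE_PTYPE_TUNNEL_MASK) == RTE_PTYPE_TUNNEL_GRE)
				(*gre)++;
			else if ((pt & RTE_PTYPE_L4_MASK) == RTE_PTYPE_L4_SCTP)
				(*sctp)++;
		}
	}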
drivers/net/dpaa/dpaa_ethdev.c | 1 +
drivers/net/dpaa/dpaa_rxtx.c | 35 +++++++-
drivers/net/dpaa/dpaa_rxtx.h | 143 ++++++++++++++-------------------
3 files changed, 93 insertions(+), 86 deletions(-)
diff --git a/drivers/net/dpaa/dpaa_ethdev.c b/drivers/net/dpaa/dpaa_ethdev.c
index 82d1960356..a302b24be6 100644
--- a/drivers/net/dpaa/dpaa_ethdev.c
+++ b/drivers/net/dpaa/dpaa_ethdev.c
@@ -411,6 +411,7 @@ dpaa_supported_ptypes_get(struct rte_eth_dev *dev, size_t *no_of_elements)
RTE_PTYPE_L4_UDP,
RTE_PTYPE_L4_SCTP,
RTE_PTYPE_TUNNEL_ESP,
+ RTE_PTYPE_TUNNEL_GRE,
};
PMD_INIT_FUNC_TRACE();
diff --git a/drivers/net/dpaa/dpaa_rxtx.c b/drivers/net/dpaa/dpaa_rxtx.c
index e3b4bb14ab..99fc3f1b43 100644
--- a/drivers/net/dpaa/dpaa_rxtx.c
+++ b/drivers/net/dpaa/dpaa_rxtx.c
@@ -110,11 +110,38 @@ static void dpaa_display_frame_info(const struct qm_fd *fd,
#define dpaa_display_frame_info(a, b, c)
#endif
-static inline void dpaa_slow_parsing(struct rte_mbuf *m __rte_unused,
- uint64_t prs __rte_unused)
+static inline void
+dpaa_slow_parsing(struct rte_mbuf *m,
+ const struct annotations_t *annot)
{
+ const struct dpaa_eth_parse_results_t *parse;
+
DPAA_DP_LOG(DEBUG, "Slow parsing");
- /*TBD:XXX: to be implemented*/
+ parse = &annot->parse;
+
+ if (parse->ethernet)
+ m->packet_type |= RTE_PTYPE_L2_ETHER;
+ if (parse->vlan)
+ m->packet_type |= RTE_PTYPE_L2_ETHER_VLAN;
+ if (parse->first_ipv4)
+ m->packet_type |= RTE_PTYPE_L3_IPV4;
+ if (parse->first_ipv6)
+ m->packet_type |= RTE_PTYPE_L3_IPV6;
+ if (parse->gre)
+ m->packet_type |= RTE_PTYPE_TUNNEL_GRE;
+ if (parse->last_ipv4)
+ m->packet_type |= RTE_PTYPE_L3_IPV4_EXT;
+ if (parse->last_ipv6)
+ m->packet_type |= RTE_PTYPE_L3_IPV6_EXT;
+ if (parse->l4_type == DPAA_PR_L4_TCP_TYPE)
+ m->packet_type |= RTE_PTYPE_L4_TCP;
+ else if (parse->l4_type == DPAA_PR_L4_UDP_TYPE)
+ m->packet_type |= RTE_PTYPE_L4_UDP;
+ else if (parse->l4_type == DPAA_PR_L4_IPSEC_TYPE &&
+ !parse->l4_info_err && parse->esp_sum)
+ m->packet_type |= RTE_PTYPE_TUNNEL_ESP;
+ else if (parse->l4_type == DPAA_PR_L4_SCTP_TYPE)
+ m->packet_type |= RTE_PTYPE_L4_SCTP;
}
static inline void dpaa_eth_packet_info(struct rte_mbuf *m, void *fd_virt_addr)
@@ -228,7 +255,7 @@ static inline void dpaa_eth_packet_info(struct rte_mbuf *m, void *fd_virt_addr)
break;
/* More switch cases can be added */
default:
- dpaa_slow_parsing(m, prs);
+ dpaa_slow_parsing(m, annot);
}
m->tx_offload = annot->parse.ip_off[0];
diff --git a/drivers/net/dpaa/dpaa_rxtx.h b/drivers/net/dpaa/dpaa_rxtx.h
index 1048e86d41..215bdeaf7f 100644
--- a/drivers/net/dpaa/dpaa_rxtx.h
+++ b/drivers/net/dpaa/dpaa_rxtx.h
@@ -1,7 +1,7 @@
/* SPDX-License-Identifier: BSD-3-Clause
*
* Copyright 2016 Freescale Semiconductor, Inc. All rights reserved.
- * Copyright 2017,2020-2022 NXP
+ * Copyright 2017,2020-2024 NXP
*
*/
@@ -162,98 +162,77 @@
#define DPAA_PKT_L3_LEN_SHIFT 7
+enum dpaa_parse_result_l4_type {
+ DPAA_PR_L4_TCP_TYPE = 1,
+ DPAA_PR_L4_UDP_TYPE = 2,
+ DPAA_PR_L4_IPSEC_TYPE = 3,
+ DPAA_PR_L4_SCTP_TYPE = 4,
+ DPAA_PR_L4_DCCP_TYPE = 5
+};
+
/**
* FMan parse result array
*/
struct dpaa_eth_parse_results_t {
- uint8_t lpid; /**< Logical port id */
- uint8_t shimr; /**< Shim header result */
- union {
- uint16_t l2r; /**< Layer 2 result */
+ uint8_t lpid; /**< Logical port id */
+ uint8_t shimr; /**< Shim header result */
+ union {
+ uint16_t l2r; /**< Layer 2 result */
struct {
-#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
- uint16_t ethernet:1;
- uint16_t vlan:1;
- uint16_t llc_snap:1;
- uint16_t mpls:1;
- uint16_t ppoe_ppp:1;
- uint16_t unused_1:3;
- uint16_t unknown_eth_proto:1;
- uint16_t eth_frame_type:2;
- uint16_t l2r_err:5;
+ uint16_t unused_1:3;
+ uint16_t ppoe_ppp:1;
+ uint16_t mpls:1;
+ uint16_t llc_snap:1;
+ uint16_t vlan:1;
+ uint16_t ethernet:1;
+
+ uint16_t l2r_err:5;
+ uint16_t eth_frame_type:2;
/*00-unicast, 01-multicast, 11-broadcast*/
-#else
- uint16_t l2r_err:5;
- uint16_t eth_frame_type:2;
- uint16_t unknown_eth_proto:1;
- uint16_t unused_1:3;
- uint16_t ppoe_ppp:1;
- uint16_t mpls:1;
- uint16_t llc_snap:1;
- uint16_t vlan:1;
- uint16_t ethernet:1;
-#endif
+ uint16_t unknown_eth_proto:1;
} __rte_packed;
- } __rte_packed;
- union {
- uint16_t l3r; /**< Layer 3 result */
+ } __rte_packed;
+ union {
+ uint16_t l3r; /**< Layer 3 result */
struct {
-#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
- uint16_t first_ipv4:1;
- uint16_t first_ipv6:1;
- uint16_t gre:1;
- uint16_t min_enc:1;
- uint16_t last_ipv4:1;
- uint16_t last_ipv6:1;
- uint16_t first_info_err:1;/*0 info, 1 error*/
- uint16_t first_ip_err_code:5;
- uint16_t last_info_err:1; /*0 info, 1 error*/
- uint16_t last_ip_err_code:3;
-#else
- uint16_t last_ip_err_code:3;
- uint16_t last_info_err:1; /*0 info, 1 error*/
- uint16_t first_ip_err_code:5;
- uint16_t first_info_err:1;/*0 info, 1 error*/
- uint16_t last_ipv6:1;
- uint16_t last_ipv4:1;
- uint16_t min_enc:1;
- uint16_t gre:1;
- uint16_t first_ipv6:1;
- uint16_t first_ipv4:1;
-#endif
+ uint16_t unused_2:1;
+ uint16_t l3_err:1;
+ uint16_t last_ipv6:1;
+ uint16_t last_ipv4:1;
+ uint16_t min_enc:1;
+ uint16_t gre:1;
+ uint16_t first_ipv6:1;
+ uint16_t first_ipv4:1;
+
+ uint16_t unused_3:8;
} __rte_packed;
- } __rte_packed;
- union {
- uint8_t l4r; /**< Layer 4 result */
+ } __rte_packed;
+ union {
+ uint8_t l4r; /**< Layer 4 result */
struct{
-#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
- uint8_t l4_type:3;
- uint8_t l4_info_err:1;
- uint8_t l4_result:4;
- /* if type IPSec: 1 ESP, 2 AH */
-#else
- uint8_t l4_result:4;
- /* if type IPSec: 1 ESP, 2 AH */
- uint8_t l4_info_err:1;
- uint8_t l4_type:3;
-#endif
+ uint8_t l4cv:1;
+ uint8_t unused_4:1;
+ uint8_t ah:1;
+ uint8_t esp_sum:1;
+ uint8_t l4_info_err:1;
+ uint8_t l4_type:3;
} __rte_packed;
- } __rte_packed;
- uint8_t cplan; /**< Classification plan id */
- uint16_t nxthdr; /**< Next Header */
- uint16_t cksum; /**< Checksum */
- uint32_t lcv; /**< LCV */
- uint8_t shim_off[3]; /**< Shim offset */
- uint8_t eth_off; /**< ETH offset */
- uint8_t llc_snap_off; /**< LLC_SNAP offset */
- uint8_t vlan_off[2]; /**< VLAN offset */
- uint8_t etype_off; /**< ETYPE offset */
- uint8_t pppoe_off; /**< PPP offset */
- uint8_t mpls_off[2]; /**< MPLS offset */
- uint8_t ip_off[2]; /**< IP offset */
- uint8_t gre_off; /**< GRE offset */
- uint8_t l4_off; /**< Layer 4 offset */
- uint8_t nxthdr_off; /**< Parser end point */
+ } __rte_packed;
+ uint8_t cplan; /**< Classification plan id */
+ uint16_t nxthdr; /**< Next Header */
+ uint16_t cksum; /**< Checksum */
+ uint32_t lcv; /**< LCV */
+ uint8_t shim_off[3]; /**< Shim offset */
+ uint8_t eth_off; /**< ETH offset */
+ uint8_t llc_snap_off; /**< LLC_SNAP offset */
+ uint8_t vlan_off[2]; /**< VLAN offset */
+ uint8_t etype_off; /**< ETYPE offset */
+ uint8_t pppoe_off; /**< PPP offset */
+ uint8_t mpls_off[2]; /**< MPLS offset */
+ uint8_t ip_off[2]; /**< IP offset */
+ uint8_t gre_off; /**< GRE offset */
+ uint8_t l4_off; /**< Layer 4 offset */
+ uint8_t nxthdr_off; /**< Parser end point */
} __rte_packed;
/* The structure is the Prepended Data to the Frame which is used by FMAN */
--
2.25.1
* [PATCH v5 12/18] net/dpaa: enhance DPAA frame display
2024-10-01 11:03 ` [PATCH v5 " Hemant Agrawal
` (10 preceding siblings ...)
2024-10-01 11:03 ` [PATCH v5 11/18] net/dpaa: implement detailed packet parsing Hemant Agrawal
@ 2024-10-01 11:03 ` Hemant Agrawal
2024-10-01 11:03 ` [PATCH v5 13/18] net/dpaa: support mempool debug Hemant Agrawal
` (7 subsequent siblings)
19 siblings, 0 replies; 129+ messages in thread
From: Hemant Agrawal @ 2024-10-01 11:03 UTC (permalink / raw)
To: dev; +Cc: ferruh.yigit, Jun Yang
This patch enhances the received-packet debugging capability.
It helps display the full packet parsing output.
Signed-off-by: Jun Yang <jun.yang@nxp.com>
Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
---
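Illustration only (not part of this patch): the dump is gated by an
environment variable read while the DPAA bus probes the ports, so it must be
set before rte_eal_init(), and the PMD must be built with
RTE_LIBRTE_DPAA_DEBUG_DRIVER. A minimal sketch, assuming nothing beyond
standard EAL startup:

	#include <stdlib.h>
	#include <rte_eal.h>

	int
	main(int argc, char **argv)
	{
		/* Ask the PMD to dump every frame and its parser result. */
		setenv("DPAA_DISPLAY_FRAME_AND_PARSER_RESULT", "1", 1);

		if (rte_eal_init(argc, argv) < 0)
			return -1;

		/* ... rest of the application ... */
		return 0;
	}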
doc/guides/nics/dpaa.rst | 5 ++
drivers/net/dpaa/dpaa_ethdev.c | 9 +++
drivers/net/dpaa/dpaa_rxtx.c | 138 +++++++++++++++++++++++++++------
drivers/net/dpaa/dpaa_rxtx.h | 5 ++
4 files changed, 133 insertions(+), 24 deletions(-)
diff --git a/doc/guides/nics/dpaa.rst b/doc/guides/nics/dpaa.rst
index ea86e6146c..edf7a7e350 100644
--- a/doc/guides/nics/dpaa.rst
+++ b/doc/guides/nics/dpaa.rst
@@ -227,6 +227,11 @@ state during application initialization:
application want to use eventdev with DPAA device.
Currently these queues are not used for LS1023/LS1043 platform by default.
+- ``DPAA_DISPLAY_FRAME_AND_PARSER_RESULT`` (default 0)
+
+ This defines the debug flag, whether to dump the detailed frame and packet
+ parsing result for the incoming packets.
+
Driver compilation and testing
------------------------------
diff --git a/drivers/net/dpaa/dpaa_ethdev.c b/drivers/net/dpaa/dpaa_ethdev.c
index a302b24be6..4ead890278 100644
--- a/drivers/net/dpaa/dpaa_ethdev.c
+++ b/drivers/net/dpaa/dpaa_ethdev.c
@@ -2056,6 +2056,9 @@ dpaa_dev_init(struct rte_eth_dev *eth_dev)
int8_t dev_vspids[DPAA_MAX_NUM_PCD_QUEUES];
int8_t vsp_id = -1;
struct rte_device *dev = eth_dev->device;
+#ifdef RTE_LIBRTE_DPAA_DEBUG_DRIVER
+ char *penv;
+#endif
PMD_INIT_FUNC_TRACE();
@@ -2135,6 +2138,12 @@ dpaa_dev_init(struct rte_eth_dev *eth_dev)
td_tx_threshold = CGR_RX_PERFQ_THRESH;
}
+#ifdef RTE_LIBRTE_DPAA_DEBUG_DRIVER
+ penv = getenv("DPAA_DISPLAY_FRAME_AND_PARSER_RESULT");
+ if (penv)
+ dpaa_force_display_frame_set(atoi(penv));
+#endif
+
/* If congestion control is enabled globally*/
if (num_rx_fqs > 0 && td_threshold) {
dpaa_intf->cgr_rx = rte_zmalloc(NULL,
diff --git a/drivers/net/dpaa/dpaa_rxtx.c b/drivers/net/dpaa/dpaa_rxtx.c
index 99fc3f1b43..945c84ab10 100644
--- a/drivers/net/dpaa/dpaa_rxtx.c
+++ b/drivers/net/dpaa/dpaa_rxtx.c
@@ -47,6 +47,10 @@
#include <dpaa_of.h>
#include <netcfg.h>
+#ifdef RTE_LIBRTE_DPAA_DEBUG_DRIVER
+static int s_force_display_frm;
+#endif
+
#define DPAA_MBUF_TO_CONTIG_FD(_mbuf, _fd, _bpid) \
do { \
(_fd)->opaque_addr = 0; \
@@ -58,37 +62,122 @@
} while (0)
#ifdef RTE_LIBRTE_DPAA_DEBUG_DRIVER
+void
+dpaa_force_display_frame_set(int set)
+{
+ s_force_display_frm = set;
+}
+
#define DISPLAY_PRINT printf
-static void dpaa_display_frame_info(const struct qm_fd *fd,
- uint32_t fqid, bool rx)
+static void
+dpaa_display_frame_info(const struct qm_fd *fd,
+ uint32_t fqid, bool rx)
{
- int ii;
- char *ptr;
+ int pos, offset = 0;
+ char *ptr, info[1024];
struct annotations_t *annot = rte_dpaa_mem_ptov(fd->addr);
uint8_t format;
+ const struct dpaa_eth_parse_results_t *psr;
- if (!fd->status) {
- /* Do not display correct packets.*/
+ if (!fd->status && !s_force_display_frm) {
+ /* Do not display correct packets unless force display.*/
return;
}
+ psr = &annot->parse;
- format = (fd->opaque & DPAA_FD_FORMAT_MASK) >>
- DPAA_FD_FORMAT_SHIFT;
-
- DISPLAY_PRINT("fqid %d bpid %d addr 0x%lx, format %d\r\n",
- fqid, fd->bpid, (unsigned long)fd->addr, fd->format);
- DISPLAY_PRINT("off %d, len %d stat 0x%x\r\n",
- fd->offset, fd->length20, fd->status);
+ format = (fd->opaque & DPAA_FD_FORMAT_MASK) >> DPAA_FD_FORMAT_SHIFT;
+ if (format == qm_fd_contig)
+ sprintf(info, "simple");
+ else if (format == qm_fd_sg)
+ sprintf(info, "sg");
+ else
+ sprintf(info, "unknown format(%d)", format);
+
+ DISPLAY_PRINT("%s: fqid=%08x, bpid=%d, phy addr=0x%lx ",
+ rx ? "RX" : "TX", fqid, fd->bpid, (unsigned long)fd->addr);
+ DISPLAY_PRINT("format=%s offset=%d, len=%d, stat=0x%x\r\n",
+ info, fd->offset, fd->length20, fd->status);
if (rx) {
- ptr = (char *)&annot->parse;
- DISPLAY_PRINT("RX parser result:\r\n");
- for (ii = 0; ii < (int)sizeof(struct dpaa_eth_parse_results_t);
- ii++) {
- DISPLAY_PRINT("%02x ", ptr[ii]);
- if (((ii + 1) % 16) == 0)
- DISPLAY_PRINT("\n");
+ DISPLAY_PRINT("Display usual RX parser result:\r\n");
+ if (psr->eth_frame_type == 0)
+ offset += sprintf(&info[offset], "unicast");
+ else if (psr->eth_frame_type == 1)
+ offset += sprintf(&info[offset], "multicast");
+ else if (psr->eth_frame_type == 3)
+ offset += sprintf(&info[offset], "broadcast");
+ else
+ offset += sprintf(&info[offset], "unknown eth type(%d)",
+ psr->eth_frame_type);
+ if (psr->l2r_err) {
+ offset += sprintf(&info[offset], " L2 error(%d)",
+ psr->l2r_err);
+ } else {
+ offset += sprintf(&info[offset], " L2 non error");
}
- DISPLAY_PRINT("\n");
+ DISPLAY_PRINT("L2: %s, %s, ethernet type:%s\r\n",
+ psr->ethernet ? "is ethernet" : "non ethernet",
+ psr->vlan ? "is vlan" : "non vlan", info);
+
+ offset = 0;
+ DISPLAY_PRINT("L3: %s/%s, %s/%s, %s, %s\r\n",
+ psr->first_ipv4 ? "first IPv4" : "non first IPv4",
+ psr->last_ipv4 ? "last IPv4" : "non last IPv4",
+ psr->first_ipv6 ? "first IPv6" : "non first IPv6",
+ psr->last_ipv6 ? "last IPv6" : "non last IPv6",
+ psr->gre ? "GRE" : "non GRE",
+ psr->l3_err ? "L3 has error" : "L3 non error");
+
+ if (psr->l4_type == DPAA_PR_L4_TCP_TYPE) {
+ offset += sprintf(&info[offset], "tcp");
+ } else if (psr->l4_type == DPAA_PR_L4_UDP_TYPE) {
+ offset += sprintf(&info[offset], "udp");
+ } else if (psr->l4_type == DPAA_PR_L4_IPSEC_TYPE) {
+ offset += sprintf(&info[offset], "IPSec ");
+ if (psr->esp_sum)
+ offset += sprintf(&info[offset], "ESP");
+ if (psr->ah)
+ offset += sprintf(&info[offset], "AH");
+ } else if (psr->l4_type == DPAA_PR_L4_SCTP_TYPE) {
+ offset += sprintf(&info[offset], "sctp");
+ } else if (psr->l4_type == DPAA_PR_L4_DCCP_TYPE) {
+ offset += sprintf(&info[offset], "dccp");
+ } else {
+ offset += sprintf(&info[offset], "unknown l4 type(%d)",
+ psr->l4_type);
+ }
+ DISPLAY_PRINT("L4: type:%s, L4 validation %s\r\n",
+ info, psr->l4cv ? "Performed" : "NOT performed");
+
+ offset = 0;
+ if (psr->ethernet) {
+ offset += sprintf(&info[offset],
+ "Eth offset=%d, ethtype offset=%d, ",
+ psr->eth_off, psr->etype_off);
+ }
+ if (psr->vlan) {
+ offset += sprintf(&info[offset], "vLAN offset=%d, ",
+ psr->vlan_off[0]);
+ }
+ if (psr->first_ipv4 || psr->first_ipv6) {
+ offset += sprintf(&info[offset], "first IP offset=%d, ",
+ psr->ip_off[0]);
+ }
+ if (psr->last_ipv4 || psr->last_ipv6) {
+ offset += sprintf(&info[offset], "last IP offset=%d, ",
+ psr->ip_off[1]);
+ }
+ if (psr->gre) {
+ offset += sprintf(&info[offset], "GRE offset=%d, ",
+ psr->gre_off);
+ }
+ if (psr->l4_type >= DPAA_PR_L4_TCP_TYPE) {
+ offset += sprintf(&info[offset], "L4 offset=%d, ",
+ psr->l4_off);
+ }
+ offset += sprintf(&info[offset], "Next HDR(0x%04x) offset=%d.",
+ rte_be_to_cpu_16(psr->nxthdr), psr->nxthdr_off);
+
+ DISPLAY_PRINT("%s\r\n", info);
}
if (unlikely(format == qm_fd_sg)) {
@@ -99,13 +188,14 @@ static void dpaa_display_frame_info(const struct qm_fd *fd,
DISPLAY_PRINT("Frame payload:\r\n");
ptr = (char *)annot;
ptr += fd->offset;
- for (ii = 0; ii < fd->length20; ii++) {
- DISPLAY_PRINT("%02x ", ptr[ii]);
- if (((ii + 1) % 16) == 0)
+ for (pos = 0; pos < fd->length20; pos++) {
+ DISPLAY_PRINT("%02x ", ptr[pos]);
+ if (((pos + 1) % 16) == 0)
DISPLAY_PRINT("\n");
}
DISPLAY_PRINT("\n");
}
+
#else
#define dpaa_display_frame_info(a, b, c)
#endif
diff --git a/drivers/net/dpaa/dpaa_rxtx.h b/drivers/net/dpaa/dpaa_rxtx.h
index 215bdeaf7f..392926e286 100644
--- a/drivers/net/dpaa/dpaa_rxtx.h
+++ b/drivers/net/dpaa/dpaa_rxtx.h
@@ -274,4 +274,9 @@ void dpaa_rx_cb_prepare(struct qm_dqrr_entry *dq, void **bufs);
void dpaa_rx_cb_no_prefetch(struct qman_fq **fq,
struct qm_dqrr_entry **dqrr, void **bufs, int num_bufs);
+#ifdef RTE_LIBRTE_DPAA_DEBUG_DRIVER
+void
+dpaa_force_display_frame_set(int set);
+#endif
+
#endif
--
2.25.1
* [PATCH v5 13/18] net/dpaa: support mempool debug
2024-10-01 11:03 ` [PATCH v5 " Hemant Agrawal
` (11 preceding siblings ...)
2024-10-01 11:03 ` [PATCH v5 12/18] net/dpaa: enhance DPAA frame display Hemant Agrawal
@ 2024-10-01 11:03 ` Hemant Agrawal
2024-10-01 11:03 ` [PATCH v5 14/18] bus/dpaa: add OH port mode for dpaa eth Hemant Agrawal
` (6 subsequent siblings)
19 siblings, 0 replies; 129+ messages in thread
From: Hemant Agrawal @ 2024-10-01 11:03 UTC (permalink / raw)
To: dev; +Cc: ferruh.yigit, Gagandeep Singh
From: Gagandeep Singh <g.singh@nxp.com>
This patch adds compile-time support for debugging mempool
corruption in the dpaa driver.
Signed-off-by: Gagandeep Singh <g.singh@nxp.com>
---
drivers/net/dpaa/dpaa_rxtx.c | 40 ++++++++++++++++++++++++++++++++++++
1 file changed, 40 insertions(+)
diff --git a/drivers/net/dpaa/dpaa_rxtx.c b/drivers/net/dpaa/dpaa_rxtx.c
index 945c84ab10..d82c6f3be2 100644
--- a/drivers/net/dpaa/dpaa_rxtx.c
+++ b/drivers/net/dpaa/dpaa_rxtx.c
@@ -494,6 +494,10 @@ dpaa_eth_sg_to_mbuf(const struct qm_fd *fd, uint32_t ifid)
first_seg->data_len = sg_temp->length;
first_seg->pkt_len = sg_temp->length;
rte_mbuf_refcnt_set(first_seg, 1);
+#ifdef RTE_LIBRTE_MEMPOOL_DEBUG
+ rte_mempool_check_cookies(rte_mempool_from_obj((void *)first_seg),
+ (void **)&first_seg, 1, 1);
+#endif
first_seg->port = ifid;
first_seg->nb_segs = 1;
@@ -511,6 +515,10 @@ dpaa_eth_sg_to_mbuf(const struct qm_fd *fd, uint32_t ifid)
first_seg->pkt_len += sg_temp->length;
first_seg->nb_segs += 1;
rte_mbuf_refcnt_set(cur_seg, 1);
+#ifdef RTE_LIBRTE_MEMPOOL_DEBUG
+ rte_mempool_check_cookies(rte_mempool_from_obj((void *)cur_seg),
+ (void **)&cur_seg, 1, 1);
+#endif
prev_seg->next = cur_seg;
if (sg_temp->final) {
cur_seg->next = NULL;
@@ -522,6 +530,10 @@ dpaa_eth_sg_to_mbuf(const struct qm_fd *fd, uint32_t ifid)
first_seg->pkt_len, first_seg->nb_segs);
dpaa_eth_packet_info(first_seg, vaddr);
+#ifdef RTE_LIBRTE_MEMPOOL_DEBUG
+ rte_mempool_check_cookies(rte_mempool_from_obj((void *)temp),
+ (void **)&temp, 1, 1);
+#endif
rte_pktmbuf_free_seg(temp);
return first_seg;
@@ -562,6 +574,10 @@ dpaa_eth_fd_to_mbuf(const struct qm_fd *fd, uint32_t ifid)
mbuf->ol_flags = 0;
mbuf->next = NULL;
rte_mbuf_refcnt_set(mbuf, 1);
+#ifdef RTE_LIBRTE_MEMPOOL_DEBUG
+ rte_mempool_check_cookies(rte_mempool_from_obj((void *)mbuf),
+ (void **)&mbuf, 1, 1);
+#endif
dpaa_eth_packet_info(mbuf, mbuf->buf_addr);
return mbuf;
@@ -676,6 +692,10 @@ dpaa_rx_cb_no_prefetch(struct qman_fq **fq, struct qm_dqrr_entry **dqrr,
mbuf->ol_flags = 0;
mbuf->next = NULL;
rte_mbuf_refcnt_set(mbuf, 1);
+#ifdef RTE_LIBRTE_MEMPOOL_DEBUG
+ rte_mempool_check_cookies(rte_mempool_from_obj((void *)mbuf),
+ (void **)&mbuf, 1, 1);
+#endif
dpaa_eth_packet_info(mbuf, mbuf->buf_addr);
dpaa_display_frame_info(fd, fq[0]->fqid, true);
if (dpaa_ieee_1588) {
@@ -722,6 +742,10 @@ dpaa_rx_cb(struct qman_fq **fq, struct qm_dqrr_entry **dqrr,
mbuf->ol_flags = 0;
mbuf->next = NULL;
rte_mbuf_refcnt_set(mbuf, 1);
+#ifdef RTE_LIBRTE_MEMPOOL_DEBUG
+ rte_mempool_check_cookies(rte_mempool_from_obj((void *)mbuf),
+ (void **)&mbuf, 1, 1);
+#endif
dpaa_eth_packet_info(mbuf, mbuf->buf_addr);
dpaa_display_frame_info(fd, fq[0]->fqid, true);
if (dpaa_ieee_1588) {
@@ -972,6 +996,10 @@ dpaa_eth_mbuf_to_sg_fd(struct rte_mbuf *mbuf,
return -1;
}
+#ifdef RTE_LIBRTE_MEMPOOL_DEBUG
+ rte_mempool_check_cookies(rte_mempool_from_obj((void *)temp),
+ (void **)&temp, 1, 0);
+#endif
fd->cmd = 0;
fd->opaque_addr = 0;
@@ -1017,6 +1045,10 @@ dpaa_eth_mbuf_to_sg_fd(struct rte_mbuf *mbuf,
} else {
sg_temp->bpid =
DPAA_MEMPOOL_TO_BPID(cur_seg->pool);
+#ifdef RTE_LIBRTE_MEMPOOL_DEBUG
+ rte_mempool_check_cookies(rte_mempool_from_obj((void *)cur_seg),
+ (void **)&cur_seg, 1, 0);
+#endif
}
} else if (RTE_MBUF_HAS_EXTBUF(cur_seg)) {
free_buf[*free_count].seg = cur_seg;
@@ -1074,6 +1106,10 @@ tx_on_dpaa_pool_unsegmented(struct rte_mbuf *mbuf,
* released by BMAN.
*/
DPAA_MBUF_TO_CONTIG_FD(mbuf, fd_arr, bp_info->bpid);
+#ifdef RTE_LIBRTE_MEMPOOL_DEBUG
+ rte_mempool_check_cookies(rte_mempool_from_obj((void *)mbuf),
+ (void **)&mbuf, 1, 0);
+#endif
}
} else if (RTE_MBUF_HAS_EXTBUF(mbuf)) {
buf_to_free[*free_count].seg = mbuf;
@@ -1302,6 +1338,10 @@ dpaa_eth_queue_tx(void *q, struct rte_mbuf **bufs, uint16_t nb_bufs)
DPAA_TX_CKSUM_OFFLOAD_MASK)
dpaa_unsegmented_checksum(mbuf,
&fd_arr[loop]);
+#ifdef RTE_LIBRTE_MEMPOOL_DEBUG
+ rte_mempool_check_cookies(rte_mempool_from_obj((void *)mbuf),
+ (void **)&mbuf, 1, 0);
+#endif
continue;
}
} else {
--
2.25.1
* [PATCH v5 14/18] bus/dpaa: add OH port mode for dpaa eth
2024-10-01 11:03 ` [PATCH v5 " Hemant Agrawal
` (12 preceding siblings ...)
2024-10-01 11:03 ` [PATCH v5 13/18] net/dpaa: support mempool debug Hemant Agrawal
@ 2024-10-01 11:03 ` Hemant Agrawal
2024-10-01 11:03 ` [PATCH v5 15/18] bus/dpaa: add ONIC port mode for the DPAA eth Hemant Agrawal
` (5 subsequent siblings)
19 siblings, 0 replies; 129+ messages in thread
From: Hemant Agrawal @ 2024-10-01 11:03 UTC (permalink / raw)
To: dev; +Cc: ferruh.yigit, Rohit Raj
From: Rohit Raj <rohit.raj@nxp.com>
The NXP DPAA architecture supports the concept of a DPAA port working
as an Offline Port, i.e. not connected to an actual MAC. It is a
hardware-assisted IPC mechanism for communicating between two
applications.
An Offline (O/H) port is a type of hardware port which is able to
dequeue from and enqueue to a QMan queue. The FMan applies a Parse
Classify Distribute (PCD) flow and (if configured to do so) enqueues
the frame back into a QMan queue.
The FMan is able to copy the frame into new buffers and enqueue it
back to the QMan, so these ports can be used to send and receive
packets between two applications.
An O/H port has two queues: one to receive and one to send packets.
It loops back on its Tx queue all packets received on its Rx queue.
This behaviour is driven entirely by the device tree: during the DPAA
bus scan, the port is classified as an OH port based on the platform
device properties in the device tree.
This patch adds support in the driver to use a dpaa eth port in OH
mode with DPDK applications.
Signed-off-by: Rohit Raj <rohit.raj@nxp.com>
---
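Illustration only (not part of this patch): from the application's point of
view an OH port is just another ethdev, so the loopback can be exercised
with the normal burst APIs. oh_port_id, the mbuf pool and the queue setup
are assumed to exist already:

	#include <rte_ethdev.h>
	#include <rte_mbuf.h>

	/* Send a burst to the OH port and drain whatever it loops back. */
	static uint16_t
	oh_loopback_sketch(uint16_t oh_port_id, struct rte_mbuf **pkts,
			   uint16_t nb_pkts)
	{
		struct rte_mbuf *rx_pkts[32];
		uint16_t sent, recvd, i;

		sent = rte_eth_tx_burst(oh_port_id, 0, pkts, nb_pkts);
		RTE_SET_USED(sent);

		/* Frames sent above come back on the Rx queue of the port. */
		recvd = rte_eth_rx_burst(oh_port_id, 0, rx_pkts,
					 RTE_DIM(rx_pkts));
		for (i = 0; i < recvd; i++)
			rte_pktmbuf_free(rx_pkts[i]);

		return recvd;
	}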
doc/guides/nics/dpaa.rst | 26 ++-
drivers/bus/dpaa/base/fman/fman.c | 259 ++++++++++++++--------
drivers/bus/dpaa/base/fman/fman_hw.c | 24 +-
drivers/bus/dpaa/base/fman/netcfg_layer.c | 19 +-
drivers/bus/dpaa/dpaa_bus.c | 23 +-
drivers/bus/dpaa/include/fman.h | 31 ++-
drivers/net/dpaa/dpaa_ethdev.c | 85 ++++++-
drivers/net/dpaa/dpaa_ethdev.h | 6 +
drivers/net/dpaa/dpaa_flow.c | 51 +++--
9 files changed, 378 insertions(+), 146 deletions(-)
diff --git a/doc/guides/nics/dpaa.rst b/doc/guides/nics/dpaa.rst
index edf7a7e350..47dcce334c 100644
--- a/doc/guides/nics/dpaa.rst
+++ b/doc/guides/nics/dpaa.rst
@@ -1,5 +1,5 @@
.. SPDX-License-Identifier: BSD-3-Clause
- Copyright 2017,2020 NXP
+ Copyright 2017,2020-2024 NXP
DPAA Poll Mode Driver
@@ -136,6 +136,8 @@ RTE framework and DPAA internal components/drivers.
The Ethernet driver is bound to a FMAN port and implements the interfaces
needed to connect the DPAA network interface to the network stack.
Each FMAN Port corresponds to a DPDK network interface.
+- PMD also supports OH mode, where the port works as a HW assisted
+ virtual port without actually connecting to a physical MAC.
Features
@@ -149,6 +151,8 @@ Features
- Checksum offload
- Promiscuous mode
- IEEE1588 PTP
+ - OH Port for inter application communication
+
DPAA Mempool Driver
~~~~~~~~~~~~~~~~~~~
@@ -326,6 +330,26 @@ FMLIB
`Kernel FMD Driver
<https://source.codeaurora.org/external/qoriq/qoriq-components/linux/tree/drivers/net/ethernet/freescale/sdk_fman?h=linux-4.19-rt>`_.
+OH Port
+~~~~~~~
+ Offline(O/H) port is a type of hardware port which is able to dequeue and
+ enqueue from/to a QMan queue. The FMan applies a Parse Classify Distribute (PCD)
+ flow and (if configured to do so) enqueues the frame back in a QMan queue.
+
+ The FMan is able to copy the frame into new buffers and enqueue back to the
+ QMan. This means these ports can be used to send and receive packets between two
+ applications as well.
+
+ An O/H port has two queues. One to receive and one to send the packets. It will
+ loopback all the packets on Tx queue which are received on Rx queue.
+
+
+ -------- Tx Packets ---------
+ | App | - - - - - - - - - > | O/H |
+ | | < - - - - - - - - - | Port |
+ -------- Rx Packets ---------
+
+
VSP (Virtual Storage Profile)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The storage profiled are means to provide virtualized interface. A ranges of
diff --git a/drivers/bus/dpaa/base/fman/fman.c b/drivers/bus/dpaa/base/fman/fman.c
index e2b7120237..f817305ab7 100644
--- a/drivers/bus/dpaa/base/fman/fman.c
+++ b/drivers/bus/dpaa/base/fman/fman.c
@@ -246,26 +246,34 @@ fman_if_init(const struct device_node *dpa_node)
uint64_t port_cell_idx_val = 0;
uint64_t ext_args_cell_idx_val = 0;
- const struct device_node *mac_node = NULL, *tx_node, *ext_args_node;
- const struct device_node *pool_node, *fman_node, *rx_node;
+ const struct device_node *mac_node = NULL, *ext_args_node;
+ const struct device_node *pool_node, *fman_node;
+ const struct device_node *rx_node = NULL, *tx_node = NULL;
+ const struct device_node *oh_node = NULL;
const uint32_t *regs_addr = NULL;
const char *mname, *fname;
const char *dname = dpa_node->full_name;
size_t lenp;
- int _errno, is_shared = 0;
+ int _errno, is_shared = 0, is_offline = 0;
const char *char_prop;
uint32_t na;
if (of_device_is_available(dpa_node) == false)
return 0;
- if (!of_device_is_compatible(dpa_node, "fsl,dpa-ethernet-init") &&
- !of_device_is_compatible(dpa_node, "fsl,dpa-ethernet")) {
+ if (of_device_is_compatible(dpa_node, "fsl,dpa-oh"))
+ is_offline = 1;
+
+ if (!of_device_is_compatible(dpa_node, "fsl,dpa-oh") &&
+ !of_device_is_compatible(dpa_node, "fsl,dpa-ethernet-init") &&
+ !of_device_is_compatible(dpa_node, "fsl,dpa-ethernet")) {
return 0;
}
- rprop = "fsl,qman-frame-queues-rx";
- mprop = "fsl,fman-mac";
+ rprop = is_offline ? "fsl,qman-frame-queues-oh" :
+ "fsl,qman-frame-queues-rx";
+ mprop = is_offline ? "fsl,fman-oh-port" :
+ "fsl,fman-mac";
/* Obtain the MAC node used by this interface except macless */
mac_phandle = of_get_property(dpa_node, mprop, &lenp);
@@ -281,27 +289,43 @@ fman_if_init(const struct device_node *dpa_node)
}
mname = mac_node->full_name;
- /* Extract the Rx and Tx ports */
- ports_phandle = of_get_property(mac_node, "fsl,port-handles",
- &lenp);
- if (!ports_phandle)
- ports_phandle = of_get_property(mac_node, "fsl,fman-ports",
+ if (!is_offline) {
+ /* Extract the Rx and Tx ports */
+ ports_phandle = of_get_property(mac_node, "fsl,port-handles",
&lenp);
- if (!ports_phandle) {
- FMAN_ERR(-EINVAL, "%s: no fsl,port-handles",
- mname);
- return -EINVAL;
- }
- assert(lenp == (2 * sizeof(phandle)));
- rx_node = of_find_node_by_phandle(ports_phandle[0]);
- if (!rx_node) {
- FMAN_ERR(-ENXIO, "%s: bad fsl,port-handle[0]", mname);
- return -ENXIO;
- }
- tx_node = of_find_node_by_phandle(ports_phandle[1]);
- if (!tx_node) {
- FMAN_ERR(-ENXIO, "%s: bad fsl,port-handle[1]", mname);
- return -ENXIO;
+ if (!ports_phandle)
+ ports_phandle = of_get_property(mac_node, "fsl,fman-ports",
+ &lenp);
+ if (!ports_phandle) {
+ FMAN_ERR(-EINVAL, "%s: no fsl,port-handles",
+ mname);
+ return -EINVAL;
+ }
+ assert(lenp == (2 * sizeof(phandle)));
+ rx_node = of_find_node_by_phandle(ports_phandle[0]);
+ if (!rx_node) {
+ FMAN_ERR(-ENXIO, "%s: bad fsl,port-handle[0]", mname);
+ return -ENXIO;
+ }
+ tx_node = of_find_node_by_phandle(ports_phandle[1]);
+ if (!tx_node) {
+ FMAN_ERR(-ENXIO, "%s: bad fsl,port-handle[1]", mname);
+ return -ENXIO;
+ }
+ } else {
+ /* Extract the OH ports */
+ ports_phandle = of_get_property(dpa_node, "fsl,fman-oh-port",
+ &lenp);
+ if (!ports_phandle) {
+ FMAN_ERR(-EINVAL, "%s: no fsl,fman-oh-port", dname);
+ return -EINVAL;
+ }
+ assert(lenp == (sizeof(phandle)));
+ oh_node = of_find_node_by_phandle(ports_phandle[0]);
+ if (!oh_node) {
+ FMAN_ERR(-ENXIO, "%s: bad fsl,port-handle[0]", mname);
+ return -ENXIO;
+ }
}
/* Check if the port is shared interface */
@@ -430,17 +454,19 @@ fman_if_init(const struct device_node *dpa_node)
* Set A2V, OVOM, EBD bits in contextA to allow external
* buffer deallocation by fman.
*/
- fman_dealloc_bufs_mask_hi = FMAN_V3_CONTEXTA_EN_A2V |
- FMAN_V3_CONTEXTA_EN_OVOM;
- fman_dealloc_bufs_mask_lo = FMAN_V3_CONTEXTA_EN_EBD;
+ fman_dealloc_bufs_mask_hi = DPAA_FQD_CTX_A_A2_FIELD_VALID |
+ DPAA_FQD_CTX_A_OVERRIDE_OMB;
+ fman_dealloc_bufs_mask_lo = DPAA_FQD_CTX_A2_EBD_BIT;
} else {
fman_dealloc_bufs_mask_hi = 0;
fman_dealloc_bufs_mask_lo = 0;
}
- /* Is the MAC node 1G, 2.5G, 10G? */
+ /* Is the MAC node 1G, 2.5G, 10G or offline? */
__if->__if.is_memac = 0;
- if (of_device_is_compatible(mac_node, "fsl,fman-1g-mac"))
+ if (is_offline)
+ __if->__if.mac_type = fman_offline;
+ else if (of_device_is_compatible(mac_node, "fsl,fman-1g-mac"))
__if->__if.mac_type = fman_mac_1g;
else if (of_device_is_compatible(mac_node, "fsl,fman-10g-mac"))
__if->__if.mac_type = fman_mac_10g;
@@ -468,46 +494,81 @@ fman_if_init(const struct device_node *dpa_node)
goto err;
}
- /*
- * For MAC ports, we cannot rely on cell-index. In
- * T2080, two of the 10G ports on single FMAN have same
- * duplicate cell-indexes as the other two 10G ports on
- * same FMAN. Hence, we now rely upon addresses of the
- * ports from device tree to deduce the index.
- */
+ if (!is_offline) {
+ /*
+ * For MAC ports, we cannot rely on cell-index. In
+ * T2080, two of the 10G ports on single FMAN have same
+ * duplicate cell-indexes as the other two 10G ports on
+ * same FMAN. Hence, we now rely upon addresses of the
+ * ports from device tree to deduce the index.
+ */
- _errno = fman_get_mac_index(regs_addr_host, &__if->__if.mac_idx);
- if (_errno) {
- FMAN_ERR(-EINVAL, "Invalid register address: %" PRIx64,
- regs_addr_host);
- goto err;
- }
+ _errno = fman_get_mac_index(regs_addr_host, &__if->__if.mac_idx);
+ if (_errno) {
+ FMAN_ERR(-EINVAL, "Invalid register address: %" PRIx64,
+ regs_addr_host);
+ goto err;
+ }
+ } else {
+ cell_idx = of_get_property(oh_node, "cell-index", &lenp);
+ if (!cell_idx) {
+ FMAN_ERR(-ENXIO, "%s: no cell-index)\n",
+ oh_node->full_name);
+ goto err;
+ }
+ assert(lenp == sizeof(*cell_idx));
+ cell_idx_host = of_read_number(cell_idx,
+ lenp / sizeof(phandle));
- /* Extract the MAC address for private and shared interfaces */
- mac_addr = of_get_property(mac_node, "local-mac-address",
- &lenp);
- if (!mac_addr) {
- FMAN_ERR(-EINVAL, "%s: no local-mac-address",
- mname);
- goto err;
+ __if->__if.mac_idx = cell_idx_host;
}
- memcpy(&__if->__if.mac_addr, mac_addr, ETHER_ADDR_LEN);
- /* Extract the channel ID (from tx-port-handle) */
- tx_channel_id = of_get_property(tx_node, "fsl,qman-channel-id",
- &lenp);
- if (!tx_channel_id) {
- FMAN_ERR(-EINVAL, "%s: no fsl-qman-channel-id",
- tx_node->full_name);
- goto err;
+ if (!is_offline) {
+ /* Extract the MAC address for private and shared interfaces */
+ mac_addr = of_get_property(mac_node, "local-mac-address",
+ &lenp);
+ if (!mac_addr) {
+ FMAN_ERR(-EINVAL, "%s: no local-mac-address",
+ mname);
+ goto err;
+ }
+ memcpy(&__if->__if.mac_addr, mac_addr, ETHER_ADDR_LEN);
+
+ /* Extract the channel ID (from tx-port-handle) */
+ tx_channel_id = of_get_property(tx_node, "fsl,qman-channel-id",
+ &lenp);
+ if (!tx_channel_id) {
+ FMAN_ERR(-EINVAL, "%s: no fsl-qman-channel-id",
+ tx_node->full_name);
+ goto err;
+ }
+ } else {
+ /* Extract the channel ID (from mac) */
+ tx_channel_id = of_get_property(mac_node, "fsl,qman-channel-id",
+ &lenp);
+ if (!tx_channel_id) {
+ FMAN_ERR(-EINVAL, "%s: no fsl-qman-channel-id",
+ tx_node->full_name);
+ goto err;
+ }
}
- regs_addr = of_get_address(rx_node, 0, &__if->regs_size, NULL);
+ na = of_n_addr_cells(mac_node);
+ __if->__if.tx_channel_id = of_read_number(tx_channel_id, na);
+
+ if (!is_offline)
+ regs_addr = of_get_address(rx_node, 0, &__if->regs_size, NULL);
+ else
+ regs_addr = of_get_address(oh_node, 0, &__if->regs_size, NULL);
if (!regs_addr) {
FMAN_ERR(-EINVAL, "of_get_address(%s)", mname);
goto err;
}
- phys_addr = of_translate_address(rx_node, regs_addr);
+
+ if (!is_offline)
+ phys_addr = of_translate_address(rx_node, regs_addr);
+ else
+ phys_addr = of_translate_address(oh_node, regs_addr);
if (!phys_addr) {
FMAN_ERR(-EINVAL, "of_translate_address(%s, %p)",
mname, regs_addr);
@@ -521,23 +582,27 @@ fman_if_init(const struct device_node *dpa_node)
goto err;
}
- regs_addr = of_get_address(tx_node, 0, &__if->regs_size, NULL);
- if (!regs_addr) {
- FMAN_ERR(-EINVAL, "of_get_address(%s)\n", mname);
- goto err;
- }
- phys_addr = of_translate_address(tx_node, regs_addr);
- if (!phys_addr) {
- FMAN_ERR(-EINVAL, "of_translate_address(%s, %p)\n",
- mname, regs_addr);
- goto err;
- }
- __if->tx_bmi_map = mmap(NULL, __if->regs_size,
- PROT_READ | PROT_WRITE, MAP_SHARED,
- fman_ccsr_map_fd, phys_addr);
- if (__if->tx_bmi_map == MAP_FAILED) {
- FMAN_ERR(-errno, "mmap(0x%"PRIx64")\n", phys_addr);
- goto err;
+ if (!is_offline) {
+ regs_addr = of_get_address(tx_node, 0, &__if->regs_size, NULL);
+ if (!regs_addr) {
+ FMAN_ERR(-EINVAL, "of_get_address(%s)\n", mname);
+ goto err;
+ }
+
+ phys_addr = of_translate_address(tx_node, regs_addr);
+ if (!phys_addr) {
+ FMAN_ERR(-EINVAL, "of_translate_address(%s, %p)\n",
+ mname, regs_addr);
+ goto err;
+ }
+
+ __if->tx_bmi_map = mmap(NULL, __if->regs_size,
+ PROT_READ | PROT_WRITE, MAP_SHARED,
+ fman_ccsr_map_fd, phys_addr);
+ if (__if->tx_bmi_map == MAP_FAILED) {
+ FMAN_ERR(-errno, "mmap(0x%"PRIx64")\n", phys_addr);
+ goto err;
+ }
}
if (!rtc_map) {
@@ -554,11 +619,6 @@ fman_if_init(const struct device_node *dpa_node)
__if->rtc_map = rtc_map;
}
- /* No channel ID for MAC-less */
- assert(lenp == sizeof(*tx_channel_id));
- na = of_n_addr_cells(mac_node);
- __if->__if.tx_channel_id = of_read_number(tx_channel_id, na);
-
/* Extract the Rx FQIDs. (Note, the device representation is silly,
* there are "counts" that must always be 1.)
*/
@@ -568,13 +628,26 @@ fman_if_init(const struct device_node *dpa_node)
goto err;
}
- /* Check if "fsl,qman-frame-queues-rx" in dtb file is valid entry or
- * not. A valid entry contains at least 4 entries, rx_error_queue,
- * rx_error_queue_count, fqid_rx_def and rx_error_queue_count.
+ /*
+ * Check if "fsl,qman-frame-queues-rx/oh" in dtb file is valid entry or
+ * not.
+ *
+ * A valid rx entry contains either 4 or 6 entries. Mandatory entries
+ * are rx_error_queue, rx_error_queue_count, fqid_rx_def and
+ * fqid_rx_def_count. Optional entries are fqid_rx_pcd and
+ * fqid_rx_pcd_count.
+ *
+ * A valid oh entry contains 4 entries. Those entries are
+ * rx_error_queue, rx_error_queue_count, fqid_rx_def and
+ * fqid_rx_def_count.
*/
- assert(lenp >= (4 * sizeof(phandle)));
- na = of_n_addr_cells(mac_node);
+ if (!is_offline)
+ assert(lenp == (4 * sizeof(phandle)) ||
+ lenp == (6 * sizeof(phandle)));
+ else
+ assert(lenp == (4 * sizeof(phandle)));
+
/* Get rid of endianness (issues). Convert to host byte order */
rx_phandle_host[0] = of_read_number(&rx_phandle[0], na);
rx_phandle_host[1] = of_read_number(&rx_phandle[1], na);
@@ -595,6 +668,9 @@ fman_if_init(const struct device_node *dpa_node)
__if->__if.fqid_rx_pcd_count = rx_phandle_host[5];
}
+ if (is_offline)
+ goto oh_init_done;
+
/* Extract the Tx FQIDs */
tx_phandle = of_get_property(dpa_node,
"fsl,qman-frame-queues-tx", &lenp);
@@ -706,6 +782,7 @@ fman_if_init(const struct device_node *dpa_node)
if (is_shared)
__if->__if.is_shared_mac = 1;
+oh_init_done:
fman_if_vsp_init(__if);
/* Parsing of the network interface is complete, add it to the list */
@@ -769,6 +846,10 @@ fman_finish(void)
list_for_each_entry_safe(__if, tmpif, &__ifs, __if.node) {
int _errno;
+ /* No need to disable Offline port */
+ if (__if->__if.mac_type == fman_offline)
+ continue;
+
/* disable Rx and Tx */
if ((__if->__if.mac_type == fman_mac_1g) &&
(!__if->__if.is_memac))
diff --git a/drivers/bus/dpaa/base/fman/fman_hw.c b/drivers/bus/dpaa/base/fman/fman_hw.c
index 4fc41c1ae9..1f61ae406b 100644
--- a/drivers/bus/dpaa/base/fman/fman_hw.c
+++ b/drivers/bus/dpaa/base/fman/fman_hw.c
@@ -1,6 +1,6 @@
/* SPDX-License-Identifier: BSD-3-Clause
*
- * Copyright 2017,2020,2022 NXP
+ * Copyright 2017,2020,2022-2023 NXP
*
*/
@@ -88,6 +88,10 @@ fman_if_add_hash_mac_addr(struct fman_if *p, uint8_t *eth)
struct __fman_if *__if = container_of(p, struct __fman_if, __if);
+ /* Add hash mac addr not supported on Offline port */
+ if (__if->__if.mac_type == fman_offline)
+ return 0;
+
eth_addr = ETH_ADDR_TO_UINT64(eth);
if (!(eth_addr & GROUP_ADDRESS))
@@ -109,6 +113,15 @@ fman_if_get_primary_mac_addr(struct fman_if *p, uint8_t *eth)
void *mac_reg =
&((struct memac_regs *)__if->ccsr_map)->mac_addr0.mac_addr_l;
u32 val = in_be32(mac_reg);
+ int i;
+
+ /* Get mac addr not supported on Offline port */
+ /* Return NULL mac address */
+ if (__if->__if.mac_type == fman_offline) {
+ for (i = 0; i < 6; i++)
+ eth[i] = 0x0;
+ return 0;
+ }
eth[0] = (val & 0x000000ff) >> 0;
eth[1] = (val & 0x0000ff00) >> 8;
@@ -130,6 +143,10 @@ fman_if_clear_mac_addr(struct fman_if *p, uint8_t addr_num)
struct __fman_if *m = container_of(p, struct __fman_if, __if);
void *reg;
+ /* Clear mac addr not supported on Offline port */
+ if (m->__if.mac_type == fman_offline)
+ return;
+
if (addr_num) {
reg = &((struct memac_regs *)m->ccsr_map)->
mac_addr[addr_num-1].mac_addr_l;
@@ -149,10 +166,13 @@ int
fman_if_add_mac_addr(struct fman_if *p, uint8_t *eth, uint8_t addr_num)
{
struct __fman_if *m = container_of(p, struct __fman_if, __if);
-
void *reg;
u32 val;
+ /* Set mac addr not supported on Offline port */
+ if (m->__if.mac_type == fman_offline)
+ return 0;
+
memcpy(&m->__if.mac_addr, eth, ETHER_ADDR_LEN);
if (addr_num)
diff --git a/drivers/bus/dpaa/base/fman/netcfg_layer.c b/drivers/bus/dpaa/base/fman/netcfg_layer.c
index 57d87afcb0..e6a6ed1eb6 100644
--- a/drivers/bus/dpaa/base/fman/netcfg_layer.c
+++ b/drivers/bus/dpaa/base/fman/netcfg_layer.c
@@ -1,7 +1,7 @@
/* SPDX-License-Identifier: (BSD-3-Clause OR GPL-2.0)
*
* Copyright 2010-2016 Freescale Semiconductor Inc.
- * Copyright 2017-2019 NXP
+ * Copyright 2017-2019,2023 NXP
*
*/
#include <inttypes.h>
@@ -44,6 +44,7 @@ dump_netcfg(struct netcfg_info *cfg_ptr, FILE *f)
fprintf(f, "\n+ Fman %d, MAC %d (%s);\n",
__if->fman_idx, __if->mac_idx,
+ (__if->mac_type == fman_offline) ? "OFFLINE" :
(__if->mac_type == fman_mac_1g) ? "1G" :
(__if->mac_type == fman_mac_2_5g) ? "2.5G" : "10G");
@@ -56,13 +57,15 @@ dump_netcfg(struct netcfg_info *cfg_ptr, FILE *f)
fprintf(f, "\tfqid_rx_def: 0x%x\n", p_cfg->rx_def);
fprintf(f, "\tfqid_rx_err: 0x%x\n", __if->fqid_rx_err);
- fprintf(f, "\tfqid_tx_err: 0x%x\n", __if->fqid_tx_err);
- fprintf(f, "\tfqid_tx_confirm: 0x%x\n", __if->fqid_tx_confirm);
- fman_if_for_each_bpool(bpool, __if)
- fprintf(f, "\tbuffer pool: (bpid=%d, count=%"PRId64
- " size=%"PRId64", addr=0x%"PRIx64")\n",
- bpool->bpid, bpool->count, bpool->size,
- bpool->addr);
+ if (__if->mac_type != fman_offline) {
+ fprintf(f, "\tfqid_tx_err: 0x%x\n", __if->fqid_tx_err);
+ fprintf(f, "\tfqid_tx_confirm: 0x%x\n", __if->fqid_tx_confirm);
+ fman_if_for_each_bpool(bpool, __if)
+ fprintf(f, "\tbuffer pool: (bpid=%d, count=%"PRId64
+ " size=%"PRId64", addr=0x%"PRIx64")\n",
+ bpool->bpid, bpool->count, bpool->size,
+ bpool->addr);
+ }
}
}
#endif /* RTE_LIBRTE_DPAA_DEBUG_DRIVER */
diff --git a/drivers/bus/dpaa/dpaa_bus.c b/drivers/bus/dpaa/dpaa_bus.c
index 1f6997c77e..6e4ec90670 100644
--- a/drivers/bus/dpaa/dpaa_bus.c
+++ b/drivers/bus/dpaa/dpaa_bus.c
@@ -43,6 +43,7 @@
#include <fsl_qman.h>
#include <fsl_bman.h>
#include <netcfg.h>
+#include <fman.h>
struct rte_dpaa_bus {
struct rte_bus bus;
@@ -203,9 +204,12 @@ dpaa_create_device_list(void)
/* Create device name */
memset(dev->name, 0, RTE_ETH_NAME_MAX_LEN);
- sprintf(dev->name, "fm%d-mac%d", (fman_intf->fman_idx + 1),
- fman_intf->mac_idx);
- DPAA_BUS_LOG(INFO, "%s netdev added", dev->name);
+ if (fman_intf->mac_type == fman_offline)
+ sprintf(dev->name, "fm%d-oh%d",
+ (fman_intf->fman_idx + 1), fman_intf->mac_idx);
+ else
+ sprintf(dev->name, "fm%d-mac%d",
+ (fman_intf->fman_idx + 1), fman_intf->mac_idx);
dev->device.name = dev->name;
dev->device.devargs = dpaa_devargs_lookup(dev);
@@ -441,7 +445,7 @@ static int
rte_dpaa_bus_parse(const char *name, void *out)
{
unsigned int i, j;
- size_t delta;
+ size_t delta, dev_delta;
size_t max_name_len;
/* There are two ways of passing device name, with and without
@@ -458,16 +462,25 @@ rte_dpaa_bus_parse(const char *name, void *out)
delta = 5;
}
+ /* dev_delta points to the dev name (mac/oh/onic). Not valid for
+ * dpaa_sec.
+ */
+ dev_delta = delta + sizeof("fm.-") - 1;
+
if (strncmp("dpaa_sec", &name[delta], 8) == 0) {
if (sscanf(&name[delta], "dpaa_sec-%u", &i) != 1 ||
i < 1 || i > 4)
return -EINVAL;
max_name_len = sizeof("dpaa_sec-.") - 1;
+ } else if (strncmp("oh", &name[dev_delta], 2) == 0) {
+ if (sscanf(&name[delta], "fm%u-oh%u", &i, &j) != 2 ||
+ i >= 2 || j >= 16)
+ return -EINVAL;
+ max_name_len = sizeof("fm.-oh..") - 1;
} else {
if (sscanf(&name[delta], "fm%u-mac%u", &i, &j) != 2 ||
i >= 2 || j >= 16)
return -EINVAL;
-
max_name_len = sizeof("fm.-mac..") - 1;
}
diff --git a/drivers/bus/dpaa/include/fman.h b/drivers/bus/dpaa/include/fman.h
index e8bc913943..377f73bf0d 100644
--- a/drivers/bus/dpaa/include/fman.h
+++ b/drivers/bus/dpaa/include/fman.h
@@ -2,7 +2,7 @@
*
* Copyright 2010-2012 Freescale Semiconductor, Inc.
* All rights reserved.
- * Copyright 2019-2022 NXP
+ * Copyright 2019-2023 NXP
*
*/
@@ -474,11 +474,30 @@ extern int fman_ccsr_map_fd;
#define FMAN_IP_REV_1_MAJOR_MASK 0x0000FF00
#define FMAN_IP_REV_1_MAJOR_SHIFT 8
#define FMAN_V3 0x06
-#define FMAN_V3_CONTEXTA_EN_A2V 0x10000000
-#define FMAN_V3_CONTEXTA_EN_OVOM 0x02000000
-#define FMAN_V3_CONTEXTA_EN_EBD 0x80000000
-#define FMAN_CONTEXTA_DIS_CHECKSUM 0x7ull
-#define FMAN_CONTEXTA_SET_OPCODE11 0x2000000b00000000
+
+#define DPAA_FQD_CTX_A_SHIFT_BITS 24
+#define DPAA_FQD_CTX_B_SHIFT_BITS 24
+
+/* Following flags are used to set in context A hi field of FQD */
+#define DPAA_FQD_CTX_A_OVERRIDE_FQ (0x80 << DPAA_FQD_CTX_A_SHIFT_BITS)
+#define DPAA_FQD_CTX_A_IGNORE_CMD (0x40 << DPAA_FQD_CTX_A_SHIFT_BITS)
+#define DPAA_FQD_CTX_A_A1_FIELD_VALID (0x20 << DPAA_FQD_CTX_A_SHIFT_BITS)
+#define DPAA_FQD_CTX_A_A2_FIELD_VALID (0x10 << DPAA_FQD_CTX_A_SHIFT_BITS)
+#define DPAA_FQD_CTX_A_A0_FIELD_VALID (0x08 << DPAA_FQD_CTX_A_SHIFT_BITS)
+#define DPAA_FQD_CTX_A_B0_FIELD_VALID (0x04 << DPAA_FQD_CTX_A_SHIFT_BITS)
+#define DPAA_FQD_CTX_A_OVERRIDE_OMB (0x02 << DPAA_FQD_CTX_A_SHIFT_BITS)
+#define DPAA_FQD_CTX_A_RESERVED (0x01 << DPAA_FQD_CTX_A_SHIFT_BITS)
+
+/* Following flags are used to set in context A lo field of FQD */
+#define DPAA_FQD_CTX_A2_EBD_BIT (0x80 << DPAA_FQD_CTX_A_SHIFT_BITS)
+#define DPAA_FQD_CTX_A2_EBAD_BIT (0x40 << DPAA_FQD_CTX_A_SHIFT_BITS)
+#define DPAA_FQD_CTX_A2_FWD_BIT (0x20 << DPAA_FQD_CTX_A_SHIFT_BITS)
+#define DPAA_FQD_CTX_A2_NL_BIT (0x10 << DPAA_FQD_CTX_A_SHIFT_BITS)
+#define DPAA_FQD_CTX_A2_CWD_BIT (0x08 << DPAA_FQD_CTX_A_SHIFT_BITS)
+#define DPAA_FQD_CTX_A2_NENQ_BIT (0x04 << DPAA_FQD_CTX_A_SHIFT_BITS)
+#define DPAA_FQD_CTX_A2_RESERVED_BIT (0x02 << DPAA_FQD_CTX_A_SHIFT_BITS)
+#define DPAA_FQD_CTX_A2_VSPE_BIT (0x01 << DPAA_FQD_CTX_A_SHIFT_BITS)
+
extern u16 fman_ip_rev;
extern u32 fman_dealloc_bufs_mask_hi;
extern u32 fman_dealloc_bufs_mask_lo;
diff --git a/drivers/net/dpaa/dpaa_ethdev.c b/drivers/net/dpaa/dpaa_ethdev.c
index 4ead890278..f8196ddd14 100644
--- a/drivers/net/dpaa/dpaa_ethdev.c
+++ b/drivers/net/dpaa/dpaa_ethdev.c
@@ -295,7 +295,7 @@ dpaa_eth_dev_configure(struct rte_eth_dev *dev)
max_rx_pktlen = DPAA_MAX_RX_PKT_LEN;
}
- if (!fif->is_shared_mac)
+ if (fif->mac_type != fman_offline)
fman_if_set_maxfrm(dev->process_private, max_rx_pktlen);
if (rx_offloads & RTE_ETH_RX_OFFLOAD_SCATTER) {
@@ -314,6 +314,10 @@ dpaa_eth_dev_configure(struct rte_eth_dev *dev)
dpaa_write_fm_config_to_file();
}
+ /* Disable interrupt support on offline port*/
+ if (fif->mac_type == fman_offline)
+ return 0;
+
/* if the interrupts were configured on this devices*/
if (intr_handle && rte_intr_fd_get(intr_handle)) {
if (dev->data->dev_conf.intr_conf.lsc != 0)
@@ -531,6 +535,9 @@ static int dpaa_eth_dev_close(struct rte_eth_dev *dev)
ret = dpaa_eth_dev_stop(dev);
+ if (fif->mac_type == fman_offline)
+ return 0;
+
/* Reset link to autoneg */
if (link->link_status && !link->link_autoneg)
dpaa_restart_link_autoneg(__fif->node_name);
@@ -644,6 +651,11 @@ static int dpaa_eth_dev_info(struct rte_eth_dev *dev,
| RTE_ETH_LINK_SPEED_1G
| RTE_ETH_LINK_SPEED_2_5G
| RTE_ETH_LINK_SPEED_10G;
+ } else if (fif->mac_type == fman_offline) {
+ dev_info->speed_capa = RTE_ETH_LINK_SPEED_10M_HD
+ | RTE_ETH_LINK_SPEED_10M
+ | RTE_ETH_LINK_SPEED_100M_HD
+ | RTE_ETH_LINK_SPEED_100M;
} else {
DPAA_PMD_ERR("invalid link_speed: %s, %d",
dpaa_intf->name, fif->mac_type);
@@ -744,7 +756,8 @@ static int dpaa_eth_link_update(struct rte_eth_dev *dev,
ioctl_version = dpaa_get_ioctl_version_number();
- if (dev->data->dev_flags & RTE_ETH_DEV_INTR_LSC) {
+ if (dev->data->dev_flags & RTE_ETH_DEV_INTR_LSC &&
+ fif->mac_type != fman_offline) {
for (count = 0; count <= MAX_REPEAT_TIME; count++) {
ret = dpaa_get_link_status(__fif->node_name, link);
if (ret)
@@ -757,6 +770,11 @@ static int dpaa_eth_link_update(struct rte_eth_dev *dev,
}
} else {
link->link_status = dpaa_intf->valid;
+ if (fif->mac_type == fman_offline) {
+ /*Max supported rate for O/H port is 3.75Mpps*/
+ link->link_speed = RTE_ETH_SPEED_NUM_2_5G;
+ link->link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
+ }
}
if (ioctl_version < 2) {
@@ -1077,7 +1095,7 @@ int dpaa_eth_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
dpaa_intf->bp_info = DPAA_MEMPOOL_TO_POOL_INFO(mp);
/* For shared interface, it's done in kernel, skip.*/
- if (!fif->is_shared_mac)
+ if (!fif->is_shared_mac && fif->mac_type != fman_offline)
dpaa_fman_if_pool_setup(dev);
if (fif->num_profiles) {
@@ -1222,8 +1240,11 @@ int dpaa_eth_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
rxq->fqid, ret);
}
}
+
/* Enable main queue to receive error packets also by default */
- fman_if_set_err_fqid(fif, rxq->fqid);
+ if (fif->mac_type != fman_offline)
+ fman_if_set_err_fqid(fif, rxq->fqid);
+
return 0;
}
@@ -1372,7 +1393,8 @@ static int dpaa_link_down(struct rte_eth_dev *dev)
__fif = container_of(fif, struct __fman_if, __if);
- if (dev->data->dev_flags & RTE_ETH_DEV_INTR_LSC)
+ if (dev->data->dev_flags & RTE_ETH_DEV_INTR_LSC &&
+ fif->mac_type != fman_offline)
dpaa_update_link_status(__fif->node_name, RTE_ETH_LINK_DOWN);
else
return dpaa_eth_dev_stop(dev);
@@ -1388,7 +1410,8 @@ static int dpaa_link_up(struct rte_eth_dev *dev)
__fif = container_of(fif, struct __fman_if, __if);
- if (dev->data->dev_flags & RTE_ETH_DEV_INTR_LSC)
+ if (dev->data->dev_flags & RTE_ETH_DEV_INTR_LSC &&
+ fif->mac_type != fman_offline)
dpaa_update_link_status(__fif->node_name, RTE_ETH_LINK_UP);
else
dpaa_eth_dev_start(dev);
@@ -1483,9 +1506,15 @@ dpaa_dev_add_mac_addr(struct rte_eth_dev *dev,
__rte_unused uint32_t pool)
{
int ret;
+ struct fman_if *fif = dev->process_private;
PMD_INIT_FUNC_TRACE();
+ if (fif->mac_type == fman_offline) {
+ DPAA_PMD_DEBUG("Add MAC Address not supported on O/H port");
+ return 0;
+ }
+
ret = fman_if_add_mac_addr(dev->process_private,
addr->addr_bytes, index);
@@ -1498,8 +1527,15 @@ static void
dpaa_dev_remove_mac_addr(struct rte_eth_dev *dev,
uint32_t index)
{
+ struct fman_if *fif = dev->process_private;
+
PMD_INIT_FUNC_TRACE();
+ if (fif->mac_type == fman_offline) {
+ DPAA_PMD_DEBUG("Remove MAC Address not supported on O/H port");
+ return;
+ }
+
fman_if_clear_mac_addr(dev->process_private, index);
}
@@ -1508,9 +1544,15 @@ dpaa_dev_set_mac_addr(struct rte_eth_dev *dev,
struct rte_ether_addr *addr)
{
int ret;
+ struct fman_if *fif = dev->process_private;
PMD_INIT_FUNC_TRACE();
+ if (fif->mac_type == fman_offline) {
+ DPAA_PMD_DEBUG("Set MAC Address not supported on O/H port");
+ return 0;
+ }
+
ret = fman_if_add_mac_addr(dev->process_private, addr->addr_bytes, 0);
if (ret)
DPAA_PMD_ERR("Setting the MAC ADDR failed %d", ret);
@@ -1807,6 +1849,17 @@ static int dpaa_rx_queue_init(struct qman_fq *fq, struct qman_cgr *cgr_rx,
return ret;
}
+uint8_t fm_default_vsp_id(struct fman_if *fif)
+{
+ /* Avoid being same as base profile which could be used
+ * for kernel interface of shared mac.
+ */
+ if (fif->base_profile_id)
+ return 0;
+ else
+ return DPAA_DEFAULT_RXQ_VSP_ID;
+}
+
/* Initialise a Tx FQ */
static int dpaa_tx_queue_init(struct qman_fq *fq,
struct fman_if *fman_intf,
@@ -1842,13 +1895,20 @@ static int dpaa_tx_queue_init(struct qman_fq *fq,
} else {
/* no tx-confirmation */
opts.fqd.context_a.lo = fman_dealloc_bufs_mask_lo;
- opts.fqd.context_a.hi = 0x80000000 | fman_dealloc_bufs_mask_hi;
+ opts.fqd.context_a.hi = DPAA_FQD_CTX_A_OVERRIDE_FQ |
+ fman_dealloc_bufs_mask_hi;
}
- if (fman_ip_rev >= FMAN_V3) {
+ if (fman_ip_rev >= FMAN_V3)
/* Set B0V bit in contextA to set ASPID to 0 */
- opts.fqd.context_a.hi |= 0x04000000;
+ opts.fqd.context_a.hi |= DPAA_FQD_CTX_A_B0_FIELD_VALID;
+
+ if (fman_intf->mac_type == fman_offline) {
+ opts.fqd.context_a.lo |= DPAA_FQD_CTX_A2_VSPE_BIT;
+ opts.fqd.context_b = fm_default_vsp_id(fman_intf) <<
+ DPAA_FQD_CTX_B_SHIFT_BITS;
}
+
DPAA_PMD_DEBUG("init tx fq %p, fqid 0x%x", fq, fq->fqid);
if (cgr_tx) {
@@ -2263,7 +2323,8 @@ dpaa_dev_init(struct rte_eth_dev *eth_dev)
DPAA_PMD_DEBUG("All frame queues created");
/* Get the initial configuration for flow control */
- dpaa_fc_set_default(dpaa_intf, fman_intf);
+ if (fman_intf->mac_type != fman_offline)
+ dpaa_fc_set_default(dpaa_intf, fman_intf);
/* reset bpool list, initialize bpool dynamically */
list_for_each_entry_safe(bp, tmp_bp, &cfg->fman_if->bpool_list, node) {
@@ -2294,10 +2355,10 @@ dpaa_dev_init(struct rte_eth_dev *eth_dev)
DPAA_PMD_INFO("net: dpaa: %s: " RTE_ETHER_ADDR_PRT_FMT,
dpaa_device->name, RTE_ETHER_ADDR_BYTES(&fman_intf->mac_addr));
- if (!fman_intf->is_shared_mac) {
+ if (!fman_intf->is_shared_mac && fman_intf->mac_type != fman_offline) {
/* Configure error packet handling */
fman_if_receive_rx_errors(fman_intf,
- FM_FD_RX_STATUS_ERR_MASK);
+ FM_FD_RX_STATUS_ERR_MASK);
/* Disable RX mode */
fman_if_disable_rx(fman_intf);
/* Disable promiscuous mode */
diff --git a/drivers/net/dpaa/dpaa_ethdev.h b/drivers/net/dpaa/dpaa_ethdev.h
index 7884cc034c..8ec5155cfc 100644
--- a/drivers/net/dpaa/dpaa_ethdev.h
+++ b/drivers/net/dpaa/dpaa_ethdev.h
@@ -121,6 +121,9 @@ enum {
extern struct rte_mempool *dpaa_tx_sg_pool;
extern int dpaa_ieee_1588;
+/* PMD related logs */
+extern int dpaa_logtype_pmd;
+
/* structure to free external and indirect
* buffers.
*/
@@ -266,6 +269,9 @@ dpaa_timesync_read_rx_timestamp(struct rte_eth_dev *dev,
struct timespec *timestamp,
uint32_t flags __rte_unused);
+uint8_t
+fm_default_vsp_id(struct fman_if *fif);
+
/* PMD related logs */
extern int dpaa_logtype_pmd;
#define RTE_LOGTYPE_DPAA_PMD dpaa_logtype_pmd
diff --git a/drivers/net/dpaa/dpaa_flow.c b/drivers/net/dpaa/dpaa_flow.c
index 082bd5d014..cd8b4f51ea 100644
--- a/drivers/net/dpaa/dpaa_flow.c
+++ b/drivers/net/dpaa/dpaa_flow.c
@@ -1,5 +1,5 @@
/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright 2017-2019,2021 NXP
+ * Copyright 2017-2019,2021-2024 NXP
*/
/* System headers */
@@ -29,6 +29,11 @@ return &scheme_params->param.key_ext_and_hash.extract_array[hdr_idx];
#define SCH_EXT_FULL_FLD(scheme_params, hdr_idx) \
SCH_EXT_HDR(scheme_params, hdr_idx).extract_by_hdr_type.full_field
+/* FMAN mac indexes mappings (0 is unused, first 8 are for 1G, next for 10G
+ * ports).
+ */
+const uint8_t mac_idx[] = {-1, 0, 1, 2, 3, 4, 5, 6, 7, 0, 1};
+
/* FM global info */
struct dpaa_fm_info {
t_handle fman_handle;
@@ -48,17 +53,6 @@ static struct dpaa_fm_info fm_info;
static struct dpaa_fm_model fm_model;
static const char *fm_log = "/tmp/fmdpdk.bin";
-static inline uint8_t fm_default_vsp_id(struct fman_if *fif)
-{
- /* Avoid being same as base profile which could be used
- * for kernel interface of shared mac.
- */
- if (fif->base_profile_id)
- return 0;
- else
- return DPAA_DEFAULT_RXQ_VSP_ID;
-}
-
static void fm_prev_cleanup(void)
{
uint32_t fman_id = 0, i = 0, devid;
@@ -649,12 +643,15 @@ static inline int set_pcd_netenv_scheme(struct dpaa_if *dpaa_intf,
}
-static inline int get_port_type(struct fman_if *fif)
+static inline int get_rx_port_type(struct fman_if *fif)
{
+
+ if (fif->mac_type == fman_offline)
+ return e_FM_PORT_TYPE_OH_OFFLINE_PARSING;
/* For 1G fm-mac9 and fm-mac10 ports, configure the VSP as 10G
* ports so that kernel can configure correct port.
*/
- if (fif->mac_type == fman_mac_1g &&
+ else if (fif->mac_type == fman_mac_1g &&
fif->mac_idx >= DPAA_10G_MAC_START_IDX)
return e_FM_PORT_TYPE_RX_10G;
else if (fif->mac_type == fman_mac_1g)
@@ -665,7 +662,7 @@ static inline int get_port_type(struct fman_if *fif)
return e_FM_PORT_TYPE_RX_10G;
DPAA_PMD_ERR("MAC type unsupported");
- return -1;
+ return e_FM_PORT_TYPE_DUMMY;
}
static inline int set_fm_port_handle(struct dpaa_if *dpaa_intf,
@@ -676,17 +673,12 @@ static inline int set_fm_port_handle(struct dpaa_if *dpaa_intf,
ioc_fm_pcd_net_env_params_t dist_units;
PMD_INIT_FUNC_TRACE();
- /* FMAN mac indexes mappings (0 is unused,
- * first 8 are for 1G, next for 10G ports
- */
- uint8_t mac_idx[] = {-1, 0, 1, 2, 3, 4, 5, 6, 7, 0, 1};
-
/* Memset FM port params */
memset(&fm_port_params, 0, sizeof(fm_port_params));
/* Set FM port params */
fm_port_params.h_fm = fm_info.fman_handle;
- fm_port_params.port_type = get_port_type(fif);
+ fm_port_params.port_type = get_rx_port_type(fif);
fm_port_params.port_id = mac_idx[fif->mac_idx];
/* FM PORT Open */
@@ -949,7 +941,6 @@ static int dpaa_port_vsp_configure(struct dpaa_if *dpaa_intf,
{
t_fm_vsp_params vsp_params;
t_fm_buffer_prefix_content buf_prefix_cont;
- uint8_t mac_idx[] = {-1, 0, 1, 2, 3, 4, 5, 6, 7, 0, 1};
uint8_t idx = mac_idx[fif->mac_idx];
int ret;
@@ -970,17 +961,31 @@ static int dpaa_port_vsp_configure(struct dpaa_if *dpaa_intf,
memset(&vsp_params, 0, sizeof(vsp_params));
vsp_params.h_fm = fman_handle;
vsp_params.relative_profile_id = vsp_id;
- vsp_params.port_params.port_id = idx;
+ if (fif->mac_type == fman_offline)
+ vsp_params.port_params.port_id = fif->mac_idx;
+ else
+ vsp_params.port_params.port_id = idx;
+
if (fif->mac_type == fman_mac_1g) {
vsp_params.port_params.port_type = e_FM_PORT_TYPE_RX;
} else if (fif->mac_type == fman_mac_2_5g) {
vsp_params.port_params.port_type = e_FM_PORT_TYPE_RX_2_5G;
} else if (fif->mac_type == fman_mac_10g) {
vsp_params.port_params.port_type = e_FM_PORT_TYPE_RX_10G;
+ } else if (fif->mac_type == fman_offline) {
+ vsp_params.port_params.port_type =
+ e_FM_PORT_TYPE_OH_OFFLINE_PARSING;
} else {
DPAA_PMD_ERR("Mac type %d error", fif->mac_type);
return -1;
}
+
+ vsp_params.port_params.port_type = get_rx_port_type(fif);
+ if (vsp_params.port_params.port_type == e_FM_PORT_TYPE_DUMMY) {
+ DPAA_PMD_ERR("Mac type %d error", fif->mac_type);
+ return -1;
+ }
+
vsp_params.ext_buf_pools.num_of_pools_used = 1;
vsp_params.ext_buf_pools.ext_buf_pool[0].id = dpaa_intf->vsp_bpid[vsp_id];
vsp_params.ext_buf_pools.ext_buf_pool[0].size = mbuf_data_room_size;
--
2.25.1
^ permalink raw reply [flat|nested] 129+ messages in thread
* [PATCH v5 15/18] bus/dpaa: add ONIC port mode for the DPAA eth
2024-10-01 11:03 ` [PATCH v5 " Hemant Agrawal
` (13 preceding siblings ...)
2024-10-01 11:03 ` [PATCH v5 14/18] bus/dpaa: add OH port mode for dpaa eth Hemant Agrawal
@ 2024-10-01 11:03 ` Hemant Agrawal
2024-10-01 11:03 ` [PATCH v5 16/18] net/dpaa: improve the dpaa port cleanup Hemant Agrawal
` (4 subsequent siblings)
19 siblings, 0 replies; 129+ messages in thread
From: Hemant Agrawal @ 2024-10-01 11:03 UTC (permalink / raw)
To: dev; +Cc: ferruh.yigit, Rohit Raj
From: Rohit Raj <rohit.raj@nxp.com>
The OH ports can also be used by two applications, or processing contexts,
to communicate with each other.
This patch exposes this mode of the dpaa-eth OH port as an ONIC port,
so that applications can use dpaa-eth interfaces to communicate with
each other on the same SoC.
As before, this property is driven by the system device-tree variables.
Signed-off-by: Rohit Raj <rohit.raj@nxp.com>
---
doc/guides/nics/dpaa.rst | 33 ++-
drivers/bus/dpaa/base/fman/fman.c | 299 +++++++++++++++++++++-
drivers/bus/dpaa/base/fman/fman_hw.c | 20 +-
drivers/bus/dpaa/base/fman/netcfg_layer.c | 4 +-
drivers/bus/dpaa/dpaa_bus.c | 16 +-
drivers/bus/dpaa/include/fman.h | 15 +-
drivers/net/dpaa/dpaa_ethdev.c | 114 +++++++--
drivers/net/dpaa/dpaa_flow.c | 23 +-
drivers/net/dpaa/dpaa_fmc.c | 2 +-
9 files changed, 467 insertions(+), 59 deletions(-)
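As a rough usage sketch, assuming the ONIC port has already been probed as a
regular ethdev: the application drives it with the generic ethdev API, the same
way as a physical port. The port id, queue depth, burst size and mempool in
this example are illustrative assumptions, not values taken from this patch.

  #include <string.h>
  #include <rte_ethdev.h>
  #include <rte_mbuf.h>

  /* Echo everything received on one ONIC port back out of the same port.
   * The peer application attached to the paired ONIC port sees the frames.
   */
  static int onic_echo_loop(uint16_t port_id, struct rte_mempool *pool)
  {
          struct rte_eth_conf conf;
          struct rte_mbuf *bufs[32];
          uint16_t nb, sent;
          int ret;

          memset(&conf, 0, sizeof(conf));
          ret = rte_eth_dev_configure(port_id, 1, 1, &conf);
          if (ret < 0)
                  return ret;
          ret = rte_eth_rx_queue_setup(port_id, 0, 256,
                                       rte_eth_dev_socket_id(port_id),
                                       NULL, pool);
          if (ret < 0)
                  return ret;
          ret = rte_eth_tx_queue_setup(port_id, 0, 256,
                                       rte_eth_dev_socket_id(port_id),
                                       NULL);
          if (ret < 0)
                  return ret;
          ret = rte_eth_dev_start(port_id);
          if (ret < 0)
                  return ret;

          for (;;) {
                  nb = rte_eth_rx_burst(port_id, 0, bufs, 32);
                  if (!nb)
                          continue;
                  sent = rte_eth_tx_burst(port_id, 0, bufs, nb);
                  /* Free anything the Tx queue did not accept. */
                  while (sent < nb)
                          rte_pktmbuf_free(bufs[sent++]);
          }
  }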
diff --git a/doc/guides/nics/dpaa.rst b/doc/guides/nics/dpaa.rst
index 47dcce334c..a266e71a5b 100644
--- a/doc/guides/nics/dpaa.rst
+++ b/doc/guides/nics/dpaa.rst
@@ -136,7 +136,7 @@ RTE framework and DPAA internal components/drivers.
The Ethernet driver is bound to a FMAN port and implements the interfaces
needed to connect the DPAA network interface to the network stack.
Each FMAN Port corresponds to a DPDK network interface.
-- PMD also support OH mode, where the port works as a HW assisted
+- PMD also support OH/ONIC mode, where the port works as a HW assisted
virtual port without actually connecting to a Physical MAC.
@@ -152,7 +152,7 @@ Features
- Promiscuous mode
- IEEE1588 PTP
- OH Port for inter application communication
-
+ - ONIC virtual port support
DPAA Mempool Driver
~~~~~~~~~~~~~~~~~~~
@@ -350,6 +350,35 @@ OH Port
-------- Rx Packets ---------
+ONIC
+~~~~
+ To use OH port to communicate between two applications, we can assign Rx port
+ of an O/H port to Application 1 and Tx port to Application 2 so that
+ Application 1 can send packets to Application 2. Similarly, we can assign Tx
+ port of another O/H port to Application 1 and Rx port to Application 2 so that
+ Application 2 can send packets to Application 1.
+
+ ONIC is logically defined to achieve it. Internally it will use one Rx queue
+ of an O/H port and one Tx queue of another O/H port.
+ For application, it will behave as single O/H port.
+
+ +------+ +------+ +------+ +------+ +------+
+ | | Tx | | Rx | O/H | Tx | | Rx | |
+ | | - - - > | | - - > | Port | - - > | | - - > | |
+ | | | | | 1 | | | | |
+ | | | | +------+ | | | |
+ | App | | ONIC | | ONIC | | App |
+ | 1 | | Port | | Port | | 2 |
+ | | | 1 | +------+ | 2 | | |
+ | | Rx | | Tx | O/H | Rx | | Tx | |
+ | | < - - - | | < - - -| Port | < - - -| | < - - -| |
+ | | | | | 2 | | | | |
+ +------+ +------+ +------+ +------+ +------+
+
+ All the packets received by ONIC port 1 will be sent to ONIC port 2 and vice
+ versa. These ports can be used by DPDK applications just like physical ports.
+
+
VSP (Virtual Storage Profile)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The storage profiled are means to provide virtualized interface. A ranges of
diff --git a/drivers/bus/dpaa/base/fman/fman.c b/drivers/bus/dpaa/base/fman/fman.c
index f817305ab7..efe6eab4a9 100644
--- a/drivers/bus/dpaa/base/fman/fman.c
+++ b/drivers/bus/dpaa/base/fman/fman.c
@@ -43,7 +43,7 @@ if_destructor(struct __fman_if *__if)
if (!__if)
return;
- if (__if->__if.mac_type == fman_offline)
+ if (__if->__if.mac_type == fman_offline_internal)
goto cleanup;
list_for_each_entry_safe(bp, tmpbp, &__if->__if.bpool_list, node) {
@@ -465,7 +465,7 @@ fman_if_init(const struct device_node *dpa_node)
__if->__if.is_memac = 0;
if (is_offline)
- __if->__if.mac_type = fman_offline;
+ __if->__if.mac_type = fman_offline_internal;
else if (of_device_is_compatible(mac_node, "fsl,fman-1g-mac"))
__if->__if.mac_type = fman_mac_1g;
else if (of_device_is_compatible(mac_node, "fsl,fman-10g-mac"))
@@ -791,6 +791,292 @@ fman_if_init(const struct device_node *dpa_node)
dname, __if->__if.tx_channel_id, __if->__if.fman_idx,
__if->__if.mac_idx);
+ /* Don't add OH port to the port list since they will be used by ONIC
+ * ports.
+ */
+ if (!is_offline)
+ list_add_tail(&__if->__if.node, &__ifs);
+
+ return 0;
+err:
+ if_destructor(__if);
+ return _errno;
+}
+
+static int fman_if_init_onic(const struct device_node *dpa_node)
+{
+ struct __fman_if *__if;
+ struct fman_if_bpool *bpool;
+ const phandle *tx_pools_phandle;
+ const phandle *tx_channel_id, *mac_addr, *cell_idx;
+ const phandle *rx_phandle;
+ const struct device_node *pool_node;
+ size_t lenp;
+ int _errno;
+ const phandle *p_onic_oh_nodes = NULL;
+ const struct device_node *rx_oh_node = NULL;
+ const struct device_node *tx_oh_node = NULL;
+ const phandle *p_fman_rx_oh_node = NULL, *p_fman_tx_oh_node = NULL;
+ const struct device_node *fman_rx_oh_node = NULL;
+ const struct device_node *fman_tx_oh_node = NULL;
+ const struct device_node *fman_node;
+ uint32_t na = OF_DEFAULT_NA;
+ uint64_t rx_phandle_host[4] = {0};
+ uint64_t cell_idx_host = 0;
+
+ if (of_device_is_available(dpa_node) == false)
+ return 0;
+
+ if (!of_device_is_compatible(dpa_node, "fsl,dpa-ethernet-generic"))
+ return 0;
+
+ /* Allocate an object for this network interface */
+ __if = rte_malloc(NULL, sizeof(*__if), RTE_CACHE_LINE_SIZE);
+ if (!__if) {
+ FMAN_ERR(-ENOMEM, "malloc(%zu)\n", sizeof(*__if));
+ goto err;
+ }
+ memset(__if, 0, sizeof(*__if));
+
+ INIT_LIST_HEAD(&__if->__if.bpool_list);
+
+ strlcpy(__if->node_name, dpa_node->name, IF_NAME_MAX_LEN - 1);
+ __if->node_name[IF_NAME_MAX_LEN - 1] = '\0';
+
+ strlcpy(__if->node_path, dpa_node->full_name, PATH_MAX - 1);
+ __if->node_path[PATH_MAX - 1] = '\0';
+
+ /* Mac node is onic */
+ __if->__if.is_memac = 0;
+ __if->__if.mac_type = fman_onic;
+
+ /* Extract the MAC address for linux peer */
+ mac_addr = of_get_property(dpa_node, "local-mac-address", &lenp);
+ if (!mac_addr) {
+ FMAN_ERR(-EINVAL, "%s: no local-mac-address\n",
+ dpa_node->full_name);
+ goto err;
+ }
+
+ memcpy(&__if->__if.onic_info.peer_mac, mac_addr, ETHER_ADDR_LEN);
+
+ /* Extract the Rx port (it's the first of the two port handles)
+ * and get its channel ID.
+ */
+ p_onic_oh_nodes = of_get_property(dpa_node, "fsl,oh-ports", &lenp);
+ if (!p_onic_oh_nodes) {
+ FMAN_ERR(-EINVAL, "%s: couldn't get p_onic_oh_nodes\n",
+ dpa_node->full_name);
+ goto err;
+ }
+
+ rx_oh_node = of_find_node_by_phandle(p_onic_oh_nodes[0]);
+ if (!rx_oh_node) {
+ FMAN_ERR(-EINVAL, "%s: couldn't get rx_oh_node\n",
+ dpa_node->full_name);
+ goto err;
+ }
+
+ p_fman_rx_oh_node = of_get_property(rx_oh_node, "fsl,fman-oh-port",
+ &lenp);
+ if (!p_fman_rx_oh_node) {
+ FMAN_ERR(-EINVAL, "%s: couldn't get p_fman_rx_oh_node\n",
+ rx_oh_node->full_name);
+ goto err;
+ }
+
+ fman_rx_oh_node = of_find_node_by_phandle(*p_fman_rx_oh_node);
+ if (!fman_rx_oh_node) {
+ FMAN_ERR(-EINVAL, "%s: couldn't get fman_rx_oh_node\n",
+ rx_oh_node->full_name);
+ goto err;
+ }
+
+ tx_channel_id = of_get_property(fman_rx_oh_node, "fsl,qman-channel-id",
+ &lenp);
+ if (!tx_channel_id) {
+ FMAN_ERR(-EINVAL, "%s: no fsl-qman-channel-id\n",
+ rx_oh_node->full_name);
+ goto err;
+ }
+ assert(lenp == sizeof(*tx_channel_id));
+
+ __if->__if.tx_channel_id = of_read_number(tx_channel_id, na);
+
+ /* Extract the FQs from which oNIC driver in Linux is dequeuing */
+ rx_phandle = of_get_property(rx_oh_node, "fsl,qman-frame-queues-oh",
+ &lenp);
+ if (!rx_phandle) {
+ FMAN_ERR(-EINVAL, "%s: no fsl,qman-frame-queues-oh\n",
+ rx_oh_node->full_name);
+ goto err;
+ }
+ assert(lenp == (4 * sizeof(phandle)));
+
+ __if->__if.onic_info.rx_start = of_read_number(&rx_phandle[2], na);
+ __if->__if.onic_info.rx_count = of_read_number(&rx_phandle[3], na);
+
+ /* Extract the Rx FQIDs */
+ tx_oh_node = of_find_node_by_phandle(p_onic_oh_nodes[1]);
+ if (!tx_oh_node) {
+ FMAN_ERR(-EINVAL, "%s: couldn't get tx_oh_node\n",
+ dpa_node->full_name);
+ goto err;
+ }
+
+ p_fman_tx_oh_node = of_get_property(tx_oh_node, "fsl,fman-oh-port",
+ &lenp);
+ if (!p_fman_tx_oh_node) {
+ FMAN_ERR(-EINVAL, "%s: couldn't get p_fman_tx_oh_node\n",
+ tx_oh_node->full_name);
+ goto err;
+ }
+
+ fman_tx_oh_node = of_find_node_by_phandle(*p_fman_tx_oh_node);
+ if (!fman_tx_oh_node) {
+ FMAN_ERR(-EINVAL, "%s: couldn't get fman_tx_oh_node\n",
+ tx_oh_node->full_name);
+ goto err;
+ }
+
+ cell_idx = of_get_property(fman_tx_oh_node, "cell-index", &lenp);
+ if (!cell_idx) {
+ FMAN_ERR(-ENXIO, "%s: no cell-index)\n", tx_oh_node->full_name);
+ goto err;
+ }
+ assert(lenp == sizeof(*cell_idx));
+
+ cell_idx_host = of_read_number(cell_idx, lenp / sizeof(phandle));
+ __if->__if.mac_idx = cell_idx_host;
+
+ fman_node = of_get_parent(fman_tx_oh_node);
+ cell_idx = of_get_property(fman_node, "cell-index", &lenp);
+ if (!cell_idx) {
+ FMAN_ERR(-ENXIO, "%s: no cell-index)\n", tx_oh_node->full_name);
+ goto err;
+ }
+ assert(lenp == sizeof(*cell_idx));
+
+ cell_idx_host = of_read_number(cell_idx, lenp / sizeof(phandle));
+ __if->__if.fman_idx = cell_idx_host;
+
+ rx_phandle = of_get_property(tx_oh_node, "fsl,qman-frame-queues-oh",
+ &lenp);
+ if (!rx_phandle) {
+ FMAN_ERR(-EINVAL, "%s: no fsl,qman-frame-queues-oh\n",
+ dpa_node->full_name);
+ goto err;
+ }
+ assert(lenp == (4 * sizeof(phandle)));
+
+ rx_phandle_host[0] = of_read_number(&rx_phandle[0], na);
+ rx_phandle_host[1] = of_read_number(&rx_phandle[1], na);
+ rx_phandle_host[2] = of_read_number(&rx_phandle[2], na);
+ rx_phandle_host[3] = of_read_number(&rx_phandle[3], na);
+
+ assert((rx_phandle_host[1] == 1) && (rx_phandle_host[3] == 1));
+
+ __if->__if.fqid_rx_err = rx_phandle_host[0];
+ __if->__if.fqid_rx_def = rx_phandle_host[2];
+
+ /* Don't Extract the Tx FQIDs */
+ __if->__if.fqid_tx_err = 0;
+ __if->__if.fqid_tx_confirm = 0;
+
+ /* Obtain the buffer pool nodes used by Tx OH port */
+ tx_pools_phandle = of_get_property(tx_oh_node, "fsl,bman-buffer-pools",
+ &lenp);
+ if (!tx_pools_phandle) {
+ FMAN_ERR(-EINVAL, "%s: no fsl,bman-buffer-pools\n",
+ tx_oh_node->full_name);
+ goto err;
+ }
+ assert(lenp && !(lenp % sizeof(phandle)));
+
+ /* For each pool, parse the corresponding node and add a pool object to
+ * the interface's "bpool_list".
+ */
+
+ while (lenp) {
+ size_t proplen;
+ const phandle *prop;
+ uint64_t bpool_host[6] = {0};
+
+ /* Allocate an object for the pool */
+ bpool = rte_malloc(NULL, sizeof(*bpool), RTE_CACHE_LINE_SIZE);
+ if (!bpool) {
+ FMAN_ERR(-ENOMEM, "malloc(%zu)\n", sizeof(*bpool));
+ goto err;
+ }
+
+ /* Find the pool node */
+ pool_node = of_find_node_by_phandle(*tx_pools_phandle);
+ if (!pool_node) {
+ FMAN_ERR(-ENXIO, "%s: bad fsl,bman-buffer-pools\n",
+ tx_oh_node->full_name);
+ rte_free(bpool);
+ goto err;
+ }
+
+ /* Extract the BPID property */
+ prop = of_get_property(pool_node, "fsl,bpid", &proplen);
+ if (!prop) {
+ FMAN_ERR(-EINVAL, "%s: no fsl,bpid\n",
+ pool_node->full_name);
+ rte_free(bpool);
+ goto err;
+ }
+ assert(proplen == sizeof(*prop));
+
+ bpool->bpid = of_read_number(prop, na);
+
+ /* Extract the cfg property (count/size/addr). "fsl,bpool-cfg"
+ * indicates for the Bman driver to seed the pool.
+ * "fsl,bpool-ethernet-cfg" is used by the network driver. The
+ * two are mutually exclusive, so check for either of them.
+ */
+
+ prop = of_get_property(pool_node, "fsl,bpool-cfg", &proplen);
+ if (!prop)
+ prop = of_get_property(pool_node,
+ "fsl,bpool-ethernet-cfg",
+ &proplen);
+ if (!prop) {
+ /* It's OK for there to be no bpool-cfg */
+ bpool->count = bpool->size = bpool->addr = 0;
+ } else {
+ assert(proplen == (6 * sizeof(*prop)));
+
+ bpool_host[0] = of_read_number(&prop[0], na);
+ bpool_host[1] = of_read_number(&prop[1], na);
+ bpool_host[2] = of_read_number(&prop[2], na);
+ bpool_host[3] = of_read_number(&prop[3], na);
+ bpool_host[4] = of_read_number(&prop[4], na);
+ bpool_host[5] = of_read_number(&prop[5], na);
+
+ bpool->count = ((uint64_t)bpool_host[0] << 32) |
+ bpool_host[1];
+ bpool->size = ((uint64_t)bpool_host[2] << 32) |
+ bpool_host[3];
+ bpool->addr = ((uint64_t)bpool_host[4] << 32) |
+ bpool_host[5];
+ }
+
+ /* Parsing of the pool is complete, add it to the interface
+ * list.
+ */
+ list_add_tail(&bpool->node, &__if->__if.bpool_list);
+ lenp -= sizeof(phandle);
+ tx_pools_phandle++;
+ }
+
+ fman_if_vsp_init(__if);
+
+ /* Parsing of the network interface is complete, add it to the list. */
+ DPAA_BUS_DEBUG("Found %s, Tx Channel = %x, FMAN = %x, Port ID = %x",
+ dpa_node->full_name, __if->__if.tx_channel_id,
+ __if->__if.fman_idx, __if->__if.mac_idx);
+
list_add_tail(&__if->__if.node, &__ifs);
return 0;
err:
@@ -830,6 +1116,13 @@ fman_init(void)
}
}
+ for_each_compatible_node(dpa_node, NULL, "fsl,dpa-ethernet-generic") {
+ /* it is a oNIC interface */
+ _errno = fman_if_init_onic(dpa_node);
+ if (_errno)
+ FMAN_ERR(_errno, "if_init(%s)\n", dpa_node->full_name);
+ }
+
return 0;
err:
fman_finish();
@@ -847,7 +1140,7 @@ fman_finish(void)
int _errno;
/* No need to disable Offline port */
- if (__if->__if.mac_type == fman_offline)
+ if (__if->__if.mac_type == fman_offline_internal)
continue;
/* disable Rx and Tx */
diff --git a/drivers/bus/dpaa/base/fman/fman_hw.c b/drivers/bus/dpaa/base/fman/fman_hw.c
index 1f61ae406b..cbb0491d70 100644
--- a/drivers/bus/dpaa/base/fman/fman_hw.c
+++ b/drivers/bus/dpaa/base/fman/fman_hw.c
@@ -88,8 +88,9 @@ fman_if_add_hash_mac_addr(struct fman_if *p, uint8_t *eth)
struct __fman_if *__if = container_of(p, struct __fman_if, __if);
- /* Add hash mac addr not supported on Offline port */
- if (__if->__if.mac_type == fman_offline)
+ /* Add hash mac addr not supported on Offline port and onic port */
+ if (__if->__if.mac_type == fman_offline_internal ||
+ __if->__if.mac_type == fman_onic)
return 0;
eth_addr = ETH_ADDR_TO_UINT64(eth);
@@ -115,9 +116,10 @@ fman_if_get_primary_mac_addr(struct fman_if *p, uint8_t *eth)
u32 val = in_be32(mac_reg);
int i;
- /* Get mac addr not supported on Offline port */
+ /* Get mac addr not supported on Offline port and onic port */
/* Return NULL mac address */
- if (__if->__if.mac_type == fman_offline) {
+ if (__if->__if.mac_type == fman_offline_internal ||
+ __if->__if.mac_type == fman_onic) {
for (i = 0; i < 6; i++)
eth[i] = 0x0;
return 0;
@@ -143,8 +145,9 @@ fman_if_clear_mac_addr(struct fman_if *p, uint8_t addr_num)
struct __fman_if *m = container_of(p, struct __fman_if, __if);
void *reg;
- /* Clear mac addr not supported on Offline port */
- if (m->__if.mac_type == fman_offline)
+ /* Clear mac addr not supported on Offline port and onic port */
+ if (m->__if.mac_type == fman_offline_internal ||
+ m->__if.mac_type == fman_onic)
return;
if (addr_num) {
@@ -169,8 +172,9 @@ fman_if_add_mac_addr(struct fman_if *p, uint8_t *eth, uint8_t addr_num)
void *reg;
u32 val;
- /* Set mac addr not supported on Offline port */
- if (m->__if.mac_type == fman_offline)
+ /* Set mac addr not supported on Offline port and onic port */
+ if (m->__if.mac_type == fman_offline_internal ||
+ m->__if.mac_type == fman_onic)
return 0;
memcpy(&m->__if.mac_addr, eth, ETHER_ADDR_LEN);
diff --git a/drivers/bus/dpaa/base/fman/netcfg_layer.c b/drivers/bus/dpaa/base/fman/netcfg_layer.c
index e6a6ed1eb6..ffb37825c2 100644
--- a/drivers/bus/dpaa/base/fman/netcfg_layer.c
+++ b/drivers/bus/dpaa/base/fman/netcfg_layer.c
@@ -44,7 +44,7 @@ dump_netcfg(struct netcfg_info *cfg_ptr, FILE *f)
fprintf(f, "\n+ Fman %d, MAC %d (%s);\n",
__if->fman_idx, __if->mac_idx,
- (__if->mac_type == fman_offline) ? "OFFLINE" :
+ (__if->mac_type == fman_offline_internal) ? "OFFLINE" :
(__if->mac_type == fman_mac_1g) ? "1G" :
(__if->mac_type == fman_mac_2_5g) ? "2.5G" : "10G");
@@ -57,7 +57,7 @@ dump_netcfg(struct netcfg_info *cfg_ptr, FILE *f)
fprintf(f, "\tfqid_rx_def: 0x%x\n", p_cfg->rx_def);
fprintf(f, "\tfqid_rx_err: 0x%x\n", __if->fqid_rx_err);
- if (__if->mac_type != fman_offline) {
+ if (__if->mac_type != fman_offline_internal) {
fprintf(f, "\tfqid_tx_err: 0x%x\n", __if->fqid_tx_err);
fprintf(f, "\tfqid_tx_confirm: 0x%x\n", __if->fqid_tx_confirm);
fman_if_for_each_bpool(bpool, __if)
diff --git a/drivers/bus/dpaa/dpaa_bus.c b/drivers/bus/dpaa/dpaa_bus.c
index 6e4ec90670..9ffbe07c93 100644
--- a/drivers/bus/dpaa/dpaa_bus.c
+++ b/drivers/bus/dpaa/dpaa_bus.c
@@ -171,8 +171,10 @@ dpaa_create_device_list(void)
struct fm_eth_port_cfg *cfg;
struct fman_if *fman_intf;
+ rte_dpaa_bus.device_count = 0;
+
/* Creating Ethernet Devices */
- for (i = 0; i < dpaa_netcfg->num_ethports; i++) {
+ for (i = 0; dpaa_netcfg && (i < dpaa_netcfg->num_ethports); i++) {
dev = calloc(1, sizeof(struct rte_dpaa_device));
if (!dev) {
DPAA_BUS_LOG(ERR, "Failed to allocate ETH devices");
@@ -204,9 +206,12 @@ dpaa_create_device_list(void)
/* Create device name */
memset(dev->name, 0, RTE_ETH_NAME_MAX_LEN);
- if (fman_intf->mac_type == fman_offline)
+ if (fman_intf->mac_type == fman_offline_internal)
sprintf(dev->name, "fm%d-oh%d",
(fman_intf->fman_idx + 1), fman_intf->mac_idx);
+ else if (fman_intf->mac_type == fman_onic)
+ sprintf(dev->name, "fm%d-onic%d",
+ (fman_intf->fman_idx + 1), fman_intf->mac_idx);
else
sprintf(dev->name, "fm%d-mac%d",
(fman_intf->fman_idx + 1), fman_intf->mac_idx);
@@ -216,7 +221,7 @@ dpaa_create_device_list(void)
dpaa_add_to_device_list(dev);
}
- rte_dpaa_bus.device_count = i;
+ rte_dpaa_bus.device_count += i;
/* Unlike case of ETH, RTE_LIBRTE_DPAA_MAX_CRYPTODEV SEC devices are
* constantly created only if "sec" property is found in the device
@@ -477,6 +482,11 @@ rte_dpaa_bus_parse(const char *name, void *out)
i >= 2 || j >= 16)
return -EINVAL;
max_name_len = sizeof("fm.-oh..") - 1;
+ } else if (strncmp("onic", &name[dev_delta], 4) == 0) {
+ if (sscanf(&name[delta], "fm%u-onic%u", &i, &j) != 2 ||
+ i >= 2 || j >= 16)
+ return -EINVAL;
+ max_name_len = sizeof("fm.-onic..") - 1;
} else {
if (sscanf(&name[delta], "fm%u-mac%u", &i, &j) != 2 ||
i >= 2 || j >= 16)
diff --git a/drivers/bus/dpaa/include/fman.h b/drivers/bus/dpaa/include/fman.h
index 377f73bf0d..01556cf2a8 100644
--- a/drivers/bus/dpaa/include/fman.h
+++ b/drivers/bus/dpaa/include/fman.h
@@ -78,7 +78,7 @@ TAILQ_HEAD(rte_fman_if_list, __fman_if);
/* Represents the different flavour of network interface */
enum fman_mac_type {
- fman_offline = 0,
+ fman_offline_internal = 0,
fman_mac_1g,
fman_mac_10g,
fman_mac_2_5g,
@@ -366,6 +366,16 @@ struct fman_port_qmi_regs {
uint32_t fmqm_pndcc; /**< PortID n Dequeue Confirm Counter */
};
+struct onic_port_cfg {
+ char macless_name[IF_NAME_MAX_LEN];
+ uint32_t rx_start;
+ uint32_t rx_count;
+ uint32_t tx_start;
+ uint32_t tx_count;
+ struct rte_ether_addr src_mac;
+ struct rte_ether_addr peer_mac;
+};
+
/* This struct exports parameters about an Fman network interface, determined
* from the device-tree.
*/
@@ -401,6 +411,9 @@ struct fman_if {
uint32_t fqid_tx_err;
uint32_t fqid_tx_confirm;
+ /* oNIC port info */
+ struct onic_port_cfg onic_info;
+
struct list_head bpool_list;
/* The node for linking this interface into "fman_if_list" */
struct list_head node;
diff --git a/drivers/net/dpaa/dpaa_ethdev.c b/drivers/net/dpaa/dpaa_ethdev.c
index f8196ddd14..133fbd5bc9 100644
--- a/drivers/net/dpaa/dpaa_ethdev.c
+++ b/drivers/net/dpaa/dpaa_ethdev.c
@@ -295,7 +295,8 @@ dpaa_eth_dev_configure(struct rte_eth_dev *dev)
max_rx_pktlen = DPAA_MAX_RX_PKT_LEN;
}
- if (fif->mac_type != fman_offline)
+ if (!fif->is_shared_mac && fif->mac_type != fman_offline_internal &&
+ fif->mac_type != fman_onic)
fman_if_set_maxfrm(dev->process_private, max_rx_pktlen);
if (rx_offloads & RTE_ETH_RX_OFFLOAD_SCATTER) {
@@ -315,7 +316,8 @@ dpaa_eth_dev_configure(struct rte_eth_dev *dev)
}
/* Disable interrupt support on offline port*/
- if (fif->mac_type == fman_offline)
+ if (fif->mac_type == fman_offline_internal ||
+ fif->mac_type == fman_onic)
return 0;
/* if the interrupts were configured on this devices*/
@@ -467,10 +469,11 @@ static int dpaa_eth_dev_start(struct rte_eth_dev *dev)
else
dev->tx_pkt_burst = dpaa_eth_queue_tx;
- fman_if_bmi_stats_enable(fif);
- fman_if_bmi_stats_reset(fif);
- fman_if_enable_rx(fif);
-
+ if (fif->mac_type != fman_onic) {
+ fman_if_bmi_stats_enable(fif);
+ fman_if_bmi_stats_reset(fif);
+ fman_if_enable_rx(fif);
+ }
for (i = 0; i < dev->data->nb_rx_queues; i++)
dev->data->rx_queue_state[i] = RTE_ETH_QUEUE_STATE_STARTED;
for (i = 0; i < dev->data->nb_tx_queues; i++)
@@ -535,7 +538,8 @@ static int dpaa_eth_dev_close(struct rte_eth_dev *dev)
ret = dpaa_eth_dev_stop(dev);
- if (fif->mac_type == fman_offline)
+ if (fif->mac_type == fman_offline_internal ||
+ fif->mac_type == fman_onic)
return 0;
/* Reset link to autoneg */
@@ -651,11 +655,14 @@ static int dpaa_eth_dev_info(struct rte_eth_dev *dev,
| RTE_ETH_LINK_SPEED_1G
| RTE_ETH_LINK_SPEED_2_5G
| RTE_ETH_LINK_SPEED_10G;
- } else if (fif->mac_type == fman_offline) {
+ } else if (fif->mac_type == fman_offline_internal ||
+ fif->mac_type == fman_onic) {
dev_info->speed_capa = RTE_ETH_LINK_SPEED_10M_HD
| RTE_ETH_LINK_SPEED_10M
| RTE_ETH_LINK_SPEED_100M_HD
- | RTE_ETH_LINK_SPEED_100M;
+ | RTE_ETH_LINK_SPEED_100M
+ | RTE_ETH_LINK_SPEED_1G
+ | RTE_ETH_LINK_SPEED_2_5G;
} else {
DPAA_PMD_ERR("invalid link_speed: %s, %d",
dpaa_intf->name, fif->mac_type);
@@ -757,7 +764,8 @@ static int dpaa_eth_link_update(struct rte_eth_dev *dev,
ioctl_version = dpaa_get_ioctl_version_number();
if (dev->data->dev_flags & RTE_ETH_DEV_INTR_LSC &&
- fif->mac_type != fman_offline) {
+ fif->mac_type != fman_offline_internal &&
+ fif->mac_type != fman_onic) {
for (count = 0; count <= MAX_REPEAT_TIME; count++) {
ret = dpaa_get_link_status(__fif->node_name, link);
if (ret)
@@ -770,7 +778,8 @@ static int dpaa_eth_link_update(struct rte_eth_dev *dev,
}
} else {
link->link_status = dpaa_intf->valid;
- if (fif->mac_type == fman_offline) {
+ if (fif->mac_type == fman_offline_internal ||
+ fif->mac_type == fman_onic) {
/*Max supported rate for O/H port is 3.75Mpps*/
link->link_speed = RTE_ETH_SPEED_NUM_2_5G;
link->link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
@@ -933,8 +942,16 @@ dpaa_xstats_get_names_by_id(
static int dpaa_eth_promiscuous_enable(struct rte_eth_dev *dev)
{
+ struct fman_if *fif = dev->process_private;
+
PMD_INIT_FUNC_TRACE();
+ if (fif->mac_type == fman_onic) {
+ DPAA_PMD_INFO("Enable promiscuous mode not supported on ONIC "
+ "port");
+ return 0;
+ }
+
fman_if_promiscuous_enable(dev->process_private);
return 0;
@@ -942,8 +959,16 @@ static int dpaa_eth_promiscuous_enable(struct rte_eth_dev *dev)
static int dpaa_eth_promiscuous_disable(struct rte_eth_dev *dev)
{
+ struct fman_if *fif = dev->process_private;
+
PMD_INIT_FUNC_TRACE();
+ if (fif->mac_type == fman_onic) {
+ DPAA_PMD_INFO("Disable promiscuous mode not supported on ONIC "
+ "port");
+ return 0;
+ }
+
fman_if_promiscuous_disable(dev->process_private);
return 0;
@@ -951,8 +976,15 @@ static int dpaa_eth_promiscuous_disable(struct rte_eth_dev *dev)
static int dpaa_eth_multicast_enable(struct rte_eth_dev *dev)
{
+ struct fman_if *fif = dev->process_private;
+
PMD_INIT_FUNC_TRACE();
+ if (fif->mac_type == fman_onic) {
+ DPAA_PMD_INFO("Enable Multicast not supported on ONIC port");
+ return 0;
+ }
+
fman_if_set_mcast_filter_table(dev->process_private);
return 0;
@@ -960,8 +992,15 @@ static int dpaa_eth_multicast_enable(struct rte_eth_dev *dev)
static int dpaa_eth_multicast_disable(struct rte_eth_dev *dev)
{
+ struct fman_if *fif = dev->process_private;
+
PMD_INIT_FUNC_TRACE();
+ if (fif->mac_type == fman_onic) {
+ DPAA_PMD_INFO("Disable Multicast not supported on ONIC port");
+ return 0;
+ }
+
fman_if_reset_mcast_filter_table(dev->process_private);
return 0;
@@ -1095,7 +1134,8 @@ int dpaa_eth_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
dpaa_intf->bp_info = DPAA_MEMPOOL_TO_POOL_INFO(mp);
/* For shared interface, it's done in kernel, skip.*/
- if (!fif->is_shared_mac && fif->mac_type != fman_offline)
+ if (!fif->is_shared_mac && fif->mac_type != fman_offline_internal &&
+ fif->mac_type != fman_onic)
dpaa_fman_if_pool_setup(dev);
if (fif->num_profiles) {
@@ -1126,8 +1166,9 @@ int dpaa_eth_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
}
dpaa_intf->valid = 1;
- DPAA_PMD_DEBUG("if:%s sg_on = %d, max_frm =%d", dpaa_intf->name,
- fman_if_get_sg_enable(fif), max_rx_pktlen);
+ if (fif->mac_type != fman_onic)
+ DPAA_PMD_DEBUG("if:%s sg_on = %d, max_frm =%d", dpaa_intf->name,
+ fman_if_get_sg_enable(fif), max_rx_pktlen);
/* checking if push mode only, no error check for now */
if (!rxq->is_static &&
dpaa_push_mode_max_queue > dpaa_push_queue_idx) {
@@ -1242,7 +1283,8 @@ int dpaa_eth_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
}
/* Enable main queue to receive error packets also by default */
- if (fif->mac_type != fman_offline)
+ if (fif->mac_type != fman_offline_internal &&
+ fif->mac_type != fman_onic)
fman_if_set_err_fqid(fif, rxq->fqid);
return 0;
@@ -1394,7 +1436,8 @@ static int dpaa_link_down(struct rte_eth_dev *dev)
__fif = container_of(fif, struct __fman_if, __if);
if (dev->data->dev_flags & RTE_ETH_DEV_INTR_LSC &&
- fif->mac_type != fman_offline)
+ fif->mac_type != fman_offline_internal &&
+ fif->mac_type != fman_onic)
dpaa_update_link_status(__fif->node_name, RTE_ETH_LINK_DOWN);
else
return dpaa_eth_dev_stop(dev);
@@ -1411,7 +1454,8 @@ static int dpaa_link_up(struct rte_eth_dev *dev)
__fif = container_of(fif, struct __fman_if, __if);
if (dev->data->dev_flags & RTE_ETH_DEV_INTR_LSC &&
- fif->mac_type != fman_offline)
+ fif->mac_type != fman_offline_internal &&
+ fif->mac_type != fman_onic)
dpaa_update_link_status(__fif->node_name, RTE_ETH_LINK_UP);
else
dpaa_eth_dev_start(dev);
@@ -1510,11 +1554,16 @@ dpaa_dev_add_mac_addr(struct rte_eth_dev *dev,
PMD_INIT_FUNC_TRACE();
- if (fif->mac_type == fman_offline) {
+ if (fif->mac_type == fman_offline_internal) {
DPAA_PMD_DEBUG("Add MAC Address not supported on O/H port");
return 0;
}
+ if (fif->mac_type == fman_onic) {
+ DPAA_PMD_INFO("Add MAC Address not supported on ONIC port");
+ return 0;
+ }
+
ret = fman_if_add_mac_addr(dev->process_private,
addr->addr_bytes, index);
@@ -1531,11 +1580,16 @@ dpaa_dev_remove_mac_addr(struct rte_eth_dev *dev,
PMD_INIT_FUNC_TRACE();
- if (fif->mac_type == fman_offline) {
+ if (fif->mac_type == fman_offline_internal) {
DPAA_PMD_DEBUG("Remove MAC Address not supported on O/H port");
return;
}
+ if (fif->mac_type == fman_onic) {
+ DPAA_PMD_INFO("Remove MAC Address not supported on ONIC port");
+ return;
+ }
+
fman_if_clear_mac_addr(dev->process_private, index);
}
@@ -1548,11 +1602,16 @@ dpaa_dev_set_mac_addr(struct rte_eth_dev *dev,
PMD_INIT_FUNC_TRACE();
- if (fif->mac_type == fman_offline) {
+ if (fif->mac_type == fman_offline_internal) {
DPAA_PMD_DEBUG("Set MAC Address not supported on O/H port");
return 0;
}
+ if (fif->mac_type == fman_onic) {
+ DPAA_PMD_INFO("Set MAC Address not supported on ONIC port");
+ return 0;
+ }
+
ret = fman_if_add_mac_addr(dev->process_private, addr->addr_bytes, 0);
if (ret)
DPAA_PMD_ERR("Setting the MAC ADDR failed %d", ret);
@@ -1903,7 +1962,8 @@ static int dpaa_tx_queue_init(struct qman_fq *fq,
/* Set B0V bit in contextA to set ASPID to 0 */
opts.fqd.context_a.hi |= DPAA_FQD_CTX_A_B0_FIELD_VALID;
- if (fman_intf->mac_type == fman_offline) {
+ if (fman_intf->mac_type == fman_offline_internal ||
+ fman_intf->mac_type == fman_onic) {
opts.fqd.context_a.lo |= DPAA_FQD_CTX_A2_VSPE_BIT;
opts.fqd.context_b = fm_default_vsp_id(fman_intf) <<
DPAA_FQD_CTX_B_SHIFT_BITS;
@@ -2156,6 +2216,11 @@ dpaa_dev_init(struct rte_eth_dev *eth_dev)
goto free_rx;
}
if (!num_rx_fqs) {
+ if (fman_intf->mac_type == fman_offline_internal ||
+ fman_intf->mac_type == fman_onic) {
+ ret = -ENODEV;
+ goto free_rx;
+ }
DPAA_PMD_WARN("%s is not configured by FMC.",
dpaa_intf->name);
}
@@ -2323,7 +2388,8 @@ dpaa_dev_init(struct rte_eth_dev *eth_dev)
DPAA_PMD_DEBUG("All frame queues created");
/* Get the initial configuration for flow control */
- if (fman_intf->mac_type != fman_offline)
+ if (fman_intf->mac_type != fman_offline_internal &&
+ fman_intf->mac_type != fman_onic)
dpaa_fc_set_default(dpaa_intf, fman_intf);
/* reset bpool list, initialize bpool dynamically */
@@ -2355,7 +2421,9 @@ dpaa_dev_init(struct rte_eth_dev *eth_dev)
DPAA_PMD_INFO("net: dpaa: %s: " RTE_ETHER_ADDR_PRT_FMT,
dpaa_device->name, RTE_ETHER_ADDR_BYTES(&fman_intf->mac_addr));
- if (!fman_intf->is_shared_mac && fman_intf->mac_type != fman_offline) {
+ if (!fman_intf->is_shared_mac &&
+ fman_intf->mac_type != fman_offline_internal &&
+ fman_intf->mac_type != fman_onic) {
/* Configure error packet handling */
fman_if_receive_rx_errors(fman_intf,
FM_FD_RX_STATUS_ERR_MASK);
diff --git a/drivers/net/dpaa/dpaa_flow.c b/drivers/net/dpaa/dpaa_flow.c
index cd8b4f51ea..9fd40fa47e 100644
--- a/drivers/net/dpaa/dpaa_flow.c
+++ b/drivers/net/dpaa/dpaa_flow.c
@@ -646,7 +646,11 @@ static inline int set_pcd_netenv_scheme(struct dpaa_if *dpaa_intf,
static inline int get_rx_port_type(struct fman_if *fif)
{
- if (fif->mac_type == fman_offline)
+ /* For onic ports, configure the VSP as offline ports so that
+ * kernel can configure correct port.
+ */
+ if (fif->mac_type == fman_offline_internal ||
+ fif->mac_type == fman_onic)
return e_FM_PORT_TYPE_OH_OFFLINE_PARSING;
/* For 1G fm-mac9 and fm-mac10 ports, configure the VSP as 10G
* ports so that kernel can configure correct port.
@@ -961,25 +965,12 @@ static int dpaa_port_vsp_configure(struct dpaa_if *dpaa_intf,
memset(&vsp_params, 0, sizeof(vsp_params));
vsp_params.h_fm = fman_handle;
vsp_params.relative_profile_id = vsp_id;
- if (fif->mac_type == fman_offline)
+ if (fif->mac_type == fman_offline_internal ||
+ fif->mac_type == fman_onic)
vsp_params.port_params.port_id = fif->mac_idx;
else
vsp_params.port_params.port_id = idx;
- if (fif->mac_type == fman_mac_1g) {
- vsp_params.port_params.port_type = e_FM_PORT_TYPE_RX;
- } else if (fif->mac_type == fman_mac_2_5g) {
- vsp_params.port_params.port_type = e_FM_PORT_TYPE_RX_2_5G;
- } else if (fif->mac_type == fman_mac_10g) {
- vsp_params.port_params.port_type = e_FM_PORT_TYPE_RX_10G;
- } else if (fif->mac_type == fman_offline) {
- vsp_params.port_params.port_type =
- e_FM_PORT_TYPE_OH_OFFLINE_PARSING;
- } else {
- DPAA_PMD_ERR("Mac type %d error", fif->mac_type);
- return -1;
- }
-
vsp_params.port_params.port_type = get_rx_port_type(fif);
if (vsp_params.port_params.port_type == e_FM_PORT_TYPE_DUMMY) {
DPAA_PMD_ERR("Mac type %d error", fif->mac_type);
diff --git a/drivers/net/dpaa/dpaa_fmc.c b/drivers/net/dpaa/dpaa_fmc.c
index d80ea1010a..7dc42f6e23 100644
--- a/drivers/net/dpaa/dpaa_fmc.c
+++ b/drivers/net/dpaa/dpaa_fmc.c
@@ -215,7 +215,7 @@ dpaa_port_fmc_port_parse(struct fman_if *fif,
if (pport->type == e_FM_PORT_TYPE_OH_OFFLINE_PARSING &&
pport->number == fif->mac_idx &&
- (fif->mac_type == fman_offline ||
+ (fif->mac_type == fman_offline_internal ||
fif->mac_type == fman_onic))
return current_port;
--
2.25.1
^ permalink raw reply [flat|nested] 129+ messages in thread
* [PATCH v5 16/18] net/dpaa: improve the dpaa port cleanup
2024-10-01 11:03 ` [PATCH v5 " Hemant Agrawal
` (14 preceding siblings ...)
2024-10-01 11:03 ` [PATCH v5 15/18] bus/dpaa: add ONIC port mode for the DPAA eth Hemant Agrawal
@ 2024-10-01 11:03 ` Hemant Agrawal
2024-10-01 11:03 ` [PATCH v5 17/18] net/dpaa: improve dpaa errata A010022 handling Hemant Agrawal
` (3 subsequent siblings)
19 siblings, 0 replies; 129+ messages in thread
From: Hemant Agrawal @ 2024-10-01 11:03 UTC (permalink / raw)
To: dev; +Cc: ferruh.yigit, Gagandeep Singh
From: Gagandeep Singh <g.singh@nxp.com>
During DPAA cleanup in FMCLESS mode, the application can hit a
segmentation fault both in the device close API and while the
DPAA destructor runs.
The segmentation fault in device close occurs because the driver
reduces the number of queues initialised during device
configuration without releasing the actual queues.
The segmentation fault in the DPAA destructor occurs because it
accesses RTE* devices whose memory has already been released by
the application's rte_eal_cleanup() call.
This patch improves the behavior: the driver no longer overwrites
the Rx queue count during FM configuration, and the FM/VSP teardown
is moved from the destructor into the device close path.
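As an illustration, the corrected close path roughly looks like the
following sketch (a simplified sketch in C based on the diff below;
dpaa_eth_dev_close_sketch is a made-up name and the queue release
steps are elided):

static int dpaa_eth_dev_close_sketch(struct rte_eth_dev *dev)
{
	struct dpaa_if *dpaa_intf = dev->data->dev_private;
	struct fman_if *fif = dev->process_private;

	/* ... release Rx/Tx queues ... */

	/* Free the Rx congestion groups only when they were allocated. */
	if (dpaa_intf->cgr_rx) {
		/* ... qman_delete_cgr() for each Rx queue ... */
		rte_free(dpaa_intf->cgr_rx);
		dpaa_intf->cgr_rx = NULL;
	}

	/* Deconfigure FM and VSP here, while the ethdev private data
	 * is still valid, instead of in the destructor that runs after
	 * rte_eal_cleanup() has already freed it.
	 */
	if (dpaa_intf->port_handle && dpaa_fm_deconfig(dpaa_intf, fif))
		DPAA_PMD_WARN("DPAA FM deconfig failed");
	if (fif->num_profiles && dpaa_port_vsp_cleanup(dpaa_intf, fif))
		DPAA_PMD_WARN("DPAA FM vsp cleanup failed");

	return 0;
}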
Signed-off-by: Gagandeep Singh <g.singh@nxp.com>
---
drivers/net/dpaa/dpaa_ethdev.c | 33 +++++++++++----------------------
drivers/net/dpaa/dpaa_flow.c | 5 ++---
2 files changed, 13 insertions(+), 25 deletions(-)
diff --git a/drivers/net/dpaa/dpaa_ethdev.c b/drivers/net/dpaa/dpaa_ethdev.c
index 133fbd5bc9..41ae033c75 100644
--- a/drivers/net/dpaa/dpaa_ethdev.c
+++ b/drivers/net/dpaa/dpaa_ethdev.c
@@ -561,10 +561,10 @@ static int dpaa_eth_dev_close(struct rte_eth_dev *dev)
if (dpaa_intf->cgr_rx) {
for (loop = 0; loop < dpaa_intf->nb_rx_queues; loop++)
qman_delete_cgr(&dpaa_intf->cgr_rx[loop]);
+ rte_free(dpaa_intf->cgr_rx);
+ dpaa_intf->cgr_rx = NULL;
}
- rte_free(dpaa_intf->cgr_rx);
- dpaa_intf->cgr_rx = NULL;
/* Release TX congestion Groups */
if (dpaa_intf->cgr_tx) {
for (loop = 0; loop < MAX_DPAA_CORES; loop++)
@@ -578,6 +578,15 @@ static int dpaa_eth_dev_close(struct rte_eth_dev *dev)
rte_free(dpaa_intf->tx_queues);
dpaa_intf->tx_queues = NULL;
+ if (dpaa_intf->port_handle) {
+ if (dpaa_fm_deconfig(dpaa_intf, fif))
+ DPAA_PMD_WARN("DPAA FM "
+ "deconfig failed\n");
+ }
+ if (fif->num_profiles) {
+ if (dpaa_port_vsp_cleanup(dpaa_intf, fif))
+ DPAA_PMD_WARN("DPAA FM vsp cleanup failed\n");
+ }
return ret;
}
@@ -2607,26 +2616,6 @@ static void __attribute__((destructor(102))) dpaa_finish(void)
return;
if (!(default_q || fmc_q)) {
- unsigned int i;
-
- for (i = 0; i < RTE_MAX_ETHPORTS; i++) {
- if (rte_eth_devices[i].dev_ops == &dpaa_devops) {
- struct rte_eth_dev *dev = &rte_eth_devices[i];
- struct dpaa_if *dpaa_intf =
- dev->data->dev_private;
- struct fman_if *fif =
- dev->process_private;
- if (dpaa_intf->port_handle)
- if (dpaa_fm_deconfig(dpaa_intf, fif))
- DPAA_PMD_WARN("DPAA FM "
- "deconfig failed");
- if (fif->num_profiles) {
- if (dpaa_port_vsp_cleanup(dpaa_intf,
- fif))
- DPAA_PMD_WARN("DPAA FM vsp cleanup failed");
- }
- }
- }
if (is_global_init)
if (dpaa_fm_term())
DPAA_PMD_WARN("DPAA FM term failed");
diff --git a/drivers/net/dpaa/dpaa_flow.c b/drivers/net/dpaa/dpaa_flow.c
index 9fd40fa47e..8d0875f5ec 100644
--- a/drivers/net/dpaa/dpaa_flow.c
+++ b/drivers/net/dpaa/dpaa_flow.c
@@ -13,6 +13,7 @@
#include <rte_dpaa_logs.h>
#include <fmlib/fm_port_ext.h>
#include <fmlib/fm_vsp_ext.h>
+#include <rte_pmd_dpaa.h>
#define DPAA_MAX_NUM_ETH_DEV 8
@@ -796,8 +797,6 @@ int dpaa_fm_config(struct rte_eth_dev *dev, uint64_t req_dist_set)
return -1;
}
- dpaa_intf->nb_rx_queues = dev->data->nb_rx_queues;
-
/* Open FM Port and set it in port info */
ret = set_fm_port_handle(dpaa_intf, req_dist_set, fif);
if (ret) {
@@ -806,7 +805,7 @@ int dpaa_fm_config(struct rte_eth_dev *dev, uint64_t req_dist_set)
}
if (fif->num_profiles) {
- for (i = 0; i < dpaa_intf->nb_rx_queues; i++)
+ for (i = 0; i < dev->data->nb_rx_queues; i++)
dpaa_intf->rx_queues[i].vsp_id =
fm_default_vsp_id(fif);
--
2.25.1
^ permalink raw reply [flat|nested] 129+ messages in thread
* [PATCH v5 17/18] net/dpaa: improve dpaa errata A010022 handling
2024-10-01 11:03 ` [PATCH v5 " Hemant Agrawal
` (15 preceding siblings ...)
2024-10-01 11:03 ` [PATCH v5 16/18] net/dpaa: improve the dpaa port cleanup Hemant Agrawal
@ 2024-10-01 11:03 ` Hemant Agrawal
2024-10-01 11:03 ` [PATCH v5 18/18] net/dpaa: fix reallocate_mbuf handling Hemant Agrawal
` (2 subsequent siblings)
19 siblings, 0 replies; 129+ messages in thread
From: Hemant Agrawal @ 2024-10-01 11:03 UTC (permalink / raw)
To: dev; +Cc: ferruh.yigit, Jun Yang
From: Jun Yang <jun.yang@nxp.com>
This patch improves the handling of the LS1043A A010022 errata
("RTE_LIBRTE_DPAA_ERRATA_LS1043_A010022"): the check is factored
into a dedicated inline helper that walks all segments of a chained
mbuf and verifies the alignment of each segment's data offset and,
for non-last segments, of its length, instead of only checking the
first segment's data offset.
Signed-off-by: Jun Yang <jun.yang@nxp.com>
---
drivers/net/dpaa/dpaa_rxtx.c | 40 ++++++++++++++++++++++++++++--------
1 file changed, 32 insertions(+), 8 deletions(-)
diff --git a/drivers/net/dpaa/dpaa_rxtx.c b/drivers/net/dpaa/dpaa_rxtx.c
index d82c6f3be2..1d7efdef88 100644
--- a/drivers/net/dpaa/dpaa_rxtx.c
+++ b/drivers/net/dpaa/dpaa_rxtx.c
@@ -1258,6 +1258,35 @@ reallocate_mbuf(struct qman_fq *txq, struct rte_mbuf *mbuf)
return new_mbufs[0];
}
+#ifdef RTE_LIBRTE_DPAA_ERRATA_LS1043_A010022
+/* In case the data offset is not multiple of 16,
+ * FMAN can stall because of an errata. So reallocate
+ * the buffer in such case.
+ */
+static inline int
+dpaa_eth_ls1043a_mbuf_realloc(struct rte_mbuf *mbuf)
+{
+ uint64_t len, offset;
+
+ if (dpaa_svr_family != SVR_LS1043A_FAMILY)
+ return 0;
+
+ while (mbuf) {
+ len = mbuf->data_len;
+ offset = mbuf->data_off;
+ if ((mbuf->next &&
+ !rte_is_aligned((void *)len, 16)) ||
+ !rte_is_aligned((void *)offset, 16)) {
+ DPAA_PMD_DEBUG("Errata condition hit");
+
+ return 1;
+ }
+ mbuf = mbuf->next;
+ }
+ return 0;
+}
+#endif
+
uint16_t
dpaa_eth_queue_tx(void *q, struct rte_mbuf **bufs, uint16_t nb_bufs)
{
@@ -1296,14 +1325,6 @@ dpaa_eth_queue_tx(void *q, struct rte_mbuf **bufs, uint16_t nb_bufs)
DPAA_TX_BURST_SIZE : nb_bufs;
for (loop = 0; loop < frames_to_send; loop++) {
mbuf = *(bufs++);
- /* In case the data offset is not multiple of 16,
- * FMAN can stall because of an errata. So reallocate
- * the buffer in such case.
- */
- if (dpaa_svr_family == SVR_LS1043A_FAMILY &&
- (mbuf->data_off & 0x7F) != 0x0)
- realloc_mbuf = 1;
-
fd_arr[loop].cmd = 0;
if (dpaa_ieee_1588) {
fd_arr[loop].cmd |= DPAA_FD_CMD_FCO |
@@ -1311,6 +1332,9 @@ dpaa_eth_queue_tx(void *q, struct rte_mbuf **bufs, uint16_t nb_bufs)
fd_arr[loop].cmd |= DPAA_FD_CMD_RPD |
DPAA_FD_CMD_UPD;
}
+#ifdef RTE_LIBRTE_DPAA_ERRATA_LS1043_A010022
+ realloc_mbuf = dpaa_eth_ls1043a_mbuf_realloc(mbuf);
+#endif
seqn = *dpaa_seqn(mbuf);
if (seqn != DPAA_INVALID_MBUF_SEQN) {
index = seqn - 1;
--
2.25.1
^ permalink raw reply [flat|nested] 129+ messages in thread
* [PATCH v5 18/18] net/dpaa: fix reallocate_mbuf handling
2024-10-01 11:03 ` [PATCH v5 " Hemant Agrawal
` (16 preceding siblings ...)
2024-10-01 11:03 ` [PATCH v5 17/18] net/dpaa: improve dpaa errata A010022 handling Hemant Agrawal
@ 2024-10-01 11:03 ` Hemant Agrawal
2024-10-02 0:41 ` [PATCH v5 00/18] NXP DPAA ETH driver enhancement and fixes Ferruh Yigit
2024-10-04 14:03 ` David Marchand
19 siblings, 0 replies; 129+ messages in thread
From: Hemant Agrawal @ 2024-10-01 11:03 UTC (permalink / raw)
To: dev; +Cc: ferruh.yigit, Vanshika Shukla, stable
From: Vanshika Shukla <vanshika.shukla@nxp.com>
This patch fixes a bug in the reallocate_mbuf handling: the copy
now reads from the segment currently being walked (temp_mbuf)
rather than from the head mbuf when copying data into the new mbuf.
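A minimal sketch of the corrected copy (the names temp_mbuf, offset1,
bytes_to_copy and new_mbufs come from the existing function; the loop
structure shown here is simplified, not the exact driver code):

/* Walk the source chain segment by segment; the copy source must be
 * the segment currently being walked (temp_mbuf), not the head mbuf,
 * otherwise each iteration would copy the head segment's data again.
 */
while (temp_mbuf) {
	bytes_to_copy = temp_mbuf->data_len - offset1;
	data = rte_pktmbuf_append(new_mbufs[0], bytes_to_copy);
	rte_memcpy((uint8_t *)data,
		   rte_pktmbuf_mtod_offset(temp_mbuf, void *, offset1),
		   bytes_to_copy);
	offset1 = 0;
	temp_mbuf = temp_mbuf->next;
}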
Fixes: f8c7a17a48c9 ("net/dpaa: support Tx scatter gather for non-DPAA buffer")
Cc: stable@dpdk.org
Signed-off-by: Vanshika Shukla <vanshika.shukla@nxp.com>
---
drivers/net/dpaa/dpaa_rxtx.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/net/dpaa/dpaa_rxtx.c b/drivers/net/dpaa/dpaa_rxtx.c
index 1d7efdef88..247e7b92ba 100644
--- a/drivers/net/dpaa/dpaa_rxtx.c
+++ b/drivers/net/dpaa/dpaa_rxtx.c
@@ -1223,7 +1223,7 @@ reallocate_mbuf(struct qman_fq *txq, struct rte_mbuf *mbuf)
/* Copy the data */
data = rte_pktmbuf_append(new_mbufs[0], bytes_to_copy);
- rte_memcpy((uint8_t *)data, rte_pktmbuf_mtod_offset(mbuf,
+ rte_memcpy((uint8_t *)data, rte_pktmbuf_mtod_offset(temp_mbuf,
void *, offset1), bytes_to_copy);
/* Set new offsets and the temp buffers */
--
2.25.1
^ permalink raw reply [flat|nested] 129+ messages in thread
* Re: [PATCH v5 00/18] NXP DPAA ETH driver enhancement and fixes
2024-10-01 11:03 ` [PATCH v5 " Hemant Agrawal
` (17 preceding siblings ...)
2024-10-01 11:03 ` [PATCH v5 18/18] net/dpaa: fix reallocate_mbuf handling Hemant Agrawal
@ 2024-10-02 0:41 ` Ferruh Yigit
2024-10-04 14:03 ` David Marchand
19 siblings, 0 replies; 129+ messages in thread
From: Ferruh Yigit @ 2024-10-02 0:41 UTC (permalink / raw)
To: Hemant Agrawal, dev
On 10/1/2024 12:03 PM, Hemant Agrawal wrote:
> v5: fix individual patch compilation and checkpatch warning
> v4: fix clang compilation issues
> v3: addressed Ferruh's comments
> - dropped Tx rate limit API patch
> - added one small bug fix
> - fixed removal/add of fman_offline type
>
> v2: address review comments
> - improve commit message
> - add documentation for new functions
> - make IEEE1588 config runtime
>
> This series adds several enhancements to the NXP DPAA Ethernet driver.
>
> Primarily:
> 1. timestamp and IEEE 1588 support
> 2. OH and ONIC based virtual port config in DPAA
> 3. frame display and debugging infra
>
>
> Gagandeep Singh (3):
> bus/dpaa: fix PFDRs leaks due to FQRNIs
> net/dpaa: support mempool debug
> net/dpaa: improve the dpaa port cleanup
>
> Hemant Agrawal (5):
> bus/dpaa: fix VSP for 1G fm1-mac9 and 10
> bus/dpaa: fix the fman details status
> bus/dpaa: add port buffer manager stats
> net/dpaa: implement detailed packet parsing
> net/dpaa: enhance DPAA frame display
>
> Jun Yang (2):
> net/dpaa: share MAC FMC scheme and CC parse
> net/dpaa: improve dpaa errata A010022 handling
>
> Rohit Raj (3):
> net/dpaa: fix typecasting ch ID to u32
> bus/dpaa: add OH port mode for dpaa eth
> bus/dpaa: add ONIC port mode for the DPAA eth
>
> Vanshika Shukla (5):
> net/dpaa: support Tx confirmation to enable PTP
> net/dpaa: add support to separate Tx conf queues
> net/dpaa: support Rx/Tx timestamp read
> net/dpaa: support IEEE 1588 PTP
> net/dpaa: fix reallocate_mbuf handling
>
Adding implied ack from Hemant:
For series,
Acked-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Series applied to dpdk-next-net/main, thanks.
^ permalink raw reply [flat|nested] 129+ messages in thread
* Re: [PATCH v5 00/18] NXP DPAA ETH driver enhancement and fixes
2024-10-01 11:03 ` [PATCH v5 " Hemant Agrawal
` (18 preceding siblings ...)
2024-10-02 0:41 ` [PATCH v5 00/18] NXP DPAA ETH driver enhancement and fixes Ferruh Yigit
@ 2024-10-04 14:03 ` David Marchand
19 siblings, 0 replies; 129+ messages in thread
From: David Marchand @ 2024-10-04 14:03 UTC (permalink / raw)
To: Hemant Agrawal; +Cc: dev, ferruh.yigit
On Tue, Oct 1, 2024 at 1:03 PM Hemant Agrawal <hemant.agrawal@nxp.com> wrote:
>
> v5: fix individual patch compilation and checkpatch warning
> v4: fix clang compilation issues
> v3: addressed Ferruh's comments
> - dropped Tx rate limit API patch
> - added one small bug fix
> - fixed removal/add of fman_offline type
>
> v2: address review comments
> - improve commit message
> - add documentation for new functions
> - make IEEE1588 config runtime
>
> This series adds several enhancements to the NXP DPAA Ethernet driver.
>
> Primarily:
> 1. timestamp and IEEE 1588 support
> 2. OH and ONIC based virtual port config in DPAA
> 3. frame display and debugging infra
>
>
> Gagandeep Singh (3):
> bus/dpaa: fix PFDRs leaks due to FQRNIs
> net/dpaa: support mempool debug
> net/dpaa: improve the dpaa port cleanup
>
> Hemant Agrawal (5):
> bus/dpaa: fix VSP for 1G fm1-mac9 and 10
> bus/dpaa: fix the fman details status
> bus/dpaa: add port buffer manager stats
> net/dpaa: implement detailed packet parsing
> net/dpaa: enhance DPAA frame display
>
> Jun Yang (2):
> net/dpaa: share MAC FMC scheme and CC parse
> net/dpaa: improve dpaa errata A010022 handling
>
> Rohit Raj (3):
> net/dpaa: fix typecasting ch ID to u32
> bus/dpaa: add OH port mode for dpaa eth
> bus/dpaa: add ONIC port mode for the DPAA eth
>
> Vanshika Shukla (5):
> net/dpaa: support Tx confirmation to enable PTP
> net/dpaa: add support to separate Tx conf queues
> net/dpaa: support Rx/Tx timestamp read
> net/dpaa: support IEEE 1588 PTP
> net/dpaa: fix reallocate_mbuf handling
>
> doc/guides/nics/dpaa.rst | 64 ++-
> doc/guides/nics/features/dpaa.ini | 2 +
> drivers/bus/dpaa/base/fman/fman.c | 583 +++++++++++++++++++---
> drivers/bus/dpaa/base/fman/fman_hw.c | 102 +++-
> drivers/bus/dpaa/base/fman/netcfg_layer.c | 19 +-
> drivers/bus/dpaa/base/qbman/qman.c | 46 +-
> drivers/bus/dpaa/dpaa_bus.c | 37 +-
> drivers/bus/dpaa/include/fman.h | 112 ++++-
> drivers/bus/dpaa/include/fsl_fman.h | 12 +
> drivers/bus/dpaa/include/fsl_qman.h | 4 +-
> drivers/bus/dpaa/version.map | 4 +
> drivers/net/dpaa/dpaa_ethdev.c | 428 +++++++++++++---
> drivers/net/dpaa/dpaa_ethdev.h | 68 ++-
> drivers/net/dpaa/dpaa_flow.c | 66 +--
> drivers/net/dpaa/dpaa_fmc.c | 421 ++++++++++------
> drivers/net/dpaa/dpaa_ptp.c | 118 +++++
> drivers/net/dpaa/dpaa_rxtx.c | 378 ++++++++++++--
> drivers/net/dpaa/dpaa_rxtx.h | 152 +++---
> drivers/net/dpaa/meson.build | 1 +
> 19 files changed, 2105 insertions(+), 512 deletions(-)
> create mode 100644 drivers/net/dpaa/dpaa_ptp.c
Please Hemant, I see this series reintroduces the \n issue in
drivers/bus/dpaa/base/fman (use of the FMAN_ERR macro).
Can you send fixes against next-net?
Thanks.
--
David Marchand
^ permalink raw reply [flat|nested] 129+ messages in thread