From:
To: , Pavan Nikhilesh , "Shijith Thotton"
CC:
Date: Wed, 3 Nov 2021 06:22:13 +0530
Message-ID: <20211103005213.2066-5-pbhagavatula@marvell.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20211103005213.2066-1-pbhagavatula@marvell.com>
References: <20210902070034.1086-1-pbhagavatula@marvell.com> <20211103005213.2066-1-pbhagavatula@marvell.com>
Subject: [dpdk-dev] [PATCH v2 5/5] event/cnxk: rework enqueue path

From: Pavan Nikhilesh

Rework the SSO enqueue path for CN9K to make it similar to the CN10K
enqueue interface.
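As a reading aid (not part of the patch): the core of the rework is that the
per-register pointer block cached in every CN9K workslot (CN9K_SSO_HWS_OPS:
tag_op, wqp_op, getwrk_op, swtag_*_op) is dropped in favour of a single
workslot base address, with the fast path deriving each register as
base + SSOW_LF_GWS_* offset, the convention the CN10K code already follows.
The sketch below is illustrative only; the struct names and the hws_reg()
helper are hypothetical, and the real definitions live in cnxk_eventdev.h
as changed by this diff.

#include <stdint.h>

/* OLD layout: one cached address per workslot register (CN9K_SSO_HWS_OPS). */
struct cn9k_sso_hws_state_old {
	uintptr_t swtag_desched_op;
	uintptr_t swtag_flush_op;
	uintptr_t swtag_norm_op;
	uintptr_t getwrk_op;
	uintptr_t tag_op;
	uintptr_t wqp_op;
};

/* NEW layout: only the workslot base is kept; registers are computed on use,
 * mirroring the CN10K enqueue interface.
 */
struct cn9k_sso_hws_new {
	uint64_t base;
};

/* Hypothetical helper showing the new addressing: for example,
 * hws_reg(ws->base, SSOW_LF_GWS_TAG) where the old code read ws->tag_op.
 */
static inline uint64_t
hws_reg(uint64_t base, uint64_t off)
{
	return base + off;
}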
Signed-off-by: Pavan Nikhilesh --- drivers/event/cnxk/cn9k_eventdev.c | 28 ++----- drivers/event/cnxk/cn9k_worker.c | 21 ++--- drivers/event/cnxk/cn9k_worker.h | 78 +++++++++---------- drivers/event/cnxk/cn9k_worker_deq.c | 4 +- drivers/event/cnxk/cn9k_worker_deq_ca.c | 4 +- drivers/event/cnxk/cn9k_worker_deq_tmo.c | 4 +- drivers/event/cnxk/cn9k_worker_dual_deq.c | 16 ++-- drivers/event/cnxk/cn9k_worker_dual_deq_ca.c | 19 +++-- drivers/event/cnxk/cn9k_worker_dual_deq_tmo.c | 26 +++---- drivers/event/cnxk/cnxk_eventdev.h | 25 +----- 10 files changed, 96 insertions(+), 129 deletions(-) diff --git a/drivers/event/cnxk/cn9k_eventdev.c b/drivers/event/cnxk/cn9k_eventdev.c index 6e2787252e..b68ce6c0a4 100644 --- a/drivers/event/cnxk/cn9k_eventdev.c +++ b/drivers/event/cnxk/cn9k_eventdev.c @@ -27,17 +27,6 @@ [!!(dev->tx_offloads & NIX_TX_OFFLOAD_OL3_OL4_CSUM_F)] \ [!!(dev->tx_offloads & NIX_TX_OFFLOAD_L3_L4_CSUM_F)]) -static void -cn9k_init_hws_ops(struct cn9k_sso_hws_state *ws, uintptr_t base) -{ - ws->tag_op = base + SSOW_LF_GWS_TAG; - ws->wqp_op = base + SSOW_LF_GWS_WQP; - ws->getwrk_op = base + SSOW_LF_GWS_OP_GET_WORK0; - ws->swtag_flush_op = base + SSOW_LF_GWS_OP_SWTAG_FLUSH; - ws->swtag_norm_op = base + SSOW_LF_GWS_OP_SWTAG_NORM; - ws->swtag_desched_op = base + SSOW_LF_GWS_OP_SWTAG_DESCHED; -} - static int cn9k_sso_hws_link(void *arg, void *port, uint16_t *map, uint16_t nb_link) { @@ -95,7 +84,7 @@ cn9k_sso_hws_setup(void *arg, void *hws, uintptr_t grp_base) uint64_t val; /* Set get_work tmo for HWS */ - val = NSEC2USEC(dev->deq_tmo_ns) - 1; + val = dev->deq_tmo_ns ? NSEC2USEC(dev->deq_tmo_ns) - 1 : 0; if (dev->dual_ws) { dws = hws; dws->grp_base = grp_base; @@ -148,7 +137,6 @@ cn9k_sso_hws_flush_events(void *hws, uint8_t queue_id, uintptr_t base, { struct cnxk_sso_evdev *dev = cnxk_sso_pmd_priv(arg); struct cn9k_sso_hws_dual *dws; - struct cn9k_sso_hws_state *st; struct cn9k_sso_hws *ws; uint64_t cq_ds_cnt = 1; uint64_t aq_cnt = 1; @@ -170,22 +158,21 @@ cn9k_sso_hws_flush_events(void *hws, uint8_t queue_id, uintptr_t base, if (dev->dual_ws) { dws = hws; - st = &dws->ws_state[0]; ws_base = dws->base[0]; } else { ws = hws; - st = (struct cn9k_sso_hws_state *)ws; ws_base = ws->base; } while (aq_cnt || cq_ds_cnt || ds_cnt) { - plt_write64(req, st->getwrk_op); - cn9k_sso_hws_get_work_empty(st, &ev); + plt_write64(req, ws_base + SSOW_LF_GWS_OP_GET_WORK0); + cn9k_sso_hws_get_work_empty(ws_base, &ev); if (fn != NULL && ev.u64 != 0) fn(arg, ev); if (ev.sched_type != SSO_TT_EMPTY) - cnxk_sso_hws_swtag_flush(st->tag_op, - st->swtag_flush_op); + cnxk_sso_hws_swtag_flush( + ws_base + SSOW_LF_GWS_TAG, + ws_base + SSOW_LF_GWS_OP_SWTAG_FLUSH); do { val = plt_read64(ws_base + SSOW_LF_GWS_PENDSTATE); } while (val & BIT_ULL(56)); @@ -674,8 +661,6 @@ cn9k_sso_init_hws_mem(void *arg, uint8_t port_id) &dev->sso, CN9K_DUAL_WS_PAIR_ID(port_id, 0)); dws->base[1] = roc_sso_hws_base_get( &dev->sso, CN9K_DUAL_WS_PAIR_ID(port_id, 1)); - cn9k_init_hws_ops(&dws->ws_state[0], dws->base[0]); - cn9k_init_hws_ops(&dws->ws_state[1], dws->base[1]); dws->hws_id = port_id; dws->swtag_req = 0; dws->vws = 0; @@ -695,7 +680,6 @@ cn9k_sso_init_hws_mem(void *arg, uint8_t port_id) /* First cache line is reserved for cookie */ ws = RTE_PTR_ADD(ws, sizeof(struct cnxk_sso_hws_cookie)); ws->base = roc_sso_hws_base_get(&dev->sso, port_id); - cn9k_init_hws_ops((struct cn9k_sso_hws_state *)ws, ws->base); ws->hws_id = port_id; ws->swtag_req = 0; diff --git a/drivers/event/cnxk/cn9k_worker.c b/drivers/event/cnxk/cn9k_worker.c index 
32f7cc0343..a981bc986f 100644 --- a/drivers/event/cnxk/cn9k_worker.c +++ b/drivers/event/cnxk/cn9k_worker.c @@ -19,7 +19,8 @@ cn9k_sso_hws_enq(void *port, const struct rte_event *ev) cn9k_sso_hws_forward_event(ws, ev); break; case RTE_EVENT_OP_RELEASE: - cnxk_sso_hws_swtag_flush(ws->tag_op, ws->swtag_flush_op); + cnxk_sso_hws_swtag_flush(ws->base + SSOW_LF_GWS_TAG, + ws->base + SSOW_LF_GWS_OP_SWTAG_FLUSH); break; default: return 0; @@ -67,17 +68,18 @@ uint16_t __rte_hot cn9k_sso_hws_dual_enq(void *port, const struct rte_event *ev) { struct cn9k_sso_hws_dual *dws = port; - struct cn9k_sso_hws_state *vws; + uint64_t base; - vws = &dws->ws_state[!dws->vws]; + base = dws->base[!dws->vws]; switch (ev->op) { case RTE_EVENT_OP_NEW: return cn9k_sso_hws_dual_new_event(dws, ev); case RTE_EVENT_OP_FORWARD: - cn9k_sso_hws_dual_forward_event(dws, vws, ev); + cn9k_sso_hws_dual_forward_event(dws, base, ev); break; case RTE_EVENT_OP_RELEASE: - cnxk_sso_hws_swtag_flush(vws->tag_op, vws->swtag_flush_op); + cnxk_sso_hws_swtag_flush(base + SSOW_LF_GWS_TAG, + base + SSOW_LF_GWS_OP_SWTAG_FLUSH); break; default: return 0; @@ -114,7 +116,7 @@ cn9k_sso_hws_dual_enq_fwd_burst(void *port, const struct rte_event ev[], struct cn9k_sso_hws_dual *dws = port; RTE_SET_USED(nb_events); - cn9k_sso_hws_dual_forward_event(dws, &dws->ws_state[!dws->vws], ev); + cn9k_sso_hws_dual_forward_event(dws, dws->base[!dws->vws], ev); return 1; } @@ -126,7 +128,8 @@ cn9k_sso_hws_ca_enq(void *port, struct rte_event ev[], uint16_t nb_events) RTE_SET_USED(nb_events); - return cn9k_cpt_crypto_adapter_enqueue(ws->tag_op, ev->event_ptr); + return cn9k_cpt_crypto_adapter_enqueue(ws->base + SSOW_LF_GWS_TAG, + ev->event_ptr); } uint16_t __rte_hot @@ -136,6 +139,6 @@ cn9k_sso_hws_dual_ca_enq(void *port, struct rte_event ev[], uint16_t nb_events) RTE_SET_USED(nb_events); - return cn9k_cpt_crypto_adapter_enqueue(dws->ws_state[!dws->vws].tag_op, - ev->event_ptr); + return cn9k_cpt_crypto_adapter_enqueue( + dws->base[!dws->vws] + SSOW_LF_GWS_TAG, ev->event_ptr); } diff --git a/drivers/event/cnxk/cn9k_worker.h b/drivers/event/cnxk/cn9k_worker.h index aaf612e814..9377fa50e7 100644 --- a/drivers/event/cnxk/cn9k_worker.h +++ b/drivers/event/cnxk/cn9k_worker.h @@ -37,12 +37,12 @@ cn9k_sso_hws_new_event(struct cn9k_sso_hws *ws, const struct rte_event *ev) } static __rte_always_inline void -cn9k_sso_hws_fwd_swtag(struct cn9k_sso_hws_state *vws, - const struct rte_event *ev) +cn9k_sso_hws_fwd_swtag(uint64_t base, const struct rte_event *ev) { const uint32_t tag = (uint32_t)ev->event; const uint8_t new_tt = ev->sched_type; - const uint8_t cur_tt = CNXK_TT_FROM_TAG(plt_read64(vws->tag_op)); + const uint8_t cur_tt = + CNXK_TT_FROM_TAG(plt_read64(base + SSOW_LF_GWS_TAG)); /* CNXK model * cur_tt/new_tt SSO_TT_ORDERED SSO_TT_ATOMIC SSO_TT_UNTAGGED @@ -54,24 +54,24 @@ cn9k_sso_hws_fwd_swtag(struct cn9k_sso_hws_state *vws, if (new_tt == SSO_TT_UNTAGGED) { if (cur_tt != SSO_TT_UNTAGGED) - cnxk_sso_hws_swtag_untag( - CN9K_SSOW_GET_BASE_ADDR(vws->getwrk_op) + - SSOW_LF_GWS_OP_SWTAG_UNTAG); + cnxk_sso_hws_swtag_untag(base + + SSOW_LF_GWS_OP_SWTAG_UNTAG); } else { - cnxk_sso_hws_swtag_norm(tag, new_tt, vws->swtag_norm_op); + cnxk_sso_hws_swtag_norm(tag, new_tt, + base + SSOW_LF_GWS_OP_SWTAG_NORM); } } static __rte_always_inline void -cn9k_sso_hws_fwd_group(struct cn9k_sso_hws_state *ws, - const struct rte_event *ev, const uint16_t grp) +cn9k_sso_hws_fwd_group(uint64_t base, const struct rte_event *ev, + const uint16_t grp) { const uint32_t tag = (uint32_t)ev->event; const 
uint8_t new_tt = ev->sched_type; - plt_write64(ev->u64, CN9K_SSOW_GET_BASE_ADDR(ws->getwrk_op) + - SSOW_LF_GWS_OP_UPD_WQP_GRP1); - cnxk_sso_hws_swtag_desched(tag, new_tt, grp, ws->swtag_desched_op); + plt_write64(ev->u64, base + SSOW_LF_GWS_OP_UPD_WQP_GRP1); + cnxk_sso_hws_swtag_desched(tag, new_tt, grp, + base + SSOW_LF_GWS_OP_SWTAG_DESCHED); } static __rte_always_inline void @@ -80,8 +80,8 @@ cn9k_sso_hws_forward_event(struct cn9k_sso_hws *ws, const struct rte_event *ev) const uint8_t grp = ev->queue_id; /* Group hasn't changed, Use SWTAG to forward the event */ - if (CNXK_GRP_FROM_TAG(plt_read64(ws->tag_op)) == grp) { - cn9k_sso_hws_fwd_swtag((struct cn9k_sso_hws_state *)ws, ev); + if (CNXK_GRP_FROM_TAG(plt_read64(ws->base + SSOW_LF_GWS_TAG)) == grp) { + cn9k_sso_hws_fwd_swtag(ws->base, ev); ws->swtag_req = 1; } else { /* @@ -89,8 +89,7 @@ cn9k_sso_hws_forward_event(struct cn9k_sso_hws *ws, const struct rte_event *ev) * Use deschedule/add_work operation to transfer the event to * new group/core */ - cn9k_sso_hws_fwd_group((struct cn9k_sso_hws_state *)ws, ev, - grp); + cn9k_sso_hws_fwd_group(ws->base, ev, grp); } } @@ -115,15 +114,14 @@ cn9k_sso_hws_dual_new_event(struct cn9k_sso_hws_dual *dws, } static __rte_always_inline void -cn9k_sso_hws_dual_forward_event(struct cn9k_sso_hws_dual *dws, - struct cn9k_sso_hws_state *vws, +cn9k_sso_hws_dual_forward_event(struct cn9k_sso_hws_dual *dws, uint64_t base, const struct rte_event *ev) { const uint8_t grp = ev->queue_id; /* Group hasn't changed, Use SWTAG to forward the event */ - if (CNXK_GRP_FROM_TAG(plt_read64(vws->tag_op)) == grp) { - cn9k_sso_hws_fwd_swtag(vws, ev); + if (CNXK_GRP_FROM_TAG(plt_read64(base + SSOW_LF_GWS_TAG)) == grp) { + cn9k_sso_hws_fwd_swtag(base, ev); dws->swtag_req = 1; } else { /* @@ -131,7 +129,7 @@ cn9k_sso_hws_dual_forward_event(struct cn9k_sso_hws_dual *dws, * Use deschedule/add_work operation to transfer the event to * new group/core */ - cn9k_sso_hws_fwd_group(vws, ev, grp); + cn9k_sso_hws_fwd_group(base, ev, grp); } } @@ -149,8 +147,7 @@ cn9k_wqe_to_mbuf(uint64_t wqe, const uint64_t mbuf, uint8_t port_id, } static __rte_always_inline uint16_t -cn9k_sso_hws_dual_get_work(struct cn9k_sso_hws_state *ws, - struct cn9k_sso_hws_state *ws_pair, +cn9k_sso_hws_dual_get_work(uint64_t base, uint64_t pair_base, struct rte_event *ev, const uint32_t flags, const void *const lookup_mem, struct cnxk_timesync_info *const tstamp) @@ -177,14 +174,15 @@ cn9k_sso_hws_dual_get_work(struct cn9k_sso_hws_state *ws, " prfm pldl1keep, [%[mbuf]] \n" : [tag] "=&r"(gw.u64[0]), [wqp] "=&r"(gw.u64[1]), [mbuf] "=&r"(mbuf) - : [tag_loc] "r"(ws->tag_op), [wqp_loc] "r"(ws->wqp_op), - [gw] "r"(set_gw), [pong] "r"(ws_pair->getwrk_op)); + : [tag_loc] "r"(base + SSOW_LF_GWS_TAG), + [wqp_loc] "r"(base + SSOW_LF_GWS_WQP), [gw] "r"(set_gw), + [pong] "r"(pair_base + SSOW_LF_GWS_OP_GET_WORK0)); #else - gw.u64[0] = plt_read64(ws->tag_op); + gw.u64[0] = plt_read64(base + SSOW_LF_GWS_TAG); while ((BIT_ULL(63)) & gw.u64[0]) - gw.u64[0] = plt_read64(ws->tag_op); - gw.u64[1] = plt_read64(ws->wqp_op); - plt_write64(set_gw, ws_pair->getwrk_op); + gw.u64[0] = plt_read64(base + SSOW_LF_GWS_TAG); + gw.u64[1] = plt_read64(base + SSOW_LF_GWS_WQP); + plt_write64(set_gw, pair_base + SSOW_LF_GWS_OP_GET_WORK0); mbuf = (uint64_t)((char *)gw.u64[1] - sizeof(struct rte_mbuf)); #endif @@ -236,7 +234,7 @@ cn9k_sso_hws_get_work(struct cn9k_sso_hws *ws, struct rte_event *ev, plt_write64(BIT_ULL(16) | /* wait for work. */ 1, /* Use Mask set 0. 
*/ - ws->getwrk_op); + ws->base + SSOW_LF_GWS_OP_GET_WORK0); if (flags & NIX_RX_OFFLOAD_PTYPE_F) rte_prefetch_non_temporal(lookup_mem); @@ -255,13 +253,14 @@ cn9k_sso_hws_get_work(struct cn9k_sso_hws *ws, struct rte_event *ev, " prfm pldl1keep, [%[mbuf]] \n" : [tag] "=&r"(gw.u64[0]), [wqp] "=&r"(gw.u64[1]), [mbuf] "=&r"(mbuf) - : [tag_loc] "r"(ws->tag_op), [wqp_loc] "r"(ws->wqp_op)); + : [tag_loc] "r"(ws->base + SSOW_LF_GWS_TAG), + [wqp_loc] "r"(ws->base + SSOW_LF_GWS_WQP)); #else - gw.u64[0] = plt_read64(ws->tag_op); + gw.u64[0] = plt_read64(ws->base + SSOW_LF_GWS_TAG); while ((BIT_ULL(63)) & gw.u64[0]) - gw.u64[0] = plt_read64(ws->tag_op); + gw.u64[0] = plt_read64(ws->base + SSOW_LF_GWS_TAG); - gw.u64[1] = plt_read64(ws->wqp_op); + gw.u64[1] = plt_read64(ws->base + SSOW_LF_GWS_WQP); mbuf = (uint64_t)((char *)gw.u64[1] - sizeof(struct rte_mbuf)); #endif @@ -303,7 +302,7 @@ cn9k_sso_hws_get_work(struct cn9k_sso_hws *ws, struct rte_event *ev, /* Used in cleaning up workslot. */ static __rte_always_inline uint16_t -cn9k_sso_hws_get_work_empty(struct cn9k_sso_hws_state *ws, struct rte_event *ev) +cn9k_sso_hws_get_work_empty(uint64_t base, struct rte_event *ev) { union { __uint128_t get_work; @@ -325,13 +324,14 @@ cn9k_sso_hws_get_work_empty(struct cn9k_sso_hws_state *ws, struct rte_event *ev) " sub %[mbuf], %[wqp], #0x80 \n" : [tag] "=&r"(gw.u64[0]), [wqp] "=&r"(gw.u64[1]), [mbuf] "=&r"(mbuf) - : [tag_loc] "r"(ws->tag_op), [wqp_loc] "r"(ws->wqp_op)); + : [tag_loc] "r"(base + SSOW_LF_GWS_TAG), + [wqp_loc] "r"(base + SSOW_LF_GWS_WQP)); #else - gw.u64[0] = plt_read64(ws->tag_op); + gw.u64[0] = plt_read64(base + SSOW_LF_GWS_TAG); while ((BIT_ULL(63)) & gw.u64[0]) - gw.u64[0] = plt_read64(ws->tag_op); + gw.u64[0] = plt_read64(base + SSOW_LF_GWS_TAG); - gw.u64[1] = plt_read64(ws->wqp_op); + gw.u64[1] = plt_read64(base + SSOW_LF_GWS_WQP); mbuf = (uint64_t)((char *)gw.u64[1] - sizeof(struct rte_mbuf)); #endif diff --git a/drivers/event/cnxk/cn9k_worker_deq.c b/drivers/event/cnxk/cn9k_worker_deq.c index d65c72af7a..ba6fd05381 100644 --- a/drivers/event/cnxk/cn9k_worker_deq.c +++ b/drivers/event/cnxk/cn9k_worker_deq.c @@ -16,7 +16,7 @@ \ if (ws->swtag_req) { \ ws->swtag_req = 0; \ - cnxk_sso_hws_swtag_wait(ws->tag_op); \ + cnxk_sso_hws_swtag_wait(ws->base + SSOW_LF_GWS_TAG); \ return 1; \ } \ \ @@ -32,7 +32,7 @@ \ if (ws->swtag_req) { \ ws->swtag_req = 0; \ - cnxk_sso_hws_swtag_wait(ws->tag_op); \ + cnxk_sso_hws_swtag_wait(ws->base + SSOW_LF_GWS_TAG); \ return 1; \ } \ \ diff --git a/drivers/event/cnxk/cn9k_worker_deq_ca.c b/drivers/event/cnxk/cn9k_worker_deq_ca.c index b5d0263559..ffe7a7c9e2 100644 --- a/drivers/event/cnxk/cn9k_worker_deq_ca.c +++ b/drivers/event/cnxk/cn9k_worker_deq_ca.c @@ -16,7 +16,7 @@ \ if (ws->swtag_req) { \ ws->swtag_req = 0; \ - cnxk_sso_hws_swtag_wait(ws->tag_op); \ + cnxk_sso_hws_swtag_wait(ws->base + SSOW_LF_GWS_TAG); \ return 1; \ } \ \ @@ -42,7 +42,7 @@ \ if (ws->swtag_req) { \ ws->swtag_req = 0; \ - cnxk_sso_hws_swtag_wait(ws->tag_op); \ + cnxk_sso_hws_swtag_wait(ws->base + SSOW_LF_GWS_TAG); \ return 1; \ } \ \ diff --git a/drivers/event/cnxk/cn9k_worker_deq_tmo.c b/drivers/event/cnxk/cn9k_worker_deq_tmo.c index b41a590fb7..5147c1933a 100644 --- a/drivers/event/cnxk/cn9k_worker_deq_tmo.c +++ b/drivers/event/cnxk/cn9k_worker_deq_tmo.c @@ -16,7 +16,7 @@ \ if (ws->swtag_req) { \ ws->swtag_req = 0; \ - cnxk_sso_hws_swtag_wait(ws->tag_op); \ + cnxk_sso_hws_swtag_wait(ws->base + SSOW_LF_GWS_TAG); \ return ret; \ } \ \ @@ -46,7 +46,7 @@ \ if (ws->swtag_req) { \ ws->swtag_req = 
0; \ - cnxk_sso_hws_swtag_wait(ws->tag_op); \ + cnxk_sso_hws_swtag_wait(ws->base + SSOW_LF_GWS_TAG); \ return ret; \ } \ \ diff --git a/drivers/event/cnxk/cn9k_worker_dual_deq.c b/drivers/event/cnxk/cn9k_worker_dual_deq.c index 440b66edca..ed134ab779 100644 --- a/drivers/event/cnxk/cn9k_worker_dual_deq.c +++ b/drivers/event/cnxk/cn9k_worker_dual_deq.c @@ -16,14 +16,14 @@ RTE_SET_USED(timeout_ticks); \ if (dws->swtag_req) { \ dws->swtag_req = 0; \ - cnxk_sso_hws_swtag_wait( \ - dws->ws_state[!dws->vws].tag_op); \ + cnxk_sso_hws_swtag_wait(dws->base[!dws->vws] + \ + SSOW_LF_GWS_TAG); \ return 1; \ } \ \ gw = cn9k_sso_hws_dual_get_work( \ - &dws->ws_state[dws->vws], &dws->ws_state[!dws->vws], \ - ev, flags, dws->lookup_mem, dws->tstamp); \ + dws->base[dws->vws], dws->base[!dws->vws], ev, flags, \ + dws->lookup_mem, dws->tstamp); \ dws->vws = !dws->vws; \ return gw; \ } \ @@ -37,14 +37,14 @@ RTE_SET_USED(timeout_ticks); \ if (dws->swtag_req) { \ dws->swtag_req = 0; \ - cnxk_sso_hws_swtag_wait( \ - dws->ws_state[!dws->vws].tag_op); \ + cnxk_sso_hws_swtag_wait(dws->base[!dws->vws] + \ + SSOW_LF_GWS_TAG); \ return 1; \ } \ \ gw = cn9k_sso_hws_dual_get_work( \ - &dws->ws_state[dws->vws], &dws->ws_state[!dws->vws], \ - ev, flags, dws->lookup_mem, dws->tstamp); \ + dws->base[dws->vws], dws->base[!dws->vws], ev, flags, \ + dws->lookup_mem, dws->tstamp); \ dws->vws = !dws->vws; \ return gw; \ } diff --git a/drivers/event/cnxk/cn9k_worker_dual_deq_ca.c b/drivers/event/cnxk/cn9k_worker_dual_deq_ca.c index b66e2cfc08..22e148be73 100644 --- a/drivers/event/cnxk/cn9k_worker_dual_deq_ca.c +++ b/drivers/event/cnxk/cn9k_worker_dual_deq_ca.c @@ -16,15 +16,14 @@ RTE_SET_USED(timeout_ticks); \ if (dws->swtag_req) { \ dws->swtag_req = 0; \ - cnxk_sso_hws_swtag_wait( \ - dws->ws_state[!dws->vws].tag_op); \ + cnxk_sso_hws_swtag_wait(dws->base[!dws->vws] + \ + SSOW_LF_GWS_TAG); \ return 1; \ } \ \ - gw = cn9k_sso_hws_dual_get_work(&dws->ws_state[dws->vws], \ - &dws->ws_state[!dws->vws], ev, \ - flags | CPT_RX_WQE_F, \ - dws->lookup_mem, dws->tstamp); \ + gw = cn9k_sso_hws_dual_get_work( \ + dws->base[dws->vws], dws->base[!dws->vws], ev, \ + flags | CPT_RX_WQE_F, dws->lookup_mem, dws->tstamp); \ dws->vws = !dws->vws; \ return gw; \ } \ @@ -48,14 +47,14 @@ RTE_SET_USED(timeout_ticks); \ if (dws->swtag_req) { \ dws->swtag_req = 0; \ - cnxk_sso_hws_swtag_wait( \ - dws->ws_state[!dws->vws].tag_op); \ + cnxk_sso_hws_swtag_wait(dws->base[!dws->vws] + \ + SSOW_LF_GWS_TAG); \ return 1; \ } \ \ gw = cn9k_sso_hws_dual_get_work( \ - &dws->ws_state[dws->vws], &dws->ws_state[!dws->vws], \ - ev, flags | NIX_RX_MULTI_SEG_F | CPT_RX_WQE_F, \ + dws->base[dws->vws], dws->base[!dws->vws], ev, \ + flags | NIX_RX_MULTI_SEG_F | CPT_RX_WQE_F, \ dws->lookup_mem, dws->tstamp); \ dws->vws = !dws->vws; \ return gw; \ diff --git a/drivers/event/cnxk/cn9k_worker_dual_deq_tmo.c b/drivers/event/cnxk/cn9k_worker_dual_deq_tmo.c index 78a4b3d127..e5ba3feb22 100644 --- a/drivers/event/cnxk/cn9k_worker_dual_deq_tmo.c +++ b/drivers/event/cnxk/cn9k_worker_dual_deq_tmo.c @@ -16,20 +16,19 @@ \ if (dws->swtag_req) { \ dws->swtag_req = 0; \ - cnxk_sso_hws_swtag_wait( \ - dws->ws_state[!dws->vws].tag_op); \ + cnxk_sso_hws_swtag_wait(dws->base[!dws->vws] + \ + SSOW_LF_GWS_TAG); \ return ret; \ } \ \ ret = cn9k_sso_hws_dual_get_work( \ - &dws->ws_state[dws->vws], &dws->ws_state[!dws->vws], \ - ev, flags, dws->lookup_mem, dws->tstamp); \ + dws->base[dws->vws], dws->base[!dws->vws], ev, flags, \ + dws->lookup_mem, dws->tstamp); \ dws->vws = !dws->vws; \ for (iter 
= 1; iter < timeout_ticks && (ret == 0); iter++) { \ ret = cn9k_sso_hws_dual_get_work( \ - &dws->ws_state[dws->vws], \ - &dws->ws_state[!dws->vws], ev, flags, \ - dws->lookup_mem, dws->tstamp); \ + dws->base[dws->vws], dws->base[!dws->vws], ev, \ + flags, dws->lookup_mem, dws->tstamp); \ dws->vws = !dws->vws; \ } \ \ @@ -55,20 +54,19 @@ \ if (dws->swtag_req) { \ dws->swtag_req = 0; \ - cnxk_sso_hws_swtag_wait( \ - dws->ws_state[!dws->vws].tag_op); \ + cnxk_sso_hws_swtag_wait(dws->base[!dws->vws] + \ + SSOW_LF_GWS_TAG); \ return ret; \ } \ \ ret = cn9k_sso_hws_dual_get_work( \ - &dws->ws_state[dws->vws], &dws->ws_state[!dws->vws], \ - ev, flags, dws->lookup_mem, dws->tstamp); \ + dws->base[dws->vws], dws->base[!dws->vws], ev, flags, \ + dws->lookup_mem, dws->tstamp); \ dws->vws = !dws->vws; \ for (iter = 1; iter < timeout_ticks && (ret == 0); iter++) { \ ret = cn9k_sso_hws_dual_get_work( \ - &dws->ws_state[dws->vws], \ - &dws->ws_state[!dws->vws], ev, flags, \ - dws->lookup_mem, dws->tstamp); \ + dws->base[dws->vws], dws->base[!dws->vws], ev, \ + flags, dws->lookup_mem, dws->tstamp); \ dws->vws = !dws->vws; \ } \ \ diff --git a/drivers/event/cnxk/cnxk_eventdev.h b/drivers/event/cnxk/cnxk_eventdev.h index d9f52d03e0..305c6a3b9e 100644 --- a/drivers/event/cnxk/cnxk_eventdev.h +++ b/drivers/event/cnxk/cnxk_eventdev.h @@ -136,19 +136,9 @@ struct cn10k_sso_hws { uint8_t tx_adptr_data[]; } __rte_cache_aligned; -/* CN9K HWS ops */ -#define CN9K_SSO_HWS_OPS \ - uintptr_t swtag_desched_op; \ - uintptr_t swtag_flush_op; \ - uintptr_t swtag_norm_op; \ - uintptr_t getwrk_op; \ - uintptr_t tag_op; \ - uintptr_t wqp_op - /* Event port a.k.a GWS */ struct cn9k_sso_hws { - /* Get Work Fastpath data */ - CN9K_SSO_HWS_OPS; + uint64_t base; /* PTP timestamp */ struct cnxk_timesync_info *tstamp; void *lookup_mem; @@ -159,17 +149,11 @@ struct cn9k_sso_hws { uint64_t *fc_mem; uintptr_t grp_base; /* Tx Fastpath data */ - uint64_t base __rte_cache_aligned; - uint8_t tx_adptr_data[]; + uint8_t tx_adptr_data[] __rte_cache_aligned; } __rte_cache_aligned; -struct cn9k_sso_hws_state { - CN9K_SSO_HWS_OPS; -}; - struct cn9k_sso_hws_dual { - /* Get Work Fastpath data */ - struct cn9k_sso_hws_state ws_state[2]; /* Ping and Pong */ + uint64_t base[2]; /* Ping and Pong */ /* PTP timestamp */ struct cnxk_timesync_info *tstamp; void *lookup_mem; @@ -181,8 +165,7 @@ struct cn9k_sso_hws_dual { uint64_t *fc_mem; uintptr_t grp_base; /* Tx Fastpath data */ - uint64_t base[2] __rte_cache_aligned; - uint8_t tx_adptr_data[]; + uint8_t tx_adptr_data[] __rte_cache_aligned; } __rte_cache_aligned; struct cnxk_sso_hws_cookie { -- 2.17.1