From mboxrd@z Thu Jan 1 00:00:00 1970
From: Pavan Nikhilesh <pbhagavatula@marvell.com>
To: Pavan Nikhilesh <pbhagavatula@marvell.com>, Shijith Thotton
Cc: dev@dpdk.org
Subject: [PATCH 10/20] event/cnxk: add CN20K device start
Date: Thu, 3 Oct 2024 18:52:27 +0530
Message-ID: <20241003132237.20193-10-pbhagavatula@marvell.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20241003132237.20193-1-pbhagavatula@marvell.com>
References: <20241003132237.20193-1-pbhagavatula@marvell.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain
List-Id: DPDK patches and discussions <dev.dpdk.org>

From: Pavan Nikhilesh <pbhagavatula@marvell.com>

Add CN20K device start function, along with a few cleanup APIs to
maintain sanity.
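
For context, dev_start is the op that the common eventdev layer invokes when an
application starts the device. A minimal application-side sketch (single
queue/port, default configs, error handling omitted; all identifiers come from
the public rte_eventdev API, dev_id/queue/port numbering is illustrative only):

	#include <rte_eventdev.h>

	struct rte_event_dev_config conf = {0};
	struct rte_event_dev_info info;
	uint8_t dev_id = 0;

	rte_event_dev_info_get(dev_id, &info);
	conf.nb_event_queues = 1;
	conf.nb_event_ports = 1;
	conf.nb_events_limit = info.max_num_events;
	conf.nb_event_queue_flows = info.max_event_queue_flows;
	conf.nb_event_port_dequeue_depth = info.max_event_port_dequeue_depth;
	conf.nb_event_port_enqueue_depth = info.max_event_port_enqueue_depth;
	conf.dequeue_timeout_ns = info.min_dequeue_timeout_ns;

	rte_event_dev_configure(dev_id, &conf);
	rte_event_queue_setup(dev_id, 0, NULL);	/* default queue config */
	rte_event_port_setup(dev_id, 0, NULL);	/* default port config */
	rte_event_port_link(dev_id, 0, NULL, NULL, 0);	/* link port 0 to all queues */

	/* Invokes eventdev_ops->dev_start, i.e. cn20k_sso_start() on CN20K. */
	rte_event_dev_start(dev_id);
	/* ... enqueue/dequeue events ... */
	rte_event_dev_stop(dev_id);
	rte_event_dev_close(dev_id);
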
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
---
 drivers/event/cnxk/cn10k_eventdev.c | 103 +--------------------------
 drivers/event/cnxk/cn20k_eventdev.c |  76 ++++++++++++++++++++
 drivers/event/cnxk/cnxk_common.h    | 104 ++++++++++++++++++++++++++++
 3 files changed, 183 insertions(+), 100 deletions(-)

diff --git a/drivers/event/cnxk/cn10k_eventdev.c b/drivers/event/cnxk/cn10k_eventdev.c
index 1963896aa2..217b7deb92 100644
--- a/drivers/event/cnxk/cn10k_eventdev.c
+++ b/drivers/event/cnxk/cn10k_eventdev.c
@@ -153,83 +153,6 @@ cn10k_sso_hws_flush_events(void *hws, uint8_t queue_id, uintptr_t base,
 	return 0;
 }
 
-static void
-cn10k_sso_hws_reset(void *arg, void *hws)
-{
-	struct cnxk_sso_evdev *dev = arg;
-	struct cn10k_sso_hws *ws = hws;
-	uintptr_t base = ws->base;
-	uint64_t pend_state;
-	union {
-		__uint128_t wdata;
-		uint64_t u64[2];
-	} gw;
-	uint8_t pend_tt;
-	bool is_pend;
-
-	roc_sso_hws_gwc_invalidate(&dev->sso, &ws->hws_id, 1);
-	plt_write64(0, ws->base + SSOW_LF_GWS_OP_GWC_INVAL);
-	/* Wait till getwork/swtp/waitw/desched completes. */
-	is_pend = false;
-	/* Work in WQE0 is always consumed, unless its a SWTAG. */
-	pend_state = plt_read64(ws->base + SSOW_LF_GWS_PENDSTATE);
-	if (pend_state & (BIT_ULL(63) | BIT_ULL(62) | BIT_ULL(54)) ||
-	    ws->swtag_req)
-		is_pend = true;
-
-	do {
-		pend_state = plt_read64(base + SSOW_LF_GWS_PENDSTATE);
-	} while (pend_state & (BIT_ULL(63) | BIT_ULL(62) | BIT_ULL(58) |
-			       BIT_ULL(56) | BIT_ULL(54)));
-	pend_tt = CNXK_TT_FROM_TAG(plt_read64(base + SSOW_LF_GWS_WQE0));
-	if (is_pend && pend_tt != SSO_TT_EMPTY) { /* Work was pending */
-		if (pend_tt == SSO_TT_ATOMIC || pend_tt == SSO_TT_ORDERED)
-			cnxk_sso_hws_swtag_untag(base +
-						 SSOW_LF_GWS_OP_SWTAG_UNTAG);
-		plt_write64(0, base + SSOW_LF_GWS_OP_DESCHED);
-	} else if (pend_tt != SSO_TT_EMPTY) {
-		plt_write64(0, base + SSOW_LF_GWS_OP_SWTAG_FLUSH);
-	}
-
-	/* Wait for desched to complete. */
-	do {
-		pend_state = plt_read64(base + SSOW_LF_GWS_PENDSTATE);
-	} while (pend_state & (BIT_ULL(58) | BIT_ULL(56)));
-
-	switch (dev->gw_mode) {
-	case CNXK_GW_MODE_PREF:
-	case CNXK_GW_MODE_PREF_WFE:
-		while (plt_read64(base + SSOW_LF_GWS_PRF_WQE0) & BIT_ULL(63))
-			;
-		break;
-	case CNXK_GW_MODE_NONE:
-	default:
-		break;
-	}
-
-	if (CNXK_TT_FROM_TAG(plt_read64(base + SSOW_LF_GWS_PRF_WQE0)) !=
-	    SSO_TT_EMPTY) {
-		plt_write64(BIT_ULL(16) | 1,
-			    ws->base + SSOW_LF_GWS_OP_GET_WORK0);
-		do {
-			roc_load_pair(gw.u64[0], gw.u64[1],
-				      ws->base + SSOW_LF_GWS_WQE0);
-		} while (gw.u64[0] & BIT_ULL(63));
-		pend_tt = CNXK_TT_FROM_TAG(plt_read64(base + SSOW_LF_GWS_WQE0));
-		if (pend_tt != SSO_TT_EMPTY) { /* Work was pending */
-			if (pend_tt == SSO_TT_ATOMIC ||
-			    pend_tt == SSO_TT_ORDERED)
-				cnxk_sso_hws_swtag_untag(
-					base + SSOW_LF_GWS_OP_SWTAG_UNTAG);
-			plt_write64(0, base + SSOW_LF_GWS_OP_DESCHED);
-		}
-	}
-
-	plt_write64(0, base + SSOW_LF_GWS_OP_GWC_INVAL);
-	roc_sso_hws_gwc_invalidate(&dev->sso, &ws->hws_id, 1);
-	rte_mb();
-}
-
 static void
 cn10k_sso_set_rsrc(void *arg)
 {
@@ -701,24 +624,6 @@ cn10k_sso_port_unlink(struct rte_eventdev *event_dev, void *port, uint8_t queues
 	return cn10k_sso_port_unlink_profile(event_dev, port, queues, nb_unlinks, 0);
 }
 
-static void
-cn10k_sso_configure_queue_stash(struct rte_eventdev *event_dev)
-{
-	struct cnxk_sso_evdev *dev = cnxk_sso_pmd_priv(event_dev);
-	struct roc_sso_hwgrp_stash stash[dev->stash_cnt];
-	int i, rc;
-
-	plt_sso_dbg();
-	for (i = 0; i < dev->stash_cnt; i++) {
-		stash[i].hwgrp = dev->stash_parse_data[i].queue;
-		stash[i].stash_offset = dev->stash_parse_data[i].stash_offset;
-		stash[i].stash_count = dev->stash_parse_data[i].stash_length;
-	}
-	rc = roc_sso_hwgrp_stash_config(&dev->sso, stash, dev->stash_cnt);
-	if (rc < 0)
-		plt_warn("failed to configure HWGRP WQE stashing rc = %d", rc);
-}
-
 static int
 cn10k_sso_start(struct rte_eventdev *event_dev)
 {
@@ -730,9 +635,8 @@ cn10k_sso_start(struct rte_eventdev *event_dev)
 	if (rc < 0)
 		return rc;
 
-	cn10k_sso_configure_queue_stash(event_dev);
-	rc = cnxk_sso_start(event_dev, cn10k_sso_hws_reset,
-			    cn10k_sso_hws_flush_events);
+	cnxk_sso_configure_queue_stash(event_dev);
+	rc = cnxk_sso_start(event_dev, cnxk_sso_hws_reset, cn10k_sso_hws_flush_events);
 	if (rc < 0)
 		return rc;
 	cn10k_sso_fp_fns_set(event_dev);
@@ -753,8 +657,7 @@ cn10k_sso_stop(struct rte_eventdev *event_dev)
 	for (i = 0; i < event_dev->data->nb_ports; i++)
 		hws[i] = i;
 	roc_sso_hws_gwc_invalidate(&dev->sso, hws, event_dev->data->nb_ports);
-	cnxk_sso_stop(event_dev, cn10k_sso_hws_reset,
-		      cn10k_sso_hws_flush_events);
+	cnxk_sso_stop(event_dev, cnxk_sso_hws_reset, cn10k_sso_hws_flush_events);
 }
 
 static int
diff --git a/drivers/event/cnxk/cn20k_eventdev.c b/drivers/event/cnxk/cn20k_eventdev.c
index 079c7809f6..01be56469f 100644
--- a/drivers/event/cnxk/cn20k_eventdev.c
+++ b/drivers/event/cnxk/cn20k_eventdev.c
@@ -85,6 +85,61 @@ cn20k_sso_hws_release(void *arg, void *hws)
 	memset(ws, 0, sizeof(*ws));
 }
 
+static int
+cn20k_sso_hws_flush_events(void *hws, uint8_t queue_id, uintptr_t base, cnxk_handle_event_t fn,
+			   void *arg)
+{
+	struct cnxk_sso_evdev *dev = cnxk_sso_pmd_priv(arg);
+	uint64_t retry = CNXK_SSO_FLUSH_RETRY_MAX;
+	struct cn20k_sso_hws *ws = hws;
+	uint64_t cq_ds_cnt = 1;
+	uint64_t aq_cnt = 1;
+	uint64_t ds_cnt = 1;
+	struct rte_event ev;
+	uint64_t val, req;
+
+	plt_write64(0, base + SSO_LF_GGRP_QCTL);
+
+	roc_sso_hws_gwc_invalidate(&dev->sso, &ws->hws_id, 1);
+	plt_write64(0, ws->base + SSOW_LF_GWS_OP_GWC_INVAL);
+	req = queue_id;	    /* GGRP ID */
+	req |= BIT_ULL(18); /* Grouped */
+	req |= BIT_ULL(16); /* WAIT */
+
+	aq_cnt = plt_read64(base + SSO_LF_GGRP_AQ_CNT);
+	ds_cnt = plt_read64(base + SSO_LF_GGRP_MISC_CNT);
+	cq_ds_cnt = plt_read64(base + SSO_LF_GGRP_INT_CNT);
+	cq_ds_cnt &= 0x3FFF3FFF0000;
+
+	while (aq_cnt || cq_ds_cnt || ds_cnt) {
+		plt_write64(req, ws->base + SSOW_LF_GWS_OP_GET_WORK0);
+		cn20k_sso_hws_get_work_empty(ws, &ev, 0);
+		if (fn != NULL && ev.u64 != 0)
+			fn(arg, ev);
+		if (ev.sched_type != SSO_TT_EMPTY)
+			cnxk_sso_hws_swtag_flush(ws->base);
+		else if (retry-- == 0)
+			break;
+		do {
+			val = plt_read64(ws->base + SSOW_LF_GWS_PENDSTATE);
+		} while (val & BIT_ULL(56));
+		aq_cnt = plt_read64(base + SSO_LF_GGRP_AQ_CNT);
+		ds_cnt = plt_read64(base + SSO_LF_GGRP_MISC_CNT);
+		cq_ds_cnt = plt_read64(base + SSO_LF_GGRP_INT_CNT);
+		/* Extract cq and ds count */
+		cq_ds_cnt &= 0x3FFF3FFF0000;
+	}
+
+	if (aq_cnt || cq_ds_cnt || ds_cnt)
+		return -EAGAIN;
+
+	plt_write64(0, ws->base + SSOW_LF_GWS_OP_GWC_INVAL);
+	roc_sso_hws_gwc_invalidate(&dev->sso, &ws->hws_id, 1);
+	rte_mb();
+
+	return 0;
+}
+
 static void
 cn20k_sso_set_rsrc(void *arg)
 {
@@ -313,6 +368,25 @@ cn20k_sso_port_unlink(struct rte_eventdev *event_dev, void *port, uint8_t queues
 	return cn20k_sso_port_unlink_profile(event_dev, port, queues, nb_unlinks, 0);
 }
 
+static int
+cn20k_sso_start(struct rte_eventdev *event_dev)
+{
+	struct cnxk_sso_evdev *dev = cnxk_sso_pmd_priv(event_dev);
+	uint8_t hws[RTE_EVENT_MAX_PORTS_PER_DEV];
+	int rc, i;
+
+	cnxk_sso_configure_queue_stash(event_dev);
+	rc = cnxk_sso_start(event_dev, cnxk_sso_hws_reset, cn20k_sso_hws_flush_events);
+	if (rc < 0)
+		return rc;
+	cn20k_sso_fp_fns_set(event_dev);
+	for (i = 0; i < event_dev->data->nb_ports; i++)
+		hws[i] = i;
+	roc_sso_hws_gwc_invalidate(&dev->sso, hws, event_dev->data->nb_ports);
+
+	return rc;
+}
+
 static struct eventdev_ops cn20k_sso_dev_ops = {
 	.dev_infos_get = cn20k_sso_info_get,
 	.dev_configure = cn20k_sso_dev_configure,
@@ -331,6 +405,8 @@ static struct eventdev_ops cn20k_sso_dev_ops = {
 	.port_link_profile = cn20k_sso_port_link_profile,
 	.port_unlink_profile = cn20k_sso_port_unlink_profile,
 	.timeout_ticks = cnxk_sso_timeout_ticks,
+
+	.dev_start = cn20k_sso_start,
 };
 
 static int
diff --git a/drivers/event/cnxk/cnxk_common.h b/drivers/event/cnxk/cnxk_common.h
index 5f6aeb37eb..f2610bd1a8 100644
--- a/drivers/event/cnxk/cnxk_common.h
+++ b/drivers/event/cnxk/cnxk_common.h
@@ -8,6 +8,15 @@
 #include "cnxk_eventdev.h"
 #include "cnxk_worker.h"
 
+struct cnxk_sso_hws_prf {
+	uint64_t base;
+	uint32_t gw_wdata;
+	void *lookup_mem;
+	uint64_t gw_rdata;
+	uint8_t swtag_req;
+	uint8_t hws_id;
+};
+
 static uint32_t
 cnxk_sso_hws_prf_wdata(struct cnxk_sso_evdev *dev)
 {
@@ -31,4 +40,99 @@ cnxk_sso_hws_prf_wdata(struct cnxk_sso_evdev *dev)
 	return wdata;
 }
 
+static void
+cnxk_sso_hws_reset(void *arg, void *ws)
+{
+	struct cnxk_sso_evdev *dev = arg;
+	struct cnxk_sso_hws_prf *ws_prf;
+	uint64_t pend_state;
+	uint8_t swtag_req;
+	uintptr_t base;
+	uint8_t hws_id;
+	union {
+		__uint128_t wdata;
+		uint64_t u64[2];
+	} gw;
+	uint8_t pend_tt;
+	bool is_pend;
+
+	ws_prf = ws;
+	base = ws_prf->base;
+	hws_id = ws_prf->hws_id;
+	swtag_req = ws_prf->swtag_req;
+
+	roc_sso_hws_gwc_invalidate(&dev->sso, &hws_id, 1);
+	plt_write64(0, base + SSOW_LF_GWS_OP_GWC_INVAL);
+	/* Wait till getwork/swtp/waitw/desched completes. */
+	is_pend = false;
+	/* Work in WQE0 is always consumed, unless its a SWTAG. */
+	pend_state = plt_read64(base + SSOW_LF_GWS_PENDSTATE);
+	if (pend_state & (BIT_ULL(63) | BIT_ULL(62) | BIT_ULL(54)) || swtag_req)
+		is_pend = true;
+
+	do {
+		pend_state = plt_read64(base + SSOW_LF_GWS_PENDSTATE);
+	} while (pend_state &
+		 (BIT_ULL(63) | BIT_ULL(62) | BIT_ULL(58) | BIT_ULL(56) | BIT_ULL(54)));
+	pend_tt = CNXK_TT_FROM_TAG(plt_read64(base + SSOW_LF_GWS_WQE0));
+	if (is_pend && pend_tt != SSO_TT_EMPTY) { /* Work was pending */
+		if (pend_tt == SSO_TT_ATOMIC || pend_tt == SSO_TT_ORDERED)
+			cnxk_sso_hws_swtag_untag(base + SSOW_LF_GWS_OP_SWTAG_UNTAG);
+		plt_write64(0, base + SSOW_LF_GWS_OP_DESCHED);
+	} else if (pend_tt != SSO_TT_EMPTY) {
+		plt_write64(0, base + SSOW_LF_GWS_OP_SWTAG_FLUSH);
+	}
+
+	/* Wait for desched to complete. */
+	do {
+		pend_state = plt_read64(base + SSOW_LF_GWS_PENDSTATE);
+	} while (pend_state & (BIT_ULL(58) | BIT_ULL(56)));
+
+	switch (dev->gw_mode) {
+	case CNXK_GW_MODE_PREF:
+	case CNXK_GW_MODE_PREF_WFE:
+		while (plt_read64(base + SSOW_LF_GWS_PRF_WQE0) & BIT_ULL(63))
+			;
+		break;
+	case CNXK_GW_MODE_NONE:
+	default:
+		break;
+	}
+
+	if (CNXK_TT_FROM_TAG(plt_read64(base + SSOW_LF_GWS_PRF_WQE0)) != SSO_TT_EMPTY) {
+		plt_write64(BIT_ULL(16) | 1, base + SSOW_LF_GWS_OP_GET_WORK0);
+		do {
+			roc_load_pair(gw.u64[0], gw.u64[1], base + SSOW_LF_GWS_WQE0);
+		} while (gw.u64[0] & BIT_ULL(63));
+		pend_tt = CNXK_TT_FROM_TAG(plt_read64(base + SSOW_LF_GWS_WQE0));
+		if (pend_tt != SSO_TT_EMPTY) { /* Work was pending */
+			if (pend_tt == SSO_TT_ATOMIC || pend_tt == SSO_TT_ORDERED)
+				cnxk_sso_hws_swtag_untag(base + SSOW_LF_GWS_OP_SWTAG_UNTAG);
+			plt_write64(0, base + SSOW_LF_GWS_OP_DESCHED);
+		}
+	}
+
+	plt_write64(0, base + SSOW_LF_GWS_OP_GWC_INVAL);
+	roc_sso_hws_gwc_invalidate(&dev->sso, &hws_id, 1);
+	rte_mb();
+}
+
+static void
+cnxk_sso_configure_queue_stash(struct rte_eventdev *event_dev)
+{
+	struct cnxk_sso_evdev *dev = cnxk_sso_pmd_priv(event_dev);
+	struct roc_sso_hwgrp_stash stash[dev->stash_cnt];
+	int i, rc;
+
+	plt_sso_dbg();
+	for (i = 0; i < dev->stash_cnt; i++) {
+		stash[i].hwgrp = dev->stash_parse_data[i].queue;
+		stash[i].stash_offset = dev->stash_parse_data[i].stash_offset;
+		stash[i].stash_count = dev->stash_parse_data[i].stash_length;
+	}
+	rc = roc_sso_hwgrp_stash_config(&dev->sso, stash, dev->stash_cnt);
+	if (rc < 0)
+		plt_warn("failed to configure HWGRP WQE stashing rc = %d", rc);
+}
+
 #endif /* __CNXK_COMMON_H__ */
-- 
2.25.1
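
The shared cnxk_sso_hws_reset() above views the per-port HWS object through
struct cnxk_sso_hws_prf, so it only works if cn10k_sso_hws and cn20k_sso_hws
keep those leading members at the same offsets. A compile-time guard for that
assumption could look like the sketch below (header names and the cn10k/cn20k
struct layouts are assumed here, not taken from this patch; only members the
reset path actually touches are checked):

	/* Sketch of a layout guard; not part of this patch. */
	#include <stddef.h>
	#include <rte_common.h>
	#include "cnxk_common.h"   /* struct cnxk_sso_hws_prf */
	#include "cn10k_worker.h"  /* struct cn10k_sso_hws (assumed header) */

	#define CNXK_HWS_PRF_CHECK(type, member)                                   \
		RTE_BUILD_BUG_ON(offsetof(type, member) !=                         \
				 offsetof(struct cnxk_sso_hws_prf, member))

	static inline void
	cnxk_sso_hws_prf_layout_verify(void)
	{
		/* Fails the build if the prefix layout ever diverges. */
		CNXK_HWS_PRF_CHECK(struct cn10k_sso_hws, base);
		CNXK_HWS_PRF_CHECK(struct cn10k_sso_hws, swtag_req);
		CNXK_HWS_PRF_CHECK(struct cn10k_sso_hws, hws_id);
	}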