From: Pavan Nikhilesh
To: Pavan Nikhilesh, Shijith Thotton
Cc: dev@dpdk.org, stable@dpdk.org
Subject: [PATCH] event/cnxk: fix stale data in workslots
Date: Mon, 25 Jul 2022 14:05:45 +0530
Message-ID: <20220725083545.2271-1-pbhagavatula@marvell.com>
X-Mailer: git-send-email 2.25.1

From: Pavan Nikhilesh

Fix stale XAQ depth check pointers in workslot memory after XAQ pool
resize.
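
The fast-path depth checks read the XAQ limit and the flow-control
counter address that each workslot caches from the device private
data. When the XAQ pool is resized (for example when a timer ring or a
crypto adapter queue pair is added and the XAE count is reconfigured),
the device-level values change but the per-workslot copies do not, so
the cached pointers go stale. The sketch below only illustrates the
refresh that the set_priv_mem callbacks perform; the struct and
function names are simplified stand-ins, not the real cn9k/cn10k
workslot and device definitions:

    #include <stdint.h>

    struct hws {                    /* stand-in for a per-port workslot */
            uint64_t *fc_mem;       /* cached XAQ flow-control counter */
            int64_t xaq_lmt;        /* cached XAQ depth limit */
    };

    struct evdev {                  /* stand-in for the device private data */
            uint64_t fc_iova;       /* changes when the XAQ pool is resized */
            int64_t xaq_lmt;
            struct hws *ports[256];
            uint8_t nb_event_ports;
    };

    /* Re-read the device-level values into every workslot so the depth
     * checks stop using the pre-resize pointers.
     */
    static void refresh_workslots(struct evdev *dev)
    {
            for (uint8_t i = 0; i < dev->nb_event_ports; i++) {
                    dev->ports[i]->xaq_lmt = dev->xaq_lmt;
                    dev->ports[i]->fc_mem = (uint64_t *)dev->fc_iova;
            }
    }
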
Fixes: bd64a963d2fc ("event/cnxk: use common XAQ pool functions")
Cc: stable@dpdk.org

Signed-off-by: Pavan Nikhilesh
---
 drivers/event/cnxk/cn10k_eventdev.c | 21 ++++++++++++++++---
 drivers/event/cnxk/cn9k_eventdev.c  | 31 +++++++++++++++++++++++------
 drivers/event/cnxk/cnxk_tim_evdev.c |  6 +++++-
 drivers/event/cnxk/cnxk_tim_evdev.h |  6 +++++-
 4 files changed, 53 insertions(+), 11 deletions(-)

diff --git a/drivers/event/cnxk/cn10k_eventdev.c b/drivers/event/cnxk/cn10k_eventdev.c
index db8dc2a9ce..ea6dadd7b7 100644
--- a/drivers/event/cnxk/cn10k_eventdev.c
+++ b/drivers/event/cnxk/cn10k_eventdev.c
@@ -701,8 +701,11 @@ cn10k_sso_set_priv_mem(const struct rte_eventdev *event_dev, void *lookup_mem, u
 	for (i = 0; i < dev->nb_event_ports; i++) {
 		struct cn10k_sso_hws *ws = event_dev->data->ports[i];
 
-		ws->lookup_mem = lookup_mem;
+		ws->xaq_lmt = dev->xaq_lmt;
+		ws->fc_mem = (uint64_t *)dev->fc_iova;
 		ws->tstamp = dev->tstamp;
+		if (lookup_mem)
+			ws->lookup_mem = lookup_mem;
 		if (meta_aura)
 			ws->meta_aura = meta_aura;
 	}
@@ -894,6 +897,7 @@ cn10k_crypto_adapter_qp_add(const struct rte_eventdev *event_dev,
 			    const struct rte_event *event)
 {
 	struct cnxk_sso_evdev *dev = cnxk_sso_pmd_priv(event_dev);
+	int ret;
 
 	RTE_SET_USED(event);
 
@@ -903,7 +907,10 @@ cn10k_crypto_adapter_qp_add(const struct rte_eventdev *event_dev,
 	dev->is_ca_internal_port = 1;
 	cn10k_sso_fp_fns_set((struct rte_eventdev *)(uintptr_t)event_dev);
 
-	return cnxk_crypto_adapter_qp_add(event_dev, cdev, queue_pair_id);
+	ret = cnxk_crypto_adapter_qp_add(event_dev, cdev, queue_pair_id);
+	cn10k_sso_set_priv_mem(event_dev, NULL, 0);
+
+	return ret;
 }
 
 static int
@@ -917,6 +924,14 @@ cn10k_crypto_adapter_qp_del(const struct rte_eventdev *event_dev,
 	return cnxk_crypto_adapter_qp_del(cdev, queue_pair_id);
 }
 
+static int
+cn10k_tim_caps_get(const struct rte_eventdev *evdev, uint64_t flags,
+		   uint32_t *caps, const struct event_timer_adapter_ops **ops)
+{
+	return cnxk_tim_caps_get(evdev, flags, caps, ops,
+				 cn10k_sso_set_priv_mem);
+}
+
 static struct eventdev_ops cn10k_sso_dev_ops = {
 	.dev_infos_get = cn10k_sso_info_get,
 	.dev_configure = cn10k_sso_dev_configure,
@@ -950,7 +965,7 @@ static struct eventdev_ops cn10k_sso_dev_ops = {
 	.eth_tx_adapter_stop = cnxk_sso_tx_adapter_stop,
 	.eth_tx_adapter_free = cnxk_sso_tx_adapter_free,
 
-	.timer_adapter_caps_get = cnxk_tim_caps_get,
+	.timer_adapter_caps_get = cn10k_tim_caps_get,
 
 	.crypto_adapter_caps_get = cn10k_crypto_adapter_caps_get,
 	.crypto_adapter_queue_pair_add = cn10k_crypto_adapter_qp_add,
diff --git a/drivers/event/cnxk/cn9k_eventdev.c b/drivers/event/cnxk/cn9k_eventdev.c
index 992a2a555c..5d527c3be8 100644
--- a/drivers/event/cnxk/cn9k_eventdev.c
+++ b/drivers/event/cnxk/cn9k_eventdev.c
@@ -942,7 +942,8 @@ cn9k_sso_rx_adapter_caps_get(const struct rte_eventdev *event_dev,
 }
 
 static void
-cn9k_sso_set_priv_mem(const struct rte_eventdev *event_dev, void *lookup_mem)
+cn9k_sso_set_priv_mem(const struct rte_eventdev *event_dev, void *lookup_mem,
+		      uint64_t aura __rte_unused)
 {
 	struct cnxk_sso_evdev *dev = cnxk_sso_pmd_priv(event_dev);
 	int i;
@@ -951,12 +952,18 @@ cn9k_sso_set_priv_mem(const struct rte_eventdev *event_dev, void *lookup_mem)
 		if (dev->dual_ws) {
 			struct cn9k_sso_hws_dual *dws =
 				event_dev->data->ports[i];
-			dws->lookup_mem = lookup_mem;
+			dws->xaq_lmt = dev->xaq_lmt;
+			dws->fc_mem = (uint64_t *)dev->fc_iova;
 			dws->tstamp = dev->tstamp;
+			if (lookup_mem)
+				dws->lookup_mem = lookup_mem;
 		} else {
 			struct cn9k_sso_hws *ws = event_dev->data->ports[i];
-			ws->lookup_mem = lookup_mem;
+			ws->xaq_lmt = dev->xaq_lmt;
+			ws->fc_mem = (uint64_t *)dev->fc_iova;
 			ws->tstamp = dev->tstamp;
+			if (lookup_mem)
+				ws->lookup_mem = lookup_mem;
 		}
 	}
 }
@@ -982,7 +989,7 @@ cn9k_sso_rx_adapter_queue_add(
 
 	rxq = eth_dev->data->rx_queues[0];
 	lookup_mem = rxq->lookup_mem;
-	cn9k_sso_set_priv_mem(event_dev, lookup_mem);
+	cn9k_sso_set_priv_mem(event_dev, lookup_mem, 0);
 	cn9k_sso_fp_fns_set((struct rte_eventdev *)(uintptr_t)event_dev);
 
 	return 0;
@@ -1121,6 +1128,7 @@ cn9k_crypto_adapter_qp_add(const struct rte_eventdev *event_dev,
 			   int32_t queue_pair_id, const struct rte_event *event)
 {
 	struct cnxk_sso_evdev *dev = cnxk_sso_pmd_priv(event_dev);
+	int ret;
 
 	RTE_SET_USED(event);
 
@@ -1130,7 +1138,10 @@ cn9k_crypto_adapter_qp_add(const struct rte_eventdev *event_dev,
 	dev->is_ca_internal_port = 1;
 	cn9k_sso_fp_fns_set((struct rte_eventdev *)(uintptr_t)event_dev);
 
-	return cnxk_crypto_adapter_qp_add(event_dev, cdev, queue_pair_id);
+	ret = cnxk_crypto_adapter_qp_add(event_dev, cdev, queue_pair_id);
+	cn9k_sso_set_priv_mem(event_dev, NULL, 0);
+
+	return ret;
 }
 
 static int
@@ -1144,6 +1155,14 @@ cn9k_crypto_adapter_qp_del(const struct rte_eventdev *event_dev,
 	return cnxk_crypto_adapter_qp_del(cdev, queue_pair_id);
 }
 
+static int
+cn9k_tim_caps_get(const struct rte_eventdev *evdev, uint64_t flags,
+		  uint32_t *caps, const struct event_timer_adapter_ops **ops)
+{
+	return cnxk_tim_caps_get(evdev, flags, caps, ops,
+				 cn9k_sso_set_priv_mem);
+}
+
 static struct eventdev_ops cn9k_sso_dev_ops = {
 	.dev_infos_get = cn9k_sso_info_get,
 	.dev_configure = cn9k_sso_dev_configure,
@@ -1175,7 +1194,7 @@ static struct eventdev_ops cn9k_sso_dev_ops = {
 	.eth_tx_adapter_stop = cnxk_sso_tx_adapter_stop,
 	.eth_tx_adapter_free = cnxk_sso_tx_adapter_free,
 
-	.timer_adapter_caps_get = cnxk_tim_caps_get,
+	.timer_adapter_caps_get = cn9k_tim_caps_get,
 
 	.crypto_adapter_caps_get = cn9k_crypto_adapter_caps_get,
 	.crypto_adapter_queue_pair_add = cn9k_crypto_adapter_qp_add,
diff --git a/drivers/event/cnxk/cnxk_tim_evdev.c b/drivers/event/cnxk/cnxk_tim_evdev.c
index f8a536e71a..5dd79cbd47 100644
--- a/drivers/event/cnxk/cnxk_tim_evdev.c
+++ b/drivers/event/cnxk/cnxk_tim_evdev.c
@@ -8,6 +8,7 @@
 #include "cnxk_tim_evdev.h"
 
 static struct event_timer_adapter_ops cnxk_tim_ops;
+static cnxk_sso_set_priv_mem_t sso_set_priv_mem_fn;
 
 static int
 cnxk_tim_chnk_pool_create(struct cnxk_tim_ring *tim_ring,
@@ -265,6 +266,7 @@ cnxk_tim_ring_create(struct rte_event_timer_adapter *adptr)
 	cnxk_sso_updt_xae_cnt(cnxk_sso_pmd_priv(dev->event_dev), tim_ring,
 			      RTE_EVENT_TYPE_TIMER);
 	cnxk_sso_xae_reconfigure(dev->event_dev);
+	sso_set_priv_mem_fn(dev->event_dev, NULL, 0);
 
 	plt_tim_dbg(
 		"Total memory used %" PRIu64 "MB\n",
@@ -375,7 +377,8 @@ cnxk_tim_stats_reset(const struct rte_event_timer_adapter *adapter)
 
 int
 cnxk_tim_caps_get(const struct rte_eventdev *evdev, uint64_t flags,
-		  uint32_t *caps, const struct event_timer_adapter_ops **ops)
+		  uint32_t *caps, const struct event_timer_adapter_ops **ops,
+		  cnxk_sso_set_priv_mem_t priv_mem_fn)
 {
 	struct cnxk_tim_evdev *dev = cnxk_tim_priv_get();
 
@@ -389,6 +392,7 @@ cnxk_tim_caps_get(const struct rte_eventdev *evdev, uint64_t flags,
 	cnxk_tim_ops.start = cnxk_tim_ring_start;
 	cnxk_tim_ops.stop = cnxk_tim_ring_stop;
 	cnxk_tim_ops.get_info = cnxk_tim_ring_info_get;
+	sso_set_priv_mem_fn = priv_mem_fn;
 
 	if (dev->enable_stats) {
 		cnxk_tim_ops.stats_get = cnxk_tim_stats_get;
diff --git a/drivers/event/cnxk/cnxk_tim_evdev.h b/drivers/event/cnxk/cnxk_tim_evdev.h
index 0fda9f4f13..0c192346c7 100644
--- a/drivers/event/cnxk/cnxk_tim_evdev.h
+++ b/drivers/event/cnxk/cnxk_tim_evdev.h
@@ -78,6 +78,9 @@
 #define TIM_BUCKET_SEMA_WLOCK \
 	(TIM_BUCKET_CHUNK_REMAIN | (1ull << TIM_BUCKET_W1_S_LOCK))
 
+typedef void (*cnxk_sso_set_priv_mem_t)(const struct rte_eventdev *event_dev,
+					void *lookup_mem, uint64_t aura);
+
 struct cnxk_tim_ctl {
 	uint16_t ring;
 	uint16_t chunk_slots;
@@ -317,7 +320,8 @@ cnxk_tim_timer_cancel_burst(const struct rte_event_timer_adapter *adptr,
 
 int cnxk_tim_caps_get(const struct rte_eventdev *dev, uint64_t flags,
		      uint32_t *caps,
-		      const struct event_timer_adapter_ops **ops);
+		      const struct event_timer_adapter_ops **ops,
+		      cnxk_sso_set_priv_mem_t priv_mem_fn);
 
 void cnxk_tim_init(struct roc_sso *sso);
 void cnxk_tim_fini(void);
-- 
2.35.1
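
The common timer code in cnxk_tim_evdev.c is shared by cn9k and cn10k
and cannot touch the SoC-specific workslot layout itself, which is why
the patch passes a set_priv_mem callback into cnxk_tim_caps_get() and
invokes it after the XAQ pool is reconfigured. A rough sketch of that
registration pattern follows; refresh_cb_t, tim_caps_get and
tim_ring_create are hypothetical stand-ins for the roles played by
cnxk_sso_set_priv_mem_t, cnxk_tim_caps_get and cnxk_tim_ring_create,
not the real driver symbols:

    #include <stddef.h>
    #include <stdint.h>

    /* Signature of the workslot-refresh routine supplied by the SoC driver. */
    typedef void (*refresh_cb_t)(void *event_dev, void *lookup_mem,
                                 uint64_t aura);

    static refresh_cb_t refresh_cb;     /* set by the SoC-specific driver */

    /* Common code: remember the callback when the timer adapter
     * capabilities are queried.
     */
    static void tim_caps_get(refresh_cb_t cb)
    {
            refresh_cb = cb;
    }

    /* Common code: after growing the XAQ pool for a new timer ring, ask
     * the SoC driver to re-sync every workslot with the new pool state.
     */
    static void tim_ring_create(void *event_dev)
    {
            /* ... XAQ pool resize / XAE reconfigure would happen here ... */
            if (refresh_cb != NULL)
                    refresh_cb(event_dev, NULL, 0);
    }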