From mboxrd@z Thu Jan 1 00:00:00 1970
From:
To: , Pavan Nikhilesh
CC: ,
Date: Mon, 29 Jun 2020 07:03:26 +0530
Message-ID: <20200629013329.5297-1-pbhagavatula@marvell.com>
X-Mailer: git-send-email 2.17.1
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain
Subject: [dpdk-stable] [dpdk-dev] [PATCH 1/3] event/octeontx2: fix device reconfigure

From: Pavan Nikhilesh

When the event device is reconfigured, maintain the existing event queue
to event port links and the event port status instead of resetting them.

Fixes: cd24e70258bd ("event/octeontx2: add device configure function")
Cc: stable@dpdk.org

Signed-off-by: Pavan Nikhilesh
---
 drivers/event/octeontx2/otx2_evdev.c | 60 +++++++++++++++++++++++-----
 1 file changed, 50 insertions(+), 10 deletions(-)

diff --git a/drivers/event/octeontx2/otx2_evdev.c b/drivers/event/octeontx2/otx2_evdev.c
index 630073de5..b8b57c388 100644
--- a/drivers/event/octeontx2/otx2_evdev.c
+++ b/drivers/event/octeontx2/otx2_evdev.c
@@ -725,6 +725,46 @@ sso_clr_links(const struct rte_eventdev *event_dev)
 	}
 }
 
+static void
+sso_restore_links(const struct rte_eventdev *event_dev)
+{
+	struct otx2_sso_evdev *dev = sso_pmd_priv(event_dev);
+	uint16_t *links_map;
+	int i, j;
+
+	for (i = 0; i < dev->nb_event_ports; i++) {
+		links_map = event_dev->data->links_map;
+		/* Point links_map to this port specific area */
+		links_map += (i * RTE_EVENT_MAX_QUEUES_PER_DEV);
+		if (dev->dual_ws) {
+			struct otx2_ssogws_dual *ws;
+
+			ws = event_dev->data->ports[i];
+			for (j = 0; j < dev->nb_event_queues; j++) {
+				if (links_map[j] == 0xdead)
+					continue;
+				sso_port_link_modify((struct otx2_ssogws *)
+						&ws->ws_state[0], j, true);
+				sso_port_link_modify((struct otx2_ssogws *)
+						&ws->ws_state[1], j, true);
+				sso_func_trace("Restoring port %d queue %d "
+					       "link", i, j);
+			}
+		} else {
+			struct otx2_ssogws *ws;
+
+			ws = event_dev->data->ports[i];
+			for (j = 0; j < dev->nb_event_queues; j++) {
+				if (links_map[j] == 0xdead)
+					continue;
+				sso_port_link_modify(ws, j, true);
+				sso_func_trace("Restoring port %d queue %d "
+					       "link", i, j);
+			}
+		}
+	}
+}
+
 static void
 sso_set_port_ops(struct otx2_ssogws *ws, uintptr_t base)
 {
@@ -765,18 +805,15 @@ sso_configure_dual_ports(const struct rte_eventdev *event_dev)
 		struct otx2_ssogws_dual *ws;
 		uintptr_t base;
 
-		/* Free memory prior to re-allocation if needed */
 		if (event_dev->data->ports[i] != NULL) {
 			ws = event_dev->data->ports[i];
-			rte_free(ws);
-			ws = NULL;
-		}
-
-		/* Allocate event port memory */
-		ws = rte_zmalloc_socket("otx2_sso_ws",
+		} else {
+			/* Allocate event port memory */
+			ws = rte_zmalloc_socket("otx2_sso_ws",
 					sizeof(struct otx2_ssogws_dual),
 					RTE_CACHE_LINE_SIZE,
 					event_dev->data->socket_id);
+		}
 		if (ws == NULL) {
 			otx2_err("Failed to alloc memory for port=%d", i);
 			rc = -ENOMEM;
@@ -1061,8 +1098,11 @@ otx2_sso_configure(const struct rte_eventdev *event_dev)
 		return -EINVAL;
 	}
 
-	if (dev->configured)
+	if (dev->configured) {
 		sso_unregister_irqs(event_dev);
+		/* Clear any prior port-queue mapping. */
+		sso_clr_links(event_dev);
+	}
 
 	if (dev->nb_event_queues) {
 		/* Finit any previous queues. */
@@ -1097,8 +1137,8 @@ otx2_sso_configure(const struct rte_eventdev *event_dev)
 		goto teardown_hwggrp;
 	}
 
-	/* Clear any prior port-queue mapping. */
-	sso_clr_links(event_dev);
+	/* Restore any prior port-queue mapping. */
+	sso_restore_links(event_dev);
 
 	rc = sso_ggrp_alloc_xaq(dev);
 	if (rc < 0) {
 		otx2_err("Failed to alloc xaq to ggrp %d", rc);
-- 
2.17.1