From: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
To: dev@dpdk.org
Cc: Venkat Duvvuru, Mike Baucom
Date: Wed, 15 Apr 2020 13:48:54 +0530
Message-Id: <1586938751-32808-18-git-send-email-venkatkumar.duvvuru@broadcom.com>
X-Mailer: git-send-email 2.7.4
In-Reply-To: <1586938751-32808-1-git-send-email-venkatkumar.duvvuru@broadcom.com>
References: <1586852011-37536-1-git-send-email-venkatkumar.duvvuru@broadcom.com>
 <1586938751-32808-1-git-send-email-venkatkumar.duvvuru@broadcom.com>
Subject: [dpdk-dev] [PATCH v4 17/34] net/bnxt: add support for ULP session manager cleanup

A ULP session contains all the resources needed to support rte flow
offloads. A session is initialized as part of rte_eth_device start.
A DPDK application can have multiple interfaces, which means
rte_eth_device start is called for each of these devices. The ULP
session manager ensures that a ULP session is initialized only once;
it also initializes the MARK database, the EEM table and the flow
database, and maintains a list of all opened ULP sessions.
This patch adds support for cleaning up the resources initialized for
ULP sessions.

Signed-off-by: Venkat Duvvuru
Signed-off-by: Mike Baucom
Reviewed-by: Lance Richardson
Reviewed-by: Ajit Kumar Khaparde
---
(An illustrative, standalone sketch of the reference-counted session
teardown follows the patch, for review context only.)

 drivers/net/bnxt/bnxt_ethdev.c         |   3 +
 drivers/net/bnxt/tf_ulp/bnxt_ulp.c     | 167 ++++++++++++++++++++++++++++++++-
 drivers/net/bnxt/tf_ulp/bnxt_ulp.h     |  10 ++
 drivers/net/bnxt/tf_ulp/ulp_mark_mgr.c |  25 +++++
 drivers/net/bnxt/tf_ulp/ulp_mark_mgr.h |   8 ++
 5 files changed, 212 insertions(+), 1 deletion(-)

diff --git a/drivers/net/bnxt/bnxt_ethdev.c b/drivers/net/bnxt/bnxt_ethdev.c
index 1703ce3..2f08921 100644
--- a/drivers/net/bnxt/bnxt_ethdev.c
+++ b/drivers/net/bnxt/bnxt_ethdev.c
@@ -951,6 +951,9 @@ static void bnxt_dev_stop_op(struct rte_eth_dev *eth_dev)
 	struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
 	struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
 
+	if (bp->truflow)
+		bnxt_ulp_deinit(bp);
+
 	eth_dev->data->dev_started = 0;
 	/* Prevent crashes when queues are still in use */
 	eth_dev->rx_pkt_burst = &bnxt_dummy_recv_pkts;
diff --git a/drivers/net/bnxt/tf_ulp/bnxt_ulp.c b/drivers/net/bnxt/tf_ulp/bnxt_ulp.c
index 7afc6bf..3795c6d 100644
--- a/drivers/net/bnxt/tf_ulp/bnxt_ulp.c
+++ b/drivers/net/bnxt/tf_ulp/bnxt_ulp.c
@@ -28,6 +28,27 @@ STAILQ_HEAD(, bnxt_ulp_session_state) bnxt_ulp_session_list =
 static pthread_mutex_t bnxt_ulp_global_mutex = PTHREAD_MUTEX_INITIALIZER;
 
 /*
+ * Allow the deletion of context only for the bnxt device that
+ * created the session
+ * TBD - The implementation of the function should change to
+ * using the reference count once tf_session_attach functionality
+ * is fixed.
+ */
+bool
+ulp_ctx_deinit_allowed(void *ptr)
+{
+	struct bnxt *bp = (struct bnxt *)ptr;
+
+	if (!bp)
+		return 0;
+
+	if (&bp->tfp == bp->ulp_ctx.g_tfp)
+		return 1;
+
+	return 0;
+}
+
+/*
  * Initialize an ULP session.
  * An ULP session will contain all the resources needed to support rte flow
  * offloads. A session is initialized as part of rte_eth_device start.
@@ -67,6 +88,22 @@ ulp_ctx_session_open(struct bnxt *bp,
 	return rc;
 }
 
+/*
+ * Close the ULP session.
+ * It takes the ulp context pointer.
+ */
+static void
+ulp_ctx_session_close(struct bnxt *bp,
+		      struct bnxt_ulp_session_state *session)
+{
+	/* close the session in the hardware */
+	if (session->session_opened)
+		tf_close_session(&bp->tfp);
+	session->session_opened = 0;
+	session->g_tfp = NULL;
+	bp->ulp_ctx.g_tfp = NULL;
+}
+
 static void
 bnxt_init_tbl_scope_parms(struct bnxt *bp,
 			  struct tf_alloc_tbl_scope_parms *params)
@@ -138,6 +175,41 @@ ulp_eem_tbl_scope_init(struct bnxt *bp)
 	return 0;
 }
 
+/* Free Extended Exact Match host memory */
+static int32_t
+ulp_eem_tbl_scope_deinit(struct bnxt *bp, struct bnxt_ulp_context *ulp_ctx)
+{
+	struct tf_free_tbl_scope_parms params = {0};
+	struct tf *tfp;
+	int32_t rc = 0;
+
+	if (!ulp_ctx || !ulp_ctx->cfg_data)
+		return -EINVAL;
+
+	/* Free the resources for the last device */
+	if (!ulp_ctx_deinit_allowed(bp))
+		return rc;
+
+	tfp = bnxt_ulp_cntxt_tfp_get(ulp_ctx);
+	if (!tfp) {
+		BNXT_TF_DBG(ERR, "Failed to get the truflow pointer\n");
+		return -EINVAL;
+	}
+
+	rc = bnxt_ulp_cntxt_tbl_scope_id_get(ulp_ctx, &params.tbl_scope_id);
+	if (rc) {
+		BNXT_TF_DBG(ERR, "Failed to get the table scope id\n");
+		return -EINVAL;
+	}
+
+	rc = tf_free_tbl_scope(tfp, &params);
+	if (rc) {
+		BNXT_TF_DBG(ERR, "Unable to free table scope\n");
+		return -EINVAL;
+	}
+	return rc;
+}
+
 /* The function to free and deinit the ulp context data.
  */
 static int32_t
 ulp_ctx_deinit(struct bnxt *bp,
@@ -148,6 +220,9 @@ ulp_ctx_deinit(struct bnxt *bp,
 		return -EINVAL;
 	}
 
+	/* close the tf session */
+	ulp_ctx_session_close(bp, session);
+
 	/* Free the contents */
 	if (session->cfg_data) {
 		rte_free(session->cfg_data);
@@ -211,6 +286,36 @@ ulp_ctx_attach(struct bnxt_ulp_context *ulp_ctx,
 	return 0;
 }
 
+static int32_t
+ulp_ctx_detach(struct bnxt *bp,
+	       struct bnxt_ulp_session_state *session)
+{
+	struct bnxt_ulp_context *ulp_ctx;
+
+	if (!bp || !session) {
+		BNXT_TF_DBG(ERR, "Invalid Arguments\n");
+		return -EINVAL;
+	}
+	ulp_ctx = &bp->ulp_ctx;
+
+	if (!ulp_ctx->cfg_data)
+		return 0;
+
+	/* TBD call TF_session_detach */
+
+	/* Decrement the ulp context data reference count. */
+	if (ulp_ctx->cfg_data->ref_cnt >= 1) {
+		ulp_ctx->cfg_data->ref_cnt--;
+		if (ulp_ctx_deinit_allowed(bp))
+			ulp_ctx_deinit(bp, session);
+		ulp_ctx->cfg_data = NULL;
+		ulp_ctx->g_tfp = NULL;
+		return 0;
+	}
+	BNXT_TF_DBG(ERR, "context detach on invalid data\n");
+	return 0;
+}
+
 /*
  * Initialize the state of an ULP session.
  * If the state of an ULP session is not initialized, set it's state to
@@ -297,6 +402,26 @@ ulp_session_init(struct bnxt *bp,
 }
 
 /*
+ * When a device is closed, remove its associated session from the global
+ * session list.
+ */
+static void
+ulp_session_deinit(struct bnxt_ulp_session_state *session)
+{
+	if (!session)
+		return;
+
+	if (!session->cfg_data) {
+		pthread_mutex_lock(&bnxt_ulp_global_mutex);
+		STAILQ_REMOVE(&bnxt_ulp_session_list, session,
+			      bnxt_ulp_session_state, next);
+		pthread_mutex_destroy(&session->bnxt_ulp_mutex);
+		rte_free(session);
+		pthread_mutex_unlock(&bnxt_ulp_global_mutex);
+	}
+}
+
+/*
  * When a port is initialized by dpdk. This functions is called
  * and this function initializes the ULP context and rest of the
  * infrastructure associated with it.
@@ -363,12 +488,52 @@ bnxt_ulp_init(struct bnxt *bp)
 	return rc;
 
 jump_to_error:
+	bnxt_ulp_deinit(bp);
 	return -ENOMEM;
 }
 
 /* Below are the access functions to access internal data of ulp context. */
 
-/* Function to set the Mark DB into the context. */
+/*
+ * When a port is deinitialized by dpdk, this function is called
+ * and it clears the ULP context and the rest of the
+ * infrastructure associated with it.
+ */
+void
+bnxt_ulp_deinit(struct bnxt *bp)
+{
+	struct bnxt_ulp_session_state *session;
+	struct rte_pci_device *pci_dev;
+	struct rte_pci_addr *pci_addr;
+
+	/* Get the session first */
+	pci_dev = RTE_DEV_TO_PCI(bp->eth_dev->device);
+	pci_addr = &pci_dev->addr;
+	pthread_mutex_lock(&bnxt_ulp_global_mutex);
+	session = ulp_get_session(pci_addr);
+	pthread_mutex_unlock(&bnxt_ulp_global_mutex);
+
+	/* session not found then just exit */
+	if (!session)
+		return;
+
+	/* cleanup the eem table scope */
+	ulp_eem_tbl_scope_deinit(bp, &bp->ulp_ctx);
+
+	/* cleanup the flow database */
+	ulp_flow_db_deinit(&bp->ulp_ctx);
+
+	/* Delete the Mark database */
+	ulp_mark_db_deinit(&bp->ulp_ctx);
+
+	/* Delete the ulp context and tf session */
+	ulp_ctx_detach(bp, session);
+
+	/* Finally delete the bnxt session */
+	ulp_session_deinit(session);
+}
+
+/* Function to set the Mark DB into the context */
 int32_t
 bnxt_ulp_cntxt_ptr2_mark_db_set(struct bnxt_ulp_context *ulp_ctx,
 				struct bnxt_ulp_mark_tbl *mark_tbl)
diff --git a/drivers/net/bnxt/tf_ulp/bnxt_ulp.h b/drivers/net/bnxt/tf_ulp/bnxt_ulp.h
index d88225f..b3e9e96 100644
--- a/drivers/net/bnxt/tf_ulp/bnxt_ulp.h
+++ b/drivers/net/bnxt/tf_ulp/bnxt_ulp.h
@@ -47,6 +47,16 @@ struct rte_tf_flow {
 	uint32_t	flow_id;
 };
 
+/*
+ * Allow the deletion of context only for the bnxt device that
+ * created the session
+ * TBD - The implementation of the function should change to
+ * using the reference count once tf_session_attach functionality
+ * is fixed.
+ */
+bool
+ulp_ctx_deinit_allowed(void *bp);
+
 /* Function to set the device id of the hardware. */
 int32_t
 bnxt_ulp_cntxt_dev_id_set(struct bnxt_ulp_context *ulp_ctx, uint32_t dev_id);
diff --git a/drivers/net/bnxt/tf_ulp/ulp_mark_mgr.c b/drivers/net/bnxt/tf_ulp/ulp_mark_mgr.c
index 3f28a73..9e4307e 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_mark_mgr.c
+++ b/drivers/net/bnxt/tf_ulp/ulp_mark_mgr.c
@@ -92,3 +92,28 @@ ulp_mark_db_init(struct bnxt_ulp_context *ctxt)
 
 	return -ENOMEM;
 }
+
+/*
+ * Release all resources in the Mark Manager for this ulp context
+ *
+ * ctxt [in] The ulp context for the mark manager
+ *
+ */
+int32_t
+ulp_mark_db_deinit(struct bnxt_ulp_context *ctxt)
+{
+	struct bnxt_ulp_mark_tbl *mtbl;
+
+	mtbl = bnxt_ulp_cntxt_ptr2_mark_db_get(ctxt);
+
+	if (mtbl) {
+		rte_free(mtbl->gfid_tbl);
+		rte_free(mtbl->lfid_tbl);
+		rte_free(mtbl);
+
+		/* Safe to ignore on deinit */
+		(void)bnxt_ulp_cntxt_ptr2_mark_db_set(ctxt, NULL);
+	}
+
+	return 0;
+}
diff --git a/drivers/net/bnxt/tf_ulp/ulp_mark_mgr.h b/drivers/net/bnxt/tf_ulp/ulp_mark_mgr.h
index b175abd..5948683 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_mark_mgr.h
+++ b/drivers/net/bnxt/tf_ulp/ulp_mark_mgr.h
@@ -46,4 +46,12 @@ struct bnxt_ulp_mark_tbl {
 int32_t
 ulp_mark_db_init(struct bnxt_ulp_context *ctxt);
 
+/*
+ * Release all resources in the Mark Manager for this ulp context
+ *
+ * ctxt [in] The ulp context for the mark manager
+ */
+int32_t
+ulp_mark_db_deinit(struct bnxt_ulp_context *ctxt);
+
 #endif /* _ULP_MARK_MGR_H_ */
-- 
2.7.4
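
For reviewers who want the attach/detach lifecycle at a glance, below is a
minimal, standalone sketch of the reference-counted teardown pattern that
ulp_ctx_detach() and ulp_session_deinit() implement: shared session state is
created by the first port that attaches and released only when the last
attached port detaches. This is an illustration only, not driver code; all
ulp_model_* names are hypothetical, and the cfg_data allocation merely stands
in for the flow/mark/EEM state the real driver frees.

/*
 * Standalone model of reference-counted session teardown.
 * Illustration only: ulp_model_* does not exist in the driver.
 */
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

struct ulp_model_session {
	pthread_mutex_t lock;
	unsigned int ref_cnt;	/* number of attached ports */
	void *cfg_data;		/* stands in for flow/mark/EEM state */
};

/* First attach creates the shared session; later attaches reuse it. */
static struct ulp_model_session *
ulp_model_attach(struct ulp_model_session *s)
{
	if (!s) {
		s = calloc(1, sizeof(*s));
		if (!s)
			return NULL;
		pthread_mutex_init(&s->lock, NULL);
		s->cfg_data = calloc(1, 64);
	}
	pthread_mutex_lock(&s->lock);
	s->ref_cnt++;
	pthread_mutex_unlock(&s->lock);
	return s;
}

/* Each detach drops one reference; the last one frees everything. */
static void
ulp_model_detach(struct ulp_model_session *s)
{
	unsigned int left;

	if (!s)
		return;
	pthread_mutex_lock(&s->lock);
	left = --s->ref_cnt;
	pthread_mutex_unlock(&s->lock);
	if (left)
		return;

	/* Last port: free shared state, then the session itself,
	 * mirroring ulp_ctx_detach() followed by ulp_session_deinit().
	 */
	free(s->cfg_data);
	pthread_mutex_destroy(&s->lock);
	free(s);
}

int main(void)
{
	struct ulp_model_session *s = NULL;

	s = ulp_model_attach(s);	/* port 0 creates the session */
	s = ulp_model_attach(s);	/* port 1 reuses it */
	ulp_model_detach(s);		/* session survives, one port left */
	ulp_model_detach(s);		/* last detach: full teardown */
	printf("session torn down on last detach\n");
	return 0;
}

The driver applies the same ordering on the last detach: the per-context
resources (EEM table scope, flow database, mark database) are released before
the context is detached and the session is removed from the global list.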