From mboxrd@z Thu Jan 1 00:00:00 1970
From: Michael Baum
Cc: Matan Azrad, Raslan Darawsheh, Viacheslav Ovsiienko
Date: Mon, 28 Jun 2021 22:23:45 +0300
Message-ID: <20210628192347.1825713-1-michaelba@nvidia.com>
X-Mailer: git-send-email 2.25.1
MIME-Version: 1.0
Content-Type: text/plain
Content-Transfer-Encoding: 8bit
Subject: [dpdk-stable] [PATCH 1/3] regex/mlx5: fix memory region unregistration
List-Id: patches for DPDK stable branches

The issue can cause illegal physical address access when huge-page A is
released and huge-page B is allocated on the same virtual address. The
old MR can be matched using the virtual address of huge-page B, but the
HW will access the physical address of huge-page A, which is no longer
part of the DPDK process.

Register a driver callback for memory events in order to free all the
MRs of memory that is going to be freed from the DPDK process.

Fixes: cda883bbb655 ("regex/mlx5: add dynamic memory registration to datapath")
Cc: stable@dpdk.org

Signed-off-by: Michael Baum
---
This series depends on this patch:
https://patchwork.dpdk.org/project/dpdk/patch/20210628150614.1769507-1-michaelba@nvidia.com/
Please do not apply this series before that patch is integrated.
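As an aside for reviewers (not part of the commit message), here is a
minimal sketch of the scenario the fix addresses, assuming heap memory
is grown and shrunk at runtime; the function name and allocation size
below are hypothetical and only illustrate when the EAL may release a
hugepage and fire RTE_MEM_EVENT_FREE:

#include <rte_malloc.h>

/* Hypothetical helper: the allocation may grow the heap with a fresh
 * hugepage, and freeing it may return that hugepage to the kernel.
 * Without the callback added by this patch, an MR cached for the freed
 * virtual range would still point at the old physical page.
 */
static void
toggle_hugepage(void)
{
	void *buf = rte_malloc(NULL, 4U << 20, 0);

	if (buf != NULL)
		rte_free(buf); /* may trigger RTE_MEM_EVENT_FREE */
}
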
 drivers/regex/mlx5/mlx5_regex.c          | 55 ++++++++++++++++++++++++
 drivers/regex/mlx5/mlx5_regex.h          |  2 +
 drivers/regex/mlx5/mlx5_regex_fastpath.c | 39 +++++++++++++++--
 3 files changed, 92 insertions(+), 4 deletions(-)

diff --git a/drivers/regex/mlx5/mlx5_regex.c b/drivers/regex/mlx5/mlx5_regex.c
index dcb2ced88e..0f12d94d7e 100644
--- a/drivers/regex/mlx5/mlx5_regex.c
+++ b/drivers/regex/mlx5/mlx5_regex.c
@@ -11,6 +11,7 @@
 #include
 #include
+#include
 #include
 #include
 #include
@@ -24,6 +25,10 @@
 
 int mlx5_regex_logtype;
 
+TAILQ_HEAD(regex_mem_event, mlx5_regex_priv) mlx5_mem_event_list =
+				TAILQ_HEAD_INITIALIZER(mlx5_mem_event_list);
+static pthread_mutex_t mem_event_list_lock = PTHREAD_MUTEX_INITIALIZER;
+
 const struct rte_regexdev_ops mlx5_regexdev_ops = {
 	.dev_info_get = mlx5_regex_info_get,
 	.dev_configure = mlx5_regex_configure,
@@ -82,6 +87,40 @@ mlx5_regex_get_name(char *name, struct rte_pci_device *pci_dev __rte_unused)
 		 pci_dev->addr.devid, pci_dev->addr.function);
 }
 
+/**
+ * Callback for memory event.
+ *
+ * @param event_type
+ *   Memory event type.
+ * @param addr
+ *   Address of memory.
+ * @param len
+ *   Size of memory.
+ */
+static void
+mlx5_regex_mr_mem_event_cb(enum rte_mem_event event_type, const void *addr,
+			   size_t len, void *arg __rte_unused)
+{
+	struct mlx5_regex_priv *priv;
+
+	/* Must be called from the primary process. */
+	MLX5_ASSERT(rte_eal_process_type() == RTE_PROC_PRIMARY);
+	switch (event_type) {
+	case RTE_MEM_EVENT_FREE:
+		pthread_mutex_lock(&mem_event_list_lock);
+		/* Iterate all the existing mlx5 devices. */
+		TAILQ_FOREACH(priv, &mlx5_mem_event_list, mem_event_cb)
+			mlx5_free_mr_by_addr(&priv->mr_scache,
+					     priv->ctx->device->name,
+					     addr, len);
+		pthread_mutex_unlock(&mem_event_list_lock);
+		break;
+	case RTE_MEM_EVENT_ALLOC:
+	default:
+		break;
+	}
+}
+
 static int
 mlx5_regex_pci_probe(struct rte_pci_driver *pci_drv __rte_unused,
 		     struct rte_pci_device *pci_dev)
@@ -193,6 +232,15 @@ mlx5_regex_pci_probe(struct rte_pci_driver *pci_drv __rte_unused,
 		rte_errno = ENOMEM;
 		goto error;
 	}
+	/* Register callback function for global shared MR cache management. */
+	if (TAILQ_EMPTY(&mlx5_mem_event_list))
+		rte_mem_event_callback_register("MLX5_MEM_EVENT_CB",
+						mlx5_regex_mr_mem_event_cb,
+						NULL);
+	/* Add device to memory callback list. */
+	pthread_mutex_lock(&mem_event_list_lock);
+	TAILQ_INSERT_TAIL(&mlx5_mem_event_list, priv, mem_event_cb);
+	pthread_mutex_unlock(&mem_event_list_lock);
 	DRV_LOG(INFO, "RegEx GGA is %s.",
 		priv->has_umr ? "supported" : "unsupported");
 	return 0;
@@ -225,6 +273,13 @@ mlx5_regex_pci_remove(struct rte_pci_device *pci_dev)
 		return 0;
 	priv = dev->data->dev_private;
 	if (priv) {
+		/* Remove from memory callback device list. */
+		pthread_mutex_lock(&mem_event_list_lock);
+		TAILQ_REMOVE(&mlx5_mem_event_list, priv, mem_event_cb);
+		pthread_mutex_unlock(&mem_event_list_lock);
+		if (TAILQ_EMPTY(&mlx5_mem_event_list))
+			rte_mem_event_callback_unregister("MLX5_MEM_EVENT_CB",
+							  NULL);
 		if (priv->pd)
 			mlx5_glue->dealloc_pd(priv->pd);
 		if (priv->uar)
diff --git a/drivers/regex/mlx5/mlx5_regex.h b/drivers/regex/mlx5/mlx5_regex.h
index 51a2101e53..61f59ba873 100644
--- a/drivers/regex/mlx5/mlx5_regex.h
+++ b/drivers/regex/mlx5/mlx5_regex.h
@@ -70,6 +70,8 @@ struct mlx5_regex_priv {
 	uint32_t nb_engines; /* Number of RegEx engines. */
 	struct mlx5dv_devx_uar *uar; /* UAR object. */
 	struct ibv_pd *pd;
+	TAILQ_ENTRY(mlx5_regex_priv) mem_event_cb;
+	/**< Called by memory event callback. */
 	struct mlx5_mr_share_cache mr_scache; /* Global shared MR cache. */
 	uint8_t is_bf2; /* The device is BF2 device. */
 	uint8_t sq_ts_format; /* Whether SQ supports timestamp formats. */
diff --git a/drivers/regex/mlx5/mlx5_regex_fastpath.c b/drivers/regex/mlx5/mlx5_regex_fastpath.c
index b57e7d7794..437009dcb6 100644
--- a/drivers/regex/mlx5/mlx5_regex_fastpath.c
+++ b/drivers/regex/mlx5/mlx5_regex_fastpath.c
@@ -109,6 +109,40 @@ set_wqe_ctrl_seg(struct mlx5_wqe_ctrl_seg *seg, uint16_t pi, uint8_t opcode,
 	seg->imm = imm;
 }
 
+/**
+ * Query LKey from a packet buffer for QP. If not found, add the mempool.
+ *
+ * @param priv
+ *   Pointer to the priv object.
+ * @param mr_ctrl
+ *   Pointer to per-queue MR control structure.
+ * @param op
+ *   Pointer to the RegEx operations object.
+ *
+ * @return
+ *   Searched LKey on success, UINT32_MAX on no match.
+ */
+static inline uint32_t
+mlx5_regex_addr2mr(struct mlx5_regex_priv *priv, struct mlx5_mr_ctrl *mr_ctrl,
+		   struct rte_regex_ops *op)
+{
+	uintptr_t addr = rte_pktmbuf_mtod(op->mbuf, uintptr_t);
+	uint32_t lkey;
+
+	/* Check generation bit to see if there's any change on existing MRs. */
+	if (unlikely(*mr_ctrl->dev_gen_ptr != mr_ctrl->cur_gen))
+		mlx5_mr_flush_local_cache(mr_ctrl);
+	/* Linear search on MR cache array. */
+	lkey = mlx5_mr_lookup_lkey(mr_ctrl->cache, &mr_ctrl->mru,
+				   MLX5_MR_CACHE_N, addr);
+	if (likely(lkey != UINT32_MAX))
+		return lkey;
+	/* Take slower bottom-half on miss. */
+	return mlx5_mr_addr2mr_bh(priv->pd, 0, &priv->mr_scache, mr_ctrl, addr,
+				  !!(op->mbuf->ol_flags & EXT_ATTACHED_MBUF));
+}
+
+
 static inline void
 __prep_one(struct mlx5_regex_priv *priv, struct mlx5_regex_sq *sq,
 	   struct rte_regex_ops *op, struct mlx5_regex_job *job,
@@ -160,10 +194,7 @@ prep_one(struct mlx5_regex_priv *priv, struct mlx5_regex_qp *qp,
 	struct mlx5_klm klm;
 
 	klm.byte_count = rte_pktmbuf_data_len(op->mbuf);
-	klm.mkey = mlx5_mr_addr2mr_bh(priv->pd, 0,
-				      &priv->mr_scache, &qp->mr_ctrl,
-				      rte_pktmbuf_mtod(op->mbuf, uintptr_t),
-				      !!(op->mbuf->ol_flags & EXT_ATTACHED_MBUF));
+	klm.mkey = mlx5_regex_addr2mr(priv, &qp->mr_ctrl, op);
 	klm.address = rte_pktmbuf_mtod(op->mbuf, uintptr_t);
 	__prep_one(priv, sq, op, job, sq->pi, &klm);
 	sq->db_pi = sq->pi;
-- 
2.25.1