From: Suanming Mou <suanmingm@nvidia.com>
Date: Thu, 27 May 2021 12:34:02 +0300
Subject: [dpdk-dev] [PATCH 3/4] net/mlx5: add index pool cache flush
Message-ID: <20210527093403.1153127-4-suanmingm@nvidia.com>
In-Reply-To: <20210527093403.1153127-1-suanmingm@nvidia.com>
References: <20210527093403.1153127-1-suanmingm@nvidia.com>
X-Mailer: git-send-email 2.25.1

This commit adds a cache flush function to the per-core cache indexed pool.
Flushing drains the per-core and global index caches into a single ring and
rebuilds the bitmap of allocated entries, so that mlx5_ipool_get_next() and
the new MLX5_IPOOL_FOREACH() macro can walk every allocated entry.
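As a rough illustration of the intended usage, the sketch below flushes the
caches once and then iterates over the allocated entries. The walk function
name and the way the pool pointer is obtained are hypothetical, not part of
this patch; the loop simply mirrors what MLX5_IPOOL_FOREACH() expands to.

/*
 * Sketch only: "example_ipool_walk" is illustrative; it assumes "pool"
 * was created elsewhere and holds allocated entries.
 */
static void
example_ipool_walk(struct mlx5_indexed_pool *pool)
{
	uint32_t idx;
	void *entry;

	/* Drain per-core and global caches, rebuild the allocation bitmap. */
	mlx5_ipool_flush_cache(pool);
	/* Each call scans the bitmap and returns the next allocated entry. */
	for (idx = 1, entry = mlx5_ipool_get_next(pool, &idx);
	     entry;
	     idx++, entry = mlx5_ipool_get_next(pool, &idx))
		; /* use "entry" here */
}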
Signed-off-by: Suanming Mou <suanmingm@nvidia.com>
---
 drivers/net/mlx5/mlx5_utils.c | 70 +++++++++++++++++++++++++++++++++++
 drivers/net/mlx5/mlx5_utils.h | 13 +++++++
 2 files changed, 83 insertions(+)

diff --git a/drivers/net/mlx5/mlx5_utils.c b/drivers/net/mlx5/mlx5_utils.c
index 900de2a831..ee6c2e293e 100644
--- a/drivers/net/mlx5/mlx5_utils.c
+++ b/drivers/net/mlx5/mlx5_utils.c
@@ -452,6 +452,21 @@ mlx5_ipool_grow_cache(struct mlx5_indexed_pool *pool)
 	pool->idx_c[cidx] = pi;
 	__atomic_fetch_add(&pool->n_trunk, n_grow, __ATOMIC_ACQUIRE);
 	mlx5_ipool_unlock(pool);
+	/* Pre-allocate the bitmap. */
+	if (pool->ibmp)
+		pool->cfg.free(pool->ibmp);
+	data_size = sizeof(*pool->ibmp);
+	data_size += rte_bitmap_get_memory_footprint(cur_max_idx);
+	/* rte_bitmap requires memory cacheline aligned. */
+	pool->ibmp = pool->cfg.malloc(MLX5_MEM_ZERO, data_size,
+				      RTE_CACHE_LINE_SIZE, rte_socket_id());
+	if (!pool->ibmp)
+		return NULL;
+	pool->ibmp->num = cur_max_idx;
+	pool->ibmp->mem_size = data_size - sizeof(*pool->ibmp);
+	pool->ibmp->mem = (void *)(pool->ibmp + 1);
+	pool->ibmp->bmp = rte_bitmap_init(pool->ibmp->num,
+			pool->ibmp->mem, pool->ibmp->mem_size);
 allocate_trunk:
 	/* Initialize the new trunk. */
 	trunk_size = sizeof(*trunk);
@@ -787,6 +802,61 @@ mlx5_ipool_destroy(struct mlx5_indexed_pool *pool)
 	return 0;
 }
 
+void
+mlx5_ipool_flush_cache(struct mlx5_indexed_pool *pool)
+{
+	uint32_t i;
+	struct rte_ring *ring_c;
+	char cache_name[64];
+	union mlx5_indexed_qd qd;
+	uint32_t bmp_num, mem_size;
+	uint32_t num = 0;
+
+	/* Create a new ring to save all cached index. */
+	snprintf(cache_name, RTE_DIM(cache_name), "Ip_%p",
+		 (void *)pool->idx_g->ring);
+	ring_c = rte_ring_create(cache_name, pool->ibmp->num,
+			SOCKET_ID_ANY, RING_F_SC_DEQ | RING_F_SP_ENQ);
+	if (!ring_c)
+		return;
+	/* Reset bmp. */
+	bmp_num = mlx5_trunk_idx_offset_get(pool, pool->n_trunk_valid);
+	mem_size = rte_bitmap_get_memory_footprint(bmp_num);
+	pool->ibmp->bmp = rte_bitmap_init_with_all_set(bmp_num,
+			pool->ibmp->mem, mem_size);
+	/* Flush core cache. */
+	for (i = 0; i < MLX5_IPOOL_MAX_CORES; i++) {
+		while (!rte_ring_dequeue(pool->l_idx_c[i], &qd.ptr)) {
+			rte_bitmap_clear(pool->ibmp->bmp, (qd.idx - 1));
+			rte_ring_enqueue(ring_c, qd.ptr);
+			num++;
+		}
+	}
+	/* Flush global cache. */
+	while (!rte_ring_dequeue(pool->idx_g->ring, &qd.ptr)) {
+		rte_bitmap_clear(pool->ibmp->bmp, (qd.idx - 1));
+		rte_ring_enqueue(ring_c, qd.ptr);
+		num++;
+	}
+	rte_ring_free(pool->idx_g->ring);
+	pool->idx_g->ring = ring_c;
+}
+
+void *
+mlx5_ipool_get_next(struct mlx5_indexed_pool *pool, uint32_t *pos)
+{
+	uint64_t slab = 0;
+	uint32_t iidx = *pos;
+
+	if (!pool->ibmp || !rte_bitmap_scan(pool->ibmp->bmp, &iidx, &slab))
+		return NULL;
+	iidx += __builtin_ctzll(slab);
+	rte_bitmap_clear(pool->ibmp->bmp, iidx);
+	iidx++;
+	*pos = iidx;
+	return mlx5_ipool_get(pool, iidx);
+}
+
 void
 mlx5_ipool_dump(struct mlx5_indexed_pool *pool)
 {
diff --git a/drivers/net/mlx5/mlx5_utils.h b/drivers/net/mlx5/mlx5_utils.h
index 4fe82d4a5f..03d5164485 100644
--- a/drivers/net/mlx5/mlx5_utils.h
+++ b/drivers/net/mlx5/mlx5_utils.h
@@ -247,6 +247,13 @@ struct mlx5_indexed_cache {
 	uint32_t res;
 };
 
+struct mlx5_indexed_bmp {
+	struct rte_bitmap *bmp;
+	void *mem;
+	uint32_t mem_size;
+	uint32_t num;
+} __rte_cache_aligned;
+
 struct mlx5_indexed_pool {
 	struct mlx5_indexed_pool_config cfg; /* Indexed pool configuration. */
 	rte_spinlock_t lock; /* Pool lock for multiple thread usage. */
@@ -261,6 +268,7 @@ struct mlx5_indexed_pool {
 	struct mlx5_indexed_cache *idx_g;
 	struct mlx5_indexed_cache *idx_c[MLX5_IPOOL_MAX_CORES];
 	struct rte_ring *l_idx_c[MLX5_IPOOL_MAX_CORES];
+	struct mlx5_indexed_bmp *ibmp;
 	uint32_t free_list; /* Index to first free trunk. */
 #ifdef POOL_DEBUG
 	uint32_t n_entry;
@@ -861,4 +869,9 @@ struct { \
 	(entry); \
 	idx++, (entry) = mlx5_l3t_get_next((tbl), &idx))
 
+#define MLX5_IPOOL_FOREACH(ipool, idx, entry) \
+	for ((idx) = 1, mlx5_ipool_flush_cache((ipool)), \
+	    entry = mlx5_ipool_get_next((ipool), &idx); \
+	    (entry); idx++, entry = mlx5_ipool_get_next((ipool), &idx))
+
 #endif /* RTE_PMD_MLX5_UTILS_H_ */
-- 
2.25.1