From mboxrd@z Thu Jan 1 00:00:00 1970
From: Xueming Li
To: Vikash Poddar
Cc: Ciara Power, dpdk stable
Subject: patch 'common/qat: detach crypto from compress build' has been queued to stable release 22.11.3
Date: Thu, 10 Aug 2023 08:06:43 +0800
Message-ID: <20230810000717.2191-2-xuemingl@nvidia.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20230810000717.2191-1-xuemingl@nvidia.com>
References: <20230625063544.11183-1-xuemingl@nvidia.com>
 <20230810000717.2191-1-xuemingl@nvidia.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain
X-BeenThere: stable@dpdk.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: patches for DPDK stable branches
Errors-To: stable-bounces@dpdk.org

Hi,

FYI, your patch has been queued to stable release 22.11.3

Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 08/11/23. So please
shout if anyone has objections.

Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double-check that the rebase was
correctly done.

Queued patches are on a temporary branch at:
    https://git.dpdk.org/dpdk-stable/log/?h=22.11-staging

This queued commit can be viewed at:
    https://git.dpdk.org/dpdk-stable/commit/?h=22.11-staging&id=dc3ba9b6f98bca6cdef64d3660ac48f4bdbd2e7c

Thanks.

Xueming Li
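
Before the patch itself, one note on the pattern it relies on: after the
move, qat_enqueue_comp_op_burst() lives in the compress driver but still
needs the txq_write_tail() helper, which used to be a file-local static
function in the common qat_qp.c. The patch resolves this by redefining the
helper as a static inline in qat_qp.h, so every translation unit that
includes the header compiles its own copy and no link-time dependency on
the common object is introduced. Below is a minimal illustrative sketch of
that header-inline pattern; the names (ring.h, struct ring, ring_write_tail)
are simplified stand-ins for illustration, not the actual DPDK code.

/* ring.h - illustrative sketch of the static-inline-in-header pattern */
#ifndef RING_H
#define RING_H

struct ring {
    unsigned int tail;
};

/* Being static inline in the header, this helper is compiled privately
 * into each .c file that includes it, so a driver built without the
 * common object file can still call it.
 */
static inline void
ring_write_tail(struct ring *r, unsigned int tail)
{
    r->tail = tail;
}

#endif /* RING_H */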
---
>From dc3ba9b6f98bca6cdef64d3660ac48f4bdbd2e7c Mon Sep 17 00:00:00 2001
From: Vikash Poddar
Date: Mon, 26 Jun 2023 11:29:06 +0000
Subject: [PATCH] common/qat: detach crypto from compress build
Cc: Xueming Li

[ upstream commit 7cb939f6485ea8e4d6b1e8e82924fe2fcec96d60 ]

qat_qp.c is a file shared by the QAT crypto and compress drivers.
Move the compress-specific enqueue function out of the common file
and into the compress driver file qat_comp.c.

Bugzilla ID: 1237
Fixes: 2ca75c65af4c ("common/qat: build drivers from common folder")

Signed-off-by: Vikash Poddar
Acked-by: Ciara Power
---
 drivers/common/qat/meson.build  |   8 --
 drivers/common/qat/qat_qp.c     | 187 --------------------------------
 drivers/common/qat/qat_qp.h     |  20 +++-
 drivers/compress/qat/qat_comp.c | 182 +++++++++++++++++++++++++++++++
 drivers/compress/qat/qat_comp.h |   3 +
 5 files changed, 201 insertions(+), 199 deletions(-)

diff --git a/drivers/common/qat/meson.build b/drivers/common/qat/meson.build
index b84e5b3c6c..95b52b78c3 100644
--- a/drivers/common/qat/meson.build
+++ b/drivers/common/qat/meson.build
@@ -54,14 +54,6 @@ if libipsecmb.found() and libcrypto_3.found()
     endif
 endif
 
-# The driver should not build if both compression and crypto are disabled
-#FIXME common code depends on compression files so check only compress!
-if not qat_compress # and not qat_crypto
-    build = false
-    reason = '' # rely on reason for compress/crypto above
-    subdir_done()
-endif
-
 deps += ['bus_pci', 'cryptodev', 'net', 'compressdev']
 sources += files(
         'qat_common.c',
diff --git a/drivers/common/qat/qat_qp.c b/drivers/common/qat/qat_qp.c
index 9cbd19a481..e95df292e8 100644
--- a/drivers/common/qat/qat_qp.c
+++ b/drivers/common/qat/qat_qp.c
@@ -449,20 +449,6 @@ adf_configure_queues(struct qat_qp *qp, enum qat_device_gen qat_dev_gen)
 	return 0;
 }
 
-static inline void
-txq_write_tail(enum qat_device_gen qat_dev_gen,
-		struct qat_qp *qp, struct qat_queue *q)
-{
-	struct qat_qp_hw_spec_funcs *ops =
-		qat_qp_hw_spec[qat_dev_gen];
-
-	/*
-	 * Pointer check should be done during
-	 * initialization
-	 */
-	ops->qat_qp_csr_write_tail(qp, q);
-}
-
 static inline void
 qat_qp_csr_write_head(enum qat_device_gen qat_dev_gen, struct qat_qp *qp,
 		struct qat_queue *q, uint32_t new_head)
@@ -631,179 +617,6 @@ kick_tail:
 	return nb_ops_sent;
 }
 
-/* Use this for compression only - but keep consistent with above common
- * function as much as possible.
- */
-uint16_t
-qat_enqueue_comp_op_burst(void *qp, void **ops, uint16_t nb_ops)
-{
-	register struct qat_queue *queue;
-	struct qat_qp *tmp_qp = (struct qat_qp *)qp;
-	register uint32_t nb_ops_sent = 0;
-	register int nb_desc_to_build;
-	uint16_t nb_ops_possible = nb_ops;
-	register uint8_t *base_addr;
-	register uint32_t tail;
-
-	int descriptors_built, total_descriptors_built = 0;
-	int nb_remaining_descriptors;
-	int overflow = 0;
-
-	if (unlikely(nb_ops == 0))
-		return 0;
-
-	/* read params used a lot in main loop into registers */
-	queue = &(tmp_qp->tx_q);
-	base_addr = (uint8_t *)queue->base_addr;
-	tail = queue->tail;
-
-	/* Find how many can actually fit on the ring */
-	{
-		/* dequeued can only be written by one thread, but it may not
-		 * be this thread. As it's 4-byte aligned it will be read
-		 * atomically here by any Intel CPU.
-		 * enqueued can wrap before dequeued, but cannot
-		 * lap it as var size of enq/deq (uint32_t) > var size of
-		 * max_inflights (uint16_t). In reality inflights is never
-		 * even as big as max uint16_t, as it's <= ADF_MAX_DESC.
-		 * On wrapping, the calculation still returns the correct
-		 * positive value as all three vars are unsigned.
-		 */
-		uint32_t inflights =
-			tmp_qp->enqueued - tmp_qp->dequeued;
-
-		/* Find how many can actually fit on the ring */
-		overflow = (inflights + nb_ops) - tmp_qp->max_inflights;
-		if (overflow > 0) {
-			nb_ops_possible = nb_ops - overflow;
-			if (nb_ops_possible == 0)
-				return 0;
-		}
-
-		/* QAT has plenty of work queued already, so don't waste cycles
-		 * enqueueing, wait til the application has gathered a bigger
-		 * burst or some completed ops have been dequeued
-		 */
-		if (tmp_qp->min_enq_burst_threshold && inflights >
-				QAT_QP_MIN_INFL_THRESHOLD && nb_ops_possible <
-				tmp_qp->min_enq_burst_threshold) {
-			tmp_qp->stats.threshold_hit_count++;
-			return 0;
-		}
-	}
-
-	/* At this point nb_ops_possible is assuming a 1:1 mapping
-	 * between ops and descriptors.
-	 * Fewer may be sent if some ops have to be split.
-	 * nb_ops_possible is <= burst size.
-	 * Find out how many spaces are actually available on the qp in case
-	 * more are needed.
-	 */
-	nb_remaining_descriptors = nb_ops_possible
-			+ ((overflow >= 0) ? 0 : overflow * (-1));
-	QAT_DP_LOG(DEBUG, "Nb ops requested %d, nb descriptors remaining %d",
-			nb_ops, nb_remaining_descriptors);
-
-	while (nb_ops_sent != nb_ops_possible &&
-			nb_remaining_descriptors > 0) {
-		struct qat_comp_op_cookie *cookie =
-				tmp_qp->op_cookies[tail >> queue->trailz];
-
-		descriptors_built = 0;
-
-		QAT_DP_LOG(DEBUG, "--- data length: %u",
-				((struct rte_comp_op *)*ops)->src.length);
-
-		nb_desc_to_build = qat_comp_build_request(*ops,
-				base_addr + tail, cookie, tmp_qp->qat_dev_gen);
-		QAT_DP_LOG(DEBUG, "%d descriptors built, %d remaining, "
-				"%d ops sent, %d descriptors needed",
-				total_descriptors_built, nb_remaining_descriptors,
-				nb_ops_sent, nb_desc_to_build);
-
-		if (unlikely(nb_desc_to_build < 0)) {
-			/* this message cannot be enqueued */
-			tmp_qp->stats.enqueue_err_count++;
-			if (nb_ops_sent == 0)
-				return 0;
-			goto kick_tail;
-		} else if (unlikely(nb_desc_to_build > 1)) {
-			/* this op is too big and must be split - get more
-			 * descriptors and retry
-			 */
-
-			QAT_DP_LOG(DEBUG, "Build %d descriptors for this op",
-					nb_desc_to_build);
-
-			nb_remaining_descriptors -= nb_desc_to_build;
-			if (nb_remaining_descriptors >= 0) {
-				/* There are enough remaining descriptors
-				 * so retry
-				 */
-				int ret2 = qat_comp_build_multiple_requests(
-						*ops, tmp_qp, tail,
-						nb_desc_to_build);
-
-				if (unlikely(ret2 < 1)) {
-					QAT_DP_LOG(DEBUG,
-							"Failed to build (%d) descriptors, status %d",
-							nb_desc_to_build, ret2);
-
-					qat_comp_free_split_op_memzones(cookie,
-							nb_desc_to_build - 1);
-
-					tmp_qp->stats.enqueue_err_count++;
-
-					/* This message cannot be enqueued */
-					if (nb_ops_sent == 0)
-						return 0;
-					goto kick_tail;
-				} else {
-					descriptors_built = ret2;
-					total_descriptors_built +=
-							descriptors_built;
-					nb_remaining_descriptors -=
-							descriptors_built;
-					QAT_DP_LOG(DEBUG,
-							"Multiple descriptors (%d) built ok",
-							descriptors_built);
-				}
-			} else {
-				QAT_DP_LOG(ERR, "For the current op, number of requested descriptors (%d) "
-						"exceeds number of available descriptors (%d)",
-						nb_desc_to_build,
-						nb_remaining_descriptors +
-							nb_desc_to_build);
-
-				qat_comp_free_split_op_memzones(cookie,
-						nb_desc_to_build - 1);
-
-				/* Not enough extra descriptors */
-				if (nb_ops_sent == 0)
-					return 0;
-				goto kick_tail;
-			}
-		} else {
-			descriptors_built = 1;
-			total_descriptors_built++;
-			nb_remaining_descriptors--;
-			QAT_DP_LOG(DEBUG, "Single descriptor built ok");
-		}
-
-		tail = adf_modulo(tail + (queue->msg_size * descriptors_built),
-				queue->modulo_mask);
-		ops++;
-		nb_ops_sent++;
-	}
-
-kick_tail:
-	queue->tail = tail;
-	tmp_qp->enqueued += total_descriptors_built;
-	tmp_qp->stats.enqueued_count += nb_ops_sent;
-	txq_write_tail(tmp_qp->qat_dev_gen, tmp_qp, queue);
-	return nb_ops_sent;
-}
-
 uint16_t
 qat_dequeue_op_burst(void *qp, void **ops,
 		qat_op_dequeue_t qat_dequeue_process_response, uint16_t nb_ops)
diff --git a/drivers/common/qat/qat_qp.h b/drivers/common/qat/qat_qp.h
index 66f00943a5..f911125e86 100644
--- a/drivers/common/qat/qat_qp.h
+++ b/drivers/common/qat/qat_qp.h
@@ -127,9 +127,6 @@ uint16_t
 qat_enqueue_op_burst(void *qp, qat_op_build_request_t op_build_request,
 		void **ops, uint16_t nb_ops);
 
-uint16_t
-qat_enqueue_comp_op_burst(void *qp, void **ops, uint16_t nb_ops);
-
 uint16_t
 qat_dequeue_op_burst(void *qp, void **ops,
 		qat_op_dequeue_t qat_dequeue_process_response, uint16_t nb_ops);
@@ -201,6 +198,21 @@ struct qat_qp_hw_spec_funcs {
 	qat_qp_get_hw_data_t qat_qp_get_hw_data;
 };
 
-extern struct qat_qp_hw_spec_funcs *qat_qp_hw_spec[];
+extern struct qat_qp_hw_spec_funcs*
+	qat_qp_hw_spec[];
+
+static inline void
+txq_write_tail(enum qat_device_gen qat_dev_gen,
+		struct qat_qp *qp, struct qat_queue *q)
+{
+	struct qat_qp_hw_spec_funcs *ops =
+		qat_qp_hw_spec[qat_dev_gen];
+
+	/*
+	 * Pointer check should be done during
+	 * initialization
+	 */
+	ops->qat_qp_csr_write_tail(qp, q);
+}
 
 #endif /* _QAT_QP_H_ */
diff --git a/drivers/compress/qat/qat_comp.c b/drivers/compress/qat/qat_comp.c
index fe4a4999c6..559948a46a 100644
--- a/drivers/compress/qat/qat_comp.c
+++ b/drivers/compress/qat/qat_comp.c
@@ -1144,3 +1144,185 @@ qat_comp_stream_free(struct rte_compressdev *dev, void *stream)
 	}
 	return -EINVAL;
 }
+
+/**
+ * Enqueue packets for processing on queue pair of a device
+ *
+ * @param qp
+ *   qat queue pair
+ * @param ops
+ *   Compressdev operation
+ * @param nb_ops
+ *   number of operations
+ * @return
+ *   - nb_ops_sent if successful
+ */
+uint16_t
+qat_enqueue_comp_op_burst(void *qp, void **ops, uint16_t nb_ops)
+{
+	register struct qat_queue *queue;
+	struct qat_qp *tmp_qp = (struct qat_qp *)qp;
+	register uint32_t nb_ops_sent = 0;
+	register int nb_desc_to_build;
+	uint16_t nb_ops_possible = nb_ops;
+	register uint8_t *base_addr;
+	register uint32_t tail;
+
+	int descriptors_built, total_descriptors_built = 0;
+	int nb_remaining_descriptors;
+	int overflow = 0;
+
+	if (unlikely(nb_ops == 0))
+		return 0;
+
+	/* read params used a lot in main loop into registers */
+	queue = &(tmp_qp->tx_q);
+	base_addr = (uint8_t *)queue->base_addr;
+	tail = queue->tail;
+
+	/* Find how many can actually fit on the ring */
+	{
+		/* dequeued can only be written by one thread, but it may not
+		 * be this thread. As it's 4-byte aligned it will be read
+		 * atomically here by any Intel CPU.
+		 * enqueued can wrap before dequeued, but cannot
+		 * lap it as var size of enq/deq (uint32_t) > var size of
+		 * max_inflights (uint16_t). In reality inflights is never
+		 * even as big as max uint16_t, as it's <= ADF_MAX_DESC.
+		 * On wrapping, the calculation still returns the correct
+		 * positive value as all three vars are unsigned.
+		 */
+		uint32_t inflights =
+			tmp_qp->enqueued - tmp_qp->dequeued;
+
+		/* Find how many can actually fit on the ring */
+		overflow = (inflights + nb_ops) - tmp_qp->max_inflights;
+		if (overflow > 0) {
+			nb_ops_possible = nb_ops - overflow;
+			if (nb_ops_possible == 0)
+				return 0;
+		}
+
+		/* QAT has plenty of work queued already, so don't waste cycles
+		 * enqueueing, wait til the application has gathered a bigger
+		 * burst or some completed ops have been dequeued
+		 */
+		if (tmp_qp->min_enq_burst_threshold && inflights >
+				QAT_QP_MIN_INFL_THRESHOLD && nb_ops_possible <
+				tmp_qp->min_enq_burst_threshold) {
+			tmp_qp->stats.threshold_hit_count++;
+			return 0;
+		}
+	}
+
+	/* At this point nb_ops_possible is assuming a 1:1 mapping
+	 * between ops and descriptors.
+	 * Fewer may be sent if some ops have to be split.
+	 * nb_ops_possible is <= burst size.
+	 * Find out how many spaces are actually available on the qp in case
+	 * more are needed.
+	 */
+	nb_remaining_descriptors = nb_ops_possible
+			+ ((overflow >= 0) ? 0 : overflow * (-1));
+	QAT_DP_LOG(DEBUG, "Nb ops requested %d, nb descriptors remaining %d",
+			nb_ops, nb_remaining_descriptors);
+
+	while (nb_ops_sent != nb_ops_possible &&
+			nb_remaining_descriptors > 0) {
+		struct qat_comp_op_cookie *cookie =
+				tmp_qp->op_cookies[tail >> queue->trailz];
+
+		descriptors_built = 0;
+
+		QAT_DP_LOG(DEBUG, "--- data length: %u",
+				((struct rte_comp_op *)*ops)->src.length);
+
+		nb_desc_to_build = qat_comp_build_request(*ops,
+				base_addr + tail, cookie, tmp_qp->qat_dev_gen);
+		QAT_DP_LOG(DEBUG, "%d descriptors built, %d remaining, "
+				"%d ops sent, %d descriptors needed",
+				total_descriptors_built, nb_remaining_descriptors,
+				nb_ops_sent, nb_desc_to_build);
+
+		if (unlikely(nb_desc_to_build < 0)) {
+			/* this message cannot be enqueued */
+			tmp_qp->stats.enqueue_err_count++;
+			if (nb_ops_sent == 0)
+				return 0;
+			goto kick_tail;
+		} else if (unlikely(nb_desc_to_build > 1)) {
+			/* this op is too big and must be split - get more
+			 * descriptors and retry
+			 */
+
+			QAT_DP_LOG(DEBUG, "Build %d descriptors for this op",
+					nb_desc_to_build);
+
+			nb_remaining_descriptors -= nb_desc_to_build;
+			if (nb_remaining_descriptors >= 0) {
+				/* There are enough remaining descriptors
+				 * so retry
+				 */
+				int ret2 = qat_comp_build_multiple_requests(
+						*ops, tmp_qp, tail,
+						nb_desc_to_build);
+
+				if (unlikely(ret2 < 1)) {
+					QAT_DP_LOG(DEBUG,
+							"Failed to build (%d) descriptors, status %d",
+							nb_desc_to_build, ret2);
+
+					qat_comp_free_split_op_memzones(cookie,
+							nb_desc_to_build - 1);
+
+					tmp_qp->stats.enqueue_err_count++;
+
+					/* This message cannot be enqueued */
+					if (nb_ops_sent == 0)
+						return 0;
+					goto kick_tail;
+				} else {
+					descriptors_built = ret2;
+					total_descriptors_built +=
+							descriptors_built;
+					nb_remaining_descriptors -=
+							descriptors_built;
+					QAT_DP_LOG(DEBUG,
+							"Multiple descriptors (%d) built ok",
+							descriptors_built);
+				}
+			} else {
+				QAT_DP_LOG(ERR, "For the current op, number of requested descriptors (%d) "
+						"exceeds number of available descriptors (%d)",
+						nb_desc_to_build,
+						nb_remaining_descriptors +
+							nb_desc_to_build);
+
+				qat_comp_free_split_op_memzones(cookie,
+						nb_desc_to_build - 1);
+
+				/* Not enough extra descriptors */
+				if (nb_ops_sent == 0)
+					return 0;
+				goto kick_tail;
+			}
+		} else {
+			descriptors_built = 1;
+			total_descriptors_built++;
+			nb_remaining_descriptors--;
+			QAT_DP_LOG(DEBUG, "Single descriptor built ok");
+		}
+
+		tail = adf_modulo(tail + (queue->msg_size * descriptors_built),
+				queue->modulo_mask);
+		ops++;
+		nb_ops_sent++;
+	}
+
+kick_tail:
+	queue->tail = tail;
+	tmp_qp->enqueued += total_descriptors_built;
+	tmp_qp->stats.enqueued_count += nb_ops_sent;
+	txq_write_tail(tmp_qp->qat_dev_gen, tmp_qp, queue);
+	return nb_ops_sent;
+}
diff --git a/drivers/compress/qat/qat_comp.h b/drivers/compress/qat/qat_comp.h
index da7b9a6eec..dc220cd6e3 100644
--- a/drivers/compress/qat/qat_comp.h
+++ b/drivers/compress/qat/qat_comp.h
@@ -141,5 +141,8 @@ qat_comp_stream_create(struct rte_compressdev *dev,
 int
 qat_comp_stream_free(struct rte_compressdev *dev, void *stream);
 
+uint16_t
+qat_enqueue_comp_op_burst(void *qp, void **ops, uint16_t nb_ops);
+
 #endif
 #endif
-- 
2.25.1

---
  Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- -	2023-08-09 21:51:20.611652600 +0800
+++ 0094-common-qat-detach-crypto-from-compress-build.patch	2023-08-09 21:51:18.264352000 +0800
@@ -1 +1 @@
-From 7cb939f6485ea8e4d6b1e8e82924fe2fcec96d60 Mon Sep 17 00:00:00 2001
+From dc3ba9b6f98bca6cdef64d3660ac48f4bdbd2e7c Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li
+
+[ upstream commit 7cb939f6485ea8e4d6b1e8e82924fe2fcec96d60 ]
@@ -12 +14,0 @@
-Cc: stable@dpdk.org
@@ -25 +27 @@
-index 0f72b6959b..edc793ba95 100644
+index b84e5b3c6c..95b52b78c3 100644
@@ -28 +30 @@
-@@ -70,14 +70,6 @@ else
+@@ -54,14 +54,6 @@ if libipsecmb.found() and libcrypto_3.found()
@@ -44 +46 @@
-index 0ba26d8580..094d684abc 100644
+index 9cbd19a481..e95df292e8 100644
@@ -47 +49 @@
-@@ -490,20 +490,6 @@ adf_configure_queues(struct qat_qp *qp, enum qat_device_gen qat_dev_gen)
+@@ -449,20 +449,6 @@ adf_configure_queues(struct qat_qp *qp, enum qat_device_gen qat_dev_gen)
@@ -68 +70 @@
-@@ -670,179 +656,6 @@ kick_tail:
+@@ -631,179 +617,6 @@ kick_tail:
@@ -249 +251 @@
-index d19fc387e4..ae18fb942e 100644
+index 66f00943a5..f911125e86 100644
@@ -262 +264 @@
-@@ -206,6 +203,21 @@ struct qat_qp_hw_spec_funcs {
+@@ -201,6 +198,21 @@ struct qat_qp_hw_spec_funcs {
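
As a final aid for anyone validating the backport: applications never call
qat_enqueue_comp_op_burst() directly. The driver registers it as the
device's enqueue callback, and it is reached through the public compressdev
API. A hedged sketch of that call path follows; the wrapper name
submit_burst and its arguments are assumptions for illustration, while
rte_compressdev_enqueue_burst() is the real public entry point.

#include <rte_compressdev.h>

/* Submit a burst of compression ops. On a QAT device this dispatches
 * to the driver's enqueue callback, which after this patch is
 * qat_enqueue_comp_op_burst() in qat_comp.c.
 */
static uint16_t
submit_burst(uint8_t dev_id, uint16_t qp_id,
		struct rte_comp_op **ops, uint16_t nb_ops)
{
	uint16_t sent = rte_compressdev_enqueue_burst(dev_id, qp_id,
			ops, nb_ops);

	/* The driver may accept fewer than nb_ops, e.g. when the ring is
	 * full or an op must be split across several descriptors; the
	 * caller is expected to retry the remainder later.
	 */
	return sent;
}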