From mboxrd@z Thu Jan 1 00:00:00 1970
From: Ravi Kumar <Ravi1.kumar@amd.com>
To: dev@dpdk.org
Cc: pablo.de.lara.guarch@intel.com
Date: Fri, 9 Mar 2018 03:35:06 -0500
Message-Id: <1520584520-130522-6-git-send-email-Ravi1.kumar@amd.com>
X-Mailer: git-send-email 2.7.4
In-Reply-To: <1520584520-130522-1-git-send-email-Ravi1.kumar@amd.com>
References: <1515577379-18453-1-git-send-email-Ravi1.kumar@amd.com>
 <1520584520-130522-1-git-send-email-Ravi1.kumar@amd.com>
MIME-Version: 1.0
Content-Type: text/plain
Subject: [dpdk-dev] [PATCH v4 06/20] crypto/ccp: support crypto enqueue and dequeue burst api
List-Id: DPDK patches and discussions

Signed-off-by: Ravi Kumar <Ravi1.kumar@amd.com>
---
 drivers/crypto/ccp/ccp_crypto.c  | 360 +++++++++++++++++++++++++++++++++++++++
 drivers/crypto/ccp/ccp_crypto.h  |  35 ++++
 drivers/crypto/ccp/ccp_dev.c     |  27 +++
 drivers/crypto/ccp/ccp_dev.h     |   9 +
 drivers/crypto/ccp/rte_ccp_pmd.c |  64 ++++++-
 5 files changed, 488 insertions(+), 7 deletions(-)

diff --git a/drivers/crypto/ccp/ccp_crypto.c b/drivers/crypto/ccp/ccp_crypto.c
index c365c0f..c17e84f 100644
--- a/drivers/crypto/ccp/ccp_crypto.c
+++ b/drivers/crypto/ccp/ccp_crypto.c
@@ -227,3 +227,363 @@ ccp_set_session_parameters(struct ccp_session *sess,
 	}
 	return ret;
 }
+
+/* calculate CCP descriptors requirement */
+static inline int
+ccp_cipher_slot(struct ccp_session *session)
+{
+	int count = 0;
+
+	switch (session->cipher.algo) {
+	default:
+		CCP_LOG_ERR("Unsupported cipher algo %d",
+			    session->cipher.algo);
+	}
+	return count;
+}
+
+static inline int
+ccp_auth_slot(struct ccp_session *session)
+{
+	int count = 0;
+
+	switch (session->auth.algo) {
+	default:
+		CCP_LOG_ERR("Unsupported auth algo %d",
+			    session->auth.algo);
+	}
+
+	return count;
+}
+
+static int
+ccp_aead_slot(struct ccp_session *session)
+{
+	int count = 0;
+
+	switch (session->aead_algo) {
+	default:
+		CCP_LOG_ERR("Unsupported aead algo %d",
+			    session->aead_algo);
+	}
+	return count;
+}
+
+int
+ccp_compute_slot_count(struct ccp_session *session)
+{
+	int count = 0;
+
+	switch (session->cmd_id) {
+	case CCP_CMD_CIPHER:
+		count = ccp_cipher_slot(session);
+		break;
+	case CCP_CMD_AUTH:
+		count = ccp_auth_slot(session);
+		break;
+	case CCP_CMD_CIPHER_HASH:
+	case CCP_CMD_HASH_CIPHER:
+		count = ccp_cipher_slot(session);
+		count += ccp_auth_slot(session);
+		break;
+	case CCP_CMD_COMBINED:
+		count = ccp_aead_slot(session);
+		break;
+	default:
+		CCP_LOG_ERR("Unsupported cmd_id");
+	}
+
+	return count;
+}
+
+static inline int
+ccp_crypto_cipher(struct rte_crypto_op *op,
+		  struct ccp_queue *cmd_q __rte_unused,
+		  struct ccp_batch_info *b_info __rte_unused)
+{
+	int result = 0;
+	struct ccp_session *session;
+
+	session = (struct ccp_session *)get_session_private_data(
+					 op->sym->session,
+					 ccp_cryptodev_driver_id);
+
+	switch (session->cipher.algo) {
+	default:
+		CCP_LOG_ERR("Unsupported cipher algo %d",
+			    session->cipher.algo);
+		return -ENOTSUP;
+	}
+	return result;
+}
+
+static inline int
+ccp_crypto_auth(struct rte_crypto_op *op,
+		struct ccp_queue *cmd_q __rte_unused,
+		struct ccp_batch_info *b_info __rte_unused)
+{
+	int result = 0;
+	struct ccp_session *session;
+
+	session = (struct ccp_session *)get_session_private_data(
+					 op->sym->session,
+					 ccp_cryptodev_driver_id);
+
+	switch (session->auth.algo) {
+	default:
+		CCP_LOG_ERR("Unsupported auth algo %d",
+			    session->auth.algo);
+		return -ENOTSUP;
+	}
+
+	return result;
+}
+
+static inline int
+ccp_crypto_aead(struct rte_crypto_op *op,
+		struct ccp_queue *cmd_q __rte_unused,
+		struct ccp_batch_info *b_info __rte_unused)
+{
+	int result = 0;
+	struct ccp_session *session;
+
+	session = (struct ccp_session *)get_session_private_data(
+					 op->sym->session,
+					 ccp_cryptodev_driver_id);
+
+	switch (session->aead_algo) {
+	default:
+		CCP_LOG_ERR("Unsupported aead algo %d",
+			    session->aead_algo);
+		return -ENOTSUP;
+	}
+	return result;
+}
+
+int
+process_ops_to_enqueue(const struct ccp_qp *qp,
+		       struct rte_crypto_op **op,
+		       struct ccp_queue *cmd_q,
+		       uint16_t nb_ops,
+		       int slots_req)
+{
+	int i, result = 0;
+	struct ccp_batch_info *b_info;
+	struct ccp_session *session;
+
+	if (rte_mempool_get(qp->batch_mp, (void **)&b_info)) {
+		CCP_LOG_ERR("batch info allocation failed");
+		return 0;
+	}
+	/* populate batch info necessary for dequeue */
+	b_info->op_idx = 0;
+	b_info->lsb_buf_idx = 0;
+	b_info->desccnt = 0;
+	b_info->cmd_q = cmd_q;
+	b_info->lsb_buf_phys =
+		(phys_addr_t)rte_mem_virt2phy((void *)b_info->lsb_buf);
+	rte_atomic64_sub(&b_info->cmd_q->free_slots, slots_req);
+
+	b_info->head_offset = (uint32_t)(cmd_q->qbase_phys_addr + cmd_q->qidx *
+					 Q_DESC_SIZE);
+	for (i = 0; i < nb_ops; i++) {
+		session = (struct ccp_session *)get_session_private_data(
+						 op[i]->sym->session,
+						 ccp_cryptodev_driver_id);
+		switch (session->cmd_id) {
+		case CCP_CMD_CIPHER:
+			result = ccp_crypto_cipher(op[i], cmd_q, b_info);
+			break;
+		case CCP_CMD_AUTH:
+			result = ccp_crypto_auth(op[i], cmd_q, b_info);
+			break;
+		case CCP_CMD_CIPHER_HASH:
+			result = ccp_crypto_cipher(op[i], cmd_q, b_info);
+			if (result)
+				break;
+			result = ccp_crypto_auth(op[i], cmd_q, b_info);
+			break;
+		case CCP_CMD_HASH_CIPHER:
+			result = ccp_crypto_auth(op[i], cmd_q, b_info);
+			if (result)
+				break;
+			result = ccp_crypto_cipher(op[i], cmd_q, b_info);
+			break;
+		case CCP_CMD_COMBINED:
+			result = ccp_crypto_aead(op[i], cmd_q, b_info);
+			break;
+		default:
+			CCP_LOG_ERR("Unsupported cmd_id");
+			result = -1;
+		}
+		if (unlikely(result < 0)) {
+			rte_atomic64_add(&b_info->cmd_q->free_slots,
+					 (slots_req - b_info->desccnt));
+			break;
+		}
+		b_info->op[i] = op[i];
+	}
+
+	b_info->opcnt = i;
+	b_info->tail_offset = (uint32_t)(cmd_q->qbase_phys_addr + cmd_q->qidx *
+					 Q_DESC_SIZE);
+
+	rte_wmb();
+	/* Write the new tail address back to the queue register */
+	CCP_WRITE_REG(cmd_q->reg_base, CMD_Q_TAIL_LO_BASE,
+		      b_info->tail_offset);
+	/* Turn the queue back on using our cached control register */
+	CCP_WRITE_REG(cmd_q->reg_base, CMD_Q_CONTROL_BASE,
+		      cmd_q->qcontrol | CMD_Q_RUN);
+
+	rte_ring_enqueue(qp->processed_pkts, (void *)b_info);
+
+	return i;
+}
+
+static inline void ccp_auth_dq_prepare(struct rte_crypto_op *op)
+{
+	struct ccp_session *session;
+	uint8_t *digest_data, *addr;
+	struct rte_mbuf *m_last;
+	int offset, digest_offset;
+	uint8_t digest_le[64];
+
+	session = (struct ccp_session *)get_session_private_data(
+					 op->sym->session,
+					 ccp_cryptodev_driver_id);
+
+	if (session->cmd_id == CCP_CMD_COMBINED) {
+		digest_data = op->sym->aead.digest.data;
+		digest_offset = op->sym->aead.data.offset +
+					op->sym->aead.data.length;
+	} else {
+		digest_data = op->sym->auth.digest.data;
+		digest_offset = op->sym->auth.data.offset +
+					op->sym->auth.data.length;
+	}
+	m_last = rte_pktmbuf_lastseg(op->sym->m_src);
+	addr = (uint8_t *)((char *)m_last->buf_addr + m_last->data_off +
+			   m_last->data_len - session->auth.ctx_len);
+
+	rte_mb();
+	offset = session->auth.offset;
+
+	if (session->auth.engine == CCP_ENGINE_SHA)
+		if ((session->auth.ut.sha_type != CCP_SHA_TYPE_1) &&
+		    (session->auth.ut.sha_type != CCP_SHA_TYPE_224) &&
+		    (session->auth.ut.sha_type != CCP_SHA_TYPE_256)) {
+			/* All other algorithms require byte
+			 * swap done by host
+			 */
+			unsigned int i;
+
+			offset = session->auth.ctx_len -
+				session->auth.offset - 1;
+			for (i = 0; i < session->auth.digest_length; i++)
+				digest_le[i] = addr[offset - i];
+			offset = 0;
+			addr = digest_le;
+		}
+
+	op->status = RTE_CRYPTO_OP_STATUS_SUCCESS;
+	if (session->auth.op == CCP_AUTH_OP_VERIFY) {
+		if (memcmp(addr + offset, digest_data,
+			   session->auth.digest_length) != 0)
+			op->status = RTE_CRYPTO_OP_STATUS_AUTH_FAILED;
+
+	} else {
+		if (unlikely(digest_data == 0))
+			digest_data = rte_pktmbuf_mtod_offset(
+					op->sym->m_dst, uint8_t *,
+					digest_offset);
+		rte_memcpy(digest_data, addr + offset,
+			   session->auth.digest_length);
+	}
+	/* Trim area used for digest from mbuf. */
+	rte_pktmbuf_trim(op->sym->m_src,
+			 session->auth.ctx_len);
+}
+
+static int
+ccp_prepare_ops(struct rte_crypto_op **op_d,
+		struct ccp_batch_info *b_info,
+		uint16_t nb_ops)
+{
+	int i, min_ops;
+	struct ccp_session *session;
+
+	min_ops = RTE_MIN(nb_ops, b_info->opcnt);
+
+	for (i = 0; i < min_ops; i++) {
+		op_d[i] = b_info->op[b_info->op_idx++];
+		session = (struct ccp_session *)get_session_private_data(
+						 op_d[i]->sym->session,
+						 ccp_cryptodev_driver_id);
+		switch (session->cmd_id) {
+		case CCP_CMD_CIPHER:
+			op_d[i]->status = RTE_CRYPTO_OP_STATUS_SUCCESS;
+			break;
+		case CCP_CMD_AUTH:
+		case CCP_CMD_CIPHER_HASH:
+		case CCP_CMD_HASH_CIPHER:
+		case CCP_CMD_COMBINED:
+			ccp_auth_dq_prepare(op_d[i]);
+			break;
+		default:
+			CCP_LOG_ERR("Unsupported cmd_id");
+		}
+	}
+
+	b_info->opcnt -= min_ops;
+	return min_ops;
+}
+
+int
+process_ops_to_dequeue(struct ccp_qp *qp,
+		       struct rte_crypto_op **op,
+		       uint16_t nb_ops)
+{
+	struct ccp_batch_info *b_info;
+	uint32_t cur_head_offset;
+
+	if (qp->b_info != NULL) {
+		b_info = qp->b_info;
+		if (unlikely(b_info->op_idx > 0))
+			goto success;
+	} else if (rte_ring_dequeue(qp->processed_pkts,
+				    (void **)&b_info))
+		return 0;
+	cur_head_offset = CCP_READ_REG(b_info->cmd_q->reg_base,
+				       CMD_Q_HEAD_LO_BASE);
+
+	if (b_info->head_offset < b_info->tail_offset) {
+		if ((cur_head_offset >= b_info->head_offset) &&
+		    (cur_head_offset < b_info->tail_offset)) {
+			qp->b_info = b_info;
+			return 0;
+		}
+	} else {
+		if ((cur_head_offset >= b_info->head_offset) ||
+		    (cur_head_offset < b_info->tail_offset)) {
+			qp->b_info = b_info;
+			return 0;
+		}
+	}
+
+success:
+	nb_ops = ccp_prepare_ops(op, b_info, nb_ops);
+	rte_atomic64_add(&b_info->cmd_q->free_slots, b_info->desccnt);
+	b_info->desccnt = 0;
+	if (b_info->opcnt > 0) {
+		qp->b_info = b_info;
+	} else {
+		rte_mempool_put(qp->batch_mp, (void *)b_info);
+		qp->b_info = NULL;
+	}
+
+	return nb_ops;
+}
diff --git a/drivers/crypto/ccp/ccp_crypto.h b/drivers/crypto/ccp/ccp_crypto.h
index 346d5ee..4455497 100644
--- a/drivers/crypto/ccp/ccp_crypto.h
+++ b/drivers/crypto/ccp/ccp_crypto.h
@@ -264,4 +264,39 @@ struct ccp_qp;
 int ccp_set_session_parameters(struct ccp_session *sess,
 			       const struct rte_crypto_sym_xform *xform);
+/**
+ * Find count of slots
+ *
+ * @param session CCP private session
+ * @return count of CCP descriptors required for the session
+ */
+int ccp_compute_slot_count(struct ccp_session *session);
+
+/**
+ * process crypto ops to be enqueued
+ *
+ * @param qp CCP crypto queue-pair
+ * @param op crypto ops table
+ * @param cmd_q CCP cmd queue
+ * @param nb_ops No. of ops to be submitted
+ * @param slots_req number of CCP descriptors required
+ * @return number of ops successfully enqueued
+ */
+int process_ops_to_enqueue(const struct ccp_qp *qp,
+			   struct rte_crypto_op **op,
+			   struct ccp_queue *cmd_q,
+			   uint16_t nb_ops,
+			   int slots_req);
+
+/**
+ * process crypto ops to be dequeued
+ *
+ * @param qp CCP crypto queue-pair
+ * @param op crypto ops table
+ * @param nb_ops requested no. of ops
+ * @return number of ops dequeued
+ */
+int process_ops_to_dequeue(struct ccp_qp *qp,
+			   struct rte_crypto_op **op,
+			   uint16_t nb_ops);
+
 #endif /* _CCP_CRYPTO_H_ */
diff --git a/drivers/crypto/ccp/ccp_dev.c b/drivers/crypto/ccp/ccp_dev.c
index 57bccf4..fee90e3 100644
--- a/drivers/crypto/ccp/ccp_dev.c
+++ b/drivers/crypto/ccp/ccp_dev.c
@@ -61,6 +61,33 @@ ccp_dev_start(struct rte_cryptodev *dev)
 	return 0;
 }
+struct ccp_queue *
+ccp_allot_queue(struct rte_cryptodev *cdev, int slot_req)
+{
+	int i, ret = 0;
+	struct ccp_device *dev;
+	struct ccp_private *priv = cdev->data->dev_private;
+
+	dev = TAILQ_NEXT(priv->last_dev, next);
+	if (unlikely(dev == NULL))
+		dev = TAILQ_FIRST(&ccp_list);
+	priv->last_dev = dev;
+	if (dev->qidx >= dev->cmd_q_count)
+		dev->qidx = 0;
+	ret = rte_atomic64_read(&dev->cmd_q[dev->qidx].free_slots);
+	if (ret >= slot_req)
+		return &dev->cmd_q[dev->qidx];
+	for (i = 0; i < dev->cmd_q_count; i++) {
+		dev->qidx++;
+		if (dev->qidx >= dev->cmd_q_count)
+			dev->qidx = 0;
+		ret = rte_atomic64_read(&dev->cmd_q[dev->qidx].free_slots);
+		if (ret >= slot_req)
+			return &dev->cmd_q[dev->qidx];
+	}
+	return NULL;
+}
+
 static const struct rte_memzone *
 ccp_queue_dma_zone_reserve(const char *queue_name,
 			   uint32_t queue_size,
diff --git a/drivers/crypto/ccp/ccp_dev.h b/drivers/crypto/ccp/ccp_dev.h
index a16ba81..cfb3b03 100644
--- a/drivers/crypto/ccp/ccp_dev.h
+++ b/drivers/crypto/ccp/ccp_dev.h
@@ -445,4 +445,13 @@ int ccp_dev_start(struct rte_cryptodev *dev);
 */
 int ccp_probe_devices(const struct rte_pci_id *ccp_id);
+/**
+ * allocate a ccp command queue
+ *
+ * @param dev rte crypto device
+ * @param slot_req number of required slots
+ * @return allotted CCP queue on success otherwise NULL
+ */
+struct ccp_queue *ccp_allot_queue(struct rte_cryptodev *dev, int slot_req);
+
 #endif /* _CCP_DEV_H_ */
diff --git a/drivers/crypto/ccp/rte_ccp_pmd.c b/drivers/crypto/ccp/rte_ccp_pmd.c
index cc35a97..ed6ca5d 100644
--- a/drivers/crypto/ccp/rte_ccp_pmd.c
+++ b/drivers/crypto/ccp/rte_ccp_pmd.c
@@ -38,6 +38,7 @@
 #include
 #include
+#include "ccp_crypto.h"
 #include "ccp_dev.h"
 #include "ccp_pmd_private.h"
@@ -47,23 +48,72 @@ static unsigned int ccp_pmd_init_done;
 uint8_t ccp_cryptodev_driver_id;
+static struct ccp_session *
+get_ccp_session(struct ccp_qp *qp __rte_unused, struct rte_crypto_op *op)
+{
+	struct ccp_session *sess = NULL;
+
+	if (op->sess_type == RTE_CRYPTO_OP_WITH_SESSION) {
+		if (unlikely(op->sym->session == NULL))
+			return NULL;
+
+		sess = (struct ccp_session *)
+			get_session_private_data(
+				op->sym->session,
+				ccp_cryptodev_driver_id);
+	}
+
+	return sess;
+}
+
 static uint16_t
-ccp_pmd_enqueue_burst(void *queue_pair __rte_unused,
-		      struct rte_crypto_op **ops __rte_unused,
-		      uint16_t nb_ops __rte_unused)
+ccp_pmd_enqueue_burst(void *queue_pair, struct rte_crypto_op **ops,
+		      uint16_t nb_ops)
 {
-	uint16_t enq_cnt = 0;
+	struct ccp_session *sess = NULL;
+	struct ccp_qp *qp = queue_pair;
+	struct ccp_queue *cmd_q;
+	struct rte_cryptodev *dev = qp->dev;
+	uint16_t i, enq_cnt = 0, slots_req = 0;
+
+	if (nb_ops == 0)
+		return 0;
+
+	if (unlikely(rte_ring_full(qp->processed_pkts) != 0))
+		return 0;
+
+	for (i = 0; i < nb_ops; i++) {
+		sess = get_ccp_session(qp, ops[i]);
+		if (unlikely(sess == NULL) && (i == 0)) {
+			qp->qp_stats.enqueue_err_count++;
+			return 0;
+		} else if (sess == NULL) {
+			nb_ops = i;
+			break;
+		}
+		slots_req += ccp_compute_slot_count(sess);
+	}
+
+	cmd_q = ccp_allot_queue(dev, slots_req);
+	if (unlikely(cmd_q == NULL))
+		return 0;
+	enq_cnt = process_ops_to_enqueue(qp, ops, cmd_q, nb_ops, slots_req);
+	qp->qp_stats.enqueued_count += enq_cnt;
 	return enq_cnt;
 }
 
 static uint16_t
-ccp_pmd_dequeue_burst(void *queue_pair __rte_unused,
-		      struct rte_crypto_op **ops __rte_unused,
-		      uint16_t nb_ops __rte_unused)
+ccp_pmd_dequeue_burst(void *queue_pair, struct rte_crypto_op **ops,
+		      uint16_t nb_ops)
 {
+	struct ccp_qp *qp = queue_pair;
 	uint16_t nb_dequeued = 0;
 
+	nb_dequeued = process_ops_to_dequeue(qp, ops, nb_ops);
+
+	qp->qp_stats.dequeued_count += nb_dequeued;
+
 	return nb_dequeued;
 }
-- 
2.7.4
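A note for reviewers on the completion test in process_ops_to_dequeue(): the driver compares the hardware head pointer against the batch's recorded head/tail offsets, with a second branch for the case where the descriptor ring wraps past the end of the queue. The predicate can be sketched in isolation as a pure function (batch_in_flight is a hypothetical helper for illustration, not part of the patch):

```c
#include <stdbool.h>
#include <stdint.h>

/* Returns true while the CCP head pointer still lies inside the
 * [head, tail) window of a submitted batch, i.e. the batch is still
 * being consumed by hardware. The second branch handles a batch that
 * wraps around the end of the descriptor ring (tail < head).
 */
static bool
batch_in_flight(uint32_t cur_head, uint32_t head, uint32_t tail)
{
	if (head < tail)	/* batch does not wrap */
		return cur_head >= head && cur_head < tail;
	/* batch wraps: in-flight if head pointer is in either segment */
	return cur_head >= head || cur_head < tail;
}
```

When this predicate is true the driver parks the batch on qp->b_info and returns 0 ops, retrying on the next dequeue call.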
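ccp_allot_queue() balances bursts across command queues round-robin: it first checks the queue at the current cursor, then scans the remaining queues for one with enough free slots. The scan can be modeled with plain arrays (pick_queue and its parameters are illustrative only; the driver walks real ccp_device/ccp_queue structures and reads free_slots atomically):

```c
/* Simplified model of the ccp_allot_queue() scan: given per-queue
 * free-slot counts, start at the round-robin cursor and return the
 * index of the first queue that can hold slot_req descriptors,
 * updating the cursor; -1 if no queue currently fits.
 */
static int
pick_queue(const int *free_slots, int nq, int *cursor, int slot_req)
{
	int i, idx = *cursor % nq;

	for (i = 0; i < nq; i++) {
		if (free_slots[idx] >= slot_req) {
			*cursor = idx;
			return idx;
		}
		idx = (idx + 1) % nq;
	}
	return -1;	/* caller backs off, as the PMD does on NULL */
}
```

A -1 here corresponds to ccp_allot_queue() returning NULL, in which case ccp_pmd_enqueue_burst() enqueues nothing and the application retries.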
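In ccp_auth_dq_prepare(), for SHA types other than SHA-1/224/256 the digest is copied out of the hardware context buffer back-to-front ("byte swap done by host"). That reversal loop can be sketched on plain buffers (digest_byteswap is a hypothetical standalone form, assuming the same index arithmetic as the patch):

```c
#include <stddef.h>
#include <stdint.h>

/* Byte-reverse a digest out of a raw hardware context buffer:
 * out[i] takes the byte at index (ctx_len - auth_offset - 1 - i),
 * mirroring the digest_le[] loop in ccp_auth_dq_prepare().
 */
static void
digest_byteswap(uint8_t *out, const uint8_t *ctx,
		size_t ctx_len, size_t auth_offset, size_t digest_len)
{
	size_t start = ctx_len - auth_offset - 1;
	size_t i;

	for (i = 0; i < digest_len; i++)
		out[i] = ctx[start - i];
}
```

After the swap the driver points addr at the reversed copy with offset 0, so the verify/copy path below it is identical for all SHA types.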