From mboxrd@z Thu Jan 1 00:00:00 1970
From: Ravi Kumar <Ravi1.kumar@amd.com>
To: dev@dpdk.org
Cc: pablo.de.lara.guarch@intel.com, hemant.agrawal@nxp.com
Date: Mon, 19 Mar 2018 08:23:40 -0400
Message-Id: <1521462233-13590-6-git-send-email-Ravi1.kumar@amd.com>
X-Mailer: git-send-email 2.7.4
In-Reply-To: <1521462233-13590-1-git-send-email-Ravi1.kumar@amd.com>
References: <1520584520-130522-1-git-send-email-Ravi1.kumar@amd.com> <1521462233-13590-1-git-send-email-Ravi1.kumar@amd.com>
MIME-Version: 1.0
Content-Type: text/plain
Subject: [dpdk-dev] [PATCH v5 06/19] crypto/ccp: support crypto enqueue and dequeue burst api

Signed-off-by: Ravi Kumar <Ravi1.kumar@amd.com>
---
 drivers/crypto/ccp/ccp_crypto.c  | 360 +++++++++++++++++++++++++++++++++++++++
 drivers/crypto/ccp/ccp_crypto.h  |  35 ++++
 drivers/crypto/ccp/ccp_dev.c     |  27 +++
 drivers/crypto/ccp/ccp_dev.h     |   9 +
 drivers/crypto/ccp/rte_ccp_pmd.c |  64 ++++++-
 5 files changed, 488 insertions(+), 7 deletions(-)

diff --git a/drivers/crypto/ccp/ccp_crypto.c b/drivers/crypto/ccp/ccp_crypto.c
index 8bf4ce1..7ced435 100644
--- a/drivers/crypto/ccp/ccp_crypto.c
+++ b/drivers/crypto/ccp/ccp_crypto.c
@@ -201,3 +201,363 @@ ccp_set_session_parameters(struct ccp_session *sess,
 	}
 	return ret;
 }
+
+/* calculate CCP descriptors requirement */
+static inline int
+ccp_cipher_slot(struct ccp_session *session)
+{
+	int count = 0;
+
+	switch (session->cipher.algo) {
+	default:
+		CCP_LOG_ERR("Unsupported cipher algo %d",
+			    session->cipher.algo);
+	}
+	return count;
+}
+
+static inline int
+ccp_auth_slot(struct ccp_session *session)
+{
+	int count = 0;
+
+	switch (session->auth.algo) {
+	default:
+		CCP_LOG_ERR("Unsupported auth algo %d",
+			    session->auth.algo);
+	}
+
+	return count;
+}
+
+static int
+ccp_aead_slot(struct ccp_session *session)
+{
+	int count = 0;
+
+	switch (session->aead_algo) {
+	default:
+		CCP_LOG_ERR("Unsupported aead algo %d",
+			    session->aead_algo);
+	}
+	return count;
+}
+
+int
+ccp_compute_slot_count(struct ccp_session *session)
+{
+	int count = 0;
+
+	switch (session->cmd_id) {
+	case CCP_CMD_CIPHER:
+		count = ccp_cipher_slot(session);
+		break;
+	case CCP_CMD_AUTH:
+		count = ccp_auth_slot(session);
+		break;
+	case CCP_CMD_CIPHER_HASH:
+	case CCP_CMD_HASH_CIPHER:
+		count = ccp_cipher_slot(session);
+		count += ccp_auth_slot(session);
+		break;
+	case CCP_CMD_COMBINED:
+		count = ccp_aead_slot(session);
+		break;
+	default:
+		CCP_LOG_ERR("Unsupported cmd_id");
+
+	}
+
+	return count;
+}
+
+static inline int
+ccp_crypto_cipher(struct rte_crypto_op *op,
+		  struct ccp_queue *cmd_q __rte_unused,
+		  struct ccp_batch_info *b_info __rte_unused)
+{
+	int result = 0;
+	struct ccp_session *session;
+
+	session = (struct ccp_session *)get_session_private_data(
+					 op->sym->session,
+					 ccp_cryptodev_driver_id);
+
+	switch (session->cipher.algo) {
+	default:
+		CCP_LOG_ERR("Unsupported cipher algo %d",
+			    session->cipher.algo);
+		return -ENOTSUP;
+	}
+	return result;
+}
+
+static inline int
+ccp_crypto_auth(struct rte_crypto_op *op,
+		struct ccp_queue *cmd_q __rte_unused,
+		struct ccp_batch_info *b_info __rte_unused)
+{
+
+	int result = 0;
+	struct ccp_session *session;
+
+	session = (struct ccp_session *)get_session_private_data(
+					 op->sym->session,
+					 ccp_cryptodev_driver_id);
+
+	switch (session->auth.algo) {
+	default:
+		CCP_LOG_ERR("Unsupported auth algo %d",
+			    session->auth.algo);
+		return -ENOTSUP;
+	}
+
+	return result;
+}
+
+static inline int
+ccp_crypto_aead(struct rte_crypto_op *op,
+		struct ccp_queue *cmd_q __rte_unused,
+		struct ccp_batch_info *b_info __rte_unused)
+{
+	int result = 0;
+	struct ccp_session *session;
+
+	session = (struct ccp_session *)get_session_private_data(
+					 op->sym->session,
+					 ccp_cryptodev_driver_id);
+
+	switch (session->aead_algo) {
+	default:
+		CCP_LOG_ERR("Unsupported aead algo %d",
+			    session->aead_algo);
+		return -ENOTSUP;
+	}
+	return result;
+}
+
+int
+process_ops_to_enqueue(const struct ccp_qp *qp,
+		       struct rte_crypto_op **op,
+		       struct ccp_queue *cmd_q,
+		       uint16_t nb_ops,
+		       int slots_req)
+{
+	int i, result = 0;
+	struct ccp_batch_info *b_info;
+	struct ccp_session *session;
+
+	if (rte_mempool_get(qp->batch_mp, (void **)&b_info)) {
+		CCP_LOG_ERR("batch info allocation failed");
+		return 0;
+	}
+	/* populate batch info necessary for dequeue */
+	b_info->op_idx = 0;
+	b_info->lsb_buf_idx = 0;
+	b_info->desccnt = 0;
+	b_info->cmd_q = cmd_q;
+	b_info->lsb_buf_phys =
+		(phys_addr_t)rte_mem_virt2phy((void *)b_info->lsb_buf);
+	rte_atomic64_sub(&b_info->cmd_q->free_slots, slots_req);
+
+	b_info->head_offset = (uint32_t)(cmd_q->qbase_phys_addr + cmd_q->qidx *
+					 Q_DESC_SIZE);
+	for (i = 0; i < nb_ops; i++) {
+		session = (struct ccp_session *)get_session_private_data(
+						 op[i]->sym->session,
+						 ccp_cryptodev_driver_id);
+		switch (session->cmd_id) {
+		case CCP_CMD_CIPHER:
+			result = ccp_crypto_cipher(op[i], cmd_q, b_info);
+			break;
+		case CCP_CMD_AUTH:
+			result = ccp_crypto_auth(op[i], cmd_q, b_info);
+			break;
+		case CCP_CMD_CIPHER_HASH:
+			result = ccp_crypto_cipher(op[i], cmd_q, b_info);
+			if (result)
+				break;
+			result = ccp_crypto_auth(op[i], cmd_q, b_info);
+			break;
+		case CCP_CMD_HASH_CIPHER:
+			result = ccp_crypto_auth(op[i], cmd_q, b_info);
+			if (result)
+				break;
+			result = ccp_crypto_cipher(op[i], cmd_q, b_info);
+			break;
+		case CCP_CMD_COMBINED:
+			result = ccp_crypto_aead(op[i], cmd_q, b_info);
+			break;
+		default:
+			CCP_LOG_ERR("Unsupported cmd_id");
+			result = -1;
+		}
+		if (unlikely(result < 0)) {
+			rte_atomic64_add(&b_info->cmd_q->free_slots,
+					 (slots_req - b_info->desccnt));
+			break;
+		}
+		b_info->op[i] = op[i];
+	}
+
+	b_info->opcnt = i;
+	b_info->tail_offset = (uint32_t)(cmd_q->qbase_phys_addr + cmd_q->qidx *
+					 Q_DESC_SIZE);
+
+	rte_wmb();
+	/* Write the new tail address back to the queue register */
+	CCP_WRITE_REG(cmd_q->reg_base, CMD_Q_TAIL_LO_BASE,
+		      b_info->tail_offset);
+	/* Turn the queue back on using our cached control register */
+	CCP_WRITE_REG(cmd_q->reg_base, CMD_Q_CONTROL_BASE,
+		      cmd_q->qcontrol | CMD_Q_RUN);
+
+	rte_ring_enqueue(qp->processed_pkts, (void *)b_info);
+
+	return i;
+}
+
+static inline void ccp_auth_dq_prepare(struct rte_crypto_op *op)
+{
+	struct ccp_session *session;
+	uint8_t *digest_data, *addr;
+	struct rte_mbuf *m_last;
+	int offset, digest_offset;
+	uint8_t digest_le[64];
+
+	session = (struct ccp_session *)get_session_private_data(
+					 op->sym->session,
+					 ccp_cryptodev_driver_id);
+
+	if (session->cmd_id == CCP_CMD_COMBINED) {
+		digest_data = op->sym->aead.digest.data;
+		digest_offset = op->sym->aead.data.offset +
+				op->sym->aead.data.length;
+	} else {
+		digest_data = op->sym->auth.digest.data;
+		digest_offset = op->sym->auth.data.offset +
+				op->sym->auth.data.length;
+	}
+	m_last = rte_pktmbuf_lastseg(op->sym->m_src);
+	addr = (uint8_t *)((char *)m_last->buf_addr + m_last->data_off +
+			   m_last->data_len - session->auth.ctx_len);
+
+	rte_mb();
+	offset = session->auth.offset;
+
+	if (session->auth.engine == CCP_ENGINE_SHA)
+		if ((session->auth.ut.sha_type != CCP_SHA_TYPE_1) &&
+		    (session->auth.ut.sha_type != CCP_SHA_TYPE_224) &&
+		    (session->auth.ut.sha_type != CCP_SHA_TYPE_256)) {
+			/* All other algorithms require byte
+			 * swap done by host
+			 */
+			unsigned int i;
+
+			offset = session->auth.ctx_len -
+				session->auth.offset - 1;
+			for (i = 0; i < session->auth.digest_length; i++)
+				digest_le[i] = addr[offset - i];
+			offset = 0;
+			addr = digest_le;
+		}
+
+	op->status = RTE_CRYPTO_OP_STATUS_SUCCESS;
+	if (session->auth.op == CCP_AUTH_OP_VERIFY) {
+		if (memcmp(addr + offset, digest_data,
+			   session->auth.digest_length) != 0)
+			op->status = RTE_CRYPTO_OP_STATUS_AUTH_FAILED;
+
+	} else {
+		if (unlikely(digest_data == 0))
+			digest_data = rte_pktmbuf_mtod_offset(
+					op->sym->m_dst, uint8_t *,
+					digest_offset);
+		rte_memcpy(digest_data, addr + offset,
+			   session->auth.digest_length);
+	}
+	/* Trim area used for digest from mbuf. */
+	rte_pktmbuf_trim(op->sym->m_src,
+			 session->auth.ctx_len);
+}
+
+static int
+ccp_prepare_ops(struct rte_crypto_op **op_d,
+		struct ccp_batch_info *b_info,
+		uint16_t nb_ops)
+{
+	int i, min_ops;
+	struct ccp_session *session;
+
+	min_ops = RTE_MIN(nb_ops, b_info->opcnt);
+
+	for (i = 0; i < min_ops; i++) {
+		op_d[i] = b_info->op[b_info->op_idx++];
+		session = (struct ccp_session *)get_session_private_data(
+						 op_d[i]->sym->session,
+						 ccp_cryptodev_driver_id);
+		switch (session->cmd_id) {
+		case CCP_CMD_CIPHER:
+			op_d[i]->status = RTE_CRYPTO_OP_STATUS_SUCCESS;
+			break;
+		case CCP_CMD_AUTH:
+		case CCP_CMD_CIPHER_HASH:
+		case CCP_CMD_HASH_CIPHER:
+		case CCP_CMD_COMBINED:
+			ccp_auth_dq_prepare(op_d[i]);
+			break;
+		default:
+			CCP_LOG_ERR("Unsupported cmd_id");
+		}
+	}
+
+	b_info->opcnt -= min_ops;
+	return min_ops;
+}
+
+int
+process_ops_to_dequeue(struct ccp_qp *qp,
+		       struct rte_crypto_op **op,
+		       uint16_t nb_ops)
+{
+	struct ccp_batch_info *b_info;
+	uint32_t cur_head_offset;
+
+	if (qp->b_info != NULL) {
+		b_info = qp->b_info;
+		if (unlikely(b_info->op_idx > 0))
+			goto success;
+	} else if (rte_ring_dequeue(qp->processed_pkts,
+				    (void **)&b_info))
+		return 0;
+	cur_head_offset = CCP_READ_REG(b_info->cmd_q->reg_base,
+				       CMD_Q_HEAD_LO_BASE);
+
+	if (b_info->head_offset < b_info->tail_offset) {
+		if ((cur_head_offset >= b_info->head_offset) &&
+		    (cur_head_offset < b_info->tail_offset)) {
+			qp->b_info = b_info;
+			return 0;
+		}
+	} else {
+		if ((cur_head_offset >= b_info->head_offset) ||
+		    (cur_head_offset < b_info->tail_offset)) {
+			qp->b_info = b_info;
+			return 0;
+		}
+	}
+
+
+success:
+	nb_ops = ccp_prepare_ops(op, b_info, nb_ops);
+	rte_atomic64_add(&b_info->cmd_q->free_slots, b_info->desccnt);
+	b_info->desccnt = 0;
+	if (b_info->opcnt > 0) {
+		qp->b_info = b_info;
+	} else {
+		rte_mempool_put(qp->batch_mp, (void *)b_info);
+		qp->b_info = NULL;
+	}
+
+	return nb_ops;
+}
diff --git a/drivers/crypto/ccp/ccp_crypto.h b/drivers/crypto/ccp/ccp_crypto.h
index 8d9de07..c2fce1d 100644
--- a/drivers/crypto/ccp/ccp_crypto.h
+++ b/drivers/crypto/ccp/ccp_crypto.h
@@ -238,4 +238,39 @@ struct ccp_qp;
 int ccp_set_session_parameters(struct ccp_session *sess,
 			       const struct rte_crypto_sym_xform *xform);
 
+/**
+ * Find count of slots
+ *
+ * @param session CCP private session
+ * @return count of free slots available
+ */
+int ccp_compute_slot_count(struct ccp_session *session);
+
+/**
+ * process crypto ops to be enqueued
+ *
+ * @param qp CCP crypto queue-pair
+ * @param op crypto ops table
+ * @param cmd_q CCP cmd queue
+ * @param nb_ops No. of ops to be submitted
+ * @return 0 on success otherwise -1
+ */
+int process_ops_to_enqueue(const struct ccp_qp *qp,
+			   struct rte_crypto_op **op,
+			   struct ccp_queue *cmd_q,
+			   uint16_t nb_ops,
+			   int slots_req);
+
+/**
+ * process crypto ops to be dequeued
+ *
+ * @param qp CCP crypto queue-pair
+ * @param op crypto ops table
+ * @param nb_ops requested no. of ops
+ * @return 0 on success otherwise -1
+ */
+int process_ops_to_dequeue(struct ccp_qp *qp,
+			   struct rte_crypto_op **op,
+			   uint16_t nb_ops);
+
 #endif /* _CCP_CRYPTO_H_ */
diff --git a/drivers/crypto/ccp/ccp_dev.c b/drivers/crypto/ccp/ccp_dev.c
index 546f0a7..2e702de 100644
--- a/drivers/crypto/ccp/ccp_dev.c
+++ b/drivers/crypto/ccp/ccp_dev.c
@@ -35,6 +35,33 @@ ccp_dev_start(struct rte_cryptodev *dev)
 	return 0;
 }
 
+struct ccp_queue *
+ccp_allot_queue(struct rte_cryptodev *cdev, int slot_req)
+{
+	int i, ret = 0;
+	struct ccp_device *dev;
+	struct ccp_private *priv = cdev->data->dev_private;
+
+	dev = TAILQ_NEXT(priv->last_dev, next);
+	if (unlikely(dev == NULL))
+		dev = TAILQ_FIRST(&ccp_list);
+	priv->last_dev = dev;
+	if (dev->qidx >= dev->cmd_q_count)
+		dev->qidx = 0;
+	ret = rte_atomic64_read(&dev->cmd_q[dev->qidx].free_slots);
+	if (ret >= slot_req)
+		return &dev->cmd_q[dev->qidx];
+	for (i = 0; i < dev->cmd_q_count; i++) {
+		dev->qidx++;
+		if (dev->qidx >= dev->cmd_q_count)
+			dev->qidx = 0;
+		ret = rte_atomic64_read(&dev->cmd_q[dev->qidx].free_slots);
+		if (ret >= slot_req)
+			return &dev->cmd_q[dev->qidx];
+	}
+	return NULL;
+}
+
 static const struct rte_memzone *
 ccp_queue_dma_zone_reserve(const char *queue_name,
 			   uint32_t queue_size,
diff --git a/drivers/crypto/ccp/ccp_dev.h b/drivers/crypto/ccp/ccp_dev.h
index d88f70c..abc788c 100644
--- a/drivers/crypto/ccp/ccp_dev.h
+++ b/drivers/crypto/ccp/ccp_dev.h
@@ -419,4 +419,13 @@ int ccp_dev_start(struct rte_cryptodev *dev);
 */
 int ccp_probe_devices(const struct rte_pci_id *ccp_id);
 
+/**
+ * allocate a ccp command queue
+ *
+ * @dev rte crypto device
+ * @param slot_req number of required slots
+ * @return allotted CCP queue on success otherwise NULL
+ */
+struct ccp_queue *ccp_allot_queue(struct rte_cryptodev *dev, int slot_req);
+
 #endif /* _CCP_DEV_H_ */
diff --git a/drivers/crypto/ccp/rte_ccp_pmd.c b/drivers/crypto/ccp/rte_ccp_pmd.c
index ce64dbc..fd7d2d3 100644
--- a/drivers/crypto/ccp/rte_ccp_pmd.c
+++ b/drivers/crypto/ccp/rte_ccp_pmd.c
@@ -12,6 +12,7 @@
 #include
 #include
 
+#include "ccp_crypto.h"
 #include "ccp_dev.h"
 #include "ccp_pmd_private.h"
 
@@ -21,23 +22,72 @@ static unsigned int ccp_pmd_init_done;
 uint8_t ccp_cryptodev_driver_id;
 
+static struct ccp_session *
+get_ccp_session(struct ccp_qp *qp __rte_unused, struct rte_crypto_op *op)
+{
+	struct ccp_session *sess = NULL;
+
+	if (op->sess_type == RTE_CRYPTO_OP_WITH_SESSION) {
+		if (unlikely(op->sym->session == NULL))
+			return NULL;
+
+		sess = (struct ccp_session *)
+			get_session_private_data(
+				op->sym->session,
+				ccp_cryptodev_driver_id);
+	}
+
+	return sess;
+}
+
 static uint16_t
-ccp_pmd_enqueue_burst(void *queue_pair __rte_unused,
-		      struct rte_crypto_op **ops __rte_unused,
-		      uint16_t nb_ops __rte_unused)
+ccp_pmd_enqueue_burst(void *queue_pair, struct rte_crypto_op **ops,
+		      uint16_t nb_ops)
 {
-	uint16_t enq_cnt = 0;
+	struct ccp_session *sess = NULL;
+	struct ccp_qp *qp = queue_pair;
+	struct ccp_queue *cmd_q;
+	struct rte_cryptodev *dev = qp->dev;
+	uint16_t i, enq_cnt = 0, slots_req = 0;
+
+	if (nb_ops == 0)
+		return 0;
+
+	if (unlikely(rte_ring_full(qp->processed_pkts) != 0))
+		return 0;
+
+	for (i = 0; i < nb_ops; i++) {
+		sess = get_ccp_session(qp, ops[i]);
+		if (unlikely(sess == NULL) && (i == 0)) {
+			qp->qp_stats.enqueue_err_count++;
+			return 0;
+		} else if (sess == NULL) {
+			nb_ops = i;
+			break;
+		}
+		slots_req += ccp_compute_slot_count(sess);
+	}
+
+	cmd_q = ccp_allot_queue(dev, slots_req);
+	if (unlikely(cmd_q == NULL))
+		return 0;
 
+	enq_cnt = process_ops_to_enqueue(qp, ops, cmd_q, nb_ops, slots_req);
+	qp->qp_stats.enqueued_count += enq_cnt;
 	return enq_cnt;
 }
 
 static uint16_t
-ccp_pmd_dequeue_burst(void *queue_pair __rte_unused,
-		      struct rte_crypto_op **ops __rte_unused,
-		      uint16_t nb_ops __rte_unused)
+ccp_pmd_dequeue_burst(void *queue_pair, struct rte_crypto_op **ops,
+		      uint16_t nb_ops)
 {
+	struct ccp_qp *qp = queue_pair;
 	uint16_t nb_dequeued = 0;
 
+	nb_dequeued = process_ops_to_dequeue(qp, ops, nb_ops);
+
+	qp->qp_stats.dequeued_count += nb_dequeued;
+
 	return nb_dequeued;
 }
-- 
2.7.4