From mboxrd@z Thu Jan 1 00:00:00 1970
From: Ravi Kumar <Ravi1.Kumar@amd.com>
To: dev@dpdk.org
Cc: pablo.de.lara.guarch@intel.com
Date: Fri, 5 Jan 2018 04:39:44 -0500
Message-Id: <1515145198-97367-6-git-send-email-Ravi1.kumar@amd.com>
X-Mailer: git-send-email 2.7.4
In-Reply-To: <1515145198-97367-1-git-send-email-Ravi1.kumar@amd.com>
References: <1512047553-118101-1-git-send-email-Ravi1.kumar@amd.com> <1515145198-97367-1-git-send-email-Ravi1.kumar@amd.com>
MIME-Version: 1.0
Content-Type: text/plain
Subject: [dpdk-dev] [PATCH v2 06/20] crypto/ccp: add crypto enqueue and dequeue burst api support
List-Id: DPDK patches and discussions
X-List-Received-Date: Fri, 05 Jan 2018 09:40:34 -0000

Signed-off-by: Ravi Kumar <Ravi1.kumar@amd.com>
---
 drivers/crypto/ccp/ccp_crypto.c  | 359 +++++++++++++++++++++++++++++++++++++++
 drivers/crypto/ccp/ccp_crypto.h  |  35 ++++
 drivers/crypto/ccp/ccp_dev.c     |  27 +++
 drivers/crypto/ccp/ccp_dev.h     |   9 +
 drivers/crypto/ccp/rte_ccp_pmd.c |  64 ++++++-
 5 files changed, 487 insertions(+), 7 deletions(-)

diff --git a/drivers/crypto/ccp/ccp_crypto.c b/drivers/crypto/ccp/ccp_crypto.c
index 80abcdc..c17e84f 100644
--- a/drivers/crypto/ccp/ccp_crypto.c
+++ b/drivers/crypto/ccp/ccp_crypto.c
@@ -228,3 +228,362 @@ ccp_set_session_parameters(struct ccp_session *sess,
 	return ret;
 }
 
+/* calculate CCP descriptors requirement */
+static inline int
+ccp_cipher_slot(struct ccp_session *session)
+{
+	int count = 0;
+
+	switch (session->cipher.algo) {
+	default:
+		CCP_LOG_ERR("Unsupported cipher algo %d",
+			    session->cipher.algo);
+	}
+	return count;
+}
+
+static inline int
+ccp_auth_slot(struct ccp_session *session)
+{
+	int count = 0;
+
+	switch (session->auth.algo) {
+	default:
+		CCP_LOG_ERR("Unsupported auth algo %d",
+			    session->auth.algo);
+	}
+
+	return count;
+}
+
+static int
+ccp_aead_slot(struct ccp_session *session)
+{
+	int count = 0;
+
+	switch (session->aead_algo) {
+	default:
+		CCP_LOG_ERR("Unsupported aead algo %d",
+			    session->aead_algo);
+	}
+	return count;
+}
+
+int
+ccp_compute_slot_count(struct ccp_session *session)
+{
+	int count = 0;
+
+	switch (session->cmd_id) {
+	case CCP_CMD_CIPHER:
+		count = ccp_cipher_slot(session);
+		break;
+	case CCP_CMD_AUTH:
+		count = ccp_auth_slot(session);
+		break;
+	case CCP_CMD_CIPHER_HASH:
+	case CCP_CMD_HASH_CIPHER:
+		count = ccp_cipher_slot(session);
+		count += ccp_auth_slot(session);
+		break;
+	case CCP_CMD_COMBINED:
+		count = ccp_aead_slot(session);
+		break;
+	default:
+		CCP_LOG_ERR("Unsupported cmd_id");
+
+	}
+
+	return count;
+}
+
+static inline int
+ccp_crypto_cipher(struct rte_crypto_op *op,
+		  struct ccp_queue *cmd_q __rte_unused,
+		  struct ccp_batch_info *b_info __rte_unused)
+{
+	int result = 0;
+	struct ccp_session *session;
+
+	session = (struct ccp_session *)get_session_private_data(
+					 op->sym->session,
+					 ccp_cryptodev_driver_id);
+
+	switch (session->cipher.algo) {
+	default:
+		CCP_LOG_ERR("Unsupported cipher algo %d",
+			    session->cipher.algo);
+		return -ENOTSUP;
+	}
+	return result;
+}
+
+static inline int
+ccp_crypto_auth(struct rte_crypto_op *op,
+		struct ccp_queue *cmd_q __rte_unused,
+		struct ccp_batch_info *b_info __rte_unused)
+{
+
+	int result = 0;
+	struct ccp_session *session;
+
+	session = (struct ccp_session *)get_session_private_data(
+					op->sym->session,
+					ccp_cryptodev_driver_id);
+
+	switch (session->auth.algo) {
+	default:
+		CCP_LOG_ERR("Unsupported auth algo %d",
+			    session->auth.algo);
+		return -ENOTSUP;
+	}
+
+	return result;
+}
+
+static inline int
+ccp_crypto_aead(struct rte_crypto_op *op,
+		struct ccp_queue *cmd_q __rte_unused,
+		struct ccp_batch_info *b_info __rte_unused)
+{
+	int result = 0;
+	struct ccp_session *session;
+
+	session = (struct ccp_session *)get_session_private_data(
+					op->sym->session,
+					ccp_cryptodev_driver_id);
+
+	switch (session->aead_algo) {
+	default:
+		CCP_LOG_ERR("Unsupported aead algo %d",
+			    session->aead_algo);
+		return -ENOTSUP;
+	}
+	return result;
+}
+
+int
+process_ops_to_enqueue(const struct ccp_qp *qp,
+		       struct rte_crypto_op **op,
+		       struct ccp_queue *cmd_q,
+		       uint16_t nb_ops,
+		       int slots_req)
+{
+	int i, result = 0;
+	struct ccp_batch_info *b_info;
+	struct ccp_session *session;
+
+	if (rte_mempool_get(qp->batch_mp, (void **)&b_info)) {
+		CCP_LOG_ERR("batch info allocation failed");
+		return 0;
+	}
+	/* populate batch info necessary for dequeue */
+	b_info->op_idx = 0;
+	b_info->lsb_buf_idx = 0;
+	b_info->desccnt = 0;
+	b_info->cmd_q = cmd_q;
+	b_info->lsb_buf_phys =
+		(phys_addr_t)rte_mem_virt2phy((void *)b_info->lsb_buf);
+	rte_atomic64_sub(&b_info->cmd_q->free_slots, slots_req);
+
+	b_info->head_offset = (uint32_t)(cmd_q->qbase_phys_addr + cmd_q->qidx *
+					 Q_DESC_SIZE);
+	for (i = 0; i < nb_ops; i++) {
+		session = (struct ccp_session *)get_session_private_data(
+						 op[i]->sym->session,
+						 ccp_cryptodev_driver_id);
+		switch (session->cmd_id) {
+		case CCP_CMD_CIPHER:
+			result = ccp_crypto_cipher(op[i], cmd_q, b_info);
+			break;
+		case CCP_CMD_AUTH:
+			result = ccp_crypto_auth(op[i], cmd_q, b_info);
+			break;
+		case CCP_CMD_CIPHER_HASH:
+			result = ccp_crypto_cipher(op[i], cmd_q, b_info);
+			if (result)
+				break;
+			result = ccp_crypto_auth(op[i], cmd_q, b_info);
+			break;
+		case CCP_CMD_HASH_CIPHER:
+			result = ccp_crypto_auth(op[i], cmd_q, b_info);
+			if (result)
+				break;
+			result = ccp_crypto_cipher(op[i], cmd_q, b_info);
+			break;
+		case CCP_CMD_COMBINED:
+			result = ccp_crypto_aead(op[i], cmd_q, b_info);
+			break;
+		default:
+			CCP_LOG_ERR("Unsupported cmd_id");
+			result = -1;
+		}
+		if (unlikely(result < 0)) {
+			rte_atomic64_add(&b_info->cmd_q->free_slots,
+					 (slots_req - b_info->desccnt));
+			break;
+		}
+		b_info->op[i] = op[i];
+	}
+
+	b_info->opcnt = i;
+	b_info->tail_offset = (uint32_t)(cmd_q->qbase_phys_addr + cmd_q->qidx *
+					 Q_DESC_SIZE);
+
+	rte_wmb();
+	/* Write the new tail address back to the queue register */
+	CCP_WRITE_REG(cmd_q->reg_base, CMD_Q_TAIL_LO_BASE,
+		      b_info->tail_offset);
+	/* Turn the queue back on using our cached control register */
+	CCP_WRITE_REG(cmd_q->reg_base, CMD_Q_CONTROL_BASE,
+		      cmd_q->qcontrol | CMD_Q_RUN);
+
+	rte_ring_enqueue(qp->processed_pkts, (void *)b_info);
+
+	return i;
+}
+
+static inline void ccp_auth_dq_prepare(struct rte_crypto_op *op)
+{
+	struct ccp_session *session;
+	uint8_t *digest_data, *addr;
+	struct rte_mbuf *m_last;
+	int offset, digest_offset;
+	uint8_t digest_le[64];
+
+	session = (struct ccp_session *)get_session_private_data(
+					 op->sym->session,
+					 ccp_cryptodev_driver_id);
+
+	if (session->cmd_id == CCP_CMD_COMBINED) {
+		digest_data = op->sym->aead.digest.data;
+		digest_offset = op->sym->aead.data.offset +
+					op->sym->aead.data.length;
+	} else {
+		digest_data = op->sym->auth.digest.data;
+		digest_offset = op->sym->auth.data.offset +
+					op->sym->auth.data.length;
+	}
+	m_last = rte_pktmbuf_lastseg(op->sym->m_src);
+	addr = (uint8_t *)((char *)m_last->buf_addr + m_last->data_off +
+			   m_last->data_len - session->auth.ctx_len);
+
+	rte_mb();
+	offset = session->auth.offset;
+
+	if (session->auth.engine == CCP_ENGINE_SHA)
+		if ((session->auth.ut.sha_type != CCP_SHA_TYPE_1) &&
+		    (session->auth.ut.sha_type != CCP_SHA_TYPE_224) &&
+		    (session->auth.ut.sha_type != CCP_SHA_TYPE_256)) {
+			/* All other algorithms require byte
+			 * swap done by host
+			 */
+			unsigned int i;
+
+			offset = session->auth.ctx_len -
+				session->auth.offset - 1;
+			for (i = 0; i < session->auth.digest_length; i++)
+				digest_le[i] = addr[offset - i];
+			offset = 0;
+			addr = digest_le;
+		}
+
+	op->status = RTE_CRYPTO_OP_STATUS_SUCCESS;
+	if (session->auth.op == CCP_AUTH_OP_VERIFY) {
+		if (memcmp(addr + offset, digest_data,
+			   session->auth.digest_length) != 0)
+			op->status = RTE_CRYPTO_OP_STATUS_AUTH_FAILED;
+
+	} else {
+		if (unlikely(digest_data == 0))
+			digest_data = rte_pktmbuf_mtod_offset(
+					op->sym->m_dst, uint8_t *,
+					digest_offset);
+		rte_memcpy(digest_data, addr + offset,
+			   session->auth.digest_length);
+	}
+	/* Trim area used for digest from mbuf. */
+	rte_pktmbuf_trim(op->sym->m_src,
+			 session->auth.ctx_len);
+}
+
+static int
+ccp_prepare_ops(struct rte_crypto_op **op_d,
+		struct ccp_batch_info *b_info,
+		uint16_t nb_ops)
+{
+	int i, min_ops;
+	struct ccp_session *session;
+
+	min_ops = RTE_MIN(nb_ops, b_info->opcnt);
+
+	for (i = 0; i < min_ops; i++) {
+		op_d[i] = b_info->op[b_info->op_idx++];
+		session = (struct ccp_session *)get_session_private_data(
+						 op_d[i]->sym->session,
+						 ccp_cryptodev_driver_id);
+		switch (session->cmd_id) {
+		case CCP_CMD_CIPHER:
+			op_d[i]->status = RTE_CRYPTO_OP_STATUS_SUCCESS;
+			break;
+		case CCP_CMD_AUTH:
+		case CCP_CMD_CIPHER_HASH:
+		case CCP_CMD_HASH_CIPHER:
+		case CCP_CMD_COMBINED:
+			ccp_auth_dq_prepare(op_d[i]);
+			break;
+		default:
+			CCP_LOG_ERR("Unsupported cmd_id");
+		}
+	}
+
+	b_info->opcnt -= min_ops;
+	return min_ops;
+}
+
+int
+process_ops_to_dequeue(struct ccp_qp *qp,
+		       struct rte_crypto_op **op,
+		       uint16_t nb_ops)
+{
+	struct ccp_batch_info *b_info;
+	uint32_t cur_head_offset;
+
+	if (qp->b_info != NULL) {
+		b_info = qp->b_info;
+		if (unlikely(b_info->op_idx > 0))
+			goto success;
+	} else if (rte_ring_dequeue(qp->processed_pkts,
+				    (void **)&b_info))
+		return 0;
+	cur_head_offset = CCP_READ_REG(b_info->cmd_q->reg_base,
+				       CMD_Q_HEAD_LO_BASE);
+
+	if (b_info->head_offset < b_info->tail_offset) {
+		if ((cur_head_offset >= b_info->head_offset) &&
+		    (cur_head_offset < b_info->tail_offset)) {
+			qp->b_info = b_info;
+			return 0;
+		}
+	} else {
+		if ((cur_head_offset >= b_info->head_offset) ||
+		    (cur_head_offset < b_info->tail_offset)) {
+			qp->b_info = b_info;
+			return 0;
+		}
+	}
+
+
+success:
+	nb_ops = ccp_prepare_ops(op, b_info, nb_ops);
+	rte_atomic64_add(&b_info->cmd_q->free_slots, b_info->desccnt);
+	b_info->desccnt = 0;
+	if (b_info->opcnt > 0) {
+		qp->b_info = b_info;
+	} else {
+		rte_mempool_put(qp->batch_mp, (void *)b_info);
+		qp->b_info = NULL;
+	}
+
+	return nb_ops;
+}
diff --git a/drivers/crypto/ccp/ccp_crypto.h b/drivers/crypto/ccp/ccp_crypto.h
index 346d5ee..4455497 100644
--- a/drivers/crypto/ccp/ccp_crypto.h
+++ b/drivers/crypto/ccp/ccp_crypto.h
@@ -264,4 +264,39 @@ struct ccp_qp;
 int ccp_set_session_parameters(struct ccp_session *sess,
 			       const struct rte_crypto_sym_xform *xform);
 
+/**
+ * Find count of slots
+ *
+ * @param session CCP private session
+ * @return count of free slots available
+ */
+int ccp_compute_slot_count(struct ccp_session *session);
+
+/**
+ * process crypto ops to be enqueued
+ *
+ * @param qp CCP crypto queue-pair
+ * @param op crypto ops table
+ * @param cmd_q CCP cmd queue
+ * @param nb_ops No. of ops to be submitted
+ * @return 0 on success otherwise -1
+ */
+int process_ops_to_enqueue(const struct ccp_qp *qp,
+			   struct rte_crypto_op **op,
+			   struct ccp_queue *cmd_q,
+			   uint16_t nb_ops,
+			   int slots_req);
+
+/**
+ * process crypto ops to be dequeued
+ *
+ * @param qp CCP crypto queue-pair
+ * @param op crypto ops table
+ * @param nb_ops requested no. of ops
+ * @return 0 on success otherwise -1
+ */
+int process_ops_to_dequeue(struct ccp_qp *qp,
+			   struct rte_crypto_op **op,
+			   uint16_t nb_ops);
+
 #endif /* _CCP_CRYPTO_H_ */
diff --git a/drivers/crypto/ccp/ccp_dev.c b/drivers/crypto/ccp/ccp_dev.c
index 57bccf4..fee90e3 100644
--- a/drivers/crypto/ccp/ccp_dev.c
+++ b/drivers/crypto/ccp/ccp_dev.c
@@ -61,6 +61,33 @@ ccp_dev_start(struct rte_cryptodev *dev)
 	return 0;
 }
 
+struct ccp_queue *
+ccp_allot_queue(struct rte_cryptodev *cdev, int slot_req)
+{
+	int i, ret = 0;
+	struct ccp_device *dev;
+	struct ccp_private *priv = cdev->data->dev_private;
+
+	dev = TAILQ_NEXT(priv->last_dev, next);
+	if (unlikely(dev == NULL))
+		dev = TAILQ_FIRST(&ccp_list);
+	priv->last_dev = dev;
+	if (dev->qidx >= dev->cmd_q_count)
+		dev->qidx = 0;
+	ret = rte_atomic64_read(&dev->cmd_q[dev->qidx].free_slots);
+	if (ret >= slot_req)
+		return &dev->cmd_q[dev->qidx];
+	for (i = 0; i < dev->cmd_q_count; i++) {
+		dev->qidx++;
+		if (dev->qidx >= dev->cmd_q_count)
+			dev->qidx = 0;
+		ret = rte_atomic64_read(&dev->cmd_q[dev->qidx].free_slots);
+		if (ret >= slot_req)
+			return &dev->cmd_q[dev->qidx];
+	}
+	return NULL;
+}
+
 static const struct rte_memzone *
 ccp_queue_dma_zone_reserve(const char *queue_name,
 			   uint32_t queue_size,
diff --git a/drivers/crypto/ccp/ccp_dev.h b/drivers/crypto/ccp/ccp_dev.h
index a16ba81..cfb3b03 100644
--- a/drivers/crypto/ccp/ccp_dev.h
+++ b/drivers/crypto/ccp/ccp_dev.h
@@ -445,4 +445,13 @@ int ccp_dev_start(struct rte_cryptodev *dev);
 */
 int ccp_probe_devices(const struct rte_pci_id *ccp_id);
 
+/**
+ * allocate a ccp command queue
+ *
+ * @dev rte crypto device
+ * @param slot_req number of required
+ * @return allotted CCP queue on success otherwise NULL
+ */
+struct ccp_queue *ccp_allot_queue(struct rte_cryptodev *dev, int slot_req);
+
 #endif /* _CCP_DEV_H_ */
diff --git a/drivers/crypto/ccp/rte_ccp_pmd.c b/drivers/crypto/ccp/rte_ccp_pmd.c
index 1fdbe1f..fb6d41e 100644
--- a/drivers/crypto/ccp/rte_ccp_pmd.c
+++ b/drivers/crypto/ccp/rte_ccp_pmd.c
@@ -38,6 +38,7 @@
 #include
 #include
 
+#include "ccp_crypto.h"
 #include "ccp_dev.h"
 #include "ccp_pmd_private.h"
 
@@ -47,23 +48,72 @@ static unsigned int ccp_pmd_init_done;
 uint8_t ccp_cryptodev_driver_id;
 
+static struct ccp_session *
+get_ccp_session(struct ccp_qp *qp __rte_unused, struct rte_crypto_op *op)
+{
+	struct ccp_session *sess = NULL;
+
+	if (op->sess_type == RTE_CRYPTO_OP_WITH_SESSION) {
+		if (unlikely(op->sym->session == NULL))
+			return NULL;
+
+		sess = (struct ccp_session *)
+			get_session_private_data(
+				op->sym->session,
+				ccp_cryptodev_driver_id);
+	}
+
+	return sess;
+}
+
 static uint16_t
-ccp_pmd_enqueue_burst(void *queue_pair __rte_unused,
-		      struct rte_crypto_op **ops __rte_unused,
-		      uint16_t nb_ops __rte_unused)
+ccp_pmd_enqueue_burst(void *queue_pair, struct rte_crypto_op **ops,
+		      uint16_t nb_ops)
 {
-	uint16_t enq_cnt = 0;
+	struct ccp_session *sess = NULL;
+	struct ccp_qp *qp = queue_pair;
+	struct ccp_queue *cmd_q;
+	struct rte_cryptodev *dev = qp->dev;
+	uint16_t i, enq_cnt = 0, slots_req = 0;
+
+	if (nb_ops == 0)
+		return 0;
+
+	if (unlikely(rte_ring_full(qp->processed_pkts) != 0))
+		return 0;
+
+	for (i = 0; i < nb_ops; i++) {
+		sess = get_ccp_session(qp, ops[i]);
+		if (unlikely(sess == NULL) && (i == 0)) {
+			qp->qp_stats.enqueue_err_count++;
+			return 0;
+		} else if (sess == NULL) {
+			nb_ops = i;
+			break;
+		}
+		slots_req += ccp_compute_slot_count(sess);
+	}
+
+	cmd_q = ccp_allot_queue(dev, slots_req);
+	if (unlikely(cmd_q == NULL))
+		return 0;
+
+	enq_cnt = process_ops_to_enqueue(qp, ops, cmd_q, nb_ops, slots_req);
+	qp->qp_stats.enqueued_count += enq_cnt;
 	return enq_cnt;
 }
 
 static uint16_t
-ccp_pmd_dequeue_burst(void *queue_pair __rte_unused,
-		      struct rte_crypto_op **ops __rte_unused,
-		      uint16_t nb_ops __rte_unused)
+ccp_pmd_dequeue_burst(void *queue_pair, struct rte_crypto_op **ops,
+		      uint16_t nb_ops)
 {
+	struct ccp_qp *qp = queue_pair;
 	uint16_t nb_dequeued = 0;
 
+	nb_dequeued = process_ops_to_dequeue(qp, ops, nb_ops);
+
+	qp->qp_stats.dequeued_count += nb_dequeued;
+
 	return nb_dequeued;
 }
-- 
2.7.4