From mboxrd@z Thu Jan 1 00:00:00 1970
From: Konstantin Ananyev
To: dev@dpdk.org
Cc: akhil.goyal@nxp.com, olivier.matz@6wind.com, Konstantin Ananyev
Date: Tue, 26 Mar 2019 15:43:15 +0000
Message-Id: <20190326154320.29913-4-konstantin.ananyev@intel.com>
X-Mailer: git-send-email 2.18.0
In-Reply-To: <20190326154320.29913-1-konstantin.ananyev@intel.com>
References: <20190320184655.17004-2-konstantin.ananyev@intel.com>
 <20190326154320.29913-1-konstantin.ananyev@intel.com>
Subject: [dpdk-dev] [PATCH v3 3/8] ipsec: change the order in filling crypto op
List-Id: DPDK patches and discussions
Content-Type: text/plain; charset="UTF-8"

Right now we first fill the crypto_sym_op part of the crypto_op, then
fill the remaining crypto_op fields in a separate cycle. It makes more
sense to fill the whole crypto_op in one go instead.
Signed-off-by: Konstantin Ananyev
---
 lib/librte_ipsec/sa.c | 46 ++++++++++++++++++++-----------------------
 1 file changed, 21 insertions(+), 25 deletions(-)

diff --git a/lib/librte_ipsec/sa.c b/lib/librte_ipsec/sa.c
index 747c37002..97c0f8c61 100644
--- a/lib/librte_ipsec/sa.c
+++ b/lib/librte_ipsec/sa.c
@@ -464,20 +464,17 @@ mbuf_bulk_copy(struct rte_mbuf *dst[], struct rte_mbuf * const src[],
  * setup crypto ops for LOOKASIDE_NONE (pure crypto) type of devices.
  */
 static inline void
-lksd_none_cop_prepare(const struct rte_ipsec_session *ss,
-	struct rte_mbuf *mb[], struct rte_crypto_op *cop[], uint16_t num)
+lksd_none_cop_prepare(struct rte_crypto_op *cop,
+	struct rte_cryptodev_sym_session *cs, struct rte_mbuf *mb)
 {
-	uint32_t i;
 	struct rte_crypto_sym_op *sop;
 
-	for (i = 0; i != num; i++) {
-		sop = cop[i]->sym;
-		cop[i]->type = RTE_CRYPTO_OP_TYPE_SYMMETRIC;
-		cop[i]->status = RTE_CRYPTO_OP_STATUS_NOT_PROCESSED;
-		cop[i]->sess_type = RTE_CRYPTO_OP_WITH_SESSION;
-		sop->m_src = mb[i];
-		__rte_crypto_sym_op_attach_sym_session(sop, ss->crypto.ses);
-	}
+	sop = cop->sym;
+	cop->type = RTE_CRYPTO_OP_TYPE_SYMMETRIC;
+	cop->status = RTE_CRYPTO_OP_STATUS_NOT_PROCESSED;
+	cop->sess_type = RTE_CRYPTO_OP_WITH_SESSION;
+	sop->m_src = mb;
+	__rte_crypto_sym_op_attach_sym_session(sop, cs);
 }
 
 /*
@@ -667,11 +664,13 @@ outb_tun_prepare(const struct rte_ipsec_session *ss, struct rte_mbuf *mb[],
 	uint64_t sqn;
 	rte_be64_t sqc;
 	struct rte_ipsec_sa *sa;
+	struct rte_cryptodev_sym_session *cs;
 	union sym_op_data icv;
 	uint64_t iv[IPSEC_MAX_IV_QWORD];
 	struct rte_mbuf *dr[num];
 
 	sa = ss->sa;
+	cs = ss->crypto.ses;
 
 	n = num;
 	sqn = esn_outb_update_sqn(sa, &n);
@@ -689,10 +688,10 @@ outb_tun_prepare(const struct rte_ipsec_session *ss, struct rte_mbuf *mb[],
 
 		/* success, setup crypto op */
 		if (rc >= 0) {
-			mb[k] = mb[i];
 			outb_pkt_xprepare(sa, sqc, &icv);
+			lksd_none_cop_prepare(cop[k], cs, mb[i]);
 			esp_outb_cop_prepare(cop[k], sa, iv, &icv, 0, rc);
-			k++;
+			mb[k++] = mb[i];
 		/* failure, put packet into the death-row */
 		} else {
 			dr[i - k] = mb[i];
@@ -700,9 +699,6 @@ outb_tun_prepare(const struct rte_ipsec_session *ss, struct rte_mbuf *mb[],
 		}
 	}
 
-	/* update cops */
-	lksd_none_cop_prepare(ss, mb, cop, k);
-
 	/* copy not prepared mbufs beyond good ones */
 	if (k != n && k != 0)
 		mbuf_bulk_copy(mb + k, dr, n - k);
@@ -803,11 +799,13 @@ outb_trs_prepare(const struct rte_ipsec_session *ss, struct rte_mbuf *mb[],
 	uint64_t sqn;
 	rte_be64_t sqc;
 	struct rte_ipsec_sa *sa;
+	struct rte_cryptodev_sym_session *cs;
 	union sym_op_data icv;
 	uint64_t iv[IPSEC_MAX_IV_QWORD];
 	struct rte_mbuf *dr[num];
 
 	sa = ss->sa;
+	cs = ss->crypto.ses;
 
 	n = num;
 	sqn = esn_outb_update_sqn(sa, &n);
@@ -829,10 +827,10 @@ outb_trs_prepare(const struct rte_ipsec_session *ss, struct rte_mbuf *mb[],
 
 		/* success, setup crypto op */
 		if (rc >= 0) {
-			mb[k] = mb[i];
 			outb_pkt_xprepare(sa, sqc, &icv);
+			lksd_none_cop_prepare(cop[k], cs, mb[i]);
 			esp_outb_cop_prepare(cop[k], sa, iv, &icv, l2 + l3, rc);
-			k++;
+			mb[k++] = mb[i];
 		/* failure, put packet into the death-row */
 		} else {
 			dr[i - k] = mb[i];
@@ -840,9 +838,6 @@ outb_trs_prepare(const struct rte_ipsec_session *ss, struct rte_mbuf *mb[],
 		}
 	}
 
-	/* update cops */
-	lksd_none_cop_prepare(ss, mb, cop, k);
-
 	/* copy not prepared mbufs beyond good ones */
 	if (k != n && k != 0)
 		mbuf_bulk_copy(mb + k, dr, n - k);
@@ -1021,11 +1016,13 @@ inb_pkt_prepare(const struct rte_ipsec_session *ss, struct rte_mbuf *mb[],
 	int32_t rc;
 	uint32_t i, k, hl;
 	struct rte_ipsec_sa *sa;
+	struct rte_cryptodev_sym_session *cs;
 	struct replay_sqn *rsn;
 	union sym_op_data icv;
 	struct rte_mbuf *dr[num];
 
 	sa = ss->sa;
+	cs = ss->crypto.ses;
 	rsn = rsn_acquire(sa);
 
 	k = 0;
@@ -1033,9 +1030,11 @@ inb_pkt_prepare(const struct rte_ipsec_session *ss, struct rte_mbuf *mb[],
 
 		hl = mb[i]->l2_len + mb[i]->l3_len;
 		rc = esp_inb_tun_pkt_prepare(sa, rsn, mb[i], hl, &icv);
-		if (rc >= 0)
+		if (rc >= 0) {
+			lksd_none_cop_prepare(cop[k], cs, mb[i]);
 			rc = esp_inb_tun_cop_prepare(cop[k], sa, mb[i],
 				&icv, hl, rc);
+		}
 
 		if (rc == 0)
 			mb[k++] = mb[i];
@@ -1047,9 +1046,6 @@ inb_pkt_prepare(const struct rte_ipsec_session *ss, struct rte_mbuf *mb[],
 
 	rsn_release(sa, rsn);
 
-	/* update cops */
-	lksd_none_cop_prepare(ss, mb, cop, k);
-
 	/* copy not prepared mbufs beyond good ones */
 	if (k != num && k != 0)
 		mbuf_bulk_copy(mb + k, dr, num - k);
-- 
2.17.1