From mboxrd@z Thu Jan 1 00:00:00 1970
From: Cristian Dumitrescu <cristian.dumitrescu@intel.com>
To: dev@dpdk.org
Date: Thu, 10 Sep 2020 16:26:17 +0100
Message-Id: <20200910152645.9342-14-cristian.dumitrescu@intel.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20200910152645.9342-1-cristian.dumitrescu@intel.com>
References: <20200908201830.74206-2-cristian.dumitrescu@intel.com>
	<20200910152645.9342-1-cristian.dumitrescu@intel.com>
Subject: [dpdk-dev] [PATCH v4 13/41] pipeline: add SWX dma instruction

The dma instruction handles the bulk copy of one complete header from the
table entry action data. It is typically used to generate headers, i.e.
headers that are not extracted from the input packet.

Signed-off-by: Cristian Dumitrescu <cristian.dumitrescu@intel.com>
---
 lib/librte_pipeline/rte_swx_pipeline.c | 207 +++++++++++++++++++++++++
 1 file changed, 207 insertions(+)

diff --git a/lib/librte_pipeline/rte_swx_pipeline.c b/lib/librte_pipeline/rte_swx_pipeline.c
index b5b502caa..341afc735 100644
--- a/lib/librte_pipeline/rte_swx_pipeline.c
+++ b/lib/librte_pipeline/rte_swx_pipeline.c
@@ -255,6 +255,18 @@ enum instruction_type {
 	INSTR_MOV, /* dst = MEF, src = MEFT */
 	INSTR_MOV_S, /* (dst, src) = (MEF, H) or (dst, src) = (H, MEFT) */
 	INSTR_MOV_I, /* dst = HMEF, src = I */
+
+	/* dma h.header t.field
+	 * memcpy(h.header, t.field, sizeof(h.header))
+	 */
+	INSTR_DMA_HT,
+	INSTR_DMA_HT2,
+	INSTR_DMA_HT3,
+	INSTR_DMA_HT4,
+	INSTR_DMA_HT5,
+	INSTR_DMA_HT6,
+	INSTR_DMA_HT7,
+	INSTR_DMA_HT8,
 };
 
 struct instr_operand {
@@ -290,12 +302,26 @@ struct instr_dst_src {
 	};
 };
 
+struct instr_dma {
+	struct {
+		uint8_t header_id[8];
+		uint8_t struct_id[8];
+	} dst;
+
+	struct {
+		uint8_t offset[8];
+	} src;
+
+	uint16_t n_bytes[8];
+};
+
 struct instruction {
 	enum instruction_type type;
 	union {
 		struct instr_io io;
 		struct instr_hdr_validity valid;
 		struct instr_dst_src mov;
+		struct instr_dma dma;
 	};
 };
 
@@ -2529,6 +2555,170 @@ instr_mov_i_exec(struct rte_swx_pipeline *p)
 	thread_ip_inc(p);
 }
 
+/*
+ * dma.
+ */
+static int
+instr_dma_translate(struct rte_swx_pipeline *p,
+		    struct action *action,
+		    char **tokens,
+		    int n_tokens,
+		    struct instruction *instr,
+		    struct instruction_data *data __rte_unused)
+{
+	char *dst = tokens[1];
+	char *src = tokens[2];
+	struct header *h;
+	struct field *tf;
+
+	CHECK(action, EINVAL);
+	CHECK(n_tokens == 3, EINVAL);
+
+	h = header_parse(p, dst);
+	CHECK(h, EINVAL);
+
+	tf = action_field_parse(action, src);
+	CHECK(tf, EINVAL);
+
+	instr->type = INSTR_DMA_HT;
+	instr->dma.dst.header_id[0] = h->id;
+	instr->dma.dst.struct_id[0] = h->struct_id;
+	instr->dma.n_bytes[0] = h->st->n_bits / 8;
+	instr->dma.src.offset[0] = tf->offset / 8;
+
+	return 0;
+}
+
+static inline void
+__instr_dma_ht_exec(struct rte_swx_pipeline *p, uint32_t n_dma);
+
+static inline void
+__instr_dma_ht_exec(struct rte_swx_pipeline *p, uint32_t n_dma)
+{
+	struct thread *t = &p->threads[p->thread_id];
+	struct instruction *ip = t->ip;
+	uint8_t *action_data = t->structs[0];
+	uint64_t valid_headers = t->valid_headers;
+	uint32_t i;
+
+	for (i = 0; i < n_dma; i++) {
+		uint32_t header_id = ip->dma.dst.header_id[i];
+		uint32_t struct_id = ip->dma.dst.struct_id[i];
+		uint32_t offset = ip->dma.src.offset[i];
+		uint32_t n_bytes = ip->dma.n_bytes[i];
+
+		struct header_runtime *h = &t->headers[header_id];
+		uint8_t *h_ptr0 = h->ptr0;
+		uint8_t *h_ptr = t->structs[struct_id];
+
+		void *dst = MASK64_BIT_GET(valid_headers, header_id) ?
+			h_ptr : h_ptr0;
+		void *src = &action_data[offset];
+
+		TRACE("[Thread %2u] dma h.s t.f\n", p->thread_id);
+
+		/* Headers. */
+		memcpy(dst, src, n_bytes);
+		t->structs[struct_id] = dst;
+		valid_headers = MASK64_BIT_SET(valid_headers, header_id);
+	}
+
+	t->valid_headers = valid_headers;
+}
+
+static inline void
+instr_dma_ht_exec(struct rte_swx_pipeline *p)
+{
+	__instr_dma_ht_exec(p, 1);
+
+	/* Thread. */
+	thread_ip_inc(p);
+}
+
+static inline void
+instr_dma_ht2_exec(struct rte_swx_pipeline *p)
+{
+	TRACE("[Thread %2u] *** The next 2 instructions are fused. ***\n",
+	      p->thread_id);
+
+	__instr_dma_ht_exec(p, 2);
+
+	/* Thread. */
+	thread_ip_inc(p);
+}
+
+static inline void
+instr_dma_ht3_exec(struct rte_swx_pipeline *p)
+{
+	TRACE("[Thread %2u] *** The next 3 instructions are fused. ***\n",
+	      p->thread_id);
+
+	__instr_dma_ht_exec(p, 3);
+
+	/* Thread. */
+	thread_ip_inc(p);
+}
+
+static inline void
+instr_dma_ht4_exec(struct rte_swx_pipeline *p)
+{
+	TRACE("[Thread %2u] *** The next 4 instructions are fused. ***\n",
+	      p->thread_id);
+
+	__instr_dma_ht_exec(p, 4);
+
+	/* Thread. */
+	thread_ip_inc(p);
+}
+
+static inline void
+instr_dma_ht5_exec(struct rte_swx_pipeline *p)
+{
+	TRACE("[Thread %2u] *** The next 5 instructions are fused. ***\n",
+	      p->thread_id);
+
+	__instr_dma_ht_exec(p, 5);
+
+	/* Thread. */
+	thread_ip_inc(p);
+}
+
+static inline void
+instr_dma_ht6_exec(struct rte_swx_pipeline *p)
+{
+	TRACE("[Thread %2u] *** The next 6 instructions are fused. ***\n",
+	      p->thread_id);
+
+	__instr_dma_ht_exec(p, 6);
+
+	/* Thread. */
+	thread_ip_inc(p);
+}
+
+static inline void
+instr_dma_ht7_exec(struct rte_swx_pipeline *p)
+{
+	TRACE("[Thread %2u] *** The next 7 instructions are fused. ***\n",
+	      p->thread_id);
+
+	__instr_dma_ht_exec(p, 7);
+
+	/* Thread. */
+	thread_ip_inc(p);
+}
+
+static inline void
+instr_dma_ht8_exec(struct rte_swx_pipeline *p)
+{
+	TRACE("[Thread %2u] *** The next 8 instructions are fused. ***\n",
+	      p->thread_id);
+
+	__instr_dma_ht_exec(p, 8);
+
+	/* Thread. */
+	thread_ip_inc(p);
+}
+
 #define RTE_SWX_INSTRUCTION_TOKENS_MAX 16
 
 static int
@@ -2622,6 +2812,14 @@ instr_translate(struct rte_swx_pipeline *p,
 					      instr,
 					      data);
 
+	if (!strcmp(tokens[tpos], "dma"))
+		return instr_dma_translate(p,
+					   action,
+					   &tokens[tpos],
+					   n_tokens - tpos,
+					   instr,
+					   data);
+
 	CHECK(0, EINVAL);
 }
 
@@ -2770,6 +2968,15 @@ static instr_exec_t instruction_table[] = {
 	[INSTR_MOV] = instr_mov_exec,
 	[INSTR_MOV_S] = instr_mov_s_exec,
 	[INSTR_MOV_I] = instr_mov_i_exec,
+
+	[INSTR_DMA_HT] = instr_dma_ht_exec,
+	[INSTR_DMA_HT2] = instr_dma_ht2_exec,
+	[INSTR_DMA_HT3] = instr_dma_ht3_exec,
+	[INSTR_DMA_HT4] = instr_dma_ht4_exec,
+	[INSTR_DMA_HT5] = instr_dma_ht5_exec,
+	[INSTR_DMA_HT6] = instr_dma_ht6_exec,
+	[INSTR_DMA_HT7] = instr_dma_ht7_exec,
+	[INSTR_DMA_HT8] = instr_dma_ht8_exec,
 };
 
 static inline void
-- 
2.17.1
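
For readers skimming the diff, the core of the new instruction is the per-header
copy in __instr_dma_ht_exec(): the header's byte image is memcpy'd out of the
table entry action data into the header storage, and the header's valid bit is
set. The standalone C sketch below mirrors just that data movement under
simplified assumptions; the struct, the MASK64_BIT_* helpers and the function
names here are local stand-ins written for illustration, not the pipeline's own
definitions.

#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Illustrative stand-ins for the pipeline's valid-header bit mask helpers. */
#define MASK64_BIT_GET(mask, pos) ((mask) & (UINT64_C(1) << (pos)))
#define MASK64_BIT_SET(mask, pos) ((mask) | (UINT64_C(1) << (pos)))

/* Hypothetical per-header runtime state (the real struct header_runtime
 * lives inside rte_swx_pipeline.c and is laid out differently).
 */
struct header_runtime_sketch {
	uint8_t storage[64]; /* Default location used while the header is not valid. */
	uint8_t *ptr;        /* Current location of the header bytes. */
};

/* One iteration of the dma loop: copy n_bytes of action data into the
 * header and mark the header valid.
 */
static void
dma_ht_sketch(struct header_runtime_sketch *h,
	      uint32_t header_id,
	      uint64_t *valid_headers,
	      const uint8_t *action_data,
	      uint32_t offset,
	      uint32_t n_bytes)
{
	/* Write into the live header if it is already valid, otherwise into
	 * its default storage, mirroring the ternary in the patch.
	 */
	uint8_t *dst = MASK64_BIT_GET(*valid_headers, header_id) ?
		h->ptr : h->storage;

	memcpy(dst, &action_data[offset], n_bytes);
	h->ptr = dst;
	*valid_headers = MASK64_BIT_SET(*valid_headers, header_id);
}

int
main(void)
{
	/* Hypothetical 8-byte header image stored as table entry action data. */
	uint8_t action_data[16] = {
		0xde, 0xad, 0xbe, 0xef, 0x00, 0x11, 0x22, 0x33,
	};
	struct header_runtime_sketch h = { .ptr = NULL };
	uint64_t valid_headers = 0;

	dma_ht_sketch(&h, 0, &valid_headers, action_data, 0, 8);

	printf("header valid: %s, first byte: 0x%02x\n",
	       MASK64_BIT_GET(valid_headers, 0) ? "yes" : "no",
	       (unsigned int)h.ptr[0]);
	return 0;
}

In the syntax accepted by instr_dma_translate(), the same operation is written
as "dma h.header t.field", i.e. memcpy(h.header, t.field, sizeof(h.header));
the INSTR_DMA_HT2 .. INSTR_DMA_HT8 variants execute 2 to 8 such copies in one
step (the TRACE messages refer to them as fused instructions).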