From: Cristian Dumitrescu <cristian.dumitrescu@intel.com>
To: dev@dpdk.org
Cc: thomas@monjalon.net, david.marchand@redhat.com
Date: Thu, 1 Oct 2020 11:19:41 +0100
Message-Id: <20201001102010.36861-14-cristian.dumitrescu@intel.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20201001102010.36861-1-cristian.dumitrescu@intel.com>
References: <20200930063416.68428-2-cristian.dumitrescu@intel.com>
 <20201001102010.36861-1-cristian.dumitrescu@intel.com>
Subject: [dpdk-dev] [PATCH v7 13/42] pipeline: add SWX DMA instruction

The DMA instruction handles the bulk copy of one header from the table
entry action data. It is typically used to generate headers, i.e.
headers that are not extracted from the input packet.

Signed-off-by: Cristian Dumitrescu <cristian.dumitrescu@intel.com>
---
 lib/librte_pipeline/rte_swx_pipeline.c | 207 +++++++++++++++++++++++++
 1 file changed, 207 insertions(+)
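
For reviewers, a minimal usage sketch (not part of the patch): it assumes
the action builder API added earlier in this series, and the header, action
and argument names below are invented for illustration. The instruction
string follows the "dma h.header t.field" form documented in the code, i.e.
memcpy(h.header, t.field, sizeof(h.header)) from the table entry action
data into the header.

    #include <rte_common.h>
    #include <rte_swx_pipeline.h>

    /* Sketch: configure a hypothetical "encap_ether" action that generates
     * an Ethernet-like header from its action arguments. Assumes the
     * "ether" header and the "encap_ether_args_t" action argument struct
     * type were registered on pipeline p beforehand.
     */
    static int
    encap_ether_action_config(struct rte_swx_pipeline *p)
    {
            static const char *instructions[] = {
                    /* h.ether = sizeof(h.ether) bytes of action data,
                     * starting at the offset of argument field ether_hdr.
                     */
                    "dma h.ether t.ether_hdr",
            };

            return rte_swx_pipeline_action_config(p,
                                                  "encap_ether",
                                                  "encap_ether_args_t",
                                                  instructions,
                                                  RTE_DIM(instructions));
    }
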
diff --git a/lib/librte_pipeline/rte_swx_pipeline.c b/lib/librte_pipeline/rte_swx_pipeline.c
index b5b502caa..341afc735 100644
--- a/lib/librte_pipeline/rte_swx_pipeline.c
+++ b/lib/librte_pipeline/rte_swx_pipeline.c
@@ -255,6 +255,18 @@ enum instruction_type {
 	INSTR_MOV,   /* dst = MEF, src = MEFT */
 	INSTR_MOV_S, /* (dst, src) = (MEF, H) or (dst, src) = (H, MEFT) */
 	INSTR_MOV_I, /* dst = HMEF, src = I */
+
+	/* dma h.header t.field
+	 * memcpy(h.header, t.field, sizeof(h.header))
+	 */
+	INSTR_DMA_HT,
+	INSTR_DMA_HT2,
+	INSTR_DMA_HT3,
+	INSTR_DMA_HT4,
+	INSTR_DMA_HT5,
+	INSTR_DMA_HT6,
+	INSTR_DMA_HT7,
+	INSTR_DMA_HT8,
 };
 
 struct instr_operand {
@@ -290,12 +302,26 @@ struct instr_dst_src {
 	};
 };
 
+struct instr_dma {
+	struct {
+		uint8_t header_id[8];
+		uint8_t struct_id[8];
+	} dst;
+
+	struct {
+		uint8_t offset[8];
+	} src;
+
+	uint16_t n_bytes[8];
+};
+
 struct instruction {
 	enum instruction_type type;
 	union {
 		struct instr_io io;
 		struct instr_hdr_validity valid;
 		struct instr_dst_src mov;
+		struct instr_dma dma;
 	};
 };
 
@@ -2529,6 +2555,170 @@ instr_mov_i_exec(struct rte_swx_pipeline *p)
 	thread_ip_inc(p);
 }
 
+/*
+ * dma.
+ */
+static int
+instr_dma_translate(struct rte_swx_pipeline *p,
+		    struct action *action,
+		    char **tokens,
+		    int n_tokens,
+		    struct instruction *instr,
+		    struct instruction_data *data __rte_unused)
+{
+	char *dst = tokens[1];
+	char *src = tokens[2];
+	struct header *h;
+	struct field *tf;
+
+	CHECK(action, EINVAL);
+	CHECK(n_tokens == 3, EINVAL);
+
+	h = header_parse(p, dst);
+	CHECK(h, EINVAL);
+
+	tf = action_field_parse(action, src);
+	CHECK(tf, EINVAL);
+
+	instr->type = INSTR_DMA_HT;
+	instr->dma.dst.header_id[0] = h->id;
+	instr->dma.dst.struct_id[0] = h->struct_id;
+	instr->dma.n_bytes[0] = h->st->n_bits / 8;
+	instr->dma.src.offset[0] = tf->offset / 8;
+
+	return 0;
+}
+
+static inline void
+__instr_dma_ht_exec(struct rte_swx_pipeline *p, uint32_t n_dma);
+
+static inline void
+__instr_dma_ht_exec(struct rte_swx_pipeline *p, uint32_t n_dma)
+{
+	struct thread *t = &p->threads[p->thread_id];
+	struct instruction *ip = t->ip;
+	uint8_t *action_data = t->structs[0];
+	uint64_t valid_headers = t->valid_headers;
+	uint32_t i;
+
+	for (i = 0; i < n_dma; i++) {
+		uint32_t header_id = ip->dma.dst.header_id[i];
+		uint32_t struct_id = ip->dma.dst.struct_id[i];
+		uint32_t offset = ip->dma.src.offset[i];
+		uint32_t n_bytes = ip->dma.n_bytes[i];
+
+		struct header_runtime *h = &t->headers[header_id];
+		uint8_t *h_ptr0 = h->ptr0;
+		uint8_t *h_ptr = t->structs[struct_id];
+
+		void *dst = MASK64_BIT_GET(valid_headers, header_id) ?
+			h_ptr : h_ptr0;
+		void *src = &action_data[offset];
+
+		TRACE("[Thread %2u] dma h.s t.f\n", p->thread_id);
+
+		/* Headers. */
+		memcpy(dst, src, n_bytes);
+		t->structs[struct_id] = dst;
+		valid_headers = MASK64_BIT_SET(valid_headers, header_id);
+	}
+
+	t->valid_headers = valid_headers;
+}
+
+static inline void
+instr_dma_ht_exec(struct rte_swx_pipeline *p)
+{
+	__instr_dma_ht_exec(p, 1);
+
+	/* Thread. */
+	thread_ip_inc(p);
+}
+
+static inline void
+instr_dma_ht2_exec(struct rte_swx_pipeline *p)
+{
+	TRACE("[Thread %2u] *** The next 2 instructions are fused. ***\n",
+	      p->thread_id);
+
+	__instr_dma_ht_exec(p, 2);
+
+	/* Thread. */
+	thread_ip_inc(p);
+}
+
+static inline void
+instr_dma_ht3_exec(struct rte_swx_pipeline *p)
+{
+	TRACE("[Thread %2u] *** The next 3 instructions are fused. ***\n",
+	      p->thread_id);
+
+	__instr_dma_ht_exec(p, 3);
+
+	/* Thread. */
+	thread_ip_inc(p);
+}
+
+static inline void
+instr_dma_ht4_exec(struct rte_swx_pipeline *p)
+{
+	TRACE("[Thread %2u] *** The next 4 instructions are fused. ***\n",
+	      p->thread_id);
+
+	__instr_dma_ht_exec(p, 4);
+
+	/* Thread. */
+	thread_ip_inc(p);
+}
+
+static inline void
+instr_dma_ht5_exec(struct rte_swx_pipeline *p)
+{
+	TRACE("[Thread %2u] *** The next 5 instructions are fused. ***\n",
+	      p->thread_id);
+
+	__instr_dma_ht_exec(p, 5);
+
+	/* Thread. */
+	thread_ip_inc(p);
+}
+
+static inline void
+instr_dma_ht6_exec(struct rte_swx_pipeline *p)
+{
+	TRACE("[Thread %2u] *** The next 6 instructions are fused. ***\n",
+	      p->thread_id);
+
+	__instr_dma_ht_exec(p, 6);
+
+	/* Thread. */
+	thread_ip_inc(p);
+}
+
+static inline void
+instr_dma_ht7_exec(struct rte_swx_pipeline *p)
+{
+	TRACE("[Thread %2u] *** The next 7 instructions are fused. ***\n",
+	      p->thread_id);
+
+	__instr_dma_ht_exec(p, 7);
+
+	/* Thread. */
+	thread_ip_inc(p);
+}
+
+static inline void
+instr_dma_ht8_exec(struct rte_swx_pipeline *p)
+{
+	TRACE("[Thread %2u] *** The next 8 instructions are fused. ***\n",
+	      p->thread_id);
+
+	__instr_dma_ht_exec(p, 8);
+
+	/* Thread. */
+	thread_ip_inc(p);
+}
+
 #define RTE_SWX_INSTRUCTION_TOKENS_MAX 16
 
 static int
@@ -2622,6 +2812,14 @@ instr_translate(struct rte_swx_pipeline *p,
 					  instr,
 					  data);
 
+	if (!strcmp(tokens[tpos], "dma"))
+		return instr_dma_translate(p,
+					   action,
+					   &tokens[tpos],
+					   n_tokens - tpos,
+					   instr,
+					   data);
+
 	CHECK(0, EINVAL);
 }
 
@@ -2770,6 +2968,15 @@ static instr_exec_t instruction_table[] = {
 	[INSTR_MOV] = instr_mov_exec,
 	[INSTR_MOV_S] = instr_mov_s_exec,
 	[INSTR_MOV_I] = instr_mov_i_exec,
+
+	[INSTR_DMA_HT] = instr_dma_ht_exec,
+	[INSTR_DMA_HT2] = instr_dma_ht2_exec,
+	[INSTR_DMA_HT3] = instr_dma_ht3_exec,
+	[INSTR_DMA_HT4] = instr_dma_ht4_exec,
+	[INSTR_DMA_HT5] = instr_dma_ht5_exec,
+	[INSTR_DMA_HT6] = instr_dma_ht6_exec,
+	[INSTR_DMA_HT7] = instr_dma_ht7_exec,
+	[INSTR_DMA_HT8] = instr_dma_ht8_exec,
 };
 
 static inline void
-- 
2.17.1