From: Nicolas Chautru <nicolas.chautru@intel.com>
To: dev@dpdk.org, gakhil@marvell.com
Cc: trix@redhat.com, thomas@monjalon.net, ray.kinsella@intel.com,
bruce.richardson@intel.com, hemant.agrawal@nxp.com,
mingshan.zhang@intel.com, david.marchand@redhat.com,
Nicolas Chautru <nicolas.chautru@intel.com>
Subject: [PATCH v1 6/9] baseband/acc101: support HARQ loopback
Date: Mon, 4 Apr 2022 14:13:45 -0700
Message-ID: <1649106828-116338-7-git-send-email-nicolas.chautru@intel.com>
In-Reply-To: <1649106828-116338-1-git-send-email-nicolas.chautru@intel.com>

Add a function implementing HARQ memory loopback on top of the
default 5G UL LDPC decode processing.

Signed-off-by: Nicolas Chautru <nicolas.chautru@intel.com>
---
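
For context: a minimal, hypothetical sketch of how an application could
exercise this loopback path through the generic bbdev API. This is not
part of the patch; the helper name, ids and mbuf handling are
illustrative assumptions only.

#include <rte_bbdev.h>
#include <rte_bbdev_op.h>
#include <rte_mbuf.h>

/*
 * Illustrative helper (hypothetical): loop HARQ data held in 'harq_mbuf'
 * back through the device, writing the result into device-internal HARQ
 * memory at 'ddr_offset'. Assumes 'op' was allocated from a bbdev op
 * mempool and the queue is configured for RTE_BBDEV_OP_LDPC_DEC.
 */
static int
loopback_one_op(uint16_t dev_id, uint16_t queue_id,
		struct rte_bbdev_dec_op *op, struct rte_mbuf *harq_mbuf,
		uint32_t harq_len, uint32_t ddr_offset)
{
	op->ldpc_dec.op_flags = RTE_BBDEV_LDPC_INTERNAL_HARQ_MEMORY_LOOPBACK |
			RTE_BBDEV_LDPC_INTERNAL_HARQ_MEMORY_OUT_ENABLE;
	/* HARQ input is read from the mbuf */
	op->ldpc_dec.harq_combined_input.data = harq_mbuf;
	op->ldpc_dec.harq_combined_input.offset = 0;
	op->ldpc_dec.harq_combined_input.length = harq_len;
	/* HARQ output lands in internal HARQ memory at this offset */
	op->ldpc_dec.harq_combined_output.offset = ddr_offset;

	if (rte_bbdev_enqueue_ldpc_dec_ops(dev_id, queue_id, &op, 1) != 1)
		return -1;
	/* Busy-poll until the single op is returned by the device */
	while (rte_bbdev_dequeue_ldpc_dec_ops(dev_id, queue_id, &op, 1) == 0)
		;
	return op->status == 0 ? 0 : -1;
}
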
doc/guides/bbdevs/acc101.rst | 1 +
drivers/baseband/acc101/rte_acc101_pmd.c | 157 +++++++++++++++++++++++++++++++
2 files changed, 158 insertions(+)
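
Also for context: when RTE_BBDEV_LDPC_HARQ_6BIT_COMPRESSION is set, the
HARQ buffer sizing in harq_loopback() below follows the conversion
sketched here (example numbers chosen purely for illustration):

	/* Example only: 600 compressed HARQ bytes requested for loopback */
	uint16_t harq_in_length = 600;
	/* Equivalent uncompressed size: 600 * 8 / 6 = 800 bytes */
	harq_in_length = harq_in_length * 8 / 6;
	/* Rounded up to a 64-byte multiple: 832 bytes */
	harq_in_length = RTE_ALIGN(harq_in_length, 64);
	/* DMA length is the compressed size of that: 832 * 6 / 8 = 624 bytes */
	uint16_t harq_dma_length_in = harq_in_length * 6 / 8;
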
diff --git a/doc/guides/bbdevs/acc101.rst b/doc/guides/bbdevs/acc101.rst
index ae27bf3..49f6c74 100644
--- a/doc/guides/bbdevs/acc101.rst
+++ b/doc/guides/bbdevs/acc101.rst
@@ -38,6 +38,7 @@ ACC101 5G/4G FEC PMD supports the following BBDEV capabilities:
- ``RTE_BBDEV_LDPC_HQ_COMBINE_OUT_ENABLE`` : provides an input for HARQ combining
- ``RTE_BBDEV_LDPC_INTERNAL_HARQ_MEMORY_IN_ENABLE`` : HARQ memory input is internal
- ``RTE_BBDEV_LDPC_INTERNAL_HARQ_MEMORY_OUT_ENABLE`` : HARQ memory output is internal
+ - ``RTE_BBDEV_LDPC_INTERNAL_HARQ_MEMORY_LOOPBACK`` : loopback data to/from HARQ memory
- ``RTE_BBDEV_LDPC_INTERNAL_HARQ_MEMORY_FILLERS`` : HARQ memory includes the fillers bits
- ``RTE_BBDEV_LDPC_DEC_SCATTER_GATHER`` : supports scatter-gather for input/output data
- ``RTE_BBDEV_LDPC_HARQ_6BIT_COMPRESSION`` : supports compression of the HARQ input/output
diff --git a/drivers/baseband/acc101/rte_acc101_pmd.c b/drivers/baseband/acc101/rte_acc101_pmd.c
index 2861907..2009c1a 100644
--- a/drivers/baseband/acc101/rte_acc101_pmd.c
+++ b/drivers/baseband/acc101/rte_acc101_pmd.c
@@ -784,6 +784,7 @@
RTE_BBDEV_LDPC_HQ_COMBINE_IN_ENABLE |
RTE_BBDEV_LDPC_HQ_COMBINE_OUT_ENABLE |
#ifdef ACC101_EXT_MEM
+ RTE_BBDEV_LDPC_INTERNAL_HARQ_MEMORY_LOOPBACK |
RTE_BBDEV_LDPC_INTERNAL_HARQ_MEMORY_IN_ENABLE |
RTE_BBDEV_LDPC_INTERNAL_HARQ_MEMORY_OUT_ENABLE |
#endif
@@ -1730,6 +1731,157 @@ static inline uint32_t hq_index(uint32_t offset)
return return_descs;
}
+static inline int
+harq_loopback(struct acc101_queue *q, struct rte_bbdev_dec_op *op,
+		uint16_t total_enqueued_cbs) {
+	struct acc101_fcw_ld *fcw;
+	union acc101_dma_desc *desc;
+	int next_triplet = 1;
+	struct rte_mbuf *hq_output_head, *hq_output;
+	uint16_t harq_dma_length_in, harq_dma_length_out;
+	uint16_t harq_in_length = op->ldpc_dec.harq_combined_input.length;
+	if (harq_in_length == 0) {
+		rte_bbdev_log(ERR, "Loopback requested with null HARQ input size\n");
+		return -EINVAL;
+	}
+
+	int h_comp = check_bit(op->ldpc_dec.op_flags,
+			RTE_BBDEV_LDPC_HARQ_6BIT_COMPRESSION
+			) ? 1 : 0;
+	if (h_comp == 1) {
+		harq_in_length = harq_in_length * 8 / 6;
+		harq_in_length = RTE_ALIGN(harq_in_length, 64);
+		harq_dma_length_in = harq_in_length * 6 / 8;
+	} else {
+		harq_in_length = RTE_ALIGN(harq_in_length, 64);
+		harq_dma_length_in = harq_in_length;
+	}
+	harq_dma_length_out = harq_dma_length_in;
+
+	bool ddr_mem_in = check_bit(op->ldpc_dec.op_flags,
+			RTE_BBDEV_LDPC_INTERNAL_HARQ_MEMORY_IN_ENABLE);
+	union acc101_harq_layout_data *harq_layout = q->d->harq_layout;
+	uint32_t harq_index = hq_index(ddr_mem_in ?
+			op->ldpc_dec.harq_combined_input.offset :
+			op->ldpc_dec.harq_combined_output.offset);
+
+	uint16_t desc_idx = ((q->sw_ring_head + total_enqueued_cbs)
+			& q->sw_ring_wrap_mask);
+	desc = q->ring_addr + desc_idx;
+	fcw = &desc->req.fcw_ld;
+	/* Set up the FCW for the loopback operation */
+	memset(fcw, 0, sizeof(struct acc101_fcw_ld));
+	fcw->FCWversion = ACC101_FCW_VER;
+	fcw->qm = 2;
+	fcw->Zc = 384;
+	if (harq_in_length < 16 * ACC101_N_ZC_1)
+		fcw->Zc = 16;
+	fcw->ncb = fcw->Zc * ACC101_N_ZC_1;
+	fcw->rm_e = 2;
+	fcw->hcin_en = 1;
+	fcw->hcout_en = 1;
+
+	rte_bbdev_log(DEBUG, "Loopback IN %d Index %d offset %d length %d %d\n",
+			ddr_mem_in, harq_index,
+			harq_layout[harq_index].offset, harq_in_length,
+			harq_dma_length_in);
+
+	if (ddr_mem_in && (harq_layout[harq_index].offset > 0)) {
+		fcw->hcin_size0 = harq_layout[harq_index].size0;
+		fcw->hcin_offset = harq_layout[harq_index].offset;
+		fcw->hcin_size1 = harq_in_length - fcw->hcin_offset;
+		harq_dma_length_in = (fcw->hcin_size0 + fcw->hcin_size1);
+		if (h_comp == 1)
+			harq_dma_length_in = harq_dma_length_in * 6 / 8;
+	} else {
+		fcw->hcin_size0 = harq_in_length;
+	}
+	harq_layout[harq_index].val = 0;
+	rte_bbdev_log(DEBUG, "Loopback FCW Config %d %d %d\n",
+			fcw->hcin_size0, fcw->hcin_offset, fcw->hcin_size1);
+	fcw->hcout_size0 = harq_in_length;
+	fcw->hcin_decomp_mode = h_comp;
+	fcw->hcout_comp_mode = h_comp;
+	fcw->gain_i = 1;
+	fcw->gain_h = 1;
+
+	/* Set the descriptor header. This could also be deferred to polling time */
+	acc101_header_init(&desc->req);
+
+	/* Null LLR input for Decoder */
+	desc->req.data_ptrs[next_triplet].address =
+			q->lb_in_addr_iova;
+	desc->req.data_ptrs[next_triplet].blen = 2;
+	desc->req.data_ptrs[next_triplet].blkid = ACC101_DMA_BLKID_IN;
+	desc->req.data_ptrs[next_triplet].last = 0;
+	desc->req.data_ptrs[next_triplet].dma_ext = 0;
+	next_triplet++;
+
+	/* HARQ combine input, from either external mbuf or internal DDR memory */
+	if (!ddr_mem_in) {
+		next_triplet = acc101_dma_fill_blk_type_out(&desc->req,
+				op->ldpc_dec.harq_combined_input.data,
+				op->ldpc_dec.harq_combined_input.offset,
+				harq_dma_length_in,
+				next_triplet,
+				ACC101_DMA_BLKID_IN_HARQ);
+	} else {
+		desc->req.data_ptrs[next_triplet].address =
+				op->ldpc_dec.harq_combined_input.offset;
+		desc->req.data_ptrs[next_triplet].blen =
+				harq_dma_length_in;
+		desc->req.data_ptrs[next_triplet].blkid =
+				ACC101_DMA_BLKID_IN_HARQ;
+		desc->req.data_ptrs[next_triplet].dma_ext = 1;
+		next_triplet++;
+	}
+	desc->req.data_ptrs[next_triplet - 1].last = 1;
+	desc->req.m2dlen = next_triplet;
+
+	/* Dropped decoder hard output */
+	desc->req.data_ptrs[next_triplet].address =
+			q->lb_out_addr_iova;
+	desc->req.data_ptrs[next_triplet].blen = ACC101_BYTES_IN_WORD;
+	desc->req.data_ptrs[next_triplet].blkid = ACC101_DMA_BLKID_OUT_HARD;
+	desc->req.data_ptrs[next_triplet].last = 0;
+	desc->req.data_ptrs[next_triplet].dma_ext = 0;
+	next_triplet++;
+
+	/* HARQ combine output, to either internal DDR memory or external mbuf */
+	if (check_bit(op->ldpc_dec.op_flags,
+			RTE_BBDEV_LDPC_INTERNAL_HARQ_MEMORY_OUT_ENABLE
+			)) {
+		desc->req.data_ptrs[next_triplet].address =
+				op->ldpc_dec.harq_combined_output.offset;
+		desc->req.data_ptrs[next_triplet].blen =
+				harq_dma_length_out;
+		desc->req.data_ptrs[next_triplet].blkid =
+				ACC101_DMA_BLKID_OUT_HARQ;
+		desc->req.data_ptrs[next_triplet].dma_ext = 1;
+		next_triplet++;
+	} else {
+		hq_output_head = op->ldpc_dec.harq_combined_output.data;
+		hq_output = op->ldpc_dec.harq_combined_output.data;
+		next_triplet = acc101_dma_fill_blk_type_out(
+				&desc->req,
+				op->ldpc_dec.harq_combined_output.data,
+				op->ldpc_dec.harq_combined_output.offset,
+				harq_dma_length_out,
+				next_triplet,
+				ACC101_DMA_BLKID_OUT_HARQ);
+		/* HARQ output */
+		mbuf_append(hq_output_head, hq_output, harq_dma_length_out);
+		op->ldpc_dec.harq_combined_output.length =
+				harq_dma_length_out;
+	}
+	desc->req.data_ptrs[next_triplet - 1].last = 1;
+	desc->req.d2mlen = next_triplet - desc->req.m2dlen;
+	desc->req.op_addr = op;
+
+	/* One CB (one op) was successfully prepared to enqueue */
+	return 1;
+}
+
/** Enqueue one decode operations for ACC101 device in CB mode */
static inline int
enqueue_ldpc_dec_one_op_cb(struct acc101_queue *q, struct rte_bbdev_dec_op *op,
@@ -1738,6 +1890,11 @@ static inline uint32_t hq_index(uint32_t offset)
{
RTE_SET_USED(q_data);
int ret;
+	if (unlikely(check_bit(op->ldpc_dec.op_flags,
+			RTE_BBDEV_LDPC_INTERNAL_HARQ_MEMORY_LOOPBACK))) {
+		ret = harq_loopback(q, op, total_enqueued_cbs);
+		return ret;
+	}
union acc101_dma_desc *desc;
uint16_t desc_idx = ((q->sw_ring_head + total_enqueued_cbs)
--
1.8.3.1
Thread overview (12+ messages):
2022-04-04 21:13 [PATCH v1 0/9] drivers/baseband: new PMD for ACC101 device Nicolas Chautru
2022-04-04 21:13 ` [PATCH v1 1/9] baseband/acc101: introduce PMD for ACC101 Nicolas Chautru
2022-04-04 21:13 ` [PATCH v1 2/9] baseband/acc101: add HW register definition Nicolas Chautru
2022-04-04 21:13 ` [PATCH v1 3/9] baseband/acc101: add info get function Nicolas Chautru
2022-04-04 21:13 ` [PATCH v1 4/9] baseband/acc101: add queue configuration Nicolas Chautru
2022-04-04 21:13 ` [PATCH v1 5/9] baseband/acc101: add LDPC processing Nicolas Chautru
2022-04-04 21:13 ` Nicolas Chautru [this message]
2022-04-04 21:13 ` [PATCH v1 7/9] baseband/acc101: support 4G processing Nicolas Chautru
2022-04-04 21:13 ` [PATCH v1 8/9] baseband/acc101: support MSI interrupt Nicolas Chautru
2022-04-04 21:13 ` [PATCH v1 9/9] baseband/acc101: add device configure function Nicolas Chautru
2022-04-05 6:48 ` [PATCH v1 0/9] drivers/baseband: new PMD for ACC101 device Thomas Monjalon
2022-04-05 6:57 ` David Marchand