Date: Fri, 4 Oct 2024 14:08:19 +0200
Subject: Re: [PATCH v2 02/10] baseband/acc: queue allocation refactor
From: Maxime Coquelin <maxime.coquelin@redhat.com>
To: Hernan Vargas, dev@dpdk.org, gakhil@marvell.com, trix@redhat.com
Cc: nicolas.chautru@intel.com, qi.z.zhang@intel.com
References: <20241003204912.131319-1-hernan.vargas@intel.com>
 <20241003204912.131319-3-hernan.vargas@intel.com>
In-Reply-To: <20241003204912.131319-3-hernan.vargas@intel.com>

On 10/3/24 22:49, Hernan Vargas wrote:
> Refactor to manage queue memory per operation more flexibly for VRB
> devices.
> 
> Signed-off-by: Hernan Vargas
> ---
>  drivers/baseband/acc/acc_common.h  |   5 +
>  drivers/baseband/acc/rte_vrb_pmd.c | 214 ++++++++++++++++++++---------
>  2 files changed, 157 insertions(+), 62 deletions(-)
> 
> diff --git a/drivers/baseband/acc/acc_common.h b/drivers/baseband/acc/acc_common.h
> index b1f81e73e68d..adbac0dcca70 100644
> --- a/drivers/baseband/acc/acc_common.h
> +++ b/drivers/baseband/acc/acc_common.h
> @@ -149,6 +149,8 @@
>  #define VRB2_VF_ID_SHIFT     6
>  
>  #define ACC_MAX_FFT_WIN      16
> +#define ACC_MAX_RING_BUFFER  64
> +#define VRB2_MAX_Q_PER_OP    256
>  
>  extern int acc_common_logtype;
>  
> @@ -581,6 +583,9 @@ struct acc_device {
>  	void *sw_rings_base;  /* Base addr of un-aligned memory for sw rings */
>  	void *sw_rings;  /* 64MBs of 64MB aligned memory for sw rings */
>  	rte_iova_t sw_rings_iova;  /* IOVA address of sw_rings */
> +	void *sw_rings_array[ACC_MAX_RING_BUFFER];  /* Array of aligned memory for sw rings. */
> +	rte_iova_t sw_rings_iova_array[ACC_MAX_RING_BUFFER];  /* Array of sw_rings IOVA. */
> +	uint32_t queue_index[ACC_MAX_RING_BUFFER];  /* Tracking queue index per ring buffer. */
>  	/* Virtual address of the info memory routed to the this function under
>  	 * operation, whether it is PF or VF.
>  	 * HW may DMA information data at this location asynchronously
> diff --git a/drivers/baseband/acc/rte_vrb_pmd.c b/drivers/baseband/acc/rte_vrb_pmd.c
> index bae01e563826..2c62a5b3e329 100644
> --- a/drivers/baseband/acc/rte_vrb_pmd.c
> +++ b/drivers/baseband/acc/rte_vrb_pmd.c
> @@ -281,7 +281,7 @@ fetch_acc_config(struct rte_bbdev *dev)
>  	/* Check the depth of the AQs.
>  	 */
>  	reg_len0 = acc_reg_read(d, d->reg_addr->depth_log0_offset);
>  	reg_len1 = acc_reg_read(d, d->reg_addr->depth_log1_offset);
> -	for (acc = 0; acc < NUM_ACC; acc++) {
> +	for (acc = 0; acc < VRB1_NUM_ACCS; acc++) {
>  		qtopFromAcc(&q_top, acc, acc_conf);
>  		if (q_top->first_qgroup_index < ACC_NUM_QGRPS_PER_WORD)
>  			q_top->aq_depth_log2 =
> @@ -290,7 +290,7 @@ fetch_acc_config(struct rte_bbdev *dev)
>  			q_top->aq_depth_log2 = (reg_len1 >> ((q_top->first_qgroup_index -
>  					ACC_NUM_QGRPS_PER_WORD) * 4)) & 0xF;
>  		}
> -	} else {
> +	} else if (d->device_variant == VRB2_VARIANT) {
>  		reg0 = acc_reg_read(d, d->reg_addr->qman_group_func);
>  		reg1 = acc_reg_read(d, d->reg_addr->qman_group_func + 4);
>  		reg2 = acc_reg_read(d, d->reg_addr->qman_group_func + 8);
> @@ -308,7 +308,7 @@ fetch_acc_config(struct rte_bbdev *dev)
>  				idx = (reg2 >> ((qg % ACC_NUM_QGRPS_PER_WORD) * 4)) & 0x7;
>  			else
>  				idx = (reg3 >> ((qg % ACC_NUM_QGRPS_PER_WORD) * 4)) & 0x7;
> -			if (idx < VRB_NUM_ACCS) {
> +			if (idx < VRB2_NUM_ACCS) {
>  				acc = qman_func_id[idx];
>  				updateQtop(acc, qg, acc_conf, d);
>  			}
> @@ -321,7 +321,7 @@ fetch_acc_config(struct rte_bbdev *dev)
>  		reg_len2 = acc_reg_read(d, d->reg_addr->depth_log0_offset + 8);
>  		reg_len3 = acc_reg_read(d, d->reg_addr->depth_log0_offset + 12);
>  
> -		for (acc = 0; acc < NUM_ACC; acc++) {
> +		for (acc = 0; acc < VRB2_NUM_ACCS; acc++) {
>  			qtopFromAcc(&q_top, acc, acc_conf);
>  			if (q_top->first_qgroup_index / ACC_NUM_QGRPS_PER_WORD == 0)
>  				q_top->aq_depth_log2 = (reg_len0 >> ((q_top->first_qgroup_index %

This function could be heavily refactored. If we look at what is actually
performed, the VRB1 and VRB2 logic is the same; just a couple of values
differ, and they could be set at probe time.

I might propose something in the future.
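To illustrate what I mean, something along these lines (the struct, names
and numeric values below are only hypothetical placeholders, not actual
driver parameters):

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical per-variant constants, resolved once at probe time.
 * The numeric values are placeholders, not the real HW parameters. */
struct variant_params {
	uint16_t num_accs;        /* would replace NUM_ACC vs VRB2_NUM_ACCS */
	uint16_t num_qgroups;
};

enum variant { VRB1, VRB2 };

static const struct variant_params params_tbl[] = {
	[VRB1] = { .num_accs = 5, .num_qgroups = 16 },
	[VRB2] = { .num_accs = 6, .num_qgroups = 32 },
};

/* At probe time the device would store a pointer to its params... */
static const struct variant_params *probe(enum variant v)
{
	return &params_tbl[v];
}

/* ...so the config-fetch loop no longer needs to branch on the variant. */
static int count_accs(const struct variant_params *p)
{
	int acc, n = 0;

	for (acc = 0; acc < p->num_accs; acc++)
		n++;  /* stands in for the qtopFromAcc()/aq_depth handling */
	return n;
}
```

With that, a single loop body covers both variants and the per-variant
`if`/`else if` ladders collapse.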
> @@ -543,6 +543,7 @@ vrb_setup_queues(struct rte_bbdev *dev, uint16_t num_queues, int socket_id)
>  {
>  	uint32_t phys_low, phys_high, value;
>  	struct acc_device *d = dev->data->dev_private;
> +	uint16_t queues_per_op, i;
>  	int ret;
>  
>  	if (d->pf_device && !d->acc_conf.pf_mode_en) {
> @@ -564,27 +565,37 @@ vrb_setup_queues(struct rte_bbdev *dev, uint16_t num_queues, int socket_id)
>  		return -ENODEV;
>  	}
>  
> -	alloc_sw_rings_min_mem(dev, d, num_queues, socket_id);
> +	if (d->device_variant == VRB1_VARIANT) {
> +		alloc_sw_rings_min_mem(dev, d, num_queues, socket_id);
>  
> -	/* If minimal memory space approach failed, then allocate
> -	 * the 2 * 64MB block for the sw rings.
> -	 */
> -	if (d->sw_rings == NULL)
> -		alloc_2x64mb_sw_rings_mem(dev, d, socket_id);
> +		/* If minimal memory space approach failed, then allocate
> +		 * the 2 * 64MB block for the sw rings.
> +		 */
> +		if (d->sw_rings == NULL)
> +			alloc_2x64mb_sw_rings_mem(dev, d, socket_id);
>  
> -	if (d->sw_rings == NULL) {
> -		rte_bbdev_log(NOTICE,
> -				"Failure allocating sw_rings memory");
> -		return -ENOMEM;
> +		if (d->sw_rings == NULL) {
> +			rte_bbdev_log(NOTICE, "Failure allocating sw_rings memory");
> +			return -ENOMEM;
> +		}
> +	} else if (d->device_variant == VRB2_VARIANT) {
> +		queues_per_op = RTE_MIN(VRB2_MAX_Q_PER_OP, num_queues);
> +		for (i = 0; i <= RTE_BBDEV_OP_MLDTS; i++) {
> +			alloc_sw_rings_min_mem(dev, d, queues_per_op, socket_id);
> +			if (d->sw_rings == NULL) {
> +				rte_bbdev_log(NOTICE, "Failure allocating sw_rings memory %d", i);
> +				return -ENOMEM;
> +			}
> +			/* Moves the pointer to the relevant array. */
> +			d->sw_rings_array[i] = d->sw_rings;
> +			d->sw_rings_iova_array[i] = d->sw_rings_iova;
> +			d->sw_rings = NULL;
> +			d->sw_rings_base = NULL;
> +			d->sw_rings_iova = 0;
> +			d->queue_index[i] = 0;
> +		}
>  	}
>  
> -	/* Configure device with the base address for DMA descriptor rings.
> -	 * Same descriptor rings used for UL and DL DMA Engines.
> -	 * Note : Assuming only VF0 bundle is used for PF mode.
> -	 */
> -	phys_high = (uint32_t)(d->sw_rings_iova >> 32);
> -	phys_low = (uint32_t)(d->sw_rings_iova & ~(ACC_SIZE_64MBYTE-1));
> -
>  	/* Read the populated cfg from device registers. */
>  	fetch_acc_config(dev);
>  
> @@ -599,20 +610,60 @@ vrb_setup_queues(struct rte_bbdev *dev, uint16_t num_queues, int socket_id)
>  	if (d->pf_device)
>  		acc_reg_write(d, VRB1_PfDmaAxiControl, 1);
>  
> -	acc_reg_write(d, d->reg_addr->dma_ring_ul5g_hi, phys_high);
> -	acc_reg_write(d, d->reg_addr->dma_ring_ul5g_lo, phys_low);
> -	acc_reg_write(d, d->reg_addr->dma_ring_dl5g_hi, phys_high);
> -	acc_reg_write(d, d->reg_addr->dma_ring_dl5g_lo, phys_low);
> -	acc_reg_write(d, d->reg_addr->dma_ring_ul4g_hi, phys_high);
> -	acc_reg_write(d, d->reg_addr->dma_ring_ul4g_lo, phys_low);
> -	acc_reg_write(d, d->reg_addr->dma_ring_dl4g_hi, phys_high);
> -	acc_reg_write(d, d->reg_addr->dma_ring_dl4g_lo, phys_low);
> -	acc_reg_write(d, d->reg_addr->dma_ring_fft_hi, phys_high);
> -	acc_reg_write(d, d->reg_addr->dma_ring_fft_lo, phys_low);
> -	if (d->device_variant == VRB2_VARIANT) {
> -		acc_reg_write(d, d->reg_addr->dma_ring_mld_hi, phys_high);
> -		acc_reg_write(d, d->reg_addr->dma_ring_mld_lo, phys_low);
> +	if (d->device_variant == VRB1_VARIANT) {
> +		/* Configure device with the base address for DMA descriptor rings.
> +		 * Same descriptor rings used for UL and DL DMA Engines.
> +		 * Note : Assuming only VF0 bundle is used for PF mode.
> +		 */
> +		phys_high = (uint32_t)(d->sw_rings_iova >> 32);
> +		phys_low = (uint32_t)(d->sw_rings_iova & ~(ACC_SIZE_64MBYTE-1));
> +		acc_reg_write(d, d->reg_addr->dma_ring_ul5g_hi, phys_high);
> +		acc_reg_write(d, d->reg_addr->dma_ring_ul5g_lo, phys_low);
> +		acc_reg_write(d, d->reg_addr->dma_ring_dl5g_hi, phys_high);
> +		acc_reg_write(d, d->reg_addr->dma_ring_dl5g_lo, phys_low);
> +		acc_reg_write(d, d->reg_addr->dma_ring_ul4g_hi, phys_high);
> +		acc_reg_write(d, d->reg_addr->dma_ring_ul4g_lo, phys_low);
> +		acc_reg_write(d, d->reg_addr->dma_ring_dl4g_hi, phys_high);
> +		acc_reg_write(d, d->reg_addr->dma_ring_dl4g_lo, phys_low);
> +		acc_reg_write(d, d->reg_addr->dma_ring_fft_hi, phys_high);
> +		acc_reg_write(d, d->reg_addr->dma_ring_fft_lo, phys_low);
> +	} else if (d->device_variant == VRB2_VARIANT) {
> +		/* Configure device with the base address for DMA descriptor rings.
> +		 * Different ring buffer used for each operation type.
> +		 * Note : Assuming only VF0 bundle is used for PF mode.
> +		 */
> +		acc_reg_write(d, d->reg_addr->dma_ring_ul5g_hi,
> +				(uint32_t)(d->sw_rings_iova_array[RTE_BBDEV_OP_LDPC_DEC] >> 32));
> +		acc_reg_write(d, d->reg_addr->dma_ring_ul5g_lo,
> +				(uint32_t)(d->sw_rings_iova_array[RTE_BBDEV_OP_LDPC_DEC]
> +				& ~(ACC_SIZE_64MBYTE - 1)));
> +		acc_reg_write(d, d->reg_addr->dma_ring_dl5g_hi,
> +				(uint32_t)(d->sw_rings_iova_array[RTE_BBDEV_OP_LDPC_ENC] >> 32));
> +		acc_reg_write(d, d->reg_addr->dma_ring_dl5g_lo,
> +				(uint32_t)(d->sw_rings_iova_array[RTE_BBDEV_OP_LDPC_ENC]
> +				& ~(ACC_SIZE_64MBYTE - 1)));
> +		acc_reg_write(d, d->reg_addr->dma_ring_ul4g_hi,
> +				(uint32_t)(d->sw_rings_iova_array[RTE_BBDEV_OP_TURBO_DEC] >> 32));
> +		acc_reg_write(d, d->reg_addr->dma_ring_ul4g_lo,
> +				(uint32_t)(d->sw_rings_iova_array[RTE_BBDEV_OP_TURBO_DEC]
> +				& ~(ACC_SIZE_64MBYTE - 1)));
> +		acc_reg_write(d, d->reg_addr->dma_ring_dl4g_hi,
> +				(uint32_t)(d->sw_rings_iova_array[RTE_BBDEV_OP_TURBO_ENC] >> 32));
> +		acc_reg_write(d, d->reg_addr->dma_ring_dl4g_lo,
> +				(uint32_t)(d->sw_rings_iova_array[RTE_BBDEV_OP_TURBO_ENC]
> +				& ~(ACC_SIZE_64MBYTE - 1)));
> +		acc_reg_write(d, d->reg_addr->dma_ring_fft_hi,
> +				(uint32_t)(d->sw_rings_iova_array[RTE_BBDEV_OP_FFT] >> 32));
> +		acc_reg_write(d, d->reg_addr->dma_ring_fft_lo,
> +				(uint32_t)(d->sw_rings_iova_array[RTE_BBDEV_OP_FFT]
> +				& ~(ACC_SIZE_64MBYTE - 1)));
> +		acc_reg_write(d, d->reg_addr->dma_ring_mld_hi,
> +				(uint32_t)(d->sw_rings_iova_array[RTE_BBDEV_OP_MLDTS] >> 32));
> +		acc_reg_write(d, d->reg_addr->dma_ring_mld_lo,
> +				(uint32_t)(d->sw_rings_iova_array[RTE_BBDEV_OP_MLDTS]
> +				& ~(ACC_SIZE_64MBYTE - 1)));
>  	}
> +
>  	/*
>  	 * Configure Ring Size to the max queue ring size
>  	 * (used for wrapping purpose).
> @@ -636,19 +687,21 @@ vrb_setup_queues(struct rte_bbdev *dev, uint16_t num_queues, int socket_id)
>  
>  	phys_high = (uint32_t)(d->tail_ptr_iova >> 32);
>  	phys_low = (uint32_t)(d->tail_ptr_iova);
> -	acc_reg_write(d, d->reg_addr->tail_ptrs_ul5g_hi, phys_high);
> -	acc_reg_write(d, d->reg_addr->tail_ptrs_ul5g_lo, phys_low);
> -	acc_reg_write(d, d->reg_addr->tail_ptrs_dl5g_hi, phys_high);
> -	acc_reg_write(d, d->reg_addr->tail_ptrs_dl5g_lo, phys_low);
> -	acc_reg_write(d, d->reg_addr->tail_ptrs_ul4g_hi, phys_high);
> -	acc_reg_write(d, d->reg_addr->tail_ptrs_ul4g_lo, phys_low);
> -	acc_reg_write(d, d->reg_addr->tail_ptrs_dl4g_hi, phys_high);
> -	acc_reg_write(d, d->reg_addr->tail_ptrs_dl4g_lo, phys_low);
> -	acc_reg_write(d, d->reg_addr->tail_ptrs_fft_hi, phys_high);
> -	acc_reg_write(d, d->reg_addr->tail_ptrs_fft_lo, phys_low);
> -	if (d->device_variant == VRB2_VARIANT) {
> -		acc_reg_write(d, d->reg_addr->tail_ptrs_mld_hi, phys_high);
> -		acc_reg_write(d, d->reg_addr->tail_ptrs_mld_lo, phys_low);
> +	{
> +		acc_reg_write(d, d->reg_addr->tail_ptrs_ul5g_hi, phys_high);
> +		acc_reg_write(d, d->reg_addr->tail_ptrs_ul5g_lo, phys_low);
> +		acc_reg_write(d, d->reg_addr->tail_ptrs_dl5g_hi, phys_high);
> +		acc_reg_write(d, d->reg_addr->tail_ptrs_dl5g_lo, phys_low);
> +		acc_reg_write(d, d->reg_addr->tail_ptrs_ul4g_hi, phys_high);
> +		acc_reg_write(d, d->reg_addr->tail_ptrs_ul4g_lo, phys_low);
> +		acc_reg_write(d, d->reg_addr->tail_ptrs_dl4g_hi, phys_high);
> +		acc_reg_write(d, d->reg_addr->tail_ptrs_dl4g_lo, phys_low);
> +		acc_reg_write(d, d->reg_addr->tail_ptrs_fft_hi, phys_high);
> +		acc_reg_write(d, d->reg_addr->tail_ptrs_fft_lo, phys_low);
> +		if (d->device_variant == VRB2_VARIANT) {
> +			acc_reg_write(d, d->reg_addr->tail_ptrs_mld_hi, phys_high);
> +			acc_reg_write(d, d->reg_addr->tail_ptrs_mld_lo, phys_low);
> +		}
>  	}
>  
>  	ret = allocate_info_ring(dev);
> @@ -684,8 +737,13 @@ vrb_setup_queues(struct rte_bbdev *dev, uint16_t num_queues, int socket_id)
>  	rte_free(d->tail_ptrs);
>  	d->tail_ptrs = NULL;
>  free_sw_rings:
> -	rte_free(d->sw_rings_base);
> -	d->sw_rings = NULL;
> +	if (d->device_variant == VRB1_VARIANT) {
> +		rte_free(d->sw_rings_base);
> +		d->sw_rings = NULL;

It was not caught initially, but it looks awkward to free sw_rings_base
and then set sw_rings to NULL. sw_rings_base should also be set to NULL.

> +	} else if (d->device_variant == VRB2_VARIANT) {
> +		for (i = 0; i <= RTE_BBDEV_OP_MLDTS; i++)
> +			rte_free(d->sw_rings_array[i]);

Same here, you should set sw_rings_array[i] to NULL to avoid a double free
later.

> +	}
>  
>  	return ret;
>  }
> @@ -809,17 +867,34 @@ vrb_intr_enable(struct rte_bbdev *dev)
>  static int
>  vrb_dev_close(struct rte_bbdev *dev)
>  {
> +	int i;
>  	struct acc_device *d = dev->data->dev_private;
> +
>  	vrb_check_ir(d);
> -	if (d->sw_rings_base != NULL) {
> -		rte_free(d->tail_ptrs);
> -		rte_free(d->info_ring);
> -		rte_free(d->sw_rings_base);
> -		rte_free(d->harq_layout);
> -		d->tail_ptrs = NULL;
> -		d->info_ring = NULL;
> -		d->sw_rings_base = NULL;
> -		d->harq_layout = NULL;
> +	if (d->device_variant == VRB1_VARIANT) {
> +		if (d->sw_rings_base != NULL) {
> +			rte_free(d->tail_ptrs);
> +			rte_free(d->info_ring);
> +			rte_free(d->sw_rings_base);
> +			rte_free(d->harq_layout);
> +			d->tail_ptrs = NULL;
> +			d->info_ring = NULL;
> +			d->sw_rings_base = NULL;
> +			d->harq_layout = NULL;
> +		}

Actually, it would be cleaner to perform the free operations
systematically; there is no need to check whether sw_rings_base is NULL.

> +	} else if (d->device_variant == VRB2_VARIANT) {
> +		if (d->sw_rings_array[1] != NULL) {

Why 1 and not 0? And, as mentioned above, this can be done unconditionally.
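To be explicit about the pattern I am suggesting, here is a small
standalone sketch (using plain free() as a stand-in for rte_free(), which
is likewise a no-op on NULL):

```c
#include <assert.h>
#include <stdlib.h>

#define NUM_OP_TYPES 6  /* stand-in for RTE_BBDEV_OP_MLDTS + 1 */

/* Free every per-operation ring buffer and reset the pointer, so a later
 * teardown path (e.g. dev_close after a failed setup) cannot double free. */
static void free_rings_array(void *rings[NUM_OP_TYPES])
{
	int i;

	for (i = 0; i < NUM_OP_TYPES; i++) {
		free(rings[i]);   /* free(NULL) is a no-op, no check needed */
		rings[i] = NULL;  /* makes a second teardown pass harmless */
	}
}
```

With the pointers reset unconditionally, both the error path in setup and
the close path can call the same helper without tracking which one ran
first.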
> +			rte_free(d->tail_ptrs);
> +			rte_free(d->info_ring);
> +			rte_free(d->harq_layout);
> +			d->tail_ptrs = NULL;
> +			d->info_ring = NULL;
> +			d->harq_layout = NULL;
> +			for (i = 0; i <= RTE_BBDEV_OP_MLDTS; i++) {
> +				rte_free(d->sw_rings_array[i]);
> +				d->sw_rings_array[i] = NULL;
> +			}
> +		}
>  	}
>  	/* Ensure all in flight HW transactions are completed. */
>  	usleep(ACC_LONG_WAIT);
> @@ -890,8 +965,16 @@ vrb_queue_setup(struct rte_bbdev *dev, uint16_t queue_id,
>  	}
>  
>  	q->d = d;
> -	q->ring_addr = RTE_PTR_ADD(d->sw_rings, (d->sw_ring_size * queue_id));
> -	q->ring_addr_iova = d->sw_rings_iova + (d->sw_ring_size * queue_id);
> +	if (d->device_variant == VRB1_VARIANT) {
> +		q->ring_addr = RTE_PTR_ADD(d->sw_rings, (d->sw_ring_size * queue_id));
> +		q->ring_addr_iova = d->sw_rings_iova + (d->sw_ring_size * queue_id);
> +	} else if (d->device_variant == VRB2_VARIANT) {
> +		q->ring_addr = RTE_PTR_ADD(d->sw_rings_array[conf->op_type],
> +				(d->sw_ring_size * d->queue_index[conf->op_type]));
> +		q->ring_addr_iova = d->sw_rings_iova_array[conf->op_type] +
> +				(d->sw_ring_size * d->queue_index[conf->op_type]);
> +		d->queue_index[conf->op_type]++;
> +	}
>  
>  	/* Prepare the Ring with default descriptor format.
>  	 */
>  	union acc_dma_desc *desc = NULL;
> @@ -1347,8 +1430,14 @@ vrb_dev_info_get(struct rte_bbdev *dev, struct rte_bbdev_driver_info *dev_info)
>  	dev_info->queue_priority[RTE_BBDEV_OP_FFT] = d->acc_conf.q_fft.num_qgroups;
>  	dev_info->queue_priority[RTE_BBDEV_OP_MLDTS] = d->acc_conf.q_mld.num_qgroups;
>  	dev_info->max_num_queues = 0;
> -	for (i = RTE_BBDEV_OP_NONE; i <= RTE_BBDEV_OP_MLDTS; i++)
> +	for (i = RTE_BBDEV_OP_NONE; i <= RTE_BBDEV_OP_MLDTS; i++) {
> +		if (unlikely(dev_info->num_queues[i] > VRB2_MAX_Q_PER_OP)) {
> +			rte_bbdev_log(ERR, "Unexpected number of queues %d exposed for op %d",
> +					dev_info->num_queues[i], i);
> +			dev_info->num_queues[i] = VRB2_MAX_Q_PER_OP;
> +		}
>  		dev_info->max_num_queues += dev_info->num_queues[i];
> +	}
>  	dev_info->queue_size_lim = ACC_MAX_QUEUE_DEPTH;
>  	dev_info->hardware_accelerated = true;
>  	dev_info->max_dl_queue_priority =
> @@ -4239,7 +4328,8 @@ vrb_bbdev_init(struct rte_bbdev *dev, struct rte_pci_driver *drv)
>  		d->reg_addr = &vrb1_pf_reg_addr;
>  	else
>  		d->reg_addr = &vrb1_vf_reg_addr;
> -	} else {
> +	} else if ((pci_dev->id.device_id == RTE_VRB2_PF_DEVICE_ID) ||
> +			(pci_dev->id.device_id == RTE_VRB2_VF_DEVICE_ID)) {
>  		d->device_variant = VRB2_VARIANT;
>  		d->queue_offset = vrb2_queue_offset;
>  		d->num_qgroups = VRB2_NUM_QGRPS;
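One more note, on the vrb_queue_setup() change: the VRB2 branch boils down
to slicing each per-operation ring buffer with a per-operation counter. A
simplified standalone model of that addressing (hypothetical names and
sizes, not driver code):

```c
#include <assert.h>
#include <stdint.h>

#define NUM_OP_TYPES 6      /* stand-in for RTE_BBDEV_OP_MLDTS + 1 */
#define SW_RING_SIZE 4096   /* placeholder for d->sw_ring_size */

struct dev_model {
	uintptr_t rings_base[NUM_OP_TYPES]; /* like d->sw_rings_array[] */
	uint32_t queue_index[NUM_OP_TYPES]; /* like d->queue_index[] */
};

/* Each new queue of a given op type gets the next SW_RING_SIZE slice of
 * that op type's buffer, mirroring the VRB2 branch of vrb_queue_setup(). */
static uintptr_t next_ring_addr(struct dev_model *d, int op_type)
{
	uintptr_t addr = d->rings_base[op_type] +
			(uintptr_t)SW_RING_SIZE * d->queue_index[op_type];

	d->queue_index[op_type]++;
	return addr;
}
```

This also makes it clear why queue_index[] must be reset at setup time:
the counter, not queue_id, decides the slice within each per-op buffer.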