Subject: Re: [dpdk-dev] [PATCH v5 09/16] dma/idxd: add data-path job submission functions
From: fengchengwen
To: Kevin Laatz
Date: Wed, 22 Sep 2021 11:22:52 +0800
Message-ID: <366077eb-6f00-631c-db2d-7baaaeb1cf11@huawei.com>
In-Reply-To: <20210917152437.3270330-10-kevin.laatz@intel.com>
References: <20210827172048.558704-1-kevin.laatz@intel.com>
 <20210917152437.3270330-1-kevin.laatz@intel.com>
 <20210917152437.3270330-10-kevin.laatz@intel.com>

On 2021/9/17 23:24, Kevin Laatz wrote:
> Add data path functions for enqueuing and submitting operations to DSA
> devices.
>
> Signed-off-by: Bruce Richardson
> Signed-off-by: Kevin Laatz
> Reviewed-by: Conor Walsh
> ---
>  doc/guides/dmadevs/idxd.rst      |  64 +++++++++++++++
>  drivers/dma/idxd/idxd_common.c   | 136 +++++++++++++++++++++++++++++++
>  drivers/dma/idxd/idxd_internal.h |   5 ++
>  drivers/dma/idxd/meson.build     |   1 +
>  4 files changed, 206 insertions(+)
>

[snip]

> +
> +static __rte_always_inline int
> +__idxd_write_desc(struct rte_dma_dev *dev,
> +		const uint32_t op_flags,
> +		const rte_iova_t src,
> +		const rte_iova_t dst,
> +		const uint32_t size,
> +		const uint32_t flags)
> +{
> +	struct idxd_dmadev *idxd = dev->dev_private;
> +	uint16_t mask = idxd->desc_ring_mask;
> +	uint16_t job_id = idxd->batch_start + idxd->batch_size;
> +	/* we never wrap batches, so we only mask the start and allow start+size to overflow */
> +	uint16_t write_idx = (idxd->batch_start & mask) + idxd->batch_size;
> +
> +	/* first check batch ring space then desc ring space */
> +	if ((idxd->batch_idx_read == 0 && idxd->batch_idx_write == idxd->max_batches) ||
> +			idxd->batch_idx_write + 1 == idxd->batch_idx_read)
> +		return -1;
> +	if (((write_idx + 1) & mask) == (idxd->ids_returned & mask))
> +		return -1;

Please return -ENOSPC when the ring is full.

> +
> +	/* write desc. Note: descriptors don't wrap, but the completion address does */
> +	const uint64_t op_flags64 = (uint64_t)(op_flags | IDXD_FLAG_COMPLETION_ADDR_VALID) << 32;
> +	const uint64_t comp_addr = __desc_idx_to_iova(idxd, write_idx & mask);
> +	_mm256_store_si256((void *)&idxd->desc_ring[write_idx],
> +			_mm256_set_epi64x(dst, src, comp_addr, op_flags64));
> +	_mm256_store_si256((void *)&idxd->desc_ring[write_idx].size,
> +			_mm256_set_epi64x(0, 0, 0, size));
> +
> +	idxd->batch_size++;
> +
> +	rte_prefetch0_write(&idxd->desc_ring[write_idx + 1]);
> +
> +	if (flags & RTE_DMA_OP_FLAG_SUBMIT)
> +		__submit(idxd);
> +
> +	return job_id;
> +}
> +
> +int
> +idxd_enqueue_copy(struct rte_dma_dev *dev, uint16_t qid __rte_unused, rte_iova_t src,
> +		rte_iova_t dst, unsigned int length, uint64_t flags)
> +{
> +	/* we can take advantage of the fact that the fence flag in dmadev and DSA are the same,
> +	 * but check it at compile time to be sure.
> +	 */
> +	RTE_BUILD_BUG_ON(RTE_DMA_OP_FLAG_FENCE != IDXD_FLAG_FENCE);
> +	uint32_t memmove = (idxd_op_memmove << IDXD_CMD_OP_SHIFT) |
> +			IDXD_FLAG_CACHE_CONTROL | (flags & IDXD_FLAG_FENCE);
> +	return __idxd_write_desc(dev, memmove, src, dst, length, flags);
> +}
> +

[snip]
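
To make the -ENOSPC comment above concrete, a rough sketch of what I have in mind
for the two ring-full checks (only the return values change, the conditions stay
exactly as in your patch):

	/* first check batch ring space then desc ring space */
	if ((idxd->batch_idx_read == 0 && idxd->batch_idx_write == idxd->max_batches) ||
			idxd->batch_idx_write + 1 == idxd->batch_idx_read)
		return -ENOSPC;	/* no free batch slot */
	if (((write_idx + 1) & mask) == (idxd->ids_returned & mask))
		return -ENOSPC;	/* descriptor ring full */

That way an application can tell from the errno-style return value that the ring is
full and it should gather completions before retrying, instead of getting a bare -1.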