From: Maxime Coquelin <maxime.coquelin@redhat.com>
To: abhimanyu.saini@xilinx.com, dev@dpdk.org
Cc: chenbo.xia@intel.com, andrew.rybchenko@oktetlabs.ru,
Abhimanyu Saini <asaini@xilinx.com>
Subject: Re: [PATCH v3] vdpa/sfc: make MCDI memzone name unique
Date: Tue, 15 Feb 2022 11:56:20 +0100 [thread overview]
Message-ID: <07f7dd4f-34b3-8b61-46f7-ab183703e92b@redhat.com> (raw)
In-Reply-To: <20220214105148.18414-1-asaini@xilinx.com>
On 2/14/22 11:51, abhimanyu.saini@xilinx.com wrote:
> From: Abhimanyu Saini <asaini@xilinx.com>
>
> Buffer for MCDI channel is allocated using rte_memzone_reserve_aligned
> with zone name 'mcdi'. Since multiple MCDI channels are needed to
> support multiple VF(s) and rte_memzone_reserve_aligned expects unique
> zone names, append PCI address to zone name to make it unique.
>
> Signed-off-by: Abhimanyu Saini <asaini@xilinx.com>
> ---
> v2:
> - Formatting changes
> v3:
> - Formatting changes
>
> drivers/vdpa/sfc/sfc_vdpa_hw.c | 15 ++++++++++++---
> 1 file changed, 12 insertions(+), 3 deletions(-)
>
> diff --git a/drivers/vdpa/sfc/sfc_vdpa_hw.c b/drivers/vdpa/sfc/sfc_vdpa_hw.c
> index fd1fee7..a7018b1 100644
> --- a/drivers/vdpa/sfc/sfc_vdpa_hw.c
> +++ b/drivers/vdpa/sfc/sfc_vdpa_hw.c
> @@ -25,21 +25,30 @@
> {
> uint64_t mcdi_iova;
> size_t mcdi_buff_size;
> + char mz_name[RTE_MEMZONE_NAMESIZE];
> const struct rte_memzone *mz = NULL;
> int numa_node = sva->pdev->device.numa_node;
> int ret;
>
> mcdi_buff_size = RTE_ALIGN_CEIL(len, PAGE_SIZE);
> + ret = snprintf(mz_name, RTE_MEMZONE_NAMESIZE, "%s_%s",
> + sva->pdev->name, name);
> + if (ret < 0 || ret >= RTE_MEMZONE_NAMESIZE) {
From the man page:
"
The functions snprintf() and vsnprintf() do not write more than size
bytes (including the terminating null byte ('\0')).
"
you might want to pass RTE_MEMZONE_NAMESIZE - 1 as the size argument to
snprintf, so that you can just check ret >= 0?
> + sfc_vdpa_err(sva, "%s_%s too long to fit in mz_name",
> + sva->pdev->name, name);
> + return -EINVAL;
> + }
>
> - sfc_vdpa_log_init(sva, "name=%s, len=%zu", name, len);
> + sfc_vdpa_log_init(sva, "name=%s, len=%zu", mz_name, len);
>
> - mz = rte_memzone_reserve_aligned(name, mcdi_buff_size,
> + mz = rte_memzone_reserve_aligned(mz_name, mcdi_buff_size,
> numa_node,
> RTE_MEMZONE_IOVA_CONTIG,
> PAGE_SIZE);
> if (mz == NULL) {
> sfc_vdpa_err(sva, "cannot reserve memory for %s: len=%#x: %s",
> - name, (unsigned int)len, rte_strerror(rte_errno));
> + mz_name, (unsigned int)len,
> + rte_strerror(rte_errno));
> return -ENOMEM;
> }
>
Thread overview: 9+ messages
2022-01-11 5:33 [PATCH] " abhimanyu.saini
2022-01-14 7:06 ` Xia, Chenbo
2022-01-17 11:29 ` [PATCH v2] " abhimanyu.saini
2022-02-01 8:21 ` Maxime Coquelin
2022-02-14 10:51 ` [PATCH v3] " abhimanyu.saini
2022-02-15 10:56 ` Maxime Coquelin [this message]
2022-02-15 10:59 ` Maxime Coquelin
2022-02-15 12:21 ` Maxime Coquelin
2022-02-17 8:54 ` Maxime Coquelin