From: Anoob Joseph <anoobj@marvell.com>
To: Vipin Varghese <vipin.varghese@amd.com>
Cc: "Ferruh.Yigit@amd.com" <Ferruh.Yigit@amd.com>,
	"cheng1.jiang@intel.com" <cheng1.jiang@intel.com>,
	"stable@dpdk.org" <stable@dpdk.org>,
	"thomas@monjalon.net" <thomas@monjalon.net>,
	"dev@dpdk.org" <dev@dpdk.org>,
	Jerin Jacob Kollanukkaran <jerinj@marvell.com>
Subject: RE: [EXT] [PATCH] app/dma-perf: fix physical address seg-fault
Date: Wed, 16 Aug 2023 07:26:58 +0000	[thread overview]
Message-ID: <PH0PR18MB467285D75866F44004737F82DF15A@PH0PR18MB4672.namprd18.prod.outlook.com> (raw)
In-Reply-To: <20230816071810.1796-1-vipin.varghese@amd.com>

Hi Vipin,

Thanks for the update. Please see inline.

Thanks,
Anoob

> -----Original Message-----
> From: Vipin Varghese <vipin.varghese@amd.com>
> Sent: Wednesday, August 16, 2023 12:48 PM
> To: thomas@monjalon.net; dev@dpdk.org
> Cc: Ferruh.Yigit@amd.com; cheng1.jiang@intel.com; stable@dpdk.org
> Subject: [EXT] [PATCH] app/dma-perf: fix physical address seg-fault
> 
> do_cpu_mem_copy uses the DPDK API rte_mbuf_data_iova to get the
> buffer addresses for both src and dst.
> But when the IOVA mode is set to PA, this results in a seg-fault,
> because rte_memcpy expects VA addresses and not PA.
> 
> This fix checks the IOVA mode and invokes rte_memcpy with the right
> arguments.
> 
> Bugzilla ID: 1269
> Fixes: 623dc9364dc6 ("app/dma-perf: introduce DMA performance test")
> Cc: cheng1.jiang@intel.com
> 
> Cc: stable@dpdk.org
> 
> Signed-off-by: Vipin Varghese <vipin.varghese@amd.com>
> ---
> 
> Tested for both VA and PA modes.
> 
> CMD:
> PA: dpdk-test-dma-perf --iova-mode=pa  -- --config test.ini
> VA: dpdk-test-dma-perf --iova-mode=va  -- --config test.ini
> DC: dpdk-test-dma-perf --iova-mode=dc  -- --config test.ini
> 
> Log: dc mode fails with `EAL: invalid parameters for --iova-mode`
> 
> test.ini:
> ```
> [case1]
> type=CPU_MEM_COPY
> mem_size=10
> buf_size=64,8192,2,MUL
> src_numa_node=0
> dst_numa_node=0
> cache_flush=0
> test_seconds=2
> lcore = 7
> eal_args=--in-memory --no-pci
> ```
> ---
>  app/test-dma-perf/benchmark.c | 39 +++++++++++++++++++++++++++--------
>  1 file changed, 30 insertions(+), 9 deletions(-)
> 
> diff --git a/app/test-dma-perf/benchmark.c b/app/test-dma-perf/benchmark.c
> index 0601e0d171..5573acc9f9 100644
> --- a/app/test-dma-perf/benchmark.c
> +++ b/app/test-dma-perf/benchmark.c
> @@ -279,6 +279,10 @@ do_cpu_mem_copy(void *p)
>  	struct rte_mbuf **srcs = para->srcs;
>  	struct rte_mbuf **dsts = para->dsts;
>  	uint32_t i;
> +	bool isAddrPaMode = false;
> +
> +	if (rte_eal_iova_mode() == RTE_IOVA_PA)
> +		isAddrPaMode = true;
> 
>  	worker_info->stop_flag = false;
>  	worker_info->ready_flag = true;
> @@ -286,16 +290,33 @@ do_cpu_mem_copy(void *p)
>  	while (!worker_info->start_flag)
>  		;
> 
> -	while (1) {
> -		for (i = 0; i < nr_buf; i++) {
> -			/* copy buffer form src to dst */
> -			rte_memcpy((void *)(uintptr_t)rte_mbuf_data_iova(dsts[i]),
> -				(void *)(uintptr_t)rte_mbuf_data_iova(srcs[i]),
> -				(size_t)buf_size);
> -			worker_info->total_cpl++;
> +	if (true == isAddrPaMode) {
> +		while (1) {
> +			for (i = 0; i < nr_buf; i++) {
> +				void *src = rte_pktmbuf_mtod(dsts[i], void *);
> +				void *dst = rte_pktmbuf_mtod(srcs[i], void *);
> +
> +				/* copy buffer form src to dst */
> +				rte_memcpy(dst,
> +					src,
> +					(size_t)buf_size);
> +				worker_info->total_cpl++;
> +			}
> +			if (worker_info->stop_flag)
> +				break;
> +		}
> +	} else {
> +		while (1) {
> +			for (i = 0; i < nr_buf; i++) {
> +				/* copy buffer form src to dst */
> +				rte_memcpy((void *)(uintptr_t)rte_mbuf_data_iova(dsts[i]),
> +					(void *)(uintptr_t)rte_mbuf_data_iova(srcs[i]),
> +					(size_t)buf_size);

[Anoob] Even in the case of VA as IOVA, rte_pktmbuf_mtod() should work, right? I was imagining a simple replacement of 'rte_mbuf_data_iova' with 'rte_pktmbuf_mtod' as the fix.
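For reference, a rough sketch of the loop with just that one replacement (untested; it reuses the variables already in do_cpu_mem_copy):

```c
	while (1) {
		for (i = 0; i < nr_buf; i++) {
			/* rte_pktmbuf_mtod() always returns the VA of the
			 * mbuf data, so the same copy works for both PA and
			 * VA IOVA modes. */
			rte_memcpy(rte_pktmbuf_mtod(dsts[i], void *),
				rte_pktmbuf_mtod(srcs[i], void *),
				(size_t)buf_size);
			worker_info->total_cpl++;
		}
		if (worker_info->stop_flag)
			break;
	}
```

That would also avoid duplicating the whole loop for the PA branch.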

> +				worker_info->total_cpl++;
> +			}
> +			if (worker_info->stop_flag)
> +				break;
>  		}
> -		if (worker_info->stop_flag)
> -			break;
>  	}
> 
>  	return 0;
> --
> 2.34.1


Thread overview: 14+ messages
2023-08-16  7:18 Vipin Varghese
2023-08-16  7:26 ` Anoob Joseph [this message]
2023-08-16  7:31   ` [EXT] " Varghese, Vipin
2023-08-16  9:24 ` [PATCH v2] " Vipin Varghese
2023-08-16  9:35 ` Vipin Varghese
2023-08-16  9:42 ` Vipin Varghese
2023-08-16 10:12   ` Jerin Jacob
2023-08-16 10:46   ` [EXT] " Anoob Joseph
2023-09-21  2:28   ` Jiang, Cheng1
2023-10-04  9:00     ` Varghese, Vipin
2023-10-04 13:00       ` Jerin Jacob
2023-10-19  4:19   ` [PATCH v3] " Vipin Varghese
2023-10-24  2:16     ` lihuisong (C)
2023-11-14 14:27       ` Thomas Monjalon
