DPDK patches and discussions
From: Gowrishankar Muthukrishnan <gmuthukrishn@marvell.com>
To: fengchengwen <fengchengwen@huawei.com>, "dev@dpdk.org" <dev@dpdk.org>
Cc: Anoob Joseph <anoobj@marvell.com>,
	Vamsi Krishna Attunuru <vattunuru@marvell.com>,
	Amit Prakash Shukla <amitprakashs@marvell.com>,
	Vidya Sagar Velumuri <vvelumuri@marvell.com>,
	Kevin Laatz <kevin.laatz@intel.com>,
	Bruce Richardson <bruce.richardson@intel.com>
Subject: RE: [EXT] Re: [PATCH v3 3/3] test/dma: add SG copy tests
Date: Mon, 13 Nov 2023 09:27:09 +0000
Message-ID: <CO1PR18MB4714A660072A97DB7CB0DD98CBB3A@CO1PR18MB4714.namprd18.prod.outlook.com>
In-Reply-To: <e8833081-0390-26c1-f5d2-a75b66993d46@huawei.com>

Hi Fengchengwen,

> 
> On 2023/11/3 23:38, Gowrishankar Muthukrishnan wrote:
> > Add scatter-gather copy tests.
> >
> > Signed-off-by: Vidya Sagar Velumuri <vvelumuri@marvell.com>
> > Signed-off-by: Gowrishankar Muthukrishnan <gmuthukrishn@marvell.com>
> > ---
> >  app/test/test_dmadev.c     | 132 +++++++++++++++++++++++++++++-
> >  app/test/test_dmadev_api.c | 163 ++++++++++++++++++++++++++++++++++---
> >  app/test/test_dmadev_api.h |   2 +
> >  3 files changed, 283 insertions(+), 14 deletions(-)
> >
> > diff --git a/app/test/test_dmadev.c b/app/test/test_dmadev.c
> > index 780941fc1e..a2f3a7f999 100644
> > --- a/app/test/test_dmadev.c
> > +++ b/app/test/test_dmadev.c
> > @@ -19,7 +19,7 @@
> >  #define ERR_RETURN(...) do { print_err(__func__, __LINE__, __VA_ARGS__); return -1; } while (0)
> >
> >  #define TEST_RINGSIZE 512
> > -#define COPY_LEN 1024
> > +#define COPY_LEN 1032
> 
> The test limit (TEST_SG_MAX) is 4, so a copy can use 1/2/3/4 segments.
> 1032 is divisible by 1, 2, 3 and 4, but 1024 is not (it is not divisible by 3).
> 
> I think this is why 1024 was changed to 1032.
> Suggest adding a comment about it.
>

This change is not required after the buffer length normalization done in the next patch version.
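
For reference, the issue being pointed out (an illustration only, not code from the patch): the per-segment length comes from integer division, so any remainder of COPY_LEN by the segment count would simply be dropped:

	/* src_len = len / n_src truncates:
	 *   with COPY_LEN = 1024 and 3 segments, 1024 / 3 = 341 and
	 *   3 * 341 = 1023, so one byte is never described by the SG list;
	 *   with COPY_LEN = 1032, 1032 / 3 = 344 and 3 * 344 = 1032,
	 *   so the whole buffer is covered.
	 */
	src_len = len / n_src;
	dst_len = len / n_dst;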
 
> >
> >  static struct rte_mempool *pool;
> >  static int16_t test_dev_id;
> > @@ -396,6 +396,120 @@ test_stop_start(int16_t dev_id, uint16_t vchan)
> >  	return 0;
> >  }
> >
> > +static int
> > +test_enqueue_sg_copies(int16_t dev_id, uint16_t vchan)
> > +{
> > +	unsigned int src_len, dst_len, n_sge, len, i, j, k;
> > +	char orig_src[COPY_LEN], orig_dst[COPY_LEN];
> > +	struct rte_dma_info info = { 0 };
> > +	enum rte_dma_status_code status;
> > +	uint16_t id, n_src, n_dst;
> > +
> > +	if (rte_dma_info_get(dev_id, &info) < 0)
> > +		ERR_RETURN("Failed to get dev info");
> > +
> > +	n_sge = RTE_MIN(info.max_sges, TEST_SG_MAX);
> > +	len = COPY_LEN;
> > +
> > +	for (n_src = 1; n_src <= n_sge; n_src++) {
> > +		src_len = len / n_src;
> > +		for (n_dst = 1; n_dst <= n_sge; n_dst++) {
> > +			dst_len = len / n_dst;
> 
> If len % [1..n_dst] is not zero, how is the remainder handled?
> 
> I see, it is ensured by adjusting the COPY_LEN value. Suggest adding a
> comment about it.
> 
> Also suggest extracting the above into a separate function.

I have added a minor fix here to normalize the buffer lengths. Please check the next patch version.
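
Roughly, the idea is along these lines (a sketch only, with assumed arithmetic; the actual change is in the next version):

	/* Sketch (assumption, not necessarily the exact fix): trim the copy
	 * length to a multiple of n_src * n_dst so it splits evenly into both
	 * the source and the destination segment counts.
	 */
	len = COPY_LEN - (COPY_LEN % (n_src * n_dst));
	src_len = len / n_src;
	dst_len = len / n_dst;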
> 
> > +
> > +			struct rte_dma_sge sg_src[n_sge], sg_dst[n_sge];
> > +			struct rte_mbuf *src[n_sge], *dst[n_sge];
> > +			char *src_data[n_sge], *dst_data[n_sge];
> > +
> > +			for (i = 0 ; i < COPY_LEN; i++)
> > +				orig_src[i] = rte_rand() & 0xFF;
> > +
> > +			memset(orig_dst, 0, COPY_LEN);
> > +
> > +			for (i = 0; i < n_src; i++) {
> > +				src[i] = rte_pktmbuf_alloc(pool);
> > +				RTE_ASSERT(src[i] != NULL);
> > +				sg_src[i].addr = rte_pktmbuf_iova(src[i]);
> > +				sg_src[i].length = src_len;
> > +				src_data[i] = rte_pktmbuf_mtod(src[i], char *);
> > +			}
> > +
> > +			for (k = 0; k < n_dst; k++) {
> > +				dst[k] = rte_pktmbuf_alloc(pool);
> > +				RTE_ASSERT(dst[k] != NULL);
> > +				sg_dst[k].addr = rte_pktmbuf_iova(dst[k]);
> > +				sg_dst[k].length = dst_len;
> > +				dst_data[k] = rte_pktmbuf_mtod(dst[k], char *);
> > +			}
> > +
> > +			for (i = 0; i < n_src; i++) {
> > +				for (j = 0; j < src_len; j++)
> > +					src_data[i][j] = orig_src[i * src_len + j];
> > +			}
> > +
> > +			for (k = 0; k < n_dst; k++)
> > +				memset(dst_data[k], 0, dst_len);
> > +
> > +			printf("\tsrc segs: %2d [seg len: %4d] - dst segs: %2d [seg len : %4d]\n",
> > +				n_src, src_len, n_dst, dst_len);
> > +
> > +			id = rte_dma_copy_sg(dev_id, vchan, sg_src, sg_dst, n_src, n_dst,
> > +					     RTE_DMA_OP_FLAG_SUBMIT);
> > +
> > +			if (id != id_count)
> > +				ERR_RETURN("Error with rte_dma_copy_sg, got %u, expected %u\n",
> > +					id, id_count);
> > +
> > +			/* Give time for copy to finish, then check it was done */
> > +			await_hw(dev_id, vchan);
> > +
> > +			for (k = 0; k < n_dst; k++)
> > +				memcpy((&orig_dst[0] + k * dst_len), dst_data[k], dst_len);
> > +
> > +			if (memcmp(orig_src, orig_dst, COPY_LEN))
> > +				ERR_RETURN("Data mismatch");
> > +
> > +			/* Verify completion */
> > +			id = ~id;
> > +			if (rte_dma_completed(dev_id, vchan, 1, &id, NULL) != 1)
> > +				ERR_RETURN("Error with rte_dma_completed\n");
> > +
> > +			/* Verify expected index(id_count) */
> > +			if (id != id_count)
> > +				ERR_RETURN("Error:incorrect job id received, %u [expected %u]\n",
> > +						id, id_count);
> > +
> > +			/* Check for completed and id when no job done */
> > +			id = ~id;
> > +			if (rte_dma_completed(dev_id, vchan, 1, &id, NULL) != 0)
> > +				ERR_RETURN("Error with rte_dma_completed when no job done\n");
> > +
> > +			if (id != id_count)
> > +				ERR_RETURN("Error:incorrect job id received when no job done, %u [expected %u]\n",
> > +					   id, id_count);
> > +
> > +			/* Check for completed_status and id when no job done */
> > +			id = ~id;
> > +			if (rte_dma_completed_status(dev_id, vchan, 1, &id, &status) != 0)
> > +				ERR_RETURN("Error with rte_dma_completed_status when no job done\n");
> > +			if (id != id_count)
> > +				ERR_RETURN("Error:incorrect job id received when no job done, %u [expected %u]\n",
> > +						id, 0);
> > +
> > +			for (i = 0; i < n_src; i++)
> > +				rte_pktmbuf_free(src[i]);
> > +			for (i = 0; i < n_dst; i++)
> > +				rte_pktmbuf_free(dst[i]);
> > +
> > +			/* Verify that completion returns nothing more */
> > +			if (rte_dma_completed(dev_id, 0, 1, NULL, NULL) != 0)
> > +				ERR_RETURN("Error with rte_dma_completed in empty check\n");
> 
> This is already verified above, no need to do it again.

But this is for the empty-parameter check (as is also done in the plain copy test).
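
To spell out the difference (both calls are from the hunk above; only the comments here are added for illustration):

	/* completion check with real output parameters - verifies the
	 * returned job index */
	if (rte_dma_completed(dev_id, vchan, 1, &id, NULL) != 1)
		ERR_RETURN("Error with rte_dma_completed\n");

	/* separate check with NULL last_idx/has_error - verifies the API
	 * copes with empty output parameters when nothing is pending,
	 * as the plain copy test also does */
	if (rte_dma_completed(dev_id, 0, 1, NULL, NULL) != 0)
		ERR_RETURN("Error with rte_dma_completed in empty check\n");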

> 
> > +
> > +			id_count++;
> > +		}
> > +	}
> > +	return 0;
> > +}
> > +
> >  /* Failure handling test cases - global macros and variables for those tests*/
> >  #define COMP_BURST_SZ	16
> >  #define OPT_FENCE(idx) ((fence && idx == 8) ? RTE_DMA_OP_FLAG_FENCE : 0)
> > @@ -1003,7 +1117,7 @@ test_dmadev_setup(void)
> >  			TEST_RINGSIZE * 2, /* n == num elements */
> >  			32,  /* cache size */
> >  			0,   /* priv size */
> > -			2048, /* data room size */
> > +			COPY_LEN + RTE_PKTMBUF_HEADROOM, /* data room size */
> >  			info.numa_node);
> >  	if (pool == NULL)
> >  		ERR_RETURN("Error with mempool creation\n");
> > @@ -1026,6 +1140,7 @@ test_dmadev_instance(int16_t dev_id)
> >  #define CHECK_ERRS    true
> >  	enum {
> >  		  TEST_COPY = 0,
> > +		  TEST_COPY_SG,
> >  		  TEST_START,
> >  		  TEST_BURST,
> >  		  TEST_ERR,
> > @@ -1060,6 +1175,13 @@ test_dmadev_instance(int16_t dev_id)
> >  	param[TEST_COPY].vchan = vchan;
> >  	param[TEST_COPY].check_err_stats = CHECK_ERRS;
> >
> > +	param[TEST_COPY_SG].printable = "copy";
> 
> should be SG copy.

Ack.
> 
> > +	param[TEST_COPY_SG].test_fn = test_enqueue_sg_copies;
> > +	param[TEST_COPY_SG].iterations = 1;
> > +	param[TEST_COPY_SG].dev_id = dev_id;
> > +	param[TEST_COPY_SG].vchan = vchan;
> > +	param[TEST_COPY_SG].check_err_stats = CHECK_ERRS;
> > +
> >  	param[TEST_START].printable = "stop-start";
> >  	param[TEST_START].test_fn = test_stop_start;
> >  	param[TEST_START].iterations = 1;
> > @@ -1122,6 +1244,12 @@ test_dmadev_instance(int16_t dev_id)
> >  	/* run tests stopping/starting devices and check jobs still work after restart */
> >  	ts->unit_test_cases[TEST_START].enabled = 1;
> >
> > +	/* run SG test cases */
> > +	if ((dev_info.dev_capa & RTE_DMA_CAPA_OPS_COPY_SG) == 0)
> > +		printf("DMA Dev %u: No SG support, skipping SG copy tests\n", dev_id);
> > +	else
> > +		ts->unit_test_cases[TEST_COPY_SG].enabled = 1;
> 
> Suggest wrapping this in a test_dmadev_sg_copy_setup function, just like
> test_dmadev_burst_setup.
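
(For reference, a setup wrapper along those lines might look like the sketch below; the name follows the suggestion above and the body only mirrors the capability check already in this hunk. The test case would then stay enabled and the setup would decide whether to skip.)

	static int
	test_dmadev_sg_copy_setup(void)
	{
		struct rte_dma_info dev_info;
		int ret = TEST_SUCCESS;

		if (rte_dma_info_get(test_dev_id, &dev_info) < 0)
			return TEST_FAILED;

		/* skip the SG copy tests when the device has no SG copy capability */
		if ((dev_info.dev_capa & RTE_DMA_CAPA_OPS_COPY_SG) == 0) {
			printf("DMA Dev %d: No SG support, skipping SG copy tests\n", test_dev_id);
			ret = TEST_SKIPPED;
		}

		return ret;
	}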
> 
> > +
> >  	/* run some burst capacity tests */
> >  	ts->unit_test_cases[TEST_BURST].setup = test_dmadev_burst_setup;
> >  	ts->unit_test_cases[TEST_BURST].enabled = 1;
> > diff --git a/app/test/test_dmadev_api.c b/app/test/test_dmadev_api.c
> > index aa07d2b359..37e43c9336 100644
> > --- a/app/test/test_dmadev_api.c
> > +++ b/app/test/test_dmadev_api.c
> > @@ -10,6 +10,7 @@
> >  #include <rte_dmadev.h>
> >
> >  #include "test.h"
> > +#include "test_dmadev_api.h"
> >
> >  extern int test_dma_api(uint16_t dev_id);
> >
> > @@ -21,36 +22,62 @@ static int16_t invalid_dev_id;
> >
> >  static char *src;
> >  static char *dst;
> > +static char *src_sg[TEST_SG_MAX];
> > +static char *dst_sg[TEST_SG_MAX];
> >
> >  static int
> >  testsuite_setup(void)
> >  {
> >  	invalid_dev_id = -1;
> > -
> > -	src = rte_malloc("dmadev_test_src", TEST_MEMCPY_SIZE, 0);
> > -	if (src == NULL)
> > -		return -ENOMEM;
> > -	dst = rte_malloc("dmadev_test_dst", TEST_MEMCPY_SIZE, 0);
> > -	if (dst == NULL) {
> > -		rte_free(src);
> > -		src = NULL;
> > -		return -ENOMEM;
> > +	int i, rc = 0;
> > +
> > +	for (i = 0; i < TEST_SG_MAX; i++) {
> > +		src_sg[i] = rte_malloc("dmadev_test_src", TEST_MEMCPY_SIZE, 0);
> > +		if (src_sg[i] == NULL) {
> > +			rc = -ENOMEM;
> > +			goto exit;
> > +		}
> > +
> > +		dst_sg[i] = rte_malloc("dmadev_test_dst", TEST_MEMCPY_SIZE, 0);
> > +		if (dst_sg[i] == NULL) {
> > +			rte_free(src_sg[i]);
> > +			src_sg[i] = NULL;
> > +			rc = -ENOMEM;
> > +			goto exit;
> > +		}
> >  	}
> >
> > +	src = src_sg[0];
> > +	dst = dst_sg[0];
> > +
> >  	/* Set dmadev log level to critical to suppress unnecessary output
> >  	 * during API tests.
> >  	 */
> >  	rte_log_set_level_pattern("lib.dmadev", RTE_LOG_CRIT);
> >
> > -	return 0;
> > +	return rc;
> > +exit:
> > +	while (--i >= 0) {
> > +		rte_free(src_sg[i]);
> > +		rte_free(dst_sg[i]);
> > +	}
> > +
> > +	return rc;
> >  }
> >
> >  static void
> >  testsuite_teardown(void)
> >  {
> > -	rte_free(src);
> > +	int i;
> > +
> > +	for (i = 0; i < TEST_SG_MAX; i++) {
> > +		rte_free(src_sg[i]);
> > +		src_sg[i] = NULL;
> > +		rte_free(dst_sg[i]);
> > +		dst_sg[i] = NULL;
> > +	}
> > +
> >  	src = NULL;
> > -	rte_free(dst);
> >  	dst = NULL;
> >  	/* Ensure the dmadev is stopped. */
> >  	rte_dma_stop(test_dev_id);
> > @@ -437,6 +464,37 @@ verify_memory(void)
> >  	return 0;
> >  }
> >
> > +static void
> > +sg_memory_setup(int n)
> > +{
> > +	int i, j;
> > +
> > +	for (i = 0; i < n; i++) {
> > +		for (j = 0; j < TEST_MEMCPY_SIZE; j++)
> > +			src_sg[i][j] = (char)j;
> > +
> > +		memset(dst_sg[i], 0, TEST_MEMCPY_SIZE);
> > +	}
> > +}
> > +
> > +static int
> > +sg_memory_verify(int n)
> > +{
> > +	int i, j;
> > +
> > +	for (i = 0; i < n; i++) {
> > +		for (j = 0; j < TEST_MEMCPY_SIZE; j++) {
> > +			if (src_sg[i][j] == dst_sg[i][j])
> > +				continue;
> > +
> > +			RTE_TEST_ASSERT_EQUAL(src_sg[i][j], dst_sg[i][j], "Failed to copy memory, %d %d",
> > +				src_sg[i][j], dst_sg[i][j]);
> > +		}
> > +	}
> > +
> > +	return 0;
> > +}
> > +
> >  static int
> >  test_dma_completed(void)
> >  {
> > @@ -551,6 +609,86 @@ test_dma_completed_status(void)
> >  	return TEST_SUCCESS;
> >  }
> >
> > +static int
> > +test_dma_sg(void)
> > +{
> > +	struct rte_dma_sge src_sge[TEST_SG_MAX], dst_sge[TEST_SG_MAX];
> > +	struct rte_dma_info dev_info = { 0 };
> > +	uint16_t last_idx = -1;
> > +	bool has_error = true;
> > +	int n_sge, i, ret;
> > +	uint16_t cpl_ret;
> > +
> > +	ret = rte_dma_info_get(test_dev_id, &dev_info);
> > +	RTE_TEST_ASSERT_SUCCESS(ret, "Failed to obtain device info, %d", ret);
> > +
> > +	if ((dev_info.dev_capa & RTE_DMA_CAPA_OPS_COPY_SG) == 0)
> > +		return TEST_SKIPPED;
> > +
> > +	n_sge = RTE_MIN(dev_info.max_sges, TEST_SG_MAX);
> > +
> > +	ret = setup_vchan(1);
> > +	RTE_TEST_ASSERT_SUCCESS(ret, "Failed to setup one vchan, %d", ret);
> > +
> > +	ret = rte_dma_start(test_dev_id);
> > +	RTE_TEST_ASSERT_SUCCESS(ret, "Failed to start, %d", ret);
> > +
> > +	for (i = 0; i < n_sge; i++) {
> > +		src_sge[i].addr = rte_malloc_virt2iova(src_sg[i]);
> > +		src_sge[i].length = TEST_MEMCPY_SIZE;
> > +		dst_sge[i].addr = rte_malloc_virt2iova(dst_sg[i]);
> > +		dst_sge[i].length = TEST_MEMCPY_SIZE;
> > +	}
> > +
> > +	sg_memory_setup(n_sge);
> > +
> > +	/* Check enqueue without submit */
> > +	ret = rte_dma_copy_sg(test_dev_id, 0, src_sge, dst_sge, n_sge, n_sge, 0);
> > +	RTE_TEST_ASSERT_EQUAL(ret, 0, "Failed to enqueue copy, %d", ret);
> > +
> > +	rte_delay_us_sleep(TEST_WAIT_US_VAL);
> > +
> > +	cpl_ret = rte_dma_completed(test_dev_id, 0, 1, &last_idx, &has_error);
> > +	RTE_TEST_ASSERT_EQUAL(cpl_ret, 0, "Failed to get completed");
> > +
> > +	/* Check DMA submit */
> > +	ret = rte_dma_submit(test_dev_id, 0);
> > +	RTE_TEST_ASSERT_SUCCESS(ret, "Failed to submit, %d", ret);
> > +
> > +	rte_delay_us_sleep(TEST_WAIT_US_VAL);
> > +
> > +	cpl_ret = rte_dma_completed(test_dev_id, 0, 1, &last_idx, &has_error);
> > +	RTE_TEST_ASSERT_EQUAL(cpl_ret, 1, "Failed to get completed");
> > +	RTE_TEST_ASSERT_EQUAL(last_idx, 0, "Last idx should be zero, %u", last_idx);
> > +	RTE_TEST_ASSERT_EQUAL(has_error, false, "Should have no error");
> > +
> > +	ret = sg_memory_verify(n_sge);
> > +	RTE_TEST_ASSERT_SUCCESS(ret, "Failed to verify memory");
> > +
> > +	sg_memory_setup(n_sge);
> > +
> > +	/* Check for enqueue with submit */
> > +	ret = rte_dma_copy_sg(test_dev_id, 0, src_sge, dst_sge, n_sge, n_sge,
> > +			      RTE_DMA_OP_FLAG_SUBMIT);
> > +	RTE_TEST_ASSERT_EQUAL(ret, 1, "Failed to enqueue copy, %d", ret);
> > +
> > +	rte_delay_us_sleep(TEST_WAIT_US_VAL);
> > +
> > +	cpl_ret = rte_dma_completed(test_dev_id, 0, 1, &last_idx, &has_error);
> > +	RTE_TEST_ASSERT_EQUAL(cpl_ret, 1, "Failed to get completed");
> > +	RTE_TEST_ASSERT_EQUAL(last_idx, 1, "Last idx should be 1, %u", last_idx);
> > +	RTE_TEST_ASSERT_EQUAL(has_error, false, "Should have no error");
> > +
> > +	ret = sg_memory_verify(n_sge);
> > +	RTE_TEST_ASSERT_SUCCESS(ret, "Failed to verify memory");
> > +
> > +	/* Stop dmadev to make sure dmadev to a known state */
> > +	ret = rte_dma_stop(test_dev_id);
> > +	RTE_TEST_ASSERT_SUCCESS(ret, "Failed to stop, %d", ret);
> > +
> > +	return TEST_SUCCESS;
> > +}
> > +
> >  static struct unit_test_suite dma_api_testsuite = {
> >  	.suite_name = "DMA API Test Suite",
> >  	.setup = testsuite_setup,
> > @@ -568,6 +706,7 @@ static struct unit_test_suite dma_api_testsuite = {
> >  		TEST_CASE(test_dma_dump),
> >  		TEST_CASE(test_dma_completed),
> >  		TEST_CASE(test_dma_completed_status),
> > +		TEST_CASE(test_dma_sg),
> >  		TEST_CASES_END()
> >  	}
> >  };
> > diff --git a/app/test/test_dmadev_api.h b/app/test/test_dmadev_api.h
> > index 33fbc5bd41..10ab925f80 100644
> > --- a/app/test/test_dmadev_api.h
> > +++ b/app/test/test_dmadev_api.h
> > @@ -2,4 +2,6 @@
> >   * Copyright(c) 2021 HiSilicon Limited
> >   */
> >
> > +#define TEST_SG_MAX 4
> 
> Suggest TEST_MAX_SGES, which corresponds to dev_info.max_sges.

OK, I have fixed the API test to validate against max_sges.

Thanks,
Gowrishankar
> 
> Thanks
> Chengwen
> 
> > +
> >  int test_dma_api(uint16_t dev_id);
> >

Thread overview: 39+ messages
2023-08-10 11:59 [PATCH] test/dma: add test skip status Gowrishankar Muthukrishnan
2023-08-10 12:36 ` [PATCH v2 0/3] test/dma: add vchan reconfig and SG tests Gowrishankar Muthukrishnan
2023-08-10 12:36   ` [PATCH v2 1/3] test/dma: add test skip status Gowrishankar Muthukrishnan
2023-08-10 12:36   ` [PATCH v2 2/3] test/dma: test vchan reconfiguration Gowrishankar Muthukrishnan
2023-08-10 12:36   ` [PATCH v2 3/3] test/dma: add SG copy tests Gowrishankar Muthukrishnan
2023-11-03 15:38   ` [PATCH v3 0/3] test/dma: add vchan reconfig and SG tests Gowrishankar Muthukrishnan
2023-11-03 15:38     ` [PATCH v3 1/3] test/dma: use unit test framework Gowrishankar Muthukrishnan
2023-11-06  3:47       ` fengchengwen
2023-11-13  9:28         ` [EXT] " Gowrishankar Muthukrishnan
2023-11-03 15:38     ` [PATCH v3 2/3] test/dma: test vchan reconfiguration Gowrishankar Muthukrishnan
2023-11-06  4:06       ` fengchengwen
2023-11-03 15:38     ` [PATCH v3 3/3] test/dma: add SG copy tests Gowrishankar Muthukrishnan
2023-11-06  6:43       ` fengchengwen
2023-11-13  9:27         ` Gowrishankar Muthukrishnan [this message]
2023-11-13 12:53     ` [PATCH v4 0/3] test/dma: add vchan reconfig and SG tests Gowrishankar Muthukrishnan
2023-11-13 12:53       ` [PATCH v4 1/3] test/dma: use unit test framework Gowrishankar Muthukrishnan
2023-11-13 12:53       ` [PATCH v4 2/3] test/dma: test multiple vchan Gowrishankar Muthukrishnan
2023-11-13 12:53       ` [PATCH v4 3/3] test/dma: add SG copy tests Gowrishankar Muthukrishnan
2023-11-16  7:16       ` [PATCH v4 0/3] test/dma: add vchan reconfig and SG tests fengchengwen
2023-11-16 10:40       ` [PATCH v5 " Gowrishankar Muthukrishnan
2023-11-16 10:40         ` [PATCH v5 1/3] test/dma: use unit test framework Gowrishankar Muthukrishnan
2023-11-16 10:40         ` [PATCH v5 2/3] test/dma: test multiple vchan Gowrishankar Muthukrishnan
2023-11-16 10:40         ` [PATCH v5 3/3] test/dma: add SG copy tests Gowrishankar Muthukrishnan
2023-11-16 10:59         ` [PATCH v6 0/3] test/dma: add vchan reconfig and SG tests Gowrishankar Muthukrishnan
2023-11-16 10:59           ` [PATCH v6 1/3] test/dma: use unit test framework Gowrishankar Muthukrishnan
2023-11-16 10:59           ` [PATCH v6 2/3] test/dma: test multiple vchan Gowrishankar Muthukrishnan
2023-11-16 10:59           ` [PATCH v6 3/3] test/dma: add SG copy tests Gowrishankar Muthukrishnan
2023-11-16 13:27           ` [PATCH v7 0/3] test/dma: add vchan reconfig and SG tests Gowrishankar Muthukrishnan
2023-11-16 13:27             ` [PATCH v7 1/3] test/dma: use unit test framework Gowrishankar Muthukrishnan
2023-11-16 13:27             ` [PATCH v7 2/3] test/dma: test multiple vchan Gowrishankar Muthukrishnan
2023-11-16 13:27             ` [PATCH v7 3/3] test/dma: add SG copy tests Gowrishankar Muthukrishnan
2023-11-16 17:45             ` [PATCH v8 0/3] test/dma: add vchan reconfig and SG tests Gowrishankar Muthukrishnan
2023-11-16 17:45               ` [PATCH v8 1/3] test/dma: use unit test framework Gowrishankar Muthukrishnan
2023-11-16 17:45               ` [PATCH v8 2/3] test/dma: test multiple vchan Gowrishankar Muthukrishnan
2023-11-16 17:45               ` [PATCH v8 3/3] test/dma: add SG copy tests Gowrishankar Muthukrishnan
2023-12-07 10:10               ` [PATCH v8 0/3] test/dma: add vchan reconfig and SG tests Gowrishankar Muthukrishnan
2024-02-05 10:34                 ` Gowrishankar Muthukrishnan
2024-03-06 19:45               ` Thomas Monjalon
2023-10-08  1:51 ` [PATCH] test/dma: add test skip status fengchengwen
