From: yufengx.mo@intel.com
To: dts@dpdk.org
Cc: yufengmx <yufengx.mo@intel.com>
Subject: [dts] [PATCH V1 1/2] memory_register: upstream test plan
Date: Wed,  6 Jun 2018 13:32:37 +0800	[thread overview]
Message-ID: <1528263159-21324-2-git-send-email-yufengx.mo@intel.com> (raw)
In-Reply-To: <1528263159-21324-1-git-send-email-yufengx.mo@intel.com>

From: yufengmx <yufengx.mo@intel.com>


This test plan covers the memory_register feature.
It relates to the DPDK Memory System Redesign sub-task:
register/unregister memory with vfio dynamically.

Signed-off-by: yufengmx <yufengx.mo@intel.com>
---
 test_plans/memory_register_test_plan.rst | 204 +++++++++++++++++++++++++++++++
 1 file changed, 204 insertions(+)
 create mode 100644 test_plans/memory_register_test_plan.rst

diff --git a/test_plans/memory_register_test_plan.rst b/test_plans/memory_register_test_plan.rst
new file mode 100644
index 0000000..f020b82
--- /dev/null
+++ b/test_plans/memory_register_test_plan.rst
@@ -0,0 +1,204 @@
+.. Copyright (c) <2018>, Intel Corporation
+   All rights reserved.
+   
+   Redistribution and use in source and binary forms, with or without
+   modification, are permitted provided that the following conditions
+   are met:
+   
+   - Redistributions of source code must retain the above copyright
+     notice, this list of conditions and the following disclaimer.
+   
+   - Redistributions in binary form must reproduce the above copyright
+     notice, this list of conditions and the following disclaimer in
+     the documentation and/or other materials provided with the
+     distribution.
+   
+   - Neither the name of Intel Corporation nor the names of its
+     contributors may be used to endorse or promote products derived
+     from this software without specific prior written permission.
+   
+   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS
+   FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE
+   COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
+   INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
+   (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
+   SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
+   HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT,
+   STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+   ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED
+   OF THE POSSIBILITY OF SUCH DAMAGE.
+
+===============
+memory register
+===============
+
+This feature provides a way to register memory outside of hugepages with DPDK
+after startup. Specifically, the registered memory can be the target of DMA
+operations and can be used for allocations through rte_malloc or rte_mempool.
+
+This feature should work for both uio and vfio. There are no performance
+requirements on how quickly new memory can be registered or unregistered,
+but calls to translate virtual to physical addresses should remain fast.
+
+It relates to DPDK's ``Memory System Redesign`` sub-task
+``Register/unregister memory with vfio dynamically``.
+
+Prerequisites
+-------------
+2x NICs (2 full-duplex optical ports per NIC)
+    no NIC type limitation
+
+bind port number::
+
+    one port
+    four ports
+
+driver::
+
+    vfio-pci
+    igb_uio
+
+HW configuration
+----------------
+1U socket/2U socket::
+
+    SuperMicro 1U Xeon D Broadwell SoC uServer (1U socket)
+    Broadwell-EP Xeon E5-2600 (2U socket)
+
+Test cases
+----------
+
+DPDK has no dedicated example or unit test binary for exercising memory
+register/unregister, so the testing approach is to run the DPDK unit tests
+related to memory; this is the only practical way to test this feature. The
+``Memory System Redesign`` task touches all of DPDK's memory management
+mechanisms, including malloc/mbuf/memory/mempool/memcpy/memzone. Run the
+compound tests across each combination of driver, port number, and socket type.
+
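Every case below follows the same flow: bind ports, launch the unit test binary ``test``, issue an autotest command at the ``RTE>>`` prompt, and check for ``Test OK``. As a non-authoritative sketch of how that shared flow could be automated: the PCI address, driver, and binary path here are placeholders, the script assumes the test app accepts commands piped on stdin, and ``passed()`` encodes the pass criterion from each case's final step:

```python
import subprocess

# autotest commands this plan issues at the RTE>> prompt
AUTOTESTS = [
    "malloc_autotest", "mbuf_autotest", "memcpy_autotest",
    "memcpy_perf_autotest", "memory_autotest", "mempool_autotest",
    "mempool_perf_autotest", "memzone_autotest",
]

def passed(output):
    """Pass criterion from each case's final step: 'Test OK' in the output."""
    return "test ok" in output.lower()

def run_case(autotest, pci_addr="0000:03:00.0", driver="vfio-pci",
             test_bin="./test/app/test"):
    """Bind one port, launch the unit test binary, run a single autotest."""
    # bind the port to the driver under test (PCI address is a placeholder)
    subprocess.run(["./usertools/dpdk_nic_bind.py",
                    "--bind=" + driver, pci_addr], check=True)
    # feed the autotest command to the RTE>> prompt on stdin, then quit
    result = subprocess.run([test_bin, "-n", "1", "-c", "f"],
                            input=autotest + "\nquit\n",
                            capture_output=True, text=True)
    return passed(result.stdout)
```

In the real DTS script the prompt is driven interactively rather than via stdin, but the bind/run/check sequence is the same for all eight cases.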
+Test Case: dpdk malloc autotest
+===============================
+
+#. Bind ports::
+
+    ./usertools/dpdk_nic_bind.py --bind=<driver> <pci address>
+
+#. Boot up the unit test binary ``test``::
+
+    ./test/app/test -n 1 -c f
+
+#. Run malloc_autotest::
+
+    RTE>> malloc_autotest
+
+#. Check for ``Test OK`` in the output.
+
+Test Case: dpdk mbuf autotest
+=============================
+
+#. Bind ports::
+
+    ./usertools/dpdk_nic_bind.py --bind=<driver> <pci address>
+
+#. Boot up the unit test binary ``test``::
+
+    ./test/app/test -n 1 -c f
+
+#. Run mbuf_autotest::
+
+    RTE>> mbuf_autotest
+
+#. Check for ``Test OK`` in the output.
+
+Test Case: dpdk memcpy autotest
+===============================
+
+#. Bind ports::
+
+    ./usertools/dpdk_nic_bind.py --bind=<driver> <pci address>
+
+#. Boot up the unit test binary ``test``::
+
+    ./test/app/test -n 1 -c f
+
+#. Run memcpy_autotest::
+
+    RTE>> memcpy_autotest
+
+#. Check for ``Test OK`` in the output.
+
+Test Case: dpdk memcpy perf autotest
+====================================
+
+#. Bind ports::
+
+    ./usertools/dpdk_nic_bind.py --bind=<driver> <pci address>
+
+#. Boot up the unit test binary ``test``::
+
+    ./test/app/test -n 1 -c f
+
+#. Run memcpy_perf_autotest::
+
+    RTE>> memcpy_perf_autotest
+
+#. Check for ``Test OK`` in the output.
+
+Test Case: dpdk memory autotest
+===============================
+
+#. Bind ports::
+
+    ./usertools/dpdk_nic_bind.py --bind=<driver> <pci address>
+
+#. Boot up the unit test binary ``test``::
+
+    ./test/app/test -n 1 -c f
+
+#. Run memory_autotest::
+
+    RTE>> memory_autotest
+
+#. Check for ``Test OK`` in the output.
+
+Test Case: dpdk mempool autotest
+================================
+
+#. Bind ports::
+
+    ./usertools/dpdk_nic_bind.py --bind=<driver> <pci address>
+
+#. Boot up the unit test binary ``test``::
+
+    ./test/app/test -n 1 -c f
+
+#. Run mempool_autotest::
+
+    RTE>> mempool_autotest
+
+#. Check for ``Test OK`` in the output.
+
+Test Case: dpdk mempool perf autotest
+=====================================
+
+#. Bind ports::
+
+    ./usertools/dpdk_nic_bind.py --bind=<driver> <pci address>
+
+#. Boot up the unit test binary ``test``::
+
+    ./test/app/test -n 1 -c f
+
+#. Run mempool_perf_autotest::
+
+    RTE>> mempool_perf_autotest
+
+#. Check for ``Test OK`` in the output.
+
+Test Case: dpdk memzone autotest
+================================
+
+#. Bind ports::
+
+    ./usertools/dpdk_nic_bind.py --bind=<driver> <pci address>
+
+#. Boot up the unit test binary ``test``::
+
+    ./test/app/test -n 1 -c f
+
+#. Run memzone_autotest::
+
+    RTE>> memzone_autotest
+
+#. Check for ``Test OK`` in the output.
+
-- 
1.9.3

Thread overview: 4+ messages
2018-06-06  5:32 [dts] [PATCH V1 0/2] memory_register: upstream test plan and automation script yufengx.mo
2018-06-06  5:32 ` yufengx.mo [this message]
2018-06-07  1:55   ` [dts] [PATCH V1 1/2] memory_register: upstream test plan Tu, Lijuan
2018-06-06  5:32 ` [dts] [PATCH V1 2/2] memory_register: upstream automation script yufengx.mo
