DPDK patches and discussions
* [PATCH 0/5] Some documentation fixes
@ 2023-11-23 11:44 David Marchand
  2023-11-23 11:44 ` [PATCH 1/5] doc: remove restriction on ixgbe vector support David Marchand
                   ` (5 more replies)
  0 siblings, 6 replies; 15+ messages in thread
From: David Marchand @ 2023-11-23 11:44 UTC (permalink / raw)
  To: dev; +Cc: thomas

Not urgent for the release (especially the last patch, which is scary by
its size), but here are some cleanups in the documentation.


-- 
David Marchand

David Marchand (5):
  doc: remove restriction on ixgbe vector support
  doc: enhance readability in memif example commands
  doc: fix some ordered lists
  doc: remove number of commands in vDPA guide
  doc: use ordered lists

 doc/guides/eventdevs/dlb2.rst                 | 29 ++++++-----
 doc/guides/eventdevs/dpaa.rst                 |  2 +-
 .../linux_gsg/nic_perf_intel_platform.rst     | 10 ++--
 doc/guides/nics/cnxk.rst                      |  4 +-
 doc/guides/nics/dpaa2.rst                     | 19 +++----
 doc/guides/nics/enetc.rst                     |  6 +--
 doc/guides/nics/enetfec.rst                   | 12 ++---
 doc/guides/nics/i40e.rst                      | 16 +++---
 doc/guides/nics/ixgbe.rst                     |  2 -
 doc/guides/nics/memif.rst                     | 10 ++--
 doc/guides/nics/mlx4.rst                      | 32 ++++++------
 doc/guides/nics/mlx5.rst                      | 39 +++++++--------
 doc/guides/nics/mvpp2.rst                     | 49 ++++++++++---------
 doc/guides/nics/pfe.rst                       |  8 +--
 doc/guides/nics/tap.rst                       | 14 +++---
 doc/guides/nics/virtio.rst                    | 12 +++++
 doc/guides/platform/bluefield.rst             |  4 +-
 doc/guides/platform/cnxk.rst                  | 29 ++++++-----
 doc/guides/platform/dpaa.rst                  | 14 +++---
 doc/guides/platform/dpaa2.rst                 | 20 ++++----
 doc/guides/platform/mlx5.rst                  | 14 +++---
 doc/guides/platform/octeontx.rst              | 22 ++++-----
 .../prog_guide/env_abstraction_layer.rst      | 10 ++--
 .../generic_segmentation_offload_lib.rst      |  2 +-
 doc/guides/prog_guide/graph_lib.rst           | 39 ++++++++-------
 doc/guides/prog_guide/rawdev.rst              | 28 ++++++-----
 doc/guides/prog_guide/rte_flow.rst            | 12 ++---
 doc/guides/prog_guide/stack_lib.rst           |  8 +--
 doc/guides/prog_guide/trace_lib.rst           | 12 ++---
 doc/guides/rawdevs/ifpga.rst                  |  5 +-
 doc/guides/sample_app_ug/ip_pipeline.rst      |  4 +-
 doc/guides/sample_app_ug/pipeline.rst         |  4 +-
 doc/guides/sample_app_ug/vdpa.rst             | 29 ++++++-----
 doc/guides/windows_gsg/run_apps.rst           |  8 +--
 34 files changed, 282 insertions(+), 246 deletions(-)

-- 
2.41.0


* [PATCH 1/5] doc: remove restriction on ixgbe vector support
  2023-11-23 11:44 [PATCH 0/5] Some documentation fixes David Marchand
@ 2023-11-23 11:44 ` David Marchand
  2023-11-23 11:45   ` Bruce Richardson
  2023-11-23 11:44 ` [PATCH 2/5] doc: enhance readability in memif example commands David Marchand
                   ` (4 subsequent siblings)
  5 siblings, 1 reply; 15+ messages in thread
From: David Marchand @ 2023-11-23 11:44 UTC (permalink / raw)
  To: dev; +Cc: thomas, stable, Qiming Yang, Wenjun Wu, Bruce Richardson, Jianbo Liu

The ixgbe driver has had vector support for different architectures
for a while now.

Fixes: b20971b6cca0 ("net/ixgbe: implement vector driver for ARM")
Cc: stable@dpdk.org

Signed-off-by: David Marchand <david.marchand@redhat.com>
---
 doc/guides/nics/ixgbe.rst | 2 --
 1 file changed, 2 deletions(-)

diff --git a/doc/guides/nics/ixgbe.rst b/doc/guides/nics/ixgbe.rst
index b1d77ab7ab..14573b542e 100644
--- a/doc/guides/nics/ixgbe.rst
+++ b/doc/guides/nics/ixgbe.rst
@@ -47,8 +47,6 @@ The wider register gives space to hold multiple packet buffers so as to save ins
 There is no change to PMD API. The RX/TX handler are the only two entries for vPMD packet I/O.
 They are transparently registered at runtime RX/TX execution if all condition checks pass.
 
-1.  To date, only an SSE version of IX GBE vPMD is available.
-
 Some constraints apply as pre-conditions for specific optimizations on bulk packet transfers.
 The following sections explain RX and TX constraints in the vPMD.
 
-- 
2.41.0


* [PATCH 2/5] doc: enhance readability in memif example commands
  2023-11-23 11:44 [PATCH 0/5] Some documentation fixes David Marchand
  2023-11-23 11:44 ` [PATCH 1/5] doc: remove restriction on ixgbe vector support David Marchand
@ 2023-11-23 11:44 ` David Marchand
  2023-11-23 11:48   ` Bruce Richardson
  2023-11-23 11:44 ` [PATCH 3/5] doc: fix some ordered lists David Marchand
                   ` (3 subsequent siblings)
  5 siblings, 1 reply; 15+ messages in thread
From: David Marchand @ 2023-11-23 11:44 UTC (permalink / raw)
  To: dev; +Cc: thomas, Jakub Grajciar

'#.' is a token for ordered lists in RST.
Add a space in those example commands.
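
For context, a minimal standalone snippet (not taken from the guide):
in RST, '#.' followed by a space starts an auto-numbered list item, so

    #. first item
    #. second item

renders as '1.', '2.'. Adding a space after the '#' shell prompt keeps
the example commands clearly distinct from that construct.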

Signed-off-by: David Marchand <david.marchand@redhat.com>
---
 doc/guides/nics/memif.rst | 10 +++++-----
 1 file changed, 5 insertions(+), 5 deletions(-)

diff --git a/doc/guides/nics/memif.rst b/doc/guides/nics/memif.rst
index afc574fdaa..2867b2f66d 100644
--- a/doc/guides/nics/memif.rst
+++ b/doc/guides/nics/memif.rst
@@ -216,15 +216,15 @@ In this example we run two instances of testpmd application and transmit packets
 
 First create ``server`` interface::
 
-    #./<build_dir>/app/dpdk-testpmd -l 0-1 --proc-type=primary --file-prefix=pmd1 --vdev=net_memif,role=server -- -i
+    # ./<build_dir>/app/dpdk-testpmd -l 0-1 --proc-type=primary --file-prefix=pmd1 --vdev=net_memif,role=server -- -i
 
 Now create ``client`` interface (server must be already running so the client will connect)::
 
-    #./<build_dir>/app/dpdk-testpmd -l 2-3 --proc-type=primary --file-prefix=pmd2 --vdev=net_memif -- -i
+    # ./<build_dir>/app/dpdk-testpmd -l 2-3 --proc-type=primary --file-prefix=pmd2 --vdev=net_memif -- -i
 
 You can also enable ``zero-copy`` on ``client`` interface::
 
-    #./<build_dir>/app/dpdk-testpmd -l 2-3 --proc-type=primary --file-prefix=pmd2 --vdev=net_memif,zero-copy=yes --single-file-segments -- -i
+    # ./<build_dir>/app/dpdk-testpmd -l 2-3 --proc-type=primary --file-prefix=pmd2 --vdev=net_memif,zero-copy=yes --single-file-segments -- -i
 
 Start forwarding packets::
 
@@ -260,7 +260,7 @@ To see socket filename use show memif command::
 
 Now create memif interface by running testpmd with these command line options::
 
-    #./dpdk-testpmd --vdev=net_memif,socket=/run/vpp/memif.sock -- -i
+    # ./dpdk-testpmd --vdev=net_memif,socket=/run/vpp/memif.sock -- -i
 
 Testpmd should now create memif client interface and try to connect to server.
 In testpmd set forward option to icmpecho and start forwarding::
@@ -283,7 +283,7 @@ The situation is analogous to cross connecting 2 ports of the NIC by cable.
 
 To set the loopback, just use the same socket and id with different roles::
 
-    #./dpdk-testpmd --vdev=net_memif0,role=server,id=0 --vdev=net_memif1,role=client,id=0 -- -i
+    # ./dpdk-testpmd --vdev=net_memif0,role=server,id=0 --vdev=net_memif1,role=client,id=0 -- -i
 
 Then start the communication::
 
-- 
2.41.0


* [PATCH 3/5] doc: fix some ordered lists
  2023-11-23 11:44 [PATCH 0/5] Some documentation fixes David Marchand
  2023-11-23 11:44 ` [PATCH 1/5] doc: remove restriction on ixgbe vector support David Marchand
  2023-11-23 11:44 ` [PATCH 2/5] doc: enhance readability in memif example commands David Marchand
@ 2023-11-23 11:44 ` David Marchand
  2023-11-23 11:49   ` Bruce Richardson
  2023-11-23 17:22   ` Dariusz Sosnowski
  2023-11-23 11:44 ` [PATCH 4/5] doc: remove number of commands in vDPA guide David Marchand
                   ` (2 subsequent siblings)
  5 siblings, 2 replies; 15+ messages in thread
From: David Marchand @ 2023-11-23 11:44 UTC (permalink / raw)
  To: dev
  Cc: thomas, stable, Dariusz Sosnowski, Viacheslav Ovsiienko, Ori Kam,
	Suanming Mou, Matan Azrad, Maxime Coquelin, Chenbo Xia,
	Nithin Dabilpuram, Kiran Kumar K, Sunil Kumar Kori, Satha Rao,
	Jiayu Hu, Michael Baum, Jianfeng Tan, Yuanhan Liu, Tiwei Bie,
	Yinan Wang, Jerin Jacob, Mark Kavanagh, Konstantin Ananyev,
	John McNamara

Ordered lists must be preceded by an empty line.
Entries must be separated by an empty line (as per our coding style).
Incorrectly indented lines are seen as a separator and result in
a new list being started in the rendered doc.

Fix issues in some guides.
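
As a minimal standalone sketch (the step text and command below are
placeholders, not taken from any guide), a correctly formatted list
looks like:

    Some introductory sentence:

    #. First step.

    #. Second step, with an attached example:

       .. code-block:: console

          some-command --option

If the blank line before the first '#.' is missing, or if a block is
not indented to align with its item text, Sphinx closes the list and
starts a new one, resetting the numbering.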

Fixes: 85d9252e55f2 ("net/mlx5: add test for remote PD and CTX")
Fixes: 26b683b4f7d0 ("net/virtio: setup Rx queue interrupts")
Fixes: 9dcf5d15569b ("doc: clarify path selection in virtio guide")
Fixes: 68a03efeed65 ("doc: add Marvell cnxk platform guide")
Fixes: f6010c7655cc ("doc: add GSO programmer's guide")
Cc: stable@dpdk.org

Signed-off-by: David Marchand <david.marchand@redhat.com>
---
 doc/guides/nics/mlx5.rst                      | 21 +++++++++----------
 doc/guides/nics/virtio.rst                    | 12 +++++++++++
 doc/guides/platform/cnxk.rst                  |  3 +++
 .../generic_segmentation_offload_lib.rst      |  2 +-
 4 files changed, 26 insertions(+), 12 deletions(-)

diff --git a/doc/guides/nics/mlx5.rst b/doc/guides/nics/mlx5.rst
index 45379960f0..39a8c5d7b4 100644
--- a/doc/guides/nics/mlx5.rst
+++ b/doc/guides/nics/mlx5.rst
@@ -2326,19 +2326,18 @@ This command performs:
 
 #. Call the regular ``port attach`` function with updated identifier.
 
-For example, to attach a port whose PCI address is ``0000:0a:00.0``
-and its socket path is ``/var/run/import_ipc_socket``:
+   For example, to attach a port whose PCI address is ``0000:0a:00.0``
+   and its socket path is ``/var/run/import_ipc_socket``:
 
-.. code-block:: console
-
-   testpmd> mlx5 port attach 0000:0a:00.0 socket=/var/run/import_ipc_socket
-   testpmd: MLX5 socket path is /var/run/import_ipc_socket
-   testpmd: Attach port with extra devargs 0000:0a:00.0,cmd_fd=40,pd_handle=1
-   Attaching a new port...
-   EAL: Probe PCI driver: mlx5_pci (15b3:101d) device: 0000:0a:00.0 (socket 0)
-   Port 0 is attached. Now total ports is 1
-   Done
+   .. code-block:: console
 
+      testpmd> mlx5 port attach 0000:0a:00.0 socket=/var/run/import_ipc_socket
+      testpmd: MLX5 socket path is /var/run/import_ipc_socket
+      testpmd: Attach port with extra devargs 0000:0a:00.0,cmd_fd=40,pd_handle=1
+      Attaching a new port...
+      EAL: Probe PCI driver: mlx5_pci (15b3:101d) device: 0000:0a:00.0 (socket 0)
+      Port 0 is attached. Now total ports is 1
+      Done
 
 port map external Rx queue
 ~~~~~~~~~~~~~~~~~~~~~~~~~~
diff --git a/doc/guides/nics/virtio.rst b/doc/guides/nics/virtio.rst
index ba6247170d..c22ce56a02 100644
--- a/doc/guides/nics/virtio.rst
+++ b/doc/guides/nics/virtio.rst
@@ -217,6 +217,7 @@ Prerequisites for Rx interrupts
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
 To support Rx interrupts,
+
 #. Check if guest kernel supports VFIO-NOIOMMU:
 
     Linux started to support VFIO-NOIOMMU since 4.8.0. Make sure the guest
@@ -379,12 +380,16 @@ according to below configuration:
 
 #. Split virtqueue mergeable path: If Rx mergeable is negotiated, in-order feature is
    not negotiated, this path will be selected.
+
 #. Split virtqueue non-mergeable path: If Rx mergeable and in-order feature are not
    negotiated, also Rx offload(s) are requested, this path will be selected.
+
 #. Split virtqueue in-order mergeable path: If Rx mergeable and in-order feature are
    both negotiated, this path will be selected.
+
 #. Split virtqueue in-order non-mergeable path: If in-order feature is negotiated and
    Rx mergeable is not negotiated, this path will be selected.
+
 #. Split virtqueue vectorized Rx path: If Rx mergeable is disabled and no Rx offload
    requested, this path will be selected.
 
@@ -393,16 +398,21 @@ according to below configuration:
 
 #. Packed virtqueue mergeable path: If Rx mergeable is negotiated, in-order feature
    is not negotiated, this path will be selected.
+
 #. Packed virtqueue non-mergeable path: If Rx mergeable and in-order feature are not
    negotiated, this path will be selected.
+
 #. Packed virtqueue in-order mergeable path: If in-order and Rx mergeable feature are
    both negotiated, this path will be selected.
+
 #. Packed virtqueue in-order non-mergeable path: If in-order feature is negotiated and
    Rx mergeable is not negotiated, this path will be selected.
+
 #. Packed virtqueue vectorized Rx path: If building and running environment support
    (AVX512 || NEON) && in-order feature is negotiated && Rx mergeable
    is not negotiated && TCP_LRO Rx offloading is disabled && vectorized option enabled,
    this path will be selected.
+
 #. Packed virtqueue vectorized Tx path: If building and running environment support
    (AVX512 || NEON)  && in-order feature is negotiated && vectorized option enabled,
    this path will be selected.
@@ -480,5 +490,7 @@ or configuration, below steps can help you identify which path you selected and
 root cause faster.
 
 #. Run vhost/virtio test case;
+
 #. Run "perf top" and check virtio Rx/Tx callback names;
+
 #. Identify which virtio path is selected refer to above table.
diff --git a/doc/guides/platform/cnxk.rst b/doc/guides/platform/cnxk.rst
index b3aa4de09d..b901062c93 100644
--- a/doc/guides/platform/cnxk.rst
+++ b/doc/guides/platform/cnxk.rst
@@ -113,7 +113,9 @@ where even VF bound to the first domain and odd VF bound to the second domain.
 Typical application usage models are,
 
 #. Communication between the Linux kernel and DPDK application.
+
 #. Exception path to Linux kernel from DPDK application as SW ``KNI`` replacement.
+
 #. Communication between two different DPDK applications.
 
 SDP interface
@@ -132,6 +134,7 @@ can bind PF or VF to use SDP interface and it will be enumerated as ethdev ports
 The primary use case for SDP is to enable the smart NIC use case. Typical usage models are,
 
 #. Communication channel between remote host and cnxk SoC over PCIe.
+
 #. Transfer packets received from network interface to remote host over PCIe and
    vice-versa.
 
diff --git a/doc/guides/prog_guide/generic_segmentation_offload_lib.rst b/doc/guides/prog_guide/generic_segmentation_offload_lib.rst
index e605b86376..30d13bcc61 100644
--- a/doc/guides/prog_guide/generic_segmentation_offload_lib.rst
+++ b/doc/guides/prog_guide/generic_segmentation_offload_lib.rst
@@ -204,7 +204,7 @@ To segment an outgoing packet, an application must:
    - a flag, that indicates whether the IPv4 headers of output segments should
      contain fixed or incremental ID values.
 
-2. Set the appropriate ol_flags in the mbuf.
+#. Set the appropriate ol_flags in the mbuf.
 
    - The GSO library use the value of an mbuf's ``ol_flags`` attribute to
      determine how a packet should be segmented. It is the application's
-- 
2.41.0


* [PATCH 4/5] doc: remove number of commands in vDPA guide
  2023-11-23 11:44 [PATCH 0/5] Some documentation fixes David Marchand
                   ` (2 preceding siblings ...)
  2023-11-23 11:44 ` [PATCH 3/5] doc: fix some ordered lists David Marchand
@ 2023-11-23 11:44 ` David Marchand
  2023-11-23 12:43   ` Thomas Monjalon
  2023-11-23 11:44 ` [PATCH 5/5] doc: use ordered lists David Marchand
  2023-11-24 12:49 ` [PATCH 0/5] Some documentation fixes David Marchand
  5 siblings, 1 reply; 15+ messages in thread
From: David Marchand @ 2023-11-23 11:44 UTC (permalink / raw)
  To: dev; +Cc: thomas, stable, Maxime Coquelin, Chenbo Xia, Matan Azrad

There are now 5 supported commands.

Fixes: 6505865aa8ed ("examples/vdpa: add statistics show command")
Cc: stable@dpdk.org

Signed-off-by: David Marchand <david.marchand@redhat.com>
---
 doc/guides/sample_app_ug/vdpa.rst | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/doc/guides/sample_app_ug/vdpa.rst b/doc/guides/sample_app_ug/vdpa.rst
index cb9c4f2169..6b6de53e48 100644
--- a/doc/guides/sample_app_ug/vdpa.rst
+++ b/doc/guides/sample_app_ug/vdpa.rst
@@ -38,8 +38,7 @@ where
 * --iface specifies the path prefix of the UNIX domain socket file, e.g.
   /tmp/vhost-user-, then the socket files will be named as /tmp/vhost-user-<n>
   (n starts from 0).
-* --interactive means run the vdpa sample in interactive mode, currently 4
-  internal cmds are supported:
+* --interactive means run the vdpa sample in interactive mode:
 
   1. help: show help message
   2. list: list all available vdpa devices
-- 
2.41.0


* [PATCH 5/5] doc: use ordered lists
  2023-11-23 11:44 [PATCH 0/5] Some documentation fixes David Marchand
                   ` (3 preceding siblings ...)
  2023-11-23 11:44 ` [PATCH 4/5] doc: remove number of commands in vDPA guide David Marchand
@ 2023-11-23 11:44 ` David Marchand
  2023-11-23 11:53   ` Bruce Richardson
  2023-11-23 17:23   ` Dariusz Sosnowski
  2023-11-24 12:49 ` [PATCH 0/5] Some documentation fixes David Marchand
  5 siblings, 2 replies; 15+ messages in thread
From: David Marchand @ 2023-11-23 11:44 UTC (permalink / raw)
  To: dev
  Cc: thomas, Abdullah Sevincer, Hemant Agrawal, Sachin Saxena,
	Bruce Richardson, Konstantin Ananyev, Nithin Dabilpuram,
	Kiran Kumar K, Sunil Kumar Kori, Satha Rao, Gagandeep Singh,
	Apeksha Gupta, Yuying Zhang, Beilei Xing, Matan Azrad,
	Viacheslav Ovsiienko, Dariusz Sosnowski, Ori Kam, Suanming Mou,
	Liron Himi, Anatoly Burakov, Jerin Jacob, Zhirun Yan, Rosen Xu,
	Tianfei Zhang, Cristian Dumitrescu, Maxime Coquelin, Chenbo Xia,
	Dmitry Kozlyuk, Narcisa Ana Maria Vasile, Dmitry Malloy,
	Pallavi Kadam

Prefer automatically numbered lists by using the '#.' marker.
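
As a small illustration (placeholder items, not from any guide),
converting:

    1. First step.

    2. Second step.

into:

    #. First step.

    #. Second step.

renders the same numbering, while letting Sphinx renumber automatically
when entries are added or removed.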

Signed-off-by: David Marchand <david.marchand@redhat.com>
---
 doc/guides/eventdevs/dlb2.rst                 | 29 ++++++-----
 doc/guides/eventdevs/dpaa.rst                 |  2 +-
 .../linux_gsg/nic_perf_intel_platform.rst     | 10 ++--
 doc/guides/nics/cnxk.rst                      |  4 +-
 doc/guides/nics/dpaa2.rst                     | 19 +++----
 doc/guides/nics/enetc.rst                     |  6 +--
 doc/guides/nics/enetfec.rst                   | 12 ++---
 doc/guides/nics/i40e.rst                      | 16 +++---
 doc/guides/nics/mlx4.rst                      | 32 ++++++------
 doc/guides/nics/mlx5.rst                      | 18 +++----
 doc/guides/nics/mvpp2.rst                     | 49 ++++++++++---------
 doc/guides/nics/pfe.rst                       |  8 +--
 doc/guides/nics/tap.rst                       | 14 +++---
 doc/guides/platform/bluefield.rst             |  4 +-
 doc/guides/platform/cnxk.rst                  | 26 +++++-----
 doc/guides/platform/dpaa.rst                  | 14 +++---
 doc/guides/platform/dpaa2.rst                 | 20 ++++----
 doc/guides/platform/mlx5.rst                  | 14 +++---
 doc/guides/platform/octeontx.rst              | 22 ++++-----
 .../prog_guide/env_abstraction_layer.rst      | 10 ++--
 doc/guides/prog_guide/graph_lib.rst           | 39 ++++++++-------
 doc/guides/prog_guide/rawdev.rst              | 28 ++++++-----
 doc/guides/prog_guide/rte_flow.rst            | 12 ++---
 doc/guides/prog_guide/stack_lib.rst           |  8 +--
 doc/guides/prog_guide/trace_lib.rst           | 12 ++---
 doc/guides/rawdevs/ifpga.rst                  |  5 +-
 doc/guides/sample_app_ug/ip_pipeline.rst      |  4 +-
 doc/guides/sample_app_ug/pipeline.rst         |  4 +-
 doc/guides/sample_app_ug/vdpa.rst             | 26 +++++-----
 doc/guides/windows_gsg/run_apps.rst           |  8 +--
 30 files changed, 250 insertions(+), 225 deletions(-)

diff --git a/doc/guides/eventdevs/dlb2.rst b/doc/guides/eventdevs/dlb2.rst
index 6a273d6f45..2532d92888 100644
--- a/doc/guides/eventdevs/dlb2.rst
+++ b/doc/guides/eventdevs/dlb2.rst
@@ -271,24 +271,29 @@ certain reconfiguration sequences that are valid in the eventdev API but not
 supported by the PMD.
 
 Specifically, the PMD supports the following configuration sequence:
-1. Configure and start the device
-2. Stop the device
-3. (Optional) Reconfigure the device
-4. (Optional) If step 3 is run:
 
-   a. Setup queue(s). The reconfigured queue(s) lose their previous port links.
-   b. The reconfigured port(s) lose their previous queue links.
+#. Configure and start the device
 
-5. (Optional, only if steps 4a and 4b are run) Link port(s) to queue(s)
-6. Restart the device. If the device is reconfigured in step 3 but one or more
+#. Stop the device
+
+#. (Optional) Reconfigure the device
+   Setup queue(s). The reconfigured queue(s) lose their previous port links.
+   The reconfigured port(s) lose their previous queue links.
+   Link port(s) to queue(s)
+
+#. Restart the device. If the device is reconfigured in step 3 but one or more
    of its ports or queues are not, the PMD will apply their previous
    configuration (including port->queue links) at this time.
 
 The PMD does not support the following configuration sequences:
-1. Configure and start the device
-2. Stop the device
-3. Setup queue or setup port
-4. Start the device
+
+#. Configure and start the device
+
+#. Stop the device
+
+#. Setup queue or setup port
+
+#. Start the device
 
 This sequence is not supported because the event device must be reconfigured
 before its ports or queues can be.
diff --git a/doc/guides/eventdevs/dpaa.rst b/doc/guides/eventdevs/dpaa.rst
index 266f92d159..33d41fc7c4 100644
--- a/doc/guides/eventdevs/dpaa.rst
+++ b/doc/guides/eventdevs/dpaa.rst
@@ -64,7 +64,7 @@ Example:
 Limitations
 -----------
 
-1. DPAA eventdev can not work with DPAA PUSH mode queues configured for ethdev.
+#. DPAA eventdev can not work with DPAA PUSH mode queues configured for ethdev.
    Please configure export DPAA_NUM_PUSH_QUEUES=0
 
 Platform Requirement
diff --git a/doc/guides/linux_gsg/nic_perf_intel_platform.rst b/doc/guides/linux_gsg/nic_perf_intel_platform.rst
index dbfaf4e350..4a5815dfb9 100644
--- a/doc/guides/linux_gsg/nic_perf_intel_platform.rst
+++ b/doc/guides/linux_gsg/nic_perf_intel_platform.rst
@@ -127,7 +127,7 @@ The following are some recommendations on GRUB boot settings:
 Configurations before running DPDK
 ----------------------------------
 
-1. Reserve huge pages.
+#. Reserve huge pages.
    See the earlier section on :ref:`linux_gsg_hugepages` for more details.
 
    .. code-block:: console
@@ -147,7 +147,7 @@ Configurations before running DPDK
       # Mount to the specific folder.
       mount -t hugetlbfs nodev /mnt/huge
 
-2. Check the CPU layout using the DPDK ``cpu_layout`` utility:
+#. Check the CPU layout using the DPDK ``cpu_layout`` utility:
 
    .. code-block:: console
 
@@ -157,7 +157,7 @@ Configurations before running DPDK
 
    Or run ``lscpu`` to check the cores on each socket.
 
-3. Check your NIC id and related socket id:
+#. Check your NIC id and related socket id:
 
    .. code-block:: console
 
@@ -181,5 +181,5 @@ Configurations before running DPDK
    **Note**: To get the best performance, ensure that the core and NICs are in the same socket.
    In the example above ``85:00.0`` is on socket 1 and should be used by cores on socket 1 for the best performance.
 
-4. Check which kernel drivers needs to be loaded and whether there is a need to unbind the network ports from their kernel drivers.
-More details about DPDK setup and Linux kernel requirements see :ref:`linux_gsg_compiling_dpdk` and :ref:`linux_gsg_linux_drivers`.
+#. Check which kernel drivers needs to be loaded and whether there is a need to unbind the network ports from their kernel drivers.
+   More details about DPDK setup and Linux kernel requirements see :ref:`linux_gsg_compiling_dpdk` and :ref:`linux_gsg_linux_drivers`.
diff --git a/doc/guides/nics/cnxk.rst b/doc/guides/nics/cnxk.rst
index 21063a80ff..9ec52e380f 100644
--- a/doc/guides/nics/cnxk.rst
+++ b/doc/guides/nics/cnxk.rst
@@ -311,10 +311,10 @@ Runtime Config Options
 
    In CN10K, in event mode, driver can work in two modes,
 
-   1. Inbound encrypted traffic received by probed ipsec inline device while
+   #. Inbound encrypted traffic received by probed ipsec inline device while
       plain traffic post decryption is received by ethdev.
 
-   2. Both Inbound encrypted traffic and plain traffic post decryption are
+   #. Both Inbound encrypted traffic and plain traffic post decryption are
       received by ethdev.
 
    By default event mode works using inline device i.e mode ``1``.
diff --git a/doc/guides/nics/dpaa2.rst b/doc/guides/nics/dpaa2.rst
index 2d113f53df..c0d3e7a178 100644
--- a/doc/guides/nics/dpaa2.rst
+++ b/doc/guides/nics/dpaa2.rst
@@ -563,8 +563,9 @@ Traffic Management API
 DPAA2 PMD supports generic DPDK Traffic Management API which allows to
 configure the following features:
 
-1. Hierarchical scheduling
-2. Traffic shaping
+#. Hierarchical scheduling
+
+#. Traffic shaping
 
 Internally TM is represented by a hierarchy (tree) of nodes.
 Node which has a parent is called a leaf whereas node without
@@ -602,19 +603,19 @@ Usage example
 
 For a detailed usage description please refer to "Traffic Management" section in DPDK :doc:`Testpmd Runtime Functions <../testpmd_app_ug/testpmd_funcs>`.
 
-1. Run testpmd as follows:
+#. Run testpmd as follows:
 
    .. code-block:: console
 
 	./dpdk-testpmd  -c 0xf -n 1 -- -i --portmask 0x3 --nb-cores=1 --txq=4 --rxq=4
 
-2. Stop all ports:
+#. Stop all ports:
 
    .. code-block:: console
 
 	testpmd> port stop all
 
-3. Add shaper profile:
+#. Add shaper profile:
 
    One port level shaper and strict priority on all 4 queues of port 0:
 
@@ -642,7 +643,7 @@ For a detailed usage description please refer to "Traffic Management" section in
 	add port tm leaf node 0 3 8 0 500 1 -1 0 0 0 0
 	port tm hierarchy commit 0 no
 
-4. Create flows as per the source IP addresses:
+#. Create flows as per the source IP addresses:
 
    .. code-block:: console
 
@@ -655,7 +656,7 @@ For a detailed usage description please refer to "Traffic Management" section in
 	flow create 1 group 0 priority 4 ingress pattern ipv4 src is \
 	10.10.10.4 / end actions queue index 3 / end
 
-5. Start all ports
+#. Start all ports
 
    .. code-block:: console
 
@@ -663,10 +664,10 @@ For a detailed usage description please refer to "Traffic Management" section in
 
 
 
-6. Enable forwarding
+#. Enable forwarding
 
    .. code-block:: console
 
 		testpmd> start
 
-7. Inject the traffic on port1 as per the configured flows, you will see shaped and scheduled forwarded traffic on port0
+#. Inject the traffic on port1 as per the configured flows, you will see shaped and scheduled forwarded traffic on port0
diff --git a/doc/guides/nics/enetc.rst b/doc/guides/nics/enetc.rst
index 855bacfd0f..e96260f96a 100644
--- a/doc/guides/nics/enetc.rst
+++ b/doc/guides/nics/enetc.rst
@@ -76,15 +76,15 @@ Prerequisites
 There are three main pre-requisites for executing ENETC PMD on a ENETC
 compatible board:
 
-1. **ARM 64 Tool Chain**
+#. **ARM 64 Tool Chain**
 
    For example, the `*aarch64* Linaro Toolchain <https://releases.linaro.org/components/toolchain/binaries/7.3-2018.05/aarch64-linux-gnu/gcc-linaro-7.3.1-2018.05-i686_aarch64-linux-gnu.tar.xz>`_.
 
-2. **Linux Kernel**
+#. **Linux Kernel**
 
    It can be obtained from `NXP's Github hosting <https://source.codeaurora.org/external/qoriq/qoriq-components/linux>`_.
 
-3. **Rootfile system**
+#. **Rootfile system**
 
    Any *aarch64* supporting filesystem can be used. For example,
    Ubuntu 16.04 LTS (Xenial) or 18.04 (Bionic) userland which can be obtained
diff --git a/doc/guides/nics/enetfec.rst b/doc/guides/nics/enetfec.rst
index ad28c8f8fb..c1adb64369 100644
--- a/doc/guides/nics/enetfec.rst
+++ b/doc/guides/nics/enetfec.rst
@@ -102,28 +102,28 @@ Prerequisites
 There are three main pre-requisites for executing ENETFEC PMD on a i.MX 8M Mini
 compatible board:
 
-1. **ARM 64 Tool Chain**
+#. **ARM 64 Tool Chain**
 
    For example, the `*aarch64* Linaro Toolchain
    <https://releases.linaro.org/components/toolchain/binaries/7.4-2019.02/aarch64-linux-gnu/gcc-linaro-7.4.1-2019.02-x86_64_aarch64-linux-gnu.tar.xz>`_.
 
-2. **Linux Kernel**
+#. **Linux Kernel**
 
    It can be obtained from `NXP's Github hosting
    <https://source.codeaurora.org/external/qoriq/qoriq-components/linux>`_.
 
-.. note::
+   .. note::
 
-   Branch is 'lf-5.10.y'
+      Branch is 'lf-5.10.y'
 
-3. **Rootfile system**
+#. **Rootfile system**
 
    Any *aarch64* supporting filesystem can be used.
    For example, Ubuntu 18.04 LTS (Bionic) or 20.04 LTS(Focal) userland
    which can be obtained from `here
    <http://cdimage.ubuntu.com/ubuntu-base/releases/18.04/release/ubuntu-base-18.04.1-base-arm64.tar.gz>`_.
 
-4. The Ethernet device will be registered as virtual device,
+#. The Ethernet device will be registered as virtual device,
    so ENETFEC has dependency on **rte_bus_vdev** library
    and it is mandatory to use `--vdev` with value `net_enetfec`
    to run DPDK application.
diff --git a/doc/guides/nics/i40e.rst b/doc/guides/nics/i40e.rst
index 6cd1165521..3432eabb36 100644
--- a/doc/guides/nics/i40e.rst
+++ b/doc/guides/nics/i40e.rst
@@ -964,7 +964,7 @@ See :numref:`figure_intel_perf_test_setup` for the performance test setup.
    Performance Test Setup
 
 
-1. Add two Intel Ethernet CNA XL710 to the platform, and use one port per card to get best performance.
+#. Add two Intel Ethernet CNA XL710 to the platform, and use one port per card to get best performance.
    The reason for using two NICs is to overcome a PCIe v3.0 limitation since it cannot provide 80GbE bandwidth
    for two 40GbE ports, but two different PCIe v3.0 x8 slot can.
    Refer to the sample NICs output above, then we can select ``82:00.0`` and ``85:00.0`` as test ports::
@@ -972,23 +972,23 @@ See :numref:`figure_intel_perf_test_setup` for the performance test setup.
       82:00.0 Ethernet [0200]: Intel XL710 for 40GbE QSFP+ [8086:1583]
       85:00.0 Ethernet [0200]: Intel XL710 for 40GbE QSFP+ [8086:1583]
 
-2. Connect the ports to the traffic generator. For high speed testing, it's best to use a hardware traffic generator.
+#. Connect the ports to the traffic generator. For high speed testing, it's best to use a hardware traffic generator.
 
-3. Check the PCI devices numa node (socket id) and get the cores number on the exact socket id.
+#. Check the PCI devices numa node (socket id) and get the cores number on the exact socket id.
    In this case, ``82:00.0`` and ``85:00.0`` are both in socket 1, and the cores on socket 1 in the referenced platform
    are 18-35 and 54-71.
    Note: Don't use 2 logical cores on the same core (e.g core18 has 2 logical cores, core18 and core54), instead, use 2 logical
    cores from different cores (e.g core18 and core19).
 
-4. Bind these two ports to igb_uio.
+#. Bind these two ports to igb_uio.
 
-5. As to Intel Ethernet CNA XL710 40GbE port, we need at least two queue pairs to achieve best performance, then two queues per port
+#. As to Intel Ethernet CNA XL710 40GbE port, we need at least two queue pairs to achieve best performance, then two queues per port
    will be required, and each queue pair will need a dedicated CPU core for receiving/transmitting packets.
 
-6. The DPDK sample application ``l3fwd`` will be used for performance testing, with using two ports for bi-directional forwarding.
+#. The DPDK sample application ``l3fwd`` will be used for performance testing, with using two ports for bi-directional forwarding.
    Compile the ``l3fwd sample`` with the default lpm mode.
 
-7. The command line of running l3fwd would be something like the following::
+#. The command line of running l3fwd would be something like the following::
 
       ./dpdk-l3fwd -l 18-21 -n 4 -a 82:00.0 -a 85:00.0 \
               -- -p 0x3 --config '(0,0,18),(0,1,19),(1,0,20),(1,1,21)'
@@ -996,7 +996,7 @@ See :numref:`figure_intel_perf_test_setup` for the performance test setup.
    This means that the application uses core 18 for port 0, queue pair 0 forwarding, core 19 for port 0, queue pair 1 forwarding,
    core 20 for port 1, queue pair 0 forwarding, and core 21 for port 1, queue pair 1 forwarding.
 
-8. Configure the traffic at a traffic generator.
+#. Configure the traffic at a traffic generator.
 
    * Start creating a stream on packet generator.
 
diff --git a/doc/guides/nics/mlx4.rst b/doc/guides/nics/mlx4.rst
index c6279f51d0..50962caeda 100644
--- a/doc/guides/nics/mlx4.rst
+++ b/doc/guides/nics/mlx4.rst
@@ -234,9 +234,9 @@ NVIDIA MLNX_OFED as a fallback
 Installing NVIDIA MLNX_OFED
 ^^^^^^^^^^^^^^^^^^^^^^^^^^^
 
-1. Download latest NVIDIA MLNX_OFED.
+#. Download latest NVIDIA MLNX_OFED.
 
-2. Install the required libraries and kernel modules either by installing
+#. Install the required libraries and kernel modules either by installing
    only the required set, or by installing the entire NVIDIA MLNX_OFED:
 
    For bare metal use::
@@ -251,22 +251,22 @@ Installing NVIDIA MLNX_OFED
 
         ./mlnxofedinstall --dpdk --upstream-libs --guest
 
-3. Verify the firmware is the correct one::
+#. Verify the firmware is the correct one::
 
         ibv_devinfo
 
-4. Set all ports links to Ethernet, follow instructions on the screen::
+#. Set all ports links to Ethernet, follow instructions on the screen::
 
         connectx_port_config
 
-5. Continue with :ref:`section 2 of the Quick Start Guide <QSG_2>`.
+#. Continue with :ref:`section 2 of the Quick Start Guide <QSG_2>`.
 
 .. _qsg:
 
 Quick Start Guide
 -----------------
 
-1. Set all ports links to Ethernet::
+#. Set all ports links to Ethernet::
 
         PCI=<NIC PCI address>
         echo eth > "/sys/bus/pci/devices/$PCI/mlx4_port0"
@@ -280,7 +280,7 @@ Quick Start Guide
 
 .. _QSG_2:
 
-2. In case of bare metal or hypervisor, configure optimized steering mode
+#. In case of bare metal or hypervisor, configure optimized steering mode
    by adding the following line to ``/etc/modprobe.d/mlx4_core.conf``::
 
         options mlx4_core log_num_mgm_entry_size=-7
@@ -290,7 +290,7 @@ Quick Start Guide
         If VLAN filtering is used, set log_num_mgm_entry_size=-1.
         Performance degradation can occur on this case.
 
-3. Restart the driver::
+#. Restart the driver::
 
         /etc/init.d/openibd restart
 
@@ -298,17 +298,17 @@ Quick Start Guide
 
         service openibd restart
 
-4. Install DPDK and you are ready to go.
+#. Install DPDK and you are ready to go.
    See :doc:`compilation instructions <../linux_gsg/build_dpdk>`.
 
 Performance tuning
 ------------------
 
-1. Verify the optimized steering mode is configured::
+#. Verify the optimized steering mode is configured::
 
         cat /sys/module/mlx4_core/parameters/log_num_mgm_entry_size
 
-2. Use the CPU near local NUMA node to which the PCIe adapter is connected,
+#. Use the CPU near local NUMA node to which the PCIe adapter is connected,
    for better performance. For VMs, verify that the right CPU
    and NUMA node are pinned according to the above. Run::
 
@@ -316,21 +316,21 @@ Performance tuning
 
    to identify the NUMA node to which the PCIe adapter is connected.
 
-3. If more than one adapter is used, and root complex capabilities allow
+#. If more than one adapter is used, and root complex capabilities allow
    to put both adapters on the same NUMA node without PCI bandwidth degradation,
    it is recommended to locate both adapters on the same NUMA node.
    This in order to forward packets from one to the other without
    NUMA performance penalty.
 
-4. Disable pause frames::
+#. Disable pause frames::
 
         ethtool -A <netdev> rx off tx off
 
-5. Verify IO non-posted prefetch is disabled by default. This can be checked
+#. Verify IO non-posted prefetch is disabled by default. This can be checked
    via the BIOS configuration. Please contact you server provider for more
    information about the settings.
 
-.. note::
+   .. note::
 
         On some machines, depends on the machine integrator, it is beneficial
         to set the PCI max read request parameter to 1K. This can be
@@ -347,7 +347,7 @@ Performance tuning
         The XXX can be different on different systems. Make sure to configure
         according to the setpci output.
 
-6. To minimize overhead of searching Memory Regions:
+#. To minimize overhead of searching Memory Regions:
 
    - '--socket-mem' is recommended to pin memory by predictable amount.
    - Configure per-lcore cache when creating Mempools for packet buffer.
diff --git a/doc/guides/nics/mlx5.rst b/doc/guides/nics/mlx5.rst
index 39a8c5d7b4..d002106765 100644
--- a/doc/guides/nics/mlx5.rst
+++ b/doc/guides/nics/mlx5.rst
@@ -1535,15 +1535,15 @@ Use <sfnum> to probe SF representor::
 Performance tuning
 ------------------
 
-1. Configure aggressive CQE Zipping for maximum performance::
+#. Configure aggressive CQE Zipping for maximum performance::
 
         mlxconfig -d <mst device> s CQE_COMPRESSION=1
 
-  To set it back to the default CQE Zipping mode use::
+   To set it back to the default CQE Zipping mode use::
 
         mlxconfig -d <mst device> s CQE_COMPRESSION=0
 
-2. In case of virtualization:
+#. In case of virtualization:
 
    - Make sure that hypervisor kernel is 3.16 or newer.
    - Configure boot with ``iommu=pt``.
@@ -1551,7 +1551,7 @@ Performance tuning
    - Make sure to allocate a VM on huge pages.
    - Make sure to set CPU pinning.
 
-3. Use the CPU near local NUMA node to which the PCIe adapter is connected,
+#. Use the CPU near local NUMA node to which the PCIe adapter is connected,
    for better performance. For VMs, verify that the right CPU
    and NUMA node are pinned according to the above. Run::
 
@@ -1559,21 +1559,21 @@ Performance tuning
 
    to identify the NUMA node to which the PCIe adapter is connected.
 
-4. If more than one adapter is used, and root complex capabilities allow
+#. If more than one adapter is used, and root complex capabilities allow
    to put both adapters on the same NUMA node without PCI bandwidth degradation,
    it is recommended to locate both adapters on the same NUMA node.
    This in order to forward packets from one to the other without
    NUMA performance penalty.
 
-5. Disable pause frames::
+#. Disable pause frames::
 
         ethtool -A <netdev> rx off tx off
 
-6. Verify IO non-posted prefetch is disabled by default. This can be checked
+#. Verify IO non-posted prefetch is disabled by default. This can be checked
    via the BIOS configuration. Please contact you server provider for more
    information about the settings.
 
-.. note::
+   .. note::
 
         On some machines, depends on the machine integrator, it is beneficial
         to set the PCI max read request parameter to 1K. This can be
@@ -1590,7 +1590,7 @@ Performance tuning
         The XXX can be different on different systems. Make sure to configure
         according to the setpci output.
 
-7. To minimize overhead of searching Memory Regions:
+#. To minimize overhead of searching Memory Regions:
 
    - '--socket-mem' is recommended to pin memory by predictable amount.
    - Configure per-lcore cache when creating Mempools for packet buffer.
diff --git a/doc/guides/nics/mvpp2.rst b/doc/guides/nics/mvpp2.rst
index cbfa47afd8..e3d4b6a479 100644
--- a/doc/guides/nics/mvpp2.rst
+++ b/doc/guides/nics/mvpp2.rst
@@ -572,9 +572,11 @@ Traffic metering and policing
 
 MVPP2 PMD supports DPDK traffic metering and policing that allows the following:
 
-1. Meter ingress traffic.
-2. Do policing.
-3. Gather statistics.
+#. Meter ingress traffic.
+
+#. Do policing.
+
+#. Gather statistics.
 
 For an additional description please refer to DPDK :doc:`Traffic Metering and Policing API <../prog_guide/traffic_metering_and_policing>`.
 
@@ -592,25 +594,25 @@ The following capabilities are not supported:
 Usage example
 ~~~~~~~~~~~~~
 
-1. Run testpmd user app:
+#. Run testpmd user app:
 
    .. code-block:: console
 
 		./dpdk-testpmd --vdev=eth_mvpp2,iface=eth0,iface=eth2 -c 6 -- -i -p 3 -a --txd 1024 --rxd 1024
 
-2. Create meter profile:
+#. Create meter profile:
 
    .. code-block:: console
 
 		testpmd> add port meter profile 0 0 srtcm_rfc2697 2000 256 256
 
-3. Create meter:
+#. Create meter:
 
    .. code-block:: console
 
 		testpmd> create port meter 0 0 0 yes d d d 0 1 0
 
-4. Create flow rule witch meter attached:
+#. Create flow rule witch meter attached:
 
    .. code-block:: console
 
@@ -628,10 +630,13 @@ Traffic Management API
 MVPP2 PMD supports generic DPDK Traffic Management API which allows to
 configure the following features:
 
-1. Hierarchical scheduling
-2. Traffic shaping
-3. Congestion management
-4. Packet marking
+#. Hierarchical scheduling
+
+#. Traffic shaping
+
+#. Congestion management
+
+#. Packet marking
 
 Internally TM is represented by a hierarchy (tree) of nodes.
 Node which has a parent is called a leaf whereas node without
@@ -671,20 +676,20 @@ Usage example
 
 For a detailed usage description please refer to "Traffic Management" section in DPDK :doc:`Testpmd Runtime Functions <../testpmd_app_ug/testpmd_funcs>`.
 
-1. Run testpmd as follows:
+#. Run testpmd as follows:
 
    .. code-block:: console
 
 		./dpdk-testpmd --vdev=net_mrvl,iface=eth0,iface=eth2,cfg=./qos_config -c 7 -- \
 		-i -p 3 --disable-hw-vlan-strip --rxq 3 --txq 3 --txd 1024 --rxd 1024
 
-2. Stop all ports:
+#. Stop all ports:
 
    .. code-block:: console
 
 		testpmd> port stop all
 
-3. Add shaper profile:
+#. Add shaper profile:
 
    .. code-block:: console
 
@@ -698,7 +703,7 @@ For a detailed usage description please refer to "Traffic Management" section in
 		70000   - Bucket size in bytes.
 		0       - Packet length adjustment - ignored.
 
-4. Add non-leaf node for port 0:
+#. Add non-leaf node for port 0:
 
    .. code-block:: console
 
@@ -717,7 +722,7 @@ For a detailed usage description please refer to "Traffic Management" section in
 		 3  - Enable statistics for both number of transmitted packets and bytes.
 		 0  - Number of shared shapers.
 
-5. Add leaf node for tx queue 0:
+#. Add leaf node for tx queue 0:
 
    .. code-block:: console
 
@@ -737,7 +742,7 @@ For a detailed usage description please refer to "Traffic Management" section in
 		 1  - Enable statistics counter for number of transmitted packets.
 		 0  - Number of shared shapers.
 
-6. Add leaf node for tx queue 1:
+#. Add leaf node for tx queue 1:
 
    .. code-block:: console
 
@@ -757,7 +762,7 @@ For a detailed usage description please refer to "Traffic Management" section in
 		 1  - Enable statistics counter for number of transmitted packets.
 		 0  - Number of shared shapers.
 
-7. Add leaf node for tx queue 2:
+#. Add leaf node for tx queue 2:
 
    .. code-block:: console
 
@@ -777,18 +782,18 @@ For a detailed usage description please refer to "Traffic Management" section in
 		 1  - Enable statistics counter for number of transmitted packets.
 		 0  - Number of shared shapers.
 
-8. Commit hierarchy:
+#. Commit hierarchy:
 
    .. code-block:: console
 
 		testpmd> port tm hierarchy commit 0 no
 
-  Parameters have following meaning::
+   Parameters have following meaning::
 
 		0  - Id of a port.
 		no - Do not flush TM hierarchy if commit fails.
 
-9. Start all ports
+#. Start all ports
 
    .. code-block:: console
 
@@ -796,7 +801,7 @@ For a detailed usage description please refer to "Traffic Management" section in
 
 
 
-10. Enable forwarding
+#. Enable forwarding
 
    .. code-block:: console
 
diff --git a/doc/guides/nics/pfe.rst b/doc/guides/nics/pfe.rst
index 748c382573..172ae80984 100644
--- a/doc/guides/nics/pfe.rst
+++ b/doc/guides/nics/pfe.rst
@@ -110,21 +110,21 @@ Prerequisites
 Below are some pre-requisites for executing PFE PMD on a PFE
 compatible board:
 
-1. **ARM 64 Tool Chain**
+#. **ARM 64 Tool Chain**
 
    For example, the `*aarch64* Linaro Toolchain <https://releases.linaro.org/components/toolchain/binaries/7.3-2018.05/aarch64-linux-gnu/gcc-linaro-7.3.1-2018.05-i686_aarch64-linux-gnu.tar.xz>`_.
 
-2. **Linux Kernel**
+#. **Linux Kernel**
 
    It can be obtained from `NXP's Github hosting <https://source.codeaurora.org/external/qoriq/qoriq-components/linux>`_.
 
-3. **Rootfile system**
+#. **Rootfile system**
 
    Any *aarch64* supporting filesystem can be used. For example,
    Ubuntu 16.04 LTS (Xenial) or 18.04 (Bionic) userland which can be obtained
    from `here <http://cdimage.ubuntu.com/ubuntu-base/releases/18.04/release/ubuntu-base-18.04.1-base-arm64.tar.gz>`_.
 
-4. The ethernet device will be registered as virtual device, so pfe has dependency on
+#. The ethernet device will be registered as virtual device, so pfe has dependency on
    **rte_bus_vdev** library and it is mandatory to use `--vdev` with value `net_pfe` to
    run DPDK application.
 
diff --git a/doc/guides/nics/tap.rst b/doc/guides/nics/tap.rst
index 449e747994..d4f45c02a1 100644
--- a/doc/guides/nics/tap.rst
+++ b/doc/guides/nics/tap.rst
@@ -249,19 +249,19 @@ It is possible to support different RSS hash algorithms by updating file
 ``tap_bpf_program.c``  In order to add a new RSS hash algorithm follow these
 steps:
 
-1. Write the new RSS implementation in file ``tap_bpf_program.c``
+#. Write the new RSS implementation in file ``tap_bpf_program.c``
 
-BPF programs which are uploaded to the kernel correspond to
-C functions under different ELF sections.
+   BPF programs which are uploaded to the kernel correspond to
+   C functions under different ELF sections.
 
-2. Install ``LLVM`` library and ``clang`` compiler versions 3.7 and above
+#. Install ``LLVM`` library and ``clang`` compiler versions 3.7 and above
 
-3. Use make to compile  `tap_bpf_program.c`` via ``LLVM`` into an object file
-   and extract the resulting instructions into ``tap_bpf_insn.h``.
+#. Use make to compile  `tap_bpf_program.c`` via ``LLVM`` into an object file
+   and extract the resulting instructions into ``tap_bpf_insn.h``::
 
     cd bpf; make
 
-4. Recompile the TAP PMD.
+#. Recompile the TAP PMD.
 
 The C arrays are uploaded to the kernel using BPF system calls.
 
diff --git a/doc/guides/platform/bluefield.rst b/doc/guides/platform/bluefield.rst
index 322b08a217..954686affc 100644
--- a/doc/guides/platform/bluefield.rst
+++ b/doc/guides/platform/bluefield.rst
@@ -25,11 +25,11 @@ Supported BlueField Platforms
 Common Offload HW Drivers
 -------------------------
 
-1. **NIC Driver**
+#. **NIC Driver**
 
    See :doc:`../nics/mlx5` for NVIDIA mlx5 NIC driver information.
 
-2. **Cryptodev Driver**
+#. **Cryptodev Driver**
 
    This is based on the crypto extension support of armv8. See
    :doc:`../cryptodevs/armv8` for armv8 crypto driver information.
diff --git a/doc/guides/platform/cnxk.rst b/doc/guides/platform/cnxk.rst
index b901062c93..70065e3d96 100644
--- a/doc/guides/platform/cnxk.rst
+++ b/doc/guides/platform/cnxk.rst
@@ -95,9 +95,11 @@ It is only a configuration driver used in control path.
 The :numref:`figure_cnxk_resource_virtualization` diagram also shows a
 resource provisioning example where,
 
-1. PFx and PFx-VF0 bound to Linux netdev driver.
-2. PFx-VF1 ethdev driver bound to the first DPDK application.
-3. PFy ethdev driver, PFy-VF0 ethdev driver, PFz eventdev driver, PFm-VF0 cryptodev driver bound to the second DPDK application.
+#. PFx and PFx-VF0 bound to Linux netdev driver.
+
+#. PFx-VF1 ethdev driver bound to the first DPDK application.
+
+#. PFy ethdev driver, PFy-VF0 ethdev driver, PFz eventdev driver, PFm-VF0 cryptodev driver bound to the second DPDK application.
 
 LBK HW Access
 -------------
@@ -179,7 +181,7 @@ Procedure to Setup Platform
 There are three main prerequisites for setting up DPDK on cnxk
 compatible board:
 
-1. **RVU AF Linux kernel driver**
+#. **RVU AF Linux kernel driver**
 
    The dependent kernel drivers can be obtained from the
    `kernel.org <https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/drivers/net/ethernet/marvell/octeontx2>`_.
@@ -188,7 +190,7 @@ compatible board:
 
    Linux kernel should be configured with the following features enabled:
 
-.. code-block:: console
+   .. code-block:: console
 
         # 64K pages enabled for better performance
         CONFIG_ARM64_64K_PAGES=y
@@ -218,7 +220,7 @@ compatible board:
         # Enable if OCTEONTX2 DMA PF driver required
         CONFIG_OCTEONTX2_DPI_PF=n
 
-2. **ARM64 Linux Tool Chain**
+#. **ARM64 Linux Tool Chain**
 
    For example, the *aarch64* Linaro Toolchain, which can be obtained from
    `here <https://releases.linaro.org/components/toolchain/binaries/7.4-2019.02/aarch64-linux-gnu/>`_.
@@ -226,7 +228,7 @@ compatible board:
    Alternatively, the Marvell SDK also provides GNU GCC toolchain, which is
    optimized for cnxk CPU.
 
-3. **Rootfile system**
+#. **Rootfile system**
 
    Any *aarch64* supporting filesystem may be used. For example,
    Ubuntu 15.10 (Wily) or 16.04 LTS (Xenial) userland which can be obtained
@@ -261,11 +263,13 @@ context or stats using debugfs.
 
 Enable ``debugfs`` by:
 
-1. Compile kernel with debugfs enabled, i.e ``CONFIG_DEBUG_FS=y``.
-2. Boot OCTEON CN9K/CN10K with debugfs supported kernel.
-3. Verify ``debugfs`` mounted by default "mount | grep -i debugfs" or mount it manually by using.
+#. Compile kernel with debugfs enabled, i.e ``CONFIG_DEBUG_FS=y``.
 
-.. code-block:: console
+#. Boot OCTEON CN9K/CN10K with debugfs supported kernel.
+
+#. Verify ``debugfs`` mounted by default "mount | grep -i debugfs" or mount it manually by using.
+
+   .. code-block:: console
 
        # mount -t debugfs none /sys/kernel/debug
 
diff --git a/doc/guides/platform/dpaa.rst b/doc/guides/platform/dpaa.rst
index 389692907d..282a2f45ee 100644
--- a/doc/guides/platform/dpaa.rst
+++ b/doc/guides/platform/dpaa.rst
@@ -22,15 +22,15 @@ processors-and-mcus/qoriq-layerscape-arm-processors:QORIQ-ARM>`_.
 Common Offload HW Block Drivers
 -------------------------------
 
-1. **Nics Driver**
+#. **Nics Driver**
 
    See :doc:`../nics/dpaa` for NXP dpaa nic driver information.
 
-2. **Cryptodev Driver**
+#. **Cryptodev Driver**
 
    See :doc:`../cryptodevs/dpaa_sec` for NXP dpaa cryptodev driver information.
 
-3. **Eventdev Driver**
+#. **Eventdev Driver**
 
    See :doc:`../eventdevs/dpaa` for NXP dpaa eventdev driver information.
 
@@ -41,22 +41,22 @@ Steps To Setup Platform
 There are four main pre-requisites for executing DPAA PMD on a DPAA
 compatible board:
 
-1. **ARM 64 Tool Chain**
+#. **ARM 64 Tool Chain**
 
    For example, the `*aarch64* Linaro Toolchain <https://releases.linaro.org/components/toolchain/binaries/7.3-2018.05/aarch64-linux-gnu/gcc-linaro-7.3.1-2018.05-i686_aarch64-linux-gnu.tar.xz>`_.
 
-2. **Linux Kernel**
+#. **Linux Kernel**
 
    It can be obtained from `NXP's Github hosting <https://source.codeaurora.org/external/qoriq/qoriq-components/linux>`_.
 
-3. **Rootfile system**
+#. **Rootfile system**
 
    Any *aarch64* supporting filesystem can be used. For example,
    Ubuntu 16.04 LTS (Xenial) or 18.04 (Bionic) userland which can be obtained
    from `here
    <http://cdimage.ubuntu.com/ubuntu-base/releases/18.04/release/ubuntu-base-18.04.1-base-arm64.tar.gz>`_.
 
-4. **FMC Tool**
+#. **FMC Tool**
 
    If one is planning to use more than 1 Recv queue and hardware capability to
    parse, classify and distribute the packets, the Frame Manager Configuration
diff --git a/doc/guides/platform/dpaa2.rst b/doc/guides/platform/dpaa2.rst
index a9fcad6ca2..2b0d93a976 100644
--- a/doc/guides/platform/dpaa2.rst
+++ b/doc/guides/platform/dpaa2.rst
@@ -24,23 +24,23 @@ processors-and-mcus/qoriq-layerscape-arm-processors:QORIQ-ARM>`_.
 Common Offload HW Block Drivers
 -------------------------------
 
-1. **Nics Driver**
+#. **Nics Driver**
 
    See :doc:`../nics/dpaa2` for NXP dpaa2 nic driver information.
 
-2. **Cryptodev Driver**
+#. **Cryptodev Driver**
 
    See :doc:`../cryptodevs/dpaa2_sec` for NXP dpaa2 cryptodev driver information.
 
-3. **Eventdev Driver**
+#. **Eventdev Driver**
 
    See :doc:`../eventdevs/dpaa2` for NXP dpaa2 eventdev driver information.
 
-4. **Rawdev AIOP CMDIF Driver**
+#. **Rawdev AIOP CMDIF Driver**
 
    See :doc:`../rawdevs/dpaa2_cmdif` for NXP dpaa2 AIOP command interface driver information.
 
-5. **DMA Driver**
+#. **DMA Driver**
 
    See :doc:`../dmadevs/dpaa2` for NXP dpaa2 QDMA driver information.
 
@@ -51,27 +51,27 @@ Steps To Setup Platform
 There are four main pre-requisites for executing DPAA2 PMD on a DPAA2
 compatible board:
 
-1. **ARM 64 Tool Chain**
+#. **ARM 64 Tool Chain**
 
    For example, the `*aarch64* Linaro Toolchain <https://releases.linaro.org/components/toolchain/binaries/7.3-2018.05/aarch64-linux-gnu/gcc-linaro-7.3.1-2018.05-i686_aarch64-linux-gnu.tar.xz>`_.
 
-2. **Linux Kernel**
+#. **Linux Kernel**
 
    It can be obtained from `NXP's Github hosting <https://source.codeaurora.org/external/qoriq/qoriq-components/linux>`_.
 
-3. **Rootfile system**
+#. **Rootfile system**
 
    Any *aarch64* supporting filesystem can be used. For example,
    Ubuntu 16.04 LTS (Xenial) or 18.04 (Bionic) userland which can be obtained
    from `here
    <http://cdimage.ubuntu.com/ubuntu-base/releases/18.04/release/ubuntu-base-18.04.1-base-arm64.tar.gz>`_.
 
-4. **Resource Scripts**
+#. **Resource Scripts**
 
    DPAA2 based resources can be configured easily with the help of ready scripts
    as provided in the DPDK Extra repository.
 
-5. **Build Config**
+#. **Build Config**
 
    Use dpaa build configs, they work for both DPAA2 and DPAA platforms.
 
diff --git a/doc/guides/platform/mlx5.rst b/doc/guides/platform/mlx5.rst
index 73dadd4064..400000e284 100644
--- a/doc/guides/platform/mlx5.rst
+++ b/doc/guides/platform/mlx5.rst
@@ -361,34 +361,34 @@ Sub-Function is a portion of the PCI device,
 it has its own dedicated queues.
 An SF shares PCI-level resources with other SFs and/or with its parent PCI function.
 
-0. Requirement::
+#. Requirement::
 
       MLNX_OFED version >= 5.4-0.3.3.0
 
-1. Configure SF feature::
+#. Configure SF feature::
 
       # Run mlxconfig on both PFs on host and ECPFs on BlueField.
       mlxconfig -d <mst device> set PER_PF_NUM_SF=1 PF_TOTAL_SF=252 PF_SF_BAR_SIZE=12
 
-2. Enable switchdev mode::
+#. Enable switchdev mode::
 
       mlxdevm dev eswitch set pci/<DBDF> mode switchdev
 
-3. Add SF port::
+#. Add SF port::
 
       mlxdevm port add pci/<DBDF> flavour pcisf pfnum 0 sfnum <sfnum>
 
       Get SFID from output: pci/<DBDF>/<SFID>
 
-4. Modify MAC address::
+#. Modify MAC address::
 
       mlxdevm port function set pci/<DBDF>/<SFID> hw_addr <MAC>
 
-5. Activate SF port::
+#. Activate SF port::
 
       mlxdevm port function set pci/<DBDF>/<ID> state active
 
-6. Devargs to probe SF device::
+#. Devargs to probe SF device::
 
       auxiliary:mlx5_core.sf.<num>,class=eth:regex
 
diff --git a/doc/guides/platform/octeontx.rst b/doc/guides/platform/octeontx.rst
index 1459dc7109..b01f51ba4d 100644
--- a/doc/guides/platform/octeontx.rst
+++ b/doc/guides/platform/octeontx.rst
@@ -15,15 +15,15 @@ More information about SoC can be found at `Cavium, Inc Official Website
 Common Offload HW Block Drivers
 -------------------------------
 
-1. **Crypto Driver**
+#. **Crypto Driver**
    See :doc:`../cryptodevs/octeontx` for octeontx crypto driver
    information.
 
-2. **Eventdev Driver**
+#. **Eventdev Driver**
    See :doc:`../eventdevs/octeontx` for octeontx ssovf eventdev driver
    information.
 
-3. **Mempool Driver**
+#. **Mempool Driver**
    See :doc:`../mempool/octeontx` for octeontx fpavf mempool driver
    information.
 
@@ -33,24 +33,24 @@ Steps To Setup Platform
 There are three main pre-prerequisites for setting up Platform drivers on
 OCTEON TX compatible board:
 
-1. **OCTEON TX Linux kernel PF driver for Network acceleration HW blocks**
+#. **OCTEON TX Linux kernel PF driver for Network acceleration HW blocks**
 
    The OCTEON TX Linux kernel drivers (includes the required PF driver for the
    Platform drivers) are available on Github at `octeontx-kmod <https://github.com/caviumnetworks/octeontx-kmod>`_
    along with build, install and dpdk usage instructions.
 
-.. note::
+   .. note::
 
-   The PF driver and the required microcode for the crypto offload block will be
-   available with OCTEON TX SDK only. So for using crypto offload, follow the steps
-   mentioned in :ref:`setup_platform_using_OCTEON_TX_SDK`.
+      The PF driver and the required microcode for the crypto offload block will be
+      available with OCTEON TX SDK only. So for using crypto offload, follow the steps
+      mentioned in :ref:`setup_platform_using_OCTEON_TX_SDK`.
 
-2. **ARM64 Tool Chain**
+#. **ARM64 Tool Chain**
 
    For example, the *aarch64* Linaro Toolchain, which can be obtained from
    `here <https://releases.linaro.org/components/toolchain/binaries/4.9-2017.01/aarch64-linux-gnu>`_.
 
-3. **Rootfile system**
+#. **Rootfile system**
 
    Any *aarch64* supporting filesystem can be used. For example,
    Ubuntu 15.10 (Wily) or 16.04 LTS (Xenial) userland which can be obtained
@@ -60,7 +60,7 @@ OCTEON TX compatible board:
    as part of SDK from Cavium. The SDK includes all the above prerequisites necessary
    to bring up a OCTEON TX board. Please refer :ref:`setup_platform_using_OCTEON_TX_SDK`.
 
-- Follow the DPDK :doc:`../linux_gsg/index` to setup the basic DPDK environment.
+#. Follow the DPDK :doc:`../linux_gsg/index` to setup the basic DPDK environment.
 
 .. _setup_platform_using_OCTEON_TX_SDK:
 
diff --git a/doc/guides/prog_guide/env_abstraction_layer.rst b/doc/guides/prog_guide/env_abstraction_layer.rst
index 6debf54efb..9559c12a98 100644
--- a/doc/guides/prog_guide/env_abstraction_layer.rst
+++ b/doc/guides/prog_guide/env_abstraction_layer.rst
@@ -807,15 +807,15 @@ Known Issues
 
   This means, use cases involving preemptible pthreads should consider using rte_ring carefully.
 
-  1. It CAN be used for preemptible single-producer and single-consumer use case.
+  #. It CAN be used for preemptible single-producer and single-consumer use case.
 
-  2. It CAN be used for non-preemptible multi-producer and preemptible single-consumer use case.
+  #. It CAN be used for non-preemptible multi-producer and preemptible single-consumer use case.
 
-  3. It CAN be used for preemptible single-producer and non-preemptible multi-consumer use case.
+  #. It CAN be used for preemptible single-producer and non-preemptible multi-consumer use case.
 
-  4. It MAY be used by preemptible multi-producer and/or preemptible multi-consumer pthreads whose scheduling policy are all SCHED_OTHER(cfs), SCHED_IDLE or SCHED_BATCH. User SHOULD be aware of the performance penalty before using it.
+  #. It MAY be used by preemptible multi-producer and/or preemptible multi-consumer pthreads whose scheduling policy are all SCHED_OTHER(cfs), SCHED_IDLE or SCHED_BATCH. User SHOULD be aware of the performance penalty before using it.
 
-  5. It MUST not be used by multi-producer/consumer pthreads, whose scheduling policies are SCHED_FIFO or SCHED_RR.
+  #. It MUST not be used by multi-producer/consumer pthreads, whose scheduling policies are SCHED_FIFO or SCHED_RR.
 
   Alternatively, applications can use the lock-free stack mempool handler. When
   considering this handler, note that:
diff --git a/doc/guides/prog_guide/graph_lib.rst b/doc/guides/prog_guide/graph_lib.rst
index 96cff9ccc7..ad09bdfe26 100644
--- a/doc/guides/prog_guide/graph_lib.rst
+++ b/doc/guides/prog_guide/graph_lib.rst
@@ -346,31 +346,32 @@ handling where every packet could be going to different next node.
 
 Example of intermediate node implementation with home run:
 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-1. Start with speculation that next_node = node->ctx.
-This could be the next_node application used in the previous function call of this node.
 
-2. Get the next_node stream array with required space using
-``rte_node_next_stream_get(next_node, space)``.
+#. Start with speculation that next_node = node->ctx.
+   This could be the next_node application used in the previous function call of this node.
 
-3. while n_left_from > 0 (i.e packets left to be sent) prefetch next pkt_set
-and process current pkt_set to find their next node
+#. Get the next_node stream array with required space using
+   ``rte_node_next_stream_get(next_node, space)``.
 
-4. if all the next nodes of the current pkt_set match speculated next node,
-just count them as successfully speculated(``last_spec``) till now and
-continue the loop without actually moving them to the next node. else if there is
-a mismatch, copy all the pkt_set pointers that were ``last_spec`` and move the
-current pkt_set to their respective next's nodes using ``rte_enqueue_next_x1()``.
-Also, one of the next_node can be updated as speculated next_node if it is more
-probable. Finally, reset ``last_spec`` to zero.
+#. while n_left_from > 0 (i.e packets left to be sent) prefetch next pkt_set
+   and process current pkt_set to find their next node
 
-5. if n_left_from != 0 then goto 3) to process remaining packets.
+#. if all the next nodes of the current pkt_set match speculated next node,
+   just count them as successfully speculated(``last_spec``) till now and
+   continue the loop without actually moving them to the next node. else if there is
+   a mismatch, copy all the pkt_set pointers that were ``last_spec`` and move the
+   current pkt_set to their respective next's nodes using ``rte_enqueue_next_x1()``.
+   Also, one of the next_node can be updated as speculated next_node if it is more
+   probable. Finally, reset ``last_spec`` to zero.
 
-6. if last_spec == nb_objs, All the objects passed were successfully speculated
-to single next node. So, the current stream can be moved to next node using
-``rte_node_next_stream_move(node, next_node)``.
-This is the ``home run`` where memcpy of buffer pointers to next node is avoided.
+#. if n_left_from != 0 then goto 3) to process remaining packets.
 
-7. Update the ``node->ctx`` with more probable next node.
+#. if last_spec == nb_objs, All the objects passed were successfully speculated
+   to single next node. So, the current stream can be moved to next node using
+   ``rte_node_next_stream_move(node, next_node)``.
+   This is the ``home run`` where memcpy of buffer pointers to next node is avoided.
+
+#. Update the ``node->ctx`` with more probable next node.
 
 Graph object memory layout
 --------------------------
diff --git a/doc/guides/prog_guide/rawdev.rst b/doc/guides/prog_guide/rawdev.rst
index 488e0a7ef6..07a2c4e73c 100644
--- a/doc/guides/prog_guide/rawdev.rst
+++ b/doc/guides/prog_guide/rawdev.rst
@@ -13,11 +13,13 @@ In terms of device flavor (type) support, DPDK currently has ethernet
 
 For a new type of device, for example an accelerator, there are not many
 options except:
-1. create another lib/MySpecialDev, driver/MySpecialDrv and use it
-through Bus/PMD model.
-2. Or, create a vdev and implement necessary custom APIs which are directly
-exposed from driver layer. However this may still require changes in bus code
-in DPDK.
+
+#. create another lib/MySpecialDev, driver/MySpecialDrv and use it
+   through Bus/PMD model.
+
+#. Or, create a vdev and implement necessary custom APIs which are directly
+   exposed from driver layer. However this may still require changes in bus code
+   in DPDK.
 
 The DPDK Rawdev library is an abstraction that provides the DPDK framework a
 way to manage such devices in a generic manner without expecting changes to
@@ -30,19 +32,19 @@ Design
 
 Key factors guiding design of the Rawdevice library:
 
-1. Following are some generic operations which can be treated as applicable
+#. Following are some generic operations which can be treated as applicable
    to a large subset of device types. None of the operations are mandatory to
    be implemented by a driver. Application should also be designed for proper
    handling for unsupported APIs.
 
-  * Device Start/Stop - In some cases, 'reset' might also be required which
-    has different semantics than a start-stop-start cycle.
-  * Configuration - Device, Queue or any other sub-system configuration
-  * I/O - Sending a series of buffers which can enclose any arbitrary data
-  * Statistics - Fetch arbitrary device statistics
-  * Firmware Management - Firmware load/unload/status
+   * Device Start/Stop - In some cases, 'reset' might also be required which
+     has different semantics than a start-stop-start cycle.
+   * Configuration - Device, Queue or any other sub-system configuration
+   * I/O - Sending a series of buffers which can enclose any arbitrary data
+   * Statistics - Fetch arbitrary device statistics
+   * Firmware Management - Firmware load/unload/status
 
-2. Application API should be able to pass along arbitrary state information
+#. Application API should be able to pass along arbitrary state information
    to/from device driver. This can be achieved by maintaining context
    information through opaque data or pointers.
 
diff --git a/doc/guides/prog_guide/rte_flow.rst b/doc/guides/prog_guide/rte_flow.rst
index b5d4b0e929..627b845bfb 100644
--- a/doc/guides/prog_guide/rte_flow.rst
+++ b/doc/guides/prog_guide/rte_flow.rst
@@ -3987,17 +3987,17 @@ Flow rules management can be done via special lockless flow management queues.
 
 The asynchronous flow rule insertion logic can be broken into two phases.
 
-1. Initialization stage as shown here:
+#. Initialization stage as shown here:
 
-.. _figure_rte_flow_async_init:
+   .. _figure_rte_flow_async_init:
 
-.. figure:: img/rte_flow_async_init.*
+   .. figure:: img/rte_flow_async_init.*
 
-2. Main loop as presented on a datapath application example:
+#. Main loop as presented on a datapath application example:
 
-.. _figure_rte_flow_async_usage:
+   .. _figure_rte_flow_async_usage:
 
-.. figure:: img/rte_flow_async_usage.*
+   .. figure:: img/rte_flow_async_usage.*
 
 Enqueue creation operation
 ~~~~~~~~~~~~~~~~~~~~~~~~~~
diff --git a/doc/guides/prog_guide/stack_lib.rst b/doc/guides/prog_guide/stack_lib.rst
index 3097cab0c2..975d3ad796 100644
--- a/doc/guides/prog_guide/stack_lib.rst
+++ b/doc/guides/prog_guide/stack_lib.rst
@@ -75,10 +75,12 @@ compare-and-swap instruction to atomically update both the stack top pointer
 and a modification counter. The ABA problem can occur without a modification
 counter if, for example:
 
-1. Thread A reads head pointer X and stores the pointed-to list element.
-2. Other threads modify the list such that the head pointer is once again X,
+#. Thread A reads head pointer X and stores the pointed-to list element.
+
+#. Other threads modify the list such that the head pointer is once again X,
    but its pointed-to data is different than what thread A read.
-3. Thread A changes the head pointer with a compare-and-swap and succeeds.
+
+#. Thread A changes the head pointer with a compare-and-swap and succeeds.
 
 In this case thread A would not detect that the list had changed, and would
 both pop stale data and incorrect change the head pointer. By adding a
diff --git a/doc/guides/prog_guide/trace_lib.rst b/doc/guides/prog_guide/trace_lib.rst
index e5718feddc..d9b17abe90 100644
--- a/doc/guides/prog_guide/trace_lib.rst
+++ b/doc/guides/prog_guide/trace_lib.rst
@@ -175,13 +175,13 @@ events.
 
 There are many tools you can use to read DPDK traces:
 
-1. ``babeltrace`` is a command-line utility that converts trace formats; it
-supports the format that DPDK trace library produces, CTF, as well as a
-basic text output that can be grep'ed.
-The babeltrace command is part of the Open Source Babeltrace project.
+#. ``babeltrace`` is a command-line utility that converts trace formats; it
+   supports the format that DPDK trace library produces, CTF, as well as a
+   basic text output that can be grep'ed.
+   The babeltrace command is part of the Open Source Babeltrace project.
 
-2. ``Trace Compass`` is a graphical user interface for viewing and analyzing
-any type of logs or traces, including DPDK traces.
+#. ``Trace Compass`` is a graphical user interface for viewing and analyzing
+   any type of logs or traces, including DPDK traces.
 
 Use the babeltrace command-line tool
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
diff --git a/doc/guides/rawdevs/ifpga.rst b/doc/guides/rawdevs/ifpga.rst
index 41877aeddf..c0901ddeae 100644
--- a/doc/guides/rawdevs/ifpga.rst
+++ b/doc/guides/rawdevs/ifpga.rst
@@ -244,13 +244,14 @@ through partitioning of individual dedicated resources, or virtualization of
 shared resources. OFS provides several models to share the AFU resources via
 PR mechanism and hardware-based virtualization schemes.
 
-1. Legacy model.
+#. Legacy model.
    With legacy model FPGA cards like Intel PAC N3000 or N5000, there is
    a notion that the boundary between the AFU and the shell is also the unit of
    PR for those FPGA platforms. This model is only able to handle a
    single context, because it only has one PR engine, and one PR region which
    has an associated Port device.
-2. Multiple VFs per PR slot.
+
+#. Multiple VFs per PR slot.
    In this model, available AFU resources may allow instantiation of many VFs
    which have a dedicated PCIe function with their own dedicated MMIO space, or
    partition a region of MMIO space on a single PCIe function. Intel PAC N6000
diff --git a/doc/guides/sample_app_ug/ip_pipeline.rst b/doc/guides/sample_app_ug/ip_pipeline.rst
index f30ac5e19d..ff5ee67ec2 100644
--- a/doc/guides/sample_app_ug/ip_pipeline.rst
+++ b/doc/guides/sample_app_ug/ip_pipeline.rst
@@ -127,9 +127,9 @@ The main thread is creating and managing all the application objects based on CL
 Each data plane thread runs one or several pipelines previously assigned to it in round-robin order. Each data plane thread
 executes two tasks in time-sharing mode:
 
-1. *Packet processing task*: Process bursts of input packets read from the pipeline input ports.
+#. *Packet processing task*: Process bursts of input packets read from the pipeline input ports.
 
-2. *Message handling task*: Periodically, the data plane thread pauses the packet processing task and polls for request
+#. *Message handling task*: Periodically, the data plane thread pauses the packet processing task and polls for request
    messages send by the main thread. Examples: add/remove pipeline to/from current data plane thread, add/delete rules
    to/from given table of a specific pipeline owned by the current data plane thread, read statistics, etc.
 
diff --git a/doc/guides/sample_app_ug/pipeline.rst b/doc/guides/sample_app_ug/pipeline.rst
index 7c86bf484a..58ed0d296a 100644
--- a/doc/guides/sample_app_ug/pipeline.rst
+++ b/doc/guides/sample_app_ug/pipeline.rst
@@ -111,8 +111,8 @@ The main thread is creating and managing all the application objects based on CL
 Each data plane thread runs one or several pipelines previously assigned to it in round-robin order. Each data plane thread
 executes two tasks in time-sharing mode:
 
-1. *Packet processing task*: Process bursts of input packets read from the pipeline input ports.
+#. *Packet processing task*: Process bursts of input packets read from the pipeline input ports.
 
-2. *Message handling task*: Periodically, the data plane thread pauses the packet processing task and polls for request
+#. *Message handling task*: Periodically, the data plane thread pauses the packet processing task and polls for request
    messages send by the main thread. Examples: add/remove pipeline to/from current data plane thread, add/delete rules
    to/from given table of a specific pipeline owned by the current data plane thread, read statistics, etc.
diff --git a/doc/guides/sample_app_ug/vdpa.rst b/doc/guides/sample_app_ug/vdpa.rst
index 6b6de53e48..5a71a70e37 100644
--- a/doc/guides/sample_app_ug/vdpa.rst
+++ b/doc/guides/sample_app_ug/vdpa.rst
@@ -40,11 +40,15 @@ where
   (n starts from 0).
 * --interactive means run the vdpa sample in interactive mode:
 
-  1. help: show help message
-  2. list: list all available vdpa devices
-  3. create: create a new vdpa port with socket file and vdpa device address
-  4. stats: show statistics of virtio queues
-  5. quit: unregister vhost driver and exit the application
+  #. help: show help message
+
+  #. list: list all available vdpa devices
+
+  #. create: create a new vdpa port with socket file and vdpa device address
+
+  #. stats: show statistics of virtio queues
+
+  #. quit: unregister vhost driver and exit the application
 
 Take IFCVF driver for example:
 
@@ -100,21 +104,21 @@ vDPA supports cross-backend live migration, user can migrate SW vhost backend
 VM to vDPA backend VM and vice versa. Here are the detailed steps. Assume A is
 the source host with SW vhost VM and B is the destination host with vDPA.
 
-1. Start vdpa sample and launch a VM with exact same parameters as the VM on A,
+#. Start vdpa sample and launch a VM with exact same parameters as the VM on A,
    in migration-listen mode:
 
-.. code-block:: console
+   .. code-block:: console
 
         B: <qemu-command-line> -incoming tcp:0:4444 (or other PORT))
 
-2. Start the migration (on source host):
+#. Start the migration (on source host):
 
-.. code-block:: console
+   .. code-block:: console
 
         A: (qemu) migrate -d tcp:<B ip>:4444 (or other PORT)
 
-3. Check the status (on source host):
+#. Check the status (on source host):
 
-.. code-block:: console
+   .. code-block:: console
 
         A: (qemu) info migrate
diff --git a/doc/guides/windows_gsg/run_apps.rst b/doc/guides/windows_gsg/run_apps.rst
index 08f110d0b5..2584144c4c 100644
--- a/doc/guides/windows_gsg/run_apps.rst
+++ b/doc/guides/windows_gsg/run_apps.rst
@@ -10,16 +10,16 @@ Grant *Lock pages in memory* Privilege
 Use of hugepages ("large pages" in Windows terminology) requires
 ``SeLockMemoryPrivilege`` for the user running an application.
 
-1. Open *Local Security Policy* snap-in, either:
+#. Open *Local Security Policy* snap-in, either:
 
    * Control Panel / Computer Management / Local Security Policy;
    * or Win+R, type ``secpol``, press Enter.
 
-2. Open *Local Policies / User Rights Assignment / Lock pages in memory.*
+#. Open *Local Policies / User Rights Assignment / Lock pages in memory.*
 
-3. Add desired users or groups to the list of grantees.
+#. Add desired users or groups to the list of grantees.
 
-4. Privilege is applied upon next logon. In particular, if privilege has been
+#. Privilege is applied upon next logon. In particular, if privilege has been
    granted to current user, a logoff is required before it is available.
 
 See `Large-Page Support`_ in MSDN for details.
-- 
2.41.0


^ permalink raw reply	[flat|nested] 15+ messages in thread

* Re: [PATCH 1/5] doc: remove restriction on ixgbe vector support
  2023-11-23 11:44 ` [PATCH 1/5] doc: remove restriction on ixgbe vector support David Marchand
@ 2023-11-23 11:45   ` Bruce Richardson
  0 siblings, 0 replies; 15+ messages in thread
From: Bruce Richardson @ 2023-11-23 11:45 UTC (permalink / raw)
  To: David Marchand; +Cc: dev, thomas, stable, Qiming Yang, Wenjun Wu, Jianbo Liu

On Thu, Nov 23, 2023 at 12:44:01PM +0100, David Marchand wrote:
> The ixgbe driver has vector support for different architectures for a
> while now.
> 
> Fixes: b20971b6cca0 ("net/ixgbe: implement vector driver for ARM")
> Cc: stable@dpdk.org
> 
> Signed-off-by: David Marchand <david.marchand@redhat.com>
> ---
Acked-by: Bruce Richardson <bruce.richardson@intel.com>

^ permalink raw reply	[flat|nested] 15+ messages in thread

* Re: [PATCH 2/5] doc: enhance readability in memif example commands
  2023-11-23 11:44 ` [PATCH 2/5] doc: enhance readability in memif example commands David Marchand
@ 2023-11-23 11:48   ` Bruce Richardson
  0 siblings, 0 replies; 15+ messages in thread
From: Bruce Richardson @ 2023-11-23 11:48 UTC (permalink / raw)
  To: David Marchand; +Cc: dev, thomas, Jakub Grajciar

On Thu, Nov 23, 2023 at 12:44:02PM +0100, David Marchand wrote:
> '#.' is a token for ordered lists in RST.
> Add a space in those example commands.
> 
> Signed-off-by: David Marchand <david.marchand@redhat.com>
> ---
Acked-by: Bruce Richardson <bruce.richardson@intel.com>

As someone who runs DPDK on my systems as a regular user rather than root,
I'd also point out that an alternative fix is to replace the "#" symbol,
which tends to denote the root prompt, with the "$" symbol more commonly
used for regular users. We should encourage running DPDK as non-root as
much as we can.
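
To put the two conventions side by side (a quick sketch; the vdev arguments
are placeholders rather than lines taken from the memif guide):

   # Root prompt, with a space after '#' so the command no longer starts
   # with the '#.' sequence:
   # ./dpdk-testpmd --vdev=net_memif,role=server -- -i

   # Regular-user prompt, as suggested above:
   $ ./dpdk-testpmd --vdev=net_memif,role=server -- -i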

^ permalink raw reply	[flat|nested] 15+ messages in thread

* Re: [PATCH 3/5] doc: fix some ordered lists
  2023-11-23 11:44 ` [PATCH 3/5] doc: fix some ordered lists David Marchand
@ 2023-11-23 11:49   ` Bruce Richardson
  2023-11-23 17:22   ` Dariusz Sosnowski
  1 sibling, 0 replies; 15+ messages in thread
From: Bruce Richardson @ 2023-11-23 11:49 UTC (permalink / raw)
  To: David Marchand
  Cc: dev, thomas, stable, Dariusz Sosnowski, Viacheslav Ovsiienko,
	Ori Kam, Suanming Mou, Matan Azrad, Maxime Coquelin, Chenbo Xia,
	Nithin Dabilpuram, Kiran Kumar K, Sunil Kumar Kori, Satha Rao,
	Jiayu Hu, Michael Baum, Jianfeng Tan, Yuanhan Liu, Tiwei Bie,
	Yinan Wang, Jerin Jacob, Mark Kavanagh, Konstantin Ananyev,
	John McNamara

On Thu, Nov 23, 2023 at 12:44:03PM +0100, David Marchand wrote:
> Ordered lists must be preceded by an empty line.
> Entries must be separated by an empty line (as per our coding style).
> Incorrectly indented lines are seen as a separator and result in
> starting a new list in the rendered doc.
> 
> Fix issues in some guides.
> 
> Fixes: 85d9252e55f2 ("net/mlx5: add test for remote PD and CTX")
> Fixes: 26b683b4f7d0 ("net/virtio: setup Rx queue interrupts")
> Fixes: 9dcf5d15569b ("doc: clarify path selection in virtio guide")
> Fixes: 68a03efeed65 ("doc: add Marvell cnxk platform guide")
> Fixes: f6010c7655cc ("doc: add GSO programmer's guide")
> Cc: stable@dpdk.org
> 
> Signed-off-by: David Marchand <david.marchand@redhat.com>
> ---
Acked-by: Bruce Richardson <bruce.richardson@intel.com>

^ permalink raw reply	[flat|nested] 15+ messages in thread

* Re: [PATCH 5/5] doc: use ordered lists
  2023-11-23 11:44 ` [PATCH 5/5] doc: use ordered lists David Marchand
@ 2023-11-23 11:53   ` Bruce Richardson
  2023-11-23 17:23   ` Dariusz Sosnowski
  1 sibling, 0 replies; 15+ messages in thread
From: Bruce Richardson @ 2023-11-23 11:53 UTC (permalink / raw)
  To: David Marchand
  Cc: dev, thomas, Abdullah Sevincer, Hemant Agrawal, Sachin Saxena,
	Konstantin Ananyev, Nithin Dabilpuram, Kiran Kumar K,
	Sunil Kumar Kori, Satha Rao, Gagandeep Singh, Apeksha Gupta,
	Yuying Zhang, Beilei Xing, Matan Azrad, Viacheslav Ovsiienko,
	Dariusz Sosnowski, Ori Kam, Suanming Mou, Liron Himi,
	Anatoly Burakov, Jerin Jacob, Zhirun Yan, Rosen Xu,
	Tianfei Zhang, Cristian Dumitrescu, Maxime Coquelin, Chenbo Xia,
	Dmitry Kozlyuk, Narcisa Ana Maria Vasile, Dmitry Malloy,
	Pallavi Kadam

On Thu, Nov 23, 2023 at 12:44:05PM +0100, David Marchand wrote:
> Prefer automatically ordered lists by using #.
> 
> Signed-off-by: David Marchand <david.marchand@redhat.com>
> ---
Haven't checked all instances, but definitely agree with the idea.
If not merged for 23.11, please merge early for 24.03 in case of churn
during development.
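
As a quick sketch of what '#.' buys (illustrative steps, not wording from
the patch): RST assigns the numbers itself, so a step added later does not
force renumbering of the surrounding items:

   #. Configure hugepages.

   #. Bind the device to a DPDK-compatible driver.

   #. Launch the application.

With explicit '1.', '2.', '3.' markers, inserting a step in the middle means
editing every following item by hand.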

Acked-by: Bruce Richardson <bruce.richardson@intel.com>

^ permalink raw reply	[flat|nested] 15+ messages in thread

* Re: [PATCH 4/5] doc: remove number of commands in vDPA guide
  2023-11-23 11:44 ` [PATCH 4/5] doc: remove number of commands in vDPA guide David Marchand
@ 2023-11-23 12:43   ` Thomas Monjalon
  0 siblings, 0 replies; 15+ messages in thread
From: Thomas Monjalon @ 2023-11-23 12:43 UTC (permalink / raw)
  To: David Marchand; +Cc: dev, stable, Maxime Coquelin, Chenbo Xia, Matan Azrad

23/11/2023 12:44, David Marchand:
> There are now 5 supported commands.
> 
> Fixes: 6505865aa8ed ("examples/vdpa: add statistics show command")
> Cc: stable@dpdk.org
> 
> Signed-off-by: David Marchand <david.marchand@redhat.com>
> ---
>  doc/guides/sample_app_ug/vdpa.rst | 3 +--
>  1 file changed, 1 insertion(+), 2 deletions(-)
> 
> diff --git a/doc/guides/sample_app_ug/vdpa.rst b/doc/guides/sample_app_ug/vdpa.rst
> index cb9c4f2169..6b6de53e48 100644
> --- a/doc/guides/sample_app_ug/vdpa.rst
> +++ b/doc/guides/sample_app_ug/vdpa.rst
> @@ -38,8 +38,7 @@ where
>  * --iface specifies the path prefix of the UNIX domain socket file, e.g.
>    /tmp/vhost-user-, then the socket files will be named as /tmp/vhost-user-<n>
>    (n starts from 0).
> -* --interactive means run the vdpa sample in interactive mode, currently 4
> -  internal cmds are supported:
> +* --interactive means run the vdpa sample in interactive mode:

While modifying this line, I think the proper "vDPA" capitalization should be used.



^ permalink raw reply	[flat|nested] 15+ messages in thread

* RE: [PATCH 3/5] doc: fix some ordered lists
  2023-11-23 11:44 ` [PATCH 3/5] doc: fix some ordered lists David Marchand
  2023-11-23 11:49   ` Bruce Richardson
@ 2023-11-23 17:22   ` Dariusz Sosnowski
  2023-11-24  8:11     ` David Marchand
  1 sibling, 1 reply; 15+ messages in thread
From: Dariusz Sosnowski @ 2023-11-23 17:22 UTC (permalink / raw)
  To: David Marchand, dev
  Cc: NBU-Contact-Thomas Monjalon (EXTERNAL),
	stable, Slava Ovsiienko, Ori Kam, Suanming Mou, Matan Azrad,
	Maxime Coquelin, Chenbo Xia, Nithin Dabilpuram, Kiran Kumar K,
	Sunil Kumar Kori, Satha Rao, Jiayu Hu, Michael Baum,
	Jianfeng Tan, Yuanhan Liu, Tiwei Bie, Yinan Wang, Jerin Jacob,
	Mark Kavanagh, Konstantin Ananyev, John McNamara

Hi,

> diff --git a/doc/guides/nics/mlx5.rst b/doc/guides/nics/mlx5.rst
> index 45379960f0..39a8c5d7b4 100644
> --- a/doc/guides/nics/mlx5.rst
> +++ b/doc/guides/nics/mlx5.rst
> @@ -2326,19 +2326,18 @@ This command performs:
>
>  #. Call the regular ``port attach`` function with updated identifier.
>
> -For example, to attach a port whose PCI address is ``0000:0a:00.0``
> -and its socket path is ``/var/run/import_ipc_socket``:
> +   For example, to attach a port whose PCI address is ``0000:0a:00.0``
> +   and its socket path is ``/var/run/import_ipc_socket``:
>
> -.. code-block:: console
> -
> -   testpmd> mlx5 port attach 0000:0a:00.0 socket=/var/run/import_ipc_socket
> -   testpmd: MLX5 socket path is /var/run/import_ipc_socket
> -   testpmd: Attach port with extra devargs 0000:0a:00.0,cmd_fd=40,pd_handle=1
> -   Attaching a new port...
> -   EAL: Probe PCI driver: mlx5_pci (15b3:101d) device: 0000:0a:00.0 (socket 0)
> -   Port 0 is attached. Now total ports is 1
> -   Done
> +   .. code-block:: console
> +
> +      testpmd> mlx5 port attach 0000:0a:00.0 socket=/var/run/import_ipc_socket
> +      testpmd: MLX5 socket path is /var/run/import_ipc_socket
> +      testpmd: Attach port with extra devargs 0000:0a:00.0,cmd_fd=40,pd_handle=1
> +      Attaching a new port...
> +      EAL: Probe PCI driver: mlx5_pci (15b3:101d) device: 0000:0a:00.0 (socket 0)
> +      Port 0 is attached. Now total ports is 1
> +      Done
>
>  port map external Rx queue
>  ~~~~~~~~~~~~~~~~~~~~~~~~~~
The preceding list explains what the "mlx5 port attach" command does, and the following section provides a usage example.
I don't think this section should be part of that list.
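
In RST terms (a quick sketch, not wording from the patch), indentation decides
whether the paragraph that follows a list is rendered inside the last item or
as standalone text:

   #. Call the regular ``port attach`` function with updated identifier.

      For example, ...   <- indented: rendered as part of the list item

   #. Call the regular ``port attach`` function with updated identifier.

   For example, ...      <- at the left margin: a separate paragraph, outside the list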

Best regards,
Dariusz Sosnowski

^ permalink raw reply	[flat|nested] 15+ messages in thread

* RE: [PATCH 5/5] doc: use ordered lists
  2023-11-23 11:44 ` [PATCH 5/5] doc: use ordered lists David Marchand
  2023-11-23 11:53   ` Bruce Richardson
@ 2023-11-23 17:23   ` Dariusz Sosnowski
  1 sibling, 0 replies; 15+ messages in thread
From: Dariusz Sosnowski @ 2023-11-23 17:23 UTC (permalink / raw)
  To: David Marchand, dev
  Cc: NBU-Contact-Thomas Monjalon (EXTERNAL),
	Abdullah Sevincer, Hemant Agrawal, Sachin Saxena,
	Bruce Richardson, Konstantin Ananyev, Nithin Dabilpuram,
	Kiran Kumar K, Sunil Kumar Kori, Satha Rao, Gagandeep Singh,
	Apeksha Gupta, Yuying Zhang, Beilei Xing, Matan Azrad,
	Slava Ovsiienko, Ori Kam, Suanming Mou, Liron Himi,
	Anatoly Burakov, Jerin Jacob, Zhirun Yan, Rosen Xu,
	Tianfei Zhang, Cristian Dumitrescu, Maxime Coquelin, Chenbo Xia,
	Dmitry Kozlyuk, Narcisa Ana Maria Vasile, Dmitry Malloy,
	Pallavi Kadam

Hi,

> -----Original Message-----
> From: David Marchand <david.marchand@redhat.com>
> Sent: Thursday, November 23, 2023 12:44
> Subject: [PATCH 5/5] doc: use ordered lists
> 
> Prefer automatically ordered lists by using #.
> 
> Signed-off-by: David Marchand <david.marchand@redhat.com>
> ---
>  doc/guides/eventdevs/dlb2.rst                 | 29 ++++++-----
>  doc/guides/eventdevs/dpaa.rst                 |  2 +-
>  .../linux_gsg/nic_perf_intel_platform.rst     | 10 ++--
>  doc/guides/nics/cnxk.rst                      |  4 +-
>  doc/guides/nics/dpaa2.rst                     | 19 +++----
>  doc/guides/nics/enetc.rst                     |  6 +--
>  doc/guides/nics/enetfec.rst                   | 12 ++---
>  doc/guides/nics/i40e.rst                      | 16 +++---
>  doc/guides/nics/mlx4.rst                      | 32 ++++++------
>  doc/guides/nics/mlx5.rst                      | 18 +++----
>  doc/guides/nics/mvpp2.rst                     | 49 ++++++++++---------
>  doc/guides/nics/pfe.rst                       |  8 +--
>  doc/guides/nics/tap.rst                       | 14 +++---
>  doc/guides/platform/bluefield.rst             |  4 +-
>  doc/guides/platform/cnxk.rst                  | 26 +++++-----
>  doc/guides/platform/dpaa.rst                  | 14 +++---
>  doc/guides/platform/dpaa2.rst                 | 20 ++++----
>  doc/guides/platform/mlx5.rst                  | 14 +++---
>  doc/guides/platform/octeontx.rst              | 22 ++++-----
>  .../prog_guide/env_abstraction_layer.rst      | 10 ++--
>  doc/guides/prog_guide/graph_lib.rst           | 39 ++++++++-------
>  doc/guides/prog_guide/rawdev.rst              | 28 ++++++-----
>  doc/guides/prog_guide/rte_flow.rst            | 12 ++---
>  doc/guides/prog_guide/stack_lib.rst           |  8 +--
>  doc/guides/prog_guide/trace_lib.rst           | 12 ++---
>  doc/guides/rawdevs/ifpga.rst                  |  5 +-
>  doc/guides/sample_app_ug/ip_pipeline.rst      |  4 +-
>  doc/guides/sample_app_ug/pipeline.rst         |  4 +-
>  doc/guides/sample_app_ug/vdpa.rst             | 26 +++++-----
>  doc/guides/windows_gsg/run_apps.rst           |  8 +--
>  30 files changed, 250 insertions(+), 225 deletions(-)
Looks good to me. Thank you.

Acked-by: Dariusz Sosnowski <dsosnowski@nvidia.com>

Best regards,
Dariusz Sosnowski

^ permalink raw reply	[flat|nested] 15+ messages in thread

* Re: [PATCH 3/5] doc: fix some ordered lists
  2023-11-23 17:22   ` Dariusz Sosnowski
@ 2023-11-24  8:11     ` David Marchand
  0 siblings, 0 replies; 15+ messages in thread
From: David Marchand @ 2023-11-24  8:11 UTC (permalink / raw)
  To: Dariusz Sosnowski
  Cc: dev, NBU-Contact-Thomas Monjalon (EXTERNAL),
	stable, Slava Ovsiienko, Ori Kam, Suanming Mou, Matan Azrad,
	Maxime Coquelin, Chenbo Xia, Nithin Dabilpuram, Kiran Kumar K,
	Sunil Kumar Kori, Satha Rao, Jiayu Hu, Michael Baum,
	Jianfeng Tan, Yuanhan Liu, Tiwei Bie, Yinan Wang, Jerin Jacob,
	Mark Kavanagh, Konstantin Ananyev, John McNamara

On Thu, Nov 23, 2023 at 6:22 PM Dariusz Sosnowski <dsosnowski@nvidia.com> wrote:
>
> Hi,
>
> > diff --git a/doc/guides/nics/mlx5.rst b/doc/guides/nics/mlx5.rst
> > index 45379960f0..39a8c5d7b4 100644
> > --- a/doc/guides/nics/mlx5.rst
> > +++ b/doc/guides/nics/mlx5.rst
> > @@ -2326,19 +2326,18 @@ This command performs:
> >
> >  #. Call the regular ``port attach`` function with updated identifier.
> >
> > -For example, to attach a port whose PCI address is ``0000:0a:00.0``
> > -and its socket path is ``/var/run/import_ipc_socket``:
> > +   For example, to attach a port whose PCI address is ``0000:0a:00.0``
> > +   and its socket path is ``/var/run/import_ipc_socket``:
> >
> > -.. code-block:: console
> > -
> > -   testpmd> mlx5 port attach 0000:0a:00.0 socket=/var/run/import_ipc_socket
> > -   testpmd: MLX5 socket path is /var/run/import_ipc_socket
> > -   testpmd: Attach port with extra devargs 0000:0a:00.0,cmd_fd=40,pd_handle=1
> > -   Attaching a new port...
> > -   EAL: Probe PCI driver: mlx5_pci (15b3:101d) device: 0000:0a:00.0 (socket 0)
> > -   Port 0 is attached. Now total ports is 1
> > -   Done
> > +   .. code-block:: console
> > +
> > +      testpmd> mlx5 port attach 0000:0a:00.0 socket=/var/run/import_ipc_socket
> > +      testpmd: MLX5 socket path is /var/run/import_ipc_socket
> > +      testpmd: Attach port with extra devargs 0000:0a:00.0,cmd_fd=40,pd_handle=1
> > +      Attaching a new port...
> > +      EAL: Probe PCI driver: mlx5_pci (15b3:101d) device: 0000:0a:00.0 (socket 0)
> > +      Port 0 is attached. Now total ports is 1
> > +      Done
> >
> >  port map external Rx queue
> >  ~~~~~~~~~~~~~~~~~~~~~~~~~~
> The preceding list explains what "mlx5 port attach" command does and the following section provides an example of usage.
> I don't think this section should be a part of that list.

Re-reading this section, I agree.
I will drop this hunk.

Thanks Dariusz.


-- 
David Marchand


^ permalink raw reply	[flat|nested] 15+ messages in thread

* Re: [PATCH 0/5] Some documentation fixes
  2023-11-23 11:44 [PATCH 0/5] Some documentation fixes David Marchand
                   ` (4 preceding siblings ...)
  2023-11-23 11:44 ` [PATCH 5/5] doc: use ordered lists David Marchand
@ 2023-11-24 12:49 ` David Marchand
  5 siblings, 0 replies; 15+ messages in thread
From: David Marchand @ 2023-11-24 12:49 UTC (permalink / raw)
  To: David Marchand; +Cc: dev, thomas, Bruce Richardson, Dariusz Sosnowski

On Thu, Nov 23, 2023 at 12:44 PM David Marchand
<david.marchand@redhat.com> wrote:
>
> Not urgent for the release (especially the last patch which is scary by
> its size) but here are some cleanups in the documentation.
>
>
> --
> David Marchand
>
> David Marchand (5):
>   doc: remove restriction on ixgbe vector support
>   doc: enhance readability in memif example commands
>   doc: fix some ordered lists
>   doc: remove number of commands in vDPA guide
>   doc: use ordered lists

Series applied with minor changes following comments.
Thanks.


-- 
David Marchand


^ permalink raw reply	[flat|nested] 15+ messages in thread

end of thread, other threads:[~2023-11-24 12:50 UTC | newest]

Thread overview: 15+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2023-11-23 11:44 [PATCH 0/5] Some documentation fixes David Marchand
2023-11-23 11:44 ` [PATCH 1/5] doc: remove restriction on ixgbe vector support David Marchand
2023-11-23 11:45   ` Bruce Richardson
2023-11-23 11:44 ` [PATCH 2/5] doc: enhance readability in memif example commands David Marchand
2023-11-23 11:48   ` Bruce Richardson
2023-11-23 11:44 ` [PATCH 3/5] doc: fix some ordered lists David Marchand
2023-11-23 11:49   ` Bruce Richardson
2023-11-23 17:22   ` Dariusz Sosnowski
2023-11-24  8:11     ` David Marchand
2023-11-23 11:44 ` [PATCH 4/5] doc: remove number of commands in vDPA guide David Marchand
2023-11-23 12:43   ` Thomas Monjalon
2023-11-23 11:44 ` [PATCH 5/5] doc: use ordered lists David Marchand
2023-11-23 11:53   ` Bruce Richardson
2023-11-23 17:23   ` Dariusz Sosnowski
2023-11-24 12:49 ` [PATCH 0/5] Some documentation fixes David Marchand

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).