* [dpdk-dev] [PATCH v2 01/10] mempool/octeontx: add HW constants
2017-08-31 6:37 ` [dpdk-dev] [PATCH v2 00/10] Cavium Octeontx external mempool driver Santosh Shukla
@ 2017-08-31 6:37 ` Santosh Shukla
2017-08-31 6:37 ` [dpdk-dev] [PATCH v2 02/10] mempool/octeontx: add build and log infrastructure Santosh Shukla
` (10 subsequent siblings)
11 siblings, 0 replies; 78+ messages in thread
From: Santosh Shukla @ 2017-08-31 6:37 UTC (permalink / raw)
To: dev, olivier.matz
Cc: jerin.jacob, john.mcnamara, thomas, hemant.agrawal, Santosh Shukla
Add HW constants for the octeontx fpa mempool device.
Signed-off-by: Santosh Shukla <santosh.shukla@caviumnetworks.com>
Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
---
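[Editor's note: a minimal usage sketch, not part of the patch, showing how
the per-aura offset macros below compute a register address inside the VF
aperture; the bar0 value and helper name are hypothetical.]

    #include <stdint.h>

    /* FPA_VF_VHAURA_CNT(vaura) = 0x20120 | ((vaura) & 0xf) << 18 */
    static inline volatile uint64_t *
    aura_cnt_reg(void *bar0, unsigned int aura)
    {
            return (volatile uint64_t *)((uintptr_t)bar0 +
                            (0x20120 | ((aura) & 0xf) << 18));
    }
    /* e.g. aura 1 -> offset 0x20120 | (1 << 18) = 0x60120 */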
drivers/mempool/octeontx/octeontx_fpavf.h | 71 +++++++++++++++++++++++++++++++
1 file changed, 71 insertions(+)
create mode 100644 drivers/mempool/octeontx/octeontx_fpavf.h
diff --git a/drivers/mempool/octeontx/octeontx_fpavf.h b/drivers/mempool/octeontx/octeontx_fpavf.h
new file mode 100644
index 000000000..5c4ee04f7
--- /dev/null
+++ b/drivers/mempool/octeontx/octeontx_fpavf.h
@@ -0,0 +1,71 @@
+/*
+ * BSD LICENSE
+ *
+ * Copyright (C) 2017 Cavium Inc. All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ *
+ * * Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ * * Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in
+ * the documentation and/or other materials provided with the
+ * distribution.
+ * * Neither the name of Cavium Networks nor the names of its
+ * contributors may be used to endorse or promote products derived
+ * from this software without specific prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef __OCTEONTX_FPAVF_H__
+#define __OCTEONTX_FPAVF_H__
+
+/* fpa pool Vendor ID and Device ID */
+#define PCI_VENDOR_ID_CAVIUM 0x177D
+#define PCI_DEVICE_ID_OCTEONTX_FPA_VF 0xA053
+
+#define FPA_VF_MAX 32
+
+/* FPA VF register offsets */
+#define FPA_VF_INT(x) (0x200ULL | ((x) << 22))
+#define FPA_VF_INT_W1S(x) (0x210ULL | ((x) << 22))
+#define FPA_VF_INT_ENA_W1S(x) (0x220ULL | ((x) << 22))
+#define FPA_VF_INT_ENA_W1C(x) (0x230ULL | ((x) << 22))
+
+#define FPA_VF_VHPOOL_AVAILABLE(vhpool) (0x04150 | ((vhpool)&0x0))
+#define FPA_VF_VHPOOL_THRESHOLD(vhpool) (0x04160 | ((vhpool)&0x0))
+#define FPA_VF_VHPOOL_START_ADDR(vhpool) (0x04200 | ((vhpool)&0x0))
+#define FPA_VF_VHPOOL_END_ADDR(vhpool) (0x04210 | ((vhpool)&0x0))
+
+#define FPA_VF_VHAURA_CNT(vaura) (0x20120 | ((vaura)&0xf)<<18)
+#define FPA_VF_VHAURA_CNT_ADD(vaura) (0x20128 | ((vaura)&0xf)<<18)
+#define FPA_VF_VHAURA_CNT_LIMIT(vaura) (0x20130 | ((vaura)&0xf)<<18)
+#define FPA_VF_VHAURA_CNT_THRESHOLD(vaura) (0x20140 | ((vaura)&0xf)<<18)
+#define FPA_VF_VHAURA_OP_ALLOC(vaura) (0x30000 | ((vaura)&0xf)<<18)
+#define FPA_VF_VHAURA_OP_FREE(vaura) (0x38000 | ((vaura)&0xf)<<18)
+
+#define FPA_VF_FREE_ADDRS_S(x, y, z) \
+ ((x) | (((y) & 0x1ff) << 3) | ((((z) & 1)) << 14))
+
+/* FPA VF register offsets from VF_BAR4, size 2 MByte */
+#define FPA_VF_MSIX_VEC_ADDR 0x00000
+#define FPA_VF_MSIX_VEC_CTL 0x00008
+#define FPA_VF_MSIX_PBA 0xF0000
+
+#define FPA_VF0_APERTURE_SHIFT 22
+#define FPA_AURA_SET_SIZE 16
+
+#endif /* __OCTEONTX_FPAVF_H__ */
--
2.11.0
* [dpdk-dev] [PATCH v2 02/10] mempool/octeontx: add build and log infrastructure
2017-08-31 6:37 ` [dpdk-dev] [PATCH v2 00/10] Cavium Octeontx external mempool driver Santosh Shukla
2017-08-31 6:37 ` [dpdk-dev] [PATCH v2 01/10] mempool/octeontx: add HW constants Santosh Shukla
@ 2017-08-31 6:37 ` Santosh Shukla
2017-08-31 6:37 ` [dpdk-dev] [PATCH v2 03/10] mempool/octeontx: probe fpavf pcie devices Santosh Shukla
` (9 subsequent siblings)
11 siblings, 0 replies; 78+ messages in thread
From: Santosh Shukla @ 2017-08-31 6:37 UTC (permalink / raw)
To: dev, olivier.matz
Cc: jerin.jacob, john.mcnamara, thomas, hemant.agrawal, Santosh Shukla
Signed-off-by: Santosh Shukla <santosh.shukla@caviumnetworks.com>
Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
---
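[Editor's note: a short usage sketch for the log macros added below; the
output prefix shown is an assumption based on RTE_LOG's PMD logtype.]

    /* With CONFIG_RTE_LIBRTE_OCTEONTX_MEMPOOL_DEBUG=y this prints e.g.
     *   PMD: octeontx_fpavf_setup() line 42: pool 3 ready
     * With debug disabled, fpavf_log_dbg()/fpavf_log_info() expand to
     * nothing, while fpavf_log_err() stays enabled unconditionally.
     */
    fpavf_log_dbg("pool %d ready", 3);
    fpavf_log_err("pool setup failed: %d", -EINVAL);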
config/common_base | 6 +++
drivers/Makefile | 5 +-
drivers/mempool/Makefile | 2 +
drivers/mempool/octeontx/Makefile | 60 ++++++++++++++++++++++
drivers/mempool/octeontx/octeontx_fpavf.c | 31 +++++++++++
drivers/mempool/octeontx/octeontx_fpavf.h | 19 +++++++
.../octeontx/rte_mempool_octeontx_version.map | 4 ++
mk/rte.app.mk | 1 +
8 files changed, 126 insertions(+), 2 deletions(-)
create mode 100644 drivers/mempool/octeontx/Makefile
create mode 100644 drivers/mempool/octeontx/octeontx_fpavf.c
create mode 100644 drivers/mempool/octeontx/rte_mempool_octeontx_version.map
diff --git a/config/common_base b/config/common_base
index 5e97a08b6..0720e6569 100644
--- a/config/common_base
+++ b/config/common_base
@@ -558,6 +558,12 @@ CONFIG_RTE_DRIVER_MEMPOOL_RING=y
CONFIG_RTE_DRIVER_MEMPOOL_STACK=y
#
+# Compile PMD for octeontx fpa mempool device
+#
+CONFIG_RTE_LIBRTE_OCTEONTX_MEMPOOL=y
+CONFIG_RTE_LIBRTE_OCTEONTX_MEMPOOL_DEBUG=n
+
+#
# Compile librte_mbuf
#
CONFIG_RTE_LIBRTE_MBUF=y
diff --git a/drivers/Makefile b/drivers/Makefile
index 7fef66d71..c4483faa7 100644
--- a/drivers/Makefile
+++ b/drivers/Makefile
@@ -32,13 +32,14 @@
include $(RTE_SDK)/mk/rte.vars.mk
DIRS-y += bus
+DEPDIRS-event := bus
+DIRS-$(CONFIG_RTE_LIBRTE_EVENTDEV) += event
DIRS-y += mempool
DEPDIRS-mempool := bus
+DEPDIRS-mempool := event
DIRS-y += net
DEPDIRS-net := bus mempool
DIRS-$(CONFIG_RTE_LIBRTE_CRYPTODEV) += crypto
DEPDIRS-crypto := mempool
-DIRS-$(CONFIG_RTE_LIBRTE_EVENTDEV) += event
-DEPDIRS-event := bus
include $(RTE_SDK)/mk/rte.subdir.mk
diff --git a/drivers/mempool/Makefile b/drivers/mempool/Makefile
index efd55f23e..e2a701089 100644
--- a/drivers/mempool/Makefile
+++ b/drivers/mempool/Makefile
@@ -38,5 +38,7 @@ DIRS-$(CONFIG_RTE_DRIVER_MEMPOOL_RING) += ring
DEPDIRS-ring = $(core-libs)
DIRS-$(CONFIG_RTE_DRIVER_MEMPOOL_STACK) += stack
DEPDIRS-stack = $(core-libs)
+DIRS-$(CONFIG_RTE_LIBRTE_OCTEONTX_MEMPOOL) += octeontx
+DEPDIRS-octeontx = $(core-libs) librte_pmd_octeontx_ssovf
include $(RTE_SDK)/mk/rte.subdir.mk
diff --git a/drivers/mempool/octeontx/Makefile b/drivers/mempool/octeontx/Makefile
new file mode 100644
index 000000000..55ca1d944
--- /dev/null
+++ b/drivers/mempool/octeontx/Makefile
@@ -0,0 +1,60 @@
+# BSD LICENSE
+#
+# Copyright(c) 2017 Cavium Inc. All rights reserved.
+#
+# Redistribution and use in source and binary forms, with or without
+# modification, are permitted provided that the following conditions
+# are met:
+#
+# * Redistributions of source code must retain the above copyright
+# notice, this list of conditions and the following disclaimer.
+# * Redistributions in binary form must reproduce the above copyright
+# notice, this list of conditions and the following disclaimer in
+# the documentation and/or other materials provided with the
+# distribution.
+# * Neither the name of Cavium Networks nor the names of its
+# contributors may be used to endorse or promote products derived
+# from this software without specific prior written permission.
+#
+# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+#
+
+include $(RTE_SDK)/mk/rte.vars.mk
+
+#
+# library name
+#
+LIB = librte_mempool_octeontx.a
+
+ifeq ($(CONFIG_RTE_LIBRTE_OCTEONTX_MEMPOOL_DEBUG),y)
+CFLAGS += -O0 -g
+else
+CFLAGS += -O3
+CFLAGS += $(WERROR_FLAGS)
+endif
+
+EXPORT_MAP := rte_mempool_octeontx_version.map
+
+LIBABIVER := 1
+
+#
+# all source are stored in SRCS-y
+#
+SRCS-$(CONFIG_RTE_LIBRTE_OCTEONTX_MEMPOOL) += octeontx_fpavf.c
+
+# this lib depends upon:
+DEPDIRS-$(CONFIG_RTE_LIBRTE_OCTEONTX_MEMPOOL) += lib/librte_mbuf
+
+LDLIBS += -lrte_pmd_octeontx_ssovf
+
+include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/drivers/mempool/octeontx/octeontx_fpavf.c b/drivers/mempool/octeontx/octeontx_fpavf.c
new file mode 100644
index 000000000..9bb7759c0
--- /dev/null
+++ b/drivers/mempool/octeontx/octeontx_fpavf.c
@@ -0,0 +1,31 @@
+/*
+ * BSD LICENSE
+ *
+ * Copyright (C) 2017 Cavium Inc. All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ *
+ * * Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ * * Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in
+ * the documentation and/or other materials provided with the
+ * distribution.
+ * * Neither the name of Cavium Networks nor the names of its
+ * contributors may be used to endorse or promote products derived
+ * from this software without specific prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
diff --git a/drivers/mempool/octeontx/octeontx_fpavf.h b/drivers/mempool/octeontx/octeontx_fpavf.h
index 5c4ee04f7..1c703725c 100644
--- a/drivers/mempool/octeontx/octeontx_fpavf.h
+++ b/drivers/mempool/octeontx/octeontx_fpavf.h
@@ -33,6 +33,25 @@
#ifndef __OCTEONTX_FPAVF_H__
#define __OCTEONTX_FPAVF_H__
+#include <rte_debug.h>
+
+#ifdef RTE_LIBRTE_OCTEONTX_MEMPOOL_DEBUG
+#define fpavf_log_info(fmt, args...) \
+ RTE_LOG(INFO, PMD, "%s() line %u: " fmt "\n", \
+ __func__, __LINE__, ## args)
+#define fpavf_log_dbg(fmt, args...) \
+ RTE_LOG(DEBUG, PMD, "%s() line %u: " fmt "\n", \
+ __func__, __LINE__, ## args)
+#else
+#define fpavf_log_info(fmt, args...)
+#define fpavf_log_dbg(fmt, args...)
+#endif
+
+#define fpavf_func_trace fpavf_log_dbg
+#define fpavf_log_err(fmt, args...) \
+ RTE_LOG(ERR, PMD, "%s() line %u: " fmt "\n", \
+ __func__, __LINE__, ## args)
+
/* fpa pool Vendor ID and Device ID */
#define PCI_VENDOR_ID_CAVIUM 0x177D
#define PCI_DEVICE_ID_OCTEONTX_FPA_VF 0xA053
diff --git a/drivers/mempool/octeontx/rte_mempool_octeontx_version.map b/drivers/mempool/octeontx/rte_mempool_octeontx_version.map
new file mode 100644
index 000000000..a70bd197b
--- /dev/null
+++ b/drivers/mempool/octeontx/rte_mempool_octeontx_version.map
@@ -0,0 +1,4 @@
+DPDK_17.11 {
+
+ local: *;
+};
diff --git a/mk/rte.app.mk b/mk/rte.app.mk
index c25fdd9f5..55e98f222 100644
--- a/mk/rte.app.mk
+++ b/mk/rte.app.mk
@@ -175,6 +175,7 @@ _LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_SKELETON_EVENTDEV) += -lrte_pmd_skeleton_event
_LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_SW_EVENTDEV) += -lrte_pmd_sw_event
_LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_OCTEONTX_SSOVF) += -lrte_pmd_octeontx_ssovf
_LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_DPAA2_EVENTDEV) += -lrte_pmd_dpaa2_event
+_LDLIBS-$(CONFIG_RTE_LIBRTE_OCTEONTX_MEMPOOL) += -lrte_mempool_octeontx
endif # CONFIG_RTE_LIBRTE_EVENTDEV
ifeq ($(CONFIG_RTE_LIBRTE_DPAA2_PMD),y)
--
2.11.0
* [dpdk-dev] [PATCH v2 03/10] mempool/octeontx: probe fpavf pcie devices
2017-08-31 6:37 ` [dpdk-dev] [PATCH v2 00/10] Cavium Octeontx external mempool driver Santosh Shukla
2017-08-31 6:37 ` [dpdk-dev] [PATCH v2 01/10] mempool/octeontx: add HW constants Santosh Shukla
2017-08-31 6:37 ` [dpdk-dev] [PATCH v2 02/10] mempool/octeontx: add build and log infrastructure Santosh Shukla
@ 2017-08-31 6:37 ` Santosh Shukla
2017-08-31 6:37 ` [dpdk-dev] [PATCH v2 04/10] mempool/octeontx: implement pool alloc Santosh Shukla
` (8 subsequent siblings)
11 siblings, 0 replies; 78+ messages in thread
From: Santosh Shukla @ 2017-08-31 6:37 UTC (permalink / raw)
To: dev, olivier.matz
Cc: jerin.jacob, john.mcnamara, thomas, hemant.agrawal, Santosh Shukla
A mempool device is a set of PCIe VFs.
On Octeontx HW, each mempool device is enumerated as a
separate SR-IOV VF PCIe device.
To expose it as a mempool device:
on PCIe probe, the driver stores the information associated with the
PCIe device, and later, upon an application pool request
(e.g. rte_mempool_create_empty), the infrastructure creates a pool device
from the earlier probed PCIe VF devices.
Signed-off-by: Santosh Shukla <santosh.shukla@caviumnetworks.com>
Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
---
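[Editor's note: a hedged sketch of the application-side flow this probe
enables once the ops in the later patches land; the ops name
"octeontx_fpavf" is registered in patch 04, the rest is the standard
rte_mempool API.]

    struct rte_mempool *mp;

    /* PCIe probe has already recorded each FPA VF in fpadev.pool[]. */
    mp = rte_mempool_create_empty("ex_pool", 8192,
                                  RTE_MBUF_DEFAULT_BUF_SIZE,
                                  0, 0, SOCKET_ID_ANY, 0);
    rte_mempool_set_ops_byname(mp, "octeontx_fpavf", NULL);
    /* Populating the pool invokes the driver's .alloc, which picks a
     * free probed VF (gpool) and programs it. */
    rte_mempool_populate_default(mp);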
drivers/mempool/octeontx/octeontx_fpavf.c | 151 ++++++++++++++++++++++++++++++
drivers/mempool/octeontx/octeontx_fpavf.h | 39 ++++++++
2 files changed, 190 insertions(+)
diff --git a/drivers/mempool/octeontx/octeontx_fpavf.c b/drivers/mempool/octeontx/octeontx_fpavf.c
index 9bb7759c0..0b4a9357f 100644
--- a/drivers/mempool/octeontx/octeontx_fpavf.c
+++ b/drivers/mempool/octeontx/octeontx_fpavf.c
@@ -29,3 +29,154 @@
* (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
* OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
*/
+
+#include <stdlib.h>
+#include <string.h>
+#include <stdbool.h>
+#include <stdio.h>
+#include <unistd.h>
+#include <fcntl.h>
+#include <errno.h>
+#include <sys/mman.h>
+
+#include <rte_atomic.h>
+#include <rte_eal.h>
+#include <rte_pci.h>
+#include <rte_errno.h>
+#include <rte_memory.h>
+#include <rte_malloc.h>
+#include <rte_spinlock.h>
+
+#include "octeontx_fpavf.h"
+
+struct fpavf_res {
+ void *pool_stack_base;
+ void *bar0;
+ uint64_t stack_ln_ptr;
+ uint16_t domain_id;
+ uint16_t vf_id; /* gpool_id */
+ uint16_t sz128; /* Block size in cache lines */
+ bool is_inuse;
+};
+
+struct octeontx_fpadev {
+ rte_spinlock_t lock;
+ uint8_t total_gpool_cnt;
+ struct fpavf_res pool[FPA_VF_MAX];
+};
+
+static struct octeontx_fpadev fpadev;
+
+static void
+octeontx_fpavf_setup(void)
+{
+ uint8_t i;
+ static bool init_once;
+
+ if (!init_once) {
+ rte_spinlock_init(&fpadev.lock);
+ fpadev.total_gpool_cnt = 0;
+
+ for (i = 0; i < FPA_VF_MAX; i++) {
+
+ fpadev.pool[i].domain_id = ~0;
+ fpadev.pool[i].stack_ln_ptr = 0;
+ fpadev.pool[i].sz128 = 0;
+ fpadev.pool[i].bar0 = NULL;
+ fpadev.pool[i].pool_stack_base = NULL;
+ fpadev.pool[i].is_inuse = false;
+ }
+ init_once = 1;
+ }
+}
+
+static int
+octeontx_fpavf_identify(void *bar0)
+{
+ uint64_t val;
+ uint16_t domain_id;
+ uint16_t vf_id;
+ uint64_t stack_ln_ptr;
+
+ val = fpavf_read64((void *)((uintptr_t)bar0 +
+ FPA_VF_VHAURA_CNT_THRESHOLD(0)));
+
+ domain_id = (val >> 8) & 0xffff;
+ vf_id = (val >> 24) & 0xffff;
+
+ stack_ln_ptr = fpavf_read64((void *)((uintptr_t)bar0 +
+ FPA_VF_VHPOOL_THRESHOLD(0)));
+ if (vf_id >= FPA_VF_MAX) {
+ fpavf_log_err("vf_id(%d) greater than max vf (32)\n", vf_id);
+ return -1;
+ }
+
+ if (fpadev.pool[vf_id].is_inuse) {
+ fpavf_log_err("vf_id %d is_inuse\n", vf_id);
+ return -1;
+ }
+
+ fpadev.pool[vf_id].domain_id = domain_id;
+ fpadev.pool[vf_id].vf_id = vf_id;
+ fpadev.pool[vf_id].bar0 = bar0;
+ fpadev.pool[vf_id].stack_ln_ptr = stack_ln_ptr;
+
+ /* SUCCESS */
+ return vf_id;
+}
+
+/* FPAVF pcie device aka mempool probe */
+static int
+fpavf_probe(struct rte_pci_driver *pci_drv, struct rte_pci_device *pci_dev)
+{
+ uint8_t *idreg;
+ int res;
+ struct fpavf_res *fpa;
+
+ RTE_SET_USED(pci_drv);
+ RTE_SET_USED(fpa);
+
+ /* For secondary processes, the primary has done all the work */
+ if (rte_eal_process_type() != RTE_PROC_PRIMARY)
+ return 0;
+
+ if (pci_dev->mem_resource[0].addr == NULL) {
+ fpavf_log_err("Empty bars %p ", pci_dev->mem_resource[0].addr);
+ return -ENODEV;
+ }
+ idreg = pci_dev->mem_resource[0].addr;
+
+ octeontx_fpavf_setup();
+
+ res = octeontx_fpavf_identify(idreg);
+ if (res < 0)
+ return -1;
+
+ fpa = &fpadev.pool[res];
+ fpadev.total_gpool_cnt++;
+ rte_wmb();
+
+ fpavf_log_dbg("total_fpavfs %d bar0 %p domain %d vf %d stk_ln_ptr 0x%x",
+ fpadev.total_gpool_cnt, fpa->bar0, fpa->domain_id,
+ fpa->vf_id, (unsigned int)fpa->stack_ln_ptr);
+
+ return 0;
+}
+
+static const struct rte_pci_id pci_fpavf_map[] = {
+ {
+ RTE_PCI_DEVICE(PCI_VENDOR_ID_CAVIUM,
+ PCI_DEVICE_ID_OCTEONTX_FPA_VF)
+ },
+ {
+ .vendor_id = 0,
+ },
+};
+
+static struct rte_pci_driver pci_fpavf = {
+ .id_table = pci_fpavf_map,
+ .drv_flags = RTE_PCI_DRV_NEED_MAPPING | RTE_PCI_DRV_IOVA_AS_VA,
+ .probe = fpavf_probe,
+};
+
+RTE_PMD_REGISTER_PCI(octeontx_fpavf, pci_fpavf);
diff --git a/drivers/mempool/octeontx/octeontx_fpavf.h b/drivers/mempool/octeontx/octeontx_fpavf.h
index 1c703725c..c43b1a7d2 100644
--- a/drivers/mempool/octeontx/octeontx_fpavf.h
+++ b/drivers/mempool/octeontx/octeontx_fpavf.h
@@ -34,6 +34,7 @@
#define __OCTEONTX_FPAVF_H__
#include <rte_debug.h>
+#include <rte_io.h>
#ifdef RTE_LIBRTE_OCTEONTX_MEMPOOL_DEBUG
#define fpavf_log_info(fmt, args...) \
@@ -87,4 +88,42 @@
#define FPA_VF0_APERTURE_SHIFT 22
#define FPA_AURA_SET_SIZE 16
+
+/*
+ * In Cavium OcteonTX SoC, all accesses to the device registers are
+ * implicitly strongly ordered. So, the relaxed versions of the IO operations
+ * are safe to use without any IO memory barriers.
+ */
+#define fpavf_read64 rte_read64_relaxed
+#define fpavf_write64 rte_write64_relaxed
+
+/* ARM64 specific functions */
+#if defined(RTE_ARCH_ARM64)
+#define fpavf_load_pair(val0, val1, addr) ({ \
+ asm volatile( \
+ "ldp %x[x0], %x[x1], [%x[p1]]" \
+ :[x0]"=r"(val0), [x1]"=r"(val1) \
+ :[p1]"r"(addr) \
+ ); })
+
+#define fpavf_store_pair(val0, val1, addr) ({ \
+ asm volatile( \
+ "stp %x[x0], %x[x1], [%x[p1]]" \
+ ::[x0]"r"(val0), [x1]"r"(val1), [p1]"r"(addr) \
+ ); })
+#else /* Unoptimized functions for building on non-arm64 arch */
+
+#define fpavf_load_pair(val0, val1, addr) \
+do { \
+ val0 = rte_read64(addr); \
+ val1 = rte_read64(((uint8_t *)addr) + 8); \
+} while (0)
+
+#define fpavf_store_pair(val0, val1, addr) \
+do { \
+ rte_write64(val0, addr); \
+ rte_write64(val1, (((uint8_t *)addr) + 8)); \
+} while (0)
+#endif
+
#endif /* __OCTEONTX_FPAVF_H__ */
--
2.11.0
* [dpdk-dev] [PATCH v2 04/10] mempool/octeontx: implement pool alloc
2017-08-31 6:37 ` [dpdk-dev] [PATCH v2 00/10] Cavium Octeontx external mempool driver Santosh Shukla
` (2 preceding siblings ...)
2017-08-31 6:37 ` [dpdk-dev] [PATCH v2 03/10] mempool/octeontx: probe fpavf pcie devices Santosh Shukla
@ 2017-08-31 6:37 ` Santosh Shukla
2017-10-06 20:51 ` Thomas Monjalon
2017-08-31 6:37 ` [dpdk-dev] [PATCH v2 05/10] mempool/octeontx: implement pool free Santosh Shukla
` (7 subsequent siblings)
11 siblings, 1 reply; 78+ messages in thread
From: Santosh Shukla @ 2017-08-31 6:37 UTC (permalink / raw)
To: dev, olivier.matz
Cc: jerin.jacob, john.mcnamara, thomas, hemant.agrawal, Santosh Shukla
Upon a pool allocation request by the application, Octeontx FPA alloc
does the following:
- Gets a free pool from the pci fpavf array.
- Uses mbox to communicate the following to the fpapf driver:
* gpool-id
* pool block_sz
* alignment
- Programs the fpavf pool boundary.
Signed-off-by: Santosh Shukla <santosh.shukla@caviumnetworks.com>
Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
---
v1 --> v2:
- removed the for-loop iterator approach for gpool2handle translation;
the gpool-id now resides in the pool handle (5 LSBs).
- removed the rte_octeontx_bufpool_gpool() global API, keeping an
inlined function named octeontx_fpa_bufpool_gpool().
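[Editor's note: a worked sketch, not part of the patch, of the handle
encoding described above; the BAR address is hypothetical. Since BAR0 is
megabytes-aligned, its low 5 bits are free to carry the gpool-id.]

    uintptr_t bar0   = 0x840200000000UL;        /* hypothetical VF BAR0 */
    uint16_t  gpool  = 5;
    uintptr_t handle = bar0 | gpool;     /* octeontx_fpa_gpool2handle() */

    uint8_t   id  = (uint8_t)handle & FPA_GPOOL_MASK;    /* -> 5 */
    uintptr_t bar = handle & ~(uint64_t)FPA_GPOOL_MASK;  /* -> bar0 */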
drivers/mempool/octeontx/Makefile | 1 +
drivers/mempool/octeontx/octeontx_fpavf.c | 514 ++++++++++++++++++++++++
drivers/mempool/octeontx/octeontx_fpavf.h | 17 +
drivers/mempool/octeontx/rte_mempool_octeontx.c | 88 ++++
4 files changed, 620 insertions(+)
create mode 100644 drivers/mempool/octeontx/rte_mempool_octeontx.c
diff --git a/drivers/mempool/octeontx/Makefile b/drivers/mempool/octeontx/Makefile
index 55ca1d944..9c3389608 100644
--- a/drivers/mempool/octeontx/Makefile
+++ b/drivers/mempool/octeontx/Makefile
@@ -51,6 +51,7 @@ LIBABIVER := 1
# all source are stored in SRCS-y
#
SRCS-$(CONFIG_RTE_LIBRTE_OCTEONTX_MEMPOOL) += octeontx_fpavf.c
+SRCS-$(CONFIG_RTE_LIBRTE_OCTEONTX_MEMPOOL) += rte_mempool_octeontx.c
# this lib depends upon:
DEPDIRS-$(CONFIG_RTE_LIBRTE_OCTEONTX_MEMPOOL) += lib/librte_mbuf
diff --git a/drivers/mempool/octeontx/octeontx_fpavf.c b/drivers/mempool/octeontx/octeontx_fpavf.c
index 0b4a9357f..c0c9d8325 100644
--- a/drivers/mempool/octeontx/octeontx_fpavf.c
+++ b/drivers/mempool/octeontx/octeontx_fpavf.c
@@ -46,9 +46,75 @@
#include <rte_memory.h>
#include <rte_malloc.h>
#include <rte_spinlock.h>
+#include <rte_mbuf.h>
+#include <rte_pmd_octeontx_ssovf.h>
#include "octeontx_fpavf.h"
+/* FPA Mbox Message */
+#define IDENTIFY 0x0
+
+#define FPA_CONFIGSET 0x1
+#define FPA_CONFIGGET 0x2
+#define FPA_START_COUNT 0x3
+#define FPA_STOP_COUNT 0x4
+#define FPA_ATTACHAURA 0x5
+#define FPA_DETACHAURA 0x6
+#define FPA_SETAURALVL 0x7
+#define FPA_GETAURALVL 0x8
+
+#define FPA_COPROC 0x1
+
+/* fpa mbox struct */
+struct octeontx_mbox_fpa_cfg {
+ int aid;
+ uint64_t pool_cfg;
+ uint64_t pool_stack_base;
+ uint64_t pool_stack_end;
+ uint64_t aura_cfg;
+};
+
+struct __attribute__((__packed__)) gen_req {
+ uint32_t value;
+};
+
+struct __attribute__((__packed__)) idn_req {
+ uint8_t domain_id;
+};
+
+struct __attribute__((__packed__)) gen_resp {
+ uint16_t domain_id;
+ uint16_t vfid;
+};
+
+struct __attribute__((__packed__)) dcfg_resp {
+ uint8_t sso_count;
+ uint8_t ssow_count;
+ uint8_t fpa_count;
+ uint8_t pko_count;
+ uint8_t tim_count;
+ uint8_t net_port_count;
+ uint8_t virt_port_count;
+};
+
+#define FPA_MAX_POOL 32
+#define FPA_PF_PAGE_SZ 4096
+
+#define FPA_LN_SIZE 128
+#define FPA_ROUND_UP(x, size) \
+ ((((unsigned long)(x)) + size-1) & (~(size-1)))
+#define FPA_OBJSZ_2_CACHE_LINE(sz) (((sz) + RTE_CACHE_LINE_MASK) >> 7)
+#define FPA_CACHE_LINE_2_OBJSZ(sz) ((sz) << 7)
+
+#define POOL_ENA (0x1 << 0)
+#define POOL_DIS (0x0 << 0)
+#define POOL_SET_NAT_ALIGN (0x1 << 1)
+#define POOL_DIS_NAT_ALIGN (0x0 << 1)
+#define POOL_STYPE(x) (((x) & 0x1) << 2)
+#define POOL_LTYPE(x) (((x) & 0x3) << 3)
+#define POOL_BUF_OFFSET(x) (((x) & 0x7fffULL) << 16)
+#define POOL_BUF_SIZE(x) (((x) & 0x7ffULL) << 32)
+
struct fpavf_res {
void *pool_stack_base;
void *bar0;
@@ -67,6 +133,454 @@ struct octeontx_fpadev {
static struct octeontx_fpadev fpadev;
+/* lock is taken by caller */
+static int
+octeontx_fpa_gpool_alloc(unsigned int object_size)
+{
+ struct fpavf_res *res = NULL;
+ uint16_t gpool;
+ unsigned int sz128;
+
+ sz128 = FPA_OBJSZ_2_CACHE_LINE(object_size);
+
+ for (gpool = 0; gpool < FPA_VF_MAX; gpool++) {
+
+ /* Skip a VF that is not mapped or is in use */
+ if ((fpadev.pool[gpool].bar0 == NULL) ||
+ (fpadev.pool[gpool].is_inuse == true))
+ continue;
+
+ res = &fpadev.pool[gpool];
+
+ RTE_ASSERT(res->domain_id != (uint16_t)~0);
+ RTE_ASSERT(res->vf_id != (uint16_t)~0);
+ RTE_ASSERT(res->stack_ln_ptr != 0);
+
+ if (res->sz128 == 0) {
+ res->sz128 = sz128;
+
+ fpavf_log_dbg("gpool %d blk_sz %d\n", gpool, sz128);
+ return gpool;
+ }
+ }
+
+ return -ENOSPC;
+}
+
+/* lock is taken by caller */
+static __rte_always_inline uintptr_t
+octeontx_fpa_gpool2handle(uint16_t gpool)
+{
+ struct fpavf_res *res = NULL;
+
+ RTE_ASSERT(gpool < FPA_VF_MAX);
+
+ res = &fpadev.pool[gpool];
+ if (unlikely(res == NULL))
+ return 0;
+
+ return (uintptr_t)res->bar0 | gpool;
+}
+
+static __rte_always_inline bool
+octeontx_fpa_handle_valid(uintptr_t handle)
+{
+ struct fpavf_res *res = NULL;
+ uint8_t gpool;
+ int i;
+ bool ret = false;
+
+ if (unlikely(!handle))
+ return ret;
+
+ /* get the gpool */
+ gpool = octeontx_fpa_bufpool_gpool(handle);
+
+ /* get the bar address */
+ handle &= ~(uint64_t)FPA_GPOOL_MASK;
+ for (i = 0; i < FPA_VF_MAX; i++) {
+ if ((uintptr_t)fpadev.pool[i].bar0 != handle)
+ continue;
+
+ /* validate gpool */
+ if (gpool != i)
+ return false;
+
+ res = &fpadev.pool[i];
+
+ if (res->sz128 == 0 || res->domain_id == (uint16_t)~0 ||
+ res->stack_ln_ptr == 0)
+ ret = false;
+ else
+ ret = true;
+ break;
+ }
+
+ return ret;
+}
+
+static int
+octeontx_fpapf_pool_setup(unsigned int gpool, unsigned int buf_size,
+ signed short buf_offset, unsigned int max_buf_count)
+{
+ void *memptr = NULL;
+ phys_addr_t phys_addr;
+ unsigned int memsz;
+ struct fpavf_res *fpa = NULL;
+ uint64_t reg;
+ struct octeontx_mbox_hdr hdr;
+ struct dcfg_resp resp;
+ struct octeontx_mbox_fpa_cfg cfg;
+ int ret = -1;
+
+ fpa = &fpadev.pool[gpool];
+ memsz = FPA_ROUND_UP(max_buf_count / fpa->stack_ln_ptr, FPA_LN_SIZE) *
+ FPA_LN_SIZE;
+
+ /* Round-up to page size */
+ memsz = (memsz + FPA_PF_PAGE_SZ - 1) & ~(uintptr_t)(FPA_PF_PAGE_SZ-1);
+ memptr = rte_malloc(NULL, memsz, RTE_CACHE_LINE_SIZE);
+ if (memptr == NULL) {
+ ret = -ENOMEM;
+ goto err;
+ }
+
+ /* Configure stack */
+ fpa->pool_stack_base = memptr;
+ phys_addr = rte_malloc_virt2phy(memptr);
+
+ buf_size /= FPA_LN_SIZE;
+
+ /* POOL setup */
+ hdr.coproc = FPA_COPROC;
+ hdr.msg = FPA_CONFIGSET;
+ hdr.vfid = fpa->vf_id;
+ hdr.res_code = 0;
+
+ buf_offset /= FPA_LN_SIZE;
+ reg = POOL_BUF_SIZE(buf_size) | POOL_BUF_OFFSET(buf_offset) |
+ POOL_LTYPE(0x2) | POOL_STYPE(0) | POOL_SET_NAT_ALIGN |
+ POOL_ENA;
+
+ cfg.aid = 0;
+ cfg.pool_cfg = reg;
+ cfg.pool_stack_base = phys_addr;
+ cfg.pool_stack_end = phys_addr + memsz;
+ cfg.aura_cfg = (1 << 9);
+
+ ret = octeontx_ssovf_mbox_send(&hdr, &cfg,
+ sizeof(struct octeontx_mbox_fpa_cfg),
+ &resp, sizeof(resp));
+ if (ret < 0) {
+ ret = -EACCES;
+ goto err;
+ }
+
+ fpavf_log_dbg(" vfid %d gpool %d aid %d pool_cfg 0x%x pool_stack_base %" PRIx64 " pool_stack_end %" PRIx64" aura_cfg %" PRIx64 "\n",
+ fpa->vf_id, gpool, cfg.aid, (unsigned int)cfg.pool_cfg,
+ cfg.pool_stack_base, cfg.pool_stack_end, cfg.aura_cfg);
+
+ /* Now pool is in_use */
+ fpa->is_inuse = true;
+
+err:
+ if (ret < 0)
+ rte_free(memptr);
+
+ return ret;
+}
+
+static int
+octeontx_fpapf_pool_destroy(unsigned int gpool_index)
+{
+ struct octeontx_mbox_hdr hdr;
+ struct dcfg_resp resp;
+ struct octeontx_mbox_fpa_cfg cfg;
+ struct fpavf_res *fpa = NULL;
+ int ret = -1;
+
+ fpa = &fpadev.pool[gpool_index];
+
+ hdr.coproc = FPA_COPROC;
+ hdr.msg = FPA_CONFIGSET;
+ hdr.vfid = fpa->vf_id;
+ hdr.res_code = 0;
+
+ /* reset and free the pool */
+ cfg.aid = 0;
+ cfg.pool_cfg = 0;
+ cfg.pool_stack_base = 0;
+ cfg.pool_stack_end = 0;
+ cfg.aura_cfg = 0;
+
+ ret = octeontx_ssovf_mbox_send(&hdr, &cfg,
+ sizeof(struct octeontx_mbox_fpa_cfg),
+ &resp, sizeof(resp));
+ if (ret < 0) {
+ ret = -EACCES;
+ goto err;
+ }
+
+ ret = 0;
+err:
+ /* in any case, free the pool stack memory */
+ rte_free(fpa->pool_stack_base);
+ fpa->pool_stack_base = NULL;
+ return ret;
+}
+
+static int
+octeontx_fpapf_aura_attach(unsigned int gpool_index)
+{
+ struct octeontx_mbox_hdr hdr;
+ struct dcfg_resp resp;
+ struct octeontx_mbox_fpa_cfg cfg;
+ int ret = 0;
+
+ if (gpool_index >= FPA_MAX_POOL) {
+ ret = -EINVAL;
+ goto err;
+ }
+ hdr.coproc = FPA_COPROC;
+ hdr.msg = FPA_ATTACHAURA;
+ hdr.vfid = gpool_index;
+ hdr.res_code = 0;
+ memset(&cfg, 0x0, sizeof(struct octeontx_mbox_fpa_cfg));
+ cfg.aid = gpool_index; /* gpool is gaura */
+
+ ret = octeontx_ssovf_mbox_send(&hdr, &cfg,
+ sizeof(struct octeontx_mbox_fpa_cfg),
+ &resp, sizeof(resp));
+ if (ret < 0) {
+ fpavf_log_err("Could not attach fpa ");
+ fpavf_log_err("aura %d to pool %d. Err=%d. FuncErr=%d\n",
+ gpool_index, gpool_index, ret, hdr.res_code);
+ ret = -EACCES;
+ goto err;
+ }
+err:
+ return ret;
+}
+
+static int
+octeontx_fpapf_aura_detach(unsigned int gpool_index)
+{
+ struct octeontx_mbox_fpa_cfg cfg = {0};
+ struct octeontx_mbox_hdr hdr = {0};
+ int ret = 0;
+
+ if (gpool_index >= FPA_MAX_POOL) {
+ ret = -EINVAL;
+ goto err;
+ }
+
+ cfg.aid = gpool_index; /* gpool is gaura */
+ hdr.coproc = FPA_COPROC;
+ hdr.msg = FPA_DETACHAURA;
+ hdr.vfid = gpool_index;
+ ret = octeontx_ssovf_mbox_send(&hdr, &cfg, sizeof(cfg), NULL, 0);
+ if (ret < 0) {
+ fpavf_log_err("Couldn't detach FPA aura %d Err=%d FuncErr=%d\n",
+ gpool_index, ret, hdr.res_code);
+ ret = -EINVAL;
+ }
+
+err:
+ return ret;
+}
+
+static int
+octeontx_fpavf_pool_setup(uintptr_t handle, unsigned long memsz,
+ void *memva, uint16_t gpool)
+{
+ uint64_t va_end;
+
+ if (unlikely(!handle))
+ return -ENODEV;
+
+ va_end = (uintptr_t)memva + memsz;
+ va_end &= ~RTE_CACHE_LINE_MASK;
+
+ /* VHPOOL setup */
+ fpavf_write64((uintptr_t)memva,
+ (void *)((uintptr_t)handle +
+ FPA_VF_VHPOOL_START_ADDR(gpool)));
+ fpavf_write64(va_end,
+ (void *)((uintptr_t)handle +
+ FPA_VF_VHPOOL_END_ADDR(gpool)));
+ return 0;
+}
+
+static int
+octeontx_fpapf_start_count(uint16_t gpool_index)
+{
+ int ret = 0;
+ struct octeontx_mbox_hdr hdr = {0};
+
+ if (gpool_index >= FPA_MAX_POOL) {
+ ret = -EINVAL;
+ goto err;
+ }
+
+ hdr.coproc = FPA_COPROC;
+ hdr.msg = FPA_START_COUNT;
+ hdr.vfid = gpool_index;
+ ret = octeontx_ssovf_mbox_send(&hdr, NULL, 0, NULL, 0);
+ if (ret < 0) {
+ fpavf_log_err("Could not start buffer counting for ");
+ fpavf_log_err("FPA pool %d. Err=%d. FuncErr=%d\n",
+ gpool_index, ret, hdr.res_code);
+ ret = -EINVAL;
+ goto err;
+ }
+
+err:
+ return ret;
+}
+
+static __rte_always_inline int
+octeontx_fpavf_free(unsigned int gpool)
+{
+ int ret = 0;
+
+ if (gpool >= FPA_MAX_POOL) {
+ ret = -EINVAL;
+ goto err;
+ }
+
+ /* Pool is free */
+ fpadev.pool[gpool].is_inuse = false;
+
+err:
+ return ret;
+}
+
+static __rte_always_inline int
+octeontx_gpool_free(uint16_t gpool)
+{
+ if (fpadev.pool[gpool].sz128 != 0) {
+ fpadev.pool[gpool].sz128 = 0;
+ return 0;
+ }
+ return -EINVAL;
+}
+
+/*
+ * Return buffer size for a given pool
+ */
+int
+octeontx_fpa_bufpool_block_size(uintptr_t handle)
+{
+ struct fpavf_res *res = NULL;
+ uint8_t gpool;
+
+ if (unlikely(!octeontx_fpa_handle_valid(handle)))
+ return -EINVAL;
+
+ /* get the gpool */
+ gpool = octeontx_fpa_bufpool_gpool(handle);
+ res = &fpadev.pool[gpool];
+ return FPA_CACHE_LINE_2_OBJSZ(res->sz128);
+}
+
+uintptr_t
+octeontx_fpa_bufpool_create(unsigned int object_size, unsigned int object_count,
+ unsigned int buf_offset, char **va_start,
+ int node_id)
+{
+ unsigned int gpool;
+ void *memva;
+ unsigned long memsz;
+ uintptr_t gpool_handle;
+ uintptr_t pool_bar;
+ int res;
+
+ RTE_SET_USED(node_id);
+ FPAVF_STATIC_ASSERTION(sizeof(struct rte_mbuf) <=
+ OCTEONTX_FPAVF_BUF_OFFSET);
+
+ if (unlikely(*va_start == NULL))
+ goto error_end;
+
+ object_size = RTE_CACHE_LINE_ROUNDUP(object_size);
+ if (object_size > FPA_MAX_OBJ_SIZE) {
+ errno = EINVAL;
+ goto error_end;
+ }
+
+ rte_spinlock_lock(&fpadev.lock);
+ res = octeontx_fpa_gpool_alloc(object_size);
+
+ /* Bail if failed */
+ if (unlikely(res < 0)) {
+ errno = res;
+ goto error_unlock;
+ }
+
+ /* get fpavf */
+ gpool = res;
+
+ /* get pool handle */
+ gpool_handle = octeontx_fpa_gpool2handle(gpool);
+ if (!octeontx_fpa_handle_valid(gpool_handle)) {
+ errno = ENOSPC;
+ goto error_gpool_free;
+ }
+
+ /* Get pool bar address from handle */
+ pool_bar = gpool_handle & ~(uint64_t)FPA_GPOOL_MASK;
+
+ res = octeontx_fpapf_pool_setup(gpool, object_size, buf_offset,
+ object_count);
+ if (res < 0) {
+ errno = res;
+ goto error_gpool_free;
+ }
+
+ /* populate AURA fields */
+ res = octeontx_fpapf_aura_attach(gpool);
+ if (res < 0) {
+ errno = res;
+ goto error_pool_destroy;
+ }
+
+ /* vf pool setup */
+ memsz = object_size * object_count;
+ memva = *va_start;
+ res = octeontx_fpavf_pool_setup(pool_bar, memsz, memva, gpool);
+ if (res < 0) {
+ errno = res;
+ goto error_gaura_detach;
+ }
+
+ /* Release lock */
+ rte_spinlock_unlock(&fpadev.lock);
+
+ /* populate AURA registers */
+ fpavf_write64(object_count, (void *)((uintptr_t)pool_bar +
+ FPA_VF_VHAURA_CNT(gpool)));
+ fpavf_write64(object_count, (void *)((uintptr_t)pool_bar +
+ FPA_VF_VHAURA_CNT_LIMIT(gpool)));
+ fpavf_write64(object_count + 1, (void *)((uintptr_t)pool_bar +
+ FPA_VF_VHAURA_CNT_THRESHOLD(gpool)));
+
+ octeontx_fpapf_start_count(gpool);
+
+ return gpool_handle;
+
+error_gaura_detach:
+ (void) octeontx_fpapf_aura_detach(gpool);
+error_pool_destroy:
+ octeontx_fpavf_free(gpool);
+ octeontx_fpapf_pool_destroy(gpool);
+error_gpool_free:
+ octeontx_gpool_free(gpool);
+error_unlock:
+ rte_spinlock_unlock(&fpadev.lock);
+error_end:
+ return (uintptr_t)NULL;
+}
+
static void
octeontx_fpavf_setup(void)
{
diff --git a/drivers/mempool/octeontx/octeontx_fpavf.h b/drivers/mempool/octeontx/octeontx_fpavf.h
index c43b1a7d2..23a458363 100644
--- a/drivers/mempool/octeontx/octeontx_fpavf.h
+++ b/drivers/mempool/octeontx/octeontx_fpavf.h
@@ -58,6 +58,7 @@
#define PCI_DEVICE_ID_OCTEONTX_FPA_VF 0xA053
#define FPA_VF_MAX 32
+#define FPA_GPOOL_MASK (FPA_VF_MAX-1)
/* FPA VF register offsets */
#define FPA_VF_INT(x) (0x200ULL | ((x) << 22))
@@ -88,6 +89,10 @@
#define FPA_VF0_APERTURE_SHIFT 22
#define FPA_AURA_SET_SIZE 16
+#define FPA_MAX_OBJ_SIZE (128 * 1024)
+#define OCTEONTX_FPAVF_BUF_OFFSET 128
+
+#define FPAVF_STATIC_ASSERTION(s) _Static_assert(s, #s)
/*
* In Cavium OcteonTX SoC, all accesses to the device registers are
@@ -126,4 +131,16 @@ do { \
} while (0)
#endif
+uintptr_t
+octeontx_fpa_bufpool_create(unsigned int object_size, unsigned int object_count,
+ unsigned int buf_offset, char **va_start,
+ int node);
+int
+octeontx_fpa_bufpool_block_size(uintptr_t handle);
+
+static __rte_always_inline uint8_t
+octeontx_fpa_bufpool_gpool(uintptr_t handle)
+{
+ return (uint8_t)handle & FPA_GPOOL_MASK;
+}
#endif /* __OCTEONTX_FPAVF_H__ */
diff --git a/drivers/mempool/octeontx/rte_mempool_octeontx.c b/drivers/mempool/octeontx/rte_mempool_octeontx.c
new file mode 100644
index 000000000..73648aa7f
--- /dev/null
+++ b/drivers/mempool/octeontx/rte_mempool_octeontx.c
@@ -0,0 +1,88 @@
+/*
+ * BSD LICENSE
+ *
+ * Copyright (C) 2017 Cavium Inc. All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ *
+ * * Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ * * Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in
+ * the documentation and/or other materials provided with the
+ * distribution.
+ * * Neither the name of Cavium Inc. nor the names of its
+ * contributors may be used to endorse or promote products derived
+ * from this software without specific prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+#include <stdio.h>
+#include <rte_mempool.h>
+#include <rte_malloc.h>
+#include <rte_mbuf.h>
+
+#include "octeontx_fpavf.h"
+
+static int
+octeontx_fpavf_alloc(struct rte_mempool *mp)
+{
+ uintptr_t pool;
+ uint32_t memseg_count = mp->size;
+ uint32_t object_size;
+ uintptr_t va_start;
+ int rc = 0;
+
+ /* virtual hugepage mapped addr */
+ va_start = ~(uint64_t)0;
+
+ object_size = mp->elt_size + mp->header_size + mp->trailer_size;
+
+ pool = octeontx_fpa_bufpool_create(object_size, memseg_count,
+ OCTEONTX_FPAVF_BUF_OFFSET,
+ (char **)&va_start,
+ mp->socket_id);
+ rc = octeontx_fpa_bufpool_block_size(pool);
+ if (rc < 0)
+ goto _end;
+
+ if ((uint32_t)rc != object_size)
+ fpavf_log_err("buffer size mismatch: %d instead of %u\n",
+ rc, object_size);
+
+ fpavf_log_info("Pool created %p with .. ", (void *)pool);
+ fpavf_log_info("obj_sz %d, cnt %d\n", object_size, memseg_count);
+
+ /* assign pool handle to mempool */
+ mp->pool_id = (uint64_t)pool;
+
+ return 0;
+
+_end:
+ return rc;
+}
+
+static struct rte_mempool_ops octeontx_fpavf_ops = {
+ .name = "octeontx_fpavf",
+ .alloc = octeontx_fpavf_alloc,
+ .free = NULL,
+ .enqueue = NULL,
+ .dequeue = NULL,
+ .get_count = NULL,
+ .get_capabilities = NULL,
+ .update_range = NULL,
+};
+
+MEMPOOL_REGISTER_OPS(octeontx_fpavf_ops);
--
2.11.0
* Re: [dpdk-dev] [PATCH v2 04/10] mempool/octeontx: implement pool alloc
2017-08-31 6:37 ` [dpdk-dev] [PATCH v2 04/10] mempool/octeontx: implement pool alloc Santosh Shukla
@ 2017-10-06 20:51 ` Thomas Monjalon
2017-10-07 3:49 ` santosh
0 siblings, 1 reply; 78+ messages in thread
From: Thomas Monjalon @ 2017-10-06 20:51 UTC (permalink / raw)
To: Santosh Shukla
Cc: dev, olivier.matz, jerin.jacob, john.mcnamara, hemant.agrawal
31/08/2017 08:37, Santosh Shukla:
> Upon pool allocation request by application, Octeontx FPA alloc
> does following:
> - Gets free pool from pci fpavf array.
> - Uses mbox to communicate fpapf driver about,
> * gpool-id
> * pool block_sz
> * alignemnt
> - Programs fpavf pool boundary.
>
> Signed-off-by: Santosh Shukla <santosh.shukla@caviumnetworks.com>
> Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
There is an error at compilation:
drivers/mempool/octeontx/rte_mempool_octeontx.c:85:3: fatal error
field designator 'update_range' does not refer to any field in
type 'struct rte_mempool_ops'
* Re: [dpdk-dev] [PATCH v2 04/10] mempool/octeontx: implement pool alloc
2017-10-06 20:51 ` Thomas Monjalon
@ 2017-10-07 3:49 ` santosh
0 siblings, 0 replies; 78+ messages in thread
From: santosh @ 2017-10-07 3:49 UTC (permalink / raw)
To: Thomas Monjalon
Cc: dev, olivier.matz, jerin.jacob, john.mcnamara, hemant.agrawal
On Saturday 07 October 2017 02:21 AM, Thomas Monjalon wrote:
> 31/08/2017 08:37, Santosh Shukla:
>> Upon pool allocation request by application, Octeontx FPA alloc
>> does following:
>> - Gets free pool from pci fpavf array.
>> - Uses mbox to communicate fpapf driver about,
>> * gpool-id
>> * pool block_sz
>> * alignemnt
>> - Programs fpavf pool boundary.
>>
>> Signed-off-by: Santosh Shukla <santosh.shukla@caviumnetworks.com>
>> Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
> There is an error at compilation:
> drivers/mempool/octeontx/rte_mempool_octeontx.c:85:3: fatal error
> field designator 'update_range' does not refer to any field in
> type 'struct rte_mempool_ops'
>
Yes, update_range was renamed to rte_mempool_ops_register_memory_area, which you have just merged [1].
Also, the external mempool PMD is targeted for -rc2; I will send out the changes by next week for review.
Thanks.
[1] http://dpdk.org/dev/patchwork/patch/29463/
* [dpdk-dev] [PATCH v2 05/10] mempool/octeontx: implement pool free
2017-08-31 6:37 ` [dpdk-dev] [PATCH v2 00/10] Cavium Octeontx external mempool driver Santosh Shukla
` (3 preceding siblings ...)
2017-08-31 6:37 ` [dpdk-dev] [PATCH v2 04/10] mempool/octeontx: implement pool alloc Santosh Shukla
@ 2017-08-31 6:37 ` Santosh Shukla
2017-08-31 6:37 ` [dpdk-dev] [PATCH v2 06/10] mempool/octeontx: implement pool enq and deq Santosh Shukla
` (6 subsequent siblings)
11 siblings, 0 replies; 78+ messages in thread
From: Santosh Shukla @ 2017-08-31 6:37 UTC (permalink / raw)
To: dev, olivier.matz
Cc: jerin.jacob, john.mcnamara, thomas, hemant.agrawal, Santosh Shukla
Upon a pool free request from the application, Octeontx FPA free
does the following:
- Uses mbox to reset the fpapf pool setup.
- Frees the fpavf resources.
Signed-off-by: Santosh Shukla <santosh.shukla@caviumnetworks.com>
Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
---
v1 --> v2:
- Using the newly introduced API 'octeontx_fpa_bufpool_gpool()'
to get the pool-id from the handle.
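[Editor's note: a hedged usage sketch; destroy requires every buffer to be
back in the pool, since a non-zero FPA_VF_VHAURA_CNT read makes it return
-EBUSY before any teardown.]

    int ret = octeontx_fpa_bufpool_destroy(handle, SOCKET_ID_ANY);
    if (ret == -EBUSY) {
            /* Outstanding buffers remain allocated from the aura.
             * Return all objects to the pool, then retry the destroy. */
    }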
drivers/mempool/octeontx/octeontx_fpavf.c | 111 ++++++++++++++++++++++++
drivers/mempool/octeontx/octeontx_fpavf.h | 2 +
drivers/mempool/octeontx/rte_mempool_octeontx.c | 12 ++-
3 files changed, 124 insertions(+), 1 deletion(-)
diff --git a/drivers/mempool/octeontx/octeontx_fpavf.c b/drivers/mempool/octeontx/octeontx_fpavf.c
index c0c9d8325..44253b09e 100644
--- a/drivers/mempool/octeontx/octeontx_fpavf.c
+++ b/drivers/mempool/octeontx/octeontx_fpavf.c
@@ -581,6 +581,117 @@ octeontx_fpa_bufpool_create(unsigned int object_size, unsigned int object_count,
return (uintptr_t)NULL;
}
+/*
+ * Destroy a buffer pool.
+ */
+int
+octeontx_fpa_bufpool_destroy(uintptr_t handle, int node_id)
+{
+ void **node, **curr, *head = NULL;
+ uint64_t sz;
+ uint64_t cnt, avail;
+ uint8_t gpool;
+ uintptr_t pool_bar;
+ int ret;
+
+ RTE_SET_USED(node_id);
+
+ /* Wait for all outstanding writes to be committed */
+ rte_smp_wmb();
+
+ if (unlikely(!octeontx_fpa_handle_valid(handle)))
+ return -EINVAL;
+
+ /* get the pool */
+ gpool = octeontx_fpa_bufpool_gpool(handle);
+
+ /* Get pool bar address from handle */
+ pool_bar = handle & ~(uint64_t)FPA_GPOOL_MASK;
+
+ /* Check for no outstanding buffers */
+ cnt = fpavf_read64((void *)((uintptr_t)pool_bar +
+ FPA_VF_VHAURA_CNT(gpool)));
+ if (cnt) {
+ fpavf_log_dbg("buffer exist in pool cnt %ld\n", cnt);
+ return -EBUSY;
+ }
+
+ rte_spinlock_lock(&fpadev.lock);
+
+ avail = fpavf_read64((void *)((uintptr_t)pool_bar +
+ FPA_VF_VHPOOL_AVAILABLE(gpool)));
+
+ /* Prepare to empty the entire POOL */
+ fpavf_write64(avail, (void *)((uintptr_t)pool_bar +
+ FPA_VF_VHAURA_CNT_LIMIT(gpool)));
+ fpavf_write64(avail + 1, (void *)((uintptr_t)pool_bar +
+ FPA_VF_VHAURA_CNT_THRESHOLD(gpool)));
+
+ /* Empty the pool */
+ /* Invalidate the POOL */
+ octeontx_gpool_free(gpool);
+
+ /* Process all buffers in the pool */
+ while (avail--) {
+
+ /* Yank a buffer from the pool */
+ node = (void *)(uintptr_t)
+ fpavf_read64((void *)
+ (pool_bar + FPA_VF_VHAURA_OP_ALLOC(gpool)));
+
+ if (node == NULL) {
+ fpavf_log_err("GAURA[%u] missing %" PRIx64 " buf\n",
+ gpool, avail);
+ break;
+ }
+
+ /* Insert it into an ordered linked list */
+ for (curr = &head; curr[0] != NULL; curr = curr[0]) {
+ if ((uintptr_t)node <= (uintptr_t)curr[0])
+ break;
+ }
+ node[0] = curr[0];
+ curr[0] = node;
+ }
+
+ /* Verify that the linked list forms a perfect series */
+ sz = octeontx_fpa_bufpool_block_size(handle) << 7;
+ for (curr = head; curr != NULL && curr[0] != NULL;
+ curr = curr[0]) {
+ if (curr == curr[0] ||
+ ((uintptr_t)curr != ((uintptr_t)curr[0] - sz))) {
+ fpavf_log_err("POOL# %u buf sequence err (%p vs. %p)\n",
+ gpool, curr, curr[0]);
+ }
+ }
+
+ /* Disable pool operation */
+ fpavf_write64(~0ul, (void *)((uintptr_t)pool_bar +
+ FPA_VF_VHPOOL_START_ADDR(gpool)));
+ fpavf_write64(~0ul, (void *)((uintptr_t)pool_bar +
+ FPA_VF_VHPOOL_END_ADDR(gpool)));
+
+ (void)octeontx_fpapf_pool_destroy(gpool);
+
+ /* Deactivate the AURA */
+ fpavf_write64(0, (void *)((uintptr_t)pool_bar +
+ FPA_VF_VHAURA_CNT_LIMIT(gpool)));
+ fpavf_write64(0, (void *)((uintptr_t)pool_bar +
+ FPA_VF_VHAURA_CNT_THRESHOLD(gpool)));
+
+ ret = octeontx_fpapf_aura_detach(gpool);
+ if (ret) {
+ fpavf_log_err("Failed to dettach gaura %u. error code=%d\n",
+ gpool, ret);
+ }
+
+ /* Free VF */
+ (void)octeontx_fpavf_free(gpool);
+
+ rte_spinlock_unlock(&fpadev.lock);
+ return 0;
+}
+
static void
octeontx_fpavf_setup(void)
{
diff --git a/drivers/mempool/octeontx/octeontx_fpavf.h b/drivers/mempool/octeontx/octeontx_fpavf.h
index 23a458363..28440e810 100644
--- a/drivers/mempool/octeontx/octeontx_fpavf.h
+++ b/drivers/mempool/octeontx/octeontx_fpavf.h
@@ -136,6 +136,8 @@ octeontx_fpa_bufpool_create(unsigned int object_size, unsigned int object_count,
unsigned int buf_offset, char **va_start,
int node);
int
+octeontx_fpa_bufpool_destroy(uintptr_t handle, int node);
+int
octeontx_fpa_bufpool_block_size(uintptr_t handle);
static __rte_always_inline uint8_t
diff --git a/drivers/mempool/octeontx/rte_mempool_octeontx.c b/drivers/mempool/octeontx/rte_mempool_octeontx.c
index 73648aa7f..6754a78c0 100644
--- a/drivers/mempool/octeontx/rte_mempool_octeontx.c
+++ b/drivers/mempool/octeontx/rte_mempool_octeontx.c
@@ -74,10 +74,20 @@ octeontx_fpavf_alloc(struct rte_mempool *mp)
return rc;
}
+static void
+octeontx_fpavf_free(struct rte_mempool *mp)
+{
+ uintptr_t pool;
+
+ pool = (uintptr_t)mp->pool_id;
+
+ octeontx_fpa_bufpool_destroy(pool, mp->socket_id);
+}
+
static struct rte_mempool_ops octeontx_fpavf_ops = {
.name = "octeontx_fpavf",
.alloc = octeontx_fpavf_alloc,
- .free = NULL,
+ .free = octeontx_fpavf_free,
.enqueue = NULL,
.dequeue = NULL,
.get_count = NULL,
--
2.11.0
* [dpdk-dev] [PATCH v2 06/10] mempool/octeontx: implement pool enq and deq
2017-08-31 6:37 ` [dpdk-dev] [PATCH v2 00/10] Cavium Octeontx external mempool driver Santosh Shukla
` (4 preceding siblings ...)
2017-08-31 6:37 ` [dpdk-dev] [PATCH v2 05/10] mempool/octeontx: implement pool free Santosh Shukla
@ 2017-08-31 6:37 ` Santosh Shukla
2017-08-31 6:37 ` [dpdk-dev] [PATCH v2 07/10] mempool/octeontx: implement pool get count Santosh Shukla
` (5 subsequent siblings)
11 siblings, 0 replies; 78+ messages in thread
From: Santosh Shukla @ 2017-08-31 6:37 UTC (permalink / raw)
To: dev, olivier.matz
Cc: jerin.jacob, john.mcnamara, thomas, hemant.agrawal, Santosh Shukla
Signed-off-by: Santosh Shukla <santosh.shukla@caviumnetworks.com>
Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
---
v1 --> v2:
- Mask the gpool-id out of the pool handle and pass the pool BAR to the
pool enq and deq ops.
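[Editor's note: a worked sketch of the free-address encoding used by the
enqueue path below, combining FPA_VF_FREE_ADDRS_S and
FPA_VF_VHAURA_OP_FREE from patch 01; the DWB/FABS values match this
patch's octeontx_fpa_bufpool_free().]

    /* x = op register offset, y = DWB cache lines, z = FABS bit */
    uint64_t off = FPA_VF_FREE_ADDRS_S(FPA_VF_VHAURA_OP_FREE(0),
                                       0 /* DWB */, 1 /* FABS */);
    /* 0x38000 | (0 << 3) | (1 << 14) = 0x3c000; writing a buffer's
     * address to pool_bar + off returns the buffer to aura 0. */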
drivers/mempool/octeontx/Makefile | 13 +++++
drivers/mempool/octeontx/rte_mempool_octeontx.c | 69 ++++++++++++++++++++++++-
2 files changed, 80 insertions(+), 2 deletions(-)
diff --git a/drivers/mempool/octeontx/Makefile b/drivers/mempool/octeontx/Makefile
index 9c3389608..0b2043842 100644
--- a/drivers/mempool/octeontx/Makefile
+++ b/drivers/mempool/octeontx/Makefile
@@ -53,6 +53,19 @@ LIBABIVER := 1
SRCS-$(CONFIG_RTE_LIBRTE_OCTEONTX_MEMPOOL) += octeontx_fpavf.c
SRCS-$(CONFIG_RTE_LIBRTE_OCTEONTX_MEMPOOL) += rte_mempool_octeontx.c
+ifeq ($(CONFIG_RTE_TOOLCHAIN_GCC),y)
+CFLAGS_rte_mempool_octeontx.o += -fno-prefetch-loop-arrays
+
+ifeq ($(shell test $(GCC_VERSION) -ge 46 && echo 1), 1)
+CFLAGS_rte_mempool_octeontx.o += -Ofast
+else
+CFLAGS_rte_mempool_octeontx.o += -O3 -ffast-math
+endif
+
+else
+CFLAGS_rte_mempool_octeontx.o += -Ofast
+endif
+
# this lib depends upon:
DEPDIRS-$(CONFIG_RTE_LIBRTE_OCTEONTX_MEMPOOL) += lib/librte_mbuf
diff --git a/drivers/mempool/octeontx/rte_mempool_octeontx.c b/drivers/mempool/octeontx/rte_mempool_octeontx.c
index 6754a78c0..9477469f0 100644
--- a/drivers/mempool/octeontx/rte_mempool_octeontx.c
+++ b/drivers/mempool/octeontx/rte_mempool_octeontx.c
@@ -84,12 +84,77 @@ octeontx_fpavf_free(struct rte_mempool *mp)
octeontx_fpa_bufpool_destroy(pool, mp->socket_id);
}
+static __rte_always_inline void *
+octeontx_fpa_bufpool_alloc(uintptr_t handle)
+{
+ return (void *)(uintptr_t)fpavf_read64((void *)(handle +
+ FPA_VF_VHAURA_OP_ALLOC(0)));
+}
+
+static __rte_always_inline void
+octeontx_fpa_bufpool_free(uintptr_t handle, void *buf)
+{
+ uint64_t free_addr = FPA_VF_FREE_ADDRS_S(FPA_VF_VHAURA_OP_FREE(0),
+ 0 /* DWB */, 1 /* FABS */);
+
+ fpavf_write64((uintptr_t)buf, (void *)(uintptr_t)(handle + free_addr));
+}
+
+static int
+octeontx_fpavf_enqueue(struct rte_mempool *mp, void * const *obj_table,
+ unsigned int n)
+{
+ uintptr_t pool;
+ unsigned int index;
+
+ pool = (uintptr_t)mp->pool_id;
+ /* Get pool bar address from handle */
+ pool &= ~(uint64_t)FPA_GPOOL_MASK;
+ for (index = 0; index < n; index++, obj_table++)
+ octeontx_fpa_bufpool_free(pool, *obj_table);
+
+ return 0;
+}
+
+static int
+octeontx_fpavf_dequeue(struct rte_mempool *mp, void **obj_table,
+ unsigned int n)
+{
+ unsigned int index;
+ uintptr_t pool;
+ void *obj;
+
+ pool = (uintptr_t)mp->pool_id;
+ /* Get pool bar address from handle */
+ pool &= ~(uint64_t)FPA_GPOOL_MASK;
+ for (index = 0; index < n; index++, obj_table++) {
+ obj = octeontx_fpa_bufpool_alloc(pool);
+ if (obj == NULL) {
+ /*
+ * Failed to allocate the requested number of objects
+ * from the pool. Current pool implementation requires
+ * completing the entire request or returning error
+ * otherwise.
+ * Free already allocated buffers to the pool.
+ */
+ for (; index > 0; index--) {
+ obj_table--;
+ octeontx_fpa_bufpool_free(pool, *obj_table);
+ }
+ return -ENOMEM;
+ }
+ *obj_table = obj;
+ }
+
+ return 0;
+}
+
static struct rte_mempool_ops octeontx_fpavf_ops = {
.name = "octeontx_fpavf",
.alloc = octeontx_fpavf_alloc,
.free = octeontx_fpavf_free,
- .enqueue = NULL,
- .dequeue = NULL,
+ .enqueue = octeontx_fpavf_enqueue,
+ .dequeue = octeontx_fpavf_dequeue,
.get_count = NULL,
.get_capabilities = NULL,
.update_range = NULL,
--
2.11.0
* [dpdk-dev] [PATCH v2 07/10] mempool/octeontx: implement pool get count
2017-08-31 6:37 ` [dpdk-dev] [PATCH v2 00/10] Cavium Octeontx external mempool driver Santosh Shukla
` (5 preceding siblings ...)
2017-08-31 6:37 ` [dpdk-dev] [PATCH v2 06/10] mempool/octeontx: implement pool enq and deq Santosh Shukla
@ 2017-08-31 6:37 ` Santosh Shukla
2017-08-31 6:37 ` [dpdk-dev] [PATCH v2 08/10] mempool/octeontx: implement pool get capability Santosh Shukla
` (4 subsequent siblings)
11 siblings, 0 replies; 78+ messages in thread
From: Santosh Shukla @ 2017-08-31 6:37 UTC (permalink / raw)
To: dev, olivier.matz
Cc: jerin.jacob, john.mcnamara, thomas, hemant.agrawal, Santosh Shukla
Signed-off-by: Santosh Shukla <santosh.shukla@caviumnetworks.com>
Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
---
v1 --> v2:
- Use the new API octeontx_fpa_bufpool_gpool() to get the pool-id.
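[Editor's note: a worked numeric sketch, with hypothetical values, of the
free-count formula in octeontx_fpa_bufpool_free_count() below.]

    /* cnt   = 100   buffers currently allocated out of the aura
     * limit = 8192  aura count limit (set at pool create time)
     * avail = 8092  buffers the pool reports as available
     * free count = RTE_MIN(avail, limit - cnt)
     *            = RTE_MIN(8092, 8192 - 100) = 8092
     */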
drivers/mempool/octeontx/octeontx_fpavf.c | 27 +++++++++++++++++++++++++
drivers/mempool/octeontx/octeontx_fpavf.h | 2 ++
drivers/mempool/octeontx/rte_mempool_octeontx.c | 12 ++++++++++-
3 files changed, 40 insertions(+), 1 deletion(-)
diff --git a/drivers/mempool/octeontx/octeontx_fpavf.c b/drivers/mempool/octeontx/octeontx_fpavf.c
index 44253b09e..e4a0a7f42 100644
--- a/drivers/mempool/octeontx/octeontx_fpavf.c
+++ b/drivers/mempool/octeontx/octeontx_fpavf.c
@@ -483,6 +483,33 @@ octeontx_fpa_bufpool_block_size(uintptr_t handle)
return FPA_CACHE_LINE_2_OBJSZ(res->sz128);
}
+int
+octeontx_fpa_bufpool_free_count(uintptr_t handle)
+{
+ uint64_t cnt, limit, avail;
+ uint8_t gpool;
+ uintptr_t pool_bar;
+
+ if (unlikely(!octeontx_fpa_handle_valid(handle)))
+ return -EINVAL;
+
+ /* get the gpool */
+ gpool = octeontx_fpa_bufpool_gpool(handle);
+
+ /* Get pool bar address from handle */
+ pool_bar = handle & ~(uint64_t)FPA_GPOOL_MASK;
+
+ cnt = fpavf_read64((void *)((uintptr_t)pool_bar +
+ FPA_VF_VHAURA_CNT(gpool)));
+ limit = fpavf_read64((void *)((uintptr_t)pool_bar +
+ FPA_VF_VHAURA_CNT_LIMIT(gpool)));
+
+ avail = fpavf_read64((void *)((uintptr_t)pool_bar +
+ FPA_VF_VHPOOL_AVAILABLE(gpool)));
+
+ return RTE_MIN(avail, (limit - cnt));
+}
+
uintptr_t
octeontx_fpa_bufpool_create(unsigned int object_size, unsigned int object_count,
unsigned int buf_offset, char **va_start,
diff --git a/drivers/mempool/octeontx/octeontx_fpavf.h b/drivers/mempool/octeontx/octeontx_fpavf.h
index 28440e810..471fe711a 100644
--- a/drivers/mempool/octeontx/octeontx_fpavf.h
+++ b/drivers/mempool/octeontx/octeontx_fpavf.h
@@ -139,6 +139,8 @@ int
octeontx_fpa_bufpool_destroy(uintptr_t handle, int node);
int
octeontx_fpa_bufpool_block_size(uintptr_t handle);
+int
+octeontx_fpa_bufpool_free_count(uintptr_t handle);
static __rte_always_inline uint8_t
octeontx_fpa_bufpool_gpool(uintptr_t handle)
diff --git a/drivers/mempool/octeontx/rte_mempool_octeontx.c b/drivers/mempool/octeontx/rte_mempool_octeontx.c
index 9477469f0..d2859a818 100644
--- a/drivers/mempool/octeontx/rte_mempool_octeontx.c
+++ b/drivers/mempool/octeontx/rte_mempool_octeontx.c
@@ -149,13 +149,23 @@ octeontx_fpavf_dequeue(struct rte_mempool *mp, void **obj_table,
return 0;
}
+static unsigned int
+octeontx_fpavf_get_count(const struct rte_mempool *mp)
+{
+ uintptr_t pool;
+
+ pool = (uintptr_t)mp->pool_id;
+
+ return octeontx_fpa_bufpool_free_count(pool);
+}
+
static struct rte_mempool_ops octeontx_fpavf_ops = {
.name = "octeontx_fpavf",
.alloc = octeontx_fpavf_alloc,
.free = octeontx_fpavf_free,
.enqueue = octeontx_fpavf_enqueue,
.dequeue = octeontx_fpavf_dequeue,
- .get_count = NULL,
+ .get_count = octeontx_fpavf_get_count,
.get_capabilities = NULL,
.update_range = NULL,
};
--
2.11.0
* [dpdk-dev] [PATCH v2 08/10] mempool/octeontx: implement pool get capability
2017-08-31 6:37 ` [dpdk-dev] [PATCH v2 00/10] Cavium Octeontx external mempool driver Santosh Shukla
` (6 preceding siblings ...)
2017-08-31 6:37 ` [dpdk-dev] [PATCH v2 07/10] mempool/octeontx: implement pool get count Santosh Shukla
@ 2017-08-31 6:37 ` Santosh Shukla
2017-08-31 6:37 ` [dpdk-dev] [PATCH v2 09/10] mempool/octeontx: implement pool update range Santosh Shukla
` (3 subsequent siblings)
11 siblings, 0 replies; 78+ messages in thread
From: Santosh Shukla @ 2017-08-31 6:37 UTC (permalink / raw)
To: dev, olivier.matz
Cc: jerin.jacob, john.mcnamara, thomas, hemant.agrawal, Santosh Shukla
Allow the mempool HW manager to advertise its pool capabilities.
Signed-off-by: Santosh Shukla <santosh.shukla@caviumnetworks.com>
Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
---
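[Editor's note: a hedged sketch of what the advertised flags mean to the
mempool library; the exact consumption logic lives in librte_mempool, not
in this patch.]

    /* After the get_capabilities op runs, mp->flags may carry: */
    if (mp->flags & MEMPOOL_F_CAPA_PHYS_CONTIG) {
            /* all objects must sit in one physically contiguous area */
    }
    if (mp->flags & MEMPOOL_F_POOL_BLK_SZ_ALIGNED) {
            /* object addresses must be aligned to the total block size */
    }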
drivers/mempool/octeontx/rte_mempool_octeontx.c | 10 +++++++++-
1 file changed, 9 insertions(+), 1 deletion(-)
diff --git a/drivers/mempool/octeontx/rte_mempool_octeontx.c b/drivers/mempool/octeontx/rte_mempool_octeontx.c
index d2859a818..fd3e5af07 100644
--- a/drivers/mempool/octeontx/rte_mempool_octeontx.c
+++ b/drivers/mempool/octeontx/rte_mempool_octeontx.c
@@ -159,6 +159,14 @@ octeontx_fpavf_get_count(const struct rte_mempool *mp)
return octeontx_fpa_bufpool_free_count(pool);
}
+static int
+octeontx_fpavf_get_capabilities(struct rte_mempool *mp)
+{
+ mp->flags |= (MEMPOOL_F_CAPA_PHYS_CONTIG |
+ MEMPOOL_F_POOL_BLK_SZ_ALIGNED);
+ return 0;
+}
+
static struct rte_mempool_ops octeontx_fpavf_ops = {
.name = "octeontx_fpavf",
.alloc = octeontx_fpavf_alloc,
@@ -166,7 +174,7 @@ static struct rte_mempool_ops octeontx_fpavf_ops = {
.enqueue = octeontx_fpavf_enqueue,
.dequeue = octeontx_fpavf_dequeue,
.get_count = octeontx_fpavf_get_count,
- .get_capabilities = NULL,
+ .get_capabilities = octeontx_fpavf_get_capabilities,
.update_range = NULL,
};
--
2.11.0
* [dpdk-dev] [PATCH v2 09/10] mempool/octeontx: implement pool update range
2017-08-31 6:37 ` [dpdk-dev] [PATCH v2 00/10] Cavium Octeontx external mempool driver Santosh Shukla
` (7 preceding siblings ...)
2017-08-31 6:37 ` [dpdk-dev] [PATCH v2 08/10] mempool/octeontx: implement pool get capability Santosh Shukla
@ 2017-08-31 6:37 ` Santosh Shukla
2017-08-31 6:37 ` [dpdk-dev] [PATCH v2 10/10] doc: add mempool and octeontx mempool device Santosh Shukla
` (2 subsequent siblings)
11 siblings, 0 replies; 78+ messages in thread
From: Santosh Shukla @ 2017-08-31 6:37 UTC (permalink / raw)
To: dev, olivier.matz
Cc: jerin.jacob, john.mcnamara, thomas, hemant.agrawal, Santosh Shukla
Add support for the update range op in the mempool driver.
This allows more than one HW pool when using the OcteonTx mempool driver:
each pool's information is stored in a list, and the appropriate
list element is found by matching the rte_mempool pointer.
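In other words, the driver keeps a small list of per-pool descriptors keyed by the rte_mempool pointer; the lookup pattern used throughout the diff below condenses to (names as in the patch):

struct octeontx_pool_info *pool_info;

rte_spinlock_lock(&pool_list_lock);
SLIST_FOREACH(pool_info, &octeontx_pool_head, link) {
	if (pool_info->mp == mp)
		break;	/* found the descriptor for this rte_mempool */
}
rte_spinlock_unlock(&pool_list_lock);
/* pool_info == NULL here means update_range never recorded this pool */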
Signed-off-by: Santosh Shukla <santosh.shukla@caviumnetworks.com>
Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
---
drivers/mempool/octeontx/rte_mempool_octeontx.c | 73 ++++++++++++++++++++++++-
1 file changed, 71 insertions(+), 2 deletions(-)
diff --git a/drivers/mempool/octeontx/rte_mempool_octeontx.c b/drivers/mempool/octeontx/rte_mempool_octeontx.c
index fd3e5af07..4885191d7 100644
--- a/drivers/mempool/octeontx/rte_mempool_octeontx.c
+++ b/drivers/mempool/octeontx/rte_mempool_octeontx.c
@@ -36,17 +36,49 @@
#include "octeontx_fpavf.h"
+/*
+ * Per-pool descriptor.
+ * Links the mempool with the corresponding memzone
+ * that provides the memory backing the pool's elements.
+ */
+struct octeontx_pool_info {
+ const struct rte_mempool *mp;
+ uintptr_t mz_addr;
+
+ SLIST_ENTRY(octeontx_pool_info) link;
+};
+
+SLIST_HEAD(octeontx_pool_list, octeontx_pool_info);
+
+/* List of the allocated pools */
+static struct octeontx_pool_list octeontx_pool_head =
+ SLIST_HEAD_INITIALIZER(octeontx_pool_head);
+/* Spinlock to protect pool list */
+static rte_spinlock_t pool_list_lock = RTE_SPINLOCK_INITIALIZER;
+
static int
octeontx_fpavf_alloc(struct rte_mempool *mp)
{
uintptr_t pool;
+ struct octeontx_pool_info *pool_info;
uint32_t memseg_count = mp->size;
uint32_t object_size;
uintptr_t va_start;
int rc = 0;
+ rte_spinlock_lock(&pool_list_lock);
+ SLIST_FOREACH(pool_info, &octeontx_pool_head, link) {
+ if (pool_info->mp == mp)
+ break;
+ }
+ if (pool_info == NULL) {
+ rte_spinlock_unlock(&pool_list_lock);
+ return -ENXIO;
+ }
+
/* virtual hugepage mapped addr */
- va_start = ~(uint64_t)0;
+ va_start = pool_info->mz_addr;
+ rte_spinlock_unlock(&pool_list_lock);
object_size = mp->elt_size + mp->header_size + mp->trailer_size;
@@ -77,10 +109,27 @@ octeontx_fpavf_alloc(struct rte_mempool *mp)
static void
octeontx_fpavf_free(struct rte_mempool *mp)
{
+ struct octeontx_pool_info *pool_info;
uintptr_t pool;
pool = (uintptr_t)mp->pool_id;
+ rte_spinlock_lock(&pool_list_lock);
+ SLIST_FOREACH(pool_info, &octeontx_pool_head, link) {
+ if (pool_info->mp == mp)
+ break;
+ }
+
+ if (pool_info == NULL) {
+ rte_spinlock_unlock(&pool_list_lock);
+ rte_panic("%s: trying to free pool with no valid metadata",
+ __func__);
+ }
+
+ SLIST_REMOVE(&octeontx_pool_head, pool_info, octeontx_pool_info, link);
+ rte_spinlock_unlock(&pool_list_lock);
+
+ rte_free(pool_info);
octeontx_fpa_bufpool_destroy(pool, mp->socket_id);
}
@@ -167,6 +216,26 @@ octeontx_fpavf_get_capabilities(struct rte_mempool *mp)
return 0;
}
+static void
+octeontx_fpavf_update_range(const struct rte_mempool *mp,
+ char *vaddr, phys_addr_t paddr, size_t len)
+{
+ struct octeontx_pool_info *pool_info;
+
+ RTE_SET_USED(paddr);
+ RTE_SET_USED(len);
+
+ pool_info = rte_malloc("octeontx_pool_info", sizeof(*pool_info), 0);
+ if (pool_info == NULL)
+ return;
+
+ pool_info->mp = mp;
+ pool_info->mz_addr = (uintptr_t)vaddr;
+ rte_spinlock_lock(&pool_list_lock);
+ SLIST_INSERT_HEAD(&octeontx_pool_head, pool_info, link);
+ rte_spinlock_unlock(&pool_list_lock);
+}
+
static struct rte_mempool_ops octeontx_fpavf_ops = {
.name = "octeontx_fpavf",
.alloc = octeontx_fpavf_alloc,
@@ -175,7 +244,7 @@ static struct rte_mempool_ops octeontx_fpavf_ops = {
.dequeue = octeontx_fpavf_dequeue,
.get_count = octeontx_fpavf_get_count,
.get_capabilities = octeontx_fpavf_get_capabilities,
- .update_range = NULL,
+ .update_range = octeontx_fpavf_update_range,
};
MEMPOOL_REGISTER_OPS(octeontx_fpavf_ops);
--
2.11.0
* [dpdk-dev] [PATCH v2 10/10] doc: add mempool and octeontx mempool device
2017-08-31 6:37 ` [dpdk-dev] [PATCH v2 00/10] Cavium Octeontx external mempool driver Santosh Shukla
` (8 preceding siblings ...)
2017-08-31 6:37 ` [dpdk-dev] [PATCH v2 09/10] mempool/octeontx: implement pool update range Santosh Shukla
@ 2017-08-31 6:37 ` Santosh Shukla
2017-09-19 13:52 ` Mcnamara, John
2017-09-19 8:29 ` [dpdk-dev] [PATCH v2 00/10] Cavium Octeontx external mempool driver santosh
2017-10-08 12:40 ` [dpdk-dev] [PATCH v3 " Santosh Shukla
11 siblings, 1 reply; 78+ messages in thread
From: Santosh Shukla @ 2017-08-31 6:37 UTC (permalink / raw)
To: dev, olivier.matz
Cc: jerin.jacob, john.mcnamara, thomas, hemant.agrawal, Santosh Shukla
This commit adds a section to the docs listing the mempool
device PMDs available.
It then adds the octeontx fpavf mempool PMD to the listed mempool
devices.
Cc: John McNamara <john.mcnamara@intel.com>
Signed-off-by: Santosh Shukla <santosh.shukla@caviumnetworks.com>
Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
---
v1 --> v2:
- release doc cleanup.
- Added Jerin to the maintainer list.
MAINTAINERS | 7 +++
doc/guides/index.rst | 1 +
doc/guides/mempool/index.rst | 40 +++++++++++++
doc/guides/mempool/octeontx.rst | 127 ++++++++++++++++++++++++++++++++++++++++
4 files changed, 175 insertions(+)
create mode 100644 doc/guides/mempool/index.rst
create mode 100644 doc/guides/mempool/octeontx.rst
diff --git a/MAINTAINERS b/MAINTAINERS
index a0cd75e15..58287ea0d 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -337,6 +337,13 @@ F: drivers/net/liquidio/
F: doc/guides/nics/liquidio.rst
F: doc/guides/nics/features/liquidio.ini
+Cavium Octeontx Mempool
+M: Santosh Shukla <santosh.shukla@caviumnetworks.com>
+M: Jerin Jacob <jerin.jacob@caviumnetworks.com>
+F: drivers/mempool/octeontx
+F: doc/guides/mempool/index.rst
+F: doc/guides/mempool/octeontx.rst
+
Chelsio cxgbe
M: Rahul Lakkireddy <rahul.lakkireddy@chelsio.com>
F: drivers/net/cxgbe/
diff --git a/doc/guides/index.rst b/doc/guides/index.rst
index 63716b095..98f4b7aab 100644
--- a/doc/guides/index.rst
+++ b/doc/guides/index.rst
@@ -44,6 +44,7 @@ DPDK documentation
nics/index
cryptodevs/index
eventdevs/index
+ mempool/index
xen/index
contributing/index
rel_notes/index
diff --git a/doc/guides/mempool/index.rst b/doc/guides/mempool/index.rst
new file mode 100644
index 000000000..38bbca1c4
--- /dev/null
+++ b/doc/guides/mempool/index.rst
@@ -0,0 +1,40 @@
+.. BSD LICENSE
+ Copyright(c) 2017 Cavium Inc. All rights reserved.
+
+ Redistribution and use in source and binary forms, with or without
+ modification, are permitted provided that the following conditions
+ are met:
+
+ * Redistributions of source code must retain the above copyright
+ notice, this list of conditions and the following disclaimer.
+ * Redistributions in binary form must reproduce the above copyright
+ notice, this list of conditions and the following disclaimer in
+ the documentation and/or other materials provided with the
+ distribution.
+ * Neither the name of Cavium Inc nor the names of its
+ contributors may be used to endorse or promote products derived
+ from this software without specific prior written permission.
+
+ THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+Mempool Device driver
+=====================
+
+The following is a list of mempool PMDs, which can be used from an
+application through the mempool API.
+
+.. toctree::
+ :maxdepth: 2
+ :numbered:
+
+ octeontx
diff --git a/doc/guides/mempool/octeontx.rst b/doc/guides/mempool/octeontx.rst
new file mode 100644
index 000000000..3b5ec32a7
--- /dev/null
+++ b/doc/guides/mempool/octeontx.rst
@@ -0,0 +1,127 @@
+.. BSD LICENSE
+ Copyright (C) Cavium, Inc. 2017. All rights reserved.
+
+ Redistribution and use in source and binary forms, with or without
+ modification, are permitted provided that the following conditions
+ are met:
+
+ * Redistributions of source code must retain the above copyright
+ notice, this list of conditions and the following disclaimer.
+ * Redistributions in binary form must reproduce the above copyright
+ notice, this list of conditions and the following disclaimer in
+ the documentation and/or other materials provided with the
+ distribution.
+ * Neither the name of Cavium, Inc nor the names of its
+ contributors may be used to endorse or promote products derived
+ from this software without specific prior written permission.
+
+ THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+OCTEONTX FPAVF Mempool Driver
+==============================
+
+Features OCTEONTX FPAVF PMD (**librte_mempool_octeontx**) is mempool
+driver for offload mempool device found in **Cavium OCTEONTX** SoC
+family.
+
+More information can be found at `Cavium, Inc Official Website
+<http://www.cavium.com/OCTEON-TX_ARM_Processors.html>`_.
+
+Feature
+--------
+
+Features of the OCTEONTX FPAVF PMD are:
+- 32 SR-IOV Virtual functions
+- 32 Pools
+- HW mempool manager
+
+Supported OCTEONTX SoCs
+-----------------------
+- CN83xx
+
+Prerequisites
+-------------
+
+There are three main prerequisites for executing the FPAVF PMD on an OCTEONTX
+compatible board:
+
+1. **OCTEONTX Linux kernel PF driver for Network acceleration HW blocks**
+
+ The OCTEONTX Linux kernel drivers (including the required PF driver for the
+ FPAVF) are available on Github at `octeontx-kmod <https://github.com/caviumnetworks/octeontx-kmod>`_
+ along with build, install and dpdk usage instructions.
+
+2. **ARM64 Tool Chain**
+
+ For example, the *aarch64* Linaro Toolchain, which can be obtained from
+ `here <https://releases.linaro.org/components/toolchain/binaries/4.9-2017.01/aarch64-linux-gnu>`_.
+
+3. **Root file system**
+
+ Any *aarch64* supporting filesystem can be used. For example,
+ Ubuntu 15.10 (Wily) or 16.04 LTS (Xenial) userland which can be obtained
+ from `<http://cdimage.ubuntu.com/ubuntu-base/releases/16.04/release/ubuntu-base-16.04.1-base-arm64.tar.gz>`_.
+
+ As an alternative method, the FPAVF PMD can also be executed using images provided
+ as part of the SDK from Cavium. The SDK includes all the above prerequisites necessary
+ to bring up an OCTEONTX board.
+
+ SDK and related information can be obtained from: `Cavium support site <https://support.cavium.com/>`_.
+
+- Follow the DPDK :ref:`Getting Started Guide for Linux <linux_gsg>` to setup the basic DPDK environment.
+
+Pre-Installation Configuration
+------------------------------
+
+Config File Options
+~~~~~~~~~~~~~~~~~~~
+
+The following options can be modified in the ``config`` file.
+Please note that enabling debugging options may affect system performance.
+
+- ``CONFIG_RTE_MBUF_DEFAULT_MEMPOOL_OPS`` (set to ``octeontx_fpavf``)
+
+ Set default mempool ops to octeontx_fpavf.
+
+- ``CONFIG_RTE_LIBRTE_OCTEONTX_MEMPOOL`` (default ``y``)
+
+ Toggle compilation of the ``librte_mempool_octeontx`` driver.
+
+- ``CONFIG_RTE_LIBRTE_OCTEONTX_MEMPOOL_DEBUG`` (default ``n``)
+
+ Toggle display of generic debugging messages
+
+Driver Compilation
+~~~~~~~~~~~~~~~~~~
+
+To compile the OCTEONTX FPAVF MEMPOOL PMD for Linux arm64 gcc target, run the
+following ``make`` command:
+
+.. code-block:: console
+
+ cd <DPDK-source-directory>
+ make config T=arm64-thunderx-linuxapp-gcc test-build
+
+
+Initialization
+--------------
+
+The octeontx fpavf mempool initialization is similar to that of other mempool
+drivers, such as ring. However, the user needs to pass --base-virtaddr as a
+command line argument to the application, for example the mempool test application.
+
+Example:
+
+.. code-block:: console
+
+ ./build/app/test -c 0xf --base-virtaddr=0x100000000000 --mbuf-pool-ops="octeontx_fpavf"
--
2.11.0
* Re: [dpdk-dev] [PATCH v2 10/10] doc: add mempool and octeontx mempool device
2017-08-31 6:37 ` [dpdk-dev] [PATCH v2 10/10] doc: add mempool and octeontx mempool device Santosh Shukla
@ 2017-09-19 13:52 ` Mcnamara, John
0 siblings, 0 replies; 78+ messages in thread
From: Mcnamara, John @ 2017-09-19 13:52 UTC (permalink / raw)
To: Santosh Shukla, dev, olivier.matz; +Cc: jerin.jacob, thomas, hemant.agrawal
> -----Original Message-----
> From: Santosh Shukla [mailto:santosh.shukla@caviumnetworks.com]
> Sent: Thursday, August 31, 2017 7:37 AM
> To: dev@dpdk.org; olivier.matz@6wind.com
> Cc: jerin.jacob@caviumnetworks.com; Mcnamara, John
> <john.mcnamara@intel.com>; thomas@monjalon.net; hemant.agrawal@nxp.com;
> Santosh Shukla <santosh.shukla@caviumnetworks.com>
> Subject: [PATCH v2 10/10] doc: add mempool and octeontx mempool device
>
> This commit adds a section to the docs listing the mempool device PMDs
> available.
>
> It then adds the octeontx fpavf mempool PMD to the listed mempool devices.
>
> Cc: John McNamara <john.mcnamara@intel.com>
>
> Signed-off-by: Santosh Shukla <santosh.shukla@caviumnetworks.com>
> Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
Minor comments below.
> +
> +OCTEONTX FPAVF Mempool Driver
> +==============================
Underscores should match the length of the title.
> +
> +Features OCTEONTX FPAVF PMD (**librte_mempool_octeontx**) is mempool
> +driver for offload mempool device found in **Cavium OCTEONTX** SoC
> +family.
I'd suggest something like:
The OCTEONTX FPAVF PMD (**librte_mempool_octeontx**) is a mempool
driver for offload mempool device found in **Cavium OCTEONTX** SoC
family.
> +
> +More information can be found at `Cavium, Inc Official Website
> +<http://www.cavium.com/OCTEON-TX_ARM_Processors.html>`_.
> +
> +Feature
> +--------
> +
> +Features of the OCTEONTX FPAVF PMD are:
> +- 32 SR-IOV Virtual functions
> +- 32 Pools
> +- HW mempool manager
A bullet list requires a blank line after the previous paragraph to render properly:
Features of the OCTEONTX FPAVF PMD are:
- 32 SR-IOV Virtual functions
- 32 Pools
- HW mempool manager
> +
> +Supported OCTEONTX SoCs
> +-----------------------
> +- CN83xx
Add a blank line after the title.
> + SDK and related information can be obtained from: `Cavium support site
> <https://support.cavium.com/>`_.
> +
> +- Follow the DPDK :ref:`Getting Started Guide for Linux <linux_gsg>` to
> setup the basic DPDK environment.
Remove the "-" bullet at the start of this line.
> +Initialization
> +--------------
> +
> +The octeontx fpavf mempool initialization is similar to that of other mempool
> +drivers, such as ring. However, the user needs to pass --base-virtaddr as a
> +command line argument to the application, for example the mempool test application.
> +
> +Example:
> +
> +.. code-block:: console
> +
> + ./build/app/test -c 0xf --base-virtaddr=0x100000000000 --mbuf-pool-
> ops="octeontx_fpavf"
Wrap the long commandline for the pdf docs:
.. code-block:: console
./build/app/test -c 0xf --base-virtaddr=0x100000000000 \
--mbuf-pool-ops="octeontx_fpavf"
Reviewed-by: John McNamara <john.mcnamara@intel.com>
* Re: [dpdk-dev] [PATCH v2 00/10] Cavium Octeontx external mempool driver
2017-08-31 6:37 ` [dpdk-dev] [PATCH v2 00/10] Cavium Octeontx external mempool driver Santosh Shukla
` (9 preceding siblings ...)
2017-08-31 6:37 ` [dpdk-dev] [PATCH v2 10/10] doc: add mempool and octeontx mempool device Santosh Shukla
@ 2017-09-19 8:29 ` santosh
2017-10-06 20:55 ` Thomas Monjalon
2017-10-08 12:40 ` [dpdk-dev] [PATCH v3 " Santosh Shukla
11 siblings, 1 reply; 78+ messages in thread
From: santosh @ 2017-09-19 8:29 UTC (permalink / raw)
To: dev, olivier.matz; +Cc: jerin.jacob, john.mcnamara, thomas, hemant.agrawal
On Thursday 31 August 2017 12:07 PM, Santosh Shukla wrote:
> v2:
> The patch implements the HW mempool offload driver for packet buffers.
> This HW mempool offload driver has dependencies on:
> - IOVA infrastructure [1].
> - Dynamically configured mempool handle (i.e. the --mbuf-pool-ops eal arg) [2].
> - Infrastructure to support octeontx HW mempool manager [3].
>
> The mempool driver is based on v17.11-rc0. The series has dependencies
> on upstream patches [1],[2],[3]. A git source repo with all those dependency
> patches + the external mempool driver patches is located at [4].
>
>
> A new pool handle called "octeontx_fpavf" is introduced and is configured
> using the eal arg --mbuf-pool-ops="octeontx_fpavf". Note that this eal arg is
> under review.
> Or
> Can be configured statically like below:
> CONFIG_RTE_MBUF_DEFAULT_MEMPOOL_OPS="octeontx_fpavf"
>
> A new mempool driver specific CONFIG_RTE_LIBRTE_OCTEONTX_MEMPOOL config
> is introduced.
>
> Refer doc patch [10/10] for build and run steps.
>
> v1 --> v2:
> - Removed the global rte_octeontx_fpa_bufpool_gpool() api, keeping the inline
> function name octeontx_fpa_bufpool_gpool() in the header file.
> - Release doc cleanup.
> - Removed the gpool2handle for-loop iterate approach; the gpool-id now stays in
> the gpool handle (lsb 5 bits of the handle).
>
> v1:
> Patch summary:
> - [1/10] : add mempool offload HW block definition.
> - [2/10] : support for build and log infra, needed for pmd driver.
> - [3/10] : probe mempool PCIe vf device
> - [4/10] : support pool alloc
> - [5/10] : support pool free
> - [6/10] : support pool enq and deq
> - [7/10] : support pool get count
> - [8/10] : support pool get capability
> - [9/10] : support pool update range
> - [10/10] : doc and release info
>
> Checkpatch status:
> - Noticed false positive line over 80 char debug warning
> - asm_ false +ve error.
>
> Thanks.
>
> [1] http://dpdk.org/ml/archives/dev/2017-August/072871.html
> [2] http://dpdk.org/ml/archives/dev/2017-August/072910.html
> [3] http://dpdk.org/ml/archives/dev/2017-August/072892.html
> [4] https://github.com/sshukla82/dpdk branch: mempool-v2
Ping?
* Re: [dpdk-dev] [PATCH v2 00/10] Cavium Octeontx external mempool driver
2017-09-19 8:29 ` [dpdk-dev] [PATCH v2 00/10] Cavium Octeontx external mempool driver santosh
@ 2017-10-06 20:55 ` Thomas Monjalon
2017-10-07 3:51 ` santosh
0 siblings, 1 reply; 78+ messages in thread
From: Thomas Monjalon @ 2017-10-06 20:55 UTC (permalink / raw)
To: santosh; +Cc: dev, olivier.matz, jerin.jacob, john.mcnamara, hemant.agrawal
19/09/2017 10:29, santosh:
> Ping?
Pong
It seems you forgot this series.
There is a compilation error and some doc changes requested.
About commit titles, you might probably reword starting
without "implement pool" which is redundant with "mempool/octeontx:".
* Re: [dpdk-dev] [PATCH v2 00/10] Cavium Octeontx external mempool driver
2017-10-06 20:55 ` Thomas Monjalon
@ 2017-10-07 3:51 ` santosh
2017-10-07 4:26 ` Ferruh Yigit
0 siblings, 1 reply; 78+ messages in thread
From: santosh @ 2017-10-07 3:51 UTC (permalink / raw)
To: Thomas Monjalon
Cc: dev, olivier.matz, jerin.jacob, john.mcnamara, hemant.agrawal
On Saturday 07 October 2017 02:25 AM, Thomas Monjalon wrote:
> 19/09/2017 10:29, santosh:
>> Ping?
> Pong
>
> It seems you forgot this series.
This series review is pending and planned for -rc2 window.
> There is a compilation error and some doc changes requested.
Yes, I have renamed a few mempool APIs which were just merged in -rc1, and
will then send out the v3 series.
> About commit titles, you might probably reword starting
> without "implement pool" which is redundant with "mempool/octeontx:".
+1.
Thanks.
* Re: [dpdk-dev] [PATCH v2 00/10] Cavium Octeontx external mempool driver
2017-10-07 3:51 ` santosh
@ 2017-10-07 4:26 ` Ferruh Yigit
2017-10-07 4:46 ` santosh
0 siblings, 1 reply; 78+ messages in thread
From: Ferruh Yigit @ 2017-10-07 4:26 UTC (permalink / raw)
To: santosh, Thomas Monjalon
Cc: dev, olivier.matz, jerin.jacob, john.mcnamara, hemant.agrawal
On 10/7/2017 4:51 AM, santosh wrote:
>
> On Saturday 07 October 2017 02:25 AM, Thomas Monjalon wrote:
>> 19/09/2017 10:29, santosh:
>>> Ping?
>> Pong
>>
>> It seems you forgot this series.
>
> This series review is pending and planned for -rc2 window.
Hi Santosh,
octeontx net pmd has dependency on this patch [1], pushing this to rc2
will push net pmd to rc2 as well. And I believe we should get new pmd in
rc1 as much as possible.
If you can send a new version of this patch at the beginning of the next
week, both this and pmd one still can have chance to go in to rc1, can
this be possible?
[1]
http://dpdk.org/ml/archives/dev/2017-October/077322.html
>
>> There is a compilation error and some doc changes requested.
>
> Yes, I have renamed a few mempool APIs which were just merged in -rc1, and
> will then send out the v3 series.
>
>> About commit titles, you might probably reword starting
>> without "implement pool" which is redundant with "mempool/octeontx:".
>
> +1.
>
> Thanks.
>
>
* Re: [dpdk-dev] [PATCH v2 00/10] Cavium Octeontx external mempool driver
2017-10-07 4:26 ` Ferruh Yigit
@ 2017-10-07 4:46 ` santosh
2017-10-08 13:12 ` santosh
0 siblings, 1 reply; 78+ messages in thread
From: santosh @ 2017-10-07 4:46 UTC (permalink / raw)
To: Ferruh Yigit, Thomas Monjalon
Cc: dev, olivier.matz, jerin.jacob, john.mcnamara, hemant.agrawal
On Saturday 07 October 2017 09:56 AM, Ferruh Yigit wrote:
> On 10/7/2017 4:51 AM, santosh wrote:
>> On Saturday 07 October 2017 02:25 AM, Thomas Monjalon wrote:
>>> 19/09/2017 10:29, santosh:
>>>> Ping?
>>> Pong
>>>
>>> It seems you forgot this series.
>> This series review is pending and planned for -rc2 window.
> Hi Santosh,
>
> octeontx net pmd has dependency on this patch [1], pushing this to rc2
> will push net pmd to rc2 as well. And I believe we should get new pmd in
> rc1 as much as possible.
>
> If you can send a new version of this patch at the beginning of the next
> week, both this and pmd one still can have chance to go in to rc1, can
> this be possible?
>
> [1]
> http://dpdk.org/ml/archives/dev/2017-October/077322.html
>
Hi Ferruh,
Yes,
I will push this pmd v3 series and rebased ONA pmd [1] both
by early next week for sure.
http://dpdk.org/ml/archives/dev/2017-October/077322.html
+1 to your suggestion.
Thanks.
[1]
http://dpdk.org/ml/archives/dev/2017-October/077322.html
>>> There is a compilation error and some doc changes requested.
>> Yes, I have renamed a few mempool APIs which were just merged in -rc1, and
>> will then send out the v3 series.
>>
>>> About commit titles, you might probably reword starting
>>> without "implement pool" which is redundant with "mempool/octeontx:".
>> +1.
>>
>> Thanks.
>>
>>
* Re: [dpdk-dev] [PATCH v2 00/10] Cavium Octeontx external mempool driver
2017-10-07 4:46 ` santosh
@ 2017-10-08 13:12 ` santosh
0 siblings, 0 replies; 78+ messages in thread
From: santosh @ 2017-10-08 13:12 UTC (permalink / raw)
To: Ferruh Yigit, Thomas Monjalon
Cc: dev, olivier.matz, jerin.jacob, john.mcnamara, hemant.agrawal
On Saturday 07 October 2017 10:16 AM, santosh wrote:
> On Saturday 07 October 2017 09:56 AM, Ferruh Yigit wrote:
>> On 10/7/2017 4:51 AM, santosh wrote:
>>> On Saturday 07 October 2017 02:25 AM, Thomas Monjalon wrote:
>>>> 19/09/2017 10:29, santosh:
>>>>> Ping?
>>>> Pong
>>>>
>>>> It seems you forgot this series.
>>> This series review is pending and planned for -rc2 window.
>> Hi Santosh,
>>
>> octeontx net pmd has dependency on this patch [1], pushing this to rc2
>> will push net pmd to rc2 as well. And I believe we should get new pmd in
>> rc1 as much as possible.
>>
>> If you can send a new version of this patch at the beginning of the next
>> week, both this and pmd one still can have chance to go in to rc1, can
>> this be possible?
>>
>> [1]
>> http://dpdk.org/ml/archives/dev/2017-October/077322.html
>>
> Hi Ferruh,
>
> Yes,
>
> I will push this pmd v3 series and rebased ONA pmd [1] both
> by early next week for sure.
> http://dpdk.org/ml/archives/dev/2017-October/077322.html
>
> +1 to your suggestion.
>
> Thanks.
> [1]
> http://dpdk.org/ml/archives/dev/2017-October/077322.html
Hi Ferruh,
I have pushed the v2 octeontx PMD [1] just now; it has no dependency.
Also pushed the v3 octeontx external mempool pmd [2]; it is rebased on tip.
Both series are ready to merge for -rc1.
The order for the series is: first apply [2] and then [1] on tip.
Thanks.
[1] http://dpdk.org/ml/archives/dev/2017-October/078071.html
[2] http://dpdk.org/ml/archives/dev/2017-October/078060.html
* [dpdk-dev] [PATCH v3 00/10] Cavium Octeontx external mempool driver
2017-08-31 6:37 ` [dpdk-dev] [PATCH v2 00/10] Cavium Octeontx external mempool driver Santosh Shukla
` (10 preceding siblings ...)
2017-09-19 8:29 ` [dpdk-dev] [PATCH v2 00/10] Cavium Octeontx external mempool driver santosh
@ 2017-10-08 12:40 ` Santosh Shukla
2017-10-08 12:40 ` [dpdk-dev] [PATCH v3 01/10] mempool/octeontx: add HW constants Santosh Shukla
` (10 more replies)
11 siblings, 11 replies; 78+ messages in thread
From: Santosh Shukla @ 2017-10-08 12:40 UTC (permalink / raw)
To: olivier.matz, dev; +Cc: thomas, jerin.jacob, hemant.agrawal, Santosh Shukla
v3:
- Incorporated mempool handle changes for
* .get_capabilities
* .register_memory_area
- Rebased on upstream tip commit: 3fb1ea032bd6ff8317af5dac9af901f1f324cab4
The patch implements the HW mempool offload driver for packet buffers.
A new pool handle called "octeontx_fpavf" is introduced and is configured
using the eal arg --mbuf-pool-ops-name="octeontx_fpavf".
Or
Can be configured statically like below:
CONFIG_RTE_MBUF_DEFAULT_MEMPOOL_OPS="octeontx_fpavf"
A new mempool driver specific CONFIG_RTE_LIBRTE_OCTEONTX_MEMPOOL config
is introduced.
Refer to doc patch [10/10] for build and run steps.
Refer to [1] for the rebased tree, which includes the octeontx_fpavf mempool
pmd + the ONA pmd on top, for users to test.
v2 --> v3:
(Major changes are above; below are minor)
- Incorporated doc review comments (John McNamara)
- Renamed the pool alloc/free/enq/deq/get count v2 patch titles
from "implement pool" to "add support for". (Identified by Thomas)
- Updated the doc with the correct pool handle name, from
mbuf-pool-ops to mbuf-pool-ops-name.
v1 --> v2:
- Removed the global rte_octeontx_fpa_bufpool_gpool() api, keeping the inline
function name octeontx_fpa_bufpool_gpool() in the header file.
- Release doc cleanup.
- Removed the gpool2handle for-loop iterate approach; the gpool-id now stays in
the gpool handle (lsb 5 bits of the handle), as sketched below.
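A purely illustrative sketch of that handle layout (the 0x1f mask mirrors the driver's FPA_GPOOL_MASK; the BAR0 virtual address is assumed to be aligned well beyond 5 bits, so the id fits in the low bits):

#include <stdint.h>

#define GPOOL_MASK 0x1fULL	/* low 5 bits of the handle: up to 32 gpools */

/* handle = VF BAR0 virtual address | gpool id */
static inline uintptr_t
gpool2handle_sketch(void *bar0, uint16_t gpool)
{
	return (uintptr_t)bar0 | ((uintptr_t)gpool & GPOOL_MASK);
}

static inline uint16_t
handle2gpool_sketch(uintptr_t handle)
{
	return (uint16_t)(handle & GPOOL_MASK); /* id rides in the alignment bits */
}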
v1:
Patch summary:
- [1/10] : add mempool offload HW block definition.
- [2/10] : support for build and log infra, needed for pmd driver.
- [3/10] : probe mempool PCIe vf device
- [4/10] : support pool alloc
- [5/10] : support pool free
- [6/10] : support pool enq and deq
- [7/10] : support pool get count
- [8/10] : support pool get capability
- [9/10] : support pool update range
- [10/10] : doc and release info
Checkpatch status:
- Noticed false positive line over 80 char debug warning
- asm_ false +ve error.
Thanks.
[1] https://github.com/sshukla82/dpdk branch: octeontx-ona-pmd-v2
Santosh Shukla (10):
mempool/octeontx: add HW constants
mempool/octeontx: add build and log infrastructure
mempool/octeontx: probe fpavf pcie devices
mempool/octeontx: add support for alloc
mempool/octeontx: add support for free
mempool/octeontx: add support for enq and deq
mempool/octeontx: add support for get count
mempool/octeontx: add support for get capability
mempool/octeontx: add support for memory area ops
doc: add mempool and octeontx mempool device
MAINTAINERS | 7 +
config/common_base | 6 +
doc/guides/index.rst | 1 +
doc/guides/mempool/index.rst | 40 +
doc/guides/mempool/octeontx.rst | 130 ++++
drivers/Makefile | 5 +-
drivers/mempool/Makefile | 2 +
drivers/mempool/octeontx/Makefile | 74 ++
drivers/mempool/octeontx/octeontx_fpavf.c | 834 +++++++++++++++++++++
drivers/mempool/octeontx/octeontx_fpavf.h | 150 ++++
drivers/mempool/octeontx/rte_mempool_octeontx.c | 253 +++++++
.../octeontx/rte_mempool_octeontx_version.map | 4 +
mk/rte.app.mk | 1 +
13 files changed, 1505 insertions(+), 2 deletions(-)
create mode 100644 doc/guides/mempool/index.rst
create mode 100644 doc/guides/mempool/octeontx.rst
create mode 100644 drivers/mempool/octeontx/Makefile
create mode 100644 drivers/mempool/octeontx/octeontx_fpavf.c
create mode 100644 drivers/mempool/octeontx/octeontx_fpavf.h
create mode 100644 drivers/mempool/octeontx/rte_mempool_octeontx.c
create mode 100644 drivers/mempool/octeontx/rte_mempool_octeontx_version.map
--
2.14.1
* [dpdk-dev] [PATCH v3 01/10] mempool/octeontx: add HW constants
2017-10-08 12:40 ` [dpdk-dev] [PATCH v3 " Santosh Shukla
@ 2017-10-08 12:40 ` Santosh Shukla
2017-10-08 12:40 ` [dpdk-dev] [PATCH v3 02/10] mempool/octeontx: add build and log infrastructure Santosh Shukla
` (9 subsequent siblings)
10 siblings, 0 replies; 78+ messages in thread
From: Santosh Shukla @ 2017-10-08 12:40 UTC (permalink / raw)
To: olivier.matz, dev; +Cc: thomas, jerin.jacob, hemant.agrawal, Santosh Shukla
add HW constants of octeontx fpa mempool device.
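As a worked example (illustrative, not part of the patch), the per-aura register macros below fold the aura index into bits 21:18 of the offset:

#include <stdio.h>
#include <inttypes.h>

int main(void)
{
	unsigned int vaura = 3;
	/* same arithmetic as FPA_VF_VHAURA_CNT(vaura) below:
	 * 0x20120 | ((vaura & 0xf) << 18)
	 */
	uint64_t off = 0x20120 | ((uint64_t)(vaura & 0xf) << 18);

	printf("FPA_VF_VHAURA_CNT(%u) = 0x%" PRIx64 "\n", vaura, off);
	/* prints 0xe0120: (3 << 18) = 0xc0000, OR'd with 0x20120 */
	return 0;
}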
Signed-off-by: Santosh Shukla <santosh.shukla@caviumnetworks.com>
Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
---
drivers/mempool/octeontx/octeontx_fpavf.h | 71 +++++++++++++++++++++++++++++++
1 file changed, 71 insertions(+)
create mode 100644 drivers/mempool/octeontx/octeontx_fpavf.h
diff --git a/drivers/mempool/octeontx/octeontx_fpavf.h b/drivers/mempool/octeontx/octeontx_fpavf.h
new file mode 100644
index 000000000..5c4ee04f7
--- /dev/null
+++ b/drivers/mempool/octeontx/octeontx_fpavf.h
@@ -0,0 +1,71 @@
+/*
+ * BSD LICENSE
+ *
+ * Copyright (C) 2017 Cavium Inc. All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ *
+ * * Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ * * Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in
+ * the documentation and/or other materials provided with the
+ * distribution.
+ * * Neither the name of Cavium networks nor the names of its
+ * contributors may be used to endorse or promote products derived
+ * from this software without specific prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef __OCTEONTX_FPAVF_H__
+#define __OCTEONTX_FPAVF_H__
+
+/* fpa pool Vendor ID and Device ID */
+#define PCI_VENDOR_ID_CAVIUM 0x177D
+#define PCI_DEVICE_ID_OCTEONTX_FPA_VF 0xA053
+
+#define FPA_VF_MAX 32
+
+/* FPA VF register offsets */
+#define FPA_VF_INT(x) (0x200ULL | ((x) << 22))
+#define FPA_VF_INT_W1S(x) (0x210ULL | ((x) << 22))
+#define FPA_VF_INT_ENA_W1S(x) (0x220ULL | ((x) << 22))
+#define FPA_VF_INT_ENA_W1C(x) (0x230ULL | ((x) << 22))
+
+#define FPA_VF_VHPOOL_AVAILABLE(vhpool) (0x04150 | ((vhpool)&0x0))
+#define FPA_VF_VHPOOL_THRESHOLD(vhpool) (0x04160 | ((vhpool)&0x0))
+#define FPA_VF_VHPOOL_START_ADDR(vhpool) (0x04200 | ((vhpool)&0x0))
+#define FPA_VF_VHPOOL_END_ADDR(vhpool) (0x04210 | ((vhpool)&0x0))
+
+#define FPA_VF_VHAURA_CNT(vaura) (0x20120 | ((vaura)&0xf)<<18)
+#define FPA_VF_VHAURA_CNT_ADD(vaura) (0x20128 | ((vaura)&0xf)<<18)
+#define FPA_VF_VHAURA_CNT_LIMIT(vaura) (0x20130 | ((vaura)&0xf)<<18)
+#define FPA_VF_VHAURA_CNT_THRESHOLD(vaura) (0x20140 | ((vaura)&0xf)<<18)
+#define FPA_VF_VHAURA_OP_ALLOC(vaura) (0x30000 | ((vaura)&0xf)<<18)
+#define FPA_VF_VHAURA_OP_FREE(vaura) (0x38000 | ((vaura)&0xf)<<18)
+
+#define FPA_VF_FREE_ADDRS_S(x, y, z) \
+ ((x) | (((y) & 0x1ff) << 3) | ((((z) & 1)) << 14))
+
+/* FPA VF register offsets from VF_BAR4, size 2 MByte */
+#define FPA_VF_MSIX_VEC_ADDR 0x00000
+#define FPA_VF_MSIX_VEC_CTL 0x00008
+#define FPA_VF_MSIX_PBA 0xF0000
+
+#define FPA_VF0_APERTURE_SHIFT 22
+#define FPA_AURA_SET_SIZE 16
+
+#endif /* __OCTEONTX_FPAVF_H__ */
--
2.14.1
* [dpdk-dev] [PATCH v3 02/10] mempool/octeontx: add build and log infrastructure
2017-10-08 12:40 ` [dpdk-dev] [PATCH v3 " Santosh Shukla
2017-10-08 12:40 ` [dpdk-dev] [PATCH v3 01/10] mempool/octeontx: add HW constants Santosh Shukla
@ 2017-10-08 12:40 ` Santosh Shukla
2017-10-08 17:16 ` Thomas Monjalon
2017-10-08 12:40 ` [dpdk-dev] [PATCH v3 03/10] mempool/octeontx: probe fpavf pcie devices Santosh Shukla
` (8 subsequent siblings)
10 siblings, 1 reply; 78+ messages in thread
From: Santosh Shukla @ 2017-10-08 12:40 UTC (permalink / raw)
To: olivier.matz, dev; +Cc: thomas, jerin.jacob, hemant.agrawal, Santosh Shukla
Signed-off-by: Santosh Shukla <santosh.shukla@caviumnetworks.com>
Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
---
config/common_base | 6 +++
drivers/Makefile | 5 +-
drivers/mempool/Makefile | 2 +
drivers/mempool/octeontx/Makefile | 60 ++++++++++++++++++++++
drivers/mempool/octeontx/octeontx_fpavf.c | 31 +++++++++++
drivers/mempool/octeontx/octeontx_fpavf.h | 19 +++++++
.../octeontx/rte_mempool_octeontx_version.map | 4 ++
mk/rte.app.mk | 1 +
8 files changed, 126 insertions(+), 2 deletions(-)
create mode 100644 drivers/mempool/octeontx/Makefile
create mode 100644 drivers/mempool/octeontx/octeontx_fpavf.c
create mode 100644 drivers/mempool/octeontx/rte_mempool_octeontx_version.map
diff --git a/config/common_base b/config/common_base
index ca4761527..f4b140263 100644
--- a/config/common_base
+++ b/config/common_base
@@ -560,6 +560,12 @@ CONFIG_RTE_LIBRTE_MEMPOOL_DEBUG=n
CONFIG_RTE_DRIVER_MEMPOOL_RING=y
CONFIG_RTE_DRIVER_MEMPOOL_STACK=y
+#
+# Compile PMD for octeontx fpa mempool device
+#
+CONFIG_RTE_LIBRTE_OCTEONTX_MEMPOOL=y
+CONFIG_RTE_LIBRTE_OCTEONTX_MEMPOOL_DEBUG=n
+
#
# Compile librte_mbuf
#
diff --git a/drivers/Makefile b/drivers/Makefile
index 7fef66d71..c4483faa7 100644
--- a/drivers/Makefile
+++ b/drivers/Makefile
@@ -32,13 +32,14 @@
include $(RTE_SDK)/mk/rte.vars.mk
DIRS-y += bus
+DEPDIRS-event := bus
+DIRS-$(CONFIG_RTE_LIBRTE_EVENTDEV) += event
DIRS-y += mempool
DEPDIRS-mempool := bus
+DEPDIRS-mempool := event
DIRS-y += net
DEPDIRS-net := bus mempool
DIRS-$(CONFIG_RTE_LIBRTE_CRYPTODEV) += crypto
DEPDIRS-crypto := mempool
-DIRS-$(CONFIG_RTE_LIBRTE_EVENTDEV) += event
-DEPDIRS-event := bus
include $(RTE_SDK)/mk/rte.subdir.mk
diff --git a/drivers/mempool/Makefile b/drivers/mempool/Makefile
index bfc5f0067..18cbaa293 100644
--- a/drivers/mempool/Makefile
+++ b/drivers/mempool/Makefile
@@ -40,5 +40,7 @@ DIRS-$(CONFIG_RTE_DRIVER_MEMPOOL_RING) += ring
DEPDIRS-ring = $(core-libs)
DIRS-$(CONFIG_RTE_DRIVER_MEMPOOL_STACK) += stack
DEPDIRS-stack = $(core-libs)
+DIRS-$(CONFIG_RTE_LIBRTE_OCTEONTX_MEMPOOL) += octeontx
+DEPDIRS-octeontx = $(core-libs) librte_pmd_octeontx_ssovf
include $(RTE_SDK)/mk/rte.subdir.mk
diff --git a/drivers/mempool/octeontx/Makefile b/drivers/mempool/octeontx/Makefile
new file mode 100644
index 000000000..55ca1d944
--- /dev/null
+++ b/drivers/mempool/octeontx/Makefile
@@ -0,0 +1,60 @@
+# BSD LICENSE
+#
+# Copyright(c) 2017 Cavium Inc. All rights reserved.
+#
+# Redistribution and use in source and binary forms, with or without
+# modification, are permitted provided that the following conditions
+# are met:
+#
+# * Redistributions of source code must retain the above copyright
+# notice, this list of conditions and the following disclaimer.
+# * Redistributions in binary form must reproduce the above copyright
+# notice, this list of conditions and the following disclaimer in
+# the documentation and/or other materials provided with the
+# distribution.
+# * Neither the name of Cavium Networks nor the names of its
+# contributors may be used to endorse or promote products derived
+# from this software without specific prior written permission.
+#
+# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+#
+
+include $(RTE_SDK)/mk/rte.vars.mk
+
+#
+# library name
+#
+LIB = librte_mempool_octeontx.a
+
+ifeq ($(CONFIG_RTE_LIBRTE_OCTEONTX_MEMPOOL_DEBUG),y)
+CFLAGS += -O0 -g
+else
+CFLAGS += -O3
+CFLAGS += $(WERROR_FLAGS)
+endif
+
+EXPORT_MAP := rte_mempool_octeontx_version.map
+
+LIBABIVER := 1
+
+#
+# all source are stored in SRCS-y
+#
+SRCS-$(CONFIG_RTE_LIBRTE_OCTEONTX_MEMPOOL) += octeontx_fpavf.c
+
+# this lib depends upon:
+DEPDIRS-$(CONFIG_RTE_LIBRTE_OCTEONTX_MEMPOOL) += lib/librte_mbuf
+
+LDLIBS += -lrte_pmd_octeontx_ssovf
+
+include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/drivers/mempool/octeontx/octeontx_fpavf.c b/drivers/mempool/octeontx/octeontx_fpavf.c
new file mode 100644
index 000000000..9bb7759c0
--- /dev/null
+++ b/drivers/mempool/octeontx/octeontx_fpavf.c
@@ -0,0 +1,31 @@
+/*
+ * BSD LICENSE
+ *
+ * Copyright (C) Cavium Inc. 2017. All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ *
+ * * Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ * * Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in
+ * the documentation and/or other materials provided with the
+ * distribution.
+ * * Neither the name of Cavium networks nor the names of its
+ * contributors may be used to endorse or promote products derived
+ * from this software without specific prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
diff --git a/drivers/mempool/octeontx/octeontx_fpavf.h b/drivers/mempool/octeontx/octeontx_fpavf.h
index 5c4ee04f7..1c703725c 100644
--- a/drivers/mempool/octeontx/octeontx_fpavf.h
+++ b/drivers/mempool/octeontx/octeontx_fpavf.h
@@ -33,6 +33,25 @@
#ifndef __OCTEONTX_FPAVF_H__
#define __OCTEONTX_FPAVF_H__
+#include <rte_debug.h>
+
+#ifdef RTE_LIBRTE_OCTEONTX_MEMPOOL_DEBUG
+#define fpavf_log_info(fmt, args...) \
+ RTE_LOG(INFO, PMD, "%s() line %u: " fmt "\n", \
+ __func__, __LINE__, ## args)
+#define fpavf_log_dbg(fmt, args...) \
+ RTE_LOG(DEBUG, PMD, "%s() line %u: " fmt "\n", \
+ __func__, __LINE__, ## args)
+#else
+#define fpavf_log_info(fmt, args...)
+#define fpavf_log_dbg(fmt, args...)
+#endif
+
+#define fpavf_func_trace fpavf_log_dbg
+#define fpavf_log_err(fmt, args...) \
+ RTE_LOG(ERR, PMD, "%s() line %u: " fmt "\n", \
+ __func__, __LINE__, ## args)
+
/* fpa pool Vendor ID and Device ID */
#define PCI_VENDOR_ID_CAVIUM 0x177D
#define PCI_DEVICE_ID_OCTEONTX_FPA_VF 0xA053
diff --git a/drivers/mempool/octeontx/rte_mempool_octeontx_version.map b/drivers/mempool/octeontx/rte_mempool_octeontx_version.map
new file mode 100644
index 000000000..a70bd197b
--- /dev/null
+++ b/drivers/mempool/octeontx/rte_mempool_octeontx_version.map
@@ -0,0 +1,4 @@
+DPDK_17.11 {
+
+ local: *;
+};
diff --git a/mk/rte.app.mk b/mk/rte.app.mk
index 29507dcd2..76b1db3f7 100644
--- a/mk/rte.app.mk
+++ b/mk/rte.app.mk
@@ -180,6 +180,7 @@ _LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_SKELETON_EVENTDEV) += -lrte_pmd_skeleton_event
_LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_SW_EVENTDEV) += -lrte_pmd_sw_event
_LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_OCTEONTX_SSOVF) += -lrte_pmd_octeontx_ssovf
_LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_DPAA2_EVENTDEV) += -lrte_pmd_dpaa2_event
+_LDLIBS-$(CONFIG_RTE_LIBRTE_OCTEONTX_MEMPOOL) += -lrte_mempool_octeontx
endif # CONFIG_RTE_LIBRTE_EVENTDEV
ifeq ($(CONFIG_RTE_LIBRTE_DPAA2_PMD),y)
--
2.14.1
* Re: [dpdk-dev] [PATCH v3 02/10] mempool/octeontx: add build and log infrastructure
2017-10-08 12:40 ` [dpdk-dev] [PATCH v3 02/10] mempool/octeontx: add build and log infrastructure Santosh Shukla
@ 2017-10-08 17:16 ` Thomas Monjalon
2017-10-09 5:03 ` santosh
0 siblings, 1 reply; 78+ messages in thread
From: Thomas Monjalon @ 2017-10-08 17:16 UTC (permalink / raw)
To: Santosh Shukla; +Cc: dev, olivier.matz, jerin.jacob, hemant.agrawal
08/10/2017 14:40, Santosh Shukla:
> DEPDIRS-mempool := bus
> +DEPDIRS-mempool := event
It is overriding dependency on bus.
It is not a big deal because event depends on bus.
Fixed as
DEPDIRS-mempool := bus event
* Re: [dpdk-dev] [PATCH v3 02/10] mempool/octeontx: add build and log infrastructure
2017-10-08 17:16 ` Thomas Monjalon
@ 2017-10-09 5:03 ` santosh
0 siblings, 0 replies; 78+ messages in thread
From: santosh @ 2017-10-09 5:03 UTC (permalink / raw)
To: Thomas Monjalon; +Cc: dev, olivier.matz, jerin.jacob, hemant.agrawal
On Sunday 08 October 2017 10:46 PM, Thomas Monjalon wrote:
> 08/10/2017 14:40, Santosh Shukla:
>> DEPDIRS-mempool := bus
>> +DEPDIRS-mempool := event
> It is overriding dependency on bus.
> It is not a big deal because event depends on bus.
>
> Fixed as
> DEPDIRS-mempool := bus event
Thanks.
* [dpdk-dev] [PATCH v3 03/10] mempool/octeontx: probe fpavf pcie devices
2017-10-08 12:40 ` [dpdk-dev] [PATCH v3 " Santosh Shukla
2017-10-08 12:40 ` [dpdk-dev] [PATCH v3 01/10] mempool/octeontx: add HW constants Santosh Shukla
2017-10-08 12:40 ` [dpdk-dev] [PATCH v3 02/10] mempool/octeontx: add build and log infrastructure Santosh Shukla
@ 2017-10-08 12:40 ` Santosh Shukla
2017-10-08 12:40 ` [dpdk-dev] [PATCH v3 04/10] mempool/octeontx: add support for alloc Santosh Shukla
` (7 subsequent siblings)
10 siblings, 0 replies; 78+ messages in thread
From: Santosh Shukla @ 2017-10-08 12:40 UTC (permalink / raw)
To: olivier.matz, dev; +Cc: thomas, jerin.jacob, hemant.agrawal, Santosh Shukla
A mempool device is a set of PCIe VFs.
On Octeontx HW, each mempool device is enumerated as a
separate SR-IOV VF PCIe device.
In order to expose it as a mempool device:
on PCIe probe, the driver stores the information associated with the
PCIe device, and later, upon an application pool request
(e.g. rte_mempool_create_empty), the infrastructure creates a pool device
from the earlier probed PCIe VF devices.
Signed-off-by: Santosh Shukla <santosh.shukla@caviumnetworks.com>
Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
---
drivers/mempool/octeontx/octeontx_fpavf.c | 151 ++++++++++++++++++++++++++++++
drivers/mempool/octeontx/octeontx_fpavf.h | 39 ++++++++
2 files changed, 190 insertions(+)
diff --git a/drivers/mempool/octeontx/octeontx_fpavf.c b/drivers/mempool/octeontx/octeontx_fpavf.c
index 9bb7759c0..0b4a9357f 100644
--- a/drivers/mempool/octeontx/octeontx_fpavf.c
+++ b/drivers/mempool/octeontx/octeontx_fpavf.c
@@ -29,3 +29,154 @@
* (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
* OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
*/
+
+#include <stdlib.h>
+#include <string.h>
+#include <stdbool.h>
+#include <stdio.h>
+#include <unistd.h>
+#include <fcntl.h>
+#include <errno.h>
+#include <sys/mman.h>
+
+#include <rte_atomic.h>
+#include <rte_eal.h>
+#include <rte_pci.h>
+#include <rte_errno.h>
+#include <rte_memory.h>
+#include <rte_malloc.h>
+#include <rte_spinlock.h>
+
+#include "octeontx_fpavf.h"
+
+struct fpavf_res {
+ void *pool_stack_base;
+ void *bar0;
+ uint64_t stack_ln_ptr;
+ uint16_t domain_id;
+ uint16_t vf_id; /* gpool_id */
+ uint16_t sz128; /* Block size in cache lines */
+ bool is_inuse;
+};
+
+struct octeontx_fpadev {
+ rte_spinlock_t lock;
+ uint8_t total_gpool_cnt;
+ struct fpavf_res pool[FPA_VF_MAX];
+};
+
+static struct octeontx_fpadev fpadev;
+
+static void
+octeontx_fpavf_setup(void)
+{
+ uint8_t i;
+ static bool init_once;
+
+ if (!init_once) {
+ rte_spinlock_init(&fpadev.lock);
+ fpadev.total_gpool_cnt = 0;
+
+ for (i = 0; i < FPA_VF_MAX; i++) {
+
+ fpadev.pool[i].domain_id = ~0;
+ fpadev.pool[i].stack_ln_ptr = 0;
+ fpadev.pool[i].sz128 = 0;
+ fpadev.pool[i].bar0 = NULL;
+ fpadev.pool[i].pool_stack_base = NULL;
+ fpadev.pool[i].is_inuse = false;
+ }
+ init_once = 1;
+ }
+}
+
+static int
+octeontx_fpavf_identify(void *bar0)
+{
+ uint64_t val;
+ uint16_t domain_id;
+ uint16_t vf_id;
+ uint64_t stack_ln_ptr;
+
+ val = fpavf_read64((void *)((uintptr_t)bar0 +
+ FPA_VF_VHAURA_CNT_THRESHOLD(0)));
+
+ domain_id = (val >> 8) & 0xffff;
+ vf_id = (val >> 24) & 0xffff;
+
+ stack_ln_ptr = fpavf_read64((void *)((uintptr_t)bar0 +
+ FPA_VF_VHPOOL_THRESHOLD(0)));
+ if (vf_id >= FPA_VF_MAX) {
+ fpavf_log_err("vf_id(%d) greater than max vf (32)\n", vf_id);
+ return -1;
+ }
+
+ if (fpadev.pool[vf_id].is_inuse) {
+ fpavf_log_err("vf_id %d is_inuse\n", vf_id);
+ return -1;
+ }
+
+ fpadev.pool[vf_id].domain_id = domain_id;
+ fpadev.pool[vf_id].vf_id = vf_id;
+ fpadev.pool[vf_id].bar0 = bar0;
+ fpadev.pool[vf_id].stack_ln_ptr = stack_ln_ptr;
+
+ /* SUCCESS */
+ return vf_id;
+}
+
+/* FPAVF pcie device aka mempool probe */
+static int
+fpavf_probe(struct rte_pci_driver *pci_drv, struct rte_pci_device *pci_dev)
+{
+ uint8_t *idreg;
+ int res;
+ struct fpavf_res *fpa;
+
+ RTE_SET_USED(pci_drv);
+ RTE_SET_USED(fpa);
+
+ /* For secondary processes, the primary has done all the work */
+ if (rte_eal_process_type() != RTE_PROC_PRIMARY)
+ return 0;
+
+ if (pci_dev->mem_resource[0].addr == NULL) {
+ fpavf_log_err("Empty bars %p ", pci_dev->mem_resource[0].addr);
+ return -ENODEV;
+ }
+ idreg = pci_dev->mem_resource[0].addr;
+
+ octeontx_fpavf_setup();
+
+ res = octeontx_fpavf_identify(idreg);
+ if (res < 0)
+ return -1;
+
+ fpa = &fpadev.pool[res];
+ fpadev.total_gpool_cnt++;
+ rte_wmb();
+
+ fpavf_log_dbg("total_fpavfs %d bar0 %p domain %d vf %d stk_ln_ptr 0x%x",
+ fpadev.total_gpool_cnt, fpa->bar0, fpa->domain_id,
+ fpa->vf_id, (unsigned int)fpa->stack_ln_ptr);
+
+ return 0;
+}
+
+static const struct rte_pci_id pci_fpavf_map[] = {
+ {
+ RTE_PCI_DEVICE(PCI_VENDOR_ID_CAVIUM,
+ PCI_DEVICE_ID_OCTEONTX_FPA_VF)
+ },
+ {
+ .vendor_id = 0,
+ },
+};
+
+static struct rte_pci_driver pci_fpavf = {
+ .id_table = pci_fpavf_map,
+ .drv_flags = RTE_PCI_DRV_NEED_MAPPING | RTE_PCI_DRV_IOVA_AS_VA,
+ .probe = fpavf_probe,
+};
+
+RTE_PMD_REGISTER_PCI(octeontx_fpavf, pci_fpavf);
diff --git a/drivers/mempool/octeontx/octeontx_fpavf.h b/drivers/mempool/octeontx/octeontx_fpavf.h
index 1c703725c..c43b1a7d2 100644
--- a/drivers/mempool/octeontx/octeontx_fpavf.h
+++ b/drivers/mempool/octeontx/octeontx_fpavf.h
@@ -34,6 +34,7 @@
#define __OCTEONTX_FPAVF_H__
#include <rte_debug.h>
+#include <rte_io.h>
#ifdef RTE_LIBRTE_OCTEONTX_MEMPOOL_DEBUG
#define fpavf_log_info(fmt, args...) \
@@ -87,4 +88,42 @@
#define FPA_VF0_APERTURE_SHIFT 22
#define FPA_AURA_SET_SIZE 16
+
+/*
+ * In Cavium OcteonTX SoC, all accesses to the device registers are
+ * implicitly strongly ordered. So, the relaxed version of the IO operations is
+ * safe to use without any IO memory barriers.
+ */
+#define fpavf_read64 rte_read64_relaxed
+#define fpavf_write64 rte_write64_relaxed
+
+/* ARM64 specific functions */
+#if defined(RTE_ARCH_ARM64)
+#define fpavf_load_pair(val0, val1, addr) ({ \
+ asm volatile( \
+ "ldp %x[x0], %x[x1], [%x[p1]]" \
+ :[x0]"=r"(val0), [x1]"=r"(val1) \
+ :[p1]"r"(addr) \
+ ); })
+
+#define fpavf_store_pair(val0, val1, addr) ({ \
+ asm volatile( \
+ "stp %x[x0], %x[x1], [%x[p1]]" \
+ ::[x0]"r"(val0), [x1]"r"(val1), [p1]"r"(addr) \
+ ); })
+#else /* Unoptimized functions for building on non-arm64 arch */
+
+#define fpavf_load_pair(val0, val1, addr) \
+do { \
+ val0 = rte_read64(addr); \
+ val1 = rte_read64(((uint8_t *)addr) + 8); \
+} while (0)
+
+#define fpavf_store_pair(val0, val1, addr) \
+do { \
+ rte_write64(val0, addr); \
+ rte_write64(val1, (((uint8_t *)addr) + 8)); \
+} while (0)
+#endif
+
#endif /* __OCTEONTX_FPAVF_H__ */
--
2.14.1
* [dpdk-dev] [PATCH v3 04/10] mempool/octeontx: add support for alloc
2017-10-08 12:40 ` [dpdk-dev] [PATCH v3 " Santosh Shukla
` (2 preceding siblings ...)
2017-10-08 12:40 ` [dpdk-dev] [PATCH v3 03/10] mempool/octeontx: probe fpavf pcie devices Santosh Shukla
@ 2017-10-08 12:40 ` Santosh Shukla
2017-10-08 12:40 ` [dpdk-dev] [PATCH v3 05/10] mempool/octeontx: add support for free Santosh Shukla
` (6 subsequent siblings)
10 siblings, 0 replies; 78+ messages in thread
From: Santosh Shukla @ 2017-10-08 12:40 UTC (permalink / raw)
To: olivier.matz, dev; +Cc: thomas, jerin.jacob, hemant.agrawal, Santosh Shukla
Upon a pool allocation request by the application, the Octeontx FPA alloc
op does the following:
- Gets a free pool from the pci fpavf array.
- Uses the mbox to communicate to the fpapf driver:
* gpool-id
* pool block_sz
* alignment
- Programs the fpavf pool boundary.
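The pool configuration word sent over the mbox is an OR of the field macros this patch defines; condensed from octeontx_fpapf_pool_setup() in the diff below (buf_size and buf_offset are first scaled to 128-byte cache lines):

uint64_t reg = POOL_BUF_SIZE(buf_size / FPA_LN_SIZE) |
	       POOL_BUF_OFFSET(buf_offset / FPA_LN_SIZE) |
	       POOL_LTYPE(0x2) | POOL_STYPE(0) |
	       POOL_SET_NAT_ALIGN | POOL_ENA;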
Signed-off-by: Santosh Shukla <santosh.shukla@caviumnetworks.com>
Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
---
drivers/mempool/octeontx/Makefile | 1 +
drivers/mempool/octeontx/octeontx_fpavf.c | 514 ++++++++++++++++++++++++
drivers/mempool/octeontx/octeontx_fpavf.h | 17 +
drivers/mempool/octeontx/rte_mempool_octeontx.c | 88 ++++
4 files changed, 620 insertions(+)
create mode 100644 drivers/mempool/octeontx/rte_mempool_octeontx.c
diff --git a/drivers/mempool/octeontx/Makefile b/drivers/mempool/octeontx/Makefile
index 55ca1d944..9c3389608 100644
--- a/drivers/mempool/octeontx/Makefile
+++ b/drivers/mempool/octeontx/Makefile
@@ -51,6 +51,7 @@ LIBABIVER := 1
# all source are stored in SRCS-y
#
SRCS-$(CONFIG_RTE_LIBRTE_OCTEONTX_MEMPOOL) += octeontx_fpavf.c
+SRCS-$(CONFIG_RTE_LIBRTE_OCTEONTX_MEMPOOL) += rte_mempool_octeontx.c
# this lib depends upon:
DEPDIRS-$(CONFIG_RTE_LIBRTE_OCTEONTX_MEMPOOL) += lib/librte_mbuf
diff --git a/drivers/mempool/octeontx/octeontx_fpavf.c b/drivers/mempool/octeontx/octeontx_fpavf.c
index 0b4a9357f..c0c9d8325 100644
--- a/drivers/mempool/octeontx/octeontx_fpavf.c
+++ b/drivers/mempool/octeontx/octeontx_fpavf.c
@@ -46,9 +46,75 @@
#include <rte_memory.h>
#include <rte_malloc.h>
#include <rte_spinlock.h>
+#include <rte_mbuf.h>
+#include <rte_pmd_octeontx_ssovf.h>
#include "octeontx_fpavf.h"
+/* FPA Mbox Message */
+#define IDENTIFY 0x0
+
+#define FPA_CONFIGSET 0x1
+#define FPA_CONFIGGET 0x2
+#define FPA_START_COUNT 0x3
+#define FPA_STOP_COUNT 0x4
+#define FPA_ATTACHAURA 0x5
+#define FPA_DETACHAURA 0x6
+#define FPA_SETAURALVL 0x7
+#define FPA_GETAURALVL 0x8
+
+#define FPA_COPROC 0x1
+
+/* fpa mbox struct */
+struct octeontx_mbox_fpa_cfg {
+ int aid;
+ uint64_t pool_cfg;
+ uint64_t pool_stack_base;
+ uint64_t pool_stack_end;
+ uint64_t aura_cfg;
+};
+
+struct __attribute__((__packed__)) gen_req {
+ uint32_t value;
+};
+
+struct __attribute__((__packed__)) idn_req {
+ uint8_t domain_id;
+};
+
+struct __attribute__((__packed__)) gen_resp {
+ uint16_t domain_id;
+ uint16_t vfid;
+};
+
+struct __attribute__((__packed__)) dcfg_resp {
+ uint8_t sso_count;
+ uint8_t ssow_count;
+ uint8_t fpa_count;
+ uint8_t pko_count;
+ uint8_t tim_count;
+ uint8_t net_port_count;
+ uint8_t virt_port_count;
+};
+
+#define FPA_MAX_POOL 32
+#define FPA_PF_PAGE_SZ 4096
+
+#define FPA_LN_SIZE 128
+#define FPA_ROUND_UP(x, size) \
+ ((((unsigned long)(x)) + size-1) & (~(size-1)))
+#define FPA_OBJSZ_2_CACHE_LINE(sz) (((sz) + RTE_CACHE_LINE_MASK) >> 7)
+#define FPA_CACHE_LINE_2_OBJSZ(sz) ((sz) << 7)
+
+#define POOL_ENA (0x1 << 0)
+#define POOL_DIS (0x0 << 0)
+#define POOL_SET_NAT_ALIGN (0x1 << 1)
+#define POOL_DIS_NAT_ALIGN (0x0 << 1)
+#define POOL_STYPE(x) (((x) & 0x1) << 2)
+#define POOL_LTYPE(x) (((x) & 0x3) << 3)
+#define POOL_BUF_OFFSET(x) (((x) & 0x7fffULL) << 16)
+#define POOL_BUF_SIZE(x) (((x) & 0x7ffULL) << 32)
+
struct fpavf_res {
void *pool_stack_base;
void *bar0;
@@ -67,6 +133,454 @@ struct octeontx_fpadev {
static struct octeontx_fpadev fpadev;
+/* lock is taken by caller */
+static int
+octeontx_fpa_gpool_alloc(unsigned int object_size)
+{
+ struct fpavf_res *res = NULL;
+ uint16_t gpool;
+ unsigned int sz128;
+
+ sz128 = FPA_OBJSZ_2_CACHE_LINE(object_size);
+
+ for (gpool = 0; gpool < FPA_VF_MAX; gpool++) {
+
+ /* Skip VF that is not mapped Or _inuse */
+ if ((fpadev.pool[gpool].bar0 == NULL) ||
+ (fpadev.pool[gpool].is_inuse == true))
+ continue;
+
+ res = &fpadev.pool[gpool];
+
+ RTE_ASSERT(res->domain_id != (uint16_t)~0);
+ RTE_ASSERT(res->vf_id != (uint16_t)~0);
+ RTE_ASSERT(res->stack_ln_ptr != 0);
+
+ if (res->sz128 == 0) {
+ res->sz128 = sz128;
+
+ fpavf_log_dbg("gpool %d blk_sz %d\n", gpool, sz128);
+ return gpool;
+ }
+ }
+
+ return -ENOSPC;
+}
+
+/* lock is taken by caller */
+static __rte_always_inline uintptr_t
+octeontx_fpa_gpool2handle(uint16_t gpool)
+{
+ struct fpavf_res *res = NULL;
+
+ RTE_ASSERT(gpool < FPA_VF_MAX);
+
+ res = &fpadev.pool[gpool];
+ if (unlikely(res == NULL))
+ return 0;
+
+ return (uintptr_t)res->bar0 | gpool;
+}
+
+static __rte_always_inline bool
+octeontx_fpa_handle_valid(uintptr_t handle)
+{
+ struct fpavf_res *res = NULL;
+ uint8_t gpool;
+ int i;
+ bool ret = false;
+
+ if (unlikely(!handle))
+ return ret;
+
+ /* get the gpool */
+ gpool = octeontx_fpa_bufpool_gpool(handle);
+
+ /* get the bar address */
+ handle &= ~(uint64_t)FPA_GPOOL_MASK;
+ for (i = 0; i < FPA_VF_MAX; i++) {
+ if ((uintptr_t)fpadev.pool[i].bar0 != handle)
+ continue;
+
+ /* validate gpool */
+ if (gpool != i)
+ return false;
+
+ res = &fpadev.pool[i];
+
+ if (res->sz128 == 0 || res->domain_id == (uint16_t)~0 ||
+ res->stack_ln_ptr == 0)
+ ret = false;
+ else
+ ret = true;
+ break;
+ }
+
+ return ret;
+}
+
+static int
+octeontx_fpapf_pool_setup(unsigned int gpool, unsigned int buf_size,
+ signed short buf_offset, unsigned int max_buf_count)
+{
+ void *memptr = NULL;
+ phys_addr_t phys_addr;
+ unsigned int memsz;
+ struct fpavf_res *fpa = NULL;
+ uint64_t reg;
+ struct octeontx_mbox_hdr hdr;
+ struct dcfg_resp resp;
+ struct octeontx_mbox_fpa_cfg cfg;
+ int ret = -1;
+
+ fpa = &fpadev.pool[gpool];
+ memsz = FPA_ROUND_UP(max_buf_count / fpa->stack_ln_ptr, FPA_LN_SIZE) *
+ FPA_LN_SIZE;
+
+ /* Round-up to page size */
+ memsz = (memsz + FPA_PF_PAGE_SZ - 1) & ~(uintptr_t)(FPA_PF_PAGE_SZ-1);
+ memptr = rte_malloc(NULL, memsz, RTE_CACHE_LINE_SIZE);
+ if (memptr == NULL) {
+ ret = -ENOMEM;
+ goto err;
+ }
+
+ /* Configure stack */
+ fpa->pool_stack_base = memptr;
+ phys_addr = rte_malloc_virt2phy(memptr);
+
+ buf_size /= FPA_LN_SIZE;
+
+ /* POOL setup */
+ hdr.coproc = FPA_COPROC;
+ hdr.msg = FPA_CONFIGSET;
+ hdr.vfid = fpa->vf_id;
+ hdr.res_code = 0;
+
+ buf_offset /= FPA_LN_SIZE;
+ reg = POOL_BUF_SIZE(buf_size) | POOL_BUF_OFFSET(buf_offset) |
+ POOL_LTYPE(0x2) | POOL_STYPE(0) | POOL_SET_NAT_ALIGN |
+ POOL_ENA;
+
+ cfg.aid = 0;
+ cfg.pool_cfg = reg;
+ cfg.pool_stack_base = phys_addr;
+ cfg.pool_stack_end = phys_addr + memsz;
+ cfg.aura_cfg = (1 << 9);
+
+ ret = octeontx_ssovf_mbox_send(&hdr, &cfg,
+ sizeof(struct octeontx_mbox_fpa_cfg),
+ &resp, sizeof(resp));
+ if (ret < 0) {
+ ret = -EACCES;
+ goto err;
+ }
+
+ fpavf_log_dbg(" vfid %d gpool %d aid %d pool_cfg 0x%x pool_stack_base %" PRIx64 " pool_stack_end %" PRIx64" aura_cfg %" PRIx64 "\n",
+ fpa->vf_id, gpool, cfg.aid, (unsigned int)cfg.pool_cfg,
+ cfg.pool_stack_base, cfg.pool_stack_end, cfg.aura_cfg);
+
+ /* Now pool is in_use */
+ fpa->is_inuse = true;
+
+err:
+ if (ret < 0)
+ rte_free(memptr);
+
+ return ret;
+}
+
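As a worked example of the stack sizing above (hypothetical values): with
max_buf_count = 4096 and stack_ln_ptr = 32, memsz = FPA_ROUND_UP(4096 / 32,
128) * 128 = 16384 bytes, which is already a multiple of the 4 KB PF page
size, so the page round-up leaves it unchanged.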
+static int
+octeontx_fpapf_pool_destroy(unsigned int gpool_index)
+{
+ struct octeontx_mbox_hdr hdr;
+ struct dcfg_resp resp;
+ struct octeontx_mbox_fpa_cfg cfg;
+ struct fpavf_res *fpa = NULL;
+ int ret = -1;
+
+ fpa = &fpadev.pool[gpool_index];
+
+ hdr.coproc = FPA_COPROC;
+ hdr.msg = FPA_CONFIGSET;
+ hdr.vfid = fpa->vf_id;
+ hdr.res_code = 0;
+
+ /* reset and free the pool */
+ cfg.aid = 0;
+ cfg.pool_cfg = 0;
+ cfg.pool_stack_base = 0;
+ cfg.pool_stack_end = 0;
+ cfg.aura_cfg = 0;
+
+ ret = octeontx_ssovf_mbox_send(&hdr, &cfg,
+ sizeof(struct octeontx_mbox_fpa_cfg),
+ &resp, sizeof(resp));
+ if (ret < 0) {
+ ret = -EACCES;
+ goto err;
+ }
+
+ ret = 0;
+err:
+ /* in any case, free the pool stack memory */
+ rte_free(fpa->pool_stack_base);
+ fpa->pool_stack_base = NULL;
+ return ret;
+}
+
+static int
+octeontx_fpapf_aura_attach(unsigned int gpool_index)
+{
+ struct octeontx_mbox_hdr hdr;
+ struct dcfg_resp resp;
+ struct octeontx_mbox_fpa_cfg cfg;
+ int ret = 0;
+
+ if (gpool_index >= FPA_MAX_POOL) {
+ ret = -EINVAL;
+ goto err;
+ }
+ hdr.coproc = FPA_COPROC;
+ hdr.msg = FPA_ATTACHAURA;
+ hdr.vfid = gpool_index;
+ hdr.res_code = 0;
+ memset(&cfg, 0x0, sizeof(struct octeontx_mbox_fpa_cfg));
+ cfg.aid = gpool_index; /* gpool is gaura */
+
+ ret = octeontx_ssovf_mbox_send(&hdr, &cfg,
+ sizeof(struct octeontx_mbox_fpa_cfg),
+ &resp, sizeof(resp));
+ if (ret < 0) {
+ fpavf_log_err("Could not attach fpa ");
+ fpavf_log_err("aura %d to pool %d. Err=%d. FuncErr=%d\n",
+ gpool_index, gpool_index, ret, hdr.res_code);
+ ret = -EACCES;
+ goto err;
+ }
+err:
+ return ret;
+}
+
+static int
+octeontx_fpapf_aura_detach(unsigned int gpool_index)
+{
+ struct octeontx_mbox_fpa_cfg cfg = {0};
+ struct octeontx_mbox_hdr hdr = {0};
+ int ret = 0;
+
+ if (gpool_index >= FPA_MAX_POOL) {
+ ret = -EINVAL;
+ goto err;
+ }
+
+ cfg.aid = gpool_index; /* gpool is gaura */
+ hdr.coproc = FPA_COPROC;
+ hdr.msg = FPA_DETACHAURA;
+ hdr.vfid = gpool_index;
+ ret = octeontx_ssovf_mbox_send(&hdr, &cfg, sizeof(cfg), NULL, 0);
+ if (ret < 0) {
+ fpavf_log_err("Couldn't detach FPA aura %d Err=%d FuncErr=%d\n",
+ gpool_index, ret, hdr.res_code);
+ ret = -EINVAL;
+ }
+
+err:
+ return ret;
+}
+
+static int
+octeontx_fpavf_pool_setup(uintptr_t handle, unsigned long memsz,
+ void *memva, uint16_t gpool)
+{
+ uint64_t va_end;
+
+ if (unlikely(!handle))
+ return -ENODEV;
+
+ va_end = (uintptr_t)memva + memsz;
+ va_end &= ~RTE_CACHE_LINE_MASK;
+
+ /* VHPOOL setup */
+ fpavf_write64((uintptr_t)memva,
+ (void *)((uintptr_t)handle +
+ FPA_VF_VHPOOL_START_ADDR(gpool)));
+ fpavf_write64(va_end,
+ (void *)((uintptr_t)handle +
+ FPA_VF_VHPOOL_END_ADDR(gpool)));
+ return 0;
+}
+
+static int
+octeontx_fpapf_start_count(uint16_t gpool_index)
+{
+ int ret = 0;
+ struct octeontx_mbox_hdr hdr = {0};
+
+ if (gpool_index >= FPA_MAX_POOL) {
+ ret = -EINVAL;
+ goto err;
+ }
+
+ hdr.coproc = FPA_COPROC;
+ hdr.msg = FPA_START_COUNT;
+ hdr.vfid = gpool_index;
+ ret = octeontx_ssovf_mbox_send(&hdr, NULL, 0, NULL, 0);
+ if (ret < 0) {
+ fpavf_log_err("Could not start buffer counting for ");
+ fpavf_log_err("FPA pool %d. Err=%d. FuncErr=%d\n",
+ gpool_index, ret, hdr.res_code);
+ ret = -EINVAL;
+ goto err;
+ }
+
+err:
+ return ret;
+}
+
+static __rte_always_inline int
+octeontx_fpavf_free(unsigned int gpool)
+{
+ int ret = 0;
+
+ if (gpool >= FPA_MAX_POOL) {
+ ret = -EINVAL;
+ goto err;
+ }
+
+ /* Pool is free */
+ fpadev.pool[gpool].is_inuse = false;
+
+err:
+ return ret;
+}
+
+static __rte_always_inline int
+octeontx_gpool_free(uint16_t gpool)
+{
+ if (fpadev.pool[gpool].sz128 != 0) {
+ fpadev.pool[gpool].sz128 = 0;
+ return 0;
+ }
+ return -EINVAL;
+}
+
+/*
+ * Return buffer size for a given pool
+ */
+int
+octeontx_fpa_bufpool_block_size(uintptr_t handle)
+{
+ struct fpavf_res *res = NULL;
+ uint8_t gpool;
+
+ if (unlikely(!octeontx_fpa_handle_valid(handle)))
+ return -EINVAL;
+
+ /* get the gpool */
+ gpool = octeontx_fpa_bufpool_gpool(handle);
+ res = &fpadev.pool[gpool];
+ return FPA_CACHE_LINE_2_OBJSZ(res->sz128);
+}
+
+uintptr_t
+octeontx_fpa_bufpool_create(unsigned int object_size, unsigned int object_count,
+ unsigned int buf_offset, char **va_start,
+ int node_id)
+{
+ unsigned int gpool;
+ void *memva;
+ unsigned long memsz;
+ uintptr_t gpool_handle;
+ uintptr_t pool_bar;
+ int res;
+
+ RTE_SET_USED(node_id);
+ FPAVF_STATIC_ASSERTION(sizeof(struct rte_mbuf) <=
+ OCTEONTX_FPAVF_BUF_OFFSET);
+
+ if (unlikely(*va_start == NULL))
+ goto error_end;
+
+ object_size = RTE_CACHE_LINE_ROUNDUP(object_size);
+ if (object_size > FPA_MAX_OBJ_SIZE) {
+ errno = EINVAL;
+ goto error_end;
+ }
+
+ rte_spinlock_lock(&fpadev.lock);
+ res = octeontx_fpa_gpool_alloc(object_size);
+
+ /* Bail if failed */
+ if (unlikely(res < 0)) {
+ errno = res;
+ goto error_unlock;
+ }
+
+ /* get fpavf */
+ gpool = res;
+
+ /* get pool handle */
+ gpool_handle = octeontx_fpa_gpool2handle(gpool);
+ if (!octeontx_fpa_handle_valid(gpool_handle)) {
+ errno = ENOSPC;
+ goto error_gpool_free;
+ }
+
+ /* Get pool bar address from handle */
+ pool_bar = gpool_handle & ~(uint64_t)FPA_GPOOL_MASK;
+
+ res = octeontx_fpapf_pool_setup(gpool, object_size, buf_offset,
+ object_count);
+ if (res < 0) {
+ errno = res;
+ goto error_gpool_free;
+ }
+
+ /* populate AURA fields */
+ res = octeontx_fpapf_aura_attach(gpool);
+ if (res < 0) {
+ errno = res;
+ goto error_pool_destroy;
+ }
+
+ /* vf pool setup */
+ memsz = object_size * object_count;
+ memva = *va_start;
+ res = octeontx_fpavf_pool_setup(pool_bar, memsz, memva, gpool);
+ if (res < 0) {
+ errno = res;
+ goto error_gaura_detach;
+ }
+
+ /* Release lock */
+ rte_spinlock_unlock(&fpadev.lock);
+
+ /* populate AURA registers */
+ fpavf_write64(object_count, (void *)((uintptr_t)pool_bar +
+ FPA_VF_VHAURA_CNT(gpool)));
+ fpavf_write64(object_count, (void *)((uintptr_t)pool_bar +
+ FPA_VF_VHAURA_CNT_LIMIT(gpool)));
+ fpavf_write64(object_count + 1, (void *)((uintptr_t)pool_bar +
+ FPA_VF_VHAURA_CNT_THRESHOLD(gpool)));
+
+ octeontx_fpapf_start_count(gpool);
+
+ return gpool_handle;
+
+error_gaura_detach:
+ (void) octeontx_fpapf_aura_detach(gpool);
+error_pool_destroy:
+ octeontx_fpavf_free(gpool);
+ octeontx_fpapf_pool_destroy(gpool);
+error_gpool_free:
+ octeontx_gpool_free(gpool);
+error_unlock:
+ rte_spinlock_unlock(&fpadev.lock);
+error_end:
+ return (uintptr_t)NULL;
+}
+
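For reference, a hypothetical caller would pair the create and block-size
helpers roughly as follows (illustrative only; 'region_base' stands for a
pre-mapped, physically contiguous area and is not part of the patch):

	char *va = region_base;  /* backing memory for the objects */
	int blk_sz = 0;
	uintptr_t handle = octeontx_fpa_bufpool_create(2048, 4096,
					OCTEONTX_FPAVF_BUF_OFFSET, &va, 0);
	if (handle != 0)  /* on failure the function returns 0 and sets errno */
		blk_sz = octeontx_fpa_bufpool_block_size(handle); /* 2048 here */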
static void
octeontx_fpavf_setup(void)
{
diff --git a/drivers/mempool/octeontx/octeontx_fpavf.h b/drivers/mempool/octeontx/octeontx_fpavf.h
index c43b1a7d2..23a458363 100644
--- a/drivers/mempool/octeontx/octeontx_fpavf.h
+++ b/drivers/mempool/octeontx/octeontx_fpavf.h
@@ -58,6 +58,7 @@
#define PCI_DEVICE_ID_OCTEONTX_FPA_VF 0xA053
#define FPA_VF_MAX 32
+#define FPA_GPOOL_MASK (FPA_VF_MAX-1)
/* FPA VF register offsets */
#define FPA_VF_INT(x) (0x200ULL | ((x) << 22))
@@ -88,6 +89,10 @@
#define FPA_VF0_APERTURE_SHIFT 22
#define FPA_AURA_SET_SIZE 16
+#define FPA_MAX_OBJ_SIZE (128 * 1024)
+#define OCTEONTX_FPAVF_BUF_OFFSET 128
+
+#define FPAVF_STATIC_ASSERTION(s) _Static_assert(s, #s)
/*
* In Cavium OcteonTX SoC, all accesses to the device registers are
@@ -126,4 +131,16 @@ do { \
} while (0)
#endif
+uintptr_t
+octeontx_fpa_bufpool_create(unsigned int object_size, unsigned int object_count,
+ unsigned int buf_offset, char **va_start,
+ int node);
+int
+octeontx_fpa_bufpool_block_size(uintptr_t handle);
+
+static __rte_always_inline uint8_t
+octeontx_fpa_bufpool_gpool(uintptr_t handle)
+{
+ return (uint8_t)handle & FPA_GPOOL_MASK;
+}
#endif /* __OCTEONTX_FPAVF_H__ */
diff --git a/drivers/mempool/octeontx/rte_mempool_octeontx.c b/drivers/mempool/octeontx/rte_mempool_octeontx.c
new file mode 100644
index 000000000..d930a81f9
--- /dev/null
+++ b/drivers/mempool/octeontx/rte_mempool_octeontx.c
@@ -0,0 +1,88 @@
+/*
+ * BSD LICENSE
+ *
+ * Copyright (C) 2017 Cavium Inc. All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ *
+ * * Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ * * Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in
+ * the documentation and/or other materials provided with the
+ * distribution.
+ * * Neither the name of Cavium, Inc nor the names of its
+ * contributors may be used to endorse or promote products derived
+ * from this software without specific prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+#include <stdio.h>
+#include <rte_mempool.h>
+#include <rte_malloc.h>
+#include <rte_mbuf.h>
+
+#include "octeontx_fpavf.h"
+
+static int
+octeontx_fpavf_alloc(struct rte_mempool *mp)
+{
+ uintptr_t pool;
+ uint32_t memseg_count = mp->size;
+ uint32_t object_size;
+ uintptr_t va_start;
+ int rc = 0;
+
+ /* virtual hugepage mapped addr */
+ va_start = ~(uint64_t)0;
+
+ object_size = mp->elt_size + mp->header_size + mp->trailer_size;
+
+ pool = octeontx_fpa_bufpool_create(object_size, memseg_count,
+ OCTEONTX_FPAVF_BUF_OFFSET,
+ (char **)&va_start,
+ mp->socket_id);
+ rc = octeontx_fpa_bufpool_block_size(pool);
+ if (rc < 0)
+ goto _end;
+
+ if ((uint32_t)rc != object_size)
+ fpavf_log_err("buffer size mismatch: %d instead of %u\n",
+ rc, object_size);
+
+ fpavf_log_info("Pool created %p with .. ", (void *)pool);
+ fpavf_log_info("obj_sz %d, cnt %d\n", object_size, memseg_count);
+
+ /* assign pool handle to mempool */
+ mp->pool_id = (uint64_t)pool;
+
+ return 0;
+
+_end:
+ return rc;
+}
+
+static struct rte_mempool_ops octeontx_fpavf_ops = {
+ .name = "octeontx_fpavf",
+ .alloc = octeontx_fpavf_alloc,
+ .free = NULL,
+ .enqueue = NULL,
+ .dequeue = NULL,
+ .get_count = NULL,
+ .get_capabilities = NULL,
+ .register_memory_area = NULL,
+};
+
+MEMPOOL_REGISTER_OPS(octeontx_fpavf_ops);
--
2.14.1
^ permalink raw reply [flat|nested] 78+ messages in thread
* [dpdk-dev] [PATCH v3 05/10] mempool/octeontx: add support for free
2017-10-08 12:40 ` [dpdk-dev] [PATCH v3 " Santosh Shukla
` (3 preceding siblings ...)
2017-10-08 12:40 ` [dpdk-dev] [PATCH v3 04/10] mempool/octeontx: add support for alloc Santosh Shukla
@ 2017-10-08 12:40 ` Santosh Shukla
2017-10-08 12:40 ` [dpdk-dev] [PATCH v3 06/10] mempool/octeontx: add support for enq and deq Santosh Shukla
` (5 subsequent siblings)
10 siblings, 0 replies; 78+ messages in thread
From: Santosh Shukla @ 2017-10-08 12:40 UTC (permalink / raw)
To: olivier.matz, dev; +Cc: thomas, jerin.jacob, hemant.agrawal, Santosh Shukla
Upon a pool free request from the application, the OcteonTX FPA free
path does the following:
- uses the mbox to reset the fpapf pool setup.
- frees the fpavf resources.
Signed-off-by: Santosh Shukla <santosh.shukla@caviumnetworks.com>
Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
---
drivers/mempool/octeontx/octeontx_fpavf.c | 111 ++++++++++++++++++++++++
drivers/mempool/octeontx/octeontx_fpavf.h | 2 +
drivers/mempool/octeontx/rte_mempool_octeontx.c | 12 ++-
3 files changed, 124 insertions(+), 1 deletion(-)
diff --git a/drivers/mempool/octeontx/octeontx_fpavf.c b/drivers/mempool/octeontx/octeontx_fpavf.c
index c0c9d8325..44253b09e 100644
--- a/drivers/mempool/octeontx/octeontx_fpavf.c
+++ b/drivers/mempool/octeontx/octeontx_fpavf.c
@@ -581,6 +581,117 @@ octeontx_fpa_bufpool_create(unsigned int object_size, unsigned int object_count,
return (uintptr_t)NULL;
}
+/*
+ * Destroy a buffer pool.
+ */
+int
+octeontx_fpa_bufpool_destroy(uintptr_t handle, int node_id)
+{
+ void **node, **curr, *head = NULL;
+ uint64_t sz;
+ uint64_t cnt, avail;
+ uint8_t gpool;
+ uintptr_t pool_bar;
+ int ret;
+
+ RTE_SET_USED(node_id);
+
+ /* Wait for all outstanding writes to be committed */
+ rte_smp_wmb();
+
+ if (unlikely(!octeontx_fpa_handle_valid(handle)))
+ return -EINVAL;
+
+ /* get the pool */
+ gpool = octeontx_fpa_bufpool_gpool(handle);
+
+ /* Get pool bar address from handle */
+ pool_bar = handle & ~(uint64_t)FPA_GPOOL_MASK;
+
+ /* Check for no outstanding buffers */
+ cnt = fpavf_read64((void *)((uintptr_t)pool_bar +
+ FPA_VF_VHAURA_CNT(gpool)));
+ if (cnt) {
+ fpavf_log_dbg("buffers still exist in pool, cnt %" PRIu64 "\n", cnt);
+ return -EBUSY;
+ }
+
+ rte_spinlock_lock(&fpadev.lock);
+
+ avail = fpavf_read64((void *)((uintptr_t)pool_bar +
+ FPA_VF_VHPOOL_AVAILABLE(gpool)));
+
+ /* Prepare to empty the entire POOL */
+ fpavf_write64(avail, (void *)((uintptr_t)pool_bar +
+ FPA_VF_VHAURA_CNT_LIMIT(gpool)));
+ fpavf_write64(avail + 1, (void *)((uintptr_t)pool_bar +
+ FPA_VF_VHAURA_CNT_THRESHOLD(gpool)));
+
+ /* Empty the pool */
+ /* Invalidate the POOL */
+ octeontx_gpool_free(gpool);
+
+ /* Process all buffers in the pool */
+ while (avail--) {
+
+ /* Yank a buffer from the pool */
+ node = (void *)(uintptr_t)
+ fpavf_read64((void *)
+ (pool_bar + FPA_VF_VHAURA_OP_ALLOC(gpool)));
+
+ if (node == NULL) {
+ fpavf_log_err("GAURA[%u] missing %" PRIx64 " buf\n",
+ gpool, avail);
+ break;
+ }
+
+ /* Insert it into an ordered linked list */
+ for (curr = &head; curr[0] != NULL; curr = curr[0]) {
+ if ((uintptr_t)node <= (uintptr_t)curr[0])
+ break;
+ }
+ node[0] = curr[0];
+ curr[0] = node;
+ }
+
+ /* Verify the linked list to be a perfect series */
+ sz = octeontx_fpa_bufpool_block_size(handle) << 7;
+ for (curr = head; curr != NULL && curr[0] != NULL;
+ curr = curr[0]) {
+ if (curr == curr[0] ||
+ ((uintptr_t)curr != ((uintptr_t)curr[0] - sz))) {
+ fpavf_log_err("POOL# %u buf sequence err (%p vs. %p)\n",
+ gpool, curr, curr[0]);
+ }
+ }
+
+ /* Disable pool operation */
+ fpavf_write64(~0ul, (void *)((uintptr_t)pool_bar +
+ FPA_VF_VHPOOL_START_ADDR(gpool)));
+ fpavf_write64(~0ul, (void *)((uintptr_t)pool_bar +
+ FPA_VF_VHPOOL_END_ADDR(gpool)));
+
+ (void)octeontx_fpapf_pool_destroy(gpool);
+
+ /* Deactivate the AURA */
+ fpavf_write64(0, (void *)((uintptr_t)pool_bar +
+ FPA_VF_VHAURA_CNT_LIMIT(gpool)));
+ fpavf_write64(0, (void *)((uintptr_t)pool_bar +
+ FPA_VF_VHAURA_CNT_THRESHOLD(gpool)));
+
+ ret = octeontx_fpapf_aura_detach(gpool);
+ if (ret) {
+ fpavf_log_err("Failed to detach gaura %u. error code=%d\n",
+ gpool, ret);
+ }
+
+ /* Free VF */
+ (void)octeontx_fpavf_free(gpool);
+
+ rte_spinlock_unlock(&fpadev.lock);
+ return 0;
+}
+
static void
octeontx_fpavf_setup(void)
{
diff --git a/drivers/mempool/octeontx/octeontx_fpavf.h b/drivers/mempool/octeontx/octeontx_fpavf.h
index 23a458363..28440e810 100644
--- a/drivers/mempool/octeontx/octeontx_fpavf.h
+++ b/drivers/mempool/octeontx/octeontx_fpavf.h
@@ -136,6 +136,8 @@ octeontx_fpa_bufpool_create(unsigned int object_size, unsigned int object_count,
unsigned int buf_offset, char **va_start,
int node);
int
+octeontx_fpa_bufpool_destroy(uintptr_t handle, int node);
+int
octeontx_fpa_bufpool_block_size(uintptr_t handle);
static __rte_always_inline uint8_t
diff --git a/drivers/mempool/octeontx/rte_mempool_octeontx.c b/drivers/mempool/octeontx/rte_mempool_octeontx.c
index d930a81f9..6ac4b7dc0 100644
--- a/drivers/mempool/octeontx/rte_mempool_octeontx.c
+++ b/drivers/mempool/octeontx/rte_mempool_octeontx.c
@@ -74,10 +74,20 @@ octeontx_fpavf_alloc(struct rte_mempool *mp)
return rc;
}
+static void
+octeontx_fpavf_free(struct rte_mempool *mp)
+{
+ uintptr_t pool;
+
+ pool = (uintptr_t)mp->pool_id;
+
+ octeontx_fpa_bufpool_destroy(pool, mp->socket_id);
+}
+
static struct rte_mempool_ops octeontx_fpavf_ops = {
.name = "octeontx_fpavf",
.alloc = octeontx_fpavf_alloc,
- .free = NULL,
+ .free = octeontx_fpavf_free,
.enqueue = NULL,
.dequeue = NULL,
.get_count = NULL,
--
2.14.1
^ permalink raw reply [flat|nested] 78+ messages in thread
* [dpdk-dev] [PATCH v3 06/10] mempool/octeontx: add support for enq and deq
2017-10-08 12:40 ` [dpdk-dev] [PATCH v3 " Santosh Shukla
` (4 preceding siblings ...)
2017-10-08 12:40 ` [dpdk-dev] [PATCH v3 05/10] mempool/octeontx: add support for free Santosh Shukla
@ 2017-10-08 12:40 ` Santosh Shukla
2017-10-08 12:40 ` [dpdk-dev] [PATCH v3 07/10] mempool/octeontx: add support for get count Santosh Shukla
` (4 subsequent siblings)
10 siblings, 0 replies; 78+ messages in thread
From: Santosh Shukla @ 2017-10-08 12:40 UTC (permalink / raw)
To: olivier.matz, dev; +Cc: thomas, jerin.jacob, hemant.agrawal, Santosh Shukla
Signed-off-by: Santosh Shukla <santosh.shukla@caviumnetworks.com>
Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
---
drivers/mempool/octeontx/Makefile | 13 +++++
drivers/mempool/octeontx/rte_mempool_octeontx.c | 69 ++++++++++++++++++++++++-
2 files changed, 80 insertions(+), 2 deletions(-)
diff --git a/drivers/mempool/octeontx/Makefile b/drivers/mempool/octeontx/Makefile
index 9c3389608..0b2043842 100644
--- a/drivers/mempool/octeontx/Makefile
+++ b/drivers/mempool/octeontx/Makefile
@@ -53,6 +53,19 @@ LIBABIVER := 1
SRCS-$(CONFIG_RTE_LIBRTE_OCTEONTX_MEMPOOL) += octeontx_fpavf.c
SRCS-$(CONFIG_RTE_LIBRTE_OCTEONTX_MEMPOOL) += rte_mempool_octeontx.c
+ifeq ($(CONFIG_RTE_TOOLCHAIN_GCC),y)
+CFLAGS_rte_mempool_octeontx.o += -fno-prefetch-loop-arrays
+
+ifeq ($(shell test $(GCC_VERSION) -ge 46 && echo 1), 1)
+CFLAGS_rte_mempool_octeontx.o += -Ofast
+else
+CFLAGS_rte_mempool_octeontx.o += -O3 -ffast-math
+endif
+
+else
+CFLAGS_rte_mempool_octeontx.o += -Ofast
+endif
+
# this lib depends upon:
DEPDIRS-$(CONFIG_RTE_LIBRTE_OCTEONTX_MEMPOOL) += lib/librte_mbuf
diff --git a/drivers/mempool/octeontx/rte_mempool_octeontx.c b/drivers/mempool/octeontx/rte_mempool_octeontx.c
index 6ac4b7dc0..10264a6bf 100644
--- a/drivers/mempool/octeontx/rte_mempool_octeontx.c
+++ b/drivers/mempool/octeontx/rte_mempool_octeontx.c
@@ -84,12 +84,77 @@ octeontx_fpavf_free(struct rte_mempool *mp)
octeontx_fpa_bufpool_destroy(pool, mp->socket_id);
}
+static __rte_always_inline void *
+octeontx_fpa_bufpool_alloc(uintptr_t handle)
+{
+ return (void *)(uintptr_t)fpavf_read64((void *)(handle +
+ FPA_VF_VHAURA_OP_ALLOC(0)));
+}
+
+static __rte_always_inline void
+octeontx_fpa_bufpool_free(uintptr_t handle, void *buf)
+{
+ uint64_t free_addr = FPA_VF_FREE_ADDRS_S(FPA_VF_VHAURA_OP_FREE(0),
+ 0 /* DWB */, 1 /* FABS */);
+
+ fpavf_write64((uintptr_t)buf, (void *)(uintptr_t)(handle + free_addr));
+}
+
+static int
+octeontx_fpavf_enqueue(struct rte_mempool *mp, void * const *obj_table,
+ unsigned int n)
+{
+ uintptr_t pool;
+ unsigned int index;
+
+ pool = (uintptr_t)mp->pool_id;
+ /* Get pool bar address from handle */
+ pool &= ~(uint64_t)FPA_GPOOL_MASK;
+ for (index = 0; index < n; index++, obj_table++)
+ octeontx_fpa_bufpool_free(pool, *obj_table);
+
+ return 0;
+}
+
+static int
+octeontx_fpavf_dequeue(struct rte_mempool *mp, void **obj_table,
+ unsigned int n)
+{
+ unsigned int index;
+ uintptr_t pool;
+ void *obj;
+
+ pool = (uintptr_t)mp->pool_id;
+ /* Get pool bar address from handle */
+ pool &= ~(uint64_t)FPA_GPOOL_MASK;
+ for (index = 0; index < n; index++, obj_table++) {
+ obj = octeontx_fpa_bufpool_alloc(pool);
+ if (obj == NULL) {
+ /*
+ * Failed to allocate the requested number of objects
+ * from the pool. The current pool implementation requires
+ * completing the entire request or returning an error
+ * otherwise.
+ * Free already allocated buffers to the pool.
+ */
+ for (; index > 0; index--) {
+ obj_table--;
+ octeontx_fpa_bufpool_free(pool, *obj_table);
+ }
+ return -ENOMEM;
+ }
+ *obj_table = obj;
+ }
+
+ return 0;
+}
+
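A quick sketch of how an application would exercise these ops (illustrative
only; the pool name and sizes are assumptions, not part of the patch):

	struct rte_mempool *mp = rte_mempool_create_empty("pkt_pool", 8192,
						2048, 256, 0, SOCKET_ID_ANY, 0);
	if (mp != NULL) {
		rte_mempool_set_ops_byname(mp, "octeontx_fpavf", NULL);
		/* populating invokes the driver ops registered below */
		rte_mempool_populate_default(mp);
	}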
static struct rte_mempool_ops octeontx_fpavf_ops = {
.name = "octeontx_fpavf",
.alloc = octeontx_fpavf_alloc,
.free = octeontx_fpavf_free,
- .enqueue = NULL,
- .dequeue = NULL,
+ .enqueue = octeontx_fpavf_enqueue,
+ .dequeue = octeontx_fpavf_dequeue,
.get_count = NULL,
.get_capabilities = NULL,
.register_memory_area = NULL,
--
2.14.1
^ permalink raw reply [flat|nested] 78+ messages in thread
* [dpdk-dev] [PATCH v3 07/10] mempool/octeontx: add support for get count
2017-10-08 12:40 ` [dpdk-dev] [PATCH v3 " Santosh Shukla
` (5 preceding siblings ...)
2017-10-08 12:40 ` [dpdk-dev] [PATCH v3 06/10] mempool/octeontx: add support for enq and deq Santosh Shukla
@ 2017-10-08 12:40 ` Santosh Shukla
2017-10-08 12:40 ` [dpdk-dev] [PATCH v3 08/10] mempool/octeontx: add support for get capability Santosh Shukla
` (3 subsequent siblings)
10 siblings, 0 replies; 78+ messages in thread
From: Santosh Shukla @ 2017-10-08 12:40 UTC (permalink / raw)
To: olivier.matz, dev; +Cc: thomas, jerin.jacob, hemant.agrawal, Santosh Shukla
Signed-off-by: Santosh Shukla <santosh.shukla@caviumnetworks.com>
Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
---
drivers/mempool/octeontx/octeontx_fpavf.c | 27 +++++++++++++++++++++++++
drivers/mempool/octeontx/octeontx_fpavf.h | 2 ++
drivers/mempool/octeontx/rte_mempool_octeontx.c | 12 ++++++++++-
3 files changed, 40 insertions(+), 1 deletion(-)
diff --git a/drivers/mempool/octeontx/octeontx_fpavf.c b/drivers/mempool/octeontx/octeontx_fpavf.c
index 44253b09e..e4a0a7f42 100644
--- a/drivers/mempool/octeontx/octeontx_fpavf.c
+++ b/drivers/mempool/octeontx/octeontx_fpavf.c
@@ -483,6 +483,33 @@ octeontx_fpa_bufpool_block_size(uintptr_t handle)
return FPA_CACHE_LINE_2_OBJSZ(res->sz128);
}
+int
+octeontx_fpa_bufpool_free_count(uintptr_t handle)
+{
+ uint64_t cnt, limit, avail;
+ uint8_t gpool;
+ uintptr_t pool_bar;
+
+ if (unlikely(!octeontx_fpa_handle_valid(handle)))
+ return -EINVAL;
+
+ /* get the gpool */
+ gpool = octeontx_fpa_bufpool_gpool(handle);
+
+ /* Get pool bar address from handle */
+ pool_bar = handle & ~(uint64_t)FPA_GPOOL_MASK;
+
+ cnt = fpavf_read64((void *)((uintptr_t)pool_bar +
+ FPA_VF_VHAURA_CNT(gpool)));
+ limit = fpavf_read64((void *)((uintptr_t)pool_bar +
+ FPA_VF_VHAURA_CNT_LIMIT(gpool)));
+
+ avail = fpavf_read64((void *)((uintptr_t)pool_bar +
+ FPA_VF_VHPOOL_AVAILABLE(gpool)));
+
+ return RTE_MIN(avail, (limit - cnt));
+}
+
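As a worked example (hypothetical register values): with limit = 4096,
cnt = 1000 and avail = 3500, the function above returns
RTE_MIN(3500, 4096 - 1000) = 3096 free objects.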
uintptr_t
octeontx_fpa_bufpool_create(unsigned int object_size, unsigned int object_count,
unsigned int buf_offset, char **va_start,
diff --git a/drivers/mempool/octeontx/octeontx_fpavf.h b/drivers/mempool/octeontx/octeontx_fpavf.h
index 28440e810..471fe711a 100644
--- a/drivers/mempool/octeontx/octeontx_fpavf.h
+++ b/drivers/mempool/octeontx/octeontx_fpavf.h
@@ -139,6 +139,8 @@ int
octeontx_fpa_bufpool_destroy(uintptr_t handle, int node);
int
octeontx_fpa_bufpool_block_size(uintptr_t handle);
+int
+octeontx_fpa_bufpool_free_count(uintptr_t handle);
static __rte_always_inline uint8_t
octeontx_fpa_bufpool_gpool(uintptr_t handle)
diff --git a/drivers/mempool/octeontx/rte_mempool_octeontx.c b/drivers/mempool/octeontx/rte_mempool_octeontx.c
index 10264a6bf..42d93b833 100644
--- a/drivers/mempool/octeontx/rte_mempool_octeontx.c
+++ b/drivers/mempool/octeontx/rte_mempool_octeontx.c
@@ -149,13 +149,23 @@ octeontx_fpavf_dequeue(struct rte_mempool *mp, void **obj_table,
return 0;
}
+static unsigned int
+octeontx_fpavf_get_count(const struct rte_mempool *mp)
+{
+ uintptr_t pool;
+
+ pool = (uintptr_t)mp->pool_id;
+
+ return octeontx_fpa_bufpool_free_count(pool);
+}
+
static struct rte_mempool_ops octeontx_fpavf_ops = {
.name = "octeontx_fpavf",
.alloc = octeontx_fpavf_alloc,
.free = octeontx_fpavf_free,
.enqueue = octeontx_fpavf_enqueue,
.dequeue = octeontx_fpavf_dequeue,
- .get_count = NULL,
+ .get_count = octeontx_fpavf_get_count,
.get_capabilities = NULL,
.register_memory_area = NULL,
};
--
2.14.1
^ permalink raw reply [flat|nested] 78+ messages in thread
* [dpdk-dev] [PATCH v3 08/10] mempool/octeontx: add support for get capability
2017-10-08 12:40 ` [dpdk-dev] [PATCH v3 " Santosh Shukla
` (6 preceding siblings ...)
2017-10-08 12:40 ` [dpdk-dev] [PATCH v3 07/10] mempool/octeontx: add support for get count Santosh Shukla
@ 2017-10-08 12:40 ` Santosh Shukla
2017-10-08 12:40 ` [dpdk-dev] [PATCH v3 09/10] mempool/octeontx: add support for memory area ops Santosh Shukla
` (2 subsequent siblings)
10 siblings, 0 replies; 78+ messages in thread
From: Santosh Shukla @ 2017-10-08 12:40 UTC (permalink / raw)
To: olivier.matz, dev; +Cc: thomas, jerin.jacob, hemant.agrawal, Santosh Shukla
Signed-off-by: Santosh Shukla <santosh.shukla@caviumnetworks.com>
Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
---
drivers/mempool/octeontx/rte_mempool_octeontx.c | 12 +++++++++++-
1 file changed, 11 insertions(+), 1 deletion(-)
diff --git a/drivers/mempool/octeontx/rte_mempool_octeontx.c b/drivers/mempool/octeontx/rte_mempool_octeontx.c
index 42d93b833..09df114c0 100644
--- a/drivers/mempool/octeontx/rte_mempool_octeontx.c
+++ b/drivers/mempool/octeontx/rte_mempool_octeontx.c
@@ -159,6 +159,16 @@ octeontx_fpavf_get_count(const struct rte_mempool *mp)
return octeontx_fpa_bufpool_free_count(pool);
}
+static int
+octeontx_fpavf_get_capabilities(const struct rte_mempool *mp,
+ unsigned int *flags)
+{
+ RTE_SET_USED(mp);
+ *flags |= (MEMPOOL_F_CAPA_PHYS_CONTIG |
+ MEMPOOL_F_CAPA_BLK_ALIGNED_OBJECTS);
+ return 0;
+}
+
static struct rte_mempool_ops octeontx_fpavf_ops = {
.name = "octeontx_fpavf",
.alloc = octeontx_fpavf_alloc,
@@ -166,7 +176,7 @@ static struct rte_mempool_ops octeontx_fpavf_ops = {
.enqueue = octeontx_fpavf_enqueue,
.dequeue = octeontx_fpavf_dequeue,
.get_count = octeontx_fpavf_get_count,
- .get_capabilities = NULL,
+ .get_capabilities = octeontx_fpavf_get_capabilities,
.register_memory_area = NULL,
};
--
2.14.1
^ permalink raw reply [flat|nested] 78+ messages in thread
* [dpdk-dev] [PATCH v3 09/10] mempool/octeontx: add support for memory area ops
2017-10-08 12:40 ` [dpdk-dev] [PATCH v3 " Santosh Shukla
` (7 preceding siblings ...)
2017-10-08 12:40 ` [dpdk-dev] [PATCH v3 08/10] mempool/octeontx: add support for get capability Santosh Shukla
@ 2017-10-08 12:40 ` Santosh Shukla
2017-10-08 12:40 ` [dpdk-dev] [PATCH v3 10/10] doc: add mempool and octeontx mempool device Santosh Shukla
2017-10-08 17:16 ` [dpdk-dev] [PATCH v3 00/10] Cavium Octeontx external mempool driver Thomas Monjalon
10 siblings, 0 replies; 78+ messages in thread
From: Santosh Shukla @ 2017-10-08 12:40 UTC (permalink / raw)
To: olivier.matz, dev; +Cc: thomas, jerin.jacob, hemant.agrawal, Santosh Shukla
Add support for register_memory_area ops in mempool driver.
Allow more than one HW pool when using the OcteonTX mempool driver
by storing each pool's information in a list and finding the
appropriate list element by matching the rte_mempool pointers.
Signed-off-by: Santosh Shukla <santosh.shukla@caviumnetworks.com>
Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
---
drivers/mempool/octeontx/rte_mempool_octeontx.c | 74 ++++++++++++++++++++++++-
1 file changed, 72 insertions(+), 2 deletions(-)
diff --git a/drivers/mempool/octeontx/rte_mempool_octeontx.c b/drivers/mempool/octeontx/rte_mempool_octeontx.c
index 09df114c0..9f1c07f9d 100644
--- a/drivers/mempool/octeontx/rte_mempool_octeontx.c
+++ b/drivers/mempool/octeontx/rte_mempool_octeontx.c
@@ -36,17 +36,49 @@
#include "octeontx_fpavf.h"
+/*
+ * Per-pool descriptor.
+ * Links a mempool with the corresponding memzone,
+ * which provides the memory for the pool's elements.
+ */
+struct octeontx_pool_info {
+ const struct rte_mempool *mp;
+ uintptr_t mz_addr;
+
+ SLIST_ENTRY(octeontx_pool_info) link;
+};
+
+SLIST_HEAD(octeontx_pool_list, octeontx_pool_info);
+
+/* List of the allocated pools */
+static struct octeontx_pool_list octeontx_pool_head =
+ SLIST_HEAD_INITIALIZER(octeontx_pool_head);
+/* Spinlock to protect pool list */
+static rte_spinlock_t pool_list_lock = RTE_SPINLOCK_INITIALIZER;
+
static int
octeontx_fpavf_alloc(struct rte_mempool *mp)
{
uintptr_t pool;
+ struct octeontx_pool_info *pool_info;
uint32_t memseg_count = mp->size;
uint32_t object_size;
uintptr_t va_start;
int rc = 0;
+ rte_spinlock_lock(&pool_list_lock);
+ SLIST_FOREACH(pool_info, &octeontx_pool_head, link) {
+ if (pool_info->mp == mp)
+ break;
+ }
+ if (pool_info == NULL) {
+ rte_spinlock_unlock(&pool_list_lock);
+ return -ENXIO;
+ }
+
/* virtual hugepage mapped addr */
- va_start = ~(uint64_t)0;
+ va_start = pool_info->mz_addr;
+ rte_spinlock_unlock(&pool_list_lock);
object_size = mp->elt_size + mp->header_size + mp->trailer_size;
@@ -77,10 +109,27 @@ octeontx_fpavf_alloc(struct rte_mempool *mp)
static void
octeontx_fpavf_free(struct rte_mempool *mp)
{
+ struct octeontx_pool_info *pool_info;
uintptr_t pool;
pool = (uintptr_t)mp->pool_id;
+ rte_spinlock_lock(&pool_list_lock);
+ SLIST_FOREACH(pool_info, &octeontx_pool_head, link) {
+ if (pool_info->mp == mp)
+ break;
+ }
+
+ if (pool_info == NULL) {
+ rte_spinlock_unlock(&pool_list_lock);
+ rte_panic("%s: trying to free pool with no valid metadata",
+ __func__);
+ }
+
+ SLIST_REMOVE(&octeontx_pool_head, pool_info, octeontx_pool_info, link);
+ rte_spinlock_unlock(&pool_list_lock);
+
+ rte_free(pool_info);
octeontx_fpa_bufpool_destroy(pool, mp->socket_id);
}
@@ -169,6 +218,27 @@ octeontx_fpavf_get_capabilities(const struct rte_mempool *mp,
return 0;
}
+static int
+octeontx_fpavf_register_memory_area(const struct rte_mempool *mp,
+ char *vaddr, phys_addr_t paddr, size_t len)
+{
+ struct octeontx_pool_info *pool_info;
+
+ RTE_SET_USED(paddr);
+ RTE_SET_USED(len);
+
+ pool_info = rte_malloc("octeontx_pool_info", sizeof(*pool_info), 0);
+ if (pool_info == NULL)
+ return -ENOMEM;
+
+ pool_info->mp = mp;
+ pool_info->mz_addr = (uintptr_t)vaddr;
+ rte_spinlock_lock(&pool_list_lock);
+ SLIST_INSERT_HEAD(&octeontx_pool_head, pool_info, link);
+ rte_spinlock_unlock(&pool_list_lock);
+ return 0;
+}
+
static struct rte_mempool_ops octeontx_fpavf_ops = {
.name = "octeontx_fpavf",
.alloc = octeontx_fpavf_alloc,
@@ -177,7 +247,7 @@ static struct rte_mempool_ops octeontx_fpavf_ops = {
.dequeue = octeontx_fpavf_dequeue,
.get_count = octeontx_fpavf_get_count,
.get_capabilities = octeontx_fpavf_get_capabilities,
- .register_memory_area = NULL,
+ .register_memory_area = octeontx_fpavf_register_memory_area,
};
MEMPOOL_REGISTER_OPS(octeontx_fpavf_ops);
--
2.14.1
^ permalink raw reply [flat|nested] 78+ messages in thread
* [dpdk-dev] [PATCH v3 10/10] doc: add mempool and octeontx mempool device
2017-10-08 12:40 ` [dpdk-dev] [PATCH v3 " Santosh Shukla
` (8 preceding siblings ...)
2017-10-08 12:40 ` [dpdk-dev] [PATCH v3 09/10] mempool/octeontx: add support for memory area ops Santosh Shukla
@ 2017-10-08 12:40 ` Santosh Shukla
2017-10-08 16:43 ` Thomas Monjalon
2017-10-20 15:21 ` [dpdk-dev] [PATCH v4 0/3] Octeontx doc misc Santosh Shukla
2017-10-08 17:16 ` [dpdk-dev] [PATCH v3 00/10] Cavium Octeontx external mempool driver Thomas Monjalon
10 siblings, 2 replies; 78+ messages in thread
From: Santosh Shukla @ 2017-10-08 12:40 UTC (permalink / raw)
To: olivier.matz, dev
Cc: thomas, jerin.jacob, hemant.agrawal, Santosh Shukla, John McNamara
This commit adds a section to the docs listing the mempool
device PMDs available.
It then adds the octeontx fpavf mempool PMD to the listed mempool
devices.
Cc: John McNamara <john.mcnamara@intel.com>
Signed-off-by: Santosh Shukla <santosh.shukla@caviumnetworks.com>
Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
Reviewed-by: John McNamara <john.mcnamara@intel.com>
---
v2 --> v3:
- Incorporated doc review comment (Suggested by John)
MAINTAINERS | 7 +++
doc/guides/index.rst | 1 +
doc/guides/mempool/index.rst | 40 +++++++++++++
doc/guides/mempool/octeontx.rst | 130 ++++++++++++++++++++++++++++++++++++++++
4 files changed, 178 insertions(+)
create mode 100644 doc/guides/mempool/index.rst
create mode 100644 doc/guides/mempool/octeontx.rst
diff --git a/MAINTAINERS b/MAINTAINERS
index cd0d6bcac..59e782809 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -340,6 +340,13 @@ F: drivers/net/liquidio/
F: doc/guides/nics/liquidio.rst
F: doc/guides/nics/features/liquidio.ini
+Cavium Octeontx Mempool
+M: Santosh Shukla <santosh.shukla@caviumnetworks.com>
+M: Jerin Jacob <jerin.jacob@caviumnetworks.com>
+F: drivers/mempool/octeontx
+F: doc/guides/mempool/index.rst
+F: doc/guides/mempool/octeontx.rst
+
Chelsio cxgbe
M: Rahul Lakkireddy <rahul.lakkireddy@chelsio.com>
F: drivers/net/cxgbe/
diff --git a/doc/guides/index.rst b/doc/guides/index.rst
index 63716b095..98f4b7aab 100644
--- a/doc/guides/index.rst
+++ b/doc/guides/index.rst
@@ -44,6 +44,7 @@ DPDK documentation
nics/index
cryptodevs/index
eventdevs/index
+ mempool/index
xen/index
contributing/index
rel_notes/index
diff --git a/doc/guides/mempool/index.rst b/doc/guides/mempool/index.rst
new file mode 100644
index 000000000..440fb175a
--- /dev/null
+++ b/doc/guides/mempool/index.rst
@@ -0,0 +1,40 @@
+.. BSD LICENSE
+ Copyright(c) 2017 Cavium Inc. All rights reserved.
+
+ Redistribution and use in source and binary forms, with or without
+ modification, are permitted provided that the following conditions
+ are met:
+
+ * Redistributions of source code must retain the above copyright
+ notice, this list of conditions and the following disclaimer.
+ * Redistributions in binary form must reproduce the above copyright
+ notice, this list of conditions and the following disclaimer in
+ the documentation and/or other materials provided with the
+ distribution.
+ * Neither the name of Cavium, Inc nor the names of its
+ contributors may be used to endorse or promote products derived
+ from this software without specific prior written permission.
+
+ THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+Mempool Device Driver
+=====================
+
+The following is a list of mempool PMDs, which can be used from an
+application through the mempool API.
+
+.. toctree::
+ :maxdepth: 2
+ :numbered:
+
+ octeontx
diff --git a/doc/guides/mempool/octeontx.rst b/doc/guides/mempool/octeontx.rst
new file mode 100644
index 000000000..02a9b0212
--- /dev/null
+++ b/doc/guides/mempool/octeontx.rst
@@ -0,0 +1,130 @@
+.. BSD LICENSE
+ Copyright (C) Cavium, Inc. 2017. All rights reserved.
+
+ Redistribution and use in source and binary forms, with or without
+ modification, are permitted provided that the following conditions
+ are met:
+
+ * Redistributions of source code must retain the above copyright
+ notice, this list of conditions and the following disclaimer.
+ * Redistributions in binary form must reproduce the above copyright
+ notice, this list of conditions and the following disclaimer in
+ the documentation and/or other materials provided with the
+ distribution.
+ * Neither the name of Cavium, Inc nor the names of its
+ contributors may be used to endorse or promote products derived
+ from this software without specific prior written permission.
+
+ THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+OCTEONTX FPAVF Mempool Driver
+=============================
+
+The OCTEONTX FPAVF PMD (**librte_mempool_octeontx**) is a mempool
+driver for the offloaded mempool device found in the **Cavium OCTEONTX** SoC
+family.
+
+More information can be found at `Cavium, Inc Official Website
+<http://www.cavium.com/OCTEON-TX_ARM_Processors.html>`_.
+
+Features
+--------
+
+Features of the OCTEONTX FPAVF PMD are:
+
+- 32 SR-IOV Virtual functions
+- 32 Pools
+- HW mempool manager
+
+Supported OCTEONTX SoCs
+-----------------------
+
+- CN83xx
+
+Prerequisites
+-------------
+
+There are three main prerequisites for executing the FPAVF PMD on an OCTEONTX
+compatible board:
+
+1. **OCTEONTX Linux kernel PF driver for Network acceleration HW blocks**
+
+ The OCTEONTX Linux kernel drivers (including the required PF driver for the
+ FPAVF) are available on Github at `octeontx-kmod <https://github.com/caviumnetworks/octeontx-kmod>`_
+ along with build, install and dpdk usage instructions.
+
+2. **ARM64 Tool Chain**
+
+ For example, the *aarch64* Linaro Toolchain, which can be obtained from
+ `here <https://releases.linaro.org/components/toolchain/binaries/4.9-2017.01/aarch64-linux-gnu>`_.
+
+3. **Rootfile system**
+
+ Any *aarch64* supporting filesystem can be used. For example,
+ Ubuntu 15.10 (Wily) or 16.04 LTS (Xenial) userland which can be obtained
+ from `<http://cdimage.ubuntu.com/ubuntu-base/releases/16.04/release/ubuntu-base-16.04.1-base-arm64.tar.gz>`_.
+
+ As an alternative method, the FPAVF PMD can also be executed using images provided
+ as part of the SDK from Cavium. The SDK includes all the above prerequisites necessary
+ to bring up an OCTEONTX board.
+
+ SDK and related information can be obtained from: `Cavium support site <https://support.cavium.com/>`_.
+
+Follow the DPDK :ref:`Getting Started Guide for Linux <linux_gsg>` to setup the basic DPDK environment.
+
+Pre-Installation Configuration
+------------------------------
+
+Config File Options
+~~~~~~~~~~~~~~~~~~~
+
+The following options can be modified in the ``config`` file.
+Please note that enabling debugging options may affect system performance.
+
+- ``CONFIG_RTE_MBUF_DEFAULT_MEMPOOL_OPS`` (set to ``octeontx_fpavf``)
+
+ Set default mempool ops to octeontx_fpavf.
+
+- ``CONFIG_RTE_LIBRTE_OCTEONTX_MEMPOOL`` (default ``y``)
+
+ Toggle compilation of the ``librte_mempool_octeontx`` driver.
+
+- ``CONFIG_RTE_LIBRTE_OCTEONTX_MEMPOOL_DEBUG`` (default ``n``)
+
+ Toggle display of generic debugging messages.
+
+Driver Compilation
+~~~~~~~~~~~~~~~~~~
+
+To compile the OCTEONTX FPAVF MEMPOOL PMD for Linux arm64 gcc target, run the
+following ``make`` command:
+
+.. code-block:: console
+
+ cd <DPDK-source-directory>
+ make config T=arm64-thunderx-linuxapp-gcc test-build
+
+
+Initialization
+--------------
+
+The octeontx fpavf mempool initialization is similar to that of other
+mempool drivers, like ring. However, the user needs to pass the
+--base-virtaddr option on the application command line, as shown in the
+example below.
+
+Example:
+
+.. code-block:: console
+
+ ./build/app/test -c 0xf --base-virtaddr=0x100000000000 \
+ --mbuf-pool-ops-name="octeontx_fpavf"
--
2.14.1
^ permalink raw reply [flat|nested] 78+ messages in thread
* Re: [dpdk-dev] [PATCH v3 10/10] doc: add mempool and octeontx mempool device
2017-10-08 12:40 ` [dpdk-dev] [PATCH v3 10/10] doc: add mempool and octeontx mempool device Santosh Shukla
@ 2017-10-08 16:43 ` Thomas Monjalon
2017-10-09 5:01 ` santosh
2017-10-20 15:21 ` [dpdk-dev] [PATCH v4 0/3] Octeontx doc misc Santosh Shukla
1 sibling, 1 reply; 78+ messages in thread
From: Thomas Monjalon @ 2017-10-08 16:43 UTC (permalink / raw)
To: Santosh Shukla
Cc: dev, olivier.matz, jerin.jacob, hemant.agrawal, John McNamara,
ferruh.yigit
08/10/2017 14:40, Santosh Shukla:
> This commit adds a section to the docs listing the mempool
> device PMDs available.
It is confusing to add a mempool guide, given that we already have
a mempool section in the programmer's guide:
http://dpdk.org/doc/guides/prog_guide/mempool_lib.html
And we will probably need also some doc for bus drivers.
I think it would be more interesting to create a platform guide
where you can describe the bus and the mempool.
OK for doc/guides/platform/octeontx.rst ?
I choose to integrate this series without this last patch.
I mark this patch as rejected.
Please submit a new one separately.
> It then adds the octeontx fpavf mempool PMD to the listed mempool
> devices.
>
> Cc: John McNamara <john.mcnamara@intel.com>
>
> Signed-off-by: Santosh Shukla <santosh.shukla@caviumnetworks.com>
> Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
> Reviewed-by: John McNamara <john.mcnamara@intel.com>
> ---
[...]
> --- a/MAINTAINERS
> +++ b/MAINTAINERS
> @@ -340,6 +340,13 @@ F: drivers/net/liquidio/
> F: doc/guides/nics/liquidio.rst
> F: doc/guides/nics/features/liquidio.ini
>
> +Cavium Octeontx Mempool
> +M: Santosh Shukla <santosh.shukla@caviumnetworks.com>
> +M: Jerin Jacob <jerin.jacob@caviumnetworks.com>
> +F: drivers/mempool/octeontx
A slash is missing at the end of the directory.
Until now, the mempool and bus drivers are listed with net drivers.
We could move them in a platform section later.
For now, let's put it as "Cavium OcteonTX" in net drivers.
I fixed and merged it with the first patch.
> +F: doc/guides/mempool/index.rst
The index must not be part of Octeontx section.
> +F: doc/guides/mempool/octeontx.rst
^ permalink raw reply [flat|nested] 78+ messages in thread
* Re: [dpdk-dev] [PATCH v3 10/10] doc: add mempool and octeontx mempool device
2017-10-08 16:43 ` Thomas Monjalon
@ 2017-10-09 5:01 ` santosh
2017-10-09 5:46 ` santosh
0 siblings, 1 reply; 78+ messages in thread
From: santosh @ 2017-10-09 5:01 UTC (permalink / raw)
To: Thomas Monjalon
Cc: dev, olivier.matz, jerin.jacob, hemant.agrawal, John McNamara,
ferruh.yigit
Hi Thomas,
On Sunday 08 October 2017 10:13 PM, Thomas Monjalon wrote:
> 08/10/2017 14:40, Santosh Shukla:
>> This commit adds a section to the docs listing the mempool
>> device PMDs available.
> It is confusing to add a mempool guide, given that we already have
> a mempool section in the programmer's guide:
> http://dpdk.org/doc/guides/prog_guide/mempool_lib.html
>
> And we will probably need also some doc for bus drivers.
>
> I think it would be more interesting to create a platform guide
> where you can describe the bus and the mempool.
> OK for doc/guides/platform/octeontx.rst ?
No strong opinion,
But IMO, the purpose of introducing a mempool PMD guide was inspired by
eventdev, which I find pretty organized.
Yes, we have the mempool_lib guide, but that is more about common mempool
layer details like the api, structure layout etc. I wanted
to add a guide which tells about mempool PMDs and their capabilities,
if any; that's why I included octeontx as a starter and was thinking
that other external-mempool PMDs like dpaa/dpaa2, sw ring pmd may come
later.
If the above does not make sense then I will follow Thomas's proposition
and propose a patch.
Thoughts?
> I choose to integrate this series without this last patch.
> I mark this patch as rejected.
> Please submit a new one separately.
>
>> It then adds the octeontx fpavf mempool PMD to the listed mempool
>> devices.
>>
>> Cc: John McNamara <john.mcnamara@intel.com>
>>
>> Signed-off-by: Santosh Shukla <santosh.shukla@caviumnetworks.com>
>> Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
>> Reviewed-by: John McNamara <john.mcnamara@intel.com>
>> ---
> [...]
>> --- a/MAINTAINERS
>> +++ b/MAINTAINERS
>> @@ -340,6 +340,13 @@ F: drivers/net/liquidio/
>> F: doc/guides/nics/liquidio.rst
>> F: doc/guides/nics/features/liquidio.ini
>>
>> +Cavium Octeontx Mempool
>> +M: Santosh Shukla <santosh.shukla@caviumnetworks.com>
>> +M: Jerin Jacob <jerin.jacob@caviumnetworks.com>
>> +F: drivers/mempool/octeontx
> A slash is missing at the end of the directory.
>
> Until now, the mempool and bus drivers are listed with net drivers.
> We could move them in a platform section later.
> For now, let's put it as "Cavium OcteonTX" in net drivers.
>
> I fixed and merged it with the first patch.
Thanks.
IMO, for the MAINTAINERS file:
Just like we have an entry for "Eventdev Driver" and underneath
it all the vendor specific PMDs sit, I was thinking to
introduce "Mempool Drivers" such that we place all
external mempool PMDs + the s/w PMD (example: Ring) underneath it.
thoughts?
>> +F: doc/guides/mempool/index.rst
> The index must not be part of Octeontx section.
>
>> +F: doc/guides/mempool/octeontx.rst
^ permalink raw reply [flat|nested] 78+ messages in thread
* Re: [dpdk-dev] [PATCH v3 10/10] doc: add mempool and octeontx mempool device
2017-10-09 5:01 ` santosh
@ 2017-10-09 5:46 ` santosh
2017-10-09 8:48 ` Thomas Monjalon
0 siblings, 1 reply; 78+ messages in thread
From: santosh @ 2017-10-09 5:46 UTC (permalink / raw)
To: Thomas Monjalon
Cc: dev, olivier.matz, jerin.jacob, hemant.agrawal, John McNamara,
ferruh.yigit
On Monday 09 October 2017 10:31 AM, santosh wrote:
> Hi Thomas,
>
>
> On Sunday 08 October 2017 10:13 PM, Thomas Monjalon wrote:
>> 08/10/2017 14:40, Santosh Shukla:
>>> This commit adds a section to the docs listing the mempool
>>> device PMDs available.
>> It is confusing to add a mempool guide, given that we already have
>> a mempool section in the programmer's guide:
>> http://dpdk.org/doc/guides/prog_guide/mempool_lib.html
>>
>> And we will probably need also some doc for bus drivers.
>>
>> I think it would be more interesting to create a platform guide
>> where you can describe the bus and the mempool.
>> OK for doc/guides/platform/octeontx.rst ?
> No strong opinion,
>
> But IMO, the purpose of introducing a mempool PMD guide was inspired by
> eventdev, which I find pretty organized.
>
> Yes, we have the mempool_lib guide, but that is more about common mempool
> layer details like the api, structure layout etc. I wanted
> to add a guide which tells about mempool PMDs and their capabilities,
> if any; that's why I included octeontx as a starter and was thinking
> that other external-mempool PMDs like dpaa/dpaa2, sw ring pmd may come
> later.
>
> If the above does not make sense then I will follow Thomas's proposition
> and propose a patch.
>
> Thoughts?
>
Additional input:
A mempool PMD can logically work across nics, which could be a reason
not to mention it under platform/octeontx or platform/dpaa etc.
IMO, it's worth adding a new section for mempool PMDs.
Thoughts?
Regards,
>> I choose to integrate this series without this last patch.
>> I mark this patch as rejected.
>> Please submit a new one separately.
>>
>>> It then adds the octeontx fpavf mempool PMD to the listed mempool
>>> devices.
>>>
>>> Cc: John McNamara <john.mcnamara@intel.com>
>>>
>>> Signed-off-by: Santosh Shukla <santosh.shukla@caviumnetworks.com>
>>> Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
>>> Reviewed-by: John McNamara <john.mcnamara@intel.com>
>>> ---
>> [...]
>>> --- a/MAINTAINERS
>>> +++ b/MAINTAINERS
>>> @@ -340,6 +340,13 @@ F: drivers/net/liquidio/
>>> F: doc/guides/nics/liquidio.rst
>>> F: doc/guides/nics/features/liquidio.ini
>>>
>>> +Cavium Octeontx Mempool
>>> +M: Santosh Shukla <santosh.shukla@caviumnetworks.com>
>>> +M: Jerin Jacob <jerin.jacob@caviumnetworks.com>
>>> +F: drivers/mempool/octeontx
>> A slash is missing at the end of the directory.
>>
>> Until now, the mempool and bus drivers are listed with net drivers.
>> We could move them in a platform section later.
>> For now, let's put it as "Cavium OcteonTX" in net drivers.
>>
>> I fixed and merged it with the first patch.
> Thanks.
>
> IMO, for the MAINTAINERS file:
> Just like we have an entry for "Eventdev Driver" and underneath
> it all the vendor specific PMDs sit, I was thinking to
> introduce "Mempool Drivers" such that we place all
> external mempool PMDs + the s/w PMD (example: Ring) underneath it.
>
> thoughts?
>
>>> +F: doc/guides/mempool/index.rst
>> The index must not be part of Octeontx section.
>>
>>> +F: doc/guides/mempool/octeontx.rst
^ permalink raw reply [flat|nested] 78+ messages in thread
* Re: [dpdk-dev] [PATCH v3 10/10] doc: add mempool and octeontx mempool device
2017-10-09 5:46 ` santosh
@ 2017-10-09 8:48 ` Thomas Monjalon
2017-10-09 9:19 ` santosh
0 siblings, 1 reply; 78+ messages in thread
From: Thomas Monjalon @ 2017-10-09 8:48 UTC (permalink / raw)
To: santosh
Cc: dev, olivier.matz, jerin.jacob, hemant.agrawal, John McNamara,
ferruh.yigit
09/10/2017 07:46, santosh:
>
> On Monday 09 October 2017 10:31 AM, santosh wrote:
> > Hi Thomas,
> >
> >
> > On Sunday 08 October 2017 10:13 PM, Thomas Monjalon wrote:
> >> 08/10/2017 14:40, Santosh Shukla:
> >>> This commit adds a section to the docs listing the mempool
> >>> device PMDs available.
> >> It is confusing to add a mempool guide, given that we already have
> >> a mempool section in the programmer's guide:
> >> http://dpdk.org/doc/guides/prog_guide/mempool_lib.html
> >>
> >> And we will probably need also some doc for bus drivers.
> >>
> >> I think it would be more interesting to create a platform guide
> >> where you can describe the bus and the mempool.
> >> OK for doc/guides/platform/octeontx.rst ?
> > No strong opinion,
> >
> > But IMO, the purpose of introducing a mempool PMD guide was inspired by
> > eventdev, which I find pretty organized.
> >
> > Yes, we have the mempool_lib guide, but that is more about common mempool
> > layer details like the api, structure layout etc. I wanted
> > to add a guide which tells about mempool PMDs and their capabilities,
> > if any; that's why I included octeontx as a starter and was thinking
> > that other external-mempool PMDs like dpaa/dpaa2, sw ring pmd may come
> > later.
Yes sure it is interesting.
The question is to know if mempool drivers make sense in their own guide
or if it's better to group them with all related platform specifics.
> > If the above does not make sense then I will follow Thomas's proposition
> > and propose a patch.
> >
> > Thoughts?
> >
> Additional input:
>
> A mempool PMD can logically work across nics, which could be a reason
> not to mention it under platform/octeontx or platform/dpaa etc.
I don't understand. OcteonTx mempool works only on OcteonTX?
Are you saying that OcteonTX can be managed as a device?
> IMO, it's worth adding a new section for mempool PMDs.
>
> Thoughts?
>
> Regards,
>
> >> I choose to integrate this series without this last patch.
> >> I mark this patch as rejected.
> >> Please submit a new one separately.
> >>
> >>> It then adds the octeontx fpavf mempool PMD to the listed mempool
> >>> devices.
> >>>
> >>> Cc: John McNamara <john.mcnamara@intel.com>
> >>>
> >>> Signed-off-by: Santosh Shukla <santosh.shukla@caviumnetworks.com>
> >>> Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
> >>> Reviewed-by: John McNamara <john.mcnamara@intel.com>
> >>> ---
> >> [...]
> >>> --- a/MAINTAINERS
> >>> +++ b/MAINTAINERS
> >>> @@ -340,6 +340,13 @@ F: drivers/net/liquidio/
> >>> F: doc/guides/nics/liquidio.rst
> >>> F: doc/guides/nics/features/liquidio.ini
> >>>
> >>> +Cavium Octeontx Mempool
> >>> +M: Santosh Shukla <santosh.shukla@caviumnetworks.com>
> >>> +M: Jerin Jacob <jerin.jacob@caviumnetworks.com>
> >>> +F: drivers/mempool/octeontx
> >> A slash is missing at the end of the directory.
> >>
> >> Until now, the mempool and bus drivers are listed with net drivers.
> >> We could move them in a platform section later.
> >> For now, let's put it as "Cavium OcteonTX" in net drivers.
> >>
> >> I fixed and merged it with the first patch.
> > Thanks.
> >
> > IMO, for the MAINTAINERS file:
> > Just like we have an entry for "Eventdev Driver" and underneath
> > it all the vendor specific PMDs sit, I was thinking to
> > introduce "Mempool Drivers" such that we place all
> > external mempool PMDs + the s/w PMD (example: Ring) underneath it.
> >
> > thoughts?
No need to move SW mempool drivers in a different section.
They are maintained by Olivier with the mempool core code.
I have the feeling that all the platform specific stuff
(bus, mempool, makefile and config file) is maintained by
the same persons.
I think it is easier to know who to contact for issues with a platform.
^ permalink raw reply [flat|nested] 78+ messages in thread
* Re: [dpdk-dev] [PATCH v3 10/10] doc: add mempool and octeontx mempool device
2017-10-09 8:48 ` Thomas Monjalon
@ 2017-10-09 9:19 ` santosh
2017-10-18 12:17 ` santosh
0 siblings, 1 reply; 78+ messages in thread
From: santosh @ 2017-10-09 9:19 UTC (permalink / raw)
To: Thomas Monjalon
Cc: dev, olivier.matz, jerin.jacob, hemant.agrawal, John McNamara,
ferruh.yigit
On Monday 09 October 2017 02:18 PM, Thomas Monjalon wrote:
> 09/10/2017 07:46, santosh:
>> On Monday 09 October 2017 10:31 AM, santosh wrote:
>>> Hi Thomas,
>>>
>>>
>>> On Sunday 08 October 2017 10:13 PM, Thomas Monjalon wrote:
>>>> 08/10/2017 14:40, Santosh Shukla:
>>>>> This commit adds a section to the docs listing the mempool
>>>>> device PMDs available.
>>>> It is confusing to add a mempool guide, given that we already have
>>>> a mempool section in the programmer's guide:
>>>> http://dpdk.org/doc/guides/prog_guide/mempool_lib.html
>>>>
>>>> And we will probably need also some doc for bus drivers.
>>>>
>>>> I think it would be more interesting to create a platform guide
>>>> where you can describe the bus and the mempool.
>>>> OK for doc/guides/platform/octeontx.rst ?
>>> No strong opinion,
>>>
>>> But IMO, the purpose of introducing a mempool PMD guide was inspired by
>>> eventdev, which I find pretty organized.
>>>
>>> Yes, we have the mempool_lib guide, but that is more about common mempool
>>> layer details like the api, structure layout etc. I wanted
>>> to add a guide which tells about mempool PMDs and their capabilities,
>>> if any; that's why I included octeontx as a starter and was thinking
>>> that other external-mempool PMDs like dpaa/dpaa2, sw ring pmd may come
>>> later.
> Yes sure it is interesting.
> The question is to know if mempool drivers make sense in their own guide
> or if it's better to group them with all related platform specifics.
I vote for keeping them just like eventdev/cryptodev,
which have vendor specific PMDs under one roof (both s/w and hw).
>>> If the above does not make sense then I will follow Thomas's proposition
>>> and propose a patch.
>>>
>>> Thoughts?
>>>
>> Additional input:
>>
>> A mempool PMD can logically work across nics, which could be a reason
>> not to mention it under platform/octeontx or platform/dpaa etc.
> I don't understand. OcteonTx mempool works only on OcteonTX?
It can work on other external PCI-e nics, though the current pmd doesn't support that.
> Are you saying that OcteonTX can be managed as a device?
>
Yes.
For example:
We have a standalone test application for mempool, for test purposes,
so it can test a standalone mempool device, right?
If the user gives the 'octeontx_fpavf' pool handle,
then the test works just like it does for the s/w ring.
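For instance (an illustrative command line, mirroring the example in the
doc patch):

./build/app/test -c 0xf --base-virtaddr=0x100000000000 \
    --mbuf-pool-ops-name="octeontx_fpavf"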
BTW: the HW mempool offload behaves just like ring, for example;
the only difference is that buffer mgmt is offloaded. Having said that,
in theory, an offload mempool driver or a s/w pool driver should be agnostic.
(my 2 cents).
Thanks.
>> IMO, it's worth adding a new section for mempool PMDs.
>>
>> Thoughts?
>>
>> Regards,
>>
>>>> I choose to integrate this series without this last patch.
>>>> I mark this patch as rejected.
>>>> Please submit a new one separately.
>>>>
>>>>> It then adds the octeontx fpavf mempool PMD to the listed mempool
>>>>> devices.
>>>>>
>>>>> Cc: John McNamara <john.mcnamara@intel.com>
>>>>>
>>>>> Signed-off-by: Santosh Shukla <santosh.shukla@caviumnetworks.com>
>>>>> Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
>>>>> Reviewed-by: John McNamara <john.mcnamara@intel.com>
>>>>> ---
>>>> [...]
>>>>> --- a/MAINTAINERS
>>>>> +++ b/MAINTAINERS
>>>>> @@ -340,6 +340,13 @@ F: drivers/net/liquidio/
>>>>> F: doc/guides/nics/liquidio.rst
>>>>> F: doc/guides/nics/features/liquidio.ini
>>>>>
>>>>> +Cavium Octeontx Mempool
>>>>> +M: Santosh Shukla <santosh.shukla@caviumnetworks.com>
>>>>> +M: Jerin Jacob <jerin.jacob@caviumnetworks.com>
>>>>> +F: drivers/mempool/octeontx
>>>> A slash is missing at the end of the directory.
>>>>
>>>> Until now, the mempool and bus drivers are listed with net drivers.
>>>> We could move them in a platform section later.
>>>> For now, let's put it as "Cavium OcteonTX" in net drivers.
>>>>
>>>> I fixed and merged it with the first patch.
>>> Thanks.
>>>
>>> IMO, for MAINTAINERS file:
>>> Just like we have an entry for "Eventdev Drivers" with all the
>>> vendor-specific PMDs sitting underneath it, I was thinking of
>>> introducing a "Mempool Drivers" entry under which all the
>>> external mempool PMDs + the s/w PMD (example: ring) would sit.
>>>
>>> thoughts?
> No need to move SW mempool drivers into a different section.
> They are maintained by Olivier along with the mempool core code.
>
> I have the feeling that all the platform-specific pieces
> (bus, mempool, makefile and config file) are maintained by
> the same persons.
> I think it is then easier to know whom to contact for issues with a platform.
* Re: [dpdk-dev] [PATCH v3 10/10] doc: add mempool and octeontx mempool device
2017-10-09 9:19 ` santosh
@ 2017-10-18 12:17 ` santosh
2017-10-18 13:45 ` Thomas Monjalon
0 siblings, 1 reply; 78+ messages in thread
From: santosh @ 2017-10-18 12:17 UTC (permalink / raw)
To: Thomas Monjalon
Cc: dev, olivier.matz, jerin.jacob, hemant.agrawal, John McNamara,
ferruh.yigit
Hi Thomas,
On Monday 09 October 2017 02:49 PM, santosh wrote:
> On Monday 09 October 2017 02:18 PM, Thomas Monjalon wrote:
>> 09/10/2017 07:46, santosh:
>>> On Monday 09 October 2017 10:31 AM, santosh wrote:
>>>> Hi Thomas,
>>>>
>>>>
>>>> On Sunday 08 October 2017 10:13 PM, Thomas Monjalon wrote:
>>>>> 08/10/2017 14:40, Santosh Shukla:
>>>>>> This commit adds a section to the docs listing the mempool
>>>>>> device PMDs available.
>>>>> It is confusing to add a mempool guide, given that we already have
>>>>> a mempool section in the programmer's guide:
>>>>> http://dpdk.org/doc/guides/prog_guide/mempool_lib.html
>>>>>
>>>>> And we will probably need also some doc for bus drivers.
>>>>>
>>>>> I think it would be more interesting to create a platform guide
>>>>> where you can describe the bus and the mempool.
>>>>> OK for doc/guides/platform/octeontx.rst ?
>>>> No strong opinion,
>>>>
>>>> But IMO, the purpose of introducing a mempool PMD guide was inspired
>>>> by eventdev, which I find pretty well organized.
>>>>
>>>> Yes, we have the mempool_lib guide, but that is more about common
>>>> mempool layer details like the API, structure layout, etc. I wanted
>>>> to add a guide which describes the mempool PMDs and their capabilities,
>>>> if any; that's why I included octeontx as a starter and was thinking
>>>> that other external mempool PMDs like dpaa/dpaa2 and the sw ring PMD
>>>> may come later.
>> Yes, sure, it is interesting.
>> The question is whether mempool drivers make sense in their own guide
>> or whether it's better to group them with all the related platform specifics.
> I vote for keeping them organized just like eventdev/cryptodev,
> which keep the vendor-specific PMDs under one roof (both s/w and h/w).
To be clear and to move on to v3 for this patch:
* Your proposition is to document the mempool block in a directory
structure like doc/guides/platform/octeontx.rst.
Right now we have more than one reference to octeontx.rst in dpdk,
for example:
./doc/guides/nics/octeontx.rst --> NIC
./doc/guides/eventdevs/octeontx.rst --> eventdev device
Keeping the above in mind: my current proposal was to introduce a doc like eventdev's for the mempool block.
So now I am in two minds: do I take your path, and if so should I remove all octeontx.rst references from dpdk
and bundle them under one roof, or go with my current proposal?
Who'll take the call on that?
Thanks.
* Re: [dpdk-dev] [PATCH v3 10/10] doc: add mempool and octeontx mempool device
2017-10-18 12:17 ` santosh
@ 2017-10-18 13:45 ` Thomas Monjalon
2017-10-18 14:02 ` santosh
0 siblings, 1 reply; 78+ messages in thread
From: Thomas Monjalon @ 2017-10-18 13:45 UTC (permalink / raw)
To: santosh, John McNamara
Cc: dev, olivier.matz, jerin.jacob, hemant.agrawal, ferruh.yigit
18/10/2017 14:17, santosh:
> Hi Thomas,
>
>
> On Monday 09 October 2017 02:49 PM, santosh wrote:
> > On Monday 09 October 2017 02:18 PM, Thomas Monjalon wrote:
> >> 09/10/2017 07:46, santosh:
> >>> On Monday 09 October 2017 10:31 AM, santosh wrote:
> >>>> Hi Thomas,
> >>>>
> >>>>
> >>>> On Sunday 08 October 2017 10:13 PM, Thomas Monjalon wrote:
> >>>>> 08/10/2017 14:40, Santosh Shukla:
> >>>>>> This commit adds a section to the docs listing the mempool
> >>>>>> device PMDs available.
> >>>>> It is confusing to add a mempool guide, given that we already have
> >>>>> a mempool section in the programmer's guide:
> >>>>> http://dpdk.org/doc/guides/prog_guide/mempool_lib.html
> >>>>>
> >>>>> And we will probably need also some doc for bus drivers.
> >>>>>
> >>>>> I think it would be more interesting to create a platform guide
> >>>>> where you can describe the bus and the mempool.
> >>>>> OK for doc/guides/platform/octeontx.rst ?
> >>>> No strong opinion,
> >>>>
> >>>> But IMO, the purpose of introducing a mempool PMD guide was inspired
> >>>> by eventdev, which I find pretty well organized.
> >>>>
> >>>> Yes, we have the mempool_lib guide, but that is more about common
> >>>> mempool layer details like the API, structure layout, etc. I wanted
> >>>> to add a guide which describes the mempool PMDs and their capabilities,
> >>>> if any; that's why I included octeontx as a starter and was thinking
> >>>> that other external mempool PMDs like dpaa/dpaa2 and the sw ring PMD
> >>>> may come later.
> >> Yes, sure, it is interesting.
> >> The question is whether mempool drivers make sense in their own guide
> >> or whether it's better to group them with all the related platform specifics.
> > I vote for keeping them organized just like eventdev/cryptodev,
> > which keep the vendor-specific PMDs under one roof (both s/w and h/w).
>
> To be clear and to move on to v3 for this patch:
> * Your proposition is to document the mempool block in a directory
> structure like doc/guides/platform/octeontx.rst.
> Right now we have more than one reference to octeontx.rst in dpdk,
> for example:
> ./doc/guides/nics/octeontx.rst --> NIC
> ./doc/guides/eventdevs/octeontx.rst --> eventdev device
>
> Keeping the above in mind: my current proposal was to introduce a doc like eventdev's for the mempool block.
>
> So now I am in two minds: do I take your path, and if so should I remove all octeontx.rst references from dpdk
I think we must keep octeontx.rst in nics and eventdevs.
My proposal was to have a platform guide to give more explanations
about the common hardware and bus design.
Some information about tuning Intel platforms is in the quick start guide,
and could later be moved into such a platform guide.
With this suggestion, we can include mempool drivers in the
platform guide as mempool is really specific to the platform.
I thought you agreed to it when we talked on IRC.
> and bundle them under one roof, or go with my current proposal?
>
> Who'll take the call on that?
If you strongly feel that the mempool driver is better documented outside,
you can put it outside in a mempool guide.
John, do you have an opinion?
* Re: [dpdk-dev] [PATCH v3 10/10] doc: add mempool and octeontx mempool device
2017-10-18 13:45 ` Thomas Monjalon
@ 2017-10-18 14:02 ` santosh
2017-10-18 14:26 ` Thomas Monjalon
2017-10-18 14:36 ` Jerin Jacob
0 siblings, 2 replies; 78+ messages in thread
From: santosh @ 2017-10-18 14:02 UTC (permalink / raw)
To: Thomas Monjalon, John McNamara
Cc: dev, olivier.matz, jerin.jacob, hemant.agrawal, ferruh.yigit
On Wednesday 18 October 2017 07:15 PM, Thomas Monjalon wrote:
> 18/10/2017 14:17, santosh:
>> Hi Thomas,
>>
>>
>> On Monday 09 October 2017 02:49 PM, santosh wrote:
>>> On Monday 09 October 2017 02:18 PM, Thomas Monjalon wrote:
>>>> 09/10/2017 07:46, santosh:
>>>>> On Monday 09 October 2017 10:31 AM, santosh wrote:
>>>>>> Hi Thomas,
>>>>>>
>>>>>>
>>>>>> On Sunday 08 October 2017 10:13 PM, Thomas Monjalon wrote:
>>>>>>> 08/10/2017 14:40, Santosh Shukla:
>>>>>>>> This commit adds a section to the docs listing the mempool
>>>>>>>> device PMDs available.
>>>>>>> It is confusing to add a mempool guide, given that we already have
>>>>>>> a mempool section in the programmer's guide:
>>>>>>> http://dpdk.org/doc/guides/prog_guide/mempool_lib.html
>>>>>>>
>>>>>>> And we will probably need also some doc for bus drivers.
>>>>>>>
>>>>>>> I think it would be more interesting to create a platform guide
>>>>>>> where you can describe the bus and the mempool.
>>>>>>> OK for doc/guides/platform/octeontx.rst ?
>>>>>> No strong opinion,
>>>>>>
>>>>>> But IMO, the purpose of introducing a mempool PMD guide was inspired
>>>>>> by eventdev, which I find pretty well organized.
>>>>>>
>>>>>> Yes, we have the mempool_lib guide, but that is more about common
>>>>>> mempool layer details like the API, structure layout, etc. I wanted
>>>>>> to add a guide which describes the mempool PMDs and their capabilities,
>>>>>> if any; that's why I included octeontx as a starter and was thinking
>>>>>> that other external mempool PMDs like dpaa/dpaa2 and the sw ring PMD
>>>>>> may come later.
>>>> Yes, sure, it is interesting.
>>>> The question is whether mempool drivers make sense in their own guide
>>>> or whether it's better to group them with all the related platform specifics.
>>> I vote for keeping them organized just like eventdev/cryptodev,
>>> which keep the vendor-specific PMDs under one roof (both s/w and h/w).
>> To be clear and to move on to v3 for this patch:
>> * Your proposition is to document the mempool block in a directory
>> structure like doc/guides/platform/octeontx.rst.
>> Right now we have more than one reference to octeontx.rst in dpdk,
>> for example:
>> ./doc/guides/nics/octeontx.rst --> NIC
>> ./doc/guides/eventdevs/octeontx.rst --> eventdev device
>>
>> Keeping the above in mind: my current proposal was to introduce a doc like eventdev's for the mempool block.
>>
>> So now I am in two minds: do I take your path, and if so should I remove all octeontx.rst references from dpdk
> I think we must keep octeontx.rst in nics and eventdevs.
>
> My proposal was to have a platform guide to give more explanations
> about the common hardware and bus design.
By that logic, the event device is also a common hw block, just like the
mempool block is for the octeontx platform. Also, the PCI bus is the
octeontx bus; we don't have a platform-specific bus like dpaa has, so bus
material is not applicable to the octeontx doc (imo).
> Some information about tuning Intel platforms is in the quick start guide,
> and could later be moved into such a platform guide.
>
> With this suggestion, we can include mempool drivers in the
> platform guide as mempool is really specific to the platform.
>
> I thought you agreed to it when we talked on IRC.
Yes, we did discuss it on IRC. But I'm still unsure about the scope of that
guide from the octeontx perspective: the new platform entry would have info
about only one block, the mempool, while for the other common or specific
blocks the user has to look around in different directories.
>> and bundle them under one roof, or go with my current proposal?
>>
>> Who'll take the call on that?
> If you strongly feel that the mempool driver is better documented outside,
I don't have a strong opinion on the doc; I'm just asking for more opinions here,
as I'm not fully convinced by your proposition.
> you can put it outside in a mempool guide.
> John, do you have an opinion?
>
* Re: [dpdk-dev] [PATCH v3 10/10] doc: add mempool and octeontx mempool device
2017-10-18 14:02 ` santosh
@ 2017-10-18 14:26 ` Thomas Monjalon
2017-10-18 14:36 ` Jerin Jacob
1 sibling, 0 replies; 78+ messages in thread
From: Thomas Monjalon @ 2017-10-18 14:26 UTC (permalink / raw)
To: santosh, John McNamara, olivier.matz
Cc: dev, jerin.jacob, hemant.agrawal, ferruh.yigit
18/10/2017 16:02, santosh:
>
> On Wednesday 18 October 2017 07:15 PM, Thomas Monjalon wrote:
> > 18/10/2017 14:17, santosh:
> >> Hi Thomas,
> >>
> >>
> >> On Monday 09 October 2017 02:49 PM, santosh wrote:
> >>> On Monday 09 October 2017 02:18 PM, Thomas Monjalon wrote:
> >>>> 09/10/2017 07:46, santosh:
> >>>>> On Monday 09 October 2017 10:31 AM, santosh wrote:
> >>>>>> Hi Thomas,
> >>>>>>
> >>>>>>
> >>>>>> On Sunday 08 October 2017 10:13 PM, Thomas Monjalon wrote:
> >>>>>>> 08/10/2017 14:40, Santosh Shukla:
> >>>>>>>> This commit adds a section to the docs listing the mempool
> >>>>>>>> device PMDs available.
> >>>>>>> It is confusing to add a mempool guide, given that we already have
> >>>>>>> a mempool section in the programmer's guide:
> >>>>>>> http://dpdk.org/doc/guides/prog_guide/mempool_lib.html
> >>>>>>>
> >>>>>>> And we will probably need also some doc for bus drivers.
> >>>>>>>
> >>>>>>> I think it would be more interesting to create a platform guide
> >>>>>>> where you can describe the bus and the mempool.
> >>>>>>> OK for doc/guides/platform/octeontx.rst ?
> >>>>>> No strong opinion,
> >>>>>>
> >>>>>> But IMO, the purpose of introducing a mempool PMD guide was inspired
> >>>>>> by eventdev, which I find pretty well organized.
> >>>>>>
> >>>>>> Yes, we have the mempool_lib guide, but that is more about common
> >>>>>> mempool layer details like the API, structure layout, etc. I wanted
> >>>>>> to add a guide which describes the mempool PMDs and their capabilities,
> >>>>>> if any; that's why I included octeontx as a starter and was thinking
> >>>>>> that other external mempool PMDs like dpaa/dpaa2 and the sw ring PMD
> >>>>>> may come later.
> >>>> Yes, sure, it is interesting.
> >>>> The question is whether mempool drivers make sense in their own guide
> >>>> or whether it's better to group them with all the related platform specifics.
> >>> I vote for keeping them organized just like eventdev/cryptodev,
> >>> which keep the vendor-specific PMDs under one roof (both s/w and h/w).
> >> To be clear and to move on to v3 for this patch:
> >> * Your proposition is to document the mempool block in a directory
> >> structure like doc/guides/platform/octeontx.rst.
> >> Right now we have more than one reference to octeontx.rst in dpdk,
> >> for example:
> >> ./doc/guides/nics/octeontx.rst --> NIC
> >> ./doc/guides/eventdevs/octeontx.rst --> eventdev device
> >>
> >> Keeping the above in mind: my current proposal was to introduce a doc like eventdev's for the mempool block.
> >>
> >> So now I am in two minds: do I take your path, and if so should I remove all octeontx.rst references from dpdk
> > I think we must keep octeontx.rst in nics and eventdevs.
> >
> > My proposal was to have a platform guide to give more explanations
> > about the common hardware and bus design.
>
> By that logic, the event device is also a common hw block, just like the
> mempool block is for the octeontx platform. Also, the PCI bus is the
> octeontx bus; we don't have a platform-specific bus like dpaa has, so bus
> material is not applicable to the octeontx doc (imo).
Right.
> > Some information about tuning Intel platforms is in the quick start guide,
> > and could later be moved into such a platform guide.
> >
> > With this suggestion, we can include mempool drivers in the
> > platform guide as mempool is really specific to the platform.
> >
> > I thought you agreed to it when we talked on IRC.
>
> Yes, we did discuss it on IRC. But I'm still unsure about the scope of that
> guide from the octeontx perspective: the new platform entry would have info
> about only one block, the mempool, while for the other common or specific
> blocks the user has to look around in different directories.
Right.
You can point to other sections in the platform guide.
From platform/octeontx.rst, you can point to eventdevs/octeontx.rst,
nics/octeontx.rst and mempool/octeontx.rst (if you add it).
> >> and bundle them under one roof, or go with my current proposal?
> >>
> >> Who'll take the call on that?
> > If you strongly feel that the mempool driver is better documented outside,
> >
> I don't have a strong opinion on the doc; I'm just asking for more opinions here,
Me too, I'm asking for more opinions.
> as I'm not fully convinced by your proposition.
I am convinced we must create a platform guide.
But I am not convinced about where to put the mempool section:
either directly in the platform guide,
or in a new mempool guide which is referenced from the platform guide.
> > you can put it outside in a mempool guide.
> > John, do you have an opinion?
If we do not have more opinions, do as you feel.
Anyway it will be possible to change it later if needed.
* Re: [dpdk-dev] [PATCH v3 10/10] doc: add mempool and octeontx mempool device
2017-10-18 14:02 ` santosh
2017-10-18 14:26 ` Thomas Monjalon
@ 2017-10-18 14:36 ` Jerin Jacob
2017-10-18 15:11 ` Thomas Monjalon
1 sibling, 1 reply; 78+ messages in thread
From: Jerin Jacob @ 2017-10-18 14:36 UTC (permalink / raw)
To: santosh
Cc: Thomas Monjalon, John McNamara, dev, olivier.matz,
hemant.agrawal, ferruh.yigit
-----Original Message-----
> Date: Wed, 18 Oct 2017 19:32:44 +0530
> From: santosh <santosh.shukla@caviumnetworks.com>
> To: Thomas Monjalon <thomas@monjalon.net>, John McNamara
> <john.mcnamara@intel.com>
> Cc: dev@dpdk.org, olivier.matz@6wind.com, jerin.jacob@caviumnetworks.com,
> hemant.agrawal@nxp.com, ferruh.yigit@intel.com
> Subject: Re: [dpdk-dev] [PATCH v3 10/10] doc: add mempool and octeontx
> mempool device
> User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:45.0) Gecko/20100101
> Thunderbird/45.5.1
>
>
> On Wednesday 18 October 2017 07:15 PM, Thomas Monjalon wrote:
> > 18/10/2017 14:17, santosh:
> >> Hi Thomas,
> >>
> >>
> >> On Monday 09 October 2017 02:49 PM, santosh wrote:
> >>> On Monday 09 October 2017 02:18 PM, Thomas Monjalon wrote:
> >>>> 09/10/2017 07:46, santosh:
> >>>>> On Monday 09 October 2017 10:31 AM, santosh wrote:
> >>>>>> Hi Thomas,
> >>>>>>
> >>>>>>
> >>>>>> On Sunday 08 October 2017 10:13 PM, Thomas Monjalon wrote:
> >>>>>>> 08/10/2017 14:40, Santosh Shukla:
> >>>>>>>> This commit adds a section to the docs listing the mempool
> >>>>>>>> device PMDs available.
> >>>>>>> It is confusing to add a mempool guide, given that we already have
> >>>>>>> a mempool section in the programmer's guide:
> >>>>>>> http://dpdk.org/doc/guides/prog_guide/mempool_lib.html
> >>>>>>>
> >>>>>>> And we will probably need also some doc for bus drivers.
> >>>>>>>
> >>>>>>> I think it would be more interesting to create a platform guide
> >>>>>>> where you can describe the bus and the mempool.
> >>>>>>> OK for doc/guides/platform/octeontx.rst ?
> >>>>>> No strong opinion,
> >>>>>>
> >>>>>> But IMO, the purpose of introducing a mempool PMD guide was inspired
> >>>>>> by eventdev, which I find pretty well organized.
> >>>>>>
> >>>>>> Yes, we have the mempool_lib guide, but that is more about common
> >>>>>> mempool layer details like the API, structure layout, etc. I wanted
> >>>>>> to add a guide which describes the mempool PMDs and their capabilities,
> >>>>>> if any; that's why I included octeontx as a starter and was thinking
> >>>>>> that other external mempool PMDs like dpaa/dpaa2 and the sw ring PMD
> >>>>>> may come later.
> >>>> Yes, sure, it is interesting.
> >>>> The question is whether mempool drivers make sense in their own guide
> >>>> or whether it's better to group them with all the related platform specifics.
> >>> I vote for keeping them organized just like eventdev/cryptodev,
> >>> which keep the vendor-specific PMDs under one roof (both s/w and h/w).
> >> To be clear and to move on to v3 for this patch:
> >> * Your proposition is to document the mempool block in a directory
> >> structure like doc/guides/platform/octeontx.rst.
> >> Right now we have more than one reference to octeontx.rst in dpdk,
> >> for example:
> >> ./doc/guides/nics/octeontx.rst --> NIC
> >> ./doc/guides/eventdevs/octeontx.rst --> eventdev device
> >>
> >> Keeping the above in mind: my current proposal was to introduce a doc like eventdev's for the mempool block.
> >>
> >> So now I am in two minds: do I take your path, and if so should I remove all octeontx.rst references from dpdk
> > I think we must keep octeontx.rst in nics and eventdevs.
> >
> > My proposal was to have a platform guide to give more explanations
> > about the common hardware and bus design.
>
> By that logic, the event device is also a common hw block, just like the
> mempool block is for the octeontx platform. Also, the PCI bus is the
> octeontx bus; we don't have a platform-specific bus like dpaa has, so bus
> material is not applicable to the octeontx doc (imo).
>
> > Some information about tuning Intel platforms is in the quick start guide,
> > and could later be moved into such a platform guide.
> >
> > With this suggestion, we can include mempool drivers in the
> > platform guide as mempool is really specific to the platform.
> >
> > I thought you agreed to it when we talked on IRC.
>
> Yes, we did discuss it on IRC. But I'm still unsure about the scope of that
> guide from the octeontx perspective: the new platform entry would have info
> about only one block, the mempool, while for the other common or specific
> blocks the user has to look around in different directories.
>
> >> and bundle them under one roof, or go with my current proposal?
> >>
> >> Who'll take the call on that?
> > If you strongly feel that the mempool driver is better documented outside,
>
> I don't have a strong opinion on the doc; I'm just asking for more opinions here,
Combining both proposals, how about:
1) Create ./doc/guides/mempool/octeontx.rst to capture octeontx mempool
specific information (which is in line with the driver/ hierarchy).
2) Create a platform-specific document (say doc/guides/platform/octeontx.rst).
- We can use this file to capture the content common to the three
separate documents (doc/guides/nics/octeontx.rst,
./doc/guides/eventdevs/octeontx.rst and ./doc/guides/mempool/octeontx.rst) and
reference the common file instead of duplicating the information in the
driver documentation.
Thomas, John,
Thoughts?
> as I'm not fully convinced by your proposition.
>
> > you can put it outside in a mempool guide.
> > John, do you have an opinion?
> >
>
* Re: [dpdk-dev] [PATCH v3 10/10] doc: add mempool and octeontx mempool device
2017-10-18 14:36 ` Jerin Jacob
@ 2017-10-18 15:11 ` Thomas Monjalon
0 siblings, 0 replies; 78+ messages in thread
From: Thomas Monjalon @ 2017-10-18 15:11 UTC (permalink / raw)
To: Jerin Jacob, santosh, John McNamara, olivier.matz
Cc: dev, hemant.agrawal, ferruh.yigit
18/10/2017 16:36, Jerin Jacob:
> From: santosh <santosh.shukla@caviumnetworks.com>
> >
> > On Wednesday 18 October 2017 07:15 PM, Thomas Monjalon wrote:
> > > 18/10/2017 14:17, santosh:
> > >> Hi Thomas,
> > >>
> > >>
> > >> On Monday 09 October 2017 02:49 PM, santosh wrote:
> > >>> On Monday 09 October 2017 02:18 PM, Thomas Monjalon wrote:
> > >>>> 09/10/2017 07:46, santosh:
> > >>>>> On Monday 09 October 2017 10:31 AM, santosh wrote:
> > >>>>>> Hi Thomas,
> > >>>>>>
> > >>>>>>
> > >>>>>> On Sunday 08 October 2017 10:13 PM, Thomas Monjalon wrote:
> > >>>>>>> 08/10/2017 14:40, Santosh Shukla:
> > >>>>>>>> This commit adds a section to the docs listing the mempool
> > >>>>>>>> device PMDs available.
> > >>>>>>> It is confusing to add a mempool guide, given that we already have
> > >>>>>>> a mempool section in the programmer's guide:
> > >>>>>>> http://dpdk.org/doc/guides/prog_guide/mempool_lib.html
> > >>>>>>>
> > >>>>>>> And we will probably need also some doc for bus drivers.
> > >>>>>>>
> > >>>>>>> I think it would be more interesting to create a platform guide
> > >>>>>>> where you can describe the bus and the mempool.
> > >>>>>>> OK for doc/guides/platform/octeontx.rst ?
> > >>>>>> No strong opinion,
> > >>>>>>
> > >>>>>> But IMO, the purpose of introducing a mempool PMD guide was inspired
> > >>>>>> by eventdev, which I find pretty well organized.
> > >>>>>>
> > >>>>>> Yes, we have the mempool_lib guide, but that is more about common
> > >>>>>> mempool layer details like the API, structure layout, etc. I wanted
> > >>>>>> to add a guide which describes the mempool PMDs and their capabilities,
> > >>>>>> if any; that's why I included octeontx as a starter and was thinking
> > >>>>>> that other external mempool PMDs like dpaa/dpaa2 and the sw ring PMD
> > >>>>>> may come later.
> > >>>> Yes, sure, it is interesting.
> > >>>> The question is whether mempool drivers make sense in their own guide
> > >>>> or whether it's better to group them with all the related platform specifics.
> > >>> I vote for keeping them organized just like eventdev/cryptodev,
> > >>> which keep the vendor-specific PMDs under one roof (both s/w and h/w).
> > >> To be clear and to move on to v3 for this patch:
> > >> * Your proposition is to document the mempool block in a directory
> > >> structure like doc/guides/platform/octeontx.rst.
> > >> Right now we have more than one reference to octeontx.rst in dpdk,
> > >> for example:
> > >> ./doc/guides/nics/octeontx.rst --> NIC
> > >> ./doc/guides/eventdevs/octeontx.rst --> eventdev device
> > >>
> > >> Keeping the above in mind: my current proposal was to introduce a doc like eventdev's for the mempool block.
> > >>
> > >> So now I am in two minds: do I take your path, and if so should I remove all octeontx.rst references from dpdk
> > > I think we must keep octeontx.rst in nics and eventdevs.
> > >
> > > My proposal was to have a platform guide to give more explanations
> > > about the common hardware and bus design.
> >
> > By that logic, the event device is also a common hw block, just like the
> > mempool block is for the octeontx platform. Also, the PCI bus is the
> > octeontx bus; we don't have a platform-specific bus like dpaa has, so bus
> > material is not applicable to the octeontx doc (imo).
> >
> > > Some information about tuning Intel platforms is in the quick start guide,
> > > and could later be moved into such a platform guide.
> > >
> > > With this suggestion, we can include mempool drivers in the
> > > platform guide as mempool is really specific to the platform.
> > >
> > > I thought you agreed to it when we talked on IRC.
> >
> > Yes, we did discuss it on IRC. But I'm still unsure about the scope of that
> > guide from the octeontx perspective: the new platform entry would have info
> > about only one block, the mempool, while for the other common or specific
> > blocks the user has to look around in different directories.
> >
> > >> and bundle them under one roof, or go with my current proposal?
> > >>
> > >> Who'll take the call on that?
> > > If you strongly feel that the mempool driver is better documented outside,
> >
> > I don't have a strong opinion on the doc; I'm just asking for more opinions here,
>
> Combining both proposals, how about:
> 1) Create ./doc/guides/mempool/octeontx.rst to capture octeontx mempool
> specific information (which is in line with the driver/ hierarchy).
> 2) Create a platform-specific document (say doc/guides/platform/octeontx.rst).
> - We can use this file to capture the content common to the three
> separate documents (doc/guides/nics/octeontx.rst,
> ./doc/guides/eventdevs/octeontx.rst and ./doc/guides/mempool/octeontx.rst) and
> reference the common file instead of duplicating the information in the
> driver documentation.
>
> Thomas, John,
>
> Thoughts?
This is one of the two options I described in my last email.
Our emails have crossed in the air :)
The other option is to merge the mempool guide into the platform guide,
assuming that a hardware mempool cannot be used with another platform,
and assuming that the platform guide will give the big picture about
memory addressing and capabilities, overlapping with the mempool section.
I am OK with both options.
* [dpdk-dev] [PATCH v4 0/3] Octeontx doc misc
2017-10-08 12:40 ` [dpdk-dev] [PATCH v3 10/10] doc: add mempool and octeontx mempool device Santosh Shukla
2017-10-08 16:43 ` Thomas Monjalon
@ 2017-10-20 15:21 ` Santosh Shukla
2017-10-20 15:21 ` [dpdk-dev] [PATCH v4 1/3] doc: add platform device Santosh Shukla
` (4 more replies)
1 sibling, 5 replies; 78+ messages in thread
From: Santosh Shukla @ 2017-10-20 15:21 UTC (permalink / raw)
To: dev; +Cc: thomas, jerin.jacob, john.mcnamara, Santosh Shukla
Patch 1: Introduce a platform/ entry in the guides, with information about
platform drivers. (As per discussion [1].)
Patch 2: Introduce a mempool/ entry in the guides. (Refer to discussion [1].)
Patch 3: Misc fix for nics/octeontx.rst.
Thanks.
[1] http://dpdk.org/dev/patchwork/patch/29893/
Santosh Shukla (3):
doc: add platform device
doc: add mempool and octeontx mempool device
doc: use correct mempool ops handle name
doc/guides/eventdevs/octeontx.rst | 28 +---------
doc/guides/index.rst | 2 +
doc/guides/mempool/index.rst | 40 +++++++++++++++
doc/guides/mempool/octeontx.rst | 104 ++++++++++++++++++++++++++++++++++++++
doc/guides/nics/octeontx.rst | 31 +-----------
doc/guides/platform/index.rst | 40 +++++++++++++++
doc/guides/platform/octeontx.rst | 78 ++++++++++++++++++++++++++++
7 files changed, 267 insertions(+), 56 deletions(-)
create mode 100644 doc/guides/mempool/index.rst
create mode 100644 doc/guides/mempool/octeontx.rst
create mode 100644 doc/guides/platform/index.rst
create mode 100644 doc/guides/platform/octeontx.rst
--
2.14.1
* [dpdk-dev] [PATCH v4 1/3] doc: add platform device
2017-10-20 15:21 ` [dpdk-dev] [PATCH v4 0/3] Octeontx doc misc Santosh Shukla
@ 2017-10-20 15:21 ` Santosh Shukla
2017-10-21 9:41 ` Jerin Jacob
2017-10-23 14:35 ` Mcnamara, John
2017-10-20 15:21 ` [dpdk-dev] [PATCH v4 2/3] doc: add mempool and octeontx mempool device Santosh Shukla
` (3 subsequent siblings)
4 siblings, 2 replies; 78+ messages in thread
From: Santosh Shukla @ 2017-10-20 15:21 UTC (permalink / raw)
To: dev; +Cc: thomas, jerin.jacob, john.mcnamara, Santosh Shukla
This commit adds a section to the docs listing the platform
device PMDs available.
It then adds the octeontx platform driver to the listed platform
devices.
The patch also removes duplicated platform-specific setup information from
eventdevs/octeontx.rst and nics/octeontx.rst and moves it into
platform/octeontx.rst.
Cc: John McNamara <john.mcnamara@intel.com>
Signed-off-by: Santosh Shukla <santosh.shukla@caviumnetworks.com>
---
v3 --> v4:
- Added doc/guides/platform/ as per the v3 discussion.
- Moved the common content of the "Prerequisites" section from
eventdevs/octeontx.rst to platform/octeontx.rst.
doc/guides/eventdevs/octeontx.rst | 28 +--------------
doc/guides/index.rst | 1 +
doc/guides/nics/octeontx.rst | 29 +--------------
doc/guides/platform/index.rst | 40 +++++++++++++++++++++
doc/guides/platform/octeontx.rst | 75 +++++++++++++++++++++++++++++++++++++++
5 files changed, 118 insertions(+), 55 deletions(-)
create mode 100644 doc/guides/platform/index.rst
create mode 100644 doc/guides/platform/octeontx.rst
diff --git a/doc/guides/eventdevs/octeontx.rst b/doc/guides/eventdevs/octeontx.rst
index b43d5155e..e834372af 100644
--- a/doc/guides/eventdevs/octeontx.rst
+++ b/doc/guides/eventdevs/octeontx.rst
@@ -63,33 +63,7 @@ Supported OCTEONTX SoCs
Prerequisites
-------------
-There are three main pre-perquisites for executing SSOVF PMD on a OCTEONTX
-compatible board:
-
-1. **OCTEONTX Linux kernel PF driver for Network acceleration HW blocks**
-
- The OCTEONTX Linux kernel drivers (including the required PF driver for the
- SSOVF) are available on Github at `octeontx-kmod <https://github.com/caviumnetworks/octeontx-kmod>`_
- along with build, install and dpdk usage instructions.
-
-2. **ARM64 Tool Chain**
-
- For example, the *aarch64* Linaro Toolchain, which can be obtained from
- `here <https://releases.linaro.org/components/toolchain/binaries/4.9-2017.01/aarch64-linux-gnu>`_.
-
-3. **Rootfile system**
-
- Any *aarch64* supporting filesystem can be used. For example,
- Ubuntu 15.10 (Wily) or 16.04 LTS (Xenial) userland which can be obtained
- from `<http://cdimage.ubuntu.com/ubuntu-base/releases/16.04/release/ubuntu-base-16.04.1-base-arm64.tar.gz>`_.
-
- As an alternative method, SSOVF PMD can also be executed using images provided
- as part of SDK from Cavium. The SDK includes all the above prerequisites necessary
- to bring up a OCTEONTX board.
-
- SDK and related information can be obtained from: `Cavium support site <https://support.cavium.com/>`_.
-
-- Follow the DPDK :ref:`Getting Started Guide for Linux <linux_gsg>` to setup the basic DPDK environment.
+Ref: `OCTEONTX Platform Driver` for setup information of PMD.
Pre-Installation Configuration
------------------------------
diff --git a/doc/guides/index.rst b/doc/guides/index.rst
index 5b6eb7ec5..9c24dcb46 100644
--- a/doc/guides/index.rst
+++ b/doc/guides/index.rst
@@ -47,3 +47,4 @@ DPDK documentation
contributing/index
rel_notes/index
faq/index
+ platform/index
diff --git a/doc/guides/nics/octeontx.rst b/doc/guides/nics/octeontx.rst
index a6631cd0e..2527aa3e3 100644
--- a/doc/guides/nics/octeontx.rst
+++ b/doc/guides/nics/octeontx.rst
@@ -71,34 +71,7 @@ The features supported by the device and not yet supported by this PMD include:
Prerequisites
-------------
-There are three main pre-perquisites for executing OCTEONTX PMD on a OCTEONTX
-compatible board:
-
-1. **OCTEONTX Linux kernel PF driver for Network acceleration HW blocks**
-
- The OCTEONTX Linux kernel drivers (including the required PF driver for the
- all network acceleration blocks) are available on GitHub at
- `octeontx-kmod <https://github.com/caviumnetworks/octeontx-kmod>`_
- along with build, install and dpdk usage instructions.
-
-2. **ARM64 Tool Chain**
-
- For example, the *aarch64* Linaro Toolchain, which can be obtained from
- `here <https://releases.linaro.org/components/toolchain/binaries/4.9-2017.01/aarch64-linux-gnu>`_.
-
-3. **Rootfile system**
-
- Any *aarch64* supporting filesystem can be used. For example,
- Ubuntu 15.10 (Wily) or 16.04 LTS (Xenial) userland which can be obtained
- from `<http://cdimage.ubuntu.com/ubuntu-base/releases/16.04/release/ubuntu-base-16.04.1-base-arm64.tar.gz>`_.
-
- As an alternative method, OCTEONTX PMD can also be executed using images provided
- as part of SDK from Cavium. The SDK includes all the above prerequisites necessary
- to bring up a OCTEONTX board.
-
- SDK and related information can be obtained from: `Cavium support site <https://support.cavium.com/>`_.
-
-Follow the DPDK :ref:`Getting Started Guide for Linux <linux_gsg>` to setup the basic DPDK environment.
+Ref: `OCTEONTX Platform Driver` for setup information of PMD.
Pre-Installation Configuration
------------------------------
diff --git a/doc/guides/platform/index.rst b/doc/guides/platform/index.rst
new file mode 100644
index 000000000..27b64e57d
--- /dev/null
+++ b/doc/guides/platform/index.rst
@@ -0,0 +1,40 @@
+.. BSD LICENSE
+ Copyright (C) Cavium, Inc. 2017. All rights reserved.
+
+ Redistribution and use in source and binary forms, with or without
+ modification, are permitted provided that the following conditions
+ are met:
+
+ * Redistributions of source code must retain the above copyright
+ notice, this list of conditions and the following disclaimer.
+ * Redistributions in binary form must reproduce the above copyright
+ notice, this list of conditions and the following disclaimer in
+ the documentation and/or other materials provided with the
+ distribution.
+ * Neither the name of Intel Corporation nor the names of its
+ contributors may be used to endorse or promote products derived
+ from this software without specific prior written permission.
+
+ THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+Platform Device Drivers
+=======================
+
+The following are a list of platform device PMD, That has information about
+their common hw blocks and steps to setup platform device.
+
+.. toctree::
+ :maxdepth: 2
+ :numbered:
+
+ octeontx
diff --git a/doc/guides/platform/octeontx.rst b/doc/guides/platform/octeontx.rst
new file mode 100644
index 000000000..11cff6a11
--- /dev/null
+++ b/doc/guides/platform/octeontx.rst
@@ -0,0 +1,75 @@
+.. BSD LICENSE
+ Copyright (C) Cavium, Inc. 2017. All rights reserved.
+
+ Redistribution and use in source and binary forms, with or without
+ modification, are permitted provided that the following conditions
+ are met:
+
+ * Redistributions of source code must retain the above copyright
+ notice, this list of conditions and the following disclaimer.
+ * Redistributions in binary form must reproduce the above copyright
+ notice, this list of conditions and the following disclaimer in
+ the documentation and/or other materials provided with the
+ distribution.
+ * Neither the name of Cavium, Inc nor the names of its
+ contributors may be used to endorse or promote products derived
+ from this software without specific prior written permission.
+
+ THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+OCTEONTX Platform Driver
+========================
+
+OCTEONTX Platform driver has information about common offload hw
+block drivers of **Cavium OCTEONTX** SoC family and has steps to setup platform
+driver.
+
+More information about SoC can be found at `Cavium, Inc Official Website
+<http://www.cavium.com/OCTEON-TX_ARM_Processors.html>`_.
+
+Common Offload HW Block Drivers
+-------------------------------
+
+1. **Eventdev Driver**
+ Ref:`OCTEONTX SSOVF Eventdev Driver`
+
+Steps To Setup Platform Driver
+------------------------------
+
+There are three main pre-prerequisites for setting up Platform drivers on
+OCTEONTX compatible board:
+
+1. **OCTEONTX Linux kernel PF driver for Network acceleration HW blocks**
+
+ The OCTEONTX Linux kernel drivers (includes the required PF driver for the
+ Platform drivers) are available on Github at `octeontx-kmod <https://github.com/caviumnetworks/octeontx-kmod>`_
+ along with build, install and dpdk usage instructions.
+
+2. **ARM64 Tool Chain**
+
+ For example, the *aarch64* Linaro Toolchain, which can be obtained from
+ `here <https://releases.linaro.org/components/toolchain/binaries/4.9-2017.01/aarch64-linux-gnu>`_.
+
+3. **Rootfile system**
+
+ Any *aarch64* supporting filesystem can be used. For example,
+ Ubuntu 15.10 (Wily) or 16.04 LTS (Xenial) userland which can be obtained
+ from `<http://cdimage.ubuntu.com/ubuntu-base/releases/16.04/release/ubuntu-base-16.04.1-base-arm64.tar.gz>`_.
+
+ As an alternative method, Platform drivers can also be executed using images provided
+ as part of SDK from Cavium. The SDK includes all the above prerequisites necessary
+ to bring up a OCTEONTX board.
+
+ SDK and related information can be obtained from: `Cavium support site <https://support.cavium.com/>`_.
+
+- Follow the DPDK :ref:`Getting Started Guide for Linux <linux_gsg>` to setup the basic DPDK environment.
--
2.14.1
* Re: [dpdk-dev] [PATCH v4 1/3] doc: add platform device
2017-10-20 15:21 ` [dpdk-dev] [PATCH v4 1/3] doc: add platform device Santosh Shukla
@ 2017-10-21 9:41 ` Jerin Jacob
2017-10-21 21:09 ` Thomas Monjalon
2017-10-23 14:35 ` Mcnamara, John
1 sibling, 1 reply; 78+ messages in thread
From: Jerin Jacob @ 2017-10-21 9:41 UTC (permalink / raw)
To: Santosh Shukla; +Cc: dev, thomas, john.mcnamara
-----Original Message-----
> Date: Fri, 20 Oct 2017 20:51:22 +0530
> From: Santosh Shukla <santosh.shukla@caviumnetworks.com>
> To: dev@dpdk.org
> Cc: thomas@monjalon.net, jerin.jacob@caviumnetworks.com,
> john.mcnamara@intel.com, Santosh Shukla
> <santosh.shukla@caviumnetworks.com>
> Subject: [PATCH v4 1/3] doc: add platform device
> X-Mailer: git-send-email 2.14.1
>
> This commit adds a section to the docs listing the platform
> device PMDs available.
>
> It then adds the octeontx platform driver to the listed platform
> devices.
>
> The patch also removes duplicated platform-specific setup information from
> eventdevs/octeontx.rst and nics/octeontx.rst and moves it into
> platform/octeontx.rst.
>
> Cc: John McNamara <john.mcnamara@intel.com>
>
> Signed-off-by: Santosh Shukla <santosh.shukla@caviumnetworks.com>
> ---
> v3 --> v4:
> - Added doc/guides/platform/ as per the v3 discussion.
> - Moved the common content of the "Prerequisites" section from
> eventdevs/octeontx.rst to platform/octeontx.rst.
>
> doc/guides/eventdevs/octeontx.rst | 28 +--------------
> doc/guides/index.rst | 1 +
> doc/guides/nics/octeontx.rst | 29 +--------------
> doc/guides/platform/index.rst | 40 +++++++++++++++++++++
> doc/guides/platform/octeontx.rst | 75 +++++++++++++++++++++++++++++++++++++++
> 5 files changed, 118 insertions(+), 55 deletions(-)
> create mode 100644 doc/guides/platform/index.rst
> create mode 100644 doc/guides/platform/octeontx.rst
>
> diff --git a/doc/guides/eventdevs/octeontx.rst b/doc/guides/eventdevs/octeontx.rst
> index b43d5155e..e834372af 100644
> --- a/doc/guides/eventdevs/octeontx.rst
> +++ b/doc/guides/eventdevs/octeontx.rst
> @@ -63,33 +63,7 @@ Supported OCTEONTX SoCs
> Prerequisites
> -------------
>
> -There are three main pre-perquisites for executing SSOVF PMD on a OCTEONTX
> -compatible board:
> -
> -1. **OCTEONTX Linux kernel PF driver for Network acceleration HW blocks**
> -
> - The OCTEONTX Linux kernel drivers (including the required PF driver for the
> - SSOVF) are available on Github at `octeontx-kmod <https://github.com/caviumnetworks/octeontx-kmod>`_
> - along with build, install and dpdk usage instructions.
> -
> -2. **ARM64 Tool Chain**
> -
> - For example, the *aarch64* Linaro Toolchain, which can be obtained from
> - `here <https://releases.linaro.org/components/toolchain/binaries/4.9-2017.01/aarch64-linux-gnu>`_.
> -
> -3. **Rootfile system**
> -
> - Any *aarch64* supporting filesystem can be used. For example,
> - Ubuntu 15.10 (Wily) or 16.04 LTS (Xenial) userland which can be obtained
> - from `<http://cdimage.ubuntu.com/ubuntu-base/releases/16.04/release/ubuntu-base-16.04.1-base-arm64.tar.gz>`_.
> -
> - As an alternative method, SSOVF PMD can also be executed using images provided
> - as part of SDK from Cavium. The SDK includes all the above prerequisites necessary
> - to bring up a OCTEONTX board.
> -
> - SDK and related information can be obtained from: `Cavium support site <https://support.cavium.com/>`_.
> -
> -- Follow the DPDK :ref:`Getting Started Guide for Linux <linux_gsg>` to setup the basic DPDK environment.
> +Ref: `OCTEONTX Platform Driver` for setup information of PMD.
See last comment
>
> Pre-Installation Configuration
> ------------------------------
> diff --git a/doc/guides/index.rst b/doc/guides/index.rst
> index 5b6eb7ec5..9c24dcb46 100644
> --- a/doc/guides/index.rst
> +++ b/doc/guides/index.rst
> @@ -47,3 +47,4 @@ DPDK documentation
> contributing/index
> rel_notes/index
> faq/index
> + platform/index
> diff --git a/doc/guides/nics/octeontx.rst b/doc/guides/nics/octeontx.rst
> index a6631cd0e..2527aa3e3 100644
> --- a/doc/guides/nics/octeontx.rst
> +++ b/doc/guides/nics/octeontx.rst
> @@ -71,34 +71,7 @@ The features supported by the device and not yet supported by this PMD include:
> Prerequisites
> -------------
>
> -There are three main pre-perquisites for executing OCTEONTX PMD on a OCTEONTX
> -compatible board:
> -
> -1. **OCTEONTX Linux kernel PF driver for Network acceleration HW blocks**
> -
> - The OCTEONTX Linux kernel drivers (including the required PF driver for the
> - all network acceleration blocks) are available on GitHub at
> - `octeontx-kmod <https://github.com/caviumnetworks/octeontx-kmod>`_
> - along with build, install and dpdk usage instructions.
> -
> -2. **ARM64 Tool Chain**
> -
> - For example, the *aarch64* Linaro Toolchain, which can be obtained from
> - `here <https://releases.linaro.org/components/toolchain/binaries/4.9-2017.01/aarch64-linux-gnu>`_.
> -
> -3. **Rootfile system**
> -
> - Any *aarch64* supporting filesystem can be used. For example,
> - Ubuntu 15.10 (Wily) or 16.04 LTS (Xenial) userland which can be obtained
> - from `<http://cdimage.ubuntu.com/ubuntu-base/releases/16.04/release/ubuntu-base-16.04.1-base-arm64.tar.gz>`_.
> -
> - As an alternative method, OCTEONTX PMD can also be executed using images provided
> - as part of SDK from Cavium. The SDK includes all the above prerequisites necessary
> - to bring up a OCTEONTX board.
> -
> - SDK and related information can be obtained from: `Cavium support site <https://support.cavium.com/>`_.
> -
> -Follow the DPDK :ref:`Getting Started Guide for Linux <linux_gsg>` to setup the basic DPDK environment.
> +Ref: `OCTEONTX Platform Driver` for setup information of PMD.
See last comment
>
> Pre-Installation Configuration
> ------------------------------
> diff --git a/doc/guides/platform/index.rst b/doc/guides/platform/index.rst
> new file mode 100644
> index 000000000..27b64e57d
> --- /dev/null
> +++ b/doc/guides/platform/index.rst
> @@ -0,0 +1,40 @@
> +.. BSD LICENSE
> + Copyright (C) Cavium, Inc. 2017. All rights reserved.
> +
> + Redistribution and use in source and binary forms, with or without
> + modification, are permitted provided that the following conditions
> + are met:
> +
> + * Redistributions of source code must retain the above copyright
> + notice, this list of conditions and the following disclaimer.
> + * Redistributions in binary form must reproduce the above copyright
> + notice, this list of conditions and the following disclaimer in
> + the documentation and/or other materials provided with the
> + distribution.
> + * Neither the name of Intel Corporation nor the names of its
s/Intel Corporation/Cavium, Inc/g
A similar instance is found in patch 2 as well.
> + contributors may be used to endorse or promote products derived
> + from this software without specific prior written permission.
> +
> + THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
> + "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
> + LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
> + A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
> + OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
> + SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
> + LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
> + DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
> + THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
> + (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
> + OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
> +
> +Platform Device Drivers
> +=======================
IMO, we should not limit this to platform-specific drivers. I think all
platform-specific information can go here. For the x86 case, maybe the x86
tuning guide can be moved here from the QSG.
> +
> +The following are a list of platform device PMD, That has information about
> +their common hw blocks and steps to setup platform device.
> +
> +.. toctree::
> + :maxdepth: 2
> + :numbered:
> +
> + octeontx
> diff --git a/doc/guides/platform/octeontx.rst b/doc/guides/platform/octeontx.rst
> new file mode 100644
> index 000000000..11cff6a11
> --- /dev/null
> +++ b/doc/guides/platform/octeontx.rst
> @@ -0,0 +1,75 @@
> +.. BSD LICENSE
> + Copyright (C) Cavium, Inc. 2017. All rights reserved.
> +
> + Redistribution and use in source and binary forms, with or without
> + modification, are permitted provided that the following conditions
> + are met:
> +
> + * Redistributions of source code must retain the above copyright
> + notice, this list of conditions and the following disclaimer.
> + * Redistributions in binary form must reproduce the above copyright
> + notice, this list of conditions and the following disclaimer in
> + the documentation and/or other materials provided with the
> + distribution.
> + * Neither the name of Cavium, Inc nor the names of its
> + contributors may be used to endorse or promote products derived
> + from this software without specific prior written permission.
> +
> + THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
> + "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
> + LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
> + A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
> + OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
> + SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
> + LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
> + DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
> + THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
> + (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
> + OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
> +
> +OCTEONTX Platform Driver
> +========================
I think it would be more appropriate to call it "OCTEONTX Board Support
Package" or something similar, as it has details for the PF driver, the
ARM64 toolchain, the root filesystem, etc.
> +
> +OCTEONTX Platform driver has information about common offload hw
> +block drivers of **Cavium OCTEONTX** SoC family and has steps to setup platform
> +driver.
> +
> +More information about SoC can be found at `Cavium, Inc Official Website
> +<http://www.cavium.com/OCTEON-TX_ARM_Processors.html>`_.
> +
> +Common Offload HW Block Drivers
> +-------------------------------
> +
> +1. **Eventdev Driver**
> + Ref:`OCTEONTX SSOVF Eventdev Driver`
> +
> +Steps To Setup Platform Driver
> +------------------------------
> +
> +There are three main pre-prerequisites for setting up Platform drivers on
> +OCTEONTX compatible board:
> +
> +1. **OCTEONTX Linux kernel PF driver for Network acceleration HW blocks**
> +
> + The OCTEONTX Linux kernel drivers (includes the required PF driver for the
> + Platform drivers) are available on Github at `octeontx-kmod <https://github.com/caviumnetworks/octeontx-kmod>`_
> + along with build, install and dpdk usage instructions.
> +
> +2. **ARM64 Tool Chain**
> +
> + For example, the *aarch64* Linaro Toolchain, which can be obtained from
> + `here <https://releases.linaro.org/components/toolchain/binaries/4.9-2017.01/aarch64-linux-gnu>`_.
> +
> +3. **Rootfile system**
> +
> + Any *aarch64* supporting filesystem can be used. For example,
> + Ubuntu 15.10 (Wily) or 16.04 LTS (Xenial) userland which can be obtained
> + from `<http://cdimage.ubuntu.com/ubuntu-base/releases/16.04/release/ubuntu-base-16.04.1-base-arm64.tar.gz>`_.
> +
> + As an alternative method, Platform drivers can also be executed using images provided
> + as part of SDK from Cavium. The SDK includes all the above prerequisites necessary
> + to bring up a OCTEONTX board.
> +
> + SDK and related information can be obtained from: `Cavium support site <https://support.cavium.com/>`_.
> +
> +- Follow the DPDK :ref:`Getting Started Guide for Linux <linux_gsg>` to setup the basic DPDK environment.
> --
> 2.14.1
>
* Re: [dpdk-dev] [PATCH v4 1/3] doc: add platform device
2017-10-21 9:41 ` Jerin Jacob
@ 2017-10-21 21:09 ` Thomas Monjalon
0 siblings, 0 replies; 78+ messages in thread
From: Thomas Monjalon @ 2017-10-21 21:09 UTC (permalink / raw)
To: Jerin Jacob; +Cc: Santosh Shukla, dev, john.mcnamara
21/10/2017 11:41, Jerin Jacob:
> > +Platform Device Drivers
> > +=======================
>
> IMO, we should not limit this to platform-specific drivers. I think all
> platform-specific information can go here. For the x86 case, maybe the x86
> tuning guide can be moved here from the QSG.
+1
It must become a place to describe any platform-specific stuff.
* Re: [dpdk-dev] [PATCH v4 1/3] doc: add platform device
2017-10-20 15:21 ` [dpdk-dev] [PATCH v4 1/3] doc: add platform device Santosh Shukla
2017-10-21 9:41 ` Jerin Jacob
@ 2017-10-23 14:35 ` Mcnamara, John
1 sibling, 0 replies; 78+ messages in thread
From: Mcnamara, John @ 2017-10-23 14:35 UTC (permalink / raw)
To: Santosh Shukla, dev; +Cc: thomas, jerin.jacob
> -----Original Message-----
> From: Santosh Shukla [mailto:santosh.shukla@caviumnetworks.com]
> Sent: Friday, October 20, 2017 4:21 PM
> To: dev@dpdk.org
> Cc: thomas@monjalon.net; jerin.jacob@caviumnetworks.com; Mcnamara, John
> <john.mcnamara@intel.com>; Santosh Shukla
> <santosh.shukla@caviumnetworks.com>
> Subject: [PATCH v4 1/3] doc: add platform device
>
> This commit adds a section to the docs listing the platform device PMDs
> available.
>
> It then adds the octeontx platform driver to the listed platform devices.
>
> The patch also removes duplicated platform-specific setup information from
> eventdevs/octeontx.rst and nics/octeontx.rst and moves it into
> platform/octeontx.rst.
>
> ...
> diff --git a/doc/guides/index.rst b/doc/guides/index.rst index
> 5b6eb7ec5..9c24dcb46 100644
> --- a/doc/guides/index.rst
> +++ b/doc/guides/index.rst
> @@ -47,3 +47,4 @@ DPDK documentation
> contributing/index
> rel_notes/index
> faq/index
> + platform/index
The new doc should go after the Event Device section and before the
Contributors guide.
> ...
> +Platform Device Drivers
> +=======================
> +
> +The following are a list of platform device PMD, That has information
> +about their common hw blocks and steps to setup platform device.
Maybe something like this instead:
Platform Specific Guides
========================
The following are platform specific guides and setup information.
John
* [dpdk-dev] [PATCH v4 2/3] doc: add mempool and octeontx mempool device
2017-10-20 15:21 ` [dpdk-dev] [PATCH v4 0/3] Octeontx doc misc Santosh Shukla
2017-10-20 15:21 ` [dpdk-dev] [PATCH v4 1/3] doc: add platform device Santosh Shukla
@ 2017-10-20 15:21 ` Santosh Shukla
2017-10-23 14:48 ` Mcnamara, John
2017-10-20 15:21 ` [dpdk-dev] [PATCH v4 3/3] doc: use correct mempool ops handle name Santosh Shukla
` (2 subsequent siblings)
4 siblings, 1 reply; 78+ messages in thread
From: Santosh Shukla @ 2017-10-20 15:21 UTC (permalink / raw)
To: dev; +Cc: thomas, jerin.jacob, john.mcnamara, Santosh Shukla
This commit adds a section to the docs listing the mempool
device PMDs available.
It then adds the octeontx fpavf mempool PMD to the listed mempool
devices.
Cc: John McNamara <john.mcnamara@intel.com>
Signed-off-by: Santosh Shukla <santosh.shukla@caviumnetworks.com>
---
v3 --> v4:
- Moved the Prerequisites section from mempool/octeontx.rst to
platform/octeontx.rst.
doc/guides/index.rst | 1 +
doc/guides/mempool/index.rst | 40 +++++++++++++++
doc/guides/mempool/octeontx.rst | 104 +++++++++++++++++++++++++++++++++++++++
doc/guides/platform/octeontx.rst | 3 ++
4 files changed, 148 insertions(+)
create mode 100644 doc/guides/mempool/index.rst
create mode 100644 doc/guides/mempool/octeontx.rst
diff --git a/doc/guides/index.rst b/doc/guides/index.rst
index 9c24dcb46..c92bb8003 100644
--- a/doc/guides/index.rst
+++ b/doc/guides/index.rst
@@ -48,3 +48,4 @@ DPDK documentation
rel_notes/index
faq/index
platform/index
+ mempool/index
diff --git a/doc/guides/mempool/index.rst b/doc/guides/mempool/index.rst
new file mode 100644
index 000000000..440fb175a
--- /dev/null
+++ b/doc/guides/mempool/index.rst
@@ -0,0 +1,40 @@
+.. BSD LICENSE
+ Copyright(c) 2017 Cavium Inc. All rights reserved.
+
+ Redistribution and use in source and binary forms, with or without
+ modification, are permitted provided that the following conditions
+ are met:
+
+ * Redistributions of source code must retain the above copyright
+ notice, this list of conditions and the following disclaimer.
+ * Redistributions in binary form must reproduce the above copyright
+ notice, this list of conditions and the following disclaimer in
+ the documentation and/or other materials provided with the
+ distribution.
+ * Neither the name of Intel Corporation nor the names of its
+ contributors may be used to endorse or promote products derived
+ from this software without specific prior written permission.
+
+ THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+Mempool Device Driver
+=====================
+
+The following is a list of mempool PMDs, which can be used from an
+application through the mempool API.
+
+.. toctree::
+ :maxdepth: 2
+ :numbered:
+
+ octeontx
diff --git a/doc/guides/mempool/octeontx.rst b/doc/guides/mempool/octeontx.rst
new file mode 100644
index 000000000..88dabc5d2
--- /dev/null
+++ b/doc/guides/mempool/octeontx.rst
@@ -0,0 +1,104 @@
+.. BSD LICENSE
+ Copyright (C) Cavium, Inc. 2017. All rights reserved.
+
+ Redistribution and use in source and binary forms, with or without
+ modification, are permitted provided that the following conditions
+ are met:
+
+ * Redistributions of source code must retain the above copyright
+ notice, this list of conditions and the following disclaimer.
+ * Redistributions in binary form must reproduce the above copyright
+ notice, this list of conditions and the following disclaimer in
+ the documentation and/or other materials provided with the
+ distribution.
+ * Neither the name of Cavium, Inc nor the names of its
+ contributors may be used to endorse or promote products derived
+ from this software without specific prior written permission.
+
+ THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+OCTEONTX FPAVF Mempool Driver
+=============================
+
+The OCTEONTX FPAVF PMD (**librte_mempool_octeontx**) is a mempool
+driver for the offload mempool device found in the **Cavium OCTEONTX**
+SoC family.
+
+More information can be found at `Cavium, Inc Official Website
+<http://www.cavium.com/OCTEON-TX_ARM_Processors.html>`_.
+
+Feature
+-------
+
+Features of the OCTEONTX FPAVF PMD are:
+
+- 32 SR-IOV Virtual functions
+- 32 Pools
+- HW mempool manager
+
+Supported OCTEONTX SoCs
+-----------------------
+
+- CN83xx
+
+Prerequisites
+-------------
+
+Ref: `OCTEONTX Platform Driver` for setup information of PMD.
+
+Pre-Installation Configuration
+------------------------------
+
+Config File Options
+~~~~~~~~~~~~~~~~~~~
+
+The following options can be modified in the ``config`` file.
+Please note that enabling debugging options may affect system performance.
+
+- ``CONFIG_RTE_MBUF_DEFAULT_MEMPOOL_OPS`` (set to ``octeontx_fpavf``)
+
+ Set default mempool ops to octeontx_fpavf.
+
+- ``CONFIG_RTE_LIBRTE_OCTEONTX_MEMPOOL`` (default ``y``)
+
+ Toggle compilation of the ``librte_mempool_octeontx`` driver.
+
+- ``CONFIG_RTE_LIBRTE_OCTEONTX_MEMPOOL_DEBUG`` (default ``n``)
+
+ Toggle display of generic debugging messages.
+
+Driver Compilation
+~~~~~~~~~~~~~~~~~~
+
+To compile the OCTEONTX FPAVF MEMPOOL PMD for Linux arm64 gcc target, run the
+following ``make`` command:
+
+.. code-block:: console
+
+ cd <DPDK-source-directory>
+ make config T=arm64-thunderx-linuxapp-gcc test-build
+
+
+Initialization
+--------------
+
+The octeontx fpavf mempool is initialized in the same way as other
+mempool drivers, such as ring. However, the user needs to pass
+``--base-virtaddr`` as a command line parameter to the application,
+for example the test_mempool.c test application.
+
+Example:
+
+.. code-block:: console
+
+ ./build/app/test -c 0xf --base-virtaddr=0x100000000000 \
+ --mbuf-pool-ops-name="octeontx_fpavf"
diff --git a/doc/guides/platform/octeontx.rst b/doc/guides/platform/octeontx.rst
index 11cff6a11..bd2bb156b 100644
--- a/doc/guides/platform/octeontx.rst
+++ b/doc/guides/platform/octeontx.rst
@@ -43,6 +43,9 @@ Common Offload HW Block Drivers
1. **Eventdev Driver**
Ref:`OCTEONTX SSOVF Eventdev Driver`
+2. **Mempool Driver**
+ Ref:`OCTEONTX FPAVF Mempool Driver`
+
Steps To Setup Platform Driver
------------------------------
--
2.14.1
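As a concrete illustration of the Config File Options section in the patch
above, the three options would be set in the build-time configuration
roughly as follows (a minimal sketch; the surrounding layout of
``config/common_base`` is an assumption, not part of the patch):

.. code-block:: console

    # Illustrative excerpt of the DPDK build config after enabling the
    # octeontx fpavf mempool driver and making it the default handler.
    CONFIG_RTE_MBUF_DEFAULT_MEMPOOL_OPS="octeontx_fpavf"
    CONFIG_RTE_LIBRTE_OCTEONTX_MEMPOOL=y
    CONFIG_RTE_LIBRTE_OCTEONTX_MEMPOOL_DEBUG=n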
* Re: [dpdk-dev] [PATCH v4 2/3] doc: add mempool and octeontx mempool device
2017-10-20 15:21 ` [dpdk-dev] [PATCH v4 2/3] doc: add mempool and octeontx mempool device Santosh Shukla
@ 2017-10-23 14:48 ` Mcnamara, John
0 siblings, 0 replies; 78+ messages in thread
From: Mcnamara, John @ 2017-10-23 14:48 UTC (permalink / raw)
To: Santosh Shukla, dev; +Cc: thomas, jerin.jacob
> -----Original Message-----
> From: Santosh Shukla [mailto:santosh.shukla@caviumnetworks.com]
> Sent: Friday, October 20, 2017 4:21 PM
> To: dev@dpdk.org
> Cc: thomas@monjalon.net; jerin.jacob@caviumnetworks.com; Mcnamara, John
> <john.mcnamara@intel.com>; Santosh Shukla
> <santosh.shukla@caviumnetworks.com>
> Subject: [PATCH v4 2/3] doc: add mempool and octeontx mempool device
>
> This commit adds a section to the docs listing the mempool device PMDs
> available.
>
> It then adds the octeontx fpavf mempool PMD to the listed mempool devices.
>
> ...
> diff --git a/doc/guides/index.rst b/doc/guides/index.rst index
> 9c24dcb46..c92bb8003 100644
> --- a/doc/guides/index.rst
> +++ b/doc/guides/index.rst
> @@ -48,3 +48,4 @@ DPDK documentation
> rel_notes/index
> faq/index
> platform/index
> + mempool/index
This new section should go before platform and after eventdev.
> +Feature
> +-------
Probably "Features" is better.
> +
> +Features of the OCTEONTX FPAVF PMD are:
> +
> +- 32 SR-IOV Virtual functions
> +- 32 Pools
> +- HW mempool manager
> +
> +Supported OCTEONTX SoCs
> +-----------------------
> +
> +- CN83xx
> +
> +Prerequisites
> +-------------
> +
> +Ref: `OCTEONTX Platform Driver` for setup information of PMD.
You could use a link here, like this:
See :doc:`../platform/octeontx` for setup information.
John
* [dpdk-dev] [PATCH v4 3/3] doc: use correct mempool ops handle name
2017-10-20 15:21 ` [dpdk-dev] [PATCH v4 0/3] Octeontx doc misc Santosh Shukla
2017-10-20 15:21 ` [dpdk-dev] [PATCH v4 1/3] doc: add platform device Santosh Shukla
2017-10-20 15:21 ` [dpdk-dev] [PATCH v4 2/3] doc: add mempool and octeontx mempool device Santosh Shukla
@ 2017-10-20 15:21 ` Santosh Shukla
2017-10-21 9:42 ` Jerin Jacob
2017-10-23 13:12 ` Mcnamara, John
2017-10-20 16:07 ` [dpdk-dev] [PATCH v4 0/3] Octeontx doc misc Mcnamara, John
2017-11-07 6:59 ` [dpdk-dev] [PATCH v5 0/3] Doc misc Santosh Shukla
4 siblings, 2 replies; 78+ messages in thread
From: Santosh Shukla @ 2017-10-20 15:21 UTC (permalink / raw)
To: dev; +Cc: thomas, jerin.jacob, john.mcnamara, Santosh Shukla
Fixes: f820b5896631 ("doc: add octeontx ethdev driver documentation")
Signed-off-by: Santosh Shukla <santosh.shukla@caviumnetworks.com>
---
doc/guides/nics/octeontx.rst | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/doc/guides/nics/octeontx.rst b/doc/guides/nics/octeontx.rst
index 2527aa3e3..d40d1f94b 100644
--- a/doc/guides/nics/octeontx.rst
+++ b/doc/guides/nics/octeontx.rst
@@ -195,7 +195,7 @@ Example:
.. code-block:: console
- ./your_dpdk_application --mbuf-pool-ops="octeontx_fpavf" \
+ ./your_dpdk_application --mbuf-pool-ops-name="octeontx_fpavf" \
--vdev='event_octeontx' \
--vdev="eth_octeontx,nr_port=2"
--
2.14.1
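Beyond the renamed ``--mbuf-pool-ops-name`` EAL option, an application can
also bind the ops by the same handle name at pool-creation time. A minimal
sketch, assuming the octeontx fpavf PMD is linked in (pool name and sizes
are illustrative, not from the patch):

.. code-block:: c

    #include <rte_mempool.h>
    #include <rte_lcore.h>

    static struct rte_mempool *
    create_fpavf_pool(void)
    {
        struct rte_mempool *mp;

        /* Create an empty pool, then attach the HW-backed ops using the
         * same handle name that the EAL option takes. */
        mp = rte_mempool_create_empty("fpavf_pool", 8192, 2048,
                                      256, 0, rte_socket_id(), 0);
        if (mp == NULL)
            return NULL;

        /* Fails if the octeontx fpavf driver is not available. */
        if (rte_mempool_set_ops_byname(mp, "octeontx_fpavf", NULL) != 0 ||
            rte_mempool_populate_default(mp) < 0) {
            rte_mempool_free(mp);
            return NULL;
        }

        return mp;
    }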
* Re: [dpdk-dev] [PATCH v4 3/3] doc: use correct mempool ops handle name
2017-10-20 15:21 ` [dpdk-dev] [PATCH v4 3/3] doc: use correct mempool ops handle name Santosh Shukla
@ 2017-10-21 9:42 ` Jerin Jacob
2017-10-23 13:12 ` Mcnamara, John
1 sibling, 0 replies; 78+ messages in thread
From: Jerin Jacob @ 2017-10-21 9:42 UTC (permalink / raw)
To: Santosh Shukla; +Cc: dev, thomas, john.mcnamara
-----Original Message-----
> Date: Fri, 20 Oct 2017 20:51:24 +0530
> From: Santosh Shukla <santosh.shukla@caviumnetworks.com>
> To: dev@dpdk.org
> Cc: thomas@monjalon.net, jerin.jacob@caviumnetworks.com,
> john.mcnamara@intel.com, Santosh Shukla
> <santosh.shukla@caviumnetworks.com>
> Subject: [PATCH v4 3/3] doc: use correct mempool ops handle name
> X-Mailer: git-send-email 2.14.1
>
> Fixes: f820b5896631 ("doc: add octeontx ethdev driver documentation")
>
> Signed-off-by: Santosh Shukla <santosh.shukla@caviumnetworks.com>
Acked-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
> ---
> doc/guides/nics/octeontx.rst | 2 +-
> 1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/doc/guides/nics/octeontx.rst b/doc/guides/nics/octeontx.rst
> index 2527aa3e3..d40d1f94b 100644
> --- a/doc/guides/nics/octeontx.rst
> +++ b/doc/guides/nics/octeontx.rst
> @@ -195,7 +195,7 @@ Example:
>
> .. code-block:: console
>
> - ./your_dpdk_application --mbuf-pool-ops="octeontx_fpavf" \
> + ./your_dpdk_application --mbuf-pool-ops-name="octeontx_fpavf" \
> --vdev='event_octeontx' \
> --vdev="eth_octeontx,nr_port=2"
>
> --
> 2.14.1
>
* Re: [dpdk-dev] [PATCH v4 3/3] doc: use correct mempool ops handle name
2017-10-20 15:21 ` [dpdk-dev] [PATCH v4 3/3] doc: use correct mempool ops handle name Santosh Shukla
2017-10-21 9:42 ` Jerin Jacob
@ 2017-10-23 13:12 ` Mcnamara, John
1 sibling, 0 replies; 78+ messages in thread
From: Mcnamara, John @ 2017-10-23 13:12 UTC (permalink / raw)
To: Santosh Shukla, dev; +Cc: thomas, jerin.jacob
> -----Original Message-----
> From: Santosh Shukla [mailto:santosh.shukla@caviumnetworks.com]
> Sent: Friday, October 20, 2017 4:21 PM
> To: dev@dpdk.org
> Cc: thomas@monjalon.net; jerin.jacob@caviumnetworks.com; Mcnamara, John
> <john.mcnamara@intel.com>; Santosh Shukla
> <santosh.shukla@caviumnetworks.com>
> Subject: [PATCH v4 3/3] doc: use correct mempool ops handle name
>
> Fixes: f820b5896631 ("doc: add octeontx ethdev driver documentation")
>
> Signed-off-by: Santosh Shukla <santosh.shukla@caviumnetworks.com>
Acked-by: John McNamara <john.mcnamara@intel.com>
* Re: [dpdk-dev] [PATCH v4 0/3] Octeontx doc misc
2017-10-20 15:21 ` [dpdk-dev] [PATCH v4 0/3] Octeontx doc misc Santosh Shukla
` (2 preceding siblings ...)
2017-10-20 15:21 ` [dpdk-dev] [PATCH v4 3/3] doc: use correct mempool ops handle name Santosh Shukla
@ 2017-10-20 16:07 ` Mcnamara, John
2017-10-20 21:10 ` Thomas Monjalon
2017-11-07 6:59 ` [dpdk-dev] [PATCH v5 0/3] Doc misc Santosh Shukla
4 siblings, 1 reply; 78+ messages in thread
From: Mcnamara, John @ 2017-10-20 16:07 UTC (permalink / raw)
To: Santosh Shukla, dev; +Cc: thomas, jerin.jacob
> -----Original Message-----
> From: Santosh Shukla [mailto:santosh.shukla@caviumnetworks.com]
> Sent: Friday, October 20, 2017 4:21 PM
> To: dev@dpdk.org
> Cc: thomas@monjalon.net; jerin.jacob@caviumnetworks.com; Mcnamara, John
> <john.mcnamara@intel.com>; Santosh Shukla
> <santosh.shukla@caviumnetworks.com>
> Subject: [PATCH v4 0/3] Octeontx doc misc
>
> Patch 1: Introduce platform/ entry in guide. That has information about
> platform drivers. ( As per discussion[1]) Patch 2: Introduce mempool/
> entry in guide. (Refer discussion[1]) Patch 3: Misc Fix for
> nic/octeontx.rst
>
Hi,
I may be missing context from other patches/emails but what is a Platform Device Driver?
John
* Re: [dpdk-dev] [PATCH v4 0/3] Octeontx doc misc
2017-10-20 16:07 ` [dpdk-dev] [PATCH v4 0/3] Octeontx doc misc Mcnamara, John
@ 2017-10-20 21:10 ` Thomas Monjalon
2017-10-23 14:02 ` Mcnamara, John
0 siblings, 1 reply; 78+ messages in thread
From: Thomas Monjalon @ 2017-10-20 21:10 UTC (permalink / raw)
To: Mcnamara, John, Santosh Shukla; +Cc: dev, jerin.jacob
20/10/2017 18:07, Mcnamara, John:
> From: Santosh Shukla [mailto:santosh.shukla@caviumnetworks.com]
> > Patch 1: Introduce platform/ entry in guide. That has information about
> > platform drivers. ( As per discussion[1]) Patch 2: Introduce mempool/
> > entry in guide. (Refer discussion[1]) Patch 3: Misc Fix for
> > nic/octeontx.rst
> >
>
> Hi,
>
> I may be missing context from other patches/emails but what is a Platform Device Driver?
The idea is to add a guide for platforms (x86, Power8, DPAA, DPAA2, ThunderX, OcteonTX, Armada).
We could move the x86 tuning from the QSG to this platform guide.
For the SoCs, it is also a good way to reference docs for the drivers related to the platform.
Santosh, I know you like referencing previous discussions. It's good.
It should not prevent you from doing a summary where needed :)
Thanks for bringing this idea to life.
* Re: [dpdk-dev] [PATCH v4 0/3] Octeontx doc misc
2017-10-20 21:10 ` Thomas Monjalon
@ 2017-10-23 14:02 ` Mcnamara, John
0 siblings, 0 replies; 78+ messages in thread
From: Mcnamara, John @ 2017-10-23 14:02 UTC (permalink / raw)
To: Thomas Monjalon, Santosh Shukla; +Cc: dev, jerin.jacob
> -----Original Message-----
> From: Thomas Monjalon [mailto:thomas@monjalon.net]
> Sent: Friday, October 20, 2017 10:11 PM
> To: Mcnamara, John <john.mcnamara@intel.com>; Santosh Shukla
> <santosh.shukla@caviumnetworks.com>
> Cc: dev@dpdk.org; jerin.jacob@caviumnetworks.com
> Subject: Re: [PATCH v4 0/3] Octeontx doc misc
>
> 20/10/2017 18:07, Mcnamara, John:
> > From: Santosh Shukla [mailto:santosh.shukla@caviumnetworks.com]
> > > Patch 1: Introduce platform/ entry in guide. That has information
> > > about platform drivers. ( As per discussion[1]) Patch 2: Introduce
> > > mempool/ entry in guide. (Refer discussion[1]) Patch 3: Misc Fix for
> > > nic/octeontx.rst
> > >
> >
> > Hi,
> >
> > I may be missing context from other patches/emails but what is a
> Platform Device Driver?
>
> The idea is to add a guide for platforms (x86, Power8, DPAA, DPAA2,
> ThunderX, OcteonTX, Armada).
> We could move the x86 tuning from the QSG to this platform guide.
> For the SoCs, it is also a good way to reference docs for the drivers
> related to the platform.
Ok. Seems like a good idea. But should it be called something like
"Platform Specific Guides"?
John
* [dpdk-dev] [PATCH v5 0/3] Doc misc
2017-10-20 15:21 ` [dpdk-dev] [PATCH v4 0/3] Octeontx doc misc Santosh Shukla
` (3 preceding siblings ...)
2017-10-20 16:07 ` [dpdk-dev] [PATCH v4 0/3] Octeontx doc misc Mcnamara, John
@ 2017-11-07 6:59 ` Santosh Shukla
2017-11-07 6:59 ` [dpdk-dev] [PATCH v5 1/3] doc: add platform guide Santosh Shukla
` (3 more replies)
4 siblings, 4 replies; 78+ messages in thread
From: Santosh Shukla @ 2017-11-07 6:59 UTC (permalink / raw)
To: dev; +Cc: john.mcnamara, thomas, jerin.jacob, Santosh Shukla
v5:
Introduce a platform specific guide in the docs. It can hold any
misc/common information about a platform, such as a quick start guide,
tuning tips, and how to set up the platform.
It can also carry information about the platform's common HW blocks, etc.
The series contains the Octeontx-specific misc doc changes.
v4 --> v5:
* Replaced `Ref:` with `See :doc:` as per John's review comment.
* Renamed Feature to Features (John).
* Renamed the section from `Platform Device Driver` to
`Platform Specific Guides` (John).
Thanks.
Discussion history;
[1] http://dpdk.org/dev/patchwork/patch/29893/
Santosh Shukla (3):
doc: add platform guide
doc: add mempool and octeontx mempool device
doc: use correct mempool ops handle name
doc/guides/eventdevs/octeontx.rst | 28 +---------
doc/guides/index.rst | 2 +
doc/guides/mempool/index.rst | 40 +++++++++++++++
doc/guides/mempool/octeontx.rst | 104 ++++++++++++++++++++++++++++++++++++++
doc/guides/nics/octeontx.rst | 31 +-----------
doc/guides/platform/index.rst | 39 ++++++++++++++
doc/guides/platform/octeontx.rst | 81 +++++++++++++++++++++++++++++
7 files changed, 269 insertions(+), 56 deletions(-)
create mode 100644 doc/guides/mempool/index.rst
create mode 100644 doc/guides/mempool/octeontx.rst
create mode 100644 doc/guides/platform/index.rst
create mode 100644 doc/guides/platform/octeontx.rst
--
2.7.4
* [dpdk-dev] [PATCH v5 1/3] doc: add platform guide
2017-11-07 6:59 ` [dpdk-dev] [PATCH v5 0/3] Doc misc Santosh Shukla
@ 2017-11-07 6:59 ` Santosh Shukla
2017-11-10 17:42 ` Mcnamara, John
2017-11-07 6:59 ` [dpdk-dev] [PATCH v5 2/3] doc: add mempool and octeontx mempool device Santosh Shukla
` (2 subsequent siblings)
3 siblings, 1 reply; 78+ messages in thread
From: Santosh Shukla @ 2017-11-07 6:59 UTC (permalink / raw)
To: dev; +Cc: john.mcnamara, thomas, jerin.jacob, Santosh Shukla
This commit adds a section to the docs listing the platform
guides for the PMDs.
It then adds the octeontx platform guide to that list.
Patch also removes platform specific duplicate setup information from
eventdev/octeontx.rst and nics/octeontx.rst, and updates
platform/octeontx.rst.
Cc: John McNamara <john.mcnamara@intel.com>
Signed-off-by: Santosh Shukla <santosh.shukla@caviumnetworks.com>
---
v4 --> v5:
* Replaced `Ref:` with `See :doc:` as per John's review comment.
* Renamed Feature to Features (John).
* Renamed the section from `Platform Device Driver` to
  `Platform Specific Guides` (John).
* s/Intel Corporation/Cavium Inc/ (Jerin)
doc/guides/eventdevs/octeontx.rst | 28 +-------------
doc/guides/index.rst | 1 +
doc/guides/nics/octeontx.rst | 29 +--------------
doc/guides/platform/index.rst | 39 ++++++++++++++++++++
doc/guides/platform/octeontx.rst | 77 +++++++++++++++++++++++++++++++++++++++
5 files changed, 119 insertions(+), 55 deletions(-)
create mode 100644 doc/guides/platform/index.rst
create mode 100644 doc/guides/platform/octeontx.rst
diff --git a/doc/guides/eventdevs/octeontx.rst b/doc/guides/eventdevs/octeontx.rst
index 7e601a0..cef004a 100644
--- a/doc/guides/eventdevs/octeontx.rst
+++ b/doc/guides/eventdevs/octeontx.rst
@@ -63,33 +63,7 @@ Supported OCTEONTX SoCs
Prerequisites
-------------
-There are three main pre-perquisites for executing SSOVF PMD on a OCTEONTX
-compatible board:
-
-1. **OCTEONTX Linux kernel PF driver for Network acceleration HW blocks**
-
- The OCTEONTX Linux kernel drivers (including the required PF driver for the
- SSOVF) are available on Github at `octeontx-kmod <https://github.com/caviumnetworks/octeontx-kmod>`_
- along with build, install and dpdk usage instructions.
-
-2. **ARM64 Tool Chain**
-
- For example, the *aarch64* Linaro Toolchain, which can be obtained from
- `here <https://releases.linaro.org/components/toolchain/binaries/4.9-2017.01/aarch64-linux-gnu>`_.
-
-3. **Rootfile system**
-
- Any *aarch64* supporting filesystem can be used. For example,
- Ubuntu 15.10 (Wily) or 16.04 LTS (Xenial) userland which can be obtained
- from `<http://cdimage.ubuntu.com/ubuntu-base/releases/16.04/release/ubuntu-base-16.04.1-base-arm64.tar.gz>`_.
-
- As an alternative method, SSOVF PMD can also be executed using images provided
- as part of SDK from Cavium. The SDK includes all the above prerequisites necessary
- to bring up a OCTEONTX board.
-
- SDK and related information can be obtained from: `Cavium support site <https://support.cavium.com/>`_.
-
-- Follow the DPDK :ref:`Getting Started Guide for Linux <linux_gsg>` to setup the basic DPDK environment.
+See :doc:`../platform/octeontx` for setup information.
Pre-Installation Configuration
------------------------------
diff --git a/doc/guides/index.rst b/doc/guides/index.rst
index 5b6eb7e..91a7ebf 100644
--- a/doc/guides/index.rst
+++ b/doc/guides/index.rst
@@ -44,6 +44,7 @@ DPDK documentation
nics/index
cryptodevs/index
eventdevs/index
+ platform/index
contributing/index
rel_notes/index
faq/index
diff --git a/doc/guides/nics/octeontx.rst b/doc/guides/nics/octeontx.rst
index a6631cd..e4f8a4c 100644
--- a/doc/guides/nics/octeontx.rst
+++ b/doc/guides/nics/octeontx.rst
@@ -71,34 +71,7 @@ The features supported by the device and not yet supported by this PMD include:
Prerequisites
-------------
-There are three main pre-perquisites for executing OCTEONTX PMD on a OCTEONTX
-compatible board:
-
-1. **OCTEONTX Linux kernel PF driver for Network acceleration HW blocks**
-
- The OCTEONTX Linux kernel drivers (including the required PF driver for the
- all network acceleration blocks) are available on GitHub at
- `octeontx-kmod <https://github.com/caviumnetworks/octeontx-kmod>`_
- along with build, install and dpdk usage instructions.
-
-2. **ARM64 Tool Chain**
-
- For example, the *aarch64* Linaro Toolchain, which can be obtained from
- `here <https://releases.linaro.org/components/toolchain/binaries/4.9-2017.01/aarch64-linux-gnu>`_.
-
-3. **Rootfile system**
-
- Any *aarch64* supporting filesystem can be used. For example,
- Ubuntu 15.10 (Wily) or 16.04 LTS (Xenial) userland which can be obtained
- from `<http://cdimage.ubuntu.com/ubuntu-base/releases/16.04/release/ubuntu-base-16.04.1-base-arm64.tar.gz>`_.
-
- As an alternative method, OCTEONTX PMD can also be executed using images provided
- as part of SDK from Cavium. The SDK includes all the above prerequisites necessary
- to bring up a OCTEONTX board.
-
- SDK and related information can be obtained from: `Cavium support site <https://support.cavium.com/>`_.
-
-Follow the DPDK :ref:`Getting Started Guide for Linux <linux_gsg>` to setup the basic DPDK environment.
+See :doc:`../platform/octeontx` for setup information.
Pre-Installation Configuration
------------------------------
diff --git a/doc/guides/platform/index.rst b/doc/guides/platform/index.rst
new file mode 100644
index 0000000..69e560c
--- /dev/null
+++ b/doc/guides/platform/index.rst
@@ -0,0 +1,39 @@
+.. BSD LICENSE
+ Copyright (C) Cavium, Inc. 2017. All rights reserved.
+
+ Redistribution and use in source and binary forms, with or without
+ modification, are permitted provided that the following conditions
+ are met:
+
+ * Redistributions of source code must retain the above copyright
+ notice, this list of conditions and the following disclaimer.
+ * Redistributions in binary form must reproduce the above copyright
+ notice, this list of conditions and the following disclaimer in
+ the documentation and/or other materials provided with the
+ distribution.
+ * Neither the name of Cavium Inc nor the names of its
+ contributors may be used to endorse or promote products derived
+ from this software without specific prior written permission.
+
+ THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+Platform Specific Guides
+========================
+
+The following are platform specific guides and setup information.
+
+.. toctree::
+ :maxdepth: 2
+ :numbered:
+
+ octeontx
diff --git a/doc/guides/platform/octeontx.rst b/doc/guides/platform/octeontx.rst
new file mode 100644
index 0000000..34c0339
--- /dev/null
+++ b/doc/guides/platform/octeontx.rst
@@ -0,0 +1,77 @@
+.. BSD LICENSE
+ Copyright (C) Cavium, Inc. 2017. All rights reserved.
+
+ Redistribution and use in source and binary forms, with or without
+ modification, are permitted provided that the following conditions
+ are met:
+
+ * Redistributions of source code must retain the above copyright
+ notice, this list of conditions and the following disclaimer.
+ * Redistributions in binary form must reproduce the above copyright
+ notice, this list of conditions and the following disclaimer in
+ the documentation and/or other materials provided with the
+ distribution.
+ * Neither the name of Cavium, Inc nor the names of its
+ contributors may be used to endorse or promote products derived
+ from this software without specific prior written permission.
+
+ THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+OCTEONTX Board Support Package
+==============================
+
+This document describes the steps to set up the octeontx platform
+and the common offload HW block drivers of the
+**Cavium OCTEONTX** SoC family.
+
+
+More information about SoC can be found at `Cavium, Inc Official Website
+<http://www.cavium.com/OCTEON-TX_ARM_Processors.html>`_.
+
+Common Offload HW Block Drivers
+-------------------------------
+
+1. **Eventdev Driver**
+ See :doc:`../eventdevs/octeontx` for octeontx ssovf eventdev driver
+ information.
+
+Steps To Setup Platform
+-----------------------
+
+There are three main prerequisites for setting up the Platform drivers on
+an OCTEONTX compatible board:
+
+1. **OCTEONTX Linux kernel PF driver for Network acceleration HW blocks**
+
+ The OCTEONTX Linux kernel drivers (including the required PF driver for the
+ Platform drivers) are available on Github at `octeontx-kmod <https://github.com/caviumnetworks/octeontx-kmod>`_
+ along with build, install and dpdk usage instructions.
+
+2. **ARM64 Tool Chain**
+
+ For example, the *aarch64* Linaro Toolchain, which can be obtained from
+ `here <https://releases.linaro.org/components/toolchain/binaries/4.9-2017.01/aarch64-linux-gnu>`_.
+
+3. **Root file system**
+
+ Any *aarch64* supporting filesystem can be used. For example,
+ Ubuntu 15.10 (Wily) or 16.04 LTS (Xenial) userland which can be obtained
+ from `<http://cdimage.ubuntu.com/ubuntu-base/releases/16.04/release/ubuntu-base-16.04.1-base-arm64.tar.gz>`_.
+
+ As an alternative method, the Platform drivers can also be executed using images provided
+ as part of the SDK from Cavium. The SDK includes all the above prerequisites necessary
+ to bring up an OCTEONTX board.
+
+ SDK and related information can be obtained from: `Cavium support site <https://support.cavium.com/>`_.
+
+- Follow the DPDK :doc:`../linux_gsg/index` to set up the basic DPDK environment.
--
2.7.4
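To make the ARM64 Tool Chain step above concrete, a typical cross-build of
DPDK for this platform might look as follows (a sketch; the toolchain
install path is an assumption, not from the patch):

.. code-block:: console

    # Put the aarch64 Linaro toolchain on the PATH (path illustrative).
    export PATH=/opt/gcc-linaro-aarch64-linux-gnu/bin:$PATH

    # Configure and cross-compile DPDK for the OCTEONTX target.
    make config T=arm64-thunderx-linuxapp-gcc
    make -j CROSS=aarch64-linux-gnu-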
* Re: [dpdk-dev] [PATCH v5 1/3] doc: add platform guide
2017-11-07 6:59 ` [dpdk-dev] [PATCH v5 1/3] doc: add platform guide Santosh Shukla
@ 2017-11-10 17:42 ` Mcnamara, John
0 siblings, 0 replies; 78+ messages in thread
From: Mcnamara, John @ 2017-11-10 17:42 UTC (permalink / raw)
To: Santosh Shukla, dev; +Cc: thomas, jerin.jacob
> -----Original Message-----
> From: Santosh Shukla [mailto:santosh.shukla@caviumnetworks.com]
> Sent: Tuesday, November 7, 2017 7:00 AM
> To: dev@dpdk.org
> Cc: Mcnamara, John <john.mcnamara@intel.com>; thomas@monjalon.net;
> jerin.jacob@caviumnetworks.com; Santosh Shukla
> <santosh.shukla@caviumnetworks.com>
> Subject: [PATCH v5 1/3] doc: add platform guide
>
> This commit adds a section to the docs listing the platform guides for the
> PMDs.
>
> It then adds the octeontx platform guide to that list.
>
> Patch also removes platform specific duplicate setup information from
> eventdev/octeontx.rst and nics/octeontx.rst, and updates
> platform/octeontx.rst.
Acked-by: John McNamara <john.mcnamara@intel.com>
* [dpdk-dev] [PATCH v5 2/3] doc: add mempool and octeontx mempool device
2017-11-07 6:59 ` [dpdk-dev] [PATCH v5 0/3] Doc misc Santosh Shukla
2017-11-07 6:59 ` [dpdk-dev] [PATCH v5 1/3] doc: add platform guide Santosh Shukla
@ 2017-11-07 6:59 ` Santosh Shukla
2017-11-10 17:43 ` Mcnamara, John
2017-11-07 6:59 ` [dpdk-dev] [PATCH v5 3/3] doc: use correct mempool ops handle name Santosh Shukla
2017-11-12 3:52 ` [dpdk-dev] [PATCH v5 0/3] Doc misc Thomas Monjalon
3 siblings, 1 reply; 78+ messages in thread
From: Santosh Shukla @ 2017-11-07 6:59 UTC (permalink / raw)
To: dev; +Cc: john.mcnamara, thomas, jerin.jacob, Santosh Shukla
This commit adds a section to the docs listing the mempool
device PMDs available.
It then adds the octeontx fpavf mempool PMD to the listed mempool
devices.
Cc: John McNamara <john.mcnamara@intel.com>
Signed-off-by: Santosh Shukla <santosh.shukla@caviumnetworks.com>
---
v4 --> v5:
* Replaced `Ref :` with `See :doc:` (John)
* Renamed section from Platform Device Drivers to
Platform Specific Guides
doc/guides/index.rst | 1 +
doc/guides/mempool/index.rst | 40 +++++++++++++++
doc/guides/mempool/octeontx.rst | 104 +++++++++++++++++++++++++++++++++++++++
doc/guides/platform/octeontx.rst | 4 ++
4 files changed, 149 insertions(+)
create mode 100644 doc/guides/mempool/index.rst
create mode 100644 doc/guides/mempool/octeontx.rst
diff --git a/doc/guides/index.rst b/doc/guides/index.rst
index 91a7ebf..f924a7c 100644
--- a/doc/guides/index.rst
+++ b/doc/guides/index.rst
@@ -44,6 +44,7 @@ DPDK documentation
nics/index
cryptodevs/index
eventdevs/index
+ mempool/index
platform/index
contributing/index
rel_notes/index
diff --git a/doc/guides/mempool/index.rst b/doc/guides/mempool/index.rst
new file mode 100644
index 0000000..b3c8e7f
--- /dev/null
+++ b/doc/guides/mempool/index.rst
@@ -0,0 +1,40 @@
+.. BSD LICENSE
+ Copyright(c) 2017 Cavium Inc. All rights reserved.
+
+ Redistribution and use in source and binary forms, with or without
+ modification, are permitted provided that the following conditions
+ are met:
+
+ * Redistributions of source code must retain the above copyright
+ notice, this list of conditions and the following disclaimer.
+ * Redistributions in binary form must reproduce the above copyright
+ notice, this list of conditions and the following disclaimer in
+ the documentation and/or other materials provided with the
+ distribution.
+ * Neither the name of Cavium Inc nor the names of its
+ contributors may be used to endorse or promote products derived
+ from this software without specific prior written permission.
+
+ THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+Mempool Device Driver
+=====================
+
+The following is a list of mempool PMDs, which can be used from an
+application through the mempool API.
+
+.. toctree::
+ :maxdepth: 2
+ :numbered:
+
+ octeontx
diff --git a/doc/guides/mempool/octeontx.rst b/doc/guides/mempool/octeontx.rst
new file mode 100644
index 0000000..b262c82
--- /dev/null
+++ b/doc/guides/mempool/octeontx.rst
@@ -0,0 +1,104 @@
+.. BSD LICENSE
+ Copyright (C) Cavium, Inc. 2017. All rights reserved.
+
+ Redistribution and use in source and binary forms, with or without
+ modification, are permitted provided that the following conditions
+ are met:
+
+ * Redistributions of source code must retain the above copyright
+ notice, this list of conditions and the following disclaimer.
+ * Redistributions in binary form must reproduce the above copyright
+ notice, this list of conditions and the following disclaimer in
+ the documentation and/or other materials provided with the
+ distribution.
+ * Neither the name of Cavium, Inc nor the names of its
+ contributors may be used to endorse or promote products derived
+ from this software without specific prior written permission.
+
+ THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+OCTEONTX FPAVF Mempool Driver
+=============================
+
+The OCTEONTX FPAVF PMD (**librte_mempool_octeontx**) is a mempool
+driver for the offload mempool device found in the **Cavium OCTEONTX**
+SoC family.
+
+More information can be found at `Cavium, Inc Official Website
+<http://www.cavium.com/OCTEON-TX_ARM_Processors.html>`_.
+
+Features
+--------
+
+Features of the OCTEONTX FPAVF PMD are:
+
+- 32 SR-IOV Virtual functions
+- 32 Pools
+- HW mempool manager
+
+Supported OCTEONTX SoCs
+-----------------------
+
+- CN83xx
+
+Prerequisites
+-------------
+
+See :doc:`../platform/octeontx` for setup information.
+
+Pre-Installation Configuration
+------------------------------
+
+Config File Options
+~~~~~~~~~~~~~~~~~~~
+
+The following options can be modified in the ``config`` file.
+Please note that enabling debugging options may affect system performance.
+
+- ``CONFIG_RTE_MBUF_DEFAULT_MEMPOOL_OPS`` (set to ``octeontx_fpavf``)
+
+ Set default mempool ops to octeontx_fpavf.
+
+- ``CONFIG_RTE_LIBRTE_OCTEONTX_MEMPOOL`` (default ``y``)
+
+ Toggle compilation of the ``librte_mempool_octeontx`` driver.
+
+- ``CONFIG_RTE_LIBRTE_OCTEONTX_MEMPOOL_DEBUG`` (default ``n``)
+
+ Toggle display of generic debugging messages.
+
+Driver Compilation
+~~~~~~~~~~~~~~~~~~
+
+To compile the OCTEONTX FPAVF MEMPOOL PMD for Linux arm64 gcc target, run the
+following ``make`` command:
+
+.. code-block:: console
+
+ cd <DPDK-source-directory>
+ make config T=arm64-thunderx-linuxapp-gcc test-build
+
+
+Initialization
+--------------
+
+The octeontx fpavf mempool is initialized in the same way as other
+mempool drivers, such as ring. However, the user needs to pass
+``--base-virtaddr`` as a command line parameter to the application,
+for example the test_mempool.c test application.
+
+Example:
+
+.. code-block:: console
+
+ ./build/app/test -c 0xf --base-virtaddr=0x100000000000 \
+ --mbuf-pool-ops-name="octeontx_fpavf"
diff --git a/doc/guides/platform/octeontx.rst b/doc/guides/platform/octeontx.rst
index 34c0339..fb708ca 100644
--- a/doc/guides/platform/octeontx.rst
+++ b/doc/guides/platform/octeontx.rst
@@ -45,6 +45,10 @@ Common Offload HW Block Drivers
See :doc:`../eventdevs/octeontx` for octeontx ssovf eventdev driver
information.
+2. **Mempool Driver**
+ See :doc:`../mempool/octeontx` for octeontx fpavf mempool driver
+ information.
+
Steps To Setup Platform
-----------------------
--
2.7.4
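As a complement to the console example in the Initialization section above,
the same EAL options can also be supplied from within a test program. A
minimal sketch (the argument values are the ones shown in the doc; the
embedding itself is illustrative):

.. code-block:: c

    #include <rte_eal.h>

    int
    main(void)
    {
        /* Mirror the documented command line: a fixed virtual address
         * base plus the octeontx_fpavf mempool ops handle. */
        char *eal_args[] = {
            "test", "-c", "0xf",
            "--base-virtaddr=0x100000000000",
            "--mbuf-pool-ops-name=octeontx_fpavf",
        };
        int nargs = sizeof(eal_args) / sizeof(eal_args[0]);

        if (rte_eal_init(nargs, eal_args) < 0)
            return -1;

        /* ... create and exercise mempools as usual ... */
        return 0;
    }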
* Re: [dpdk-dev] [PATCH v5 2/3] doc: add mempool and octeontx mempool device
2017-11-07 6:59 ` [dpdk-dev] [PATCH v5 2/3] doc: add mempool and octeontx mempool device Santosh Shukla
@ 2017-11-10 17:43 ` Mcnamara, John
0 siblings, 0 replies; 78+ messages in thread
From: Mcnamara, John @ 2017-11-10 17:43 UTC (permalink / raw)
To: Santosh Shukla, dev; +Cc: thomas, jerin.jacob
> -----Original Message-----
> From: Santosh Shukla [mailto:santosh.shukla@caviumnetworks.com]
> Sent: Tuesday, November 7, 2017 7:00 AM
> To: dev@dpdk.org
> Cc: Mcnamara, John <john.mcnamara@intel.com>; thomas@monjalon.net;
> jerin.jacob@caviumnetworks.com; Santosh Shukla
> <santosh.shukla@caviumnetworks.com>
> Subject: [PATCH v5 2/3] doc: add mempool and octeontx mempool device
>
> This commit adds a section to the docs listing the mempool device PMDs
> available.
>
> It then adds the octeontx fpavf mempool PMD to the listed mempool devices.
Acked-by: John McNamara <john.mcnamara@intel.com>
* [dpdk-dev] [PATCH v5 3/3] doc: use correct mempool ops handle name
2017-11-07 6:59 ` [dpdk-dev] [PATCH v5 0/3] Doc misc Santosh Shukla
2017-11-07 6:59 ` [dpdk-dev] [PATCH v5 1/3] doc: add platform guide Santosh Shukla
2017-11-07 6:59 ` [dpdk-dev] [PATCH v5 2/3] doc: add mempool and octeontx mempool device Santosh Shukla
@ 2017-11-07 6:59 ` Santosh Shukla
2017-11-10 17:43 ` Mcnamara, John
2017-11-12 3:52 ` [dpdk-dev] [PATCH v5 0/3] Doc misc Thomas Monjalon
3 siblings, 1 reply; 78+ messages in thread
From: Santosh Shukla @ 2017-11-07 6:59 UTC (permalink / raw)
To: dev; +Cc: john.mcnamara, thomas, jerin.jacob, Santosh Shukla
Fixes: f820b5896631 ("doc: add octeontx ethdev driver documentation")
Signed-off-by: Santosh Shukla <santosh.shukla@caviumnetworks.com>
Acked-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
Acked-by: John McNamara <john.mcnamara@intel.com>
---
doc/guides/nics/octeontx.rst | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/doc/guides/nics/octeontx.rst b/doc/guides/nics/octeontx.rst
index e4f8a4c..90bb9e5 100644
--- a/doc/guides/nics/octeontx.rst
+++ b/doc/guides/nics/octeontx.rst
@@ -195,7 +195,7 @@ Example:
.. code-block:: console
- ./your_dpdk_application --mbuf-pool-ops="octeontx_fpavf" \
+ ./your_dpdk_application --mbuf-pool-ops-name="octeontx_fpavf" \
--vdev='event_octeontx' \
--vdev="eth_octeontx,nr_port=2"
--
2.7.4
* Re: [dpdk-dev] [PATCH v5 3/3] doc: use correct mempool ops handle name
2017-11-07 6:59 ` [dpdk-dev] [PATCH v5 3/3] doc: use correct mempool ops handle name Santosh Shukla
@ 2017-11-10 17:43 ` Mcnamara, John
0 siblings, 0 replies; 78+ messages in thread
From: Mcnamara, John @ 2017-11-10 17:43 UTC (permalink / raw)
To: Santosh Shukla, dev; +Cc: thomas, jerin.jacob
> -----Original Message-----
> From: Santosh Shukla [mailto:santosh.shukla@caviumnetworks.com]
> Sent: Tuesday, November 7, 2017 7:00 AM
> To: dev@dpdk.org
> Cc: Mcnamara, John <john.mcnamara@intel.com>; thomas@monjalon.net;
> jerin.jacob@caviumnetworks.com; Santosh Shukla
> <santosh.shukla@caviumnetworks.com>
> Subject: [PATCH v5 3/3] doc: use correct mempool ops handle name
>
> Fixes: f820b5896631 ("doc: add octeontx ethdev driver documentation")
>
> Signed-off-by: Santosh Shukla <santosh.shukla@caviumnetworks.com>
> Acked-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
> Acked-by: John McNamara <john.mcnamara@intel.com>
Acked-by: John McNamara <john.mcnamara@intel.com>
* Re: [dpdk-dev] [PATCH v5 0/3] Doc misc
2017-11-07 6:59 ` [dpdk-dev] [PATCH v5 0/3] Doc misc Santosh Shukla
` (2 preceding siblings ...)
2017-11-07 6:59 ` [dpdk-dev] [PATCH v5 3/3] doc: use correct mempool ops handle name Santosh Shukla
@ 2017-11-12 3:52 ` Thomas Monjalon
3 siblings, 0 replies; 78+ messages in thread
From: Thomas Monjalon @ 2017-11-12 3:52 UTC (permalink / raw)
To: Santosh Shukla; +Cc: dev, john.mcnamara, jerin.jacob
> Santosh Shukla (3):
> doc: add platform guide
> doc: add mempool and octeontx mempool device
> doc: use correct mempool ops handle name
Applied, thanks
* Re: [dpdk-dev] [PATCH v3 00/10] Cavium Octeontx external mempool driver
2017-10-08 12:40 ` [dpdk-dev] [PATCH v3 " Santosh Shukla
` (9 preceding siblings ...)
2017-10-08 12:40 ` [dpdk-dev] [PATCH v3 10/10] doc: add mempool and octeontx mempool device Santosh Shukla
@ 2017-10-08 17:16 ` Thomas Monjalon
10 siblings, 0 replies; 78+ messages in thread
From: Thomas Monjalon @ 2017-10-08 17:16 UTC (permalink / raw)
To: Santosh Shukla; +Cc: dev, olivier.matz, jerin.jacob, hemant.agrawal
> Santosh Shukla (10):
> mempool/octeontx: add HW constants
> mempool/octeontx: add build and log infrastructure
> mempool/octeontx: probe fpavf pcie devices
> mempool/octeontx: add support for alloc
> mempool/octeontx: add support for free
> mempool/octeontx: add support for enq and deq
> mempool/octeontx: add support for get count
> mempool/octeontx: add support for get capability
> mempool/octeontx: add support for memory area ops
> doc: add mempool and octeontx mempool device
Applied without doc patch (see comment on it), thanks