* [dpdk-dev] [PATCH v6 0/6] enicpmd: Cisco Systems Inc. VIC Ethernet PMD
@ 2014-11-25 17:26 Sujith Sankar
2014-11-25 17:26 ` [dpdk-dev] [PATCH v6 1/6] enicpmd: License text Sujith Sankar
` (7 more replies)
0 siblings, 8 replies; 36+ messages in thread
From: Sujith Sankar @ 2014-11-25 17:26 UTC (permalink / raw)
To: dev; +Cc: prrao
ENIC PMD is the poll-mode driver for Cisco Systems Inc. VIC Ethernet adapters,
for use with the DPDK suite.
Sujith Sankar (6):
enicpmd: License text
enicpmd: Makefile
enicpmd: VNIC common code partially shared with ENIC kernel mode
driver
enicpmd: pmd specific code
enicpmd: DPDK-ENIC PMD interface
enicpmd: DPDK changes for accommodating ENIC PMD
config/common_linuxapp | 5 +
lib/Makefile | 1 +
lib/librte_pmd_enic/LICENSE | 27 +
lib/librte_pmd_enic/Makefile | 67 ++
lib/librte_pmd_enic/enic.h | 157 ++++
lib/librte_pmd_enic/enic_clsf.c | 244 ++++++
lib/librte_pmd_enic/enic_compat.h | 142 ++++
lib/librte_pmd_enic/enic_etherdev.c | 613 +++++++++++++++
lib/librte_pmd_enic/enic_main.c | 1266 ++++++++++++++++++++++++++++++
lib/librte_pmd_enic/enic_res.c | 221 ++++++
lib/librte_pmd_enic/enic_res.h | 168 ++++
lib/librte_pmd_enic/vnic/cq_desc.h | 126 +++
lib/librte_pmd_enic/vnic/cq_enet_desc.h | 261 ++++++
lib/librte_pmd_enic/vnic/rq_enet_desc.h | 76 ++
lib/librte_pmd_enic/vnic/vnic_cq.c | 117 +++
lib/librte_pmd_enic/vnic/vnic_cq.h | 152 ++++
lib/librte_pmd_enic/vnic/vnic_dev.c | 1063 +++++++++++++++++++++++++
lib/librte_pmd_enic/vnic/vnic_dev.h | 203 +++++
lib/librte_pmd_enic/vnic/vnic_devcmd.h | 774 ++++++++++++++++++
lib/librte_pmd_enic/vnic/vnic_enet.h | 78 ++
lib/librte_pmd_enic/vnic/vnic_intr.c | 83 ++
lib/librte_pmd_enic/vnic/vnic_intr.h | 126 +++
lib/librte_pmd_enic/vnic/vnic_nic.h | 88 +++
lib/librte_pmd_enic/vnic/vnic_resource.h | 97 +++
lib/librte_pmd_enic/vnic/vnic_rq.c | 246 ++++++
lib/librte_pmd_enic/vnic/vnic_rq.h | 282 +++++++
lib/librte_pmd_enic/vnic/vnic_rss.c | 85 ++
lib/librte_pmd_enic/vnic/vnic_rss.h | 61 ++
lib/librte_pmd_enic/vnic/vnic_stats.h | 86 ++
lib/librte_pmd_enic/vnic/vnic_wq.c | 245 ++++++
lib/librte_pmd_enic/vnic/vnic_wq.h | 283 +++++++
lib/librte_pmd_enic/vnic/wq_enet_desc.h | 114 +++
mk/rte.app.mk | 4 +
33 files changed, 7561 insertions(+)
create mode 100644 lib/librte_pmd_enic/LICENSE
create mode 100644 lib/librte_pmd_enic/Makefile
create mode 100644 lib/librte_pmd_enic/enic.h
create mode 100644 lib/librte_pmd_enic/enic_clsf.c
create mode 100644 lib/librte_pmd_enic/enic_compat.h
create mode 100644 lib/librte_pmd_enic/enic_etherdev.c
create mode 100644 lib/librte_pmd_enic/enic_main.c
create mode 100644 lib/librte_pmd_enic/enic_res.c
create mode 100644 lib/librte_pmd_enic/enic_res.h
create mode 100644 lib/librte_pmd_enic/vnic/cq_desc.h
create mode 100644 lib/librte_pmd_enic/vnic/cq_enet_desc.h
create mode 100644 lib/librte_pmd_enic/vnic/rq_enet_desc.h
create mode 100644 lib/librte_pmd_enic/vnic/vnic_cq.c
create mode 100644 lib/librte_pmd_enic/vnic/vnic_cq.h
create mode 100644 lib/librte_pmd_enic/vnic/vnic_dev.c
create mode 100644 lib/librte_pmd_enic/vnic/vnic_dev.h
create mode 100644 lib/librte_pmd_enic/vnic/vnic_devcmd.h
create mode 100644 lib/librte_pmd_enic/vnic/vnic_enet.h
create mode 100644 lib/librte_pmd_enic/vnic/vnic_intr.c
create mode 100644 lib/librte_pmd_enic/vnic/vnic_intr.h
create mode 100644 lib/librte_pmd_enic/vnic/vnic_nic.h
create mode 100644 lib/librte_pmd_enic/vnic/vnic_resource.h
create mode 100644 lib/librte_pmd_enic/vnic/vnic_rq.c
create mode 100644 lib/librte_pmd_enic/vnic/vnic_rq.h
create mode 100644 lib/librte_pmd_enic/vnic/vnic_rss.c
create mode 100644 lib/librte_pmd_enic/vnic/vnic_rss.h
create mode 100644 lib/librte_pmd_enic/vnic/vnic_stats.h
create mode 100644 lib/librte_pmd_enic/vnic/vnic_wq.c
create mode 100644 lib/librte_pmd_enic/vnic/vnic_wq.h
create mode 100644 lib/librte_pmd_enic/vnic/wq_enet_desc.h
--
1.9.1
^ permalink raw reply [flat|nested] 36+ messages in thread
* [dpdk-dev] [PATCH v6 1/6] enicpmd: License text
2014-11-25 17:26 [dpdk-dev] [PATCH v6 0/6] enicpmd: Cisco Systems Inc. VIC Ethernet PMD Sujith Sankar
@ 2014-11-25 17:26 ` Sujith Sankar
2014-11-25 17:26 ` [dpdk-dev] [PATCH v6 2/6] enicpmd: Makefile Sujith Sankar
` (6 subsequent siblings)
7 siblings, 0 replies; 36+ messages in thread
From: Sujith Sankar @ 2014-11-25 17:26 UTC (permalink / raw)
To: dev; +Cc: prrao
Signed-off-by: Sujith Sankar <ssujith@cisco.com>
---
lib/librte_pmd_enic/LICENSE | 27 +++++++++++++++++++++++++++
1 file changed, 27 insertions(+)
create mode 100644 lib/librte_pmd_enic/LICENSE
diff --git a/lib/librte_pmd_enic/LICENSE b/lib/librte_pmd_enic/LICENSE
new file mode 100644
index 0000000..46a27a4
--- /dev/null
+++ b/lib/librte_pmd_enic/LICENSE
@@ -0,0 +1,27 @@
+ * Copyright (c) 2014, Cisco Systems, Inc.
+ * All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ *
+ * 1. Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ *
+ * 2. Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in
+ * the documentation and/or other materials provided with the
+ * distribution.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS
+ * FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE
+ * COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
+ * INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING,
+ * BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
+ * LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
+ * CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
+ * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN
+ * ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+ * POSSIBILITY OF SUCH DAMAGE.
--
1.9.1
* [dpdk-dev] [PATCH v6 2/6] enicpmd: Makefile
2014-11-25 17:26 [dpdk-dev] [PATCH v6 0/6] enicpmd: Cisco Systems Inc. VIC Ethernet PMD Sujith Sankar
2014-11-25 17:26 ` [dpdk-dev] [PATCH v6 1/6] enicpmd: License text Sujith Sankar
@ 2014-11-25 17:26 ` Sujith Sankar
2014-11-25 17:26 ` [dpdk-dev] [PATCH v6 3/6] enicpmd: VNIC common code partially shared with ENIC kernel mode driver Sujith Sankar
` (5 subsequent siblings)
7 siblings, 0 replies; 36+ messages in thread
From: Sujith Sankar @ 2014-11-25 17:26 UTC (permalink / raw)
To: dev; +Cc: prrao
Signed-off-by: Sujith Sankar <ssujith@cisco.com>
---
lib/librte_pmd_enic/Makefile | 67 ++++++++++++++++++++++++++++++++++++++++++++
1 file changed, 67 insertions(+)
create mode 100644 lib/librte_pmd_enic/Makefile
diff --git a/lib/librte_pmd_enic/Makefile b/lib/librte_pmd_enic/Makefile
new file mode 100644
index 0000000..e7bb11b
--- /dev/null
+++ b/lib/librte_pmd_enic/Makefile
@@ -0,0 +1,67 @@
+#
+# Copyright 2008-2014 Cisco Systems, Inc. All rights reserved.
+# Copyright 2007 Nuova Systems, Inc. All rights reserved.
+#
+# Copyright (c) 2014, Cisco Systems, Inc.
+# All rights reserved.
+#
+# Redistribution and use in source and binary forms, with or without
+# modification, are permitted provided that the following conditions
+# are met:
+#
+# 1. Redistributions of source code must retain the above copyright
+# notice, this list of conditions and the following disclaimer.
+#
+# 2. Redistributions in binary form must reproduce the above copyright
+# notice, this list of conditions and the following disclaimer in
+# the documentation and/or other materials provided with the
+# distribution.
+#
+# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS
+# FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE
+# COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
+# INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING,
+# BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
+# LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
+# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
+# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN
+# ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+# POSSIBILITY OF SUCH DAMAGE.
+
+include $(RTE_SDK)/mk/rte.vars.mk
+
+#
+# library name
+#
+LIB = librte_pmd_enic.a
+
+CFLAGS += -I$(RTE_SDK)/lib/librte_hash/ -I$(RTE_SDK)/lib/librte_pmd_enic/vnic/
+CFLAGS += -I$(RTE_SDK)/lib/librte_pmd_enic/
+CFLAGS += -O3 -Wno-deprecated
+
+VPATH += $(RTE_SDK)/lib/librte_pmd_enic/src
+
+#
+# all source are stored in SRCS-y
+#
+SRCS-$(CONFIG_RTE_LIBRTE_ENIC_PMD) += enic_main.c
+SRCS-$(CONFIG_RTE_LIBRTE_ENIC_PMD) += enic_clsf.c
+SRCS-$(CONFIG_RTE_LIBRTE_ENIC_PMD) += vnic/vnic_cq.c
+SRCS-$(CONFIG_RTE_LIBRTE_ENIC_PMD) += vnic/vnic_wq.c
+SRCS-$(CONFIG_RTE_LIBRTE_ENIC_PMD) += vnic/vnic_dev.c
+SRCS-$(CONFIG_RTE_LIBRTE_ENIC_PMD) += vnic/vnic_intr.c
+SRCS-$(CONFIG_RTE_LIBRTE_ENIC_PMD) += vnic/vnic_rq.c
+SRCS-$(CONFIG_RTE_LIBRTE_ENIC_PMD) += enic_etherdev.c
+SRCS-$(CONFIG_RTE_LIBRTE_ENIC_PMD) += enic_res.c
+SRCS-$(CONFIG_RTE_LIBRTE_ENIC_PMD) += vnic/vnic_rss.c
+
+
+# this lib depends upon:
+DEPDIRS-$(CONFIG_RTE_LIBRTE_ENIC_PMD) += lib/librte_eal lib/librte_ether
+DEPDIRS-$(CONFIG_RTE_LIBRTE_ENIC_PMD) += lib/librte_mempool lib/librte_mbuf
+DEPDIRS-$(CONFIG_RTE_LIBRTE_ENIC_PMD) += lib/librte_net lib/librte_malloc
+
+include $(RTE_SDK)/mk/rte.lib.mk
+
--
1.9.1
* [dpdk-dev] [PATCH v6 3/6] enicpmd: VNIC common code partially shared with ENIC kernel mode driver
2014-11-25 17:26 [dpdk-dev] [PATCH v6 0/6] enicpmd: Cisco Systems Inc. VIC Ethernet PMD Sujith Sankar
2014-11-25 17:26 ` [dpdk-dev] [PATCH v6 1/6] enicpmd: License text Sujith Sankar
2014-11-25 17:26 ` [dpdk-dev] [PATCH v6 2/6] enicpmd: Makefile Sujith Sankar
@ 2014-11-25 17:26 ` Sujith Sankar
2014-11-25 17:26 ` [dpdk-dev] [PATCH v6 4/6] enicpmd: pmd specific code Sujith Sankar
` (4 subsequent siblings)
7 siblings, 0 replies; 36+ messages in thread
From: Sujith Sankar @ 2014-11-25 17:26 UTC (permalink / raw)
To: dev; +Cc: prrao
Signed-off-by: Sujith Sankar <ssujith@cisco.com>
---
lib/librte_pmd_enic/vnic/cq_desc.h | 126 ++++
lib/librte_pmd_enic/vnic/cq_enet_desc.h | 261 ++++++++
lib/librte_pmd_enic/vnic/rq_enet_desc.h | 76 +++
lib/librte_pmd_enic/vnic/vnic_cq.c | 117 ++++
lib/librte_pmd_enic/vnic/vnic_cq.h | 152 +++++
lib/librte_pmd_enic/vnic/vnic_dev.c | 1063 ++++++++++++++++++++++++++++++
lib/librte_pmd_enic/vnic/vnic_dev.h | 203 ++++++
lib/librte_pmd_enic/vnic/vnic_devcmd.h | 774 ++++++++++++++++++++++
lib/librte_pmd_enic/vnic/vnic_enet.h | 78 +++
lib/librte_pmd_enic/vnic/vnic_intr.c | 83 +++
lib/librte_pmd_enic/vnic/vnic_intr.h | 126 ++++
lib/librte_pmd_enic/vnic/vnic_nic.h | 88 +++
lib/librte_pmd_enic/vnic/vnic_resource.h | 97 +++
lib/librte_pmd_enic/vnic/vnic_rq.c | 246 +++++++
lib/librte_pmd_enic/vnic/vnic_rq.h | 282 ++++++++
lib/librte_pmd_enic/vnic/vnic_rss.c | 85 +++
lib/librte_pmd_enic/vnic/vnic_rss.h | 61 ++
lib/librte_pmd_enic/vnic/vnic_stats.h | 86 +++
lib/librte_pmd_enic/vnic/vnic_wq.c | 245 +++++++
lib/librte_pmd_enic/vnic/vnic_wq.h | 283 ++++++++
lib/librte_pmd_enic/vnic/wq_enet_desc.h | 114 ++++
21 files changed, 4646 insertions(+)
create mode 100644 lib/librte_pmd_enic/vnic/cq_desc.h
create mode 100644 lib/librte_pmd_enic/vnic/cq_enet_desc.h
create mode 100644 lib/librte_pmd_enic/vnic/rq_enet_desc.h
create mode 100644 lib/librte_pmd_enic/vnic/vnic_cq.c
create mode 100644 lib/librte_pmd_enic/vnic/vnic_cq.h
create mode 100644 lib/librte_pmd_enic/vnic/vnic_dev.c
create mode 100644 lib/librte_pmd_enic/vnic/vnic_dev.h
create mode 100644 lib/librte_pmd_enic/vnic/vnic_devcmd.h
create mode 100644 lib/librte_pmd_enic/vnic/vnic_enet.h
create mode 100644 lib/librte_pmd_enic/vnic/vnic_intr.c
create mode 100644 lib/librte_pmd_enic/vnic/vnic_intr.h
create mode 100644 lib/librte_pmd_enic/vnic/vnic_nic.h
create mode 100644 lib/librte_pmd_enic/vnic/vnic_resource.h
create mode 100644 lib/librte_pmd_enic/vnic/vnic_rq.c
create mode 100644 lib/librte_pmd_enic/vnic/vnic_rq.h
create mode 100644 lib/librte_pmd_enic/vnic/vnic_rss.c
create mode 100644 lib/librte_pmd_enic/vnic/vnic_rss.h
create mode 100644 lib/librte_pmd_enic/vnic/vnic_stats.h
create mode 100644 lib/librte_pmd_enic/vnic/vnic_wq.c
create mode 100644 lib/librte_pmd_enic/vnic/vnic_wq.h
create mode 100644 lib/librte_pmd_enic/vnic/wq_enet_desc.h
diff --git a/lib/librte_pmd_enic/vnic/cq_desc.h b/lib/librte_pmd_enic/vnic/cq_desc.h
new file mode 100644
index 0000000..c418967
--- /dev/null
+++ b/lib/librte_pmd_enic/vnic/cq_desc.h
@@ -0,0 +1,126 @@
+/*
+ * Copyright 2008-2010 Cisco Systems, Inc. All rights reserved.
+ * Copyright 2007 Nuova Systems, Inc. All rights reserved.
+ *
+ * Copyright (c) 2014, Cisco Systems, Inc.
+ * All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ *
+ * 1. Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ *
+ * 2. Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in
+ * the documentation and/or other materials provided with the
+ * distribution.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS
+ * FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE
+ * COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
+ * INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING,
+ * BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
+ * LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
+ * CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
+ * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN
+ * ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+ * POSSIBILITY OF SUCH DAMAGE.
+ *
+ */
+#ident "$Id: cq_desc.h 129574 2013-04-26 22:11:14Z rfaucett $"
+
+#ifndef _CQ_DESC_H_
+#define _CQ_DESC_H_
+
+/*
+ * Completion queue descriptor types
+ */
+enum cq_desc_types {
+ CQ_DESC_TYPE_WQ_ENET = 0,
+ CQ_DESC_TYPE_DESC_COPY = 1,
+ CQ_DESC_TYPE_WQ_EXCH = 2,
+ CQ_DESC_TYPE_RQ_ENET = 3,
+ CQ_DESC_TYPE_RQ_FCP = 4,
+ CQ_DESC_TYPE_IOMMU_MISS = 5,
+ CQ_DESC_TYPE_SGL = 6,
+ CQ_DESC_TYPE_CLASSIFIER = 7,
+ CQ_DESC_TYPE_TEST = 127,
+};
+
+/* Completion queue descriptor: 16B
+ *
+ * All completion queues have this basic layout. The
+ * type_specfic area is unique for each completion
+ * queue type.
+ */
+struct cq_desc {
+ __le16 completed_index;
+ __le16 q_number;
+ u8 type_specfic[11];
+ u8 type_color;
+};
+
+#define CQ_DESC_TYPE_BITS 4
+#define CQ_DESC_TYPE_MASK ((1 << CQ_DESC_TYPE_BITS) - 1)
+#define CQ_DESC_COLOR_MASK 1
+#define CQ_DESC_COLOR_SHIFT 7
+#define CQ_DESC_Q_NUM_BITS 10
+#define CQ_DESC_Q_NUM_MASK ((1 << CQ_DESC_Q_NUM_BITS) - 1)
+#define CQ_DESC_COMP_NDX_BITS 12
+#define CQ_DESC_COMP_NDX_MASK ((1 << CQ_DESC_COMP_NDX_BITS) - 1)
+
+static inline void cq_color_enc(struct cq_desc *desc, const u8 color)
+{
+ if (color)
+ desc->type_color |= (1 << CQ_DESC_COLOR_SHIFT);
+ else
+ desc->type_color &= ~(1 << CQ_DESC_COLOR_SHIFT);
+}
+
+static inline void cq_desc_enc(struct cq_desc *desc,
+ const u8 type, const u8 color, const u16 q_number,
+ const u16 completed_index)
+{
+ desc->type_color = (type & CQ_DESC_TYPE_MASK) |
+ ((color & CQ_DESC_COLOR_MASK) << CQ_DESC_COLOR_SHIFT);
+ desc->q_number = cpu_to_le16(q_number & CQ_DESC_Q_NUM_MASK);
+ desc->completed_index = cpu_to_le16(completed_index &
+ CQ_DESC_COMP_NDX_MASK);
+}
+
+static inline void cq_desc_dec(const struct cq_desc *desc_arg,
+ u8 *type, u8 *color, u16 *q_number, u16 *completed_index)
+{
+ const struct cq_desc *desc = desc_arg;
+ const u8 type_color = desc->type_color;
+
+ *color = (type_color >> CQ_DESC_COLOR_SHIFT) & CQ_DESC_COLOR_MASK;
+
+ /*
+ * Make sure color bit is read from desc *before* other fields
+ * are read from desc. Hardware guarantees color bit is last
+ * bit (byte) written. Adding the rmb() prevents the compiler
+ * and/or CPU from reordering the reads which would potentially
+ * result in reading stale values.
+ */
+
+ rmb();
+
+ *type = type_color & CQ_DESC_TYPE_MASK;
+ *q_number = le16_to_cpu(desc->q_number) & CQ_DESC_Q_NUM_MASK;
+ *completed_index = le16_to_cpu(desc->completed_index) &
+ CQ_DESC_COMP_NDX_MASK;
+}
+
+static inline void cq_color_dec(const struct cq_desc *desc_arg, u8 *color)
+{
+ volatile const struct cq_desc *desc = desc_arg;
+
+ *color = (desc->type_color >> CQ_DESC_COLOR_SHIFT) & CQ_DESC_COLOR_MASK;
+}
+
+#endif /* _CQ_DESC_H_ */
diff --git a/lib/librte_pmd_enic/vnic/cq_enet_desc.h b/lib/librte_pmd_enic/vnic/cq_enet_desc.h
new file mode 100644
index 0000000..669a2b5
--- /dev/null
+++ b/lib/librte_pmd_enic/vnic/cq_enet_desc.h
@@ -0,0 +1,261 @@
+/*
+ * Copyright 2008-2010 Cisco Systems, Inc. All rights reserved.
+ * Copyright 2007 Nuova Systems, Inc. All rights reserved.
+ *
+ * Copyright (c) 2014, Cisco Systems, Inc.
+ * All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ *
+ * 1. Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ *
+ * 2. Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in
+ * the documentation and/or other materials provided with the
+ * distribution.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS
+ * FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE
+ * COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
+ * INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING,
+ * BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
+ * LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
+ * CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
+ * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN
+ * ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+ * POSSIBILITY OF SUCH DAMAGE.
+ *
+ */
+#ident "$Id: cq_enet_desc.h 160468 2014-02-18 09:50:15Z gvaradar $"
+
+#ifndef _CQ_ENET_DESC_H_
+#define _CQ_ENET_DESC_H_
+
+#include "cq_desc.h"
+
+/* Ethernet completion queue descriptor: 16B */
+struct cq_enet_wq_desc {
+ __le16 completed_index;
+ __le16 q_number;
+ u8 reserved[11];
+ u8 type_color;
+};
+
+static inline void cq_enet_wq_desc_enc(struct cq_enet_wq_desc *desc,
+ u8 type, u8 color, u16 q_number, u16 completed_index)
+{
+ cq_desc_enc((struct cq_desc *)desc, type,
+ color, q_number, completed_index);
+}
+
+static inline void cq_enet_wq_desc_dec(struct cq_enet_wq_desc *desc,
+ u8 *type, u8 *color, u16 *q_number, u16 *completed_index)
+{
+ cq_desc_dec((struct cq_desc *)desc, type,
+ color, q_number, completed_index);
+}
+
+/* Completion queue descriptor: Ethernet receive queue, 16B */
+struct cq_enet_rq_desc {
+ __le16 completed_index_flags;
+ __le16 q_number_rss_type_flags;
+ __le32 rss_hash;
+ __le16 bytes_written_flags;
+ __le16 vlan;
+ __le16 checksum_fcoe;
+ u8 flags;
+ u8 type_color;
+};
+
+#define CQ_ENET_RQ_DESC_FLAGS_INGRESS_PORT (0x1 << 12)
+#define CQ_ENET_RQ_DESC_FLAGS_FCOE (0x1 << 13)
+#define CQ_ENET_RQ_DESC_FLAGS_EOP (0x1 << 14)
+#define CQ_ENET_RQ_DESC_FLAGS_SOP (0x1 << 15)
+
+#define CQ_ENET_RQ_DESC_RSS_TYPE_BITS 4
+#define CQ_ENET_RQ_DESC_RSS_TYPE_MASK \
+ ((1 << CQ_ENET_RQ_DESC_RSS_TYPE_BITS) - 1)
+#define CQ_ENET_RQ_DESC_RSS_TYPE_NONE 0
+#define CQ_ENET_RQ_DESC_RSS_TYPE_IPv4 1
+#define CQ_ENET_RQ_DESC_RSS_TYPE_TCP_IPv4 2
+#define CQ_ENET_RQ_DESC_RSS_TYPE_IPv6 3
+#define CQ_ENET_RQ_DESC_RSS_TYPE_TCP_IPv6 4
+#define CQ_ENET_RQ_DESC_RSS_TYPE_IPv6_EX 5
+#define CQ_ENET_RQ_DESC_RSS_TYPE_TCP_IPv6_EX 6
+
+#define CQ_ENET_RQ_DESC_FLAGS_CSUM_NOT_CALC (0x1 << 14)
+
+#define CQ_ENET_RQ_DESC_BYTES_WRITTEN_BITS 14
+#define CQ_ENET_RQ_DESC_BYTES_WRITTEN_MASK \
+ ((1 << CQ_ENET_RQ_DESC_BYTES_WRITTEN_BITS) - 1)
+#define CQ_ENET_RQ_DESC_FLAGS_TRUNCATED (0x1 << 14)
+#define CQ_ENET_RQ_DESC_FLAGS_VLAN_STRIPPED (0x1 << 15)
+
+#define CQ_ENET_RQ_DESC_VLAN_TCI_VLAN_BITS 12
+#define CQ_ENET_RQ_DESC_VLAN_TCI_VLAN_MASK \
+ ((1 << CQ_ENET_RQ_DESC_VLAN_TCI_VLAN_BITS) - 1)
+#define CQ_ENET_RQ_DESC_VLAN_TCI_CFI_MASK (0x1 << 12)
+#define CQ_ENET_RQ_DESC_VLAN_TCI_USER_PRIO_BITS 3
+#define CQ_ENET_RQ_DESC_VLAN_TCI_USER_PRIO_MASK \
+ ((1 << CQ_ENET_RQ_DESC_VLAN_TCI_USER_PRIO_BITS) - 1)
+#define CQ_ENET_RQ_DESC_VLAN_TCI_USER_PRIO_SHIFT 13
+
+#define CQ_ENET_RQ_DESC_FCOE_SOF_BITS 8
+#define CQ_ENET_RQ_DESC_FCOE_SOF_MASK \
+ ((1 << CQ_ENET_RQ_DESC_FCOE_SOF_BITS) - 1)
+#define CQ_ENET_RQ_DESC_FCOE_EOF_BITS 8
+#define CQ_ENET_RQ_DESC_FCOE_EOF_MASK \
+ ((1 << CQ_ENET_RQ_DESC_FCOE_EOF_BITS) - 1)
+#define CQ_ENET_RQ_DESC_FCOE_EOF_SHIFT 8
+
+#define CQ_ENET_RQ_DESC_FLAGS_TCP_UDP_CSUM_OK (0x1 << 0)
+#define CQ_ENET_RQ_DESC_FCOE_FC_CRC_OK (0x1 << 0)
+#define CQ_ENET_RQ_DESC_FLAGS_UDP (0x1 << 1)
+#define CQ_ENET_RQ_DESC_FCOE_ENC_ERROR (0x1 << 1)
+#define CQ_ENET_RQ_DESC_FLAGS_TCP (0x1 << 2)
+#define CQ_ENET_RQ_DESC_FLAGS_IPV4_CSUM_OK (0x1 << 3)
+#define CQ_ENET_RQ_DESC_FLAGS_IPV6 (0x1 << 4)
+#define CQ_ENET_RQ_DESC_FLAGS_IPV4 (0x1 << 5)
+#define CQ_ENET_RQ_DESC_FLAGS_IPV4_FRAGMENT (0x1 << 6)
+#define CQ_ENET_RQ_DESC_FLAGS_FCS_OK (0x1 << 7)
+
+static inline void cq_enet_rq_desc_enc(struct cq_enet_rq_desc *desc,
+ u8 type, u8 color, u16 q_number, u16 completed_index,
+ u8 ingress_port, u8 fcoe, u8 eop, u8 sop, u8 rss_type, u8 csum_not_calc,
+ u32 rss_hash, u16 bytes_written, u8 packet_error, u8 vlan_stripped,
+ u16 vlan, u16 checksum, u8 fcoe_sof, u8 fcoe_fc_crc_ok,
+ u8 fcoe_enc_error, u8 fcoe_eof, u8 tcp_udp_csum_ok, u8 udp, u8 tcp,
+ u8 ipv4_csum_ok, u8 ipv6, u8 ipv4, u8 ipv4_fragment, u8 fcs_ok)
+{
+ cq_desc_enc((struct cq_desc *)desc, type,
+ color, q_number, completed_index);
+
+ desc->completed_index_flags |= cpu_to_le16(
+ (ingress_port ? CQ_ENET_RQ_DESC_FLAGS_INGRESS_PORT : 0) |
+ (fcoe ? CQ_ENET_RQ_DESC_FLAGS_FCOE : 0) |
+ (eop ? CQ_ENET_RQ_DESC_FLAGS_EOP : 0) |
+ (sop ? CQ_ENET_RQ_DESC_FLAGS_SOP : 0));
+
+ desc->q_number_rss_type_flags |= cpu_to_le16(
+ ((rss_type & CQ_ENET_RQ_DESC_RSS_TYPE_MASK) <<
+ CQ_DESC_Q_NUM_BITS) |
+ (csum_not_calc ? CQ_ENET_RQ_DESC_FLAGS_CSUM_NOT_CALC : 0));
+
+ desc->rss_hash = cpu_to_le32(rss_hash);
+
+ desc->bytes_written_flags = cpu_to_le16(
+ (bytes_written & CQ_ENET_RQ_DESC_BYTES_WRITTEN_MASK) |
+ (packet_error ? CQ_ENET_RQ_DESC_FLAGS_TRUNCATED : 0) |
+ (vlan_stripped ? CQ_ENET_RQ_DESC_FLAGS_VLAN_STRIPPED : 0));
+
+ desc->vlan = cpu_to_le16(vlan);
+
+ if (fcoe) {
+ desc->checksum_fcoe = cpu_to_le16(
+ (fcoe_sof & CQ_ENET_RQ_DESC_FCOE_SOF_MASK) |
+ ((fcoe_eof & CQ_ENET_RQ_DESC_FCOE_EOF_MASK) <<
+ CQ_ENET_RQ_DESC_FCOE_EOF_SHIFT));
+ } else {
+ desc->checksum_fcoe = cpu_to_le16(checksum);
+ }
+
+ desc->flags =
+ (tcp_udp_csum_ok ? CQ_ENET_RQ_DESC_FLAGS_TCP_UDP_CSUM_OK : 0) |
+ (udp ? CQ_ENET_RQ_DESC_FLAGS_UDP : 0) |
+ (tcp ? CQ_ENET_RQ_DESC_FLAGS_TCP : 0) |
+ (ipv4_csum_ok ? CQ_ENET_RQ_DESC_FLAGS_IPV4_CSUM_OK : 0) |
+ (ipv6 ? CQ_ENET_RQ_DESC_FLAGS_IPV6 : 0) |
+ (ipv4 ? CQ_ENET_RQ_DESC_FLAGS_IPV4 : 0) |
+ (ipv4_fragment ? CQ_ENET_RQ_DESC_FLAGS_IPV4_FRAGMENT : 0) |
+ (fcs_ok ? CQ_ENET_RQ_DESC_FLAGS_FCS_OK : 0) |
+ (fcoe_fc_crc_ok ? CQ_ENET_RQ_DESC_FCOE_FC_CRC_OK : 0) |
+ (fcoe_enc_error ? CQ_ENET_RQ_DESC_FCOE_ENC_ERROR : 0);
+}
+
+static inline void cq_enet_rq_desc_dec(struct cq_enet_rq_desc *desc,
+ u8 *type, u8 *color, u16 *q_number, u16 *completed_index,
+ u8 *ingress_port, u8 *fcoe, u8 *eop, u8 *sop, u8 *rss_type,
+ u8 *csum_not_calc, u32 *rss_hash, u16 *bytes_written, u8 *packet_error,
+ u8 *vlan_stripped, u16 *vlan_tci, u16 *checksum, u8 *fcoe_sof,
+ u8 *fcoe_fc_crc_ok, u8 *fcoe_enc_error, u8 *fcoe_eof,
+ u8 *tcp_udp_csum_ok, u8 *udp, u8 *tcp, u8 *ipv4_csum_ok,
+ u8 *ipv6, u8 *ipv4, u8 *ipv4_fragment, u8 *fcs_ok)
+{
+ u16 completed_index_flags;
+ u16 q_number_rss_type_flags;
+ u16 bytes_written_flags;
+
+ cq_desc_dec((struct cq_desc *)desc, type,
+ color, q_number, completed_index);
+
+ completed_index_flags = le16_to_cpu(desc->completed_index_flags);
+ q_number_rss_type_flags =
+ le16_to_cpu(desc->q_number_rss_type_flags);
+ bytes_written_flags = le16_to_cpu(desc->bytes_written_flags);
+
+ *ingress_port = (completed_index_flags &
+ CQ_ENET_RQ_DESC_FLAGS_INGRESS_PORT) ? 1 : 0;
+ *fcoe = (completed_index_flags & CQ_ENET_RQ_DESC_FLAGS_FCOE) ?
+ 1 : 0;
+ *eop = (completed_index_flags & CQ_ENET_RQ_DESC_FLAGS_EOP) ?
+ 1 : 0;
+ *sop = (completed_index_flags & CQ_ENET_RQ_DESC_FLAGS_SOP) ?
+ 1 : 0;
+
+ *rss_type = (u8)((q_number_rss_type_flags >> CQ_DESC_Q_NUM_BITS) &
+ CQ_ENET_RQ_DESC_RSS_TYPE_MASK);
+ *csum_not_calc = (q_number_rss_type_flags &
+ CQ_ENET_RQ_DESC_FLAGS_CSUM_NOT_CALC) ? 1 : 0;
+
+ *rss_hash = le32_to_cpu(desc->rss_hash);
+
+ *bytes_written = bytes_written_flags &
+ CQ_ENET_RQ_DESC_BYTES_WRITTEN_MASK;
+ *packet_error = (bytes_written_flags &
+ CQ_ENET_RQ_DESC_FLAGS_TRUNCATED) ? 1 : 0;
+ *vlan_stripped = (bytes_written_flags &
+ CQ_ENET_RQ_DESC_FLAGS_VLAN_STRIPPED) ? 1 : 0;
+
+ /*
+ * Tag Control Information (16) = user_priority(3) + cfi(1) + vlan(12)
+ */
+ *vlan_tci = le16_to_cpu(desc->vlan);
+
+ if (*fcoe) {
+ *fcoe_sof = (u8)(le16_to_cpu(desc->checksum_fcoe) &
+ CQ_ENET_RQ_DESC_FCOE_SOF_MASK);
+ *fcoe_fc_crc_ok = (desc->flags &
+ CQ_ENET_RQ_DESC_FCOE_FC_CRC_OK) ? 1 : 0;
+ *fcoe_enc_error = (desc->flags &
+ CQ_ENET_RQ_DESC_FCOE_ENC_ERROR) ? 1 : 0;
+ *fcoe_eof = (u8)((le16_to_cpu(desc->checksum_fcoe) >>
+ CQ_ENET_RQ_DESC_FCOE_EOF_SHIFT) &
+ CQ_ENET_RQ_DESC_FCOE_EOF_MASK);
+ *checksum = 0;
+ } else {
+ *fcoe_sof = 0;
+ *fcoe_fc_crc_ok = 0;
+ *fcoe_enc_error = 0;
+ *fcoe_eof = 0;
+ *checksum = le16_to_cpu(desc->checksum_fcoe);
+ }
+
+ *tcp_udp_csum_ok =
+ (desc->flags & CQ_ENET_RQ_DESC_FLAGS_TCP_UDP_CSUM_OK) ? 1 : 0;
+ *udp = (desc->flags & CQ_ENET_RQ_DESC_FLAGS_UDP) ? 1 : 0;
+ *tcp = (desc->flags & CQ_ENET_RQ_DESC_FLAGS_TCP) ? 1 : 0;
+ *ipv4_csum_ok =
+ (desc->flags & CQ_ENET_RQ_DESC_FLAGS_IPV4_CSUM_OK) ? 1 : 0;
+ *ipv6 = (desc->flags & CQ_ENET_RQ_DESC_FLAGS_IPV6) ? 1 : 0;
+ *ipv4 = (desc->flags & CQ_ENET_RQ_DESC_FLAGS_IPV4) ? 1 : 0;
+ *ipv4_fragment =
+ (desc->flags & CQ_ENET_RQ_DESC_FLAGS_IPV4_FRAGMENT) ? 1 : 0;
+ *fcs_ok = (desc->flags & CQ_ENET_RQ_DESC_FLAGS_FCS_OK) ? 1 : 0;
+}
+
+#endif /* _CQ_ENET_DESC_H_ */
diff --git a/lib/librte_pmd_enic/vnic/rq_enet_desc.h b/lib/librte_pmd_enic/vnic/rq_enet_desc.h
new file mode 100644
index 0000000..f38ff2a
--- /dev/null
+++ b/lib/librte_pmd_enic/vnic/rq_enet_desc.h
@@ -0,0 +1,76 @@
+/*
+ * Copyright 2008-2010 Cisco Systems, Inc. All rights reserved.
+ * Copyright 2007 Nuova Systems, Inc. All rights reserved.
+ *
+ * Copyright (c) 2014, Cisco Systems, Inc.
+ * All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ *
+ * 1. Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ *
+ * 2. Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in
+ * the documentation and/or other materials provided with the
+ * distribution.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS
+ * FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE
+ * COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
+ * INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING,
+ * BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
+ * LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
+ * CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
+ * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN
+ * ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+ * POSSIBILITY OF SUCH DAMAGE.
+ *
+ */
+#ident "$Id: rq_enet_desc.h 59839 2010-09-27 20:36:31Z roprabhu $"
+
+#ifndef _RQ_ENET_DESC_H_
+#define _RQ_ENET_DESC_H_
+
+/* Ethernet receive queue descriptor: 16B */
+struct rq_enet_desc {
+ __le64 address;
+ __le16 length_type;
+ u8 reserved[6];
+};
+
+enum rq_enet_type_types {
+ RQ_ENET_TYPE_ONLY_SOP = 0,
+ RQ_ENET_TYPE_NOT_SOP = 1,
+ RQ_ENET_TYPE_RESV2 = 2,
+ RQ_ENET_TYPE_RESV3 = 3,
+};
+
+#define RQ_ENET_ADDR_BITS 64
+#define RQ_ENET_LEN_BITS 14
+#define RQ_ENET_LEN_MASK ((1 << RQ_ENET_LEN_BITS) - 1)
+#define RQ_ENET_TYPE_BITS 2
+#define RQ_ENET_TYPE_MASK ((1 << RQ_ENET_TYPE_BITS) - 1)
+
+static inline void rq_enet_desc_enc(struct rq_enet_desc *desc,
+ u64 address, u8 type, u16 length)
+{
+ desc->address = cpu_to_le64(address);
+ desc->length_type = cpu_to_le16((length & RQ_ENET_LEN_MASK) |
+ ((type & RQ_ENET_TYPE_MASK) << RQ_ENET_LEN_BITS));
+}
+
+static inline void rq_enet_desc_dec(struct rq_enet_desc *desc,
+ u64 *address, u8 *type, u16 *length)
+{
+ *address = le64_to_cpu(desc->address);
+ *length = le16_to_cpu(desc->length_type) & RQ_ENET_LEN_MASK;
+ *type = (u8)((le16_to_cpu(desc->length_type) >> RQ_ENET_LEN_BITS) &
+ RQ_ENET_TYPE_MASK);
+}
+
+#endif /* _RQ_ENET_DESC_H_ */
diff --git a/lib/librte_pmd_enic/vnic/vnic_cq.c b/lib/librte_pmd_enic/vnic/vnic_cq.c
new file mode 100644
index 0000000..cda97e4
--- /dev/null
+++ b/lib/librte_pmd_enic/vnic/vnic_cq.c
@@ -0,0 +1,117 @@
+/*
+ * Copyright 2008-2010 Cisco Systems, Inc. All rights reserved.
+ * Copyright 2007 Nuova Systems, Inc. All rights reserved.
+ *
+ * Copyright (c) 2014, Cisco Systems, Inc.
+ * All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ *
+ * 1. Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ *
+ * 2. Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in
+ * the documentation and/or other materials provided with the
+ * distribution.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS
+ * FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE
+ * COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
+ * INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING,
+ * BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
+ * LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
+ * CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
+ * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN
+ * ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+ * POSSIBILITY OF SUCH DAMAGE.
+ *
+ */
+#ident "$Id: vnic_cq.c 171146 2014-05-02 07:08:20Z ssujith $"
+
+#include "vnic_dev.h"
+#include "vnic_cq.h"
+
+int vnic_cq_mem_size(struct vnic_cq *cq, unsigned int desc_count,
+ unsigned int desc_size)
+{
+ int mem_size;
+
+ mem_size = vnic_dev_desc_ring_size(&cq->ring, desc_count, desc_size);
+
+ return mem_size;
+}
+
+void vnic_cq_free(struct vnic_cq *cq)
+{
+ vnic_dev_free_desc_ring(cq->vdev, &cq->ring);
+
+ cq->ctrl = NULL;
+}
+
+int vnic_cq_alloc(struct vnic_dev *vdev, struct vnic_cq *cq, unsigned int index,
+ unsigned int socket_id,
+ unsigned int desc_count, unsigned int desc_size)
+{
+ int err;
+ char res_name[NAME_MAX];
+ static int instance;
+
+ cq->index = index;
+ cq->vdev = vdev;
+
+ cq->ctrl = vnic_dev_get_res(vdev, RES_TYPE_CQ, index);
+ if (!cq->ctrl) {
+ pr_err("Failed to hook CQ[%d] resource\n", index);
+ return -EINVAL;
+ }
+
+ snprintf(res_name, sizeof(res_name), "%d-cq-%d", instance++, index);
+ err = vnic_dev_alloc_desc_ring(vdev, &cq->ring, desc_count, desc_size,
+ socket_id, res_name);
+ if (err)
+ return err;
+
+ return 0;
+}
+
+void vnic_cq_init(struct vnic_cq *cq, unsigned int flow_control_enable,
+ unsigned int color_enable, unsigned int cq_head, unsigned int cq_tail,
+ unsigned int cq_tail_color, unsigned int interrupt_enable,
+ unsigned int cq_entry_enable, unsigned int cq_message_enable,
+ unsigned int interrupt_offset, u64 cq_message_addr)
+{
+ u64 paddr;
+
+ paddr = (u64)cq->ring.base_addr | VNIC_PADDR_TARGET;
+ writeq(paddr, &cq->ctrl->ring_base);
+ iowrite32(cq->ring.desc_count, &cq->ctrl->ring_size);
+ iowrite32(flow_control_enable, &cq->ctrl->flow_control_enable);
+ iowrite32(color_enable, &cq->ctrl->color_enable);
+ iowrite32(cq_head, &cq->ctrl->cq_head);
+ iowrite32(cq_tail, &cq->ctrl->cq_tail);
+ iowrite32(cq_tail_color, &cq->ctrl->cq_tail_color);
+ iowrite32(interrupt_enable, &cq->ctrl->interrupt_enable);
+ iowrite32(cq_entry_enable, &cq->ctrl->cq_entry_enable);
+ iowrite32(cq_message_enable, &cq->ctrl->cq_message_enable);
+ iowrite32(interrupt_offset, &cq->ctrl->interrupt_offset);
+ writeq(cq_message_addr, &cq->ctrl->cq_message_addr);
+
+ cq->interrupt_offset = interrupt_offset;
+}
+
+void vnic_cq_clean(struct vnic_cq *cq)
+{
+ cq->to_clean = 0;
+ cq->last_color = 0;
+
+ iowrite32(0, &cq->ctrl->cq_head);
+ iowrite32(0, &cq->ctrl->cq_tail);
+ iowrite32(1, &cq->ctrl->cq_tail_color);
+
+ vnic_dev_clear_desc_ring(&cq->ring);
+}
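The CQ init/clean code above works with the color protocol consumed by `vnic_cq_service()` in vnic_cq.h: the driver owns a descriptor as long as the color bit the NIC wrote differs from `cq->last_color`, and `last_color` flips each time `to_clean` wraps so entries from the previous lap are not re-consumed. A minimal model of just that pointer/color bookkeeping:

```c
#include <assert.h>

/* Minimal model of the CQ color protocol: the sw consume pointer
 * (to_clean) walks the ring, and last_color flips on every wrap so
 * that stale descriptors from the previous lap are rejected. The
 * struct and function names are illustrative. */
struct toy_cq {
	unsigned int to_clean;
	unsigned int last_color;
	unsigned int desc_count;
};

/* Advance the sw pointer exactly as vnic_cq_service() does. */
static void toy_cq_advance(struct toy_cq *cq)
{
	cq->to_clean++;
	if (cq->to_clean == cq->desc_count) {
		cq->to_clean = 0;
		cq->last_color = cq->last_color ? 0 : 1;
	}
}
```

Note this matches `vnic_cq_clean()` above resetting `to_clean`/`last_color` to 0 while writing tail color 1 to hardware.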
diff --git a/lib/librte_pmd_enic/vnic/vnic_cq.h b/lib/librte_pmd_enic/vnic/vnic_cq.h
new file mode 100644
index 0000000..9ed9b1d
--- /dev/null
+++ b/lib/librte_pmd_enic/vnic/vnic_cq.h
@@ -0,0 +1,152 @@
+/*
+ * Copyright 2008-2010 Cisco Systems, Inc. All rights reserved.
+ * Copyright 2007 Nuova Systems, Inc. All rights reserved.
+ *
+ * Copyright (c) 2014, Cisco Systems, Inc.
+ * All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ *
+ * 1. Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ *
+ * 2. Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in
+ * the documentation and/or other materials provided with the
+ * distribution.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS
+ * FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE
+ * COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
+ * INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING,
+ * BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
+ * LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
+ * CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
+ * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN
+ * ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+ * POSSIBILITY OF SUCH DAMAGE.
+ *
+ */
+#ident "$Id: vnic_cq.h 173398 2014-05-19 09:17:02Z gvaradar $"
+
+#ifndef _VNIC_CQ_H_
+#define _VNIC_CQ_H_
+
+#include <rte_mbuf.h>
+
+#include "cq_desc.h"
+#include "vnic_dev.h"
+
+/* Completion queue control */
+struct vnic_cq_ctrl {
+ u64 ring_base; /* 0x00 */
+ u32 ring_size; /* 0x08 */
+ u32 pad0;
+ u32 flow_control_enable; /* 0x10 */
+ u32 pad1;
+ u32 color_enable; /* 0x18 */
+ u32 pad2;
+ u32 cq_head; /* 0x20 */
+ u32 pad3;
+ u32 cq_tail; /* 0x28 */
+ u32 pad4;
+ u32 cq_tail_color; /* 0x30 */
+ u32 pad5;
+ u32 interrupt_enable; /* 0x38 */
+ u32 pad6;
+ u32 cq_entry_enable; /* 0x40 */
+ u32 pad7;
+ u32 cq_message_enable; /* 0x48 */
+ u32 pad8;
+ u32 interrupt_offset; /* 0x50 */
+ u32 pad9;
+ u64 cq_message_addr; /* 0x58 */
+ u32 pad10;
+};
+
+#ifdef ENIC_AIC
+struct vnic_rx_bytes_counter {
+ unsigned int small_pkt_bytes_cnt;
+ unsigned int large_pkt_bytes_cnt;
+};
+#endif
+
+struct vnic_cq {
+ unsigned int index;
+ struct vnic_dev *vdev;
+ struct vnic_cq_ctrl __iomem *ctrl; /* memory-mapped */
+ struct vnic_dev_ring ring;
+ unsigned int to_clean;
+ unsigned int last_color;
+ unsigned int interrupt_offset;
+#ifdef ENIC_AIC
+ struct vnic_rx_bytes_counter pkt_size_counter;
+ unsigned int cur_rx_coal_timeval;
+ unsigned int tobe_rx_coal_timeval;
+ ktime_t prev_ts;
+#endif
+};
+
+static inline unsigned int vnic_cq_service(struct vnic_cq *cq,
+ unsigned int work_to_do,
+ int (*q_service)(struct vnic_dev *vdev, struct cq_desc *cq_desc,
+ u8 type, u16 q_number, u16 completed_index, void *opaque),
+ void *opaque)
+{
+ struct cq_desc *cq_desc;
+ unsigned int work_done = 0;
+ u16 q_number, completed_index;
+ u8 type, color;
+ struct rte_mbuf **rx_pkts = opaque;
+ unsigned int ret;
+ unsigned int split_hdr_size = vnic_get_hdr_split_size(cq->vdev);
+
+ cq_desc = (struct cq_desc *)((u8 *)cq->ring.descs +
+ cq->ring.desc_size * cq->to_clean);
+ cq_desc_dec(cq_desc, &type, &color,
+ &q_number, &completed_index);
+
+ while (color != cq->last_color) {
+ if (opaque)
+ opaque = (void *)&(rx_pkts[work_done]);
+
+ ret = (*q_service)(cq->vdev, cq_desc, type,
+ q_number, completed_index, opaque);
+ cq->to_clean++;
+ if (cq->to_clean == cq->ring.desc_count) {
+ cq->to_clean = 0;
+ cq->last_color = cq->last_color ? 0 : 1;
+ }
+
+ cq_desc = (struct cq_desc *)((u8 *)cq->ring.descs +
+ cq->ring.desc_size * cq->to_clean);
+ cq_desc_dec(cq_desc, &type, &color,
+ &q_number, &completed_index);
+
+ if (ret)
+ work_done++;
+ if (work_done >= work_to_do)
+ break;
+ }
+
+ return work_done;
+}
+
+void vnic_cq_free(struct vnic_cq *cq);
+int vnic_cq_alloc(struct vnic_dev *vdev, struct vnic_cq *cq, unsigned int index,
+ unsigned int socket_id,
+ unsigned int desc_count, unsigned int desc_size);
+void vnic_cq_init(struct vnic_cq *cq, unsigned int flow_control_enable,
+ unsigned int color_enable, unsigned int cq_head, unsigned int cq_tail,
+ unsigned int cq_tail_color, unsigned int interrupt_enable,
+ unsigned int cq_entry_enable, unsigned int message_enable,
+ unsigned int interrupt_offset, u64 message_addr);
+void vnic_cq_clean(struct vnic_cq *cq);
+int vnic_cq_mem_size(struct vnic_cq *cq, unsigned int desc_count,
+ unsigned int desc_size);
+
+#endif /* _VNIC_CQ_H_ */
diff --git a/lib/librte_pmd_enic/vnic/vnic_dev.c b/lib/librte_pmd_enic/vnic/vnic_dev.c
new file mode 100644
index 0000000..485123f
--- /dev/null
+++ b/lib/librte_pmd_enic/vnic/vnic_dev.c
@@ -0,0 +1,1063 @@
+/*
+ * Copyright 2008-2014 Cisco Systems, Inc. All rights reserved.
+ * Copyright 2007 Nuova Systems, Inc. All rights reserved.
+ *
+ * Copyright (c) 2014, Cisco Systems, Inc.
+ * All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ *
+ * 1. Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ *
+ * 2. Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in
+ * the documentation and/or other materials provided with the
+ * distribution.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS
+ * FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE
+ * COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
+ * INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING,
+ * BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
+ * LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
+ * CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
+ * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN
+ * ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+ * POSSIBILITY OF SUCH DAMAGE.
+ *
+ */
+#ident "$Id$"
+
+#include <rte_memzone.h>
+#include <rte_memcpy.h>
+#include <rte_string_fns.h>
+
+#include "vnic_dev.h"
+#include "vnic_resource.h"
+#include "vnic_devcmd.h"
+#include "vnic_stats.h"
+
+enum vnic_proxy_type {
+ PROXY_NONE,
+ PROXY_BY_BDF,
+ PROXY_BY_INDEX,
+};
+
+struct vnic_res {
+ void __iomem *vaddr;
+ dma_addr_t bus_addr;
+ unsigned int count;
+};
+
+struct vnic_intr_coal_timer_info {
+ u32 mul;
+ u32 div;
+ u32 max_usec;
+};
+
+struct vnic_dev {
+ void *priv;
+ struct rte_pci_device *pdev;
+ struct vnic_res res[RES_TYPE_MAX];
+ enum vnic_dev_intr_mode intr_mode;
+ struct vnic_devcmd __iomem *devcmd;
+ struct vnic_devcmd_notify *notify;
+ struct vnic_devcmd_notify notify_copy;
+ dma_addr_t notify_pa;
+ u32 notify_sz;
+ dma_addr_t linkstatus_pa;
+ struct vnic_stats *stats;
+ dma_addr_t stats_pa;
+ struct vnic_devcmd_fw_info *fw_info;
+ dma_addr_t fw_info_pa;
+ enum vnic_proxy_type proxy;
+ u32 proxy_index;
+ u64 args[VNIC_DEVCMD_NARGS];
+ u16 split_hdr_size;
+ int in_reset;
+ struct vnic_intr_coal_timer_info intr_coal_timer_info;
+ void *(*alloc_consistent)(void *priv, size_t size,
+ dma_addr_t *dma_handle, u8 *name);
+ void (*free_consistent)(struct rte_pci_device *hwdev,
+ size_t size, void *vaddr,
+ dma_addr_t dma_handle);
+};
+
+#define VNIC_MAX_RES_HDR_SIZE \
+ (sizeof(struct vnic_resource_header) + \
+ sizeof(struct vnic_resource) * RES_TYPE_MAX)
+#define VNIC_RES_STRIDE 128
+
+void *vnic_dev_priv(struct vnic_dev *vdev)
+{
+ return vdev->priv;
+}
+
+void vnic_register_cbacks(struct vnic_dev *vdev,
+ void *(*alloc_consistent)(void *priv, size_t size,
+ dma_addr_t *dma_handle, u8 *name),
+ void (*free_consistent)(struct rte_pci_device *hwdev,
+ size_t size, void *vaddr,
+ dma_addr_t dma_handle))
+{
+ vdev->alloc_consistent = alloc_consistent;
+ vdev->free_consistent = free_consistent;
+}
+
+static int vnic_dev_discover_res(struct vnic_dev *vdev,
+ struct vnic_dev_bar *bar, unsigned int num_bars)
+{
+ struct vnic_resource_header __iomem *rh;
+ struct mgmt_barmap_hdr __iomem *mrh;
+ struct vnic_resource __iomem *r;
+ u8 type;
+
+ if (num_bars == 0)
+ return -EINVAL;
+
+ if (bar->len < VNIC_MAX_RES_HDR_SIZE) {
+ pr_err("vNIC BAR0 res hdr length error\n");
+ return -EINVAL;
+ }
+
+ rh = bar->vaddr;
+ mrh = bar->vaddr;
+ if (!rh) {
+ pr_err("vNIC BAR0 res hdr not mem-mapped\n");
+ return -EINVAL;
+ }
+
+ /* Check for mgmt vnic in addition to normal vnic */
+ if ((ioread32(&rh->magic) != VNIC_RES_MAGIC) ||
+ (ioread32(&rh->version) != VNIC_RES_VERSION)) {
+ if ((ioread32(&mrh->magic) != MGMTVNIC_MAGIC) ||
+ (ioread32(&mrh->version) != MGMTVNIC_VERSION)) {
+ pr_err("vNIC BAR0 res magic/version error " \
+ "exp (%lx/%lx) or (%lx/%lx), curr (%x/%x)\n",
+ VNIC_RES_MAGIC, VNIC_RES_VERSION,
+ MGMTVNIC_MAGIC, MGMTVNIC_VERSION,
+ ioread32(&rh->magic), ioread32(&rh->version));
+ return -EINVAL;
+ }
+ }
+
+ if (ioread32(&mrh->magic) == MGMTVNIC_MAGIC)
+ r = (struct vnic_resource __iomem *)(mrh + 1);
+ else
+ r = (struct vnic_resource __iomem *)(rh + 1);
+
+ while ((type = ioread8(&r->type)) != RES_TYPE_EOL) {
+ u8 bar_num = ioread8(&r->bar);
+ u32 bar_offset = ioread32(&r->bar_offset);
+ u32 count = ioread32(&r->count);
+ u32 len;
+
+ r++;
+
+ if (bar_num >= num_bars)
+ continue;
+
+ if (!bar[bar_num].len || !bar[bar_num].vaddr)
+ continue;
+
+ switch (type) {
+ case RES_TYPE_WQ:
+ case RES_TYPE_RQ:
+ case RES_TYPE_CQ:
+ case RES_TYPE_INTR_CTRL:
+ /* each count is stride bytes long */
+ len = count * VNIC_RES_STRIDE;
+ if (len + bar_offset > bar[bar_num].len) {
+ pr_err("vNIC BAR0 resource %d " \
+ "out-of-bounds, offset 0x%x + " \
+ "size 0x%x > bar len 0x%lx\n",
+ type, bar_offset,
+ len,
+ bar[bar_num].len);
+ return -EINVAL;
+ }
+ break;
+ case RES_TYPE_INTR_PBA_LEGACY:
+ case RES_TYPE_DEVCMD:
+ len = count;
+ break;
+ default:
+ continue;
+ }
+
+ vdev->res[type].count = count;
+ vdev->res[type].vaddr = (char __iomem *)bar[bar_num].vaddr +
+ bar_offset;
+ vdev->res[type].bus_addr = bar[bar_num].bus_addr + bar_offset;
+ }
+
+ return 0;
+}
+
+unsigned int vnic_dev_get_res_count(struct vnic_dev *vdev,
+ enum vnic_res_type type)
+{
+ return vdev->res[type].count;
+}
+
+void __iomem *vnic_dev_get_res(struct vnic_dev *vdev, enum vnic_res_type type,
+ unsigned int index)
+{
+ if (!vdev->res[type].vaddr)
+ return NULL;
+
+ switch (type) {
+ case RES_TYPE_WQ:
+ case RES_TYPE_RQ:
+ case RES_TYPE_CQ:
+ case RES_TYPE_INTR_CTRL:
+ return (char __iomem *)vdev->res[type].vaddr +
+ index * VNIC_RES_STRIDE;
+ default:
+ return (char __iomem *)vdev->res[type].vaddr;
+ }
+}
+
+unsigned int vnic_dev_desc_ring_size(struct vnic_dev_ring *ring,
+ unsigned int desc_count, unsigned int desc_size)
+{
+ /* The base address of the desc rings must be 512 byte aligned.
+ * Descriptor count is aligned to groups of 32 descriptors. A
+ * count of 0 means the maximum 4096 descriptors. Descriptor
+ * size is aligned to 16 bytes.
+ */
+
+ unsigned int count_align = 32;
+ unsigned int desc_align = 16;
+
+ ring->base_align = 512;
+
+ if (desc_count == 0)
+ desc_count = 4096;
+
+ ring->desc_count = ALIGN(desc_count, count_align);
+
+ ring->desc_size = ALIGN(desc_size, desc_align);
+
+ ring->size = ring->desc_count * ring->desc_size;
+ ring->size_unaligned = ring->size + ring->base_align;
+
+ return ring->size_unaligned;
+}
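The sizing rules in `vnic_dev_desc_ring_size()` can be checked in isolation; the sketch below reproduces the same arithmetic with a stand-alone ALIGN macro (names are illustrative):

```c
#include <assert.h>

/* Same rounding as vnic_dev_desc_ring_size(): descriptor count to
 * multiples of 32 (0 meaning the 4096 maximum), descriptor size to
 * multiples of 16, plus 512 bytes of slack so the ring base can be
 * 512-byte aligned within the allocation. */
#define TOY_ALIGN(x, a) (((x) + (a) - 1) & ~((unsigned long)(a) - 1))

static unsigned long toy_ring_size_unaligned(unsigned int desc_count,
					     unsigned int desc_size)
{
	if (desc_count == 0)
		desc_count = 4096;
	desc_count = TOY_ALIGN(desc_count, 32);
	desc_size = TOY_ALIGN(desc_size, 16);
	return (unsigned long)desc_count * desc_size + 512;
}
```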
+
+void vnic_set_hdr_split_size(struct vnic_dev *vdev, u16 size)
+{
+ vdev->split_hdr_size = size;
+}
+
+u16 vnic_get_hdr_split_size(struct vnic_dev *vdev)
+{
+ return vdev->split_hdr_size;
+}
+
+void vnic_dev_clear_desc_ring(struct vnic_dev_ring *ring)
+{
+ memset(ring->descs, 0, ring->size);
+}
+
+int vnic_dev_alloc_desc_ring(struct vnic_dev *vdev, struct vnic_dev_ring *ring,
+ unsigned int desc_count, unsigned int desc_size, unsigned int socket_id,
+ char *z_name)
+{
+ const struct rte_memzone *rz;
+
+ vnic_dev_desc_ring_size(ring, desc_count, desc_size);
+
+ rz = rte_memzone_reserve_aligned(z_name,
+ ring->size_unaligned, socket_id,
+ 0, ENIC_ALIGN);
+ if (!rz) {
+ pr_err("Failed to allocate ring (size=%d), aborting\n",
+ (int)ring->size);
+ return -ENOMEM;
+ }
+
+ ring->descs_unaligned = rz->addr;
+ if (!ring->descs_unaligned) {
+ pr_err("Failed to map allocated ring (size=%d), aborting\n",
+ (int)ring->size);
+ return -ENOMEM;
+ }
+
+ ring->base_addr_unaligned = (dma_addr_t)rz->phys_addr;
+
+ ring->base_addr = ALIGN(ring->base_addr_unaligned,
+ ring->base_align);
+ ring->descs = (u8 *)ring->descs_unaligned +
+ (ring->base_addr - ring->base_addr_unaligned);
+
+ vnic_dev_clear_desc_ring(ring);
+
+ ring->desc_avail = ring->desc_count - 1;
+
+ return 0;
+}
+
+void vnic_dev_free_desc_ring(struct vnic_dev *vdev, struct vnic_dev_ring *ring)
+{
+ if (ring->descs)
+ ring->descs = NULL;
+}
+
+static int _vnic_dev_cmd(struct vnic_dev *vdev, enum vnic_devcmd_cmd cmd,
+ int wait)
+{
+ struct vnic_devcmd __iomem *devcmd = vdev->devcmd;
+ unsigned int i;
+ int delay;
+ u32 status;
+ int err;
+
+ status = ioread32(&devcmd->status);
+ if (status == 0xFFFFFFFF) {
+ /* PCI-e target device is gone */
+ return -ENODEV;
+ }
+ if (status & STAT_BUSY) {
+ pr_err("Busy devcmd %d\n", _CMD_N(cmd));
+ return -EBUSY;
+ }
+
+ if (_CMD_DIR(cmd) & _CMD_DIR_WRITE) {
+ for (i = 0; i < VNIC_DEVCMD_NARGS; i++)
+ writeq(vdev->args[i], &devcmd->args[i]);
+ wmb(); /* complete all writes initiated till now */
+ }
+
+ iowrite32(cmd, &devcmd->cmd);
+
+ if ((_CMD_FLAGS(cmd) & _CMD_FLAGS_NOWAIT))
+ return 0;
+
+ for (delay = 0; delay < wait; delay++) {
+
+ udelay(100);
+
+ status = ioread32(&devcmd->status);
+ if (status == 0xFFFFFFFF) {
+ /* PCI-e target device is gone */
+ return -ENODEV;
+ }
+
+ if (!(status & STAT_BUSY)) {
+ if (status & STAT_ERROR) {
+ err = -(int)readq(&devcmd->args[0]);
+ if (cmd != CMD_CAPABILITY)
+ pr_err("Devcmd %d failed " \
+ "with error code %d\n",
+ _CMD_N(cmd), err);
+ return err;
+ }
+
+ if (_CMD_DIR(cmd) & _CMD_DIR_READ) {
+ rmb(); /* finish all reads initiated till now */
+ for (i = 0; i < VNIC_DEVCMD_NARGS; i++)
+ vdev->args[i] = readq(&devcmd->args[i]);
+ }
+
+ return 0;
+ }
+ }
+
+ pr_err("Timedout devcmd %d\n", _CMD_N(cmd));
+ return -ETIMEDOUT;
+}
+
+static int vnic_dev_cmd_proxy(struct vnic_dev *vdev,
+ enum vnic_devcmd_cmd proxy_cmd, enum vnic_devcmd_cmd cmd,
+ u64 *a0, u64 *a1, int wait)
+{
+ u32 status;
+ int err;
+
+ memset(vdev->args, 0, sizeof(vdev->args));
+
+ vdev->args[0] = vdev->proxy_index;
+ vdev->args[1] = cmd;
+ vdev->args[2] = *a0;
+ vdev->args[3] = *a1;
+
+ err = _vnic_dev_cmd(vdev, proxy_cmd, wait);
+ if (err)
+ return err;
+
+ status = (u32)vdev->args[0];
+ if (status & STAT_ERROR) {
+ err = (int)vdev->args[1];
+ if (err != ERR_ECMDUNKNOWN ||
+ cmd != CMD_CAPABILITY)
+ pr_err("Error %d proxy devcmd %d\n", err, _CMD_N(cmd));
+ return err;
+ }
+
+ *a0 = vdev->args[1];
+ *a1 = vdev->args[2];
+
+ return 0;
+}
+
+static int vnic_dev_cmd_no_proxy(struct vnic_dev *vdev,
+ enum vnic_devcmd_cmd cmd, u64 *a0, u64 *a1, int wait)
+{
+ int err;
+
+ vdev->args[0] = *a0;
+ vdev->args[1] = *a1;
+
+ err = _vnic_dev_cmd(vdev, cmd, wait);
+
+ *a0 = vdev->args[0];
+ *a1 = vdev->args[1];
+
+ return err;
+}
+
+void vnic_dev_cmd_proxy_by_index_start(struct vnic_dev *vdev, u16 index)
+{
+ vdev->proxy = PROXY_BY_INDEX;
+ vdev->proxy_index = index;
+}
+
+void vnic_dev_cmd_proxy_by_bdf_start(struct vnic_dev *vdev, u16 bdf)
+{
+ vdev->proxy = PROXY_BY_BDF;
+ vdev->proxy_index = bdf;
+}
+
+void vnic_dev_cmd_proxy_end(struct vnic_dev *vdev)
+{
+ vdev->proxy = PROXY_NONE;
+ vdev->proxy_index = 0;
+}
+
+int vnic_dev_cmd(struct vnic_dev *vdev, enum vnic_devcmd_cmd cmd,
+ u64 *a0, u64 *a1, int wait)
+{
+ memset(vdev->args, 0, sizeof(vdev->args));
+
+ switch (vdev->proxy) {
+ case PROXY_BY_INDEX:
+ return vnic_dev_cmd_proxy(vdev, CMD_PROXY_BY_INDEX, cmd,
+ a0, a1, wait);
+ case PROXY_BY_BDF:
+ return vnic_dev_cmd_proxy(vdev, CMD_PROXY_BY_BDF, cmd,
+ a0, a1, wait);
+ case PROXY_NONE:
+ default:
+ return vnic_dev_cmd_no_proxy(vdev, cmd, a0, a1, wait);
+ }
+}
+
+static int vnic_dev_capable(struct vnic_dev *vdev, enum vnic_devcmd_cmd cmd)
+{
+ u64 a0 = (u32)cmd, a1 = 0;
+ int wait = 1000;
+ int err;
+
+ err = vnic_dev_cmd(vdev, CMD_CAPABILITY, &a0, &a1, wait);
+
+ return !(err || a0);
+}
+
+int vnic_dev_spec(struct vnic_dev *vdev, unsigned int offset, unsigned int size,
+ void *value)
+{
+ u64 a0, a1;
+ int wait = 1000;
+ int err;
+
+ a0 = offset;
+ a1 = size;
+
+ err = vnic_dev_cmd(vdev, CMD_DEV_SPEC, &a0, &a1, wait);
+
+ switch (size) {
+ case 1:
+ *(u8 *)value = (u8)a0;
+ break;
+ case 2:
+ *(u16 *)value = (u16)a0;
+ break;
+ case 4:
+ *(u32 *)value = (u32)a0;
+ break;
+ case 8:
+ *(u64 *)value = a0;
+ break;
+ default:
+ BUG();
+ break;
+ }
+
+ return err;
+}
+
+int vnic_dev_stats_clear(struct vnic_dev *vdev)
+{
+ u64 a0 = 0, a1 = 0;
+ int wait = 1000;
+
+ return vnic_dev_cmd(vdev, CMD_STATS_CLEAR, &a0, &a1, wait);
+}
+
+int vnic_dev_stats_dump(struct vnic_dev *vdev, struct vnic_stats **stats)
+{
+ u64 a0, a1;
+ int wait = 1000;
+ static int instance;
+ char name[NAME_MAX];
+
+ if (!vdev->stats) {
+ snprintf(name, sizeof(name), "vnic_stats-%d", instance++);
+ vdev->stats = vdev->alloc_consistent(vdev->priv,
+ sizeof(struct vnic_stats), &vdev->stats_pa, name);
+ if (!vdev->stats)
+ return -ENOMEM;
+ }
+
+ *stats = vdev->stats;
+ a0 = vdev->stats_pa;
+ a1 = sizeof(struct vnic_stats);
+
+ return vnic_dev_cmd(vdev, CMD_STATS_DUMP, &a0, &a1, wait);
+}
+
+int vnic_dev_close(struct vnic_dev *vdev)
+{
+ u64 a0 = 0, a1 = 0;
+ int wait = 1000;
+
+ return vnic_dev_cmd(vdev, CMD_CLOSE, &a0, &a1, wait);
+}
+
+/** Deprecated. @see vnic_dev_enable_wait */
+int vnic_dev_enable(struct vnic_dev *vdev)
+{
+ u64 a0 = 0, a1 = 0;
+ int wait = 1000;
+
+ return vnic_dev_cmd(vdev, CMD_ENABLE, &a0, &a1, wait);
+}
+
+int vnic_dev_enable_wait(struct vnic_dev *vdev)
+{
+ u64 a0 = 0, a1 = 0;
+ int wait = 1000;
+
+ if (vnic_dev_capable(vdev, CMD_ENABLE_WAIT))
+ return vnic_dev_cmd(vdev, CMD_ENABLE_WAIT, &a0, &a1, wait);
+ else
+ return vnic_dev_cmd(vdev, CMD_ENABLE, &a0, &a1, wait);
+}
+
+int vnic_dev_disable(struct vnic_dev *vdev)
+{
+ u64 a0 = 0, a1 = 0;
+ int wait = 1000;
+
+ return vnic_dev_cmd(vdev, CMD_DISABLE, &a0, &a1, wait);
+}
+
+int vnic_dev_open(struct vnic_dev *vdev, int arg)
+{
+ u64 a0 = (u32)arg, a1 = 0;
+ int wait = 1000;
+
+ return vnic_dev_cmd(vdev, CMD_OPEN, &a0, &a1, wait);
+}
+
+int vnic_dev_open_done(struct vnic_dev *vdev, int *done)
+{
+ u64 a0 = 0, a1 = 0;
+ int wait = 1000;
+ int err;
+
+ *done = 0;
+
+ err = vnic_dev_cmd(vdev, CMD_OPEN_STATUS, &a0, &a1, wait);
+ if (err)
+ return err;
+
+ *done = (a0 == 0);
+
+ return 0;
+}
+
+int vnic_dev_soft_reset(struct vnic_dev *vdev, int arg)
+{
+ u64 a0 = (u32)arg, a1 = 0;
+ int wait = 1000;
+
+ return vnic_dev_cmd(vdev, CMD_SOFT_RESET, &a0, &a1, wait);
+}
+
+int vnic_dev_soft_reset_done(struct vnic_dev *vdev, int *done)
+{
+ u64 a0 = 0, a1 = 0;
+ int wait = 1000;
+ int err;
+
+ *done = 0;
+
+ err = vnic_dev_cmd(vdev, CMD_SOFT_RESET_STATUS, &a0, &a1, wait);
+ if (err)
+ return err;
+
+ *done = (a0 == 0);
+
+ return 0;
+}
+
+int vnic_dev_get_mac_addr(struct vnic_dev *vdev, u8 *mac_addr)
+{
+ u64 a0 = 0, a1 = 0;
+ int wait = 1000;
+ int err, i;
+
+ for (i = 0; i < ETH_ALEN; i++)
+ mac_addr[i] = 0;
+
+ err = vnic_dev_cmd(vdev, CMD_GET_MAC_ADDR, &a0, &a1, wait);
+ if (err)
+ return err;
+
+ for (i = 0; i < ETH_ALEN; i++)
+ mac_addr[i] = ((u8 *)&a0)[i];
+
+ return 0;
+}
+
+int vnic_dev_packet_filter(struct vnic_dev *vdev, int directed, int multicast,
+ int broadcast, int promisc, int allmulti)
+{
+ u64 a0, a1 = 0;
+ int wait = 1000;
+ int err;
+
+ a0 = (directed ? CMD_PFILTER_DIRECTED : 0) |
+ (multicast ? CMD_PFILTER_MULTICAST : 0) |
+ (broadcast ? CMD_PFILTER_BROADCAST : 0) |
+ (promisc ? CMD_PFILTER_PROMISCUOUS : 0) |
+ (allmulti ? CMD_PFILTER_ALL_MULTICAST : 0);
+
+ err = vnic_dev_cmd(vdev, CMD_PACKET_FILTER, &a0, &a1, wait);
+ if (err)
+ pr_err("Can't set packet filter\n");
+
+ return err;
+}
+
+int vnic_dev_add_addr(struct vnic_dev *vdev, u8 *addr)
+{
+ u64 a0 = 0, a1 = 0;
+ int wait = 1000;
+ int err;
+ int i;
+
+ for (i = 0; i < ETH_ALEN; i++)
+ ((u8 *)&a0)[i] = addr[i];
+
+ err = vnic_dev_cmd(vdev, CMD_ADDR_ADD, &a0, &a1, wait);
+ if (err)
+ pr_err("Can't add addr [%02x:%02x:%02x:%02x:%02x:%02x], %d\n",
+ addr[0], addr[1], addr[2], addr[3], addr[4], addr[5],
+ err);
+
+ return err;
+}
+
+int vnic_dev_del_addr(struct vnic_dev *vdev, u8 *addr)
+{
+ u64 a0 = 0, a1 = 0;
+ int wait = 1000;
+ int err;
+ int i;
+
+ for (i = 0; i < ETH_ALEN; i++)
+ ((u8 *)&a0)[i] = addr[i];
+
+ err = vnic_dev_cmd(vdev, CMD_ADDR_DEL, &a0, &a1, wait);
+ if (err)
+ pr_err("Can't del addr [%02x:%02x:%02x:%02x:%02x:%02x], %d\n",
+ addr[0], addr[1], addr[2], addr[3], addr[4], addr[5],
+ err);
+
+ return err;
+}
+
+int vnic_dev_set_ig_vlan_rewrite_mode(struct vnic_dev *vdev,
+ u8 ig_vlan_rewrite_mode)
+{
+ u64 a0 = ig_vlan_rewrite_mode, a1 = 0;
+ int wait = 1000;
+
+ if (vnic_dev_capable(vdev, CMD_IG_VLAN_REWRITE_MODE))
+ return vnic_dev_cmd(vdev, CMD_IG_VLAN_REWRITE_MODE,
+ &a0, &a1, wait);
+ else
+ return 0;
+}
+
+int vnic_dev_raise_intr(struct vnic_dev *vdev, u16 intr)
+{
+ u64 a0 = intr, a1 = 0;
+ int wait = 1000;
+ int err;
+
+ err = vnic_dev_cmd(vdev, CMD_IAR, &a0, &a1, wait);
+ if (err)
+ pr_err("Failed to raise INTR[%d], err %d\n", intr, err);
+
+ return err;
+}
+
+void vnic_dev_set_reset_flag(struct vnic_dev *vdev, int state)
+{
+ vdev->in_reset = state;
+}
+
+static inline int vnic_dev_in_reset(struct vnic_dev *vdev)
+{
+ return vdev->in_reset;
+}
+
+int vnic_dev_notify_setcmd(struct vnic_dev *vdev,
+ void *notify_addr, dma_addr_t notify_pa, u16 intr)
+{
+ u64 a0, a1;
+ int wait = 1000;
+ int r;
+
+ memset(notify_addr, 0, sizeof(struct vnic_devcmd_notify));
+ if (!vnic_dev_in_reset(vdev)) {
+ vdev->notify = notify_addr;
+ vdev->notify_pa = notify_pa;
+ }
+
+ a0 = (u64)notify_pa;
+ a1 = ((u64)intr << 32) & 0x0000ffff00000000ULL;
+ a1 += sizeof(struct vnic_devcmd_notify);
+
+ r = vnic_dev_cmd(vdev, CMD_NOTIFY, &a0, &a1, wait);
+ if (!vnic_dev_in_reset(vdev))
+ vdev->notify_sz = (r == 0) ? (u32)a1 : 0;
+
+ return r;
+}
+
+int vnic_dev_notify_set(struct vnic_dev *vdev, u16 intr)
+{
+ void *notify_addr;
+ dma_addr_t notify_pa;
+ char name[NAME_MAX];
+ static int instance;
+
+ if (vdev->notify || vdev->notify_pa) {
+ pr_warn("notify block %p still allocated.\n" \
+ "Ignore if restarting port\n", vdev->notify);
+ return -EINVAL;
+ }
+
+ if (!vnic_dev_in_reset(vdev)) {
+ snprintf(name, sizeof(name), "vnic_notify-%d", instance++);
+ notify_addr = vdev->alloc_consistent(vdev->priv,
+ sizeof(struct vnic_devcmd_notify),
+ ¬ify_pa, name);
+ if (!notify_addr)
+ return -ENOMEM;
+ } else {
+ notify_addr = vdev->notify;
+ notify_pa = vdev->notify_pa;
+ }
+
+ return vnic_dev_notify_setcmd(vdev, notify_addr, notify_pa, intr);
+}
+
+int vnic_dev_notify_unsetcmd(struct vnic_dev *vdev)
+{
+ u64 a0, a1;
+ int wait = 1000;
+ int err;
+
+ a0 = 0; /* paddr = 0 to unset notify buffer */
+ a1 = 0x0000ffff00000000ULL; /* intr num = -1 to unreg for intr */
+ a1 += sizeof(struct vnic_devcmd_notify);
+
+ err = vnic_dev_cmd(vdev, CMD_NOTIFY, &a0, &a1, wait);
+ if (!vnic_dev_in_reset(vdev)) {
+ vdev->notify = NULL;
+ vdev->notify_pa = 0;
+ vdev->notify_sz = 0;
+ }
+
+ return err;
+}
+
+int vnic_dev_notify_unset(struct vnic_dev *vdev)
+{
+ if (vdev->notify && !vnic_dev_in_reset(vdev)) {
+ vdev->free_consistent(vdev->pdev,
+ sizeof(struct vnic_devcmd_notify),
+ vdev->notify,
+ vdev->notify_pa);
+ }
+
+ return vnic_dev_notify_unsetcmd(vdev);
+}
+
+static int vnic_dev_notify_ready(struct vnic_dev *vdev)
+{
+ u32 *words;
+ unsigned int nwords = vdev->notify_sz / 4;
+ unsigned int i;
+ u32 csum;
+
+ if (!vdev->notify || !vdev->notify_sz)
+ return 0;
+
+ do {
+ csum = 0;
+ rte_memcpy(&vdev->notify_copy, vdev->notify, vdev->notify_sz);
+ words = (u32 *)&vdev->notify_copy;
+ for (i = 1; i < nwords; i++)
+ csum += words[i];
+ } while (csum != words[0]);
+
+ return 1;
+}
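The retry loop in `vnic_dev_notify_ready()` above relies on a simple invariant: the firmware writes the sum of all other words of the notify block into word 0, so a snapshot copied mid-update fails the check and is re-read. The validity test alone looks like this (names illustrative):

```c
#include <assert.h>
#include <stdint.h>

/* A notify snapshot is consistent when the 32-bit wrapping sum of
 * words[1..n-1] equals the checksum the firmware wrote to words[0];
 * vnic_dev_notify_ready() re-copies and re-sums until this holds. */
static int toy_notify_consistent(const uint32_t *words, unsigned int nwords)
{
	uint32_t csum = 0;
	unsigned int i;

	for (i = 1; i < nwords; i++)
		csum += words[i];	/* wraps mod 2^32, as in the driver */
	return csum == words[0];
}
```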
+
+int vnic_dev_init(struct vnic_dev *vdev, int arg)
+{
+ u64 a0 = (u32)arg, a1 = 0;
+ int wait = 1000;
+ int r = 0;
+
+ if (vnic_dev_capable(vdev, CMD_INIT)) {
+ r = vnic_dev_cmd(vdev, CMD_INIT, &a0, &a1, wait);
+ } else {
+ vnic_dev_cmd(vdev, CMD_INIT_v1, &a0, &a1, wait);
+ if (a0 & CMD_INITF_DEFAULT_MAC) {
+ /* Emulate these for old CMD_INIT_v1 which
+ * didn't pass a0 so no CMD_INITF_*.
+ */
+ vnic_dev_cmd(vdev, CMD_GET_MAC_ADDR, &a0, &a1, wait);
+ vnic_dev_cmd(vdev, CMD_ADDR_ADD, &a0, &a1, wait);
+ }
+ }
+ return r;
+}
+
+int vnic_dev_deinit(struct vnic_dev *vdev)
+{
+ u64 a0 = 0, a1 = 0;
+ int wait = 1000;
+
+ return vnic_dev_cmd(vdev, CMD_DEINIT, &a0, &a1, wait);
+}
+
+void vnic_dev_intr_coal_timer_info_default(struct vnic_dev *vdev)
+{
+ /* Default: hardware intr coal timer is in units of 1.5 usecs */
+ vdev->intr_coal_timer_info.mul = 2;
+ vdev->intr_coal_timer_info.div = 3;
+ vdev->intr_coal_timer_info.max_usec =
+ vnic_dev_intr_coal_timer_hw_to_usec(vdev, 0xffff);
+}
+
+int vnic_dev_link_status(struct vnic_dev *vdev)
+{
+ if (!vnic_dev_notify_ready(vdev))
+ return 0;
+
+ return vdev->notify_copy.link_state;
+}
+
+u32 vnic_dev_port_speed(struct vnic_dev *vdev)
+{
+ if (!vnic_dev_notify_ready(vdev))
+ return 0;
+
+ return vdev->notify_copy.port_speed;
+}
+
+void vnic_dev_set_intr_mode(struct vnic_dev *vdev,
+ enum vnic_dev_intr_mode intr_mode)
+{
+ vdev->intr_mode = intr_mode;
+}
+
+enum vnic_dev_intr_mode vnic_dev_get_intr_mode(
+ struct vnic_dev *vdev)
+{
+ return vdev->intr_mode;
+}
+
+u32 vnic_dev_intr_coal_timer_usec_to_hw(struct vnic_dev *vdev, u32 usec)
+{
+ return (usec * vdev->intr_coal_timer_info.mul) /
+ vdev->intr_coal_timer_info.div;
+}
+
+u32 vnic_dev_intr_coal_timer_hw_to_usec(struct vnic_dev *vdev, u32 hw_cycles)
+{
+ return (hw_cycles * vdev->intr_coal_timer_info.div) /
+ vdev->intr_coal_timer_info.mul;
+}
+
+u32 vnic_dev_get_intr_coal_timer_max(struct vnic_dev *vdev)
+{
+ return vdev->intr_coal_timer_info.max_usec;
+}
+
+void vnic_dev_unregister(struct vnic_dev *vdev)
+{
+ if (vdev) {
+ if (vdev->notify)
+ vdev->free_consistent(vdev->pdev,
+ sizeof(struct vnic_devcmd_notify),
+ vdev->notify,
+ vdev->notify_pa);
+ if (vdev->stats)
+ vdev->free_consistent(vdev->pdev,
+ sizeof(struct vnic_stats),
+ vdev->stats, vdev->stats_pa);
+ if (vdev->fw_info)
+ vdev->free_consistent(vdev->pdev,
+ sizeof(struct vnic_devcmd_fw_info),
+ vdev->fw_info, vdev->fw_info_pa);
+ kfree(vdev);
+ }
+}
+
+struct vnic_dev *vnic_dev_register(struct vnic_dev *vdev,
+ void *priv, struct rte_pci_device *pdev, struct vnic_dev_bar *bar,
+ unsigned int num_bars)
+{
+ if (!vdev) {
+ vdev = kzalloc(sizeof(struct vnic_dev), GFP_ATOMIC);
+ if (!vdev)
+ return NULL;
+ }
+
+ vdev->priv = priv;
+ vdev->pdev = pdev;
+
+ if (vnic_dev_discover_res(vdev, bar, num_bars))
+ goto err_out;
+
+ vdev->devcmd = vnic_dev_get_res(vdev, RES_TYPE_DEVCMD, 0);
+ if (!vdev->devcmd)
+ goto err_out;
+
+ return vdev;
+
+err_out:
+ vnic_dev_unregister(vdev);
+ return NULL;
+}
+
+struct rte_pci_device *vnic_dev_get_pdev(struct vnic_dev *vdev)
+{
+ return vdev->pdev;
+}
+
+static int vnic_dev_cmd_status(struct vnic_dev *vdev, enum vnic_devcmd_cmd cmd,
+ int *status)
+{
+ u64 a0 = cmd, a1 = 0;
+ int wait = 1000;
+ int ret;
+
+ ret = vnic_dev_cmd(vdev, CMD_STATUS, &a0, &a1, wait);
+ if (!ret)
+ *status = (int)a0;
+
+ return ret;
+}
+
+int vnic_dev_set_mac_addr(struct vnic_dev *vdev, u8 *mac_addr)
+{
+ u64 a0 = 0, a1 = 0;
+ int wait = 1000;
+ int i;
+
+ for (i = 0; i < ETH_ALEN; i++)
+ ((u8 *)&a0)[i] = mac_addr[i];
+
+ return vnic_dev_cmd(vdev, CMD_SET_MAC_ADDR, &a0, &a1, wait);
+}
+
+/*
+ * vnic_dev_classifier: add/delete classifier entries
+ * @vdev: vdev of the device
+ * @cmd: CLSF_ADD to add a filter
+ *       CLSF_DEL to delete a filter
+ * @entry: for CLSF_ADD, the caller passes the RQ number in this
+ *         variable; before returning, this function overwrites it
+ *         with the filter_id returned by the firmware.
+ *
+ *         For CLSF_DEL, the caller passes the filter_id of the
+ *         entry to delete; the variable is left unchanged.
+ * @data: filter data
+ */
+int vnic_dev_classifier(struct vnic_dev *vdev, u8 cmd, u16 *entry,
+ struct filter *data)
+{
+ u64 a0 = 0, a1 = 0;
+ int wait = 1000;
+ dma_addr_t tlv_pa;
+ int ret = -EINVAL;
+ struct filter_tlv *tlv, *tlv_va;
+ struct filter_action *action;
+ u64 tlv_size;
+ static unsigned int unique_id;
+ char z_name[RTE_MEMZONE_NAMESIZE];
+
+ if (cmd == CLSF_ADD) {
+ tlv_size = sizeof(struct filter) +
+ sizeof(struct filter_action) +
+ 2*sizeof(struct filter_tlv);
+ snprintf(z_name, sizeof(z_name), "vnic_clsf_%d", unique_id++);
+ tlv_va = vdev->alloc_consistent(vdev->priv,
+ tlv_size, &tlv_pa, z_name);
+ if (!tlv_va)
+ return -ENOMEM;
+ tlv = tlv_va;
+ a0 = tlv_pa;
+ a1 = tlv_size;
+ memset(tlv, 0, tlv_size);
+ tlv->type = CLSF_TLV_FILTER;
+ tlv->length = sizeof(struct filter);
+ *(struct filter *)&tlv->val = *data;
+
+ tlv = (struct filter_tlv *)((char *)tlv +
+ sizeof(struct filter_tlv) +
+ sizeof(struct filter));
+
+ tlv->type = CLSF_TLV_ACTION;
+ tlv->length = sizeof(struct filter_action);
+ action = (struct filter_action *)&tlv->val;
+ action->type = FILTER_ACTION_RQ_STEERING;
+ action->u.rq_idx = *entry;
+
+ ret = vnic_dev_cmd(vdev, CMD_ADD_FILTER, &a0, &a1, wait);
+ *entry = (u16)a0;
+ vdev->free_consistent(vdev->pdev, tlv_size, tlv_va, tlv_pa);
+ } else if (cmd == CLSF_DEL) {
+ a0 = *entry;
+ ret = vnic_dev_cmd(vdev, CMD_DEL_FILTER, &a0, &a1, wait);
+ }
+
+ return ret;
+}
diff --git a/lib/librte_pmd_enic/vnic/vnic_dev.h b/lib/librte_pmd_enic/vnic/vnic_dev.h
new file mode 100644
index 0000000..63c26dd
--- /dev/null
+++ b/lib/librte_pmd_enic/vnic/vnic_dev.h
@@ -0,0 +1,203 @@
+/*
+ * Copyright 2008-2010 Cisco Systems, Inc. All rights reserved.
+ * Copyright 2007 Nuova Systems, Inc. All rights reserved.
+ *
+ * Copyright (c) 2014, Cisco Systems, Inc.
+ * All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ *
+ * 1. Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ *
+ * 2. Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in
+ * the documentation and/or other materials provided with the
+ * distribution.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS
+ * FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE
+ * COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
+ * INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING,
+ * BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
+ * LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
+ * CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
+ * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN
+ * ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+ * POSSIBILITY OF SUCH DAMAGE.
+ *
+ */
+#ident "$Id: vnic_dev.h 196958 2014-11-04 18:23:37Z xuywang $"
+
+#ifndef _VNIC_DEV_H_
+#define _VNIC_DEV_H_
+
+#include "enic_compat.h"
+#include "rte_pci.h"
+#include "vnic_resource.h"
+#include "vnic_devcmd.h"
+
+#ifndef VNIC_PADDR_TARGET
+#define VNIC_PADDR_TARGET 0x0000000000000000ULL
+#endif
+
+#ifndef readq
+static inline u64 readq(void __iomem *reg)
+{
+ return ((u64)readl(reg + 0x4UL) << 32) |
+ (u64)readl(reg);
+}
+
+static inline void writeq(u64 val, void __iomem *reg)
+{
+ writel(val & 0xffffffff, reg);
+ writel(val >> 32, reg + 0x4UL);
+}
+#endif
+
+#undef pr_fmt
+#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
+
+enum vnic_dev_intr_mode {
+ VNIC_DEV_INTR_MODE_UNKNOWN,
+ VNIC_DEV_INTR_MODE_INTX,
+ VNIC_DEV_INTR_MODE_MSI,
+ VNIC_DEV_INTR_MODE_MSIX,
+};
+
+struct vnic_dev_bar {
+ void __iomem *vaddr;
+ dma_addr_t bus_addr;
+ unsigned long len;
+};
+
+struct vnic_dev_ring {
+ void *descs;
+ size_t size;
+ dma_addr_t base_addr;
+ size_t base_align;
+ void *descs_unaligned;
+ size_t size_unaligned;
+ dma_addr_t base_addr_unaligned;
+ unsigned int desc_size;
+ unsigned int desc_count;
+ unsigned int desc_avail;
+};
+
+struct vnic_dev_iomap_info {
+ dma_addr_t bus_addr;
+ unsigned long len;
+ void __iomem *vaddr;
+};
+
+struct vnic_dev;
+struct vnic_stats;
+
+void *vnic_dev_priv(struct vnic_dev *vdev);
+unsigned int vnic_dev_get_res_count(struct vnic_dev *vdev,
+ enum vnic_res_type type);
+void __iomem *vnic_dev_get_res(struct vnic_dev *vdev, enum vnic_res_type type,
+ unsigned int index);
+dma_addr_t vnic_dev_get_res_bus_addr(struct vnic_dev *vdev,
+ enum vnic_res_type type, unsigned int index);
+uint8_t vnic_dev_get_res_bar(struct vnic_dev *vdev,
+ enum vnic_res_type type);
+uint32_t vnic_dev_get_res_offset(struct vnic_dev *vdev,
+ enum vnic_res_type type, unsigned int index);
+unsigned long vnic_dev_get_res_type_len(struct vnic_dev *vdev,
+ enum vnic_res_type type);
+unsigned int vnic_dev_desc_ring_size(struct vnic_dev_ring *ring,
+ unsigned int desc_count, unsigned int desc_size);
+void vnic_dev_clear_desc_ring(struct vnic_dev_ring *ring);
+int vnic_dev_alloc_desc_ring(struct vnic_dev *vdev, struct vnic_dev_ring *ring,
+ unsigned int desc_count, unsigned int desc_size, unsigned int socket_id,
+ char *z_name);
+void vnic_dev_free_desc_ring(struct vnic_dev *vdev,
+ struct vnic_dev_ring *ring);
+int vnic_dev_cmd(struct vnic_dev *vdev, enum vnic_devcmd_cmd cmd,
+ u64 *a0, u64 *a1, int wait);
+int vnic_dev_cmd_args(struct vnic_dev *vdev, enum vnic_devcmd_cmd cmd,
+ u64 *args, int nargs, int wait);
+void vnic_dev_cmd_proxy_by_index_start(struct vnic_dev *vdev, u16 index);
+void vnic_dev_cmd_proxy_by_bdf_start(struct vnic_dev *vdev, u16 bdf);
+void vnic_dev_cmd_proxy_end(struct vnic_dev *vdev);
+int vnic_dev_fw_info(struct vnic_dev *vdev,
+ struct vnic_devcmd_fw_info **fw_info);
+int vnic_dev_asic_info(struct vnic_dev *vdev, u16 *asic_type, u16 *asic_rev);
+int vnic_dev_spec(struct vnic_dev *vdev, unsigned int offset, unsigned int size,
+ void *value);
+int vnic_dev_stats_clear(struct vnic_dev *vdev);
+int vnic_dev_stats_dump(struct vnic_dev *vdev, struct vnic_stats **stats);
+int vnic_dev_hang_notify(struct vnic_dev *vdev);
+int vnic_dev_packet_filter(struct vnic_dev *vdev, int directed, int multicast,
+ int broadcast, int promisc, int allmulti);
+int vnic_dev_packet_filter_all(struct vnic_dev *vdev, int directed,
+ int multicast, int broadcast, int promisc, int allmulti);
+int vnic_dev_add_addr(struct vnic_dev *vdev, u8 *addr);
+int vnic_dev_del_addr(struct vnic_dev *vdev, u8 *addr);
+int vnic_dev_get_mac_addr(struct vnic_dev *vdev, u8 *mac_addr);
+int vnic_dev_raise_intr(struct vnic_dev *vdev, u16 intr);
+int vnic_dev_notify_set(struct vnic_dev *vdev, u16 intr);
+int vnic_dev_notify_unset(struct vnic_dev *vdev);
+int vnic_dev_notify_setcmd(struct vnic_dev *vdev,
+ void *notify_addr, dma_addr_t notify_pa, u16 intr);
+int vnic_dev_notify_unsetcmd(struct vnic_dev *vdev);
+int vnic_dev_link_status(struct vnic_dev *vdev);
+u32 vnic_dev_port_speed(struct vnic_dev *vdev);
+u32 vnic_dev_msg_lvl(struct vnic_dev *vdev);
+u32 vnic_dev_mtu(struct vnic_dev *vdev);
+u32 vnic_dev_link_down_cnt(struct vnic_dev *vdev);
+u32 vnic_dev_notify_status(struct vnic_dev *vdev);
+u32 vnic_dev_uif(struct vnic_dev *vdev);
+int vnic_dev_close(struct vnic_dev *vdev);
+int vnic_dev_enable(struct vnic_dev *vdev);
+int vnic_dev_enable_wait(struct vnic_dev *vdev);
+int vnic_dev_disable(struct vnic_dev *vdev);
+int vnic_dev_open(struct vnic_dev *vdev, int arg);
+int vnic_dev_open_done(struct vnic_dev *vdev, int *done);
+int vnic_dev_init(struct vnic_dev *vdev, int arg);
+int vnic_dev_init_done(struct vnic_dev *vdev, int *done, int *err);
+int vnic_dev_init_prov(struct vnic_dev *vdev, u8 *buf, u32 len);
+int vnic_dev_deinit(struct vnic_dev *vdev);
+void vnic_dev_intr_coal_timer_info_default(struct vnic_dev *vdev);
+int vnic_dev_intr_coal_timer_info(struct vnic_dev *vdev);
+int vnic_dev_soft_reset(struct vnic_dev *vdev, int arg);
+int vnic_dev_soft_reset_done(struct vnic_dev *vdev, int *done);
+int vnic_dev_hang_reset(struct vnic_dev *vdev, int arg);
+int vnic_dev_hang_reset_done(struct vnic_dev *vdev, int *done);
+void vnic_dev_set_intr_mode(struct vnic_dev *vdev,
+ enum vnic_dev_intr_mode intr_mode);
+enum vnic_dev_intr_mode vnic_dev_get_intr_mode(struct vnic_dev *vdev);
+u32 vnic_dev_intr_coal_timer_usec_to_hw(struct vnic_dev *vdev, u32 usec);
+u32 vnic_dev_intr_coal_timer_hw_to_usec(struct vnic_dev *vdev, u32 hw_cycles);
+u32 vnic_dev_get_intr_coal_timer_max(struct vnic_dev *vdev);
+void vnic_dev_unregister(struct vnic_dev *vdev);
+int vnic_dev_set_ig_vlan_rewrite_mode(struct vnic_dev *vdev,
+ u8 ig_vlan_rewrite_mode);
+struct vnic_dev *vnic_dev_register(struct vnic_dev *vdev,
+ void *priv, struct rte_pci_device *pdev, struct vnic_dev_bar *bar,
+ unsigned int num_bars);
+struct rte_pci_device *vnic_dev_get_pdev(struct vnic_dev *vdev);
+int vnic_dev_cmd_init(struct vnic_dev *vdev, int fallback);
+int vnic_dev_get_size(void);
+int vnic_dev_int13(struct vnic_dev *vdev, u64 arg, u32 op);
+int vnic_dev_perbi(struct vnic_dev *vdev, u64 arg, u32 op);
+u32 vnic_dev_perbi_rebuild_cnt(struct vnic_dev *vdev);
+int vnic_dev_init_prov2(struct vnic_dev *vdev, u8 *buf, u32 len);
+int vnic_dev_enable2(struct vnic_dev *vdev, int active);
+int vnic_dev_enable2_done(struct vnic_dev *vdev, int *status);
+int vnic_dev_deinit_done(struct vnic_dev *vdev, int *status);
+int vnic_dev_set_mac_addr(struct vnic_dev *vdev, u8 *mac_addr);
+int vnic_dev_classifier(struct vnic_dev *vdev, u8 cmd, u16 *entry,
+ struct filter *data);
+#ifdef ENIC_VXLAN
+int vnic_dev_overlay_offload_enable_disable(struct vnic_dev *vdev,
+ u8 overlay, u8 config);
+int vnic_dev_overlay_offload_cfg(struct vnic_dev *vdev, u8 overlay,
+ u16 vxlan_udp_port_number);
+#endif
+#endif /* _VNIC_DEV_H_ */
diff --git a/lib/librte_pmd_enic/vnic/vnic_devcmd.h b/lib/librte_pmd_enic/vnic/vnic_devcmd.h
new file mode 100644
index 0000000..b4c87c1
--- /dev/null
+++ b/lib/librte_pmd_enic/vnic/vnic_devcmd.h
@@ -0,0 +1,774 @@
+/*
+ * Copyright 2008-2010 Cisco Systems, Inc. All rights reserved.
+ * Copyright 2007 Nuova Systems, Inc. All rights reserved.
+ *
+ * Copyright (c) 2014, Cisco Systems, Inc.
+ * All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ *
+ * 1. Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ *
+ * 2. Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in
+ * the documentation and/or other materials provided with the
+ * distribution.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS
+ * FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE
+ * COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
+ * INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING,
+ * BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
+ * LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
+ * CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
+ * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN
+ * ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+ * POSSIBILITY OF SUCH DAMAGE.
+ *
+ */
+#ident "$Id: vnic_devcmd.h 173135 2014-05-16 03:14:07Z sanpilla $"
+
+#ifndef _VNIC_DEVCMD_H_
+#define _VNIC_DEVCMD_H_
+
+#define _CMD_NBITS 14
+#define _CMD_VTYPEBITS 10
+#define _CMD_FLAGSBITS 6
+#define _CMD_DIRBITS 2
+
+#define _CMD_NMASK ((1 << _CMD_NBITS)-1)
+#define _CMD_VTYPEMASK ((1 << _CMD_VTYPEBITS)-1)
+#define _CMD_FLAGSMASK ((1 << _CMD_FLAGSBITS)-1)
+#define _CMD_DIRMASK ((1 << _CMD_DIRBITS)-1)
+
+#define _CMD_NSHIFT 0
+#define _CMD_VTYPESHIFT (_CMD_NSHIFT+_CMD_NBITS)
+#define _CMD_FLAGSSHIFT (_CMD_VTYPESHIFT+_CMD_VTYPEBITS)
+#define _CMD_DIRSHIFT (_CMD_FLAGSSHIFT+_CMD_FLAGSBITS)
+
+/*
+ * Direction bits (from host perspective).
+ */
+#define _CMD_DIR_NONE 0U
+#define _CMD_DIR_WRITE 1U
+#define _CMD_DIR_READ 2U
+#define _CMD_DIR_RW (_CMD_DIR_WRITE | _CMD_DIR_READ)
+
+/*
+ * Flag bits.
+ */
+#define _CMD_FLAGS_NONE 0U
+#define _CMD_FLAGS_NOWAIT 1U
+
+/*
+ * vNIC type bits.
+ */
+#define _CMD_VTYPE_NONE 0U
+#define _CMD_VTYPE_ENET 1U
+#define _CMD_VTYPE_FC 2U
+#define _CMD_VTYPE_SCSI 4U
+#define _CMD_VTYPE_ALL (_CMD_VTYPE_ENET | _CMD_VTYPE_FC | _CMD_VTYPE_SCSI)
+
+/*
+ * Used to create cmds.
+ */
+#define _CMDCF(dir, flags, vtype, nr) \
+ (((dir) << _CMD_DIRSHIFT) | \
+ ((flags) << _CMD_FLAGSSHIFT) | \
+ ((vtype) << _CMD_VTYPESHIFT) | \
+ ((nr) << _CMD_NSHIFT))
+#define _CMDC(dir, vtype, nr) _CMDCF(dir, 0, vtype, nr)
+#define _CMDCNW(dir, vtype, nr) _CMDCF(dir, _CMD_FLAGS_NOWAIT, vtype, nr)
+
+/*
+ * Used to decode cmds.
+ */
+#define _CMD_DIR(cmd) (((cmd) >> _CMD_DIRSHIFT) & _CMD_DIRMASK)
+#define _CMD_FLAGS(cmd) (((cmd) >> _CMD_FLAGSSHIFT) & _CMD_FLAGSMASK)
+#define _CMD_VTYPE(cmd) (((cmd) >> _CMD_VTYPESHIFT) & _CMD_VTYPEMASK)
+#define _CMD_N(cmd) (((cmd) >> _CMD_NSHIFT) & _CMD_NMASK)
+
+enum vnic_devcmd_cmd {
+ CMD_NONE = _CMDC(_CMD_DIR_NONE, _CMD_VTYPE_NONE, 0),
+
+ /*
+ * mcpu fw info in mem:
+ * in:
+ * (u64)a0=paddr to struct vnic_devcmd_fw_info
+ * action:
+ * Fills in struct vnic_devcmd_fw_info (128 bytes)
+ * note:
+ * An old definition of CMD_MCPU_FW_INFO
+ */
+ CMD_MCPU_FW_INFO_OLD = _CMDC(_CMD_DIR_WRITE, _CMD_VTYPE_ALL, 1),
+
+ /*
+ * mcpu fw info in mem:
+ * in:
+ * (u64)a0=paddr to struct vnic_devcmd_fw_info
+ * (u16)a1=size of the structure
+ * out:
+ * (u16)a1=0 for in:a1 = 0,
+ * data size actually written for other values.
+ * action:
+ * Fills in first 128 bytes of vnic_devcmd_fw_info for in:a1 = 0,
+ * first in:a1 bytes for 0 < in:a1 <= 132,
+ * 132 bytes for other values of in:a1.
+ * note:
+ * CMD_MCPU_FW_INFO and CMD_MCPU_FW_INFO_OLD have the same enum 1
+ * for source compatibility.
+ */
+ CMD_MCPU_FW_INFO = _CMDC(_CMD_DIR_RW, _CMD_VTYPE_ALL, 1),
+
+ /* dev-specific block member:
+ * in: (u16)a0=offset,(u8)a1=size
+ * out: a0=value */
+ CMD_DEV_SPEC = _CMDC(_CMD_DIR_RW, _CMD_VTYPE_ALL, 2),
+
+ /* stats clear */
+ CMD_STATS_CLEAR = _CMDCNW(_CMD_DIR_NONE, _CMD_VTYPE_ALL, 3),
+
+ /* stats dump in mem: (u64)a0=paddr to stats area,
+ * (u16)a1=sizeof stats area */
+ CMD_STATS_DUMP = _CMDC(_CMD_DIR_WRITE, _CMD_VTYPE_ALL, 4),
+
+ /* set Rx packet filter: (u32)a0=filters (see CMD_PFILTER_*) */
+ CMD_PACKET_FILTER = _CMDCNW(_CMD_DIR_WRITE, _CMD_VTYPE_ENET, 7),
+
+ /* set Rx packet filter for all: (u32)a0=filters (see CMD_PFILTER_*) */
+ CMD_PACKET_FILTER_ALL = _CMDCNW(_CMD_DIR_WRITE, _CMD_VTYPE_ALL, 7),
+
+ /* hang detection notification */
+ CMD_HANG_NOTIFY = _CMDC(_CMD_DIR_NONE, _CMD_VTYPE_ALL, 8),
+
+ /* MAC address in (u48)a0 */
+ CMD_GET_MAC_ADDR = _CMDC(_CMD_DIR_READ,
+ _CMD_VTYPE_ENET | _CMD_VTYPE_FC, 9),
+
+ /* add addr from (u48)a0 */
+ CMD_ADDR_ADD = _CMDCNW(_CMD_DIR_WRITE,
+ _CMD_VTYPE_ENET | _CMD_VTYPE_FC, 12),
+
+ /* del addr from (u48)a0 */
+ CMD_ADDR_DEL = _CMDCNW(_CMD_DIR_WRITE,
+ _CMD_VTYPE_ENET | _CMD_VTYPE_FC, 13),
+
+ /* add VLAN id in (u16)a0 */
+ CMD_VLAN_ADD = _CMDCNW(_CMD_DIR_WRITE, _CMD_VTYPE_ENET, 14),
+
+ /* del VLAN id in (u16)a0 */
+ CMD_VLAN_DEL = _CMDCNW(_CMD_DIR_WRITE, _CMD_VTYPE_ENET, 15),
+
+ /* nic_cfg in (u32)a0 */
+ CMD_NIC_CFG = _CMDCNW(_CMD_DIR_WRITE, _CMD_VTYPE_ALL, 16),
+
+ /* union vnic_rss_key in mem: (u64)a0=paddr, (u16)a1=len */
+ CMD_RSS_KEY = _CMDC(_CMD_DIR_WRITE, _CMD_VTYPE_ENET, 17),
+
+ /* union vnic_rss_cpu in mem: (u64)a0=paddr, (u16)a1=len */
+ CMD_RSS_CPU = _CMDC(_CMD_DIR_WRITE, _CMD_VTYPE_ENET, 18),
+
+ /* initiate softreset */
+ CMD_SOFT_RESET = _CMDCNW(_CMD_DIR_NONE, _CMD_VTYPE_ALL, 19),
+
+ /* softreset status:
+ * out: a0=0 reset complete, a0=1 reset in progress */
+ CMD_SOFT_RESET_STATUS = _CMDC(_CMD_DIR_READ, _CMD_VTYPE_ALL, 20),
+
+ /* set struct vnic_devcmd_notify buffer in mem:
+ * in:
+ * (u64)a0=paddr to notify (set paddr=0 to unset)
+ * (u32)a1 & 0x00000000ffffffff=sizeof(struct vnic_devcmd_notify)
+ * (u16)a1 & 0x0000ffff00000000=intr num (-1 for no intr)
+ * out:
+ * (u32)a1 = effective size
+ */
+ CMD_NOTIFY = _CMDC(_CMD_DIR_RW, _CMD_VTYPE_ALL, 21),
+
+ /* UNDI API: (u64)a0=paddr to s_PXENV_UNDI_ struct,
+ * (u8)a1=PXENV_UNDI_xxx */
+ CMD_UNDI = _CMDC(_CMD_DIR_WRITE, _CMD_VTYPE_ENET, 22),
+
+ /* initiate open sequence (u32)a0=flags (see CMD_OPENF_*) */
+ CMD_OPEN = _CMDCNW(_CMD_DIR_WRITE, _CMD_VTYPE_ALL, 23),
+
+ /* open status:
+ * out: a0=0 open complete, a0=1 open in progress */
+ CMD_OPEN_STATUS = _CMDC(_CMD_DIR_READ, _CMD_VTYPE_ALL, 24),
+
+ /* close vnic */
+ CMD_CLOSE = _CMDC(_CMD_DIR_NONE, _CMD_VTYPE_ALL, 25),
+
+ /* initialize virtual link: (u32)a0=flags (see CMD_INITF_*) */
+/***** Replaced by CMD_INIT *****/
+ CMD_INIT_v1 = _CMDCNW(_CMD_DIR_READ, _CMD_VTYPE_ALL, 26),
+
+ /* variant of CMD_INIT, with provisioning info
+ * (u64)a0=paddr of vnic_devcmd_provinfo
+ * (u32)a1=sizeof provision info */
+ CMD_INIT_PROV_INFO = _CMDC(_CMD_DIR_WRITE, _CMD_VTYPE_ENET, 27),
+
+ /* enable virtual link */
+ CMD_ENABLE = _CMDCNW(_CMD_DIR_WRITE, _CMD_VTYPE_ALL, 28),
+
+ /* enable virtual link, waiting variant. */
+ CMD_ENABLE_WAIT = _CMDC(_CMD_DIR_WRITE, _CMD_VTYPE_ALL, 28),
+
+ /* disable virtual link */
+ CMD_DISABLE = _CMDC(_CMD_DIR_NONE, _CMD_VTYPE_ALL, 29),
+
+ /* stats dump sum of all vnic stats on same uplink in mem:
+ * (u64)a0=paddr
+ * (u16)a1=sizeof stats area */
+ CMD_STATS_DUMP_ALL = _CMDC(_CMD_DIR_WRITE, _CMD_VTYPE_ALL, 30),
+
+ /* init status:
+ * out: a0=0 init complete, a0=1 init in progress
+ * if a0=0, a1=errno */
+ CMD_INIT_STATUS = _CMDC(_CMD_DIR_READ, _CMD_VTYPE_ALL, 31),
+
+ /* INT13 API: (u64)a0=paddr to vnic_int13_params struct
+ * (u32)a1=INT13_CMD_xxx */
+ CMD_INT13 = _CMDC(_CMD_DIR_WRITE, _CMD_VTYPE_FC, 32),
+
+ /* logical uplink enable/disable: (u64)a0: 0/1=disable/enable */
+ CMD_LOGICAL_UPLINK = _CMDCNW(_CMD_DIR_WRITE, _CMD_VTYPE_ENET, 33),
+
+ /* undo initialize of virtual link */
+ CMD_DEINIT = _CMDCNW(_CMD_DIR_NONE, _CMD_VTYPE_ALL, 34),
+
+ /* initialize virtual link: (u32)a0=flags (see CMD_INITF_*) */
+ CMD_INIT = _CMDCNW(_CMD_DIR_WRITE, _CMD_VTYPE_ALL, 35),
+
+ /* check fw capability of a cmd:
+ * in: (u32)a0=cmd
+ * out: (u32)a0=errno, 0:valid cmd, a1=supported VNIC_STF_* bits */
+ CMD_CAPABILITY = _CMDC(_CMD_DIR_RW, _CMD_VTYPE_ALL, 36),
+
+ /* persistent binding info
+ * in: (u64)a0=paddr of arg
+ * (u32)a1=CMD_PERBI_XXX */
+ CMD_PERBI = _CMDC(_CMD_DIR_RW, _CMD_VTYPE_FC, 37),
+
+ /* Interrupt Assert Register functionality
+ * in: (u16)a0=interrupt number to assert
+ */
+ CMD_IAR = _CMDCNW(_CMD_DIR_WRITE, _CMD_VTYPE_ALL, 38),
+
+ /* initiate hangreset, like softreset after hang detected */
+ CMD_HANG_RESET = _CMDC(_CMD_DIR_NONE, _CMD_VTYPE_ALL, 39),
+
+ /* hangreset status:
+ * out: a0=0 reset complete, a0=1 reset in progress */
+ CMD_HANG_RESET_STATUS = _CMDC(_CMD_DIR_READ, _CMD_VTYPE_ALL, 40),
+
+ /*
+ * Set hw ingress packet vlan rewrite mode:
+ * in: (u32)a0=new vlan rewrite mode
+ * out: (u32)a0=old vlan rewrite mode */
+ CMD_IG_VLAN_REWRITE_MODE = _CMDC(_CMD_DIR_RW, _CMD_VTYPE_ENET, 41),
+
+ /*
+ * in: (u16)a0=bdf of target vnic
+ * (u32)a1=cmd to proxy
+ * a2-a15=args to cmd in a1
+ * out: (u32)a0=status of proxied cmd
+ * a1-a15=out args of proxied cmd */
+ CMD_PROXY_BY_BDF = _CMDC(_CMD_DIR_RW, _CMD_VTYPE_ALL, 42),
+
+ /*
+ * As for BY_BDF except a0 is index of hvnlink subordinate vnic
+ * or SR-IOV virtual vnic
+ */
+ CMD_PROXY_BY_INDEX = _CMDC(_CMD_DIR_RW, _CMD_VTYPE_ALL, 43),
+
+ /*
+ * For HPP toggle:
+ * adapter-info-get
+ * in: (u64)a0=physical address of buffer passed in from caller.
+ * (u16)a1=size of buffer specified in a0.
+ * out: (u64)a0=physical address of buffer passed in from caller.
+ * (u16)a1=actual bytes from VIF-CONFIG-INFO TLV, or
+ * 0 if no VIF-CONFIG-INFO TLV was ever received. */
+ CMD_CONFIG_INFO_GET = _CMDC(_CMD_DIR_RW, _CMD_VTYPE_ALL, 44),
+
+ /*
+ * INT13 API: (u64)a0=paddr to vnic_int13_params struct
+ * (u32)a1=INT13_CMD_xxx
+ */
+ CMD_INT13_ALL = _CMDC(_CMD_DIR_WRITE, _CMD_VTYPE_ALL, 45),
+
+ /*
+ * Set default vlan:
+ * in: (u16)a0=new default vlan
+ * (u16)a1=zero for overriding vlan with param a0,
+ * non-zero for resetting vlan to the default
+ * out: (u16)a0=old default vlan
+ */
+ CMD_SET_DEFAULT_VLAN = _CMDC(_CMD_DIR_RW, _CMD_VTYPE_ALL, 46),
+
+ /* init_prov_info2:
+ * Variant of CMD_INIT_PROV_INFO, where it will not try to enable
+ * the vnic until CMD_ENABLE2 is issued.
+ * (u64)a0=paddr of vnic_devcmd_provinfo
+ * (u32)a1=sizeof provision info */
+ CMD_INIT_PROV_INFO2 = _CMDC(_CMD_DIR_WRITE, _CMD_VTYPE_ENET, 47),
+
+ /* enable2:
+ * (u32)a0=0 ==> standby
+ * =CMD_ENABLE2_ACTIVE ==> active
+ */
+ CMD_ENABLE2 = _CMDC(_CMD_DIR_WRITE, _CMD_VTYPE_ENET, 48),
+
+ /*
+ * cmd_status:
+ * Returns the status of the specified command
+ * Input:
+ * a0 = command for which status is being queried.
+ * Possible values are:
+ * CMD_SOFT_RESET
+ * CMD_HANG_RESET
+ * CMD_OPEN
+ * CMD_INIT
+ * CMD_INIT_PROV_INFO
+ * CMD_DEINIT
+ * CMD_INIT_PROV_INFO2
+ * CMD_ENABLE2
+ * Output:
+ * if status == STAT_ERROR
+ * a0 = ERR_ENOTSUPPORTED - status for command in a0 is
+ * not supported
+ * if status == STAT_NONE
+ * a0 = status of the devcmd specified in a0 as follows.
+ * ERR_SUCCESS - command in a0 completed successfully
+ * ERR_EINPROGRESS - command in a0 is still in progress
+ */
+ CMD_STATUS = _CMDC(_CMD_DIR_RW, _CMD_VTYPE_ALL, 49),
+
+ /*
+ * Returns interrupt coalescing timer conversion factors.
+ * After calling this devcmd, ENIC driver can convert
+ * interrupt coalescing timer in usec into CPU cycles as follows:
+ *
+ * intr_timer_cycles = intr_timer_usec * multiplier / divisor
+ *
+ * Interrupt coalescing timer in usecs can be converted/obtained
+ * from CPU cycles as follows:
+ *
+ * intr_timer_usec = intr_timer_cycles * divisor / multiplier
+ *
+ * in: none
+ * out: (u32)a0 = multiplier
+ * (u32)a1 = divisor
+ * (u32)a2 = maximum timer value in usec
+ */
+ CMD_INTR_COAL_CONVERT = _CMDC(_CMD_DIR_READ, _CMD_VTYPE_ALL, 50),
+
+ /*
+ * ISCSI DUMP API:
+ * in: (u64)a0=paddr of the param or param itself
+ * (u32)a1=ISCSI_CMD_xxx
+ */
+ CMD_ISCSI_DUMP_REQ = _CMDC(_CMD_DIR_WRITE, _CMD_VTYPE_ALL, 51),
+
+ /*
+ * ISCSI DUMP STATUS API:
+ * in: (u32)a0=cmd tag
+ * in: (u32)a1=ISCSI_CMD_xxx
+ * out: (u32)a0=cmd status
+ */
+ CMD_ISCSI_DUMP_STATUS = _CMDC(_CMD_DIR_RW, _CMD_VTYPE_ALL, 52),
+
+ /*
+ * Subvnic migration from MQ <--> VF.
+ * Enable the LIF migration from MQ to VF and vice versa.
+ * MQ and VF indexes are statically bound at the time of
+ * initialization. Based on the direction of migration, the
+ * resources of either the MQ or the VF shall be attached
+ * to the LIF.
+ * in: (u32)a0=Direction of Migration
+ * 0=> Migrate to VF
+ * 1=> Migrate to MQ
+ * (u32)a1=VF index (MQ index)
+ */
+ CMD_MIGRATE_SUBVNIC = _CMDC(_CMD_DIR_WRITE, _CMD_VTYPE_ENET, 53),
+
+
+ /*
+ * Register / Deregister the notification block for MQ subvnics
+ * in:
+ * (u64)a0=paddr to notify (set paddr=0 to unset)
+ * (u32)a1 & 0x00000000ffffffff=sizeof(struct vnic_devcmd_notify)
+ * (u16)a1 & 0x0000ffff00000000=intr num (-1 for no intr)
+ * out:
+ * (u32)a1 = effective size
+ */
+ CMD_SUBVNIC_NOTIFY = _CMDC(_CMD_DIR_RW, _CMD_VTYPE_ALL, 54),
+
+ /*
+ * Set the predefined mac address as default
+ * in:
+ * (u48)a0=mac addr
+ */
+ CMD_SET_MAC_ADDR = _CMDC(_CMD_DIR_WRITE, _CMD_VTYPE_ENET, 55),
+
+ /* Update the provisioning info of the given VIF
+ * (u64)a0=paddr of vnic_devcmd_provinfo
+ * (u32)a1=sizeof provision info */
+ CMD_PROV_INFO_UPDATE = _CMDC(_CMD_DIR_WRITE, _CMD_VTYPE_ENET, 56),
+
+ /*
+ * Initialization for the devcmd2 interface.
+ * in: (u64) a0=host result buffer physical address
+ * in: (u16) a1=number of entries in result buffer
+ */
+ CMD_INITIALIZE_DEVCMD2 = _CMDC(_CMD_DIR_WRITE, _CMD_VTYPE_ALL, 57),
+
+ /*
+ * Add a filter.
+ * in: (u64) a0= filter address
+ * (u32) a1= size of filter
+ * out: (u32) a0=filter identifier
+ */
+ CMD_ADD_FILTER = _CMDC(_CMD_DIR_RW, _CMD_VTYPE_ENET, 58),
+
+ /*
+ * Delete a filter.
+ * in: (u32) a0=filter identifier
+ */
+ CMD_DEL_FILTER = _CMDC(_CMD_DIR_WRITE, _CMD_VTYPE_ENET, 59),
+
+ /*
+ * Enable a Queue Pair in User space NIC
+ * in: (u32) a0=Queue Pair number
+ * (u32) a1= command
+ */
+ CMD_QP_ENABLE = _CMDC(_CMD_DIR_WRITE, _CMD_VTYPE_ENET, 60),
+
+ /*
+ * Disable a Queue Pair in User space NIC
+ * in: (u32) a0=Queue Pair number
+ * (u32) a1= command
+ */
+ CMD_QP_DISABLE = _CMDC(_CMD_DIR_WRITE, _CMD_VTYPE_ENET, 61),
+
+ /*
+ * Stats dump Queue Pair in User space NIC
+ * in: (u32) a0=Queue Pair number
+ * (u64) a1=host buffer addr for status dump
+ * (u32) a2=length of the buffer
+ */
+ CMD_QP_STATS_DUMP = _CMDC(_CMD_DIR_WRITE, _CMD_VTYPE_ENET, 62),
+
+ /*
+ * Clear stats for Queue Pair in User space NIC
+ * in: (u32) a0=Queue Pair number
+ */
+ CMD_QP_STATS_CLEAR = _CMDC(_CMD_DIR_WRITE, _CMD_VTYPE_ENET, 63),
+
+ /*
+ * Enable/Disable overlay offloads on the given vnic
+ * in: (u8) a0 = OVERLAY_FEATURE_NVGRE : NVGRE
+ * a0 = OVERLAY_FEATURE_VXLAN : VxLAN
+ * in: (u8) a1 = OVERLAY_OFFLOAD_ENABLE : Enable
+ * a1 = OVERLAY_OFFLOAD_DISABLE : Disable
+ */
+ CMD_OVERLAY_OFFLOAD_ENABLE_DISABLE =
+ _CMDC(_CMD_DIR_WRITE, _CMD_VTYPE_ENET, 72),
+
+ /*
+ * Configuration of overlay offloads feature on a given vNIC
+ * in: (u8) a0 = DEVCMD_OVERLAY_NVGRE : NVGRE
+ * a0 = DEVCMD_OVERLAY_VXLAN : VxLAN
+ * in: (u8) a1 = VXLAN_PORT_UPDATE : VxLAN
+ * in: (u16) a2 = unsigned short int port information
+ */
+ CMD_OVERLAY_OFFLOAD_CFG = _CMDC(_CMD_DIR_WRITE, _CMD_VTYPE_ENET, 73),
+};
+
+/* CMD_ENABLE2 flags */
+#define CMD_ENABLE2_STANDBY 0x0
+#define CMD_ENABLE2_ACTIVE 0x1
+
+/* flags for CMD_OPEN */
+#define CMD_OPENF_OPROM 0x1 /* open coming from option rom */
+
+/* flags for CMD_INIT */
+#define CMD_INITF_DEFAULT_MAC 0x1 /* init with default mac addr */
+
+/* flags for CMD_PACKET_FILTER */
+#define CMD_PFILTER_DIRECTED 0x01
+#define CMD_PFILTER_MULTICAST 0x02
+#define CMD_PFILTER_BROADCAST 0x04
+#define CMD_PFILTER_PROMISCUOUS 0x08
+#define CMD_PFILTER_ALL_MULTICAST 0x10
+
+/* Commands for CMD_QP_ENABLE/CMD_QP_DISABLE */
+#define CMD_QP_RQWQ 0x0
+
+/* rewrite modes for CMD_IG_VLAN_REWRITE_MODE */
+#define IG_VLAN_REWRITE_MODE_DEFAULT_TRUNK 0
+#define IG_VLAN_REWRITE_MODE_UNTAG_DEFAULT_VLAN 1
+#define IG_VLAN_REWRITE_MODE_PRIORITY_TAG_DEFAULT_VLAN 2
+#define IG_VLAN_REWRITE_MODE_PASS_THRU 3
+
+enum vnic_devcmd_status {
+ STAT_NONE = 0,
+ STAT_BUSY = 1 << 0, /* cmd in progress */
+ STAT_ERROR = 1 << 1, /* last cmd caused error (code in a0) */
+};
+
+enum vnic_devcmd_error {
+ ERR_SUCCESS = 0,
+ ERR_EINVAL = 1,
+ ERR_EFAULT = 2,
+ ERR_EPERM = 3,
+ ERR_EBUSY = 4,
+ ERR_ECMDUNKNOWN = 5,
+ ERR_EBADSTATE = 6,
+ ERR_ENOMEM = 7,
+ ERR_ETIMEDOUT = 8,
+ ERR_ELINKDOWN = 9,
+ ERR_EMAXRES = 10,
+ ERR_ENOTSUPPORTED = 11,
+ ERR_EINPROGRESS = 12,
+ ERR_MAX
+};
+
+/*
+ * note: hw_version and asic_rev refer to the same thing,
+ * but have different formats. hw_version is
+ * a 32-byte string (e.g. "A2") and asic_rev is
+ * a 16-bit integer (e.g. 0xA2).
+ */
+struct vnic_devcmd_fw_info {
+ char fw_version[32];
+ char fw_build[32];
+ char hw_version[32];
+ char hw_serial_number[32];
+ u16 asic_type;
+ u16 asic_rev;
+};
+
+enum fwinfo_asic_type {
+ FWINFO_ASIC_TYPE_UNKNOWN,
+ FWINFO_ASIC_TYPE_PALO,
+ FWINFO_ASIC_TYPE_SERENO,
+};
+
+
+struct vnic_devcmd_notify {
+ u32 csum; /* checksum over following words */
+
+ u32 link_state; /* link up == 1 */
+ u32 port_speed; /* effective port speed (rate limit) */
+ u32 mtu; /* MTU */
+ u32 msglvl; /* requested driver msg lvl */
+ u32 uif; /* uplink interface */
+ u32 status; /* status bits (see VNIC_STF_*) */
+ u32 error; /* error code (see ERR_*) for first ERR */
+ u32 link_down_cnt; /* running count of link down transitions */
+ u32 perbi_rebuild_cnt; /* running count of perbi rebuilds */
+};
+#define VNIC_STF_FATAL_ERR 0x0001 /* fatal fw error */
+#define VNIC_STF_STD_PAUSE 0x0002 /* standard link-level pause on */
+#define VNIC_STF_PFC_PAUSE 0x0004 /* priority flow control pause on */
+/* all supported status flags */
+#define VNIC_STF_ALL (VNIC_STF_FATAL_ERR |\
+ VNIC_STF_STD_PAUSE |\
+ VNIC_STF_PFC_PAUSE |\
+ 0)
+
+struct vnic_devcmd_provinfo {
+ u8 oui[3];
+ u8 type;
+ u8 data[0];
+};
+
+/*
+ * These are used in flags field of different filters to denote
+ * valid fields used.
+ */
+#define FILTER_FIELD_VALID(fld) (1 << ((fld) - 1))
+
+#define FILTER_FIELDS_USNIC (FILTER_FIELD_VALID(1) | \
+ FILTER_FIELD_VALID(2) | \
+ FILTER_FIELD_VALID(3) | \
+ FILTER_FIELD_VALID(4))
+
+#define FILTER_FIELDS_IPV4_5TUPLE (FILTER_FIELD_VALID(1) | \
+ FILTER_FIELD_VALID(2) | \
+ FILTER_FIELD_VALID(3) | \
+ FILTER_FIELD_VALID(4) | \
+ FILTER_FIELD_VALID(5))
+
+#define FILTER_FIELDS_MAC_VLAN (FILTER_FIELD_VALID(1) | \
+ FILTER_FIELD_VALID(2))
+
+#define FILTER_FIELD_USNIC_VLAN FILTER_FIELD_VALID(1)
+#define FILTER_FIELD_USNIC_ETHTYPE FILTER_FIELD_VALID(2)
+#define FILTER_FIELD_USNIC_PROTO FILTER_FIELD_VALID(3)
+#define FILTER_FIELD_USNIC_ID FILTER_FIELD_VALID(4)
+
+struct filter_usnic_id {
+ u32 flags;
+ u16 vlan;
+ u16 ethtype;
+ u8 proto_version;
+ u32 usnic_id;
+} __attribute__((packed));
+
+#define FILTER_FIELD_5TUP_PROTO FILTER_FIELD_VALID(1)
+#define FILTER_FIELD_5TUP_SRC_AD FILTER_FIELD_VALID(2)
+#define FILTER_FIELD_5TUP_DST_AD FILTER_FIELD_VALID(3)
+#define FILTER_FIELD_5TUP_SRC_PT FILTER_FIELD_VALID(4)
+#define FILTER_FIELD_5TUP_DST_PT FILTER_FIELD_VALID(5)
+
+/* Enums for the protocol field. */
+enum protocol_e {
+ PROTO_UDP = 0,
+ PROTO_TCP = 1,
+};
+
+struct filter_ipv4_5tuple {
+ u32 flags;
+ u32 protocol;
+ u32 src_addr;
+ u32 dst_addr;
+ u16 src_port;
+ u16 dst_port;
+} __attribute__((packed));
+
+#define FILTER_FIELD_VMQ_VLAN FILTER_FIELD_VALID(1)
+#define FILTER_FIELD_VMQ_MAC FILTER_FIELD_VALID(2)
+
+struct filter_mac_vlan {
+ u32 flags;
+ u16 vlan;
+ u8 mac_addr[6];
+} __attribute__((packed));
+
+/* Specifies the filter_action type. */
+enum {
+ FILTER_ACTION_RQ_STEERING = 0,
+ FILTER_ACTION_MAX
+};
+
+struct filter_action {
+ u32 type;
+ union {
+ u32 rq_idx;
+ } u;
+} __attribute__((packed));
+
+/* Specifies the filter type. */
+enum filter_type {
+ FILTER_USNIC_ID = 0,
+ FILTER_IPV4_5TUPLE = 1,
+ FILTER_MAC_VLAN = 2,
+ FILTER_MAX
+};
+
+struct filter {
+ u32 type;
+ union {
+ struct filter_usnic_id usnic;
+ struct filter_ipv4_5tuple ipv4;
+ struct filter_mac_vlan mac_vlan;
+ } u;
+} __attribute__((packed));
+
+enum {
+ CLSF_TLV_FILTER = 0,
+ CLSF_TLV_ACTION = 1,
+};
+
+#define FILTER_MAX_BUF_SIZE 100 /* Maximum size of buffer to CMD_ADD_FILTER */
+
+struct filter_tlv {
+ u_int32_t type;
+ u_int32_t length;
+ u_int32_t val[0];
+};
+
+enum {
+ CLSF_ADD = 0,
+ CLSF_DEL = 1,
+};
+
+/*
+ * Writing cmd register causes STAT_BUSY to get set in status register.
+ * When cmd completes, STAT_BUSY will be cleared.
+ *
+ * If cmd completed successfully STAT_ERROR will be clear
+ * and args registers contain cmd-specific results.
+ *
+ * If cmd error, STAT_ERROR will be set and args[0] contains error code.
+ *
+ * status register is read-only. While STAT_BUSY is set,
+ * all other register contents are read-only.
+ */
+
+/* Make sizeof(vnic_devcmd) a power-of-2 for I/O BAR. */
+#define VNIC_DEVCMD_NARGS 15
+struct vnic_devcmd {
+ u32 status; /* RO */
+ u32 cmd; /* RW */
+ u64 args[VNIC_DEVCMD_NARGS]; /* RW cmd args (little-endian) */
+};
+
+/*
+ * Version 2 of the interface.
+ *
+ * Some things are carried over, notably the vnic_devcmd_cmd enum.
+ */
+
+/*
+ * Flags for vnic_devcmd2.flags
+ */
+
+#define DEVCMD2_FNORESULT 0x1 /* Don't copy result to host */
+
+#define VNIC_DEVCMD2_NARGS VNIC_DEVCMD_NARGS
+struct vnic_devcmd2 {
+ u16 pad;
+ u16 flags;
+ u32 cmd; /* same command #defines as original */
+ u64 args[VNIC_DEVCMD2_NARGS];
+};
+
+#define VNIC_DEVCMD2_NRESULTS VNIC_DEVCMD_NARGS
+struct devcmd2_result {
+ u64 results[VNIC_DEVCMD2_NRESULTS];
+ u32 pad;
+ u16 completed_index; /* into copy WQ */
+ u8 error; /* same error codes as original */
+ u8 color; /* 0 or 1 as with completion queues */
+};
+
+#define DEVCMD2_RING_SIZE 32
+#define DEVCMD2_DESC_SIZE 128
+
+#define DEVCMD2_RESULTS_SIZE_MAX ((1 << 16) - 1)
+
+/* Overlay related definitions */
+
+/*
+ * This enum lists the flags associated with each of the overlay features
+ */
+typedef enum {
+ OVERLAY_FEATURE_NVGRE = 1,
+ OVERLAY_FEATURE_VXLAN,
+ OVERLAY_FEATURE_MAX,
+} overlay_feature_t;
+
+#define OVERLAY_OFFLOAD_ENABLE 0
+#define OVERLAY_OFFLOAD_DISABLE 1
+
+#define OVERLAY_CFG_VXLAN_PORT_UPDATE 0
+#endif /* _VNIC_DEVCMD_H_ */
diff --git a/lib/librte_pmd_enic/vnic/vnic_enet.h b/lib/librte_pmd_enic/vnic/vnic_enet.h
new file mode 100644
index 0000000..9d3cc07
--- /dev/null
+++ b/lib/librte_pmd_enic/vnic/vnic_enet.h
@@ -0,0 +1,78 @@
+/*
+ * Copyright 2008-2010 Cisco Systems, Inc. All rights reserved.
+ * Copyright 2007 Nuova Systems, Inc. All rights reserved.
+ *
+ * Copyright (c) 2014, Cisco Systems, Inc.
+ * All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ *
+ * 1. Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ *
+ * 2. Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in
+ * the documentation and/or other materials provided with the
+ * distribution.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS
+ * FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE
+ * COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
+ * INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING,
+ * BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
+ * LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
+ * CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
+ * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN
+ * ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+ * POSSIBILITY OF SUCH DAMAGE.
+ *
+ */
+#ident "$Id: vnic_enet.h 175806 2014-06-04 19:31:17Z rfaucett $"
+
+#ifndef _VNIC_ENIC_H_
+#define _VNIC_ENIC_H_
+
+/* Device-specific region: enet configuration */
+struct vnic_enet_config {
+ u32 flags;
+ u32 wq_desc_count;
+ u32 rq_desc_count;
+ u16 mtu;
+ u16 intr_timer_deprecated;
+ u8 intr_timer_type;
+ u8 intr_mode;
+ char devname[16];
+ u32 intr_timer_usec;
+ u16 loop_tag;
+ u16 vf_rq_count;
+ u16 num_arfs;
+ u64 mem_paddr;
+};
+
+#define VENETF_TSO 0x1 /* TSO enabled */
+#define VENETF_LRO 0x2 /* LRO enabled */
+#define VENETF_RXCSUM 0x4 /* RX csum enabled */
+#define VENETF_TXCSUM 0x8 /* TX csum enabled */
+#define VENETF_RSS 0x10 /* RSS enabled */
+#define VENETF_RSSHASH_IPV4 0x20 /* Hash on IPv4 fields */
+#define VENETF_RSSHASH_TCPIPV4 0x40 /* Hash on TCP + IPv4 fields */
+#define VENETF_RSSHASH_IPV6 0x80 /* Hash on IPv6 fields */
+#define VENETF_RSSHASH_TCPIPV6 0x100 /* Hash on TCP + IPv6 fields */
+#define VENETF_RSSHASH_IPV6_EX 0x200 /* Hash on IPv6 extended fields */
+#define VENETF_RSSHASH_TCPIPV6_EX 0x400 /* Hash on TCP + IPv6 ext. fields */
+#define VENETF_LOOP 0x800 /* Loopback enabled */
+#define VENETF_VMQ 0x4000 /* using VMQ flag for VMware NETQ */
+#define VENETF_VXLAN 0x10000 /* VxLAN offload */
+#define VENETF_NVGRE 0x20000 /* NVGRE offload */
+#define VENET_INTR_TYPE_MIN 0 /* Timer specs min interrupt spacing */
+#define VENET_INTR_TYPE_IDLE 1 /* Timer specs idle time before irq */
+
+#define VENET_INTR_MODE_ANY 0 /* Try MSI-X, then MSI, then INTx */
+#define VENET_INTR_MODE_MSI 1 /* Try MSI then INTx */
+#define VENET_INTR_MODE_INTX 2 /* Try INTx only */
+
+#endif /* _VNIC_ENIC_H_ */
diff --git a/lib/librte_pmd_enic/vnic/vnic_intr.c b/lib/librte_pmd_enic/vnic/vnic_intr.c
new file mode 100644
index 0000000..9be3744
--- /dev/null
+++ b/lib/librte_pmd_enic/vnic/vnic_intr.c
@@ -0,0 +1,83 @@
+/*
+ * Copyright 2008-2010 Cisco Systems, Inc. All rights reserved.
+ * Copyright 2007 Nuova Systems, Inc. All rights reserved.
+ *
+ * Copyright (c) 2014, Cisco Systems, Inc.
+ * All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ *
+ * 1. Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ *
+ * 2. Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in
+ * the documentation and/or other materials provided with the
+ * distribution.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS
+ * FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE
+ * COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
+ * INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING,
+ * BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
+ * LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
+ * CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
+ * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN
+ * ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+ * POSSIBILITY OF SUCH DAMAGE.
+ *
+ */
+#ident "$Id: vnic_intr.c 171146 2014-05-02 07:08:20Z ssujith $"
+
+#include "vnic_dev.h"
+#include "vnic_intr.h"
+
+void vnic_intr_free(struct vnic_intr *intr)
+{
+ intr->ctrl = NULL;
+}
+
+int vnic_intr_alloc(struct vnic_dev *vdev, struct vnic_intr *intr,
+ unsigned int index)
+{
+ intr->index = index;
+ intr->vdev = vdev;
+
+ intr->ctrl = vnic_dev_get_res(vdev, RES_TYPE_INTR_CTRL, index);
+ if (!intr->ctrl) {
+ pr_err("Failed to hook INTR[%d].ctrl resource\n", index);
+ return -EINVAL;
+ }
+
+ return 0;
+}
+
+void vnic_intr_init(struct vnic_intr *intr, u32 coalescing_timer,
+ unsigned int coalescing_type, unsigned int mask_on_assertion)
+{
+ vnic_intr_coalescing_timer_set(intr, coalescing_timer);
+ iowrite32(coalescing_type, &intr->ctrl->coalescing_type);
+ iowrite32(mask_on_assertion, &intr->ctrl->mask_on_assertion);
+ iowrite32(0, &intr->ctrl->int_credits);
+}
+
+void vnic_intr_coalescing_timer_set(struct vnic_intr *intr,
+ u32 coalescing_timer)
+{
+ iowrite32(vnic_dev_intr_coal_timer_usec_to_hw(intr->vdev,
+ coalescing_timer), &intr->ctrl->coalescing_timer);
+}
+
+void vnic_intr_clean(struct vnic_intr *intr)
+{
+ iowrite32(0, &intr->ctrl->int_credits);
+}
+
+void vnic_intr_raise(struct vnic_intr *intr)
+{
+ vnic_dev_raise_intr(intr->vdev, (u16)intr->index);
+}
diff --git a/lib/librte_pmd_enic/vnic/vnic_intr.h b/lib/librte_pmd_enic/vnic/vnic_intr.h
new file mode 100644
index 0000000..ecb82bf
--- /dev/null
+++ b/lib/librte_pmd_enic/vnic/vnic_intr.h
@@ -0,0 +1,126 @@
+/*
+ * Copyright 2008-2010 Cisco Systems, Inc. All rights reserved.
+ * Copyright 2007 Nuova Systems, Inc. All rights reserved.
+ *
+ * Copyright (c) 2014, Cisco Systems, Inc.
+ * All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ *
+ * 1. Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ *
+ * 2. Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in
+ * the documentation and/or other materials provided with the
+ * distribution.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS
+ * FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE
+ * COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
+ * INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING,
+ * BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
+ * LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
+ * CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
+ * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN
+ * ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+ * POSSIBILITY OF SUCH DAMAGE.
+ *
+ */
+#ident "$Id: vnic_intr.h 171146 2014-05-02 07:08:20Z ssujith $"
+
+#ifndef _VNIC_INTR_H_
+#define _VNIC_INTR_H_
+
+
+#include "vnic_dev.h"
+
+#define VNIC_INTR_TIMER_TYPE_ABS 0
+#define VNIC_INTR_TIMER_TYPE_QUIET 1
+
+/* Interrupt control */
+struct vnic_intr_ctrl {
+ u32 coalescing_timer; /* 0x00 */
+ u32 pad0;
+ u32 coalescing_value; /* 0x08 */
+ u32 pad1;
+ u32 coalescing_type; /* 0x10 */
+ u32 pad2;
+ u32 mask_on_assertion; /* 0x18 */
+ u32 pad3;
+ u32 mask; /* 0x20 */
+ u32 pad4;
+ u32 int_credits; /* 0x28 */
+ u32 pad5;
+ u32 int_credit_return; /* 0x30 */
+ u32 pad6;
+};
+
+struct vnic_intr {
+ unsigned int index;
+ struct vnic_dev *vdev;
+ struct vnic_intr_ctrl __iomem *ctrl; /* memory-mapped */
+};
+
+static inline void vnic_intr_unmask(struct vnic_intr *intr)
+{
+ iowrite32(0, &intr->ctrl->mask);
+}
+
+static inline void vnic_intr_mask(struct vnic_intr *intr)
+{
+ iowrite32(1, &intr->ctrl->mask);
+}
+
+static inline int vnic_intr_masked(struct vnic_intr *intr)
+{
+ return ioread32(&intr->ctrl->mask);
+}
+
+static inline void vnic_intr_return_credits(struct vnic_intr *intr,
+ unsigned int credits, int unmask, int reset_timer)
+{
+#define VNIC_INTR_UNMASK_SHIFT 16
+#define VNIC_INTR_RESET_TIMER_SHIFT 17
+
+ u32 int_credit_return = (credits & 0xffff) |
+ (unmask ? (1 << VNIC_INTR_UNMASK_SHIFT) : 0) |
+ (reset_timer ? (1 << VNIC_INTR_RESET_TIMER_SHIFT) : 0);
+
+ iowrite32(int_credit_return, &intr->ctrl->int_credit_return);
+}
+
+static inline unsigned int vnic_intr_credits(struct vnic_intr *intr)
+{
+ return ioread32(&intr->ctrl->int_credits);
+}
+
+static inline void vnic_intr_return_all_credits(struct vnic_intr *intr)
+{
+ unsigned int credits = vnic_intr_credits(intr);
+ int unmask = 1;
+ int reset_timer = 1;
+
+ vnic_intr_return_credits(intr, credits, unmask, reset_timer);
+}
+
+static inline u32 vnic_intr_legacy_pba(u32 __iomem *legacy_pba)
+{
+ /* read PBA without clearing */
+ return ioread32(legacy_pba);
+}
+
+void vnic_intr_free(struct vnic_intr *intr);
+int vnic_intr_alloc(struct vnic_dev *vdev, struct vnic_intr *intr,
+ unsigned int index);
+void vnic_intr_init(struct vnic_intr *intr, u32 coalescing_timer,
+ unsigned int coalescing_type, unsigned int mask_on_assertion);
+void vnic_intr_coalescing_timer_set(struct vnic_intr *intr,
+ u32 coalescing_timer);
+void vnic_intr_clean(struct vnic_intr *intr);
+
+#endif /* _VNIC_INTR_H_ */
diff --git a/lib/librte_pmd_enic/vnic/vnic_nic.h b/lib/librte_pmd_enic/vnic/vnic_nic.h
new file mode 100644
index 0000000..332cfb4
--- /dev/null
+++ b/lib/librte_pmd_enic/vnic/vnic_nic.h
@@ -0,0 +1,88 @@
+/*
+ * Copyright 2008-2010 Cisco Systems, Inc. All rights reserved.
+ * Copyright 2007 Nuova Systems, Inc. All rights reserved.
+ *
+ * Copyright (c) 2014, Cisco Systems, Inc.
+ * All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ *
+ * 1. Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ *
+ * 2. Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in
+ * the documentation and/or other materials provided with the
+ * distribution.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS
+ * FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE
+ * COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
+ * INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING,
+ * BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
+ * LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
+ * CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
+ * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN
+ * ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+ * POSSIBILITY OF SUCH DAMAGE.
+ *
+ */
+#ident "$Id: vnic_nic.h 59839 2010-09-27 20:36:31Z roprabhu $"
+
+#ifndef _VNIC_NIC_H_
+#define _VNIC_NIC_H_
+
+#define NIC_CFG_RSS_DEFAULT_CPU_MASK_FIELD 0xffUL
+#define NIC_CFG_RSS_DEFAULT_CPU_SHIFT 0
+#define NIC_CFG_RSS_HASH_TYPE (0xffUL << 8)
+#define NIC_CFG_RSS_HASH_TYPE_MASK_FIELD 0xffUL
+#define NIC_CFG_RSS_HASH_TYPE_SHIFT 8
+#define NIC_CFG_RSS_HASH_BITS (7UL << 16)
+#define NIC_CFG_RSS_HASH_BITS_MASK_FIELD 7UL
+#define NIC_CFG_RSS_HASH_BITS_SHIFT 16
+#define NIC_CFG_RSS_BASE_CPU (7UL << 19)
+#define NIC_CFG_RSS_BASE_CPU_MASK_FIELD 7UL
+#define NIC_CFG_RSS_BASE_CPU_SHIFT 19
+#define NIC_CFG_RSS_ENABLE (1UL << 22)
+#define NIC_CFG_RSS_ENABLE_MASK_FIELD 1UL
+#define NIC_CFG_RSS_ENABLE_SHIFT 22
+#define NIC_CFG_TSO_IPID_SPLIT_EN (1UL << 23)
+#define NIC_CFG_TSO_IPID_SPLIT_EN_MASK_FIELD 1UL
+#define NIC_CFG_TSO_IPID_SPLIT_EN_SHIFT 23
+#define NIC_CFG_IG_VLAN_STRIP_EN (1UL << 24)
+#define NIC_CFG_IG_VLAN_STRIP_EN_MASK_FIELD 1UL
+#define NIC_CFG_IG_VLAN_STRIP_EN_SHIFT 24
+
+#define NIC_CFG_RSS_HASH_TYPE_IPV4 (1 << 1)
+#define NIC_CFG_RSS_HASH_TYPE_TCP_IPV4 (1 << 2)
+#define NIC_CFG_RSS_HASH_TYPE_IPV6 (1 << 3)
+#define NIC_CFG_RSS_HASH_TYPE_TCP_IPV6 (1 << 4)
+#define NIC_CFG_RSS_HASH_TYPE_IPV6_EX (1 << 5)
+#define NIC_CFG_RSS_HASH_TYPE_TCP_IPV6_EX (1 << 6)
+
+static inline void vnic_set_nic_cfg(u32 *nic_cfg,
+ u8 rss_default_cpu, u8 rss_hash_type,
+ u8 rss_hash_bits, u8 rss_base_cpu,
+ u8 rss_enable, u8 tso_ipid_split_en,
+ u8 ig_vlan_strip_en)
+{
+ *nic_cfg = (rss_default_cpu & NIC_CFG_RSS_DEFAULT_CPU_MASK_FIELD) |
+ ((rss_hash_type & NIC_CFG_RSS_HASH_TYPE_MASK_FIELD)
+ << NIC_CFG_RSS_HASH_TYPE_SHIFT) |
+ ((rss_hash_bits & NIC_CFG_RSS_HASH_BITS_MASK_FIELD)
+ << NIC_CFG_RSS_HASH_BITS_SHIFT) |
+ ((rss_base_cpu & NIC_CFG_RSS_BASE_CPU_MASK_FIELD)
+ << NIC_CFG_RSS_BASE_CPU_SHIFT) |
+ ((rss_enable & NIC_CFG_RSS_ENABLE_MASK_FIELD)
+ << NIC_CFG_RSS_ENABLE_SHIFT) |
+ ((tso_ipid_split_en & NIC_CFG_TSO_IPID_SPLIT_EN_MASK_FIELD)
+ << NIC_CFG_TSO_IPID_SPLIT_EN_SHIFT) |
+ ((ig_vlan_strip_en & NIC_CFG_IG_VLAN_STRIP_EN_MASK_FIELD)
+ << NIC_CFG_IG_VLAN_STRIP_EN_SHIFT);
+}
+
+#endif /* _VNIC_NIC_H_ */
diff --git a/lib/librte_pmd_enic/vnic/vnic_resource.h b/lib/librte_pmd_enic/vnic/vnic_resource.h
new file mode 100644
index 0000000..2512712
--- /dev/null
+++ b/lib/librte_pmd_enic/vnic/vnic_resource.h
@@ -0,0 +1,97 @@
+/*
+ * Copyright 2008-2010 Cisco Systems, Inc. All rights reserved.
+ * Copyright 2007 Nuova Systems, Inc. All rights reserved.
+ *
+ * Copyright (c) 2014, Cisco Systems, Inc.
+ * All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ *
+ * 1. Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ *
+ * 2. Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in
+ * the documentation and/or other materials provided with the
+ * distribution.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS
+ * FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE
+ * COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
+ * INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING,
+ * BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
+ * LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
+ * CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
+ * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN
+ * ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+ * POSSIBILITY OF SUCH DAMAGE.
+ *
+ */
+#ident "$Id: vnic_resource.h 196958 2014-11-04 18:23:37Z xuywang $"
+
+#ifndef _VNIC_RESOURCE_H_
+#define _VNIC_RESOURCE_H_
+
+#define VNIC_RES_MAGIC 0x766E6963L /* 'vnic' */
+#define VNIC_RES_VERSION 0x00000000L
+#define MGMTVNIC_MAGIC 0x544d474dL /* 'MGMT' */
+#define MGMTVNIC_VERSION 0x00000000L
+
+/* The MAC address assigned to the CFG vNIC is fixed. */
+#define MGMTVNIC_MAC { 0x02, 0x00, 0x54, 0x4d, 0x47, 0x4d }
+
+/* vNIC resource types */
+enum vnic_res_type {
+ RES_TYPE_EOL, /* End-of-list */
+ RES_TYPE_WQ, /* Work queues */
+ RES_TYPE_RQ, /* Receive queues */
+ RES_TYPE_CQ, /* Completion queues */
+ RES_TYPE_MEM, /* Window to dev memory */
+ RES_TYPE_NIC_CFG, /* Enet NIC config registers */
+ RES_TYPE_RSS_KEY, /* Enet RSS secret key */
+ RES_TYPE_RSS_CPU, /* Enet RSS indirection table */
+ RES_TYPE_TX_STATS, /* Netblock Tx statistic regs */
+ RES_TYPE_RX_STATS, /* Netblock Rx statistic regs */
+ RES_TYPE_INTR_CTRL, /* Interrupt ctrl table */
+ RES_TYPE_INTR_TABLE, /* MSI/MSI-X Interrupt table */
+ RES_TYPE_INTR_PBA, /* MSI/MSI-X PBA table */
+ RES_TYPE_INTR_PBA_LEGACY, /* Legacy intr status */
+ RES_TYPE_DEBUG, /* Debug-only info */
+ RES_TYPE_DEV, /* Device-specific region */
+ RES_TYPE_DEVCMD, /* Device command region */
+ RES_TYPE_PASS_THRU_PAGE, /* Pass-thru page */
+ RES_TYPE_SUBVNIC, /* subvnic resource type */
+ RES_TYPE_MQ_WQ, /* MQ Work queues */
+ RES_TYPE_MQ_RQ, /* MQ Receive queues */
+ RES_TYPE_MQ_CQ, /* MQ Completion queues */
+ RES_TYPE_DEPRECATED1, /* Old version of devcmd 2 */
+ RES_TYPE_DEVCMD2, /* Device control region */
+ RES_TYPE_MAX, /* Count of resource types */
+};
+
+struct vnic_resource_header {
+ u32 magic;
+ u32 version;
+};
+
+struct mgmt_barmap_hdr {
+ u32 magic; /* magic number */
+ u32 version; /* header format version */
+ u16 lif; /* loopback lif for mgmt frames */
+ u16 pci_slot; /* installed pci slot */
+ char serial[16]; /* card serial number */
+};
+
+struct vnic_resource {
+ u8 type;
+ u8 bar;
+ u8 pad[2];
+ u32 bar_offset;
+ u32 count;
+};
+
+#endif /* _VNIC_RESOURCE_H_ */
diff --git a/lib/librte_pmd_enic/vnic/vnic_rq.c b/lib/librte_pmd_enic/vnic/vnic_rq.c
new file mode 100644
index 0000000..a013cba
--- /dev/null
+++ b/lib/librte_pmd_enic/vnic/vnic_rq.c
@@ -0,0 +1,246 @@
+/*
+ * Copyright 2008-2010 Cisco Systems, Inc. All rights reserved.
+ * Copyright 2007 Nuova Systems, Inc. All rights reserved.
+ *
+ * Copyright (c) 2014, Cisco Systems, Inc.
+ * All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ *
+ * 1. Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ *
+ * 2. Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in
+ * the documentation and/or other materials provided with the
+ * distribution.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS
+ * FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE
+ * COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
+ * INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING,
+ * BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
+ * LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
+ * CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
+ * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN
+ * ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+ * POSSIBILITY OF SUCH DAMAGE.
+ *
+ */
+#ident "$Id: vnic_rq.c 171146 2014-05-02 07:08:20Z ssujith $"
+
+#include "vnic_dev.h"
+#include "vnic_rq.h"
+
+static int vnic_rq_alloc_bufs(struct vnic_rq *rq)
+{
+ struct vnic_rq_buf *buf;
+ unsigned int i, j, count = rq->ring.desc_count;
+ unsigned int blks = VNIC_RQ_BUF_BLKS_NEEDED(count);
+
+ for (i = 0; i < blks; i++) {
+ rq->bufs[i] = kzalloc(VNIC_RQ_BUF_BLK_SZ(count), GFP_ATOMIC);
+ if (!rq->bufs[i])
+ return -ENOMEM;
+ }
+
+ for (i = 0; i < blks; i++) {
+ buf = rq->bufs[i];
+ for (j = 0; j < VNIC_RQ_BUF_BLK_ENTRIES(count); j++) {
+ buf->index = i * VNIC_RQ_BUF_BLK_ENTRIES(count) + j;
+ buf->desc = (u8 *)rq->ring.descs +
+ rq->ring.desc_size * buf->index;
+ if (buf->index + 1 == count) {
+ buf->next = rq->bufs[0];
+ break;
+ } else if (j + 1 == VNIC_RQ_BUF_BLK_ENTRIES(count)) {
+ buf->next = rq->bufs[i + 1];
+ } else {
+ buf->next = buf + 1;
+ buf++;
+ }
+ }
+ }
+
+ rq->to_use = rq->to_clean = rq->bufs[0];
+
+ return 0;
+}
+
+int vnic_rq_mem_size(struct vnic_rq *rq, unsigned int desc_count,
+ unsigned int desc_size)
+{
+ int mem_size = 0;
+
+ mem_size += vnic_dev_desc_ring_size(&rq->ring, desc_count, desc_size);
+
+ mem_size += VNIC_RQ_BUF_BLKS_NEEDED(rq->ring.desc_count) *
+ VNIC_RQ_BUF_BLK_SZ(rq->ring.desc_count);
+
+ return mem_size;
+}
+
+void vnic_rq_free(struct vnic_rq *rq)
+{
+ struct vnic_dev *vdev;
+ unsigned int i;
+
+ vdev = rq->vdev;
+
+ vnic_dev_free_desc_ring(vdev, &rq->ring);
+
+ for (i = 0; i < VNIC_RQ_BUF_BLKS_MAX; i++) {
+ if (rq->bufs[i]) {
+ kfree(rq->bufs[i]);
+ rq->bufs[i] = NULL;
+ }
+ }
+
+ rq->ctrl = NULL;
+}
+
+int vnic_rq_alloc(struct vnic_dev *vdev, struct vnic_rq *rq, unsigned int index,
+ unsigned int desc_count, unsigned int desc_size)
+{
+ int err;
+ char res_name[NAME_MAX];
+ static int instance;
+
+ rq->index = index;
+ rq->vdev = vdev;
+
+ rq->ctrl = vnic_dev_get_res(vdev, RES_TYPE_RQ, index);
+ if (!rq->ctrl) {
+ pr_err("Failed to hook RQ[%d] resource\n", index);
+ return -EINVAL;
+ }
+
+ vnic_rq_disable(rq);
+
+ snprintf(res_name, sizeof(res_name), "%d-rq-%d", instance++, index);
+ err = vnic_dev_alloc_desc_ring(vdev, &rq->ring, desc_count, desc_size,
+ rq->socket_id, res_name);
+ if (err)
+ return err;
+
+ err = vnic_rq_alloc_bufs(rq);
+ if (err) {
+ vnic_rq_free(rq);
+ return err;
+ }
+
+ return 0;
+}
+
+void vnic_rq_init_start(struct vnic_rq *rq, unsigned int cq_index,
+ unsigned int fetch_index, unsigned int posted_index,
+ unsigned int error_interrupt_enable,
+ unsigned int error_interrupt_offset)
+{
+ u64 paddr;
+ unsigned int count = rq->ring.desc_count;
+
+ paddr = (u64)rq->ring.base_addr | VNIC_PADDR_TARGET;
+ writeq(paddr, &rq->ctrl->ring_base);
+ iowrite32(count, &rq->ctrl->ring_size);
+ iowrite32(cq_index, &rq->ctrl->cq_index);
+ iowrite32(error_interrupt_enable, &rq->ctrl->error_interrupt_enable);
+ iowrite32(error_interrupt_offset, &rq->ctrl->error_interrupt_offset);
+ iowrite32(0, &rq->ctrl->dropped_packet_count);
+ iowrite32(0, &rq->ctrl->error_status);
+ iowrite32(fetch_index, &rq->ctrl->fetch_index);
+ iowrite32(posted_index, &rq->ctrl->posted_index);
+
+ rq->to_use = rq->to_clean =
+ &rq->bufs[fetch_index / VNIC_RQ_BUF_BLK_ENTRIES(count)]
+ [fetch_index % VNIC_RQ_BUF_BLK_ENTRIES(count)];
+}
+
+void vnic_rq_init(struct vnic_rq *rq, unsigned int cq_index,
+ unsigned int error_interrupt_enable,
+ unsigned int error_interrupt_offset)
+{
+	u32 fetch_index;
+	/* Use current fetch_index as the ring starting point */
+	fetch_index = ioread32(&rq->ctrl->fetch_index);
+
+ if (fetch_index == 0xFFFFFFFF) { /* check for hardware gone */
+ /* Hardware surprise removal: reset fetch_index */
+ fetch_index = 0;
+ }
+
+ vnic_rq_init_start(rq, cq_index,
+ fetch_index, fetch_index,
+ error_interrupt_enable,
+ error_interrupt_offset);
+}
+
+void vnic_rq_error_out(struct vnic_rq *rq, unsigned int error)
+{
+ iowrite32(error, &rq->ctrl->error_status);
+}
+
+unsigned int vnic_rq_error_status(struct vnic_rq *rq)
+{
+ return ioread32(&rq->ctrl->error_status);
+}
+
+void vnic_rq_enable(struct vnic_rq *rq)
+{
+ iowrite32(1, &rq->ctrl->enable);
+}
+
+int vnic_rq_disable(struct vnic_rq *rq)
+{
+ unsigned int wait;
+
+ iowrite32(0, &rq->ctrl->enable);
+
+ /* Wait for HW to ACK disable request */
+ for (wait = 0; wait < 1000; wait++) {
+ if (!(ioread32(&rq->ctrl->running)))
+ return 0;
+ udelay(10);
+ }
+
+ pr_err("Failed to disable RQ[%d]\n", rq->index);
+
+ return -ETIMEDOUT;
+}
+
+void vnic_rq_clean(struct vnic_rq *rq,
+ void (*buf_clean)(struct vnic_rq *rq, struct vnic_rq_buf *buf))
+{
+ struct vnic_rq_buf *buf;
+ u32 fetch_index;
+ unsigned int count = rq->ring.desc_count;
+
+ buf = rq->to_clean;
+
+ while (vnic_rq_desc_used(rq) > 0) {
+
+ (*buf_clean)(rq, buf);
+
+ buf = rq->to_clean = buf->next;
+ rq->ring.desc_avail++;
+ }
+
+ /* Use current fetch_index as the ring starting point */
+ fetch_index = ioread32(&rq->ctrl->fetch_index);
+
+ if (fetch_index == 0xFFFFFFFF) { /* check for hardware gone */
+ /* Hardware surprise removal: reset fetch_index */
+ fetch_index = 0;
+ }
+ rq->to_use = rq->to_clean =
+ &rq->bufs[fetch_index / VNIC_RQ_BUF_BLK_ENTRIES(count)]
+ [fetch_index % VNIC_RQ_BUF_BLK_ENTRIES(count)];
+ iowrite32(fetch_index, &rq->ctrl->posted_index);
+
+ vnic_dev_clear_desc_ring(&rq->ring);
+}
+
diff --git a/lib/librte_pmd_enic/vnic/vnic_rq.h b/lib/librte_pmd_enic/vnic/vnic_rq.h
new file mode 100644
index 0000000..54b6612
--- /dev/null
+++ b/lib/librte_pmd_enic/vnic/vnic_rq.h
@@ -0,0 +1,282 @@
+/*
+ * Copyright 2008-2010 Cisco Systems, Inc. All rights reserved.
+ * Copyright 2007 Nuova Systems, Inc. All rights reserved.
+ *
+ * Copyright (c) 2014, Cisco Systems, Inc.
+ * All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ *
+ * 1. Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ *
+ * 2. Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in
+ * the documentation and/or other materials provided with the
+ * distribution.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS
+ * FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE
+ * COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
+ * INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING,
+ * BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
+ * LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
+ * CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
+ * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN
+ * ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+ * POSSIBILITY OF SUCH DAMAGE.
+ *
+ */
+#ident "$Id: vnic_rq.h 180262 2014-07-02 07:57:43Z gvaradar $"
+
+#ifndef _VNIC_RQ_H_
+#define _VNIC_RQ_H_
+
+
+#include "vnic_dev.h"
+#include "vnic_cq.h"
+
+/* Receive queue control */
+struct vnic_rq_ctrl {
+ u64 ring_base; /* 0x00 */
+ u32 ring_size; /* 0x08 */
+ u32 pad0;
+ u32 posted_index; /* 0x10 */
+ u32 pad1;
+ u32 cq_index; /* 0x18 */
+ u32 pad2;
+ u32 enable; /* 0x20 */
+ u32 pad3;
+ u32 running; /* 0x28 */
+ u32 pad4;
+ u32 fetch_index; /* 0x30 */
+ u32 pad5;
+ u32 error_interrupt_enable; /* 0x38 */
+ u32 pad6;
+ u32 error_interrupt_offset; /* 0x40 */
+ u32 pad7;
+ u32 error_status; /* 0x48 */
+ u32 pad8;
+ u32 dropped_packet_count; /* 0x50 */
+ u32 pad9;
+ u32 dropped_packet_count_rc; /* 0x58 */
+ u32 pad10;
+};
+
+/* Break the vnic_rq_buf allocations into blocks of 32/64 entries */
+#define VNIC_RQ_BUF_MIN_BLK_ENTRIES 32
+#define VNIC_RQ_BUF_DFLT_BLK_ENTRIES 64
+#define VNIC_RQ_BUF_BLK_ENTRIES(entries) \
+ ((unsigned int)((entries < VNIC_RQ_BUF_DFLT_BLK_ENTRIES) ? \
+ VNIC_RQ_BUF_MIN_BLK_ENTRIES : VNIC_RQ_BUF_DFLT_BLK_ENTRIES))
+#define VNIC_RQ_BUF_BLK_SZ(entries) \
+ (VNIC_RQ_BUF_BLK_ENTRIES(entries) * sizeof(struct vnic_rq_buf))
+#define VNIC_RQ_BUF_BLKS_NEEDED(entries) \
+ DIV_ROUND_UP(entries, VNIC_RQ_BUF_BLK_ENTRIES(entries))
+#define VNIC_RQ_BUF_BLKS_MAX VNIC_RQ_BUF_BLKS_NEEDED(4096)
+
+struct vnic_rq_buf {
+ struct vnic_rq_buf *next;
+ dma_addr_t dma_addr;
+ void *os_buf;
+ unsigned int os_buf_index;
+ unsigned int len;
+ unsigned int index;
+ void *desc;
+ uint64_t wr_id;
+};
+
+struct vnic_rq {
+ unsigned int index;
+ struct vnic_dev *vdev;
+ struct vnic_rq_ctrl __iomem *ctrl; /* memory-mapped */
+ struct vnic_dev_ring ring;
+ struct vnic_rq_buf *bufs[VNIC_RQ_BUF_BLKS_MAX];
+ struct vnic_rq_buf *to_use;
+ struct vnic_rq_buf *to_clean;
+ void *os_buf_head;
+ unsigned int pkts_outstanding;
+
+ unsigned int socket_id;
+ struct rte_mempool *mp;
+};
+
+static inline unsigned int vnic_rq_desc_avail(struct vnic_rq *rq)
+{
+ /* how many does SW own? */
+ return rq->ring.desc_avail;
+}
+
+static inline unsigned int vnic_rq_desc_used(struct vnic_rq *rq)
+{
+ /* how many does HW own? */
+ return rq->ring.desc_count - rq->ring.desc_avail - 1;
+}
+
+static inline void *vnic_rq_next_desc(struct vnic_rq *rq)
+{
+ return rq->to_use->desc;
+}
+
+static inline unsigned int vnic_rq_next_index(struct vnic_rq *rq)
+{
+ return rq->to_use->index;
+}
+
+static inline void vnic_rq_post(struct vnic_rq *rq,
+ void *os_buf, unsigned int os_buf_index,
+ dma_addr_t dma_addr, unsigned int len,
+ uint64_t wrid)
+{
+ struct vnic_rq_buf *buf = rq->to_use;
+
+ buf->os_buf = os_buf;
+ buf->os_buf_index = os_buf_index;
+ buf->dma_addr = dma_addr;
+ buf->len = len;
+ buf->wr_id = wrid;
+
+ buf = buf->next;
+ rq->to_use = buf;
+ rq->ring.desc_avail--;
+
+ /* Move the posted_index every nth descriptor
+ */
+
+#ifndef VNIC_RQ_RETURN_RATE
+#define VNIC_RQ_RETURN_RATE 0xf /* keep 2^n - 1 */
+#endif
+
+ if ((buf->index & VNIC_RQ_RETURN_RATE) == 0) {
+ /* Adding write memory barrier prevents compiler and/or CPU
+ * reordering, thus avoiding descriptor posting before
+ * descriptor is initialized. Otherwise, hardware can read
+ * stale descriptor fields.
+ */
+ wmb();
+ iowrite32(buf->index, &rq->ctrl->posted_index);
+ }
+}
+
+static inline void vnic_rq_post_commit(struct vnic_rq *rq,
+ void *os_buf, unsigned int os_buf_index,
+ dma_addr_t dma_addr, unsigned int len)
+{
+ struct vnic_rq_buf *buf = rq->to_use;
+
+ buf->os_buf = os_buf;
+ buf->os_buf_index = os_buf_index;
+ buf->dma_addr = dma_addr;
+ buf->len = len;
+
+ buf = buf->next;
+ rq->to_use = buf;
+ rq->ring.desc_avail--;
+
+ /* Move the posted_index every descriptor
+ */
+
+ /* Adding write memory barrier prevents compiler and/or CPU
+ * reordering, thus avoiding descriptor posting before
+ * descriptor is initialized. Otherwise, hardware can read
+ * stale descriptor fields.
+ */
+ wmb();
+ iowrite32(buf->index, &rq->ctrl->posted_index);
+}
+
+static inline void vnic_rq_return_descs(struct vnic_rq *rq, unsigned int count)
+{
+ rq->ring.desc_avail += count;
+}
+
+enum desc_return_options {
+ VNIC_RQ_RETURN_DESC,
+ VNIC_RQ_DEFER_RETURN_DESC,
+};
+
+static inline int vnic_rq_service(struct vnic_rq *rq,
+ struct cq_desc *cq_desc, u16 completed_index,
+ int desc_return, int (*buf_service)(struct vnic_rq *rq,
+ struct cq_desc *cq_desc, struct vnic_rq_buf *buf,
+ int skipped, void *opaque), void *opaque)
+{
+ struct vnic_rq_buf *buf;
+ int skipped;
+ int eop = 0;
+
+ buf = rq->to_clean;
+ while (1) {
+
+ skipped = (buf->index != completed_index);
+
+ if ((*buf_service)(rq, cq_desc, buf, skipped, opaque))
+ eop++;
+
+ if (desc_return == VNIC_RQ_RETURN_DESC)
+ rq->ring.desc_avail++;
+
+ rq->to_clean = buf->next;
+
+ if (!skipped)
+ break;
+
+ buf = rq->to_clean;
+ }
+ return eop;
+}
+
+static inline int vnic_rq_fill(struct vnic_rq *rq,
+ int (*buf_fill)(struct vnic_rq *rq))
+{
+ int err;
+
+ while (vnic_rq_desc_avail(rq) > 0) {
+
+ err = (*buf_fill)(rq);
+ if (err)
+ return err;
+ }
+
+ return 0;
+}
+
+static inline int vnic_rq_fill_count(struct vnic_rq *rq,
+ int (*buf_fill)(struct vnic_rq *rq), unsigned int count)
+{
+ int err;
+
+ while ((vnic_rq_desc_avail(rq) > 0) && (count--)) {
+
+ err = (*buf_fill)(rq);
+ if (err)
+ return err;
+ }
+
+ return 0;
+}
+
+void vnic_rq_free(struct vnic_rq *rq);
+int vnic_rq_alloc(struct vnic_dev *vdev, struct vnic_rq *rq, unsigned int index,
+ unsigned int desc_count, unsigned int desc_size);
+void vnic_rq_init_start(struct vnic_rq *rq, unsigned int cq_index,
+ unsigned int fetch_index, unsigned int posted_index,
+ unsigned int error_interrupt_enable,
+ unsigned int error_interrupt_offset);
+void vnic_rq_init(struct vnic_rq *rq, unsigned int cq_index,
+ unsigned int error_interrupt_enable,
+ unsigned int error_interrupt_offset);
+void vnic_rq_error_out(struct vnic_rq *rq, unsigned int error);
+unsigned int vnic_rq_error_status(struct vnic_rq *rq);
+void vnic_rq_enable(struct vnic_rq *rq);
+int vnic_rq_disable(struct vnic_rq *rq);
+void vnic_rq_clean(struct vnic_rq *rq,
+ void (*buf_clean)(struct vnic_rq *rq, struct vnic_rq_buf *buf));
+int vnic_rq_mem_size(struct vnic_rq *rq, unsigned int desc_count,
+ unsigned int desc_size);
+
+#endif /* _VNIC_RQ_H_ */
diff --git a/lib/librte_pmd_enic/vnic/vnic_rss.c b/lib/librte_pmd_enic/vnic/vnic_rss.c
new file mode 100644
index 0000000..5ff76b1
--- /dev/null
+++ b/lib/librte_pmd_enic/vnic/vnic_rss.c
@@ -0,0 +1,85 @@
+/*
+ * Copyright 2008 Cisco Systems, Inc. All rights reserved.
+ * Copyright 2007 Nuova Systems, Inc. All rights reserved.
+ *
+ * Copyright (c) 2014, Cisco Systems, Inc.
+ * All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ *
+ * 1. Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ *
+ * 2. Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in
+ * the documentation and/or other materials provided with the
+ * distribution.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS
+ * FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE
+ * COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
+ * INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING,
+ * BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
+ * LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
+ * CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
+ * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN
+ * ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+ * POSSIBILITY OF SUCH DAMAGE.
+ *
+ */
+#ident "$Id$"
+
+#include "enic_compat.h"
+#include "vnic_rss.h"
+
+void vnic_set_rss_key(union vnic_rss_key *rss_key, u8 *key)
+{
+ u32 i;
+ u32 *p;
+ u16 *q;
+
+ for (i = 0; i < 4; ++i) {
+ p = (u32 *)(key + (10 * i));
+ iowrite32(*p++, &rss_key->key[i].b[0]);
+ iowrite32(*p++, &rss_key->key[i].b[4]);
+ q = (u16 *)p;
+ iowrite32(*q, &rss_key->key[i].b[8]);
+ }
+}
+
+void vnic_set_rss_cpu(union vnic_rss_cpu *rss_cpu, u8 *cpu)
+{
+ u32 i;
+ u32 *p = (u32 *)cpu;
+
+ for (i = 0; i < 32; ++i)
+ iowrite32(*p++, &rss_cpu->cpu[i].b[0]);
+}
+
+void vnic_get_rss_key(union vnic_rss_key *rss_key, u8 *key)
+{
+ u32 i;
+ u32 *p;
+ u16 *q;
+
+ for (i = 0; i < 4; ++i) {
+ p = (u32 *)(key + (10 * i));
+ *p++ = ioread32(&rss_key->key[i].b[0]);
+ *p++ = ioread32(&rss_key->key[i].b[4]);
+ q = (u16 *)p;
+ *q = (u16)ioread32(&rss_key->key[i].b[8]);
+ }
+}
+
+void vnic_get_rss_cpu(union vnic_rss_cpu *rss_cpu, u8 *cpu)
+{
+ u32 i;
+ u32 *p = (u32 *)cpu;
+
+ for (i = 0; i < 32; ++i)
+ *p++ = ioread32(&rss_cpu->cpu[i].b[0]);
+}
diff --git a/lib/librte_pmd_enic/vnic/vnic_rss.h b/lib/librte_pmd_enic/vnic/vnic_rss.h
new file mode 100644
index 0000000..45ed3d2
--- /dev/null
+++ b/lib/librte_pmd_enic/vnic/vnic_rss.h
@@ -0,0 +1,61 @@
+/*
+ * Copyright 2008-2010 Cisco Systems, Inc. All rights reserved.
+ * Copyright 2007 Nuova Systems, Inc. All rights reserved.
+ *
+ * Copyright (c) 2014, Cisco Systems, Inc.
+ * All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ *
+ * 1. Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ *
+ * 2. Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in
+ * the documentation and/or other materials provided with the
+ * distribution.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS
+ * FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE
+ * COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
+ * INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING,
+ * BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
+ * LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
+ * CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
+ * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN
+ * ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+ * POSSIBILITY OF SUCH DAMAGE.
+ */
+#ident "$Id: vnic_rss.h 64224 2010-11-09 19:43:13Z vkolluri $"
+
+#ifndef _VNIC_RSS_H_
+#define _VNIC_RSS_H_
+
+/* RSS key array */
+union vnic_rss_key {
+ struct {
+ u8 b[10];
+ u8 b_pad[6];
+ } key[4];
+ u64 raw[8];
+};
+
+/* RSS cpu array */
+union vnic_rss_cpu {
+ struct {
+ u8 b[4];
+ u8 b_pad[4];
+ } cpu[32];
+ u64 raw[32];
+};
+
+void vnic_set_rss_key(union vnic_rss_key *rss_key, u8 *key);
+void vnic_set_rss_cpu(union vnic_rss_cpu *rss_cpu, u8 *cpu);
+void vnic_get_rss_key(union vnic_rss_key *rss_key, u8 *key);
+void vnic_get_rss_cpu(union vnic_rss_cpu *rss_cpu, u8 *cpu);
+
+#endif /* _VNIC_RSS_H_ */
diff --git a/lib/librte_pmd_enic/vnic/vnic_stats.h b/lib/librte_pmd_enic/vnic/vnic_stats.h
new file mode 100644
index 0000000..ac5aa72
--- /dev/null
+++ b/lib/librte_pmd_enic/vnic/vnic_stats.h
@@ -0,0 +1,86 @@
+/*
+ * Copyright 2008-2010 Cisco Systems, Inc. All rights reserved.
+ * Copyright 2007 Nuova Systems, Inc. All rights reserved.
+ *
+ * Copyright (c) 2014, Cisco Systems, Inc.
+ * All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ *
+ * 1. Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ *
+ * 2. Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in
+ * the documentation and/or other materials provided with the
+ * distribution.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS
+ * FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE
+ * COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
+ * INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING,
+ * BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
+ * LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
+ * CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
+ * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN
+ * ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+ * POSSIBILITY OF SUCH DAMAGE.
+ *
+ */
+#ident "$Id: vnic_stats.h 84040 2011-08-09 23:38:43Z dwang2 $"
+
+#ifndef _VNIC_STATS_H_
+#define _VNIC_STATS_H_
+
+/* Tx statistics */
+struct vnic_tx_stats {
+ u64 tx_frames_ok;
+ u64 tx_unicast_frames_ok;
+ u64 tx_multicast_frames_ok;
+ u64 tx_broadcast_frames_ok;
+ u64 tx_bytes_ok;
+ u64 tx_unicast_bytes_ok;
+ u64 tx_multicast_bytes_ok;
+ u64 tx_broadcast_bytes_ok;
+ u64 tx_drops;
+ u64 tx_errors;
+ u64 tx_tso;
+ u64 rsvd[16];
+};
+
+/* Rx statistics */
+struct vnic_rx_stats {
+ u64 rx_frames_ok;
+ u64 rx_frames_total;
+ u64 rx_unicast_frames_ok;
+ u64 rx_multicast_frames_ok;
+ u64 rx_broadcast_frames_ok;
+ u64 rx_bytes_ok;
+ u64 rx_unicast_bytes_ok;
+ u64 rx_multicast_bytes_ok;
+ u64 rx_broadcast_bytes_ok;
+ u64 rx_drop;
+ u64 rx_no_bufs;
+ u64 rx_errors;
+ u64 rx_rss;
+ u64 rx_crc_errors;
+ u64 rx_frames_64;
+ u64 rx_frames_127;
+ u64 rx_frames_255;
+ u64 rx_frames_511;
+ u64 rx_frames_1023;
+ u64 rx_frames_1518;
+ u64 rx_frames_to_max;
+ u64 rsvd[16];
+};
+
+struct vnic_stats {
+ struct vnic_tx_stats tx;
+ struct vnic_rx_stats rx;
+};
+
+#endif /* _VNIC_STATS_H_ */
diff --git a/lib/librte_pmd_enic/vnic/vnic_wq.c b/lib/librte_pmd_enic/vnic/vnic_wq.c
new file mode 100644
index 0000000..e52cef0
--- /dev/null
+++ b/lib/librte_pmd_enic/vnic/vnic_wq.c
@@ -0,0 +1,245 @@
+/*
+ * Copyright 2008-2010 Cisco Systems, Inc. All rights reserved.
+ * Copyright 2007 Nuova Systems, Inc. All rights reserved.
+ *
+ * Copyright (c) 2014, Cisco Systems, Inc.
+ * All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ *
+ * 1. Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ *
+ * 2. Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in
+ * the documentation and/or other materials provided with the
+ * distribution.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS
+ * FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE
+ * COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
+ * INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING,
+ * BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
+ * LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
+ * CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
+ * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN
+ * ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+ * POSSIBILITY OF SUCH DAMAGE.
+ *
+ */
+#ident "$Id: vnic_wq.c 183023 2014-07-22 23:47:25Z xuywang $"
+
+#include "vnic_dev.h"
+#include "vnic_wq.h"
+
+static inline
+int vnic_wq_get_ctrl(struct vnic_dev *vdev, struct vnic_wq *wq,
+ unsigned int index, enum vnic_res_type res_type)
+{
+ wq->ctrl = vnic_dev_get_res(vdev, res_type, index);
+ if (!wq->ctrl)
+ return -EINVAL;
+ return 0;
+}
+
+static inline
+int vnic_wq_alloc_ring(struct vnic_dev *vdev, struct vnic_wq *wq,
+ unsigned int desc_count, unsigned int desc_size)
+{
+ char res_name[NAME_MAX];
+ static int instance;
+
+ snprintf(res_name, sizeof(res_name), "%d-wq-%d", instance++, wq->index);
+ return vnic_dev_alloc_desc_ring(vdev, &wq->ring, desc_count, desc_size,
+ wq->socket_id, res_name);
+}
+
+static int vnic_wq_alloc_bufs(struct vnic_wq *wq)
+{
+ struct vnic_wq_buf *buf;
+ unsigned int i, j, count = wq->ring.desc_count;
+ unsigned int blks = VNIC_WQ_BUF_BLKS_NEEDED(count);
+
+ for (i = 0; i < blks; i++) {
+ wq->bufs[i] = kzalloc(VNIC_WQ_BUF_BLK_SZ(count), GFP_ATOMIC);
+ if (!wq->bufs[i])
+ return -ENOMEM;
+ }
+
+ for (i = 0; i < blks; i++) {
+ buf = wq->bufs[i];
+ for (j = 0; j < VNIC_WQ_BUF_BLK_ENTRIES(count); j++) {
+ buf->index = i * VNIC_WQ_BUF_BLK_ENTRIES(count) + j;
+ buf->desc = (u8 *)wq->ring.descs +
+ wq->ring.desc_size * buf->index;
+ if (buf->index + 1 == count) {
+ buf->next = wq->bufs[0];
+ break;
+ } else if (j + 1 == VNIC_WQ_BUF_BLK_ENTRIES(count)) {
+ buf->next = wq->bufs[i + 1];
+ } else {
+ buf->next = buf + 1;
+ buf++;
+ }
+ }
+ }
+
+ wq->to_use = wq->to_clean = wq->bufs[0];
+
+ return 0;
+}
+
+void vnic_wq_free(struct vnic_wq *wq)
+{
+ struct vnic_dev *vdev;
+ unsigned int i;
+
+ vdev = wq->vdev;
+
+ vnic_dev_free_desc_ring(vdev, &wq->ring);
+
+ for (i = 0; i < VNIC_WQ_BUF_BLKS_MAX; i++) {
+ if (wq->bufs[i]) {
+ kfree(wq->bufs[i]);
+ wq->bufs[i] = NULL;
+ }
+ }
+
+ wq->ctrl = NULL;
+}
+
+int vnic_wq_mem_size(struct vnic_wq *wq, unsigned int desc_count,
+ unsigned int desc_size)
+{
+ int mem_size = 0;
+
+ mem_size += vnic_dev_desc_ring_size(&wq->ring, desc_count, desc_size);
+
+ mem_size += VNIC_WQ_BUF_BLKS_NEEDED(wq->ring.desc_count) *
+ VNIC_WQ_BUF_BLK_SZ(wq->ring.desc_count);
+
+ return mem_size;
+}
+
+
+int vnic_wq_alloc(struct vnic_dev *vdev, struct vnic_wq *wq, unsigned int index,
+ unsigned int desc_count, unsigned int desc_size)
+{
+ int err;
+
+ wq->index = index;
+ wq->vdev = vdev;
+
+ err = vnic_wq_get_ctrl(vdev, wq, index, RES_TYPE_WQ);
+ if (err) {
+ pr_err("Failed to hook WQ[%d] resource, err %d\n", index, err);
+ return err;
+ }
+
+ vnic_wq_disable(wq);
+
+ err = vnic_wq_alloc_ring(vdev, wq, desc_count, desc_size);
+ if (err)
+ return err;
+
+ err = vnic_wq_alloc_bufs(wq);
+ if (err) {
+ vnic_wq_free(wq);
+ return err;
+ }
+
+ return 0;
+}
+
+void vnic_wq_init_start(struct vnic_wq *wq, unsigned int cq_index,
+ unsigned int fetch_index, unsigned int posted_index,
+ unsigned int error_interrupt_enable,
+ unsigned int error_interrupt_offset)
+{
+ u64 paddr;
+ unsigned int count = wq->ring.desc_count;
+
+ paddr = (u64)wq->ring.base_addr | VNIC_PADDR_TARGET;
+ writeq(paddr, &wq->ctrl->ring_base);
+ iowrite32(count, &wq->ctrl->ring_size);
+ iowrite32(fetch_index, &wq->ctrl->fetch_index);
+ iowrite32(posted_index, &wq->ctrl->posted_index);
+ iowrite32(cq_index, &wq->ctrl->cq_index);
+ iowrite32(error_interrupt_enable, &wq->ctrl->error_interrupt_enable);
+ iowrite32(error_interrupt_offset, &wq->ctrl->error_interrupt_offset);
+ iowrite32(0, &wq->ctrl->error_status);
+
+ wq->to_use = wq->to_clean =
+ &wq->bufs[fetch_index / VNIC_WQ_BUF_BLK_ENTRIES(count)]
+ [fetch_index % VNIC_WQ_BUF_BLK_ENTRIES(count)];
+}
+
+void vnic_wq_init(struct vnic_wq *wq, unsigned int cq_index,
+ unsigned int error_interrupt_enable,
+ unsigned int error_interrupt_offset)
+{
+ vnic_wq_init_start(wq, cq_index, 0, 0,
+ error_interrupt_enable,
+ error_interrupt_offset);
+}
+
+void vnic_wq_error_out(struct vnic_wq *wq, unsigned int error)
+{
+ iowrite32(error, &wq->ctrl->error_status);
+}
+
+unsigned int vnic_wq_error_status(struct vnic_wq *wq)
+{
+ return ioread32(&wq->ctrl->error_status);
+}
+
+void vnic_wq_enable(struct vnic_wq *wq)
+{
+ iowrite32(1, &wq->ctrl->enable);
+}
+
+int vnic_wq_disable(struct vnic_wq *wq)
+{
+ unsigned int wait;
+
+ iowrite32(0, &wq->ctrl->enable);
+
+ /* Wait for HW to ACK disable request */
+ for (wait = 0; wait < 1000; wait++) {
+ if (!(ioread32(&wq->ctrl->running)))
+ return 0;
+ udelay(10);
+ }
+
+ pr_err("Failed to disable WQ[%d]\n", wq->index);
+
+ return -ETIMEDOUT;
+}
+
+void vnic_wq_clean(struct vnic_wq *wq,
+ void (*buf_clean)(struct vnic_wq *wq, struct vnic_wq_buf *buf))
+{
+ struct vnic_wq_buf *buf;
+
+ buf = wq->to_clean;
+
+ while (vnic_wq_desc_used(wq) > 0) {
+
+ (*buf_clean)(wq, buf);
+
+ buf = wq->to_clean = buf->next;
+ wq->ring.desc_avail++;
+ }
+
+ wq->to_use = wq->to_clean = wq->bufs[0];
+
+ iowrite32(0, &wq->ctrl->fetch_index);
+ iowrite32(0, &wq->ctrl->posted_index);
+ iowrite32(0, &wq->ctrl->error_status);
+
+ vnic_dev_clear_desc_ring(&wq->ring);
+}
diff --git a/lib/librte_pmd_enic/vnic/vnic_wq.h b/lib/librte_pmd_enic/vnic/vnic_wq.h
new file mode 100644
index 0000000..f8219ad
--- /dev/null
+++ b/lib/librte_pmd_enic/vnic/vnic_wq.h
@@ -0,0 +1,283 @@
+/*
+ * Copyright 2008-2010 Cisco Systems, Inc. All rights reserved.
+ * Copyright 2007 Nuova Systems, Inc. All rights reserved.
+ *
+ * Copyright (c) 2014, Cisco Systems, Inc.
+ * All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ *
+ * 1. Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ *
+ * 2. Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in
+ * the documentation and/or other materials provided with the
+ * distribution.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS
+ * FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE
+ * COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
+ * INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING,
+ * BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
+ * LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
+ * CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
+ * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN
+ * ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+ * POSSIBILITY OF SUCH DAMAGE.
+ *
+ */
+#ident "$Id: vnic_wq.h 183023 2014-07-22 23:47:25Z xuywang $"
+
+#ifndef _VNIC_WQ_H_
+#define _VNIC_WQ_H_
+
+
+#include "vnic_dev.h"
+#include "vnic_cq.h"
+
+/* Work queue control */
+struct vnic_wq_ctrl {
+ u64 ring_base; /* 0x00 */
+ u32 ring_size; /* 0x08 */
+ u32 pad0;
+ u32 posted_index; /* 0x10 */
+ u32 pad1;
+ u32 cq_index; /* 0x18 */
+ u32 pad2;
+ u32 enable; /* 0x20 */
+ u32 pad3;
+ u32 running; /* 0x28 */
+ u32 pad4;
+ u32 fetch_index; /* 0x30 */
+ u32 pad5;
+ u32 dca_value; /* 0x38 */
+ u32 pad6;
+ u32 error_interrupt_enable; /* 0x40 */
+ u32 pad7;
+ u32 error_interrupt_offset; /* 0x48 */
+ u32 pad8;
+ u32 error_status; /* 0x50 */
+ u32 pad9;
+};
+
+struct vnic_wq_buf {
+ struct vnic_wq_buf *next;
+ dma_addr_t dma_addr;
+ void *os_buf;
+ unsigned int len;
+ unsigned int index;
+ int sop;
+ void *desc;
+ uint64_t wr_id; /* Cookie */
+ uint8_t cq_entry; /* Gets completion event from hw */
+ uint8_t desc_skip_cnt; /* Num descs to occupy */
+ uint8_t compressed_send; /* Both hdr and payload in one desc */
+};
+
+/* Break the vnic_wq_buf allocations into blocks of 32/64 entries */
+#define VNIC_WQ_BUF_MIN_BLK_ENTRIES 32
+#define VNIC_WQ_BUF_DFLT_BLK_ENTRIES 64
+#define VNIC_WQ_BUF_BLK_ENTRIES(entries) \
+ ((unsigned int)((entries < VNIC_WQ_BUF_DFLT_BLK_ENTRIES) ? \
+ VNIC_WQ_BUF_MIN_BLK_ENTRIES : VNIC_WQ_BUF_DFLT_BLK_ENTRIES))
+#define VNIC_WQ_BUF_BLK_SZ(entries) \
+ (VNIC_WQ_BUF_BLK_ENTRIES(entries) * sizeof(struct vnic_wq_buf))
+#define VNIC_WQ_BUF_BLKS_NEEDED(entries) \
+ DIV_ROUND_UP(entries, VNIC_WQ_BUF_BLK_ENTRIES(entries))
+#define VNIC_WQ_BUF_BLKS_MAX VNIC_WQ_BUF_BLKS_NEEDED(4096)
+
+struct vnic_wq {
+ unsigned int index;
+ struct vnic_dev *vdev;
+ struct vnic_wq_ctrl __iomem *ctrl; /* memory-mapped */
+ struct vnic_dev_ring ring;
+ struct vnic_wq_buf *bufs[VNIC_WQ_BUF_BLKS_MAX];
+ struct vnic_wq_buf *to_use;
+ struct vnic_wq_buf *to_clean;
+ unsigned int pkts_outstanding;
+ unsigned int socket_id;
+};
+
+static inline unsigned int vnic_wq_desc_avail(struct vnic_wq *wq)
+{
+ /* how many does SW own? */
+ return wq->ring.desc_avail;
+}
+
+static inline unsigned int vnic_wq_desc_used(struct vnic_wq *wq)
+{
+ /* how many does HW own? */
+ return wq->ring.desc_count - wq->ring.desc_avail - 1;
+}
+
+static inline void *vnic_wq_next_desc(struct vnic_wq *wq)
+{
+ return wq->to_use->desc;
+}
+
+#define PI_LOG2_CACHE_LINE_SIZE 5
+#define PI_INDEX_BITS 12
+#define PI_INDEX_MASK ((1U << PI_INDEX_BITS) - 1)
+#define PI_PREFETCH_LEN_MASK ((1U << PI_LOG2_CACHE_LINE_SIZE) - 1)
+#define PI_PREFETCH_LEN_OFF 16
+#define PI_PREFETCH_ADDR_BITS 43
+#define PI_PREFETCH_ADDR_MASK ((1ULL << PI_PREFETCH_ADDR_BITS) - 1)
+#define PI_PREFETCH_ADDR_OFF 21
+
+/** How many cache lines are touched by buffer (addr, len). */
+static inline unsigned int num_cache_lines_touched(dma_addr_t addr,
+ unsigned int len)
+{
+ const unsigned long mask = PI_PREFETCH_LEN_MASK;
+ const unsigned long laddr = (unsigned long)addr;
+ unsigned long lines, equiv_len;
+ /* A. If addr is aligned, our solution is just to round up len to the
+ next boundary.
+
+ e.g. addr = 0, len = 48
+ +--------------------+
+ |XXXXXXXXXXXXXXXXXXXX| 32-byte cacheline a
+ +--------------------+
+ |XXXXXXXXXX | cacheline b
+ +--------------------+
+
+ B. If addr is not aligned, however, we may use an extra
+ cacheline. e.g. addr = 12, len = 22
+
+ +--------------------+
+ | XXXXXXXXXXXXX|
+ +--------------------+
+ |XX |
+ +--------------------+
+
+ Our solution is to make the problem equivalent to case A
+	   above by adding the empty space in the first cacheline to the
+	   length:
+
+ +--------------------+
+ |eeeeeeeXXXXXXXXXXXXX| "e" is empty space, which we add to len
+ +--------------------+
+ |XX |
+ +--------------------+
+
+ */
+ equiv_len = len + (laddr & mask);
+
+ /* Now we can just round up this len to the next 32-byte boundary. */
+ lines = (equiv_len + mask) & (~mask);
+
+ /* Scale bytes -> cachelines. */
+ return lines >> PI_LOG2_CACHE_LINE_SIZE;
+}
+
+static inline u64 vnic_cached_posted_index(dma_addr_t addr, unsigned int len,
+ unsigned int index)
+{
+ unsigned int num_cache_lines = num_cache_lines_touched(addr, len);
+ /* Wish we could avoid a branch here. We could have separate
+	 * vnic_wq_post() and vnic_wq_post_inline(), the latter
+ * only supporting < 1k (2^5 * 2^5) sends, I suppose. This would
+ * eliminate the if (eop) branch as well.
+ */
+ if (num_cache_lines > PI_PREFETCH_LEN_MASK)
+ num_cache_lines = 0;
+ return (index & PI_INDEX_MASK) |
+ ((num_cache_lines & PI_PREFETCH_LEN_MASK) << PI_PREFETCH_LEN_OFF) |
+ (((addr >> PI_LOG2_CACHE_LINE_SIZE) &
+ PI_PREFETCH_ADDR_MASK) << PI_PREFETCH_ADDR_OFF);
+}
+
+static inline void vnic_wq_post(struct vnic_wq *wq,
+ void *os_buf, dma_addr_t dma_addr,
+ unsigned int len, int sop, int eop,
+ uint8_t desc_skip_cnt, uint8_t cq_entry,
+ uint8_t compressed_send, uint64_t wrid)
+{
+ struct vnic_wq_buf *buf = wq->to_use;
+
+ buf->sop = sop;
+ buf->cq_entry = cq_entry;
+ buf->compressed_send = compressed_send;
+ buf->desc_skip_cnt = desc_skip_cnt;
+ buf->os_buf = os_buf;
+ buf->dma_addr = dma_addr;
+ buf->len = len;
+ buf->wr_id = wrid;
+
+ buf = buf->next;
+ if (eop) {
+#ifdef DO_PREFETCH
+ uint64_t wr = vnic_cached_posted_index(dma_addr, len,
+ buf->index);
+#endif
+ /* Adding write memory barrier prevents compiler and/or CPU
+ * reordering, thus avoiding descriptor posting before
+ * descriptor is initialized. Otherwise, hardware can read
+ * stale descriptor fields.
+ */
+ wmb();
+#ifdef DO_PREFETCH
+ /* Intel chipsets seem to limit the rate of PIOs that we can
+ * push on the bus. Thus, it is very important to do a single
+ * 64 bit write here. With two 32-bit writes, my maximum
+ * pkt/sec rate was cut almost in half. -AJF
+ */
+ iowrite64((uint64_t)wr, &wq->ctrl->posted_index);
+#else
+ iowrite32(buf->index, &wq->ctrl->posted_index);
+#endif
+ }
+ wq->to_use = buf;
+
+ wq->ring.desc_avail -= desc_skip_cnt;
+}
+
+static inline void vnic_wq_service(struct vnic_wq *wq,
+ struct cq_desc *cq_desc, u16 completed_index,
+ void (*buf_service)(struct vnic_wq *wq,
+ struct cq_desc *cq_desc, struct vnic_wq_buf *buf, void *opaque),
+ void *opaque)
+{
+ struct vnic_wq_buf *buf;
+
+ buf = wq->to_clean;
+ while (1) {
+
+ (*buf_service)(wq, cq_desc, buf, opaque);
+
+ wq->ring.desc_avail++;
+
+ wq->to_clean = buf->next;
+
+ if (buf->index == completed_index)
+ break;
+
+ buf = wq->to_clean;
+ }
+}
+
+void vnic_wq_free(struct vnic_wq *wq);
+int vnic_wq_alloc(struct vnic_dev *vdev, struct vnic_wq *wq, unsigned int index,
+ unsigned int desc_count, unsigned int desc_size);
+void vnic_wq_init_start(struct vnic_wq *wq, unsigned int cq_index,
+ unsigned int fetch_index, unsigned int posted_index,
+ unsigned int error_interrupt_enable,
+ unsigned int error_interrupt_offset);
+void vnic_wq_init(struct vnic_wq *wq, unsigned int cq_index,
+ unsigned int error_interrupt_enable,
+ unsigned int error_interrupt_offset);
+void vnic_wq_error_out(struct vnic_wq *wq, unsigned int error);
+unsigned int vnic_wq_error_status(struct vnic_wq *wq);
+void vnic_wq_enable(struct vnic_wq *wq);
+int vnic_wq_disable(struct vnic_wq *wq);
+void vnic_wq_clean(struct vnic_wq *wq,
+ void (*buf_clean)(struct vnic_wq *wq, struct vnic_wq_buf *buf));
+int vnic_wq_mem_size(struct vnic_wq *wq, unsigned int desc_count,
+ unsigned int desc_size);
+
+#endif /* _VNIC_WQ_H_ */
diff --git a/lib/librte_pmd_enic/vnic/wq_enet_desc.h b/lib/librte_pmd_enic/vnic/wq_enet_desc.h
new file mode 100644
index 0000000..ff2b768
--- /dev/null
+++ b/lib/librte_pmd_enic/vnic/wq_enet_desc.h
@@ -0,0 +1,114 @@
+/*
+ * Copyright 2008-2010 Cisco Systems, Inc. All rights reserved.
+ * Copyright 2007 Nuova Systems, Inc. All rights reserved.
+ *
+ * Copyright (c) 2014, Cisco Systems, Inc.
+ * All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ *
+ * 1. Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ *
+ * 2. Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in
+ * the documentation and/or other materials provided with the
+ * distribution.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS
+ * FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE
+ * COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
+ * INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING,
+ * BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
+ * LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
+ * CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
+ * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN
+ * ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+ * POSSIBILITY OF SUCH DAMAGE.
+ *
+ */
+#ident "$Id: wq_enet_desc.h 59839 2010-09-27 20:36:31Z roprabhu $"
+
+#ifndef _WQ_ENET_DESC_H_
+#define _WQ_ENET_DESC_H_
+
+/* Ethernet work queue descriptor: 16B */
+struct wq_enet_desc {
+ __le64 address;
+ __le16 length;
+ __le16 mss_loopback;
+ __le16 header_length_flags;
+ __le16 vlan_tag;
+};
+
+#define WQ_ENET_ADDR_BITS 64
+#define WQ_ENET_LEN_BITS 14
+#define WQ_ENET_LEN_MASK ((1 << WQ_ENET_LEN_BITS) - 1)
+#define WQ_ENET_MSS_BITS 14
+#define WQ_ENET_MSS_MASK ((1 << WQ_ENET_MSS_BITS) - 1)
+#define WQ_ENET_MSS_SHIFT 2
+#define WQ_ENET_LOOPBACK_SHIFT 1
+#define WQ_ENET_HDRLEN_BITS 10
+#define WQ_ENET_HDRLEN_MASK ((1 << WQ_ENET_HDRLEN_BITS) - 1)
+#define WQ_ENET_FLAGS_OM_BITS 2
+#define WQ_ENET_FLAGS_OM_MASK ((1 << WQ_ENET_FLAGS_OM_BITS) - 1)
+#define WQ_ENET_FLAGS_EOP_SHIFT 12
+#define WQ_ENET_FLAGS_CQ_ENTRY_SHIFT 13
+#define WQ_ENET_FLAGS_FCOE_ENCAP_SHIFT 14
+#define WQ_ENET_FLAGS_VLAN_TAG_INSERT_SHIFT 15
+
+#define WQ_ENET_OFFLOAD_MODE_CSUM 0
+#define WQ_ENET_OFFLOAD_MODE_RESERVED 1
+#define WQ_ENET_OFFLOAD_MODE_CSUM_L4 2
+#define WQ_ENET_OFFLOAD_MODE_TSO 3
+
+static inline void wq_enet_desc_enc(struct wq_enet_desc *desc,
+ u64 address, u16 length, u16 mss, u16 header_length,
+ u8 offload_mode, u8 eop, u8 cq_entry, u8 fcoe_encap,
+ u8 vlan_tag_insert, u16 vlan_tag, u8 loopback)
+{
+ desc->address = cpu_to_le64(address);
+ desc->length = cpu_to_le16(length & WQ_ENET_LEN_MASK);
+ desc->mss_loopback = cpu_to_le16((mss & WQ_ENET_MSS_MASK) <<
+ WQ_ENET_MSS_SHIFT | (loopback & 1) << WQ_ENET_LOOPBACK_SHIFT);
+ desc->header_length_flags = cpu_to_le16(
+ (header_length & WQ_ENET_HDRLEN_MASK) |
+ (offload_mode & WQ_ENET_FLAGS_OM_MASK) << WQ_ENET_HDRLEN_BITS |
+ (eop & 1) << WQ_ENET_FLAGS_EOP_SHIFT |
+ (cq_entry & 1) << WQ_ENET_FLAGS_CQ_ENTRY_SHIFT |
+ (fcoe_encap & 1) << WQ_ENET_FLAGS_FCOE_ENCAP_SHIFT |
+ (vlan_tag_insert & 1) << WQ_ENET_FLAGS_VLAN_TAG_INSERT_SHIFT);
+ desc->vlan_tag = cpu_to_le16(vlan_tag);
+}
+
+static inline void wq_enet_desc_dec(struct wq_enet_desc *desc,
+ u64 *address, u16 *length, u16 *mss, u16 *header_length,
+ u8 *offload_mode, u8 *eop, u8 *cq_entry, u8 *fcoe_encap,
+ u8 *vlan_tag_insert, u16 *vlan_tag, u8 *loopback)
+{
+ *address = le64_to_cpu(desc->address);
+ *length = le16_to_cpu(desc->length) & WQ_ENET_LEN_MASK;
+ *mss = (le16_to_cpu(desc->mss_loopback) >> WQ_ENET_MSS_SHIFT) &
+ WQ_ENET_MSS_MASK;
+ *loopback = (u8)((le16_to_cpu(desc->mss_loopback) >>
+ WQ_ENET_LOOPBACK_SHIFT) & 1);
+ *header_length = le16_to_cpu(desc->header_length_flags) &
+ WQ_ENET_HDRLEN_MASK;
+ *offload_mode = (u8)((le16_to_cpu(desc->header_length_flags) >>
+ WQ_ENET_HDRLEN_BITS) & WQ_ENET_FLAGS_OM_MASK);
+ *eop = (u8)((le16_to_cpu(desc->header_length_flags) >>
+ WQ_ENET_FLAGS_EOP_SHIFT) & 1);
+ *cq_entry = (u8)((le16_to_cpu(desc->header_length_flags) >>
+ WQ_ENET_FLAGS_CQ_ENTRY_SHIFT) & 1);
+ *fcoe_encap = (u8)((le16_to_cpu(desc->header_length_flags) >>
+ WQ_ENET_FLAGS_FCOE_ENCAP_SHIFT) & 1);
+ *vlan_tag_insert = (u8)((le16_to_cpu(desc->header_length_flags) >>
+ WQ_ENET_FLAGS_VLAN_TAG_INSERT_SHIFT) & 1);
+ *vlan_tag = le16_to_cpu(desc->vlan_tag);
+}
+
+#endif /* _WQ_ENET_DESC_H_ */
--
1.9.1
* [dpdk-dev] [PATCH v6 4/6] enicpmd: pmd specific code
2014-11-25 17:26 [dpdk-dev] [PATCH v6 0/6] enicpmd: Cisco Systems Inc. VIC Ethernet PMD Sujith Sankar
` (2 preceding siblings ...)
2014-11-25 17:26 ` [dpdk-dev] [PATCH v6 3/6] enicpmd: VNIC common code partially shared with ENIC kernel mode driver Sujith Sankar
@ 2014-11-25 17:26 ` Sujith Sankar
2014-11-27 14:49 ` Wodkowski, PawelX
2014-11-25 17:26 ` [dpdk-dev] [PATCH v6 5/6] enicpmd: DPDK-ENIC PMD interface Sujith Sankar
` (3 subsequent siblings)
7 siblings, 1 reply; 36+ messages in thread
From: Sujith Sankar @ 2014-11-25 17:26 UTC (permalink / raw)
To: dev; +Cc: prrao
Signed-off-by: Sujith Sankar <ssujith@cisco.com>
---
lib/librte_pmd_enic/enic.h | 157 +++++
lib/librte_pmd_enic/enic_clsf.c | 244 +++++++
lib/librte_pmd_enic/enic_compat.h | 142 +++++
lib/librte_pmd_enic/enic_main.c | 1266 +++++++++++++++++++++++++++++++++++++
lib/librte_pmd_enic/enic_res.c | 221 +++++++
lib/librte_pmd_enic/enic_res.h | 168 +++++
6 files changed, 2198 insertions(+)
create mode 100644 lib/librte_pmd_enic/enic.h
create mode 100644 lib/librte_pmd_enic/enic_clsf.c
create mode 100644 lib/librte_pmd_enic/enic_compat.h
create mode 100644 lib/librte_pmd_enic/enic_main.c
create mode 100644 lib/librte_pmd_enic/enic_res.c
create mode 100644 lib/librte_pmd_enic/enic_res.h
diff --git a/lib/librte_pmd_enic/enic.h b/lib/librte_pmd_enic/enic.h
new file mode 100644
index 0000000..9f80fc0
--- /dev/null
+++ b/lib/librte_pmd_enic/enic.h
@@ -0,0 +1,157 @@
+/*
+ * Copyright 2008-2014 Cisco Systems, Inc. All rights reserved.
+ * Copyright 2007 Nuova Systems, Inc. All rights reserved.
+ *
+ * Copyright (c) 2014, Cisco Systems, Inc.
+ * All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ *
+ * 1. Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ *
+ * 2. Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in
+ * the documentation and/or other materials provided with the
+ * distribution.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS
+ * FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE
+ * COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
+ * INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING,
+ * BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
+ * LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
+ * CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
+ * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN
+ * ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+ * POSSIBILITY OF SUCH DAMAGE.
+ *
+ */
+#ident "$Id$"
+
+#ifndef _ENIC_H_
+#define _ENIC_H_
+
+#include "vnic_enet.h"
+#include "vnic_dev.h"
+#include "vnic_wq.h"
+#include "vnic_rq.h"
+#include "vnic_cq.h"
+#include "vnic_intr.h"
+#include "vnic_stats.h"
+#include "vnic_nic.h"
+#include "vnic_rss.h"
+#include "enic_res.h"
+
+#define DRV_NAME "enic_pmd"
+#define DRV_DESCRIPTION "Cisco VIC Ethernet NIC Poll-mode Driver"
+#define DRV_VERSION "1.0.0.4"
+#define DRV_COPYRIGHT "Copyright 2008-2014 Cisco Systems, Inc"
+
+#define ENIC_WQ_MAX 8
+#define ENIC_RQ_MAX 8
+#define ENIC_CQ_MAX (ENIC_WQ_MAX + ENIC_RQ_MAX)
+#define ENIC_INTR_MAX (ENIC_CQ_MAX + 2)
+
+#define VLAN_ETH_HLEN 18
+
+#define ENICPMD_SETTING(enic, f) ((enic->config.flags & VENETF_##f) ? 1 : 0)
+
+#define ENICPMD_BDF_LENGTH 13 /* "0000:00:00.0" + '\0' */
+#define PKT_TX_TCP_UDP_CKSUM 0x6000
+#define ENIC_CALC_IP_CKSUM 1
+#define ENIC_CALC_TCP_UDP_CKSUM 2
+#define ENIC_MAX_MTU 9000
+#define PAGE_SIZE 4096
+#define PAGE_ROUND_UP(x) \
+ ((((unsigned long)(x)) + PAGE_SIZE-1) & (~(PAGE_SIZE-1)))
+
+#define ENICPMD_VFIO_PATH "/dev/vfio/vfio"
+
+#define PCI_DEVICE_ID_CISCO_VIC_ENET 0x0043 /* ethernet vnic */
+#define PCI_DEVICE_ID_CISCO_VIC_ENET_VF 0x0071 /* enet SRIOV VF */
+
+
+#define ENICPMD_FDIR_MAX 64
+
+struct enic_fdir_node {
+ struct rte_fdir_filter filter;
+ u16 fltr_id;
+ u16 rq_index;
+};
+
+struct enic_fdir {
+ struct rte_eth_fdir stats;
+ struct rte_hash *hash;
+ struct enic_fdir_node *nodes[ENICPMD_FDIR_MAX];
+};
+
+/* Per-instance private data structure */
+struct enic {
+ struct enic *next;
+ struct rte_pci_device *pdev;
+ struct vnic_enet_config config;
+ struct vnic_dev_bar bar0;
+ struct vnic_dev *vdev;
+
+ struct rte_eth_dev *rte_dev;
+ struct enic_fdir fdir;
+ char bdf_name[ENICPMD_BDF_LENGTH];
+ int dev_fd;
+ int iommu_group_fd;
+ int iommu_groupid;
+ int eventfd;
+ u_int8_t mac_addr[ETH_ALEN];
+ pthread_t err_intr_thread;
+ int promisc;
+ int allmulti;
+ int ig_vlan_strip_en;
+ int link_status;
+ u8 hw_ip_checksum;
+
+ unsigned int flags;
+ unsigned int priv_flags;
+
+ /* work queue */
+ struct vnic_wq wq[ENIC_WQ_MAX];
+ unsigned int wq_count;
+
+ /* receive queue */
+ struct vnic_rq rq[ENIC_RQ_MAX];
+ unsigned int rq_count;
+
+ /* completion queue */
+ struct vnic_cq cq[ENIC_CQ_MAX];
+ unsigned int cq_count;
+
+ /* interrupt resource */
+ struct vnic_intr intr;
+ unsigned int intr_count;
+};
+
+static inline unsigned int enic_cq_rq(struct enic *enic, unsigned int rq)
+{
+ return rq;
+}
+
+static inline unsigned int enic_cq_wq(struct enic *enic, unsigned int wq)
+{
+ return enic->rq_count + wq;
+}
+
+static inline unsigned int enic_msix_err_intr(struct enic *enic)
+{
+ return 0;
+}
+
+static inline struct enic *pmd_priv(struct rte_eth_dev *eth_dev)
+{
+ return (struct enic *)eth_dev->data->dev_private;
+}
+
+#endif /* _ENIC_H_ */
diff --git a/lib/librte_pmd_enic/enic_clsf.c b/lib/librte_pmd_enic/enic_clsf.c
new file mode 100644
index 0000000..0b3038c
--- /dev/null
+++ b/lib/librte_pmd_enic/enic_clsf.c
@@ -0,0 +1,244 @@
+/*
+ * Copyright 2008-2014 Cisco Systems, Inc. All rights reserved.
+ * Copyright 2007 Nuova Systems, Inc. All rights reserved.
+ *
+ * Copyright (c) 2014, Cisco Systems, Inc.
+ * All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ *
+ * 1. Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ *
+ * 2. Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in
+ * the documentation and/or other materials provided with the
+ * distribution.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS
+ * FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE
+ * COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
+ * INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING,
+ * BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
+ * LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
+ * CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
+ * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN
+ * ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+ * POSSIBILITY OF SUCH DAMAGE.
+ *
+ */
+#ident "$Id$"
+
+#include <libgen.h>
+
+#include <rte_ethdev.h>
+#include <rte_malloc.h>
+#include <rte_hash.h>
+#include <rte_byteorder.h>
+
+#include "enic_compat.h"
+#include "enic.h"
+#include "wq_enet_desc.h"
+#include "rq_enet_desc.h"
+#include "cq_enet_desc.h"
+#include "vnic_enet.h"
+#include "vnic_dev.h"
+#include "vnic_wq.h"
+#include "vnic_rq.h"
+#include "vnic_cq.h"
+#include "vnic_intr.h"
+#include "vnic_nic.h"
+
+#ifdef RTE_MACHINE_CPUFLAG_SSE4_2
+#include <rte_hash_crc.h>
+#define DEFAULT_HASH_FUNC rte_hash_crc
+#else
+#include <rte_jhash.h>
+#define DEFAULT_HASH_FUNC rte_jhash
+#endif
+
+#define SOCKET_0 0
+#define ENICPMD_CLSF_HASH_ENTRIES ENICPMD_FDIR_MAX
+#define ENICPMD_CLSF_BUCKET_ENTRIES 4
+
+int enic_fdir_del_fltr(struct enic *enic, struct rte_fdir_filter *params)
+{
+ int32_t pos;
+ struct enic_fdir_node *key;
+ /* See if the key is in the table */
+ pos = rte_hash_del_key(enic->fdir.hash, params);
+ switch (pos) {
+ case -EINVAL:
+ case -ENOENT:
+ enic->fdir.stats.f_remove++;
+ return -EINVAL;
+ default:
+ /* The entry is present in the table */
+ key = enic->fdir.nodes[pos];
+
+ /* Delete the filter */
+ vnic_dev_classifier(enic->vdev, CLSF_DEL,
+ &key->fltr_id, NULL);
+ rte_free(key);
+ enic->fdir.nodes[pos] = NULL;
+ enic->fdir.stats.free++;
+ enic->fdir.stats.remove++;
+ break;
+ }
+ return 0;
+}
+
+int enic_fdir_add_fltr(struct enic *enic, struct rte_fdir_filter *params,
+ u16 queue, u8 drop)
+{
+ struct enic_fdir_node *key;
+ struct filter fltr = {0};
+ int32_t pos;
+ u8 do_free = 0;
+ u16 old_fltr_id = 0;
+
+ if (!enic->fdir.hash || params->vlan_id || !params->l4type ||
+ (RTE_FDIR_IPTYPE_IPV6 == params->iptype) ||
+ (RTE_FDIR_L4TYPE_SCTP == params->l4type) ||
+ params->flex_bytes || drop) {
+ enic->fdir.stats.f_add++;
+ return -ENOTSUP;
+ }
+
+ /* See if the key is already there in the table */
+ pos = rte_hash_del_key(enic->fdir.hash, params);
+ switch (pos) {
+ case -EINVAL:
+ enic->fdir.stats.f_add++;
+ return -EINVAL;
+ case -ENOENT:
+ /* Add a new classifier entry */
+ if (!enic->fdir.stats.free) {
+ enic->fdir.stats.f_add++;
+ return -ENOSPC;
+ }
+ key = (struct enic_fdir_node *)rte_zmalloc(
+ "enic_fdir_node",
+ sizeof(struct enic_fdir_node), 0);
+ if (!key) {
+ enic->fdir.stats.f_add++;
+ return -ENOMEM;
+ }
+ break;
+ default:
+ /* The entry is already present in the table.
+ * Check if there is a change in queue
+ */
+ key = enic->fdir.nodes[pos];
+ enic->fdir.nodes[pos] = NULL;
+ if (unlikely(key->rq_index == queue)) {
+ /* Nothing to be done */
+ pos = rte_hash_add_key(enic->fdir.hash, params);
+ enic->fdir.nodes[pos] = key;
+ enic->fdir.stats.f_add++;
+ dev_warning(enic,
+ "FDIR rule is already present\n");
+ return 0;
+ }
+
+ if (likely(enic->fdir.stats.free)) {
+ /* Add the filter and then delete the old one.
+ * This is to avoid packets from going into the
+ * default queue during the window between
+ * delete and add
+ */
+ do_free = 1;
+ old_fltr_id = key->fltr_id;
+ } else {
+ /* No free slots in the classifier.
+ * Delete the filter and add the modified one later
+ */
+ vnic_dev_classifier(enic->vdev, CLSF_DEL,
+ &key->fltr_id, NULL);
+ enic->fdir.stats.free++;
+ }
+
+ break;
+ }
+
+ key->filter = *params;
+ key->rq_index = queue;
+
+ fltr.type = FILTER_IPV4_5TUPLE;
+ fltr.u.ipv4.src_addr = rte_be_to_cpu_32(params->ip_src.ipv4_addr);
+ fltr.u.ipv4.dst_addr = rte_be_to_cpu_32(params->ip_dst.ipv4_addr);
+ fltr.u.ipv4.src_port = rte_be_to_cpu_16(params->port_src);
+ fltr.u.ipv4.dst_port = rte_be_to_cpu_16(params->port_dst);
+
+ if (RTE_FDIR_L4TYPE_TCP == params->l4type)
+ fltr.u.ipv4.protocol = PROTO_TCP;
+ else
+ fltr.u.ipv4.protocol = PROTO_UDP;
+
+ fltr.u.ipv4.flags = FILTER_FIELDS_IPV4_5TUPLE;
+
+ if (!vnic_dev_classifier(enic->vdev, CLSF_ADD, &queue, &fltr)) {
+ key->fltr_id = queue;
+ } else {
+ dev_err(enic, "Add classifier entry failed\n");
+ enic->fdir.stats.f_add++;
+ rte_free(key);
+ return -1;
+ }
+
+ if (do_free)
+ vnic_dev_classifier(enic->vdev, CLSF_DEL, &old_fltr_id, NULL);
+ else {
+ enic->fdir.stats.free--;
+ enic->fdir.stats.add++;
+ }
+
+ pos = rte_hash_add_key(enic->fdir.hash, (void *)key);
+ enic->fdir.nodes[pos] = key;
+ return 0;
+}
+
+void enic_clsf_destroy(struct enic *enic)
+{
+ u32 index;
+ struct enic_fdir_node *key;
+ /* delete classifier entries */
+ for (index = 0; index < ENICPMD_FDIR_MAX; index++) {
+ key = enic->fdir.nodes[index];
+ if (key) {
+ vnic_dev_classifier(enic->vdev, CLSF_DEL,
+ &key->fltr_id, NULL);
+ rte_free(key);
+ }
+ }
+
+ if (enic->fdir.hash) {
+ rte_hash_free(enic->fdir.hash);
+ enic->fdir.hash = NULL;
+ }
+}
+
+int enic_clsf_init(struct enic *enic)
+{
+ struct rte_hash_parameters hash_params = {
+ .name = "enicpmd_clsf_hash",
+ .entries = ENICPMD_CLSF_HASH_ENTRIES,
+ .bucket_entries = ENICPMD_CLSF_BUCKET_ENTRIES,
+ .key_len = sizeof(struct rte_fdir_filter),
+ .hash_func = DEFAULT_HASH_FUNC,
+ .hash_func_init_val = 0,
+ .socket_id = SOCKET_0,
+ };
+
+ enic->fdir.hash = rte_hash_create(&hash_params);
+ memset(&enic->fdir.stats, 0, sizeof(enic->fdir.stats));
+ enic->fdir.stats.free = ENICPMD_FDIR_MAX;
+ return (NULL == enic->fdir.hash);
+}
diff --git a/lib/librte_pmd_enic/enic_compat.h b/lib/librte_pmd_enic/enic_compat.h
new file mode 100644
index 0000000..d22578e
--- /dev/null
+++ b/lib/librte_pmd_enic/enic_compat.h
@@ -0,0 +1,142 @@
+/*
+ * Copyright 2008-2014 Cisco Systems, Inc. All rights reserved.
+ * Copyright 2007 Nuova Systems, Inc. All rights reserved.
+ *
+ * Copyright (c) 2014, Cisco Systems, Inc.
+ * All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ *
+ * 1. Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ *
+ * 2. Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in
+ * the documentation and/or other materials provided with the
+ * distribution.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS
+ * FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE
+ * COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
+ * INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING,
+ * BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
+ * LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
+ * CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
+ * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN
+ * ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+ * POSSIBILITY OF SUCH DAMAGE.
+ *
+ */
+#ident "$Id$"
+
+#ifndef _ENIC_COMPAT_H_
+#define _ENIC_COMPAT_H_
+
+#include <stdio.h>
+
+#include <rte_atomic.h>
+#include <rte_malloc.h>
+
+#define ENIC_PAGE_ALIGN 4096ULL
+#define ENIC_ALIGN ENIC_PAGE_ALIGN
+#define NAME_MAX 255
+#define ETH_ALEN 6
+
+#define __iomem
+
+#define rmb() rte_rmb() /* dpdk rte provided rmb */
+#define wmb() rte_wmb() /* dpdk rte provided wmb */
+
+#define le16_to_cpu
+#define le32_to_cpu
+#define le64_to_cpu
+#define cpu_to_le16
+#define cpu_to_le32
+#define cpu_to_le64
+
+#ifndef offsetof
+#define offsetof(t, m) ((size_t) &((t *)0)->m)
+#endif
+
+#define pr_err(y, args...) dev_err(0, y, ##args)
+#define pr_warn(y, args...) dev_warning(0, y, ##args)
+#define BUG() pr_err("BUG at %s:%d", __func__, __LINE__)
+
+#define ALIGN(x, a) __ALIGN_MASK(x, (typeof(x))(a)-1)
+#define __ALIGN_MASK(x, mask) (((x)+(mask))&~(mask))
+#define udelay usleep
+#define DIV_ROUND_UP(n, d) (((n) + (d) - 1) / (d))
+
+#define kzalloc(size, flags) calloc(1, size)
+#define kfree(x) free(x)
+
+#define dev_err(x, args...) printf("rte_enic_pmd : Error - " args)
+#define dev_info(x, args...) printf("rte_enic_pmd: Info - " args)
+#define dev_warning(x, args...) printf("rte_enic_pmd: Warning - " args)
+#define dev_trace(x, args...) printf("rte_enic_pmd: Trace - " args)
+
+#define __le16 u16
+#define __le32 u32
+#define __le64 u64
+
+typedef unsigned char u8;
+typedef unsigned short u16;
+typedef unsigned int u32;
+typedef unsigned long long u64;
+typedef unsigned long long dma_addr_t;
+
+static inline u_int32_t ioread32(volatile void *addr)
+{
+ return *(volatile u_int32_t *)addr;
+}
+
+static inline u16 ioread16(volatile void *addr)
+{
+ return *(volatile u16 *)addr;
+}
+
+static inline u_int8_t ioread8(volatile void *addr)
+{
+ return *(volatile u_int8_t *)addr;
+}
+
+static inline void iowrite32(u_int32_t val, volatile void *addr)
+{
+ *(volatile u_int32_t *)addr = val;
+}
+
+static inline void iowrite16(u16 val, volatile void *addr)
+{
+ *(volatile u16 *)addr = val;
+}
+
+static inline void iowrite8(u_int8_t val, volatile void *addr)
+{
+ *(volatile u_int8_t *)addr = val;
+}
+
+static inline unsigned int readl(volatile void __iomem *addr)
+{
+ return *(volatile unsigned int *)addr;
+}
+
+static inline void writel(unsigned int val, volatile void __iomem *addr)
+{
+ *(volatile unsigned int *)addr = val;
+}
+
+#define min_t(type, x, y) ({ \
+ type __min1 = (x); \
+ type __min2 = (y); \
+ __min1 < __min2 ? __min1 : __min2; })
+
+#define max_t(type, x, y) ({ \
+ type __max1 = (x); \
+ type __max2 = (y); \
+ __max1 > __max2 ? __max1 : __max2; })
+
+#endif /* _ENIC_COMPAT_H_ */
diff --git a/lib/librte_pmd_enic/enic_main.c b/lib/librte_pmd_enic/enic_main.c
new file mode 100644
index 0000000..c047cc8
--- /dev/null
+++ b/lib/librte_pmd_enic/enic_main.c
@@ -0,0 +1,1266 @@
+/*
+ * Copyright 2008-2014 Cisco Systems, Inc. All rights reserved.
+ * Copyright 2007 Nuova Systems, Inc. All rights reserved.
+ *
+ * Copyright (c) 2014, Cisco Systems, Inc.
+ * All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ *
+ * 1. Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ *
+ * 2. Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in
+ * the documentation and/or other materials provided with the
+ * distribution.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS
+ * FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE
+ * COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
+ * INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING,
+ * BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
+ * LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
+ * CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
+ * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN
+ * ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+ * POSSIBILITY OF SUCH DAMAGE.
+ *
+ */
+#ident "$Id$"
+
+#include <stdio.h>
+
+#include <sys/stat.h>
+#include <sys/mman.h>
+#include <fcntl.h>
+#include <libgen.h>
+#ifdef RTE_EAL_VFIO
+#include <linux/vfio.h>
+#endif
+
+#include <rte_pci.h>
+#include <rte_memzone.h>
+#include <rte_malloc.h>
+#include <rte_mbuf.h>
+#include <rte_string_fns.h>
+#include <rte_ethdev.h>
+
+#include "enic_compat.h"
+#include "enic.h"
+#include "wq_enet_desc.h"
+#include "rq_enet_desc.h"
+#include "cq_enet_desc.h"
+#include "vnic_enet.h"
+#include "vnic_dev.h"
+#include "vnic_wq.h"
+#include "vnic_rq.h"
+#include "vnic_cq.h"
+#include "vnic_intr.h"
+#include "vnic_nic.h"
+
+static inline int enic_is_sriov_vf(struct enic *enic)
+{
+ return enic->pdev->id.device_id == PCI_DEVICE_ID_CISCO_VIC_ENET_VF;
+}
+
+static int is_zero_addr(char *addr)
+{
+ return !(addr[0] | addr[1] | addr[2] | addr[3] | addr[4] | addr[5]);
+}
+
+static int is_mcast_addr(char *addr)
+{
+ return addr[0] & 1;
+}
+
+static int is_eth_addr_valid(char *addr)
+{
+ return !is_mcast_addr(addr) && !is_zero_addr(addr);
+}
+
+static inline struct rte_mbuf *
+enic_rxmbuf_alloc(struct rte_mempool *mp)
+{
+ struct rte_mbuf *m;
+
+ m = __rte_mbuf_raw_alloc(mp);
+ __rte_mbuf_sanity_check_raw(m, 0);
+ return m;
+}
+
+static const struct rte_memzone *ring_dma_zone_reserve(
+ struct rte_eth_dev *dev, const char *ring_name,
+ uint16_t queue_id, uint32_t ring_size, int socket_id)
+{
+ char z_name[RTE_MEMZONE_NAMESIZE];
+ const struct rte_memzone *mz;
+
+ snprintf(z_name, sizeof(z_name), "%s_%s_%d_%d",
+ dev->driver->pci_drv.name, ring_name,
+ dev->data->port_id, queue_id);
+
+ mz = rte_memzone_lookup(z_name);
+ if (mz)
+ return mz;
+
+ return rte_memzone_reserve_aligned(z_name, (uint64_t) ring_size,
+ socket_id, RTE_MEMZONE_1GB, ENIC_ALIGN);
+}
+
+void enic_set_hdr_split_size(struct enic *enic, u16 split_hdr_size)
+{
+ vnic_set_hdr_split_size(enic->vdev, split_hdr_size);
+}
+
+static void enic_free_wq_buf(struct vnic_wq *wq, struct vnic_wq_buf *buf)
+{
+ struct rte_mbuf *mbuf = (struct rte_mbuf *)buf->os_buf;
+
+ rte_mempool_put(mbuf->pool, mbuf);
+ buf->os_buf = NULL;
+}
+
+static void enic_wq_free_buf(struct vnic_wq *wq,
+ struct cq_desc *cq_desc, struct vnic_wq_buf *buf, void *opaque)
+{
+ enic_free_wq_buf(wq, buf);
+}
+
+static int enic_wq_service(struct vnic_dev *vdev, struct cq_desc *cq_desc,
+ u8 type, u16 q_number, u16 completed_index, void *opaque)
+{
+ struct enic *enic = vnic_dev_priv(vdev);
+
+ vnic_wq_service(&enic->wq[q_number], cq_desc,
+ completed_index, enic_wq_free_buf,
+ opaque);
+
+ return 0;
+}
+
+static void enic_log_q_error(struct enic *enic)
+{
+ unsigned int i;
+ u32 error_status;
+
+ for (i = 0; i < enic->wq_count; i++) {
+ error_status = vnic_wq_error_status(&enic->wq[i]);
+ if (error_status)
+ dev_err(enic, "WQ[%d] error_status %d\n", i,
+ error_status);
+ }
+
+ for (i = 0; i < enic->rq_count; i++) {
+ error_status = vnic_rq_error_status(&enic->rq[i]);
+ if (error_status)
+ dev_err(enic, "RQ[%d] error_status %d\n", i,
+ error_status);
+ }
+}
+
+unsigned int enic_cleanup_wq(struct enic *enic, struct vnic_wq *wq)
+{
+ unsigned int cq = enic_cq_wq(enic, wq->index);
+
+ /* Return the work done */
+ return vnic_cq_service(&enic->cq[cq],
+ -1 /*wq_work_to_do*/, enic_wq_service, NULL);
+}
+
+
+int enic_send_pkt(struct enic *enic, struct vnic_wq *wq,
+ struct rte_mbuf *tx_pkt, unsigned short len,
+ u_int8_t sop, u_int8_t eop,
+ u_int16_t ol_flags, u_int16_t vlan_tag)
+{
+ struct wq_enet_desc *desc = vnic_wq_next_desc(wq);
+ u_int16_t mss = 0;
+ u_int16_t header_length = 0;
+ u_int8_t cq_entry = eop;
+ u_int8_t vlan_tag_insert = 0;
+ unsigned char *buf = (unsigned char *)(tx_pkt->buf_addr) +
+ RTE_PKTMBUF_HEADROOM;
+ u_int64_t bus_addr = (dma_addr_t)
+ (tx_pkt->buf_physaddr + RTE_PKTMBUF_HEADROOM);
+
+ if (sop) {
+ if (ol_flags & PKT_TX_VLAN_PKT)
+ vlan_tag_insert = 1;
+
+ if (enic->hw_ip_checksum) {
+ if (ol_flags & PKT_TX_IP_CKSUM)
+ mss |= ENIC_CALC_IP_CKSUM;
+
+ if (ol_flags & PKT_TX_TCP_UDP_CKSUM)
+ mss |= ENIC_CALC_TCP_UDP_CKSUM;
+ }
+ }
+
+ wq_enet_desc_enc(desc,
+ bus_addr,
+ len,
+ mss,
+ 0 /* header_length */,
+ 0 /* offload_mode WQ_ENET_OFFLOAD_MODE_CSUM */,
+ eop,
+ cq_entry,
+ 0 /* fcoe_encap */,
+ vlan_tag_insert,
+ vlan_tag,
+ 0 /* loopback */);
+
+ vnic_wq_post(wq, (void *)tx_pkt, bus_addr, len,
+ sop, eop,
+ 1 /*desc_skip_cnt*/,
+ cq_entry,
+ 0 /*compressed send*/,
+ 0 /*wrid*/);
+
+ return 0;
+}
+
+void enic_dev_stats_clear(struct enic *enic)
+{
+ if (vnic_dev_stats_clear(enic->vdev))
+ dev_err(enic, "Error in clearing stats\n");
+}
+
+void enic_dev_stats_get(struct enic *enic, struct rte_eth_stats *r_stats)
+{
+ struct vnic_stats *stats;
+
+ memset(r_stats, 0, sizeof(*r_stats));
+ if (vnic_dev_stats_dump(enic->vdev, &stats)) {
+ dev_err(enic, "Error in getting stats\n");
+ return;
+ }
+
+ r_stats->ipackets = stats->rx.rx_frames_ok;
+ r_stats->opackets = stats->tx.tx_frames_ok;
+
+ r_stats->ibytes = stats->rx.rx_bytes_ok;
+ r_stats->obytes = stats->tx.tx_bytes_ok;
+
+ r_stats->ierrors = stats->rx.rx_errors;
+ r_stats->oerrors = stats->tx.tx_errors;
+
+ r_stats->imcasts = stats->rx.rx_multicast_frames_ok;
+ r_stats->rx_nombuf = stats->rx.rx_no_bufs;
+}
+
+void enic_del_mac_address(struct enic *enic)
+{
+ if (vnic_dev_del_addr(enic->vdev, enic->mac_addr))
+ dev_err(enic, "del mac addr failed\n");
+}
+
+void enic_set_mac_address(struct enic *enic, uint8_t *mac_addr)
+{
+ int err;
+
+ if (!is_eth_addr_valid(mac_addr)) {
+ dev_err(enic, "invalid mac address\n");
+ return;
+ }
+
+ err = vnic_dev_del_addr(enic->vdev, mac_addr);
+ if (err) {
+ dev_err(enic, "del mac addr failed\n");
+ return;
+ }
+
+ ether_addr_copy((struct ether_addr *)mac_addr,
+ (struct ether_addr *)enic->mac_addr);
+
+ err = vnic_dev_add_addr(enic->vdev, mac_addr);
+ if (err) {
+ dev_err(enic, "add mac addr failed\n");
+ return;
+ }
+}
+
+static void enic_free_rq_buf(struct vnic_rq *rq, struct vnic_rq_buf *buf)
+{
+ struct enic *enic = vnic_dev_priv(rq->vdev);
+
+ if (!buf->os_buf)
+ return;
+
+ rte_pktmbuf_free((struct rte_mbuf *)buf->os_buf);
+ buf->os_buf = NULL;
+}
+
+void enic_init_vnic_resources(struct enic *enic)
+{
+ unsigned int error_interrupt_enable = 1;
+ unsigned int error_interrupt_offset = 0;
+ int index = 0;
+ unsigned int cq_index = 0;
+
+ for (index = 0; index < enic->rq_count; index++) {
+ vnic_rq_init(&enic->rq[index],
+ enic_cq_rq(enic, index),
+ error_interrupt_enable,
+ error_interrupt_offset);
+ }
+
+ for (index = 0; index < enic->wq_count; index++) {
+ vnic_wq_init(&enic->wq[index],
+ enic_cq_wq(enic, index),
+ error_interrupt_enable,
+ error_interrupt_offset);
+ }
+
+ vnic_dev_stats_clear(enic->vdev);
+
+ for (index = 0; index < enic->cq_count; index++) {
+ vnic_cq_init(&enic->cq[index],
+ 0 /* flow_control_enable */,
+ 1 /* color_enable */,
+ 0 /* cq_head */,
+ 0 /* cq_tail */,
+ 1 /* cq_tail_color */,
+ 0 /* interrupt_enable */,
+ 1 /* cq_entry_enable */,
+ 0 /* cq_message_enable */,
+ 0 /* interrupt offset */,
+ 0 /* cq_message_addr */);
+ }
+
+ vnic_intr_init(&enic->intr,
+ enic->config.intr_timer_usec,
+ enic->config.intr_timer_type,
+ /*mask_on_assertion*/1);
+}
+
+
+static int enic_rq_alloc_buf(struct vnic_rq *rq)
+{
+ struct enic *enic = vnic_dev_priv(rq->vdev);
+ void *buf;
+ dma_addr_t dma_addr;
+ struct rq_enet_desc *desc = vnic_rq_next_desc(rq);
+ u_int8_t type = RQ_ENET_TYPE_ONLY_SOP;
+ u_int16_t len = ENIC_MAX_MTU + VLAN_ETH_HLEN;
+ u16 split_hdr_size = vnic_get_hdr_split_size(enic->vdev);
+ struct rte_mbuf *mbuf = enic_rxmbuf_alloc(rq->mp);
+ struct rte_mbuf *hdr_mbuf = NULL;
+
+ if (!mbuf) {
+ dev_err(enic, "mbuf alloc in enic_rq_alloc_buf failed\n");
+ return -1;
+ }
+
+ if (unlikely(split_hdr_size)) {
+ if (vnic_rq_desc_avail(rq) < 2) {
+ rte_mempool_put(mbuf->pool, mbuf);
+ return -1;
+ }
+ hdr_mbuf = enic_rxmbuf_alloc(rq->mp);
+ if (!hdr_mbuf) {
+ rte_mempool_put(mbuf->pool, mbuf);
+ dev_err(enic,
+ "hdr_mbuf alloc in enic_rq_alloc_buf failed\n");
+ return -1;
+ }
+
+ hdr_mbuf->data_off = RTE_PKTMBUF_HEADROOM;
+ buf = rte_pktmbuf_mtod(hdr_mbuf, void *);
+
+ hdr_mbuf->nb_segs = 2;
+ hdr_mbuf->port = rq->index;
+ hdr_mbuf->next = mbuf;
+
+ dma_addr = (dma_addr_t)
+ (hdr_mbuf->buf_physaddr + hdr_mbuf->data_off);
+
+ rq_enet_desc_enc(desc, dma_addr, type, split_hdr_size);
+
+ vnic_rq_post(rq, (void *)hdr_mbuf, 0 /*os_buf_index*/, dma_addr,
+ (unsigned int)split_hdr_size, 0 /*wrid*/);
+
+ desc = vnic_rq_next_desc(rq);
+ type = RQ_ENET_TYPE_NOT_SOP;
+ } else {
+ mbuf->nb_segs = 1;
+ mbuf->port = rq->index;
+ }
+
+ mbuf->data_off = RTE_PKTMBUF_HEADROOM;
+ buf = rte_pktmbuf_mtod(mbuf, void *);
+ mbuf->next = NULL;
+
+ dma_addr = (dma_addr_t)
+ (mbuf->buf_physaddr + mbuf->data_off);
+
+ rq_enet_desc_enc(desc, dma_addr, type, mbuf->buf_len);
+
+ vnic_rq_post(rq, (void *)mbuf, 0 /*os_buf_index*/, dma_addr,
+ (unsigned int)mbuf->buf_len, 0 /*wrid*/);
+
+ return 0;
+}
+
+static int enic_rq_indicate_buf(struct vnic_rq *rq,
+ struct cq_desc *cq_desc, struct vnic_rq_buf *buf,
+ int skipped, void *opaque)
+{
+ struct enic *enic = vnic_dev_priv(rq->vdev);
+ struct rte_mbuf **rx_pkt_bucket = (struct rte_mbuf **)opaque;
+ struct rte_mbuf *rx_pkt = NULL;
+ struct rte_mbuf *hdr_rx_pkt = NULL;
+
+ u8 type, color, eop, sop, ingress_port, vlan_stripped;
+ u8 fcoe, fcoe_sof, fcoe_fc_crc_ok, fcoe_enc_error, fcoe_eof;
+ u8 tcp_udp_csum_ok, udp, tcp, ipv4_csum_ok;
+ u8 ipv6, ipv4, ipv4_fragment, fcs_ok, rss_type, csum_not_calc;
+ u8 packet_error;
+ u16 q_number, completed_index, bytes_written, vlan_tci, checksum;
+ u32 rss_hash;
+
+ cq_enet_rq_desc_dec((struct cq_enet_rq_desc *)cq_desc,
+ &type, &color, &q_number, &completed_index,
+ &ingress_port, &fcoe, &eop, &sop, &rss_type,
+ &csum_not_calc, &rss_hash, &bytes_written,
+ &packet_error, &vlan_stripped, &vlan_tci, &checksum,
+ &fcoe_sof, &fcoe_fc_crc_ok, &fcoe_enc_error,
+ &fcoe_eof, &tcp_udp_csum_ok, &udp, &tcp,
+ &ipv4_csum_ok, &ipv6, &ipv4, &ipv4_fragment,
+ &fcs_ok);
+
+ if (packet_error) {
+ dev_err(enic, "packet error\n");
+ return 0;
+ }
+
+ rx_pkt = (struct rte_mbuf *)buf->os_buf;
+ buf->os_buf = NULL;
+
+ if (unlikely(skipped)) {
+ rx_pkt->data_len = 0;
+ return 0;
+ }
+
+ if (likely(!vnic_get_hdr_split_size(enic->vdev))) {
+ /* No header split configured */
+ *rx_pkt_bucket = rx_pkt;
+ rx_pkt->pkt_len = bytes_written;
+
+ if (ipv4) {
+ rx_pkt->ol_flags |= PKT_RX_IPV4_HDR;
+ if (!csum_not_calc) {
+ if (unlikely(!ipv4_csum_ok))
+ rx_pkt->ol_flags |= PKT_RX_IP_CKSUM_BAD;
+
+ if ((tcp || udp) && (!tcp_udp_csum_ok))
+ rx_pkt->ol_flags |= PKT_RX_L4_CKSUM_BAD;
+ }
+ } else if (ipv6)
+ rx_pkt->ol_flags |= PKT_RX_IPV6_HDR;
+ } else {
+ /* Header split */
+ if (sop && !eop) {
+ /* This piece is header */
+ *rx_pkt_bucket = rx_pkt;
+ rx_pkt->pkt_len = bytes_written;
+ } else {
+ if (sop && eop) {
+ /* The packet is smaller than split_hdr_size */
+ *rx_pkt_bucket = rx_pkt;
+ rx_pkt->pkt_len = bytes_written;
+ if (ipv4) {
+ rx_pkt->ol_flags |= PKT_RX_IPV4_HDR;
+ if (!csum_not_calc) {
+ if (unlikely(!ipv4_csum_ok))
+ rx_pkt->ol_flags |=
+ PKT_RX_IP_CKSUM_BAD;
+
+ if ((tcp || udp) &&
+ (!tcp_udp_csum_ok))
+ rx_pkt->ol_flags |=
+ PKT_RX_L4_CKSUM_BAD;
+ }
+ } else if (ipv6)
+ rx_pkt->ol_flags |= PKT_RX_IPV6_HDR;
+ } else {
+ /* Payload */
+ hdr_rx_pkt = *rx_pkt_bucket;
+ hdr_rx_pkt->pkt_len += bytes_written;
+ if (ipv4) {
+ hdr_rx_pkt->ol_flags |= PKT_RX_IPV4_HDR;
+ if (!csum_not_calc) {
+ if (unlikely(!ipv4_csum_ok))
+ hdr_rx_pkt->ol_flags |=
+ PKT_RX_IP_CKSUM_BAD;
+
+ if ((tcp || udp) &&
+ (!tcp_udp_csum_ok))
+ hdr_rx_pkt->ol_flags |=
+ PKT_RX_L4_CKSUM_BAD;
+ }
+ } else if (ipv6)
+ hdr_rx_pkt->ol_flags |= PKT_RX_IPV6_HDR;
+
+ }
+ }
+ }
+
+ rx_pkt->data_len = bytes_written;
+
+ if (rss_hash) {
+ rx_pkt->ol_flags |= PKT_RX_RSS_HASH;
+ rx_pkt->hash.rss = rss_hash;
+ }
+
+ if (vlan_tci) {
+ rx_pkt->ol_flags |= PKT_RX_VLAN_PKT;
+ rx_pkt->vlan_tci = vlan_tci;
+ }
+
+ return eop;
+}
+
+static int enic_rq_service(struct vnic_dev *vdev, struct cq_desc *cq_desc,
+ u8 type, u16 q_number, u16 completed_index, void *opaque)
+{
+ struct enic *enic = vnic_dev_priv(vdev);
+
+ return vnic_rq_service(&enic->rq[q_number], cq_desc,
+ completed_index, VNIC_RQ_RETURN_DESC,
+ enic_rq_indicate_buf, opaque);
+
+}
+
+int enic_poll(struct vnic_rq *rq, struct rte_mbuf **rx_pkts,
+ unsigned int budget, unsigned int *work_done)
+{
+ struct enic *enic = vnic_dev_priv(rq->vdev);
+ unsigned int cq = enic_cq_rq(enic, rq->index);
+ int err = 0;
+
+ *work_done = vnic_cq_service(&enic->cq[cq],
+ budget, enic_rq_service, (void *)rx_pkts);
+
+ if (*work_done) {
+ vnic_rq_fill(rq, enic_rq_alloc_buf);
+
+ /* Need at least one buffer on ring to get going */
+ if (vnic_rq_desc_used(rq) == 0) {
+ dev_err(enic, "Unable to alloc receive buffers\n");
+ err = -1;
+ }
+ }
+ return err;
+}
+
+void *enic_alloc_consistent(void *priv, size_t size,
+ dma_addr_t *dma_handle, u8 *name)
+{
+ struct enic *enic = (struct enic *)priv;
+ void *vaddr;
+ const struct rte_memzone *rz;
+ *dma_handle = 0;
+
+ rz = rte_memzone_reserve_aligned(name, size, 0, 0, ENIC_ALIGN);
+ if (!rz) {
+ pr_err("%s : Failed to allocate memory requested for %s",
+ __func__, name);
+ return NULL;
+ }
+
+ vaddr = rz->addr;
+ *dma_handle = (dma_addr_t)rz->phys_addr;
+
+ return vaddr;
+}
+
+void enic_free_consistent(struct rte_pci_device *hwdev, size_t size,
+ void *vaddr, dma_addr_t dma_handle)
+{
+ /* Nothing to be done */
+}
+
+void enic_intr_handler(__rte_unused struct rte_intr_handle *handle,
+ void *arg)
+{
+ struct enic *enic = pmd_priv((struct rte_eth_dev *)arg);
+
+ dev_err(enic, "Err intr.\n");
+ vnic_intr_return_all_credits(&enic->intr);
+
+ enic_log_q_error(enic);
+}
+
+int enic_enable(struct enic *enic)
+{
+ int index;
+ void *res;
+ char mz_name[RTE_MEMZONE_NAMESIZE];
+ const struct rte_memzone *rmz;
+ struct rte_eth_dev *eth_dev = enic->rte_dev;
+
+ eth_dev->data->dev_link.link_speed = vnic_dev_port_speed(enic->vdev);
+ eth_dev->data->dev_link.link_duplex = ETH_LINK_FULL_DUPLEX;
+ vnic_dev_notify_set(enic->vdev, -1); /* No Intr for notify */
+
+ if (enic_clsf_init(enic))
+ dev_warning(enic, "Init of hash table for clsf failed. "
+ "Flow director feature will not work\n");
+
+ /* Fill RQ bufs */
+ for (index = 0; index < enic->rq_count; index++) {
+ vnic_rq_fill(&enic->rq[index], enic_rq_alloc_buf);
+
+ /* Need at least one buffer on ring to get going
+ */
+ if (vnic_rq_desc_used(&enic->rq[index]) == 0) {
+ dev_err(enic, "Unable to alloc receive buffers\n");
+ return -1;
+ }
+ }
+
+ for (index = 0; index < enic->wq_count; index++)
+ vnic_wq_enable(&enic->wq[index]);
+ for (index = 0; index < enic->rq_count; index++)
+ vnic_rq_enable(&enic->rq[index]);
+
+ vnic_dev_enable_wait(enic->vdev);
+
+#ifndef RTE_EAL_VFIO
+ /* Register and enable error interrupt */
+ rte_intr_callback_register(&(enic->pdev->intr_handle),
+ enic_intr_handler, (void *)enic->rte_dev);
+
+ rte_intr_enable(&(enic->pdev->intr_handle));
+#endif
+ vnic_intr_unmask(&enic->intr);
+
+ return 0;
+}
+
+int enic_alloc_intr_resources(struct enic *enic)
+{
+ int err;
+
+ dev_info(enic, "vNIC resources used: "
+ "wq %d rq %d cq %d intr %d\n",
+ enic->wq_count, enic->rq_count,
+ enic->cq_count, enic->intr_count);
+
+ err = vnic_intr_alloc(enic->vdev, &enic->intr, 0);
+ if (err)
+ enic_free_vnic_resources(enic);
+
+ return err;
+}
+
+void enic_free_rq(void *rxq)
+{
+ struct vnic_rq *rq = (struct vnic_rq *)rxq;
+ struct enic *enic = vnic_dev_priv(rq->vdev);
+
+ vnic_rq_free(rq);
+ vnic_cq_free(&enic->cq[rq->index]);
+}
+
+void enic_start_wq(struct enic *enic, uint16_t queue_idx)
+{
+ vnic_wq_enable(&enic->wq[queue_idx]);
+}
+
+int enic_stop_wq(struct enic *enic, uint16_t queue_idx)
+{
+ return vnic_wq_disable(&enic->wq[queue_idx]);
+}
+
+void enic_start_rq(struct enic *enic, uint16_t queue_idx)
+{
+ vnic_rq_enable(&enic->rq[queue_idx]);
+}
+
+int enic_stop_rq(struct enic *enic, uint16_t queue_idx)
+{
+ return vnic_rq_disable(&enic->rq[queue_idx]);
+}
+
+int enic_alloc_rq(struct enic *enic, uint16_t queue_idx,
+ unsigned int socket_id, struct rte_mempool *mp,
+ uint16_t nb_desc)
+{
+ int err;
+ struct vnic_rq *rq = &enic->rq[queue_idx];
+
+ rq->socket_id = socket_id;
+ rq->mp = mp;
+
+ if (nb_desc) {
+ if (nb_desc > enic->config.rq_desc_count) {
+ dev_warning(enic,
+ "RQ %d - number of rx desc in cmd line (%d) "
+ "is greater than that in the UCSM/CIMC adapter "
+ "policy. Applying the value in the adapter "
+ "policy (%d).\n",
+ queue_idx, nb_desc, enic->config.rq_desc_count);
+ } else if (nb_desc != enic->config.rq_desc_count) {
+ enic->config.rq_desc_count = nb_desc;
+ dev_info(enic,
+ "RX Queues - effective number of descs:%d\n",
+ nb_desc);
+ }
+ }
+
+ /* Allocate queue resources */
+ err = vnic_rq_alloc(enic->vdev, &enic->rq[queue_idx], queue_idx,
+ enic->config.rq_desc_count,
+ sizeof(struct rq_enet_desc));
+ if (err) {
+ dev_err(enic, "error in allocation of rq\n");
+ return err;
+ }
+
+ err = vnic_cq_alloc(enic->vdev, &enic->cq[queue_idx], queue_idx,
+ socket_id, enic->config.rq_desc_count,
+ sizeof(struct cq_enet_rq_desc));
+ if (err) {
+ vnic_rq_free(rq);
+ dev_err(enic, "error in allocation of cq for rq\n");
+ }
+
+ return err;
+}
+
+void enic_free_wq(void *txq)
+{
+ struct vnic_wq *wq = (struct vnic_wq *)txq;
+ struct enic *enic = vnic_dev_priv(wq->vdev);
+
+ vnic_wq_free(wq);
+ vnic_cq_free(&enic->cq[enic->rq_count + wq->index]);
+}
+
+int enic_alloc_wq(struct enic *enic, uint16_t queue_idx,
+ unsigned int socket_id, uint16_t nb_desc)
+{
+ int err;
+ struct vnic_wq *wq = &enic->wq[queue_idx];
+ unsigned int cq_index = enic_cq_wq(enic, queue_idx);
+
+ wq->socket_id = socket_id;
+ if (nb_desc) {
+ if (nb_desc > enic->config.wq_desc_count) {
+ dev_warning(enic,
+ "WQ %d - number of tx desc in cmd line (%d) "
+ "is greater than that in the UCSM/CIMC adapter "
+ "policy. Applying the value in the adapter "
+ "policy (%d).\n",
+ queue_idx, nb_desc, enic->config.wq_desc_count);
+ } else if (nb_desc != enic->config.wq_desc_count) {
+ enic->config.wq_desc_count = nb_desc;
+ dev_info(enic,
+ "TX Queues - effective number of descs:%d\n",
+ nb_desc);
+ }
+ }
+
+ /* Allocate queue resources */
+ err = vnic_wq_alloc(enic->vdev, &enic->wq[queue_idx], queue_idx,
+ enic->config.wq_desc_count,
+ sizeof(struct wq_enet_desc));
+ if (err) {
+ dev_err(enic, "error in allocation of wq\n");
+ return err;
+ }
+
+ err = vnic_cq_alloc(enic->vdev, &enic->cq[cq_index], cq_index,
+ socket_id, enic->config.wq_desc_count,
+ sizeof(struct cq_enet_wq_desc));
+ if (err) {
+ vnic_wq_free(wq);
+ dev_err(enic, "error in allocation of cq for wq\n");
+ }
+
+ return err;
+}
+
+int enic_disable(struct enic *enic)
+{
+ unsigned int i;
+ int err;
+
+ vnic_intr_mask(&enic->intr);
+ (void)vnic_intr_masked(&enic->intr); /* flush write */
+
+ vnic_dev_disable(enic->vdev);
+
+ enic_clsf_destroy(enic);
+
+ if (!enic_is_sriov_vf(enic))
+ vnic_dev_del_addr(enic->vdev, enic->mac_addr);
+
+ for (i = 0; i < enic->wq_count; i++) {
+ err = vnic_wq_disable(&enic->wq[i]);
+ if (err)
+ return err;
+ }
+ for (i = 0; i < enic->rq_count; i++) {
+ err = vnic_rq_disable(&enic->rq[i]);
+ if (err)
+ return err;
+ }
+
+ vnic_dev_set_reset_flag(enic->vdev, 1);
+ vnic_dev_notify_unset(enic->vdev);
+
+ for (i = 0; i < enic->wq_count; i++)
+ vnic_wq_clean(&enic->wq[i], enic_free_wq_buf);
+ for (i = 0; i < enic->rq_count; i++)
+ vnic_rq_clean(&enic->rq[i], enic_free_rq_buf);
+ for (i = 0; i < enic->cq_count; i++)
+ vnic_cq_clean(&enic->cq[i]);
+ vnic_intr_clean(&enic->intr);
+
+ return 0;
+}
+
+static int enic_dev_wait(struct vnic_dev *vdev,
+ int (*start)(struct vnic_dev *, int),
+ int (*finished)(struct vnic_dev *, int *),
+ int arg)
+{
+ int done;
+ int err;
+ int i;
+
+ err = start(vdev, arg);
+ if (err)
+ return err;
+
+ /* Wait for func to complete...2 seconds max */
+ for (i = 0; i < 2000; i++) {
+ err = finished(vdev, &done);
+ if (err)
+ return err;
+ if (done)
+ return 0;
+ usleep(1000);
+ }
+ return -ETIMEDOUT;
+}
+
+static int enic_dev_open(struct enic *enic)
+{
+ int err;
+
+ err = enic_dev_wait(enic->vdev, vnic_dev_open,
+ vnic_dev_open_done, 0);
+ if (err)
+ dev_err(enic_get_dev(enic),
+ "vNIC device open failed, err %d\n", err);
+
+ return err;
+}
+
+static int enic_set_rsskey(struct enic *enic)
+{
+ dma_addr_t rss_key_buf_pa;
+ union vnic_rss_key *rss_key_buf_va = NULL;
+ union vnic_rss_key rss_key = {
+ .key[0].b = {85, 67, 83, 97, 119, 101, 115, 111, 109, 101},
+ .key[1].b = {80, 65, 76, 79, 117, 110, 105, 113, 117, 101},
+ .key[2].b = {76, 73, 78, 85, 88, 114, 111, 99, 107, 115},
+ .key[3].b = {69, 78, 73, 67, 105, 115, 99, 111, 111, 108},
+ };
+ int err;
+ char name[NAME_MAX];
+
+ snprintf(name, NAME_MAX, "rss_key-%s", enic->bdf_name);
+ rss_key_buf_va = enic_alloc_consistent(enic, sizeof(union vnic_rss_key),
+ &rss_key_buf_pa, name);
+ if (!rss_key_buf_va)
+ return -ENOMEM;
+
+ rte_memcpy(rss_key_buf_va, &rss_key, sizeof(union vnic_rss_key));
+
+ err = enic_set_rss_key(enic,
+ rss_key_buf_pa,
+ sizeof(union vnic_rss_key));
+
+ enic_free_consistent(enic->pdev, sizeof(union vnic_rss_key),
+ rss_key_buf_va, rss_key_buf_pa);
+
+ return err;
+}
+
+static int enic_set_rsscpu(struct enic *enic, u8 rss_hash_bits)
+{
+ dma_addr_t rss_cpu_buf_pa;
+ union vnic_rss_cpu *rss_cpu_buf_va = NULL;
+ unsigned int i;
+ int err;
+ char name[NAME_MAX];
+
+ snprintf(name, NAME_MAX, "rss_cpu-%s", enic->bdf_name);
+ rss_cpu_buf_va = enic_alloc_consistent(enic, sizeof(union vnic_rss_cpu),
+ &rss_cpu_buf_pa, name);
+ if (!rss_cpu_buf_va)
+ return -ENOMEM;
+
+ for (i = 0; i < (1 << rss_hash_bits); i++)
+ (*rss_cpu_buf_va).cpu[i/4].b[i%4] = i % enic->rq_count;
+
+ err = enic_set_rss_cpu(enic,
+ rss_cpu_buf_pa,
+ sizeof(union vnic_rss_cpu));
+
+ enic_free_consistent(enic->pdev, sizeof(union vnic_rss_cpu),
+ rss_cpu_buf_va, rss_cpu_buf_pa);
+
+ return err;
+}
+
+static int enic_set_niccfg(struct enic *enic, u8 rss_default_cpu,
+ u8 rss_hash_type, u8 rss_hash_bits, u8 rss_base_cpu, u8 rss_enable)
+{
+ const u8 tso_ipid_split_en = 0;
+ int err;
+
+ /* Enable VLAN tag stripping */
+
+ err = enic_set_nic_cfg(enic,
+ rss_default_cpu, rss_hash_type,
+ rss_hash_bits, rss_base_cpu,
+ rss_enable, tso_ipid_split_en,
+ enic->ig_vlan_strip_en);
+
+ return err;
+}
+
+int enic_set_rss_nic_cfg(struct enic *enic)
+{
+ const u8 rss_default_cpu = 0;
+ const u8 rss_hash_type = NIC_CFG_RSS_HASH_TYPE_IPV4 |
+ NIC_CFG_RSS_HASH_TYPE_TCP_IPV4 |
+ NIC_CFG_RSS_HASH_TYPE_IPV6 |
+ NIC_CFG_RSS_HASH_TYPE_TCP_IPV6;
+ const u8 rss_hash_bits = 7;
+ const u8 rss_base_cpu = 0;
+ u8 rss_enable = ENIC_SETTING(enic, RSS) && (enic->rq_count > 1);
+
+ if (rss_enable) {
+ if (!enic_set_rsskey(enic)) {
+ if (enic_set_rsscpu(enic, rss_hash_bits)) {
+ rss_enable = 0;
+ dev_warning(enic, "RSS disabled, "
+ "failed to set RSS cpu indirection table.\n");
+ }
+ } else {
+ rss_enable = 0;
+ dev_warning(enic,
+ "RSS disabled, Failed to set RSS key.\n");
+ }
+ }
+
+ return enic_set_niccfg(enic, rss_default_cpu, rss_hash_type,
+ rss_hash_bits, rss_base_cpu, rss_enable);
+}
+
+int enic_setup_finish(struct enic *enic)
+{
+ int ret;
+
+ ret = enic_set_rss_nic_cfg(enic);
+ if (ret) {
+ dev_err(enic, "Failed to config nic, aborting.\n");
+ return -1;
+ }
+
+ vnic_dev_add_addr(enic->vdev, enic->mac_addr);
+
+ /* Default conf */
+ vnic_dev_packet_filter(enic->vdev,
+ 1 /* directed */,
+ 1 /* multicast */,
+ 1 /* broadcast */,
+ 0 /* promisc */,
+ 1 /* allmulti */);
+
+ enic->promisc = 0;
+ enic->allmulti = 1;
+
+ return 0;
+}
+
+#ifdef RTE_EAL_VFIO
+static void enic_eventfd_init(struct enic *enic)
+{
+ enic->eventfd = enic->pdev->intr_handle.fd;
+}
+
+void *enic_err_intr_handler(void *arg)
+{
+ struct enic *enic = (struct enic *)arg;
+ unsigned int intr = enic_msix_err_intr(enic);
+ ssize_t size;
+ uint64_t data;
+
+ while (1) {
+ size = read(enic->eventfd, &data, sizeof(data));
+ dev_err(enic, "Err intr.\n");
+ vnic_intr_return_all_credits(&enic->intr);
+
+ enic_log_q_error(enic);
+ }
+
+ return NULL;
+}
+#endif
+
+void enic_add_packet_filter(struct enic *enic)
+{
+ /* Args -> directed, multicast, broadcast, promisc, allmulti */
+ vnic_dev_packet_filter(enic->vdev, 1, 1, 1,
+ enic->promisc, enic->allmulti);
+}
+
+int enic_get_link_status(struct enic *enic)
+{
+ return vnic_dev_link_status(enic->vdev);
+}
+
+#ifdef RTE_EAL_VFIO
+static int enic_create_err_intr_thread(struct enic *enic)
+{
+ pthread_attr_t intr_attr;
+
+ /* create threads for error interrupt handling */
+ pthread_attr_init(&intr_attr);
+ pthread_attr_setstacksize(&intr_attr, 0x100000);
+
+ /* ERR */
+ if (pthread_create(&enic->err_intr_thread, &intr_attr,
+ enic_err_intr_handler, (void *)enic)) {
+ dev_err(enic, "Failed to create err interrupt handler threads\n");
+ return -1;
+ }
+
+ pthread_attr_destroy(&intr_attr);
+
+ return 0;
+}
+
+static int enic_set_intr_mode(struct enic *enic)
+{
+ struct vfio_irq_set *irq_set;
+ int *fds;
+ int size;
+ int ret = -1;
+ int index;
+
+ if (enic->intr_count < 1) {
+ dev_err(enic, "Unsupported resource conf.\n");
+ return -1;
+ }
+ vnic_dev_set_intr_mode(enic->vdev, VNIC_DEV_INTR_MODE_MSIX);
+
+ enic->intr_count = 1;
+
+ enic_eventfd_init(enic);
+ size = sizeof(*irq_set) + (sizeof(int));
+
+ irq_set = rte_zmalloc("enic_vfio_irq", size, 0);
+ if (!irq_set) {
+ dev_err(enic, "Failed to allocate memory for irq_set\n");
+ return -1;
+ }
+ irq_set->argsz = size;
+ irq_set->index = VFIO_PCI_MSIX_IRQ_INDEX;
+ irq_set->start = 0;
+ irq_set->count = 1; /* For error interrupt only */
+ irq_set->flags = VFIO_IRQ_SET_DATA_EVENTFD |
+ VFIO_IRQ_SET_ACTION_TRIGGER;
+ fds = (int *)&irq_set->data;
+
+ fds[0] = enic->eventfd;
+
+ ret = ioctl(enic->pdev->intr_handle.vfio_dev_fd,
+ VFIO_DEVICE_SET_IRQS, irq_set);
+ rte_free(irq_set);
+ if (ret) {
+ dev_err(enic, "Failed to set eventfds for interrupts\n");
+ return -1;
+ }
+
+ enic_create_err_intr_thread(enic);
+ return 0;
+}
+
+static void enic_clear_intr_mode(struct enic *enic)
+{
+ vnic_dev_set_intr_mode(enic->vdev, VNIC_DEV_INTR_MODE_UNKNOWN);
+}
+#endif
+
+static void enic_dev_deinit(struct enic *enic)
+{
+ struct rte_eth_dev *eth_dev = enic->rte_dev;
+
+ if (eth_dev->data->mac_addrs)
+ rte_free(eth_dev->data->mac_addrs);
+
+#ifdef RTE_EAL_VFIO
+ enic_clear_intr_mode(enic);
+#endif
+}
+
+int enic_set_vnic_res(struct enic *enic)
+{
+ struct rte_eth_dev *eth_dev = enic->rte_dev;
+
+ if ((enic->rq_count < eth_dev->data->nb_rx_queues) ||
+ (enic->wq_count < eth_dev->data->nb_tx_queues)) {
+ dev_err(enic, "Not enough resources configured, aborting\n");
+ return -1;
+ }
+
+ enic->rq_count = eth_dev->data->nb_rx_queues;
+ enic->wq_count = eth_dev->data->nb_tx_queues;
+ if (enic->cq_count < (enic->rq_count + enic->wq_count)) {
+ dev_err(enic, "Not enough resources configured, aborting\n");
+ return -1;
+ }
+
+ enic->cq_count = enic->rq_count + enic->wq_count;
+ return 0;
+}
+
+static int enic_dev_init(struct enic *enic)
+{
+ int err;
+ struct rte_eth_dev *eth_dev = enic->rte_dev;
+
+ vnic_dev_intr_coal_timer_info_default(enic->vdev);
+
+ /* Get vNIC configuration */
+ err = enic_get_vnic_config(enic);
+ if (err) {
+ dev_err(enic, "Get vNIC configuration failed, aborting\n");
+ return err;
+ }
+
+ eth_dev->data->mac_addrs = rte_zmalloc("enic_mac_addr", ETH_ALEN, 0);
+ if (!eth_dev->data->mac_addrs) {
+ dev_err(enic, "mac addr storage alloc failed, aborting.\n");
+ return -1;
+ }
+ ether_addr_copy((struct ether_addr *) enic->mac_addr,
+ &eth_dev->data->mac_addrs[0]);
+
+ /* Get available resource counts */
+ enic_get_res_counts(enic);
+
+#ifdef RTE_EAL_VFIO
+ /* Set interrupt mode based on resource counts and system
+ * capabilities
+ */
+ err = enic_set_intr_mode(enic);
+ if (err) {
+ rte_free(eth_dev->data->mac_addrs);
+ enic_clear_intr_mode(enic);
+ dev_err(enic, "Failed to set intr mode based on resource "
+ "counts and system capabilities, aborting\n");
+ return err;
+ }
+#endif
+
+ vnic_dev_set_reset_flag(enic->vdev, 0);
+
+ return 0;
+
+}
+
+int enic_probe(struct enic *enic)
+{
+ const char *bdf = enic->bdf_name;
+ struct rte_pci_device *pdev = enic->pdev;
+ struct rte_eth_dev *eth_dev = enic->rte_dev;
+ int err = -1;
+
+ dev_info(enic, "Initializing ENIC PMD version %s\n", DRV_VERSION);
+
+ enic->bar0.vaddr = (void *)pdev->mem_resource[0].addr;
+ enic->bar0.len = pdev->mem_resource[0].len;
+
+ /* Register vNIC device */
+ enic->vdev = vnic_dev_register(NULL, enic, enic->pdev, &enic->bar0, 1);
+ if (!enic->vdev) {
+ dev_err(enic, "vNIC registration failed, aborting\n");
+ goto err_out;
+ }
+
+ vnic_register_cbacks(enic->vdev,
+ enic_alloc_consistent,
+ enic_free_consistent);
+
+ /* Issue device open to get device in known state */
+ err = enic_dev_open(enic);
+ if (err) {
+ dev_err(enic, "vNIC dev open failed, aborting\n");
+ goto err_out_unregister;
+ }
+
+ /* Set ingress vlan rewrite mode before vnic initialization */
+ err = vnic_dev_set_ig_vlan_rewrite_mode(enic->vdev,
+ IG_VLAN_REWRITE_MODE_PRIORITY_TAG_DEFAULT_VLAN);
+ if (err) {
+ dev_err(enic,
+ "Failed to set ingress vlan rewrite mode, aborting.\n");
+ goto err_out_dev_close;
+ }
+
+ /* Issue device init to initialize the vnic-to-switch link.
+ * We'll start with carrier off and wait for link UP
+ * notification later to turn on carrier. We don't need
+ * to wait here for the vnic-to-switch link initialization
+ * to complete; link UP notification is the indication that
+ * the process is complete.
+ */
+
+ err = vnic_dev_init(enic->vdev, 0);
+ if (err) {
+ dev_err(enic, "vNIC dev init failed, aborting\n");
+ goto err_out_dev_close;
+ }
+
+ err = enic_dev_init(enic);
+ if (err) {
+ dev_err(enic, "Device initialization failed, aborting\n");
+ goto err_out_dev_close;
+ }
+
+ return 0;
+
+err_out_dev_close:
+ vnic_dev_close(enic->vdev);
+err_out_unregister:
+ vnic_dev_unregister(enic->vdev);
+err_out:
+ return err;
+}
+
+void enic_remove(struct enic *enic)
+{
+ enic_dev_deinit(enic);
+ vnic_dev_close(enic->vdev);
+ vnic_dev_unregister(enic->vdev);
+}
diff --git a/lib/librte_pmd_enic/enic_res.c b/lib/librte_pmd_enic/enic_res.c
new file mode 100644
index 0000000..41bff3a
--- /dev/null
+++ b/lib/librte_pmd_enic/enic_res.c
@@ -0,0 +1,221 @@
+/*
+ * Copyright 2008-2010 Cisco Systems, Inc. All rights reserved.
+ * Copyright 2007 Nuova Systems, Inc. All rights reserved.
+ *
+ * Copyright (c) 2014, Cisco Systems, Inc.
+ * All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ *
+ * 1. Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ *
+ * 2. Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in
+ * the documentation and/or other materials provided with the
+ * distribution.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS
+ * FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE
+ * COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
+ * INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING,
+ * BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
+ * LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
+ * CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
+ * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN
+ * ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+ * POSSIBILITY OF SUCH DAMAGE.
+ *
+ */
+#ident "$Id: enic_res.c 171146 2014-05-02 07:08:20Z ssujith $"
+
+#include "enic_compat.h"
+#include "rte_ethdev.h"
+#include "wq_enet_desc.h"
+#include "rq_enet_desc.h"
+#include "cq_enet_desc.h"
+#include "vnic_resource.h"
+#include "vnic_enet.h"
+#include "vnic_dev.h"
+#include "vnic_wq.h"
+#include "vnic_rq.h"
+#include "vnic_cq.h"
+#include "vnic_intr.h"
+#include "vnic_stats.h"
+#include "vnic_nic.h"
+#include "vnic_rss.h"
+#include "enic_res.h"
+#include "enic.h"
+
+int enic_get_vnic_config(struct enic *enic)
+{
+ struct vnic_enet_config *c = &enic->config;
+ int err;
+
+ err = vnic_dev_get_mac_addr(enic->vdev, enic->mac_addr);
+ if (err) {
+ dev_err(enic_get_dev(enic),
+ "Error getting MAC addr, %d\n", err);
+ return err;
+ }
+
+#define GET_CONFIG(m) \
+ do { \
+ err = vnic_dev_spec(enic->vdev, \
+ offsetof(struct vnic_enet_config, m), \
+ sizeof(c->m), &c->m); \
+ if (err) { \
+ dev_err(enic_get_dev(enic), \
+ "Error getting %s, %d\n", #m, err); \
+ return err; \
+ } \
+ } while (0)
+
+ GET_CONFIG(flags);
+ GET_CONFIG(wq_desc_count);
+ GET_CONFIG(rq_desc_count);
+ GET_CONFIG(mtu);
+ GET_CONFIG(intr_timer_type);
+ GET_CONFIG(intr_mode);
+ GET_CONFIG(intr_timer_usec);
+ GET_CONFIG(loop_tag);
+ GET_CONFIG(num_arfs);
+
+ c->wq_desc_count =
+ min_t(u32, ENIC_MAX_WQ_DESCS,
+ max_t(u32, ENIC_MIN_WQ_DESCS,
+ c->wq_desc_count));
+ c->wq_desc_count &= 0xffffffe0; /* must be aligned to groups of 32 */
+
+ c->rq_desc_count =
+ min_t(u32, ENIC_MAX_RQ_DESCS,
+ max_t(u32, ENIC_MIN_RQ_DESCS,
+ c->rq_desc_count));
+ c->rq_desc_count &= 0xffffffe0; /* must be aligned to groups of 32 */
+
+ if (c->mtu == 0)
+ c->mtu = 1500;
+ c->mtu = min_t(u16, ENIC_MAX_MTU,
+ max_t(u16, ENIC_MIN_MTU,
+ c->mtu));
+
+ c->intr_timer_usec = min_t(u32, c->intr_timer_usec,
+ vnic_dev_get_intr_coal_timer_max(enic->vdev));
+
+ dev_info(enic_get_dev(enic),
+ "vNIC MAC addr %02x:%02x:%02x:%02x:%02x:%02x "
+ "wq/rq %d/%d mtu %d\n",
+ enic->mac_addr[0], enic->mac_addr[1], enic->mac_addr[2],
+ enic->mac_addr[3], enic->mac_addr[4], enic->mac_addr[5],
+ c->wq_desc_count, c->rq_desc_count, c->mtu);
+ dev_info(enic_get_dev(enic), "vNIC csum tx/rx %s/%s "
+ "rss %s intr mode %s type %s timer %d usec "
+ "loopback tag 0x%04x\n",
+ ENIC_SETTING(enic, TXCSUM) ? "yes" : "no",
+ ENIC_SETTING(enic, RXCSUM) ? "yes" : "no",
+ ENIC_SETTING(enic, RSS) ? "yes" : "no",
+ c->intr_mode == VENET_INTR_MODE_INTX ? "INTx" :
+ c->intr_mode == VENET_INTR_MODE_MSI ? "MSI" :
+ c->intr_mode == VENET_INTR_MODE_ANY ? "any" :
+ "unknown",
+ c->intr_timer_type == VENET_INTR_TYPE_MIN ? "min" :
+ c->intr_timer_type == VENET_INTR_TYPE_IDLE ? "idle" :
+ "unknown",
+ c->intr_timer_usec,
+ c->loop_tag);
+
+ return 0;
+}
+
+int enic_add_vlan(struct enic *enic, u16 vlanid)
+{
+ u64 a0 = vlanid, a1 = 0;
+ int wait = 1000;
+ int err;
+
+ err = vnic_dev_cmd(enic->vdev, CMD_VLAN_ADD, &a0, &a1, wait);
+ if (err)
+ dev_err(enic_get_dev(enic), "Can't add vlan id, %d\n", err);
+
+ return err;
+}
+
+int enic_del_vlan(struct enic *enic, u16 vlanid)
+{
+ u64 a0 = vlanid, a1 = 0;
+ int wait = 1000;
+ int err;
+
+ err = vnic_dev_cmd(enic->vdev, CMD_VLAN_DEL, &a0, &a1, wait);
+ if (err)
+ dev_err(enic_get_dev(enic), "Can't delete vlan id, %d\n", err);
+
+ return err;
+}
+
+int enic_set_nic_cfg(struct enic *enic, u8 rss_default_cpu, u8 rss_hash_type,
+ u8 rss_hash_bits, u8 rss_base_cpu, u8 rss_enable, u8 tso_ipid_split_en,
+ u8 ig_vlan_strip_en)
+{
+ u64 a0, a1;
+ u32 nic_cfg;
+ int wait = 1000;
+
+ vnic_set_nic_cfg(&nic_cfg, rss_default_cpu,
+ rss_hash_type, rss_hash_bits, rss_base_cpu,
+ rss_enable, tso_ipid_split_en, ig_vlan_strip_en);
+
+ a0 = nic_cfg;
+ a1 = 0;
+
+ return vnic_dev_cmd(enic->vdev, CMD_NIC_CFG, &a0, &a1, wait);
+}
+
+int enic_set_rss_key(struct enic *enic, dma_addr_t key_pa, u64 len)
+{
+ u64 a0 = (u64)key_pa, a1 = len;
+ int wait = 1000;
+
+ return vnic_dev_cmd(enic->vdev, CMD_RSS_KEY, &a0, &a1, wait);
+}
+
+int enic_set_rss_cpu(struct enic *enic, dma_addr_t cpu_pa, u64 len)
+{
+ u64 a0 = (u64)cpu_pa, a1 = len;
+ int wait = 1000;
+
+ return vnic_dev_cmd(enic->vdev, CMD_RSS_CPU, &a0, &a1, wait);
+}
+
+void enic_free_vnic_resources(struct enic *enic)
+{
+ unsigned int i;
+
+ for (i = 0; i < enic->wq_count; i++)
+ vnic_wq_free(&enic->wq[i]);
+ for (i = 0; i < enic->rq_count; i++)
+ vnic_rq_free(&enic->rq[i]);
+ for (i = 0; i < enic->cq_count; i++)
+ vnic_cq_free(&enic->cq[i]);
+ vnic_intr_free(&enic->intr);
+}
+
+void enic_get_res_counts(struct enic *enic)
+{
+ enic->wq_count = vnic_dev_get_res_count(enic->vdev, RES_TYPE_WQ);
+ enic->rq_count = vnic_dev_get_res_count(enic->vdev, RES_TYPE_RQ);
+ enic->cq_count = vnic_dev_get_res_count(enic->vdev, RES_TYPE_CQ);
+ enic->intr_count = vnic_dev_get_res_count(enic->vdev,
+ RES_TYPE_INTR_CTRL);
+
+ dev_info(enic_get_dev(enic),
+ "vNIC resources avail: wq %d rq %d cq %d intr %d\n",
+ enic->wq_count, enic->rq_count,
+ enic->cq_count, enic->intr_count);
+}
diff --git a/lib/librte_pmd_enic/enic_res.h b/lib/librte_pmd_enic/enic_res.h
new file mode 100644
index 0000000..ea60f6a
--- /dev/null
+++ b/lib/librte_pmd_enic/enic_res.h
@@ -0,0 +1,168 @@
+/*
+ * Copyright 2008-2010 Cisco Systems, Inc. All rights reserved.
+ * Copyright 2007 Nuova Systems, Inc. All rights reserved.
+ *
+ * Copyright (c) 2014, Cisco Systems, Inc.
+ * All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ *
+ * 1. Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ *
+ * 2. Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in
+ * the documentation and/or other materials provided with the
+ * distribution.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS
+ * FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE
+ * COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
+ * INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING,
+ * BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
+ * LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
+ * CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
+ * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN
+ * ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+ * POSSIBILITY OF SUCH DAMAGE.
+ *
+ */
+#ident "$Id: enic_res.h 173137 2014-05-16 03:27:22Z sanpilla $"
+
+#ifndef _ENIC_RES_H_
+#define _ENIC_RES_H_
+
+#include "wq_enet_desc.h"
+#include "rq_enet_desc.h"
+#include "vnic_wq.h"
+#include "vnic_rq.h"
+
+#define ENIC_MIN_WQ_DESCS 64
+#define ENIC_MAX_WQ_DESCS 4096
+#define ENIC_MIN_RQ_DESCS 64
+#define ENIC_MAX_RQ_DESCS 4096
+
+#define ENIC_MIN_MTU 68
+#define ENIC_MAX_MTU 9000
+
+#define ENIC_MULTICAST_PERFECT_FILTERS 32
+#define ENIC_UNICAST_PERFECT_FILTERS 32
+
+#define ENIC_NON_TSO_MAX_DESC 16
+
+#define ENIC_SETTING(enic, f) ((enic->config.flags & VENETF_##f) ? 1 : 0)
+
+static inline void enic_queue_wq_desc_ex(struct vnic_wq *wq,
+ void *os_buf, dma_addr_t dma_addr, unsigned int len,
+ unsigned int mss_or_csum_offset, unsigned int hdr_len,
+ int vlan_tag_insert, unsigned int vlan_tag,
+ int offload_mode, int cq_entry, int sop, int eop, int loopback)
+{
+ struct wq_enet_desc *desc = vnic_wq_next_desc(wq);
+ u8 desc_skip_cnt = 1;
+ u8 compressed_send = 0;
+ u64 wrid = 0;
+
+ wq_enet_desc_enc(desc,
+ (u64)dma_addr | VNIC_PADDR_TARGET,
+ (u16)len,
+ (u16)mss_or_csum_offset,
+ (u16)hdr_len, (u8)offload_mode,
+ (u8)eop, (u8)cq_entry,
+ 0, /* fcoe_encap */
+ (u8)vlan_tag_insert,
+ (u16)vlan_tag,
+ (u8)loopback);
+
+ vnic_wq_post(wq, os_buf, dma_addr, len, sop, eop, desc_skip_cnt,
+ (u8)cq_entry, compressed_send, wrid);
+}
+
+static inline void enic_queue_wq_desc_cont(struct vnic_wq *wq,
+ void *os_buf, dma_addr_t dma_addr, unsigned int len,
+ int eop, int loopback)
+{
+ enic_queue_wq_desc_ex(wq, os_buf, dma_addr, len,
+ 0, 0, 0, 0, 0,
+ eop, 0 /* !SOP */, eop, loopback);
+}
+
+static inline void enic_queue_wq_desc(struct vnic_wq *wq, void *os_buf,
+ dma_addr_t dma_addr, unsigned int len, int vlan_tag_insert,
+ unsigned int vlan_tag, int eop, int loopback)
+{
+ enic_queue_wq_desc_ex(wq, os_buf, dma_addr, len,
+ 0, 0, vlan_tag_insert, vlan_tag,
+ WQ_ENET_OFFLOAD_MODE_CSUM,
+ eop, 1 /* SOP */, eop, loopback);
+}
+
+static inline void enic_queue_wq_desc_csum(struct vnic_wq *wq,
+ void *os_buf, dma_addr_t dma_addr, unsigned int len,
+ int ip_csum, int tcpudp_csum, int vlan_tag_insert,
+ unsigned int vlan_tag, int eop, int loopback)
+{
+ enic_queue_wq_desc_ex(wq, os_buf, dma_addr, len,
+ (ip_csum ? 1 : 0) + (tcpudp_csum ? 2 : 0),
+ 0, vlan_tag_insert, vlan_tag,
+ WQ_ENET_OFFLOAD_MODE_CSUM,
+ eop, 1 /* SOP */, eop, loopback);
+}
+
+static inline void enic_queue_wq_desc_csum_l4(struct vnic_wq *wq,
+ void *os_buf, dma_addr_t dma_addr, unsigned int len,
+ unsigned int csum_offset, unsigned int hdr_len,
+ int vlan_tag_insert, unsigned int vlan_tag, int eop, int loopback)
+{
+ enic_queue_wq_desc_ex(wq, os_buf, dma_addr, len,
+ csum_offset, hdr_len, vlan_tag_insert, vlan_tag,
+ WQ_ENET_OFFLOAD_MODE_CSUM_L4,
+ eop, 1 /* SOP */, eop, loopback);
+}
+
+static inline void enic_queue_wq_desc_tso(struct vnic_wq *wq,
+ void *os_buf, dma_addr_t dma_addr, unsigned int len,
+ unsigned int mss, unsigned int hdr_len, int vlan_tag_insert,
+ unsigned int vlan_tag, int eop, int loopback)
+{
+ enic_queue_wq_desc_ex(wq, os_buf, dma_addr, len,
+ mss, hdr_len, vlan_tag_insert, vlan_tag,
+ WQ_ENET_OFFLOAD_MODE_TSO,
+ eop, 1 /* SOP */, eop, loopback);
+}
+
+static inline void enic_queue_rq_desc(struct vnic_rq *rq,
+ void *os_buf, unsigned int os_buf_index,
+ dma_addr_t dma_addr, unsigned int len)
+{
+ struct rq_enet_desc *desc = vnic_rq_next_desc(rq);
+ u64 wrid = 0;
+ u8 type = os_buf_index ?
+ RQ_ENET_TYPE_NOT_SOP : RQ_ENET_TYPE_ONLY_SOP;
+
+ rq_enet_desc_enc(desc,
+ (u64)dma_addr | VNIC_PADDR_TARGET,
+ type, (u16)len);
+
+ vnic_rq_post(rq, os_buf, os_buf_index, dma_addr, len, wrid);
+}
+
+struct enic;
+
+int enic_get_vnic_config(struct enic *);
+int enic_add_vlan(struct enic *enic, u16 vlanid);
+int enic_del_vlan(struct enic *enic, u16 vlanid);
+int enic_set_nic_cfg(struct enic *enic, u8 rss_default_cpu, u8 rss_hash_type,
+ u8 rss_hash_bits, u8 rss_base_cpu, u8 rss_enable, u8 tso_ipid_split_en,
+ u8 ig_vlan_strip_en);
+int enic_set_rss_key(struct enic *enic, dma_addr_t key_pa, u64 len);
+int enic_set_rss_cpu(struct enic *enic, dma_addr_t cpu_pa, u64 len);
+void enic_get_res_counts(struct enic *enic);
+void enic_init_vnic_resources(struct enic *enic);
+int enic_alloc_vnic_resources(struct enic *);
+void enic_free_vnic_resources(struct enic *);
+
+#endif /* _ENIC_RES_H_ */
--
1.9.1
^ permalink raw reply [flat|nested] 36+ messages in thread
* [dpdk-dev] [PATCH v6 5/6] enicpmd: DPDK-ENIC PMD interface
2014-11-25 17:26 [dpdk-dev] [PATCH v6 0/6] enicpmd: Cisco Systems Inc. VIC Ethernet PMD Sujith Sankar
` (3 preceding siblings ...)
2014-11-25 17:26 ` [dpdk-dev] [PATCH v6 4/6] enicpmd: pmd specific code Sujith Sankar
@ 2014-11-25 17:26 ` Sujith Sankar
2014-12-29 8:15 ` Wu, Jingjing
2014-11-25 17:26 ` [dpdk-dev] [PATCH v6 6/6] enicpmd: DPDK changes for accommodating ENIC PMD Sujith Sankar
` (2 subsequent siblings)
7 siblings, 1 reply; 36+ messages in thread
From: Sujith Sankar @ 2014-11-25 17:26 UTC (permalink / raw)
To: dev; +Cc: prrao
Signed-off-by: Sujith Sankar <ssujith@cisco.com>
---
lib/librte_pmd_enic/enic_etherdev.c | 613 ++++++++++++++++++++++++++++++++++++
1 file changed, 613 insertions(+)
create mode 100644 lib/librte_pmd_enic/enic_etherdev.c
diff --git a/lib/librte_pmd_enic/enic_etherdev.c b/lib/librte_pmd_enic/enic_etherdev.c
new file mode 100644
index 0000000..d0577aa
--- /dev/null
+++ b/lib/librte_pmd_enic/enic_etherdev.c
@@ -0,0 +1,613 @@
+/*
+ * Copyright 2008-2014 Cisco Systems, Inc. All rights reserved.
+ * Copyright 2007 Nuova Systems, Inc. All rights reserved.
+ *
+ * Copyright (c) 2014, Cisco Systems, Inc.
+ * All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ *
+ * 1. Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ *
+ * 2. Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in
+ * the documentation and/or other materials provided with the
+ * distribution.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS
+ * FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE
+ * COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
+ * INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING,
+ * BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
+ * LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
+ * CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
+ * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN
+ * ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+ * POSSIBILITY OF SUCH DAMAGE.
+ *
+ */
+#ident "$Id$"
+
+#include <stdio.h>
+#include <stdint.h>
+
+#include <rte_dev.h>
+#include <rte_pci.h>
+#include <rte_ethdev.h>
+#include <rte_string_fns.h>
+
+#include "vnic_intr.h"
+#include "vnic_cq.h"
+#include "vnic_wq.h"
+#include "vnic_rq.h"
+#include "vnic_enet.h"
+#include "enic.h"
+
+#define ENICPMD_FUNC_TRACE() \
+ RTE_LOG(DEBUG, PMD, "ENICPMD trace: %s\n", __func__)
+
+/*
+ * The set of PCI devices this driver supports
+ */
+static struct rte_pci_id pci_id_enic_map[] = {
+#define RTE_PCI_DEV_ID_DECL_ENIC(vend, dev) {RTE_PCI_DEVICE(vend, dev)},
+#ifndef PCI_VENDOR_ID_CISCO
+#define PCI_VENDOR_ID_CISCO 0x1137
+#endif
+#include "rte_pci_dev_ids.h"
+RTE_PCI_DEV_ID_DECL_ENIC(PCI_VENDOR_ID_CISCO, PCI_DEVICE_ID_CISCO_VIC_ENET)
+RTE_PCI_DEV_ID_DECL_ENIC(PCI_VENDOR_ID_CISCO, PCI_DEVICE_ID_CISCO_VIC_ENET_VF)
+{.vendor_id = 0, /* Sentinel */},
+};
+
+static int enicpmd_fdir_remove_perfect_filter(struct rte_eth_dev *eth_dev,
+ struct rte_fdir_filter *fdir_filter,
+ uint16_t soft_id)
+{
+ struct enic *enic = pmd_priv(eth_dev);
+
+ ENICPMD_FUNC_TRACE();
+ return enic_fdir_del_fltr(enic, fdir_filter);
+}
+
+static int enicpmd_fdir_add_perfect_filter(struct rte_eth_dev *eth_dev,
+ struct rte_fdir_filter *fdir_filter, uint16_t soft_id,
+ uint8_t queue, uint8_t drop)
+{
+ struct enic *enic = pmd_priv(eth_dev);
+
+ ENICPMD_FUNC_TRACE();
+ return enic_fdir_add_fltr(enic, fdir_filter, (uint16_t)queue, drop);
+}
+
+static void enicpmd_fdir_info_get(struct rte_eth_dev *eth_dev,
+ struct rte_eth_fdir *fdir)
+{
+ struct enic *enic = pmd_priv(eth_dev);
+
+ ENICPMD_FUNC_TRACE();
+ *fdir = enic->fdir.stats;
+}
+
+static void enicpmd_dev_tx_queue_release(void *txq)
+{
+ ENICPMD_FUNC_TRACE();
+ enic_free_wq(txq);
+}
+
+static int enicpmd_dev_setup_intr(struct enic *enic)
+{
+ int ret;
+ int index;
+
+ ENICPMD_FUNC_TRACE();
+
+ /* Are we done with the init of all the queues? */
+ for (index = 0; index < enic->cq_count; index++) {
+ if (!enic->cq[index].ctrl)
+ break;
+ }
+
+ if (enic->cq_count != index)
+ return 0;
+
+ ret = enic_alloc_intr_resources(enic);
+ if (ret) {
+ dev_err(enic, "alloc intr failed\n");
+ return ret;
+ }
+ enic_init_vnic_resources(enic);
+
+ ret = enic_setup_finish(enic);
+ if (ret)
+ dev_err(enic, "setup could not be finished\n");
+
+ return ret;
+}
+
+static int enicpmd_dev_tx_queue_setup(struct rte_eth_dev *eth_dev,
+ uint16_t queue_idx,
+ uint16_t nb_desc,
+ unsigned int socket_id,
+ const struct rte_eth_txconf *tx_conf)
+{
+ int ret;
+ struct enic *enic = pmd_priv(eth_dev);
+
+ ENICPMD_FUNC_TRACE();
+ eth_dev->data->tx_queues[queue_idx] = (void *)&enic->wq[queue_idx];
+
+ ret = enic_alloc_wq(enic, queue_idx, socket_id, nb_desc);
+ if (ret) {
+ dev_err(enic, "error in allocating wq\n");
+ return ret;
+ }
+
+ return enicpmd_dev_setup_intr(enic);
+}
+
+static int enicpmd_dev_tx_queue_start(struct rte_eth_dev *eth_dev,
+ uint16_t queue_idx)
+{
+ struct enic *enic = pmd_priv(eth_dev);
+
+ ENICPMD_FUNC_TRACE();
+
+ enic_start_wq(enic, queue_idx);
+
+ return 0;
+}
+
+static int enicpmd_dev_tx_queue_stop(struct rte_eth_dev *eth_dev,
+ uint16_t queue_idx)
+{
+ int ret;
+ struct enic *enic = pmd_priv(eth_dev);
+
+ ENICPMD_FUNC_TRACE();
+
+ ret = enic_stop_wq(enic, queue_idx);
+ if (ret)
+ dev_err(enic, "error in stopping wq %d\n", queue_idx);
+
+ return ret;
+}
+
+static int enicpmd_dev_rx_queue_start(struct rte_eth_dev *eth_dev,
+ uint16_t queue_idx)
+{
+ struct enic *enic = pmd_priv(eth_dev);
+
+ ENICPMD_FUNC_TRACE();
+
+ enic_start_rq(enic, queue_idx);
+
+ return 0;
+}
+
+static int enicpmd_dev_rx_queue_stop(struct rte_eth_dev *eth_dev,
+ uint16_t queue_idx)
+{
+ int ret;
+ struct enic *enic = pmd_priv(eth_dev);
+
+ ENICPMD_FUNC_TRACE();
+
+ ret = enic_stop_rq(enic, queue_idx);
+ if (ret)
+ dev_err(enic, "error in stopping rq %d\n", queue_idx);
+
+ return ret;
+}
+
+static void enicpmd_dev_rx_queue_release(void *rxq)
+{
+ ENICPMD_FUNC_TRACE();
+ enic_free_rq(rxq);
+}
+
+static int enicpmd_dev_rx_queue_setup(struct rte_eth_dev *eth_dev,
+ uint16_t queue_idx,
+ uint16_t nb_desc,
+ unsigned int socket_id,
+ const struct rte_eth_rxconf *rx_conf,
+ struct rte_mempool *mp)
+{
+ int ret;
+ struct enic *enic = pmd_priv(eth_dev);
+
+ ENICPMD_FUNC_TRACE();
+ eth_dev->data->rx_queues[queue_idx] = (void *)&enic->rq[queue_idx];
+
+ ret = enic_alloc_rq(enic, queue_idx, socket_id, mp, nb_desc);
+ if (ret) {
+ dev_err(enic, "error in allocating rq\n");
+ return ret;
+ }
+
+ return enicpmd_dev_setup_intr(enic);
+}
+
+static int enicpmd_vlan_filter_set(struct rte_eth_dev *eth_dev,
+ uint16_t vlan_id, int on)
+{
+ struct enic *enic = pmd_priv(eth_dev);
+
+ ENICPMD_FUNC_TRACE();
+ if (on)
+ enic_add_vlan(enic, vlan_id);
+ else
+ enic_del_vlan(enic, vlan_id);
+ return 0;
+}
+
+static void enicpmd_vlan_offload_set(struct rte_eth_dev *eth_dev, int mask)
+{
+ struct enic *enic = pmd_priv(eth_dev);
+
+ ENICPMD_FUNC_TRACE();
+
+ if (mask & ETH_VLAN_STRIP_MASK) {
+ if (eth_dev->data->dev_conf.rxmode.hw_vlan_strip)
+ enic->ig_vlan_strip_en = 1;
+ else
+ enic->ig_vlan_strip_en = 0;
+ }
+ enic_set_rss_nic_cfg(enic);
+
+ if (mask & ETH_VLAN_FILTER_MASK) {
+ dev_warning(enic,
+ "Configuration of VLAN filter is not supported\n");
+ }
+
+ if (mask & ETH_VLAN_EXTEND_MASK) {
+ dev_warning(enic,
+ "Configuration of extended VLAN is not supported\n");
+ }
+}
+
+static int enicpmd_dev_configure(struct rte_eth_dev *eth_dev)
+{
+ int ret;
+ struct enic *enic = pmd_priv(eth_dev);
+
+ ENICPMD_FUNC_TRACE();
+ ret = enic_set_vnic_res(enic);
+ if (ret) {
+ dev_err(enic, "Set vNIC resource num failed, aborting\n");
+ return ret;
+ }
+
+ if (eth_dev->data->dev_conf.rxmode.split_hdr_size &&
+ eth_dev->data->dev_conf.rxmode.header_split) {
+ /* Enable header-data-split */
+ enic_set_hdr_split_size(enic,
+ eth_dev->data->dev_conf.rxmode.split_hdr_size);
+ }
+
+ enic->hw_ip_checksum = eth_dev->data->dev_conf.rxmode.hw_ip_checksum;
+ return 0;
+}
+
+/* Start the device.
+ * It returns 0 on success.
+ */
+static int enicpmd_dev_start(struct rte_eth_dev *eth_dev)
+{
+ struct enic *enic = pmd_priv(eth_dev);
+
+ ENICPMD_FUNC_TRACE();
+ return enic_enable(enic);
+}
+
+/*
+ * Stop device: disable rx and tx functions to allow for reconfiguring.
+ */
+static void enicpmd_dev_stop(struct rte_eth_dev *eth_dev)
+{
+ struct rte_eth_link link;
+ struct enic *enic = pmd_priv(eth_dev);
+
+ ENICPMD_FUNC_TRACE();
+ enic_disable(enic);
+ memset(&link, 0, sizeof(link));
+ rte_atomic64_cmpset((uint64_t *)ð_dev->data->dev_link,
+ *(uint64_t *)ð_dev->data->dev_link,
+ *(uint64_t *)&link);
+}
+
+/*
+ * Close device.
+ */
+static void enicpmd_dev_close(struct rte_eth_dev *eth_dev)
+{
+ struct enic *enic = pmd_priv(eth_dev);
+
+ ENICPMD_FUNC_TRACE();
+ enic_remove(enic);
+}
+
+static int enicpmd_dev_link_update(struct rte_eth_dev *eth_dev,
+ int wait_to_complete)
+{
+ struct enic *enic = pmd_priv(eth_dev);
+ int ret;
+ int link_status = 0;
+
+ ENICPMD_FUNC_TRACE();
+ link_status = enic_get_link_status(enic);
+ ret = (link_status == enic->link_status);
+ enic->link_status = link_status;
+ eth_dev->data->dev_link.link_status = link_status;
+ eth_dev->data->dev_link.link_duplex = ETH_LINK_FULL_DUPLEX;
+ eth_dev->data->dev_link.link_speed = vnic_dev_port_speed(enic->vdev);
+ return ret;
+}
+
+static void enicpmd_dev_stats_get(struct rte_eth_dev *eth_dev,
+ struct rte_eth_stats *stats)
+{
+ struct enic *enic = pmd_priv(eth_dev);
+
+ ENICPMD_FUNC_TRACE();
+ enic_dev_stats_get(enic, stats);
+}
+
+static void enicpmd_dev_stats_reset(struct rte_eth_dev *eth_dev)
+{
+ struct enic *enic = pmd_priv(eth_dev);
+
+ ENICPMD_FUNC_TRACE();
+ enic_dev_stats_clear(enic);
+}
+
+static void enicpmd_dev_info_get(struct rte_eth_dev *eth_dev,
+ struct rte_eth_dev_info *device_info)
+{
+ struct enic *enic = pmd_priv(eth_dev);
+
+ ENICPMD_FUNC_TRACE();
+ device_info->max_rx_queues = enic->rq_count;
+ device_info->max_tx_queues = enic->wq_count;
+ device_info->min_rx_bufsize = ENIC_MIN_MTU;
+ device_info->max_rx_pktlen = enic->config.mtu;
+ device_info->max_mac_addrs = 1;
+ device_info->rx_offload_capa =
+ DEV_RX_OFFLOAD_VLAN_STRIP |
+ DEV_RX_OFFLOAD_IPV4_CKSUM |
+ DEV_RX_OFFLOAD_UDP_CKSUM |
+ DEV_RX_OFFLOAD_TCP_CKSUM;
+ device_info->tx_offload_capa =
+ DEV_TX_OFFLOAD_VLAN_INSERT |
+ DEV_TX_OFFLOAD_IPV4_CKSUM |
+ DEV_TX_OFFLOAD_UDP_CKSUM |
+ DEV_TX_OFFLOAD_TCP_CKSUM;
+}
+
+static void enicpmd_dev_promiscuous_enable(struct rte_eth_dev *eth_dev)
+{
+ struct enic *enic = pmd_priv(eth_dev);
+
+ ENICPMD_FUNC_TRACE();
+ enic->promisc = 1;
+ enic_add_packet_filter(enic);
+}
+
+static void enicpmd_dev_promiscuous_disable(struct rte_eth_dev *eth_dev)
+{
+ struct enic *enic = pmd_priv(eth_dev);
+
+ ENICPMD_FUNC_TRACE();
+ enic->promisc = 0;
+ enic_add_packet_filter(enic);
+}
+
+static void enicpmd_dev_allmulticast_enable(struct rte_eth_dev *eth_dev)
+{
+ struct enic *enic = pmd_priv(eth_dev);
+
+ ENICPMD_FUNC_TRACE();
+ enic->allmulti = 1;
+ enic_add_packet_filter(enic);
+}
+
+static void enicpmd_dev_allmulticast_disable(struct rte_eth_dev *eth_dev)
+{
+ struct enic *enic = pmd_priv(eth_dev);
+
+ ENICPMD_FUNC_TRACE();
+ enic->allmulti = 0;
+ enic_add_packet_filter(enic);
+}
+
+static void enicpmd_add_mac_addr(struct rte_eth_dev *eth_dev,
+ struct ether_addr *mac_addr,
+ uint32_t index, uint32_t pool)
+{
+ struct enic *enic = pmd_priv(eth_dev);
+
+ ENICPMD_FUNC_TRACE();
+ enic_set_mac_address(enic, mac_addr->addr_bytes);
+}
+
+static void enicpmd_remove_mac_addr(struct rte_eth_dev *eth_dev, uint32_t index)
+{
+ struct enic *enic = pmd_priv(eth_dev);
+
+ ENICPMD_FUNC_TRACE();
+ enic_del_mac_address(enic);
+}
+
+
+static uint16_t enicpmd_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
+ uint16_t nb_pkts)
+{
+ unsigned int index;
+ unsigned int frags;
+ unsigned int pkt_len;
+ unsigned int seg_len;
+ unsigned int inc_len;
+ unsigned int nb_segs;
+ struct rte_mbuf *tx_pkt;
+ struct vnic_wq *wq = (struct vnic_wq *)tx_queue;
+ struct enic *enic = vnic_dev_priv(wq->vdev);
+ unsigned char *buf;
+ unsigned short vlan_id;
+ unsigned short ol_flags;
+
+ for (index = 0; index < nb_pkts; index++) {
+ tx_pkt = *tx_pkts++;
+ inc_len = 0;
+ nb_segs = tx_pkt->nb_segs;
+ if (nb_segs > vnic_wq_desc_avail(wq)) {
+ /* wq cleanup and try again */
+ if (!enic_cleanup_wq(enic, wq) ||
+ (nb_segs > vnic_wq_desc_avail(wq)))
+ return index;
+ }
+ pkt_len = tx_pkt->pkt_len;
+ vlan_id = tx_pkt->vlan_tci;
+ ol_flags = tx_pkt->ol_flags;
+ for (frags = 0; inc_len < pkt_len; frags++) {
+ if (!tx_pkt)
+ break;
+ seg_len = tx_pkt->data_len;
+ inc_len += seg_len;
+ if (enic_send_pkt(enic, wq, tx_pkt,
+ (unsigned short)seg_len, !frags,
+ (pkt_len == inc_len), ol_flags, vlan_id)) {
+ break;
+ }
+ tx_pkt = tx_pkt->next;
+ }
+ }
+
+ enic_cleanup_wq(enic, wq);
+ return index;
+}
+
+static uint16_t enicpmd_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
+ uint16_t nb_pkts)
+{
+	struct vnic_rq *rq = (struct vnic_rq *)rx_queue;
+	struct enic *enic = vnic_dev_priv(rq->vdev);
+	unsigned int work_done;
+
+ if (enic_poll(rq, rx_pkts, (unsigned int)nb_pkts, &work_done))
+ dev_err(enic, "error in enicpmd poll\n");
+
+ return work_done;
+}
+
+static struct eth_dev_ops enicpmd_eth_dev_ops = {
+ .dev_configure = enicpmd_dev_configure,
+ .dev_start = enicpmd_dev_start,
+ .dev_stop = enicpmd_dev_stop,
+ .dev_set_link_up = NULL,
+ .dev_set_link_down = NULL,
+ .dev_close = enicpmd_dev_close,
+ .promiscuous_enable = enicpmd_dev_promiscuous_enable,
+ .promiscuous_disable = enicpmd_dev_promiscuous_disable,
+ .allmulticast_enable = enicpmd_dev_allmulticast_enable,
+ .allmulticast_disable = enicpmd_dev_allmulticast_disable,
+ .link_update = enicpmd_dev_link_update,
+ .stats_get = enicpmd_dev_stats_get,
+ .stats_reset = enicpmd_dev_stats_reset,
+ .queue_stats_mapping_set = NULL,
+ .dev_infos_get = enicpmd_dev_info_get,
+ .mtu_set = NULL,
+ .vlan_filter_set = enicpmd_vlan_filter_set,
+ .vlan_tpid_set = NULL,
+ .vlan_offload_set = enicpmd_vlan_offload_set,
+ .vlan_strip_queue_set = NULL,
+ .rx_queue_start = enicpmd_dev_rx_queue_start,
+ .rx_queue_stop = enicpmd_dev_rx_queue_stop,
+ .tx_queue_start = enicpmd_dev_tx_queue_start,
+ .tx_queue_stop = enicpmd_dev_tx_queue_stop,
+ .rx_queue_setup = enicpmd_dev_rx_queue_setup,
+ .rx_queue_release = enicpmd_dev_rx_queue_release,
+ .rx_queue_count = NULL,
+ .rx_descriptor_done = NULL,
+ .tx_queue_setup = enicpmd_dev_tx_queue_setup,
+ .tx_queue_release = enicpmd_dev_tx_queue_release,
+ .dev_led_on = NULL,
+ .dev_led_off = NULL,
+ .flow_ctrl_get = NULL,
+ .flow_ctrl_set = NULL,
+ .priority_flow_ctrl_set = NULL,
+ .mac_addr_add = enicpmd_add_mac_addr,
+ .mac_addr_remove = enicpmd_remove_mac_addr,
+ .fdir_add_signature_filter = NULL,
+ .fdir_update_signature_filter = NULL,
+ .fdir_remove_signature_filter = NULL,
+ .fdir_infos_get = enicpmd_fdir_info_get,
+ .fdir_add_perfect_filter = enicpmd_fdir_add_perfect_filter,
+ .fdir_update_perfect_filter = enicpmd_fdir_add_perfect_filter,
+ .fdir_remove_perfect_filter = enicpmd_fdir_remove_perfect_filter,
+ .fdir_set_masks = NULL,
+};
+
+struct enic *enicpmd_list_head = NULL;
+/* Initialize the driver
+ * It returns 0 on success.
+ */
+static int eth_enicpmd_dev_init(
+ __attribute__((unused))struct eth_driver *eth_drv,
+ struct rte_eth_dev *eth_dev)
+{
+ struct rte_pci_device *pdev;
+ struct rte_pci_addr *addr;
+ struct enic *enic = pmd_priv(eth_dev);
+
+ ENICPMD_FUNC_TRACE();
+
+ enic->rte_dev = eth_dev;
+ eth_dev->dev_ops = &enicpmd_eth_dev_ops;
+ eth_dev->rx_pkt_burst = &enicpmd_recv_pkts;
+ eth_dev->tx_pkt_burst = &enicpmd_xmit_pkts;
+
+ pdev = eth_dev->pci_dev;
+ enic->pdev = pdev;
+ addr = &pdev->addr;
+
+ snprintf(enic->bdf_name, ENICPMD_BDF_LENGTH, "%04x:%02x:%02x.%x",
+ addr->domain, addr->bus, addr->devid, addr->function);
+
+ return enic_probe(enic);
+}
+
+static struct eth_driver rte_enic_pmd = {
+ {
+ .name = "rte_enic_pmd",
+ .id_table = pci_id_enic_map,
+ .drv_flags = RTE_PCI_DRV_NEED_MAPPING,
+ },
+ .eth_dev_init = eth_enicpmd_dev_init,
+ .dev_private_size = sizeof(struct enic),
+};
+
+/* Driver initialization routine.
+ * Invoked once at EAL init time.
+ * Register as the [Poll Mode] Driver of Cisco ENIC device.
+ */
+int rte_enic_pmd_init(const char *name __rte_unused,
+ const char *params __rte_unused)
+{
+ ENICPMD_FUNC_TRACE();
+
+ rte_eth_driver_register(&rte_enic_pmd);
+ return 0;
+}
+
+static struct rte_driver rte_enic_driver = {
+ .type = PMD_PDEV,
+ .init = rte_enic_pmd_init,
+};
+
+PMD_REGISTER_DRIVER(rte_enic_driver);
+
--
1.9.1
^ permalink raw reply [flat|nested] 36+ messages in thread
* [dpdk-dev] [PATCH v6 6/6] enicpmd: DPDK changes for accommodating ENIC PMD
2014-11-25 17:26 [dpdk-dev] [PATCH v6 0/6] enicpmd: Cisco Systems Inc. VIC Ethernet PMD Sujith Sankar
` (4 preceding siblings ...)
2014-11-25 17:26 ` [dpdk-dev] [PATCH v6 5/6] enicpmd: DPDK-ENIC PMD interface Sujith Sankar
@ 2014-11-25 17:26 ` Sujith Sankar
2014-11-25 19:51 ` [dpdk-dev] [PATCH v6 0/6] enicpmd: Cisco Systems Inc. VIC Ethernet PMD David Marchand
2014-11-25 20:11 ` Neil Horman
7 siblings, 0 replies; 36+ messages in thread
From: Sujith Sankar @ 2014-11-25 17:26 UTC (permalink / raw)
To: dev; +Cc: prrao
Signed-off-by: Sujith Sankar <ssujith@cisco.com>
---
config/common_linuxapp | 5 +++++
lib/Makefile | 1 +
mk/rte.app.mk | 4 ++++
3 files changed, 10 insertions(+)
diff --git a/config/common_linuxapp b/config/common_linuxapp
index 6243d4b..51edbd9 100644
--- a/config/common_linuxapp
+++ b/config/common_linuxapp
@@ -210,6 +210,11 @@ CONFIG_RTE_LIBRTE_I40E_QUEUE_NUM_PER_VM=4
CONFIG_RTE_LIBRTE_I40E_ITR_INTERVAL=-1
#
+# Compile burst-oriented Cisco ENIC PMD driver
+#
+CONFIG_RTE_LIBRTE_ENIC_PMD=y
+
+#
# Compile burst-oriented VIRTIO PMD driver
#
CONFIG_RTE_LIBRTE_VIRTIO_PMD=y
diff --git a/lib/Makefile b/lib/Makefile
index 204ef11..53e55d9 100644
--- a/lib/Makefile
+++ b/lib/Makefile
@@ -43,6 +43,7 @@ DIRS-$(CONFIG_RTE_LIBRTE_CMDLINE) += librte_cmdline
DIRS-$(CONFIG_RTE_LIBRTE_ETHER) += librte_ether
DIRS-$(CONFIG_RTE_LIBRTE_E1000_PMD) += librte_pmd_e1000
DIRS-$(CONFIG_RTE_LIBRTE_IXGBE_PMD) += librte_pmd_ixgbe
+DIRS-$(CONFIG_RTE_LIBRTE_ENIC_PMD) += librte_pmd_enic
DIRS-$(CONFIG_RTE_LIBRTE_I40E_PMD) += librte_pmd_i40e
DIRS-$(CONFIG_RTE_LIBRTE_PMD_BOND) += librte_pmd_bond
DIRS-$(CONFIG_RTE_LIBRTE_PMD_RING) += librte_pmd_ring
diff --git a/mk/rte.app.mk b/mk/rte.app.mk
index 59468b0..bef823b 100644
--- a/mk/rte.app.mk
+++ b/mk/rte.app.mk
@@ -186,6 +186,10 @@ ifeq ($(CONFIG_RTE_LIBRTE_VMXNET3_PMD),y)
LDLIBS += -lrte_pmd_vmxnet3_uio
endif
+ifeq ($(CONFIG_RTE_LIBRTE_ENIC_PMD),y)
+LDLIBS += -lrte_pmd_enic
+endif
+
ifeq ($(CONFIG_RTE_LIBRTE_VIRTIO_PMD),y)
LDLIBS += -lrte_pmd_virtio_uio
endif
--
1.9.1
* Re: [dpdk-dev] [PATCH v6 0/6] enicpmd: Cisco Systems Inc. VIC Ethernet PMD
2014-11-25 17:26 [dpdk-dev] [PATCH v6 0/6] enicpmd: Cisco Systems Inc. VIC Ethernet PMD Sujith Sankar
` (5 preceding siblings ...)
2014-11-25 17:26 ` [dpdk-dev] [PATCH v6 6/6] enicpmd: DPDK changes for accommodating ENIC PMD Sujith Sankar
@ 2014-11-25 19:51 ` David Marchand
2014-11-26 22:11 ` Thomas Monjalon
2014-11-25 20:11 ` Neil Horman
7 siblings, 1 reply; 36+ messages in thread
From: David Marchand @ 2014-11-25 19:51 UTC (permalink / raw)
To: Sujith Sankar; +Cc: dev, Prasad Rao (prrao)
On Tue, Nov 25, 2014 at 6:26 PM, Sujith Sankar <ssujith@cisco.com> wrote:
> ENIC PMD is the poll-mode driver for the Cisco Systems Inc. VIC to be
> used with DPDK suite.
>
> Sujith Sankar (6):
> enicpmd: License text
> enicpmd: Makefile
> enicpmd: VNIC common code partially shared with ENIC kernel mode
> driver
> enicpmd: pmd specific code
> enicpmd: DPDK-ENIC PMD interface
> enicpmd: DPDK changes for accommodating ENIC PMD
>
Acked-by: David Marchand <david.marchand@6wind.com>
Thanks Sujith.
--
David Marchand
* Re: [dpdk-dev] [PATCH v6 0/6] enicpmd: Cisco Systems Inc. VIC Ethernet PMD
2014-11-25 17:26 [dpdk-dev] [PATCH v6 0/6] enicpmd: Cisco Systems Inc. VIC Ethernet PMD Sujith Sankar
` (6 preceding siblings ...)
2014-11-25 19:51 ` [dpdk-dev] [PATCH v6 0/6] enicpmd: Cisco Systems Inc. VIC Ethernet PMD David Marchand
@ 2014-11-25 20:11 ` Neil Horman
7 siblings, 0 replies; 36+ messages in thread
From: Neil Horman @ 2014-11-25 20:11 UTC (permalink / raw)
To: Sujith Sankar; +Cc: dev, prrao
On Tue, Nov 25, 2014 at 10:56:39PM +0530, Sujith Sankar wrote:
> ENIC PMD is the poll-mode driver for the Cisco Systems Inc. VIC to be
> used with DPDK suite.
>
> Sujith Sankar (6):
> enicpmd: License text
> enicpmd: Makefile
> enicpmd: VNIC common code partially shared with ENIC kernel mode
> driver
> enicpmd: pmd specific code
> enicpmd: DPDK-ENIC PMD interface
> enicpmd: DPDK changes for accommodating ENIC PMD
>
> config/common_linuxapp | 5 +
> lib/Makefile | 1 +
> lib/librte_pmd_enic/LICENSE | 27 +
> lib/librte_pmd_enic/Makefile | 67 ++
> lib/librte_pmd_enic/enic.h | 157 ++++
> lib/librte_pmd_enic/enic_clsf.c | 244 ++++++
> lib/librte_pmd_enic/enic_compat.h | 142 ++++
> lib/librte_pmd_enic/enic_etherdev.c | 613 +++++++++++++++
> lib/librte_pmd_enic/enic_main.c | 1266 ++++++++++++++++++++++++++++++
> lib/librte_pmd_enic/enic_res.c | 221 ++++++
> lib/librte_pmd_enic/enic_res.h | 168 ++++
> lib/librte_pmd_enic/vnic/cq_desc.h | 126 +++
> lib/librte_pmd_enic/vnic/cq_enet_desc.h | 261 ++++++
> lib/librte_pmd_enic/vnic/rq_enet_desc.h | 76 ++
> lib/librte_pmd_enic/vnic/vnic_cq.c | 117 +++
> lib/librte_pmd_enic/vnic/vnic_cq.h | 152 ++++
> lib/librte_pmd_enic/vnic/vnic_dev.c | 1063 +++++++++++++++++++++++++
> lib/librte_pmd_enic/vnic/vnic_dev.h | 203 +++++
> lib/librte_pmd_enic/vnic/vnic_devcmd.h | 774 ++++++++++++++++++
> lib/librte_pmd_enic/vnic/vnic_enet.h | 78 ++
> lib/librte_pmd_enic/vnic/vnic_intr.c | 83 ++
> lib/librte_pmd_enic/vnic/vnic_intr.h | 126 +++
> lib/librte_pmd_enic/vnic/vnic_nic.h | 88 +++
> lib/librte_pmd_enic/vnic/vnic_resource.h | 97 +++
> lib/librte_pmd_enic/vnic/vnic_rq.c | 246 ++++++
> lib/librte_pmd_enic/vnic/vnic_rq.h | 282 +++++++
> lib/librte_pmd_enic/vnic/vnic_rss.c | 85 ++
> lib/librte_pmd_enic/vnic/vnic_rss.h | 61 ++
> lib/librte_pmd_enic/vnic/vnic_stats.h | 86 ++
> lib/librte_pmd_enic/vnic/vnic_wq.c | 245 ++++++
> lib/librte_pmd_enic/vnic/vnic_wq.h | 283 +++++++
> lib/librte_pmd_enic/vnic/wq_enet_desc.h | 114 +++
> mk/rte.app.mk | 4 +
> 33 files changed, 7561 insertions(+)
> create mode 100644 lib/librte_pmd_enic/LICENSE
> create mode 100644 lib/librte_pmd_enic/Makefile
> create mode 100644 lib/librte_pmd_enic/enic.h
> create mode 100644 lib/librte_pmd_enic/enic_clsf.c
> create mode 100644 lib/librte_pmd_enic/enic_compat.h
> create mode 100644 lib/librte_pmd_enic/enic_etherdev.c
> create mode 100644 lib/librte_pmd_enic/enic_main.c
> create mode 100644 lib/librte_pmd_enic/enic_res.c
> create mode 100644 lib/librte_pmd_enic/enic_res.h
> create mode 100644 lib/librte_pmd_enic/vnic/cq_desc.h
> create mode 100644 lib/librte_pmd_enic/vnic/cq_enet_desc.h
> create mode 100644 lib/librte_pmd_enic/vnic/rq_enet_desc.h
> create mode 100644 lib/librte_pmd_enic/vnic/vnic_cq.c
> create mode 100644 lib/librte_pmd_enic/vnic/vnic_cq.h
> create mode 100644 lib/librte_pmd_enic/vnic/vnic_dev.c
> create mode 100644 lib/librte_pmd_enic/vnic/vnic_dev.h
> create mode 100644 lib/librte_pmd_enic/vnic/vnic_devcmd.h
> create mode 100644 lib/librte_pmd_enic/vnic/vnic_enet.h
> create mode 100644 lib/librte_pmd_enic/vnic/vnic_intr.c
> create mode 100644 lib/librte_pmd_enic/vnic/vnic_intr.h
> create mode 100644 lib/librte_pmd_enic/vnic/vnic_nic.h
> create mode 100644 lib/librte_pmd_enic/vnic/vnic_resource.h
> create mode 100644 lib/librte_pmd_enic/vnic/vnic_rq.c
> create mode 100644 lib/librte_pmd_enic/vnic/vnic_rq.h
> create mode 100644 lib/librte_pmd_enic/vnic/vnic_rss.c
> create mode 100644 lib/librte_pmd_enic/vnic/vnic_rss.h
> create mode 100644 lib/librte_pmd_enic/vnic/vnic_stats.h
> create mode 100644 lib/librte_pmd_enic/vnic/vnic_wq.c
> create mode 100644 lib/librte_pmd_enic/vnic/vnic_wq.h
> create mode 100644 lib/librte_pmd_enic/vnic/wq_enet_desc.h
>
> --
> 1.9.1
>
>
Acked-by: Neil Horman <nhorman@tuxdriver.com>
* Re: [dpdk-dev] [PATCH v6 0/6] enicpmd: Cisco Systems Inc. VIC Ethernet PMD
2014-11-25 19:51 ` [dpdk-dev] [PATCH v6 0/6] enicpmd: Cisco Systems Inc. VIC Ethernet PMD David Marchand
@ 2014-11-26 22:11 ` Thomas Monjalon
2014-11-27 4:27 ` Sujith Sankar (ssujith)
0 siblings, 1 reply; 36+ messages in thread
From: Thomas Monjalon @ 2014-11-26 22:11 UTC (permalink / raw)
To: dev, Sujith Sankar; +Cc: Prasad Rao (prrao)
> > ENIC PMD is the poll-mode driver for the Cisco Systems Inc. VIC to be
> > used with DPDK suite.
> >
> > Sujith Sankar (6):
> > enicpmd: License text
> > enicpmd: Makefile
> > enicpmd: VNIC common code partially shared with ENIC kernel mode
> > driver
> > enicpmd: pmd specific code
> > enicpmd: DPDK-ENIC PMD interface
> > enicpmd: DPDK changes for accommodating ENIC PMD
> >
>
> Acked-by: David Marchand <david.marchand@6wind.com>
>
> Thanks Sujith.
Applied and enabled in BSD configuration (not tested).
It would be nice to have some documentation for this driver now.
Thanks
--
Thomas
* Re: [dpdk-dev] [PATCH v6 0/6] enicpmd: Cisco Systems Inc. VIC Ethernet PMD
2014-11-26 22:11 ` Thomas Monjalon
@ 2014-11-27 4:27 ` Sujith Sankar (ssujith)
2014-11-27 15:31 ` Thomas Monjalon
0 siblings, 1 reply; 36+ messages in thread
From: Sujith Sankar (ssujith) @ 2014-11-27 4:27 UTC (permalink / raw)
To: Thomas Monjalon, dev; +Cc: Prasad Rao (prrao)
Thanks Thomas, David and Neil!
I shall work on finishing the documentation.
About that, you had mentioned that you wanted it in doc/drivers/ path.
Could I send a patch with the documentation in the path doc/drivers/enicpmd/?
Thanks,
-Sujith
On 27/11/14 3:41 am, "Thomas Monjalon" <thomas.monjalon@6wind.com> wrote:
>> > ENIC PMD is the poll-mode driver for the Cisco Systems Inc. VIC to be
>> > used with DPDK suite.
>> >
>> > Sujith Sankar (6):
>> > enicpmd: License text
>> > enicpmd: Makefile
>> > enicpmd: VNIC common code partially shared with ENIC kernel mode
>> > driver
>> > enicpmd: pmd specific code
>> > enicpmd: DPDK-ENIC PMD interface
>> > enicpmd: DPDK changes for accommodating ENIC PMD
>> >
>>
>> Acked-by: David Marchand <david.marchand@6wind.com>
>>
>> Thanks Sujith.
>
>Applied and enabled in BSD configuration (not tested).
>
>It would be nice to have some documentation for this driver now.
>
>Thanks
>--
>Thomas
* Re: [dpdk-dev] [PATCH v6 4/6] enicpmd: pmd specific code
2014-11-25 17:26 ` [dpdk-dev] [PATCH v6 4/6] enicpmd: pmd specific code Sujith Sankar
@ 2014-11-27 14:49 ` Wodkowski, PawelX
0 siblings, 0 replies; 36+ messages in thread
From: Wodkowski, PawelX @ 2014-11-27 14:49 UTC (permalink / raw)
To: Sujith Sankar, dev, Thomas Monjalon; +Cc: prrao
> diff --git a/lib/librte_pmd_enic/enic_main.c b/lib/librte_pmd_enic/enic_main.c
> new file mode 100644
> index 0000000..c047cc8
> --- /dev/null
> +++ b/lib/librte_pmd_enic/enic_main.c
> @@ -0,0 +1,1266 @@
> +/*
> + * Copyright 2008-2014 Cisco Systems, Inc. All rights reserved.
> +#ident "$Id$"
> +
> +#include <stdio.h>
> +
> +#include <sys/stat.h>
> +#include <sys/mman.h>
> +#include <fcntl.h>
> +#include <libgen.h>
> +#ifdef RTE_EAL_VFIO
> +#include <linux/vfio.h>
> +#endif
> +
This gives a compilation error: lib/librte_pmd_enic/enic_main.c:43:24: fatal error: linux/vfio.h: No such file or directory
Ubuntu 12.04
Linux grizzly 3.11.0-26-generic #45~precise1-Ubuntu SMP Tue Jul 15 04:02:35 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux
lib/librte_eal/linuxapp/eal/eal_vfio.h gives some tips on how vfio might be included.
Pawel
^ permalink raw reply [flat|nested] 36+ messages in thread
* Re: [dpdk-dev] [PATCH v6 0/6] enicpmd: Cisco Systems Inc. VIC Ethernet PMD
2014-11-27 4:27 ` Sujith Sankar (ssujith)
@ 2014-11-27 15:31 ` Thomas Monjalon
2015-01-20 11:25 ` David Marchand
0 siblings, 1 reply; 36+ messages in thread
From: Thomas Monjalon @ 2014-11-27 15:31 UTC (permalink / raw)
To: Sujith Sankar (ssujith); +Cc: dev, Prasad Rao (prrao)
2014-11-27 04:27, Sujith Sankar:
> Thanks Thomas, David and Neil !
>
> I shall work on finishing the documentation.
> About that, you had mentioned that you wanted it in doc/drivers/ path.
> Could I send a patch with documentation in the path doc/drivers/enicpmd/ ?
Yes.
I'd prefer doc/drivers/enic/ but it's a detail ;)
The format must be sphinx rst to allow web publishing.
It would be great to have some design documentation of every drivers
in doc/drivers.
Thanks
--
Thomas
* Re: [dpdk-dev] [PATCH v6 5/6] enicpmd: DPDK-ENIC PMD interface
2014-11-25 17:26 ` [dpdk-dev] [PATCH v6 5/6] enicpmd: DPDK-ENIC PMD interface Sujith Sankar
@ 2014-12-29 8:15 ` Wu, Jingjing
2014-12-30 4:45 ` Sujith Sankar (ssujith)
0 siblings, 1 reply; 36+ messages in thread
From: Wu, Jingjing @ 2014-12-29 8:15 UTC (permalink / raw)
To: Sujith Sankar, dev; +Cc: prrao
Hi, ssujith
> + .tx_queue_release = enicpmd_dev_tx_queue_release,
> + .dev_led_on = NULL,
> + .dev_led_off = NULL,
> + .flow_ctrl_get = NULL,
> + .flow_ctrl_set = NULL,
> + .priority_flow_ctrl_set = NULL,
> + .mac_addr_add = enicpmd_add_mac_addr,
> + .mac_addr_remove = enicpmd_remove_mac_addr,
> + .fdir_add_signature_filter = NULL,
> + .fdir_update_signature_filter = NULL,
> + .fdir_remove_signature_filter = NULL,
> + .fdir_infos_get = enicpmd_fdir_info_get,
> + .fdir_add_perfect_filter = enicpmd_fdir_add_perfect_filter,
> + .fdir_update_perfect_filter = enicpmd_fdir_add_perfect_filter,
> + .fdir_remove_perfect_filter = enicpmd_fdir_remove_perfect_filter,
> + .fdir_set_masks = NULL,
> +};
> +
I found that perfect fdir is also supported in the enic driver.
During the R1.8 development, we defined a new dev_ops callback, filter_ctrl, which can be used to control all kinds of filters, flow director included. It is described in http://www.dpdk.org/ml/archives/dev/2014-September/005179.html .
In R1.8, filter_ctrl is only used by the i40e driver, and we also plan to use it in the existing ixgbe/e1000 drivers soon. The old APIs such as fdir_add_perfect_filter and fdir_remove_perfect_filter can then be replaced.
So, do you have any plan to migrate the fdir support in enic to the filter_ctrl API?
Jingjing
Thanks!
> +struct enic *enicpmd_list_head = NULL;
> +/* Initialize the driver
> + * It returns 0 on success.
> + */
> +static int eth_enicpmd_dev_init(
> + __attribute__((unused))struct eth_driver *eth_drv,
> + struct rte_eth_dev *eth_dev)
> +{
> + struct rte_pci_device *pdev;
> + struct rte_pci_addr *addr;
> + struct enic *enic = pmd_priv(eth_dev);
> +
> + ENICPMD_FUNC_TRACE();
> +
> + enic->rte_dev = eth_dev;
> + eth_dev->dev_ops = &enicpmd_eth_dev_ops;
> + eth_dev->rx_pkt_burst = &enicpmd_recv_pkts;
> + eth_dev->tx_pkt_burst = &enicpmd_xmit_pkts;
> +
> + pdev = eth_dev->pci_dev;
> + enic->pdev = pdev;
> + addr = &pdev->addr;
> +
> + snprintf(enic->bdf_name, ENICPMD_BDF_LENGTH,
> "%04x:%02x:%02x.%x",
> + addr->domain, addr->bus, addr->devid, addr->function);
> +
> + return enic_probe(enic);
> +}
> +
> +static struct eth_driver rte_enic_pmd = {
> + {
> + .name = "rte_enic_pmd",
> + .id_table = pci_id_enic_map,
> + .drv_flags = RTE_PCI_DRV_NEED_MAPPING,
> + },
> + .eth_dev_init = eth_enicpmd_dev_init,
> + .dev_private_size = sizeof(struct enic), };
> +
> +/* Driver initialization routine.
> + * Invoked once at EAL init time.
> + * Register as the [Poll Mode] Driver of Cisco ENIC device.
> + */
> +int rte_enic_pmd_init(const char *name __rte_unused,
> + const char *params __rte_unused)
> +{
> + ENICPMD_FUNC_TRACE();
> +
> + rte_eth_driver_register(&rte_enic_pmd);
> + return 0;
> +}
> +
> +static struct rte_driver rte_enic_driver = {
> + .type = PMD_PDEV,
> + .init = rte_enic_pmd_init,
> +};
> +
> +PMD_REGISTER_DRIVER(rte_enic_driver);
> +
> --
> 1.9.1
* Re: [dpdk-dev] [PATCH v6 5/6] enicpmd: DPDK-ENIC PMD interface
2014-12-29 8:15 ` Wu, Jingjing
@ 2014-12-30 4:45 ` Sujith Sankar (ssujith)
2015-01-06 9:41 ` Thomas Monjalon
2015-01-30 8:53 ` Wu, Jingjing
0 siblings, 2 replies; 36+ messages in thread
From: Sujith Sankar (ssujith) @ 2014-12-30 4:45 UTC (permalink / raw)
To: Wu, Jingjing, dev; +Cc: Prasad Rao (prrao)
On 29/12/14 1:45 pm, "Wu, Jingjing" <jingjing.wu@intel.com> wrote:
>Hi, ssujith
>
>> + .tx_queue_release = enicpmd_dev_tx_queue_release,
>> + .dev_led_on = NULL,
>> + .dev_led_off = NULL,
>> + .flow_ctrl_get = NULL,
>> + .flow_ctrl_set = NULL,
>> + .priority_flow_ctrl_set = NULL,
>> + .mac_addr_add = enicpmd_add_mac_addr,
>> + .mac_addr_remove = enicpmd_remove_mac_addr,
>> + .fdir_add_signature_filter = NULL,
>> + .fdir_update_signature_filter = NULL,
>> + .fdir_remove_signature_filter = NULL,
>> + .fdir_infos_get = enicpmd_fdir_info_get,
>> + .fdir_add_perfect_filter = enicpmd_fdir_add_perfect_filter,
>> + .fdir_update_perfect_filter = enicpmd_fdir_add_perfect_filter,
>> + .fdir_remove_perfect_filter = enicpmd_fdir_remove_perfect_filter,
>> + .fdir_set_masks = NULL,
>> +};
>> +
>
>I found that in perfect fdir is also supported in enic driver.
>
>During the R1.8 development, we defined a new dev_ops call filter_ctrl,
>which can be used to control kinds of filters, flow director is included
>too. Which is mentioned in
>http://www.dpdk.org/ml/archives/dev/2014-September/005179.html .
>In R1.8, filter_ctrl is only used by i40e driver. And we also planned use
>it in the existing ixgbe/e1000 driver in the next days. The old APIs such
>as fdir_add_perfect_filter, fdir_remove_perfect_filter can be replaced
>then.
Hi Jingjing,
Thanks for the info and the link. I shall take a look at it.
It looks like it brings in one interface for all filter-related operations.
I believe ENIC should also move to it.
Thanks,
-Sujith
>
>So, do you have any plan to migrate the fdir in enic to the filter_ctrl
>API?
>
>Jingjing
>
>Thanks!
>
>> +struct enic *enicpmd_list_head = NULL;
>> +/* Initialize the driver
>> + * It returns 0 on success.
>> + */
>> +static int eth_enicpmd_dev_init(
>> + __attribute__((unused))struct eth_driver *eth_drv,
>> + struct rte_eth_dev *eth_dev)
>> +{
>> + struct rte_pci_device *pdev;
>> + struct rte_pci_addr *addr;
>> + struct enic *enic = pmd_priv(eth_dev);
>> +
>> + ENICPMD_FUNC_TRACE();
>> +
>> + enic->rte_dev = eth_dev;
>> + eth_dev->dev_ops = &enicpmd_eth_dev_ops;
>> + eth_dev->rx_pkt_burst = &enicpmd_recv_pkts;
>> + eth_dev->tx_pkt_burst = &enicpmd_xmit_pkts;
>> +
>> + pdev = eth_dev->pci_dev;
>> + enic->pdev = pdev;
>> + addr = &pdev->addr;
>> +
>> + snprintf(enic->bdf_name, ENICPMD_BDF_LENGTH,
>> "%04x:%02x:%02x.%x",
>> + addr->domain, addr->bus, addr->devid, addr->function);
>> +
>> + return enic_probe(enic);
>> +}
>> +
>> +static struct eth_driver rte_enic_pmd = {
>> + {
>> + .name = "rte_enic_pmd",
>> + .id_table = pci_id_enic_map,
>> + .drv_flags = RTE_PCI_DRV_NEED_MAPPING,
>> + },
>> + .eth_dev_init = eth_enicpmd_dev_init,
>> + .dev_private_size = sizeof(struct enic), };
>> +
>> +/* Driver initialization routine.
>> + * Invoked once at EAL init time.
>> + * Register as the [Poll Mode] Driver of Cisco ENIC device.
>> + */
>> +int rte_enic_pmd_init(const char *name __rte_unused,
>> + const char *params __rte_unused)
>> +{
>> + ENICPMD_FUNC_TRACE();
>> +
>> + rte_eth_driver_register(&rte_enic_pmd);
>> + return 0;
>> +}
>> +
>> +static struct rte_driver rte_enic_driver = {
>> + .type = PMD_PDEV,
>> + .init = rte_enic_pmd_init,
>> +};
>> +
>> +PMD_REGISTER_DRIVER(rte_enic_driver);
>> +
>> --
>> 1.9.1
>
* Re: [dpdk-dev] [PATCH v6 5/6] enicpmd: DPDK-ENIC PMD interface
2014-12-30 4:45 ` Sujith Sankar (ssujith)
@ 2015-01-06 9:41 ` Thomas Monjalon
2015-01-30 8:53 ` Wu, Jingjing
1 sibling, 0 replies; 36+ messages in thread
From: Thomas Monjalon @ 2015-01-06 9:41 UTC (permalink / raw)
To: Sujith Sankar (ssujith); +Cc: dev, Prasad Rao (prrao)
2014-12-30 04:45, Sujith Sankar:
> On 29/12/14 1:45 pm, "Wu, Jingjing" <jingjing.wu@intel.com> wrote:
> >I found that in perfect fdir is also supported in enic driver.
> >
> >During the R1.8 development, we defined a new dev_ops callback, filter_ctrl,
> >which can be used to control all kinds of filters; flow director is included
> >too. It is mentioned in
> >http://www.dpdk.org/ml/archives/dev/2014-September/005179.html .
> >In R1.8, filter_ctrl is only used by the i40e driver. We also plan to use
> >it in the existing ixgbe/e1000 drivers in the coming days. The old APIs such
> >as fdir_add_perfect_filter and fdir_remove_perfect_filter can be replaced
> >then.
>
> Hi Jingjing,
> Thanks for the info and the link. I shall take a look at it.
> It looks like bringing in one interface for all filter related operations.
> I believe ENIC should also move to it.
Yes please.
It is planned to remove or deprecate the old flow director API.
--
Thomas
^ permalink raw reply [flat|nested] 36+ messages in thread
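The single-entry-point design discussed above, one filter_ctrl dev_ops callback dispatching on filter type and operation, can be sketched in isolation as follows. The names and return codes here are simplified stand-ins, not the actual rte_ethdev definitions:

```c
/* Illustrative sketch of the "one entry point for all filter types"
 * design discussed in this thread. Enum and struct names are made up
 * for the example; they are not the real DPDK symbols. */
#include <stddef.h>

enum filter_type { FILTER_NONE, FILTER_FDIR, FILTER_HASH };
enum filter_op   { FILTER_OP_ADD, FILTER_OP_DELETE, FILTER_OP_GET };

struct fdir_filter { unsigned queue; };

/* Per-type handler: only ADD is implemented in this sketch. */
static int fdir_filter_ctrl(enum filter_op op, void *arg)
{
	struct fdir_filter *f = arg;
	if (op == FILTER_OP_ADD && f != NULL)
		return 0;	/* a real driver would program the NIC here */
	return -1;		/* unsupported op in this sketch */
}

/* One dev_ops-style callback handles every filter kind; the per-type
 * handler is selected inside, so a new filter type needs a new case
 * here rather than a new set of fdir_* entry points. */
int filter_ctrl(enum filter_type type, enum filter_op op, void *arg)
{
	switch (type) {
	case FILTER_FDIR:
		return fdir_filter_ctrl(op, arg);
	default:
		return -1;	/* -ENOTSUP in the real API */
	}
}
```

With this shape, retiring the old per-operation fdir_* dev_ops (as proposed above) means drivers only expose the one dispatcher.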
* Re: [dpdk-dev] [PATCH v6 0/6] enicpmd: Cisco Systems Inc. VIC Ethernet PMD
2014-11-27 15:31 ` Thomas Monjalon
@ 2015-01-20 11:25 ` David Marchand
2015-01-21 5:03 ` Sujith Sankar (ssujith)
0 siblings, 1 reply; 36+ messages in thread
From: David Marchand @ 2015-01-20 11:25 UTC (permalink / raw)
To: Sujith Sankar (ssujith); +Cc: dev, Prasad Rao (prrao)
Hello Sujith,
Any news on the documentation and the performance numbers you said you
would send?
Thanks.
--
David Marchand
On Thu, Nov 27, 2014 at 4:31 PM, Thomas Monjalon <thomas.monjalon@6wind.com>
wrote:
> 2014-11-27 04:27, Sujith Sankar:
> > Thanks Thomas, David and Neil !
> >
> > I shall work on finishing the documentation.
> > About that, you had mentioned that you wanted it in doc/drivers/ path.
> > Could I send a patch with documentation in the path doc/drivers/enicpmd/
> ?
>
> Yes.
> I'd prefer doc/drivers/enic/ but it's a detail ;)
> The format must be sphinx rst to allow web publishing.
>
> It would be great to have some design documentation of every drivers
> in doc/drivers.
>
> Thanks
> --
> Thomas
>
^ permalink raw reply [flat|nested] 36+ messages in thread
* Re: [dpdk-dev] [PATCH v6 0/6] enicpmd: Cisco Systems Inc. VIC Ethernet PMD
2015-01-20 11:25 ` David Marchand
@ 2015-01-21 5:03 ` Sujith Sankar (ssujith)
2015-02-26 11:49 ` Thomas Monjalon
0 siblings, 1 reply; 36+ messages in thread
From: Sujith Sankar (ssujith) @ 2015-01-21 5:03 UTC (permalink / raw)
To: David Marchand; +Cc: dev, Prasad Rao (prrao)
Hi David,
Apologies for the delay. I was not able to find quality time to finish it as a few other things have been keeping me busy. But I shall work on it and provide the doc and the perf details soon.
In the meantime, it would be great if you could point me to some resources on running pktgen-dpdk, as I was stuck on it.
Thanks,
-Sujith
From: David Marchand <david.marchand@6wind.com<mailto:david.marchand@6wind.com>>
Date: Tuesday, 20 January 2015 4:55 pm
To: "Sujith Sankar (ssujith)" <ssujith@cisco.com<mailto:ssujith@cisco.com>>
Cc: "dev@dpdk.org<mailto:dev@dpdk.org>" <dev@dpdk.org<mailto:dev@dpdk.org>>, "Prasad Rao (prrao)" <prrao@cisco.com<mailto:prrao@cisco.com>>, Neil Horman <nhorman@tuxdriver.com<mailto:nhorman@tuxdriver.com>>, Thomas Monjalon <thomas.monjalon@6wind.com<mailto:thomas.monjalon@6wind.com>>
Subject: Re: [dpdk-dev] [PATCH v6 0/6] enicpmd: Cisco Systems Inc. VIC Ethernet PMD
Hello Sujith,
Any news on the documentation and the performance numbers you said you would send ?
Thanks.
--
David Marchand
On Thu, Nov 27, 2014 at 4:31 PM, Thomas Monjalon <thomas.monjalon@6wind.com<mailto:thomas.monjalon@6wind.com>> wrote:
2014-11-27 04:27, Sujith Sankar:
> Thanks Thomas, David and Neil !
>
> I shall work on finishing the documentation.
> About that, you had mentioned that you wanted it in doc/drivers/ path.
> Could I send a patch with documentation in the path doc/drivers/enicpmd/ ?
Yes.
I'd prefer doc/drivers/enic/ but it's a detail ;)
The format must be sphinx rst to allow web publishing.
It would be great to have some design documentation of every drivers
in doc/drivers.
Thanks
--
Thomas
^ permalink raw reply [flat|nested] 36+ messages in thread
* Re: [dpdk-dev] [PATCH v6 5/6] enicpmd: DPDK-ENIC PMD interface
2014-12-30 4:45 ` Sujith Sankar (ssujith)
2015-01-06 9:41 ` Thomas Monjalon
@ 2015-01-30 8:53 ` Wu, Jingjing
1 sibling, 0 replies; 36+ messages in thread
From: Wu, Jingjing @ 2015-01-30 8:53 UTC (permalink / raw)
To: 'Sujith Sankar (ssujith)', dev; +Cc: Prasad Rao (prrao)
Hi, ssujith
> -----Original Message-----
> From: Sujith Sankar (ssujith) [mailto:ssujith@cisco.com]
> Sent: Tuesday, December 30, 2014 12:46 PM
> To: Wu, Jingjing; dev@dpdk.org
> Cc: Prasad Rao (prrao)
> Subject: Re: [dpdk-dev] [PATCH v6 5/6] enicpmd: DPDK-ENIC PMD interface
>
>
>
> On 29/12/14 1:45 pm, "Wu, Jingjing" <jingjing.wu@intel.com> wrote:
>
> >Hi, ssujith
> >
> >> + .tx_queue_release = enicpmd_dev_tx_queue_release,
> >> + .dev_led_on = NULL,
> >> + .dev_led_off = NULL,
> >> + .flow_ctrl_get = NULL,
> >> + .flow_ctrl_set = NULL,
> >> + .priority_flow_ctrl_set = NULL,
> >> + .mac_addr_add = enicpmd_add_mac_addr,
> >> + .mac_addr_remove = enicpmd_remove_mac_addr,
> >> + .fdir_add_signature_filter = NULL,
> >> + .fdir_update_signature_filter = NULL,
> >> + .fdir_remove_signature_filter = NULL,
> >> + .fdir_infos_get = enicpmd_fdir_info_get,
> >> + .fdir_add_perfect_filter = enicpmd_fdir_add_perfect_filter,
> >> + .fdir_update_perfect_filter = enicpmd_fdir_add_perfect_filter,
> >> + .fdir_remove_perfect_filter = enicpmd_fdir_remove_perfect_filter,
> >> + .fdir_set_masks = NULL,
> >> +};
> >> +
> >
> >I found that in perfect fdir is also supported in enic driver.
> >
> >During the R1.8 development, we defined a new dev_ops call filter_ctrl,
> >which can be used to control kinds of filters, flow director is
> >included too. Which is mentioned in
> >http://www.dpdk.org/ml/archives/dev/2014-September/005179.html .
> >In R1.8, filter_ctrl is only used by i40e driver. And we also planned
> >use it in the existing ixgbe/e1000 driver in the next days. The old
> >APIs such as fdir_add_perfect_filter, fdir_remove_perfect_filter can be
> >replaced then.
>
>
> Hi Jingjing,
> Thanks for the info and the link. I shall take a look at it.
> It looks like bringing in one interface for all filter related operations.
> I believe ENIC should also move to it.
>
> Thanks,
> -Sujith
>
Just to let you know, I have already sent the patch migrating the flow director in the ixgbe driver to the new filter_ctrl API:
http://www.dpdk.org/ml/archives/dev/2015-January/011830.html
To avoid compile errors and impact on the enic driver, I didn't remove the old APIs and structures in rte_ethdev. I think they can be removed when the migration in the enic driver is done. Do you have any plan for this?
Thank you!
Jingjing
> >
> >So, do you have any plan to migrate the fdir in enic to the filter_ctrl
> >API?
> >
> >Jingjing
> >
> >Thanks!
> >
> >> +struct enic *enicpmd_list_head = NULL;
> >> +/* Initialize the driver
> >> + * It returns 0 on success.
> >> + */
> >> +static int eth_enicpmd_dev_init(
> >> + __attribute__((unused))struct eth_driver *eth_drv,
> >> + struct rte_eth_dev *eth_dev)
> >> +{
> >> + struct rte_pci_device *pdev;
> >> + struct rte_pci_addr *addr;
> >> + struct enic *enic = pmd_priv(eth_dev);
> >> +
> >> + ENICPMD_FUNC_TRACE();
> >> +
> >> + enic->rte_dev = eth_dev;
> >> + eth_dev->dev_ops = &enicpmd_eth_dev_ops;
> >> + eth_dev->rx_pkt_burst = &enicpmd_recv_pkts;
> >> + eth_dev->tx_pkt_burst = &enicpmd_xmit_pkts;
> >> +
> >> + pdev = eth_dev->pci_dev;
> >> + enic->pdev = pdev;
> >> + addr = &pdev->addr;
> >> +
> >> + snprintf(enic->bdf_name, ENICPMD_BDF_LENGTH,
> >> "%04x:%02x:%02x.%x",
> >> + addr->domain, addr->bus, addr->devid, addr->function);
> >> +
> >> + return enic_probe(enic);
> >> +}
> >> +
> >> +static struct eth_driver rte_enic_pmd = {
> >> + {
> >> + .name = "rte_enic_pmd",
> >> + .id_table = pci_id_enic_map,
> >> + .drv_flags = RTE_PCI_DRV_NEED_MAPPING,
> >> + },
> >> + .eth_dev_init = eth_enicpmd_dev_init,
> >> + .dev_private_size = sizeof(struct enic),
> >> +};
> >> +
> >> +/* Driver initialization routine.
> >> + * Invoked once at EAL init time.
> >> + * Register as the [Poll Mode] Driver of Cisco ENIC device.
> >> + */
> >> +int rte_enic_pmd_init(const char *name __rte_unused,
> >> + const char *params __rte_unused)
> >> +{
> >> + ENICPMD_FUNC_TRACE();
> >> +
> >> + rte_eth_driver_register(&rte_enic_pmd);
> >> + return 0;
> >> +}
> >> +
> >> +static struct rte_driver rte_enic_driver = {
> >> + .type = PMD_PDEV,
> >> + .init = rte_enic_pmd_init,
> >> +};
> >> +
> >> +PMD_REGISTER_DRIVER(rte_enic_driver);
> >> +
> >> --
> >> 1.9.1
> >
^ permalink raw reply [flat|nested] 36+ messages in thread
* Re: [dpdk-dev] [PATCH v6 0/6] enicpmd: Cisco Systems Inc. VIC Ethernet PMD
2015-01-21 5:03 ` Sujith Sankar (ssujith)
@ 2015-02-26 11:49 ` Thomas Monjalon
2015-02-26 13:08 ` Wiles, Keith
0 siblings, 1 reply; 36+ messages in thread
From: Thomas Monjalon @ 2015-02-26 11:49 UTC (permalink / raw)
To: Sujith Sankar (ssujith); +Cc: dev, Prasad Rao (prrao)
Hi Sujith,
Do you have news about doc for enic?
In case you didn't find the doc for pktgen-dpdk, it's available here:
http://pktgen.readthedocs.org
2015-01-21 05:03, Sujith Sankar:
> Hi David,
>
> Apologies for the delay. I was not able to find quality time to finish it
> as a few other things have been keeping me busy. But I shall work on it
> and provide the doc and the perf details soon.
> In the mean time, it would be great if you could point me to some resources
> on running pktgen-dpdk as I was stuck on it.
>
> Thanks,
> -Sujith
>
> From: David Marchand <david.marchand@6wind.com<mailto:david.marchand@6wind.com>>
> Date: Tuesday, 20 January 2015 4:55 pm
> > Hello Sujith,
> >
> > Any news on the documentation and the performance numbers you said you
> > would send ?
> >
> > Thanks.
> >
> > --
> > David Marchand
> >
> > On Thu, Nov 27, 2014 at 4:31 PM, Thomas Monjalon
> > <thomas.monjalon@6wind.com<mailto:thomas.monjalon@6wind.com>> wrote:
> > > 2014-11-27 04:27, Sujith Sankar:
> > > > Thanks Thomas, David and Neil !
> > > >
> > > > I shall work on finishing the documentation.
> > > > About that, you had mentioned that you wanted it in doc/drivers/ path.
> > > > Could I send a patch with documentation in the path
> > > > doc/drivers/enicpmd/
> > > > ?
> > >
> > > Yes.
> > > I'd prefer doc/drivers/enic/ but it's a detail ;)
> > > The format must be sphinx rst to allow web publishing.
> > >
> > > It would be great to have some design documentation of every drivers
> > > in doc/drivers.
> > >
> > > Thanks
> > > --
> > > Thomas
^ permalink raw reply [flat|nested] 36+ messages in thread
* Re: [dpdk-dev] [PATCH v6 0/6] enicpmd: Cisco Systems Inc. VIC Ethernet PMD
2015-02-26 11:49 ` Thomas Monjalon
@ 2015-02-26 13:08 ` Wiles, Keith
2015-02-27 8:09 ` Sujith Sankar (ssujith)
0 siblings, 1 reply; 36+ messages in thread
From: Wiles, Keith @ 2015-02-26 13:08 UTC (permalink / raw)
To: Thomas Monjalon, Sujith Sankar (ssujith); +Cc: dev, Prasad Rao (prrao)
On 2/26/15, 5:49 AM, "Thomas Monjalon" <thomas.monjalon@6wind.com> wrote:
>Hi Sujith,
>
>Do you have news about doc for enic?
>
>In case you didn't find the doc for pktgen-dpdk, it's available here:
> http://pktgen.readthedocs.org
Hi Sujith,
If you cannot find the answers to your questions in the doc, please send me
an email.
Regards,
++Keith
>
>2015-01-21 05:03, Sujith Sankar:
>> Hi David,
>>
>> Apologies for the delay. I was not able to find quality time to finish
>>it
>> as a few other things have been keeping me busy. But I shall work on it
>> and provide the doc and the perf details soon.
>> In the mean time, it would be great if you could point me to some
>>resources
>> on running pktgen-dpdk as I was stuck on it.
>>
>> Thanks,
>> -Sujith
>>
>> From: David Marchand
>><david.marchand@6wind.com<mailto:david.marchand@6wind.com>>
>> Date: Tuesday, 20 January 2015 4:55 pm
>> > Hello Sujith,
>> >
>> > Any news on the documentation and the performance numbers you said you
>> > would send ?
>> >
>> > Thanks.
>> >
>> > --
>> > David Marchand
>> >
>> > On Thu, Nov 27, 2014 at 4:31 PM, Thomas Monjalon
>> > <thomas.monjalon@6wind.com<mailto:thomas.monjalon@6wind.com>> wrote:
>> > > 2014-11-27 04:27, Sujith Sankar:
>> > > > Thanks Thomas, David and Neil !
>> > > >
>> > > > I shall work on finishing the documentation.
>> > > > About that, you had mentioned that you wanted it in doc/drivers/
>>path.
>> > > > Could I send a patch with documentation in the path
>> > > > doc/drivers/enicpmd/
>> > > > ?
>> > >
>> > > Yes.
>> > > I'd prefer doc/drivers/enic/ but it's a detail ;)
>> > > The format must be sphinx rst to allow web publishing.
>> > >
>> > > It would be great to have some design documentation of every drivers
>> > > in doc/drivers.
>> > >
>> > > Thanks
>> > > --
>> > > Thomas
^ permalink raw reply [flat|nested] 36+ messages in thread
* Re: [dpdk-dev] [PATCH v6 0/6] enicpmd: Cisco Systems Inc. VIC Ethernet PMD
2015-02-26 13:08 ` Wiles, Keith
@ 2015-02-27 8:09 ` Sujith Sankar (ssujith)
2015-02-27 8:47 ` [dpdk-dev] Why only rx queue "0" can receive network packet by i40e NIC lhffjzh
` (2 more replies)
0 siblings, 3 replies; 36+ messages in thread
From: Sujith Sankar (ssujith) @ 2015-02-27 8:09 UTC (permalink / raw)
To: Wiles, Keith, Thomas Monjalon; +Cc: dev, Prasad Rao (prrao)
Hi Thomas,
No update on it from my side :-(
It would take some more time for me to start working on it (and the flow
director API) as a few other things are keeping me busy.
Keith, Thanks ! I shall get back in case of questions.
Regards,
-Sujith
On 26/02/15 6:38 pm, "Wiles, Keith" <keith.wiles@intel.com> wrote:
>On 2/26/15, 5:49 AM, "Thomas Monjalon" <thomas.monjalon@6wind.com> wrote:
>
>>Hi Sujith,
>>
>>Do you have news about doc for enic?
>>
>>In case you didn't find the doc for pktgen-dpdk, it's available here:
>> http://pktgen.readthedocs.org
>
>Hi Sujith,
>
>If you can not find the answer your questions in the doc please me an
>email.
>
>
>Regards,
>
>++Keith
>>
>>2015-01-21 05:03, Sujith Sankar:
>>> Hi David,
>>>
>>> Apologies for the delay. I was not able to find quality time to finish
>>>it
>>> as a few other things have been keeping me busy. But I shall work on
>>>it
>>> and provide the doc and the perf details soon.
>>> In the mean time, it would be great if you could point me to some
>>>resources
>>> on running pktgen-dpdk as I was stuck on it.
>>>
>>> Thanks,
>>> -Sujith
>>>
>>> From: David Marchand
>>><david.marchand@6wind.com<mailto:david.marchand@6wind.com>>
>>> Date: Tuesday, 20 January 2015 4:55 pm
>>> > Hello Sujith,
>>> >
>>> > Any news on the documentation and the performance numbers you said
>>>you
>>> > would send ?
>>> >
>>> > Thanks.
>>> >
>>> > --
>>> > David Marchand
>>> >
>>> > On Thu, Nov 27, 2014 at 4:31 PM, Thomas Monjalon
>>> > <thomas.monjalon@6wind.com<mailto:thomas.monjalon@6wind.com>> wrote:
>>> > > 2014-11-27 04:27, Sujith Sankar:
>>> > > > Thanks Thomas, David and Neil !
>>> > > >
>>> > > > I shall work on finishing the documentation.
>>> > > > About that, you had mentioned that you wanted it in doc/drivers/
>>>path.
>>> > > > Could I send a patch with documentation in the path
>>> > > > doc/drivers/enicpmd/
>>> > > > ?
>>> > >
>>> > > Yes.
>>> > > I'd prefer doc/drivers/enic/ but it's a detail ;)
>>> > > The format must be sphinx rst to allow web publishing.
>>> > >
>>> > > It would be great to have some design documentation of every
>>>drivers
>>> > > in doc/drivers.
>>> > >
>>> > > Thanks
>>> > > --
>>> > > Thomas
>
^ permalink raw reply [flat|nested] 36+ messages in thread
* [dpdk-dev] Why only rx queue "0" can receive network packet by i40e NIC
2015-02-27 8:09 ` Sujith Sankar (ssujith)
@ 2015-02-27 8:47 ` lhffjzh
2015-02-27 9:03 ` lhffjzh
2015-02-27 10:55 ` Thomas Monjalon
2015-02-27 10:46 ` [dpdk-dev] [PATCH v6 0/6] enicpmd: Cisco Systems Inc. VIC Ethernet PMD Thomas Monjalon
2015-05-11 9:25 ` Thomas Monjalon
2 siblings, 2 replies; 36+ messages in thread
From: lhffjzh @ 2015-02-27 8:47 UTC (permalink / raw)
To: 'Sujith Sankar (ssujith)', 'Wiles, Keith',
'Thomas Monjalon'
Cc: dev, 'Prasad Rao (prrao)'
Hi All,
We use 4 cores to poll 4 rx queues on one i40e port, but only rx queue "0"
receives network packets. Does anyone know why? BTW, all of the network
packets have the same destination IP address but more than 200 different
source IP addresses.
In addition, in our tests, all rx queues can receive network packets with a
10G ixgbe NIC and similar code.
Thanks and Regards,
Haifeng
^ permalink raw reply [flat|nested] 36+ messages in thread
* Re: [dpdk-dev] Why only rx queue "0" can receive network packet by i40e NIC
2015-02-27 8:47 ` [dpdk-dev] Why only rx queue "0" can receive network packet by i40e NIC lhffjzh
@ 2015-02-27 9:03 ` lhffjzh
2015-02-27 10:55 ` Thomas Monjalon
1 sibling, 0 replies; 36+ messages in thread
From: lhffjzh @ 2015-02-27 9:03 UTC (permalink / raw)
To: dev; +Cc: 'Prasad Rao (prrao)'
Hi,
BTW, I tried on both 1.7.1 and 1.8.0.
Thanks and Regards,
Haifeng
-----Original Message-----
From: dev-bounces@dpdk.org [mailto:dev-bounces@dpdk.org] On Behalf Of
lhffjzh
Sent: Friday, February 27, 2015 4:47 PM
To: 'Sujith Sankar (ssujith)'; 'Wiles, Keith'; 'Thomas Monjalon'
Cc: dev@dpdk.org; 'Prasad Rao (prrao)'
Subject: [dpdk-dev] Why only rx queue "0" can receive network packet by i40e
NIC
Hi All,
We use 4 cores loop 4 rx queues on one i40e port, but only rx queue "0" can
receive network packet, do anyone kindly know why? BTW, all of network
packet has same destination ip address but has more than 200 different
source ip address.
In addition, by our test, all of rx queues can receive network packet with
10G ixgbe NIC with similar code.
Thanks and Regards,
Haifeng
^ permalink raw reply [flat|nested] 36+ messages in thread
* Re: [dpdk-dev] [PATCH v6 0/6] enicpmd: Cisco Systems Inc. VIC Ethernet PMD
2015-02-27 8:09 ` Sujith Sankar (ssujith)
2015-02-27 8:47 ` [dpdk-dev] Why only rx queue "0" can receive network packet by i40e NIC lhffjzh
@ 2015-02-27 10:46 ` Thomas Monjalon
2015-03-11 9:05 ` Sujith Sankar (ssujith)
2015-05-11 9:25 ` Thomas Monjalon
2 siblings, 1 reply; 36+ messages in thread
From: Thomas Monjalon @ 2015-02-27 10:46 UTC (permalink / raw)
To: Sujith Sankar (ssujith); +Cc: dev, Prasad Rao (prrao)
2015-02-27 08:09, Sujith Sankar:
> Hi Thomas,
>
> No update on it from my side :-(
> It would take some more time for me to start working on it (and flow
> director api) as a few other things are keeping me busy.
It's unfortunate.
We now have a PMD without documentation that is blocking the removal of the
old flow director API.
Is there someone else at Cisco able to work on it?
I think the first priority should be on the flow director API as it has a
wider impact.
Thanks
^ permalink raw reply [flat|nested] 36+ messages in thread
* Re: [dpdk-dev] Why only rx queue "0" can receive network packet by i40e NIC
2015-02-27 8:47 ` [dpdk-dev] Why only rx queue "0" can receive network packet by i40e NIC lhffjzh
2015-02-27 9:03 ` lhffjzh
@ 2015-02-27 10:55 ` Thomas Monjalon
2015-02-28 1:47 ` lhffjzh
1 sibling, 1 reply; 36+ messages in thread
From: Thomas Monjalon @ 2015-02-27 10:55 UTC (permalink / raw)
To: lhffjzh; +Cc: dev
2015-02-27 16:47, lhffjzh:
> Hi All,
>
> We use 4 cores loop 4 rx queues on one i40e port, but only rx queue "0" can
> receive network packet, do anyone kindly know why? BTW, all of network
> packet has same destination ip address but has more than 200 different
> source ip address.
It's possible that you didn't get any answer for 2 reasons:
- you replied in a thread dedicated to Cisco enic questions
- you didn't describe your usage in enough detail to understand your problem
I suggest using the "new email" button instead of "reply all" to
start a new question with enough details.
Did you notice you put some Cisco guys in CC instead of the
Intel maintainer responsible for i40e (see the MAINTAINERS file)?
^ permalink raw reply [flat|nested] 36+ messages in thread
* Re: [dpdk-dev] Why only rx queue "0" can receive network packet by i40e NIC
2015-02-27 10:55 ` Thomas Monjalon
@ 2015-02-28 1:47 ` lhffjzh
2015-02-28 3:17 ` Zhang, Helin
0 siblings, 1 reply; 36+ messages in thread
From: lhffjzh @ 2015-02-28 1:47 UTC (permalink / raw)
To: 'Thomas Monjalon'; +Cc: dev, maintainers
Hi Thomas,
Thanks very much for your reminder; you have given me a lot of help on this mailing list.
The issue, with detailed information, is below, but I don't know
who the DPDK i40e maintainers are. Is it maintainers@dpdk.org?
Hardware list:
2 i40e 40G NICs
Xeon E5-2670 v2(10 cores)
32G memory
I loop back 2 i40e NICs with a QSFP cable; one NIC sends UDP network packets
via DPDK and the other receives. I bind 4 of the processor's logical cores to
the 4 rx queues "0,1,2,3" on the receiving NIC. When I start sending packets,
only rx queue "0" receives the UDP packets; the other queues always receive
nothing. But it works well on ixgbe 10G NICs, where I can receive packets
from all rx queues. Does anyone know why?
Regards,
Haifeng
-----Original Message-----
From: Thomas Monjalon [mailto:thomas.monjalon@6wind.com]
Sent: Friday, February 27, 2015 6:55 PM
To: lhffjzh
Cc: dev@dpdk.org
Subject: Re: Why only rx queue "0" can receive network packet by i40e NIC
2015-02-27 16:47, lhffjzh:
> Hi All,
>
> We use 4 cores loop 4 rx queues on one i40e port, but only rx queue "0"
can
> receive network packet, do anyone kindly know why? BTW, all of network
> packet has same destination ip address but has more than 200 different
> source ip address.
It's possible that you don't have any answer for 2 reasons:
- you replied in a thread dedicated to Cisco enic questions
- you didn't describe your usage enough to understand your problem
I suggest to use the button "new email" instead of "reply all" to
start a new question with enough details.
Did you noticed you put some Cisco guys in CC instead of putting the
Intel responsible for i40e (see MAINTAINERS file)?
^ permalink raw reply [flat|nested] 36+ messages in thread
* Re: [dpdk-dev] Why only rx queue "0" can receive network packet by i40e NIC
2015-02-28 1:47 ` lhffjzh
@ 2015-02-28 3:17 ` Zhang, Helin
2015-02-28 4:33 ` lhffjzh
0 siblings, 1 reply; 36+ messages in thread
From: Zhang, Helin @ 2015-02-28 3:17 UTC (permalink / raw)
To: lhffjzh, 'Thomas Monjalon'; +Cc: dev, maintainers
Hi Haifeng
> -----Original Message-----
> From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of lhffjzh
> Sent: Saturday, February 28, 2015 9:48 AM
> To: 'Thomas Monjalon'
> Cc: dev@dpdk.org; maintainers@dpdk.org
> Subject: Re: [dpdk-dev] Why only rx queue "0" can receive network packet by
> i40e NIC
>
> Hi Thomas,
>
> Thanks very much for your reminder, you give me many help in this mail list.
>
> The issue with detailed information just as below. but I don't know who is the
> dpdk i40e maintainers? is maintainers@dpdk.org?
>
> Hardware list:
> 2 i40e 40G NICs
> Xeon E5-2670 v2(10 cores)
> 32G memory
>
> I loopback 2 i40e NICs by QSFP cable, one NIC send UDP network packet by
> DPDK, and another for receiving. I bind 4 processor's logical cores with 4 rx
> queue "0,1,2,3" on receiving NIC, when I start to send packet, only rx queue
> "0"
> can receive
> the UDP packet, the others queue always receive nothing. but it is work well on
> ixgbe 10G NICs, I can receive network packet from all rx queues. does anyone
> kindly know why?
Could you list the DPDK version you are using now?
Two possible reasons:
1. UDP RSS is not enabled on your board correctly.
i40e has different RSS flags from ixgbe, so I am wondering if you are using them correctly.
In addition, this will be unified from 2.0, so I care about the DPDK version.
2. The UDP stream is occasionally hitting the hash value of queue 0.
You should try to send your UDP stream with random 5-tuples, to get the
hash values to hit different queues randomly.
Regards,
Helin
>
>
> Regards,
> Haifeng
>
> -----Original Message-----
> From: Thomas Monjalon [mailto:thomas.monjalon@6wind.com]
> Sent: Friday, February 27, 2015 6:55 PM
> To: lhffjzh
> Cc: dev@dpdk.org
> Subject: Re: Why only rx queue "0" can receive network packet by i40e NIC
>
> 2015-02-27 16:47, lhffjzh:
> > Hi All,
> >
> > We use 4 cores loop 4 rx queues on one i40e port, but only rx queue "0"
> can
> > receive network packet, do anyone kindly know why? BTW, all of network
> > packet has same destination ip address but has more than 200 different
> > source ip address.
>
> It's possible that you don't have any answer for 2 reasons:
> - you replied in a thread dedicated to Cisco enic questions
> - you didn't describe your usage enough to understand your problem
>
> I suggest to use the button "new email" instead of "reply all" to
> start a new question with enough details.
>
> Did you noticed you put some Cisco guys in CC instead of putting the
> Intel responsible for i40e (see MAINTAINERS file)?
>
^ permalink raw reply [flat|nested] 36+ messages in thread
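Helin's first reason matches the symptom exactly: when the NIC computes no RSS hash for a packet type, the hash stays 0 and the RSS redirection table steers every such packet to queue 0. A self-contained toy model of that mapping (a simplified mixing function and made-up names, not the real i40e Toeplitz/RETA implementation):

```c
#include <stdint.h>

#define NB_QUEUES 4	/* rx queues 0..3, as in the setup above */
#define RETA_SIZE 64	/* toy redirection table size */

/* Toy stand-ins for rss_hf bits (not the real ETH_RSS_* values). */
#define RSS_IPV4 (1u << 0)
#define RSS_UDP  (1u << 1)

/* Simplified per-flow mix, standing in for the Toeplitz hash. */
static uint32_t toy_hash(uint32_t src_ip, uint16_t src_port)
{
	uint32_t h = src_ip * 2654435761u;
	h ^= (uint32_t)src_port * 40503u;
	return h;
}

/* If UDP hashing is not enabled in rss_hf, the hash stays 0 and every
 * UDP packet lands on the queue the table maps slot 0 to: queue 0. */
unsigned rx_queue_for(uint32_t rss_hf, uint32_t src_ip, uint16_t src_port)
{
	uint32_t hash = 0;
	if (rss_hf & RSS_UDP)
		hash = toy_hash(src_ip, src_port);
	/* default redirection table: hash spread evenly over queues */
	return (hash % RETA_SIZE) % NB_QUEUES;
}
```

Enabling UDP in rss_hf (as the broader ETH_RSS_PROTO_MASK does) is what lets the hash differ per flow, which is the fix Haifeng reports later in the thread.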
* Re: [dpdk-dev] Why only rx queue "0" can receive network packet by i40e NIC
2015-02-28 3:17 ` Zhang, Helin
@ 2015-02-28 4:33 ` lhffjzh
2015-02-28 14:33 ` Zhang, Helin
0 siblings, 1 reply; 36+ messages in thread
From: lhffjzh @ 2015-02-28 4:33 UTC (permalink / raw)
To: 'Zhang, Helin', 'Thomas Monjalon'; +Cc: dev, maintainers
Hi Helin,
Thanks a lot for your great help. All of the rx queues
received network packets after I updated rss_hf
from "ETH_RSS_IP" to "ETH_RSS_PROTO_MASK".
static struct rte_eth_conf port_conf = {
	.rxmode = {
		.mq_mode = ETH_MQ_RX_RSS,
		.max_rx_pkt_len = ETHER_MAX_LEN,
		.split_hdr_size = 0,
		.header_split = 0,   /**< Header Split disabled */
		.hw_ip_checksum = 1, /**< IP checksum offload enabled */
		.hw_vlan_filter = 0, /**< VLAN filtering disabled */
		.jumbo_frame = 0,    /**< Jumbo Frame Support disabled */
		.hw_strip_crc = 0,   /**< CRC stripped by hardware */
	},
	.rx_adv_conf = {
		.rss_conf = {
			.rss_key = NULL,
			.rss_hf = ETH_RSS_PROTO_MASK,
		},
	},
	.txmode = {
		.mq_mode = ETH_MQ_TX_NONE,
	},
	.fdir_conf.mode = RTE_FDIR_MODE_SIGNATURE,
};
Regards,
Haifeng
-----Original Message-----
From: Zhang, Helin [mailto:helin.zhang@intel.com]
Sent: Saturday, February 28, 2015 11:18 AM
To: lhffjzh; 'Thomas Monjalon'
Cc: dev@dpdk.org; maintainers@dpdk.org
Subject: RE: [dpdk-dev] Why only rx queue "0" can receive network packet by
i40e NIC
Hi Haifeng
> -----Original Message-----
> From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of lhffjzh
> Sent: Saturday, February 28, 2015 9:48 AM
> To: 'Thomas Monjalon'
> Cc: dev@dpdk.org; maintainers@dpdk.org
> Subject: Re: [dpdk-dev] Why only rx queue "0" can receive network packet
by
> i40e NIC
>
> Hi Thomas,
>
> Thanks very much for your reminder, you give me many help in this mail
list.
>
> The issue with detailed information just as below. but I don't know who is
the
> dpdk i40e maintainers? is maintainers@dpdk.org?
>
> Hardware list:
> 2 i40e 40G NICs
> Xeon E5-2670 v2(10 cores)
> 32G memory
>
> I loopback 2 i40e NICs by QSFP cable, one NIC send UDP network packet by
> DPDK, and another for receiving. I bind 4 processor's logical cores with 4
rx
> queue "0,1,2,3" on receiving NIC, when I start to send packet, only rx
queue
> "0"
> can receive
> the UDP packet, the others queue always receive nothing. but it is work
well on
> ixgbe 10G NICs, I can receive network packet from all rx queues. does
anyone
> kindly know why?
Could you help to list the DPDK version you are using now?
Two possible reasons:
1. UDP rss is not enabled on your board correctly.
I40e has different rss flags from ixgbe, so I am wondering if you
use it correctly.
In addition, this will be unified from 2.0. So I care about the DPDK
version.
2. The UDP stream is occasionally hit the hash key of queue 0.
You'd better to try to send your UDP stream with random 5-tuples, to
get the
hash value hit different queues randomly.
Regards,
Helin
>
>
> Regards,
> Haifeng
>
> -----Original Message-----
> From: Thomas Monjalon [mailto:thomas.monjalon@6wind.com]
> Sent: Friday, February 27, 2015 6:55 PM
> To: lhffjzh
> Cc: dev@dpdk.org
> Subject: Re: Why only rx queue "0" can receive network packet by i40e NIC
>
> 2015-02-27 16:47, lhffjzh:
> > Hi All,
> >
> > We use 4 cores loop 4 rx queues on one i40e port, but only rx queue "0"
> can
> > receive network packet, do anyone kindly know why? BTW, all of network
> > packet has same destination ip address but has more than 200 different
> > source ip address.
>
> It's possible that you don't have any answer for 2 reasons:
> - you replied in a thread dedicated to Cisco enic questions
> - you didn't describe your usage enough to understand your problem
>
> I suggest to use the button "new email" instead of "reply all" to
> start a new question with enough details.
>
> Did you noticed you put some Cisco guys in CC instead of putting the
> Intel responsible for i40e (see MAINTAINERS file)?
>
^ permalink raw reply [flat|nested] 36+ messages in thread
* Re: [dpdk-dev] Why only rx queue "0" can receive network packet by i40e NIC
2015-02-28 4:33 ` lhffjzh
@ 2015-02-28 14:33 ` Zhang, Helin
2015-07-23 0:47 ` Jeff Venable, Sr.
0 siblings, 1 reply; 36+ messages in thread
From: Zhang, Helin @ 2015-02-28 14:33 UTC (permalink / raw)
To: lhffjzh, 'Thomas Monjalon'; +Cc: dev, maintainers
Good to know that!
> -----Original Message-----
> From: lhffjzh [mailto:lhffjzh@126.com]
> Sent: Saturday, February 28, 2015 12:34 PM
> To: Zhang, Helin; 'Thomas Monjalon'
> Cc: dev@dpdk.org; maintainers@dpdk.org
> Subject: RE: [dpdk-dev] Why only rx queue "0" can receive network packet by
> i40e NIC
>
> Hi Helin,
>
> Thanks a lot for your great help, all of rx queue received network packet after I
> update rss_hf from "ETH_RSS_IP" to " ETH_RSS_PROTO_MASK ".
>
> static struct rte_eth_conf port_conf = {
> .rxmode = {
> .mq_mode = ETH_MQ_RX_RSS,
> .max_rx_pkt_len = ETHER_MAX_LEN,
> .split_hdr_size = 0,
> .header_split = 0, /**< Header Split disabled */
> .hw_ip_checksum = 1, /**< IP checksum offload enabled */
> .hw_vlan_filter = 0, /**< VLAN filtering disabled */
> .jumbo_frame = 0, /**< Jumbo Frame Support disabled */
> .hw_strip_crc = 0, /**< CRC stripped by hardware */
> },
> .rx_adv_conf = {
> .rss_conf = {
> .rss_key = NULL,
> .rss_hf = ETH_RSS_PROTO_MASK,
> },
> },
> .txmode = {
> .mq_mode = ETH_MQ_TX_NONE,
> },
> .fdir_conf.mode = RTE_FDIR_MODE_SIGNATURE, };
>
>
> Regards,
> Haifeng
>
> -----Original Message-----
> From: Zhang, Helin [mailto:helin.zhang@intel.com]
> Sent: Saturday, February 28, 2015 11:18 AM
> To: lhffjzh; 'Thomas Monjalon'
> Cc: dev@dpdk.org; maintainers@dpdk.org
> Subject: RE: [dpdk-dev] Why only rx queue "0" can receive network packet by
> i40e NIC
>
> Hi Haifeng
>
> > -----Original Message-----
> > From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of lhffjzh
> > Sent: Saturday, February 28, 2015 9:48 AM
> > To: 'Thomas Monjalon'
> > Cc: dev@dpdk.org; maintainers@dpdk.org
> > Subject: Re: [dpdk-dev] Why only rx queue "0" can receive network
> > packet
> by
> > i40e NIC
> >
> > Hi Thomas,
> >
> > Thanks very much for your reminder, you give me many help in this mail
> list.
> >
> > The issue with detailed information just as below. but I don't know
> > who is
> the
> > dpdk i40e maintainers? is maintainers@dpdk.org?
> >
> > Hardware list:
> > 2 i40e 40G NICs
> > Xeon E5-2670 v2(10 cores)
> > 32G memory
> >
> > I loopback 2 i40e NICs by QSFP cable, one NIC send UDP network packet
> > by DPDK, and another for receiving. I bind 4 processor's logical cores
> > with 4
> rx
> > queue "0,1,2,3" on receiving NIC, when I start to send packet, only rx
> queue
> > "0"
> > can receive
> > the UDP packet, the others queue always receive nothing. but it is
> > work
> well on
> > ixgbe 10G NICs, I can receive network packet from all rx queues. does
> anyone
> > kindly know why?
> Could you help to list the DPDK version you are using now?
> Two possible reasons:
> 1. UDP rss is not enabled on your board correctly.
> I40e has different rss flags from ixgbe, so I am wondering if you use it
> correctly.
> In addition, this will be unified from 2.0. So I care about the DPDK version.
> 2. The UDP stream is occasionally hit the hash key of queue 0.
> You'd better to try to send your UDP stream with random 5-tuples, to get
> the
> hash value hit different queues randomly.
>
> Regards,
> Helin
>
> >
> >
> > Regards,
> > Haifeng
> >
> > -----Original Message-----
> > From: Thomas Monjalon [mailto:thomas.monjalon@6wind.com]
> > Sent: Friday, February 27, 2015 6:55 PM
> > To: lhffjzh
> > Cc: dev@dpdk.org
> > Subject: Re: Why only rx queue "0" can receive network packet by i40e
> > NIC
> >
> > 2015-02-27 16:47, lhffjzh:
> > > Hi All,
> > >
> > > We use 4 cores loop 4 rx queues on one i40e port, but only rx queue "0"
> > can
> > > receive network packet, do anyone kindly know why? BTW, all of
> > > network packet has same destination ip address but has more than 200
> > > different source ip address.
> >
> > It's possible that you don't have any answer for 2 reasons:
> > - you replied in a thread dedicated to Cisco enic questions
> > - you didn't describe your usage enough to understand your problem
> >
> > I suggest to use the button "new email" instead of "reply all" to
> > start a new question with enough details.
> >
> > Did you noticed you put some Cisco guys in CC instead of putting the
> > Intel responsible for i40e (see MAINTAINERS file)?
> >
>
^ permalink raw reply [flat|nested] 36+ messages in thread
* Re: [dpdk-dev] [PATCH v6 0/6] enicpmd: Cisco Systems Inc. VIC Ethernet PMD
2015-02-27 10:46 ` [dpdk-dev] [PATCH v6 0/6] enicpmd: Cisco Systems Inc. VIC Ethernet PMD Thomas Monjalon
@ 2015-03-11 9:05 ` Sujith Sankar (ssujith)
0 siblings, 0 replies; 36+ messages in thread
From: Sujith Sankar (ssujith) @ 2015-03-11 9:05 UTC (permalink / raw)
To: Thomas Monjalon; +Cc: dev, Prasad Rao (prrao)
On 27/02/15 4:16 pm, "Thomas Monjalon" <thomas.monjalon@6wind.com> wrote:
>2015-02-27 08:09, Sujith Sankar:
>> Hi Thomas,
>>
>> No update on it from my side :-(
>> It would take some more time for me to start working on it (and flow
>> director api) as a few other things are keeping me busy.
>
>It's unfortunate.
>We now have a PMD without documentation and blocking the removal of the
>old flow director API.
>Is there someone else at Cisco able to work on it?
>I think the first priority should be on the flow director API as it has a
>wider impact.
Sure Thomas. I shall pick it up as soon as possible.
Thanks.
>
>Thanks
^ permalink raw reply [flat|nested] 36+ messages in thread
* Re: [dpdk-dev] [PATCH v6 0/6] enicpmd: Cisco Systems Inc. VIC Ethernet PMD
2015-02-27 8:09 ` Sujith Sankar (ssujith)
2015-02-27 8:47 ` [dpdk-dev] Why only rx queue "0" can receive network packet by i40e NIC lhffjzh
2015-02-27 10:46 ` [dpdk-dev] [PATCH v6 0/6] enicpmd: Cisco Systems Inc. VIC Ethernet PMD Thomas Monjalon
@ 2015-05-11 9:25 ` Thomas Monjalon
2 siblings, 0 replies; 36+ messages in thread
From: Thomas Monjalon @ 2015-05-11 9:25 UTC (permalink / raw)
To: Sujith Sankar, dev; +Cc: Prasad Rao
Hi Sujith,
2015-02-27 08:09, Sujith Sankar:
> Hi Thomas,
>
> No update on it from my side :-(
> It would take some more time for me to start working on it (and flow
> director api) as a few other things are keeping me busy.
[...]
Documentation was split to better welcome new NICs:
http://dpdk.org/doc/guides/nics/index.html
It would be nice to have some insights about features, design and performance
numbers for enic.
Thanks
> >>2015-01-21 05:03, Sujith Sankar:
> >>> Hi David,
> >>>
> >>> Apologies for the delay. I was not able to find quality time to finish
> >>>it
> >>> as a few other things have been keeping me busy. But I shall work on
> >>>it
> >>> and provide the doc and the perf details soon.
> >>> In the mean time, it would be great if you could point me to some
> >>>resources
> >>> on running pktgen-dpdk as I was stuck on it.
> >>>
> >>> Thanks,
> >>> -Sujith
> >>>
> >>> From: David Marchand
> >>><david.marchand@6wind.com<mailto:david.marchand@6wind.com>>
> >>> Date: Tuesday, 20 January 2015 4:55 pm
> >>> > Hello Sujith,
> >>> >
> >>> > Any news on the documentation and the performance numbers you said
> >>>you
> >>> > would send ?
> >>> >
> >>> > Thanks.
> >>> >
> >>> > --
> >>> > David Marchand
> >>> >
> >>> > On Thu, Nov 27, 2014 at 4:31 PM, Thomas Monjalon
> >>> > <thomas.monjalon@6wind.com<mailto:thomas.monjalon@6wind.com>> wrote:
> >>> > > 2014-11-27 04:27, Sujith Sankar:
> >>> > > > Thanks Thomas, David and Neil !
> >>> > > >
> >>> > > > I shall work on finishing the documentation.
> >>> > > > About that, you had mentioned that you wanted it in doc/drivers/
> >>>path.
> >>> > > > Could I send a patch with documentation in the path
> >>> > > > doc/drivers/enicpmd/
> >>> > > > ?
> >>> > >
> >>> > > Yes.
> >>> > > I'd prefer doc/drivers/enic/ but it's a detail ;)
> >>> > > The format must be sphinx rst to allow web publishing.
> >>> > >
> >>> > > It would be great to have some design documentation of every
> >>>drivers
> >>> > > in doc/drivers.
> >>> > >
> >>> > > Thanks
> >>> > > --
> >>> > > Thomas
> >
>
^ permalink raw reply [flat|nested] 36+ messages in thread
* Re: [dpdk-dev] Why only rx queue "0" can receive network packet by i40e NIC
2015-02-28 14:33 ` Zhang, Helin
@ 2015-07-23 0:47 ` Jeff Venable, Sr.
2015-07-23 0:56 ` Zhang, Helin
0 siblings, 1 reply; 36+ messages in thread
From: Jeff Venable, Sr. @ 2015-07-23 0:47 UTC (permalink / raw)
To: Zhang, Helin, lhffjzh, 'Thomas Monjalon'; +Cc: dev
Is the I40E incapable of operating RSS with ETH_RSS_IP (i.e. hashing without L4 ports)?
Thanks,
Jeff
-----Original Message-----
From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Zhang, Helin
Sent: Saturday, February 28, 2015 6:34 AM
To: lhffjzh; 'Thomas Monjalon'
Cc: dev@dpdk.org; maintainers@dpdk.org
Subject: Re: [dpdk-dev] Why only rx queue "0" can receive network packet by i40e NIC
Good to know that!
> -----Original Message-----
> From: lhffjzh [mailto:lhffjzh@126.com]
> Sent: Saturday, February 28, 2015 12:34 PM
> To: Zhang, Helin; 'Thomas Monjalon'
> Cc: dev@dpdk.org; maintainers@dpdk.org
> Subject: RE: [dpdk-dev] Why only rx queue "0" can receive network
> packet by i40e NIC
>
> Hi Helin,
>
> Thanks a lot for your great help, all of rx queue received network
> packet after I update rss_hf from "ETH_RSS_IP" to " ETH_RSS_PROTO_MASK ".
>
> static struct rte_eth_conf port_conf = {
> .rxmode = {
> .mq_mode = ETH_MQ_RX_RSS,
> .max_rx_pkt_len = ETHER_MAX_LEN,
> .split_hdr_size = 0,
> .header_split = 0, /**< Header Split disabled */
> .hw_ip_checksum = 1, /**< IP checksum offload enabled */
> .hw_vlan_filter = 0, /**< VLAN filtering disabled */
> .jumbo_frame = 0, /**< Jumbo Frame Support disabled */
> .hw_strip_crc = 0, /**< CRC stripped by hardware */
> },
> .rx_adv_conf = {
> .rss_conf = {
> .rss_key = NULL,
> .rss_hf = ETH_RSS_PROTO_MASK,
> },
> },
> .txmode = {
> .mq_mode = ETH_MQ_TX_NONE,
> },
> .fdir_conf.mode = RTE_FDIR_MODE_SIGNATURE, };
>
>
> Regards,
> Haifeng
^ permalink raw reply [flat|nested] 36+ messages in thread
* Re: [dpdk-dev] Why only rx queue "0" can receive network packet by i40e NIC
2015-07-23 0:47 ` Jeff Venable, Sr.
@ 2015-07-23 0:56 ` Zhang, Helin
2015-07-30 1:58 ` Jeff Venable, Sr.
0 siblings, 1 reply; 36+ messages in thread
From: Zhang, Helin @ 2015-07-23 0:56 UTC (permalink / raw)
To: Jeff Venable, Sr., lhffjzh, 'Thomas Monjalon'; +Cc: dev
> -----Original Message-----
> From: Jeff Venable, Sr. [mailto:jeff@vectranetworks.com]
> Sent: Wednesday, July 22, 2015 5:47 PM
> To: Zhang, Helin; lhffjzh; 'Thomas Monjalon'
> Cc: dev@dpdk.org
> Subject: RE: [dpdk-dev] Why only rx queue "0" can receive network packet by
> i40e NIC
>
> Is the I40E incapable of operating RSS with ETH_RSS_IP (i.e. hashing without L4
> ports)?
Why do you think so? Sorry, I am a bit confused.
ETH_RSS_IP is a superset of all IP-based RSS types; please see the RSS types
listed in rte_ethdev.h.
The RSS types supported by each NIC can be queried via the
'flow_type_rss_offloads' field of 'struct rte_eth_dev_info'.
Regards,
Helin
^ permalink raw reply [flat|nested] 36+ messages in thread
* Re: [dpdk-dev] Why only rx queue "0" can receive network packet by i40e NIC
2015-07-23 0:56 ` Zhang, Helin
@ 2015-07-30 1:58 ` Jeff Venable, Sr.
2015-07-31 15:35 ` Zhang, Helin
0 siblings, 1 reply; 36+ messages in thread
From: Jeff Venable, Sr. @ 2015-07-30 1:58 UTC (permalink / raw)
To: Zhang, Helin; +Cc: dev
Hi Helin,
We do not want RSS to include L4 ports in the hash, because packet fragments would then get routed to queue #0 and be more difficult to work with. We use a model in which multiple CPUs pull from the NIC queues independently with no shared state, so each 'pipeline' keeps private fragment-reassembly state for the sessions it manages.
Getting the RSS Toeplitz hash to work on { source_ip, dest_ip } tuples alone, using a symmetric RSS key, is important to us. This has worked with every other Intel NIC we have tested under DPDK; the i40e PMD with the Intel X710-DA4 is the first exception. The Microsoft RSS specification allows for it.
With the i40e PMD, we have been unable to enable this RSS configuration: we cannot find any reference to flags for this RSS mode in either the source code or the XL710 controller datasheet. Unless we can achieve feature parity with the other Intel NICs, we don't want to write special-case code for this one driver; that makes the XL710 controller unusable for us and seems contrary to the intent of the DPDK APIs, which are meant to abstract this behavior.
Do you have any suggestions?
Thanks kindly,
Jeff
^ permalink raw reply [flat|nested] 36+ messages in thread
* Re: [dpdk-dev] Why only rx queue "0" can receive network packet by i40e NIC
2015-07-30 1:58 ` Jeff Venable, Sr.
@ 2015-07-31 15:35 ` Zhang, Helin
0 siblings, 0 replies; 36+ messages in thread
From: Zhang, Helin @ 2015-07-31 15:35 UTC (permalink / raw)
To: Jeff Venable, Sr.; +Cc: dev
> -----Original Message-----
> From: Jeff Venable, Sr. [mailto:jeff@vectranetworks.com]
> Sent: Wednesday, July 29, 2015 6:59 PM
> To: Zhang, Helin
> Cc: dev@dpdk.org; lhffjzh; 'Thomas Monjalon'
> Subject: RE: [dpdk-dev] Why only rx queue "0" can receive network packet by
> i40e NIC
>
> Hi Helin,
>
> We do not want RSS to include L4 ports in the hash because packet fragments
> would get routed to queue #0 and would be more difficult to work with. We are
> using the model where multiple CPUs are pulling from the NIC queues
> independently with no shared state, so each 'pipeline' has private fragment
> reassembly state for the sessions it is managing.
>
> Getting RSS Toeplitz hash to work on { source_ip, dest_ip } tuples only using a
> symmetric rss-key is important. This works properly with all other Intel NICs in
> the DPDK thus far that we have tested until the i40E PMD with the Intel X710-DA4.
> The Microsoft RSS specification allows for this.
This is a bit similar to some complaints about flow director. I will check
with somebody else to see whether there is anything we can do here; as I
recall, it might be a bit hard.
I will let you know if I have any update.
Thanks,
Helin
>
> With the i40E PMD, we have been unsuccessful at enabling this RSS configuration.
> From the source code and XL710 controller datasheet, we cannot find any
> reference to the flags for this RSS mode. Unless we can achieve feature parity
> with the other Intel NICs, we don't want to write special case code for this one
> driver which makes the XL710 controller unusable for us and seems contrary to
> the intent of the DPDK APIs which are abstracting this behavior.
>
> Do you have any suggestions?
>
> Thanks kindly,
>
> Jeff
^ permalink raw reply [flat|nested] 36+ messages in thread
end of thread, other threads:[~2015-07-31 15:35 UTC | newest]
Thread overview: 36+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2014-11-25 17:26 [dpdk-dev] [PATCH v6 0/6] enicpmd: Cisco Systems Inc. VIC Ethernet PMD Sujith Sankar
2014-11-25 17:26 ` [dpdk-dev] [PATCH v6 1/6] enicpmd: License text Sujith Sankar
2014-11-25 17:26 ` [dpdk-dev] [PATCH v6 2/6] enicpmd: Makefile Sujith Sankar
2014-11-25 17:26 ` [dpdk-dev] [PATCH v6 3/6] enicpmd: VNIC common code partially shared with ENIC kernel mode driver Sujith Sankar
2014-11-25 17:26 ` [dpdk-dev] [PATCH v6 4/6] enicpmd: pmd specific code Sujith Sankar
2014-11-27 14:49 ` Wodkowski, PawelX
2014-11-25 17:26 ` [dpdk-dev] [PATCH v6 5/6] enicpmd: DPDK-ENIC PMD interface Sujith Sankar
2014-12-29 8:15 ` Wu, Jingjing
2014-12-30 4:45 ` Sujith Sankar (ssujith)
2015-01-06 9:41 ` Thomas Monjalon
2015-01-30 8:53 ` Wu, Jingjing
2014-11-25 17:26 ` [dpdk-dev] [PATCH v6 6/6] enicpmd: DPDK changes for accommodating ENIC PMD Sujith Sankar
2014-11-25 19:51 ` [dpdk-dev] [PATCH v6 0/6] enicpmd: Cisco Systems Inc. VIC Ethernet PMD David Marchand
2014-11-26 22:11 ` Thomas Monjalon
2014-11-27 4:27 ` Sujith Sankar (ssujith)
2014-11-27 15:31 ` Thomas Monjalon
2015-01-20 11:25 ` David Marchand
2015-01-21 5:03 ` Sujith Sankar (ssujith)
2015-02-26 11:49 ` Thomas Monjalon
2015-02-26 13:08 ` Wiles, Keith
2015-02-27 8:09 ` Sujith Sankar (ssujith)
2015-02-27 8:47 ` [dpdk-dev] Why only rx queue "0" can receive network packet by i40e NIC lhffjzh
2015-02-27 9:03 ` lhffjzh
2015-02-27 10:55 ` Thomas Monjalon
2015-02-28 1:47 ` lhffjzh
2015-02-28 3:17 ` Zhang, Helin
2015-02-28 4:33 ` lhffjzh
2015-02-28 14:33 ` Zhang, Helin
2015-07-23 0:47 ` Jeff Venable, Sr.
2015-07-23 0:56 ` Zhang, Helin
2015-07-30 1:58 ` Jeff Venable, Sr.
2015-07-31 15:35 ` Zhang, Helin
2015-02-27 10:46 ` [dpdk-dev] [PATCH v6 0/6] enicpmd: Cisco Systems Inc. VIC Ethernet PMD Thomas Monjalon
2015-03-11 9:05 ` Sujith Sankar (ssujith)
2015-05-11 9:25 ` Thomas Monjalon
2014-11-25 20:11 ` Neil Horman
This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).