* [dpdk-dev] [PATCH v2 00/17] ixgbe/mbuf: add TSO support
@ 2014-05-19 13:56 Olivier Matz
2014-05-19 13:56 ` [dpdk-dev] [PATCH v2 01/17] igb/ixgbe: fix IP checksum calculation Olivier Matz
` (18 more replies)
0 siblings, 19 replies; 28+ messages in thread
From: Olivier Matz @ 2014-05-19 13:56 UTC (permalink / raw)
To: dev
This series adds TSO support to the ixgbe DPDK driver. It is the second
version of the series; the first version (RFC) was posted 10 days ago
and discussed on the list [1]. The list of changes is provided at the
end of this cover letter.
Also, as discussed previously on the list [2], one problem is that
rte_mbuf today does not have enough room to store the information
required to implement this feature:
- a new ol_flag
- the MSS
- the L4 header length
A solution would be to increase the size of the mbuf to 2 cache lines,
but that could have a bad impact on performance. This series therefore
proposes a rework that drastically reduces the size of the rte_mbuf
structures before implementing TSO, avoiding the need to grow the mbuf
to 128 bytes.
After the rework of the mbuf structures, the size of the rte_mbuf
structure is reduced by 9 bytes. Implementing TSO requires doubling
the size of ol_flags (16 to 32 bits) and doubling the size of the
offload information in order to add the MSS and the L4 header length
(32 to 64 bits). At the end of the whole series, sizeof(struct
rte_mbuf) is still 64 bytes and 4 bytes are available for future use.
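For illustration, the size constraint can be enforced at build time. A
minimal sketch using the existing RTE_BUILD_BUG_ON() macro (not part
of the series; the 64-byte figure assumes a 64-byte cache line
target):

#include <rte_common.h>
#include <rte_mbuf.h>

/* illustrative check: fail the build if the reworked mbuf ever
 * stops fitting in a single cache line */
static inline void
mbuf_size_check(void)
{
	RTE_BUILD_BUG_ON(sizeof(struct rte_mbuf) != 64);
}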
This rework implies a lot of modifications in the mbuf structure, and
therefore changes in the applications that access the mbuf structure
fields directly instead of using the API functions (sometimes there is
no function). In my opinion, this is the proper moment for such an
evolution, as the 1.7.0 window is open.
About TSO, the new mbuf fields try to be generic enough to apply to
other hardware in the future. To delegate the TCP segmentation to the
hardware, the user has to (see the sketch after this list):
- set the PKT_TX_TCP_SEG flag in mbuf->ol_flags (this flag implies
PKT_TX_IP_CKSUM and PKT_TX_TCP_CKSUM)
- fill the mbuf->hw_offload information: l2_len, l3_len, l4_len, mss
- calculate the pseudo-header checksum and set it in the TCP header,
as required when doing hardware TCP checksum offload
- set the IP checksum to 0
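For illustration, a minimal sketch of preparing an IPv4/TCP mbuf for
TSO with the new fields (field and flag names are the ones introduced
by this series, though the exact accessor spelling may differ;
get_psd_sum() is a hypothetical helper computing the pseudo-header
checksum, similar to what the testpmd csum engine does):

static void
prepare_tso(struct rte_mbuf *m, uint16_t mss)
{
	struct ipv4_hdr *ip;
	struct tcp_hdr *tcp;

	/* lengths and MSS used by the hardware to split the payload */
	m->hw_offload.l2_len = sizeof(struct ether_hdr);
	m->hw_offload.l3_len = sizeof(struct ipv4_hdr);
	m->hw_offload.l4_len = sizeof(struct tcp_hdr); /* no TCP options */
	m->hw_offload.mss = mss;

	/* PKT_TX_TCP_SEG implies PKT_TX_IP_CKSUM and PKT_TX_TCP_CKSUM */
	m->ol_flags |= PKT_TX_TCP_SEG;

	ip = (struct ipv4_hdr *)(rte_pktmbuf_mtod(m, char *) +
		m->hw_offload.l2_len);
	ip->hdr_checksum = 0; /* filled in by hardware */
	tcp = (struct tcp_hdr *)((char *)ip + m->hw_offload.l3_len);
	tcp->cksum = get_psd_sum(ip); /* pseudo-header checksum */
}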
Compilation of DPDK and examples is tested for the following
targets: x86_64-*-linuxapp-gcc, i686-*-linuxapp-gcc, x86_64-*-bsdapp-gcc
The mbuf rework series is validated with autotests:
cd dpdk.org/
make install T=x86_64-default-linuxapp-gcc
cd x86_64-default-linuxapp-gcc/
modprobe uio
insmod kmod/igb_uio.ko
python ../tools/igb_uio_bind.py -b igb_uio 0000:02:00.0
echo 0 > /proc/sys/kernel/randomize_va_space
echo 1000 > /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages
echo 1000 > /sys/devices/system/node/node1/hugepages/hugepages-2048kB/nr_hugepages
mount -t hugetlbfs none /mnt/huge
make test
TSO is validated with IPv4 and IPv6 with testpmd (see the commit log of
patch "ixgbe: support TCP segmentation offload" for details).
The absence of performance regression has been verified with the
6WINDGate fast path and with test-pmd [3].
[1] http://dpdk.org/ml/archives/dev/2014-May/002322.html
[2] http://dpdk.org/ml/archives/dev/2013-October/thread.html#572
[3] http://dpdk.org/ml/archives/dev/2014-May/002516.html
Changes included in v2:
- rebase on head, resolve conflicts in 3 test-pmd engines
- add more comments in rte_mbuf.h about how to use TSO
- ixgbe: advertise the TSO feature
- ixgbe: small optimization, move some TSO code into the if (tx_ol_req) block
- split "mbuf: replace data pointer by an offset" (cosmetics vs functional)
- split "ixgbe/mbuf: add TSO support" (ixgbe vs generic changes)
- modifications of external PMDs (memnic, virtio, vmxnet3)
- I did not add the new checksum flags PKT_RX_L4_CKSUM_GOOD and
PKT_RX_IP_CKSUM_GOOD as proposed on the list. The patch is ready but
I don't think it should be included in this series.
Olivier Matz (17):
dpdk igb/ixgbe: fix IP checksum calculation
dpdk mbuf: rename RTE_MBUF_SCATTER_GATHER into RTE_MBUF_REFCNT
dpdk mbuf: remove rte_ctrlmbuf
dpdk mbuf: remove the rte_pktmbuf structure
dpdk mbuf: merge physaddr and buf_len in a bitfield
dpdk mbuf: cosmetic changes in rte_mbuf structure
dpdk mbuf: replace data pointer by an offset
dpdk mbuf: add functions to get the name of an ol_flag
dpdk mbuf: change ol_flags to 32 bits
dpdk mbuf: rename vlan_macip_len in hw_offload and increase
dpdk testpmd: modify source address to validate checksum
dpdk mbuf: generic support of TCP segmentation offload
dpdk ixgbe: support TCP segmentation offload
virtio-net-pmd pmd: adapt to new rte_mbuf structure
vmxnet3-usermap pmd: remove support of old dpdk versions
vmxnet3-usermap pmd: adapt to new rte_mbuf structure
memnic pmd: adapt to new rte_mbuf structure
Diffstat:
dpdk.org/app/test-pmd/cmdline.c | 60 -
dpdk.org/app/test-pmd/config.c | 18
dpdk.org/app/test-pmd/csumonly.c | 56
dpdk.org/app/test-pmd/flowgen.c | 26
dpdk.org/app/test-pmd/icmpecho.c | 6
dpdk.org/app/test-pmd/ieee1588fwd.c | 6
dpdk.org/app/test-pmd/macfwd-retry.c | 4
dpdk.org/app/test-pmd/macfwd.c | 16
dpdk.org/app/test-pmd/macswap.c | 16
dpdk.org/app/test-pmd/rxonly.c | 53
dpdk.org/app/test-pmd/testpmd.c | 14
dpdk.org/app/test-pmd/testpmd.h | 15
dpdk.org/app/test-pmd/txonly.c | 59 -
dpdk.org/app/test/commands.c | 2
dpdk.org/app/test/test_mbuf.c | 106 -
dpdk.org/app/test/test_sched.c | 4
dpdk.org/config/defconfig_i686-default-linuxapp-gcc | 2
dpdk.org/config/defconfig_i686-default-linuxapp-icc | 2
dpdk.org/config/defconfig_x86_64-default-bsdapp-gcc | 2
dpdk.org/config/defconfig_x86_64-default-linuxapp-gcc | 2
dpdk.org/config/defconfig_x86_64-default-linuxapp-icc | 2
dpdk.org/doc/doxy-api.conf | 2
dpdk.org/examples/dpdk_qat/crypto.c | 22
dpdk.org/examples/dpdk_qat/main.c | 2
dpdk.org/examples/exception_path/main.c | 13
dpdk.org/examples/ip_reassembly/ipv4_rsmbl.h | 30
dpdk.org/examples/ip_reassembly/main.c | 10
dpdk.org/examples/ipv4_frag/Makefile | 4
dpdk.org/examples/ipv4_frag/main.c | 4
dpdk.org/examples/ipv4_frag/rte_ipv4_frag.h | 42
dpdk.org/examples/ipv4_multicast/Makefile | 4
dpdk.org/examples/ipv4_multicast/main.c | 16
dpdk.org/examples/l3fwd-power/main.c | 2
dpdk.org/examples/l3fwd-vf/main.c | 2
dpdk.org/examples/l3fwd/main.c | 10
dpdk.org/examples/load_balancer/runtime.c | 2
dpdk.org/examples/multi_process/client_server_mp/mp_client/client.c | 2
dpdk.org/examples/quota_watermark/qw/main.c | 4
dpdk.org/examples/vhost/main.c | 43
dpdk.org/examples/vhost_xen/main.c | 24
dpdk.org/lib/librte_eal/bsdapp/eal/include/exec-env/rte_kni_common.h | 2
dpdk.org/lib/librte_eal/linuxapp/eal/include/exec-env/rte_kni_common.h | 2
dpdk.org/lib/librte_mbuf/rte_mbuf.c | 105 -
dpdk.org/lib/librte_mbuf/rte_mbuf.h | 580 ++++------
dpdk.org/lib/librte_pmd_e1000/em_rxtx.c | 163 +-
dpdk.org/lib/librte_pmd_e1000/igb_rxtx.c | 212 +--
dpdk.org/lib/librte_pmd_ixgbe/ixgbe_ethdev.c | 3
dpdk.org/lib/librte_pmd_ixgbe/ixgbe_rxtx.c | 419 ++++---
dpdk.org/lib/librte_pmd_ixgbe/ixgbe_rxtx.h | 10
dpdk.org/lib/librte_pmd_pcap/rte_eth_pcap.c | 14
dpdk.org/lib/librte_pmd_virtio/virtio_rxtx.c | 20
dpdk.org/lib/librte_pmd_virtio/virtqueue.h | 11
dpdk.org/lib/librte_pmd_vmxnet3/vmxnet3_rxtx.c | 37
dpdk.org/lib/librte_pmd_xenvirt/rte_eth_xenvirt.c | 14
dpdk.org/lib/librte_pmd_xenvirt/virtqueue.h | 4
dpdk.org/lib/librte_sched/rte_sched.c | 14
dpdk.org/lib/librte_sched/rte_sched.h | 10
memnic/pmd/pmd_memnic.c | 14
vmxnet3-usermap/pmd/vmxnet3.c | 119 --
virtio-net-pmd/virtio_user.c | 44
60 files changed, 1217 insertions(+), 1289 deletions(-)
--
1.9.2
* [dpdk-dev] [PATCH v2 01/17] igb/ixgbe: fix IP checksum calculation
2014-05-19 13:56 [dpdk-dev] [PATCH v2 00/17] ixgbe/mbuf: add TSO support Olivier Matz
@ 2014-05-19 13:56 ` Olivier Matz
2014-05-19 13:56 ` [dpdk-dev] [PATCH v2 02/17] mbuf: rename RTE_MBUF_SCATTER_GATHER into RTE_MBUF_REFCNT Olivier Matz
` (17 subsequent siblings)
18 siblings, 0 replies; 28+ messages in thread
From: Olivier Matz @ 2014-05-19 13:56 UTC (permalink / raw)
To: dev
According to the Intel® 82599 10 GbE Controller Datasheet (Table
7-38), both the L2 and L3 lengths are needed to offload the IP
checksum.
Note that the e1000 driver does not need to be patched as it already
contains the fix.
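For context, a sketch of the comparison masks involved (the values
here are illustrative of how igb/ixgbe pack the lengths, not copied
from the driver): vlan_macip_len packs the VLAN tag, the L2 (MAC)
length and the L3 (IP) length into a single 32-bit word, and a cached
TX context descriptor may only be reused for IP checksum offload when
both length fields match:

#define TX_MAC_LEN_CMP_MASK   0x0000FE00 /* L2 (MAC) length bits */
#define TX_IP_LEN_CMP_MASK    0x000001FF /* L3 (IP) length bits */
/* compare both the L2 and L3 lengths, not the L2 length alone */
#define TX_MACIP_LEN_CMP_MASK (TX_MAC_LEN_CMP_MASK | TX_IP_LEN_CMP_MASK)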
Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
---
lib/librte_pmd_e1000/igb_rxtx.c | 2 +-
lib/librte_pmd_ixgbe/ixgbe_rxtx.c | 2 +-
2 files changed, 2 insertions(+), 2 deletions(-)
diff --git a/lib/librte_pmd_e1000/igb_rxtx.c b/lib/librte_pmd_e1000/igb_rxtx.c
index 4608595..b3c8149 100644
--- a/lib/librte_pmd_e1000/igb_rxtx.c
+++ b/lib/librte_pmd_e1000/igb_rxtx.c
@@ -233,7 +233,7 @@ igbe_set_xmit_ctx(struct igb_tx_queue* txq,
if (ol_flags & PKT_TX_IP_CKSUM) {
type_tucmd_mlhl = E1000_ADVTXD_TUCMD_IPV4;
- cmp_mask |= TX_MAC_LEN_CMP_MASK;
+ cmp_mask |= TX_MACIP_LEN_CMP_MASK;
}
/* Specify which HW CTX to upload. */
diff --git a/lib/librte_pmd_ixgbe/ixgbe_rxtx.c b/lib/librte_pmd_ixgbe/ixgbe_rxtx.c
index 37d02aa..f217ca1 100644
--- a/lib/librte_pmd_ixgbe/ixgbe_rxtx.c
+++ b/lib/librte_pmd_ixgbe/ixgbe_rxtx.c
@@ -367,7 +367,7 @@ ixgbe_set_xmit_ctx(struct igb_tx_queue* txq,
if (ol_flags & PKT_TX_IP_CKSUM) {
type_tucmd_mlhl = IXGBE_ADVTXD_TUCMD_IPV4;
- cmp_mask |= TX_MAC_LEN_CMP_MASK;
+ cmp_mask |= TX_MACIP_LEN_CMP_MASK;
}
/* Specify which HW CTX to upload. */
--
1.9.2
* [dpdk-dev] [PATCH v2 02/17] mbuf: rename RTE_MBUF_SCATTER_GATHER into RTE_MBUF_REFCNT
2014-05-19 13:56 [dpdk-dev] [PATCH v2 00/17] ixgbe/mbuf: add TSO support Olivier Matz
2014-05-19 13:56 ` [dpdk-dev] [PATCH v2 01/17] igb/ixgbe: fix IP checksum calculation Olivier Matz
@ 2014-05-19 13:56 ` Olivier Matz
2014-05-19 13:56 ` [dpdk-dev] [PATCH v2 03/17] mbuf: remove rte_ctrlmbuf Olivier Matz
` (16 subsequent siblings)
18 siblings, 0 replies; 28+ messages in thread
From: Olivier Matz @ 2014-05-19 13:56 UTC (permalink / raw)
To: dev
RTE_MBUF_SCATTER_GATHER is not the proper name for the feature it
provides: "scatter gather" means that data is stored in several
buffers, whereas the feature actually provides a reference counter for
mbufs. RTE_MBUF_REFCNT is a better name.
The macro RTE_MBUF_SCATTER_GATHER is poisoned to ensure that any
driver or application still using it sees this change at build time.
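For illustration, a small sketch of the effect of poisoning (GCC
rejects any later occurrence of the identifier, including in
preprocessor conditionals):

#pragma GCC poison RTE_MBUF_SCATTER_GATHER

#ifdef RTE_MBUF_SCATTER_GATHER /* error: attempt to use poisoned
				* "RTE_MBUF_SCATTER_GATHER" */
#endif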
Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
---
app/test/test_mbuf.c | 16 +++++++-------
config/defconfig_i686-default-linuxapp-gcc | 2 +-
config/defconfig_i686-default-linuxapp-icc | 2 +-
config/defconfig_x86_64-default-bsdapp-gcc | 2 +-
config/defconfig_x86_64-default-linuxapp-gcc | 2 +-
config/defconfig_x86_64-default-linuxapp-icc | 2 +-
doc/doxy-api.conf | 2 +-
examples/ipv4_frag/Makefile | 4 ++--
examples/ipv4_multicast/Makefile | 4 ++--
lib/librte_mbuf/rte_mbuf.c | 2 +-
lib/librte_mbuf/rte_mbuf.h | 31 +++++++++++++++-------------
11 files changed, 36 insertions(+), 33 deletions(-)
diff --git a/app/test/test_mbuf.c b/app/test/test_mbuf.c
index 6ee350d..bfcac05 100644
--- a/app/test/test_mbuf.c
+++ b/app/test/test_mbuf.c
@@ -82,7 +82,7 @@
static struct rte_mempool *pktmbuf_pool = NULL;
static struct rte_mempool *ctrlmbuf_pool = NULL;
-#if defined RTE_MBUF_SCATTER_GATHER && defined RTE_MBUF_REFCNT_ATOMIC
+#if defined RTE_MBUF_REFCNT && defined RTE_MBUF_REFCNT_ATOMIC
static struct rte_mempool *refcnt_pool = NULL;
static struct rte_ring *refcnt_mbuf_ring = NULL;
@@ -365,7 +365,7 @@ fail:
static int
testclone_testupdate_testdetach(void)
{
-#ifndef RTE_MBUF_SCATTER_GATHER
+#ifndef RTE_MBUF_REFCNT
return 0;
#else
struct rte_mbuf *mc = NULL;
@@ -406,7 +406,7 @@ fail:
if (mc)
rte_pktmbuf_free(mc);
return -1;
-#endif /* RTE_MBUF_SCATTER_GATHER */
+#endif /* RTE_MBUF_REFCNT */
}
#undef GOTO_FAIL
@@ -439,7 +439,7 @@ test_pktmbuf_pool(void)
printf("Error pool not empty");
ret = -1;
}
-#ifdef RTE_MBUF_SCATTER_GATHER
+#ifdef RTE_MBUF_REFCNT
extra = rte_pktmbuf_clone(m[0], pktmbuf_pool);
if(extra != NULL) {
printf("Error pool not empty");
@@ -548,11 +548,11 @@ test_pktmbuf_free_segment(void)
/*
* Stress test for rte_mbuf atomic refcnt.
* Implies that:
- * RTE_MBUF_SCATTER_GATHER and RTE_MBUF_REFCNT_ATOMIC are both defined.
+ * RTE_MBUF_REFCNT and RTE_MBUF_REFCNT_ATOMIC are both defined.
* For more efficency, recomended to run with RTE_LIBRTE_MBUF_DEBUG defined.
*/
-#if defined RTE_MBUF_SCATTER_GATHER && defined RTE_MBUF_REFCNT_ATOMIC
+#if defined RTE_MBUF_REFCNT && defined RTE_MBUF_REFCNT_ATOMIC
static int
test_refcnt_slave(__attribute__((unused)) void *arg)
@@ -657,7 +657,7 @@ test_refcnt_master(void)
static int
test_refcnt_mbuf(void)
{
-#if defined RTE_MBUF_SCATTER_GATHER && defined RTE_MBUF_REFCNT_ATOMIC
+#if defined RTE_MBUF_REFCNT && defined RTE_MBUF_REFCNT_ATOMIC
unsigned lnum, master, slave, tref;
@@ -808,7 +808,7 @@ test_failing_mbuf_sanity_check(void)
return -1;
}
-#ifdef RTE_MBUF_SCATTER_GATHER
+#ifdef RTE_MBUF_REFCNT
badbuf = *buf;
badbuf.refcnt = 0;
if (verify_mbuf_check_panics(&badbuf)) {
diff --git a/config/defconfig_i686-default-linuxapp-gcc b/config/defconfig_i686-default-linuxapp-gcc
index 14bd3d1..dd0f0d0 100644
--- a/config/defconfig_i686-default-linuxapp-gcc
+++ b/config/defconfig_i686-default-linuxapp-gcc
@@ -235,7 +235,7 @@ CONFIG_RTE_LIBRTE_MEMPOOL_DEBUG=n
#
CONFIG_RTE_LIBRTE_MBUF=y
CONFIG_RTE_LIBRTE_MBUF_DEBUG=n
-CONFIG_RTE_MBUF_SCATTER_GATHER=y
+CONFIG_RTE_MBUF_REFCNT=y
CONFIG_RTE_MBUF_REFCNT_ATOMIC=y
CONFIG_RTE_PKTMBUF_HEADROOM=128
diff --git a/config/defconfig_i686-default-linuxapp-icc b/config/defconfig_i686-default-linuxapp-icc
index ec3386e..ef11051 100644
--- a/config/defconfig_i686-default-linuxapp-icc
+++ b/config/defconfig_i686-default-linuxapp-icc
@@ -234,7 +234,7 @@ CONFIG_RTE_LIBRTE_MEMPOOL_DEBUG=n
#
CONFIG_RTE_LIBRTE_MBUF=y
CONFIG_RTE_LIBRTE_MBUF_DEBUG=n
-CONFIG_RTE_MBUF_SCATTER_GATHER=y
+CONFIG_RTE_MBUF_REFCNT=y
CONFIG_RTE_MBUF_REFCNT_ATOMIC=y
CONFIG_RTE_PKTMBUF_HEADROOM=128
diff --git a/config/defconfig_x86_64-default-bsdapp-gcc b/config/defconfig_x86_64-default-bsdapp-gcc
index d960e1d..f5f2140 100644
--- a/config/defconfig_x86_64-default-bsdapp-gcc
+++ b/config/defconfig_x86_64-default-bsdapp-gcc
@@ -210,7 +210,7 @@ CONFIG_RTE_LIBRTE_MEMPOOL_DEBUG=n
#
CONFIG_RTE_LIBRTE_MBUF=y
CONFIG_RTE_LIBRTE_MBUF_DEBUG=n
-CONFIG_RTE_MBUF_SCATTER_GATHER=y
+CONFIG_RTE_MBUF_REFCNT=y
CONFIG_RTE_MBUF_REFCNT_ATOMIC=y
CONFIG_RTE_PKTMBUF_HEADROOM=128
diff --git a/config/defconfig_x86_64-default-linuxapp-gcc b/config/defconfig_x86_64-default-linuxapp-gcc
index f11ffbf..25a7e1a 100644
--- a/config/defconfig_x86_64-default-linuxapp-gcc
+++ b/config/defconfig_x86_64-default-linuxapp-gcc
@@ -237,7 +237,7 @@ CONFIG_RTE_LIBRTE_MEMPOOL_DEBUG=n
#
CONFIG_RTE_LIBRTE_MBUF=y
CONFIG_RTE_LIBRTE_MBUF_DEBUG=n
-CONFIG_RTE_MBUF_SCATTER_GATHER=y
+CONFIG_RTE_MBUF_REFCNT=y
CONFIG_RTE_MBUF_REFCNT_ATOMIC=y
CONFIG_RTE_PKTMBUF_HEADROOM=128
diff --git a/config/defconfig_x86_64-default-linuxapp-icc b/config/defconfig_x86_64-default-linuxapp-icc
index 4eaca4c..d209874 100644
--- a/config/defconfig_x86_64-default-linuxapp-icc
+++ b/config/defconfig_x86_64-default-linuxapp-icc
@@ -234,7 +234,7 @@ CONFIG_RTE_LIBRTE_MEMPOOL_DEBUG=n
#
CONFIG_RTE_LIBRTE_MBUF=y
CONFIG_RTE_LIBRTE_MBUF_DEBUG=n
-CONFIG_RTE_MBUF_SCATTER_GATHER=y
+CONFIG_RTE_MBUF_REFCNT=y
CONFIG_RTE_MBUF_REFCNT_ATOMIC=y
CONFIG_RTE_PKTMBUF_HEADROOM=128
diff --git a/doc/doxy-api.conf b/doc/doxy-api.conf
index 642f77a..92e48c3 100644
--- a/doc/doxy-api.conf
+++ b/doc/doxy-api.conf
@@ -49,7 +49,7 @@ FILE_PATTERNS = rte_*.h \
cmdline.h
PREDEFINED = __DOXYGEN__ \
__attribute__(x)= \
- RTE_MBUF_SCATTER_GATHER
+ RTE_MBUF_REFCNT
OPTIMIZE_OUTPUT_FOR_C = YES
ENABLE_PREPROCESSING = YES
diff --git a/examples/ipv4_frag/Makefile b/examples/ipv4_frag/Makefile
index 97fa452..b964017 100644
--- a/examples/ipv4_frag/Makefile
+++ b/examples/ipv4_frag/Makefile
@@ -39,8 +39,8 @@ RTE_TARGET ?= x86_64-default-linuxapp-gcc
include $(RTE_SDK)/mk/rte.vars.mk
-ifneq ($(CONFIG_RTE_MBUF_SCATTER_GATHER),y)
-$(error This application requires RTE_MBUF_SCATTER_GATHER to be enabled)
+ifneq ($(CONFIG_RTE_MBUF_REFCNT),y)
+$(error This application requires RTE_MBUF_REFCNT to be enabled)
endif
# binary name
diff --git a/examples/ipv4_multicast/Makefile b/examples/ipv4_multicast/Makefile
index f76f8d8..5a9612d 100644
--- a/examples/ipv4_multicast/Makefile
+++ b/examples/ipv4_multicast/Makefile
@@ -39,8 +39,8 @@ RTE_TARGET ?= x86_64-default-linuxapp-gcc
include $(RTE_SDK)/mk/rte.vars.mk
-ifneq ($(CONFIG_RTE_MBUF_SCATTER_GATHER),y)
-$(error This application requires RTE_MBUF_SCATTER_GATHER to be enabled)
+ifneq ($(CONFIG_RTE_MBUF_REFCNT),y)
+$(error This application requires RTE_MBUF_REFCNT to be enabled)
endif
# binary name
diff --git a/lib/librte_mbuf/rte_mbuf.c b/lib/librte_mbuf/rte_mbuf.c
index a72e9b8..c56206b 100644
--- a/lib/librte_mbuf/rte_mbuf.c
+++ b/lib/librte_mbuf/rte_mbuf.c
@@ -160,7 +160,7 @@ rte_mbuf_sanity_check(const struct rte_mbuf *m, enum rte_mbuf_type t,
if (m->buf_addr == NULL)
rte_panic("bad virt addr\n");
-#ifdef RTE_MBUF_SCATTER_GATHER
+#ifdef RTE_MBUF_REFCNT
uint16_t cnt = rte_mbuf_refcnt_read(m);
if ((cnt == 0) || (cnt == UINT16_MAX))
rte_panic("bad ref cnt\n");
diff --git a/lib/librte_mbuf/rte_mbuf.h b/lib/librte_mbuf/rte_mbuf.h
index 4a9ab41..36328ec 100644
--- a/lib/librte_mbuf/rte_mbuf.h
+++ b/lib/librte_mbuf/rte_mbuf.h
@@ -67,6 +67,9 @@
extern "C" {
#endif
+/* deprecated feature, renamed in RTE_MBUF_REFCNT */
+#pragma GCC poison RTE_MBUF_SCATTER_GATHER
+
/**
* A control message buffer.
*/
@@ -177,7 +180,7 @@ struct rte_mbuf {
void *buf_addr; /**< Virtual address of segment buffer. */
phys_addr_t buf_physaddr; /**< Physical address of segment buffer. */
uint16_t buf_len; /**< Length of segment buffer. */
-#ifdef RTE_MBUF_SCATTER_GATHER
+#ifdef RTE_MBUF_REFCNT
/**
* 16-bit Reference counter.
* It should only be accessed using the following functions:
@@ -265,7 +268,7 @@ if (!(exp)) { \
#endif /* RTE_LIBRTE_MBUF_DEBUG */
-#ifdef RTE_MBUF_SCATTER_GATHER
+#ifdef RTE_MBUF_REFCNT
#ifdef RTE_MBUF_REFCNT_ATOMIC
/**
@@ -347,14 +350,14 @@ rte_mbuf_refcnt_set(struct rte_mbuf *m, uint16_t new_value)
rte_prefetch0(m); \
} while (0)
-#else /* ! RTE_MBUF_SCATTER_GATHER */
+#else /* ! RTE_MBUF_REFCNT */
/** Mbuf prefetch */
#define RTE_MBUF_PREFETCH_TO_FREE(m) do { } while(0)
#define rte_mbuf_refcnt_set(m,v) do { } while(0)
-#endif /* RTE_MBUF_SCATTER_GATHER */
+#endif /* RTE_MBUF_REFCNT */
/**
@@ -393,10 +396,10 @@ static inline struct rte_mbuf *__rte_mbuf_raw_alloc(struct rte_mempool *mp)
if (rte_mempool_get(mp, &mb) < 0)
return NULL;
m = (struct rte_mbuf *)mb;
-#ifdef RTE_MBUF_SCATTER_GATHER
+#ifdef RTE_MBUF_REFCNT
RTE_MBUF_ASSERT(rte_mbuf_refcnt_read(m) == 0);
rte_mbuf_refcnt_set(m, 1);
-#endif /* RTE_MBUF_SCATTER_GATHER */
+#endif /* RTE_MBUF_REFCNT */
return (m);
}
@@ -411,9 +414,9 @@ static inline struct rte_mbuf *__rte_mbuf_raw_alloc(struct rte_mempool *mp)
static inline void __attribute__((always_inline))
__rte_mbuf_raw_free(struct rte_mbuf *m)
{
-#ifdef RTE_MBUF_SCATTER_GATHER
+#ifdef RTE_MBUF_REFCNT
RTE_MBUF_ASSERT(rte_mbuf_refcnt_read(m) == 0);
-#endif /* RTE_MBUF_SCATTER_GATHER */
+#endif /* RTE_MBUF_REFCNT */
rte_mempool_put(m->pool, m);
}
@@ -590,7 +593,7 @@ static inline struct rte_mbuf *rte_pktmbuf_alloc(struct rte_mempool *mp)
return (m);
}
-#ifdef RTE_MBUF_SCATTER_GATHER
+#ifdef RTE_MBUF_REFCNT
/**
* Attach packet mbuf to another packet mbuf.
@@ -658,7 +661,7 @@ static inline void rte_pktmbuf_detach(struct rte_mbuf *m)
m->pkt.data_len = 0;
}
-#endif /* RTE_MBUF_SCATTER_GATHER */
+#endif /* RTE_MBUF_REFCNT */
static inline struct rte_mbuf* __attribute__((always_inline))
@@ -666,7 +669,7 @@ __rte_pktmbuf_prefree_seg(struct rte_mbuf *m)
{
__rte_mbuf_sanity_check(m, RTE_MBUF_PKT, 0);
-#ifdef RTE_MBUF_SCATTER_GATHER
+#ifdef RTE_MBUF_REFCNT
if (likely (rte_mbuf_refcnt_read(m) == 1) ||
likely (rte_mbuf_refcnt_update(m, -1) == 0)) {
struct rte_mbuf *md = RTE_MBUF_FROM_BADDR(m->buf_addr);
@@ -684,7 +687,7 @@ __rte_pktmbuf_prefree_seg(struct rte_mbuf *m)
}
#endif
return(m);
-#ifdef RTE_MBUF_SCATTER_GATHER
+#ifdef RTE_MBUF_REFCNT
}
return (NULL);
#endif
@@ -728,7 +731,7 @@ static inline void rte_pktmbuf_free(struct rte_mbuf *m)
}
}
-#ifdef RTE_MBUF_SCATTER_GATHER
+#ifdef RTE_MBUF_REFCNT
/**
* Creates a "clone" of the given packet mbuf.
@@ -804,7 +807,7 @@ static inline void rte_pktmbuf_refcnt_update(struct rte_mbuf *m, int16_t v)
} while ((m = m->pkt.next) != NULL);
}
-#endif /* RTE_MBUF_SCATTER_GATHER */
+#endif /* RTE_MBUF_REFCNT */
/**
* Get the headroom in a packet mbuf.
--
1.9.2
* [dpdk-dev] [PATCH v2 03/17] mbuf: remove rte_ctrlmbuf
2014-05-19 13:56 [dpdk-dev] [PATCH v2 00/17] ixgbe/mbuf: add TSO support Olivier Matz
2014-05-19 13:56 ` [dpdk-dev] [PATCH v2 01/17] igb/ixgbe: fix IP checksum calculation Olivier Matz
2014-05-19 13:56 ` [dpdk-dev] [PATCH v2 02/17] mbuf: rename RTE_MBUF_SCATTER_GATHER into RTE_MBUF_REFCNT Olivier Matz
@ 2014-05-19 13:56 ` Olivier Matz
2014-05-19 13:56 ` [dpdk-dev] [PATCH v2 04/17] mbuf: remove the rte_pktmbuf structure Olivier Matz
` (15 subsequent siblings)
18 siblings, 0 replies; 28+ messages in thread
From: Olivier Matz @ 2014-05-19 13:56 UTC (permalink / raw)
To: dev
The initial role of rte_ctrlmbuf is to carry generic messages (data
pointer + data length), but it is not used by the DPDK or its
applications. Keeping it implies:
- losing 1 byte in the rte_mbuf structure
- keeping some dead code in rte_mbuf.[ch]
This patch removes the feature. Thanks to this, it is now possible to
simplify the rte_mbuf structure by merging rte_pktmbuf into it, which
is done in the next commit.
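For any external code that did use it, a hypothetical migration sketch
(carrying a generic message in a regular packet mbuf instead; 'pool'
is assumed to be a pktmbuf pool):

#include <rte_mbuf.h>
#include <rte_memcpy.h>

static struct rte_mbuf *
msg_to_mbuf(struct rte_mempool *pool, const void *msg, uint16_t len)
{
	struct rte_mbuf *m = rte_pktmbuf_alloc(pool);
	char *data;

	if (m == NULL)
		return NULL;
	/* replaces rte_ctrlmbuf_data()/rte_ctrlmbuf_len() */
	data = rte_pktmbuf_append(m, len);
	if (data == NULL) {
		rte_pktmbuf_free(m);
		return NULL;
	}
	rte_memcpy(data, msg, len);
	return m;
}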
Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
---
app/test-pmd/cmdline.c | 1 -
app/test-pmd/flowgen.c | 2 +-
app/test-pmd/testpmd.c | 2 -
app/test-pmd/txonly.c | 2 +-
app/test/commands.c | 1 -
app/test/test_mbuf.c | 72 +------------
examples/ipv4_multicast/main.c | 2 +-
lib/librte_mbuf/rte_mbuf.c | 65 +++---------
lib/librte_mbuf/rte_mbuf.h | 175 ++++++-------------------------
lib/librte_pmd_e1000/em_rxtx.c | 2 +-
lib/librte_pmd_e1000/igb_rxtx.c | 2 +-
lib/librte_pmd_ixgbe/ixgbe_rxtx.c | 4 +-
lib/librte_pmd_virtio/virtio_rxtx.c | 2 +-
lib/librte_pmd_vmxnet3/vmxnet3_rxtx.c | 2 +-
lib/librte_pmd_xenvirt/rte_eth_xenvirt.c | 2 +-
15 files changed, 55 insertions(+), 281 deletions(-)
diff --git a/app/test-pmd/cmdline.c b/app/test-pmd/cmdline.c
index b3824f9..790239a 100644
--- a/app/test-pmd/cmdline.c
+++ b/app/test-pmd/cmdline.c
@@ -5055,7 +5055,6 @@ dump_struct_sizes(void)
#define DUMP_SIZE(t) printf("sizeof(" #t ") = %u\n", (unsigned)sizeof(t));
DUMP_SIZE(struct rte_mbuf);
DUMP_SIZE(struct rte_pktmbuf);
- DUMP_SIZE(struct rte_ctrlmbuf);
DUMP_SIZE(struct rte_mempool);
DUMP_SIZE(struct rte_ring);
#undef DUMP_SIZE
diff --git a/app/test-pmd/flowgen.c b/app/test-pmd/flowgen.c
index 04911ff..a8f2a65 100644
--- a/app/test-pmd/flowgen.c
+++ b/app/test-pmd/flowgen.c
@@ -96,7 +96,7 @@ tx_mbuf_alloc(struct rte_mempool *mp)
struct rte_mbuf *m;
m = __rte_mbuf_raw_alloc(mp);
- __rte_mbuf_sanity_check_raw(m, RTE_MBUF_PKT, 0);
+ __rte_mbuf_sanity_check_raw(m, 0);
return (m);
}
diff --git a/app/test-pmd/testpmd.c b/app/test-pmd/testpmd.c
index bb4972c..cf5e133 100644
--- a/app/test-pmd/testpmd.c
+++ b/app/test-pmd/testpmd.c
@@ -398,13 +398,11 @@ testpmd_mbuf_ctor(struct rte_mempool *mp,
mb_ctor_arg = (struct mbuf_ctor_arg *) opaque_arg;
mb = (struct rte_mbuf *) raw_mbuf;
- mb->type = RTE_MBUF_PKT;
mb->pool = mp;
mb->buf_addr = (void *) ((char *)mb + mb_ctor_arg->seg_buf_offset);
mb->buf_physaddr = (uint64_t) (rte_mempool_virt2phy(mp, mb) +
mb_ctor_arg->seg_buf_offset);
mb->buf_len = mb_ctor_arg->seg_buf_size;
- mb->type = RTE_MBUF_PKT;
mb->ol_flags = 0;
mb->pkt.data = (char *) mb->buf_addr + RTE_PKTMBUF_HEADROOM;
mb->pkt.nb_segs = 1;
diff --git a/app/test-pmd/txonly.c b/app/test-pmd/txonly.c
index 5bbd34f..9d2829d 100644
--- a/app/test-pmd/txonly.c
+++ b/app/test-pmd/txonly.c
@@ -93,7 +93,7 @@ tx_mbuf_alloc(struct rte_mempool *mp)
struct rte_mbuf *m;
m = __rte_mbuf_raw_alloc(mp);
- __rte_mbuf_sanity_check_raw(m, RTE_MBUF_PKT, 0);
+ __rte_mbuf_sanity_check_raw(m, 0);
return (m);
}
diff --git a/app/test/commands.c b/app/test/commands.c
index efa8566..1595b81 100644
--- a/app/test/commands.c
+++ b/app/test/commands.c
@@ -262,7 +262,6 @@ dump_struct_sizes(void)
#define DUMP_SIZE(t) printf("sizeof(" #t ") = %u\n", (unsigned)sizeof(t));
DUMP_SIZE(struct rte_mbuf);
DUMP_SIZE(struct rte_pktmbuf);
- DUMP_SIZE(struct rte_ctrlmbuf);
DUMP_SIZE(struct rte_mempool);
DUMP_SIZE(struct rte_ring);
#undef DUMP_SIZE
diff --git a/app/test/test_mbuf.c b/app/test/test_mbuf.c
index bfcac05..4e3a198 100644
--- a/app/test/test_mbuf.c
+++ b/app/test/test_mbuf.c
@@ -80,7 +80,6 @@
#define MAKE_STRING(x) # x
static struct rte_mempool *pktmbuf_pool = NULL;
-static struct rte_mempool *ctrlmbuf_pool = NULL;
#if defined RTE_MBUF_REFCNT && defined RTE_MBUF_REFCNT_ATOMIC
@@ -272,8 +271,8 @@ test_one_pktmbuf(void)
GOTO_FAIL("Buffer should be continuous");
memset(hdr, 0x55, MBUF_TEST_HDR2_LEN);
- rte_mbuf_sanity_check(m, RTE_MBUF_PKT, 1);
- rte_mbuf_sanity_check(m, RTE_MBUF_PKT, 0);
+ rte_mbuf_sanity_check(m, 1);
+ rte_mbuf_sanity_check(m, 0);
rte_pktmbuf_dump(stdout, m, 0);
/* this prepend should fail */
@@ -320,48 +319,6 @@ fail:
return -1;
}
-/*
- * test control mbuf
- */
-static int
-test_one_ctrlmbuf(void)
-{
- struct rte_mbuf *m = NULL;
- char message[] = "This is a message carried by a ctrlmbuf";
-
- printf("Test ctrlmbuf API\n");
-
- /* alloc a mbuf */
-
- m = rte_ctrlmbuf_alloc(ctrlmbuf_pool);
- if (m == NULL)
- GOTO_FAIL("Cannot allocate mbuf");
- if (rte_ctrlmbuf_len(m) != 0)
- GOTO_FAIL("Bad length");
-
- /* set data */
- rte_ctrlmbuf_data(m) = &message;
- rte_ctrlmbuf_len(m) = sizeof(message);
-
- /* read data */
- if (rte_ctrlmbuf_data(m) != message)
- GOTO_FAIL("Invalid data pointer");
- if (rte_ctrlmbuf_len(m) != sizeof(message))
- GOTO_FAIL("Invalid len");
-
- rte_mbuf_sanity_check(m, RTE_MBUF_CTRL, 0);
-
- /* free mbuf */
- rte_ctrlmbuf_free(m);
- m = NULL;
- return 0;
-
-fail:
- if (m)
- rte_ctrlmbuf_free(m);
- return -1;
-}
-
static int
testclone_testupdate_testdetach(void)
{
@@ -744,7 +701,7 @@ verify_mbuf_check_panics(struct rte_mbuf *buf)
pid = fork();
if (pid == 0) {
- rte_mbuf_sanity_check(buf, RTE_MBUF_PKT, 1); /* should panic */
+ rte_mbuf_sanity_check(buf, 1); /* should panic */
exit(0); /* return normally if it doesn't panic */
} else if (pid < 0){
printf("Fork Failed\n");
@@ -781,13 +738,6 @@ test_failing_mbuf_sanity_check(void)
}
badbuf = *buf;
- badbuf.type = (uint8_t)-1;
- if (verify_mbuf_check_panics(&badbuf)) {
- printf("Error with bad-type mbuf test\n");
- return -1;
- }
-
- badbuf = *buf;
badbuf.pool = NULL;
if (verify_mbuf_check_panics(&badbuf)) {
printf("Error with bad-pool mbuf test\n");
@@ -889,22 +839,6 @@ test_mbuf(void)
return -1;
}
- /* create ctrlmbuf pool if it does not exist */
- if (ctrlmbuf_pool == NULL) {
- ctrlmbuf_pool =
- rte_mempool_create("test_ctrlmbuf_pool", NB_MBUF,
- sizeof(struct rte_mbuf), 32, 0,
- NULL, NULL,
- rte_ctrlmbuf_init, NULL,
- SOCKET_ID_ANY, 0);
- }
-
- /* test control mbuf */
- if (test_one_ctrlmbuf() < 0) {
- printf("test_one_ctrlmbuf() failed\n");
- return -1;
- }
-
/* test free pktmbuf segment one by one */
if (test_pktmbuf_free_segment() < 0) {
printf("test_pktmbuf_free_segment() failed.\n");
diff --git a/examples/ipv4_multicast/main.c b/examples/ipv4_multicast/main.c
index 3bd37e4..3967d7a 100644
--- a/examples/ipv4_multicast/main.c
+++ b/examples/ipv4_multicast/main.c
@@ -343,7 +343,7 @@ mcast_out_pkt(struct rte_mbuf *pkt, int use_clone)
hdr->ol_flags = pkt->ol_flags;
- __rte_mbuf_sanity_check(hdr, RTE_MBUF_PKT, 1);
+ __rte_mbuf_sanity_check(hdr, 1);
return (hdr);
}
diff --git a/lib/librte_mbuf/rte_mbuf.c b/lib/librte_mbuf/rte_mbuf.c
index c56206b..b703b1c 100644
--- a/lib/librte_mbuf/rte_mbuf.c
+++ b/lib/librte_mbuf/rte_mbuf.c
@@ -60,32 +60,6 @@
#include <rte_hexdump.h>
/*
- * ctrlmbuf constructor, given as a callback function to
- * rte_mempool_create()
- */
-void
-rte_ctrlmbuf_init(struct rte_mempool *mp,
- __attribute__((unused)) void *opaque_arg,
- void *_m,
- __attribute__((unused)) unsigned i)
-{
- struct rte_mbuf *m = _m;
-
- memset(m, 0, mp->elt_size);
-
- /* start of buffer is just after mbuf structure */
- m->buf_addr = (char *)m + sizeof(struct rte_mbuf);
- m->buf_physaddr = rte_mempool_virt2phy(mp, m) +
- sizeof(struct rte_mbuf);
- m->buf_len = (uint16_t) (mp->elt_size - sizeof(struct rte_mbuf));
-
- /* init some constant fields */
- m->type = RTE_MBUF_CTRL;
- m->ctrl.data = (char *)m->buf_addr;
- m->pool = (struct rte_mempool *)mp;
-}
-
-/*
* pktmbuf pool constructor, given as a callback function to
* rte_mempool_create()
*/
@@ -133,7 +107,6 @@ rte_pktmbuf_init(struct rte_mempool *mp,
m->pkt.data = (char*) m->buf_addr + RTE_MIN(RTE_PKTMBUF_HEADROOM, m->buf_len);
/* init some constant fields */
- m->type = RTE_MBUF_PKT;
m->pool = mp;
m->pkt.nb_segs = 1;
m->pkt.in_port = 0xff;
@@ -141,16 +114,13 @@ rte_pktmbuf_init(struct rte_mempool *mp,
/* do some sanity checks on a mbuf: panic if it fails */
void
-rte_mbuf_sanity_check(const struct rte_mbuf *m, enum rte_mbuf_type t,
- int is_header)
+rte_mbuf_sanity_check(const struct rte_mbuf *m, int is_header)
{
const struct rte_mbuf *m_seg;
unsigned nb_segs;
if (m == NULL)
rte_panic("mbuf is NULL\n");
- if (m->type != (uint8_t)t)
- rte_panic("bad mbuf type\n");
/* generic checks */
if (m->pool == NULL)
@@ -166,29 +136,18 @@ rte_mbuf_sanity_check(const struct rte_mbuf *m, enum rte_mbuf_type t,
rte_panic("bad ref cnt\n");
#endif
- /* nothing to check for ctrl messages */
- if (m->type == RTE_MBUF_CTRL)
+ /* nothing to check for sub-segments */
+ if (is_header == 0)
return;
- /* check pkt consistency */
- else if (m->type == RTE_MBUF_PKT) {
-
- /* nothing to check for sub-segments */
- if (is_header == 0)
- return;
-
- nb_segs = m->pkt.nb_segs;
- m_seg = m;
- while (m_seg && nb_segs != 0) {
- m_seg = m_seg->pkt.next;
- nb_segs --;
- }
- if (nb_segs != 0)
- rte_panic("bad nb_segs\n");
- return;
+ nb_segs = m->pkt.nb_segs;
+ m_seg = m;
+ while (m_seg && nb_segs != 0) {
+ m_seg = m_seg->pkt.next;
+ nb_segs --;
}
-
- rte_panic("unknown mbuf type\n");
+ if (nb_segs != 0)
+ rte_panic("bad nb_segs\n");
}
/* dump a mbuf on console */
@@ -198,7 +157,7 @@ rte_pktmbuf_dump(FILE *f, const struct rte_mbuf *m, unsigned dump_len)
unsigned int len;
unsigned nb_segs;
- __rte_mbuf_sanity_check(m, RTE_MBUF_PKT, 1);
+ __rte_mbuf_sanity_check(m, 1);
fprintf(f, "dump mbuf at 0x%p, phys=%"PRIx64", buf_len=%u\n",
m, (uint64_t)m->buf_physaddr, (unsigned)m->buf_len);
@@ -208,7 +167,7 @@ rte_pktmbuf_dump(FILE *f, const struct rte_mbuf *m, unsigned dump_len)
nb_segs = m->pkt.nb_segs;
while (m && nb_segs != 0) {
- __rte_mbuf_sanity_check(m, RTE_MBUF_PKT, 0);
+ __rte_mbuf_sanity_check(m, 0);
fprintf(f, " segment at 0x%p, data=0x%p, data_len=%u\n",
m, m->pkt.data, (unsigned)m->pkt.data_len);
diff --git a/lib/librte_mbuf/rte_mbuf.h b/lib/librte_mbuf/rte_mbuf.h
index 36328ec..a7792a3 100644
--- a/lib/librte_mbuf/rte_mbuf.h
+++ b/lib/librte_mbuf/rte_mbuf.h
@@ -43,18 +43,13 @@
* buffers. The message buffers are stored in a mempool, using the
* RTE mempool library.
*
- * This library provide an API to allocate/free mbufs, manipulate
- * control message buffer (ctrlmbuf), which are generic message
- * buffers, and packet buffers (pktmbuf), which are used to carry
- * network packets.
+ * This library provide an API to allocate/free packet mbufs, which are
+ * used to carry network packets.
*
* To understand the concepts of packet buffers or mbufs, you
* should read "TCP/IP Illustrated, Volume 2: The Implementation,
* Addison-Wesley, 1995, ISBN 0-201-63354-X from Richard Stevens"
* http://www.kohala.com/start/tcpipiv2.html
- *
- * The main modification of this implementation is the use of mbuf for
- * transports other than packets. mbufs can have other types.
*/
#include <stdint.h>
@@ -70,15 +65,6 @@ extern "C" {
/* deprecated feature, renamed in RTE_MBUF_REFCNT */
#pragma GCC poison RTE_MBUF_SCATTER_GATHER
-/**
- * A control message buffer.
- */
-struct rte_ctrlmbuf {
- void *data; /**< Pointer to data. */
- uint32_t data_len; /**< Length of data. */
-};
-
-
/*
* Packet Offload Features Flags. It also carry packet type information.
* Critical resources. Both rx/tx shared these bits. Be cautious on any change
@@ -165,15 +151,7 @@ struct rte_pktmbuf {
};
/**
- * This enum indicates the mbuf type.
- */
-enum rte_mbuf_type {
- RTE_MBUF_CTRL, /**< Control mbuf. */
- RTE_MBUF_PKT, /**< Packet mbuf. */
-};
-
-/**
- * The generic rte_mbuf, containing a packet mbuf or a control mbuf.
+ * The generic rte_mbuf, containing a packet mbuf.
*/
struct rte_mbuf {
struct rte_mempool *pool; /**< Pool from which mbuf was allocated. */
@@ -196,14 +174,10 @@ struct rte_mbuf {
#else
uint16_t refcnt_reserved; /**< Do not use this field */
#endif
- uint8_t type; /**< Type of mbuf. */
- uint8_t reserved; /**< Unused field. Required for padding. */
+ uint16_t reserved; /**< Unused field. Required for padding. */
uint16_t ol_flags; /**< Offload features. */
- union {
- struct rte_ctrlmbuf ctrl;
- struct rte_pktmbuf pkt;
- };
+ struct rte_pktmbuf pkt;
} __rte_cache_aligned;
/**
@@ -241,12 +215,12 @@ struct rte_pktmbuf_pool_private {
#ifdef RTE_LIBRTE_MBUF_DEBUG
/** check mbuf type in debug mode */
-#define __rte_mbuf_sanity_check(m, t, is_h) rte_mbuf_sanity_check(m, t, is_h)
+#define __rte_mbuf_sanity_check(m, is_h) rte_mbuf_sanity_check(m, is_h)
/** check mbuf type in debug mode if mbuf pointer is not null */
-#define __rte_mbuf_sanity_check_raw(m, t, is_h) do { \
+#define __rte_mbuf_sanity_check_raw(m, is_h) do { \
if ((m) != NULL) \
- rte_mbuf_sanity_check(m, t, is_h); \
+ rte_mbuf_sanity_check(m, is_h); \
} while (0)
/** MBUF asserts in debug mode */
@@ -258,10 +232,10 @@ if (!(exp)) { \
#else /* RTE_LIBRTE_MBUF_DEBUG */
/** check mbuf type in debug mode */
-#define __rte_mbuf_sanity_check(m, t, is_h) do { } while(0)
+#define __rte_mbuf_sanity_check(m, is_h) do { } while(0)
/** check mbuf type in debug mode if mbuf pointer is not null */
-#define __rte_mbuf_sanity_check_raw(m, t, is_h) do { } while(0)
+#define __rte_mbuf_sanity_check_raw(m, is_h) do { } while(0)
/** MBUF asserts in debug mode */
#define RTE_MBUF_ASSERT(exp) do { } while(0)
@@ -368,20 +342,17 @@ rte_mbuf_refcnt_set(struct rte_mbuf *m, uint16_t new_value)
*
* @param m
* The mbuf to be checked.
- * @param t
- * The expected type of the mbuf.
* @param is_header
* True if the mbuf is a packet header, false if it is a sub-segment
* of a packet (in this case, some fields like nb_segs are not checked)
*/
void
-rte_mbuf_sanity_check(const struct rte_mbuf *m, enum rte_mbuf_type t,
- int is_header);
+rte_mbuf_sanity_check(const struct rte_mbuf *m, int is_header);
/**
* @internal Allocate a new mbuf from mempool *mp*.
* The use of that function is reserved for RTE internal needs.
- * Please use either rte_ctrlmbuf_alloc() or rte_pktmbuf_alloc().
+ * Please use rte_pktmbuf_alloc().
*
* @param mp
* The mempool from which mbuf is allocated.
@@ -406,7 +377,7 @@ static inline struct rte_mbuf *__rte_mbuf_raw_alloc(struct rte_mempool *mp)
/**
* @internal Put mbuf back into its original mempool.
* The use of that function is reserved for RTE internal needs.
- * Please use either rte_ctrlmbuf_free() or rte_pktmbuf_free().
+ * Please use rte_pktmbuf_free().
*
* @param m
* The mbuf to be freed.
@@ -420,95 +391,11 @@ __rte_mbuf_raw_free(struct rte_mbuf *m)
rte_mempool_put(m->pool, m);
}
-/* Operations on ctrl mbuf */
-
-/**
- * The control mbuf constructor.
- *
- * This function initializes some fields in an mbuf structure that are
- * not modified by the user once created (mbuf type, origin pool, buffer
- * start address, and so on). This function is given as a callback function
- * to rte_mempool_create() at pool creation time.
- *
- * @param mp
- * The mempool from which the mbuf is allocated.
- * @param opaque_arg
- * A pointer that can be used by the user to retrieve useful information
- * for mbuf initialization. This pointer comes from the ``init_arg``
- * parameter of rte_mempool_create().
- * @param m
- * The mbuf to initialize.
- * @param i
- * The index of the mbuf in the pool table.
- */
-void rte_ctrlmbuf_init(struct rte_mempool *mp, void *opaque_arg,
- void *m, unsigned i);
-
-/**
- * Allocate a new mbuf (type is ctrl) from mempool *mp*.
- *
- * This new mbuf is initialized with data pointing to the beginning of
- * buffer, and with a length of zero.
- *
- * @param mp
- * The mempool from which the mbuf is allocated.
- * @return
- * - The pointer to the new mbuf on success.
- * - NULL if allocation failed.
- */
-static inline struct rte_mbuf *rte_ctrlmbuf_alloc(struct rte_mempool *mp)
-{
- struct rte_mbuf *m;
- if ((m = __rte_mbuf_raw_alloc(mp)) != NULL) {
- m->ctrl.data = m->buf_addr;
- m->ctrl.data_len = 0;
- __rte_mbuf_sanity_check(m, RTE_MBUF_CTRL, 0);
- }
- return (m);
-}
-
-/**
- * Free a control mbuf back into its original mempool.
- *
- * @param m
- * The control mbuf to be freed.
- */
-static inline void rte_ctrlmbuf_free(struct rte_mbuf *m)
-{
- __rte_mbuf_sanity_check(m, RTE_MBUF_CTRL, 0);
-#ifdef RTE_MBUF_SCATTER_GATHER
- if (rte_mbuf_refcnt_update(m, -1) == 0)
-#endif /* RTE_MBUF_SCATTER_GATHER */
- __rte_mbuf_raw_free(m);
-}
-
-/**
- * A macro that returns the pointer to the carried data.
- *
- * The value that can be read or assigned.
- *
- * @param m
- * The control mbuf.
- */
-#define rte_ctrlmbuf_data(m) ((m)->ctrl.data)
-
-/**
- * A macro that returns the length of the carried data.
- *
- * The value that can be read or assigned.
- *
- * @param m
- * The control mbuf.
- */
-#define rte_ctrlmbuf_len(m) ((m)->ctrl.data_len)
-
-/* Operations on pkt mbuf */
-
/**
* The packet mbuf constructor.
*
- * This function initializes some fields in the mbuf structure that are not
- * modified by the user once created (mbuf type, origin pool, buffer start
+ * This function initializes some fields in the mbuf structure that are
+ * not modified by the user once created (origin pool, buffer start
* address, and so on). This function is given as a callback function to
* rte_mempool_create() at pool creation time.
*
@@ -569,11 +456,11 @@ static inline void rte_pktmbuf_reset(struct rte_mbuf *m)
m->pkt.data = (char*) m->buf_addr + buf_ofs;
m->pkt.data_len = 0;
- __rte_mbuf_sanity_check(m, RTE_MBUF_PKT, 1);
+ __rte_mbuf_sanity_check(m, 1);
}
/**
- * Allocate a new mbuf (type is pkt) from a mempool.
+ * Allocate a new mbuf from a mempool.
*
* This new mbuf contains one segment, which has a length of 0. The pointer
* to data is initialized to have some bytes of headroom in the buffer
@@ -629,8 +516,8 @@ static inline void rte_pktmbuf_attach(struct rte_mbuf *mi, struct rte_mbuf *md)
mi->pkt.nb_segs = 1;
mi->ol_flags = md->ol_flags;
- __rte_mbuf_sanity_check(mi, RTE_MBUF_PKT, 1);
- __rte_mbuf_sanity_check(md, RTE_MBUF_PKT, 0);
+ __rte_mbuf_sanity_check(mi, 1);
+ __rte_mbuf_sanity_check(md, 0);
}
/**
@@ -667,7 +554,7 @@ static inline void rte_pktmbuf_detach(struct rte_mbuf *m)
static inline struct rte_mbuf* __attribute__((always_inline))
__rte_pktmbuf_prefree_seg(struct rte_mbuf *m)
{
- __rte_mbuf_sanity_check(m, RTE_MBUF_PKT, 0);
+ __rte_mbuf_sanity_check(m, 0);
#ifdef RTE_MBUF_REFCNT
if (likely (rte_mbuf_refcnt_read(m) == 1) ||
@@ -722,7 +609,7 @@ static inline void rte_pktmbuf_free(struct rte_mbuf *m)
{
struct rte_mbuf *m_next;
- __rte_mbuf_sanity_check(m, RTE_MBUF_PKT, 1);
+ __rte_mbuf_sanity_check(m, 1);
while (m != NULL) {
m_next = m->pkt.next;
@@ -783,7 +670,7 @@ static inline struct rte_mbuf *rte_pktmbuf_clone(struct rte_mbuf *md,
return (NULL);
}
- __rte_mbuf_sanity_check(mc, RTE_MBUF_PKT, 1);
+ __rte_mbuf_sanity_check(mc, 1);
return (mc);
}
@@ -800,7 +687,7 @@ static inline struct rte_mbuf *rte_pktmbuf_clone(struct rte_mbuf *md,
*/
static inline void rte_pktmbuf_refcnt_update(struct rte_mbuf *m, int16_t v)
{
- __rte_mbuf_sanity_check(m, RTE_MBUF_PKT, 1);
+ __rte_mbuf_sanity_check(m, 1);
do {
rte_mbuf_refcnt_update(m, v);
@@ -819,7 +706,7 @@ static inline void rte_pktmbuf_refcnt_update(struct rte_mbuf *m, int16_t v)
*/
static inline uint16_t rte_pktmbuf_headroom(const struct rte_mbuf *m)
{
- __rte_mbuf_sanity_check(m, RTE_MBUF_PKT, 1);
+ __rte_mbuf_sanity_check(m, 1);
return (uint16_t) ((char*) m->pkt.data - (char*) m->buf_addr);
}
@@ -833,7 +720,7 @@ static inline uint16_t rte_pktmbuf_headroom(const struct rte_mbuf *m)
*/
static inline uint16_t rte_pktmbuf_tailroom(const struct rte_mbuf *m)
{
- __rte_mbuf_sanity_check(m, RTE_MBUF_PKT, 1);
+ __rte_mbuf_sanity_check(m, 1);
return (uint16_t)(m->buf_len - rte_pktmbuf_headroom(m) -
m->pkt.data_len);
}
@@ -850,7 +737,7 @@ static inline struct rte_mbuf *rte_pktmbuf_lastseg(struct rte_mbuf *m)
{
struct rte_mbuf *m2 = (struct rte_mbuf *)m;
- __rte_mbuf_sanity_check(m, RTE_MBUF_PKT, 1);
+ __rte_mbuf_sanity_check(m, 1);
while (m2->pkt.next != NULL)
m2 = m2->pkt.next;
return m2;
@@ -908,7 +795,7 @@ static inline struct rte_mbuf *rte_pktmbuf_lastseg(struct rte_mbuf *m)
static inline char *rte_pktmbuf_prepend(struct rte_mbuf *m,
uint16_t len)
{
- __rte_mbuf_sanity_check(m, RTE_MBUF_PKT, 1);
+ __rte_mbuf_sanity_check(m, 1);
if (unlikely(len > rte_pktmbuf_headroom(m)))
return NULL;
@@ -940,7 +827,7 @@ static inline char *rte_pktmbuf_append(struct rte_mbuf *m, uint16_t len)
void *tail;
struct rte_mbuf *m_last;
- __rte_mbuf_sanity_check(m, RTE_MBUF_PKT, 1);
+ __rte_mbuf_sanity_check(m, 1);
m_last = rte_pktmbuf_lastseg(m);
if (unlikely(len > rte_pktmbuf_tailroom(m_last)))
@@ -968,7 +855,7 @@ static inline char *rte_pktmbuf_append(struct rte_mbuf *m, uint16_t len)
*/
static inline char *rte_pktmbuf_adj(struct rte_mbuf *m, uint16_t len)
{
- __rte_mbuf_sanity_check(m, RTE_MBUF_PKT, 1);
+ __rte_mbuf_sanity_check(m, 1);
if (unlikely(len > m->pkt.data_len))
return NULL;
@@ -997,7 +884,7 @@ static inline int rte_pktmbuf_trim(struct rte_mbuf *m, uint16_t len)
{
struct rte_mbuf *m_last;
- __rte_mbuf_sanity_check(m, RTE_MBUF_PKT, 1);
+ __rte_mbuf_sanity_check(m, 1);
m_last = rte_pktmbuf_lastseg(m);
if (unlikely(len > m_last->pkt.data_len))
@@ -1019,7 +906,7 @@ static inline int rte_pktmbuf_trim(struct rte_mbuf *m, uint16_t len)
*/
static inline int rte_pktmbuf_is_contiguous(const struct rte_mbuf *m)
{
- __rte_mbuf_sanity_check(m, RTE_MBUF_PKT, 1);
+ __rte_mbuf_sanity_check(m, 1);
return !!(m->pkt.nb_segs == 1);
}
diff --git a/lib/librte_pmd_e1000/em_rxtx.c b/lib/librte_pmd_e1000/em_rxtx.c
index 4f98a3f..e55da0b 100644
--- a/lib/librte_pmd_e1000/em_rxtx.c
+++ b/lib/librte_pmd_e1000/em_rxtx.c
@@ -85,7 +85,7 @@ rte_rxmbuf_alloc(struct rte_mempool *mp)
struct rte_mbuf *m;
m = __rte_mbuf_raw_alloc(mp);
- __rte_mbuf_sanity_check_raw(m, RTE_MBUF_PKT, 0);
+ __rte_mbuf_sanity_check_raw(m, 0);
return (m);
}
diff --git a/lib/librte_pmd_e1000/igb_rxtx.c b/lib/librte_pmd_e1000/igb_rxtx.c
index b3c8149..62ff7bc 100644
--- a/lib/librte_pmd_e1000/igb_rxtx.c
+++ b/lib/librte_pmd_e1000/igb_rxtx.c
@@ -79,7 +79,7 @@ rte_rxmbuf_alloc(struct rte_mempool *mp)
struct rte_mbuf *m;
m = __rte_mbuf_raw_alloc(mp);
- __rte_mbuf_sanity_check_raw(m, RTE_MBUF_PKT, 0);
+ __rte_mbuf_sanity_check_raw(m, 0);
return (m);
}
diff --git a/lib/librte_pmd_ixgbe/ixgbe_rxtx.c b/lib/librte_pmd_ixgbe/ixgbe_rxtx.c
index f217ca1..45eb6d0 100644
--- a/lib/librte_pmd_ixgbe/ixgbe_rxtx.c
+++ b/lib/librte_pmd_ixgbe/ixgbe_rxtx.c
@@ -88,7 +88,7 @@ rte_rxmbuf_alloc(struct rte_mempool *mp)
struct rte_mbuf *m;
m = __rte_mbuf_raw_alloc(mp);
- __rte_mbuf_sanity_check_raw(m, RTE_MBUF_PKT, 0);
+ __rte_mbuf_sanity_check_raw(m, 0);
return (m);
}
@@ -987,7 +987,6 @@ ixgbe_rx_alloc_bufs(struct igb_rx_queue *rxq)
/* populate the static rte mbuf fields */
mb = rxep[i].mbuf;
rte_mbuf_refcnt_set(mb, 1);
- mb->type = RTE_MBUF_PKT;
mb->pkt.next = NULL;
mb->pkt.data = (char *)mb->buf_addr + RTE_PKTMBUF_HEADROOM;
mb->pkt.nb_segs = 1;
@@ -3084,7 +3083,6 @@ ixgbe_alloc_rx_queue_mbufs(struct igb_rx_queue *rxq)
}
rte_mbuf_refcnt_set(mbuf, 1);
- mbuf->type = RTE_MBUF_PKT;
mbuf->pkt.next = NULL;
mbuf->pkt.data = (char *)mbuf->buf_addr + RTE_PKTMBUF_HEADROOM;
mbuf->pkt.nb_segs = 1;
diff --git a/lib/librte_pmd_virtio/virtio_rxtx.c b/lib/librte_pmd_virtio/virtio_rxtx.c
index 2dd2102..199b3c6 100644
--- a/lib/librte_pmd_virtio/virtio_rxtx.c
+++ b/lib/librte_pmd_virtio/virtio_rxtx.c
@@ -66,7 +66,7 @@ rte_rxmbuf_alloc(struct rte_mempool *mp)
struct rte_mbuf *m;
m = __rte_mbuf_raw_alloc(mp);
- __rte_mbuf_sanity_check_raw(m, RTE_MBUF_PKT, 0);
+ __rte_mbuf_sanity_check_raw(m, 0);
return (m);
}
diff --git a/lib/librte_pmd_vmxnet3/vmxnet3_rxtx.c b/lib/librte_pmd_vmxnet3/vmxnet3_rxtx.c
index 1072654..66d80b8 100644
--- a/lib/librte_pmd_vmxnet3/vmxnet3_rxtx.c
+++ b/lib/librte_pmd_vmxnet3/vmxnet3_rxtx.c
@@ -101,7 +101,7 @@ rte_rxmbuf_alloc(struct rte_mempool *mp)
struct rte_mbuf *m;
m = __rte_mbuf_raw_alloc(mp);
- __rte_mbuf_sanity_check_raw(m, RTE_MBUF_PKT, 0);
+ __rte_mbuf_sanity_check_raw(m, 0);
return (m);
}
diff --git a/lib/librte_pmd_xenvirt/rte_eth_xenvirt.c b/lib/librte_pmd_xenvirt/rte_eth_xenvirt.c
index da12199..ae039a0 100644
--- a/lib/librte_pmd_xenvirt/rte_eth_xenvirt.c
+++ b/lib/librte_pmd_xenvirt/rte_eth_xenvirt.c
@@ -80,7 +80,7 @@ rte_rxmbuf_alloc(struct rte_mempool *mp)
struct rte_mbuf *m;
m = __rte_mbuf_raw_alloc(mp);
- __rte_mbuf_sanity_check_raw(m, RTE_MBUF_PKT, 0);
+ __rte_mbuf_sanity_check_raw(m, 0);
return m;
}
--
1.9.2
* [dpdk-dev] [PATCH v2 04/17] mbuf: remove the rte_pktmbuf structure
2014-05-19 13:56 [dpdk-dev] [PATCH v2 00/17] ixgbe/mbuf: add TSO support Olivier Matz
` (2 preceding siblings ...)
2014-05-19 13:56 ` [dpdk-dev] [PATCH v2 03/17] mbuf: remove rte_ctrlmbuf Olivier Matz
@ 2014-05-19 13:56 ` Olivier Matz
2014-05-19 13:56 ` [dpdk-dev] [PATCH v2 05/17] mbuf: merge physaddr and buf_len in a bitfield Olivier Matz
` (14 subsequent siblings)
18 siblings, 0 replies; 28+ messages in thread
From: Olivier Matz @ 2014-05-19 13:56 UTC (permalink / raw)
To: dev
The rte_pktmbuf structure was initially included in the rte_mbuf
structure. This was needed when there were two types of mbuf (ctrl and
packet). As the control mbuf has been removed, we can merge
rte_pktmbuf into the rte_mbuf structure.
Advantages of doing this (a sketch follows the list):
- access to mbuf fields is easier (ex: m->data instead of m->pkt.data)
- the structure is more consistent: for instance, there was no reason
to keep ol_flags in rte_mbuf while the rest of the packet metadata
lived in rte_pktmbuf
- it allows a deeper reorganization of the rte_mbuf structure in the
next commits, gaining several bytes in it
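For illustration, the application-visible change boils down to
removing one level of indirection (sketch, field names as in this
patch):

static inline uint32_t
pkt_len_example(const struct rte_mbuf *m)
{
	/* before this patch: return m->pkt.pkt_len; */
	return m->pkt_len; /* fields now live directly in rte_mbuf */
}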
Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
---
app/test-pmd/cmdline.c | 1 -
app/test-pmd/csumonly.c | 6 +-
app/test-pmd/flowgen.c | 16 +--
app/test-pmd/icmpecho.c | 4 +-
app/test-pmd/ieee1588fwd.c | 6 +-
app/test-pmd/macfwd-retry.c | 2 +-
app/test-pmd/macfwd.c | 8 +-
app/test-pmd/macswap.c | 8 +-
app/test-pmd/rxonly.c | 12 +-
app/test-pmd/testpmd.c | 8 +-
app/test-pmd/testpmd.h | 2 +-
app/test-pmd/txonly.c | 42 +++----
app/test/commands.c | 1 -
app/test/test_mbuf.c | 12 +-
app/test/test_sched.c | 4 +-
examples/dpdk_qat/crypto.c | 22 ++--
examples/dpdk_qat/main.c | 2 +-
examples/exception_path/main.c | 10 +-
examples/ip_reassembly/ipv4_rsmbl.h | 20 +--
examples/ip_reassembly/main.c | 6 +-
examples/ipv4_frag/main.c | 4 +-
examples/ipv4_frag/rte_ipv4_frag.h | 42 +++----
examples/ipv4_multicast/main.c | 14 +--
examples/l3fwd-power/main.c | 2 +-
examples/l3fwd-vf/main.c | 2 +-
examples/l3fwd/main.c | 10 +-
examples/load_balancer/runtime.c | 2 +-
.../client_server_mp/mp_client/client.c | 2 +-
examples/quota_watermark/qw/main.c | 4 +-
examples/vhost/main.c | 22 ++--
examples/vhost_xen/main.c | 22 ++--
lib/librte_mbuf/rte_mbuf.c | 26 ++--
lib/librte_mbuf/rte_mbuf.h | 140 ++++++++++-----------
lib/librte_pmd_e1000/em_rxtx.c | 64 +++++-----
lib/librte_pmd_e1000/igb_rxtx.c | 68 +++++-----
lib/librte_pmd_ixgbe/ixgbe_rxtx.c | 100 +++++++--------
lib/librte_pmd_ixgbe/ixgbe_rxtx.h | 2 +-
lib/librte_pmd_pcap/rte_eth_pcap.c | 14 +--
lib/librte_pmd_virtio/virtio_rxtx.c | 16 +--
lib/librte_pmd_virtio/virtqueue.h | 6 +-
lib/librte_pmd_vmxnet3/vmxnet3_rxtx.c | 26 ++--
lib/librte_pmd_xenvirt/rte_eth_xenvirt.c | 12 +-
lib/librte_pmd_xenvirt/virtqueue.h | 4 +-
lib/librte_sched/rte_sched.c | 14 +--
lib/librte_sched/rte_sched.h | 10 +-
45 files changed, 408 insertions(+), 412 deletions(-)
diff --git a/app/test-pmd/cmdline.c b/app/test-pmd/cmdline.c
index 790239a..a51eee5 100644
--- a/app/test-pmd/cmdline.c
+++ b/app/test-pmd/cmdline.c
@@ -5054,7 +5054,6 @@ dump_struct_sizes(void)
{
#define DUMP_SIZE(t) printf("sizeof(" #t ") = %u\n", (unsigned)sizeof(t));
DUMP_SIZE(struct rte_mbuf);
- DUMP_SIZE(struct rte_pktmbuf);
DUMP_SIZE(struct rte_mempool);
DUMP_SIZE(struct rte_ring);
#undef DUMP_SIZE
diff --git a/app/test-pmd/csumonly.c b/app/test-pmd/csumonly.c
index 3568ba0..ee82eb6 100644
--- a/app/test-pmd/csumonly.c
+++ b/app/test-pmd/csumonly.c
@@ -263,7 +263,7 @@ pkt_burst_checksum_forward(struct fwd_stream *fs)
pkt_ol_flags = mb->ol_flags;
ol_flags = (uint16_t) (pkt_ol_flags & (~PKT_TX_L4_MASK));
- eth_hdr = (struct ether_hdr *) mb->pkt.data;
+ eth_hdr = (struct ether_hdr *) mb->data;
eth_type = rte_be_to_cpu_16(eth_hdr->ether_type);
if (eth_type == ETHER_TYPE_VLAN) {
/* Only allow single VLAN label here */
@@ -430,8 +430,8 @@ pkt_burst_checksum_forward(struct fwd_stream *fs)
}
/* Combine the packet header write. VLAN is not consider here */
- mb->pkt.vlan_macip.f.l2_len = l2_len;
- mb->pkt.vlan_macip.f.l3_len = l3_len;
+ mb->vlan_macip.f.l2_len = l2_len;
+ mb->vlan_macip.f.l3_len = l3_len;
mb->ol_flags = ol_flags;
}
nb_tx = rte_eth_tx_burst(fs->tx_port, fs->tx_queue, pkts_burst, nb_rx);
diff --git a/app/test-pmd/flowgen.c b/app/test-pmd/flowgen.c
index a8f2a65..17dbf83 100644
--- a/app/test-pmd/flowgen.c
+++ b/app/test-pmd/flowgen.c
@@ -171,11 +171,11 @@ pkt_burst_flow_gen(struct fwd_stream *fs)
if (!pkt)
break;
- pkt->pkt.data_len = pkt_size;
- pkt->pkt.next = NULL;
+ pkt->data_len = pkt_size;
+ pkt->next = NULL;
/* Initialize Ethernet header. */
- eth_hdr = (struct ether_hdr *)pkt->pkt.data;
+ eth_hdr = (struct ether_hdr *)pkt->data;
ether_addr_copy(&cfg_ether_dst, &eth_hdr->d_addr);
ether_addr_copy(&cfg_ether_src, &eth_hdr->s_addr);
eth_hdr->ether_type = rte_cpu_to_be_16(ETHER_TYPE_IPv4);
@@ -205,12 +205,12 @@ pkt_burst_flow_gen(struct fwd_stream *fs)
udp_hdr->dgram_len = RTE_CPU_TO_BE_16(pkt_size -
sizeof(*eth_hdr) -
sizeof(*ip_hdr));
- pkt->pkt.nb_segs = 1;
- pkt->pkt.pkt_len = pkt_size;
+ pkt->nb_segs = 1;
+ pkt->pkt_len = pkt_size;
pkt->ol_flags = ol_flags;
- pkt->pkt.vlan_macip.f.vlan_tci = vlan_tci;
- pkt->pkt.vlan_macip.f.l2_len = sizeof(struct ether_hdr);
- pkt->pkt.vlan_macip.f.l3_len = sizeof(struct ipv4_hdr);
+ pkt->vlan_macip.f.vlan_tci = vlan_tci;
+ pkt->vlan_macip.f.l2_len = sizeof(struct ether_hdr);
+ pkt->vlan_macip.f.l3_len = sizeof(struct ipv4_hdr);
pkts_burst[nb_pkt] = pkt;
next_flow = (next_flow + 1) % cfg_n_flows;
diff --git a/app/test-pmd/icmpecho.c b/app/test-pmd/icmpecho.c
index c28ff5a..4a277b8 100644
--- a/app/test-pmd/icmpecho.c
+++ b/app/test-pmd/icmpecho.c
@@ -330,12 +330,12 @@ reply_to_icmp_echo_rqsts(struct fwd_stream *fs)
nb_replies = 0;
for (i = 0; i < nb_rx; i++) {
pkt = pkts_burst[i];
- eth_h = (struct ether_hdr *) pkt->pkt.data;
+ eth_h = (struct ether_hdr *) pkt->data;
eth_type = RTE_BE_TO_CPU_16(eth_h->ether_type);
l2_len = sizeof(struct ether_hdr);
if (verbose_level > 0) {
printf("\nPort %d pkt-len=%u nb-segs=%u\n",
- fs->rx_port, pkt->pkt.pkt_len, pkt->pkt.nb_segs);
+ fs->rx_port, pkt->pkt_len, pkt->nb_segs);
ether_addr_dump(" ETH: src=", &eth_h->s_addr);
ether_addr_dump(" dst=", &eth_h->d_addr);
}
diff --git a/app/test-pmd/ieee1588fwd.c b/app/test-pmd/ieee1588fwd.c
index 44f0a89..4f18183 100644
--- a/app/test-pmd/ieee1588fwd.c
+++ b/app/test-pmd/ieee1588fwd.c
@@ -546,7 +546,7 @@ ieee1588_packet_fwd(struct fwd_stream *fs)
* Check that the received packet is a PTP packet that was detected
* by the hardware.
*/
- eth_hdr = (struct ether_hdr *)mb->pkt.data;
+ eth_hdr = (struct ether_hdr *)mb->data;
eth_type = rte_be_to_cpu_16(eth_hdr->ether_type);
if (! (mb->ol_flags & PKT_RX_IEEE1588_PTP)) {
if (eth_type == ETHER_TYPE_1588) {
@@ -557,7 +557,7 @@ ieee1588_packet_fwd(struct fwd_stream *fs)
printf("Port %u Received non PTP packet type=0x%4x "
"len=%u\n",
(unsigned) fs->rx_port, eth_type,
- (unsigned) mb->pkt.pkt_len);
+ (unsigned) mb->pkt_len);
}
rte_pktmbuf_free(mb);
return;
@@ -574,7 +574,7 @@ ieee1588_packet_fwd(struct fwd_stream *fs)
* Check that the received PTP packet is a PTP V2 packet of type
* PTP_SYNC_MESSAGE.
*/
- ptp_hdr = (struct ptpv2_msg *) ((char *) mb->pkt.data +
+ ptp_hdr = (struct ptpv2_msg *) ((char *) mb->data +
sizeof(struct ether_hdr));
if (ptp_hdr->version != 0x02) {
printf("Port %u Received PTP V2 Ethernet frame with wrong PTP"
diff --git a/app/test-pmd/macfwd-retry.c b/app/test-pmd/macfwd-retry.c
index 98fc037..687ff8d 100644
--- a/app/test-pmd/macfwd-retry.c
+++ b/app/test-pmd/macfwd-retry.c
@@ -119,7 +119,7 @@ pkt_burst_mac_retry_forward(struct fwd_stream *fs)
fs->rx_packets += nb_rx;
for (i = 0; i < nb_rx; i++) {
mb = pkts_burst[i];
- eth_hdr = (struct ether_hdr *) mb->pkt.data;
+ eth_hdr = (struct ether_hdr *) mb->data;
ether_addr_copy(&peer_eth_addrs[fs->peer_addr],
&eth_hdr->d_addr);
ether_addr_copy(&ports[fs->tx_port].eth_addr,
diff --git a/app/test-pmd/macfwd.c b/app/test-pmd/macfwd.c
index 3099792..8d7612c 100644
--- a/app/test-pmd/macfwd.c
+++ b/app/test-pmd/macfwd.c
@@ -110,15 +110,15 @@ pkt_burst_mac_forward(struct fwd_stream *fs)
txp = &ports[fs->tx_port];
for (i = 0; i < nb_rx; i++) {
mb = pkts_burst[i];
- eth_hdr = (struct ether_hdr *) mb->pkt.data;
+ eth_hdr = (struct ether_hdr *) mb->data;
ether_addr_copy(&peer_eth_addrs[fs->peer_addr],
&eth_hdr->d_addr);
ether_addr_copy(&ports[fs->tx_port].eth_addr,
&eth_hdr->s_addr);
mb->ol_flags = txp->tx_ol_flags;
- mb->pkt.vlan_macip.f.l2_len = sizeof(struct ether_hdr);
- mb->pkt.vlan_macip.f.l3_len = sizeof(struct ipv4_hdr);
- mb->pkt.vlan_macip.f.vlan_tci = txp->tx_vlan_id;
+ mb->vlan_macip.f.l2_len = sizeof(struct ether_hdr);
+ mb->vlan_macip.f.l3_len = sizeof(struct ipv4_hdr);
+ mb->vlan_macip.f.vlan_tci = txp->tx_vlan_id;
}
nb_tx = rte_eth_tx_burst(fs->tx_port, fs->tx_queue, pkts_burst, nb_rx);
fs->tx_packets += nb_tx;
diff --git a/app/test-pmd/macswap.c b/app/test-pmd/macswap.c
index 4ed6096..731f487 100644
--- a/app/test-pmd/macswap.c
+++ b/app/test-pmd/macswap.c
@@ -110,7 +110,7 @@ pkt_burst_mac_swap(struct fwd_stream *fs)
txp = &ports[fs->tx_port];
for (i = 0; i < nb_rx; i++) {
mb = pkts_burst[i];
- eth_hdr = (struct ether_hdr *) mb->pkt.data;
+ eth_hdr = (struct ether_hdr *) mb->data;
/* Swap dest and src mac addresses. */
ether_addr_copy(&eth_hdr->d_addr, &addr);
@@ -118,9 +118,9 @@ pkt_burst_mac_swap(struct fwd_stream *fs)
ether_addr_copy(&addr, &eth_hdr->s_addr);
mb->ol_flags = txp->tx_ol_flags;
- mb->pkt.vlan_macip.f.l2_len = sizeof(struct ether_hdr);
- mb->pkt.vlan_macip.f.l3_len = sizeof(struct ipv4_hdr);
- mb->pkt.vlan_macip.f.vlan_tci = txp->tx_vlan_id;
+ mb->vlan_macip.f.l2_len = sizeof(struct ether_hdr);
+ mb->vlan_macip.f.l3_len = sizeof(struct ipv4_hdr);
+ mb->vlan_macip.f.vlan_tci = txp->tx_vlan_id;
}
nb_tx = rte_eth_tx_burst(fs->tx_port, fs->tx_queue, pkts_burst, nb_rx);
fs->tx_packets += nb_tx;
diff --git a/app/test-pmd/rxonly.c b/app/test-pmd/rxonly.c
index 30f8195..b77c8ce 100644
--- a/app/test-pmd/rxonly.c
+++ b/app/test-pmd/rxonly.c
@@ -149,24 +149,24 @@ pkt_burst_receive(struct fwd_stream *fs)
rte_pktmbuf_free(mb);
continue;
}
- eth_hdr = (struct ether_hdr *) mb->pkt.data;
+ eth_hdr = (struct ether_hdr *) mb->data;
eth_type = RTE_BE_TO_CPU_16(eth_hdr->ether_type);
ol_flags = mb->ol_flags;
print_ether_addr(" src=", &eth_hdr->s_addr);
print_ether_addr(" - dst=", &eth_hdr->d_addr);
printf(" - type=0x%04x - length=%u - nb_segs=%d",
- eth_type, (unsigned) mb->pkt.pkt_len,
- (int)mb->pkt.nb_segs);
+ eth_type, (unsigned) mb->pkt_len,
+ (int)mb->nb_segs);
if (ol_flags & PKT_RX_RSS_HASH) {
- printf(" - RSS hash=0x%x", (unsigned) mb->pkt.hash.rss);
+ printf(" - RSS hash=0x%x", (unsigned) mb->hash.rss);
printf(" - RSS queue=0x%x",(unsigned) fs->rx_queue);
}
else if (ol_flags & PKT_RX_FDIR)
printf(" - FDIR hash=0x%x - FDIR id=0x%x ",
- mb->pkt.hash.fdir.hash, mb->pkt.hash.fdir.id);
+ mb->hash.fdir.hash, mb->hash.fdir.id);
if (ol_flags & PKT_RX_VLAN_PKT)
printf(" - VLAN tci=0x%x",
- mb->pkt.vlan_macip.f.vlan_tci);
+ mb->vlan_macip.f.vlan_tci);
printf("\n");
if (ol_flags != 0) {
int rxf;
diff --git a/app/test-pmd/testpmd.c b/app/test-pmd/testpmd.c
index cf5e133..ccbb8bc 100644
--- a/app/test-pmd/testpmd.c
+++ b/app/test-pmd/testpmd.c
@@ -404,10 +404,10 @@ testpmd_mbuf_ctor(struct rte_mempool *mp,
mb_ctor_arg->seg_buf_offset);
mb->buf_len = mb_ctor_arg->seg_buf_size;
mb->ol_flags = 0;
- mb->pkt.data = (char *) mb->buf_addr + RTE_PKTMBUF_HEADROOM;
- mb->pkt.nb_segs = 1;
- mb->pkt.vlan_macip.data = 0;
- mb->pkt.hash.rss = 0;
+ mb->data = (char *) mb->buf_addr + RTE_PKTMBUF_HEADROOM;
+ mb->nb_segs = 1;
+ mb->vlan_macip.data = 0;
+ mb->hash.rss = 0;
}
static void
diff --git a/app/test-pmd/testpmd.h b/app/test-pmd/testpmd.h
index 2bdb1a2..c80ea09 100644
--- a/app/test-pmd/testpmd.h
+++ b/app/test-pmd/testpmd.h
@@ -60,7 +60,7 @@ int main(int argc, char **argv);
* The maximum number of segments per packet is used when creating
* scattered transmit packets composed of a list of mbufs.
*/
-#define RTE_MAX_SEGS_PER_PKT 255 /**< pkt.nb_segs is a 8-bit unsigned char. */
+#define RTE_MAX_SEGS_PER_PKT 255 /**< nb_segs is an 8-bit unsigned char. */
#define MAX_PKT_BURST 512
#define DEF_PKT_BURST 16
diff --git a/app/test-pmd/txonly.c b/app/test-pmd/txonly.c
index 9d2829d..96c2844 100644
--- a/app/test-pmd/txonly.c
+++ b/app/test-pmd/txonly.c
@@ -106,18 +106,18 @@ copy_buf_to_pkt_segs(void* buf, unsigned len, struct rte_mbuf *pkt,
unsigned copy_len;
seg = pkt;
- while (offset >= seg->pkt.data_len) {
- offset -= seg->pkt.data_len;
- seg = seg->pkt.next;
+ while (offset >= seg->data_len) {
+ offset -= seg->data_len;
+ seg = seg->next;
}
- copy_len = seg->pkt.data_len - offset;
- seg_buf = ((char *) seg->pkt.data + offset);
+ copy_len = seg->data_len - offset;
+ seg_buf = ((char *) seg->data + offset);
while (len > copy_len) {
rte_memcpy(seg_buf, buf, (size_t) copy_len);
len -= copy_len;
buf = ((char*) buf + copy_len);
- seg = seg->pkt.next;
- seg_buf = seg->pkt.data;
+ seg = seg->next;
+ seg_buf = seg->data;
}
rte_memcpy(seg_buf, buf, (size_t) len);
}
@@ -125,8 +125,8 @@ copy_buf_to_pkt_segs(void* buf, unsigned len, struct rte_mbuf *pkt,
static inline void
copy_buf_to_pkt(void* buf, unsigned len, struct rte_mbuf *pkt, unsigned offset)
{
- if (offset + len <= pkt->pkt.data_len) {
- rte_memcpy(((char *) pkt->pkt.data + offset), buf, (size_t) len);
+ if (offset + len <= pkt->data_len) {
+ rte_memcpy(((char *) pkt->data + offset), buf, (size_t) len);
return;
}
copy_buf_to_pkt_segs(buf, len, pkt, offset);
@@ -225,19 +225,19 @@ pkt_burst_transmit(struct fwd_stream *fs)
return;
break;
}
- pkt->pkt.data_len = tx_pkt_seg_lengths[0];
+ pkt->data_len = tx_pkt_seg_lengths[0];
pkt_seg = pkt;
for (i = 1; i < tx_pkt_nb_segs; i++) {
- pkt_seg->pkt.next = tx_mbuf_alloc(mbp);
- if (pkt_seg->pkt.next == NULL) {
- pkt->pkt.nb_segs = i;
+ pkt_seg->next = tx_mbuf_alloc(mbp);
+ if (pkt_seg->next == NULL) {
+ pkt->nb_segs = i;
rte_pktmbuf_free(pkt);
goto nomore_mbuf;
}
- pkt_seg = pkt_seg->pkt.next;
- pkt_seg->pkt.data_len = tx_pkt_seg_lengths[i];
+ pkt_seg = pkt_seg->next;
+ pkt_seg->data_len = tx_pkt_seg_lengths[i];
}
- pkt_seg->pkt.next = NULL; /* Last segment of packet. */
+ pkt_seg->next = NULL; /* Last segment of packet. */
/*
* Initialize Ethernet header.
@@ -260,12 +260,12 @@ pkt_burst_transmit(struct fwd_stream *fs)
* Complete first mbuf of packet and append it to the
* burst of packets to be transmitted.
*/
- pkt->pkt.nb_segs = tx_pkt_nb_segs;
- pkt->pkt.pkt_len = tx_pkt_length;
+ pkt->nb_segs = tx_pkt_nb_segs;
+ pkt->pkt_len = tx_pkt_length;
pkt->ol_flags = ol_flags;
- pkt->pkt.vlan_macip.f.vlan_tci = vlan_tci;
- pkt->pkt.vlan_macip.f.l2_len = sizeof(struct ether_hdr);
- pkt->pkt.vlan_macip.f.l3_len = sizeof(struct ipv4_hdr);
+ pkt->vlan_macip.f.vlan_tci = vlan_tci;
+ pkt->vlan_macip.f.l2_len = sizeof(struct ether_hdr);
+ pkt->vlan_macip.f.l3_len = sizeof(struct ipv4_hdr);
pkts_burst[nb_pkt] = pkt;
}
nb_tx = rte_eth_tx_burst(fs->tx_port, fs->tx_queue, pkts_burst, nb_pkt);
diff --git a/app/test/commands.c b/app/test/commands.c
index 1595b81..d36aa85 100644
--- a/app/test/commands.c
+++ b/app/test/commands.c
@@ -261,7 +261,6 @@ dump_struct_sizes(void)
{
#define DUMP_SIZE(t) printf("sizeof(" #t ") = %u\n", (unsigned)sizeof(t));
DUMP_SIZE(struct rte_mbuf);
- DUMP_SIZE(struct rte_pktmbuf);
DUMP_SIZE(struct rte_mempool);
DUMP_SIZE(struct rte_ring);
#undef DUMP_SIZE
diff --git a/app/test/test_mbuf.c b/app/test/test_mbuf.c
index 4e3a198..097e75d 100644
--- a/app/test/test_mbuf.c
+++ b/app/test/test_mbuf.c
@@ -344,8 +344,8 @@ testclone_testupdate_testdetach(void)
GOTO_FAIL("cannot clone data\n");
rte_pktmbuf_free(clone);
- mc->pkt.next = rte_pktmbuf_alloc(pktmbuf_pool);
- if(mc->pkt.next == NULL)
+ mc->next = rte_pktmbuf_alloc(pktmbuf_pool);
+ if(mc->next == NULL)
GOTO_FAIL("Next Pkt Null\n");
clone = rte_pktmbuf_clone(mc, pktmbuf_pool);
@@ -432,7 +432,7 @@ test_pktmbuf_pool_ptr(void)
printf("rte_pktmbuf_alloc() failed (%u)\n", i);
ret = -1;
}
- m[i]->pkt.data = RTE_PTR_ADD(m[i]->pkt.data, 64);
+ m[i]->data = RTE_PTR_ADD(m[i]->data, 64);
}
/* free them */
@@ -451,8 +451,8 @@ test_pktmbuf_pool_ptr(void)
printf("rte_pktmbuf_alloc() failed (%u)\n", i);
ret = -1;
}
- if (m[i]->pkt.data != RTE_PTR_ADD(m[i]->buf_addr, RTE_PKTMBUF_HEADROOM)) {
- printf ("pkt.data pointer not set properly\n");
+ if (m[i]->data != RTE_PTR_ADD(m[i]->buf_addr, RTE_PKTMBUF_HEADROOM)) {
+ printf ("data pointer not set properly\n");
ret = -1;
}
}
@@ -493,7 +493,7 @@ test_pktmbuf_free_segment(void)
mb = m[i];
while(mb != NULL) {
mt = mb;
- mb = mb->pkt.next;
+ mb = mb->next;
rte_pktmbuf_free_seg(mt);
}
}
diff --git a/app/test/test_sched.c b/app/test/test_sched.c
index 0de5b1c..f729ade 100644
--- a/app/test/test_sched.c
+++ b/app/test/test_sched.c
@@ -148,8 +148,8 @@ prepare_pkt(struct rte_mbuf *mbuf)
rte_sched_port_pkt_write(mbuf, SUBPORT, PIPE, TC, QUEUE, e_RTE_METER_YELLOW);
/* 64 byte packet */
- mbuf->pkt.pkt_len = 60;
- mbuf->pkt.data_len = 60;
+ mbuf->pkt_len = 60;
+ mbuf->data_len = 60;
}
diff --git a/examples/dpdk_qat/crypto.c b/examples/dpdk_qat/crypto.c
index 7606d3d..e519e25 100644
--- a/examples/dpdk_qat/crypto.c
+++ b/examples/dpdk_qat/crypto.c
@@ -183,7 +183,7 @@ struct glob_keys g_crypto_hash_keys = {
*
*/
#define PACKET_DATA_START_PHYS(p) \
- ((p)->buf_physaddr + ((char *)p->pkt.data - (char *)p->buf_addr))
+ ((p)->buf_physaddr + ((char *)p->data - (char *)p->buf_addr))
/*
* A fixed offset to where the crypto is to be performed, which is the first
@@ -773,7 +773,7 @@ enum crypto_result
crypto_encrypt(struct rte_mbuf *rte_buff, enum cipher_alg c, enum hash_alg h)
{
CpaCySymDpOpData *opData =
- (CpaCySymDpOpData *) ((char *) (rte_buff->pkt.data)
+ (CpaCySymDpOpData *) ((char *) (rte_buff->data)
+ CRYPTO_OFFSET_TO_OPDATA);
uint32_t lcore_id;
@@ -785,7 +785,7 @@ crypto_encrypt(struct rte_mbuf *rte_buff, enum cipher_alg c, enum hash_alg h)
bzero(opData, sizeof(CpaCySymDpOpData));
opData->srcBuffer = opData->dstBuffer = PACKET_DATA_START_PHYS(rte_buff);
- opData->srcBufferLen = opData->dstBufferLen = rte_buff->pkt.data_len;
+ opData->srcBufferLen = opData->dstBufferLen = rte_buff->data_len;
opData->sessionCtx = qaCoreConf[lcore_id].encryptSessionHandleTbl[c][h];
opData->thisPhys = PACKET_DATA_START_PHYS(rte_buff)
+ CRYPTO_OFFSET_TO_OPDATA;
@@ -805,7 +805,7 @@ crypto_encrypt(struct rte_mbuf *rte_buff, enum cipher_alg c, enum hash_alg h)
opData->ivLenInBytes = IV_LENGTH_8_BYTES;
opData->cryptoStartSrcOffsetInBytes = CRYPTO_START_OFFSET;
- opData->messageLenToCipherInBytes = rte_buff->pkt.data_len
+ opData->messageLenToCipherInBytes = rte_buff->data_len
- CRYPTO_START_OFFSET;
/*
* Work around for padding, message length has to be a multiple of
@@ -818,7 +818,7 @@ crypto_encrypt(struct rte_mbuf *rte_buff, enum cipher_alg c, enum hash_alg h)
if (NO_HASH != h) {
opData->hashStartSrcOffsetInBytes = HASH_START_OFFSET;
- opData->messageLenToHashInBytes = rte_buff->pkt.data_len
+ opData->messageLenToHashInBytes = rte_buff->data_len
- HASH_START_OFFSET;
/*
* Work around for padding, message length has to be a multiple of block
@@ -831,7 +831,7 @@ crypto_encrypt(struct rte_mbuf *rte_buff, enum cipher_alg c, enum hash_alg h)
* Assumption: Ok ignore the passed digest pointer and place HMAC at end
* of packet.
*/
- opData->digestResult = rte_buff->buf_physaddr + rte_buff->pkt.data_len;
+ opData->digestResult = rte_buff->buf_physaddr + rte_buff->data_len;
}
if (CPA_STATUS_SUCCESS != enqueueOp(opData, lcore_id)) {
@@ -848,7 +848,7 @@ enum crypto_result
crypto_decrypt(struct rte_mbuf *rte_buff, enum cipher_alg c, enum hash_alg h)
{
- CpaCySymDpOpData *opData = (void*) (((char *) rte_buff->pkt.data)
+ CpaCySymDpOpData *opData = (void*) (((char *) rte_buff->data)
+ CRYPTO_OFFSET_TO_OPDATA);
uint32_t lcore_id;
@@ -860,7 +860,7 @@ crypto_decrypt(struct rte_mbuf *rte_buff, enum cipher_alg c, enum hash_alg h)
bzero(opData, sizeof(CpaCySymDpOpData));
opData->dstBuffer = opData->srcBuffer = PACKET_DATA_START_PHYS(rte_buff);
- opData->dstBufferLen = opData->srcBufferLen = rte_buff->pkt.data_len;
+ opData->dstBufferLen = opData->srcBufferLen = rte_buff->data_len;
opData->thisPhys = PACKET_DATA_START_PHYS(rte_buff)
+ CRYPTO_OFFSET_TO_OPDATA;
opData->sessionCtx = qaCoreConf[lcore_id].decryptSessionHandleTbl[c][h];
@@ -880,7 +880,7 @@ crypto_decrypt(struct rte_mbuf *rte_buff, enum cipher_alg c, enum hash_alg h)
opData->ivLenInBytes = IV_LENGTH_8_BYTES;
opData->cryptoStartSrcOffsetInBytes = CRYPTO_START_OFFSET;
- opData->messageLenToCipherInBytes = rte_buff->pkt.data_len
+ opData->messageLenToCipherInBytes = rte_buff->data_len
- CRYPTO_START_OFFSET;
/*
@@ -892,7 +892,7 @@ crypto_decrypt(struct rte_mbuf *rte_buff, enum cipher_alg c, enum hash_alg h)
}
if (NO_HASH != h) {
opData->hashStartSrcOffsetInBytes = HASH_START_OFFSET;
- opData->messageLenToHashInBytes = rte_buff->pkt.data_len
+ opData->messageLenToHashInBytes = rte_buff->data_len
- HASH_START_OFFSET;
/*
* Work around for padding, message length has to be a multiple of block
@@ -900,7 +900,7 @@ crypto_decrypt(struct rte_mbuf *rte_buff, enum cipher_alg c, enum hash_alg h)
*/
opData->messageLenToHashInBytes -= opData->messageLenToHashInBytes
% HASH_BLOCK_DEFAULT_SIZE;
- opData->digestResult = rte_buff->buf_physaddr + rte_buff->pkt.data_len;
+ opData->digestResult = rte_buff->buf_physaddr + rte_buff->data_len;
}
if (CPA_STATUS_SUCCESS != enqueueOp(opData, lcore_id)) {
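/*
 * [Editor's sketch, not part of the patch] The PACKET_DATA_START_PHYS
 * macro above depends on m->data always pointing inside the buffer that
 * starts at m->buf_addr, so the physical address of the payload is
 * buf_physaddr plus the same byte offset. A minimal standalone helper
 * (hypothetical name) showing the arithmetic with the post-series
 * field names:
 */
#include <rte_mbuf.h>

static inline uint64_t
mbuf_data_phys(const struct rte_mbuf *m)
{
	/* offset of the first payload byte within the segment buffer */
	uint64_t off = (uint64_t)((const char *)m->data -
				  (const char *)m->buf_addr);
	return m->buf_physaddr + off;
	/* on a freshly reset mbuf: buf_physaddr + RTE_PKTMBUF_HEADROOM */
}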
diff --git a/examples/dpdk_qat/main.c b/examples/dpdk_qat/main.c
index cdf6832..1401d6b 100644
--- a/examples/dpdk_qat/main.c
+++ b/examples/dpdk_qat/main.c
@@ -384,7 +384,7 @@ main_loop(__attribute__((unused)) void *dummy)
}
}
- port = dst_ports[pkt->pkt.in_port];
+ port = dst_ports[pkt->in_port];
/* Transmit the packet */
nic_tx_send_packet(pkt, (uint8_t)port);
diff --git a/examples/exception_path/main.c b/examples/exception_path/main.c
index 0bc149d..d9a85b5 100644
--- a/examples/exception_path/main.c
+++ b/examples/exception_path/main.c
@@ -302,16 +302,16 @@ main_loop(__attribute__((unused)) void *arg)
if (m == NULL)
continue;
- ret = read(tap_fd, m->pkt.data, MAX_PACKET_SZ);
+ ret = read(tap_fd, m->data, MAX_PACKET_SZ);
lcore_stats[lcore_id].rx++;
if (unlikely(ret < 0)) {
FATAL_ERROR("Reading from %s interface failed",
tap_name);
}
- m->pkt.nb_segs = 1;
- m->pkt.next = NULL;
- m->pkt.pkt_len = (uint16_t)ret;
- m->pkt.data_len = (uint16_t)ret;
+ m->nb_segs = 1;
+ m->next = NULL;
+ m->pkt_len = (uint16_t)ret;
+ m->data_len = (uint16_t)ret;
ret = rte_eth_tx_burst(port_ids[lcore_id], 0, &m, 1);
if (unlikely(ret < 1)) {
rte_pktmbuf_free(m);
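/*
 * [Editor's sketch, not part of the patch] The exception_path hunk above
 * shows every field an application must set by hand after filling a raw
 * buffer with read(): for a single-segment mbuf, nb_segs is 1, next is
 * NULL, and pkt_len equals data_len. A hypothetical debug check for
 * those invariants:
 */
#include <assert.h>
#include <rte_mbuf.h>

static inline void
check_single_seg_invariants(const struct rte_mbuf *m)
{
	assert(m->nb_segs == 1);
	assert(m->next == NULL);
	assert(m->pkt_len == m->data_len);	/* one segment: totals match */
}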
diff --git a/examples/ip_reassembly/ipv4_rsmbl.h b/examples/ip_reassembly/ipv4_rsmbl.h
index 58ec1ee..9b647fb 100644
--- a/examples/ip_reassembly/ipv4_rsmbl.h
+++ b/examples/ip_reassembly/ipv4_rsmbl.h
@@ -168,20 +168,20 @@ ipv4_frag_chain(struct rte_mbuf *mn, struct rte_mbuf *mp)
struct rte_mbuf *ms;
/* adjust start of the last fragment data. */
- rte_pktmbuf_adj(mp, (uint16_t)(mp->pkt.vlan_macip.f.l2_len +
- mp->pkt.vlan_macip.f.l3_len));
+ rte_pktmbuf_adj(mp, (uint16_t)(mp->vlan_macip.f.l2_len +
+ mp->vlan_macip.f.l3_len));
/* chain two fragments. */
ms = rte_pktmbuf_lastseg(mn);
- ms->pkt.next = mp;
+ ms->next = mp;
/* accumulate number of segments and total length. */
- mn->pkt.nb_segs = (uint8_t)(mn->pkt.nb_segs + mp->pkt.nb_segs);
- mn->pkt.pkt_len += mp->pkt.pkt_len;
+ mn->nb_segs = (uint8_t)(mn->nb_segs + mp->nb_segs);
+ mn->pkt_len += mp->pkt_len;
/* reset pkt_len and nb_segs for chained fragment. */
- mp->pkt.pkt_len = mp->pkt.data_len;
- mp->pkt.nb_segs = 1;
+ mp->pkt_len = mp->data_len;
+ mp->nb_segs = 1;
}
/*
@@ -233,10 +233,10 @@ ipv4_frag_reassemble(const struct ipv4_frag_pkt *fp)
/* update ipv4 header for the reassembled packet */
ip_hdr = (struct ipv4_hdr*)(rte_pktmbuf_mtod(m, uint8_t *) +
- m->pkt.vlan_macip.f.l2_len);
+ m->vlan_macip.f.l2_len);
ip_hdr->total_length = rte_cpu_to_be_16((uint16_t)(fp->total_size +
- m->pkt.vlan_macip.f.l3_len));
+ m->vlan_macip.f.l3_len));
ip_hdr->fragment_offset = (uint16_t)(ip_hdr->fragment_offset &
rte_cpu_to_be_16(IPV4_HDR_DF_FLAG));
ip_hdr->hdr_checksum = 0;
@@ -377,7 +377,7 @@ ipv4_frag_mbuf(struct ipv4_frag_tbl *tbl, struct ipv4_frag_death_row *dr,
ip_ofs *= IPV4_HDR_OFFSET_UNITS;
ip_len = (uint16_t)(rte_be_to_cpu_16(ip_hdr->total_length) -
- mb->pkt.vlan_macip.f.l3_len);
+ mb->vlan_macip.f.l3_len);
IPV4_FRAG_LOG(DEBUG, "%s:%d:\n"
"mbuf: %p, tms: %" PRIu64
diff --git a/examples/ip_reassembly/main.c b/examples/ip_reassembly/main.c
index 4880a5f..5c5626a 100644
--- a/examples/ip_reassembly/main.c
+++ b/examples/ip_reassembly/main.c
@@ -655,7 +655,7 @@ l3fwd_simple_forward(struct rte_mbuf *m, uint8_t portid, uint32_t queue,
#ifdef DO_RFC_1812_CHECKS
/* Check to make sure the packet is valid (RFC1812) */
- if (is_valid_ipv4_pkt(ipv4_hdr, m->pkt.pkt_len) < 0) {
+ if (is_valid_ipv4_pkt(ipv4_hdr, m->pkt_len) < 0) {
rte_pktmbuf_free(m);
return;
}
@@ -680,8 +680,8 @@ l3fwd_simple_forward(struct rte_mbuf *m, uint8_t portid, uint32_t queue,
dr = &qconf->death_row;
/* prepare mbuf: setup l2_len/l3_len. */
- m->pkt.vlan_macip.f.l2_len = sizeof(*eth_hdr);
- m->pkt.vlan_macip.f.l3_len = sizeof(*ipv4_hdr);
+ m->vlan_macip.f.l2_len = sizeof(*eth_hdr);
+ m->vlan_macip.f.l3_len = sizeof(*ipv4_hdr);
/* process this fragment. */
if ((mo = ipv4_frag_mbuf(tbl, dr, m, tms, ipv4_hdr,
diff --git a/examples/ipv4_frag/main.c b/examples/ipv4_frag/main.c
index 93664c8..b950b87 100644
--- a/examples/ipv4_frag/main.c
+++ b/examples/ipv4_frag/main.c
@@ -257,7 +257,7 @@ l3fwd_simple_forward(struct rte_mbuf *m, uint8_t port_in)
len = qconf->tx_mbufs[port_out].len;
/* if we don't need to do any fragmentation */
- if (likely (IPV4_MTU_DEFAULT >= m->pkt.pkt_len)) {
+ if (likely (IPV4_MTU_DEFAULT >= m->pkt_len)) {
qconf->tx_mbufs[port_out].m_table[len] = m;
len2 = 1;
} else {
@@ -283,7 +283,7 @@ l3fwd_simple_forward(struct rte_mbuf *m, uint8_t port_in)
rte_panic("No headroom in mbuf.\n");
}
- m->pkt.vlan_macip.f.l2_len = sizeof(struct ether_hdr);
+ m->vlan_macip.f.l2_len = sizeof(struct ether_hdr);
ether_addr_copy(&remote_eth_addr, ð_hdr->d_addr);
ether_addr_copy(&ports_eth_addr[port_out], ð_hdr->s_addr);
diff --git a/examples/ipv4_frag/rte_ipv4_frag.h b/examples/ipv4_frag/rte_ipv4_frag.h
index 84fa9c9..6234224 100644
--- a/examples/ipv4_frag/rte_ipv4_frag.h
+++ b/examples/ipv4_frag/rte_ipv4_frag.h
@@ -145,9 +145,9 @@ static inline int32_t rte_ipv4_fragmentation(struct rte_mbuf *pkt_in,
/* Fragment size should be a multiple of 8. */
RTE_IPV4_FRAG_ASSERT(IPV4_MAX_FRAGS_PER_PACKET * frag_size >=
- (uint16_t)(pkt_in->pkt.pkt_len - sizeof (struct ipv4_hdr)));
+ (uint16_t)(pkt_in->pkt_len - sizeof (struct ipv4_hdr)));
- in_hdr = (struct ipv4_hdr*) pkt_in->pkt.data;
+ in_hdr = (struct ipv4_hdr*) pkt_in->data;
flag_offset = rte_cpu_to_be_16(in_hdr->fragment_offset);
/* If Don't Fragment flag is set */
@@ -156,7 +156,7 @@ static inline int32_t rte_ipv4_fragmentation(struct rte_mbuf *pkt_in,
/* Check that pkts_out is big enough to hold all fragments */
if (unlikely (frag_size * nb_pkts_out <
- (uint16_t)(pkt_in->pkt.pkt_len - sizeof (struct ipv4_hdr))))
+ (uint16_t)(pkt_in->pkt_len - sizeof (struct ipv4_hdr))))
return (-EINVAL);
in_seg = pkt_in;
@@ -178,8 +178,8 @@ static inline int32_t rte_ipv4_fragmentation(struct rte_mbuf *pkt_in,
}
/* Reserve space for the IP header that will be built later */
- out_pkt->pkt.data_len = sizeof(struct ipv4_hdr);
- out_pkt->pkt.pkt_len = sizeof(struct ipv4_hdr);
+ out_pkt->data_len = sizeof(struct ipv4_hdr);
+ out_pkt->pkt_len = sizeof(struct ipv4_hdr);
out_seg_prev = out_pkt;
more_out_segs = 1;
@@ -194,30 +194,30 @@ static inline int32_t rte_ipv4_fragmentation(struct rte_mbuf *pkt_in,
__free_fragments(pkts_out, out_pkt_pos);
return (-ENOMEM);
}
- out_seg_prev->pkt.next = out_seg;
+ out_seg_prev->next = out_seg;
out_seg_prev = out_seg;
/* Prepare indirect buffer */
rte_pktmbuf_attach(out_seg, in_seg);
- len = mtu_size - out_pkt->pkt.pkt_len;
- if (len > (in_seg->pkt.data_len - in_seg_data_pos)) {
- len = in_seg->pkt.data_len - in_seg_data_pos;
+ len = mtu_size - out_pkt->pkt_len;
+ if (len > (in_seg->data_len - in_seg_data_pos)) {
+ len = in_seg->data_len - in_seg_data_pos;
}
- out_seg->pkt.data = (char*) in_seg->pkt.data + (uint16_t)in_seg_data_pos;
- out_seg->pkt.data_len = (uint16_t)len;
- out_pkt->pkt.pkt_len = (uint16_t)(len +
- out_pkt->pkt.pkt_len);
- out_pkt->pkt.nb_segs += 1;
+ out_seg->data = (char*) in_seg->data + (uint16_t)in_seg_data_pos;
+ out_seg->data_len = (uint16_t)len;
+ out_pkt->pkt_len = (uint16_t)(len +
+ out_pkt->pkt_len);
+ out_pkt->nb_segs += 1;
in_seg_data_pos += len;
/* Current output packet (i.e. fragment) done ? */
- if (unlikely(out_pkt->pkt.pkt_len >= mtu_size)) {
+ if (unlikely(out_pkt->pkt_len >= mtu_size)) {
more_out_segs = 0;
}
/* Current input segment done ? */
- if (unlikely(in_seg_data_pos == in_seg->pkt.data_len)) {
- in_seg = in_seg->pkt.next;
+ if (unlikely(in_seg_data_pos == in_seg->data_len)) {
+ in_seg = in_seg->next;
in_seg_data_pos = 0;
if (unlikely(in_seg == NULL)) {
@@ -228,17 +228,17 @@ static inline int32_t rte_ipv4_fragmentation(struct rte_mbuf *pkt_in,
/* Build the IP header */
- out_hdr = (struct ipv4_hdr*) out_pkt->pkt.data;
+ out_hdr = (struct ipv4_hdr*) out_pkt->data;
__fill_ipv4hdr_frag(out_hdr, in_hdr,
- (uint16_t)out_pkt->pkt.pkt_len,
+ (uint16_t)out_pkt->pkt_len,
flag_offset, fragment_offset, more_in_segs);
fragment_offset = (uint16_t)(fragment_offset +
- out_pkt->pkt.pkt_len - sizeof(struct ipv4_hdr));
+ out_pkt->pkt_len - sizeof(struct ipv4_hdr));
out_pkt->ol_flags |= PKT_TX_IP_CKSUM;
- out_pkt->pkt.vlan_macip.f.l3_len = sizeof(struct ipv4_hdr);
+ out_pkt->vlan_macip.f.l3_len = sizeof(struct ipv4_hdr);
/* Write the fragment to the output list */
pkts_out[out_pkt_pos] = out_pkt;
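/*
 * [Editor's sketch, not part of the patch] The fragmentation loop above
 * never copies payload: each output segment is an indirect mbuf attached
 * to an input segment, then pointed at the right slice of the shared
 * buffer. The per-fragment step, condensed into a hypothetical helper:
 */
#include <rte_mbuf.h>

static void
ref_slice(struct rte_mbuf *out_seg, struct rte_mbuf *in_seg,
	  uint16_t pos, uint16_t len)
{
	rte_pktmbuf_attach(out_seg, in_seg);	/* share in_seg's buffer */
	out_seg->data = (char *)in_seg->data + pos;
	out_seg->data_len = len;
}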
diff --git a/examples/ipv4_multicast/main.c b/examples/ipv4_multicast/main.c
index 3967d7a..9c57ce0 100644
--- a/examples/ipv4_multicast/main.c
+++ b/examples/ipv4_multicast/main.c
@@ -329,17 +329,17 @@ mcast_out_pkt(struct rte_mbuf *pkt, int use_clone)
}
/* prepend new header */
- hdr->pkt.next = pkt;
+ hdr->next = pkt;
/* update header's fields */
- hdr->pkt.pkt_len = (uint16_t)(hdr->pkt.data_len + pkt->pkt.pkt_len);
- hdr->pkt.nb_segs = (uint8_t)(pkt->pkt.nb_segs + 1);
+ hdr->pkt_len = (uint16_t)(hdr->data_len + pkt->pkt_len);
+ hdr->nb_segs = (uint8_t)(pkt->nb_segs + 1);
/* copy metadata from source packet*/
- hdr->pkt.in_port = pkt->pkt.in_port;
- hdr->pkt.vlan_macip = pkt->pkt.vlan_macip;
- hdr->pkt.hash = pkt->pkt.hash;
+ hdr->in_port = pkt->in_port;
+ hdr->vlan_macip = pkt->vlan_macip;
+ hdr->hash = pkt->hash;
hdr->ol_flags = pkt->ol_flags;
@@ -412,7 +412,7 @@ mcast_forward(struct rte_mbuf *m, struct lcore_queue_conf *qconf)
/* Should we use rte_pktmbuf_clone() or not. */
use_clone = (port_num <= MCAST_CLONE_PORTS &&
- m->pkt.nb_segs <= MCAST_CLONE_SEGS);
+ m->nb_segs <= MCAST_CLONE_SEGS);
/* Mark all packet's segments as referenced port_num times */
if (use_clone == 0)
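/*
 * [Editor's sketch, not part of the patch] mcast_out_pkt() above shows
 * the mirror case of chaining: when a new mbuf is prepended as the
 * packet head, the first-segment-only metadata (in_port, vlan_macip,
 * hash, ol_flags) must be copied over from the old head. A hypothetical
 * reduction, assuming hdr->data_len is already set by the caller:
 */
#include <rte_mbuf.h>

static void
prepend_header_seg(struct rte_mbuf *hdr, struct rte_mbuf *pkt)
{
	hdr->next = pkt;		/* old head becomes second segment */
	hdr->pkt_len = (uint16_t)(hdr->data_len + pkt->pkt_len);
	hdr->nb_segs = (uint8_t)(pkt->nb_segs + 1);

	/* move the head-only metadata to the new first segment */
	hdr->in_port = pkt->in_port;
	hdr->vlan_macip = pkt->vlan_macip;
	hdr->hash = pkt->hash;
	hdr->ol_flags = pkt->ol_flags;
}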
diff --git a/examples/l3fwd-power/main.c b/examples/l3fwd-power/main.c
index 219f802..a991809 100644
--- a/examples/l3fwd-power/main.c
+++ b/examples/l3fwd-power/main.c
@@ -687,7 +687,7 @@ l3fwd_simple_forward(struct rte_mbuf *m, uint8_t portid,
#ifdef DO_RFC_1812_CHECKS
/* Check to make sure the packet is valid (RFC1812) */
- if (is_valid_ipv4_pkt(ipv4_hdr, m->pkt.pkt_len) < 0) {
+ if (is_valid_ipv4_pkt(ipv4_hdr, m->pkt_len) < 0) {
rte_pktmbuf_free(m);
return;
}
diff --git a/examples/l3fwd-vf/main.c b/examples/l3fwd-vf/main.c
index fb811fa..7420d89 100644
--- a/examples/l3fwd-vf/main.c
+++ b/examples/l3fwd-vf/main.c
@@ -489,7 +489,7 @@ l3fwd_simple_forward(struct rte_mbuf *m, uint8_t portid, lookup_struct_t * l3fwd
#ifdef DO_RFC_1812_CHECKS
/* Check to make sure the packet is valid (RFC1812) */
- if (is_valid_ipv4_pkt(ipv4_hdr, m->pkt.pkt_len) < 0) {
+ if (is_valid_ipv4_pkt(ipv4_hdr, m->pkt_len) < 0) {
rte_pktmbuf_free(m);
return;
}
diff --git a/examples/l3fwd/main.c b/examples/l3fwd/main.c
index 1ba4ca2..3eff4ea 100755
--- a/examples/l3fwd/main.c
+++ b/examples/l3fwd/main.c
@@ -705,19 +705,19 @@ simple_ipv4_fwd_4pkts(struct rte_mbuf* m[4], uint8_t portid, struct lcore_conf *
#ifdef DO_RFC_1812_CHECKS
/* Check to make sure the packet is valid (RFC1812) */
uint8_t valid_mask = MASK_ALL_PKTS;
- if (is_valid_ipv4_pkt(ipv4_hdr[0], m[0]->pkt.pkt_len) < 0) {
+ if (is_valid_ipv4_pkt(ipv4_hdr[0], m[0]->pkt_len) < 0) {
rte_pktmbuf_free(m[0]);
valid_mask &= EXECLUDE_1ST_PKT;
}
- if (is_valid_ipv4_pkt(ipv4_hdr[1], m[1]->pkt.pkt_len) < 0) {
+ if (is_valid_ipv4_pkt(ipv4_hdr[1], m[1]->pkt_len) < 0) {
rte_pktmbuf_free(m[1]);
valid_mask &= EXECLUDE_2ND_PKT;
}
- if (is_valid_ipv4_pkt(ipv4_hdr[2], m[2]->pkt.pkt_len) < 0) {
+ if (is_valid_ipv4_pkt(ipv4_hdr[2], m[2]->pkt_len) < 0) {
rte_pktmbuf_free(m[2]);
valid_mask &= EXECLUDE_3RD_PKT;
}
- if (is_valid_ipv4_pkt(ipv4_hdr[3], m[3]->pkt.pkt_len) < 0) {
+ if (is_valid_ipv4_pkt(ipv4_hdr[3], m[3]->pkt_len) < 0) {
rte_pktmbuf_free(m[3]);
valid_mask &= EXECLUDE_4TH_PKT;
}
@@ -905,7 +905,7 @@ l3fwd_simple_forward(struct rte_mbuf *m, uint8_t portid, struct lcore_conf *qcon
#ifdef DO_RFC_1812_CHECKS
/* Check to make sure the packet is valid (RFC1812) */
- if (is_valid_ipv4_pkt(ipv4_hdr, m->pkt.pkt_len) < 0) {
+ if (is_valid_ipv4_pkt(ipv4_hdr, m->pkt_len) < 0) {
rte_pktmbuf_free(m);
return;
}
diff --git a/examples/load_balancer/runtime.c b/examples/load_balancer/runtime.c
index e85abdb..bfa7c58 100644
--- a/examples/load_balancer/runtime.c
+++ b/examples/load_balancer/runtime.c
@@ -540,7 +540,7 @@ app_lcore_worker(
ipv4_dst = rte_be_to_cpu_32(ipv4_hdr->dst_addr);
if (unlikely(rte_lpm_lookup(lp->lpm_table, ipv4_dst, &port) != 0)) {
- port = pkt->pkt.in_port;
+ port = pkt->in_port;
}
pos = lp->mbuf_out[port].n_mbufs;
diff --git a/examples/multi_process/client_server_mp/mp_client/client.c b/examples/multi_process/client_server_mp/mp_client/client.c
index 7543db4..187c80f 100644
--- a/examples/multi_process/client_server_mp/mp_client/client.c
+++ b/examples/multi_process/client_server_mp/mp_client/client.c
@@ -211,7 +211,7 @@ enqueue_packet(struct rte_mbuf *buf, uint8_t port)
static void
handle_packet(struct rte_mbuf *buf)
{
- const uint8_t in_port = buf->pkt.in_port;
+ const uint8_t in_port = buf->in_port;
const uint8_t out_port = output_ports[in_port];
enqueue_packet(buf, out_port);
diff --git a/examples/quota_watermark/qw/main.c b/examples/quota_watermark/qw/main.c
index 21e0fc7..421b43d 100644
--- a/examples/quota_watermark/qw/main.c
+++ b/examples/quota_watermark/qw/main.c
@@ -104,8 +104,8 @@ static void send_pause_frame(uint8_t port_id, uint16_t duration)
pause_frame->opcode = rte_cpu_to_be_16(0x0001);
pause_frame->param = rte_cpu_to_be_16(duration);
- mbuf->pkt.pkt_len = 60;
- mbuf->pkt.data_len = 60;
+ mbuf->pkt_len = 60;
+ mbuf->data_len = 60;
rte_eth_tx_burst(port_id, 0, &mbuf, 1);
}
diff --git a/examples/vhost/main.c b/examples/vhost/main.c
index 816a71a..26cfc8e 100644
--- a/examples/vhost/main.c
+++ b/examples/vhost/main.c
@@ -815,7 +815,7 @@ virtio_dev_rx(struct virtio_net *dev, struct rte_mbuf **pkts, uint32_t count)
vq->used->ring[res_cur_idx & (vq->size - 1)].len = packet_len;
/* Copy mbuf data to buffer */
- rte_memcpy((void *)(uintptr_t)buff_addr, (const void*)buff->pkt.data, rte_pktmbuf_data_len(buff));
+ rte_memcpy((void *)(uintptr_t)buff_addr, (const void*)buff->data, rte_pktmbuf_data_len(buff));
res_cur_idx++;
packet_success++;
@@ -877,7 +877,7 @@ link_vmdq(struct virtio_net *dev, struct rte_mbuf *m)
int i, ret;
/* Learn MAC address of guest device from packet */
- pkt_hdr = (struct ether_hdr *)m->pkt.data;
+ pkt_hdr = (struct ether_hdr *)m->data;
dev_ll = ll_root_used;
@@ -965,7 +965,7 @@ virtio_tx_local(struct virtio_net *dev, struct rte_mbuf *m)
struct ether_hdr *pkt_hdr;
uint64_t ret = 0;
- pkt_hdr = (struct ether_hdr *)m->pkt.data;
+ pkt_hdr = (struct ether_hdr *)m->data;
/*get the used devices list*/
dev_ll = ll_root_used;
@@ -1038,22 +1038,22 @@ virtio_tx_route(struct virtio_net* dev, struct rte_mbuf *m, struct rte_mempool *
return;
}
- mbuf->pkt.data_len = m->pkt.data_len + VLAN_HLEN;
- mbuf->pkt.pkt_len = mbuf->pkt.data_len;
+ mbuf->data_len = m->data_len + VLAN_HLEN;
+ mbuf->pkt_len = mbuf->data_len;
/* Copy ethernet header to mbuf. */
- rte_memcpy((void*)mbuf->pkt.data, (const void*)m->pkt.data, ETH_HLEN);
+ rte_memcpy((void*)mbuf->data, (const void*)m->data, ETH_HLEN);
/* Setup vlan header. Bytes need to be re-ordered for network with htons()*/
- vlan_hdr = (struct vlan_ethhdr *) mbuf->pkt.data;
+ vlan_hdr = (struct vlan_ethhdr *) mbuf->data;
vlan_hdr->h_vlan_encapsulated_proto = vlan_hdr->h_vlan_proto;
vlan_hdr->h_vlan_proto = htons(ETH_P_8021Q);
vlan_hdr->h_vlan_TCI = htons(vlan_tag);
/* Copy the remaining packet contents to the mbuf. */
- rte_memcpy((void*) ((uint8_t*)mbuf->pkt.data + VLAN_ETH_HLEN),
- (const void*) ((uint8_t*)m->pkt.data + ETH_HLEN), (m->pkt.data_len - ETH_HLEN));
+ rte_memcpy((void*) ((uint8_t*)mbuf->data + VLAN_ETH_HLEN),
+ (const void*) ((uint8_t*)m->data + ETH_HLEN), (m->data_len - ETH_HLEN));
tx_q->m_table[len] = mbuf;
len++;
if (enable_stats) {
@@ -1143,8 +1143,8 @@ virtio_dev_tx(struct virtio_net* dev, struct rte_mempool *mbuf_pool)
vq->used->ring[used_idx].len = 0;
/* Setup dummy mbuf. This is copied to a real mbuf if transmitted out the physical port. */
- m.pkt.data_len = desc->len;
- m.pkt.data = (void*)(uintptr_t)buff_addr;
+ m.data_len = desc->len;
+ m.data = (void*)(uintptr_t)buff_addr;
PRINT_PACKET(dev, (uintptr_t)buff_addr, desc->len, 0);
diff --git a/examples/vhost_xen/main.c b/examples/vhost_xen/main.c
index eafc0aa..2cf0029 100644
--- a/examples/vhost_xen/main.c
+++ b/examples/vhost_xen/main.c
@@ -677,7 +677,7 @@ virtio_dev_rx(struct virtio_net *dev, struct rte_mbuf **pkts, uint32_t count)
vq->used->ring[res_cur_idx & (vq->size - 1)].len = packet_len;
/* Copy mbuf data to buffer */
- rte_memcpy((void *)(uintptr_t)buff_addr, (const void*)buff->pkt.data, rte_pktmbuf_data_len(buff));
+ rte_memcpy((void *)(uintptr_t)buff_addr, (const void*)buff->data, rte_pktmbuf_data_len(buff));
res_cur_idx++;
packet_success++;
@@ -808,7 +808,7 @@ virtio_tx_local(struct virtio_net *dev, struct rte_mbuf *m)
struct ether_hdr *pkt_hdr;
uint64_t ret = 0;
- pkt_hdr = (struct ether_hdr *)m->pkt.data;
+ pkt_hdr = (struct ether_hdr *)m->data;
/*get the used devices list*/
dev_ll = ll_root_used;
@@ -879,22 +879,22 @@ virtio_tx_route(struct virtio_net* dev, struct rte_mbuf *m, struct rte_mempool *
if(!mbuf)
return;
- mbuf->pkt.data_len = m->pkt.data_len + VLAN_HLEN;
- mbuf->pkt.pkt_len = mbuf->pkt.data_len;
+ mbuf->data_len = m->data_len + VLAN_HLEN;
+ mbuf->pkt_len = mbuf->data_len;
/* Copy ethernet header to mbuf. */
- rte_memcpy((void*)mbuf->pkt.data, (const void*)m->pkt.data, ETH_HLEN);
+ rte_memcpy((void*)mbuf->data, (const void*)m->data, ETH_HLEN);
/* Setup vlan header. Bytes need to be re-ordered for network with htons()*/
- vlan_hdr = (struct vlan_ethhdr *) mbuf->pkt.data;
+ vlan_hdr = (struct vlan_ethhdr *) mbuf->data;
vlan_hdr->h_vlan_encapsulated_proto = vlan_hdr->h_vlan_proto;
vlan_hdr->h_vlan_proto = htons(ETH_P_8021Q);
vlan_hdr->h_vlan_TCI = htons(vlan_tag);
/* Copy the remaining packet contents to the mbuf. */
- rte_memcpy((void*) ((uint8_t*)mbuf->pkt.data + VLAN_ETH_HLEN),
- (const void*) ((uint8_t*)m->pkt.data + ETH_HLEN), (m->pkt.data_len - ETH_HLEN));
+ rte_memcpy((void*) ((uint8_t*)mbuf->data + VLAN_ETH_HLEN),
+ (const void*) ((uint8_t*)m->data + ETH_HLEN), (m->data_len - ETH_HLEN));
tx_q->m_table[len] = mbuf;
len++;
if (enable_stats) {
@@ -980,9 +980,9 @@ virtio_dev_tx(struct virtio_net* dev, struct rte_mempool *mbuf_pool)
rte_prefetch0((void*)(uintptr_t)buff_addr);
/* Setup dummy mbuf. This is copied to a real mbuf if transmitted out the physical port. */
- m.pkt.data_len = desc->len;
- m.pkt.data = (void*)(uintptr_t)buff_addr;
- m.pkt.nb_segs = 1;
+ m.data_len = desc->len;
+ m.data = (void*)(uintptr_t)buff_addr;
+ m.nb_segs = 1;
virtio_tx_route(dev, &m, mbuf_pool, 0);
diff --git a/lib/librte_mbuf/rte_mbuf.c b/lib/librte_mbuf/rte_mbuf.c
index b703b1c..bc9d462 100644
--- a/lib/librte_mbuf/rte_mbuf.c
+++ b/lib/librte_mbuf/rte_mbuf.c
@@ -104,12 +104,12 @@ rte_pktmbuf_init(struct rte_mempool *mp,
m->buf_len = (uint16_t)buf_len;
/* keep some headroom between start of buffer and data */
- m->pkt.data = (char*) m->buf_addr + RTE_MIN(RTE_PKTMBUF_HEADROOM, m->buf_len);
+ m->data = (char*) m->buf_addr + RTE_MIN(RTE_PKTMBUF_HEADROOM, m->buf_len);
/* init some constant fields */
m->pool = mp;
- m->pkt.nb_segs = 1;
- m->pkt.in_port = 0xff;
+ m->nb_segs = 1;
+ m->in_port = 0xff;
}
/* do some sanity checks on a mbuf: panic if it fails */
@@ -140,10 +140,10 @@ rte_mbuf_sanity_check(const struct rte_mbuf *m, int is_header)
if (is_header == 0)
return;
- nb_segs = m->pkt.nb_segs;
+ nb_segs = m->nb_segs;
m_seg = m;
while (m_seg && nb_segs != 0) {
- m_seg = m_seg->pkt.next;
+ m_seg = m_seg->next;
nb_segs --;
}
if (nb_segs != 0)
@@ -162,22 +162,22 @@ rte_pktmbuf_dump(FILE *f, const struct rte_mbuf *m, unsigned dump_len)
fprintf(f, "dump mbuf at 0x%p, phys=%"PRIx64", buf_len=%u\n",
m, (uint64_t)m->buf_physaddr, (unsigned)m->buf_len);
fprintf(f, " pkt_len=%"PRIu32", ol_flags=%"PRIx16", nb_segs=%u, "
- "in_port=%u\n", m->pkt.pkt_len, m->ol_flags,
- (unsigned)m->pkt.nb_segs, (unsigned)m->pkt.in_port);
- nb_segs = m->pkt.nb_segs;
+ "in_port=%u\n", m->pkt_len, m->ol_flags,
+ (unsigned)m->nb_segs, (unsigned)m->in_port);
+ nb_segs = m->nb_segs;
while (m && nb_segs != 0) {
__rte_mbuf_sanity_check(m, 0);
fprintf(f, " segment at 0x%p, data=0x%p, data_len=%u\n",
- m, m->pkt.data, (unsigned)m->pkt.data_len);
+ m, m->data, (unsigned)m->data_len);
len = dump_len;
- if (len > m->pkt.data_len)
- len = m->pkt.data_len;
+ if (len > m->data_len)
+ len = m->data_len;
if (len != 0)
- rte_hexdump(f, NULL, m->pkt.data, len);
+ rte_hexdump(f, NULL, m->data, len);
dump_len -= len;
- m = m->pkt.next;
+ m = m->next;
nb_segs --;
}
}
diff --git a/lib/librte_mbuf/rte_mbuf.h b/lib/librte_mbuf/rte_mbuf.h
index a7792a3..0743506 100644
--- a/lib/librte_mbuf/rte_mbuf.h
+++ b/lib/librte_mbuf/rte_mbuf.h
@@ -125,32 +125,6 @@ union rte_vlan_macip {
#define TX_MACIP_LEN_CMP_MASK (TX_MAC_LEN_CMP_MASK | TX_IP_LEN_CMP_MASK)
/**
- * A packet message buffer.
- */
-struct rte_pktmbuf {
- /* valid for any segment */
- struct rte_mbuf *next; /**< Next segment of scattered packet. */
- void* data; /**< Start address of data in segment buffer. */
- uint16_t data_len; /**< Amount of data in segment buffer. */
-
- /* these fields are valid for first segment only */
- uint8_t nb_segs; /**< Number of segments. */
- uint8_t in_port; /**< Input port. */
- uint32_t pkt_len; /**< Total pkt len: sum of all segment data_len. */
-
- /* offload features */
- union rte_vlan_macip vlan_macip;
- union {
- uint32_t rss; /**< RSS hash result if RSS enabled */
- struct {
- uint16_t hash;
- uint16_t id;
- } fdir; /**< Filter identifier if FDIR enabled */
- uint32_t sched; /**< Hierarchical scheduler */
- } hash; /**< hash information */
-};
-
-/**
* The generic rte_mbuf, containing a packet mbuf.
*/
struct rte_mbuf {
@@ -177,7 +151,26 @@ struct rte_mbuf {
uint16_t reserved; /**< Unused field. Required for padding. */
uint16_t ol_flags; /**< Offload features. */
- struct rte_pktmbuf pkt;
+ /* valid for any segment */
+ struct rte_mbuf *next; /**< Next segment of scattered packet. */
+ void* data; /**< Start address of data in segment buffer. */
+ uint16_t data_len; /**< Amount of data in segment buffer. */
+
+ /* these fields are valid for first segment only */
+ uint8_t nb_segs; /**< Number of segments. */
+ uint8_t in_port; /**< Input port. */
+ uint32_t pkt_len; /**< Total pkt len: sum of all segment data_len. */
+
+ /* offload features, valid for first segment only */
+ union rte_vlan_macip vlan_macip;
+ union {
+ uint32_t rss; /**< RSS hash result if RSS enabled */
+ struct {
+ uint16_t hash;
+ uint16_t id;
+ } fdir; /**< Filter identifier if FDIR enabled */
+ uint32_t sched; /**< Hierarchical scheduler */
+ } hash; /**< hash information */
} __rte_cache_aligned;
/**
@@ -444,18 +437,18 @@ static inline void rte_pktmbuf_reset(struct rte_mbuf *m)
{
uint32_t buf_ofs;
- m->pkt.next = NULL;
- m->pkt.pkt_len = 0;
- m->pkt.vlan_macip.data = 0;
- m->pkt.nb_segs = 1;
- m->pkt.in_port = 0xff;
+ m->next = NULL;
+ m->pkt_len = 0;
+ m->vlan_macip.data = 0;
+ m->nb_segs = 1;
+ m->in_port = 0xff;
m->ol_flags = 0;
buf_ofs = (RTE_PKTMBUF_HEADROOM <= m->buf_len) ?
RTE_PKTMBUF_HEADROOM : m->buf_len;
- m->pkt.data = (char*) m->buf_addr + buf_ofs;
+ m->data = (char*) m->buf_addr + buf_ofs;
- m->pkt.data_len = 0;
+ m->data_len = 0;
__rte_mbuf_sanity_check(m, 1);
}
@@ -509,11 +502,16 @@ static inline void rte_pktmbuf_attach(struct rte_mbuf *mi, struct rte_mbuf *md)
mi->buf_addr = md->buf_addr;
mi->buf_len = md->buf_len;
- mi->pkt = md->pkt;
+ mi->next = md->next;
+ mi->data = md->data;
+ mi->data_len = md->data_len;
+ mi->in_port = md->in_port;
+ mi->vlan_macip = md->vlan_macip;
+ mi->hash = md->hash;
- mi->pkt.next = NULL;
- mi->pkt.pkt_len = mi->pkt.data_len;
- mi->pkt.nb_segs = 1;
+ mi->next = NULL;
+ mi->pkt_len = mi->data_len;
+ mi->nb_segs = 1;
mi->ol_flags = md->ol_flags;
__rte_mbuf_sanity_check(mi, 1);
@@ -543,9 +541,9 @@ static inline void rte_pktmbuf_detach(struct rte_mbuf *m)
buf_ofs = (RTE_PKTMBUF_HEADROOM <= m->buf_len) ?
RTE_PKTMBUF_HEADROOM : m->buf_len;
- m->pkt.data = (char*) m->buf_addr + buf_ofs;
+ m->data = (char*) m->buf_addr + buf_ofs;
- m->pkt.data_len = 0;
+ m->data_len = 0;
}
#endif /* RTE_MBUF_REFCNT */
@@ -612,7 +610,7 @@ static inline void rte_pktmbuf_free(struct rte_mbuf *m)
__rte_mbuf_sanity_check(m, 1);
while (m != NULL) {
- m_next = m->pkt.next;
+ m_next = m->next;
rte_pktmbuf_free_seg(m);
m = m_next;
}
@@ -648,21 +646,21 @@ static inline struct rte_mbuf *rte_pktmbuf_clone(struct rte_mbuf *md,
return (NULL);
mi = mc;
- prev = &mi->pkt.next;
- pktlen = md->pkt.pkt_len;
+ prev = &mi->next;
+ pktlen = md->pkt_len;
nseg = 0;
do {
nseg++;
rte_pktmbuf_attach(mi, md);
*prev = mi;
- prev = &mi->pkt.next;
- } while ((md = md->pkt.next) != NULL &&
+ prev = &mi->next;
+ } while ((md = md->next) != NULL &&
(mi = rte_pktmbuf_alloc(mp)) != NULL);
*prev = NULL;
- mc->pkt.nb_segs = nseg;
- mc->pkt.pkt_len = pktlen;
+ mc->nb_segs = nseg;
+ mc->pkt_len = pktlen;
/* Allocation of new indirect segment failed */
if (unlikely (mi == NULL)) {
@@ -691,7 +689,7 @@ static inline void rte_pktmbuf_refcnt_update(struct rte_mbuf *m, int16_t v)
do {
rte_mbuf_refcnt_update(m, v);
- } while ((m = m->pkt.next) != NULL);
+ } while ((m = m->next) != NULL);
}
#endif /* RTE_MBUF_REFCNT */
@@ -707,7 +705,7 @@ static inline void rte_pktmbuf_refcnt_update(struct rte_mbuf *m, int16_t v)
static inline uint16_t rte_pktmbuf_headroom(const struct rte_mbuf *m)
{
__rte_mbuf_sanity_check(m, 1);
- return (uint16_t) ((char*) m->pkt.data - (char*) m->buf_addr);
+ return (uint16_t) ((char*) m->data - (char*) m->buf_addr);
}
/**
@@ -722,7 +720,7 @@ static inline uint16_t rte_pktmbuf_tailroom(const struct rte_mbuf *m)
{
__rte_mbuf_sanity_check(m, 1);
return (uint16_t)(m->buf_len - rte_pktmbuf_headroom(m) -
- m->pkt.data_len);
+ m->data_len);
}
/**
@@ -738,8 +736,8 @@ static inline struct rte_mbuf *rte_pktmbuf_lastseg(struct rte_mbuf *m)
struct rte_mbuf *m2 = (struct rte_mbuf *)m;
__rte_mbuf_sanity_check(m, 1);
- while (m2->pkt.next != NULL)
- m2 = m2->pkt.next;
+ while (m2->next != NULL)
+ m2 = m2->next;
return m2;
}
@@ -755,7 +753,7 @@ static inline struct rte_mbuf *rte_pktmbuf_lastseg(struct rte_mbuf *m)
* @param t
* The type to cast the result into.
*/
-#define rte_pktmbuf_mtod(m, t) ((t)((m)->pkt.data))
+#define rte_pktmbuf_mtod(m, t) ((t)((m)->data))
/**
* A macro that returns the length of the packet.
@@ -765,7 +763,7 @@ static inline struct rte_mbuf *rte_pktmbuf_lastseg(struct rte_mbuf *m)
* @param m
* The packet mbuf.
*/
-#define rte_pktmbuf_pkt_len(m) ((m)->pkt.pkt_len)
+#define rte_pktmbuf_pkt_len(m) ((m)->pkt_len)
/**
* A macro that returns the length of the segment.
@@ -775,7 +773,7 @@ static inline struct rte_mbuf *rte_pktmbuf_lastseg(struct rte_mbuf *m)
* @param m
* The packet mbuf.
*/
-#define rte_pktmbuf_data_len(m) ((m)->pkt.data_len)
+#define rte_pktmbuf_data_len(m) ((m)->data_len)
/**
* Prepend len bytes to an mbuf data area.
@@ -800,11 +798,11 @@ static inline char *rte_pktmbuf_prepend(struct rte_mbuf *m,
if (unlikely(len > rte_pktmbuf_headroom(m)))
return NULL;
- m->pkt.data = (char*) m->pkt.data - len;
- m->pkt.data_len = (uint16_t)(m->pkt.data_len + len);
- m->pkt.pkt_len = (m->pkt.pkt_len + len);
+ m->data = (char*) m->data - len;
+ m->data_len = (uint16_t)(m->data_len + len);
+ m->pkt_len = (m->pkt_len + len);
- return (char*) m->pkt.data;
+ return (char*) m->data;
}
/**
@@ -833,9 +831,9 @@ static inline char *rte_pktmbuf_append(struct rte_mbuf *m, uint16_t len)
if (unlikely(len > rte_pktmbuf_tailroom(m_last)))
return NULL;
- tail = (char*) m_last->pkt.data + m_last->pkt.data_len;
- m_last->pkt.data_len = (uint16_t)(m_last->pkt.data_len + len);
- m->pkt.pkt_len = (m->pkt.pkt_len + len);
+ tail = (char*) m_last->data + m_last->data_len;
+ m_last->data_len = (uint16_t)(m_last->data_len + len);
+ m->pkt_len = (m->pkt_len + len);
return (char*) tail;
}
@@ -857,13 +855,13 @@ static inline char *rte_pktmbuf_adj(struct rte_mbuf *m, uint16_t len)
{
__rte_mbuf_sanity_check(m, 1);
- if (unlikely(len > m->pkt.data_len))
+ if (unlikely(len > m->data_len))
return NULL;
- m->pkt.data_len = (uint16_t)(m->pkt.data_len - len);
- m->pkt.data = ((char*) m->pkt.data + len);
- m->pkt.pkt_len = (m->pkt.pkt_len - len);
- return (char*) m->pkt.data;
+ m->data_len = (uint16_t)(m->data_len - len);
+ m->data = ((char*) m->data + len);
+ m->pkt_len = (m->pkt_len - len);
+ return (char*) m->data;
}
/**
@@ -887,11 +885,11 @@ static inline int rte_pktmbuf_trim(struct rte_mbuf *m, uint16_t len)
__rte_mbuf_sanity_check(m, 1);
m_last = rte_pktmbuf_lastseg(m);
- if (unlikely(len > m_last->pkt.data_len))
+ if (unlikely(len > m_last->data_len))
return -1;
- m_last->pkt.data_len = (uint16_t)(m_last->pkt.data_len - len);
- m->pkt.pkt_len = (m->pkt.pkt_len - len);
+ m_last->data_len = (uint16_t)(m_last->data_len - len);
+ m->pkt_len = (m->pkt_len - len);
return 0;
}
@@ -907,7 +905,7 @@ static inline int rte_pktmbuf_trim(struct rte_mbuf *m, uint16_t len)
static inline int rte_pktmbuf_is_contiguous(const struct rte_mbuf *m)
{
__rte_mbuf_sanity_check(m, 1);
- return !!(m->pkt.nb_segs == 1);
+ return !!(m->nb_segs == 1);
}
/**
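/*
 * [Editor's note, not part of the patch] The struct flattening above is
 * only worthwhile if struct rte_mbuf keeps fitting in a single 64-byte
 * cache line, which is the constraint this series is engineered around.
 * A hypothetical compile-time guard that makes the constraint explicit:
 */
#include <rte_mbuf.h>

/* build breaks if the mbuf header ever outgrows one cache line */
typedef char rte_mbuf_size_check[(sizeof(struct rte_mbuf) <= 64) ? 1 : -1];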
diff --git a/lib/librte_pmd_e1000/em_rxtx.c b/lib/librte_pmd_e1000/em_rxtx.c
index e55da0b..ea72fe3 100644
--- a/lib/librte_pmd_e1000/em_rxtx.c
+++ b/lib/librte_pmd_e1000/em_rxtx.c
@@ -91,7 +91,7 @@ rte_rxmbuf_alloc(struct rte_mempool *mp)
#define RTE_MBUF_DATA_DMA_ADDR(mb) \
(uint64_t) ((mb)->buf_physaddr + \
- (uint64_t) ((char *)((mb)->pkt.data) - (char *)(mb)->buf_addr))
+ (uint64_t) ((char *)((mb)->data) - (char *)(mb)->buf_addr))
#define RTE_MBUF_DATA_DMA_ADDR_DEFAULT(mb) \
(uint64_t) ((mb)->buf_physaddr + RTE_PKTMBUF_HEADROOM)
@@ -421,7 +421,7 @@ eth_em_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
tx_ol_req = (uint16_t)(ol_flags & (PKT_TX_IP_CKSUM |
PKT_TX_L4_MASK));
if (tx_ol_req) {
- hdrlen = tx_pkt->pkt.vlan_macip;
+ hdrlen = tx_pkt->vlan_macip;
/* If new context to be built or reuse the exist ctx. */
ctx = what_ctx_update(txq, tx_ol_req, hdrlen);
@@ -434,7 +434,7 @@ eth_em_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
* This will always be the number of segments + the number of
* Context descriptors required to transmit the packet
*/
- nb_used = (uint16_t)(tx_pkt->pkt.nb_segs + new_ctx);
+ nb_used = (uint16_t)(tx_pkt->nb_segs + new_ctx);
/*
* The number of descriptors that must be allocated for a
@@ -454,7 +454,7 @@ eth_em_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
" tx_first=%u tx_last=%u\n",
(unsigned) txq->port_id,
(unsigned) txq->queue_id,
- (unsigned) tx_pkt->pkt.pkt_len,
+ (unsigned) tx_pkt->pkt_len,
(unsigned) tx_id,
(unsigned) tx_last);
@@ -516,7 +516,7 @@ eth_em_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
/* Set VLAN Tag offload fields. */
if (ol_flags & PKT_TX_VLAN_PKT) {
cmd_type_len |= E1000_TXD_CMD_VLE;
- popts_spec = tx_pkt->pkt.vlan_macip.f.vlan_tci <<
+ popts_spec = tx_pkt->vlan_macip.f.vlan_tci <<
E1000_TXD_VLAN_SHIFT;
}
@@ -566,7 +566,7 @@ eth_em_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
/*
* Set up Transmit Data Descriptor.
*/
- slen = m_seg->pkt.data_len;
+ slen = m_seg->data_len;
buf_dma_addr = RTE_MBUF_DATA_DMA_ADDR(m_seg);
txd->buffer_addr = rte_cpu_to_le_64(buf_dma_addr);
@@ -576,7 +576,7 @@ eth_em_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
txe->last_id = tx_last;
tx_id = txe->next_id;
txe = txn;
- m_seg = m_seg->pkt.next;
+ m_seg = m_seg->next;
} while (m_seg != NULL);
/*
@@ -771,20 +771,20 @@ eth_em_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
*/
pkt_len = (uint16_t) (rte_le_to_cpu_16(rxd.length) -
rxq->crc_len);
- rxm->pkt.data = (char*) rxm->buf_addr + RTE_PKTMBUF_HEADROOM;
- rte_packet_prefetch(rxm->pkt.data);
- rxm->pkt.nb_segs = 1;
- rxm->pkt.next = NULL;
- rxm->pkt.pkt_len = pkt_len;
- rxm->pkt.data_len = pkt_len;
- rxm->pkt.in_port = rxq->port_id;
+ rxm->data = (char*) rxm->buf_addr + RTE_PKTMBUF_HEADROOM;
+ rte_packet_prefetch(rxm->data);
+ rxm->nb_segs = 1;
+ rxm->next = NULL;
+ rxm->pkt_len = pkt_len;
+ rxm->data_len = pkt_len;
+ rxm->in_port = rxq->port_id;
rxm->ol_flags = rx_desc_status_to_pkt_flags(status);
rxm->ol_flags = (uint16_t)(rxm->ol_flags |
rx_desc_error_to_pkt_flags(rxd.errors));
/* Only valid if PKT_RX_VLAN_PKT set in pkt_flags */
- rxm->pkt.vlan_macip.f.vlan_tci = rte_le_to_cpu_16(rxd.special);
+ rxm->vlan_macip.f.vlan_tci = rte_le_to_cpu_16(rxd.special);
/*
* Store the mbuf address into the next entry of the array
@@ -940,8 +940,8 @@ eth_em_recv_scattered_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
* Set data length & data buffer address of mbuf.
*/
data_len = rte_le_to_cpu_16(rxd.length);
- rxm->pkt.data_len = data_len;
- rxm->pkt.data = (char*) rxm->buf_addr + RTE_PKTMBUF_HEADROOM;
+ rxm->data_len = data_len;
+ rxm->data = (char*) rxm->buf_addr + RTE_PKTMBUF_HEADROOM;
/*
* If this is the first buffer of the received packet,
@@ -953,12 +953,12 @@ eth_em_recv_scattered_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
*/
if (first_seg == NULL) {
first_seg = rxm;
- first_seg->pkt.pkt_len = data_len;
- first_seg->pkt.nb_segs = 1;
+ first_seg->pkt_len = data_len;
+ first_seg->nb_segs = 1;
} else {
- first_seg->pkt.pkt_len += data_len;
- first_seg->pkt.nb_segs++;
- last_seg->pkt.next = rxm;
+ first_seg->pkt_len += data_len;
+ first_seg->nb_segs++;
+ last_seg->next = rxm;
}
/*
@@ -981,18 +981,18 @@ eth_em_recv_scattered_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
* mbuf, subtract the length of that CRC part from the
* data length of the previous mbuf.
*/
- rxm->pkt.next = NULL;
+ rxm->next = NULL;
if (unlikely(rxq->crc_len > 0)) {
- first_seg->pkt.pkt_len -= ETHER_CRC_LEN;
+ first_seg->pkt_len -= ETHER_CRC_LEN;
if (data_len <= ETHER_CRC_LEN) {
rte_pktmbuf_free_seg(rxm);
- first_seg->pkt.nb_segs--;
- last_seg->pkt.data_len = (uint16_t)
- (last_seg->pkt.data_len -
+ first_seg->nb_segs--;
+ last_seg->data_len = (uint16_t)
+ (last_seg->data_len -
(ETHER_CRC_LEN - data_len));
- last_seg->pkt.next = NULL;
+ last_seg->next = NULL;
} else
- rxm->pkt.data_len =
+ rxm->data_len =
(uint16_t) (data_len - ETHER_CRC_LEN);
}
@@ -1003,17 +1003,17 @@ eth_em_recv_scattered_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
* - IP checksum flag,
* - error flags.
*/
- first_seg->pkt.in_port = rxq->port_id;
+ first_seg->in_port = rxq->port_id;
first_seg->ol_flags = rx_desc_status_to_pkt_flags(status);
first_seg->ol_flags = (uint16_t)(first_seg->ol_flags |
rx_desc_error_to_pkt_flags(rxd.errors));
/* Only valid if PKT_RX_VLAN_PKT set in pkt_flags */
- rxm->pkt.vlan_macip.f.vlan_tci = rte_le_to_cpu_16(rxd.special);
+ rxm->vlan_macip.f.vlan_tci = rte_le_to_cpu_16(rxd.special);
/* Prefetch data of first segment, if configured to do so. */
- rte_packet_prefetch(first_seg->pkt.data);
+ rte_packet_prefetch(first_seg->data);
/*
* Store the mbuf address into the next entry of the array
diff --git a/lib/librte_pmd_e1000/igb_rxtx.c b/lib/librte_pmd_e1000/igb_rxtx.c
index 62ff7bc..da33171 100644
--- a/lib/librte_pmd_e1000/igb_rxtx.c
+++ b/lib/librte_pmd_e1000/igb_rxtx.c
@@ -85,7 +85,7 @@ rte_rxmbuf_alloc(struct rte_mempool *mp)
#define RTE_MBUF_DATA_DMA_ADDR(mb) \
(uint64_t) ((mb)->buf_physaddr + \
- (uint64_t) ((char *)((mb)->pkt.data) - \
+ (uint64_t) ((char *)((mb)->data) - \
(char *)(mb)->buf_addr))
#define RTE_MBUF_DATA_DMA_ADDR_DEFAULT(mb) \
@@ -354,7 +354,7 @@ eth_igb_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
for (nb_tx = 0; nb_tx < nb_pkts; nb_tx++) {
tx_pkt = *tx_pkts++;
- pkt_len = tx_pkt->pkt.pkt_len;
+ pkt_len = tx_pkt->pkt_len;
RTE_MBUF_PREFETCH_TO_FREE(txe->mbuf);
@@ -366,10 +366,10 @@ eth_igb_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
* for the packet, starting from the current position (tx_id)
* in the ring.
*/
- tx_last = (uint16_t) (tx_id + tx_pkt->pkt.nb_segs - 1);
+ tx_last = (uint16_t) (tx_id + tx_pkt->nb_segs - 1);
ol_flags = tx_pkt->ol_flags;
- vlan_macip_lens = tx_pkt->pkt.vlan_macip.data;
+ vlan_macip_lens = tx_pkt->vlan_macip.data;
tx_ol_req = (uint16_t)(ol_flags & PKT_TX_OFFLOAD_MASK);
/* If a Context Descriptor need be built . */
@@ -516,7 +516,7 @@ eth_igb_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
/*
* Set up transmit descriptor.
*/
- slen = (uint16_t) m_seg->pkt.data_len;
+ slen = (uint16_t) m_seg->data_len;
buf_dma_addr = RTE_MBUF_DATA_DMA_ADDR(m_seg);
txd->read.buffer_addr =
rte_cpu_to_le_64(buf_dma_addr);
@@ -527,7 +527,7 @@ eth_igb_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
txe->last_id = tx_last;
tx_id = txe->next_id;
txe = txn;
- m_seg = m_seg->pkt.next;
+ m_seg = m_seg->next;
} while (m_seg != NULL);
/*
@@ -742,18 +742,18 @@ eth_igb_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
*/
pkt_len = (uint16_t) (rte_le_to_cpu_16(rxd.wb.upper.length) -
rxq->crc_len);
- rxm->pkt.data = (char*) rxm->buf_addr + RTE_PKTMBUF_HEADROOM;
- rte_packet_prefetch(rxm->pkt.data);
- rxm->pkt.nb_segs = 1;
- rxm->pkt.next = NULL;
- rxm->pkt.pkt_len = pkt_len;
- rxm->pkt.data_len = pkt_len;
- rxm->pkt.in_port = rxq->port_id;
-
- rxm->pkt.hash.rss = rxd.wb.lower.hi_dword.rss;
+ rxm->data = (char*) rxm->buf_addr + RTE_PKTMBUF_HEADROOM;
+ rte_packet_prefetch(rxm->data);
+ rxm->nb_segs = 1;
+ rxm->next = NULL;
+ rxm->pkt_len = pkt_len;
+ rxm->data_len = pkt_len;
+ rxm->in_port = rxq->port_id;
+
+ rxm->hash.rss = rxd.wb.lower.hi_dword.rss;
hlen_type_rss = rte_le_to_cpu_32(rxd.wb.lower.lo_dword.data);
/* Only valid if PKT_RX_VLAN_PKT set in pkt_flags */
- rxm->pkt.vlan_macip.f.vlan_tci =
+ rxm->vlan_macip.f.vlan_tci =
rte_le_to_cpu_16(rxd.wb.upper.vlan);
pkt_flags = rx_desc_hlen_type_rss_to_pkt_flags(hlen_type_rss);
@@ -918,8 +918,8 @@ eth_igb_recv_scattered_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
* Set data length & data buffer address of mbuf.
*/
data_len = rte_le_to_cpu_16(rxd.wb.upper.length);
- rxm->pkt.data_len = data_len;
- rxm->pkt.data = (char*) rxm->buf_addr + RTE_PKTMBUF_HEADROOM;
+ rxm->data_len = data_len;
+ rxm->data = (char*) rxm->buf_addr + RTE_PKTMBUF_HEADROOM;
/*
* If this is the first buffer of the received packet,
@@ -931,12 +931,12 @@ eth_igb_recv_scattered_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
*/
if (first_seg == NULL) {
first_seg = rxm;
- first_seg->pkt.pkt_len = data_len;
- first_seg->pkt.nb_segs = 1;
+ first_seg->pkt_len = data_len;
+ first_seg->nb_segs = 1;
} else {
- first_seg->pkt.pkt_len += data_len;
- first_seg->pkt.nb_segs++;
- last_seg->pkt.next = rxm;
+ first_seg->pkt_len += data_len;
+ first_seg->nb_segs++;
+ last_seg->next = rxm;
}
/*
@@ -959,18 +959,18 @@ eth_igb_recv_scattered_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
* mbuf, subtract the length of that CRC part from the
* data length of the previous mbuf.
*/
- rxm->pkt.next = NULL;
+ rxm->next = NULL;
if (unlikely(rxq->crc_len > 0)) {
- first_seg->pkt.pkt_len -= ETHER_CRC_LEN;
+ first_seg->pkt_len -= ETHER_CRC_LEN;
if (data_len <= ETHER_CRC_LEN) {
rte_pktmbuf_free_seg(rxm);
- first_seg->pkt.nb_segs--;
- last_seg->pkt.data_len = (uint16_t)
- (last_seg->pkt.data_len -
+ first_seg->nb_segs--;
+ last_seg->data_len = (uint16_t)
+ (last_seg->data_len -
(ETHER_CRC_LEN - data_len));
- last_seg->pkt.next = NULL;
+ last_seg->next = NULL;
} else
- rxm->pkt.data_len =
+ rxm->data_len =
(uint16_t) (data_len - ETHER_CRC_LEN);
}
@@ -983,14 +983,14 @@ eth_igb_recv_scattered_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
* - VLAN TCI, if any,
* - error flags.
*/
- first_seg->pkt.in_port = rxq->port_id;
- first_seg->pkt.hash.rss = rxd.wb.lower.hi_dword.rss;
+ first_seg->in_port = rxq->port_id;
+ first_seg->hash.rss = rxd.wb.lower.hi_dword.rss;
/*
* The vlan_tci field is only valid when PKT_RX_VLAN_PKT is
* set in the pkt_flags field.
*/
- first_seg->pkt.vlan_macip.f.vlan_tci =
+ first_seg->vlan_macip.f.vlan_tci =
rte_le_to_cpu_16(rxd.wb.upper.vlan);
hlen_type_rss = rte_le_to_cpu_32(rxd.wb.lower.lo_dword.data);
pkt_flags = rx_desc_hlen_type_rss_to_pkt_flags(hlen_type_rss);
@@ -1001,7 +1001,7 @@ eth_igb_recv_scattered_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
first_seg->ol_flags = pkt_flags;
/* Prefetch data of first segment, if configured to do so. */
- rte_packet_prefetch(first_seg->pkt.data);
+ rte_packet_prefetch(first_seg->data);
/*
* Store the mbuf address into the next entry of the array
diff --git a/lib/librte_pmd_ixgbe/ixgbe_rxtx.c b/lib/librte_pmd_ixgbe/ixgbe_rxtx.c
index 45eb6d0..1f95f30 100644
--- a/lib/librte_pmd_ixgbe/ixgbe_rxtx.c
+++ b/lib/librte_pmd_ixgbe/ixgbe_rxtx.c
@@ -170,7 +170,7 @@ tx4(volatile union ixgbe_adv_tx_desc *txdp, struct rte_mbuf **pkts)
for (i = 0; i < 4; ++i, ++txdp, ++pkts) {
buf_dma_addr = RTE_MBUF_DATA_DMA_ADDR(*pkts);
- pkt_len = (*pkts)->pkt.data_len;
+ pkt_len = (*pkts)->data_len;
/* write data to descriptor */
txdp->read.buffer_addr = buf_dma_addr;
@@ -189,7 +189,7 @@ tx1(volatile union ixgbe_adv_tx_desc *txdp, struct rte_mbuf **pkts)
uint32_t pkt_len;
buf_dma_addr = RTE_MBUF_DATA_DMA_ADDR(*pkts);
- pkt_len = (*pkts)->pkt.data_len;
+ pkt_len = (*pkts)->data_len;
/* write data to descriptor */
txdp->read.buffer_addr = buf_dma_addr;
@@ -562,7 +562,7 @@ ixgbe_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
for (nb_tx = 0; nb_tx < nb_pkts; nb_tx++) {
new_ctx = 0;
tx_pkt = *tx_pkts++;
- pkt_len = tx_pkt->pkt.pkt_len;
+ pkt_len = tx_pkt->pkt_len;
RTE_MBUF_PREFETCH_TO_FREE(txe->mbuf);
@@ -571,7 +571,7 @@ ixgbe_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
* are needed for offload functionality.
*/
ol_flags = tx_pkt->ol_flags;
- vlan_macip_lens = tx_pkt->pkt.vlan_macip.data;
+ vlan_macip_lens = tx_pkt->vlan_macip.data;
/* If hardware offload required */
tx_ol_req = (uint16_t)(ol_flags & PKT_TX_OFFLOAD_MASK);
@@ -589,7 +589,7 @@ ixgbe_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
* This will always be the number of segments + the number of
* Context descriptors required to transmit the packet
*/
- nb_used = (uint16_t)(tx_pkt->pkt.nb_segs + new_ctx);
+ nb_used = (uint16_t)(tx_pkt->nb_segs + new_ctx);
/*
* The number of descriptors that must be allocated for a
@@ -749,7 +749,7 @@ ixgbe_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
/*
* Set up Transmit Data Descriptor.
*/
- slen = m_seg->pkt.data_len;
+ slen = m_seg->data_len;
buf_dma_addr = RTE_MBUF_DATA_DMA_ADDR(m_seg);
txd->read.buffer_addr =
rte_cpu_to_le_64(buf_dma_addr);
@@ -760,7 +760,7 @@ ixgbe_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
txe->last_id = tx_last;
tx_id = txe->next_id;
txe = txn;
- m_seg = m_seg->pkt.next;
+ m_seg = m_seg->next;
} while (m_seg != NULL);
/*
@@ -929,10 +929,10 @@ ixgbe_rx_scan_hw_ring(struct igb_rx_queue *rxq)
mb = rxep[j].mbuf;
pkt_len = (uint16_t)(rxdp[j].wb.upper.length -
rxq->crc_len);
- mb->pkt.data_len = pkt_len;
- mb->pkt.pkt_len = pkt_len;
- mb->pkt.vlan_macip.f.vlan_tci = rxdp[j].wb.upper.vlan;
- mb->pkt.hash.rss = rxdp[j].wb.lower.hi_dword.rss;
+ mb->data_len = pkt_len;
+ mb->pkt_len = pkt_len;
+ mb->vlan_macip.f.vlan_tci = rxdp[j].wb.upper.vlan;
+ mb->hash.rss = rxdp[j].wb.lower.hi_dword.rss;
/* convert descriptor fields to rte mbuf flags */
mb->ol_flags = rx_desc_hlen_type_rss_to_pkt_flags(
@@ -987,10 +987,10 @@ ixgbe_rx_alloc_bufs(struct igb_rx_queue *rxq)
/* populate the static rte mbuf fields */
mb = rxep[i].mbuf;
rte_mbuf_refcnt_set(mb, 1);
- mb->pkt.next = NULL;
- mb->pkt.data = (char *)mb->buf_addr + RTE_PKTMBUF_HEADROOM;
- mb->pkt.nb_segs = 1;
- mb->pkt.in_port = rxq->port_id;
+ mb->next = NULL;
+ mb->data = (char *)mb->buf_addr + RTE_PKTMBUF_HEADROOM;
+ mb->nb_segs = 1;
+ mb->in_port = rxq->port_id;
/* populate the descriptors */
dma_addr = (uint64_t)mb->buf_physaddr + RTE_PKTMBUF_HEADROOM;
@@ -1239,17 +1239,17 @@ ixgbe_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
*/
pkt_len = (uint16_t) (rte_le_to_cpu_16(rxd.wb.upper.length) -
rxq->crc_len);
- rxm->pkt.data = (char*) rxm->buf_addr + RTE_PKTMBUF_HEADROOM;
- rte_packet_prefetch(rxm->pkt.data);
- rxm->pkt.nb_segs = 1;
- rxm->pkt.next = NULL;
- rxm->pkt.pkt_len = pkt_len;
- rxm->pkt.data_len = pkt_len;
- rxm->pkt.in_port = rxq->port_id;
+ rxm->data = (char*) rxm->buf_addr + RTE_PKTMBUF_HEADROOM;
+ rte_packet_prefetch(rxm->data);
+ rxm->nb_segs = 1;
+ rxm->next = NULL;
+ rxm->pkt_len = pkt_len;
+ rxm->data_len = pkt_len;
+ rxm->in_port = rxq->port_id;
hlen_type_rss = rte_le_to_cpu_32(rxd.wb.lower.lo_dword.data);
/* Only valid if PKT_RX_VLAN_PKT set in pkt_flags */
- rxm->pkt.vlan_macip.f.vlan_tci =
+ rxm->vlan_macip.f.vlan_tci =
rte_le_to_cpu_16(rxd.wb.upper.vlan);
pkt_flags = rx_desc_hlen_type_rss_to_pkt_flags(hlen_type_rss);
@@ -1260,12 +1260,12 @@ ixgbe_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
rxm->ol_flags = pkt_flags;
if (likely(pkt_flags & PKT_RX_RSS_HASH))
- rxm->pkt.hash.rss = rxd.wb.lower.hi_dword.rss;
+ rxm->hash.rss = rxd.wb.lower.hi_dword.rss;
else if (pkt_flags & PKT_RX_FDIR) {
- rxm->pkt.hash.fdir.hash =
+ rxm->hash.fdir.hash =
(uint16_t)((rxd.wb.lower.hi_dword.csum_ip.csum)
& IXGBE_ATR_HASH_MASK);
- rxm->pkt.hash.fdir.id = rxd.wb.lower.hi_dword.csum_ip.ip_id;
+ rxm->hash.fdir.id = rxd.wb.lower.hi_dword.csum_ip.ip_id;
}
/*
* Store the mbuf address into the next entry of the array
@@ -1422,8 +1422,8 @@ ixgbe_recv_scattered_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
* Set data length & data buffer address of mbuf.
*/
data_len = rte_le_to_cpu_16(rxd.wb.upper.length);
- rxm->pkt.data_len = data_len;
- rxm->pkt.data = (char*) rxm->buf_addr + RTE_PKTMBUF_HEADROOM;
+ rxm->data_len = data_len;
+ rxm->data = (char*) rxm->buf_addr + RTE_PKTMBUF_HEADROOM;
/*
* If this is the first buffer of the received packet,
@@ -1435,13 +1435,13 @@ ixgbe_recv_scattered_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
*/
if (first_seg == NULL) {
first_seg = rxm;
- first_seg->pkt.pkt_len = data_len;
- first_seg->pkt.nb_segs = 1;
+ first_seg->pkt_len = data_len;
+ first_seg->nb_segs = 1;
} else {
- first_seg->pkt.pkt_len = (uint16_t)(first_seg->pkt.pkt_len
+ first_seg->pkt_len = (uint16_t)(first_seg->pkt_len
+ data_len);
- first_seg->pkt.nb_segs++;
- last_seg->pkt.next = rxm;
+ first_seg->nb_segs++;
+ last_seg->next = rxm;
}
/*
@@ -1464,18 +1464,18 @@ ixgbe_recv_scattered_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
* mbuf, subtract the length of that CRC part from the
* data length of the previous mbuf.
*/
- rxm->pkt.next = NULL;
+ rxm->next = NULL;
if (unlikely(rxq->crc_len > 0)) {
- first_seg->pkt.pkt_len -= ETHER_CRC_LEN;
+ first_seg->pkt_len -= ETHER_CRC_LEN;
if (data_len <= ETHER_CRC_LEN) {
rte_pktmbuf_free_seg(rxm);
- first_seg->pkt.nb_segs--;
- last_seg->pkt.data_len = (uint16_t)
- (last_seg->pkt.data_len -
+ first_seg->nb_segs--;
+ last_seg->data_len = (uint16_t)
+ (last_seg->data_len -
(ETHER_CRC_LEN - data_len));
- last_seg->pkt.next = NULL;
+ last_seg->next = NULL;
} else
- rxm->pkt.data_len =
+ rxm->data_len =
(uint16_t) (data_len - ETHER_CRC_LEN);
}
@@ -1488,13 +1488,13 @@ ixgbe_recv_scattered_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
* - VLAN TCI, if any,
* - error flags.
*/
- first_seg->pkt.in_port = rxq->port_id;
+ first_seg->in_port = rxq->port_id;
/*
* The vlan_tci field is only valid when PKT_RX_VLAN_PKT is
* set in the pkt_flags field.
*/
- first_seg->pkt.vlan_macip.f.vlan_tci =
+ first_seg->vlan_macip.f.vlan_tci =
rte_le_to_cpu_16(rxd.wb.upper.vlan);
hlen_type_rss = rte_le_to_cpu_32(rxd.wb.lower.lo_dword.data);
pkt_flags = rx_desc_hlen_type_rss_to_pkt_flags(hlen_type_rss);
@@ -1505,17 +1505,17 @@ ixgbe_recv_scattered_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
first_seg->ol_flags = pkt_flags;
if (likely(pkt_flags & PKT_RX_RSS_HASH))
- first_seg->pkt.hash.rss = rxd.wb.lower.hi_dword.rss;
+ first_seg->hash.rss = rxd.wb.lower.hi_dword.rss;
else if (pkt_flags & PKT_RX_FDIR) {
- first_seg->pkt.hash.fdir.hash =
+ first_seg->hash.fdir.hash =
(uint16_t)((rxd.wb.lower.hi_dword.csum_ip.csum)
& IXGBE_ATR_HASH_MASK);
- first_seg->pkt.hash.fdir.id =
+ first_seg->hash.fdir.id =
rxd.wb.lower.hi_dword.csum_ip.ip_id;
}
/* Prefetch data of first segment, if configured to do so. */
- rte_packet_prefetch(first_seg->pkt.data);
+ rte_packet_prefetch(first_seg->data);
/*
* Store the mbuf address into the next entry of the array
@@ -3083,10 +3083,10 @@ ixgbe_alloc_rx_queue_mbufs(struct igb_rx_queue *rxq)
}
rte_mbuf_refcnt_set(mbuf, 1);
- mbuf->pkt.next = NULL;
- mbuf->pkt.data = (char *)mbuf->buf_addr + RTE_PKTMBUF_HEADROOM;
- mbuf->pkt.nb_segs = 1;
- mbuf->pkt.in_port = rxq->port_id;
+ mbuf->next = NULL;
+ mbuf->data = (char *)mbuf->buf_addr + RTE_PKTMBUF_HEADROOM;
+ mbuf->nb_segs = 1;
+ mbuf->in_port = rxq->port_id;
dma_addr =
rte_cpu_to_le_64(RTE_MBUF_DATA_DMA_ADDR_DEFAULT(mbuf));
diff --git a/lib/librte_pmd_ixgbe/ixgbe_rxtx.h b/lib/librte_pmd_ixgbe/ixgbe_rxtx.h
index 05921fe..b959c9b 100644
--- a/lib/librte_pmd_ixgbe/ixgbe_rxtx.h
+++ b/lib/librte_pmd_ixgbe/ixgbe_rxtx.h
@@ -45,7 +45,7 @@
#endif
#define RTE_MBUF_DATA_DMA_ADDR(mb) \
- (uint64_t) ((mb)->buf_physaddr + (uint64_t)((char *)((mb)->pkt.data) - \
+ (uint64_t) ((mb)->buf_physaddr + (uint64_t)((char *)((mb)->data) - \
(char *)(mb)->buf_addr))
#define RTE_MBUF_DATA_DMA_ADDR_DEFAULT(mb) \
diff --git a/lib/librte_pmd_pcap/rte_eth_pcap.c b/lib/librte_pmd_pcap/rte_eth_pcap.c
index 680dfdc..aa80478 100644
--- a/lib/librte_pmd_pcap/rte_eth_pcap.c
+++ b/lib/librte_pmd_pcap/rte_eth_pcap.c
@@ -151,9 +151,9 @@ eth_pcap_rx(void *queue,
if (header.len <= buf_size) {
/* pcap packet will fit in the mbuf, go ahead and copy */
- rte_memcpy(mbuf->pkt.data, packet, header.len);
- mbuf->pkt.data_len = (uint16_t)header.len;
- mbuf->pkt.pkt_len = mbuf->pkt.data_len;
+ rte_memcpy(mbuf->data, packet, header.len);
+ mbuf->data_len = (uint16_t)header.len;
+ mbuf->pkt_len = mbuf->data_len;
bufs[i] = mbuf;
num_rx++;
} else {
@@ -200,9 +200,9 @@ eth_pcap_tx_dumper(void *queue,
for (i = 0; i < nb_pkts; i++) {
mbuf = bufs[i];
calculate_timestamp(&header.ts);
- header.len = mbuf->pkt.data_len;
+ header.len = mbuf->data_len;
header.caplen = header.len;
- pcap_dump((u_char*) dumper_q->dumper, &header, mbuf->pkt.data);
+ pcap_dump((u_char*) dumper_q->dumper, &header, mbuf->data);
rte_pktmbuf_free(mbuf);
num_tx++;
}
@@ -237,8 +237,8 @@ eth_pcap_tx(void *queue,
for (i = 0; i < nb_pkts; i++) {
mbuf = bufs[i];
- ret = pcap_sendpacket(tx_queue->pcap, (u_char*) mbuf->pkt.data,
- mbuf->pkt.data_len);
+ ret = pcap_sendpacket(tx_queue->pcap, (u_char*) mbuf->data,
+ mbuf->data_len);
if(likely(!ret))
num_tx++;
rte_pktmbuf_free(mbuf);
diff --git a/lib/librte_pmd_virtio/virtio_rxtx.c b/lib/librte_pmd_virtio/virtio_rxtx.c
index 199b3c6..2af2c49 100644
--- a/lib/librte_pmd_virtio/virtio_rxtx.c
+++ b/lib/librte_pmd_virtio/virtio_rxtx.c
@@ -267,13 +267,13 @@ virtio_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
hw->eth_stats.ierrors++;
continue;
}
- rxm->pkt.in_port = rxvq->port_id;
- rxm->pkt.data = (char *)rxm->buf_addr + RTE_PKTMBUF_HEADROOM;
- rxm->pkt.nb_segs = 1;
- rxm->pkt.next = NULL;
- rxm->pkt.pkt_len = (uint32_t)(len[i] - sizeof(struct virtio_net_hdr));
- rxm->pkt.data_len = (uint16_t)(len[i] - sizeof(struct virtio_net_hdr));
- VIRTIO_DUMP_PACKET(rxm, rxm->pkt.data_len);
+ rxm->in_port = rxvq->port_id;
+ rxm->data = (char *)rxm->buf_addr + RTE_PKTMBUF_HEADROOM;
+ rxm->nb_segs = 1;
+ rxm->next = NULL;
+ rxm->pkt_len = (uint32_t)(len[i] - sizeof(struct virtio_net_hdr));
+ rxm->data_len = (uint16_t)(len[i] - sizeof(struct virtio_net_hdr));
+ VIRTIO_DUMP_PACKET(rxm, rxm->data_len);
rx_pkts[nb_rx++] = rxm;
hw->eth_stats.ibytes += len[i] - sizeof(struct virtio_net_hdr);
}
@@ -346,7 +346,7 @@ virtio_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
break;
}
nb_tx++;
- hw->eth_stats.obytes += txm->pkt.data_len;
+ hw->eth_stats.obytes += txm->data_len;
} else {
PMD_TX_LOG(ERR, "No free tx descriptors to transmit\n");
break;
diff --git a/lib/librte_pmd_virtio/virtqueue.h b/lib/librte_pmd_virtio/virtqueue.h
index b67c223..210944e 100644
--- a/lib/librte_pmd_virtio/virtqueue.h
+++ b/lib/librte_pmd_virtio/virtqueue.h
@@ -59,7 +59,7 @@
#define VIRTQUEUE_MAX_NAME_SZ 32
#define RTE_MBUF_DATA_DMA_ADDR(mb) \
- (uint64_t) ((mb)->buf_physaddr + (uint64_t)((char *)((mb)->pkt.data) - \
+ (uint64_t) ((mb)->buf_physaddr + (uint64_t)((char *)((mb)->data) - \
(char *)(mb)->buf_addr))
#define VTNET_SQ_RQ_QUEUE_IDX 0
@@ -330,7 +330,7 @@ virtqueue_enqueue_xmit(struct virtqueue *txvq, struct rte_mbuf *cookie)
start_dp[idx].flags = VRING_DESC_F_NEXT;
idx = start_dp[idx].next;
start_dp[idx].addr = RTE_MBUF_DATA_DMA_ADDR(cookie);
- start_dp[idx].len = cookie->pkt.data_len;
+ start_dp[idx].len = cookie->data_len;
start_dp[idx].flags = 0;
idx = start_dp[idx].next;
txvq->vq_desc_head_idx = idx;
@@ -363,7 +363,7 @@ virtqueue_dequeue_burst_rx(struct virtqueue *vq, struct rte_mbuf **rx_pkts, uint
break;
}
rte_prefetch0(cookie);
- rte_packet_prefetch(cookie->pkt.data);
+ rte_packet_prefetch(cookie->data);
rx_pkts[i] = cookie;
vq->vq_used_cons_idx++;
vq_ring_free_chain(vq, desc_idx);
diff --git a/lib/librte_pmd_vmxnet3/vmxnet3_rxtx.c b/lib/librte_pmd_vmxnet3/vmxnet3_rxtx.c
index 66d80b8..6db766a 100644
--- a/lib/librte_pmd_vmxnet3/vmxnet3_rxtx.c
+++ b/lib/librte_pmd_vmxnet3/vmxnet3_rxtx.c
@@ -80,7 +80,7 @@
#define RTE_MBUF_DATA_DMA_ADDR(mb) \
- (uint64_t) ((mb)->buf_physaddr + (uint64_t)((char *)((mb)->pkt.data) - \
+ (uint64_t) ((mb)->buf_physaddr + (uint64_t)((char *)((mb)->data) - \
(char *)(mb)->buf_addr))
#define RTE_MBUF_DATA_DMA_ADDR_DEFAULT(mb) \
@@ -276,7 +276,7 @@ vmxnet3_xmit_pkts( void *tx_queue, struct rte_mbuf **tx_pkts,
txm = tx_pkts[nb_tx];
/* Don't support scatter packets yet, free them if met */
- if (txm->pkt.nb_segs != 1) {
+ if (txm->nb_segs != 1) {
PMD_TX_LOG(DEBUG, "Don't support scatter packets yet, drop!\n");
rte_pktmbuf_free(tx_pkts[nb_tx]);
txq->stats.drop_total++;
@@ -286,7 +286,7 @@ vmxnet3_xmit_pkts( void *tx_queue, struct rte_mbuf **tx_pkts,
}
/* Needs to minus ether header len */
- if(txm->pkt.data_len > (hw->cur_mtu + ETHER_HDR_LEN)) {
+ if(txm->data_len > (hw->cur_mtu + ETHER_HDR_LEN)) {
PMD_TX_LOG(DEBUG, "Packet data_len higher than MTU\n");
rte_pktmbuf_free(tx_pkts[nb_tx]);
txq->stats.drop_total++;
@@ -301,7 +301,7 @@ vmxnet3_xmit_pkts( void *tx_queue, struct rte_mbuf **tx_pkts,
tbi = txq->cmd_ring.buf_info + txq->cmd_ring.next2fill;
tbi->bufPA = RTE_MBUF_DATA_DMA_ADDR(txm);
txd->addr = tbi->bufPA;
- txd->len = txm->pkt.data_len;
+ txd->len = txm->data_len;
/* Mark the last descriptor as End of Packet. */
txd->cq = 1;
@@ -537,21 +537,21 @@ vmxnet3_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
rte_pktmbuf_mtod(rxm, void *));
#endif
//Copy vlan tag in packet buffer
- rxm->pkt.vlan_macip.f.vlan_tci =
+ rxm->vlan_macip.f.vlan_tci =
rte_le_to_cpu_16((uint16_t)rcd->tci);
} else
rxm->ol_flags = 0;
/* Initialize newly received packet buffer */
- rxm->pkt.in_port = rxq->port_id;
- rxm->pkt.nb_segs = 1;
- rxm->pkt.next = NULL;
- rxm->pkt.pkt_len = (uint16_t)rcd->len;
- rxm->pkt.data_len = (uint16_t)rcd->len;
- rxm->pkt.in_port = rxq->port_id;
- rxm->pkt.vlan_macip.f.vlan_tci = 0;
- rxm->pkt.data = (char *)rxm->buf_addr + RTE_PKTMBUF_HEADROOM;
+ rxm->in_port = rxq->port_id;
+ rxm->nb_segs = 1;
+ rxm->next = NULL;
+ rxm->pkt_len = (uint16_t)rcd->len;
+ rxm->data_len = (uint16_t)rcd->len;
+ rxm->in_port = rxq->port_id;
+ rxm->vlan_macip.f.vlan_tci = 0;
+ rxm->data = (char *)rxm->buf_addr + RTE_PKTMBUF_HEADROOM;
rx_pkts[nb_rx++] = rxm;
diff --git a/lib/librte_pmd_xenvirt/rte_eth_xenvirt.c b/lib/librte_pmd_xenvirt/rte_eth_xenvirt.c
index ae039a0..4f8b712 100644
--- a/lib/librte_pmd_xenvirt/rte_eth_xenvirt.c
+++ b/lib/librte_pmd_xenvirt/rte_eth_xenvirt.c
@@ -108,12 +108,12 @@ eth_xenvirt_rx(void *q, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
for (i = 0; i < num ; i ++) {
rxm = rx_pkts[i];
PMD_RX_LOG(DEBUG, "packet len:%d\n", len[i]);
- rxm->pkt.next = NULL;
- rxm->pkt.data = (char *)rxm->buf_addr + RTE_PKTMBUF_HEADROOM;
- rxm->pkt.data_len = (uint16_t)(len[i] - sizeof(struct virtio_net_hdr));
- rxm->pkt.nb_segs = 1;
- rxm->pkt.in_port = pi->port_id;
- rxm->pkt.pkt_len = (uint32_t)(len[i] - sizeof(struct virtio_net_hdr));
+ rxm->next = NULL;
+ rxm->data = (char *)rxm->buf_addr + RTE_PKTMBUF_HEADROOM;
+ rxm->data_len = (uint16_t)(len[i] - sizeof(struct virtio_net_hdr));
+ rxm->nb_segs = 1;
+ rxm->in_port = pi->port_id;
+ rxm->pkt_len = (uint32_t)(len[i] - sizeof(struct virtio_net_hdr));
}
/* allocate new mbuf for the used descriptor */
while (likely(!virtqueue_full(rxvq))) {
diff --git a/lib/librte_pmd_xenvirt/virtqueue.h b/lib/librte_pmd_xenvirt/virtqueue.h
index 3844448..f36030f 100644
--- a/lib/librte_pmd_xenvirt/virtqueue.h
+++ b/lib/librte_pmd_xenvirt/virtqueue.h
@@ -54,7 +54,7 @@
* rather than gpa<->hva in virtio spec.
*/
#define RTE_MBUF_DATA_DMA_ADDR(mb) \
- ((uint64_t)((mb)->pkt.data))
+ ((uint64_t)((mb)->data))
enum { VTNET_RQ = 0, VTNET_TQ = 1, VTNET_CQ = 2 };
@@ -238,7 +238,7 @@ virtqueue_enqueue_xmit(struct virtqueue *txvq, struct rte_mbuf *cookie)
start_dp[idx].addr = (uintptr_t)NULL;
idx = start_dp[idx].next;
start_dp[idx].addr = RTE_MBUF_DATA_DMA_ADDR(cookie);
- start_dp[idx].len = cookie->pkt.data_len;
+ start_dp[idx].len = cookie->data_len;
start_dp[idx].flags = 0;
idx = start_dp[idx].next;
txvq->vq_desc_head_idx = idx;
diff --git a/lib/librte_sched/rte_sched.c b/lib/librte_sched/rte_sched.c
index 24e8bdf..7f0bf67 100644
--- a/lib/librte_sched/rte_sched.c
+++ b/lib/librte_sched/rte_sched.c
@@ -1032,7 +1032,7 @@ rte_sched_port_update_subport_stats(struct rte_sched_port *port, uint32_t qindex
{
struct rte_sched_subport *s = port->subport + (qindex / rte_sched_port_queues_per_subport(port));
uint32_t tc_index = (qindex >> 2) & 0x3;
- uint32_t pkt_len = pkt->pkt.pkt_len;
+ uint32_t pkt_len = pkt->pkt_len;
s->stats.n_pkts_tc[tc_index] += 1;
s->stats.n_bytes_tc[tc_index] += pkt_len;
@@ -1043,7 +1043,7 @@ rte_sched_port_update_subport_stats_on_drop(struct rte_sched_port *port, uint32_
{
struct rte_sched_subport *s = port->subport + (qindex / rte_sched_port_queues_per_subport(port));
uint32_t tc_index = (qindex >> 2) & 0x3;
- uint32_t pkt_len = pkt->pkt.pkt_len;
+ uint32_t pkt_len = pkt->pkt_len;
s->stats.n_pkts_tc_dropped[tc_index] += 1;
s->stats.n_bytes_tc_dropped[tc_index] += pkt_len;
@@ -1053,7 +1053,7 @@ static inline void
rte_sched_port_update_queue_stats(struct rte_sched_port *port, uint32_t qindex, struct rte_mbuf *pkt)
{
struct rte_sched_queue_extra *qe = port->queue_extra + qindex;
- uint32_t pkt_len = pkt->pkt.pkt_len;
+ uint32_t pkt_len = pkt->pkt_len;
qe->stats.n_pkts += 1;
qe->stats.n_bytes += pkt_len;
@@ -1063,7 +1063,7 @@ static inline void
rte_sched_port_update_queue_stats_on_drop(struct rte_sched_port *port, uint32_t qindex, struct rte_mbuf *pkt)
{
struct rte_sched_queue_extra *qe = port->queue_extra + qindex;
- uint32_t pkt_len = pkt->pkt.pkt_len;
+ uint32_t pkt_len = pkt->pkt_len;
qe->stats.n_pkts_dropped += 1;
qe->stats.n_bytes_dropped += pkt_len;
@@ -1580,7 +1580,7 @@ grinder_credits_check(struct rte_sched_port *port, uint32_t pos)
struct rte_sched_pipe *pipe = grinder->pipe;
struct rte_mbuf *pkt = grinder->pkt;
uint32_t tc_index = grinder->tc_index;
- uint32_t pkt_len = pkt->pkt.pkt_len + port->frame_overhead;
+ uint32_t pkt_len = pkt->pkt_len + port->frame_overhead;
uint32_t subport_tb_credits = subport->tb_credits;
uint32_t subport_tc_credits = subport->tc_credits[tc_index];
uint32_t pipe_tb_credits = pipe->tb_credits;
@@ -1616,7 +1616,7 @@ grinder_credits_check(struct rte_sched_port *port, uint32_t pos)
struct rte_sched_pipe *pipe = grinder->pipe;
struct rte_mbuf *pkt = grinder->pkt;
uint32_t tc_index = grinder->tc_index;
- uint32_t pkt_len = pkt->pkt.pkt_len + port->frame_overhead;
+ uint32_t pkt_len = pkt->pkt_len + port->frame_overhead;
uint32_t subport_tb_credits = subport->tb_credits;
uint32_t subport_tc_credits = subport->tc_credits[tc_index];
uint32_t pipe_tb_credits = pipe->tb_credits;
@@ -1657,7 +1657,7 @@ grinder_schedule(struct rte_sched_port *port, uint32_t pos)
struct rte_sched_grinder *grinder = port->grinder + pos;
struct rte_sched_queue *queue = grinder->queue[grinder->qpos];
struct rte_mbuf *pkt = grinder->pkt;
- uint32_t pkt_len = pkt->pkt.pkt_len + port->frame_overhead;
+ uint32_t pkt_len = pkt->pkt_len + port->frame_overhead;
#if RTE_SCHED_TS_CREDITS_CHECK
if (!grinder_credits_check(port, pos)) {
diff --git a/lib/librte_sched/rte_sched.h b/lib/librte_sched/rte_sched.h
index 1c4ebc5..5c4c0cd 100644
--- a/lib/librte_sched/rte_sched.h
+++ b/lib/librte_sched/rte_sched.h
@@ -106,7 +106,7 @@ extern "C" {
2. Start of Frame Delimiter (SFD): 1 byte;
3. Frame Check Sequence (FCS): 4 bytes;
4. Inter Frame Gap (IFG): 12 bytes.
-The FCS is considered overhead only if not included in the packet length (field pkt.pkt_len
+The FCS is considered overhead only if not included in the packet length (field pkt_len
of struct rte_mbuf). */
#ifndef RTE_SCHED_FRAME_OVERHEAD_DEFAULT
#define RTE_SCHED_FRAME_OVERHEAD_DEFAULT 24
@@ -196,7 +196,7 @@ struct rte_sched_port_params {
};
/** Path through the scheduler hierarchy used by the scheduler enqueue operation to
-identify the destination queue for the current packet. Stored in the field pkt.hash.sched
+identify the destination queue for the current packet. Stored in the field hash.sched
of struct rte_mbuf of each packet, typically written by the classification stage and read by
scheduler enqueue.*/
struct rte_sched_port_hierarchy {
@@ -352,7 +352,7 @@ static inline void
rte_sched_port_pkt_write(struct rte_mbuf *pkt,
uint32_t subport, uint32_t pipe, uint32_t traffic_class, uint32_t queue, enum rte_meter_color color)
{
- struct rte_sched_port_hierarchy *sched = (struct rte_sched_port_hierarchy *) &pkt->pkt.hash.sched;
+ struct rte_sched_port_hierarchy *sched = (struct rte_sched_port_hierarchy *) &pkt->hash.sched;
sched->color = (uint32_t) color;
sched->subport = subport;
@@ -381,7 +381,7 @@ rte_sched_port_pkt_write(struct rte_mbuf *pkt,
static inline void
rte_sched_port_pkt_read_tree_path(struct rte_mbuf *pkt, uint32_t *subport, uint32_t *pipe, uint32_t *traffic_class, uint32_t *queue)
{
- struct rte_sched_port_hierarchy *sched = (struct rte_sched_port_hierarchy *) &pkt->pkt.hash.sched;
+ struct rte_sched_port_hierarchy *sched = (struct rte_sched_port_hierarchy *) &pkt->hash.sched;
*subport = sched->subport;
*pipe = sched->pipe;
@@ -392,7 +392,7 @@ rte_sched_port_pkt_read_tree_path(struct rte_mbuf *pkt, uint32_t *subport, uint3
static inline enum rte_meter_color
rte_sched_port_pkt_read_color(struct rte_mbuf *pkt)
{
- struct rte_sched_port_hierarchy *sched = (struct rte_sched_port_hierarchy *) &pkt->pkt.hash.sched;
+ struct rte_sched_port_hierarchy *sched = (struct rte_sched_port_hierarchy *) &pkt->hash.sched;
return (enum rte_meter_color) sched->color;
}
--
1.9.2
* [dpdk-dev] [PATCH v2 05/17] mbuf: merge physaddr and buf_len in a bitfield
2014-05-19 13:56 [dpdk-dev] [PATCH v2 00/17] ixgbe/mbuf: add TSO support Olivier Matz
` (3 preceding siblings ...)
2014-05-19 13:56 ` [dpdk-dev] [PATCH v2 04/17] mbuf: remove the rte_pktmbuf structure Olivier Matz
@ 2014-05-19 13:56 ` Olivier Matz
2014-05-19 13:56 ` [dpdk-dev] [PATCH v2 06/17] mbuf: cosmetic changes in rte_mbuf structure Olivier Matz
` (13 subsequent siblings)
18 siblings, 0 replies; 28+ messages in thread
From: Olivier Matz @ 2014-05-19 13:56 UTC (permalink / raw)
To: dev
A physical address never exceeds (1 << 48) = 256 TB. We can save
2 bytes in the mbuf structure by merging the physical address and
the buffer length into the same 64-bit bitfield.
Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
---
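For reference, a minimal standalone sketch (not part of the patch; the
struct name and values are illustrative only) showing how a 48-bit
address and a 16-bit length pack into a single 64-bit word, the same
layout used for buf_physaddr and buf_len below:

#include <stdint.h>
#include <stdio.h>

/* Illustrative struct: a 48-bit physical address and a 16-bit length
 * share one 64-bit word, so sizeof(struct addr_len) == 8. */
struct addr_len {
	uint64_t physaddr:48; /* up to (1 << 48) = 256 TB */
	uint64_t len:16;      /* up to 64KB per segment */
};

int main(void)
{
	struct addr_len al = { .physaddr = 0x123456789abcULL, .len = 2048 };

	printf("sizeof=%zu physaddr=0x%llx len=%u\n", sizeof(al),
	       (unsigned long long)al.physaddr, (unsigned)al.len);
	return 0;
}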
lib/librte_mbuf/rte_mbuf.c | 3 ++-
lib/librte_mbuf/rte_mbuf.h | 7 ++++---
2 files changed, 6 insertions(+), 4 deletions(-)
diff --git a/lib/librte_mbuf/rte_mbuf.c b/lib/librte_mbuf/rte_mbuf.c
index bc9d462..9c4077a 100644
--- a/lib/librte_mbuf/rte_mbuf.c
+++ b/lib/librte_mbuf/rte_mbuf.c
@@ -104,7 +104,8 @@ rte_pktmbuf_init(struct rte_mempool *mp,
m->buf_len = (uint16_t)buf_len;
/* keep some headroom between start of buffer and data */
- m->data = (char*) m->buf_addr + RTE_MIN(RTE_PKTMBUF_HEADROOM, m->buf_len);
+ m->data = (char*) m->buf_addr + RTE_MIN(RTE_PKTMBUF_HEADROOM,
+ (uint16_t)m->buf_len);
/* init some constant fields */
m->pool = mp;
diff --git a/lib/librte_mbuf/rte_mbuf.h b/lib/librte_mbuf/rte_mbuf.h
index 0743506..1777b8b 100644
--- a/lib/librte_mbuf/rte_mbuf.h
+++ b/lib/librte_mbuf/rte_mbuf.h
@@ -130,8 +130,8 @@ union rte_vlan_macip {
struct rte_mbuf {
struct rte_mempool *pool; /**< Pool from which mbuf was allocated. */
void *buf_addr; /**< Virtual address of segment buffer. */
- phys_addr_t buf_physaddr; /**< Physical address of segment buffer. */
- uint16_t buf_len; /**< Length of segment buffer. */
+ uint64_t buf_physaddr:48; /**< Physical address of segment buffer. */
+ uint64_t buf_len:16; /**< Length of segment buffer. */
#ifdef RTE_MBUF_REFCNT
/**
* 16-bit Reference counter.
@@ -148,8 +148,9 @@ struct rte_mbuf {
#else
uint16_t refcnt_reserved; /**< Do not use this field */
#endif
- uint16_t reserved; /**< Unused field. Required for padding. */
+
uint16_t ol_flags; /**< Offload features. */
+ uint32_t reserved; /**< Unused field. Required for padding. */
/* valid for any segment */
struct rte_mbuf *next; /**< Next segment of scattered packet. */
--
1.9.2
* [dpdk-dev] [PATCH v2 06/17] mbuf: cosmetic changes in rte_mbuf structure
2014-05-19 13:56 [dpdk-dev] [PATCH v2 00/17] ixgbe/mbuf: add TSO support Olivier Matz
` (4 preceding siblings ...)
2014-05-19 13:56 ` [dpdk-dev] [PATCH v2 05/17] mbuf: merge physaddr and buf_len in a bitfield Olivier Matz
@ 2014-05-19 13:56 ` Olivier Matz
2014-05-19 13:56 ` [dpdk-dev] [PATCH v2 07/17] mbuf: replace data pointer by an offset Olivier Matz
` (12 subsequent siblings)
18 siblings, 0 replies; 28+ messages in thread
From: Olivier Matz @ 2014-05-19 13:56 UTC (permalink / raw)
To: dev
Fix the indentation of comments: avoid exceeding 80 columns, and
start all comments at the same offset.
Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
---
lib/librte_mbuf/rte_mbuf.h | 30 +++++++++++++++---------------
1 file changed, 15 insertions(+), 15 deletions(-)
diff --git a/lib/librte_mbuf/rte_mbuf.h b/lib/librte_mbuf/rte_mbuf.h
index 1777b8b..5191eec 100644
--- a/lib/librte_mbuf/rte_mbuf.h
+++ b/lib/librte_mbuf/rte_mbuf.h
@@ -142,36 +142,36 @@ struct rte_mbuf {
* config option.
*/
union {
- rte_atomic16_t refcnt_atomic; /**< Atomically accessed refcnt */
- uint16_t refcnt; /**< Non-atomically accessed refcnt */
+ rte_atomic16_t refcnt_atomic; /**< Atomically accessed refcnt */
+ uint16_t refcnt; /**< Non-atomically accessed refcnt */
};
#else
- uint16_t refcnt_reserved; /**< Do not use this field */
+ uint16_t refcnt_reserved; /**< Do not use this field */
#endif
- uint16_t ol_flags; /**< Offload features. */
- uint32_t reserved; /**< Unused field. Required for padding. */
+ uint16_t ol_flags; /**< Offload features. */
+ uint32_t reserved; /**< Unused field. Required for padding. */
/* valid for any segment */
- struct rte_mbuf *next; /**< Next segment of scattered packet. */
- void* data; /**< Start address of data in segment buffer. */
- uint16_t data_len; /**< Amount of data in segment buffer. */
+ struct rte_mbuf *next; /**< Next segment of scattered packet. */
+ void* data; /**< Start address of data in segment. */
+ uint16_t data_len; /**< Amount of data in segment buffer. */
/* these fields are valid for first segment only */
- uint8_t nb_segs; /**< Number of segments. */
- uint8_t in_port; /**< Input port. */
- uint32_t pkt_len; /**< Total pkt len: sum of all segment data_len. */
+ uint8_t nb_segs; /**< Number of segments. */
+ uint8_t in_port; /**< Input port. */
+ uint32_t pkt_len; /**< Total pkt len: sum of all segments. */
/* offload features, valid for first segment only */
union rte_vlan_macip vlan_macip;
union {
- uint32_t rss; /**< RSS hash result if RSS enabled */
+ uint32_t rss; /**< RSS hash result if RSS enabled */
struct {
uint16_t hash;
uint16_t id;
- } fdir; /**< Filter identifier if FDIR enabled */
- uint32_t sched; /**< Hierarchical scheduler */
- } hash; /**< hash information */
+ } fdir; /**< Filter identifier if FDIR enabled */
+ uint32_t sched; /**< Hierarchical scheduler */
+ } hash; /**< hash information */
} __rte_cache_aligned;
/**
--
1.9.2
* [dpdk-dev] [PATCH v2 07/17] mbuf: replace data pointer by an offset
2014-05-19 13:56 [dpdk-dev] [PATCH v2 00/17] ixgbe/mbuf: add TSO support Olivier Matz
` (5 preceding siblings ...)
2014-05-19 13:56 ` [dpdk-dev] [PATCH v2 06/17] mbuf: cosmetic changes in rte_mbuf structure Olivier Matz
@ 2014-05-19 13:56 ` Olivier Matz
2014-05-19 13:56 ` [dpdk-dev] [PATCH v2 08/17] mbuf: add functions to get the name of an ol_flag Olivier Matz
` (11 subsequent siblings)
18 siblings, 0 replies; 28+ messages in thread
From: Olivier Matz @ 2014-05-19 13:56 UTC (permalink / raw)
To: dev
The mbuf structure already contains a pointer to the beginning of the
buffer (m->buf_addr), so there is no need to spend 8 more bytes on a
second pointer to the beginning of the data.
A 16-bit unsigned integer is enough, since an mbuf is never longer
than 64KB. This modification saves 6 bytes in the structure.
Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
---
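As a quick illustration (not from the patch; the struct and macro
names are hypothetical), a pointer into a buffer shorter than 64KB is
equivalent to the buffer base plus a 16-bit offset, which is exactly
what the reworked rte_pktmbuf_mtod() computes:

#include <assert.h>
#include <stdint.h>

/* Hypothetical mini-mbuf mirroring the new layout: a 2-byte offset
 * replaces the 8-byte data pointer. */
struct tiny_mbuf {
	void    *buf_addr;  /* start of the segment buffer */
	uint16_t data_off;  /* start of data, relative to buf_addr */
};

#define tiny_mtod(m, t) ((t)((char *)(m)->buf_addr + (m)->data_off))

int main(void)
{
	char buf[2048];
	struct tiny_mbuf m = { .buf_addr = buf, .data_off = 128 };

	/* same address the old 'void *data' field would have held */
	assert(tiny_mtod(&m, char *) == buf + 128);

	/* prepending a 10-byte header is just an offset update */
	m.data_off -= 10;
	assert(tiny_mtod(&m, char *) == buf + 118);
	return 0;
}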
app/test-pmd/csumonly.c | 2 +-
app/test-pmd/flowgen.c | 2 +-
app/test-pmd/icmpecho.c | 2 +-
app/test-pmd/macfwd-retry.c | 2 +-
app/test-pmd/macfwd.c | 2 +-
app/test-pmd/macswap.c | 2 +-
app/test-pmd/rxonly.c | 2 +-
app/test-pmd/testpmd.c | 2 +-
app/test-pmd/txonly.c | 7 +++---
app/test/test_mbuf.c | 6 ++---
examples/exception_path/main.c | 3 ++-
examples/vhost/main.c | 21 ++++++++++-------
examples/vhost_xen/main.c | 2 +-
lib/librte_mbuf/rte_mbuf.c | 7 +++---
lib/librte_mbuf/rte_mbuf.h | 44 ++++++++++++++++-------------------
lib/librte_pmd_e1000/em_rxtx.c | 12 +++++-----
lib/librte_pmd_e1000/igb_rxtx.c | 13 +++++------
lib/librte_pmd_ixgbe/ixgbe_rxtx.c | 13 ++++++-----
lib/librte_pmd_ixgbe/ixgbe_rxtx.h | 3 +--
lib/librte_pmd_virtio/virtio_rxtx.c | 2 +-
lib/librte_pmd_virtio/virtqueue.h | 5 ++--
lib/librte_pmd_vmxnet3/vmxnet3_rxtx.c | 5 ++--
22 files changed, 79 insertions(+), 80 deletions(-)
diff --git a/app/test-pmd/csumonly.c b/app/test-pmd/csumonly.c
index ee82eb6..3313b87 100644
--- a/app/test-pmd/csumonly.c
+++ b/app/test-pmd/csumonly.c
@@ -263,7 +263,7 @@ pkt_burst_checksum_forward(struct fwd_stream *fs)
pkt_ol_flags = mb->ol_flags;
ol_flags = (uint16_t) (pkt_ol_flags & (~PKT_TX_L4_MASK));
- eth_hdr = (struct ether_hdr *) mb->data;
+ eth_hdr = rte_pktmbuf_mtod(mb, struct ether_hdr *);
eth_type = rte_be_to_cpu_16(eth_hdr->ether_type);
if (eth_type == ETHER_TYPE_VLAN) {
/* Only allow single VLAN label here */
diff --git a/app/test-pmd/flowgen.c b/app/test-pmd/flowgen.c
index 17dbf83..d69b2b8 100644
--- a/app/test-pmd/flowgen.c
+++ b/app/test-pmd/flowgen.c
@@ -175,7 +175,7 @@ pkt_burst_flow_gen(struct fwd_stream *fs)
pkt->next = NULL;
/* Initialize Ethernet header. */
- eth_hdr = (struct ether_hdr *)pkt->data;
+ eth_hdr = rte_pktmbuf_mtod(pkt, struct ether_hdr *);
ether_addr_copy(&cfg_ether_dst, &eth_hdr->d_addr);
ether_addr_copy(&cfg_ether_src, &eth_hdr->s_addr);
eth_hdr->ether_type = rte_cpu_to_be_16(ETHER_TYPE_IPv4);
diff --git a/app/test-pmd/icmpecho.c b/app/test-pmd/icmpecho.c
index 4a277b8..7fd4b6d 100644
--- a/app/test-pmd/icmpecho.c
+++ b/app/test-pmd/icmpecho.c
@@ -330,7 +330,7 @@ reply_to_icmp_echo_rqsts(struct fwd_stream *fs)
nb_replies = 0;
for (i = 0; i < nb_rx; i++) {
pkt = pkts_burst[i];
- eth_h = (struct ether_hdr *) pkt->data;
+ eth_h = rte_pktmbuf_mtod(pkt, struct ether_hdr *);
eth_type = RTE_BE_TO_CPU_16(eth_h->ether_type);
l2_len = sizeof(struct ether_hdr);
if (verbose_level > 0) {
diff --git a/app/test-pmd/macfwd-retry.c b/app/test-pmd/macfwd-retry.c
index 687ff8d..7749c9e 100644
--- a/app/test-pmd/macfwd-retry.c
+++ b/app/test-pmd/macfwd-retry.c
@@ -119,7 +119,7 @@ pkt_burst_mac_retry_forward(struct fwd_stream *fs)
fs->rx_packets += nb_rx;
for (i = 0; i < nb_rx; i++) {
mb = pkts_burst[i];
- eth_hdr = (struct ether_hdr *) mb->data;
+ eth_hdr = rte_pktmbuf_mtod(mb, struct ether_hdr *);
ether_addr_copy(&peer_eth_addrs[fs->peer_addr],
&eth_hdr->d_addr);
ether_addr_copy(&ports[fs->tx_port].eth_addr,
diff --git a/app/test-pmd/macfwd.c b/app/test-pmd/macfwd.c
index 8d7612c..ab74d0c 100644
--- a/app/test-pmd/macfwd.c
+++ b/app/test-pmd/macfwd.c
@@ -110,7 +110,7 @@ pkt_burst_mac_forward(struct fwd_stream *fs)
txp = &ports[fs->tx_port];
for (i = 0; i < nb_rx; i++) {
mb = pkts_burst[i];
- eth_hdr = (struct ether_hdr *) mb->data;
+ eth_hdr = rte_pktmbuf_mtod(mb, struct ether_hdr *);
ether_addr_copy(&peer_eth_addrs[fs->peer_addr],
&eth_hdr->d_addr);
ether_addr_copy(&ports[fs->tx_port].eth_addr,
diff --git a/app/test-pmd/macswap.c b/app/test-pmd/macswap.c
index 731f487..d274b36 100644
--- a/app/test-pmd/macswap.c
+++ b/app/test-pmd/macswap.c
@@ -110,7 +110,7 @@ pkt_burst_mac_swap(struct fwd_stream *fs)
txp = &ports[fs->tx_port];
for (i = 0; i < nb_rx; i++) {
mb = pkts_burst[i];
- eth_hdr = (struct ether_hdr *) mb->data;
+ eth_hdr = rte_pktmbuf_mtod(mb, struct ether_hdr *);
/* Swap dest and src mac addresses. */
ether_addr_copy(&eth_hdr->d_addr, &addr);
diff --git a/app/test-pmd/rxonly.c b/app/test-pmd/rxonly.c
index b77c8ce..5751b0b 100644
--- a/app/test-pmd/rxonly.c
+++ b/app/test-pmd/rxonly.c
@@ -149,7 +149,7 @@ pkt_burst_receive(struct fwd_stream *fs)
rte_pktmbuf_free(mb);
continue;
}
- eth_hdr = (struct ether_hdr *) mb->data;
+ eth_hdr = rte_pktmbuf_mtod(mb, struct ether_hdr *);
eth_type = RTE_BE_TO_CPU_16(eth_hdr->ether_type);
ol_flags = mb->ol_flags;
print_ether_addr(" src=", ð_hdr->s_addr);
diff --git a/app/test-pmd/testpmd.c b/app/test-pmd/testpmd.c
index ccbb8bc..92e2729 100644
--- a/app/test-pmd/testpmd.c
+++ b/app/test-pmd/testpmd.c
@@ -404,7 +404,7 @@ testpmd_mbuf_ctor(struct rte_mempool *mp,
mb_ctor_arg->seg_buf_offset);
mb->buf_len = mb_ctor_arg->seg_buf_size;
mb->ol_flags = 0;
- mb->data = (char *) mb->buf_addr + RTE_PKTMBUF_HEADROOM;
+ mb->data_off = RTE_PKTMBUF_HEADROOM;
mb->nb_segs = 1;
mb->vlan_macip.data = 0;
mb->hash.rss = 0;
diff --git a/app/test-pmd/txonly.c b/app/test-pmd/txonly.c
index 96c2844..9d6cffb 100644
--- a/app/test-pmd/txonly.c
+++ b/app/test-pmd/txonly.c
@@ -111,13 +111,13 @@ copy_buf_to_pkt_segs(void* buf, unsigned len, struct rte_mbuf *pkt,
seg = seg->next;
}
copy_len = seg->data_len - offset;
- seg_buf = ((char *) seg->data + offset);
+ seg_buf = (rte_pktmbuf_mtod(seg, char *) + offset);
while (len > copy_len) {
rte_memcpy(seg_buf, buf, (size_t) copy_len);
len -= copy_len;
buf = ((char*) buf + copy_len);
seg = seg->next;
- seg_buf = seg->data;
+ seg_buf = rte_pktmbuf_mtod(seg, char *);
}
rte_memcpy(seg_buf, buf, (size_t) len);
}
@@ -126,7 +126,8 @@ static inline void
copy_buf_to_pkt(void* buf, unsigned len, struct rte_mbuf *pkt, unsigned offset)
{
if (offset + len <= pkt->data_len) {
- rte_memcpy(((char *) pkt->data + offset), buf, (size_t) len);
+ rte_memcpy((rte_pktmbuf_mtod(pkt, char *) + offset),
+ buf, (size_t) len);
return;
}
copy_buf_to_pkt_segs(buf, len, pkt, offset);
diff --git a/app/test/test_mbuf.c b/app/test/test_mbuf.c
index 097e75d..ba2f2d9 100644
--- a/app/test/test_mbuf.c
+++ b/app/test/test_mbuf.c
@@ -432,7 +432,7 @@ test_pktmbuf_pool_ptr(void)
printf("rte_pktmbuf_alloc() failed (%u)\n", i);
ret = -1;
}
- m[i]->data = RTE_PTR_ADD(m[i]->data, 64);
+ m[i]->data_off += 64;
}
/* free them */
@@ -451,8 +451,8 @@ test_pktmbuf_pool_ptr(void)
printf("rte_pktmbuf_alloc() failed (%u)\n", i);
ret = -1;
}
- if (m[i]->data != RTE_PTR_ADD(m[i]->buf_addr, RTE_PKTMBUF_HEADROOM)) {
- printf ("data pointer not set properly\n");
+ if (m[i]->data_off != RTE_PKTMBUF_HEADROOM) {
+ printf ("invalid data_off\n");
ret = -1;
}
}
diff --git a/examples/exception_path/main.c b/examples/exception_path/main.c
index d9a85b5..ca65511 100644
--- a/examples/exception_path/main.c
+++ b/examples/exception_path/main.c
@@ -302,7 +302,8 @@ main_loop(__attribute__((unused)) void *arg)
if (m == NULL)
continue;
- ret = read(tap_fd, m->data, MAX_PACKET_SZ);
+ ret = read(tap_fd, rte_pktmbuf_mtod(m, void *),
+ MAX_PACKET_SZ);
lcore_stats[lcore_id].rx++;
if (unlikely(ret < 0)) {
FATAL_ERROR("Reading from %s interface failed",
diff --git a/examples/vhost/main.c b/examples/vhost/main.c
index 26cfc8e..c65f24d 100644
--- a/examples/vhost/main.c
+++ b/examples/vhost/main.c
@@ -815,7 +815,9 @@ virtio_dev_rx(struct virtio_net *dev, struct rte_mbuf **pkts, uint32_t count)
vq->used->ring[res_cur_idx & (vq->size - 1)].len = packet_len;
/* Copy mbuf data to buffer */
- rte_memcpy((void *)(uintptr_t)buff_addr, (const void*)buff->data, rte_pktmbuf_data_len(buff));
+ rte_memcpy((void *)(uintptr_t)buff_addr,
+ rte_pktmbuf_mtod(buff, const void *),
+ rte_pktmbuf_data_len(buff));
res_cur_idx++;
packet_success++;
@@ -877,7 +879,7 @@ link_vmdq(struct virtio_net *dev, struct rte_mbuf *m)
int i, ret;
/* Learn MAC address of guest device from packet */
- pkt_hdr = (struct ether_hdr *)m->data;
+ pkt_hdr = rte_pktmbuf_mtod(m, struct ether_hdr *);
dev_ll = ll_root_used;
@@ -965,7 +967,7 @@ virtio_tx_local(struct virtio_net *dev, struct rte_mbuf *m)
struct ether_hdr *pkt_hdr;
uint64_t ret = 0;
- pkt_hdr = (struct ether_hdr *)m->data;
+ pkt_hdr = rte_pktmbuf_mtod(m, struct ether_hdr *);
/*get the used devices list*/
dev_ll = ll_root_used;
@@ -1042,18 +1044,21 @@ virtio_tx_route(struct virtio_net* dev, struct rte_mbuf *m, struct rte_mempool *
mbuf->pkt_len = mbuf->data_len;
/* Copy ethernet header to mbuf. */
- rte_memcpy((void*)mbuf->data, (const void*)m->data, ETH_HLEN);
+ rte_memcpy(rte_pktmbuf_mtod(mbuf, void *),
+ rte_pktmbuf_mtod(m, const void *),
+ ETH_HLEN);
/* Setup vlan header. Bytes need to be re-ordered for network with htons()*/
- vlan_hdr = (struct vlan_ethhdr *) mbuf->data;
+ vlan_hdr = rte_pktmbuf_mtod(mbuf, struct vlan_ethhdr *);
vlan_hdr->h_vlan_encapsulated_proto = vlan_hdr->h_vlan_proto;
vlan_hdr->h_vlan_proto = htons(ETH_P_8021Q);
vlan_hdr->h_vlan_TCI = htons(vlan_tag);
/* Copy the remaining packet contents to the mbuf. */
- rte_memcpy((void*) ((uint8_t*)mbuf->data + VLAN_ETH_HLEN),
- (const void*) ((uint8_t*)m->data + ETH_HLEN), (m->data_len - ETH_HLEN));
+ rte_memcpy((void *)(rte_pktmbuf_mtod(mbuf, uint8_t *) + VLAN_ETH_HLEN),
+ (const void *)(rte_pktmbuf_mtod(m, uint8_t *) + ETH_HLEN),
+ (m->data_len - ETH_HLEN));
tx_q->m_table[len] = mbuf;
len++;
if (enable_stats) {
@@ -1144,7 +1149,7 @@ virtio_dev_tx(struct virtio_net* dev, struct rte_mempool *mbuf_pool)
/* Setup dummy mbuf. This is copied to a real mbuf if transmitted out the physical port. */
m.data_len = desc->len;
- m.data = (void*)(uintptr_t)buff_addr;
+ m.data_off = 0;
PRINT_PACKET(dev, (uintptr_t)buff_addr, desc->len, 0);
diff --git a/examples/vhost_xen/main.c b/examples/vhost_xen/main.c
index 2cf0029..6da1b22 100644
--- a/examples/vhost_xen/main.c
+++ b/examples/vhost_xen/main.c
@@ -981,7 +981,7 @@ virtio_dev_tx(struct virtio_net* dev, struct rte_mempool *mbuf_pool)
/* Setup dummy mbuf. This is copied to a real mbuf if transmitted out the physical port. */
m.data_len = desc->len;
- m.data = (void*)(uintptr_t)buff_addr;
+ m.data_off = 0;
m.nb_segs = 1;
virtio_tx_route(dev, &m, mbuf_pool, 0);
diff --git a/lib/librte_mbuf/rte_mbuf.c b/lib/librte_mbuf/rte_mbuf.c
index 9c4077a..e791aa6 100644
--- a/lib/librte_mbuf/rte_mbuf.c
+++ b/lib/librte_mbuf/rte_mbuf.c
@@ -104,8 +104,7 @@ rte_pktmbuf_init(struct rte_mempool *mp,
m->buf_len = (uint16_t)buf_len;
/* keep some headroom between start of buffer and data */
- m->data = (char*) m->buf_addr + RTE_MIN(RTE_PKTMBUF_HEADROOM,
- (uint16_t)m->buf_len);
+ m->data_off = RTE_MIN(RTE_PKTMBUF_HEADROOM, (uint16_t)m->buf_len);
/* init some constant fields */
m->pool = mp;
@@ -171,12 +170,12 @@ rte_pktmbuf_dump(FILE *f, const struct rte_mbuf *m, unsigned dump_len)
__rte_mbuf_sanity_check(m, 0);
fprintf(f, " segment at 0x%p, data=0x%p, data_len=%u\n",
- m, m->data, (unsigned)m->data_len);
+ m, rte_pktmbuf_mtod(m, void *), (unsigned)m->data_len);
len = dump_len;
if (len > m->data_len)
len = m->data_len;
if (len != 0)
- rte_hexdump(f, NULL, m->data, len);
+ rte_hexdump(f, NULL, rte_pktmbuf_mtod(m, void *), len);
dump_len -= len;
m = m->next;
nb_segs --;
diff --git a/lib/librte_mbuf/rte_mbuf.h b/lib/librte_mbuf/rte_mbuf.h
index 5191eec..cb0a293 100644
--- a/lib/librte_mbuf/rte_mbuf.h
+++ b/lib/librte_mbuf/rte_mbuf.h
@@ -132,6 +132,13 @@ struct rte_mbuf {
void *buf_addr; /**< Virtual address of segment buffer. */
uint64_t buf_physaddr:48; /**< Physical address of segment buffer. */
uint64_t buf_len:16; /**< Length of segment buffer. */
+
+ /* valid for any segment */
+ struct rte_mbuf *next; /**< Next segment of scattered packet. */
+ uint16_t data_off;
+ uint16_t data_len; /**< Amount of data in segment buffer. */
+ uint32_t pkt_len; /**< Total pkt len: sum of all segments. */
+
#ifdef RTE_MBUF_REFCNT
/**
* 16-bit Reference counter.
@@ -149,18 +156,11 @@ struct rte_mbuf {
uint16_t refcnt_reserved; /**< Do not use this field */
#endif
- uint16_t ol_flags; /**< Offload features. */
- uint32_t reserved; /**< Unused field. Required for padding. */
-
- /* valid for any segment */
- struct rte_mbuf *next; /**< Next segment of scattered packet. */
- void* data; /**< Start address of data in segment. */
- uint16_t data_len; /**< Amount of data in segment buffer. */
-
/* these fields are valid for first segment only */
uint8_t nb_segs; /**< Number of segments. */
uint8_t in_port; /**< Input port. */
- uint32_t pkt_len; /**< Total pkt len: sum of all segments. */
+ uint16_t ol_flags; /**< Offload features. */
+ uint16_t reserved; /**< Unused field. Required for padding. */
/* offload features, valid for first segment only */
union rte_vlan_macip vlan_macip;
@@ -172,6 +172,7 @@ struct rte_mbuf {
} fdir; /**< Filter identifier if FDIR enabled */
uint32_t sched; /**< Hierarchical scheduler */
} hash; /**< hash information */
+ uint64_t reserved2; /**< Unused field. Required for padding. */
} __rte_cache_aligned;
/**
@@ -436,8 +437,6 @@ void rte_pktmbuf_pool_init(struct rte_mempool *mp, void *opaque_arg);
*/
static inline void rte_pktmbuf_reset(struct rte_mbuf *m)
{
- uint32_t buf_ofs;
-
m->next = NULL;
m->pkt_len = 0;
m->vlan_macip.data = 0;
@@ -445,9 +444,8 @@ static inline void rte_pktmbuf_reset(struct rte_mbuf *m)
m->in_port = 0xff;
m->ol_flags = 0;
- buf_ofs = (RTE_PKTMBUF_HEADROOM <= m->buf_len) ?
+ m->data_off = (RTE_PKTMBUF_HEADROOM <= m->buf_len) ?
RTE_PKTMBUF_HEADROOM : m->buf_len;
- m->data = (char*) m->buf_addr + buf_ofs;
m->data_len = 0;
__rte_mbuf_sanity_check(m, 1);
@@ -504,7 +502,7 @@ static inline void rte_pktmbuf_attach(struct rte_mbuf *mi, struct rte_mbuf *md)
mi->buf_len = md->buf_len;
mi->next = md->next;
- mi->data = md->data;
+ mi->data_off = md->data_off;
mi->data_len = md->data_len;
mi->in_port = md->in_port;
mi->vlan_macip = md->vlan_macip;
@@ -533,16 +531,14 @@ static inline void rte_pktmbuf_detach(struct rte_mbuf *m)
{
const struct rte_mempool *mp = m->pool;
void *buf = RTE_MBUF_TO_BADDR(m);
- uint32_t buf_ofs;
uint32_t buf_len = mp->elt_size - sizeof(*m);
m->buf_physaddr = rte_mempool_virt2phy(mp, m) + sizeof (*m);
m->buf_addr = buf;
m->buf_len = (uint16_t)buf_len;
- buf_ofs = (RTE_PKTMBUF_HEADROOM <= m->buf_len) ?
+ m->data_off = (RTE_PKTMBUF_HEADROOM <= m->buf_len) ?
RTE_PKTMBUF_HEADROOM : m->buf_len;
- m->data = (char*) m->buf_addr + buf_ofs;
m->data_len = 0;
}
@@ -706,7 +702,7 @@ static inline void rte_pktmbuf_refcnt_update(struct rte_mbuf *m, int16_t v)
static inline uint16_t rte_pktmbuf_headroom(const struct rte_mbuf *m)
{
__rte_mbuf_sanity_check(m, 1);
- return (uint16_t) ((char*) m->data - (char*) m->buf_addr);
+ return m->data_off;
}
/**
@@ -754,7 +750,7 @@ static inline struct rte_mbuf *rte_pktmbuf_lastseg(struct rte_mbuf *m)
* @param t
* The type to cast the result into.
*/
-#define rte_pktmbuf_mtod(m, t) ((t)((m)->data))
+#define rte_pktmbuf_mtod(m, t) ((t)((char *)(m)->buf_addr + (m)->data_off))
/**
* A macro that returns the length of the packet.
@@ -799,11 +795,11 @@ static inline char *rte_pktmbuf_prepend(struct rte_mbuf *m,
if (unlikely(len > rte_pktmbuf_headroom(m)))
return NULL;
- m->data = (char*) m->data - len;
+ m->data_off -= len;
m->data_len = (uint16_t)(m->data_len + len);
m->pkt_len = (m->pkt_len + len);
- return (char*) m->data;
+ return (char*) m->buf_addr + m->data_off;
}
/**
@@ -832,7 +828,7 @@ static inline char *rte_pktmbuf_append(struct rte_mbuf *m, uint16_t len)
if (unlikely(len > rte_pktmbuf_tailroom(m_last)))
return NULL;
- tail = (char*) m_last->data + m_last->data_len;
+ tail = (char*) m_last->buf_addr + m_last->data_off + m_last->data_len;
m_last->data_len = (uint16_t)(m_last->data_len + len);
m->pkt_len = (m->pkt_len + len);
return (char*) tail;
@@ -860,9 +856,9 @@ static inline char *rte_pktmbuf_adj(struct rte_mbuf *m, uint16_t len)
return NULL;
m->data_len = (uint16_t)(m->data_len - len);
- m->data = ((char*) m->data + len);
+ m->data_off += len;
m->pkt_len = (m->pkt_len - len);
- return (char*) m->data;
+ return (char *)m->buf_addr + m->data_off;
}
/**
diff --git a/lib/librte_pmd_e1000/em_rxtx.c b/lib/librte_pmd_e1000/em_rxtx.c
index ea72fe3..8c302b6 100644
--- a/lib/librte_pmd_e1000/em_rxtx.c
+++ b/lib/librte_pmd_e1000/em_rxtx.c
@@ -90,8 +90,7 @@ rte_rxmbuf_alloc(struct rte_mempool *mp)
}
#define RTE_MBUF_DATA_DMA_ADDR(mb) \
- (uint64_t) ((mb)->buf_physaddr + \
- (uint64_t) ((char *)((mb)->data) - (char *)(mb)->buf_addr))
+ (uint64_t) ((mb)->buf_physaddr + (mb)->data_off)
#define RTE_MBUF_DATA_DMA_ADDR_DEFAULT(mb) \
(uint64_t) ((mb)->buf_physaddr + RTE_PKTMBUF_HEADROOM)
@@ -771,8 +770,8 @@ eth_em_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
*/
pkt_len = (uint16_t) (rte_le_to_cpu_16(rxd.length) -
rxq->crc_len);
- rxm->data = (char*) rxm->buf_addr + RTE_PKTMBUF_HEADROOM;
- rte_packet_prefetch(rxm->data);
+ rxm->data_off = RTE_PKTMBUF_HEADROOM;
+ rte_packet_prefetch((char *)rxm->buf_addr + rxm->data_off);
rxm->nb_segs = 1;
rxm->next = NULL;
rxm->pkt_len = pkt_len;
@@ -941,7 +940,7 @@ eth_em_recv_scattered_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
*/
data_len = rte_le_to_cpu_16(rxd.length);
rxm->data_len = data_len;
- rxm->data = (char*) rxm->buf_addr + RTE_PKTMBUF_HEADROOM;
+ rxm->data_off = RTE_PKTMBUF_HEADROOM;
/*
* If this is the first buffer of the received packet,
@@ -1013,7 +1012,8 @@ eth_em_recv_scattered_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
rxm->vlan_macip.f.vlan_tci = rte_le_to_cpu_16(rxd.special);
/* Prefetch data of first segment, if configured to do so. */
- rte_packet_prefetch(first_seg->data);
+ rte_packet_prefetch((char *)first_seg->buf_addr +
+ first_seg->data_off);
/*
* Store the mbuf address into the next entry of the array
diff --git a/lib/librte_pmd_e1000/igb_rxtx.c b/lib/librte_pmd_e1000/igb_rxtx.c
index da33171..ab0ff01 100644
--- a/lib/librte_pmd_e1000/igb_rxtx.c
+++ b/lib/librte_pmd_e1000/igb_rxtx.c
@@ -84,9 +84,7 @@ rte_rxmbuf_alloc(struct rte_mempool *mp)
}
#define RTE_MBUF_DATA_DMA_ADDR(mb) \
- (uint64_t) ((mb)->buf_physaddr + \
- (uint64_t) ((char *)((mb)->data) - \
- (char *)(mb)->buf_addr))
+ (uint64_t) ((mb)->buf_physaddr + (mb)->data_off)
#define RTE_MBUF_DATA_DMA_ADDR_DEFAULT(mb) \
(uint64_t) ((mb)->buf_physaddr + RTE_PKTMBUF_HEADROOM)
@@ -742,8 +740,8 @@ eth_igb_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
*/
pkt_len = (uint16_t) (rte_le_to_cpu_16(rxd.wb.upper.length) -
rxq->crc_len);
- rxm->data = (char*) rxm->buf_addr + RTE_PKTMBUF_HEADROOM;
- rte_packet_prefetch(rxm->data);
+ rxm->data_off = RTE_PKTMBUF_HEADROOM;
+ rte_packet_prefetch((char *)rxm->buf_addr + rxm->data_off);
rxm->nb_segs = 1;
rxm->next = NULL;
rxm->pkt_len = pkt_len;
@@ -919,7 +917,7 @@ eth_igb_recv_scattered_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
*/
data_len = rte_le_to_cpu_16(rxd.wb.upper.length);
rxm->data_len = data_len;
- rxm->data = (char*) rxm->buf_addr + RTE_PKTMBUF_HEADROOM;
+ rxm->data_off = RTE_PKTMBUF_HEADROOM;
/*
* If this is the first buffer of the received packet,
@@ -1001,7 +999,8 @@ eth_igb_recv_scattered_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
first_seg->ol_flags = pkt_flags;
/* Prefetch data of first segment, if configured to do so. */
- rte_packet_prefetch(first_seg->data);
+ rte_packet_prefetch((char *)first_seg->buf_addr +
+ first_seg->data_off);
/*
* Store the mbuf address into the next entry of the array
diff --git a/lib/librte_pmd_ixgbe/ixgbe_rxtx.c b/lib/librte_pmd_ixgbe/ixgbe_rxtx.c
index 1f95f30..acde64b 100644
--- a/lib/librte_pmd_ixgbe/ixgbe_rxtx.c
+++ b/lib/librte_pmd_ixgbe/ixgbe_rxtx.c
@@ -988,7 +988,7 @@ ixgbe_rx_alloc_bufs(struct igb_rx_queue *rxq)
mb = rxep[i].mbuf;
rte_mbuf_refcnt_set(mb, 1);
mb->next = NULL;
- mb->data = (char *)mb->buf_addr + RTE_PKTMBUF_HEADROOM;
+ mb->data_off = RTE_PKTMBUF_HEADROOM;
mb->nb_segs = 1;
mb->in_port = rxq->port_id;
@@ -1239,8 +1239,8 @@ ixgbe_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
*/
pkt_len = (uint16_t) (rte_le_to_cpu_16(rxd.wb.upper.length) -
rxq->crc_len);
- rxm->data = (char*) rxm->buf_addr + RTE_PKTMBUF_HEADROOM;
- rte_packet_prefetch(rxm->data);
+ rxm->data_off = RTE_PKTMBUF_HEADROOM;
+ rte_packet_prefetch((char *)rxm->buf_addr + rxm->data_off);
rxm->nb_segs = 1;
rxm->next = NULL;
rxm->pkt_len = pkt_len;
@@ -1423,7 +1423,7 @@ ixgbe_recv_scattered_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
*/
data_len = rte_le_to_cpu_16(rxd.wb.upper.length);
rxm->data_len = data_len;
- rxm->data = (char*) rxm->buf_addr + RTE_PKTMBUF_HEADROOM;
+ rxm->data_off = RTE_PKTMBUF_HEADROOM;
/*
* If this is the first buffer of the received packet,
@@ -1515,7 +1515,8 @@ ixgbe_recv_scattered_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
}
/* Prefetch data of first segment, if configured to do so. */
- rte_packet_prefetch(first_seg->data);
+ rte_packet_prefetch((char *)first_seg->buf_addr +
+ first_seg->data_off);
/*
* Store the mbuf address into the next entry of the array
@@ -3084,7 +3085,7 @@ ixgbe_alloc_rx_queue_mbufs(struct igb_rx_queue *rxq)
rte_mbuf_refcnt_set(mbuf, 1);
mbuf->next = NULL;
- mbuf->data = (char *)mbuf->buf_addr + RTE_PKTMBUF_HEADROOM;
+ mbuf->data_off = RTE_PKTMBUF_HEADROOM;
mbuf->nb_segs = 1;
mbuf->in_port = rxq->port_id;
diff --git a/lib/librte_pmd_ixgbe/ixgbe_rxtx.h b/lib/librte_pmd_ixgbe/ixgbe_rxtx.h
index b959c9b..8cf66bf 100644
--- a/lib/librte_pmd_ixgbe/ixgbe_rxtx.h
+++ b/lib/librte_pmd_ixgbe/ixgbe_rxtx.h
@@ -45,8 +45,7 @@
#endif
#define RTE_MBUF_DATA_DMA_ADDR(mb) \
- (uint64_t) ((mb)->buf_physaddr + (uint64_t)((char *)((mb)->data) - \
- (char *)(mb)->buf_addr))
+ (uint64_t) ((mb)->buf_physaddr + (mb)->data_off)
#define RTE_MBUF_DATA_DMA_ADDR_DEFAULT(mb) \
(uint64_t) ((mb)->buf_physaddr + RTE_PKTMBUF_HEADROOM)
diff --git a/lib/librte_pmd_virtio/virtio_rxtx.c b/lib/librte_pmd_virtio/virtio_rxtx.c
index 2af2c49..835e277 100644
--- a/lib/librte_pmd_virtio/virtio_rxtx.c
+++ b/lib/librte_pmd_virtio/virtio_rxtx.c
@@ -268,7 +268,7 @@ virtio_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
continue;
}
rxm->in_port = rxvq->port_id;
- rxm->data = (char *)rxm->buf_addr + RTE_PKTMBUF_HEADROOM;
+ rxm->data_off = RTE_PKTMBUF_HEADROOM;
rxm->nb_segs = 1;
rxm->next = NULL;
rxm->pkt_len = (uint32_t)(len[i] - sizeof(struct virtio_net_hdr));
diff --git a/lib/librte_pmd_virtio/virtqueue.h b/lib/librte_pmd_virtio/virtqueue.h
index 210944e..11f908c 100644
--- a/lib/librte_pmd_virtio/virtqueue.h
+++ b/lib/librte_pmd_virtio/virtqueue.h
@@ -59,8 +59,7 @@
#define VIRTQUEUE_MAX_NAME_SZ 32
#define RTE_MBUF_DATA_DMA_ADDR(mb) \
- (uint64_t) ((mb)->buf_physaddr + (uint64_t)((char *)((mb)->data) - \
- (char *)(mb)->buf_addr))
+ (uint64_t) ((mb)->buf_physaddr + (mb)->data_off)
#define VTNET_SQ_RQ_QUEUE_IDX 0
#define VTNET_SQ_TQ_QUEUE_IDX 1
@@ -363,7 +362,7 @@ virtqueue_dequeue_burst_rx(struct virtqueue *vq, struct rte_mbuf **rx_pkts, uint
break;
}
rte_prefetch0(cookie);
- rte_packet_prefetch(cookie->data);
+ rte_packet_prefetch((char *)cookie->buf_addr + cookie->data_off);
rx_pkts[i] = cookie;
vq->vq_used_cons_idx++;
vq_ring_free_chain(vq, desc_idx);
diff --git a/lib/librte_pmd_vmxnet3/vmxnet3_rxtx.c b/lib/librte_pmd_vmxnet3/vmxnet3_rxtx.c
index 6db766a..0845b1d 100644
--- a/lib/librte_pmd_vmxnet3/vmxnet3_rxtx.c
+++ b/lib/librte_pmd_vmxnet3/vmxnet3_rxtx.c
@@ -80,8 +80,7 @@
#define RTE_MBUF_DATA_DMA_ADDR(mb) \
- (uint64_t) ((mb)->buf_physaddr + (uint64_t)((char *)((mb)->data) - \
- (char *)(mb)->buf_addr))
+ (uint64_t) ((mb)->buf_physaddr + (mb)->data_off)
#define RTE_MBUF_DATA_DMA_ADDR_DEFAULT(mb) \
(uint64_t) ((mb)->buf_physaddr + RTE_PKTMBUF_HEADROOM)
@@ -551,7 +550,7 @@ vmxnet3_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
rxm->data_len = (uint16_t)rcd->len;
rxm->in_port = rxq->port_id;
rxm->vlan_macip.f.vlan_tci = 0;
- rxm->data = (char *)rxm->buf_addr + RTE_PKTMBUF_HEADROOM;
+ rxm->data_off = RTE_PKTMBUF_HEADROOM;
rx_pkts[nb_rx++] = rxm;
--
1.9.2
* [dpdk-dev] [PATCH v2 08/17] mbuf: add functions to get the name of an ol_flag
2014-05-19 13:56 [dpdk-dev] [PATCH v2 00/17] ixgbe/mbuf: add TSO support Olivier Matz
` (6 preceding siblings ...)
2014-05-19 13:56 ` [dpdk-dev] [PATCH v2 07/17] mbuf: replace data pointer by an offset Olivier Matz
@ 2014-05-19 13:56 ` Olivier Matz
2014-05-19 13:56 ` [dpdk-dev] [PATCH v2 09/17] mbuf: change ol_flags to 32 bits Olivier Matz
` (10 subsequent siblings)
18 siblings, 0 replies; 28+ messages in thread
From: Olivier Matz @ 2014-05-19 13:56 UTC (permalink / raw)
To: dev
In test-pmd (rxonly.c), the code dumps the list of ol_flags. The
issue is that the list of flag names in the application has to be kept
synchronized with the flags defined in rte_mbuf.h.
This patch introduces two new functions, rte_get_rx_ol_flag_name()
and rte_get_tx_ol_flag_name(), that return the name of a flag from
its mask. It also fixes rxonly.c to use these new functions and to
display the proper flags.
Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
---
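A self-contained sketch of the mask-to-name pattern introduced here
(the DEMO_* flag values are illustrative, not the real rte_mbuf.h
ones): walk the bits of ol_flags and print the name of each known
flag, skipping unknown bits just as the reworked rxonly.c does:

#include <stdint.h>
#include <stdio.h>

#define DEMO_RX_VLAN_PKT 0x0001
#define DEMO_RX_RSS_HASH 0x0002

/* Return the flag name for a single-bit mask, NULL if unknown. */
static const char *demo_rx_flag_name(uint16_t mask)
{
	switch (mask) {
	case DEMO_RX_VLAN_PKT: return "DEMO_RX_VLAN_PKT";
	case DEMO_RX_RSS_HASH: return "DEMO_RX_RSS_HASH";
	default: return NULL;
	}
}

int main(void)
{
	uint16_t ol_flags = DEMO_RX_VLAN_PKT | DEMO_RX_RSS_HASH;
	unsigned bit;

	for (bit = 0; bit < 16; bit++) {
		const char *name = demo_rx_flag_name(ol_flags & (1 << bit));

		if (name != NULL)
			printf("%s\n", name);
	}
	return 0;
}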
app/test-pmd/rxonly.c | 33 ++++++++++--------------------
lib/librte_mbuf/rte_mbuf.h | 50 ++++++++++++++++++++++++++++++++++++++++++++--
2 files changed, 58 insertions(+), 25 deletions(-)
diff --git a/app/test-pmd/rxonly.c b/app/test-pmd/rxonly.c
index 5751b0b..94f71c7 100644
--- a/app/test-pmd/rxonly.c
+++ b/app/test-pmd/rxonly.c
@@ -69,23 +69,6 @@
#include "testpmd.h"
-#define MAX_PKT_RX_FLAGS 11
-static const char *pkt_rx_flag_names[MAX_PKT_RX_FLAGS] = {
- "VLAN_PKT",
- "RSS_HASH",
- "PKT_RX_FDIR",
- "IP_CKSUM",
- "IP_CKSUM_BAD",
-
- "IPV4_HDR",
- "IPV4_HDR_EXT",
- "IPV6_HDR",
- "IPV6_HDR_EXT",
-
- "IEEE1588_PTP",
- "IEEE1588_TMST",
-};
-
static inline void
print_ether_addr(const char *what, struct ether_addr *eth_addr)
{
@@ -169,12 +152,16 @@ pkt_burst_receive(struct fwd_stream *fs)
mb->vlan_macip.f.vlan_tci);
printf("\n");
if (ol_flags != 0) {
- int rxf;
-
- for (rxf = 0; rxf < MAX_PKT_RX_FLAGS; rxf++) {
- if (ol_flags & (1 << rxf))
- printf(" PKT_RX_%s\n",
- pkt_rx_flag_names[rxf]);
+ uint16_t rxf;
+ const char *name;
+
+ for (rxf = 0; rxf < sizeof(mb->ol_flags) * 8; rxf++) {
+ if ((ol_flags & (1 << rxf)) == 0)
+ continue;
+ name = rte_get_rx_ol_flag_name(1 << rxf);
+ if (name == NULL)
+ continue;
+ printf(" %s\n", name);
}
}
rte_pktmbuf_free(mb);
diff --git a/lib/librte_mbuf/rte_mbuf.h b/lib/librte_mbuf/rte_mbuf.h
index cb0a293..7556a86 100644
--- a/lib/librte_mbuf/rte_mbuf.h
+++ b/lib/librte_mbuf/rte_mbuf.h
@@ -99,9 +99,55 @@ extern "C" {
#define PKT_TX_IEEE1588_TMST 0x8000 /**< TX IEEE1588 packet to timestamp. */
/**
- * Bit Mask to indicate what bits required for building TX context
+ * Get the name of a RX offload flag
+ *
+ * @param mask
+ * The mask describing the flag. Usually only one bit must be set.
+ * Several bits can be given if they belong to the same mask.
+ * Ex: PKT_TX_L4_MASK.
+ * @return
+ * The name of this flag, or NULL if it's not a valid RX flag.
*/
-#define PKT_TX_OFFLOAD_MASK (PKT_TX_VLAN_PKT | PKT_TX_IP_CKSUM | PKT_TX_L4_MASK)
+static inline const char *rte_get_rx_ol_flag_name(uint16_t mask)
+{
+ switch (mask) {
+ case PKT_RX_VLAN_PKT: return "PKT_RX_VLAN_PKT";
+ case PKT_RX_RSS_HASH: return "PKT_RX_RSS_HASH";
+ case PKT_RX_FDIR: return "PKT_RX_FDIR";
+ case PKT_RX_L4_CKSUM_BAD: return "PKT_RX_L4_CKSUM_BAD";
+ case PKT_RX_IP_CKSUM_BAD: return "PKT_RX_IP_CKSUM_BAD";
+ case PKT_RX_IPV4_HDR: return "PKT_RX_IPV4_HDR";
+ case PKT_RX_IPV4_HDR_EXT: return "PKT_RX_IPV4_HDR_EXT";
+ case PKT_RX_IPV6_HDR: return "PKT_RX_IPV6_HDR";
+ case PKT_RX_IPV6_HDR_EXT: return "PKT_RX_IPV6_HDR_EXT";
+ case PKT_RX_IEEE1588_PTP: return "PKT_RX_IEEE1588_PTP";
+ case PKT_RX_IEEE1588_TMST: return "PKT_RX_IEEE1588_TMST";
+ default: return NULL;
+ }
+}
+
+/**
+ * Get the name of a TX offload flag
+ *
+ * @param mask
+ * The mask describing the flag. Usually only one bit must be set.
+ * Several bits can be given if they belong to the same mask.
+ * Ex: PKT_TX_L4_MASK.
+ * @return
+ * The name of this flag, or NULL if it's not a valid TX flag.
+ */
+static inline const char *rte_get_tx_ol_flag_name(uint16_t mask)
+{
+ switch (mask) {
+ case PKT_TX_VLAN_PKT: return "PKT_TX_VLAN_PKT";
+ case PKT_TX_IP_CKSUM: return "PKT_TX_IP_CKSUM";
+ case PKT_TX_TCP_CKSUM: return "PKT_TX_TCP_CKSUM";
+ case PKT_TX_SCTP_CKSUM: return "PKT_TX_SCTP_CKSUM";
+ case PKT_TX_UDP_CKSUM: return "PKT_TX_UDP_CKSUM";
+ case PKT_TX_IEEE1588_TMST: return "PKT_TX_IEEE1588_TMST";
+ default: return NULL;
+ }
+}
/** Offload features */
union rte_vlan_macip {
--
1.9.2
* [dpdk-dev] [PATCH v2 09/17] mbuf: change ol_flags to 32 bits
2014-05-19 13:56 [dpdk-dev] [PATCH v2 00/17] ixgbe/mbuf: add TSO support Olivier Matz
` (7 preceding siblings ...)
2014-05-19 13:56 ` [dpdk-dev] [PATCH v2 08/17] mbuf: add functions to get the name of an ol_flag Olivier Matz
@ 2014-05-19 13:56 ` Olivier Matz
2014-05-19 13:56 ` [dpdk-dev] [PATCH v2 10/17] mbuf: rename vlan_macip_len in hw_offload and increase its size Olivier Matz
` (9 subsequent siblings)
18 siblings, 0 replies; 28+ messages in thread
From: Olivier Matz @ 2014-05-19 13:56 UTC (permalink / raw)
To: dev
There is no room to add other offload flags in the current 16-bit
field. Since we have more room in the mbuf structure, we can widen
ol_flags to 32 bits.
A later commit will add support for TSO (TCP Segmentation Offload),
which requires a new ol_flag and thus justifies this change.
Thanks to this modification, another possible improvement (not part of
this series) would be to change the checksum flags from:
PKT_RX_L4_CKSUM_BAD, PKT_RX_IP_CKSUM_BAD
to:
PKT_RX_L4_CKSUM, PKT_RX_IP_CKSUM, PKT_RX_L4_CKSUM_BAD, PKT_RX_IP_CKSUM_BAD
in order to detect whether the checksum has been processed by hardware.
Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
---
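A small sketch (flag values illustrative, the TSO flag name
hypothetical) of why the field must grow: PKT_TX_IEEE1588_TMST already
occupies 0x8000, the last bit of a 16-bit field, so any additional
flag needs a wider type:

#include <stdint.h>

#define DEMO_TX_IEEE1588_TMST 0x8000  /* last bit available in 16 bits */
#define DEMO_TX_TCP_SEG       0x10000 /* hypothetical TSO flag: bit 16,
                                       * only representable in 32 bits */

int main(void)
{
	uint32_t ol_flags = 0;

	ol_flags |= DEMO_TX_IEEE1588_TMST;
	ol_flags |= DEMO_TX_TCP_SEG; /* would be truncated in a uint16_t */
	return (ol_flags & DEMO_TX_TCP_SEG) ? 0 : 1;
}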
app/test-pmd/cmdline.c | 13 +++-
app/test-pmd/config.c | 10 +--
app/test-pmd/csumonly.c | 26 ++++----
app/test-pmd/rxonly.c | 4 +-
app/test-pmd/testpmd.h | 11 +---
app/test-pmd/txonly.c | 2 +-
.../bsdapp/eal/include/exec-env/rte_kni_common.h | 2 +-
.../linuxapp/eal/include/exec-env/rte_kni_common.h | 2 +-
lib/librte_mbuf/rte_mbuf.c | 2 +-
lib/librte_mbuf/rte_mbuf.h | 52 +++++++--------
lib/librte_pmd_e1000/em_rxtx.c | 35 +++++-----
lib/librte_pmd_e1000/igb_rxtx.c | 71 ++++++++++----------
lib/librte_pmd_ixgbe/ixgbe_rxtx.c | 77 +++++++++++-----------
lib/librte_pmd_ixgbe/ixgbe_rxtx.h | 2 +-
14 files changed, 157 insertions(+), 152 deletions(-)
diff --git a/app/test-pmd/cmdline.c b/app/test-pmd/cmdline.c
index a51eee5..8f155e9 100644
--- a/app/test-pmd/cmdline.c
+++ b/app/test-pmd/cmdline.c
@@ -2257,8 +2257,17 @@ cmd_tx_cksum_set_parsed(void *parsed_result,
__attribute__((unused)) void *data)
{
struct cmd_tx_cksum_set_result *res = parsed_result;
-
- tx_cksum_set(res->port_id, res->cksum_mask);
+ uint32_t ol_flags = 0;
+
+ if (res->cksum_mask & 0x1)
+ ol_flags |= PKT_TX_IP_CKSUM;
+ if (res->cksum_mask & 0x2)
+ ol_flags |= PKT_TX_TCP_CKSUM;
+ if (res->cksum_mask & 0x4)
+ ol_flags |= PKT_TX_UDP_CKSUM;
+ if (res->cksum_mask & 0x8)
+ ol_flags |= PKT_TX_SCTP_CKSUM;
+ tx_cksum_set(res->port_id, ol_flags);
}
cmdline_parse_token_string_t cmd_tx_cksum_set_tx_cksum =
diff --git a/app/test-pmd/config.c b/app/test-pmd/config.c
index 20ad0a8..018a278 100644
--- a/app/test-pmd/config.c
+++ b/app/test-pmd/config.c
@@ -1532,14 +1532,16 @@ set_qmap(portid_t port_id, uint8_t is_rx, uint16_t queue_id, uint8_t map_value)
}
void
-tx_cksum_set(portid_t port_id, uint8_t cksum_mask)
+tx_cksum_set(portid_t port_id, uint32_t ol_flags)
{
- uint16_t tx_ol_flags;
+ uint32_t cksum_mask = PKT_TX_IP_CKSUM | PKT_TX_L4_MASK;
+
if (port_id_is_invalid(port_id))
return;
+
/* Clear last 4 bits and then set L3/4 checksum mask again */
- tx_ol_flags = (uint16_t) (ports[port_id].tx_ol_flags & 0xFFF0);
- ports[port_id].tx_ol_flags = (uint16_t) ((cksum_mask & 0xf) | tx_ol_flags);
+ ports[port_id].tx_ol_flags &= ~cksum_mask;
+ ports[port_id].tx_ol_flags |= (ol_flags & cksum_mask);
}
void
diff --git a/app/test-pmd/csumonly.c b/app/test-pmd/csumonly.c
index 3313b87..69b90a7 100644
--- a/app/test-pmd/csumonly.c
+++ b/app/test-pmd/csumonly.c
@@ -217,9 +217,9 @@ pkt_burst_checksum_forward(struct fwd_stream *fs)
uint16_t nb_rx;
uint16_t nb_tx;
uint16_t i;
- uint16_t ol_flags;
- uint16_t pkt_ol_flags;
- uint16_t tx_ol_flags;
+ uint32_t ol_flags;
+ uint32_t pkt_ol_flags;
+ uint32_t tx_ol_flags;
uint16_t l4_proto;
uint16_t eth_type;
uint8_t l2_len;
@@ -261,7 +261,7 @@ pkt_burst_checksum_forward(struct fwd_stream *fs)
mb = pkts_burst[i];
l2_len = sizeof(struct ether_hdr);
pkt_ol_flags = mb->ol_flags;
- ol_flags = (uint16_t) (pkt_ol_flags & (~PKT_TX_L4_MASK));
+ ol_flags = (pkt_ol_flags & (~PKT_TX_L4_MASK));
eth_hdr = rte_pktmbuf_mtod(mb, struct ether_hdr *);
eth_type = rte_be_to_cpu_16(eth_hdr->ether_type);
@@ -274,8 +274,8 @@ pkt_burst_checksum_forward(struct fwd_stream *fs)
}
/* Update the L3/L4 checksum error packet count */
- rx_bad_ip_csum += (uint16_t) ((pkt_ol_flags & PKT_RX_IP_CKSUM_BAD) != 0);
- rx_bad_l4_csum += (uint16_t) ((pkt_ol_flags & PKT_RX_L4_CKSUM_BAD) != 0);
+ rx_bad_ip_csum += ((pkt_ol_flags & PKT_RX_IP_CKSUM_BAD) != 0);
+ rx_bad_l4_csum += ((pkt_ol_flags & PKT_RX_L4_CKSUM_BAD) != 0);
/*
* Try to figure out L3 packet type by SW.
@@ -308,7 +308,7 @@ pkt_burst_checksum_forward(struct fwd_stream *fs)
/* Do not delete, this is required by HW*/
ipv4_hdr->hdr_checksum = 0;
- if (tx_ol_flags & 0x1) {
+ if (tx_ol_flags & PKT_TX_IP_CKSUM) {
/* HW checksum */
ol_flags |= PKT_TX_IP_CKSUM;
}
@@ -321,7 +321,7 @@ pkt_burst_checksum_forward(struct fwd_stream *fs)
if (l4_proto == IPPROTO_UDP) {
udp_hdr = (struct udp_hdr*) (rte_pktmbuf_mtod(mb,
unsigned char *) + l2_len + l3_len);
- if (tx_ol_flags & 0x2) {
+ if (tx_ol_flags & PKT_TX_UDP_CKSUM) {
/* HW Offload */
ol_flags |= PKT_TX_UDP_CKSUM;
/* Pseudo header sum need be set properly */
@@ -337,7 +337,7 @@ pkt_burst_checksum_forward(struct fwd_stream *fs)
else if (l4_proto == IPPROTO_TCP){
tcp_hdr = (struct tcp_hdr*) (rte_pktmbuf_mtod(mb,
unsigned char *) + l2_len + l3_len);
- if (tx_ol_flags & 0x4) {
+ if (tx_ol_flags & PKT_TX_TCP_CKSUM) {
ol_flags |= PKT_TX_TCP_CKSUM;
tcp_hdr->cksum = get_ipv4_psd_sum(ipv4_hdr);
}
@@ -351,7 +351,7 @@ pkt_burst_checksum_forward(struct fwd_stream *fs)
sctp_hdr = (struct sctp_hdr*) (rte_pktmbuf_mtod(mb,
unsigned char *) + l2_len + l3_len);
- if (tx_ol_flags & 0x8) {
+ if (tx_ol_flags & PKT_TX_SCTP_CKSUM) {
ol_flags |= PKT_TX_SCTP_CKSUM;
sctp_hdr->cksum = 0;
@@ -377,7 +377,7 @@ pkt_burst_checksum_forward(struct fwd_stream *fs)
if (l4_proto == IPPROTO_UDP) {
udp_hdr = (struct udp_hdr*) (rte_pktmbuf_mtod(mb,
unsigned char *) + l2_len + l3_len);
- if (tx_ol_flags & 0x2) {
+ if (tx_ol_flags & PKT_TX_UDP_CKSUM) {
/* HW Offload */
ol_flags |= PKT_TX_UDP_CKSUM;
udp_hdr->dgram_cksum = get_ipv6_psd_sum(ipv6_hdr);
@@ -393,7 +393,7 @@ pkt_burst_checksum_forward(struct fwd_stream *fs)
else if (l4_proto == IPPROTO_TCP) {
tcp_hdr = (struct tcp_hdr*) (rte_pktmbuf_mtod(mb,
unsigned char *) + l2_len + l3_len);
- if (tx_ol_flags & 0x4) {
+ if (tx_ol_flags & PKT_TX_TCP_CKSUM) {
ol_flags |= PKT_TX_TCP_CKSUM;
tcp_hdr->cksum = get_ipv6_psd_sum(ipv6_hdr);
}
@@ -407,7 +407,7 @@ pkt_burst_checksum_forward(struct fwd_stream *fs)
sctp_hdr = (struct sctp_hdr*) (rte_pktmbuf_mtod(mb,
unsigned char *) + l2_len + l3_len);
- if (tx_ol_flags & 0x8) {
+ if (tx_ol_flags & PKT_TX_SCTP_CKSUM) {
ol_flags |= PKT_TX_SCTP_CKSUM;
sctp_hdr->cksum = 0;
/* Sanity check: HW supports only lengths that are a multiple of 4 bytes */
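Since the L4 checksum request is now a 2-bit field inside PKT_TX_L4_MASK rather
than a set of independent bits (see the rte_mbuf.h hunk below), drivers match it
by equality against the mask. A minimal sketch of the canonical test; the
offload_*_cksum() helpers are hypothetical:

  /* sketch: dispatch on the 2-bit L4 checksum request; compare for
   * equality against the mask, do not bit-test */
  static void
  dispatch_l4_cksum(struct rte_mbuf *mb)
  {
          uint32_t l4_req = mb->ol_flags & PKT_TX_L4_MASK;

          if (l4_req == PKT_TX_UDP_CKSUM)        /* both bits set */
                  offload_udp_cksum(mb);         /* hypothetical helper */
          else if (l4_req == PKT_TX_TCP_CKSUM)
                  offload_tcp_cksum(mb);         /* hypothetical helper */
          else if (l4_req == PKT_TX_SCTP_CKSUM)
                  offload_sctp_cksum(mb);        /* hypothetical helper */
  }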
diff --git a/app/test-pmd/rxonly.c b/app/test-pmd/rxonly.c
index 94f71c7..0bf4440 100644
--- a/app/test-pmd/rxonly.c
+++ b/app/test-pmd/rxonly.c
@@ -92,7 +92,7 @@ pkt_burst_receive(struct fwd_stream *fs)
struct rte_mbuf *mb;
struct ether_hdr *eth_hdr;
uint16_t eth_type;
- uint16_t ol_flags;
+ uint32_t ol_flags;
uint16_t nb_rx;
uint16_t i;
#ifdef RTE_TEST_PMD_RECORD_CORE_CYCLES
@@ -152,7 +152,7 @@ pkt_burst_receive(struct fwd_stream *fs)
mb->vlan_macip.f.vlan_tci);
printf("\n");
if (ol_flags != 0) {
- uint16_t rxf;
+ uint32_t rxf;
const char *name;
for (rxf = 0; rxf < sizeof(mb->ol_flags) * 8; rxf++) {
diff --git a/app/test-pmd/testpmd.h b/app/test-pmd/testpmd.h
index c80ea09..68eccfa 100644
--- a/app/test-pmd/testpmd.h
+++ b/app/test-pmd/testpmd.h
@@ -123,12 +123,7 @@ struct fwd_stream {
/**
* The data structure associated with each port.
- * tx_ol_flags is slightly different from ol_flags of rte_mbuf.
- * Bit 0: Insert IP checksum
- * Bit 1: Insert UDP checksum
- * Bit 2: Insert TCP checksum
- * Bit 3: Insert SCTP checksum
- * Bit 11: Insert VLAN Label
+ * tx_ol_flags uses the same flags as ol_flags of rte_mbuf.
*/
struct rte_port {
struct rte_eth_dev_info dev_info; /**< PCI info + driver name */
@@ -139,7 +134,7 @@ struct rte_port {
struct fwd_stream *rx_stream; /**< Port RX stream, if unique */
struct fwd_stream *tx_stream; /**< Port TX stream, if unique */
unsigned int socket_id; /**< For NUMA support */
- uint16_t tx_ol_flags;/**< Offload Flags of TX packets. */
+ uint32_t tx_ol_flags;/**< Offload Flags of TX packets. */
uint16_t tx_vlan_id; /**< Tag Id. in TX VLAN packets. */
void *fwd_ctx; /**< Forwarding mode context */
uint64_t rx_bad_ip_csum; /**< rx pkts with bad ip checksum */
@@ -490,7 +485,7 @@ void tx_vlan_reset(portid_t port_id);
void set_qmap(portid_t port_id, uint8_t is_rx, uint16_t queue_id, uint8_t map_value);
-void tx_cksum_set(portid_t port_id, uint8_t cksum_mask);
+void tx_cksum_set(portid_t port_id, uint32_t ol_flags);
void set_verbose_level(uint16_t vb_level);
void set_tx_pkt_segments(unsigned *seg_lengths, unsigned nb_segs);
diff --git a/app/test-pmd/txonly.c b/app/test-pmd/txonly.c
index 9d6cffb..b2d8dbd 100644
--- a/app/test-pmd/txonly.c
+++ b/app/test-pmd/txonly.c
@@ -203,7 +203,7 @@ pkt_burst_transmit(struct fwd_stream *fs)
uint16_t nb_tx;
uint16_t nb_pkt;
uint16_t vlan_tci;
- uint16_t ol_flags;
+ uint32_t ol_flags;
uint8_t i;
#ifdef RTE_TEST_PMD_RECORD_CORE_CYCLES
uint64_t start_tsc;
diff --git a/lib/librte_eal/bsdapp/eal/include/exec-env/rte_kni_common.h b/lib/librte_eal/bsdapp/eal/include/exec-env/rte_kni_common.h
index e13afb2..66a32fe 100755
--- a/lib/librte_eal/bsdapp/eal/include/exec-env/rte_kni_common.h
+++ b/lib/librte_eal/bsdapp/eal/include/exec-env/rte_kni_common.h
@@ -111,7 +111,7 @@ struct rte_kni_mbuf {
void *pool;
void *buf_addr;
char pad0[14];
- uint16_t ol_flags; /**< Offload features. */
+ uint32_t ol_flags; /**< Offload features. */
void *next;
void *data; /**< Start address of data in segment buffer. */
uint16_t data_len; /**< Amount of data in segment buffer. */
diff --git a/lib/librte_eal/linuxapp/eal/include/exec-env/rte_kni_common.h b/lib/librte_eal/linuxapp/eal/include/exec-env/rte_kni_common.h
index e13afb2..66a32fe 100755
--- a/lib/librte_eal/linuxapp/eal/include/exec-env/rte_kni_common.h
+++ b/lib/librte_eal/linuxapp/eal/include/exec-env/rte_kni_common.h
@@ -111,7 +111,7 @@ struct rte_kni_mbuf {
void *pool;
void *buf_addr;
char pad0[14];
- uint16_t ol_flags; /**< Offload features. */
+ uint32_t ol_flags; /**< Offload features. */
void *next;
void *data; /**< Start address of data in segment buffer. */
uint16_t data_len; /**< Amount of data in segment buffer. */
diff --git a/lib/librte_mbuf/rte_mbuf.c b/lib/librte_mbuf/rte_mbuf.c
index e791aa6..5519c76 100644
--- a/lib/librte_mbuf/rte_mbuf.c
+++ b/lib/librte_mbuf/rte_mbuf.c
@@ -161,7 +161,7 @@ rte_pktmbuf_dump(FILE *f, const struct rte_mbuf *m, unsigned dump_len)
fprintf(f, "dump mbuf at 0x%p, phys=%"PRIx64", buf_len=%u\n",
m, (uint64_t)m->buf_physaddr, (unsigned)m->buf_len);
- fprintf(f, " pkt_len=%"PRIu32", ol_flags=%"PRIx16", nb_segs=%u, "
+ fprintf(f, " pkt_len=%"PRIu32", ol_flags=%"PRIx32", nb_segs=%u, "
"in_port=%u\n", m->pkt_len, m->ol_flags,
(unsigned)m->nb_segs, (unsigned)m->in_port);
nb_segs = m->nb_segs;
diff --git a/lib/librte_mbuf/rte_mbuf.h b/lib/librte_mbuf/rte_mbuf.h
index 7556a86..540a62c 100644
--- a/lib/librte_mbuf/rte_mbuf.h
+++ b/lib/librte_mbuf/rte_mbuf.h
@@ -69,34 +69,33 @@ extern "C" {
* Packet Offload Features Flags. They also carry packet type information.
* Critical resource: both RX and TX share these bits. Be cautious with any change
*/
-#define PKT_RX_VLAN_PKT 0x0001 /**< RX packet is a 802.1q VLAN packet. */
-#define PKT_RX_RSS_HASH 0x0002 /**< RX packet with RSS hash result. */
-#define PKT_RX_FDIR 0x0004 /**< RX packet with FDIR infos. */
-#define PKT_RX_L4_CKSUM_BAD 0x0008 /**< L4 cksum of RX pkt. is not OK. */
-#define PKT_RX_IP_CKSUM_BAD 0x0010 /**< IP cksum of RX pkt. is not OK. */
-#define PKT_RX_IPV4_HDR 0x0020 /**< RX packet with IPv4 header. */
-#define PKT_RX_IPV4_HDR_EXT 0x0040 /**< RX packet with extended IPv4 header. */
-#define PKT_RX_IPV6_HDR 0x0080 /**< RX packet with IPv6 header. */
-#define PKT_RX_IPV6_HDR_EXT 0x0100 /**< RX packet with extended IPv6 header. */
-#define PKT_RX_IEEE1588_PTP 0x0200 /**< RX IEEE1588 L2 Ethernet PT Packet. */
-#define PKT_RX_IEEE1588_TMST 0x0400 /**< RX IEEE1588 L2/L4 timestamped packet.*/
-
-#define PKT_TX_VLAN_PKT 0x0800 /**< TX packet is a 802.1q VLAN packet. */
-#define PKT_TX_IP_CKSUM 0x1000 /**< IP cksum of TX pkt. computed by NIC. */
+#define PKT_RX_VLAN_PKT 0x00000001 /**< RX packet is a 802.1q VLAN packet. */
+#define PKT_RX_RSS_HASH 0x00000002 /**< RX packet with RSS hash result. */
+#define PKT_RX_FDIR 0x00000004 /**< RX packet with FDIR infos. */
+#define PKT_RX_L4_CKSUM_BAD 0x00000008 /**< L4 cksum of RX pkt. is not OK. */
+#define PKT_RX_IP_CKSUM_BAD 0x00000010 /**< IP cksum of RX pkt. is not OK. */
+#define PKT_RX_IPV4_HDR 0x00000020 /**< RX packet with IPv4 header. */
+#define PKT_RX_IPV4_HDR_EXT 0x00000040 /**< RX packet with extended IPv4 header. */
+#define PKT_RX_IPV6_HDR 0x00000080 /**< RX packet with IPv6 header. */
+#define PKT_RX_IPV6_HDR_EXT 0x00000100 /**< RX packet with extended IPv6 header. */
+#define PKT_RX_IEEE1588_PTP 0x00000200 /**< RX IEEE1588 L2 Ethernet PT Packet. */
+#define PKT_RX_IEEE1588_TMST 0x00000400 /**< RX IEEE1588 L2/L4 timestamped packet.*/
+
+#define PKT_TX_VLAN_PKT 0x00010000 /**< TX packet is a 802.1q VLAN packet. */
+#define PKT_TX_IP_CKSUM 0x00020000 /**< IP cksum of TX pkt. computed by NIC. */
/*
- * Bit 14~13 used for L4 packet type with checksum enabled.
+ * Bits used for L4 packet type with checksum enabled.
* 00: Reserved
* 01: TCP checksum
* 10: SCTP checksum
* 11: UDP checksum
*/
-#define PKT_TX_L4_MASK 0x6000 /**< Mask bits for L4 checksum offload request. */
-#define PKT_TX_L4_NO_CKSUM 0x0000 /**< Disable L4 cksum of TX pkt. */
-#define PKT_TX_TCP_CKSUM 0x2000 /**< TCP cksum of TX pkt. computed by NIC. */
-#define PKT_TX_SCTP_CKSUM 0x4000 /**< SCTP cksum of TX pkt. computed by NIC. */
-#define PKT_TX_UDP_CKSUM 0x6000 /**< UDP cksum of TX pkt. computed by NIC. */
-/* Bit 15 */
-#define PKT_TX_IEEE1588_TMST 0x8000 /**< TX IEEE1588 packet to timestamp. */
+#define PKT_TX_L4_MASK 0x000C0000 /**< Mask bits for L4 checksum offload request. */
+#define PKT_TX_L4_NO_CKSUM 0x00000000 /**< Disable L4 cksum of TX pkt. */
+#define PKT_TX_TCP_CKSUM 0x00040000 /**< TCP cksum of TX pkt. computed by NIC. */
+#define PKT_TX_SCTP_CKSUM 0x00080000 /**< SCTP cksum of TX pkt. computed by NIC. */
+#define PKT_TX_UDP_CKSUM 0x000C0000 /**< UDP cksum of TX pkt. computed by NIC. */
+#define PKT_TX_IEEE1588_TMST 0x00100000 /**< TX IEEE1588 packet to timestamp. */
/**
* Get the name of a RX offload flag
@@ -108,7 +107,7 @@ extern "C" {
* @return
* The name of this flag, or NULL if it's not a valid RX flag.
*/
-static inline const char *rte_get_rx_ol_flag_name(uint16_t mask)
+static inline const char *rte_get_rx_ol_flag_name(uint32_t mask)
{
switch (mask) {
case PKT_RX_VLAN_PKT: return "PKT_RX_VLAN_PKT";
@@ -136,7 +135,7 @@ static inline const char *rte_get_rx_ol_flag_name(uint16_t mask)
* @return
* The name of this flag, or NULL if it's not a valid TX flag.
*/
-static inline const char *rte_get_tx_ol_flag_name(uint16_t mask)
+static inline const char *rte_get_tx_ol_flag_name(uint32_t mask)
{
switch (mask) {
case PKT_TX_VLAN_PKT: return "PKT_TX_VLAN_PKT";
@@ -205,8 +204,7 @@ struct rte_mbuf {
/* these fields are valid for first segment only */
uint8_t nb_segs; /**< Number of segments. */
uint8_t in_port; /**< Input port. */
- uint16_t ol_flags; /**< Offload features. */
- uint16_t reserved; /**< Unused field. Required for padding. */
+ uint32_t ol_flags; /**< Offload features. */
/* offload features, valid for first segment only */
union rte_vlan_macip vlan_macip;
@@ -218,7 +216,7 @@ struct rte_mbuf {
} fdir; /**< Filter identifier if FDIR enabled */
uint32_t sched; /**< Hierarchical scheduler */
} hash; /**< hash information */
- uint64_t reserved2; /**< Unused field. Required for padding. */
+ uint64_t reserved; /**< Unused field. Required for padding. */
} __rte_cache_aligned;
/**
diff --git a/lib/librte_pmd_e1000/em_rxtx.c b/lib/librte_pmd_e1000/em_rxtx.c
index 8c302b6..1a34340 100644
--- a/lib/librte_pmd_e1000/em_rxtx.c
+++ b/lib/librte_pmd_e1000/em_rxtx.c
@@ -147,7 +147,7 @@ enum {
* Structure to check whether a new context needs to be built
*/
struct em_ctx_info {
- uint16_t flags; /**< ol_flags related to context build. */
+ uint32_t flags; /**< ol_flags related to context build. */
uint32_t cmp_mask; /**< compare mask */
union rte_vlan_macip hdrlen; /**< L2 and L3 header lengths */
};
@@ -217,7 +217,7 @@ struct em_tx_queue {
static inline void
em_set_xmit_ctx(struct em_tx_queue* txq,
volatile struct e1000_context_desc *ctx_txd,
- uint16_t flags,
+ uint32_t flags,
union rte_vlan_macip hdrlen)
{
uint32_t cmp_mask, cmd_len;
@@ -283,7 +283,7 @@ em_set_xmit_ctx(struct em_tx_queue* txq,
* or create a new context descriptor.
*/
static inline uint32_t
-what_ctx_update(struct em_tx_queue *txq, uint16_t flags,
+what_ctx_update(struct em_tx_queue *txq, uint32_t flags,
union rte_vlan_macip hdrlen)
{
/* If match with the current context */
@@ -356,7 +356,7 @@ em_xmit_cleanup(struct em_tx_queue *txq)
}
static inline uint32_t
-tx_desc_cksum_flags_to_upper(uint16_t ol_flags)
+tx_desc_cksum_flags_to_upper(uint32_t ol_flags)
{
static const uint32_t l4_olinfo[2] = {0, E1000_TXD_POPTS_TXSM << 8};
static const uint32_t l3_olinfo[2] = {0, E1000_TXD_POPTS_IXSM << 8};
@@ -382,12 +382,12 @@ eth_em_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
uint32_t popts_spec;
uint32_t cmd_type_len;
uint16_t slen;
- uint16_t ol_flags;
+ uint32_t ol_flags;
uint16_t tx_id;
uint16_t tx_last;
uint16_t nb_tx;
uint16_t nb_used;
- uint16_t tx_ol_req;
+ uint32_t tx_ol_req;
uint32_t ctx;
uint32_t new_ctx;
union rte_vlan_macip hdrlen;
@@ -417,8 +417,7 @@ eth_em_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
ol_flags = tx_pkt->ol_flags;
/* If hardware offload required */
- tx_ol_req = (uint16_t)(ol_flags & (PKT_TX_IP_CKSUM |
- PKT_TX_L4_MASK));
+ tx_ol_req = (ol_flags & (PKT_TX_IP_CKSUM | PKT_TX_L4_MASK));
if (tx_ol_req) {
hdrlen = tx_pkt->vlan_macip;
/* Build a new context or reuse the existing one. */
@@ -620,22 +619,22 @@ end_of_tx:
*
**********************************************************************/
-static inline uint16_t
+static inline uint32_t
rx_desc_status_to_pkt_flags(uint32_t rx_status)
{
- uint16_t pkt_flags;
+ uint32_t pkt_flags;
/* Check if VLAN present */
- pkt_flags = (uint16_t)((rx_status & E1000_RXD_STAT_VP) ?
- PKT_RX_VLAN_PKT : 0);
+ pkt_flags = ((rx_status & E1000_RXD_STAT_VP) ?
+ PKT_RX_VLAN_PKT : 0);
return pkt_flags;
}
-static inline uint16_t
+static inline uint32_t
rx_desc_error_to_pkt_flags(uint32_t rx_error)
{
- uint16_t pkt_flags = 0;
+ uint32_t pkt_flags = 0;
if (rx_error & E1000_RXD_ERR_IPE)
pkt_flags |= PKT_RX_IP_CKSUM_BAD;
@@ -779,8 +778,8 @@ eth_em_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
rxm->in_port = rxq->port_id;
rxm->ol_flags = rx_desc_status_to_pkt_flags(status);
- rxm->ol_flags = (uint16_t)(rxm->ol_flags |
- rx_desc_error_to_pkt_flags(rxd.errors));
+ rxm->ol_flags = rxm->ol_flags |
+ rx_desc_error_to_pkt_flags(rxd.errors);
/* Only valid if PKT_RX_VLAN_PKT set in pkt_flags */
rxm->vlan_macip.f.vlan_tci = rte_le_to_cpu_16(rxd.special);
@@ -1005,8 +1004,8 @@ eth_em_recv_scattered_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
first_seg->in_port = rxq->port_id;
first_seg->ol_flags = rx_desc_status_to_pkt_flags(status);
- first_seg->ol_flags = (uint16_t)(first_seg->ol_flags |
- rx_desc_error_to_pkt_flags(rxd.errors));
+ first_seg->ol_flags = first_seg->ol_flags |
+ rx_desc_error_to_pkt_flags(rxd.errors);
/* Only valid if PKT_RX_VLAN_PKT set in pkt_flags */
rxm->vlan_macip.f.vlan_tci = rte_le_to_cpu_16(rxd.special);
diff --git a/lib/librte_pmd_e1000/igb_rxtx.c b/lib/librte_pmd_e1000/igb_rxtx.c
index ab0ff01..322dfa0 100644
--- a/lib/librte_pmd_e1000/igb_rxtx.c
+++ b/lib/librte_pmd_e1000/igb_rxtx.c
@@ -144,7 +144,7 @@ enum igb_advctx_num {
* Structure to check whether a new context needs to be built
*/
struct igb_advctx_info {
- uint16_t flags; /**< ol_flags related to context build. */
+ uint32_t flags; /**< ol_flags related to context build. */
uint32_t cmp_mask; /**< compare mask for vlan_macip_lens */
union rte_vlan_macip vlan_macip_lens; /**< vlan, mac & ip length. */
};
@@ -212,7 +212,7 @@ struct igb_tx_queue {
static inline void
igbe_set_xmit_ctx(struct igb_tx_queue* txq,
volatile struct e1000_adv_tx_context_desc *ctx_txd,
- uint16_t ol_flags, uint32_t vlan_macip_lens)
+ uint32_t ol_flags, uint32_t vlan_macip_lens)
{
uint32_t type_tucmd_mlhl;
uint32_t mss_l4len_idx;
@@ -277,7 +277,7 @@ igbe_set_xmit_ctx(struct igb_tx_queue* txq,
* or create a new context descriptor.
*/
static inline uint32_t
-what_advctx_update(struct igb_tx_queue *txq, uint16_t flags,
+what_advctx_update(struct igb_tx_queue *txq, uint32_t flags,
uint32_t vlan_macip_lens)
{
/* If match with the current context */
@@ -300,7 +300,7 @@ what_advctx_update(struct igb_tx_queue *txq, uint16_t flags,
}
static inline uint32_t
-tx_desc_cksum_flags_to_olinfo(uint16_t ol_flags)
+tx_desc_cksum_flags_to_olinfo(uint32_t ol_flags)
{
static const uint32_t l4_olinfo[2] = {0, E1000_ADVTXD_POPTS_TXSM};
static const uint32_t l3_olinfo[2] = {0, E1000_ADVTXD_POPTS_IXSM};
@@ -312,7 +312,7 @@ tx_desc_cksum_flags_to_olinfo(uint16_t ol_flags)
}
static inline uint32_t
-tx_desc_vlan_flags_to_cmdtype(uint16_t ol_flags)
+tx_desc_vlan_flags_to_cmdtype(uint32_t ol_flags)
{
static uint32_t vlan_cmd[2] = {0, E1000_ADVTXD_DCMD_VLE};
return vlan_cmd[(ol_flags & PKT_TX_VLAN_PKT) != 0];
@@ -334,12 +334,12 @@ eth_igb_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
uint32_t cmd_type_len;
uint32_t pkt_len;
uint16_t slen;
- uint16_t ol_flags;
+ uint32_t ol_flags;
uint16_t tx_end;
uint16_t tx_id;
uint16_t tx_last;
uint16_t nb_tx;
- uint16_t tx_ol_req;
+ uint32_t tx_ol_req;
uint32_t new_ctx = 0;
uint32_t ctx = 0;
uint32_t vlan_macip_lens;
@@ -368,7 +368,8 @@ eth_igb_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
ol_flags = tx_pkt->ol_flags;
vlan_macip_lens = tx_pkt->vlan_macip.data;
- tx_ol_req = (uint16_t)(ol_flags & PKT_TX_OFFLOAD_MASK);
+ tx_ol_req = ol_flags &
+ (PKT_TX_VLAN_PKT | PKT_TX_IP_CKSUM | PKT_TX_L4_MASK);
/* If a context descriptor needs to be built. */
if (tx_ol_req) {
@@ -555,12 +556,12 @@ eth_igb_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
* RX functions
*
**********************************************************************/
-static inline uint16_t
+static inline uint32_t
rx_desc_hlen_type_rss_to_pkt_flags(uint32_t hl_tp_rs)
{
- uint16_t pkt_flags;
+ uint32_t pkt_flags;
- static uint16_t ip_pkt_types_map[16] = {
+ static uint32_t ip_pkt_types_map[16] = {
0, PKT_RX_IPV4_HDR, PKT_RX_IPV4_HDR_EXT, PKT_RX_IPV4_HDR_EXT,
PKT_RX_IPV6_HDR, 0, 0, 0,
PKT_RX_IPV6_HDR_EXT, 0, 0, 0,
@@ -573,34 +574,34 @@ rx_desc_hlen_type_rss_to_pkt_flags(uint32_t hl_tp_rs)
0, 0, 0, 0,
};
- pkt_flags = (uint16_t)((hl_tp_rs & E1000_RXDADV_PKTTYPE_ETQF) ?
- ip_pkt_etqf_map[(hl_tp_rs >> 4) & 0x07] :
- ip_pkt_types_map[(hl_tp_rs >> 4) & 0x0F]);
+ pkt_flags = ((hl_tp_rs & E1000_RXDADV_PKTTYPE_ETQF) ?
+ ip_pkt_etqf_map[(hl_tp_rs >> 4) & 0x07] :
+ ip_pkt_types_map[(hl_tp_rs >> 4) & 0x0F]);
#else
- pkt_flags = (uint16_t)((hl_tp_rs & E1000_RXDADV_PKTTYPE_ETQF) ? 0 :
- ip_pkt_types_map[(hl_tp_rs >> 4) & 0x0F]);
+ pkt_flags = ((hl_tp_rs & E1000_RXDADV_PKTTYPE_ETQF) ? 0 :
+ ip_pkt_types_map[(hl_tp_rs >> 4) & 0x0F]);
#endif
- return (uint16_t)(pkt_flags | (((hl_tp_rs & 0x0F) == 0) ?
- 0 : PKT_RX_RSS_HASH));
+ return (pkt_flags | (((hl_tp_rs & 0x0F) == 0) ?
+ 0 : PKT_RX_RSS_HASH));
}
-static inline uint16_t
+static inline uint32_t
rx_desc_status_to_pkt_flags(uint32_t rx_status)
{
- uint16_t pkt_flags;
+ uint32_t pkt_flags;
/* Check if VLAN present */
- pkt_flags = (uint16_t)((rx_status & E1000_RXD_STAT_VP) ?
- PKT_RX_VLAN_PKT : 0);
+ pkt_flags = ((rx_status & E1000_RXD_STAT_VP) ?
+ PKT_RX_VLAN_PKT : 0);
#if defined(RTE_LIBRTE_IEEE1588)
if (rx_status & E1000_RXD_STAT_TMST)
- pkt_flags = (uint16_t)(pkt_flags | PKT_RX_IEEE1588_TMST);
+ pkt_flags = (pkt_flags | PKT_RX_IEEE1588_TMST);
#endif
return pkt_flags;
}
-static inline uint16_t
+static inline uint32_t
rx_desc_error_to_pkt_flags(uint32_t rx_status)
{
/*
@@ -608,7 +609,7 @@ rx_desc_error_to_pkt_flags(uint32_t rx_status)
* Bit 29: L4I, L4I integrity error
*/
- static uint16_t error_to_pkt_flags_map[4] = {
+ static uint32_t error_to_pkt_flags_map[4] = {
0, PKT_RX_L4_CKSUM_BAD, PKT_RX_IP_CKSUM_BAD,
PKT_RX_IP_CKSUM_BAD | PKT_RX_L4_CKSUM_BAD
};
@@ -635,7 +636,7 @@ eth_igb_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
uint16_t rx_id;
uint16_t nb_rx;
uint16_t nb_hold;
- uint16_t pkt_flags;
+ uint32_t pkt_flags;
nb_rx = 0;
nb_hold = 0;
@@ -755,10 +756,10 @@ eth_igb_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
rte_le_to_cpu_16(rxd.wb.upper.vlan);
pkt_flags = rx_desc_hlen_type_rss_to_pkt_flags(hlen_type_rss);
- pkt_flags = (uint16_t)(pkt_flags |
- rx_desc_status_to_pkt_flags(staterr));
- pkt_flags = (uint16_t)(pkt_flags |
- rx_desc_error_to_pkt_flags(staterr));
+ pkt_flags = pkt_flags |
+ rx_desc_status_to_pkt_flags(staterr);
+ pkt_flags = pkt_flags |
+ rx_desc_error_to_pkt_flags(staterr);
rxm->ol_flags = pkt_flags;
/*
@@ -815,7 +816,7 @@ eth_igb_recv_scattered_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
uint16_t nb_rx;
uint16_t nb_hold;
uint16_t data_len;
- uint16_t pkt_flags;
+ uint32_t pkt_flags;
nb_rx = 0;
nb_hold = 0;
@@ -992,10 +993,10 @@ eth_igb_recv_scattered_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
rte_le_to_cpu_16(rxd.wb.upper.vlan);
hlen_type_rss = rte_le_to_cpu_32(rxd.wb.lower.lo_dword.data);
pkt_flags = rx_desc_hlen_type_rss_to_pkt_flags(hlen_type_rss);
- pkt_flags = (uint16_t)(pkt_flags |
- rx_desc_status_to_pkt_flags(staterr));
- pkt_flags = (uint16_t)(pkt_flags |
- rx_desc_error_to_pkt_flags(staterr));
+ pkt_flags = pkt_flags |
+ rx_desc_status_to_pkt_flags(staterr);
+ pkt_flags = pkt_flags |
+ rx_desc_error_to_pkt_flags(staterr);
first_seg->ol_flags = pkt_flags;
/* Prefetch data of first segment, if configured to do so. */
diff --git a/lib/librte_pmd_ixgbe/ixgbe_rxtx.c b/lib/librte_pmd_ixgbe/ixgbe_rxtx.c
index acde64b..0ff1a07 100644
--- a/lib/librte_pmd_ixgbe/ixgbe_rxtx.c
+++ b/lib/librte_pmd_ixgbe/ixgbe_rxtx.c
@@ -350,7 +350,7 @@ ixgbe_xmit_pkts_simple(void *tx_queue, struct rte_mbuf **tx_pkts,
static inline void
ixgbe_set_xmit_ctx(struct igb_tx_queue* txq,
volatile struct ixgbe_adv_tx_context_desc *ctx_txd,
- uint16_t ol_flags, uint32_t vlan_macip_lens)
+ uint32_t ol_flags, uint32_t vlan_macip_lens)
{
uint32_t type_tucmd_mlhl;
uint32_t mss_l4len_idx;
@@ -413,7 +413,7 @@ ixgbe_set_xmit_ctx(struct igb_tx_queue* txq,
* or create a new context descriptor.
*/
static inline uint32_t
-what_advctx_update(struct igb_tx_queue *txq, uint16_t flags,
+what_advctx_update(struct igb_tx_queue *txq, uint32_t flags,
uint32_t vlan_macip_lens)
{
/* If match with the current used context */
@@ -436,7 +436,7 @@ what_advctx_update(struct igb_tx_queue *txq, uint16_t flags,
}
static inline uint32_t
-tx_desc_cksum_flags_to_olinfo(uint16_t ol_flags)
+tx_desc_cksum_flags_to_olinfo(uint32_t ol_flags)
{
static const uint32_t l4_olinfo[2] = {0, IXGBE_ADVTXD_POPTS_TXSM};
static const uint32_t l3_olinfo[2] = {0, IXGBE_ADVTXD_POPTS_IXSM};
@@ -448,7 +448,7 @@ tx_desc_cksum_flags_to_olinfo(uint16_t ol_flags)
}
static inline uint32_t
-tx_desc_vlan_flags_to_cmdtype(uint16_t ol_flags)
+tx_desc_vlan_flags_to_cmdtype(uint32_t ol_flags)
{
static const uint32_t vlan_cmd[2] = {0, IXGBE_ADVTXD_DCMD_VLE};
return vlan_cmd[(ol_flags & PKT_TX_VLAN_PKT) != 0];
@@ -537,12 +537,12 @@ ixgbe_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
uint32_t cmd_type_len;
uint32_t pkt_len;
uint16_t slen;
- uint16_t ol_flags;
+ uint32_t ol_flags;
uint16_t tx_id;
uint16_t tx_last;
uint16_t nb_tx;
uint16_t nb_used;
- uint16_t tx_ol_req;
+ uint32_t tx_ol_req;
uint32_t vlan_macip_lens;
uint32_t ctx = 0;
uint32_t new_ctx;
@@ -574,7 +574,8 @@ ixgbe_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
vlan_macip_lens = tx_pkt->vlan_macip.data;
/* If hardware offload required */
- tx_ol_req = (uint16_t)(ol_flags & PKT_TX_OFFLOAD_MASK);
+ tx_ol_req = ol_flags &
+ (PKT_TX_VLAN_PKT | PKT_TX_IP_CKSUM | PKT_TX_L4_MASK);
if (tx_ol_req) {
/* Build a new context or reuse the existing one. */
ctx = what_advctx_update(txq, tx_ol_req,
@@ -804,19 +805,19 @@ end_of_tx:
* RX functions
*
**********************************************************************/
-static inline uint16_t
+static inline uint32_t
rx_desc_hlen_type_rss_to_pkt_flags(uint32_t hl_tp_rs)
{
- uint16_t pkt_flags;
+ uint32_t pkt_flags;
- static uint16_t ip_pkt_types_map[16] = {
+ static uint32_t ip_pkt_types_map[16] = {
0, PKT_RX_IPV4_HDR, PKT_RX_IPV4_HDR_EXT, PKT_RX_IPV4_HDR_EXT,
PKT_RX_IPV6_HDR, 0, 0, 0,
PKT_RX_IPV6_HDR_EXT, 0, 0, 0,
PKT_RX_IPV6_HDR_EXT, 0, 0, 0,
};
- static uint16_t ip_rss_types_map[16] = {
+ static uint32_t ip_rss_types_map[16] = {
0, PKT_RX_RSS_HASH, PKT_RX_RSS_HASH, PKT_RX_RSS_HASH,
0, PKT_RX_RSS_HASH, 0, PKT_RX_RSS_HASH,
PKT_RX_RSS_HASH, 0, 0, 0,
@@ -829,45 +830,45 @@ rx_desc_hlen_type_rss_to_pkt_flags(uint32_t hl_tp_rs)
0, 0, 0, 0,
};
- pkt_flags = (uint16_t) ((hl_tp_rs & IXGBE_RXDADV_PKTTYPE_ETQF) ?
- ip_pkt_etqf_map[(hl_tp_rs >> 4) & 0x07] :
- ip_pkt_types_map[(hl_tp_rs >> 4) & 0x0F]);
+ pkt_flags = ((hl_tp_rs & IXGBE_RXDADV_PKTTYPE_ETQF) ?
+ ip_pkt_etqf_map[(hl_tp_rs >> 4) & 0x07] :
+ ip_pkt_types_map[(hl_tp_rs >> 4) & 0x0F]);
#else
- pkt_flags = (uint16_t) ((hl_tp_rs & IXGBE_RXDADV_PKTTYPE_ETQF) ? 0 :
- ip_pkt_types_map[(hl_tp_rs >> 4) & 0x0F]);
+ pkt_flags = ((hl_tp_rs & IXGBE_RXDADV_PKTTYPE_ETQF) ? 0 :
+ ip_pkt_types_map[(hl_tp_rs >> 4) & 0x0F]);
#endif
- return (uint16_t)(pkt_flags | ip_rss_types_map[hl_tp_rs & 0xF]);
+ return (pkt_flags | ip_rss_types_map[hl_tp_rs & 0xF]);
}
-static inline uint16_t
+static inline uint32_t
rx_desc_status_to_pkt_flags(uint32_t rx_status)
{
- uint16_t pkt_flags;
+ uint32_t pkt_flags;
/*
* Check only whether a VLAN is present.
* Do not check whether the L3/L4 RX checksum was computed by the NIC;
* that is indicated by the rte_eth_rxmode.hw_ip_checksum flag
*/
- pkt_flags = (uint16_t)((rx_status & IXGBE_RXD_STAT_VP) ?
- PKT_RX_VLAN_PKT : 0);
+ pkt_flags = ((rx_status & IXGBE_RXD_STAT_VP) ?
+ PKT_RX_VLAN_PKT : 0);
#ifdef RTE_LIBRTE_IEEE1588
if (rx_status & IXGBE_RXD_STAT_TMST)
- pkt_flags = (uint16_t)(pkt_flags | PKT_RX_IEEE1588_TMST);
+ pkt_flags = pkt_flags | PKT_RX_IEEE1588_TMST;
#endif
return pkt_flags;
}
-static inline uint16_t
+static inline uint32_t
rx_desc_error_to_pkt_flags(uint32_t rx_status)
{
/*
* Bit 31: IPE, IPv4 checksum error
* Bit 30: L4I, L4I integrity error
*/
- static uint16_t error_to_pkt_flags_map[4] = {
+ static uint32_t error_to_pkt_flags_map[4] = {
0, PKT_RX_L4_CKSUM_BAD, PKT_RX_IP_CKSUM_BAD,
PKT_RX_IP_CKSUM_BAD | PKT_RX_L4_CKSUM_BAD
};
@@ -938,10 +939,10 @@ ixgbe_rx_scan_hw_ring(struct igb_rx_queue *rxq)
mb->ol_flags = rx_desc_hlen_type_rss_to_pkt_flags(
rxdp[j].wb.lower.lo_dword.data);
/* reuse status field from scan list */
- mb->ol_flags = (uint16_t)(mb->ol_flags |
- rx_desc_status_to_pkt_flags(s[j]));
- mb->ol_flags = (uint16_t)(mb->ol_flags |
- rx_desc_error_to_pkt_flags(s[j]));
+ mb->ol_flags = mb->ol_flags |
+ rx_desc_status_to_pkt_flags(s[j]);
+ mb->ol_flags = mb->ol_flags |
+ rx_desc_error_to_pkt_flags(s[j]);
}
/* Move mbuf pointers from the S/W ring to the stage */
@@ -1134,7 +1135,7 @@ ixgbe_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
uint16_t rx_id;
uint16_t nb_rx;
uint16_t nb_hold;
- uint16_t pkt_flags;
+ uint32_t pkt_flags;
nb_rx = 0;
nb_hold = 0;
@@ -1253,10 +1254,10 @@ ixgbe_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
rte_le_to_cpu_16(rxd.wb.upper.vlan);
pkt_flags = rx_desc_hlen_type_rss_to_pkt_flags(hlen_type_rss);
- pkt_flags = (uint16_t)(pkt_flags |
- rx_desc_status_to_pkt_flags(staterr));
- pkt_flags = (uint16_t)(pkt_flags |
- rx_desc_error_to_pkt_flags(staterr));
+ pkt_flags = pkt_flags |
+ rx_desc_status_to_pkt_flags(staterr);
+ pkt_flags = pkt_flags |
+ rx_desc_error_to_pkt_flags(staterr);
rxm->ol_flags = pkt_flags;
if (likely(pkt_flags & PKT_RX_RSS_HASH))
@@ -1321,7 +1322,7 @@ ixgbe_recv_scattered_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
uint16_t nb_rx;
uint16_t nb_hold;
uint16_t data_len;
- uint16_t pkt_flags;
+ uint32_t pkt_flags;
nb_rx = 0;
nb_hold = 0;
@@ -1498,10 +1499,10 @@ ixgbe_recv_scattered_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
rte_le_to_cpu_16(rxd.wb.upper.vlan);
hlen_type_rss = rte_le_to_cpu_32(rxd.wb.lower.lo_dword.data);
pkt_flags = rx_desc_hlen_type_rss_to_pkt_flags(hlen_type_rss);
- pkt_flags = (uint16_t)(pkt_flags |
- rx_desc_status_to_pkt_flags(staterr));
- pkt_flags = (uint16_t)(pkt_flags |
- rx_desc_error_to_pkt_flags(staterr));
+ pkt_flags = pkt_flags |
+ rx_desc_status_to_pkt_flags(staterr);
+ pkt_flags = pkt_flags |
+ rx_desc_error_to_pkt_flags(staterr);
first_seg->ol_flags = pkt_flags;
if (likely(pkt_flags & PKT_RX_RSS_HASH))
diff --git a/lib/librte_pmd_ixgbe/ixgbe_rxtx.h b/lib/librte_pmd_ixgbe/ixgbe_rxtx.h
index 8cf66bf..75f8239 100644
--- a/lib/librte_pmd_ixgbe/ixgbe_rxtx.h
+++ b/lib/librte_pmd_ixgbe/ixgbe_rxtx.h
@@ -150,7 +150,7 @@ enum ixgbe_advctx_num {
*/
struct ixgbe_advctx_info {
- uint16_t flags; /**< ol_flags for context build. */
+ uint32_t flags; /**< ol_flags for context build. */
uint32_t cmp_mask; /**< compare mask for vlan_macip_lens */
union rte_vlan_macip vlan_macip_lens; /**< vlan, mac ip length. */
};
--
1.9.2
* [dpdk-dev] [PATCH v2 10/17] mbuf: rename vlan_macip_len in hw_offload and increase its size
2014-05-19 13:56 [dpdk-dev] [PATCH v2 00/17] ixgbe/mbuf: add TSO support Olivier Matz
` (8 preceding siblings ...)
2014-05-19 13:56 ` [dpdk-dev] [PATCH v2 09/17] mbuf: change ol_flags to 32 bits Olivier Matz
@ 2014-05-19 13:56 ` Olivier Matz
2014-05-19 13:56 ` [dpdk-dev] [PATCH v2 11/17] testpmd: modify source address to validate checksum calculation Olivier Matz
` (8 subsequent siblings)
18 siblings, 0 replies; 28+ messages in thread
From: Olivier Matz @ 2014-05-19 13:56 UTC (permalink / raw)
To: dev
To implement TCP segmentation offload, we will need to add more
metadata to the mbuf, such as the length of the L4 header, the MSS,
and so on.
To prepare for this modification, this patch renames vlan_macip_len
to hw_offload and increases its size from 32 bits to 64 bits.
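As a minimal sketch, this is how a transmit path fills the renamed field after
this patch, mirroring what the testpmd forwarding engines below do:

  /* sketch: fill the per-packet offload metadata before handing
   * the mbuf to the PMD */
  static void
  prepare_offload_meta(struct rte_mbuf *mb, uint16_t vlan_id)
  {
          mb->hw_offload.u64 = 0;            /* reset all fields at once */
          mb->hw_offload.l2_len = sizeof(struct ether_hdr);
          mb->hw_offload.l3_len = sizeof(struct ipv4_hdr);
          mb->hw_offload.vlan_tci = vlan_id; /* used with PKT_TX_VLAN_PKT */
  }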
Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
---
app/test-pmd/csumonly.c | 4 +--
app/test-pmd/flowgen.c | 6 ++--
app/test-pmd/macfwd.c | 6 ++--
app/test-pmd/macswap.c | 6 ++--
app/test-pmd/rxonly.c | 2 +-
app/test-pmd/testpmd.c | 2 +-
app/test-pmd/txonly.c | 6 ++--
examples/ip_reassembly/ipv4_rsmbl.h | 10 +++----
examples/ip_reassembly/main.c | 4 +--
lib/librte_mbuf/rte_mbuf.h | 34 ++++++++++-----------
lib/librte_pmd_e1000/em_rxtx.c | 50 +++++++++++++++++--------------
lib/librte_pmd_e1000/igb_rxtx.c | 56 ++++++++++++++++++++---------------
lib/librte_pmd_ixgbe/ixgbe_rxtx.c | 54 +++++++++++++++++++--------------
lib/librte_pmd_ixgbe/ixgbe_rxtx.h | 3 +-
lib/librte_pmd_vmxnet3/vmxnet3_rxtx.c | 4 +--
15 files changed, 132 insertions(+), 115 deletions(-)
diff --git a/app/test-pmd/csumonly.c b/app/test-pmd/csumonly.c
index 69b90a7..9caad8f 100644
--- a/app/test-pmd/csumonly.c
+++ b/app/test-pmd/csumonly.c
@@ -430,8 +430,8 @@ pkt_burst_checksum_forward(struct fwd_stream *fs)
}
/* Combine the packet header writes. VLAN is not considered here */
- mb->vlan_macip.f.l2_len = l2_len;
- mb->vlan_macip.f.l3_len = l3_len;
+ mb->hw_offload.l2_len = l2_len;
+ mb->hw_offload.l3_len = l3_len;
mb->ol_flags = ol_flags;
}
nb_tx = rte_eth_tx_burst(fs->tx_port, fs->tx_queue, pkts_burst, nb_rx);
diff --git a/app/test-pmd/flowgen.c b/app/test-pmd/flowgen.c
index d69b2b8..14e43b5 100644
--- a/app/test-pmd/flowgen.c
+++ b/app/test-pmd/flowgen.c
@@ -208,9 +208,9 @@ pkt_burst_flow_gen(struct fwd_stream *fs)
pkt->nb_segs = 1;
pkt->pkt_len = pkt_size;
pkt->ol_flags = ol_flags;
- pkt->vlan_macip.f.vlan_tci = vlan_tci;
- pkt->vlan_macip.f.l2_len = sizeof(struct ether_hdr);
- pkt->vlan_macip.f.l3_len = sizeof(struct ipv4_hdr);
+ pkt->hw_offload.vlan_tci = vlan_tci;
+ pkt->hw_offload.l2_len = sizeof(struct ether_hdr);
+ pkt->hw_offload.l3_len = sizeof(struct ipv4_hdr);
pkts_burst[nb_pkt] = pkt;
next_flow = (next_flow + 1) % cfg_n_flows;
diff --git a/app/test-pmd/macfwd.c b/app/test-pmd/macfwd.c
index ab74d0c..d137f92 100644
--- a/app/test-pmd/macfwd.c
+++ b/app/test-pmd/macfwd.c
@@ -116,9 +116,9 @@ pkt_burst_mac_forward(struct fwd_stream *fs)
ether_addr_copy(&ports[fs->tx_port].eth_addr,
ð_hdr->s_addr);
mb->ol_flags = txp->tx_ol_flags;
- mb->vlan_macip.f.l2_len = sizeof(struct ether_hdr);
- mb->vlan_macip.f.l3_len = sizeof(struct ipv4_hdr);
- mb->vlan_macip.f.vlan_tci = txp->tx_vlan_id;
+ mb->hw_offload.l2_len = sizeof(struct ether_hdr);
+ mb->hw_offload.l3_len = sizeof(struct ipv4_hdr);
+ mb->hw_offload.vlan_tci = txp->tx_vlan_id;
}
nb_tx = rte_eth_tx_burst(fs->tx_port, fs->tx_queue, pkts_burst, nb_rx);
fs->tx_packets += nb_tx;
diff --git a/app/test-pmd/macswap.c b/app/test-pmd/macswap.c
index d274b36..b1b2324 100644
--- a/app/test-pmd/macswap.c
+++ b/app/test-pmd/macswap.c
@@ -118,9 +118,9 @@ pkt_burst_mac_swap(struct fwd_stream *fs)
ether_addr_copy(&addr, ð_hdr->s_addr);
mb->ol_flags = txp->tx_ol_flags;
- mb->vlan_macip.f.l2_len = sizeof(struct ether_hdr);
- mb->vlan_macip.f.l3_len = sizeof(struct ipv4_hdr);
- mb->vlan_macip.f.vlan_tci = txp->tx_vlan_id;
+ mb->hw_offload.l2_len = sizeof(struct ether_hdr);
+ mb->hw_offload.l3_len = sizeof(struct ipv4_hdr);
+ mb->hw_offload.vlan_tci = txp->tx_vlan_id;
}
nb_tx = rte_eth_tx_burst(fs->tx_port, fs->tx_queue, pkts_burst, nb_rx);
fs->tx_packets += nb_tx;
diff --git a/app/test-pmd/rxonly.c b/app/test-pmd/rxonly.c
index 0bf4440..6283482 100644
--- a/app/test-pmd/rxonly.c
+++ b/app/test-pmd/rxonly.c
@@ -149,7 +149,7 @@ pkt_burst_receive(struct fwd_stream *fs)
mb->hash.fdir.hash, mb->hash.fdir.id);
if (ol_flags & PKT_RX_VLAN_PKT)
printf(" - VLAN tci=0x%x",
- mb->vlan_macip.f.vlan_tci);
+ mb->hw_offload.vlan_tci);
printf("\n");
if (ol_flags != 0) {
uint32_t rxf;
diff --git a/app/test-pmd/testpmd.c b/app/test-pmd/testpmd.c
index 92e2729..ec3a522 100644
--- a/app/test-pmd/testpmd.c
+++ b/app/test-pmd/testpmd.c
@@ -406,7 +406,7 @@ testpmd_mbuf_ctor(struct rte_mempool *mp,
mb->ol_flags = 0;
mb->data_off = RTE_PKTMBUF_HEADROOM;
mb->nb_segs = 1;
- mb->vlan_macip.data = 0;
+ mb->hw_offload.u64 = 0;
mb->hash.rss = 0;
}
diff --git a/app/test-pmd/txonly.c b/app/test-pmd/txonly.c
index b2d8dbd..0f3722f 100644
--- a/app/test-pmd/txonly.c
+++ b/app/test-pmd/txonly.c
@@ -264,9 +264,9 @@ pkt_burst_transmit(struct fwd_stream *fs)
pkt->nb_segs = tx_pkt_nb_segs;
pkt->pkt_len = tx_pkt_length;
pkt->ol_flags = ol_flags;
- pkt->vlan_macip.f.vlan_tci = vlan_tci;
- pkt->vlan_macip.f.l2_len = sizeof(struct ether_hdr);
- pkt->vlan_macip.f.l3_len = sizeof(struct ipv4_hdr);
+ pkt->hw_offload.vlan_tci = vlan_tci;
+ pkt->hw_offload.l2_len = sizeof(struct ether_hdr);
+ pkt->hw_offload.l3_len = sizeof(struct ipv4_hdr);
pkts_burst[nb_pkt] = pkt;
}
nb_tx = rte_eth_tx_burst(fs->tx_port, fs->tx_queue, pkts_burst, nb_pkt);
diff --git a/examples/ip_reassembly/ipv4_rsmbl.h b/examples/ip_reassembly/ipv4_rsmbl.h
index 9b647fb..c653993 100644
--- a/examples/ip_reassembly/ipv4_rsmbl.h
+++ b/examples/ip_reassembly/ipv4_rsmbl.h
@@ -168,8 +168,8 @@ ipv4_frag_chain(struct rte_mbuf *mn, struct rte_mbuf *mp)
struct rte_mbuf *ms;
/* adjust start of the last fragment data. */
- rte_pktmbuf_adj(mp, (uint16_t)(mp->vlan_macip.f.l2_len +
- mp->vlan_macip.f.l3_len));
+ rte_pktmbuf_adj(mp, (uint16_t)(mp->hw_offload.l2_len +
+ mp->hw_offload.l3_len));
/* chain two fragments. */
ms = rte_pktmbuf_lastseg(mn);
@@ -233,10 +233,10 @@ ipv4_frag_reassemble(const struct ipv4_frag_pkt *fp)
/* update ipv4 header for the reassembled packet */
ip_hdr = (struct ipv4_hdr*)(rte_pktmbuf_mtod(m, uint8_t *) +
- m->vlan_macip.f.l2_len);
+ m->hw_offload.l2_len);
ip_hdr->total_length = rte_cpu_to_be_16((uint16_t)(fp->total_size +
- m->vlan_macip.f.l3_len));
+ m->hw_offload.l3_len));
ip_hdr->fragment_offset = (uint16_t)(ip_hdr->fragment_offset &
rte_cpu_to_be_16(IPV4_HDR_DF_FLAG));
ip_hdr->hdr_checksum = 0;
@@ -377,7 +377,7 @@ ipv4_frag_mbuf(struct ipv4_frag_tbl *tbl, struct ipv4_frag_death_row *dr,
ip_ofs *= IPV4_HDR_OFFSET_UNITS;
ip_len = (uint16_t)(rte_be_to_cpu_16(ip_hdr->total_length) -
- mb->vlan_macip.f.l3_len);
+ mb->hw_offload.l3_len);
IPV4_FRAG_LOG(DEBUG, "%s:%d:\n"
"mbuf: %p, tms: %" PRIu64
diff --git a/examples/ip_reassembly/main.c b/examples/ip_reassembly/main.c
index 5c5626a..a817d3d 100644
--- a/examples/ip_reassembly/main.c
+++ b/examples/ip_reassembly/main.c
@@ -680,8 +680,8 @@ l3fwd_simple_forward(struct rte_mbuf *m, uint8_t portid, uint32_t queue,
dr = &qconf->death_row;
/* prepare mbuf: setup l2_len/l3_len. */
- m->vlan_macip.f.l2_len = sizeof(*eth_hdr);
- m->vlan_macip.f.l3_len = sizeof(*ipv4_hdr);
+ m->hw_offload.l2_len = sizeof(*eth_hdr);
+ m->hw_offload.l3_len = sizeof(*ipv4_hdr);
/* process this fragment. */
if ((mo = ipv4_frag_mbuf(tbl, dr, m, tms, ipv4_hdr,
diff --git a/lib/librte_mbuf/rte_mbuf.h b/lib/librte_mbuf/rte_mbuf.h
index 540a62c..1e63511 100644
--- a/lib/librte_mbuf/rte_mbuf.h
+++ b/lib/librte_mbuf/rte_mbuf.h
@@ -149,26 +149,22 @@ static inline const char *rte_get_tx_ol_flag_name(uint32_t mask)
}
/** Offload features */
-union rte_vlan_macip {
- uint32_t data;
+union rte_hw_offload {
+ uint64_t u64;
struct {
- uint16_t l3_len:9; /**< L3 (IP) Header Length. */
- uint16_t l2_len:7; /**< L2 (MAC) Header Length. */
+#define HW_OFFLOAD_L2_LEN_MASK 0x7f
+#define HW_OFFLOAD_L3_LEN_MASK 0x1ff
+#define HW_OFFLOAD_L4_LEN_MASK 0xff
+ uint32_t l2_len:7; /**< L2 (MAC) Header Length. */
+ uint32_t l3_len:9; /**< L3 (IP) Header Length. */
+ uint32_t reserved:16;
+
uint16_t vlan_tci;
/**< VLAN Tag Control Identifier (CPU order). */
- } f;
+ uint16_t reserved2;
+ };
};
-/*
- * Compare mask for vlan_macip_len.data,
- * should be in sync with rte_vlan_macip.f layout.
- * */
-#define TX_VLAN_CMP_MASK 0xFFFF0000 /**< VLAN length - 16-bits. */
-#define TX_MAC_LEN_CMP_MASK 0x0000FE00 /**< MAC length - 7-bits. */
-#define TX_IP_LEN_CMP_MASK 0x000001FF /**< IP length - 9-bits. */
-/**< MAC+IP length. */
-#define TX_MACIP_LEN_CMP_MASK (TX_MAC_LEN_CMP_MASK | TX_IP_LEN_CMP_MASK)
-
/**
* The generic rte_mbuf, containing a packet mbuf.
*/
@@ -207,7 +203,7 @@ struct rte_mbuf {
uint32_t ol_flags; /**< Offload features. */
/* offload features, valid for first segment only */
- union rte_vlan_macip vlan_macip;
+ union rte_hw_offload hw_offload;
union {
uint32_t rss; /**< RSS hash result if RSS enabled */
struct {
@@ -216,7 +212,7 @@ struct rte_mbuf {
} fdir; /**< Filter identifier if FDIR enabled */
uint32_t sched; /**< Hierarchical scheduler */
} hash; /**< hash information */
- uint64_t reserved; /**< Unused field. Required for padding. */
+ uint32_t reserved; /**< Unused field. Required for padding. */
} __rte_cache_aligned;
/**
@@ -483,7 +479,7 @@ static inline void rte_pktmbuf_reset(struct rte_mbuf *m)
{
m->next = NULL;
m->pkt_len = 0;
- m->vlan_macip.data = 0;
+ m->hw_offload.u64 = 0;
m->nb_segs = 1;
m->in_port = 0xff;
@@ -549,7 +545,7 @@ static inline void rte_pktmbuf_attach(struct rte_mbuf *mi, struct rte_mbuf *md)
mi->data_off = md->data_off;
mi->data_len = md->data_len;
mi->in_port = md->in_port;
- mi->vlan_macip = md->vlan_macip;
+ mi->hw_offload.u64 = md->hw_offload.u64;
mi->hash = md->hash;
mi->next = NULL;
diff --git a/lib/librte_pmd_e1000/em_rxtx.c b/lib/librte_pmd_e1000/em_rxtx.c
index 1a34340..8870ccc 100644
--- a/lib/librte_pmd_e1000/em_rxtx.c
+++ b/lib/librte_pmd_e1000/em_rxtx.c
@@ -148,8 +148,8 @@ enum {
*/
struct em_ctx_info {
uint32_t flags; /**< ol_flags related to context build. */
- uint32_t cmp_mask; /**< compare mask */
- union rte_vlan_macip hdrlen; /**< L2 and L3 header lenghts */
+ union rte_hw_offload hw_offload; /**< l2/l3/l4 length, vlan, mss. */
+ union rte_hw_offload offload_mask; /**< compare mask for hw_offload */
};
/**
@@ -217,18 +217,18 @@ struct em_tx_queue {
static inline void
em_set_xmit_ctx(struct em_tx_queue* txq,
volatile struct e1000_context_desc *ctx_txd,
- uint32_t flags,
- union rte_vlan_macip hdrlen)
+ uint32_t flags, union rte_hw_offload hw_offload)
{
- uint32_t cmp_mask, cmd_len;
+ uint32_t cmd_len;
uint16_t ipcse, l2len;
struct e1000_context_desc ctx;
+ union rte_hw_offload offload_mask;
- cmp_mask = 0;
+ offload_mask.u64 = 0;
cmd_len = E1000_TXD_CMD_DEXT | E1000_TXD_DTYP_C;
- l2len = hdrlen.f.l2_len;
- ipcse = (uint16_t)(l2len + hdrlen.f.l3_len);
+ l2len = hw_offload.l2_len;
+ ipcse = (uint16_t)(l2len + hw_offload.l3_len);
/* setup IPCS* fields */
ctx.lower_setup.ip_fields.ipcss = (uint8_t)l2len;
@@ -243,7 +243,8 @@ em_set_xmit_ctx(struct em_tx_queue* txq,
ctx.lower_setup.ip_fields.ipcse =
(uint16_t)rte_cpu_to_le_16(ipcse - 1);
cmd_len |= E1000_TXD_CMD_IP;
- cmp_mask |= TX_MACIP_LEN_CMP_MASK;
+ offload_mask.l2_len = HW_OFFLOAD_L2_LEN_MASK;
+ offload_mask.l3_len = HW_OFFLOAD_L3_LEN_MASK;
} else {
ctx.lower_setup.ip_fields.ipcse = 0;
}
@@ -256,13 +257,15 @@ em_set_xmit_ctx(struct em_tx_queue* txq,
case PKT_TX_UDP_CKSUM:
ctx.upper_setup.tcp_fields.tucso = (uint8_t)(ipcse +
offsetof(struct udp_hdr, dgram_cksum));
- cmp_mask |= TX_MACIP_LEN_CMP_MASK;
+ offload_mask.l2_len = HW_OFFLOAD_L2_LEN_MASK;
+ offload_mask.l3_len = HW_OFFLOAD_L3_LEN_MASK;
break;
case PKT_TX_TCP_CKSUM:
ctx.upper_setup.tcp_fields.tucso = (uint8_t)(ipcse +
offsetof(struct tcp_hdr, cksum));
cmd_len |= E1000_TXD_CMD_TCP;
- cmp_mask |= TX_MACIP_LEN_CMP_MASK;
+ offload_mask.l2_len = HW_OFFLOAD_L2_LEN_MASK;
+ offload_mask.l3_len = HW_OFFLOAD_L3_LEN_MASK;
break;
default:
ctx.upper_setup.tcp_fields.tucso = 0;
@@ -274,8 +277,9 @@ em_set_xmit_ctx(struct em_tx_queue* txq,
*ctx_txd = ctx;
txq->ctx_cache.flags = flags;
- txq->ctx_cache.cmp_mask = cmp_mask;
- txq->ctx_cache.hdrlen = hdrlen;
+ txq->ctx_cache.hw_offload.u64 =
+ offload_mask.u64 & hw_offload.u64;
+ txq->ctx_cache.offload_mask = offload_mask;
}
/*
@@ -284,12 +288,12 @@ em_set_xmit_ctx(struct em_tx_queue* txq,
*/
static inline uint32_t
what_ctx_update(struct em_tx_queue *txq, uint32_t flags,
- union rte_vlan_macip hdrlen)
+ union rte_hw_offload hw_offload)
{
/* If match with the current context */
if (likely (txq->ctx_cache.flags == flags &&
- ((txq->ctx_cache.hdrlen.data ^ hdrlen.data) &
- txq->ctx_cache.cmp_mask) == 0))
+ ((txq->ctx_cache.hw_offload.u64 ^ hw_offload.u64) &
+ txq->ctx_cache.offload_mask.u64) == 0))
return (EM_CTX_0);
/* Mismatch */
@@ -390,7 +394,7 @@ eth_em_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
uint32_t tx_ol_req;
uint32_t ctx;
uint32_t new_ctx;
- union rte_vlan_macip hdrlen;
+ union rte_hw_offload hw_offload;
txq = tx_queue;
sw_ring = txq->sw_ring;
@@ -419,9 +423,9 @@ eth_em_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
/* If hardware offload required */
tx_ol_req = (ol_flags & (PKT_TX_IP_CKSUM | PKT_TX_L4_MASK));
if (tx_ol_req) {
- hdrlen = tx_pkt->vlan_macip;
+ hw_offload.u64 = tx_pkt->hw_offload.u64;
/* Build a new context or reuse the existing one. */
- ctx = what_ctx_update(txq, tx_ol_req, hdrlen);
+ ctx = what_ctx_update(txq, tx_ol_req, hw_offload);
/* Only allocate a context descriptor if required */
new_ctx = (ctx == EM_CTX_NUM);
@@ -514,7 +518,7 @@ eth_em_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
/* Set VLAN Tag offload fields. */
if (ol_flags & PKT_TX_VLAN_PKT) {
cmd_type_len |= E1000_TXD_CMD_VLE;
- popts_spec = tx_pkt->vlan_macip.f.vlan_tci <<
+ popts_spec = tx_pkt->hw_offload.vlan_tci <<
E1000_TXD_VLAN_SHIFT;
}
@@ -537,7 +541,7 @@ eth_em_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
}
em_set_xmit_ctx(txq, ctx_txd, tx_ol_req,
- hdrlen);
+ hw_offload);
txe->last_id = tx_last;
tx_id = txe->next_id;
@@ -782,7 +786,7 @@ eth_em_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
rx_desc_error_to_pkt_flags(rxd.errors);
/* Only valid if PKT_RX_VLAN_PKT set in pkt_flags */
- rxm->vlan_macip.f.vlan_tci = rte_le_to_cpu_16(rxd.special);
+ rxm->hw_offload.vlan_tci = rte_le_to_cpu_16(rxd.special);
/*
* Store the mbuf address into the next entry of the array
@@ -1008,7 +1012,7 @@ eth_em_recv_scattered_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
rx_desc_error_to_pkt_flags(rxd.errors);
/* Only valid if PKT_RX_VLAN_PKT set in pkt_flags */
- rxm->vlan_macip.f.vlan_tci = rte_le_to_cpu_16(rxd.special);
+ rxm->hw_offload.vlan_tci = rte_le_to_cpu_16(rxd.special);
/* Prefetch data of first segment, if configured to do so. */
rte_packet_prefetch((char *)first_seg->buf_addr +
diff --git a/lib/librte_pmd_e1000/igb_rxtx.c b/lib/librte_pmd_e1000/igb_rxtx.c
index 322dfa0..2db496f 100644
--- a/lib/librte_pmd_e1000/igb_rxtx.c
+++ b/lib/librte_pmd_e1000/igb_rxtx.c
@@ -145,8 +145,8 @@ enum igb_advctx_num {
*/
struct igb_advctx_info {
uint32_t flags; /**< ol_flags related to context build. */
- uint32_t cmp_mask; /**< compare mask for vlan_macip_lens */
- union rte_vlan_macip vlan_macip_lens; /**< vlan, mac & ip length. */
+ union rte_hw_offload hw_offload; /**< l2/l3/l4 length, vlan, mss. */
+ union rte_hw_offload offload_mask; /**< compare mask for hw_offload */
};
/**
@@ -212,26 +212,28 @@ struct igb_tx_queue {
static inline void
igbe_set_xmit_ctx(struct igb_tx_queue* txq,
volatile struct e1000_adv_tx_context_desc *ctx_txd,
- uint32_t ol_flags, uint32_t vlan_macip_lens)
+ uint32_t ol_flags, union rte_hw_offload hw_offload)
{
uint32_t type_tucmd_mlhl;
uint32_t mss_l4len_idx;
uint32_t ctx_idx, ctx_curr;
- uint32_t cmp_mask;
+ uint32_t vlan_macip_lens;
+ union rte_hw_offload offload_mask;
ctx_curr = txq->ctx_curr;
ctx_idx = ctx_curr + txq->ctx_start;
- cmp_mask = 0;
+ offload_mask.u64 = 0;
type_tucmd_mlhl = 0;
if (ol_flags & PKT_TX_VLAN_PKT) {
- cmp_mask |= TX_VLAN_CMP_MASK;
+ offload_mask.vlan_tci = 0xffff;
}
if (ol_flags & PKT_TX_IP_CKSUM) {
type_tucmd_mlhl = E1000_ADVTXD_TUCMD_IPV4;
- cmp_mask |= TX_MACIP_LEN_CMP_MASK;
+ offload_mask.l2_len = HW_OFFLOAD_L2_LEN_MASK;
+ offload_mask.l3_len = HW_OFFLOAD_L3_LEN_MASK;
}
/* Specify which HW CTX to upload. */
@@ -241,19 +243,22 @@ igbe_set_xmit_ctx(struct igb_tx_queue* txq,
type_tucmd_mlhl |= E1000_ADVTXD_TUCMD_L4T_UDP |
E1000_ADVTXD_DTYP_CTXT | E1000_ADVTXD_DCMD_DEXT;
mss_l4len_idx |= sizeof(struct udp_hdr) << E1000_ADVTXD_L4LEN_SHIFT;
- cmp_mask |= TX_MACIP_LEN_CMP_MASK;
+ offload_mask.l2_len = HW_OFFLOAD_L2_LEN_MASK;
+ offload_mask.l3_len = HW_OFFLOAD_L3_LEN_MASK;
break;
case PKT_TX_TCP_CKSUM:
type_tucmd_mlhl |= E1000_ADVTXD_TUCMD_L4T_TCP |
E1000_ADVTXD_DTYP_CTXT | E1000_ADVTXD_DCMD_DEXT;
mss_l4len_idx |= sizeof(struct tcp_hdr) << E1000_ADVTXD_L4LEN_SHIFT;
- cmp_mask |= TX_MACIP_LEN_CMP_MASK;
+ offload_mask.l2_len = HW_OFFLOAD_L2_LEN_MASK;
+ offload_mask.l3_len = HW_OFFLOAD_L3_LEN_MASK;
break;
case PKT_TX_SCTP_CKSUM:
type_tucmd_mlhl |= E1000_ADVTXD_TUCMD_L4T_SCTP |
E1000_ADVTXD_DTYP_CTXT | E1000_ADVTXD_DCMD_DEXT;
mss_l4len_idx |= sizeof(struct sctp_hdr) << E1000_ADVTXD_L4LEN_SHIFT;
- cmp_mask |= TX_MACIP_LEN_CMP_MASK;
+ offload_mask.l2_len = HW_OFFLOAD_L2_LEN_MASK;
+ offload_mask.l3_len = HW_OFFLOAD_L3_LEN_MASK;
break;
default:
type_tucmd_mlhl |= E1000_ADVTXD_TUCMD_L4T_RSV |
@@ -262,11 +267,14 @@ igbe_set_xmit_ctx(struct igb_tx_queue* txq,
}
txq->ctx_cache[ctx_curr].flags = ol_flags;
- txq->ctx_cache[ctx_curr].cmp_mask = cmp_mask;
- txq->ctx_cache[ctx_curr].vlan_macip_lens.data =
- vlan_macip_lens & cmp_mask;
+ txq->ctx_cache[ctx_curr].hw_offload.u64 =
+ offload_mask.u64 & hw_offload.u64;
+ txq->ctx_cache[ctx_curr].offload_mask = offload_mask;
ctx_txd->type_tucmd_mlhl = rte_cpu_to_le_32(type_tucmd_mlhl);
+ vlan_macip_lens = hw_offload.l3_len;
+ vlan_macip_lens |= (hw_offload.l2_len << E1000_ADVTXD_MACLEN_SHIFT);
+ vlan_macip_lens |= ((uint32_t)hw_offload.vlan_tci << E1000_ADVTXD_VLAN_SHIFT);
ctx_txd->vlan_macip_lens = rte_cpu_to_le_32(vlan_macip_lens);
ctx_txd->mss_l4len_idx = rte_cpu_to_le_32(mss_l4len_idx);
ctx_txd->seqnum_seed = 0;
@@ -278,20 +286,20 @@ igbe_set_xmit_ctx(struct igb_tx_queue* txq,
*/
static inline uint32_t
what_advctx_update(struct igb_tx_queue *txq, uint32_t flags,
- uint32_t vlan_macip_lens)
+ union rte_hw_offload hw_offload)
{
/* If match with the current context */
if (likely((txq->ctx_cache[txq->ctx_curr].flags == flags) &&
- (txq->ctx_cache[txq->ctx_curr].vlan_macip_lens.data ==
- (txq->ctx_cache[txq->ctx_curr].cmp_mask & vlan_macip_lens)))) {
+ (txq->ctx_cache[txq->ctx_curr].hw_offload.u64 ==
+ (txq->ctx_cache[txq->ctx_curr].offload_mask.u64 & hw_offload.u64)))) {
return txq->ctx_curr;
}
/* If match with the second context */
txq->ctx_curr ^= 1;
if (likely((txq->ctx_cache[txq->ctx_curr].flags == flags) &&
- (txq->ctx_cache[txq->ctx_curr].vlan_macip_lens.data ==
- (txq->ctx_cache[txq->ctx_curr].cmp_mask & vlan_macip_lens)))) {
+ (txq->ctx_cache[txq->ctx_curr].hw_offload.u64 ==
+ (txq->ctx_cache[txq->ctx_curr].offload_mask.u64 & hw_offload.u64)))) {
return txq->ctx_curr;
}
@@ -342,7 +350,7 @@ eth_igb_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
uint32_t tx_ol_req;
uint32_t new_ctx = 0;
uint32_t ctx = 0;
- uint32_t vlan_macip_lens;
+ union rte_hw_offload hw_offload;
txq = tx_queue;
sw_ring = txq->sw_ring;
@@ -367,14 +375,14 @@ eth_igb_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
tx_last = (uint16_t) (tx_id + tx_pkt->nb_segs - 1);
ol_flags = tx_pkt->ol_flags;
- vlan_macip_lens = tx_pkt->vlan_macip.data;
+ hw_offload.u64 = tx_pkt->hw_offload.u64;
tx_ol_req = ol_flags &
(PKT_TX_VLAN_PKT | PKT_TX_IP_CKSUM | PKT_TX_L4_MASK);
/* If a Context Descriptor need be built . */
if (tx_ol_req) {
ctx = what_advctx_update(txq, tx_ol_req,
- vlan_macip_lens);
+ hw_offload);
/* Only allocate a context descriptor if required */
new_ctx = (ctx == IGB_CTX_NUM);
ctx = txq->ctx_curr;
@@ -490,7 +498,7 @@ eth_igb_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
}
igbe_set_xmit_ctx(txq, ctx_txd, tx_ol_req,
- vlan_macip_lens);
+ hw_offload);
txe->last_id = tx_last;
tx_id = txe->next_id;
@@ -752,7 +760,7 @@ eth_igb_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
rxm->hash.rss = rxd.wb.lower.hi_dword.rss;
hlen_type_rss = rte_le_to_cpu_32(rxd.wb.lower.lo_dword.data);
/* Only valid if PKT_RX_VLAN_PKT set in pkt_flags */
- rxm->vlan_macip.f.vlan_tci =
+ rxm->hw_offload.vlan_tci =
rte_le_to_cpu_16(rxd.wb.upper.vlan);
pkt_flags = rx_desc_hlen_type_rss_to_pkt_flags(hlen_type_rss);
@@ -989,7 +997,7 @@ eth_igb_recv_scattered_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
* The vlan_tci field is only valid when PKT_RX_VLAN_PKT is
* set in the pkt_flags field.
*/
- first_seg->vlan_macip.f.vlan_tci =
+ first_seg->hw_offload.vlan_tci =
rte_le_to_cpu_16(rxd.wb.upper.vlan);
hlen_type_rss = rte_le_to_cpu_32(rxd.wb.lower.lo_dword.data);
pkt_flags = rx_desc_hlen_type_rss_to_pkt_flags(hlen_type_rss);
diff --git a/lib/librte_pmd_ixgbe/ixgbe_rxtx.c b/lib/librte_pmd_ixgbe/ixgbe_rxtx.c
index 0ff1a07..e1eb59d 100644
--- a/lib/librte_pmd_ixgbe/ixgbe_rxtx.c
+++ b/lib/librte_pmd_ixgbe/ixgbe_rxtx.c
@@ -350,24 +350,26 @@ ixgbe_xmit_pkts_simple(void *tx_queue, struct rte_mbuf **tx_pkts,
static inline void
ixgbe_set_xmit_ctx(struct igb_tx_queue* txq,
volatile struct ixgbe_adv_tx_context_desc *ctx_txd,
- uint32_t ol_flags, uint32_t vlan_macip_lens)
+ uint32_t ol_flags, union rte_hw_offload hw_offload)
{
uint32_t type_tucmd_mlhl;
uint32_t mss_l4len_idx;
uint32_t ctx_idx;
- uint32_t cmp_mask;
+ uint32_t vlan_macip_lens;
+ union rte_hw_offload offload_mask;
ctx_idx = txq->ctx_curr;
- cmp_mask = 0;
+ offload_mask.u64 = 0;
type_tucmd_mlhl = 0;
if (ol_flags & PKT_TX_VLAN_PKT) {
- cmp_mask |= TX_VLAN_CMP_MASK;
+ offload_mask.vlan_tci = 0xffff;
}
if (ol_flags & PKT_TX_IP_CKSUM) {
type_tucmd_mlhl = IXGBE_ADVTXD_TUCMD_IPV4;
- cmp_mask |= TX_MACIP_LEN_CMP_MASK;
+ offload_mask.l2_len = HW_OFFLOAD_L2_LEN_MASK;
+ offload_mask.l3_len = HW_OFFLOAD_L3_LEN_MASK;
}
/* Specify which HW CTX to upload. */
@@ -377,19 +379,22 @@ ixgbe_set_xmit_ctx(struct igb_tx_queue* txq,
type_tucmd_mlhl |= IXGBE_ADVTXD_TUCMD_L4T_UDP |
IXGBE_ADVTXD_DTYP_CTXT | IXGBE_ADVTXD_DCMD_DEXT;
mss_l4len_idx |= sizeof(struct udp_hdr) << IXGBE_ADVTXD_L4LEN_SHIFT;
- cmp_mask |= TX_MACIP_LEN_CMP_MASK;
+ offload_mask.l2_len = HW_OFFLOAD_L2_LEN_MASK;
+ offload_mask.l3_len = HW_OFFLOAD_L3_LEN_MASK;
break;
case PKT_TX_TCP_CKSUM:
type_tucmd_mlhl |= IXGBE_ADVTXD_TUCMD_L4T_TCP |
IXGBE_ADVTXD_DTYP_CTXT | IXGBE_ADVTXD_DCMD_DEXT;
mss_l4len_idx |= sizeof(struct tcp_hdr) << IXGBE_ADVTXD_L4LEN_SHIFT;
- cmp_mask |= TX_MACIP_LEN_CMP_MASK;
+ offload_mask.l2_len = HW_OFFLOAD_L2_LEN_MASK;
+ offload_mask.l3_len = HW_OFFLOAD_L3_LEN_MASK;
break;
case PKT_TX_SCTP_CKSUM:
type_tucmd_mlhl |= IXGBE_ADVTXD_TUCMD_L4T_SCTP |
IXGBE_ADVTXD_DTYP_CTXT | IXGBE_ADVTXD_DCMD_DEXT;
mss_l4len_idx |= sizeof(struct sctp_hdr) << IXGBE_ADVTXD_L4LEN_SHIFT;
- cmp_mask |= TX_MACIP_LEN_CMP_MASK;
+ offload_mask.l2_len = HW_OFFLOAD_L2_LEN_MASK;
+ offload_mask.l3_len = HW_OFFLOAD_L3_LEN_MASK;
break;
default:
type_tucmd_mlhl |= IXGBE_ADVTXD_TUCMD_L4T_RSV |
@@ -398,11 +403,14 @@ ixgbe_set_xmit_ctx(struct igb_tx_queue* txq,
}
txq->ctx_cache[ctx_idx].flags = ol_flags;
- txq->ctx_cache[ctx_idx].cmp_mask = cmp_mask;
- txq->ctx_cache[ctx_idx].vlan_macip_lens.data =
- vlan_macip_lens & cmp_mask;
+ txq->ctx_cache[ctx_idx].hw_offload.u64 =
+ offload_mask.u64 & hw_offload.u64;
+ txq->ctx_cache[ctx_idx].offload_mask = offload_mask;
ctx_txd->type_tucmd_mlhl = rte_cpu_to_le_32(type_tucmd_mlhl);
+ vlan_macip_lens = hw_offload.l3_len;
+ vlan_macip_lens |= (hw_offload.l2_len << IXGBE_ADVTXD_MACLEN_SHIFT);
+ vlan_macip_lens |= ((uint32_t)hw_offload.vlan_tci << IXGBE_ADVTXD_VLAN_SHIFT);
ctx_txd->vlan_macip_lens = rte_cpu_to_le_32(vlan_macip_lens);
ctx_txd->mss_l4len_idx = rte_cpu_to_le_32(mss_l4len_idx);
ctx_txd->seqnum_seed = 0;
@@ -414,20 +422,20 @@ ixgbe_set_xmit_ctx(struct igb_tx_queue* txq,
*/
static inline uint32_t
what_advctx_update(struct igb_tx_queue *txq, uint32_t flags,
- uint32_t vlan_macip_lens)
+ union rte_hw_offload hw_offload)
{
/* If match with the current used context */
if (likely((txq->ctx_cache[txq->ctx_curr].flags == flags) &&
- (txq->ctx_cache[txq->ctx_curr].vlan_macip_lens.data ==
- (txq->ctx_cache[txq->ctx_curr].cmp_mask & vlan_macip_lens)))) {
+ (txq->ctx_cache[txq->ctx_curr].hw_offload.u64 ==
+ (txq->ctx_cache[txq->ctx_curr].offload_mask.u64 & hw_offload.u64)))) {
return txq->ctx_curr;
}
/* What if match with the next context */
txq->ctx_curr ^= 1;
if (likely((txq->ctx_cache[txq->ctx_curr].flags == flags) &&
- (txq->ctx_cache[txq->ctx_curr].vlan_macip_lens.data ==
- (txq->ctx_cache[txq->ctx_curr].cmp_mask & vlan_macip_lens)))) {
+ (txq->ctx_cache[txq->ctx_curr].hw_offload.u64 ==
+ (txq->ctx_cache[txq->ctx_curr].offload_mask.u64 & hw_offload.u64)))) {
return txq->ctx_curr;
}
@@ -543,9 +551,9 @@ ixgbe_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
uint16_t nb_tx;
uint16_t nb_used;
uint32_t tx_ol_req;
- uint32_t vlan_macip_lens;
uint32_t ctx = 0;
uint32_t new_ctx;
+ union rte_hw_offload hw_offload;
txq = tx_queue;
sw_ring = txq->sw_ring;
@@ -571,7 +579,7 @@ ixgbe_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
* are needed for offload functionality.
*/
ol_flags = tx_pkt->ol_flags;
- vlan_macip_lens = tx_pkt->vlan_macip.data;
+ hw_offload.u64 = tx_pkt->hw_offload.u64;
/* If hardware offload required */
tx_ol_req = ol_flags &
@@ -579,7 +587,7 @@ ixgbe_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
if (tx_ol_req) {
/* Build a new context or reuse the existing one. */
ctx = what_advctx_update(txq, tx_ol_req,
- vlan_macip_lens);
+ hw_offload);
/* Only allocate a context descriptor if required */
new_ctx = (ctx == IXGBE_CTX_NUM);
ctx = txq->ctx_curr;
@@ -721,7 +729,7 @@ ixgbe_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
}
ixgbe_set_xmit_ctx(txq, ctx_txd, tx_ol_req,
- vlan_macip_lens);
+ hw_offload);
txe->last_id = tx_last;
tx_id = txe->next_id;
@@ -932,7 +940,7 @@ ixgbe_rx_scan_hw_ring(struct igb_rx_queue *rxq)
rxq->crc_len);
mb->data_len = pkt_len;
mb->pkt_len = pkt_len;
- mb->vlan_macip.f.vlan_tci = rxdp[j].wb.upper.vlan;
+ mb->hw_offload.vlan_tci = rxdp[j].wb.upper.vlan;
mb->hash.rss = rxdp[j].wb.lower.hi_dword.rss;
/* convert descriptor fields to rte mbuf flags */
@@ -1250,7 +1258,7 @@ ixgbe_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
hlen_type_rss = rte_le_to_cpu_32(rxd.wb.lower.lo_dword.data);
/* Only valid if PKT_RX_VLAN_PKT set in pkt_flags */
- rxm->vlan_macip.f.vlan_tci =
+ rxm->hw_offload.vlan_tci =
rte_le_to_cpu_16(rxd.wb.upper.vlan);
pkt_flags = rx_desc_hlen_type_rss_to_pkt_flags(hlen_type_rss);
@@ -1495,7 +1503,7 @@ ixgbe_recv_scattered_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
* The vlan_tci field is only valid when PKT_RX_VLAN_PKT is
* set in the pkt_flags field.
*/
- first_seg->vlan_macip.f.vlan_tci =
+ first_seg->hw_offload.vlan_tci =
rte_le_to_cpu_16(rxd.wb.upper.vlan);
hlen_type_rss = rte_le_to_cpu_32(rxd.wb.lower.lo_dword.data);
pkt_flags = rx_desc_hlen_type_rss_to_pkt_flags(hlen_type_rss);
diff --git a/lib/librte_pmd_ixgbe/ixgbe_rxtx.h b/lib/librte_pmd_ixgbe/ixgbe_rxtx.h
index 75f8239..9199d31 100644
--- a/lib/librte_pmd_ixgbe/ixgbe_rxtx.h
+++ b/lib/librte_pmd_ixgbe/ixgbe_rxtx.h
@@ -152,7 +152,8 @@ enum ixgbe_advctx_num {
struct ixgbe_advctx_info {
uint32_t flags; /**< ol_flags for context build. */
uint32_t cmp_mask; /**< compare mask for vlan_macip_lens */
- union rte_vlan_macip vlan_macip_lens; /**< vlan, mac ip length. */
+ union rte_hw_offload hw_offload; /**< l2/l3/l4 length, vlan, mss. */
+ union rte_hw_offload offload_mask; /**< compare mask for hw_offload */
};
/**
diff --git a/lib/librte_pmd_vmxnet3/vmxnet3_rxtx.c b/lib/librte_pmd_vmxnet3/vmxnet3_rxtx.c
index 0845b1d..60309a3 100644
--- a/lib/librte_pmd_vmxnet3/vmxnet3_rxtx.c
+++ b/lib/librte_pmd_vmxnet3/vmxnet3_rxtx.c
@@ -536,7 +536,7 @@ vmxnet3_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
rte_pktmbuf_mtod(rxm, void *));
#endif
// Copy the VLAN tag into the packet buffer
- rxm->vlan_macip.f.vlan_tci =
+ rxm->hw_offload.vlan_tci =
rte_le_to_cpu_16((uint16_t)rcd->tci);
} else
@@ -549,7 +549,7 @@ vmxnet3_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
rxm->pkt_len = (uint16_t)rcd->len;
rxm->data_len = (uint16_t)rcd->len;
rxm->in_port = rxq->port_id;
- rxm->vlan_macip.f.vlan_tci = 0;
+ rxm->hw_offload.vlan_tci = 0;
rxm->data_off = RTE_PKTMBUF_HEADROOM;
rx_pkts[nb_rx++] = rxm;
--
1.9.2
* [dpdk-dev] [PATCH v2 11/17] testpmd: modify source address to validate checksum calculation
2014-05-19 13:56 [dpdk-dev] [PATCH v2 00/17] ixgbe/mbuf: add TSO support Olivier Matz
` (9 preceding siblings ...)
2014-05-19 13:56 ` [dpdk-dev] [PATCH v2 10/17] mbuf: rename vlan_macip_len in hw_offload and increase its size Olivier Matz
@ 2014-05-19 13:56 ` Olivier Matz
2014-05-19 13:56 ` [dpdk-dev] [PATCH v2 12/17] mbuf: generic support of TCP segmentation offload Olivier Matz
` (7 subsequent siblings)
18 siblings, 0 replies; 28+ messages in thread
From: Olivier Matz @ 2014-05-19 13:56 UTC (permalink / raw)
To: dev
Always modify the source address of the packet in order to validate
the calculation of the checksums (L3 or L4). This was already done
for the IPv4 software checksum; add it for the IPv4 hardware checksum
and for IPv6.
Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
---
app/test-pmd/csumonly.c | 2 ++
1 file changed, 2 insertions(+)
diff --git a/app/test-pmd/csumonly.c b/app/test-pmd/csumonly.c
index 9caad8f..e93d75f 100644
--- a/app/test-pmd/csumonly.c
+++ b/app/test-pmd/csumonly.c
@@ -310,6 +310,7 @@ pkt_burst_checksum_forward(struct fwd_stream *fs)
if (tx_ol_flags & PKT_TX_IP_CKSUM) {
/* HW checksum */
+ ipv4_hdr->src_addr--;
ol_flags |= PKT_TX_IP_CKSUM;
}
else {
@@ -373,6 +374,7 @@ pkt_burst_checksum_forward(struct fwd_stream *fs)
unsigned char *) + l2_len);
l3_len = sizeof(struct ipv6_hdr) ;
l4_proto = ipv6_hdr->proto;
+ ipv6_hdr->src_addr[3]--;
if (l4_proto == IPPROTO_UDP) {
udp_hdr = (struct udp_hdr*) (rte_pktmbuf_mtod(mb,
--
1.9.2
* [dpdk-dev] [PATCH v2 12/17] mbuf: generic support of TCP segmentation offload
2014-05-19 13:56 [dpdk-dev] [PATCH v2 00/17] ixgbe/mbuf: add TSO support Olivier Matz
` (10 preceding siblings ...)
2014-05-19 13:56 ` [dpdk-dev] [PATCH v2 11/17] testpmd: modify source address to validate checksum calculation Olivier Matz
@ 2014-05-19 13:56 ` Olivier Matz
2014-05-19 13:56 ` [dpdk-dev] [PATCH v2 13/17] ixgbe: support " Olivier Matz
` (6 subsequent siblings)
18 siblings, 0 replies; 28+ messages in thread
From: Olivier Matz @ 2014-05-19 13:56 UTC (permalink / raw)
To: dev
Implement the generic part of TCP segmentation offload:
- rte_mbuf modifications
- testpmd for validation
To delegate the TCP segmentation to the hardware, the user has to:
- set the PKT_TX_TCP_SEG flag in mbuf->ol_flags (this flag implies
PKT_TX_IP_CKSUM and PKT_TX_TCP_CKSUM)
- fill the mbuf->hw_offload information: l2_len, l3_len, l4_len, mss
- calculate the pseudo header checksum and set it in the TCP header,
as required when doing hardware TCP checksum offload
- set the IP checksum to 0
The API is inspired from ixgbe hardware (the next commit adds the
support for ixgbe), but it seems generic enough to be used for other
hw/drivers in the future.
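As a rough illustration of these four steps, the per-packet setup on the
application side could look like the sketch below. pseudo_hdr_cksum() is
a placeholder for the application's own non-complemented pseudo-header
checksum routine; it is not provided by this series.

#include <rte_mbuf.h>
#include <rte_ether.h>
#include <rte_ip.h>
#include <rte_tcp.h>

static void
prepare_ipv4_tso(struct rte_mbuf *m, struct ipv4_hdr *ip,
        struct tcp_hdr *tcp, uint16_t mss)
{
        m->ol_flags |= PKT_TX_TCP_SEG;  /* implies IP and TCP cksum offload */
        m->hw_offload.l2_len = sizeof(struct ether_hdr);
        m->hw_offload.l3_len = sizeof(struct ipv4_hdr);
        m->hw_offload.l4_len = (tcp->data_off & 0xf0) >> 2;
        m->hw_offload.mss = mss;
        ip->hdr_checksum = 0;           /* computed by the hardware */
        /* pseudo header cksum, without the TCP data */
        tcp->cksum = pseudo_hdr_cksum(ip);
}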
Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
---
app/test-pmd/cmdline.c | 45 +++++++++++++++++++++++++++++++++++++++++++++
app/test-pmd/config.c | 8 ++++++++
app/test-pmd/csumonly.c | 16 ++++++++++++++++
app/test-pmd/testpmd.h | 2 ++
lib/librte_mbuf/rte_mbuf.h | 17 +++++++++++++++--
5 files changed, 86 insertions(+), 2 deletions(-)
diff --git a/app/test-pmd/cmdline.c b/app/test-pmd/cmdline.c
index 8f155e9..c686901 100644
--- a/app/test-pmd/cmdline.c
+++ b/app/test-pmd/cmdline.c
@@ -2298,6 +2298,50 @@ cmdline_parse_inst_t cmd_tx_cksum_set = {
},
};
+/* *** ENABLE HARDWARE SEGMENTATION IN TX PACKETS *** */
+struct cmd_tso_set_result {
+ cmdline_fixed_string_t tso;
+ cmdline_fixed_string_t set;
+ uint16_t mss;
+ uint8_t port_id;
+};
+
+static void
+cmd_tso_set_parsed(void *parsed_result,
+ __attribute__((unused)) struct cmdline *cl,
+ __attribute__((unused)) void *data)
+{
+ struct cmd_tso_set_result *res = parsed_result;
+ tso_set(res->port_id, res->mss);
+}
+
+cmdline_parse_token_string_t cmd_tso_set_tso =
+ TOKEN_STRING_INITIALIZER(struct cmd_tso_set_result,
+ tso, "tso");
+cmdline_parse_token_string_t cmd_tso_set_set =
+ TOKEN_STRING_INITIALIZER(struct cmd_tso_set_result,
+ set, "set");
+cmdline_parse_token_num_t cmd_tso_set_mss =
+ TOKEN_NUM_INITIALIZER(struct cmd_tso_set_result,
+ mss, UINT16);
+cmdline_parse_token_num_t cmd_tso_set_portid =
+ TOKEN_NUM_INITIALIZER(struct cmd_tso_set_result,
+ port_id, UINT8);
+
+cmdline_parse_inst_t cmd_tso_set = {
+ .f = cmd_tso_set_parsed,
+ .data = NULL,
+ .help_str = "Enable hardware segmentation (set MSS to 0 to disable): "
+ "tso set <MSS> <PORT>",
+ .tokens = {
+ (void *)&cmd_tso_set_tso,
+ (void *)&cmd_tso_set_set,
+ (void *)&cmd_tso_set_mss,
+ (void *)&cmd_tso_set_portid,
+ NULL,
+ },
+};
+
/* *** ENABLE/DISABLE FLUSH ON RX STREAMS *** */
struct cmd_set_flush_rx {
cmdline_fixed_string_t set;
@@ -5197,6 +5241,7 @@ cmdline_parse_ctx_t main_ctx[] = {
(cmdline_parse_inst_t *)&cmd_tx_vlan_set,
(cmdline_parse_inst_t *)&cmd_tx_vlan_reset,
(cmdline_parse_inst_t *)&cmd_tx_cksum_set,
+ (cmdline_parse_inst_t *)&cmd_tso_set,
(cmdline_parse_inst_t *)&cmd_link_flow_control_set,
(cmdline_parse_inst_t *)&cmd_priority_flow_control_set,
(cmdline_parse_inst_t *)&cmd_config_dcb,
diff --git a/app/test-pmd/config.c b/app/test-pmd/config.c
index 018a278..184efdb 100644
--- a/app/test-pmd/config.c
+++ b/app/test-pmd/config.c
@@ -1545,6 +1545,14 @@ tx_cksum_set(portid_t port_id, uint32_t ol_flags)
}
void
+tso_set(portid_t port_id, uint16_t mss)
+{
+ if (port_id_is_invalid(port_id))
+ return;
+ ports[port_id].tx_mss = mss;
+}
+
+void
fdir_add_signature_filter(portid_t port_id, uint8_t queue_id,
struct rte_fdir_filter *fdir_filter)
{
diff --git a/app/test-pmd/csumonly.c b/app/test-pmd/csumonly.c
index e93d75f..9983618 100644
--- a/app/test-pmd/csumonly.c
+++ b/app/test-pmd/csumonly.c
@@ -220,10 +220,12 @@ pkt_burst_checksum_forward(struct fwd_stream *fs)
uint32_t ol_flags;
uint32_t pkt_ol_flags;
uint32_t tx_ol_flags;
+ uint16_t tx_mss;
uint16_t l4_proto;
uint16_t eth_type;
uint8_t l2_len;
uint8_t l3_len;
+ uint8_t l4_len;
uint32_t rx_bad_ip_csum;
uint32_t rx_bad_l4_csum;
@@ -255,6 +257,7 @@ pkt_burst_checksum_forward(struct fwd_stream *fs)
txp = &ports[fs->tx_port];
tx_ol_flags = txp->tx_ol_flags;
+ tx_mss = txp->tx_mss;
for (i = 0; i < nb_rx; i++) {
@@ -272,6 +275,7 @@ pkt_burst_checksum_forward(struct fwd_stream *fs)
((uintptr_t)ð_hdr->ether_type +
sizeof(struct vlan_hdr)));
}
+ l4_len = 0;
/* Update the L3/L4 checksum error packet count */
rx_bad_ip_csum += ((pkt_ol_flags & PKT_RX_IP_CKSUM_BAD) != 0);
@@ -347,6 +351,11 @@ pkt_burst_checksum_forward(struct fwd_stream *fs)
tcp_hdr->cksum = get_ipv4_udptcp_checksum(ipv4_hdr,
(uint16_t*)tcp_hdr);
}
+
+ if (tx_mss != 0) {
+ ol_flags |= PKT_TX_TCP_SEG;
+ l4_len = (tcp_hdr->data_off & 0xf0) >> 2;
+ }
}
else if (l4_proto == IPPROTO_SCTP) {
sctp_hdr = (struct sctp_hdr*) (rte_pktmbuf_mtod(mb,
@@ -404,6 +413,11 @@ pkt_burst_checksum_forward(struct fwd_stream *fs)
tcp_hdr->cksum = get_ipv6_udptcp_checksum(ipv6_hdr,
(uint16_t*)tcp_hdr);
}
+
+ if (tx_mss != 0) {
+ ol_flags |= PKT_TX_TCP_SEG;
+ l4_len = (tcp_hdr->data_off & 0xf0) >> 2;
+ }
}
else if (l4_proto == IPPROTO_SCTP) {
sctp_hdr = (struct sctp_hdr*) (rte_pktmbuf_mtod(mb,
@@ -434,6 +448,8 @@ pkt_burst_checksum_forward(struct fwd_stream *fs)
/* Combine the packet header write. VLAN is not considered here */
mb->hw_offload.l2_len = l2_len;
mb->hw_offload.l3_len = l3_len;
+ mb->hw_offload.l4_len = l4_len;
+ mb->hw_offload.mss = tx_mss;
mb->ol_flags = ol_flags;
}
nb_tx = rte_eth_tx_burst(fs->tx_port, fs->tx_queue, pkts_burst, nb_rx);
diff --git a/app/test-pmd/testpmd.h b/app/test-pmd/testpmd.h
index 68eccfa..b093911 100644
--- a/app/test-pmd/testpmd.h
+++ b/app/test-pmd/testpmd.h
@@ -135,6 +135,7 @@ struct rte_port {
struct fwd_stream *tx_stream; /**< Port TX stream, if unique */
unsigned int socket_id; /**< For NUMA support */
uint32_t tx_ol_flags;/**< Offload Flags of TX packets. */
+ uint16_t tx_mss; /**< MSS for segmentation offload. */
uint16_t tx_vlan_id; /**< Tag Id. in TX VLAN packets. */
void *fwd_ctx; /**< Forwarding mode context */
uint64_t rx_bad_ip_csum; /**< rx pkts with bad ip checksum */
@@ -486,6 +487,7 @@ void tx_vlan_reset(portid_t port_id);
void set_qmap(portid_t port_id, uint8_t is_rx, uint16_t queue_id, uint8_t map_value);
void tx_cksum_set(portid_t port_id, uint32_t ol_flags);
+void tso_set(portid_t port_id, uint16_t mss);
void set_verbose_level(uint16_t vb_level);
void set_tx_pkt_segments(unsigned *seg_lengths, unsigned nb_segs);
diff --git a/lib/librte_mbuf/rte_mbuf.h b/lib/librte_mbuf/rte_mbuf.h
index 1e63511..c65ab81 100644
--- a/lib/librte_mbuf/rte_mbuf.h
+++ b/lib/librte_mbuf/rte_mbuf.h
@@ -96,6 +96,17 @@ extern "C" {
#define PKT_TX_SCTP_CKSUM 0x00080000 /**< SCTP cksum of TX pkt. computed by NIC. */
#define PKT_TX_UDP_CKSUM 0x000C0000 /**< UDP cksum of TX pkt. computed by NIC. */
#define PKT_TX_IEEE1588_TMST 0x00100000 /**< TX IEEE1588 packet to timestamp. */
+/**
+ * TCP segmentation offload. To enable this offload feature for a
+ * packet to be transmitted on hardware supporting TSO:
+ * - set the PKT_TX_TCP_SEG flag in mbuf->ol_flags (this flag implies
+ * PKT_TX_IP_CKSUM and PKT_TX_TCP_CKSUM)
+ * - fill the mbuf->hw_offload information: l2_len, l3_len, l4_len, mss
+ * - calculate the pseudo header checksum and set it in the TCP header,
+ * as required when doing hardware TCP checksum offload
+ * - set the IP checksum to 0
+ */
+#define PKT_TX_TCP_SEG 0x00200000
/**
* Get the name of a RX offload flag
@@ -144,6 +155,7 @@ static inline const char *rte_get_tx_ol_flag_name(uint32_t mask)
case PKT_TX_SCTP_CKSUM: return "PKT_TX_SCTP_CKSUM";
case PKT_TX_UDP_CKSUM: return "PKT_TX_UDP_CKSUM";
case PKT_TX_IEEE1588_TMST: return "PKT_TX_IEEE1588_TMST";
+ case PKT_TX_TCP_SEG: return "PKT_TX_TCP_SEG";
default: return NULL;
}
}
@@ -157,11 +169,12 @@ union rte_hw_offload {
#define HW_OFFLOAD_L4_LEN_MASK 0xff
uint32_t l2_len:7; /**< L2 (MAC) Header Length. */
uint32_t l3_len:9; /**< L3 (IP) Header Length. */
- uint32_t reserved:16;
+ uint32_t l4_len:8; /**< L4 (TCP/UDP) Header Length. */
+ uint32_t reserved:8;
uint16_t vlan_tci;
/**< VLAN Tag Control Identifier (CPU order). */
- uint16_t reserved2;
+ uint16_t mss; /**< Maximum segment size. */
};
};
--
1.9.2
* [dpdk-dev] [PATCH v2 13/17] ixgbe: support TCP segmentation offload
2014-05-19 13:56 [dpdk-dev] [PATCH v2 00/17] ixgbe/mbuf: add TSO support Olivier Matz
` (11 preceding siblings ...)
2014-05-19 13:56 ` [dpdk-dev] [PATCH v2 12/17] mbuf: generic support of TCP segmentation offload Olivier Matz
@ 2014-05-19 13:56 ` Olivier Matz
2014-05-19 13:56 ` [dpdk-dev] [virtio-net-pmd PATCH v2 14/17] pmd: adapt to new rte_mbuf structure Olivier Matz
` (5 subsequent siblings)
18 siblings, 0 replies; 28+ messages in thread
From: Olivier Matz @ 2014-05-19 13:56 UTC (permalink / raw)
To: dev
Implement TSO (TCP segmentation offload) in ixgbe driver.
The driver is now able to use the PKT_TX_TCP_SEG mbuf flag and m->hw_offload
to configure the hardware support of TCP segmentation.
In the patch, the tx_desc_cksum_flags_to_olinfo() and
tx_desc_ol_flags_to_cmdtype() functions have been reworked to make them
clearer. This should not impact performance as gcc (version 4.8 in my
case) is smart enough to convert the tests into code that does not
contain any branch instruction.
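To illustrate the transformation being relied on, each test can be
lowered by the compiler into a mask-and-or sequence along these lines
(a hand-written equivalent for one flag, not code from this patch):

        /* branch-free form of:
         * if (ol_flags & PKT_TX_IP_CKSUM)
         *         tmp |= IXGBE_ADVTXD_POPTS_IXSM; */
        tmp |= IXGBE_ADVTXD_POPTS_IXSM &
                -(uint32_t)((ol_flags & PKT_TX_IP_CKSUM) != 0);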
validation
==========
platform:
Tester (linux) <----> DUT (DPDK)
Run testpmd on DUT:
cd dpdk.org/
make install T=x86_64-default-linuxapp-gcc
cd x86_64-default-linuxapp-gcc/
modprobe uio
insmod kmod/igb_uio.ko
python ../tools/igb_uio_bind.py -b igb_uio 0000:02:00.0
echo 0 > /proc/sys/kernel/randomize_va_space
echo 1000 > /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages
echo 1000 > /sys/devices/system/node/node1/hugepages/hugepages-2048kB/nr_hugepages
mount -t hugetlbfs none /mnt/huge
./app/testpmd -c 0x55 -n 4 -m 800 -- -i --port-topology=chained
Disable all offload features on Tester, and start the capture:
ethtool -K ixgbe0 rx off tx off tso off gso off gro off lro off
ip l set ixgbe0 up
tcpdump -n -e -i ixgbe0 -s 0 -w /tmp/cap
We use the following scapy script for testing:
def test():
############### IPv4
# checksum TCP
p=Ether()/IP(src=RandIP(), dst=RandIP())/TCP(flags=0x10)/Raw(RandString(50))
sendp(p, iface="ixgbe0", count=5)
# checksum UDP
p=Ether()/IP(src=RandIP(), dst=RandIP())/UDP()/Raw(RandString(50))
sendp(p, iface="ixgbe0", count=5)
# bad IP checksum
p=Ether()/IP(src=RandIP(), dst=RandIP(), chksum=0x1234)/TCP(flags=0x10)/Raw(RandString(50))
sendp(p, iface="ixgbe0", count=5)
# bad TCP checksum
p=Ether()/IP(src=RandIP(), dst=RandIP())/TCP(flags=0x10, chksum=0x1234)/Raw(RandString(50))
sendp(p, iface="ixgbe0", count=5)
# large packet
p=Ether()/IP(src=RandIP(), dst=RandIP())/TCP(flags=0x10)/Raw(RandString(1400))
sendp(p, iface="ixgbe0", count=5)
############### IPv6
# checksum TCP
p=Ether()/IPv6(src=RandIP6(), dst=RandIP6())/TCP(flags=0x10)/Raw(RandString(50))
sendp(p, iface="ixgbe0", count=5)
# checksum UDP
p=Ether()/IPv6(src=RandIP6(), dst=RandIP6())/UDP()/Raw(RandString(50))
sendp(p, iface="ixgbe0", count=5)
# bad TCP checksum
p=Ether()/IPv6(src=RandIP6(), dst=RandIP6())/TCP(flags=0x10, chksum=0x1234)/Raw(RandString(50))
sendp(p, iface="ixgbe0", count=5)
# large packet
p=Ether()/IPv6(src=RandIP6(), dst=RandIP6())/TCP(flags=0x10)/Raw(RandString(1400))
sendp(p, iface="ixgbe0", count=5)
Without hw cksum
----------------
On DUT:
# disable hw cksum (use sw) in csumonly test, disable tso
stop
set fwd csum
tx_checksum set 0x0 0
tso set 0 0
start
On tester:
>>> test()
Then check the capture file.
With hw cksum
-------------
On DUT:
# enable hw cksum in csumonly test, disable tso
stop
set fwd csum
tx_checksum set 0xf 0
tso set 0 0
start
On tester:
>>> test()
Then check the capture file.
With TSO
--------
On DUT:
set fwd csum
tx_checksum set 0xf 0
tso set 800 0
start
On tester:
>>> test()
Then check the capture file.
Performance tests
=================
Performance tests have been done on v1 of this patch series.
See http://dpdk.org/ml/archives/dev/2014-May/002516.html
Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
---
lib/librte_pmd_ixgbe/ixgbe_ethdev.c | 3 +-
lib/librte_pmd_ixgbe/ixgbe_rxtx.c | 169 +++++++++++++++++++++++++++---------
2 files changed, 129 insertions(+), 43 deletions(-)
diff --git a/lib/librte_pmd_ixgbe/ixgbe_ethdev.c b/lib/librte_pmd_ixgbe/ixgbe_ethdev.c
index e78c208..2d5e524 100644
--- a/lib/librte_pmd_ixgbe/ixgbe_ethdev.c
+++ b/lib/librte_pmd_ixgbe/ixgbe_ethdev.c
@@ -1751,7 +1751,8 @@ ixgbe_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
DEV_TX_OFFLOAD_IPV4_CKSUM |
DEV_TX_OFFLOAD_UDP_CKSUM |
DEV_TX_OFFLOAD_TCP_CKSUM |
- DEV_TX_OFFLOAD_SCTP_CKSUM;
+ DEV_TX_OFFLOAD_SCTP_CKSUM |
+ DEV_TX_OFFLOAD_TCP_TSO;
}
/* return 0 means link status changed, -1 means not changed */
diff --git a/lib/librte_pmd_ixgbe/ixgbe_rxtx.c b/lib/librte_pmd_ixgbe/ixgbe_rxtx.c
index e1eb59d..541dd58 100644
--- a/lib/librte_pmd_ixgbe/ixgbe_rxtx.c
+++ b/lib/librte_pmd_ixgbe/ixgbe_rxtx.c
@@ -347,13 +347,59 @@ ixgbe_xmit_pkts_simple(void *tx_queue, struct rte_mbuf **tx_pkts,
return nb_tx;
}
+/* When doing TSO, the IP length must not be included in the pseudo
+ * header checksum of the packet given to the hardware */
+static inline void
+ixgbe_fix_tcp_phdr_cksum(struct rte_mbuf *m)
+{
+ char *data;
+ uint16_t *cksum_ptr;
+ uint16_t prev_cksum;
+ uint16_t new_cksum;
+ uint16_t ip_len, ip_paylen;
+ uint32_t tmp;
+ uint8_t ip_version;
+
+ /* get phdr cksum at offset 16 of TCP header */
+ data = rte_pktmbuf_mtod(m, char *);
+ cksum_ptr = (uint16_t *)(data + m->hw_offload.l2_len +
+ m->hw_offload.l3_len + 16);
+ prev_cksum = *cksum_ptr;
+
+ /* get ip_version */
+ ip_version = (*(uint8_t *)(data + m->hw_offload.l2_len)) >> 4;
+
+ /* get ip_len at offset 2 of IP header or offset 4 of IPv6 header */
+ if (ip_version == 4) {
+ /* override ip cksum to 0 */
+ data[m->hw_offload.l2_len + 10] = 0;
+ data[m->hw_offload.l2_len + 11] = 0;
+
+ ip_len = *(uint16_t *)(data + m->hw_offload.l2_len + 2);
+ ip_paylen = rte_cpu_to_be_16(rte_be_to_cpu_16(ip_len) -
+ m->hw_offload.l3_len);
+ } else {
+ ip_paylen = *(uint16_t *)(data + m->hw_offload.l2_len + 4);
+ }
+
+ /* calculate the new phdr checksum that doesn't include ip_paylen */
+ tmp = prev_cksum ^ 0xffff;
+ if (tmp < ip_paylen)
+ tmp += 0xffff;
+ tmp -= ip_paylen;
+ new_cksum = tmp;
+
+ /* replace it in the packet */
+ *cksum_ptr = new_cksum;
+}
+
static inline void
ixgbe_set_xmit_ctx(struct igb_tx_queue* txq,
volatile struct ixgbe_adv_tx_context_desc *ctx_txd,
uint32_t ol_flags, union rte_hw_offload hw_offload)
{
uint32_t type_tucmd_mlhl;
- uint32_t mss_l4len_idx;
+ uint32_t mss_l4len_idx = 0;
uint32_t ctx_idx;
uint32_t vlan_macip_lens;
union rte_hw_offload offload_mask;
@@ -362,44 +408,61 @@ ixgbe_set_xmit_ctx(struct igb_tx_queue* txq,
offload_mask.u64 = 0;
type_tucmd_mlhl = 0;
+ /* Specify which HW CTX to upload. */
+ mss_l4len_idx |= (ctx_idx << IXGBE_ADVTXD_IDX_SHIFT);
+
if (ol_flags & PKT_TX_VLAN_PKT) {
offload_mask.vlan_tci = 0xffff;
}
- if (ol_flags & PKT_TX_IP_CKSUM) {
- type_tucmd_mlhl = IXGBE_ADVTXD_TUCMD_IPV4;
+ /* check if TCP segmentation required for this packet */
+ if (ol_flags & PKT_TX_TCP_SEG) {
+ /* implies IP cksum and TCP cksum */
+ type_tucmd_mlhl = IXGBE_ADVTXD_TUCMD_IPV4 |
+ IXGBE_ADVTXD_TUCMD_L4T_TCP |
+ IXGBE_ADVTXD_DTYP_CTXT | IXGBE_ADVTXD_DCMD_DEXT;
+
offload_mask.l2_len = HW_OFFLOAD_L2_LEN_MASK;
offload_mask.l3_len = HW_OFFLOAD_L3_LEN_MASK;
- }
+ offload_mask.l4_len = HW_OFFLOAD_L4_LEN_MASK;
+ offload_mask.mss = 0xffff;
+ mss_l4len_idx |= hw_offload.mss << IXGBE_ADVTXD_MSS_SHIFT;
+ mss_l4len_idx |= hw_offload.l4_len << IXGBE_ADVTXD_L4LEN_SHIFT;
+ } else { /* no TSO, check if hardware checksum is needed */
+ if (ol_flags & PKT_TX_IP_CKSUM) {
+ type_tucmd_mlhl = IXGBE_ADVTXD_TUCMD_IPV4;
+ offload_mask.l2_len = HW_OFFLOAD_L2_LEN_MASK;
+ offload_mask.l3_len = HW_OFFLOAD_L3_LEN_MASK;
+ }
- /* Specify which HW CTX to upload. */
- mss_l4len_idx = (ctx_idx << IXGBE_ADVTXD_IDX_SHIFT);
- switch (ol_flags & PKT_TX_L4_MASK) {
- case PKT_TX_UDP_CKSUM:
- type_tucmd_mlhl |= IXGBE_ADVTXD_TUCMD_L4T_UDP |
+ switch (ol_flags & PKT_TX_L4_MASK) {
+ case PKT_TX_UDP_CKSUM:
+ type_tucmd_mlhl |= IXGBE_ADVTXD_TUCMD_L4T_UDP |
IXGBE_ADVTXD_DTYP_CTXT | IXGBE_ADVTXD_DCMD_DEXT;
- mss_l4len_idx |= sizeof(struct udp_hdr) << IXGBE_ADVTXD_L4LEN_SHIFT;
- offload_mask.l2_len = HW_OFFLOAD_L2_LEN_MASK;
- offload_mask.l3_len = HW_OFFLOAD_L3_LEN_MASK;
- break;
- case PKT_TX_TCP_CKSUM:
- type_tucmd_mlhl |= IXGBE_ADVTXD_TUCMD_L4T_TCP |
+ mss_l4len_idx |= sizeof(struct udp_hdr) << IXGBE_ADVTXD_L4LEN_SHIFT;
+ offload_mask.l2_len = HW_OFFLOAD_L2_LEN_MASK;
+ offload_mask.l3_len = HW_OFFLOAD_L3_LEN_MASK;
+ break;
+ case PKT_TX_TCP_CKSUM:
+ type_tucmd_mlhl |= IXGBE_ADVTXD_TUCMD_L4T_TCP |
IXGBE_ADVTXD_DTYP_CTXT | IXGBE_ADVTXD_DCMD_DEXT;
- mss_l4len_idx |= sizeof(struct tcp_hdr) << IXGBE_ADVTXD_L4LEN_SHIFT;
- offload_mask.l2_len = HW_OFFLOAD_L2_LEN_MASK;
- offload_mask.l3_len = HW_OFFLOAD_L3_LEN_MASK;
- break;
- case PKT_TX_SCTP_CKSUM:
- type_tucmd_mlhl |= IXGBE_ADVTXD_TUCMD_L4T_SCTP |
+ mss_l4len_idx |= sizeof(struct tcp_hdr) << IXGBE_ADVTXD_L4LEN_SHIFT;
+ offload_mask.l2_len = HW_OFFLOAD_L2_LEN_MASK;
+ offload_mask.l3_len = HW_OFFLOAD_L3_LEN_MASK;
+ offload_mask.l4_len = HW_OFFLOAD_L4_LEN_MASK;
+ break;
+ case PKT_TX_SCTP_CKSUM:
+ type_tucmd_mlhl |= IXGBE_ADVTXD_TUCMD_L4T_SCTP |
IXGBE_ADVTXD_DTYP_CTXT | IXGBE_ADVTXD_DCMD_DEXT;
- mss_l4len_idx |= sizeof(struct sctp_hdr) << IXGBE_ADVTXD_L4LEN_SHIFT;
- offload_mask.l2_len = HW_OFFLOAD_L2_LEN_MASK;
- offload_mask.l3_len = HW_OFFLOAD_L3_LEN_MASK;
- break;
- default:
- type_tucmd_mlhl |= IXGBE_ADVTXD_TUCMD_L4T_RSV |
+ mss_l4len_idx |= sizeof(struct sctp_hdr) << IXGBE_ADVTXD_L4LEN_SHIFT;
+ offload_mask.l2_len = HW_OFFLOAD_L2_LEN_MASK;
+ offload_mask.l3_len = HW_OFFLOAD_L3_LEN_MASK;
+ break;
+ default:
+ type_tucmd_mlhl |= IXGBE_ADVTXD_TUCMD_L4T_RSV |
IXGBE_ADVTXD_DTYP_CTXT | IXGBE_ADVTXD_DCMD_DEXT;
- break;
+ break;
+ }
}
txq->ctx_cache[ctx_idx].flags = ol_flags;
@@ -446,20 +509,25 @@ what_advctx_update(struct igb_tx_queue *txq, uint32_t flags,
static inline uint32_t
tx_desc_cksum_flags_to_olinfo(uint32_t ol_flags)
{
- static const uint32_t l4_olinfo[2] = {0, IXGBE_ADVTXD_POPTS_TXSM};
- static const uint32_t l3_olinfo[2] = {0, IXGBE_ADVTXD_POPTS_IXSM};
- uint32_t tmp;
-
- tmp = l4_olinfo[(ol_flags & PKT_TX_L4_MASK) != PKT_TX_L4_NO_CKSUM];
- tmp |= l3_olinfo[(ol_flags & PKT_TX_IP_CKSUM) != 0];
+ uint32_t tmp = 0;
+ if ((ol_flags & PKT_TX_L4_MASK) != PKT_TX_L4_NO_CKSUM)
+ tmp |= IXGBE_ADVTXD_POPTS_TXSM;
+ if (ol_flags & PKT_TX_IP_CKSUM)
+ tmp |= IXGBE_ADVTXD_POPTS_IXSM;
+ if (ol_flags & PKT_TX_TCP_SEG)
+ tmp |= IXGBE_ADVTXD_POPTS_TXSM | IXGBE_ADVTXD_POPTS_IXSM;
return tmp;
}
static inline uint32_t
-tx_desc_vlan_flags_to_cmdtype(uint32_t ol_flags)
+tx_desc_ol_flags_to_cmdtype(uint32_t ol_flags)
{
- static const uint32_t vlan_cmd[2] = {0, IXGBE_ADVTXD_DCMD_VLE};
- return vlan_cmd[(ol_flags & PKT_TX_VLAN_PKT) != 0];
+ uint32_t cmdtype = 0;
+ if (ol_flags & PKT_TX_VLAN_PKT)
+ cmdtype |= IXGBE_ADVTXD_DCMD_VLE;
+ if (ol_flags & PKT_TX_TCP_SEG)
+ cmdtype |= IXGBE_ADVTXD_DCMD_TSE;
+ return cmdtype;
}
/* Default RS bit threshold values */
@@ -583,7 +651,8 @@ ixgbe_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
/* If hardware offload required */
tx_ol_req = ol_flags &
- (PKT_TX_VLAN_PKT | PKT_TX_IP_CKSUM | PKT_TX_L4_MASK);
+ (PKT_TX_VLAN_PKT | PKT_TX_IP_CKSUM | PKT_TX_L4_MASK |
+ PKT_TX_TCP_SEG);
if (tx_ol_req) {
/* If new context need be built or reuse the exist ctx. */
ctx = what_advctx_update(txq, tx_ol_req,
@@ -702,13 +771,26 @@ ixgbe_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
*/
cmd_type_len = IXGBE_ADVTXD_DTYP_DATA |
IXGBE_ADVTXD_DCMD_IFCS | IXGBE_ADVTXD_DCMD_DEXT;
- olinfo_status = (pkt_len << IXGBE_ADVTXD_PAYLEN_SHIFT);
+
#ifdef RTE_LIBRTE_IEEE1588
if (ol_flags & PKT_TX_IEEE1588_TMST)
cmd_type_len |= IXGBE_ADVTXD_MAC_1588;
#endif
+ olinfo_status = 0;
if (tx_ol_req) {
+
+ if (ol_flags & PKT_TX_TCP_SEG) {
+ /* paylen in descriptor is not the packet
+ * len but the tcp payload len if TSO is on */
+ pkt_len -= (hw_offload.l2_len +
+ hw_offload.l3_len + hw_offload.l4_len);
+
+ /* the pseudo header checksum must be modified:
+ * it should not include the ip_len */
+ ixgbe_fix_tcp_phdr_cksum(tx_pkt);
+ }
+
/*
* Setup the TX Advanced Context Descriptor if required
*/
@@ -741,11 +823,13 @@ ixgbe_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
* This path will go through
* whatever new/reuse the context descriptor
*/
- cmd_type_len |= tx_desc_vlan_flags_to_cmdtype(ol_flags);
+ cmd_type_len |= tx_desc_ol_flags_to_cmdtype(ol_flags);
olinfo_status |= tx_desc_cksum_flags_to_olinfo(ol_flags);
olinfo_status |= ctx << IXGBE_ADVTXD_IDX_SHIFT;
}
+ olinfo_status |= (pkt_len << IXGBE_ADVTXD_PAYLEN_SHIFT);
+
m_seg = tx_pkt;
do {
txd = &txr[tx_id];
@@ -3420,9 +3504,10 @@ ixgbe_dev_tx_init(struct rte_eth_dev *dev)
PMD_INIT_FUNC_TRACE();
hw = IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
- /* Enable TX CRC (checksum offload requirement) */
+ /* Enable TX CRC (checksum offload requirement) and hw padding
+ * (TSO requirement) */
hlreg0 = IXGBE_READ_REG(hw, IXGBE_HLREG0);
- hlreg0 |= IXGBE_HLREG0_TXCRCEN;
+ hlreg0 |= (IXGBE_HLREG0_TXCRCEN | IXGBE_HLREG0_TXPADEN);
IXGBE_WRITE_REG(hw, IXGBE_HLREG0, hlreg0);
/* Setup the Base and Length of the Tx Descriptor Rings */
--
1.9.2
* [dpdk-dev] [virtio-net-pmd PATCH v2 14/17] pmd: adapt to new rte_mbuf structure
2014-05-19 13:56 [dpdk-dev] [PATCH v2 00/17] ixgbe/mbuf: add TSO support Olivier Matz
` (12 preceding siblings ...)
2014-05-19 13:56 ` [dpdk-dev] [PATCH v2 13/17] ixgbe: support " Olivier Matz
@ 2014-05-19 13:56 ` Olivier Matz
2014-05-19 13:56 ` [dpdk-dev] [vmxnet3-usermap PATCH v2 15/17] pmd: remove support of old dpdk versions Olivier Matz
` (4 subsequent siblings)
18 siblings, 0 replies; 28+ messages in thread
From: Olivier Matz @ 2014-05-19 13:56 UTC (permalink / raw)
To: dev
The rte_mbuf structure is modified by the following commits in dpdk:
mbuf: rename vlan_macip_len in hw_offload and increase its size
mbuf: change ol_flags to 32 bits
mbuf: replace data pointer by an offset
mbuf: merge physaddr and buf_len in a bitfield
mbuf: remove the rte_pktmbuf structure
mbuf: remove rte_ctrlmbuf
We need to modify the driver to conform to this new structure.
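For reference, the mechanical part of the conversion follows this
pattern everywhere (an illustrative snippet, not a hunk from the dpdk
tree):

        /* before the rework: the mbuf carried an absolute data pointer */
        char *data = m->pkt.data;
        /* after the rework: data_off is an offset relative to buf_addr,
         * which is what rte_pktmbuf_mtod(m, char *) now expands to */
        char *data = (char *)m->buf_addr + m->data_off;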
Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
---
virtio_user.c | 44 +++++++++++++++++++++-----------------------
1 file changed, 21 insertions(+), 23 deletions(-)
diff --git a/virtio_user.c b/virtio_user.c
index 4e73aa3..5df7dde 100644
--- a/virtio_user.c
+++ b/virtio_user.c
@@ -192,11 +192,11 @@ struct virtio_pmd_ctrl {
typedef struct igb_tx_queue rte_txq_t;
typedef struct igb_rx_queue rte_rxq_t;
/* until dpdk-1.3, bit-fields vlan_macip are flat in rte_pktmbuf */
-#define RTE_MBUF_VLAN_MAC_IP(m) (m)->pkt.vlan_tci
+#define RTE_MBUF_VLAN_MAC_IP(m) (m)->vlan_tci
#else /* since dpdk-1.3 */
typedef void rte_txq_t;
typedef void rte_rxq_t;
-#define RTE_MBUF_VLAN_MAC_IP(m) (m)->pkt.vlan_macip.f.vlan_tci
+#define RTE_MBUF_VLAN_MAC_IP(m) (m)->hw_offload.vlan_tci
#endif
/* This part is taken from ixgbe and igb pmd drivers. */
@@ -205,14 +205,12 @@ rte_rxmbuf_alloc(struct rte_mempool* mp)
{
struct rte_mbuf *m;
m = __rte_mbuf_raw_alloc(mp);
- __rte_mbuf_sanity_check(m, RTE_MBUF_PKT, 0);
+ __rte_mbuf_sanity_check(m, 0);
return m;
}
#define RTE_MBUF_DATA_DMA_ADDR(mb) \
- ((uint64_t)(mb)->pkt.data - \
- (uint64_t)(mb)->buf_addr) + \
- (mb)->buf_physaddr
+ (uint64_t) ((mb)->buf_physaddr + (mb)->data_off)
static inline int
virtio_has_feature(const struct virtio_net_adapter* adapter,
@@ -442,7 +440,7 @@ virtio_recv_buf(rte_rxq_t *rxq, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
i = idx;
buf = (struct rte_mbuf*)rq->user[i];
- buf->pkt.data = ((char *)buf->buf_addr + RTE_PKTMBUF_HEADROOM);
+ buf->data_off = RTE_PKTMBUF_HEADROOM;
ptr = (struct virtio_pmd_net_hdr*)
((char*)buf->buf_addr + hdr_offt);
@@ -460,11 +458,11 @@ virtio_recv_buf(rte_rxq_t *rxq, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
rq->last_used++;
rq->freeslots++;
- buf->pkt.nb_segs = n_frag;
- buf->pkt.next = NULL;
- buf->pkt.pkt_len = len;
- buf->pkt.data_len = len;
- buf->pkt.in_port = rq->port_id;
+ buf->nb_segs = n_frag;
+ buf->next = NULL;
+ buf->pkt_len = len;
+ buf->data_len = len;
+ buf->in_port = rq->port_id;
buf->ol_flags = 0; /* RSS - FDIR // Error */
RTE_MBUF_VLAN_MAC_IP(buf) = 0;
@@ -485,17 +483,17 @@ virtio_recv_buf(rte_rxq_t *rxq, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
i = idx;
buf = (struct rte_mbuf*)rq->user[i];
- buf->pkt.data = ((char *)buf->buf_addr + hdr_offt);
+ buf->data_off = hdr_offt;
- buf->pkt.next = NULL;
- buf->pkt.data_len = len;
+ buf->next = NULL;
+ buf->data_len = len;
/* New tail */
- cur_seg->pkt.next = buf;
+ cur_seg->next = buf;
cur_seg = buf;
/* Update head pkt_len */
- rx_pkts[completed]->pkt.pkt_len += len;
+ rx_pkts[completed]->pkt_len += len;
rq->vr.desc[i].next = rq->head;
rq->head = idx;
@@ -505,7 +503,7 @@ virtio_recv_buf(rte_rxq_t *rxq, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
n_frag--;
}
- rq->stats.ibytes += rx_pkts[completed]->pkt.pkt_len;
+ rq->stats.ibytes += rx_pkts[completed]->pkt_len;
rq->stats.ipackets++;
completed++;
}
@@ -613,11 +611,11 @@ static inline int
virtio_send_packet(struct virtio_net_vring* tq, struct rte_mbuf* mbuf)
{
unsigned int head, prev, i, avail;
- unsigned int nseg = mbuf->pkt.nb_segs;
+ unsigned int nseg = mbuf->nb_segs;
struct rte_mbuf* cur = mbuf;
unsigned int hdr_len;
unsigned long hdr_phys;
- uint32_t len = mbuf->pkt.pkt_len;
+ uint32_t len = mbuf->pkt_len;
/* No need to send empty packets. */
if (len == 0) {
@@ -642,13 +640,13 @@ virtio_send_packet(struct virtio_net_vring* tq, struct rte_mbuf* mbuf)
tq->vr.desc[head].len = hdr_len;
tq->user[head] = NULL;
- for (i = tq->vr.desc[head].next; nseg > 0; nseg--, cur = cur->pkt.next) {
- if (!cur->pkt.data_len) {
+ for (i = tq->vr.desc[head].next; nseg > 0; nseg--, cur = cur->next) {
+ if (!cur->data_len) {
continue;
}
tq->vr.desc[i].flags = VRING_DESC_F_NEXT;
tq->vr.desc[i].addr = RTE_MBUF_DATA_DMA_ADDR(cur);
- tq->vr.desc[i].len = cur->pkt.data_len;
+ tq->vr.desc[i].len = cur->data_len;
tq->user[i] = cur;
prev = i;
i = tq->vr.desc[i].next;
--
1.9.2
* [dpdk-dev] [vmxnet3-usermap PATCH v2 15/17] pmd: remove support of old dpdk versions
2014-05-19 13:56 [dpdk-dev] [PATCH v2 00/17] ixgbe/mbuf: add TSO support Olivier Matz
` (13 preceding siblings ...)
2014-05-19 13:56 ` [dpdk-dev] [virtio-net-pmd PATCH v2 14/17] pmd: adapt to new rte_mbuf structure Olivier Matz
@ 2014-05-19 13:56 ` Olivier Matz
2014-05-19 13:56 ` [dpdk-dev] [vmxnet3-usermap PATCH v2 16/17] pmd: adapt to new rte_mbuf structure Olivier Matz
` (3 subsequent siblings)
18 siblings, 0 replies; 28+ messages in thread
From: Olivier Matz @ 2014-05-19 13:56 UTC (permalink / raw)
To: dev
It's a pain to support many versions of DPDK for external drivers.
Today, it's not required to keep this driver working on older DPDK
versions. If it's required in the future, as the DPDK API is not stable
enough, the correct approach will probably to have several maintenance
branches instead of trying to have a driver that compiles with all
DPDK versions.
Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
---
pmd/vmxnet3.c | 68 -----------------------------------------------------------
1 file changed, 68 deletions(-)
diff --git a/pmd/vmxnet3.c b/pmd/vmxnet3.c
index a25ea41..3f3f715 100644
--- a/pmd/vmxnet3.c
+++ b/pmd/vmxnet3.c
@@ -47,26 +47,14 @@
/* Dependency over DPDK */
#include <rte_byteorder.h>
#include <rte_ethdev.h>
-#include <rte_version.h>
-
-#define DPDK_VERSION_BEFORE(x,y) RTE_VERSION < RTE_VERSION_NUM(x,y,0,0)
-#define DPDK_VERSION_SINCE(x,y) RTE_VERSION >= RTE_VERSION_NUM(x,y,0,0)
/* Local includes */
#include "vmxnet3_shm_shared.h"
#include "vmx_user.h"
-#if DPDK_VERSION_BEFORE(1,3)
-/* until dpdk-1.3, queue structs are igb ones */
-typedef struct igb_tx_queue rte_txq_t;
-typedef struct igb_rx_queue rte_rxq_t;
-/* until dpdk-1.3, bit-fields vlan_macip are flat in rte_pktmbuf */
-#define RTE_MBUF_VLAN_MAC_IP(m) (m)->pkt.vlan_tci
-#else /* since dpdk-1.3 */
typedef void rte_txq_t;
typedef void rte_rxq_t;
#define RTE_MBUF_VLAN_MAC_IP(m) (m)->pkt.vlan_macip.f.vlan_tci
-#endif
#define VMXNET3_MAX_MAC_ADDRS 1
@@ -747,11 +735,7 @@ vmxnet3_rss_configure(const struct rte_eth_dev *dev)
* Clear VLAN Filter table and declare VLAN filtering capability,
* so that we can enable VLAN filtering by simply adding entries in it.
*/
-#if DPDK_VERSION_SINCE(1,3)
static int
-#else
-static void
-#endif
vmxnet3_vlan_filter_set(struct rte_eth_dev *dev, uint16_t vlan_id, int on)
{
const struct vmxnet3_adapter *priv;
@@ -765,11 +749,7 @@ vmxnet3_vlan_filter_set(struct rte_eth_dev *dev, uint16_t vlan_id, int on)
if (!(devRead->misc.uptFeatures & UPT1_F_RXVLAN)) {
RTE_LOG(WARNING, PMD, "%s(): No VLAN enabled for device\n",
__func__);
-#if DPDK_VERSION_SINCE(1,3)
return -1;
-#else
- return;
-#endif
}
if (on) {
@@ -778,9 +758,7 @@ vmxnet3_vlan_filter_set(struct rte_eth_dev *dev, uint16_t vlan_id, int on)
VMXNET3_CLEAR_VFTABLE_ENTRY(vfTable, vlan_id);
}
-#if DPDK_VERSION_SINCE(1,3)
return 0;
-#endif
}
/* Exported Rx function after Rx Queue setup. */
@@ -1001,58 +979,14 @@ vmxnet3_dev_start(struct rte_eth_dev *dev)
return ret;
}
-#if DPDK_VERSION_BEFORE(1,3)
-/* Allocate array of pointers to register queues */
-static int
-pointer_array_alloc(void **pointer_array, uint16_t *size, uint16_t new_size)
-{
- if (*pointer_array == NULL) {
- *pointer_array = malloc(sizeof(void*) * new_size);
- if (*pointer_array == NULL) {
- *size = 0;
- return -ENOMEM;
- }
- } else if (new_size != *size)
- rte_panic("%s(): cannot change queues number\n", __func__);
-
- *size = new_size;
- return 0;
-}
-
-/* Configure VLAN and Flow Director. Flow Director is not supported. */
-static int
-vmxnet3_dev_configure(struct rte_eth_dev *dev,
- uint16_t nb_rx_q, uint16_t nb_tx_q)
-#else
static int
vmxnet3_dev_configure(struct rte_eth_dev *dev)
-#endif
{
const struct vmxnet3_adapter* priv;
struct Vmxnet3_DSDevRead *devRead;
unsigned int i;
u32 *vfTable;
-#if DPDK_VERSION_BEFORE(1,3)
- int ret;
-
- /* Allocate array of pointers to RX queues */
- ret = pointer_array_alloc((void**)&dev->data->rx_queues,
- &dev->data->nb_rx_queues, nb_rx_q);
- if (ret != 0)
- rte_panic("%s(): port_id=%d: allocation of array "
- "of %d pointers to RX queues failed\n",
- __func__, dev->data->port_id, nb_rx_q);
-
- /* Allocate array of pointers to TX queues */
- ret = pointer_array_alloc((void**)&dev->data->tx_queues,
- &dev->data->nb_tx_queues, nb_tx_q);
- if (ret != 0)
- rte_panic("%s(): port_id=%d: allocation of array "
- "of %d pointers to TX queues failed\n",
- __func__, dev->data->port_id, nb_tx_q);
-#endif
-
priv = (struct vmxnet3_adapter*)dev->data->dev_private;
devRead = &priv->shared->devRead;
@@ -1424,11 +1358,9 @@ static struct eth_dev_ops vmxnet3_eth_dev_ops = {
.vlan_filter_set = vmxnet3_vlan_filter_set,
.rx_queue_setup = vmxnet3_dev_rx_queue_setup,
.tx_queue_setup = vmxnet3_dev_tx_queue_setup,
-#if DPDK_VERSION_SINCE(1,3)
/* Reconfiguration not supported yet */
.rx_queue_release = NULL,
.tx_queue_release = NULL,
-#endif
.dev_led_on = NULL,
.dev_led_off = NULL,
.flow_ctrl_set = NULL,
--
1.9.2
* [dpdk-dev] [vmxnet3-usermap PATCH v2 16/17] pmd: adapt to new rte_mbuf structure
2014-05-19 13:56 [dpdk-dev] [PATCH v2 00/17] ixgbe/mbuf: add TSO support Olivier Matz
` (14 preceding siblings ...)
2014-05-19 13:56 ` [dpdk-dev] [vmxnet3-usermap PATCH v2 15/17] pmd: remove support of old dpdk versions Olivier Matz
@ 2014-05-19 13:56 ` Olivier Matz
2014-05-19 13:56 ` [dpdk-dev] [memnic PATCH v2 17/17] " Olivier Matz
` (2 subsequent siblings)
18 siblings, 0 replies; 28+ messages in thread
From: Olivier Matz @ 2014-05-19 13:56 UTC (permalink / raw)
To: dev
The rte_mbuf structure is modified by the following commits in dpdk:
mbuf: rename vlan_macip_len in hw_offload and increase its size
mbuf: change ol_flags to 32 bits
mbuf: replace data pointer by an offset
mbuf: merge physaddr and buf_len in a bitfield
mbuf: remove the rte_pktmbuf structure
mbuf: remove rte_ctrlmbuf
We need to modify the driver to conform to this new structure.
Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
---
pmd/vmxnet3.c | 51 ++++++++++++++++++++++++---------------------------
1 file changed, 24 insertions(+), 27 deletions(-)
diff --git a/pmd/vmxnet3.c b/pmd/vmxnet3.c
index 3f3f715..69bc4d5 100644
--- a/pmd/vmxnet3.c
+++ b/pmd/vmxnet3.c
@@ -54,7 +54,7 @@
typedef void rte_txq_t;
typedef void rte_rxq_t;
-#define RTE_MBUF_VLAN_MAC_IP(m) (m)->pkt.vlan_macip.f.vlan_tci
+#define RTE_MBUF_VLAN_MAC_IP(m) (m)->hw_offload.vlan_tci
#define VMXNET3_MAX_MAC_ADDRS 1
@@ -73,14 +73,12 @@ rte_rxmbuf_alloc(struct rte_mempool *mp)
{
struct rte_mbuf *m;
m = __rte_mbuf_raw_alloc(mp);
- __rte_mbuf_sanity_check(m, RTE_MBUF_PKT, 0);
+ __rte_mbuf_sanity_check(m, 0);
return m;
}
#define RTE_MBUF_DATA_DMA_ADDR(mb) \
- (uint64_t) ((mb)->buf_physaddr + \
- (uint64_t) ((char *)((mb)->pkt.data) - \
- (char *)(mb)->buf_addr))
+ (uint64_t) ((mb)->buf_physaddr + (mb)->data_off)
#ifndef min
#define min(x, y) ((x)<(y) ? (x) : (y))
@@ -314,16 +312,15 @@ queue_rx_complete(struct vmxnet3_rx_queue *rq,
/* Buffer information */
if ((*binfo & 0x01) == VMXNET3_RXD_BTYPE_HEAD) {
- buf->pkt.data = ((char *)buf->buf_addr +
- RTE_PKTMBUF_HEADROOM);
+ buf->data_off = RTE_PKTMBUF_HEADROOM;
} else {
- buf->pkt.data = (char *)buf->buf_addr;
+ buf->data_off = 0;
}
- buf->pkt.nb_segs = 1;
- buf->pkt.next = NULL;
- buf->pkt.pkt_len = rcd->len;
- buf->pkt.data_len = rcd->len;
- buf->pkt.in_port = rq->port_id;
+ buf->nb_segs = 1;
+ buf->next = NULL;
+ buf->pkt_len = rcd->len;
+ buf->data_len = rcd->len;
+ buf->in_port = rq->port_id;
buf->ol_flags = 0; // RSS - FDIR // Error
RTE_MBUF_VLAN_MAC_IP(buf) = 0;
@@ -344,11 +341,11 @@ queue_rx_complete(struct vmxnet3_rx_queue *rq,
head = tail = buf;
prev_tail = NULL;
} else {
- tail->pkt.next = buf;
+ tail->next = buf;
prev_tail = tail;
tail = buf;
- ++head->pkt.nb_segs;
- head->pkt.pkt_len += rcd->len;
+ ++head->nb_segs;
+ head->pkt_len += rcd->len;
}
assert(head != NULL);
@@ -381,8 +378,8 @@ queue_rx_complete(struct vmxnet3_rx_queue *rq,
assert(head != tail);
if (unlikely(rcd->len == 0)) {
/* Remove useless empty fragment. */
- prev_tail->pkt.next = NULL;
- --head->pkt.nb_segs;
+ prev_tail->next = NULL;
+ --head->nb_segs;
tail = prev_tail;
prev_tail = NULL;
rte_pktmbuf_free(buf);
@@ -474,7 +471,7 @@ send_packet(struct vmxnet3_tx_queue *tq,
/* Make sure len doesn't overflow. */
assert(len >= 0);
- assert((unsigned int)len <= mbuf->pkt.pkt_len);
+ assert((unsigned int)len <= mbuf->pkt_len);
/* Zero-sized packets can't be transmitted. Silently ignore them. */
if (len == 0) {
@@ -488,7 +485,7 @@ send_packet(struct vmxnet3_tx_queue *tq,
* than the first segment size and less or equal to
* VMXNET3_HDR_COPY_SIZE.
*/
- copy_size = ((mbuf->pkt.data_len < len) ? mbuf->pkt.data_len : len);
+ copy_size = ((mbuf->data_len < len) ? mbuf->data_len : len);
if (copy_size > VMXNET3_HDR_COPY_SIZE)
copy_size = VMXNET3_HDR_COPY_SIZE;
@@ -502,11 +499,11 @@ send_packet(struct vmxnet3_tx_queue *tq,
* An extra descriptor is required for the data ring part if
* copy_size is nonzero.
*/
- count = mbuf->pkt.nb_segs;
+ count = mbuf->nb_segs;
assert(count >= 1);
- if (mbuf->pkt.data_len == copy_size) {
+ if (mbuf->data_len == copy_size) {
/* First segment is either in the data ring or empty. */
- cur = mbuf->pkt.next;
+ cur = mbuf->next;
offset = 0;
} else {
/* Rest of the first segment needs its own descriptor. */
@@ -550,7 +547,7 @@ send_packet(struct vmxnet3_tx_queue *tq,
dw2 |= (copy_size & (VMXNET3_TXD_GEN - 1));
tdd = tq->data_ring.base + tq->tx_ring.next2fill;
/* Copy to the data ring. */
- memcpy(tdd->data, mbuf->pkt.data, copy_size);
+ memcpy(tdd->data, rte_pktmbuf_mtod(mbuf, char *), copy_size);
tmp.txd.addr =
rte_cpu_to_le_64(tq->data_ring.basePA +
tq->tx_ring.next2fill *
@@ -581,7 +578,7 @@ send_packet(struct vmxnet3_tx_queue *tq,
int buf_size;
assert(cur != NULL);
- buf_size = min(len, (cur->pkt.data_len - offset));
+ buf_size = min(len, (cur->data_len - offset));
if (buf_size == 0) {
/*
* Can't fill a descriptor with an empty segment,
@@ -625,7 +622,7 @@ send_packet(struct vmxnet3_tx_queue *tq,
gdesc = tq->tx_ring.base + tq->tx_ring.next2fill;
skip:
offset = 0;
- cur = cur->pkt.next;
+ cur = cur->next;
}
/* Reverse gen bit in SOP descriptor. */
@@ -781,7 +778,7 @@ eth_tx_burst(rte_txq_t *txq,
int num;
for (num = 0; num < nb_pkts; num++) {
- int ret = send_packet(tq, tx_pkts[num]->pkt.pkt_len,
+ int ret = send_packet(tq, tx_pkts[num]->pkt_len,
tx_pkts[num]);
if (ret == -1) {
break;
--
1.9.2
* [dpdk-dev] [memnic PATCH v2 17/17] pmd: adapt to new rte_mbuf structure
2014-05-19 13:56 [dpdk-dev] [PATCH v2 00/17] ixgbe/mbuf: add TSO support Olivier Matz
` (15 preceding siblings ...)
2014-05-19 13:56 ` [dpdk-dev] [vmxnet3-usermap PATCH v2 16/17] pmd: adapt to new rte_mbuf structure Olivier Matz
@ 2014-05-19 13:56 ` Olivier Matz
2014-05-22 15:02 ` [dpdk-dev] [PATCH v2 00/17] add TSO support Thomas Monjalon
2014-11-03 7:32 ` [dpdk-dev] [PATCH v2 00/17] ixgbe/mbuf: " Liu, Jijiang
18 siblings, 0 replies; 28+ messages in thread
From: Olivier Matz @ 2014-05-19 13:56 UTC (permalink / raw)
To: dev
The rte_mbuf structure is modified by the following commits in dpdk:
mbuf: rename vlan_macip_len in hw_offload and increase its size
mbuf: change ol_flags to 32 bits
mbuf: replace data pointer by an offset
mbuf: merge physaddr and buf_len in a bitfield
mbuf: remove the rte_pktmbuf structure
mbuf: remove rte_ctrlmbuf
We need to modify the driver to conform to this new structure.
Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
---
pmd/pmd_memnic.c | 14 +++++++-------
1 file changed, 7 insertions(+), 7 deletions(-)
diff --git a/pmd/pmd_memnic.c b/pmd/pmd_memnic.c
index 4abdf26..0ea932e 100644
--- a/pmd/pmd_memnic.c
+++ b/pmd/pmd_memnic.c
@@ -277,11 +277,11 @@ static uint16_t memnic_recv_pkts(void *rx_queue,
break;
rte_memcpy(rte_pktmbuf_mtod(mb, void *), p->data, p->len);
- mb->pkt.in_port = q->port_id;
- mb->pkt.nb_segs = 1;
- mb->pkt.next = NULL;
- mb->pkt.pkt_len = p->len;
- mb->pkt.data_len = p->len;
+ mb->in_port = q->port_id;
+ mb->nb_segs = 1;
+ mb->next = NULL;
+ mb->pkt_len = p->len;
+ mb->data_len = p->len;
rx_pkts[nr] = mb;
pkts++;
@@ -363,9 +363,9 @@ retry:
p->len = pkt_len;
ptr = p->data;
- for (sg = tx_pkts[nr]; sg; sg = sg->pkt.next) {
+ for (sg = tx_pkts[nr]; sg; sg = sg->next) {
void *src = rte_pktmbuf_mtod(sg, void *);
- int data_len = sg->pkt.data_len;
+ int data_len = sg->data_len;
rte_memcpy(ptr, src, data_len);
ptr += data_len;
--
1.9.2
* Re: [dpdk-dev] [PATCH v2 00/17] add TSO support
2014-05-19 13:56 [dpdk-dev] [PATCH v2 00/17] ixgbe/mbuf: add TSO support Olivier Matz
` (16 preceding siblings ...)
2014-05-19 13:56 ` [dpdk-dev] [memnic PATCH v2 17/17] " Olivier Matz
@ 2014-05-22 15:02 ` Thomas Monjalon
2014-05-22 16:09 ` Venkatesan, Venky
2014-05-23 12:47 ` Ananyev, Konstantin
2014-11-03 7:32 ` [dpdk-dev] [PATCH v2 00/17] ixgbe/mbuf: " Liu, Jijiang
18 siblings, 2 replies; 28+ messages in thread
From: Thomas Monjalon @ 2014-05-22 15:02 UTC (permalink / raw)
To: konstantin.ananyev, jeffrey.b.shaw, bruce.richardson,
venky.venkatesan, nhorman, stephen
Cc: dev
There were many comments on the first version of this series,
and there is no comment on this second version.
So I understand everyone is OK with it and it can be applied in the git tree.
I'll wait 1 day more. Feel free to comment or acknowledge.
--
Thomas
* Re: [dpdk-dev] [PATCH v2 00/17] add TSO support
2014-05-22 15:02 ` [dpdk-dev] [PATCH v2 00/17] add TSO support Thomas Monjalon
@ 2014-05-22 16:09 ` Venkatesan, Venky
2014-05-23 14:22 ` Olivier MATZ
2014-05-23 12:47 ` Ananyev, Konstantin
1 sibling, 1 reply; 28+ messages in thread
From: Venkatesan, Venky @ 2014-05-22 16:09 UTC (permalink / raw)
To: Thomas Monjalon, Ananyev, Konstantin, Shaw, Jeffrey B,
Richardson, Bruce, nhorman, stephen
Cc: dev
Thomas,
It's because we haven't gotten to testing the patch yet, and figuring out all the problems. Putting it in and modifying MBUF needs a bit of time - one other option that I've looked at is to let the transmit offload parts (except for the VLAN) flow onto the second cache line. That doesn't seem to have a performance hit at this point - since it's going to be populated before calling transmit anyway, it's cache hot. Have we thought of simply doing that instead of these changes that have net negative side effects in terms of mbuf mods?
Regards,
-Venky
-----Original Message-----
From: Thomas Monjalon [mailto:thomas.monjalon@6wind.com]
Sent: Thursday, May 22, 2014 8:02 AM
To: Ananyev, Konstantin; Shaw, Jeffrey B; Richardson, Bruce; Venkatesan, Venky; nhorman@tuxdriver.com; stephen@networkplumber.org
Cc: Olivier Matz; dev@dpdk.org
Subject: Re: [PATCH v2 00/17] add TSO support
There were many comments on the first version of this series, and there is no comment on this second version.
So I understand everyone is OK with it and it can be applied in the git tree.
I'll wait 1 day more. Feel free to comment or acknowledge.
--
Thomas
* Re: [dpdk-dev] [PATCH v2 00/17] add TSO support
2014-05-22 15:02 ` [dpdk-dev] [PATCH v2 00/17] add TSO support Thomas Monjalon
2014-05-22 16:09 ` Venkatesan, Venky
@ 2014-05-23 12:47 ` Ananyev, Konstantin
2014-05-23 14:32 ` Olivier MATZ
1 sibling, 1 reply; 28+ messages in thread
From: Ananyev, Konstantin @ 2014-05-23 12:47 UTC (permalink / raw)
To: Thomas Monjalon, Shaw, Jeffrey B, Richardson, Bruce, Venkatesan,
Venky, nhorman, stephen
Cc: dev
Hi Thomas,
I don't see any big changes in the v2 of that patch.
At least both things that I have concerns about, stay unchanged in the v2:
1) merge physaddr and buf_len in a bitfield - I still think we better keep physaddr as 64bit field (PATCH 5).
2) fix_tcp_phdr_cksum() is inside ixgbe TX function, while I think it should be moved out of it to the upper layer (PATCH 11).
Thanks
Konstantin
-----Original Message-----
From: Thomas Monjalon [mailto:thomas.monjalon@6wind.com]
Sent: Thursday, May 22, 2014 4:02 PM
To: Ananyev, Konstantin; Shaw, Jeffrey B; Richardson, Bruce; Venkatesan, Venky; nhorman@tuxdriver.com; stephen@networkplumber.org
Cc: Olivier Matz; dev@dpdk.org
Subject: Re: [PATCH v2 00/17] add TSO support
There were many comments on the first version of this series,
and there is no comment on this second version.
So I understand everyone is OK with it and it can be applied in the git tree.
I'll wait 1 day more. Feel free to comment or acknowledge.
--
Thomas
* Re: [dpdk-dev] [PATCH v2 00/17] add TSO support
2014-05-22 16:09 ` Venkatesan, Venky
@ 2014-05-23 14:22 ` Olivier MATZ
2014-05-23 14:43 ` Venkatesan, Venky
0 siblings, 1 reply; 28+ messages in thread
From: Olivier MATZ @ 2014-05-23 14:22 UTC (permalink / raw)
To: Venkatesan, Venky, Thomas Monjalon, Ananyev, Konstantin, Shaw,
Jeffrey B, Richardson, Bruce, nhorman, stephen
Cc: dev
Hi Venky,
> It's because we haven't gotten to testing the patch yet, and figuring
> out all the problems. Putting it in and modifying MBUF needs a bit of
> time - one other option that I've looked at is to let the transmit
> offload parts (except for the VLAN) flow onto the second cache
> line. That doesn't seem to have a performance hit at this point -
> since it's going to be populated before calling transmit anyway, it's
> cache hot. Have we thought of simply doing that instead of these
> changes that have net negative side effects in terms of mbuf mods?
I think that the performance gain on a real use case provided by this
patch series can justify a really small impact (see my test reports) on
demonstration-only applications: my testpmd iofwd test with the
txqflags option disabling many mbuf features is not representative of
a real world application.
In my opinion, moving offload parts outside in another cache line would
have an impact on performance. If not, why would you exclude vlan?
But this is speculation. As Neil and Thomas suggested previously, we
should rely on performance and functional tests.
Today, there is no alternative that brings equivalent features and
better performance (I mean there is no patch nor test reports). If the
series is applied after your ack, it won't prevent anyone from bringing
new enhancements or reworks on top of it.
Regards,
Olivier
* Re: [dpdk-dev] [PATCH v2 00/17] add TSO support
2014-05-23 12:47 ` Ananyev, Konstantin
@ 2014-05-23 14:32 ` Olivier MATZ
2014-05-26 15:20 ` Ananyev, Konstantin
0 siblings, 1 reply; 28+ messages in thread
From: Olivier MATZ @ 2014-05-23 14:32 UTC (permalink / raw)
To: Ananyev, Konstantin, Thomas Monjalon, Shaw, Jeffrey B,
Richardson, Bruce, Venkatesan, Venky, nhorman, stephen
Cc: dev
Hi Konstantin,
> I don't see any big changes in the v2 of that patch.
>
> At least both things that I have concerns about, stay unchanged in
> the v2:
>
> 1) merge physaddr and buf_len in a bitfield - I still think we better
> keep physaddr as 64bit field (PATCH 5).
As nobody reacted to our discussion thread [1] with other arguments, I
stayed on my initial position:
- 2 bytes more in the mbuf structure is always good to have
- no performance impact detected (see my test reports)
- the 48-bit physical address limit will be reached in several years,
and at this time, maybe the cache lines will be 128 bytes? (solving
our initial rte_mbuf issue). So let's just deal with current or near
future hardware.
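For readers following the thread, the layout in question is roughly the
following (my sketch of the idea behind PATCH 5, not the exact hunk):

        #include <stdint.h>

        /* merge the physical address and the buffer length in one
         * 64-bit word: 48 bits cover today's physical memory sizes,
         * and the 16-bit length recovers 2 bytes for new fields */
        struct mbuf_phys_len {
                uint64_t buf_physaddr:48;
                uint64_t buf_len:16;
        };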
> 2) fix_tcp_phdr_cksum() is inside ixgbe TX function, while I think it
> should be moved out of it to the upper layer (PATCH 11).
Thomas's answer on this question [2] was to do that way, so I did not
modify the v1 patches.
Regards,
Olivier
[1] http://dpdk.org/ml/archives/dev/2014-May/002454.html
[2] http://dpdk.org/ml/archives/dev/2014-May/002531.html
* Re: [dpdk-dev] [PATCH v2 00/17] add TSO support
2014-05-23 14:22 ` Olivier MATZ
@ 2014-05-23 14:43 ` Venkatesan, Venky
2014-05-26 11:59 ` Olivier MATZ
0 siblings, 1 reply; 28+ messages in thread
From: Venkatesan, Venky @ 2014-05-23 14:43 UTC (permalink / raw)
To: Olivier MATZ, Thomas Monjalon, Ananyev, Konstantin, Shaw,
Jeffrey B, Richardson, Bruce, nhorman, stephen
Cc: dev
Olivier,
>> It's because we haven't gotten to testing the patch yet, and figuring out all the problems. Putting it in and modifying MBUF needs a bit of time - one other option that I've looked at is to let the transmit offload parts (except for the VLAN) flow onto the second cache line. That doesn't seem to have a performance hit at this point - since it's going to be populated before calling transmit anyway, it's cache hot. Have we thought of simply doing that instead of these changes that have net negative side effects in terms of mbuf mods?
> I think that the performance gain on a real use case provided by this patch series can justify a really small impact (see my test reports) on demonstration-only applications: my testpmd iofwd test with the txqflags option disabling many mbuf features is not representative of a real world application.
[Venky] I did see your test reports. I also made the point that the tests we have are insufficient for testing the impact. If you look at data_off, it actually has an impact on two sides - the driver and the upper layer. We do not at this time have a test for the upper layer/accessor. Secondly, there is a whole class of apps (fast path route for example) that aren't in your purview that do not need txqflags. Calling it not representative of a real world application is incorrect.
Secondly, your testpmd run baseline performance should be higher. At this point, it is about 40% off from the numbers we see on the baseline on the same CPU. If the baseline is incorrect, I cannot judge anything more on the performance. We need to get the baseline performance the same, and then compare impact.
> In my opinion, moving offload parts outside in another cache line would have an impact on performance. If not, why would you exclude vlan?
> But this is speculation. As Neil and Thomas suggested previously, we should rely on performance and functional tests.
[Venky] I exclude VLAN because it is something explicitly set by the Rx side of the driver. Having Rx access a second cache line will generate a performance impact (can be mitigated by a prefetch, but it will cost more instructions, and cannot be deterministically controlled). The rest of the structure is on the transmit side - which is going to be cache hot - at least in LLC anyway. There are cases where this will not be in LLC - and we have a few of those. Those however, we can mitigate.
> Today, there is no alternative that brings equivalent features and better performance (I mean there is no patch nor test reports). If the series is applied after your ack, it won't prevent anyone from bringing new enhancements or reworks on top of it.
[Venky] I don't think reworking core data structures (especially regressing core data structures) is a good thing. We have kept this relatively stable over 5 releases, sometimes at the impact of performance, and churning data structures is not a good thing.
BR,
- Venky
* Re: [dpdk-dev] [PATCH v2 00/17] add TSO support
2014-05-23 14:43 ` Venkatesan, Venky
@ 2014-05-26 11:59 ` Olivier MATZ
0 siblings, 0 replies; 28+ messages in thread
From: Olivier MATZ @ 2014-05-26 11:59 UTC (permalink / raw)
To: Venkatesan, Venky, Thomas Monjalon, Ananyev, Konstantin, Shaw,
Jeffrey B, Richardson, Bruce, nhorman, stephen
Cc: dev
Hi Venky,
>> my testpmd iofwd test with the txqflags option disabling many mbuf
>> features is not representative of a real world application.
> [Venky] I did see your test reports. I also made the point that the
> tests we have are insufficient for testing the impact. If you look at
> data_off, it actually has an impact on two sides - the driver and the
> upper layer. We do not at this time have a test for the upper
> layer/accessor. Secondly, there is a whole class of apps (fast path
> route for example) that aren't in your purview that do not need
> txqflags. Calling it not representative of a real world application is
> incorrect.
I was talking about "iofwd with txqflags", not "txqflags". The iofwd
mode means that the mbuf data is not accessed by the application,
so I think we should not rely on it.
I agree that "txqflags" could be useful in some situations, when the
user does not need multi-segment. Anyway, the tests I've posted on the
list do not show a real impact on this use case.
Finally, as stated in the initial TSO cover letter [1], I've tested
the upper layer impact that you are talking about with the 6WINDGate
stack and there is no visible regression.
> Secondly, your testpmd run baseline performance should be higher. At
> this point, it is about 40% off from the numbers we see on the baseline
> on the same CPU. If the baseline is incorrect, I cannot judge anything
> more on the performance. We need to get the baseline performance the
> same, and then compare impact.
I tested with the default testpmd configuration; I thought it was
close enough to the best performance.
> [Venky] I exclude VLAN because it is something explicitly set by the Rx
> side of the driver. Having Rx access a second cache line will generate a
> performance impact (can be mitigated by a prefetch, but it will cost
> more instructions, and cannot be deterministically controlled). The rest
> of the structure is on the transmit side - which is going to be cache
> hot - at least in LLC anyway. There are cases where this will not be in
> LLC - and we have a few of those. Those however, we can mitigate.
If we add another cache line to the mbuf, if the code accesses it
(whether on the rx or tx side), it will at least increase the required memory
bandwidth, and in the worst case will result in an additional cache
miss. This is difficult to predict and depends on the use case. I
think each solution would have its drawbacks in specific cases.
I think the tests I've provided are representative enough to assume
that there is no need to search for an alternative, knowing the
fact that it also cleans the mbuf structure and gives more room for
later use.
> [Venky] I don't think reworking core data structures (especially
> regressing core data structures) is a good thing. We have kept this
> relatively stable over 5 releases, sometimes at the impact of
> performance, and churning data structures is not a good thing.
Sorry, but I don't think this is a good argument: we cannot say
"it was stable during 5 releases so we cannot modify it". I would say
that the code modifications should be guided by technical reasons.
Keeping the API stable could be a good technical reason, but as far
as I understood, the DPDK is not at this stage today. The mbuf
rework brings 9 additional bytes in the structure, which help for
TSO today, but may also help future features. For instance, having
another checksum flag as proposed by Stephen and me [2].
Moreover, the patches make the rte_mbuf structure clearer: the current
separation rte_mbuf / rte_pktmbuf is a bit strange, for instance the
ol_flags field, which is clearly related to a pktmbuf, is located in
the rte_mbuf.
Now, I'm ready to make some concessions and discuss an alternative
solution.
Thomas, you are the maintainer ;) what are your plans?
Regards,
Olivier
[1] http://dpdk.org/ml/archives/dev/2014-May/002322.html
[2] http://dpdk.org/ml/archives/dev/2014-May/002339.html
* Re: [dpdk-dev] [PATCH v2 00/17] add TSO support
2014-05-23 14:32 ` Olivier MATZ
@ 2014-05-26 15:20 ` Ananyev, Konstantin
0 siblings, 0 replies; 28+ messages in thread
From: Ananyev, Konstantin @ 2014-05-26 15:20 UTC (permalink / raw)
To: Olivier MATZ, Thomas Monjalon, Shaw, Jeffrey B, Richardson,
Bruce, Venkatesan, Venky, nhorman, stephen
Cc: dev
Hi Oliver,
>> I don't see any big changes in the v2 of that patch.
>>
>> At least both things that I have concerns about, stay unchanged in
>> the v2:
>>
>> 1) merge physaddr and buf_len in a bitfield - I still think we better
>> keep physaddr as 64bit field (PATCH 5).
>As nobody reacted to our discussion thread [1] with other arguments, I
>stayed on my initial position:
>- 2 bytes more in the mbuf structure is always good to have
>- no performance impact detected (see my test reports)
>- the 48 bits physical address limit will be reached in several years,
> and at this time, maybe the cache lines will be 128 bytes? (solving
> our initial rte_mbuf issue). So let's just deal with current or near
> future hardware.
As I understand it, right now you don't need these 2 bytes to enable TSO support.
From the other side, if we occupy them with some fields now, then in the
future we wouldn't be able to expand our phys_addr without reworking the
mbuf structure again and breaking backward compatibility.
No doubt there are a lot of extra things that need (or want) to be put into the mbuf.
And as new HW with extra offload capabilities comes out, that demand will only increase.
That's why I think we wouldn't be able to squeeze all future fields into 64B anyway.
So it seems to me that we would have to expand the mbuf to 128B sooner or later, whether we like it or not.
So I still think we better keep phys_addr 64bits long.
At least for now.
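For readers following the thread, a side-by-side sketch of the two
layouts being argued about (field names are illustrative and may differ
from what PATCH 5 actually does):

    #include <stdint.h>

    /* layout of the series: address and length share one 64-bit word,
     * freeing 2 bytes elsewhere in the mbuf */
    struct mbuf_addr_packed {
        uint64_t buf_physaddr:48; /* 48 bits cover today's hardware */
        uint64_t buf_len:16;
    };

    /* layout Konstantin argues for: a full 64-bit address that never
     * needs reworking when hardware grows past 48 physical bits */
    struct mbuf_addr_full {
        uint64_t buf_physaddr;
        uint16_t buf_len;
    };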
>> 2) fix_tcp_phdr_cksum() is inside the ixgbe TX function, while I think it
>> should be moved out of it to the upper layer (PATCH 11).
>Thomas's answer to this question [2] was to do it that way, so I did not
>modify the v1 patches.
> Thomas Monjalon wrote:
>We know at least 2 checksum methods:
>- the standard one
>- the special one for ixgbe TSO
>In Linux ixgbe, the checksum is redone in the driver in the TSO case.
I don't think we should use the Linux ixgbe driver as a reference here.
>We want to compute the checksum in the application/stack in order to
>prevent the driver from modifying packet data, which could cause a
>cache miss.
>But the application cannot always know which checksum method to use,
>because it doesn't have to know which driver will process the packet.
>So we have to choose which checksum method can be done in the
>application without driver processing. It's not an easy choice.
>It seems simpler and more reasonable to choose the standard
>pseudo-header checksum method, as is done in Linux.
>Having a stable and generic API is something important we must target.
As I said: with the current DPDK implementation, the upper-layer code
would still be different for packets with TCP checksum only (without
segmentation) and for TCP segmentation: you need to set up different
flags in the mbuf, and with TSO you also need to set up the l4_len and
mss fields, while with just checksum you don't.
So right now the upper-layer code has to know whether it is doing TSO or
just checksum anyway.
If that's the case, why can't it also compute the pseudo-header checksum
in two different ways?
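To make the two setup paths concrete, a hedged sketch using the names
from this series (hw_offload with its l2_len/l3_len/l4_len/mss fields,
PKT_TX_TCP_SEG); the exact sub-field layout is my reading of the cover
letter, not copied from the patches:

    #include <rte_mbuf.h>
    #include <rte_ether.h>
    #include <rte_ip.h>
    #include <rte_tcp.h>

    static void
    prepare_tx_tcp(struct rte_mbuf *m, uint16_t mss, int do_tso)
    {
        m->hw_offload.l2_len = sizeof(struct ether_hdr);
        m->hw_offload.l3_len = sizeof(struct ipv4_hdr);

        if (do_tso) {
            /* TSO path: extra fields, and ixgbe expects the TCP
             * pseudo-header checksum computed WITHOUT the length */
            m->hw_offload.l4_len = sizeof(struct tcp_hdr);
            m->hw_offload.mss = mss;
            m->ol_flags |= PKT_TX_TCP_SEG;
        } else {
            /* checksum-only path: standard pseudo-header checksum
             * WITH the length, no l4_len/mss required */
            m->ol_flags |= PKT_TX_TCP_CKSUM;
        }
    }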
Also, the patch introduces TSO support only for ixgbe.
I suppose that TSO support for igb can be added in the same way.
Let's talk about a generic API when we have to extend it to different
devices.
Konstantin
^ permalink raw reply [flat|nested] 28+ messages in thread
* Re: [dpdk-dev] [PATCH v2 00/17] ixgbe/mbuf: add TSO support
2014-05-19 13:56 [dpdk-dev] [PATCH v2 00/17] ixgbe/mbuf: add TSO support Olivier Matz
` (17 preceding siblings ...)
2014-05-22 15:02 ` [dpdk-dev] [PATCH v2 00/17] add TSO support Thomas Monjalon
@ 2014-11-03 7:32 ` Liu, Jijiang
2014-11-03 10:12 ` Olivier MATZ
18 siblings, 1 reply; 28+ messages in thread
From: Liu, Jijiang @ 2014-11-03 7:32 UTC (permalink / raw)
To: 'Olivier Matz'; +Cc: dev
Hi Olivier,
Do you plan to integrate this patch set, with the latest mbuf changes, into DPDK 1.8?
If so, I can develop the TSO feature for tunneled packets in the next release, based on this patch set and Mirek's i40e TSO patch set.
Thanks
Jijiang Liu
> -----Original Message-----
> From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Olivier Matz
> Sent: Monday, May 19, 2014 9:56 PM
> To: dev@dpdk.org
> Subject: [dpdk-dev] [PATCH v2 00/17] ixgbe/mbuf: add TSO support
>
> [The v2 cover letter is quoted here in full; snipped, as it duplicates
> the original posting at the top of this thread.]
^ permalink raw reply [flat|nested] 28+ messages in thread
* Re: [dpdk-dev] [PATCH v2 00/17] ixgbe/mbuf: add TSO support
2014-11-03 7:32 ` [dpdk-dev] [PATCH v2 00/17] ixgbe/mbuf: " Liu, Jijiang
@ 2014-11-03 10:12 ` Olivier MATZ
0 siblings, 0 replies; 28+ messages in thread
From: Olivier MATZ @ 2014-11-03 10:12 UTC (permalink / raw)
To: Liu, Jijiang; +Cc: dev
Hi Jijiang,
On 11/03/2014 08:32 AM, Liu, Jijiang wrote:
> Do you plan to integrate this patch set with latest mbuf changes into DPDK1.8?
Yes, it should be in DPDK 1.8.
> If yes, I can develop TSO feature of tunneling packet based on this patch set and the Mirek's i40e TSO patch set in the next release.
I suggest waiting a bit in case there are some changes (maybe after
the review). The first patchset will arrive soon.
Regards,
Olivier
^ permalink raw reply [flat|nested] 28+ messages in thread
end of thread, other threads:[~2014-11-03 10:03 UTC | newest]
Thread overview: 28+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2014-05-19 13:56 [dpdk-dev] [PATCH v2 00/17] ixgbe/mbuf: add TSO support Olivier Matz
2014-05-19 13:56 ` [dpdk-dev] [PATCH v2 01/17] igb/ixgbe: fix IP checksum calculation Olivier Matz
2014-05-19 13:56 ` [dpdk-dev] [PATCH v2 02/17] mbuf: rename RTE_MBUF_SCATTER_GATHER into RTE_MBUF_REFCNT Olivier Matz
2014-05-19 13:56 ` [dpdk-dev] [PATCH v2 03/17] mbuf: remove rte_ctrlmbuf Olivier Matz
2014-05-19 13:56 ` [dpdk-dev] [PATCH v2 04/17] mbuf: remove the rte_pktmbuf structure Olivier Matz
2014-05-19 13:56 ` [dpdk-dev] [PATCH v2 05/17] mbuf: merge physaddr and buf_len in a bitfield Olivier Matz
2014-05-19 13:56 ` [dpdk-dev] [PATCH v2 06/17] mbuf: cosmetic changes in rte_mbuf structure Olivier Matz
2014-05-19 13:56 ` [dpdk-dev] [PATCH v2 07/17] mbuf: replace data pointer by an offset Olivier Matz
2014-05-19 13:56 ` [dpdk-dev] [PATCH v2 08/17] mbuf: add functions to get the name of an ol_flag Olivier Matz
2014-05-19 13:56 ` [dpdk-dev] [PATCH v2 09/17] mbuf: change ol_flags to 32 bits Olivier Matz
2014-05-19 13:56 ` [dpdk-dev] [PATCH v2 10/17] mbuf: rename vlan_macip_len in hw_offload and increase its size Olivier Matz
2014-05-19 13:56 ` [dpdk-dev] [PATCH v2 11/17] testpmd: modify source address to validate checksum calculation Olivier Matz
2014-05-19 13:56 ` [dpdk-dev] [PATCH v2 12/17] mbuf: generic support of TCP segmentation offload Olivier Matz
2014-05-19 13:56 ` [dpdk-dev] [PATCH v2 13/17] ixgbe: support " Olivier Matz
2014-05-19 13:56 ` [dpdk-dev] [virtio-net-pmd PATCH v2 14/17] pmd: adapt to new rte_mbuf structure Olivier Matz
2014-05-19 13:56 ` [dpdk-dev] [vmxnet3-usermap PATCH v2 15/17] pmd: remove support of old dpdk versions Olivier Matz
2014-05-19 13:56 ` [dpdk-dev] [vmxnet3-usermap PATCH v2 16/17] pmd: adapt to new rte_mbuf structure Olivier Matz
2014-05-19 13:56 ` [dpdk-dev] [memnic PATCH v2 17/17] " Olivier Matz
2014-05-22 15:02 ` [dpdk-dev] [PATCH v2 00/17] add TSO support Thomas Monjalon
2014-05-22 16:09 ` Venkatesan, Venky
2014-05-23 14:22 ` Olivier MATZ
2014-05-23 14:43 ` Venkatesan, Venky
2014-05-26 11:59 ` Olivier MATZ
2014-05-23 12:47 ` Ananyev, Konstantin
2014-05-23 14:32 ` Olivier MATZ
2014-05-26 15:20 ` Ananyev, Konstantin
2014-11-03 7:32 ` [dpdk-dev] [PATCH v2 00/17] ixgbe/mbuf: " Liu, Jijiang
2014-11-03 10:12 ` Olivier MATZ