* [dpdk-dev] [PATCH] net/enic: set L4 checksum flags for IPv6 packets
@ 2018-01-23 1:05 John Daley
2018-01-23 1:05 ` [dpdk-dev] [PATCH] net/enic: fix segfault due to static max number of queues John Daley
` (3 more replies)
0 siblings, 4 replies; 10+ messages in thread
From: John Daley @ 2018-01-23 1:05 UTC (permalink / raw)
To: ferruh.yigit; +Cc: dev, Hyong Youb Kim
From: Hyong Youb Kim <hyonkim@cisco.com>
enic_cq_rx_to_pkt_flags() currently sets checksum good/bad flags only
for IPv4. The hardware actually validates the TCP/UDP checksum of
IPv6 packets too. Set PKT_RX_L4_CKSUM_{GOOD,BAD} accordingly.
Signed-off-by: Hyong Youb Kim <hyonkim@cisco.com>
Reviewed-by: John Daley <johndale@cisco.com>
---
drivers/net/enic/enic_rxtx.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/drivers/net/enic/enic_rxtx.c b/drivers/net/enic/enic_rxtx.c
index 98902caa0..8157697a0 100644
--- a/drivers/net/enic/enic_rxtx.c
+++ b/drivers/net/enic/enic_rxtx.c
@@ -185,14 +185,14 @@ enic_cq_rx_to_pkt_flags(struct cq_desc *cqd, struct rte_mbuf *mbuf)
}
/* checksum flags */
- if (mbuf->packet_type & RTE_PTYPE_L3_IPV4) {
+ if (mbuf->packet_type & (RTE_PTYPE_L3_IPV4 | RTE_PTYPE_L3_IPV6)) {
if (!enic_cq_rx_desc_csum_not_calc(cqrd)) {
uint32_t l4_flags;
l4_flags = mbuf->packet_type & RTE_PTYPE_L4_MASK;
if (enic_cq_rx_desc_ipv4_csum_ok(cqrd))
pkt_flags |= PKT_RX_IP_CKSUM_GOOD;
- else
+ else if (mbuf->packet_type & RTE_PTYPE_L3_IPV4)
pkt_flags |= PKT_RX_IP_CKSUM_BAD;
if (l4_flags == RTE_PTYPE_L4_UDP ||
--
2.12.0
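
For context, a minimal Rx-side sketch (not part of the patch) of how an application
could consume the L4 checksum flags that this change now also sets for IPv6 packets.
The port/queue ids, burst size and the handle_packet() helper are illustrative
assumptions, not anything defined by the enic PMD.

#include <rte_ethdev.h>
#include <rte_mbuf.h>

extern void handle_packet(struct rte_mbuf *m); /* hypothetical app handler */

static void
rx_check_l4_csum(uint16_t port_id, uint16_t queue_id)
{
	struct rte_mbuf *pkts[32];
	uint16_t nb, i;

	nb = rte_eth_rx_burst(port_id, queue_id, pkts, 32);
	for (i = 0; i < nb; i++) {
		uint64_t csum = pkts[i]->ol_flags & PKT_RX_L4_CKSUM_MASK;

		/* With this patch, GOOD/BAD is reported for IPv6 TCP/UDP too. */
		if (csum == PKT_RX_L4_CKSUM_BAD) {
			rte_pktmbuf_free(pkts[i]);
			continue;
		}
		handle_packet(pkts[i]);
	}
}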
* [dpdk-dev] [PATCH] net/enic: fix segfault due to static max number of queues
2018-01-23 1:05 [dpdk-dev] [PATCH] net/enic: set L4 checksum flags for IPv6 packets John Daley
@ 2018-01-23 1:05 ` John Daley
2018-01-24 17:19 ` Ferruh Yigit
2018-01-23 1:05 ` [dpdk-dev] [PATCH] net/enic: add a Tx prepare handler John Daley
` (2 subsequent siblings)
3 siblings, 1 reply; 10+ messages in thread
From: John Daley @ 2018-01-23 1:05 UTC (permalink / raw)
To: ferruh.yigit; +Cc: dev, Hyong Youb Kim, stable
From: Hyong Youb Kim <hyonkim@cisco.com>
ENIC_CQ_MAX, ENIC_WQ_MAX and others are arbitrary values that
prevent the app from using more queues when they are available on
hardware. Remove them and dynamically allocate vnic_cq and such
arrays to accommodate all available hardware queues.
As a side effect of removing ENIC_CQ_MAX, this commit fixes a segfault
that would happen when the app requests more than 16 CQs, because
enic_set_vnic_res() does not consider ENIC_CQ_MAX. For example, the
following command causes a crash.
testpmd -- --rxq=16 --txq=16
Fixes: ce93d3c36db0 ("net/enic: fix resource check failures when bonding devices")
Cc: stable@dpdk.org
Signed-off-by: Hyong Youb Kim <hyonkim@cisco.com>
Reviewed-by: John Daley <johndale@cisco.com>
---
drivers/net/enic/enic.h | 25 +++++++++---------------
drivers/net/enic/enic_ethdev.c | 20 ++------------------
drivers/net/enic/enic_main.c | 43 ++++++++++++++++++++++++++++++++----------
3 files changed, 44 insertions(+), 44 deletions(-)
diff --git a/drivers/net/enic/enic.h b/drivers/net/enic/enic.h
index c3ed1fa02..c76a8f0af 100644
--- a/drivers/net/enic/enic.h
+++ b/drivers/net/enic/enic.h
@@ -24,13 +24,6 @@
#define DRV_DESCRIPTION "Cisco VIC Ethernet NIC Poll-mode Driver"
#define DRV_COPYRIGHT "Copyright 2008-2015 Cisco Systems, Inc"
-#define ENIC_WQ_MAX 8
-/* With Rx scatter support, we use two RQs on VIC per RQ used by app. Both
- * RQs use the same CQ.
- */
-#define ENIC_RQ_MAX 16
-#define ENIC_CQ_MAX (ENIC_WQ_MAX + (ENIC_RQ_MAX / 2))
-#define ENIC_INTR_MAX (ENIC_CQ_MAX + 2)
#define ENIC_MAX_MAC_ADDR 64
#define VLAN_ETH_HLEN 18
@@ -121,17 +114,17 @@ struct enic {
unsigned int flags;
unsigned int priv_flags;
- /* work queue */
- struct vnic_wq wq[ENIC_WQ_MAX];
- unsigned int wq_count;
+ /* work queue (len = conf_wq_count) */
+ struct vnic_wq *wq;
+ unsigned int wq_count; /* equals eth_dev nb_tx_queues */
- /* receive queue */
- struct vnic_rq rq[ENIC_RQ_MAX];
- unsigned int rq_count;
+ /* receive queue (len = conf_rq_count) */
+ struct vnic_rq *rq;
+ unsigned int rq_count; /* equals eth_dev nb_rx_queues */
- /* completion queue */
- struct vnic_cq cq[ENIC_CQ_MAX];
- unsigned int cq_count;
+ /* completion queue (len = conf_cq_count) */
+ struct vnic_cq *cq;
+ unsigned int cq_count; /* equals rq_count + wq_count */
/* interrupt resource */
struct vnic_intr intr;
diff --git a/drivers/net/enic/enic_ethdev.c b/drivers/net/enic/enic_ethdev.c
index e1ff9dcbd..2c356dc27 100644
--- a/drivers/net/enic/enic_ethdev.c
+++ b/drivers/net/enic/enic_ethdev.c
@@ -190,13 +190,7 @@ static int enicpmd_dev_tx_queue_setup(struct rte_eth_dev *eth_dev,
return -E_RTE_SECONDARY;
ENICPMD_FUNC_TRACE();
- if (queue_idx >= ENIC_WQ_MAX) {
- dev_err(enic,
- "Max number of TX queues exceeded. Max is %d\n",
- ENIC_WQ_MAX);
- return -EINVAL;
- }
-
+ RTE_ASSERT(queue_idx < enic->conf_wq_count);
eth_dev->data->tx_queues[queue_idx] = (void *)&enic->wq[queue_idx];
ret = enic_alloc_wq(enic, queue_idx, socket_id, nb_desc);
@@ -310,17 +304,7 @@ static int enicpmd_dev_rx_queue_setup(struct rte_eth_dev *eth_dev,
if (rte_eal_process_type() != RTE_PROC_PRIMARY)
return -E_RTE_SECONDARY;
-
- /* With Rx scatter support, two RQs are now used on VIC per RQ used
- * by the application.
- */
- if (queue_idx * 2 >= ENIC_RQ_MAX) {
- dev_err(enic,
- "Max number of RX queues exceeded. Max is %d. This PMD uses 2 RQs on VIC per RQ used by DPDK.\n",
- ENIC_RQ_MAX);
- return -EINVAL;
- }
-
+ RTE_ASSERT(enic_rte_rq_idx_to_sop_idx(queue_idx) < enic->conf_rq_count);
eth_dev->data->rx_queues[queue_idx] =
(void *)&enic->rq[enic_rte_rq_idx_to_sop_idx(queue_idx)];
diff --git a/drivers/net/enic/enic_main.c b/drivers/net/enic/enic_main.c
index 9274defd9..ec9d343fd 100644
--- a/drivers/net/enic/enic_main.c
+++ b/drivers/net/enic/enic_main.c
@@ -1042,6 +1042,9 @@ static void enic_dev_deinit(struct enic *enic)
vnic_dev_notify_unset(enic->vdev);
rte_free(eth_dev->data->mac_addrs);
+ rte_free(enic->cq);
+ rte_free(enic->rq);
+ rte_free(enic->wq);
}
@@ -1049,27 +1052,28 @@ int enic_set_vnic_res(struct enic *enic)
{
struct rte_eth_dev *eth_dev = enic->rte_dev;
int rc = 0;
+ unsigned int required_rq, required_wq, required_cq;
- /* With Rx scatter support, two RQs are now used per RQ used by
- * the application.
- */
- if (enic->conf_rq_count < eth_dev->data->nb_rx_queues) {
+ /* Always use two vNIC RQs per eth_dev RQ, regardless of Rx scatter. */
+ required_rq = eth_dev->data->nb_rx_queues * 2;
+ required_wq = eth_dev->data->nb_tx_queues;
+ required_cq = eth_dev->data->nb_rx_queues + eth_dev->data->nb_tx_queues;
+
+ if (enic->conf_rq_count < required_rq) {
dev_err(dev, "Not enough Receive queues. Requested:%u which uses %d RQs on VIC, Configured:%u\n",
eth_dev->data->nb_rx_queues,
- eth_dev->data->nb_rx_queues * 2, enic->conf_rq_count);
+ required_rq, enic->conf_rq_count);
rc = -EINVAL;
}
- if (enic->conf_wq_count < eth_dev->data->nb_tx_queues) {
+ if (enic->conf_wq_count < required_wq) {
dev_err(dev, "Not enough Transmit queues. Requested:%u, Configured:%u\n",
eth_dev->data->nb_tx_queues, enic->conf_wq_count);
rc = -EINVAL;
}
- if (enic->conf_cq_count < (eth_dev->data->nb_rx_queues +
- eth_dev->data->nb_tx_queues)) {
+ if (enic->conf_cq_count < required_cq) {
dev_err(dev, "Not enough Completion queues. Required:%u, Configured:%u\n",
- (eth_dev->data->nb_rx_queues +
- eth_dev->data->nb_tx_queues), enic->conf_cq_count);
+ required_cq, enic->conf_cq_count);
rc = -EINVAL;
}
@@ -1275,6 +1279,25 @@ static int enic_dev_init(struct enic *enic)
dev_err(enic, "See the ENIC PMD guide for more information.\n");
return -EINVAL;
}
+ /* Queue counts may be zeros. rte_zmalloc returns NULL in that case. */
+ enic->cq = rte_zmalloc("enic_vnic_cq", sizeof(struct vnic_cq) *
+ enic->conf_cq_count, 8);
+ enic->rq = rte_zmalloc("enic_vnic_rq", sizeof(struct vnic_rq) *
+ enic->conf_rq_count, 8);
+ enic->wq = rte_zmalloc("enic_vnic_wq", sizeof(struct vnic_wq) *
+ enic->conf_wq_count, 8);
+ if (enic->conf_cq_count > 0 && enic->cq == NULL) {
+ dev_err(enic, "failed to allocate vnic_cq, aborting.\n");
+ return -1;
+ }
+ if (enic->conf_rq_count > 0 && enic->rq == NULL) {
+ dev_err(enic, "failed to allocate vnic_rq, aborting.\n");
+ return -1;
+ }
+ if (enic->conf_wq_count > 0 && enic->wq == NULL) {
+ dev_err(enic, "failed to allocate vnic_wq, aborting.\n");
+ return -1;
+ }
/* Get the supported filters */
enic_fdir_info(enic);
--
2.12.0
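
As a rough illustration of what the fix enables, here is a hedged application-side
sketch that configures 16 Rx/Tx queue pairs, the case that previously crashed.
The port id, descriptor counts and the mempool are assumptions, not values taken
from the patch.

#include <string.h>
#include <rte_ethdev.h>
#include <rte_mempool.h>

static int
setup_16_queue_pairs(uint16_t port_id, struct rte_mempool *mp)
{
	struct rte_eth_conf conf;
	struct rte_eth_dev_info info;
	uint16_t nb_q = 16, q;

	memset(&conf, 0, sizeof(conf));
	rte_eth_dev_info_get(port_id, &info);
	if (nb_q > info.max_rx_queues || nb_q > info.max_tx_queues)
		return -1; /* hardware exposes fewer queues than requested */

	if (rte_eth_dev_configure(port_id, nb_q, nb_q, &conf) != 0)
		return -1;

	for (q = 0; q < nb_q; q++) {
		if (rte_eth_rx_queue_setup(port_id, q, 512,
				rte_eth_dev_socket_id(port_id), NULL, mp) != 0)
			return -1;
		if (rte_eth_tx_queue_setup(port_id, q, 512,
				rte_eth_dev_socket_id(port_id), NULL) != 0)
			return -1;
	}
	return rte_eth_dev_start(port_id);
}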
* [dpdk-dev] [PATCH] net/enic: add a Tx prepare handler
2018-01-23 1:05 [dpdk-dev] [PATCH] net/enic: set L4 checksum flags for IPv6 packets John Daley
2018-01-23 1:05 ` [dpdk-dev] [PATCH] net/enic: fix segfault due to static max number of queues John Daley
@ 2018-01-23 1:05 ` John Daley
2018-01-24 17:20 ` Ferruh Yigit
2018-01-24 17:18 ` [dpdk-dev] [PATCH] net/enic: set L4 checksum flags for IPv6 packets Ferruh Yigit
2018-01-24 17:48 ` Ferruh Yigit
3 siblings, 1 reply; 10+ messages in thread
From: John Daley @ 2018-01-23 1:05 UTC (permalink / raw)
To: ferruh.yigit; +Cc: dev, Hyong Youb Kim
From: Hyong Youb Kim <hyonkim@cisco.com>
Like most NICs, this hardware (Cisco VIC) also requires partial
checksum in the packet for checksum offload and TSO. So, add
the tx_pkt_prepare handler like other PMDs do.
Technically, VIC has an offload mode that does not require partial
checksum for non-TSO packets. But, it has no such mode for TSO
packets, making tx_pkt_prepare unavoidable.
Signed-off-by: Hyong Youb Kim <hyonkim@cisco.com>
Reviewed-by: John Daley <johndale@cisco.com>
---
drivers/net/enic/enic.h | 2 ++
drivers/net/enic/enic_ethdev.c | 1 +
drivers/net/enic/enic_rxtx.c | 42 ++++++++++++++++++++++++++++++++++++++++++
3 files changed, 45 insertions(+)
diff --git a/drivers/net/enic/enic.h b/drivers/net/enic/enic.h
index c76a8f0af..c083985ee 100644
--- a/drivers/net/enic/enic.h
+++ b/drivers/net/enic/enic.h
@@ -268,6 +268,8 @@ uint16_t enic_dummy_recv_pkts(void *rx_queue,
uint16_t nb_pkts);
uint16_t enic_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
uint16_t nb_pkts);
+uint16_t enic_prep_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
+ uint16_t nb_pkts);
int enic_set_mtu(struct enic *enic, uint16_t new_mtu);
int enic_link_update(struct enic *enic);
void enic_fdir_info(struct enic *enic);
diff --git a/drivers/net/enic/enic_ethdev.c b/drivers/net/enic/enic_ethdev.c
index 2c356dc27..c46d592ed 100644
--- a/drivers/net/enic/enic_ethdev.c
+++ b/drivers/net/enic/enic_ethdev.c
@@ -641,6 +641,7 @@ static int eth_enicpmd_dev_init(struct rte_eth_dev *eth_dev)
eth_dev->dev_ops = &enicpmd_eth_dev_ops;
eth_dev->rx_pkt_burst = &enic_recv_pkts;
eth_dev->tx_pkt_burst = &enic_xmit_pkts;
+ eth_dev->tx_pkt_prepare = &enic_prep_pkts;
pdev = RTE_ETH_DEV_TO_PCI(eth_dev);
rte_eth_copy_pci_info(eth_dev, pdev);
diff --git a/drivers/net/enic/enic_rxtx.c b/drivers/net/enic/enic_rxtx.c
index 8157697a0..2fe5a3fa3 100644
--- a/drivers/net/enic/enic_rxtx.c
+++ b/drivers/net/enic/enic_rxtx.c
@@ -5,6 +5,7 @@
#include <rte_mbuf.h>
#include <rte_ethdev_driver.h>
+#include <rte_net.h>
#include <rte_prefetch.h>
#include "enic_compat.h"
@@ -14,6 +15,15 @@
#include <rte_ip.h>
#include <rte_tcp.h>
+#define ENIC_TX_OFFLOAD_MASK ( \
+ PKT_TX_VLAN_PKT | \
+ PKT_TX_IP_CKSUM | \
+ PKT_TX_L4_MASK | \
+ PKT_TX_TCP_SEG)
+
+#define ENIC_TX_OFFLOAD_NOTSUP_MASK \
+ (PKT_TX_OFFLOAD_MASK ^ ENIC_TX_OFFLOAD_MASK)
+
#define RTE_PMD_USE_PREFETCH
#ifdef RTE_PMD_USE_PREFETCH
@@ -433,6 +443,38 @@ unsigned int enic_cleanup_wq(__rte_unused struct enic *enic, struct vnic_wq *wq)
return 0;
}
+uint16_t enic_prep_pkts(__rte_unused void *tx_queue, struct rte_mbuf **tx_pkts,
+ uint16_t nb_pkts)
+{
+ int32_t ret;
+ uint16_t i;
+ uint64_t ol_flags;
+ struct rte_mbuf *m;
+
+ for (i = 0; i != nb_pkts; i++) {
+ m = tx_pkts[i];
+ ol_flags = m->ol_flags;
+ if (ol_flags & ENIC_TX_OFFLOAD_NOTSUP_MASK) {
+ rte_errno = -ENOTSUP;
+ return i;
+ }
+#ifdef RTE_LIBRTE_ETHDEV_DEBUG
+ ret = rte_validate_tx_offload(m);
+ if (ret != 0) {
+ rte_errno = ret;
+ return i;
+ }
+#endif
+ ret = rte_net_intel_cksum_prepare(m);
+ if (ret != 0) {
+ rte_errno = ret;
+ return i;
+ }
+ }
+
+ return i;
+}
+
uint16_t enic_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
uint16_t nb_pkts)
{
--
2.12.0
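
A hedged transmit-path sketch (not from the patch) showing where the new handler
fits: the application calls rte_eth_tx_prepare() before rte_eth_tx_burst(), which
for enic invokes enic_prep_pkts() to fill in the partial checksums the VIC expects
for checksum offload and TSO. The port/queue ids are assumptions.

#include <rte_ethdev.h>
#include <rte_mbuf.h>

static uint16_t
send_burst(uint16_t port_id, uint16_t queue_id,
	   struct rte_mbuf **pkts, uint16_t nb)
{
	uint16_t nb_prep;

	/* Runs the PMD's tx_pkt_prepare callback (enic_prep_pkts here). */
	nb_prep = rte_eth_tx_prepare(port_id, queue_id, pkts, nb);
	/* If nb_prep < nb, pkts[nb_prep] was rejected and rte_errno says why,
	 * for example an offload flag in ENIC_TX_OFFLOAD_NOTSUP_MASK; only
	 * the prepared packets are handed to tx_burst below.
	 */
	return rte_eth_tx_burst(port_id, queue_id, pkts, nb_prep);
}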
* Re: [dpdk-dev] [PATCH] net/enic: set L4 checksum flags for IPv6 packets
2018-01-23 1:05 [dpdk-dev] [PATCH] net/enic: set L4 checksum flags for IPv6 packets John Daley
2018-01-23 1:05 ` [dpdk-dev] [PATCH] net/enic: fix segfault due to static max number of queues John Daley
2018-01-23 1:05 ` [dpdk-dev] [PATCH] net/enic: add a Tx prepare handler John Daley
@ 2018-01-24 17:18 ` Ferruh Yigit
2018-01-24 17:30 ` Hyong Youb Kim
2018-01-24 17:48 ` Ferruh Yigit
3 siblings, 1 reply; 10+ messages in thread
From: Ferruh Yigit @ 2018-01-24 17:18 UTC (permalink / raw)
To: John Daley; +Cc: dev, Hyong Youb Kim
On 1/23/2018 1:05 AM, John Daley wrote:
> From: Hyong Youb Kim <hyonkim@cisco.com>
>
> enic_cq_rx_to_pkt_flags() currently sets checksum good/bad flags only
> for IPv4. The hardware actually validates the TCP/UDP checksum of
> IPv6 packets too. Set PKT_RX_L4_CKSUM_{GOOD,BAD} accordingly.
>
> Signed-off-by: Hyong Youb Kim <hyonkim@cisco.com>
> Reviewed-by: John Daley <johndale@cisco.com>
> ---
> drivers/net/enic/enic_rxtx.c | 4 ++--
> 1 file changed, 2 insertions(+), 2 deletions(-)
>
> diff --git a/drivers/net/enic/enic_rxtx.c b/drivers/net/enic/enic_rxtx.c
> index 98902caa0..8157697a0 100644
> --- a/drivers/net/enic/enic_rxtx.c
> +++ b/drivers/net/enic/enic_rxtx.c
> @@ -185,14 +185,14 @@ enic_cq_rx_to_pkt_flags(struct cq_desc *cqd, struct rte_mbuf *mbuf)
> }
>
> /* checksum flags */
> - if (mbuf->packet_type & RTE_PTYPE_L3_IPV4) {
> + if (mbuf->packet_type & (RTE_PTYPE_L3_IPV4 | RTE_PTYPE_L3_IPV6)) {
> if (!enic_cq_rx_desc_csum_not_calc(cqrd)) {
> uint32_t l4_flags;
> l4_flags = mbuf->packet_type & RTE_PTYPE_L4_MASK;
>
> if (enic_cq_rx_desc_ipv4_csum_ok(cqrd))
> pkt_flags |= PKT_RX_IP_CKSUM_GOOD;
> - else
> + else if (mbuf->packet_type & RTE_PTYPE_L3_IPV4)
This looks like conflicting with commit log.
Is pkt_flags intentionally not set for this case?
If so can you update commit log to say only PKT_RX_IP_CKSUM_GOOD set for ipv6?
> pkt_flags |= PKT_RX_IP_CKSUM_BAD;
>
> if (l4_flags == RTE_PTYPE_L4_UDP ||
>
* Re: [dpdk-dev] [PATCH] net/enic: fix segfault due to static max number of queues
2018-01-23 1:05 ` [dpdk-dev] [PATCH] net/enic: fix segfault due to static max number of queues John Daley
@ 2018-01-24 17:19 ` Ferruh Yigit
0 siblings, 0 replies; 10+ messages in thread
From: Ferruh Yigit @ 2018-01-24 17:19 UTC (permalink / raw)
To: John Daley; +Cc: dev, Hyong Youb Kim, stable
On 1/23/2018 1:05 AM, John Daley wrote:
> From: Hyong Youb Kim <hyonkim@cisco.com>
>
> ENIC_CQ_MAX, ENIC_WQ_MAX and others are arbitrary values that
> prevent the app from using more queues when they are available on
> hardware. Remove them and dynamically allocate vnic_cq and such
> arrays to accommodate all available hardware queues.
>
> As a side effect of removing ENIC_CQ_MAX, this commit fixes a segfault
> that would happen when the app requests more than 16 CQs, because
> enic_set_vnic_res() does not consider ENIC_CQ_MAX. For example, the
> following command causes a crash.
>
> testpmd -- --rxq=16 --txq=16
>
> Fixes: ce93d3c36db0 ("net/enic: fix resource check failures when bonding devices")
> Cc: stable@dpdk.org
>
> Signed-off-by: Hyong Youb Kim <hyonkim@cisco.com>
> Reviewed-by: John Daley <johndale@cisco.com>
Applied to dpdk-next-net/master, thanks.
* Re: [dpdk-dev] [PATCH] net/enic: add a Tx prepare handler
2018-01-23 1:05 ` [dpdk-dev] [PATCH] net/enic: add a Tx prepare handler John Daley
@ 2018-01-24 17:20 ` Ferruh Yigit
0 siblings, 0 replies; 10+ messages in thread
From: Ferruh Yigit @ 2018-01-24 17:20 UTC (permalink / raw)
To: John Daley; +Cc: dev, Hyong Youb Kim
On 1/23/2018 1:05 AM, John Daley wrote:
> From: Hyong Youb Kim <hyonkim@cisco.com>
>
> Like most NICs, this hardware (Cisco VIC) also requires partial
> checksum in the packet for checksum offload and TSO. So, add
> the tx_pkt_prepare handler like other PMDs do.
>
> Technically, VIC has an offload mode that does not require partial
> checksum for non-TSO packets. But, it has no such mode for TSO
> packets, making tx_pkt_prepare unavoidable.
>
> Signed-off-by: Hyong Youb Kim <hyonkim@cisco.com>
> Reviewed-by: John Daley <johndale@cisco.com>
Applied to dpdk-next-net/master, thanks.
* Re: [dpdk-dev] [PATCH] net/enic: set L4 checksum flags for IPv6 packets
2018-01-24 17:18 ` [dpdk-dev] [PATCH] net/enic: set L4 checksum flags for IPv6 packets Ferruh Yigit
@ 2018-01-24 17:30 ` Hyong Youb Kim
2018-01-24 17:45 ` Ferruh Yigit
0 siblings, 1 reply; 10+ messages in thread
From: Hyong Youb Kim @ 2018-01-24 17:30 UTC (permalink / raw)
To: Ferruh Yigit; +Cc: John Daley, dev
On Wed, Jan 24, 2018 at 05:18:37PM +0000, Ferruh Yigit wrote:
> On 1/23/2018 1:05 AM, John Daley wrote:
> > From: Hyong Youb Kim <hyonkim@cisco.com>
> >
> > enic_cq_rx_to_pkt_flags() currently sets checksum good/bad flags only
> > for IPv4. The hardware actually validates the TCP/UDP checksum of
> > IPv6 packets too. Set PKT_RX_L4_CKSUM_{GOOD,BAD} accordingly.
[...]
> > /* checksum flags */
> > - if (mbuf->packet_type & RTE_PTYPE_L3_IPV4) {
> > + if (mbuf->packet_type & (RTE_PTYPE_L3_IPV4 | RTE_PTYPE_L3_IPV6)) {
> > if (!enic_cq_rx_desc_csum_not_calc(cqrd)) {
> > uint32_t l4_flags;
> > l4_flags = mbuf->packet_type & RTE_PTYPE_L4_MASK;
> >
> > if (enic_cq_rx_desc_ipv4_csum_ok(cqrd))
> > pkt_flags |= PKT_RX_IP_CKSUM_GOOD;
> > - else
> > + else if (mbuf->packet_type & RTE_PTYPE_L3_IPV4)
>
> This looks like conflicting with commit log.
> Is pkt_flags intentionally not set for this case?
> If so can you update commit log to say only PKT_RX_IP_CKSUM_GOOD set for ipv6?
>
> > pkt_flags |= PKT_RX_IP_CKSUM_BAD;
Yes, it is intentional. IPv6 has no IP header checksum, and
PKT_RX_IP_CKSUM_{GOOD,BAD} does not apply. So, the code does not set
PKT_RX_IP_CKSUM for IPv6. As the commit log says, for IPv6, we are
only setting L4 checksum flags.
-Hyong
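
Illustrative only: a small Rx-side helper, consistent with the explanation above,
that consults the IP checksum flags solely when the packet type is IPv4, since for
IPv6 only the L4 checksum flags carry information. The helper name is an assumption.

#include <rte_mbuf.h>

static int
ip_csum_failed(const struct rte_mbuf *m)
{
	if (!(m->packet_type & RTE_PTYPE_L3_IPV4))
		return 0; /* IPv6: no IP header checksum to fail */
	return (m->ol_flags & PKT_RX_IP_CKSUM_MASK) == PKT_RX_IP_CKSUM_BAD;
}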
* Re: [dpdk-dev] [PATCH] net/enic: set L4 checksum flags for IPv6 packets
2018-01-24 17:30 ` Hyong Youb Kim
@ 2018-01-24 17:45 ` Ferruh Yigit
2018-01-24 17:48 ` Hyong Youb Kim
0 siblings, 1 reply; 10+ messages in thread
From: Ferruh Yigit @ 2018-01-24 17:45 UTC (permalink / raw)
To: Hyong Youb Kim; +Cc: John Daley, dev
On 1/24/2018 5:30 PM, Hyong Youb Kim wrote:
> On Wed, Jan 24, 2018 at 05:18:37PM +0000, Ferruh Yigit wrote:
>> On 1/23/2018 1:05 AM, John Daley wrote:
>>> From: Hyong Youb Kim <hyonkim@cisco.com>
>>>
>>> enic_cq_rx_to_pkt_flags() currently sets checksum good/bad flags only
>>> for IPv4. The hardware actually validates the TCP/UDP checksum of
>>> IPv6 packets too. Set PKT_RX_L4_CKSUM_{GOOD,BAD} accordingly.
> [...]
>>> /* checksum flags */
>>> - if (mbuf->packet_type & RTE_PTYPE_L3_IPV4) {
>>> + if (mbuf->packet_type & (RTE_PTYPE_L3_IPV4 | RTE_PTYPE_L3_IPV6)) {
>>> if (!enic_cq_rx_desc_csum_not_calc(cqrd)) {
>>> uint32_t l4_flags;
>>> l4_flags = mbuf->packet_type & RTE_PTYPE_L4_MASK;
>>>
>>> if (enic_cq_rx_desc_ipv4_csum_ok(cqrd))
>>> pkt_flags |= PKT_RX_IP_CKSUM_GOOD;
>>> - else
>>> + else if (mbuf->packet_type & RTE_PTYPE_L3_IPV4)
>>
>> This looks like conflicting with commit log.
>> Is pkt_flags intentionally not set for this case?
>> If so can you update commit log to say only PKT_RX_IP_CKSUM_GOOD set for ipv6?
>>
>>> pkt_flags |= PKT_RX_IP_CKSUM_BAD;
>
> Yes, it is intentional. IPv6 has no IP header checksum, and
> PKT_RX_IP_CKSUM_{GOOD,BAD} does not apply. So, the code does not set
> PKT_RX_IP_CKSUM for IPv6. As the commit log says, for IPv6, we are
> only setting L4 checksum flags.
Ahh, I see.
And I assume enic_cq_rx_desc_ipv4_csum_ok() is always false for IPv6.
>
> -Hyong
>
* Re: [dpdk-dev] [PATCH] net/enic: set L4 checksum flags for IPv6 packets
2018-01-24 17:45 ` Ferruh Yigit
@ 2018-01-24 17:48 ` Hyong Youb Kim
0 siblings, 0 replies; 10+ messages in thread
From: Hyong Youb Kim @ 2018-01-24 17:48 UTC (permalink / raw)
To: Ferruh Yigit; +Cc: John Daley, dev
On Wed, Jan 24, 2018 at 05:45:48PM +0000, Ferruh Yigit wrote:
> On 1/24/2018 5:30 PM, Hyong Youb Kim wrote:
> > On Wed, Jan 24, 2018 at 05:18:37PM +0000, Ferruh Yigit wrote:
> >> On 1/23/2018 1:05 AM, John Daley wrote:
> >>> From: Hyong Youb Kim <hyonkim@cisco.com>
> >>>
> >>> enic_cq_rx_to_pkt_flags() currently sets checksum good/bad flags only
> >>> for IPv4. The hardware actually validates the TCP/UDP checksum of
> >>> IPv6 packets too. Set PKT_RX_L4_CKSUM_{GOOD,BAD} accordingly.
> > [...]
> >>> /* checksum flags */
> >>> - if (mbuf->packet_type & RTE_PTYPE_L3_IPV4) {
> >>> + if (mbuf->packet_type & (RTE_PTYPE_L3_IPV4 | RTE_PTYPE_L3_IPV6)) {
> >>> if (!enic_cq_rx_desc_csum_not_calc(cqrd)) {
> >>> uint32_t l4_flags;
> >>> l4_flags = mbuf->packet_type & RTE_PTYPE_L4_MASK;
> >>>
> >>> if (enic_cq_rx_desc_ipv4_csum_ok(cqrd))
> >>> pkt_flags |= PKT_RX_IP_CKSUM_GOOD;
> >>> - else
> >>> + else if (mbuf->packet_type & RTE_PTYPE_L3_IPV4)
> >>
> >> This looks like conflicting with commit log.
> >> Is pkt_flags intentionally not set for this case?
> >> If so can you update commit log to say only PKT_RX_IP_CKSUM_GOOD set for ipv6?
> >>
> >>> pkt_flags |= PKT_RX_IP_CKSUM_BAD;
> >
> > Yes, it is intentional. IPv6 has no IP header checksum, and
> > PKT_RX_IP_CKSUM_{GOOD,BAD} does not apply. So, the code does not set
> > PKT_RX_IP_CKSUM for IPv6. As the commit log says, for IPv6, we are
> > only setting L4 checksum flags.
>
> Ahh, I see.
> And I assume enic_cq_rx_desc_ipv4_csum_ok() is always false for IPv6.
>
Yes, that is correct. enic_cq_rx_desc_ipv4_csum_ok() returns true only
for IPv4. It is a bit confusing.
-Hyong
* Re: [dpdk-dev] [PATCH] net/enic: set L4 checksum flags for IPv6 packets
2018-01-23 1:05 [dpdk-dev] [PATCH] net/enic: set L4 checksum flags for IPv6 packets John Daley
` (2 preceding siblings ...)
2018-01-24 17:18 ` [dpdk-dev] [PATCH] net/enic: set L4 checksum flags for IPv6 packets Ferruh Yigit
@ 2018-01-24 17:48 ` Ferruh Yigit
3 siblings, 0 replies; 10+ messages in thread
From: Ferruh Yigit @ 2018-01-24 17:48 UTC (permalink / raw)
To: John Daley; +Cc: dev, Hyong Youb Kim
On 1/23/2018 1:05 AM, John Daley wrote:
> From: Hyong Youb Kim <hyonkim@cisco.com>
>
> enic_cq_rx_to_pkt_flags() currently sets checksum good/bad flags only
> for IPv4. The hardware actually validates the TCP/UDP checksum of
> IPv6 packets too. Set PKT_RX_L4_CKSUM_{GOOD,BAD} accordingly.
>
> Signed-off-by: Hyong Youb Kim <hyonkim@cisco.com>
> Reviewed-by: John Daley <johndale@cisco.com>
Applied to dpdk-next-net/master, thanks.