* [dpdk-stable] [PATCH] enic: check for nb_free > 0
@ 2017-08-02 18:02 Aaron Conole
2017-08-02 18:34 ` John Daley (johndale)
From: Aaron Conole @ 2017-08-02 18:02 UTC
To: dev; +Cc: stable, John Daley, Bruce Richardson
Occasionally, the number of packets to free from the work queue ends
exactly on a boundary, leaving nb_free = 0 and pool = NULL. This causes
a segfault as follows:
(gdb) bt
#0  rte_mempool_default_cache (mp=<optimized out>, mp=<optimized out>,
    lcore_id=<optimized out>)
    at /usr/src/debug/openvswitch-2.6.1/dpdk-16.11/x86_64-native-linuxapp-gcc/include/rte_mempool.h:1017
#1  rte_mempool_put_bulk (n=0, obj_table=0x7f10deff2530, mp=0x0)
    at /usr/src/debug/openvswitch-2.6.1/dpdk-16.11/x86_64-native-linuxapp-gcc/include/rte_mempool.h:1174
#2  enic_free_wq_bufs (wq=wq@entry=0x7efabffcd5b0,
    completed_index=completed_index@entry=33)
    at /usr/src/debug/openvswitch-2.6.1/dpdk-16.11/drivers/net/enic/enic_rxtx.c:429
#3  0x00007f11e9c86e17 in enic_cleanup_wq (enic=<optimized out>,
    wq=wq@entry=0x7efabffcd5b0)
    at /usr/src/debug/openvswitch-2.6.1/dpdk-16.11/drivers/net/enic/enic_rxtx.c:442
#4  0x00007f11e9c86e5f in enic_xmit_pkts (tx_queue=0x7efabffcd5b0,
    tx_pkts=0x7f10deffb1a8, nb_pkts=<optimized out>)
    at /usr/src/debug/openvswitch-2.6.1/dpdk-16.11/drivers/net/enic/enic_rxtx.c:470
#5  0x00007f11e9e147ad in rte_eth_tx_burst (nb_pkts=<optimized out>,
    tx_pkts=0x7f10deffb1a8, queue_id=0, port_id=<optimized out>)
This commit makes the enic wq driver match the other drivers that call
the bulk free, by first checking that there are actually packets to free.
Fixes: 36935afbc53c ("net/enic: refactor Tx mbuf recycling")
CC: stable@dpdk.org
Reported-by: Vincent S. Cojot <vcojot@redhat.com>
Reported-at: https://bugzilla.redhat.com/show_bug.cgi?id=1468631
Signed-off-by: Aaron Conole <aconole@redhat.com>
---
drivers/net/enic/enic_rxtx.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/drivers/net/enic/enic_rxtx.c b/drivers/net/enic/enic_rxtx.c
index 5867acf..a39172f 100644
--- a/drivers/net/enic/enic_rxtx.c
+++ b/drivers/net/enic/enic_rxtx.c
@@ -503,7 +503,8 @@ static inline void enic_free_wq_bufs(struct vnic_wq *wq, u16 completed_index)
 		tail_idx = enic_ring_incr(desc_count, tail_idx);
 	}
 
-	rte_mempool_put_bulk(pool, (void **)free, nb_free);
+	if (nb_free > 0)
+		rte_mempool_put_bulk(pool, (void **)free, nb_free);
 
 	wq->tail_idx = tail_idx;
 	wq->ring.desc_avail += nb_to_free;
--
2.9.4
* Re: [dpdk-stable] [PATCH] enic: check for nb_free > 0
2017-08-02 18:02 [dpdk-stable] [PATCH] enic: check for nb_free > 0 Aaron Conole
@ 2017-08-02 18:34 ` John Daley (johndale)
2017-08-03 20:58 ` Thomas Monjalon
From: John Daley (johndale) @ 2017-08-02 18:34 UTC
To: Aaron Conole, dev; +Cc: stable, Bruce Richardson
> -----Original Message-----
> From: Aaron Conole [mailto:aconole@redhat.com]
> Sent: Wednesday, August 02, 2017 11:02 AM
> To: dev@dpdk.org
> Cc: stable@dpdk.org; John Daley (johndale) <johndale@cisco.com>; Bruce
> Richardson <bruce.richardson@intel.com>
> Subject: [PATCH] enic: check for nb_free > 0
>
> Occasionally, the number of packets to free from the work queue ends
> exactly on a boundary, leaving nb_free = 0 and pool = NULL. This causes
> a segfault as follows:
>
> (gdb) bt
> #0 rte_mempool_default_cache (mp=<optimized out>, mp=<optimized
> out>,
> lcore_id=<optimized out>)
> at /usr/src/debug/openvswitch-2.6.1/dpdk-16.11/x86_64-native-
> linuxapp-gcc/include/rte_mempool.h:1017
> #1 rte_mempool_put_bulk (n=0, obj_table=0x7f10deff2530, mp=0x0)
> at /usr/src/debug/openvswitch-2.6.1/dpdk-16.11/x86_64-native-
> linuxapp-gcc/include/rte_mempool.h:1174
> #2 enic_free_wq_bufs (wq=wq@entry=0x7efabffcd5b0,
> completed_index=completed_index@entry=33)
> at /usr/src/debug/openvswitch-2.6.1/dpdk-
> 16.11/drivers/net/enic/enic_rxtx.c:429
> #3 0x00007f11e9c86e17 in enic_cleanup_wq (enic=<optimized out>,
> wq=wq@entry=0x7efabffcd5b0)
> at /usr/src/debug/openvswitch-2.6.1/dpdk-
> 16.11/drivers/net/enic/enic_rxtx.c:442
> #4 0x00007f11e9c86e5f in enic_xmit_pkts (tx_queue=0x7efabffcd5b0,
> tx_pkts=0x7f10deffb1a8, nb_pkts=<optimized out>)
> at /usr/src/debug/openvswitch-2.6.1/dpdk-
> 16.11/drivers/net/enic/enic_rxtx.c:470
> #5 0x00007f11e9e147ad in rte_eth_tx_burst (nb_pkts=<optimized out>,
> tx_pkts=0x7f10deffb1a8, queue_id=0, port_id=<optimized out>)
>
> This commit makes the enic wq driver match the other drivers that call
> the bulk free, by first checking that there are actually packets to free.
>
> Fixes: 36935afbc53c ("net/enic: refactor Tx mbuf recycling")
> CC: stable@dpdk.org
> Reported-by: Vincent S. Cojot <vcojot@redhat.com>
> Reported-at: https://bugzilla.redhat.com/show_bug.cgi?id=1468631
> Signed-off-by: Aaron Conole <aconole@redhat.com>
> ---
> drivers/net/enic/enic_rxtx.c | 3 ++-
> 1 file changed, 2 insertions(+), 1 deletion(-)
>
> diff --git a/drivers/net/enic/enic_rxtx.c b/drivers/net/enic/enic_rxtx.c index
> 5867acf..a39172f 100644
> --- a/drivers/net/enic/enic_rxtx.c
> +++ b/drivers/net/enic/enic_rxtx.c
> @@ -503,7 +503,8 @@ static inline void enic_free_wq_bufs(struct vnic_wq
> *wq, u16 completed_index)
> tail_idx = enic_ring_incr(desc_count, tail_idx);
> }
>
> - rte_mempool_put_bulk(pool, (void **)free, nb_free);
> + if (nb_free > 0)
> + rte_mempool_put_bulk(pool, (void **)free, nb_free);
>
> wq->tail_idx = tail_idx;
> wq->ring.desc_avail += nb_to_free;
> --
> 2.9.4
Reviewed-by: John Daley <johndale@cisco.com>
Thank you!
johnd
* Re: [dpdk-stable] [PATCH] enic: check for nb_free > 0
2017-08-02 18:34 ` John Daley (johndale)
@ 2017-08-03 20:58 ` Thomas Monjalon
From: Thomas Monjalon @ 2017-08-03 20:58 UTC
To: Aaron Conole; +Cc: stable, John Daley (johndale), dev, Bruce Richardson
> > Occasionally, the number of packets to free from the work queue ends
> > exactly on a boundary, leaving nb_free = 0 and pool = NULL. This causes
> > a segfault as follows:
> >
> > (gdb) bt
> > #0 rte_mempool_default_cache (mp=<optimized out>, mp=<optimized
> > out>,
> > lcore_id=<optimized out>)
> > at /usr/src/debug/openvswitch-2.6.1/dpdk-16.11/x86_64-native-
> > linuxapp-gcc/include/rte_mempool.h:1017
> > #1 rte_mempool_put_bulk (n=0, obj_table=0x7f10deff2530, mp=0x0)
> > at /usr/src/debug/openvswitch-2.6.1/dpdk-16.11/x86_64-native-
> > linuxapp-gcc/include/rte_mempool.h:1174
> > #2 enic_free_wq_bufs (wq=wq@entry=0x7efabffcd5b0,
> > completed_index=completed_index@entry=33)
> > at /usr/src/debug/openvswitch-2.6.1/dpdk-
> > 16.11/drivers/net/enic/enic_rxtx.c:429
> > #3 0x00007f11e9c86e17 in enic_cleanup_wq (enic=<optimized out>,
> > wq=wq@entry=0x7efabffcd5b0)
> > at /usr/src/debug/openvswitch-2.6.1/dpdk-
> > 16.11/drivers/net/enic/enic_rxtx.c:442
> > #4 0x00007f11e9c86e5f in enic_xmit_pkts (tx_queue=0x7efabffcd5b0,
> > tx_pkts=0x7f10deffb1a8, nb_pkts=<optimized out>)
> > at /usr/src/debug/openvswitch-2.6.1/dpdk-
> > 16.11/drivers/net/enic/enic_rxtx.c:470
> > #5 0x00007f11e9e147ad in rte_eth_tx_burst (nb_pkts=<optimized out>,
> > tx_pkts=0x7f10deffb1a8, queue_id=0, port_id=<optimized out>)
> >
> > This commit makes the enic wq driver match the other drivers that call
> > the bulk free, by first checking that there are actually packets to free.
> >
> > Fixes: 36935afbc53c ("net/enic: refactor Tx mbuf recycling")
> > CC: stable@dpdk.org
> > Reported-by: Vincent S. Cojot <vcojot@redhat.com>
> > Reported-at: https://bugzilla.redhat.com/show_bug.cgi?id=1468631
> > Signed-off-by: Aaron Conole <aconole@redhat.com>
>
> Reviewed-by: John Daley <johndale@cisco.com>
Applied, thanks
With more context in the title:
net/enic: fix crash when freeing 0 packet to mempool