* [dpdk-dev] [PATCH] virtio: fix segfault when transmit pkts
@ 2016-04-21 12:36 Jianfeng Tan
2016-04-21 22:44 ` Yuanhan Liu
` (2 more replies)
0 siblings, 3 replies; 12+ messages in thread
From: Jianfeng Tan @ 2016-04-21 12:36 UTC (permalink / raw)
To: dev; +Cc: huawei.xie, yuanhan.liu, Jianfeng Tan
Issue: when using virtio nic to transmit pkts, it causes segment fault.
How to reproduce:
a. start testpmd with vhost.
$testpmd -c 0x3 -n 4 --socket-mem 1024,0 --no-pci \
--vdev 'eth_vhost0,iface=/tmp/sock0,queues=1' -- -i --nb-cores=1
b. start a qemu with a virtio nic connected with the vhost-user port.
$qemu -smp cores=2,sockets=1 -cpu host -enable-kvm vm-0.img -vnc :5 -m 4G \
-object memory-backend-file,id=mem,size=4096M,mem-path=<path>,share=on \
-numa node,memdev=mem -mem-prealloc \
-chardev socket,id=char1,path=$sock_vhost \
-netdev type=vhost-user,id=net1,chardev=char1 \
-device virtio-net-pci,netdev=net1,mac=00:01:02:03:04:05
c. enable testpmd on the host.
testpmd> set fwd io
testpmd> start
d. start testpmd in VM.
$testpmd -c 0x3 -n 4 -m 1024 -- -i --disable-hw-vlan-filter --txqflags=0xf01
testpmd> set fwd txonly
testpmd> start
How to fix: this bug is because inside virtqueue_enqueue_xmit(), the flag of
desc has been updated inside the do {} while (); and after the loop, all descs
could have run out, so idx is VQ_RING_DESC_CHAIN_END (32768), use this idx to
reference the start_dp array will lead to segment fault.
Signed-off-by: Jianfeng Tan <jianfeng.tan@intel.com>
---
drivers/net/virtio/virtio_rxtx.c | 2 --
1 file changed, 2 deletions(-)
diff --git a/drivers/net/virtio/virtio_rxtx.c b/drivers/net/virtio/virtio_rxtx.c
index ef21d8e..432aeab 100644
--- a/drivers/net/virtio/virtio_rxtx.c
+++ b/drivers/net/virtio/virtio_rxtx.c
@@ -271,8 +271,6 @@ virtqueue_enqueue_xmit(struct virtqueue *txvq, struct rte_mbuf *cookie,
idx = start_dp[idx].next;
} while ((cookie = cookie->next) != NULL);
- start_dp[idx].flags &= ~VRING_DESC_F_NEXT;
-
if (use_indirect)
idx = txvq->vq_ring.desc[head_idx].next;
--
2.1.4
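
The "How to fix" paragraph above amounts to an out-of-bounds index once the descriptor chain is exhausted. Below is a minimal, self-contained sketch of that failure mode; it is not the driver code, and the 256-entry ring size and the fill pattern are assumptions made purely for illustration:

```c
#include <stdint.h>
#include <stdio.h>

#define VQ_RING_DESC_CHAIN_END 32768  /* "no next descriptor" sentinel */
#define VQ_SIZE                256    /* assumed Tx ring size */
#define VRING_DESC_F_NEXT      1

struct vring_desc {
    uint16_t flags;
    uint16_t next;
};

int main(void)
{
    struct vring_desc start_dp[VQ_SIZE];
    uint16_t idx = 0;

    /* Free descriptors are linked into a chain; the last one points
     * at the sentinel, as the driver initializes the ring. */
    for (int i = 0; i < VQ_SIZE; i++) {
        start_dp[i].flags = 0;
        start_dp[i].next = (i == VQ_SIZE - 1) ?
                           VQ_RING_DESC_CHAIN_END : (uint16_t)(i + 1);
    }

    /* Consume every free descriptor, as txonly traffic does when the
     * host dequeues more slowly than the guest transmits.  Flags are
     * finalized inside the loop, mirroring the do {} while () in
     * virtqueue_enqueue_xmit(). */
    for (int used = 0; used < VQ_SIZE; used++) {
        start_dp[idx].flags =
            (used < VQ_SIZE - 1) ? VRING_DESC_F_NEXT : 0;
        idx = start_dp[idx].next;
    }

    printf("idx after the loop = %u\n", (unsigned)idx);  /* 32768 */

    /* The statement removed by the patch would now execute
     *     start_dp[idx].flags &= ~VRING_DESC_F_NEXT;
     * i.e. write to start_dp[32768] in a 256-entry array -> crash. */
    return 0;
}
```

Compiled and run, the sketch prints 32768, which is the index the removed statement would have used to write into a 256-entry array.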
* Re: [dpdk-dev] [PATCH] virtio: fix segfault when transmit pkts
2016-04-21 12:36 [dpdk-dev] [PATCH] virtio: fix segfault when transmit pkts Jianfeng Tan
@ 2016-04-21 22:44 ` Yuanhan Liu
2016-04-22 14:23 ` Xie, Huawei
2016-04-25 2:37 ` [dpdk-dev] [PATCH v2] " Jianfeng Tan
2016-04-26 4:48 ` [dpdk-dev] [PATCH] " Stephen Hemminger
2 siblings, 1 reply; 12+ messages in thread
From: Yuanhan Liu @ 2016-04-21 22:44 UTC (permalink / raw)
To: Jianfeng Tan; +Cc: dev, huawei.xie
On Thu, Apr 21, 2016 at 12:36:10PM +0000, Jianfeng Tan wrote:
> Issue: when using virtio nic to transmit pkts, it causes segment fault.
Jianfeng,
It's good to describe the issue, the steps to reproduce it, and how to
fix it. But you don't have to present it like filling in a form. Instead,
you could try to describe it in plain English.
> How to reproduce:
> a. start testpmd with vhost.
> $testpmd -c 0x3 -n 4 --socket-mem 1024,0 --no-pci \
> --vdev 'eth_vhost0,iface=/tmp/sock0,queues=1' -- -i --nb-cores=1
I personally would suggest adding some indentation (and some blank
lines), like
a. start testpmd with vhost.
$ testpmd -c 0x3 -n 4 ... \
--vdev '...' ...
b. something else ...
some more explanations.
And, do not quote a command line like "$testpmd ..."; written that way
it looks like a variable in bash. Instead, add a space after the "$",
like I did above.
> b. start a qemu with a virtio nic connected with the vhost-user port.
This should be enough. I didn't find any special options below,
therefore the above words are enough. However, if some specific
option introduces the bug, you could call it out explicitly.
> $qemu -smp cores=2,sockets=1 -cpu host -enable-kvm vm-0.img -vnc :5 -m 4G \
> -object memory-backend-file,id=mem,size=4096M,mem-path=<path>,share=on \
> -numa node,memdev=mem -mem-prealloc \
> -chardev socket,id=char1,path=$sock_vhost \
> -netdev type=vhost-user,id=net1,chardev=char1 \
> -device virtio-net-pci,netdev=net1,mac=00:01:02:03:04:05
> c. enable testpmd on the host.
> testpmd> set fwd io
> testpmd> start
> d. start testpmd in VM.
> $testpmd -c 0x3 -n 4 -m 1024 -- -i --disable-hw-vlan-filter --txqflags=0xf01
> testpmd> set fwd txonly
> testpmd> start
>
> How to fix: this bug is because inside virtqueue_enqueue_xmit(), the flag of
> desc has been updated inside the do {} while (); and after the loop, all descs
> could have run out, so idx is VQ_RING_DESC_CHAIN_END (32768), use this idx to
> reference the start_dp array will lead to segment fault.
You missed a Fixes line here.
> Signed-off-by: Jianfeng Tan <jianfeng.tan@intel.com>
> ---
> drivers/net/virtio/virtio_rxtx.c | 2 --
> 1 file changed, 2 deletions(-)
>
> diff --git a/drivers/net/virtio/virtio_rxtx.c b/drivers/net/virtio/virtio_rxtx.c
> index ef21d8e..432aeab 100644
> --- a/drivers/net/virtio/virtio_rxtx.c
> +++ b/drivers/net/virtio/virtio_rxtx.c
> @@ -271,8 +271,6 @@ virtqueue_enqueue_xmit(struct virtqueue *txvq, struct rte_mbuf *cookie,
> idx = start_dp[idx].next;
> } while ((cookie = cookie->next) != NULL);
>
> - start_dp[idx].flags &= ~VRING_DESC_F_NEXT;
This looks like a good fix to me. I'm just wondering why we never hit it
before, and in which specific case we hit it. Talking about that,
"set fwd txonly" seems highly suspicious.
--yliu
* Re: [dpdk-dev] [PATCH] virtio: fix segfault when transmit pkts
2016-04-21 22:44 ` Yuanhan Liu
@ 2016-04-22 14:23 ` Xie, Huawei
2016-04-25 1:58 ` Tan, Jianfeng
0 siblings, 1 reply; 12+ messages in thread
From: Xie, Huawei @ 2016-04-22 14:23 UTC (permalink / raw)
To: Yuanhan Liu, Tan, Jianfeng; +Cc: dev, Stephen Hemminger
On 4/22/2016 6:43 AM, Yuanhan Liu wrote:
> On Thu, Apr 21, 2016 at 12:36:10PM +0000, Jianfeng Tan wrote:
>> Issue: when using virtio nic to transmit pkts, it causes segment fault.
> Jianfeng,
>
> It's good to describe the issues, steps to reproduce it and how to fix
> it. But you don't have to tell it like filling a form. Instead, you
> could try to describe in plain English.
>
>> How to reproduce:
>> a. start testpmd with vhost.
>> $testpmd -c 0x3 -n 4 --socket-mem 1024,0 --no-pci \
>> --vdev 'eth_vhost0,iface=/tmp/sock0,queues=1' -- -i --nb-cores=1
> I personally would suggest to add some indentations (and some whitespace
> lines), like
>
> a. start testpmd with vhost.
> $ testpmd -c 0x3 -n 4 ... \
> --vdev '...' ...
>
> b. something else ...
> some more explanations.
>
> And, do not quote a command line like "$testpmd ...", it's like a var
> in base in this way. Instead, add a space after "$ ", like what I did
> above.
>
>> b. start a qemu with a virtio nic connected with the vhost-user port.
> This should be enough. I didn't find any special options below,
> therefore, above words is enough. However, if it's some specific
> option introduces a bug, you could claim it aloud then.
>
>> $qemu -smp cores=2,sockets=1 -cpu host -enable-kvm vm-0.img -vnc :5 -m 4G \
>> -object memory-backend-file,id=mem,size=4096M,mem-path=<path>,share=on \
>> -numa node,memdev=mem -mem-prealloc \
>> -chardev socket,id=char1,path=$sock_vhost \
>> -netdev type=vhost-user,id=net1,chardev=char1 \
>> -device virtio-net-pci,netdev=net1,mac=00:01:02:03:04:05
>> c. enable testpmd on the host.
>> testpmd> set fwd io
>> testpmd> start
>> d. start testpmd in VM.
>> $testpmd -c 0x3 -n 4 -m 1024 -- -i --disable-hw-vlan-filter --txqflags=0xf01
With txqflags=0xf01, the virtio PMD will use virtio_xmit_pkts_simple, in
which case virtqueue_enqueue_xmit() won't be called, so is this a typo?
>> testpmd> set fwd txonly
>> testpmd> start
>>
>> How to fix: this bug is because inside virtqueue_enqueue_xmit(), the flag of
>> desc has been updated inside the do {} while (); and after the loop, all descs
>> could have run out, so idx is VQ_RING_DESC_CHAIN_END (32768), use this idx to
>> reference the start_dp array will lead to segment fault.
> You missed a fix line here.
Yes, please include the commit that introduces this bug.
>
>> Signed-off-by: Jianfeng Tan <jianfeng.tan@intel.com>
>> ---
>> drivers/net/virtio/virtio_rxtx.c | 2 --
>> 1 file changed, 2 deletions(-)
>>
>> diff --git a/drivers/net/virtio/virtio_rxtx.c b/drivers/net/virtio/virtio_rxtx.c
>> index ef21d8e..432aeab 100644
>> --- a/drivers/net/virtio/virtio_rxtx.c
>> +++ b/drivers/net/virtio/virtio_rxtx.c
>> @@ -271,8 +271,6 @@ virtqueue_enqueue_xmit(struct virtqueue *txvq, struct rte_mbuf *cookie,
>> idx = start_dp[idx].next;
>> } while ((cookie = cookie->next) != NULL);
>>
>> - start_dp[idx].flags &= ~VRING_DESC_F_NEXT;
> This looks a good fix to me. I'm just wondering why we never met it
> before, and on which specific case do we meet it? Talking about that,
> seems like "set fwd txonly" is with high suspicious.
You will not hit this if you have more free descriptors than needed, so
it depends on the relative speed of virtio xmit and vhost dequeue.
Also, if the indirect feature is negotiated, it won't trigger.
However, without indirect, it seems that as long as we start testpmd with
txonly in the guest *first*, we can definitely trigger this. Jianfeng, could
you confirm this and emphasize the order in the commit message?
>
> --yliu
>
* Re: [dpdk-dev] [PATCH] virtio: fix segfault when transmit pkts
2016-04-22 14:23 ` Xie, Huawei
@ 2016-04-25 1:58 ` Tan, Jianfeng
0 siblings, 0 replies; 12+ messages in thread
From: Tan, Jianfeng @ 2016-04-25 1:58 UTC (permalink / raw)
To: Xie, Huawei, Yuanhan Liu; +Cc: dev, Stephen Hemminger
Hi Yuanhan & Huawei,
On 4/22/2016 10:23 PM, Xie, Huawei wrote:
> On 4/22/2016 6:43 AM, Yuanhan Liu wrote:
>> On Thu, Apr 21, 2016 at 12:36:10PM +0000, Jianfeng Tan wrote:
>>> Issue: when using virtio nic to transmit pkts, it causes segment fault.
>> Jianfeng,
>>
>> It's good to describe the issues, steps to reproduce it and how to fix
>> it. But you don't have to tell it like filling a form. Instead, you
>> could try to describe in plain English.
>>
>>> How to reproduce:
>>> a. start testpmd with vhost.
>>> $testpmd -c 0x3 -n 4 --socket-mem 1024,0 --no-pci \
>>> --vdev 'eth_vhost0,iface=/tmp/sock0,queues=1' -- -i --nb-cores=1
>> I personally would suggest to add some indentations (and some whitespace
>> lines), like
>>
>> a. start testpmd with vhost.
>> $ testpmd -c 0x3 -n 4 ... \
>> --vdev '...' ...
>>
>> b. something else ...
>> some more explanations.
>>
>> And, do not quote a command line like "$testpmd ...", it's like a var
>> in base in this way. Instead, add a space after "$ ", like what I did
>> above.
>>
>>> b. start a qemu with a virtio nic connected with the vhost-user port.
>> This should be enough. I didn't find any special options below,
>> therefore, above words is enough. However, if it's some specific
>> option introduces a bug, you could claim it aloud then.
>>
>>> $qemu -smp cores=2,sockets=1 -cpu host -enable-kvm vm-0.img -vnc :5 -m 4G \
>>> -object memory-backend-file,id=mem,size=4096M,mem-path=<path>,share=on \
>>> -numa node,memdev=mem -mem-prealloc \
>>> -chardev socket,id=char1,path=$sock_vhost \
>>> -netdev type=vhost-user,id=net1,chardev=char1 \
>>> -device virtio-net-pci,netdev=net1,mac=00:01:02:03:04:05
>>> c. enable testpmd on the host.
>>> testpmd> set fwd io
>>> testpmd> start
>>> d. start testpmd in VM.
>>> $testpmd -c 0x3 -n 4 -m 1024 -- -i --disable-hw-vlan-filter --txqflags=0xf01
> With txqflags=0xf01, virtio PMD will use virtio_xmit_pkts_simple, in
> which case the enqueue_xmit willn't called, so typo?
Since mrg_rxbuf is enabled by default, the virtio PMD will not use
virtio_xmit_pkts_simple.
>
>>> testpmd> set fwd txonly
>>> testpmd> start
>>>
>>> How to fix: this bug is because inside virtqueue_enqueue_xmit(), the flag of
>>> desc has been updated inside the do {} while (); and after the loop, all descs
>>> could have run out, so idx is VQ_RING_DESC_CHAIN_END (32768), use this idx to
>>> reference the start_dp array will lead to segment fault.
>> You missed a fix line here.
> Yes, please include the commit that introduces this bug.
Thanks to both of you for all the advice above. I'll send a v2.
>
>>> Signed-off-by: Jianfeng Tan <jianfeng.tan@intel.com>
>>> ---
>>> drivers/net/virtio/virtio_rxtx.c | 2 --
>>> 1 file changed, 2 deletions(-)
>>>
>>> diff --git a/drivers/net/virtio/virtio_rxtx.c b/drivers/net/virtio/virtio_rxtx.c
>>> index ef21d8e..432aeab 100644
>>> --- a/drivers/net/virtio/virtio_rxtx.c
>>> +++ b/drivers/net/virtio/virtio_rxtx.c
>>> @@ -271,8 +271,6 @@ virtqueue_enqueue_xmit(struct virtqueue *txvq, struct rte_mbuf *cookie,
>>> idx = start_dp[idx].next;
>>> } while ((cookie = cookie->next) != NULL);
>>>
>>> - start_dp[idx].flags &= ~VRING_DESC_F_NEXT;
>> This looks a good fix to me. I'm just wondering why we never met it
>> before, and on which specific case do we meet it? Talking about that,
>> seems like "set fwd txonly" is with high suspicious.
> You will not meet this if you have more free descriptors than needed, so
> this depends on the relative speed of virtio xmit and vhost dequeue.
> Also if indirect feature is negotiated, it willn't trigger.
> However, without indirect, seems as long as we start testpmd with txonly
> in guest `first`, we could definitely trigger this. Jianfeng, could you
> confirm this and emphasize the order in the commit message?
Yes, exactly. And I will add the scenario description to the commit message.
Thanks,
Jianfeng
>
>> --yliu
>>
>
* [dpdk-dev] [PATCH v2] virtio: fix segfault when transmit pkts
2016-04-21 12:36 [dpdk-dev] [PATCH] virtio: fix segfault when transmit pkts Jianfeng Tan
2016-04-21 22:44 ` Yuanhan Liu
@ 2016-04-25 2:37 ` Jianfeng Tan
2016-04-25 7:33 ` Xie, Huawei
2016-04-26 3:43 ` Yuanhan Liu
2016-04-26 4:48 ` [dpdk-dev] [PATCH] " Stephen Hemminger
2 siblings, 2 replies; 12+ messages in thread
From: Jianfeng Tan @ 2016-04-25 2:37 UTC (permalink / raw)
To: dev; +Cc: huawei.xie, yuanhan.liu, Jianfeng Tan
Issue: when using virtio nic to transmit pkts, it causes segment fault.
How to reproduce:
Basically, we need to construct a case with vm send packets to vhost-user,
and this issue does not happen when transmitting packets using indirect
desc. Besides, make sure all descriptors are exhausted before vhost
dequeues any packets.
a. start testpmd with vhost.
$ testpmd -c 0x3 -n 4 --socket-mem 1024,0 --no-pci \
--vdev 'eth_vhost0,iface=/tmp/sock0,queues=1' -- -i --nb-cores=1
b. start a qemu with a virtio nic connected with the vhost-user port, just
make sure mrg_rxbuf is enabled.
c. enable testpmd on the host.
testpmd> set fwd io
testpmd> start (better without start vhost-user)
d. start testpmd in VM.
$testpmd -c 0x3 -n 4 -m 1024 -- -i --disable-hw-vlan-filter --txqflags=0xf01
testpmd> set fwd txonly
testpmd> start
How to fix: this bug is because inside virtqueue_enqueue_xmit(), the flag of
desc has been updated inside the do {} while (), not necessary to update after
the loop. (And if we do that after the loop, if all descs could have run out,
idx is VQ_RING_DESC_CHAIN_END (32768), use this idx to reference the start_dp
array will lead to segment fault.)
Fixes: dd856dfcb9e ("virtio: use any layout on Tx")
Signed-off-by: Jianfeng Tan <jianfeng.tan@intel.com>
---
v2: refine the commit message.
drivers/net/virtio/virtio_rxtx.c | 2 --
1 file changed, 2 deletions(-)
diff --git a/drivers/net/virtio/virtio_rxtx.c b/drivers/net/virtio/virtio_rxtx.c
index ef21d8e..432aeab 100644
--- a/drivers/net/virtio/virtio_rxtx.c
+++ b/drivers/net/virtio/virtio_rxtx.c
@@ -271,8 +271,6 @@ virtqueue_enqueue_xmit(struct virtqueue *txvq, struct rte_mbuf *cookie,
idx = start_dp[idx].next;
} while ((cookie = cookie->next) != NULL);
- start_dp[idx].flags &= ~VRING_DESC_F_NEXT;
-
if (use_indirect)
idx = txvq->vq_ring.desc[head_idx].next;
--
2.1.4
* Re: [dpdk-dev] [PATCH v2] virtio: fix segfault when transmit pkts
2016-04-25 2:37 ` [dpdk-dev] [PATCH v2] " Jianfeng Tan
@ 2016-04-25 7:33 ` Xie, Huawei
2016-04-26 3:43 ` Yuanhan Liu
1 sibling, 0 replies; 12+ messages in thread
From: Xie, Huawei @ 2016-04-25 7:33 UTC (permalink / raw)
To: Tan, Jianfeng, dev; +Cc: yuanhan.liu, Stephen Hemminger
On 4/25/2016 10:37 AM, Tan, Jianfeng wrote:
> Issue: when using virtio nic to transmit pkts, it causes segment fault.
>
> How to reproduce:
> Basically, we need to construct a case with vm send packets to vhost-user,
> and this issue does not happen when transmitting packets using indirect
> desc. Besides, make sure all descriptors are exhausted before vhost
> dequeues any packets.
>
> a. start testpmd with vhost.
> $ testpmd -c 0x3 -n 4 --socket-mem 1024,0 --no-pci \
> --vdev 'eth_vhost0,iface=/tmp/sock0,queues=1' -- -i --nb-cores=1
>
> b. start a qemu with a virtio nic connected with the vhost-user port, just
> make sure mrg_rxbuf is enabled.
>
> c. enable testpmd on the host.
> testpmd> set fwd io
> testpmd> start (better without start vhost-user)
>
> d. start testpmd in VM.
> $testpmd -c 0x3 -n 4 -m 1024 -- -i --disable-hw-vlan-filter --txqflags=0xf01
> testpmd> set fwd txonly
> testpmd> start
>
> How to fix: this bug is because inside virtqueue_enqueue_xmit(), the flag of
> desc has been updated inside the do {} while (), not necessary to update after
> the loop. (And if we do that after the loop, if all descs could have run out,
> idx is VQ_RING_DESC_CHAIN_END (32768), use this idx to reference the start_dp
> array will lead to segment fault.)
>
> Fixes: dd856dfcb9e ("virtio: use any layout on Tx")
>
> Signed-off-by: Jianfeng Tan <jianfeng.tan@intel.com>
> ---
> v2: refine the commit message.
>
> drivers/net/virtio/virtio_rxtx.c | 2 --
> 1 file changed, 2 deletions(-)
>
> diff --git a/drivers/net/virtio/virtio_rxtx.c b/drivers/net/virtio/virtio_rxtx.c
> index ef21d8e..432aeab 100644
> --- a/drivers/net/virtio/virtio_rxtx.c
> +++ b/drivers/net/virtio/virtio_rxtx.c
> @@ -271,8 +271,6 @@ virtqueue_enqueue_xmit(struct virtqueue *txvq, struct rte_mbuf *cookie,
> idx = start_dp[idx].next;
> } while ((cookie = cookie->next) != NULL);
>
> - start_dp[idx].flags &= ~VRING_DESC_F_NEXT;
> -
> if (use_indirect)
> idx = txvq->vq_ring.desc[head_idx].next;
>
Ack the code.
Acked-by: Huawei Xie <huawei.xie@intel.com>
* Re: [dpdk-dev] [PATCH v2] virtio: fix segfault when transmit pkts
2016-04-25 2:37 ` [dpdk-dev] [PATCH v2] " Jianfeng Tan
2016-04-25 7:33 ` Xie, Huawei
@ 2016-04-26 3:43 ` Yuanhan Liu
2016-04-26 3:47 ` Tan, Jianfeng
2016-04-26 8:43 ` Thomas Monjalon
1 sibling, 2 replies; 12+ messages in thread
From: Yuanhan Liu @ 2016-04-26 3:43 UTC (permalink / raw)
To: Jianfeng Tan; +Cc: dev, huawei.xie
On Mon, Apr 25, 2016 at 02:37:45AM +0000, Jianfeng Tan wrote:
> Issue: when using virtio nic to transmit pkts, it causes segment fault.
>
> How to reproduce:
> Basically, we need to construct a case with vm send packets to vhost-user,
> and this issue does not happen when transmitting packets using indirect
> desc. Besides, make sure all descriptors are exhausted before vhost
> dequeues any packets.
>
> a. start testpmd with vhost.
> $ testpmd -c 0x3 -n 4 --socket-mem 1024,0 --no-pci \
> --vdev 'eth_vhost0,iface=/tmp/sock0,queues=1' -- -i --nb-cores=1
>
> b. start a qemu with a virtio nic connected with the vhost-user port, just
> make sure mrg_rxbuf is enabled.
>
> c. enable testpmd on the host.
> testpmd> set fwd io
> testpmd> start (better without start vhost-user)
>
> d. start testpmd in VM.
> $testpmd -c 0x3 -n 4 -m 1024 -- -i --disable-hw-vlan-filter --txqflags=0xf01
> testpmd> set fwd txonly
> testpmd> start
>
> How to fix: this bug is because inside virtqueue_enqueue_xmit(), the flag of
^^^^^^^
> desc has been updated inside the do {} while (), not necessary to update after
> the loop.
That's not the right "because": you were stating the correct way to
set up the desc flags, but not the cause of this bug.
> (And if we do that after the loop, if all descs could have run out,
> idx is VQ_RING_DESC_CHAIN_END (32768), use this idx to reference the start_dp
> array will lead to segment fault.)
And that's the cause. So, you should state the cause first, then the fix
(which we already have), not in the reverse order you just did.
So, I'd like to reword the commit log a bit, to something like the following.
What do you think of it? If there's no objection, I could merge it soon.
Thanks for the fix, BTW!
--yliu
---
Subject: virtio: fix segfault on Tx desc flags setup
After the do-while loop, idx could be VQ_RING_DESC_CHAIN_END (32768)
when it's the last vring desc buf we can get. Therefore, the following
expression could lead to a segfault, as it tries to access
beyond the desc memory boundary.
start_dp[idx].flags &= ~VRING_DESC_F_NEXT;
This bug can be reproduced easily with "set fwd txonly" in the
guest PMD, where the dequeue on the host is slower than the guest Tx,
so running out of free desc bufs is pretty easy.
The fix is straightforward and easy, just remove it, as we have
already set desc flags properly inside the do-while loop.
Fixes: dd856dfcb9e ("virtio: use any layout on Tx")
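
For reference, here is a rough, self-contained sketch of the chain-fill loop as it stands after the fix. It is a paraphrase, not the in-tree code; the `struct seg` stand-in for rte_mbuf and the omitted header/indirect-descriptor handling are simplifications:

```c
#include <stdint.h>

#define VRING_DESC_F_NEXT 1

struct vring_desc { uint64_t addr; uint32_t len; uint16_t flags; uint16_t next; };
struct seg { uint64_t phys; uint32_t data_len; struct seg *next; };  /* stand-in for rte_mbuf */

/* Fill one descriptor chain the way the fixed virtqueue_enqueue_xmit()
 * does: every segment's flags are finalized inside the loop, so nothing
 * is written through idx once the loop has advanced it past the chain. */
static uint16_t
fill_chain(struct vring_desc *start_dp, uint16_t idx, struct seg *cookie)
{
    do {
        start_dp[idx].addr  = cookie->phys;
        start_dp[idx].len   = cookie->data_len;
        /* Last segment: the NEXT flag is already cleared here, which is
         * why the post-loop "flags &= ~VRING_DESC_F_NEXT" was redundant
         * (and unsafe once idx hits the chain-end sentinel). */
        start_dp[idx].flags = cookie->next ? VRING_DESC_F_NEXT : 0;
        idx = start_dp[idx].next;
    } while ((cookie = cookie->next) != NULL);

    return idx;  /* may be VQ_RING_DESC_CHAIN_END; never used to index start_dp */
}
```

The design point is simply that each descriptor's flags are final at the moment it is written, so the post-loop fixup, and its potentially out-of-bounds index, is unnecessary.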
* Re: [dpdk-dev] [PATCH v2] virtio: fix segfault when transmit pkts
2016-04-26 3:43 ` Yuanhan Liu
@ 2016-04-26 3:47 ` Tan, Jianfeng
2016-04-26 8:43 ` Thomas Monjalon
1 sibling, 0 replies; 12+ messages in thread
From: Tan, Jianfeng @ 2016-04-26 3:47 UTC (permalink / raw)
To: Yuanhan Liu; +Cc: dev, Xie, Huawei
Hi Yuanhan,
> -----Original Message-----
> From: Yuanhan Liu [mailto:yuanhan.liu@linux.intel.com]
> Sent: Tuesday, April 26, 2016 11:43 AM
> To: Tan, Jianfeng
> Cc: dev@dpdk.org; Xie, Huawei
> Subject: Re: [PATCH v2] virtio: fix segfault when transmit pkts
>
> On Mon, Apr 25, 2016 at 02:37:45AM +0000, Jianfeng Tan wrote:
> > Issue: when using virtio nic to transmit pkts, it causes segment fault.
> >
> > How to reproduce:
> > Basically, we need to construct a case with vm send packets to vhost-user,
> > and this issue does not happen when transmitting packets using indirect
> > desc. Besides, make sure all descriptors are exhausted before vhost
> > dequeues any packets.
> >
> > a. start testpmd with vhost.
> > $ testpmd -c 0x3 -n 4 --socket-mem 1024,0 --no-pci \
> > --vdev 'eth_vhost0,iface=/tmp/sock0,queues=1' -- -i --nb-cores=1
> >
> > b. start a qemu with a virtio nic connected with the vhost-user port, just
> > make sure mrg_rxbuf is enabled.
> >
> > c. enable testpmd on the host.
> > testpmd> set fwd io
> > testpmd> start (better without start vhost-user)
> >
> > d. start testpmd in VM.
> > $testpmd -c 0x3 -n 4 -m 1024 -- -i --disable-hw-vlan-filter --txqflags=0xf01
> > testpmd> set fwd txonly
> > testpmd> start
> >
> > How to fix: this bug is because inside virtqueue_enqueue_xmit(), the flag of
> ^^^^^^^
> > desc has been updated inside the do {} while (), not necessary to update after
> > the loop.
>
> That's not a right "because": you were stating a fact of the right way
> to do setup desc flags, but not the cause of this bug.
>
> > (And if we do that after the loop, if all descs could have run out,
> > idx is VQ_RING_DESC_CHAIN_END (32768), use this idx to reference the start_dp
> > array will lead to segment fault.)
>
> And that's the cause. So, you should state the cause first, then the fix
> (which we already have), but not in the verse order you just did.
>
> So, I'd like to reword the commit log a bit, to something like following.
> What do you think of it? If no objection, I could merge it soon. Thanks
> for the fix, BTW!
>
Your refinement sounds much better, thanks!
Jianfeng
* Re: [dpdk-dev] [PATCH] virtio: fix segfault when transmit pkts
2016-04-21 12:36 [dpdk-dev] [PATCH] virtio: fix segfault when transmit pkts Jianfeng Tan
2016-04-21 22:44 ` Yuanhan Liu
2016-04-25 2:37 ` [dpdk-dev] [PATCH v2] " Jianfeng Tan
@ 2016-04-26 4:48 ` Stephen Hemminger
2016-04-26 5:08 ` Tan, Jianfeng
2 siblings, 1 reply; 12+ messages in thread
From: Stephen Hemminger @ 2016-04-26 4:48 UTC (permalink / raw)
To: Jianfeng Tan; +Cc: dev, huawei.xie, yuanhan.liu
On Thu, 21 Apr 2016 12:36:10 +0000
Jianfeng Tan <jianfeng.tan@intel.com> wrote:
> Issue: when using virtio nic to transmit pkts, it causes segment fault.
>
> How to reproduce:
> a. start testpmd with vhost.
> $testpmd -c 0x3 -n 4 --socket-mem 1024,0 --no-pci \
> --vdev 'eth_vhost0,iface=/tmp/sock0,queues=1' -- -i --nb-cores=1
> b. start a qemu with a virtio nic connected with the vhost-user port.
> $qemu -smp cores=2,sockets=1 -cpu host -enable-kvm vm-0.img -vnc :5 -m 4G \
> -object memory-backend-file,id=mem,size=4096M,mem-path=<path>,share=on \
> -numa node,memdev=mem -mem-prealloc \
> -chardev socket,id=char1,path=$sock_vhost \
> -netdev type=vhost-user,id=net1,chardev=char1 \
> -device virtio-net-pci,netdev=net1,mac=00:01:02:03:04:05
> c. enable testpmd on the host.
> testpmd> set fwd io
> testpmd> start
> d. start testpmd in VM.
> $testpmd -c 0x3 -n 4 -m 1024 -- -i --disable-hw-vlan-filter --txqflags=0xf01
> testpmd> set fwd txonly
> testpmd> start
>
> How to fix: this bug is because inside virtqueue_enqueue_xmit(), the flag of
> desc has been updated inside the do {} while (); and after the loop, all descs
> could have run out, so idx is VQ_RING_DESC_CHAIN_END (32768), use this idx to
> reference the start_dp array will lead to segment fault.
>
> Signed-off-by: Jianfeng Tan <jianfeng.tan@intel.com>
> ---
> drivers/net/virtio/virtio_rxtx.c | 2 --
> 1 file changed, 2 deletions(-)
>
> diff --git a/drivers/net/virtio/virtio_rxtx.c b/drivers/net/virtio/virtio_rxtx.c
> index ef21d8e..432aeab 100644
> --- a/drivers/net/virtio/virtio_rxtx.c
> +++ b/drivers/net/virtio/virtio_rxtx.c
> @@ -271,8 +271,6 @@ virtqueue_enqueue_xmit(struct virtqueue *txvq, struct rte_mbuf *cookie,
> idx = start_dp[idx].next;
> } while ((cookie = cookie->next) != NULL);
>
> - start_dp[idx].flags &= ~VRING_DESC_F_NEXT;
> -
> if (use_indirect)
> idx = txvq->vq_ring.desc[head_idx].next;
>
At this point in the code, idx is the index past the current set of ring
descriptors, so yes, this is a real bug.
I think the description metadata needs work to explain it better.
* Re: [dpdk-dev] [PATCH] virtio: fix segfault when transmit pkts
2016-04-26 4:48 ` [dpdk-dev] [PATCH] " Stephen Hemminger
@ 2016-04-26 5:08 ` Tan, Jianfeng
0 siblings, 0 replies; 12+ messages in thread
From: Tan, Jianfeng @ 2016-04-26 5:08 UTC (permalink / raw)
To: Stephen Hemminger; +Cc: dev, huawei.xie, yuanhan.liu
Hi Stephen,
On 4/26/2016 12:48 PM, Stephen Hemminger wrote:
> On Thu, 21 Apr 2016 12:36:10 +0000
> Jianfeng Tan <jianfeng.tan@intel.com> wrote:
>
>> Issue: when using virtio nic to transmit pkts, it causes segment fault.
>>
>> How to reproduce:
>> a. start testpmd with vhost.
>> $testpmd -c 0x3 -n 4 --socket-mem 1024,0 --no-pci \
>> --vdev 'eth_vhost0,iface=/tmp/sock0,queues=1' -- -i --nb-cores=1
>> b. start a qemu with a virtio nic connected with the vhost-user port.
>> $qemu -smp cores=2,sockets=1 -cpu host -enable-kvm vm-0.img -vnc :5 -m 4G \
>> -object memory-backend-file,id=mem,size=4096M,mem-path=<path>,share=on \
>> -numa node,memdev=mem -mem-prealloc \
>> -chardev socket,id=char1,path=$sock_vhost \
>> -netdev type=vhost-user,id=net1,chardev=char1 \
>> -device virtio-net-pci,netdev=net1,mac=00:01:02:03:04:05
>> c. enable testpmd on the host.
>> testpmd> set fwd io
>> testpmd> start
>> d. start testpmd in VM.
>> $testpmd -c 0x3 -n 4 -m 1024 -- -i --disable-hw-vlan-filter --txqflags=0xf01
>> testpmd> set fwd txonly
>> testpmd> start
>>
>> How to fix: this bug is because inside virtqueue_enqueue_xmit(), the flag of
>> desc has been updated inside the do {} while (); and after the loop, all descs
>> could have run out, so idx is VQ_RING_DESC_CHAIN_END (32768), use this idx to
>> reference the start_dp array will lead to segment fault.
>>
>> Signed-off-by: Jianfeng Tan <jianfeng.tan@intel.com>
>> ---
>> drivers/net/virtio/virtio_rxtx.c | 2 --
>> 1 file changed, 2 deletions(-)
>>
>> diff --git a/drivers/net/virtio/virtio_rxtx.c b/drivers/net/virtio/virtio_rxtx.c
>> index ef21d8e..432aeab 100644
>> --- a/drivers/net/virtio/virtio_rxtx.c
>> +++ b/drivers/net/virtio/virtio_rxtx.c
>> @@ -271,8 +271,6 @@ virtqueue_enqueue_xmit(struct virtqueue *txvq, struct rte_mbuf *cookie,
>> idx = start_dp[idx].next;
>> } while ((cookie = cookie->next) != NULL);
>>
>> - start_dp[idx].flags &= ~VRING_DESC_F_NEXT;
>> -
>> if (use_indirect)
>> idx = txvq->vq_ring.desc[head_idx].next;
>>
> At this point in the code idx is the index past the current set of ring
> descriptors. So yes this is a real bug.
>
> I think the description meta-data needs work to explain it better.
>
>
Yes, please see v2. Yuanhan has already helped refine it.
Thanks,
Jianfeng
* Re: [dpdk-dev] [PATCH v2] virtio: fix segfault when transmit pkts
2016-04-26 3:43 ` Yuanhan Liu
2016-04-26 3:47 ` Tan, Jianfeng
@ 2016-04-26 8:43 ` Thomas Monjalon
2016-04-26 16:54 ` Yuanhan Liu
1 sibling, 1 reply; 12+ messages in thread
From: Thomas Monjalon @ 2016-04-26 8:43 UTC (permalink / raw)
To: Yuanhan Liu; +Cc: dev, Jianfeng Tan, huawei.xie
Talking about wording,
2016-04-25 20:43, Yuanhan Liu:
> ---
> Subject: virtio: fix segfault on Tx desc flags setup
I think the English word "crash" is better than "segfault".
* Re: [dpdk-dev] [PATCH v2] virtio: fix segfault when transmit pkts
2016-04-26 8:43 ` Thomas Monjalon
@ 2016-04-26 16:54 ` Yuanhan Liu
0 siblings, 0 replies; 12+ messages in thread
From: Yuanhan Liu @ 2016-04-26 16:54 UTC (permalink / raw)
To: Thomas Monjalon; +Cc: dev, Jianfeng Tan, huawei.xie
On Tue, Apr 26, 2016 at 10:43:35AM +0200, Thomas Monjalon wrote:
> Talking about wording,
>
> 2016-04-25 20:43, Yuanhan Liu:
> > ---
> > Subject: virtio: fix segfault on Tx desc flags setup
>
> I think the english word "crash" is better than "segfault".
Yes.
Acked-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
And, applied to dpdk-next-virtio, with the commit log rewording.
Thanks.
--yliu