* [dpdk-dev] [PATCH] vhost: notify guest to fill buffer when there is no buffer
@ 2015-01-29 10:30 linhaifeng
2015-01-29 10:39 ` Xie, Huawei
0 siblings, 1 reply; 15+ messages in thread
From: linhaifeng @ 2015-01-29 10:30 UTC (permalink / raw)
To: dev
From: Linhaifeng <haifeng.lin@huawei.com>
If we find there is no buffer we should notify virtio_net to
fill buffers.
We used mz to send packets from VM to VM and found that the other VM
stopped receiving data after many hours.
Signed-off-by: Linhaifeng <haifeng.lin@huawei.com>
---
lib/librte_vhost/vhost_rxtx.c | 9 +++++++--
1 file changed, 7 insertions(+), 2 deletions(-)
diff --git a/lib/librte_vhost/vhost_rxtx.c b/lib/librte_vhost/vhost_rxtx.c
index ccfd82f..013c526 100644
--- a/lib/librte_vhost/vhost_rxtx.c
+++ b/lib/librte_vhost/vhost_rxtx.c
@@ -87,9 +87,14 @@ virtio_dev_rx(struct virtio_net *dev, uint16_t queue_id,
/*check that we have enough buffers*/
if (unlikely(count > free_entries))
count = free_entries;
-
- if (count == 0)
+ /* If there are no buffers we should notify the guest to refill.
+ * This is needed when the guest uses the virtio_net driver (not the PMD).
+ */
+ if (count == 0) {
+ if (!(vq->avail->flags & VRING_AVAIL_F_NO_INTERRUPT))
+ eventfd_write((int)vq->kickfd, 1);
return 0;
+ }
res_end_idx = res_base_idx + count;
/* vq->last_used_idx_res is atomically updated. */
--
1.9.0
^ permalink raw reply [flat|nested] 15+ messages in thread
* Re: [dpdk-dev] [PATCH] vhost: notify guest to fill buffer when there is no buffer
2015-01-29 10:30 [dpdk-dev] [PATCH] vhost: notify guest to fill buffer when there is no buffer linhaifeng
@ 2015-01-29 10:39 ` Xie, Huawei
2015-01-29 12:39 ` Linhaifeng
0 siblings, 1 reply; 15+ messages in thread
From: Xie, Huawei @ 2015-01-29 10:39 UTC (permalink / raw)
To: linhaifeng, dev
> -----Original Message-----
> From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of linhaifeng
> Sent: Thursday, January 29, 2015 6:30 PM
> To: dev@dpdk.org
> Subject: [dpdk-dev] [PATCH] vhost: notify guest to fill buffer when there is no
> buffer
>
> From: Linhaifeng <haifeng.lin@huawei.com>
>
> If we found there is no buffer we should notify virtio_net to
> fill buffers.
>
> We use mz send buffers from VM to VM,found that the other VM
> stop to receive data after many hours.
>
> Signed-off-by: Linhaifeng <haifeng.lin@huawei.com>
> ---
> lib/librte_vhost/vhost_rxtx.c | 9 +++++++--
> 1 file changed, 7 insertions(+), 2 deletions(-)
>
> diff --git a/lib/librte_vhost/vhost_rxtx.c b/lib/librte_vhost/vhost_rxtx.c
> index ccfd82f..013c526 100644
> --- a/lib/librte_vhost/vhost_rxtx.c
> +++ b/lib/librte_vhost/vhost_rxtx.c
> @@ -87,9 +87,14 @@ virtio_dev_rx(struct virtio_net *dev, uint16_t queue_id,
> /*check that we have enough buffers*/
> if (unlikely(count > free_entries))
> count = free_entries;
> -
> - if (count == 0)
> + /* If there is no buffers we should notify guest to fill.
> + * This is need when guest use virtio_net driver(not pmd).
> + */
> + if (count == 0) {
> + if (!(vq->avail->flags &
> VRING_AVAIL_F_NO_INTERRUPT))
> + eventfd_write((int)vq->kickfd, 1);
> return 0;
> + }
Haifeng:
Is it the root cause, and is it required by the protocol?
Could you give a detailed description for that scenario?
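For reference, the ring itself defines when a notification may be suppressed. Below is a minimal sketch of the backend-side decision, assuming the legacy layout from linux/virtio_ring.h; the helper name should_kick_guest and the event_idx_enabled parameter are illustrative only, not code from this patch.

#include <stdint.h>
#include <linux/virtio_ring.h>	/* VRING_AVAIL_F_NO_INTERRUPT, vring_need_event() */

/* Sketch: decide whether the backend should kick the guest after adding
 * used buffers. new_used_idx/old_used_idx are used->idx after/before the update. */
static int
should_kick_guest(const struct vring_avail *avail, uint16_t vq_size,
		  int event_idx_enabled,
		  uint16_t new_used_idx, uint16_t old_used_idx)
{
	if (event_idx_enabled) {
		/* With VIRTIO_RING_F_EVENT_IDX, used_event sits right after
		 * the last avail ring entry. */
		uint16_t used_event = avail->ring[vq_size];
		return vring_need_event(used_event, new_used_idx, old_used_idx);
	}
	/* Legacy mode: the flag is the only suppression mechanism, and it is
	 * only a hint - the backend may still notify, as this patch does. */
	return !(avail->flags & VRING_AVAIL_F_NO_INTERRUPT);
}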
>
> res_end_idx = res_base_idx + count;
> /* vq->last_used_idx_res is atomically updated. */
> --
> 1.9.0
>
^ permalink raw reply [flat|nested] 15+ messages in thread
* Re: [dpdk-dev] [PATCH] vhost: notify guest to fill buffer when there is no buffer
2015-01-29 10:39 ` Xie, Huawei
@ 2015-01-29 12:39 ` Linhaifeng
2015-01-29 13:00 ` Xie, Huawei
0 siblings, 1 reply; 15+ messages in thread
From: Linhaifeng @ 2015-01-29 12:39 UTC (permalink / raw)
To: Xie, Huawei, dev
On 2015/1/29 18:39, Xie, Huawei wrote:
>> - if (count == 0)
>> + /* If there is no buffers we should notify guest to fill.
>> + * This is need when guest use virtio_net driver(not pmd).
>> + */
>> + if (count == 0) {
>> + if (!(vq->avail->flags &
>> VRING_AVAIL_F_NO_INTERRUPT))
>> + eventfd_write((int)vq->kickfd, 1);
>> return 0;
>> + }
>
> Haifeng:
> Is it the root cause and is it protocol required?
> Could you give a detailed description for that scenario?
>
I use mz to send data from VM1 to VM2. Both VMs use the virtio-net driver.
VM1 executes the following script:
for((i=0;i<999999999;i++));
do
mz eth0 -t udp -A 1.1.1.1 -B 1.1.1.2 -a 00:00:00:00:00:01 -b 00:00:00:00:00:02 -c 10000000 -p 512
sleep 4
done
VM2 executes the following command to watch:
watch -d ifconfig
After many hours VM2 stops receiving data.
Could you test it?
--
Regards,
Haifeng
^ permalink raw reply [flat|nested] 15+ messages in thread
* Re: [dpdk-dev] [PATCH] vhost: notify guest to fill buffer when there is no buffer
2015-01-29 12:39 ` Linhaifeng
@ 2015-01-29 13:00 ` Xie, Huawei
2015-01-29 13:50 ` Linhaifeng
0 siblings, 1 reply; 15+ messages in thread
From: Xie, Huawei @ 2015-01-29 13:00 UTC (permalink / raw)
To: Linhaifeng, dev
> -----Original Message-----
> From: Linhaifeng [mailto:haifeng.lin@huawei.com]
> Sent: Thursday, January 29, 2015 8:39 PM
> To: Xie, Huawei; dev@dpdk.org
> Subject: Re: [dpdk-dev] [PATCH] vhost: notify guest to fill buffer when there is
> no buffer
>
>
>
> On 2015/1/29 18:39, Xie, Huawei wrote:
>
> >> - if (count == 0)
> >> + /* If there is no buffers we should notify guest to fill.
> >> + * This is need when guest use virtio_net driver(not pmd).
> >> + */
> >> + if (count == 0) {
> >> + if (!(vq->avail->flags &
> >> VRING_AVAIL_F_NO_INTERRUPT))
> >> + eventfd_write((int)vq->kickfd, 1);
> >> return 0;
> >> + }
> >
> > Haifeng:
> > Is it the root cause and is it protocol required?
> > Could you give a detailed description for that scenario?
> >
>
> I use mz to send data from one VM1 to VM2.The two VM use virtio-net driver.
> VM1 execute follow script:
> for((i=0;i<999999999;i++));
> do
> mz eth0 -t udp -A 1.1.1.1 -B 1.1.1.2 -a 00:00:00:00:00:01 -b 00:00:00:00:00:02 -c
> 10000000 -p 512
> sleep 4
> done
>
> VM2 execute follow command to watch:
> watch -d ifconfig
>
> After many hours VM2 stop to receive data.
>
> Could you test it ?
We could try next week after I send the whole patch.
How many hours? Is it reproducible on your side? I injected packets through a packet generator to the guest for more than ten hours and haven't hit the issue.
As I said in another mail I sent you, could you dump the status of the vring if you still have the failing setup?
Could you please also reply to that mail?
For the patch, if we have no root cause, I prefer not to apply it, so that we don't send more interrupts to the guest than needed and hurt performance.
People could temporarily apply this patch as a workaround.
Or anyone
> --
> Regards,
> Haifeng
^ permalink raw reply [flat|nested] 15+ messages in thread
* Re: [dpdk-dev] [PATCH] vhost: notify guest to fill buffer when there is no buffer
2015-01-29 13:00 ` Xie, Huawei
@ 2015-01-29 13:50 ` Linhaifeng
2015-01-30 8:20 ` Xu, Qian Q
0 siblings, 1 reply; 15+ messages in thread
From: Linhaifeng @ 2015-01-29 13:50 UTC (permalink / raw)
To: Xie, Huawei, dev
On 2015/1/29 21:00, Xie, Huawei wrote:
>
>
>> -----Original Message-----
>> From: Linhaifeng [mailto:haifeng.lin@huawei.com]
>> Sent: Thursday, January 29, 2015 8:39 PM
>> To: Xie, Huawei; dev@dpdk.org
>> Subject: Re: [dpdk-dev] [PATCH] vhost: notify guest to fill buffer when there is
>> no buffer
>>
>>
>>
>> On 2015/1/29 18:39, Xie, Huawei wrote:
>>
>>>> - if (count == 0)
>>>> + /* If there is no buffers we should notify guest to fill.
>>>> + * This is need when guest use virtio_net driver(not pmd).
>>>> + */
>>>> + if (count == 0) {
>>>> + if (!(vq->avail->flags &
>>>> VRING_AVAIL_F_NO_INTERRUPT))
>>>> + eventfd_write((int)vq->kickfd, 1);
>>>> return 0;
>>>> + }
>>>
>>> Haifeng:
>>> Is it the root cause and is it protocol required?
>>> Could you give a detailed description for that scenario?
>>>
>>
>> I use mz to send data from one VM1 to VM2.The two VM use virtio-net driver.
>> VM1 execute follow script:
>> for((i=0;i<999999999;i++));
>> do
>> mz eth0 -t udp -A 1.1.1.1 -B 1.1.1.2 -a 00:00:00:00:00:01 -b 00:00:00:00:00:02 -c
>> 10000000 -p 512
>> sleep 4
>> done
>>
>> VM2 execute follow command to watch:
>> watch -d ifconfig
>>
>> After many hours VM2 stop to receive data.
>>
>> Could you test it ?
>
>
> We could try next week after I send the whole patch.
> How many hours? Is it reproducible at your side? I inject packets through packet generator to guest for more than ten hours, haven't met issues.
About three hours.
What kind of driver did you use in the guest, virtio-net-pmd or virtio-net?
> As I said in another mail sent to you, could you dump the status of vring if you still have the spot?
How do I dump the status of the vring in the guest?
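There is no ready-made dump helper in the sample; from the vhost side (which sees the same ring the guest does), a rough debug sketch like the one below can print the ring state. Field names such as size and last_used_idx are assumed to match the librte_vhost vhost_virtqueue of this period, so adjust to your tree.

#include <stdio.h>
#include <rte_virtio_net.h>	/* struct vhost_virtqueue */

/* Rough debug helper: print the vring indices and flags as seen by the backend. */
static void
dump_vq_state(const struct vhost_virtqueue *vq)
{
	printf("size=%u avail->idx=%u used->idx=%u last_used_idx=%u "
	       "avail->flags=0x%x used->flags=0x%x\n",
	       (unsigned)vq->size, (unsigned)vq->avail->idx, (unsigned)vq->used->idx,
	       (unsigned)vq->last_used_idx,
	       (unsigned)vq->avail->flags, (unsigned)vq->used->flags);
}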
> Could you please also reply to that mail?
>
Which mail?
> For the patch, if we have no root cause, I prefer not to apply it, so that we don't send more interrupts than needed to guest to affect performance.
I found that if we add this notify the performance is better (a gain of 100 kpps with 64-byte UDP packets).
> People could temporarily apply this patch as a work around.
>
> Or anyone
>
OK. I'm also not sure about this bug. I think I should do something to find the real reason.
>
>> --
>> Regards,
>> Haifeng
>
>
>
--
Regards,
Haifeng
^ permalink raw reply [flat|nested] 15+ messages in thread
* Re: [dpdk-dev] [PATCH] vhost: notify guest to fill buffer when there is no buffer
2015-01-29 13:50 ` Linhaifeng
@ 2015-01-30 8:20 ` Xu, Qian Q
2015-01-30 10:33 ` Linhaifeng
2015-02-01 6:04 ` Linhaifeng
0 siblings, 2 replies; 15+ messages in thread
From: Xu, Qian Q @ 2015-01-30 8:20 UTC (permalink / raw)
To: Linhaifeng, Xie, Huawei, dev
Haifeng
Could you give more information so that we can reproduce your issue? Thanks.
1. What's your dpdk package based on, which branch, and does it include Huawei's vhost-user patches?
2. What are your steps and commands to launch the vhost sample?
3. What is mz? Your internal tool? I can't yum install mz or download the mz tool.
4. As to your test scenario, I understand it this way: virtio1 is in VM1 and virtio2 is in VM2, virtio1 sends packets to virtio2, and the problem is that after 3 hours virtio2 can't receive packets while virtio1 is still sending packets. Am I right? So mz is like a packet generator to send packets, right?
-----Original Message-----
From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Linhaifeng
Sent: Thursday, January 29, 2015 9:51 PM
To: Xie, Huawei; dev@dpdk.org
Subject: Re: [dpdk-dev] [PATCH] vhost: notify guest to fill buffer when there is no buffer
On 2015/1/29 21:00, Xie, Huawei wrote:
>
>
>> -----Original Message-----
>> From: Linhaifeng [mailto:haifeng.lin@huawei.com]
>> Sent: Thursday, January 29, 2015 8:39 PM
>> To: Xie, Huawei; dev@dpdk.org
>> Subject: Re: [dpdk-dev] [PATCH] vhost: notify guest to fill buffer
>> when there is no buffer
>>
>>
>>
>> On 2015/1/29 18:39, Xie, Huawei wrote:
>>
>>>> - if (count == 0)
>>>> + /* If there is no buffers we should notify guest to fill.
>>>> + * This is need when guest use virtio_net driver(not pmd).
>>>> + */
>>>> + if (count == 0) {
>>>> + if (!(vq->avail->flags &
>>>> VRING_AVAIL_F_NO_INTERRUPT))
>>>> + eventfd_write((int)vq->kickfd, 1);
>>>> return 0;
>>>> + }
>>>
>>> Haifeng:
>>> Is it the root cause and is it protocol required?
>>> Could you give a detailed description for that scenario?
>>>
>>
>> I use mz to send data from one VM1 to VM2.The two VM use virtio-net driver.
>> VM1 execute follow script:
>> for((i=0;i<999999999;i++));
>> do
>> mz eth0 -t udp -A 1.1.1.1 -B 1.1.1.2 -a 00:00:00:00:00:01 -b
>> 00:00:00:00:00:02 -c
>> 10000000 -p 512
>> sleep 4
>> done
>>
>> VM2 execute follow command to watch:
>> watch -d ifconfig
>>
>> After many hours VM2 stop to receive data.
>>
>> Could you test it ?
>
>
> We could try next week after I send the whole patch.
> How many hours? Is it reproducible at your side? I inject packets through packet generator to guest for more than ten hours, haven't met issues.
About three hours.
What kind of driver you used in guest?virtio-net-pmd or virtio-net?
> As I said in another mail sent to you, could you dump the status of vring if you still have the spot?
How to dump the status of vring in guest?
> Could you please also reply to that mail?
>
Which mail?
> For the patch, if we have no root cause, I prefer not to apply it, so that we don't send more interrupts than needed to guest to affect performance.
I found that if we add this notify the performance is better(growth of 100kpps when use 64byte UDP packets)
> People could temporarily apply this patch as a work around.
>
> Or anyone
>
OK.I'm also not sure about this bug.I think i should do something to found the real reason.
>
>> --
>> Regards,
>> Haifeng
>
>
>
--
Regards,
Haifeng
^ permalink raw reply [flat|nested] 15+ messages in thread
* Re: [dpdk-dev] [PATCH] vhost: notify guest to fill buffer when there is no buffer
2015-01-30 8:20 ` Xu, Qian Q
@ 2015-01-30 10:33 ` Linhaifeng
2015-02-01 6:04 ` Linhaifeng
1 sibling, 0 replies; 15+ messages in thread
From: Linhaifeng @ 2015-01-30 10:33 UTC (permalink / raw)
To: Xu, Qian Q, Xie, Huawei, dev
On 2015/1/30 16:20, Xu, Qian Q wrote:
> Haifeng
> Could you give more information so that we can reproduce your issue? Thanks.
> 1. What's your dpdk package, based on which branch, with Huawei's vhost-user's patches?
Not with Huawei's patches. I implemented a demo before Huawei's patches using OVDK's vhost_dequeue_burst and vhost_enqueue_burst.
Now I'm trying to run vhost-user with the dpdk vhost example (master branch).
> 2. What's your step and command to launch vhost sample?
BTW, how do I run the vhost example in vm2vm mode?
Does VM2VM mean I can send packets from vm1 to vm2?
I set up with the following steps but can't send packets in the VM:
mount -t hugetlbfs nodev /mnt/huge -o pagesize=1G
mount -t hugetlbfs nodev /dev/hugepages -o pagesize=2M
echo 8192 > /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
modprobe uio
insmod ${RTE_SDK}/x86_64-native-linuxapp-gcc/kmod/igb_uio.ko
dpdk_nic_bind.py -b igb_uio 82:00.0 82:00.1
rmmod vhost_net
modprobe cuse
insmod ${RTE_SDK}/lib/librte_vhost/eventfd_link/eventfd_link.ko
${RTE_SDK}/examples/vhost/build/app/vhost-switch -c 0x300 -n 4 --huge-dir /mnt/huge -m 2048 -- -p 0x1 --vm2vm 1
qemu-wrap.py -enable-kvm -mem-path /mnt/huge/ -mem-prealloc -smp 2 \
-netdev tap,id=hostnet1,vhost=on,ifname=port0 -device virtio-net-pci,netdev=hostnet1,id=net1,mac=00:00:00:00:00:01 -hda /mnt/sdb/linhf/vm1.img -m 2048 -vnc :0
qemu-wrap.py -enable-kvm -mem-path /mnt/huge/ -mem-prealloc -smp 2 \
-netdev tap,id=hostnet1,vhost=on,ifname=port0 -device virtio-net-pci,netdev=hostnet1,id=net1,mac=00:00:00:00:00:02 -hda /mnt/sdb/linhf/vm2.img -m 2048 -vnc :1
> 3. What is mz? Your internal tool? I can't yum install mz or download mz tool.
http://www.perihel.at/sec/mz/
> 4. As to your test scenario, I understand it in this way: virtio1 in VM1, virtio2 in VM2, then let virtio1 send packages to virtio2, the problem is that after 3 hours, virtio2 can't receive packets, but virtio1 is still sending packets, am I right? So mz is like a packet generator to send packets, right?
Yes, you are right.
>
>
> -----Original Message-----
> From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Linhaifeng
> Sent: Thursday, January 29, 2015 9:51 PM
> To: Xie, Huawei; dev@dpdk.org
> Subject: Re: [dpdk-dev] [PATCH] vhost: notify guest to fill buffer when there is no buffer
>
>
>
> On 2015/1/29 21:00, Xie, Huawei wrote:
>>
>>
>>> -----Original Message-----
>>> From: Linhaifeng [mailto:haifeng.lin@huawei.com]
>>> Sent: Thursday, January 29, 2015 8:39 PM
>>> To: Xie, Huawei; dev@dpdk.org
>>> Subject: Re: [dpdk-dev] [PATCH] vhost: notify guest to fill buffer
>>> when there is no buffer
>>>
>>>
>>>
>>> On 2015/1/29 18:39, Xie, Huawei wrote:
>>>
>>>>> - if (count == 0)
>>>>> + /* If there is no buffers we should notify guest to fill.
>>>>> + * This is need when guest use virtio_net driver(not pmd).
>>>>> + */
>>>>> + if (count == 0) {
>>>>> + if (!(vq->avail->flags &
>>>>> VRING_AVAIL_F_NO_INTERRUPT))
>>>>> + eventfd_write((int)vq->kickfd, 1);
>>>>> return 0;
>>>>> + }
>>>>
>>>> Haifeng:
>>>> Is it the root cause and is it protocol required?
>>>> Could you give a detailed description for that scenario?
>>>>
>>>
>>> I use mz to send data from one VM1 to VM2.The two VM use virtio-net driver.
>>> VM1 execute follow script:
>>> for((i=0;i<999999999;i++));
>>> do
>>> mz eth0 -t udp -A 1.1.1.1 -B 1.1.1.2 -a 00:00:00:00:00:01 -b
>>> 00:00:00:00:00:02 -c
>>> 10000000 -p 512
>>> sleep 4
>>> done
>>>
>>> VM2 execute follow command to watch:
>>> watch -d ifconfig
>>>
>>> After many hours VM2 stop to receive data.
>>>
>>> Could you test it ?
>>
>>
>> We could try next week after I send the whole patch.
>> How many hours? Is it reproducible at your side? I inject packets through packet generator to guest for more than ten hours, haven't met issues.
>
> About three hours.
> What kind of driver you used in guest?virtio-net-pmd or virtio-net?
>
>
>> As I said in another mail sent to you, could you dump the status of vring if you still have the spot?
>
> How to dump the status of vring in guest?
>
>> Could you please also reply to that mail?
>>
>
> Which mail?
>
>
>> For the patch, if we have no root cause, I prefer not to apply it, so that we don't send more interrupts than needed to guest to affect performance.
>
> I found that if we add this notify the performance is better(growth of 100kpps when use 64byte UDP packets)
>
>> People could temporarily apply this patch as a work around.
>>
>> Or anyone
>>
>
> OK.I'm also not sure about this bug.I think i should do something to found the real reason.
>
>>
>>> --
>>> Regards,
>>> Haifeng
>>
>>
>>
>
--
Regards,
Haifeng
^ permalink raw reply [flat|nested] 15+ messages in thread
* Re: [dpdk-dev] [PATCH] vhost: notify guest to fill buffer when there is no buffer
2015-01-30 8:20 ` Xu, Qian Q
2015-01-30 10:33 ` Linhaifeng
@ 2015-02-01 6:04 ` Linhaifeng
[not found] ` <82F45D86ADE5454A95A89742C8D1410E01CA1DA3@shsmsx102.ccr.corp.intel.com>
1 sibling, 1 reply; 15+ messages in thread
From: Linhaifeng @ 2015-02-01 6:04 UTC (permalink / raw)
To: Xu, Qian Q, Xie, Huawei, dev
Hi, Xie & Xu,
I found that the new code already tries to notify the guest after sending each packet, since commit 2bbb811.
So this bug does not exist now.
static inline uint32_t __attribute__((always_inline))
virtio_dev_merge_rx(struct virtio_net *dev, uint16_t queue_id,
	struct rte_mbuf **pkts, uint32_t count)
{
	... ...
	for (pkt_idx = 0; pkt_idx < count; pkt_idx++) {
		... ...
		/* Kick the guest if necessary. */
		if (!(vq->avail->flags & VRING_AVAIL_F_NO_INTERRUPT))
			eventfd_write((int)vq->kickfd, 1);
	}
	return count;
}
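One way to cut the interrupt rate discussed in this thread would be to move the kick out of the per-packet loop above and issue it once per burst. The fragment below is only a sketch against the same loop shape (vq, pkts and count as above), not the upstream code.

	uint32_t pkt_idx;
	int used_any = 0;

	for (pkt_idx = 0; pkt_idx < count; pkt_idx++) {
		/* ... copy pkts[pkt_idx] into the ring as before ... */
		used_any = 1;
	}

	/* Kick the guest once for the whole burst instead of once per packet. */
	if (used_any && !(vq->avail->flags & VRING_AVAIL_F_NO_INTERRUPT))
		eventfd_write((int)vq->kickfd, 1);

	return count;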
thank you very much!
On 2015/1/30 16:20, Xu, Qian Q wrote:
> Haifeng
> Could you give more information so that we can reproduce your issue? Thanks.
> 1. What's your dpdk package, based on which branch, with Huawei's vhost-user's patches?
> 2. What's your step and command to launch vhost sample?
> 3. What is mz? Your internal tool? I can't yum install mz or download mz tool.
> 4. As to your test scenario, I understand it in this way: virtio1 in VM1, virtio2 in VM2, then let virtio1 send packages to virtio2, the problem is that after 3 hours, virtio2 can't receive packets, but virtio1 is still sending packets, am I right? So mz is like a packet generator to send packets, right?
>
>
> -----Original Message-----
> From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Linhaifeng
> Sent: Thursday, January 29, 2015 9:51 PM
> To: Xie, Huawei; dev@dpdk.org
> Subject: Re: [dpdk-dev] [PATCH] vhost: notify guest to fill buffer when there is no buffer
>
>
>
> On 2015/1/29 21:00, Xie, Huawei wrote:
>>
>>
>>> -----Original Message-----
>>> From: Linhaifeng [mailto:haifeng.lin@huawei.com]
>>> Sent: Thursday, January 29, 2015 8:39 PM
>>> To: Xie, Huawei; dev@dpdk.org
>>> Subject: Re: [dpdk-dev] [PATCH] vhost: notify guest to fill buffer
>>> when there is no buffer
>>>
>>>
>>>
>>> On 2015/1/29 18:39, Xie, Huawei wrote:
>>>
>>>>> - if (count == 0)
>>>>> + /* If there is no buffers we should notify guest to fill.
>>>>> + * This is need when guest use virtio_net driver(not pmd).
>>>>> + */
>>>>> + if (count == 0) {
>>>>> + if (!(vq->avail->flags &
>>>>> VRING_AVAIL_F_NO_INTERRUPT))
>>>>> + eventfd_write((int)vq->kickfd, 1);
>>>>> return 0;
>>>>> + }
>>>>
>>>> Haifeng:
>>>> Is it the root cause and is it protocol required?
>>>> Could you give a detailed description for that scenario?
>>>>
>>>
>>> I use mz to send data from one VM1 to VM2.The two VM use virtio-net driver.
>>> VM1 execute follow script:
>>> for((i=0;i<999999999;i++));
>>> do
>>> mz eth0 -t udp -A 1.1.1.1 -B 1.1.1.2 -a 00:00:00:00:00:01 -b
>>> 00:00:00:00:00:02 -c
>>> 10000000 -p 512
>>> sleep 4
>>> done
>>>
>>> VM2 execute follow command to watch:
>>> watch -d ifconfig
>>>
>>> After many hours VM2 stop to receive data.
>>>
>>> Could you test it ?
>>
>>
>> We could try next week after I send the whole patch.
>> How many hours? Is it reproducible at your side? I inject packets through packet generator to guest for more than ten hours, haven't met issues.
>
> About three hours.
> What kind of driver you used in guest?virtio-net-pmd or virtio-net?
>
>
>> As I said in another mail sent to you, could you dump the status of vring if you still have the spot?
>
> How to dump the status of vring in guest?
>
>> Could you please also reply to that mail?
>>
>
> Which mail?
>
>
>> For the patch, if we have no root cause, I prefer not to apply it, so that we don't send more interrupts than needed to guest to affect performance.
>
> I found that if we add this notify the performance is better(growth of 100kpps when use 64byte UDP packets)
>
>> People could temporarily apply this patch as a work around.
>>
>> Or anyone
>>
>
> OK.I'm also not sure about this bug.I think i should do something to found the real reason.
>
>>
>>> --
>>> Regards,
>>> Haifeng
>>
>>
>>
>
--
Regards,
Haifeng
^ permalink raw reply [flat|nested] 15+ messages in thread
* Re: [dpdk-dev] [PATCH] vhost: notify guest to fill buffer when there is no buffer
[not found] ` <54D0926D.9010304@huawei.com>
@ 2015-02-04 1:38 ` Xu, Qian Q
2015-02-06 4:02 ` Linhaifeng
0 siblings, 1 reply; 15+ messages in thread
From: Xu, Qian Q @ 2015-02-04 1:38 UTC (permalink / raw)
To: Linhaifeng, Xie, Huawei; +Cc: liuyongan, dev
Haifeng
1. Get the latest dpdk master branch code and apply Huawei's vhost-user patchset. The first patch is http://dpdk.org/dev/patchwork/patch/2796/, 12 patches in total, dated 1/30/2015.
2. Update config/common_linuxapp and build the samples; see my script below for reference.
cd ./dpdk
export RTE_SDK=$PWD
export RTE_TARGET=x86_64-native-linuxapp-gcc
sed -i 's/CONFIG_RTE_LIBRTE_VHOST=.*$/CONFIG_RTE_LIBRTE_VHOST=y/' ./config/common_linuxapp
make install -j38 T=x86_64-native-linuxapp-gcc
cd $RTE_SDK/lib/librte_vhost
make
cd ./eventfd_link
make
cd $RTE_SDK/examples/vhost
make
3. Launch the vhost-user sample; you will then see a vhost-net socket under your dpdk folder. If you hit an error such as "can't setup mempool", you can update one line in examples/vhost/main.c: '#define MAX_QUEUES 512' ---> '#define MAX_QUEUES 128'.
#!/bin/sh
modprobe kvm
modprobe kvm_intel
awk '/Hugepagesize/ {print $2}' /proc/meminfo
awk '/HugePages_Total/ { print $2 }' /proc/meminfo
umount `awk '/hugetlbfs/ { print $2 }' /proc/mounts`
mkdir -p /mnt/huge
mount -t hugetlbfs nodev /mnt/huge -o pagesize=1G #1G or 2M page, both ok.
rm -f /dev/vhost-net
rmmod vhost-net
modprobe fuse
modprobe cuse
rmmod eventfd_link
rmmod igb_uio
cd ./dpdk
pwd
insmod lib/librte_vhost/eventfd_link/eventfd_link.ko
modprobe uio
rmmod rte_kni
rmmod igb_uio
insmod ./x86_64-native-linuxapp-gcc/kmod/igb_uio.ko
./tools/dpdk_nic_bind.py --bind=igb_uio 0000:08:00.1
taskset -c 1-3 examples/vhost/build/vhost-switch -c 0xf -n 4 --huge-dir /mnt/huge --socket-mem 1024,1024 -- -p 1 --mergeable 0 --zero-copy 0 --vm2vm 2
#Make sure the vm2vm is 1 or 2 to make vm to vm communication work. Mergeable can be 1(to enable jumbo frame) or 0(disable jumbo frame).
4. Launch VM1 and VM2 with virtio devices. Note: you need to use qemu version > 2.1 to enable the vhost-user feature; old qemu such as 1.5 or 1.6 doesn't support it.
Below is my VM1 startup command, for your reference, similar for VM2.
/home/qemu-2.2.0/x86_64-softmmu/qemu-system-x86_64 -name us-vhost-vm1 -cpu host -enable-kvm -m 2048 -object memory-backend-file,id=mem,size=2048M,mem-path=/mnt/huge,share=on -numa node,memdev=mem -mem-prealloc -smp 2 -drive file=/home/img/dpdk1-vm1.img -chardev socket,id=char0,path=/home/dpdk-vhost/vhost-net -netdev type=vhost-user,id=mynet1,chardev=char0,vhostforce -device virtio-net-pci,mac=00:00:00:00:00:01, -nographic
5. Then in the VM you can do the same operations as before: send packets from virtio1 to virtio2.
Please let me know if you have any questions or issues.
-----Original Message-----
From: Linhaifeng [mailto:haifeng.lin@huawei.com]
Sent: Tuesday, February 03, 2015 5:19 PM
To: Xu, Qian Q; Xie, Huawei
Cc: lilijun; liuyongan@huawei.com
Subject: Re: [dpdk-dev] [PATCH] vhost: notify guest to fill buffer when there is no buffer
Yes, with the latest code this will not happen.
On 2015/2/3 16:53, Xu, Qian Q wrote:
> If you'd like to use DPDK plus vhost-user's patch, I can send you my steps for setup, do u need it?
Of course, please!
I'd like to use it. Thank you very much!
--
Regards,
Haifeng
^ permalink raw reply [flat|nested] 15+ messages in thread
* Re: [dpdk-dev] [PATCH] vhost: notify guest to fill buffer when there is no buffer
2015-02-04 1:38 ` Xu, Qian Q
@ 2015-02-06 4:02 ` Linhaifeng
2015-02-06 5:54 ` Xu, Qian Q
0 siblings, 1 reply; 15+ messages in thread
From: Linhaifeng @ 2015-02-06 4:02 UTC (permalink / raw)
To: Xu, Qian Q, Xie, Huawei; +Cc: liuyongan, dev
On 2015/2/4 9:38, Xu, Qian Q wrote:
> 4. Launch the VM1 and VM2 with virtio device, note: you need use qemu version>2.1 to enable the vhost-user server's feature. Old qemu such as 1.5,1.6 didn't support it.
> Below is my VM1 startup command, for your reference, similar for VM2.
> /home/qemu-2.2.0/x86_64-softmmu/qemu-system-x86_64 -name us-vhost-vm1 -cpu host -enable-kvm -m 2048 -object memory-backend-file,id=mem,size=2048M,mem-path=/mnt/huge,share=on -numa node,memdev=mem -mem-prealloc -smp 2 -drive file=/home/img/dpdk1-vm1.img -chardev socket,id=char0,path=/home/dpdk-vhost/vhost-net -netdev type=vhost-user,id=mynet1,chardev=char0,vhostforce -device virtio-net-pci,mac=00:00:00:00:00:01, -nographic
>
> 5. Then in the VM, you can have the same operations as before, send packet from virtio1 to virtio2.
>
> Pls let me know if any questions, issues.
Hi Xie & Xu,
When I try to start the VM, vhost-switch crashes.
VHOST_CONFIG: read message VHOST_USER_SET_MEM_TABLE
VHOST_CONFIG: mapped region 0 fd:19 to 0xffffffffffffffff sz:0xa0000 off:0x0
VHOST_CONFIG: mmap qemu guest failed.
VHOST_CONFIG: read message VHOST_USER_SET_VRING_NUM
VHOST_CONFIG: read message VHOST_USER_SET_VRING_BASE
VHOST_CONFIG: read message VHOST_USER_SET_VRING_ADDR
run_dpdk_vhost.sh: line 19: 1854 Segmentation fault ${RTE_SDK}/examples/vhost/build/app/vhost-switch -c 0x300 -n 4 --huge-dir /dev/hugepages -m 2048 -- -p 0x1 --vm2vm 2 --mergeable 0 --zero-copy 0
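For what it is worth, the 0xffffffffffffffff in the "mapped region" line above is MAP_FAILED printed as an address, so the segmentation fault is the unchecked mapping being used afterwards. A defensive check in the mapping path would look roughly like the sketch below, where size and fd stand for the region size and the file descriptor received from qemu.

#include <sys/mman.h>

	void *addr = mmap(NULL, size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);

	if (addr == MAP_FAILED) {
		/* Bail out instead of continuing with an invalid mapping. */
		return -1;
	}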
--
Regards,
Haifeng
^ permalink raw reply [flat|nested] 15+ messages in thread
* Re: [dpdk-dev] [PATCH] vhost: notify guest to fill buffer when there is no buffer
2015-02-06 4:02 ` Linhaifeng
@ 2015-02-06 5:54 ` Xu, Qian Q
2015-02-06 11:02 ` Linhaifeng
2015-02-07 4:26 ` Linhaifeng
0 siblings, 2 replies; 15+ messages in thread
From: Xu, Qian Q @ 2015-02-06 5:54 UTC (permalink / raw)
To: Linhaifeng, Xie, Huawei; +Cc: liuyongan, dev
Haifeng
Are you using the latest dpdk branch with the vhost-user patches? I have never hit the issue.
When does the vhost sample crash: when you start the VM, or when you run sth in the VM? Is your qemu 2.2? How about your memory info? Could you give more details about your steps?
-----Original Message-----
From: Linhaifeng [mailto:haifeng.lin@huawei.com]
Sent: Friday, February 06, 2015 12:02 PM
To: Xu, Qian Q; Xie, Huawei
Cc: lilijun; liuyongan@huawei.com; dev@dpdk.org
Subject: Re: [dpdk-dev] [PATCH] vhost: notify guest to fill buffer when there is no buffer
On 2015/2/4 9:38, Xu, Qian Q wrote:
> 4. Launch the VM1 and VM2 with virtio device, note: you need use qemu version>2.1 to enable the vhost-user server's feature. Old qemu such as 1.5,1.6 didn't support it.
> Below is my VM1 startup command, for your reference, similar for VM2.
> /home/qemu-2.2.0/x86_64-softmmu/qemu-system-x86_64 -name us-vhost-vm1 -cpu host -enable-kvm -m 2048 -object memory-backend-file,id=mem,size=2048M,mem-path=/mnt/huge,share=on -numa node,memdev=mem -mem-prealloc -smp 2 -drive file=/home/img/dpdk1-vm1.img -chardev socket,id=char0,path=/home/dpdk-vhost/vhost-net -netdev type=vhost-user,id=mynet1,chardev=char0,vhostforce -device virtio-net-pci,mac=00:00:00:00:00:01, -nographic
>
> 5. Then in the VM, you can have the same operations as before, send packet from virtio1 to virtio2.
>
> Pls let me know if any questions, issues.
Hi xie & xu
When I try to start VM vhost-switch crashed.
VHOST_CONFIG: read message VHOST_USER_SET_MEM_TABLE
VHOST_CONFIG: mapped region 0 fd:19 to 0xffffffffffffffff sz:0xa0000 off:0x0
VHOST_CONFIG: mmap qemu guest failed.
VHOST_CONFIG: read message VHOST_USER_SET_VRING_NUM
VHOST_CONFIG: read message VHOST_USER_SET_VRING_BASE
VHOST_CONFIG: read message VHOST_USER_SET_VRING_ADDR
run_dpdk_vhost.sh: line 19: 1854 Segmentation fault ${RTE_SDK}/examples/vhost/build/app/vhost-switch -c 0x300 -n 4 --huge-dir /dev/hugepages -m 2048 -- -p 0x1 --vm2vm 2 --mergeable 0 --zero-copy 0
--
Regards,
Haifeng
^ permalink raw reply [flat|nested] 15+ messages in thread
* Re: [dpdk-dev] [PATCH] vhost: notify guest to fill buffer when there is no buffer
2015-02-06 5:54 ` Xu, Qian Q
@ 2015-02-06 11:02 ` Linhaifeng
2015-02-07 4:26 ` Linhaifeng
1 sibling, 0 replies; 15+ messages in thread
From: Linhaifeng @ 2015-02-06 11:02 UTC (permalink / raw)
To: Xu, Qian Q, Xie, Huawei; +Cc: liuyongan, dev
On 2015/2/6 13:54, Xu, Qian Q wrote:
> Haifeng
> Are you using the latest dpdk branch with vhost-user patches? I have never met the issue.
> When is the vhost sample crashed? When you start VM or when you run sth in VM? Is your qemu 2.2? How about your memory info? Could you give more details about your steps?
>
>
Hi, Xu,
What does "sth" mean?
I use dpdk at commit a09f3e4c50467512970519943d26d9c5753584e0 and qemu at the v2.2.0 branch.
Here is my host information:
linux-mRFnwZ:/mnt/sdc/linhf/dpdk-vhost-user/dpdk # free
total used free shared buffers cached
Mem: 82450600 22555172 59895428 0 1506132 3205304
-/+ buffers/cache: 17843736 64606864
Swap: 0 0 0
linux-mRFnwZ:/mnt/sdc/linhf/dpdk-vhost-user/dpdk # cat /proc/meminfo |grep Huge
AnonHugePages: 20480 kB
HugePages_Total: 8192
HugePages_Free: 7052
HugePages_Rsvd: 396
HugePages_Surp: 0
Hugepagesize: 2048 kB
linux-mRFnwZ:/mnt/sdc/linhf/dpdk-vhost-user/dpdk # uname -a
Linux linux-mRFnwZ 3.0.93-0.8-default #1 SMP Tue Aug 27 08:44:18 UTC 2013 (70ed288) x86_64 x86_64 x86_64 GNU/Linux
Here is my guest information:
SUSE 11 SP3 with kernel 3.0.76-0.8-default
Steps:
umount /dev/hugepages/
rmmod igb_uio
rmmod rte_kni
mount -t hugetlbfs nodev /dev/hugepages -o pagesize=2M
echo 8192 > /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
export RTE_SDK=/mnt/sdc/linhf/dpdk-vhost-user/dpdk
export RTE_TARGET=x86_64-native-linuxapp-gcc
modprobe uio
insmod ${RTE_SDK}/x86_64-native-linuxapp-gcc/kmod/igb_uio.ko
dpdk_nic_bind.py -b igb_uio 02:00.0 02:00.1
rmmod vhost_net
modprobe cuse
insmod ${RTE_SDK}/lib/librte_vhost/eventfd_link/eventfd_link.ko
${RTE_SDK}/examples/vhost/build/app/vhost-switch -c 0x300 -n 4 --huge-dir /dev/hugepages -m 2048 -- -p 0x1 --vm2vm 2 --mergeable 0 --zero-copy 0
qemu-system-x86_64 -name vm1 -enable-kvm -smp 2 -m 1024 \
-object memory-backend-file,id=mem,size=1024M,mem-path=/dev/hugepages,share=on -numa node,memdev=mem \
-chardev socket,id=chr0,path=/mnt/sdc/linhf/dpdk-vhost-user/vhost-net \
-netdev type=vhost-user,id=net0,chardev=chr0 -device virtio-net-pci,netdev=net0,mac=00:00:00:00:00:01,csum=off,gso=off,guest_tso4=off,guest_tso6=off,guest_ecn=off \
-chardev socket,id=chr1,path=/mnt/sdc/linhf/dpdk-vhost-user/vhost-net \
-netdev type=vhost-user,id=net1,chardev=chr1 -device virtio-net-pci,netdev=net1,mac=00:00:00:00:00:02,csum=off,gso=off,guest_tso4=off,guest_tso6=off,guest_ecn=off \
-drive file=/mnt/sdc/linhf/vm1.img -vnc :0
qemu-system-x86_64: -netdev type=vhost-user,id=net0,chardev=chr0: chardev "chr0" went up
qemu-system-x86_64: -netdev type=vhost-user,id=net1,chardev=chr1: chardev "chr1" went up
EAL: Detected lcore 0 as core 0 on socket 1
EAL: Detected lcore 1 as core 1 on socket 1
EAL: Detected lcore 2 as core 9 on socket 1
EAL: Detected lcore 3 as core 10 on socket 1
EAL: Detected lcore 4 as core 0 on socket 0
EAL: Detected lcore 5 as core 1 on socket 0
EAL: Detected lcore 6 as core 9 on socket 0
EAL: Detected lcore 7 as core 10 on socket 0
EAL: Detected lcore 8 as core 0 on socket 1
EAL: Detected lcore 9 as core 1 on socket 1
EAL: Detected lcore 10 as core 9 on socket 1
EAL: Detected lcore 11 as core 10 on socket 1
EAL: Detected lcore 12 as core 0 on socket 0
EAL: Detected lcore 13 as core 1 on socket 0
EAL: Detected lcore 14 as core 9 on socket 0
EAL: Detected lcore 15 as core 10 on socket 0
EAL: Support maximum 128 logical core(s) by configuration.
EAL: Detected 16 lcore(s)
EAL: Setting up memory...
EAL: Ask a virtual area of 0x200000000 bytes
EAL: Virtual area found at 0x7f3b57400000 (size = 0x200000000)
EAL: Ask a virtual area of 0x200000 bytes
EAL: Virtual area found at 0x7f3b57000000 (size = 0x200000)
EAL: Ask a virtual area of 0x1ffe00000 bytes
EAL: Virtual area found at 0x7f3957000000 (size = 0x1ffe00000)
EAL: Requesting 1024 pages of size 2MB from socket 1
EAL: WARNING: clock_gettime cannot use CLOCK_MONOTONIC_RAW and HPET is not available - clock timings may be less accurate.
EAL: TSC frequency is ~2400234 KHz
EAL: Master core 8 is ready (tid=5a596800)
EAL: Core 9 is ready (tid=58d34700)
EAL: PCI device 0000:02:00.0 on NUMA socket -1
EAL: probe driver: 8086:10fb rte_ixgbe_pmd
EAL: PCI memory mapped at 0x7f3b57200000
EAL: PCI memory mapped at 0x7f3b57280000
PMD: eth_ixgbe_dev_init(): MAC: 2, PHY: 17, SFP+: 5
PMD: eth_ixgbe_dev_init(): port 0 vendorID=0x8086 deviceID=0x10fb
EAL: PCI device 0000:02:00.1 on NUMA socket -1
EAL: probe driver: 8086:10fb rte_ixgbe_pmd
EAL: PCI memory mapped at 0x7f3b57284000
EAL: PCI memory mapped at 0x7f3b57304000
PMD: eth_ixgbe_dev_init(): MAC: 2, PHY: 1
PMD: eth_ixgbe_dev_init(): port 1 vendorID=0x8086 deviceID=0x10fb
pf queue num: 0, configured vmdq pool num: 64, each vmdq pool has 2 queues
PMD: ixgbe_dev_rx_queue_setup(): sw_ring=0x7f3957aebbc0 hw_ring=0x7f3b57028580 dma_addr=0xedf428580
PMD: ixgbe_dev_rx_queue_setup(): Rx Burst Bulk Alloc Preconditions are satisfied. Rx Burst Bulk Alloc function will be used on port=0, queue=0.
PMD: ixgbe_dev_rx_queue_setup(): Vector rx enabled, please make sure RX burst size no less than 32.
... ...
PMD: ixgbe_dev_rx_queue_setup(): sw_ring=0x7f39579ca040 hw_ring=0x7f39a2eb3b80 dma_addr=0xf2b6b3b80
PMD: ixgbe_dev_rx_queue_setup(): Rx Burst Bulk Alloc Preconditions are satisfied. Rx Burst Bulk Alloc function will be used on port=0, queue=127.
PMD: ixgbe_dev_rx_queue_setup(): Vector rx enabled, please make sure RX burst size no less than 32.
PMD: ixgbe_dev_tx_queue_setup(): sw_ring=0x7f39579c7f00 hw_ring=0x7f39a2ec3c00 dma_addr=0xf2b6c3c00
PMD: set_tx_function(): Using full-featured tx code path
PMD: set_tx_function(): - txq_flags = e01 [IXGBE_SIMPLE_FLAGS=f01]
PMD: set_tx_function(): - tx_rs_thresh = 32 [RTE_PMD_IXGBE_TX_MAX_BURST=32]
PMD: ixgbe_dev_tx_queue_setup(): sw_ring=0x7f39579c5dc0 hw_ring=0x7f39a2ed3c00 dma_addr=0xf2b6d3c00
PMD: set_tx_function(): Using full-featured tx code path
PMD: set_tx_function(): - txq_flags = e01 [IXGBE_SIMPLE_FLAGS=f01]
PMD: set_tx_function(): - tx_rs_thresh = 32 [RTE_PMD_IXGBE_TX_MAX_BURST=32]
VHOST_PORT: Max virtio devices supported: 64
VHOST_PORT: Port 0 MAC: 00 1b 21 69 f7 c8
VHOST_PORT: Skipping disabled port 1
VHOST_DATA: Procesing on Core 9 started
VHOST_CONFIG: socket created, fd:15
VHOST_CONFIG: bind to vhost-net
VHOST_CONFIG: new virtio connection is 16
VHOST_CONFIG: new device, handle is 0
VHOST_CONFIG: new virtio connection is 17
VHOST_CONFIG: new device, handle is 1
VHOST_CONFIG: read message VHOST_USER_SET_OWNER
VHOST_CONFIG: read message VHOST_USER_GET_FEATURES
VHOST_CONFIG: read message VHOST_USER_SET_VRING_CALL
VHOST_CONFIG: vring call idx:0 file:18
VHOST_CONFIG: read message VHOST_USER_SET_VRING_CALL
VHOST_CONFIG: vring call idx:1 file:19
VHOST_CONFIG: read message VHOST_USER_SET_OWNER
VHOST_CONFIG: read message VHOST_USER_GET_FEATURES
VHOST_CONFIG: read message VHOST_USER_SET_VRING_CALL
VHOST_CONFIG: vring call idx:0 file:20
VHOST_CONFIG: read message VHOST_USER_SET_VRING_CALL
VHOST_CONFIG: vring call idx:1 file:21
VHOST_CONFIG: read message VHOST_USER_SET_VRING_CALL
VHOST_CONFIG: vring call idx:0 file:22
VHOST_CONFIG: read message VHOST_USER_SET_VRING_CALL
VHOST_CONFIG: vring call idx:1 file:18
VHOST_CONFIG: read message VHOST_USER_SET_FEATURES
VHOST_CONFIG: read message VHOST_USER_SET_MEM_TABLE
VHOST_CONFIG: mapped region 0 fd:19 to 0xffffffffffffffff sz:0xa0000 off:0x0
VHOST_CONFIG: mmap qemu guest failed.
VHOST_CONFIG: read message VHOST_USER_SET_VRING_NUM
VHOST_CONFIG: read message VHOST_USER_SET_VRING_BASE
VHOST_CONFIG: read message VHOST_USER_SET_VRING_ADDR
./run_dpdk_vhost.sh: line 19: 20796 Segmentation fault ${RTE_SDK}/examples/vhost/build/app/vhost-switch -c 0x300 -n 4 --huge-dir /dev/hugepages -m 2048 -- -p 0x1 --vm2vm 2 --mergeable 0 --zero-copy 0
>
> -----Original Message-----
> From: Linhaifeng [mailto:haifeng.lin@huawei.com]
> Sent: Friday, February 06, 2015 12:02 PM
> To: Xu, Qian Q; Xie, Huawei
> Cc: lilijun; liuyongan@huawei.com; dev@dpdk.org
> Subject: Re: [dpdk-dev] [PATCH] vhost: notify guest to fill buffer when there is no buffer
>
>
>
> On 2015/2/4 9:38, Xu, Qian Q wrote:
>> 4. Launch the VM1 and VM2 with virtio device, note: you need use qemu version>2.1 to enable the vhost-user server's feature. Old qemu such as 1.5,1.6 didn't support it.
>> Below is my VM1 startup command, for your reference, similar for VM2.
>> /home/qemu-2.2.0/x86_64-softmmu/qemu-system-x86_64 -name us-vhost-vm1 -cpu host -enable-kvm -m 2048 -object memory-backend-file,id=mem,size=2048M,mem-path=/mnt/huge,share=on -numa node,memdev=mem -mem-prealloc -smp 2 -drive file=/home/img/dpdk1-vm1.img -chardev socket,id=char0,path=/home/dpdk-vhost/vhost-net -netdev type=vhost-user,id=mynet1,chardev=char0,vhostforce -device virtio-net-pci,mac=00:00:00:00:00:01, -nographic
>>
>> 5. Then in the VM, you can have the same operations as before, send packet from virtio1 to virtio2.
>>
>> Pls let me know if any questions, issues.
>
> Hi xie & xu
>
> When I try to start VM vhost-switch crashed.
>
> VHOST_CONFIG: read message VHOST_USER_SET_MEM_TABLE
> VHOST_CONFIG: mapped region 0 fd:19 to 0xffffffffffffffff sz:0xa0000 off:0x0
> VHOST_CONFIG: mmap qemu guest failed.
> VHOST_CONFIG: read message VHOST_USER_SET_VRING_NUM
> VHOST_CONFIG: read message VHOST_USER_SET_VRING_BASE
> VHOST_CONFIG: read message VHOST_USER_SET_VRING_ADDR
> run_dpdk_vhost.sh: line 19: 1854 Segmentation fault ${RTE_SDK}/examples/vhost/build/app/vhost-switch -c 0x300 -n 4 --huge-dir /dev/hugepages -m 2048 -- -p 0x1 --vm2vm 2 --mergeable 0 --zero-copy 0
>
>
>
--
Regards,
Haifeng
^ permalink raw reply [flat|nested] 15+ messages in thread
* Re: [dpdk-dev] [PATCH] vhost: notify guest to fill buffer when there is no buffer
2015-02-06 5:54 ` Xu, Qian Q
2015-02-06 11:02 ` Linhaifeng
@ 2015-02-07 4:26 ` Linhaifeng
2015-02-09 2:57 ` Xu, Qian Q
1 sibling, 1 reply; 15+ messages in thread
From: Linhaifeng @ 2015-02-07 4:26 UTC (permalink / raw)
To: Xu, Qian Q, Xie, Huawei; +Cc: liuyongan, dev
On 2015/2/6 13:54, Xu, Qian Q wrote:
> Haifeng
> Are you using the latest dpdk branch with vhost-user patches? I have never met the issue.
> When is the vhost sample crashed? When you start VM or when you run sth in VM? Is your qemu 2.2? How about your memory info? Could you give more details about your steps?
>
>
I have figured out why you never hit the issue: vhost-switch notifies the guest after sending every packet (so performance is not very good).
static inline int __attribute__((always_inline))
virtio_tx_local(struct vhost_dev *vdev, struct rte_mbuf *m)
{
	...
	ret = rte_vhost_enqueue_burst(tdev, VIRTIO_RXQ, &m, 1 /* you can try to fill with rx_count */);
	...
}
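For comparison, a batched call shaped like the suggestion in the comment above would look roughly as follows; this is only a sketch, where tdev, pkts and rx_count are assumed to come from the surrounding switch_worker code.

	uint16_t enqueued, i;

	/* Hand the whole received burst to the target virtio device in one call
	 * instead of calling rte_vhost_enqueue_burst() once per mbuf. */
	enqueued = rte_vhost_enqueue_burst(tdev, VIRTIO_RXQ, pkts, rx_count);

	/* The enqueue copies the data into the guest ring, so the mbufs are
	 * freed either way; 'enqueued' only says how many made it into the ring. */
	for (i = 0; i < rx_count; i++)
		rte_pktmbuf_free(pkts[i]);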
>
> -----Original Message-----
> From: Linhaifeng [mailto:haifeng.lin@huawei.com]
> Sent: Friday, February 06, 2015 12:02 PM
> To: Xu, Qian Q; Xie, Huawei
> Cc: lilijun; liuyongan@huawei.com; dev@dpdk.org
> Subject: Re: [dpdk-dev] [PATCH] vhost: notify guest to fill buffer when there is no buffer
>
>
>
> On 2015/2/4 9:38, Xu, Qian Q wrote:
>> 4. Launch the VM1 and VM2 with virtio device, note: you need use qemu version>2.1 to enable the vhost-user server's feature. Old qemu such as 1.5,1.6 didn't support it.
>> Below is my VM1 startup command, for your reference, similar for VM2.
>> /home/qemu-2.2.0/x86_64-softmmu/qemu-system-x86_64 -name us-vhost-vm1 -cpu host -enable-kvm -m 2048 -object memory-backend-file,id=mem,size=2048M,mem-path=/mnt/huge,share=on -numa node,memdev=mem -mem-prealloc -smp 2 -drive file=/home/img/dpdk1-vm1.img -chardev socket,id=char0,path=/home/dpdk-vhost/vhost-net -netdev type=vhost-user,id=mynet1,chardev=char0,vhostforce -device virtio-net-pci,mac=00:00:00:00:00:01, -nographic
>>
>> 5. Then in the VM, you can have the same operations as before, send packet from virtio1 to virtio2.
>>
>> Pls let me know if any questions, issues.
>
> Hi xie & xu
>
> When I try to start VM vhost-switch crashed.
>
> VHOST_CONFIG: read message VHOST_USER_SET_MEM_TABLE
> VHOST_CONFIG: mapped region 0 fd:19 to 0xffffffffffffffff sz:0xa0000 off:0x0
> VHOST_CONFIG: mmap qemu guest failed.
> VHOST_CONFIG: read message VHOST_USER_SET_VRING_NUM
> VHOST_CONFIG: read message VHOST_USER_SET_VRING_BASE
> VHOST_CONFIG: read message VHOST_USER_SET_VRING_ADDR
> run_dpdk_vhost.sh: line 19: 1854 Segmentation fault ${RTE_SDK}/examples/vhost/build/app/vhost-switch -c 0x300 -n 4 --huge-dir /dev/hugepages -m 2048 -- -p 0x1 --vm2vm 2 --mergeable 0 --zero-copy 0
>
>
>
--
Regards,
Haifeng
^ permalink raw reply [flat|nested] 15+ messages in thread
* Re: [dpdk-dev] [PATCH] vhost: notify guest to fill buffer when there is no buffer
2015-02-07 4:26 ` Linhaifeng
@ 2015-02-09 2:57 ` Xu, Qian Q
2015-02-09 4:11 ` Linhaifeng
0 siblings, 1 reply; 15+ messages in thread
From: Xu, Qian Q @ 2015-02-09 2:57 UTC (permalink / raw)
To: Linhaifeng, Xie, Huawei; +Cc: liuyongan, dev
Haifeng,
No matter whether mergeable is 0 or 1, I have not hit the issue of vhost-user crashing when starting the VM. Have you changed the code? As you said below, vhost-switch will notify the guest after sending every packet; yes, that is the current code, and Huawei Xie plans to optimize it in the future. Is the crash caused by the code change or by some other step?
What do you want for vhost-user, a change to the notification mechanism?
Thx. By the way, sth means something.
-----Original Message-----
From: Linhaifeng [mailto:haifeng.lin@huawei.com]
Sent: Saturday, February 07, 2015 12:27 PM
To: Xu, Qian Q; Xie, Huawei
Cc: lilijun; liuyongan@huawei.com; dev@dpdk.org
Subject: Re: [dpdk-dev] [PATCH] vhost: notify guest to fill buffer when there is no buffer
On 2015/2/6 13:54, Xu, Qian Q wrote:
> Haifeng
> Are you using the latest dpdk branch with vhost-user patches? I have never met the issue.
> When is the vhost sample crashed? When you start VM or when you run sth in VM? Is your qemu 2.2? How about your memory info? Could you give more details about your steps?
>
>
I have knew why you never met the issue.Because vhost-switch will notify guest after send every packets(performance is not every well).
static inline int __attribute__((always_inline)) virtio_tx_local(struct vhost_dev *vdev, struct rte_mbuf *m) {
...
ret = rte_vhost_enqueue_burst(tdev, VIRTIO_RXQ, &m, 1/*you cant try to fill with rx_count*/);
..
}
>
> -----Original Message-----
> From: Linhaifeng [mailto:haifeng.lin@huawei.com]
> Sent: Friday, February 06, 2015 12:02 PM
> To: Xu, Qian Q; Xie, Huawei
> Cc: lilijun; liuyongan@huawei.com; dev@dpdk.org
> Subject: Re: [dpdk-dev] [PATCH] vhost: notify guest to fill buffer
> when there is no buffer
>
>
>
> On 2015/2/4 9:38, Xu, Qian Q wrote:
>> 4. Launch the VM1 and VM2 with virtio device, note: you need use qemu version>2.1 to enable the vhost-user server's feature. Old qemu such as 1.5,1.6 didn't support it.
>> Below is my VM1 startup command, for your reference, similar for VM2.
>> /home/qemu-2.2.0/x86_64-softmmu/qemu-system-x86_64 -name us-vhost-vm1
>> -cpu host -enable-kvm -m 2048 -object
>> memory-backend-file,id=mem,size=2048M,mem-path=/mnt/huge,share=on
>> -numa node,memdev=mem -mem-prealloc -smp 2 -drive
>> file=/home/img/dpdk1-vm1.img -chardev
>> socket,id=char0,path=/home/dpdk-vhost/vhost-net -netdev
>> type=vhost-user,id=mynet1,chardev=char0,vhostforce -device
>> virtio-net-pci,mac=00:00:00:00:00:01, -nographic
>>
>> 5. Then in the VM, you can have the same operations as before, send packet from virtio1 to virtio2.
>>
>> Pls let me know if any questions, issues.
>
> Hi xie & xu
>
> When I try to start VM vhost-switch crashed.
>
> VHOST_CONFIG: read message VHOST_USER_SET_MEM_TABLE
> VHOST_CONFIG: mapped region 0 fd:19 to 0xffffffffffffffff sz:0xa0000
> off:0x0
> VHOST_CONFIG: mmap qemu guest failed.
> VHOST_CONFIG: read message VHOST_USER_SET_VRING_NUM
> VHOST_CONFIG: read message VHOST_USER_SET_VRING_BASE
> VHOST_CONFIG: read message VHOST_USER_SET_VRING_ADDR
> run_dpdk_vhost.sh: line 19: 1854 Segmentation fault ${RTE_SDK}/examples/vhost/build/app/vhost-switch -c 0x300 -n 4 --huge-dir /dev/hugepages -m 2048 -- -p 0x1 --vm2vm 2 --mergeable 0 --zero-copy 0
>
>
>
--
Regards,
Haifeng
^ permalink raw reply [flat|nested] 15+ messages in thread
* Re: [dpdk-dev] [PATCH] vhost: notify guest to fill buffer when there is no buffer
2015-02-09 2:57 ` Xu, Qian Q
@ 2015-02-09 4:11 ` Linhaifeng
0 siblings, 0 replies; 15+ messages in thread
From: Linhaifeng @ 2015-02-09 4:11 UTC (permalink / raw)
To: Xu, Qian Q, Xie, Huawei; +Cc: liuyongan, dev
On 2015/2/9 10:57, Xu, Qian Q wrote:
> Haifeng,
> No matter mergeable =0 or 1, I have not met the issue that the vhost-user crash when start VM. Have u changed the code? As you said below, vhost-switch will notify guest after sending every packet, yes, it's the current code, and Huawei, Xie will plan to optimize it in future. Is the crash caused by changing code or any other step?
> What do you want for the vhost-user, changing the notification mechanism?
> Thx. By the way, sth means something.
>
Yes, I have modified the code to fix compile errors (I replaced the partial initializer with memset(&msgh, 0, sizeof msgh)).
The issue is that mmap failed because the memory size is not aligned to the hugepage size. I guess this is a qemu bug (a possible workaround is sketched after the build log below).
In file included from /mnt/sdc/linhf/dpdk-vhost-user/dpdk/lib/librte_vhost/virtio-net.c:34:
/usr/include/linux/vhost.h:33: error: expected specifier-qualifier-list before ‘pid_t’
== Build lib/librte_port
cc1: warnings being treated as errors
/mnt/sdc/linhf/dpdk-vhost-user/dpdk/lib/librte_vhost/vhost_user/vhost-net-:.c: In function ‘read_fd_message’:
/mnt/sdc/linhf/dpdk-vhost-user/dpdk/lib/librte_vhost/vhost_user/vhost-net-user.c:141: error: missing initializer
/mnt/sdc/linhf/dpdk-vhost-user/dpdk/lib/librte_vhost/vhost_user/vhost-net-user.c:141: error: (near initialization for ‘msgh.msg_namelen’)
/mnt/sdc/linhf/dpdk-vhost-user/dpdk/lib/librte_vhost/vhost_user/vhost-net-user.c: In function ‘send_fd_message’:
/mnt/sdc/linhf/dpdk-vhost-user/dpdk/lib/librte_vhost/vhost_user/vhost-net-user.c:213: error: missing initializer
/mnt/sdc/linhf/dpdk-vhost-user/dpdk/lib/librte_vhost/vhost_user/vhost-net-user.c:213: error: (near initialization for ‘msgh.msg_namelen’)
/mnt/sdc/linhf/dpdk-vhost-user/dpdk/lib/librte_vhost/vhost_user/vhost-net-user.c: In function ‘vserver_new_vq_conn’:
/mnt/sdc/linhf/dpdk-vhost-user/dpdk/lib/librte_vhost/vhost_user/vhost-net-user.c:276: error: missing initializer
/mnt/sdc/linhf/dpdk-vhost-user/dpdk/lib/librte_vhost/vhost_user/vhost-net-user.c:276: error: (near initialization for ‘vdev_ctx.fh’)
make[5]: *** [vhost_user/vhost-net-user.o] Error 1
make[5]: *** Waiting for unfinished jobs....
cc1: warnings being treated as errors
/mnt/sdc/linhf/dpdk-vhost-user/dpdk/lib/librte_vhost/vhost_user/virtio-net-user.c: In function ‘user_set_mem_table’:
/mnt/sdc/linhf/dpdk-vhost-user/dpdk/lib/librte_vhost/vhost_user/virtio-net-user.c:104: error: missing initializer
/mnt/sdc/linhf/dpdk-vhost-user/dpdk/lib/librte_vhost/vhost_user/virtio-net-user.c:104: error: (near initialization for ‘tmp[0].mapped_address’)
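On the mmap failure mentioned above: a possible workaround on the vhost side is to round the region size up to the hugepage size before calling mmap(), roughly as in the sketch below. region_size, hugepage_sz and fd are placeholders for the values user_set_mem_table works with; RTE_ALIGN_CEIL is from rte_common.h.

#include <sys/mman.h>
#include <rte_common.h>	/* RTE_ALIGN_CEIL */

	/* hugetlbfs requires the mapping length to be a multiple of the page
	 * size, so align the size qemu reports up before mapping. */
	uint64_t map_sz = RTE_ALIGN_CEIL(region_size, hugepage_sz);
	void *addr = mmap(NULL, map_sz, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);

	if (addr == MAP_FAILED)
		return -1;	/* still failed even with the aligned size */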
> -----Original Message-----
> From: Linhaifeng [mailto:haifeng.lin@huawei.com]
> Sent: Saturday, February 07, 2015 12:27 PM
> To: Xu, Qian Q; Xie, Huawei
> Cc: lilijun; liuyongan@huawei.com; dev@dpdk.org
> Subject: Re: [dpdk-dev] [PATCH] vhost: notify guest to fill buffer when there is no buffer
>
>
>
> On 2015/2/6 13:54, Xu, Qian Q wrote:
>> Haifeng
>> Are you using the latest dpdk branch with vhost-user patches? I have never met the issue.
>> When is the vhost sample crashed? When you start VM or when you run sth in VM? Is your qemu 2.2? How about your memory info? Could you give more details about your steps?
>>
>>
>
> I have knew why you never met the issue.Because vhost-switch will notify guest after send every packets(performance is not every well).
>
> static inline int __attribute__((always_inline)) virtio_tx_local(struct vhost_dev *vdev, struct rte_mbuf *m) {
> ...
> ret = rte_vhost_enqueue_burst(tdev, VIRTIO_RXQ, &m, 1/*you cant try to fill with rx_count*/);
> ..
>
> }
>
>>
>> -----Original Message-----
>> From: Linhaifeng [mailto:haifeng.lin@huawei.com]
>> Sent: Friday, February 06, 2015 12:02 PM
>> To: Xu, Qian Q; Xie, Huawei
>> Cc: lilijun; liuyongan@huawei.com; dev@dpdk.org
>> Subject: Re: [dpdk-dev] [PATCH] vhost: notify guest to fill buffer
>> when there is no buffer
>>
>>
>>
>> On 2015/2/4 9:38, Xu, Qian Q wrote:
>>> 4. Launch the VM1 and VM2 with virtio device, note: you need use qemu version>2.1 to enable the vhost-user server's feature. Old qemu such as 1.5,1.6 didn't support it.
>>> Below is my VM1 startup command, for your reference, similar for VM2.
>>> /home/qemu-2.2.0/x86_64-softmmu/qemu-system-x86_64 -name us-vhost-vm1
>>> -cpu host -enable-kvm -m 2048 -object
>>> memory-backend-file,id=mem,size=2048M,mem-path=/mnt/huge,share=on
>>> -numa node,memdev=mem -mem-prealloc -smp 2 -drive
>>> file=/home/img/dpdk1-vm1.img -chardev
>>> socket,id=char0,path=/home/dpdk-vhost/vhost-net -netdev
>>> type=vhost-user,id=mynet1,chardev=char0,vhostforce -device
>>> virtio-net-pci,mac=00:00:00:00:00:01, -nographic
>>>
>>> 5. Then in the VM, you can have the same operations as before, send packet from virtio1 to virtio2.
>>>
>>> Pls let me know if any questions, issues.
>>
>> Hi xie & xu
>>
>> When I try to start VM vhost-switch crashed.
>>
>> VHOST_CONFIG: read message VHOST_USER_SET_MEM_TABLE
>> VHOST_CONFIG: mapped region 0 fd:19 to 0xffffffffffffffff sz:0xa0000
>> off:0x0
>> VHOST_CONFIG: mmap qemu guest failed.
>> VHOST_CONFIG: read message VHOST_USER_SET_VRING_NUM
>> VHOST_CONFIG: read message VHOST_USER_SET_VRING_BASE
>> VHOST_CONFIG: read message VHOST_USER_SET_VRING_ADDR
>> run_dpdk_vhost.sh: line 19: 1854 Segmentation fault ${RTE_SDK}/examples/vhost/build/app/vhost-switch -c 0x300 -n 4 --huge-dir /dev/hugepages -m 2048 -- -p 0x1 --vm2vm 2 --mergeable 0 --zero-copy 0
>>
>>
>>
>
> --
> Regards,
> Haifeng
>
>
> .
>
--
Regards,
Haifeng
^ permalink raw reply [flat|nested] 15+ messages in thread
end of thread, other threads:[~2015-02-09 4:12 UTC | newest]
Thread overview: 15+ messages
2015-01-29 10:30 [dpdk-dev] [PATCH] vhost: notify guest to fill buffer when there is no buffer linhaifeng
2015-01-29 10:39 ` Xie, Huawei
2015-01-29 12:39 ` Linhaifeng
2015-01-29 13:00 ` Xie, Huawei
2015-01-29 13:50 ` Linhaifeng
2015-01-30 8:20 ` Xu, Qian Q
2015-01-30 10:33 ` Linhaifeng
2015-02-01 6:04 ` Linhaifeng
[not found] ` <82F45D86ADE5454A95A89742C8D1410E01CA1DA3@shsmsx102.ccr.corp.intel.com>
[not found] ` <54CF6BB3.7080002@huawei.com>
[not found] ` <C37D651A908B024F974696C65296B57B0F37C3F7@SHSMSX101.ccr.corp.intel.com>
[not found] ` <54D08AFA.2030404@huawei.com>
[not found] ` <82F45D86ADE5454A95A89742C8D1410E01CA3197@shsmsx102.ccr.corp.intel.com>
[not found] ` <54D0926D.9010304@huawei.com>
2015-02-04 1:38 ` Xu, Qian Q
2015-02-06 4:02 ` Linhaifeng
2015-02-06 5:54 ` Xu, Qian Q
2015-02-06 11:02 ` Linhaifeng
2015-02-07 4:26 ` Linhaifeng
2015-02-09 2:57 ` Xu, Qian Q
2015-02-09 4:11 ` Linhaifeng