DPDK usage discussions
* [dpdk-users] high jitter in dpdk kni
@ 2016-01-23 14:17 Masoud Moshref Javadi
  2016-01-25  8:29 ` Freynet, Marc (Nokia - FR)
  2016-01-25 13:37 ` Jay Rolette
  0 siblings, 2 replies; 8+ messages in thread
From: Masoud Moshref Javadi @ 2016-01-23 14:17 UTC (permalink / raw)
  To: users

I see jitter in KNI RTT. I have two servers. I run kni sample application
on one, configure its IP and ping an external interface.

sudo -E build/kni -c 0xaaaa -n 4 -- -p 0x1 -P --config="(0,3,5)"
sudo ifconfig vEth0 192.168.1.2/24
ping 192.168.1.3

This is the ping result:
64 bytes from 192.168.1.2: icmp_seq=5 ttl=64 time=1.93 ms
64 bytes from 192.168.1.2: icmp_seq=6 ttl=64 time=0.907 ms
64 bytes from 192.168.1.2: icmp_seq=7 ttl=64 time=3.15 ms
64 bytes from 192.168.1.2: icmp_seq=8 ttl=64 time=1.96 ms
64 bytes from 192.168.1.2: icmp_seq=9 ttl=64 time=3.95 ms
64 bytes from 192.168.1.2: icmp_seq=10 ttl=64 time=2.90 ms
64 bytes from 192.168.1.2: icmp_seq=11 ttl=64 time=0.933 ms

The ping delay between two servers without kni is 0.170ms.
I'm using dpdk 2.2.

Any thoughts on how to keep the KNI delay predictable?

Thanks

^ permalink raw reply	[flat|nested] 8+ messages in thread

* Re: [dpdk-users] high jitter in dpdk kni
  2016-01-23 14:17 [dpdk-users] high jitter in dpdk kni Masoud Moshref Javadi
@ 2016-01-25  8:29 ` Freynet, Marc (Nokia - FR)
  2016-01-25 13:15   ` Masoud Moshref Javadi
  2016-01-25 13:37 ` Jay Rolette
  1 sibling, 1 reply; 8+ messages in thread
From: Freynet, Marc (Nokia - FR) @ 2016-01-25  8:29 UTC (permalink / raw)
  To: EXT Masoud Moshref Javadi, users

Hi,

Some months ago we had a problem with the KNI driver.
Our DPDK application forwards the SCTP PDUs received from the Ethernet NIC to the Linux kernel through KNI.
In one configuration, the SCTP source and destination ports were constant across all SCTP connections.
This is a known SCTP issue: in a multi-processor environment, the SCTP stack hashes on the source and destination ports to choose the processor that will run the SCTP context in the kernel.
SCTP cannot use the IP address as a hash key because of the SCTP multi-homing feature.

As the SCTP PDUs of all connections were processed by the same core, this created a bottleneck: the KNI interface started introducing jitter and even lost PDUs when forwarding them to the Linux kernel IP stack.

I am wondering whether it would be possible, in a future release, to add to KNI some kind of load sharing, with different queues on different cores forwarding the received PDUs to the Linux IP stack.

​"Bowl of rice will raise a benefactor, a bucket of rice will raise a enemy.", Chinese proverb.

FREYNET Marc
Alcatel-Lucent France
Centre de Villarceaux
Route de Villejust
91620 NOZAY France

Tel:  +33 (0)1 6040 1960
Intranet: 2103 1960

marc.freynet@nokia.com


^ permalink raw reply	[flat|nested] 8+ messages in thread

* Re: [dpdk-users] high jitter in dpdk kni
  2016-01-25  8:29 ` Freynet, Marc (Nokia - FR)
@ 2016-01-25 13:15   ` Masoud Moshref Javadi
  2016-01-25 13:43     ` Andriy Berestovskyy
  2016-01-25 14:43     ` Freynet, Marc (Nokia - FR)
  0 siblings, 2 replies; 8+ messages in thread
From: Masoud Moshref Javadi @ 2016-01-25 13:15 UTC (permalink / raw)
  To: Freynet, Marc (Nokia - FR), users

I see, but that should not be the issue with ping.
By the way, I checked the send and receive timestamps in the kni sample
application (the kni_ingress and kni_egress functions). The RTT at that
level is fine (0.1 ms), so the jitter seems to come from the kni kernel
module.


^ permalink raw reply	[flat|nested] 8+ messages in thread

* Re: [dpdk-users] high jitter in dpdk kni
  2016-01-23 14:17 [dpdk-users] high jitter in dpdk kni Masoud Moshref Javadi
  2016-01-25  8:29 ` Freynet, Marc (Nokia - FR)
@ 2016-01-25 13:37 ` Jay Rolette
  1 sibling, 0 replies; 8+ messages in thread
From: Jay Rolette @ 2016-01-25 13:37 UTC (permalink / raw)
  To: Masoud Moshref Javadi; +Cc: users

I dug into essentially the same issue last year. This post explains what I
found and what I did to improve the situation:

http://dpdk.org/ml/archives/dev/2015-June/018850.html

Jay


^ permalink raw reply	[flat|nested] 8+ messages in thread

* Re: [dpdk-users] high jitter in dpdk kni
  2016-01-25 13:15   ` Masoud Moshref Javadi
@ 2016-01-25 13:43     ` Andriy Berestovskyy
  2016-01-25 13:49       ` Masoud Moshref Javadi
  2016-01-25 14:43     ` Freynet, Marc (Nokia - FR)
  1 sibling, 1 reply; 8+ messages in thread
From: Andriy Berestovskyy @ 2016-01-25 13:43 UTC (permalink / raw)
  To: Masoud Moshref Javadi; +Cc: users

Hi Masoud,
Try a low-latency kernel (e.g. linux-image-lowlatency).

Andriy




-- 
Andriy Berestovskyy

^ permalink raw reply	[flat|nested] 8+ messages in thread

* Re: [dpdk-users] high jitter in dpdk kni
  2016-01-25 13:43     ` Andriy Berestovskyy
@ 2016-01-25 13:49       ` Masoud Moshref Javadi
  2016-01-25 14:15         ` Jay Rolette
  0 siblings, 1 reply; 8+ messages in thread
From: Masoud Moshref Javadi @ 2016-01-25 13:49 UTC (permalink / raw)
  To: Andriy Berestovskyy; +Cc: users

Thanks.
For me the easiest solution is to sacrifice a core and set
CONFIG_RTE_KNI_PREEMPT_DEFAULT to 'n'. I can confirm that this reduced
the delay to 0.1 ms.



^ permalink raw reply	[flat|nested] 8+ messages in thread

* Re: [dpdk-users] high jitter in dpdk kni
  2016-01-25 13:49       ` Masoud Moshref Javadi
@ 2016-01-25 14:15         ` Jay Rolette
  0 siblings, 0 replies; 8+ messages in thread
From: Jay Rolette @ 2016-01-25 14:15 UTC (permalink / raw)
  To: Masoud Moshref Javadi; +Cc: users

On Mon, Jan 25, 2016 at 7:49 AM, Masoud Moshref Javadi <
masood.moshref.j@gmail.com> wrote:

> Thanks.
> For me the easiest solution is to sacrifice a core and set
> CONFIG_RTE_KNI_PREEMPT_DEFAULT
> to 'n'. I can confirm that this reduced the delay to 0.1ms
>

Make sure you take a look at the throughput you get with that change. In my
case, I found it reduced the latency, but lowered my aggregate throughput:

http://dpdk.org/ml/archives/dev/2015-June/018858.html

Jay

^ permalink raw reply	[flat|nested] 8+ messages in thread

* Re: [dpdk-users] high jitter in dpdk kni
  2016-01-25 13:15   ` Masoud Moshref Javadi
  2016-01-25 13:43     ` Andriy Berestovskyy
@ 2016-01-25 14:43     ` Freynet, Marc (Nokia - FR)
  1 sibling, 0 replies; 8+ messages in thread
From: Freynet, Marc (Nokia - FR) @ 2016-01-25 14:43 UTC (permalink / raw)
  To: EXT Masoud Moshref Javadi, users

> I see. But this should not be the issue in Ping.

Maybe you could check the CPU usage per core, in particular on the core where the KNI Linux kernel thread runs.


^ permalink raw reply	[flat|nested] 8+ messages in thread

end of thread, other threads:[~2016-01-25 14:43 UTC | newest]

Thread overview: 8+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2016-01-23 14:17 [dpdk-users] high jitter in dpdk kni Masoud Moshref Javadi
2016-01-25  8:29 ` Freynet, Marc (Nokia - FR)
2016-01-25 13:15   ` Masoud Moshref Javadi
2016-01-25 13:43     ` Andriy Berestovskyy
2016-01-25 13:49       ` Masoud Moshref Javadi
2016-01-25 14:15         ` Jay Rolette
2016-01-25 14:43     ` Freynet, Marc (Nokia - FR)
2016-01-25 13:37 ` Jay Rolette
