* [dpdk-dev] Could not achieve wire speed for 40GE with any DPDK version on XL710 NIC's
@ 2015-06-28 10:34 Pavel Odintsov
2015-06-28 23:35 ` Keunhong Lee
0 siblings, 1 reply; 15+ messages in thread
From: Pavel Odintsov @ 2015-06-28 10:34 UTC (permalink / raw)
To: dev
Hello, folks!
We have executed a bunch of tests receiving data with an Intel XL710 40GE
NIC. We want to achieve wire speed on this platform for traffic
capture.
But we definitely can't do it. We tried different versions of
DPDK: 1.4, 1.6, 1.8, 2.0, without success.
We achieved only 40 Mpps and could not do more.
Could anybody help us with this issue? Looks like these NICs cannot
work at wire speed :(
Platform: Intel Xeon E5-2670 + XL710.
--
Sincerely yours, Pavel Odintsov
* Re: [dpdk-dev] Could not achieve wire speed for 40GE with any DPDK version on XL710 NIC's
2015-06-28 10:34 [dpdk-dev] Could not achieve wire speed for 40GE with any DPDK version on XL710 NIC's Pavel Odintsov
@ 2015-06-28 23:35 ` Keunhong Lee
2015-06-29 6:59 ` Pavel Odintsov
0 siblings, 1 reply; 15+ messages in thread
From: Keunhong Lee @ 2015-06-28 23:35 UTC (permalink / raw)
To: Pavel Odintsov; +Cc: dev
DISCLAIMER: This information is not verified. This is truly my personal
opinion.
As far as I know, the Intel 82599 is the only 10G NIC that supports line
rate with minimum-sized packets (64 bytes).
According to our internal tests, Mellanox's 40G NICs support even less
than 30 Mpps.
I think 40 Mpps is the hardware capacity.
Keunhong.
2015-06-28 19:34 GMT+09:00 Pavel Odintsov <pavel.odintsov@gmail.com>:
> We have executed a bunch of tests receiving data with an Intel XL710 40GE
> NIC. We want to achieve wire speed on this platform for traffic capture.
> [...]
> We achieved only 40 Mpps and could not do more.
* Re: [dpdk-dev] Could not achieve wire speed for 40GE with any DPDK version on XL710 NIC's
2015-06-28 23:35 ` Keunhong Lee
@ 2015-06-29 6:59 ` Pavel Odintsov
2015-06-29 15:06 ` Keunhong Lee
0 siblings, 1 reply; 15+ messages in thread
From: Pavel Odintsov @ 2015-06-29 6:59 UTC (permalink / raw)
To: Keunhong Lee; +Cc: dev
Hello!
Lee, thank you so much for sharing your experience! What do you think
about the 40GE version of the 82599?
On Mon, Jun 29, 2015 at 2:35 AM, Keunhong Lee <dlrmsghd@gmail.com> wrote:
> As far as I know, the Intel 82599 is the only 10G NIC that supports line
> rate with minimum-sized packets (64 bytes).
> According to our internal tests, Mellanox's 40G NICs support even less
> than 30 Mpps.
> I think 40 Mpps is the hardware capacity.
--
Sincerely yours, Pavel Odintsov
* Re: [dpdk-dev] Could not achieve wire speed for 40GE with any DPDK version on XL710 NIC's
2015-06-29 6:59 ` Pavel Odintsov
@ 2015-06-29 15:06 ` Keunhong Lee
2015-06-29 15:38 ` Andrew Theurer
0 siblings, 1 reply; 15+ messages in thread
From: Keunhong Lee @ 2015-06-29 15:06 UTC (permalink / raw)
To: Pavel Odintsov; +Cc: dev
I have not used the XL710 or i40e, so I have no opinion on those NICs.
Keunhong.
2015-06-29 15:59 GMT+09:00 Pavel Odintsov <pavel.odintsov@gmail.com>:
> Lee, thank you so much for sharing your experience! What do you think
> about the 40GE version of the 82599?
* Re: [dpdk-dev] Could not achieve wire speed for 40GE with any DPDK version on XL710 NIC's
2015-06-29 15:06 ` Keunhong Lee
@ 2015-06-29 15:38 ` Andrew Theurer
2015-06-29 15:41 ` Pavel Odintsov
0 siblings, 1 reply; 15+ messages in thread
From: Andrew Theurer @ 2015-06-29 15:38 UTC (permalink / raw)
To: Keunhong Lee; +Cc: dev
On Mon, Jun 29, 2015 at 10:06 AM, Keunhong Lee <dlrmsghd@gmail.com> wrote:
> According to our internal tests, Mellanox's 40G NICs support even less
> than 30 Mpps.
> I think 40 Mpps is the hardware capacity.
This is approximately what I see as well.
-Andrew
* Re: [dpdk-dev] Could not achieve wire speed for 40GE with any DPDK version on XL710 NIC's
2015-06-29 15:38 ` Andrew Theurer
@ 2015-06-29 15:41 ` Pavel Odintsov
2015-07-01 12:06 ` Vladimir Medvedkin
0 siblings, 1 reply; 15+ messages in thread
From: Pavel Odintsov @ 2015-06-29 15:41 UTC (permalink / raw)
To: Andrew Theurer; +Cc: dev
Hello, Andrew!
Which NIC did you use? Was it the XL710?
On Mon, Jun 29, 2015 at 6:38 PM, Andrew Theurer <atheurer@redhat.com> wrote:
> > I think 40 Mpps is the hardware capacity.
>
> This is approximately what I see as well.
>
> -Andrew
--
Sincerely yours, Pavel Odintsov
* Re: [dpdk-dev] Could not achieve wire speed for 40GE with any DPDK version on XL710 NIC's
2015-06-29 15:41 ` Pavel Odintsov
@ 2015-07-01 12:06 ` Vladimir Medvedkin
2015-07-01 12:44 ` Pavel Odintsov
0 siblings, 1 reply; 15+ messages in thread
From: Vladimir Medvedkin @ 2015-07-01 12:06 UTC (permalink / raw)
To: Pavel Odintsov; +Cc: dev
Hi Pavel,
Looks like you ran into a PCIe bottleneck. So let's calculate the XL710
RX-only case.
Assume we have 32-byte descriptors (if we want more offload).
DMA makes one PCIe transaction with the packet payload, one descriptor
writeback, and one memory read request for free descriptors for every 4
packets. Each Transaction Layer Packet (TLP) carries 30 bytes of overhead
(4 PHY + 6 DLL + 16 header + 4 ECRC). So for 1 RX packet the DMA sends
30 + 64 (the packet itself) + 30 + 32 (writeback descriptor) + 16 / 4
(read request for new descriptors, amortized over 4 packets) bytes. Note
that we do not take into account PCIe ACK/NACK/FC Update DLLPs. So we have
160 bytes per packet. One PCIe 3.0 lane transmits roughly 1 byte per ns,
so x8 transmits 8 bytes per ns, and 1 packet takes 20 ns.
Thus in theory PCIe 3.0 x8 may transfer no more than 50 Mpps.
Correct me if I'm wrong.
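For anyone who wants to check the arithmetic, here is the same budget as a
small Python sketch (all byte counts are the assumptions above, not
measured values; the names are just for illustration):

    # Estimated PCIe 3.0 x8 RX budget for 64-byte packets, using the
    # per-packet overheads assumed above.
    TLP_OVERHEAD = 30      # 4 PHY + 6 DLL + 16 header + 4 ECRC (assumed)
    PACKET = 64            # minimum-size frame
    DESCRIPTOR = 32        # 32-byte RX descriptor (assumed)
    READ_REQ = 16          # one read request covers descriptors for 4 packets

    bytes_per_packet = (TLP_OVERHEAD + PACKET        # payload TLP
                        + TLP_OVERHEAD + DESCRIPTOR  # writeback TLP
                        + READ_REQ / 4)              # amortized read request
    ns_per_packet = bytes_per_packet / 8.0           # x8 moves ~8 bytes per ns
    print(bytes_per_packet, ns_per_packet, 1e3 / ns_per_packet)
    # 160.0 bytes, 20.0 ns -> 50.0 Mpps, below the ~59.5 Mpps
    # 40GE wire rate for 64-byte frames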
Regards,
Vladimir
2015-06-29 18:41 GMT+03:00 Pavel Odintsov <pavel.odintsov@gmail.com>:
> Hello, Andrew!
>
> Which NIC did you use? Was it the XL710?
* Re: [dpdk-dev] Could not achieve wire speed for 40GE with any DPDK version on XL710 NIC's
2015-07-01 12:06 ` Vladimir Medvedkin
@ 2015-07-01 12:44 ` Pavel Odintsov
2015-07-01 12:59 ` Bruce Richardson
0 siblings, 1 reply; 15+ messages in thread
From: Pavel Odintsov @ 2015-07-01 12:44 UTC (permalink / raw)
To: Vladimir Medvedkin; +Cc: dev
Thanks for the answer, Vladimir! So we need to look for an x16 NIC if we
want to achieve 40GE line rate...
On Wed, Jul 1, 2015 at 3:06 PM, Vladimir Medvedkin <medvedkinv@gmail.com> wrote:
> Looks like you ran into a PCIe bottleneck.
> [...]
> So we have 160 bytes per packet. [...] Thus in theory PCIe 3.0 x8 may
> transfer no more than 50 Mpps.
--
Sincerely yours, Pavel Odintsov
* Re: [dpdk-dev] Could not achieve wire speed for 40GE with any DPDK version on XL710 NIC's
2015-07-01 12:44 ` Pavel Odintsov
@ 2015-07-01 12:59 ` Bruce Richardson
2015-07-01 13:05 ` Pavel Odintsov
0 siblings, 1 reply; 15+ messages in thread
From: Bruce Richardson @ 2015-07-01 12:59 UTC (permalink / raw)
To: Pavel Odintsov; +Cc: dev
On Wed, Jul 01, 2015 at 03:44:57PM +0300, Pavel Odintsov wrote:
> Thanks for the answer, Vladimir! So we need to look for an x16 NIC if we
> want to achieve 40GE line rate...
>
Note that this only applies to your minimal, i.e. 64-byte, packet sizes.
Once you go up to larger packets, e.g. 128 bytes, your PCIe bandwidth
requirements are lower and you can achieve line rate more easily.
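To make the 64-byte vs. 128-byte difference concrete under Vladimir's
assumed overheads, a minimal sketch comparing wire rate against the
estimated PCIe capacity at each size:

    # 40GE wire rate vs. estimated PCIe 3.0 x8 capacity per packet size,
    # reusing the per-packet byte estimate from earlier in the thread.
    def wire_mpps(size):
        # 20 extra bytes per frame on the wire: preamble (8) + IFG (12)
        return 40e9 / ((size + 20) * 8) / 1e6

    def pcie_mpps(size):
        # 30B TLP overhead x2, 32B writeback, amortized read request
        return 8.0 * 1e3 / (30 + size + 30 + 32 + 16 / 4)  # ~8 bytes/ns on x8

    for size in (64, 128):
        print(size, round(wire_mpps(size), 1), round(pcie_mpps(size), 1))
    # 64B:  wire 59.5 Mpps > PCIe ~50.0 Mpps -> PCIe-bound
    # 128B: wire 33.8 Mpps < PCIe ~35.7 Mpps -> line rate within reach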
/Bruce
* Re: [dpdk-dev] Could not achieve wire speed for 40GE with any DPDK version on XL710 NIC's
2015-07-01 12:59 ` Bruce Richardson
@ 2015-07-01 13:05 ` Pavel Odintsov
2015-07-01 13:40 ` Vladimir Medvedkin
0 siblings, 1 reply; 15+ messages in thread
From: Pavel Odintsov @ 2015-07-01 13:05 UTC (permalink / raw)
To: Bruce Richardson; +Cc: dev
Yes, Bruce, we understand this. But we are processing huge SYN
attacks, and those packets are 64-byte only :(
On Wed, Jul 1, 2015 at 3:59 PM, Bruce Richardson
<bruce.richardson@intel.com> wrote:
> Note that this only applies to your minimal, i.e. 64-byte, packet sizes.
> Once you go up to larger packets, e.g. 128 bytes, your PCIe bandwidth
> requirements are lower and you can achieve line rate more easily.
--
Sincerely yours, Pavel Odintsov
* Re: [dpdk-dev] Could not achieve wire speed for 40GE with any DPDK version on XL710 NIC's
2015-07-01 13:05 ` Pavel Odintsov
@ 2015-07-01 13:40 ` Vladimir Medvedkin
2015-07-01 14:22 ` Anuj Kalia
0 siblings, 1 reply; 15+ messages in thread
From: Vladimir Medvedkin @ 2015-07-01 13:40 UTC (permalink / raw)
To: Pavel Odintsov; +Cc: dev
In the SYN-flood case you should take into account the return SYN-ACK
traffic, which generates PCIe DLLPs from NIC to host, so the PCIe
bandwidth is exhausted faster. And don't forget about the DLLPs generated
by RX traffic, which saturate the host-to-NIC bus.
2015-07-01 16:05 GMT+03:00 Pavel Odintsov <pavel.odintsov@gmail.com>:
> Yes, Bruce, we understand this. But we are processing huge SYN
> attacks, and those packets are 64-byte only :(
* Re: [dpdk-dev] Could not achieve wire speed for 40GE with any DPDK version on XL710 NIC's
2015-07-01 13:40 ` Vladimir Medvedkin
@ 2015-07-01 14:22 ` Anuj Kalia
2015-07-01 17:32 ` Vladimir Medvedkin
0 siblings, 1 reply; 15+ messages in thread
From: Anuj Kalia @ 2015-07-01 14:22 UTC (permalink / raw)
To: Vladimir Medvedkin; +Cc: dev
Vladimir,
A few possible fixes to your PCIe analysis (let me know if I'm wrong):
- ECRC is probably disabled (check using sudo lspci -vvv | grep
CGenEn-), so the TLP overhead is 26 bytes
- Descriptor writeback can be batched using a high value of WTHRESH,
which is what DPDK uses by default
- The read request carries a full TLP header (26 bytes)
Assuming WTHRESH = 4, bytes transferred from NIC to host per packet =
26 + 64 (packet itself) +
(26 + 32) / 4 (batched descriptor writeback) +
(26 / 4) (read request for new descriptors) =
111 bytes / packet
This corresponds to 70.9 Mpps over PCIe 3.0 x8. Assuming 5% DLLP
overhead, the rate is 67.4 Mpps.
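The same numbers as a quick sketch (the WTHRESH batch size and the 5% DLLP
figure are assumptions, not measurements):

    # Revised RX budget: ECRC off (26B TLP overhead), writeback batched
    # over an assumed WTHRESH of 4 descriptors.
    TLP = 26
    bytes_per_packet = (TLP + 64           # packet TLP
                        + (TLP + 32) / 4   # batched writeback, amortized
                        + TLP / 4)         # read request for new descriptors
    link_bytes_per_s = 8e9 * 8 * (128 / 130) / 8  # PCIe 3.0 x8, ~7.88 GB/s
    mpps = link_bytes_per_s / bytes_per_packet / 1e6
    print(bytes_per_packet, mpps, mpps * 0.95)
    # -> 111.0 bytes, ~70.9 Mpps, ~67.4 Mpps with the 5% DLLP allowance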
--Anuj
On Wed, Jul 1, 2015 at 9:40 AM, Vladimir Medvedkin <medvedkinv@gmail.com> wrote:
> In the SYN-flood case you should take into account the return SYN-ACK
> traffic, which generates PCIe DLLPs from NIC to host, so the PCIe
> bandwidth is exhausted faster.
* Re: [dpdk-dev] Could not achieve wire speed for 40GE with any DPDK version on XL710 NIC's
2015-07-01 14:22 ` Anuj Kalia
@ 2015-07-01 17:32 ` Vladimir Medvedkin
2015-07-01 18:01 ` Anuj Kalia
0 siblings, 1 reply; 15+ messages in thread
From: Vladimir Medvedkin @ 2015-07-01 17:32 UTC (permalink / raw)
To: Anuj Kalia; +Cc: dev
Hi Anuj,
Thanks for the fixes!
I have 2 comments:
- from i40e_ethdev.h: #define I40E_DEFAULT_RX_WTHRESH 0
- (26 + 32) / 4 (batched descriptor writeback) should be (26 + 4 * 32) / 4,
since the batched writeback TLP still carries all 4 descriptor payloads;
thus we have 135 bytes/packet.
This corresponds to 58.8 Mpps.
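A minimal check of the corrected figure, under the same assumed link
efficiency as before (the exact Mpps depends on that assumption):

    # Corrected budget: one batched writeback TLP carries all 4
    # descriptor payloads, so the amortized cost is (26 + 4 * 32) / 4.
    bytes_per_packet = 26 + 64 + (26 + 4 * 32) / 4 + 26 / 4  # = 135.0
    link_bytes_per_s = 8e9 * 8 * (128 / 130) / 8             # PCIe 3.0 x8
    print(link_bytes_per_s / bytes_per_packet / 1e6)
    # ~58.3 Mpps, in the same ballpark as the 58.8 Mpps above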
Regards,
Vladimir
2015-07-01 17:22 GMT+03:00 Anuj Kalia <anujkaliaiitd@gmail.com>:
> Assuming WTHRESH = 4, bytes transferred from NIC to host per packet =
> 26 + 64 (packet itself) +
> (26 + 32) / 4 (batched descriptor writeback) +
> (26 / 4) (read request for new descriptors) =
> 111 bytes / packet
* Re: [dpdk-dev] Could not achieve wire speed for 40GE with any DPDK version on XL710 NIC's
2015-07-01 17:32 ` Vladimir Medvedkin
@ 2015-07-01 18:01 ` Anuj Kalia
2015-07-03 8:35 ` Pavel Odintsov
0 siblings, 1 reply; 15+ messages in thread
From: Anuj Kalia @ 2015-07-01 18:01 UTC (permalink / raw)
To: Vladimir Medvedkin; +Cc: dev
Thanks for the comments.
On Wed, Jul 1, 2015 at 1:32 PM, Vladimir Medvedkin <medvedkinv@gmail.com> wrote:
> - (26 + 32) / 4 (batched descriptor writeback) should be (26 + 4 * 32) / 4,
> thus we have 135 bytes/packet.
>
> This corresponds to 58.8 Mpps.
* Re: [dpdk-dev] Could not achieve wire speed for 40GE with any DPDK version on XL710 NIC's
2015-07-01 18:01 ` Anuj Kalia
@ 2015-07-03 8:35 ` Pavel Odintsov
0 siblings, 0 replies; 15+ messages in thread
From: Pavel Odintsov @ 2015-07-03 8:35 UTC (permalink / raw)
To: Anuj Kalia; +Cc: dev
Hello, folks!
We have found the root of the issue.
Intel does not offer wire speed for 64-byte packets on the XL710 at all.
As mentioned in the data sheet
http://www.intel.ru/content/dam/www/public/us/en/documents/product-briefs/xl710-10-40-gbe-controller-brief.pdf
we have:
Small packet performance: Maintains wire-rate throughput on smaller
payload sizes (>128 Bytes for 40 GbE and >64 Bytes for 10 GbE)
Could anybody recommend NICs that can truly achieve wire rate for 40GE?
On Wed, Jul 1, 2015 at 9:01 PM, Anuj Kalia <anujkaliaiitd@gmail.com> wrote:
> Thanks for the comments.
--
Sincerely yours, Pavel Odintsov