DPDK patches and discussions
* [dpdk-dev] Packet drops during non-exhaustive flood with OVS and 1.8.0
@ 2015-01-21 17:02 Andrey Korolyov
  2015-01-22 17:11 ` Andrey Korolyov
  0 siblings, 1 reply; 9+ messages in thread
From: Andrey Korolyov @ 2015-01-21 17:02 UTC (permalink / raw)
  To: dev; +Cc: discuss

Hello,

I observed that the latest OVS with dpdk-1.8.0 and igb_uio starts to
drop packets earlier than a regular Linux ixgbe 10G interface, setup
follows:

receiver/forwarder:
- 8 core/2 head system with E5-2603v2, cores 1-3 are given to OVS exclusively
- n-dpdk-rxqs=6, rx scattering is not enabled
- x520 da
- 3.10/3.18 host kernel
- during 'legacy mode' testing, queue interrupts are scattered through all cores

sender:
- 16-core E52630, netmap framework for packet generation
- pkt-gen -f tx -i eth2 -s 10.6.9.0-10.6.9.255 -d
10.6.10.0-10.6.10.255 -S 90:e2:ba:84:19:a0 -D 90:e2:ba:85:06:07 -R
11000000; this produces an 11 Mpps flood of 60-byte packets, and the
rate stays constant during the test.

OVS contains only a single drop rule at the moment:
ovs-ofctl add-flow br0 in_port=1,actions=DROP
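
For completeness, the bridge and port side of the setup is roughly the
following; device names and the pmd mask are written from memory, and the
pmd-cpu-mask knob is assumed to be present in this OVS build:

ovs-vsctl add-br br0 -- set bridge br0 datapath_type=netdev   # userspace datapath
ovs-vsctl add-port br0 dpdk0 -- set Interface dpdk0 type=dpdk # x520 bound to igb_uio
# six rx queues per DPDK port, pmd threads limited to cores 1-3 (mask 0xe)
ovs-vsctl set Open_vSwitch . other_config:n-dpdk-rxqs=6
ovs-vsctl set Open_vSwitch . other_config:pmd-cpu-mask=0xe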

The packet generator was run for tens of seconds against both the
Linux stack and the OVS+DPDK case. In the former, the interface showed
a zero drop/error count, and the pktgen and host interface counters
matched (meaning that every generated packet was accounted for).

I selected a rate of about 11 Mpps because OVS starts to drop packets
around this value; after the same short test the interface stats show
the following:

statistics          : {collisions=0, rx_bytes=22003928768,
rx_crc_err=0, rx_dropped=0, rx_errors=10694693, rx_frame_err=0,
rx_over_err=0, rx_packets=343811387, tx_bytes=0, tx_dropped=0,
tx_errors=0, tx_packets=0}

pktgen side:
Sent 354506080 packets, 60 bytes each, in 32.23 seconds.
Speed: 11.00 Mpps Bandwidth: 5.28 Gbps (raw 7.39 Gbps)
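
As a sanity check, the two sets of counters are consistent with each other:

343,811,387 rx_packets + 10,694,693 rx_errors = 354,506,080 packets sent

so every generated packet is accounted for, rx_errors covers all of the
lost ones (roughly 3% of the total at this rate), and rx_bytes/rx_packets
= 64, i.e. the 60-byte frame plus, presumably, the 4-byte CRC.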

If the rate is increased to 13-14 Mpps, the error-to-total ratio
rises to about one third. Apart from this, OVS on DPDK shows excellent
results, and I do not want to reject this solution because of
behavior like the one described, so I am open to any suggestions to
improve the situation (except using the 1.7 branch :) ).


* Re: [dpdk-dev] Packet drops during non-exhaustive flood with OVS and 1.8.0
  2015-01-21 17:02 [dpdk-dev] Packet drops during non-exhaustive flood with OVS and 1.8.0 Andrey Korolyov
@ 2015-01-22 17:11 ` Andrey Korolyov
       [not found]   ` <CABYiri_q4QqWrhTQL-UZ1wf3FLz-wj9SbtBJrDntz2Bw4cEPoQ@mail.gmail.com>
  0 siblings, 1 reply; 9+ messages in thread
From: Andrey Korolyov @ 2015-01-22 17:11 UTC (permalink / raw)
  To: dev; +Cc: discuss

On Wed, Jan 21, 2015 at 8:02 PM, Andrey Korolyov <andrey@xdel.ru> wrote:
> Hello,
>
> I observed that the latest OVS with dpdk-1.8.0 and igb_uio starts to
> drop packets earlier than a regular Linux ixgbe 10G interface, setup
> follows:
>
> receiver/forwarder:
> - 8 core/2 head system with E5-2603v2, cores 1-3 are given to OVS exclusively
> - n-dpdk-rxqs=6, rx scattering is not enabled
> - x520 da
> - 3.10/3.18 host kernel
> - during 'legacy mode' testing, queue interrupts are scattered through all cores
>
> sender:
> - 16-core E52630, netmap framework for packet generation
> - pkt-gen -f tx -i eth2 -s 10.6.9.0-10.6.9.255 -d
> 10.6.10.0-10.6.10.255 -S 90:e2:ba:84:19:a0 -D 90:e2:ba:85:06:07 -R
> 11000000; this produces an 11 Mpps flood of 60-byte packets, and the
> rate stays constant during the test.
>
> OVS contains only a single drop rule at the moment:
> ovs-ofctl add-flow br0 in_port=1,actions=DROP
>
> The packet generator was run for tens of seconds against both the
> Linux stack and the OVS+DPDK case. In the former, the interface showed
> a zero drop/error count, and the pktgen and host interface counters
> matched (meaning that every generated packet was accounted for).
>
> I selected a rate of about 11 Mpps because OVS starts to drop packets
> around this value; after the same short test the interface stats show
> the following:
>
> statistics          : {collisions=0, rx_bytes=22003928768,
> rx_crc_err=0, rx_dropped=0, rx_errors=10694693, rx_frame_err=0,
> rx_over_err=0, rx_packets=343811387, tx_bytes=0, tx_dropped=0,
> tx_errors=0, tx_packets=0}
>
> pktgen side:
> Sent 354506080 packets, 60 bytes each, in 32.23 seconds.
> Speed: 11.00 Mpps Bandwidth: 5.28 Gbps (raw 7.39 Gbps)
>
> If the rate is increased to 13-14 Mpps, the error-to-total ratio
> rises to about one third. Apart from this, OVS on DPDK shows excellent
> results, and I do not want to reject this solution because of
> behavior like the one described, so I am open to any suggestions to
> improve the situation (except using the 1.7 branch :) ).

At a glance it looks like there is a problem with the pmd threads, as
they start to consume about five thousandths of sys% on their dedicated
cores during the flood, though in theory they should not. Any ideas for
debugging/improving this situation are very welcome!
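
For reference, I am judging the per-thread usage with nothing more exotic
than the standard tools (thread naming may differ between OVS builds):

top -H -p $(pidof ovs-vswitchd)        # per-thread %us/%sy view of vswitchd
pidstat -t -p $(pidof ovs-vswitchd) 1  # same, sampled once per second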


* Re: [dpdk-dev] Packet drops during non-exhaustive flood with OVS and 1.8.0
       [not found]   ` <CABYiri_q4QqWrhTQL-UZ1wf3FLz-wj9SbtBJrDntz2Bw4cEPoQ@mail.gmail.com>
@ 2015-02-03 17:02     ` Traynor, Kevin
  2015-02-03 17:21       ` Andrey Korolyov
  0 siblings, 1 reply; 9+ messages in thread
From: Traynor, Kevin @ 2015-02-03 17:02 UTC (permalink / raw)
  To: Andrey Korolyov, dev; +Cc: discuss


> -----Original Message-----
> From: Andrey Korolyov [mailto:andrey@xdel.ru]
> Sent: Monday, February 2, 2015 10:53 AM
> To: dev@dpdk.org
> Cc: discuss@openvswitch.org; Traynor, Kevin
> Subject: Re: Packet drops during non-exhaustive flood with OVS and 1.8.0
> 
> On Thu, Jan 22, 2015 at 8:11 PM, Andrey Korolyov <andrey@xdel.ru> wrote:
> > On Wed, Jan 21, 2015 at 8:02 PM, Andrey Korolyov <andrey@xdel.ru> wrote:
> >> Hello,
> >>
> >> I observed that the latest OVS with dpdk-1.8.0 and igb_uio starts to
> >> drop packets earlier than a regular Linux ixgbe 10G interface, setup
> >> follows:
> >>
> >> receiver/forwarder:
> >> - 8 core/2 head system with E5-2603v2, cores 1-3 are given to OVS exclusively
> >> - n-dpdk-rxqs=6, rx scattering is not enabled
> >> - x520 da
> >> - 3.10/3.18 host kernel
> >> - during 'legacy mode' testing, queue interrupts are scattered through all cores
> >>
> >> sender:
> >> - 16-core E52630, netmap framework for packet generation
> >> - pkt-gen -f tx -i eth2 -s 10.6.9.0-10.6.9.255 -d
> >> 10.6.10.0-10.6.10.255 -S 90:e2:ba:84:19:a0 -D 90:e2:ba:85:06:07 -R
> >> 11000000; this produces an 11 Mpps flood of 60-byte packets, and the
> >> rate stays constant during the test.
> >>
> >> OVS contains only a single drop rule at the moment:
> >> ovs-ofctl add-flow br0 in_port=1,actions=DROP
> >>
> >> The packet generator was run for tens of seconds against both the
> >> Linux stack and the OVS+DPDK case. In the former, the interface showed
> >> a zero drop/error count, and the pktgen and host interface counters
> >> matched (meaning that every generated packet was accounted for).
> >>
> >> I selected a rate of about 11 Mpps because OVS starts to drop packets
> >> around this value; after the same short test the interface stats show
> >> the following:
> >>
> >> statistics          : {collisions=0, rx_bytes=22003928768,
> >> rx_crc_err=0, rx_dropped=0, rx_errors=10694693, rx_frame_err=0,
> >> rx_over_err=0, rx_packets=343811387, tx_bytes=0, tx_dropped=0,
> >> tx_errors=0, tx_packets=0}
> >>
> >> pktgen side:
> >> Sent 354506080 packets, 60 bytes each, in 32.23 seconds.
> >> Speed: 11.00 Mpps Bandwidth: 5.28 Gbps (raw 7.39 Gbps)
> >>
> >> If the rate is increased to 13-14 Mpps, the error-to-total ratio
> >> rises to about one third. Apart from this, OVS on DPDK shows excellent
> >> results, and I do not want to reject this solution because of
> >> behavior like the one described, so I am open to any suggestions to
> >> improve the situation (except using the 1.7 branch :) ).
> >
> > At a glance it looks like there is a problem with the pmd threads, as
> > they start to consume about five thousandths of sys% on their dedicated
> > cores during the flood, though in theory they should not. Any ideas for
> > debugging/improving this situation are very welcome!
> 
> In the time since my last message I tried a couple of different
> configurations, but packet loss starts to happen as early as
> 7-8 Mpps. It looks like the bulk processing that was present in the
> OVS-DPDK distro is missing from this series of patches
> (http://openvswitch.org/pipermail/dev/2014-December/049722.html,
> http://openvswitch.org/pipermail/dev/2014-December/049723.html).
> Before implementing this, I would like to know whether there are any
> obvious (not to me, unfortunately) clues about this performance issue.

These patches are to enable DPDK 1.8 only. What 'bulk processing' are you referring to? 
By default there is a batch size of 192 in netdev-dpdk for rx from the NIC - the linked 
patch doesn't change this, just the DPDK version.

The main things to consider are to use isolcpus, pin the pmd thread, and keep
everything on 1 NUMA socket. At 11 mpps without packet loss on that processor I
suspect you are doing those things already.
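
As a rough sketch of what I mean (core numbers and the PCI address below are
examples only, adjust them to your own layout and OVS build):

# keep the kernel scheduler off the OVS cores: add isolcpus=1-3 to the
# kernel command line and reboot
# pin the pmd thread(s) to those cores (mask 0xe = cores 1-3), if supported
ovs-vsctl set Open_vSwitch . other_config:pmd-cpu-mask=0xe
# check which NUMA node the NIC sits on and keep pmds/hugepages on that node
lspci -D | grep -i 82599
cat /sys/bus/pci/devices/0000:01:00.0/numa_node   # substitute your own address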

> 
> Thanks!


* Re: [dpdk-dev] Packet drops during non-exhaustive flood with OVS and 1.8.0
  2015-02-03 17:02     ` Traynor, Kevin
@ 2015-02-03 17:21       ` Andrey Korolyov
  2015-02-06 14:43         ` Andrey Korolyov
  2015-02-12 15:05         ` Traynor, Kevin
  0 siblings, 2 replies; 9+ messages in thread
From: Andrey Korolyov @ 2015-02-03 17:21 UTC (permalink / raw)
  To: Traynor, Kevin; +Cc: dev, discuss

> These patches are to enable DPDK 1.8 only. What 'bulk processing' are you referring to?
> By default there is a batch size of 192 in netdev-dpdk for rx from the NIC - the linked
> patch doesn't change this, just the DPDK version.

Sorry, I referred to the wrong part there: bulk transmission, which is
clearly not involved in my case. The idea was that conditionally
enabling prefetch for rx queues (BULK_ALLOC) might help somehow, but
it would probably mask the issue instead of solving it directly. By my
understanding, a strict drop rule should have zero impact on the main
ovs thread (and this is true) and should work just fine at line rate
(this is not).

>
> The main things to consider are to use isolcpus, pin the pmd thread, and keep
> everything on 1 NUMA socket. At 11 mpps without packet loss on that processor I
> suspect you are doing those things already.

Yes, with all the tuning improvements I was able to do this, but the
bare Linux stack on the same machine is able to handle 12 Mpps, and
there are absolutely no hints of what exactly is being congested.
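
For the Linux-stack comparison I am judging 'no loss' from the usual
interface counters before and after a run (the interface name here is
illustrative), roughly:

ip -s link show dev eth2                    # kernel-level rx/tx/drop counters
ethtool -S eth2 | grep -iE 'miss|drop|err'  # NIC-internal counters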


* Re: [dpdk-dev] Packet drops during non-exhaustive flood with OVS and 1.8.0
  2015-02-03 17:21       ` Andrey Korolyov
@ 2015-02-06 14:43         ` Andrey Korolyov
  2015-02-12 15:05         ` Traynor, Kevin
  1 sibling, 0 replies; 9+ messages in thread
From: Andrey Korolyov @ 2015-02-06 14:43 UTC (permalink / raw)
  To: Traynor, Kevin; +Cc: dev, discuss

On Tue, Feb 3, 2015 at 8:21 PM, Andrey Korolyov <andrey@xdel.ru> wrote:
>> These patches are to enable DPDK 1.8 only. What 'bulk processing' are you referring to?
>> By default there is a batch size of 192 in netdev-dpdk for rx from the NIC - the linked
>> patch doesn't change this, just the DPDK version.
>
> Sorry, I referred to the wrong part there: bulk transmission, which is
> clearly not involved in my case. The idea was that conditionally
> enabling prefetch for rx queues (BULK_ALLOC) might help somehow, but
> it would probably mask the issue instead of solving it directly. By my
> understanding, a strict drop rule should have zero impact on the main
> ovs thread (and this is true) and should work just fine at line rate
> (this is not).
>
>>
>> The main things to consider are to use isolcpus, pin the pmd thread, and keep
>> everything on 1 NUMA socket. At 11 mpps without packet loss on that processor I
>> suspect you are doing those things already.
>
> Yes, with all the tuning improvements I was able to do this, but the
> bare Linux stack on the same machine is able to handle 12 Mpps, and
> there are absolutely no hints of what exactly is being congested.

Also, both action=NORMAL and action=output:<non-dpdk port> manage flow
control in such a way that the generator side reaches line rate
(14.8 Mpps) with 60-byte packets, though a very high drop ratio
persists. With action=DROP or action=output:X, where X is another dpdk
port, flow control settles somewhere around 13 Mpps. Of course, using a
regular host interface or the NORMAL action generates a lot of context
switches, mainly from miniflow_extract() and emc_..(), but the
difference in syscall distribution between a congested (line rate
reached) and a non-congested link is unobservable.
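
For reference, this is how I am looking at where the cycles and context
switches go (plain perf on the vswitchd process, nothing OVS-specific):

perf top -p $(pidof ovs-vswitchd)   # live view of hot functions during the flood
perf stat -e context-switches -p $(pidof ovs-vswitchd) -- sleep 10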


* Re: [dpdk-dev] Packet drops during non-exhaustive flood with OVS and 1.8.0
  2015-02-03 17:21       ` Andrey Korolyov
  2015-02-06 14:43         ` Andrey Korolyov
@ 2015-02-12 15:05         ` Traynor, Kevin
  2015-02-12 15:15           ` Andrey Korolyov
  1 sibling, 1 reply; 9+ messages in thread
From: Traynor, Kevin @ 2015-02-12 15:05 UTC (permalink / raw)
  To: Andrey Korolyov; +Cc: dev, discuss

> -----Original Message-----
> From: Andrey Korolyov [mailto:andrey@xdel.ru]
> Sent: Tuesday, February 3, 2015 5:21 PM
> To: Traynor, Kevin
> Cc: dev@dpdk.org; discuss@openvswitch.org
> Subject: Re: Packet drops during non-exhaustive flood with OVS and 1.8.0
> 
> > These patches are to enable DPDK 1.8 only. What 'bulk processing' are you referring to?
> > By default there is a batch size of 192 in netdev-dpdk for rx from the NIC - the linked
> > patch doesn't change this, just the DPDK version.
> 
> Sorry, I referred to the wrong part there: bulk transmission, which is
> clearly not involved in my case. The idea was that conditionally
> enabling prefetch for rx queues (BULK_ALLOC) might help somehow, but
> it would probably mask the issue instead of solving it directly. By my
> understanding, a strict drop rule should have zero impact on the main
> ovs thread (and this is true) and should work just fine at line rate
> (this is not).

I've set a similar drop rule and I'm seeing the first packet drops occurring
at 13.9 mpps for 64 byte pkts. I'm not sure if there is a config that can be
changed or if it is just the cost of the emc/lookups.

> 
> >
> > The main things to consider are to use isolcpus, pin the pmd thread, and keep
> > everything on 1 NUMA socket. At 11 mpps without packet loss on that processor I
> > suspect you are doing those things already.
> 
> Yes, with all the tuning improvements I was able to do this, but the
> bare Linux stack on the same machine is able to handle 12 Mpps, and
> there are absolutely no hints of what exactly is being congested.


* Re: [dpdk-dev] Packet drops during non-exhaustive flood with OVS and 1.8.0
  2015-02-12 15:05         ` Traynor, Kevin
@ 2015-02-12 15:15           ` Andrey Korolyov
  2015-02-13 10:58             ` Traynor, Kevin
  0 siblings, 1 reply; 9+ messages in thread
From: Andrey Korolyov @ 2015-02-12 15:15 UTC (permalink / raw)
  To: Traynor, Kevin; +Cc: dev, discuss

On Thu, Feb 12, 2015 at 6:05 PM, Traynor, Kevin <kevin.traynor@intel.com> wrote:
>> -----Original Message-----
>> From: Andrey Korolyov [mailto:andrey@xdel.ru]
>> Sent: Tuesday, February 3, 2015 5:21 PM
>> To: Traynor, Kevin
>> Cc: dev@dpdk.org; discuss@openvswitch.org
>> Subject: Re: Packet drops during non-exhaustive flood with OVS and 1.8.0
>>
>> > These patches are to enable DPDK 1.8 only. What 'bulk processing' are you referring to?
>> > By default there is a batch size of 192 in netdev-dpdk for rx from the NIC - the linked
>> > patch doesn't change this, just the DPDK version.
>>
>> Sorry, I referred to the wrong part there: bulk transmission, which is
>> clearly not involved in my case. The idea was that conditionally
>> enabling prefetch for rx queues (BULK_ALLOC) might help somehow, but
>> it would probably mask the issue instead of solving it directly. By my
>> understanding, a strict drop rule should have zero impact on the main
>> ovs thread (and this is true) and should work just fine at line rate
>> (this is not).
>
> I've set a similar drop rule and I'm seeing the first packet drops occurring
> at 13.9 mpps for 64 byte pkts. I'm not sure if there is a config that can be
> changed or if it is just the cost of the emc/lookups.
>

Would you mind comparing this case with forwarding to a dummy port
(ifconfig dummy0; ovs-vsctl add-port br0 dummy0; ip link set dev
dummy0 up; flush the rule table; create a single forward rule; start
the flood)? As I mentioned, there are no signs of syscall congestion
for the drop or dpdk-to-dpdk forward case.
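
In concrete terms, something along these lines; the ofport number in the
forward rule is illustrative, check the real one with ovs-ofctl show:

modprobe dummy                   # make sure dummy0 exists
ip link set dev dummy0 up
ovs-vsctl add-port br0 dummy0
ovs-ofctl del-flows br0          # flush the rule table
ovs-ofctl show br0               # note the ofport assigned to dummy0, e.g. 2
ovs-ofctl add-flow br0 in_port=1,actions=output:2
# then start the same pkt-gen flood from the sender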


* Re: [dpdk-dev] Packet drops during non-exhaustive flood with OVS and 1.8.0
  2015-02-12 15:15           ` Andrey Korolyov
@ 2015-02-13 10:58             ` Traynor, Kevin
  2015-02-16 22:37               ` Andrey Korolyov
  0 siblings, 1 reply; 9+ messages in thread
From: Traynor, Kevin @ 2015-02-13 10:58 UTC (permalink / raw)
  To: Andrey Korolyov; +Cc: dev, discuss

> -----Original Message-----
> From: Andrey Korolyov [mailto:andrey@xdel.ru]
> Sent: Thursday, February 12, 2015 3:16 PM
> To: Traynor, Kevin
> Cc: dev@dpdk.org; discuss@openvswitch.org
> Subject: Re: Packet drops during non-exhaustive flood with OVS and 1.8.0
> 
> On Thu, Feb 12, 2015 at 6:05 PM, Traynor, Kevin <kevin.traynor@intel.com> wrote:
> >> -----Original Message-----
> >> From: Andrey Korolyov [mailto:andrey@xdel.ru]
> >> Sent: Tuesday, February 3, 2015 5:21 PM
> >> To: Traynor, Kevin
> >> Cc: dev@dpdk.org; discuss@openvswitch.org
> >> Subject: Re: Packet drops during non-exhaustive flood with OVS and 1.8.0
> >>
> >> > These patches are to enable DPDK 1.8 only. What 'bulk processing' are you referring to?
> >> > By default there is a batch size of 192 in netdev-dpdk for rx from the NIC - the linked
> >> > patch doesn't change this, just the DPDK version.
> >>
> >> Sorry, I referred to the wrong part there: bulk transmission, which is
> >> clearly not involved in my case. The idea was that conditionally
> >> enabling prefetch for rx queues (BULK_ALLOC) might help somehow, but
> >> it would probably mask the issue instead of solving it directly. By my
> >> understanding, a strict drop rule should have zero impact on the main
> >> ovs thread (and this is true) and should work just fine at line rate
> >> (this is not).
> >
> > I've set a similar drop rule and I'm seeing the first packet drops occurring
> > at 13.9 mpps for 64 byte pkts. I'm not sure if there is a config that can be
> > changed or if it is just the cost of the emc/lookups.
> >
> 
> Would you mind comparing this case with forwarding to a dummy port
> (ifconfig dummy0; ovs-vsctl add-port br0 dummy0; ip link set dev
> dummy0 up; flush the rule table; create a single forward rule; start
> the flood)? As I mentioned, there are no signs of syscall congestion
> for the drop or dpdk-to-dpdk forward case.

Assuming I've understood your setup, I get a very low rate (~1.1 mpps)
without packet loss, as I'm sending the packets from a dpdk port to a
socket for the dummy port.


* Re: [dpdk-dev] Packet drops during non-exhaustive flood with OVS and 1.8.0
  2015-02-13 10:58             ` Traynor, Kevin
@ 2015-02-16 22:37               ` Andrey Korolyov
  0 siblings, 0 replies; 9+ messages in thread
From: Andrey Korolyov @ 2015-02-16 22:37 UTC (permalink / raw)
  To: Traynor, Kevin; +Cc: dev, discuss

On Fri, Feb 13, 2015 at 1:58 PM, Traynor, Kevin <kevin.traynor@intel.com> wrote:
>> -----Original Message-----
>> From: Andrey Korolyov [mailto:andrey@xdel.ru]
>> Sent: Thursday, February 12, 2015 3:16 PM
>> To: Traynor, Kevin
>> Cc: dev@dpdk.org; discuss@openvswitch.org
>> Subject: Re: Packet drops during non-exhaustive flood with OVS and 1.8.0
>>
>> On Thu, Feb 12, 2015 at 6:05 PM, Traynor, Kevin <kevin.traynor@intel.com> wrote:
>> >> -----Original Message-----
>> >> From: Andrey Korolyov [mailto:andrey@xdel.ru]
>> >> Sent: Tuesday, February 3, 2015 5:21 PM
>> >> To: Traynor, Kevin
>> >> Cc: dev@dpdk.org; discuss@openvswitch.org
>> >> Subject: Re: Packet drops during non-exhaustive flood with OVS and 1.8.0
>> >>
>> >> > These patches are to enable DPDK 1.8 only. What 'bulk processing' are you referring to?
>> >> > By default there is a batch size of 192 in netdev-dpdk for rx from the NIC - the linked
>> >> > patch doesn't change this, just the DPDK version.
>> >>
>> >> Sorry, I referred to the wrong part there: bulk transmission, which is
>> >> clearly not involved in my case. The idea was that conditionally
>> >> enabling prefetch for rx queues (BULK_ALLOC) might help somehow, but
>> >> it would probably mask the issue instead of solving it directly. By my
>> >> understanding, a strict drop rule should have zero impact on the main
>> >> ovs thread (and this is true) and should work just fine at line rate
>> >> (this is not).
>> >
>> > I've set a similar drop rule and I'm seeing the first packet drops occurring
>> > at 13.9 mpps for 64 byte pkts. I'm not sure if there is a config that can be
>> > changed or if it is just the cost of the emc/lookups.
>> >
>>
>> Would you mind comparing this case with forwarding to a dummy port
>> (ifconfig dummy0; ovs-vsctl add-port br0 dummy0; ip link set dev
>> dummy0 up; flush the rule table; create a single forward rule; start
>> the flood)? As I mentioned, there are no signs of syscall congestion
>> for the drop or dpdk-to-dpdk forward case.
>
> Assuming I've understood your setup, I get a very low rate (~1.1 mpps)
> without packet loss, as I'm sending the packets from a dpdk port to a
> socket for the dummy port.

Yes, but on the other hand flow control from the dpdk port allows line
rate to come in, despite the actual loss during the transfer inside the
receiving instance. With drop/output to a dpdkY port the horizontal
asymptote under congestion is somewhat lower, and this is hard to
explain.


Thread overview: 9+ messages
2015-01-21 17:02 [dpdk-dev] Packet drops during non-exhaustive flood with OVS and 1.8.0 Andrey Korolyov
2015-01-22 17:11 ` Andrey Korolyov
     [not found]   ` <CABYiri_q4QqWrhTQL-UZ1wf3FLz-wj9SbtBJrDntz2Bw4cEPoQ@mail.gmail.com>
2015-02-03 17:02     ` Traynor, Kevin
2015-02-03 17:21       ` Andrey Korolyov
2015-02-06 14:43         ` Andrey Korolyov
2015-02-12 15:05         ` Traynor, Kevin
2015-02-12 15:15           ` Andrey Korolyov
2015-02-13 10:58             ` Traynor, Kevin
2015-02-16 22:37               ` Andrey Korolyov
