* [dpdk-dev] A question about (poor) rte_ethdev internal rx/tx callbacks design
@ 2017-11-11 17:18 Ilya Matveychikov
2017-11-12 2:43 ` Thomas Monjalon
2017-11-13 10:39 ` Adrien Mazarguil
0 siblings, 2 replies; 8+ messages in thread
From: Ilya Matveychikov @ 2017-11-11 17:18 UTC (permalink / raw)
To: dev
Folks,
Are you serious with it:
typedef uint16_t (*eth_rx_burst_t)(void *rxq,
struct rte_mbuf **rx_pkts,
uint16_t nb_pkts);
typedef uint16_t (*eth_tx_burst_t)(void *txq,
struct rte_mbuf **tx_pkts,
uint16_t nb_pkts);
I’m not surprised that every PMD stores port_id in each and every queue, as having just the queue as an argument doesn’t allow the driver to get the device. So the question is: why not use something like:
typedef uint16_t (*eth_rx_burst_t)(void *dev, uint16_t queue_id,
struct rte_mbuf **rx_pkts,
uint16_t nb_pkts);
typedef uint16_t (*eth_tx_burst_t)(void *dev, uint16_t queue_id,
struct rte_mbuf **tx_pkts,
uint16_t nb_pkts);
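Just to illustrate what I mean — a purely hypothetical sketch, not taken from any existing PMD (my_rxq, my_rx_burst and my_rx_burst_alt are made-up names, and the second variant assumes the dev argument is the struct rte_eth_dev):

#include <rte_mbuf.h>
#include <rte_ethdev.h>

/* Hypothetical PMD queue: with the current callbacks it must carry
 * its own identity, duplicated in every queue. */
struct my_rxq {
    void *ring;         /* descriptor ring used by the hot path */
    uint16_t port_id;   /* copy kept only because rxq is the sole argument */
    uint16_t queue_id;
};

/* Current signature: identity has to come from inside the queue. */
static uint16_t
my_rx_burst(void *rxq, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
{
    struct my_rxq *q = rxq;

    /* ... receive up to nb_pkts mbufs, setting rx_pkts[i]->port = q->port_id ... */
    return 0;
}

/* Proposed signature: identity comes from the arguments instead. */
static uint16_t
my_rx_burst_alt(void *dev, uint16_t queue_id,
                struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
{
    struct rte_eth_dev *d = dev;
    struct my_rxq *q = d->data->rx_queues[queue_id];

    /* ... q would no longer need its own port_id/queue_id copies ... */
    return 0;
}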
Ilya.
* Re: [dpdk-dev] A question about (poor) rte_ethdev internal rx/tx callbacks design
2017-11-11 17:18 [dpdk-dev] A question about (poor) rte_ethdev internal rx/tx callbacks design Ilya Matveychikov
@ 2017-11-12 2:43 ` Thomas Monjalon
2017-11-13 10:39 ` Adrien Mazarguil
1 sibling, 0 replies; 8+ messages in thread
From: Thomas Monjalon @ 2017-11-12 2:43 UTC (permalink / raw)
To: Ilya Matveychikov; +Cc: dev
11/11/2017 18:18, Ilya Matveychikov:
> Folks,
>
> Are you serious with it:
No, DPDK is not serious, just a hobby for a few crazy folks ;)
Usually, we have fun discussions when someone clearly explains an issue.
Please do not hesitate to share your insights,
and we will see how seriously we can improve.
* Re: [dpdk-dev] A question about (poor) rte_ethdev internal rx/tx callbacks design
2017-11-11 17:18 [dpdk-dev] A question about (poor) rte_ethdev internal rx/tx callbacks design Ilya Matveychikov
2017-11-12 2:43 ` Thomas Monjalon
@ 2017-11-13 10:39 ` Adrien Mazarguil
2017-11-13 10:56 ` Ilya Matveychikov
2017-11-13 10:58 ` Ananyev, Konstantin
1 sibling, 2 replies; 8+ messages in thread
From: Adrien Mazarguil @ 2017-11-13 10:39 UTC (permalink / raw)
To: Ilya Matveychikov; +Cc: dev
On Sat, Nov 11, 2017 at 09:18:45PM +0400, Ilya Matveychikov wrote:
> Folks,
>
> Are you serious with it:
>
> typedef uint16_t (*eth_rx_burst_t)(void *rxq,
> struct rte_mbuf **rx_pkts,
> uint16_t nb_pkts);
> typedef uint16_t (*eth_tx_burst_t)(void *txq,
> struct rte_mbuf **tx_pkts,
> uint16_t nb_pkts);
>
> I’m not surprised that every PMD stores port_id in each and every queue, as having just the queue as an argument doesn’t allow the driver to get the device. So the question is: why not use something like:
>
> typedef uint16_t (*eth_rx_burst_t)(void *dev, uint16_t queue_id,
> struct rte_mbuf **rx_pkts,
> uint16_t nb_pkts);
> typedef uint16_t (*eth_tx_burst_t)(void *dev, uint16_t queue_id,
> struct rte_mbuf **tx_pkts,
> uint16_t nb_pkts);
I assume it's since the rte_eth_[rt]x_burst() wrappers already pay the price
for that indirection, doing it twice would be redundant.
Basically the cost of storing a back-pointer to dev or a queue index in each
Rx/Tx queue structure is minor compared to saving a couple of CPU cycles
wherever we can.
I'm not saying it's the only solution nor the right approach; it's only one
possible explanation for this API.
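To make the redundancy point concrete, here is roughly what the public wrapper already does before the callback runs — a simplified sketch, not the literal rte_ethdev code (eth_rx_burst_simplified is a made-up name):

#include <rte_ethdev.h>

static inline uint16_t
eth_rx_burst_simplified(uint16_t port_id, uint16_t queue_id,
                        struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
{
    struct rte_eth_dev *dev = &rte_eth_devices[port_id];

    /* The queue pointer is resolved here, once, in the wrapper ... */
    return (*dev->rx_pkt_burst)(dev->data->rx_queues[queue_id],
                                rx_pkts, nb_pkts);
}

/* With a (dev, queue_id) callback the driver would have to repeat the
 * same dev->data->rx_queues[queue_id] lookup on every burst call. */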
--
Adrien Mazarguil
6WIND
* Re: [dpdk-dev] A question about (poor) rte_ethdev internal rx/tx callbacks design
2017-11-13 10:39 ` Adrien Mazarguil
@ 2017-11-13 10:56 ` Ilya Matveychikov
2017-11-13 17:15 ` Adrien Mazarguil
2017-11-13 10:58 ` Ananyev, Konstantin
1 sibling, 1 reply; 8+ messages in thread
From: Ilya Matveychikov @ 2017-11-13 10:56 UTC (permalink / raw)
To: Adrien Mazarguil; +Cc: dev
> On Nov 13, 2017, at 2:39 PM, Adrien Mazarguil <adrien.mazarguil@6wind.com> wrote:
>
> On Sat, Nov 11, 2017 at 09:18:45PM +0400, Ilya Matveychikov wrote:
>> Folks,
>>
>> Are you serious with it:
>>
>> typedef uint16_t (*eth_rx_burst_t)(void *rxq,
>> struct rte_mbuf **rx_pkts,
>> uint16_t nb_pkts);
>> typedef uint16_t (*eth_tx_burst_t)(void *txq,
>> struct rte_mbuf **tx_pkts,
>> uint16_t nb_pkts);
>>
>> I’m not surprised that every PMD stores port_id in each and every queue, as having just the queue as an argument doesn’t allow the driver to get the device. So the question is: why not use something like:
>>
>> typedef uint16_t (*eth_rx_burst_t)(void *dev, uint16_t queue_id,
>> struct rte_mbuf **rx_pkts,
>> uint16_t nb_pkts);
>> typedef uint16_t (*eth_tx_burst_t)(void *dev, uint16_t queue_id,
>> struct rte_mbuf **tx_pkts,
>> uint16_t nb_pkts);
>
> I assume it's since the rte_eth_[rt]x_burst() wrappers already pay the price
> for that indirection, doing it twice would be redundant.
No need to do it twice, agree. We can pass dev pointer as well as queue, not just the queue’s
index.
>
> Basically the cost of storing a back-pointer to dev or a queue index in each
> Rx/Tx queue structure is minor compared to saving a couple of CPU cycles
> wherever we can.
Not sure about that. More data to store means more cache space to occupy. Note that every queue has
at least 4 bytes more than it actually needs, and RTE_MAX_QUEUES_PER_PORT defaults
to 1024. So we may have 4k extra for each port....
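Just to show where that figure comes from (a rough upper bound, assuming a hypothetical 4 extra bytes per queue and the default config; EXTRA_BYTES_PER_PORT is a made-up name):

#include <rte_config.h>     /* RTE_MAX_QUEUES_PER_PORT, 1024 by default */

/* 4 bytes * 1024 queues = 4096 bytes (~4 KB) of duplicated data per
 * port, if every possible queue were actually allocated. */
enum { EXTRA_BYTES_PER_PORT = 4 * RTE_MAX_QUEUES_PER_PORT };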
>
> I'm not saying it's the only solution nor the right approach; it's only one
> possible explanation for this API.
Thank you for trying.
* Re: [dpdk-dev] A question about (poor) rte_ethdev internal rx/tx callbacks design
2017-11-13 10:39 ` Adrien Mazarguil
2017-11-13 10:56 ` Ilya Matveychikov
@ 2017-11-13 10:58 ` Ananyev, Konstantin
1 sibling, 0 replies; 8+ messages in thread
From: Ananyev, Konstantin @ 2017-11-13 10:58 UTC (permalink / raw)
To: Adrien Mazarguil, Ilya Matveychikov; +Cc: dev
> -----Original Message-----
> From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Adrien Mazarguil
> Sent: Monday, November 13, 2017 10:39 AM
> To: Ilya Matveychikov <matvejchikov@gmail.com>
> Cc: dev@dpdk.org
> Subject: Re: [dpdk-dev] A question about (poor) rte_ethdev internal rx/tx callbacks design
>
> On Sat, Nov 11, 2017 at 09:18:45PM +0400, Ilya Matveychikov wrote:
> > Folks,
> >
> > Are you serious with it:
> >
> > typedef uint16_t (*eth_rx_burst_t)(void *rxq,
> > struct rte_mbuf **rx_pkts,
> > uint16_t nb_pkts);
> > typedef uint16_t (*eth_tx_burst_t)(void *txq,
> > struct rte_mbuf **tx_pkts,
> > uint16_t nb_pkts);
> >
> > I’m not surprised that every PMD stores port_id in each and every queue, as having just the queue as an argument doesn’t allow the driver to get the
> > device. So the question is: why not use something like:
> >
> > typedef uint16_t (*eth_rx_burst_t)(void *dev, uint16_t queue_id,
> > struct rte_mbuf **rx_pkts,
> > uint16_t nb_pkts);
> > typedef uint16_t (*eth_tx_burst_t)(void *dev, uint16_t queue_id,
> > struct rte_mbuf **tx_pkts,
> > uint16_t nb_pkts);
>
> I assume it's since the rte_eth_[rt]x_burst() wrappers already pay the price
> for that indirection, doing it twice would be redundant.
>
> Basically the cost of storing a back-pointer to dev or a queue index in each
> Rx/Tx queue structure is minor compared to saving a couple of CPU cycles
> wherever we can.
For many PMDs we don't need that pointer at all to do RX/TX -
information stored inside the queue is sufficient.
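A purely hypothetical sketch, not based on any particular driver (pmd_rxq and pmd_rx_burst are made-up names), of what such a self-contained receive path can look like:

#include <rte_mbuf.h>
#include <rte_mempool.h>

struct pmd_rxq {                    /* hypothetical PMD queue */
    volatile uint64_t *ring;        /* descriptor status words */
    struct rte_mbuf **sw_ring;      /* mbufs backing each descriptor */
    struct rte_mempool *mp;         /* pool used for refills */
    uint16_t tail;
    uint16_t nb_desc;               /* power of two */
};

static uint16_t
pmd_rx_burst(void *rxq, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
{
    struct pmd_rxq *q = rxq;
    uint16_t nb_rx = 0;

    /* Everything the hot loop touches lives in the queue itself;
     * no rte_eth_dev, port_id or queue_id is needed here. */
    while (nb_rx < nb_pkts && (q->ring[q->tail] & 1)) { /* e.g. a done bit */
        rx_pkts[nb_rx++] = q->sw_ring[q->tail];
        /* ... allocate a replacement mbuf from q->mp and rearm
         * q->ring[q->tail] (elided) ... */
        q->tail = (q->tail + 1) & (q->nb_desc - 1);
    }
    return nb_rx;
}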
Konstantin
>
> I'm not saying it's the only solution nor the right approach; it's only one
> possible explanation for this API.
>
> --
> Adrien Mazarguil
> 6WIND
* Re: [dpdk-dev] A question about (poor) rte_ethdev internal rx/tx callbacks design
2017-11-13 10:56 ` Ilya Matveychikov
@ 2017-11-13 17:15 ` Adrien Mazarguil
2017-11-13 19:33 ` Ilya Matveychikov
0 siblings, 1 reply; 8+ messages in thread
From: Adrien Mazarguil @ 2017-11-13 17:15 UTC (permalink / raw)
To: Ilya Matveychikov; +Cc: dev
On Mon, Nov 13, 2017 at 02:56:23PM +0400, Ilya Matveychikov wrote:
>
> > On Nov 13, 2017, at 2:39 PM, Adrien Mazarguil <adrien.mazarguil@6wind.com> wrote:
> >
> > On Sat, Nov 11, 2017 at 09:18:45PM +0400, Ilya Matveychikov wrote:
> >> Folks,
> >>
> >> Are you serious with it:
> >>
> >> typedef uint16_t (*eth_rx_burst_t)(void *rxq,
> >> struct rte_mbuf **rx_pkts,
> >> uint16_t nb_pkts);
> >> typedef uint16_t (*eth_tx_burst_t)(void *txq,
> >> struct rte_mbuf **tx_pkts,
> >> uint16_t nb_pkts);
> >>
> >> I’m not surprised that every PMD stores port_id in each and every queue, as having just the queue as an argument doesn’t allow the driver to get the device. So the question is: why not use something like:
> >>
> >> typedef uint16_t (*eth_rx_burst_t)(void *dev, uint16_t queue_id,
> >> struct rte_mbuf **rx_pkts,
> >> uint16_t nb_pkts);
> >> typedef uint16_t (*eth_tx_burst_t)(void *dev, uint16_t queue_id,
> >> struct rte_mbuf **tx_pkts,
> >> uint16_t nb_pkts);
> >
> > I assume it's since the rte_eth_[rt]x_burst() wrappers already pay the price
> > for that indirection, doing it twice would be redundant.
>
> No need to do it twice, agree. We can pass dev pointer as well as queue, not just the queue’s
> index.
>
> >
> > Basically the cost of storing a back-pointer to dev or a queue index in each
> > Rx/Tx queue structure is minor compared to saving a couple of CPU cycles
> > wherever we can.
>
> Not sure about that. More data to store means more cache space to occupy. Note that every queue has
> at least 4 bytes more than it actually needs, and RTE_MAX_QUEUES_PER_PORT defaults
> to 1024. So we may have 4k extra for each port....
Note that queues are only allocated if requested by the application; there's
really not much overhead involved.
Also to echo Konstantin's reply and clarify mine, PMDs normally do not
access this structure from their data plane. This pointer, if needed, is
normally stored away from hot regions accessed during TX/RX, usually at the
end of TX/RX structures and only for the convenience of management
operations. It therefore has no measurable impact on the CPU cache.
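A hypothetical layout, not copied from any driver (pmd_txq is a made-up name), of what is meant by keeping the back-pointer out of the hot region:

#include <rte_ethdev.h>
#include <rte_memory.h>     /* __rte_cache_aligned */

struct pmd_txq {
    /* --- hot: fields read/written on every tx_burst() call --- */
    volatile uint64_t *ring;
    struct rte_mbuf **sw_ring;
    uint16_t head;
    uint16_t tail;
    uint16_t nb_desc;
    /* --- cold: management/setup only, never touched in the loop --- */
    struct rte_eth_dev *dev;    /* back-pointer kept for convenience */
    uint16_t port_id;
    uint16_t queue_id;
} __rte_cache_aligned;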
--
Adrien Mazarguil
6WIND
* Re: [dpdk-dev] A question about (poor) rte_ethdev internal rx/tx callbacks design
2017-11-13 17:15 ` Adrien Mazarguil
@ 2017-11-13 19:33 ` Ilya Matveychikov
2017-11-14 6:24 ` Andrew Rybchenko
0 siblings, 1 reply; 8+ messages in thread
From: Ilya Matveychikov @ 2017-11-13 19:33 UTC (permalink / raw)
To: Adrien Mazarguil; +Cc: dev
> On Nov 13, 2017, at 9:15 PM, Adrien Mazarguil <adrien.mazarguil@6wind.com> wrote:
>
> On Mon, Nov 13, 2017 at 02:56:23PM +0400, Ilya Matveychikov wrote:
>>
>>> On Nov 13, 2017, at 2:39 PM, Adrien Mazarguil <adrien.mazarguil@6wind.com> wrote:
>>>
>>> On Sat, Nov 11, 2017 at 09:18:45PM +0400, Ilya Matveychikov wrote:
>>>> Folks,
>>>>
>>>> Are you serious with it:
>>>>
>>>> typedef uint16_t (*eth_rx_burst_t)(void *rxq,
>>>> struct rte_mbuf **rx_pkts,
>>>> uint16_t nb_pkts);
>>>> typedef uint16_t (*eth_tx_burst_t)(void *txq,
>>>> struct rte_mbuf **tx_pkts,
>>>> uint16_t nb_pkts);
>>>>
>>>> I’m not surprised that every PMD stores port_id in each and every queue, as having just the queue as an argument doesn’t allow the driver to get the device. So the question is: why not use something like:
>>>>
>>>> typedef uint16_t (*eth_rx_burst_t)(void *dev, uint16_t queue_id,
>>>> struct rte_mbuf **rx_pkts,
>>>> uint16_t nb_pkts);
>>>> typedef uint16_t (*eth_tx_burst_t)(void *dev, uint16_t queue_id,
>>>> struct rte_mbuf **tx_pkts,
>>>> uint16_t nb_pkts);
>>>
>>> I assume it's since the rte_eth_[rt]x_burst() wrappers already pay the price
>>> for that indirection, doing it twice would be redundant.
>>
>> No need to do it twice, agree. We can pass dev pointer as well as queue, not just the queue’s
>> index.
>>
>>>
>>> Basically the cost of storing a back-pointer to dev or a queue index in each
>>> Rx/Tx queue structure is minor compared to saving a couple of CPU cycles
>>> wherever we can.
>>
>> Not sure about that. More data to store means more cache space to occupy. Note that every queue has
>> at least 4 bytes more than it actually needs, and RTE_MAX_QUEUES_PER_PORT defaults
>> to 1024. So we may have 4k extra for each port....
>
> Note that queues are only allocated if requested by the application; there's
> really not much overhead involved.
Yeah, mostly you are right here.
>
> Also to echo Konstantin's reply and clarify mine, PMDs normally do not
> access this structure from their data plane. This pointer, if needed, is
> normally stored away from hot regions accessed during TX/RX, usually at the
> end of TX/RX structures and only for the convenience of management
> operations. It therefore has no measurable impact on the CPU cache.
>
I did some research on how drivers implement rx/tx queues and now I want to share the information
and some thoughts about it (see the details at the end):
1) All drivers have tx/rx queues defined as structures.
2) The current design implies that it’s enough to pass the opaque rx/tx queue to the driver and, frankly
speaking, it is. But..
3) Most drivers want to get not only the queue’s pointer but at least queue_id and port_id, and
most of them also want a pointer to the device’s internal data.
The way each driver solves issue (3) is data duplication. In other words, every queue ends up having
that information (queue_id, port_id and a dev_priv pointer) inside.
My question was and still is about this design. I’m not sure it’s the best way to do it, keeping in
mind that queue_id could be calculated using pointer difference and port_id could be stored just
once per device. But that would require changing the internal interface, sure.
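For example — hypothetical code (pmd_rxq, pmd_priv and PMD_MAX_RXQ are made-up names), assuming a driver kept all of a port's queues in one contiguous array inside its private data, which is not how rte_ethdev stores them today:

#include <stdint.h>

#define PMD_MAX_RXQ 64                  /* hypothetical per-driver limit */

struct pmd_rxq {
    void *ring;                         /* ... hot-path fields ... */
};

struct pmd_priv {
    uint16_t port_id;                   /* stored once per device */
    struct pmd_rxq rxq[PMD_MAX_RXQ];    /* queues as a contiguous array */
};

static inline uint16_t
pmd_rxq_index(const struct pmd_priv *priv, const struct pmd_rxq *q)
{
    /* queue_id recovered by pointer difference instead of being
     * duplicated in every queue */
    return (uint16_t)(q - priv->rxq);
}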
And as I promised here is the result of the research on rx/tx queues:
drivers/net/af_packet:
struct pkt_rx_queue { in_port }
drivers/net/ark:
struct ark_rx_queue { queue_index }
struct ark_tx_queue { queue_index }
drivers/net/avp:
struct avp_queue { avp*, dev_data*, queue_id }
drivers/net/bnx2x:
struct bnx2x_rx_queue { queue_id, port_id, sc* }
struct bnx2x_tx_queue { queue_id, port_id, sc* }
drivers/net/bnxt:
struct bnxt_rx_queue { queue_id, port_id, *bp } (rx_tpa is unused)
struct bnxt_tx_queue { queue_id, port_id, *bp }
drivers/net/bonding:
struct bond_rx_queue { queue_id, dev_private* }
struct bond_tx_queue { queue_id, dev_private* }
drivers/net/cxgbe:
struct sge_eth_rxq { sge_rspq { adapter*, eth_dev*, port_id, idx } }
struct sge_eth_txq { eth_dev* }
drivers/net/dpaa:
struct qman_fq { dpaa_intf* }
drivers/net/dpaa2:
struct dpaa2_queue { dev* }
drivers/net/e1000:
struct em_rx_queue { queue_id, port_id }
struct em_tx_queue { queue_id, port_id }
drivers/net/ena:
struct ena_ring { port_id, id, adapter* }
drivers/net/enic:
struct vnic_wq { index, *vdev }
drivers/net/failsafe:
struct rxq { priv*, qid }
struct txq { priv*, qid }
drivers/net/fm10k:
struct fm10k_rx_queue { queue_id, port_id }
struct fm10k_tx_queue { queue_id, port_id }
drivers/net/i40e:
struct i40e_rx_queue { queue_id, port_id, vsi* }
struct i40e_tx_queue { queue_id, port_id, vsi* }
drivers/net/ixgbe:
struct ixgbe_rx_queue { queue_id, port_id }
struct ixgbe_tx_queue { queue_id, port_id }
drivers/net/kni:
struct pmd_queue { internals* }
drivers/net/liquidio:
struct lio_droq { q_no, lio_dev* }
struct lio_instrqueue { lio_dev*, q_index }
drivers/net/mlx4:
struct rxq { priv*, port_id }
struct txq { priv* }
drivers/net/mlx5:
struct mlx5_rxq_data { port_id }
drivers/net/mrvl:
struct mrvl_rxq { queue_id, port_id, priv* }
struct mrvl_txq { queue_id, port_id, priv* }
drivers/net/nfp:
struct nfp_net_rxq { hw*, port_id, qidx }
struct nfp_net_txq { hw*, port_id, qidx }
drivers/net/null:
struct null_queue { internals* }
drivers/net/octeontx:
struct octeontx_rxq { queue_id, port_id, eth_dev* }
struct octeontx_txq { queue_id, eth_dev* }
drivers/net/pcap:
struct pcap_rx_queue { in_port }
drivers/net/qede:
struct qede_rx_queue { queue_id, port_id, *qdev }
struct qede_tx_queue { queue_id, port_id, *qdev }
drivers/net/sfc:
struct sfc_ef10_rxq { dp { queue_id, port_id } }
struct sfc_ef10_txq { dp { queue_id, port_id } }
drivers/net/softnic:
struct pmd_rx_queue { port_id, queue_id }
drivers/net/szedata2:
struct szedata2_rx_queue { rx_channel, port_id }
struct szedata2_tx_queue { tx_channel }
drivers/net/tap:
struct rx_queue { port_id }
drivers/net/thunderx:
struct nicvf_rxq { nic*, queue_id, port_id }
struct nicvf_txq { nic*, queue_id }
drivers/net/vhost:
struct vhost_queue { vid, internal* }
drivers/net/virtio:
struct virtqueue { hw* }
struct virtnet_rx { queue_id, port_id }
struct virtnet_tx { queue_id, port_id }
drivers/net/vmxnet3:
struct vmxnet3_rx_queue { hw*, queue_id, port_id }
struct vmxnet3_tx_queue { hw*, queue_id, port_id }
* Re: [dpdk-dev] A question about (poor) rte_ethdev internal rx/tx callbacks design
2017-11-13 19:33 ` Ilya Matveychikov
@ 2017-11-14 6:24 ` Andrew Rybchenko
0 siblings, 0 replies; 8+ messages in thread
From: Andrew Rybchenko @ 2017-11-14 6:24 UTC (permalink / raw)
To: Ilya Matveychikov, Adrien Mazarguil; +Cc: dev
On 11/13/2017 10:33 PM, Ilya Matveychikov wrote:
>> On Nov 13, 2017, at 9:15 PM, Adrien Mazarguil <adrien.mazarguil@6wind.com> wrote:
>>
>> On Mon, Nov 13, 2017 at 02:56:23PM +0400, Ilya Matveychikov wrote:
>>>> On Nov 13, 2017, at 2:39 PM, Adrien Mazarguil <adrien.mazarguil@6wind.com> wrote:
>>>>
>>>> On Sat, Nov 11, 2017 at 09:18:45PM +0400, Ilya Matveychikov wrote:
>>>>> Folks,
>>>>>
>>>>> Are you serious with it:
>>>>>
>>>>> typedef uint16_t (*eth_rx_burst_t)(void *rxq,
>>>>> struct rte_mbuf **rx_pkts,
>>>>> uint16_t nb_pkts);
>>>>> typedef uint16_t (*eth_tx_burst_t)(void *txq,
>>>>> struct rte_mbuf **tx_pkts,
>>>>> uint16_t nb_pkts);
>>>>>
>>>>> I’m not surprised that every PMD stores port_id in each and every queue, as having just the queue as an argument doesn’t allow the driver to get the device. So the question is: why not use something like:
>>>>>
>>>>> typedef uint16_t (*eth_rx_burst_t)(void *dev, uint16_t queue_id,
>>>>> struct rte_mbuf **rx_pkts,
>>>>> uint16_t nb_pkts);
>>>>> typedef uint16_t (*eth_tx_burst_t)(void *dev, uint16_t queue_id,
>>>>> struct rte_mbuf **tx_pkts,
>>>>> uint16_t nb_pkts);
>>>> I assume it's since the rte_eth_[rt]x_burst() wrappers already pay the price
>>>> for that indirection, doing it twice would be redundant.
>>> No need to do it twice, agree. We can pass dev pointer as well as queue, not just the queue’s
>>> index.
>>>
>>>> Basically the cost of storing a back-pointer to dev or a queue index in each
>>>> Rx/Tx queue structure is minor compared to saving a couple of CPU cycles
>>>> wherever we can.
>>> Not sure about that. More data to store means more cache space to occupy. Note that every queue has
>>> at least 4 bytes more than it actually needs, and RTE_MAX_QUEUES_PER_PORT defaults
>>> to 1024. So we may have 4k extra for each port....
>> Note that queues are only allocated if requested by the application; there's
>> really not much overhead involved.
> Yeah, mostly you are right here.
>
>> Also to echo Konstantin's reply and clarify mine, PMDs normally do not
>> access this structure from their data plane. This pointer, if needed, is
>> normally stored away from hot regions accessed during TX/RX, usually at the
>> end of TX/RX structures and only for the convenience of management
>> operations. It therefore has no measurable impact on the CPU cache.
>>
> I did some research on how drivers implement rx/tx queues and now I want to share the information
> and some thoughts about it (see the details at the end):
>
> 1) All drivers have tx/rx queues defined as structures.
> 2) The current design implies that it’s enough to pass the opaque rx/tx queue to the driver and, frankly
> speaking, it is. But..
> 3) Most drivers want to get not only the queue’s pointer but at least queue_id and port_id, and
> most of them also want a pointer to the device’s internal data.
>
> The way each driver solves issue (3) is data duplication. In other words, every queue ends up having
> that information (queue_id, port_id and a dev_priv pointer) inside.
>
> My question was and still is about this design. I’m not sure it’s the best way to do it, keeping in
> mind that queue_id could be calculated using pointer difference and port_id could be stored just
> once per device. But that would require changing the internal interface, sure.
>
> And as I promised here is the result of the research on rx/tx queues:
<...>
> drivers/net/sfc:
> struct sfc_ef10_rxq { dp { queue_id, port_id } }
> struct sfc_ef10_txq { dp { queue_id, port_id } }
Which are not used on the data/fast path (as Adrien stated above). So, it
does not affect cache usage efficiency, etc.