DPDK patches and discussions
From: "Charles (Chas) Williams" <ciwillia@brocade.com>
To: Jan Blunck <jblunck@infradead.org>
Cc: dev <dev@dpdk.org>, <yongwang@vmware.com>
Subject: Re: [dpdk-dev] [PATCH] net/vmxnet3: fix queue size changes
Date: Wed, 15 Mar 2017 06:06:15 -0400	[thread overview]
Message-ID: <00ca5e74-d3ef-ac13-d6b7-aaabcbb4c6fd@brocade.com> (raw)
In-Reply-To: <CALe+Z02y_1BDJYO7Hxz+FmC1Y+_WXcEYKbVWOwCKcJ2c9dFW+Q@mail.gmail.com>



On 03/15/2017 06:05 AM, Jan Blunck wrote:
> On Wed, Mar 15, 2017 at 10:45 AM, Charles (Chas) Williams
> <ciwillia@brocade.com> wrote:
>>
>>
>> On 03/15/2017 04:18 AM, Jan Blunck wrote:
>>>
>>> On Tue, Mar 14, 2017 at 5:38 PM, Charles (Chas) Williams
>>> <ciwillia@brocade.com> wrote:
>>>>
>>>>
>>>>
>>>> On 03/14/2017 12:11 PM, Jan Blunck wrote:
>>>>>
>>>>>
>>>>> On Mon, Mar 13, 2017 at 11:41 PM, Charles (Chas) Williams
>>>>> <ciwillia@brocade.com> wrote:
>>>>>>
>>>>>>
>>>>>> If the user reconfigures the queue size, then the previously allocated
>>>>>> memzone may potentially be too small.  Instead, always free the old
>>>>>> memzone and allocate a new one.
>>>>>>
>>>>>> Fixes: dfaff37fc46d ("vmxnet3: import new vmxnet3 poll mode driver implementation")
>>>>>>
>>>>>> Signed-off-by: Chas Williams <ciwillia@brocade.com>
>>>>>> ---
>>>>>>  drivers/net/vmxnet3/vmxnet3_rxtx.c | 6 +++---
>>>>>>  1 file changed, 3 insertions(+), 3 deletions(-)
>>>>>>
>>>>>> diff --git a/drivers/net/vmxnet3/vmxnet3_rxtx.c b/drivers/net/vmxnet3/vmxnet3_rxtx.c
>>>>>> index 6649c3f..104e040 100644
>>>>>> --- a/drivers/net/vmxnet3/vmxnet3_rxtx.c
>>>>>> +++ b/drivers/net/vmxnet3/vmxnet3_rxtx.c
>>>>>> @@ -893,8 +893,8 @@ vmxnet3_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
>>>>>>
>>>>>>  /*
>>>>>>   * Create memzone for device rings. malloc can't be used as the physical address is
>>>>>> - * needed. If the memzone is already created, then this function returns a ptr
>>>>>> - * to the old one.
>>>>>> + * needed. If the memzone already exists, we free it since it may have been created
>>>>>> + * with a different size.
>>>>>>   */
>>>>>>  static const struct rte_memzone *
>>>>>>  ring_dma_zone_reserve(struct rte_eth_dev *dev, const char *ring_name,
>>>>>> @@ -909,7 +909,7 @@ ring_dma_zone_reserve(struct rte_eth_dev *dev, const char *ring_name,
>>>>>>
>>>>>>         mz = rte_memzone_lookup(z_name);
>>>>>>         if (mz)
>>>>>> -               return mz;
>>>>>> +               rte_memzone_free(mz);
>>>>>>
>>>>>>         return rte_memzone_reserve_aligned(z_name, ring_size,
>>>>>>                                            socket_id, 0, VMXNET3_RING_BA_ALIGN);
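
For context, here is roughly how ring_dma_zone_reserve() reads once the patch
is applied. Treat it as a sketch rather than the exact driver source: the
memzone name construction and the parameters beyond those visible in the hunk
header are filled in as assumptions, while the lookup/free/reserve logic
matches the diff above.

    static const struct rte_memzone *
    ring_dma_zone_reserve(struct rte_eth_dev *dev, const char *ring_name,
                          uint16_t queue_id, uint32_t ring_size, int socket_id)
    {
            char z_name[RTE_MEMZONE_NAMESIZE];
            const struct rte_memzone *mz;

            /* Name format abbreviated here (assumed); the driver builds it
             * from the driver name, ring name, port id and queue id. */
            snprintf(z_name, sizeof(z_name), "vmxnet3_%s_%u_%u",
                     ring_name, dev->data->port_id, queue_id);

            /* A zone left over from an earlier, possibly smaller, configuration
             * is freed so the new reservation matches the current ring size. */
            mz = rte_memzone_lookup(z_name);
            if (mz)
                    rte_memzone_free(mz);

            return rte_memzone_reserve_aligned(z_name, ring_size,
                                               socket_id, 0,
                                               VMXNET3_RING_BA_ALIGN);
    }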
>>>>>
>>>>>
>>>>>
>>>>> Chas,
>>>>>
>>>>> Thanks for hunting this one down. Wouldn't the rte_memzone_free()
>>>>> better fit into vmxnet3_cmd_ring_release() ?
>>>>
>>>>
>>>>
>>>> I don't care which way it goes.  I just did what is basically done in
>>>> gpa_zone_reserve() to match the "style".  Tracking the current ring size
>>>> and avoiding reallocating a potentially large chunk of memory seems like
>>>> a better idea.
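
As a rough sketch of Jan's alternative, the free could live in the release
path instead, assuming the ring structure gained a pointer to its backing
memzone; that "mz" member is a hypothetical addition, not an existing field,
and the driver's existing mbuf/buf_info cleanup is elided here.

    static void
    vmxnet3_cmd_ring_release(vmxnet3_cmd_ring_t *ring)
    {
            /* ... existing mbuf and buf_info cleanup elided ... */

            /* "mz" is an assumed addition to vmxnet3_cmd_ring_t that would
             * record the memzone reserved for this ring at setup time. */
            if (ring->mz != NULL) {
                    rte_memzone_free(ring->mz);
                    ring->mz = NULL;
            }
    }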
>>>>
>>>>> Also the ring_dma_zone_reserve() could get replaced by
>>>>> rte_eth_dma_zone_reserve() (see also
>>>>
>>>>
>>>>
>>>> Yes, it probably should get changed to that along with tracking the size.
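
A sketch of that replacement as it might appear in the queue-setup path. The
rte_eth_dma_zone_reserve() parameter order (device, ring name, queue index,
size, alignment, socket) is taken from rte_ethdev.h of that period and the
surrounding variable names are assumed from the setup function.

            const struct rte_memzone *mz;

            /* Reserve the descriptor ring through the common ethdev helper
             * instead of the driver-local ring_dma_zone_reserve().  Note the
             * helper also returns an existing zone with a matching name, so
             * the size-change problem still needs handling by the caller,
             * e.g. by tracking the configured ring size. */
            mz = rte_eth_dma_zone_reserve(dev, "tx_ring", queue_idx, ring_size,
                                          VMXNET3_RING_BA_ALIGN, socket_id);
            if (mz == NULL)
                    return -ENOMEM;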
>>>
>>>
>>> Why don't we always allocate VMXNET3_RX_RING_MAX_SIZE entries? That
>>> way we don't need to reallocate on a later queue setup change?
>>
>>
>> That uses more memory than necessary, and it might break someone's
>> application that already runs on a tightly constrained machine.  Failing
>> to shrink the memzone isn't likely to break anything, since such an
>> application has (apparently) already coped with having less memory
>> available before switching to a smaller queue size.
>>
>> Still, that can be a matter for another day.
>>
>
> Other drivers (ixgbe, e1000, ...) always allocate based on the max
> ring size too. Since VMXNET3_RX_RING_MAX_SIZE is 4096, I don't think
> it is a huge waste of memory.

OK.  BTW, the RX queues have the same issue.
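
Sketched out, the always-allocate-the-maximum approach discussed above could
look like this for an RX queue. The descriptor type and the max-size macro are
from the vmxnet3 headers, but the sizing is deliberately simplified: the real
driver backs two command rings plus a completion ring per RX queue.

            /* Size the zone for the largest supported ring so a later queue
             * setup with a different nb_desc can reuse the same memzone. */
            size_t size = sizeof(struct Vmxnet3_RxDesc) * VMXNET3_RX_RING_MAX_SIZE;

            mz = rte_eth_dma_zone_reserve(dev, "rx_ring", queue_idx, size,
                                          VMXNET3_RING_BA_ALIGN, socket_id);
            if (mz == NULL)
                    return -ENOMEM;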


Thread overview: 13+ messages
2017-03-13 22:41 Charles (Chas) Williams
2017-03-14 16:11 ` Jan Blunck
2017-03-14 16:38   ` Charles (Chas) Williams
2017-03-15  8:18     ` Jan Blunck
2017-03-15  9:45       ` Charles (Chas) Williams
2017-03-15 10:05         ` Jan Blunck
2017-03-15 10:06           ` Charles (Chas) Williams [this message]
2017-03-15 12:34           ` Charles (Chas) Williams
2017-03-15 12:35 ` Charles (Chas) Williams
2017-03-15 17:57   ` Yong Wang
2017-03-15 18:30     ` Shrikrishna Khare
2017-03-15 18:19   ` Jan Blunck
2017-03-16 11:38     ` Ferruh Yigit
