DPDK patches and discussions
From: Dmitry Kozlyuk <dmitry.kozliuk@gmail.com>
To: "Li, Ming3" <ming3.li@intel.com>
Cc: "dev@dpdk.org" <dev@dpdk.org>,
	Tyler Retzlaff <roretzla@linux.microsoft.com>
Subject: Re: [PATCH v2] windows/virt2phys: fix block MDL not updated
Date: Tue, 12 Sep 2023 15:18:10 +0300	[thread overview]
Message-ID: <20230912151810.2962113c@sovereign> (raw)
In-Reply-To: <SJ0PR11MB58144B9E5CF684570A75EE30DAF1A@SJ0PR11MB5814.namprd11.prod.outlook.com>

2023-09-12 11:13 (UTC+0000), Li, Ming3:
> > Are any of these bugs related?
> > If so, please mention "Bugzilla ID: xxxx" in the commit message.
> > 
> > https://bugs.dpdk.org/show_bug.cgi?id=1201
> > https://bugs.dpdk.org/show_bug.cgi?id=1213
> >   
> 
> Sure, will do.
> 
> I cannot reproduce them in my environment, but from the messages,
> they both mention that some pages were not unlocked after exit, so they could be related.
> 
> For example, Bug 1201 only occurs on Windows 2019; could it be caused by an
> OS limitation, so that some memory segment gets freed and the same virtual address is allocated again?
> Maybe someone can use this patch to check for 'refresh' behavior in the TraceView logs.

I've posted a comment in BZ 1201 (both bugs are from the same user)
inviting them to test your patch; let's see.

[...]
> > > To address this, a refresh function has been added. If a block with
> > > the same base address is detected in the driver's context, the MDL's
> > > physical address is compared with the real physical address.
> > > If they don't match, the MDL within the block is released and rebuilt
> > > to store the correct mapping.  
> > 
> > What if the size is different?
> > Should it be updated for the refreshed block along with the MDL?
> >   
> 
> The size of a single MDL is always 2 MB here, since it describes a hugepage.
> (at least from my observation :))

Your observation is correct, DPDK memalloc layer currently works this way.

> For an allocated buffer larger than 2 MB, there are
> several memory segments (and thus several MDLs); the buggy segments are mostly
> those that occupy a whole hugepage. These segments are freed along with the buffer,
> so their MDLs become invalid.
> 
> Since a block is just a wrapper around an MDL and a list entry,
> the refresh action should be applied to the whole block.

There is always a single MDL per block, but it can describe multiple pages
(in general, if used beyond DPDK). Suppose there was a block for one page.
Then this page has been deallocated and allocated again, but this time
in the middle of a multi-page region.
With your patch this will work, but that one-page block will simply be lost
(never found, because its MDL base VA does not match the region start VA).
The downside is that its memory remains locked.
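The orphaned-block case can be modeled in a few lines of user-mode C (the names, addresses, and sizes below are purely illustrative; the real driver walks a non-paged list and compares the MDL's base VA against the translation request):

```c
#include <stdint.h>
#include <stddef.h>

/* Model of the driver's lookup: a block is found only by exact base-VA match.
 * All values here are hypothetical. */
struct blk {
	uintptr_t base;	/* region start VA */
	size_t size;	/* region size in bytes */
};

static const struct blk *
find_block(const struct blk *blocks, size_t n, uintptr_t base)
{
	for (size_t i = 0; i < n; i++)
		if (blocks[i].base == base)
			return &blocks[i];
	return NULL;
}
```

If a block was created for a single page at 0x400000 and that page is later reused inside a 4 MB region starting at 0x200000, lookups will come in with base 0x200000 and never hit the stale block, which therefore keeps its pages locked forever.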

A solution could be to check, when inserting a new block,
whether there are existing blocks covered by the new one,
and if so, to free those blocks, since they correspond to deallocated regions.
I think this can be done in another patch, to limit the scope of this one.
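A user-mode sketch of that cleanup (hypothetical names; the real driver would walk its non-paged block list, unlock the pages, and free the MDL rather than plain heap nodes):

```c
#include <stdlib.h>
#include <stdint.h>
#include <stddef.h>

/* Hypothetical model of the driver's block list: one node per locked region. */
struct block {
	uintptr_t base;		/* region start VA */
	size_t size;		/* region size in bytes */
	struct block *next;
};

/* Before inserting a block for [base, base+size), drop every existing block
 * whose whole range is covered by the new one: such blocks can only describe
 * regions that were deallocated and reused. */
static void
free_covered_blocks(struct block **head, uintptr_t base, size_t size)
{
	struct block **pp = head;

	while (*pp != NULL) {
		struct block *b = *pp;

		if (b->base >= base && b->base + b->size <= base + size) {
			*pp = b->next;	/* unlink */
			free(b);	/* real driver: unlock pages, free MDL */
		} else {
			pp = &b->next;
		}
	}
}
```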

Ideally virt2phys should not be doing this guesswork at all.
DPDK can just tell it when pages are allocated and freed,
but this requires some rework of the userspace part.
Just thinking out loud.

[...]
> > >  	/* Don't lock the same memory twice. */
> > >  	if (block != NULL) {
> > > +		KeAcquireSpinLock(g_lock, &irql);
> > > +		status = virt2phys_block_refresh(block, base, size);
> > > +		KeReleaseSpinLock(g_lock, irql);  
> > 
> > Is it safe to do all the external calls holding this spinlock?
> > I can't confirm from the doc that ZwQueryVirtualMemory(), for example, does
> > not access pageable data.
> > And virt2phys_lock_memory() raises exceptions, although it handles them.
> > Other stuff seems safe.
> > 
> > The rest of the code only takes the lock to access block and process lists, which
> > are allocated from the non-paged pool.
> > Now that I think of it, this may be insufficient because the code and the static
> > variables are not marked as non-paged.
> > 
> > The translation IOCTL performance is not critical, so maybe it is worth replacing
> > the spinlock with just a global mutex, WDYT?  
> 
> In the upcoming v3 patch, the lock will be used for block removal, which won't fail.
> 
> I'm relatively new to Windows driver development. From my perspective, the use
> of a spinlock seems appropriate in this driver. Maybe a read-write lock can be
> more effective here?

It is correctness that I am concerned with, not efficiency.
Translating VA to IOVA is not performance-critical,
the spinlock is used just because it seemed sufficient.

Relating the code to the docs [1]:

* The code within a critical region guarded by a spin lock
  must neither be pageable nor make any references to pageable data.

  - Process and block structures are allocated from the non-paged pool - OK.
  - The code is not marked as non-pageable - FAIL, though it has never fired.

* The code within a critical region guarded by a spin lock can neither
  call any external function that might access pageable data...

  - MDL manipulation and page locking can run at "dispatch" IRQL - OK.
  - ZwQueryVirtualMemory() - unsure

  ... or raise an exception, nor can it generate any exceptions.

  - MmProbeAndLockPages() does generate an exception on failure,
    but it is handled - unsure

* The caller should release the spin lock with KeReleaseSpinLock as
  quickly as possible.

  - Before the patch, there was a fixed number of locked operations - OK.
  - After the patch, there's more work under the lock, although it seems to
    me that all of it can be done at "dispatch" IRQL - unsure.
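The mutex alternative raised earlier can be sketched in user-mode C. This is only a model under stated assumptions: the names are hypothetical, and a real driver would use something like a fast mutex acquired at PASSIVE_LEVEL, where the guarded code may touch pageable data and block, which a spin lock forbids.

```c
#include <pthread.h>

/* Model: serialize the translation/refresh path with a mutex.
 * Unlike a spin lock, the holder of a mutex may block, so calls that
 * might touch pageable data (e.g. the memory-query step) would be safe
 * inside the critical section. Names below are illustrative only. */
static pthread_mutex_t g_lock = PTHREAD_MUTEX_INITIALIZER;
static int g_blocks_refreshed;

static void
refresh_under_lock(void)
{
	pthread_mutex_lock(&g_lock);
	/* ... look up the block, query the real PA, rebuild the MDL if stale ... */
	g_blocks_refreshed++;
	pthread_mutex_unlock(&g_lock);
}
```

The trade-off is as discussed above: translation is not performance-critical, so the extra cost of a blocking lock is irrelevant, while the correctness constraints become much easier to satisfy.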

I've added Tyler from Microsoft, he might know more.

[1]:
https://learn.microsoft.com/en-us/windows-hardware/drivers/ddi/wdm/nf-wdm-keacquirespinlock


Thread overview: 10+ messages
2023-09-11  8:22 [PATCH] " Ric Li
2023-09-11 13:09 ` [PATCH v2] " Ric Li
2023-09-11 21:50   ` Dmitry Kozlyuk
2023-09-12 11:13     ` Li, Ming3
2023-09-12 11:17       ` [PATCH v3] " Ric Li
2023-11-27  1:31         ` Li, Ming3
2023-11-30  4:10         ` Dmitry Kozlyuk
2023-12-04 10:22           ` [PATCH v4] " Ric Li
2023-12-04 10:32           ` Ric Li
2023-09-12 12:18       ` Dmitry Kozlyuk [this message]
