From: Stephen Hemminger <stephen@networkplumber.org>
To: Thomas Monjalon <thomas.monjalon@6wind.com>
Cc: dev@dpdk.org, Avi Kivity <avi@scylladb.com>,
Alex Williamson <alex.williamson@redhat.com>
Subject: Re: [dpdk-dev] [PATCH] vfio: support iommu group zero
Date: Wed, 9 Dec 2015 13:58:01 -0800 [thread overview]
Message-ID: <20151209135801.17965487@xeon-e3> (raw)
In-Reply-To: <2562631.e9AmeysRzG@xps13>
On Wed, 09 Dec 2015 22:12:33 +0100
Thomas Monjalon <thomas.monjalon@6wind.com> wrote:
> 2015-12-09 09:55, Stephen Hemminger:
> > The current implementation of VFIO will not work with the new
> > no-IOMMU mode in the 4.4 kernel. The original code assumed that
> > IOMMU group zero would never be used. Group numbers are assigned
> > starting at zero, and until now they came from the hardware, which
> > is likely to use group 0 for system devices that are not used with DPDK.
> >
> > The fix is to allow 0 as a valid group and rearrange code
> > to split the return value from the group value.
> >
> > Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
> > ---
> > Why was this ignored? It was originally sent on 26 Oct 15 back
> > when IOMMU discussion was lively.
>
> There was no review of this patch.
> The patch has been marked as deferred recently when it was too late
> to do such feature changes in DPDK code:
> http://dpdk.org/dev/patchwork/patch/8035/
This is why, as a fallback, the maintainer has to review the patch
or direct a sub-maintainer to do it. I think almost 2 months is
plenty of time for review.
An alternative is a "default yes" policy: patches submitted early
enough that draw no objections or discussion simply go in (that
is what ZeroMQ does). http://rfc.zeromq.org/spec:22
* Maintainers SHOULD NOT merge their own patches except in exceptional cases, such as non-responsiveness from other Maintainers for an extended period (more than 1-2 days).
* Maintainers SHALL NOT make value judgments on correct patches.
* Maintainers SHALL merge correct patches from other Contributors rapidly.
* Maintainers SHOULD ask for improvements to incorrect patches and SHOULD reject incorrect patches if the Contributor does not respond constructively.
* Any Contributor who has value judgments on a correct patch SHOULD express these via their own patches.
* Maintainers MAY commit changes to non-source documentation directly to the project.
Thread overview: 12+ messages
2015-12-09 17:55 Stephen Hemminger
2015-12-09 21:12 ` Thomas Monjalon
2015-12-09 21:51 ` Stephen Hemminger
2015-12-09 21:58 ` Stephen Hemminger [this message]
2015-12-09 22:49 ` Thomas Monjalon
2015-12-09 23:12 ` Stephen Hemminger
2015-12-09 23:22 ` Alex Williamson
2015-12-10 0:52 ` Stephen Hemminger
2015-12-10 1:52 ` Alex Williamson
2015-12-10 9:57 ` Burakov, Anatoly
2015-12-10 20:30 ` Thomas Monjalon