DPDK patches and discussions
From: Jason Wang <jasowang@redhat.com>
To: "Wang, Xiao W" <xiao.w.wang@intel.com>,
	Maxime Coquelin <maxime.coquelin@redhat.com>,
	"Ye, Xiaolong" <xiaolong.ye@intel.com>,
	"shahafs@mellanox.com" <shahafs@mellanox.com>,
	"matan@mellanox.com" <matan@mellanox.com>,
	"amorenoz@redhat.com" <amorenoz@redhat.com>,
	"viacheslavo@mellanox.com" <viacheslavo@mellanox.com>,
	"dev@dpdk.org" <dev@dpdk.org>
Cc: "lulu@redhat.com" <lulu@redhat.com>, "Xu, Rosen" <rosen.xu@intel.com>
Subject: Re: [dpdk-dev] [PATCH 3/9] vdpa/ifc: add support to vDPA queue enable
Date: Fri, 15 May 2020 18:06:38 +0800
Message-ID: <8a36b473-34f8-bca8-ccfd-73b1a7c8b6c8@redhat.com>
In-Reply-To: <BN8PR11MB379529CAD982F382527FB9E7B8BD0@BN8PR11MB3795.namprd11.prod.outlook.com>


On 2020/5/15 5:42 PM, Wang, Xiao W wrote:
> Hi,
>
> Best Regards,
> Xiao
>
> > -----Original Message-----
> > From: Jason Wang <jasowang@redhat.com>
> > Sent: Friday, May 15, 2020 5:09 PM
> > To: Maxime Coquelin <maxime.coquelin@redhat.com>; Ye, Xiaolong
> > <xiaolong.ye@intel.com>; shahafs@mellanox.com; matan@mellanox.com;
> > amorenoz@redhat.com; Wang, Xiao W <xiao.w.wang@intel.com>;
> > viacheslavo@mellanox.com; dev@dpdk.org
> > Cc: lulu@redhat.com
> > Subject: Re: [PATCH 3/9] vdpa/ifc: add support to vDPA queue enable
> >
> > On 2020/5/14 4:02 PM, Maxime Coquelin wrote:
> > > This patch adds support for enabling and disabling
> > > vrings at a per-vring granularity.
> > >
> > > Signed-off-by: Maxime Coquelin <maxime.coquelin@redhat.com>
> >
> > A question here: I see that in qemu, peer_attach() may try to generate
> > VHOST_USER_SET_VRING_ENABLE, but just from the name I think it should
> > behave as queue_enable defined in the virtio specification, which is
> > explicitly under the control of the guest?
> >
> > (Note: in Cindy's vDPA series, we must invent new vhost_ops to differ
> > from this one.)
>
> From my view, the common_cfg.enable reg is used for registering a queue
> with the hypervisor & vhost, but not for ENABLE.
>

Well, what's your definition of "enable" in this context?

The spec says:

queue_enable
    The driver uses this to selectively prevent the device from
    executing requests from this virtqueue. 1 - enabled; 0 - disabled. 

This means that if queue_enable is not set to 1, the device cannot
execute requests for this specific virtqueue.
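For reference, queue_select and queue_enable are the per-virtqueue
fields of the common configuration structure (virtio 1.x spec, section
4.1.4.3); the select/enable pair below is exactly what the IFC code
writes:

    struct virtio_pci_common_cfg {
            /* About the whole device. */
            le32 device_feature_select;
            le32 device_feature;
            le32 driver_feature_select;
            le32 driver_feature;
            le16 msix_config;
            le16 num_queues;
            u8 device_status;
            u8 config_generation;

            /* About a specific virtqueue. */
            le16 queue_select;      /* selects the vq the fields below address */
            le16 queue_size;
            le16 queue_msix_vector;
            le16 queue_enable;      /* 1 - enabled; 0 - disabled */
            le16 queue_notify_off;
            le64 queue_desc;
            le64 queue_driver;
            le64 queue_device;
    };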


> The control queue message VIRTIO_NET_CTRL_MQ_VQ_PAIRS_SET is for
> enabling/disabling queue pairs.

But in qemu this is hooked to VHOST_USER_SET_VRING_ENABLE; see
peer_attach(). And this patch hooks VHOST_USER_SET_VRING_ENABLE to
queue_enable.

This means IFCVF uses queue_enable instead of the control vq or another
register for setting up multiqueue? My understanding is that IFCVF has
a dedicated register to do this.
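For comparison, the MQ path the driver uses goes through the control
virtqueue with a device-wide command, not a per-vq register write; from
the virtio-net part of the spec:

    struct virtio_net_ctrl_mq {
            le16 virtqueue_pairs;   /* device steers RX only to the first N pairs */
    };
    #define VIRTIO_NET_CTRL_MQ               4
    #define VIRTIO_NET_CTRL_MQ_VQ_PAIRS_SET  0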

Note that setting MQ is different from queue_enable: changing the
number of queues should make the underlying NIC configure its
steering/switching/filtering logic properly, so that traffic is only
sent to the queues set by the driver.

So hooking VIRTIO_NET_CTRL_MQ_VQ_PAIRS_SET to queue_enable looks wrong.
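To make the distinction concrete, a backend would ideally expose the
two knobs separately. A minimal sketch, where mq_cfg is a hypothetical
vendor register and not taken from any real IFCVF datasheet:

    /* Per-vring knob: gates whether the device may execute requests
     * from this one virtqueue (common_cfg.queue_enable semantics). */
    static void
    vdpa_vring_enable(struct ifcvf_hw *hw, u16 qid, u16 enable)
    {
            IFCVF_WRITE_REG16(qid, &hw->common_cfg->queue_select);
            IFCVF_WRITE_REG16(enable, &hw->common_cfg->queue_enable);
    }

    /* Device-wide knob: reprograms RX steering so traffic only reaches
     * the first nr_qpairs pairs, i.e. the semantics of
     * VIRTIO_NET_CTRL_MQ_VQ_PAIRS_SET. "mq_cfg" is hypothetical. */
    static void
    vdpa_set_mq(struct ifcvf_hw *hw, u16 nr_qpairs)
    {
            IFCVF_WRITE_REG16(nr_qpairs, hw->mq_cfg);
    }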


> Think about when virtio-net probes: all queues are selected and
> "enabled" by init_vqs(),

I think we're talking about aligning the implementation with the spec,
not just making it work for some specific drivers. A driver may choose
not to enable a virtqueue by not setting queue_enable to 1.
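A driver-side sketch of that choice, assuming a generic modern-device
helper rather than the literal Linux code:

    static void
    driver_setup_vq(struct virtio_pci_common_cfg *cfg, u16 idx, bool use)
    {
            iowrite16(idx, &cfg->queue_select);
            if (use)
                    iowrite16(1, &cfg->queue_enable);
            /* else: queue_enable stays 0 after reset, and the device
             * must not execute requests from this virtqueue. */
    }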

Thanks


> but MQ is not enabled until virtnet_set_channels() by user config with
> "ethtool".
>
> Based on this, the reg writing below is not OK for enabling MQ. IFC HW
> supports the registers below for the VF pass-through case.
>
> Actually, we have a specific reg designed to enable MQ in the vDPA case.
>
> > > +	IFCVF_WRITE_REG16(qid, &cfg->queue_select);
> > > +	IFCVF_WRITE_REG16(enable, &cfg->queue_enable);
>
> BRs,
> Xiao
>
> > Thanks
> >
> > > ---
> > >  drivers/vdpa/ifc/base/ifcvf.c |  9 +++++++++
> > >  drivers/vdpa/ifc/base/ifcvf.h |  4 ++++
> > >  drivers/vdpa/ifc/ifcvf_vdpa.c | 23 ++++++++++++++++++++++-
> > >  3 files changed, 35 insertions(+), 1 deletion(-)
> > >
> > > diff --git a/drivers/vdpa/ifc/base/ifcvf.c b/drivers/vdpa/ifc/base/ifcvf.c
> > > index 3c0b2dff66..dd4e7468ae 100644
> > > --- a/drivers/vdpa/ifc/base/ifcvf.c
> > > +++ b/drivers/vdpa/ifc/base/ifcvf.c
> > > @@ -327,3 +327,12 @@ ifcvf_get_queue_notify_off(struct ifcvf_hw *hw, int qid)
> > >  	return (u8 *)hw->notify_addr[qid] -
> > >  		(u8 *)hw->mem_resource[hw->notify_region].addr;
> > >  }
> > > +
> > > +void
> > > +ifcvf_queue_enable(struct ifcvf_hw *hw, u16 qid, u16 enable)
> > > +{
> > > +	struct ifcvf_pci_common_cfg *cfg = hw->common_cfg;
> > > +
> > > +	IFCVF_WRITE_REG16(qid, &cfg->queue_select);
> > > +	IFCVF_WRITE_REG16(enable, &cfg->queue_enable);
> > > +}
> > > diff --git a/drivers/vdpa/ifc/base/ifcvf.h b/drivers/vdpa/ifc/base/ifcvf.h
> > > index eb04a94067..bd85010eff 100644
> > > --- a/drivers/vdpa/ifc/base/ifcvf.h
> > > +++ b/drivers/vdpa/ifc/base/ifcvf.h
> > > @@ -159,4 +159,8 @@ ifcvf_get_notify_region(struct ifcvf_hw *hw);
> > >  u64
> > >  ifcvf_get_queue_notify_off(struct ifcvf_hw *hw, int qid);
> > >
> > > +void
> > > +ifcvf_queue_enable(struct ifcvf_hw *hw, u16 qid, u16 enable);
> > > +
> > > +
> > >  #endif /* _IFCVF_H_ */
> > > diff --git a/drivers/vdpa/ifc/ifcvf_vdpa.c b/drivers/vdpa/ifc/ifcvf_vdpa.c
> > > index ec97178dcb..55ce0cf13d 100644
> > > --- a/drivers/vdpa/ifc/ifcvf_vdpa.c
> > > +++ b/drivers/vdpa/ifc/ifcvf_vdpa.c
> > > @@ -937,6 +937,27 @@ ifcvf_dev_close(int vid)
> > >  	return 0;
> > >  }
> > >
> > > +static int
> > > +ifcvf_set_vring_state(int vid, int vring, int state)
> > > +{
> > > +	int did;
> > > +	struct internal_list *list;
> > > +	struct ifcvf_internal *internal;
> > > +
> > > +	did = rte_vhost_get_vdpa_device_id(vid);
> > > +	list = find_internal_resource_by_did(did);
> > > +	if (list == NULL) {
> > > +		DRV_LOG(ERR, "Invalid device id: %d", did);
> > > +		return -1;
> > > +	}
> > > +
> > > +	internal = list->internal;
> > > +
> > > +	ifcvf_queue_enable(&internal->hw, (uint16_t)vring, (uint16_t)state);
> > > +
> > > +	return 0;
> > > +}
> > > +
> > >  static int
> > >  ifcvf_set_features(int vid)
> > >  {
> > > @@ -1086,7 +1107,7 @@ static struct rte_vdpa_dev_ops ifcvf_ops = {
> > >  	.get_protocol_features = ifcvf_get_protocol_features,
> > >  	.dev_conf = ifcvf_dev_config,
> > >  	.dev_close = ifcvf_dev_close,
> > > -	.set_vring_state = NULL,
> > > +	.set_vring_state = ifcvf_set_vring_state,
> > >  	.set_features = ifcvf_set_features,
> > >  	.migration_done = NULL,
> > >  	.get_vfio_group_fd = ifcvf_get_vfio_group_fd,


Thread overview: 35+ messages
2020-05-14  8:02 [dpdk-dev] [PATCH (v20.08) 0/9] vhost: improve Vhost/vDPA device init Maxime Coquelin
2020-05-14  8:02 ` [dpdk-dev] [PATCH 1/9] vhost: fix virtio ready flag check Maxime Coquelin
2020-05-14  8:02 ` [dpdk-dev] [PATCH 2/9] vhost: refactor Virtio ready check Maxime Coquelin
2020-05-14  8:02 ` [dpdk-dev] [PATCH 3/9] vdpa/ifc: add support to vDPA queue enable Maxime Coquelin
2020-05-15  8:45   ` Ye Xiaolong
2020-05-15  9:09   ` Jason Wang
2020-05-15  9:42     ` Wang, Xiao W
2020-05-15 10:06       ` Jason Wang [this message]
2020-05-15 10:08       ` Jason Wang
2020-05-18  3:09         ` Wang, Xiao W
2020-05-18  3:17           ` Jason Wang
2020-05-14  8:02 ` [dpdk-dev] [PATCH 4/9] vhost: make some vDPA callbacks mandatory Maxime Coquelin
2020-05-14  8:02 ` [dpdk-dev] [PATCH 5/9] vhost: check vDPA configuration succeed Maxime Coquelin
2020-05-14  8:02 ` [dpdk-dev] [PATCH 6/9] vhost: add support for virtio status Maxime Coquelin
2020-06-11  2:45   ` Xia, Chenbo
2020-06-16  4:29   ` Xia, Chenbo
2020-06-22 10:18     ` Adrian Moreno
2020-06-22 11:00       ` Xia, Chenbo
2020-05-14  8:02 ` [dpdk-dev] [PATCH 7/9] vdpa/ifc: enable status protocol feature Maxime Coquelin
2020-05-14  8:02 ` [dpdk-dev] [PATCH 8/9] vdpa/mlx5: " Maxime Coquelin
2020-05-14  8:02 ` [dpdk-dev] [PATCH 9/9] vhost: only use vDPA config workaround if needed Maxime Coquelin
2020-06-07 10:38   ` Matan Azrad
2020-06-08  8:34     ` Maxime Coquelin
2020-06-08  9:19       ` Matan Azrad
2020-06-09  9:04         ` Maxime Coquelin
2020-06-09 11:09           ` Matan Azrad
2020-06-09 11:26             ` Maxime Coquelin
2020-06-09 17:23             ` Maxime Coquelin
2020-06-14  6:08               ` Matan Azrad
2020-06-17  9:39                 ` Maxime Coquelin
2020-06-17 11:04                   ` Matan Azrad
2020-06-17 12:29                     ` Maxime Coquelin
2020-06-18  6:39                       ` Matan Azrad
2020-06-18  7:30                         ` Maxime Coquelin
2020-06-23 10:42                           ` Wang, Xiao W
