From: Maxime Coquelin <maxime.coquelin@redhat.com>
To: Ilya Maximets <i.maximets@samsung.com>,
Tiwei Bie <tiwei.bie@intel.com>,
zhihong.wang@intel.com, dev@dpdk.org
Subject: Re: [dpdk-dev] [1/4] net/virtio: fix the control vq support
Date: Wed, 23 Jan 2019 23:02:35 +0100 [thread overview]
Message-ID: <1cbe3d4f-0836-7537-5f24-6e8bda27f40c@redhat.com> (raw)
In-Reply-To: <9799a0cf-52a5-0ea4-03d8-b812b338ce59@samsung.com>
On 1/23/19 5:33 PM, Ilya Maximets wrote:
> Hmm. Nevermind.
> Please, ignore my previous comments to this patch.
> The patch seems compliant with the spec, but the spec is not very clear.
Ok, thanks for the review and the follow-up.
Maxime
> Best regards, Ilya Maximets.
>
> On 23.01.2019 16:09, Ilya Maximets wrote:
>> On 22.01.2019 20:01, Tiwei Bie wrote:
>>> This patch mainly fixes the following issues in the packed ring based
>>> control vq support in virtio driver:
>>>
>>> 1. When parsing the used descriptors, we have to track the
>>> number of descs that we need to skip;
>>> 2. vq->vq_free_cnt was decreased twice for a same desc;
>>>
>>> Meanwhile, make the function name consistent with other parts.
>>>
>>> Fixes: ec194c2f1895 ("net/virtio: support packed queue in send command")
>>> Fixes: a4270ea4ff79 ("net/virtio: check head desc with correct wrap counter")
>>>
>>> Signed-off-by: Tiwei Bie <tiwei.bie@intel.com>
>>> ---
>>> drivers/net/virtio/virtio_ethdev.c | 62 ++++++++++++++----------------
>>> drivers/net/virtio/virtqueue.h | 12 +-----
>>> 2 files changed, 31 insertions(+), 43 deletions(-)
>>>
>>> diff --git a/drivers/net/virtio/virtio_ethdev.c b/drivers/net/virtio/virtio_ethdev.c
>>> index ee5a98b7c..a3fe65599 100644
>>> --- a/drivers/net/virtio/virtio_ethdev.c
>>> +++ b/drivers/net/virtio/virtio_ethdev.c
>>> @@ -142,16 +142,17 @@ static const struct rte_virtio_xstats_name_off rte_virtio_txq_stat_strings[] = {
>>> struct virtio_hw_internal virtio_hw_internal[RTE_MAX_ETHPORTS];
>>>
>>> static struct virtio_pmd_ctrl *
>>> -virtio_pq_send_command(struct virtnet_ctl *cvq, struct virtio_pmd_ctrl *ctrl,
>>> - int *dlen, int pkt_num)
>>> +virtio_send_command_packed(struct virtnet_ctl *cvq,
>>> + struct virtio_pmd_ctrl *ctrl,
>>> + int *dlen, int pkt_num)
>>> {
>>> struct virtqueue *vq = cvq->vq;
>>> int head;
>>> struct vring_packed_desc *desc = vq->ring_packed.desc_packed;
>>> struct virtio_pmd_ctrl *result;
>>> - bool avail_wrap_counter, used_wrap_counter;
>>> - uint16_t flags;
>>> + bool avail_wrap_counter;
>>> int sum = 0;
>>> + int nb_descs = 0;
>>> int k;
>>>
>>> /*
>>> @@ -162,11 +163,10 @@ virtio_pq_send_command(struct virtnet_ctl *cvq, struct virtio_pmd_ctrl *ctrl,
>>> */
>>> head = vq->vq_avail_idx;
>>> avail_wrap_counter = vq->avail_wrap_counter;
>>> - used_wrap_counter = vq->used_wrap_counter;
>>> - desc[head].flags = VRING_DESC_F_NEXT;
>>> desc[head].addr = cvq->virtio_net_hdr_mem;
>>> desc[head].len = sizeof(struct virtio_net_ctrl_hdr);
>>> vq->vq_free_cnt--;
>>> + nb_descs++;
>>> if (++vq->vq_avail_idx >= vq->vq_nentries) {
>>> vq->vq_avail_idx -= vq->vq_nentries;
>>> vq->avail_wrap_counter ^= 1;
>>> @@ -177,55 +177,51 @@ virtio_pq_send_command(struct virtnet_ctl *cvq, struct virtio_pmd_ctrl *ctrl,
>>> + sizeof(struct virtio_net_ctrl_hdr)
>>> + sizeof(ctrl->status) + sizeof(uint8_t) * sum;
>>> desc[vq->vq_avail_idx].len = dlen[k];
>>> - flags = VRING_DESC_F_NEXT;
>>
>> Looks like the barriers were badly placed here before this patch.
>> Anyway, you need a write barrier here between the {addr, len} and flags updates.
>>
>>> + desc[vq->vq_avail_idx].flags = VRING_DESC_F_NEXT |
>>> + VRING_DESC_F_AVAIL(vq->avail_wrap_counter) |
>>> + VRING_DESC_F_USED(!vq->avail_wrap_counter);
>>> sum += dlen[k];
>>> vq->vq_free_cnt--;
>>> - flags |= VRING_DESC_F_AVAIL(vq->avail_wrap_counter) |
>>> - VRING_DESC_F_USED(!vq->avail_wrap_counter);
>>> - desc[vq->vq_avail_idx].flags = flags;
>>> - rte_smp_wmb();
>>> - vq->vq_free_cnt--;
>>> + nb_descs++;
>>> if (++vq->vq_avail_idx >= vq->vq_nentries) {
>>> vq->vq_avail_idx -= vq->vq_nentries;
>>> vq->avail_wrap_counter ^= 1;
>>> }
>>> }
>>>
>>> -
>>> desc[vq->vq_avail_idx].addr = cvq->virtio_net_hdr_mem
>>> + sizeof(struct virtio_net_ctrl_hdr);
>>> desc[vq->vq_avail_idx].len = sizeof(ctrl->status);
>>> - flags = VRING_DESC_F_WRITE;
>>> - flags |= VRING_DESC_F_AVAIL(vq->avail_wrap_counter) |
>>> - VRING_DESC_F_USED(!vq->avail_wrap_counter);
>>> - desc[vq->vq_avail_idx].flags = flags;
>>> - flags = VRING_DESC_F_NEXT;
>>> - flags |= VRING_DESC_F_AVAIL(avail_wrap_counter) |
>>> - VRING_DESC_F_USED(!avail_wrap_counter);
>>> - desc[head].flags = flags;
>>> - rte_smp_wmb();
>>> -
>>
>> Same here. We need a write barrier to be sure that {addr, len} are written
>> before updating the flags.
>>
>> Another way to avoid most of the barriers is to work similarly to
>> 'flush_shadow_used_ring_packed',
>> i.e. update all the data in a loop - write barrier - update all the flags.
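The batching Ilya suggests here can be sketched in isolation. The following standalone illustration uses simplified, hypothetical stand-ins (the `pdesc` struct, the flag macros, and the `publish_chain` helper are not the driver's actual code, and a C11 release fence stands in for the driver's wmb): fill every descriptor's addr/len first, issue one write barrier, then publish all the flags.

```c
#include <stdatomic.h>
#include <stdint.h>
#include <stdbool.h>

/* Hypothetical simplified flag layout, modeled on the packed ring. */
#define F_AVAIL(b) ((uint16_t)(b) << 7)
#define F_USED(b)  ((uint16_t)(b) << 15)
#define F_NEXT     1

struct pdesc { uint64_t addr; uint32_t len; uint16_t flags; };

/* Fill the payload of every descriptor first, then make it all
 * visible with a single release fence before the flags updates,
 * in the spirit of flush_shadow_used_ring_packed. */
static void publish_chain(struct pdesc *desc, int n,
                          const uint64_t *addrs, const uint32_t *lens,
                          bool wrap)
{
    for (int i = 0; i < n; i++) {
        desc[i].addr = addrs[i];
        desc[i].len  = lens[i];
    }

    /* One write barrier for the whole chain instead of one per descriptor. */
    atomic_thread_fence(memory_order_release);

    for (int i = 0; i < n; i++)
        desc[i].flags = F_NEXT | F_AVAIL(wrap) | F_USED(!wrap);
}
```

Because the flags are written last, the device cannot observe a descriptor as available before its addr/len fields are visible, which is the same property the per-descriptor barriers were enforcing one at a time.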
>>
>>> + desc[vq->vq_avail_idx].flags = VRING_DESC_F_WRITE |
>>> + VRING_DESC_F_AVAIL(vq->avail_wrap_counter) |
>>> + VRING_DESC_F_USED(!vq->avail_wrap_counter);
>>> vq->vq_free_cnt--;
>>> + nb_descs++;
>>> if (++vq->vq_avail_idx >= vq->vq_nentries) {
>>> vq->vq_avail_idx -= vq->vq_nentries;
>>> vq->avail_wrap_counter ^= 1;
>>> }
>>>
>>> + virtio_wmb(vq->hw->weak_barriers);
>>> + desc[head].flags = VRING_DESC_F_NEXT |
>>> + VRING_DESC_F_AVAIL(avail_wrap_counter) |
>>> + VRING_DESC_F_USED(!avail_wrap_counter);
>>> +
>>> + virtio_wmb(vq->hw->weak_barriers);
>>> virtqueue_notify(vq);
>>>
>>> /* wait for used descriptors in virtqueue */
>>> - do {
>>> - rte_rmb();
>>> + while (!desc_is_used(&desc[head], vq))
>>> usleep(100);
>>> - } while (!__desc_is_used(&desc[head], used_wrap_counter));
>>> +
>>> + virtio_rmb(vq->hw->weak_barriers);
>>>
>>> /* now get used descriptors */
>>> - while (desc_is_used(&desc[vq->vq_used_cons_idx], vq)) {
>>> - vq->vq_free_cnt++;
>>> - if (++vq->vq_used_cons_idx >= vq->vq_nentries) {
>>> - vq->vq_used_cons_idx -= vq->vq_nentries;
>>> - vq->used_wrap_counter ^= 1;
>>> - }
>>> + vq->vq_free_cnt += nb_descs;
>>> + vq->vq_used_cons_idx += nb_descs;
>>> + if (vq->vq_used_cons_idx >= vq->vq_nentries) {
>>> + vq->vq_used_cons_idx -= vq->vq_nentries;
>>> + vq->used_wrap_counter ^= 1;
>>> }
>>>
>>> result = cvq->virtio_net_hdr_mz->addr;
>>> @@ -266,7 +262,7 @@ virtio_send_command(struct virtnet_ctl *cvq, struct virtio_pmd_ctrl *ctrl,
>>> sizeof(struct virtio_pmd_ctrl));
>>>
>>> if (vtpci_packed_queue(vq->hw)) {
>>> - result = virtio_pq_send_command(cvq, ctrl, dlen, pkt_num);
>>> + result = virtio_send_command_packed(cvq, ctrl, dlen, pkt_num);
>>> goto out_unlock;
>>> }
>>>
>>> diff --git a/drivers/net/virtio/virtqueue.h b/drivers/net/virtio/virtqueue.h
>>> index 7fcde5643..ca9d8e6e3 100644
>>> --- a/drivers/net/virtio/virtqueue.h
>>> +++ b/drivers/net/virtio/virtqueue.h
>>> @@ -281,7 +281,7 @@ struct virtio_tx_region {
>>> };
>>>
>>> static inline int
>>> -__desc_is_used(struct vring_packed_desc *desc, bool wrap_counter)
>>> +desc_is_used(struct vring_packed_desc *desc, struct virtqueue *vq)
>>> {
>>> uint16_t used, avail, flags;
>>>
>>> @@ -289,16 +289,9 @@ __desc_is_used(struct vring_packed_desc *desc, bool wrap_counter)
>>> used = !!(flags & VRING_DESC_F_USED(1));
>>> avail = !!(flags & VRING_DESC_F_AVAIL(1));
>>>
>>> - return avail == used && used == wrap_counter;
>>> + return avail == used && used == vq->used_wrap_counter;
>>> }
>>>
>>> -static inline int
>>> -desc_is_used(struct vring_packed_desc *desc, struct virtqueue *vq)
>>> -{
>>> - return __desc_is_used(desc, vq->used_wrap_counter);
>>> -}
>>> -
>>> -
>>> static inline void
>>> vring_desc_init_packed(struct virtqueue *vq, int n)
>>> {
>>> @@ -354,7 +347,6 @@ virtqueue_enable_intr_packed(struct virtqueue *vq)
>>> {
>>> uint16_t *event_flags = &vq->ring_packed.driver_event->desc_event_flags;
>>>
>>> -
>>> if (vq->event_flags_shadow == RING_EVENT_FLAGS_DISABLE) {
>>> virtio_wmb(vq->hw->weak_barriers);
>>> vq->event_flags_shadow = RING_EVENT_FLAGS_ENABLE;
>>>
Thread overview: 14+ messages
2019-01-22 17:01 [dpdk-dev] [PATCH 0/4] Virtio fixes Tiwei Bie
2019-01-22 17:01 ` [dpdk-dev] [PATCH 1/4] net/virtio: fix the control vq support Tiwei Bie
[not found] ` <CGME20190123130903eucas1p2730776e71039a79024dd7602b5dcad7d@eucas1p2.samsung.com>
2019-01-23 13:09 ` [dpdk-dev] [1/4] " Ilya Maximets
[not found] ` <CGME20190123163323eucas1p1baaec2c637cdc656ab9b26dbfd455bae@eucas1p1.samsung.com>
2019-01-23 16:33 ` Ilya Maximets
2019-01-23 22:02 ` Maxime Coquelin [this message]
2019-01-23 22:03 ` [dpdk-dev] [PATCH 1/4] " Maxime Coquelin
2019-01-22 17:01 ` [dpdk-dev] [PATCH 2/4] net/virtio-user: " Tiwei Bie
2019-01-23 22:07 ` Maxime Coquelin
2019-01-22 17:01 ` [dpdk-dev] [PATCH 3/4] net/virtio: use virtio barrier in packed ring Tiwei Bie
[not found] ` <CGME20190123155232eucas1p28bdd3a5c220452b81e21e48c19f3e5a7@eucas1p2.samsung.com>
2019-01-23 15:52 ` [dpdk-dev] [3/4] " Ilya Maximets
2019-01-23 22:09 ` [dpdk-dev] [PATCH 3/4] " Maxime Coquelin
2019-01-22 17:01 ` [dpdk-dev] [PATCH 4/4] net/virtio-user: fix used ring update in cvq handling Tiwei Bie
2019-01-23 22:08 ` Maxime Coquelin
2019-01-23 22:25 ` [dpdk-dev] [PATCH 0/4] Virtio fixes Maxime Coquelin