* [dpdk-dev] virtio optimization idea
From: Xie, Huawei
Date: 2015-09-04 8:25 UTC
To: dev, Thomas Monjalon, Linhaifeng, Tetsuya Mukawa
Cc: Michael S. Tsirkin

Hi:

Recently I have done a virtio optimization proof of concept. The
optimization includes two parts:
1) avail ring set with fixed descriptors
2) RX vectorization
With these optimizations, we could get a several-fold performance boost
for pure vhost-virtio throughput.

Here I will only cover the first part, which is the prerequisite for the
second part.

Let us first take RX as an example. Currently, when we fill the avail ring
with guest mbufs, we need to:
a) allocate one descriptor (for a non-sg mbuf) from the free descriptors
b) set the idx of the desc into the entry of the avail ring
c) set the addr/len fields of the descriptor to point to the blank guest
   mbuf data area

These operations take time, and step b in particular leaves the cache line
holding the avail ring in modified (M) state on the virtio processing core.
When vhost processes the avail ring, the cache line transfer from the
virtio processing core to the vhost processing core costs a significant
number of CPU cycles.

To solve this problem, this is the arrangement of the RX ring for the DPDK
PMD (for the non-mergeable case):

      avail
      idx
        +
        |
 +----+----+----+--------------+-----+-----+
 |  0 |  1 |  2 |     ...      | 254 | 255 |   avail ring
 +--+-+--+-+--+-+--------------+--+--+--+--+
    |    |    |                   |     |
    |    |    |                   |     |
    v    v    v                   v     v
 +--+-+--+-+--+-+--------------+--+--+--+--+
 |  0 |  1 |  2 |     ...      | 254 | 255 |   desc ring
 +----+----+----+--------------+-----+-----+
                    |
                    |
 +----+----+----+--------------+-----+-----+
 |  0 |  1 |  2 |              | 254 | 255 |   used ring
 +----+----+----+--------------+-----+-----+
                    |
                    +

The avail ring is initialized with fixed descriptors and is never changed,
i.e., the index value of the nth avail ring entry is always n. This means
the virtio PMD is actually refilling the desc ring only, without having to
change the avail ring. When vhost fetches the avail ring, it is always in
its first-level cache unless it has been evicted.

When RX receives packets from the used ring, we use used->idx as the desc
idx. This requires that vhost processes and returns descriptors from the
avail ring to the used ring in order, which is true for both the current
DPDK vhost and kernel vhost implementations. In my understanding, there is
no necessity for vhost-net to process descriptors out of order. One case
could be zero copy: for example, if one descriptor doesn't meet the
zero-copy requirement, we could return it directly to the used ring,
earlier than the descriptors in front of it. To enforce this, I want to
use a reserved bit to indicate in-order processing of descriptors.

For the TX ring, the arrangement is as below. Each transmitted mbuf needs
a desc for the virtio_net_hdr, so actually we have only 128 free slots.

                                ++
                                ||
                                ||
 +-----+-----+-----+-----------++-----+-----+-----+-----+
 |  0  |  1  | ... |    127    || 128 | 129 | ... | 255 |   avail ring with fixed descriptors
 +--+--+--+--+-----+-----+-----++--+--+--+--+-----+--+--+
    |     |            |          |     |           |
    v     v            v          v     v           v
 +--+--+--+--+-----+-----+-----++--+--+--+--+-----+--+--+
 | 127 | 128 | ... |    255    || 127 | 128 | ... | 255 |   desc ring for virtio_net_hdr
 +--+--+--+--+-----+-----+-----++--+--+--+--+-----+--+--+
    |     |            |          |     |           |
    v     v            v          v     v           v
 +--+--+--+--+-----+-----+-----++--+--+--+--+-----+--+--+
 |  0  |  1  | ... |    127    ||  0  |  1  | ... | 127 |   desc ring for tx data
 +-----+-----+-----+-----------++-----+-----+-----+-----+

/huawei
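To make the RX refill idea above concrete, here is a minimal, untested C sketch
of a refill path under the fixed-avail-ring layout. The structure layouts, the
RING_SIZE of 256 and the helpers mbuf_data_phys()/mbuf_data_room() are
illustrative assumptions, not the actual DPDK virtio PMD code; the only point it
demonstrates is that the loop writes the descriptor table and then bumps
avail->idx, and never touches avail->ring[].

#include <stdint.h>

#define RING_SIZE          256       /* assumed queue size, as in the charts above */
#define VRING_DESC_F_WRITE 2

struct vring_desc {
        uint64_t addr;
        uint32_t len;
        uint16_t flags;
        uint16_t next;
};

struct rx_queue {
        struct vring_desc *desc;      /* descriptor table */
        volatile uint16_t *avail_idx; /* &avail->idx; avail->ring[] is never written */
        void *mbufs[RING_SIZE];       /* guest mbufs backing each fixed slot */
        uint16_t next_refill;         /* free-running refill counter */
};

/* Hypothetical helpers: physical address and room of an mbuf's data area. */
extern uint64_t mbuf_data_phys(void *mbuf);
extern uint32_t mbuf_data_room(void *mbuf);

/* Refill 'count' slots: only the desc ring and avail->idx are touched,
 * because avail->ring[n] == n forever under the fixed layout. */
static void refill_rx(struct rx_queue *q, void **new_mbufs, uint16_t count)
{
        for (uint16_t i = 0; i < count; i++) {
                uint16_t slot = (uint16_t)(q->next_refill + i) & (RING_SIZE - 1);

                q->mbufs[slot]      = new_mbufs[i];
                q->desc[slot].addr  = mbuf_data_phys(new_mbufs[i]);
                q->desc[slot].len   = mbuf_data_room(new_mbufs[i]);
                q->desc[slot].flags = VRING_DESC_F_WRITE;
                q->desc[slot].next  = 0;
        }
        q->next_refill = (uint16_t)(q->next_refill + count);

        /* A write barrier is needed here on weakly ordered CPUs before the
         * index update makes the new descriptors visible to vhost. */
        *q->avail_idx = (uint16_t)(*q->avail_idx + count);
}

In the legacy-compatible slow path, step b from the list above would also
rewrite avail->ring[slot] here; skipping it is what keeps the avail ring cache
line shared instead of bouncing between the virtio and vhost cores.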
* Re: [dpdk-dev] virtio optimization idea 2015-09-04 8:25 [dpdk-dev] virtio optimization idea Xie, Huawei @ 2015-09-04 16:50 ` Xie, Huawei 2015-09-08 8:21 ` Tetsuya Mukawa 2015-09-08 15:39 ` Stephen Hemminger 2015-09-09 7:33 ` Michael S. Tsirkin 2 siblings, 1 reply; 11+ messages in thread From: Xie, Huawei @ 2015-09-04 16:50 UTC (permalink / raw) To: dev, Thomas Monjalon, Linhaifeng, Tetsuya Mukawa Cc: ms >> Michael S. Tsirkin There is some format issue with the ascii chart of the tx ring. Update that chart. Sorry for the trouble. On 9/4/2015 4:25 PM, Xie, Huawei wrote: > Hi: > > Recently I have done one virtio optimization proof of concept. The > optimization includes two parts: > 1) avail ring set with fixed descriptors > 2) RX vectorization > With the optimizations, we could have several times of performance boost > for purely vhost-virtio throughput. > > Here i will only cover the first part, which is the prerequisite for the > second part. > Let us first take RX for example. Currently when we fill the avail ring > with guest mbuf, we need > a) allocate one descriptor(for non sg mbuf) from free descriptors > b) set the idx of the desc into the entry of avail ring > c) set the addr/len field of the descriptor to point to guest blank mbuf > data area > > Those operation takes time, and especially step b results in modifed (M) > state of the cache line for the avail ring in the virtio processing > core. When vhost processes the avail ring, the cache line transfer from > virtio processing core to vhost processing core takes pretty much CPU > cycles. > To solve this problem, this is the arrangement of RX ring for DPDK > pmd(for non-mergable case). > > avail > idx > + > | > +----+----+---+-------------+------+ > | 0 | 1 | 2 | ... | 254 | 255 | avail ring > +-+--+-+--+-+-+---------+---+--+---+ > | | | | | | > | | | | | | > v v v | v v > +-+--+-+--+-+-+---------+---+--+---+ > | 0 | 1 | 2 | ... | 254 | 255 | desc ring > +----+----+---+-------------+------+ > | > | > +----+----+---+-------------+------+ > | 0 | 1 | 2 | | 254 | 255 | used ring > +----+----+---+-------------+------+ > | > + > Avail ring is initialized with fixed descriptor and is never changed, > i.e, the index value of the nth avail ring entry is always n, which > means virtio PMD is actually refilling desc ring only, without having to > change avail ring. > When vhost fetches avail ring, if not evicted, it is always in its first > level cache. > > When RX receives packets from used ring, we use the used->idx as the > desc idx. This requires that vhost processes and returns descs from > avail ring to used ring in order, which is true for both current dpdk > vhost and kernel vhost implementation. In my understanding, there is no > necessity for vhost net to process descriptors OOO. One case could be > zero copy, for example, if one descriptor doesn't meet zero copy > requirment, we could directly return it to used ring, earlier than the > descriptors in front of it. > To enforce this, i want to use a reserved bit to indicate in order > processing of descriptors. > > For tx ring, the arrangement is like below. Each transmitted mbuf needs > a desc for virtio_net_hdr, so actually we have only 128 free slots. > > > > ++ > || > || > +-----+-----+-----+--------------+------+------+------+ > | 0 | 1 | ... | 127 || 128 | 129 | ... | 255 | avail ring > +--+--+--+--+-----+---+------+---+--+---+------+--+---+ > | | | || | | | > v v v || v v v > +--+--+--+--+-----+---+------+---+--+---+------+--+---+ > | 127 | 128 | ... 
| 255 || 127 | 128 | ... | 255 | desc ring for virtio_net_hdr > +--+--+--+--+-----+---+------+---+--+---+------+--+---+ > | | | || | | | > v v v || v v v > +--+--+--+--+-----+---+------+---+--+---+------+--+---+ > | 0 | 1 | ... | 127 || 0 | 1 | ... | 127 | desc ring for tx dat > > > > /huawei > ^ permalink raw reply [flat|nested] 11+ messages in thread
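One possible way to read the corrected TX chart is the one-time setup sketched
below. The struct layout mirrors the illustrative vring_desc from the RX sketch
earlier in the thread; the choice of descriptors 128..255 for virtio_net_hdr,
the hdr_mem layout and all names are assumptions made only to show how the
fixed avail entries and the header-to-data chains could be pre-built, not the
author's actual code.

#include <stdint.h>

#define RING_SIZE         256
#define TX_SLOTS          (RING_SIZE / 2)  /* 128 usable TX slots, as noted above */
#define VRING_DESC_F_NEXT 1

struct vring_desc {                        /* same illustrative layout as the RX sketch */
        uint64_t addr;
        uint32_t len;
        uint16_t flags;
        uint16_t next;
};

/* One-time TX setup: descriptors TX_SLOTS..RING_SIZE-1 are assumed to hold
 * virtio_net_hdr, each permanently chained to data descriptor (i - TX_SLOTS);
 * every avail entry n is fixed to point at header descriptor
 * TX_SLOTS + (n & (TX_SLOTS - 1)) and is never rewritten afterwards. */
static void tx_ring_init(struct vring_desc *desc, uint16_t *avail_ring,
                         uint64_t hdr_mem_phys, uint32_t hdr_size)
{
        for (uint16_t i = 0; i < TX_SLOTS; i++) {
                uint16_t hdr = (uint16_t)(TX_SLOTS + i);

                desc[hdr].addr  = hdr_mem_phys + (uint64_t)i * hdr_size;
                desc[hdr].len   = hdr_size;
                desc[hdr].flags = VRING_DESC_F_NEXT;
                desc[hdr].next  = i;       /* chain to data descriptor 0..127 */
                /* desc[i].addr/len are filled per packet at transmit time. */
        }
        for (uint16_t n = 0; n < RING_SIZE; n++)
                avail_ring[n] = (uint16_t)(TX_SLOTS + (n & (TX_SLOTS - 1)));
}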
* Re: [dpdk-dev] virtio optimization idea 2015-09-04 16:50 ` Xie, Huawei @ 2015-09-08 8:21 ` Tetsuya Mukawa 2015-09-08 9:42 ` Xie, Huawei 0 siblings, 1 reply; 11+ messages in thread From: Tetsuya Mukawa @ 2015-09-08 8:21 UTC (permalink / raw) To: Xie, Huawei, dev, Thomas Monjalon, Linhaifeng Cc: ms >> Michael S. Tsirkin On 2015/09/05 1:50, Xie, Huawei wrote: > There is some format issue with the ascii chart of the tx ring. Update > that chart. > Sorry for the trouble. Hi XIe, Thanks for sharing a way to optimize virtio. I have a few questions. > > On 9/4/2015 4:25 PM, Xie, Huawei wrote: >> Hi: >> >> Recently I have done one virtio optimization proof of concept. The >> optimization includes two parts: >> 1) avail ring set with fixed descriptors >> 2) RX vectorization >> With the optimizations, we could have several times of performance boost >> for purely vhost-virtio throughput. When you check performance, have you optimized only virtio-net driver? If so, can we optimize vhost backend(librte_vhost) also using your optimization way? >> >> Here i will only cover the first part, which is the prerequisite for the >> second part. >> Let us first take RX for example. Currently when we fill the avail ring >> with guest mbuf, we need >> a) allocate one descriptor(for non sg mbuf) from free descriptors >> b) set the idx of the desc into the entry of avail ring >> c) set the addr/len field of the descriptor to point to guest blank mbuf >> data area >> >> Those operation takes time, and especially step b results in modifed (M) >> state of the cache line for the avail ring in the virtio processing >> core. When vhost processes the avail ring, the cache line transfer from >> virtio processing core to vhost processing core takes pretty much CPU >> cycles. >> To solve this problem, this is the arrangement of RX ring for DPDK >> pmd(for non-mergable case). >> >> avail >> idx >> + >> | >> +----+----+---+-------------+------+ >> | 0 | 1 | 2 | ... | 254 | 255 | avail ring >> +-+--+-+--+-+-+---------+---+--+---+ >> | | | | | | >> | | | | | | >> v v v | v v >> +-+--+-+--+-+-+---------+---+--+---+ >> | 0 | 1 | 2 | ... | 254 | 255 | desc ring >> +----+----+---+-------------+------+ >> | >> | >> +----+----+---+-------------+------+ >> | 0 | 1 | 2 | | 254 | 255 | used ring >> +----+----+---+-------------+------+ >> | >> + >> Avail ring is initialized with fixed descriptor and is never changed, >> i.e, the index value of the nth avail ring entry is always n, which >> means virtio PMD is actually refilling desc ring only, without having to >> change avail ring. For example, avail ring is like below. struct vring_avail { uint16_t flags; uint16_t idx; uint16_t ring[QUEUE_SIZE]; }; My understanding is that virtio-net driver still needs to change avail_ring.idx, but don't need to change avail_ring.ring[]. Is this correct? Tetsuya >> When vhost fetches avail ring, if not evicted, it is always in its first >> level cache. >> >> When RX receives packets from used ring, we use the used->idx as the >> desc idx. This requires that vhost processes and returns descs from >> avail ring to used ring in order, which is true for both current dpdk >> vhost and kernel vhost implementation. In my understanding, there is no >> necessity for vhost net to process descriptors OOO. One case could be >> zero copy, for example, if one descriptor doesn't meet zero copy >> requirment, we could directly return it to used ring, earlier than the >> descriptors in front of it. 
>> To enforce this, i want to use a reserved bit to indicate in order >> processing of descriptors. >> >> For tx ring, the arrangement is like below. Each transmitted mbuf needs >> a desc for virtio_net_hdr, so actually we have only 128 free slots. >> >> >> >> ++ >> || >> || >> +-----+-----+-----+--------------+------+------+------+ >> | 0 | 1 | ... | 127 || 128 | 129 | ... | 255 | avail ring >> +--+--+--+--+-----+---+------+---+--+---+------+--+---+ >> | | | || | | | >> v v v || v v v >> +--+--+--+--+-----+---+------+---+--+---+------+--+---+ >> | 127 | 128 | ... | 255 || 127 | 128 | ... | 255 | desc ring for virtio_net_hdr >> +--+--+--+--+-----+---+------+---+--+---+------+--+---+ >> | | | || | | | >> v v v || v v v >> +--+--+--+--+-----+---+------+---+--+---+------+--+---+ >> | 0 | 1 | ... | 127 || 0 | 1 | ... | 127 | desc ring for tx dat >> >> >> >> /huawei >> ^ permalink raw reply [flat|nested] 11+ messages in thread
* Re: [dpdk-dev] virtio optimization idea 2015-09-08 8:21 ` Tetsuya Mukawa @ 2015-09-08 9:42 ` Xie, Huawei 0 siblings, 0 replies; 11+ messages in thread From: Xie, Huawei @ 2015-09-08 9:42 UTC (permalink / raw) To: Tetsuya Mukawa, dev, Thomas Monjalon, Linhaifeng Cc: ms >> Michael S. Tsirkin On 9/8/2015 4:21 PM, Tetsuya Mukawa wrote: > On 2015/09/05 1:50, Xie, Huawei wrote: >> There is some format issue with the ascii chart of the tx ring. Update >> that chart. >> Sorry for the trouble. > Hi XIe, > > Thanks for sharing a way to optimize virtio. > I have a few questions. > >> On 9/4/2015 4:25 PM, Xie, Huawei wrote: >>> Hi: >>> >>> Recently I have done one virtio optimization proof of concept. The >>> optimization includes two parts: >>> 1) avail ring set with fixed descriptors >>> 2) RX vectorization >>> With the optimizations, we could have several times of performance boost >>> for purely vhost-virtio throughput. > When you check performance, have you optimized only virtio-net driver? > If so, can we optimize vhost backend(librte_vhost) also using your > optimization way? We could do some optimization to vhost based on the same vring layout, but as vhost needs to support legacy virtio as well, it couldn't make this assumption. >>> Here i will only cover the first part, which is the prerequisite for the >>> second part. >>> Let us first take RX for example. Currently when we fill the avail ring >>> with guest mbuf, we need >>> a) allocate one descriptor(for non sg mbuf) from free descriptors >>> b) set the idx of the desc into the entry of avail ring >>> c) set the addr/len field of the descriptor to point to guest blank mbuf >>> data area >>> >>> Those operation takes time, and especially step b results in modifed (M) >>> state of the cache line for the avail ring in the virtio processing >>> core. When vhost processes the avail ring, the cache line transfer from >>> virtio processing core to vhost processing core takes pretty much CPU >>> cycles. >>> To solve this problem, this is the arrangement of RX ring for DPDK >>> pmd(for non-mergable case). >>> >>> avail >>> idx >>> + >>> | >>> +----+----+---+-------------+------+ >>> | 0 | 1 | 2 | ... | 254 | 255 | avail ring >>> +-+--+-+--+-+-+---------+---+--+---+ >>> | | | | | | >>> | | | | | | >>> v v v | v v >>> +-+--+-+--+-+-+---------+---+--+---+ >>> | 0 | 1 | 2 | ... | 254 | 255 | desc ring >>> +----+----+---+-------------+------+ >>> | >>> | >>> +----+----+---+-------------+------+ >>> | 0 | 1 | 2 | | 254 | 255 | used ring >>> +----+----+---+-------------+------+ >>> | >>> + >>> Avail ring is initialized with fixed descriptor and is never changed, >>> i.e, the index value of the nth avail ring entry is always n, which >>> means virtio PMD is actually refilling desc ring only, without having to >>> change avail ring. > For example, avail ring is like below. > struct vring_avail { > uint16_t flags; > uint16_t idx; > uint16_t ring[QUEUE_SIZE]; > }; > > My understanding is that virtio-net driver still needs to change > avail_ring.idx, but don't need to change avail_ring.ring[]. > Is this correct? Yes, avail ring is initialized once and never gets updated. It is like virtio frontend is only using descriptor ring. > > Tetsuya > >>> When vhost fetches avail ring, if not evicted, it is always in its first >>> level cache. >>> >>> When RX receives packets from used ring, we use the used->idx as the >>> desc idx. 
This requires that vhost processes and returns descs from >>> avail ring to used ring in order, which is true for both current dpdk >>> vhost and kernel vhost implementation. In my understanding, there is no >>> necessity for vhost net to process descriptors OOO. One case could be >>> zero copy, for example, if one descriptor doesn't meet zero copy >>> requirment, we could directly return it to used ring, earlier than the >>> descriptors in front of it. >>> To enforce this, i want to use a reserved bit to indicate in order >>> processing of descriptors. >>> >>> For tx ring, the arrangement is like below. Each transmitted mbuf needs >>> a desc for virtio_net_hdr, so actually we have only 128 free slots. >>> >>> >>> >>> ++ >>> || >>> || >>> +-----+-----+-----+--------------+------+------+------+ >>> | 0 | 1 | ... | 127 || 128 | 129 | ... | 255 | avail ring >>> +--+--+--+--+-----+---+------+---+--+---+------+--+---+ >>> | | | || | | | >>> v v v || v v v >>> +--+--+--+--+-----+---+------+---+--+---+------+--+---+ >>> | 127 | 128 | ... | 255 || 127 | 128 | ... | 255 | desc ring for virtio_net_hdr >>> +--+--+--+--+-----+---+------+---+--+---+------+--+---+ >>> | | | || | | | >>> v v v || v v v >>> +--+--+--+--+-----+---+------+---+--+---+------+--+---+ >>> | 0 | 1 | ... | 127 || 0 | 1 | ... | 127 | desc ring for tx dat >>> >>> >>> >>> /huawei >>> > ^ permalink raw reply [flat|nested] 11+ messages in thread
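A minimal sketch of what "initialized once and never gets updated" means for
the struct Tetsuya quoted. QUEUE_SIZE is assumed to be 256 and a power of two,
and the function name is illustrative.

#include <stdint.h>

#define QUEUE_SIZE 256                  /* assumed ring size */

struct vring_avail {
        uint16_t flags;
        uint16_t idx;
        uint16_t ring[QUEUE_SIZE];
};

/* Done once at queue setup: after this, the RX path only advances idx. */
static void avail_ring_init(struct vring_avail *avail)
{
        avail->flags = 0;
        avail->idx = 0;
        for (uint16_t n = 0; n < QUEUE_SIZE; n++)
                avail->ring[n] = n;     /* fixed identity mapping, never rewritten */
}

/* Per refill burst, only:  avail->idx += nb_refilled;  (plus a write barrier) */

So the answer above holds: avail->idx still moves every burst, but
avail->ring[] stays read-only after setup, which is what keeps its cache line
shareable between the virtio and vhost cores.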
* Re: [dpdk-dev] virtio optimization idea 2015-09-04 8:25 [dpdk-dev] virtio optimization idea Xie, Huawei 2015-09-04 16:50 ` Xie, Huawei @ 2015-09-08 15:39 ` Stephen Hemminger 2015-09-08 15:52 ` Xie, Huawei 2015-09-09 7:33 ` Michael S. Tsirkin 2 siblings, 1 reply; 11+ messages in thread From: Stephen Hemminger @ 2015-09-08 15:39 UTC (permalink / raw) To: Xie, Huawei; +Cc: dev, ms >> Michael S. Tsirkin On Fri, 4 Sep 2015 08:25:05 +0000 "Xie, Huawei" <huawei.xie@intel.com> wrote: > Hi: > > Recently I have done one virtio optimization proof of concept. The > optimization includes two parts: > 1) avail ring set with fixed descriptors > 2) RX vectorization > With the optimizations, we could have several times of performance boost > for purely vhost-virtio throughput. > > Here i will only cover the first part, which is the prerequisite for the > second part. > Let us first take RX for example. Currently when we fill the avail ring > with guest mbuf, we need > a) allocate one descriptor(for non sg mbuf) from free descriptors > b) set the idx of the desc into the entry of avail ring > c) set the addr/len field of the descriptor to point to guest blank mbuf > data area > > Those operation takes time, and especially step b results in modifed (M) > state of the cache line for the avail ring in the virtio processing > core. When vhost processes the avail ring, the cache line transfer from > virtio processing core to vhost processing core takes pretty much CPU > cycles. > To solve this problem, this is the arrangement of RX ring for DPDK > pmd(for non-mergable case). > > avail > idx > + > | > +----+----+---+-------------+------+ > | 0 | 1 | 2 | ... | 254 | 255 | avail ring > +-+--+-+--+-+-+---------+---+--+---+ > | | | | | | > | | | | | | > v v v | v v > +-+--+-+--+-+-+---------+---+--+---+ > | 0 | 1 | 2 | ... | 254 | 255 | desc ring > +----+----+---+-------------+------+ > | > | > +----+----+---+-------------+------+ > | 0 | 1 | 2 | | 254 | 255 | used ring > +----+----+---+-------------+------+ > | > + > Avail ring is initialized with fixed descriptor and is never changed, > i.e, the index value of the nth avail ring entry is always n, which > means virtio PMD is actually refilling desc ring only, without having to > change avail ring. > When vhost fetches avail ring, if not evicted, it is always in its first > level cache. > > When RX receives packets from used ring, we use the used->idx as the > desc idx. This requires that vhost processes and returns descs from > avail ring to used ring in order, which is true for both current dpdk > vhost and kernel vhost implementation. In my understanding, there is no > necessity for vhost net to process descriptors OOO. One case could be > zero copy, for example, if one descriptor doesn't meet zero copy > requirment, we could directly return it to used ring, earlier than the > descriptors in front of it. > To enforce this, i want to use a reserved bit to indicate in order > processing of descriptors. > > For tx ring, the arrangement is like below. Each transmitted mbuf needs > a desc for virtio_net_hdr, so actually we have only 128 free slots. > > > > ++ > > || > > || > > +-----+-----+-----+--------------+------+------+------+ > > | 0 | 1 | ... | 127 || 128 | 129 | ... | 255 | avail ring > with fixed descriptor > > +--+--+--+--+-----+---+------+---+--+---+------+--+---+ > > | | | || | | > | > v v v || v v > v > > +--+--+--+--+-----+---+------+---+--+---+------+--+---+ > > | 127 | 128 | ... | 255 || 127 | 128 | ... 
| 255 | desc ring > for virtio_net_hdr > > +--+--+--+--+-----+---+------+---+--+---+------+--+---+ > > | | | || | | > | > v v v || v v > v > > +--+--+--+--+-----+---+------+---+--+---+------+--+---+ > > | 0 | 1 | ... | 127 || 0 | 1 | ... | 127 | desc ring > for tx dat > > +-----+-----+-----+--------------+------+------+------+ > Does this still work with Linux (or BSD) guest/host. If you are assuming both virtio/vhost are DPDK this is never going to be usable. On a related note, have you looked at getting virtio to support the new standard (not legacy) mode? ^ permalink raw reply [flat|nested] 11+ messages in thread
* Re: [dpdk-dev] virtio optimization idea 2015-09-08 15:39 ` Stephen Hemminger @ 2015-09-08 15:52 ` Xie, Huawei 2015-09-17 15:41 ` Xie, Huawei 0 siblings, 1 reply; 11+ messages in thread From: Xie, Huawei @ 2015-09-08 15:52 UTC (permalink / raw) To: Stephen Hemminger; +Cc: dev, ms >> Michael S. Tsirkin On 9/8/2015 11:39 PM, Stephen Hemminger wrote: > On Fri, 4 Sep 2015 08:25:05 +0000 > "Xie, Huawei" <huawei.xie@intel.com> wrote: > >> Hi: >> >> Recently I have done one virtio optimization proof of concept. The >> optimization includes two parts: >> 1) avail ring set with fixed descriptors >> 2) RX vectorization >> With the optimizations, we could have several times of performance boost >> for purely vhost-virtio throughput. >> >> Here i will only cover the first part, which is the prerequisite for the >> second part. >> Let us first take RX for example. Currently when we fill the avail ring >> with guest mbuf, we need >> a) allocate one descriptor(for non sg mbuf) from free descriptors >> b) set the idx of the desc into the entry of avail ring >> c) set the addr/len field of the descriptor to point to guest blank mbuf >> data area >> >> Those operation takes time, and especially step b results in modifed (M) >> state of the cache line for the avail ring in the virtio processing >> core. When vhost processes the avail ring, the cache line transfer from >> virtio processing core to vhost processing core takes pretty much CPU >> cycles. >> To solve this problem, this is the arrangement of RX ring for DPDK >> pmd(for non-mergable case). >> >> avail >> idx >> + >> | >> +----+----+---+-------------+------+ >> | 0 | 1 | 2 | ... | 254 | 255 | avail ring >> +-+--+-+--+-+-+---------+---+--+---+ >> | | | | | | >> | | | | | | >> v v v | v v >> +-+--+-+--+-+-+---------+---+--+---+ >> | 0 | 1 | 2 | ... | 254 | 255 | desc ring >> +----+----+---+-------------+------+ >> | >> | >> +----+----+---+-------------+------+ >> | 0 | 1 | 2 | | 254 | 255 | used ring >> +----+----+---+-------------+------+ >> | >> + >> Avail ring is initialized with fixed descriptor and is never changed, >> i.e, the index value of the nth avail ring entry is always n, which >> means virtio PMD is actually refilling desc ring only, without having to >> change avail ring. >> When vhost fetches avail ring, if not evicted, it is always in its first >> level cache. >> >> When RX receives packets from used ring, we use the used->idx as the >> desc idx. This requires that vhost processes and returns descs from >> avail ring to used ring in order, which is true for both current dpdk >> vhost and kernel vhost implementation. In my understanding, there is no >> necessity for vhost net to process descriptors OOO. One case could be >> zero copy, for example, if one descriptor doesn't meet zero copy >> requirment, we could directly return it to used ring, earlier than the >> descriptors in front of it. >> To enforce this, i want to use a reserved bit to indicate in order >> processing of descriptors. >> >> For tx ring, the arrangement is like below. Each transmitted mbuf needs >> a desc for virtio_net_hdr, so actually we have only 128 free slots. >> >> >> >> ++ >> >> || >> >> || >> >> +-----+-----+-----+--------------+------+------+------+ >> >> | 0 | 1 | ... | 127 || 128 | 129 | ... | 255 | avail ring >> with fixed descriptor >> >> +--+--+--+--+-----+---+------+---+--+---+------+--+---+ >> >> | | | || | | >> | >> v v v || v v >> v >> >> +--+--+--+--+-----+---+------+---+--+---+------+--+---+ >> >> | 127 | 128 | ... | 255 || 127 | 128 | ... 
| 255 | desc ring >> for virtio_net_hdr >> >> +--+--+--+--+-----+---+------+---+--+---+------+--+---+ >> >> | | | || | | >> | >> v v v || v v >> v >> >> +--+--+--+--+-----+---+------+---+--+---+------+--+---+ >> >> | 0 | 1 | ... | 127 || 0 | 1 | ... | 127 | desc ring >> for tx dat >> >> +-----+-----+-----+--------------+------+------+------+ >> > Does this still work with Linux (or BSD) guest/host. > If you are assuming both virtio/vhost are DPDK this is never going > to be usable. It works with both dpdk vhost and kernel vhost implementations. But to enforce this, we had better add a new feature bit. > > On a related note, have you looked at getting virtio to support the > new standard (not legacy) mode? Yes, we add it to our plan to support virtio 1.0. > > ^ permalink raw reply [flat|nested] 11+ messages in thread
* Re: [dpdk-dev] virtio optimization idea 2015-09-08 15:52 ` Xie, Huawei @ 2015-09-17 15:41 ` Xie, Huawei 0 siblings, 0 replies; 11+ messages in thread From: Xie, Huawei @ 2015-09-17 15:41 UTC (permalink / raw) To: Stephen Hemminger; +Cc: dev, virtualization, ms >> Michael S. Tsirkin On 9/8/2015 11:54 PM, Xie, Huawei wrote: > On 9/8/2015 11:39 PM, Stephen Hemminger wrote: >> On Fri, 4 Sep 2015 08:25:05 +0000 >> "Xie, Huawei" <huawei.xie@intel.com> wrote: >> >>> Hi: >>> >>> Recently I have done one virtio optimization proof of concept. The >>> optimization includes two parts: >>> 1) avail ring set with fixed descriptors >>> 2) RX vectorization >>> With the optimizations, we could have several times of performance boost >>> for purely vhost-virtio throughput. >>> >>> Here i will only cover the first part, which is the prerequisite for the >>> second part. >>> Let us first take RX for example. Currently when we fill the avail ring >>> with guest mbuf, we need >>> a) allocate one descriptor(for non sg mbuf) from free descriptors >>> b) set the idx of the desc into the entry of avail ring >>> c) set the addr/len field of the descriptor to point to guest blank mbuf >>> data area >>> >>> Those operation takes time, and especially step b results in modifed (M) >>> state of the cache line for the avail ring in the virtio processing >>> core. When vhost processes the avail ring, the cache line transfer from >>> virtio processing core to vhost processing core takes pretty much CPU >>> cycles. >>> To solve this problem, this is the arrangement of RX ring for DPDK >>> pmd(for non-mergable case). >>> >>> avail >>> idx >>> + >>> | >>> +----+----+---+-------------+------+ >>> | 0 | 1 | 2 | ... | 254 | 255 | avail ring >>> +-+--+-+--+-+-+---------+---+--+---+ >>> | | | | | | >>> | | | | | | >>> v v v | v v >>> +-+--+-+--+-+-+---------+---+--+---+ >>> | 0 | 1 | 2 | ... | 254 | 255 | desc ring >>> +----+----+---+-------------+------+ >>> | >>> | >>> +----+----+---+-------------+------+ >>> | 0 | 1 | 2 | | 254 | 255 | used ring >>> +----+----+---+-------------+------+ >>> | >>> + >>> Avail ring is initialized with fixed descriptor and is never changed, >>> i.e, the index value of the nth avail ring entry is always n, which >>> means virtio PMD is actually refilling desc ring only, without having to >>> change avail ring. >>> When vhost fetches avail ring, if not evicted, it is always in its first >>> level cache. >>> >>> When RX receives packets from used ring, we use the used->idx as the >>> desc idx. This requires that vhost processes and returns descs from >>> avail ring to used ring in order, which is true for both current dpdk >>> vhost and kernel vhost implementation. In my understanding, there is no >>> necessity for vhost net to process descriptors OOO. One case could be >>> zero copy, for example, if one descriptor doesn't meet zero copy >>> requirment, we could directly return it to used ring, earlier than the >>> descriptors in front of it. >>> To enforce this, i want to use a reserved bit to indicate in order >>> processing of descriptors. >>> >>> For tx ring, the arrangement is like below. Each transmitted mbuf needs >>> a desc for virtio_net_hdr, so actually we have only 128 free slots. >>> >>> >>> ++ || || +-----+-----+-----+--------------+------+------+------+ | 0 | 1 | ... | 127 || 128 | 129 | ... | 255 | avail ring +--+--+--+--+-----+---+------+---+--+---+------+--+---+ | | | || | | | v v v || v v v +--+--+--+--+-----+---+------+---+--+---+------+--+---+ | 127 | 128 | ... 
| 255 || 127 | 128 | ... | 255 | desc ring for virtio_net_hdr +--+--+--+--+-----+---+------+---+--+---+------+--+---+ | | | || | | | v v v || v v v +--+--+--+--+-----+---+------+---+--+---+------+--+---+ | 0 | 1 | ... | 127 || 0 | 1 | ... | 127 | desc ring for tx dat >>> >>> >> Does this still work with Linux (or BSD) guest/host. >> If you are assuming both virtio/vhost are DPDK this is never going >> to be usable. > It works with both dpdk vhost and kernel vhost implementations. > But to enforce this, we had better add a new feature bit. Hi Stephen, some update about compatibility: This optimization in theory is compliant with current kernel vhost, qemu, and dpdk vhost implementations. Today i run dpdk virtio PMD with qemu and kernel vhost, and it works fine. >> On a related note, have you looked at getting virtio to support the >> new standard (not legacy) mode? > Yes, we add it to our plan to support virtio 1.0. >> > ^ permalink raw reply [flat|nested] 11+ messages in thread
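Since the compatibility claim above rests entirely on backends completing
descriptors in order, here is a hedged sketch of the TX completion side the
optimized PMD could use: it frees transmitted mbufs purely from used->idx
(used->idx & 127, as described later in the thread), without reading
used->ring[].id. The structure, helper names and barrier comment are
assumptions for illustration only.

#include <stdint.h>

#define TX_SLOTS 128                    /* 128 in-flight TX slots, as in the thread */

struct tx_queue {
        volatile uint16_t *used_idx;    /* &used->idx, advanced by the backend */
        uint16_t last_used;             /* last used->idx value already consumed */
        void *mbufs[TX_SLOTS];          /* in-flight mbufs, indexed by data desc */
};

extern void free_mbuf(void *mbuf);      /* hypothetical mbuf free helper */

/* In-order reclaim: valid only if the backend completes descriptors in the
 * order they were made available, which the thread notes is true today for
 * dpdk vhost, kernel vhost and qemu, and which the proposed reserved feature
 * bit would make explicit. */
static void tx_reclaim(struct tx_queue *q)
{
        uint16_t used = *q->used_idx;   /* a read barrier may be needed here */

        while (q->last_used != used) {
                uint16_t slot = q->last_used & (TX_SLOTS - 1);

                free_mbuf(q->mbufs[slot]);
                q->mbufs[slot] = NULL;
                q->last_used++;
        }
}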
* Re: [dpdk-dev] virtio optimization idea 2015-09-04 8:25 [dpdk-dev] virtio optimization idea Xie, Huawei 2015-09-04 16:50 ` Xie, Huawei 2015-09-08 15:39 ` Stephen Hemminger @ 2015-09-09 7:33 ` Michael S. Tsirkin 2015-09-10 6:32 ` Xie, Huawei 2 siblings, 1 reply; 11+ messages in thread From: Michael S. Tsirkin @ 2015-09-09 7:33 UTC (permalink / raw) To: Xie, Huawei; +Cc: dev, virtualization On Fri, Sep 04, 2015 at 08:25:05AM +0000, Xie, Huawei wrote: > Hi: > > Recently I have done one virtio optimization proof of concept. The > optimization includes two parts: > 1) avail ring set with fixed descriptors > 2) RX vectorization > With the optimizations, we could have several times of performance boost > for purely vhost-virtio throughput. Thanks! I'm very happy to see people work on the virtio ring format optimizations. I think it's best to analyze each optimization separately, unless you see a reason why they would only give benefit when applied together. Also ideally, we'd need a unit test to show the performance impact. We've been using the tests in tools/virtio/ under linux, feel free to enhance these to simulate more workloads, or to suggest something else entirely. > Here i will only cover the first part, which is the prerequisite for the > second part. > Let us first take RX for example. Currently when we fill the avail ring > with guest mbuf, we need > a) allocate one descriptor(for non sg mbuf) from free descriptors > b) set the idx of the desc into the entry of avail ring > c) set the addr/len field of the descriptor to point to guest blank mbuf > data area > > Those operation takes time, and especially step b results in modifed (M) > state of the cache line for the avail ring in the virtio processing > core. When vhost processes the avail ring, the cache line transfer from > virtio processing core to vhost processing core takes pretty much CPU > cycles. > To solve this problem, this is the arrangement of RX ring for DPDK > pmd(for non-mergable case). > > avail > idx > + > | > +----+----+---+-------------+------+ > | 0 | 1 | 2 | ... | 254 | 255 | avail ring > +-+--+-+--+-+-+---------+---+--+---+ > | | | | | | > | | | | | | > v v v | v v > +-+--+-+--+-+-+---------+---+--+---+ > | 0 | 1 | 2 | ... | 254 | 255 | desc ring > +----+----+---+-------------+------+ > | > | > +----+----+---+-------------+------+ > | 0 | 1 | 2 | | 254 | 255 | used ring > +----+----+---+-------------+------+ > | > + > Avail ring is initialized with fixed descriptor and is never changed, > i.e, the index value of the nth avail ring entry is always n, which > means virtio PMD is actually refilling desc ring only, without having to > change avail ring. > When vhost fetches avail ring, if not evicted, it is always in its first > level cache. > > When RX receives packets from used ring, we use the used->idx as the > desc idx. This requires that vhost processes and returns descs from > avail ring to used ring in order, which is true for both current dpdk > vhost and kernel vhost implementation. In my understanding, there is no > necessity for vhost net to process descriptors OOO. One case could be > zero copy, for example, if one descriptor doesn't meet zero copy > requirment, we could directly return it to used ring, earlier than the > descriptors in front of it. > To enforce this, i want to use a reserved bit to indicate in order > processing of descriptors. So what's the point in changing the idx for the used ring? You need to communicate the length to the guest anyway, don't you? 
> For tx ring, the arrangement is like below. Each transmitted mbuf needs > a desc for virtio_net_hdr, so actually we have only 128 free slots. Just fix this one. Support ANY_LAYOUT and then you can put data linearly. And/or support INDIRECT_DESC and then you can use an indirect descriptor. > > > ++ > > || > > || > > +-----+-----+-----+--------------+------+------+------+ > > | 0 | 1 | ... | 127 || 128 | 129 | ... | 255 | avail ring > with fixed descriptor > > +--+--+--+--+-----+---+------+---+--+---+------+--+---+ > > | | | || | | > | > v v v || v v > v > > +--+--+--+--+-----+---+------+---+--+---+------+--+---+ > > | 127 | 128 | ... | 255 || 127 | 128 | ... | 255 | desc ring > for virtio_net_hdr > > +--+--+--+--+-----+---+------+---+--+---+------+--+---+ > > | | | || | | > | > v v v || v v > v > > +--+--+--+--+-----+---+------+---+--+---+------+--+---+ > > | 0 | 1 | ... | 127 || 0 | 1 | ... | 127 | desc ring > for tx dat > > +-----+-----+-----+--------------+------+------+------+ > This one came out corrupted. > > /huawei Please Cc virtio related discussion more widely. I added the virtualization mailing list. So what you want to do is avoid changing the avail ring, isn't it enough to pre-format it and cache the values in the guest? Host can then keep using avail ring without changes, it will stay in cache. Something like the below for guest should do the trick (untested): Signed-off-by: Michael S. Tsirkin <mst@redhat.com> diff --git a/drivers/virtio/virtio_ring.c b/drivers/virtio/virtio_ring.c index 096b857..9363b50 100644 --- a/drivers/virtio/virtio_ring.c +++ b/drivers/virtio/virtio_ring.c @@ -91,6 +91,7 @@ struct vring_virtqueue { bool last_add_time_valid; ktime_t last_add_time; #endif + u16 *avail; /* Tokens for callbacks. */ void *data[]; @@ -236,7 +237,10 @@ static inline int virtqueue_add(struct virtqueue *_vq, /* Put entry in available array (but don't update avail->idx until they * do sync). */ avail = virtio16_to_cpu(_vq->vdev, vq->vring.avail->idx) & (vq->vring.num - 1); - vq->vring.avail->ring[avail] = cpu_to_virtio16(_vq->vdev, head); + if (vq->avail[avail] != head) { + vq->avail[avail] = head; + vq->vring.avail->ring[avail] = cpu_to_virtio16(_vq->vdev, head); + } /* Descriptors and available array need to be set before we expose the * new available array entries. */ @@ -724,6 +728,11 @@ struct virtqueue *vring_new_virtqueue(unsigned int index, vq = kmalloc(sizeof(*vq) + sizeof(void *)*num, GFP_KERNEL); if (!vq) return NULL; + vq->avail = kzalloc(sizeof (*vq->avail) * num, GFP_KERNEL); + if (!va->avail) { + kfree(vq); + return NULL; + } vring_init(&vq->vring, num, pages, vring_align); vq->vq.callback = callback; -- MST ^ permalink raw reply [flat|nested] 11+ messages in thread
* Re: [dpdk-dev] virtio optimization idea 2015-09-09 7:33 ` Michael S. Tsirkin @ 2015-09-10 6:32 ` Xie, Huawei 2015-09-10 7:20 ` Michael S. Tsirkin 0 siblings, 1 reply; 11+ messages in thread From: Xie, Huawei @ 2015-09-10 6:32 UTC (permalink / raw) To: Michael S. Tsirkin; +Cc: dev, virtualization On 9/9/2015 3:34 PM, Michael S. Tsirkin wrote: > On Fri, Sep 04, 2015 at 08:25:05AM +0000, Xie, Huawei wrote: >> Hi: >> >> Recently I have done one virtio optimization proof of concept. The >> optimization includes two parts: >> 1) avail ring set with fixed descriptors >> 2) RX vectorization >> With the optimizations, we could have several times of performance boost >> for purely vhost-virtio throughput. > Thanks! > I'm very happy to see people work on the virtio ring format > optimizations. > > I think it's best to analyze each optimization separately, > unless you see a reason why they would only give benefit when applied > together. Agree. Will split the patch to see each change's benefit. Of course it is also very common two give much more gain than the sum of individual. > > Also ideally, we'd need a unit test to show the performance impact. > We've been using the tests in tools/virtio/ under linux, > feel free to enhance these to simulate more workloads, or > to suggest something else entirely. If possible, we would provide perf test case under tools/virtio. I am interested if the optimization could help kernel virtio-net driver performance, if not the other is the bottleneck. > > Avail ring is initialized with fixed descriptor and is never changed, > i.e, the index value of the nth avail ring entry is always n, which > means virtio PMD is actually refilling desc ring only, without having to > change avail ring. > When vhost fetches avail ring, if not evicted, it is always in its first > level cache. > > When RX receives packets from used ring, we use the used->idx as the > desc idx. This requires that vhost processes and returns descs from > avail ring to used ring in order, which is true for both current dpdk > vhost and kernel vhost implementation. In my understanding, there is no > necessity for vhost net to process descriptors OOO. One case could be > zero copy, for example, if one descriptor doesn't meet zero copy > requirment, we could directly return it to used ring, earlier than the > descriptors in front of it. > To enforce this, i want to use a reserved bit to indicate in order > processing of descriptors. > So what's the point in changing the idx for the used ring? > You need to communicate the length to the guest anyway, don't you? For guest virtio driver, we only use the length field in the entry of the used ring and don't use the index in the entry. Instead, use used->idx & 255 as the desc idx for RX, and used->idx & 127 as the desc idx for TX. For vhost driver, as it couldn't assume fixed ring layout(to support legacy virtio), it needs to update the idx in the used ring entry. > >> For tx ring, the arrangement is like below. Each transmitted mbuf needs >> a desc for virtio_net_hdr, so actually we have only 128 free slots. > Just fix this one. Support ANY_LAYOUT and then you can put data > linearly. And/or support INDIRECT_DESC and then you can > use an indirect descriptor. Would check those two features. > >> >> > This one came out corrupted. Actually i ever replied to the original mail and fixed it. Copy it here again. ++ || || +-----+-----+-----+--------------+------+------+------+ | 0 | 1 | ... | 127 || 128 | 129 | ... 
| 255 | avail ring +--+--+--+--+-----+---+------+---+--+---+------+--+---+ | | | || | | | v v v || v v v +--+--+--+--+-----+---+------+---+--+---+------+--+---+ | 127 | 128 | ... | 255 || 127 | 128 | ... | 255 | desc ring for virtio_net_hdr +--+--+--+--+-----+---+------+---+--+---+------+--+---+ | | | || | | | v v v || v v v +--+--+--+--+-----+---+------+---+--+---+------+--+---+ | 0 | 1 | ... | 127 || 0 | 1 | ... | 127 | desc ring for tx dat >> >> /huawei > > Please Cc virtio related discussion more widely. > I added the virtualization mailing list. > > > So what you want to do is avoid changing the avail > ring, isn't it enough to pre-format it and cache > the values in the guest? > > Host can then keep using avail ring without changes, it will stay in cache. > Something like the below for guest should do the trick (untested): Good optimization to tackle the cache line transfer issue. Your change could avoid changing avail ring if it points to same head index. It could improve kernel virtio-net driver's performance if the avail ring doesn't change in running. We would also investigating applying this idea to DPDK virtio PMD slow path. For the optimization i did, which i call fast path, the driver "knows" in theory the avail ring willn't never get changed, so it doesn't a) allocate and free descriptors b) care the avail ring. Based on the fact the DPDK pmd is actually using the simple desc ring below, we could directly map avail->idx and used->idx to desc idx, and use vector instruction to do parallel processing. +-+--+-+--+-+-+---------+---+--+---+ | 0 | 1 | 2 | ... | 254 | 255 | rx desc ring +----+----+---+-------------+------+ > > Signed-off-by: Michael S. Tsirkin <mst@redhat.com> > > diff --git a/drivers/virtio/virtio_ring.c b/drivers/virtio/virtio_ring.c > index 096b857..9363b50 100644 > --- a/drivers/virtio/virtio_ring.c > +++ b/drivers/virtio/virtio_ring.c > @@ -91,6 +91,7 @@ struct vring_virtqueue { > bool last_add_time_valid; > ktime_t last_add_time; > #endif > + u16 *avail; > > /* Tokens for callbacks. */ > void *data[]; > @@ -236,7 +237,10 @@ static inline int virtqueue_add(struct virtqueue *_vq, > /* Put entry in available array (but don't update avail->idx until they > * do sync). */ > avail = virtio16_to_cpu(_vq->vdev, vq->vring.avail->idx) & (vq->vring.num - 1); > - vq->vring.avail->ring[avail] = cpu_to_virtio16(_vq->vdev, head); > + if (vq->avail[avail] != head) { > + vq->avail[avail] = head; > + vq->vring.avail->ring[avail] = cpu_to_virtio16(_vq->vdev, head); > + } > > /* Descriptors and available array need to be set before we expose the > * new available array entries. */ > @@ -724,6 +728,11 @@ struct virtqueue *vring_new_virtqueue(unsigned int index, > vq = kmalloc(sizeof(*vq) + sizeof(void *)*num, GFP_KERNEL); > if (!vq) > return NULL; > + vq->avail = kzalloc(sizeof (*vq->avail) * num, GFP_KERNEL); > + if (!va->avail) { > + kfree(vq); > + return NULL; > + } > > vring_init(&vq->vring, num, pages, vring_align); > vq->vq.callback = callback; > ^ permalink raw reply [flat|nested] 11+ messages in thread
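To illustrate the RX "fast path" described above, below is a hedged sketch of
the receive side: the driver walks used->idx, maps it directly to the fixed
descriptor index (used->idx & 255) and reads only the len field of each used
entry. All names are assumptions; the real PMD would additionally strip the
virtio_net_hdr and could process several entries in parallel with vector
instructions, as mentioned in the thread.

#include <stdint.h>

#define RING_SIZE 256                   /* assumed RX ring size */

struct vring_used_elem {
        uint32_t id;                    /* not read on the fast path */
        uint32_t len;
};

struct rx_queue {
        volatile uint16_t *used_idx;    /* &used->idx, advanced by vhost */
        struct vring_used_elem *used;   /* used->ring[] */
        uint16_t last_used;             /* last used->idx value already consumed */
        void *mbufs[RING_SIZE];         /* fixed slot -> guest mbuf */
};

extern void set_pkt_len(void *mbuf, uint32_t len);  /* hypothetical helper */

/* Receive up to 'max' packets. Because descriptors complete in order and
 * avail->ring[n] == n, the running used count itself identifies the
 * descriptor: slot = count & (RING_SIZE - 1). */
static uint16_t rx_recv(struct rx_queue *q, void **pkts, uint16_t max)
{
        uint16_t used = *q->used_idx;   /* a read barrier may be needed here */
        uint16_t n = 0;

        while (q->last_used != used && n < max) {
                uint16_t slot = q->last_used & (RING_SIZE - 1);

                pkts[n] = q->mbufs[slot];
                set_pkt_len(pkts[n], q->used[slot].len);
                q->last_used++;
                n++;
        }
        return n;                       /* caller then refills n fresh mbufs */
}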
* Re: [dpdk-dev] virtio optimization idea 2015-09-10 6:32 ` Xie, Huawei @ 2015-09-10 7:20 ` Michael S. Tsirkin 2015-09-14 3:08 ` Xie, Huawei 0 siblings, 1 reply; 11+ messages in thread From: Michael S. Tsirkin @ 2015-09-10 7:20 UTC (permalink / raw) To: Xie, Huawei; +Cc: dev, virtualization On Thu, Sep 10, 2015 at 06:32:35AM +0000, Xie, Huawei wrote: > On 9/9/2015 3:34 PM, Michael S. Tsirkin wrote: > > On Fri, Sep 04, 2015 at 08:25:05AM +0000, Xie, Huawei wrote: > >> Hi: > >> > >> Recently I have done one virtio optimization proof of concept. The > >> optimization includes two parts: > >> 1) avail ring set with fixed descriptors > >> 2) RX vectorization > >> With the optimizations, we could have several times of performance boost > >> for purely vhost-virtio throughput. > > Thanks! > > I'm very happy to see people work on the virtio ring format > > optimizations. > > > > I think it's best to analyze each optimization separately, > > unless you see a reason why they would only give benefit when applied > > together. > > Agree. Will split the patch to see each change's benefit. Of course it > is also very common two give much more gain than the sum of individual. > > > > > Also ideally, we'd need a unit test to show the performance impact. > > We've been using the tests in tools/virtio/ under linux, > > feel free to enhance these to simulate more workloads, or > > to suggest something else entirely. > If possible, we would provide perf test case under tools/virtio. > I am interested if the optimization could help kernel virtio-net driver > performance, if not the other is the bottleneck. > > > > Avail ring is initialized with fixed descriptor and is never changed, > > i.e, the index value of the nth avail ring entry is always n, which > > means virtio PMD is actually refilling desc ring only, without having to > > change avail ring. > > When vhost fetches avail ring, if not evicted, it is always in its first > > level cache. > > > > When RX receives packets from used ring, we use the used->idx as the > > desc idx. This requires that vhost processes and returns descs from > > avail ring to used ring in order, which is true for both current dpdk > > vhost and kernel vhost implementation. In my understanding, there is no > > necessity for vhost net to process descriptors OOO. One case could be > > zero copy, for example, if one descriptor doesn't meet zero copy > > requirment, we could directly return it to used ring, earlier than the > > descriptors in front of it. > > To enforce this, i want to use a reserved bit to indicate in order > > processing of descriptors. > > So what's the point in changing the idx for the used ring? > > You need to communicate the length to the guest anyway, don't you? > For guest virtio driver, we only use the length field in the entry of > the used ring and don't use the index in the entry. Instead, use > used->idx & 255 as the desc idx for RX, and used->idx & 127 as the desc > idx for TX. OK but length and index are in the same cache line. As long as we read the length, does it make sense to skip reading the index? > For vhost driver, as it couldn't assume fixed ring layout(to support > legacy virtio), it needs to update the idx in the used ring entry. > > > >> For tx ring, the arrangement is like below. Each transmitted mbuf needs > >> a desc for virtio_net_hdr, so actually we have only 128 free slots. > > Just fix this one. Support ANY_LAYOUT and then you can put data > > linearly. And/or support INDIRECT_DESC and then you can > > use an indirect descriptor. 
> Would check those two features. > > > >> > >> > > This one came out corrupted. > Actually i ever replied to the original mail and fixed it. Copy it here > again. > > ++ > || > || > +-----+-----+-----+--------------+------+------+------+ > | 0 | 1 | ... | 127 || 128 | 129 | ... | 255 | avail ring > +--+--+--+--+-----+---+------+---+--+---+------+--+---+ > | | | || | | | > v v v || v v v > +--+--+--+--+-----+---+------+---+--+---+------+--+---+ > | 127 | 128 | ... | 255 || 127 | 128 | ... | 255 | desc ring for virtio_net_hdr > +--+--+--+--+-----+---+------+---+--+---+------+--+---+ > | | | || | | | > v v v || v v v > +--+--+--+--+-----+---+------+---+--+---+------+--+---+ > | 0 | 1 | ... | 127 || 0 | 1 | ... | 127 | desc ring for tx dat > > > >> > >> /huawei > > > > Please Cc virtio related discussion more widely. > > I added the virtualization mailing list. > > > > > > So what you want to do is avoid changing the avail > > ring, isn't it enough to pre-format it and cache > > the values in the guest? > > > > Host can then keep using avail ring without changes, it will stay in cache. > > Something like the below for guest should do the trick (untested): > > Good optimization to tackle the cache line transfer issue. > Your change could avoid changing avail ring if it points to same head > index. It could improve kernel virtio-net driver's performance if the > avail ring doesn't change in running. > We would also investigating applying this idea to DPDK virtio PMD slow path. > For the optimization i did, which i call fast path, the driver "knows" > in theory the avail ring willn't never get changed, so it doesn't a) > allocate and free descriptors b) care the avail ring. > Based on the fact the DPDK pmd is actually using the simple desc ring > below, we could directly map avail->idx and used->idx to desc idx, and > use vector instruction to do parallel processing. > > +-+--+-+--+-+-+---------+---+--+---+ > | 0 | 1 | 2 | ... | 254 | 255 | rx desc ring > +----+----+---+-------------+------+ Yes, using these instructions in kernel is generally problematic, but can work in userspace. Waiting for a description of that. > > > > > Signed-off-by: Michael S. Tsirkin <mst@redhat.com> > > > > diff --git a/drivers/virtio/virtio_ring.c b/drivers/virtio/virtio_ring.c > > index 096b857..9363b50 100644 > > --- a/drivers/virtio/virtio_ring.c > > +++ b/drivers/virtio/virtio_ring.c > > @@ -91,6 +91,7 @@ struct vring_virtqueue { > > bool last_add_time_valid; > > ktime_t last_add_time; > > #endif > > + u16 *avail; > > > > /* Tokens for callbacks. */ > > void *data[]; > > @@ -236,7 +237,10 @@ static inline int virtqueue_add(struct virtqueue *_vq, > > /* Put entry in available array (but don't update avail->idx until they > > * do sync). */ > > avail = virtio16_to_cpu(_vq->vdev, vq->vring.avail->idx) & (vq->vring.num - 1); > > - vq->vring.avail->ring[avail] = cpu_to_virtio16(_vq->vdev, head); > > + if (vq->avail[avail] != head) { > > + vq->avail[avail] = head; > > + vq->vring.avail->ring[avail] = cpu_to_virtio16(_vq->vdev, head); > > + } > > > > /* Descriptors and available array need to be set before we expose the > > * new available array entries. 
*/ > > @@ -724,6 +728,11 @@ struct virtqueue *vring_new_virtqueue(unsigned int index, > > vq = kmalloc(sizeof(*vq) + sizeof(void *)*num, GFP_KERNEL); > > if (!vq) > > return NULL; > > + vq->avail = kzalloc(sizeof (*vq->avail) * num, GFP_KERNEL); > > + if (!va->avail) { > > + kfree(vq); > > + return NULL; > > + } > > > > vring_init(&vq->vring, num, pages, vring_align); > > vq->vq.callback = callback; > > > ^ permalink raw reply [flat|nested] 11+ messages in thread
* Re: [dpdk-dev] virtio optimization idea 2015-09-10 7:20 ` Michael S. Tsirkin @ 2015-09-14 3:08 ` Xie, Huawei 0 siblings, 0 replies; 11+ messages in thread From: Xie, Huawei @ 2015-09-14 3:08 UTC (permalink / raw) To: Michael S. Tsirkin; +Cc: dev, virtualization On 9/10/2015 3:20 PM, Michael S. Tsirkin wrote: > On Thu, Sep 10, 2015 at 06:32:35AM +0000, Xie, Huawei wrote: >> On 9/9/2015 3:34 PM, Michael S. Tsirkin wrote: >>> On Fri, Sep 04, 2015 at 08:25:05AM +0000, Xie, Huawei wrote: >>>> Hi: >>>> >>>> Recently I have done one virtio optimization proof of concept. The >>>> optimization includes two parts: >>>> 1) avail ring set with fixed descriptors >>>> 2) RX vectorization >>>> With the optimizations, we could have several times of performance boost >>>> for purely vhost-virtio throughput. >>> Thanks! >>> I'm very happy to see people work on the virtio ring format >>> optimizations. >>> >>> I think it's best to analyze each optimization separately, >>> unless you see a reason why they would only give benefit when applied >>> together. >> >> Agree. Will split the patch to see each change's benefit. Of course it >> is also very common two give much more gain than the sum of individual. >> >>> Also ideally, we'd need a unit test to show the performance impact. >>> We've been using the tests in tools/virtio/ under linux, >>> feel free to enhance these to simulate more workloads, or >>> to suggest something else entirely. >> If possible, we would provide perf test case under tools/virtio. >> I am interested if the optimization could help kernel virtio-net driver >> performance, if not the other is the bottleneck. >>> Avail ring is initialized with fixed descriptor and is never changed, >>> i.e, the index value of the nth avail ring entry is always n, which >>> means virtio PMD is actually refilling desc ring only, without having to >>> change avail ring. >>> When vhost fetches avail ring, if not evicted, it is always in its first >>> level cache. >>> >>> When RX receives packets from used ring, we use the used->idx as the >>> desc idx. This requires that vhost processes and returns descs from >>> avail ring to used ring in order, which is true for both current dpdk >>> vhost and kernel vhost implementation. In my understanding, there is no >>> necessity for vhost net to process descriptors OOO. One case could be >>> zero copy, for example, if one descriptor doesn't meet zero copy >>> requirment, we could directly return it to used ring, earlier than the >>> descriptors in front of it. >>> To enforce this, i want to use a reserved bit to indicate in order >>> processing of descriptors. >>> So what's the point in changing the idx for the used ring? >>> You need to communicate the length to the guest anyway, don't you? >> For guest virtio driver, we only use the length field in the entry of >> the used ring and don't use the index in the entry. Instead, use >> used->idx & 255 as the desc idx for RX, and used->idx & 127 as the desc >> idx for TX. > OK but length and index are in the same cache line. > As long as we read the length, does it make sense > to skip reading the index? I don't understand "skipping reading the index". Currently virtio RX needs the length field, and CPU will automatically fetch the adjacent index field in the unit of cache line, though the optimized driver doesn't use the index field. /huawei > >> For vhost driver, as it couldn't assume fixed ring layout(to support >> legacy virtio), it needs to update the idx in the used ring entry. 
>>>> For tx ring, the arrangement is like below. Each transmitted mbuf needs >>>> a desc for virtio_net_hdr, so actually we have only 128 free slots. >>> Just fix this one. Support ANY_LAYOUT and then you can put data >>> linearly. And/or support INDIRECT_DESC and then you can >>> use an indirect descriptor. >> Would check those two features. >>>> >>>> >>> This one came out corrupted. >> Actually i ever replied to the original mail and fixed it. Copy it here >> again. >> >> ++ >> || >> || >> +-----+-----+-----+--------------+------+------+------+ >> | 0 | 1 | ... | 127 || 128 | 129 | ... | 255 | avail ring >> +--+--+--+--+-----+---+------+---+--+---+------+--+---+ >> | | | || | | | >> v v v || v v v >> +--+--+--+--+-----+---+------+---+--+---+------+--+---+ >> | 127 | 128 | ... | 255 || 127 | 128 | ... | 255 | desc ring for virtio_net_hdr >> +--+--+--+--+-----+---+------+---+--+---+------+--+---+ >> | | | || | | | >> v v v || v v v >> +--+--+--+--+-----+---+------+---+--+---+------+--+---+ >> | 0 | 1 | ... | 127 || 0 | 1 | ... | 127 | desc ring for tx dat >> >> >>>> >>>> /huawei >>> Please Cc virtio related discussion more widely. >>> I added the virtualization mailing list. >>> >>> >>> So what you want to do is avoid changing the avail >>> ring, isn't it enough to pre-format it and cache >>> the values in the guest? >>> >>> Host can then keep using avail ring without changes, it will stay in cache. >>> Something like the below for guest should do the trick (untested): >> Good optimization to tackle the cache line transfer issue. >> Your change could avoid changing avail ring if it points to same head >> index. It could improve kernel virtio-net driver's performance if the >> avail ring doesn't change in running. >> We would also investigating applying this idea to DPDK virtio PMD slow path. >> For the optimization i did, which i call fast path, the driver "knows" >> in theory the avail ring willn't never get changed, so it doesn't a) >> allocate and free descriptors b) care the avail ring. >> Based on the fact the DPDK pmd is actually using the simple desc ring >> below, we could directly map avail->idx and used->idx to desc idx, and >> use vector instruction to do parallel processing. >> >> +-+--+-+--+-+-+---------+---+--+---+ >> | 0 | 1 | 2 | ... | 254 | 255 | rx desc ring >> +----+----+---+-------------+------+ > Yes, using these instructions in kernel is generally problematic, > but can work in userspace. Waiting for a description of that. About vectorization? Would do that. /huawei > > >> >>> Signed-off-by: Michael S. Tsirkin <mst@redhat.com> >>> >>> diff --git a/drivers/virtio/virtio_ring.c b/drivers/virtio/virtio_ring.c >>> index 096b857..9363b50 100644 >>> --- a/drivers/virtio/virtio_ring.c >>> +++ b/drivers/virtio/virtio_ring.c >>> @@ -91,6 +91,7 @@ struct vring_virtqueue { >>> bool last_add_time_valid; >>> ktime_t last_add_time; >>> #endif >>> + u16 *avail; >>> >>> /* Tokens for callbacks. */ >>> void *data[]; >>> @@ -236,7 +237,10 @@ static inline int virtqueue_add(struct virtqueue *_vq, >>> /* Put entry in available array (but don't update avail->idx until they >>> * do sync). 
*/ >>> avail = virtio16_to_cpu(_vq->vdev, vq->vring.avail->idx) & (vq->vring.num - 1); >>> - vq->vring.avail->ring[avail] = cpu_to_virtio16(_vq->vdev, head); >>> + if (vq->avail[avail] != head) { >>> + vq->avail[avail] = head; >>> + vq->vring.avail->ring[avail] = cpu_to_virtio16(_vq->vdev, head); >>> + } >>> >>> /* Descriptors and available array need to be set before we expose the >>> * new available array entries. */ >>> @@ -724,6 +728,11 @@ struct virtqueue *vring_new_virtqueue(unsigned int index, >>> vq = kmalloc(sizeof(*vq) + sizeof(void *)*num, GFP_KERNEL); >>> if (!vq) >>> return NULL; >>> + vq->avail = kzalloc(sizeof (*vq->avail) * num, GFP_KERNEL); >>> + if (!va->avail) { >>> + kfree(vq); >>> + return NULL; >>> + } >>> >>> vring_init(&vq->vring, num, pages, vring_align); >>> vq->vq.callback = callback; >>> ^ permalink raw reply [flat|nested] 11+ messages in thread