DPDK patches and discussions
From: "Burakov, Anatoly" <anatoly.burakov@intel.com>
To: wangyunjian <wangyunjian@huawei.com>,
	"dev@dpdk.org" <dev@dpdk.org>,
	"david.marchand@redhat.com" <david.marchand@redhat.com>
Cc: "Lilijun (Jerry)" <jerry.lilijun@huawei.com>,
	xudingke <xudingke@huawei.com>,
	"stable@dpdk.org" <stable@dpdk.org>
Subject: Re: [dpdk-dev] [PATCH v2] eal/linux: do not create user mem map repeatedly when it exists
Date: Mon, 27 Jul 2020 10:24:29 +0100
Message-ID: <a75edde1-a30c-71d1-5191-6c37cad23779@intel.com>
In-Reply-To: <34EFBCA9F01B0748BEB6B629CE643AE60D0EAAD8@dggemm513-mbx.china.huawei.com>

On 25-Jul-20 10:59 AM, wangyunjian wrote:
>> -----Original Message-----
>> From: Burakov, Anatoly [mailto:anatoly.burakov@intel.com]
>> Sent: Friday, July 24, 2020 9:25 PM
>> To: wangyunjian <wangyunjian@huawei.com>; dev@dpdk.org;
>> david.marchand@redhat.com
>> Cc: Lilijun (Jerry) <jerry.lilijun@huawei.com>; xudingke
>> <xudingke@huawei.com>; stable@dpdk.org
>> Subject: Re: [dpdk-dev] [PATCH v2] eal/linux: do not create user mem map
>> repeatedly when it exists
>>
>> On 23-Jul-20 3:48 PM, wangyunjian wrote:
>>> From: Yunjian Wang <wangyunjian@huawei.com>
>>>
>>> Currently, we create a new user mem map entry for the same memory
>>> segment even though it has already been added to the user mem maps.
>>> It's not necessary to create it twice.
>>>
>>> To resolve the issue, add support for removing the duplicate entry
>>> in the function compact_user_maps().
>>>
>>> Fixes: 0cbce3a167f1 ("vfio: skip DMA map failure if already mapped")
>>> Cc: stable@dpdk.org
>>>
>>> Signed-off-by: Yunjian Wang <wangyunjian@huawei.com>
>>> ---
>>> v2:
>>> * Remove the same entry in the function compact_user_maps()
>>> ---
>>>    lib/librte_eal/linux/eal_vfio.c | 5 +++++
>>>    1 file changed, 5 insertions(+)
>>>
>>> diff --git a/lib/librte_eal/linux/eal_vfio.c b/lib/librte_eal/linux/eal_vfio.c
>>> index abb12a354..df99307b7 100644
>>> --- a/lib/librte_eal/linux/eal_vfio.c
>>> +++ b/lib/librte_eal/linux/eal_vfio.c
>>> @@ -167,6 +167,10 @@ adjust_map(struct user_mem_map *src, struct user_mem_map *end,
>>>    static int
>>>    merge_map(struct user_mem_map *left, struct user_mem_map *right)
>>>    {
>>> +	/* merge identical maps into one */
>>> +	if (memcmp(left, right, sizeof(struct user_mem_map)) == 0)
>>> +		goto out;
>>> +
>>
>> merge_map() looks for adjacent maps only, but does not handle maps that
>> are wholly contained within one another ("the same map" also matches
>> this definition). Wouldn't it be better to check for that instead of
>> *just* handling identical maps?
> 
> What about using the initial implementation?
> With it, we don't create a new user mem map entry for the same memory segment.

I don't like this implementation because it relies on particulars of how
VFIO mapping works without explicitly specifying them, i.e. it's prone to
breaking when the code changes. That's without even mentioning that we
have no guarantee that kernel behavior in that particular case is
identical on all supported platforms.

I would honestly prefer an explicit compaction over an implicit one.
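
For example, something along these lines (an untested sketch, assuming
the current struct user_mem_map layout with addr/iova/len fields; not a
finished patch) would fold maps wholly contained within one another,
identical ones included, while keeping the existing adjacency merge:

static int
merge_map(struct user_mem_map *left, struct user_mem_map *right)
{
	/* fold 'right' into 'left' when it is wholly contained within
	 * 'left' (identical maps included) and its IOVA offset is
	 * consistent with left's mapping
	 */
	if (right->addr >= left->addr &&
			right->addr + right->len <=
				left->addr + left->len &&
			right->iova - left->iova ==
				right->addr - left->addr)
		goto out;

	/* otherwise, merge exactly adjacent maps only */
	if (left->addr + left->len != right->addr)
		return 0;
	if (left->iova + left->len != right->iova)
		return 0;

	left->len += right->len;

out:
	memset(right, 0, sizeof(*right));
	return 1;
}

This way the "duplicate mapping" case is handled explicitly at
compaction time, instead of depending on the kernel rejecting the
second mapping.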

> 
> @@ -1828,6 +1828,13 @@  container_dma_map(struct vfio_config *vfio_cfg, uint64_t vaddr, uint64_t iova,
>   		ret = -1;
>   		goto out;
>   	}
> +
> +	/* we don't need to create a new user mem map entry
> +	 * for the same memory segment.
> +	 */
> +	if (errno == EBUSY || errno == EEXIST)
> +		goto out;
> +
>   	/* create new user mem map entry */
>   	new_map = &user_mem_maps->maps[user_mem_maps->n_maps++];
>   	new_map->addr = vaddr;
> 
> Thanks,
> Yunjian
>>
>>>    	if (left->addr + left->len != right->addr)
>>>    		return 0;
>>>    	if (left->iova + left->len != right->iova)
>>> @@ -174,6 +178,7 @@ merge_map(struct user_mem_map *left, struct user_mem_map *right)
>>>
>>>    	left->len += right->len;
>>>
>>> +out:
>>>    	memset(right, 0, sizeof(*right));
>>>
>>>    	return 1;
>>>
>>
>>
>> --
>> Thanks,
>> Anatoly


-- 
Thanks,
Anatoly

Thread overview: 15+ messages
2020-07-16 13:38 [dpdk-dev] [PATCH 1/1] " wangyunjian
2020-07-17 14:19 ` Burakov, Anatoly
2020-07-17 14:23   ` Burakov, Anatoly
2020-07-20  2:00     ` wangyunjian
2020-07-20 11:46       ` Burakov, Anatoly
2020-07-22 12:47         ` wangyunjian
2020-07-23 14:48 ` [dpdk-dev] [PATCH v2] " wangyunjian
2020-07-24 13:25   ` Burakov, Anatoly
2020-07-25  9:59     ` wangyunjian
2020-07-27  9:24       ` Burakov, Anatoly [this message]
2020-07-30 13:16         ` wangyunjian
2020-07-31 11:55           ` Burakov, Anatoly
2020-08-05 12:58             ` wangyunjian
2020-09-17 11:33               ` Burakov, Anatoly
2020-09-17 11:35   ` Burakov, Anatoly
