From: Yuanhan Liu <yuanhan.liu@linux.intel.com>
To: "Tan, Jianfeng" <jianfeng.tan@intel.com>
Cc: nakajima.yoshihiro@lab.ntt.co.jp, mst@redhat.com, dev@dpdk.org,
	p.fedin@samsung.com, ann.zhuangyanying@huawei.com
Subject: Re: [dpdk-dev] [PATCH v2 1/5] mem: add --single-file to create single mem-backed file
Date: Tue, 8 Mar 2016 10:44:37 +0800	[thread overview]
Message-ID: <20160308024437.GJ14300@yliu-dev.sh.intel.com> (raw)
In-Reply-To: <56DE30FE.7020809@intel.com>

On Tue, Mar 08, 2016 at 09:55:10AM +0800, Tan, Jianfeng wrote:
> Hi Yuanhan,
> 
> On 3/7/2016 9:13 PM, Yuanhan Liu wrote:
> >CC'ed the EAL hugepage maintainer, which is something you should do
> >when sending a patch.
> 
> Thanks for doing this.
> 
> >
> >On Fri, Feb 05, 2016 at 07:20:24PM +0800, Jianfeng Tan wrote:
> >>Originally, there are two drawbacks in using hugepages: a. it needs
> >>root privilege to touch /proc/self/pagemap, which is a prerequisite
> >>for allocating physically contiguous memsegs; b. possibly too many
> >>hugepage files are created, especially when used with 2M hugepages.
> >>
> >>Virtual devices do not care about the physical contiguity of
> >>allocated hugepages at all. The --single-file option provides a way
> >>to allocate all hugepages into a single memory-backed file.
> >>
> >>Known issues:
> >>a. the single-file option relies on the kernel to allocate
> >>NUMA-affinitive memory.
> >>b. possible ABI break: originally, --no-huge uses anonymous memory
> >>instead of a file-backed way to create memory.
> >>
> >>Signed-off-by: Huawei Xie <huawei.xie@intel.com>
> >>Signed-off-by: Jianfeng Tan <jianfeng.tan@intel.com>
> >...
> >>@@ -956,6 +961,16 @@ eal_check_common_options(struct internal_config *internal_cfg)
> >>  			"be specified together with --"OPT_NO_HUGE"\n");
> >>  		return -1;
> >>  	}
> >>+	if (internal_cfg->single_file && internal_cfg->force_sockets == 1) {
> >>+		RTE_LOG(ERR, EAL, "Option --"OPT_SINGLE_FILE" cannot "
> >>+			"be specified together with --"OPT_SOCKET_MEM"\n");
> >>+		return -1;
> >>+	}
> >>+	if (internal_cfg->single_file && internal_cfg->hugepage_unlink) {
> >>+		RTE_LOG(ERR, EAL, "Option --"OPT_HUGE_UNLINK" cannot "
> >>+			"be specified together with --"OPT_SINGLE_FILE"\n");
> >>+		return -1;
> >>+	}
> >The two limitations don't make sense to me.
> 
> For the force_sockets option, my original thought on the --single-file
> option is that we don't sort those pages (which requires
> root/cap_sys_admin) and don't even look up NUMA information, because the
> file may contain memory from both sockets.
> 
> For the hugepage_unlink option, those hugepage files get closed at the
> end of memory initialization; if we also unlink those hugepage files, we
> cannot share them with other processes (say, the backend).

Yeah, I know where the two limitations come from: your implementation. I
was just wondering whether they are __truly__ limitations. I mean, can
we get rid of them somehow?

For the --socket-mem option, if we can't handle it well, or if we could
ignore the socket_id of the allocated huge pages, then yes, the
limitation is a real one.

But for the second option, no, we should be able to make it work. One
extra action is that you should not invoke "close(fd)" on those huge
page files. Then you can get all the information I stated in my reply to
your 2nd patch.
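
To illustrate the point about keeping the fd open: a hugepage fd that
stays open can later be handed to another process (say, the vhost
backend) over a unix socket, roughly like the hypothetical sketch below.
This is not part of the patch, just how the sharing could look:

    #include <string.h>
    #include <sys/socket.h>
    #include <sys/uio.h>

    /* Hypothetical helper, not DPDK code: pass the still-open hugepage
     * fd to a backend process so it can mmap() the same memory. */
    static int
    send_hugepage_fd(int sock, int hugepage_fd)
    {
            struct msghdr msg;
            struct cmsghdr *cmsg;
            char control[CMSG_SPACE(sizeof(int))];
            char dummy = 0;
            struct iovec iov = { &dummy, 1 };

            memset(&msg, 0, sizeof(msg));
            memset(control, 0, sizeof(control));
            msg.msg_iov = &iov;
            msg.msg_iovlen = 1;
            msg.msg_control = control;
            msg.msg_controllen = sizeof(control);

            cmsg = CMSG_FIRSTHDR(&msg);
            cmsg->cmsg_level = SOL_SOCKET;
            cmsg->cmsg_type = SCM_RIGHTS;
            cmsg->cmsg_len = CMSG_LEN(sizeof(int));
            memcpy(CMSG_DATA(cmsg), &hugepage_fd, sizeof(int));

            /* the backend receives the fd with recvmsg() and mmap()s it */
            return sendmsg(sock, &msg, 0) < 0 ? -1 : 0;
    }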

> >
> >>diff --git a/lib/librte_eal/linuxapp/eal/eal_memory.c b/lib/librte_eal/linuxapp/eal/eal_memory.c
> >>index 6008533..68ef49a 100644
> >>--- a/lib/librte_eal/linuxapp/eal/eal_memory.c
> >>+++ b/lib/librte_eal/linuxapp/eal/eal_memory.c
> >>@@ -1102,20 +1102,54 @@ rte_eal_hugepage_init(void)
> >>  	/* get pointer to global configuration */
> >>  	mcfg = rte_eal_get_configuration()->mem_config;
> >>-	/* hugetlbfs can be disabled */
> >>-	if (internal_config.no_hugetlbfs) {
> >>-		addr = mmap(NULL, internal_config.memory, PROT_READ | PROT_WRITE,
> >>-				MAP_PRIVATE | MAP_ANONYMOUS, 0, 0);
> >>+	/* when hugetlbfs is disabled or single-file option is specified */
> >>+	if (internal_config.no_hugetlbfs || internal_config.single_file) {
> >>+		int fd;
> >>+		uint64_t pagesize;
> >>+		unsigned socket_id = rte_socket_id();
> >>+		char filepath[MAX_HUGEPAGE_PATH];
> >>+
> >>+		if (internal_config.no_hugetlbfs) {
> >>+			eal_get_hugefile_path(filepath, sizeof(filepath),
> >>+					      "/dev/shm", 0);
> >>+			pagesize = RTE_PGSIZE_4K;
> >>+		} else {
> >>+			struct hugepage_info *hpi;
> >>+
> >>+			hpi = &internal_config.hugepage_info[0];
> >>+			eal_get_hugefile_path(filepath, sizeof(filepath),
> >>+					      hpi->hugedir, 0);
> >>+			pagesize = hpi->hugepage_sz;
> >>+		}
> >>+		fd = open(filepath, O_CREAT | O_RDWR, S_IRUSR | S_IWUSR);
> >>+		if (fd < 0) {
> >>+			RTE_LOG(ERR, EAL, "%s: open %s failed: %s\n",
> >>+				__func__, filepath, strerror(errno));
> >>+			return -1;
> >>+		}
> >>+
> >>+		if (ftruncate(fd, internal_config.memory) < 0) {
> >>+			RTE_LOG(ERR, EAL, "ftruncate %s failed: %s\n",
> >>+				filepath, strerror(errno));
> >>+			return -1;
> >>+		}
> >>+
> >>+		addr = mmap(NULL, internal_config.memory,
> >>+			    PROT_READ | PROT_WRITE,
> >>+			    MAP_SHARED | MAP_POPULATE, fd, 0);
> >>  		if (addr == MAP_FAILED) {
> >>-			RTE_LOG(ERR, EAL, "%s: mmap() failed: %s\n", __func__,
> >>-					strerror(errno));
> >>+			RTE_LOG(ERR, EAL, "%s: mmap() failed: %s\n",
> >>+				__func__, strerror(errno));
> >>  			return -1;
> >>  		}
> >>  		mcfg->memseg[0].phys_addr = (phys_addr_t)(uintptr_t)addr;
> >>  		mcfg->memseg[0].addr = addr;
> >>-		mcfg->memseg[0].hugepage_sz = RTE_PGSIZE_4K;
> >>+		mcfg->memseg[0].hugepage_sz = pagesize;
> >>  		mcfg->memseg[0].len = internal_config.memory;
> >>-		mcfg->memseg[0].socket_id = 0;
> >>+		mcfg->memseg[0].socket_id = socket_id;
> >I see quite a few issues:
> >
> >- Assume I have a system with two hugepage sizes: 1G (x4) and 2M (x512),
> >   mounted at /dev/hugepages and /mnt, respectively.
> >
> >   Here we then get a 5G internal_config.memory, and your code will
> >   try to mmap 5G on the first mount point (/dev/hugepages) due to the
> >   hardcoded logic in your code:
> >
> >       hpi = &internal_config.hugepage_info[0];
> >       eal_get_hugefile_path(filepath, sizeof(filepath),
> >       		      hpi->hugedir, 0);
> >
> >   But that mount point has only 4G in total; therefore, it will fail.
> 
> As mentioned above, this case is not covered by the original design of --single-file.

But it's such a common case, isn't it?

> >
> >- As you stated, socket_id is hardcoded, which could be wrong.
> 
> We rely on the OS to allocate hugepages, and cannot guarantee that the
> physical hugepages in the big hugepage file are from the same socket.
> 
> >
> >- As stated above, the option limitation doesn't seem right to me.
> >
> >   I mean, --single-file should be able to work with the --socket-mem
> >   option semantically.
> 
> If we'd like to work well with the --socket-mem option, we need to use
> syscalls like set_mempolicy() and mbind(). That would bring a bigger
> change compared to the current one. I don't know if that's acceptable?

Yes, if that's the right way to go. But as you also stated, I doubt we
really need to handle NUMA affinity here, since it's complex.
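
For reference, a minimal sketch of what the mbind() approach could look
like (hypothetical and untested, just to show the idea of binding parts
of the single file-backed mapping to different nodes):

    #include <numaif.h>     /* mbind(); link with -lnuma */
    #include <string.h>
    #include <sys/mman.h>

    /* Hypothetical helper, not part of the patch: map 'size' bytes of
     * the single hugepage file 'fd' and bind each half to one of two
     * NUMA nodes. Assumes 'size' is a multiple of 2 * hugepage_sz. */
    static void *
    map_single_file_numa(int fd, size_t size)
    {
            size_t half = size / 2;
            unsigned long node0 = 1UL << 0, node1 = 1UL << 1;
            void *addr;

            /* no MAP_POPULATE: pages must not be faulted in before the
             * binding below takes effect */
            addr = mmap(NULL, size, PROT_READ | PROT_WRITE,
                        MAP_SHARED, fd, 0);
            if (addr == MAP_FAILED)
                    return NULL;

            if (mbind(addr, half, MPOL_BIND, &node0,
                      sizeof(node0) * 8, 0) < 0 ||
                mbind((char *)addr + half, half, MPOL_BIND, &node1,
                      sizeof(node1) * 8, 0) < 0) {
                    munmap(addr, size);
                    return NULL;
            }

            memset(addr, 0, size);  /* fault pages in on the bound nodes */
            return addr;
    }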

> >
> >
> >And I have been thinking about how to deal with those issues properly,
> >and a __very immature__ solution came to my mind (which might simply
> >not work), but anyway, here it is FYI: we go through the same process
> >used for normal huge page initialization for the --single-file option
> >as well, but take different actions, or no action at all, at some
> >stages when that option is given, which is a bit similar to the way
> >RTE_EAL_SINGLE_FILE_SEGMENTS is handled.
> >
> >And we create one hugepage file for each node and each page size. For a
> >system like mine above (2 nodes), it may populate files like the following:
> >
> >- 1G x 2 on node0
> >- 1G x 2 on node1
> >- 2M x 256 on node0
> >- 2M x 256 on node1
> >
> >That should normally fit your case. Though 4 looks like the maximum
> >node count, the --socket-mem option may relieve the limit a bit.
> >
> >And if we "could" avoid caring about socket_id being set correctly, we
> >could simply allocate one file for each hugepage size. That would work
> >well for your container enabling.
> 
> This seems a good option at first sight. Let's compare this new way with
> the original design.
> 
> The original design just covers the simplest scenario:
> a. just one hugetlbfs mount (the new way can support multiple hugetlbfs
> mounts)
> b. does not require root privilege (the new way can achieve this by using
> the above-mentioned mbind() or set_mempolicy() syscalls)
> c. no sorting (both ways are OK)
> d. performance: from the perspective of virtio for container, we care most
> about the performance of address translation in vhost. In vhost we now use
> an O(n) linear comparison to translate addresses (this can be optimized to
> O(log n) using a segment tree, or even better with a cache; sorry, that is
> another problem), so we should maintain as few files as possible. (The new
> way can achieve this when used with --socket-mem and --huge-dir.)
> e. NUMA awareness is not required (and it's complex). (The new way can
> address this, though without guarantees.)
> 
> All in all, this new way seems great to me.
> 
> Another thing: if we "go through the same process to handle normal huge
> page initialization", my concern is that RTE_EAL_SINGLE_FILE_SEGMENTS goes
> that way to maximize code reuse, but the new way has little code in common
> with the original paths, and mixing these options together hurts
> readability. What do you think?

Indeed. I've already found that the code is a bit hard to read, due to
the many "#ifdef ... #else ... #endif" blocks for RTE_EAL_SINGLE_FILE_SEGMENTS
as well as some special archs.

Therefore, I would suggest doing it as below: add another option based
on the SINGLE_FILE_SEGMENTS implementation.

I mean, SINGLE_FILE_SEGMENTS already tries to generate as few files as
possible. If we add another option, say --no-sort (or --no-phys-continuity),
we could add just a few lines of code to let it generate one file for
each huge page size (if we don't consider NUMA affinity).
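
To make the idea concrete, something along the following lines inside
rte_eal_hugepage_init() could be enough. It reuses names from the patch
above where possible; num_hugepage_sizes and num_pages[0] are assumptions
about the internal_config layout, and this is only a rough sketch, not
tested code:

    /* hypothetical --no-sort path: one file per hugepage size, no
     * sorting, no physical-contiguity guarantee */
    unsigned i;

    for (i = 0; i < internal_config.num_hugepage_sizes; i++) {
            struct hugepage_info *hpi = &internal_config.hugepage_info[i];
            uint64_t mem_sz = hpi->hugepage_sz * hpi->num_pages[0];
            char filepath[MAX_HUGEPAGE_PATH];
            void *addr;
            int fd;

            eal_get_hugefile_path(filepath, sizeof(filepath),
                                  hpi->hugedir, i);
            fd = open(filepath, O_CREAT | O_RDWR, S_IRUSR | S_IWUSR);
            if (fd < 0 || ftruncate(fd, mem_sz) < 0)
                    return -1;

            addr = mmap(NULL, mem_sz, PROT_READ | PROT_WRITE,
                        MAP_SHARED | MAP_POPULATE, fd, 0);
            if (addr == MAP_FAILED)
                    return -1;

            /* one memseg per hugepage size; keep fd open for sharing */
            mcfg->memseg[i].addr = addr;
            mcfg->memseg[i].hugepage_sz = hpi->hugepage_sz;
            mcfg->memseg[i].len = mem_sz;
    }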

> >
> >BTW, since we already have the SINGLE_FILE_SEGMENTS (config) option,
> >adding another option --single-file looks really confusing to me.
> >
> >To me, maybe you could build on the SINGLE_FILE_SEGMENTS option, and add
> >another option, say --no-sort (I confess this name sucks, but you get
> >my point). With that, we could make sure to create as few huge page
> >files as possible, to fit your case.
> 
> This is great advice. So what do you think of --converged, or
> --no-scattered-mem, or any better idea?

TBH, none of them looks great to me, either. But I have no better
options. Well, --no-phys-continuity looks like the best option to
me so far :)

	--yliu

> 
> Thanks for valuable input.
> 
> Jianfeng
> 
> >
> >	--yliu

