DPDK patches and discussions
* [dpdk-dev] [PATCH RFC 00/13] Update build system
@ 2015-01-12 16:33 Sergio Gonzalez Monroy
  2015-01-12 16:33 ` [dpdk-dev] [PATCH RFC 01/13] mk: Remove combined library and related options Sergio Gonzalez Monroy
                   ` (14 more replies)
  0 siblings, 15 replies; 21+ messages in thread
From: Sergio Gonzalez Monroy @ 2015-01-12 16:33 UTC (permalink / raw)
  To: dev

This patch series updates the DPDK build system.

It aims to accomplish the following goals:
 - Create a library containing core DPDK libraries (librte_eal,
   librte_malloc, librte_mempool, librte_mbuf and librte_ring).
   The idea of core libraries is to group those libraries that are
   always required for any DPDK application.
 - Remove config option to build a combined library.
 - For shared libraries, explicitly link against dependent
   libraries (adding entries to DT_NEEDED).
 - Update app linking flags against static/shared DPDK libs.

Note that this series turns out to be quite big because it moves the lib
directories to a new subdirectory.
I have omitted the actual diff from the patch that moves librte_eal,
as it is quite large (6MB). A different approach may be preferable.

Sergio Gonzalez Monroy (13):
  mk: Remove combined library and related options
  lib/core: create new core dir and makefiles
  core: move librte_eal to core subdir
  core: move librte_malloc to core subdir
  core: move librte_mempool to core subdir
  core: move librte_mbuf to core subdir
  core: move librte_ring to core subdir
  Update path of core libraries
  mk: new corelib makefile
  lib: Set LDLIBS for each library
  mk: Use LDLIBS when linking shared libraries
  mk: update apps build
  mk: add -lpthread to linuxapp EXECENV_LDLIBS

 app/test/test_eal_fs.c                             |     2 +-
 config/common_bsdapp                               |     6 -
 config/common_linuxapp                             |     6 -
 config/defconfig_ppc_64-power8-linuxapp-gcc        |     2 -
 lib/Makefile                                       |     7 +-
 lib/core/Makefile                                  |    43 +
 lib/core/librte_core/Makefile                      |    45 +
 lib/core/librte_eal/Makefile                       |    39 +
 lib/core/librte_eal/bsdapp/Makefile                |    38 +
 lib/core/librte_eal/bsdapp/contigmem/BSDmakefile   |    36 +
 lib/core/librte_eal/bsdapp/contigmem/Makefile      |    52 +
 lib/core/librte_eal/bsdapp/contigmem/contigmem.c   |   233 +
 lib/core/librte_eal/bsdapp/eal/Makefile            |    97 +
 lib/core/librte_eal/bsdapp/eal/eal.c               |   563 +
 lib/core/librte_eal/bsdapp/eal/eal_alarm.c         |    60 +
 lib/core/librte_eal/bsdapp/eal/eal_debug.c         |   113 +
 lib/core/librte_eal/bsdapp/eal/eal_hugepage_info.c |   133 +
 lib/core/librte_eal/bsdapp/eal/eal_interrupts.c    |    71 +
 lib/core/librte_eal/bsdapp/eal/eal_lcore.c         |   107 +
 lib/core/librte_eal/bsdapp/eal/eal_log.c           |    57 +
 lib/core/librte_eal/bsdapp/eal/eal_memory.c        |   224 +
 lib/core/librte_eal/bsdapp/eal/eal_pci.c           |   510 +
 lib/core/librte_eal/bsdapp/eal/eal_thread.c        |   233 +
 lib/core/librte_eal/bsdapp/eal/eal_timer.c         |   141 +
 .../bsdapp/eal/include/exec-env/rte_dom0_common.h  |   107 +
 .../bsdapp/eal/include/exec-env/rte_interrupts.h   |    54 +
 lib/core/librte_eal/bsdapp/nic_uio/BSDmakefile     |    36 +
 lib/core/librte_eal/bsdapp/nic_uio/Makefile        |    52 +
 lib/core/librte_eal/bsdapp/nic_uio/nic_uio.c       |   329 +
 lib/core/librte_eal/common/Makefile                |    61 +
 lib/core/librte_eal/common/eal_common_cpuflags.c   |    85 +
 lib/core/librte_eal/common/eal_common_dev.c        |   109 +
 lib/core/librte_eal/common/eal_common_devargs.c    |   152 +
 lib/core/librte_eal/common/eal_common_errno.c      |    74 +
 lib/core/librte_eal/common/eal_common_hexdump.c    |   121 +
 lib/core/librte_eal/common/eal_common_launch.c     |   120 +
 lib/core/librte_eal/common/eal_common_log.c        |   320 +
 lib/core/librte_eal/common/eal_common_memory.c     |   121 +
 lib/core/librte_eal/common/eal_common_memzone.c    |   533 +
 lib/core/librte_eal/common/eal_common_options.c    |   611 ++
 lib/core/librte_eal/common/eal_common_pci.c        |   207 +
 lib/core/librte_eal/common/eal_common_string_fns.c |    69 +
 lib/core/librte_eal/common/eal_common_tailqs.c     |   146 +
 lib/core/librte_eal/common/eal_filesystem.h        |   118 +
 lib/core/librte_eal/common/eal_hugepages.h         |    67 +
 lib/core/librte_eal/common/eal_internal_cfg.h      |    93 +
 lib/core/librte_eal/common/eal_options.h           |    93 +
 lib/core/librte_eal/common/eal_private.h           |   206 +
 lib/core/librte_eal/common/eal_thread.h            |    53 +
 .../common/include/arch/ppc_64/rte_atomic.h        |   426 +
 .../common/include/arch/ppc_64/rte_byteorder.h     |   149 +
 .../common/include/arch/ppc_64/rte_cpuflags.h      |   187 +
 .../common/include/arch/ppc_64/rte_cycles.h        |    87 +
 .../common/include/arch/ppc_64/rte_memcpy.h        |   225 +
 .../common/include/arch/ppc_64/rte_prefetch.h      |    61 +
 .../common/include/arch/ppc_64/rte_spinlock.h      |    73 +
 .../common/include/arch/x86/rte_atomic.h           |   216 +
 .../common/include/arch/x86/rte_atomic_32.h        |   222 +
 .../common/include/arch/x86/rte_atomic_64.h        |   191 +
 .../common/include/arch/x86/rte_byteorder.h        |   125 +
 .../common/include/arch/x86/rte_byteorder_32.h     |    51 +
 .../common/include/arch/x86/rte_byteorder_64.h     |    52 +
 .../common/include/arch/x86/rte_cpuflags.h         |   310 +
 .../common/include/arch/x86/rte_cycles.h           |   121 +
 .../common/include/arch/x86/rte_memcpy.h           |   297 +
 .../common/include/arch/x86/rte_prefetch.h         |    62 +
 .../common/include/arch/x86/rte_spinlock.h         |    94 +
 .../librte_eal/common/include/generic/rte_atomic.h |   918 ++
 .../common/include/generic/rte_byteorder.h         |   217 +
 .../common/include/generic/rte_cpuflags.h          |   110 +
 .../librte_eal/common/include/generic/rte_cycles.h |   205 +
 .../librte_eal/common/include/generic/rte_memcpy.h |   144 +
 .../common/include/generic/rte_prefetch.h          |    71 +
 .../common/include/generic/rte_spinlock.h          |   226 +
 lib/core/librte_eal/common/include/rte_alarm.h     |   106 +
 .../common/include/rte_branch_prediction.h         |    70 +
 lib/core/librte_eal/common/include/rte_common.h    |   389 +
 .../librte_eal/common/include/rte_common_vect.h    |    93 +
 lib/core/librte_eal/common/include/rte_debug.h     |   105 +
 lib/core/librte_eal/common/include/rte_dev.h       |   111 +
 lib/core/librte_eal/common/include/rte_devargs.h   |   149 +
 lib/core/librte_eal/common/include/rte_eal.h       |   269 +
 .../librte_eal/common/include/rte_eal_memconfig.h  |   112 +
 lib/core/librte_eal/common/include/rte_errno.h     |    96 +
 lib/core/librte_eal/common/include/rte_hexdump.h   |    89 +
 .../librte_eal/common/include/rte_interrupts.h     |   121 +
 lib/core/librte_eal/common/include/rte_launch.h    |   177 +
 lib/core/librte_eal/common/include/rte_lcore.h     |   229 +
 lib/core/librte_eal/common/include/rte_log.h       |   308 +
 .../librte_eal/common/include/rte_malloc_heap.h    |    56 +
 lib/core/librte_eal/common/include/rte_memory.h    |   218 +
 lib/core/librte_eal/common/include/rte_memzone.h   |   278 +
 lib/core/librte_eal/common/include/rte_pci.h       |   305 +
 .../common/include/rte_pci_dev_feature_defs.h      |    45 +
 .../common/include/rte_pci_dev_features.h          |    44 +
 .../librte_eal/common/include/rte_pci_dev_ids.h    |   540 +
 lib/core/librte_eal/common/include/rte_per_lcore.h |    79 +
 lib/core/librte_eal/common/include/rte_random.h    |    91 +
 lib/core/librte_eal/common/include/rte_rwlock.h    |   158 +
 .../librte_eal/common/include/rte_string_fns.h     |    81 +
 lib/core/librte_eal/common/include/rte_tailq.h     |   215 +
 .../librte_eal/common/include/rte_tailq_elem.h     |    90 +
 lib/core/librte_eal/common/include/rte_version.h   |   129 +
 lib/core/librte_eal/common/include/rte_warnings.h  |    84 +
 lib/core/librte_eal/linuxapp/Makefile              |    45 +
 lib/core/librte_eal/linuxapp/eal/Makefile          |   111 +
 lib/core/librte_eal/linuxapp/eal/eal.c             |   861 ++
 lib/core/librte_eal/linuxapp/eal/eal_alarm.c       |   268 +
 lib/core/librte_eal/linuxapp/eal/eal_debug.c       |   113 +
 .../librte_eal/linuxapp/eal/eal_hugepage_info.c    |   359 +
 lib/core/librte_eal/linuxapp/eal/eal_interrupts.c  |   826 ++
 lib/core/librte_eal/linuxapp/eal/eal_ivshmem.c     |   968 ++
 lib/core/librte_eal/linuxapp/eal/eal_lcore.c       |   191 +
 lib/core/librte_eal/linuxapp/eal/eal_log.c         |   197 +
 lib/core/librte_eal/linuxapp/eal/eal_memory.c      |  1564 +++
 lib/core/librte_eal/linuxapp/eal/eal_pci.c         |   629 ++
 lib/core/librte_eal/linuxapp/eal/eal_pci_init.h    |   122 +
 lib/core/librte_eal/linuxapp/eal/eal_pci_uio.c     |   440 +
 lib/core/librte_eal/linuxapp/eal/eal_pci_vfio.c    |   807 ++
 .../librte_eal/linuxapp/eal/eal_pci_vfio_mp_sync.c |   395 +
 lib/core/librte_eal/linuxapp/eal/eal_thread.c      |   233 +
 lib/core/librte_eal/linuxapp/eal/eal_timer.c       |   343 +
 lib/core/librte_eal/linuxapp/eal/eal_vfio.h        |    55 +
 lib/core/librte_eal/linuxapp/eal/eal_xen_memory.c  |   370 +
 .../eal/include/exec-env/rte_dom0_common.h         |   108 +
 .../linuxapp/eal/include/exec-env/rte_interrupts.h |    58 +
 .../linuxapp/eal/include/exec-env/rte_kni_common.h |   174 +
 lib/core/librte_eal/linuxapp/igb_uio/Makefile      |    53 +
 lib/core/librte_eal/linuxapp/igb_uio/compat.h      |   116 +
 lib/core/librte_eal/linuxapp/igb_uio/igb_uio.c     |   643 ++
 lib/core/librte_eal/linuxapp/kni/Makefile          |    93 +
 lib/core/librte_eal/linuxapp/kni/compat.h          |    21 +
 lib/core/librte_eal/linuxapp/kni/ethtool/README    |   100 +
 .../librte_eal/linuxapp/kni/ethtool/igb/COPYING    |   339 +
 .../linuxapp/kni/ethtool/igb/e1000_82575.c         |  3665 +++++++
 .../linuxapp/kni/ethtool/igb/e1000_82575.h         |   509 +
 .../linuxapp/kni/ethtool/igb/e1000_api.c           |  1160 +++
 .../linuxapp/kni/ethtool/igb/e1000_api.h           |   157 +
 .../linuxapp/kni/ethtool/igb/e1000_defines.h       |  1380 +++
 .../librte_eal/linuxapp/kni/ethtool/igb/e1000_hw.h |   793 ++
 .../linuxapp/kni/ethtool/igb/e1000_i210.c          |   909 ++
 .../linuxapp/kni/ethtool/igb/e1000_i210.h          |    91 +
 .../linuxapp/kni/ethtool/igb/e1000_mac.c           |  2096 ++++
 .../linuxapp/kni/ethtool/igb/e1000_mac.h           |    80 +
 .../linuxapp/kni/ethtool/igb/e1000_manage.c        |   556 +
 .../linuxapp/kni/ethtool/igb/e1000_manage.h        |    89 +
 .../linuxapp/kni/ethtool/igb/e1000_mbx.c           |   526 +
 .../linuxapp/kni/ethtool/igb/e1000_mbx.h           |    87 +
 .../linuxapp/kni/ethtool/igb/e1000_nvm.c           |   967 ++
 .../linuxapp/kni/ethtool/igb/e1000_nvm.h           |    75 +
 .../linuxapp/kni/ethtool/igb/e1000_osdep.h         |   136 +
 .../linuxapp/kni/ethtool/igb/e1000_phy.c           |  3405 ++++++
 .../linuxapp/kni/ethtool/igb/e1000_phy.h           |   256 +
 .../linuxapp/kni/ethtool/igb/e1000_regs.h          |   646 ++
 lib/core/librte_eal/linuxapp/kni/ethtool/igb/igb.h |   859 ++
 .../linuxapp/kni/ethtool/igb/igb_debugfs.c         |    29 +
 .../linuxapp/kni/ethtool/igb/igb_ethtool.c         |  2859 ++++++
 .../linuxapp/kni/ethtool/igb/igb_hwmon.c           |   260 +
 .../librte_eal/linuxapp/kni/ethtool/igb/igb_main.c | 10263 +++++++++++++++++++
 .../linuxapp/kni/ethtool/igb/igb_param.c           |   848 ++
 .../linuxapp/kni/ethtool/igb/igb_procfs.c          |   363 +
 .../librte_eal/linuxapp/kni/ethtool/igb/igb_ptp.c  |   944 ++
 .../linuxapp/kni/ethtool/igb/igb_regtest.h         |   251 +
 .../librte_eal/linuxapp/kni/ethtool/igb/igb_vmdq.c |   437 +
 .../librte_eal/linuxapp/kni/ethtool/igb/igb_vmdq.h |    46 +
 .../librte_eal/linuxapp/kni/ethtool/igb/kcompat.c  |  1482 +++
 .../librte_eal/linuxapp/kni/ethtool/igb/kcompat.h  |  3884 +++++++
 .../linuxapp/kni/ethtool/igb/kcompat_ethtool.c     |  1172 +++
 .../librte_eal/linuxapp/kni/ethtool/ixgbe/COPYING  |   339 +
 .../librte_eal/linuxapp/kni/ethtool/ixgbe/ixgbe.h  |   925 ++
 .../linuxapp/kni/ethtool/ixgbe/ixgbe_82598.c       |  1296 +++
 .../linuxapp/kni/ethtool/ixgbe/ixgbe_82598.h       |    44 +
 .../linuxapp/kni/ethtool/ixgbe/ixgbe_82599.c       |  2314 +++++
 .../linuxapp/kni/ethtool/ixgbe/ixgbe_82599.h       |    58 +
 .../linuxapp/kni/ethtool/ixgbe/ixgbe_api.c         |  1158 +++
 .../linuxapp/kni/ethtool/ixgbe/ixgbe_api.h         |   168 +
 .../linuxapp/kni/ethtool/ixgbe/ixgbe_common.c      |  4083 ++++++++
 .../linuxapp/kni/ethtool/ixgbe/ixgbe_common.h      |   140 +
 .../linuxapp/kni/ethtool/ixgbe/ixgbe_dcb.h         |   168 +
 .../linuxapp/kni/ethtool/ixgbe/ixgbe_ethtool.c     |  2901 ++++++
 .../linuxapp/kni/ethtool/ixgbe/ixgbe_fcoe.h        |    91 +
 .../linuxapp/kni/ethtool/ixgbe/ixgbe_main.c        |  2975 ++++++
 .../linuxapp/kni/ethtool/ixgbe/ixgbe_mbx.h         |   105 +
 .../linuxapp/kni/ethtool/ixgbe/ixgbe_osdep.h       |   132 +
 .../linuxapp/kni/ethtool/ixgbe/ixgbe_phy.c         |  1847 ++++
 .../linuxapp/kni/ethtool/ixgbe/ixgbe_phy.h         |   137 +
 .../linuxapp/kni/ethtool/ixgbe/ixgbe_sriov.h       |    74 +
 .../linuxapp/kni/ethtool/ixgbe/ixgbe_type.h        |  3254 ++++++
 .../linuxapp/kni/ethtool/ixgbe/ixgbe_x540.c        |   938 ++
 .../linuxapp/kni/ethtool/ixgbe/ixgbe_x540.h        |    58 +
 .../linuxapp/kni/ethtool/ixgbe/kcompat.c           |  1246 +++
 .../linuxapp/kni/ethtool/ixgbe/kcompat.h           |  3143 ++++++
 lib/core/librte_eal/linuxapp/kni/kni_dev.h         |   150 +
 lib/core/librte_eal/linuxapp/kni/kni_ethtool.c     |   217 +
 lib/core/librte_eal/linuxapp/kni/kni_fifo.h        |   108 +
 lib/core/librte_eal/linuxapp/kni/kni_misc.c        |   606 ++
 lib/core/librte_eal/linuxapp/kni/kni_net.c         |   687 ++
 lib/core/librte_eal/linuxapp/kni/kni_vhost.c       |   811 ++
 lib/core/librte_eal/linuxapp/xen_dom0/Makefile     |    56 +
 lib/core/librte_eal/linuxapp/xen_dom0/compat.h     |    15 +
 .../librte_eal/linuxapp/xen_dom0/dom0_mm_dev.h     |   107 +
 .../librte_eal/linuxapp/xen_dom0/dom0_mm_misc.c    |   781 ++
 lib/core/librte_malloc/Makefile                    |    48 +
 lib/core/librte_malloc/malloc_elem.c               |   321 +
 lib/core/librte_malloc/malloc_elem.h               |   190 +
 lib/core/librte_malloc/malloc_heap.c               |   210 +
 lib/core/librte_malloc/malloc_heap.h               |    65 +
 lib/core/librte_malloc/rte_malloc.c                |   261 +
 lib/core/librte_malloc/rte_malloc.h                |   342 +
 lib/core/librte_mbuf/Makefile                      |    48 +
 lib/core/librte_mbuf/rte_mbuf.c                    |   252 +
 lib/core/librte_mbuf/rte_mbuf.h                    |  1133 ++
 lib/core/librte_mempool/Makefile                   |    51 +
 lib/core/librte_mempool/rte_dom0_mempool.c         |   134 +
 lib/core/librte_mempool/rte_mempool.c              |   901 ++
 lib/core/librte_mempool/rte_mempool.h              |  1392 +++
 lib/core/librte_ring/Makefile                      |    48 +
 lib/core/librte_ring/rte_ring.c                    |   338 +
 lib/core/librte_ring/rte_ring.h                    |  1214 +++
 lib/librte_acl/Makefile                            |     5 +-
 lib/librte_cfgfile/Makefile                        |     3 +-
 lib/librte_cmdline/Makefile                        |     5 +-
 lib/librte_distributor/Makefile                    |     4 +-
 lib/librte_eal/Makefile                            |    39 -
 lib/librte_eal/bsdapp/Makefile                     |    38 -
 lib/librte_eal/bsdapp/contigmem/BSDmakefile        |    36 -
 lib/librte_eal/bsdapp/contigmem/Makefile           |    52 -
 lib/librte_eal/bsdapp/contigmem/contigmem.c        |   233 -
 lib/librte_eal/bsdapp/eal/Makefile                 |    97 -
 lib/librte_eal/bsdapp/eal/eal.c                    |   563 -
 lib/librte_eal/bsdapp/eal/eal_alarm.c              |    60 -
 lib/librte_eal/bsdapp/eal/eal_debug.c              |   113 -
 lib/librte_eal/bsdapp/eal/eal_hugepage_info.c      |   133 -
 lib/librte_eal/bsdapp/eal/eal_interrupts.c         |    71 -
 lib/librte_eal/bsdapp/eal/eal_lcore.c              |   107 -
 lib/librte_eal/bsdapp/eal/eal_log.c                |    57 -
 lib/librte_eal/bsdapp/eal/eal_memory.c             |   224 -
 lib/librte_eal/bsdapp/eal/eal_pci.c                |   510 -
 lib/librte_eal/bsdapp/eal/eal_thread.c             |   233 -
 lib/librte_eal/bsdapp/eal/eal_timer.c              |   141 -
 .../bsdapp/eal/include/exec-env/rte_dom0_common.h  |   107 -
 .../bsdapp/eal/include/exec-env/rte_interrupts.h   |    54 -
 lib/librte_eal/bsdapp/nic_uio/BSDmakefile          |    36 -
 lib/librte_eal/bsdapp/nic_uio/Makefile             |    52 -
 lib/librte_eal/bsdapp/nic_uio/nic_uio.c            |   329 -
 lib/librte_eal/common/Makefile                     |    61 -
 lib/librte_eal/common/eal_common_cpuflags.c        |    85 -
 lib/librte_eal/common/eal_common_dev.c             |   109 -
 lib/librte_eal/common/eal_common_devargs.c         |   152 -
 lib/librte_eal/common/eal_common_errno.c           |    74 -
 lib/librte_eal/common/eal_common_hexdump.c         |   121 -
 lib/librte_eal/common/eal_common_launch.c          |   120 -
 lib/librte_eal/common/eal_common_log.c             |   320 -
 lib/librte_eal/common/eal_common_memory.c          |   121 -
 lib/librte_eal/common/eal_common_memzone.c         |   533 -
 lib/librte_eal/common/eal_common_options.c         |   611 --
 lib/librte_eal/common/eal_common_pci.c             |   207 -
 lib/librte_eal/common/eal_common_string_fns.c      |    69 -
 lib/librte_eal/common/eal_common_tailqs.c          |   146 -
 lib/librte_eal/common/eal_filesystem.h             |   118 -
 lib/librte_eal/common/eal_hugepages.h              |    67 -
 lib/librte_eal/common/eal_internal_cfg.h           |    93 -
 lib/librte_eal/common/eal_options.h                |    93 -
 lib/librte_eal/common/eal_private.h                |   206 -
 lib/librte_eal/common/eal_thread.h                 |    53 -
 .../common/include/arch/ppc_64/rte_atomic.h        |   426 -
 .../common/include/arch/ppc_64/rte_byteorder.h     |   149 -
 .../common/include/arch/ppc_64/rte_cpuflags.h      |   187 -
 .../common/include/arch/ppc_64/rte_cycles.h        |    87 -
 .../common/include/arch/ppc_64/rte_memcpy.h        |   225 -
 .../common/include/arch/ppc_64/rte_prefetch.h      |    61 -
 .../common/include/arch/ppc_64/rte_spinlock.h      |    73 -
 .../common/include/arch/x86/rte_atomic.h           |   216 -
 .../common/include/arch/x86/rte_atomic_32.h        |   222 -
 .../common/include/arch/x86/rte_atomic_64.h        |   191 -
 .../common/include/arch/x86/rte_byteorder.h        |   125 -
 .../common/include/arch/x86/rte_byteorder_32.h     |    51 -
 .../common/include/arch/x86/rte_byteorder_64.h     |    52 -
 .../common/include/arch/x86/rte_cpuflags.h         |   310 -
 .../common/include/arch/x86/rte_cycles.h           |   121 -
 .../common/include/arch/x86/rte_memcpy.h           |   297 -
 .../common/include/arch/x86/rte_prefetch.h         |    62 -
 .../common/include/arch/x86/rte_spinlock.h         |    94 -
 lib/librte_eal/common/include/generic/rte_atomic.h |   918 --
 .../common/include/generic/rte_byteorder.h         |   217 -
 .../common/include/generic/rte_cpuflags.h          |   110 -
 lib/librte_eal/common/include/generic/rte_cycles.h |   205 -
 lib/librte_eal/common/include/generic/rte_memcpy.h |   144 -
 .../common/include/generic/rte_prefetch.h          |    71 -
 .../common/include/generic/rte_spinlock.h          |   226 -
 lib/librte_eal/common/include/rte_alarm.h          |   106 -
 .../common/include/rte_branch_prediction.h         |    70 -
 lib/librte_eal/common/include/rte_common.h         |   389 -
 lib/librte_eal/common/include/rte_common_vect.h    |    93 -
 lib/librte_eal/common/include/rte_debug.h          |   105 -
 lib/librte_eal/common/include/rte_dev.h            |   111 -
 lib/librte_eal/common/include/rte_devargs.h        |   149 -
 lib/librte_eal/common/include/rte_eal.h            |   269 -
 lib/librte_eal/common/include/rte_eal_memconfig.h  |   112 -
 lib/librte_eal/common/include/rte_errno.h          |    96 -
 lib/librte_eal/common/include/rte_hexdump.h        |    89 -
 lib/librte_eal/common/include/rte_interrupts.h     |   121 -
 lib/librte_eal/common/include/rte_launch.h         |   177 -
 lib/librte_eal/common/include/rte_lcore.h          |   229 -
 lib/librte_eal/common/include/rte_log.h            |   308 -
 lib/librte_eal/common/include/rte_malloc_heap.h    |    56 -
 lib/librte_eal/common/include/rte_memory.h         |   218 -
 lib/librte_eal/common/include/rte_memzone.h        |   278 -
 lib/librte_eal/common/include/rte_pci.h            |   305 -
 .../common/include/rte_pci_dev_feature_defs.h      |    45 -
 .../common/include/rte_pci_dev_features.h          |    44 -
 lib/librte_eal/common/include/rte_pci_dev_ids.h    |   540 -
 lib/librte_eal/common/include/rte_per_lcore.h      |    79 -
 lib/librte_eal/common/include/rte_random.h         |    91 -
 lib/librte_eal/common/include/rte_rwlock.h         |   158 -
 lib/librte_eal/common/include/rte_string_fns.h     |    81 -
 lib/librte_eal/common/include/rte_tailq.h          |   215 -
 lib/librte_eal/common/include/rte_tailq_elem.h     |    90 -
 lib/librte_eal/common/include/rte_version.h        |   129 -
 lib/librte_eal/common/include/rte_warnings.h       |    84 -
 lib/librte_eal/linuxapp/Makefile                   |    45 -
 lib/librte_eal/linuxapp/eal/Makefile               |   112 -
 lib/librte_eal/linuxapp/eal/eal.c                  |   861 --
 lib/librte_eal/linuxapp/eal/eal_alarm.c            |   268 -
 lib/librte_eal/linuxapp/eal/eal_debug.c            |   113 -
 lib/librte_eal/linuxapp/eal/eal_hugepage_info.c    |   359 -
 lib/librte_eal/linuxapp/eal/eal_interrupts.c       |   826 --
 lib/librte_eal/linuxapp/eal/eal_ivshmem.c          |   968 --
 lib/librte_eal/linuxapp/eal/eal_lcore.c            |   191 -
 lib/librte_eal/linuxapp/eal/eal_log.c              |   197 -
 lib/librte_eal/linuxapp/eal/eal_memory.c           |  1564 ---
 lib/librte_eal/linuxapp/eal/eal_pci.c              |   629 --
 lib/librte_eal/linuxapp/eal/eal_pci_init.h         |   122 -
 lib/librte_eal/linuxapp/eal/eal_pci_uio.c          |   440 -
 lib/librte_eal/linuxapp/eal/eal_pci_vfio.c         |   807 --
 lib/librte_eal/linuxapp/eal/eal_pci_vfio_mp_sync.c |   395 -
 lib/librte_eal/linuxapp/eal/eal_thread.c           |   233 -
 lib/librte_eal/linuxapp/eal/eal_timer.c            |   343 -
 lib/librte_eal/linuxapp/eal/eal_vfio.h             |    55 -
 lib/librte_eal/linuxapp/eal/eal_xen_memory.c       |   370 -
 .../eal/include/exec-env/rte_dom0_common.h         |   108 -
 .../linuxapp/eal/include/exec-env/rte_interrupts.h |    58 -
 .../linuxapp/eal/include/exec-env/rte_kni_common.h |   174 -
 lib/librte_eal/linuxapp/igb_uio/Makefile           |    53 -
 lib/librte_eal/linuxapp/igb_uio/compat.h           |   116 -
 lib/librte_eal/linuxapp/igb_uio/igb_uio.c          |   643 --
 lib/librte_eal/linuxapp/kni/Makefile               |    93 -
 lib/librte_eal/linuxapp/kni/compat.h               |    21 -
 lib/librte_eal/linuxapp/kni/ethtool/README         |   100 -
 lib/librte_eal/linuxapp/kni/ethtool/igb/COPYING    |   339 -
 .../linuxapp/kni/ethtool/igb/e1000_82575.c         |  3665 -------
 .../linuxapp/kni/ethtool/igb/e1000_82575.h         |   509 -
 .../linuxapp/kni/ethtool/igb/e1000_api.c           |  1160 ---
 .../linuxapp/kni/ethtool/igb/e1000_api.h           |   157 -
 .../linuxapp/kni/ethtool/igb/e1000_defines.h       |  1380 ---
 lib/librte_eal/linuxapp/kni/ethtool/igb/e1000_hw.h |   793 --
 .../linuxapp/kni/ethtool/igb/e1000_i210.c          |   909 --
 .../linuxapp/kni/ethtool/igb/e1000_i210.h          |    91 -
 .../linuxapp/kni/ethtool/igb/e1000_mac.c           |  2096 ----
 .../linuxapp/kni/ethtool/igb/e1000_mac.h           |    80 -
 .../linuxapp/kni/ethtool/igb/e1000_manage.c        |   556 -
 .../linuxapp/kni/ethtool/igb/e1000_manage.h        |    89 -
 .../linuxapp/kni/ethtool/igb/e1000_mbx.c           |   526 -
 .../linuxapp/kni/ethtool/igb/e1000_mbx.h           |    87 -
 .../linuxapp/kni/ethtool/igb/e1000_nvm.c           |   967 --
 .../linuxapp/kni/ethtool/igb/e1000_nvm.h           |    75 -
 .../linuxapp/kni/ethtool/igb/e1000_osdep.h         |   136 -
 .../linuxapp/kni/ethtool/igb/e1000_phy.c           |  3405 ------
 .../linuxapp/kni/ethtool/igb/e1000_phy.h           |   256 -
 .../linuxapp/kni/ethtool/igb/e1000_regs.h          |   646 --
 lib/librte_eal/linuxapp/kni/ethtool/igb/igb.h      |   859 --
 .../linuxapp/kni/ethtool/igb/igb_debugfs.c         |    29 -
 .../linuxapp/kni/ethtool/igb/igb_ethtool.c         |  2859 ------
 .../linuxapp/kni/ethtool/igb/igb_hwmon.c           |   260 -
 lib/librte_eal/linuxapp/kni/ethtool/igb/igb_main.c | 10263 -------------------
 .../linuxapp/kni/ethtool/igb/igb_param.c           |   848 --
 .../linuxapp/kni/ethtool/igb/igb_procfs.c          |   363 -
 lib/librte_eal/linuxapp/kni/ethtool/igb/igb_ptp.c  |   944 --
 .../linuxapp/kni/ethtool/igb/igb_regtest.h         |   251 -
 lib/librte_eal/linuxapp/kni/ethtool/igb/igb_vmdq.c |   437 -
 lib/librte_eal/linuxapp/kni/ethtool/igb/igb_vmdq.h |    46 -
 lib/librte_eal/linuxapp/kni/ethtool/igb/kcompat.c  |  1482 ---
 lib/librte_eal/linuxapp/kni/ethtool/igb/kcompat.h  |  3884 -------
 .../linuxapp/kni/ethtool/igb/kcompat_ethtool.c     |  1172 ---
 lib/librte_eal/linuxapp/kni/ethtool/ixgbe/COPYING  |   339 -
 lib/librte_eal/linuxapp/kni/ethtool/ixgbe/ixgbe.h  |   925 --
 .../linuxapp/kni/ethtool/ixgbe/ixgbe_82598.c       |  1296 ---
 .../linuxapp/kni/ethtool/ixgbe/ixgbe_82598.h       |    44 -
 .../linuxapp/kni/ethtool/ixgbe/ixgbe_82599.c       |  2314 -----
 .../linuxapp/kni/ethtool/ixgbe/ixgbe_82599.h       |    58 -
 .../linuxapp/kni/ethtool/ixgbe/ixgbe_api.c         |  1158 ---
 .../linuxapp/kni/ethtool/ixgbe/ixgbe_api.h         |   168 -
 .../linuxapp/kni/ethtool/ixgbe/ixgbe_common.c      |  4083 --------
 .../linuxapp/kni/ethtool/ixgbe/ixgbe_common.h      |   140 -
 .../linuxapp/kni/ethtool/ixgbe/ixgbe_dcb.h         |   168 -
 .../linuxapp/kni/ethtool/ixgbe/ixgbe_ethtool.c     |  2901 ------
 .../linuxapp/kni/ethtool/ixgbe/ixgbe_fcoe.h        |    91 -
 .../linuxapp/kni/ethtool/ixgbe/ixgbe_main.c        |  2975 ------
 .../linuxapp/kni/ethtool/ixgbe/ixgbe_mbx.h         |   105 -
 .../linuxapp/kni/ethtool/ixgbe/ixgbe_osdep.h       |   132 -
 .../linuxapp/kni/ethtool/ixgbe/ixgbe_phy.c         |  1847 ----
 .../linuxapp/kni/ethtool/ixgbe/ixgbe_phy.h         |   137 -
 .../linuxapp/kni/ethtool/ixgbe/ixgbe_sriov.h       |    74 -
 .../linuxapp/kni/ethtool/ixgbe/ixgbe_type.h        |  3254 ------
 .../linuxapp/kni/ethtool/ixgbe/ixgbe_x540.c        |   938 --
 .../linuxapp/kni/ethtool/ixgbe/ixgbe_x540.h        |    58 -
 .../linuxapp/kni/ethtool/ixgbe/kcompat.c           |  1246 ---
 .../linuxapp/kni/ethtool/ixgbe/kcompat.h           |  3143 ------
 lib/librte_eal/linuxapp/kni/kni_dev.h              |   150 -
 lib/librte_eal/linuxapp/kni/kni_ethtool.c          |   217 -
 lib/librte_eal/linuxapp/kni/kni_fifo.h             |   108 -
 lib/librte_eal/linuxapp/kni/kni_misc.c             |   606 --
 lib/librte_eal/linuxapp/kni/kni_net.c              |   687 --
 lib/librte_eal/linuxapp/kni/kni_vhost.c            |   811 --
 lib/librte_eal/linuxapp/xen_dom0/Makefile          |    56 -
 lib/librte_eal/linuxapp/xen_dom0/compat.h          |    15 -
 lib/librte_eal/linuxapp/xen_dom0/dom0_mm_dev.h     |   107 -
 lib/librte_eal/linuxapp/xen_dom0/dom0_mm_misc.c    |   781 --
 lib/librte_ether/Makefile                          |     3 +-
 lib/librte_hash/Makefile                           |     3 +-
 lib/librte_ip_frag/Makefile                        |     5 +-
 lib/librte_ivshmem/Makefile                        |     3 +-
 lib/librte_kni/Makefile                            |     5 +-
 lib/librte_kvargs/Makefile                         |     5 +-
 lib/librte_lpm/Makefile                            |     5 +-
 lib/librte_malloc/Makefile                         |    48 -
 lib/librte_malloc/malloc_elem.c                    |   321 -
 lib/librte_malloc/malloc_elem.h                    |   190 -
 lib/librte_malloc/malloc_heap.c                    |   210 -
 lib/librte_malloc/malloc_heap.h                    |    65 -
 lib/librte_malloc/rte_malloc.c                     |   261 -
 lib/librte_malloc/rte_malloc.h                     |   342 -
 lib/librte_mbuf/Makefile                           |    48 -
 lib/librte_mbuf/rte_mbuf.c                         |   252 -
 lib/librte_mbuf/rte_mbuf.h                         |  1133 --
 lib/librte_mempool/Makefile                        |    51 -
 lib/librte_mempool/rte_dom0_mempool.c              |   134 -
 lib/librte_mempool/rte_mempool.c                   |   901 --
 lib/librte_mempool/rte_mempool.h                   |  1392 ---
 lib/librte_meter/Makefile                          |     4 +-
 lib/librte_pipeline/Makefile                       |     3 +
 lib/librte_pmd_af_packet/Makefile                  |     5 +-
 lib/librte_pmd_bond/Makefile                       |     7 +-
 lib/librte_pmd_e1000/Makefile                      |     8 +-
 lib/librte_pmd_enic/Makefile                       |     8 +-
 lib/librte_pmd_i40e/Makefile                       |     8 +-
 lib/librte_pmd_ixgbe/Makefile                      |     8 +-
 lib/librte_pmd_pcap/Makefile                       |     5 +-
 lib/librte_pmd_ring/Makefile                       |     6 +-
 lib/librte_pmd_virtio/Makefile                     |     8 +-
 lib/librte_pmd_vmxnet3/Makefile                    |     8 +-
 lib/librte_pmd_xenvirt/Makefile                    |     8 +-
 lib/librte_port/Makefile                           |     8 +-
 lib/librte_power/Makefile                          |     4 +-
 lib/librte_ring/Makefile                           |    48 -
 lib/librte_ring/rte_ring.c                         |   338 -
 lib/librte_ring/rte_ring.h                         |  1214 ---
 lib/librte_sched/Makefile                          |     7 +-
 lib/librte_table/Makefile                          |     8 +-
 lib/librte_timer/Makefile                          |     6 +-
 lib/librte_vhost/Makefile                          |     8 +-
 mk/exec-env/linuxapp/rte.vars.mk                   |     2 +
 mk/rte.app.mk                                      |    74 +-
 mk/rte.corelib.mk                                  |    81 +
 mk/rte.lib.mk                                      |    49 +-
 mk/rte.sdkbuild.mk                                 |     3 -
 mk/rte.sharelib.mk                                 |   101 -
 mk/rte.vars.mk                                     |     9 -
 468 files changed, 106598 insertions(+), 106572 deletions(-)
 create mode 100644 lib/core/Makefile
 create mode 100644 lib/core/librte_core/Makefile
 create mode 100644 lib/core/librte_eal/Makefile
 create mode 100644 lib/core/librte_eal/bsdapp/Makefile
 create mode 100644 lib/core/librte_eal/bsdapp/contigmem/BSDmakefile
 create mode 100644 lib/core/librte_eal/bsdapp/contigmem/Makefile
 create mode 100644 lib/core/librte_eal/bsdapp/contigmem/contigmem.c
 create mode 100644 lib/core/librte_eal/bsdapp/eal/Makefile
 create mode 100644 lib/core/librte_eal/bsdapp/eal/eal.c
 create mode 100644 lib/core/librte_eal/bsdapp/eal/eal_alarm.c
 create mode 100644 lib/core/librte_eal/bsdapp/eal/eal_debug.c
 create mode 100644 lib/core/librte_eal/bsdapp/eal/eal_hugepage_info.c
 create mode 100644 lib/core/librte_eal/bsdapp/eal/eal_interrupts.c
 create mode 100644 lib/core/librte_eal/bsdapp/eal/eal_lcore.c
 create mode 100644 lib/core/librte_eal/bsdapp/eal/eal_log.c
 create mode 100644 lib/core/librte_eal/bsdapp/eal/eal_memory.c
 create mode 100644 lib/core/librte_eal/bsdapp/eal/eal_pci.c
 create mode 100644 lib/core/librte_eal/bsdapp/eal/eal_thread.c
 create mode 100644 lib/core/librte_eal/bsdapp/eal/eal_timer.c
 create mode 100644 lib/core/librte_eal/bsdapp/eal/include/exec-env/rte_dom0_common.h
 create mode 100644 lib/core/librte_eal/bsdapp/eal/include/exec-env/rte_interrupts.h
 create mode 100644 lib/core/librte_eal/bsdapp/nic_uio/BSDmakefile
 create mode 100644 lib/core/librte_eal/bsdapp/nic_uio/Makefile
 create mode 100644 lib/core/librte_eal/bsdapp/nic_uio/nic_uio.c
 create mode 100644 lib/core/librte_eal/common/Makefile
 create mode 100644 lib/core/librte_eal/common/eal_common_cpuflags.c
 create mode 100644 lib/core/librte_eal/common/eal_common_dev.c
 create mode 100644 lib/core/librte_eal/common/eal_common_devargs.c
 create mode 100644 lib/core/librte_eal/common/eal_common_errno.c
 create mode 100644 lib/core/librte_eal/common/eal_common_hexdump.c
 create mode 100644 lib/core/librte_eal/common/eal_common_launch.c
 create mode 100644 lib/core/librte_eal/common/eal_common_log.c
 create mode 100644 lib/core/librte_eal/common/eal_common_memory.c
 create mode 100644 lib/core/librte_eal/common/eal_common_memzone.c
 create mode 100644 lib/core/librte_eal/common/eal_common_options.c
 create mode 100644 lib/core/librte_eal/common/eal_common_pci.c
 create mode 100644 lib/core/librte_eal/common/eal_common_string_fns.c
 create mode 100644 lib/core/librte_eal/common/eal_common_tailqs.c
 create mode 100644 lib/core/librte_eal/common/eal_filesystem.h
 create mode 100644 lib/core/librte_eal/common/eal_hugepages.h
 create mode 100644 lib/core/librte_eal/common/eal_internal_cfg.h
 create mode 100644 lib/core/librte_eal/common/eal_options.h
 create mode 100644 lib/core/librte_eal/common/eal_private.h
 create mode 100644 lib/core/librte_eal/common/eal_thread.h
 create mode 100644 lib/core/librte_eal/common/include/arch/ppc_64/rte_atomic.h
 create mode 100644 lib/core/librte_eal/common/include/arch/ppc_64/rte_byteorder.h
 create mode 100644 lib/core/librte_eal/common/include/arch/ppc_64/rte_cpuflags.h
 create mode 100644 lib/core/librte_eal/common/include/arch/ppc_64/rte_cycles.h
 create mode 100644 lib/core/librte_eal/common/include/arch/ppc_64/rte_memcpy.h
 create mode 100644 lib/core/librte_eal/common/include/arch/ppc_64/rte_prefetch.h
 create mode 100644 lib/core/librte_eal/common/include/arch/ppc_64/rte_spinlock.h
 create mode 100644 lib/core/librte_eal/common/include/arch/x86/rte_atomic.h
 create mode 100644 lib/core/librte_eal/common/include/arch/x86/rte_atomic_32.h
 create mode 100644 lib/core/librte_eal/common/include/arch/x86/rte_atomic_64.h
 create mode 100644 lib/core/librte_eal/common/include/arch/x86/rte_byteorder.h
 create mode 100644 lib/core/librte_eal/common/include/arch/x86/rte_byteorder_32.h
 create mode 100644 lib/core/librte_eal/common/include/arch/x86/rte_byteorder_64.h
 create mode 100644 lib/core/librte_eal/common/include/arch/x86/rte_cpuflags.h
 create mode 100644 lib/core/librte_eal/common/include/arch/x86/rte_cycles.h
 create mode 100644 lib/core/librte_eal/common/include/arch/x86/rte_memcpy.h
 create mode 100644 lib/core/librte_eal/common/include/arch/x86/rte_prefetch.h
 create mode 100644 lib/core/librte_eal/common/include/arch/x86/rte_spinlock.h
 create mode 100644 lib/core/librte_eal/common/include/generic/rte_atomic.h
 create mode 100644 lib/core/librte_eal/common/include/generic/rte_byteorder.h
 create mode 100644 lib/core/librte_eal/common/include/generic/rte_cpuflags.h
 create mode 100644 lib/core/librte_eal/common/include/generic/rte_cycles.h
 create mode 100644 lib/core/librte_eal/common/include/generic/rte_memcpy.h
 create mode 100644 lib/core/librte_eal/common/include/generic/rte_prefetch.h
 create mode 100644 lib/core/librte_eal/common/include/generic/rte_spinlock.h
 create mode 100644 lib/core/librte_eal/common/include/rte_alarm.h
 create mode 100644 lib/core/librte_eal/common/include/rte_branch_prediction.h
 create mode 100644 lib/core/librte_eal/common/include/rte_common.h
 create mode 100644 lib/core/librte_eal/common/include/rte_common_vect.h
 create mode 100644 lib/core/librte_eal/common/include/rte_debug.h
 create mode 100644 lib/core/librte_eal/common/include/rte_dev.h
 create mode 100644 lib/core/librte_eal/common/include/rte_devargs.h
 create mode 100644 lib/core/librte_eal/common/include/rte_eal.h
 create mode 100644 lib/core/librte_eal/common/include/rte_eal_memconfig.h
 create mode 100644 lib/core/librte_eal/common/include/rte_errno.h
 create mode 100644 lib/core/librte_eal/common/include/rte_hexdump.h
 create mode 100644 lib/core/librte_eal/common/include/rte_interrupts.h
 create mode 100644 lib/core/librte_eal/common/include/rte_launch.h
 create mode 100644 lib/core/librte_eal/common/include/rte_lcore.h
 create mode 100644 lib/core/librte_eal/common/include/rte_log.h
 create mode 100644 lib/core/librte_eal/common/include/rte_malloc_heap.h
 create mode 100644 lib/core/librte_eal/common/include/rte_memory.h
 create mode 100644 lib/core/librte_eal/common/include/rte_memzone.h
 create mode 100644 lib/core/librte_eal/common/include/rte_pci.h
 create mode 100644 lib/core/librte_eal/common/include/rte_pci_dev_feature_defs.h
 create mode 100644 lib/core/librte_eal/common/include/rte_pci_dev_features.h
 create mode 100644 lib/core/librte_eal/common/include/rte_pci_dev_ids.h
 create mode 100644 lib/core/librte_eal/common/include/rte_per_lcore.h
 create mode 100644 lib/core/librte_eal/common/include/rte_random.h
 create mode 100644 lib/core/librte_eal/common/include/rte_rwlock.h
 create mode 100644 lib/core/librte_eal/common/include/rte_string_fns.h
 create mode 100644 lib/core/librte_eal/common/include/rte_tailq.h
 create mode 100644 lib/core/librte_eal/common/include/rte_tailq_elem.h
 create mode 100644 lib/core/librte_eal/common/include/rte_version.h
 create mode 100644 lib/core/librte_eal/common/include/rte_warnings.h
 create mode 100644 lib/core/librte_eal/linuxapp/Makefile
 create mode 100644 lib/core/librte_eal/linuxapp/eal/Makefile
 create mode 100644 lib/core/librte_eal/linuxapp/eal/eal.c
 create mode 100644 lib/core/librte_eal/linuxapp/eal/eal_alarm.c
 create mode 100644 lib/core/librte_eal/linuxapp/eal/eal_debug.c
 create mode 100644 lib/core/librte_eal/linuxapp/eal/eal_hugepage_info.c
 create mode 100644 lib/core/librte_eal/linuxapp/eal/eal_interrupts.c
 create mode 100644 lib/core/librte_eal/linuxapp/eal/eal_ivshmem.c
 create mode 100644 lib/core/librte_eal/linuxapp/eal/eal_lcore.c
 create mode 100644 lib/core/librte_eal/linuxapp/eal/eal_log.c
 create mode 100644 lib/core/librte_eal/linuxapp/eal/eal_memory.c
 create mode 100644 lib/core/librte_eal/linuxapp/eal/eal_pci.c
 create mode 100644 lib/core/librte_eal/linuxapp/eal/eal_pci_init.h
 create mode 100644 lib/core/librte_eal/linuxapp/eal/eal_pci_uio.c
 create mode 100644 lib/core/librte_eal/linuxapp/eal/eal_pci_vfio.c
 create mode 100644 lib/core/librte_eal/linuxapp/eal/eal_pci_vfio_mp_sync.c
 create mode 100644 lib/core/librte_eal/linuxapp/eal/eal_thread.c
 create mode 100644 lib/core/librte_eal/linuxapp/eal/eal_timer.c
 create mode 100644 lib/core/librte_eal/linuxapp/eal/eal_vfio.h
 create mode 100644 lib/core/librte_eal/linuxapp/eal/eal_xen_memory.c
 create mode 100644 lib/core/librte_eal/linuxapp/eal/include/exec-env/rte_dom0_common.h
 create mode 100644 lib/core/librte_eal/linuxapp/eal/include/exec-env/rte_interrupts.h
 create mode 100644 lib/core/librte_eal/linuxapp/eal/include/exec-env/rte_kni_common.h
 create mode 100644 lib/core/librte_eal/linuxapp/igb_uio/Makefile
 create mode 100644 lib/core/librte_eal/linuxapp/igb_uio/compat.h
 create mode 100644 lib/core/librte_eal/linuxapp/igb_uio/igb_uio.c
 create mode 100644 lib/core/librte_eal/linuxapp/kni/Makefile
 create mode 100644 lib/core/librte_eal/linuxapp/kni/compat.h
 create mode 100644 lib/core/librte_eal/linuxapp/kni/ethtool/README
 create mode 100644 lib/core/librte_eal/linuxapp/kni/ethtool/igb/COPYING
 create mode 100644 lib/core/librte_eal/linuxapp/kni/ethtool/igb/e1000_82575.c
 create mode 100644 lib/core/librte_eal/linuxapp/kni/ethtool/igb/e1000_82575.h
 create mode 100644 lib/core/librte_eal/linuxapp/kni/ethtool/igb/e1000_api.c
 create mode 100644 lib/core/librte_eal/linuxapp/kni/ethtool/igb/e1000_api.h
 create mode 100644 lib/core/librte_eal/linuxapp/kni/ethtool/igb/e1000_defines.h
 create mode 100644 lib/core/librte_eal/linuxapp/kni/ethtool/igb/e1000_hw.h
 create mode 100644 lib/core/librte_eal/linuxapp/kni/ethtool/igb/e1000_i210.c
 create mode 100644 lib/core/librte_eal/linuxapp/kni/ethtool/igb/e1000_i210.h
 create mode 100644 lib/core/librte_eal/linuxapp/kni/ethtool/igb/e1000_mac.c
 create mode 100644 lib/core/librte_eal/linuxapp/kni/ethtool/igb/e1000_mac.h
 create mode 100644 lib/core/librte_eal/linuxapp/kni/ethtool/igb/e1000_manage.c
 create mode 100644 lib/core/librte_eal/linuxapp/kni/ethtool/igb/e1000_manage.h
 create mode 100644 lib/core/librte_eal/linuxapp/kni/ethtool/igb/e1000_mbx.c
 create mode 100644 lib/core/librte_eal/linuxapp/kni/ethtool/igb/e1000_mbx.h
 create mode 100644 lib/core/librte_eal/linuxapp/kni/ethtool/igb/e1000_nvm.c
 create mode 100644 lib/core/librte_eal/linuxapp/kni/ethtool/igb/e1000_nvm.h
 create mode 100644 lib/core/librte_eal/linuxapp/kni/ethtool/igb/e1000_osdep.h
 create mode 100644 lib/core/librte_eal/linuxapp/kni/ethtool/igb/e1000_phy.c
 create mode 100644 lib/core/librte_eal/linuxapp/kni/ethtool/igb/e1000_phy.h
 create mode 100644 lib/core/librte_eal/linuxapp/kni/ethtool/igb/e1000_regs.h
 create mode 100644 lib/core/librte_eal/linuxapp/kni/ethtool/igb/igb.h
 create mode 100644 lib/core/librte_eal/linuxapp/kni/ethtool/igb/igb_debugfs.c
 create mode 100644 lib/core/librte_eal/linuxapp/kni/ethtool/igb/igb_ethtool.c
 create mode 100644 lib/core/librte_eal/linuxapp/kni/ethtool/igb/igb_hwmon.c
 create mode 100644 lib/core/librte_eal/linuxapp/kni/ethtool/igb/igb_main.c
 create mode 100644 lib/core/librte_eal/linuxapp/kni/ethtool/igb/igb_param.c
 create mode 100644 lib/core/librte_eal/linuxapp/kni/ethtool/igb/igb_procfs.c
 create mode 100644 lib/core/librte_eal/linuxapp/kni/ethtool/igb/igb_ptp.c
 create mode 100644 lib/core/librte_eal/linuxapp/kni/ethtool/igb/igb_regtest.h
 create mode 100644 lib/core/librte_eal/linuxapp/kni/ethtool/igb/igb_vmdq.c
 create mode 100644 lib/core/librte_eal/linuxapp/kni/ethtool/igb/igb_vmdq.h
 create mode 100644 lib/core/librte_eal/linuxapp/kni/ethtool/igb/kcompat.c
 create mode 100644 lib/core/librte_eal/linuxapp/kni/ethtool/igb/kcompat.h
 create mode 100644 lib/core/librte_eal/linuxapp/kni/ethtool/igb/kcompat_ethtool.c
 create mode 100644 lib/core/librte_eal/linuxapp/kni/ethtool/ixgbe/COPYING
 create mode 100644 lib/core/librte_eal/linuxapp/kni/ethtool/ixgbe/ixgbe.h
 create mode 100644 lib/core/librte_eal/linuxapp/kni/ethtool/ixgbe/ixgbe_82598.c
 create mode 100644 lib/core/librte_eal/linuxapp/kni/ethtool/ixgbe/ixgbe_82598.h
 create mode 100644 lib/core/librte_eal/linuxapp/kni/ethtool/ixgbe/ixgbe_82599.c
 create mode 100644 lib/core/librte_eal/linuxapp/kni/ethtool/ixgbe/ixgbe_82599.h
 create mode 100644 lib/core/librte_eal/linuxapp/kni/ethtool/ixgbe/ixgbe_api.c
 create mode 100644 lib/core/librte_eal/linuxapp/kni/ethtool/ixgbe/ixgbe_api.h
 create mode 100644 lib/core/librte_eal/linuxapp/kni/ethtool/ixgbe/ixgbe_common.c
 create mode 100644 lib/core/librte_eal/linuxapp/kni/ethtool/ixgbe/ixgbe_common.h
 create mode 100644 lib/core/librte_eal/linuxapp/kni/ethtool/ixgbe/ixgbe_dcb.h
 create mode 100644 lib/core/librte_eal/linuxapp/kni/ethtool/ixgbe/ixgbe_ethtool.c
 create mode 100644 lib/core/librte_eal/linuxapp/kni/ethtool/ixgbe/ixgbe_fcoe.h
 create mode 100644 lib/core/librte_eal/linuxapp/kni/ethtool/ixgbe/ixgbe_main.c
 create mode 100644 lib/core/librte_eal/linuxapp/kni/ethtool/ixgbe/ixgbe_mbx.h
 create mode 100644 lib/core/librte_eal/linuxapp/kni/ethtool/ixgbe/ixgbe_osdep.h
 create mode 100644 lib/core/librte_eal/linuxapp/kni/ethtool/ixgbe/ixgbe_phy.c
 create mode 100644 lib/core/librte_eal/linuxapp/kni/ethtool/ixgbe/ixgbe_phy.h
 create mode 100644 lib/core/librte_eal/linuxapp/kni/ethtool/ixgbe/ixgbe_sriov.h
 create mode 100644 lib/core/librte_eal/linuxapp/kni/ethtool/ixgbe/ixgbe_type.h
 create mode 100644 lib/core/librte_eal/linuxapp/kni/ethtool/ixgbe/ixgbe_x540.c
 create mode 100644 lib/core/librte_eal/linuxapp/kni/ethtool/ixgbe/ixgbe_x540.h
 create mode 100644 lib/core/librte_eal/linuxapp/kni/ethtool/ixgbe/kcompat.c
 create mode 100644 lib/core/librte_eal/linuxapp/kni/ethtool/ixgbe/kcompat.h
 create mode 100644 lib/core/librte_eal/linuxapp/kni/kni_dev.h
 create mode 100644 lib/core/librte_eal/linuxapp/kni/kni_ethtool.c
 create mode 100644 lib/core/librte_eal/linuxapp/kni/kni_fifo.h
 create mode 100644 lib/core/librte_eal/linuxapp/kni/kni_misc.c
 create mode 100644 lib/core/librte_eal/linuxapp/kni/kni_net.c
 create mode 100644 lib/core/librte_eal/linuxapp/kni/kni_vhost.c
 create mode 100644 lib/core/librte_eal/linuxapp/xen_dom0/Makefile
 create mode 100644 lib/core/librte_eal/linuxapp/xen_dom0/compat.h
 create mode 100644 lib/core/librte_eal/linuxapp/xen_dom0/dom0_mm_dev.h
 create mode 100644 lib/core/librte_eal/linuxapp/xen_dom0/dom0_mm_misc.c
 create mode 100644 lib/core/librte_malloc/Makefile
 create mode 100644 lib/core/librte_malloc/malloc_elem.c
 create mode 100644 lib/core/librte_malloc/malloc_elem.h
 create mode 100644 lib/core/librte_malloc/malloc_heap.c
 create mode 100644 lib/core/librte_malloc/malloc_heap.h
 create mode 100644 lib/core/librte_malloc/rte_malloc.c
 create mode 100644 lib/core/librte_malloc/rte_malloc.h
 create mode 100644 lib/core/librte_mbuf/Makefile
 create mode 100644 lib/core/librte_mbuf/rte_mbuf.c
 create mode 100644 lib/core/librte_mbuf/rte_mbuf.h
 create mode 100644 lib/core/librte_mempool/Makefile
 create mode 100644 lib/core/librte_mempool/rte_dom0_mempool.c
 create mode 100644 lib/core/librte_mempool/rte_mempool.c
 create mode 100644 lib/core/librte_mempool/rte_mempool.h
 create mode 100644 lib/core/librte_ring/Makefile
 create mode 100644 lib/core/librte_ring/rte_ring.c
 create mode 100644 lib/core/librte_ring/rte_ring.h
 delete mode 100644 lib/librte_eal/Makefile
 delete mode 100644 lib/librte_eal/bsdapp/Makefile
 delete mode 100644 lib/librte_eal/bsdapp/contigmem/BSDmakefile
 delete mode 100644 lib/librte_eal/bsdapp/contigmem/Makefile
 delete mode 100644 lib/librte_eal/bsdapp/contigmem/contigmem.c
 delete mode 100644 lib/librte_eal/bsdapp/eal/Makefile
 delete mode 100644 lib/librte_eal/bsdapp/eal/eal.c
 delete mode 100644 lib/librte_eal/bsdapp/eal/eal_alarm.c
 delete mode 100644 lib/librte_eal/bsdapp/eal/eal_debug.c
 delete mode 100644 lib/librte_eal/bsdapp/eal/eal_hugepage_info.c
 delete mode 100644 lib/librte_eal/bsdapp/eal/eal_interrupts.c
 delete mode 100644 lib/librte_eal/bsdapp/eal/eal_lcore.c
 delete mode 100644 lib/librte_eal/bsdapp/eal/eal_log.c
 delete mode 100644 lib/librte_eal/bsdapp/eal/eal_memory.c
 delete mode 100644 lib/librte_eal/bsdapp/eal/eal_pci.c
 delete mode 100644 lib/librte_eal/bsdapp/eal/eal_thread.c
 delete mode 100644 lib/librte_eal/bsdapp/eal/eal_timer.c
 delete mode 100644 lib/librte_eal/bsdapp/eal/include/exec-env/rte_dom0_common.h
 delete mode 100644 lib/librte_eal/bsdapp/eal/include/exec-env/rte_interrupts.h
 delete mode 100644 lib/librte_eal/bsdapp/nic_uio/BSDmakefile
 delete mode 100644 lib/librte_eal/bsdapp/nic_uio/Makefile
 delete mode 100644 lib/librte_eal/bsdapp/nic_uio/nic_uio.c
 delete mode 100644 lib/librte_eal/common/Makefile
 delete mode 100644 lib/librte_eal/common/eal_common_cpuflags.c
 delete mode 100644 lib/librte_eal/common/eal_common_dev.c
 delete mode 100644 lib/librte_eal/common/eal_common_devargs.c
 delete mode 100644 lib/librte_eal/common/eal_common_errno.c
 delete mode 100644 lib/librte_eal/common/eal_common_hexdump.c
 delete mode 100644 lib/librte_eal/common/eal_common_launch.c
 delete mode 100644 lib/librte_eal/common/eal_common_log.c
 delete mode 100644 lib/librte_eal/common/eal_common_memory.c
 delete mode 100644 lib/librte_eal/common/eal_common_memzone.c
 delete mode 100644 lib/librte_eal/common/eal_common_options.c
 delete mode 100644 lib/librte_eal/common/eal_common_pci.c
 delete mode 100644 lib/librte_eal/common/eal_common_string_fns.c
 delete mode 100644 lib/librte_eal/common/eal_common_tailqs.c
 delete mode 100644 lib/librte_eal/common/eal_filesystem.h
 delete mode 100644 lib/librte_eal/common/eal_hugepages.h
 delete mode 100644 lib/librte_eal/common/eal_internal_cfg.h
 delete mode 100644 lib/librte_eal/common/eal_options.h
 delete mode 100644 lib/librte_eal/common/eal_private.h
 delete mode 100644 lib/librte_eal/common/eal_thread.h
 delete mode 100644 lib/librte_eal/common/include/arch/ppc_64/rte_atomic.h
 delete mode 100644 lib/librte_eal/common/include/arch/ppc_64/rte_byteorder.h
 delete mode 100644 lib/librte_eal/common/include/arch/ppc_64/rte_cpuflags.h
 delete mode 100644 lib/librte_eal/common/include/arch/ppc_64/rte_cycles.h
 delete mode 100644 lib/librte_eal/common/include/arch/ppc_64/rte_memcpy.h
 delete mode 100644 lib/librte_eal/common/include/arch/ppc_64/rte_prefetch.h
 delete mode 100644 lib/librte_eal/common/include/arch/ppc_64/rte_spinlock.h
 delete mode 100644 lib/librte_eal/common/include/arch/x86/rte_atomic.h
 delete mode 100644 lib/librte_eal/common/include/arch/x86/rte_atomic_32.h
 delete mode 100644 lib/librte_eal/common/include/arch/x86/rte_atomic_64.h
 delete mode 100644 lib/librte_eal/common/include/arch/x86/rte_byteorder.h
 delete mode 100644 lib/librte_eal/common/include/arch/x86/rte_byteorder_32.h
 delete mode 100644 lib/librte_eal/common/include/arch/x86/rte_byteorder_64.h
 delete mode 100644 lib/librte_eal/common/include/arch/x86/rte_cpuflags.h
 delete mode 100644 lib/librte_eal/common/include/arch/x86/rte_cycles.h
 delete mode 100644 lib/librte_eal/common/include/arch/x86/rte_memcpy.h
 delete mode 100644 lib/librte_eal/common/include/arch/x86/rte_prefetch.h
 delete mode 100644 lib/librte_eal/common/include/arch/x86/rte_spinlock.h
 delete mode 100644 lib/librte_eal/common/include/generic/rte_atomic.h
 delete mode 100644 lib/librte_eal/common/include/generic/rte_byteorder.h
 delete mode 100644 lib/librte_eal/common/include/generic/rte_cpuflags.h
 delete mode 100644 lib/librte_eal/common/include/generic/rte_cycles.h
 delete mode 100644 lib/librte_eal/common/include/generic/rte_memcpy.h
 delete mode 100644 lib/librte_eal/common/include/generic/rte_prefetch.h
 delete mode 100644 lib/librte_eal/common/include/generic/rte_spinlock.h
 delete mode 100644 lib/librte_eal/common/include/rte_alarm.h
 delete mode 100644 lib/librte_eal/common/include/rte_branch_prediction.h
 delete mode 100644 lib/librte_eal/common/include/rte_common.h
 delete mode 100644 lib/librte_eal/common/include/rte_common_vect.h
 delete mode 100644 lib/librte_eal/common/include/rte_debug.h
 delete mode 100644 lib/librte_eal/common/include/rte_dev.h
 delete mode 100644 lib/librte_eal/common/include/rte_devargs.h
 delete mode 100644 lib/librte_eal/common/include/rte_eal.h
 delete mode 100644 lib/librte_eal/common/include/rte_eal_memconfig.h
 delete mode 100644 lib/librte_eal/common/include/rte_errno.h
 delete mode 100644 lib/librte_eal/common/include/rte_hexdump.h
 delete mode 100644 lib/librte_eal/common/include/rte_interrupts.h
 delete mode 100644 lib/librte_eal/common/include/rte_launch.h
 delete mode 100644 lib/librte_eal/common/include/rte_lcore.h
 delete mode 100644 lib/librte_eal/common/include/rte_log.h
 delete mode 100644 lib/librte_eal/common/include/rte_malloc_heap.h
 delete mode 100644 lib/librte_eal/common/include/rte_memory.h
 delete mode 100644 lib/librte_eal/common/include/rte_memzone.h
 delete mode 100644 lib/librte_eal/common/include/rte_pci.h
 delete mode 100644 lib/librte_eal/common/include/rte_pci_dev_feature_defs.h
 delete mode 100644 lib/librte_eal/common/include/rte_pci_dev_features.h
 delete mode 100644 lib/librte_eal/common/include/rte_pci_dev_ids.h
 delete mode 100644 lib/librte_eal/common/include/rte_per_lcore.h
 delete mode 100644 lib/librte_eal/common/include/rte_random.h
 delete mode 100644 lib/librte_eal/common/include/rte_rwlock.h
 delete mode 100644 lib/librte_eal/common/include/rte_string_fns.h
 delete mode 100644 lib/librte_eal/common/include/rte_tailq.h
 delete mode 100644 lib/librte_eal/common/include/rte_tailq_elem.h
 delete mode 100644 lib/librte_eal/common/include/rte_version.h
 delete mode 100644 lib/librte_eal/common/include/rte_warnings.h
 delete mode 100644 lib/librte_eal/linuxapp/Makefile
 delete mode 100644 lib/librte_eal/linuxapp/eal/Makefile
 delete mode 100644 lib/librte_eal/linuxapp/eal/eal.c
 delete mode 100644 lib/librte_eal/linuxapp/eal/eal_alarm.c
 delete mode 100644 lib/librte_eal/linuxapp/eal/eal_debug.c
 delete mode 100644 lib/librte_eal/linuxapp/eal/eal_hugepage_info.c
 delete mode 100644 lib/librte_eal/linuxapp/eal/eal_interrupts.c
 delete mode 100644 lib/librte_eal/linuxapp/eal/eal_ivshmem.c
 delete mode 100644 lib/librte_eal/linuxapp/eal/eal_lcore.c
 delete mode 100644 lib/librte_eal/linuxapp/eal/eal_log.c
 delete mode 100644 lib/librte_eal/linuxapp/eal/eal_memory.c
 delete mode 100644 lib/librte_eal/linuxapp/eal/eal_pci.c
 delete mode 100644 lib/librte_eal/linuxapp/eal/eal_pci_init.h
 delete mode 100644 lib/librte_eal/linuxapp/eal/eal_pci_uio.c
 delete mode 100644 lib/librte_eal/linuxapp/eal/eal_pci_vfio.c
 delete mode 100644 lib/librte_eal/linuxapp/eal/eal_pci_vfio_mp_sync.c
 delete mode 100644 lib/librte_eal/linuxapp/eal/eal_thread.c
 delete mode 100644 lib/librte_eal/linuxapp/eal/eal_timer.c
 delete mode 100644 lib/librte_eal/linuxapp/eal/eal_vfio.h
 delete mode 100644 lib/librte_eal/linuxapp/eal/eal_xen_memory.c
 delete mode 100644 lib/librte_eal/linuxapp/eal/include/exec-env/rte_dom0_common.h
 delete mode 100644 lib/librte_eal/linuxapp/eal/include/exec-env/rte_interrupts.h
 delete mode 100644 lib/librte_eal/linuxapp/eal/include/exec-env/rte_kni_common.h
 delete mode 100644 lib/librte_eal/linuxapp/igb_uio/Makefile
 delete mode 100644 lib/librte_eal/linuxapp/igb_uio/compat.h
 delete mode 100644 lib/librte_eal/linuxapp/igb_uio/igb_uio.c
 delete mode 100644 lib/librte_eal/linuxapp/kni/Makefile
 delete mode 100644 lib/librte_eal/linuxapp/kni/compat.h
 delete mode 100644 lib/librte_eal/linuxapp/kni/ethtool/README
 delete mode 100644 lib/librte_eal/linuxapp/kni/ethtool/igb/COPYING
 delete mode 100644 lib/librte_eal/linuxapp/kni/ethtool/igb/e1000_82575.c
 delete mode 100644 lib/librte_eal/linuxapp/kni/ethtool/igb/e1000_82575.h
 delete mode 100644 lib/librte_eal/linuxapp/kni/ethtool/igb/e1000_api.c
 delete mode 100644 lib/librte_eal/linuxapp/kni/ethtool/igb/e1000_api.h
 delete mode 100644 lib/librte_eal/linuxapp/kni/ethtool/igb/e1000_defines.h
 delete mode 100644 lib/librte_eal/linuxapp/kni/ethtool/igb/e1000_hw.h
 delete mode 100644 lib/librte_eal/linuxapp/kni/ethtool/igb/e1000_i210.c
 delete mode 100644 lib/librte_eal/linuxapp/kni/ethtool/igb/e1000_i210.h
 delete mode 100644 lib/librte_eal/linuxapp/kni/ethtool/igb/e1000_mac.c
 delete mode 100644 lib/librte_eal/linuxapp/kni/ethtool/igb/e1000_mac.h
 delete mode 100644 lib/librte_eal/linuxapp/kni/ethtool/igb/e1000_manage.c
 delete mode 100644 lib/librte_eal/linuxapp/kni/ethtool/igb/e1000_manage.h
 delete mode 100644 lib/librte_eal/linuxapp/kni/ethtool/igb/e1000_mbx.c
 delete mode 100644 lib/librte_eal/linuxapp/kni/ethtool/igb/e1000_mbx.h
 delete mode 100644 lib/librte_eal/linuxapp/kni/ethtool/igb/e1000_nvm.c
 delete mode 100644 lib/librte_eal/linuxapp/kni/ethtool/igb/e1000_nvm.h
 delete mode 100644 lib/librte_eal/linuxapp/kni/ethtool/igb/e1000_osdep.h
 delete mode 100644 lib/librte_eal/linuxapp/kni/ethtool/igb/e1000_phy.c
 delete mode 100644 lib/librte_eal/linuxapp/kni/ethtool/igb/e1000_phy.h
 delete mode 100644 lib/librte_eal/linuxapp/kni/ethtool/igb/e1000_regs.h
 delete mode 100644 lib/librte_eal/linuxapp/kni/ethtool/igb/igb.h
 delete mode 100644 lib/librte_eal/linuxapp/kni/ethtool/igb/igb_debugfs.c
 delete mode 100644 lib/librte_eal/linuxapp/kni/ethtool/igb/igb_ethtool.c
 delete mode 100644 lib/librte_eal/linuxapp/kni/ethtool/igb/igb_hwmon.c
 delete mode 100644 lib/librte_eal/linuxapp/kni/ethtool/igb/igb_main.c
 delete mode 100644 lib/librte_eal/linuxapp/kni/ethtool/igb/igb_param.c
 delete mode 100644 lib/librte_eal/linuxapp/kni/ethtool/igb/igb_procfs.c
 delete mode 100644 lib/librte_eal/linuxapp/kni/ethtool/igb/igb_ptp.c
 delete mode 100644 lib/librte_eal/linuxapp/kni/ethtool/igb/igb_regtest.h
 delete mode 100644 lib/librte_eal/linuxapp/kni/ethtool/igb/igb_vmdq.c
 delete mode 100644 lib/librte_eal/linuxapp/kni/ethtool/igb/igb_vmdq.h
 delete mode 100644 lib/librte_eal/linuxapp/kni/ethtool/igb/kcompat.c
 delete mode 100644 lib/librte_eal/linuxapp/kni/ethtool/igb/kcompat.h
 delete mode 100644 lib/librte_eal/linuxapp/kni/ethtool/igb/kcompat_ethtool.c
 delete mode 100644 lib/librte_eal/linuxapp/kni/ethtool/ixgbe/COPYING
 delete mode 100644 lib/librte_eal/linuxapp/kni/ethtool/ixgbe/ixgbe.h
 delete mode 100644 lib/librte_eal/linuxapp/kni/ethtool/ixgbe/ixgbe_82598.c
 delete mode 100644 lib/librte_eal/linuxapp/kni/ethtool/ixgbe/ixgbe_82598.h
 delete mode 100644 lib/librte_eal/linuxapp/kni/ethtool/ixgbe/ixgbe_82599.c
 delete mode 100644 lib/librte_eal/linuxapp/kni/ethtool/ixgbe/ixgbe_82599.h
 delete mode 100644 lib/librte_eal/linuxapp/kni/ethtool/ixgbe/ixgbe_api.c
 delete mode 100644 lib/librte_eal/linuxapp/kni/ethtool/ixgbe/ixgbe_api.h
 delete mode 100644 lib/librte_eal/linuxapp/kni/ethtool/ixgbe/ixgbe_common.c
 delete mode 100644 lib/librte_eal/linuxapp/kni/ethtool/ixgbe/ixgbe_common.h
 delete mode 100644 lib/librte_eal/linuxapp/kni/ethtool/ixgbe/ixgbe_dcb.h
 delete mode 100644 lib/librte_eal/linuxapp/kni/ethtool/ixgbe/ixgbe_ethtool.c
 delete mode 100644 lib/librte_eal/linuxapp/kni/ethtool/ixgbe/ixgbe_fcoe.h
 delete mode 100644 lib/librte_eal/linuxapp/kni/ethtool/ixgbe/ixgbe_main.c
 delete mode 100644 lib/librte_eal/linuxapp/kni/ethtool/ixgbe/ixgbe_mbx.h
 delete mode 100644 lib/librte_eal/linuxapp/kni/ethtool/ixgbe/ixgbe_osdep.h
 delete mode 100644 lib/librte_eal/linuxapp/kni/ethtool/ixgbe/ixgbe_phy.c
 delete mode 100644 lib/librte_eal/linuxapp/kni/ethtool/ixgbe/ixgbe_phy.h
 delete mode 100644 lib/librte_eal/linuxapp/kni/ethtool/ixgbe/ixgbe_sriov.h
 delete mode 100644 lib/librte_eal/linuxapp/kni/ethtool/ixgbe/ixgbe_type.h
 delete mode 100644 lib/librte_eal/linuxapp/kni/ethtool/ixgbe/ixgbe_x540.c
 delete mode 100644 lib/librte_eal/linuxapp/kni/ethtool/ixgbe/ixgbe_x540.h
 delete mode 100644 lib/librte_eal/linuxapp/kni/ethtool/ixgbe/kcompat.c
 delete mode 100644 lib/librte_eal/linuxapp/kni/ethtool/ixgbe/kcompat.h
 delete mode 100644 lib/librte_eal/linuxapp/kni/kni_dev.h
 delete mode 100644 lib/librte_eal/linuxapp/kni/kni_ethtool.c
 delete mode 100644 lib/librte_eal/linuxapp/kni/kni_fifo.h
 delete mode 100644 lib/librte_eal/linuxapp/kni/kni_misc.c
 delete mode 100644 lib/librte_eal/linuxapp/kni/kni_net.c
 delete mode 100644 lib/librte_eal/linuxapp/kni/kni_vhost.c
 delete mode 100644 lib/librte_eal/linuxapp/xen_dom0/Makefile
 delete mode 100644 lib/librte_eal/linuxapp/xen_dom0/compat.h
 delete mode 100644 lib/librte_eal/linuxapp/xen_dom0/dom0_mm_dev.h
 delete mode 100644 lib/librte_eal/linuxapp/xen_dom0/dom0_mm_misc.c
 delete mode 100644 lib/librte_malloc/Makefile
 delete mode 100644 lib/librte_malloc/malloc_elem.c
 delete mode 100644 lib/librte_malloc/malloc_elem.h
 delete mode 100644 lib/librte_malloc/malloc_heap.c
 delete mode 100644 lib/librte_malloc/malloc_heap.h
 delete mode 100644 lib/librte_malloc/rte_malloc.c
 delete mode 100644 lib/librte_malloc/rte_malloc.h
 delete mode 100644 lib/librte_mbuf/Makefile
 delete mode 100644 lib/librte_mbuf/rte_mbuf.c
 delete mode 100644 lib/librte_mbuf/rte_mbuf.h
 delete mode 100644 lib/librte_mempool/Makefile
 delete mode 100644 lib/librte_mempool/rte_dom0_mempool.c
 delete mode 100644 lib/librte_mempool/rte_mempool.c
 delete mode 100644 lib/librte_mempool/rte_mempool.h
 delete mode 100644 lib/librte_ring/Makefile
 delete mode 100644 lib/librte_ring/rte_ring.c
 delete mode 100644 lib/librte_ring/rte_ring.h
 create mode 100644 mk/rte.corelib.mk
 delete mode 100644 mk/rte.sharelib.mk

-- 
1.9.3


* [dpdk-dev] [PATCH RFC 01/13] mk: Remove combined library and related options
  2015-01-12 16:33 [dpdk-dev] [PATCH RFC 00/13] Update build system Sergio Gonzalez Monroy
@ 2015-01-12 16:33 ` Sergio Gonzalez Monroy
  2015-01-12 16:33 ` [dpdk-dev] [PATCH RFC 02/13] lib/core: create new core dir and makefiles Sergio Gonzalez Monroy
                   ` (13 subsequent siblings)
  14 siblings, 0 replies; 21+ messages in thread
From: Sergio Gonzalez Monroy @ 2015-01-12 16:33 UTC (permalink / raw)
  To: dev

Remove CONFIG_RTE_BUILD_COMBINE_LIBS and CONFIG_RTE_LIBNAME.

Signed-off-by: Sergio Gonzalez Monroy <sergio.gonzalez.monroy@intel.com>
---
 config/common_bsdapp                        |   6 --
 config/common_linuxapp                      |   6 --
 config/defconfig_ppc_64-power8-linuxapp-gcc |   2 -
 lib/Makefile                                |   1 -
 mk/rte.app.mk                               |  12 ----
 mk/rte.lib.mk                               |  34 ----------
 mk/rte.sdkbuild.mk                          |   3 -
 mk/rte.sharelib.mk                          | 101 ----------------------------
 mk/rte.vars.mk                              |   9 ---
 9 files changed, 174 deletions(-)
 delete mode 100644 mk/rte.sharelib.mk

diff --git a/config/common_bsdapp b/config/common_bsdapp
index 9177db1..812a6ca 100644
--- a/config/common_bsdapp
+++ b/config/common_bsdapp
@@ -79,12 +79,6 @@ CONFIG_RTE_FORCE_INTRINSICS=n
 CONFIG_RTE_BUILD_SHARED_LIB=n
 
 #
-# Combine to one single library
-#
-CONFIG_RTE_BUILD_COMBINE_LIBS=n
-CONFIG_RTE_LIBNAME=intel_dpdk
-
-#
 # Compile Environment Abstraction Layer
 #
 CONFIG_RTE_LIBRTE_EAL=y
diff --git a/config/common_linuxapp b/config/common_linuxapp
index 2f9643b..e35ad2b 100644
--- a/config/common_linuxapp
+++ b/config/common_linuxapp
@@ -79,12 +79,6 @@ CONFIG_RTE_FORCE_INTRINSICS=n
 CONFIG_RTE_BUILD_SHARED_LIB=n
 
 #
-# Combine to one single library
-#
-CONFIG_RTE_BUILD_COMBINE_LIBS=n
-CONFIG_RTE_LIBNAME="intel_dpdk"
-
-#
 # Compile Environment Abstraction Layer
 #
 CONFIG_RTE_LIBRTE_EAL=y
diff --git a/config/defconfig_ppc_64-power8-linuxapp-gcc b/config/defconfig_ppc_64-power8-linuxapp-gcc
index d97a885..f1af518 100644
--- a/config/defconfig_ppc_64-power8-linuxapp-gcc
+++ b/config/defconfig_ppc_64-power8-linuxapp-gcc
@@ -39,8 +39,6 @@ CONFIG_RTE_ARCH_64=y
 CONFIG_RTE_TOOLCHAIN="gcc"
 CONFIG_RTE_TOOLCHAIN_GCC=y
 
-CONFIG_RTE_LIBNAME="powerpc_dpdk"
-
 # Note: Power doesn't have this support
 CONFIG_RTE_LIBRTE_EAL_VMWARE_TSC_MAP_SUPPORT=n
 
diff --git a/lib/Makefile b/lib/Makefile
index 0ffc982..bafc9ae 100644
--- a/lib/Makefile
+++ b/lib/Makefile
@@ -71,5 +71,4 @@ DIRS-$(CONFIG_RTE_LIBRTE_KNI) += librte_kni
 DIRS-$(CONFIG_RTE_LIBRTE_IVSHMEM) += librte_ivshmem
 endif
 
-include $(RTE_SDK)/mk/rte.sharelib.mk
 include $(RTE_SDK)/mk/rte.subdir.mk
diff --git a/mk/rte.app.mk b/mk/rte.app.mk
index e1a0dbf..becdac5 100644
--- a/mk/rte.app.mk
+++ b/mk/rte.app.mk
@@ -61,8 +61,6 @@ ifeq ($(NO_AUTOLIBS),)
 
 LDLIBS += --whole-archive
 
-ifeq ($(CONFIG_RTE_BUILD_COMBINE_LIBS),n)
-
 ifeq ($(CONFIG_RTE_LIBRTE_DISTRIBUTOR),y)
 LDLIBS += -lrte_distributor
 endif
@@ -121,16 +119,12 @@ LDLIBS += -lm
 LDLIBS += -lrt
 endif
 
-endif # ! CONFIG_RTE_BUILD_COMBINE_LIBS
-
 ifeq ($(CONFIG_RTE_LIBRTE_PMD_PCAP),y)
 LDLIBS += -lpcap
 endif
 
 LDLIBS += --start-group
 
-ifeq ($(CONFIG_RTE_BUILD_COMBINE_LIBS),n)
-
 ifeq ($(CONFIG_RTE_LIBRTE_KVARGS),y)
 LDLIBS += -lrte_kvargs
 endif
@@ -226,8 +220,6 @@ endif
 
 endif # plugins
 
-endif # ! CONFIG_RTE_BUILD_COMBINE_LIBS
-
 LDLIBS += $(EXECENV_LDLIBS)
 
 LDLIBS += --end-group
@@ -251,10 +243,6 @@ build: _postbuild
 
 exe2cmd = $(strip $(call dotfile,$(patsubst %,%.cmd,$(1))))
 
-ifeq ($(CONFIG_RTE_BUILD_COMBINE_LIBS),y)
-LDLIBS += -l$(RTE_LIBNAME)
-endif
-
 ifeq ($(LINK_USING_CC),1)
 override EXTRA_LDFLAGS := $(call linkerprefix,$(EXTRA_LDFLAGS))
 O_TO_EXE = $(CC) $(CFLAGS) $(LDFLAGS_$(@)) \
diff --git a/mk/rte.lib.mk b/mk/rte.lib.mk
index 81bf8e1..7c99fd1 100644
--- a/mk/rte.lib.mk
+++ b/mk/rte.lib.mk
@@ -84,24 +84,6 @@ O_TO_S_DO = @set -e; \
 	$(O_TO_S) && \
 	echo $(O_TO_S_CMD) > $(call exe2cmd,$(@))
 
-ifeq ($(RTE_BUILD_SHARED_LIB),n)
-O_TO_C = $(AR) crus $(LIB_ONE) $(OBJS-y)
-O_TO_C_STR = $(subst ','\'',$(O_TO_C)) #'# fix syntax highlight
-O_TO_C_DISP = $(if $(V),"$(O_TO_C_STR)","  AR_C $(@)")
-O_TO_C_DO = @set -e; \
-	$(lib_dir) \
-	$(copy_obj)
-else
-O_TO_C = $(LD) -shared $(OBJS-y) -o $(LIB_ONE)
-O_TO_C_STR = $(subst ','\'',$(O_TO_C)) #'# fix syntax highlight
-O_TO_C_DISP = $(if $(V),"$(O_TO_C_STR)","  LD_C $(@)")
-O_TO_C_DO = @set -e; \
-	$(lib_dir) \
-	$(copy_obj)
-endif
-
-copy_obj = cp -f $(OBJS-y) $(RTE_OUTPUT)/build/lib;
-lib_dir = [ -d $(RTE_OUTPUT)/lib ] || mkdir -p $(RTE_OUTPUT)/lib;
 -include .$(LIB).cmd
 
 #
@@ -122,14 +104,6 @@ $(LIB): $(OBJS-y) $(DEP_$(LIB)) FORCE
 		$(depfile_missing),\
 		$(depfile_newer)),\
 		$(O_TO_S_DO))
-ifeq ($(RTE_BUILD_COMBINE_LIBS),y)
-	$(if $(or \
-        $(file_missing),\
-        $(call cmdline_changed,$(O_TO_C_STR)),\
-        $(depfile_missing),\
-        $(depfile_newer)),\
-        $(O_TO_C_DO))
-endif
 else
 $(LIB): $(OBJS-y) $(DEP_$(LIB)) FORCE
 	@[ -d $(dir $@) ] || mkdir -p $(dir $@)
@@ -145,14 +119,6 @@ $(LIB): $(OBJS-y) $(DEP_$(LIB)) FORCE
 	    $(depfile_missing),\
 	    $(depfile_newer)),\
 	    $(O_TO_A_DO))
-ifeq ($(RTE_BUILD_COMBINE_LIBS),y)
-	$(if $(or \
-        $(file_missing),\
-        $(call cmdline_changed,$(O_TO_C_STR)),\
-        $(depfile_missing),\
-        $(depfile_newer)),\
-        $(O_TO_C_DO))
-endif
 endif
 
 #
diff --git a/mk/rte.sdkbuild.mk b/mk/rte.sdkbuild.mk
index 3154457..2b24e74 100644
--- a/mk/rte.sdkbuild.mk
+++ b/mk/rte.sdkbuild.mk
@@ -93,9 +93,6 @@ $(ROOTDIRS-y):
 	@[ -d $(BUILDDIR)/$@ ] || mkdir -p $(BUILDDIR)/$@
 	@echo "== Build $@"
 	$(Q)$(MAKE) S=$@ -f $(RTE_SRCDIR)/$@/Makefile -C $(BUILDDIR)/$@ all
-	@if [ $@ = lib -a $(RTE_BUILD_COMBINE_LIBS) = y ]; then \
-		$(MAKE) -f $(RTE_SDK)/lib/Makefile sharelib; \
-	fi
 
 %_clean:
 	@echo "== Clean $*"
diff --git a/mk/rte.sharelib.mk b/mk/rte.sharelib.mk
deleted file mode 100644
index de53558..0000000
--- a/mk/rte.sharelib.mk
+++ /dev/null
@@ -1,101 +0,0 @@
-#   BSD LICENSE
-#
-#   Copyright(c) 2010-2014 Intel Corporation. All rights reserved.
-#   All rights reserved.
-#
-#   Redistribution and use in source and binary forms, with or without
-#   modification, are permitted provided that the following conditions
-#   are met:
-#
-#     * Redistributions of source code must retain the above copyright
-#       notice, this list of conditions and the following disclaimer.
-#     * Redistributions in binary form must reproduce the above copyright
-#       notice, this list of conditions and the following disclaimer in
-#       the documentation and/or other materials provided with the
-#       distribution.
-#     * Neither the name of Intel Corporation nor the names of its
-#       contributors may be used to endorse or promote products derived
-#       from this software without specific prior written permission.
-#
-#   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
-#   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
-#   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
-#   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
-#   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
-#   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
-#   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
-#   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
-#   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
-#   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
-#   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
-
-include $(RTE_SDK)/mk/internal/rte.build-pre.mk
-
-# VPATH contains at least SRCDIR
-VPATH += $(SRCDIR)
-
-ifeq ($(RTE_BUILD_COMBINE_LIBS),y)
-ifeq ($(RTE_BUILD_SHARED_LIB),y)
-LIB_ONE := lib$(RTE_LIBNAME).so
-else
-LIB_ONE := lib$(RTE_LIBNAME).a
-endif
-endif
-
-.PHONY:sharelib
-sharelib: $(LIB_ONE) FORCE
-
-OBJS = $(wildcard $(RTE_OUTPUT)/build/lib/*.o)
-
-ifeq ($(LINK_USING_CC),1)
-# Override the definition of LD here, since we're linking with CC
-LD := $(CC) $(CPU_CFLAGS)
-O_TO_S = $(LD) $(call linkerprefix,$(CPU_LDFLAGS)) \
-	-shared $(OBJS) -o $(RTE_OUTPUT)/lib/$(LIB_ONE)
-else
-O_TO_S = $(LD) $(CPU_LDFLAGS) \
-	-shared $(OBJS) -o $(RTE_OUTPUT)/lib/$(LIB_ONE)
-endif
-
-O_TO_S_STR = $(subst ','\'',$(O_TO_S)) #'# fix syntax highlight
-O_TO_S_DISP = $(if $(V),"$(O_TO_S_STR)","  LD $(@)")
-O_TO_S_CMD = "cmd_$@ = $(O_TO_S_STR)"
-O_TO_S_DO = @set -e; \
-    echo $(O_TO_S_DISP); \
-    $(O_TO_S)
-
-O_TO_A =  $(AR) crus $(RTE_OUTPUT)/lib/$(LIB_ONE) $(OBJS)
-O_TO_A_STR = $(subst ','\'',$(O_TO_A)) #'# fix syntax highlight
-O_TO_A_DISP = $(if $(V),"$(O_TO_A_STR)","  LD $(@)")
-O_TO_A_CMD = "cmd_$@ = $(O_TO_A_STR)"
-O_TO_A_DO = @set -e; \
-    echo $(O_TO_A_DISP); \
-    $(O_TO_A)
-#
-# Archive objects to share library
-#
-
-ifeq ($(RTE_BUILD_COMBINE_LIBS),y)
-ifeq ($(RTE_BUILD_SHARED_LIB),y)
-$(LIB_ONE): FORCE
-	@[ -d $(dir $@) ] || mkdir -p $(dir $@)
-	$(O_TO_S_DO)
-else
-$(LIB_ONE): FORCE
-	@[ -d $(dir $@) ] || mkdir -p $(dir $@)
-	$(O_TO_A_DO)
-endif
-endif
-
-#
-# Clean all generated files
-#
-.PHONY: clean
-clean: _postclean
-
-.PHONY: doclean
-doclean:
-	$(Q)rm -rf $(LIB_ONE)
-
-.PHONY: FORCE
-FORCE:
diff --git a/mk/rte.vars.mk b/mk/rte.vars.mk
index d5b36be..316c35b 100644
--- a/mk/rte.vars.mk
+++ b/mk/rte.vars.mk
@@ -67,15 +67,6 @@ ifneq ($(BUILDING_RTE_SDK),)
   ifeq ($(RTE_BUILD_SHARED_LIB),)
     RTE_BUILD_SHARED_LIB := n
   endif
-  RTE_BUILD_COMBINE_LIBS := $(CONFIG_RTE_BUILD_COMBINE_LIBS:"%"=%)
-  ifeq ($(RTE_BUILD_COMBINE_LIBS),)
-    RTE_BUILD_COMBINE_LIBS := n
-  endif
-endif
-
-RTE_LIBNAME := $(CONFIG_RTE_LIBNAME:"%"=%)
-ifeq ($(RTE_LIBNAME),)
-RTE_LIBNAME := intel_dpdk
 endif
 
 # RTE_TARGET is deducted from config when we are building the SDK.
-- 
1.9.3

^ permalink raw reply	[flat|nested] 21+ messages in thread
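
The rte.vars.mk hunk above also drops the RTE_LIBNAME plumbing, which relied on a GNU make substitution reference, `$(VAR:"%"=%)`, to strip the surrounding quotes from config values. A minimal sketch of that idiom (the /tmp path is illustrative; assumes GNU make is installed as `make`):

```shell
# Sketch of the $(VAR:"%"=%) quote-stripping idiom used by the removed
# RTE_LIBNAME code (assumption: GNU make available as "make").
# %% in the printf format yields a literal % in the generated makefile.
printf 'CONFIG_RTE_LIBNAME := "intel_dpdk"\nRTE_LIBNAME := $(CONFIG_RTE_LIBNAME:"%%"=%%)\nall:\n\t@echo $(RTE_LIBNAME)\n' > /tmp/quote_demo.mk
make -s -f /tmp/quote_demo.mk    # prints: intel_dpdk
```

The substitution reference is shorthand for `$(patsubst "%",%,$(CONFIG_RTE_LIBNAME))`: the `%` stem matches everything between the quotes, and the replacement keeps only the stem.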

* [dpdk-dev] [PATCH RFC 02/13] lib/core: create new core dir and makefiles
  2015-01-12 16:33 [dpdk-dev] [PATCH RFC 00/13] Update build system Sergio Gonzalez Monroy
  2015-01-12 16:33 ` [dpdk-dev] [PATCH RFC 01/13] mk: Remove combined library and related options Sergio Gonzalez Monroy
@ 2015-01-12 16:33 ` Sergio Gonzalez Monroy
  2015-01-12 16:33 ` [dpdk-dev] [PATCH RFC 03/13] core: move librte_eal to core subdir Sergio Gonzalez Monroy
                   ` (12 subsequent siblings)
  14 siblings, 0 replies; 21+ messages in thread
From: Sergio Gonzalez Monroy @ 2015-01-12 16:33 UTC (permalink / raw)
  To: dev

This patch creates a new subdirectory 'core' which contains DPDK core
libraries.

The goal is to generate a librte_core library that contains all
libraries under the core subdirectory. For that purpose, a synthetic
library librte_core is created.

When building the DPDK, all object files from the core libraries are moved
to the build directory of librte_core. When librte_core itself is built, the
build system links/archives all object files found in that directory.
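
A minimal sketch of that link/archive step for the static case (assumptions: binutils `ar` is available, and stub files stand in for the per-library .o files that the core builds would drop into librte_core's build directory):

```shell
# Sketch: gather every object found in one directory into a single archive,
# as the O_TO_A rule ("$(AR) crus ...") does for librte_core.
# (Assumption: the /tmp paths are illustrative; stubs stand in for real .o files.)
mkdir -p /tmp/librte_core_demo/objs
printf 'stub' > /tmp/librte_core_demo/objs/rte_eal.o
printf 'stub' > /tmp/librte_core_demo/objs/rte_ring.o
ar crus /tmp/librte_core_demo/librte_core.a /tmp/librte_core_demo/objs/*.o
ar t /tmp/librte_core_demo/librte_core.a    # lists rte_eal.o and rte_ring.o
```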

Signed-off-by: Sergio Gonzalez Monroy <sergio.gonzalez.monroy@intel.com>
---
 lib/core/Makefile             | 43 +++++++++++++++++++++++++++++++++++++++++
 lib/core/librte_core/Makefile | 45 +++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 88 insertions(+)
 create mode 100644 lib/core/Makefile
 create mode 100644 lib/core/librte_core/Makefile

diff --git a/lib/core/Makefile b/lib/core/Makefile
new file mode 100644
index 0000000..ad44daa
--- /dev/null
+++ b/lib/core/Makefile
@@ -0,0 +1,43 @@
+#   BSD LICENSE
+#
+#   Copyright(c) 2010-2014 Intel Corporation. All rights reserved.
+#   All rights reserved.
+#
+#   Redistribution and use in source and binary forms, with or without
+#   modification, are permitted provided that the following conditions
+#   are met:
+#
+#     * Redistributions of source code must retain the above copyright
+#       notice, this list of conditions and the following disclaimer.
+#     * Redistributions in binary form must reproduce the above copyright
+#       notice, this list of conditions and the following disclaimer in
+#       the documentation and/or other materials provided with the
+#       distribution.
+#     * Neither the name of Intel Corporation nor the names of its
+#       contributors may be used to endorse or promote products derived
+#       from this software without specific prior written permission.
+#
+#   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+#   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+#   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+#   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+#   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+#   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+#   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+#   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+#   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+#   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+#   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+include $(RTE_SDK)/mk/rte.vars.mk
+
+DIRS-y += librte_eal
+DIRS-y += librte_malloc
+DIRS-y += librte_ring
+DIRS-y += librte_mempool
+DIRS-y += librte_mbuf
+
+DIRS-y += librte_core
+export COREDIR=$(CURDIR)/librte_core
+
+include $(RTE_SDK)/mk/rte.subdir.mk
diff --git a/lib/core/librte_core/Makefile b/lib/core/librte_core/Makefile
new file mode 100644
index 0000000..b169134
--- /dev/null
+++ b/lib/core/librte_core/Makefile
@@ -0,0 +1,45 @@
+#   BSD LICENSE
+#
+#   Copyright(c) 2010-2014 Intel Corporation. All rights reserved.
+#   All rights reserved.
+#
+#   Redistribution and use in source and binary forms, with or without
+#   modification, are permitted provided that the following conditions
+#   are met:
+#
+#     * Redistributions of source code must retain the above copyright
+#       notice, this list of conditions and the following disclaimer.
+#     * Redistributions in binary form must reproduce the above copyright
+#       notice, this list of conditions and the following disclaimer in
+#       the documentation and/or other materials provided with the
+#       distribution.
+#     * Neither the name of Intel Corporation nor the names of its
+#       contributors may be used to endorse or promote products derived
+#       from this software without specific prior written permission.
+#
+#   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+#   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+#   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+#   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+#   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+#   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+#   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+#   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+#   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+#   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+#   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+include $(RTE_SDK)/mk/rte.vars.mk
+
+# library name
+LIB = librte_core.a
+
+SRCS-y = $(wildcard *.o)
+
+DEPDIRS-y += lib/core/librte_eal
+DEPDIRS-y += lib/core/librte_mempool
+DEPDIRS-y += lib/core/librte_malloc
+DEPDIRS-y += lib/core/librte_mbuf
+DEPDIRS-y += lib/core/librte_ring
+
+include $(RTE_SDK)/mk/rte.lib.mk
-- 
1.9.3

^ permalink raw reply	[flat|nested] 21+ messages in thread
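
The `SRCS-y = $(wildcard *.o)` line in the librte_core Makefile above is what makes the library "synthetic": it has no sources of its own and simply picks up whatever objects the earlier core-library builds left in its directory. A minimal sketch of the idiom (assumptions: GNU make; the /tmp directory stands in for librte_core's build directory):

```shell
# Sketch: $(wildcard *.o) collects the objects present in make's working
# directory at evaluation time; $(sort ...) gives a stable order for display.
mkdir -p /tmp/wildcard_demo
cd /tmp/wildcard_demo
: > rte_mbuf.o
: > rte_mempool.o
printf 'SRCS-y = $(wildcard *.o)\nall:\n\t@echo $(sort $(SRCS-y))\n' > demo.mk
make -s -f demo.mk    # prints: rte_mbuf.o rte_mempool.o
```

This also means the result depends on build ordering: librte_core must be built last within lib/core, which is why the DIRS-y list in lib/core/Makefile names it after the five member libraries.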

* [dpdk-dev] [PATCH RFC 03/13] core: move librte_eal to core subdir
  2015-01-12 16:33 [dpdk-dev] [PATCH RFC 00/13] Update build system Sergio Gonzalez Monroy
  2015-01-12 16:33 ` [dpdk-dev] [PATCH RFC 01/13] mk: Remove combined library and related options Sergio Gonzalez Monroy
  2015-01-12 16:33 ` [dpdk-dev] [PATCH RFC 02/13] lib/core: create new core dir and makefiles Sergio Gonzalez Monroy
@ 2015-01-12 16:33 ` Sergio Gonzalez Monroy
  2015-01-12 16:33 ` [dpdk-dev] [PATCH RFC 04/13] core: move librte_malloc " Sergio Gonzalez Monroy
                   ` (11 subsequent siblings)
  14 siblings, 0 replies; 21+ messages in thread
From: Sergio Gonzalez Monroy @ 2015-01-12 16:33 UTC (permalink / raw)
  To: dev

This patch is equivalent to:

git mv lib/librte_eal lib/core

Signed-off-by: Sergio Gonzalez Monroy <sergio.gonzalez.monroy@intel.com>
---
 lib/core/librte_eal/Makefile                       |    39 +
 lib/core/librte_eal/bsdapp/Makefile                |    38 +
 lib/core/librte_eal/bsdapp/contigmem/BSDmakefile   |    36 +
 lib/core/librte_eal/bsdapp/contigmem/Makefile      |    52 +
 lib/core/librte_eal/bsdapp/contigmem/contigmem.c   |   233 +
 lib/core/librte_eal/bsdapp/eal/Makefile            |    97 +
 lib/core/librte_eal/bsdapp/eal/eal.c               |   563 +
 lib/core/librte_eal/bsdapp/eal/eal_alarm.c         |    60 +
 lib/core/librte_eal/bsdapp/eal/eal_debug.c         |   113 +
 lib/core/librte_eal/bsdapp/eal/eal_hugepage_info.c |   133 +
 lib/core/librte_eal/bsdapp/eal/eal_interrupts.c    |    71 +
 lib/core/librte_eal/bsdapp/eal/eal_lcore.c         |   107 +
 lib/core/librte_eal/bsdapp/eal/eal_log.c           |    57 +
 lib/core/librte_eal/bsdapp/eal/eal_memory.c        |   224 +
 lib/core/librte_eal/bsdapp/eal/eal_pci.c           |   510 +
 lib/core/librte_eal/bsdapp/eal/eal_thread.c        |   233 +
 lib/core/librte_eal/bsdapp/eal/eal_timer.c         |   141 +
 .../bsdapp/eal/include/exec-env/rte_dom0_common.h  |   107 +
 .../bsdapp/eal/include/exec-env/rte_interrupts.h   |    54 +
 lib/core/librte_eal/bsdapp/nic_uio/BSDmakefile     |    36 +
 lib/core/librte_eal/bsdapp/nic_uio/Makefile        |    52 +
 lib/core/librte_eal/bsdapp/nic_uio/nic_uio.c       |   329 +
 lib/core/librte_eal/common/Makefile                |    61 +
 lib/core/librte_eal/common/eal_common_cpuflags.c   |    85 +
 lib/core/librte_eal/common/eal_common_dev.c        |   109 +
 lib/core/librte_eal/common/eal_common_devargs.c    |   152 +
 lib/core/librte_eal/common/eal_common_errno.c      |    74 +
 lib/core/librte_eal/common/eal_common_hexdump.c    |   121 +
 lib/core/librte_eal/common/eal_common_launch.c     |   120 +
 lib/core/librte_eal/common/eal_common_log.c        |   320 +
 lib/core/librte_eal/common/eal_common_memory.c     |   121 +
 lib/core/librte_eal/common/eal_common_memzone.c    |   533 +
 lib/core/librte_eal/common/eal_common_options.c    |   611 ++
 lib/core/librte_eal/common/eal_common_pci.c        |   207 +
 lib/core/librte_eal/common/eal_common_string_fns.c |    69 +
 lib/core/librte_eal/common/eal_common_tailqs.c     |   146 +
 lib/core/librte_eal/common/eal_filesystem.h        |   118 +
 lib/core/librte_eal/common/eal_hugepages.h         |    67 +
 lib/core/librte_eal/common/eal_internal_cfg.h      |    93 +
 lib/core/librte_eal/common/eal_options.h           |    93 +
 lib/core/librte_eal/common/eal_private.h           |   206 +
 lib/core/librte_eal/common/eal_thread.h            |    53 +
 .../common/include/arch/ppc_64/rte_atomic.h        |   426 +
 .../common/include/arch/ppc_64/rte_byteorder.h     |   149 +
 .../common/include/arch/ppc_64/rte_cpuflags.h      |   187 +
 .../common/include/arch/ppc_64/rte_cycles.h        |    87 +
 .../common/include/arch/ppc_64/rte_memcpy.h        |   225 +
 .../common/include/arch/ppc_64/rte_prefetch.h      |    61 +
 .../common/include/arch/ppc_64/rte_spinlock.h      |    73 +
 .../common/include/arch/x86/rte_atomic.h           |   216 +
 .../common/include/arch/x86/rte_atomic_32.h        |   222 +
 .../common/include/arch/x86/rte_atomic_64.h        |   191 +
 .../common/include/arch/x86/rte_byteorder.h        |   125 +
 .../common/include/arch/x86/rte_byteorder_32.h     |    51 +
 .../common/include/arch/x86/rte_byteorder_64.h     |    52 +
 .../common/include/arch/x86/rte_cpuflags.h         |   310 +
 .../common/include/arch/x86/rte_cycles.h           |   121 +
 .../common/include/arch/x86/rte_memcpy.h           |   297 +
 .../common/include/arch/x86/rte_prefetch.h         |    62 +
 .../common/include/arch/x86/rte_spinlock.h         |    94 +
 .../librte_eal/common/include/generic/rte_atomic.h |   918 ++
 .../common/include/generic/rte_byteorder.h         |   217 +
 .../common/include/generic/rte_cpuflags.h          |   110 +
 .../librte_eal/common/include/generic/rte_cycles.h |   205 +
 .../librte_eal/common/include/generic/rte_memcpy.h |   144 +
 .../common/include/generic/rte_prefetch.h          |    71 +
 .../common/include/generic/rte_spinlock.h          |   226 +
 lib/core/librte_eal/common/include/rte_alarm.h     |   106 +
 .../common/include/rte_branch_prediction.h         |    70 +
 lib/core/librte_eal/common/include/rte_common.h    |   389 +
 .../librte_eal/common/include/rte_common_vect.h    |    93 +
 lib/core/librte_eal/common/include/rte_debug.h     |   105 +
 lib/core/librte_eal/common/include/rte_dev.h       |   111 +
 lib/core/librte_eal/common/include/rte_devargs.h   |   149 +
 lib/core/librte_eal/common/include/rte_eal.h       |   269 +
 .../librte_eal/common/include/rte_eal_memconfig.h  |   112 +
 lib/core/librte_eal/common/include/rte_errno.h     |    96 +
 lib/core/librte_eal/common/include/rte_hexdump.h   |    89 +
 .../librte_eal/common/include/rte_interrupts.h     |   121 +
 lib/core/librte_eal/common/include/rte_launch.h    |   177 +
 lib/core/librte_eal/common/include/rte_lcore.h     |   229 +
 lib/core/librte_eal/common/include/rte_log.h       |   308 +
 .../librte_eal/common/include/rte_malloc_heap.h    |    56 +
 lib/core/librte_eal/common/include/rte_memory.h    |   218 +
 lib/core/librte_eal/common/include/rte_memzone.h   |   278 +
 lib/core/librte_eal/common/include/rte_pci.h       |   305 +
 .../common/include/rte_pci_dev_feature_defs.h      |    45 +
 .../common/include/rte_pci_dev_features.h          |    44 +
 .../librte_eal/common/include/rte_pci_dev_ids.h    |   540 +
 lib/core/librte_eal/common/include/rte_per_lcore.h |    79 +
 lib/core/librte_eal/common/include/rte_random.h    |    91 +
 lib/core/librte_eal/common/include/rte_rwlock.h    |   158 +
 .../librte_eal/common/include/rte_string_fns.h     |    81 +
 lib/core/librte_eal/common/include/rte_tailq.h     |   215 +
 .../librte_eal/common/include/rte_tailq_elem.h     |    90 +
 lib/core/librte_eal/common/include/rte_version.h   |   129 +
 lib/core/librte_eal/common/include/rte_warnings.h  |    84 +
 lib/core/librte_eal/linuxapp/Makefile              |    45 +
 lib/core/librte_eal/linuxapp/eal/Makefile          |   112 +
 lib/core/librte_eal/linuxapp/eal/eal.c             |   861 ++
 lib/core/librte_eal/linuxapp/eal/eal_alarm.c       |   268 +
 lib/core/librte_eal/linuxapp/eal/eal_debug.c       |   113 +
 .../librte_eal/linuxapp/eal/eal_hugepage_info.c    |   359 +
 lib/core/librte_eal/linuxapp/eal/eal_interrupts.c  |   826 ++
 lib/core/librte_eal/linuxapp/eal/eal_ivshmem.c     |   968 ++
 lib/core/librte_eal/linuxapp/eal/eal_lcore.c       |   191 +
 lib/core/librte_eal/linuxapp/eal/eal_log.c         |   197 +
 lib/core/librte_eal/linuxapp/eal/eal_memory.c      |  1564 +++
 lib/core/librte_eal/linuxapp/eal/eal_pci.c         |   629 ++
 lib/core/librte_eal/linuxapp/eal/eal_pci_init.h    |   122 +
 lib/core/librte_eal/linuxapp/eal/eal_pci_uio.c     |   440 +
 lib/core/librte_eal/linuxapp/eal/eal_pci_vfio.c    |   807 ++
 .../librte_eal/linuxapp/eal/eal_pci_vfio_mp_sync.c |   395 +
 lib/core/librte_eal/linuxapp/eal/eal_thread.c      |   233 +
 lib/core/librte_eal/linuxapp/eal/eal_timer.c       |   343 +
 lib/core/librte_eal/linuxapp/eal/eal_vfio.h        |    55 +
 lib/core/librte_eal/linuxapp/eal/eal_xen_memory.c  |   370 +
 .../eal/include/exec-env/rte_dom0_common.h         |   108 +
 .../linuxapp/eal/include/exec-env/rte_interrupts.h |    58 +
 .../linuxapp/eal/include/exec-env/rte_kni_common.h |   174 +
 lib/core/librte_eal/linuxapp/igb_uio/Makefile      |    53 +
 lib/core/librte_eal/linuxapp/igb_uio/compat.h      |   116 +
 lib/core/librte_eal/linuxapp/igb_uio/igb_uio.c     |   643 ++
 lib/core/librte_eal/linuxapp/kni/Makefile          |    93 +
 lib/core/librte_eal/linuxapp/kni/compat.h          |    21 +
 lib/core/librte_eal/linuxapp/kni/ethtool/README    |   100 +
 .../librte_eal/linuxapp/kni/ethtool/igb/COPYING    |   339 +
 .../linuxapp/kni/ethtool/igb/e1000_82575.c         |  3665 +++++++
 .../linuxapp/kni/ethtool/igb/e1000_82575.h         |   509 +
 .../linuxapp/kni/ethtool/igb/e1000_api.c           |  1160 +++
 .../linuxapp/kni/ethtool/igb/e1000_api.h           |   157 +
 .../linuxapp/kni/ethtool/igb/e1000_defines.h       |  1380 +++
 .../librte_eal/linuxapp/kni/ethtool/igb/e1000_hw.h |   793 ++
 .../linuxapp/kni/ethtool/igb/e1000_i210.c          |   909 ++
 .../linuxapp/kni/ethtool/igb/e1000_i210.h          |    91 +
 .../linuxapp/kni/ethtool/igb/e1000_mac.c           |  2096 ++++
 .../linuxapp/kni/ethtool/igb/e1000_mac.h           |    80 +
 .../linuxapp/kni/ethtool/igb/e1000_manage.c        |   556 +
 .../linuxapp/kni/ethtool/igb/e1000_manage.h        |    89 +
 .../linuxapp/kni/ethtool/igb/e1000_mbx.c           |   526 +
 .../linuxapp/kni/ethtool/igb/e1000_mbx.h           |    87 +
 .../linuxapp/kni/ethtool/igb/e1000_nvm.c           |   967 ++
 .../linuxapp/kni/ethtool/igb/e1000_nvm.h           |    75 +
 .../linuxapp/kni/ethtool/igb/e1000_osdep.h         |   136 +
 .../linuxapp/kni/ethtool/igb/e1000_phy.c           |  3405 ++++++
 .../linuxapp/kni/ethtool/igb/e1000_phy.h           |   256 +
 .../linuxapp/kni/ethtool/igb/e1000_regs.h          |   646 ++
 lib/core/librte_eal/linuxapp/kni/ethtool/igb/igb.h |   859 ++
 .../linuxapp/kni/ethtool/igb/igb_debugfs.c         |    29 +
 .../linuxapp/kni/ethtool/igb/igb_ethtool.c         |  2859 ++++++
 .../linuxapp/kni/ethtool/igb/igb_hwmon.c           |   260 +
 .../librte_eal/linuxapp/kni/ethtool/igb/igb_main.c | 10263 +++++++++++++++++++
 .../linuxapp/kni/ethtool/igb/igb_param.c           |   848 ++
 .../linuxapp/kni/ethtool/igb/igb_procfs.c          |   363 +
 .../librte_eal/linuxapp/kni/ethtool/igb/igb_ptp.c  |   944 ++
 .../linuxapp/kni/ethtool/igb/igb_regtest.h         |   251 +
 .../librte_eal/linuxapp/kni/ethtool/igb/igb_vmdq.c |   437 +
 .../librte_eal/linuxapp/kni/ethtool/igb/igb_vmdq.h |    46 +
 .../librte_eal/linuxapp/kni/ethtool/igb/kcompat.c  |  1482 +++
 .../librte_eal/linuxapp/kni/ethtool/igb/kcompat.h  |  3884 +++++++
 .../linuxapp/kni/ethtool/igb/kcompat_ethtool.c     |  1172 +++
 .../librte_eal/linuxapp/kni/ethtool/ixgbe/COPYING  |   339 +
 .../librte_eal/linuxapp/kni/ethtool/ixgbe/ixgbe.h  |   925 ++
 .../linuxapp/kni/ethtool/ixgbe/ixgbe_82598.c       |  1296 +++
 .../linuxapp/kni/ethtool/ixgbe/ixgbe_82598.h       |    44 +
 .../linuxapp/kni/ethtool/ixgbe/ixgbe_82599.c       |  2314 +++++
 .../linuxapp/kni/ethtool/ixgbe/ixgbe_82599.h       |    58 +
 .../linuxapp/kni/ethtool/ixgbe/ixgbe_api.c         |  1158 +++
 .../linuxapp/kni/ethtool/ixgbe/ixgbe_api.h         |   168 +
 .../linuxapp/kni/ethtool/ixgbe/ixgbe_common.c      |  4083 ++++++++
 .../linuxapp/kni/ethtool/ixgbe/ixgbe_common.h      |   140 +
 .../linuxapp/kni/ethtool/ixgbe/ixgbe_dcb.h         |   168 +
 .../linuxapp/kni/ethtool/ixgbe/ixgbe_ethtool.c     |  2901 ++++++
 .../linuxapp/kni/ethtool/ixgbe/ixgbe_fcoe.h        |    91 +
 .../linuxapp/kni/ethtool/ixgbe/ixgbe_main.c        |  2975 ++++++
 .../linuxapp/kni/ethtool/ixgbe/ixgbe_mbx.h         |   105 +
 .../linuxapp/kni/ethtool/ixgbe/ixgbe_osdep.h       |   132 +
 .../linuxapp/kni/ethtool/ixgbe/ixgbe_phy.c         |  1847 ++++
 .../linuxapp/kni/ethtool/ixgbe/ixgbe_phy.h         |   137 +
 .../linuxapp/kni/ethtool/ixgbe/ixgbe_sriov.h       |    74 +
 .../linuxapp/kni/ethtool/ixgbe/ixgbe_type.h        |  3254 ++++++
 .../linuxapp/kni/ethtool/ixgbe/ixgbe_x540.c        |   938 ++
 .../linuxapp/kni/ethtool/ixgbe/ixgbe_x540.h        |    58 +
 .../linuxapp/kni/ethtool/ixgbe/kcompat.c           |  1246 +++
 .../linuxapp/kni/ethtool/ixgbe/kcompat.h           |  3143 ++++++
 lib/core/librte_eal/linuxapp/kni/kni_dev.h         |   150 +
 lib/core/librte_eal/linuxapp/kni/kni_ethtool.c     |   217 +
 lib/core/librte_eal/linuxapp/kni/kni_fifo.h        |   108 +
 lib/core/librte_eal/linuxapp/kni/kni_misc.c        |   606 ++
 lib/core/librte_eal/linuxapp/kni/kni_net.c         |   687 ++
 lib/core/librte_eal/linuxapp/kni/kni_vhost.c       |   811 ++
 lib/core/librte_eal/linuxapp/xen_dom0/Makefile     |    56 +
 lib/core/librte_eal/linuxapp/xen_dom0/compat.h     |    15 +
 .../librte_eal/linuxapp/xen_dom0/dom0_mm_dev.h     |   107 +
 .../librte_eal/linuxapp/xen_dom0/dom0_mm_misc.c    |   781 ++
 lib/librte_eal/Makefile                            |    39 -
 lib/librte_eal/bsdapp/Makefile                     |    38 -
 lib/librte_eal/bsdapp/contigmem/BSDmakefile        |    36 -
 lib/librte_eal/bsdapp/contigmem/Makefile           |    52 -
 lib/librte_eal/bsdapp/contigmem/contigmem.c        |   233 -
 lib/librte_eal/bsdapp/eal/Makefile                 |    97 -
 lib/librte_eal/bsdapp/eal/eal.c                    |   563 -
 lib/librte_eal/bsdapp/eal/eal_alarm.c              |    60 -
 lib/librte_eal/bsdapp/eal/eal_debug.c              |   113 -
 lib/librte_eal/bsdapp/eal/eal_hugepage_info.c      |   133 -
 lib/librte_eal/bsdapp/eal/eal_interrupts.c         |    71 -
 lib/librte_eal/bsdapp/eal/eal_lcore.c              |   107 -
 lib/librte_eal/bsdapp/eal/eal_log.c                |    57 -
 lib/librte_eal/bsdapp/eal/eal_memory.c             |   224 -
 lib/librte_eal/bsdapp/eal/eal_pci.c                |   510 -
 lib/librte_eal/bsdapp/eal/eal_thread.c             |   233 -
 lib/librte_eal/bsdapp/eal/eal_timer.c              |   141 -
 .../bsdapp/eal/include/exec-env/rte_dom0_common.h  |   107 -
 .../bsdapp/eal/include/exec-env/rte_interrupts.h   |    54 -
 lib/librte_eal/bsdapp/nic_uio/BSDmakefile          |    36 -
 lib/librte_eal/bsdapp/nic_uio/Makefile             |    52 -
 lib/librte_eal/bsdapp/nic_uio/nic_uio.c            |   329 -
 lib/librte_eal/common/Makefile                     |    61 -
 lib/librte_eal/common/eal_common_cpuflags.c        |    85 -
 lib/librte_eal/common/eal_common_dev.c             |   109 -
 lib/librte_eal/common/eal_common_devargs.c         |   152 -
 lib/librte_eal/common/eal_common_errno.c           |    74 -
 lib/librte_eal/common/eal_common_hexdump.c         |   121 -
 lib/librte_eal/common/eal_common_launch.c          |   120 -
 lib/librte_eal/common/eal_common_log.c             |   320 -
 lib/librte_eal/common/eal_common_memory.c          |   121 -
 lib/librte_eal/common/eal_common_memzone.c         |   533 -
 lib/librte_eal/common/eal_common_options.c         |   611 --
 lib/librte_eal/common/eal_common_pci.c             |   207 -
 lib/librte_eal/common/eal_common_string_fns.c      |    69 -
 lib/librte_eal/common/eal_common_tailqs.c          |   146 -
 lib/librte_eal/common/eal_filesystem.h             |   118 -
 lib/librte_eal/common/eal_hugepages.h              |    67 -
 lib/librte_eal/common/eal_internal_cfg.h           |    93 -
 lib/librte_eal/common/eal_options.h                |    93 -
 lib/librte_eal/common/eal_private.h                |   206 -
 lib/librte_eal/common/eal_thread.h                 |    53 -
 .../common/include/arch/ppc_64/rte_atomic.h        |   426 -
 .../common/include/arch/ppc_64/rte_byteorder.h     |   149 -
 .../common/include/arch/ppc_64/rte_cpuflags.h      |   187 -
 .../common/include/arch/ppc_64/rte_cycles.h        |    87 -
 .../common/include/arch/ppc_64/rte_memcpy.h        |   225 -
 .../common/include/arch/ppc_64/rte_prefetch.h      |    61 -
 .../common/include/arch/ppc_64/rte_spinlock.h      |    73 -
 .../common/include/arch/x86/rte_atomic.h           |   216 -
 .../common/include/arch/x86/rte_atomic_32.h        |   222 -
 .../common/include/arch/x86/rte_atomic_64.h        |   191 -
 .../common/include/arch/x86/rte_byteorder.h        |   125 -
 .../common/include/arch/x86/rte_byteorder_32.h     |    51 -
 .../common/include/arch/x86/rte_byteorder_64.h     |    52 -
 .../common/include/arch/x86/rte_cpuflags.h         |   310 -
 .../common/include/arch/x86/rte_cycles.h           |   121 -
 .../common/include/arch/x86/rte_memcpy.h           |   297 -
 .../common/include/arch/x86/rte_prefetch.h         |    62 -
 .../common/include/arch/x86/rte_spinlock.h         |    94 -
 lib/librte_eal/common/include/generic/rte_atomic.h |   918 --
 .../common/include/generic/rte_byteorder.h         |   217 -
 .../common/include/generic/rte_cpuflags.h          |   110 -
 lib/librte_eal/common/include/generic/rte_cycles.h |   205 -
 lib/librte_eal/common/include/generic/rte_memcpy.h |   144 -
 .../common/include/generic/rte_prefetch.h          |    71 -
 .../common/include/generic/rte_spinlock.h          |   226 -
 lib/librte_eal/common/include/rte_alarm.h          |   106 -
 .../common/include/rte_branch_prediction.h         |    70 -
 lib/librte_eal/common/include/rte_common.h         |   389 -
 lib/librte_eal/common/include/rte_common_vect.h    |    93 -
 lib/librte_eal/common/include/rte_debug.h          |   105 -
 lib/librte_eal/common/include/rte_dev.h            |   111 -
 lib/librte_eal/common/include/rte_devargs.h        |   149 -
 lib/librte_eal/common/include/rte_eal.h            |   269 -
 lib/librte_eal/common/include/rte_eal_memconfig.h  |   112 -
 lib/librte_eal/common/include/rte_errno.h          |    96 -
 lib/librte_eal/common/include/rte_hexdump.h        |    89 -
 lib/librte_eal/common/include/rte_interrupts.h     |   121 -
 lib/librte_eal/common/include/rte_launch.h         |   177 -
 lib/librte_eal/common/include/rte_lcore.h          |   229 -
 lib/librte_eal/common/include/rte_log.h            |   308 -
 lib/librte_eal/common/include/rte_malloc_heap.h    |    56 -
 lib/librte_eal/common/include/rte_memory.h         |   218 -
 lib/librte_eal/common/include/rte_memzone.h        |   278 -
 lib/librte_eal/common/include/rte_pci.h            |   305 -
 .../common/include/rte_pci_dev_feature_defs.h      |    45 -
 .../common/include/rte_pci_dev_features.h          |    44 -
 lib/librte_eal/common/include/rte_pci_dev_ids.h    |   540 -
 lib/librte_eal/common/include/rte_per_lcore.h      |    79 -
 lib/librte_eal/common/include/rte_random.h         |    91 -
 lib/librte_eal/common/include/rte_rwlock.h         |   158 -
 lib/librte_eal/common/include/rte_string_fns.h     |    81 -
 lib/librte_eal/common/include/rte_tailq.h          |   215 -
 lib/librte_eal/common/include/rte_tailq_elem.h     |    90 -
 lib/librte_eal/common/include/rte_version.h        |   129 -
 lib/librte_eal/common/include/rte_warnings.h       |    84 -
 lib/librte_eal/linuxapp/Makefile                   |    45 -
 lib/librte_eal/linuxapp/eal/Makefile               |   112 -
 lib/librte_eal/linuxapp/eal/eal.c                  |   861 --
 lib/librte_eal/linuxapp/eal/eal_alarm.c            |   268 -
 lib/librte_eal/linuxapp/eal/eal_debug.c            |   113 -
 lib/librte_eal/linuxapp/eal/eal_hugepage_info.c    |   359 -
 lib/librte_eal/linuxapp/eal/eal_interrupts.c       |   826 --
 lib/librte_eal/linuxapp/eal/eal_ivshmem.c          |   968 --
 lib/librte_eal/linuxapp/eal/eal_lcore.c            |   191 -
 lib/librte_eal/linuxapp/eal/eal_log.c              |   197 -
 lib/librte_eal/linuxapp/eal/eal_memory.c           |  1564 ---
 lib/librte_eal/linuxapp/eal/eal_pci.c              |   629 --
 lib/librte_eal/linuxapp/eal/eal_pci_init.h         |   122 -
 lib/librte_eal/linuxapp/eal/eal_pci_uio.c          |   440 -
 lib/librte_eal/linuxapp/eal/eal_pci_vfio.c         |   807 --
 lib/librte_eal/linuxapp/eal/eal_pci_vfio_mp_sync.c |   395 -
 lib/librte_eal/linuxapp/eal/eal_thread.c           |   233 -
 lib/librte_eal/linuxapp/eal/eal_timer.c            |   343 -
 lib/librte_eal/linuxapp/eal/eal_vfio.h             |    55 -
 lib/librte_eal/linuxapp/eal/eal_xen_memory.c       |   370 -
 .../eal/include/exec-env/rte_dom0_common.h         |   108 -
 .../linuxapp/eal/include/exec-env/rte_interrupts.h |    58 -
 .../linuxapp/eal/include/exec-env/rte_kni_common.h |   174 -
 lib/librte_eal/linuxapp/igb_uio/Makefile           |    53 -
 lib/librte_eal/linuxapp/igb_uio/compat.h           |   116 -
 lib/librte_eal/linuxapp/igb_uio/igb_uio.c          |   643 --
 lib/librte_eal/linuxapp/kni/Makefile               |    93 -
 lib/librte_eal/linuxapp/kni/compat.h               |    21 -
 lib/librte_eal/linuxapp/kni/ethtool/README         |   100 -
 lib/librte_eal/linuxapp/kni/ethtool/igb/COPYING    |   339 -
 .../linuxapp/kni/ethtool/igb/e1000_82575.c         |  3665 -------
 .../linuxapp/kni/ethtool/igb/e1000_82575.h         |   509 -
 .../linuxapp/kni/ethtool/igb/e1000_api.c           |  1160 ---
 .../linuxapp/kni/ethtool/igb/e1000_api.h           |   157 -
 .../linuxapp/kni/ethtool/igb/e1000_defines.h       |  1380 ---
 lib/librte_eal/linuxapp/kni/ethtool/igb/e1000_hw.h |   793 --
 .../linuxapp/kni/ethtool/igb/e1000_i210.c          |   909 --
 .../linuxapp/kni/ethtool/igb/e1000_i210.h          |    91 -
 .../linuxapp/kni/ethtool/igb/e1000_mac.c           |  2096 ----
 .../linuxapp/kni/ethtool/igb/e1000_mac.h           |    80 -
 .../linuxapp/kni/ethtool/igb/e1000_manage.c        |   556 -
 .../linuxapp/kni/ethtool/igb/e1000_manage.h        |    89 -
 .../linuxapp/kni/ethtool/igb/e1000_mbx.c           |   526 -
 .../linuxapp/kni/ethtool/igb/e1000_mbx.h           |    87 -
 .../linuxapp/kni/ethtool/igb/e1000_nvm.c           |   967 --
 .../linuxapp/kni/ethtool/igb/e1000_nvm.h           |    75 -
 .../linuxapp/kni/ethtool/igb/e1000_osdep.h         |   136 -
 .../linuxapp/kni/ethtool/igb/e1000_phy.c           |  3405 ------
 .../linuxapp/kni/ethtool/igb/e1000_phy.h           |   256 -
 .../linuxapp/kni/ethtool/igb/e1000_regs.h          |   646 --
 lib/librte_eal/linuxapp/kni/ethtool/igb/igb.h      |   859 --
 .../linuxapp/kni/ethtool/igb/igb_debugfs.c         |    29 -
 .../linuxapp/kni/ethtool/igb/igb_ethtool.c         |  2859 ------
 .../linuxapp/kni/ethtool/igb/igb_hwmon.c           |   260 -
 lib/librte_eal/linuxapp/kni/ethtool/igb/igb_main.c | 10263 -------------------
 .../linuxapp/kni/ethtool/igb/igb_param.c           |   848 --
 .../linuxapp/kni/ethtool/igb/igb_procfs.c          |   363 -
 lib/librte_eal/linuxapp/kni/ethtool/igb/igb_ptp.c  |   944 --
 .../linuxapp/kni/ethtool/igb/igb_regtest.h         |   251 -
 lib/librte_eal/linuxapp/kni/ethtool/igb/igb_vmdq.c |   437 -
 lib/librte_eal/linuxapp/kni/ethtool/igb/igb_vmdq.h |    46 -
 lib/librte_eal/linuxapp/kni/ethtool/igb/kcompat.c  |  1482 ---
 lib/librte_eal/linuxapp/kni/ethtool/igb/kcompat.h  |  3884 -------
 .../linuxapp/kni/ethtool/igb/kcompat_ethtool.c     |  1172 ---
 lib/librte_eal/linuxapp/kni/ethtool/ixgbe/COPYING  |   339 -
 lib/librte_eal/linuxapp/kni/ethtool/ixgbe/ixgbe.h  |   925 --
 .../linuxapp/kni/ethtool/ixgbe/ixgbe_82598.c       |  1296 ---
 .../linuxapp/kni/ethtool/ixgbe/ixgbe_82598.h       |    44 -
 .../linuxapp/kni/ethtool/ixgbe/ixgbe_82599.c       |  2314 -----
 .../linuxapp/kni/ethtool/ixgbe/ixgbe_82599.h       |    58 -
 .../linuxapp/kni/ethtool/ixgbe/ixgbe_api.c         |  1158 ---
 .../linuxapp/kni/ethtool/ixgbe/ixgbe_api.h         |   168 -
 .../linuxapp/kni/ethtool/ixgbe/ixgbe_common.c      |  4083 --------
 .../linuxapp/kni/ethtool/ixgbe/ixgbe_common.h      |   140 -
 .../linuxapp/kni/ethtool/ixgbe/ixgbe_dcb.h         |   168 -
 .../linuxapp/kni/ethtool/ixgbe/ixgbe_ethtool.c     |  2901 ------
 .../linuxapp/kni/ethtool/ixgbe/ixgbe_fcoe.h        |    91 -
 .../linuxapp/kni/ethtool/ixgbe/ixgbe_main.c        |  2975 ------
 .../linuxapp/kni/ethtool/ixgbe/ixgbe_mbx.h         |   105 -
 .../linuxapp/kni/ethtool/ixgbe/ixgbe_osdep.h       |   132 -
 .../linuxapp/kni/ethtool/ixgbe/ixgbe_phy.c         |  1847 ----
 .../linuxapp/kni/ethtool/ixgbe/ixgbe_phy.h         |   137 -
 .../linuxapp/kni/ethtool/ixgbe/ixgbe_sriov.h       |    74 -
 .../linuxapp/kni/ethtool/ixgbe/ixgbe_type.h        |  3254 ------
 .../linuxapp/kni/ethtool/ixgbe/ixgbe_x540.c        |   938 --
 .../linuxapp/kni/ethtool/ixgbe/ixgbe_x540.h        |    58 -
 .../linuxapp/kni/ethtool/ixgbe/kcompat.c           |  1246 ---
 .../linuxapp/kni/ethtool/ixgbe/kcompat.h           |  3143 ------
 lib/librte_eal/linuxapp/kni/kni_dev.h              |   150 -
 lib/librte_eal/linuxapp/kni/kni_ethtool.c          |   217 -
 lib/librte_eal/linuxapp/kni/kni_fifo.h             |   108 -
 lib/librte_eal/linuxapp/kni/kni_misc.c             |   606 --
 lib/librte_eal/linuxapp/kni/kni_net.c              |   687 --
 lib/librte_eal/linuxapp/kni/kni_vhost.c            |   811 --
 lib/librte_eal/linuxapp/xen_dom0/Makefile          |    56 -
 lib/librte_eal/linuxapp/xen_dom0/compat.h          |    15 -
 lib/librte_eal/linuxapp/xen_dom0/dom0_mm_dev.h     |   107 -
 lib/librte_eal/linuxapp/xen_dom0/dom0_mm_misc.c    |   781 --
 390 files changed, 99336 insertions(+), 99336 deletions(-)
 create mode 100644 lib/core/librte_eal/Makefile
 create mode 100644 lib/core/librte_eal/bsdapp/Makefile
 create mode 100644 lib/core/librte_eal/bsdapp/contigmem/BSDmakefile
 create mode 100644 lib/core/librte_eal/bsdapp/contigmem/Makefile
 create mode 100644 lib/core/librte_eal/bsdapp/contigmem/contigmem.c
 create mode 100644 lib/core/librte_eal/bsdapp/eal/Makefile
 create mode 100644 lib/core/librte_eal/bsdapp/eal/eal.c
 create mode 100644 lib/core/librte_eal/bsdapp/eal/eal_alarm.c
 create mode 100644 lib/core/librte_eal/bsdapp/eal/eal_debug.c
 create mode 100644 lib/core/librte_eal/bsdapp/eal/eal_hugepage_info.c
 create mode 100644 lib/core/librte_eal/bsdapp/eal/eal_interrupts.c
 create mode 100644 lib/core/librte_eal/bsdapp/eal/eal_lcore.c
 create mode 100644 lib/core/librte_eal/bsdapp/eal/eal_log.c
 create mode 100644 lib/core/librte_eal/bsdapp/eal/eal_memory.c
 create mode 100644 lib/core/librte_eal/bsdapp/eal/eal_pci.c
 create mode 100644 lib/core/librte_eal/bsdapp/eal/eal_thread.c
 create mode 100644 lib/core/librte_eal/bsdapp/eal/eal_timer.c
 create mode 100644 lib/core/librte_eal/bsdapp/eal/include/exec-env/rte_dom0_common.h
 create mode 100644 lib/core/librte_eal/bsdapp/eal/include/exec-env/rte_interrupts.h
 create mode 100644 lib/core/librte_eal/bsdapp/nic_uio/BSDmakefile
 create mode 100644 lib/core/librte_eal/bsdapp/nic_uio/Makefile
 create mode 100644 lib/core/librte_eal/bsdapp/nic_uio/nic_uio.c
 create mode 100644 lib/core/librte_eal/common/Makefile
 create mode 100644 lib/core/librte_eal/common/eal_common_cpuflags.c
 create mode 100644 lib/core/librte_eal/common/eal_common_dev.c
 create mode 100644 lib/core/librte_eal/common/eal_common_devargs.c
 create mode 100644 lib/core/librte_eal/common/eal_common_errno.c
 create mode 100644 lib/core/librte_eal/common/eal_common_hexdump.c
 create mode 100644 lib/core/librte_eal/common/eal_common_launch.c
 create mode 100644 lib/core/librte_eal/common/eal_common_log.c
 create mode 100644 lib/core/librte_eal/common/eal_common_memory.c
 create mode 100644 lib/core/librte_eal/common/eal_common_memzone.c
 create mode 100644 lib/core/librte_eal/common/eal_common_options.c
 create mode 100644 lib/core/librte_eal/common/eal_common_pci.c
 create mode 100644 lib/core/librte_eal/common/eal_common_string_fns.c
 create mode 100644 lib/core/librte_eal/common/eal_common_tailqs.c
 create mode 100644 lib/core/librte_eal/common/eal_filesystem.h
 create mode 100644 lib/core/librte_eal/common/eal_hugepages.h
 create mode 100644 lib/core/librte_eal/common/eal_internal_cfg.h
 create mode 100644 lib/core/librte_eal/common/eal_options.h
 create mode 100644 lib/core/librte_eal/common/eal_private.h
 create mode 100644 lib/core/librte_eal/common/eal_thread.h
 create mode 100644 lib/core/librte_eal/common/include/arch/ppc_64/rte_atomic.h
 create mode 100644 lib/core/librte_eal/common/include/arch/ppc_64/rte_byteorder.h
 create mode 100644 lib/core/librte_eal/common/include/arch/ppc_64/rte_cpuflags.h
 create mode 100644 lib/core/librte_eal/common/include/arch/ppc_64/rte_cycles.h
 create mode 100644 lib/core/librte_eal/common/include/arch/ppc_64/rte_memcpy.h
 create mode 100644 lib/core/librte_eal/common/include/arch/ppc_64/rte_prefetch.h
 create mode 100644 lib/core/librte_eal/common/include/arch/ppc_64/rte_spinlock.h
 create mode 100644 lib/core/librte_eal/common/include/arch/x86/rte_atomic.h
 create mode 100644 lib/core/librte_eal/common/include/arch/x86/rte_atomic_32.h
 create mode 100644 lib/core/librte_eal/common/include/arch/x86/rte_atomic_64.h
 create mode 100644 lib/core/librte_eal/common/include/arch/x86/rte_byteorder.h
 create mode 100644 lib/core/librte_eal/common/include/arch/x86/rte_byteorder_32.h
 create mode 100644 lib/core/librte_eal/common/include/arch/x86/rte_byteorder_64.h
 create mode 100644 lib/core/librte_eal/common/include/arch/x86/rte_cpuflags.h
 create mode 100644 lib/core/librte_eal/common/include/arch/x86/rte_cycles.h
 create mode 100644 lib/core/librte_eal/common/include/arch/x86/rte_memcpy.h
 create mode 100644 lib/core/librte_eal/common/include/arch/x86/rte_prefetch.h
 create mode 100644 lib/core/librte_eal/common/include/arch/x86/rte_spinlock.h
 create mode 100644 lib/core/librte_eal/common/include/generic/rte_atomic.h
 create mode 100644 lib/core/librte_eal/common/include/generic/rte_byteorder.h
 create mode 100644 lib/core/librte_eal/common/include/generic/rte_cpuflags.h
 create mode 100644 lib/core/librte_eal/common/include/generic/rte_cycles.h
 create mode 100644 lib/core/librte_eal/common/include/generic/rte_memcpy.h
 create mode 100644 lib/core/librte_eal/common/include/generic/rte_prefetch.h
 create mode 100644 lib/core/librte_eal/common/include/generic/rte_spinlock.h
 create mode 100644 lib/core/librte_eal/common/include/rte_alarm.h
 create mode 100644 lib/core/librte_eal/common/include/rte_branch_prediction.h
 create mode 100644 lib/core/librte_eal/common/include/rte_common.h
 create mode 100644 lib/core/librte_eal/common/include/rte_common_vect.h
 create mode 100644 lib/core/librte_eal/common/include/rte_debug.h
 create mode 100644 lib/core/librte_eal/common/include/rte_dev.h
 create mode 100644 lib/core/librte_eal/common/include/rte_devargs.h
 create mode 100644 lib/core/librte_eal/common/include/rte_eal.h
 create mode 100644 lib/core/librte_eal/common/include/rte_eal_memconfig.h
 create mode 100644 lib/core/librte_eal/common/include/rte_errno.h
 create mode 100644 lib/core/librte_eal/common/include/rte_hexdump.h
 create mode 100644 lib/core/librte_eal/common/include/rte_interrupts.h
 create mode 100644 lib/core/librte_eal/common/include/rte_launch.h
 create mode 100644 lib/core/librte_eal/common/include/rte_lcore.h
 create mode 100644 lib/core/librte_eal/common/include/rte_log.h
 create mode 100644 lib/core/librte_eal/common/include/rte_malloc_heap.h
 create mode 100644 lib/core/librte_eal/common/include/rte_memory.h
 create mode 100644 lib/core/librte_eal/common/include/rte_memzone.h
 create mode 100644 lib/core/librte_eal/common/include/rte_pci.h
 create mode 100644 lib/core/librte_eal/common/include/rte_pci_dev_feature_defs.h
 create mode 100644 lib/core/librte_eal/common/include/rte_pci_dev_features.h
 create mode 100644 lib/core/librte_eal/common/include/rte_pci_dev_ids.h
 create mode 100644 lib/core/librte_eal/common/include/rte_per_lcore.h
 create mode 100644 lib/core/librte_eal/common/include/rte_random.h
 create mode 100644 lib/core/librte_eal/common/include/rte_rwlock.h
 create mode 100644 lib/core/librte_eal/common/include/rte_string_fns.h
 create mode 100644 lib/core/librte_eal/common/include/rte_tailq.h
 create mode 100644 lib/core/librte_eal/common/include/rte_tailq_elem.h
 create mode 100644 lib/core/librte_eal/common/include/rte_version.h
 create mode 100644 lib/core/librte_eal/common/include/rte_warnings.h
 create mode 100644 lib/core/librte_eal/linuxapp/Makefile
 create mode 100644 lib/core/librte_eal/linuxapp/eal/Makefile
 create mode 100644 lib/core/librte_eal/linuxapp/eal/eal.c
 create mode 100644 lib/core/librte_eal/linuxapp/eal/eal_alarm.c
 create mode 100644 lib/core/librte_eal/linuxapp/eal/eal_debug.c
 create mode 100644 lib/core/librte_eal/linuxapp/eal/eal_hugepage_info.c
 create mode 100644 lib/core/librte_eal/linuxapp/eal/eal_interrupts.c
 create mode 100644 lib/core/librte_eal/linuxapp/eal/eal_ivshmem.c
 create mode 100644 lib/core/librte_eal/linuxapp/eal/eal_lcore.c
 create mode 100644 lib/core/librte_eal/linuxapp/eal/eal_log.c
 create mode 100644 lib/core/librte_eal/linuxapp/eal/eal_memory.c
 create mode 100644 lib/core/librte_eal/linuxapp/eal/eal_pci.c
 create mode 100644 lib/core/librte_eal/linuxapp/eal/eal_pci_init.h
 create mode 100644 lib/core/librte_eal/linuxapp/eal/eal_pci_uio.c
 create mode 100644 lib/core/librte_eal/linuxapp/eal/eal_pci_vfio.c
 create mode 100644 lib/core/librte_eal/linuxapp/eal/eal_pci_vfio_mp_sync.c
 create mode 100644 lib/core/librte_eal/linuxapp/eal/eal_thread.c
 create mode 100644 lib/core/librte_eal/linuxapp/eal/eal_timer.c
 create mode 100644 lib/core/librte_eal/linuxapp/eal/eal_vfio.h
 create mode 100644 lib/core/librte_eal/linuxapp/eal/eal_xen_memory.c
 create mode 100644 lib/core/librte_eal/linuxapp/eal/include/exec-env/rte_dom0_common.h
 create mode 100644 lib/core/librte_eal/linuxapp/eal/include/exec-env/rte_interrupts.h
 create mode 100644 lib/core/librte_eal/linuxapp/eal/include/exec-env/rte_kni_common.h
 create mode 100644 lib/core/librte_eal/linuxapp/igb_uio/Makefile
 create mode 100644 lib/core/librte_eal/linuxapp/igb_uio/compat.h
 create mode 100644 lib/core/librte_eal/linuxapp/igb_uio/igb_uio.c
 create mode 100644 lib/core/librte_eal/linuxapp/kni/Makefile
 create mode 100644 lib/core/librte_eal/linuxapp/kni/compat.h
 create mode 100644 lib/core/librte_eal/linuxapp/kni/ethtool/README
 create mode 100644 lib/core/librte_eal/linuxapp/kni/ethtool/igb/COPYING
 create mode 100644 lib/core/librte_eal/linuxapp/kni/ethtool/igb/e1000_82575.c
 create mode 100644 lib/core/librte_eal/linuxapp/kni/ethtool/igb/e1000_82575.h
 create mode 100644 lib/core/librte_eal/linuxapp/kni/ethtool/igb/e1000_api.c
 create mode 100644 lib/core/librte_eal/linuxapp/kni/ethtool/igb/e1000_api.h
 create mode 100644 lib/core/librte_eal/linuxapp/kni/ethtool/igb/e1000_defines.h
 create mode 100644 lib/core/librte_eal/linuxapp/kni/ethtool/igb/e1000_hw.h
 create mode 100644 lib/core/librte_eal/linuxapp/kni/ethtool/igb/e1000_i210.c
 create mode 100644 lib/core/librte_eal/linuxapp/kni/ethtool/igb/e1000_i210.h
 create mode 100644 lib/core/librte_eal/linuxapp/kni/ethtool/igb/e1000_mac.c
 create mode 100644 lib/core/librte_eal/linuxapp/kni/ethtool/igb/e1000_mac.h
 create mode 100644 lib/core/librte_eal/linuxapp/kni/ethtool/igb/e1000_manage.c
 create mode 100644 lib/core/librte_eal/linuxapp/kni/ethtool/igb/e1000_manage.h
 create mode 100644 lib/core/librte_eal/linuxapp/kni/ethtool/igb/e1000_mbx.c
 create mode 100644 lib/core/librte_eal/linuxapp/kni/ethtool/igb/e1000_mbx.h
 create mode 100644 lib/core/librte_eal/linuxapp/kni/ethtool/igb/e1000_nvm.c
 create mode 100644 lib/core/librte_eal/linuxapp/kni/ethtool/igb/e1000_nvm.h
 create mode 100644 lib/core/librte_eal/linuxapp/kni/ethtool/igb/e1000_osdep.h
 create mode 100644 lib/core/librte_eal/linuxapp/kni/ethtool/igb/e1000_phy.c
 create mode 100644 lib/core/librte_eal/linuxapp/kni/ethtool/igb/e1000_phy.h
 create mode 100644 lib/core/librte_eal/linuxapp/kni/ethtool/igb/e1000_regs.h
 create mode 100644 lib/core/librte_eal/linuxapp/kni/ethtool/igb/igb.h
 create mode 100644 lib/core/librte_eal/linuxapp/kni/ethtool/igb/igb_debugfs.c
 create mode 100644 lib/core/librte_eal/linuxapp/kni/ethtool/igb/igb_ethtool.c
 create mode 100644 lib/core/librte_eal/linuxapp/kni/ethtool/igb/igb_hwmon.c
 create mode 100644 lib/core/librte_eal/linuxapp/kni/ethtool/igb/igb_main.c
 create mode 100644 lib/core/librte_eal/linuxapp/kni/ethtool/igb/igb_param.c
 create mode 100644 lib/core/librte_eal/linuxapp/kni/ethtool/igb/igb_procfs.c
 create mode 100644 lib/core/librte_eal/linuxapp/kni/ethtool/igb/igb_ptp.c
 create mode 100644 lib/core/librte_eal/linuxapp/kni/ethtool/igb/igb_regtest.h
 create mode 100644 lib/core/librte_eal/linuxapp/kni/ethtool/igb/igb_vmdq.c
 create mode 100644 lib/core/librte_eal/linuxapp/kni/ethtool/igb/igb_vmdq.h
 create mode 100644 lib/core/librte_eal/linuxapp/kni/ethtool/igb/kcompat.c
 create mode 100644 lib/core/librte_eal/linuxapp/kni/ethtool/igb/kcompat.h
 create mode 100644 lib/core/librte_eal/linuxapp/kni/ethtool/igb/kcompat_ethtool.c
 create mode 100644 lib/core/librte_eal/linuxapp/kni/ethtool/ixgbe/COPYING
 create mode 100644 lib/core/librte_eal/linuxapp/kni/ethtool/ixgbe/ixgbe.h
 create mode 100644 lib/core/librte_eal/linuxapp/kni/ethtool/ixgbe/ixgbe_82598.c
 create mode 100644 lib/core/librte_eal/linuxapp/kni/ethtool/ixgbe/ixgbe_82598.h
 create mode 100644 lib/core/librte_eal/linuxapp/kni/ethtool/ixgbe/ixgbe_82599.c
 create mode 100644 lib/core/librte_eal/linuxapp/kni/ethtool/ixgbe/ixgbe_82599.h
 create mode 100644 lib/core/librte_eal/linuxapp/kni/ethtool/ixgbe/ixgbe_api.c
 create mode 100644 lib/core/librte_eal/linuxapp/kni/ethtool/ixgbe/ixgbe_api.h
 create mode 100644 lib/core/librte_eal/linuxapp/kni/ethtool/ixgbe/ixgbe_common.c
 create mode 100644 lib/core/librte_eal/linuxapp/kni/ethtool/ixgbe/ixgbe_common.h
 create mode 100644 lib/core/librte_eal/linuxapp/kni/ethtool/ixgbe/ixgbe_dcb.h
 create mode 100644 lib/core/librte_eal/linuxapp/kni/ethtool/ixgbe/ixgbe_ethtool.c
 create mode 100644 lib/core/librte_eal/linuxapp/kni/ethtool/ixgbe/ixgbe_fcoe.h
 create mode 100644 lib/core/librte_eal/linuxapp/kni/ethtool/ixgbe/ixgbe_main.c
 create mode 100644 lib/core/librte_eal/linuxapp/kni/ethtool/ixgbe/ixgbe_mbx.h
 create mode 100644 lib/core/librte_eal/linuxapp/kni/ethtool/ixgbe/ixgbe_osdep.h
 create mode 100644 lib/core/librte_eal/linuxapp/kni/ethtool/ixgbe/ixgbe_phy.c
 create mode 100644 lib/core/librte_eal/linuxapp/kni/ethtool/ixgbe/ixgbe_phy.h
 create mode 100644 lib/core/librte_eal/linuxapp/kni/ethtool/ixgbe/ixgbe_sriov.h
 create mode 100644 lib/core/librte_eal/linuxapp/kni/ethtool/ixgbe/ixgbe_type.h
 create mode 100644 lib/core/librte_eal/linuxapp/kni/ethtool/ixgbe/ixgbe_x540.c
 create mode 100644 lib/core/librte_eal/linuxapp/kni/ethtool/ixgbe/ixgbe_x540.h
 create mode 100644 lib/core/librte_eal/linuxapp/kni/ethtool/ixgbe/kcompat.c
 create mode 100644 lib/core/librte_eal/linuxapp/kni/ethtool/ixgbe/kcompat.h
 create mode 100644 lib/core/librte_eal/linuxapp/kni/kni_dev.h
 create mode 100644 lib/core/librte_eal/linuxapp/kni/kni_ethtool.c
 create mode 100644 lib/core/librte_eal/linuxapp/kni/kni_fifo.h
 create mode 100644 lib/core/librte_eal/linuxapp/kni/kni_misc.c
 create mode 100644 lib/core/librte_eal/linuxapp/kni/kni_net.c
 create mode 100644 lib/core/librte_eal/linuxapp/kni/kni_vhost.c
 create mode 100644 lib/core/librte_eal/linuxapp/xen_dom0/Makefile
 create mode 100644 lib/core/librte_eal/linuxapp/xen_dom0/compat.h
 create mode 100644 lib/core/librte_eal/linuxapp/xen_dom0/dom0_mm_dev.h
 create mode 100644 lib/core/librte_eal/linuxapp/xen_dom0/dom0_mm_misc.c
 delete mode 100644 lib/librte_eal/Makefile
 delete mode 100644 lib/librte_eal/bsdapp/Makefile
 delete mode 100644 lib/librte_eal/bsdapp/contigmem/BSDmakefile
 delete mode 100644 lib/librte_eal/bsdapp/contigmem/Makefile
 delete mode 100644 lib/librte_eal/bsdapp/contigmem/contigmem.c
 delete mode 100644 lib/librte_eal/bsdapp/eal/Makefile
 delete mode 100644 lib/librte_eal/bsdapp/eal/eal.c
 delete mode 100644 lib/librte_eal/bsdapp/eal/eal_alarm.c
 delete mode 100644 lib/librte_eal/bsdapp/eal/eal_debug.c
 delete mode 100644 lib/librte_eal/bsdapp/eal/eal_hugepage_info.c
 delete mode 100644 lib/librte_eal/bsdapp/eal/eal_interrupts.c
 delete mode 100644 lib/librte_eal/bsdapp/eal/eal_lcore.c
 delete mode 100644 lib/librte_eal/bsdapp/eal/eal_log.c
 delete mode 100644 lib/librte_eal/bsdapp/eal/eal_memory.c
 delete mode 100644 lib/librte_eal/bsdapp/eal/eal_pci.c
 delete mode 100644 lib/librte_eal/bsdapp/eal/eal_thread.c
 delete mode 100644 lib/librte_eal/bsdapp/eal/eal_timer.c
 delete mode 100644 lib/librte_eal/bsdapp/eal/include/exec-env/rte_dom0_common.h
 delete mode 100644 lib/librte_eal/bsdapp/eal/include/exec-env/rte_interrupts.h
 delete mode 100644 lib/librte_eal/bsdapp/nic_uio/BSDmakefile
 delete mode 100644 lib/librte_eal/bsdapp/nic_uio/Makefile
 delete mode 100644 lib/librte_eal/bsdapp/nic_uio/nic_uio.c
 delete mode 100644 lib/librte_eal/common/Makefile
 delete mode 100644 lib/librte_eal/common/eal_common_cpuflags.c
 delete mode 100644 lib/librte_eal/common/eal_common_dev.c
 delete mode 100644 lib/librte_eal/common/eal_common_devargs.c
 delete mode 100644 lib/librte_eal/common/eal_common_errno.c
 delete mode 100644 lib/librte_eal/common/eal_common_hexdump.c
 delete mode 100644 lib/librte_eal/common/eal_common_launch.c
 delete mode 100644 lib/librte_eal/common/eal_common_log.c
 delete mode 100644 lib/librte_eal/common/eal_common_memory.c
 delete mode 100644 lib/librte_eal/common/eal_common_memzone.c
 delete mode 100644 lib/librte_eal/common/eal_common_options.c
 delete mode 100644 lib/librte_eal/common/eal_common_pci.c
 delete mode 100644 lib/librte_eal/common/eal_common_string_fns.c
 delete mode 100644 lib/librte_eal/common/eal_common_tailqs.c
 delete mode 100644 lib/librte_eal/common/eal_filesystem.h
 delete mode 100644 lib/librte_eal/common/eal_hugepages.h
 delete mode 100644 lib/librte_eal/common/eal_internal_cfg.h
 delete mode 100644 lib/librte_eal/common/eal_options.h
 delete mode 100644 lib/librte_eal/common/eal_private.h
 delete mode 100644 lib/librte_eal/common/eal_thread.h
 delete mode 100644 lib/librte_eal/common/include/arch/ppc_64/rte_atomic.h
 delete mode 100644 lib/librte_eal/common/include/arch/ppc_64/rte_byteorder.h
 delete mode 100644 lib/librte_eal/common/include/arch/ppc_64/rte_cpuflags.h
 delete mode 100644 lib/librte_eal/common/include/arch/ppc_64/rte_cycles.h
 delete mode 100644 lib/librte_eal/common/include/arch/ppc_64/rte_memcpy.h
 delete mode 100644 lib/librte_eal/common/include/arch/ppc_64/rte_prefetch.h
 delete mode 100644 lib/librte_eal/common/include/arch/ppc_64/rte_spinlock.h
 delete mode 100644 lib/librte_eal/common/include/arch/x86/rte_atomic.h
 delete mode 100644 lib/librte_eal/common/include/arch/x86/rte_atomic_32.h
 delete mode 100644 lib/librte_eal/common/include/arch/x86/rte_atomic_64.h
 delete mode 100644 lib/librte_eal/common/include/arch/x86/rte_byteorder.h
 delete mode 100644 lib/librte_eal/common/include/arch/x86/rte_byteorder_32.h
 delete mode 100644 lib/librte_eal/common/include/arch/x86/rte_byteorder_64.h
 delete mode 100644 lib/librte_eal/common/include/arch/x86/rte_cpuflags.h
 delete mode 100644 lib/librte_eal/common/include/arch/x86/rte_cycles.h
 delete mode 100644 lib/librte_eal/common/include/arch/x86/rte_memcpy.h
 delete mode 100644 lib/librte_eal/common/include/arch/x86/rte_prefetch.h
 delete mode 100644 lib/librte_eal/common/include/arch/x86/rte_spinlock.h
 delete mode 100644 lib/librte_eal/common/include/generic/rte_atomic.h
 delete mode 100644 lib/librte_eal/common/include/generic/rte_byteorder.h
 delete mode 100644 lib/librte_eal/common/include/generic/rte_cpuflags.h
 delete mode 100644 lib/librte_eal/common/include/generic/rte_cycles.h
 delete mode 100644 lib/librte_eal/common/include/generic/rte_memcpy.h
 delete mode 100644 lib/librte_eal/common/include/generic/rte_prefetch.h
 delete mode 100644 lib/librte_eal/common/include/generic/rte_spinlock.h
 delete mode 100644 lib/librte_eal/common/include/rte_alarm.h
 delete mode 100644 lib/librte_eal/common/include/rte_branch_prediction.h
 delete mode 100644 lib/librte_eal/common/include/rte_common.h
 delete mode 100644 lib/librte_eal/common/include/rte_common_vect.h
 delete mode 100644 lib/librte_eal/common/include/rte_debug.h
 delete mode 100644 lib/librte_eal/common/include/rte_dev.h
 delete mode 100644 lib/librte_eal/common/include/rte_devargs.h
 delete mode 100644 lib/librte_eal/common/include/rte_eal.h
 delete mode 100644 lib/librte_eal/common/include/rte_eal_memconfig.h
 delete mode 100644 lib/librte_eal/common/include/rte_errno.h
 delete mode 100644 lib/librte_eal/common/include/rte_hexdump.h
 delete mode 100644 lib/librte_eal/common/include/rte_interrupts.h
 delete mode 100644 lib/librte_eal/common/include/rte_launch.h
 delete mode 100644 lib/librte_eal/common/include/rte_lcore.h
 delete mode 100644 lib/librte_eal/common/include/rte_log.h
 delete mode 100644 lib/librte_eal/common/include/rte_malloc_heap.h
 delete mode 100644 lib/librte_eal/common/include/rte_memory.h
 delete mode 100644 lib/librte_eal/common/include/rte_memzone.h
 delete mode 100644 lib/librte_eal/common/include/rte_pci.h
 delete mode 100644 lib/librte_eal/common/include/rte_pci_dev_feature_defs.h
 delete mode 100644 lib/librte_eal/common/include/rte_pci_dev_features.h
 delete mode 100644 lib/librte_eal/common/include/rte_pci_dev_ids.h
 delete mode 100644 lib/librte_eal/common/include/rte_per_lcore.h
 delete mode 100644 lib/librte_eal/common/include/rte_random.h
 delete mode 100644 lib/librte_eal/common/include/rte_rwlock.h
 delete mode 100644 lib/librte_eal/common/include/rte_string_fns.h
 delete mode 100644 lib/librte_eal/common/include/rte_tailq.h
 delete mode 100644 lib/librte_eal/common/include/rte_tailq_elem.h
 delete mode 100644 lib/librte_eal/common/include/rte_version.h
 delete mode 100644 lib/librte_eal/common/include/rte_warnings.h
 delete mode 100644 lib/librte_eal/linuxapp/Makefile
 delete mode 100644 lib/librte_eal/linuxapp/eal/Makefile
 delete mode 100644 lib/librte_eal/linuxapp/eal/eal.c
 delete mode 100644 lib/librte_eal/linuxapp/eal/eal_alarm.c
 delete mode 100644 lib/librte_eal/linuxapp/eal/eal_debug.c
 delete mode 100644 lib/librte_eal/linuxapp/eal/eal_hugepage_info.c
 delete mode 100644 lib/librte_eal/linuxapp/eal/eal_interrupts.c
 delete mode 100644 lib/librte_eal/linuxapp/eal/eal_ivshmem.c
 delete mode 100644 lib/librte_eal/linuxapp/eal/eal_lcore.c
 delete mode 100644 lib/librte_eal/linuxapp/eal/eal_log.c
 delete mode 100644 lib/librte_eal/linuxapp/eal/eal_memory.c
 delete mode 100644 lib/librte_eal/linuxapp/eal/eal_pci.c
 delete mode 100644 lib/librte_eal/linuxapp/eal/eal_pci_init.h
 delete mode 100644 lib/librte_eal/linuxapp/eal/eal_pci_uio.c
 delete mode 100644 lib/librte_eal/linuxapp/eal/eal_pci_vfio.c
 delete mode 100644 lib/librte_eal/linuxapp/eal/eal_pci_vfio_mp_sync.c
 delete mode 100644 lib/librte_eal/linuxapp/eal/eal_thread.c
 delete mode 100644 lib/librte_eal/linuxapp/eal/eal_timer.c
 delete mode 100644 lib/librte_eal/linuxapp/eal/eal_vfio.h
 delete mode 100644 lib/librte_eal/linuxapp/eal/eal_xen_memory.c
 delete mode 100644 lib/librte_eal/linuxapp/eal/include/exec-env/rte_dom0_common.h
 delete mode 100644 lib/librte_eal/linuxapp/eal/include/exec-env/rte_interrupts.h
 delete mode 100644 lib/librte_eal/linuxapp/eal/include/exec-env/rte_kni_common.h
 delete mode 100644 lib/librte_eal/linuxapp/igb_uio/Makefile
 delete mode 100644 lib/librte_eal/linuxapp/igb_uio/compat.h
 delete mode 100644 lib/librte_eal/linuxapp/igb_uio/igb_uio.c
 delete mode 100644 lib/librte_eal/linuxapp/kni/Makefile
 delete mode 100644 lib/librte_eal/linuxapp/kni/compat.h
 delete mode 100644 lib/librte_eal/linuxapp/kni/ethtool/README
 delete mode 100644 lib/librte_eal/linuxapp/kni/ethtool/igb/COPYING
 delete mode 100644 lib/librte_eal/linuxapp/kni/ethtool/igb/e1000_82575.c
 delete mode 100644 lib/librte_eal/linuxapp/kni/ethtool/igb/e1000_82575.h
 delete mode 100644 lib/librte_eal/linuxapp/kni/ethtool/igb/e1000_api.c
 delete mode 100644 lib/librte_eal/linuxapp/kni/ethtool/igb/e1000_api.h
 delete mode 100644 lib/librte_eal/linuxapp/kni/ethtool/igb/e1000_defines.h
 delete mode 100644 lib/librte_eal/linuxapp/kni/ethtool/igb/e1000_hw.h
 delete mode 100644 lib/librte_eal/linuxapp/kni/ethtool/igb/e1000_i210.c
 delete mode 100644 lib/librte_eal/linuxapp/kni/ethtool/igb/e1000_i210.h
 delete mode 100644 lib/librte_eal/linuxapp/kni/ethtool/igb/e1000_mac.c
 delete mode 100644 lib/librte_eal/linuxapp/kni/ethtool/igb/e1000_mac.h
 delete mode 100644 lib/librte_eal/linuxapp/kni/ethtool/igb/e1000_manage.c
 delete mode 100644 lib/librte_eal/linuxapp/kni/ethtool/igb/e1000_manage.h
 delete mode 100644 lib/librte_eal/linuxapp/kni/ethtool/igb/e1000_mbx.c
 delete mode 100644 lib/librte_eal/linuxapp/kni/ethtool/igb/e1000_mbx.h
 delete mode 100644 lib/librte_eal/linuxapp/kni/ethtool/igb/e1000_nvm.c
 delete mode 100644 lib/librte_eal/linuxapp/kni/ethtool/igb/e1000_nvm.h
 delete mode 100644 lib/librte_eal/linuxapp/kni/ethtool/igb/e1000_osdep.h
 delete mode 100644 lib/librte_eal/linuxapp/kni/ethtool/igb/e1000_phy.c
 delete mode 100644 lib/librte_eal/linuxapp/kni/ethtool/igb/e1000_phy.h
 delete mode 100644 lib/librte_eal/linuxapp/kni/ethtool/igb/e1000_regs.h
 delete mode 100644 lib/librte_eal/linuxapp/kni/ethtool/igb/igb.h
 delete mode 100644 lib/librte_eal/linuxapp/kni/ethtool/igb/igb_debugfs.c
 delete mode 100644 lib/librte_eal/linuxapp/kni/ethtool/igb/igb_ethtool.c
 delete mode 100644 lib/librte_eal/linuxapp/kni/ethtool/igb/igb_hwmon.c
 delete mode 100644 lib/librte_eal/linuxapp/kni/ethtool/igb/igb_main.c
 delete mode 100644 lib/librte_eal/linuxapp/kni/ethtool/igb/igb_param.c
 delete mode 100644 lib/librte_eal/linuxapp/kni/ethtool/igb/igb_procfs.c
 delete mode 100644 lib/librte_eal/linuxapp/kni/ethtool/igb/igb_ptp.c
 delete mode 100644 lib/librte_eal/linuxapp/kni/ethtool/igb/igb_regtest.h
 delete mode 100644 lib/librte_eal/linuxapp/kni/ethtool/igb/igb_vmdq.c
 delete mode 100644 lib/librte_eal/linuxapp/kni/ethtool/igb/igb_vmdq.h
 delete mode 100644 lib/librte_eal/linuxapp/kni/ethtool/igb/kcompat.c
 delete mode 100644 lib/librte_eal/linuxapp/kni/ethtool/igb/kcompat.h
 delete mode 100644 lib/librte_eal/linuxapp/kni/ethtool/igb/kcompat_ethtool.c
 delete mode 100644 lib/librte_eal/linuxapp/kni/ethtool/ixgbe/COPYING
 delete mode 100644 lib/librte_eal/linuxapp/kni/ethtool/ixgbe/ixgbe.h
 delete mode 100644 lib/librte_eal/linuxapp/kni/ethtool/ixgbe/ixgbe_82598.c
 delete mode 100644 lib/librte_eal/linuxapp/kni/ethtool/ixgbe/ixgbe_82598.h
 delete mode 100644 lib/librte_eal/linuxapp/kni/ethtool/ixgbe/ixgbe_82599.c
 delete mode 100644 lib/librte_eal/linuxapp/kni/ethtool/ixgbe/ixgbe_82599.h
 delete mode 100644 lib/librte_eal/linuxapp/kni/ethtool/ixgbe/ixgbe_api.c
 delete mode 100644 lib/librte_eal/linuxapp/kni/ethtool/ixgbe/ixgbe_api.h
 delete mode 100644 lib/librte_eal/linuxapp/kni/ethtool/ixgbe/ixgbe_common.c
 delete mode 100644 lib/librte_eal/linuxapp/kni/ethtool/ixgbe/ixgbe_common.h
 delete mode 100644 lib/librte_eal/linuxapp/kni/ethtool/ixgbe/ixgbe_dcb.h
 delete mode 100644 lib/librte_eal/linuxapp/kni/ethtool/ixgbe/ixgbe_ethtool.c
 delete mode 100644 lib/librte_eal/linuxapp/kni/ethtool/ixgbe/ixgbe_fcoe.h
 delete mode 100644 lib/librte_eal/linuxapp/kni/ethtool/ixgbe/ixgbe_main.c
 delete mode 100644 lib/librte_eal/linuxapp/kni/ethtool/ixgbe/ixgbe_mbx.h
 delete mode 100644 lib/librte_eal/linuxapp/kni/ethtool/ixgbe/ixgbe_osdep.h
 delete mode 100644 lib/librte_eal/linuxapp/kni/ethtool/ixgbe/ixgbe_phy.c
 delete mode 100644 lib/librte_eal/linuxapp/kni/ethtool/ixgbe/ixgbe_phy.h
 delete mode 100644 lib/librte_eal/linuxapp/kni/ethtool/ixgbe/ixgbe_sriov.h
 delete mode 100644 lib/librte_eal/linuxapp/kni/ethtool/ixgbe/ixgbe_type.h
 delete mode 100644 lib/librte_eal/linuxapp/kni/ethtool/ixgbe/ixgbe_x540.c
 delete mode 100644 lib/librte_eal/linuxapp/kni/ethtool/ixgbe/ixgbe_x540.h
 delete mode 100644 lib/librte_eal/linuxapp/kni/ethtool/ixgbe/kcompat.c
 delete mode 100644 lib/librte_eal/linuxapp/kni/ethtool/ixgbe/kcompat.h
 delete mode 100644 lib/librte_eal/linuxapp/kni/kni_dev.h
 delete mode 100644 lib/librte_eal/linuxapp/kni/kni_ethtool.c
 delete mode 100644 lib/librte_eal/linuxapp/kni/kni_fifo.h
 delete mode 100644 lib/librte_eal/linuxapp/kni/kni_misc.c
 delete mode 100644 lib/librte_eal/linuxapp/kni/kni_net.c
 delete mode 100644 lib/librte_eal/linuxapp/kni/kni_vhost.c
 delete mode 100644 lib/librte_eal/linuxapp/xen_dom0/Makefile
 delete mode 100644 lib/librte_eal/linuxapp/xen_dom0/compat.h
 delete mode 100644 lib/librte_eal/linuxapp/xen_dom0/dom0_mm_dev.h
 delete mode 100644 lib/librte_eal/linuxapp/xen_dom0/dom0_mm_misc.c

-- 
1.9.3

^ permalink raw reply	[flat|nested] 21+ messages in thread

* [dpdk-dev] [PATCH RFC 04/13] core: move librte_malloc to core subdir
  2015-01-12 16:33 [dpdk-dev] [PATCH RFC 00/13] Update build system Sergio Gonzalez Monroy
                   ` (2 preceding siblings ...)
  2015-01-12 16:33 ` [dpdk-dev] [PATCH RFC 03/13] core: move librte_eal to core subdir Sergio Gonzalez Monroy
@ 2015-01-12 16:33 ` Sergio Gonzalez Monroy
  2015-01-12 16:33 ` [dpdk-dev] [PATCH RFC 05/13] core: move librte_mempool " Sergio Gonzalez Monroy
                   ` (10 subsequent siblings)
  14 siblings, 0 replies; 21+ messages in thread
From: Sergio Gonzalez Monroy @ 2015-01-12 16:33 UTC (permalink / raw)
  To: dev

This is equivalent to:

git mv lib/librte_malloc lib/core

Signed-off-by: Sergio Gonzalez Monroy <sergio.gonzalez.monroy@intel.com>
---
 lib/core/librte_malloc/Makefile      |  48 +++++
 lib/core/librte_malloc/malloc_elem.c | 321 ++++++++++++++++++++++++++++++++
 lib/core/librte_malloc/malloc_elem.h | 190 +++++++++++++++++++
 lib/core/librte_malloc/malloc_heap.c | 210 +++++++++++++++++++++
 lib/core/librte_malloc/malloc_heap.h |  65 +++++++
 lib/core/librte_malloc/rte_malloc.c  | 261 ++++++++++++++++++++++++++
 lib/core/librte_malloc/rte_malloc.h  | 342 +++++++++++++++++++++++++++++++++++
 lib/librte_malloc/Makefile           |  48 -----
 lib/librte_malloc/malloc_elem.c      | 321 --------------------------------
 lib/librte_malloc/malloc_elem.h      | 190 -------------------
 lib/librte_malloc/malloc_heap.c      | 210 ---------------------
 lib/librte_malloc/malloc_heap.h      |  65 -------
 lib/librte_malloc/rte_malloc.c       | 261 --------------------------
 lib/librte_malloc/rte_malloc.h       | 342 -----------------------------------
 14 files changed, 1437 insertions(+), 1437 deletions(-)
 create mode 100644 lib/core/librte_malloc/Makefile
 create mode 100644 lib/core/librte_malloc/malloc_elem.c
 create mode 100644 lib/core/librte_malloc/malloc_elem.h
 create mode 100644 lib/core/librte_malloc/malloc_heap.c
 create mode 100644 lib/core/librte_malloc/malloc_heap.h
 create mode 100644 lib/core/librte_malloc/rte_malloc.c
 create mode 100644 lib/core/librte_malloc/rte_malloc.h
 delete mode 100644 lib/librte_malloc/Makefile
 delete mode 100644 lib/librte_malloc/malloc_elem.c
 delete mode 100644 lib/librte_malloc/malloc_elem.h
 delete mode 100644 lib/librte_malloc/malloc_heap.c
 delete mode 100644 lib/librte_malloc/malloc_heap.h
 delete mode 100644 lib/librte_malloc/rte_malloc.c
 delete mode 100644 lib/librte_malloc/rte_malloc.h

diff --git a/lib/core/librte_malloc/Makefile b/lib/core/librte_malloc/Makefile
new file mode 100644
index 0000000..ba87e34
--- /dev/null
+++ b/lib/core/librte_malloc/Makefile
@@ -0,0 +1,48 @@
+#   BSD LICENSE
+#
+#   Copyright(c) 2010-2014 Intel Corporation. All rights reserved.
+#   All rights reserved.
+#
+#   Redistribution and use in source and binary forms, with or without
+#   modification, are permitted provided that the following conditions
+#   are met:
+#
+#     * Redistributions of source code must retain the above copyright
+#       notice, this list of conditions and the following disclaimer.
+#     * Redistributions in binary form must reproduce the above copyright
+#       notice, this list of conditions and the following disclaimer in
+#       the documentation and/or other materials provided with the
+#       distribution.
+#     * Neither the name of Intel Corporation nor the names of its
+#       contributors may be used to endorse or promote products derived
+#       from this software without specific prior written permission.
+#
+#   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+#   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+#   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+#   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+#   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+#   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+#   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+#   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+#   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+#   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+#   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+include $(RTE_SDK)/mk/rte.vars.mk
+
+# library name
+LIB = librte_malloc.a
+
+CFLAGS += $(WERROR_FLAGS) -I$(SRCDIR) -O3
+
+# all source are stored in SRCS-y
+SRCS-$(CONFIG_RTE_LIBRTE_MALLOC) := rte_malloc.c malloc_elem.c malloc_heap.c
+
+# install includes
+SYMLINK-$(CONFIG_RTE_LIBRTE_MALLOC)-include := rte_malloc.h
+
+# this lib needs eal
+DEPDIRS-$(CONFIG_RTE_LIBRTE_MALLOC) += lib/librte_eal
+
+include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/lib/core/librte_malloc/malloc_elem.c b/lib/core/librte_malloc/malloc_elem.c
new file mode 100644
index 0000000..ef26e47
--- /dev/null
+++ b/lib/core/librte_malloc/malloc_elem.c
@@ -0,0 +1,321 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2010-2014 Intel Corporation. All rights reserved.
+ *   All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Intel Corporation nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+#include <stdint.h>
+#include <stddef.h>
+#include <stdio.h>
+#include <string.h>
+#include <sys/queue.h>
+
+#include <rte_memory.h>
+#include <rte_memzone.h>
+#include <rte_tailq.h>
+#include <rte_eal.h>
+#include <rte_launch.h>
+#include <rte_per_lcore.h>
+#include <rte_lcore.h>
+#include <rte_debug.h>
+#include <rte_common.h>
+#include <rte_spinlock.h>
+
+#include "malloc_elem.h"
+#include "malloc_heap.h"
+
+#define MIN_DATA_SIZE (RTE_CACHE_LINE_SIZE)
+
+/*
+ * initialise a general malloc_elem header structure
+ */
+void
+malloc_elem_init(struct malloc_elem *elem,
+		struct malloc_heap *heap, const struct rte_memzone *mz, size_t size)
+{
+	elem->heap = heap;
+	elem->mz = mz;
+	elem->prev = NULL;
+	memset(&elem->free_list, 0, sizeof(elem->free_list));
+	elem->state = ELEM_FREE;
+	elem->size = size;
+	elem->pad = 0;
+	set_header(elem);
+	set_trailer(elem);
+}
+
+/*
+ * initialise a dummy malloc_elem header for the end-of-memzone marker
+ */
+void
+malloc_elem_mkend(struct malloc_elem *elem, struct malloc_elem *prev)
+{
+	malloc_elem_init(elem, prev->heap, prev->mz, 0);
+	elem->prev = prev;
+	elem->state = ELEM_BUSY; /* mark busy so it's never merged */
+}
+
+/*
+ * calculate the starting point of where data of the requested size
+ * and alignment would fit in the current element. If the data doesn't
+ * fit, return NULL.
+ */
+static void *
+elem_start_pt(struct malloc_elem *elem, size_t size, unsigned align)
+{
+	const uintptr_t end_pt = (uintptr_t)elem +
+			elem->size - MALLOC_ELEM_TRAILER_LEN;
+	const uintptr_t new_data_start = rte_align_floor_int((end_pt - size),align);
+	const uintptr_t new_elem_start = new_data_start - MALLOC_ELEM_HEADER_LEN;
+
+	/* if the new start point is before the existing start, it won't fit */
+	return (new_elem_start < (uintptr_t)elem) ? NULL : (void *)new_elem_start;
+}
+
+/*
+ * use elem_start_pt to determine if we can meet the size and
+ * alignment request from the current element
+ */
+int
+malloc_elem_can_hold(struct malloc_elem *elem, size_t size, unsigned align)
+{
+	return elem_start_pt(elem, size, align) != NULL;
+}
+
+/*
+ * split an existing element into two smaller elements at the given
+ * split_pt parameter.
+ */
+static void
+split_elem(struct malloc_elem *elem, struct malloc_elem *split_pt)
+{
+	struct malloc_elem *next_elem = RTE_PTR_ADD(elem, elem->size);
+	const unsigned old_elem_size = (uintptr_t)split_pt - (uintptr_t)elem;
+	const unsigned new_elem_size = elem->size - old_elem_size;
+
+	malloc_elem_init(split_pt, elem->heap, elem->mz, new_elem_size);
+	split_pt->prev = elem;
+	next_elem->prev = split_pt;
+	elem->size = old_elem_size;
+	set_trailer(elem);
+}
+
+/*
+ * Given an element size, compute its freelist index.
+ * We free an element into the freelist containing similarly-sized elements.
+ * We try to allocate elements starting with the freelist containing
+ * similarly-sized elements, and if necessary, we search freelists
+ * containing larger elements.
+ *
+ * Example element size ranges for a heap with five free lists:
+ *   heap->free_head[0] - (0   , 2^8]
+ *   heap->free_head[1] - (2^8 , 2^10]
+ *   heap->free_head[2] - (2^10 ,2^12]
+ *   heap->free_head[3] - (2^12, 2^14]
+ *   heap->free_head[4] - (2^14, MAX_SIZE]
+ */
+size_t
+malloc_elem_free_list_index(size_t size)
+{
+#define MALLOC_MINSIZE_LOG2   8
+#define MALLOC_LOG2_INCREMENT 2
+
+	size_t log2;
+	size_t index;
+
+	if (size <= (1UL << MALLOC_MINSIZE_LOG2))
+		return 0;
+
+	/* Find next power of 2 >= size. */
+	log2 = sizeof(size) * 8 - __builtin_clzl(size-1);
+
+	/* Compute freelist index, based on log2(size). */
+	index = (log2 - MALLOC_MINSIZE_LOG2 + MALLOC_LOG2_INCREMENT - 1) /
+	        MALLOC_LOG2_INCREMENT;
+
+	return (index <= RTE_HEAP_NUM_FREELISTS-1?
+	        index: RTE_HEAP_NUM_FREELISTS-1);
+}
+
+/*
+ * Add the specified element to its heap's free list.
+ */
+void
+malloc_elem_free_list_insert(struct malloc_elem *elem)
+{
+	size_t idx = malloc_elem_free_list_index(elem->size - MALLOC_ELEM_HEADER_LEN);
+
+	elem->state = ELEM_FREE;
+	LIST_INSERT_HEAD(&elem->heap->free_head[idx], elem, free_list);
+}
+
+/*
+ * Remove the specified element from its heap's free list.
+ */
+static void
+elem_free_list_remove(struct malloc_elem *elem)
+{
+	LIST_REMOVE(elem, free_list);
+}
+
+/*
+ * reserve a block of data in an existing malloc_elem. If the malloc_elem
+ * is much larger than the data block requested, we split the element in two.
+ * This function is only called from malloc_heap_alloc so parameter checking
+ * is not done here, as it's done there previously.
+ */
+struct malloc_elem *
+malloc_elem_alloc(struct malloc_elem *elem, size_t size, unsigned align)
+{
+	struct malloc_elem *new_elem = elem_start_pt(elem, size, align);
+	const unsigned old_elem_size = (uintptr_t)new_elem - (uintptr_t)elem;
+
+	if (old_elem_size < MALLOC_ELEM_OVERHEAD + MIN_DATA_SIZE){
+		/* don't split it, pad the element instead */
+		elem->state = ELEM_BUSY;
+		elem->pad = old_elem_size;
+
+		/* put a dummy header in padding, to point to real element header */
+		if (elem->pad > 0){ /* pad will be at least 64 bytes, as everything
+		                     * is cache-line aligned */
+			new_elem->pad = elem->pad;
+			new_elem->state = ELEM_PAD;
+			new_elem->size = elem->size - elem->pad;
+			set_header(new_elem);
+		}
+		/* remove element from free list */
+		elem_free_list_remove(elem);
+
+		return new_elem;
+	}
+
+	/* we are going to split the element in two. The original element
+	 * remains free, and the new element is the one allocated.
+	 * Re-insert original element, in case its new size makes it
+	 * belong on a different list.
+	 */
+	elem_free_list_remove(elem);
+	split_elem(elem, new_elem);
+	new_elem->state = ELEM_BUSY;
+	malloc_elem_free_list_insert(elem);
+
+	return new_elem;
+}
+
+/*
+ * join two struct malloc_elem together. elem1 and elem2 must
+ * be contiguous in memory.
+ */
+static inline void
+join_elem(struct malloc_elem *elem1, struct malloc_elem *elem2)
+{
+	struct malloc_elem *next = RTE_PTR_ADD(elem2, elem2->size);
+	elem1->size += elem2->size;
+	next->prev = elem1;
+}
+
+/*
+ * free a malloc_elem block by adding it to the free list. If the
+ * blocks either immediately before or immediately after newly freed block
+ * are also free, the blocks are merged together.
+ */
+int
+malloc_elem_free(struct malloc_elem *elem)
+{
+	if (!malloc_elem_cookies_ok(elem) || elem->state != ELEM_BUSY)
+		return -1;
+
+	rte_spinlock_lock(&(elem->heap->lock));
+	struct malloc_elem *next = RTE_PTR_ADD(elem, elem->size);
+	if (next->state == ELEM_FREE){
+		/* remove from free list, join to this one */
+		elem_free_list_remove(next);
+		join_elem(elem, next);
+	}
+
+	/* check if previous element is free, if so join with it and return,
+	 * need to re-insert in free list, as that element's size is changing
+	 */
+	if (elem->prev != NULL && elem->prev->state == ELEM_FREE) {
+		elem_free_list_remove(elem->prev);
+		join_elem(elem->prev, elem);
+		malloc_elem_free_list_insert(elem->prev);
+	}
+	/* otherwise add ourselves to the free list */
+	else {
+		malloc_elem_free_list_insert(elem);
+		elem->pad = 0;
+	}
+	/* decrease heap's count of allocated elements */
+	elem->heap->alloc_count--;
+	rte_spinlock_unlock(&(elem->heap->lock));
+
+	return 0;
+}
+
+/*
+ * attempt to resize a malloc_elem by expanding into any free space
+ * immediately after it in memory.
+ */
+int
+malloc_elem_resize(struct malloc_elem *elem, size_t size)
+{
+	const size_t new_size = size + MALLOC_ELEM_OVERHEAD;
+	/* if we request a smaller size, then always return ok */
+	const size_t current_size = elem->size - elem->pad;
+	if (current_size >= new_size)
+		return 0;
+
+	struct malloc_elem *next = RTE_PTR_ADD(elem, elem->size);
+	rte_spinlock_lock(&elem->heap->lock);
+	if (next->state != ELEM_FREE)
+		goto err_return;
+	if (current_size + next->size < new_size)
+		goto err_return;
+
+	/* we now know the element fits, so remove from free list,
+	 * join the two
+	 */
+	elem_free_list_remove(next);
+	join_elem(elem, next);
+
+	if (elem->size - new_size >= MIN_DATA_SIZE + MALLOC_ELEM_OVERHEAD){
+		/* now we have a big block together. Lets cut it down a bit, by splitting */
+		struct malloc_elem *split_pt = RTE_PTR_ADD(elem, new_size);
+		split_pt = RTE_PTR_ALIGN_CEIL(split_pt, RTE_CACHE_LINE_SIZE);
+		split_elem(elem, split_pt);
+		malloc_elem_free_list_insert(split_pt);
+	}
+	rte_spinlock_unlock(&elem->heap->lock);
+	return 0;
+
+err_return:
+	rte_spinlock_unlock(&elem->heap->lock);
+	return -1;
+}
diff --git a/lib/core/librte_malloc/malloc_elem.h b/lib/core/librte_malloc/malloc_elem.h
new file mode 100644
index 0000000..9790b1a
--- /dev/null
+++ b/lib/core/librte_malloc/malloc_elem.h
@@ -0,0 +1,190 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2010-2014 Intel Corporation. All rights reserved.
+ *   All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Intel Corporation nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef MALLOC_ELEM_H_
+#define MALLOC_ELEM_H_
+
+#include <rte_memory.h>
+
+/* dummy definition of struct so we can use pointers to it in malloc_elem struct */
+struct malloc_heap;
+
+enum elem_state {
+	ELEM_FREE = 0,
+	ELEM_BUSY,
+	ELEM_PAD  /* element is a padding-only header */
+};
+
+struct malloc_elem {
+	struct malloc_heap *heap;
+	struct malloc_elem *volatile prev;      /* points to prev elem in memzone */
+	LIST_ENTRY(malloc_elem) free_list;      /* list of free elements in heap */
+	const struct rte_memzone *mz;
+	volatile enum elem_state state;
+	uint32_t pad;
+	size_t size;
+#ifdef RTE_LIBRTE_MALLOC_DEBUG
+	uint64_t header_cookie;         /* Cookie marking start of data */
+	                                /* trailer cookie at start + size */
+#endif
+} __rte_cache_aligned;
+
+#ifndef RTE_LIBRTE_MALLOC_DEBUG
+static const unsigned MALLOC_ELEM_TRAILER_LEN = 0;
+
+/* dummy function - just check if pointer is non-null */
+static inline int
+malloc_elem_cookies_ok(const struct malloc_elem *elem){ return elem != NULL; }
+
+/* dummy function - no header if malloc_debug is not enabled */
+static inline void
+set_header(struct malloc_elem *elem __rte_unused){ }
+
+/* dummy function - no trailer if malloc_debug is not enabled */
+static inline void
+set_trailer(struct malloc_elem *elem __rte_unused){ }
+
+
+#else
+static const unsigned MALLOC_ELEM_TRAILER_LEN = RTE_CACHE_LINE_SIZE;
+
+#define MALLOC_HEADER_COOKIE   0xbadbadbadadd2e55ULL /**< Header cookie. */
+#define MALLOC_TRAILER_COOKIE  0xadd2e55badbadbadULL /**< Trailer cookie.*/
+
+/* define macros to make referencing the header and trailer cookies easier */
+#define MALLOC_ELEM_TRAILER(elem) (*((uint64_t*)RTE_PTR_ADD(elem, \
+		elem->size - MALLOC_ELEM_TRAILER_LEN)))
+#define MALLOC_ELEM_HEADER(elem) (elem->header_cookie)
+
+static inline void
+set_header(struct malloc_elem *elem)
+{
+	if (elem != NULL)
+		MALLOC_ELEM_HEADER(elem) = MALLOC_HEADER_COOKIE;
+}
+
+static inline void
+set_trailer(struct malloc_elem *elem)
+{
+	if (elem != NULL)
+		MALLOC_ELEM_TRAILER(elem) = MALLOC_TRAILER_COOKIE;
+}
+
+/* check that the header and trailer cookies are set correctly */
+static inline int
+malloc_elem_cookies_ok(const struct malloc_elem *elem)
+{
+	return (elem != NULL &&
+			MALLOC_ELEM_HEADER(elem) == MALLOC_HEADER_COOKIE &&
+			MALLOC_ELEM_TRAILER(elem) == MALLOC_TRAILER_COOKIE);
+}
+
+#endif
+
+static const unsigned MALLOC_ELEM_HEADER_LEN = sizeof(struct malloc_elem);
+#define MALLOC_ELEM_OVERHEAD (MALLOC_ELEM_HEADER_LEN + MALLOC_ELEM_TRAILER_LEN)
+
+/*
+ * Given a pointer to the start of a memory block returned by malloc, get
+ * the actual malloc_elem header for that block.
+ */
+static inline struct malloc_elem *
+malloc_elem_from_data(const void *data)
+{
+	if (data == NULL)
+		return NULL;
+
+	struct malloc_elem *elem = RTE_PTR_SUB(data, MALLOC_ELEM_HEADER_LEN);
+	if (!malloc_elem_cookies_ok(elem))
+		return NULL;
+	return elem->state != ELEM_PAD ? elem : RTE_PTR_SUB(elem, elem->pad);
+}
+
+/*
+ * initialise a malloc_elem header
+ */
+void
+malloc_elem_init(struct malloc_elem *elem,
+		struct malloc_heap *heap,
+		const struct rte_memzone *mz,
+		size_t size);
+
+/*
+ * initialise a dummy malloc_elem header for the end-of-memzone marker
+ */
+void
+malloc_elem_mkend(struct malloc_elem *elem,
+		struct malloc_elem *prev_free);
+
+/*
+ * return true if the current malloc_elem can hold a block of data
+ * of the requested size and with the requested alignment
+ */
+int
+malloc_elem_can_hold(struct malloc_elem *elem, size_t size, unsigned align);
+
+/*
+ * reserve a block of data in an existing malloc_elem. If the malloc_elem
+ * is much larger than the data block requested, we split the element in two.
+ */
+struct malloc_elem *
+malloc_elem_alloc(struct malloc_elem *elem, size_t size, unsigned align);
+
+/*
+ * free a malloc_elem block by adding it to the free list. If the
+ * blocks either immediately before or immediately after newly freed block
+ * are also free, the blocks are merged together.
+ */
+int
+malloc_elem_free(struct malloc_elem *elem);
+
+/*
+ * attempt to resize a malloc_elem by expanding into any free space
+ * immediately after it in memory.
+ */
+int
+malloc_elem_resize(struct malloc_elem *elem, size_t size);
+
+/*
+ * Given an element size, compute its freelist index.
+ */
+size_t
+malloc_elem_free_list_index(size_t size);
+
+/*
+ * Add element to its heap's free list.
+ */
+void
+malloc_elem_free_list_insert(struct malloc_elem *elem);
+
+#endif /* MALLOC_ELEM_H_ */
diff --git a/lib/core/librte_malloc/malloc_heap.c b/lib/core/librte_malloc/malloc_heap.c
new file mode 100644
index 0000000..95fcfec
--- /dev/null
+++ b/lib/core/librte_malloc/malloc_heap.c
@@ -0,0 +1,210 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2010-2014 Intel Corporation. All rights reserved.
+ *   All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Intel Corporation nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+#include <stdint.h>
+#include <stddef.h>
+#include <stdlib.h>
+#include <stdio.h>
+#include <stdarg.h>
+#include <errno.h>
+#include <sys/queue.h>
+
+#include <rte_memory.h>
+#include <rte_memzone.h>
+#include <rte_tailq.h>
+#include <rte_eal.h>
+#include <rte_eal_memconfig.h>
+#include <rte_launch.h>
+#include <rte_per_lcore.h>
+#include <rte_lcore.h>
+#include <rte_common.h>
+#include <rte_string_fns.h>
+#include <rte_spinlock.h>
+#include <rte_memcpy.h>
+#include <rte_atomic.h>
+
+#include "malloc_elem.h"
+#include "malloc_heap.h"
+
+/* since the memzone size starts with a digit, it will appear unquoted in
+ * rte_config.h, so quote it so it can be passed to rte_str_to_size */
+#define MALLOC_MEMZONE_SIZE RTE_STR(RTE_MALLOC_MEMZONE_SIZE)
+
+/*
+ * returns the configuration setting for the memzone size as a size_t value
+ */
+static inline size_t
+get_malloc_memzone_size(void)
+{
+	return rte_str_to_size(MALLOC_MEMZONE_SIZE);
+}
+
+/*
+ * reserve an extra memory zone and make it available for use by a particular
+ * heap. This reserves the zone and sets a dummy malloc_elem header at the end
+ * to prevent overflow. The rest of the zone is added to free list as a single
+ * large free block
+ */
+static int
+malloc_heap_add_memzone(struct malloc_heap *heap, size_t size, unsigned align)
+{
+	const unsigned mz_flags = 0;
+	const size_t block_size = get_malloc_memzone_size();
+	/* ensure the data we want to allocate will fit in the memzone */
+	const size_t min_size = size + align + MALLOC_ELEM_OVERHEAD * 2;
+	const struct rte_memzone *mz = NULL;
+	struct rte_mem_config *mcfg = rte_eal_get_configuration()->mem_config;
+	unsigned numa_socket = heap - mcfg->malloc_heaps;
+
+	size_t mz_size = min_size;
+	if (mz_size < block_size)
+		mz_size = block_size;
+
+	char mz_name[RTE_MEMZONE_NAMESIZE];
+	snprintf(mz_name, sizeof(mz_name), "MALLOC_S%u_HEAP_%u",
+		     numa_socket, heap->mz_count++);
+
+	/* try getting a block. if we fail and we don't need as big a block
+	 * as given in the config, we can shrink our request and try again
+	 */
+	do {
+		mz = rte_memzone_reserve(mz_name, mz_size, numa_socket,
+					 mz_flags);
+		if (mz == NULL)
+			mz_size /= 2;
+	} while (mz == NULL && mz_size > min_size);
+	if (mz == NULL)
+		return -1;
+
+	/* allocate the memory block headers, one at end, one at start */
+	struct malloc_elem *start_elem = (struct malloc_elem *)mz->addr;
+	struct malloc_elem *end_elem = RTE_PTR_ADD(mz->addr,
+			mz_size - MALLOC_ELEM_OVERHEAD);
+	end_elem = RTE_PTR_ALIGN_FLOOR(end_elem, RTE_CACHE_LINE_SIZE);
+
+	const unsigned elem_size = (uintptr_t)end_elem - (uintptr_t)start_elem;
+	malloc_elem_init(start_elem, heap, mz, elem_size);
+	malloc_elem_mkend(end_elem, start_elem);
+	malloc_elem_free_list_insert(start_elem);
+
+	/* increase heap total size by size of new memzone */
+	heap->total_size += mz_size - MALLOC_ELEM_OVERHEAD;
+	return 0;
+}
+
+/*
+ * Iterates through the freelist for a heap to find a free element
+ * which can store data of the required size and with the requested alignment.
+ * Returns null on failure, or pointer to element on success.
+ */
+static struct malloc_elem *
+find_suitable_element(struct malloc_heap *heap, size_t size, unsigned align)
+{
+	size_t idx;
+	struct malloc_elem *elem;
+
+	for (idx = malloc_elem_free_list_index(size);
+		idx < RTE_HEAP_NUM_FREELISTS; idx++)
+	{
+		for (elem = LIST_FIRST(&heap->free_head[idx]);
+			!!elem; elem = LIST_NEXT(elem, free_list))
+		{
+			if (malloc_elem_can_hold(elem, size, align))
+				return elem;
+		}
+	}
+	return NULL;
+}
+
+/*
+ * Main function called by malloc to allocate a block of memory from the
+ * heap. It locks the free list, scans it, and adds a new memzone if the
+ * scan fails. Once the new memzone is added, it re-scans and should return
+ * the new element after releasing the lock.
+ */
+void *
+malloc_heap_alloc(struct malloc_heap *heap,
+		const char *type __attribute__((unused)), size_t size, unsigned align)
+{
+	size = RTE_CACHE_LINE_ROUNDUP(size);
+	align = RTE_CACHE_LINE_ROUNDUP(align);
+	rte_spinlock_lock(&heap->lock);
+	struct malloc_elem *elem = find_suitable_element(heap, size, align);
+	if (elem == NULL){
+		if ((malloc_heap_add_memzone(heap, size, align)) == 0)
+			elem = find_suitable_element(heap, size, align);
+	}
+
+	if (elem != NULL){
+		elem = malloc_elem_alloc(elem, size, align);
+		/* increase heap's count of allocated elements */
+		heap->alloc_count++;
+	}
+	rte_spinlock_unlock(&heap->lock);
+	return elem == NULL ? NULL : (void *)(&elem[1]);
+
+}
+
+/*
+ * Function to retrieve data for heap on given socket
+ */
+int
+malloc_heap_get_stats(const struct malloc_heap *heap,
+		struct rte_malloc_socket_stats *socket_stats)
+{
+	size_t idx;
+	struct malloc_elem *elem;
+
+	/* Initialise variables for heap */
+	socket_stats->free_count = 0;
+	socket_stats->heap_freesz_bytes = 0;
+	socket_stats->greatest_free_size = 0;
+
+	/* Iterate through free list */
+	for (idx = 0; idx < RTE_HEAP_NUM_FREELISTS; idx++) {
+		for (elem = LIST_FIRST(&heap->free_head[idx]);
+			!!elem; elem = LIST_NEXT(elem, free_list))
+		{
+			socket_stats->free_count++;
+			socket_stats->heap_freesz_bytes += elem->size;
+			if (elem->size > socket_stats->greatest_free_size)
+				socket_stats->greatest_free_size = elem->size;
+		}
+	}
+	/* Get stats on overall heap and allocated memory on this heap */
+	socket_stats->heap_totalsz_bytes = heap->total_size;
+	socket_stats->heap_allocsz_bytes = (socket_stats->heap_totalsz_bytes -
+			socket_stats->heap_freesz_bytes);
+	socket_stats->alloc_count = heap->alloc_count;
+	return 0;
+}
+
diff --git a/lib/core/librte_malloc/malloc_heap.h b/lib/core/librte_malloc/malloc_heap.h
new file mode 100644
index 0000000..b4aec45
--- /dev/null
+++ b/lib/core/librte_malloc/malloc_heap.h
@@ -0,0 +1,65 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2010-2014 Intel Corporation. All rights reserved.
+ *   All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Intel Corporation nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef MALLOC_HEAP_H_
+#define MALLOC_HEAP_H_
+
+#include <rte_malloc.h>
+#include <rte_malloc_heap.h>
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+static inline unsigned
+malloc_get_numa_socket(void)
+{
+	return rte_socket_id();
+}
+
+void *
+malloc_heap_alloc(struct malloc_heap *heap, const char *type,
+		size_t size, unsigned align);
+
+int
+malloc_heap_get_stats(const struct malloc_heap *heap,
+		struct rte_malloc_socket_stats *socket_stats);
+
+int
+rte_eal_heap_memzone_init(void);
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif /* MALLOC_HEAP_H_ */
diff --git a/lib/core/librte_malloc/rte_malloc.c b/lib/core/librte_malloc/rte_malloc.c
new file mode 100644
index 0000000..b966fc7
--- /dev/null
+++ b/lib/core/librte_malloc/rte_malloc.c
@@ -0,0 +1,261 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2010-2014 Intel Corporation. All rights reserved.
+ *   All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Intel Corporation nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <stdint.h>
+#include <stddef.h>
+#include <stdio.h>
+#include <string.h>
+#include <sys/queue.h>
+
+#include <rte_memcpy.h>
+#include <rte_memory.h>
+#include <rte_memzone.h>
+#include <rte_tailq.h>
+#include <rte_eal.h>
+#include <rte_eal_memconfig.h>
+#include <rte_branch_prediction.h>
+#include <rte_debug.h>
+#include <rte_launch.h>
+#include <rte_per_lcore.h>
+#include <rte_lcore.h>
+#include <rte_common.h>
+#include <rte_spinlock.h>
+
+#include <rte_malloc.h>
+#include "malloc_elem.h"
+#include "malloc_heap.h"
+
+
+/* Free the memory space back to heap */
+void rte_free(void *addr)
+{
+	if (addr == NULL) return;
+	if (malloc_elem_free(malloc_elem_from_data(addr)) < 0)
+		rte_panic("Fatal error: Invalid memory\n");
+}
+
+/*
+ * Allocate memory on specified heap.
+ */
+void *
+rte_malloc_socket(const char *type, size_t size, unsigned align, int socket_arg)
+{
+	struct rte_mem_config *mcfg = rte_eal_get_configuration()->mem_config;
+	int socket, i;
+	void *ret;
+
+	/* return NULL if size is 0 or alignment is not power-of-2 */
+	if (size == 0 || !rte_is_power_of_2(align))
+		return NULL;
+
+	if (socket_arg == SOCKET_ID_ANY)
+		socket = malloc_get_numa_socket();
+	else
+		socket = socket_arg;
+
+	/* Check socket parameter */
+	if (socket >= RTE_MAX_NUMA_NODES)
+		return NULL;
+
+	ret = malloc_heap_alloc(&mcfg->malloc_heaps[socket], type,
+				size, align == 0 ? 1 : align);
+	if (ret != NULL || socket_arg != SOCKET_ID_ANY)
+		return ret;
+
+	/* try other heaps */
+	for (i = 0; i < RTE_MAX_NUMA_NODES; i++) {
+		/* we already tried this one */
+		if (i == socket)
+			continue;
+
+		ret = malloc_heap_alloc(&mcfg->malloc_heaps[i], type,
+					size, align == 0 ? 1 : align);
+		if (ret != NULL)
+			return ret;
+	}
+
+	return NULL;
+}
+
+/*
+ * Allocate memory on default heap.
+ */
+void *
+rte_malloc(const char *type, size_t size, unsigned align)
+{
+	return rte_malloc_socket(type, size, align, SOCKET_ID_ANY);
+}
+
+/*
+ * Allocate zero'd memory on specified heap.
+ */
+void *
+rte_zmalloc_socket(const char *type, size_t size, unsigned align, int socket)
+{
+	void *ptr = rte_malloc_socket(type, size, align, socket);
+
+	if (ptr != NULL)
+		memset(ptr, 0, size);
+	return ptr;
+}
+
+/*
+ * Allocate zero'd memory on default heap.
+ */
+void *
+rte_zmalloc(const char *type, size_t size, unsigned align)
+{
+	return rte_zmalloc_socket(type, size, align, SOCKET_ID_ANY);
+}
+
+/*
+ * Allocate zero'd memory on specified heap.
+ */
+void *
+rte_calloc_socket(const char *type, size_t num, size_t size, unsigned align, int socket)
+{
+	return rte_zmalloc_socket(type, num * size, align, socket);
+}
+
+/*
+ * Allocate zero'd memory on default heap.
+ */
+void *
+rte_calloc(const char *type, size_t num, size_t size, unsigned align)
+{
+	return rte_zmalloc(type, num * size, align);
+}
+
+/*
+ * Resize allocated memory.
+ */
+void *
+rte_realloc(void *ptr, size_t size, unsigned align)
+{
+	if (ptr == NULL)
+		return rte_malloc(NULL, size, align);
+
+	struct malloc_elem *elem = malloc_elem_from_data(ptr);
+	if (elem == NULL)
+		rte_panic("Fatal error: memory corruption detected\n");
+
+	size = RTE_CACHE_LINE_ROUNDUP(size), align = RTE_CACHE_LINE_ROUNDUP(align);
+	/* check alignment matches first, and if ok, see if we can resize block */
+	if (RTE_PTR_ALIGN(ptr,align) == ptr &&
+			malloc_elem_resize(elem, size) == 0)
+		return ptr;
+
+	/* either alignment is off, or we have no room to expand,
+	 * so move data. */
+	void *new_ptr = rte_malloc(NULL, size, align);
+	if (new_ptr == NULL)
+		return NULL;
+	const unsigned old_size = elem->size - MALLOC_ELEM_OVERHEAD;
+	rte_memcpy(new_ptr, ptr, old_size < size ? old_size : size);
+	rte_free(ptr);
+
+	return new_ptr;
+}
+
+int
+rte_malloc_validate(const void *ptr, size_t *size)
+{
+	const struct malloc_elem *elem = malloc_elem_from_data(ptr);
+	if (!malloc_elem_cookies_ok(elem))
+		return -1;
+	if (size != NULL)
+		*size = elem->size - elem->pad - MALLOC_ELEM_OVERHEAD;
+	return 0;
+}
+
+/*
+ * Function to retrieve data for heap on given socket
+ */
+int
+rte_malloc_get_socket_stats(int socket,
+		struct rte_malloc_socket_stats *socket_stats)
+{
+	struct rte_mem_config *mcfg = rte_eal_get_configuration()->mem_config;
+
+	if (socket >= RTE_MAX_NUMA_NODES || socket < 0)
+		return -1;
+
+	return malloc_heap_get_stats(&mcfg->malloc_heaps[socket], socket_stats);
+}
+
+/*
+ * Print stats on memory type. If type is NULL, info on all types is printed
+ */
+void
+rte_malloc_dump_stats(FILE *f, __rte_unused const char *type)
+{
+	unsigned int socket;
+	struct rte_malloc_socket_stats sock_stats;
+	/* Iterate through all initialised heaps */
+	for (socket=0; socket< RTE_MAX_NUMA_NODES; socket++) {
+		if ((rte_malloc_get_socket_stats(socket, &sock_stats) < 0))
+			continue;
+
+		fprintf(f, "Socket:%u\n", socket);
+		fprintf(f, "\tHeap_size:%zu,\n", sock_stats.heap_totalsz_bytes);
+		fprintf(f, "\tFree_size:%zu,\n", sock_stats.heap_freesz_bytes);
+		fprintf(f, "\tAlloc_size:%zu,\n", sock_stats.heap_allocsz_bytes);
+		fprintf(f, "\tGreatest_free_size:%zu,\n",
+				sock_stats.greatest_free_size);
+		fprintf(f, "\tAlloc_count:%u,\n",sock_stats.alloc_count);
+		fprintf(f, "\tFree_count:%u,\n", sock_stats.free_count);
+	}
+	return;
+}
+
+/*
+ * TODO: Set limit to memory that can be allocated to memory type
+ */
+int
+rte_malloc_set_limit(__rte_unused const char *type,
+		__rte_unused size_t max)
+{
+	return 0;
+}
+
+/*
+ * Return the physical address of a virtual address obtained through rte_malloc
+ */
+phys_addr_t
+rte_malloc_virt2phy(const void *addr)
+{
+	const struct malloc_elem *elem = malloc_elem_from_data(addr);
+	if (elem == NULL)
+		return 0;
+	return elem->mz->phys_addr + ((uintptr_t)addr - (uintptr_t)elem->mz->addr);
+}
diff --git a/lib/core/librte_malloc/rte_malloc.h b/lib/core/librte_malloc/rte_malloc.h
new file mode 100644
index 0000000..74bb78c
--- /dev/null
+++ b/lib/core/librte_malloc/rte_malloc.h
@@ -0,0 +1,342 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2010-2014 Intel Corporation. All rights reserved.
+ *   All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Intel Corporation nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef _RTE_MALLOC_H_
+#define _RTE_MALLOC_H_
+
+/**
+ * @file
+ * RTE Malloc. This library provides methods for dynamically allocating memory
+ * from hugepages.
+ */
+
+#include <stdio.h>
+#include <stddef.h>
+#include <rte_memory.h>
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+/**
+ *  Structure to hold heap statistics obtained from the rte_malloc_get_socket_stats() function.
+ */
+struct rte_malloc_socket_stats {
+	size_t heap_totalsz_bytes; /**< Total bytes on heap */
+	size_t heap_freesz_bytes;  /**< Total free bytes on heap */
+	size_t greatest_free_size; /**< Size in bytes of largest free block */
+	unsigned free_count;       /**< Number of free elements on heap */
+	unsigned alloc_count;      /**< Number of allocated elements on heap */
+	size_t heap_allocsz_bytes; /**< Total allocated bytes on heap */
+};
+
+/**
+ * This function allocates memory from the huge-page area of memory. The memory
+ * is not cleared. In NUMA systems, the memory allocated resides on the same
+ * NUMA socket as the core that calls this function.
+ *
+ * @param type
+ *   A string identifying the type of allocated objects (useful for debug
+ *   purposes, such as identifying the cause of a memory leak). Can be NULL.
+ * @param size
+ *   Size (in bytes) to be allocated.
+ * @param align
+ *   If 0, the return is a pointer that is suitably aligned for any kind of
+ *   variable (in the same manner as malloc()).
+ *   Otherwise, the return is a pointer that is a multiple of *align*. In
+ *   this case, it must be a power of two. (Minimum alignment is the
+ *   cacheline size, i.e. 64-bytes)
+ * @return
+ *   - NULL on error. Not enough memory, or invalid arguments (size is 0,
+ *     align is not a power of two).
+ *   - Otherwise, the pointer to the allocated object.
+ */
+void *
+rte_malloc(const char *type, size_t size, unsigned align);
+
+/**
+ * Allocate zero'ed memory from the heap.
+ *
+ * Equivalent to rte_malloc() except that the memory zone is
+ * initialised with zeros. In NUMA systems, the memory allocated resides on the
+ * same NUMA socket as the core that calls this function.
+ *
+ * @param type
+ *   A string identifying the type of allocated objects (useful for debug
+ *   purposes, such as identifying the cause of a memory leak). Can be NULL.
+ * @param size
+ *   Size (in bytes) to be allocated.
+ * @param align
+ *   If 0, the return is a pointer that is suitably aligned for any kind of
+ *   variable (in the same manner as malloc()).
+ *   Otherwise, the return is a pointer that is a multiple of *align*. In
+ *   this case, it must obviously be a power of two. (Minimum alignment is the
+ *   cacheline size, i.e. 64-bytes)
+ * @return
+ *   - NULL on error. Not enough memory, or invalid arguments (size is 0,
+ *     align is not a power of two).
+ *   - Otherwise, the pointer to the allocated object.
+ */
+void *
+rte_zmalloc(const char *type, size_t size, unsigned align);
+
+/**
+ * Replacement function for calloc(), using huge-page memory. Memory area is
+ * initialised with zeros. In NUMA systems, the memory allocated resides on the
+ * same NUMA socket as the core that calls this function.
+ *
+ * @param type
+ *   A string identifying the type of allocated objects (useful for debug
+ *   purposes, such as identifying the cause of a memory leak). Can be NULL.
+ * @param num
+ *   Number of elements to be allocated.
+ * @param size
+ *   Size (in bytes) of a single element.
+ * @param align
+ *   If 0, the return is a pointer that is suitably aligned for any kind of
+ *   variable (in the same manner as malloc()).
+ *   Otherwise, the return is a pointer that is a multiple of *align*. In
+ *   this case, it must obviously be a power of two. (Minimum alignment is the
+ *   cacheline size, i.e. 64-bytes)
+ * @return
+ *   - NULL on error. Not enough memory, or invalid arguments (size is 0,
+ *     align is not a power of two).
+ *   - Otherwise, the pointer to the allocated object.
+ */
+void *
+rte_calloc(const char *type, size_t num, size_t size, unsigned align);
+
+/**
+ * Replacement function for realloc(), using huge-page memory. Reserved area
+ * memory is resized, preserving contents. In NUMA systems, the new area
+ * resides on the same NUMA socket as the old area.
+ *
+ * @param ptr
+ *   Pointer to already allocated memory
+ * @param size
+ *   Size (in bytes) of new area. If this is 0, memory is freed.
+ * @param align
+ *   If 0, the return is a pointer that is suitably aligned for any kind of
+ *   variable (in the same manner as malloc()).
+ *   Otherwise, the return is a pointer that is a multiple of *align*. In
+ *   this case, it must obviously be a power of two. (Minimum alignment is the
+ *   cacheline size, i.e. 64-bytes)
+ * @return
+ *   - NULL on error. Not enough memory, or invalid arguments (size is 0,
+ *     align is not a power of two).
+ *   - Otherwise, the pointer to the reallocated memory.
+ */
+void *
+rte_realloc(void *ptr, size_t size, unsigned align);
+
+/**
+ * This function allocates memory from the huge-page area of memory. The memory
+ * is not cleared.
+ *
+ * @param type
+ *   A string identifying the type of allocated objects (useful for debug
+ *   purposes, such as identifying the cause of a memory leak). Can be NULL.
+ * @param size
+ *   Size (in bytes) to be allocated.
+ * @param align
+ *   If 0, the return is a pointer that is suitably aligned for any kind of
+ *   variable (in the same manner as malloc()).
+ *   Otherwise, the return is a pointer that is a multiple of *align*. In
+ *   this case, it must be a power of two. (Minimum alignment is the
+ *   cacheline size, i.e. 64-bytes)
+ * @param socket
+ *   NUMA socket to allocate memory on. If SOCKET_ID_ANY is used, this function
+ *   will behave the same as rte_malloc().
+ * @return
+ *   - NULL on error. Not enough memory, or invalid arguments (size is 0,
+ *     align is not a power of two).
+ *   - Otherwise, the pointer to the allocated object.
+ */
+void *
+rte_malloc_socket(const char *type, size_t size, unsigned align, int socket);
+
+/**
+ * Allocate zero'ed memory from the heap.
+ *
+ * Equivalent to rte_malloc() except that the memory zone is
+ * initialised with zeros.
+ *
+ * @param type
+ *   A string identifying the type of allocated objects (useful for debug
+ *   purposes, such as identifying the cause of a memory leak). Can be NULL.
+ * @param size
+ *   Size (in bytes) to be allocated.
+ * @param align
+ *   If 0, the return is a pointer that is suitably aligned for any kind of
+ *   variable (in the same manner as malloc()).
+ *   Otherwise, the return is a pointer that is a multiple of *align*. In
+ *   this case, it must obviously be a power of two. (Minimum alignment is the
+ *   cacheline size, i.e. 64-bytes)
+ * @param socket
+ *   NUMA socket to allocate memory on. If SOCKET_ID_ANY is used, this function
+ *   will behave the same as rte_zmalloc().
+ * @return
+ *   - NULL on error. Not enough memory, or invalid arguments (size is 0,
+ *     align is not a power of two).
+ *   - Otherwise, the pointer to the allocated object.
+ */
+void *
+rte_zmalloc_socket(const char *type, size_t size, unsigned align, int socket);
+
+/**
+ * Replacement function for calloc(), using huge-page memory. Memory area is
+ * initialised with zeros.
+ *
+ * @param type
+ *   A string identifying the type of allocated objects (useful for debug
+ *   purposes, such as identifying the cause of a memory leak). Can be NULL.
+ * @param num
+ *   Number of elements to be allocated.
+ * @param size
+ *   Size (in bytes) of a single element.
+ * @param align
+ *   If 0, the return is a pointer that is suitably aligned for any kind of
+ *   variable (in the same manner as malloc()).
+ *   Otherwise, the return is a pointer that is a multiple of *align*. In
+ *   this case, it must obviously be a power of two. (Minimum alignment is the
+ *   cacheline size, i.e. 64-bytes)
+ * @param socket
+ *   NUMA socket to allocate memory on. If SOCKET_ID_ANY is used, this function
+ *   will behave the same as rte_calloc().
+ * @return
+ *   - NULL on error. Not enough memory, or invalid arguments (size is 0,
+ *     align is not a power of two).
+ *   - Otherwise, the pointer to the allocated object.
+ */
+void *
+rte_calloc_socket(const char *type, size_t num, size_t size, unsigned align, int socket);
+
+/**
+ * Frees the memory space pointed to by the provided pointer.
+ *
+ * This pointer must have been returned by a previous call to
+ * rte_malloc(), rte_zmalloc(), rte_calloc() or rte_realloc(). The behaviour of
+ * rte_free() is undefined if the pointer does not match this requirement.
+ *
+ * If the pointer is NULL, the function does nothing.
+ *
+ * @param ptr
+ *   The pointer to memory to be freed.
+ */
+void
+rte_free(void *ptr);
+
+/**
+ * If malloc debug is enabled, check a memory block for header
+ * and trailer markers to indicate that all is well with the block.
+ * If size is non-null, also return the size of the block.
+ *
+ * @param ptr
+ *   pointer to the start of a data block, must have been returned
+ *   by a previous call to rte_malloc(), rte_zmalloc(), rte_calloc()
+ *   or rte_realloc()
+ * @param size
+ *   if non-null, and memory block pointer is valid, returns the size
+ *   of the memory block
+ * @return
+ *   -1 on error, invalid pointer passed or header and trailer markers
+ *   are missing or corrupted
+ *   0 on success
+ */
+int
+rte_malloc_validate(const void *ptr, size_t *size);
+
+/**
+ * Get heap statistics for the specified heap.
+ *
+ * @param socket
+ *   An unsigned integer specifying the socket to get heap statistics for
+ * @param socket_stats
+ *   A structure which provides memory to store statistics
+ * @return
+ *   -1 on error (invalid socket)
+ *   0 on success, with statistics stored in *socket_stats*
+ */
+int
+rte_malloc_get_socket_stats(int socket,
+		struct rte_malloc_socket_stats *socket_stats);
+
+/**
+ * Dump statistics.
+ *
+ * Dump for the specified type to the console. If the type argument is
+ * NULL, all memory types will be dumped.
+ *
+ * @param f
+ *   A pointer to a file for output
+ * @param type
+ *   A string identifying the type of objects to dump, or NULL
+ *   to dump all objects.
+ */
+void
+rte_malloc_dump_stats(FILE *f, const char *type);
+
+/**
+ * Set the maximum amount of allocated memory for this type.
+ *
+ * This is not yet implemented
+ *
+ * @param type
+ *   A string identifying the type of allocated objects.
+ * @param max
+ *   The maximum amount of allocated bytes for this type.
+ * @return
+ *   - 0: Success.
+ *   - (-1): Error.
+ */
+int
+rte_malloc_set_limit(const char *type, size_t max);
+
+/**
+ * Return the physical address of a virtual address obtained through
+ * rte_malloc
+ *
+ * @param addr
+ *   Address obtained from a previous rte_malloc call
+ * @return
+ *   NULL on error
+ *   otherwise return physical address of the buffer
+ */
+phys_addr_t
+rte_malloc_virt2phy(const void *addr);
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif /* _RTE_MALLOC_H_ */
diff --git a/lib/librte_malloc/Makefile b/lib/librte_malloc/Makefile
deleted file mode 100644
index ba87e34..0000000
--- a/lib/librte_malloc/Makefile
+++ /dev/null
@@ -1,48 +0,0 @@
-#   BSD LICENSE
-#
-#   Copyright(c) 2010-2014 Intel Corporation. All rights reserved.
-#   All rights reserved.
-#
-#   Redistribution and use in source and binary forms, with or without
-#   modification, are permitted provided that the following conditions
-#   are met:
-#
-#     * Redistributions of source code must retain the above copyright
-#       notice, this list of conditions and the following disclaimer.
-#     * Redistributions in binary form must reproduce the above copyright
-#       notice, this list of conditions and the following disclaimer in
-#       the documentation and/or other materials provided with the
-#       distribution.
-#     * Neither the name of Intel Corporation nor the names of its
-#       contributors may be used to endorse or promote products derived
-#       from this software without specific prior written permission.
-#
-#   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
-#   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
-#   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
-#   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
-#   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
-#   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
-#   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
-#   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
-#   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
-#   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
-#   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
-
-include $(RTE_SDK)/mk/rte.vars.mk
-
-# library name
-LIB = librte_malloc.a
-
-CFLAGS += $(WERROR_FLAGS) -I$(SRCDIR) -O3
-
-# all source are stored in SRCS-y
-SRCS-$(CONFIG_RTE_LIBRTE_MALLOC) := rte_malloc.c malloc_elem.c malloc_heap.c
-
-# install includes
-SYMLINK-$(CONFIG_RTE_LIBRTE_MALLOC)-include := rte_malloc.h
-
-# this lib needs eal
-DEPDIRS-$(CONFIG_RTE_LIBRTE_MALLOC) += lib/librte_eal
-
-include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/lib/librte_malloc/malloc_elem.c b/lib/librte_malloc/malloc_elem.c
deleted file mode 100644
index ef26e47..0000000
--- a/lib/librte_malloc/malloc_elem.c
+++ /dev/null
@@ -1,321 +0,0 @@
-/*-
- *   BSD LICENSE
- *
- *   Copyright(c) 2010-2014 Intel Corporation. All rights reserved.
- *   All rights reserved.
- *
- *   Redistribution and use in source and binary forms, with or without
- *   modification, are permitted provided that the following conditions
- *   are met:
- *
- *     * Redistributions of source code must retain the above copyright
- *       notice, this list of conditions and the following disclaimer.
- *     * Redistributions in binary form must reproduce the above copyright
- *       notice, this list of conditions and the following disclaimer in
- *       the documentation and/or other materials provided with the
- *       distribution.
- *     * Neither the name of Intel Corporation nor the names of its
- *       contributors may be used to endorse or promote products derived
- *       from this software without specific prior written permission.
- *
- *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
- *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
- *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
- *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
- *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
- *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
- *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
- *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
- *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
- *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
- *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
- */
-#include <stdint.h>
-#include <stddef.h>
-#include <stdio.h>
-#include <string.h>
-#include <sys/queue.h>
-
-#include <rte_memory.h>
-#include <rte_memzone.h>
-#include <rte_tailq.h>
-#include <rte_eal.h>
-#include <rte_launch.h>
-#include <rte_per_lcore.h>
-#include <rte_lcore.h>
-#include <rte_debug.h>
-#include <rte_common.h>
-#include <rte_spinlock.h>
-
-#include "malloc_elem.h"
-#include "malloc_heap.h"
-
-#define MIN_DATA_SIZE (RTE_CACHE_LINE_SIZE)
-
-/*
- * initialise a general malloc_elem header structure
- */
-void
-malloc_elem_init(struct malloc_elem *elem,
-		struct malloc_heap *heap, const struct rte_memzone *mz, size_t size)
-{
-	elem->heap = heap;
-	elem->mz = mz;
-	elem->prev = NULL;
-	memset(&elem->free_list, 0, sizeof(elem->free_list));
-	elem->state = ELEM_FREE;
-	elem->size = size;
-	elem->pad = 0;
-	set_header(elem);
-	set_trailer(elem);
-}
-
-/*
- * initialise a dummy malloc_elem header for the end-of-memzone marker
- */
-void
-malloc_elem_mkend(struct malloc_elem *elem, struct malloc_elem *prev)
-{
-	malloc_elem_init(elem, prev->heap, prev->mz, 0);
-	elem->prev = prev;
-	elem->state = ELEM_BUSY; /* mark busy so its never merged */
-}
-
-/*
- * calculate the starting point of where data of the requested size
- * and alignment would fit in the current element. If the data doesn't
- * fit, return NULL.
- */
-static void *
-elem_start_pt(struct malloc_elem *elem, size_t size, unsigned align)
-{
-	const uintptr_t end_pt = (uintptr_t)elem +
-			elem->size - MALLOC_ELEM_TRAILER_LEN;
-	const uintptr_t new_data_start = rte_align_floor_int((end_pt - size),align);
-	const uintptr_t new_elem_start = new_data_start - MALLOC_ELEM_HEADER_LEN;
-
-	/* if the new start point is before the exist start, it won't fit */
-	return (new_elem_start < (uintptr_t)elem) ? NULL : (void *)new_elem_start;
-}
-
-/*
- * use elem_start_pt to determine if we get meet the size and
- * alignment request from the current element
- */
-int
-malloc_elem_can_hold(struct malloc_elem *elem, size_t size, unsigned align)
-{
-	return elem_start_pt(elem, size, align) != NULL;
-}
-
-/*
- * split an existing element into two smaller elements at the given
- * split_pt parameter.
- */
-static void
-split_elem(struct malloc_elem *elem, struct malloc_elem *split_pt)
-{
-	struct malloc_elem *next_elem = RTE_PTR_ADD(elem, elem->size);
-	const unsigned old_elem_size = (uintptr_t)split_pt - (uintptr_t)elem;
-	const unsigned new_elem_size = elem->size - old_elem_size;
-
-	malloc_elem_init(split_pt, elem->heap, elem->mz, new_elem_size);
-	split_pt->prev = elem;
-	next_elem->prev = split_pt;
-	elem->size = old_elem_size;
-	set_trailer(elem);
-}
-
-/*
- * Given an element size, compute its freelist index.
- * We free an element into the freelist containing similarly-sized elements.
- * We try to allocate elements starting with the freelist containing
- * similarly-sized elements, and if necessary, we search freelists
- * containing larger elements.
- *
- * Example element size ranges for a heap with five free lists:
- *   heap->free_head[0] - (0   , 2^8]
- *   heap->free_head[1] - (2^8 , 2^10]
- *   heap->free_head[2] - (2^10 ,2^12]
- *   heap->free_head[3] - (2^12, 2^14]
- *   heap->free_head[4] - (2^14, MAX_SIZE]
- */
-size_t
-malloc_elem_free_list_index(size_t size)
-{
-#define MALLOC_MINSIZE_LOG2   8
-#define MALLOC_LOG2_INCREMENT 2
-
-	size_t log2;
-	size_t index;
-
-	if (size <= (1UL << MALLOC_MINSIZE_LOG2))
-		return 0;
-
-	/* Find next power of 2 >= size. */
-	log2 = sizeof(size) * 8 - __builtin_clzl(size-1);
-
-	/* Compute freelist index, based on log2(size). */
-	index = (log2 - MALLOC_MINSIZE_LOG2 + MALLOC_LOG2_INCREMENT - 1) /
-	        MALLOC_LOG2_INCREMENT;
-
-	return (index <= RTE_HEAP_NUM_FREELISTS-1?
-	        index: RTE_HEAP_NUM_FREELISTS-1);
-}
-
-/*
- * Add the specified element to its heap's free list.
- */
-void
-malloc_elem_free_list_insert(struct malloc_elem *elem)
-{
-	size_t idx = malloc_elem_free_list_index(elem->size - MALLOC_ELEM_HEADER_LEN);
-
-	elem->state = ELEM_FREE;
-	LIST_INSERT_HEAD(&elem->heap->free_head[idx], elem, free_list);
-}
-
-/*
- * Remove the specified element from its heap's free list.
- */
-static void
-elem_free_list_remove(struct malloc_elem *elem)
-{
-	LIST_REMOVE(elem, free_list);
-}
-
-/*
- * reserve a block of data in an existing malloc_elem. If the malloc_elem
- * is much larger than the data block requested, we split the element in two.
- * This function is only called from malloc_heap_alloc so parameter checking
- * is not done here, as it's done there previously.
- */
-struct malloc_elem *
-malloc_elem_alloc(struct malloc_elem *elem, size_t size, unsigned align)
-{
-	struct malloc_elem *new_elem = elem_start_pt(elem, size, align);
-	const unsigned old_elem_size = (uintptr_t)new_elem - (uintptr_t)elem;
-
-	if (old_elem_size < MALLOC_ELEM_OVERHEAD + MIN_DATA_SIZE){
-		/* don't split it, pad the element instead */
-		elem->state = ELEM_BUSY;
-		elem->pad = old_elem_size;
-
-		/* put a dummy header in padding, to point to real element header */
-		if (elem->pad > 0){ /* pad will be at least 64-bytes, as everything
-		                     * is cache-line aligned */
-			new_elem->pad = elem->pad;
-			new_elem->state = ELEM_PAD;
-			new_elem->size = elem->size - elem->pad;
-			set_header(new_elem);
-		}
-		/* remove element from free list */
-		elem_free_list_remove(elem);
-
-		return new_elem;
-	}
-
-	/* we are going to split the element in two. The original element
-	 * remains free, and the new element is the one allocated.
-	 * Re-insert original element, in case its new size makes it
-	 * belong on a different list.
-	 */
-	elem_free_list_remove(elem);
-	split_elem(elem, new_elem);
-	new_elem->state = ELEM_BUSY;
-	malloc_elem_free_list_insert(elem);
-
-	return new_elem;
-}
-
-/*
- * joing two struct malloc_elem together. elem1 and elem2 must
- * be contiguous in memory.
- */
-static inline void
-join_elem(struct malloc_elem *elem1, struct malloc_elem *elem2)
-{
-	struct malloc_elem *next = RTE_PTR_ADD(elem2, elem2->size);
-	elem1->size += elem2->size;
-	next->prev = elem1;
-}
-
-/*
- * free a malloc_elem block by adding it to the free list. If the
- * blocks either immediately before or immediately after newly freed block
- * are also free, the blocks are merged together.
- */
-int
-malloc_elem_free(struct malloc_elem *elem)
-{
-	if (!malloc_elem_cookies_ok(elem) || elem->state != ELEM_BUSY)
-		return -1;
-
-	rte_spinlock_lock(&(elem->heap->lock));
-	struct malloc_elem *next = RTE_PTR_ADD(elem, elem->size);
-	if (next->state == ELEM_FREE){
-		/* remove from free list, join to this one */
-		elem_free_list_remove(next);
-		join_elem(elem, next);
-	}
-
-	/* check if previous element is free, if so join with it and return,
-	 * need to re-insert in free list, as that element's size is changing
-	 */
-	if (elem->prev != NULL && elem->prev->state == ELEM_FREE) {
-		elem_free_list_remove(elem->prev);
-		join_elem(elem->prev, elem);
-		malloc_elem_free_list_insert(elem->prev);
-	}
-	/* otherwise add ourselves to the free list */
-	else {
-		malloc_elem_free_list_insert(elem);
-		elem->pad = 0;
-	}
-	/* decrease heap's count of allocated elements */
-	elem->heap->alloc_count--;
-	rte_spinlock_unlock(&(elem->heap->lock));
-
-	return 0;
-}
-
-/*
- * attempt to resize a malloc_elem by expanding into any free space
- * immediately after it in memory.
- */
-int
-malloc_elem_resize(struct malloc_elem *elem, size_t size)
-{
-	const size_t new_size = size + MALLOC_ELEM_OVERHEAD;
-	/* if we request a smaller size, then always return ok */
-	const size_t current_size = elem->size - elem->pad;
-	if (current_size >= new_size)
-		return 0;
-
-	struct malloc_elem *next = RTE_PTR_ADD(elem, elem->size);
-	rte_spinlock_lock(&elem->heap->lock);
-	if (next ->state != ELEM_FREE)
-		goto err_return;
-	if (current_size + next->size < new_size)
-		goto err_return;
-
-	/* we now know the element fits, so remove from free list,
-	 * join the two
-	 */
-	elem_free_list_remove(next);
-	join_elem(elem, next);
-
-	if (elem->size - new_size >= MIN_DATA_SIZE + MALLOC_ELEM_OVERHEAD){
-		/* now we have a big block together. Lets cut it down a bit, by splitting */
-		struct malloc_elem *split_pt = RTE_PTR_ADD(elem, new_size);
-		split_pt = RTE_PTR_ALIGN_CEIL(split_pt, RTE_CACHE_LINE_SIZE);
-		split_elem(elem, split_pt);
-		malloc_elem_free_list_insert(split_pt);
-	}
-	rte_spinlock_unlock(&elem->heap->lock);
-	return 0;
-
-err_return:
-	rte_spinlock_unlock(&elem->heap->lock);
-	return -1;
-}
diff --git a/lib/librte_malloc/malloc_elem.h b/lib/librte_malloc/malloc_elem.h
deleted file mode 100644
index 9790b1a..0000000
--- a/lib/librte_malloc/malloc_elem.h
+++ /dev/null
@@ -1,190 +0,0 @@
-/*-
- *   BSD LICENSE
- *
- *   Copyright(c) 2010-2014 Intel Corporation. All rights reserved.
- *   All rights reserved.
- *
- *   Redistribution and use in source and binary forms, with or without
- *   modification, are permitted provided that the following conditions
- *   are met:
- *
- *     * Redistributions of source code must retain the above copyright
- *       notice, this list of conditions and the following disclaimer.
- *     * Redistributions in binary form must reproduce the above copyright
- *       notice, this list of conditions and the following disclaimer in
- *       the documentation and/or other materials provided with the
- *       distribution.
- *     * Neither the name of Intel Corporation nor the names of its
- *       contributors may be used to endorse or promote products derived
- *       from this software without specific prior written permission.
- *
- *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
- *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
- *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
- *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
- *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
- *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
- *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
- *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
- *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
- *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
- *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
- */
-
-#ifndef MALLOC_ELEM_H_
-#define MALLOC_ELEM_H_
-
-#include <rte_memory.h>
-
-/* dummy definition of struct so we can use pointers to it in malloc_elem struct */
-struct malloc_heap;
-
-enum elem_state {
-	ELEM_FREE = 0,
-	ELEM_BUSY,
-	ELEM_PAD  /* element is a padding-only header */
-};
-
-struct malloc_elem {
-	struct malloc_heap *heap;
-	struct malloc_elem *volatile prev;      /* points to prev elem in memzone */
-	LIST_ENTRY(malloc_elem) free_list;      /* list of free elements in heap */
-	const struct rte_memzone *mz;
-	volatile enum elem_state state;
-	uint32_t pad;
-	size_t size;
-#ifdef RTE_LIBRTE_MALLOC_DEBUG
-	uint64_t header_cookie;         /* Cookie marking start of data */
-	                                /* trailer cookie at start + size */
-#endif
-} __rte_cache_aligned;
-
-#ifndef RTE_LIBRTE_MALLOC_DEBUG
-static const unsigned MALLOC_ELEM_TRAILER_LEN = 0;
-
-/* dummy function - just check if pointer is non-null */
-static inline int
-malloc_elem_cookies_ok(const struct malloc_elem *elem){ return elem != NULL; }
-
-/* dummy function - no header if malloc_debug is not enabled */
-static inline void
-set_header(struct malloc_elem *elem __rte_unused){ }
-
-/* dummy function - no trailer if malloc_debug is not enabled */
-static inline void
-set_trailer(struct malloc_elem *elem __rte_unused){ }
-
-
-#else
-static const unsigned MALLOC_ELEM_TRAILER_LEN = RTE_CACHE_LINE_SIZE;
-
-#define MALLOC_HEADER_COOKIE   0xbadbadbadadd2e55ULL /**< Header cookie. */
-#define MALLOC_TRAILER_COOKIE  0xadd2e55badbadbadULL /**< Trailer cookie.*/
-
-/* define macros to make referencing the header and trailer cookies easier */
-#define MALLOC_ELEM_TRAILER(elem) (*((uint64_t*)RTE_PTR_ADD(elem, \
-		elem->size - MALLOC_ELEM_TRAILER_LEN)))
-#define MALLOC_ELEM_HEADER(elem) (elem->header_cookie)
-
-static inline void
-set_header(struct malloc_elem *elem)
-{
-	if (elem != NULL)
-		MALLOC_ELEM_HEADER(elem) = MALLOC_HEADER_COOKIE;
-}
-
-static inline void
-set_trailer(struct malloc_elem *elem)
-{
-	if (elem != NULL)
-		MALLOC_ELEM_TRAILER(elem) = MALLOC_TRAILER_COOKIE;
-}
-
-/* check that the header and trailer cookies are set correctly */
-static inline int
-malloc_elem_cookies_ok(const struct malloc_elem *elem)
-{
-	return (elem != NULL &&
-			MALLOC_ELEM_HEADER(elem) == MALLOC_HEADER_COOKIE &&
-			MALLOC_ELEM_TRAILER(elem) == MALLOC_TRAILER_COOKIE);
-}
-
-#endif
-
-static const unsigned MALLOC_ELEM_HEADER_LEN = sizeof(struct malloc_elem);
-#define MALLOC_ELEM_OVERHEAD (MALLOC_ELEM_HEADER_LEN + MALLOC_ELEM_TRAILER_LEN)
-
-/*
- * Given a pointer to the start of a memory block returned by malloc, get
- * the actual malloc_elem header for that block.
- */
-static inline struct malloc_elem *
-malloc_elem_from_data(const void *data)
-{
-	if (data == NULL)
-		return NULL;
-
-	struct malloc_elem *elem = RTE_PTR_SUB(data, MALLOC_ELEM_HEADER_LEN);
-	if (!malloc_elem_cookies_ok(elem))
-		return NULL;
-	return elem->state != ELEM_PAD ? elem:  RTE_PTR_SUB(elem, elem->pad);
-}
-
-/*
- * initialise a malloc_elem header
- */
-void
-malloc_elem_init(struct malloc_elem *elem,
-		struct malloc_heap *heap,
-		const struct rte_memzone *mz,
-		size_t size);
-
-/*
- * initialise a dummy malloc_elem header for the end-of-memzone marker
- */
-void
-malloc_elem_mkend(struct malloc_elem *elem,
-		struct malloc_elem *prev_free);
-
-/*
- * return true if the current malloc_elem can hold a block of data
- * of the requested size and with the requested alignment
- */
-int
-malloc_elem_can_hold(struct malloc_elem *elem, size_t size, unsigned align);
-
-/*
- * reserve a block of data in an existing malloc_elem. If the malloc_elem
- * is much larger than the data block requested, we split the element in two.
- */
-struct malloc_elem *
-malloc_elem_alloc(struct malloc_elem *elem, size_t size, unsigned align);
-
-/*
- * free a malloc_elem block by adding it to the free list. If the
- * blocks either immediately before or immediately after newly freed block
- * are also free, the blocks are merged together.
- */
-int
-malloc_elem_free(struct malloc_elem *elem);
-
-/*
- * attempt to resize a malloc_elem by expanding into any free space
- * immediately after it in memory.
- */
-int
-malloc_elem_resize(struct malloc_elem *elem, size_t size);
-
-/*
- * Given an element size, compute its freelist index.
- */
-size_t
-malloc_elem_free_list_index(size_t size);
-
-/*
- * Add element to its heap's free list.
- */
-void
-malloc_elem_free_list_insert(struct malloc_elem *elem);
-
-#endif /* MALLOC_ELEM_H_ */
diff --git a/lib/librte_malloc/malloc_heap.c b/lib/librte_malloc/malloc_heap.c
deleted file mode 100644
index 95fcfec..0000000
--- a/lib/librte_malloc/malloc_heap.c
+++ /dev/null
@@ -1,210 +0,0 @@
-/*-
- *   BSD LICENSE
- *
- *   Copyright(c) 2010-2014 Intel Corporation. All rights reserved.
- *   All rights reserved.
- *
- *   Redistribution and use in source and binary forms, with or without
- *   modification, are permitted provided that the following conditions
- *   are met:
- *
- *     * Redistributions of source code must retain the above copyright
- *       notice, this list of conditions and the following disclaimer.
- *     * Redistributions in binary form must reproduce the above copyright
- *       notice, this list of conditions and the following disclaimer in
- *       the documentation and/or other materials provided with the
- *       distribution.
- *     * Neither the name of Intel Corporation nor the names of its
- *       contributors may be used to endorse or promote products derived
- *       from this software without specific prior written permission.
- *
- *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
- *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
- *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
- *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
- *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
- *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
- *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
- *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
- *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
- *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
- *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
- */
-#include <stdint.h>
-#include <stddef.h>
-#include <stdlib.h>
-#include <stdio.h>
-#include <stdarg.h>
-#include <errno.h>
-#include <sys/queue.h>
-
-#include <rte_memory.h>
-#include <rte_memzone.h>
-#include <rte_tailq.h>
-#include <rte_eal.h>
-#include <rte_eal_memconfig.h>
-#include <rte_launch.h>
-#include <rte_per_lcore.h>
-#include <rte_lcore.h>
-#include <rte_common.h>
-#include <rte_string_fns.h>
-#include <rte_spinlock.h>
-#include <rte_memcpy.h>
-#include <rte_atomic.h>
-
-#include "malloc_elem.h"
-#include "malloc_heap.h"
-
-/* since the memzone size starts with a digit, it will appear unquoted in
- * rte_config.h, so quote it so it can be passed to rte_str_to_size */
-#define MALLOC_MEMZONE_SIZE RTE_STR(RTE_MALLOC_MEMZONE_SIZE)
-
-/*
- * returns the configuration setting for the memzone size as a size_t value
- */
-static inline size_t
-get_malloc_memzone_size(void)
-{
-	return rte_str_to_size(MALLOC_MEMZONE_SIZE);
-}
-
-/*
- * reserve an extra memory zone and make it available for use by a particular
- * heap. This reserves the zone and sets a dummy malloc_elem header at the end
- * to prevent overflow. The rest of the zone is added to free list as a single
- * large free block
- */
-static int
-malloc_heap_add_memzone(struct malloc_heap *heap, size_t size, unsigned align)
-{
-	const unsigned mz_flags = 0;
-	const size_t block_size = get_malloc_memzone_size();
-	/* ensure the data we want to allocate will fit in the memzone */
-	const size_t min_size = size + align + MALLOC_ELEM_OVERHEAD * 2;
-	const struct rte_memzone *mz = NULL;
-	struct rte_mem_config *mcfg = rte_eal_get_configuration()->mem_config;
-	unsigned numa_socket = heap - mcfg->malloc_heaps;
-
-	size_t mz_size = min_size;
-	if (mz_size < block_size)
-		mz_size = block_size;
-
-	char mz_name[RTE_MEMZONE_NAMESIZE];
-	snprintf(mz_name, sizeof(mz_name), "MALLOC_S%u_HEAP_%u",
-		     numa_socket, heap->mz_count++);
-
-	/* try getting a block. if we fail and we don't need as big a block
-	 * as given in the config, we can shrink our request and try again
-	 */
-	do {
-		mz = rte_memzone_reserve(mz_name, mz_size, numa_socket,
-					 mz_flags);
-		if (mz == NULL)
-			mz_size /= 2;
-	} while (mz == NULL && mz_size > min_size);
-	if (mz == NULL)
-		return -1;
-
-	/* allocate the memory block headers, one at end, one at start */
-	struct malloc_elem *start_elem = (struct malloc_elem *)mz->addr;
-	struct malloc_elem *end_elem = RTE_PTR_ADD(mz->addr,
-			mz_size - MALLOC_ELEM_OVERHEAD);
-	end_elem = RTE_PTR_ALIGN_FLOOR(end_elem, RTE_CACHE_LINE_SIZE);
-
-	const unsigned elem_size = (uintptr_t)end_elem - (uintptr_t)start_elem;
-	malloc_elem_init(start_elem, heap, mz, elem_size);
-	malloc_elem_mkend(end_elem, start_elem);
-	malloc_elem_free_list_insert(start_elem);
-
-	/* increase heap total size by size of new memzone */
-	heap->total_size+=mz_size - MALLOC_ELEM_OVERHEAD;
-	return 0;
-}
-
-/*
- * Iterates through the freelist for a heap to find a free element
- * which can store data of the required size and with the requested alignment.
- * Returns null on failure, or pointer to element on success.
- */
-static struct malloc_elem *
-find_suitable_element(struct malloc_heap *heap, size_t size, unsigned align)
-{
-	size_t idx;
-	struct malloc_elem *elem;
-
-	for (idx = malloc_elem_free_list_index(size);
-		idx < RTE_HEAP_NUM_FREELISTS; idx++)
-	{
-		for (elem = LIST_FIRST(&heap->free_head[idx]);
-			!!elem; elem = LIST_NEXT(elem, free_list))
-		{
-			if (malloc_elem_can_hold(elem, size, align))
-				return elem;
-		}
-	}
-	return NULL;
-}
-
-/*
- * Main function called by malloc to allocate a block of memory from the
- * heap. It locks the free list, scans it, and adds a new memzone if the
- * scan fails. Once the new memzone is added, it re-scans and should return
- * the new element after releasing the lock.
- */
-void *
-malloc_heap_alloc(struct malloc_heap *heap,
-		const char *type __attribute__((unused)), size_t size, unsigned align)
-{
-	size = RTE_CACHE_LINE_ROUNDUP(size);
-	align = RTE_CACHE_LINE_ROUNDUP(align);
-	rte_spinlock_lock(&heap->lock);
-	struct malloc_elem *elem = find_suitable_element(heap, size, align);
-	if (elem == NULL){
-		if ((malloc_heap_add_memzone(heap, size, align)) == 0)
-			elem = find_suitable_element(heap, size, align);
-	}
-
-	if (elem != NULL){
-		elem = malloc_elem_alloc(elem, size, align);
-		/* increase heap's count of allocated elements */
-		heap->alloc_count++;
-	}
-	rte_spinlock_unlock(&heap->lock);
-	return elem == NULL ? NULL : (void *)(&elem[1]);
-
-}
-
-/*
- * Function to retrieve data for heap on given socket
- */
-int
-malloc_heap_get_stats(const struct malloc_heap *heap,
-		struct rte_malloc_socket_stats *socket_stats)
-{
-	size_t idx;
-	struct malloc_elem *elem;
-
-	/* Initialise variables for heap */
-	socket_stats->free_count = 0;
-	socket_stats->heap_freesz_bytes = 0;
-	socket_stats->greatest_free_size = 0;
-
-	/* Iterate through free list */
-	for (idx = 0; idx < RTE_HEAP_NUM_FREELISTS; idx++) {
-		for (elem = LIST_FIRST(&heap->free_head[idx]);
-			!!elem; elem = LIST_NEXT(elem, free_list))
-		{
-			socket_stats->free_count++;
-			socket_stats->heap_freesz_bytes += elem->size;
-			if (elem->size > socket_stats->greatest_free_size)
-				socket_stats->greatest_free_size = elem->size;
-		}
-	}
-	/* Get stats on overall heap and allocated memory on this heap */
-	socket_stats->heap_totalsz_bytes = heap->total_size;
-	socket_stats->heap_allocsz_bytes = (socket_stats->heap_totalsz_bytes -
-			socket_stats->heap_freesz_bytes);
-	socket_stats->alloc_count = heap->alloc_count;
-	return 0;
-}
-
diff --git a/lib/librte_malloc/malloc_heap.h b/lib/librte_malloc/malloc_heap.h
deleted file mode 100644
index b4aec45..0000000
--- a/lib/librte_malloc/malloc_heap.h
+++ /dev/null
@@ -1,65 +0,0 @@
-/*-
- *   BSD LICENSE
- *
- *   Copyright(c) 2010-2014 Intel Corporation. All rights reserved.
- *   All rights reserved.
- *
- *   Redistribution and use in source and binary forms, with or without
- *   modification, are permitted provided that the following conditions
- *   are met:
- *
- *     * Redistributions of source code must retain the above copyright
- *       notice, this list of conditions and the following disclaimer.
- *     * Redistributions in binary form must reproduce the above copyright
- *       notice, this list of conditions and the following disclaimer in
- *       the documentation and/or other materials provided with the
- *       distribution.
- *     * Neither the name of Intel Corporation nor the names of its
- *       contributors may be used to endorse or promote products derived
- *       from this software without specific prior written permission.
- *
- *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
- *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
- *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
- *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
- *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
- *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
- *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
- *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
- *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
- *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
- *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
- */
-
-#ifndef MALLOC_HEAP_H_
-#define MALLOC_HEAP_H_
-
-#include <rte_malloc.h>
-#include <rte_malloc_heap.h>
-
-#ifdef __cplusplus
-extern "C" {
-#endif
-
-static inline unsigned
-malloc_get_numa_socket(void)
-{
-	return rte_socket_id();
-}
-
-void *
-malloc_heap_alloc(struct malloc_heap *heap, const char *type,
-		size_t size, unsigned align);
-
-int
-malloc_heap_get_stats(const struct malloc_heap *heap,
-		struct rte_malloc_socket_stats *socket_stats);
-
-int
-rte_eal_heap_memzone_init(void);
-
-#ifdef __cplusplus
-}
-#endif
-
-#endif /* MALLOC_HEAP_H_ */
diff --git a/lib/librte_malloc/rte_malloc.c b/lib/librte_malloc/rte_malloc.c
deleted file mode 100644
index b966fc7..0000000
--- a/lib/librte_malloc/rte_malloc.c
+++ /dev/null
@@ -1,261 +0,0 @@
-/*-
- *   BSD LICENSE
- *
- *   Copyright(c) 2010-2014 Intel Corporation. All rights reserved.
- *   All rights reserved.
- *
- *   Redistribution and use in source and binary forms, with or without
- *   modification, are permitted provided that the following conditions
- *   are met:
- *
- *     * Redistributions of source code must retain the above copyright
- *       notice, this list of conditions and the following disclaimer.
- *     * Redistributions in binary form must reproduce the above copyright
- *       notice, this list of conditions and the following disclaimer in
- *       the documentation and/or other materials provided with the
- *       distribution.
- *     * Neither the name of Intel Corporation nor the names of its
- *       contributors may be used to endorse or promote products derived
- *       from this software without specific prior written permission.
- *
- *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
- *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
- *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
- *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
- *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
- *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
- *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
- *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
- *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
- *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
- *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
- */
-
-#include <stdint.h>
-#include <stddef.h>
-#include <stdio.h>
-#include <string.h>
-#include <sys/queue.h>
-
-#include <rte_memcpy.h>
-#include <rte_memory.h>
-#include <rte_memzone.h>
-#include <rte_tailq.h>
-#include <rte_eal.h>
-#include <rte_eal_memconfig.h>
-#include <rte_branch_prediction.h>
-#include <rte_debug.h>
-#include <rte_launch.h>
-#include <rte_per_lcore.h>
-#include <rte_lcore.h>
-#include <rte_common.h>
-#include <rte_spinlock.h>
-
-#include <rte_malloc.h>
-#include "malloc_elem.h"
-#include "malloc_heap.h"
-
-
-/* Free the memory space back to heap */
-void rte_free(void *addr)
-{
-	if (addr == NULL) return;
-	if (malloc_elem_free(malloc_elem_from_data(addr)) < 0)
-		rte_panic("Fatal error: Invalid memory\n");
-}
-
-/*
- * Allocate memory on specified heap.
- */
-void *
-rte_malloc_socket(const char *type, size_t size, unsigned align, int socket_arg)
-{
-	struct rte_mem_config *mcfg = rte_eal_get_configuration()->mem_config;
-	int socket, i;
-	void *ret;
-
-	/* return NULL if size is 0 or alignment is not power-of-2 */
-	if (size == 0 || !rte_is_power_of_2(align))
-		return NULL;
-
-	if (socket_arg == SOCKET_ID_ANY)
-		socket = malloc_get_numa_socket();
-	else
-		socket = socket_arg;
-
-	/* Check socket parameter */
-	if (socket >= RTE_MAX_NUMA_NODES)
-		return NULL;
-
-	ret = malloc_heap_alloc(&mcfg->malloc_heaps[socket], type,
-				size, align == 0 ? 1 : align);
-	if (ret != NULL || socket_arg != SOCKET_ID_ANY)
-		return ret;
-
-	/* try other heaps */
-	for (i = 0; i < RTE_MAX_NUMA_NODES; i++) {
-		/* we already tried this one */
-		if (i == socket)
-			continue;
-
-		ret = malloc_heap_alloc(&mcfg->malloc_heaps[i], type,
-					size, align == 0 ? 1 : align);
-		if (ret != NULL)
-			return ret;
-	}
-
-	return NULL;
-}
-
-/*
- * Allocate memory on default heap.
- */
-void *
-rte_malloc(const char *type, size_t size, unsigned align)
-{
-	return rte_malloc_socket(type, size, align, SOCKET_ID_ANY);
-}
-
-/*
- * Allocate zero'd memory on specified heap.
- */
-void *
-rte_zmalloc_socket(const char *type, size_t size, unsigned align, int socket)
-{
-	void *ptr = rte_malloc_socket(type, size, align, socket);
-
-	if (ptr != NULL)
-		memset(ptr, 0, size);
-	return ptr;
-}
-
-/*
- * Allocate zero'd memory on default heap.
- */
-void *
-rte_zmalloc(const char *type, size_t size, unsigned align)
-{
-	return rte_zmalloc_socket(type, size, align, SOCKET_ID_ANY);
-}
-
-/*
- * Allocate zero'd memory on specified heap.
- */
-void *
-rte_calloc_socket(const char *type, size_t num, size_t size, unsigned align, int socket)
-{
-	return rte_zmalloc_socket(type, num * size, align, socket);
-}
-
-/*
- * Allocate zero'd memory on default heap.
- */
-void *
-rte_calloc(const char *type, size_t num, size_t size, unsigned align)
-{
-	return rte_zmalloc(type, num * size, align);
-}
-
-/*
- * Resize allocated memory.
- */
-void *
-rte_realloc(void *ptr, size_t size, unsigned align)
-{
-	if (ptr == NULL)
-		return rte_malloc(NULL, size, align);
-
-	struct malloc_elem *elem = malloc_elem_from_data(ptr);
-	if (elem == NULL)
-		rte_panic("Fatal error: memory corruption detected\n");
-
-	size = RTE_CACHE_LINE_ROUNDUP(size), align = RTE_CACHE_LINE_ROUNDUP(align);
-	/* check alignment matches first, and if ok, see if we can resize block */
-	if (RTE_PTR_ALIGN(ptr,align) == ptr &&
-			malloc_elem_resize(elem, size) == 0)
-		return ptr;
-
-	/* either alignment is off, or we have no room to expand,
-	 * so move data. */
-	void *new_ptr = rte_malloc(NULL, size, align);
-	if (new_ptr == NULL)
-		return NULL;
-	const unsigned old_size = elem->size - MALLOC_ELEM_OVERHEAD;
-	rte_memcpy(new_ptr, ptr, old_size < size ? old_size : size);
-	rte_free(ptr);
-
-	return new_ptr;
-}
-
-int
-rte_malloc_validate(const void *ptr, size_t *size)
-{
-	const struct malloc_elem *elem = malloc_elem_from_data(ptr);
-	if (!malloc_elem_cookies_ok(elem))
-		return -1;
-	if (size != NULL)
-		*size = elem->size - elem->pad - MALLOC_ELEM_OVERHEAD;
-	return 0;
-}
-
-/*
- * Function to retrieve data for heap on given socket
- */
-int
-rte_malloc_get_socket_stats(int socket,
-		struct rte_malloc_socket_stats *socket_stats)
-{
-	struct rte_mem_config *mcfg = rte_eal_get_configuration()->mem_config;
-
-	if (socket >= RTE_MAX_NUMA_NODES || socket < 0)
-		return -1;
-
-	return malloc_heap_get_stats(&mcfg->malloc_heaps[socket], socket_stats);
-}
-
-/*
- * Print stats on memory type. If type is NULL, info on all types is printed
- */
-void
-rte_malloc_dump_stats(FILE *f, __rte_unused const char *type)
-{
-	unsigned int socket;
-	struct rte_malloc_socket_stats sock_stats;
-	/* Iterate through all initialised heaps */
-	for (socket=0; socket< RTE_MAX_NUMA_NODES; socket++) {
-		if ((rte_malloc_get_socket_stats(socket, &sock_stats) < 0))
-			continue;
-
-		fprintf(f, "Socket:%u\n", socket);
-		fprintf(f, "\tHeap_size:%zu,\n", sock_stats.heap_totalsz_bytes);
-		fprintf(f, "\tFree_size:%zu,\n", sock_stats.heap_freesz_bytes);
-		fprintf(f, "\tAlloc_size:%zu,\n", sock_stats.heap_allocsz_bytes);
-		fprintf(f, "\tGreatest_free_size:%zu,\n",
-				sock_stats.greatest_free_size);
-		fprintf(f, "\tAlloc_count:%u,\n",sock_stats.alloc_count);
-		fprintf(f, "\tFree_count:%u,\n", sock_stats.free_count);
-	}
-	return;
-}
-
-/*
- * TODO: Set limit to memory that can be allocated to memory type
- */
-int
-rte_malloc_set_limit(__rte_unused const char *type,
-		__rte_unused size_t max)
-{
-	return 0;
-}
-
-/*
- * Return the physical address of a virtual address obtained through rte_malloc
- */
-phys_addr_t
-rte_malloc_virt2phy(const void *addr)
-{
-	const struct malloc_elem *elem = malloc_elem_from_data(addr);
-	if (elem == NULL)
-		return 0;
-	return elem->mz->phys_addr + ((uintptr_t)addr - (uintptr_t)elem->mz->addr);
-}
diff --git a/lib/librte_malloc/rte_malloc.h b/lib/librte_malloc/rte_malloc.h
deleted file mode 100644
index 74bb78c..0000000
--- a/lib/librte_malloc/rte_malloc.h
+++ /dev/null
@@ -1,342 +0,0 @@
-/*-
- *   BSD LICENSE
- *
- *   Copyright(c) 2010-2014 Intel Corporation. All rights reserved.
- *   All rights reserved.
- *
- *   Redistribution and use in source and binary forms, with or without
- *   modification, are permitted provided that the following conditions
- *   are met:
- *
- *     * Redistributions of source code must retain the above copyright
- *       notice, this list of conditions and the following disclaimer.
- *     * Redistributions in binary form must reproduce the above copyright
- *       notice, this list of conditions and the following disclaimer in
- *       the documentation and/or other materials provided with the
- *       distribution.
- *     * Neither the name of Intel Corporation nor the names of its
- *       contributors may be used to endorse or promote products derived
- *       from this software without specific prior written permission.
- *
- *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
- *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
- *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
- *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
- *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
- *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
- *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
- *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
- *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
- *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
- *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
- */
-
-#ifndef _RTE_MALLOC_H_
-#define _RTE_MALLOC_H_
-
-/**
- * @file
- * RTE Malloc. This library provides methods for dynamically allocating memory
- * from hugepages.
- */
-
-#include <stdio.h>
-#include <stddef.h>
-#include <rte_memory.h>
-
-#ifdef __cplusplus
-extern "C" {
-#endif
-
-/**
- *  Structure to hold heap statistics obtained from rte_malloc_get_socket_stats function.
- */
-struct rte_malloc_socket_stats {
-	size_t heap_totalsz_bytes; /**< Total bytes on heap */
-	size_t heap_freesz_bytes;  /**< Total free bytes on heap */
-	size_t greatest_free_size; /**< Size in bytes of largest free block */
-	unsigned free_count;       /**< Number of free elements on heap */
-	unsigned alloc_count;      /**< Number of allocated elements on heap */
-	size_t heap_allocsz_bytes; /**< Total allocated bytes on heap */
-};
-
-/**
- * This function allocates memory from the huge-page area of memory. The memory
- * is not cleared. In NUMA systems, the memory allocated resides on the same
- * NUMA socket as the core that calls this function.
- *
- * @param type
- *   A string identifying the type of allocated objects (useful for debug
- *   purposes, such as identifying the cause of a memory leak). Can be NULL.
- * @param size
- *   Size (in bytes) to be allocated.
- * @param align
- *   If 0, the return is a pointer that is suitably aligned for any kind of
- *   variable (in the same manner as malloc()).
- *   Otherwise, the return is a pointer that is a multiple of *align*. In
- *   this case, it must be a power of two. (Minimum alignment is the
- *   cacheline size, i.e. 64-bytes)
- * @return
- *   - NULL on error. Not enough memory, or invalid arguments (size is 0,
- *     align is not a power of two).
- *   - Otherwise, the pointer to the allocated object.
- */
-void *
-rte_malloc(const char *type, size_t size, unsigned align);
-
-/**
- * Allocate zero'ed memory from the heap.
- *
- * Equivalent to rte_malloc() except that the memory zone is
- * initialised with zeros. In NUMA systems, the memory allocated resides on the
- * same NUMA socket as the core that calls this function.
- *
- * @param type
- *   A string identifying the type of allocated objects (useful for debug
- *   purposes, such as identifying the cause of a memory leak). Can be NULL.
- * @param size
- *   Size (in bytes) to be allocated.
- * @param align
- *   If 0, the return is a pointer that is suitably aligned for any kind of
- *   variable (in the same manner as malloc()).
- *   Otherwise, the return is a pointer that is a multiple of *align*. In
- *   this case, it must obviously be a power of two. (Minimum alignment is the
- *   cacheline size, i.e. 64-bytes)
- * @return
- *   - NULL on error. Not enough memory, or invalid arguments (size is 0,
- *     align is not a power of two).
- *   - Otherwise, the pointer to the allocated object.
- */
-void *
-rte_zmalloc(const char *type, size_t size, unsigned align);
-
-/**
- * Replacement function for calloc(), using huge-page memory. Memory area is
- * initialised with zeros. In NUMA systems, the memory allocated resides on the
- * same NUMA socket as the core that calls this function.
- *
- * @param type
- *   A string identifying the type of allocated objects (useful for debug
- *   purposes, such as identifying the cause of a memory leak). Can be NULL.
- * @param num
- *   Number of elements to be allocated.
- * @param size
- *   Size (in bytes) of a single element.
- * @param align
- *   If 0, the return is a pointer that is suitably aligned for any kind of
- *   variable (in the same manner as malloc()).
- *   Otherwise, the return is a pointer that is a multiple of *align*. In
- *   this case, it must obviously be a power of two. (Minimum alignment is the
- *   cacheline size, i.e. 64-bytes)
- * @return
- *   - NULL on error. Not enough memory, or invalid arguments (size is 0,
- *     align is not a power of two).
- *   - Otherwise, the pointer to the allocated object.
- */
-void *
-rte_calloc(const char *type, size_t num, size_t size, unsigned align);
-
-/**
- * Replacement function for realloc(), using huge-page memory. Reserved area
- * memory is resized, preserving contents. In NUMA systems, the new area
- * resides on the same NUMA socket as the old area.
- *
- * @param ptr
- *   Pointer to already allocated memory
- * @param size
- *   Size (in bytes) of new area. If this is 0, memory is freed.
- * @param align
- *   If 0, the return is a pointer that is suitably aligned for any kind of
- *   variable (in the same manner as malloc()).
- *   Otherwise, the return is a pointer that is a multiple of *align*. In
- *   this case, *align* must be a power of two. (The minimum alignment is
- *   the cache line size, i.e. 64 bytes.)
- * @return
- *   - NULL on error. Not enough memory, or invalid arguments (size is 0,
- *     align is not a power of two).
- *   - Otherwise, the pointer to the reallocated memory.
- */
-void *
-rte_realloc(void *ptr, size_t size, unsigned align);
-
-/**
- * This function allocates memory from the huge-page area of memory. The memory
- * is not cleared.
- *
- * @param type
- *   A string identifying the type of allocated objects (useful for debug
- *   purposes, such as identifying the cause of a memory leak). Can be NULL.
- * @param size
- *   Size (in bytes) to be allocated.
- * @param align
- *   If 0, the return is a pointer that is suitably aligned for any kind of
- *   variable (in the same manner as malloc()).
- *   Otherwise, the return is a pointer that is a multiple of *align*. In
- *   this case, *align* must be a power of two. (The minimum alignment is
- *   the cache line size, i.e. 64 bytes.)
- * @param socket
- *   NUMA socket to allocate memory on. If SOCKET_ID_ANY is used, this function
- *   will behave the same as rte_malloc().
- * @return
- *   - NULL on error. Not enough memory, or invalid arguments (size is 0,
- *     align is not a power of two).
- *   - Otherwise, the pointer to the allocated object.
- */
-void *
-rte_malloc_socket(const char *type, size_t size, unsigned align, int socket);
-
-/**
- * Allocate zeroed memory from the heap.
- *
- * Equivalent to rte_malloc() except that the memory zone is
- * initialised with zeros.
- *
- * @param type
- *   A string identifying the type of allocated objects (useful for debug
- *   purposes, such as identifying the cause of a memory leak). Can be NULL.
- * @param size
- *   Size (in bytes) to be allocated.
- * @param align
- *   If 0, the return is a pointer that is suitably aligned for any kind of
- *   variable (in the same manner as malloc()).
- *   Otherwise, the return is a pointer that is a multiple of *align*. In
- *   this case, *align* must be a power of two. (The minimum alignment is
- *   the cache line size, i.e. 64 bytes.)
- * @param socket
- *   NUMA socket to allocate memory on. If SOCKET_ID_ANY is used, this function
- *   will behave the same as rte_zmalloc().
- * @return
- *   - NULL on error. Not enough memory, or invalid arguments (size is 0,
- *     align is not a power of two).
- *   - Otherwise, the pointer to the allocated object.
- */
-void *
-rte_zmalloc_socket(const char *type, size_t size, unsigned align, int socket);
-
-/**
- * Replacement function for calloc(), using huge-page memory. Memory area is
- * initialised with zeros.
- *
- * @param type
- *   A string identifying the type of allocated objects (useful for debug
- *   purposes, such as identifying the cause of a memory leak). Can be NULL.
- * @param num
- *   Number of elements to be allocated.
- * @param size
- *   Size (in bytes) of a single element.
- * @param align
- *   If 0, the return is a pointer that is suitably aligned for any kind of
- *   variable (in the same manner as malloc()).
- *   Otherwise, the return is a pointer that is a multiple of *align*. In
- *   this case, *align* must be a power of two. (The minimum alignment is
- *   the cache line size, i.e. 64 bytes.)
- * @param socket
- *   NUMA socket to allocate memory on. If SOCKET_ID_ANY is used, this function
- *   will behave the same as rte_calloc().
- * @return
- *   - NULL on error. Not enough memory, or invalid arguments (size is 0,
- *     align is not a power of two).
- *   - Otherwise, the pointer to the allocated object.
- */
-void *
-rte_calloc_socket(const char *type, size_t num, size_t size, unsigned align, int socket);
-
-/**
- * Frees the memory space pointed to by the provided pointer.
- *
- * This pointer must have been returned by a previous call to
- * rte_malloc(), rte_zmalloc(), rte_calloc() or rte_realloc(). The behaviour of
- * rte_free() is undefined if the pointer does not match this requirement.
- *
- * If the pointer is NULL, the function does nothing.
- *
- * @param ptr
- *   The pointer to memory to be freed.
- */
-void
-rte_free(void *ptr);
-
-/**
- * If malloc debug is enabled, check a memory block for header
- * and trailer markers to indicate that all is well with the block.
- * If size is non-null, also return the size of the block.
- *
- * @param ptr
- *   pointer to the start of a data block, must have been returned
- *   by a previous call to rte_malloc(), rte_zmalloc(), rte_calloc()
- *   or rte_realloc()
- * @param size
- *   if non-null, and memory block pointer is valid, returns the size
- *   of the memory block
- * @return
- *   -1 on error, invalid pointer passed or header and trailer markers
- *   are missing or corrupted
- *   0 on success
- */
-int
-rte_malloc_validate(const void *ptr, size_t *size);
-
-/**
- * Get heap statistics for the specified heap.
- *
- * @param socket
- *   Socket to get heap statistics for
- * @param socket_stats
- *   A structure which provides memory to store statistics
- * @return
- *   -1 on error
- *   0 on success
- */
-int
-rte_malloc_get_socket_stats(int socket,
-		struct rte_malloc_socket_stats *socket_stats);
-
-/**
- * Dump statistics.
- *
- * Dump statistics for the specified type to the given file. If the type
- * argument is NULL, all memory types are dumped.
- *
- * @param f
- *   A pointer to a file for output
- * @param type
- *   A string identifying the type of objects to dump, or NULL
- *   to dump all objects.
- */
-void
-rte_malloc_dump_stats(FILE *f, const char *type);
-
-/**
- * Set the maximum amount of allocated memory for this type.
- *
- * This function is not yet implemented.
- *
- * @param type
- *   A string identifying the type of allocated objects.
- * @param max
- *   The maximum amount of allocated bytes for this type.
- * @return
- *   - 0: Success.
- *   - (-1): Error.
- */
-int
-rte_malloc_set_limit(const char *type, size_t max);
-
-/**
- * Return the physical address of a virtual address obtained through
- * rte_malloc
- *
- * @param addr
- *   Address obtained from a previous rte_malloc call
- * @return
- *   NULL on error
- *   otherwise return physical address of the buffer
- */
-phys_addr_t
-rte_malloc_virt2phy(const void *addr);
-
-#ifdef __cplusplus
-}
-#endif
-
-#endif /* _RTE_MALLOC_H_ */
-- 
1.9.3

^ permalink raw reply	[flat|nested] 21+ messages in thread

* [dpdk-dev] [PATCH RFC 05/13] core: move librte_mempool to core subdir
  2015-01-12 16:33 [dpdk-dev] [PATCH RFC 00/13] Update build system Sergio Gonzalez Monroy
                   ` (3 preceding siblings ...)
  2015-01-12 16:33 ` [dpdk-dev] [PATCH RFC 04/13] core: move librte_malloc " Sergio Gonzalez Monroy
@ 2015-01-12 16:33 ` Sergio Gonzalez Monroy
  2015-01-12 16:33 ` [dpdk-dev] [PATCH RFC 06/13] core: move librte_mbuf " Sergio Gonzalez Monroy
                   ` (9 subsequent siblings)
  14 siblings, 0 replies; 21+ messages in thread
From: Sergio Gonzalez Monroy @ 2015-01-12 16:33 UTC (permalink / raw)
  To: dev

This is equivalent to:

git mv lib/librte_mempool lib/core

Signed-off-by: Sergio Gonzalez Monroy <sergio.gonzalez.monroy@intel.com>
---
 lib/core/librte_mempool/Makefile           |   51 +
 lib/core/librte_mempool/rte_dom0_mempool.c |  134 +++
 lib/core/librte_mempool/rte_mempool.c      |  901 ++++++++++++++++++
 lib/core/librte_mempool/rte_mempool.h      | 1392 ++++++++++++++++++++++++++++
 lib/librte_mempool/Makefile                |   51 -
 lib/librte_mempool/rte_dom0_mempool.c      |  134 ---
 lib/librte_mempool/rte_mempool.c           |  901 ------------------
 lib/librte_mempool/rte_mempool.h           | 1392 ----------------------------
 8 files changed, 2478 insertions(+), 2478 deletions(-)
 create mode 100644 lib/core/librte_mempool/Makefile
 create mode 100644 lib/core/librte_mempool/rte_dom0_mempool.c
 create mode 100644 lib/core/librte_mempool/rte_mempool.c
 create mode 100644 lib/core/librte_mempool/rte_mempool.h
 delete mode 100644 lib/librte_mempool/Makefile
 delete mode 100644 lib/librte_mempool/rte_dom0_mempool.c
 delete mode 100644 lib/librte_mempool/rte_mempool.c
 delete mode 100644 lib/librte_mempool/rte_mempool.h

diff --git a/lib/core/librte_mempool/Makefile b/lib/core/librte_mempool/Makefile
new file mode 100644
index 0000000..9939e10
--- /dev/null
+++ b/lib/core/librte_mempool/Makefile
@@ -0,0 +1,51 @@
+#   BSD LICENSE
+#
+#   Copyright(c) 2010-2014 Intel Corporation. All rights reserved.
+#   All rights reserved.
+#
+#   Redistribution and use in source and binary forms, with or without
+#   modification, are permitted provided that the following conditions
+#   are met:
+#
+#     * Redistributions of source code must retain the above copyright
+#       notice, this list of conditions and the following disclaimer.
+#     * Redistributions in binary form must reproduce the above copyright
+#       notice, this list of conditions and the following disclaimer in
+#       the documentation and/or other materials provided with the
+#       distribution.
+#     * Neither the name of Intel Corporation nor the names of its
+#       contributors may be used to endorse or promote products derived
+#       from this software without specific prior written permission.
+#
+#   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+#   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+#   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+#   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+#   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+#   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+#   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+#   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+#   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+#   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+#   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+include $(RTE_SDK)/mk/rte.vars.mk
+
+# library name
+LIB = librte_mempool.a
+
+CFLAGS += $(WERROR_FLAGS) -I$(SRCDIR) -O3
+
+# all sources are stored in SRCS-y
+SRCS-$(CONFIG_RTE_LIBRTE_MEMPOOL) +=  rte_mempool.c
+ifeq ($(CONFIG_RTE_LIBRTE_XEN_DOM0),y)
+SRCS-$(CONFIG_RTE_LIBRTE_MEMPOOL) +=  rte_dom0_mempool.c
+endif
+# install includes
+SYMLINK-$(CONFIG_RTE_LIBRTE_MEMPOOL)-include := rte_mempool.h
+
+# this lib needs eal, rte_ring and rte_malloc
+DEPDIRS-$(CONFIG_RTE_LIBRTE_MEMPOOL) += lib/librte_eal lib/librte_ring
+DEPDIRS-$(CONFIG_RTE_LIBRTE_MEMPOOL) += lib/librte_malloc
+
+include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/lib/core/librte_mempool/rte_dom0_mempool.c b/lib/core/librte_mempool/rte_dom0_mempool.c
new file mode 100644
index 0000000..9ec68fb
--- /dev/null
+++ b/lib/core/librte_mempool/rte_dom0_mempool.c
@@ -0,0 +1,134 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2010-2014 Intel Corporation. All rights reserved.
+ *   All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Intel Corporation nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <stdio.h>
+#include <string.h>
+#include <stdint.h>
+#include <unistd.h>
+#include <stdarg.h>
+#include <inttypes.h>
+#include <errno.h>
+#include <sys/queue.h>
+
+#include <rte_common.h>
+#include <rte_log.h>
+#include <rte_debug.h>
+#include <rte_memory.h>
+#include <rte_memzone.h>
+#include <rte_atomic.h>
+#include <rte_launch.h>
+#include <rte_tailq.h>
+#include <rte_eal.h>
+#include <rte_eal_memconfig.h>
+#include <rte_per_lcore.h>
+#include <rte_lcore.h>
+#include <rte_branch_prediction.h>
+#include <rte_ring.h>
+#include <rte_errno.h>
+#include <rte_string_fns.h>
+#include <rte_spinlock.h>
+
+#include "rte_mempool.h"
+
+static void
+get_phys_map(void *va, phys_addr_t pa[], uint32_t pg_num,
+            uint32_t pg_sz, uint32_t memseg_id)
+{
+    uint32_t i;
+    uint64_t virt_addr, mfn_id;
+    struct rte_mem_config *mcfg;
+    uint32_t page_size = getpagesize();
+
+    /* get pointer to global configuration */
+    mcfg = rte_eal_get_configuration()->mem_config;
+    virt_addr = (uintptr_t)mcfg->memseg[memseg_id].addr;
+
+    for (i = 0; i != pg_num; i++) {
+        mfn_id = ((uintptr_t)va + i * pg_sz - virt_addr) / RTE_PGSIZE_2M;
+        pa[i] = mcfg->memseg[memseg_id].mfn[mfn_id] * page_size;
+    }
+}
+
+/* create the mempool for supporting Dom0 */
+struct rte_mempool *
+rte_dom0_mempool_create(const char *name, unsigned elt_num, unsigned elt_size,
+           unsigned cache_size, unsigned private_data_size,
+           rte_mempool_ctor_t *mp_init, void *mp_init_arg,
+           rte_mempool_obj_ctor_t *obj_init, void *obj_init_arg,
+           int socket_id, unsigned flags)
+{
+	struct rte_mempool *mp = NULL;
+	phys_addr_t *pa;
+	char *va;
+	size_t sz;
+	uint32_t pg_num, pg_shift, pg_sz, total_size;
+	const struct rte_memzone *mz;
+	char mz_name[RTE_MEMZONE_NAMESIZE];
+	int mz_flags = RTE_MEMZONE_1GB|RTE_MEMZONE_SIZE_HINT_ONLY;
+
+	pg_sz = RTE_PGSIZE_2M;
+
+	pg_shift = rte_bsf32(pg_sz);
+	total_size = rte_mempool_calc_obj_size(elt_size, flags, NULL);
+
+	/* calc max memory size and max number of pages needed. */
+	sz = rte_mempool_xmem_size(elt_num, total_size, pg_shift) +
+		RTE_PGSIZE_2M;
+	pg_num = sz >> pg_shift;
+
+	/* allocate storage for the per-page physical addresses. */
+	pa = calloc(pg_num, sizeof (*pa));
+	if (pa == NULL)
+		return mp;
+
+	snprintf(mz_name, sizeof(mz_name), RTE_MEMPOOL_OBJ_NAME, name);
+	mz = rte_memzone_reserve(mz_name, sz, socket_id, mz_flags);
+	if (mz == NULL) {
+		free(pa);
+		return mp;
+	}
+
+	va = (char *)RTE_ALIGN_CEIL((uintptr_t)mz->addr, RTE_PGSIZE_2M);
+	/* extract physical mappings of the allocated memory. */
+	get_phys_map(va, pa, pg_num, pg_sz, mz->memseg_id);
+
+	mp = rte_mempool_xmem_create(name, elt_num, elt_size,
+		cache_size, private_data_size,
+		mp_init, mp_init_arg,
+		obj_init, obj_init_arg,
+		socket_id, flags, va, pa, pg_num, pg_shift);
+
+	free(pa);
+
+	return (mp);
+}
diff --git a/lib/core/librte_mempool/rte_mempool.c b/lib/core/librte_mempool/rte_mempool.c
new file mode 100644
index 0000000..4cf6c25
--- /dev/null
+++ b/lib/core/librte_mempool/rte_mempool.c
@@ -0,0 +1,901 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2010-2014 Intel Corporation. All rights reserved.
+ *   All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Intel Corporation nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <stdio.h>
+#include <string.h>
+#include <stdint.h>
+#include <stdarg.h>
+#include <unistd.h>
+#include <inttypes.h>
+#include <errno.h>
+#include <sys/queue.h>
+
+#include <rte_common.h>
+#include <rte_log.h>
+#include <rte_debug.h>
+#include <rte_memory.h>
+#include <rte_memzone.h>
+#include <rte_malloc.h>
+#include <rte_atomic.h>
+#include <rte_launch.h>
+#include <rte_tailq.h>
+#include <rte_eal.h>
+#include <rte_eal_memconfig.h>
+#include <rte_per_lcore.h>
+#include <rte_lcore.h>
+#include <rte_branch_prediction.h>
+#include <rte_ring.h>
+#include <rte_errno.h>
+#include <rte_string_fns.h>
+#include <rte_spinlock.h>
+
+#include "rte_mempool.h"
+
+TAILQ_HEAD(rte_mempool_list, rte_tailq_entry);
+
+#define CACHE_FLUSHTHRESH_MULTIPLIER 1.5
+
+/*
+ * Return the greatest common divisor of a and b (fast algorithm).
+ */
+static unsigned get_gcd(unsigned a, unsigned b)
+{
+	unsigned c;
+
+	if (0 == a)
+		return b;
+	if (0 == b)
+		return a;
+
+	if (a < b) {
+		c = a;
+		a = b;
+		b = c;
+	}
+
+	while (b != 0) {
+		c = a % b;
+		a = b;
+		b = c;
+	}
+
+	return a;
+}
+
+/*
+ * Depending on memory configuration, object addresses are spread
+ * between channels and ranks in RAM: the pool allocator will add
+ * padding between objects. This function returns the new size of
+ * the object.
+ */
+static unsigned optimize_object_size(unsigned obj_size)
+{
+	unsigned nrank, nchan;
+	unsigned new_obj_size;
+
+	/* get number of channels */
+	nchan = rte_memory_get_nchannel();
+	if (nchan == 0)
+		nchan = 1;
+
+	nrank = rte_memory_get_nrank();
+	if (nrank == 0)
+		nrank = 1;
+
+	/* process new object size */
+	new_obj_size = (obj_size + RTE_CACHE_LINE_MASK) / RTE_CACHE_LINE_SIZE;
+	while (get_gcd(new_obj_size, nrank * nchan) != 1)
+		new_obj_size++;
+	return new_obj_size * RTE_CACHE_LINE_SIZE;
+}
+
+static void
+mempool_add_elem(struct rte_mempool *mp, void *obj, uint32_t obj_idx,
+	rte_mempool_obj_ctor_t *obj_init, void *obj_init_arg)
+{
+	struct rte_mempool **mpp;
+
+	obj = (char *)obj + mp->header_size;
+
+	/* set mempool ptr in header */
+	mpp = __mempool_from_obj(obj);
+	*mpp = mp;
+
+#ifdef RTE_LIBRTE_MEMPOOL_DEBUG
+	__mempool_write_header_cookie(obj, 1);
+	__mempool_write_trailer_cookie(obj);
+#endif
+	/* call the initializer */
+	if (obj_init)
+		obj_init(mp, obj_init_arg, obj, obj_idx);
+
+	/* enqueue in ring */
+	rte_ring_sp_enqueue(mp->ring, obj);
+}
+
+uint32_t
+rte_mempool_obj_iter(void *vaddr, uint32_t elt_num, size_t elt_sz, size_t align,
+	const phys_addr_t paddr[], uint32_t pg_num, uint32_t pg_shift,
+	rte_mempool_obj_iter_t obj_iter, void *obj_iter_arg)
+{
+	uint32_t i, j, k;
+	uint32_t pgn;
+	uintptr_t end, start, va;
+	uintptr_t pg_sz;
+
+	pg_sz = (uintptr_t)1 << pg_shift;
+	va = (uintptr_t)vaddr;
+
+	i = 0;
+	j = 0;
+
+	while (i != elt_num && j != pg_num) {
+
+		start = RTE_ALIGN_CEIL(va, align);
+		end = start + elt_sz;
+
+		pgn = (end >> pg_shift) - (start >> pg_shift);
+		pgn += j;
+
+		/* do we have enough space left for the next element? */
+		if (pgn >= pg_num)
+			break;
+
+		for (k = j;
+				k != pgn &&
+				paddr[k] + pg_sz == paddr[k + 1];
+				k++)
+			;
+
+		/*
+		 * if the next pgn chunks of memory are physically
+		 * contiguous, use them to create the next element;
+		 * otherwise, just skip that chunk unused.
+		 */
+		if (k == pgn) {
+			if (obj_iter != NULL)
+				obj_iter(obj_iter_arg, (void *)start,
+					(void *)end, i);
+			va = end;
+			j = pgn;
+			i++;
+		} else {
+			va = RTE_ALIGN_CEIL((va + 1), pg_sz);
+			j++;
+		}
+	}
+
+	return (i);
+}
+
+/*
+ * Populate the mempool with objects.
+ */
+
+struct mempool_populate_arg {
+	struct rte_mempool     *mp;
+	rte_mempool_obj_ctor_t *obj_init;
+	void                   *obj_init_arg;
+};
+
+static void
+mempool_obj_populate(void *arg, void *start, void *end, uint32_t idx)
+{
+	struct mempool_populate_arg *pa = arg;
+
+	mempool_add_elem(pa->mp, start, idx, pa->obj_init, pa->obj_init_arg);
+	pa->mp->elt_va_end = (uintptr_t)end;
+}
+
+static void
+mempool_populate(struct rte_mempool *mp, size_t num, size_t align,
+	rte_mempool_obj_ctor_t *obj_init, void *obj_init_arg)
+{
+	uint32_t elt_sz;
+	struct mempool_populate_arg arg;
+
+	elt_sz = mp->elt_size + mp->header_size + mp->trailer_size;
+	arg.mp = mp;
+	arg.obj_init = obj_init;
+	arg.obj_init_arg = obj_init_arg;
+
+	mp->size = rte_mempool_obj_iter((void *)mp->elt_va_start,
+		num, elt_sz, align,
+		mp->elt_pa, mp->pg_num, mp->pg_shift,
+		mempool_obj_populate, &arg);
+}
+
+uint32_t
+rte_mempool_calc_obj_size(uint32_t elt_size, uint32_t flags,
+	struct rte_mempool_objsz *sz)
+{
+	struct rte_mempool_objsz lsz;
+
+	sz = (sz != NULL) ? sz : &lsz;
+
+	/*
+	 * In header, we have at least the pointer to the pool, and
+	 * optionally a 64-bit cookie.
+	 */
+	sz->header_size = 0;
+	sz->header_size += sizeof(struct rte_mempool *); /* ptr to pool */
+#ifdef RTE_LIBRTE_MEMPOOL_DEBUG
+	sz->header_size += sizeof(uint64_t); /* cookie */
+#endif
+	if ((flags & MEMPOOL_F_NO_CACHE_ALIGN) == 0)
+		sz->header_size = RTE_ALIGN_CEIL(sz->header_size,
+			RTE_CACHE_LINE_SIZE);
+
+	/* trailer contains the cookie in debug mode */
+	sz->trailer_size = 0;
+#ifdef RTE_LIBRTE_MEMPOOL_DEBUG
+	sz->trailer_size += sizeof(uint64_t); /* cookie */
+#endif
+	/* element size is 8 bytes-aligned at least */
+	sz->elt_size = RTE_ALIGN_CEIL(elt_size, sizeof(uint64_t));
+
+	/* expand trailer to next cache line */
+	if ((flags & MEMPOOL_F_NO_CACHE_ALIGN) == 0) {
+		sz->total_size = sz->header_size + sz->elt_size +
+			sz->trailer_size;
+		sz->trailer_size += ((RTE_CACHE_LINE_SIZE -
+				  (sz->total_size & RTE_CACHE_LINE_MASK)) &
+				 RTE_CACHE_LINE_MASK);
+	}
+
+	/*
+	 * increase trailer to add padding between objects in order to
+	 * spread them across memory channels/ranks
+	 */
+	if ((flags & MEMPOOL_F_NO_SPREAD) == 0) {
+		unsigned new_size;
+		new_size = optimize_object_size(sz->header_size + sz->elt_size +
+			sz->trailer_size);
+		sz->trailer_size = new_size - sz->header_size - sz->elt_size;
+	}
+
+	if (! rte_eal_has_hugepages()) {
+		/*
+		 * compute trailer size so that pool elements fit exactly in
+		 * a standard page
+		 */
+		int page_size = getpagesize();
+		int new_size = page_size - sz->header_size - sz->elt_size;
+		if (new_size < 0 || (unsigned int)new_size < sz->trailer_size) {
+			printf("When hugepages are disabled, pool objects "
+			       "can't exceed PAGE_SIZE: %d + %d + %d > %d\n",
+			       sz->header_size, sz->elt_size, sz->trailer_size,
+			       page_size);
+			return 0;
+		}
+		sz->trailer_size = new_size;
+	}
+
+	/* this is the size of an object, including header and trailer */
+	sz->total_size = sz->header_size + sz->elt_size + sz->trailer_size;
+
+	return (sz->total_size);
+}
+
+
+/*
+ * Calculate maximum amount of memory required to store given number of objects.
+ */
+size_t
+rte_mempool_xmem_size(uint32_t elt_num, size_t elt_sz, uint32_t pg_shift)
+{
+	size_t n, pg_num, pg_sz, sz;
+
+	pg_sz = (size_t)1 << pg_shift;
+
+	if ((n = pg_sz / elt_sz) > 0) {
+		pg_num = (elt_num + n - 1) / n;
+		sz = pg_num << pg_shift;
+	} else {
+		sz = RTE_ALIGN_CEIL(elt_sz, pg_sz) * elt_num;
+	}
+
+	return (sz);
+}
+
+/*
+ * Calculate how much memory is actually required with the given
+ * memory footprint to store the required number of elements.
+ */
+static void
+mempool_lelem_iter(void *arg, __rte_unused void *start, void *end,
+	__rte_unused uint32_t idx)
+{
+	*(uintptr_t *)arg = (uintptr_t)end;
+}
+
+ssize_t
+rte_mempool_xmem_usage(void *vaddr, uint32_t elt_num, size_t elt_sz,
+	const phys_addr_t paddr[], uint32_t pg_num, uint32_t pg_shift)
+{
+	uint32_t n;
+	uintptr_t va, uv;
+	size_t pg_sz, usz;
+
+	pg_sz = (size_t)1 << pg_shift;
+	va = (uintptr_t)vaddr;
+	uv = va;
+
+	if ((n = rte_mempool_obj_iter(vaddr, elt_num, elt_sz, 1,
+			paddr, pg_num, pg_shift, mempool_lelem_iter,
+			&uv)) != elt_num) {
+		return (-n);
+	}
+
+	uv = RTE_ALIGN_CEIL(uv, pg_sz);
+	usz = uv - va;
+	return (usz);
+}
+
+/* create the mempool */
+struct rte_mempool *
+rte_mempool_create(const char *name, unsigned n, unsigned elt_size,
+		   unsigned cache_size, unsigned private_data_size,
+		   rte_mempool_ctor_t *mp_init, void *mp_init_arg,
+		   rte_mempool_obj_ctor_t *obj_init, void *obj_init_arg,
+		   int socket_id, unsigned flags)
+{
+#ifdef RTE_LIBRTE_XEN_DOM0
+	return (rte_dom0_mempool_create(name, n, elt_size,
+		cache_size, private_data_size,
+		mp_init, mp_init_arg,
+		obj_init, obj_init_arg,
+		socket_id, flags));
+#else
+	return (rte_mempool_xmem_create(name, n, elt_size,
+		cache_size, private_data_size,
+		mp_init, mp_init_arg,
+		obj_init, obj_init_arg,
+		socket_id, flags,
+		NULL, NULL, MEMPOOL_PG_NUM_DEFAULT, MEMPOOL_PG_SHIFT_MAX));
+#endif
+}
+
+/*
+ * Create the mempool over an already allocated chunk of memory.
+ * That external memory buffer can consist of physically disjoint pages.
+ * Setting vaddr to NULL makes the mempool fall back to the original
+ * behaviour and allocate space for the mempool and its elements as one
+ * big chunk of physically contiguous memory.
+ */
+struct rte_mempool *
+rte_mempool_xmem_create(const char *name, unsigned n, unsigned elt_size,
+		unsigned cache_size, unsigned private_data_size,
+		rte_mempool_ctor_t *mp_init, void *mp_init_arg,
+		rte_mempool_obj_ctor_t *obj_init, void *obj_init_arg,
+		int socket_id, unsigned flags, void *vaddr,
+		const phys_addr_t paddr[], uint32_t pg_num, uint32_t pg_shift)
+{
+	char mz_name[RTE_MEMZONE_NAMESIZE];
+	char rg_name[RTE_RING_NAMESIZE];
+	struct rte_mempool *mp = NULL;
+	struct rte_tailq_entry *te;
+	struct rte_ring *r;
+	const struct rte_memzone *mz;
+	size_t mempool_size;
+	int mz_flags = RTE_MEMZONE_1GB|RTE_MEMZONE_SIZE_HINT_ONLY;
+	int rg_flags = 0;
+	void *obj;
+	struct rte_mempool_objsz objsz;
+	void *startaddr;
+	int page_size = getpagesize();
+
+	/* compilation-time checks */
+	RTE_BUILD_BUG_ON((sizeof(struct rte_mempool) &
+			  RTE_CACHE_LINE_MASK) != 0);
+#if RTE_MEMPOOL_CACHE_MAX_SIZE > 0
+	RTE_BUILD_BUG_ON((sizeof(struct rte_mempool_cache) &
+			  RTE_CACHE_LINE_MASK) != 0);
+	RTE_BUILD_BUG_ON((offsetof(struct rte_mempool, local_cache) &
+			  RTE_CACHE_LINE_MASK) != 0);
+#endif
+#ifdef RTE_LIBRTE_MEMPOOL_DEBUG
+	RTE_BUILD_BUG_ON((sizeof(struct rte_mempool_debug_stats) &
+			  RTE_CACHE_LINE_MASK) != 0);
+	RTE_BUILD_BUG_ON((offsetof(struct rte_mempool, stats) &
+			  RTE_CACHE_LINE_MASK) != 0);
+#endif
+
+	/* check that we have an initialised tail queue */
+	if (RTE_TAILQ_LOOKUP_BY_IDX(RTE_TAILQ_MEMPOOL,
+			rte_mempool_list) == NULL) {
+		rte_errno = E_RTE_NO_TAILQ;
+		return NULL;
+	}
+
+	/* requested cache size is too big */
+	if (cache_size > RTE_MEMPOOL_CACHE_MAX_SIZE) {
+		rte_errno = EINVAL;
+		return NULL;
+	}
+
+	/* check that we have both VA and PA */
+	if (vaddr != NULL && paddr == NULL) {
+		rte_errno = EINVAL;
+		return NULL;
+	}
+
+	/* Check that pg_num and pg_shift parameters are valid. */
+	if (pg_num < RTE_DIM(mp->elt_pa) || pg_shift > MEMPOOL_PG_SHIFT_MAX) {
+		rte_errno = EINVAL;
+		return NULL;
+	}
+
+	/* "no cache align" implies "no spread" */
+	if (flags & MEMPOOL_F_NO_CACHE_ALIGN)
+		flags |= MEMPOOL_F_NO_SPREAD;
+
+	/* ring flags */
+	if (flags & MEMPOOL_F_SP_PUT)
+		rg_flags |= RING_F_SP_ENQ;
+	if (flags & MEMPOOL_F_SC_GET)
+		rg_flags |= RING_F_SC_DEQ;
+
+	/* calculate mempool object sizes. */
+	if (!rte_mempool_calc_obj_size(elt_size, flags, &objsz)) {
+		rte_errno = EINVAL;
+		return NULL;
+	}
+
+	rte_rwlock_write_lock(RTE_EAL_MEMPOOL_RWLOCK);
+
+	/* allocate the ring that will be used to store objects */
+	/* Ring functions will return appropriate errors if we are
+	 * running as a secondary process etc., so no checks made
+	 * in this function for that condition */
+	snprintf(rg_name, sizeof(rg_name), RTE_MEMPOOL_MZ_FORMAT, name);
+	r = rte_ring_create(rg_name, rte_align32pow2(n+1), socket_id, rg_flags);
+	if (r == NULL)
+		goto exit;
+
+	/*
+	 * reserve a memory zone for this mempool: private data is
+	 * cache-aligned
+	 */
+	private_data_size = (private_data_size +
+			     RTE_CACHE_LINE_MASK) & (~RTE_CACHE_LINE_MASK);
+
+	if (! rte_eal_has_hugepages()) {
+		/*
+		 * expand private data size to a whole page, so that the
+		 * first pool element will start on a new standard page
+		 */
+		int head = sizeof(struct rte_mempool);
+		int new_size = (private_data_size + head) % page_size;
+		if (new_size) {
+			private_data_size += page_size - new_size;
+		}
+	}
+
+	/* try to allocate tailq entry */
+	te = rte_zmalloc("MEMPOOL_TAILQ_ENTRY", sizeof(*te), 0);
+	if (te == NULL) {
+		RTE_LOG(ERR, MEMPOOL, "Cannot allocate tailq entry!\n");
+		goto exit;
+	}
+
+	/*
+	 * If user provided an external memory buffer, then use it to
+	 * store mempool objects. Otherwise reserve memzone big enough to
+	 * hold mempool header and metadata plus mempool objects.
+	 */
+	mempool_size = MEMPOOL_HEADER_SIZE(mp, pg_num) + private_data_size;
+	if (vaddr == NULL)
+		mempool_size += (size_t)objsz.total_size * n;
+
+	if (! rte_eal_has_hugepages()) {
+		/*
+		 * we want the memory pool to start on a page boundary,
+		 * because pool elements crossing page boundaries would
+		 * result in discontiguous physical addresses
+		 */
+		mempool_size += page_size;
+	}
+
+	snprintf(mz_name, sizeof(mz_name), RTE_MEMPOOL_MZ_FORMAT, name);
+
+	mz = rte_memzone_reserve(mz_name, mempool_size, socket_id, mz_flags);
+
+	/*
+	 * no more memory: in this case we lose the previously reserved
+	 * space for the ring, as we cannot free it
+	 */
+	if (mz == NULL) {
+		rte_free(te);
+		goto exit;
+	}
+
+	if (rte_eal_has_hugepages()) {
+		startaddr = (void*)mz->addr;
+	} else {
+		/* align memory pool start address on a page boundary */
+		unsigned long addr = (unsigned long)mz->addr;
+		if (addr & (page_size - 1)) {
+			addr += page_size;
+			addr &= ~(page_size - 1);
+		}
+		startaddr = (void*)addr;
+	}
+
+	/* init the mempool structure */
+	mp = startaddr;
+	memset(mp, 0, sizeof(*mp));
+	snprintf(mp->name, sizeof(mp->name), "%s", name);
+	mp->phys_addr = mz->phys_addr;
+	mp->ring = r;
+	mp->size = n;
+	mp->flags = flags;
+	mp->elt_size = objsz.elt_size;
+	mp->header_size = objsz.header_size;
+	mp->trailer_size = objsz.trailer_size;
+	mp->cache_size = cache_size;
+	mp->cache_flushthresh = (uint32_t)
+		(cache_size * CACHE_FLUSHTHRESH_MULTIPLIER);
+	mp->private_data_size = private_data_size;
+
+	/* calculate address of the first element for a contiguous mempool. */
+	obj = (char *)mp + MEMPOOL_HEADER_SIZE(mp, pg_num) +
+		private_data_size;
+
+	/* populate address translation fields. */
+	mp->pg_num = pg_num;
+	mp->pg_shift = pg_shift;
+	mp->pg_mask = RTE_LEN2MASK(mp->pg_shift, typeof(mp->pg_mask));
+
+	/* mempool elements allocated together with mempool */
+	if (vaddr == NULL) {
+		mp->elt_va_start = (uintptr_t)obj;
+		mp->elt_pa[0] = mp->phys_addr +
+			(mp->elt_va_start - (uintptr_t)mp);
+
+	/* mempool elements in a separate chunk of memory. */
+	} else {
+		mp->elt_va_start = (uintptr_t)vaddr;
+		memcpy(mp->elt_pa, paddr, sizeof (mp->elt_pa[0]) * pg_num);
+	}
+
+	mp->elt_va_end = mp->elt_va_start;
+
+	/* call the initializer */
+	if (mp_init)
+		mp_init(mp, mp_init_arg);
+
+	mempool_populate(mp, n, 1, obj_init, obj_init_arg);
+
+	te->data = (void *) mp;
+
+	RTE_EAL_TAILQ_INSERT_TAIL(RTE_TAILQ_MEMPOOL, rte_mempool_list, te);
+
+exit:
+	rte_rwlock_write_unlock(RTE_EAL_MEMPOOL_RWLOCK);
+
+	return mp;
+}
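The two rounding steps above (cache-line alignment of private_data_size, then padding to a page boundary when hugepages are absent) can be sketched in plain C; the 64-byte cache line and all helper names here are assumptions for illustration, not DPDK definitions:

```c
#include <stddef.h>

/* Assumed stand-ins for RTE_CACHE_LINE_SIZE / RTE_CACHE_LINE_MASK. */
#define CACHE_LINE_SIZE 64u
#define CACHE_LINE_MASK (CACHE_LINE_SIZE - 1)

/* Round a size up to the next cache-line multiple, as done for
 * private_data_size at the top of the function. */
static size_t align_to_cache_line(size_t sz)
{
	return (sz + CACHE_LINE_MASK) & ~(size_t)CACHE_LINE_MASK;
}

/* Without hugepages, grow the private data so that header + private
 * data end exactly on a page boundary, making the first pool element
 * start on a fresh standard page. */
static size_t pad_to_page(size_t header, size_t priv_sz, size_t page_size)
{
	size_t rem = (priv_sz + header) % page_size;

	if (rem)
		priv_sz += page_size - rem;
	return priv_sz;
}
```

Both helpers mirror the arithmetic only; the real code also over-reserves one extra page so the pool start itself can be realigned.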
+
+/* Return the number of entries in the mempool */
+unsigned
+rte_mempool_count(const struct rte_mempool *mp)
+{
+	unsigned count;
+
+	count = rte_ring_count(mp->ring);
+
+#if RTE_MEMPOOL_CACHE_MAX_SIZE > 0
+	{
+		unsigned lcore_id;
+		if (mp->cache_size == 0)
+			return count;
+
+		for (lcore_id = 0; lcore_id < RTE_MAX_LCORE; lcore_id++)
+			count += mp->local_cache[lcore_id].len;
+	}
+#endif
+
+	/*
+	 * due to a race condition (access to len is not locked), the
+	 * total can be greater than size... so fix the result
+	 */
+	if (count > mp->size)
+		return mp->size;
+	return count;
+}
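The clamping in rte_mempool_count() can be isolated into a small plain-C sketch; the helper name and parameters are illustrative stand-ins for mp->ring, mp->local_cache and mp->size. Because per-lcore cache lengths are read without locking, the sum can momentarily exceed the pool size, so the result is capped rather than synchronized:

```c
/* "pool_size" mirrors mp->size; "ring_count" mirrors
 * rte_ring_count(mp->ring); "cache_len[]" mirrors the unlocked
 * per-lcore cache lengths. */
static unsigned clamped_count(unsigned ring_count,
			      const unsigned cache_len[], unsigned n_lcores,
			      unsigned pool_size)
{
	unsigned i, count = ring_count;

	for (i = 0; i < n_lcores; i++)
		count += cache_len[i];

	/* racy sum may overshoot; fix the result as the code above does */
	return count > pool_size ? pool_size : count;
}
```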
+
+/* dump the cache status */
+static unsigned
+rte_mempool_dump_cache(FILE *f, const struct rte_mempool *mp)
+{
+#if RTE_MEMPOOL_CACHE_MAX_SIZE > 0
+	unsigned lcore_id;
+	unsigned count = 0;
+	unsigned cache_count;
+
+	fprintf(f, "  cache infos:\n");
+	fprintf(f, "    cache_size=%"PRIu32"\n", mp->cache_size);
+	for (lcore_id = 0; lcore_id < RTE_MAX_LCORE; lcore_id++) {
+		cache_count = mp->local_cache[lcore_id].len;
+		fprintf(f, "    cache_count[%u]=%u\n", lcore_id, cache_count);
+		count += cache_count;
+	}
+	fprintf(f, "    total_cache_count=%u\n", count);
+	return count;
+#else
+	RTE_SET_USED(mp);
+	fprintf(f, "  cache disabled\n");
+	return 0;
+#endif
+}
+
+#ifdef RTE_LIBRTE_MEMPOOL_DEBUG
+/* check cookies before and after objects */
+#ifndef __INTEL_COMPILER
+#pragma GCC diagnostic ignored "-Wcast-qual"
+#endif
+
+struct mempool_audit_arg {
+	const struct rte_mempool *mp;
+	uintptr_t obj_end;
+	uint32_t obj_num;
+};
+
+static void
+mempool_obj_audit(void *arg, void *start, void *end, uint32_t idx)
+{
+	struct mempool_audit_arg *pa = arg;
+	void *obj;
+
+	obj = (char *)start + pa->mp->header_size;
+	pa->obj_end = (uintptr_t)end;
+	pa->obj_num = idx + 1;
+	__mempool_check_cookies(pa->mp, &obj, 1, 2);
+}
+
+static void
+mempool_audit_cookies(const struct rte_mempool *mp)
+{
+	uint32_t elt_sz, num;
+	struct mempool_audit_arg arg;
+
+	elt_sz = mp->elt_size + mp->header_size + mp->trailer_size;
+
+	arg.mp = mp;
+	arg.obj_end = mp->elt_va_start;
+	arg.obj_num = 0;
+
+	num = rte_mempool_obj_iter((void *)mp->elt_va_start,
+		mp->size, elt_sz, 1,
+		mp->elt_pa, mp->pg_num, mp->pg_shift,
+		mempool_obj_audit, &arg);
+
+	if (num != mp->size) {
+		rte_panic("rte_mempool_obj_iter(mempool=%p, size=%u) "
+			"iterated only over %u elements\n",
+			mp, mp->size, num);
+	} else if (arg.obj_end != mp->elt_va_end || arg.obj_num != mp->size) {
+		rte_panic("rte_mempool_obj_iter(mempool=%p, size=%u) "
+			"last callback va_end: %#tx (%#tx expected), "
+			"num of objects: %u (%u expected)\n",
+			mp, mp->size,
+			arg.obj_end, mp->elt_va_end,
+			arg.obj_num, mp->size);
+	}
+}
+
+#ifndef __INTEL_COMPILER
+#pragma GCC diagnostic error "-Wcast-qual"
+#endif
+#else
+#define mempool_audit_cookies(mp) do {} while(0)
+#endif
+
+#if RTE_MEMPOOL_CACHE_MAX_SIZE > 0
+/* check cookies before and after objects */
+static void
+mempool_audit_cache(const struct rte_mempool *mp)
+{
+	/* check cache size consistency */
+	unsigned lcore_id;
+	for (lcore_id = 0; lcore_id < RTE_MAX_LCORE; lcore_id++) {
+		if (mp->local_cache[lcore_id].len > mp->cache_flushthresh) {
+			RTE_LOG(CRIT, MEMPOOL, "badness on cache[%u]\n",
+				lcore_id);
+			rte_panic("MEMPOOL: invalid cache len\n");
+		}
+	}
+}
+#else
+#define mempool_audit_cache(mp) do {} while(0)
+#endif
+
+
+/* check the consistency of mempool (size, cookies, ...) */
+void
+rte_mempool_audit(const struct rte_mempool *mp)
+{
+	mempool_audit_cache(mp);
+	mempool_audit_cookies(mp);
+
+	/* For the case where mempool DEBUG is not set, and cache size is 0 */
+	RTE_SET_USED(mp);
+}
+
+/* dump the status of the mempool on the console */
+void
+rte_mempool_dump(FILE *f, const struct rte_mempool *mp)
+{
+#ifdef RTE_LIBRTE_MEMPOOL_DEBUG
+	struct rte_mempool_debug_stats sum;
+	unsigned lcore_id;
+#endif
+	unsigned common_count;
+	unsigned cache_count;
+
+	RTE_VERIFY(f != NULL);
+	RTE_VERIFY(mp != NULL);
+
+	fprintf(f, "mempool <%s>@%p\n", mp->name, mp);
+	fprintf(f, "  flags=%x\n", mp->flags);
+	fprintf(f, "  ring=<%s>@%p\n", mp->ring->name, mp->ring);
+	fprintf(f, "  phys_addr=0x%" PRIx64 "\n", mp->phys_addr);
+	fprintf(f, "  size=%"PRIu32"\n", mp->size);
+	fprintf(f, "  header_size=%"PRIu32"\n", mp->header_size);
+	fprintf(f, "  elt_size=%"PRIu32"\n", mp->elt_size);
+	fprintf(f, "  trailer_size=%"PRIu32"\n", mp->trailer_size);
+	fprintf(f, "  total_obj_size=%"PRIu32"\n",
+	       mp->header_size + mp->elt_size + mp->trailer_size);
+
+	fprintf(f, "  private_data_size=%"PRIu32"\n", mp->private_data_size);
+	fprintf(f, "  pg_num=%"PRIu32"\n", mp->pg_num);
+	fprintf(f, "  pg_shift=%"PRIu32"\n", mp->pg_shift);
+	fprintf(f, "  pg_mask=%#tx\n", mp->pg_mask);
+	fprintf(f, "  elt_va_start=%#tx\n", mp->elt_va_start);
+	fprintf(f, "  elt_va_end=%#tx\n", mp->elt_va_end);
+	fprintf(f, "  elt_pa[0]=0x%" PRIx64 "\n", mp->elt_pa[0]);
+
+	if (mp->size != 0)
+		fprintf(f, "  avg bytes/object=%#Lf\n",
+			(long double)(mp->elt_va_end - mp->elt_va_start) /
+			mp->size);
+
+	cache_count = rte_mempool_dump_cache(f, mp);
+	common_count = rte_ring_count(mp->ring);
+	if ((cache_count + common_count) > mp->size)
+		common_count = mp->size - cache_count;
+	fprintf(f, "  common_pool_count=%u\n", common_count);
+
+	/* sum and dump statistics */
+#ifdef RTE_LIBRTE_MEMPOOL_DEBUG
+	memset(&sum, 0, sizeof(sum));
+	for (lcore_id = 0; lcore_id < RTE_MAX_LCORE; lcore_id++) {
+		sum.put_bulk += mp->stats[lcore_id].put_bulk;
+		sum.put_objs += mp->stats[lcore_id].put_objs;
+		sum.get_success_bulk += mp->stats[lcore_id].get_success_bulk;
+		sum.get_success_objs += mp->stats[lcore_id].get_success_objs;
+		sum.get_fail_bulk += mp->stats[lcore_id].get_fail_bulk;
+		sum.get_fail_objs += mp->stats[lcore_id].get_fail_objs;
+	}
+	fprintf(f, "  stats:\n");
+	fprintf(f, "    put_bulk=%"PRIu64"\n", sum.put_bulk);
+	fprintf(f, "    put_objs=%"PRIu64"\n", sum.put_objs);
+	fprintf(f, "    get_success_bulk=%"PRIu64"\n", sum.get_success_bulk);
+	fprintf(f, "    get_success_objs=%"PRIu64"\n", sum.get_success_objs);
+	fprintf(f, "    get_fail_bulk=%"PRIu64"\n", sum.get_fail_bulk);
+	fprintf(f, "    get_fail_objs=%"PRIu64"\n", sum.get_fail_objs);
+#else
+	fprintf(f, "  no statistics available\n");
+#endif
+
+	rte_mempool_audit(mp);
+}
+
+/* dump the status of all mempools on the console */
+void
+rte_mempool_list_dump(FILE *f)
+{
+	const struct rte_mempool *mp = NULL;
+	struct rte_tailq_entry *te;
+	struct rte_mempool_list *mempool_list;
+
+	if ((mempool_list =
+	     RTE_TAILQ_LOOKUP_BY_IDX(RTE_TAILQ_MEMPOOL, rte_mempool_list)) == NULL) {
+		rte_errno = E_RTE_NO_TAILQ;
+		return;
+	}
+
+	rte_rwlock_read_lock(RTE_EAL_MEMPOOL_RWLOCK);
+
+	TAILQ_FOREACH(te, mempool_list, next) {
+		mp = (struct rte_mempool *) te->data;
+		rte_mempool_dump(f, mp);
+	}
+
+	rte_rwlock_read_unlock(RTE_EAL_MEMPOOL_RWLOCK);
+}
+
+/* search a mempool from its name */
+struct rte_mempool *
+rte_mempool_lookup(const char *name)
+{
+	struct rte_mempool *mp = NULL;
+	struct rte_tailq_entry *te;
+	struct rte_mempool_list *mempool_list;
+
+	if ((mempool_list =
+	     RTE_TAILQ_LOOKUP_BY_IDX(RTE_TAILQ_MEMPOOL, rte_mempool_list)) == NULL) {
+		rte_errno = E_RTE_NO_TAILQ;
+		return NULL;
+	}
+
+	rte_rwlock_read_lock(RTE_EAL_MEMPOOL_RWLOCK);
+
+	TAILQ_FOREACH(te, mempool_list, next) {
+		mp = (struct rte_mempool *) te->data;
+		if (strncmp(name, mp->name, RTE_MEMPOOL_NAMESIZE) == 0)
+			break;
+	}
+
+	rte_rwlock_read_unlock(RTE_EAL_MEMPOOL_RWLOCK);
+
+	if (te == NULL) {
+		rte_errno = ENOENT;
+		return NULL;
+	}
+
+	return mp;
+}
+
+void rte_mempool_walk(void (*func)(const struct rte_mempool *, void *),
+		      void *arg)
+{
+	struct rte_tailq_entry *te = NULL;
+	struct rte_mempool_list *mempool_list;
+
+	if ((mempool_list =
+	     RTE_TAILQ_LOOKUP_BY_IDX(RTE_TAILQ_MEMPOOL, rte_mempool_list)) == NULL) {
+		rte_errno = E_RTE_NO_TAILQ;
+		return;
+	}
+
+	rte_rwlock_read_lock(RTE_EAL_MEMPOOL_RWLOCK);
+
+	TAILQ_FOREACH(te, mempool_list, next) {
+		(*func)((struct rte_mempool *) te->data, arg);
+	}
+
+	rte_rwlock_read_unlock(RTE_EAL_MEMPOOL_RWLOCK);
+}
diff --git a/lib/core/librte_mempool/rte_mempool.h b/lib/core/librte_mempool/rte_mempool.h
new file mode 100644
index 0000000..3314651
--- /dev/null
+++ b/lib/core/librte_mempool/rte_mempool.h
@@ -0,0 +1,1392 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2010-2014 Intel Corporation. All rights reserved.
+ *   All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Intel Corporation nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef _RTE_MEMPOOL_H_
+#define _RTE_MEMPOOL_H_
+
+/**
+ * @file
+ * RTE Mempool.
+ *
+ * A memory pool is an allocator of fixed-size objects. It is
+ * identified by its name, and uses a ring to store free objects. It
+ * provides some other optional services, like a per-core object
+ * cache, and an alignment helper to ensure that objects are padded
+ * to spread them equally on all RAM channels, ranks, and so on.
+ *
+ * Objects owned by a mempool should never be added to another
+ * mempool. When an object is freed using rte_mempool_put() or
+ * equivalent, the object data is not modified; the user can save some
+ * meta-data in the object data and retrieve it when allocating a
+ * new object.
+ *
+ * Note: the mempool implementation is not preemptable. An lcore must
+ * not be interrupted by another task that uses the same mempool
+ * (because it uses a ring which is not preemptable). Also, mempool
+ * functions must not be used outside the DPDK environment: for
+ * example, in linuxapp environment, a thread that is not created by
+ * the EAL must not use mempools. This is due to the per-lcore cache
+ * that won't work as rte_lcore_id() will not return a correct value.
+ */
+
+#include <stdio.h>
+#include <stdlib.h>
+#include <stdint.h>
+#include <errno.h>
+#include <inttypes.h>
+#include <sys/queue.h>
+
+#include <rte_log.h>
+#include <rte_debug.h>
+#include <rte_lcore.h>
+#include <rte_memory.h>
+#include <rte_branch_prediction.h>
+#include <rte_ring.h>
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+#define RTE_MEMPOOL_HEADER_COOKIE1  0xbadbadbadadd2e55ULL /**< Header cookie. */
+#define RTE_MEMPOOL_HEADER_COOKIE2  0xf2eef2eedadd2e55ULL /**< Header cookie. */
+#define RTE_MEMPOOL_TRAILER_COOKIE  0xadd2e55badbadbadULL /**< Trailer cookie.*/
+
+#ifdef RTE_LIBRTE_MEMPOOL_DEBUG
+/**
+ * A structure that stores the mempool statistics (per-lcore).
+ */
+struct rte_mempool_debug_stats {
+	uint64_t put_bulk;         /**< Number of puts. */
+	uint64_t put_objs;         /**< Number of objects successfully put. */
+	uint64_t get_success_bulk; /**< Successful allocation number. */
+	uint64_t get_success_objs; /**< Objects successfully allocated. */
+	uint64_t get_fail_bulk;    /**< Failed allocation number. */
+	uint64_t get_fail_objs;    /**< Objects that failed to be allocated. */
+} __rte_cache_aligned;
+#endif
+
+#if RTE_MEMPOOL_CACHE_MAX_SIZE > 0
+/**
+ * A structure that stores a per-core object cache.
+ */
+struct rte_mempool_cache {
+	unsigned len; /**< Cache len */
+	/*
+	 * Cache is allocated to this size to allow it to overflow in certain
+	 * cases to avoid needless emptying of cache.
+	 */
+	void *objs[RTE_MEMPOOL_CACHE_MAX_SIZE * 3]; /**< Cache objects */
+} __rte_cache_aligned;
+#endif /* RTE_MEMPOOL_CACHE_MAX_SIZE > 0 */
+
+struct rte_mempool_objsz {
+	uint32_t elt_size;     /**< Size of an element. */
+	uint32_t header_size;  /**< Size of header (before elt). */
+	uint32_t trailer_size; /**< Size of trailer (after elt). */
+	uint32_t total_size;
+	/**< Total size of an object (header + elt + trailer). */
+};
+
+#define RTE_MEMPOOL_NAMESIZE 32 /**< Maximum length of a memory pool name. */
+#define RTE_MEMPOOL_MZ_PREFIX "MP_"
+
+/* "MP_<name>" */
+#define	RTE_MEMPOOL_MZ_FORMAT	RTE_MEMPOOL_MZ_PREFIX "%s"
+
+#ifdef RTE_LIBRTE_XEN_DOM0
+
+/* "<name>_MP_elt" */
+#define	RTE_MEMPOOL_OBJ_NAME	"%s_" RTE_MEMPOOL_MZ_PREFIX "elt"
+
+#else
+
+#define	RTE_MEMPOOL_OBJ_NAME	RTE_MEMPOOL_MZ_FORMAT
+
+#endif /* RTE_LIBRTE_XEN_DOM0 */
+
+#define	MEMPOOL_PG_SHIFT_MAX	(sizeof(uintptr_t) * CHAR_BIT - 1)
+
+/** Mempool over one chunk of physically contiguous memory */
+#define	MEMPOOL_PG_NUM_DEFAULT	1
+
+/**
+ * The RTE mempool structure.
+ */
+struct rte_mempool {
+	char name[RTE_MEMPOOL_NAMESIZE]; /**< Name of mempool. */
+	struct rte_ring *ring;           /**< Ring to store objects. */
+	phys_addr_t phys_addr;           /**< Phys. addr. of mempool struct. */
+	int flags;                       /**< Flags of the mempool. */
+	uint32_t size;                   /**< Size of the mempool. */
+	uint32_t cache_size;             /**< Size of per-lcore local cache. */
+	uint32_t cache_flushthresh;
+	/**< Threshold before we flush excess elements. */
+
+	uint32_t elt_size;               /**< Size of an element. */
+	uint32_t header_size;            /**< Size of header (before elt). */
+	uint32_t trailer_size;           /**< Size of trailer (after elt). */
+
+	unsigned private_data_size;      /**< Size of private data. */
+
+#if RTE_MEMPOOL_CACHE_MAX_SIZE > 0
+	/** Per-lcore local cache. */
+	struct rte_mempool_cache local_cache[RTE_MAX_LCORE];
+#endif
+
+#ifdef RTE_LIBRTE_MEMPOOL_DEBUG
+	/** Per-lcore statistics. */
+	struct rte_mempool_debug_stats stats[RTE_MAX_LCORE];
+#endif
+
+	/* Address translation support, starts from next cache line. */
+
+	/** Number of elements in the elt_pa array. */
+	uint32_t    pg_num __rte_cache_aligned;
+	uint32_t    pg_shift;     /**< LOG2 of the physical page size. */
+	uintptr_t   pg_mask;      /**< physical page mask value. */
+	uintptr_t   elt_va_start;
+	/**< Virtual address of the first mempool object. */
+	uintptr_t   elt_va_end;
+	/**< Virtual address of the <size + 1> mempool object. */
+	phys_addr_t elt_pa[MEMPOOL_PG_NUM_DEFAULT];
+	/**< Array of physical pages addresses for the mempool objects buffer. */
+
+}  __rte_cache_aligned;
+
+#define MEMPOOL_F_NO_SPREAD      0x0001 /**< Do not spread in memory. */
+#define MEMPOOL_F_NO_CACHE_ALIGN 0x0002 /**< Do not align objs on cache lines.*/
+#define MEMPOOL_F_SP_PUT         0x0004 /**< Default put is "single-producer".*/
+#define MEMPOOL_F_SC_GET         0x0008 /**< Default get is "single-consumer".*/
+
+/**
+ * @internal When debug is enabled, store some statistics.
+ * @param mp
+ *   Pointer to the memory pool.
+ * @param name
+ *   Name of the statistics field to increment in the memory pool.
+ * @param n
+ *   Number of objects to add to the statistics.
+ */
+#ifdef RTE_LIBRTE_MEMPOOL_DEBUG
+#define __MEMPOOL_STAT_ADD(mp, name, n) do {			\
+		unsigned __lcore_id = rte_lcore_id();		\
+		mp->stats[__lcore_id].name##_objs += n;		\
+		mp->stats[__lcore_id].name##_bulk += 1;		\
+	} while(0)
+#else
+#define __MEMPOOL_STAT_ADD(mp, name, n) do {} while(0)
+#endif
+
+/**
+ * Calculates size of the mempool header.
+ * @param mp
+ *   Pointer to the memory pool.
+ * @param pgn
+ *   Number of pages used to store mempool objects.
+ */
+#define	MEMPOOL_HEADER_SIZE(mp, pgn)	(sizeof(*(mp)) + \
+	RTE_ALIGN_CEIL(((pgn) - RTE_DIM((mp)->elt_pa)) * \
+	sizeof ((mp)->elt_pa[0]), RTE_CACHE_LINE_SIZE))
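The macro's arithmetic can be checked with a small stand-alone sketch; the 64-byte cache line, the single built-in elt_pa[] slot (MEMPOOL_PG_NUM_DEFAULT) and the 8-byte physical address are assumptions for illustration. Page-address slots beyond the built-in one are appended after the struct, rounded up to a cache line:

```c
#include <stddef.h>
#include <stdint.h>

#define SKETCH_CACHE_LINE 64u	/* assumed RTE_CACHE_LINE_SIZE */
#define SKETCH_BUILTIN_PG 1u	/* MEMPOOL_PG_NUM_DEFAULT */

/* RTE_ALIGN_CEIL equivalent for power-of-two and non-power alignments. */
static size_t align_ceil(size_t v, size_t align)
{
	return (v + align - 1) / align * align;
}

/* Plain-C version of MEMPOOL_HEADER_SIZE(): struct size plus the
 * cache-line-aligned array of extra page addresses. */
static size_t header_size(size_t struct_size, uint32_t pg_num)
{
	return struct_size +
		align_ceil((pg_num - SKETCH_BUILTIN_PG) * sizeof(uint64_t),
			   SKETCH_CACHE_LINE);
}
```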
+
+/**
+ * Returns TRUE if whole mempool is allocated in one contiguous block of memory.
+ */
+#define	MEMPOOL_IS_CONTIG(mp)                      \
+	((mp)->pg_num == MEMPOOL_PG_NUM_DEFAULT && \
+	(mp)->phys_addr == (mp)->elt_pa[0])
+
+/**
+ * @internal Get a pointer to a mempool pointer in the object header.
+ * @param obj
+ *   Pointer to object.
+ * @return
+ *   The pointer to the mempool from which the object was allocated.
+ */
+static inline struct rte_mempool **__mempool_from_obj(void *obj)
+{
+	struct rte_mempool **mpp;
+	unsigned off;
+
+	off = sizeof(struct rte_mempool *);
+#ifdef RTE_LIBRTE_MEMPOOL_DEBUG
+	off += sizeof(uint64_t);
+#endif
+	mpp = (struct rte_mempool **)((char *)obj - off);
+	return mpp;
+}
+
+/**
+ * Return a pointer to the mempool owning this object.
+ *
+ * @param obj
+ *   An object that is owned by a pool. If this is not the case,
+ *   the behavior is undefined.
+ * @return
+ *   A pointer to the mempool structure.
+ */
+static inline const struct rte_mempool *rte_mempool_from_obj(void *obj)
+{
+	struct rte_mempool * const *mpp;
+	mpp = __mempool_from_obj(obj);
+	return *mpp;
+}
+
+#ifdef RTE_LIBRTE_MEMPOOL_DEBUG
+/* get header cookie value */
+static inline uint64_t __mempool_read_header_cookie(const void *obj)
+{
+	return *(const uint64_t *)((const char *)obj - sizeof(uint64_t));
+}
+
+/* get trailer cookie value */
+static inline uint64_t __mempool_read_trailer_cookie(void *obj)
+{
+	struct rte_mempool **mpp = __mempool_from_obj(obj);
+	return *(uint64_t *)((char *)obj + (*mpp)->elt_size);
+}
+
+/* write header cookie value */
+static inline void __mempool_write_header_cookie(void *obj, int free)
+{
+	uint64_t *cookie_p;
+	cookie_p = (uint64_t *)((char *)obj - sizeof(uint64_t));
+	if (free == 0)
+		*cookie_p = RTE_MEMPOOL_HEADER_COOKIE1;
+	else
+		*cookie_p = RTE_MEMPOOL_HEADER_COOKIE2;
+
+}
+
+/* write trailer cookie value */
+static inline void __mempool_write_trailer_cookie(void *obj)
+{
+	uint64_t *cookie_p;
+	struct rte_mempool **mpp = __mempool_from_obj(obj);
+	cookie_p = (uint64_t *)((char *)obj + (*mpp)->elt_size);
+	*cookie_p = RTE_MEMPOOL_TRAILER_COOKIE;
+}
+#endif /* RTE_LIBRTE_MEMPOOL_DEBUG */
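The cookie layout these helpers guard, one 64-bit header cookie immediately before the element and one 64-bit trailer cookie immediately after it, can be sketched without the mempool machinery. The helper names and the stack buffer are invented for illustration; the cookie values follow the RTE_MEMPOOL_*_COOKIE definitions above:

```c
#include <stdint.h>
#include <string.h>

#define HDR_COOKIE_FREE 0xbadbadbadadd2e55ULL	/* header cookie 1 */
#define TRL_COOKIE      0xadd2e55badbadbadULL	/* trailer cookie */

/* Stamp the 8 bytes before and after an element; memcpy avoids any
 * alignment assumptions about obj. */
static void write_cookies(void *obj, size_t elt_size)
{
	uint64_t hdr = HDR_COOKIE_FREE, trl = TRL_COOKIE;

	memcpy((char *)obj - sizeof(uint64_t), &hdr, sizeof(hdr));
	memcpy((char *)obj + elt_size, &trl, sizeof(trl));
}

/* Audit both cookies, as __mempool_check_cookies() does before panicking. */
static int cookies_ok(const void *obj, size_t elt_size)
{
	uint64_t hdr, trl;

	memcpy(&hdr, (const char *)obj - sizeof(uint64_t), sizeof(hdr));
	memcpy(&trl, (const char *)obj + elt_size, sizeof(trl));
	return hdr == HDR_COOKIE_FREE && trl == TRL_COOKIE;
}
```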
+
+/**
+ * @internal Check and update cookies or panic.
+ *
+ * @param mp
+ *   Pointer to the memory pool.
+ * @param obj_table_const
+ *   Pointer to a table of void * pointers (objects).
+ * @param n
+ *   Index of object in object table.
+ * @param free
+ *   - 0: object is supposed to be allocated, mark it as free
+ *   - 1: object is supposed to be free, mark it as allocated
+ *   - 2: just check that cookie is valid (free or allocated)
+ */
+#ifdef RTE_LIBRTE_MEMPOOL_DEBUG
+#ifndef __INTEL_COMPILER
+#pragma GCC diagnostic ignored "-Wcast-qual"
+#endif
+static inline void __mempool_check_cookies(const struct rte_mempool *mp,
+					   void * const *obj_table_const,
+					   unsigned n, int free)
+{
+	uint64_t cookie;
+	void *tmp;
+	void *obj;
+	void **obj_table;
+
+	/* Force to drop the "const" attribute. This is done only when
+	 * DEBUG is enabled */
+	tmp = (void *) obj_table_const;
+	obj_table = (void **) tmp;
+
+	while (n--) {
+		obj = obj_table[n];
+
+		if (rte_mempool_from_obj(obj) != mp)
+			rte_panic("MEMPOOL: object is owned by another "
+				  "mempool\n");
+
+		cookie = __mempool_read_header_cookie(obj);
+
+		if (free == 0) {
+			if (cookie != RTE_MEMPOOL_HEADER_COOKIE1) {
+				rte_log_set_history(0);
+				RTE_LOG(CRIT, MEMPOOL,
+					"obj=%p, mempool=%p, cookie=%"PRIx64"\n",
+					obj, mp, cookie);
+				rte_panic("MEMPOOL: bad header cookie (put)\n");
+			}
+			__mempool_write_header_cookie(obj, 1);
+		}
+		else if (free == 1) {
+			if (cookie != RTE_MEMPOOL_HEADER_COOKIE2) {
+				rte_log_set_history(0);
+				RTE_LOG(CRIT, MEMPOOL,
+					"obj=%p, mempool=%p, cookie=%"PRIx64"\n",
+					obj, mp, cookie);
+				rte_panic("MEMPOOL: bad header cookie (get)\n");
+			}
+			__mempool_write_header_cookie(obj, 0);
+		}
+		else if (free == 2) {
+			if (cookie != RTE_MEMPOOL_HEADER_COOKIE1 &&
+			    cookie != RTE_MEMPOOL_HEADER_COOKIE2) {
+				rte_log_set_history(0);
+				RTE_LOG(CRIT, MEMPOOL,
+					"obj=%p, mempool=%p, cookie=%"PRIx64"\n",
+					obj, mp, cookie);
+				rte_panic("MEMPOOL: bad header cookie (audit)\n");
+			}
+		}
+		cookie = __mempool_read_trailer_cookie(obj);
+		if (cookie != RTE_MEMPOOL_TRAILER_COOKIE) {
+			rte_log_set_history(0);
+			RTE_LOG(CRIT, MEMPOOL,
+				"obj=%p, mempool=%p, cookie=%"PRIx64"\n",
+				obj, mp, cookie);
+			rte_panic("MEMPOOL: bad trailer cookie\n");
+		}
+	}
+}
+#ifndef __INTEL_COMPILER
+#pragma GCC diagnostic error "-Wcast-qual"
+#endif
+#else
+#define __mempool_check_cookies(mp, obj_table_const, n, free) do {} while(0)
+#endif /* RTE_LIBRTE_MEMPOOL_DEBUG */
+
+/**
+ * An mempool's object iterator callback function.
+ */
+typedef void (*rte_mempool_obj_iter_t)(void * /*obj_iter_arg*/,
+	void * /*obj_start*/,
+	void * /*obj_end*/,
+	uint32_t /*obj_index */);
+
+/**
+ * Iterates across objects of the given size and alignment in the
+ * provided chunk of memory. The given memory buffer can consist of
+ * disjoint physical pages.
+ * For each object calls the provided callback (if any).
+ * Used to populate mempool, walk through all elements of the mempool,
+ * estimate how many elements of the given size could be created in the given
+ * memory buffer.
+ * @param vaddr
+ *   Virtual address of the memory buffer.
+ * @param elt_num
+ *   Maximum number of objects to iterate through.
+ * @param elt_sz
+ *   Size of each object.
+ * @param align
+ *   Alignment of each object.
+ * @param paddr
+ *   Array of physical addresses of the pages that comprise the given
+ *   memory buffer.
+ * @param pg_num
+ *   Number of elements in the paddr array.
+ * @param pg_shift
+ *   LOG2 of the physical page size.
+ * @param obj_iter
+ *   Object iterator callback function (could be NULL).
+ * @param obj_iter_arg
+ *   User-defined parameter for the object iterator callback function.
+ *
+ * @return
+ *   Number of objects iterated through.
+ */
+uint32_t rte_mempool_obj_iter(void *vaddr,
+	uint32_t elt_num, size_t elt_sz, size_t align,
+	const phys_addr_t paddr[], uint32_t pg_num, uint32_t pg_shift,
+	rte_mempool_obj_iter_t obj_iter, void *obj_iter_arg);
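The iterator contract documented above can be illustrated with a simplified, contiguous-memory sketch. The real rte_mempool_obj_iter() additionally honors the requested alignment and steps over disjoint physical pages; all names below are invented for illustration:

```c
#include <stddef.h>
#include <stdint.h>

/* Same shape as rte_mempool_obj_iter_t: opaque arg, object start/end,
 * object index. */
typedef void (*obj_iter_cb)(void *arg, void *start, void *end, uint32_t idx);

/* Walk at most elt_num objects of elt_sz bytes starting at vaddr,
 * invoking the callback (if any) per object, and return how many
 * objects fit inside total_sz bytes. */
static uint32_t obj_iter_contig(void *vaddr, size_t total_sz,
				uint32_t elt_num, size_t elt_sz,
				obj_iter_cb cb, void *cb_arg)
{
	char *p = vaddr;
	uint32_t i;

	for (i = 0; i < elt_num && (size_t)(i + 1) * elt_sz <= total_sz; i++) {
		if (cb != NULL)
			cb(cb_arg, p + i * elt_sz, p + (i + 1) * elt_sz, i);
	}
	return i;
}

/* Example callback: count invocations. */
static void count_cb(void *arg, void *start, void *end, uint32_t idx)
{
	(void)start; (void)end; (void)idx;
	(*(uint32_t *)arg)++;
}
```

This is the usage pattern mempool_audit_cookies() relies on: the callback sees every element's bounds and index, and the caller cross-checks the returned count against the expected pool size.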
+
+/**
+ * An object constructor callback function for mempool.
+ *
+ * Arguments are the mempool, the opaque pointer given by the user in
+ * rte_mempool_create(), the pointer to the element and the index of
+ * the element in the pool.
+ */
+typedef void (rte_mempool_obj_ctor_t)(struct rte_mempool *, void *,
+				      void *, unsigned);
+
+/**
+ * A mempool constructor callback function.
+ *
+ * Arguments are the mempool and the opaque pointer given by the user in
+ * rte_mempool_create().
+ */
+typedef void (rte_mempool_ctor_t)(struct rte_mempool *, void *);
+
+/**
+ * Creates a new mempool named *name* in memory.
+ *
+ * This function uses ``rte_memzone_reserve()`` to allocate memory. The
+ * pool contains n elements of elt_size. Its size is set to n.
+ * All elements of the mempool are allocated together with the mempool header,
+ * in one physically contiguous chunk of memory.
+ *
+ * @param name
+ *   The name of the mempool.
+ * @param n
+ *   The number of elements in the mempool. The optimum size (in terms of
+ *   memory usage) for a mempool is when n is a power of two minus one:
+ *   n = (2^q - 1).
+ * @param elt_size
+ *   The size of each element.
+ * @param cache_size
+ *   If cache_size is non-zero, the rte_mempool library will try to
+ *   limit the accesses to the common lockless pool, by maintaining a
+ *   per-lcore object cache. This argument must be lower than or equal
+ *   to CONFIG_RTE_MEMPOOL_CACHE_MAX_SIZE. It is advised to choose
+ *   cache_size to have "n modulo cache_size == 0": if this is
+ *   not the case, some elements will always stay in the pool and will
+ *   never be used. The access to the per-lcore table is of course
+ *   faster than the multi-producer/consumer pool. The cache can be
+ *   disabled if the cache_size argument is set to 0; it can be useful to
+ *   avoid losing objects in cache. Note that even if not used, the
+ *   memory space for cache is always reserved in a mempool structure,
+ *   except if CONFIG_RTE_MEMPOOL_CACHE_MAX_SIZE is set to 0.
+ * @param private_data_size
+ *   The size of the private data appended after the mempool
+ *   structure. This is useful for storing some private data after the
+ *   mempool structure, as is done for rte_mbuf_pool for example.
+ * @param mp_init
+ *   A function pointer that is called for initialization of the pool,
+ *   before object initialization. The user can initialize the private
+ *   data in this function if needed. This parameter can be NULL if
+ *   not needed.
+ * @param mp_init_arg
+ *   An opaque pointer to data that can be used in the mempool
+ *   constructor function.
+ * @param obj_init
+ *   A function pointer that is called for each object at
+ *   initialization of the pool. The user can set some meta data in
+ *   objects if needed. This parameter can be NULL if not needed.
+ *   The obj_init() function takes the mempool pointer, the init_arg,
+ *   the object pointer and the object number as parameters.
+ * @param obj_init_arg
+ *   An opaque pointer to data that can be used as an argument for
+ *   each call to the object constructor function.
+ * @param socket_id
+ *   The *socket_id* argument is the socket identifier in the case of
+ *   NUMA. The value can be *SOCKET_ID_ANY* if there is no NUMA
+ *   constraint for the reserved zone.
+ * @param flags
+ *   The *flags* arguments is an OR of following flags:
+ *   - MEMPOOL_F_NO_SPREAD: By default, objects addresses are spread
+ *     between channels in RAM: the pool allocator will add padding
+ *     between objects depending on the hardware configuration. See
+ *     Memory alignment constraints for details. If this flag is set,
+ *     the allocator will just align them to a cache line.
+ *   - MEMPOOL_F_NO_CACHE_ALIGN: By default, the returned objects are
+ *     cache-aligned. This flag removes this constraint, and no
+ *     padding will be present between objects. This flag implies
+ *     MEMPOOL_F_NO_SPREAD.
+ *   - MEMPOOL_F_SP_PUT: If this flag is set, the default behavior
+ *     when using rte_mempool_put() or rte_mempool_put_bulk() is
+ *     "single-producer". Otherwise, it is "multi-producers".
+ *   - MEMPOOL_F_SC_GET: If this flag is set, the default behavior
+ *     when using rte_mempool_get() or rte_mempool_get_bulk() is
+ *     "single-consumer". Otherwise, it is "multi-consumers".
+ * @return
+ *   The pointer to the newly allocated mempool, on success. NULL on error
+ *   with rte_errno set appropriately. Possible rte_errno values include:
+ *    - E_RTE_NO_CONFIG - function could not get pointer to rte_config structure
+ *    - E_RTE_SECONDARY - function was called from a secondary process instance
+ *    - E_RTE_NO_TAILQ - no tailq list could be retrieved for the ring or mempool list
+ *    - EINVAL - cache size provided is too large
+ *    - ENOSPC - the maximum number of memzones has already been allocated
+ *    - EEXIST - a memzone with the same name already exists
+ *    - ENOMEM - no appropriate memory area found in which to create memzone
+ */
+struct rte_mempool *
+rte_mempool_create(const char *name, unsigned n, unsigned elt_size,
+		   unsigned cache_size, unsigned private_data_size,
+		   rte_mempool_ctor_t *mp_init, void *mp_init_arg,
+		   rte_mempool_obj_ctor_t *obj_init, void *obj_init_arg,
+		   int socket_id, unsigned flags);
+
+/**
+ * Creates a new mempool named *name* in memory.
+ *
+ * This function uses ``rte_memzone_reserve()`` to allocate memory. The
+ * pool contains n elements of elt_size. Its size is set to n.
+ * Depending on the input parameters, mempool elements can either be
+ * allocated together with the mempool header, or an externally provided
+ * memory buffer can be used to store the mempool objects. In the latter
+ * case, that external memory buffer can consist of a set of disjoint
+ * physical pages.
+ *
+ * @param name
+ *   The name of the mempool.
+ * @param n
+ *   The number of elements in the mempool. The optimum size (in terms of
+ *   memory usage) for a mempool is when n is a power of two minus one:
+ *   n = (2^q - 1).
+ * @param elt_size
+ *   The size of each element.
+ * @param cache_size
+ *   If cache_size is non-zero, the rte_mempool library will try to
+ *   limit the accesses to the common lockless pool, by maintaining a
+ *   per-lcore object cache. This argument must be lower than or equal
+ *   to CONFIG_RTE_MEMPOOL_CACHE_MAX_SIZE. It is advised to choose
+ *   cache_size to have "n modulo cache_size == 0": if this is
+ *   not the case, some elements will always stay in the pool and will
+ *   never be used. The access to the per-lcore table is of course
+ *   faster than the multi-producer/consumer pool. The cache can be
+ *   disabled if the cache_size argument is set to 0; it can be useful to
+ *   avoid losing objects in cache. Note that even if not used, the
+ *   memory space for cache is always reserved in a mempool structure,
+ *   except if CONFIG_RTE_MEMPOOL_CACHE_MAX_SIZE is set to 0.
+ * @param private_data_size
+ *   The size of the private data appended after the mempool
+ *   structure. This is useful for storing some private data after the
+ *   mempool structure, as is done for rte_mbuf_pool for example.
+ * @param mp_init
+ *   A function pointer that is called for initialization of the pool,
+ *   before object initialization. The user can initialize the private
+ *   data in this function if needed. This parameter can be NULL if
+ *   not needed.
+ * @param mp_init_arg
+ *   An opaque pointer to data that can be used in the mempool
+ *   constructor function.
+ * @param obj_init
+ *   A function pointer that is called for each object at
+ *   initialization of the pool. The user can set some meta data in
+ *   objects if needed. This parameter can be NULL if not needed.
+ *   The obj_init() function takes the mempool pointer, the init_arg,
+ *   the object pointer and the object number as parameters.
+ * @param obj_init_arg
+ *   An opaque pointer to data that can be used as an argument for
+ *   each call to the object constructor function.
+ * @param socket_id
+ *   The *socket_id* argument is the socket identifier in the case of
+ *   NUMA. The value can be *SOCKET_ID_ANY* if there is no NUMA
+ *   constraint for the reserved zone.
+ * @param flags
+ *   The *flags* argument is an OR of the following flags:
+ *   - MEMPOOL_F_NO_SPREAD: By default, object addresses are spread
+ *     between channels in RAM: the pool allocator will add padding
+ *     between objects depending on the hardware configuration. See
+ *     Memory alignment constraints for details. If this flag is set,
+ *     the allocator will just align them to a cache line.
+ *   - MEMPOOL_F_NO_CACHE_ALIGN: By default, the returned objects are
+ *     cache-aligned. This flag removes this constraint, and no
+ *     padding will be present between objects. This flag implies
+ *     MEMPOOL_F_NO_SPREAD.
+ *   - MEMPOOL_F_SP_PUT: If this flag is set, the default behavior
+ *     when using rte_mempool_put() or rte_mempool_put_bulk() is
+ *     "single-producer". Otherwise, it is "multi-producers".
+ *   - MEMPOOL_F_SC_GET: If this flag is set, the default behavior
+ *     when using rte_mempool_get() or rte_mempool_get_bulk() is
+ *     "single-consumer". Otherwise, it is "multi-consumers".
+ * @param vaddr
+ *   Virtual address of the externally allocated memory buffer.
+ *   Will be used to store mempool objects.
+ * @param paddr
+ *   Array of physical addresses of the pages that comprise the given
+ *   memory buffer.
+ * @param pg_num
+ *   Number of elements in the paddr array.
+ * @param pg_shift
+ *   LOG2 of the physical pages size.
+ * @return
+ *   A pointer to the newly allocated mempool on success; NULL on error,
+ *   with rte_errno set appropriately. Possible rte_errno values include:
+ *    - E_RTE_NO_CONFIG - function could not get pointer to rte_config structure
+ *    - E_RTE_SECONDARY - function was called from a secondary process instance
+ *    - E_RTE_NO_TAILQ - no tailq list could be retrieved for the ring or mempool list
+ *    - EINVAL - cache size provided is too large
+ *    - ENOSPC - the maximum number of memzones has already been allocated
+ *    - EEXIST - a memzone with the same name already exists
+ *    - ENOMEM - no appropriate memory area found in which to create memzone
+ */
+struct rte_mempool *
+rte_mempool_xmem_create(const char *name, unsigned n, unsigned elt_size,
+		unsigned cache_size, unsigned private_data_size,
+		rte_mempool_ctor_t *mp_init, void *mp_init_arg,
+		rte_mempool_obj_ctor_t *obj_init, void *obj_init_arg,
+		int socket_id, unsigned flags, void *vaddr,
+		const phys_addr_t paddr[], uint32_t pg_num, uint32_t pg_shift);
+
+#ifdef RTE_LIBRTE_XEN_DOM0
+/**
+ * Creates a new mempool named *name* in memory on Xen Dom0.
+ *
+ * This function uses ``rte_mempool_xmem_create()`` to allocate memory. The
+ * pool contains n elements of elt_size. Its size is set to n.
+ * All elements of the mempool are allocated together with the mempool header,
+ * and the memory buffer can consist of a set of disjoint physical pages.
+ *
+ * @param name
+ *   The name of the mempool.
+ * @param n
+ *   The number of elements in the mempool. The optimum size (in terms of
+ *   memory usage) for a mempool is when n is a power of two minus one:
+ *   n = (2^q - 1).
+ * @param elt_size
+ *   The size of each element.
+ * @param cache_size
+ *   If cache_size is non-zero, the rte_mempool library will try to
+ *   limit the accesses to the common lockless pool by maintaining a
+ *   per-lcore object cache. This argument must be lower than or equal
+ *   to CONFIG_RTE_MEMPOOL_CACHE_MAX_SIZE. It is advised to choose
+ *   cache_size so that "n modulo cache_size == 0": if this is
+ *   not the case, some elements will always stay in the pool and will
+ *   never be used. Access to the per-lcore table is of course
+ *   faster than access to the multi-producer/consumer pool. The cache
+ *   can be disabled by setting the cache_size argument to 0; this can
+ *   be useful to avoid losing objects in the cache. Note that even if
+ *   not used, the memory space for the cache is always reserved in the
+ *   mempool structure, unless CONFIG_RTE_MEMPOOL_CACHE_MAX_SIZE is set
+ *   to 0.
+ * @param private_data_size
+ *   The size of the private data appended after the mempool
+ *   structure. This is useful for storing some private data after the
+ *   mempool structure, as is done for rte_mbuf_pool for example.
+ * @param mp_init
+ *   A function pointer that is called for initialization of the pool,
+ *   before object initialization. The user can initialize the private
+ *   data in this function if needed. This parameter can be NULL if
+ *   not needed.
+ * @param mp_init_arg
+ *   An opaque pointer to data that can be used in the mempool
+ *   constructor function.
+ * @param obj_init
+ *   A function pointer that is called for each object at
+ *   initialization of the pool. The user can set some meta data in
+ *   objects if needed. This parameter can be NULL if not needed.
+ *   The obj_init() function takes the mempool pointer, the init_arg,
+ *   the object pointer and the object number as parameters.
+ * @param obj_init_arg
+ *   An opaque pointer to data that can be used as an argument for
+ *   each call to the object constructor function.
+ * @param socket_id
+ *   The *socket_id* argument is the socket identifier in the case of
+ *   NUMA. The value can be *SOCKET_ID_ANY* if there is no NUMA
+ *   constraint for the reserved zone.
+ * @param flags
+ *   The *flags* argument is an OR of the following flags:
+ *   - MEMPOOL_F_NO_SPREAD: By default, object addresses are spread
+ *     between channels in RAM: the pool allocator will add padding
+ *     between objects depending on the hardware configuration. See
+ *     Memory alignment constraints for details. If this flag is set,
+ *     the allocator will just align them to a cache line.
+ *   - MEMPOOL_F_NO_CACHE_ALIGN: By default, the returned objects are
+ *     cache-aligned. This flag removes this constraint, and no
+ *     padding will be present between objects. This flag implies
+ *     MEMPOOL_F_NO_SPREAD.
+ *   - MEMPOOL_F_SP_PUT: If this flag is set, the default behavior
+ *     when using rte_mempool_put() or rte_mempool_put_bulk() is
+ *     "single-producer". Otherwise, it is "multi-producers".
+ *   - MEMPOOL_F_SC_GET: If this flag is set, the default behavior
+ *     when using rte_mempool_get() or rte_mempool_get_bulk() is
+ *     "single-consumer". Otherwise, it is "multi-consumers".
+ * @return
+ *   A pointer to the newly allocated mempool on success; NULL on error,
+ *   with rte_errno set appropriately. Possible rte_errno values include:
+ *    - E_RTE_NO_CONFIG - function could not get pointer to rte_config structure
+ *    - E_RTE_SECONDARY - function was called from a secondary process instance
+ *    - E_RTE_NO_TAILQ - no tailq list could be retrieved for the ring or mempool list
+ *    - EINVAL - cache size provided is too large
+ *    - ENOSPC - the maximum number of memzones has already been allocated
+ *    - EEXIST - a memzone with the same name already exists
+ *    - ENOMEM - no appropriate memory area found in which to create memzone
+ */
+struct rte_mempool *
+rte_dom0_mempool_create(const char *name, unsigned n, unsigned elt_size,
+		unsigned cache_size, unsigned private_data_size,
+		rte_mempool_ctor_t *mp_init, void *mp_init_arg,
+		rte_mempool_obj_ctor_t *obj_init, void *obj_init_arg,
+		int socket_id, unsigned flags);
+#endif
+
+/**
+ * Dump the status of the mempool to the console.
+ *
+ * @param f
+ *   A pointer to a file for output
+ * @param mp
+ *   A pointer to the mempool structure.
+ */
+void rte_mempool_dump(FILE *f, const struct rte_mempool *mp);
+
+/**
+ * @internal Put several objects back in the mempool; used internally.
+ * @param mp
+ *   A pointer to the mempool structure.
+ * @param obj_table
+ *   A pointer to a table of void * pointers (objects).
+ * @param n
+ *   The number of objects to store back in the mempool, must be strictly
+ *   positive.
+ * @param is_mp
+ *   Mono-producer (0) or multi-producers (1).
+ */
+static inline void __attribute__((always_inline))
+__mempool_put_bulk(struct rte_mempool *mp, void * const *obj_table,
+		    unsigned n, int is_mp)
+{
+#if RTE_MEMPOOL_CACHE_MAX_SIZE > 0
+	struct rte_mempool_cache *cache;
+	uint32_t index;
+	void **cache_objs;
+	unsigned lcore_id = rte_lcore_id();
+	uint32_t cache_size = mp->cache_size;
+	uint32_t flushthresh = mp->cache_flushthresh;
+#endif /* RTE_MEMPOOL_CACHE_MAX_SIZE > 0 */
+
+	/* increment stat now, as adding to the mempool always succeeds */
+	__MEMPOOL_STAT_ADD(mp, put, n);
+
+#if RTE_MEMPOOL_CACHE_MAX_SIZE > 0
+	/* cache is not enabled or single producer */
+	if (unlikely(cache_size == 0 || is_mp == 0))
+		goto ring_enqueue;
+
+	/* Go straight to ring if put would overflow mem allocated for cache */
+	if (unlikely(n > RTE_MEMPOOL_CACHE_MAX_SIZE))
+		goto ring_enqueue;
+
+	cache = &mp->local_cache[lcore_id];
+	cache_objs = &cache->objs[cache->len];
+
+	/*
+	 * The cache follows this algorithm:
+	 *   1. Add the objects to the cache.
+	 *   2. If the cache length crosses the flush threshold, everything
+	 *   above cache_size is flushed to the ring.
+	 */
+
+	/* Add elements back into the cache */
+	for (index = 0; index < n; ++index, obj_table++)
+		cache_objs[index] = *obj_table;
+
+	cache->len += n;
+
+	if (cache->len >= flushthresh) {
+		rte_ring_mp_enqueue_bulk(mp->ring, &cache->objs[cache_size],
+				cache->len - cache_size);
+		cache->len = cache_size;
+	}
+
+	return;
+
+ring_enqueue:
+#endif /* RTE_MEMPOOL_CACHE_MAX_SIZE > 0 */
+
+	/* push remaining objects in ring */
+#ifdef RTE_LIBRTE_MEMPOOL_DEBUG
+	if (is_mp) {
+		if (rte_ring_mp_enqueue_bulk(mp->ring, obj_table, n) < 0)
+			rte_panic("cannot put objects in mempool\n");
+	}
+	else {
+		if (rte_ring_sp_enqueue_bulk(mp->ring, obj_table, n) < 0)
+			rte_panic("cannot put objects in mempool\n");
+	}
+#else
+	if (is_mp)
+		rte_ring_mp_enqueue_bulk(mp->ring, obj_table, n);
+	else
+		rte_ring_sp_enqueue_bulk(mp->ring, obj_table, n);
+#endif
+}
+
+
+/**
+ * Put several objects back in the mempool (multi-producers safe).
+ *
+ * @param mp
+ *   A pointer to the mempool structure.
+ * @param obj_table
+ *   A pointer to a table of void * pointers (objects).
+ * @param n
+ *   The number of objects to add in the mempool from the obj_table.
+ */
+static inline void __attribute__((always_inline))
+rte_mempool_mp_put_bulk(struct rte_mempool *mp, void * const *obj_table,
+			unsigned n)
+{
+	__mempool_check_cookies(mp, obj_table, n, 0);
+	__mempool_put_bulk(mp, obj_table, n, 1);
+}
+
+/**
+ * Put several objects back in the mempool (NOT multi-producers safe).
+ *
+ * @param mp
+ *   A pointer to the mempool structure.
+ * @param obj_table
+ *   A pointer to a table of void * pointers (objects).
+ * @param n
+ *   The number of objects to add in the mempool from obj_table.
+ */
+static inline void
+rte_mempool_sp_put_bulk(struct rte_mempool *mp, void * const *obj_table,
+			unsigned n)
+{
+	__mempool_check_cookies(mp, obj_table, n, 0);
+	__mempool_put_bulk(mp, obj_table, n, 0);
+}
+
+/**
+ * Put several objects back in the mempool.
+ *
+ * This function calls the multi-producer or the single-producer
+ * version depending on the default behavior that was specified at
+ * mempool creation time (see flags).
+ *
+ * @param mp
+ *   A pointer to the mempool structure.
+ * @param obj_table
+ *   A pointer to a table of void * pointers (objects).
+ * @param n
+ *   The number of objects to add in the mempool from obj_table.
+ */
+static inline void __attribute__((always_inline))
+rte_mempool_put_bulk(struct rte_mempool *mp, void * const *obj_table,
+		     unsigned n)
+{
+	__mempool_check_cookies(mp, obj_table, n, 0);
+	__mempool_put_bulk(mp, obj_table, n, !(mp->flags & MEMPOOL_F_SP_PUT));
+}
+
+/**
+ * Put one object in the mempool (multi-producers safe).
+ *
+ * @param mp
+ *   A pointer to the mempool structure.
+ * @param obj
+ *   A pointer to the object to be added.
+ */
+static inline void __attribute__((always_inline))
+rte_mempool_mp_put(struct rte_mempool *mp, void *obj)
+{
+	rte_mempool_mp_put_bulk(mp, &obj, 1);
+}
+
+/**
+ * Put one object back in the mempool (NOT multi-producers safe).
+ *
+ * @param mp
+ *   A pointer to the mempool structure.
+ * @param obj
+ *   A pointer to the object to be added.
+ */
+static inline void __attribute__((always_inline))
+rte_mempool_sp_put(struct rte_mempool *mp, void *obj)
+{
+	rte_mempool_sp_put_bulk(mp, &obj, 1);
+}
+
+/**
+ * Put one object back in the mempool.
+ *
+ * This function calls the multi-producer or the single-producer
+ * version depending on the default behavior that was specified at
+ * mempool creation time (see flags).
+ *
+ * @param mp
+ *   A pointer to the mempool structure.
+ * @param obj
+ *   A pointer to the object to be added.
+ */
+static inline void __attribute__((always_inline))
+rte_mempool_put(struct rte_mempool *mp, void *obj)
+{
+	rte_mempool_put_bulk(mp, &obj, 1);
+}
+
+/**
+ * @internal Get several objects from the mempool; used internally.
+ * @param mp
+ *   A pointer to the mempool structure.
+ * @param obj_table
+ *   A pointer to a table of void * pointers (objects).
+ * @param n
+ *   The number of objects to get, must be strictly positive.
+ * @param is_mc
+ *   Mono-consumer (0) or multi-consumers (1).
+ * @return
+ *   - >=0: Success; number of objects supplied.
+ *   - <0: Error; code of ring dequeue function.
+ */
+static inline int __attribute__((always_inline))
+__mempool_get_bulk(struct rte_mempool *mp, void **obj_table,
+		   unsigned n, int is_mc)
+{
+	int ret;
+#if RTE_MEMPOOL_CACHE_MAX_SIZE > 0
+	struct rte_mempool_cache *cache;
+	uint32_t index, len;
+	void **cache_objs;
+	unsigned lcore_id = rte_lcore_id();
+	uint32_t cache_size = mp->cache_size;
+
+	/* cache is not enabled or single consumer */
+	if (unlikely(cache_size == 0 || is_mc == 0 || n >= cache_size))
+		goto ring_dequeue;
+
+	cache = &mp->local_cache[lcore_id];
+	cache_objs = cache->objs;
+
+	/* Can this be satisfied from the cache? */
+	if (cache->len < n) {
+		/* No. Backfill the cache first, then fill from it. The
+		 * number required is the request plus enough to refill the
+		 * cache. */
+		uint32_t req = n + (cache_size - cache->len);
+
+		ret = rte_ring_mc_dequeue_bulk(mp->ring, &cache->objs[cache->len], req);
+		if (unlikely(ret < 0)) {
+			/*
+			 * In the unlikely case that we are buffer-constrained
+			 * and cannot allocate cache + n objects, go to the
+			 * ring directly. If that fails, we are truly out of
+			 * buffers.
+			 */
+			goto ring_dequeue;
+		}
+
+		cache->len += req;
+	}
+
+	/* Now fill in the response ... */
+	for (index = 0, len = cache->len - 1; index < n; ++index, len--, obj_table++)
+		*obj_table = cache_objs[len];
+
+	cache->len -= n;
+
+	__MEMPOOL_STAT_ADD(mp, get_success, n);
+
+	return 0;
+
+ring_dequeue:
+#endif /* RTE_MEMPOOL_CACHE_MAX_SIZE > 0 */
+
+	/* get remaining objects from ring */
+	if (is_mc)
+		ret = rte_ring_mc_dequeue_bulk(mp->ring, obj_table, n);
+	else
+		ret = rte_ring_sc_dequeue_bulk(mp->ring, obj_table, n);
+
+	if (ret < 0)
+		__MEMPOOL_STAT_ADD(mp, get_fail, n);
+	else
+		__MEMPOOL_STAT_ADD(mp, get_success, n);
+
+	return ret;
+}
+
+/**
+ * Get several objects from the mempool (multi-consumers safe).
+ *
+ * If cache is enabled, objects will be retrieved first from cache,
+ * subsequently from the common pool. Note that it can return -ENOENT when
+ * the local cache and common pool are empty, even if cache from other
+ * lcores are full.
+ *
+ * @param mp
+ *   A pointer to the mempool structure.
+ * @param obj_table
+ *   A pointer to a table of void * pointers (objects) that will be filled.
+ * @param n
+ *   The number of objects to get from mempool to obj_table.
+ * @return
+ *   - 0: Success; objects taken.
+ *   - -ENOENT: Not enough entries in the mempool; no object is retrieved.
+ */
+static inline int __attribute__((always_inline))
+rte_mempool_mc_get_bulk(struct rte_mempool *mp, void **obj_table, unsigned n)
+{
+	int ret;
+	ret = __mempool_get_bulk(mp, obj_table, n, 1);
+	if (ret == 0)
+		__mempool_check_cookies(mp, obj_table, n, 1);
+	return ret;
+}
+
+/**
+ * Get several objects from the mempool (NOT multi-consumers safe).
+ *
+ * If cache is enabled, objects will be retrieved first from cache,
+ * subsequently from the common pool. Note that it can return -ENOENT when
+ * the local cache and common pool are empty, even if cache from other
+ * lcores are full.
+ *
+ * @param mp
+ *   A pointer to the mempool structure.
+ * @param obj_table
+ *   A pointer to a table of void * pointers (objects) that will be filled.
+ * @param n
+ *   The number of objects to get from the mempool to obj_table.
+ * @return
+ *   - 0: Success; objects taken.
+ *   - -ENOENT: Not enough entries in the mempool; no object is
+ *     retrieved.
+ */
+static inline int __attribute__((always_inline))
+rte_mempool_sc_get_bulk(struct rte_mempool *mp, void **obj_table, unsigned n)
+{
+	int ret;
+	ret = __mempool_get_bulk(mp, obj_table, n, 0);
+	if (ret == 0)
+		__mempool_check_cookies(mp, obj_table, n, 1);
+	return ret;
+}
+
+/**
+ * Get several objects from the mempool.
+ *
+ * This function calls the multi-consumers or the single-consumer
+ * version, depending on the default behaviour that was specified at
+ * mempool creation time (see flags).
+ *
+ * If cache is enabled, objects will be retrieved first from cache,
+ * subsequently from the common pool. Note that it can return -ENOENT when
+ * the local cache and common pool are empty, even if cache from other
+ * lcores are full.
+ *
+ * @param mp
+ *   A pointer to the mempool structure.
+ * @param obj_table
+ *   A pointer to a table of void * pointers (objects) that will be filled.
+ * @param n
+ *   The number of objects to get from the mempool to obj_table.
+ * @return
+ *   - 0: Success; objects taken
+ *   - -ENOENT: Not enough entries in the mempool; no object is retrieved.
+ */
+static inline int __attribute__((always_inline))
+rte_mempool_get_bulk(struct rte_mempool *mp, void **obj_table, unsigned n)
+{
+	int ret;
+	ret = __mempool_get_bulk(mp, obj_table, n,
+				 !(mp->flags & MEMPOOL_F_SC_GET));
+	if (ret == 0)
+		__mempool_check_cookies(mp, obj_table, n, 1);
+	return ret;
+}
+
+/**
+ * Get one object from the mempool (multi-consumers safe).
+ *
+ * If cache is enabled, objects will be retrieved first from cache,
+ * subsequently from the common pool. Note that it can return -ENOENT when
+ * the local cache and common pool are empty, even if cache from other
+ * lcores are full.
+ *
+ * @param mp
+ *   A pointer to the mempool structure.
+ * @param obj_p
+ *   A pointer to a void * pointer (object) that will be filled.
+ * @return
+ *   - 0: Success; objects taken.
+ *   - -ENOENT: Not enough entries in the mempool; no object is retrieved.
+ */
+static inline int __attribute__((always_inline))
+rte_mempool_mc_get(struct rte_mempool *mp, void **obj_p)
+{
+	return rte_mempool_mc_get_bulk(mp, obj_p, 1);
+}
+
+/**
+ * Get one object from the mempool (NOT multi-consumers safe).
+ *
+ * If cache is enabled, objects will be retrieved first from cache,
+ * subsequently from the common pool. Note that it can return -ENOENT when
+ * the local cache and common pool are empty, even if cache from other
+ * lcores are full.
+ *
+ * @param mp
+ *   A pointer to the mempool structure.
+ * @param obj_p
+ *   A pointer to a void * pointer (object) that will be filled.
+ * @return
+ *   - 0: Success; objects taken.
+ *   - -ENOENT: Not enough entries in the mempool; no object is retrieved.
+ */
+static inline int __attribute__((always_inline))
+rte_mempool_sc_get(struct rte_mempool *mp, void **obj_p)
+{
+	return rte_mempool_sc_get_bulk(mp, obj_p, 1);
+}
+
+/**
+ * Get one object from the mempool.
+ *
+ * This function calls the multi-consumers or the single-consumer
+ * version, depending on the default behavior that was specified at
+ * mempool creation (see flags).
+ *
+ * If cache is enabled, objects will be retrieved first from cache,
+ * subsequently from the common pool. Note that it can return -ENOENT when
+ * the local cache and common pool are empty, even if cache from other
+ * lcores are full.
+ *
+ * @param mp
+ *   A pointer to the mempool structure.
+ * @param obj_p
+ *   A pointer to a void * pointer (object) that will be filled.
+ * @return
+ *   - 0: Success; objects taken.
+ *   - -ENOENT: Not enough entries in the mempool; no object is retrieved.
+ */
+static inline int __attribute__((always_inline))
+rte_mempool_get(struct rte_mempool *mp, void **obj_p)
+{
+	return rte_mempool_get_bulk(mp, obj_p, 1);
+}
+
+/**
+ * Return the number of entries in the mempool.
+ *
+ * When the cache is enabled, this function has to browse the cache
+ * length of all lcores, so it should not be used in a data path, but
+ * only for debug purposes.
+ *
+ * @param mp
+ *   A pointer to the mempool structure.
+ * @return
+ *   The number of entries in the mempool.
+ */
+unsigned rte_mempool_count(const struct rte_mempool *mp);
+
+/**
+ * Return the number of free entries in the mempool ring,
+ * i.e. how many entries can be freed back to the mempool.
+ *
+ * NOTE: This corresponds to the number of elements *allocated* from the
+ * memory pool, not the number of elements in the pool itself. To count
+ * the number of elements currently available in the pool, use
+ * "rte_mempool_count".
+ *
+ * When the cache is enabled, this function has to browse the cache
+ * length of all lcores, so it should not be used in a data path, but
+ * only for debug purposes.
+ *
+ * @param mp
+ *   A pointer to the mempool structure.
+ * @return
+ *   The number of free entries in the mempool.
+ */
+static inline unsigned
+rte_mempool_free_count(const struct rte_mempool *mp)
+{
+	return mp->size - rte_mempool_count(mp);
+}
+
+/**
+ * Test if the mempool is full.
+ *
+ * When the cache is enabled, this function has to browse the cache
+ * length of all lcores, so it should not be used in a data path, but
+ * only for debug purposes.
+ *
+ * @param mp
+ *   A pointer to the mempool structure.
+ * @return
+ *   - 1: The mempool is full.
+ *   - 0: The mempool is not full.
+ */
+static inline int
+rte_mempool_full(const struct rte_mempool *mp)
+{
+	return !!(rte_mempool_count(mp) == mp->size);
+}
+
+/**
+ * Test if the mempool is empty.
+ *
+ * When the cache is enabled, this function has to browse the cache
+ * length of all lcores, so it should not be used in a data path, but
+ * only for debug purposes.
+ *
+ * @param mp
+ *   A pointer to the mempool structure.
+ * @return
+ *   - 1: The mempool is empty.
+ *   - 0: The mempool is not empty.
+ */
+static inline int
+rte_mempool_empty(const struct rte_mempool *mp)
+{
+	return !!(rte_mempool_count(mp) == 0);
+}
+
+/**
+ * Return the physical address of elt, which is an element of the pool mp.
+ *
+ * @param mp
+ *   A pointer to the mempool structure.
+ * @param elt
+ *   A pointer (virtual address) to the element of the pool.
+ * @return
+ *   The physical address of the elt element.
+ */
+static inline phys_addr_t
+rte_mempool_virt2phy(const struct rte_mempool *mp, const void *elt)
+{
+	if (rte_eal_has_hugepages()) {
+		uintptr_t off;
+
+		off = (const char *)elt - (const char *)mp->elt_va_start;
+		return (mp->elt_pa[off >> mp->pg_shift] + (off & mp->pg_mask));
+	} else {
+		/*
+		 * If huge pages are disabled, we cannot assume the
+		 * memory region to be physically contiguous.
+		 * Lookup for each element.
+		 */
+		return rte_mem_virt2phy(elt);
+	}
+}
+
+/**
+ * Check the consistency of mempool objects.
+ *
+ * Verify the coherency of fields in the mempool structure. Also check
+ * that the cookies of mempool objects (even the ones that are not
+ * present in pool) have a correct value. If not, a panic will occur.
+ *
+ * @param mp
+ *   A pointer to the mempool structure.
+ */
+void rte_mempool_audit(const struct rte_mempool *mp);
+
+/**
+ * Return a pointer to the private data in a mempool structure.
+ *
+ * @param mp
+ *   A pointer to the mempool structure.
+ * @return
+ *   A pointer to the private data.
+ */
+static inline void *rte_mempool_get_priv(struct rte_mempool *mp)
+{
+	return (char *)mp + MEMPOOL_HEADER_SIZE(mp, mp->pg_num);
+}
+
+/**
+ * Dump the status of all mempools on the console
+ *
+ * @param f
+ *   A pointer to a file for output
+ */
+void rte_mempool_list_dump(FILE *f);
+
+/**
+ * Search for a mempool by its name
+ *
+ * @param name
+ *   The name of the mempool.
+ * @return
+ *   A pointer to the mempool matching the name, or NULL if not found,
+ *   with rte_errno set appropriately. Possible rte_errno values include:
+ *    - ENOENT - required entry not available to return.
+ */
+struct rte_mempool *rte_mempool_lookup(const char *name);
+
+/**
+ * Given a desired size of the mempool element and mempool flags,
+ * calculate the header, trailer, body and total sizes of the mempool
+ * object.
+ * @param elt_size
+ *   The size of each element.
+ * @param flags
+ *   The flags used for the mempool creation.
+ *   Consult rte_mempool_create() for more information about possible values.
+ * @param sz
+ *   The calculated detailed size of the mempool object. May be NULL.
+ * @return
+ *   Total size of the mempool object.
+ */
+uint32_t rte_mempool_calc_obj_size(uint32_t elt_size, uint32_t flags,
+	struct rte_mempool_objsz *sz);
+
+/**
+ * Calculate the maximum amount of memory required to store the given
+ * number of objects.
+ * Assumes that the memory buffer will be aligned at a page boundary.
+ * Note that if the object size is bigger than the page size, it is
+ * assumed that we have a subset of physically contiguous pages big
+ * enough to store at least one object.
+ * @param elt_num
+ *   Number of elements.
+ * @param elt_sz
+ *   The size of each element.
+ * @param pg_shift
+ *   LOG2 of the physical pages size.
+ * @return
+ *   Required memory size aligned at page boundary.
+ */
+size_t rte_mempool_xmem_size(uint32_t elt_num, size_t elt_sz,
+	uint32_t pg_shift);
+
+/**
+ * Calculate how much memory would actually be required within the given
+ * memory footprint to store the required number of objects.
+ * @param vaddr
+ *   Virtual address of the externally allocated memory buffer.
+ *   Will be used to store mempool objects.
+ * @param elt_num
+ *   Number of elements.
+ * @param elt_sz
+ *   The size of each element.
+ * @param paddr
+ *   Array of physical addresses of the pages that comprise the given
+ *   memory buffer.
+ * @param pg_num
+ *   Number of elements in the paddr array.
+ * @param pg_shift
+ *   LOG2 of the physical pages size.
+ * @return
+ *   Number of bytes needed to store given number of objects,
+ *   aligned to the given page size.
+ *   If the provided memory buffer is not big enough:
+ *   (-1) * the actual number of elements that can be stored in that buffer.
+ */
+ssize_t rte_mempool_xmem_usage(void *vaddr, uint32_t elt_num, size_t elt_sz,
+	const phys_addr_t paddr[], uint32_t pg_num, uint32_t pg_shift);
+
+/**
+ * Walk the list of all memory pools
+ *
+ * @param func
+ *   Iterator function
+ * @param arg
+ *   Argument passed to iterator
+ */
+void rte_mempool_walk(void (*func)(const struct rte_mempool *, void *arg),
+		      void *arg);
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif /* _RTE_MEMPOOL_H_ */
diff --git a/lib/librte_mempool/Makefile b/lib/librte_mempool/Makefile
deleted file mode 100644
index 9939e10..0000000
--- a/lib/librte_mempool/Makefile
+++ /dev/null
@@ -1,51 +0,0 @@
-#   BSD LICENSE
-#
-#   Copyright(c) 2010-2014 Intel Corporation. All rights reserved.
-#   All rights reserved.
-#
-#   Redistribution and use in source and binary forms, with or without
-#   modification, are permitted provided that the following conditions
-#   are met:
-#
-#     * Redistributions of source code must retain the above copyright
-#       notice, this list of conditions and the following disclaimer.
-#     * Redistributions in binary form must reproduce the above copyright
-#       notice, this list of conditions and the following disclaimer in
-#       the documentation and/or other materials provided with the
-#       distribution.
-#     * Neither the name of Intel Corporation nor the names of its
-#       contributors may be used to endorse or promote products derived
-#       from this software without specific prior written permission.
-#
-#   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
-#   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
-#   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
-#   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
-#   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
-#   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
-#   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
-#   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
-#   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
-#   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
-#   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
-
-include $(RTE_SDK)/mk/rte.vars.mk
-
-# library name
-LIB = librte_mempool.a
-
-CFLAGS += $(WERROR_FLAGS) -I$(SRCDIR) -O3
-
-# all source are stored in SRCS-y
-SRCS-$(CONFIG_RTE_LIBRTE_MEMPOOL) +=  rte_mempool.c
-ifeq ($(CONFIG_RTE_LIBRTE_XEN_DOM0),y)
-SRCS-$(CONFIG_RTE_LIBRTE_MEMPOOL) +=  rte_dom0_mempool.c
-endif
-# install includes
-SYMLINK-$(CONFIG_RTE_LIBRTE_MEMPOOL)-include := rte_mempool.h
-
-# this lib needs eal, rte_ring and rte_malloc
-DEPDIRS-$(CONFIG_RTE_LIBRTE_MEMPOOL) += lib/librte_eal lib/librte_ring
-DEPDIRS-$(CONFIG_RTE_LIBRTE_MEMPOOL) += lib/librte_malloc
-
-include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/lib/librte_mempool/rte_dom0_mempool.c b/lib/librte_mempool/rte_dom0_mempool.c
deleted file mode 100644
index 9ec68fb..0000000
--- a/lib/librte_mempool/rte_dom0_mempool.c
+++ /dev/null
@@ -1,134 +0,0 @@
-/*-
- *   BSD LICENSE
- *
- *   Copyright(c) 2010-2014 Intel Corporation. All rights reserved.
- *   All rights reserved.
- *
- *   Redistribution and use in source and binary forms, with or without
- *   modification, are permitted provided that the following conditions
- *   are met:
- *
- *     * Redistributions of source code must retain the above copyright
- *       notice, this list of conditions and the following disclaimer.
- *     * Redistributions in binary form must reproduce the above copyright
- *       notice, this list of conditions and the following disclaimer in
- *       the documentation and/or other materials provided with the
- *       distribution.
- *     * Neither the name of Intel Corporation nor the names of its
- *       contributors may be used to endorse or promote products derived
- *       from this software without specific prior written permission.
- *
- *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
- *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
- *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
- *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
- *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
- *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
- *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
- *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
- *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
- *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
- *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
- */
-
-#include <stdio.h>
-#include <string.h>
-#include <stdint.h>
-#include <unistd.h>
-#include <stdarg.h>
-#include <inttypes.h>
-#include <errno.h>
-#include <sys/queue.h>
-
-#include <rte_common.h>
-#include <rte_log.h>
-#include <rte_debug.h>
-#include <rte_memory.h>
-#include <rte_memzone.h>
-#include <rte_atomic.h>
-#include <rte_launch.h>
-#include <rte_tailq.h>
-#include <rte_eal.h>
-#include <rte_eal_memconfig.h>
-#include <rte_per_lcore.h>
-#include <rte_lcore.h>
-#include <rte_branch_prediction.h>
-#include <rte_ring.h>
-#include <rte_errno.h>
-#include <rte_string_fns.h>
-#include <rte_spinlock.h>
-
-#include "rte_mempool.h"
-
-static void
-get_phys_map(void *va, phys_addr_t pa[], uint32_t pg_num,
-            uint32_t pg_sz, uint32_t memseg_id)
-{
-    uint32_t i;
-    uint64_t virt_addr, mfn_id;
-    struct rte_mem_config *mcfg;
-    uint32_t page_size = getpagesize();
-
-    /* get pointer to global configuration */
-    mcfg = rte_eal_get_configuration()->mem_config;
-    virt_addr =(uintptr_t) mcfg->memseg[memseg_id].addr;
-
-    for (i = 0; i != pg_num; i++) {
-        mfn_id = ((uintptr_t)va + i * pg_sz - virt_addr) / RTE_PGSIZE_2M;
-        pa[i] = mcfg->memseg[memseg_id].mfn[mfn_id] * page_size;
-    }
-}
-
-/* create the mempool for supporting Dom0 */
-struct rte_mempool *
-rte_dom0_mempool_create(const char *name, unsigned elt_num, unsigned elt_size,
-           unsigned cache_size, unsigned private_data_size,
-           rte_mempool_ctor_t *mp_init, void *mp_init_arg,
-           rte_mempool_obj_ctor_t *obj_init, void *obj_init_arg,
-           int socket_id, unsigned flags)
-{
-	struct rte_mempool *mp = NULL;
-	phys_addr_t *pa;
-	char *va;
-	size_t sz;
-	uint32_t pg_num, pg_shift, pg_sz, total_size;
-	const struct rte_memzone *mz;
-	char mz_name[RTE_MEMZONE_NAMESIZE];
-	int mz_flags = RTE_MEMZONE_1GB|RTE_MEMZONE_SIZE_HINT_ONLY;
-
-	pg_sz = RTE_PGSIZE_2M;
-
-	pg_shift = rte_bsf32(pg_sz);
-	total_size = rte_mempool_calc_obj_size(elt_size, flags, NULL);
-
-	/* calc max memory size and max number of pages needed. */
-	sz = rte_mempool_xmem_size(elt_num, total_size, pg_shift) +
-		RTE_PGSIZE_2M;
-	pg_num = sz >> pg_shift;
-
-	/* extract physical mappings of the allocated memory. */
-	pa = calloc(pg_num, sizeof (*pa));
-	if (pa == NULL)
-		return mp;
-
-	snprintf(mz_name, sizeof(mz_name), RTE_MEMPOOL_OBJ_NAME, name);
-	mz = rte_memzone_reserve(mz_name, sz, socket_id, mz_flags);
-	if (mz == NULL) {
-		free(pa);
-		return mp;
-	}
-
-	va = (char *)RTE_ALIGN_CEIL((uintptr_t)mz->addr, RTE_PGSIZE_2M);
-	/* extract physical mappings of the allocated memory. */
-	get_phys_map(va, pa, pg_num, pg_sz, mz->memseg_id);
-
-	mp = rte_mempool_xmem_create(name, elt_num, elt_size,
-		cache_size, private_data_size,
-		mp_init, mp_init_arg,
-		obj_init, obj_init_arg,
-		socket_id, flags, va, pa, pg_num, pg_shift);
-
-	free(pa);
-
-	return (mp);
-}
diff --git a/lib/librte_mempool/rte_mempool.c b/lib/librte_mempool/rte_mempool.c
deleted file mode 100644
index 4cf6c25..0000000
--- a/lib/librte_mempool/rte_mempool.c
+++ /dev/null
@@ -1,901 +0,0 @@
-/*-
- *   BSD LICENSE
- *
- *   Copyright(c) 2010-2014 Intel Corporation. All rights reserved.
- *   All rights reserved.
- *
- *   Redistribution and use in source and binary forms, with or without
- *   modification, are permitted provided that the following conditions
- *   are met:
- *
- *     * Redistributions of source code must retain the above copyright
- *       notice, this list of conditions and the following disclaimer.
- *     * Redistributions in binary form must reproduce the above copyright
- *       notice, this list of conditions and the following disclaimer in
- *       the documentation and/or other materials provided with the
- *       distribution.
- *     * Neither the name of Intel Corporation nor the names of its
- *       contributors may be used to endorse or promote products derived
- *       from this software without specific prior written permission.
- *
- *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
- *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
- *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
- *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
- *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
- *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
- *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
- *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
- *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
- *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
- *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
- */
-
-#include <stdio.h>
-#include <string.h>
-#include <stdint.h>
-#include <stdarg.h>
-#include <unistd.h>
-#include <inttypes.h>
-#include <errno.h>
-#include <sys/queue.h>
-
-#include <rte_common.h>
-#include <rte_log.h>
-#include <rte_debug.h>
-#include <rte_memory.h>
-#include <rte_memzone.h>
-#include <rte_malloc.h>
-#include <rte_atomic.h>
-#include <rte_launch.h>
-#include <rte_tailq.h>
-#include <rte_eal.h>
-#include <rte_eal_memconfig.h>
-#include <rte_per_lcore.h>
-#include <rte_lcore.h>
-#include <rte_branch_prediction.h>
-#include <rte_ring.h>
-#include <rte_errno.h>
-#include <rte_string_fns.h>
-#include <rte_spinlock.h>
-
-#include "rte_mempool.h"
-
-TAILQ_HEAD(rte_mempool_list, rte_tailq_entry);
-
-#define CACHE_FLUSHTHRESH_MULTIPLIER 1.5
-
-/*
- * return the greatest common divisor between a and b (fast algorithm)
- *
- */
-static unsigned get_gcd(unsigned a, unsigned b)
-{
-	unsigned c;
-
-	if (0 == a)
-		return b;
-	if (0 == b)
-		return a;
-
-	if (a < b) {
-		c = a;
-		a = b;
-		b = c;
-	}
-
-	while (b != 0) {
-		c = a % b;
-		a = b;
-		b = c;
-	}
-
-	return a;
-}
-
-/*
- * Depending on memory configuration, objects addresses are spread
- * between channels and ranks in RAM: the pool allocator will add
- * padding between objects. This function return the new size of the
- * object.
- */
-static unsigned optimize_object_size(unsigned obj_size)
-{
-	unsigned nrank, nchan;
-	unsigned new_obj_size;
-
-	/* get number of channels */
-	nchan = rte_memory_get_nchannel();
-	if (nchan == 0)
-		nchan = 1;
-
-	nrank = rte_memory_get_nrank();
-	if (nrank == 0)
-		nrank = 1;
-
-	/* process new object size */
-	new_obj_size = (obj_size + RTE_CACHE_LINE_MASK) / RTE_CACHE_LINE_SIZE;
-	while (get_gcd(new_obj_size, nrank * nchan) != 1)
-		new_obj_size++;
-	return new_obj_size * RTE_CACHE_LINE_SIZE;
-}
-
-static void
-mempool_add_elem(struct rte_mempool *mp, void *obj, uint32_t obj_idx,
-	rte_mempool_obj_ctor_t *obj_init, void *obj_init_arg)
-{
-	struct rte_mempool **mpp;
-
-	obj = (char *)obj + mp->header_size;
-
-	/* set mempool ptr in header */
-	mpp = __mempool_from_obj(obj);
-	*mpp = mp;
-
-#ifdef RTE_LIBRTE_MEMPOOL_DEBUG
-	__mempool_write_header_cookie(obj, 1);
-	__mempool_write_trailer_cookie(obj);
-#endif
-	/* call the initializer */
-	if (obj_init)
-		obj_init(mp, obj_init_arg, obj, obj_idx);
-
-	/* enqueue in ring */
-	rte_ring_sp_enqueue(mp->ring, obj);
-}
-
-uint32_t
-rte_mempool_obj_iter(void *vaddr, uint32_t elt_num, size_t elt_sz, size_t align,
-	const phys_addr_t paddr[], uint32_t pg_num, uint32_t pg_shift,
-	rte_mempool_obj_iter_t obj_iter, void *obj_iter_arg)
-{
-	uint32_t i, j, k;
-	uint32_t pgn;
-	uintptr_t end, start, va;
-	uintptr_t pg_sz;
-
-	pg_sz = (uintptr_t)1 << pg_shift;
-	va = (uintptr_t)vaddr;
-
-	i = 0;
-	j = 0;
-
-	while (i != elt_num && j != pg_num) {
-
-		start = RTE_ALIGN_CEIL(va, align);
-		end = start + elt_sz;
-
-		pgn = (end >> pg_shift) - (start >> pg_shift);
-		pgn += j;
-
-		/* do we have enough space left for the next element. */
-		if (pgn >= pg_num)
-			break;
-
-		for (k = j;
-				k != pgn &&
-				paddr[k] + pg_sz == paddr[k + 1];
-				k++)
-			;
-
-		/*
-		 * if next pgn chunks of memory physically continuous,
-		 * use it to create next element.
-		 * otherwise, just skip that chunk unused.
-		 */
-		if (k == pgn) {
-			if (obj_iter != NULL)
-				obj_iter(obj_iter_arg, (void *)start,
-					(void *)end, i);
-			va = end;
-			j = pgn;
-			i++;
-		} else {
-			va = RTE_ALIGN_CEIL((va + 1), pg_sz);
-			j++;
-		}
-	}
-
-	return (i);
-}
-
-/*
- * Populate  mempool with the objects.
- */
-
-struct mempool_populate_arg {
-	struct rte_mempool     *mp;
-	rte_mempool_obj_ctor_t *obj_init;
-	void                   *obj_init_arg;
-};
-
-static void
-mempool_obj_populate(void *arg, void *start, void *end, uint32_t idx)
-{
-	struct mempool_populate_arg *pa = arg;
-
-	mempool_add_elem(pa->mp, start, idx, pa->obj_init, pa->obj_init_arg);
-	pa->mp->elt_va_end = (uintptr_t)end;
-}
-
-static void
-mempool_populate(struct rte_mempool *mp, size_t num, size_t align,
-	rte_mempool_obj_ctor_t *obj_init, void *obj_init_arg)
-{
-	uint32_t elt_sz;
-	struct mempool_populate_arg arg;
-
-	elt_sz = mp->elt_size + mp->header_size + mp->trailer_size;
-	arg.mp = mp;
-	arg.obj_init = obj_init;
-	arg.obj_init_arg = obj_init_arg;
-
-	mp->size = rte_mempool_obj_iter((void *)mp->elt_va_start,
-		num, elt_sz, align,
-		mp->elt_pa, mp->pg_num, mp->pg_shift,
-		mempool_obj_populate, &arg);
-}
-
-uint32_t
-rte_mempool_calc_obj_size(uint32_t elt_size, uint32_t flags,
-	struct rte_mempool_objsz *sz)
-{
-	struct rte_mempool_objsz lsz;
-
-	sz = (sz != NULL) ? sz : &lsz;
-
-	/*
-	 * In header, we have at least the pointer to the pool, and
-	 * optionaly a 64 bits cookie.
-	 */
-	sz->header_size = 0;
-	sz->header_size += sizeof(struct rte_mempool *); /* ptr to pool */
-#ifdef RTE_LIBRTE_MEMPOOL_DEBUG
-	sz->header_size += sizeof(uint64_t); /* cookie */
-#endif
-	if ((flags & MEMPOOL_F_NO_CACHE_ALIGN) == 0)
-		sz->header_size = RTE_ALIGN_CEIL(sz->header_size,
-			RTE_CACHE_LINE_SIZE);
-
-	/* trailer contains the cookie in debug mode */
-	sz->trailer_size = 0;
-#ifdef RTE_LIBRTE_MEMPOOL_DEBUG
-	sz->trailer_size += sizeof(uint64_t); /* cookie */
-#endif
-	/* element size is 8 bytes-aligned at least */
-	sz->elt_size = RTE_ALIGN_CEIL(elt_size, sizeof(uint64_t));
-
-	/* expand trailer to next cache line */
-	if ((flags & MEMPOOL_F_NO_CACHE_ALIGN) == 0) {
-		sz->total_size = sz->header_size + sz->elt_size +
-			sz->trailer_size;
-		sz->trailer_size += ((RTE_CACHE_LINE_SIZE -
-				  (sz->total_size & RTE_CACHE_LINE_MASK)) &
-				 RTE_CACHE_LINE_MASK);
-	}
-
-	/*
-	 * increase trailer to add padding between objects in order to
-	 * spread them across memory channels/ranks
-	 */
-	if ((flags & MEMPOOL_F_NO_SPREAD) == 0) {
-		unsigned new_size;
-		new_size = optimize_object_size(sz->header_size + sz->elt_size +
-			sz->trailer_size);
-		sz->trailer_size = new_size - sz->header_size - sz->elt_size;
-	}
-
-	if (! rte_eal_has_hugepages()) {
-		/*
-		 * compute trailer size so that pool elements fit exactly in
-		 * a standard page
-		 */
-		int page_size = getpagesize();
-		int new_size = page_size - sz->header_size - sz->elt_size;
-		if (new_size < 0 || (unsigned int)new_size < sz->trailer_size) {
-			printf("When hugepages are disabled, pool objects "
-			       "can't exceed PAGE_SIZE: %d + %d + %d > %d\n",
-			       sz->header_size, sz->elt_size, sz->trailer_size,
-			       page_size);
-			return 0;
-		}
-		sz->trailer_size = new_size;
-	}
-
-	/* this is the size of an object, including header and trailer */
-	sz->total_size = sz->header_size + sz->elt_size + sz->trailer_size;
-
-	return (sz->total_size);
-}
-
-
-/*
- * Calculate maximum amount of memory required to store given number of objects.
- */
-size_t
-rte_mempool_xmem_size(uint32_t elt_num, size_t elt_sz, uint32_t pg_shift)
-{
-	size_t n, pg_num, pg_sz, sz;
-
-	pg_sz = (size_t)1 << pg_shift;
-
-	if ((n = pg_sz / elt_sz) > 0) {
-		pg_num = (elt_num + n - 1) / n;
-		sz = pg_num << pg_shift;
-	} else {
-		sz = RTE_ALIGN_CEIL(elt_sz, pg_sz) * elt_num;
-	}
-
-	return (sz);
-}
-
-/*
- * Calculate how much memory would be actually required with the
- * given memory footprint to store required number of elements.
- */
-static void
-mempool_lelem_iter(void *arg, __rte_unused void *start, void *end,
-        __rte_unused uint32_t idx)
-{
-        *(uintptr_t *)arg = (uintptr_t)end;
-}
-
-ssize_t
-rte_mempool_xmem_usage(void *vaddr, uint32_t elt_num, size_t elt_sz,
-	const phys_addr_t paddr[], uint32_t pg_num, uint32_t pg_shift)
-{
-	uint32_t n;
-	uintptr_t va, uv;
-	size_t pg_sz, usz;
-
-	pg_sz = (size_t)1 << pg_shift;
-	va = (uintptr_t)vaddr;
-	uv = va;
-
-	if ((n = rte_mempool_obj_iter(vaddr, elt_num, elt_sz, 1,
-			paddr, pg_num, pg_shift, mempool_lelem_iter,
-			&uv)) != elt_num) {
-		return (-n);
-	}
-
-	uv = RTE_ALIGN_CEIL(uv, pg_sz);
-	usz = uv - va;
-	return (usz);
-}
-
-/* create the mempool */
-struct rte_mempool *
-rte_mempool_create(const char *name, unsigned n, unsigned elt_size,
-		   unsigned cache_size, unsigned private_data_size,
-		   rte_mempool_ctor_t *mp_init, void *mp_init_arg,
-		   rte_mempool_obj_ctor_t *obj_init, void *obj_init_arg,
-		   int socket_id, unsigned flags)
-{
-#ifdef RTE_LIBRTE_XEN_DOM0
-	return (rte_dom0_mempool_create(name, n, elt_size,
-		cache_size, private_data_size,
-		mp_init, mp_init_arg,
-		obj_init, obj_init_arg,
-		socket_id, flags));
-#else
-	return (rte_mempool_xmem_create(name, n, elt_size,
-		cache_size, private_data_size,
-		mp_init, mp_init_arg,
-		obj_init, obj_init_arg,
-		socket_id, flags,
-		NULL, NULL, MEMPOOL_PG_NUM_DEFAULT, MEMPOOL_PG_SHIFT_MAX));
-#endif
-}
-
-/*
- * Create the mempool over already allocated chunk of memory.
- * That external memory buffer can consists of physically disjoint pages.
- * Setting vaddr to NULL, makes mempool to fallback to original behaviour
- * and allocate space for mempool and it's elements as one big chunk of
- * physically continuos memory.
- * */
-struct rte_mempool *
-rte_mempool_xmem_create(const char *name, unsigned n, unsigned elt_size,
-		unsigned cache_size, unsigned private_data_size,
-		rte_mempool_ctor_t *mp_init, void *mp_init_arg,
-		rte_mempool_obj_ctor_t *obj_init, void *obj_init_arg,
-		int socket_id, unsigned flags, void *vaddr,
-		const phys_addr_t paddr[], uint32_t pg_num, uint32_t pg_shift)
-{
-	char mz_name[RTE_MEMZONE_NAMESIZE];
-	char rg_name[RTE_RING_NAMESIZE];
-	struct rte_mempool *mp = NULL;
-	struct rte_tailq_entry *te;
-	struct rte_ring *r;
-	const struct rte_memzone *mz;
-	size_t mempool_size;
-	int mz_flags = RTE_MEMZONE_1GB|RTE_MEMZONE_SIZE_HINT_ONLY;
-	int rg_flags = 0;
-	void *obj;
-	struct rte_mempool_objsz objsz;
-	void *startaddr;
-	int page_size = getpagesize();
-
-	/* compilation-time checks */
-	RTE_BUILD_BUG_ON((sizeof(struct rte_mempool) &
-			  RTE_CACHE_LINE_MASK) != 0);
-#if RTE_MEMPOOL_CACHE_MAX_SIZE > 0
-	RTE_BUILD_BUG_ON((sizeof(struct rte_mempool_cache) &
-			  RTE_CACHE_LINE_MASK) != 0);
-	RTE_BUILD_BUG_ON((offsetof(struct rte_mempool, local_cache) &
-			  RTE_CACHE_LINE_MASK) != 0);
-#endif
-#ifdef RTE_LIBRTE_MEMPOOL_DEBUG
-	RTE_BUILD_BUG_ON((sizeof(struct rte_mempool_debug_stats) &
-			  RTE_CACHE_LINE_MASK) != 0);
-	RTE_BUILD_BUG_ON((offsetof(struct rte_mempool, stats) &
-			  RTE_CACHE_LINE_MASK) != 0);
-#endif
-
-	/* check that we have an initialised tail queue */
-	if (RTE_TAILQ_LOOKUP_BY_IDX(RTE_TAILQ_MEMPOOL,
-			rte_mempool_list) == NULL) {
-		rte_errno = E_RTE_NO_TAILQ;
-		return NULL;
-	}
-
-	/* asked cache too big */
-	if (cache_size > RTE_MEMPOOL_CACHE_MAX_SIZE) {
-		rte_errno = EINVAL;
-		return NULL;
-	}
-
-	/* check that we have both VA and PA */
-	if (vaddr != NULL && paddr == NULL) {
-		rte_errno = EINVAL;
-		return NULL;
-	}
-
-	/* Check that pg_num and pg_shift parameters are valid. */
-	if (pg_num < RTE_DIM(mp->elt_pa) || pg_shift > MEMPOOL_PG_SHIFT_MAX) {
-		rte_errno = EINVAL;
-		return NULL;
-	}
-
-	/* "no cache align" imply "no spread" */
-	if (flags & MEMPOOL_F_NO_CACHE_ALIGN)
-		flags |= MEMPOOL_F_NO_SPREAD;
-
-	/* ring flags */
-	if (flags & MEMPOOL_F_SP_PUT)
-		rg_flags |= RING_F_SP_ENQ;
-	if (flags & MEMPOOL_F_SC_GET)
-		rg_flags |= RING_F_SC_DEQ;
-
-	/* calculate mempool object sizes. */
-	if (!rte_mempool_calc_obj_size(elt_size, flags, &objsz)) {
-		rte_errno = EINVAL;
-		return NULL;
-	}
-
-	rte_rwlock_write_lock(RTE_EAL_MEMPOOL_RWLOCK);
-
-	/* allocate the ring that will be used to store objects */
-	/* Ring functions will return appropriate errors if we are
-	 * running as a secondary process etc., so no checks made
-	 * in this function for that condition */
-	snprintf(rg_name, sizeof(rg_name), RTE_MEMPOOL_MZ_FORMAT, name);
-	r = rte_ring_create(rg_name, rte_align32pow2(n+1), socket_id, rg_flags);
-	if (r == NULL)
-		goto exit;
-
-	/*
-	 * reserve a memory zone for this mempool: private data is
-	 * cache-aligned
-	 */
-	private_data_size = (private_data_size +
-			     RTE_CACHE_LINE_MASK) & (~RTE_CACHE_LINE_MASK);
-
-	if (! rte_eal_has_hugepages()) {
-		/*
-		 * expand private data size to a whole page, so that the
-		 * first pool element will start on a new standard page
-		 */
-		int head = sizeof(struct rte_mempool);
-		int new_size = (private_data_size + head) % page_size;
-		if (new_size) {
-			private_data_size += page_size - new_size;
-		}
-	}
-
-	/* try to allocate tailq entry */
-	te = rte_zmalloc("MEMPOOL_TAILQ_ENTRY", sizeof(*te), 0);
-	if (te == NULL) {
-		RTE_LOG(ERR, MEMPOOL, "Cannot allocate tailq entry!\n");
-		goto exit;
-	}
-
-	/*
-	 * If user provided an external memory buffer, then use it to
-	 * store mempool objects. Otherwise reserve memzone big enough to
-	 * hold mempool header and metadata plus mempool objects.
-	 */
-	mempool_size = MEMPOOL_HEADER_SIZE(mp, pg_num) + private_data_size;
-	if (vaddr == NULL)
-		mempool_size += (size_t)objsz.total_size * n;
-
-	if (! rte_eal_has_hugepages()) {
-		/*
-		 * we want the memory pool to start on a page boundary,
-		 * because pool elements crossing page boundaries would
-		 * result in discontiguous physical addresses
-		 */
-		mempool_size += page_size;
-	}
-
-	snprintf(mz_name, sizeof(mz_name), RTE_MEMPOOL_MZ_FORMAT, name);
-
-	mz = rte_memzone_reserve(mz_name, mempool_size, socket_id, mz_flags);
-
-	/*
-	 * no more memory: in this case we loose previously reserved
-	 * space for the as we cannot free it
-	 */
-	if (mz == NULL) {
-		rte_free(te);
-		goto exit;
-	}
-
-	if (rte_eal_has_hugepages()) {
-		startaddr = (void*)mz->addr;
-	} else {
-		/* align memory pool start address on a page boundary */
-		unsigned long addr = (unsigned long)mz->addr;
-		if (addr & (page_size - 1)) {
-			addr += page_size;
-			addr &= ~(page_size - 1);
-		}
-		startaddr = (void*)addr;
-	}
-
-	/* init the mempool structure */
-	mp = startaddr;
-	memset(mp, 0, sizeof(*mp));
-	snprintf(mp->name, sizeof(mp->name), "%s", name);
-	mp->phys_addr = mz->phys_addr;
-	mp->ring = r;
-	mp->size = n;
-	mp->flags = flags;
-	mp->elt_size = objsz.elt_size;
-	mp->header_size = objsz.header_size;
-	mp->trailer_size = objsz.trailer_size;
-	mp->cache_size = cache_size;
-	mp->cache_flushthresh = (uint32_t)
-		(cache_size * CACHE_FLUSHTHRESH_MULTIPLIER);
-	mp->private_data_size = private_data_size;
-
-	/* calculate address of the first element for continuous mempool. */
-	obj = (char *)mp + MEMPOOL_HEADER_SIZE(mp, pg_num) +
-		private_data_size;
-
-	/* populate address translation fields. */
-	mp->pg_num = pg_num;
-	mp->pg_shift = pg_shift;
-	mp->pg_mask = RTE_LEN2MASK(mp->pg_shift, typeof(mp->pg_mask));
-
-	/* mempool elements allocated together with mempool */
-	if (vaddr == NULL) {
-		mp->elt_va_start = (uintptr_t)obj;
-		mp->elt_pa[0] = mp->phys_addr +
-			(mp->elt_va_start - (uintptr_t)mp);
-
-	/* mempool elements in a separate chunk of memory. */
-	} else {
-		mp->elt_va_start = (uintptr_t)vaddr;
-		memcpy(mp->elt_pa, paddr, sizeof (mp->elt_pa[0]) * pg_num);
-	}
-
-	mp->elt_va_end = mp->elt_va_start;
-
-	/* call the initializer */
-	if (mp_init)
-		mp_init(mp, mp_init_arg);
-
-	mempool_populate(mp, n, 1, obj_init, obj_init_arg);
-
-	te->data = (void *) mp;
-
-	RTE_EAL_TAILQ_INSERT_TAIL(RTE_TAILQ_MEMPOOL, rte_mempool_list, te);
-
-exit:
-	rte_rwlock_write_unlock(RTE_EAL_MEMPOOL_RWLOCK);
-
-	return mp;
-}
-
-/* Return the number of entries in the mempool */
-unsigned
-rte_mempool_count(const struct rte_mempool *mp)
-{
-	unsigned count;
-
-	count = rte_ring_count(mp->ring);
-
-#if RTE_MEMPOOL_CACHE_MAX_SIZE > 0
-	{
-		unsigned lcore_id;
-		if (mp->cache_size == 0)
-			return count;
-
-		for (lcore_id = 0; lcore_id < RTE_MAX_LCORE; lcore_id++)
-			count += mp->local_cache[lcore_id].len;
-	}
-#endif
-
-	/*
-	 * due to race condition (access to len is not locked), the
-	 * total can be greater than size... so fix the result
-	 */
-	if (count > mp->size)
-		return mp->size;
-	return count;
-}
-
-/* dump the cache status */
-static unsigned
-rte_mempool_dump_cache(FILE *f, const struct rte_mempool *mp)
-{
-#if RTE_MEMPOOL_CACHE_MAX_SIZE > 0
-	unsigned lcore_id;
-	unsigned count = 0;
-	unsigned cache_count;
-
-	fprintf(f, "  cache infos:\n");
-	fprintf(f, "    cache_size=%"PRIu32"\n", mp->cache_size);
-	for (lcore_id = 0; lcore_id < RTE_MAX_LCORE; lcore_id++) {
-		cache_count = mp->local_cache[lcore_id].len;
-		fprintf(f, "    cache_count[%u]=%u\n", lcore_id, cache_count);
-		count += cache_count;
-	}
-	fprintf(f, "    total_cache_count=%u\n", count);
-	return count;
-#else
-	RTE_SET_USED(mp);
-	fprintf(f, "  cache disabled\n");
-	return 0;
-#endif
-}
-
-#ifdef RTE_LIBRTE_MEMPOOL_DEBUG
-/* check cookies before and after objects */
-#ifndef __INTEL_COMPILER
-#pragma GCC diagnostic ignored "-Wcast-qual"
-#endif
-
-struct mempool_audit_arg {
-	const struct rte_mempool *mp;
-	uintptr_t obj_end;
-	uint32_t obj_num;
-};
-
-static void
-mempool_obj_audit(void *arg, void *start, void *end, uint32_t idx)
-{
-	struct mempool_audit_arg *pa = arg;
-	void *obj;
-
-	obj = (char *)start + pa->mp->header_size;
-	pa->obj_end = (uintptr_t)end;
-	pa->obj_num = idx + 1;
-	__mempool_check_cookies(pa->mp, &obj, 1, 2);
-}
-
-static void
-mempool_audit_cookies(const struct rte_mempool *mp)
-{
-	uint32_t elt_sz, num;
-	struct mempool_audit_arg arg;
-
-	elt_sz = mp->elt_size + mp->header_size + mp->trailer_size;
-
-	arg.mp = mp;
-	arg.obj_end = mp->elt_va_start;
-	arg.obj_num = 0;
-
-	num = rte_mempool_obj_iter((void *)mp->elt_va_start,
-		mp->size, elt_sz, 1,
-		mp->elt_pa, mp->pg_num, mp->pg_shift,
-		mempool_obj_audit, &arg);
-
-	if (num != mp->size) {
-			rte_panic("rte_mempool_obj_iter(mempool=%p, size=%u) "
-			"iterated only over %u elements\n",
-			mp, mp->size, num);
-	} else if (arg.obj_end != mp->elt_va_end || arg.obj_num != mp->size) {
-			rte_panic("rte_mempool_obj_iter(mempool=%p, size=%u) "
-			"last callback va_end: %#tx (%#tx expeceted), "
-			"num of objects: %u (%u expected)\n",
-			mp, mp->size,
-			arg.obj_end, mp->elt_va_end,
-			arg.obj_num, mp->size);
-	}
-}
-
-#ifndef __INTEL_COMPILER
-#pragma GCC diagnostic error "-Wcast-qual"
-#endif
-#else
-#define mempool_audit_cookies(mp) do {} while(0)
-#endif
-
-#if RTE_MEMPOOL_CACHE_MAX_SIZE > 0
-/* check cookies before and after objects */
-static void
-mempool_audit_cache(const struct rte_mempool *mp)
-{
-	/* check cache size consistency */
-	unsigned lcore_id;
-	for (lcore_id = 0; lcore_id < RTE_MAX_LCORE; lcore_id++) {
-		if (mp->local_cache[lcore_id].len > mp->cache_flushthresh) {
-			RTE_LOG(CRIT, MEMPOOL, "badness on cache[%u]\n",
-				lcore_id);
-			rte_panic("MEMPOOL: invalid cache len\n");
-		}
-	}
-}
-#else
-#define mempool_audit_cache(mp) do {} while(0)
-#endif
-
-
-/* check the consistency of mempool (size, cookies, ...) */
-void
-rte_mempool_audit(const struct rte_mempool *mp)
-{
-	mempool_audit_cache(mp);
-	mempool_audit_cookies(mp);
-
-	/* For case where mempool DEBUG is not set, and cache size is 0 */
-	RTE_SET_USED(mp);
-}
-
-/* dump the status of the mempool on the console */
-void
-rte_mempool_dump(FILE *f, const struct rte_mempool *mp)
-{
-#ifdef RTE_LIBRTE_MEMPOOL_DEBUG
-	struct rte_mempool_debug_stats sum;
-	unsigned lcore_id;
-#endif
-	unsigned common_count;
-	unsigned cache_count;
-
-	RTE_VERIFY(f != NULL);
-	RTE_VERIFY(mp != NULL);
-
-	fprintf(f, "mempool <%s>@%p\n", mp->name, mp);
-	fprintf(f, "  flags=%x\n", mp->flags);
-	fprintf(f, "  ring=<%s>@%p\n", mp->ring->name, mp->ring);
-	fprintf(f, "  phys_addr=0x%" PRIx64 "\n", mp->phys_addr);
-	fprintf(f, "  size=%"PRIu32"\n", mp->size);
-	fprintf(f, "  header_size=%"PRIu32"\n", mp->header_size);
-	fprintf(f, "  elt_size=%"PRIu32"\n", mp->elt_size);
-	fprintf(f, "  trailer_size=%"PRIu32"\n", mp->trailer_size);
-	fprintf(f, "  total_obj_size=%"PRIu32"\n",
-	       mp->header_size + mp->elt_size + mp->trailer_size);
-
-	fprintf(f, "  private_data_size=%"PRIu32"\n", mp->private_data_size);
-	fprintf(f, "  pg_num=%"PRIu32"\n", mp->pg_num);
-	fprintf(f, "  pg_shift=%"PRIu32"\n", mp->pg_shift);
-	fprintf(f, "  pg_mask=%#tx\n", mp->pg_mask);
-	fprintf(f, "  elt_va_start=%#tx\n", mp->elt_va_start);
-	fprintf(f, "  elt_va_end=%#tx\n", mp->elt_va_end);
-	fprintf(f, "  elt_pa[0]=0x%" PRIx64 "\n", mp->elt_pa[0]);
-
-	if (mp->size != 0)
-		fprintf(f, "  avg bytes/object=%#Lf\n",
-			(long double)(mp->elt_va_end - mp->elt_va_start) /
-			mp->size);
-
-	cache_count = rte_mempool_dump_cache(f, mp);
-	common_count = rte_ring_count(mp->ring);
-	if ((cache_count + common_count) > mp->size)
-		common_count = mp->size - cache_count;
-	fprintf(f, "  common_pool_count=%u\n", common_count);
-
-	/* sum and dump statistics */
-#ifdef RTE_LIBRTE_MEMPOOL_DEBUG
-	memset(&sum, 0, sizeof(sum));
-	for (lcore_id = 0; lcore_id < RTE_MAX_LCORE; lcore_id++) {
-		sum.put_bulk += mp->stats[lcore_id].put_bulk;
-		sum.put_objs += mp->stats[lcore_id].put_objs;
-		sum.get_success_bulk += mp->stats[lcore_id].get_success_bulk;
-		sum.get_success_objs += mp->stats[lcore_id].get_success_objs;
-		sum.get_fail_bulk += mp->stats[lcore_id].get_fail_bulk;
-		sum.get_fail_objs += mp->stats[lcore_id].get_fail_objs;
-	}
-	fprintf(f, "  stats:\n");
-	fprintf(f, "    put_bulk=%"PRIu64"\n", sum.put_bulk);
-	fprintf(f, "    put_objs=%"PRIu64"\n", sum.put_objs);
-	fprintf(f, "    get_success_bulk=%"PRIu64"\n", sum.get_success_bulk);
-	fprintf(f, "    get_success_objs=%"PRIu64"\n", sum.get_success_objs);
-	fprintf(f, "    get_fail_bulk=%"PRIu64"\n", sum.get_fail_bulk);
-	fprintf(f, "    get_fail_objs=%"PRIu64"\n", sum.get_fail_objs);
-#else
-	fprintf(f, "  no statistics available\n");
-#endif
-
-	rte_mempool_audit(mp);
-}
-
-/* dump the status of all mempools on the console */
-void
-rte_mempool_list_dump(FILE *f)
-{
-	const struct rte_mempool *mp = NULL;
-	struct rte_tailq_entry *te;
-	struct rte_mempool_list *mempool_list;
-
-	if ((mempool_list =
-	     RTE_TAILQ_LOOKUP_BY_IDX(RTE_TAILQ_MEMPOOL, rte_mempool_list)) == NULL) {
-		rte_errno = E_RTE_NO_TAILQ;
-		return;
-	}
-
-	rte_rwlock_read_lock(RTE_EAL_MEMPOOL_RWLOCK);
-
-	TAILQ_FOREACH(te, mempool_list, next) {
-		mp = (struct rte_mempool *) te->data;
-		rte_mempool_dump(f, mp);
-	}
-
-	rte_rwlock_read_unlock(RTE_EAL_MEMPOOL_RWLOCK);
-}
-
-/* search a mempool from its name */
-struct rte_mempool *
-rte_mempool_lookup(const char *name)
-{
-	struct rte_mempool *mp = NULL;
-	struct rte_tailq_entry *te;
-	struct rte_mempool_list *mempool_list;
-
-	if ((mempool_list =
-	     RTE_TAILQ_LOOKUP_BY_IDX(RTE_TAILQ_MEMPOOL, rte_mempool_list)) == NULL) {
-		rte_errno = E_RTE_NO_TAILQ;
-		return NULL;
-	}
-
-	rte_rwlock_read_lock(RTE_EAL_MEMPOOL_RWLOCK);
-
-	TAILQ_FOREACH(te, mempool_list, next) {
-		mp = (struct rte_mempool *) te->data;
-		if (strncmp(name, mp->name, RTE_MEMPOOL_NAMESIZE) == 0)
-			break;
-	}
-
-	rte_rwlock_read_unlock(RTE_EAL_MEMPOOL_RWLOCK);
-
-	if (te == NULL) {
-		rte_errno = ENOENT;
-		return NULL;
-	}
-
-	return mp;
-}
-
-void rte_mempool_walk(void (*func)(const struct rte_mempool *, void *),
-		      void *arg)
-{
-	struct rte_tailq_entry *te = NULL;
-	struct rte_mempool_list *mempool_list;
-
-	if ((mempool_list =
-	     RTE_TAILQ_LOOKUP_BY_IDX(RTE_TAILQ_MEMPOOL, rte_mempool_list)) == NULL) {
-		rte_errno = E_RTE_NO_TAILQ;
-		return;
-	}
-
-	rte_rwlock_read_lock(RTE_EAL_MEMPOOL_RWLOCK);
-
-	TAILQ_FOREACH(te, mempool_list, next) {
-		(*func)((struct rte_mempool *) te->data, arg);
-	}
-
-	rte_rwlock_read_unlock(RTE_EAL_MEMPOOL_RWLOCK);
-}
diff --git a/lib/librte_mempool/rte_mempool.h b/lib/librte_mempool/rte_mempool.h
deleted file mode 100644
index 3314651..0000000
--- a/lib/librte_mempool/rte_mempool.h
+++ /dev/null
@@ -1,1392 +0,0 @@
-/*-
- *   BSD LICENSE
- *
- *   Copyright(c) 2010-2014 Intel Corporation. All rights reserved.
- *   All rights reserved.
- *
- *   Redistribution and use in source and binary forms, with or without
- *   modification, are permitted provided that the following conditions
- *   are met:
- *
- *     * Redistributions of source code must retain the above copyright
- *       notice, this list of conditions and the following disclaimer.
- *     * Redistributions in binary form must reproduce the above copyright
- *       notice, this list of conditions and the following disclaimer in
- *       the documentation and/or other materials provided with the
- *       distribution.
- *     * Neither the name of Intel Corporation nor the names of its
- *       contributors may be used to endorse or promote products derived
- *       from this software without specific prior written permission.
- *
- *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
- *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
- *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
- *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
- *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
- *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
- *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
- *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
- *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
- *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
- *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
- */
-
-#ifndef _RTE_MEMPOOL_H_
-#define _RTE_MEMPOOL_H_
-
-/**
- * @file
- * RTE Mempool.
- *
- * A memory pool is an allocator of fixed-size object. It is
- * identified by its name, and uses a ring to store free objects. It
- * provides some other optional services, like a per-core object
- * cache, and an alignment helper to ensure that objects are padded
- * to spread them equally on all RAM channels, ranks, and so on.
- *
- * Objects owned by a mempool should never be added in another
- * mempool. When an object is freed using rte_mempool_put() or
- * equivalent, the object data is not modified; the user can save some
- * meta-data in the object data and retrieve them when allocating a
- * new object.
- *
- * Note: the mempool implementation is not preemptable. A lcore must
- * not be interrupted by another task that uses the same mempool
- * (because it uses a ring which is not preemptable). Also, mempool
- * functions must not be used outside the DPDK environment: for
- * example, in linuxapp environment, a thread that is not created by
- * the EAL must not use mempools. This is due to the per-lcore cache
- * that won't work as rte_lcore_id() will not return a correct value.
- */
-
-#include <stdio.h>
-#include <stdlib.h>
-#include <stdint.h>
-#include <errno.h>
-#include <inttypes.h>
-#include <sys/queue.h>
-
-#include <rte_log.h>
-#include <rte_debug.h>
-#include <rte_lcore.h>
-#include <rte_memory.h>
-#include <rte_branch_prediction.h>
-#include <rte_ring.h>
-
-#ifdef __cplusplus
-extern "C" {
-#endif
-
-#define RTE_MEMPOOL_HEADER_COOKIE1  0xbadbadbadadd2e55ULL /**< Header cookie. */
-#define RTE_MEMPOOL_HEADER_COOKIE2  0xf2eef2eedadd2e55ULL /**< Header cookie. */
-#define RTE_MEMPOOL_TRAILER_COOKIE  0xadd2e55badbadbadULL /**< Trailer cookie.*/
-
-#ifdef RTE_LIBRTE_MEMPOOL_DEBUG
-/**
- * A structure that stores the mempool statistics (per-lcore).
- */
-struct rte_mempool_debug_stats {
-	uint64_t put_bulk;         /**< Number of puts. */
-	uint64_t put_objs;         /**< Number of objects successfully put. */
-	uint64_t get_success_bulk; /**< Successful allocation number. */
-	uint64_t get_success_objs; /**< Objects successfully allocated. */
-	uint64_t get_fail_bulk;    /**< Failed allocation number. */
-	uint64_t get_fail_objs;    /**< Objects that failed to be allocated. */
-} __rte_cache_aligned;
-#endif
-
-#if RTE_MEMPOOL_CACHE_MAX_SIZE > 0
-/**
- * A structure that stores a per-core object cache.
- */
-struct rte_mempool_cache {
-	unsigned len; /**< Cache len */
-	/*
-	 * Cache is allocated to this size to allow it to overflow in certain
-	 * cases to avoid needless emptying of cache.
-	 */
-	void *objs[RTE_MEMPOOL_CACHE_MAX_SIZE * 3]; /**< Cache objects */
-} __rte_cache_aligned;
-#endif /* RTE_MEMPOOL_CACHE_MAX_SIZE > 0 */
-
-struct rte_mempool_objsz {
-	uint32_t elt_size;     /**< Size of an element. */
-	uint32_t header_size;  /**< Size of header (before elt). */
-	uint32_t trailer_size; /**< Size of trailer (after elt). */
-	uint32_t total_size;
-	/**< Total size of an object (header + elt + trailer). */
-};
-
-#define RTE_MEMPOOL_NAMESIZE 32 /**< Maximum length of a memory pool name. */
-#define RTE_MEMPOOL_MZ_PREFIX "MP_"
-
-/* "MP_<name>" */
-#define	RTE_MEMPOOL_MZ_FORMAT	RTE_MEMPOOL_MZ_PREFIX "%s"
-
-#ifdef RTE_LIBRTE_XEN_DOM0
-
-/* "<name>_MP_elt" */
-#define	RTE_MEMPOOL_OBJ_NAME	"%s_" RTE_MEMPOOL_MZ_PREFIX "elt"
-
-#else
-
-#define	RTE_MEMPOOL_OBJ_NAME	RTE_MEMPOOL_MZ_FORMAT
-
-#endif /* RTE_LIBRTE_XEN_DOM0 */
-
-#define	MEMPOOL_PG_SHIFT_MAX	(sizeof(uintptr_t) * CHAR_BIT - 1)
-
-/** Mempool over one chunk of physically contiguous memory */
-#define	MEMPOOL_PG_NUM_DEFAULT	1
-
-/**
- * The RTE mempool structure.
- */
-struct rte_mempool {
-	char name[RTE_MEMPOOL_NAMESIZE]; /**< Name of mempool. */
-	struct rte_ring *ring;           /**< Ring to store objects. */
-	phys_addr_t phys_addr;           /**< Phys. addr. of mempool struct. */
-	int flags;                       /**< Flags of the mempool. */
-	uint32_t size;                   /**< Size of the mempool. */
-	uint32_t cache_size;             /**< Size of per-lcore local cache. */
-	uint32_t cache_flushthresh;
-	/**< Threshold before we flush excess elements. */
-
-	uint32_t elt_size;               /**< Size of an element. */
-	uint32_t header_size;            /**< Size of header (before elt). */
-	uint32_t trailer_size;           /**< Size of trailer (after elt). */
-
-	unsigned private_data_size;      /**< Size of private data. */
-
-#if RTE_MEMPOOL_CACHE_MAX_SIZE > 0
-	/** Per-lcore local cache. */
-	struct rte_mempool_cache local_cache[RTE_MAX_LCORE];
-#endif
-
-#ifdef RTE_LIBRTE_MEMPOOL_DEBUG
-	/** Per-lcore statistics. */
-	struct rte_mempool_debug_stats stats[RTE_MAX_LCORE];
-#endif
-
-	/* Address translation support, starts from next cache line. */
-
-	/** Number of elements in the elt_pa array. */
-	uint32_t    pg_num __rte_cache_aligned;
-	uint32_t    pg_shift;     /**< LOG2 of the physical page size. */
-	uintptr_t   pg_mask;      /**< physical page mask value. */
-	uintptr_t   elt_va_start;
-	/**< Virtual address of the first mempool object. */
-	uintptr_t   elt_va_end;
-	/**< Virtual address of the <size + 1> mempool object. */
-	phys_addr_t elt_pa[MEMPOOL_PG_NUM_DEFAULT];
-	/**< Array of physical pages addresses for the mempool objects buffer. */
-
-}  __rte_cache_aligned;
-
-#define MEMPOOL_F_NO_SPREAD      0x0001 /**< Do not spread in memory. */
-#define MEMPOOL_F_NO_CACHE_ALIGN 0x0002 /**< Do not align objs on cache lines.*/
-#define MEMPOOL_F_SP_PUT         0x0004 /**< Default put is "single-producer".*/
-#define MEMPOOL_F_SC_GET         0x0008 /**< Default get is "single-consumer".*/
-
-/**
- * @internal When debug is enabled, store some statistics.
- * @param mp
- *   Pointer to the memory pool.
- * @param name
- *   Name of the statistics field to increment in the memory pool.
- * @param n
- *   Number of objects to add to the per-lcore statistics.
- */
-#ifdef RTE_LIBRTE_MEMPOOL_DEBUG
-#define __MEMPOOL_STAT_ADD(mp, name, n) do {			\
-		unsigned __lcore_id = rte_lcore_id();		\
-		mp->stats[__lcore_id].name##_objs += n;		\
-		mp->stats[__lcore_id].name##_bulk += 1;		\
-	} while(0)
-#else
-#define __MEMPOOL_STAT_ADD(mp, name, n) do {} while(0)
-#endif
-
-/**
- * Calculates size of the mempool header.
- * @param mp
- *   Pointer to the memory pool.
- * @param pgn
- *   Number of pages used to store mempool objects.
- */
-#define	MEMPOOL_HEADER_SIZE(mp, pgn)	(sizeof(*(mp)) + \
-	RTE_ALIGN_CEIL(((pgn) - RTE_DIM((mp)->elt_pa)) * \
-	sizeof ((mp)->elt_pa[0]), RTE_CACHE_LINE_SIZE))
-
-/**
- * Return true if the whole mempool is allocated in one contiguous block of memory.
- */
-#define	MEMPOOL_IS_CONTIG(mp)                      \
-	((mp)->pg_num == MEMPOOL_PG_NUM_DEFAULT && \
-	(mp)->phys_addr == (mp)->elt_pa[0])
-
-/**
- * @internal Get a pointer to a mempool pointer in the object header.
- * @param obj
- *   Pointer to object.
- * @return
- *   The pointer to the mempool from which the object was allocated.
- */
-static inline struct rte_mempool **__mempool_from_obj(void *obj)
-{
-	struct rte_mempool **mpp;
-	unsigned off;
-
-	off = sizeof(struct rte_mempool *);
-#ifdef RTE_LIBRTE_MEMPOOL_DEBUG
-	off += sizeof(uint64_t);
-#endif
-	mpp = (struct rte_mempool **)((char *)obj - off);
-	return mpp;
-}
-
-/**
- * Return a pointer to the mempool owning this object.
- *
- * @param obj
- *   An object that is owned by a pool. If this is not the case,
- *   the behavior is undefined.
- * @return
- *   A pointer to the mempool structure.
- */
-static inline const struct rte_mempool *rte_mempool_from_obj(void *obj)
-{
-	struct rte_mempool * const *mpp;
-	mpp = __mempool_from_obj(obj);
-	return *mpp;
-}
-
-#ifdef RTE_LIBRTE_MEMPOOL_DEBUG
-/* get header cookie value */
-static inline uint64_t __mempool_read_header_cookie(const void *obj)
-{
-	return *(const uint64_t *)((const char *)obj - sizeof(uint64_t));
-}
-
-/* get trailer cookie value */
-static inline uint64_t __mempool_read_trailer_cookie(void *obj)
-{
-	struct rte_mempool **mpp = __mempool_from_obj(obj);
-	return *(uint64_t *)((char *)obj + (*mpp)->elt_size);
-}
-
-/* write header cookie value */
-static inline void __mempool_write_header_cookie(void *obj, int free)
-{
-	uint64_t *cookie_p;
-	cookie_p = (uint64_t *)((char *)obj - sizeof(uint64_t));
-	if (free == 0)
-		*cookie_p = RTE_MEMPOOL_HEADER_COOKIE1;
-	else
-		*cookie_p = RTE_MEMPOOL_HEADER_COOKIE2;
-
-}
-
-/* write trailer cookie value */
-static inline void __mempool_write_trailer_cookie(void *obj)
-{
-	uint64_t *cookie_p;
-	struct rte_mempool **mpp = __mempool_from_obj(obj);
-	cookie_p = (uint64_t *)((char *)obj + (*mpp)->elt_size);
-	*cookie_p = RTE_MEMPOOL_TRAILER_COOKIE;
-}
-#endif /* RTE_LIBRTE_MEMPOOL_DEBUG */
-
-/**
- * @internal Check and update cookies or panic.
- *
- * @param mp
- *   Pointer to the memory pool.
- * @param obj_table_const
- *   Pointer to a table of void * pointers (objects).
- * @param n
- *   Index of object in object table.
- * @param free
- *   - 0: object is supposed to be allocated, mark it as free
- *   - 1: object is supposed to be free, mark it as allocated
- *   - 2: just check that cookie is valid (free or allocated)
- */
-#ifdef RTE_LIBRTE_MEMPOOL_DEBUG
-#ifndef __INTEL_COMPILER
-#pragma GCC diagnostic ignored "-Wcast-qual"
-#endif
-static inline void __mempool_check_cookies(const struct rte_mempool *mp,
-					   void * const *obj_table_const,
-					   unsigned n, int free)
-{
-	uint64_t cookie;
-	void *tmp;
-	void *obj;
-	void **obj_table;
-
-	/* Force to drop the "const" attribute. This is done only when
-	 * DEBUG is enabled */
-	tmp = (void *) obj_table_const;
-	obj_table = (void **) tmp;
-
-	while (n--) {
-		obj = obj_table[n];
-
-		if (rte_mempool_from_obj(obj) != mp)
-			rte_panic("MEMPOOL: object is owned by another "
-				  "mempool\n");
-
-		cookie = __mempool_read_header_cookie(obj);
-
-		if (free == 0) {
-			if (cookie != RTE_MEMPOOL_HEADER_COOKIE1) {
-				rte_log_set_history(0);
-				RTE_LOG(CRIT, MEMPOOL,
-					"obj=%p, mempool=%p, cookie=%"PRIx64"\n",
-					obj, mp, cookie);
-				rte_panic("MEMPOOL: bad header cookie (put)\n");
-			}
-			__mempool_write_header_cookie(obj, 1);
-		}
-		else if (free == 1) {
-			if (cookie != RTE_MEMPOOL_HEADER_COOKIE2) {
-				rte_log_set_history(0);
-				RTE_LOG(CRIT, MEMPOOL,
-					"obj=%p, mempool=%p, cookie=%"PRIx64"\n",
-					obj, mp, cookie);
-				rte_panic("MEMPOOL: bad header cookie (get)\n");
-			}
-			__mempool_write_header_cookie(obj, 0);
-		}
-		else if (free == 2) {
-			if (cookie != RTE_MEMPOOL_HEADER_COOKIE1 &&
-			    cookie != RTE_MEMPOOL_HEADER_COOKIE2) {
-				rte_log_set_history(0);
-				RTE_LOG(CRIT, MEMPOOL,
-					"obj=%p, mempool=%p, cookie=%"PRIx64"\n",
-					obj, mp, cookie);
-				rte_panic("MEMPOOL: bad header cookie (audit)\n");
-			}
-		}
-		cookie = __mempool_read_trailer_cookie(obj);
-		if (cookie != RTE_MEMPOOL_TRAILER_COOKIE) {
-			rte_log_set_history(0);
-			RTE_LOG(CRIT, MEMPOOL,
-				"obj=%p, mempool=%p, cookie=%"PRIx64"\n",
-				obj, mp, cookie);
-			rte_panic("MEMPOOL: bad trailer cookie\n");
-		}
-	}
-}
-#ifndef __INTEL_COMPILER
-#pragma GCC diagnostic error "-Wcast-qual"
-#endif
-#else
-#define __mempool_check_cookies(mp, obj_table_const, n, free) do {} while(0)
-#endif /* RTE_LIBRTE_MEMPOOL_DEBUG */
-
-/**
- * A mempool object iterator callback function.
- */
-typedef void (*rte_mempool_obj_iter_t)(void * /*obj_iter_arg*/,
-	void * /*obj_start*/,
-	void * /*obj_end*/,
-	uint32_t /*obj_index */);
-
-/*
- * Iterates across objects of the given size and alignment in the
- * provided chunk of memory. The given memory buffer can consist of
- * disjoint physical pages.
- * For each object calls the provided callback (if any).
- * Used to populate mempool, walk through all elements of the mempool,
- * estimate how many elements of the given size could be created in the given
- * memory buffer.
- * @param vaddr
- *   Virtual address of the memory buffer.
- * @param elt_num
- *   Maximum number of objects to iterate through.
- * @param elt_sz
- *   Size of each object.
- * @param paddr
- *   Array of physical addresses of the pages that comprise the given memory
- *   buffer.
- * @param pg_num
- *   Number of elements in the paddr array.
- * @param pg_shift
- *   LOG2 of the physical page size.
- * @param obj_iter
- *   Object iterator callback function (could be NULL).
- * @param obj_iter_arg
- *   User-defined parameter for the object iterator callback function.
- *
- * @return
- *   Number of objects iterated through.
- */
-
-uint32_t rte_mempool_obj_iter(void *vaddr,
-	uint32_t elt_num, size_t elt_sz, size_t align,
-	const phys_addr_t paddr[], uint32_t pg_num, uint32_t pg_shift,
-	rte_mempool_obj_iter_t obj_iter, void *obj_iter_arg);
-
-/**
- * An object constructor callback function for mempool.
- *
- * Arguments are the mempool, the opaque pointer given by the user in
- * rte_mempool_create(), the pointer to the element and the index of
- * the element in the pool.
- */
-typedef void (rte_mempool_obj_ctor_t)(struct rte_mempool *, void *,
-				      void *, unsigned);
-
-/**
- * A mempool constructor callback function.
- *
- * Arguments are the mempool and the opaque pointer given by the user in
- * rte_mempool_create().
- */
-typedef void (rte_mempool_ctor_t)(struct rte_mempool *, void *);
-
-/**
- * Creates a new mempool named *name* in memory.
- *
- * This function uses ``memzone_reserve()`` to allocate memory. The
- * pool contains n elements of elt_size. Its size is set to n.
- * All elements of the mempool are allocated together with the mempool header,
- * in one physically contiguous chunk of memory.
- *
- * @param name
- *   The name of the mempool.
- * @param n
- *   The number of elements in the mempool. The optimum size (in terms of
- *   memory usage) for a mempool is when n is a power of two minus one:
- *   n = (2^q - 1).
- * @param elt_size
- *   The size of each element.
- * @param cache_size
- *   If cache_size is non-zero, the rte_mempool library will try to
- *   limit the accesses to the common lockless pool, by maintaining a
- *   per-lcore object cache. This argument must be lower than or equal to
- *   CONFIG_RTE_MEMPOOL_CACHE_MAX_SIZE. It is advised to choose
- *   cache_size to have "n modulo cache_size == 0": if this is
- *   not the case, some elements will always stay in the pool and will
- *   never be used. The access to the per-lcore table is of course
- *   faster than the multi-producer/consumer pool. The cache can be
- *   disabled if the cache_size argument is set to 0; it can be useful to
- *   avoid losing objects in cache. Note that even if not used, the
- *   memory space for cache is always reserved in a mempool structure,
- *   except if CONFIG_RTE_MEMPOOL_CACHE_MAX_SIZE is set to 0.
- * @param private_data_size
- *   The size of the private data appended after the mempool
- *   structure. This is useful for storing some private data after the
- *   mempool structure, as is done for rte_mbuf_pool for example.
- * @param mp_init
- *   A function pointer that is called for initialization of the pool,
- *   before object initialization. The user can initialize the private
- *   data in this function if needed. This parameter can be NULL if
- *   not needed.
- * @param mp_init_arg
- *   An opaque pointer to data that can be used in the mempool
- *   constructor function.
- * @param obj_init
- *   A function pointer that is called for each object at
- *   initialization of the pool. The user can set some meta data in
- *   objects if needed. This parameter can be NULL if not needed.
- *   The obj_init() function takes the mempool pointer, the init_arg,
- *   the object pointer and the object number as parameters.
- * @param obj_init_arg
- *   An opaque pointer to data that can be used as an argument for
- *   each call to the object constructor function.
- * @param socket_id
- *   The *socket_id* argument is the socket identifier in the case of
- *   NUMA. The value can be *SOCKET_ID_ANY* if there is no NUMA
- *   constraint for the reserved zone.
- * @param flags
- *   The *flags* argument is an OR of the following flags:
- *   - MEMPOOL_F_NO_SPREAD: By default, object addresses are spread
- *     between channels in RAM: the pool allocator will add padding
- *     between objects depending on the hardware configuration. See
- *     Memory alignment constraints for details. If this flag is set,
- *     the allocator will just align them to a cache line.
- *   - MEMPOOL_F_NO_CACHE_ALIGN: By default, the returned objects are
- *     cache-aligned. This flag removes this constraint, and no
- *     padding will be present between objects. This flag implies
- *     MEMPOOL_F_NO_SPREAD.
- *   - MEMPOOL_F_SP_PUT: If this flag is set, the default behavior
- *     when using rte_mempool_put() or rte_mempool_put_bulk() is
- *     "single-producer". Otherwise, it is "multi-producers".
- *   - MEMPOOL_F_SC_GET: If this flag is set, the default behavior
- *     when using rte_mempool_get() or rte_mempool_get_bulk() is
- *     "single-consumer". Otherwise, it is "multi-consumers".
- * @return
- *   The pointer to the newly allocated mempool on success. NULL on error
- *   with rte_errno set appropriately. Possible rte_errno values include:
- *    - E_RTE_NO_CONFIG - function could not get pointer to rte_config structure
- *    - E_RTE_SECONDARY - function was called from a secondary process instance
- *    - E_RTE_NO_TAILQ - no tailq list could be got for the ring or mempool list
- *    - EINVAL - cache size provided is too large
- *    - ENOSPC - the maximum number of memzones has already been allocated
- *    - EEXIST - a memzone with the same name already exists
- *    - ENOMEM - no appropriate memory area found in which to create memzone
- */
-struct rte_mempool *
-rte_mempool_create(const char *name, unsigned n, unsigned elt_size,
-		   unsigned cache_size, unsigned private_data_size,
-		   rte_mempool_ctor_t *mp_init, void *mp_init_arg,
-		   rte_mempool_obj_ctor_t *obj_init, void *obj_init_arg,
-		   int socket_id, unsigned flags);
-
-/**
- * Creates a new mempool named *name* in memory.
- *
- * This function uses ``memzone_reserve()`` to allocate memory. The
- * pool contains n elements of elt_size. Its size is set to n.
- * Depending on the input parameters, mempool elements can be either allocated
- * together with the mempool header, or an externally provided memory buffer
- * could be used to store mempool objects. In the latter case, that external
- * memory buffer can consist of a set of disjoint physical pages.
- *
- * @param name
- *   The name of the mempool.
- * @param n
- *   The number of elements in the mempool. The optimum size (in terms of
- *   memory usage) for a mempool is when n is a power of two minus one:
- *   n = (2^q - 1).
- * @param elt_size
- *   The size of each element.
- * @param cache_size
- *   If cache_size is non-zero, the rte_mempool library will try to
- *   limit the accesses to the common lockless pool, by maintaining a
- *   per-lcore object cache. This argument must be lower than or equal to
- *   CONFIG_RTE_MEMPOOL_CACHE_MAX_SIZE. It is advised to choose
- *   cache_size to have "n modulo cache_size == 0": if this is
- *   not the case, some elements will always stay in the pool and will
- *   never be used. The access to the per-lcore table is of course
- *   faster than the multi-producer/consumer pool. The cache can be
- *   disabled if the cache_size argument is set to 0; it can be useful to
- *   avoid losing objects in cache. Note that even if not used, the
- *   memory space for cache is always reserved in a mempool structure,
- *   except if CONFIG_RTE_MEMPOOL_CACHE_MAX_SIZE is set to 0.
- * @param private_data_size
- *   The size of the private data appended after the mempool
- *   structure. This is useful for storing some private data after the
- *   mempool structure, as is done for rte_mbuf_pool for example.
- * @param mp_init
- *   A function pointer that is called for initialization of the pool,
- *   before object initialization. The user can initialize the private
- *   data in this function if needed. This parameter can be NULL if
- *   not needed.
- * @param mp_init_arg
- *   An opaque pointer to data that can be used in the mempool
- *   constructor function.
- * @param obj_init
- *   A function pointer that is called for each object at
- *   initialization of the pool. The user can set some meta data in
- *   objects if needed. This parameter can be NULL if not needed.
- *   The obj_init() function takes the mempool pointer, the init_arg,
- *   the object pointer and the object number as parameters.
- * @param obj_init_arg
- *   An opaque pointer to data that can be used as an argument for
- *   each call to the object constructor function.
- * @param socket_id
- *   The *socket_id* argument is the socket identifier in the case of
- *   NUMA. The value can be *SOCKET_ID_ANY* if there is no NUMA
- *   constraint for the reserved zone.
- * @param flags
- *   The *flags* argument is an OR of the following flags:
- *   - MEMPOOL_F_NO_SPREAD: By default, object addresses are spread
- *     between channels in RAM: the pool allocator will add padding
- *     between objects depending on the hardware configuration. See
- *     Memory alignment constraints for details. If this flag is set,
- *     the allocator will just align them to a cache line.
- *   - MEMPOOL_F_NO_CACHE_ALIGN: By default, the returned objects are
- *     cache-aligned. This flag removes this constraint, and no
- *     padding will be present between objects. This flag implies
- *     MEMPOOL_F_NO_SPREAD.
- *   - MEMPOOL_F_SP_PUT: If this flag is set, the default behavior
- *     when using rte_mempool_put() or rte_mempool_put_bulk() is
- *     "single-producer". Otherwise, it is "multi-producers".
- *   - MEMPOOL_F_SC_GET: If this flag is set, the default behavior
- *     when using rte_mempool_get() or rte_mempool_get_bulk() is
- *     "single-consumer". Otherwise, it is "multi-consumers".
- * @param vaddr
- *   Virtual address of the externally allocated memory buffer.
- *   Will be used to store mempool objects.
- * @param paddr
- *   Array of physical addresses of the pages that comprise the given memory
- *   buffer.
- * @param pg_num
- *   Number of elements in the paddr array.
- * @param pg_shift
- *   LOG2 of the physical page size.
- * @return
- *   The pointer to the newly allocated mempool on success. NULL on error
- *   with rte_errno set appropriately. Possible rte_errno values include:
- *    - E_RTE_NO_CONFIG - function could not get pointer to rte_config structure
- *    - E_RTE_SECONDARY - function was called from a secondary process instance
- *    - E_RTE_NO_TAILQ - no tailq list could be got for the ring or mempool list
- *    - EINVAL - cache size provided is too large
- *    - ENOSPC - the maximum number of memzones has already been allocated
- *    - EEXIST - a memzone with the same name already exists
- *    - ENOMEM - no appropriate memory area found in which to create memzone
- */
-struct rte_mempool *
-rte_mempool_xmem_create(const char *name, unsigned n, unsigned elt_size,
-		unsigned cache_size, unsigned private_data_size,
-		rte_mempool_ctor_t *mp_init, void *mp_init_arg,
-		rte_mempool_obj_ctor_t *obj_init, void *obj_init_arg,
-		int socket_id, unsigned flags, void *vaddr,
-		const phys_addr_t paddr[], uint32_t pg_num, uint32_t pg_shift);
-
-#ifdef RTE_LIBRTE_XEN_DOM0
-/**
- * Creates a new mempool named *name* in memory on Xen Dom0.
- *
- * This function uses ``rte_mempool_xmem_create()`` to allocate memory. The
- * pool contains n elements of elt_size. Its size is set to n.
- * All elements of the mempool are allocated together with the mempool header,
- * and the memory buffer can consist of a set of disjoint physical pages.
- *
- * @param name
- *   The name of the mempool.
- * @param n
- *   The number of elements in the mempool. The optimum size (in terms of
- *   memory usage) for a mempool is when n is a power of two minus one:
- *   n = (2^q - 1).
- * @param elt_size
- *   The size of each element.
- * @param cache_size
- *   If cache_size is non-zero, the rte_mempool library will try to
- *   limit the accesses to the common lockless pool, by maintaining a
- *   per-lcore object cache. This argument must be lower than or equal to
- *   CONFIG_RTE_MEMPOOL_CACHE_MAX_SIZE. It is advised to choose
- *   cache_size to have "n modulo cache_size == 0": if this is
- *   not the case, some elements will always stay in the pool and will
- *   never be used. The access to the per-lcore table is of course
- *   faster than the multi-producer/consumer pool. The cache can be
- *   disabled if the cache_size argument is set to 0; it can be useful to
- *   avoid losing objects in cache. Note that even if not used, the
- *   memory space for cache is always reserved in a mempool structure,
- *   except if CONFIG_RTE_MEMPOOL_CACHE_MAX_SIZE is set to 0.
- * @param private_data_size
- *   The size of the private data appended after the mempool
- *   structure. This is useful for storing some private data after the
- *   mempool structure, as is done for rte_mbuf_pool for example.
- * @param mp_init
- *   A function pointer that is called for initialization of the pool,
- *   before object initialization. The user can initialize the private
- *   data in this function if needed. This parameter can be NULL if
- *   not needed.
- * @param mp_init_arg
- *   An opaque pointer to data that can be used in the mempool
- *   constructor function.
- * @param obj_init
- *   A function pointer that is called for each object at
- *   initialization of the pool. The user can set some meta data in
- *   objects if needed. This parameter can be NULL if not needed.
- *   The obj_init() function takes the mempool pointer, the init_arg,
- *   the object pointer and the object number as parameters.
- * @param obj_init_arg
- *   An opaque pointer to data that can be used as an argument for
- *   each call to the object constructor function.
- * @param socket_id
- *   The *socket_id* argument is the socket identifier in the case of
- *   NUMA. The value can be *SOCKET_ID_ANY* if there is no NUMA
- *   constraint for the reserved zone.
- * @param flags
- *   The *flags* argument is an OR of the following flags:
- *   - MEMPOOL_F_NO_SPREAD: By default, object addresses are spread
- *     between channels in RAM: the pool allocator will add padding
- *     between objects depending on the hardware configuration. See
- *     Memory alignment constraints for details. If this flag is set,
- *     the allocator will just align them to a cache line.
- *   - MEMPOOL_F_NO_CACHE_ALIGN: By default, the returned objects are
- *     cache-aligned. This flag removes this constraint, and no
- *     padding will be present between objects. This flag implies
- *     MEMPOOL_F_NO_SPREAD.
- *   - MEMPOOL_F_SP_PUT: If this flag is set, the default behavior
- *     when using rte_mempool_put() or rte_mempool_put_bulk() is
- *     "single-producer". Otherwise, it is "multi-producers".
- *   - MEMPOOL_F_SC_GET: If this flag is set, the default behavior
- *     when using rte_mempool_get() or rte_mempool_get_bulk() is
- *     "single-consumer". Otherwise, it is "multi-consumers".
- * @return
- *   The pointer to the newly allocated mempool on success. NULL on error
- *   with rte_errno set appropriately. Possible rte_errno values include:
- *    - E_RTE_NO_CONFIG - function could not get pointer to rte_config structure
- *    - E_RTE_SECONDARY - function was called from a secondary process instance
- *    - E_RTE_NO_TAILQ - no tailq list could be got for the ring or mempool list
- *    - EINVAL - cache size provided is too large
- *    - ENOSPC - the maximum number of memzones has already been allocated
- *    - EEXIST - a memzone with the same name already exists
- *    - ENOMEM - no appropriate memory area found in which to create memzone
- */
-struct rte_mempool *
-rte_dom0_mempool_create(const char *name, unsigned n, unsigned elt_size,
-		unsigned cache_size, unsigned private_data_size,
-		rte_mempool_ctor_t *mp_init, void *mp_init_arg,
-		rte_mempool_obj_ctor_t *obj_init, void *obj_init_arg,
-		int socket_id, unsigned flags);
-#endif
-
-/**
- * Dump the status of the mempool to the console.
- *
- * @param f
- *   A pointer to a file for output
- * @param mp
- *   A pointer to the mempool structure.
- */
-void rte_mempool_dump(FILE *f, const struct rte_mempool *mp);
-
-/**
- * @internal Put several objects back in the mempool; used internally.
- * @param mp
- *   A pointer to the mempool structure.
- * @param obj_table
- *   A pointer to a table of void * pointers (objects).
- * @param n
- *   The number of objects to store back in the mempool, must be strictly
- *   positive.
- * @param is_mp
- *   Mono-producer (0) or multi-producers (1).
- */
-static inline void __attribute__((always_inline))
-__mempool_put_bulk(struct rte_mempool *mp, void * const *obj_table,
-		    unsigned n, int is_mp)
-{
-#if RTE_MEMPOOL_CACHE_MAX_SIZE > 0
-	struct rte_mempool_cache *cache;
-	uint32_t index;
-	void **cache_objs;
-	unsigned lcore_id = rte_lcore_id();
-	uint32_t cache_size = mp->cache_size;
-	uint32_t flushthresh = mp->cache_flushthresh;
-#endif /* RTE_MEMPOOL_CACHE_MAX_SIZE > 0 */
-
-	/* increment stat now, adding to the mempool always succeeds */
-	__MEMPOOL_STAT_ADD(mp, put, n);
-
-#if RTE_MEMPOOL_CACHE_MAX_SIZE > 0
-	/* cache is not enabled or single producer */
-	if (unlikely(cache_size == 0 || is_mp == 0))
-		goto ring_enqueue;
-
-	/* Go straight to ring if put would overflow mem allocated for cache */
-	if (unlikely(n > RTE_MEMPOOL_CACHE_MAX_SIZE))
-		goto ring_enqueue;
-
-	cache = &mp->local_cache[lcore_id];
-	cache_objs = &cache->objs[cache->len];
-
-	/*
-	 * The cache works as follows:
-	 *   1. Add the objects to the cache.
-	 *   2. Once the cache flush threshold is crossed, everything above
-	 *   the configured cache size is flushed to the ring.
-	 */
-
-	/* Add elements back into the cache */
-	for (index = 0; index < n; ++index, obj_table++)
-		cache_objs[index] = *obj_table;
-
-	cache->len += n;
-
-	if (cache->len >= flushthresh) {
-		rte_ring_mp_enqueue_bulk(mp->ring, &cache->objs[cache_size],
-				cache->len - cache_size);
-		cache->len = cache_size;
-	}
-
-	return;
-
-ring_enqueue:
-#endif /* RTE_MEMPOOL_CACHE_MAX_SIZE > 0 */
-
-	/* push remaining objects in ring */
-#ifdef RTE_LIBRTE_MEMPOOL_DEBUG
-	if (is_mp) {
-		if (rte_ring_mp_enqueue_bulk(mp->ring, obj_table, n) < 0)
-			rte_panic("cannot put objects in mempool\n");
-	}
-	else {
-		if (rte_ring_sp_enqueue_bulk(mp->ring, obj_table, n) < 0)
-			rte_panic("cannot put objects in mempool\n");
-	}
-#else
-	if (is_mp)
-		rte_ring_mp_enqueue_bulk(mp->ring, obj_table, n);
-	else
-		rte_ring_sp_enqueue_bulk(mp->ring, obj_table, n);
-#endif
-}
-
-
-/**
- * Put several objects back in the mempool (multi-producers safe).
- *
- * @param mp
- *   A pointer to the mempool structure.
- * @param obj_table
- *   A pointer to a table of void * pointers (objects).
- * @param n
- *   The number of objects to add in the mempool from the obj_table.
- */
-static inline void __attribute__((always_inline))
-rte_mempool_mp_put_bulk(struct rte_mempool *mp, void * const *obj_table,
-			unsigned n)
-{
-	__mempool_check_cookies(mp, obj_table, n, 0);
-	__mempool_put_bulk(mp, obj_table, n, 1);
-}
-
-/**
- * Put several objects back in the mempool (NOT multi-producers safe).
- *
- * @param mp
- *   A pointer to the mempool structure.
- * @param obj_table
- *   A pointer to a table of void * pointers (objects).
- * @param n
- *   The number of objects to add in the mempool from obj_table.
- */
-static inline void
-rte_mempool_sp_put_bulk(struct rte_mempool *mp, void * const *obj_table,
-			unsigned n)
-{
-	__mempool_check_cookies(mp, obj_table, n, 0);
-	__mempool_put_bulk(mp, obj_table, n, 0);
-}
-
-/**
- * Put several objects back in the mempool.
- *
- * This function calls the multi-producer or the single-producer
- * version depending on the default behavior that was specified at
- * mempool creation time (see flags).
- *
- * @param mp
- *   A pointer to the mempool structure.
- * @param obj_table
- *   A pointer to a table of void * pointers (objects).
- * @param n
- *   The number of objects to add in the mempool from obj_table.
- */
-static inline void __attribute__((always_inline))
-rte_mempool_put_bulk(struct rte_mempool *mp, void * const *obj_table,
-		     unsigned n)
-{
-	__mempool_check_cookies(mp, obj_table, n, 0);
-	__mempool_put_bulk(mp, obj_table, n, !(mp->flags & MEMPOOL_F_SP_PUT));
-}
-
-/**
- * Put one object in the mempool (multi-producers safe).
- *
- * @param mp
- *   A pointer to the mempool structure.
- * @param obj
- *   A pointer to the object to be added.
- */
-static inline void __attribute__((always_inline))
-rte_mempool_mp_put(struct rte_mempool *mp, void *obj)
-{
-	rte_mempool_mp_put_bulk(mp, &obj, 1);
-}
-
-/**
- * Put one object back in the mempool (NOT multi-producers safe).
- *
- * @param mp
- *   A pointer to the mempool structure.
- * @param obj
- *   A pointer to the object to be added.
- */
-static inline void __attribute__((always_inline))
-rte_mempool_sp_put(struct rte_mempool *mp, void *obj)
-{
-	rte_mempool_sp_put_bulk(mp, &obj, 1);
-}
-
-/**
- * Put one object back in the mempool.
- *
- * This function calls the multi-producer or the single-producer
- * version depending on the default behavior that was specified at
- * mempool creation time (see flags).
- *
- * @param mp
- *   A pointer to the mempool structure.
- * @param obj
- *   A pointer to the object to be added.
- */
-static inline void __attribute__((always_inline))
-rte_mempool_put(struct rte_mempool *mp, void *obj)
-{
-	rte_mempool_put_bulk(mp, &obj, 1);
-}
-
-/**
- * @internal Get several objects from the mempool; used internally.
- * @param mp
- *   A pointer to the mempool structure.
- * @param obj_table
- *   A pointer to a table of void * pointers (objects).
- * @param n
- *   The number of objects to get, must be strictly positive.
- * @param is_mc
- *   Mono-consumer (0) or multi-consumers (1).
- * @return
- *   - >=0: Success; number of objects supplied.
- *   - <0: Error; code of ring dequeue function.
- */
-static inline int __attribute__((always_inline))
-__mempool_get_bulk(struct rte_mempool *mp, void **obj_table,
-		   unsigned n, int is_mc)
-{
-	int ret;
-#if RTE_MEMPOOL_CACHE_MAX_SIZE > 0
-	struct rte_mempool_cache *cache;
-	uint32_t index, len;
-	void **cache_objs;
-	unsigned lcore_id = rte_lcore_id();
-	uint32_t cache_size = mp->cache_size;
-
-	/* cache is not enabled or single consumer */
-	if (unlikely(cache_size == 0 || is_mc == 0 || n >= cache_size))
-		goto ring_dequeue;
-
-	cache = &mp->local_cache[lcore_id];
-	cache_objs = cache->objs;
-
-	/* Can this be satisfied from the cache? */
-	if (cache->len < n) {
-		/* No. Backfill the cache first, and then fill from it */
-		uint32_t req = n + (cache_size - cache->len);
-
-		/* How many do we require, i.e. the number to fill the cache + the request */
-		ret = rte_ring_mc_dequeue_bulk(mp->ring, &cache->objs[cache->len], req);
-		if (unlikely(ret < 0)) {
-			/*
-			 * In the off chance that we are buffer constrained,
-			 * i.e. we are not able to allocate cache + n, go to
-			 * the ring directly. If that fails, we are truly out
-			 * of buffers.
-			 */
-			goto ring_dequeue;
-		}
-
-		cache->len += req;
-	}
-
-	/* Now fill in the response ... */
-	for (index = 0, len = cache->len - 1; index < n; ++index, len--, obj_table++)
-		*obj_table = cache_objs[len];
-
-	cache->len -= n;
-
-	__MEMPOOL_STAT_ADD(mp, get_success, n);
-
-	return 0;
-
-ring_dequeue:
-#endif /* RTE_MEMPOOL_CACHE_MAX_SIZE > 0 */
-
-	/* get remaining objects from ring */
-	if (is_mc)
-		ret = rte_ring_mc_dequeue_bulk(mp->ring, obj_table, n);
-	else
-		ret = rte_ring_sc_dequeue_bulk(mp->ring, obj_table, n);
-
-	if (ret < 0)
-		__MEMPOOL_STAT_ADD(mp, get_fail, n);
-	else
-		__MEMPOOL_STAT_ADD(mp, get_success, n);
-
-	return ret;
-}
-
-/**
- * Get several objects from the mempool (multi-consumers safe).
- *
- * If cache is enabled, objects will be retrieved first from cache,
- * subsequently from the common pool. Note that it can return -ENOENT when
- * the local cache and common pool are empty, even if cache from other
- * lcores are full.
- *
- * @param mp
- *   A pointer to the mempool structure.
- * @param obj_table
- *   A pointer to a table of void * pointers (objects) that will be filled.
- * @param n
- *   The number of objects to get from mempool to obj_table.
- * @return
- *   - 0: Success; objects taken.
- *   - -ENOENT: Not enough entries in the mempool; no object is retrieved.
- */
-static inline int __attribute__((always_inline))
-rte_mempool_mc_get_bulk(struct rte_mempool *mp, void **obj_table, unsigned n)
-{
-	int ret;
-	ret = __mempool_get_bulk(mp, obj_table, n, 1);
-	if (ret == 0)
-		__mempool_check_cookies(mp, obj_table, n, 1);
-	return ret;
-}
-
-/**
- * Get several objects from the mempool (NOT multi-consumers safe).
- *
- * If cache is enabled, objects will be retrieved first from cache,
- * subsequently from the common pool. Note that it can return -ENOENT when
- * the local cache and common pool are empty, even if cache from other
- * lcores are full.
- *
- * @param mp
- *   A pointer to the mempool structure.
- * @param obj_table
- *   A pointer to a table of void * pointers (objects) that will be filled.
- * @param n
- *   The number of objects to get from the mempool to obj_table.
- * @return
- *   - 0: Success; objects taken.
- *   - -ENOENT: Not enough entries in the mempool; no object is
- *     retrieved.
- */
-static inline int __attribute__((always_inline))
-rte_mempool_sc_get_bulk(struct rte_mempool *mp, void **obj_table, unsigned n)
-{
-	int ret;
-	ret = __mempool_get_bulk(mp, obj_table, n, 0);
-	if (ret == 0)
-		__mempool_check_cookies(mp, obj_table, n, 1);
-	return ret;
-}
-
-/**
- * Get several objects from the mempool.
- *
- * This function calls the multi-consumers or the single-consumer
- * version, depending on the default behaviour that was specified at
- * mempool creation time (see flags).
- *
- * If cache is enabled, objects will be retrieved first from cache,
- * subsequently from the common pool. Note that it can return -ENOENT when
- * the local cache and common pool are empty, even if cache from other
- * lcores are full.
- *
- * @param mp
- *   A pointer to the mempool structure.
- * @param obj_table
- *   A pointer to a table of void * pointers (objects) that will be filled.
- * @param n
- *   The number of objects to get from the mempool to obj_table.
- * @return
- *   - 0: Success; objects taken
- *   - -ENOENT: Not enough entries in the mempool; no object is retrieved.
- */
-static inline int __attribute__((always_inline))
-rte_mempool_get_bulk(struct rte_mempool *mp, void **obj_table, unsigned n)
-{
-	int ret;
-	ret = __mempool_get_bulk(mp, obj_table, n,
-				 !(mp->flags & MEMPOOL_F_SC_GET));
-	if (ret == 0)
-		__mempool_check_cookies(mp, obj_table, n, 1);
-	return ret;
-}
-
-/**
- * Get one object from the mempool (multi-consumers safe).
- *
- * If cache is enabled, objects will be retrieved first from cache,
- * subsequently from the common pool. Note that it can return -ENOENT when
- * the local cache and common pool are empty, even if cache from other
- * lcores are full.
- *
- * @param mp
- *   A pointer to the mempool structure.
- * @param obj_p
- *   A pointer to a void * pointer (object) that will be filled.
- * @return
- *   - 0: Success; objects taken.
- *   - -ENOENT: Not enough entries in the mempool; no object is retrieved.
- */
-static inline int __attribute__((always_inline))
-rte_mempool_mc_get(struct rte_mempool *mp, void **obj_p)
-{
-	return rte_mempool_mc_get_bulk(mp, obj_p, 1);
-}
-
-/**
- * Get one object from the mempool (NOT multi-consumers safe).
- *
- * If cache is enabled, objects will be retrieved first from cache,
- * subsequently from the common pool. Note that it can return -ENOENT when
- * the local cache and common pool are empty, even if cache from other
- * lcores are full.
- *
- * @param mp
- *   A pointer to the mempool structure.
- * @param obj_p
- *   A pointer to a void * pointer (object) that will be filled.
- * @return
- *   - 0: Success; objects taken.
- *   - -ENOENT: Not enough entries in the mempool; no object is retrieved.
- */
-static inline int __attribute__((always_inline))
-rte_mempool_sc_get(struct rte_mempool *mp, void **obj_p)
-{
-	return rte_mempool_sc_get_bulk(mp, obj_p, 1);
-}
-
-/**
- * Get one object from the mempool.
- *
- * This function calls the multi-consumers or the single-consumer
- * version, depending on the default behavior that was specified at
- * mempool creation (see flags).
- *
- * If cache is enabled, objects will be retrieved first from cache,
- * subsequently from the common pool. Note that it can return -ENOENT when
- * the local cache and common pool are empty, even if cache from other
- * lcores are full.
- *
- * @param mp
- *   A pointer to the mempool structure.
- * @param obj_p
- *   A pointer to a void * pointer (object) that will be filled.
- * @return
- *   - 0: Success; objects taken.
- *   - -ENOENT: Not enough entries in the mempool; no object is retrieved.
- */
-static inline int __attribute__((always_inline))
-rte_mempool_get(struct rte_mempool *mp, void **obj_p)
-{
-	return rte_mempool_get_bulk(mp, obj_p, 1);
-}
-
-/**
- * Return the number of entries in the mempool.
- *
- * When cache is enabled, this function has to browse the length of
- * all lcores, so it should not be used in a data path, but only for
- * debug purposes.
- *
- * @param mp
- *   A pointer to the mempool structure.
- * @return
- *   The number of entries in the mempool.
- */
-unsigned rte_mempool_count(const struct rte_mempool *mp);
-
-/**
- * Return the number of free entries in the mempool ring, i.e. how many
- * entries can be freed back to the mempool.
- *
- * NOTE: This corresponds to the number of elements *allocated* from the
- * memory pool, not the number of elements in the pool itself. To count
- * the number of elements currently available in the pool, use
- * "rte_mempool_count".
- *
- * When cache is enabled, this function has to browse the length of
- * all lcores, so it should not be used in a data path, but only for
- * debug purposes.
- *
- * @param mp
- *   A pointer to the mempool structure.
- * @return
- *   The number of free entries in the mempool.
- */
-static inline unsigned
-rte_mempool_free_count(const struct rte_mempool *mp)
-{
-	return mp->size - rte_mempool_count(mp);
-}
-
-/**
- * Test if the mempool is full.
- *
- * When cache is enabled, this function has to browse the length of all
- * lcores, so it should not be used in a data path, but only for debug
- * purposes.
- *
- * @param mp
- *   A pointer to the mempool structure.
- * @return
- *   - 1: The mempool is full.
- *   - 0: The mempool is not full.
- */
-static inline int
-rte_mempool_full(const struct rte_mempool *mp)
-{
-	return !!(rte_mempool_count(mp) == mp->size);
-}
-
-/**
- * Test if the mempool is empty.
- *
- * When cache is enabled, this function has to browse the length of all
- * lcores, so it should not be used in a data path, but only for debug
- * purposes.
- *
- * @param mp
- *   A pointer to the mempool structure.
- * @return
- *   - 1: The mempool is empty.
- *   - 0: The mempool is not empty.
- */
-static inline int
-rte_mempool_empty(const struct rte_mempool *mp)
-{
-	return !!(rte_mempool_count(mp) == 0);
-}
-
-/**
- * Return the physical address of elt, which is an element of the pool mp.
- *
- * @param mp
- *   A pointer to the mempool structure.
- * @param elt
- *   A pointer (virtual address) to the element of the pool.
- * @return
- *   The physical address of the elt element.
- */
-static inline phys_addr_t
-rte_mempool_virt2phy(const struct rte_mempool *mp, const void *elt)
-{
-	if (rte_eal_has_hugepages()) {
-		uintptr_t off;
-
-		off = (const char *)elt - (const char *)mp->elt_va_start;
-		return (mp->elt_pa[off >> mp->pg_shift] + (off & mp->pg_mask));
-	} else {
-		/*
-		 * If huge pages are disabled, we cannot assume the
-		 * memory region to be physically contiguous.
-		 * Lookup for each element.
-		 */
-		return rte_mem_virt2phy(elt);
-	}
-}
-
-/**
- * Check the consistency of mempool objects.
- *
- * Verify the coherency of fields in the mempool structure. Also check
- * that the cookies of mempool objects (even the ones that are not
- * present in pool) have a correct value. If not, a panic will occur.
- *
- * @param mp
- *   A pointer to the mempool structure.
- */
-void rte_mempool_audit(const struct rte_mempool *mp);
-
-/**
- * Return a pointer to the private data in a mempool structure.
- *
- * @param mp
- *   A pointer to the mempool structure.
- * @return
- *   A pointer to the private data.
- */
-static inline void *rte_mempool_get_priv(struct rte_mempool *mp)
-{
-	return (char *)mp + MEMPOOL_HEADER_SIZE(mp, mp->pg_num);
-}
-
-/**
- * Dump the status of all mempools to a file.
- *
- * @param f
- *   A pointer to a file for output.
- */
-void rte_mempool_list_dump(FILE *f);
-
-/**
- * Search a mempool from its name
- *
- * @param name
- *   The name of the mempool.
- * @return
- *   The pointer to the mempool matching the name, or NULL if not found,
- *   with rte_errno set appropriately. Possible rte_errno values include:
- *    - ENOENT - required entry not available to return.
- *
- */
-struct rte_mempool *rte_mempool_lookup(const char *name);
-
-/**
- * Given a desired size of the mempool element and mempool flags,
- * calculate header, trailer, body and total sizes of the mempool object.
- *
- * @param elt_size
- *   The size of each element.
- * @param flags
- *   The flags used for the mempool creation.
- *   Consult rte_mempool_create() for more information about possible values.
- * @param sz
- *   The calculated detailed size of the mempool object. May be NULL.
- * @return
- *   Total size of the mempool object.
- */
-uint32_t rte_mempool_calc_obj_size(uint32_t elt_size, uint32_t flags,
-	struct rte_mempool_objsz *sz);
-
-/**
- * Calculate the maximum amount of memory required to store a given number
- * of objects. Assumes that the memory buffer will be aligned at a page
- * boundary. Note that if the object size is bigger than the page size,
- * it is assumed that we have a subset of physically contiguous pages big
- * enough to store at least one object.
- * @param elt_num
- *   Number of elements.
- * @param elt_sz
- *   The size of each element.
- * @param pg_shift
- *   LOG2 of the physical pages size.
- * @return
- *   Required memory size aligned at page boundary.
- */
-size_t rte_mempool_xmem_size(uint32_t elt_num, size_t elt_sz,
-	uint32_t pg_shift);
-
-/**
- * Calculate how much memory would be actually required with the given
- * memory footprint to store required number of objects.
- * @param vaddr
- *   Virtual address of the externally allocated memory buffer.
- *   Will be used to store mempool objects.
- * @param elt_num
- *   Number of elements.
- * @param elt_sz
- *   The size of each element.
- * @param paddr
- *   Array of physical addresses of the pages that comprise the given
- *   memory buffer.
- * @param pg_num
- *   Number of elements in the paddr array.
- * @param pg_shift
- *   LOG2 of the physical pages size.
- * @return
- *   Number of bytes needed to store given number of objects,
- *   aligned to the given page size.
- *   If the provided memory buffer is not big enough:
- *   (-1) * actual number of elements that can be stored in that buffer.
- */
-ssize_t rte_mempool_xmem_usage(void *vaddr, uint32_t elt_num, size_t elt_sz,
-	const phys_addr_t paddr[], uint32_t pg_num, uint32_t pg_shift);
-
-/**
- * Walk list of all memory pools
- *
- * @param func
- *   Iterator function
- * @param arg
- *   Argument passed to iterator
- */
-void rte_mempool_walk(void (*func)(const struct rte_mempool *, void *arg),
-		      void *arg);
-
-#ifdef __cplusplus
-}
-#endif
-
-#endif /* _RTE_MEMPOOL_H_ */
-- 
1.9.3

^ permalink raw reply	[flat|nested] 21+ messages in thread
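The cached get path documented in the removed rte_mempool.h above (backfill the per-lcore cache from the ring, then serve the request from the cache top) can be sketched in plain C. This is an illustrative model only, not DPDK code: `struct store`, `store_dequeue()` and `cached_get()` are hypothetical stand-ins for the ring and for `__mempool_get_bulk()`.

```c
#include <assert.h>

#define CACHE_SIZE 8

/* Per-thread object cache; sized with room for the backfill overshoot
 * (a request can pull n + (CACHE_SIZE - len) objects into the cache). */
struct cache {
	unsigned len;
	void *objs[CACHE_SIZE * 2];
};

/* Hypothetical backing store standing in for the mempool ring:
 * a simple LIFO stack of object pointers. */
struct store {
	unsigned len;
	void *objs[64];
};

/* Dequeue n objects, all-or-nothing, mirroring rte_ring_*_dequeue_bulk(). */
static int store_dequeue(struct store *s, void **dst, unsigned n)
{
	if (s->len < n)
		return -1; /* mirrors the ring's -ENOENT failure */
	for (unsigned i = 0; i < n; i++)
		dst[i] = s->objs[--s->len];
	return 0;
}

/* Serve n objects, backfilling the cache first when it runs short. */
static int cached_get(struct cache *c, struct store *s,
		      void **obj_table, unsigned n)
{
	if (c->len < n) {
		/* Backfill: fetch enough to satisfy n and top the cache up. */
		unsigned req = n + (CACHE_SIZE - c->len);
		if (store_dequeue(s, &c->objs[c->len], req) < 0)
			/* Buffer constrained: go to the store directly. */
			return store_dequeue(s, obj_table, n);
		c->len += req;
	}
	/* Serve from the top of the cache downwards. */
	for (unsigned i = 0, len = c->len - 1; i < n; i++, len--)
		obj_table[i] = c->objs[len];
	c->len -= n;
	return 0;
}
```

Note that the real `__mempool_get_bulk()` additionally bypasses the cache entirely when the cache is disabled, when `n >= cache_size`, or for single-consumer gets (`is_mc == 0`).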

* [dpdk-dev] [PATCH RFC 06/13] core: move librte_mbuf to core subdir
  2015-01-12 16:33 [dpdk-dev] [PATCH RFC 00/13] Update build system Sergio Gonzalez Monroy
                   ` (4 preceding siblings ...)
  2015-01-12 16:33 ` [dpdk-dev] [PATCH RFC 05/13] core: move librte_mempool " Sergio Gonzalez Monroy
@ 2015-01-12 16:33 ` Sergio Gonzalez Monroy
  2015-01-12 16:34 ` [dpdk-dev] [PATCH RFC 07/13] core: move librte_ring " Sergio Gonzalez Monroy
                   ` (8 subsequent siblings)
  14 siblings, 0 replies; 21+ messages in thread
From: Sergio Gonzalez Monroy @ 2015-01-12 16:33 UTC (permalink / raw)
  To: dev

This is equivalent to:

git mv lib/librte_mbuf lib/core

Signed-off-by: Sergio Gonzalez Monroy <sergio.gonzalez.monroy@intel.com>
---
 lib/core/librte_mbuf/Makefile   |   48 ++
 lib/core/librte_mbuf/rte_mbuf.c |  252 +++++++++
 lib/core/librte_mbuf/rte_mbuf.h | 1133 +++++++++++++++++++++++++++++++++++++++
 lib/librte_mbuf/Makefile        |   48 --
 lib/librte_mbuf/rte_mbuf.c      |  252 ---------
 lib/librte_mbuf/rte_mbuf.h      | 1133 ---------------------------------------
 6 files changed, 1433 insertions(+), 1433 deletions(-)
 create mode 100644 lib/core/librte_mbuf/Makefile
 create mode 100644 lib/core/librte_mbuf/rte_mbuf.c
 create mode 100644 lib/core/librte_mbuf/rte_mbuf.h
 delete mode 100644 lib/librte_mbuf/Makefile
 delete mode 100644 lib/librte_mbuf/rte_mbuf.c
 delete mode 100644 lib/librte_mbuf/rte_mbuf.h

diff --git a/lib/core/librte_mbuf/Makefile b/lib/core/librte_mbuf/Makefile
new file mode 100644
index 0000000..9b45ba4
--- /dev/null
+++ b/lib/core/librte_mbuf/Makefile
@@ -0,0 +1,48 @@
+#   BSD LICENSE
+#
+#   Copyright(c) 2010-2014 Intel Corporation. All rights reserved.
+#   All rights reserved.
+#
+#   Redistribution and use in source and binary forms, with or without
+#   modification, are permitted provided that the following conditions
+#   are met:
+#
+#     * Redistributions of source code must retain the above copyright
+#       notice, this list of conditions and the following disclaimer.
+#     * Redistributions in binary form must reproduce the above copyright
+#       notice, this list of conditions and the following disclaimer in
+#       the documentation and/or other materials provided with the
+#       distribution.
+#     * Neither the name of Intel Corporation nor the names of its
+#       contributors may be used to endorse or promote products derived
+#       from this software without specific prior written permission.
+#
+#   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+#   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+#   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+#   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+#   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+#   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+#   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+#   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+#   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+#   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+#   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+include $(RTE_SDK)/mk/rte.vars.mk
+
+# library name
+LIB = librte_mbuf.a
+
+CFLAGS += $(WERROR_FLAGS) -I$(SRCDIR) -O3
+
+# all source are stored in SRCS-y
+SRCS-$(CONFIG_RTE_LIBRTE_MBUF) := rte_mbuf.c
+
+# install includes
+SYMLINK-$(CONFIG_RTE_LIBRTE_MBUF)-include := rte_mbuf.h
+
+# this lib needs eal
+DEPDIRS-$(CONFIG_RTE_LIBRTE_MBUF) += lib/librte_eal lib/librte_mempool
+
+include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/lib/core/librte_mbuf/rte_mbuf.c b/lib/core/librte_mbuf/rte_mbuf.c
new file mode 100644
index 0000000..1b14e02
--- /dev/null
+++ b/lib/core/librte_mbuf/rte_mbuf.c
@@ -0,0 +1,252 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2010-2014 Intel Corporation. All rights reserved.
+ *   Copyright 2014 6WIND S.A.
+ *   All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Intel Corporation nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <string.h>
+#include <stdio.h>
+#include <stdlib.h>
+#include <stdint.h>
+#include <stdarg.h>
+#include <inttypes.h>
+#include <errno.h>
+#include <ctype.h>
+#include <sys/queue.h>
+
+#include <rte_debug.h>
+#include <rte_common.h>
+#include <rte_log.h>
+#include <rte_memory.h>
+#include <rte_memzone.h>
+#include <rte_launch.h>
+#include <rte_tailq.h>
+#include <rte_eal.h>
+#include <rte_per_lcore.h>
+#include <rte_lcore.h>
+#include <rte_atomic.h>
+#include <rte_branch_prediction.h>
+#include <rte_ring.h>
+#include <rte_mempool.h>
+#include <rte_mbuf.h>
+#include <rte_string_fns.h>
+#include <rte_hexdump.h>
+
+/*
+ * ctrlmbuf constructor, given as a callback function to
+ * rte_mempool_create()
+ */
+void
+rte_ctrlmbuf_init(struct rte_mempool *mp,
+		__attribute__((unused)) void *opaque_arg,
+		void *_m,
+		__attribute__((unused)) unsigned i)
+{
+	struct rte_mbuf *m = _m;
+	rte_pktmbuf_init(mp, opaque_arg, _m, i);
+	m->ol_flags |= CTRL_MBUF_FLAG;
+}
+
+/*
+ * pktmbuf pool constructor, given as a callback function to
+ * rte_mempool_create()
+ */
+void
+rte_pktmbuf_pool_init(struct rte_mempool *mp, void *opaque_arg)
+{
+	struct rte_pktmbuf_pool_private *mbp_priv;
+	uint16_t roomsz;
+
+	mbp_priv = rte_mempool_get_priv(mp);
+	roomsz = (uint16_t)(uintptr_t)opaque_arg;
+
+	/* Use default data room size. */
+	if (0 == roomsz)
+		roomsz = 2048 + RTE_PKTMBUF_HEADROOM;
+
+	mbp_priv->mbuf_data_room_size = roomsz;
+}
+
+/*
+ * pktmbuf constructor, given as a callback function to
+ * rte_mempool_create().
+ * Set the fields of a packet mbuf to their default values.
+ */
+void
+rte_pktmbuf_init(struct rte_mempool *mp,
+		 __attribute__((unused)) void *opaque_arg,
+		 void *_m,
+		 __attribute__((unused)) unsigned i)
+{
+	struct rte_mbuf *m = _m;
+	uint32_t buf_len = mp->elt_size - sizeof(struct rte_mbuf);
+
+	RTE_MBUF_ASSERT(mp->elt_size >= sizeof(struct rte_mbuf));
+
+	memset(m, 0, mp->elt_size);
+
+	/* start of buffer is just after mbuf structure */
+	m->buf_addr = (char *)m + sizeof(struct rte_mbuf);
+	m->buf_physaddr = rte_mempool_virt2phy(mp, m) +
+			sizeof(struct rte_mbuf);
+	m->buf_len = (uint16_t)buf_len;
+
+	/* keep some headroom between start of buffer and data */
+	m->data_off = RTE_MIN(RTE_PKTMBUF_HEADROOM, (uint16_t)m->buf_len);
+
+	/* init some constant fields */
+	m->pool = mp;
+	m->nb_segs = 1;
+	m->port = 0xff;
+}
+
+/* do some sanity checks on a mbuf: panic if it fails */
+void
+rte_mbuf_sanity_check(const struct rte_mbuf *m, int is_header)
+{
+	const struct rte_mbuf *m_seg;
+	unsigned nb_segs;
+
+	if (m == NULL)
+		rte_panic("mbuf is NULL\n");
+
+	/* generic checks */
+	if (m->pool == NULL)
+		rte_panic("bad mbuf pool\n");
+	if (m->buf_physaddr == 0)
+		rte_panic("bad phys addr\n");
+	if (m->buf_addr == NULL)
+		rte_panic("bad virt addr\n");
+
+#ifdef RTE_MBUF_REFCNT
+	uint16_t cnt = rte_mbuf_refcnt_read(m);
+	if ((cnt == 0) || (cnt == UINT16_MAX))
+		rte_panic("bad ref cnt\n");
+#endif
+
+	/* nothing to check for sub-segments */
+	if (is_header == 0)
+		return;
+
+	nb_segs = m->nb_segs;
+	m_seg = m;
+	while (m_seg && nb_segs != 0) {
+		m_seg = m_seg->next;
+		nb_segs--;
+	}
+	if (nb_segs != 0)
+		rte_panic("bad nb_segs\n");
+}
+
+/* dump a mbuf on console */
+void
+rte_pktmbuf_dump(FILE *f, const struct rte_mbuf *m, unsigned dump_len)
+{
+	unsigned int len;
+	unsigned nb_segs;
+
+	__rte_mbuf_sanity_check(m, 1);
+
+	fprintf(f, "dump mbuf at 0x%p, phys=%"PRIx64", buf_len=%u\n",
+	       m, (uint64_t)m->buf_physaddr, (unsigned)m->buf_len);
+	fprintf(f, "  pkt_len=%"PRIu32", ol_flags=%"PRIx64", nb_segs=%u, "
+	       "in_port=%u\n", m->pkt_len, m->ol_flags,
+	       (unsigned)m->nb_segs, (unsigned)m->port);
+	nb_segs = m->nb_segs;
+
+	while (m && nb_segs != 0) {
+		__rte_mbuf_sanity_check(m, 0);
+
+		fprintf(f, "  segment at 0x%p, data=0x%p, data_len=%u\n",
+			m, rte_pktmbuf_mtod(m, void *), (unsigned)m->data_len);
+		len = dump_len;
+		if (len > m->data_len)
+			len = m->data_len;
+		if (len != 0)
+			rte_hexdump(f, NULL, rte_pktmbuf_mtod(m, void *), len);
+		dump_len -= len;
+		m = m->next;
+		nb_segs --;
+	}
+}
+
+/*
+ * Get the name of a RX offload flag. Must be kept synchronized with flag
+ * definitions in rte_mbuf.h.
+ */
+const char *rte_get_rx_ol_flag_name(uint64_t mask)
+{
+	switch (mask) {
+	case PKT_RX_VLAN_PKT: return "PKT_RX_VLAN_PKT";
+	case PKT_RX_RSS_HASH: return "PKT_RX_RSS_HASH";
+	case PKT_RX_FDIR: return "PKT_RX_FDIR";
+	case PKT_RX_L4_CKSUM_BAD: return "PKT_RX_L4_CKSUM_BAD";
+	case PKT_RX_IP_CKSUM_BAD: return "PKT_RX_IP_CKSUM_BAD";
+	/* case PKT_RX_EIP_CKSUM_BAD: return "PKT_RX_EIP_CKSUM_BAD"; */
+	/* case PKT_RX_OVERSIZE: return "PKT_RX_OVERSIZE"; */
+	/* case PKT_RX_HBUF_OVERFLOW: return "PKT_RX_HBUF_OVERFLOW"; */
+	/* case PKT_RX_RECIP_ERR: return "PKT_RX_RECIP_ERR"; */
+	/* case PKT_RX_MAC_ERR: return "PKT_RX_MAC_ERR"; */
+	case PKT_RX_IPV4_HDR: return "PKT_RX_IPV4_HDR";
+	case PKT_RX_IPV4_HDR_EXT: return "PKT_RX_IPV4_HDR_EXT";
+	case PKT_RX_IPV6_HDR: return "PKT_RX_IPV6_HDR";
+	case PKT_RX_IPV6_HDR_EXT: return "PKT_RX_IPV6_HDR_EXT";
+	case PKT_RX_IEEE1588_PTP: return "PKT_RX_IEEE1588_PTP";
+	case PKT_RX_IEEE1588_TMST: return "PKT_RX_IEEE1588_TMST";
+	case PKT_RX_TUNNEL_IPV4_HDR: return "PKT_RX_TUNNEL_IPV4_HDR";
+	case PKT_RX_TUNNEL_IPV6_HDR: return "PKT_RX_TUNNEL_IPV6_HDR";
+	default: return NULL;
+	}
+}
+
+/*
+ * Get the name of a TX offload flag. Must be kept synchronized with flag
+ * definitions in rte_mbuf.h.
+ */
+const char *rte_get_tx_ol_flag_name(uint64_t mask)
+{
+	switch (mask) {
+	case PKT_TX_VLAN_PKT: return "PKT_TX_VLAN_PKT";
+	case PKT_TX_IP_CKSUM: return "PKT_TX_IP_CKSUM";
+	case PKT_TX_TCP_CKSUM: return "PKT_TX_TCP_CKSUM";
+	case PKT_TX_SCTP_CKSUM: return "PKT_TX_SCTP_CKSUM";
+	case PKT_TX_UDP_CKSUM: return "PKT_TX_UDP_CKSUM";
+	case PKT_TX_IEEE1588_TMST: return "PKT_TX_IEEE1588_TMST";
+	case PKT_TX_UDP_TUNNEL_PKT: return "PKT_TX_UDP_TUNNEL_PKT";
+	case PKT_TX_TCP_SEG: return "PKT_TX_TCP_SEG";
+	case PKT_TX_IPV4: return "PKT_TX_IPV4";
+	case PKT_TX_IPV6: return "PKT_TX_IPV6";
+	case PKT_TX_OUTER_IP_CKSUM: return "PKT_TX_OUTER_IP_CKSUM";
+	case PKT_TX_OUTER_IPV4: return "PKT_TX_OUTER_IPV4";
+	case PKT_TX_OUTER_IPV6: return "PKT_TX_OUTER_IPV6";
+	default: return NULL;
+	}
+}
diff --git a/lib/core/librte_mbuf/rte_mbuf.h b/lib/core/librte_mbuf/rte_mbuf.h
new file mode 100644
index 0000000..16059c6
--- /dev/null
+++ b/lib/core/librte_mbuf/rte_mbuf.h
@@ -0,0 +1,1133 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2010-2014 Intel Corporation. All rights reserved.
+ *   Copyright 2014 6WIND S.A.
+ *   All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Intel Corporation nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef _RTE_MBUF_H_
+#define _RTE_MBUF_H_
+
+/**
+ * @file
+ * RTE Mbuf
+ *
+ * The mbuf library provides the ability to create and destroy buffers
+ * that may be used by the RTE application to store message
+ * buffers. The message buffers are stored in a mempool, using the
+ * RTE mempool library.
+ *
+ * This library provides an API to allocate/free packet mbufs, which are
+ * used to carry network packets.
+ *
+ * To understand the concepts of packet buffers or mbufs, you
+ * should read "TCP/IP Illustrated, Volume 2: The Implementation"
+ * by Richard Stevens (Addison-Wesley, 1995, ISBN 0-201-63354-X):
+ * http://www.kohala.com/start/tcpipiv2.html
+ */
+
+#include <stdint.h>
+#include <rte_mempool.h>
+#include <rte_memory.h>
+#include <rte_atomic.h>
+#include <rte_prefetch.h>
+#include <rte_branch_prediction.h>
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+/* deprecated feature, renamed in RTE_MBUF_REFCNT */
+#pragma GCC poison RTE_MBUF_SCATTER_GATHER
+
+/*
+ * Packet Offload Features Flags. They also carry packet type information.
+ * These are critical resources: both RX and TX share these bits, so be
+ * cautious about any change.
+ *
+ * - RX flags start at bit position zero, and get added to the left of previous
+ *   flags.
+ * - The most-significant 8 bits are reserved for generic mbuf flags
+ * - TX flags therefore start at bit position 55 (i.e. 63-8), and new flags get
+ *   added to the right of the previously defined flags
+ *
+ * Keep these flags synchronized with rte_get_rx_ol_flag_name() and
+ * rte_get_tx_ol_flag_name().
+ */
+#define PKT_RX_VLAN_PKT      (1ULL << 0)  /**< RX packet is a 802.1q VLAN packet. */
+#define PKT_RX_RSS_HASH      (1ULL << 1)  /**< RX packet with RSS hash result. */
+#define PKT_RX_FDIR          (1ULL << 2)  /**< RX packet with FDIR match indicate. */
+#define PKT_RX_L4_CKSUM_BAD  (1ULL << 3)  /**< L4 cksum of RX pkt. is not OK. */
+#define PKT_RX_IP_CKSUM_BAD  (1ULL << 4)  /**< IP cksum of RX pkt. is not OK. */
+#define PKT_RX_EIP_CKSUM_BAD (0ULL << 0)  /**< External IP header checksum error. */
+#define PKT_RX_OVERSIZE      (0ULL << 0)  /**< Num of desc of an RX pkt oversize. */
+#define PKT_RX_HBUF_OVERFLOW (0ULL << 0)  /**< Header buffer overflow. */
+#define PKT_RX_RECIP_ERR     (0ULL << 0)  /**< Hardware processing error. */
+#define PKT_RX_MAC_ERR       (0ULL << 0)  /**< MAC error. */
+#define PKT_RX_IPV4_HDR      (1ULL << 5)  /**< RX packet with IPv4 header. */
+#define PKT_RX_IPV4_HDR_EXT  (1ULL << 6)  /**< RX packet with extended IPv4 header. */
+#define PKT_RX_IPV6_HDR      (1ULL << 7)  /**< RX packet with IPv6 header. */
+#define PKT_RX_IPV6_HDR_EXT  (1ULL << 8)  /**< RX packet with extended IPv6 header. */
+#define PKT_RX_IEEE1588_PTP  (1ULL << 9)  /**< RX IEEE1588 L2 Ethernet PT Packet. */
+#define PKT_RX_IEEE1588_TMST (1ULL << 10) /**< RX IEEE1588 L2/L4 timestamped packet.*/
+#define PKT_RX_TUNNEL_IPV4_HDR (1ULL << 11) /**< RX tunnel packet with IPv4 header.*/
+#define PKT_RX_TUNNEL_IPV6_HDR (1ULL << 12) /**< RX tunnel packet with IPv6 header. */
+#define PKT_RX_FDIR_ID       (1ULL << 13) /**< FD id reported if FDIR match. */
+#define PKT_RX_FDIR_FLX      (1ULL << 14) /**< Flexible bytes reported if FDIR match. */
+/* add new RX flags here */
+
+/* add new TX flags here */
+
+/**
+ * TCP segmentation offload. To enable this offload feature for a
+ * packet to be transmitted on hardware supporting TSO:
+ *  - set the PKT_TX_TCP_SEG flag in mbuf->ol_flags (this flag implies
+ *    PKT_TX_TCP_CKSUM)
+ *  - set the flag PKT_TX_IPV4 or PKT_TX_IPV6
+ *  - if it's IPv4, set the PKT_TX_IP_CKSUM flag and write the IP checksum
+ *    to 0 in the packet
+ *  - fill the mbuf offload information: l2_len, l3_len, l4_len, tso_segsz
+ *  - calculate the pseudo header checksum without taking ip_len in account,
+ *    and set it in the TCP header. Refer to rte_ipv4_phdr_cksum() and
+ *    rte_ipv6_phdr_cksum() that can be used as helpers.
+ */
+#define PKT_TX_TCP_SEG       (1ULL << 49)
+
+/** TX packet is a UDP tunneled packet. It must be specified when using
+ *  outer checksum offload (PKT_TX_OUTER_IP_CKSUM). */
+#define PKT_TX_UDP_TUNNEL_PKT (1ULL << 50)
+#define PKT_TX_IEEE1588_TMST (1ULL << 51) /**< TX IEEE1588 packet to timestamp. */
+
+/**
+ * Bits 52+53 used for L4 packet type with checksum enabled: 00: Reserved,
+ * 01: TCP checksum, 10: SCTP checksum, 11: UDP checksum. To use hardware
+ * L4 checksum offload, the user needs to:
+ *  - fill l2_len and l3_len in mbuf
+ *  - set the flags PKT_TX_TCP_CKSUM, PKT_TX_SCTP_CKSUM or PKT_TX_UDP_CKSUM
+ *  - set the flag PKT_TX_IPV4 or PKT_TX_IPV6
+ *  - calculate the pseudo header checksum and set it in the L4 header (only
+ *    for TCP or UDP). See rte_ipv4_phdr_cksum() and rte_ipv6_phdr_cksum().
+ *    For SCTP, set the crc field to 0.
+ */
+#define PKT_TX_L4_NO_CKSUM   (0ULL << 52) /**< Disable L4 cksum of TX pkt. */
+#define PKT_TX_TCP_CKSUM     (1ULL << 52) /**< TCP cksum of TX pkt. computed by NIC. */
+#define PKT_TX_SCTP_CKSUM    (2ULL << 52) /**< SCTP cksum of TX pkt. computed by NIC. */
+#define PKT_TX_UDP_CKSUM     (3ULL << 52) /**< UDP cksum of TX pkt. computed by NIC. */
+#define PKT_TX_L4_MASK       (3ULL << 52) /**< Mask for L4 cksum offload request. */
+
+#define PKT_TX_IP_CKSUM      (1ULL << 54) /**< IP cksum of TX pkt. computed by NIC. */
+#define PKT_TX_IPV4_CSUM     PKT_TX_IP_CKSUM /**< Alias of PKT_TX_IP_CKSUM. */
+
+/** Packet is IPv4 without requiring IP checksum offload. */
+#define PKT_TX_IPV4          (1ULL << 55)
+
+/** Tell the NIC it's an IPv6 packet.*/
+#define PKT_TX_IPV6          (1ULL << 56)
+
+#define PKT_TX_VLAN_PKT      (1ULL << 57) /**< TX packet is a 802.1q VLAN packet. */
+
+/** Outer IP checksum of TX packet, computed by NIC for a tunneled packet.
+ *  The tunnel type must also be specified, e.g. PKT_TX_UDP_TUNNEL_PKT. */
+#define PKT_TX_OUTER_IP_CKSUM   (1ULL << 58)
+
+/** Packet is outer IPv4 without requiring IP checksum offload for a tunneled packet. */
+#define PKT_TX_OUTER_IPV4   (1ULL << 59)
+
+/** Tell the NIC it's an outer IPv6 packet for a tunneled packet. */
+#define PKT_TX_OUTER_IPV6    (1ULL << 60)
+
+/* Use final bit of flags to indicate a control mbuf */
+#define CTRL_MBUF_FLAG       (1ULL << 63) /**< Mbuf contains control data */
+
+/**
+ * Get the name of a RX offload flag
+ *
+ * @param mask
+ *   The mask describing the flag.
+ * @return
+ *   The name of this flag, or NULL if it's not a valid RX flag.
+ */
+const char *rte_get_rx_ol_flag_name(uint64_t mask);
+
+/**
+ * Get the name of a TX offload flag
+ *
+ * @param mask
+ *   The mask describing the flag. Usually only one bit must be set.
+ *   Several bits can be given if they belong to the same mask.
+ *   Ex: PKT_TX_L4_MASK.
+ * @return
+ *   The name of this flag, or NULL if it's not a valid TX flag.
+ */
+const char *rte_get_tx_ol_flag_name(uint64_t mask);
+
+/* define a set of marker types that can be used to refer to set points in the
+ * mbuf */
+typedef void    *MARKER[0];   /**< generic marker for a point in a structure */
+typedef uint8_t  MARKER8[0];  /**< generic marker with 1B alignment */
+typedef uint64_t MARKER64[0]; /**< marker that allows us to overwrite 8 bytes
+                               * with a single assignment */
+
+/**
+ * The generic rte_mbuf, containing a packet mbuf.
+ */
+struct rte_mbuf {
+	MARKER cacheline0;
+
+	void *buf_addr;           /**< Virtual address of segment buffer. */
+	phys_addr_t buf_physaddr; /**< Physical address of segment buffer. */
+
+	uint16_t buf_len;         /**< Length of segment buffer. */
+
+	/* next 6 bytes are initialised on RX descriptor rearm */
+	MARKER8 rearm_data;
+	uint16_t data_off;
+
+	/**
+	 * 16-bit Reference counter.
+	 * It should only be accessed using the following functions:
+	 * rte_mbuf_refcnt_update(), rte_mbuf_refcnt_read(), and
+	 * rte_mbuf_refcnt_set(). The functionality of these functions (atomic,
+	 * or non-atomic) is controlled by the CONFIG_RTE_MBUF_REFCNT_ATOMIC
+	 * config option.
+	 */
+	union {
+#ifdef RTE_MBUF_REFCNT
+		rte_atomic16_t refcnt_atomic; /**< Atomically accessed refcnt */
+		uint16_t refcnt;              /**< Non-atomically accessed refcnt */
+#endif
+		uint16_t refcnt_reserved;     /**< Do not use this field */
+	};
+	uint8_t nb_segs;          /**< Number of segments. */
+	uint8_t port;             /**< Input port. */
+
+	uint64_t ol_flags;        /**< Offload features. */
+
+	/* remaining bytes are set on RX when pulling packet from descriptor */
+	MARKER rx_descriptor_fields1;
+
+	/**
+	 * The packet type, which indicates the ordinary packet format and
+	 * also the tunneled packet format, i.e. each number represents a
+	 * type of packet.
+	 */
+	uint16_t packet_type;
+
+	uint16_t data_len;        /**< Amount of data in segment buffer. */
+	uint32_t pkt_len;         /**< Total pkt len: sum of all segments. */
+	uint16_t vlan_tci;        /**< VLAN Tag Control Identifier (CPU order) */
+	uint16_t reserved;
+	union {
+		uint32_t rss;     /**< RSS hash result if RSS enabled */
+		struct {
+			union {
+				struct {
+					uint16_t hash;
+					uint16_t id;
+				};
+				uint32_t lo;
+				/**< Second 4 flexible bytes */
+			};
+			uint32_t hi;
+			/**< First 4 flexible bytes or FD ID, dependent on
+			     PKT_RX_FDIR_* flag in ol_flags. */
+		} fdir;           /**< Filter identifier if FDIR enabled */
+		uint32_t sched;   /**< Hierarchical scheduler */
+		uint32_t usr;	  /**< User defined tags. See @rte_distributor_process */
+	} hash;                   /**< hash information */
+
+	/* second cache line - fields only used in slow path or on TX */
+	MARKER cacheline1 __rte_cache_aligned;
+
+	union {
+		void *userdata;   /**< Can be used for external metadata */
+		uint64_t udata64; /**< Allow 8-byte userdata on 32-bit */
+	};
+
+	struct rte_mempool *pool; /**< Pool from which mbuf was allocated. */
+	struct rte_mbuf *next;    /**< Next segment of scattered packet. */
+
+	/* fields to support TX offloads */
+	union {
+		uint64_t tx_offload;       /**< combined for easy fetch */
+		struct {
+			uint64_t l2_len:7; /**< L2 (MAC) Header Length. */
+			uint64_t l3_len:9; /**< L3 (IP) Header Length. */
+			uint64_t l4_len:8; /**< L4 (TCP/UDP) Header Length. */
+			uint64_t tso_segsz:16; /**< TCP TSO segment size */
+
+			/* fields for TX offloading of tunnels */
+			uint64_t outer_l3_len:9; /**< Outer L3 (IP) Hdr Length. */
+			uint64_t outer_l2_len:7; /**< Outer L2 (MAC) Hdr Length. */
+
+			/* uint64_t unused:8; */
+		};
+	};
+} __rte_cache_aligned;
+
+/**
+ * Given a buf_addr, return the pointer to the corresponding mbuf.
+ */
+#define RTE_MBUF_FROM_BADDR(ba)     (((struct rte_mbuf *)(ba)) - 1)
+
+/**
+ * Given a pointer to an mbuf, return the address to which its buf_addr
+ * should point.
+ */
+#define RTE_MBUF_TO_BADDR(mb)       (((struct rte_mbuf *)(mb)) + 1)
+
+/**
+ * Returns TRUE if given mbuf is indirect, or FALSE otherwise.
+ */
+#define RTE_MBUF_INDIRECT(mb)   (RTE_MBUF_FROM_BADDR((mb)->buf_addr) != (mb))
+
+/**
+ * Returns TRUE if given mbuf is direct, or FALSE otherwise.
+ */
+#define RTE_MBUF_DIRECT(mb)     (RTE_MBUF_FROM_BADDR((mb)->buf_addr) == (mb))
+
+
+/**
+ * Private data in case of pktmbuf pool.
+ *
+ * A structure that contains some pktmbuf_pool-specific data that are
+ * appended after the mempool structure (in private data).
+ */
+struct rte_pktmbuf_pool_private {
+	uint16_t mbuf_data_room_size; /**< Size of data space in each mbuf.*/
+};
+
+#ifdef RTE_LIBRTE_MBUF_DEBUG
+
+/**  check mbuf type in debug mode */
+#define __rte_mbuf_sanity_check(m, is_h) rte_mbuf_sanity_check(m, is_h)
+
+/**  check mbuf type in debug mode if mbuf pointer is not null */
+#define __rte_mbuf_sanity_check_raw(m, is_h)	do {       \
+	if ((m) != NULL)                                   \
+		rte_mbuf_sanity_check(m, is_h);          \
+} while (0)
+
+/**  MBUF asserts in debug mode */
+#define RTE_MBUF_ASSERT(exp)                                         \
+if (!(exp)) {                                                        \
+	rte_panic("line%d\tassert \"" #exp "\" failed\n", __LINE__); \
+}
+
+#else /*  RTE_LIBRTE_MBUF_DEBUG */
+
+/**  check mbuf type in debug mode */
+#define __rte_mbuf_sanity_check(m, is_h) do { } while (0)
+
+/**  check mbuf type in debug mode if mbuf pointer is not null */
+#define __rte_mbuf_sanity_check_raw(m, is_h) do { } while (0)
+
+/**  MBUF asserts in debug mode */
+#define RTE_MBUF_ASSERT(exp)                do { } while (0)
+
+#endif /*  RTE_LIBRTE_MBUF_DEBUG */
+
+#ifdef RTE_MBUF_REFCNT
+#ifdef RTE_MBUF_REFCNT_ATOMIC
+
+/**
+ * Adds given value to an mbuf's refcnt and returns its new value.
+ * @param m
+ *   Mbuf to update
+ * @param value
+ *   Value to add/subtract
+ * @return
+ *   Updated value
+ */
+static inline uint16_t
+rte_mbuf_refcnt_update(struct rte_mbuf *m, int16_t value)
+{
+	return (uint16_t)(rte_atomic16_add_return(&m->refcnt_atomic, value));
+}
+
+/**
+ * Reads the value of an mbuf's refcnt.
+ * @param m
+ *   Mbuf to read
+ * @return
+ *   Reference count number.
+ */
+static inline uint16_t
+rte_mbuf_refcnt_read(const struct rte_mbuf *m)
+{
+	return (uint16_t)(rte_atomic16_read(&m->refcnt_atomic));
+}
+
+/**
+ * Sets an mbuf's refcnt to a defined value.
+ * @param m
+ *   Mbuf to update
+ * @param new_value
+ *   Value set
+ */
+static inline void
+rte_mbuf_refcnt_set(struct rte_mbuf *m, uint16_t new_value)
+{
+	rte_atomic16_set(&m->refcnt_atomic, new_value);
+}
+
+#else /* ! RTE_MBUF_REFCNT_ATOMIC */
+
+/**
+ * Adds given value to an mbuf's refcnt and returns its new value.
+ */
+static inline uint16_t
+rte_mbuf_refcnt_update(struct rte_mbuf *m, int16_t value)
+{
+	m->refcnt = (uint16_t)(m->refcnt + value);
+	return m->refcnt;
+}
+
+/**
+ * Reads the value of an mbuf's refcnt.
+ */
+static inline uint16_t
+rte_mbuf_refcnt_read(const struct rte_mbuf *m)
+{
+	return m->refcnt;
+}
+
+/**
+ * Sets an mbuf's refcnt to the defined value.
+ */
+static inline void
+rte_mbuf_refcnt_set(struct rte_mbuf *m, uint16_t new_value)
+{
+	m->refcnt = new_value;
+}
+
+#endif /* RTE_MBUF_REFCNT_ATOMIC */
+
+/** Mbuf prefetch */
+#define RTE_MBUF_PREFETCH_TO_FREE(m) do {       \
+	if ((m) != NULL)                        \
+		rte_prefetch0(m);               \
+} while (0)
+
+#else /* ! RTE_MBUF_REFCNT */
+
+/** Mbuf prefetch */
+#define RTE_MBUF_PREFETCH_TO_FREE(m) do { } while (0)
+
+#define rte_mbuf_refcnt_set(m,v) do { } while (0)
+
+#endif /* RTE_MBUF_REFCNT */
+
+
+/**
+ * Sanity checks on an mbuf.
+ *
+ * Check the consistency of the given mbuf. The function will cause a
+ * panic if corruption is detected.
+ *
+ * @param m
+ *   The mbuf to be checked.
+ * @param is_header
+ *   True if the mbuf is a packet header, false if it is a sub-segment
+ *   of a packet (in this case, some fields like nb_segs are not checked)
+ */
+void
+rte_mbuf_sanity_check(const struct rte_mbuf *m, int is_header);
+
+/**
+ * @internal Allocate a new mbuf from mempool *mp*.
+ * The use of that function is reserved for RTE internal needs.
+ * Please use rte_pktmbuf_alloc().
+ *
+ * @param mp
+ *   The mempool from which mbuf is allocated.
+ * @return
+ *   - The pointer to the new mbuf on success.
+ *   - NULL if allocation failed.
+ */
+static inline struct rte_mbuf *__rte_mbuf_raw_alloc(struct rte_mempool *mp)
+{
+	struct rte_mbuf *m;
+	void *mb = NULL;
+	if (rte_mempool_get(mp, &mb) < 0)
+		return NULL;
+	m = (struct rte_mbuf *)mb;
+#ifdef RTE_MBUF_REFCNT
+	RTE_MBUF_ASSERT(rte_mbuf_refcnt_read(m) == 0);
+	rte_mbuf_refcnt_set(m, 1);
+#endif /* RTE_MBUF_REFCNT */
+	return (m);
+}
+
+/**
+ * @internal Put mbuf back into its original mempool.
+ * The use of that function is reserved for RTE internal needs.
+ * Please use rte_pktmbuf_free().
+ *
+ * @param m
+ *   The mbuf to be freed.
+ */
+static inline void __attribute__((always_inline))
+__rte_mbuf_raw_free(struct rte_mbuf *m)
+{
+#ifdef RTE_MBUF_REFCNT
+	RTE_MBUF_ASSERT(rte_mbuf_refcnt_read(m) == 0);
+#endif /* RTE_MBUF_REFCNT */
+	rte_mempool_put(m->pool, m);
+}
+
+/* Operations on ctrl mbuf */
+
+/**
+ * The control mbuf constructor.
+ *
+ * This function initializes some fields in an mbuf structure that are
+ * not modified by the user once created (mbuf type, origin pool, buffer
+ * start address, and so on). This function is given as a callback function
+ * to rte_mempool_create() at pool creation time.
+ *
+ * @param mp
+ *   The mempool from which the mbuf is allocated.
+ * @param opaque_arg
+ *   A pointer that can be used by the user to retrieve useful information
+ *   for mbuf initialization. This pointer comes from the ``init_arg``
+ *   parameter of rte_mempool_create().
+ * @param m
+ *   The mbuf to initialize.
+ * @param i
+ *   The index of the mbuf in the pool table.
+ */
+void rte_ctrlmbuf_init(struct rte_mempool *mp, void *opaque_arg,
+		void *m, unsigned i);
+
+/**
+ * Allocate a new mbuf (type is ctrl) from mempool *mp*.
+ *
+ * This new mbuf is initialized with data pointing to the beginning of
+ * buffer, and with a length of zero.
+ *
+ * @param mp
+ *   The mempool from which the mbuf is allocated.
+ * @return
+ *   - The pointer to the new mbuf on success.
+ *   - NULL if allocation failed.
+ */
+#define rte_ctrlmbuf_alloc(mp) rte_pktmbuf_alloc(mp)
+
+/**
+ * Free a control mbuf back into its original mempool.
+ *
+ * @param m
+ *   The control mbuf to be freed.
+ */
+#define rte_ctrlmbuf_free(m) rte_pktmbuf_free(m)
+
+/**
+ * A macro that returns the pointer to the carried data.
+ *
+ * The value that can be read or assigned.
+ *
+ * @param m
+ *   The control mbuf.
+ */
+#define rte_ctrlmbuf_data(m) ((char *)((m)->buf_addr) + (m)->data_off)
+
+/**
+ * A macro that returns the length of the carried data.
+ *
+ * The value that can be read or assigned.
+ *
+ * @param m
+ *   The control mbuf.
+ */
+#define rte_ctrlmbuf_len(m) rte_pktmbuf_data_len(m)
+
+/**
+ * Tests if an mbuf is a control mbuf
+ *
+ * @param m
+ *   The mbuf to be tested
+ * @return
+ *   - True (1) if the mbuf is a control mbuf
+ *   - False(0) otherwise
+ */
+static inline int
+rte_is_ctrlmbuf(struct rte_mbuf *m)
+{
+	return (!!(m->ol_flags & CTRL_MBUF_FLAG));
+}
+
+/* Operations on pkt mbuf */
+
+/**
+ * The packet mbuf constructor.
+ *
+ * This function initializes some fields in the mbuf structure that are
+ * not modified by the user once created (origin pool, buffer start
+ * address, and so on). This function is given as a callback function to
+ * rte_mempool_create() at pool creation time.
+ *
+ * @param mp
+ *   The mempool from which mbufs originate.
+ * @param opaque_arg
+ *   A pointer that can be used by the user to retrieve useful information
+ *   for mbuf initialization. This pointer comes from the ``init_arg``
+ *   parameter of rte_mempool_create().
+ * @param m
+ *   The mbuf to initialize.
+ * @param i
+ *   The index of the mbuf in the pool table.
+ */
+void rte_pktmbuf_init(struct rte_mempool *mp, void *opaque_arg,
+		      void *m, unsigned i);
+
+
+/**
+ * A  packet mbuf pool constructor.
+ *
+ * This function initializes the mempool private data in the case of a
+ * pktmbuf pool. This private data is needed by the driver. The
+ * function is given as a callback function to rte_mempool_create() at
+ * pool creation. It can be extended by the user, for example, to
+ * provide another packet size.
+ *
+ * @param mp
+ *   The mempool from which mbufs originate.
+ * @param opaque_arg
+ *   A pointer that can be used by the user to retrieve useful information
+ *   for mbuf initialization. This pointer comes from the ``init_arg``
+ *   parameter of rte_mempool_create().
+ */
+void rte_pktmbuf_pool_init(struct rte_mempool *mp, void *opaque_arg);
+
+/**
+ * Reset the fields of a packet mbuf to their default values.
+ *
+ * The given mbuf must have only one segment.
+ *
+ * @param m
+ *   The packet mbuf to be reset.
+ */
+static inline void rte_pktmbuf_reset(struct rte_mbuf *m)
+{
+	m->next = NULL;
+	m->pkt_len = 0;
+	m->tx_offload = 0;
+	m->vlan_tci = 0;
+	m->nb_segs = 1;
+	m->port = 0xff;
+
+	m->ol_flags = 0;
+	m->packet_type = 0;
+	m->data_off = (RTE_PKTMBUF_HEADROOM <= m->buf_len) ?
+			RTE_PKTMBUF_HEADROOM : m->buf_len;
+
+	m->data_len = 0;
+	__rte_mbuf_sanity_check(m, 1);
+}
+
+/**
+ * Allocate a new mbuf from a mempool.
+ *
+ * This new mbuf contains one segment, which has a length of 0. The pointer
+ * to data is initialized to have some bytes of headroom in the buffer
+ * (if buffer size allows).
+ *
+ * @param mp
+ *   The mempool from which the mbuf is allocated.
+ * @return
+ *   - The pointer to the new mbuf on success.
+ *   - NULL if allocation failed.
+ */
+static inline struct rte_mbuf *rte_pktmbuf_alloc(struct rte_mempool *mp)
+{
+	struct rte_mbuf *m;
+	if ((m = __rte_mbuf_raw_alloc(mp)) != NULL)
+		rte_pktmbuf_reset(m);
+	return (m);
+}
+
+#ifdef RTE_MBUF_REFCNT
+
+/**
+ * Attach a packet mbuf to another packet mbuf.
+ * After attachment, the attached mbuf is referred to as 'indirect',
+ * while the mbuf it is attached to is referred to as 'direct'.
+ * Currently not supported:
+ *  - attachment to an indirect mbuf (i.e. md has to be direct).
+ *  - attachment of an already indirect mbuf (i.e. mi has to be direct).
+ *  - attaching an mbuf (mi) that is in use by someone else,
+ *    i.e. its reference counter is greater than 1.
+ *
+ * @param mi
+ *   The indirect packet mbuf.
+ * @param md
+ *   The direct packet mbuf.
+ */
+
+static inline void rte_pktmbuf_attach(struct rte_mbuf *mi, struct rte_mbuf *md)
+{
+	RTE_MBUF_ASSERT(RTE_MBUF_DIRECT(md) &&
+	    RTE_MBUF_DIRECT(mi) &&
+	    rte_mbuf_refcnt_read(mi) == 1);
+
+	rte_mbuf_refcnt_update(md, 1);
+	mi->buf_physaddr = md->buf_physaddr;
+	mi->buf_addr = md->buf_addr;
+	mi->buf_len = md->buf_len;
+
+	mi->next = md->next;
+	mi->data_off = md->data_off;
+	mi->data_len = md->data_len;
+	mi->port = md->port;
+	mi->vlan_tci = md->vlan_tci;
+	mi->tx_offload = md->tx_offload;
+	mi->hash = md->hash;
+
+	mi->next = NULL;
+	mi->pkt_len = mi->data_len;
+	mi->nb_segs = 1;
+	mi->ol_flags = md->ol_flags;
+	mi->packet_type = md->packet_type;
+
+	__rte_mbuf_sanity_check(mi, 1);
+	__rte_mbuf_sanity_check(md, 0);
+}
+
+/**
+ * Detach an indirect packet mbuf:
+ *  - restore the original mbuf address and length values.
+ *  - reset pktmbuf data and data_len to their default values.
+ * All other fields of the given packet mbuf are left intact.
+ *
+ * @param m
+ *   The indirect attached packet mbuf.
+ */
+
+static inline void rte_pktmbuf_detach(struct rte_mbuf *m)
+{
+	const struct rte_mempool *mp = m->pool;
+	void *buf = RTE_MBUF_TO_BADDR(m);
+	uint32_t buf_len = mp->elt_size - sizeof(*m);
+	m->buf_physaddr = rte_mempool_virt2phy(mp, m) + sizeof (*m);
+
+	m->buf_addr = buf;
+	m->buf_len = (uint16_t)buf_len;
+
+	m->data_off = (RTE_PKTMBUF_HEADROOM <= m->buf_len) ?
+			RTE_PKTMBUF_HEADROOM : m->buf_len;
+
+	m->data_len = 0;
+}
+
+#endif /* RTE_MBUF_REFCNT */
+
+
+static inline struct rte_mbuf* __attribute__((always_inline))
+__rte_pktmbuf_prefree_seg(struct rte_mbuf *m)
+{
+	__rte_mbuf_sanity_check(m, 0);
+
+#ifdef RTE_MBUF_REFCNT
+	if (likely (rte_mbuf_refcnt_read(m) == 1) ||
+			likely (rte_mbuf_refcnt_update(m, -1) == 0)) {
+		struct rte_mbuf *md = RTE_MBUF_FROM_BADDR(m->buf_addr);
+
+		rte_mbuf_refcnt_set(m, 0);
+
+		/* if this is an indirect mbuf, then
+		 *  - detach mbuf
+		 *  - free attached mbuf segment
+		 */
+		if (unlikely (md != m)) {
+			rte_pktmbuf_detach(m);
+			if (rte_mbuf_refcnt_update(md, -1) == 0)
+				__rte_mbuf_raw_free(md);
+		}
+#endif
+		return(m);
+#ifdef RTE_MBUF_REFCNT
+	}
+	return (NULL);
+#endif
+}
+
+/**
+ * Free a segment of a packet mbuf into its original mempool.
+ *
+ * Free an mbuf, without parsing other segments in case of chained
+ * buffers.
+ *
+ * @param m
+ *   The packet mbuf segment to be freed.
+ */
+static inline void __attribute__((always_inline))
+rte_pktmbuf_free_seg(struct rte_mbuf *m)
+{
+	if (likely(NULL != (m = __rte_pktmbuf_prefree_seg(m)))) {
+		m->next = NULL;
+		__rte_mbuf_raw_free(m);
+	}
+}
+
+/**
+ * Free a packet mbuf back into its original mempool.
+ *
+ * Free an mbuf, and all its segments in case of chained buffers. Each
+ * segment is added back into its original mempool.
+ *
+ * @param m
+ *   The packet mbuf to be freed.
+ */
+static inline void rte_pktmbuf_free(struct rte_mbuf *m)
+{
+	struct rte_mbuf *m_next;
+
+	__rte_mbuf_sanity_check(m, 1);
+
+	while (m != NULL) {
+		m_next = m->next;
+		rte_pktmbuf_free_seg(m);
+		m = m_next;
+	}
+}
+
+#ifdef RTE_MBUF_REFCNT
+
+/**
+ * Creates a "clone" of the given packet mbuf.
+ *
+ * Walks through all segments of the given packet mbuf, and for each of them:
+ *  - Creates a new packet mbuf from the given pool.
+ *  - Attaches newly created mbuf to the segment.
+ * Then updates pkt_len and nb_segs of the "clone" packet mbuf to match values
+ * from the original packet mbuf.
+ *
+ * @param md
+ *   The packet mbuf to be cloned.
+ * @param mp
+ *   The mempool from which the "clone" mbufs are allocated.
+ * @return
+ *   - The pointer to the new "clone" mbuf on success.
+ *   - NULL if allocation fails.
+ */
+static inline struct rte_mbuf *rte_pktmbuf_clone(struct rte_mbuf *md,
+		struct rte_mempool *mp)
+{
+	struct rte_mbuf *mc, *mi, **prev;
+	uint32_t pktlen;
+	uint8_t nseg;
+
+	if (unlikely ((mc = rte_pktmbuf_alloc(mp)) == NULL))
+		return (NULL);
+
+	mi = mc;
+	prev = &mi->next;
+	pktlen = md->pkt_len;
+	nseg = 0;
+
+	do {
+		nseg++;
+		rte_pktmbuf_attach(mi, md);
+		*prev = mi;
+		prev = &mi->next;
+	} while ((md = md->next) != NULL &&
+	    (mi = rte_pktmbuf_alloc(mp)) != NULL);
+
+	*prev = NULL;
+	mc->nb_segs = nseg;
+	mc->pkt_len = pktlen;
+
+	/* Allocation of new indirect segment failed */
+	if (unlikely (mi == NULL)) {
+		rte_pktmbuf_free(mc);
+		return (NULL);
+	}
+
+	__rte_mbuf_sanity_check(mc, 1);
+	return (mc);
+}
+
+/**
+ * Adds given value to the refcnt of all packet mbuf segments.
+ *
+ * Walks through all segments of given packet mbuf and for each of them
+ * invokes rte_mbuf_refcnt_update().
+ *
+ * @param m
+ *   The packet mbuf whose refcnt to be updated.
+ * @param v
+ *   The value to add to the mbuf's segments refcnt.
+ */
+static inline void rte_pktmbuf_refcnt_update(struct rte_mbuf *m, int16_t v)
+{
+	__rte_mbuf_sanity_check(m, 1);
+
+	do {
+		rte_mbuf_refcnt_update(m, v);
+	} while ((m = m->next) != NULL);
+}
+
+#endif /* RTE_MBUF_REFCNT */
+
+/**
+ * Get the headroom in a packet mbuf.
+ *
+ * @param m
+ *   The packet mbuf.
+ * @return
+ *   The length of the headroom.
+ */
+static inline uint16_t rte_pktmbuf_headroom(const struct rte_mbuf *m)
+{
+	__rte_mbuf_sanity_check(m, 1);
+	return m->data_off;
+}
+
+/**
+ * Get the tailroom of a packet mbuf.
+ *
+ * @param m
+ *   The packet mbuf.
+ * @return
+ *   The length of the tailroom.
+ */
+static inline uint16_t rte_pktmbuf_tailroom(const struct rte_mbuf *m)
+{
+	__rte_mbuf_sanity_check(m, 1);
+	return (uint16_t)(m->buf_len - rte_pktmbuf_headroom(m) -
+			  m->data_len);
+}
+
+/**
+ * Get the last segment of the packet.
+ *
+ * @param m
+ *   The packet mbuf.
+ * @return
+ *   The last segment of the given mbuf.
+ */
+static inline struct rte_mbuf *rte_pktmbuf_lastseg(struct rte_mbuf *m)
+{
+	struct rte_mbuf *m2 = (struct rte_mbuf *)m;
+
+	__rte_mbuf_sanity_check(m, 1);
+	while (m2->next != NULL)
+		m2 = m2->next;
+	return m2;
+}
+
+/**
+ * A macro that points to the start of the data in the mbuf.
+ *
+ * The returned pointer is cast to type t. Before using this
+ * macro, the user must ensure that rte_pktmbuf_data_len(m) is large
+ * enough to read its data.
+ *
+ * @param m
+ *   The packet mbuf.
+ * @param t
+ *   The type to cast the result into.
+ */
+#define rte_pktmbuf_mtod(m, t) ((t)((char *)(m)->buf_addr + (m)->data_off))
+
+/**
+ * A macro that returns the length of the packet.
+ *
+ * The value can be read or assigned.
+ *
+ * @param m
+ *   The packet mbuf.
+ */
+#define rte_pktmbuf_pkt_len(m) ((m)->pkt_len)
+
+/**
+ * A macro that returns the length of the segment.
+ *
+ * The value can be read or assigned.
+ *
+ * @param m
+ *   The packet mbuf.
+ */
+#define rte_pktmbuf_data_len(m) ((m)->data_len)
+
+/**
+ * Prepend len bytes to an mbuf data area.
+ *
+ * Returns a pointer to the new
+ * data start address. If there is not enough headroom in the first
+ * segment, the function will return NULL, without modifying the mbuf.
+ *
+ * @param m
+ *   The pkt mbuf.
+ * @param len
+ *   The amount of data to prepend (in bytes).
+ * @return
+ *   A pointer to the start of the newly prepended data, or
+ *   NULL if there is not enough headroom space in the first segment
+ */
+static inline char *rte_pktmbuf_prepend(struct rte_mbuf *m,
+					uint16_t len)
+{
+	__rte_mbuf_sanity_check(m, 1);
+
+	if (unlikely(len > rte_pktmbuf_headroom(m)))
+		return NULL;
+
+	m->data_off -= len;
+	m->data_len = (uint16_t)(m->data_len + len);
+	m->pkt_len  = (m->pkt_len + len);
+
+	return (char *)m->buf_addr + m->data_off;
+}
+
+/**
+ * Append len bytes to an mbuf.
+ *
+ * Append len bytes to an mbuf and return a pointer to the start address
+ * of the added data. If there is not enough tailroom in the last
+ * segment, the function will return NULL, without modifying the mbuf.
+ *
+ * @param m
+ *   The packet mbuf.
+ * @param len
+ *   The amount of data to append (in bytes).
+ * @return
+ *   A pointer to the start of the newly appended data, or
+ *   NULL if there is not enough tailroom space in the last segment
+ */
+static inline char *rte_pktmbuf_append(struct rte_mbuf *m, uint16_t len)
+{
+	void *tail;
+	struct rte_mbuf *m_last;
+
+	__rte_mbuf_sanity_check(m, 1);
+
+	m_last = rte_pktmbuf_lastseg(m);
+	if (unlikely(len > rte_pktmbuf_tailroom(m_last)))
+		return NULL;
+
+	tail = (char *)m_last->buf_addr + m_last->data_off + m_last->data_len;
+	m_last->data_len = (uint16_t)(m_last->data_len + len);
+	m->pkt_len  = (m->pkt_len + len);
+	return (char *)tail;
+}
+
+/**
+ * Remove len bytes at the beginning of an mbuf.
+ *
+ * Returns a pointer to the start address of the new data area. If the
+ * length is greater than the length of the first segment, then the
+ * function will fail and return NULL, without modifying the mbuf.
+ *
+ * @param m
+ *   The packet mbuf.
+ * @param len
+ *   The amount of data to remove (in bytes).
+ * @return
+ *   A pointer to the new start of the data.
+ */
+static inline char *rte_pktmbuf_adj(struct rte_mbuf *m, uint16_t len)
+{
+	__rte_mbuf_sanity_check(m, 1);
+
+	if (unlikely(len > m->data_len))
+		return NULL;
+
+	m->data_len = (uint16_t)(m->data_len - len);
+	m->data_off += len;
+	m->pkt_len  = (m->pkt_len - len);
+	return (char *)m->buf_addr + m->data_off;
+}
+
+/**
+ * Remove len bytes of data at the end of the mbuf.
+ *
+ * If the length is greater than the length of the last segment, the
+ * function will fail and return -1 without modifying the mbuf.
+ *
+ * @param m
+ *   The packet mbuf.
+ * @param len
+ *   The amount of data to remove (in bytes).
+ * @return
+ *   - 0: On success.
+ *   - -1: On error.
+ */
+static inline int rte_pktmbuf_trim(struct rte_mbuf *m, uint16_t len)
+{
+	struct rte_mbuf *m_last;
+
+	__rte_mbuf_sanity_check(m, 1);
+
+	m_last = rte_pktmbuf_lastseg(m);
+	if (unlikely(len > m_last->data_len))
+		return -1;
+
+	m_last->data_len = (uint16_t)(m_last->data_len - len);
+	m->pkt_len  = (m->pkt_len - len);
+	return 0;
+}
+
+/**
+ * Test if mbuf data is contiguous.
+ *
+ * @param m
+ *   The packet mbuf.
+ * @return
+ *   - 1, if all data is contiguous (one segment).
+ *   - 0, if there are several segments.
+ */
+static inline int rte_pktmbuf_is_contiguous(const struct rte_mbuf *m)
+{
+	__rte_mbuf_sanity_check(m, 1);
+	return !!(m->nb_segs == 1);
+}
+
+/**
+ * Dump an mbuf structure to the console.
+ *
+ * Dump all fields for the given packet mbuf and all its associated
+ * segments (in the case of a chained buffer).
+ *
+ * @param f
+ *   A pointer to a file for output
+ * @param m
+ *   The packet mbuf.
+ * @param dump_len
+ *   If dump_len != 0, also dump the "dump_len" first data bytes of
+ *   the packet.
+ */
+void rte_pktmbuf_dump(FILE *f, const struct rte_mbuf *m, unsigned dump_len);
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif /* _RTE_MBUF_H_ */
diff --git a/lib/librte_mbuf/Makefile b/lib/librte_mbuf/Makefile
deleted file mode 100644
index 9b45ba4..0000000
--- a/lib/librte_mbuf/Makefile
+++ /dev/null
@@ -1,48 +0,0 @@
-#   BSD LICENSE
-#
-#   Copyright(c) 2010-2014 Intel Corporation. All rights reserved.
-#   All rights reserved.
-#
-#   Redistribution and use in source and binary forms, with or without
-#   modification, are permitted provided that the following conditions
-#   are met:
-#
-#     * Redistributions of source code must retain the above copyright
-#       notice, this list of conditions and the following disclaimer.
-#     * Redistributions in binary form must reproduce the above copyright
-#       notice, this list of conditions and the following disclaimer in
-#       the documentation and/or other materials provided with the
-#       distribution.
-#     * Neither the name of Intel Corporation nor the names of its
-#       contributors may be used to endorse or promote products derived
-#       from this software without specific prior written permission.
-#
-#   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
-#   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
-#   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
-#   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
-#   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
-#   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
-#   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
-#   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
-#   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
-#   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
-#   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
-
-include $(RTE_SDK)/mk/rte.vars.mk
-
-# library name
-LIB = librte_mbuf.a
-
-CFLAGS += $(WERROR_FLAGS) -I$(SRCDIR) -O3
-
-# all source are stored in SRCS-y
-SRCS-$(CONFIG_RTE_LIBRTE_MBUF) := rte_mbuf.c
-
-# install includes
-SYMLINK-$(CONFIG_RTE_LIBRTE_MBUF)-include := rte_mbuf.h
-
-# this lib needs eal
-DEPDIRS-$(CONFIG_RTE_LIBRTE_MBUF) += lib/librte_eal lib/librte_mempool
-
-include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/lib/librte_mbuf/rte_mbuf.c b/lib/librte_mbuf/rte_mbuf.c
deleted file mode 100644
index 1b14e02..0000000
--- a/lib/librte_mbuf/rte_mbuf.c
+++ /dev/null
@@ -1,252 +0,0 @@
-/*-
- *   BSD LICENSE
- *
- *   Copyright(c) 2010-2014 Intel Corporation. All rights reserved.
- *   Copyright 2014 6WIND S.A.
- *   All rights reserved.
- *
- *   Redistribution and use in source and binary forms, with or without
- *   modification, are permitted provided that the following conditions
- *   are met:
- *
- *     * Redistributions of source code must retain the above copyright
- *       notice, this list of conditions and the following disclaimer.
- *     * Redistributions in binary form must reproduce the above copyright
- *       notice, this list of conditions and the following disclaimer in
- *       the documentation and/or other materials provided with the
- *       distribution.
- *     * Neither the name of Intel Corporation nor the names of its
- *       contributors may be used to endorse or promote products derived
- *       from this software without specific prior written permission.
- *
- *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
- *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
- *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
- *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
- *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
- *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
- *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
- *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
- *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
- *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
- *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
- */
-
-#include <string.h>
-#include <stdio.h>
-#include <stdlib.h>
-#include <stdint.h>
-#include <stdarg.h>
-#include <inttypes.h>
-#include <errno.h>
-#include <ctype.h>
-#include <sys/queue.h>
-
-#include <rte_debug.h>
-#include <rte_common.h>
-#include <rte_log.h>
-#include <rte_memory.h>
-#include <rte_memzone.h>
-#include <rte_launch.h>
-#include <rte_tailq.h>
-#include <rte_eal.h>
-#include <rte_per_lcore.h>
-#include <rte_lcore.h>
-#include <rte_atomic.h>
-#include <rte_branch_prediction.h>
-#include <rte_ring.h>
-#include <rte_mempool.h>
-#include <rte_mbuf.h>
-#include <rte_string_fns.h>
-#include <rte_hexdump.h>
-
-/*
- * ctrlmbuf constructor, given as a callback function to
- * rte_mempool_create()
- */
-void
-rte_ctrlmbuf_init(struct rte_mempool *mp,
-		__attribute__((unused)) void *opaque_arg,
-		void *_m,
-		__attribute__((unused)) unsigned i)
-{
-	struct rte_mbuf *m = _m;
-	rte_pktmbuf_init(mp, opaque_arg, _m, i);
-	m->ol_flags |= CTRL_MBUF_FLAG;
-}
-
-/*
- * pktmbuf pool constructor, given as a callback function to
- * rte_mempool_create()
- */
-void
-rte_pktmbuf_pool_init(struct rte_mempool *mp, void *opaque_arg)
-{
-	struct rte_pktmbuf_pool_private *mbp_priv;
-	uint16_t roomsz;
-
-	mbp_priv = rte_mempool_get_priv(mp);
-	roomsz = (uint16_t)(uintptr_t)opaque_arg;
-
-	/* Use default data room size. */
-	if (0 == roomsz)
-		roomsz = 2048 + RTE_PKTMBUF_HEADROOM;
-
-	mbp_priv->mbuf_data_room_size = roomsz;
-}
-
-/*
- * pktmbuf constructor, given as a callback function to
- * rte_mempool_create().
- * Set the fields of a packet mbuf to their default values.
- */
-void
-rte_pktmbuf_init(struct rte_mempool *mp,
-		 __attribute__((unused)) void *opaque_arg,
-		 void *_m,
-		 __attribute__((unused)) unsigned i)
-{
-	struct rte_mbuf *m = _m;
-	uint32_t buf_len = mp->elt_size - sizeof(struct rte_mbuf);
-
-	RTE_MBUF_ASSERT(mp->elt_size >= sizeof(struct rte_mbuf));
-
-	memset(m, 0, mp->elt_size);
-
-	/* start of buffer is just after mbuf structure */
-	m->buf_addr = (char *)m + sizeof(struct rte_mbuf);
-	m->buf_physaddr = rte_mempool_virt2phy(mp, m) +
-			sizeof(struct rte_mbuf);
-	m->buf_len = (uint16_t)buf_len;
-
-	/* keep some headroom between start of buffer and data */
-	m->data_off = RTE_MIN(RTE_PKTMBUF_HEADROOM, (uint16_t)m->buf_len);
-
-	/* init some constant fields */
-	m->pool = mp;
-	m->nb_segs = 1;
-	m->port = 0xff;
-}
-
-/* do some sanity checks on a mbuf: panic if it fails */
-void
-rte_mbuf_sanity_check(const struct rte_mbuf *m, int is_header)
-{
-	const struct rte_mbuf *m_seg;
-	unsigned nb_segs;
-
-	if (m == NULL)
-		rte_panic("mbuf is NULL\n");
-
-	/* generic checks */
-	if (m->pool == NULL)
-		rte_panic("bad mbuf pool\n");
-	if (m->buf_physaddr == 0)
-		rte_panic("bad phys addr\n");
-	if (m->buf_addr == NULL)
-		rte_panic("bad virt addr\n");
-
-#ifdef RTE_MBUF_REFCNT
-	uint16_t cnt = rte_mbuf_refcnt_read(m);
-	if ((cnt == 0) || (cnt == UINT16_MAX))
-		rte_panic("bad ref cnt\n");
-#endif
-
-	/* nothing to check for sub-segments */
-	if (is_header == 0)
-		return;
-
-	nb_segs = m->nb_segs;
-	m_seg = m;
-	while (m_seg && nb_segs != 0) {
-		m_seg = m_seg->next;
-		nb_segs--;
-	}
-	if (nb_segs != 0)
-		rte_panic("bad nb_segs\n");
-}
-
-/* dump a mbuf on console */
-void
-rte_pktmbuf_dump(FILE *f, const struct rte_mbuf *m, unsigned dump_len)
-{
-	unsigned int len;
-	unsigned nb_segs;
-
-	__rte_mbuf_sanity_check(m, 1);
-
-	fprintf(f, "dump mbuf at 0x%p, phys=%"PRIx64", buf_len=%u\n",
-	       m, (uint64_t)m->buf_physaddr, (unsigned)m->buf_len);
-	fprintf(f, "  pkt_len=%"PRIu32", ol_flags=%"PRIx64", nb_segs=%u, "
-	       "in_port=%u\n", m->pkt_len, m->ol_flags,
-	       (unsigned)m->nb_segs, (unsigned)m->port);
-	nb_segs = m->nb_segs;
-
-	while (m && nb_segs != 0) {
-		__rte_mbuf_sanity_check(m, 0);
-
-		fprintf(f, "  segment at 0x%p, data=0x%p, data_len=%u\n",
-			m, rte_pktmbuf_mtod(m, void *), (unsigned)m->data_len);
-		len = dump_len;
-		if (len > m->data_len)
-			len = m->data_len;
-		if (len != 0)
-			rte_hexdump(f, NULL, rte_pktmbuf_mtod(m, void *), len);
-		dump_len -= len;
-		m = m->next;
-		nb_segs --;
-	}
-}
-
-/*
- * Get the name of a RX offload flag. Must be kept synchronized with flag
- * definitions in rte_mbuf.h.
- */
-const char *rte_get_rx_ol_flag_name(uint64_t mask)
-{
-	switch (mask) {
-	case PKT_RX_VLAN_PKT: return "PKT_RX_VLAN_PKT";
-	case PKT_RX_RSS_HASH: return "PKT_RX_RSS_HASH";
-	case PKT_RX_FDIR: return "PKT_RX_FDIR";
-	case PKT_RX_L4_CKSUM_BAD: return "PKT_RX_L4_CKSUM_BAD";
-	case PKT_RX_IP_CKSUM_BAD: return "PKT_RX_IP_CKSUM_BAD";
-	/* case PKT_RX_EIP_CKSUM_BAD: return "PKT_RX_EIP_CKSUM_BAD"; */
-	/* case PKT_RX_OVERSIZE: return "PKT_RX_OVERSIZE"; */
-	/* case PKT_RX_HBUF_OVERFLOW: return "PKT_RX_HBUF_OVERFLOW"; */
-	/* case PKT_RX_RECIP_ERR: return "PKT_RX_RECIP_ERR"; */
-	/* case PKT_RX_MAC_ERR: return "PKT_RX_MAC_ERR"; */
-	case PKT_RX_IPV4_HDR: return "PKT_RX_IPV4_HDR";
-	case PKT_RX_IPV4_HDR_EXT: return "PKT_RX_IPV4_HDR_EXT";
-	case PKT_RX_IPV6_HDR: return "PKT_RX_IPV6_HDR";
-	case PKT_RX_IPV6_HDR_EXT: return "PKT_RX_IPV6_HDR_EXT";
-	case PKT_RX_IEEE1588_PTP: return "PKT_RX_IEEE1588_PTP";
-	case PKT_RX_IEEE1588_TMST: return "PKT_RX_IEEE1588_TMST";
-	case PKT_RX_TUNNEL_IPV4_HDR: return "PKT_RX_TUNNEL_IPV4_HDR";
-	case PKT_RX_TUNNEL_IPV6_HDR: return "PKT_RX_TUNNEL_IPV6_HDR";
-	default: return NULL;
-	}
-}
-
-/*
- * Get the name of a TX offload flag. Must be kept synchronized with flag
- * definitions in rte_mbuf.h.
- */
-const char *rte_get_tx_ol_flag_name(uint64_t mask)
-{
-	switch (mask) {
-	case PKT_TX_VLAN_PKT: return "PKT_TX_VLAN_PKT";
-	case PKT_TX_IP_CKSUM: return "PKT_TX_IP_CKSUM";
-	case PKT_TX_TCP_CKSUM: return "PKT_TX_TCP_CKSUM";
-	case PKT_TX_SCTP_CKSUM: return "PKT_TX_SCTP_CKSUM";
-	case PKT_TX_UDP_CKSUM: return "PKT_TX_UDP_CKSUM";
-	case PKT_TX_IEEE1588_TMST: return "PKT_TX_IEEE1588_TMST";
-	case PKT_TX_UDP_TUNNEL_PKT: return "PKT_TX_UDP_TUNNEL_PKT";
-	case PKT_TX_TCP_SEG: return "PKT_TX_TCP_SEG";
-	case PKT_TX_IPV4: return "PKT_TX_IPV4";
-	case PKT_TX_IPV6: return "PKT_TX_IPV6";
-	case PKT_TX_OUTER_IP_CKSUM: return "PKT_TX_OUTER_IP_CKSUM";
-	case PKT_TX_OUTER_IPV4: return "PKT_TX_OUTER_IPV4";
-	case PKT_TX_OUTER_IPV6: return "PKT_TX_OUTER_IPV6";
-	default: return NULL;
-	}
-}
diff --git a/lib/librte_mbuf/rte_mbuf.h b/lib/librte_mbuf/rte_mbuf.h
deleted file mode 100644
index 16059c6..0000000
--- a/lib/librte_mbuf/rte_mbuf.h
+++ /dev/null
@@ -1,1133 +0,0 @@
-/*-
- *   BSD LICENSE
- *
- *   Copyright(c) 2010-2014 Intel Corporation. All rights reserved.
- *   Copyright 2014 6WIND S.A.
- *   All rights reserved.
- *
- *   Redistribution and use in source and binary forms, with or without
- *   modification, are permitted provided that the following conditions
- *   are met:
- *
- *     * Redistributions of source code must retain the above copyright
- *       notice, this list of conditions and the following disclaimer.
- *     * Redistributions in binary form must reproduce the above copyright
- *       notice, this list of conditions and the following disclaimer in
- *       the documentation and/or other materials provided with the
- *       distribution.
- *     * Neither the name of Intel Corporation nor the names of its
- *       contributors may be used to endorse or promote products derived
- *       from this software without specific prior written permission.
- *
- *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
- *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
- *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
- *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
- *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
- *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
- *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
- *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
- *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
- *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
- *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
- */
-
-#ifndef _RTE_MBUF_H_
-#define _RTE_MBUF_H_
-
-/**
- * @file
- * RTE Mbuf
- *
- * The mbuf library provides the ability to create and destroy buffers
- * that may be used by the RTE application to store message
- * buffers. The message buffers are stored in a mempool, using the
- * RTE mempool library.
- *
- * This library provide an API to allocate/free packet mbufs, which are
- * used to carry network packets.
- *
- * To understand the concepts of packet buffers or mbufs, you
- * should read "TCP/IP Illustrated, Volume 2: The Implementation,
- * Addison-Wesley, 1995, ISBN 0-201-63354-X from Richard Stevens"
- * http://www.kohala.com/start/tcpipiv2.html
- */
-
-#include <stdint.h>
-#include <rte_mempool.h>
-#include <rte_memory.h>
-#include <rte_atomic.h>
-#include <rte_prefetch.h>
-#include <rte_branch_prediction.h>
-
-#ifdef __cplusplus
-extern "C" {
-#endif
-
-/* deprecated feature, renamed in RTE_MBUF_REFCNT */
-#pragma GCC poison RTE_MBUF_SCATTER_GATHER
-
-/*
- * Packet Offload Features Flags. It also carry packet type information.
- * Critical resources. Both rx/tx shared these bits. Be cautious on any change
- *
- * - RX flags start at bit position zero, and get added to the left of previous
- *   flags.
- * - The most-significant 8 bits are reserved for generic mbuf flags
- * - TX flags therefore start at bit position 55 (i.e. 63-8), and new flags get
- *   added to the right of the previously defined flags
- *
- * Keep these flags synchronized with rte_get_rx_ol_flag_name() and
- * rte_get_tx_ol_flag_name().
- */
-#define PKT_RX_VLAN_PKT      (1ULL << 0)  /**< RX packet is a 802.1q VLAN packet. */
-#define PKT_RX_RSS_HASH      (1ULL << 1)  /**< RX packet with RSS hash result. */
-#define PKT_RX_FDIR          (1ULL << 2)  /**< RX packet with FDIR match indicate. */
-#define PKT_RX_L4_CKSUM_BAD  (1ULL << 3)  /**< L4 cksum of RX pkt. is not OK. */
-#define PKT_RX_IP_CKSUM_BAD  (1ULL << 4)  /**< IP cksum of RX pkt. is not OK. */
-#define PKT_RX_EIP_CKSUM_BAD (0ULL << 0)  /**< External IP header checksum error. */
-#define PKT_RX_OVERSIZE      (0ULL << 0)  /**< Num of desc of an RX pkt oversize. */
-#define PKT_RX_HBUF_OVERFLOW (0ULL << 0)  /**< Header buffer overflow. */
-#define PKT_RX_RECIP_ERR     (0ULL << 0)  /**< Hardware processing error. */
-#define PKT_RX_MAC_ERR       (0ULL << 0)  /**< MAC error. */
-#define PKT_RX_IPV4_HDR      (1ULL << 5)  /**< RX packet with IPv4 header. */
-#define PKT_RX_IPV4_HDR_EXT  (1ULL << 6)  /**< RX packet with extended IPv4 header. */
-#define PKT_RX_IPV6_HDR      (1ULL << 7)  /**< RX packet with IPv6 header. */
-#define PKT_RX_IPV6_HDR_EXT  (1ULL << 8)  /**< RX packet with extended IPv6 header. */
-#define PKT_RX_IEEE1588_PTP  (1ULL << 9)  /**< RX IEEE1588 L2 Ethernet PT Packet. */
-#define PKT_RX_IEEE1588_TMST (1ULL << 10) /**< RX IEEE1588 L2/L4 timestamped packet.*/
-#define PKT_RX_TUNNEL_IPV4_HDR (1ULL << 11) /**< RX tunnel packet with IPv4 header.*/
-#define PKT_RX_TUNNEL_IPV6_HDR (1ULL << 12) /**< RX tunnel packet with IPv6 header. */
-#define PKT_RX_FDIR_ID       (1ULL << 13) /**< FD id reported if FDIR match. */
-#define PKT_RX_FDIR_FLX      (1ULL << 14) /**< Flexible bytes reported if FDIR match. */
-/* add new RX flags here */
-
-/* add new TX flags here */
-
-/**
- * TCP segmentation offload. To enable this offload feature for a
- * packet to be transmitted on hardware supporting TSO:
- *  - set the PKT_TX_TCP_SEG flag in mbuf->ol_flags (this flag implies
- *    PKT_TX_TCP_CKSUM)
- *  - set the flag PKT_TX_IPV4 or PKT_TX_IPV6
- *  - if it's IPv4, set the PKT_TX_IP_CKSUM flag and write the IP checksum
- *    to 0 in the packet
- *  - fill the mbuf offload information: l2_len, l3_len, l4_len, tso_segsz
- *  - calculate the pseudo header checksum without taking ip_len in account,
- *    and set it in the TCP header. Refer to rte_ipv4_phdr_cksum() and
- *    rte_ipv6_phdr_cksum() that can be used as helpers.
- */
-#define PKT_TX_TCP_SEG       (1ULL << 49)
-
-/** TX packet is an UDP tunneled packet. It must be specified when using
- *  outer checksum offload (PKT_TX_OUTER_IP_CKSUM) */
-#define PKT_TX_UDP_TUNNEL_PKT (1ULL << 50) /**< TX packet is an UDP tunneled packet */
-#define PKT_TX_IEEE1588_TMST (1ULL << 51) /**< TX IEEE1588 packet to timestamp. */
-
-/**
- * Bits 52+53 used for L4 packet type with checksum enabled: 00: Reserved,
- * 01: TCP checksum, 10: SCTP checksum, 11: UDP checksum. To use hardware
- * L4 checksum offload, the user needs to:
- *  - fill l2_len and l3_len in mbuf
- *  - set the flags PKT_TX_TCP_CKSUM, PKT_TX_SCTP_CKSUM or PKT_TX_UDP_CKSUM
- *  - set the flag PKT_TX_IPV4 or PKT_TX_IPV6
- *  - calculate the pseudo header checksum and set it in the L4 header (only
- *    for TCP or UDP). See rte_ipv4_phdr_cksum() and rte_ipv6_phdr_cksum().
- *    For SCTP, set the crc field to 0.
- */
-#define PKT_TX_L4_NO_CKSUM   (0ULL << 52) /**< Disable L4 cksum of TX pkt. */
-#define PKT_TX_TCP_CKSUM     (1ULL << 52) /**< TCP cksum of TX pkt. computed by NIC. */
-#define PKT_TX_SCTP_CKSUM    (2ULL << 52) /**< SCTP cksum of TX pkt. computed by NIC. */
-#define PKT_TX_UDP_CKSUM     (3ULL << 52) /**< UDP cksum of TX pkt. computed by NIC. */
-#define PKT_TX_L4_MASK       (3ULL << 52) /**< Mask for L4 cksum offload request. */
-
-#define PKT_TX_IP_CKSUM      (1ULL << 54) /**< IP cksum of TX pkt. computed by NIC. */
-#define PKT_TX_IPV4_CSUM     PKT_TX_IP_CKSUM /**< Alias of PKT_TX_IP_CKSUM. */
-
-/** Packet is IPv4 without requiring IP checksum offload. */
-#define PKT_TX_IPV4          (1ULL << 55)
-
-/** Tell the NIC it's an IPv6 packet.*/
-#define PKT_TX_IPV6          (1ULL << 56)
-
-#define PKT_TX_VLAN_PKT      (1ULL << 57) /**< TX packet is a 802.1q VLAN packet. */
-
-/** Outer IP checksum of TX packet, computed by NIC for tunneling packet.
- *  The tunnel type must also be specified, ex: PKT_TX_UDP_TUNNEL_PKT. */
-#define PKT_TX_OUTER_IP_CKSUM   (1ULL << 58)
-
-/** Packet is outer IPv4 without requiring IP checksum offload for tunneling packet. */
-#define PKT_TX_OUTER_IPV4   (1ULL << 59)
-
-/** Tell the NIC it's an outer IPv6 packet for tunneling packet */
-#define PKT_TX_OUTER_IPV6    (1ULL << 60)
-
-/* Use final bit of flags to indicate a control mbuf */
-#define CTRL_MBUF_FLAG       (1ULL << 63) /**< Mbuf contains control data */
-
-/**
- * Get the name of a RX offload flag
- *
- * @param mask
- *   The mask describing the flag.
- * @return
- *   The name of this flag, or NULL if it's not a valid RX flag.
- */
-const char *rte_get_rx_ol_flag_name(uint64_t mask);
-
-/**
- * Get the name of a TX offload flag
- *
- * @param mask
- *   The mask describing the flag. Usually only one bit must be set.
- *   Several bits can be given if they belong to the same mask.
- *   Ex: PKT_TX_L4_MASK.
- * @return
- *   The name of this flag, or NULL if it's not a valid TX flag.
- */
-const char *rte_get_tx_ol_flag_name(uint64_t mask);
-
-/* define a set of marker types that can be used to refer to set points in the
- * mbuf */
-typedef void    *MARKER[0];   /**< generic marker for a point in a structure */
-typedef uint8_t  MARKER8[0];  /**< generic marker with 1B alignment */
-typedef uint64_t MARKER64[0]; /**< marker that allows us to overwrite 8 bytes
-                               * with a single assignment */
-
-/**
- * The generic rte_mbuf, containing a packet mbuf.
- */
-struct rte_mbuf {
-	MARKER cacheline0;
-
-	void *buf_addr;           /**< Virtual address of segment buffer. */
-	phys_addr_t buf_physaddr; /**< Physical address of segment buffer. */
-
-	uint16_t buf_len;         /**< Length of segment buffer. */
-
-	/* next 6 bytes are initialised on RX descriptor rearm */
-	MARKER8 rearm_data;
-	uint16_t data_off;
-
-	/**
-	 * 16-bit Reference counter.
-	 * It should only be accessed using the following functions:
-	 * rte_mbuf_refcnt_update(), rte_mbuf_refcnt_read(), and
-	 * rte_mbuf_refcnt_set(). The functionality of these functions (atomic,
-	 * or non-atomic) is controlled by the CONFIG_RTE_MBUF_REFCNT_ATOMIC
-	 * config option.
-	 */
-	union {
-#ifdef RTE_MBUF_REFCNT
-		rte_atomic16_t refcnt_atomic; /**< Atomically accessed refcnt */
-		uint16_t refcnt;              /**< Non-atomically accessed refcnt */
-#endif
-		uint16_t refcnt_reserved;     /**< Do not use this field */
-	};
-	uint8_t nb_segs;          /**< Number of segments. */
-	uint8_t port;             /**< Input port. */
-
-	uint64_t ol_flags;        /**< Offload features. */
-
-	/* remaining bytes are set on RX when pulling packet from descriptor */
-	MARKER rx_descriptor_fields1;
-
-	/**
-	 * The packet type, which is used to indicate ordinary packet and also
-	 * tunneled packet format, i.e. each number is represented a type of
-	 * packet.
-	 */
-	uint16_t packet_type;
-
-	uint16_t data_len;        /**< Amount of data in segment buffer. */
-	uint32_t pkt_len;         /**< Total pkt len: sum of all segments. */
-	uint16_t vlan_tci;        /**< VLAN Tag Control Identifier (CPU order) */
-	uint16_t reserved;
-	union {
-		uint32_t rss;     /**< RSS hash result if RSS enabled */
-		struct {
-			union {
-				struct {
-					uint16_t hash;
-					uint16_t id;
-				};
-				uint32_t lo;
-				/**< Second 4 flexible bytes */
-			};
-			uint32_t hi;
-			/**< First 4 flexible bytes or FD ID, dependent on
-			     PKT_RX_FDIR_* flag in ol_flags. */
-		} fdir;           /**< Filter identifier if FDIR enabled */
-		uint32_t sched;   /**< Hierarchical scheduler */
-		uint32_t usr;	  /**< User defined tags. See @rte_distributor_process */
-	} hash;                   /**< hash information */
-
-	/* second cache line - fields only used in slow path or on TX */
-	MARKER cacheline1 __rte_cache_aligned;
-
-	union {
-		void *userdata;   /**< Can be used for external metadata */
-		uint64_t udata64; /**< Allow 8-byte userdata on 32-bit */
-	};
-
-	struct rte_mempool *pool; /**< Pool from which mbuf was allocated. */
-	struct rte_mbuf *next;    /**< Next segment of scattered packet. */
-
-	/* fields to support TX offloads */
-	union {
-		uint64_t tx_offload;       /**< combined for easy fetch */
-		struct {
-			uint64_t l2_len:7; /**< L2 (MAC) Header Length. */
-			uint64_t l3_len:9; /**< L3 (IP) Header Length. */
-			uint64_t l4_len:8; /**< L4 (TCP/UDP) Header Length. */
-			uint64_t tso_segsz:16; /**< TCP TSO segment size */
-
-			/* fields for TX offloading of tunnels */
-			uint64_t outer_l3_len:9; /**< Outer L3 (IP) Hdr Length. */
-			uint64_t outer_l2_len:7; /**< Outer L2 (MAC) Hdr Length. */
-
-			/* uint64_t unused:8; */
-		};
-	};
-} __rte_cache_aligned;
-
-/**
- * Given the buf_addr returns the pointer to corresponding mbuf.
- */
-#define RTE_MBUF_FROM_BADDR(ba)     (((struct rte_mbuf *)(ba)) - 1)
-
-/**
- * Given the pointer to mbuf returns an address where it's  buf_addr
- * should point to.
- */
-#define RTE_MBUF_TO_BADDR(mb)       (((struct rte_mbuf *)(mb)) + 1)
-
-/**
- * Returns TRUE if given mbuf is indirect, or FALSE otherwise.
- */
-#define RTE_MBUF_INDIRECT(mb)   (RTE_MBUF_FROM_BADDR((mb)->buf_addr) != (mb))
-
-/**
- * Returns TRUE if given mbuf is direct, or FALSE otherwise.
- */
-#define RTE_MBUF_DIRECT(mb)     (RTE_MBUF_FROM_BADDR((mb)->buf_addr) == (mb))
-
-
-/**
- * Private data in case of pktmbuf pool.
- *
- * A structure that contains some pktmbuf_pool-specific data that are
- * appended after the mempool structure (in private data).
- */
-struct rte_pktmbuf_pool_private {
-	uint16_t mbuf_data_room_size; /**< Size of data space in each mbuf.*/
-};
-
-#ifdef RTE_LIBRTE_MBUF_DEBUG
-
-/**  check mbuf type in debug mode */
-#define __rte_mbuf_sanity_check(m, is_h) rte_mbuf_sanity_check(m, is_h)
-
-/**  check mbuf type in debug mode if mbuf pointer is not null */
-#define __rte_mbuf_sanity_check_raw(m, is_h)	do {       \
-	if ((m) != NULL)                                   \
-		rte_mbuf_sanity_check(m, is_h);          \
-} while (0)
-
-/**  MBUF asserts in debug mode */
-#define RTE_MBUF_ASSERT(exp)                                         \
-if (!(exp)) {                                                        \
-	rte_panic("line%d\tassert \"" #exp "\" failed\n", __LINE__); \
-}
-
-#else /*  RTE_LIBRTE_MBUF_DEBUG */
-
-/**  check mbuf type in debug mode */
-#define __rte_mbuf_sanity_check(m, is_h) do { } while (0)
-
-/**  check mbuf type in debug mode if mbuf pointer is not null */
-#define __rte_mbuf_sanity_check_raw(m, is_h) do { } while (0)
-
-/**  MBUF asserts in debug mode */
-#define RTE_MBUF_ASSERT(exp)                do { } while (0)
-
-#endif /*  RTE_LIBRTE_MBUF_DEBUG */
-
-#ifdef RTE_MBUF_REFCNT
-#ifdef RTE_MBUF_REFCNT_ATOMIC
-
-/**
- * Adds given value to an mbuf's refcnt and returns its new value.
- * @param m
- *   Mbuf to update
- * @param value
- *   Value to add/subtract
- * @return
- *   Updated value
- */
-static inline uint16_t
-rte_mbuf_refcnt_update(struct rte_mbuf *m, int16_t value)
-{
-	return (uint16_t)(rte_atomic16_add_return(&m->refcnt_atomic, value));
-}
-
-/**
- * Reads the value of an mbuf's refcnt.
- * @param m
- *   Mbuf to read
- * @return
- *   Reference count number.
- */
-static inline uint16_t
-rte_mbuf_refcnt_read(const struct rte_mbuf *m)
-{
-	return (uint16_t)(rte_atomic16_read(&m->refcnt_atomic));
-}
-
-/**
- * Sets an mbuf's refcnt to a defined value.
- * @param m
- *   Mbuf to update
- * @param new_value
- *   Value set
- */
-static inline void
-rte_mbuf_refcnt_set(struct rte_mbuf *m, uint16_t new_value)
-{
-	rte_atomic16_set(&m->refcnt_atomic, new_value);
-}
-
-#else /* ! RTE_MBUF_REFCNT_ATOMIC */
-
-/**
- * Adds given value to an mbuf's refcnt and returns its new value.
- */
-static inline uint16_t
-rte_mbuf_refcnt_update(struct rte_mbuf *m, int16_t value)
-{
-	m->refcnt = (uint16_t)(m->refcnt + value);
-	return m->refcnt;
-}
-
-/**
- * Reads the value of an mbuf's refcnt.
- */
-static inline uint16_t
-rte_mbuf_refcnt_read(const struct rte_mbuf *m)
-{
-	return m->refcnt;
-}
-
-/**
- * Sets an mbuf's refcnt to the defined value.
- */
-static inline void
-rte_mbuf_refcnt_set(struct rte_mbuf *m, uint16_t new_value)
-{
-	m->refcnt = new_value;
-}
-
-#endif /* RTE_MBUF_REFCNT_ATOMIC */
-
-/** Mbuf prefetch */
-#define RTE_MBUF_PREFETCH_TO_FREE(m) do {       \
-	if ((m) != NULL)                        \
-		rte_prefetch0(m);               \
-} while (0)
-
-#else /* ! RTE_MBUF_REFCNT */
-
-/** Mbuf prefetch */
-#define RTE_MBUF_PREFETCH_TO_FREE(m) do { } while(0)
-
-#define rte_mbuf_refcnt_set(m,v) do { } while(0)
-
-#endif /* RTE_MBUF_REFCNT */
-
-
-/**
- * Sanity checks on an mbuf.
- *
- * Check the consistency of the given mbuf. The function will cause a
- * panic if corruption is detected.
- *
- * @param m
- *   The mbuf to be checked.
- * @param is_header
- *   True if the mbuf is a packet header, false if it is a sub-segment
- *   of a packet (in this case, some fields like nb_segs are not checked)
- */
-void
-rte_mbuf_sanity_check(const struct rte_mbuf *m, int is_header);
-
-/**
- * @internal Allocate a new mbuf from mempool *mp*.
- * The use of that function is reserved for RTE internal needs.
- * Please use rte_pktmbuf_alloc().
- *
- * @param mp
- *   The mempool from which mbuf is allocated.
- * @return
- *   - The pointer to the new mbuf on success.
- *   - NULL if allocation failed.
- */
-static inline struct rte_mbuf *__rte_mbuf_raw_alloc(struct rte_mempool *mp)
-{
-	struct rte_mbuf *m;
-	void *mb = NULL;
-	if (rte_mempool_get(mp, &mb) < 0)
-		return NULL;
-	m = (struct rte_mbuf *)mb;
-#ifdef RTE_MBUF_REFCNT
-	RTE_MBUF_ASSERT(rte_mbuf_refcnt_read(m) == 0);
-	rte_mbuf_refcnt_set(m, 1);
-#endif /* RTE_MBUF_REFCNT */
-	return (m);
-}
-
-/**
- * @internal Put mbuf back into its original mempool.
- * The use of that function is reserved for RTE internal needs.
- * Please use rte_pktmbuf_free().
- *
- * @param m
- *   The mbuf to be freed.
- */
-static inline void __attribute__((always_inline))
-__rte_mbuf_raw_free(struct rte_mbuf *m)
-{
-#ifdef RTE_MBUF_REFCNT
-	RTE_MBUF_ASSERT(rte_mbuf_refcnt_read(m) == 0);
-#endif /* RTE_MBUF_REFCNT */
-	rte_mempool_put(m->pool, m);
-}
-
-/* Operations on ctrl mbuf */
-
-/**
- * The control mbuf constructor.
- *
- * This function initializes some fields in an mbuf structure that are
- * not modified by the user once created (mbuf type, origin pool, buffer
- * start address, and so on). This function is given as a callback function
- * to rte_mempool_create() at pool creation time.
- *
- * @param mp
- *   The mempool from which the mbuf is allocated.
- * @param opaque_arg
- *   A pointer that can be used by the user to retrieve useful information
- *   for mbuf initialization. This pointer comes from the ``init_arg``
- *   parameter of rte_mempool_create().
- * @param m
- *   The mbuf to initialize.
- * @param i
- *   The index of the mbuf in the pool table.
- */
-void rte_ctrlmbuf_init(struct rte_mempool *mp, void *opaque_arg,
-		void *m, unsigned i);
-
-/**
- * Allocate a new mbuf (type is ctrl) from mempool *mp*.
- *
- * This new mbuf is initialized with data pointing to the beginning of
- * buffer, and with a length of zero.
- *
- * @param mp
- *   The mempool from which the mbuf is allocated.
- * @return
- *   - The pointer to the new mbuf on success.
- *   - NULL if allocation failed.
- */
-#define rte_ctrlmbuf_alloc(mp) rte_pktmbuf_alloc(mp)
-
-/**
- * Free a control mbuf back into its original mempool.
- *
- * @param m
- *   The control mbuf to be freed.
- */
-#define rte_ctrlmbuf_free(m) rte_pktmbuf_free(m)
-
-/**
- * A macro that returns the pointer to the carried data.
- *
- * The value that can be read or assigned.
- *
- * @param m
- *   The control mbuf.
- */
-#define rte_ctrlmbuf_data(m) ((char *)((m)->buf_addr) + (m)->data_off)
-
-/**
- * A macro that returns the length of the carried data.
- *
- * The value that can be read or assigned.
- *
- * @param m
- *   The control mbuf.
- */
-#define rte_ctrlmbuf_len(m) rte_pktmbuf_data_len(m)
-
-/**
- * Tests if an mbuf is a control mbuf
- *
- * @param m
- *   The mbuf to be tested
- * @return
- *   - True (1) if the mbuf is a control mbuf
- *   - False(0) otherwise
- */
-static inline int
-rte_is_ctrlmbuf(struct rte_mbuf *m)
-{
-	return (!!(m->ol_flags & CTRL_MBUF_FLAG));
-}
-
-/* Operations on pkt mbuf */
-
-/**
- * The packet mbuf constructor.
- *
- * This function initializes some fields in the mbuf structure that are
- * not modified by the user once created (origin pool, buffer start
- * address, and so on). This function is given as a callback function to
- * rte_mempool_create() at pool creation time.
- *
- * @param mp
- *   The mempool from which mbufs originate.
- * @param opaque_arg
- *   A pointer that can be used by the user to retrieve useful information
- *   for mbuf initialization. This pointer comes from the ``init_arg``
- *   parameter of rte_mempool_create().
- * @param m
- *   The mbuf to initialize.
- * @param i
- *   The index of the mbuf in the pool table.
- */
-void rte_pktmbuf_init(struct rte_mempool *mp, void *opaque_arg,
-		      void *m, unsigned i);
-
-
-/**
- * A  packet mbuf pool constructor.
- *
- * This function initializes the mempool private data in the case of a
- * pktmbuf pool. This private data is needed by the driver. The
- * function is given as a callback function to rte_mempool_create() at
- * pool creation. It can be extended by the user, for example, to
- * provide another packet size.
- *
- * @param mp
- *   The mempool from which mbufs originate.
- * @param opaque_arg
- *   A pointer that can be used by the user to retrieve useful information
- *   for mbuf initialization. This pointer comes from the ``init_arg``
- *   parameter of rte_mempool_create().
- */
-void rte_pktmbuf_pool_init(struct rte_mempool *mp, void *opaque_arg);
-
-/**
- * Reset the fields of a packet mbuf to their default values.
- *
- * The given mbuf must have only one segment.
- *
- * @param m
- *   The packet mbuf to be resetted.
- */
-static inline void rte_pktmbuf_reset(struct rte_mbuf *m)
-{
-	m->next = NULL;
-	m->pkt_len = 0;
-	m->tx_offload = 0;
-	m->vlan_tci = 0;
-	m->nb_segs = 1;
-	m->port = 0xff;
-
-	m->ol_flags = 0;
-	m->packet_type = 0;
-	m->data_off = (RTE_PKTMBUF_HEADROOM <= m->buf_len) ?
-			RTE_PKTMBUF_HEADROOM : m->buf_len;
-
-	m->data_len = 0;
-	__rte_mbuf_sanity_check(m, 1);
-}
-
-/**
- * Allocate a new mbuf from a mempool.
- *
- * This new mbuf contains one segment, which has a length of 0. The pointer
- * to data is initialized to have some bytes of headroom in the buffer
- * (if buffer size allows).
- *
- * @param mp
- *   The mempool from which the mbuf is allocated.
- * @return
- *   - The pointer to the new mbuf on success.
- *   - NULL if allocation failed.
- */
-static inline struct rte_mbuf *rte_pktmbuf_alloc(struct rte_mempool *mp)
-{
-	struct rte_mbuf *m;
-	if ((m = __rte_mbuf_raw_alloc(mp)) != NULL)
-		rte_pktmbuf_reset(m);
-	return (m);
-}
-
-#ifdef RTE_MBUF_REFCNT
-
-/**
- * Attach packet mbuf to another packet mbuf.
- * After attachment we refer the mbuf we attached as 'indirect',
- * while mbuf we attached to as 'direct'.
- * Right now, not supported:
- *  - attachment to indirect mbuf (e.g. - md  has to be direct).
- *  - attachment for already indirect mbuf (e.g. - mi has to be direct).
- *  - mbuf we trying to attach (mi) is used by someone else
- *    e.g. it's reference counter is greater then 1.
- *
- * @param mi
- *   The indirect packet mbuf.
- * @param md
- *   The direct packet mbuf.
- */
-
-static inline void rte_pktmbuf_attach(struct rte_mbuf *mi, struct rte_mbuf *md)
-{
-	RTE_MBUF_ASSERT(RTE_MBUF_DIRECT(md) &&
-	    RTE_MBUF_DIRECT(mi) &&
-	    rte_mbuf_refcnt_read(mi) == 1);
-
-	rte_mbuf_refcnt_update(md, 1);
-	mi->buf_physaddr = md->buf_physaddr;
-	mi->buf_addr = md->buf_addr;
-	mi->buf_len = md->buf_len;
-
-	mi->next = md->next;
-	mi->data_off = md->data_off;
-	mi->data_len = md->data_len;
-	mi->port = md->port;
-	mi->vlan_tci = md->vlan_tci;
-	mi->tx_offload = md->tx_offload;
-	mi->hash = md->hash;
-
-	mi->next = NULL;
-	mi->pkt_len = mi->data_len;
-	mi->nb_segs = 1;
-	mi->ol_flags = md->ol_flags;
-	mi->packet_type = md->packet_type;
-
-	__rte_mbuf_sanity_check(mi, 1);
-	__rte_mbuf_sanity_check(md, 0);
-}
-
-/**
- * Detach an indirect packet mbuf -
- *  - restore original mbuf address and length values.
- *  - reset pktmbuf data and data_len to their default values.
- *  All other fields of the given packet mbuf will be left intact.
- *
- * @param m
- *   The indirect attached packet mbuf.
- */
-
-static inline void rte_pktmbuf_detach(struct rte_mbuf *m)
-{
-	const struct rte_mempool *mp = m->pool;
-	void *buf = RTE_MBUF_TO_BADDR(m);
-	uint32_t buf_len = mp->elt_size - sizeof(*m);
-	m->buf_physaddr = rte_mempool_virt2phy(mp, m) + sizeof (*m);
-
-	m->buf_addr = buf;
-	m->buf_len = (uint16_t)buf_len;
-
-	m->data_off = (RTE_PKTMBUF_HEADROOM <= m->buf_len) ?
-			RTE_PKTMBUF_HEADROOM : m->buf_len;
-
-	m->data_len = 0;
-}
-
-#endif /* RTE_MBUF_REFCNT */
-
-
-static inline struct rte_mbuf* __attribute__((always_inline))
-__rte_pktmbuf_prefree_seg(struct rte_mbuf *m)
-{
-	__rte_mbuf_sanity_check(m, 0);
-
-#ifdef RTE_MBUF_REFCNT
-	if (likely (rte_mbuf_refcnt_read(m) == 1) ||
-			likely (rte_mbuf_refcnt_update(m, -1) == 0)) {
-		struct rte_mbuf *md = RTE_MBUF_FROM_BADDR(m->buf_addr);
-
-		rte_mbuf_refcnt_set(m, 0);
-
-		/* if this is an indirect mbuf, then
-		 *  - detach mbuf
-		 *  - free attached mbuf segment
-		 */
-		if (unlikely (md != m)) {
-			rte_pktmbuf_detach(m);
-			if (rte_mbuf_refcnt_update(md, -1) == 0)
-				__rte_mbuf_raw_free(md);
-		}
-#endif
-		return(m);
-#ifdef RTE_MBUF_REFCNT
-	}
-	return (NULL);
-#endif
-}
-
-/**
- * Free a segment of a packet mbuf into its original mempool.
- *
- * Free an mbuf, without parsing other segments in case of chained
- * buffers.
- *
- * @param m
- *   The packet mbuf segment to be freed.
- */
-static inline void __attribute__((always_inline))
-rte_pktmbuf_free_seg(struct rte_mbuf *m)
-{
-	if (likely(NULL != (m = __rte_pktmbuf_prefree_seg(m)))) {
-		m->next = NULL;
-		__rte_mbuf_raw_free(m);
-	}
-}
-
-/**
- * Free a packet mbuf back into its original mempool.
- *
- * Free an mbuf, and all its segments in case of chained buffers. Each
- * segment is added back into its original mempool.
- *
- * @param m
- *   The packet mbuf to be freed.
- */
-static inline void rte_pktmbuf_free(struct rte_mbuf *m)
-{
-	struct rte_mbuf *m_next;
-
-	__rte_mbuf_sanity_check(m, 1);
-
-	while (m != NULL) {
-		m_next = m->next;
-		rte_pktmbuf_free_seg(m);
-		m = m_next;
-	}
-}
-
-#ifdef RTE_MBUF_REFCNT
-
-/**
- * Creates a "clone" of the given packet mbuf.
- *
- * Walks through all segments of the given packet mbuf, and for each of them:
- *  - Creates a new packet mbuf from the given pool.
- *  - Attaches newly created mbuf to the segment.
- * Then updates pkt_len and nb_segs of the "clone" packet mbuf to match values
- * from the original packet mbuf.
- *
- * @param md
- *   The packet mbuf to be cloned.
- * @param mp
- *   The mempool from which the "clone" mbufs are allocated.
- * @return
- *   - The pointer to the new "clone" mbuf on success.
- *   - NULL if allocation fails.
- */
-static inline struct rte_mbuf *rte_pktmbuf_clone(struct rte_mbuf *md,
-		struct rte_mempool *mp)
-{
-	struct rte_mbuf *mc, *mi, **prev;
-	uint32_t pktlen;
-	uint8_t nseg;
-
-	if (unlikely ((mc = rte_pktmbuf_alloc(mp)) == NULL))
-		return (NULL);
-
-	mi = mc;
-	prev = &mi->next;
-	pktlen = md->pkt_len;
-	nseg = 0;
-
-	do {
-		nseg++;
-		rte_pktmbuf_attach(mi, md);
-		*prev = mi;
-		prev = &mi->next;
-	} while ((md = md->next) != NULL &&
-	    (mi = rte_pktmbuf_alloc(mp)) != NULL);
-
-	*prev = NULL;
-	mc->nb_segs = nseg;
-	mc->pkt_len = pktlen;
-
-	/* Allocation of new indirect segment failed */
-	if (unlikely (mi == NULL)) {
-		rte_pktmbuf_free(mc);
-		return (NULL);
-	}
-
-	__rte_mbuf_sanity_check(mc, 1);
-	return (mc);
-}
-
-/**
- * Adds given value to the refcnt of all packet mbuf segments.
- *
- * Walks through all segments of given packet mbuf and for each of them
- * invokes rte_mbuf_refcnt_update().
- *
- * @param m
- *   The packet mbuf whose refcnt to be updated.
- * @param v
- *   The value to add to the mbuf's segments refcnt.
- */
-static inline void rte_pktmbuf_refcnt_update(struct rte_mbuf *m, int16_t v)
-{
-	__rte_mbuf_sanity_check(m, 1);
-
-	do {
-		rte_mbuf_refcnt_update(m, v);
-	} while ((m = m->next) != NULL);
-}
-
-#endif /* RTE_MBUF_REFCNT */
-
-/**
- * Get the headroom in a packet mbuf.
- *
- * @param m
- *   The packet mbuf.
- * @return
- *   The length of the headroom.
- */
-static inline uint16_t rte_pktmbuf_headroom(const struct rte_mbuf *m)
-{
-	__rte_mbuf_sanity_check(m, 1);
-	return m->data_off;
-}
-
-/**
- * Get the tailroom of a packet mbuf.
- *
- * @param m
- *   The packet mbuf.
- * @return
- *   The length of the tailroom.
- */
-static inline uint16_t rte_pktmbuf_tailroom(const struct rte_mbuf *m)
-{
-	__rte_mbuf_sanity_check(m, 1);
-	return (uint16_t)(m->buf_len - rte_pktmbuf_headroom(m) -
-			  m->data_len);
-}
-
-/**
- * Get the last segment of the packet.
- *
- * @param m
- *   The packet mbuf.
- * @return
- *   The last segment of the given mbuf.
- */
-static inline struct rte_mbuf *rte_pktmbuf_lastseg(struct rte_mbuf *m)
-{
-	struct rte_mbuf *m2 = (struct rte_mbuf *)m;
-
-	__rte_mbuf_sanity_check(m, 1);
-	while (m2->next != NULL)
-		m2 = m2->next;
-	return m2;
-}
-
-/**
- * A macro that points to the start of the data in the mbuf.
- *
- * The returned pointer is cast to type t. Before using this
- * function, the user must ensure that m_headlen(m) is large enough to
- * read its data.
- *
- * @param m
- *   The packet mbuf.
- * @param t
- *   The type to cast the result into.
- */
-#define rte_pktmbuf_mtod(m, t) ((t)((char *)(m)->buf_addr + (m)->data_off))
-
-/**
- * A macro that returns the length of the packet.
- *
- * The value can be read or assigned.
- *
- * @param m
- *   The packet mbuf.
- */
-#define rte_pktmbuf_pkt_len(m) ((m)->pkt_len)
-
-/**
- * A macro that returns the length of the segment.
- *
- * The value can be read or assigned.
- *
- * @param m
- *   The packet mbuf.
- */
-#define rte_pktmbuf_data_len(m) ((m)->data_len)
-
-/**
- * Prepend len bytes to an mbuf data area.
- *
- * Returns a pointer to the new
- * data start address. If there is not enough headroom in the first
- * segment, the function will return NULL, without modifying the mbuf.
- *
- * @param m
- *   The pkt mbuf.
- * @param len
- *   The amount of data to prepend (in bytes).
- * @return
- *   A pointer to the start of the newly prepended data, or
- *   NULL if there is not enough headroom space in the first segment
- */
-static inline char *rte_pktmbuf_prepend(struct rte_mbuf *m,
-					uint16_t len)
-{
-	__rte_mbuf_sanity_check(m, 1);
-
-	if (unlikely(len > rte_pktmbuf_headroom(m)))
-		return NULL;
-
-	m->data_off -= len;
-	m->data_len = (uint16_t)(m->data_len + len);
-	m->pkt_len  = (m->pkt_len + len);
-
-	return (char *)m->buf_addr + m->data_off;
-}
-
-/**
- * Append len bytes to an mbuf.
- *
- * Append len bytes to an mbuf and return a pointer to the start address
- * of the added data. If there is not enough tailroom in the last
- * segment, the function will return NULL, without modifying the mbuf.
- *
- * @param m
- *   The packet mbuf.
- * @param len
- *   The amount of data to append (in bytes).
- * @return
- *   A pointer to the start of the newly appended data, or
- *   NULL if there is not enough tailroom space in the last segment
- */
-static inline char *rte_pktmbuf_append(struct rte_mbuf *m, uint16_t len)
-{
-	void *tail;
-	struct rte_mbuf *m_last;
-
-	__rte_mbuf_sanity_check(m, 1);
-
-	m_last = rte_pktmbuf_lastseg(m);
-	if (unlikely(len > rte_pktmbuf_tailroom(m_last)))
-		return NULL;
-
-	tail = (char *)m_last->buf_addr + m_last->data_off + m_last->data_len;
-	m_last->data_len = (uint16_t)(m_last->data_len + len);
-	m->pkt_len  = (m->pkt_len + len);
-	return (char*) tail;
-}
-
-/**
- * Remove len bytes at the beginning of an mbuf.
- *
- * Returns a pointer to the start address of the new data area. If the
- * length is greater than the length of the first segment, then the
- * function will fail and return NULL, without modifying the mbuf.
- *
- * @param m
- *   The packet mbuf.
- * @param len
- *   The amount of data to remove (in bytes).
- * @return
- *   A pointer to the new start of the data.
- */
-static inline char *rte_pktmbuf_adj(struct rte_mbuf *m, uint16_t len)
-{
-	__rte_mbuf_sanity_check(m, 1);
-
-	if (unlikely(len > m->data_len))
-		return NULL;
-
-	m->data_len = (uint16_t)(m->data_len - len);
-	m->data_off += len;
-	m->pkt_len  = (m->pkt_len - len);
-	return (char *)m->buf_addr + m->data_off;
-}
-
-/**
- * Remove len bytes of data at the end of the mbuf.
- *
- * If the length is greater than the length of the last segment, the
- * function will fail and return -1 without modifying the mbuf.
- *
- * @param m
- *   The packet mbuf.
- * @param len
- *   The amount of data to remove (in bytes).
- * @return
- *   - 0: On success.
- *   - -1: On error.
- */
-static inline int rte_pktmbuf_trim(struct rte_mbuf *m, uint16_t len)
-{
-	struct rte_mbuf *m_last;
-
-	__rte_mbuf_sanity_check(m, 1);
-
-	m_last = rte_pktmbuf_lastseg(m);
-	if (unlikely(len > m_last->data_len))
-		return -1;
-
-	m_last->data_len = (uint16_t)(m_last->data_len - len);
-	m->pkt_len  = (m->pkt_len - len);
-	return 0;
-}
-
-/**
- * Test if mbuf data is contiguous.
- *
- * @param m
- *   The packet mbuf.
- * @return
- *   - 1, if all data is contiguous (one segment).
- *   - 0, if there are several segments.
- */
-static inline int rte_pktmbuf_is_contiguous(const struct rte_mbuf *m)
-{
-	__rte_mbuf_sanity_check(m, 1);
-	return !!(m->nb_segs == 1);
-}
-
-/**
- * Dump an mbuf structure to the console.
- *
- * Dump all fields for the given packet mbuf and all its associated
- * segments (in the case of a chained buffer).
- *
- * @param f
- *   A pointer to a file for output
- * @param m
- *   The packet mbuf.
- * @param dump_len
- *   If dump_len != 0, also dump the "dump_len" first data bytes of
- *   the packet.
- */
-void rte_pktmbuf_dump(FILE *f, const struct rte_mbuf *m, unsigned dump_len);
-
-#ifdef __cplusplus
-}
-#endif
-
-#endif /* _RTE_MBUF_H_ */
-- 
1.9.3

^ permalink raw reply	[flat|nested] 21+ messages in thread

* [dpdk-dev] [PATCH RFC 07/13] core: move librte_ring to core subdir
  2015-01-12 16:33 [dpdk-dev] [PATCH RFC 00/13] Update build system Sergio Gonzalez Monroy
                   ` (5 preceding siblings ...)
  2015-01-12 16:33 ` [dpdk-dev] [PATCH RFC 06/13] core: move librte_mbuf " Sergio Gonzalez Monroy
@ 2015-01-12 16:34 ` Sergio Gonzalez Monroy
  2015-01-12 16:34 ` [dpdk-dev] [PATCH RFC 08/13] Update path of core libraries Sergio Gonzalez Monroy
                   ` (7 subsequent siblings)
  14 siblings, 0 replies; 21+ messages in thread
From: Sergio Gonzalez Monroy @ 2015-01-12 16:34 UTC (permalink / raw)
  To: dev

This is equivalent to:

git mv lib/librte_ring lib/core

Signed-off-by: Sergio Gonzalez Monroy <sergio.gonzalez.monroy@intel.com>
---
 lib/core/librte_ring/Makefile   |   48 ++
 lib/core/librte_ring/rte_ring.c |  338 +++++++++++
 lib/core/librte_ring/rte_ring.h | 1214 +++++++++++++++++++++++++++++++++++++++
 lib/librte_ring/Makefile        |   48 --
 lib/librte_ring/rte_ring.c      |  338 -----------
 lib/librte_ring/rte_ring.h      | 1214 ---------------------------------------
 6 files changed, 1600 insertions(+), 1600 deletions(-)
 create mode 100644 lib/core/librte_ring/Makefile
 create mode 100644 lib/core/librte_ring/rte_ring.c
 create mode 100644 lib/core/librte_ring/rte_ring.h
 delete mode 100644 lib/librte_ring/Makefile
 delete mode 100644 lib/librte_ring/rte_ring.c
 delete mode 100644 lib/librte_ring/rte_ring.h
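[The claim that this patch is equivalent to `git mv` can be checked locally: `git log --follow` keeps tracing the file across the rename. A throwaway-repo sketch; the paths mirror the patch, but the repository and file contents here are synthetic:]

```shell
set -e
tmp=$(mktemp -d); cd "$tmp"
git init -q .
git config user.email test@example.com
git config user.name test

# commit the file at its old location
mkdir -p lib/librte_ring
echo "int x;" > lib/librte_ring/rte_ring.c
git add -A
git commit -qm "add ring"

# replay the move from this patch
mkdir -p lib/core
git mv lib/librte_ring lib/core
git commit -qm "core: move librte_ring to core subdir"

# --follow traverses the rename, so both commits are listed
git log --oneline --follow -- lib/core/librte_ring/rte_ring.c
```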

diff --git a/lib/core/librte_ring/Makefile b/lib/core/librte_ring/Makefile
new file mode 100644
index 0000000..2380a43
--- /dev/null
+++ b/lib/core/librte_ring/Makefile
@@ -0,0 +1,48 @@
+#   BSD LICENSE
+#
+#   Copyright(c) 2010-2014 Intel Corporation. All rights reserved.
+#   All rights reserved.
+#
+#   Redistribution and use in source and binary forms, with or without
+#   modification, are permitted provided that the following conditions
+#   are met:
+#
+#     * Redistributions of source code must retain the above copyright
+#       notice, this list of conditions and the following disclaimer.
+#     * Redistributions in binary form must reproduce the above copyright
+#       notice, this list of conditions and the following disclaimer in
+#       the documentation and/or other materials provided with the
+#       distribution.
+#     * Neither the name of Intel Corporation nor the names of its
+#       contributors may be used to endorse or promote products derived
+#       from this software without specific prior written permission.
+#
+#   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+#   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+#   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+#   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+#   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+#   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+#   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+#   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+#   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+#   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+#   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+include $(RTE_SDK)/mk/rte.vars.mk
+
+# library name
+LIB = librte_ring.a
+
+CFLAGS += $(WERROR_FLAGS) -I$(SRCDIR) -O3
+
+# all source are stored in SRCS-y
+SRCS-$(CONFIG_RTE_LIBRTE_RING) := rte_ring.c
+
+# install includes
+SYMLINK-$(CONFIG_RTE_LIBRTE_RING)-include := rte_ring.h
+
+# this lib needs eal and rte_malloc
+DEPDIRS-$(CONFIG_RTE_LIBRTE_RING) += lib/librte_eal lib/librte_malloc
+
+include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/lib/core/librte_ring/rte_ring.c b/lib/core/librte_ring/rte_ring.c
new file mode 100644
index 0000000..f5899c4
--- /dev/null
+++ b/lib/core/librte_ring/rte_ring.c
@@ -0,0 +1,338 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2010-2014 Intel Corporation. All rights reserved.
+ *   All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Intel Corporation nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+/*
+ * Derived from FreeBSD's bufring.c
+ *
+ **************************************************************************
+ *
+ * Copyright (c) 2007,2008 Kip Macy kmacy@freebsd.org
+ * All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ *
+ * 1. Redistributions of source code must retain the above copyright notice,
+ *    this list of conditions and the following disclaimer.
+ *
+ * 2. The name of Kip Macy nor the names of other
+ *    contributors may be used to endorse or promote products derived from
+ *    this software without specific prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
+ * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+ * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+ * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+ * POSSIBILITY OF SUCH DAMAGE.
+ *
+ ***************************************************************************/
+
+#include <stdio.h>
+#include <stdarg.h>
+#include <string.h>
+#include <stdint.h>
+#include <inttypes.h>
+#include <errno.h>
+#include <sys/queue.h>
+
+#include <rte_common.h>
+#include <rte_log.h>
+#include <rte_memory.h>
+#include <rte_memzone.h>
+#include <rte_malloc.h>
+#include <rte_launch.h>
+#include <rte_tailq.h>
+#include <rte_eal.h>
+#include <rte_eal_memconfig.h>
+#include <rte_atomic.h>
+#include <rte_per_lcore.h>
+#include <rte_lcore.h>
+#include <rte_branch_prediction.h>
+#include <rte_errno.h>
+#include <rte_string_fns.h>
+#include <rte_spinlock.h>
+
+#include "rte_ring.h"
+
+TAILQ_HEAD(rte_ring_list, rte_tailq_entry);
+
+/* true if x is a power of 2 */
+#define POWEROF2(x) ((((x)-1) & (x)) == 0)
+
+/* return the size of memory occupied by a ring */
+ssize_t
+rte_ring_get_memsize(unsigned count)
+{
+	ssize_t sz;
+
+	/* count must be a power of 2 */
+	if ((!POWEROF2(count)) || (count > RTE_RING_SZ_MASK )) {
+		RTE_LOG(ERR, RING,
+			"Requested size is invalid, must be power of 2, and "
+			"must not exceed the size limit %u\n", RTE_RING_SZ_MASK);
+		return -EINVAL;
+	}
+
+	sz = sizeof(struct rte_ring) + count * sizeof(void *);
+	sz = RTE_ALIGN(sz, RTE_CACHE_LINE_SIZE);
+	return sz;
+}
+
+int
+rte_ring_init(struct rte_ring *r, const char *name, unsigned count,
+	unsigned flags)
+{
+	/* compilation-time checks */
+	RTE_BUILD_BUG_ON((sizeof(struct rte_ring) &
+			  RTE_CACHE_LINE_MASK) != 0);
+#ifdef RTE_RING_SPLIT_PROD_CONS
+	RTE_BUILD_BUG_ON((offsetof(struct rte_ring, cons) &
+			  RTE_CACHE_LINE_MASK) != 0);
+#endif
+	RTE_BUILD_BUG_ON((offsetof(struct rte_ring, prod) &
+			  RTE_CACHE_LINE_MASK) != 0);
+#ifdef RTE_LIBRTE_RING_DEBUG
+	RTE_BUILD_BUG_ON((sizeof(struct rte_ring_debug_stats) &
+			  RTE_CACHE_LINE_MASK) != 0);
+	RTE_BUILD_BUG_ON((offsetof(struct rte_ring, stats) &
+			  RTE_CACHE_LINE_MASK) != 0);
+#endif
+
+	/* init the ring structure */
+	memset(r, 0, sizeof(*r));
+	snprintf(r->name, sizeof(r->name), "%s", name);
+	r->flags = flags;
+	r->prod.watermark = count;
+	r->prod.sp_enqueue = !!(flags & RING_F_SP_ENQ);
+	r->cons.sc_dequeue = !!(flags & RING_F_SC_DEQ);
+	r->prod.size = r->cons.size = count;
+	r->prod.mask = r->cons.mask = count-1;
+	r->prod.head = r->cons.head = 0;
+	r->prod.tail = r->cons.tail = 0;
+
+	return 0;
+}
+
+/* create the ring */
+struct rte_ring *
+rte_ring_create(const char *name, unsigned count, int socket_id,
+		unsigned flags)
+{
+	char mz_name[RTE_MEMZONE_NAMESIZE];
+	struct rte_ring *r;
+	struct rte_tailq_entry *te;
+	const struct rte_memzone *mz;
+	ssize_t ring_size;
+	int mz_flags = 0;
+	struct rte_ring_list* ring_list = NULL;
+
+	/* check that we have an initialised tail queue */
+	if ((ring_list =
+	     RTE_TAILQ_LOOKUP_BY_IDX(RTE_TAILQ_RING, rte_ring_list)) == NULL) {
+		rte_errno = E_RTE_NO_TAILQ;
+		return NULL;
+	}
+
+	ring_size = rte_ring_get_memsize(count);
+	if (ring_size < 0) {
+		rte_errno = ring_size;
+		return NULL;
+	}
+
+	te = rte_zmalloc("RING_TAILQ_ENTRY", sizeof(*te), 0);
+	if (te == NULL) {
+		RTE_LOG(ERR, RING, "Cannot reserve memory for tailq\n");
+		rte_errno = ENOMEM;
+		return NULL;
+	}
+
+	snprintf(mz_name, sizeof(mz_name), "%s%s", RTE_RING_MZ_PREFIX, name);
+
+	rte_rwlock_write_lock(RTE_EAL_TAILQ_RWLOCK);
+
+	/* reserve a memory zone for this ring. If we can't get rte_config or
+	 * we are a secondary process, the memzone_reserve function will set
+	 * rte_errno for us appropriately - hence no check in this function */
+	mz = rte_memzone_reserve(mz_name, ring_size, socket_id, mz_flags);
+	if (mz != NULL) {
+		r = mz->addr;
+		/* no need to check return value here, we already checked the
+		 * arguments above */
+		rte_ring_init(r, name, count, flags);
+
+		te->data = (void *) r;
+
+		TAILQ_INSERT_TAIL(ring_list, te, next);
+	} else {
+		r = NULL;
+		RTE_LOG(ERR, RING, "Cannot reserve memory\n");
+		rte_free(te);
+	}
+	rte_rwlock_write_unlock(RTE_EAL_TAILQ_RWLOCK);
+
+	return r;
+}
+
+/*
+ * change the high water mark. If *count* is 0, water marking is
+ * disabled
+ */
+int
+rte_ring_set_water_mark(struct rte_ring *r, unsigned count)
+{
+	if (count >= r->prod.size)
+		return -EINVAL;
+
+	/* if count is 0, disable the watermarking */
+	if (count == 0)
+		count = r->prod.size;
+
+	r->prod.watermark = count;
+	return 0;
+}
+
+/* dump the status of the ring on the console */
+void
+rte_ring_dump(FILE *f, const struct rte_ring *r)
+{
+#ifdef RTE_LIBRTE_RING_DEBUG
+	struct rte_ring_debug_stats sum;
+	unsigned lcore_id;
+#endif
+
+	fprintf(f, "ring <%s>@%p\n", r->name, r);
+	fprintf(f, "  flags=%x\n", r->flags);
+	fprintf(f, "  size=%"PRIu32"\n", r->prod.size);
+	fprintf(f, "  ct=%"PRIu32"\n", r->cons.tail);
+	fprintf(f, "  ch=%"PRIu32"\n", r->cons.head);
+	fprintf(f, "  pt=%"PRIu32"\n", r->prod.tail);
+	fprintf(f, "  ph=%"PRIu32"\n", r->prod.head);
+	fprintf(f, "  used=%u\n", rte_ring_count(r));
+	fprintf(f, "  avail=%u\n", rte_ring_free_count(r));
+	if (r->prod.watermark == r->prod.size)
+		fprintf(f, "  watermark=0\n");
+	else
+		fprintf(f, "  watermark=%"PRIu32"\n", r->prod.watermark);
+
+	/* sum and dump statistics */
+#ifdef RTE_LIBRTE_RING_DEBUG
+	memset(&sum, 0, sizeof(sum));
+	for (lcore_id = 0; lcore_id < RTE_MAX_LCORE; lcore_id++) {
+		sum.enq_success_bulk += r->stats[lcore_id].enq_success_bulk;
+		sum.enq_success_objs += r->stats[lcore_id].enq_success_objs;
+		sum.enq_quota_bulk += r->stats[lcore_id].enq_quota_bulk;
+		sum.enq_quota_objs += r->stats[lcore_id].enq_quota_objs;
+		sum.enq_fail_bulk += r->stats[lcore_id].enq_fail_bulk;
+		sum.enq_fail_objs += r->stats[lcore_id].enq_fail_objs;
+		sum.deq_success_bulk += r->stats[lcore_id].deq_success_bulk;
+		sum.deq_success_objs += r->stats[lcore_id].deq_success_objs;
+		sum.deq_fail_bulk += r->stats[lcore_id].deq_fail_bulk;
+		sum.deq_fail_objs += r->stats[lcore_id].deq_fail_objs;
+	}
+	fprintf(f, "  size=%"PRIu32"\n", r->prod.size);
+	fprintf(f, "  enq_success_bulk=%"PRIu64"\n", sum.enq_success_bulk);
+	fprintf(f, "  enq_success_objs=%"PRIu64"\n", sum.enq_success_objs);
+	fprintf(f, "  enq_quota_bulk=%"PRIu64"\n", sum.enq_quota_bulk);
+	fprintf(f, "  enq_quota_objs=%"PRIu64"\n", sum.enq_quota_objs);
+	fprintf(f, "  enq_fail_bulk=%"PRIu64"\n", sum.enq_fail_bulk);
+	fprintf(f, "  enq_fail_objs=%"PRIu64"\n", sum.enq_fail_objs);
+	fprintf(f, "  deq_success_bulk=%"PRIu64"\n", sum.deq_success_bulk);
+	fprintf(f, "  deq_success_objs=%"PRIu64"\n", sum.deq_success_objs);
+	fprintf(f, "  deq_fail_bulk=%"PRIu64"\n", sum.deq_fail_bulk);
+	fprintf(f, "  deq_fail_objs=%"PRIu64"\n", sum.deq_fail_objs);
+#else
+	fprintf(f, "  no statistics available\n");
+#endif
+}
+
+/* dump the status of all rings on the console */
+void
+rte_ring_list_dump(FILE *f)
+{
+	const struct rte_tailq_entry *te;
+	struct rte_ring_list *ring_list;
+
+	/* check that we have an initialised tail queue */
+	if ((ring_list =
+	     RTE_TAILQ_LOOKUP_BY_IDX(RTE_TAILQ_RING, rte_ring_list)) == NULL) {
+		rte_errno = E_RTE_NO_TAILQ;
+		return;
+	}
+
+	rte_rwlock_read_lock(RTE_EAL_TAILQ_RWLOCK);
+
+	TAILQ_FOREACH(te, ring_list, next) {
+		rte_ring_dump(f, (struct rte_ring *) te->data);
+	}
+
+	rte_rwlock_read_unlock(RTE_EAL_TAILQ_RWLOCK);
+}
+
+/* search a ring from its name */
+struct rte_ring *
+rte_ring_lookup(const char *name)
+{
+	struct rte_tailq_entry *te;
+	struct rte_ring *r = NULL;
+	struct rte_ring_list *ring_list;
+
+	/* check that we have an initialized tail queue */
+	if ((ring_list =
+	     RTE_TAILQ_LOOKUP_BY_IDX(RTE_TAILQ_RING, rte_ring_list)) == NULL) {
+		rte_errno = E_RTE_NO_TAILQ;
+		return NULL;
+	}
+
+	rte_rwlock_read_lock(RTE_EAL_TAILQ_RWLOCK);
+
+	TAILQ_FOREACH(te, ring_list, next) {
+		r = (struct rte_ring *) te->data;
+		if (strncmp(name, r->name, RTE_RING_NAMESIZE) == 0)
+			break;
+	}
+
+	rte_rwlock_read_unlock(RTE_EAL_TAILQ_RWLOCK);
+
+	if (te == NULL) {
+		rte_errno = ENOENT;
+		return NULL;
+	}
+
+	return r;
+}
diff --git a/lib/core/librte_ring/rte_ring.h b/lib/core/librte_ring/rte_ring.h
new file mode 100644
index 0000000..7cd5f2d
--- /dev/null
+++ b/lib/core/librte_ring/rte_ring.h
@@ -0,0 +1,1214 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2010-2014 Intel Corporation. All rights reserved.
+ *   All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Intel Corporation nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+/*
+ * Derived from FreeBSD's bufring.h
+ *
+ **************************************************************************
+ *
+ * Copyright (c) 2007-2009 Kip Macy kmacy@freebsd.org
+ * All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ *
+ * 1. Redistributions of source code must retain the above copyright notice,
+ *    this list of conditions and the following disclaimer.
+ *
+ * 2. The name of Kip Macy nor the names of other
+ *    contributors may be used to endorse or promote products derived from
+ *    this software without specific prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
+ * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+ * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+ * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+ * POSSIBILITY OF SUCH DAMAGE.
+ *
+ ***************************************************************************/
+
+#ifndef _RTE_RING_H_
+#define _RTE_RING_H_
+
+/**
+ * @file
+ * RTE Ring
+ *
+ * The Ring Manager is a fixed-size queue, implemented as a table of
+ * pointers. Head and tail pointers are modified atomically, allowing
+ * concurrent access to it. It has the following features:
+ *
+ * - FIFO (First In First Out)
+ * - Maximum size is fixed; the pointers are stored in a table.
+ * - Lockless implementation.
+ * - Multi- or single-consumer dequeue.
+ * - Multi- or single-producer enqueue.
+ * - Bulk dequeue.
+ * - Bulk enqueue.
+ *
+ * Note: the ring implementation is not preemptible. An lcore must not
+ * be interrupted by another task that uses the same ring.
+ *
+ */
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+#include <stdio.h>
+#include <stdint.h>
+#include <sys/queue.h>
+#include <errno.h>
+#include <rte_common.h>
+#include <rte_memory.h>
+#include <rte_lcore.h>
+#include <rte_atomic.h>
+#include <rte_branch_prediction.h>
+
+enum rte_ring_queue_behavior {
+	RTE_RING_QUEUE_FIXED = 0, /* Enq/Deq a fixed number of items from a ring */
+	RTE_RING_QUEUE_VARIABLE   /* Enq/Deq as many items as possible from a ring */
+};
+
+#ifdef RTE_LIBRTE_RING_DEBUG
+/**
+ * A structure that stores the ring statistics (per-lcore).
+ */
+struct rte_ring_debug_stats {
+	uint64_t enq_success_bulk; /**< Successful enqueues number. */
+	uint64_t enq_success_objs; /**< Objects successfully enqueued. */
+	uint64_t enq_quota_bulk;   /**< Successful enqueues above watermark. */
+	uint64_t enq_quota_objs;   /**< Objects enqueued above watermark. */
+	uint64_t enq_fail_bulk;    /**< Failed enqueues number. */
+	uint64_t enq_fail_objs;    /**< Objects that failed to be enqueued. */
+	uint64_t deq_success_bulk; /**< Successful dequeues number. */
+	uint64_t deq_success_objs; /**< Objects successfully dequeued. */
+	uint64_t deq_fail_bulk;    /**< Failed dequeues number. */
+	uint64_t deq_fail_objs;    /**< Objects that failed to be dequeued. */
+} __rte_cache_aligned;
+#endif
+
+#define RTE_RING_NAMESIZE 32 /**< The maximum length of a ring name. */
+#define RTE_RING_MZ_PREFIX "RG_"
+
+/**
+ * An RTE ring structure.
+ *
+ * The producer and the consumer each have a head and a tail index. The
+ * particularity of these indexes is that they are not constrained to lie
+ * between 0 and size(ring)-1: they are free-running 32-bit counters whose
+ * value is masked when the ring[] table is accessed. Thanks to this,
+ * subtractions between two index values are done modulo 2^32, so index
+ * overflow is not a problem.
+ */
+struct rte_ring {
+	char name[RTE_RING_NAMESIZE];    /**< Name of the ring. */
+	int flags;                       /**< Flags supplied at creation. */
+
+	/** Ring producer status. */
+	struct prod {
+		uint32_t watermark;      /**< Maximum items before EDQUOT. */
+		uint32_t sp_enqueue;     /**< True, if single producer. */
+		uint32_t size;           /**< Size of ring. */
+		uint32_t mask;           /**< Mask (size-1) of ring. */
+		volatile uint32_t head;  /**< Producer head. */
+		volatile uint32_t tail;  /**< Producer tail. */
+	} prod __rte_cache_aligned;
+
+	/** Ring consumer status. */
+	struct cons {
+		uint32_t sc_dequeue;     /**< True, if single consumer. */
+		uint32_t size;           /**< Size of the ring. */
+		uint32_t mask;           /**< Mask (size-1) of ring. */
+		volatile uint32_t head;  /**< Consumer head. */
+		volatile uint32_t tail;  /**< Consumer tail. */
+#ifdef RTE_RING_SPLIT_PROD_CONS
+	} cons __rte_cache_aligned;
+#else
+	} cons;
+#endif
+
+#ifdef RTE_LIBRTE_RING_DEBUG
+	struct rte_ring_debug_stats stats[RTE_MAX_LCORE];
+#endif
+
+	void * ring[0] __rte_cache_aligned; /**< Memory space of the ring starts
+	                                     * here. Not volatile, so be careful
+	                                     * about compiler re-ordering. */
+};
+
+#define RING_F_SP_ENQ 0x0001 /**< The default enqueue is "single-producer". */
+#define RING_F_SC_DEQ 0x0002 /**< The default dequeue is "single-consumer". */
+#define RTE_RING_QUOT_EXCEED (1 << 31)  /**< Quota exceed for burst ops */
+#define RTE_RING_SZ_MASK  (unsigned)(0x0fffffff) /**< Ring size mask */
+
+/**
+ * @internal When debug is enabled, store ring statistics.
+ * @param r
+ *   A pointer to the ring.
+ * @param name
+ *   The name of the statistics field to increment in the ring.
+ * @param n
+ *   The number of objects to add to the statistics.
+ */
+#ifdef RTE_LIBRTE_RING_DEBUG
+#define __RING_STAT_ADD(r, name, n) do {		\
+		unsigned __lcore_id = rte_lcore_id();	\
+		r->stats[__lcore_id].name##_objs += n;	\
+		r->stats[__lcore_id].name##_bulk += 1;	\
+	} while(0)
+#else
+#define __RING_STAT_ADD(r, name, n) do {} while(0)
+#endif
+
+/**
+ * Calculate the memory size needed for a ring
+ *
+ * This function returns the number of bytes needed for a ring, given
+ * the number of elements in it. This value is the sum of the size of
+ * the structure rte_ring and the size of the memory needed by the
+ * object pointers. The value is aligned to a cache line size.
+ *
+ * @param count
+ *   The number of elements in the ring (must be a power of 2).
+ * @return
+ *   - The memory size needed for the ring on success.
+ *   - -EINVAL if count is not a power of 2.
+ */
+ssize_t rte_ring_get_memsize(unsigned count);
+
+/**
+ * Initialize a ring structure.
+ *
+ * Initialize a ring structure in memory pointed by "r". The size of the
+ * memory area must be large enough to store the ring structure and the
+ * object table. It is advised to use rte_ring_get_memsize() to get the
+ * appropriate size.
+ *
+ * The ring size is set to *count*, which must be a power of two. Water
+ * marking is disabled by default. The real usable ring size is
+ * *count-1* instead of *count* to differentiate a full ring from an
+ * empty ring.
+ *
+ * The ring is not added to the RTE_TAILQ_RING global list, since the
+ * memory given by the caller may not be shareable among DPDK
+ * processes.
+ *
+ * @param r
+ *   The pointer to the ring structure followed by the objects table.
+ * @param name
+ *   The name of the ring.
+ * @param count
+ *   The number of elements in the ring (must be a power of 2).
+ * @param flags
+ *   An OR of the following:
+ *    - RING_F_SP_ENQ: If this flag is set, the default behavior when
+ *      using ``rte_ring_enqueue()`` or ``rte_ring_enqueue_bulk()``
+ *      is "single-producer". Otherwise, it is "multi-producers".
+ *    - RING_F_SC_DEQ: If this flag is set, the default behavior when
+ *      using ``rte_ring_dequeue()`` or ``rte_ring_dequeue_bulk()``
+ *      is "single-consumer". Otherwise, it is "multi-consumers".
+ * @return
+ *   0 on success, or a negative value on error.
+ */
+int rte_ring_init(struct rte_ring *r, const char *name, unsigned count,
+	unsigned flags);
+
+/**
+ * Create a new ring named *name* in memory.
+ *
+ * This function uses ``rte_memzone_reserve()`` to allocate memory. Then it
+ * calls rte_ring_init() to initialize an empty ring.
+ *
+ * The new ring size is set to *count*, which must be a power of
+ * two. Water marking is disabled by default. The real usable ring size
+ * is *count-1* instead of *count* to differentiate a full ring from an
+ * empty ring.
+ *
+ * The ring is added to the RTE_TAILQ_RING list.
+ *
+ * @param name
+ *   The name of the ring.
+ * @param count
+ *   The size of the ring (must be a power of 2).
+ * @param socket_id
+ *   The *socket_id* argument is the socket identifier in case of
+ *   NUMA. The value can be *SOCKET_ID_ANY* if there is no NUMA
+ *   constraint for the reserved zone.
+ * @param flags
+ *   An OR of the following:
+ *    - RING_F_SP_ENQ: If this flag is set, the default behavior when
+ *      using ``rte_ring_enqueue()`` or ``rte_ring_enqueue_bulk()``
+ *      is "single-producer". Otherwise, it is "multi-producers".
+ *    - RING_F_SC_DEQ: If this flag is set, the default behavior when
+ *      using ``rte_ring_dequeue()`` or ``rte_ring_dequeue_bulk()``
+ *      is "single-consumer". Otherwise, it is "multi-consumers".
+ * @return
+ *   On success, a pointer to the newly allocated ring. NULL on error with
+ *    rte_errno set appropriately. Possible errno values include:
+ *    - E_RTE_NO_CONFIG - function could not get pointer to rte_config structure
+ *    - E_RTE_SECONDARY - function was called from a secondary process instance
+ *    - E_RTE_NO_TAILQ - no tailq list could be retrieved for the ring list
+ *    - EINVAL - count provided is not a power of 2
+ *    - ENOSPC - the maximum number of memzones has already been allocated
+ *    - EEXIST - a memzone with the same name already exists
+ *    - ENOMEM - no appropriate memory area found in which to create memzone
+ */
+struct rte_ring *rte_ring_create(const char *name, unsigned count,
+				 int socket_id, unsigned flags);
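To make the creation flags and error handling above concrete, an illustrative usage sketch (not part of the patch; it assumes a running EAL environment and the ring name is arbitrary):

```c
/* Sketch only: assumes rte_eal_init() has already been called. */
#include <rte_ring.h>

static int ring_example(void)
{
	void *obj;
	struct rte_ring *r = rte_ring_create("example_ring", 1024,
					     SOCKET_ID_ANY,
					     RING_F_SP_ENQ | RING_F_SC_DEQ);
	if (r == NULL)
		return -1; /* rte_errno holds the cause */

	if (rte_ring_enqueue(r, &obj) != 0)
		return -1; /* ring full (or watermark exceeded) */

	if (rte_ring_dequeue(r, &obj) != 0)
		return -1; /* ring empty */

	return 0;
}
```

With both flags set, the dispatcher functions declared later in this header resolve to the single-producer/single-consumer fast paths.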
+
+/**
+ * Change the high water mark.
+ *
+ * If *count* is 0, water marking is disabled. Otherwise, it is set to the
+ * *count* value. The *count* value must be greater than 0 and less
+ * than the ring size.
+ *
+ * This function can be called at any time (not necessarily at
+ * initialization).
+ *
+ * @param r
+ *   A pointer to the ring structure.
+ * @param count
+ *   The new water mark value.
+ * @return
+ *   - 0: Success; water mark changed.
+ *   - -EINVAL: Invalid water mark value.
+ */
+int rte_ring_set_water_mark(struct rte_ring *r, unsigned count);
+
+/**
+ * Dump the status of the ring to the console.
+ *
+ * @param f
+ *   A pointer to a file for output
+ * @param r
+ *   A pointer to the ring structure.
+ */
+void rte_ring_dump(FILE *f, const struct rte_ring *r);
+
+/* The actual enqueue of pointers on the ring.
+ * Placed here since identical code is needed in both
+ * single and multi producer enqueue functions. */
+#define ENQUEUE_PTRS() do { \
+	const uint32_t size = r->prod.size; \
+	uint32_t idx = prod_head & mask; \
+	if (likely(idx + n < size)) { \
+		for (i = 0; i < (n & ((~(unsigned)0x3))); i+=4, idx+=4) { \
+			r->ring[idx] = obj_table[i]; \
+			r->ring[idx+1] = obj_table[i+1]; \
+			r->ring[idx+2] = obj_table[i+2]; \
+			r->ring[idx+3] = obj_table[i+3]; \
+		} \
+		switch (n & 0x3) { \
+			case 3: r->ring[idx++] = obj_table[i++]; \
+			case 2: r->ring[idx++] = obj_table[i++]; \
+			case 1: r->ring[idx++] = obj_table[i++]; \
+		} \
+	} else { \
+		for (i = 0; idx < size; i++, idx++)\
+			r->ring[idx] = obj_table[i]; \
+		for (idx = 0; i < n; i++, idx++) \
+			r->ring[idx] = obj_table[i]; \
+	} \
+} while(0)
+
+/* The actual copy of pointers from the ring to obj_table.
+ * Placed here since identical code is needed in both
+ * single and multi consumer dequeue functions. */
+#define DEQUEUE_PTRS() do { \
+	uint32_t idx = cons_head & mask; \
+	const uint32_t size = r->cons.size; \
+	if (likely(idx + n < size)) { \
+		for (i = 0; i < (n & (~(unsigned)0x3)); i+=4, idx+=4) {\
+			obj_table[i] = r->ring[idx]; \
+			obj_table[i+1] = r->ring[idx+1]; \
+			obj_table[i+2] = r->ring[idx+2]; \
+			obj_table[i+3] = r->ring[idx+3]; \
+		} \
+		switch (n & 0x3) { \
+			case 3: obj_table[i++] = r->ring[idx++]; \
+			case 2: obj_table[i++] = r->ring[idx++]; \
+			case 1: obj_table[i++] = r->ring[idx++]; \
+		} \
+	} else { \
+		for (i = 0; idx < size; i++, idx++) \
+			obj_table[i] = r->ring[idx]; \
+		for (idx = 0; i < n; i++, idx++) \
+			obj_table[i] = r->ring[idx]; \
+	} \
+} while (0)
+
+/**
+ * @internal Enqueue several objects on the ring (multi-producers safe).
+ *
+ * This function uses a "compare and set" instruction to move the
+ * producer index atomically.
+ *
+ * @param r
+ *   A pointer to the ring structure.
+ * @param obj_table
+ *   A pointer to a table of void * pointers (objects).
+ * @param n
+ *   The number of objects to add in the ring from the obj_table.
+ * @param behavior
+ *   RTE_RING_QUEUE_FIXED:    Enqueue a fixed number of items to the ring
+ *   RTE_RING_QUEUE_VARIABLE: Enqueue as many items as possible to the ring
+ * @return
+ *   Depends on the *behavior* value:
+ *   if behavior = RTE_RING_QUEUE_FIXED
+ *   - 0: Success; objects enqueued.
+ *   - -EDQUOT: Quota exceeded. The objects have been enqueued, but the
+ *     high water mark is exceeded.
+ *   - -ENOBUFS: Not enough room in the ring to enqueue, no object is enqueued.
+ *   if behavior = RTE_RING_QUEUE_VARIABLE
+ *   - n: Actual number of objects enqueued.
+ */
+static inline int __attribute__((always_inline))
+__rte_ring_mp_do_enqueue(struct rte_ring *r, void * const *obj_table,
+			 unsigned n, enum rte_ring_queue_behavior behavior)
+{
+	uint32_t prod_head, prod_next;
+	uint32_t cons_tail, free_entries;
+	const unsigned max = n;
+	int success;
+	unsigned i;
+	uint32_t mask = r->prod.mask;
+	int ret;
+
+	/* move prod.head atomically */
+	do {
+		/* Reset n to the initial burst count */
+		n = max;
+
+		prod_head = r->prod.head;
+		cons_tail = r->cons.tail;
+		/* The subtraction is done between two unsigned 32bits value
+		 * (the result is always modulo 32 bits even if we have
+		 * prod_head > cons_tail). So 'free_entries' is always between 0
+		 * and size(ring)-1. */
+		free_entries = (mask + cons_tail - prod_head);
+
+		/* check that we have enough room in ring */
+		if (unlikely(n > free_entries)) {
+			if (behavior == RTE_RING_QUEUE_FIXED) {
+				__RING_STAT_ADD(r, enq_fail, n);
+				return -ENOBUFS;
+			}
+			else {
+				/* No free entry available */
+				if (unlikely(free_entries == 0)) {
+					__RING_STAT_ADD(r, enq_fail, n);
+					return 0;
+				}
+
+				n = free_entries;
+			}
+		}
+
+		prod_next = prod_head + n;
+		success = rte_atomic32_cmpset(&r->prod.head, prod_head,
+					      prod_next);
+	} while (unlikely(success == 0));
+
+	/* write entries in ring */
+	ENQUEUE_PTRS();
+	rte_compiler_barrier();
+
+	/* if we exceed the watermark */
+	if (unlikely(((mask + 1) - free_entries + n) > r->prod.watermark)) {
+		ret = (behavior == RTE_RING_QUEUE_FIXED) ? -EDQUOT :
+				(int)(n | RTE_RING_QUOT_EXCEED);
+		__RING_STAT_ADD(r, enq_quota, n);
+	}
+	else {
+		ret = (behavior == RTE_RING_QUEUE_FIXED) ? 0 : n;
+		__RING_STAT_ADD(r, enq_success, n);
+	}
+
+	/*
+	 * If there are other enqueues in progress that preceded us,
+	 * we need to wait for them to complete
+	 */
+	while (unlikely(r->prod.tail != prod_head))
+		rte_pause();
+
+	r->prod.tail = prod_next;
+	return ret;
+}
+
+/**
+ * @internal Enqueue several objects on a ring (NOT multi-producers safe).
+ *
+ * @param r
+ *   A pointer to the ring structure.
+ * @param obj_table
+ *   A pointer to a table of void * pointers (objects).
+ * @param n
+ *   The number of objects to add in the ring from the obj_table.
+ * @param behavior
+ *   RTE_RING_QUEUE_FIXED:    Enqueue a fixed number of items to the ring
+ *   RTE_RING_QUEUE_VARIABLE: Enqueue as many items as possible to the ring
+ * @return
+ *   Depends on the *behavior* value:
+ *   if behavior = RTE_RING_QUEUE_FIXED
+ *   - 0: Success; objects enqueued.
+ *   - -EDQUOT: Quota exceeded. The objects have been enqueued, but the
+ *     high water mark is exceeded.
+ *   - -ENOBUFS: Not enough room in the ring to enqueue, no object is enqueued.
+ *   if behavior = RTE_RING_QUEUE_VARIABLE
+ *   - n: Actual number of objects enqueued.
+ */
+static inline int __attribute__((always_inline))
+__rte_ring_sp_do_enqueue(struct rte_ring *r, void * const *obj_table,
+			 unsigned n, enum rte_ring_queue_behavior behavior)
+{
+	uint32_t prod_head, cons_tail;
+	uint32_t prod_next, free_entries;
+	unsigned i;
+	uint32_t mask = r->prod.mask;
+	int ret;
+
+	prod_head = r->prod.head;
+	cons_tail = r->cons.tail;
+	/* The subtraction is done between two unsigned 32bits value
+	 * (the result is always modulo 32 bits even if we have
+	 * prod_head > cons_tail). So 'free_entries' is always between 0
+	 * and size(ring)-1. */
+	free_entries = mask + cons_tail - prod_head;
+
+	/* check that we have enough room in ring */
+	if (unlikely(n > free_entries)) {
+		if (behavior == RTE_RING_QUEUE_FIXED) {
+			__RING_STAT_ADD(r, enq_fail, n);
+			return -ENOBUFS;
+		}
+		else {
+			/* No free entry available */
+			if (unlikely(free_entries == 0)) {
+				__RING_STAT_ADD(r, enq_fail, n);
+				return 0;
+			}
+
+			n = free_entries;
+		}
+	}
+
+	prod_next = prod_head + n;
+	r->prod.head = prod_next;
+
+	/* write entries in ring */
+	ENQUEUE_PTRS();
+	rte_compiler_barrier();
+
+	/* if we exceed the watermark */
+	if (unlikely(((mask + 1) - free_entries + n) > r->prod.watermark)) {
+		ret = (behavior == RTE_RING_QUEUE_FIXED) ? -EDQUOT :
+			(int)(n | RTE_RING_QUOT_EXCEED);
+		__RING_STAT_ADD(r, enq_quota, n);
+	}
+	else {
+		ret = (behavior == RTE_RING_QUEUE_FIXED) ? 0 : n;
+		__RING_STAT_ADD(r, enq_success, n);
+	}
+
+	r->prod.tail = prod_next;
+	return ret;
+}
+
+/**
+ * @internal Dequeue several objects from a ring (multi-consumers safe). When
+ * more objects are requested than are available, only the actual number of
+ * objects is dequeued.
+ *
+ * This function uses a "compare and set" instruction to move the
+ * consumer index atomically.
+ *
+ * @param r
+ *   A pointer to the ring structure.
+ * @param obj_table
+ *   A pointer to a table of void * pointers (objects) that will be filled.
+ * @param n
+ *   The number of objects to dequeue from the ring to the obj_table.
+ * @param behavior
+ *   RTE_RING_QUEUE_FIXED:    Dequeue a fixed number of items from the ring
+ *   RTE_RING_QUEUE_VARIABLE: Dequeue as many items as possible from the ring
+ * @return
+ *   Depends on the *behavior* value:
+ *   if behavior = RTE_RING_QUEUE_FIXED
+ *   - 0: Success; objects dequeued.
+ *   - -ENOENT: Not enough entries in the ring to dequeue; no object is
+ *     dequeued.
+ *   if behavior = RTE_RING_QUEUE_VARIABLE
+ *   - n: Actual number of objects dequeued.
+ */
+
+static inline int __attribute__((always_inline))
+__rte_ring_mc_do_dequeue(struct rte_ring *r, void **obj_table,
+		 unsigned n, enum rte_ring_queue_behavior behavior)
+{
+	uint32_t cons_head, prod_tail;
+	uint32_t cons_next, entries;
+	const unsigned max = n;
+	int success;
+	unsigned i;
+	uint32_t mask = r->prod.mask;
+
+	/* move cons.head atomically */
+	do {
+		/* Restore n as it may change every loop */
+		n = max;
+
+		cons_head = r->cons.head;
+		prod_tail = r->prod.tail;
+		/* The subtraction is done between two unsigned 32bits value
+		 * (the result is always modulo 32 bits even if we have
+		 * cons_head > prod_tail). So 'entries' is always between 0
+		 * and size(ring)-1. */
+		entries = (prod_tail - cons_head);
+
+		/* Set the actual entries for dequeue */
+		if (n > entries) {
+			if (behavior == RTE_RING_QUEUE_FIXED) {
+				__RING_STAT_ADD(r, deq_fail, n);
+				return -ENOENT;
+			}
+			else {
+				if (unlikely(entries == 0)){
+					__RING_STAT_ADD(r, deq_fail, n);
+					return 0;
+				}
+
+				n = entries;
+			}
+		}
+
+		cons_next = cons_head + n;
+		success = rte_atomic32_cmpset(&r->cons.head, cons_head,
+					      cons_next);
+	} while (unlikely(success == 0));
+
+	/* copy in table */
+	DEQUEUE_PTRS();
+	rte_compiler_barrier();
+
+	/*
+	 * If there are other dequeues in progress that preceded us,
+	 * we need to wait for them to complete
+	 */
+	while (unlikely(r->cons.tail != cons_head))
+		rte_pause();
+
+	__RING_STAT_ADD(r, deq_success, n);
+	r->cons.tail = cons_next;
+
+	return behavior == RTE_RING_QUEUE_FIXED ? 0 : n;
+}
+
+/**
+ * @internal Dequeue several objects from a ring (NOT multi-consumers safe).
+ * When more objects are requested than are available, only the actual number
+ * of objects is dequeued.
+ *
+ * @param r
+ *   A pointer to the ring structure.
+ * @param obj_table
+ *   A pointer to a table of void * pointers (objects) that will be filled.
+ * @param n
+ *   The number of objects to dequeue from the ring to the obj_table.
+ * @param behavior
+ *   RTE_RING_QUEUE_FIXED:    Dequeue a fixed number of items from the ring
+ *   RTE_RING_QUEUE_VARIABLE: Dequeue as many items as possible from the ring
+ * @return
+ *   Depends on the *behavior* value:
+ *   if behavior = RTE_RING_QUEUE_FIXED
+ *   - 0: Success; objects dequeued.
+ *   - -ENOENT: Not enough entries in the ring to dequeue; no object is
+ *     dequeued.
+ *   if behavior = RTE_RING_QUEUE_VARIABLE
+ *   - n: Actual number of objects dequeued.
+ */
+static inline int __attribute__((always_inline))
+__rte_ring_sc_do_dequeue(struct rte_ring *r, void **obj_table,
+		 unsigned n, enum rte_ring_queue_behavior behavior)
+{
+	uint32_t cons_head, prod_tail;
+	uint32_t cons_next, entries;
+	unsigned i;
+	uint32_t mask = r->prod.mask;
+
+	cons_head = r->cons.head;
+	prod_tail = r->prod.tail;
+	/* The subtraction is done between two unsigned 32bits value
+	 * (the result is always modulo 32 bits even if we have
+	 * cons_head > prod_tail). So 'entries' is always between 0
+	 * and size(ring)-1. */
+	entries = prod_tail - cons_head;
+
+	if (n > entries) {
+		if (behavior == RTE_RING_QUEUE_FIXED) {
+			__RING_STAT_ADD(r, deq_fail, n);
+			return -ENOENT;
+		}
+		else {
+			if (unlikely(entries == 0)){
+				__RING_STAT_ADD(r, deq_fail, n);
+				return 0;
+			}
+
+			n = entries;
+		}
+	}
+
+	cons_next = cons_head + n;
+	r->cons.head = cons_next;
+
+	/* copy in table */
+	DEQUEUE_PTRS();
+	rte_compiler_barrier();
+
+	__RING_STAT_ADD(r, deq_success, n);
+	r->cons.tail = cons_next;
+	return behavior == RTE_RING_QUEUE_FIXED ? 0 : n;
+}
+
+/**
+ * Enqueue several objects on the ring (multi-producers safe).
+ *
+ * This function uses a "compare and set" instruction to move the
+ * producer index atomically.
+ *
+ * @param r
+ *   A pointer to the ring structure.
+ * @param obj_table
+ *   A pointer to a table of void * pointers (objects).
+ * @param n
+ *   The number of objects to add in the ring from the obj_table.
+ * @return
+ *   - 0: Success; objects enqueued.
+ *   - -EDQUOT: Quota exceeded. The objects have been enqueued, but the
+ *     high water mark is exceeded.
+ *   - -ENOBUFS: Not enough room in the ring to enqueue; no object is enqueued.
+ */
+static inline int __attribute__((always_inline))
+rte_ring_mp_enqueue_bulk(struct rte_ring *r, void * const *obj_table,
+			 unsigned n)
+{
+	return __rte_ring_mp_do_enqueue(r, obj_table, n, RTE_RING_QUEUE_FIXED);
+}
+
+/**
+ * Enqueue several objects on a ring (NOT multi-producers safe).
+ *
+ * @param r
+ *   A pointer to the ring structure.
+ * @param obj_table
+ *   A pointer to a table of void * pointers (objects).
+ * @param n
+ *   The number of objects to add in the ring from the obj_table.
+ * @return
+ *   - 0: Success; objects enqueued.
+ *   - -EDQUOT: Quota exceeded. The objects have been enqueued, but the
+ *     high water mark is exceeded.
+ *   - -ENOBUFS: Not enough room in the ring to enqueue; no object is enqueued.
+ */
+static inline int __attribute__((always_inline))
+rte_ring_sp_enqueue_bulk(struct rte_ring *r, void * const *obj_table,
+			 unsigned n)
+{
+	return __rte_ring_sp_do_enqueue(r, obj_table, n, RTE_RING_QUEUE_FIXED);
+}
+
+/**
+ * Enqueue several objects on a ring.
+ *
+ * This function calls the multi-producer or the single-producer
+ * version depending on the default behavior that was specified at
+ * ring creation time (see flags).
+ *
+ * @param r
+ *   A pointer to the ring structure.
+ * @param obj_table
+ *   A pointer to a table of void * pointers (objects).
+ * @param n
+ *   The number of objects to add in the ring from the obj_table.
+ * @return
+ *   - 0: Success; objects enqueued.
+ *   - -EDQUOT: Quota exceeded. The objects have been enqueued, but the
+ *     high water mark is exceeded.
+ *   - -ENOBUFS: Not enough room in the ring to enqueue; no object is enqueued.
+ */
+static inline int __attribute__((always_inline))
+rte_ring_enqueue_bulk(struct rte_ring *r, void * const *obj_table,
+		      unsigned n)
+{
+	if (r->prod.sp_enqueue)
+		return rte_ring_sp_enqueue_bulk(r, obj_table, n);
+	else
+		return rte_ring_mp_enqueue_bulk(r, obj_table, n);
+}
+
+/**
+ * Enqueue one object on a ring (multi-producers safe).
+ *
+ * This function uses a "compare and set" instruction to move the
+ * producer index atomically.
+ *
+ * @param r
+ *   A pointer to the ring structure.
+ * @param obj
+ *   A pointer to the object to be added.
+ * @return
+ *   - 0: Success; objects enqueued.
+ *   - -EDQUOT: Quota exceeded. The objects have been enqueued, but the
+ *     high water mark is exceeded.
+ *   - -ENOBUFS: Not enough room in the ring to enqueue; no object is enqueued.
+ */
+static inline int __attribute__((always_inline))
+rte_ring_mp_enqueue(struct rte_ring *r, void *obj)
+{
+	return rte_ring_mp_enqueue_bulk(r, &obj, 1);
+}
+
+/**
+ * Enqueue one object on a ring (NOT multi-producers safe).
+ *
+ * @param r
+ *   A pointer to the ring structure.
+ * @param obj
+ *   A pointer to the object to be added.
+ * @return
+ *   - 0: Success; objects enqueued.
+ *   - -EDQUOT: Quota exceeded. The objects have been enqueued, but the
+ *     high water mark is exceeded.
+ *   - -ENOBUFS: Not enough room in the ring to enqueue; no object is enqueued.
+ */
+static inline int __attribute__((always_inline))
+rte_ring_sp_enqueue(struct rte_ring *r, void *obj)
+{
+	return rte_ring_sp_enqueue_bulk(r, &obj, 1);
+}
+
+/**
+ * Enqueue one object on a ring.
+ *
+ * This function calls the multi-producer or the single-producer
+ * version, depending on the default behaviour that was specified at
+ * ring creation time (see flags).
+ *
+ * @param r
+ *   A pointer to the ring structure.
+ * @param obj
+ *   A pointer to the object to be added.
+ * @return
+ *   - 0: Success; objects enqueued.
+ *   - -EDQUOT: Quota exceeded. The objects have been enqueued, but the
+ *     high water mark is exceeded.
+ *   - -ENOBUFS: Not enough room in the ring to enqueue; no object is enqueued.
+ */
+static inline int __attribute__((always_inline))
+rte_ring_enqueue(struct rte_ring *r, void *obj)
+{
+	if (r->prod.sp_enqueue)
+		return rte_ring_sp_enqueue(r, obj);
+	else
+		return rte_ring_mp_enqueue(r, obj);
+}
+
+/**
+ * Dequeue several objects from a ring (multi-consumers safe).
+ *
+ * This function uses a "compare and set" instruction to move the
+ * consumer index atomically.
+ *
+ * @param r
+ *   A pointer to the ring structure.
+ * @param obj_table
+ *   A pointer to a table of void * pointers (objects) that will be filled.
+ * @param n
+ *   The number of objects to dequeue from the ring to the obj_table.
+ * @return
+ *   - 0: Success; objects dequeued.
+ *   - -ENOENT: Not enough entries in the ring to dequeue; no object is
+ *     dequeued.
+ */
+static inline int __attribute__((always_inline))
+rte_ring_mc_dequeue_bulk(struct rte_ring *r, void **obj_table, unsigned n)
+{
+	return __rte_ring_mc_do_dequeue(r, obj_table, n, RTE_RING_QUEUE_FIXED);
+}
+
+/**
+ * Dequeue several objects from a ring (NOT multi-consumers safe).
+ *
+ * @param r
+ *   A pointer to the ring structure.
+ * @param obj_table
+ *   A pointer to a table of void * pointers (objects) that will be filled.
+ * @param n
+ *   The number of objects to dequeue from the ring to the obj_table,
+ *   must be strictly positive.
+ * @return
+ *   - 0: Success; objects dequeued.
+ *   - -ENOENT: Not enough entries in the ring to dequeue; no object is
+ *     dequeued.
+ */
+static inline int __attribute__((always_inline))
+rte_ring_sc_dequeue_bulk(struct rte_ring *r, void **obj_table, unsigned n)
+{
+	return __rte_ring_sc_do_dequeue(r, obj_table, n, RTE_RING_QUEUE_FIXED);
+}
+
+/**
+ * Dequeue several objects from a ring.
+ *
+ * This function calls the multi-consumers or the single-consumer
+ * version, depending on the default behaviour that was specified at
+ * ring creation time (see flags).
+ *
+ * @param r
+ *   A pointer to the ring structure.
+ * @param obj_table
+ *   A pointer to a table of void * pointers (objects) that will be filled.
+ * @param n
+ *   The number of objects to dequeue from the ring to the obj_table.
+ * @return
+ *   - 0: Success; objects dequeued.
+ *   - -ENOENT: Not enough entries in the ring to dequeue; no object is
+ *     dequeued.
+ */
+static inline int __attribute__((always_inline))
+rte_ring_dequeue_bulk(struct rte_ring *r, void **obj_table, unsigned n)
+{
+	if (r->cons.sc_dequeue)
+		return rte_ring_sc_dequeue_bulk(r, obj_table, n);
+	else
+		return rte_ring_mc_dequeue_bulk(r, obj_table, n);
+}
+
+/**
+ * Dequeue one object from a ring (multi-consumers safe).
+ *
+ * This function uses a "compare and set" instruction to move the
+ * consumer index atomically.
+ *
+ * @param r
+ *   A pointer to the ring structure.
+ * @param obj_p
+ *   A pointer to a void * pointer (object) that will be filled.
+ * @return
+ *   - 0: Success; objects dequeued.
+ *   - -ENOENT: Not enough entries in the ring to dequeue; no object is
+ *     dequeued.
+ */
+static inline int __attribute__((always_inline))
+rte_ring_mc_dequeue(struct rte_ring *r, void **obj_p)
+{
+	return rte_ring_mc_dequeue_bulk(r, obj_p, 1);
+}
+
+/**
+ * Dequeue one object from a ring (NOT multi-consumers safe).
+ *
+ * @param r
+ *   A pointer to the ring structure.
+ * @param obj_p
+ *   A pointer to a void * pointer (object) that will be filled.
+ * @return
+ *   - 0: Success; objects dequeued.
+ *   - -ENOENT: Not enough entries in the ring to dequeue; no object is
+ *     dequeued.
+ */
+static inline int __attribute__((always_inline))
+rte_ring_sc_dequeue(struct rte_ring *r, void **obj_p)
+{
+	return rte_ring_sc_dequeue_bulk(r, obj_p, 1);
+}
+
+/**
+ * Dequeue one object from a ring.
+ *
+ * This function calls the multi-consumers or the single-consumer
+ * version depending on the default behaviour that was specified at
+ * ring creation time (see flags).
+ *
+ * @param r
+ *   A pointer to the ring structure.
+ * @param obj_p
+ *   A pointer to a void * pointer (object) that will be filled.
+ * @return
+ *   - 0: Success; objects dequeued.
+ *   - -ENOENT: Not enough entries in the ring to dequeue; no object is
+ *     dequeued.
+ */
+static inline int __attribute__((always_inline))
+rte_ring_dequeue(struct rte_ring *r, void **obj_p)
+{
+	if (r->cons.sc_dequeue)
+		return rte_ring_sc_dequeue(r, obj_p);
+	else
+		return rte_ring_mc_dequeue(r, obj_p);
+}
+
+/**
+ * Test if a ring is full.
+ *
+ * @param r
+ *   A pointer to the ring structure.
+ * @return
+ *   - 1: The ring is full.
+ *   - 0: The ring is not full.
+ */
+static inline int
+rte_ring_full(const struct rte_ring *r)
+{
+	uint32_t prod_tail = r->prod.tail;
+	uint32_t cons_tail = r->cons.tail;
+	return (((cons_tail - prod_tail - 1) & r->prod.mask) == 0);
+}
+
+/**
+ * Test if a ring is empty.
+ *
+ * @param r
+ *   A pointer to the ring structure.
+ * @return
+ *   - 1: The ring is empty.
+ *   - 0: The ring is not empty.
+ */
+static inline int
+rte_ring_empty(const struct rte_ring *r)
+{
+	uint32_t prod_tail = r->prod.tail;
+	uint32_t cons_tail = r->cons.tail;
+	return !!(cons_tail == prod_tail);
+}
+
+/**
+ * Return the number of entries in a ring.
+ *
+ * @param r
+ *   A pointer to the ring structure.
+ * @return
+ *   The number of entries in the ring.
+ */
+static inline unsigned
+rte_ring_count(const struct rte_ring *r)
+{
+	uint32_t prod_tail = r->prod.tail;
+	uint32_t cons_tail = r->cons.tail;
+	return ((prod_tail - cons_tail) & r->prod.mask);
+}
+
+/**
+ * Return the number of free entries in a ring.
+ *
+ * @param r
+ *   A pointer to the ring structure.
+ * @return
+ *   The number of free entries in the ring.
+ */
+static inline unsigned
+rte_ring_free_count(const struct rte_ring *r)
+{
+	uint32_t prod_tail = r->prod.tail;
+	uint32_t cons_tail = r->cons.tail;
+	return ((cons_tail - prod_tail - 1) & r->prod.mask);
+}
+
+/**
+ * Dump the status of all rings on the console
+ *
+ * @param f
+ *   A pointer to a file for output
+ */
+void rte_ring_list_dump(FILE *f);
+
+/**
+ * Search a ring from its name
+ *
+ * @param name
+ *   The name of the ring.
+ * @return
+ *   The pointer to the ring matching the name, or NULL if not found,
+ *   with rte_errno set appropriately. Possible rte_errno values include:
+ *    - ENOENT - no ring with the given name was found
+ */
+struct rte_ring *rte_ring_lookup(const char *name);
+
+/**
+ * Enqueue several objects on the ring (multi-producers safe).
+ *
+ * This function uses a "compare and set" instruction to move the
+ * producer index atomically.
+ *
+ * @param r
+ *   A pointer to the ring structure.
+ * @param obj_table
+ *   A pointer to a table of void * pointers (objects).
+ * @param n
+ *   The number of objects to add in the ring from the obj_table.
+ * @return
+ *   - n: Actual number of objects enqueued.
+ */
+static inline unsigned __attribute__((always_inline))
+rte_ring_mp_enqueue_burst(struct rte_ring *r, void * const *obj_table,
+			 unsigned n)
+{
+	return __rte_ring_mp_do_enqueue(r, obj_table, n, RTE_RING_QUEUE_VARIABLE);
+}
+
+/**
+ * Enqueue several objects on a ring (NOT multi-producers safe).
+ *
+ * @param r
+ *   A pointer to the ring structure.
+ * @param obj_table
+ *   A pointer to a table of void * pointers (objects).
+ * @param n
+ *   The number of objects to add in the ring from the obj_table.
+ * @return
+ *   - n: Actual number of objects enqueued.
+ */
+static inline unsigned __attribute__((always_inline))
+rte_ring_sp_enqueue_burst(struct rte_ring *r, void * const *obj_table,
+			 unsigned n)
+{
+	return __rte_ring_sp_do_enqueue(r, obj_table, n, RTE_RING_QUEUE_VARIABLE);
+}
+
+/**
+ * Enqueue several objects on a ring.
+ *
+ * This function calls the multi-producer or the single-producer
+ * version depending on the default behavior that was specified at
+ * ring creation time (see flags).
+ *
+ * @param r
+ *   A pointer to the ring structure.
+ * @param obj_table
+ *   A pointer to a table of void * pointers (objects).
+ * @param n
+ *   The number of objects to add in the ring from the obj_table.
+ * @return
+ *   - n: Actual number of objects enqueued.
+ */
+static inline unsigned __attribute__((always_inline))
+rte_ring_enqueue_burst(struct rte_ring *r, void * const *obj_table,
+		      unsigned n)
+{
+	if (r->prod.sp_enqueue)
+		return rte_ring_sp_enqueue_burst(r, obj_table, n);
+	else
+		return rte_ring_mp_enqueue_burst(r, obj_table, n);
+}
+
+/**
+ * Dequeue several objects from a ring (multi-consumers safe). When more
+ * objects are requested than are available, only the actual number of objects
+ * is dequeued.
+ *
+ * This function uses a "compare and set" instruction to move the
+ * consumer index atomically.
+ *
+ * @param r
+ *   A pointer to the ring structure.
+ * @param obj_table
+ *   A pointer to a table of void * pointers (objects) that will be filled.
+ * @param n
+ *   The number of objects to dequeue from the ring to the obj_table.
+ * @return
+ *   - n: Actual number of objects dequeued, 0 if the ring is empty.
+ */
+static inline unsigned __attribute__((always_inline))
+rte_ring_mc_dequeue_burst(struct rte_ring *r, void **obj_table, unsigned n)
+{
+	return __rte_ring_mc_do_dequeue(r, obj_table, n, RTE_RING_QUEUE_VARIABLE);
+}
+
+/**
+ * Dequeue several objects from a ring (NOT multi-consumers safe). When more
+ * objects are requested than are available, only the actual number of objects
+ * is dequeued.
+ *
+ * @param r
+ *   A pointer to the ring structure.
+ * @param obj_table
+ *   A pointer to a table of void * pointers (objects) that will be filled.
+ * @param n
+ *   The number of objects to dequeue from the ring to the obj_table.
+ * @return
+ *   - n: Actual number of objects dequeued, 0 if the ring is empty.
+ */
+static inline unsigned __attribute__((always_inline))
+rte_ring_sc_dequeue_burst(struct rte_ring *r, void **obj_table, unsigned n)
+{
+	return __rte_ring_sc_do_dequeue(r, obj_table, n, RTE_RING_QUEUE_VARIABLE);
+}
+
+/**
+ * Dequeue multiple objects from a ring up to a maximum number.
+ *
+ * This function calls the multi-consumers or the single-consumer
+ * version, depending on the default behaviour that was specified at
+ * ring creation time (see flags).
+ *
+ * @param r
+ *   A pointer to the ring structure.
+ * @param obj_table
+ *   A pointer to a table of void * pointers (objects) that will be filled.
+ * @param n
+ *   The number of objects to dequeue from the ring to the obj_table.
+ * @return
+ *   - Number of objects dequeued
+ */
+static inline unsigned __attribute__((always_inline))
+rte_ring_dequeue_burst(struct rte_ring *r, void **obj_table, unsigned n)
+{
+	if (r->cons.sc_dequeue)
+		return rte_ring_sc_dequeue_burst(r, obj_table, n);
+	else
+		return rte_ring_mc_dequeue_burst(r, obj_table, n);
+}
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif /* _RTE_RING_H_ */
diff --git a/lib/librte_ring/Makefile b/lib/librte_ring/Makefile
deleted file mode 100644
index 2380a43..0000000
--- a/lib/librte_ring/Makefile
+++ /dev/null
@@ -1,48 +0,0 @@
-#   BSD LICENSE
-#
-#   Copyright(c) 2010-2014 Intel Corporation. All rights reserved.
-#   All rights reserved.
-#
-#   Redistribution and use in source and binary forms, with or without
-#   modification, are permitted provided that the following conditions
-#   are met:
-#
-#     * Redistributions of source code must retain the above copyright
-#       notice, this list of conditions and the following disclaimer.
-#     * Redistributions in binary form must reproduce the above copyright
-#       notice, this list of conditions and the following disclaimer in
-#       the documentation and/or other materials provided with the
-#       distribution.
-#     * Neither the name of Intel Corporation nor the names of its
-#       contributors may be used to endorse or promote products derived
-#       from this software without specific prior written permission.
-#
-#   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
-#   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
-#   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
-#   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
-#   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
-#   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
-#   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
-#   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
-#   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
-#   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
-#   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
-
-include $(RTE_SDK)/mk/rte.vars.mk
-
-# library name
-LIB = librte_ring.a
-
-CFLAGS += $(WERROR_FLAGS) -I$(SRCDIR) -O3
-
-# all source are stored in SRCS-y
-SRCS-$(CONFIG_RTE_LIBRTE_RING) := rte_ring.c
-
-# install includes
-SYMLINK-$(CONFIG_RTE_LIBRTE_RING)-include := rte_ring.h
-
-# this lib needs eal and rte_malloc
-DEPDIRS-$(CONFIG_RTE_LIBRTE_RING) += lib/librte_eal lib/librte_malloc
-
-include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/lib/librte_ring/rte_ring.c b/lib/librte_ring/rte_ring.c
deleted file mode 100644
index f5899c4..0000000
--- a/lib/librte_ring/rte_ring.c
+++ /dev/null
@@ -1,338 +0,0 @@
-/*-
- *   BSD LICENSE
- *
- *   Copyright(c) 2010-2014 Intel Corporation. All rights reserved.
- *   All rights reserved.
- *
- *   Redistribution and use in source and binary forms, with or without
- *   modification, are permitted provided that the following conditions
- *   are met:
- *
- *     * Redistributions of source code must retain the above copyright
- *       notice, this list of conditions and the following disclaimer.
- *     * Redistributions in binary form must reproduce the above copyright
- *       notice, this list of conditions and the following disclaimer in
- *       the documentation and/or other materials provided with the
- *       distribution.
- *     * Neither the name of Intel Corporation nor the names of its
- *       contributors may be used to endorse or promote products derived
- *       from this software without specific prior written permission.
- *
- *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
- *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
- *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
- *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
- *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
- *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
- *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
- *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
- *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
- *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
- *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
- */
-
-/*
- * Derived from FreeBSD's bufring.c
- *
- **************************************************************************
- *
- * Copyright (c) 2007,2008 Kip Macy kmacy@freebsd.org
- * All rights reserved.
- *
- * Redistribution and use in source and binary forms, with or without
- * modification, are permitted provided that the following conditions are met:
- *
- * 1. Redistributions of source code must retain the above copyright notice,
- *    this list of conditions and the following disclaimer.
- *
- * 2. The name of Kip Macy nor the names of other
- *    contributors may be used to endorse or promote products derived from
- *    this software without specific prior written permission.
- *
- * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
- * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
- * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
- * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
- * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
- * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
- * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
- * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
- * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
- * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
- * POSSIBILITY OF SUCH DAMAGE.
- *
- ***************************************************************************/
-
-#include <stdio.h>
-#include <stdarg.h>
-#include <string.h>
-#include <stdint.h>
-#include <inttypes.h>
-#include <errno.h>
-#include <sys/queue.h>
-
-#include <rte_common.h>
-#include <rte_log.h>
-#include <rte_memory.h>
-#include <rte_memzone.h>
-#include <rte_malloc.h>
-#include <rte_launch.h>
-#include <rte_tailq.h>
-#include <rte_eal.h>
-#include <rte_eal_memconfig.h>
-#include <rte_atomic.h>
-#include <rte_per_lcore.h>
-#include <rte_lcore.h>
-#include <rte_branch_prediction.h>
-#include <rte_errno.h>
-#include <rte_string_fns.h>
-#include <rte_spinlock.h>
-
-#include "rte_ring.h"
-
-TAILQ_HEAD(rte_ring_list, rte_tailq_entry);
-
-/* true if x is a power of 2 */
-#define POWEROF2(x) ((((x)-1) & (x)) == 0)
-
-/* return the size of memory occupied by a ring */
-ssize_t
-rte_ring_get_memsize(unsigned count)
-{
-	ssize_t sz;
-
-	/* count must be a power of 2 */
-	if ((!POWEROF2(count)) || (count > RTE_RING_SZ_MASK )) {
-		RTE_LOG(ERR, RING,
-			"Requested size is invalid, must be power of 2, and "
-			"do not exceed the size limit %u\n", RTE_RING_SZ_MASK);
-		return -EINVAL;
-	}
-
-	sz = sizeof(struct rte_ring) + count * sizeof(void *);
-	sz = RTE_ALIGN(sz, RTE_CACHE_LINE_SIZE);
-	return sz;
-}
-
-int
-rte_ring_init(struct rte_ring *r, const char *name, unsigned count,
-	unsigned flags)
-{
-	/* compilation-time checks */
-	RTE_BUILD_BUG_ON((sizeof(struct rte_ring) &
-			  RTE_CACHE_LINE_MASK) != 0);
-#ifdef RTE_RING_SPLIT_PROD_CONS
-	RTE_BUILD_BUG_ON((offsetof(struct rte_ring, cons) &
-			  RTE_CACHE_LINE_MASK) != 0);
-#endif
-	RTE_BUILD_BUG_ON((offsetof(struct rte_ring, prod) &
-			  RTE_CACHE_LINE_MASK) != 0);
-#ifdef RTE_LIBRTE_RING_DEBUG
-	RTE_BUILD_BUG_ON((sizeof(struct rte_ring_debug_stats) &
-			  RTE_CACHE_LINE_MASK) != 0);
-	RTE_BUILD_BUG_ON((offsetof(struct rte_ring, stats) &
-			  RTE_CACHE_LINE_MASK) != 0);
-#endif
-
-	/* init the ring structure */
-	memset(r, 0, sizeof(*r));
-	snprintf(r->name, sizeof(r->name), "%s", name);
-	r->flags = flags;
-	r->prod.watermark = count;
-	r->prod.sp_enqueue = !!(flags & RING_F_SP_ENQ);
-	r->cons.sc_dequeue = !!(flags & RING_F_SC_DEQ);
-	r->prod.size = r->cons.size = count;
-	r->prod.mask = r->cons.mask = count-1;
-	r->prod.head = r->cons.head = 0;
-	r->prod.tail = r->cons.tail = 0;
-
-	return 0;
-}
-
-/* create the ring */
-struct rte_ring *
-rte_ring_create(const char *name, unsigned count, int socket_id,
-		unsigned flags)
-{
-	char mz_name[RTE_MEMZONE_NAMESIZE];
-	struct rte_ring *r;
-	struct rte_tailq_entry *te;
-	const struct rte_memzone *mz;
-	ssize_t ring_size;
-	int mz_flags = 0;
-	struct rte_ring_list* ring_list = NULL;
-
-	/* check that we have an initialised tail queue */
-	if ((ring_list =
-	     RTE_TAILQ_LOOKUP_BY_IDX(RTE_TAILQ_RING, rte_ring_list)) == NULL) {
-		rte_errno = E_RTE_NO_TAILQ;
-		return NULL;
-	}
-
-	ring_size = rte_ring_get_memsize(count);
-	if (ring_size < 0) {
-		rte_errno = ring_size;
-		return NULL;
-	}
-
-	te = rte_zmalloc("RING_TAILQ_ENTRY", sizeof(*te), 0);
-	if (te == NULL) {
-		RTE_LOG(ERR, RING, "Cannot reserve memory for tailq\n");
-		rte_errno = ENOMEM;
-		return NULL;
-	}
-
-	snprintf(mz_name, sizeof(mz_name), "%s%s", RTE_RING_MZ_PREFIX, name);
-
-	rte_rwlock_write_lock(RTE_EAL_TAILQ_RWLOCK);
-
-	/* reserve a memory zone for this ring. If we can't get rte_config or
-	 * we are secondary process, the memzone_reserve function will set
-	 * rte_errno for us appropriately - hence no check in this this function */
-	mz = rte_memzone_reserve(mz_name, ring_size, socket_id, mz_flags);
-	if (mz != NULL) {
-		r = mz->addr;
-		/* no need to check return value here, we already checked the
-		 * arguments above */
-		rte_ring_init(r, name, count, flags);
-
-		te->data = (void *) r;
-
-		TAILQ_INSERT_TAIL(ring_list, te, next);
-	} else {
-		r = NULL;
-		RTE_LOG(ERR, RING, "Cannot reserve memory\n");
-		rte_free(te);
-	}
-	rte_rwlock_write_unlock(RTE_EAL_TAILQ_RWLOCK);
-
-	return r;
-}
-
-/*
- * change the high water mark. If *count* is 0, water marking is
- * disabled
- */
-int
-rte_ring_set_water_mark(struct rte_ring *r, unsigned count)
-{
-	if (count >= r->prod.size)
-		return -EINVAL;
-
-	/* if count is 0, disable the watermarking */
-	if (count == 0)
-		count = r->prod.size;
-
-	r->prod.watermark = count;
-	return 0;
-}
-
-/* dump the status of the ring on the console */
-void
-rte_ring_dump(FILE *f, const struct rte_ring *r)
-{
-#ifdef RTE_LIBRTE_RING_DEBUG
-	struct rte_ring_debug_stats sum;
-	unsigned lcore_id;
-#endif
-
-	fprintf(f, "ring <%s>@%p\n", r->name, r);
-	fprintf(f, "  flags=%x\n", r->flags);
-	fprintf(f, "  size=%"PRIu32"\n", r->prod.size);
-	fprintf(f, "  ct=%"PRIu32"\n", r->cons.tail);
-	fprintf(f, "  ch=%"PRIu32"\n", r->cons.head);
-	fprintf(f, "  pt=%"PRIu32"\n", r->prod.tail);
-	fprintf(f, "  ph=%"PRIu32"\n", r->prod.head);
-	fprintf(f, "  used=%u\n", rte_ring_count(r));
-	fprintf(f, "  avail=%u\n", rte_ring_free_count(r));
-	if (r->prod.watermark == r->prod.size)
-		fprintf(f, "  watermark=0\n");
-	else
-		fprintf(f, "  watermark=%"PRIu32"\n", r->prod.watermark);
-
-	/* sum and dump statistics */
-#ifdef RTE_LIBRTE_RING_DEBUG
-	memset(&sum, 0, sizeof(sum));
-	for (lcore_id = 0; lcore_id < RTE_MAX_LCORE; lcore_id++) {
-		sum.enq_success_bulk += r->stats[lcore_id].enq_success_bulk;
-		sum.enq_success_objs += r->stats[lcore_id].enq_success_objs;
-		sum.enq_quota_bulk += r->stats[lcore_id].enq_quota_bulk;
-		sum.enq_quota_objs += r->stats[lcore_id].enq_quota_objs;
-		sum.enq_fail_bulk += r->stats[lcore_id].enq_fail_bulk;
-		sum.enq_fail_objs += r->stats[lcore_id].enq_fail_objs;
-		sum.deq_success_bulk += r->stats[lcore_id].deq_success_bulk;
-		sum.deq_success_objs += r->stats[lcore_id].deq_success_objs;
-		sum.deq_fail_bulk += r->stats[lcore_id].deq_fail_bulk;
-		sum.deq_fail_objs += r->stats[lcore_id].deq_fail_objs;
-	}
-	fprintf(f, "  size=%"PRIu32"\n", r->prod.size);
-	fprintf(f, "  enq_success_bulk=%"PRIu64"\n", sum.enq_success_bulk);
-	fprintf(f, "  enq_success_objs=%"PRIu64"\n", sum.enq_success_objs);
-	fprintf(f, "  enq_quota_bulk=%"PRIu64"\n", sum.enq_quota_bulk);
-	fprintf(f, "  enq_quota_objs=%"PRIu64"\n", sum.enq_quota_objs);
-	fprintf(f, "  enq_fail_bulk=%"PRIu64"\n", sum.enq_fail_bulk);
-	fprintf(f, "  enq_fail_objs=%"PRIu64"\n", sum.enq_fail_objs);
-	fprintf(f, "  deq_success_bulk=%"PRIu64"\n", sum.deq_success_bulk);
-	fprintf(f, "  deq_success_objs=%"PRIu64"\n", sum.deq_success_objs);
-	fprintf(f, "  deq_fail_bulk=%"PRIu64"\n", sum.deq_fail_bulk);
-	fprintf(f, "  deq_fail_objs=%"PRIu64"\n", sum.deq_fail_objs);
-#else
-	fprintf(f, "  no statistics available\n");
-#endif
-}
-
-/* dump the status of all rings on the console */
-void
-rte_ring_list_dump(FILE *f)
-{
-	const struct rte_tailq_entry *te;
-	struct rte_ring_list *ring_list;
-
-	/* check that we have an initialised tail queue */
-	if ((ring_list =
-	     RTE_TAILQ_LOOKUP_BY_IDX(RTE_TAILQ_RING, rte_ring_list)) == NULL) {
-		rte_errno = E_RTE_NO_TAILQ;
-		return;
-	}
-
-	rte_rwlock_read_lock(RTE_EAL_TAILQ_RWLOCK);
-
-	TAILQ_FOREACH(te, ring_list, next) {
-		rte_ring_dump(f, (struct rte_ring *) te->data);
-	}
-
-	rte_rwlock_read_unlock(RTE_EAL_TAILQ_RWLOCK);
-}
-
-/* search a ring from its name */
-struct rte_ring *
-rte_ring_lookup(const char *name)
-{
-	struct rte_tailq_entry *te;
-	struct rte_ring *r = NULL;
-	struct rte_ring_list *ring_list;
-
-	/* check that we have an initialized tail queue */
-	if ((ring_list =
-	     RTE_TAILQ_LOOKUP_BY_IDX(RTE_TAILQ_RING, rte_ring_list)) == NULL) {
-		rte_errno = E_RTE_NO_TAILQ;
-		return NULL;
-	}
-
-	rte_rwlock_read_lock(RTE_EAL_TAILQ_RWLOCK);
-
-	TAILQ_FOREACH(te, ring_list, next) {
-		r = (struct rte_ring *) te->data;
-		if (strncmp(name, r->name, RTE_RING_NAMESIZE) == 0)
-			break;
-	}
-
-	rte_rwlock_read_unlock(RTE_EAL_TAILQ_RWLOCK);
-
-	if (te == NULL) {
-		rte_errno = ENOENT;
-		return NULL;
-	}
-
-	return r;
-}
diff --git a/lib/librte_ring/rte_ring.h b/lib/librte_ring/rte_ring.h
deleted file mode 100644
index 7cd5f2d..0000000
--- a/lib/librte_ring/rte_ring.h
+++ /dev/null
@@ -1,1214 +0,0 @@
-/*-
- *   BSD LICENSE
- *
- *   Copyright(c) 2010-2014 Intel Corporation. All rights reserved.
- *   All rights reserved.
- *
- *   Redistribution and use in source and binary forms, with or without
- *   modification, are permitted provided that the following conditions
- *   are met:
- *
- *     * Redistributions of source code must retain the above copyright
- *       notice, this list of conditions and the following disclaimer.
- *     * Redistributions in binary form must reproduce the above copyright
- *       notice, this list of conditions and the following disclaimer in
- *       the documentation and/or other materials provided with the
- *       distribution.
- *     * Neither the name of Intel Corporation nor the names of its
- *       contributors may be used to endorse or promote products derived
- *       from this software without specific prior written permission.
- *
- *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
- *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
- *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
- *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
- *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
- *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
- *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
- *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
- *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
- *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
- *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
- */
-
-/*
- * Derived from FreeBSD's bufring.h
- *
- **************************************************************************
- *
- * Copyright (c) 2007-2009 Kip Macy kmacy@freebsd.org
- * All rights reserved.
- *
- * Redistribution and use in source and binary forms, with or without
- * modification, are permitted provided that the following conditions are met:
- *
- * 1. Redistributions of source code must retain the above copyright notice,
- *    this list of conditions and the following disclaimer.
- *
- * 2. The name of Kip Macy nor the names of other
- *    contributors may be used to endorse or promote products derived from
- *    this software without specific prior written permission.
- *
- * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
- * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
- * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
- * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
- * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
- * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
- * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
- * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
- * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
- * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
- * POSSIBILITY OF SUCH DAMAGE.
- *
- ***************************************************************************/
-
-#ifndef _RTE_RING_H_
-#define _RTE_RING_H_
-
-/**
- * @file
- * RTE Ring
- *
- * The Ring Manager is a fixed-size queue, implemented as a table of
- * pointers. Head and tail pointers are modified atomically, allowing
- * concurrent access to it. It has the following features:
- *
- * - FIFO (First In First Out)
- * - Maximum size is fixed; the pointers are stored in a table.
- * - Lockless implementation.
- * - Multi- or single-consumer dequeue.
- * - Multi- or single-producer enqueue.
- * - Bulk dequeue.
- * - Bulk enqueue.
- *
- * Note: the ring implementation is not preemptable. A lcore must not
- * be interrupted by another task that uses the same ring.
- *
- */
-
-#ifdef __cplusplus
-extern "C" {
-#endif
-
-#include <stdio.h>
-#include <stdint.h>
-#include <sys/queue.h>
-#include <errno.h>
-#include <rte_common.h>
-#include <rte_memory.h>
-#include <rte_lcore.h>
-#include <rte_atomic.h>
-#include <rte_branch_prediction.h>
-
-enum rte_ring_queue_behavior {
-	RTE_RING_QUEUE_FIXED = 0, /* Enq/Deq a fixed number of items from a ring */
-	RTE_RING_QUEUE_VARIABLE   /* Enq/Deq as many items a possible from ring */
-};
-
-#ifdef RTE_LIBRTE_RING_DEBUG
-/**
- * A structure that stores the ring statistics (per-lcore).
- */
-struct rte_ring_debug_stats {
-	uint64_t enq_success_bulk; /**< Successful enqueues number. */
-	uint64_t enq_success_objs; /**< Objects successfully enqueued. */
-	uint64_t enq_quota_bulk;   /**< Successful enqueues above watermark. */
-	uint64_t enq_quota_objs;   /**< Objects enqueued above watermark. */
-	uint64_t enq_fail_bulk;    /**< Failed enqueues number. */
-	uint64_t enq_fail_objs;    /**< Objects that failed to be enqueued. */
-	uint64_t deq_success_bulk; /**< Successful dequeues number. */
-	uint64_t deq_success_objs; /**< Objects successfully dequeued. */
-	uint64_t deq_fail_bulk;    /**< Failed dequeues number. */
-	uint64_t deq_fail_objs;    /**< Objects that failed to be dequeued. */
-} __rte_cache_aligned;
-#endif
-
-#define RTE_RING_NAMESIZE 32 /**< The maximum length of a ring name. */
-#define RTE_RING_MZ_PREFIX "RG_"
-
-/**
- * An RTE ring structure.
- *
- * The producer and the consumer have a head and a tail index. The particularity
- * of these index is that they are not between 0 and size(ring). These indexes
- * are between 0 and 2^32, and we mask their value when we access the ring[]
- * field. Thanks to this assumption, we can do subtractions between 2 index
- * values in a modulo-32bit base: that's why the overflow of the indexes is not
- * a problem.
- */
-struct rte_ring {
-	char name[RTE_RING_NAMESIZE];    /**< Name of the ring. */
-	int flags;                       /**< Flags supplied at creation. */
-
-	/** Ring producer status. */
-	struct prod {
-		uint32_t watermark;      /**< Maximum items before EDQUOT. */
-		uint32_t sp_enqueue;     /**< True, if single producer. */
-		uint32_t size;           /**< Size of ring. */
-		uint32_t mask;           /**< Mask (size-1) of ring. */
-		volatile uint32_t head;  /**< Producer head. */
-		volatile uint32_t tail;  /**< Producer tail. */
-	} prod __rte_cache_aligned;
-
-	/** Ring consumer status. */
-	struct cons {
-		uint32_t sc_dequeue;     /**< True, if single consumer. */
-		uint32_t size;           /**< Size of the ring. */
-		uint32_t mask;           /**< Mask (size-1) of ring. */
-		volatile uint32_t head;  /**< Consumer head. */
-		volatile uint32_t tail;  /**< Consumer tail. */
-#ifdef RTE_RING_SPLIT_PROD_CONS
-	} cons __rte_cache_aligned;
-#else
-	} cons;
-#endif
-
-#ifdef RTE_LIBRTE_RING_DEBUG
-	struct rte_ring_debug_stats stats[RTE_MAX_LCORE];
-#endif
-
-	void * ring[0] __rte_cache_aligned; /**< Memory space of ring starts here.
-	                                     * not volatile so need to be careful
-	                                     * about compiler re-ordering */
-};
-
-#define RING_F_SP_ENQ 0x0001 /**< The default enqueue is "single-producer". */
-#define RING_F_SC_DEQ 0x0002 /**< The default dequeue is "single-consumer". */
-#define RTE_RING_QUOT_EXCEED (1 << 31)  /**< Quota exceed for burst ops */
-#define RTE_RING_SZ_MASK  (unsigned)(0x0fffffff) /**< Ring size mask */
-
-/**
- * @internal When debug is enabled, store ring statistics.
- * @param r
- *   A pointer to the ring.
- * @param name
- *   The name of the statistics field to increment in the ring.
- * @param n
- *   The number to add to the object-oriented statistics.
- */
-#ifdef RTE_LIBRTE_RING_DEBUG
-#define __RING_STAT_ADD(r, name, n) do {		\
-		unsigned __lcore_id = rte_lcore_id();	\
-		r->stats[__lcore_id].name##_objs += n;	\
-		r->stats[__lcore_id].name##_bulk += 1;	\
-	} while(0)
-#else
-#define __RING_STAT_ADD(r, name, n) do {} while(0)
-#endif
-
-/**
- * Calculate the memory size needed for a ring
- *
- * This function returns the number of bytes needed for a ring, given
- * the number of elements in it. This value is the sum of the size of
- * the structure rte_ring and the size of the memory needed by the
- * objects pointers. The value is aligned to a cache line size.
- *
- * @param count
- *   The number of elements in the ring (must be a power of 2).
- * @return
- *   - The memory size needed for the ring on success.
- *   - -EINVAL if count is not a power of 2.
- */
-ssize_t rte_ring_get_memsize(unsigned count);
-
-/**
- * Initialize a ring structure.
- *
- * Initialize a ring structure in memory pointed by "r". The size of the
- * memory area must be large enough to store the ring structure and the
- * object table. It is advised to use rte_ring_get_memsize() to get the
- * appropriate size.
- *
- * The ring size is set to *count*, which must be a power of two. Water
- * marking is disabled by default. The real usable ring size is
- * *count-1* instead of *count* to differentiate a free ring from an
- * empty ring.
- *
- * The ring is not added in RTE_TAILQ_RING global list. Indeed, the
- * memory given by the caller may not be shareable among dpdk
- * processes.
- *
- * @param r
- *   The pointer to the ring structure followed by the objects table.
- * @param name
- *   The name of the ring.
- * @param count
- *   The number of elements in the ring (must be a power of 2).
- * @param flags
- *   An OR of the following:
- *    - RING_F_SP_ENQ: If this flag is set, the default behavior when
- *      using ``rte_ring_enqueue()`` or ``rte_ring_enqueue_bulk()``
- *      is "single-producer". Otherwise, it is "multi-producers".
- *    - RING_F_SC_DEQ: If this flag is set, the default behavior when
- *      using ``rte_ring_dequeue()`` or ``rte_ring_dequeue_bulk()``
- *      is "single-consumer". Otherwise, it is "multi-consumers".
- * @return
- *   0 on success, or a negative value on error.
- */
-int rte_ring_init(struct rte_ring *r, const char *name, unsigned count,
-	unsigned flags);
-
-/**
- * Create a new ring named *name* in memory.
- *
- * This function uses ``memzone_reserve()`` to allocate memory. Then it
- * calls rte_ring_init() to initialize an empty ring.
- *
- * The new ring size is set to *count*, which must be a power of
- * two. Water marking is disabled by default. The real usable ring size
- * is *count-1* instead of *count* to differentiate a free ring from an
- * empty ring.
- *
- * The ring is added in RTE_TAILQ_RING list.
- *
- * @param name
- *   The name of the ring.
- * @param count
- *   The size of the ring (must be a power of 2).
- * @param socket_id
- *   The *socket_id* argument is the socket identifier in case of
- *   NUMA. The value can be *SOCKET_ID_ANY* if there is no NUMA
- *   constraint for the reserved zone.
- * @param flags
- *   An OR of the following:
- *    - RING_F_SP_ENQ: If this flag is set, the default behavior when
- *      using ``rte_ring_enqueue()`` or ``rte_ring_enqueue_bulk()``
- *      is "single-producer". Otherwise, it is "multi-producers".
- *    - RING_F_SC_DEQ: If this flag is set, the default behavior when
- *      using ``rte_ring_dequeue()`` or ``rte_ring_dequeue_bulk()``
- *      is "single-consumer". Otherwise, it is "multi-consumers".
- * @return
- *   On success, the pointer to the new allocated ring. NULL on error with
- *    rte_errno set appropriately. Possible errno values include:
- *    - E_RTE_NO_CONFIG - function could not get pointer to rte_config structure
- *    - E_RTE_SECONDARY - function was called from a secondary process instance
- *    - E_RTE_NO_TAILQ - no tailq list could be got for the ring list
- *    - EINVAL - count provided is not a power of 2
- *    - ENOSPC - the maximum number of memzones has already been allocated
- *    - EEXIST - a memzone with the same name already exists
- *    - ENOMEM - no appropriate memory area found in which to create memzone
- */
-struct rte_ring *rte_ring_create(const char *name, unsigned count,
-				 int socket_id, unsigned flags);
-
-/**
- * Change the high water mark.
- *
- * If *count* is 0, water marking is disabled. Otherwise, it is set to the
- * *count* value. The *count* value must be greater than 0 and less
- * than the ring size.
- *
- * This function can be called at any time (not necessarily at
- * initialization).
- *
- * @param r
- *   A pointer to the ring structure.
- * @param count
- *   The new water mark value.
- * @return
- *   - 0: Success; water mark changed.
- *   - -EINVAL: Invalid water mark value.
- */
-int rte_ring_set_water_mark(struct rte_ring *r, unsigned count);
-
-/**
- * Dump the status of the ring to the console.
- *
- * @param f
- *   A pointer to a file for output
- * @param r
- *   A pointer to the ring structure.
- */
-void rte_ring_dump(FILE *f, const struct rte_ring *r);
-
-/* the actual enqueue of pointers on the ring.
- * Placed here since identical code needed in both
- * single and multi producer enqueue functions */
-#define ENQUEUE_PTRS() do { \
-	const uint32_t size = r->prod.size; \
-	uint32_t idx = prod_head & mask; \
-	if (likely(idx + n < size)) { \
-		for (i = 0; i < (n & ((~(unsigned)0x3))); i+=4, idx+=4) { \
-			r->ring[idx] = obj_table[i]; \
-			r->ring[idx+1] = obj_table[i+1]; \
-			r->ring[idx+2] = obj_table[i+2]; \
-			r->ring[idx+3] = obj_table[i+3]; \
-		} \
-		switch (n & 0x3) { \
-			case 3: r->ring[idx++] = obj_table[i++]; \
-			case 2: r->ring[idx++] = obj_table[i++]; \
-			case 1: r->ring[idx++] = obj_table[i++]; \
-		} \
-	} else { \
-		for (i = 0; idx < size; i++, idx++)\
-			r->ring[idx] = obj_table[i]; \
-		for (idx = 0; i < n; i++, idx++) \
-			r->ring[idx] = obj_table[i]; \
-	} \
-} while(0)
-
-/* the actual copy of pointers on the ring to obj_table.
- * Placed here since identical code needed in both
- * single and multi consumer dequeue functions */
-#define DEQUEUE_PTRS() do { \
-	uint32_t idx = cons_head & mask; \
-	const uint32_t size = r->cons.size; \
-	if (likely(idx + n < size)) { \
-		for (i = 0; i < (n & (~(unsigned)0x3)); i+=4, idx+=4) {\
-			obj_table[i] = r->ring[idx]; \
-			obj_table[i+1] = r->ring[idx+1]; \
-			obj_table[i+2] = r->ring[idx+2]; \
-			obj_table[i+3] = r->ring[idx+3]; \
-		} \
-		switch (n & 0x3) { \
-			case 3: obj_table[i++] = r->ring[idx++]; \
-			case 2: obj_table[i++] = r->ring[idx++]; \
-			case 1: obj_table[i++] = r->ring[idx++]; \
-		} \
-	} else { \
-		for (i = 0; idx < size; i++, idx++) \
-			obj_table[i] = r->ring[idx]; \
-		for (idx = 0; i < n; i++, idx++) \
-			obj_table[i] = r->ring[idx]; \
-	} \
-} while (0)
-
-/**
- * @internal Enqueue several objects on the ring (multi-producers safe).
- *
- * This function uses a "compare and set" instruction to move the
- * producer index atomically.
- *
- * @param r
- *   A pointer to the ring structure.
- * @param obj_table
- *   A pointer to a table of void * pointers (objects).
- * @param n
- *   The number of objects to add in the ring from the obj_table.
- * @param behavior
- *   RTE_RING_QUEUE_FIXED:    Enqueue a fixed number of items from a ring
- *   RTE_RING_QUEUE_VARIABLE: Enqueue as many items a possible from ring
- * @return
- *   Depend on the behavior value
- *   if behavior = RTE_RING_QUEUE_FIXED
- *   - 0: Success; objects enqueue.
- *   - -EDQUOT: Quota exceeded. The objects have been enqueued, but the
- *     high water mark is exceeded.
- *   - -ENOBUFS: Not enough room in the ring to enqueue, no object is enqueued.
- *   if behavior = RTE_RING_QUEUE_VARIABLE
- *   - n: Actual number of objects enqueued.
- */
-static inline int __attribute__((always_inline))
-__rte_ring_mp_do_enqueue(struct rte_ring *r, void * const *obj_table,
-			 unsigned n, enum rte_ring_queue_behavior behavior)
-{
-	uint32_t prod_head, prod_next;
-	uint32_t cons_tail, free_entries;
-	const unsigned max = n;
-	int success;
-	unsigned i;
-	uint32_t mask = r->prod.mask;
-	int ret;
-
-	/* move prod.head atomically */
-	do {
-		/* Reset n to the initial burst count */
-		n = max;
-
-		prod_head = r->prod.head;
-		cons_tail = r->cons.tail;
-		/* The subtraction is done between two unsigned 32bits value
-		 * (the result is always modulo 32 bits even if we have
-		 * prod_head > cons_tail). So 'free_entries' is always between 0
-		 * and size(ring)-1. */
-		free_entries = (mask + cons_tail - prod_head);
-
-		/* check that we have enough room in ring */
-		if (unlikely(n > free_entries)) {
-			if (behavior == RTE_RING_QUEUE_FIXED) {
-				__RING_STAT_ADD(r, enq_fail, n);
-				return -ENOBUFS;
-			}
-			else {
-				/* No free entry available */
-				if (unlikely(free_entries == 0)) {
-					__RING_STAT_ADD(r, enq_fail, n);
-					return 0;
-				}
-
-				n = free_entries;
-			}
-		}
-
-		prod_next = prod_head + n;
-		success = rte_atomic32_cmpset(&r->prod.head, prod_head,
-					      prod_next);
-	} while (unlikely(success == 0));
-
-	/* write entries in ring */
-	ENQUEUE_PTRS();
-	rte_compiler_barrier();
-
-	/* if we exceed the watermark */
-	if (unlikely(((mask + 1) - free_entries + n) > r->prod.watermark)) {
-		ret = (behavior == RTE_RING_QUEUE_FIXED) ? -EDQUOT :
-				(int)(n | RTE_RING_QUOT_EXCEED);
-		__RING_STAT_ADD(r, enq_quota, n);
-	}
-	else {
-		ret = (behavior == RTE_RING_QUEUE_FIXED) ? 0 : n;
-		__RING_STAT_ADD(r, enq_success, n);
-	}
-
-	/*
-	 * If there are other enqueues in progress that preceded us,
-	 * we need to wait for them to complete
-	 */
-	while (unlikely(r->prod.tail != prod_head))
-		rte_pause();
-
-	r->prod.tail = prod_next;
-	return ret;
-}
-
-/**
- * @internal Enqueue several objects on a ring (NOT multi-producers safe).
- *
- * @param r
- *   A pointer to the ring structure.
- * @param obj_table
- *   A pointer to a table of void * pointers (objects).
- * @param n
- *   The number of objects to add in the ring from the obj_table.
- * @param behavior
- *   RTE_RING_QUEUE_FIXED:    Enqueue a fixed number of items from a ring
- *   RTE_RING_QUEUE_VARIABLE: Enqueue as many items a possible from ring
- * @return
- *   Depend on the behavior value
- *   if behavior = RTE_RING_QUEUE_FIXED
- *   - 0: Success; objects enqueue.
- *   - -EDQUOT: Quota exceeded. The objects have been enqueued, but the
- *     high water mark is exceeded.
- *   - -ENOBUFS: Not enough room in the ring to enqueue, no object is enqueued.
- *   if behavior = RTE_RING_QUEUE_VARIABLE
- *   - n: Actual number of objects enqueued.
- */
-static inline int __attribute__((always_inline))
-__rte_ring_sp_do_enqueue(struct rte_ring *r, void * const *obj_table,
-			 unsigned n, enum rte_ring_queue_behavior behavior)
-{
-	uint32_t prod_head, cons_tail;
-	uint32_t prod_next, free_entries;
-	unsigned i;
-	uint32_t mask = r->prod.mask;
-	int ret;
-
-	prod_head = r->prod.head;
-	cons_tail = r->cons.tail;
-	/* The subtraction is done between two unsigned 32bits value
-	 * (the result is always modulo 32 bits even if we have
-	 * prod_head > cons_tail). So 'free_entries' is always between 0
-	 * and size(ring)-1. */
-	free_entries = mask + cons_tail - prod_head;
-
-	/* check that we have enough room in ring */
-	if (unlikely(n > free_entries)) {
-		if (behavior == RTE_RING_QUEUE_FIXED) {
-			__RING_STAT_ADD(r, enq_fail, n);
-			return -ENOBUFS;
-		}
-		else {
-			/* No free entry available */
-			if (unlikely(free_entries == 0)) {
-				__RING_STAT_ADD(r, enq_fail, n);
-				return 0;
-			}
-
-			n = free_entries;
-		}
-	}
-
-	prod_next = prod_head + n;
-	r->prod.head = prod_next;
-
-	/* write entries in ring */
-	ENQUEUE_PTRS();
-	rte_compiler_barrier();
-
-	/* if we exceed the watermark */
-	if (unlikely(((mask + 1) - free_entries + n) > r->prod.watermark)) {
-		ret = (behavior == RTE_RING_QUEUE_FIXED) ? -EDQUOT :
-			(int)(n | RTE_RING_QUOT_EXCEED);
-		__RING_STAT_ADD(r, enq_quota, n);
-	}
-	else {
-		ret = (behavior == RTE_RING_QUEUE_FIXED) ? 0 : n;
-		__RING_STAT_ADD(r, enq_success, n);
-	}
-
-	r->prod.tail = prod_next;
-	return ret;
-}
-
-/**
- * @internal Dequeue several objects from a ring (multi-consumers safe). When
- * the request objects are more than the available objects, only dequeue the
- * actual number of objects
- *
- * This function uses a "compare and set" instruction to move the
- * consumer index atomically.
- *
- * @param r
- *   A pointer to the ring structure.
- * @param obj_table
- *   A pointer to a table of void * pointers (objects) that will be filled.
- * @param n
- *   The number of objects to dequeue from the ring to the obj_table.
- * @param behavior
- *   RTE_RING_QUEUE_FIXED:    Dequeue a fixed number of items from a ring
- *   RTE_RING_QUEUE_VARIABLE: Dequeue as many items a possible from ring
- * @return
- *   Depend on the behavior value
- *   if behavior = RTE_RING_QUEUE_FIXED
- *   - 0: Success; objects dequeued.
- *   - -ENOENT: Not enough entries in the ring to dequeue; no object is
- *     dequeued.
- *   if behavior = RTE_RING_QUEUE_VARIABLE
- *   - n: Actual number of objects dequeued.
- */
-
-static inline int __attribute__((always_inline))
-__rte_ring_mc_do_dequeue(struct rte_ring *r, void **obj_table,
-		 unsigned n, enum rte_ring_queue_behavior behavior)
-{
-	uint32_t cons_head, prod_tail;
-	uint32_t cons_next, entries;
-	const unsigned max = n;
-	int success;
-	unsigned i;
-	uint32_t mask = r->prod.mask;
-
-	/* move cons.head atomically */
-	do {
-		/* Restore n as it may change every loop */
-		n = max;
-
-		cons_head = r->cons.head;
-		prod_tail = r->prod.tail;
-		/* The subtraction is done between two unsigned 32bits value
-		 * (the result is always modulo 32 bits even if we have
-		 * cons_head > prod_tail). So 'entries' is always between 0
-		 * and size(ring)-1. */
-		entries = (prod_tail - cons_head);
-
-		/* Set the actual entries for dequeue */
-		if (n > entries) {
-			if (behavior == RTE_RING_QUEUE_FIXED) {
-				__RING_STAT_ADD(r, deq_fail, n);
-				return -ENOENT;
-			}
-			else {
-				if (unlikely(entries == 0)){
-					__RING_STAT_ADD(r, deq_fail, n);
-					return 0;
-				}
-
-				n = entries;
-			}
-		}
-
-		cons_next = cons_head + n;
-		success = rte_atomic32_cmpset(&r->cons.head, cons_head,
-					      cons_next);
-	} while (unlikely(success == 0));
-
-	/* copy in table */
-	DEQUEUE_PTRS();
-	rte_compiler_barrier();
-
-	/*
-	 * If there are other dequeues in progress that preceded us,
-	 * we need to wait for them to complete
-	 */
-	while (unlikely(r->cons.tail != cons_head))
-		rte_pause();
-
-	__RING_STAT_ADD(r, deq_success, n);
-	r->cons.tail = cons_next;
-
-	return behavior == RTE_RING_QUEUE_FIXED ? 0 : n;
-}
-
-/**
- * @internal Dequeue several objects from a ring (NOT multi-consumers safe).
- * When the request objects are more than the available objects, only dequeue
- * the actual number of objects
- *
- * @param r
- *   A pointer to the ring structure.
- * @param obj_table
- *   A pointer to a table of void * pointers (objects) that will be filled.
- * @param n
- *   The number of objects to dequeue from the ring to the obj_table.
- * @param behavior
- *   RTE_RING_QUEUE_FIXED:    Dequeue a fixed number of items from a ring
- *   RTE_RING_QUEUE_VARIABLE: Dequeue as many items a possible from ring
- * @return
- *   Depend on the behavior value
- *   if behavior = RTE_RING_QUEUE_FIXED
- *   - 0: Success; objects dequeued.
- *   - -ENOENT: Not enough entries in the ring to dequeue; no object is
- *     dequeued.
- *   if behavior = RTE_RING_QUEUE_VARIABLE
- *   - n: Actual number of objects dequeued.
- */
-static inline int __attribute__((always_inline))
-__rte_ring_sc_do_dequeue(struct rte_ring *r, void **obj_table,
-		 unsigned n, enum rte_ring_queue_behavior behavior)
-{
-	uint32_t cons_head, prod_tail;
-	uint32_t cons_next, entries;
-	unsigned i;
-	uint32_t mask = r->prod.mask;
-
-	cons_head = r->cons.head;
-	prod_tail = r->prod.tail;
-	/* The subtraction is done between two unsigned 32bits value
-	 * (the result is always modulo 32 bits even if we have
-	 * cons_head > prod_tail). So 'entries' is always between 0
-	 * and size(ring)-1. */
-	entries = prod_tail - cons_head;
-
-	if (n > entries) {
-		if (behavior == RTE_RING_QUEUE_FIXED) {
-			__RING_STAT_ADD(r, deq_fail, n);
-			return -ENOENT;
-		}
-		else {
-			if (unlikely(entries == 0)){
-				__RING_STAT_ADD(r, deq_fail, n);
-				return 0;
-			}
-
-			n = entries;
-		}
-	}
-
-	cons_next = cons_head + n;
-	r->cons.head = cons_next;
-
-	/* copy in table */
-	DEQUEUE_PTRS();
-	rte_compiler_barrier();
-
-	__RING_STAT_ADD(r, deq_success, n);
-	r->cons.tail = cons_next;
-	return behavior == RTE_RING_QUEUE_FIXED ? 0 : n;
-}
-
-/**
- * Enqueue several objects on the ring (multi-producers safe).
- *
- * This function uses a "compare and set" instruction to move the
- * producer index atomically.
- *
- * @param r
- *   A pointer to the ring structure.
- * @param obj_table
- *   A pointer to a table of void * pointers (objects).
- * @param n
- *   The number of objects to add in the ring from the obj_table.
- * @return
- *   - 0: Success; objects enqueue.
- *   - -EDQUOT: Quota exceeded. The objects have been enqueued, but the
- *     high water mark is exceeded.
- *   - -ENOBUFS: Not enough room in the ring to enqueue, no object is enqueued.
- */
-static inline int __attribute__((always_inline))
-rte_ring_mp_enqueue_bulk(struct rte_ring *r, void * const *obj_table,
-			 unsigned n)
-{
-	return __rte_ring_mp_do_enqueue(r, obj_table, n, RTE_RING_QUEUE_FIXED);
-}
-
-/**
- * Enqueue several objects on a ring (NOT multi-producers safe).
- *
- * @param r
- *   A pointer to the ring structure.
- * @param obj_table
- *   A pointer to a table of void * pointers (objects).
- * @param n
- *   The number of objects to add in the ring from the obj_table.
- * @return
- *   - 0: Success; objects enqueued.
- *   - -EDQUOT: Quota exceeded. The objects have been enqueued, but the
- *     high water mark is exceeded.
- *   - -ENOBUFS: Not enough room in the ring to enqueue; no object is enqueued.
- */
-static inline int __attribute__((always_inline))
-rte_ring_sp_enqueue_bulk(struct rte_ring *r, void * const *obj_table,
-			 unsigned n)
-{
-	return __rte_ring_sp_do_enqueue(r, obj_table, n, RTE_RING_QUEUE_FIXED);
-}
-
-/**
- * Enqueue several objects on a ring.
- *
- * This function calls the multi-producer or the single-producer
- * version depending on the default behavior that was specified at
- * ring creation time (see flags).
- *
- * @param r
- *   A pointer to the ring structure.
- * @param obj_table
- *   A pointer to a table of void * pointers (objects).
- * @param n
- *   The number of objects to add in the ring from the obj_table.
- * @return
- *   - 0: Success; objects enqueued.
- *   - -EDQUOT: Quota exceeded. The objects have been enqueued, but the
- *     high water mark is exceeded.
- *   - -ENOBUFS: Not enough room in the ring to enqueue; no object is enqueued.
- */
-static inline int __attribute__((always_inline))
-rte_ring_enqueue_bulk(struct rte_ring *r, void * const *obj_table,
-		      unsigned n)
-{
-	if (r->prod.sp_enqueue)
-		return rte_ring_sp_enqueue_bulk(r, obj_table, n);
-	else
-		return rte_ring_mp_enqueue_bulk(r, obj_table, n);
-}
-
-/**
- * Enqueue one object on a ring (multi-producers safe).
- *
- * This function uses a "compare and set" instruction to move the
- * producer index atomically.
- *
- * @param r
- *   A pointer to the ring structure.
- * @param obj
- *   A pointer to the object to be added.
- * @return
- *   - 0: Success; objects enqueued.
- *   - -EDQUOT: Quota exceeded. The objects have been enqueued, but the
- *     high water mark is exceeded.
- *   - -ENOBUFS: Not enough room in the ring to enqueue; no object is enqueued.
- */
-static inline int __attribute__((always_inline))
-rte_ring_mp_enqueue(struct rte_ring *r, void *obj)
-{
-	return rte_ring_mp_enqueue_bulk(r, &obj, 1);
-}
-
-/**
- * Enqueue one object on a ring (NOT multi-producers safe).
- *
- * @param r
- *   A pointer to the ring structure.
- * @param obj
- *   A pointer to the object to be added.
- * @return
- *   - 0: Success; objects enqueued.
- *   - -EDQUOT: Quota exceeded. The objects have been enqueued, but the
- *     high water mark is exceeded.
- *   - -ENOBUFS: Not enough room in the ring to enqueue; no object is enqueued.
- */
-static inline int __attribute__((always_inline))
-rte_ring_sp_enqueue(struct rte_ring *r, void *obj)
-{
-	return rte_ring_sp_enqueue_bulk(r, &obj, 1);
-}
-
-/**
- * Enqueue one object on a ring.
- *
- * This function calls the multi-producer or the single-producer
- * version, depending on the default behaviour that was specified at
- * ring creation time (see flags).
- *
- * @param r
- *   A pointer to the ring structure.
- * @param obj
- *   A pointer to the object to be added.
- * @return
- *   - 0: Success; objects enqueued.
- *   - -EDQUOT: Quota exceeded. The objects have been enqueued, but the
- *     high water mark is exceeded.
- *   - -ENOBUFS: Not enough room in the ring to enqueue; no object is enqueued.
- */
-static inline int __attribute__((always_inline))
-rte_ring_enqueue(struct rte_ring *r, void *obj)
-{
-	if (r->prod.sp_enqueue)
-		return rte_ring_sp_enqueue(r, obj);
-	else
-		return rte_ring_mp_enqueue(r, obj);
-}
-
-/**
- * Dequeue several objects from a ring (multi-consumers safe).
- *
- * This function uses a "compare and set" instruction to move the
- * consumer index atomically.
- *
- * @param r
- *   A pointer to the ring structure.
- * @param obj_table
- *   A pointer to a table of void * pointers (objects) that will be filled.
- * @param n
- *   The number of objects to dequeue from the ring to the obj_table.
- * @return
- *   - 0: Success; objects dequeued.
- *   - -ENOENT: Not enough entries in the ring to dequeue; no object is
- *     dequeued.
- */
-static inline int __attribute__((always_inline))
-rte_ring_mc_dequeue_bulk(struct rte_ring *r, void **obj_table, unsigned n)
-{
-	return __rte_ring_mc_do_dequeue(r, obj_table, n, RTE_RING_QUEUE_FIXED);
-}
-
-/**
- * Dequeue several objects from a ring (NOT multi-consumers safe).
- *
- * @param r
- *   A pointer to the ring structure.
- * @param obj_table
- *   A pointer to a table of void * pointers (objects) that will be filled.
- * @param n
- *   The number of objects to dequeue from the ring to the obj_table,
- *   must be strictly positive.
- * @return
- *   - 0: Success; objects dequeued.
- *   - -ENOENT: Not enough entries in the ring to dequeue; no object is
- *     dequeued.
- */
-static inline int __attribute__((always_inline))
-rte_ring_sc_dequeue_bulk(struct rte_ring *r, void **obj_table, unsigned n)
-{
-	return __rte_ring_sc_do_dequeue(r, obj_table, n, RTE_RING_QUEUE_FIXED);
-}
-
-/**
- * Dequeue several objects from a ring.
- *
- * This function calls the multi-consumers or the single-consumer
- * version, depending on the default behaviour that was specified at
- * ring creation time (see flags).
- *
- * @param r
- *   A pointer to the ring structure.
- * @param obj_table
- *   A pointer to a table of void * pointers (objects) that will be filled.
- * @param n
- *   The number of objects to dequeue from the ring to the obj_table.
- * @return
- *   - 0: Success; objects dequeued.
- *   - -ENOENT: Not enough entries in the ring to dequeue, no object is
- *     dequeued.
- */
-static inline int __attribute__((always_inline))
-rte_ring_dequeue_bulk(struct rte_ring *r, void **obj_table, unsigned n)
-{
-	if (r->cons.sc_dequeue)
-		return rte_ring_sc_dequeue_bulk(r, obj_table, n);
-	else
-		return rte_ring_mc_dequeue_bulk(r, obj_table, n);
-}
-
-/**
- * Dequeue one object from a ring (multi-consumers safe).
- *
- * This function uses a "compare and set" instruction to move the
- * consumer index atomically.
- *
- * @param r
- *   A pointer to the ring structure.
- * @param obj_p
- *   A pointer to a void * pointer (object) that will be filled.
- * @return
- *   - 0: Success; objects dequeued.
- *   - -ENOENT: Not enough entries in the ring to dequeue; no object is
- *     dequeued.
- */
-static inline int __attribute__((always_inline))
-rte_ring_mc_dequeue(struct rte_ring *r, void **obj_p)
-{
-	return rte_ring_mc_dequeue_bulk(r, obj_p, 1);
-}
-
-/**
- * Dequeue one object from a ring (NOT multi-consumers safe).
- *
- * @param r
- *   A pointer to the ring structure.
- * @param obj_p
- *   A pointer to a void * pointer (object) that will be filled.
- * @return
- *   - 0: Success; objects dequeued.
- *   - -ENOENT: Not enough entries in the ring to dequeue, no object is
- *     dequeued.
- */
-static inline int __attribute__((always_inline))
-rte_ring_sc_dequeue(struct rte_ring *r, void **obj_p)
-{
-	return rte_ring_sc_dequeue_bulk(r, obj_p, 1);
-}
-
-/**
- * Dequeue one object from a ring.
- *
- * This function calls the multi-consumers or the single-consumer
- * version depending on the default behaviour that was specified at
- * ring creation time (see flags).
- *
- * @param r
- *   A pointer to the ring structure.
- * @param obj_p
- *   A pointer to a void * pointer (object) that will be filled.
- * @return
- *   - 0: Success, objects dequeued.
- *   - -ENOENT: Not enough entries in the ring to dequeue, no object is
- *     dequeued.
- */
-static inline int __attribute__((always_inline))
-rte_ring_dequeue(struct rte_ring *r, void **obj_p)
-{
-	if (r->cons.sc_dequeue)
-		return rte_ring_sc_dequeue(r, obj_p);
-	else
-		return rte_ring_mc_dequeue(r, obj_p);
-}
-
-/**
- * Test if a ring is full.
- *
- * @param r
- *   A pointer to the ring structure.
- * @return
- *   - 1: The ring is full.
- *   - 0: The ring is not full.
- */
-static inline int
-rte_ring_full(const struct rte_ring *r)
-{
-	uint32_t prod_tail = r->prod.tail;
-	uint32_t cons_tail = r->cons.tail;
-	return (((cons_tail - prod_tail - 1) & r->prod.mask) == 0);
-}
-
-/**
- * Test if a ring is empty.
- *
- * @param r
- *   A pointer to the ring structure.
- * @return
- *   - 1: The ring is empty.
- *   - 0: The ring is not empty.
- */
-static inline int
-rte_ring_empty(const struct rte_ring *r)
-{
-	uint32_t prod_tail = r->prod.tail;
-	uint32_t cons_tail = r->cons.tail;
-	return !!(cons_tail == prod_tail);
-}
-
-/**
- * Return the number of entries in a ring.
- *
- * @param r
- *   A pointer to the ring structure.
- * @return
- *   The number of entries in the ring.
- */
-static inline unsigned
-rte_ring_count(const struct rte_ring *r)
-{
-	uint32_t prod_tail = r->prod.tail;
-	uint32_t cons_tail = r->cons.tail;
-	return ((prod_tail - cons_tail) & r->prod.mask);
-}
-
-/**
- * Return the number of free entries in a ring.
- *
- * @param r
- *   A pointer to the ring structure.
- * @return
- *   The number of free entries in the ring.
- */
-static inline unsigned
-rte_ring_free_count(const struct rte_ring *r)
-{
-	uint32_t prod_tail = r->prod.tail;
-	uint32_t cons_tail = r->cons.tail;
-	return ((cons_tail - prod_tail - 1) & r->prod.mask);
-}
-
-/**
- * Dump the status of all rings on the console
- *
- * @param f
- *   A pointer to a file for output
- */
-void rte_ring_list_dump(FILE *f);
-
-/**
- * Search a ring from its name
- *
- * @param name
- *   The name of the ring.
- * @return
- *   The pointer to the ring matching the name, or NULL if not found,
- *   with rte_errno set appropriately. Possible rte_errno values include:
- *    - ENOENT - required entry not available to return.
- */
-struct rte_ring *rte_ring_lookup(const char *name);
-
-/**
- * Enqueue several objects on the ring (multi-producers safe).
- *
- * This function uses a "compare and set" instruction to move the
- * producer index atomically.
- *
- * @param r
- *   A pointer to the ring structure.
- * @param obj_table
- *   A pointer to a table of void * pointers (objects).
- * @param n
- *   The number of objects to add in the ring from the obj_table.
- * @return
- *   - n: Actual number of objects enqueued.
- */
-static inline unsigned __attribute__((always_inline))
-rte_ring_mp_enqueue_burst(struct rte_ring *r, void * const *obj_table,
-			 unsigned n)
-{
-	return __rte_ring_mp_do_enqueue(r, obj_table, n, RTE_RING_QUEUE_VARIABLE);
-}
-
-/**
- * Enqueue several objects on a ring (NOT multi-producers safe).
- *
- * @param r
- *   A pointer to the ring structure.
- * @param obj_table
- *   A pointer to a table of void * pointers (objects).
- * @param n
- *   The number of objects to add in the ring from the obj_table.
- * @return
- *   - n: Actual number of objects enqueued.
- */
-static inline unsigned __attribute__((always_inline))
-rte_ring_sp_enqueue_burst(struct rte_ring *r, void * const *obj_table,
-			 unsigned n)
-{
-	return __rte_ring_sp_do_enqueue(r, obj_table, n, RTE_RING_QUEUE_VARIABLE);
-}
-
-/**
- * Enqueue several objects on a ring.
- *
- * This function calls the multi-producer or the single-producer
- * version depending on the default behavior that was specified at
- * ring creation time (see flags).
- *
- * @param r
- *   A pointer to the ring structure.
- * @param obj_table
- *   A pointer to a table of void * pointers (objects).
- * @param n
- *   The number of objects to add in the ring from the obj_table.
- * @return
- *   - n: Actual number of objects enqueued.
- */
-static inline unsigned __attribute__((always_inline))
-rte_ring_enqueue_burst(struct rte_ring *r, void * const *obj_table,
-		      unsigned n)
-{
-	if (r->prod.sp_enqueue)
-		return rte_ring_sp_enqueue_burst(r, obj_table, n);
-	else
-		return rte_ring_mp_enqueue_burst(r, obj_table, n);
-}
-
-/**
- * Dequeue several objects from a ring (multi-consumers safe). When the request
- * objects are more than the available objects, only dequeue the actual number
- * of objects
- *
- * This function uses a "compare and set" instruction to move the
- * consumer index atomically.
- *
- * @param r
- *   A pointer to the ring structure.
- * @param obj_table
- *   A pointer to a table of void * pointers (objects) that will be filled.
- * @param n
- *   The number of objects to dequeue from the ring to the obj_table.
- * @return
- *   - n: Actual number of objects dequeued, 0 if ring is empty
- */
-static inline unsigned __attribute__((always_inline))
-rte_ring_mc_dequeue_burst(struct rte_ring *r, void **obj_table, unsigned n)
-{
-	return __rte_ring_mc_do_dequeue(r, obj_table, n, RTE_RING_QUEUE_VARIABLE);
-}
-
-/**
- * Dequeue several objects from a ring (NOT multi-consumers safe).When the
- * request objects are more than the available objects, only dequeue the
- * actual number of objects
- *
- * @param r
- *   A pointer to the ring structure.
- * @param obj_table
- *   A pointer to a table of void * pointers (objects) that will be filled.
- * @param n
- *   The number of objects to dequeue from the ring to the obj_table.
- * @return
- *   - n: Actual number of objects dequeued, 0 if ring is empty
- */
-static inline unsigned __attribute__((always_inline))
-rte_ring_sc_dequeue_burst(struct rte_ring *r, void **obj_table, unsigned n)
-{
-	return __rte_ring_sc_do_dequeue(r, obj_table, n, RTE_RING_QUEUE_VARIABLE);
-}
-
-/**
- * Dequeue multiple objects from a ring up to a maximum number.
- *
- * This function calls the multi-consumers or the single-consumer
- * version, depending on the default behaviour that was specified at
- * ring creation time (see flags).
- *
- * @param r
- *   A pointer to the ring structure.
- * @param obj_table
- *   A pointer to a table of void * pointers (objects) that will be filled.
- * @param n
- *   The number of objects to dequeue from the ring to the obj_table.
- * @return
- *   - Number of objects dequeued
- */
-static inline unsigned __attribute__((always_inline))
-rte_ring_dequeue_burst(struct rte_ring *r, void **obj_table, unsigned n)
-{
-	if (r->cons.sc_dequeue)
-		return rte_ring_sc_dequeue_burst(r, obj_table, n);
-	else
-		return rte_ring_mc_dequeue_burst(r, obj_table, n);
-}
-
-#ifdef __cplusplus
-}
-#endif
-
-#endif /* _RTE_RING_H_ */
-- 
1.9.3

^ permalink raw reply	[flat|nested] 21+ messages in thread

* [dpdk-dev] [PATCH RFC 08/13] Update path of core libraries
  2015-01-12 16:33 [dpdk-dev] [PATCH RFC 00/13] Update build system Sergio Gonzalez Monroy
                   ` (6 preceding siblings ...)
  2015-01-12 16:34 ` [dpdk-dev] [PATCH RFC 07/13] core: move librte_ring " Sergio Gonzalez Monroy
@ 2015-01-12 16:34 ` Sergio Gonzalez Monroy
  2015-01-12 16:34 ` [dpdk-dev] [PATCH RFC 09/13] mk: new corelib makefile Sergio Gonzalez Monroy
                   ` (6 subsequent siblings)
  14 siblings, 0 replies; 21+ messages in thread
From: Sergio Gonzalez Monroy @ 2015-01-12 16:34 UTC (permalink / raw)
  To: dev

Update paths to the libraries now located in the core subdirectory.

Signed-off-by: Sergio Gonzalez Monroy <sergio.gonzalez.monroy@intel.com>
---
 app/test/test_eal_fs.c                         |  2 +-
 lib/Makefile                                   |  6 +-----
 lib/core/librte_eal/bsdapp/eal/Makefile        | 14 +++++++-------
 lib/core/librte_eal/common/Makefile            |  2 +-
 lib/core/librte_eal/linuxapp/eal/Makefile      | 14 +++++++-------
 lib/core/librte_eal/linuxapp/kni/Makefile      |  2 +-
 lib/core/librte_eal/linuxapp/xen_dom0/Makefile |  2 +-
 lib/core/librte_malloc/Makefile                |  2 +-
 lib/core/librte_mbuf/Makefile                  |  2 +-
 lib/core/librte_mempool/Makefile               |  4 ++--
 lib/core/librte_ring/Makefile                  |  2 +-
 lib/librte_acl/Makefile                        |  4 ++--
 lib/librte_cfgfile/Makefile                    |  2 +-
 lib/librte_cmdline/Makefile                    |  4 ++--
 lib/librte_distributor/Makefile                |  3 +--
 lib/librte_ether/Makefile                      |  2 +-
 lib/librte_hash/Makefile                       |  2 +-
 lib/librte_ip_frag/Makefile                    |  4 ++--
 lib/librte_ivshmem/Makefile                    |  2 +-
 lib/librte_kni/Makefile                        |  4 ++--
 lib/librte_kvargs/Makefile                     |  4 ++--
 lib/librte_lpm/Makefile                        |  4 ++--
 lib/librte_meter/Makefile                      |  2 +-
 lib/librte_pipeline/Makefile                   |  1 +
 lib/librte_pmd_af_packet/Makefile              |  3 +--
 lib/librte_pmd_bond/Makefile                   |  5 ++---
 lib/librte_pmd_e1000/Makefile                  |  6 +++---
 lib/librte_pmd_enic/Makefile                   |  6 +++---
 lib/librte_pmd_i40e/Makefile                   |  6 +++---
 lib/librte_pmd_ixgbe/Makefile                  |  6 +++---
 lib/librte_pmd_pcap/Makefile                   |  3 +--
 lib/librte_pmd_ring/Makefile                   |  4 ++--
 lib/librte_pmd_virtio/Makefile                 |  6 +++---
 lib/librte_pmd_vmxnet3/Makefile                |  6 +++---
 lib/librte_pmd_xenvirt/Makefile                |  6 +++---
 lib/librte_port/Makefile                       |  6 ++----
 lib/librte_power/Makefile                      |  2 +-
 lib/librte_sched/Makefile                      |  5 +++--
 lib/librte_table/Makefile                      |  5 +----
 lib/librte_timer/Makefile                      |  4 ++--
 lib/librte_vhost/Makefile                      |  6 ++----
 41 files changed, 81 insertions(+), 94 deletions(-)

diff --git a/app/test/test_eal_fs.c b/app/test/test_eal_fs.c
index 1cbcb9d..f6e81fc 100644
--- a/app/test/test_eal_fs.c
+++ b/app/test/test_eal_fs.c
@@ -38,7 +38,7 @@
 #include <errno.h>
 
 /* eal_filesystem.h is not a public header file, so use relative path */
-#include "../../lib/librte_eal/common/eal_filesystem.h"
+#include "../../lib/core/librte_eal/common/eal_filesystem.h"
 
 static int
 test_parse_sysfs_value(void)
diff --git a/lib/Makefile b/lib/Makefile
index bafc9ae..6de4587 100644
--- a/lib/Makefile
+++ b/lib/Makefile
@@ -31,11 +31,7 @@
 
 include $(RTE_SDK)/mk/rte.vars.mk
 
-DIRS-$(CONFIG_RTE_LIBRTE_EAL) += librte_eal
-DIRS-$(CONFIG_RTE_LIBRTE_MALLOC) += librte_malloc
-DIRS-$(CONFIG_RTE_LIBRTE_RING) += librte_ring
-DIRS-$(CONFIG_RTE_LIBRTE_MEMPOOL) += librte_mempool
-DIRS-$(CONFIG_RTE_LIBRTE_MBUF) += librte_mbuf
+DIRS-y += core
 DIRS-$(CONFIG_RTE_LIBRTE_TIMER) += librte_timer
 DIRS-$(CONFIG_RTE_LIBRTE_CFGFILE) += librte_cfgfile
 DIRS-$(CONFIG_RTE_LIBRTE_CMDLINE) += librte_cmdline
diff --git a/lib/core/librte_eal/bsdapp/eal/Makefile b/lib/core/librte_eal/bsdapp/eal/Makefile
index d434882..af0338f 100644
--- a/lib/core/librte_eal/bsdapp/eal/Makefile
+++ b/lib/core/librte_eal/bsdapp/eal/Makefile
@@ -33,14 +33,14 @@ include $(RTE_SDK)/mk/rte.vars.mk
 
 LIB = librte_eal.a
 
-VPATH += $(RTE_SDK)/lib/librte_eal/common
+VPATH += $(RTE_SDK)/lib/core/librte_eal/common
 
 CFLAGS += -I$(SRCDIR)/include
-CFLAGS += -I$(RTE_SDK)/lib/librte_eal/common
-CFLAGS += -I$(RTE_SDK)/lib/librte_eal/common/include
-CFLAGS += -I$(RTE_SDK)/lib/librte_ring
-CFLAGS += -I$(RTE_SDK)/lib/librte_mempool
-CFLAGS += -I$(RTE_SDK)/lib/librte_malloc
+CFLAGS += -I$(RTE_SDK)/lib/core/librte_eal/common
+CFLAGS += -I$(RTE_SDK)/lib/core/librte_eal/common/include
+CFLAGS += -I$(RTE_SDK)/lib/core/librte_ring
+CFLAGS += -I$(RTE_SDK)/lib/core/librte_mempool
+CFLAGS += -I$(RTE_SDK)/lib/core/librte_malloc
 CFLAGS += -I$(RTE_SDK)/lib/librte_ether
 CFLAGS += -I$(RTE_SDK)/lib/librte_pmd_ring
 CFLAGS += -I$(RTE_SDK)/lib/librte_pmd_pcap
@@ -91,7 +91,7 @@ INC := rte_interrupts.h
 SYMLINK-$(CONFIG_RTE_LIBRTE_EAL_BSDAPP)-include/exec-env := \
 	$(addprefix include/exec-env/,$(INC))
 
-DEPDIRS-$(CONFIG_RTE_LIBRTE_EAL_BSDAPP) += lib/librte_eal/common
+DEPDIRS-$(CONFIG_RTE_LIBRTE_EAL_BSDAPP) += lib/core/librte_eal/common
 
 include $(RTE_SDK)/mk/rte.lib.mk
 
diff --git a/lib/core/librte_eal/common/Makefile b/lib/core/librte_eal/common/Makefile
index 52c1a5f..1533f81 100644
--- a/lib/core/librte_eal/common/Makefile
+++ b/lib/core/librte_eal/common/Makefile
@@ -50,7 +50,7 @@ GENERIC_INC := rte_atomic.h rte_byteorder.h rte_cycles.h rte_prefetch.h
 GENERIC_INC += rte_spinlock.h rte_memcpy.h rte_cpuflags.h
 # defined in mk/arch/$(RTE_ARCH)/rte.vars.mk
 ARCH_DIR ?= $(RTE_ARCH)
-ARCH_INC := $(notdir $(wildcard $(RTE_SDK)/lib/librte_eal/common/include/arch/$(ARCH_DIR)/*.h))
+ARCH_INC := $(notdir $(wildcard $(RTE_SDK)/lib/core/librte_eal/common/include/arch/$(ARCH_DIR))/*.h)
 
 SYMLINK-$(CONFIG_RTE_LIBRTE_EAL)-include := $(addprefix include/,$(INC))
 SYMLINK-$(CONFIG_RTE_LIBRTE_EAL)-include += \
diff --git a/lib/core/librte_eal/linuxapp/eal/Makefile b/lib/core/librte_eal/linuxapp/eal/Makefile
index 72ecf3a..0af2cd6 100644
--- a/lib/core/librte_eal/linuxapp/eal/Makefile
+++ b/lib/core/librte_eal/linuxapp/eal/Makefile
@@ -33,14 +33,14 @@ include $(RTE_SDK)/mk/rte.vars.mk
 
 LIB = librte_eal.a
 
-VPATH += $(RTE_SDK)/lib/librte_eal/common
+VPATH += $(RTE_SDK)/lib/core/librte_eal/common
 
 CFLAGS += -I$(SRCDIR)/include
-CFLAGS += -I$(RTE_SDK)/lib/librte_eal/common
-CFLAGS += -I$(RTE_SDK)/lib/librte_eal/common/include
-CFLAGS += -I$(RTE_SDK)/lib/librte_ring
-CFLAGS += -I$(RTE_SDK)/lib/librte_mempool
-CFLAGS += -I$(RTE_SDK)/lib/librte_malloc
+CFLAGS += -I$(RTE_SDK)/lib/core/librte_eal/common
+CFLAGS += -I$(RTE_SDK)/lib/core/librte_eal/common/include
+CFLAGS += -I$(RTE_SDK)/lib/core/librte_ring
+CFLAGS += -I$(RTE_SDK)/lib/core/librte_mempool
+CFLAGS += -I$(RTE_SDK)/lib/core/librte_malloc
 CFLAGS += -I$(RTE_SDK)/lib/librte_ether
 CFLAGS += -I$(RTE_SDK)/lib/librte_ivshmem
 CFLAGS += -I$(RTE_SDK)/lib/librte_pmd_ring
@@ -106,7 +106,7 @@ INC := rte_interrupts.h rte_kni_common.h rte_dom0_common.h
 SYMLINK-$(CONFIG_RTE_LIBRTE_EAL_LINUXAPP)-include/exec-env := \
 	$(addprefix include/exec-env/,$(INC))
 
-DEPDIRS-$(CONFIG_RTE_LIBRTE_EAL_LINUXAPP) += lib/librte_eal/common
+DEPDIRS-$(CONFIG_RTE_LIBRTE_EAL_LINUXAPP) += lib/core/librte_eal/common
 
 include $(RTE_SDK)/mk/rte.lib.mk
 
diff --git a/lib/core/librte_eal/linuxapp/kni/Makefile b/lib/core/librte_eal/linuxapp/kni/Makefile
index fb673d9..01142ca 100644
--- a/lib/core/librte_eal/linuxapp/kni/Makefile
+++ b/lib/core/librte_eal/linuxapp/kni/Makefile
@@ -52,7 +52,7 @@ MODULE_CFLAGS += -D"UBUNTU_KERNEL_CODE=UBUNTU_KERNEL_VERSION($(UBUNTU_KERNEL_COD
 endif
 
 # this lib needs main eal
-DEPDIRS-y += lib/librte_eal/linuxapp/eal
+DEPDIRS-y += lib/core/librte_eal/linuxapp/eal
 
 #
 # all source are stored in SRCS-y
diff --git a/lib/core/librte_eal/linuxapp/xen_dom0/Makefile b/lib/core/librte_eal/linuxapp/xen_dom0/Makefile
index 9d22fb9..de08f4f 100644
--- a/lib/core/librte_eal/linuxapp/xen_dom0/Makefile
+++ b/lib/core/librte_eal/linuxapp/xen_dom0/Makefile
@@ -45,7 +45,7 @@ MODULE_CFLAGS += -include $(RTE_OUTPUT)/include/rte_config.h
 MODULE_CFLAGS += -Wall -Werror
 
 # this lib needs main eal
-DEPDIRS-y += lib/librte_eal/linuxapp/eal
+DEPDIRS-y += lib/core/librte_eal/linuxapp/eal
 
 #
 # all source are stored in SRCS-y
diff --git a/lib/core/librte_malloc/Makefile b/lib/core/librte_malloc/Makefile
index ba87e34..8ed6e7d 100644
--- a/lib/core/librte_malloc/Makefile
+++ b/lib/core/librte_malloc/Makefile
@@ -43,6 +43,6 @@ SRCS-$(CONFIG_RTE_LIBRTE_MALLOC) := rte_malloc.c malloc_elem.c malloc_heap.c
 SYMLINK-$(CONFIG_RTE_LIBRTE_MALLOC)-include := rte_malloc.h
 
 # this lib needs eal
-DEPDIRS-$(CONFIG_RTE_LIBRTE_MALLOC) += lib/librte_eal
+DEPDIRS-$(CONFIG_RTE_LIBRTE_MALLOC) += lib/core/librte_eal
 
 include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/lib/core/librte_mbuf/Makefile b/lib/core/librte_mbuf/Makefile
index 9b45ba4..b916d77 100644
--- a/lib/core/librte_mbuf/Makefile
+++ b/lib/core/librte_mbuf/Makefile
@@ -43,6 +43,6 @@ SRCS-$(CONFIG_RTE_LIBRTE_MBUF) := rte_mbuf.c
 SYMLINK-$(CONFIG_RTE_LIBRTE_MBUF)-include := rte_mbuf.h
 
 # this lib needs eal
-DEPDIRS-$(CONFIG_RTE_LIBRTE_MBUF) += lib/librte_eal lib/librte_mempool
+DEPDIRS-$(CONFIG_RTE_LIBRTE_MBUF) += lib/core/librte_eal lib/core/librte_mempool
 
 include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/lib/core/librte_mempool/Makefile b/lib/core/librte_mempool/Makefile
index 9939e10..94a7fc1 100644
--- a/lib/core/librte_mempool/Makefile
+++ b/lib/core/librte_mempool/Makefile
@@ -45,7 +45,7 @@ endif
 SYMLINK-$(CONFIG_RTE_LIBRTE_MEMPOOL)-include := rte_mempool.h
 
 # this lib needs eal, rte_ring and rte_malloc
-DEPDIRS-$(CONFIG_RTE_LIBRTE_MEMPOOL) += lib/librte_eal lib/librte_ring
-DEPDIRS-$(CONFIG_RTE_LIBRTE_MEMPOOL) += lib/librte_malloc
+DEPDIRS-$(CONFIG_RTE_LIBRTE_MEMPOOL) += lib/core/librte_eal lib/core/librte_ring
+DEPDIRS-$(CONFIG_RTE_LIBRTE_MEMPOOL) += lib/core/librte_malloc
 
 include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/lib/core/librte_ring/Makefile b/lib/core/librte_ring/Makefile
index 2380a43..0b196e8 100644
--- a/lib/core/librte_ring/Makefile
+++ b/lib/core/librte_ring/Makefile
@@ -43,6 +43,6 @@ SRCS-$(CONFIG_RTE_LIBRTE_RING) := rte_ring.c
 SYMLINK-$(CONFIG_RTE_LIBRTE_RING)-include := rte_ring.h
 
 # this lib needs eal and rte_malloc
-DEPDIRS-$(CONFIG_RTE_LIBRTE_RING) += lib/librte_eal lib/librte_malloc
+DEPDIRS-$(CONFIG_RTE_LIBRTE_RING) += lib/core/librte_eal lib/core/librte_malloc
 
 include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/lib/librte_acl/Makefile b/lib/librte_acl/Makefile
index 65e566d..d3636ed 100644
--- a/lib/librte_acl/Makefile
+++ b/lib/librte_acl/Makefile
@@ -56,8 +56,8 @@ ifeq ($(CONFIG_RTE_LIBRTE_ACL_STANDALONE),y)
 # standalone build
 SYMLINK-$(CONFIG_RTE_LIBRTE_ACL)-include += rte_acl_osdep_alone.h
 else
-# this lib needs eal
-DEPDIRS-$(CONFIG_RTE_LIBRTE_ACL) += lib/librte_eal lib/librte_malloc
+# this lib depends upon:
+DEPDIRS-$(CONFIG_RTE_LIBRTE_ACL) += lib/core
 endif
 
 include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/lib/librte_cfgfile/Makefile b/lib/librte_cfgfile/Makefile
index 55e8701..c959f5b 100644
--- a/lib/librte_cfgfile/Makefile
+++ b/lib/librte_cfgfile/Makefile
@@ -48,6 +48,6 @@ SRCS-$(CONFIG_RTE_LIBRTE_CFGFILE) += rte_cfgfile.c
 SYMLINK-$(CONFIG_RTE_LIBRTE_CFGFILE)-include += rte_cfgfile.h
 
 # this lib needs eal
-DEPDIRS-$(CONFIG_RTE_LIBRTE_CFGFILE) += lib/librte_eal
+DEPDIRS-$(CONFIG_RTE_LIBRTE_CFGFILE) += lib/core
 
 include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/lib/librte_cmdline/Makefile b/lib/librte_cmdline/Makefile
index 7eae449..ba5e49f 100644
--- a/lib/librte_cmdline/Makefile
+++ b/lib/librte_cmdline/Makefile
@@ -57,7 +57,7 @@ INCS += cmdline_parse_etheraddr.h cmdline_parse_string.h cmdline_rdline.h
 INCS += cmdline_vt100.h cmdline_socket.h cmdline_cirbuf.h cmdline_parse_portlist.h
 SYMLINK-$(CONFIG_RTE_LIBRTE_CMDLINE)-include := $(INCS)
 
-# this lib needs eal
-DEPDIRS-$(CONFIG_RTE_LIBRTE_CMDLINE) += lib/librte_eal
+# this lib depends upon:
+DEPDIRS-$(CONFIG_RTE_LIBRTE_CMDLINE) += lib/core
 
 include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/lib/librte_distributor/Makefile b/lib/librte_distributor/Makefile
index 36699f8..e1ab6ee 100644
--- a/lib/librte_distributor/Makefile
+++ b/lib/librte_distributor/Makefile
@@ -44,7 +44,6 @@ SRCS-$(CONFIG_RTE_LIBRTE_DISTRIBUTOR) := rte_distributor.c
 SYMLINK-$(CONFIG_RTE_LIBRTE_DISTRIBUTOR)-include := rte_distributor.h
 
 # this lib needs eal
-DEPDIRS-$(CONFIG_RTE_LIBRTE_DISTRIBUTOR) += lib/librte_eal
-DEPDIRS-$(CONFIG_RTE_LIBRTE_DISTRIBUTOR) += lib/librte_mbuf
+DEPDIRS-$(CONFIG_RTE_LIBRTE_DISTRIBUTOR) += lib/core
 
 include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/lib/librte_ether/Makefile b/lib/librte_ether/Makefile
index a461c31..647c554 100644
--- a/lib/librte_ether/Makefile
+++ b/lib/librte_ether/Makefile
@@ -49,6 +49,6 @@ SYMLINK-y-include += rte_ethdev.h
 SYMLINK-y-include += rte_eth_ctrl.h
 
 # this lib depends upon:
-DEPDIRS-y += lib/librte_eal lib/librte_mempool lib/librte_ring lib/librte_mbuf
+DEPDIRS-y += lib/core
 
 include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/lib/librte_hash/Makefile b/lib/librte_hash/Makefile
index 95e4c09..220ba5d 100644
--- a/lib/librte_hash/Makefile
+++ b/lib/librte_hash/Makefile
@@ -48,6 +48,6 @@ SYMLINK-$(CONFIG_RTE_LIBRTE_HASH)-include += rte_jhash.h
 SYMLINK-$(CONFIG_RTE_LIBRTE_HASH)-include += rte_fbk_hash.h
 
 # this lib needs eal
-DEPDIRS-$(CONFIG_RTE_LIBRTE_HASH) += lib/librte_eal lib/librte_malloc
+DEPDIRS-$(CONFIG_RTE_LIBRTE_HASH) += lib/core
 
 include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/lib/librte_ip_frag/Makefile b/lib/librte_ip_frag/Makefile
index 8c00d39..9fbff70 100644
--- a/lib/librte_ip_frag/Makefile
+++ b/lib/librte_ip_frag/Makefile
@@ -53,7 +53,7 @@ SRCS-$(CONFIG_RTE_LIBRTE_IP_FRAG) += ip_frag_internal.c
 SYMLINK-$(CONFIG_RTE_LIBRTE_IP_FRAG)-include += rte_ip_frag.h
 
 
-# this library depends on rte_ether
-DEPDIRS-$(CONFIG_RTE_LIBRTE_IP_FRAG) += lib/librte_mempool lib/librte_ether
+# this library depends upon:
+DEPDIRS-$(CONFIG_RTE_LIBRTE_IP_FRAG) += lib/core lib/librte_ether
 
 include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/lib/librte_ivshmem/Makefile b/lib/librte_ivshmem/Makefile
index 536814c..d873195 100644
--- a/lib/librte_ivshmem/Makefile
+++ b/lib/librte_ivshmem/Makefile
@@ -43,6 +43,6 @@ SRCS-$(CONFIG_RTE_LIBRTE_IVSHMEM) := rte_ivshmem.c
 SYMLINK-$(CONFIG_RTE_LIBRTE_IVSHMEM)-include := rte_ivshmem.h
 
 # this lib needs eal
-DEPDIRS-$(CONFIG_RTE_LIBRTE_IVSHMEM) += lib/librte_mempool
+DEPDIRS-$(CONFIG_RTE_LIBRTE_IVSHMEM) += lib/core
 
 include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/lib/librte_kni/Makefile b/lib/librte_kni/Makefile
index 5267304..d2472c2 100644
--- a/lib/librte_kni/Makefile
+++ b/lib/librte_kni/Makefile
@@ -42,8 +42,8 @@ SRCS-$(CONFIG_RTE_LIBRTE_KNI) := rte_kni.c
 # install includes
 SYMLINK-$(CONFIG_RTE_LIBRTE_KNI)-include := rte_kni.h
 
-# this lib needs eal
-DEPDIRS-$(CONFIG_RTE_LIBRTE_KNI) += lib/librte_eal lib/librte_mbuf
+# this lib depends upon:
+DEPDIRS-$(CONFIG_RTE_LIBRTE_KNI) += lib/core
 DEPDIRS-$(CONFIG_RTE_LIBRTE_KNI) += lib/librte_ether
 
 include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/lib/librte_kvargs/Makefile b/lib/librte_kvargs/Makefile
index b09359a..00564e2 100644
--- a/lib/librte_kvargs/Makefile
+++ b/lib/librte_kvargs/Makefile
@@ -45,7 +45,7 @@ SRCS-$(CONFIG_RTE_LIBRTE_KVARGS) := rte_kvargs.c
 INCS := rte_kvargs.h
 SYMLINK-$(CONFIG_RTE_LIBRTE_KVARGS)-include := $(INCS)
 
-# this lib needs eal
-DEPDIRS-$(CONFIG_RTE_LIBRTE_KVARGS) += lib/librte_eal
+# this lib depends upon:
+DEPDIRS-$(CONFIG_RTE_LIBRTE_KVARGS) += lib/core
 
 include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/lib/librte_lpm/Makefile b/lib/librte_lpm/Makefile
index fa94163..9bc0711 100644
--- a/lib/librte_lpm/Makefile
+++ b/lib/librte_lpm/Makefile
@@ -43,7 +43,7 @@ SRCS-$(CONFIG_RTE_LIBRTE_LPM) := rte_lpm.c rte_lpm6.c
 # install this header file
 SYMLINK-$(CONFIG_RTE_LIBRTE_LPM)-include := rte_lpm.h rte_lpm6.h
 
-# this lib needs eal
-DEPDIRS-$(CONFIG_RTE_LIBRTE_LPM) += lib/librte_eal lib/librte_malloc
+# this lib depends upon:
+DEPDIRS-$(CONFIG_RTE_LIBRTE_LPM) += lib/core
 
 include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/lib/librte_meter/Makefile b/lib/librte_meter/Makefile
index b25c0cc..8a7bee2 100644
--- a/lib/librte_meter/Makefile
+++ b/lib/librte_meter/Makefile
@@ -48,6 +48,6 @@ SRCS-$(CONFIG_RTE_LIBRTE_METER) := rte_meter.c
 SYMLINK-$(CONFIG_RTE_LIBRTE_METER)-include := rte_meter.h
 
 # this lib depends upon:
-DEPDIRS-$(CONFIG_RTE_LIBRTE_METER) += lib/librte_eal
+DEPDIRS-$(CONFIG_RTE_LIBRTE_METER) += lib/core
 
 include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/lib/librte_pipeline/Makefile b/lib/librte_pipeline/Makefile
index cf8fde8..aae047a 100644
--- a/lib/librte_pipeline/Makefile
+++ b/lib/librte_pipeline/Makefile
@@ -48,6 +48,7 @@ SRCS-$(CONFIG_RTE_LIBRTE_PIPELINE) := rte_pipeline.c
 SYMLINK-$(CONFIG_RTE_LIBRTE_PIPELINE)-include += rte_pipeline.h
 
 # this lib depends upon:
+DEPDIRS-$(CONFIG_RTE_LIBRTE_PIPELINE) += lib/core
 DEPDIRS-$(CONFIG_RTE_LIBRTE_PIPELINE) := lib/librte_table
 DEPDIRS-$(CONFIG_RTE_LIBRTE_PIPELINE) += lib/librte_port
 
diff --git a/lib/librte_pmd_af_packet/Makefile b/lib/librte_pmd_af_packet/Makefile
index 6955e5c..39853a1 100644
--- a/lib/librte_pmd_af_packet/Makefile
+++ b/lib/librte_pmd_af_packet/Makefile
@@ -52,9 +52,8 @@ SRCS-$(CONFIG_RTE_LIBRTE_PMD_AF_PACKET) += rte_eth_af_packet.c
 SYMLINK-y-include += rte_eth_af_packet.h
 
 # this lib depends upon:
-DEPDIRS-$(CONFIG_RTE_LIBRTE_PMD_AF_PACKET) += lib/librte_mbuf
+DEPDIRS-$(CONFIG_RTE_LIBRTE_PMD_AF_PACKET) += lib/core
 DEPDIRS-$(CONFIG_RTE_LIBRTE_PMD_AF_PACKET) += lib/librte_ether
-DEPDIRS-$(CONFIG_RTE_LIBRTE_PMD_AF_PACKET) += lib/librte_malloc
 DEPDIRS-$(CONFIG_RTE_LIBRTE_PMD_AF_PACKET) += lib/librte_kvargs
 
 include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/lib/librte_pmd_bond/Makefile b/lib/librte_pmd_bond/Makefile
index cdff126..d271300 100644
--- a/lib/librte_pmd_bond/Makefile
+++ b/lib/librte_pmd_bond/Makefile
@@ -58,10 +58,9 @@ SYMLINK-y-include += rte_eth_bond.h
 SYMLINK-y-include += rte_eth_bond_8023ad.h
 
 # this lib depends upon:
-DEPDIRS-$(CONFIG_RTE_LIBRTE_PMD_BOND) += lib/librte_mbuf
+DEPDIRS-$(CONFIG_RTE_LIBRTE_PMD_BOND) += lib/core
 DEPDIRS-$(CONFIG_RTE_LIBRTE_PMD_BOND) += lib/librte_ether
-DEPDIRS-$(CONFIG_RTE_LIBRTE_PMD_BOND) += lib/librte_malloc
-DEPDIRS-$(CONFIG_RTE_LIBRTE_PMD_BOND) += lib/librte_eal
 DEPDIRS-$(CONFIG_RTE_LIBRTE_PMD_BOND) += lib/librte_kvargs
+DEPDIRS-$(CONFIG_RTE_LIBRTE_PMD_BOND) += lib/librte_cmdline
 
 include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/lib/librte_pmd_e1000/Makefile b/lib/librte_pmd_e1000/Makefile
index 14bc4a2..b083db6 100644
--- a/lib/librte_pmd_e1000/Makefile
+++ b/lib/librte_pmd_e1000/Makefile
@@ -88,8 +88,8 @@ SRCS-$(CONFIG_RTE_LIBRTE_EM_PMD) += em_ethdev.c
 SRCS-$(CONFIG_RTE_LIBRTE_EM_PMD) += em_rxtx.c
 
 # this lib depends upon:
-DEPDIRS-$(CONFIG_RTE_LIBRTE_E1000_PMD) += lib/librte_eal lib/librte_ether
-DEPDIRS-$(CONFIG_RTE_LIBRTE_E1000_PMD) += lib/librte_mempool lib/librte_mbuf
-DEPDIRS-$(CONFIG_RTE_LIBRTE_E1000_PMD) += lib/librte_net lib/librte_malloc
+DEPDIRS-$(CONFIG_RTE_LIBRTE_E1000_PMD) += lib/core
+DEPDIRS-$(CONFIG_RTE_LIBRTE_E1000_PMD) += lib/librte_ether
+DEPDIRS-$(CONFIG_RTE_LIBRTE_E1000_PMD) += lib/librte_net
 
 include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/lib/librte_pmd_enic/Makefile b/lib/librte_pmd_enic/Makefile
index a2a623f..489ac1d 100644
--- a/lib/librte_pmd_enic/Makefile
+++ b/lib/librte_pmd_enic/Makefile
@@ -59,9 +59,9 @@ SRCS-$(CONFIG_RTE_LIBRTE_ENIC_PMD) += vnic/vnic_rq.c
 SRCS-$(CONFIG_RTE_LIBRTE_ENIC_PMD) += vnic/vnic_rss.c
 
 # this lib depends upon:
-DEPDIRS-$(CONFIG_RTE_LIBRTE_ENIC_PMD) += lib/librte_eal lib/librte_ether
-DEPDIRS-$(CONFIG_RTE_LIBRTE_ENIC_PMD) += lib/librte_mempool lib/librte_mbuf
-DEPDIRS-$(CONFIG_RTE_LIBRTE_ENIC_PMD) += lib/librte_net lib/librte_malloc
+DEPDIRS-$(CONFIG_RTE_LIBRTE_ENIC_PMD) += lib/core
+DEPDIRS-$(CONFIG_RTE_LIBRTE_ENIC_PMD) += lib/librte_ether
+DEPDIRS-$(CONFIG_RTE_LIBRTE_ENIC_PMD) += lib/librte_net
 DEPDIRS-$(CONFIG_RTE_LIBRTE_ENIC_PMD) += lib/librte_hash
 
 include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/lib/librte_pmd_i40e/Makefile b/lib/librte_pmd_i40e/Makefile
index 98e4bdf..664d0e5 100644
--- a/lib/librte_pmd_i40e/Makefile
+++ b/lib/librte_pmd_i40e/Makefile
@@ -94,8 +94,8 @@ SRCS-$(CONFIG_RTE_LIBRTE_I40E_PMD) += i40e_pf.c
 SRCS-$(CONFIG_RTE_LIBRTE_I40E_PMD) += i40e_fdir.c
 
 # this lib depends upon:
-DEPDIRS-$(CONFIG_RTE_LIBRTE_I40E_PMD) += lib/librte_eal lib/librte_ether
-DEPDIRS-$(CONFIG_RTE_LIBRTE_I40E_PMD) += lib/librte_mempool lib/librte_mbuf
-DEPDIRS-$(CONFIG_RTE_LIBRTE_I40E_PMD) += lib/librte_net lib/librte_malloc
+DEPDIRS-$(CONFIG_RTE_LIBRTE_I40E_PMD) += lib/core
+DEPDIRS-$(CONFIG_RTE_LIBRTE_I40E_PMD) += lib/librte_ether
+DEPDIRS-$(CONFIG_RTE_LIBRTE_I40E_PMD) += lib/librte_net
 
 include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/lib/librte_pmd_ixgbe/Makefile b/lib/librte_pmd_ixgbe/Makefile
index 3588047..33f4aa3 100644
--- a/lib/librte_pmd_ixgbe/Makefile
+++ b/lib/librte_pmd_ixgbe/Makefile
@@ -110,8 +110,8 @@ endif
 
 
 # this lib depends upon:
-DEPDIRS-$(CONFIG_RTE_LIBRTE_IXGBE_PMD) += lib/librte_eal lib/librte_ether
-DEPDIRS-$(CONFIG_RTE_LIBRTE_IXGBE_PMD) += lib/librte_mempool lib/librte_mbuf
-DEPDIRS-$(CONFIG_RTE_LIBRTE_IXGBE_PMD) += lib/librte_net lib/librte_malloc
+DEPDIRS-$(CONFIG_RTE_LIBRTE_IXGBE_PMD) += lib/core
+DEPDIRS-$(CONFIG_RTE_LIBRTE_IXGBE_PMD) += lib/librte_ether
+DEPDIRS-$(CONFIG_RTE_LIBRTE_IXGBE_PMD) += lib/librte_net
 
 include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/lib/librte_pmd_pcap/Makefile b/lib/librte_pmd_pcap/Makefile
index c5c214d..0ec383a 100644
--- a/lib/librte_pmd_pcap/Makefile
+++ b/lib/librte_pmd_pcap/Makefile
@@ -51,9 +51,8 @@ SRCS-$(CONFIG_RTE_LIBRTE_PMD_PCAP) += rte_eth_pcap.c
 SYMLINK-y-include +=
 
 # this lib depends upon:
-DEPDIRS-$(CONFIG_RTE_LIBRTE_PMD_PCAP) += lib/librte_mbuf
+DEPDIRS-$(CONFIG_RTE_LIBRTE_PMD_PCAP) += lib/core
 DEPDIRS-$(CONFIG_RTE_LIBRTE_PMD_PCAP) += lib/librte_ether
-DEPDIRS-$(CONFIG_RTE_LIBRTE_PMD_PCAP) += lib/librte_malloc
 DEPDIRS-$(CONFIG_RTE_LIBRTE_PMD_PCAP) += lib/librte_kvargs
 
 include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/lib/librte_pmd_ring/Makefile b/lib/librte_pmd_ring/Makefile
index b57e421..6f5d423 100644
--- a/lib/librte_pmd_ring/Makefile
+++ b/lib/librte_pmd_ring/Makefile
@@ -50,8 +50,8 @@ SRCS-$(CONFIG_RTE_LIBRTE_PMD_RING) += rte_eth_ring.c
 SYMLINK-y-include += rte_eth_ring.h
 
 # this lib depends upon:
-DEPDIRS-$(CONFIG_RTE_LIBRTE_PMD_RING) += lib/librte_eal lib/librte_ring
-DEPDIRS-$(CONFIG_RTE_LIBRTE_PMD_RING) += lib/librte_mbuf lib/librte_ether
+DEPDIRS-$(CONFIG_RTE_LIBRTE_PMD_RING) += lib/core
+DEPDIRS-$(CONFIG_RTE_LIBRTE_PMD_RING) += lib/librte_ether
 DEPDIRS-$(CONFIG_RTE_LIBRTE_PMD_RING) += lib/librte_kvargs
 
 include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/lib/librte_pmd_virtio/Makefile b/lib/librte_pmd_virtio/Makefile
index 456095b..477bf9c 100644
--- a/lib/librte_pmd_virtio/Makefile
+++ b/lib/librte_pmd_virtio/Makefile
@@ -50,8 +50,8 @@ SRCS-$(CONFIG_RTE_LIBRTE_VIRTIO_PMD) += virtio_ethdev.c
 
 
 # this lib depends upon:
-DEPDIRS-$(CONFIG_RTE_LIBRTE_VIRTIO_PMD) += lib/librte_eal lib/librte_ether
-DEPDIRS-$(CONFIG_RTE_LIBRTE_VIRTIO_PMD) += lib/librte_mempool lib/librte_mbuf
-DEPDIRS-$(CONFIG_RTE_LIBRTE_VIRTIO_PMD) += lib/librte_net lib/librte_malloc
+DEPDIRS-$(CONFIG_RTE_LIBRTE_VIRTIO_PMD) += lib/core
+DEPDIRS-$(CONFIG_RTE_LIBRTE_VIRTIO_PMD) += lib/librte_ether
+DEPDIRS-$(CONFIG_RTE_LIBRTE_VIRTIO_PMD) += lib/librte_net
 
 include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/lib/librte_pmd_vmxnet3/Makefile b/lib/librte_pmd_vmxnet3/Makefile
index 6872c74..d833c90 100644
--- a/lib/librte_pmd_vmxnet3/Makefile
+++ b/lib/librte_pmd_vmxnet3/Makefile
@@ -73,8 +73,8 @@ SRCS-$(CONFIG_RTE_LIBRTE_VMXNET3_PMD) += vmxnet3_rxtx.c
 SRCS-$(CONFIG_RTE_LIBRTE_VMXNET3_PMD) += vmxnet3_ethdev.c
 
 # this lib depends upon:
-DEPDIRS-$(CONFIG_RTE_LIBRTE_VMXNET3_PMD) += lib/librte_eal lib/librte_ether
-DEPDIRS-$(CONFIG_RTE_LIBRTE_VMXNET3_PMD) += lib/librte_mempool lib/librte_mbuf
-DEPDIRS-$(CONFIG_RTE_LIBRTE_VMXNET3_PMD) += lib/librte_net lib/librte_malloc
+DEPDIRS-$(CONFIG_RTE_LIBRTE_VMXNET3_PMD) += lib/core
+DEPDIRS-$(CONFIG_RTE_LIBRTE_VMXNET3_PMD) += lib/librte_ether
+DEPDIRS-$(CONFIG_RTE_LIBRTE_VMXNET3_PMD) += lib/librte_net
 
 include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/lib/librte_pmd_xenvirt/Makefile b/lib/librte_pmd_xenvirt/Makefile
index 01bfcaa..bf6265e 100644
--- a/lib/librte_pmd_xenvirt/Makefile
+++ b/lib/librte_pmd_xenvirt/Makefile
@@ -50,9 +50,9 @@ SRCS-$(CONFIG_RTE_LIBRTE_PMD_XENVIRT) += rte_eth_xenvirt.c rte_mempool_gntalloc.
 SYMLINK-y-include += rte_eth_xenvirt.h
 
 # this lib depends upon:
-DEPDIRS-$(CONFIG_RTE_LIBRTE_PMD_XENVIRT) += lib/librte_eal lib/librte_ether
-DEPDIRS-$(CONFIG_RTE_LIBRTE_PMD_XENVIRT) += lib/librte_mempool lib/librte_mbuf
-DEPDIRS-$(CONFIG_RTE_LIBRTE_PMD_XENVIRT) += lib/librte_net lib/librte_malloc
+DEPDIRS-$(CONFIG_RTE_LIBRTE_PMD_XENVIRT) += lib/core
+DEPDIRS-$(CONFIG_RTE_LIBRTE_PMD_XENVIRT) += lib/librte_ether
+DEPDIRS-$(CONFIG_RTE_LIBRTE_PMD_XENVIRT) += lib/librte_net
 DEPDIRS-$(CONFIG_RTE_LIBRTE_PMD_XENVIRT) += lib/librte_cmdline
 
 include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/lib/librte_port/Makefile b/lib/librte_port/Makefile
index 82b5192..8b0650f 100644
--- a/lib/librte_port/Makefile
+++ b/lib/librte_port/Makefile
@@ -67,11 +67,9 @@ SYMLINK-$(CONFIG_RTE_LIBRTE_PORT)-include += rte_port_sched.h
 SYMLINK-$(CONFIG_RTE_LIBRTE_PORT)-include += rte_port_source_sink.h
 
 # this lib depends upon:
-DEPDIRS-$(CONFIG_RTE_LIBRTE_PORT) := lib/librte_eal
-DEPDIRS-$(CONFIG_RTE_LIBRTE_PORT) += lib/librte_mbuf
-DEPDIRS-$(CONFIG_RTE_LIBRTE_PORT) += lib/librte_mempool
-DEPDIRS-$(CONFIG_RTE_LIBRTE_PORT) += lib/librte_malloc
+DEPDIRS-$(CONFIG_RTE_LIBRTE_PORT) := lib/core
 DEPDIRS-$(CONFIG_RTE_LIBRTE_PORT) += lib/librte_ether
+DEPDIRS-$(CONFIG_RTE_LIBRTE_PORT) += lib/librte_sched
 DEPDIRS-$(CONFIG_RTE_LIBRTE_PORT) += lib/librte_ip_frag
 
 include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/lib/librte_power/Makefile b/lib/librte_power/Makefile
index d672a5a..3a531b0 100644
--- a/lib/librte_power/Makefile
+++ b/lib/librte_power/Makefile
@@ -44,6 +44,6 @@ SRCS-$(CONFIG_RTE_LIBRTE_POWER) += rte_power_kvm_vm.c guest_channel.c
 SYMLINK-$(CONFIG_RTE_LIBRTE_POWER)-include := rte_power.h
 
 # this lib needs eal
-DEPDIRS-$(CONFIG_RTE_LIBRTE_POWER) += lib/librte_eal
+DEPDIRS-$(CONFIG_RTE_LIBRTE_POWER) += lib/core
 
 include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/lib/librte_sched/Makefile b/lib/librte_sched/Makefile
index 1a25b21..a2b965f 100644
--- a/lib/librte_sched/Makefile
+++ b/lib/librte_sched/Makefile
@@ -50,7 +50,8 @@ SRCS-$(CONFIG_RTE_LIBRTE_SCHED) += rte_sched.c rte_red.c rte_approx.c
 SYMLINK-$(CONFIG_RTE_LIBRTE_SCHED)-include := rte_sched.h rte_bitmap.h rte_sched_common.h rte_red.h rte_approx.h
 
 # this lib depends upon:
-DEPDIRS-$(CONFIG_RTE_LIBRTE_SCHED) += lib/librte_mempool lib/librte_mbuf
-DEPDIRS-$(CONFIG_RTE_LIBRTE_SCHED) += lib/librte_net lib/librte_timer
+DEPDIRS-$(CONFIG_RTE_LIBRTE_SCHED) += lib/core
+DEPDIRS-$(CONFIG_RTE_LIBRTE_SCHED) += lib/librte_net
+DEPDIRS-$(CONFIG_RTE_LIBRTE_SCHED) += lib/librte_timer
 
 include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/lib/librte_table/Makefile b/lib/librte_table/Makefile
index dd684cc..fbd451a 100644
--- a/lib/librte_table/Makefile
+++ b/lib/librte_table/Makefile
@@ -68,10 +68,7 @@ SYMLINK-$(CONFIG_RTE_LIBRTE_TABLE)-include += rte_table_array.h
 SYMLINK-$(CONFIG_RTE_LIBRTE_TABLE)-include += rte_table_stub.h
 
 # this lib depends upon:
-DEPDIRS-$(CONFIG_RTE_LIBRTE_TABLE) := lib/librte_eal
-DEPDIRS-$(CONFIG_RTE_LIBRTE_TABLE) += lib/librte_mbuf
-DEPDIRS-$(CONFIG_RTE_LIBRTE_TABLE) += lib/librte_mempool
-DEPDIRS-$(CONFIG_RTE_LIBRTE_TABLE) += lib/librte_malloc
+DEPDIRS-$(CONFIG_RTE_LIBRTE_TABLE) := lib/core
 DEPDIRS-$(CONFIG_RTE_LIBRTE_TABLE) += lib/librte_port
 DEPDIRS-$(CONFIG_RTE_LIBRTE_TABLE) += lib/librte_lpm
 ifeq ($(CONFIG_RTE_LIBRTE_ACL),y)
diff --git a/lib/librte_timer/Makefile b/lib/librte_timer/Makefile
index 07eb0c6..afdffb9 100644
--- a/lib/librte_timer/Makefile
+++ b/lib/librte_timer/Makefile
@@ -42,7 +42,7 @@ SRCS-$(CONFIG_RTE_LIBRTE_TIMER) := rte_timer.c
 # install this header file
 SYMLINK-$(CONFIG_RTE_LIBRTE_TIMER)-include := rte_timer.h
 
-# this lib needs eal
-DEPDIRS-$(CONFIG_RTE_LIBRTE_TIMER) += lib/librte_eal
+# this lib depends upon:
+DEPDIRS-$(CONFIG_RTE_LIBRTE_TIMER) += lib/core
 
 include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/lib/librte_vhost/Makefile b/lib/librte_vhost/Makefile
index c008d64..3c54be7 100644
--- a/lib/librte_vhost/Makefile
+++ b/lib/librte_vhost/Makefile
@@ -34,8 +34,7 @@ include $(RTE_SDK)/mk/rte.vars.mk
 # library name
 LIB = librte_vhost.a
 
-CFLAGS += $(WERROR_FLAGS) -I$(SRCDIR) -O3 -D_FILE_OFFSET_BITS=64 -lfuse
-LDFLAGS += -lfuse
+CFLAGS += $(WERROR_FLAGS) -I$(SRCDIR) -O3 -D_FILE_OFFSET_BITS=64
 # all source are stored in SRCS-y
 SRCS-$(CONFIG_RTE_LIBRTE_VHOST) := vhost-net-cdev.c virtio-net.c vhost_rxtx.c
 
@@ -43,8 +42,7 @@ SRCS-$(CONFIG_RTE_LIBRTE_VHOST) := vhost-net-cdev.c virtio-net.c vhost_rxtx.c
 SYMLINK-$(CONFIG_RTE_LIBRTE_VHOST)-include += rte_virtio_net.h
 
 # dependencies
-DEPDIRS-$(CONFIG_RTE_LIBRTE_VHOST) += lib/librte_eal
+DEPDIRS-$(CONFIG_RTE_LIBRTE_VHOST) += lib/core
 DEPDIRS-$(CONFIG_RTE_LIBRTE_VHOST) += lib/librte_ether
-DEPDIRS-$(CONFIG_RTE_LIBRTE_VHOST) += lib/librte_mbuf
 
 include $(RTE_SDK)/mk/rte.lib.mk
-- 
1.9.3

^ permalink raw reply	[flat|nested] 21+ messages in thread

* [dpdk-dev] [PATCH RFC 09/13] mk: new corelib makefile
  2015-01-12 16:33 [dpdk-dev] [PATCH RFC 00/13] Update build system Sergio Gonzalez Monroy
                   ` (7 preceding siblings ...)
  2015-01-12 16:34 ` [dpdk-dev] [PATCH RFC 08/13] Update path of core libraries Sergio Gonzalez Monroy
@ 2015-01-12 16:34 ` Sergio Gonzalez Monroy
  2015-01-12 16:34 ` [dpdk-dev] [PATCH RFC 10/13] lib: Set LDLIBS for each library Sergio Gonzalez Monroy
                   ` (5 subsequent siblings)
  14 siblings, 0 replies; 21+ messages in thread
From: Sergio Gonzalez Monroy @ 2015-01-12 16:34 UTC (permalink / raw)
  To: dev

This patch creates a new rte.corelib.mk file and updates core libraries
to use it.

Signed-off-by: Sergio Gonzalez Monroy <sergio.gonzalez.monroy@intel.com>
---
 lib/core/librte_eal/bsdapp/eal/Makefile   |  2 +-
 lib/core/librte_eal/linuxapp/eal/Makefile |  3 +-
 lib/core/librte_malloc/Makefile           |  2 +-
 lib/core/librte_mbuf/Makefile             |  2 +-
 lib/core/librte_mempool/Makefile          |  2 +-
 lib/core/librte_ring/Makefile             |  2 +-
 mk/rte.corelib.mk                         | 81 +++++++++++++++++++++++++++++++
 7 files changed, 87 insertions(+), 7 deletions(-)
 create mode 100644 mk/rte.corelib.mk

diff --git a/lib/core/librte_eal/bsdapp/eal/Makefile b/lib/core/librte_eal/bsdapp/eal/Makefile
index af0338f..afba0c6 100644
--- a/lib/core/librte_eal/bsdapp/eal/Makefile
+++ b/lib/core/librte_eal/bsdapp/eal/Makefile
@@ -93,5 +93,5 @@ SYMLINK-$(CONFIG_RTE_LIBRTE_EAL_BSDAPP)-include/exec-env := \
 
 DEPDIRS-$(CONFIG_RTE_LIBRTE_EAL_BSDAPP) += lib/core/librte_eal/common
 
-include $(RTE_SDK)/mk/rte.lib.mk
+include $(RTE_SDK)/mk/rte.corelib.mk
 
diff --git a/lib/core/librte_eal/linuxapp/eal/Makefile b/lib/core/librte_eal/linuxapp/eal/Makefile
index 0af2cd6..04165a2 100644
--- a/lib/core/librte_eal/linuxapp/eal/Makefile
+++ b/lib/core/librte_eal/linuxapp/eal/Makefile
@@ -108,5 +108,4 @@ SYMLINK-$(CONFIG_RTE_LIBRTE_EAL_LINUXAPP)-include/exec-env := \
 
 DEPDIRS-$(CONFIG_RTE_LIBRTE_EAL_LINUXAPP) += lib/core/librte_eal/common
 
-include $(RTE_SDK)/mk/rte.lib.mk
-
+include $(RTE_SDK)/mk/rte.corelib.mk
diff --git a/lib/core/librte_malloc/Makefile b/lib/core/librte_malloc/Makefile
index 8ed6e7d..8bc3d06 100644
--- a/lib/core/librte_malloc/Makefile
+++ b/lib/core/librte_malloc/Makefile
@@ -45,4 +45,4 @@ SYMLINK-$(CONFIG_RTE_LIBRTE_MALLOC)-include := rte_malloc.h
 # this lib needs eal
 DEPDIRS-$(CONFIG_RTE_LIBRTE_MALLOC) += lib/core/librte_eal
 
-include $(RTE_SDK)/mk/rte.lib.mk
+include $(RTE_SDK)/mk/rte.corelib.mk
diff --git a/lib/core/librte_mbuf/Makefile b/lib/core/librte_mbuf/Makefile
index b916d77..ceb4bd6 100644
--- a/lib/core/librte_mbuf/Makefile
+++ b/lib/core/librte_mbuf/Makefile
@@ -45,4 +45,4 @@ SYMLINK-$(CONFIG_RTE_LIBRTE_MBUF)-include := rte_mbuf.h
 # this lib needs eal
 DEPDIRS-$(CONFIG_RTE_LIBRTE_MBUF) += lib/core/librte_eal lib/core/librte_mempool
 
-include $(RTE_SDK)/mk/rte.lib.mk
+include $(RTE_SDK)/mk/rte.corelib.mk
diff --git a/lib/core/librte_mempool/Makefile b/lib/core/librte_mempool/Makefile
index 94a7fc1..6e1e7c3 100644
--- a/lib/core/librte_mempool/Makefile
+++ b/lib/core/librte_mempool/Makefile
@@ -48,4 +48,4 @@ SYMLINK-$(CONFIG_RTE_LIBRTE_MEMPOOL)-include := rte_mempool.h
 DEPDIRS-$(CONFIG_RTE_LIBRTE_MEMPOOL) += lib/core/librte_eal lib/core/librte_ring
 DEPDIRS-$(CONFIG_RTE_LIBRTE_MEMPOOL) += lib/core/librte_malloc
 
-include $(RTE_SDK)/mk/rte.lib.mk
+include $(RTE_SDK)/mk/rte.corelib.mk
diff --git a/lib/core/librte_ring/Makefile b/lib/core/librte_ring/Makefile
index 0b196e8..5111d34 100644
--- a/lib/core/librte_ring/Makefile
+++ b/lib/core/librte_ring/Makefile
@@ -45,4 +45,4 @@ SYMLINK-$(CONFIG_RTE_LIBRTE_RING)-include := rte_ring.h
 # this lib needs eal and rte_malloc
 DEPDIRS-$(CONFIG_RTE_LIBRTE_RING) += lib/core/librte_eal lib/core/librte_malloc
 
-include $(RTE_SDK)/mk/rte.lib.mk
+include $(RTE_SDK)/mk/rte.corelib.mk
diff --git a/mk/rte.corelib.mk b/mk/rte.corelib.mk
new file mode 100644
index 0000000..0f83021
--- /dev/null
+++ b/mk/rte.corelib.mk
@@ -0,0 +1,81 @@
+#   BSD LICENSE
+#
+#   Copyright(c) 2010-2014 Intel Corporation. All rights reserved.
+#   All rights reserved.
+#
+#   Redistribution and use in source and binary forms, with or without
+#   modification, are permitted provided that the following conditions
+#   are met:
+#
+#     * Redistributions of source code must retain the above copyright
+#       notice, this list of conditions and the following disclaimer.
+#     * Redistributions in binary form must reproduce the above copyright
+#       notice, this list of conditions and the following disclaimer in
+#       the documentation and/or other materials provided with the
+#       distribution.
+#     * Neither the name of Intel Corporation nor the names of its
+#       contributors may be used to endorse or promote products derived
+#       from this software without specific prior written permission.
+#
+#   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+#   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+#   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+#   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+#   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+#   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+#   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+#   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+#   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+#   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+#   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+include $(RTE_SDK)/mk/internal/rte.compile-pre.mk
+include $(RTE_SDK)/mk/internal/rte.install-pre.mk
+include $(RTE_SDK)/mk/internal/rte.clean-pre.mk
+include $(RTE_SDK)/mk/internal/rte.build-pre.mk
+include $(RTE_SDK)/mk/internal/rte.depdirs-pre.mk
+
+# VPATH contains at least SRCDIR
+VPATH += $(SRCDIR)
+
+LIB := $(patsubst %.a,%.touch,$(LIB))
+
+_BUILD = $(LIB)
+_INSTALL = $(INSTALL-FILES-y) $(SYMLINK-FILES-y)
+_CLEAN = doclean
+
+.PHONY: all
+all: install
+
+.PHONY: install
+install: build _postinstall
+
+_postinstall: build
+
+.PHONY: build
+build: _postbuild
+
+$(LIB): $(OBJS-y)
+	@mkdir -p $(COREDIR);
+	@cp -f $? $(COREDIR) && touch $(LIB)
+
+#
+# Clean all generated files
+#
+.PHONY: clean
+clean: _postclean
+
+.PHONY: doclean
+doclean:
+	$(Q)rm -rf $(LIB) $(OBJS-all) $(DEPS-all) $(DEPSTMP-all) \
+	  $(CMDS-all) $(INSTALL-FILES-all)
+	$(Q)rm -f $(_BUILD_TARGETS) $(_INSTALL_TARGETS) $(_CLEAN_TARGETS)
+
+include $(RTE_SDK)/mk/internal/rte.compile-post.mk
+include $(RTE_SDK)/mk/internal/rte.install-post.mk
+include $(RTE_SDK)/mk/internal/rte.clean-post.mk
+include $(RTE_SDK)/mk/internal/rte.build-post.mk
+include $(RTE_SDK)/mk/internal/rte.depdirs-post.mk
+
+.PHONY: FORCE
+FORCE:
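For reference, the stamp-file pattern used by the `$(LIB)` rule in rte.corelib.mk above can be sketched in isolation. This is a minimal standalone sketch, not part of the patch; COREDIR's value, the target name and the object list are all illustrative:

```make
# Sketch of the corelib staging rule: build objects, copy them into a
# common staging directory, and record completion with a .touch stamp
# instead of producing a per-library archive.
COREDIR ?= build/core-objs        # illustrative staging directory
OBJS-y  := foo.o bar.o            # illustrative object list

# (recipe lines must be tab-indented)
librte_dummy.touch: $(OBJS-y)
	@mkdir -p $(COREDIR)
	@cp -f $? $(COREDIR) && touch $@
```

Because `$?` expands to only those prerequisites newer than the target, unchanged objects are not re-staged on incremental builds; the stamp file carries the timestamp that make compares against.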
-- 
1.9.3

^ permalink raw reply	[flat|nested] 21+ messages in thread

* [dpdk-dev] [PATCH RFC 10/13] lib: Set LDLIBS for each library
  2015-01-12 16:33 [dpdk-dev] [PATCH RFC 00/13] Update build system Sergio Gonzalez Monroy
                   ` (8 preceding siblings ...)
  2015-01-12 16:34 ` [dpdk-dev] [PATCH RFC 09/13] mk: new corelib makefile Sergio Gonzalez Monroy
@ 2015-01-12 16:34 ` Sergio Gonzalez Monroy
  2015-01-12 16:34 ` [dpdk-dev] [PATCH RFC 11/13] mk: Use LDLIBS when linking shared libraries Sergio Gonzalez Monroy
                   ` (4 subsequent siblings)
  14 siblings, 0 replies; 21+ messages in thread
From: Sergio Gonzalez Monroy @ 2015-01-12 16:34 UTC (permalink / raw)
  To: dev

This patch sets LDLIBS for each library.
When creating shared libraries, each library will be linked against
its dependent libraries (LDLIBS).

Signed-off-by: Sergio Gonzalez Monroy <sergio.gonzalez.monroy@intel.com>
---
 lib/librte_acl/Makefile           | 1 +
 lib/librte_cfgfile/Makefile       | 1 +
 lib/librte_cmdline/Makefile       | 1 +
 lib/librte_distributor/Makefile   | 1 +
 lib/librte_ether/Makefile         | 1 +
 lib/librte_hash/Makefile          | 1 +
 lib/librte_ip_frag/Makefile       | 1 +
 lib/librte_ivshmem/Makefile       | 1 +
 lib/librte_kni/Makefile           | 1 +
 lib/librte_kvargs/Makefile        | 1 +
 lib/librte_lpm/Makefile           | 1 +
 lib/librte_meter/Makefile         | 2 ++
 lib/librte_pipeline/Makefile      | 2 ++
 lib/librte_pmd_af_packet/Makefile | 2 ++
 lib/librte_pmd_bond/Makefile      | 2 ++
 lib/librte_pmd_e1000/Makefile     | 2 ++
 lib/librte_pmd_enic/Makefile      | 2 ++
 lib/librte_pmd_i40e/Makefile      | 2 ++
 lib/librte_pmd_ixgbe/Makefile     | 2 ++
 lib/librte_pmd_pcap/Makefile      | 2 ++
 lib/librte_pmd_ring/Makefile      | 2 ++
 lib/librte_pmd_virtio/Makefile    | 2 ++
 lib/librte_pmd_vmxnet3/Makefile   | 2 ++
 lib/librte_pmd_xenvirt/Makefile   | 2 ++
 lib/librte_port/Makefile          | 2 ++
 lib/librte_power/Makefile         | 2 ++
 lib/librte_sched/Makefile         | 2 ++
 lib/librte_table/Makefile         | 3 +++
 lib/librte_timer/Makefile         | 2 ++
 lib/librte_vhost/Makefile         | 2 ++
 30 files changed, 50 insertions(+)

diff --git a/lib/librte_acl/Makefile b/lib/librte_acl/Makefile
index d3636ed..63982e8 100644
--- a/lib/librte_acl/Makefile
+++ b/lib/librte_acl/Makefile
@@ -58,6 +58,7 @@ SYMLINK-$(CONFIG_RTE_LIBRTE_ACL)-include += rte_acl_osdep_alone.h
 else
 # this lib depends upon:
 DEPDIRS-$(CONFIG_RTE_LIBRTE_ACL) += lib/core
+LDLIBS += -lrte_core
 endif
 
 include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/lib/librte_cfgfile/Makefile b/lib/librte_cfgfile/Makefile
index c959f5b..4fc3cb1 100644
--- a/lib/librte_cfgfile/Makefile
+++ b/lib/librte_cfgfile/Makefile
@@ -49,5 +49,6 @@ SYMLINK-$(CONFIG_RTE_LIBRTE_CFGFILE)-include += rte_cfgfile.h
 
 # this lib needs eal
 DEPDIRS-$(CONFIG_RTE_LIBRTE_CFGFILE) += lib/core
+LDLIBS += -lrte_core
 
 include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/lib/librte_cmdline/Makefile b/lib/librte_cmdline/Makefile
index ba5e49f..f75689d 100644
--- a/lib/librte_cmdline/Makefile
+++ b/lib/librte_cmdline/Makefile
@@ -59,5 +59,6 @@ SYMLINK-$(CONFIG_RTE_LIBRTE_CMDLINE)-include := $(INCS)
 
 # this lib depends upon:
 DEPDIRS-$(CONFIG_RTE_LIBRTE_CMDLINE) += lib/core
+LDLIBS += -lrte_core
 
 include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/lib/librte_distributor/Makefile b/lib/librte_distributor/Makefile
index e1ab6ee..2c8bce2 100644
--- a/lib/librte_distributor/Makefile
+++ b/lib/librte_distributor/Makefile
@@ -45,5 +45,6 @@ SYMLINK-$(CONFIG_RTE_LIBRTE_DISTRIBUTOR)-include := rte_distributor.h
 
 # this lib needs eal
 DEPDIRS-$(CONFIG_RTE_LIBRTE_DISTRIBUTOR) += lib/core
+LDLIBS += -lrte_core
 
 include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/lib/librte_ether/Makefile b/lib/librte_ether/Makefile
index 647c554..c925ab2 100644
--- a/lib/librte_ether/Makefile
+++ b/lib/librte_ether/Makefile
@@ -50,5 +50,6 @@ SYMLINK-y-include += rte_eth_ctrl.h
 
 # this lib depends upon:
 DEPDIRS-y += lib/core
+LDLIBS += -lrte_core
 
 include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/lib/librte_hash/Makefile b/lib/librte_hash/Makefile
index 220ba5d..d18147f 100644
--- a/lib/librte_hash/Makefile
+++ b/lib/librte_hash/Makefile
@@ -49,5 +49,6 @@ SYMLINK-$(CONFIG_RTE_LIBRTE_HASH)-include += rte_fbk_hash.h
 
 # this lib needs eal
 DEPDIRS-$(CONFIG_RTE_LIBRTE_HASH) += lib/core
+LDLIBS += -lrte_core
 
 include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/lib/librte_ip_frag/Makefile b/lib/librte_ip_frag/Makefile
index 9fbff70..078ca9e 100644
--- a/lib/librte_ip_frag/Makefile
+++ b/lib/librte_ip_frag/Makefile
@@ -55,5 +55,6 @@ SYMLINK-$(CONFIG_RTE_LIBRTE_IP_FRAG)-include += rte_ip_frag.h
 
 # this library depends upon:
 DEPDIRS-$(CONFIG_RTE_LIBRTE_IP_FRAG) += lib/core lib/librte_ether
+LDLIBS += -lrte_core -lethdev
 
 include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/lib/librte_ivshmem/Makefile b/lib/librte_ivshmem/Makefile
index d873195..c059b3f 100644
--- a/lib/librte_ivshmem/Makefile
+++ b/lib/librte_ivshmem/Makefile
@@ -44,5 +44,6 @@ SYMLINK-$(CONFIG_RTE_LIBRTE_IVSHMEM)-include := rte_ivshmem.h
 
 # this lib needs eal
 DEPDIRS-$(CONFIG_RTE_LIBRTE_IVSHMEM) += lib/core
+LDLIBS += -lrte_core
 
 include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/lib/librte_kni/Makefile b/lib/librte_kni/Makefile
index d2472c2..63fe80d 100644
--- a/lib/librte_kni/Makefile
+++ b/lib/librte_kni/Makefile
@@ -45,5 +45,6 @@ SYMLINK-$(CONFIG_RTE_LIBRTE_KNI)-include := rte_kni.h
 # this lib needs
 DEPDIRS-$(CONFIG_RTE_LIBRTE_KNI) += lib/core
 DEPDIRS-$(CONFIG_RTE_LIBRTE_KNI) += lib/librte_ether
+LDLIBS += -lrte_core -lethdev
 
 include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/lib/librte_kvargs/Makefile b/lib/librte_kvargs/Makefile
index 00564e2..8a015fc 100644
--- a/lib/librte_kvargs/Makefile
+++ b/lib/librte_kvargs/Makefile
@@ -47,5 +47,6 @@ SYMLINK-$(CONFIG_RTE_LIBRTE_KVARGS)-include := $(INCS)
 
 # this lib needs
 DEPDIRS-$(CONFIG_RTE_LIBRTE_KVARGS) += lib/core
+LDLIBS += -lrte_core
 
 include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/lib/librte_lpm/Makefile b/lib/librte_lpm/Makefile
index 9bc0711..e041807 100644
--- a/lib/librte_lpm/Makefile
+++ b/lib/librte_lpm/Makefile
@@ -45,5 +45,6 @@ SYMLINK-$(CONFIG_RTE_LIBRTE_LPM)-include := rte_lpm.h rte_lpm6.h
 
 # this lib needs
 DEPDIRS-$(CONFIG_RTE_LIBRTE_LPM) += lib/core
+LDLIBS += -lrte_core
 
 include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/lib/librte_meter/Makefile b/lib/librte_meter/Makefile
index 8a7bee2..c4fcd4a 100644
--- a/lib/librte_meter/Makefile
+++ b/lib/librte_meter/Makefile
@@ -50,4 +50,6 @@ SYMLINK-$(CONFIG_RTE_LIBRTE_METER)-include := rte_meter.h
 # this lib depends upon:
 DEPDIRS-$(CONFIG_RTE_LIBRTE_METER) += lib/core
 
+LDLIBS += -lrte_core
+
 include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/lib/librte_pipeline/Makefile b/lib/librte_pipeline/Makefile
index aae047a..873a228 100644
--- a/lib/librte_pipeline/Makefile
+++ b/lib/librte_pipeline/Makefile
@@ -52,4 +52,6 @@ DEPDIRS-$(CONFIG_RTE_LIBRTE_PIPELINE) += lib/librte_core
 DEPDIRS-$(CONFIG_RTE_LIBRTE_PIPELINE) := lib/librte_table
 DEPDIRS-$(CONFIG_RTE_LIBRTE_PIPELINE) += lib/librte_port
 
+LDLIBS += -lrte_core -lrte_table -lrte_port
+
 include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/lib/librte_pmd_af_packet/Makefile b/lib/librte_pmd_af_packet/Makefile
index 39853a1..1b669a7 100644
--- a/lib/librte_pmd_af_packet/Makefile
+++ b/lib/librte_pmd_af_packet/Makefile
@@ -56,4 +56,6 @@ DEPDIRS-$(CONFIG_RTE_LIBRTE_PMD_AF_PACKET) += lib/core
 DEPDIRS-$(CONFIG_RTE_LIBRTE_PMD_AF_PACKET) += lib/librte_ether
 DEPDIRS-$(CONFIG_RTE_LIBRTE_PMD_AF_PACKET) += lib/librte_kvargs
 
+LDLIBS += -lrte_core -lethdev -lrte_kvargs
+
 include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/lib/librte_pmd_bond/Makefile b/lib/librte_pmd_bond/Makefile
index d271300..1e0e893 100644
--- a/lib/librte_pmd_bond/Makefile
+++ b/lib/librte_pmd_bond/Makefile
@@ -63,4 +63,6 @@ DEPDIRS-$(CONFIG_RTE_LIBRTE_PMD_BOND) += lib/librte_ether
 DEPDIRS-$(CONFIG_RTE_LIBRTE_PMD_BOND) += lib/librte_kvargs
 DEPDIRS-$(CONFIG_RTE_LIBRTE_PMD_BOND) += lib/librte_cmdline
 
+LDLIBS += -lrte_core -lethdev -lrte_kvargs -lrte_cmdline
+
 include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/lib/librte_pmd_e1000/Makefile b/lib/librte_pmd_e1000/Makefile
index b083db6..d6570d8 100644
--- a/lib/librte_pmd_e1000/Makefile
+++ b/lib/librte_pmd_e1000/Makefile
@@ -92,4 +92,6 @@ DEPDIRS-$(CONFIG_RTE_LIBRTE_E1000_PMD) += lib/core
 DEPDIRS-$(CONFIG_RTE_LIBRTE_E1000_PMD) += lib/librte_ether
 DEPDIRS-$(CONFIG_RTE_LIBRTE_E1000_PMD) += lib/librte_net
 
+LDLIBS += -lrte_core -lethdev
+
 include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/lib/librte_pmd_enic/Makefile b/lib/librte_pmd_enic/Makefile
index 489ac1d..fe73b02 100644
--- a/lib/librte_pmd_enic/Makefile
+++ b/lib/librte_pmd_enic/Makefile
@@ -64,4 +64,6 @@ DEPDIRS-$(CONFIG_RTE_LIBRTE_ENIC_PMD) += lib/librte_ether
 DEPDIRS-$(CONFIG_RTE_LIBRTE_ENIC_PMD) += lib/librte_net
 DEPDIRS-$(CONFIG_RTE_LIBRTE_ENIC_PMD) += lib/librte_hash
 
+LDLIBS += -lrte_core -lrte_hash -lethdev
+
 include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/lib/librte_pmd_i40e/Makefile b/lib/librte_pmd_i40e/Makefile
index 664d0e5..e13186a 100644
--- a/lib/librte_pmd_i40e/Makefile
+++ b/lib/librte_pmd_i40e/Makefile
@@ -98,4 +98,6 @@ DEPDIRS-$(CONFIG_RTE_LIBRTE_I40E_PMD) += lib/core
 DEPDIRS-$(CONFIG_RTE_LIBRTE_I40E_PMD) += lib/librte_ether
 DEPDIRS-$(CONFIG_RTE_LIBRTE_I40E_PMD) += lib/librte_net
 
+LDLIBS += -lrte_core -lethdev
+
 include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/lib/librte_pmd_ixgbe/Makefile b/lib/librte_pmd_ixgbe/Makefile
index 33f4aa3..ea41fff 100644
--- a/lib/librte_pmd_ixgbe/Makefile
+++ b/lib/librte_pmd_ixgbe/Makefile
@@ -114,4 +114,6 @@ DEPDIRS-$(CONFIG_RTE_LIBRTE_IXGBE_PMD) += lib/core
 DEPDIRS-$(CONFIG_RTE_LIBRTE_IXGBE_PMD) += lib/librte_ether
 DEPDIRS-$(CONFIG_RTE_LIBRTE_IXGBE_PMD) += lib/librte_net
 
+LDLIBS += -lrte_core -lethdev
+
 include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/lib/librte_pmd_pcap/Makefile b/lib/librte_pmd_pcap/Makefile
index 0ec383a..9fde23d 100644
--- a/lib/librte_pmd_pcap/Makefile
+++ b/lib/librte_pmd_pcap/Makefile
@@ -55,4 +55,6 @@ DEPDIRS-$(CONFIG_RTE_LIBRTE_PMD_PCAP) += lib/core
 DEPDIRS-$(CONFIG_RTE_LIBRTE_PMD_PCAP) += lib/librte_ether
 DEPDIRS-$(CONFIG_RTE_LIBRTE_PMD_PCAP) += lib/librte_kvargs
 
+LDLIBS += -lrte_core -lethdev -lrte_kvargs -lpcap
+
 include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/lib/librte_pmd_ring/Makefile b/lib/librte_pmd_ring/Makefile
index 6f5d423..4ab99d6 100644
--- a/lib/librte_pmd_ring/Makefile
+++ b/lib/librte_pmd_ring/Makefile
@@ -54,4 +54,6 @@ DEPDIRS-$(CONFIG_RTE_LIBRTE_PMD_RING) += lib/core
 DEPDIRS-$(CONFIG_RTE_LIBRTE_PMD_RING) += lib/librte_ether
 DEPDIRS-$(CONFIG_RTE_LIBRTE_PMD_RING) += lib/librte_kvargs
 
+LDLIBS += -lrte_core -lethdev -lrte_kvargs
+
 include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/lib/librte_pmd_virtio/Makefile b/lib/librte_pmd_virtio/Makefile
index 477bf9c..14e2103 100644
--- a/lib/librte_pmd_virtio/Makefile
+++ b/lib/librte_pmd_virtio/Makefile
@@ -54,4 +54,6 @@ DEPDIRS-$(CONFIG_RTE_LIBRTE_VIRTIO_PMD) += lib/core
 DEPDIRS-$(CONFIG_RTE_LIBRTE_VIRTIO_PMD) += lib/librte_ether
 DEPDIRS-$(CONFIG_RTE_LIBRTE_VIRTIO_PMD) += lib/librte_net
 
+LDLIBS += -lrte_core -lethdev
+
 include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/lib/librte_pmd_vmxnet3/Makefile b/lib/librte_pmd_vmxnet3/Makefile
index d833c90..5293ebd 100644
--- a/lib/librte_pmd_vmxnet3/Makefile
+++ b/lib/librte_pmd_vmxnet3/Makefile
@@ -77,4 +77,6 @@ DEPDIRS-$(CONFIG_RTE_LIBRTE_VMXNET3_PMD) += lib/core
 DEPDIRS-$(CONFIG_RTE_LIBRTE_VMXNET3_PMD) += lib/librte_ether
 DEPDIRS-$(CONFIG_RTE_LIBRTE_VMXNET3_PMD) += lib/librte_net
 
+LDLIBS += -lrte_core -lethdev
+
 include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/lib/librte_pmd_xenvirt/Makefile b/lib/librte_pmd_xenvirt/Makefile
index bf6265e..81ca455 100644
--- a/lib/librte_pmd_xenvirt/Makefile
+++ b/lib/librte_pmd_xenvirt/Makefile
@@ -55,4 +55,6 @@ DEPDIRS-$(CONFIG_RTE_LIBRTE_PMD_XENVIRT) += lib/librte_ether
 DEPDIRS-$(CONFIG_RTE_LIBRTE_PMD_XENVIRT) += lib/librte_net
 DEPDIRS-$(CONFIG_RTE_LIBRTE_PMD_XENVIRT) += lib/librte_cmdline
 
+LDLIBS += -lrte_core -lethdev -lrte_cmdline -lxenstore
+
 include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/lib/librte_port/Makefile b/lib/librte_port/Makefile
index 8b0650f..0c90c6a 100644
--- a/lib/librte_port/Makefile
+++ b/lib/librte_port/Makefile
@@ -72,4 +72,6 @@ DEPDIRS-$(CONFIG_RTE_LIBRTE_PORT) += lib/librte_ether
 DEPDIRS-$(CONFIG_RTE_LIBRTE_PORT) += lib/librte_sched
 DEPDIRS-$(CONFIG_RTE_LIBRTE_PORT) += lib/librte_ip_frag
 
+LDLIBS += -lrte_core -lethdev -lrte_sched -lrte_ip_frag
+
 include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/lib/librte_power/Makefile b/lib/librte_power/Makefile
index 3a531b0..a62072d 100644
--- a/lib/librte_power/Makefile
+++ b/lib/librte_power/Makefile
@@ -46,4 +46,6 @@ SYMLINK-$(CONFIG_RTE_LIBRTE_POWER)-include := rte_power.h
 # this lib needs eal
 DEPDIRS-$(CONFIG_RTE_LIBRTE_POWER) += lib/core
 
+LDLIBS += -lrte_core
+
 include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/lib/librte_sched/Makefile b/lib/librte_sched/Makefile
index a2b965f..39a47d8 100644
--- a/lib/librte_sched/Makefile
+++ b/lib/librte_sched/Makefile
@@ -54,4 +54,6 @@ DEPDIRS-$(CONFIG_RTE_LIBRTE_SCHED) += lib/core
 DEPDIRS-$(CONFIG_RTE_LIBRTE_SCHED) += lib/librte_net
 DEPDIRS-$(CONFIG_RTE_LIBRTE_SCHED) += lib/librte_timer
 
+LDLIBS += -lrte_core -lrte_timer
+
 include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/lib/librte_table/Makefile b/lib/librte_table/Makefile
index fbd451a..6974f06 100644
--- a/lib/librte_table/Makefile
+++ b/lib/librte_table/Makefile
@@ -73,7 +73,10 @@ DEPDIRS-$(CONFIG_RTE_LIBRTE_TABLE) += lib/librte_port
 DEPDIRS-$(CONFIG_RTE_LIBRTE_TABLE) += lib/librte_lpm
 ifeq ($(CONFIG_RTE_LIBRTE_ACL),y)
 DEPDIRS-$(CONFIG_RTE_LIBRTE_TABLE) += lib/librte_acl
+LDLIBS += -lrte_acl
 endif
 DEPDIRS-$(CONFIG_RTE_LIBRTE_TABLE) += lib/librte_hash
 
+LDLIBS += -lrte_core -lrte_port -lrte_lpm -lrte_hash
+
 include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/lib/librte_timer/Makefile b/lib/librte_timer/Makefile
index afdffb9..5e8c22a 100644
--- a/lib/librte_timer/Makefile
+++ b/lib/librte_timer/Makefile
@@ -45,4 +45,6 @@ SYMLINK-$(CONFIG_RTE_LIBRTE_TIMER)-include := rte_timer.h
 # this lib needs
 DEPDIRS-$(CONFIG_RTE_LIBRTE_TIMER) += lib/core
 
+LDLIBS += -lrte_core
+
 include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/lib/librte_vhost/Makefile b/lib/librte_vhost/Makefile
index 3c54be7..8bf1cef 100644
--- a/lib/librte_vhost/Makefile
+++ b/lib/librte_vhost/Makefile
@@ -45,4 +45,6 @@ SYMLINK-$(CONFIG_RTE_LIBRTE_VHOST)-include += rte_virtio_net.h
 DEPDIRS-$(CONFIG_RTE_LIBRTE_VHOST) += lib/core
 DEPDIRS-$(CONFIG_RTE_LIBRTE_VHOST) += lib/librte_ether
 
+LDLIBS += -lrte_core -lethdev -lfuse
+
 include $(RTE_SDK)/mk/rte.lib.mk
-- 
1.9.3

^ permalink raw reply	[flat|nested] 21+ messages in thread

* [dpdk-dev] [PATCH RFC 11/13] mk: Use LDLIBS when linking shared libraries
  2015-01-12 16:33 [dpdk-dev] [PATCH RFC 00/13] Update build system Sergio Gonzalez Monroy
                   ` (9 preceding siblings ...)
  2015-01-12 16:34 ` [dpdk-dev] [PATCH RFC 10/13] lib: Set LDLIBS for each library Sergio Gonzalez Monroy
@ 2015-01-12 16:34 ` Sergio Gonzalez Monroy
  2015-01-12 16:34 ` [dpdk-dev] [PATCH RFC 12/13] mk: update apps build Sergio Gonzalez Monroy
                   ` (3 subsequent siblings)
  14 siblings, 0 replies; 21+ messages in thread
From: Sergio Gonzalez Monroy @ 2015-01-12 16:34 UTC (permalink / raw)
  To: dev

This patch mainly makes use of the LDLIBS variable when linking shared
libraries, setting proper DT_NEEDED entries.
It also fixes a few nits: the syntax-highlighting workaround, the command
string (O_TO_S_STR) used for linking shared libraries, and the display
of dependencies when debugging is enabled (D).

Signed-off-by: Sergio Gonzalez Monroy <sergio.gonzalez.monroy@intel.com>
---
 mk/rte.lib.mk | 15 ++++++++++-----
 1 file changed, 10 insertions(+), 5 deletions(-)

diff --git a/mk/rte.lib.mk b/mk/rte.lib.mk
index 7c99fd1..559c76a 100644
--- a/mk/rte.lib.mk
+++ b/mk/rte.lib.mk
@@ -59,16 +59,19 @@ build: _postbuild
 
 exe2cmd = $(strip $(call dotfile,$(patsubst %,%.cmd,$(1))))
 
+_LDLIBS := -z defs --as-needed $(LDLIBS) $(EXECENV_LDLIBS) --no-as-needed
+
 ifeq ($(LINK_USING_CC),1)
 # Override the definition of LD here, since we're linking with CC
 LD := $(CC) $(CPU_CFLAGS)
 _CPU_LDFLAGS := $(call linkerprefix,$(CPU_LDFLAGS))
+_LDLIBS := $(call linkerprefix,$(_LDLIBS))
 else
 _CPU_LDFLAGS := $(CPU_LDFLAGS)
 endif
 
 O_TO_A = $(AR) crus $(LIB) $(OBJS-y)
-O_TO_A_STR = $(subst ','\'',$(O_TO_A)) #'# fix syntax highlight
+O_TO_A_STR = $(subst ','\'',$(O_TO_A)) #')# fix syntax highlight
 O_TO_A_DISP = $(if $(V),"$(O_TO_A_STR)","  AR $(@)")
 O_TO_A_CMD = "cmd_$@ = $(O_TO_A_STR)"
 O_TO_A_DO = @set -e; \
@@ -76,9 +79,11 @@ O_TO_A_DO = @set -e; \
 	$(O_TO_A) && \
 	echo $(O_TO_A_CMD) > $(call exe2cmd,$(@))
 
-O_TO_S = $(LD) $(_CPU_LDFLAGS) -shared $(OBJS-y) -o $(LIB)
-O_TO_S_STR = $(subst ','\'',$(O_TO_S)) #'# fix syntax highlight
+O_TO_S = $(LD) $(_CPU_LDFLAGS) -L $(RTE_OUTPUT)/lib \
+		 -shared $(OBJS-y) $(_LDLIBS) -o $(LIB)
+O_TO_S_STR = $(subst ','\'',$(O_TO_S)) #')# fix syntax highlight
 O_TO_S_DISP = $(if $(V),"$(O_TO_S_STR)","  LD $(@)")
+O_TO_S_CMD = "cmd_$@ = $(O_TO_S_STR)"
 O_TO_S_DO = @set -e; \
 	echo $(O_TO_S_DISP); \
 	$(O_TO_S) && \
@@ -93,7 +98,7 @@ ifeq ($(RTE_BUILD_SHARED_LIB),y)
 $(LIB): $(OBJS-y) $(DEP_$(LIB)) FORCE
 	@[ -d $(dir $@) ] || mkdir -p $(dir $@)
 	$(if $(D),\
-		@echo -n "$< -> $@ " ; \
+		@echo -n "$? -> $@ " ; \
 		echo -n "file_missing=$(call boolean,$(file_missing)) " ; \
 		echo -n "cmdline_changed=$(call boolean,$(call cmdline_changed,$(O_TO_S_STR))) " ; \
 		echo -n "depfile_missing=$(call boolean,$(depfile_missing)) " ; \
@@ -108,7 +113,7 @@ else
 $(LIB): $(OBJS-y) $(DEP_$(LIB)) FORCE
 	@[ -d $(dir $@) ] || mkdir -p $(dir $@)
 	$(if $(D),\
-	    @echo -n "$< -> $@ " ; \
+	    @echo -n "$? -> $@ " ; \
 	    echo -n "file_missing=$(call boolean,$(file_missing)) " ; \
 	    echo -n "cmdline_changed=$(call boolean,$(call cmdline_changed,$(O_TO_A_STR))) " ; \
 	    echo -n "depfile_missing=$(call boolean,$(depfile_missing)) " ; \
-- 
1.9.3

^ permalink raw reply	[flat|nested] 21+ messages in thread

* [dpdk-dev] [PATCH RFC 12/13] mk: update apps build
  2015-01-12 16:33 [dpdk-dev] [PATCH RFC 00/13] Update build system Sergio Gonzalez Monroy
                   ` (10 preceding siblings ...)
  2015-01-12 16:34 ` [dpdk-dev] [PATCH RFC 11/13] mk: Use LDLIBS when linking shared libraries Sergio Gonzalez Monroy
@ 2015-01-12 16:34 ` Sergio Gonzalez Monroy
  2015-01-12 16:34 ` [dpdk-dev] [PATCH RFC 13/13] mk: add -lpthread to linuxapp EXECENV_LDLIBS Sergio Gonzalez Monroy
                   ` (2 subsequent siblings)
  14 siblings, 0 replies; 21+ messages in thread
From: Sergio Gonzalez Monroy @ 2015-01-12 16:34 UTC (permalink / raw)
  To: dev

This patch does:
 - Update the app build command to link against librte_core.
 - Set --start-group/--end-group and --whole-archive/--no-whole-archive
 flags only when linking against static DPDK libs.
 - Set --as-needed/--no-as-needed when linking against shared DPDK libs.
 - Always link against EXECENV_LDLIBS with the --as-needed flag.
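As an illustrative aside, the --whole-archive behaviour for static libs
can be sketched with a toy archive containing only a constructor, which
is roughly how statically linked PMDs register themselves; all names
here are invented for the example:

```shell
# pmd.c has no symbols the app references, only a constructor.
cat > pmd.c <<'EOF'
#include <stdio.h>
__attribute__((constructor)) static void register_driver(void) {
    puts("driver registered");
}
EOF
cat > main.c <<'EOF'
int main(void) { return 0; }
EOF
gcc -c pmd.c -o pmd.o
ar crs libpmd.a pmd.o
# With --whole-archive every object in the archive is pulled in, so the
# constructor runs; without it the linker drops the unreferenced object.
gcc main.c -Wl,--whole-archive libpmd.a -Wl,--no-whole-archive -o app_whole
gcc main.c libpmd.a -o app_plain
./app_whole   # prints "driver registered"
./app_plain   # prints nothing
```

This is why the flag is only useful for static linking: for shared libs
the equivalent job is done by DT_NEEDED plus run-time constructors.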

Signed-off-by: Sergio Gonzalez Monroy <sergio.gonzalez.monroy@intel.com>
---
 mk/rte.app.mk | 64 ++++++++++++++++++++++++-----------------------------------
 1 file changed, 26 insertions(+), 38 deletions(-)

diff --git a/mk/rte.app.mk b/mk/rte.app.mk
index becdac5..1fc19e1 100644
--- a/mk/rte.app.mk
+++ b/mk/rte.app.mk
@@ -59,22 +59,27 @@ LDLIBS += -L$(RTE_SDK_BIN)/lib
 #
 ifeq ($(NO_AUTOLIBS),)
 
-LDLIBS += --whole-archive
-
-ifeq ($(CONFIG_RTE_LIBRTE_DISTRIBUTOR),y)
-LDLIBS += -lrte_distributor
+ifeq ($(CONFIG_RTE_BUILD_SHARED_LIB),y)
+LDLIBS += --as-needed
+else
+LDLIBS += --no-as-needed
+LDLIBS += --start-group
 endif
 
-ifeq ($(CONFIG_RTE_LIBRTE_KNI),y)
+LDLIBS += -lrte_core
+
 ifeq ($(CONFIG_RTE_EXEC_ENV_LINUXAPP),y)
+ifeq ($(CONFIG_RTE_LIBRTE_KNI),y)
 LDLIBS += -lrte_kni
 endif
-endif
 
 ifeq ($(CONFIG_RTE_LIBRTE_IVSHMEM),y)
-ifeq ($(CONFIG_RTE_EXEC_ENV_LINUXAPP),y)
 LDLIBS += -lrte_ivshmem
 endif
+endif # CONFIG_RTE_EXEC_ENV_LINUXAPP
+
+ifeq ($(CONFIG_RTE_LIBRTE_DISTRIBUTOR),y)
+LDLIBS += -lrte_distributor
 endif
 
 ifeq ($(CONFIG_RTE_LIBRTE_PIPELINE),y)
@@ -123,16 +128,10 @@ ifeq ($(CONFIG_RTE_LIBRTE_PMD_PCAP),y)
 LDLIBS += -lpcap
 endif
 
-LDLIBS += --start-group
-
 ifeq ($(CONFIG_RTE_LIBRTE_KVARGS),y)
 LDLIBS += -lrte_kvargs
 endif
 
-ifeq ($(CONFIG_RTE_LIBRTE_MBUF),y)
-LDLIBS += -lrte_mbuf
-endif
-
 ifeq ($(CONFIG_RTE_LIBRTE_IP_FRAG),y)
 LDLIBS += -lrte_ip_frag
 endif
@@ -141,22 +140,6 @@ ifeq ($(CONFIG_RTE_LIBRTE_ETHER),y)
 LDLIBS += -lethdev
 endif
 
-ifeq ($(CONFIG_RTE_LIBRTE_MALLOC),y)
-LDLIBS += -lrte_malloc
-endif
-
-ifeq ($(CONFIG_RTE_LIBRTE_MEMPOOL),y)
-LDLIBS += -lrte_mempool
-endif
-
-ifeq ($(CONFIG_RTE_LIBRTE_RING),y)
-LDLIBS += -lrte_ring
-endif
-
-ifeq ($(CONFIG_RTE_LIBRTE_EAL),y)
-LDLIBS += -lrte_eal
-endif
-
 ifeq ($(CONFIG_RTE_LIBRTE_CMDLINE),y)
 LDLIBS += -lrte_cmdline
 endif
@@ -165,6 +148,11 @@ ifeq ($(CONFIG_RTE_LIBRTE_CFGFILE),y)
 LDLIBS += -lrte_cfgfile
 endif
 
+ifeq ($(CONFIG_RTE_LIBRTE_VHOST), y)
+LDLIBS += -lrte_vhost
+LDLIBS += -lfuse
+endif
+
 ifeq ($(CONFIG_RTE_LIBRTE_PMD_BOND),y)
 LDLIBS += -lrte_pmd_bond
 endif
@@ -175,7 +163,10 @@ LDLIBS += -lxenstore
 endif
 
 ifeq ($(CONFIG_RTE_BUILD_SHARED_LIB),n)
+#
 # plugins (link only if static libraries)
+#
+LDLIBS += --whole-archive
 
 ifeq ($(CONFIG_RTE_LIBRTE_VMXNET3_PMD),y)
 LDLIBS += -lrte_pmd_vmxnet3_uio
@@ -185,11 +176,6 @@ ifeq ($(CONFIG_RTE_LIBRTE_VIRTIO_PMD),y)
 LDLIBS += -lrte_pmd_virtio_uio
 endif
 
-ifeq ($(CONFIG_RTE_LIBRTE_VHOST), y)
-LDLIBS += -lrte_vhost
-LDLIBS += -lfuse
-endif
-
 ifeq ($(CONFIG_RTE_LIBRTE_ENIC_PMD),y)
 LDLIBS += -lrte_pmd_enic
 endif
@@ -218,13 +204,15 @@ ifeq ($(CONFIG_RTE_LIBRTE_PMD_AF_PACKET),y)
 LDLIBS += -lrte_pmd_af_packet
 endif
 
-endif # plugins
-
-LDLIBS += $(EXECENV_LDLIBS)
+LDLIBS += --no-whole-archive
 
 LDLIBS += --end-group
 
-LDLIBS += --no-whole-archive
+LDLIBS += --as-needed
+
+endif # plugins
+
+LDLIBS += $(EXECENV_LDLIBS)
 
 endif # ifeq ($(NO_AUTOLIBS),)
 
-- 
1.9.3

^ permalink raw reply	[flat|nested] 21+ messages in thread

* [dpdk-dev] [PATCH RFC 13/13] mk: add -lpthread to linuxapp EXECENV_LDLIBS
  2015-01-12 16:33 [dpdk-dev] [PATCH RFC 00/13] Update build system Sergio Gonzalez Monroy
                   ` (11 preceding siblings ...)
  2015-01-12 16:34 ` [dpdk-dev] [PATCH RFC 12/13] mk: update apps build Sergio Gonzalez Monroy
@ 2015-01-12 16:34 ` Sergio Gonzalez Monroy
  2015-01-12 16:51 ` [dpdk-dev] [PATCH RFC 00/13] Update build system Thomas Monjalon
  2015-01-13 12:26 ` Neil Horman
  14 siblings, 0 replies; 21+ messages in thread
From: Sergio Gonzalez Monroy @ 2015-01-12 16:34 UTC (permalink / raw)
  To: dev

We need to add -lpthread to EXECENV_LDLIBS because we do not pass the
-pthread flag from EXECENV_CFLAGS to GCC when linking apps.

Signed-off-by: Sergio Gonzalez Monroy <sergio.gonzalez.monroy@intel.com>
---
 mk/exec-env/linuxapp/rte.vars.mk | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/mk/exec-env/linuxapp/rte.vars.mk b/mk/exec-env/linuxapp/rte.vars.mk
index e5af318..dc01ce9 100644
--- a/mk/exec-env/linuxapp/rte.vars.mk
+++ b/mk/exec-env/linuxapp/rte.vars.mk
@@ -49,6 +49,8 @@ endif
 EXECENV_LDFLAGS = --no-as-needed
 
 EXECENV_LDLIBS  = -lrt -lm
+EXECENV_LDLIBS  += -lpthread
+
 EXECENV_ASFLAGS =
 
 ifeq ($(RTE_BUILD_SHARED_LIB),y)
-- 
1.9.3

^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: [dpdk-dev] [PATCH RFC 00/13] Update build system
  2015-01-12 16:33 [dpdk-dev] [PATCH RFC 00/13] Update build system Sergio Gonzalez Monroy
                   ` (12 preceding siblings ...)
  2015-01-12 16:34 ` [dpdk-dev] [PATCH RFC 13/13] mk: add -lpthread to linuxapp EXECENV_LDLIBS Sergio Gonzalez Monroy
@ 2015-01-12 16:51 ` Thomas Monjalon
  2015-01-12 17:21   ` Gonzalez Monroy, Sergio
  2015-01-13 12:26 ` Neil Horman
  14 siblings, 1 reply; 21+ messages in thread
From: Thomas Monjalon @ 2015-01-12 16:51 UTC (permalink / raw)
  To: Sergio Gonzalez Monroy; +Cc: dev

Hi Sergio,

2015-01-12 16:33, Sergio Gonzalez Monroy:
> This patch series updates the DPDK build system.

Thanks for proposing such rework.
We need discussions on that topic. So I ask some questions below.

> Following are the goals it tries to accomplish:
>  - Create a library containing core DPDK libraries (librte_eal,
>    librte_malloc, librte_mempool, librte_mbuf and librte_ring).
>    The idea of core libraries is to group those libraries that are
>    always required for any DPDK application.

How is it better? Is it only to reduce dependencies lines?

>  - Remove config option to build a combined library.

Why removing combined library? Is there people finding it helpful?

>  - For shared libraries, explicitly link against dependant
>    libraries (adding entries to DT_NEEDED).

OK, good.

>  - Update app linking flags against static/shared DPDK libs.
> 
> Note that this patch turns up being quite big because of moving lib
> directories to a new subdirectory.
> I have ommited the actual diff from the patch doing the move of librte_eal
> as it is quite big (6MB). Probably a different approach is preferred.

Why do you think moving directories is needed?

Thanks
-- 
Thomas

^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: [dpdk-dev] [PATCH RFC 00/13] Update build system
  2015-01-12 16:51 ` [dpdk-dev] [PATCH RFC 00/13] Update build system Thomas Monjalon
@ 2015-01-12 17:21   ` Gonzalez Monroy, Sergio
  2015-01-12 18:16     ` Neil Horman
  2015-01-22 10:03     ` Gonzalez Monroy, Sergio
  0 siblings, 2 replies; 21+ messages in thread
From: Gonzalez Monroy, Sergio @ 2015-01-12 17:21 UTC (permalink / raw)
  To: Thomas Monjalon; +Cc: dev

Hi Thomas,

> From: Thomas Monjalon [mailto:thomas.monjalon@6wind.com]
> Sent: Monday, January 12, 2015 4:52 PM
> 
> Hi Sergio,
> 
> 2015-01-12 16:33, Sergio Gonzalez Monroy:
> > This patch series updates the DPDK build system.
> 
> Thanks for proposing such rework.
> We need discussions on that topic. So I ask some questions below.
> 
> > Following are the goals it tries to accomplish:
> >  - Create a library containing core DPDK libraries (librte_eal,
> >    librte_malloc, librte_mempool, librte_mbuf and librte_ring).
> >    The idea of core libraries is to group those libraries that are
> >    always required for any DPDK application.
> 
> How is it better? Is it only to reduce dependencies lines?
>
In my opinion there is a set of libraries that are always required
and therefore should be grouped as a single one.
Basically, all apps and other DPDK libs would have dependencies on these core libraries.

Aside from that, I don't think there is any difference. Note that this affects shared libraries,
with no difference for apps linked against static libs. 

> >  - Remove config option to build a combined library.
> 
> Why removing combined library? Is there people finding it helpful?
> 
I don't think it makes sense from a shared library point of view, maybe it does for static?
For example, in the case of shared libraries I think we want to try to avoid the case where
we have an app linked against librte_dpdk.so, but such library may contain different libraries
depending on the options that were enabled when the lib was built.

The core libraries would be that set of libraries that are always required for an app, and its content
would be fixed regardless of the option libraries (like acl, hash, distributor, etc.)
We could add more libraries as core if we think it is a better solution, but the goal should be that
librte_core.so contains the same libraries/API regardless of the system/arch.

> >  - For shared libraries, explicitly link against dependant
> >    libraries (adding entries to DT_NEEDED).
> 
> OK, good.
> 
> >  - Update app linking flags against static/shared DPDK libs.
> >
> > Note that this patch turns up being quite big because of moving lib
> > directories to a new subdirectory.
> > I have ommited the actual diff from the patch doing the move of
> > librte_eal as it is quite big (6MB). Probably a different approach is
> preferred.
> 
> Why do you think moving directories is needed?
> 
Actually I am not sure it is the best way to do this :) There is no need to move them, as the same result
could be achieved without moving directories, but I thought that it would be easier for anyone to see which
libraries are 'core' and which are not.

Not moving those directories would definitely simplify this patch series.

> Thanks
> --
> Thomas

Thanks,
Sergio

^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: [dpdk-dev] [PATCH RFC 00/13] Update build system
  2015-01-12 17:21   ` Gonzalez Monroy, Sergio
@ 2015-01-12 18:16     ` Neil Horman
  2015-01-22 10:03     ` Gonzalez Monroy, Sergio
  1 sibling, 0 replies; 21+ messages in thread
From: Neil Horman @ 2015-01-12 18:16 UTC (permalink / raw)
  To: Gonzalez Monroy, Sergio; +Cc: dev

On Mon, Jan 12, 2015 at 05:21:48PM +0000, Gonzalez Monroy, Sergio wrote:
> Hi Thomas,
> 
> > From: Thomas Monjalon [mailto:thomas.monjalon@6wind.com]
> > Sent: Monday, January 12, 2015 4:52 PM
> > 
> > Hi Sergio,
> > 
> > 2015-01-12 16:33, Sergio Gonzalez Monroy:
> > > This patch series updates the DPDK build system.
> > 
> > Thanks for proposing such rework.
> > We need discussions on that topic. So I ask some questions below.
> > 
> > > Following are the goals it tries to accomplish:
> > >  - Create a library containing core DPDK libraries (librte_eal,
> > >    librte_malloc, librte_mempool, librte_mbuf and librte_ring).
> > >    The idea of core libraries is to group those libraries that are
> > >    always required for any DPDK application.
> > 
> > How is it better? Is it only to reduce dependencies lines?
> >
> In my opinion I think that there are a set of libraries that are always required
> and therefore should be grouped as a single one.
> Basically all apps and other DPDK libs would have dependencies to these core libraries.
> 
> Aside from that, I don't think there is any difference. Note that this affects shared libraries,
> with no difference for apps linked against static libs. 
> 
> > >  - Remove config option to build a combined library.
> > 
> > Why removing combined library? Is there people finding it helpful?
> > 
> I don't think it makes sense from a shared library point of view, maybe it does for static?
> For example, in the case of shared libraries I think we want to try to avoid the case where
> we have an app linked against librte_dpdk.so, but such library may contain different libraries
> depending on the options that were enabled when the lib was built.
> 
> The core libraries would be that set of libraries that are always required for an app, and its content
> would be fixed regardless of the option libraries (like acl, hash, distributor, etc.)
> We could add more libraries as core if we think it is a better solution, but the goal should be that
> librte_core.so contains the same libraries/API regardless of the system/arch.
> 

FWIW, I think Sergio's approach is likely a good balance.  As he notes, mempool,
eal, malloc and mbuf are needed for any dpdk application, and have
interdependencies, so it makes sense to link them as a single library.
Everything else is optional.  For static libraries, you can just add a few extra
lines to the linker, but for DSOs you might want the option of not linking
against a PMD, or the option to dynamically load it via the dlopen interface (using the
-d option).  There's not much sense in adding those PMD DSOs to a single library
just to save a few lines in the makefile there.  This approach strikes a good
balance, combining items that will have to be linked together anyway, and
leaving everything else separate.
Neil

^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: [dpdk-dev] [PATCH RFC 00/13] Update build system
  2015-01-12 16:33 [dpdk-dev] [PATCH RFC 00/13] Update build system Sergio Gonzalez Monroy
                   ` (13 preceding siblings ...)
  2015-01-12 16:51 ` [dpdk-dev] [PATCH RFC 00/13] Update build system Thomas Monjalon
@ 2015-01-13 12:26 ` Neil Horman
  14 siblings, 0 replies; 21+ messages in thread
From: Neil Horman @ 2015-01-13 12:26 UTC (permalink / raw)
  To: Sergio Gonzalez Monroy; +Cc: dev

On Mon, Jan 12, 2015 at 04:33:53PM +0000, Sergio Gonzalez Monroy wrote:
> This patch series updates the DPDK build system.
> 
> Following are the goals it tries to accomplish:
>  - Create a library containing core DPDK libraries (librte_eal,
>    librte_malloc, librte_mempool, librte_mbuf and librte_ring).
>    The idea of core libraries is to group those libraries that are
>    always required for any DPDK application.
>  - Remove config option to build a combined library.
>  - For shared libraries, explicitly link against dependant
>    libraries (adding entries to DT_NEEDED).
>  - Update app linking flags against static/shared DPDK libs.
> 
> Note that this patch turns up being quite big because of moving lib
> directories to a new subdirectory.
> I have ommited the actual diff from the patch doing the move of librte_eal
> as it is quite big (6MB). Probably a different approach is preferred.
> 
> Sergio Gonzalez Monroy (13):
>   mk: Remove combined library and related options
>   lib/core: create new core dir and makefiles
>   core: move librte_eal to core subdir
>   core: move librte_malloc to core subdir
>   core: move librte_mempool to core subdir
>   core: move librte_mbuf to core subdir
>   core: move librte_ring to core subdir
>   Update path of core libraries
>   mk: new corelib makefile
>   lib: Set LDLIBS for each library
>   mk: Use LDLIBS when linking shared libraries
>   mk: update apps build
>   mk: add -lpthread to linuxapp EXECENV_LDLIBS
> 
Series
Acked-by: Neil Horman <nhorman@tuxdriver.com>

^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: [dpdk-dev] [PATCH RFC 00/13] Update build system
  2015-01-12 17:21   ` Gonzalez Monroy, Sergio
  2015-01-12 18:16     ` Neil Horman
@ 2015-01-22 10:03     ` Gonzalez Monroy, Sergio
  2015-01-22 10:38       ` Thomas Monjalon
  1 sibling, 1 reply; 21+ messages in thread
From: Gonzalez Monroy, Sergio @ 2015-01-22 10:03 UTC (permalink / raw)
  To: Thomas Monjalon; +Cc: dev

> From: Gonzalez Monroy, Sergio
> Sent: Monday, January 12, 2015 5:22 PM
> To: Thomas Monjalon
> Subject: Re: [dpdk-dev] [PATCH RFC 00/13] Update build system
> 
> Hi Thomas,
> 
> > From: Thomas Monjalon [mailto:thomas.monjalon@6wind.com]
> > Sent: Monday, January 12, 2015 4:52 PM
> >
> > Hi Sergio,
> >
> > 2015-01-12 16:33, Sergio Gonzalez Monroy:
> > > This patch series updates the DPDK build system.
> >
> > Thanks for proposing such rework.
> > We need discussions on that topic. So I ask some questions below.
> >
> > > Following are the goals it tries to accomplish:
> > >  - Create a library containing core DPDK libraries (librte_eal,
> > >    librte_malloc, librte_mempool, librte_mbuf and librte_ring).
> > >    The idea of core libraries is to group those libraries that are
> > >    always required for any DPDK application.
> >
> > How is it better? Is it only to reduce dependencies lines?
> >
> In my opinion I think that there are a set of libraries that are always required
> and therefore should be grouped as a single one.
> Basically all apps and other DPDK libs would have dependencies to these core
> libraries.
> 
> Aside from that, I don't think there is any difference. Note that this affects
> shared libraries, with no difference for apps linked against static libs.
> 
> > >  - Remove config option to build a combined library.
> >
> > Why removing combined library? Is there people finding it helpful?
> >
> I don't think it makes sense from a shared library point of view, maybe it
> does for static?
> For example, in the case of shared libraries I think we want to try to avoid the
> case where we have an app linked against librte_dpdk.so, but such library
> may contain different libraries depending on the options that were enabled
> when the lib was built.
> 
> The core libraries would be that set of libraries that are always required for
> an app, and its content would be fixed regardless of the option libraries (like
> acl, hash, distributor, etc.) We could add more libraries as core if we think it is
> a better solution, but the goal should be that librte_core.so contains the
> same libraries/API regardless of the system/arch.
> 
> > >  - For shared libraries, explicitly link against dependant
> > >    libraries (adding entries to DT_NEEDED).
> >
> > OK, good.
> >
> > >  - Update app linking flags against static/shared DPDK libs.
> > >
> > > Note that this patch turns up being quite big because of moving lib
> > > directories to a new subdirectory.
> > > I have ommited the actual diff from the patch doing the move of
> > > librte_eal as it is quite big (6MB). Probably a different approach
> > > is
> > preferred.
> >
> > Why do you think moving directories is needed?
> >
> Actually I am not sure is the best way to do this :) There is no need to move
> them, as the same result could be achieved without moving directories, but I
> thought that it would be easier for anyone to see which libraries are 'core'
> and which are not.
> 
> Not moving those directories would definitely simplify this patch series.
> 
> > Thanks
> > --
> > Thomas
> 
> Thanks,
> Sergio

Hi Thomas,

Any other comments/suggestions?
My main concern would be the patch needed to move librte_eal (around 6MB). 

Thoughts?

Thanks,
Sergio

^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: [dpdk-dev] [PATCH RFC 00/13] Update build system
  2015-01-22 10:03     ` Gonzalez Monroy, Sergio
@ 2015-01-22 10:38       ` Thomas Monjalon
  2015-01-22 11:01         ` Gonzalez Monroy, Sergio
  0 siblings, 1 reply; 21+ messages in thread
From: Thomas Monjalon @ 2015-01-22 10:38 UTC (permalink / raw)
  To: Gonzalez Monroy, Sergio; +Cc: dev

2015-01-22 10:03, Gonzalez Monroy, Sergio:
> > From: Gonzalez Monroy, Sergio
> > Sent: Monday, January 12, 2015 5:22 PM
> > To: Thomas Monjalon
> > Subject: Re: [dpdk-dev] [PATCH RFC 00/13] Update build system
> > 
> > Hi Thomas,
> > 
> > > From: Thomas Monjalon [mailto:thomas.monjalon@6wind.com]
> > > Sent: Monday, January 12, 2015 4:52 PM
> > >
> > > Hi Sergio,
> > >
> > > 2015-01-12 16:33, Sergio Gonzalez Monroy:
> > > > This patch series updates the DPDK build system.
> > >
> > > Thanks for proposing such rework.
> > > We need discussions on that topic. So I ask some questions below.
> > >
> > > > Following are the goals it tries to accomplish:
> > > >  - Create a library containing core DPDK libraries (librte_eal,
> > > >    librte_malloc, librte_mempool, librte_mbuf and librte_ring).
> > > >    The idea of core libraries is to group those libraries that are
> > > >    always required for any DPDK application.
> > >
> > > How is it better? Is it only to reduce dependency lines?
> > >
> > In my opinion, there is a set of libraries that is always required
> > and therefore should be grouped as a single one.
> > Basically all apps and other DPDK libs would have dependencies to these core
> > libraries.
> > 
> > Aside from that, I don't think there is any difference. Note that this affects
> > shared libraries, with no difference for apps linked against static libs.
> > 
> > > >  - Remove config option to build a combined library.
> > >
> > > Why remove the combined library? Are there people finding it helpful?
> > >
> > I don't think it makes sense from a shared library point of view, maybe it
> > does for static?
> > For example, in the case of shared libraries I think we want to try to avoid the
> > case where we have an app linked against librte_dpdk.so, but such library
> > may contain different libraries depending on the options that were enabled
> > when the lib was built.
> > 
> > The core libraries would be that set of libraries that are always required for
> > an app, and its content would be fixed regardless of the option libraries (like
> > acl, hash, distributor, etc.) We could add more libraries as core if we think it is
> > a better solution, but the goal should be that librte_core.so contains the
> > same libraries/API regardless of the system/arch.
> > 
> > > >  - For shared libraries, explicitly link against dependent
> > > >    libraries (adding entries to DT_NEEDED).
> > >
> > > OK, good.
> > >
> > > >  - Update app linking flags against static/shared DPDK libs.
> > > >
> > > > Note that this patch turns out to be quite big because of moving lib
> > > > directories to a new subdirectory.
> > > > I have omitted the actual diff from the patch doing the move of
> > > > librte_eal as it is quite big (6MB). Probably a different approach
> > > > is
> > > > preferred.
> > >
> > > Why do you think moving directories is needed?
> > >
> > Actually I am not sure it is the best way to do this :) There is no need to move
> > them, as the same result could be achieved without moving directories, but I
> > thought that it would be easier for anyone to see which libraries are 'core'
> > and which are not.
> > 
> > Not moving those directories would definitely simplify this patch series.
> > 
> > > Thanks
> > > --
> > > Thomas
> > 
> > Thanks,
> > Sergio
> 
> Hi Thomas,
> 
> Any other comments/suggestions?
> My main concern would be the patch needed to move librte_eal (around 6MB).
> 
> Thoughts?

I think you shouldn't move the libs.
Maybe we can link the core libs into one (not sure of the interest)
but I think we shouldn't move them into a core/ subdir.

On another side, I'd like to see KNI moving out of EAL.

-- 
Thomas


* Re: [dpdk-dev] [PATCH RFC 00/13] Update build system
  2015-01-22 10:38       ` Thomas Monjalon
@ 2015-01-22 11:01         ` Gonzalez Monroy, Sergio
  0 siblings, 0 replies; 21+ messages in thread
From: Gonzalez Monroy, Sergio @ 2015-01-22 11:01 UTC (permalink / raw)
  To: Thomas Monjalon, Neil Horman; +Cc: dev

> From: Thomas Monjalon [mailto:thomas.monjalon@6wind.com]
> Sent: Thursday, January 22, 2015 10:39 AM
> To: Gonzalez Monroy, Sergio
> Cc: dev@dpdk.org
> Subject: Re: [dpdk-dev] [PATCH RFC 00/13] Update build system
> 
> 2015-01-22 10:03, Gonzalez Monroy, Sergio:
> > > From: Gonzalez Monroy, Sergio
> > > Sent: Monday, January 12, 2015 5:22 PM
> > > To: Thomas Monjalon
> > > Subject: Re: [dpdk-dev] [PATCH RFC 00/13] Update build system
> > >
> > > Hi Thomas,
> > >
> > > > From: Thomas Monjalon [mailto:thomas.monjalon@6wind.com]
> > > > Sent: Monday, January 12, 2015 4:52 PM
> > > >
> > > > Hi Sergio,
> > > >
> > > > 2015-01-12 16:33, Sergio Gonzalez Monroy:
> > > > > This patch series updates the DPDK build system.
> > > >
> > > > Thanks for proposing such rework.
> > > > We need discussions on that topic. So I ask some questions below.
> > > >
> > > > > Following are the goals it tries to accomplish:
> > > > >  - Create a library containing core DPDK libraries (librte_eal,
> > > > >    librte_malloc, librte_mempool, librte_mbuf and librte_ring).
> > > > >    The idea of core libraries is to group those libraries that are
> > > > >    always required for any DPDK application.
> > > >
> > > > How is it better? Is it only to reduce dependency lines?
> > > >
> > > In my opinion, there is a set of libraries that is
> > > always required and therefore should be grouped as a single one.
> > > Basically all apps and other DPDK libs would have dependencies to
> > > these core libraries.
> > >
> > > Aside from that, I don't think there is any difference. Note that
> > > this affects shared libraries, with no difference for apps linked against
> > > static libs.
> > >
> > > > >  - Remove config option to build a combined library.
> > > >
> > > > Why remove the combined library? Are there people finding it helpful?
> > > >
> > > I don't think it makes sense from a shared library point of view,
> > > maybe it does for static?
> > > For example, in the case of shared libraries I think we want to try
> > > to avoid the case where we have an app linked against
> > > librte_dpdk.so, but such library may contain different libraries
> > > depending on the options that were enabled when the lib was built.
> > >
> > > The core libraries would be that set of libraries that are always
> > > required for an app, and its content would be fixed regardless of
> > > the option libraries (like acl, hash, distributor, etc.) We could
> > > add more libraries as core if we think it is a better solution, but
> > > the goal should be that librte_core.so contains the same libraries/API
> > > regardless of the system/arch.
> > >
> > > > >  - For shared libraries, explicitly link against dependent
> > > > >    libraries (adding entries to DT_NEEDED).
> > > >
> > > > OK, good.
> > > >
> > > > >  - Update app linking flags against static/shared DPDK libs.
> > > > >
> > > > > Note that this patch turns out to be quite big because of moving
> > > > > lib directories to a new subdirectory.
> > > > > I have omitted the actual diff from the patch doing the move of
> > > > > librte_eal as it is quite big (6MB). Probably a different
> > > > > approach is
> > > > > preferred.
> > > >
> > > > Why do you think moving directories is needed?
> > > >
> > > Actually I am not sure it is the best way to do this :) There is no
> > > need to move them, as the same result could be achieved without
> > > moving directories, but I thought that it would be easier for anyone to
> > > see which libraries are 'core'
> > > and which are not.
> > >
> > > Not moving those directories would definitely simplify this patch series.
> > >
> > > > Thanks
> > > > --
> > > > Thomas
> > >
> > > Thanks,
> > > Sergio
> >
> > Hi Thomas,
> >
> > Any other comments/suggestions?
> > My main concern would be the patch needed to move librte_eal (around
> > 6MB).
> >
> > Thoughts?
> 
> I think you shouldn't move the libs.
> Maybe we can link the core libs into one (not sure of the interest) but I think
> we shouldn't move them into a core/ subdir.
> 
> On another side, I'd like to see KNI moving out of EAL.
> 
> --
> Thomas

I think moving KNI out of EAL belongs to a different patch.

We can still link librte_core without moving the directories into core/

I'll work on it.

Thanks,
Sergio


end of thread, other threads:[~2015-01-22 11:01 UTC | newest]

Thread overview: 21+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2015-01-12 16:33 [dpdk-dev] [PATCH RFC 00/13] Update build system Sergio Gonzalez Monroy
2015-01-12 16:33 ` [dpdk-dev] [PATCH RFC 01/13] mk: Remove combined library and related options Sergio Gonzalez Monroy
2015-01-12 16:33 ` [dpdk-dev] [PATCH RFC 02/13] lib/core: create new core dir and makefiles Sergio Gonzalez Monroy
2015-01-12 16:33 ` [dpdk-dev] [PATCH RFC 03/13] core: move librte_eal to core subdir Sergio Gonzalez Monroy
2015-01-12 16:33 ` [dpdk-dev] [PATCH RFC 04/13] core: move librte_malloc " Sergio Gonzalez Monroy
2015-01-12 16:33 ` [dpdk-dev] [PATCH RFC 05/13] core: move librte_mempool " Sergio Gonzalez Monroy
2015-01-12 16:33 ` [dpdk-dev] [PATCH RFC 06/13] core: move librte_mbuf " Sergio Gonzalez Monroy
2015-01-12 16:34 ` [dpdk-dev] [PATCH RFC 07/13] core: move librte_ring " Sergio Gonzalez Monroy
2015-01-12 16:34 ` [dpdk-dev] [PATCH RFC 08/13] Update path of core libraries Sergio Gonzalez Monroy
2015-01-12 16:34 ` [dpdk-dev] [PATCH RFC 09/13] mk: new corelib makefile Sergio Gonzalez Monroy
2015-01-12 16:34 ` [dpdk-dev] [PATCH RFC 10/13] lib: Set LDLIBS for each library Sergio Gonzalez Monroy
2015-01-12 16:34 ` [dpdk-dev] [PATCH RFC 11/13] mk: Use LDLIBS when linking shared libraries Sergio Gonzalez Monroy
2015-01-12 16:34 ` [dpdk-dev] [PATCH RFC 12/13] mk: update apps build Sergio Gonzalez Monroy
2015-01-12 16:34 ` [dpdk-dev] [PATCH RFC 13/13] mk: add -lpthread to linuxapp EXECENV_LDLIBS Sergio Gonzalez Monroy
2015-01-12 16:51 ` [dpdk-dev] [PATCH RFC 00/13] Update build system Thomas Monjalon
2015-01-12 17:21   ` Gonzalez Monroy, Sergio
2015-01-12 18:16     ` Neil Horman
2015-01-22 10:03     ` Gonzalez Monroy, Sergio
2015-01-22 10:38       ` Thomas Monjalon
2015-01-22 11:01         ` Gonzalez Monroy, Sergio
2015-01-13 12:26 ` Neil Horman
