* Re: [dpdk-dev] [PATCH v3 25/27] mempool/octeontx2: add optimized dequeue operation for arm64
@ 2019-06-22 13:21 Jerin Jacob Kollanukkaran
0 siblings, 0 replies; 3+ messages in thread
From: Jerin Jacob Kollanukkaran @ 2019-06-22 13:21 UTC (permalink / raw)
To: Aaron Conole, Pavan Nikhilesh Bhagavatula
Cc: dev, Nithin Kumar Dabilpuram, Vamsi Krishna Attunuru, Olivier Matz
> -----Original Message-----
> From: Aaron Conole <aconole@redhat.com>
> Sent: Saturday, June 22, 2019 12:57 AM
> To: Pavan Nikhilesh Bhagavatula <pbhagavatula@marvell.com>
> Cc: Jerin Jacob Kollanukkaran <jerinj@marvell.com>; dev@dpdk.org; Nithin
> Kumar Dabilpuram <ndabilpuram@marvell.com>; Vamsi Krishna Attunuru
> <vattunuru@marvell.com>; Olivier Matz <olivier.matz@6wind.com>
> Subject: Re: [EXT] Re: [dpdk-dev] [PATCH v3 25/27] mempool/octeontx2:
> add optimized dequeue operation for arm64
>
> Pavan Nikhilesh Bhagavatula <pbhagavatula@marvell.com> writes:
>
> > Hi Aaron,
> >
> >>-----Original Message-----
> >>From: Aaron Conole <aconole@redhat.com>
> >>Sent: Tuesday, June 18, 2019 2:55 AM
> >>To: Jerin Jacob Kollanukkaran <jerinj@marvell.com>
> >>Cc: dev@dpdk.org; Nithin Kumar Dabilpuram
> >><ndabilpuram@marvell.com>;
> >>Vamsi Krishna Attunuru <vattunuru@marvell.com>; Pavan Nikhilesh
> >>Bhagavatula <pbhagavatula@marvell.com>; Olivier Matz
> >><olivier.matz@6wind.com>
> >>Subject: [EXT] Re: [dpdk-dev] [PATCH v3 25/27] mempool/octeontx2:
> >>add optimized dequeue operation for arm64
> >>
> >>> From: Pavan Nikhilesh <pbhagavatula@marvell.com>
> >>>
> >>> This patch adds an optimized arm64 instruction based routine to
> >>> leverage the CPU pipeline characteristics of octeontx2. The idea is
> >>> to fill the pipeline with as many CASP operations as the HW can
> >>> handle, so that the HW can execute alloc() ops at full throttle.
> >>>
> >>> Cc: Olivier Matz <olivier.matz@6wind.com>
> >>> Cc: Aaron Conole <aconole@redhat.com>
> >>>
> >>> Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
> >>> Signed-off-by: Jerin Jacob <jerinj@marvell.com>
> >>> Signed-off-by: Vamsi Attunuru <vattunuru@marvell.com>
> >>> ---
> >>> drivers/mempool/octeontx2/otx2_mempool_ops.c | 291 +++++++++++++++++++
> >>> 1 file changed, 291 insertions(+)
> >>>
> >>> diff --git a/drivers/mempool/octeontx2/otx2_mempool_ops.c b/drivers/mempool/octeontx2/otx2_mempool_ops.c
> >>> index c59bd73c0..e6737abda 100644
> >>> --- a/drivers/mempool/octeontx2/otx2_mempool_ops.c
> >>> +++ b/drivers/mempool/octeontx2/otx2_mempool_ops.c
> >>> @@ -37,6 +37,293 @@ npa_lf_aura_op_alloc_one(const int64_t wdata, int64_t * const addr,
> >>> return -ENOENT;
> >>> }
> >>>
> >>> +#if defined(RTE_ARCH_ARM64)
> >>> +static __rte_noinline int
> >>> +npa_lf_aura_op_search_alloc(const int64_t wdata, int64_t * const addr,
> >>> + void **obj_table, unsigned int n)
> >>> +{
> >>> + uint8_t i;
> >>> +
> >>> + for (i = 0; i < n; i++) {
> >>> + if (obj_table[i] != NULL)
> >>> + continue;
> >>> + if (npa_lf_aura_op_alloc_one(wdata, addr, obj_table, i))
> >>> + return -ENOENT;
> >>> + }
> >>> +
> >>> + return 0;
> >>> +}
> >>> +
> >>> +static __attribute__((optimize("-O3"))) __rte_noinline int __hot
> >>
> >>Sorry if I missed this before.
> >>
> >>Is there a good reason to hard-code this optimization, rather than let
> >>the build system provide it?
> >
> > Some compiler versions don't have proper support for __int128_t
> > operands in CASP inline asm, i.e. if the optimization level is reduced
> > to -O0, the CASP restrictions aren't followed and the compiler might
> > end up violating the CASP rules, for example:
> >
> > /tmp/ccSPMGzq.s:1648: Error: reg pair must start from even reg at
> > operand 1 - `casp x21,x22,x0,x1,[x19]'
> > /tmp/ccSPMGzq.s:1706: Error: reg pair must start from even reg at
> > operand 1 - `casp x13,x14,x0,x1,[x11]'
> > /tmp/ccSPMGzq.s:1745: Error: reg pair must start from even reg at
> > operand 1 - `casp x9,x10,x0,x1,[x7]'
> > /tmp/ccSPMGzq.s:1775: Error: reg pair must start from even reg at
> > operand 1 - `casp x7,x8,x0,x1,[x5]'
> >
> > Forcing -O3 with __rte_noinline in place fixes it, as the register
> > allocation then satisfies the even-register-pair requirement.
>
> It makes sense to document this - it isn't apparent that it is needed.
> It would be good to put a comment just before that explains it, preferably
> with the compilers that aren't behaving. This would help in the future to
> determine when it would be safe to drop the flag.
Yes. Will add the comment.
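For reference, the comment plus attribute agreed on above could look something like the following sketch. This is illustrative, not the committed code: the macro name OTX2_FORCE_O3 and the demo function are invented here, and the real function body is the CASP inline-asm block from the patch.

```c
#include <assert.h>

/*
 * Sketch (illustrative): force -O3 on the function holding the CASP
 * inline asm.  Some GCC versions allocate an odd-numbered starting
 * register for the __int128_t CASP operands at -O0, which the
 * assembler rejects ("reg pair must start from even reg").  Keeping
 * the function noinline and compiled at -O3 sidesteps that.  The
 * override can be dropped once the minimum supported compiler
 * honors the constraint at every optimization level.
 */
#if defined(__GNUC__) && !defined(__clang__)
#define OTX2_FORCE_O3 __attribute__((optimize("-O3")))
#else
#define OTX2_FORCE_O3 /* clang does not support the optimize attribute */
#endif

static OTX2_FORCE_O3 int
force_o3_demo(int x)
{
	/* Stand-in body; the real function contains the CASP asm. */
	return x * 2;
}
```

The attribute applies per function, so the rest of the file still builds at whatever level the build system selects.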
^ permalink raw reply [flat|nested] 3+ messages in thread
* [dpdk-dev] [PATCH v2 00/27] OCTEON TX2 common and mempool driver
@ 2019-06-01 1:48 jerinj
  2019-06-17 15:55 ` [dpdk-dev] [PATCH v3 " jerinj
  0 siblings, 1 reply; 3+ messages in thread
From: jerinj @ 2019-06-01 1:48 UTC (permalink / raw)
  To: dev; +Cc: Jerin Jacob

From: Jerin Jacob <jerinj@marvell.com>

This patch set adds initial driver support for the OCTEON TX2 SoC.
OCTEON TX2 is an armv8.2 SoC with integrated HW based mempool, ethdev,
cryptodev, compressdev, and eventdev devices.

This patch set adds the common driver and the mempool device driver for
the OCTEON TX2 SoC. There will be three more patch series in this
release to support the ethdev, eventdev and cryptodev devices.

More details about the OCTEON TX2 platform may be found in [PATCH 15/27]
"doc: add Marvell OCTEON TX2 platform guide" under the
doc/guides/platform/octeontx2.rst file.

This patch series is also available at
https://github.com/jerinjacobk/dpdk.git for quick download and review.

# Note on checkpatch issues

1) The mailbox prototype is the same as the Linux kernel's:
https://elixir.bootlin.com/linux/latest/source/drivers/net/ethernet/marvell/octeontx2/af/mbox.h#L123
In order to keep the base code intact, the mailbox prototype is
expressed as macros, which produces warnings.

2) There are a few warnings from tooling about new symbols expected to
be added in the EXPERIMENTAL section. Since these APIs will be called
only by octeontx2 client drivers and the prototypes are not exported to
the application, those warnings are not relevant. Discussion at
http://patches.dpdk.org/patch/53590/

v2:
1) Added CONFIG_RTE_MAX_VFIO_GROUPS for octeontx2 config in patch 1
2) Rebased to master to accommodate the latest renames such as
   ETHER_ADDR_LEN to RTE_ETHER_ADDR_LEN
3) Added pmd.raw.octeontx2.dpi log message in patch 5
4) Updated the platform guide with debugfs documentation in patch 15
5) Fixed the arm64 build issue in patch 25 (Aaron Conole)
   "mempool/octeontx2: add optimized dequeue operation for arm64"

Jerin Jacob (22):
  common/octeontx2: add build infrastructure and HW definition
  common/octeontx2: add IO handling APIs
  common/octeontx2: add mbox request and response definition
  common/octeontx2: add mailbox base support infra
  common/octeontx2: add runtime log infra
  common/octeontx2: add mailbox send and receive support
  common/octeontx2: introduce common device class
  common/octeontx2: introduce irq handling functions
  common/octeontx2: handle intra device operations
  common/octeontx2: add VF mailbox IRQ and msg handler
  doc: add Marvell OCTEON TX2 platform guide
  mempool/octeontx2: add build infra and device probe
  drivers: add init and fini on octeontx2 NPA object
  mempool/octeontx2: add NPA HW operations
  mempool/octeontx2: add NPA IRQ handler
  mempool/octeontx2: add context dump support
  mempool/octeontx2: add mempool alloc op
  mempool/octeontx2: add mempool free op
  mempool/octeontx2: add remaining slow path ops
  mempool/octeontx2: add fast path mempool ops
  mempool/octeontx2: add devargs for max pool selection
  doc: add Marvell OCTEON TX2 mempool documentation

Nithin Dabilpuram (4):
  common/octeontx2: add AF to PF mailbox IRQ and msg handlers
  common/octeontx2: add PF to VF mailbox IRQ and msg handlers
  common/octeontx2: add uplink message support
  common/octeontx2: add FLR IRQ handler

Pavan Nikhilesh (1):
  mempool/octeontx2: add optimized dequeue operation for arm64

 MAINTAINERS | 10 +
 config/common_base | 5 +
 config/defconfig_arm64-octeontx2-linuxapp-gcc | 3 +
 doc/guides/mempool/index.rst | 1 +
 doc/guides/mempool/octeontx2.rst | 90 +
 .../octeontx2_packet_flow_hw_accelerators.svg | 2804 +++++++++++++++++
 .../img/octeontx2_resource_virtualization.svg | 2418 ++++++++++++++
 doc/guides/platform/index.rst | 1 +
 doc/guides/platform/octeontx2.rst | 496 +++
 doc/guides/rel_notes/release_19_08.rst | 2 +
 drivers/common/Makefile | 5 +
 drivers/common/meson.build | 2 +-
 drivers/common/octeontx2/Makefile | 37 +
 drivers/common/octeontx2/hw/otx2_nix.h | 1376 ++++++++
 drivers/common/octeontx2/hw/otx2_npa.h | 305 ++
 drivers/common/octeontx2/hw/otx2_npc.h | 467 +++
 drivers/common/octeontx2/hw/otx2_rvu.h | 212 ++
 drivers/common/octeontx2/hw/otx2_sso.h | 209 ++
 drivers/common/octeontx2/hw/otx2_ssow.h | 56 +
 drivers/common/octeontx2/hw/otx2_tim.h | 34 +
 drivers/common/octeontx2/meson.build | 25 +
 drivers/common/octeontx2/otx2_common.c | 248 ++
 drivers/common/octeontx2/otx2_common.h | 121 +
 drivers/common/octeontx2/otx2_dev.c | 1052 +++++++
 drivers/common/octeontx2/otx2_dev.h | 97 +
 drivers/common/octeontx2/otx2_io_arm64.h | 95 +
 drivers/common/octeontx2/otx2_io_generic.h | 63 +
 drivers/common/octeontx2/otx2_irq.c | 254 ++
 drivers/common/octeontx2/otx2_irq.h | 25 +
 drivers/common/octeontx2/otx2_mbox.c | 416 +++
 drivers/common/octeontx2/otx2_mbox.h | 1483 +++++++++
 .../rte_common_octeontx2_version.map | 39 +
 drivers/mempool/Makefile | 1 +
 drivers/mempool/meson.build | 2 +-
 drivers/mempool/octeontx2/Makefile | 39 +
 drivers/mempool/octeontx2/meson.build | 23 +
 drivers/mempool/octeontx2/otx2_mempool.c | 438 +++
 drivers/mempool/octeontx2/otx2_mempool.h | 208 ++
 .../mempool/octeontx2/otx2_mempool_debug.c | 135 +
 drivers/mempool/octeontx2/otx2_mempool_irq.c | 308 ++
 drivers/mempool/octeontx2/otx2_mempool_ops.c | 760 +++++
 .../rte_mempool_octeontx2_version.map | 8 +
 mk/rte.app.mk | 6 +
 43 files changed, 14377 insertions(+), 2 deletions(-)
 create mode 100644 doc/guides/mempool/octeontx2.rst
 create mode 100644 doc/guides/platform/img/octeontx2_packet_flow_hw_accelerators.svg
 create mode 100644 doc/guides/platform/img/octeontx2_resource_virtualization.svg
 create mode 100644 doc/guides/platform/octeontx2.rst
 create mode 100644 drivers/common/octeontx2/Makefile
 create mode 100644 drivers/common/octeontx2/hw/otx2_nix.h
 create mode 100644 drivers/common/octeontx2/hw/otx2_npa.h
 create mode 100644 drivers/common/octeontx2/hw/otx2_npc.h
 create mode 100644 drivers/common/octeontx2/hw/otx2_rvu.h
 create mode 100644 drivers/common/octeontx2/hw/otx2_sso.h
 create mode 100644 drivers/common/octeontx2/hw/otx2_ssow.h
 create mode 100644 drivers/common/octeontx2/hw/otx2_tim.h
 create mode 100644 drivers/common/octeontx2/meson.build
 create mode 100644 drivers/common/octeontx2/otx2_common.c
 create mode 100644 drivers/common/octeontx2/otx2_common.h
 create mode 100644 drivers/common/octeontx2/otx2_dev.c
 create mode 100644 drivers/common/octeontx2/otx2_dev.h
 create mode 100644 drivers/common/octeontx2/otx2_io_arm64.h
 create mode 100644 drivers/common/octeontx2/otx2_io_generic.h
 create mode 100644 drivers/common/octeontx2/otx2_irq.c
 create mode 100644 drivers/common/octeontx2/otx2_irq.h
 create mode 100644 drivers/common/octeontx2/otx2_mbox.c
 create mode 100644 drivers/common/octeontx2/otx2_mbox.h
 create mode 100644 drivers/common/octeontx2/rte_common_octeontx2_version.map
 create mode 100644 drivers/mempool/octeontx2/Makefile
 create mode 100644 drivers/mempool/octeontx2/meson.build
 create mode 100644 drivers/mempool/octeontx2/otx2_mempool.c
 create mode 100644 drivers/mempool/octeontx2/otx2_mempool.h
 create mode 100644 drivers/mempool/octeontx2/otx2_mempool_debug.c
 create mode 100644 drivers/mempool/octeontx2/otx2_mempool_irq.c
 create mode 100644 drivers/mempool/octeontx2/otx2_mempool_ops.c
 create mode 100644 drivers/mempool/octeontx2/rte_mempool_octeontx2_version.map
-- 
2.21.0

^ permalink raw reply	[flat|nested] 3+ messages in thread
* [dpdk-dev] [PATCH v3 00/27] OCTEON TX2 common and mempool driver
  2019-06-01 1:48 [dpdk-dev] [PATCH v2 00/27] OCTEON TX2 common and mempool driver jerinj
@ 2019-06-17 15:55 ` jerinj
  2019-06-17 15:55   ` [dpdk-dev] [PATCH v3 25/27] mempool/octeontx2: add optimized dequeue operation for arm64 jerinj
  0 siblings, 1 reply; 3+ messages in thread
From: jerinj @ 2019-06-17 15:55 UTC (permalink / raw)
  To: dev; +Cc: Jerin Jacob, Thomas Monjalon

From: Jerin Jacob <jerinj@marvell.com>

This patch set adds initial driver support for the OCTEON TX2 SoC.
OCTEON TX2 is an armv8.2 SoC with integrated HW based mempool, ethdev,
cryptodev, compressdev, and eventdev devices.

This patch set adds the common driver and the mempool device driver for
the OCTEON TX2 SoC. There will be three more patch series in this
release to support the ethdev, eventdev and cryptodev devices.

More details about the OCTEON TX2 platform may be found in [PATCH 15/27]
"doc: add Marvell OCTEON TX2 platform guide" under the
doc/guides/platform/octeontx2.rst file.

This patch series is also available at
https://github.com/jerinjacobk/dpdk.git for quick download and review.

# Note on checkpatch issues

1) The mailbox prototype is the same as the Linux kernel's:
https://elixir.bootlin.com/linux/latest/source/drivers/net/ethernet/marvell/octeontx2/af/mbox.h#L123
In order to keep the base code intact, the mailbox prototype is
expressed as macros, which produces warnings.

2) There are a few warnings from tooling about new symbols expected to
be added in the EXPERIMENTAL section. Since these APIs will be called
only by octeontx2 client drivers and the prototypes are not exported to
the application, those warnings are not relevant. Discussion at
http://patches.dpdk.org/patch/53590/

v3:
1) Replaced the v19.05 references with v19.08
2) Removed rte_panic from driver code
3) Rebased to dpdk.org master (as of 17-June)

v2:
1) Added CONFIG_RTE_MAX_VFIO_GROUPS for octeontx2 config in patch 1
2) Rebased to master to accommodate the latest renames such as
   ETHER_ADDR_LEN to RTE_ETHER_ADDR_LEN
3) Added pmd.raw.octeontx2.dpi log message in patch 5
4) Updated the platform guide with debugfs documentation in patch 15
5) Fixed the arm64 build issue in patch 25 (Aaron Conole)
   "mempool/octeontx2: add optimized dequeue operation for arm64"

Cc: Thomas Monjalon <thomas@monjalon.net>

Jerin Jacob (22):
  common/octeontx2: add build infrastructure and HW definition
  common/octeontx2: add IO handling APIs
  common/octeontx2: add mbox request and response definition
  common/octeontx2: add mailbox base support infra
  common/octeontx2: add runtime log infra
  common/octeontx2: add mailbox send and receive support
  common/octeontx2: introduce common device class
  common/octeontx2: introduce irq handling functions
  common/octeontx2: handle intra device operations
  common/octeontx2: add VF mailbox IRQ and msg handler
  doc: add Marvell OCTEON TX2 platform guide
  mempool/octeontx2: add build infra and device probe
  drivers: add init and fini on octeontx2 NPA object
  mempool/octeontx2: add NPA HW operations
  mempool/octeontx2: add NPA IRQ handler
  mempool/octeontx2: add context dump support
  mempool/octeontx2: add mempool alloc op
  mempool/octeontx2: add mempool free op
  mempool/octeontx2: add remaining slow path ops
  mempool/octeontx2: add fast path mempool ops
  mempool/octeontx2: add devargs for max pool selection
  doc: add Marvell OCTEON TX2 mempool documentation

Nithin Dabilpuram (4):
  common/octeontx2: add AF to PF mailbox IRQ and msg handlers
  common/octeontx2: add PF to VF mailbox IRQ and msg handlers
  common/octeontx2: add uplink message support
  common/octeontx2: add FLR IRQ handler

Pavan Nikhilesh (1):
  mempool/octeontx2: add optimized dequeue operation for arm64

 MAINTAINERS | 10 +
 config/common_base | 5 +
 config/defconfig_arm64-octeontx2-linuxapp-gcc | 3 +
 doc/guides/mempool/index.rst | 1 +
 doc/guides/mempool/octeontx2.rst | 90 +
 .../octeontx2_packet_flow_hw_accelerators.svg | 2804 +++++++++++++++++
 .../img/octeontx2_resource_virtualization.svg | 2418 ++++++++++++++
 doc/guides/platform/index.rst | 1 +
 doc/guides/platform/octeontx2.rst | 496 +++
 doc/guides/rel_notes/release_19_08.rst | 2 +
 drivers/common/Makefile | 5 +
 drivers/common/meson.build | 2 +-
 drivers/common/octeontx2/Makefile | 37 +
 drivers/common/octeontx2/hw/otx2_nix.h | 1379 ++++++++
 drivers/common/octeontx2/hw/otx2_npa.h | 305 ++
 drivers/common/octeontx2/hw/otx2_npc.h | 472 +++
 drivers/common/octeontx2/hw/otx2_rvu.h | 212 ++
 drivers/common/octeontx2/hw/otx2_sso.h | 209 ++
 drivers/common/octeontx2/hw/otx2_ssow.h | 56 +
 drivers/common/octeontx2/hw/otx2_tim.h | 34 +
 drivers/common/octeontx2/meson.build | 25 +
 drivers/common/octeontx2/otx2_common.c | 248 ++
 drivers/common/octeontx2/otx2_common.h | 121 +
 drivers/common/octeontx2/otx2_dev.c | 1052 +++++++
 drivers/common/octeontx2/otx2_dev.h | 97 +
 drivers/common/octeontx2/otx2_io_arm64.h | 95 +
 drivers/common/octeontx2/otx2_io_generic.h | 63 +
 drivers/common/octeontx2/otx2_irq.c | 254 ++
 drivers/common/octeontx2/otx2_irq.h | 25 +
 drivers/common/octeontx2/otx2_mbox.c | 416 +++
 drivers/common/octeontx2/otx2_mbox.h | 1483 +++++++++
 .../rte_common_octeontx2_version.map | 39 +
 drivers/mempool/Makefile | 1 +
 drivers/mempool/meson.build | 2 +-
 drivers/mempool/octeontx2/Makefile | 39 +
 drivers/mempool/octeontx2/meson.build | 23 +
 drivers/mempool/octeontx2/otx2_mempool.c | 438 +++
 drivers/mempool/octeontx2/otx2_mempool.h | 208 ++
 .../mempool/octeontx2/otx2_mempool_debug.c | 135 +
 drivers/mempool/octeontx2/otx2_mempool_irq.c | 303 ++
 drivers/mempool/octeontx2/otx2_mempool_ops.c | 760 +++++
 .../rte_mempool_octeontx2_version.map | 8 +
 mk/rte.app.mk | 6 +
 43 files changed, 14380 insertions(+), 2 deletions(-)
 create mode 100644 doc/guides/mempool/octeontx2.rst
 create mode 100644 doc/guides/platform/img/octeontx2_packet_flow_hw_accelerators.svg
 create mode 100644 doc/guides/platform/img/octeontx2_resource_virtualization.svg
 create mode 100644 doc/guides/platform/octeontx2.rst
 create mode 100644 drivers/common/octeontx2/Makefile
 create mode 100644 drivers/common/octeontx2/hw/otx2_nix.h
 create mode 100644 drivers/common/octeontx2/hw/otx2_npa.h
 create mode 100644 drivers/common/octeontx2/hw/otx2_npc.h
 create mode 100644 drivers/common/octeontx2/hw/otx2_rvu.h
 create mode 100644 drivers/common/octeontx2/hw/otx2_sso.h
 create mode 100644 drivers/common/octeontx2/hw/otx2_ssow.h
 create mode 100644 drivers/common/octeontx2/hw/otx2_tim.h
 create mode 100644 drivers/common/octeontx2/meson.build
 create mode 100644 drivers/common/octeontx2/otx2_common.c
 create mode 100644 drivers/common/octeontx2/otx2_common.h
 create mode 100644 drivers/common/octeontx2/otx2_dev.c
 create mode 100644 drivers/common/octeontx2/otx2_dev.h
 create mode 100644 drivers/common/octeontx2/otx2_io_arm64.h
 create mode 100644 drivers/common/octeontx2/otx2_io_generic.h
 create mode 100644 drivers/common/octeontx2/otx2_irq.c
 create mode 100644 drivers/common/octeontx2/otx2_irq.h
 create mode 100644 drivers/common/octeontx2/otx2_mbox.c
 create mode 100644 drivers/common/octeontx2/otx2_mbox.h
 create mode 100644 drivers/common/octeontx2/rte_common_octeontx2_version.map
 create mode 100644 drivers/mempool/octeontx2/Makefile
 create mode 100644 drivers/mempool/octeontx2/meson.build
 create mode 100644 drivers/mempool/octeontx2/otx2_mempool.c
 create mode 100644 drivers/mempool/octeontx2/otx2_mempool.h
 create mode 100644 drivers/mempool/octeontx2/otx2_mempool_debug.c
 create mode 100644 drivers/mempool/octeontx2/otx2_mempool_irq.c
 create mode 100644 drivers/mempool/octeontx2/otx2_mempool_ops.c
 create mode 100644 drivers/mempool/octeontx2/rte_mempool_octeontx2_version.map
-- 
2.21.0

^ permalink raw reply	[flat|nested] 3+ messages in thread
* [dpdk-dev] [PATCH v3 25/27] mempool/octeontx2: add optimized dequeue operation for arm64
  2019-06-17 15:55 ` [dpdk-dev] [PATCH v3 " jerinj
@ 2019-06-17 15:55   ` jerinj
  2019-06-17 21:25     ` Aaron Conole
  0 siblings, 1 reply; 3+ messages in thread
From: jerinj @ 2019-06-17 15:55 UTC (permalink / raw)
  To: dev, Jerin Jacob, Nithin Dabilpuram, Vamsi Attunuru
  Cc: Pavan Nikhilesh, Olivier Matz, Aaron Conole

From: Pavan Nikhilesh <pbhagavatula@marvell.com>

This patch adds an optimized arm64 instruction based routine to
leverage the CPU pipeline characteristics of octeontx2. The idea is to
fill the pipeline with as many CASP operations as the HW can handle, so
that the HW can execute alloc() ops at full throttle.

Cc: Olivier Matz <olivier.matz@6wind.com>
Cc: Aaron Conole <aconole@redhat.com>

Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
Signed-off-by: Jerin Jacob <jerinj@marvell.com>
Signed-off-by: Vamsi Attunuru <vattunuru@marvell.com>
---
 drivers/mempool/octeontx2/otx2_mempool_ops.c | 291 +++++++++++++++++++
 1 file changed, 291 insertions(+)

diff --git a/drivers/mempool/octeontx2/otx2_mempool_ops.c b/drivers/mempool/octeontx2/otx2_mempool_ops.c
index c59bd73c0..e6737abda 100644
--- a/drivers/mempool/octeontx2/otx2_mempool_ops.c
+++ b/drivers/mempool/octeontx2/otx2_mempool_ops.c
@@ -37,6 +37,293 @@ npa_lf_aura_op_alloc_one(const int64_t wdata, int64_t * const addr,
 	return -ENOENT;
 }
 
+#if defined(RTE_ARCH_ARM64)
+static __rte_noinline int
+npa_lf_aura_op_search_alloc(const int64_t wdata, int64_t * const addr,
+			void **obj_table, unsigned int n)
+{
+	uint8_t i;
+
+	for (i = 0; i < n; i++) {
+		if (obj_table[i] != NULL)
+			continue;
+		if (npa_lf_aura_op_alloc_one(wdata, addr, obj_table, i))
+			return -ENOENT;
+	}
+
+	return 0;
+}
+
+static __attribute__((optimize("-O3"))) __rte_noinline int __hot
+npa_lf_aura_op_alloc_bulk(const int64_t wdata, int64_t * const addr,
+			unsigned int n, void **obj_table)
+{
+	const __uint128_t wdata128 = ((__uint128_t)wdata << 64) | wdata;
+	uint64x2_t failed =
vdupq_n_u64(~0); + + switch (n) { + case 32: + { + __uint128_t t0, t1, t2, t3, t4, t5, t6, t7, t8, t9; + __uint128_t t10, t11; + + asm volatile ( + ".cpu generic+lse\n" + "casp %[t0], %H[t0], %[wdata], %H[wdata], [%[loc]]\n" + "casp %[t1], %H[t1], %[wdata], %H[wdata], [%[loc]]\n" + "casp %[t2], %H[t2], %[wdata], %H[wdata], [%[loc]]\n" + "casp %[t3], %H[t3], %[wdata], %H[wdata], [%[loc]]\n" + "casp %[t4], %H[t4], %[wdata], %H[wdata], [%[loc]]\n" + "casp %[t5], %H[t5], %[wdata], %H[wdata], [%[loc]]\n" + "casp %[t6], %H[t6], %[wdata], %H[wdata], [%[loc]]\n" + "casp %[t7], %H[t7], %[wdata], %H[wdata], [%[loc]]\n" + "casp %[t8], %H[t8], %[wdata], %H[wdata], [%[loc]]\n" + "casp %[t9], %H[t9], %[wdata], %H[wdata], [%[loc]]\n" + "casp %[t10], %H[t10], %[wdata], %H[wdata], [%[loc]]\n" + "casp %[t11], %H[t11], %[wdata], %H[wdata], [%[loc]]\n" + "fmov d16, %[t0]\n" + "fmov v16.D[1], %H[t0]\n" + "casp %[t0], %H[t0], %[wdata], %H[wdata], [%[loc]]\n" + "fmov d17, %[t1]\n" + "fmov v17.D[1], %H[t1]\n" + "casp %[t1], %H[t1], %[wdata], %H[wdata], [%[loc]]\n" + "fmov d18, %[t2]\n" + "fmov v18.D[1], %H[t2]\n" + "casp %[t2], %H[t2], %[wdata], %H[wdata], [%[loc]]\n" + "fmov d19, %[t3]\n" + "fmov v19.D[1], %H[t3]\n" + "casp %[t3], %H[t3], %[wdata], %H[wdata], [%[loc]]\n" + "and %[failed].16B, %[failed].16B, v16.16B\n" + "and %[failed].16B, %[failed].16B, v17.16B\n" + "and %[failed].16B, %[failed].16B, v18.16B\n" + "and %[failed].16B, %[failed].16B, v19.16B\n" + "fmov d20, %[t4]\n" + "fmov v20.D[1], %H[t4]\n" + "fmov d21, %[t5]\n" + "fmov v21.D[1], %H[t5]\n" + "fmov d22, %[t6]\n" + "fmov v22.D[1], %H[t6]\n" + "fmov d23, %[t7]\n" + "fmov v23.D[1], %H[t7]\n" + "and %[failed].16B, %[failed].16B, v20.16B\n" + "and %[failed].16B, %[failed].16B, v21.16B\n" + "and %[failed].16B, %[failed].16B, v22.16B\n" + "and %[failed].16B, %[failed].16B, v23.16B\n" + "st1 { v16.2d, v17.2d, v18.2d, v19.2d}, [%[dst]], 64\n" + "st1 { v20.2d, v21.2d, v22.2d, v23.2d}, [%[dst]], 64\n" + "fmov d16, %[t8]\n" + "fmov 
v16.D[1], %H[t8]\n" + "fmov d17, %[t9]\n" + "fmov v17.D[1], %H[t9]\n" + "fmov d18, %[t10]\n" + "fmov v18.D[1], %H[t10]\n" + "fmov d19, %[t11]\n" + "fmov v19.D[1], %H[t11]\n" + "and %[failed].16B, %[failed].16B, v16.16B\n" + "and %[failed].16B, %[failed].16B, v17.16B\n" + "and %[failed].16B, %[failed].16B, v18.16B\n" + "and %[failed].16B, %[failed].16B, v19.16B\n" + "fmov d20, %[t0]\n" + "fmov v20.D[1], %H[t0]\n" + "fmov d21, %[t1]\n" + "fmov v21.D[1], %H[t1]\n" + "fmov d22, %[t2]\n" + "fmov v22.D[1], %H[t2]\n" + "fmov d23, %[t3]\n" + "fmov v23.D[1], %H[t3]\n" + "and %[failed].16B, %[failed].16B, v20.16B\n" + "and %[failed].16B, %[failed].16B, v21.16B\n" + "and %[failed].16B, %[failed].16B, v22.16B\n" + "and %[failed].16B, %[failed].16B, v23.16B\n" + "st1 { v16.2d, v17.2d, v18.2d, v19.2d}, [%[dst]], 64\n" + "st1 { v20.2d, v21.2d, v22.2d, v23.2d}, [%[dst]], 64\n" + : "+Q" (*addr), [failed] "=&w" (failed), + [t0] "=&r" (t0), [t1] "=&r" (t1), [t2] "=&r" (t2), + [t3] "=&r" (t3), [t4] "=&r" (t4), [t5] "=&r" (t5), + [t6] "=&r" (t6), [t7] "=&r" (t7), [t8] "=&r" (t8), + [t9] "=&r" (t9), [t10] "=&r" (t10), [t11] "=&r" (t11) + : [wdata] "r" (wdata128), [dst] "r" (obj_table), + [loc] "r" (addr) + : "memory", "v16", "v17", "v18", + "v19", "v20", "v21", "v22", "v23" + ); + break; + } + case 16: + { + __uint128_t t0, t1, t2, t3, t4, t5, t6, t7; + + asm volatile ( + ".cpu generic+lse\n" + "casp %[t0], %H[t0], %[wdata], %H[wdata], [%[loc]]\n" + "casp %[t1], %H[t1], %[wdata], %H[wdata], [%[loc]]\n" + "casp %[t2], %H[t2], %[wdata], %H[wdata], [%[loc]]\n" + "casp %[t3], %H[t3], %[wdata], %H[wdata], [%[loc]]\n" + "casp %[t4], %H[t4], %[wdata], %H[wdata], [%[loc]]\n" + "casp %[t5], %H[t5], %[wdata], %H[wdata], [%[loc]]\n" + "casp %[t6], %H[t6], %[wdata], %H[wdata], [%[loc]]\n" + "casp %[t7], %H[t7], %[wdata], %H[wdata], [%[loc]]\n" + "fmov d16, %[t0]\n" + "fmov v16.D[1], %H[t0]\n" + "fmov d17, %[t1]\n" + "fmov v17.D[1], %H[t1]\n" + "fmov d18, %[t2]\n" + "fmov v18.D[1], %H[t2]\n" + "fmov 
d19, %[t3]\n" + "fmov v19.D[1], %H[t3]\n" + "and %[failed].16B, %[failed].16B, v16.16B\n" + "and %[failed].16B, %[failed].16B, v17.16B\n" + "and %[failed].16B, %[failed].16B, v18.16B\n" + "and %[failed].16B, %[failed].16B, v19.16B\n" + "fmov d20, %[t4]\n" + "fmov v20.D[1], %H[t4]\n" + "fmov d21, %[t5]\n" + "fmov v21.D[1], %H[t5]\n" + "fmov d22, %[t6]\n" + "fmov v22.D[1], %H[t6]\n" + "fmov d23, %[t7]\n" + "fmov v23.D[1], %H[t7]\n" + "and %[failed].16B, %[failed].16B, v20.16B\n" + "and %[failed].16B, %[failed].16B, v21.16B\n" + "and %[failed].16B, %[failed].16B, v22.16B\n" + "and %[failed].16B, %[failed].16B, v23.16B\n" + "st1 { v16.2d, v17.2d, v18.2d, v19.2d}, [%[dst]], 64\n" + "st1 { v20.2d, v21.2d, v22.2d, v23.2d}, [%[dst]], 64\n" + : "+Q" (*addr), [failed] "=&w" (failed), + [t0] "=&r" (t0), [t1] "=&r" (t1), [t2] "=&r" (t2), + [t3] "=&r" (t3), [t4] "=&r" (t4), [t5] "=&r" (t5), + [t6] "=&r" (t6), [t7] "=&r" (t7) + : [wdata] "r" (wdata128), [dst] "r" (obj_table), + [loc] "r" (addr) + : "memory", "v16", "v17", "v18", "v19", + "v20", "v21", "v22", "v23" + ); + break; + } + case 8: + { + __uint128_t t0, t1, t2, t3; + + asm volatile ( + ".cpu generic+lse\n" + "casp %[t0], %H[t0], %[wdata], %H[wdata], [%[loc]]\n" + "casp %[t1], %H[t1], %[wdata], %H[wdata], [%[loc]]\n" + "casp %[t2], %H[t2], %[wdata], %H[wdata], [%[loc]]\n" + "casp %[t3], %H[t3], %[wdata], %H[wdata], [%[loc]]\n" + "fmov d16, %[t0]\n" + "fmov v16.D[1], %H[t0]\n" + "fmov d17, %[t1]\n" + "fmov v17.D[1], %H[t1]\n" + "fmov d18, %[t2]\n" + "fmov v18.D[1], %H[t2]\n" + "fmov d19, %[t3]\n" + "fmov v19.D[1], %H[t3]\n" + "and %[failed].16B, %[failed].16B, v16.16B\n" + "and %[failed].16B, %[failed].16B, v17.16B\n" + "and %[failed].16B, %[failed].16B, v18.16B\n" + "and %[failed].16B, %[failed].16B, v19.16B\n" + "st1 { v16.2d, v17.2d, v18.2d, v19.2d}, [%[dst]], 64\n" + : "+Q" (*addr), [failed] "=&w" (failed), + [t0] "=&r" (t0), [t1] "=&r" (t1), [t2] "=&r" (t2), + [t3] "=&r" (t3) + : [wdata] "r" (wdata128), [dst] "r" 
(obj_table), + [loc] "r" (addr) + : "memory", "v16", "v17", "v18", "v19" + ); + break; + } + case 4: + { + __uint128_t t0, t1; + + asm volatile ( + ".cpu generic+lse\n" + "casp %[t0], %H[t0], %[wdata], %H[wdata], [%[loc]]\n" + "casp %[t1], %H[t1], %[wdata], %H[wdata], [%[loc]]\n" + "fmov d16, %[t0]\n" + "fmov v16.D[1], %H[t0]\n" + "fmov d17, %[t1]\n" + "fmov v17.D[1], %H[t1]\n" + "and %[failed].16B, %[failed].16B, v16.16B\n" + "and %[failed].16B, %[failed].16B, v17.16B\n" + "st1 { v16.2d, v17.2d}, [%[dst]], 32\n" + : "+Q" (*addr), [failed] "=&w" (failed), + [t0] "=&r" (t0), [t1] "=&r" (t1) + : [wdata] "r" (wdata128), [dst] "r" (obj_table), + [loc] "r" (addr) + : "memory", "v16", "v17" + ); + break; + } + case 2: + { + __uint128_t t0; + + asm volatile ( + ".cpu generic+lse\n" + "casp %[t0], %H[t0], %[wdata], %H[wdata], [%[loc]]\n" + "fmov d16, %[t0]\n" + "fmov v16.D[1], %H[t0]\n" + "and %[failed].16B, %[failed].16B, v16.16B\n" + "st1 { v16.2d}, [%[dst]], 16\n" + : "+Q" (*addr), [failed] "=&w" (failed), + [t0] "=&r" (t0) + : [wdata] "r" (wdata128), [dst] "r" (obj_table), + [loc] "r" (addr) + : "memory", "v16" + ); + break; + } + case 1: + return npa_lf_aura_op_alloc_one(wdata, addr, obj_table, 0); + } + + if (unlikely(!(vgetq_lane_u64(failed, 0) & vgetq_lane_u64(failed, 1)))) + return npa_lf_aura_op_search_alloc(wdata, addr, (void **) + ((char *)obj_table - (sizeof(uint64_t) * n)), n); + + return 0; +} + +static __rte_noinline void +otx2_npa_clear_alloc(struct rte_mempool *mp, void **obj_table, unsigned int n) +{ + unsigned int i; + + for (i = 0; i < n; i++) { + if (obj_table[i] != NULL) { + otx2_npa_enq(mp, &obj_table[i], 1); + obj_table[i] = NULL; + } + } +} + +static inline int __hot +otx2_npa_deq_arm64(struct rte_mempool *mp, void **obj_table, unsigned int n) +{ + const int64_t wdata = npa_lf_aura_handle_to_aura(mp->pool_id); + void **obj_table_bak = obj_table; + const unsigned int nfree = n; + unsigned int parts; + + int64_t * const addr = (int64_t * const) + 
+		(npa_lf_aura_handle_to_base(mp->pool_id) +
+		NPA_LF_AURA_OP_ALLOCX(0));
+	while (n) {
+		parts = n > 31 ? 32 : rte_align32prevpow2(n);
+		n -= parts;
+		if (unlikely(npa_lf_aura_op_alloc_bulk(wdata, addr,
+				parts, obj_table))) {
+			otx2_npa_clear_alloc(mp, obj_table_bak, nfree - n);
+			return -ENOENT;
+		}
+		obj_table += parts;
+	}
+
+	return 0;
+}
+#endif
+
 static inline int __hot
 otx2_npa_deq(struct rte_mempool *mp, void **obj_table, unsigned int n)
 {
@@ -463,7 +750,11 @@ static struct rte_mempool_ops otx2_npa_ops = {
 	.get_count = otx2_npa_get_count,
 	.calc_mem_size = otx2_npa_calc_mem_size,
 	.populate = otx2_npa_populate,
+#if defined(RTE_ARCH_ARM64)
+	.dequeue = otx2_npa_deq_arm64,
+#else
 	.dequeue = otx2_npa_deq,
+#endif
 };
 
 MEMPOOL_REGISTER_OPS(otx2_npa_ops);
-- 
2.21.0

^ permalink raw reply	[flat|nested] 3+ messages in thread
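The dequeue loop in otx2_npa_deq_arm64 splits a request for n objects into bursts: 32 at a time while more than 31 objects remain, then the previous power of two of the remainder, so every burst maps onto one of the hand-unrolled CASP cases (32/16/8/4/2/1). A portable sketch of that splitting policy follows; prev_pow2 and split_bursts are illustrative names, with prev_pow2 standing in for DPDK's rte_align32prevpow2().

```c
#include <assert.h>

/* Largest power of two <= x, for x >= 1 (stands in for
 * rte_align32prevpow2(), which the driver uses). */
static unsigned int
prev_pow2(unsigned int x)
{
	unsigned int p = 1;

	while (p * 2 <= x)
		p *= 2;
	return p;
}

/* Split n into the burst sizes the bulk-alloc routine handles:
 * 32 while more than 31 objects remain, then the largest power
 * of two that still fits.  Writes the burst sizes to 'bursts'
 * and returns how many bursts were produced. */
static unsigned int
split_bursts(unsigned int n, unsigned int *bursts)
{
	unsigned int cnt = 0;

	while (n) {
		unsigned int parts = n > 31 ? 32 : prev_pow2(n);

		bursts[cnt++] = parts;
		n -= parts;
	}
	return cnt;
}
```

For example, a request of 45 objects is served as bursts of 32, 8, 4 and 1, each landing on one of the unrolled asm cases; on any burst failure the driver returns the already-allocated objects via otx2_npa_clear_alloc and reports -ENOENT.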
* Re: [dpdk-dev] [PATCH v3 25/27] mempool/octeontx2: add optimized dequeue operation for arm64
  2019-06-17 15:55 ` [dpdk-dev] [PATCH v3 25/27] mempool/octeontx2: add optimized dequeue operation for arm64 jerinj
@ 2019-06-17 21:25   ` Aaron Conole
  0 siblings, 0 replies; 3+ messages in thread
From: Aaron Conole @ 2019-06-17 21:25 UTC (permalink / raw)
  To: jerinj
  Cc: dev, Nithin Dabilpuram, Vamsi Attunuru, Pavan Nikhilesh, Olivier Matz

<jerinj@marvell.com> writes:

> From: Pavan Nikhilesh <pbhagavatula@marvell.com>
>
> This patch adds an optimized arm64 instruction based routine to
> leverage the CPU pipeline characteristics of octeontx2. The idea is to
> fill the pipeline with as many CASP operations as the HW can handle,
> so that the HW can execute alloc() ops at full throttle.
>
> Cc: Olivier Matz <olivier.matz@6wind.com>
> Cc: Aaron Conole <aconole@redhat.com>
>
> Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
> Signed-off-by: Jerin Jacob <jerinj@marvell.com>
> Signed-off-by: Vamsi Attunuru <vattunuru@marvell.com>
> ---
>  drivers/mempool/octeontx2/otx2_mempool_ops.c | 291 +++++++++++++++++++
>  1 file changed, 291 insertions(+)
>
> diff --git a/drivers/mempool/octeontx2/otx2_mempool_ops.c b/drivers/mempool/octeontx2/otx2_mempool_ops.c
> index c59bd73c0..e6737abda 100644
> --- a/drivers/mempool/octeontx2/otx2_mempool_ops.c
> +++ b/drivers/mempool/octeontx2/otx2_mempool_ops.c
> @@ -37,6 +37,293 @@ npa_lf_aura_op_alloc_one(const int64_t wdata, int64_t * const addr,
>  	return -ENOENT;
>  }
>
> +#if defined(RTE_ARCH_ARM64)
> +static __rte_noinline int
> +npa_lf_aura_op_search_alloc(const int64_t wdata, int64_t * const addr,
> +			void **obj_table, unsigned int n)
> +{
> +	uint8_t i;
> +
> +	for (i = 0; i < n; i++) {
> +		if (obj_table[i] != NULL)
> +			continue;
> +		if (npa_lf_aura_op_alloc_one(wdata, addr, obj_table, i))
> +			return -ENOENT;
> +	}
> +
> +	return 0;
> +}
> +
> +static __attribute__((optimize("-O3"))) __rte_noinline int __hot

Sorry if I missed this before.

Is there a good reason to hard-code this optimization, rather than let
the build system provide it?

[...]
[%[loc]]\n" > + "casp %[t2], %H[t2], %[wdata], %H[wdata], [%[loc]]\n" > + "casp %[t3], %H[t3], %[wdata], %H[wdata], [%[loc]]\n" > + "casp %[t4], %H[t4], %[wdata], %H[wdata], [%[loc]]\n" > + "casp %[t5], %H[t5], %[wdata], %H[wdata], [%[loc]]\n" > + "casp %[t6], %H[t6], %[wdata], %H[wdata], [%[loc]]\n" > + "casp %[t7], %H[t7], %[wdata], %H[wdata], [%[loc]]\n" > + "fmov d16, %[t0]\n" > + "fmov v16.D[1], %H[t0]\n" > + "fmov d17, %[t1]\n" > + "fmov v17.D[1], %H[t1]\n" > + "fmov d18, %[t2]\n" > + "fmov v18.D[1], %H[t2]\n" > + "fmov d19, %[t3]\n" > + "fmov v19.D[1], %H[t3]\n" > + "and %[failed].16B, %[failed].16B, v16.16B\n" > + "and %[failed].16B, %[failed].16B, v17.16B\n" > + "and %[failed].16B, %[failed].16B, v18.16B\n" > + "and %[failed].16B, %[failed].16B, v19.16B\n" > + "fmov d20, %[t4]\n" > + "fmov v20.D[1], %H[t4]\n" > + "fmov d21, %[t5]\n" > + "fmov v21.D[1], %H[t5]\n" > + "fmov d22, %[t6]\n" > + "fmov v22.D[1], %H[t6]\n" > + "fmov d23, %[t7]\n" > + "fmov v23.D[1], %H[t7]\n" > + "and %[failed].16B, %[failed].16B, v20.16B\n" > + "and %[failed].16B, %[failed].16B, v21.16B\n" > + "and %[failed].16B, %[failed].16B, v22.16B\n" > + "and %[failed].16B, %[failed].16B, v23.16B\n" > + "st1 { v16.2d, v17.2d, v18.2d, v19.2d}, [%[dst]], 64\n" > + "st1 { v20.2d, v21.2d, v22.2d, v23.2d}, [%[dst]], 64\n" > + : "+Q" (*addr), [failed] "=&w" (failed), > + [t0] "=&r" (t0), [t1] "=&r" (t1), [t2] "=&r" (t2), > + [t3] "=&r" (t3), [t4] "=&r" (t4), [t5] "=&r" (t5), > + [t6] "=&r" (t6), [t7] "=&r" (t7) > + : [wdata] "r" (wdata128), [dst] "r" (obj_table), > + [loc] "r" (addr) > + : "memory", "v16", "v17", "v18", "v19", > + "v20", "v21", "v22", "v23" > + ); > + break; > + } > + case 8: > + { > + __uint128_t t0, t1, t2, t3; > + > + asm volatile ( > + ".cpu generic+lse\n" > + "casp %[t0], %H[t0], %[wdata], %H[wdata], [%[loc]]\n" > + "casp %[t1], %H[t1], %[wdata], %H[wdata], [%[loc]]\n" > + "casp %[t2], %H[t2], %[wdata], %H[wdata], [%[loc]]\n" > + "casp %[t3], %H[t3], %[wdata], %H[wdata], 
[%[loc]]\n" > + "fmov d16, %[t0]\n" > + "fmov v16.D[1], %H[t0]\n" > + "fmov d17, %[t1]\n" > + "fmov v17.D[1], %H[t1]\n" > + "fmov d18, %[t2]\n" > + "fmov v18.D[1], %H[t2]\n" > + "fmov d19, %[t3]\n" > + "fmov v19.D[1], %H[t3]\n" > + "and %[failed].16B, %[failed].16B, v16.16B\n" > + "and %[failed].16B, %[failed].16B, v17.16B\n" > + "and %[failed].16B, %[failed].16B, v18.16B\n" > + "and %[failed].16B, %[failed].16B, v19.16B\n" > + "st1 { v16.2d, v17.2d, v18.2d, v19.2d}, [%[dst]], 64\n" > + : "+Q" (*addr), [failed] "=&w" (failed), > + [t0] "=&r" (t0), [t1] "=&r" (t1), [t2] "=&r" (t2), > + [t3] "=&r" (t3) > + : [wdata] "r" (wdata128), [dst] "r" (obj_table), > + [loc] "r" (addr) > + : "memory", "v16", "v17", "v18", "v19" > + ); > + break; > + } > + case 4: > + { > + __uint128_t t0, t1; > + > + asm volatile ( > + ".cpu generic+lse\n" > + "casp %[t0], %H[t0], %[wdata], %H[wdata], [%[loc]]\n" > + "casp %[t1], %H[t1], %[wdata], %H[wdata], [%[loc]]\n" > + "fmov d16, %[t0]\n" > + "fmov v16.D[1], %H[t0]\n" > + "fmov d17, %[t1]\n" > + "fmov v17.D[1], %H[t1]\n" > + "and %[failed].16B, %[failed].16B, v16.16B\n" > + "and %[failed].16B, %[failed].16B, v17.16B\n" > + "st1 { v16.2d, v17.2d}, [%[dst]], 32\n" > + : "+Q" (*addr), [failed] "=&w" (failed), > + [t0] "=&r" (t0), [t1] "=&r" (t1) > + : [wdata] "r" (wdata128), [dst] "r" (obj_table), > + [loc] "r" (addr) > + : "memory", "v16", "v17" > + ); > + break; > + } > + case 2: > + { > + __uint128_t t0; > + > + asm volatile ( > + ".cpu generic+lse\n" > + "casp %[t0], %H[t0], %[wdata], %H[wdata], [%[loc]]\n" > + "fmov d16, %[t0]\n" > + "fmov v16.D[1], %H[t0]\n" > + "and %[failed].16B, %[failed].16B, v16.16B\n" > + "st1 { v16.2d}, [%[dst]], 16\n" > + : "+Q" (*addr), [failed] "=&w" (failed), > + [t0] "=&r" (t0) > + : [wdata] "r" (wdata128), [dst] "r" (obj_table), > + [loc] "r" (addr) > + : "memory", "v16" > + ); > + break; > + } > + case 1: > + return npa_lf_aura_op_alloc_one(wdata, addr, obj_table, 0); > + } > + > + if 
(unlikely(!(vgetq_lane_u64(failed, 0) & vgetq_lane_u64(failed, 1)))) > + return npa_lf_aura_op_search_alloc(wdata, addr, (void **) > + ((char *)obj_table - (sizeof(uint64_t) * n)), n); > + > + return 0; > +} > + > +static __rte_noinline void > +otx2_npa_clear_alloc(struct rte_mempool *mp, void **obj_table, unsigned int n) > +{ > + unsigned int i; > + > + for (i = 0; i < n; i++) { > + if (obj_table[i] != NULL) { > + otx2_npa_enq(mp, &obj_table[i], 1); > + obj_table[i] = NULL; > + } > + } > +} > + > +static inline int __hot > +otx2_npa_deq_arm64(struct rte_mempool *mp, void **obj_table, unsigned int n) > +{ > + const int64_t wdata = npa_lf_aura_handle_to_aura(mp->pool_id); > + void **obj_table_bak = obj_table; > + const unsigned int nfree = n; > + unsigned int parts; > + > + int64_t * const addr = (int64_t * const) > + (npa_lf_aura_handle_to_base(mp->pool_id) + > + NPA_LF_AURA_OP_ALLOCX(0)); > + while (n) { > + parts = n > 31 ? 32 : rte_align32prevpow2(n); > + n -= parts; > + if (unlikely(npa_lf_aura_op_alloc_bulk(wdata, addr, > + parts, obj_table))) { > + otx2_npa_clear_alloc(mp, obj_table_bak, nfree - n); > + return -ENOENT; > + } > + obj_table += parts; > + } > + > + return 0; > +} > +#endif > + > static inline int __hot > otx2_npa_deq(struct rte_mempool *mp, void **obj_table, unsigned int n) > { > @@ -463,7 +750,11 @@ static struct rte_mempool_ops otx2_npa_ops = { > .get_count = otx2_npa_get_count, > .calc_mem_size = otx2_npa_calc_mem_size, > .populate = otx2_npa_populate, > +#if defined(RTE_ARCH_ARM64) > + .dequeue = otx2_npa_deq_arm64, > +#else > .dequeue = otx2_npa_deq, > +#endif > }; > > MEMPOOL_REGISTER_OPS(otx2_npa_ops); ^ permalink raw reply [flat|nested] 3+ messages in thread
end of thread, other threads: [~2019-06-22 13:21 UTC | newest]

Thread overview: 3+ messages
2019-06-22 13:21 [dpdk-dev] [PATCH v3 25/27] mempool/octeontx2: add optimized dequeue operation for arm64 Jerin Jacob Kollanukkaran
-- strict thread matches above, loose matches on Subject: below --
2019-06-01  1:48 [dpdk-dev] [PATCH v2 00/27] OCTEON TX2 common and mempool driver jerinj
2019-06-17 15:55 ` [dpdk-dev] [PATCH v3 " jerinj
2019-06-17 15:55   ` [dpdk-dev] [PATCH v3 25/27] mempool/octeontx2: add optimized dequeue operation for arm64 jerinj
2019-06-17 21:25     ` Aaron Conole