From: Shahaf Shuler <shahafs@mellanox.com>
To: Yongseok Koh <yskoh@mellanox.com>
Cc: "dev@dpdk.org" <dev@dpdk.org>
Subject: Re: [dpdk-dev] [PATCH v3 3/4] net/mlx5: remove device register remap
Date: Mon, 8 Apr 2019 05:48:32 +0000
Message-ID: <AM0PR0502MB379559A3B4BB7509C74ACE9AC32C0@AM0PR0502MB3795.eurprd05.prod.outlook.com>
In-Reply-To: <20190405013357.14503-4-yskoh@mellanox.com>
Hi Koh,
See small comments below; the same comments apply to the mlx4 patch.
Friday, April 5, 2019 4:34 AM, Yongseok Koh:
> Subject: [dpdk-dev] [PATCH v3 3/4] net/mlx5: remove device register remap
>
> The UAR (User Access Region) register does not need to be remapped for the
> primary process; it should be remapped only for secondary processes. The UAR
> register table is stored in the process-private structure in rte_eth_devices[]:
> (struct mlx5_proc_priv *)rte_eth_devices[port_id].process_private
>
> The actual UAR table follows the data structure and the table is used for both
> Tx and Rx.
>
> For Tx, the BlueFlame register in the UAR is used to ring the doorbell.
> MLX5_TX_BFREG(txq) is defined to get the register for a txq. Each process
> accesses its own private data to acquire the register from the UAR table.
>
> For Rx, the doorbell in the UAR is required for arming the CQ event. However,
> it is a known issue that the register isn't remapped for secondary processes.
>
> Signed-off-by: Yongseok Koh <yskoh@mellanox.com>
> ---
> drivers/net/mlx5/mlx5.c | 198 +++++-----------------------------------
> drivers/net/mlx5/mlx5.h | 15 ++-
> drivers/net/mlx5/mlx5_ethdev.c | 17 ++++
> drivers/net/mlx5/mlx5_rxtx.h | 11 ++-
> drivers/net/mlx5/mlx5_trigger.c | 6 --
> drivers/net/mlx5/mlx5_txq.c | 180 ++++++++++++++++++++++--------------
[...]
> /**
> @@ -1182,12 +1010,32 @@ mlx5_dev_spawn(struct rte_device *dpdk_dev,
> }
> DRV_LOG(DEBUG, "naming Ethernet device \"%s\"", name);
> if (rte_eal_process_type() == RTE_PROC_SECONDARY) {
> + struct mlx5_proc_priv *ppriv;
> + size_t ppriv_size;
> +
> eth_dev = rte_eth_dev_attach_secondary(name);
> if (eth_dev == NULL) {
> DRV_LOG(ERR, "can not attach rte ethdev");
> rte_errno = ENOMEM;
> return NULL;
> }
> + priv = eth_dev->data->dev_private;
> + /*
> + * UAR register table follows the process private structure.
> + * BlueFlame registers for Tx queues come first and registers
> + * for Rx queues follows.
> + */
> + ppriv_size = sizeof(struct mlx5_proc_priv) +
> + (priv->rxqs_n + priv->txqs_n) * sizeof(void *);
Why do you also add rxqs_n? Why not only the txqs?
> + ppriv = rte_malloc_socket("mlx5_proc_priv", ppriv_size,
> + RTE_CACHE_LINE_SIZE,
> + dpdk_dev->numa_node);
> + if (!ppriv) {
> + rte_errno = ENOMEM;
> + return NULL;
> + }
> + ppriv->uar_table_sz = ppriv_size;
> + eth_dev->process_private = ppriv;
> eth_dev->device = dpdk_dev;
> eth_dev->dev_ops = &mlx5_dev_sec_ops;
> 		/* Receive command fd from primary process */
> @@ -1195,7 +1043,7 @@ mlx5_dev_spawn(struct rte_device *dpdk_dev,
> if (err < 0)
> return NULL;
> /* Remap UAR for Tx queues. */
> - err = mlx5_tx_uar_remap(eth_dev, err);
> + err = mlx5_tx_uar_init_secondary(eth_dev, err);
> if (err)
> return NULL;
> /*
> diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h
> index 699c8fcf6d..1ac4ad71b1 100644
> --- a/drivers/net/mlx5/mlx5.h
> +++ b/drivers/net/mlx5/mlx5.h
> @@ -97,8 +97,6 @@ struct mlx5_shared_data {
> /* Global spinlock for primary and secondary processes. */
> int init_done; /* Whether primary has done initialization. */
> 	unsigned int secondary_cnt; /* Number of secondary processes init'd. */
> -	void *uar_base;
> -	/* Reserved UAR address space for TXQ UAR(hw doorbell) mapping. */
> struct mlx5_dev_list mem_event_cb_list;
> rte_rwlock_t mem_event_rwlock;
> };
> @@ -106,8 +104,6 @@ struct mlx5_shared_data {
> /* Per-process data structure, not visible to other processes. */
> struct mlx5_local_data {
> int init_done; /* Whether a secondary has done initialization. */
> -	void *uar_base;
> -	/* Reserved UAR address space for TXQ UAR(hw doorbell) mapping. */
> };
>
> extern struct mlx5_shared_data *mlx5_shared_data;
> @@ -282,6 +278,17 @@ struct mlx5_ibv_shared {
> struct mlx5_ibv_shared_port port[]; /* per device port data array. */
> };
>
> +/* Per-process private structure. */
> +struct mlx5_proc_priv {
> + size_t uar_table_sz;
> + /* Size of UAR register table. */
> +	void *uar_table[];
> +	/* Table of UAR registers for each process. */
> +};
> +
> +#define MLX5_PROC_PRIV(port_id) \
> + ((struct mlx5_proc_priv *)rte_eth_devices[port_id].process_private)
> +
> struct mlx5_priv {
> LIST_ENTRY(mlx5_priv) mem_event_cb;
> 	/**< Called by memory event callback. */
> diff --git a/drivers/net/mlx5/mlx5_ethdev.c b/drivers/net/mlx5/mlx5_ethdev.c
> index 9ae9dddd3c..42297f11c9 100644
> --- a/drivers/net/mlx5/mlx5_ethdev.c
> +++ b/drivers/net/mlx5/mlx5_ethdev.c
> @@ -382,6 +382,8 @@ int
> mlx5_dev_configure(struct rte_eth_dev *dev)
> {
> struct mlx5_priv *priv = dev->data->dev_private;
> + struct mlx5_proc_priv *ppriv;
> + size_t ppriv_size;
> unsigned int rxqs_n = dev->data->nb_rx_queues;
> unsigned int txqs_n = dev->data->nb_tx_queues;
> unsigned int i;
> @@ -450,6 +452,21 @@ mlx5_dev_configure(struct rte_eth_dev *dev)
> if (++j == rxqs_n)
> j = 0;
> }
> + /*
> + * UAR register table follows the process private structure. BlueFlame
> + * registers for Tx queues come first and registers for Rx queues
> + * follows.
> + */
> + ppriv_size = sizeof(struct mlx5_proc_priv) +
> + (priv->rxqs_n + priv->txqs_n) * sizeof(void *);
Ditto.
> +	ppriv = rte_malloc_socket("mlx5_proc_priv", ppriv_size,
> +				  RTE_CACHE_LINE_SIZE, dev->device->numa_node);
> + if (!ppriv) {
> + rte_errno = ENOMEM;
> + return -rte_errno;
> + }
> + ppriv->uar_table_sz = ppriv_size;
> + dev->process_private = ppriv;
> return 0;
> }
>
> diff --git a/drivers/net/mlx5/mlx5_rxtx.h b/drivers/net/mlx5/mlx5_rxtx.h
> index 7b58063ceb..5d49892429 100644
> --- a/drivers/net/mlx5/mlx5_rxtx.h
> +++ b/drivers/net/mlx5/mlx5_rxtx.h
> @@ -201,8 +201,8 @@ struct mlx5_txq_data {
> volatile void *wqes; /* Work queue (use volatile to write into). */
> volatile uint32_t *qp_db; /* Work queue doorbell. */
> volatile uint32_t *cq_db; /* Completion queue doorbell. */
> - volatile void *bf_reg; /* Blueflame register remapped. */
> struct rte_mbuf *(*elts)[]; /* TX elements. */
> + uint16_t port_id; /* Port ID of device. */
> uint16_t idx; /* Queue index. */
> 	struct mlx5_txq_stats stats; /* TX queue counters. */
> #ifndef RTE_ARCH_64
> @@ -231,9 +231,12 @@ struct mlx5_txq_ctrl {
> struct mlx5_txq_ibv *ibv; /* Verbs queue object. */
> struct mlx5_priv *priv; /* Back pointer to private data. */
> 	off_t uar_mmap_offset; /* UAR mmap offset for non-primary process. */
> - volatile void *bf_reg_orig; /* Blueflame register from verbs. */
> + void *bf_reg; /* BlueFlame register from Verbs. */
I guess you keep this one in order to get the VA offset for the secondary mapping, right? Otherwise we could take the bf_reg from the UAR table in the process-private structure.
If so, it would be better to rename it to uar_page_offset (or another name you like) to avoid duplicating fields.
> };
>
> +#define MLX5_TX_BFREG(txq) \
> + (MLX5_PROC_PRIV((txq)->port_id)->uar_table[(txq)->idx])
> +
> /* mlx5_rxq.c */
>
> extern uint8_t rss_hash_default_key[];
> @@ -301,7 +304,7 @@ uint64_t mlx5_get_rx_queue_offloads(struct rte_eth_dev *dev);
> int mlx5_tx_queue_setup(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
> 			unsigned int socket, const struct rte_eth_txconf *conf);
> void mlx5_tx_queue_release(void *dpdk_txq);
> -int mlx5_tx_uar_remap(struct rte_eth_dev *dev, int fd);
> +int mlx5_tx_uar_init_secondary(struct rte_eth_dev *dev, int fd);
> struct mlx5_txq_ibv *mlx5_txq_ibv_new(struct rte_eth_dev *dev, uint16_t idx);
> struct mlx5_txq_ibv *mlx5_txq_ibv_get(struct rte_eth_dev *dev, uint16_t idx);
> int mlx5_txq_ibv_release(struct mlx5_txq_ibv *txq_ibv);
> @@ -704,7 +707,7 @@ static __rte_always_inline void
> mlx5_tx_dbrec_cond_wmb(struct mlx5_txq_data *txq, volatile struct mlx5_wqe *wqe,
> 		       int cond)
> {
> - uint64_t *dst = (uint64_t *)((uintptr_t)txq->bf_reg);
> + uint64_t *dst = MLX5_TX_BFREG(txq);
I guess there is no performance penalty due to this change, right?
Would you consider prefetching the address before the doorbell logic, just to be on the safe side?
> volatile uint64_t *src = ((volatile uint64_t *)wqe);
>
> rte_cio_wmb();
> diff --git a/drivers/net/mlx5/mlx5_trigger.c b/drivers/net/mlx5/mlx5_trigger.c
> index 7c1e5594d6..b7fde35758 100644
> --- a/drivers/net/mlx5/mlx5_trigger.c
> +++ b/drivers/net/mlx5/mlx5_trigger.c
> @@ -58,12 +58,6 @@ mlx5_txq_start(struct rte_eth_dev *dev)
> goto error;
> }
> }
> - ret = mlx5_tx_uar_remap(dev, priv->sh->ctx->cmd_fd);
> - if (ret) {
> - /* Adjust index for rollback. */
> - i = priv->txqs_n - 1;
> - goto error;
> - }
> return 0;
> error:
> 	ret = rte_errno; /* Save rte_errno before cleanup. */
> diff --git a/drivers/net/mlx5/mlx5_txq.c b/drivers/net/mlx5/mlx5_txq.c
> index 4bd08cb035..5fb1761955 100644
> --- a/drivers/net/mlx5/mlx5_txq.c
> +++ b/drivers/net/mlx5/mlx5_txq.c
> @@ -229,13 +229,99 @@ mlx5_tx_queue_release(void *dpdk_txq)
> }
> }
>
> +/**
> + * Initialize Tx UAR registers for primary process.
> + *
> + * @param txq_ctrl
> + * Pointer to Tx queue control structure.
> + */
> +static void
> +txq_uar_init(struct mlx5_txq_ctrl *txq_ctrl)
> +{
> + struct mlx5_priv *priv = txq_ctrl->priv;
> + struct mlx5_proc_priv *ppriv = MLX5_PROC_PRIV(PORT_ID(priv));
> +
> + assert(rte_eal_process_type() == RTE_PROC_PRIMARY);
> + assert(ppriv);
> +	ppriv->uar_table[txq_ctrl->txq.idx] = txq_ctrl->bf_reg;
> +#ifndef RTE_ARCH_64
> +	struct mlx5_txq_data *txq = &txq_ctrl->txq;
> +	unsigned int lock_idx;
> +	const size_t page_size = sysconf(_SC_PAGESIZE);
> +
> +	/* Assign an UAR lock according to UAR page number */
> +	lock_idx = (txq_ctrl->uar_mmap_offset / page_size) &
> +		   MLX5_UAR_PAGE_NUM_MASK;
> +	txq->uar_lock = &priv->uar_lock[lock_idx];
> +#endif
> +}
>
> /**
> - * Mmap TX UAR(HW doorbell) pages into reserved UAR address space.
> - * Both primary and secondary process do mmap to make UAR address
> - * aligned.
> + * Remap UAR register of a Tx queue for secondary process.
> *
> - * @param[in] dev
> + * Remapped address is stored at the table in the process private structure of
> + * the device, indexed by queue index.
> + *
> + * @param txq_ctrl
> + * Pointer to Tx queue control structure.
> + * @param fd
> + * Verbs file descriptor to map UAR pages.
> + *
> + * @return
> + * 0 on success, a negative errno value otherwise and rte_errno is set.
> + */
> +static int
> +txq_uar_init_secondary(struct mlx5_txq_ctrl *txq_ctrl, int fd)
> +{
> + struct mlx5_priv *priv = txq_ctrl->priv;
> + struct mlx5_proc_priv *ppriv = MLX5_PROC_PRIV(PORT_ID(priv));
> + struct mlx5_txq_data *txq = &txq_ctrl->txq;
> + void *addr;
> + uintptr_t uar_va;
> + uintptr_t offset;
> + const size_t page_size = sysconf(_SC_PAGESIZE);
> +
> + assert(ppriv);
> + /*
> + * As rdma-core, UARs are mapped in size of OS page
> + * size. Ref to libmlx5 function: mlx5_init_context()
> + */
> + uar_va = (uintptr_t)txq_ctrl->bf_reg;
> + offset = uar_va & (page_size - 1); /* Offset in page. */
> + addr = mmap(NULL, page_size, PROT_WRITE, MAP_SHARED, fd,
> + txq_ctrl->uar_mmap_offset);
> + if (addr == MAP_FAILED) {
> + DRV_LOG(ERR,
> + "port %u mmap failed for BF reg of txq %u",
> + txq->port_id, txq->idx);
> + rte_errno = ENXIO;
> + return -rte_errno;
> + }
> + addr = RTE_PTR_ADD(addr, offset);
> + ppriv->uar_table[txq->idx] = addr;
> + return 0;
> +}
> +
> +/**
> + * Unmap UAR register of a Tx queue for secondary process.
> + *
> + * @param txq_ctrl
> + * Pointer to Tx queue control structure.
> + */
> +static void
> +txq_uar_uninit_secondary(struct mlx5_txq_ctrl *txq_ctrl)
> +{
> +	struct mlx5_proc_priv *ppriv = MLX5_PROC_PRIV(PORT_ID(txq_ctrl->priv));
> + const size_t page_size = sysconf(_SC_PAGESIZE);
> + void *addr;
> +
> + addr = ppriv->uar_table[txq_ctrl->txq.idx];
> +	munmap(RTE_PTR_ALIGN_FLOOR(addr, page_size), page_size);
> +}
> +
> +/**
> + * Initialize Tx UAR registers for secondary process.
> + *
> + * @param dev
> * Pointer to Ethernet device.
> * @param fd
> * Verbs file descriptor to map UAR pages.
> @@ -244,81 +330,36 @@ mlx5_tx_queue_release(void *dpdk_txq)
> * 0 on success, a negative errno value otherwise and rte_errno is set.
> */
> int
> -mlx5_tx_uar_remap(struct rte_eth_dev *dev, int fd)
> +mlx5_tx_uar_init_secondary(struct rte_eth_dev *dev, int fd)
> {
> struct mlx5_priv *priv = dev->data->dev_private;
> - unsigned int i, j;
> - uintptr_t pages[priv->txqs_n];
> - unsigned int pages_n = 0;
> - uintptr_t uar_va;
> - uintptr_t off;
> - void *addr;
> - void *ret;
> struct mlx5_txq_data *txq;
> struct mlx5_txq_ctrl *txq_ctrl;
> - int already_mapped;
> - size_t page_size = sysconf(_SC_PAGESIZE);
> -#ifndef RTE_ARCH_64
> - unsigned int lock_idx;
> -#endif
> + unsigned int i;
> + int ret;
>
> - memset(pages, 0, priv->txqs_n * sizeof(uintptr_t));
> - /*
> - * As rdma-core, UARs are mapped in size of OS page size.
> - * Use aligned address to avoid duplicate mmap.
> - * Ref to libmlx5 function: mlx5_init_context()
> - */
> + assert(rte_eal_process_type() == RTE_PROC_SECONDARY);
> for (i = 0; i != priv->txqs_n; ++i) {
> if (!(*priv->txqs)[i])
> continue;
> txq = (*priv->txqs)[i];
> txq_ctrl = container_of(txq, struct mlx5_txq_ctrl, txq);
> assert(txq->idx == (uint16_t)i);
> -		/* UAR addr form verbs used to find dup and offset in page. */
> -		uar_va = (uintptr_t)txq_ctrl->bf_reg_orig;
> -		off = uar_va & (page_size - 1); /* offset in page. */
> -		uar_va = RTE_ALIGN_FLOOR(uar_va, page_size); /* page addr. */
> - already_mapped = 0;
> - for (j = 0; j != pages_n; ++j) {
> - if (pages[j] == uar_va) {
> - already_mapped = 1;
> - break;
> - }
> - }
> - /* new address in reserved UAR address space. */
> - addr = RTE_PTR_ADD(mlx5_shared_data->uar_base,
> - uar_va & (uintptr_t)(MLX5_UAR_SIZE - 1));
> - if (!already_mapped) {
> - pages[pages_n++] = uar_va;
> - /* fixed mmap to specified address in reserved
> - * address space.
> - */
> - ret = mmap(addr, page_size,
> - PROT_WRITE, MAP_FIXED | MAP_SHARED,
> fd,
> - txq_ctrl->uar_mmap_offset);
> - if (ret != addr) {
> - /* fixed mmap have to return same address
> */
> - DRV_LOG(ERR,
> - "port %u call to mmap failed on UAR"
> - " for txq %u",
> - dev->data->port_id, txq->idx);
> - rte_errno = ENXIO;
> - return -rte_errno;
> - }
> - }
> -		if (rte_eal_process_type() == RTE_PROC_PRIMARY) /* save once */
> -			txq_ctrl->txq.bf_reg = RTE_PTR_ADD((void *)addr, off);
> - else
> - assert(txq_ctrl->txq.bf_reg ==
> - RTE_PTR_ADD((void *)addr, off));
> -#ifndef RTE_ARCH_64
> - /* Assign a UAR lock according to UAR page number */
> - lock_idx = (txq_ctrl->uar_mmap_offset / page_size) &
> - MLX5_UAR_PAGE_NUM_MASK;
> - txq->uar_lock = &priv->uar_lock[lock_idx];
> -#endif
> + ret = txq_uar_init_secondary(txq_ctrl, fd);
> + if (ret)
> + goto error;
> }
> return 0;
> +error:
> + /* Rollback. */
> + do {
> + if (!(*priv->txqs)[i])
> + continue;
> + txq = (*priv->txqs)[i];
> + txq_ctrl = container_of(txq, struct mlx5_txq_ctrl, txq);
> + txq_uar_uninit_secondary(txq_ctrl);
> + } while (i--);
> + return -rte_errno;
> }
>
> /**
> @@ -507,7 +548,6 @@ mlx5_txq_ibv_new(struct rte_eth_dev *dev, uint16_t idx)
> txq_data->wqes = qp.sq.buf;
> txq_data->wqe_n = log2above(qp.sq.wqe_cnt);
> txq_data->qp_db = &qp.dbrec[MLX5_SND_DBR];
> - txq_ctrl->bf_reg_orig = qp.bf.reg;
> txq_data->cq_db = cq_info.dbrec;
> txq_data->cqes =
> (volatile struct mlx5_cqe (*)[])
> @@ -521,6 +561,8 @@ mlx5_txq_ibv_new(struct rte_eth_dev *dev, uint16_t idx)
> txq_ibv->qp = tmpl.qp;
> txq_ibv->cq = tmpl.cq;
> rte_atomic32_inc(&txq_ibv->refcnt);
> + txq_ctrl->bf_reg = qp.bf.reg;
> + txq_uar_init(txq_ctrl);
> if (qp.comp_mask & MLX5DV_QP_MASK_UAR_MMAP_OFFSET) {
> txq_ctrl->uar_mmap_offset = qp.uar_mmap_offset;
> 		DRV_LOG(DEBUG, "port %u: uar_mmap_offset 0x%lx",
> @@ -778,6 +820,7 @@ mlx5_txq_new(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
> tmpl->priv = priv;
> tmpl->socket = socket;
> tmpl->txq.elts_n = log2above(desc);
> + tmpl->txq.port_id = dev->data->port_id;
> tmpl->txq.idx = idx;
> txq_set_params(tmpl);
> 	DRV_LOG(DEBUG, "port %u device_attr.max_qp_wr is %d",
> @@ -836,15 +879,12 @@ mlx5_txq_release(struct rte_eth_dev *dev, uint16_t idx)
> {
> struct mlx5_priv *priv = dev->data->dev_private;
> struct mlx5_txq_ctrl *txq;
> - size_t page_size = sysconf(_SC_PAGESIZE);
>
> if (!(*priv->txqs)[idx])
> return 0;
> txq = container_of((*priv->txqs)[idx], struct mlx5_txq_ctrl, txq);
> if (txq->ibv && !mlx5_txq_ibv_release(txq->ibv))
> txq->ibv = NULL;
> -	munmap((void *)RTE_ALIGN_FLOOR((uintptr_t)txq->txq.bf_reg, page_size),
> -	       page_size);
> if (rte_atomic32_dec_and_test(&txq->refcnt)) {
> txq_free_elts(txq);
> mlx5_mr_btree_free(&txq->txq.mr_ctrl.cache_bh);
> --
> 2.11.0
Thread overview: 66+ messages
2019-03-25 19:36 [dpdk-dev] [PATCH 0/3] net/mlx: " Yongseok Koh
2019-03-25 19:36 ` [dpdk-dev] [PATCH 1/3] net/mlx5: fix recursive inclusion of header file Yongseok Koh
2019-03-25 19:36 ` [dpdk-dev] [PATCH 2/3] net/mlx5: remove device register remap Yongseok Koh
2019-03-25 19:36 ` [dpdk-dev] [PATCH 3/3] net/mlx4: " Yongseok Koh
2019-04-01 21:22 ` [dpdk-dev] [PATCH v2 0/3] net/mlx: " Yongseok Koh
2019-04-01 21:22 ` [dpdk-dev] [PATCH v2 1/3] net/mlx5: fix recursive inclusion of header file Yongseok Koh
2019-04-02 5:39 ` Shahaf Shuler
2019-04-01 21:22 ` [dpdk-dev] [PATCH v2 2/3] net/mlx5: remove device register remap Yongseok Koh
2019-04-02 6:50 ` Shahaf Shuler
2019-04-01 21:22 ` [dpdk-dev] [PATCH v2 3/3] net/mlx4: " Yongseok Koh
2019-04-05 1:33 ` [dpdk-dev] [PATCH v3 0/4] net/mlx: " Yongseok Koh
2019-04-05 1:33 ` [dpdk-dev] [PATCH v3 1/4] net/mlx5: fix recursive inclusion of header file Yongseok Koh
2019-04-05 1:33 ` [dpdk-dev] [PATCH v3 2/4] net/mlx5: remove redundant queue index Yongseok Koh
2019-04-08 5:24 ` Shahaf Shuler
2019-04-05 1:33 ` [dpdk-dev] [PATCH v3 3/4] net/mlx5: remove device register remap Yongseok Koh
2019-04-08 5:48 ` Shahaf Shuler [this message]
2019-04-09 19:36 ` Yongseok Koh
2019-04-05 1:33 ` [dpdk-dev] [PATCH v3 4/4] net/mlx4: " Yongseok Koh
2019-04-09 23:13 ` [dpdk-dev] [PATCH v4 0/4] net/mlx: " Yongseok Koh
2019-04-09 23:13 ` [dpdk-dev] [PATCH v4 1/4] net/mlx5: fix recursive inclusion of header file Yongseok Koh
2019-04-09 23:13 ` [dpdk-dev] [PATCH v4 2/4] net/mlx5: remove redundant queue index Yongseok Koh
2019-04-09 23:13 ` [dpdk-dev] [PATCH v4 3/4] net/mlx5: remove device register remap Yongseok Koh
2019-04-10 17:50 ` Ferruh Yigit
2019-04-10 19:12 ` Yongseok Koh
2019-04-09 23:13 ` [dpdk-dev] [PATCH v4 4/4] net/mlx4: " Yongseok Koh
2019-04-10 6:58 ` [dpdk-dev] [PATCH v4 0/4] net/mlx: " Shahaf Shuler
2019-04-10 17:50 ` Ferruh Yigit
2019-04-10 18:41 ` [dpdk-dev] [PATCH v5 " Yongseok Koh
2019-04-10 18:41 ` [dpdk-dev] [PATCH v5 1/4] net/mlx5: fix recursive inclusion of header file Yongseok Koh
2019-04-10 18:41 ` [dpdk-dev] [PATCH v5 2/4] net/mlx5: remove redundant queue index Yongseok Koh
2019-04-10 18:41 ` [dpdk-dev] [PATCH v5 3/4] net/mlx5: remove device register remap Yongseok Koh
2019-04-10 18:41 ` [dpdk-dev] [PATCH v5 4/4] net/mlx4: " Yongseok Koh
2019-04-11 8:40 ` [dpdk-dev] [PATCH v5 0/4] net/mlx: " Shahaf Shuler