From: "Zhang, Qi Z" <qi.z.zhang@intel.com>
To: Stephen Hemminger <stephen@networkplumber.org>
Cc: "dev@dpdk.org" <dev@dpdk.org>,
"Karlsson, Magnus" <magnus.karlsson@intel.com>,
"Topel, Bjorn" <bjorn.topel@intel.com>
Subject: Re: [dpdk-dev] [RFC 1/7] net/af_xdp: new PMD driver
Date: Thu, 1 Mar 2018 01:51:08 +0000
Message-ID: <039ED4275CED7440929022BC67E706115315D3B2@SHSMSX103.ccr.corp.intel.com>
In-Reply-To: <20180228154217.4aff5095@xeon-e3>
> -----Original Message-----
> From: Stephen Hemminger [mailto:stephen@networkplumber.org]
> Sent: Thursday, March 1, 2018 7:42 AM
> To: Zhang, Qi Z <qi.z.zhang@intel.com>
> Cc: dev@dpdk.org; magnus.karlsson@intel.com; Topel, Bjorn
> <bjorn.topel@intel.com>
> Subject: Re: [dpdk-dev] [RFC 1/7] net/af_xdp: new PMD driver
>
> On Tue, 27 Feb 2018 17:33:00 +0800
> Qi Zhang <qi.z.zhang@intel.com> wrote:
>
> > struct pmd_internals {
> > + int sfd;
> > + int if_index;
> > + char if_name[0x100];
>
> why not IFNAMSIZ?
>
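For illustration, a minimal sketch of the suggested fix. IFNAMSIZ is
defined in <net/if.h> and is the kernel's bound on an interface name
(including the terminating NUL), so the 0x100 magic number can go:

    #include <net/if.h>  /* IFNAMSIZ */

    struct pmd_internals {
            int sfd;
            int if_index;
            char if_name[IFNAMSIZ];  /* kernel limit instead of magic 0x100 */
            /* ... remaining fields unchanged ... */
    };
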
> > + struct ether_addr eth_addr;
> > + struct xdp_queue rx;
> > + struct xdp_queue tx;
> > + struct xdp_umem *umem;
> > + struct rte_mempool *mb_pool;
> > +
> > + unsigned long rx_pkts;
> > + unsigned long rx_bytes;
> > + unsigned long rx_dropped;
> > +
> > + unsigned long tx_pkts;
> > + unsigned long err_pkts;
> > + unsigned long tx_bytes;
>
> why not per-queue stats? per-port stats are expensive
Multi-queue is not supported in this implementation, but it will be considered.
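For illustration, a rough sketch of how per-queue counters could look
once multi-queue is added (the struct and field names below are
hypothetical, not part of this patch). Keeping each queue's counters
next to the queue means only the lcore servicing that queue writes
them, avoiding shared cache lines:

    /* Hypothetical layout: one counter block per queue, updated only
     * by the lcore that polls that queue. */
    struct xdp_queue_stats {
            unsigned long pkts;
            unsigned long bytes;
            unsigned long dropped;   /* rx only */
            unsigned long err_pkts;  /* tx only */
    };

    struct pmd_queue {
            struct xdp_queue q;
            struct xdp_queue_stats stats;
    };

The PMD's stats_get callback would then sum the per-queue blocks on
demand instead of maintaining hot per-port counters.
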
Regards
Qi
>
> > + uint16_t port_id;
> > + uint16_t queue_idx;
> > + int ring_size;
> > + struct rte_ring *buf_ring;
> > +};
Thread overview: 24+ messages
2018-02-27 9:32 [dpdk-dev] [RFC 0/7] PMD driver for AF_XDP Qi Zhang
2018-02-27 9:33 ` [dpdk-dev] [RFC 1/7] net/af_xdp: new PMD driver Qi Zhang
2018-02-28 23:40 ` Stephen Hemminger
2018-02-28 23:42 ` Stephen Hemminger
2018-03-01 1:51 ` Zhang, Qi Z [this message]
2018-02-28 23:42 ` Stephen Hemminger
2018-02-28 23:45 ` Stephen Hemminger
2018-03-01 1:59 ` Zhang, Qi Z
2018-02-27 9:33 ` [dpdk-dev] [RFC 2/7] lib/mbuf: enable parse flags when create mempool Qi Zhang
2018-02-27 9:33 ` [dpdk-dev] [RFC 3/7] lib/mempool: allow page size aligned mempool Qi Zhang
2018-02-27 9:33 ` [dpdk-dev] [RFC 4/7] net/af_xdp: use mbuf mempool for buffer management Qi Zhang
2018-03-01 2:08 ` Stephen Hemminger
2018-02-27 9:33 ` [dpdk-dev] [RFC 5/7] net/af_xdp: enable share mempool Qi Zhang
2018-02-27 9:33 ` [dpdk-dev] [RFC 6/7] net/af_xdp: load BPF file Qi Zhang
2018-03-01 2:10 ` Stephen Hemminger
2018-02-27 9:33 ` [dpdk-dev] [RFC 7/7] app/testpmd: enable parameter for mempool flags Qi Zhang
2018-03-01 2:52 ` [dpdk-dev] [RFC 0/7] PMD driver for AF_XDP Jason Wang
2018-03-01 4:18 ` Zhang, Qi Z
2018-03-01 4:20 ` Zhang, Qi Z
2018-03-01 7:46 ` Jason Wang
2018-03-01 12:56 ` Zhang, Qi Z
2018-03-01 13:18 ` Jason Wang
2018-03-02 4:05 ` Zhang, Qi Z
2018-02-27 9:35 Qi Zhang
2018-02-27 9:35 ` [dpdk-dev] [RFC 1/7] net/af_xdp: new PMD driver Qi Zhang