From: Qi Zhang <qi.z.zhang@intel.com>
To: ferruh.yigit@intel.com
Cc: thomas@monjalon.net, dev@dpdk.org, xueqin.lin@intel.com,
Qi Zhang <qi.z.zhang@intel.com>
Subject: [dpdk-dev] [PATCH v3 2/2] net/pcap: enable data path for secondary
Date: Thu, 15 Nov 2018 03:56:47 +0800
Message-ID: <20181114195647.196648-3-qi.z.zhang@intel.com>
In-Reply-To: <20181114195647.196648-1-qi.z.zhang@intel.com>

A private vdev was the model when pdump was developed; now, with the
shared device mode on virtual devices, the pcap data path in the
secondary process no longer works.

When a secondary process adds a virtual device, the related data is
transferred to the primary process, which creates the device and
shares it back with the secondary.

When the pcap device is created in the primary, the pcap handlers
(pointers) are process-local and are not valid in the secondary
process, which breaks the secondary.

So the pcap handlers cannot be shared directly; a new set of handlers
must be created for the secondary process, which is what this patch
does.
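
To illustrate the approach outside of the driver, here is a minimal
standalone sketch. The names (shared_internals, process_private_handles,
open_local_handles) are hypothetical, not the actual DPDK structures:
only plain data such as the devargs string lives in shared memory,
while every pcap-style handle is opened and stored per process.

/*
 * Hypothetical, simplified sketch; no DPDK dependency. FILE * stands
 * in for pcap_t / pcap_dumper_t, which are likewise process-local.
 */
#include <stdio.h>
#include <stdlib.h>

#define MAX_QUEUES 16
#define ARG_MAXLEN 64

/* Shared between primary and secondary (would live in shared memory). */
struct shared_internals {
	char devargs[ARG_MAXLEN]; /* plain data: safe to share */
};

/* Allocated by each process for itself: raw pointers are only valid
 * in the process that opened them. */
struct process_private_handles {
	FILE *rx_handle[MAX_QUEUES];
	FILE *tx_handle[MAX_QUEUES];
};

/* Each process re-opens its own handles from the shared devargs,
 * instead of reusing pointers created by the other process. */
static struct process_private_handles *
open_local_handles(const struct shared_internals *shared)
{
	struct process_private_handles *pp = calloc(1, sizeof(*pp));

	if (pp == NULL)
		return NULL;
	/* A real driver would call pcap_open_offline()/pcap_dump_open()
	 * per queue; a plain fopen() keeps the sketch self-contained. */
	pp->rx_handle[0] = fopen(shared->devargs, "r");
	return pp;
}

int
main(void)
{
	struct shared_internals shared = { .devargs = "/tmp/rx.pcap" };
	struct process_private_handles *pp = open_local_handles(&shared);

	if (pp == NULL)
		return 1;
	printf("process-local handles at %p; only '%s' is shared\n",
	       (void *)pp, shared.devargs);
	if (pp->rx_handle[0] != NULL)
		fclose(pp->rx_handle[0]);
	free(pp);
	return 0;
}

The design choice this mirrors: the secondary never dereferences a
handle opened by the primary; it re-parses the devargs copied into
shared memory and opens its own handles, keeping them in
eth_dev->process_private.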
Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
---
drivers/net/pcap/rte_eth_pcap.c | 61 +++++++++++++++++++++++++++++++++++------
1 file changed, 52 insertions(+), 9 deletions(-)
diff --git a/drivers/net/pcap/rte_eth_pcap.c b/drivers/net/pcap/rte_eth_pcap.c
index 88cb404f1..245288428 100644
--- a/drivers/net/pcap/rte_eth_pcap.c
+++ b/drivers/net/pcap/rte_eth_pcap.c
@@ -77,6 +77,7 @@ struct pcap_tx_queue {
struct pmd_internals {
struct pcap_rx_queue rx_queue[RTE_PMD_PCAP_MAX_QUEUES];
struct pcap_tx_queue tx_queue[RTE_PMD_PCAP_MAX_QUEUES];
+ char devargs[ETH_PCAP_ARG_MAXLEN];
struct ether_addr eth_addr;
int if_index;
int single_iface;
@@ -1139,6 +1140,7 @@ pmd_pcap_probe(struct rte_vdev_device *dev)
struct pmd_devargs pcaps = {0};
struct pmd_devargs dumpers = {0};
struct rte_eth_dev *eth_dev;
+ struct pmd_internals *internal;
int single_iface = 0;
int ret;
@@ -1155,16 +1157,18 @@ pmd_pcap_probe(struct rte_vdev_device *dev)
PMD_LOG(ERR, "Failed to probe %s", name);
return -1;
}
- /* TODO: request info from primary to set up Rx and Tx */
- eth_dev->dev_ops = &ops;
- eth_dev->device = &dev->device;
- rte_eth_dev_probing_finish(eth_dev);
- return 0;
- }
- kvlist = rte_kvargs_parse(rte_vdev_device_args(dev), valid_arguments);
- if (kvlist == NULL)
- return -1;
+ internal = eth_dev->data->dev_private;
+
+ kvlist = rte_kvargs_parse(internal->devargs, valid_arguments);
+ if (kvlist == NULL)
+ return -1;
+ } else {
+ kvlist = rte_kvargs_parse(rte_vdev_device_args(dev),
+ valid_arguments);
+ if (kvlist == NULL)
+ return -1;
+ }
/*
* If iface argument is passed we open the NICs and use them for
@@ -1229,6 +1233,45 @@ pmd_pcap_probe(struct rte_vdev_device *dev)
goto free_kvlist;
create_eth:
+ if (rte_eal_process_type() == RTE_PROC_SECONDARY) {
+ struct pmd_process_private *pp;
+ unsigned int i;
+
+ internal = eth_dev->data->dev_private;
+ pp = (struct pmd_process_private *)
+ rte_zmalloc(NULL,
+ sizeof(struct pmd_process_private),
+ RTE_CACHE_LINE_SIZE);
+
+ if (pp == NULL) {
+ PMD_LOG(ERR,
+ "Failed to allocate memory for process private");
+ return -1;
+ }
+
+ eth_dev->dev_ops = &ops;
+ eth_dev->device = &dev->device;
+
+ /* setup process private */
+ for (i = 0; i < pcaps.num_of_queue; i++)
+ pp->rx_pcap[i] = pcaps.queue[i].pcap;
+
+ for (i = 0; i < dumpers.num_of_queue; i++) {
+ pp->tx_dumper[i] = dumpers.queue[i].dumper;
+ pp->tx_pcap[i] = dumpers.queue[i].pcap;
+ }
+
+ eth_dev->process_private = pp;
+ eth_dev->rx_pkt_burst = eth_pcap_rx;
+ if (is_tx_pcap)
+ eth_dev->tx_pkt_burst = eth_pcap_tx_dumper;
+ else
+ eth_dev->tx_pkt_burst = eth_pcap_tx;
+
+ rte_eth_dev_probing_finish(eth_dev);
+ return 0;
+ }
+
ret = eth_from_pcaps(dev, &pcaps, pcaps.num_of_queue, &dumpers,
dumpers.num_of_queue, single_iface, is_tx_pcap);
--
2.13.6
Thread overview: 23+ messages
2018-11-05 21:08 [dpdk-dev] [PATCH] net/pcap: enable data path on secondary Qi Zhang
2018-11-09 21:13 ` Ferruh Yigit
2018-11-09 21:24 ` Zhang, Qi Z
2018-11-12 16:51 ` [dpdk-dev] [PATCH v2] " Qi Zhang
2018-11-13 16:56 ` Ferruh Yigit
2018-11-13 17:11 ` [dpdk-dev] [PATCH] net/pcap: fix pcap handlers for secondary Ferruh Yigit
2018-11-13 17:14 ` [dpdk-dev] [PATCH v2] net/pcap: enable data path on secondary Thomas Monjalon
2018-11-13 18:27 ` Zhang, Qi Z
2018-11-13 18:43 ` Ferruh Yigit
2018-11-13 19:18 ` Zhang, Qi Z
2018-11-14 19:56 ` [dpdk-dev] [PATCH v3 0/2] fix pcap handlers for secondary Qi Zhang
2018-11-14 19:56 ` [dpdk-dev] [PATCH v3 1/2] net/pcap: move pcap handler to process private Qi Zhang
2018-11-14 23:05 ` Ferruh Yigit
2018-11-15 0:13 ` Zhang, Qi Z
2018-11-14 19:56 ` Qi Zhang [this message]
2018-11-14 23:08 ` [dpdk-dev] [PATCH v3 2/2] net/pcap: enable data path for secondary Ferruh Yigit
2018-11-15 0:06 ` Zhang, Qi Z
2018-11-15 1:37 ` [dpdk-dev] [PATCH v4 0/2] fix pcap handlers " Qi Zhang
2018-11-15 1:37 ` [dpdk-dev] [PATCH v4 1/2] net/pcap: move pcap handler to process private Qi Zhang
2018-11-16 15:56 ` Ferruh Yigit
2018-11-15 1:37 ` [dpdk-dev] [PATCH v4 2/2] net/pcap: enable data path for secondary Qi Zhang
2018-11-16 14:54 ` [dpdk-dev] [PATCH v4 0/2] fix pcap handlers " Ferruh Yigit
2018-11-16 16:12 ` Ferruh Yigit