From: "Zhao1, Wei" <wei.zhao1@intel.com>
To: "dev@dpdk.org" <dev@dpdk.org>
Cc: "stable@dpdk.org" <stable@dpdk.org>,
"Zhang, Qi Z" <qi.z.zhang@intel.com>
Subject: Re: [dpdk-dev] [PATCH v2] net/ice: enable VLAN filter offloads support
Date: Wed, 13 Feb 2019 04:05:49 +0000 [thread overview]
Message-ID: <A2573D2ACFCADC41BB3BE09C6DE313CA07EB216E@PGSMSX103.gar.corp.intel.com> (raw)
In-Reply-To: <1550028615-44721-1-git-send-email-wei.zhao1@intel.com>
This patch was sent in error, please ignore it.
> -----Original Message-----
> From: Zhao1, Wei
> Sent: Wednesday, February 13, 2019 11:30 AM
> To: dev@dpdk.org
> Cc: stable@dpdk.org; Zhang, Qi Z <qi.z.zhang@intel.com>; Zhao1, Wei
> <wei.zhao1@intel.com>
> Subject: [PATCH v2] net/ice: enable VLAN filter offloads support
>
> There is a need to check whether dev_conf.rxmode.offloads is set when
> starting the ice device: if one of the VLAN-related bits is set, for
> example DEV_RX_OFFLOAD_VLAN_FILTER, the device start process should
> enable the requested offloads.
>
> Signed-off-by: Wei Zhao <wei.zhao1@intel.com>
>
> ---
>
> v2:
> - rework the patch to fix a compile error.
> ---
> drivers/net/avf/avf_ethdev.c | 2 +-
> drivers/net/ice/ice_ethdev.c | 10 +++++++++-
> 2 files changed, 10 insertions(+), 2 deletions(-)
>
> diff --git a/drivers/net/avf/avf_ethdev.c b/drivers/net/avf/avf_ethdev.c
> index 13eec1b..797f505 100644
> --- a/drivers/net/avf/avf_ethdev.c
> +++ b/drivers/net/avf/avf_ethdev.c
> @@ -1159,7 +1159,7 @@ avf_enable_irq0(struct avf_hw *hw)
> 	AVF_WRITE_REG(hw, AVFINT_ICR0_ENA1, AVFINT_ICR0_ENA1_ADMINQ_MASK);
>
> 	AVF_WRITE_REG(hw, AVFINT_DYN_CTL01, AVFINT_DYN_CTL01_INTENA_MASK |
> -		      AVFINT_DYN_CTL01_ITR_INDX_MASK);
> +		      AVFINT_DYN_CTL01_CLEARPBA_MASK | AVFINT_DYN_CTL01_ITR_INDX_MASK);
>
> AVF_WRITE_FLUSH(hw);
> }
> diff --git a/drivers/net/ice/ice_ethdev.c b/drivers/net/ice/ice_ethdev.c
> index 6ab66fa..5753d79 100644
> --- a/drivers/net/ice/ice_ethdev.c
> +++ b/drivers/net/ice/ice_ethdev.c
> @@ -1720,7 +1720,7 @@ ice_dev_start(struct rte_eth_dev *dev)
> struct ice_vsi *vsi = pf->main_vsi;
> uint16_t nb_rxq = 0;
> uint16_t nb_txq, i;
> - int ret;
> + int mask, ret;
>
> /* program Tx queues' context in hardware */
> 	for (nb_txq = 0; nb_txq < data->nb_tx_queues; nb_txq++) {
> @@ -1748,6 +1748,14 @@ ice_dev_start(struct rte_eth_dev *dev)
>
> ice_set_rx_function(dev);
>
> + mask = ETH_VLAN_STRIP_MASK | ETH_VLAN_FILTER_MASK |
> + ETH_VLAN_EXTEND_MASK;
> + ret = ice_vlan_offload_set(dev, mask);
> + if (ret) {
> + PMD_INIT_LOG(ERR, "Unable to set VLAN offload");
> + goto rx_err;
> + }
> +
> 	/* enable Rx interrupt and map Rx queues to interrupt vectors */
> if (ice_rxq_intr_setup(dev))
> return -EIO;
> --
> 2.7.5
Thread overview: 6+ messages
2019-01-28 7:53 [dpdk-dev] [PATCH] " Wei Zhao
2019-02-13 3:30 ` [dpdk-dev] [PATCH v2] " Wei Zhao
2019-02-13 4:05 ` Zhao1, Wei [this message]
2019-02-13 3:49 ` Wei Zhao
2019-02-13 18:43 ` Stillwell Jr, Paul M
2019-02-18 12:39 ` Zhang, Qi Z