DPDK patches and discussions
From: "Liu, Mingxia" <mingxia.liu@intel.com>
To: "Wu, Jingjing" <jingjing.wu@intel.com>, "dev@dpdk.org" <dev@dpdk.org>
Cc: "Xing, Beilei" <beilei.xing@intel.com>
Subject: RE: [PATCH v3 5/6] common/idpf: add alarm to support handle vchnl message
Date: Thu, 2 Feb 2023 07:39:04 +0000
Message-ID: <PH0PR11MB5877AA89E985B0230665CD96ECD69@PH0PR11MB5877.namprd11.prod.outlook.com>
In-Reply-To: <MW3PR11MB4587CFAE31859DF605242A92E3D69@MW3PR11MB4587.namprd11.prod.outlook.com>



> -----Original Message-----
> From: Wu, Jingjing <jingjing.wu@intel.com>
> Sent: Thursday, February 2, 2023 12:24 PM
> To: Liu, Mingxia <mingxia.liu@intel.com>; dev@dpdk.org
> Cc: Xing, Beilei <beilei.xing@intel.com>
> Subject: RE: [PATCH v3 5/6] common/idpf: add alarm to support handle vchnl
> message
> 
> > @@ -83,12 +84,49 @@ static int
> >  idpf_dev_link_update(struct rte_eth_dev *dev,
> >  		     __rte_unused int wait_to_complete)
> >  {
> > +	struct idpf_vport *vport = dev->data->dev_private;
> >  	struct rte_eth_link new_link;
> >
> >  	memset(&new_link, 0, sizeof(new_link));
> >
> > -	new_link.link_speed = RTE_ETH_SPEED_NUM_NONE;
> > +	switch (vport->link_speed) {
> > +	case 10:
> > +		new_link.link_speed = RTE_ETH_SPEED_NUM_10M;
> > +		break;
> > +	case 100:
> > +		new_link.link_speed = RTE_ETH_SPEED_NUM_100M;
> > +		break;
> > +	case 1000:
> > +		new_link.link_speed = RTE_ETH_SPEED_NUM_1G;
> > +		break;
> > +	case 10000:
> > +		new_link.link_speed = RTE_ETH_SPEED_NUM_10G;
> > +		break;
> > +	case 20000:
> > +		new_link.link_speed = RTE_ETH_SPEED_NUM_20G;
> > +		break;
> > +	case 25000:
> > +		new_link.link_speed = RTE_ETH_SPEED_NUM_25G;
> > +		break;
> > +	case 40000:
> > +		new_link.link_speed = RTE_ETH_SPEED_NUM_40G;
> > +		break;
> > +	case 50000:
> > +		new_link.link_speed = RTE_ETH_SPEED_NUM_50G;
> > +		break;
> > +	case 100000:
> > +		new_link.link_speed = RTE_ETH_SPEED_NUM_100G;
> > +		break;
> > +	case 200000:
> > +		new_link.link_speed = RTE_ETH_SPEED_NUM_200G;
> > +		break;
> > +	default:
> > +		new_link.link_speed = RTE_ETH_SPEED_NUM_NONE;
> > +	}
> > +
> >  	new_link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
> > +	new_link.link_status = vport->link_up ? RTE_ETH_LINK_UP :
> > +		RTE_ETH_LINK_DOWN;
> >  	new_link.link_autoneg = !(dev->data->dev_conf.link_speeds &
> >  				  RTE_ETH_LINK_SPEED_FIXED);
> Better to use RTE_ETH_LINK_[AUTONEG/FIXED] instead.
> 
[Liu, Mingxia] According to the comment on struct rte_eth_conf quoted below, RTE_ETH_LINK_SPEED_FIXED is the appropriate flag to check here, since it is the link_speeds bit that disables autonegotiation; see also the small sketch after the quote.
struct rte_eth_conf {
	uint32_t link_speeds; /**< bitmap of RTE_ETH_LINK_SPEED_XXX of speeds to be
				used. RTE_ETH_LINK_SPEED_FIXED disables link
				autonegotiation, and a unique speed shall be
				set. Otherwise, the bitmap defines the set of
				speeds to be advertised. If the special value
				RTE_ETH_LINK_SPEED_AUTONEG (0) is used, all speeds
				supported are advertised. */
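
To illustrate what I mean (this is only a sketch, not part of the patch, and assumes the standard rte_ethdev definitions): RTE_ETH_LINK_SPEED_FIXED is a bit in the rte_eth_conf.link_speeds bitmap configured by the application, while RTE_ETH_LINK_AUTONEG / RTE_ETH_LINK_FIXED are values reported through the link_autoneg field of struct rte_eth_link. So the check uses the former, and the assignment could use the latter:

	/* Sketch only: test the configured link_speeds bitmap and report
	 * the result through the rte_eth_link.link_autoneg field. */
	new_link.link_autoneg =
		(dev->data->dev_conf.link_speeds & RTE_ETH_LINK_SPEED_FIXED) ?
		RTE_ETH_LINK_FIXED : RTE_ETH_LINK_AUTONEG;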


> >
> > @@ -927,6 +965,127 @@ idpf_parse_devargs(struct rte_pci_device *pci_dev, struct idpf_adapter_ext *adap
> >  	return ret;
> >  }
> >
> > +static struct idpf_vport *
> > +idpf_find_vport(struct idpf_adapter_ext *adapter, uint32_t vport_id)
> > +{
> > +	struct idpf_vport *vport = NULL;
> > +	int i;
> > +
> > +	for (i = 0; i < adapter->cur_vport_nb; i++) {
> > +		vport = adapter->vports[i];
> > +		if (vport->vport_id != vport_id)
> > +			continue;
> > +		else
> > +			return vport;
> > +	}
> > +
> > +	return vport;
> > +}
> > +
> > +static void
> > +idpf_handle_event_msg(struct idpf_vport *vport, uint8_t *msg,
> > +		      uint16_t msglen)
> > +{
> > +	struct virtchnl2_event *vc_event = (struct virtchnl2_event *)msg;
> > +	struct rte_eth_dev *dev = (struct rte_eth_dev *)vport->dev;
> > +
> > +	if (msglen < sizeof(struct virtchnl2_event)) {
> > +		PMD_DRV_LOG(ERR, "Error event");
> > +		return;
> > +	}
> > +
> > +	switch (vc_event->event) {
> > +	case VIRTCHNL2_EVENT_LINK_CHANGE:
> > +		PMD_DRV_LOG(DEBUG, "VIRTCHNL2_EVENT_LINK_CHANGE");
> > +		vport->link_up = vc_event->link_status;
> Any conversion between bool and uint8?
> 
[Liu, Mingxia] OK, thanks, I'll use !! to convert the uint8 to a bool.
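Something like the following minimal sketch (illustrative only, assuming vport->link_up is a bool and vc_event->link_status is a uint8_t as in the quoted code):

	/* Normalize the uint8_t link_status (any non-zero value means
	 * the link is up) to a strict bool via double negation. */
	vport->link_up = !!(vc_event->link_status);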


Thread overview: 63+ messages
2022-12-16  9:36 [PATCH 0/7] add idpf pmd enhancement features Mingxia Liu
2022-12-16  9:37 ` [PATCH 1/7] common/idpf: add hw statistics Mingxia Liu
2022-12-16  9:37 ` [PATCH 2/7] common/idpf: add RSS set/get ops Mingxia Liu
2022-12-16  9:37 ` [PATCH 3/7] common/idpf: support single q scatter RX datapath Mingxia Liu
2022-12-16  9:37 ` [PATCH 4/7] common/idpf: add rss_offload hash in singleq rx Mingxia Liu
2022-12-16  9:37 ` [PATCH 5/7] common/idpf: add alarm to support handle vchnl message Mingxia Liu
2022-12-16  9:37 ` [PATCH 6/7] common/idpf: add xstats ops Mingxia Liu
2022-12-16  9:37 ` [PATCH 7/7] common/idpf: update mbuf_alloc_failed multi-thread process Mingxia Liu
2023-01-11  7:15 ` [PATCH 0/6] add idpf pmd enhancement features Mingxia Liu
2023-01-11  7:15   ` [PATCH v2 1/6] common/idpf: add hw statistics Mingxia Liu
2023-01-11  7:15   ` [PATCH v2 2/6] common/idpf: add RSS set/get ops Mingxia Liu
2023-01-11  7:15   ` [PATCH v2 3/6] common/idpf: support single q scatter RX datapath Mingxia Liu
2023-01-11  7:15   ` [PATCH v2 4/6] common/idpf: add rss_offload hash in singleq rx Mingxia Liu
2023-01-11  7:15   ` [PATCH v2 5/6] common/idpf: add alarm to support handle vchnl message Mingxia Liu
2023-01-11  7:15   ` [PATCH v2 6/6] common/idpf: add xstats ops Mingxia Liu
2023-01-18  7:14   ` [PATCH v3 0/6] add idpf pmd enhancement features Mingxia Liu
2023-01-18  7:14     ` [PATCH v3 1/6] common/idpf: add hw statistics Mingxia Liu
2023-02-01  8:48       ` Wu, Jingjing
2023-02-01 12:34         ` Liu, Mingxia
2023-01-18  7:14     ` [PATCH v3 2/6] common/idpf: add RSS set/get ops Mingxia Liu
2023-02-02  3:28       ` Wu, Jingjing
2023-02-07  3:10         ` Liu, Mingxia
2023-01-18  7:14     ` [PATCH v3 3/6] common/idpf: support single q scatter RX datapath Mingxia Liu
2023-02-02  3:45       ` Wu, Jingjing
2023-02-02  7:19         ` Liu, Mingxia
2023-01-18  7:14     ` [PATCH v3 4/6] common/idpf: add rss_offload hash in singleq rx Mingxia Liu
2023-01-18  7:14     ` [PATCH v3 5/6] common/idpf: add alarm to support handle vchnl message Mingxia Liu
2023-02-02  4:23       ` Wu, Jingjing
2023-02-02  7:39         ` Liu, Mingxia [this message]
2023-02-02  8:46           ` Wu, Jingjing
2023-01-18  7:14     ` [PATCH v3 6/6] common/idpf: add xstats ops Mingxia Liu
2023-02-07  9:56     ` [PATCH v4 0/6] add idpf pmd enhancement features Mingxia Liu
2023-02-07  9:56       ` [PATCH v4 1/6] common/idpf: add hw statistics Mingxia Liu
2023-02-07  9:56       ` [PATCH v4 2/6] common/idpf: add RSS set/get ops Mingxia Liu
2023-02-07  9:56       ` [PATCH v4 3/6] common/idpf: support single q scatter RX datapath Mingxia Liu
2023-02-07  9:56       ` [PATCH v4 4/6] common/idpf: add rss_offload hash in singleq rx Mingxia Liu
2023-02-07  9:57       ` [PATCH v4 5/6] common/idpf: add alarm to support handle vchnl message Mingxia Liu
2023-02-07  9:57       ` [PATCH v4 6/6] common/idpf: add xstats ops Mingxia Liu
2023-02-07 10:08       ` [PATCH v4 0/6] add idpf pmd enhancement features Mingxia Liu
2023-02-07 10:08         ` [PATCH v5 1/6] common/idpf: add hw statistics Mingxia Liu
2023-02-07 10:16           ` [PATCH v6 0/6] add idpf pmd enhancement features Mingxia Liu
2023-02-07 10:16             ` [PATCH v6 1/6] common/idpf: add hw statistics Mingxia Liu
2023-02-08  2:00               ` Zhang, Qi Z
2023-02-08  8:28                 ` Liu, Mingxia
2023-02-07 10:16             ` [PATCH v6 2/6] common/idpf: add RSS set/get ops Mingxia Liu
2023-02-07 10:16             ` [PATCH v6 3/6] common/idpf: support single q scatter RX datapath Mingxia Liu
2023-02-07 10:16             ` [PATCH v6 4/6] common/idpf: add rss_offload hash in singleq rx Mingxia Liu
2023-02-07 10:16             ` [PATCH v6 5/6] common/idpf: add alarm to support handle vchnl message Mingxia Liu
2023-02-07 10:16             ` [PATCH v6 6/6] common/idpf: add xstats ops Mingxia Liu
2023-02-08  0:28             ` [PATCH v6 0/6] add idpf pmd enhancement features Wu, Jingjing
2023-02-08  7:33             ` [PATCH v7 " Mingxia Liu
2023-02-08  7:33               ` [PATCH v7 1/6] net/idpf: add hw statistics Mingxia Liu
2023-02-08  7:33               ` [PATCH v7 2/6] net/idpf: add RSS set/get ops Mingxia Liu
2023-02-08  7:33               ` [PATCH v7 3/6] net/idpf: support single q scatter RX datapath Mingxia Liu
2023-02-08  7:33               ` [PATCH v7 4/6] net/idpf: add rss_offload hash in singleq rx Mingxia Liu
2023-02-08  7:34               ` [PATCH v7 5/6] net/idpf: add alarm to support handle vchnl message Mingxia Liu
2023-02-08  7:34               ` [PATCH v7 6/6] net/idpf: add xstats ops Mingxia Liu
2023-02-08  9:32               ` [PATCH v7 0/6] add idpf pmd enhancement features Zhang, Qi Z
2023-02-07 10:08         ` [PATCH v5 2/6] common/idpf: add RSS set/get ops Mingxia Liu
2023-02-07 10:08         ` [PATCH v5 3/6] common/idpf: support single q scatter RX datapath Mingxia Liu
2023-02-07 10:08         ` [PATCH v5 4/6] common/idpf: add rss_offload hash in singleq rx Mingxia Liu
2023-02-07 10:08         ` [PATCH v5 5/6] common/idpf: add alarm to support handle vchnl message Mingxia Liu
2023-02-07 10:08         ` [PATCH v5 6/6] common/idpf: add xstats ops Mingxia Liu
