16:01:51 #startmeeting networking_ml2
16:01:52 Meeting started Wed Feb 19 16:01:51 2014 UTC and is due to finish in 60 minutes. The chair is mestery. Information about MeetBot at http://wiki.debian.org/MeetBot.
16:01:53 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
16:01:56 The meeting name has been set to 'networking_ml2'
16:02:04 hi
16:02:08 #link https://wiki.openstack.org/wiki/Meetings/ML2 ML2 Meeting Agenda
16:02:09 Hi
16:02:20 Great to see a good turnout for this meeting!
16:02:29 Hi
16:02:38 We'll get started now and hope rkukura joins us soon.
16:02:44 hi
16:02:47 rkukura: Hey :)
16:02:52 #topic Action Item Review
16:02:56 hi rkukura
16:03:02 hi mestery
16:03:06 Hi trinaths
16:03:19 rkukura: You had an action to move the port binding discussion to a Google Doc. Any update on that?
16:03:27 yes
16:04:11 I've put some of the email text into https://docs.google.com/document/d/1k8tAqfQr8Ujzx5TzpXTYEUaym-0U-pE3YL9wjrS8cDg/edit#heading=h.cpck9ip3dbdv
16:04:52 #link https://docs.google.com/document/d/1k8tAqfQr8Ujzx5TzpXTYEUaym-0U-pE3YL9wjrS8cDg/edit#heading=h.cpck9ip3dbdv Port Binding Google Doc
16:04:56 Thanks rkukura!
16:04:57 And I plan to reorganize it into a more cohesive description that could later get incorporated into the ML2 developer documentation.
16:05:14 Does the link work?
16:05:19 yes
16:05:28 yes, thanks rkukura!
16:05:37 #action ML2 team to review Port Binding document and comment inline
16:05:45 rkukura: Do you think we could collaborate on developer docs?
16:06:02 I do plan to start implementing this today.
16:06:09 I've got a patch in that is trying to expand it - https://review.openstack.org/#/c/72428/
16:06:17 sc68cal: Sure
16:06:52 rkukura, start implementing https://bugs.launchpad.net/neutron/+bug/1276395
16:06:52 sorry to threadjack - have been spending time onboarding new comcasters
16:07:03 ?
16:07:05 sc68cal: No worries man.
16:07:28 rcurran: yes, along with the transactionality
16:07:38 great, thanks
16:07:42 OK, next action item.
16:07:44 matrohon to file BP for multi-node testing and bring this up in the infra meeting
16:07:48 matrohon: Any updates?
16:07:52 yes
16:08:02 rcurran: I'll look at whether it's reasonable to do them as separate patches
16:08:20 https://blueprints.launchpad.net/devstack/+spec/lxc-computes
16:08:27 ok, and these can go in for I-3?
16:08:36 we decided to file a BP in devstack
16:08:43 #link https://blueprints.launchpad.net/devstack/+spec/lxc-computes Multi-node testing BP
16:08:51 matrohon: Great! This looks pretty cool!
16:08:51 and use the new feature of devstack in a gate job
16:09:02 we talked about that in the infra meeting
16:09:17 rcurran: That's the goal
16:09:21 matrohon: Great! This looks promising.
16:09:42 OK, thanks for the action item updates matrohon and rkukura
16:09:47 #topic Migration Tool
16:09:58 #link https://blueprints.launchpad.net/neutron/+spec/ml2-deprecated-plugin-migration Migration BP
16:10:00 there seem to be other ways to implement multi-node (tripleo, nodepool, zuul improvements) but we'll continue with lxc first
16:10:14 In the team meeting, marun volunteered to work on this with assistance from rkukura.
16:10:25 This is a critical BP, as without it, we can't deprecate the Linuxbridge and OVS plugins.
16:10:38 Thanks to marun for volunteering to pick this up and rkukura for assisting here!
16:11:10 markmcclain has indicated this can get an extension past yesterday's feature proposal deadline, but we need to get it in review soon
16:12:03 #action marun and rkukura to proceed with migration BP and get an extension from markmcclain
16:12:07 Thanks rkukura.
16:12:14 Any questions on migration?
16:13:17 Is the target of the migration to migrate from havana ovs/lb to icehouse ml2, or icehouse ovs/lb to icehouse ml2?
16:13:48 amotoki_: Good question! I think the target is icehouse ovs/lb to icehouse ML2, right rkukura?
16:14:18 amotoki_, mestery: I think it is current monolithic schema -> current ML2 schema
16:14:31 version migrations can be handled independently that way
16:14:43 but marun may have other ideas
16:15:32 amotoki_: Does that answer your question?
16:15:47 mestery: yes. we can discuss and collect comments later and check the migration plan.
16:16:01 Great, thanks amotoki_.
16:16:07 Moving on to the next topic.
16:16:13 #topic ML2 SR-IOV BPs
16:16:25 rkukura and irenab have posted 3 patches for review.
16:16:30 I think all 3 are very close to going in.
16:16:39 See the meeting agenda for BP links, which have review links as well.
16:16:43 rkukura: Anything to add here?
16:17:15 nope, except that the vif-details patch relates to the VIF security work nachi was doing, and that needs to get completed
16:17:37 rkukura: You mean Nachi's end of your patch?
16:18:12 we need nova to propagate binding:vif_details to the VIF driver
16:18:19 rkukura: Got it.
16:18:27 rkukura: Is there a patch in review for that yet?
16:18:45 Then we need Nachi's work on using keys in this to control security groups, etc., which needs work on both neutron and nova
16:19:28 nachi's patch is https://review.openstack.org/#/c/21946/
16:19:47 OK, so it sounds like there is some work left here then.
16:19:48 was looking for that
16:20:06 That's a lot of interrelated patches. Can they be listed together in the agenda?
16:20:28 I think beagles may be willing to help out on this if needed, since he's very familiar with the nova side
16:20:41 HenryG: I'll work on getting those right in the agenda :)
16:21:14 add this patch too: https://bugs.launchpad.net/neutron/+bug/1274034
16:21:27 patch attached to the bug for the moment
16:21:36 the nova-side patch for vif security is https://review.openstack.org/#/c/44596/
16:22:50 Thanks for the patch links folks.
16:22:56 would be great to get the generic neutron vif-details and binding-profile patches merged soon, so the VIF security and SR-IOV work can easily proceed
16:23:01 I just updated the SR-IOV section with all these, will clean them up a bit post-meeting.
16:23:09 rkukura: Agreed.
16:23:51 we'd then be able to track the VIF security and SR-IOV stuff independently
16:24:08 rkukura: Agreed.
16:24:15 rkukura: Are those two patches the ones next on the agenda by any chance?
16:24:21 Under port binding bug fixes?
16:24:48 no, the port binding patches are the ones described by the Google Doc AI
16:25:05 Too many patches to keep my head straight :)
16:25:29 rkukura: Can you do "#link" for the two patches you are referencing? Would really help me at least. :)
16:26:06 the vif-details and ml2-binding-profile patches?
16:26:31 Yes, the ones which enable the other stuff.
16:26:39 #link https://review.openstack.org/#/c/72452/ (binding:vif_details)
16:26:51 #link https://review.openstack.org/#/c/73500/ (binding:profile)
16:26:54 Thanks rkukura!
16:27:01 Review feedback by ML2 folks on those two would be great!
16:27:10 I've reviewed them and they are pretty much ready to merge IMHO.
16:27:20 and this for SR-IOV #link https://review.openstack.org/#/c/72334/ (request vnic_type)
16:28:01 irenab: was just going to mention that one again
16:28:16 Yes, all 3 of those look pretty ready to merge I think.
16:28:21 Let's see if we can make that happen by Friday.
16:28:52 Should we move on to the next item on the agenda?
16:28:57 Note that an MDs that bind ports will need to look to make sure the binding:vnic_type attribute is something they handle
16:29:11 s/an MDs/any MDs/
16:30:41 #action All ML2 MechanismDrivers which bind ports should check that the binding:vnic_type attribute is something they handle.
16:30:43 Thanks rkukura.
16:30:49 Is multiprovidernet and opening up the network segment attributes next?
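[Editor's note] The action item above says any mechanism driver that binds ports must check the requested binding:vnic_type. A minimal sketch of what that guard could look like, assuming a simplified driver and context (SketchMechanismDriver, FakeContext, and the set_binding signature here are illustrative stand-ins, not the real neutron ML2 API):

```python
# Illustrative sketch only: an ML2-style mechanism driver declining to
# bind ports whose requested vnic_type it does not support.

VNIC_NORMAL = "normal"
VNIC_DIRECT = "direct"  # e.g. an SR-IOV passthrough request


class SketchMechanismDriver:
    """Toy driver that only knows how to bind 'normal' vNICs."""

    SUPPORTED_VNIC_TYPES = {VNIC_NORMAL}

    def bind_port(self, context):
        # Default to 'normal' when the attribute is absent.
        vnic_type = context.current.get("binding:vnic_type", VNIC_NORMAL)
        if vnic_type not in self.SUPPORTED_VNIC_TYPES:
            # Decline; another driver in the chain may handle it.
            return False
        context.set_binding(segment_id="seg-1",
                            vif_type="ovs",
                            vif_details={"port_filter": True})
        return True


class FakeContext:
    """Stand-in for the PortContext a real driver would receive."""

    def __init__(self, port):
        self.current = port
        self.bound = None

    def set_binding(self, segment_id, vif_type, vif_details):
        self.bound = (segment_id, vif_type, vif_details)
```

A driver that also handled SR-IOV would instead add "direct" (and possibly "macvtap") to its supported set and choose vif_type/vif_details accordingly.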
16:30:57 sure
16:31:12 rkukura: Did we do the port binding patches?
16:31:16 I think so but confirm if not.
16:31:22 Then we'll move to multiprovidernet
16:32:20 We mentioned the proposal and Google Doc under AIs. There are no patches for these two bugs yet. I did follow up to two emails on the openstack-dev thread.
16:32:52 rkukura: did you consider this thread for port_bindings:
16:32:57 https://www.mail-archive.com/openstack-dev@lists.openstack.org/msg16417.html
16:33:30 matrohon: Can you summarize how it might relate?
16:34:16 nova should boot up an instance once its port is bound by an MD
16:34:50 the port gets active only if it is bound
16:35:41 I think this could avoid the race condition I mentioned on the ML about port-binding
16:35:48 Doesn't nova have to boot the instance so that the vNIC will get created, the L2 agent will see it, and then the port gets active?
16:36:20 i think the original idea to check port status does not work because a port is plugged into the bridge in some vif drivers.
16:36:22 rkukura: With libvirt, that is exactly how it's done, yes.
16:37:19 rkukura: checking port status before booting could avoid a lot of race conditions
16:37:21 is the goal of that thread to provide synchronization so that tempest can SSH into the VM as soon as it boots?
16:38:09 this is one goal, but to me, it would clarify the call-flow between nova and neutron
16:38:31 If so, it seems the harder part of that problem is making sure the dhcp-agent knows about the port and that the DHCP handshake completes. None of this is reflected in the port status, is it?
16:39:53 rkukura: yes, but don't you think it makes sense to boot a VM only if its port is active (and bound in the ML2 case)?
16:40:11 I guess if "plug_vifs" can get done before the VM boots, then waiting for the ports to become active might make sense.
16:41:21 One small step would be to refuse to boot the VM if binding:vif_type is 'unbound' or 'binding_failed'. I'm not sure waiting for this helps.
16:41:36 matrohon: shouldn't port status represent its operational state?
16:42:18 rkukura: fine for a good first step
16:42:45 I definitely need to read and understand arosen's BP and patches
16:43:20 irenab: I don't think a port is operational if it is not bound by an MD
16:43:35 But it sounds like it's a good idea to wait for the status to be active if the vNIC can be plugged after the port is bound but before booting the VM
16:44:44 So someone creates the port, nova schedules to a compute node and sets binding:host_id, ML2 binds, nova gets binding:vif_type, nova plugs the port (creating a tap device or whatever), nova waits for the status to become active, then nova boots the VM?
16:44:46 rkukura: it may be relevant for segmentation_id configuration as well
16:45:49 rkukura: sounds correct, the L2 agent updates the status to active once properly configured
16:46:21 matrohon: Do you see any implications for the port binding proposal?
16:46:28 rkukura: but with the binding outside of create_port you have to tell nova once the binding has finished
16:47:05 rkukura: this seems to be the direction taken by arosen
16:47:11 matrohon: If nova passes binding:host_id to create_port, binding should be finished when nova gets the port dict that's returned
16:47:42 If nova instead passes binding:host_id via update_port, then binding should be finished when nova gets the port dict returned from that.
16:48:31 nova needs the binding:vif_type and binding:vif_details from the bound port dict to know how its GenericVIFDriver should plug the port.
16:48:34 rkukura: it's up to the MD response, no?
16:49:30 matrohon: When port binding succeeds, the bound MD supplies the binding:vif_type and binding:vif_details values.
16:49:53 And those get returned to nova, which needs to stash them in the VIF object for the VIF driver to use.
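[Editor's note] The sequence rkukura confirms at 16:44:44 (set binding:host_id, bind, read binding:vif_type, plug the VIF, wait for ACTIVE, then boot) can be sketched as a single function, including the "small step" of refusing to boot an unbound port. This is an illustration of the call-flow discussed, not the real nova/neutron code; all function names here are hypothetical:

```python
# Hypothetical walkthrough of the boot ordering discussed above.
# bind_fn / plug_fn / wait_active_fn stand in for the neutron binding
# call, nova's GenericVIFDriver plug, and the L2-agent status update.

def boot_vm(port, bind_fn, plug_fn, wait_active_fn):
    # nova schedules the instance and records the chosen host.
    port["binding:host_id"] = "compute-1"

    # ML2 attempts the binding; the bound MD fills in binding:vif_type
    # (and binding:vif_details) in the returned port dict.
    bind_fn(port)

    # The "small step" from 16:41:21: refuse to boot an unbound port.
    if port.get("binding:vif_type") in (None, "unbound", "binding_failed"):
        raise RuntimeError("refusing to boot: port is not bound")

    # nova plugs the VIF (creating the tap device or equivalent) ...
    plug_fn(port)
    # ... and waits for the L2 agent to wire it and mark the port ACTIVE.
    wait_active_fn(port)

    return "booted:%s" % port["binding:vif_type"]
```

The ordering matters: plugging happens only after a successful binding (so nova knows *how* to plug), and the wait on ACTIVE only helps if, as discussed at 16:40:11, the plug can complete before the VM actually starts.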
16:50:51 rkukura: I see some potential race conditions, if the MD doesn't respond quickly, because nova creates its VIF object based on port_list
16:51:06 AFAIK :)
16:51:22 rkukura, matrohon: when neutron returns vif_details, can the port status be set to active - which is what nova is looking for?
16:51:44 matrohon: not sure what you mean by "based on port_list"
16:52:34 Sukhdev: I think port status is supposed to indicate that L2 connectivity is available, which needs to be based on the L2 agent (when used) having set things up
16:52:52 neutron_client.port_list; I will dig into that and update the Doc or email you
16:54:09 matrohon: Thanks. One point of confusion may be that with the port binding proposal, when there are two transactions surrounding an attempt to bind, the results from the operation are not returned until after the 2nd transaction has committed.
16:55:11 matrohon: And I think that would apply even if the binding is occurring as part of a "port list" operation.
16:56:13 mestery: Anything else on the agenda - 4 minutes left
16:56:19 Yes :)
16:56:20 Thanks rkukura
16:56:22 rkukura: oh, that's the confusing point I think. I didn't notice that the binding occurs during get_port or port_list!
16:56:23 #topic Open Discussion
16:56:34 So, wanted to leave time for questions from MD writers, etc.
16:56:41 We have 5 minutes left or so.
16:56:43 anything?
16:56:50 irenab: i have a question on the "vnic_type" patch.
16:57:02 my question is about naming.
16:57:12 what does "vnic" mean? There are many cases where vnic and vic are used with the same meaning. I would like to clarify its meaning.
16:57:20 matrohon: The basic idea is that no port dict will be returned without an existing valid binding, or at least attempting to bind and failing as part of the operation returning the dictionary.
16:57:23 mestery, rkukura: need a few review cycles from you to review https://review.openstack.org/#/c/73482/
16:57:38 rkukura: thanks
16:57:56 Sukhdev: Will do.
16:58:06 amotoki: vnic means VM VIF
16:58:18 Sukhdev: OK
16:58:22 s/vnic and vic/vnic and vif/
16:58:48 irenab: we already have vif_type and the patch introduces vnic_type.
16:59:03 My understanding is that vnic_type is what the VM sees - i.e. what kind of driver it will use for the interface.
16:59:27 amotoki: yes, it's the requested type for the VM interface
16:59:32 While vif_type is the host's side of it.
16:59:32 mestery, rkukura: Thanks
16:59:59 thanks for the clarification. it is worth documenting.
17:00:14 OK, we're out of time now.
17:00:17 Thanks to everyone for joining us!
17:00:23 Keep those reviews coming this week!
17:00:24 #endmeeting
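[Editor's note] The closing exchange distinguishes binding:vnic_type (what the instance requests for its interface) from binding:vif_type (what the host-side binding resolves to). A toy mapping makes the relationship concrete; the table values here are made-up examples for illustration, not the real ML2 defaults:

```python
# Toy illustration: the guest requests a vnic_type; the bound
# mechanism driver answers with a host-side vif_type.
REQUEST_TO_HOST_VIF = {
    "normal": "ovs",       # ordinary virtio NIC wired via Open vSwitch
    "direct": "hw_veb",    # SR-IOV VF passed through to the guest
    "macvtap": "hw_veb",   # VF attached via a macvtap device
}


def resolve_vif_type(vnic_type):
    """Return the host-side vif_type for a requested vnic_type,
    or 'binding_failed' when no driver can satisfy the request."""
    return REQUEST_TO_HOST_VIF.get(vnic_type, "binding_failed")
```

In other words, one attribute travels guest-to-host as a request and the other travels host-to-guest as the binding result, which is why both exist side by side.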