16:03:01 #startmeeting networking_ml2
16:03:01 Meeting started Wed Apr 8 16:03:01 2015 UTC and is due to finish in 60 minutes. The chair is Sukhdev. Information about MeetBot at http://wiki.debian.org/MeetBot.
16:03:02 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
16:03:04 The meeting name has been set to 'networking_ml2'
16:03:31 amotoki: it seems that way - on the west coast people are still waking up :-)
16:03:45 #topic: Agenda
16:03:52 #link: https://wiki.openstack.org/wiki/Meetings/ML2#Meeting_April_8.2C_2015
16:04:15 #topic: Announcements -
16:04:42 Kilo is in the final stages - RC1 is due any day now
16:04:53 The final release is due on 30th April
16:05:33 Kyle has already scheduled the Liberty mid-cycle sprint
16:05:56 generally the mid-cycle sprint is announced at the summit - but he has already planned it
16:06:38 It is going to take place in Colorado, June 24-26
16:06:59 I thought I'd pass this information along, as some of us attended the last sprint
16:07:14 If you plan to attend, this gives you a lot of planning time
16:07:19 Wondering including sync/error/tasks in this sprint would make sense?
16:07:29 Wondering if ...
16:07:57 rkukura: we can bring it up with mestery and see if he likes that idea
16:08:07 though I could not attend the last code sprint, AFAIK small groups are formed per topic.
16:08:17 amotoki: right
16:08:17 so I think it is a good candidate.
16:08:29 amotoki: correct
16:08:39 would depend on whether the right people could make it
16:08:40 The Liberty laundry list includes splitting out ML2, not sure if that will be worked on at the sprint.
16:08:58 HenryG: To a separate repo?
16:09:35 rkukura: https://etherpad.openstack.org/p/liberty-neutron-summit-topics
16:09:43 should we decide at the summit what we may have for the mid-cycle?
16:09:52 rkukura: #3
16:10:27 HenryG: so, this could still be included in the agenda
16:10:31 ok, slightly obfuscated
16:10:33 i was hoping sync/tasks would be done at the summit -
16:10:56 shivharis: This is a code sprint - the design should be agreed at the summit
16:11:17 rkukura HenryG: shall we discuss this with mestery?
16:11:20 ah.. ok
16:12:01 Sukhdev: I think we should run both ideas (split and/or task stuff) by mestery
16:12:02 Sukhdev: I think mestery is still asking whether it makes sense. please provide feedback
16:12:55 #action: rkukura and Sukhdev to discuss task flow during the mid-cycle sprint with mestery
16:13:18 Anybody have any other announcements?
16:13:24 * Sukhdev waiting
16:13:31 actually regarding kilo...
16:13:35 what is the cutoff for Release Notes for kilo, and where does one find this document?
16:14:30 also - is this an OpenStack-wide document or Neutron-specific?
16:14:41 HenryG amotoki: do you know?
16:15:25 no specific info from me. in past releases a wiki page was created
16:15:37 and everyone can add appropriate information.
16:15:55 Best to check with mestery
16:16:03 when you want to deprecate code, it goes into the release notes for one cycle and is then removed after that..
16:16:12 HenryG: ok, will do, thanks
16:16:46 #action: shivharis to find the information about release notes for Kilo and share it with the team
16:16:59 anything else?
16:17:20 #topic: Action items from last week
16:17:29 we have a few action items
16:17:34 will be absent next week, will monitor bugs; if there is an issue I will jump in..
16:18:00 shivharis: you had an action - want to provide an update?
16:18:18 apologies, will take care of it today...
16:18:25 no update
16:18:43 shivharis to buy coffee for everybody :-):-)
16:19:02 ok, next week coffee is on me
16:19:04 rkukura: You had an action - want to update?
16:19:26 complete - item #40 - others can embellish if desired
16:19:47 rkukura: cool - thanks.
16:20:00 manish does not seem to be here
16:20:20 we'll skip his action and leave it on the agenda for next week -
16:20:41 #topic: Mechanism Drivers and DVR discussion
16:20:48 rkukura: Can you please lead this?
16:20:53 sure
16:21:08 I kind of outlined it in the agenda - will quickly summarize here
16:21:53 In working on the DVR schema/logic cleanup, which won't make kilo, I discovered much of the DVR distributed port binding is not covered at all by unit tests
16:22:08 I could comment out important code and all tests still passed!
16:22:44 really bad news :-
16:23:00 So I started extending the test_port_binding tests, which use mechanism_test with its asserts on the PortContext state, to cover the key DVR functions
16:23:21 In doing this, I discovered a number of issues
16:23:36 armax carl_baldwin are you guys around? Perhaps you want to join this discussion
16:24:10 I'm thinking it makes sense to merge the tests and fixes for these issues either for a kilo RC, or soon after kilo for back-port
16:24:25 I discussed this a bit with mestery yesterday
16:25:19 rkukura: most of the unit coverage for DVR is useless
16:25:21 So today here I'd like to try to get consensus on the fixes and whether it's reasonable to get them into kilo one way or the other
16:25:33 rkukura: we're striving to provide functional coverage instead
16:25:38 which is way more effective
16:25:46 and we're working on multi-node support
16:25:51 that's the real silver bullet
16:25:59 granted, unit coverage is poor
16:26:24 but the same can be said for many other areas of Neutron
16:26:28 armax: I completely agree we need functional coverage, but I think we can extend the current port binding unit tests to cover DVR cases
16:26:39 as for DVR in particular, we can further increase unit coverage
16:26:57 but even with the coverage we have, many issues still crept in because they are mostly integration issues
16:27:05 armax: that's the idea - my point here is to discuss some issues that have turned up
16:27:11 rkukura: yes, agreed
16:27:38 rkukura: I am just giving you an update as to why you might perceive that that area has been neglected
16:27:50 First, is anyone aware of any ToR switch or controller MDs that have been tested with DVR?
16:28:03 armax: appreciated!
16:28:22 rkukura: but I can only welcome your help and support in cleaning up the area!
16:28:48 armax: Wish I could do it more quickly, but I'm plugging along at it
16:29:00 rkukura: We tried to test DVR with the Arista driver - ran into one issue: the host-id in the update-port is sometimes not correct
16:29:15 Sukhdev: That sounds like one of the issues I hit
16:29:43 rkukura: I wanted to test/verify before opening a bug for this
16:30:06 The mechanism_test driver has asserts for what it thinks the rules are for the state of the PortContext passed to the MDs
16:30:35 I've added some asserts and some logic to this to cover the DVR cases (i.e. binding:vif_type of 'distributed')
16:31:30 My view is that, for DVR distributed ports, update operations should be seen by MDs either as a host-independent update or as a host-specific update
16:31:55 So something like changing the name of a port would be a simple host-independent update, just like for non-DVR ports
16:32:21 The MDs would see vif_type='distributed' and host_id='' for these
16:32:29 And that much works fine
16:32:57 But when a port binding is committed, the state should completely reflect the host-specific state
16:33:23 rkukura: an empty host_id does not help us. We need to know which host the port is being placed on so that we can plumb the networking at the ToR
16:33:44 In this case, the vif_type should transition from 'unbound' to a real value, the binding_levels should be there, etc.
16:34:31 Sukhdev: Right, when the binding is committed, host_id should specify which host is being bound, in both the previous and current dicts
16:35:11 rkukura: you mean pass the host_id in bind_port() as opposed to update_port()?
16:35:29 Sukhdev: Both
16:35:53 rkukura: if it is in both, that is great and will work just fine
16:35:59 My patch has a straightforward fix to the current code for the host_id in the update_port following the commit of a binding
16:36:19 First, does anyone see any issue with that particular fix?
16:36:45 rkukura: Let me review your patch (I have not done so already) - perhaps this will solve the issues that we are observing
16:37:25 is there a patch out there or is this your thinking?
16:37:30 the logic sounds right
16:37:35 I haven't posted the patch yet, or even a bug report - trying to get some consensus here on what the proper behaviour should be first
16:38:09 rkukura: no wonder I can't find it :-)
16:38:14 OK, so the trickier part is with the port_update that occurs during get_device_details to change the port status
16:38:21 rkukura: +1 for the approach
16:38:56 Here, the port's overall status is updated to some combination of all the host-specific port statuses
16:39:19 It's occurring during an RPC that is specific to one host
16:39:31 rkukura: in terms of timing, if we acted on update_port() I hope it is not too late to act
16:39:54 but the PortContext contents are a mish-mash of host-specific and host-independent data
16:41:00 So I need to decide whether to call update_port methods with host-specific status changes, or only when the overall port status changes, or maybe somehow do both
16:41:41 This really boils down to: what do the MDs need?
16:42:20 rkukura: we act on update_port(), and if the host-specific information comes late it may be problematic for us
16:42:55 The current code calls update_port methods on the MDs for every individual host's status changes, so I'm leaning towards just fixing it to have the PortContext reflect that specific host's state - i.e. have that host's vif_type rather than 'distributed', etc.
16:43:36 that should work for us…. How about others?
16:43:52 Sukhdev: I think the update_port for the commit of the port binding is likely what you act on, or do you definitely act on the port status change that follows when the agent makes the get_device_details RPC?
16:44:37 rkukura: I will go back and check to be sure
16:44:44 Sukhdev: thanks
16:45:15 i think this is going to be too much update information that may not be useful - but more info is alright if some MD can use it
16:45:15 I'd appreciate it if other MD maintainers that care about DVR also check into this.
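
[Editor's sketch] A minimal Python illustration of the PortContext rules rkukura describes above, in the spirit of the asserts the mechanism_test driver makes. The helper function and the inline constants are hypothetical; only the host-independent vs. committed-binding distinction comes from the discussion, and the check on binding_levels in the distributed view is an assumption.

    # Illustrative only: hypothetical helper capturing the PortContext rules
    # discussed above for DVR distributed ports (not actual Neutron code).

    VIF_TYPE_DISTRIBUTED = 'distributed'  # portbindings.VIF_TYPE_DISTRIBUTED
    VIF_TYPE_UNBOUND = 'unbound'          # portbindings.VIF_TYPE_UNBOUND


    def check_dvr_port_context(context):
        """Assert what a mechanism driver should see for a DVR port.

        Host-independent updates (e.g. renaming the port) carry the
        'distributed' vif_type and an empty host; once a host-specific
        binding is committed, the context should instead reflect that
        host's real vif_type, host, and binding_levels.
        """
        if context.vif_type == VIF_TYPE_DISTRIBUTED:
            # Host-independent view of the distributed port.
            assert context.host == '', 'no host expected in distributed view'
            # Assumed: per-host binding_levels are absent in this view.
            assert not context.binding_levels
        elif context.vif_type != VIF_TYPE_UNBOUND:
            # A host-specific binding has been committed.
            assert context.host, 'committed binding must name the bound host'
            assert context.binding_levels, 'binding_levels should be populated'

In the real ML2 driver API such checks would sit inside a MechanismDriver's update_port_precommit()/update_port_postcommit(), which is where mechanism_test validates the PortContext state mentioned above.
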
16:45:52 rkukura: an email to the ML may be needed
16:45:54 For now, I'll plan to fix the per-host status update to have consistent state, and we could look at changing it later if needed
16:46:06 banix: makes sense, along with filing a bug
16:46:16 hoping to wrap up this patch this week
16:46:51 shivharis: I think this will be needed to plumb the physical part of tun interface connectivity
16:47:00 i think mechanism drivers may want to be called when the port status changes from an RPC. it is not DVR-specific and may be another topic.
16:47:02 I don't want to take too much of the meeting with this, but think we covered the basics and can follow up on email and gerrit
16:47:27 amotoki: My overall goal is to absolutely minimize the special-casing for DVR
16:47:59 My other patch (for liberty now) makes the non-DVR case work just like DVR, but with only a single host
16:48:24 So bringing the DVR behavior into line with the non-DVR behavior really helps
16:48:45 rkukura: thank you for driving this
16:48:51 really sounds reasonable.
16:49:01 I think we can move on unless anyone has something to add
16:49:19 rkukura: Thanks for the update
16:49:21 rkukura: that's great
16:49:34 shall we move on?
16:49:39 armax: sound OK to you so far?
16:50:03 * Sukhdev waiting
16:50:36 #topic: ML2 Drivers Decomposition
16:50:51 I think we are in good shape on this subject
16:51:02 I kept it on the agenda just to be sure
16:51:13 Let's think about what we'd want to cover at the summit on this
16:51:27 Anybody have any questions or any information to share on this topic?
16:52:22 rkukura: HenryG hinted at the idea of moving ML2 out of the tree
16:53:02 * Sukhdev time check
16:53:12 Sukhdev, HenryG: I'm hoping this would mean moving it to a separate repo within the neutron project
16:53:28 rkukura: correct
16:53:39 Definitely a good topic for the summit
16:53:48 along with the L2 agents, etc.
16:54:16 shivharis has an action to add this to the summit topics - perhaps this can be added there
16:54:32 ok will do
16:54:42 that as well
16:54:54 Since Manish is not here, I am going to skip task flow
16:55:01 #topic: Bugs
16:55:10 shivharis: anything quick?
16:55:16 bugs look fine at this stage for the kilo release
16:55:17 we have 5 min
16:55:30 if anyone has any issues please raise them now..
16:55:51 i will be off next week bug will keep a tab on the bugs in case some showstopper shows up
16:55:59 manishg just emailed that he missed last week and this week, and maybe next week due to jury duty
16:56:06 s/bug/but/
16:56:45 that's all from me... moving on..
16:56:47 thanks
16:56:55 buts? :)
16:57:23 GLaupre: :-):-) shivharis had too much coffee this morning :-)
16:57:30 ha ha
16:57:37 i do not do s/but/bug/g - the 'g' was missing, so it only applies to the first one
16:57:45 right
16:57:54 shivharis: thanks for the update
16:58:03 #topic: Open Discussion
16:58:19 we have 2 mins - anybody want to discuss anything?
16:58:35 like how is the weather outside?
16:59:00 was hoping for spring to start around here… not there yet though!
16:59:06 Sukhdev: I can see parts of my yard again :)
16:59:54 Folks, thanks for attending today's meeting - it was very informative and productive
17:00:02 Thanks Sukhdev!
17:00:04 #endmeeting