16:03:59 #startmeeting containers
16:04:00 Meeting started Tue Nov 24 16:03:59 2015 UTC and is due to finish in 60 minutes. The chair is adrian_otto. Information about MeetBot at http://wiki.debian.org/MeetBot.
16:04:01 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
16:04:04 The meeting name has been set to 'containers'
16:04:07 #link https://wiki.openstack.org/wiki/Meetings/Containers#Agenda_for_2015-11-24_1600_UTC Our Agenda
16:04:13 #topic Roll Call
16:04:15 Adrian Otto
16:04:16 o/
16:04:18 o/
16:04:18 o/
16:04:20 o/
16:04:21 o/
16:04:22 o/
16:04:22 o/
16:04:27 o/
16:04:28 o/
16:04:34 o/
16:04:34 o/
16:04:37 o/
16:05:19 hello daneyon_ vilobhmm1 dimtruck rpothier muralia1 Kennan yolanda hongbin Tango jimmyxian houming and eghobo
16:05:25 o/
16:05:30 #topic Announcements
16:05:52 first of all, I wanted to extend special thanks to hongbin for chairing our meeting last week.
16:06:01 o/
16:06:03 np
16:06:05 you da man hongbin
16:06:30 second, if you have any open bugs chances are you noticed a bunch of activity yesterday and last night
16:06:43 1) Bug cleanup underway
16:06:59 o/
16:07:01 Most open bugs were targeted to a release
16:07:17 I'm planning to go through the Wishlist ones and see which ones should actually be un-targeted.
16:07:25 we closed over 350 bugs
16:07:53 2) magnum and python-magnumclient 1.1.0 released (but not announced)
16:08:05 great
16:08:12 o/
16:08:35 I need to take a moment to look in pypi to see if the client showed up there
16:09:19 o/
16:09:28 #link https://pypi.python.org/pypi/python-magnumclient/1.1.0 Our New Client
16:10:13 3) As daneyon_ mentioned on our ML, there will be no network subteam meeting this week due to a US holiday on Thursday (Thanksgiving)
16:10:22 any other announcements from team members?
16:11:13 if you notice problems with the tagged release, just let me know and I can tag another.
16:11:27 then when we think that code is solid, we can announce it
16:12:09 #topic Container Networking Subteam Update
16:12:12 #link http://eavesdrop.openstack.org/meetings/container_networking/2015 Previous Meetings
16:12:22 daneyon_: any remarks?
16:12:41 The big news here is that the Magnum CNM has been implemented for swarm bay types using the flannel network-driver
16:12:48 whoot!!
16:13:03 :)
16:13:20 awesome!
16:13:29 Cool!
16:13:40 The team has also decided not to implement the CNM using flannel for Mesos. This is b/c Mesos does not support etcd, which is required for flannel.
16:14:11 we had a lot of good discussion around containerizing flannel
16:14:31 this would be applicable to all services running on nodes
16:14:50 wanghua is working on this effort
16:15:00 #link https://blueprints.launchpad.net/magnum/+spec/run-kube-as-container
16:15:41 and Tango is working on option #3 of this patch to add flannel's host-gw mode
16:15:46 #link https://review.openstack.org/#/c/241866/
16:16:06 option #2 ?
16:16:35 the host-gw mode aligns nicely with our networking model (shared L3 for nodes) and should provide a non-overlay solution, improving networking performance.
16:17:13 Tango option #2 of https://review.openstack.org/#/c/241866/
16:17:49 Please let me know if I misunderstood our discussion from last week.
16:17:58 what does option #2 mean?
16:18:08 replace with host-gw ?
16:18:18 As adrian_otto mentioned, not meeting this week due to Thanksgiving.
16:18:32 daneyon_: what happened with "hongbin follows up with Daneyon for the submeeting discussion"?
16:18:41 Kennan: It means the user can specify different backend options: udp, vxlan, host-gw
16:18:46 are we continuing the subteam meetings, or merging back in here?
16:19:15 adrian_otto thanks for the reminder
16:19:32 Tango: is it tried with CNM labels
16:19:33 ?
16:19:48 We did have a brief discussion on whether to continue the subteam meetings now that the Magnum CNM is well underway.
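[Editor's sketch] The backend selection daneyon_ and Tango describe above, a baymodel label choosing among udp, vxlan, and host-gw, can be sketched as follows. The label key `flannel_backend`, the helper name, and the default are illustrative assumptions for discussion, not Magnum's actual implementation:

```python
# Illustrative sketch only: the label key "flannel_backend", this helper,
# and the "udp" default are assumptions, not Magnum's actual code.
VALID_FLANNEL_BACKENDS = {"udp", "vxlan", "host-gw"}


def flannel_backend_from_labels(labels, default="udp"):
    """Pick the flannel backend from baymodel labels, validating the choice.

    host-gw avoids encapsulation entirely, which is why it suits the
    shared-L3 node network mentioned in the discussion above.
    """
    backend = labels.get("flannel_backend", default)
    if backend not in VALID_FLANNEL_BACKENDS:
        raise ValueError("unsupported flannel backend: %r" % backend)
    return backend
```

A template could then feed the returned value into flannel's backend configuration; an unknown value fails fast at validation time rather than at bay-create time.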
16:19:54 Kennan: Yes it will be done with labels
16:20:02 I am indifferent to either direction.
16:20:14 I am ok either way
16:20:16 hello, sorry I'm late
16:20:18 ok Tango: will check your patch when ready
16:20:18 ok, so unless anyone feels differently, let's not change anything
16:20:20 What would the rest of the team like to do? Keep the meetings going or stop them?
16:20:43 daneyon_: as long as Stackers are showing up, we can keep them going
16:20:53 if attendance trails off, that will be our signal to stop
16:21:06 adrian_otto I think that's a good approach.
16:21:16 +1 on that
16:21:30 So far we still have good discussion
16:21:35 adrian_otto I think that's it.
16:21:40 agreed Tango
16:21:45 thanks daneyon_
16:21:45 #topic Review Action Items
16:21:50 1) hongbin follow up with Adrian Otto for the release of python-magnumclient
16:21:50 Status: COMPLETE (Thanks hongbin and adrian_otto!)
16:21:58 2) hongbin follows up with Daneyon for the submeeting discussion
16:21:58 Status: COMPLETE (Meetings will continue)
16:22:04 3) hongbin starts a ML to discuss the renamed file issue
16:22:08 did I miss this one?
16:22:14 Done
16:22:21 and I really appreciate everyone's involvement. I think we have come a long way in the last few months.
16:22:27 I sent a ML. The team gave feedback
16:22:31 can we get a #link to it for posterity's sake?
16:22:41 You replied adrian_otto!
16:22:50 daneyon_: thanks for your continued leadership.
16:22:58 :-)
16:23:04 tcammann: goes to show my memory is leaky
16:23:14 finding the link
16:23:38 #link http://lists.openstack.org/pipermail/openstack-dev/2015-November/079796.html
16:23:43 adrian_otto happens to us too
16:23:50 !
16:24:28 Status: COMPLETE (Thanks again hongbin!
<3)
16:24:46 #topic Blueprint/Bug Review
16:24:58 now, although I made like a thousand changes in the bug tracker
16:25:06 I have not finished with the blueprints yet
16:25:16 many of them have been approved, but not yet targeted to a release
16:25:26 we had an etherpad to list them
16:25:41 has anyone kept track of that link by chance (I can refer to notes otherwise)
16:26:12 #link https://etherpad.openstack.org/p/mitaka-magnum-planning
16:26:22 Kennan: thanks!!
16:26:23 thx Kennan
16:26:40 :)
16:26:57 #action adrian_otto to review https://etherpad.openstack.org/p/mitaka-magnum-planning and use it to target Mitaka blueprints
16:27:32 in order to actually do the targeting, it would be helpful for all directionally approved blueprints to get T-shirt size level-of-effort estimates
16:27:56 this is not something based on story points, just a ballpark guess.
16:28:56 so for the high/critical importance ones (including networking CNM stuff, probably) I will hound you for team updates in this section of our team meetings
16:29:19 with that said, are there any work items (bugs, or bp's) that require team discussion today?
16:30:07 This one: #link https://review.openstack.org/#/c/246609/
16:30:58 should we validate baymodel's attributes before bay-creation?
16:32:05 they might be changed after the baymodel is created
16:32:31 I think houming means at the point of bay-create
16:32:38 so if I add a baymodel that refers to image-id 1234, and then I delete image 1234, then bay creates on that baymodel will fail
16:33:14 but bays actually rely on baymodels
16:33:19 Yes, at the point of bay-creation, before it sends the request to heat, we check the existence of OpenStack resources.
16:33:44 a bay without a corresponding baymodel will not work either. That's where improvement is needed, to tie them together better.
16:33:54 houming: have you checked whether nova boot verifies the flavor parameters before booting an instance?
16:34:11 houming: We should validate those that are not validated by Heat
16:34:24 +1 Tango
16:34:35 vilobhmm1: In theory, heat stack-create will verify the flavor before creating a stack
16:34:40 yes. Nova will fail immediately when the image/network does not exist.
16:34:56 Still would prefer that Magnum checks these
16:34:57 Tango: if we allow the heat stack create to fail, we might not have as nice of an exception to raise
16:35:11 Otherwise you will have to dig into the heat stack to find the issue
16:35:15 seems no need to double-check right now, if heat handles that and responds informatively
16:35:16 houming: sure… we should also validate it..
16:35:19 whereas, if we do the checks, we could potentially have a very actionable exception
16:35:20 In K8s bay creation, flavor_id seems not to be checked. I verified it today.
16:35:32 adrian_otto: +1
16:36:08 thanks for looking at that houming
16:36:11 regardless of whether heat checks or not, we should verify it to warn the user directly.
16:36:20 +1
16:36:36 My only concern is the performance impact of double validation
16:36:48 Each validation is an API call, which is not cheap
16:36:57 I don't think we need all the checks (only check what heat is not good at)
16:37:00 Bay creation is a slow process
16:37:06 heat is slow!
16:37:07 Right, if we send a bunch of API requests for each attribute.
16:37:18 agreed, hongbin. We should reserve double-checking for things that are likely to happen and have big impact.
16:37:36 if heat does a check, and raises a sensible exception we can parse, that's going to perform better
16:37:39 Yes, bay creation is slow and not a frequent operation.
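[Editor's sketch] The pre-flight validation being debated here, verifying that resources referenced by the baymodel still exist before handing the stack to Heat so the user gets an actionable error immediately, might look roughly like this. The function, exception name, and lookup arguments are hypothetical stand-ins; real code would issue Glance/Nova API calls (the per-attribute cost the discussion worries about), and this is not Magnum's actual implementation:

```python
class ResourceValidationError(Exception):
    """Hard error raised at bay-create time (hypothetical name)."""


def validate_baymodel_refs(baymodel, existing_images, existing_flavors):
    """Fail fast if the baymodel references resources that no longer exist.

    existing_images/existing_flavors stand in for lookups against Glance
    and Nova; real code would make one API call per referenced attribute,
    which is exactly the double-validation cost discussed above.
    """
    problems = []
    if baymodel["image_id"] not in existing_images:
        problems.append("image %r not found" % baymodel["image_id"])
    if baymodel["flavor_id"] not in existing_flavors:
        problems.append("flavor %r not found" % baymodel["flavor_id"])
    if problems:
        # Raising here gives the user an actionable message up front,
        # instead of an opaque failed Heat stack to dig through later.
        raise ResourceValidationError("; ".join(problems))
```

The trade-off in the discussion maps directly onto this sketch: each membership check becomes a remote API call, so the team's conclusion was to validate only what Heat does not already report clearly, and to measure the cost before adding more.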
16:37:44 We should be able to query heat for failure cause
16:38:34 we need to support bay-creation in the scale case; I remember there was a bp about that
16:39:31 I still don't think the bottleneck will be on API calls
16:39:57 bay creation performance is probably not our top concern
16:40:08 but user experience is important
16:40:27 Validation is very helpful, and giving the user a good error message is great. We just need to check what's the best approach.
16:40:31 so let's make the debugging required as painless as possible when interacting with magnum
16:41:05 If we hit scale/perf issues later, we can refactor
16:41:19 agreed
16:41:21 we're talking about the API call to glance to pull the image data, right? I agree with Tom it shouldn't be a bottleneck as such
16:41:22 and I'd like to make performance decisions based on experimental data rather than based on pure worry.
16:43:25 OK, I'll measure the time cost of the validation API calls.
16:43:39 sounds good houming
16:43:41 houming: +1
16:43:45 +1
16:43:49 thanks houming. Any more on this item?
16:44:08 So, we have agreement that we should do validation in bay-creation, before it sends the request to heat. right?
16:44:14 any more work items to cover as a team prior to advancing our agenda?
16:44:27 agreed adrian_otto
16:44:29 I think so houming
16:44:40 houming: Do you want to discuss soft error vs hard error?
16:44:56 houming: that's the direction, unless we can get the equivalent user experience by calling heat and parsing the error reason.
16:45:04 houming: Yes!
16:45:36 Tango: do you mean the validate-baymodel case?
16:45:46 Tango, my opinion is hard error in bay-creation.
16:45:51 seems bay-creation only has hard errors
16:46:11 and do you mean also add soft error in baymodel creation?
16:46:17 bradjones: yt?
16:46:21 here
16:46:31 houming: yes
16:46:35 ok, let's get a subteam update… one moment while I switch the topic
16:47:02 #topic Magnum UI Subteam Update (bradjones)
16:47:23 Things are moving quickly in the UI world :)
16:47:39 there are patches out for detail views for containers/bays and bay models
16:47:56 which look really good so shout out to Shuu for the good work
16:48:16 Last week it was mentioned that we should have a road map for the UI work
16:48:32 I've started putting together a document just to collect some informal ideas
16:48:41 #link https://etherpad.openstack.org/p/magnum-ui-feature-list
16:49:03 if people could have a look and add their thoughts there it would be greatly appreciated
16:49:18 I'll start a mailing list thread as well linking that page
16:49:23 the more thoughts the better
16:49:48 thanks bradjones!
16:49:58 bradjones: So we can add to the wishlist and you will reorganize?
16:50:12 Tango: sure that works
16:50:20 #topic Open Discussion
16:50:21 #link https://review.openstack.org/#/c/247083/ Tempest plugin work (dimtruck)
16:50:42 this is an initial effort to port our tempest tests to a tempest plugin
16:50:55 they're blocked on adding tempest to gates
16:50:55 #link https://review.openstack.org/#/c/248123/ Adding tempest to gate (dimtruck)
16:51:03 which i did with that patch ^
16:51:23 i also added it to openstack-infra's meeting agenda for later today to see if it can get some eyes
16:51:41 i can review
16:51:43 once it lands, i can continue refactoring the tempest plugin.
16:51:45 thanks yolanda!
16:51:54 thanks for driving that dimtruck. This is really important to get right.
16:52:05 yup, thanks dimtruck
16:52:16 good news is that our tests do not need to be refactored at all. tempest will just pick them up!
16:52:17 that's everything in our agenda
16:52:26 good job dimtruck
16:52:30 dimtruck: very good news!!
16:52:41 i wanted to show a blueprint i created, to build fedora images using dib
16:52:47 i believe i put that on the agenda...
16:53:12 oh yolanda - it probably made it to the previous week's (i remember seeing it on 11/17 when i was waiting to add mine)
16:53:28 #link https://blueprints.launchpad.net/magnum/+spec/fedora-atomic-image-build
16:53:43 dimtruck, yes, maybe that was it
16:53:44 thx dimtruck
16:54:11 so in order to add support for shade, that's one of our blockers
16:54:30 in the last meeting, we suggested exploring image creation using disk image builder, so i created the initial blueprint
16:54:42 Is shade used in infra?
16:54:48 the net subteam discussed creating a network-driver support matrix
16:54:59 any suggestions where this matrix should live?
16:55:03 tcammann, yes, it's used
16:55:06 for ansible and for nodepool
16:55:23 ok I see, thanks yolanda
16:55:34 yolanda: directionally approved and targeted for mitaka
16:55:38 daneyon_: the wiki is probably the best place
16:55:42 adrian_otto, thanks
16:55:51 yolanda: +1
16:55:54 tcammann will do
16:56:39 ok so i will start progress on it
16:57:01 tcammann do you suggest under the resources section? https://wiki.openstack.org/wiki/Magnum#Resources
16:57:13 time check, we have just a few minutes remaining
16:57:15 or on the main page?
16:57:28 daneyon_ an etherpad seems better
16:57:34 let's wrap up, and plan to carry unfinished discussions to conclusion in #openstack-containers
16:58:16 goodbye
16:58:51 safe travels for those doing so!
16:59:19 bye
16:59:25 thanks everyone for attending today. Our next meeting will be Tuesday 2015-12-01 at 1600 UTC. See you then!
16:59:32 #endmeeting