16:00:10 #startmeeting containers
16:00:10 Meeting started Tue Jan 10 16:00:10 2017 UTC and is due to finish in 60 minutes. The chair is adrian_otto. Information about MeetBot at http://wiki.debian.org/MeetBot.
16:00:11 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
16:00:13 #link https://wiki.openstack.org/wiki/Meetings/Containers#Agenda_for_2017-01-10_1600_UTC Our Agenda
16:00:14 The meeting name has been set to 'containers'
16:00:22 #topic Roll Call
16:00:25 Adrian Otto
16:00:32 Stephen Watson
16:00:33 Ton Ngo
16:00:40 Jason Dunsmore
16:00:41 Vijendar Komalla
16:00:41 Perry Rivera
16:00:45 Spyros Trigazis
16:00:49 Mathieu Velten
16:00:54 o/
16:01:01 o/
16:01:11 hello swatson_ tonanhngo jasond vijendar_ juggler strigazi mvelten hongbin and hieulq_
16:01:18 Jaycen Grant
16:01:52 hello jvgrant_
16:02:13 o/
16:02:48 hello Drago
16:02:51 o/
16:02:57 Yatin Karel
16:03:04 hello randallburt and yatin
16:03:11 #topic Announcements
16:03:31 1) #link https://releases.openstack.org/ocata/schedule.html Jan 23 marks our Ocata feature freeze.
16:04:07 between now and then I'll be working to merge our features for this release, so expect to see an elevated level of review comments
16:04:35 cores, please plan to attend to the review queue closely in the upcoming weeks to help us get those important items over the finish line.
16:04:47 are there any questions/concerns about the freeze date?
16:05:09 After that, only bug fixes, right?
16:05:10 i think there will be some on a few of our feature bps
16:05:51 yes, during freeze, bug fixes are allowed, and there is a process for merging features as exceptions, called an FFE.
16:05:51 but I'd prefer not to use that if we can avoid it
16:06:14 ok
16:06:55 any other announcements from team members?
16:07:31 #topic Review Action Items
16:07:35 1) ACTION: adrian_otto to respond to http://lists.openstack.org/pipermail/openstack-dev/2016-November/107613.html with a reference to this chat.
16:07:39 Status: complete
16:07:51 #link http://lists.openstack.org/pipermail/openstack-dev/2017-January/109558.html ML Reference
16:08:33 strigazi: were you satisfied with the output of this discussion, and do you feel it is actionable?
16:08:50 Yep, it's reasonable
16:08:54 ok
16:09:04 any more remarks on that topic?
16:09:24 #topic Blueprints/Bugs/Reviews/Ideas
16:09:29 Essential Blueprints
16:09:35 #link https://blueprints.launchpad.net/magnum/+spec/flatten-attributes Flatten Attributes [strigazi]
16:10:32 I'll do my best to merge by the 23rd. I don't have a real blocker so far
16:11:00 I don't have an update at the moment
16:11:55 strigazi++
16:12:17 I think after this change, upgrades could be done at the driver level only
16:12:43 ok, any questions or concerns before we advance topics?
16:13:08 #link https://blueprints.launchpad.net/magnum/+spec/nodegroups Nodegroups [Drago]
16:13:34 I don't have any update for this, other than that I plan to give it some time this week. jvgrant_?
16:13:47 waiting on more feedback
16:14:00 haven't had any feedback in 3 weeks or so
16:14:22 i don't think this is likely to make it for the 23rd
16:14:34 jvgrant_: is there an open question in particular that we need to address before merging the spec?
16:14:39 it is dependent on the flatten attributes work as well
16:14:52 jvgrant_: are you referring to the spec, or the implementation?
16:14:53 the spec can be merged though
16:15:10 adrian_otto: there were some comments that i addressed with changes and additional info
16:15:15 other than that i don't know of any
16:15:27 adrian_otto: implementation
16:15:31 ok, so we'll get that spec merged
16:15:51 i think the goal should probably be to get the spec merged, with a goal for the next cycle on implementation
16:15:53 and let's see what we can implement in time.
16:16:14 Yeah, I'm sure there are things we can do this cycle
16:16:20 yes, I think that's reasonable. This is a short cycle, and that's a big feature.
16:16:36 agreed, the flatten implementation will be a big first step for this
16:16:42 but let's see about fitting some of the work in over the coming weeks.
16:16:47 Like DB migrations. It'd be nice to have Ocata clusters' data compatible with the v2 API
16:16:49 agreed
16:17:14 i think the DB changes would be a good goal
16:17:22 I think a couple of things are missing from the data, namely UUIDs for the nodegroups, since we don't have any right now
16:17:35 Otherwise everything else can automatically map
16:17:53 Possibly looking into structuring the Heat templates so that they're compatible with v2
16:18:08 Drago, map clusters to nodegroups?
16:18:31 strigazi: map v1 cluster data to v2 cluster data
16:18:48 Drago: would there be a spec for nodegroup-heat as well?
16:18:53 We know where attributes need to move, and it's basically a 1:1 mapping.
16:19:09 yatin: There could be. I think there's a blueprint to track it
16:19:18 yatin: i created the bp for nodegroup-heat
16:19:25 yes, asking for that only
16:19:32 https://blueprints.launchpad.net/magnum/+spec/nodegroups-heat
16:19:32 i started looking at the nodegroup spec yesterday, but the spec is too large (didn't get time today). will look at it tomorrow.
16:20:12 yeah, it is pretty big, and that is after we broke out a few things into other specs :)
16:20:22 :) yup
16:21:18 ok, let's advance to the next essential BP
16:21:21 #link https://blueprints.launchpad.net/magnum/+spec/secure-etcd-cluster-coe Secure Central Data Store (etcd) [yatin]
16:21:31 this one should complete this week
16:21:51 the last reviews are very close to merge... probably today.
16:21:51 yes, patches for coreos and swarm are under review.
16:22:05 yatin: anything that you think may need to be revisited?
16:22:51 adrian_otto, i think no
16:22:56 ok, great
16:23:02 yatin: do you have a list of pending reviews?
16:23:04 any more remarks on this BP?
16:23:21 last BP...
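[Editor's note: the v1-to-v2 data migration discussed above is a 1:1 attribute mapping plus freshly generated UUIDs for nodegroups, which v1 clusters don't have. A minimal sketch of that idea follows; all field names are illustrative, not Magnum's actual schema.]

```python
import uuid

def cluster_to_nodegroups(cluster):
    """Map a hypothetical v1 cluster record to v2-style nodegroup records.

    Field names here are illustrative only. v1 stores master/worker counts
    and flavors flat on the cluster, so each role becomes one nodegroup,
    and a fresh UUID is generated because v1 has no nodegroup identifiers
    to carry over (the gap noted in the meeting).
    """
    return [
        {
            "uuid": str(uuid.uuid4()),  # missing in v1, generated here
            "cluster_id": cluster["uuid"],
            "role": "master",
            "node_count": cluster["master_count"],
            "flavor_id": cluster["master_flavor_id"],
        },
        {
            "uuid": str(uuid.uuid4()),
            "cluster_id": cluster["uuid"],
            "role": "worker",
            "node_count": cluster["node_count"],
            "flavor_id": cluster["flavor_id"],
        },
    ]

ngs = cluster_to_nodegroups({
    "uuid": "c1",
    "master_count": 1, "master_flavor_id": "m1.small",
    "node_count": 3, "flavor_id": "m1.medium",
})
```

Everything except the UUIDs maps mechanically, which matches the "everything else can automatically map" remark above.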
16:23:21 hongbin, for this bp
16:23:21 #link https://blueprints.launchpad.net/magnum/+spec/cluster-upgrades Cluster Upgrades [strigazi]
16:23:36 on the previous one, we have deployed the merged patch for k8s, works fine
16:23:37 (oh, pausing for yatin's reply first.)
16:24:04 it looks like this one is pending: https://review.openstack.org/#/c/414891/
16:24:26 https://review.openstack.org/#/c/403501/, https://review.openstack.org/#/c/414891/
16:24:34 hongbin: I'll be reviewing that this morning
16:24:59 i will review it as well
16:25:35 hongbin, adrian_otto Thanks
16:25:42 I can't today, tomorrow for me, if it's not merged :)
16:26:29 For the last bp,
16:26:29 thanks strigazi. Please proceed with any updates on cluster-upgrades
16:27:53 I would like to land basic upgrade functionality in this cycle, by node replacement as I have mentioned. But the spec is not merged. It needs some polishing. The main idea is there though
16:28:29 What do you think about pursuing that in this cycle?
16:28:37 strigazi: do you have clarity on what tweaks are needed to merge the spec?
16:29:13 in all honesty, my preference is to have basic upgrades supported in Ocata rather than the Nodegroups feature, if I had to choose.
16:29:47 I'll do a small revision on implementation details, but the general idea is already there and it won't change
16:29:48 +1
16:29:56 +1 adrian_otto*
16:29:59 Yes, upgrades would be more necessary +1
16:30:08 +1
16:30:14 it seems both works are in parallel
16:30:45 so it is unlikely that these works will interfere with each other
16:30:48 ok, so we can agree on getting in a first iteration of this feature
16:31:27 hongbin: agreed.
The question is where we should ask reviewers to focus their attention, in order of preference
16:31:46 adrian_otto: ok, got that
16:31:55 but we should aim to advance as much as we can on each of these parallel work streams
16:32:14 giving slight preference to upgrades if review time is constrained at all
16:33:20 ok, other questions/concerns or other remarks on the upgrades bp?
16:33:37 (subtopic) Other Work Items
16:33:56 I have one,
16:33:56 my turn :) kubernetes containerization
16:34:08 wait mathieu, it's a small one :)
16:34:13 go ahead
16:34:35 I have tested this patch and it seems to work fine: https://review.openstack.org/#/c/417457/
16:34:50 it upgrades to Fedora 25 with k8s 1.4.7 and docker 1.12.5
16:34:50 Atomic 25
16:35:35 ironic works too. so have a look
16:35:51 thanks strigazi
16:36:22 mvelten: your turn
16:37:09 do we want it for ocata? on my side it is correctly tested, including kubelet and cinder support
16:37:42 if it works ok, we want it :)
16:38:26 what is cinder support specifically?
16:38:42 letting k8s create cinder volumes
16:38:57 however we are still struggling with the CI, and I am running out of ideas about what is happening here. It is really difficult to understand what the exact problem is (connectivity? speed?)
16:38:58 ok, do we use a trust to create cinder volumes?
16:39:09 hongbin: in k8s 1.5
16:39:12 not yet, we need 1.5
16:39:21 ok
16:39:37 but if we move to containers we can bump easily to 1.5 :)
16:39:40 Why 1.5?
16:39:52 I wrote a patch to support trusts
16:40:00 included in 1.5
16:40:01 the corresponding patch was merged in 1.5
16:40:15 mvelten: there's a config for trust in the file, but it doesn't work?
16:40:20 mvelten: in 1.4
16:40:56 hongbin, i raised the concern about running kubelet (cinder volume support) in a container: https://review.openstack.org/#/c/374564/ I think that's why mvelten mentioned cinder support
16:40:57 Yeah, randallburt, you've seen cinder PVs work in 1.4 with a trust, right?
16:41:09 Drago: not yet, because of other issues
16:41:22 what config please?
16:41:34 mvelten: in the cloud configuration for the openstack plugin
16:42:01 mvelten: "trust-id"
16:42:11 yatin: you mean the comment here? https://review.openstack.org/#/c/374564/5/magnum/drivers/k8s_coreos_v1/templates/fragments/enable-kubelet-minion.yaml
16:42:16 indeed, this is 1.5 specific
16:43:08 mvelten: Thanks!
16:43:16 hongbin, no, check the 4th last comment
16:43:23 currently found 2 issues
16:44:06 cf. https://review.openstack.org/#/c/410856/ for some more details
16:44:30 yatin: ok, so is the reason that's commented out because it only works with some versions of k8s?
16:44:59 it is because currently you have to manually add the creds
16:45:02 I'm gathering this based on the 1.4 vs 1.5 remarks above
16:45:08 the password is ChangeMe :)
16:45:22 with 1.5 we can use the trust instead
16:45:26 and enable it by default
16:45:32 this needs to be fixed ASAP :)
16:46:01 ok, so perhaps we might consider some conditional logic there
16:46:02 adrian_otto, i will revise the coreos patch (in line with mvelten's patch), after that please review.
16:46:14 ok
16:46:17 yes, I am pushing for 1.5 to enable cinder by default
16:46:47 and also the k8s lb extension should work (untested)
16:47:03 ok, but whether it is on or off should be controlled by a magnum configuration directive.
16:47:26 not whether someone modifies a configuration by hand... see what I mean?
16:47:32 it is, we have volume_driver
16:47:39 we should assume that all deployment steps are handled by an automation system
16:47:44 of course
16:48:19 we already have the trust creds inside the cluster now for tls, so I think enabling it by default makes sense, no reason not to do it
16:48:34 ok, so have I misinterpreted something, or should I expect a revision to the patch to tighten this up?
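[Editor's note: the "cloud configuration for the openstack plugin" being discussed is the INI file handed to kubelet and the controller manager via --cloud-config. A minimal sketch with placeholder values (not Magnum's exact template) illustrates the static password being debated, and the trust-id alternative that landed in k8s 1.5:]

```ini
[Global]
auth-url=http://keystone.example.com:5000/v3
username=cluster-user
password=ChangeMe
# ^ the hand-edited credential mentioned above
tenant-name=demo
region=RegionOne
# with k8s >= 1.5, a Keystone trust can stand in for the static password:
# trust-id=<trust-uuid>
```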
16:48:40 no, it must be opt-in
16:49:00 mvelten: yes, I was trying to get it working in 1.4, and all the info we need is already there, so I think it makes sense to do it
16:49:10 adrian_otto, i think that (manual config change) was required because we need to add the password in kube_openstack_config; if trusts work, then this (auto config) is ok
16:49:21 yatin +1
16:49:29 yatin: +1
16:49:41 strigazi: why? I don't see the point of disabling by default if we have the creds
16:49:51 ok thanks
16:50:05 cinder is an optional service, so a default may not work
16:50:09 There is work in progress to remove the creds from the nodes in clear text,
16:50:21 Even the trust creds
16:50:23 we can continue on this topic, but I also want to enter Open Discussion as well
16:50:29 ok
16:50:29 #topic Open Discussion
16:50:33 strigazi: they'd still have to be there in the cloud config
16:50:56 strigazi: maybe we tighten the trust instead?
16:51:04 randallburt, with limited power, yes
16:51:07 more urgent than default or not: the CI :D I don't know what to do
16:51:20 strigazi: question wrt https://review.openstack.org/#/c/417457/: any progress or thoughts on getting os-*-config and the heat agents on the atomic image?
16:51:21 if I remember the summit well, we are waiting for scoped tokens
16:51:38 to separate the rights into 2 trusts
16:51:46 randallburt, trying
16:52:01 mvelten: that is frozen for now
16:52:07 randallburt, I'm trying
16:52:16 strigazi: k, thanks!
16:52:52 randallburt: No, I haven't done anything on that for a while. I expect that Atomic 25 would be no different than Atomic 23 in terms of getting the heat agent to work
16:53:17 Drago: I assume not, and they'd have to be containerized
16:53:18 mvelten adrian_otto I'll start an ML thread for our CI problems
16:53:38 Drago: do you have a WIP regarding the agent in a container for atomic?
16:53:45 randallburt: Which I got fully working, and it takes so long that it's unusable
16:53:51 strigazi: ok, be sure to tag infra in the subject line
16:53:52 strigazi, +1
16:53:58 adrian_otto ok
16:53:58 mvelten: Yes, but only containerized
16:54:06 thx strigazi
16:54:11 and possibly ping Yoanda individually
16:54:15 Drago: maybe preload the images on the vm image?
16:54:17 ^ strigazi
16:54:34 as she has expressed an interest in helping with these sorts of challenges.
16:54:36 randallburt: strigazi did that for me and it still took 5 minutes
16:54:42 mvelten: https://review.openstack.org/#/c/368981/
16:54:44 Drago: weird
16:54:55 I think time is not the issue; if it works, we can wait
16:55:03 randallburt: Not weird when you keep in mind it's 300 MB
16:55:07 thx
16:55:13 Drago: ah, right
16:55:18 On a, what, 1 GB vm?
16:55:26 hongbin: adrian_otto: do we want to discuss putting https://review.openstack.org/#/c/416345/ in contrib vs. the main tree?
16:55:34 pulling the 300 MB hyperkube image takes 20+ min
16:55:42 mvelten: lol
16:55:48 many 300 MB things
16:55:54 in KVM software, of course
16:56:11 Try Magnum! Clusters in only 30 minutes!
16:56:17 :D
16:56:31 mvelten: BTW, addressed your comments in https://review.openstack.org/#/c/416345/
16:56:50 jasond: a new driver should start in contrib. Once it's known to work for at least one production use case, and someone has volunteered to maintain it over the long run, we can pull it out of contrib into the tree.
16:57:18 adrian_otto: ok, did you see the discussion in that gerrit review?
16:57:22 btw the current k8s driver works with non-atomic at the moment
16:57:26 ideally it would have multiple users at that point.
16:57:31 jasond: I'm looking for that now
16:57:38 strigazi: it didn't for me
16:57:53 maybe we separate this from being a new driver and just work on the non-atomic image. Shouldn't it work essentially the same from a magnum standpoint?
16:58:27 jasond randallburt I'll share an image that works with it, and instructions of course
16:58:38 jasond: yes, saw that, thx. I'll keep an eye on it
16:58:39 that means using a single set of heat templates for two distros? (atomic, non-atomic)
16:58:44 randallburt yes
16:58:52 cool
16:59:08 hongbin: it is almost the same
16:59:15 jasond: I'll open an ML discussion for this.
16:59:16 they use the same packages
16:59:17 Really, one driver, since driver.provides is an array
16:59:37 strigazi: frankly, i am all for consolidating all these drivers into one set of templates
16:59:39 seems we've run out of time
16:59:47 strigazi: cool, if we can get k8s 1.4 on it with the heat agents on that image, we won't have to introduce another driver
16:59:58 jasond, sounds good
17:00:01 Thanks everyone for attending today. Our next meeting is 2017-01-17 at 1600 UTC.
17:00:02 that would be nice
17:00:10 see you then!
17:00:18 #endmeeting
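[Editor's note: on the "driver.provides is an array" remark above, one driver can declare support for several distros because provides is a list of combinations. A minimal sketch, modeled loosely on Magnum's driver interface; the class name and selection helper here are illustrative, not the exact Magnum code:]

```python
class KubernetesDriver:
    """Illustrative driver declaring support for two distros.

    Modeled loosely on Magnum's driver interface, where `provides`
    lists the (server_type, os, coe) combinations a driver handles.
    Names and structure are a sketch, not the real Magnum classes.
    """

    @property
    def provides(self):
        return [
            {"server_type": "vm", "os": "fedora-atomic", "coe": "kubernetes"},
            {"server_type": "vm", "os": "fedora", "coe": "kubernetes"},
        ]


def driver_supports(driver, os_name, coe):
    # pick this driver if any provides entry matches the requested distro/COE
    return any(
        e["os"] == os_name and e["coe"] == coe for e in driver.provides
    )
```

Under this scheme, consolidating the atomic and non-atomic templates would mean one driver listing both operating systems, rather than a second driver in contrib.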