16:02:00 #startmeeting containers
16:02:01 Meeting started Tue Nov 15 16:02:00 2016 UTC and is due to finish in 60 minutes. The chair is adrian_otto. Information about MeetBot at http://wiki.debian.org/MeetBot.
16:02:03 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
16:02:06 The meeting name has been set to 'containers'
16:02:21 #link https://wiki.openstack.org/wiki/Meetings/Containers#Agenda_for_2016-11-15_1600_UTC Our Agenda
16:02:27 #topic Roll Call
16:02:28 o/
16:02:32 Adrian Otto
16:02:34 Jaycen Grant
16:02:35 o/
16:02:36 Ton Ngo
16:02:39 Spyros Trigazis
16:02:40 * pc_m lurking
16:02:41 Ricardo Rocha
16:02:41 o/ Hieu LE
16:02:47 Yatin Karel
16:02:50 Michal Jura
16:02:56 o/
16:02:59 o/
16:03:01 o/
16:03:02 Mathieu Velten
16:03:16 Wow, hey everybody!
16:03:21 hello Drago jvgrant hongbin tonanhngo strigazi pc_m rochaporto hieulq_ yatin_ mjura randallburt jasond rpothier and mvelten
16:03:29 nice sized group today.
16:04:02 #topic Announcements
16:04:15 1) Welcome to our new core reviewers jvgrant and yatin
16:04:27 congrats!
16:04:37 Welcome Jaycen, Yatin!
16:04:38 congrats yatin_ and jvgrant
16:04:43 welcome
16:04:48 congratulations
16:04:51 we are glad that you have both stepped up to this expanded role on our team
16:05:04 Thanks!
16:05:12 Thanks
16:05:48 I've touched base with both jvgrant and yatin_ and I'll do my best to keep our core team closely engaged
16:06:35 2) Reminder: we will not hold a team meeting on 2016-11-22, but will regroup again on 2016-11-29
16:06:46 any other announcements from our team members?
16:07:03 one small one
16:07:18 proceed!
16:07:32 The magnum-specs repo is configured and ready, please review the initial commits
16:07:35 https://review.openstack.org/#/c/395673/
16:07:36 https://review.openstack.org/#/c/395674/
16:07:50 that's it
16:08:46 thanks strigazi
16:09:21 #topic Review Action Items
16:09:25 (none)
16:09:38 #topic Blueprints/Bugs/Reviews/Ideas
16:09:52 now, before we dive in here, some historical context
16:10:07 in this section we normally cover our essential blueprints for the cycle
16:10:21 and address points of discussion that we want to carry on as a team
16:10:47 I wanted to check with all of you to get your input on which blueprints should be marked as essential in this cycle
16:10:59 note that this is a short cycle, and will be done in Feb
16:11:18 so the question is what do we think is critical to complete by January
16:11:31 thoughts?
16:11:41 not to be a broken record, but https://review.openstack.org/#/c/389835/
16:11:42 did we add a list like that in the summit discussion? somewhere in the etherpad?
16:12:04 we made an effort to target blueprints to this cycle
16:12:12 but as a team we have not prioritized them
16:12:46 should we look at a list and vote?
16:12:51 in the small-but-annoying department, I am working on finishing this: https://blueprints.launchpad.net/magnum/+spec/run-kube-as-container
16:12:57 randallburt: acknowledged
16:12:59 https://blueprints.launchpad.net/magnum/ocata
16:13:03 this is the current list
16:13:22 does this look like the right list to you?
16:13:22 do we have a bp for coe upgrade?
16:13:40 hongbin: https://blueprints.launchpad.net/magnum/+spec/cluster-upgrades is that
16:13:40 hongbin: cluster-upgrades
16:13:40 https://blueprints.launchpad.net/magnum/+spec/cluster-upgrades
16:13:48 ok
16:13:56 it's currently a high priority
16:14:07 sounds good
16:14:12 but it depends on two BPs
16:14:20 flatten-attributes
16:14:29 and template-versioning
16:14:41 I'm already working on flatten-attributes
16:14:41 so we should mark those as high as well
16:15:37 template-versioning will follow flatten-attributes once it is far enough along. the nodegroup bp needs both of those as well
16:15:52 I am not sure how it's normally handled, but nodegroups is based on work in flatten-attributes. Does flatten-attributes need to be essential?
16:16:08 yes
16:17:00 okay, so starting in our next team meeting, I will be asking the assignee of each essential blueprint to give the team an update about the progress made toward each during the previous work period
16:17:12 this one too: https://blueprints.launchpad.net/magnum/+spec/run-kube-as-container
16:17:18 Okay, good to know. I think that the nodegroup work will continue to be split up into more sub-blueprints
16:17:22 we need to make measured forward progress to get these done for the release
16:17:24 this currently makes moving to a more recent version of kubernetes harder
16:17:25 adrian_otto, please assign flatten-attributes to me
16:17:38 and it's totally fine to break these up and give status on those parts
16:17:42 strigazi: ok
16:17:54 currently the scheduler and controller-manager are running as containers, but not the api-server and the kubelet, which means that we are not running the same version, since the apiserver and kubelet come from the rpm.
16:18:05 template-versioning should at least be high or essential since it is needed for cluster-upgrades and nodegroups
16:18:09 strigazi: done
16:18:40 I think we should at least create a blueprint for the image automation work we discussed, because magnum adoption should be a high priority, and it would enable things like run-kube-as-container
16:18:56 We are still running k8s 1.2 in the fedora atomic driver
16:19:17 if we move to containers, kube will be removed from the image
16:19:18 jvgrant: okay, I upgraded template-versioning
16:19:20 to Essential
16:19:25 thanks
16:19:41 +1 strigazi
16:19:46 Drago: can you submit at least a high level outline blueprint for that?
16:19:57 adrian_otto: Sure
16:20:07 thanks Drago
16:20:34 mvelten: rochaporto: for run-kube-as-container, we tried it before, but couldn't pass the gate: https://review.openstack.org/#/c/251158/
16:20:50 mvelten: rochaporto possibly because the vm in the gate is too small
16:20:56 Drago, what kind of extra automation? I think upgrading Docker isn't something that we can predict how it will work
16:21:19 strigazi: What we talked about at the summit.
16:21:31 hongbin: seen that, mvelten restarted the work on it
16:21:33 Creating images that use upstream (COE) packages
16:21:34 what about solving that using a third party CI for testing the kubernetes cluster driver?
16:22:07 that way it could run on bare metal on systems large enough to actually test the code properly.
16:22:19 Drago, adrian_otto: this is not related to image automation
16:22:41 it could speed up the pace at which we iterate on that driver by reducing the duration of each test cycle
16:23:12 rochaporto: What do you mean by "this"?
And I think that anything we're talking about is valid to discuss if it's essential
16:23:26 thx hongbin. I have a prototype working for both api-server and kubelet (tls off, directly on top of heat). I don't know yet if the CI is a problem; I am fighting devstack to do proper testing
16:23:29 Drago: sorry, I meant the change for kube in containers
16:23:56 rochaporto: Ok
16:24:36 but anyway, an external CI on baremetal would be great
16:25:18 does anyone think an external CI per cluster driver is a bad idea?
16:25:36 How do we do this?
16:25:41 we don't have to tackle it right now, but I'd like to get your feelings on whether that's something we should aim for over time
16:25:43 We need a third party CI to have multi-node, correct?
16:25:52 Drago: yes, for now
16:25:56 and what if this third party stops providing the CI?
16:26:08 mvelten: have you run kubelet in a container?
16:26:08 This should open the door for more functional tests
16:26:13 infra might offer something at some point, but it's going to use VMs
16:26:59 yatin_: yes, in privileged mode with net=host and pid=host and some volume mappings
16:27:26 strigazi: that's a valid concern. I think the short answer is that we'd need to address that concern at the time it materialized.
16:27:51 mvelten: http://kubernetes.io/docs/getting-started-guides/scratch/#selecting-images
16:27:52 Make CERN the third party :)
16:27:57 LOL
16:28:08 lol
16:28:14 i think we should have at least two third-party CIs
16:28:14 I have had a ticket for that for some time :(
16:28:22 if one of them drops, the other will still work.
16:28:28 yeah, maybe the right answer is that capacity should be donated by more than one participant
16:28:50 so that we could continue to run it even if one needed to be dropped.
16:29:01 so HA?
16:29:18 hongbin and I are totally in sync today!
16:29:24 strigazi: yes
16:29:43 or have two and alternate back and forth per commit
16:29:49 hongbin: +1
16:29:54 yatin_: yes, I use the hyperkube image
16:30:06 yep nice idea adrian
16:30:07 initially we were adding it only for baremetal tests here
16:30:11 something like that... we don't have to solve all the details right now, but I like that conceptually.
16:30:42 BM or at least a VM with nested KVM
16:31:18 so before we get too far down this path, I'd like to come back to the blueprint that randallburt raised for discussion: https://review.openstack.org/389835
16:31:33 there has been great progress on this in the past week
16:31:42 https://blueprints.launchpad.net/magnum/+spec/bp-driver-consolodation
16:32:05 does everyone here feel like it's time to pull off the WIP status of 389835 and consider the spec for merge?
16:32:27 should it be resubmitted for the spec repo, or land it as-is?
16:32:27 I think the bp url should be bp-driver-encapsulation
16:32:49 in magnum-specs rebased on
16:32:52 my understanding is that our spec repo is not yet usable. Is that still true?
16:33:00 https://review.openstack.org/#/c/395674/
16:33:10 it is usable
16:33:23 oh, sweet.
16:33:46 just create a dir specs/ocata
16:33:55 can push an empty dir
16:34:01 okay, so we can abandon https://review.openstack.org/389835 and resubmit to openstack/magnum-specs in specs/ocata
16:34:14 can I just push with the new directory and spec?
16:34:22 yes
16:34:23 yes, you can randallburt
16:34:34 cool. will sort that today then.
16:34:37 just rebase on my patch
16:34:46 strigazi: will do
16:35:00 Most +2s for an abandoned patch ever
16:35:04 lol
16:35:08 :D
16:35:40 hmmm...
there is lots of good discussion there; is it better that we merge the specs still under review first, then push them over to the new repo?
16:36:02 rebase should preserve?
16:36:09 we can merge both
16:36:16 ok, so is there any additional input we should consider on 395957 before going through the merge process?
16:36:23 it is still in gerrit anyway if needed
16:36:47 tonanhngo, no it's another repo
16:37:04 there's a wip implementation patch up already if folks want to review. will have a follow-on today sometime
16:37:06 but the history in gerrit won't be lost
16:37:09 adrian_otto: 395957??
16:37:32 strigazi: currently magnum-specs contains specs from which release?
16:37:54 Drago: oops, I meant 389835
16:38:00 yatin_, we didn't have release directories before
16:38:20 strigazi: Ok so it's common
16:38:29 we should merge both then, to preserve the discussion for future reference.
16:38:52 yatin_, yes, but since ocata we can have dirs
16:38:58 abandoning won't remove the discussion. it will be in gerrit forever. I'll add a note in the abandon that it's moved
16:39:09 strigazi: (y)
16:39:16 randallburt: +1
16:39:19 can also add a link in the spec itself to the original patch
16:39:22 randallburt: Maybe also link back to the abandoned patch in the new patch
16:39:27 +1
16:39:28 ok, that works too
16:39:36 Drago: +1
16:39:44 +1
16:40:12 Was the original question about essential blueprints?
16:40:13 leave 389835 alone
16:40:28 we will abandon that one, and you can link to that from the one in the new review
16:40:51 ok, so I'd like to wrap up this agenda item, and move to open discussion in a moment
16:40:55 mvelten: Ok, let's see how it works now; earlier there were some issues with running kubelet in a container, maybe they are resolved now.
16:40:59 anything else we want to cover here?
16:41:33 yatin_ do you have any pointers to failing use cases? so I can test?
16:41:54 mvelten: Ok, i will check and get back to you.
16:42:18 in the meeting we have limited time.
16:42:34 #topic Open Discussion
16:42:53 perfect thx. you can ping me through strigazi if I am not there ;)
16:43:05 mvelten: Ok
16:43:17 please check the whiteboard and suggest: https://blueprints.launchpad.net/magnum/+spec/secure-etcd-cluster-coe
16:43:40 adrian_otto, I have one more thing
16:43:47 yes strigazi?
16:43:52 mvelten, I got some trouble when running kubelet in a container with hyperkube 1.0.1
16:44:04 at CERN we have custom cluster drivers
16:44:21 and currently we modify the upstream ones
16:44:24 mvelten, iirc one issue is related to cadvisor
16:44:29 we don't have extra ones
16:44:48 We would like to manage them as separate packages, in our case rpms
16:44:50 mvelten, I can recheck and send you feedback
16:44:55 but it applies to all distros
16:45:09 mvelten, but it seems like 1.4 solved it
16:45:40 If we have custom drivers, extra ones, we would like to not install the upstream ones
16:45:45 yatin_: we run flat/provider networks, so we can't have ports only on private networks
16:45:57 yatin_ indeed, this unprotected etcd is not cool.
kube as a container should allow us to use upstream vanilla 1.2.x, which perhaps supports the tls arguments, TBC
16:46:21 yes, tls please :)
16:46:23 so, I discussed with rdo to change the packages and they proposed two solutions
16:46:56 ok
16:46:56 to have packages per cluster_driver
16:47:00 we'd have to separate the drivers into their own repos or at least directories
16:47:09 one is to have a separate repo
16:47:26 and the other is to not create the entrypoints by default
16:47:49 what do you mean by the "entrypoints"?
16:48:14 adrian_otto: in the setup.cfg
16:48:15 https://github.com/openstack/magnum/blob/master/setup.cfg#L47
16:48:27 randallburt :)
16:48:34 ok, got it
16:48:35 the latter is simpler?
16:48:47 sorry, L61
16:48:55 simpler for us, not for users
16:49:02 we can keep core drivers in tree and any others out of tree
16:49:15 neutron style
16:49:17 it's not about where the drivers are so much as where the entry points are
16:49:20 randallburt, why is it not simple for users?
16:49:35 rochaporto: i think some drivers currently do that.
16:49:36 users = ops?
16:49:38 strigazi: because users installing Magnum have to update setup.cfg.
16:49:43 yes
16:49:45 so it's really about https://github.com/openstack/magnum/blob/master/setup.cfg#L61
16:49:55 adrian_otto yes
16:49:58 so packaging becomes a bit of a pain
16:50:20 so if we use an auto-discovery technique for loading those (stevedore possibly)
16:50:28 you do apt-get install magnum-api magnum-swarm-atomic magnum-k8s-atomic
16:50:33 then they don't need to be explicitly listed in the config
16:50:40 I got no dog in this fight tbh, but IMO, the "right" way is a separate repo for the default drivers
16:50:47 stevedore discovers them now
16:50:51 and if you added a package you could make them show up and get included by the system at startup
16:51:01 adrian_otto: we are using stevedore, that's what the entry points are for
16:51:13 mvelten: Yes, I also checked that the parameters are available for v1.2.x, but I think the current image is not a stable release version
16:51:19 so have one point that can include others?
16:51:36 adrian_otto, didn't get this
16:52:09 adrian_otto: no, the point is we're registering the default plugins by default in magnum's setup.cfg. What would be more modular is to have the plugins in their own modules that are installed separately with their own setup.cfg
16:52:09 now if you install other drivers they are discovered by magnum
16:52:35 randallburt, the driver will be in tree
16:52:56 yatin_ indeed the rpm packaged version is weird stuff; the packaged commit is not even on a public branch. That is also why I want to move away from it...
16:53:06 strigazi: sure, then it's a question of refactoring the repo
16:53:09 but the package will not include the drivers in the magnum-common or python-magnum
16:54:09 I'm not sure what is best; the rdo folks said that a separate repo is cleaner
16:54:21 I would agree with that
16:54:40 mvelten: Agree. Yes, going to containers is good, but the container image size is too much; downloading it over the internet for each node looks not good, but I think there is another bp for that.
16:54:52 any drawbacks of moving them to their own repo? versioning?
16:55:05 that being said, we've had this same discussion in Heat wrt keeping the resource plugins separate from the engine and installed via separate packages, and we've never actually done it
16:55:41 We can discuss it a bit further on the ML
16:55:49 yes, a separate repo is cleaner.
16:55:50 what do you think?
16:55:57 this is a good ML topic
16:56:05 and I'd like to see the packagers' input on this there
16:56:24 I think debian would also say separate
16:56:43 I mentioned it some months ago
16:56:53 that's my expectation too. I can help you find the right person to ask if you need to get that answer.
16:57:23 I know the debian and rdo folks, should I start a thread?
16:57:25 has cinder broken all of its drivers into separate repos?
16:57:37 strigazi: yes please
16:57:50 I have always felt that approach would make sense
16:58:08 #action strigazi to start a ML thread about cluster-drivers repo
16:58:09 we run our own for neutron, code is in a local repo
16:58:39 yatin_ the plumbing is already there to pull from a "local" repo if that is a problem. and we can switch from hyperkube to kubelet for the nodes since they don't need the rest of the code (500MB vs 130MB if I remember)
16:59:19 Thanks everyone for attending today. Our next meeting will be on 2016-11-29 at 1600 UTC. See you then!
16:59:38 see you! enjoy thanksgiving
16:59:45 #endmeeting
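
For reference, a minimal sketch of the kubelet-in-a-container setup mvelten describes in the log (privileged mode, net=host, pid=host, some volume mappings, using the hyperkube image). This is not the actual prototype or any Magnum code; it assumes the Docker SDK for Python is available on the node, and the image tag, kubelet flags, and mount list are illustrative guesses.

```python
# Sketch only: run the kubelet from a hyperkube image as a privileged container
# with host networking and the host PID namespace, roughly as discussed above.
# Image tag, kubelet flags, and volume mounts are illustrative assumptions.
import docker

client = docker.from_env()

kubelet = client.containers.run(
    "gcr.io/google_containers/hyperkube:v1.2.0",   # illustrative image tag
    command=[
        "/hyperkube", "kubelet",
        "--api-servers=http://127.0.0.1:8080",     # illustrative flags
        "--config=/etc/kubernetes/manifests",
        "--allow-privileged=true",
    ],
    name="kubelet",
    privileged=True,          # "in privileged mode"
    network_mode="host",      # "net=host"
    pid_mode="host",          # "pid=host"
    volumes={                 # "some volume mappings" (guessed set of mounts)
        "/sys": {"bind": "/sys", "mode": "ro"},
        "/var/run": {"bind": "/var/run", "mode": "rw"},
        "/var/lib/docker": {"bind": "/var/lib/docker", "mode": "rw"},
        "/var/lib/kubelet": {"bind": "/var/lib/kubelet", "mode": "rw"},
    },
    restart_policy={"Name": "always"},
    detach=True,
)
print(kubelet.name, kubelet.status)
```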
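
The entry-point discussion (setup.cfg, stevedore, per-driver packages) boils down to letting installed packages advertise drivers instead of listing them in magnum's own setup.cfg. The sketch below assumes stevedore is the loader, as stated in the log; the "magnum.drivers" namespace, the example package name, and the Driver class are illustrative, not Magnum's actual identifiers.

```python
# Sketch: discovering cluster drivers through entry points with stevedore.
#
# A separately packaged driver would declare its entry point in its OWN
# setup.cfg rather than in magnum's (namespace and names are hypothetical):
#
#   [entry_points]
#   magnum.drivers =
#       k8s_fedora_atomic_v1 = my_magnum_driver.driver:Driver
#
# Once that package is installed (e.g. from its own rpm/deb), it can be
# discovered at runtime with no edit to magnum's setup.cfg:
from stevedore import extension


def load_cluster_drivers(namespace="magnum.drivers"):
    """Return a dict of driver name -> driver object for every installed plugin."""
    mgr = extension.ExtensionManager(
        namespace=namespace,
        invoke_on_load=True,   # instantiate each driver class as it is loaded
    )
    return {ext.name: ext.obj for ext in mgr}


if __name__ == "__main__":
    for name, drv in load_cluster_drivers().items():
        print(name, type(drv).__name__)
```

Under this kind of layout, operators pick drivers by installing packages (the apt-get example in the log), and only the entry-point registrations differ between an in-tree and an out-of-tree driver.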