16:02:00 <adrian_otto> #startmeeting containers
16:02:01 <openstack> Meeting started Tue Nov 15 16:02:00 2016 UTC and is due to finish in 60 minutes.  The chair is adrian_otto. Information about MeetBot at http://wiki.debian.org/MeetBot.
16:02:03 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
16:02:06 <openstack> The meeting name has been set to 'containers'
16:02:21 <adrian_otto> #link https://wiki.openstack.org/wiki/Meetings/Containers#Agenda_for_2016-11-15_1600_UTC Our Agenda
16:02:27 <adrian_otto> #topic Roll Call
16:02:28 <Drago1> o/
16:02:32 <adrian_otto> Adrian Otto
16:02:34 <jvgrant> Jaycen Grant
16:02:35 <hongbin> o/
16:02:36 <tonanhngo> Ton Ngo
16:02:39 <strigazi> Spyros Trigazis
16:02:40 * pc_m lurking
16:02:41 <rochaporto> Ricardo Rocha
16:02:41 <hieulq_> o/ Hieu LE
16:02:47 <yatin_> Yatin Karel
16:02:50 <mjura> Michal Jura
16:02:56 <randallburt> o/
16:02:59 <jasond> o/
16:03:01 <rpothier> o/
16:03:02 <mvelten> Mathieu Velten
16:03:16 <Drago> Wow, hey everybody!
16:03:21 <adrian_otto> hello Drago jvgrant hongbin  tonanhngo strigazi pc_m rochaporto hieulq_ yatin_ mjura randallburt jasond rpothier and mvelten
16:03:29 <adrian_otto> nice sized group today.
16:04:02 <adrian_otto> #topic Announcements
16:04:15 <adrian_otto> 1) Welcome to our new core reviewers jvgrant and yatin
16:04:27 <randallburt> congrats!
16:04:37 <tonanhngo> Welcome Jaycent, Yatin!
16:04:38 <hieulq_> congrats yatin_ and jvgrant
16:04:43 <hongbin> welcome
16:04:48 <mjura> congratulations
16:04:51 <adrian_otto> we are glad that you have both stepped up to this expanded role on our team
16:05:04 <jvgrant> Thanks!
16:05:12 <yatin_> Thanks
16:05:48 <adrian_otto> I've touched base with both jvgrant and yatin_ and I'll do my best to keep our core team closely engaged
16:06:35 <adrian_otto> 2) Reminder: we will not hold a team meeting on 2016-11-22, but will regroup again on 2016-11-29
16:06:46 <adrian_otto> any other announcements from our team members?
16:07:03 <strigazi> one small one
16:07:18 <adrian_otto> proceed!
16:07:32 <strigazi> The magnum-specs repo is configured and ready, please review the initial commits
16:07:35 <strigazi> https://review.openstack.org/#/c/395673/
16:07:36 <strigazi> https://review.openstack.org/#/c/395674/
16:07:50 <strigazi> that's it
16:08:46 <adrian_otto> thanks strigazi
16:09:21 <adrian_otto> #topic Review Action Items
16:09:25 <adrian_otto> (none)
16:09:38 <adrian_otto> #topic Blueprints/Bugs/Reviews/Ideas
16:09:52 <adrian_otto> now, before we dive in here, some historical context
16:10:07 <adrian_otto> in this section we normally cover our essential blueprints for the cycle
16:10:21 <adrian_otto> and address points of discussion that we want to carry on as a team
16:10:47 <adrian_otto> I wanted to check with all of you to get your input on which blueprints should be marked as essential in this cycle
16:10:59 <adrian_otto> note that this is a short cycle, and will be done in Feb
16:11:18 <adrian_otto> so the question is what do we think is critical to complete by January
16:11:31 <adrian_otto> thoughts?
16:11:41 <randallburt> not to be a broken record but https://review.openstack.org/#/c/389835/
16:11:42 <rochaporto> did we add a list like that in the summit discussion? somewhere in the etherpad?
16:12:04 <adrian_otto> we made an effort to target blueprints to this cycle
16:12:12 <adrian_otto> but as a team we have not prioritized them
16:12:46 <tonanhngo> should we look at a list and vote?
16:12:46 <mvelten> in the small-but-annoying department, I am working on finishing this: https://blueprints.launchpad.net/magnum/+spec/run-kube-as-container
16:12:57 <adrian_otto> randallburt: acknowledged
16:12:59 <adrian_otto> https://blueprints.launchpad.net/magnum/ocata
16:13:03 <adrian_otto> this is the current list
16:13:22 <adrian_otto> does this look like the right list to you?
16:13:22 <hongbin> do we have a bp for coe upgrade?
16:13:40 <adrian_otto> hongbin: https://blueprints.launchpad.net/magnum/+spec/cluster-upgrades is that
16:13:40 <Drago> hongbin: cluster-upgrades
16:13:40 <strigazi> https://blueprints.launchpad.net/magnum/+spec/cluster-upgrades
16:13:48 <hongbin> ok
16:13:56 <adrian_otto> it's currently a high priority
16:14:07 <hongbin> sound good
16:14:12 <strigazi> but it depends on two BPs
16:14:20 <strigazi> flatten-attributes
16:14:29 <strigazi> and template-versioning
16:14:41 <strigazi> I'm working already on flatten-attributes
16:14:41 <adrian_otto> so we should mark those as high as well
16:15:37 <jvgrant> template-versioning will follow flatten-attributes once it is far enough along. The nodegroup bp needs both of those as well
16:15:52 <Drago> I am not sure how it's normally handled, but nodegroups is based on work in flatten-attributes. Does flatten-attributes need to be essential?
16:16:08 <adrian_otto> yes
16:17:00 <adrian_otto> okay, so starting in our next team meeting, I will be asking the assignee of each essential blueprint to give an update to the team about the progress made toward each during the previous work period
16:17:12 <rochaporto> this one too: https://blueprints.launchpad.net/magnum/+spec/run-kube-as-container
16:17:18 <Drago> Okay, good to know. I think that the nodegroup work will continue to be split up into more sub-blueprints
16:17:22 <adrian_otto> we need to make measured forward progress  to get these done for the release
16:17:24 <rochaporto> this currently makes moving to more recent version of kubernetes harder
16:17:25 <strigazi> adrian_otto, please assign the flatten-attributes to me
16:17:38 <adrian_otto> and it's totally fine to break these up and give status on those parts
16:17:42 <adrian_otto> strigazi: ok
16:17:54 <mvelten> currently the scheduler and controller-manager are running as containers, but not the api-server and the kubelet, which means we are not running the same version, since the api-server and kubelet come from the rpm.
16:18:05 <jvgrant> template-versioning should at least be high or essential since it is needed for cluster-upgrades and nodegroups
16:18:09 <adrian_otto> strigazi: done
16:18:40 <Drago> I think we should at least create a blueprint for the image automation work we discussed, because magnum adoption should be a high priority, and it would enable things like run-kube-as-container
16:18:56 <Drago> We are still running k8s 1.2 in the fedora atomic driver
16:19:17 <strigazi> if we move to containers, kube will be removed from the image
16:19:18 <adrian_otto> jvgrant: okay, I upgraded template-versioning
16:19:20 <adrian_otto> to Essential
16:19:25 <jvgrant> thanks
16:19:41 <rochaporto> +1 strigazi
16:19:46 <adrian_otto> Drago: can you submit at least a high level outline blueprint for that?
16:19:57 <Drago> adrian_otto: Sure
16:20:07 <adrian_otto> thanks Drago
16:20:34 <hongbin> mvelten: rochaporto : for run-kube-as-container, we tried it before, but couldn't pass the gate: https://review.openstack.org/#/c/251158/
16:20:50 <hongbin> mvelten: rochaporto possibly because the vm in the gate is too small
16:20:56 <strigazi> Drago, what kind of extra automation? I don't think we can predict how upgrading Docker will work
16:21:19 <Drago> strigazi: What we talked about at the summit.
16:21:31 <rochaporto> hongbin: seen that, mvelten restarted the work on it
16:21:33 <Drago> Creating images that use upstream (COE) packages
16:21:34 <adrian_otto> what about solving that using a third party CI for testing the kubernetes cluster driver?
16:22:07 <adrian_otto> that way it could run on bare metal on systems large enough to actually test the code properly.
16:22:19 <rochaporto> Drago,adrian_otto: this is not related to image automation
16:22:41 <adrian_otto> it could speed up the pace at which we iterate on that driver by reducing the duration of each test cycle
16:23:12 <Drago> rochaporto: What do you mean by "this"? And I think that anything we're talking about is valid to discuss if it's essential
16:23:26 <mvelten> thx hongbin, I have a prototype working for both the api-server and kubelet (TLS off, directly on top of heat). I don't know yet if the CI is a problem; I am fighting devstack to do proper testing
16:23:29 <rochaporto> Drago: sorry, i meant the change for kube in containers
16:23:56 <Drago> rochaporto: Ok
16:24:36 <mvelten> but anyway an external CI on baremetal would be great
16:25:18 <adrian_otto> does anyone think an external CI per cluster driver is a bad idea?
16:25:36 <tonanhngo> How do we do this?
16:25:41 <adrian_otto> we don't have to tackle it right now, but I'd like to get your feelings on if that's something we should aim for over time
16:25:43 <Drago> We need a third party CI to have multi-node, correct?
16:25:52 <adrian_otto> Drago: yes, for now
16:25:56 <strigazi> and what if this third party stops providing the CI?
16:26:08 <yatin_> mvelten: have you tried running kubelet in a container?
16:26:08 <tonanhngo> This should open the door for more functional tests
16:26:13 <adrian_otto> infra might offer something at some point, but it's going to use VMs
16:26:59 <mvelten> yatin_: yes. in privileged mode with net=host and pid=host and some volume mappings
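[editor's note: a minimal sketch of the setup mvelten describes above -- the kubelet from the hyperkube image, privileged, with net=host, pid=host, and bind mounts -- written with the docker SDK for Python. The image tag, kubelet flags, and mount list are illustrative assumptions, not the exact prototype configuration:

    import docker  # docker SDK for Python (pip install docker)

    client = docker.from_env()

    # Run the kubelet in privileged mode, sharing the host's network and
    # PID namespaces, with the host paths the kubelet needs bind-mounted in.
    client.containers.run(
        "gcr.io/google_containers/hyperkube-amd64:v1.2.6",  # illustrative tag
        command=[
            "/hyperkube", "kubelet",
            "--api-servers=http://127.0.0.1:8080",  # illustrative flags
            "--config=/etc/kubernetes/manifests",
        ],
        name="kubelet",
        privileged=True,      # privileged mode
        network_mode="host",  # net=host
        pid_mode="host",      # pid=host
        volumes={             # "some volume mappings"
            "/sys": {"bind": "/sys", "mode": "ro"},
            "/var/lib/docker": {"bind": "/var/lib/docker", "mode": "rw"},
            "/var/lib/kubelet": {"bind": "/var/lib/kubelet", "mode": "rw"},
            "/var/run": {"bind": "/var/run", "mode": "rw"},
        },
        detach=True,
    )

The docker CLI equivalent would pass --privileged --net=host --pid=host and one -v flag per mount.]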
16:27:26 <adrian_otto> strigazi: that's a valid concern. I think the short answer is that we'd need to address it at the time it materialized.
16:27:51 <yatin_> mvelten: http://kubernetes.io/docs/getting-started-guides/scratch/#selecting-images
16:27:52 <Drago> Make CERN the third party :)
16:27:57 <adrian_otto> LOL
16:28:08 <randallburt> lol
16:28:14 <hongbin> i think we should have at least two third parties
16:28:14 <strigazi> I have had a ticket for that for some time :(
16:28:22 <hongbin> if one of them drops, the other will still work.
16:28:28 <adrian_otto> yeah, maybe the right answer is that capacity should be donated by more than one participant
16:28:50 <adrian_otto> so that we could continue to run it even if one needed to be dropped.
16:29:01 <strigazi> so HA?
16:29:18 <adrian_otto> hongbin and I are totally in sync today!
16:29:24 <adrian_otto> strigazi: yes
16:29:43 <adrian_otto> or have two and alternate back and forth per commit
16:29:49 <Drago> hongbin: +1
16:29:54 <mvelten> yatin_: yes I use the hyperkube image
16:30:06 <mvelten> yep nice idea adrian
16:30:07 <rochaporto> initially we were adding it only for baremetal tests here
16:30:11 <adrian_otto> something like that... we don't have to solve all the details right now, but I like that conceptually.
16:30:42 <mvelten> BM or at least VM with nested KVM
16:31:18 <adrian_otto> so before we get too far down this path I'd like to come back to the blueprint that randallburt raised for discussion: https://review.openstack.org/389835
16:31:33 <adrian_otto> there has been great progress on this in the past week
16:31:42 <Drago> https://blueprints.launchpad.net/magnum/+spec/bp-driver-consolodation
16:32:05 <adrian_otto> does everyone here feel like it's time to pull off the WIP status of 389835 and consider the spec for merge?
16:32:27 <adrian_otto> should it be resubmitted for the spec repo, or land it as-is?
16:32:27 <strigazi> I think the bp url should be bp-driver-encapsulation
16:32:49 <strigazi> in magnum-specs rebased on
16:32:52 <adrian_otto> my understanding is that our spec repo is not yet usable. Is that still true?
16:33:00 <strigazi> https://review.openstack.org/#/c/395674/
16:33:10 <strigazi> it is usable
16:33:23 <adrian_otto> oh, sweet.
16:33:46 <strigazi> just create a dir specs/ocata
16:33:55 <strigazi> you can push an empty dir
16:34:01 <adrian_otto> okay, so we can abandon https://review.openstack.org/389835 and resubmit to openstack/magnum-specs in specs/ocata
16:34:14 <randallburt> can I just push with the new directory and spec?
16:34:22 <strigazi> yes
16:34:23 <adrian_otto> yes, you can randallburt
16:34:34 <randallburt> cool. will sort that today then.
16:34:37 <strigazi> just rebase on my patch
16:34:46 <randallburt> strigazi:  will do
16:35:00 <Drago> Most +2s for an abandoned patch ever
16:35:04 <randallburt> lol
16:35:08 <mvelten> :D
16:35:40 <jvgrant> hmmm... there is lots of good discussion there; is it better to merge the specs still under review first, then push them over to the new repo?
16:36:02 <tonanhngo> rebase should preserve?
16:36:09 <strigazi> we can merge both
16:36:16 <adrian_otto> ok, so is there any additional input we should consider on 395957 before going through the merge process?
16:36:23 <mvelten> it is still in the gerrit anyway if needed
16:36:47 <strigazi> tonanhngo, no it's another repo
16:37:04 <randallburt> there's a WIP implementation patch up already if folks want to review. will have a follow-on today sometime
16:37:06 <strigazi> but the history in gerrit won't be lost
16:37:09 <Drago> adrian_otto: 395957??
16:37:32 <yatin_> strigazi: currently magnum-specs contains specs from which release
16:37:54 <adrian_otto> Drago: oops, I meant 389835
16:38:00 <strigazi> yatin_, we didn't have release directories before
16:38:20 <yatin_> strigazi: Ok so it's common
16:38:29 <tonanhngo> we should merge both then, to preserve the discussion for future reference.
16:38:52 <strigazi> yatin_, yes, but since ocata we can have dirs
16:38:58 <randallburt> abandoning won't remove the discussion. it will be in gerrit forever. I'll add a note in the abandon message that it's moved
16:39:09 <yatin_> strigazi: (y)
16:39:16 <mvelten> randallburt: +1
16:39:19 <randallburt> can also add a link in the spec itself to the original patch
16:39:22 <Drago> randallburt: Maybe also link back to the abandoned patch in the new patch
16:39:27 <Drago> +1
16:39:28 <tonanhngo> ok, that works too
16:39:36 <jasond> Drago: +1
16:39:44 <yatin_> +1
16:40:12 <Drago> Was the original question about essential blueprints?
16:40:13 <adrian_otto> leave the 389835 alone
16:40:28 <adrian_otto> we will abandon that one, and you can link to it from the one in the new review
16:40:51 <adrian_otto> ok, so I'd like to wrap up this agenda item, and move to open discussion in a moment
16:40:55 <yatin_> mvelten: Ok, let's see how it works now; earlier there were some issues with running kubelet in a container, maybe they are resolved now.
16:40:59 <adrian_otto> anything else we want to cover here?
16:41:33 <mvelten> yatin_ do you have any pointers of failing use cases ? so I can test ?
16:41:54 <yatin_> mvelten: Ok, I will check and get back to you.
16:42:18 <yatin_> in the meeting we have limited time.
16:42:34 <adrian_otto> #topic Open Discussion
16:42:53 <mvelten> perfect thx. you can ping me through strigazi if I am not there ;)
16:43:05 <yatin_> mvelten: Ok
16:43:17 <yatin_> please check whiteboard and suggest: https://blueprints.launchpad.net/magnum/+spec/secure-etcd-cluster-coe
16:43:40 <strigazi> adrian_otto, I have one more thing
16:43:47 <adrian_otto> yes strigazi ?
16:43:52 <hieulq_> mvelten, I got some trouble when running kubelet in a container with hyperkube 1.0.1
16:44:04 <strigazi> at CERN we have custom cluster drivers
16:44:21 <strigazi> and currently we modify the upstream ones
16:44:24 <hieulq_> mvelten, iirc one is related to cadvisor
16:44:29 <strigazi> we don't have extra ones
16:44:48 <strigazi> We would like to manage them as separate packages, in our case RPMs
16:44:50 <hieulq_> mvelten, I can recheck and send you feedback
16:44:55 <strigazi> but it applies to all distros
16:45:09 <hieulq_> mvelten, but it seems like 1.4 solved it
16:45:40 <strigazi> If we have custom drivers, extra ones, we would like to not install the upstream ones
16:45:45 <rochaporto> yatin_: we run flat/provider networks, so we can't have ports only on private networks
16:45:57 <mvelten> yatin_ indeed this unprotected etcd is not cool. kube as container should allow us to use upstream vanilla 1.2.x, which perhaps supports the TLS arguments, TBC
16:46:21 <mvelten> yes tls please :)
16:46:23 <strigazi> so, I discussed changing the packages with rdo and they proposed two solutions
16:46:56 <adrian_otto> ok
16:46:56 <strigazi> to have packages per cluster_driver
16:47:00 <randallburt> we'd have to separate the drivers into their own repos or at least directories
16:47:09 <strigazi> one is to have a separate repo
16:47:26 <strigazi> and the other is to not create the entrypoints by default
16:47:49 <adrian_otto> what do you mean by the "entrypoints"?
16:48:14 <randallburt> adrian_otto:  in the setup.cfg
16:48:15 <strigazi> https://github.com/openstack/magnum/blob/master/setup.cfg#L47
16:48:27 <strigazi> randallburt :)
16:48:34 <adrian_otto> ok, got it
16:48:35 <rochaporto> the latter is simpler?
16:48:47 <strigazi> sorry L61
16:48:55 <randallburt> simpler for us, not for users
16:49:02 <rochaporto> we can keep core drivers in tree and any other out of tree
16:49:15 <rochaporto> neutron style
16:49:17 <randallburt> its not about where the drivers are so much as where the entry points are
16:49:20 <strigazi> randallburt, why is it not simple for users?
16:49:35 <yatin_> rochaporto: i think some drivers currently do that.
16:49:36 <strigazi> users = ops?
16:49:38 <randallburt> strigazi:  because users installing Magnum have to update setup.cfg.
16:49:43 <randallburt> yes
16:49:45 <adrian_otto> so it's really about https://github.com/openstack/magnum/blob/master/setup.cfg#L61
16:49:55 <strigazi> adrian_otto yes
16:49:58 <randallburt> so packaging becomes a bit of a pain
16:50:20 <adrian_otto> so if we use an auto-discovery technique for loading those (stevedore possibly)
16:50:28 <strigazi> you do apt-get install magnum-api magnum-swarm-atomic magnum-k8s-atomic
16:50:33 <adrian_otto> then they don't need to be explicitly listed in the config
16:50:40 <randallburt> I got no dog in this fight tbh, but IMO, the "right" way is separate repo for the default drivers
16:50:47 <strigazi> stevedore discovers them now
16:50:51 <adrian_otto> and if you added a package you could make them show up and get included by the system at startup
16:51:01 <randallburt> adrian_otto:  we are using stevedore, that's what the entry points are for
16:51:13 <yatin_> mvelten: Yes, I also checked; the parameters are available for v1.2.x, but I think the current image is not a stable release version
16:51:19 <adrian_otto> so have one point that can include others?
16:51:36 <strigazi> adrian_otto, didn't get this
16:52:09 <randallburt> adrian_otto:  no, the point is we're registering the default plugins by default in magnum's setup.cfg. What would be more modular is to have the plugins in their own modules that are installed separately with their own setup.cfg
16:52:09 <strigazi> now if you install another driver it is discovered by magnum
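[editor's note: a minimal sketch of the stevedore discovery being discussed, assuming the magnum.drivers entry-point namespace from the setup.cfg lines linked above; the driver name in the comment is illustrative:

    from stevedore import extension

    # Each cluster driver registers itself as a setuptools entry point in
    # its package's setup.cfg, e.g.:
    #   [entry_points]
    #   magnum.drivers =
    #       k8s_fedora_atomic_v1 = magnum.drivers.k8s_fedora_atomic_v1.driver:Driver
    # stevedore scans the namespace at runtime, so a driver shipped in a
    # separate package is picked up automatically once it is installed --
    # nothing in magnum's own setup.cfg has to change.
    mgr = extension.ExtensionManager(
        namespace="magnum.drivers",
        invoke_on_load=True,  # instantiate each driver class on load
    )
    for ext in mgr:
        print(ext.name, ext.obj)

]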
16:52:35 <strigazi> randallburt, the driver will be in tree
16:52:56 <mvelten> yatin_ indeed the rpm-packaged version is weird stuff; the packaged commit is not even on a public branch. That is also why I want to move away from it...
16:53:06 <randallburt> strigazi:  sure, then its a question of refactoring the repo
16:53:09 <strigazi> but the package will not include the drivers in the magnum-common or python-magnum
16:54:09 <strigazi> I'm not sure what is best, the rdo folks said that separate repo is cleaner
16:54:21 <randallburt> I would agree with that
16:54:40 <yatin_> mvelten: Agree. Yes, going to containers is good, but the container image sizes are too big; downloading over the internet for each node doesn't look good. But I think there is another bp for that.
16:54:52 <rochaporto> any drawbacks to moving them into their own repo? versioning?
16:55:05 <randallburt> that being said, we've had this same discussion in Heat wrt keeping the resource plugins separate from the engine and installed via separate packages, and we've never actually done it
16:55:41 <strigazi> We can discuss it a bit further in the ML
16:55:49 <adrian_otto> yes, a separate repo is cleaner.
16:55:50 <strigazi> what do you think?
16:55:57 <adrian_otto> this is a good ML topic
16:56:05 <adrian_otto> and I'd like to see the packager's input on this there
16:56:24 <strigazi> I think debian would also say separate
16:56:43 <strigazi> I mentioned it some months ago
16:56:53 <adrian_otto> that's my expectation too. I can help you find the right person to ask if you need to get that answer.
16:57:23 <strigazi> I know the debian and rdo folks, should I start a thread?
16:57:25 <adrian_otto> has cinder broken all of its drivers into separate repos?
16:57:37 <adrian_otto> strigazi: yes please
16:57:50 <adrian_otto> I have always felt that approach would make sense
16:58:08 <strigazi> #action strigazi to start a ML thread about cluster-drivers repo
16:58:09 <rochaporto> we run our own for neutron, code is in a local repo
16:58:39 <mvelten> yatin_ the plumbing is already there to pull from a "local" repo if it is a problem. and we can switch from hyperkube to kubelet for the nodes since they don't need the rest of the code (500 MB vs 130 if I remember)
16:59:19 <adrian_otto> Thanks everyone for attending today. Our next meeting will be on 2016-11-29 at 1600 UTC. See you then!
16:59:38 <mvelten> see you ! enjoy thanksgiving
16:59:45 <adrian_otto> #endmeeting