16:00:26 <adrian_otto> #startmeeting containers
16:00:26 <openstack> Meeting started Tue Mar 28 16:00:26 2017 UTC and is due to finish in 60 minutes.  The chair is adrian_otto. Information about MeetBot at http://wiki.debian.org/MeetBot.
16:00:28 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
16:00:30 <openstack> The meeting name has been set to 'containers'
16:00:33 <adrian_otto> #link https://wiki.openstack.org/wiki/Meetings/Containers#Agenda_for_2017-03-28_1600_UTC Our Agenda
16:00:37 <adrian_otto> #topic Roll Call
16:00:41 <adrian_otto> Adrian Otto
16:00:42 <vijendar_> o/
16:00:45 <hieulq_> o/
16:00:47 <mkrai_> Madhuri Kumari
16:00:49 <tonanhngo> Ton Ngo
16:00:50 <strigazi> Spyros Trigazis
16:00:51 <jvgrant_> Jaycen Grant
16:00:51 <coreyob> Corey O'Brien
16:01:10 <adrian_otto> hello vijendar_ hieulq_ mkrai_ tonanhngo strigazi jvgrant_ and coreyob
16:02:05 <adrian_otto> hello juggler
16:02:08 <juggler> o/
16:02:12 <randallburt> o/
16:02:32 <jasond> o/
16:02:41 <adrian_otto> hello randallburt and jasond
16:02:59 <Drago> o/
16:03:01 <swatson_> o/
16:03:02 <adrian_otto> hello Drago
16:03:23 <adrian_otto> hi swatson_
16:03:40 <adrian_otto> let's begin.
16:03:43 <adrian_otto> #topic Announcements
16:03:45 <adrian_otto> (none)
16:03:45 <yatinkarel> yatin karel
16:04:01 <adrian_otto> any announcements from team members?
16:04:04 <adrian_otto> hi yatinkarel
16:04:33 <swatson_> I don't have an announcement but did want to ask about the OSC vote
16:05:02 <adrian_otto> swatson_: okay, we can touch base on that
16:05:04 <adrian_otto> #topic Action Items
16:05:04 <adrian_otto> (none)
16:05:21 <adrian_otto> #topic OSC command name discussion
16:05:25 <strigazi> swatson_: I don't like "coe", but it works for me and my team
16:05:43 <strigazi> swatson_: from my side it's +1
16:05:50 <randallburt> yikes
16:05:53 * adrian_otto looking for the link to the email thread about this
16:05:55 <randallburt> but better than nothing
16:06:02 <strigazi> randallburt exactly
16:06:18 <strigazi> kerberos \o/
16:06:36 <jvgrant_> I feel the same way. I don't like it, but I can't think of anything much better given the limitations we have
16:06:49 <mkrai_> +1 for coe
16:07:02 <adrian_otto> #link http://lists.openstack.org/pipermail/openstack-dev/2017-March/114640.html My email yesterday asking for us to express a preference between two options
16:07:03 <swatson_> Sounds like a basic consensus then
16:07:13 <yatinkarel> coe +1 for me
16:07:25 <Drago> +1 for coe
16:07:44 <swatson_> Going off the latest ML message, do we keep "cluster" for the commands too? Or drop it?
16:07:46 <tonanhngo> I have started hearing "container orchestration" in other contexts, so it may be becoming commonly used now
16:08:03 <swatson_> e.g. keep it as "openstack coe cluster create..." or go with a simplified "openstack coe create..."
16:08:21 <yatinkarel> swatson_: then what about ct (the cluster template commands)?
16:08:29 <randallburt> right, keep it imo
16:08:35 <swatson_> yatinkarel: "openstack coe template create..."
16:08:52 <swatson_> and "openstack coe ca show/sign", etc.
16:09:00 <randallburt> I'd rather it be explicit about the object being manipulated
16:09:07 <Drago> I'm on the fence, as it's our main resource
16:09:09 <yatinkarel> I think we should keep the resource names the same as we use now
16:09:25 <adrian_otto> yatinkarel: agreed
16:09:29 <Drago> nova list is nice though
16:09:32 <Drago> vs nova server list
16:09:41 <yatinkarel> openstack server list
16:09:45 <adrian_otto> the question is whether to keep the term "cluster" in the openstack command or not
16:09:56 <adrian_otto> my gut says yes, keep it in.
16:10:10 <Drago> my gut shrugs
16:10:38 <adrian_otto> it's possible that we could start with the word cluster, and later drop it if we find it burdensome to use
16:10:46 <strigazi> +1
16:10:52 <swatson_> +1
16:11:04 <yatinkarel> +1
16:11:09 <vijendar_> adrian_otto: sounds good
16:11:15 <adrian_otto> it could still alias back for compatibility if we make that decision down the road
16:11:17 <mkrai_> +1
16:11:26 <jvgrant_> +1
16:11:31 <tonanhngo> +1
16:11:34 <randallburt> +1
16:11:45 <Drago> +1
16:11:52 <juggler> +1
16:11:58 <jasond> +1
16:12:11 <swatson_> Alright, I'll update my reviews for "openstack coe cluster..." away from "openstack infra cluster..."
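[Note: for reference, the command shapes agreed on above, using the names proposed in this discussion; arguments are omitted here, since exact flags were not part of this decision:

    openstack coe cluster create ...
    openstack coe template create ...
    openstack coe ca show ...
    openstack coe ca sign ...
]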
16:13:02 <adrian_otto> thanks everyone.
16:13:08 <adrian_otto> Any opposing viewpoints to consider before releasing swatson_ to change that?
16:13:43 <adrian_otto> ok, thanks.
16:13:57 <adrian_otto> #topic Blueprints/Bugs/Reviews/Ideas
16:14:05 <adrian_otto> Essential Blueprints
16:14:11 <adrian_otto> #link https://blueprints.launchpad.net/magnum/+spec/flatten-attributes Flatten Attributes [strigazi]
16:15:07 <strigazi> I'm finishing them, sorry for the delay. Cleaning up the UTs (unit tests)
16:15:32 <adrian_otto> strigazi: any input needed from the team on this?
16:15:57 <adrian_otto> team: any discussion or questions on this work item?
16:15:59 <strigazi> not right now
16:16:14 <adrian_otto> ok, will advance to the next...
16:16:20 <adrian_otto> #link https://blueprints.launchpad.net/magnum/+spec/nodegroups Nodegroups [Drago]
16:16:34 <Drago> Still nothing from me. jvgrant_?
16:16:48 <jvgrant_> been on vacation the last week so no updates from me
16:17:02 <adrian_otto> ok, last one is...
16:17:04 <adrian_otto> #link https://blueprints.launchpad.net/magnum/+spec/cluster-upgrades Cluster Upgrades [strigazi]
16:17:26 <strigazi> I gave some input on this in the driver-resource spec
16:18:10 <strigazi> I don't have anything else
16:18:11 <jvgrant_> strigazi: thanks for the input, I'll take a look at that today
16:18:28 <strigazi> About NGs (nodegroups)
16:19:01 <strigazi> Do you think we can start touching the Heat backend and add a POC with Jinja?
16:19:30 <strigazi> The most obvious use case for us is AZs (availability zones)
16:19:38 <strigazi> What do you think?
16:20:14 <adrian_otto> strigazi: to clarify you're suggesting we pick a driver to do this in, as a first step toward Nodegroup implementation?
16:20:37 <adrian_otto> to support availability zones
16:20:46 <strigazi> yes,
16:21:02 <strigazi> There is a dedicated bp for this assigned to me :)
16:21:36 <strigazi> https://blueprints.launchpad.net/magnum/+spec/magnum-availability-zones
16:21:49 <adrian_otto> #link https://blueprints.launchpad.net/magnum/+spec/magnum-availability-zones Availability Zones Feature
16:22:01 <strigazi> Does it make sense?
16:22:36 <strigazi> Probably with the use of labels
16:22:45 <adrian_otto> yes
16:23:02 <adrian_otto> so the swarm driver would be where you'd try it first?
16:23:20 <adrian_otto> adding the AZ list to the CT first
16:23:45 <adrian_otto> why predefine the list?
16:24:03 <adrian_otto> seems that could be a parameter with defaults supplied by the driver
16:24:14 <adrian_otto> or is that maybe what you meant?
16:24:18 <strigazi> yes, it requires some assumptions, like this % of nodes in one AZ and that % in the other, or it can be a list
16:24:41 <strigazi> a list of percentages
16:25:36 <strigazi> in the basic implementation, allow two AZs but start building resource groups with Jinja
16:27:06 <strigazi> team?
16:27:07 <adrian_otto> sounds fine to me. My only guidance is to try not to bake the semantics into the CT. Keep that in the driver's config.
16:27:31 <adrian_otto> we can see about generalizing it after we've tried it in a single driver
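[Note: a minimal sketch of the Jinja idea described above, rendering one OS::Heat::ResourceGroup per availability zone. This is illustrative only, not Magnum driver code; the template fragment, variable names, and the nested worker.yaml template are assumptions:

    # Render a Heat "resources" section with one ResourceGroup per AZ.
    from jinja2 import Template

    template = Template("""\
    resources:
    {% for az in azs %}
      workers_{{ az }}:
        type: OS::Heat::ResourceGroup
        properties:
          count: {{ counts[az] }}
          resource_def:
            type: worker.yaml    # hypothetical nested template for one node
            properties:
              availability_zone: {{ az }}
    {% endfor %}
    """)

    print(template.render(azs=["az-1", "az-2"],
                          counts={"az-1": 2, "az-2": 1}))
]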
16:27:55 <strigazi> I could use some help because flatten-attrs is my priority
16:28:18 <adrian_otto> need help writing the spec?
16:29:10 <strigazi> I would start from a proof-of-concept implementation
16:29:27 <strigazi> in parallel with the spec
16:29:37 <adrian_otto> great
16:30:13 <strigazi> ok, when I have something I'll ping you
16:30:23 <adrian_otto> ok
16:31:02 <adrian_otto> Other Work Items
16:31:08 <adrian_otto> any others from the team?
16:31:30 <adrian_otto> if not, I'll advance to Open Discussion
16:31:38 <adrian_otto> #topic Open Discussion
16:31:53 <strigazi> I want your input on something
16:32:34 <strigazi> Some time ago I started a swarm-mode driver, which works fine; we deployed it last week.
16:33:05 <strigazi> There is an experimental CI for it too, but no tests yet, since it relies on the new python-docker client
16:33:12 <strigazi> The question is
16:34:07 <strigazi> Do we continue to maintain the old swarm? Maybe replace it with swarm-mode and rename the old swarm -> swarm-legacy?
16:34:11 <adrian_otto> How can you have a CI without tests? I'm confused.
16:34:52 <strigazi> adrian_otto: The CI runs and always fails, since it tries to run tests that do not exist
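[Note: a hedged sketch of what a swarm-mode smoke test might look like using the python-docker client mentioned above (the "docker" SDK for Python); the endpoint, image, and service name are placeholders, not Magnum's actual test code:

    import docker

    # Point the client at the cluster master's Docker API endpoint.
    client = docker.DockerClient(base_url="tcp://MASTER_IP:2375")

    # A swarm-mode manager exposes the service API directly: create a
    # trivial service, inspect its tasks' states, then clean up.
    service = client.services.create("nginx:latest", name="smoke-test")
    print([task["Status"]["State"] for task in service.tasks()])
    service.remove()
]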
16:35:02 <adrian_otto> is there a good reason to keep the legacy driver at all?
16:35:29 <strigazi> Not for me, but someone may have users of it?
16:36:02 <adrian_otto> is there an upgrade path from the current driver to the new one?
16:36:07 <strigazi> no
16:36:20 <strigazi> not that I know of
16:36:25 <adrian_otto> I'm thinking....
16:36:34 <adrian_otto> it should still work
16:37:02 <adrian_otto> because swarm mode should adopt the running containers
16:37:16 <adrian_otto> and the legacy swarm driver had no concept of services or anything swarm specific that I can think of
16:37:34 <strigazi> I haven't tried it, and swarm mode doesn't even have etcd
16:37:57 <strigazi> I don't think that it will read etcd to import containers
16:38:00 <adrian_otto> that does not matter
16:38:15 <adrian_otto> etcd only holds the information about the cluster membership
16:38:27 <adrian_otto> the actual list of containers is still managed by docker
16:38:34 <strigazi> ok
16:38:41 <adrian_otto> say you had a two node cluster...
16:39:02 <adrian_otto> and in swarm you run "docker ps"
16:39:10 <adrian_otto> it will hit the APIs of both servers and combine the results into a single list
16:39:37 <adrian_otto> etcd is used to determine where those API calls are routed.
16:40:13 <adrian_otto> so I think it could be upgradable
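[Note: an illustration of the "docker ps" behavior described above, assuming a legacy-Swarm manager endpoint; hosts and ports are placeholders:

    # Pointed at the Swarm manager, "docker ps" aggregates results from all nodes:
    docker -H tcp://SWARM_MANAGER_IP:2376 ps

    # Pointed at a single node's daemon, it lists only that node's containers:
    docker -H tcp://NODE1_IP:2375 ps
]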
16:40:43 <strigazi> I'll have a look but not sure if it's worth it
16:41:02 <adrian_otto> in which case it could still be named "swarm"
16:41:32 <strigazi> if it's upgradable?
16:42:05 <adrian_otto> we might hit problems with different versions of devmapper somehow corrupting the /var/lib/docker contents
16:42:27 <adrian_otto> depending on how old the swarms are that we try to upgrade
16:43:02 <adrian_otto> I've had a few docker version upgrades that went horribly wrong
16:43:17 <adrian_otto> requiring me to discard /var/lib/docker and start over
16:43:38 <strigazi> That doesn't sound fun
16:43:48 <adrian_otto> no, but that has not happened recently
16:44:07 <adrian_otto> it's usually after a kernel upgrade that had a devmapper upgrade with it
16:44:14 <adrian_otto> after that, docker was busted.
16:44:27 <adrian_otto> the last few were fine
16:45:13 <adrian_otto> has anyone else seen problems going from legacy swarm to Docker 1.13+ with swarm mode?
16:46:15 <adrian_otto> strigazi: seems the rest of the team fell asleep ;-)
16:46:21 <juggler> still here :)
16:46:49 <yatinkarel> :) me too
16:46:53 <strigazi> I'll continue with swarm-mode, and we'll see how it goes then
16:47:23 <juggler> are there any pre-planning etherpads out yet for:
16:47:26 <juggler> #link https://www.openstack.org/summit/
16:47:27 <adrian_otto> maybe we can find a volunteer to assist with putting the tests together for that
16:47:45 <adrian_otto> juggler: not yet
16:47:45 <strigazi> I was writing about the tests just now
16:47:56 * randallburt startles awake
16:48:03 <juggler> adrian_otto: thanks
16:48:15 <juggler> lol randallburt
16:48:25 <randallburt> :)
16:48:39 <adrian_otto> maybe we can wrap a little early today?
16:48:46 <strigazi> sure
16:48:56 <juggler> no problem
16:49:33 <adrian_otto> Thanks everyone for attending today. Our next meeting will be on 2017-04-04 at 1600 UTC. See you then.
16:49:38 <adrian_otto> #endmeeting