16:00:22 <adrian_otto> #startmeeting containers
16:00:23 <openstack> Meeting started Tue Jan 17 16:00:22 2017 UTC and is due to finish in 60 minutes.  The chair is adrian_otto. Information about MeetBot at http://wiki.debian.org/MeetBot.
16:00:24 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
16:00:26 <openstack> The meeting name has been set to 'containers'
16:00:26 <adrian_otto> #link https://wiki.openstack.org/wiki/Meetings/Containers#Agenda_for_2017-01-17_1600_UTC Our Agenda
16:00:30 <adrian_otto> #topic Roll Call
16:00:33 <adrian_otto> Adrian Otto
16:00:38 <jvgrant__> Jaycen Grant
16:00:39 <strigazi> o/
16:00:40 <mvelten> Mathieu Velten
16:00:43 <tonanhngo> Ton Ngo
16:00:48 <jasond> Jason Dunsmore
16:01:01 <adrian_otto> hello jvgrant__ strigazi mvelten tonanhngo and jasond
16:01:10 <Drago> o/
16:01:16 <hongbin> o/
16:01:31 <juggler> o/
16:01:32 <adrian_otto> hello Drago hongbin and juggler
16:01:53 <NikoHermannsEric> tmorin: is the bgpvpn meeting today?
16:01:55 <juggler> hello adrian_otto
16:02:10 <strigazi> NikoHermannsEric it just ended
16:02:23 <NikoHermannsEric> oh, then I had the wrong time
16:02:24 <tmorin> hi NikoHermannsEric, yes indeed, it was 15:00 UTC
16:02:32 <NikoHermannsEric> oh ok
16:03:01 <adrian_otto> NikoHermannsEric: we're holding the Magnum meeting now, if that interests you at all
16:03:33 <adrian_otto> #topic Announcements
16:03:58 <adrian_otto> 1) Monday 2017-01-23 is the start of the feature freeze
16:04:13 <adrian_otto> for our Ocata release cycle
16:04:37 <adrian_otto> so this week we'll want to get all the new things merged prior to the freeze
16:04:39 <randallburt> o/
16:04:46 <adrian_otto> remember that bug fixes are welcome any time
16:04:50 <juggler> hey randallburt
16:04:51 <adrian_otto> even during the freeze.
16:05:05 <adrian_otto> any questions on the release?
16:05:31 <adrian_otto> any other announcements from team members?
16:06:01 <adrian_otto> #topic Review Action Items
16:06:04 <adrian_otto> (none)
16:06:15 <adrian_otto> #topic Blueprints/Bugs/Reviews/Ideas
16:06:21 <adrian_otto> Essential Blueprints
16:06:28 <adrian_otto> #link https://blueprints.launchpad.net/magnum/+spec/flatten-attributes Flatten Attributes [strigazi]
16:07:14 <strigazi> I don't have an update, I'll push tomorrow my patch for it
16:07:25 <adrian_otto> strigazi: thanks. I look forward to that.
16:07:29 <mvelten> regarding the release
16:07:36 <adrian_otto> mvelten: yes?
16:07:40 <juggler> strigazi excellent!
16:08:05 <mvelten> we really need f25 in I think
16:08:39 <mvelten> otherwise we are stuck with really old k8s for the next 6 months :(
16:08:45 <adrian_otto> mvelten: agreed
16:09:02 <adrian_otto> do you foresee any barriers to wrapping this up this week?
16:09:28 <mvelten> so now, we have a problem :) can you summarize strigazi ?
16:09:53 <strigazi> adrian_otto, maybe after action items?
16:10:00 <adrian_otto> ok, I will revisit this after Essential Blueprints
16:10:04 <mvelten> ok
16:10:05 <adrian_otto> next is:
16:10:06 <adrian_otto> #link https://blueprints.launchpad.net/magnum/+spec/nodegroups Nodegroups [Drago]
16:10:17 <adrian_otto> note the correct spelling from the agenda!
16:10:30 <jvgrant__> nice :)
16:10:34 <juggler> heh :)
16:10:48 <jvgrant__> there was some additional discussion and feedback on the spec this week
16:11:15 <Drago> Not much of an update from me. I've only made some comments on the spec. Hopefully jvgrant__ and I will be meeting this week to discuss breaking it up into smaller pieces that are more focused.
16:11:21 <jvgrant__> Drago and I are discussing breaking the spec down into smaller specs since it is so big and difficult to make progress on
16:11:32 <adrian_otto> ok, any reason to depart from previous plans for merging the spec in Ocata and implementing thereafter?
16:11:45 <jvgrant__> think that is a good plan still
16:12:34 <Drago> Any comment on a huge spec causes an iteration
16:12:51 <Drago> (well not any, but one worth iterating for)
16:12:52 <strigazi> breaking it in pieces means many focused specs?
16:12:54 <hongbin> the huge spec definitely causes difficulty to review
16:13:10 <adrian_otto> yes, I've struggled with it
16:13:17 <tonanhngo> True, breaking it up will help with the review
16:13:20 <Drago> Smaller specs that are single-subject would make it easier to review and vote on individually
16:13:20 <randallburt> +1 for smaller specs but would be nice if we tracked the big picture too somewhere
16:13:38 <Drago> randallburt: We plan to have a spec act as the high-level overview
16:13:53 <adrian_otto> the current review can be the overview, and we can break out details into smaller parts.
16:13:55 <randallburt> Drago:  sounds good
16:14:05 <strigazi> +1
16:14:30 <hongbin> for the big picture, i have a suggestion
16:14:35 <strigazi> We will merge the smaller parts first?
16:14:49 <juggler> adrian_otto / Drago / randallburt +1
16:14:53 <hongbin> consider using a devref as a reference for the big picture (like what kuryr team does: https://github.com/openstack/kuryr/tree/master/doc/source/devref)
16:15:12 <hongbin> the spec doesn't have to specify the big picture then
16:15:46 <strigazi> What is the difference to many smaller specs?
16:16:24 <Drago> hongbin: What is the difference between devref's goals_and_use_cases.rst and a separate spec that contains that information?
16:16:27 <adrian_otto> I'm missing the distinction between a spec and a devref. What is it?
16:16:51 <adrian_otto> if it's in the source tree, we need to vote to merge it
16:16:57 <hongbin> devref is a reference for developer to explore the project technical details
16:17:11 <hongbin> the spec is for specifying a feature design
16:17:25 <adrian_otto> they would be part of the same review, right?
16:17:37 <adrian_otto> just different documents
16:17:37 <hongbin> could be in different review
16:17:40 <Drago> strigazi: I'm not sure what you mean by "smaller parts first". The current plan is to remove parts of the NodeGroup spec and place them into new specs, so NodeGroups will get smaller too.
16:18:30 <adrian_otto> the rst format is good
16:18:38 <jvgrant__> strigazi, Drago: do you mean the implementation of those specs or just getting the specs reviewed?
16:18:52 <Drago> adrian_otto: Aren't specs already in rst?
16:18:53 <randallburt> adrian_otto:  specs are already rst tho
16:18:54 <adrian_otto> and it would be a nice way to thin out the spec
16:19:13 <adrian_otto> my comment appreciates the consistency of rst format with the spec format
16:19:27 <Drago> Okay
16:19:38 <Drago> Chocolate is good too?
16:19:48 <Drago> :P
16:19:53 <adrian_otto> can always count on Drago for a smile :-)
16:19:54 <randallburt> so basically we want to have multiple documents under the same "spec" sounds like
16:20:34 <Drago> If the idea is basically "put these specs in a subdirectory for easy browsing", that sounds fine to me
16:20:35 <juggler> Drago heh
16:20:37 <adrian_otto> randallburt: that should help reduce the context that a reviewer considers, and should accelerate progress
16:21:25 <adrian_otto> ok, any more discussion on this one?
16:21:40 <Drago> jvgrant__: I'd say getting the specs reviewed. Implementation patches can implement multiple things if they want
16:22:49 <adrian_otto> yatin is not here today
16:22:57 <adrian_otto> but I think we can celebrate completion of:
16:22:58 <adrian_otto> #link https://blueprints.launchpad.net/magnum/+spec/secure-etcd-cluster-coe Secure Central Data store(etcd) [yatin]
16:23:11 <adrian_otto> All related patches have merged, from what I can see
16:23:12 <Drago> Huzzah!
16:23:33 <juggler> yea!
16:23:38 <adrian_otto> this was an important milestone to help Magnum be appropriate for more production use cases
16:23:47 <mvelten> \o/
16:23:51 <juggler> yatin++
16:24:34 <adrian_otto> thanks yatin for your work on this enhancement
16:24:46 <adrian_otto> #link https://blueprints.launchpad.net/magnum/+spec/cluster-upgrades Cluster Upgrades [strigazi]
16:26:36 <strigazi> There were two comments in the spec and I didn't have time to push an update for this. I don't think it can make it for Ocata
16:27:22 <strigazi> Since the ff is on Monday
16:27:25 <adrian_otto> ok, so we should expect to merge the spec in Pike?
16:27:52 <strigazi> We can merge the spec in ocata if there is agreement
16:28:04 <tonanhngo> Have you given thought about how to proceed if the upgrade fails?  For existing cluster, this can be important
16:28:10 <adrian_otto> are the comments a big deal?
16:28:21 * adrian_otto (looking)
16:28:26 <Drago> Does it matter what cycle a spec lands in?
16:28:33 <strigazi> Drago, no
16:28:35 <adrian_otto> no
16:28:50 <Drago> And do specs have to abide by the feature freeze?
16:28:54 <adrian_otto> what matters is when the implementation is completed
16:29:49 <randallburt> Drago:  not technically, but I wouldn't imagine incomplete specs get a lot of eyes during ff
16:29:52 <hongbin> then, sounds like we don't have to push a spec
16:29:52 <strigazi> I can put effort to add an upgrade path for k8s
16:30:01 <randallburt> Drago:  given the review priority on stuff thats in-flight
16:30:41 <Drago> randallburt: Makes sense, thanks
16:31:26 <adrian_otto> ok, any more discussion on Cluster Upgrades?
16:32:04 <Drago> It's looking good, strigazi!
16:32:05 <adrian_otto> next subtopic:
16:32:08 <adrian_otto> Other Work Items
16:32:15 <strigazi> about f25
16:32:18 <adrian_otto> mvelten: you requested input from strigazi about f25
16:32:45 <strigazi> There's a specific bug related to the latest cloud-init.
16:33:22 <strigazi> Due to a cycle in the systemd units,
16:34:00 <strigazi> systemd may or may not decide to kill cloud-init; without it, none of our fragment scripts run
16:34:38 <Drago> What units are in the cycle, and is it a difficult thing to fix?
16:34:40 <strigazi> In our deployment and on my devstack (a 4-core desktop), I wasn't able to reproduce the failure
16:35:05 <strigazi> Drago, it's in cloud-init; all our scripts are called by it
16:35:16 <strigazi> The fix is one line :)
16:35:17 <mvelten> it is indeed weird that nobody has complained yet. It has been in f25 since release
16:36:06 <mvelten> I would guess that the slowness makes the unit ordering change or something like that
16:36:12 <strigazi> I filed the bug but I don't know when the fix will get merged
16:36:17 <Drago> A one line fix to the upstream ostree, I assume?
16:36:19 <Drago> strigazi: Link?
16:37:11 <adrian_otto> so our fragment scripts need to depend on cloud-init explicitly so systemd starts it first?
16:37:29 <strigazi> https://bugs.launchpad.net/ubuntu/+source/cloud-init/+bug/1623868 this is on ubuntu, in fedora it's not published yet
16:37:29 <openstack> Launchpad bug 1623868 in cloud-init (Ubuntu) "cloud-final.service does not run due to dependency cycle" [High,Fix released]
16:37:29 <adrian_otto> I thought the scripts were unrelated to systemd,
16:37:38 <randallburt> adrian_otto: that's not a thing. Our fragments are cloud-init scripts
16:37:43 <adrian_otto> that cloud-init picked them up and ran them. I guess that's not how it works?
16:37:58 <randallburt> adrian_otto:  no, the bug is that systemd can kill cloud-init
16:38:12 <Drago> cloud-init itself is a systemd unit
16:38:14 <strigazi> The point mathieu wants to make is that it might take one or two weeks
16:38:21 <adrian_otto> why? because it fails some health check and needs to be restarted?
16:38:24 <strigazi> to be fixed
16:38:28 <adrian_otto> I'm just trying to understand the problem
16:38:33 <mvelten> strigazi did you open a bug on fedora tracker ?
16:38:59 * adrian_otto reading bug 1623868
16:38:59 <openstack> bug 1623868 in cloud-init (Ubuntu) "cloud-final.service does not run due to dependency cycle" [High,Fix released] https://launchpad.net/bugs/1623868
16:39:00 <strigazi> yes but I got an error on the website, trying again after the meeting
16:40:27 <strigazi> If the cloud init package is fixed we get a fresh image the next day
16:40:38 <strigazi> Atomic is built daily
16:40:47 <adrian_otto> so putting "After=multi-user.target" in the cloud-init systemd unit file is the fix?
16:40:52 <strigazi> yes
16:40:59 <adrian_otto> ok, simple enough.
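[Editor's note: the one-line ordering fix discussed above could be sketched as a systemd drop-in like the following. This is only an illustration of the idea from the conversation; the file path, unit name, and exact directive used by the actual upstream cloud-init patch may differ.]

```ini
# Hypothetical drop-in to adjust cloud-init's ordering without rebuilding
# the package, e.g. /etc/systemd/system/cloud-init.service.d/ordering.conf
# (illustrative only; the real fix lands in cloud-init's own unit files)
[Unit]
After=multi-user.target
```

A drop-in like this would let an affected image be patched in place; systemd picks it up after `systemctl daemon-reload`.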
16:41:13 <mvelten> yep but it stays 7 days in testing usually at the rpm repo level
16:42:11 <adrian_otto> so let's connect this back to magnum
16:42:16 <adrian_otto> the fix is in fc25 now
16:42:35 <adrian_otto> and if we make fc25 image builds as part of Ocata, then we enjoy that fix too
16:42:40 <Drago> strigazi: Is there any chance that your f25 patch can be made to work with the current image and the image with f25?
16:43:13 <strigazi> No
16:43:24 <mvelten> current doesn't have the problem, right?
16:43:30 <mvelten> f24
16:43:34 <strigazi> no
16:43:43 <strigazi> for magnum,
16:43:55 <strigazi> can we take it in at any point before the release?
16:44:27 <adrian_otto> sure
16:44:30 <mvelten> yes can we ack that we want it and merge after the FF window deadline ?
16:44:31 <adrian_otto> this is a bug fix
16:44:36 <mvelten> ok perfect
16:44:44 <strigazi> ok, thanks
16:45:08 <adrian_otto> but I don't want to conflate the fix with new versions of COEs
16:45:11 <Drago> adrian_otto: Hate to be devil's advocate, but how is this a bug fix to magnum?
16:45:37 <mvelten> if we merge it it would be :P
16:46:05 <adrian_otto> if magnum is malfunctioning due to a bug in a system we depend on, and we need to merge something to include the fix to that lower level bug, that's a magnum bugfix too.
16:46:49 <Drago> So what you're saying is that we'll purposefully merge in something that breaks magnum, then wait for the new image?
16:47:04 <adrian_otto> strigazi: do you recognize my concern about defect repair and COE upgrades being coupled?
16:47:07 <Drago> Because as I understand it the "fix" is not even in the magnum codebase
16:47:20 <mvelten> for k8s v1.5.1 is in the testing fedora repo. Just got a -1 in bodhi so I guess it still needs 2 weeks before available
16:48:10 <Drago> By the way, this patch would be a great feature freeze exception candidate
16:48:27 <adrian_otto> yes, it would
16:48:40 <mvelten> is there a guideline for exception ?
16:48:41 <strigazi> adrian_otto they are coupled in the sense that you may upgrade to something that's not working, right?
16:49:01 <adrian_otto> mvelten: I have the ability to approve FFE requests on a case by case basis.
16:49:30 <mvelten> ok, so no outside people have to validate. Good
16:49:35 <adrian_otto> the important thing is that we don't break other parts of OpenStack by making our changes.
16:49:53 <adrian_otto> that's the purpose of the freeze, so our various services can be tested together.
16:50:14 <adrian_otto> I'm not aware of other OpenStack services depending on Magnum, so that problem is unlikely
16:50:29 <mvelten> exactly what I was going to say ;)
16:50:51 <adrian_otto> I'll advance us to open discussion. We can continue this convo as well
16:50:59 <adrian_otto> #topic Open Discussion
16:51:22 <Drago> adrian_otto: So FFE instead of merging in broken stuff?
16:51:29 <Drago> Agreed?
16:51:39 <randallburt> Drago:  that would sound more reasonable to me
16:51:43 <adrian_otto> yes
16:52:00 <adrian_otto> ideally we'd solve this before an FFE is needed
16:52:16 <Drago> Well sure
16:52:23 <vijendar_> https://review.openstack.org/391301 - Magnum stats API
16:52:23 <vijendar_> https://review.openstack.org/420946 - Magnum stats API documentation
16:52:23 <vijendar_> https://review.openstack.org/419099 - Resource Quota - Add config option to limit clusters
16:52:25 <vijendar_> https://review.openstack.org/419100 - Resource Quota - DB layer changes
16:52:26 <strigazi> I have one last thing. I'm wrapping up a patch of the driver field, I'll update 409253 later today
16:52:27 <vijendar_> https://review.openstack.org/419704 - Resource Quota - Adding quota API
16:52:29 <vijendar_> https://review.openstack.org/419705 - Resource Quota - Limit clusters per project
16:52:39 <vijendar_> I have few in review patches. Please review and let me know your comments.
16:52:57 <adrian_otto> thanks vijendar_
16:53:12 <hongbin> vijendar_: what are the priorities of these patches?
16:53:23 <randallburt> vijendar_:  am setting aside time to look at those this afternoon
16:53:37 <vijendar_> randallburt: thanks
16:53:49 <adrian_otto> I'll review them all today
16:53:50 <vijendar_> hongbin: please take a look at stats patches first
16:54:19 <vijendar_> adrian_otto: thanks
16:54:38 <mvelten> https://review.openstack.org/#/c/405374/
16:55:35 <juggler> looking ahead: https://www.socallinuxexpo.org/scale/15x
16:55:44 <mvelten> This one is useful for a bunch of tools built on top of k8s. Unfortunately it depends on the controller being up and running, so we are hitting the CI again
16:55:46 <juggler> hope to see some of you there
16:55:48 <adrian_otto> mvelten: looks like that one is missing footnote links in the commit message
16:56:28 <adrian_otto> juggler: yes, I'm speaking at SCaLE
16:56:36 <mvelten> in the bug ;)
16:56:45 <juggler> adrian_otto I know ;)
16:57:12 <adrian_otto> we might be the only two SoCal residents here
16:57:19 <hongbin> i have one topic to bring up. the kuryr team is looking for feedback for this proposal: https://blueprints.launchpad.net/magnum/+spec/integrate-kuryr-kubernetes
16:58:26 <adrian_otto> #link https://github.com/openstack/kuryr-kubernetes
16:58:37 <adrian_otto> hongbin: do you know what the status of the above code is?
16:58:57 <mvelten> does it support a trust user ?
16:59:11 <hongbin> adrian_otto: it seems it is ready to be integrated with
16:59:33 <hongbin> mvelten: this is a good question for the kuryr team
16:59:42 <adrian_otto> time to wrap up for today
16:59:56 <adrian_otto> thanks hongbin for raising that one.
16:59:59 <mvelten> for after ocata I would say, we already have trouble getting an updated k8s version in :) but on the principle it would be great
17:00:04 <adrian_otto> Thanks for attending today. Our next team meeting is 2017-01-24 at 1600 UTC. See you then!
17:00:12 <adrian_otto> #endmeeting