16:00:22 #startmeeting containers
16:00:23 Meeting started Tue Jan 17 16:00:22 2017 UTC and is due to finish in 60 minutes. The chair is adrian_otto. Information about MeetBot at http://wiki.debian.org/MeetBot.
16:00:24 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
16:00:26 The meeting name has been set to 'containers'
16:00:26 #link https://wiki.openstack.org/wiki/Meetings/Containers#Agenda_for_2017-01-17_1600_UTC Our Agenda
16:00:30 #topic Roll Call
16:00:33 Adrian Otto
16:00:38 Jaycen Grant
16:00:39 o/
16:00:40 Mathieu Velten
16:00:43 Ton Ngo
16:00:48 Jason Dunsmore
16:01:01 hello jvgrant__ strigazi mvelten tonanhngo and jasond
16:01:10 o/
16:01:16 o/
16:01:31 o/
16:01:32 hello Drago hongbin and juggler
16:01:53 tmorin: is the bgpvpn meeting today?
16:01:55 hello adrian_otto
16:02:10 NikoHermannsEric it just ended
16:02:23 oh, then I had the wrong time
16:02:24 hi NikoHermannsEric, yes indeed, it was 15:00 UTC
16:02:32 oh ok
16:03:01 NikoHermannsEric: we're holding the Magnum meeting now, if that interests you at all
16:03:33 #topic Announcements
16:03:58 1) Monday 2017-01-23 is the start of the feature freeze
16:04:13 for our Ocata release cycle
16:04:37 so this week we'll want to get all the new things merged prior to the freeze
16:04:39 o/
16:04:46 remember that bug fixes are welcome any time
16:04:50 hey randallburt
16:04:51 even during the freeze.
16:05:05 any questions on the release?
16:05:31 any other announcements from team members?
16:06:01 #topic Review Action Items
16:06:04 (none)
16:06:15 #topic Blueprints/Bugs/Reviews/Ideas
16:06:21 Essential Blueprints
16:06:28 #link https://blueprints.launchpad.net/magnum/+spec/flatten-attributes Flatten Attributes [strigazi]
16:07:14 I don't have an update; I'll push my patch for it tomorrow
16:07:25 strigazi: thanks. I look forward to that.
16:07:29 regarding the release
16:07:36 mvelten: yes?
16:07:40 strigazi excellent!
16:08:05 we really need f25 in, I think
16:08:39 otherwise we are stuck with a really old k8s for the next 6 months :(
16:08:45 mvelten: agreed
16:09:02 do you foresee any barriers to wrapping this up this week?
16:09:28 so now, we have a problem :) can you summarize, strigazi?
16:09:53 adrian_otto, maybe after action items?
16:10:00 ok, I will revisit this after Essential Blueprints
16:10:04 ok
16:10:05 next is:
16:10:06 #link https://blueprints.launchpad.net/magnum/+spec/nodegroups Nodegroups [Drago]
16:10:17 note the correct spelling from the agenda!
16:10:30 nice :)
16:10:34 heh :)
16:10:48 there was some additional discussion and feedback on the spec this week
16:11:15 Not much of an update from me. I've only made some comments on the spec. Hopefully jvgrant__ and I will be meeting this week to discuss breaking it up into smaller pieces that are more focused.
16:11:21 Drago and I are discussing breaking the spec down into smaller specs since it is so big and difficult to make progress on
16:11:32 ok, any reason to depart from previous plans for merging the spec in Ocata and implementing thereafter?
16:11:45 I think that is still a good plan
16:12:34 Any comment on a huge spec causes an iteration
16:12:51 (well, not any, but one worth iterating for)
16:12:52 breaking it in pieces means many focused specs?
16:12:54 the huge spec definitely causes difficulty in review
16:13:10 yes, I've struggled with it
16:13:17 True, breaking it up will help with the review
16:13:20 Smaller specs that are single-subject would make it easier to review and vote on individually
16:13:20 +1 for smaller specs, but it would be nice if we tracked the big picture too somewhere
16:13:38 randallburt: We plan to have a spec act as the high-level overview
16:13:53 the current review can be the overview, and we can break out details into smaller parts.
16:13:55 Drago: sounds good
16:14:05 +1
16:14:30 for the big picture, i have a suggestion
16:14:35 We will merge the smaller parts first?
16:14:49 adrian_otto / Drago / randallburt +1
16:14:53 consider using a devref as a reference for the big picture (like what the kuryr team does: https://github.com/openstack/kuryr/tree/master/doc/source/devref)
16:15:12 the spec doesn't have to specify the big picture then
16:15:46 What is the difference from many smaller specs?
16:16:24 hongbin: What is the difference between devref's goals_and_use_cases.rst and a separate spec that contains that information?
16:16:27 I'm missing the distinction between a spec and a devref. What is it?
16:16:51 if it's in the source tree, we need to vote to merge it
16:16:57 a devref is a reference for developers to explore the project's technical details
16:17:11 the spec is for specifying a feature design
16:17:25 they would be part of the same review, right?
16:17:37 just different documents
16:17:37 it could be in a different review
16:17:40 strigazi: I'm not sure what you mean by "smaller parts first". The current plan is to remove parts of the NodeGroup spec and place them into new specs, so NodeGroups will get smaller too.
16:18:30 the rst format is good
16:18:38 strigazi, Drago: do you mean the implementation of those specs or just getting the specs reviewed?
16:18:52 adrian_otto: Aren't specs already in rst?
16:18:53 adrian_otto: specs are already rst tho
16:18:54 and it would be a nice way to thin out the spec
16:19:13 my comment appreciates the consistency of the rst format with the spec format
16:19:27 Okay
16:19:38 Chocolate is good too?
16:19:48 :P
16:19:53 can always count on Drago for a smile :-)
16:19:54 so basically we want to have multiple documents under the same "spec", sounds like
16:20:34 If the idea is basically "put these specs in a subdirectory for easy browsing", that sounds fine to me
16:20:35 Drago heh
16:20:37 randallburt: that should help reduce the context that a reviewer considers, and should accelerate progress
16:21:25 ok, any more discussion on this one?
16:21:40 jvgrant__: I'd say getting the specs reviewed. Implementation patches can implement multiple things if they want
16:22:49 yatin is not here today
16:22:57 but I think we can celebrate completion of:
16:22:58 #link https://blueprints.launchpad.net/magnum/+spec/secure-etcd-cluster-coe Secure Central Data Store (etcd) [yatin]
16:23:11 All related patches have merged, from what I can see
16:23:12 Huzzah!
16:23:33 yea!
16:23:38 this was an important milestone to help Magnum be appropriate for more production use cases
16:23:47 \o/
16:23:51 yatin++
16:24:34 thanks yatin for your work on this enhancement
16:24:46 #link https://blueprints.launchpad.net/magnum/+spec/cluster-upgrades Cluster Upgrades [strigazi]
16:26:36 There were two comments on the spec and I didn't have time to push an update for this. I don't think it can make it for Ocata
16:27:22 Since the FF is on Monday
16:27:25 ok, so we should expect to merge the spec in Pike?
16:27:52 We can merge the spec in Ocata if there is agreement
16:28:04 Have you given thought to how to proceed if the upgrade fails? For existing clusters, this can be important
16:28:10 are the comments a big deal?
16:28:21 * adrian_otto (looking)
16:28:26 Does it matter what cycle a spec lands in?
16:28:33 Drago, no
16:28:35 no
16:28:50 And do specs have to abide by the feature freeze?
16:28:54 what matters is when the implementation is completed
16:29:49 Drago: not technically, but I wouldn't imagine incomplete specs get a lot of eyes during FF
16:29:52 then, sounds like we don't have to push a spec
16:29:52 I can put effort into adding an upgrade path for k8s
16:30:01 Drago: given the review priority on stuff that's in-flight
16:30:41 randallburt: Makes sense, thanks
16:31:26 ok, any more discussion on Cluster Upgrades?
16:32:04 It's looking good, strigazi!
16:32:05 next subtopic:
16:32:08 Other Work Items
16:32:15 about f25
16:32:18 mvelten: you requested input from strigazi about f25
16:32:45 There is a specific bug related to the latest cloud-init.
16:33:22 Due to a cycle in the systemd units,
16:34:00 systemd may or may not decide to kill cloud-init; without it, none of our fragment scripts run
16:34:38 What units are in the cycle, and is it a difficult thing to fix?
16:34:40 In our deployment and on my devstack (a 4-core desktop), I wasn't able to reproduce the failure
16:35:05 Drago, it's in cloud-init; all our scripts are called by it
16:35:16 The fix is one line :)
16:35:17 it is indeed weird that nobody has complained yet. It has been in f25 since release
16:36:06 I would guess that the slowness makes the unit ordering change or something like that
16:36:12 I filed the bug but I don't know when the fix will get merged
16:36:17 A one-line fix to the upstream ostree, I assume?
16:36:19 strigazi: Link?
16:37:11 so our fragment scripts need to depend on cloud-init explicitly so systemd starts it first?
16:37:29 https://bugs.launchpad.net/ubuntu/+source/cloud-init/+bug/1623868 this is on Ubuntu; in Fedora it's not published yet
16:37:29 Launchpad bug 1623868 in cloud-init (Ubuntu) "cloud-final.service does not run due to dependency cycle" [High,Fix released]
16:37:29 I thought the scripts were unrelated to systemd,
16:37:38 adrian_otto: that's not a thing. Our fragments are cloud-init scripts
16:37:43 that cloud-init picked them up and ran them. I guess that's not how it works?
16:37:58 adrian_otto: no, the bug is that systemd can kill cloud-init
16:38:12 cloud-init itself is a systemd unit
16:38:14 The point mathieu wants to make is that it might take one or two weeks
16:38:21 why? because it fails some health check and needs to be restarted?
16:38:24 to be fixed
16:38:28 I'm just trying to understand the problem
16:38:33 strigazi did you open a bug on the Fedora tracker?
16:38:59 * adrian_otto reading bug 1623868
16:38:59 bug 1623868 in cloud-init (Ubuntu) "cloud-final.service does not run due to dependency cycle" [High,Fix released] https://launchpad.net/bugs/1623868
16:39:00 yes, but I got an error on the website; trying again after the meeting
16:40:27 If the cloud-init package is fixed, we get a fresh image the next day
16:40:38 Atomic is built daily
16:40:47 so putting "After=multi-user.target" in the cloud-init systemd unit file is the fix?
16:40:52 yes
16:40:59 ok, simple enough.
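
(For reference: the "one line" fix agreed on above is an ordering change in cloud-init's final-stage systemd unit, which is what runs Magnum's fragment scripts. Below is a minimal sketch of what the patched unit could look like, assuming the change described in this meeting and in LP#1623868; the Description and ExecStart lines follow cloud-init's commonly packaged cloud-final.service, and the exact upstream file may differ.)

    # /usr/lib/systemd/system/cloud-final.service -- illustrative sketch only,
    # not the verbatim upstream unit; see https://launchpad.net/bugs/1623868
    [Unit]
    Description=Execute cloud user/final scripts
    # The one-line change under discussion: ordering cloud-final after
    # multi-user.target breaks the dependency cycle that could make systemd
    # kill cloud-init before Magnum's fragment scripts (run as cloud-init
    # "final" modules) had a chance to execute.
    After=multi-user.target

    [Service]
    Type=oneshot
    ExecStart=/usr/bin/cloud-init modules --mode=final
    RemainAfterExit=yes

    [Install]
    WantedBy=multi-user.target
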
16:41:13 yep, but it usually stays 7 days in testing at the RPM repo level
16:42:11 so let's connect this back to magnum
16:42:16 the fix is in fc25 now
16:42:35 and if we make fc25 image builds as part of Ocata, then we enjoy that fix too
16:42:40 strigazi: Is there any chance that your f25 patch can be made to work with the current image and the image with f25?
16:43:13 No
16:43:24 current doesn't have the problem, right?
16:43:30 f24
16:43:34 no
16:43:43 for magnum,
16:43:55 can we take it in at any point before the release?
16:44:27 sure
16:44:30 yes, can we ack that we want it and merge after the FF window deadline?
16:44:31 this is a bug fix
16:44:36 ok perfect
16:44:44 ok, thanks
16:45:08 but I don't want to conflate the fix with new versions of COEs
16:45:11 adrian_otto: Hate to be devil's advocate, but how is this a bug fix to magnum?
16:45:37 if we merge it, it would be :P
16:46:05 if magnum is malfunctioning due to a bug in a system we depend on, and we need to merge something to include the fix to that lower-level bug, that's a magnum bugfix too.
16:46:49 So what you're saying is that we'll purposefully merge in something that breaks magnum, then wait for the new image?
16:47:04 strigazi: do you recognize my concern about defect repair and COE upgrades being coupled?
16:47:07 Because as I understand it, the "fix" is not even in the magnum codebase
16:47:20 for k8s, v1.5.1 is in the Fedora testing repo. It just got a -1 in Bodhi, so I guess it still needs 2 weeks before it's available
16:48:10 By the way, this patch would be a great feature freeze exception candidate
16:48:27 yes, it would
16:48:40 is there a guideline for exceptions?
16:48:41 adrian_otto they are coupled in the sense that you may upgrade into something that's not working, right?
16:49:01 mvelten: I have the ability to approve FFE requests on a case-by-case basis.
16:49:30 ok, so no outside people have to validate. good
16:49:35 the important thing is that we don't break other parts of OpenStack by making our changes.
16:49:53 that's the purpose of the freeze, so our various services can be tested together.
16:50:14 I'm not aware of other OpenStack services depending on Magnum, so that problem is unlikely
16:50:29 exactly what I was going to say ;)
16:50:51 I'll advance us to open discussion. We can continue this convo as well
16:50:59 #topic Open Discussion
16:51:22 adrian_otto: So FFE instead of merging in broken stuff?
16:51:29 Agreed?
16:51:39 Drago: that would sound more reasonable to me
16:51:43 yes
16:52:00 ideally we'd solve this before an FFE is needed
16:52:16 Well, sure
16:52:23 https://review.openstack.org/391301 - Magnum stats API
16:52:23 https://review.openstack.org/420946 - Magnum stats API documentation
16:52:23 https://review.openstack.org/419099 - Resource Quota - Add config option to limit clusters
16:52:25 https://review.openstack.org/419100 - Resource Quota - DB layer changes
16:52:26 I have one last thing. I'm wrapping up a patch for the driver field; I'll update 409253 later today
16:52:27 https://review.openstack.org/419704 - Resource Quota - Adding quota API
16:52:29 https://review.openstack.org/419705 - Resource Quota - Limit clusters per project
16:52:39 I have a few patches in review. Please review them and let me know your comments.
16:52:57 thanks vijendar_
16:53:12 vijendar_: what are the priorities of these patches?
16:53:23 vijendar_: I am setting aside time to look at those this afternoon
16:53:37 randallburt: thanks
16:53:49 I'll review them all today
16:53:50 hongbin: please take a look at the stats patches first
16:54:19 adrian_otto: thanks
16:54:38 https://review.openstack.org/#/c/405374/
16:55:35 looking ahead: https://www.socallinuxexpo.org/scale/15x
16:55:44 This one is useful for a bunch of tools built on top of k8s. Unfortunately it depends on the controller being up and running, so we are hitting the CI again
16:55:46 hope to see some of you there
16:55:48 mvelten: looks like that one is missing footnote links in the commit message
16:56:28 juggler: yes, I'm speaking at SCaLE
16:56:36 in the bug ;)
16:56:45 adrian_otto I know ;)
16:57:12 we might be the only two SoCal residents here
16:57:19 i have one topic to bring up. the kuryr team is looking for feedback on this proposal: https://blueprints.launchpad.net/magnum/+spec/integrate-kuryr-kubernetes
16:58:26 #link https://github.com/openstack/kuryr-kubernetes
16:58:37 hongbin: do you know what the status of the above code is?
16:58:57 does it support a trust user?
16:59:11 adrian_otto: it seems it is ready to be integrated with
16:59:33 mvelten: this is a good question for the kuryr team
16:59:42 time to wrap up for today
16:59:56 thanks hongbin for raising that one.
16:59:59 for after Ocata, I would say; we already have trouble getting an updated k8s version in :) but in principle it would be great
17:00:04 Thanks for attending today. Our next team meeting is 2017-01-24 at 1600 UTC. See you then!
17:00:12 #endmeeting