16:00:14 #startmeeting containers
16:00:14 Meeting started Tue Jan 24 16:00:14 2017 UTC and is due to finish in 60 minutes. The chair is adrian_otto. Information about MeetBot at http://wiki.debian.org/MeetBot.
16:00:16 #link https://wiki.openstack.org/wiki/Meetings/Containers#Agenda_for_2017-01-24_1600_UTC Our Agenda
16:00:16 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
16:00:18 The meeting name has been set to 'containers'
16:00:23 #topic Roll Call
16:00:35 #topic Roll Call
16:00:41 Stephen Watson o/
16:00:42 Jaycen Grant
16:00:43 Adrian Otto
16:00:44 Ton Ngo
16:00:46 Dusty Mabe
16:00:48 Perry Rivera o/
16:00:59 * juggler will be back in 1m
16:01:05 o/
16:01:19 hello swatson jvgrant tonanhngo dustymabe juggler and Drago1
16:01:24 o/
16:01:31 o/
16:01:44 hi dustymabe
16:01:52 hello strigazi and hieulq_
16:01:54 strigazi: o/
16:02:22 o/
16:02:43 o/
16:03:11 hello randallburt and jasond
16:03:23 hello all :)
16:03:43 we will begin with announcements in one moment
16:03:58 o/
16:04:05 o/
16:04:16 hi coreyob and vijendar
16:04:37 #topic Announcements
16:05:26 1) Feature Freeze. We scheduled feature freeze to begin Monday, but upon reviewing all the open reviews, it became apparent that we need to fit at least one more important work item into the release.
16:05:41 We will come back to this after Essential Blueprints.
16:06:13 we can initiate string freeze now
16:06:21 this will allow docs to be translated, as needed.
16:06:31 2) We will not have an IRC team meeting on 2017-02-21 because of the Atlanta PTG event.
16:06:44 I have marked the meeting calendar accordingly.
16:07:14 for those not attending the PTG, how would you like to be updated on the events at the PTG?
16:07:34 would you like a follow-up section in the subsequent team meeting to act as a summary, or something in a different form?
16:08:07 A summary etherpad should work
16:08:11 ^
16:08:13 sometimes the PTL will make a blog post that summarizes the content of the PTG for the benefit of those who could not attend
16:08:28 etherpads are kept, so we can definitely share those
16:08:33 tonanhngo++
16:08:57 ok, any other announcements from team members?
16:09:31 #topic Review Action Items
16:09:33 (none)
16:09:38 Blueprints/Bugs/Reviews/Ideas
16:09:44 Essential Blueprints
16:09:51 #link https://blueprints.launchpad.net/magnum/+spec/flatten-attributes Flatten Attributes [strigazi]
16:10:45 I haven't pushed an update. I'm updating the db api to write clusters and cluster_templates transactionally.
16:11:18 Ok, thanks. Any questions from team members on this work item?
16:11:27 Before, I was writing ClusterAttributes first and the cluster or CT after, which is highly undesirable
16:12:23 any more on this one?
16:12:35 no q's from here
16:12:40 ok, next is:
16:12:42 #link https://blueprints.launchpad.net/magnum/+spec/nodegroups Nodegroups [Drago]
16:13:20 jvgrant and I are splitting up the spec into 6 topics
16:13:42 So you'll start seeing them come out over the next week
16:13:46 should be easier to review
16:14:18 That's all I've really got about that
16:14:44 thanks Drago1
16:14:55 * dustymabe wonders if we can talk about atomic host soonish - i have a team dinner starting soon
16:14:56 ty jvgrant / Drago
16:15:14 dustymabe: yes, in just a few mins
16:15:17 #link https://blueprints.launchpad.net/magnum/+spec/cluster-upgrades Cluster Upgrades [strigazi]
16:15:18 adrian_otto: +1
16:15:44 I don't have anything this week
16:15:50 thanks strigazi
16:15:58 #topic Other Work Items
16:16:20 now, I'd like to extend a special welcome to dustymabe who joins us from the Fedora Atomic team
16:16:27 \o/
16:16:38 hello dustymabe!
16:16:40 welcome!
16:16:45 welcome!
16:16:58 I've engaged in dialogue with his team in an effort to bring our mutual interests together.
16:16:58 o/
16:17:26 We've used Fedora Atomic in a number of our drivers, and have held a few team discussions about whether we should change direction at all
16:18:13 one of our considerations is how our upstream linux packagers may participate in driver support
16:18:40 so I thought it would be valuable for you to meet dustymabe and give us an opportunity to convey questions or concerns about Atomic host
16:18:51 indeed.
16:18:56 nice to meet you all
16:19:10 dusty can you take just a moment and tell us a bit about your role?
16:19:17 adrian_otto: sure
16:19:52 Currently my role is to help Fedora Atomic become more stable within Fedora (upstream from RHEL Atomic Host)
16:20:04 I've been a volunteer in Fedora Atomic for a long time
16:20:17 only recently did I take a role where I'm working on it for my $dayjob
16:20:32 I am actually a member of the Red Hat Atomic team (internal)
16:20:54 excellent. We are happy that you reached out.
16:20:54 but I focus on Fedora, and also on making sure we do things upstream first (i.e. making changes there first)
16:21:59 so, Magnum team, if you're comfortable discussing your thoughts with dusty today, you are welcome to chime in. If not, you're welcome to reach him later.
16:22:25 dustymabe: what are your team's plans for PTG attendance?
16:22:32 FYI, dusty helped me fix the issue with cloud-init in FA25
16:23:00 OH!! that's awesome. I was actually really happy that fix was so quick.
16:23:06 adrian_otto: I have not yet booked a flight but I plan to attend on that Thursday (I believe I confirmed with you in the email)
16:23:22 dustymabe: awesome!
16:23:50 dustymabe: great :)
16:23:52 yes, so we'll have an agenda item on Thursday at the PTG to talk about drivers, and find out what's reasonable to expect from the upstream distros
16:23:58 strigazi: dustymabe does that mean the os-*-config and heat agents are the last hurdle?
16:24:29 randallburt: i'm actually not sure - i'm not very familiar with the exact problem
16:24:47 dustymabe: k. wondering about that and containerized k8s
16:25:00 I haven't discussed this yet (heat-agents)
16:25:07 randallburt: i.e. when will containerized k8s be something you can use?
16:25:16 dustymabe: what time do you need to take off today? I know you have contending forces on your schedule, and want to be respectful of that.
16:25:35 strigazi: k. I had a quick email chat with dustymabe last week about it and was just wondering if you guys had discussed
16:25:40 adrian_otto: maybe in 10 minutes
16:25:48 ok, thanks.
16:27:07 dustymabe: Can you describe the recommended approach for updating packages on a running host? e.g. updating to a new k8s version
16:27:32 one thing I'd like to try to do is help you guys solve problems without having to "bake" too much - obviously we want you to be happy, but we'd like to make changes to Atomic Host that generally benefit most users rather than baking a bunch of stuff that is for specific things
16:27:42 tonanhngo: I think that's very dependent on hypercube/non-hypercube
16:28:18 tonanhngo: right now we have k8s packages on the host - if you want to use those packages to run kube then you are tied to the version in Fedora
16:28:30 you can also choose to run it not from those packages - as containers
16:28:30 i won't be at the PTG, but if you could cover the best way to launch containers using atomic --system (runc/systemd) it would be useful. we've had issues integrating some containers this way, mostly coming up with a config.json that works
16:28:40 in that case you choose your own version
16:29:14 rochapor1o: so you are using system containers?
16:29:21 trying
16:29:27 IMO, hypercube is in our best interests in this case unless we want to be building new ostrees on updates
16:29:52 got ya - so system containers are our answer to being able to run whatever version of kubernetes you want
16:30:05 DuncanT: yeah, I think so
16:30:11 however, they are relatively new and still need some vetting
16:30:13 dustymabe: at our last design summit, we focused on cluster upgrades as a feature we'd like to address in Magnum. To upgrade the COE (kubernetes, docker/swarm, mesos, etc.) we expressed a preference to try to keep the host OS as immutable as possible, and stop/start new container images that compose the various components of the COE. Adopting hypercube should advance us toward that ideal.
16:30:13 sorry DuncanT, I meant dustymabe
16:30:55 adrian_otto: yes, hypercube should advance you toward that ideal
16:31:06 or hyperkube... can not remember which is the correct spelling now.
16:31:19 either way, we mean containerized k8s
16:31:33 we are working on system containers as well and hope to have that be something people like you can consume
16:31:37 randallburt: +1
16:31:52 dustymabe: awesome, that sounds like just the ticket
16:31:54 I can try to give a demo of that when we meet if you like
16:32:02 Do you work on putting container images in the ostree?
16:32:16 strigazi: me personally?
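The system-container approach discussed above (running a chosen kubernetes version under runc/systemd via the `atomic` CLI, decoupled from the RPM-packaged version on the host) can be sketched roughly as follows. This is a hedged illustration, not the team's agreed procedure: the image name and tag are hypothetical, and the exact flags should be checked against `atomic install --help` on the Atomic Host build in use.

```
# Install a kubelet system container; the install step pulls the image,
# renders its config.json and a systemd unit from templates shipped in
# the image, and registers it with systemd (image name is made up here):
atomic install --system --name kubelet \
    registry.example.com/kubelet-system-container:v1.5.2

# Start it like any other systemd service:
systemctl start kubelet

# Moving to a different kubernetes version then means swapping the
# container image rather than rebuilding the host's ostree.
```

The config.json issues rochapor1o mentions typically come from the OCI spec template baked into the image, which is why a known-good upstream-built system container image is attractive here.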
16:32:33 you and your team
16:32:34 actually I should take a step back and ask exactly what you mean by that
16:32:41 #link http://stackoverflow.com/questions/33953254/what-is-hyperkube
16:32:44 because that could possibly mean more than one thing
16:32:50 strigazi: that seems fairly straightforward assuming you have a systemd unit that loads them into docker on startup
16:33:08 Without pulling
16:33:15 and without loading
16:33:23 strigazi: - oh you mean bake into the image and start on boot
16:33:27 yes
16:33:29 blows up your image size tho
16:33:47 pay now or pay later
16:33:53 That's less important than the time it takes to pull or load the image on boot
16:34:01 got ya, yeah so we have tossed around the idea of being able to use anaconda to build images that also contain container images in them
16:34:05 it will, but we have a big issue with our CI's poor performance
16:34:07 tonanhngo: Drago1 fair enough
16:34:12 in a large-scale environment it's a way of "pre-pulling" most of what needs to start at launch time.
16:34:56 right, it all depends on who you are and what you care about
16:35:12 once we achieve containerized kube status
16:35:19 For the CI it is a good option
16:35:26 then some people would benefit from having the images baked
16:35:32 for a real deployment a local registry is better
16:35:49 but then some people say, i'm not using those images at all so the extra 100s of MiB suck
16:36:22 so I think we'll probably rely on not having the images baked but making it really easy to set up and pull from a registry
16:36:27 dustymabe: true, but in our case if the mechanism was there, we'd use the crap out of it
16:36:43 of course we can also offer tools for people to build their own image with prebaked images
16:36:58 dustymabe: I think that last part would be best IMO
16:36:59 *prebaked container images*
16:36:59 dustymabe: thanks so much for coming today. I know you've got to peel out now.
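randallburt's "systemd unit that loads them into docker on startup" suggestion above could look roughly like the fragment below. The unit name, paths, and ordering against a kubelet service are all hypothetical, shown only to illustrate the pre-baked-tarball approach (tarballs produced at image-build time with `docker save`, loaded at boot with `docker load` so nothing is pulled over the network):

```
# /etc/systemd/system/prepull-images.service  (hypothetical unit and paths)
[Unit]
Description=Load pre-baked container images into docker
Requires=docker.service
After=docker.service
Before=kubelet.service

[Service]
Type=oneshot
RemainAfterExit=yes
# Load every image tarball baked into the OS image at build time
ExecStart=/usr/bin/sh -c 'for f in /var/lib/baked-images/*.tar; do /usr/bin/docker load -i "$f"; done'

[Install]
WantedBy=multi-user.target
```

This trades image size for boot-time determinism, which matches the CI argument made above: no registry pulls on the gate-test critical path.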
We're looking forward to having you at the PTG, and maybe following up on an ML thread on the openstack-dev@lists.openstack.org list.
16:37:12 we need that last one for the CI
16:37:15 you are welcome to join our team meetings any time
16:37:24 adrian_otto: :)
16:37:29 thanks dustymabe
16:37:32 strigazi: agreed, because pre-baked isn't an issue for upgrade IMO
16:37:36 thank you dustymabe :)
16:37:36 indeed, and you guys feel free to send a mail to atomic-devel@projectatomic.io
16:37:42 thanks dustymabe
16:37:47 randallburt yes
16:37:52 i hope to see you guys around more! :)
16:38:12 indeed!
16:38:14 and also that I can answer your questions favorably, or at least find a "good enough" solution
16:38:32 strigazi: on the subject of Fedora...
16:38:38 Did someone say 3rd party CI? Because that's what I'm hearing.
16:38:58 yesterday we had a brief exchange where you identified our FC25 driver work as essential for inclusion in Ocata, to which I agreed.
16:39:10 a couple of additional contributors agreed to help out.
16:40:04 The issue is to build an image with a specific fix for cloud-init
16:40:24 Drago1: yes, I think one of the things we should cover at the PTG is whether upstream linux packagers would be willing to host a 3rd party CI for the drivers that use their distros.
16:40:34 The diskimage-builder process I was using doesn't work, so I'm looking into the imagefactory process
16:41:08 After that fix, I think we are good to go
16:41:58 strigazi: hopefully I'll have you an image with that in it to test sometime tonight
16:41:58 And the good thing is that the fix will go into Fedora and in a few days it will be included by default
16:42:12 * dustymabe goes now :)
16:42:12 strigazi: on the subject of testing the FC25 image... do you have enough help for that?
16:42:21 dustymabe: o/
16:42:24 (a few) is relative :)
16:42:38 adrian_otto yes, I'm good with that
16:43:21 I wanted to include another change for the driver field, but I don't know if we can do it; there I could use some help
16:43:50 strigazi: what change?
16:43:50 for the Ocata release, we indicate to the release team which release tag we would like them to use.
16:43:55 adrian_otto: did you mean to say FA25 (fedora atomic 25)?
16:44:07 jasond yes
16:44:14 one option is to go ahead and tag the client now, and wait for the FC25 work to land to tag the server
16:44:19 randallburt https://review.openstack.org/#/q/topic:driver-field
16:44:36 strigazi: gotcha. was looking at that earlier
16:44:45 ok, I will refer to it as FA25 from this point, sorry!
16:45:03 adrian_otto, for the FA25 patch we can do that
16:45:21 another option is we can tag both the server and client today, and then tag the server again once the FA25 patch has landed
16:45:45 adrian_otto, the result would be the same I guess
16:45:55 and we'll have the option of which tag to designate as the release based on how long it's taking
16:46:09 ok
16:46:13 what do you all think is best?
16:46:41 adrian_otto: are we getting pressure from the release team?
16:46:57 randallburt: a little, but it's still civilized.
16:47:13 I could consult the release team for guidance too.
16:47:45 adrian_otto: k, sounds best IMO. Tag now if they need it now and see if we can tag again later. If they are ok with waiting, we should IMO
16:47:54 what I'm tempted to do is just cut tags today, and send an email to the ML identifying the FA25 work as a planned FFE.
16:48:15 since it will not change any strings in magnum, that should not kick other teams off track.
16:48:33 go for it
16:48:36 but we have a grey area around new features
16:49:05 for example, will our docs change because of the FA25 work?
16:49:30 if so, we'll need to have a plan for dealing with that
16:49:31 I don't think so
16:49:36 adrian_otto: ideally, no.
This should be image and internal template updates IIRC
16:49:44 randallburt
16:49:47 randallburt yes
16:50:03 we will rename the image fedora-atomic-ocata anyway
16:50:12 ok, so we can do that
16:50:20 very good plan that
16:50:32 only the k8s and docker version numbers may change in a couple of places
16:50:33 Well it does change the k8s version, so if anything refers to that, it would
16:50:35 anyone have an objection to withholding all workflow+1 actions until after Friday, with the FA25 work as an exception?
16:50:36 strigazi: os-distro too?
16:50:42 randallburt no
16:51:18 when I looked last night, we had a couple of features that were partially merged because the gate tests were timing out.
16:51:24 strigazi: potentially problematic, but not in stock deployments so nbd I guess
16:51:30 I have not looked again this morning to see if that continued.
16:51:34 adrian_otto: I have a couple of patches related to Resource Quota that were not merged yet
16:51:55 ^^ random ci failures
16:51:58 vijendar: okay, that's one of the areas I'd like to finish up before we officially call the freeze
16:52:28 ok
16:52:32 so, all core reviewers: watch for the ML announcement from me for the indication of when no more workflow+1 votes should be made. Fair?
16:52:32 so the revised feature freeze date is asap, correct?
16:53:06 sounds fair
16:53:09 juggler: we have not entered freeze officially yet, but we are trying to merge the last few features that we planned for this release cycle
16:53:30 adrian_otto: there are also 2 patches regarding OSProfiler that got your +2 this morning, please review them
16:53:38 we planned to initiate the freeze yesterday, but have some good reasons to revisit that action.
16:53:45 adrian_otto: we can call feature freeze and still get patches that are in-flight in, can't we? We just say if they aren't on the list, they don't get looked at until after, correct?
16:53:52 hieulq_: yes, I'll do that, thanks.
16:54:01 adrian_otto: or do you mean tag the repo for an rc?
16:54:12 adrian_otto understood! :)
16:55:00 randallburt: the tag action is not clearly visible to all reviewers, so I'm planning to use the ML to make it clear what's happened and when, and what to do next.
16:55:49 adrian_otto: +1
16:56:01 ok, opening for open discussion now...
16:56:01 #topic Open Discussion
16:57:43 adrian_otto: gotcha
16:57:47 randallburt: the distinction we want to make during freeze is whether a patch is a bug fix or a feature. I want to get features in before cutting new tags.
16:58:06 adrian_otto: ah, ok. I misunderstood. thanks!
16:58:12 then after tags are cut, I want to limit all merges to clearly defined FFEs
16:58:22 then after freeze, we go back to merging everything again.
16:58:29 k
16:58:31 and we are not required to freeze
16:59:16 this is a discipline that we should get used to if we change release status from ...intermediary... to the normal one
16:59:40 all we are required to do in our current project designation is to tag releases and inform the release team what we want them to use.
17:00:13 if anything is foggy here, I'll answer all questions in #openstack-containers after we adjourn.
17:00:17 time up
17:00:19 Thanks everyone for attending today. Our next team meeting will be 2017-01-31 at 1600 UTC. See you all then!
17:00:23 #endmeeting