14:01:26 <dprince> #startmeeting tripleo
14:01:27 <openstack> Meeting started Tue Jan  5 14:01:26 2016 UTC and is due to finish in 60 minutes.  The chair is dprince. Information about MeetBot at http://wiki.debian.org/MeetBot.
14:01:28 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
14:01:30 <openstack> The meeting name has been set to 'tripleo'
14:02:18 <adarazs> hi there o/
14:02:24 <slagle> morning
14:02:28 <dprince> o/
14:02:41 <marios> o/ happy new year
14:03:50 <dprince> #topic agenda
14:03:50 <dprince> * bugs
14:03:51 <dprince> * Projects releases or stable backports
14:03:51 <dprince> * CI
14:03:51 <dprince> * Specs
14:03:53 <dprince> * one off agenda items
14:03:55 <dprince> * open discussion
14:04:14 <dprince> no one-off agenda items this week
14:04:28 <dprince> so perhaps a short meeting? we'll see...
14:06:35 <trown> o/
14:07:22 <dprince> okay, lets move on
14:07:24 <dprince> #topic bugs
14:07:57 <dprince> I don't think I filed a bug on this issue but it is a blocker for using Delorean trunk ATM
14:08:01 <dprince> https://review.openstack.org/#/c/257522/
14:08:12 <dprince> trown: did you have an RDO bug for that?
14:08:43 <trown> I don't think so, but there is https://launchpad.net/bugs/1525957
14:08:44 <openstack> Launchpad bug 1525957 in tripleo "nova neutronclient fails due to deprecated auth options" [Critical,In progress] - Assigned to Dan Prince (dan-prince)
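For context: the failure in bug 1525957 comes from Nova's Mitaka trunk dropping the old neutronclient admin_* auth options, which the puppet-nova fix above replaces with keystoneauth-style options. A hedged before/after sketch of the relevant nova.conf section, with illustrative values only (not the actual change):

```ini
[neutron]
# deprecated style that started failing against Nova trunk:
#   admin_username = neutron
#   admin_password = secret
#   admin_tenant_name = service
#   admin_auth_url = http://192.0.2.1:35357/v2.0
# keystoneauth style the fix moves to (exact plugin name may differ):
auth_type = password
auth_url = http://192.0.2.1:35357
username = neutron
password = secret
project_name = service
user_domain_name = Default
project_domain_name = Default
```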
14:08:46 <shardy> o/
14:08:55 <shardy> (sorry I'm late)
14:09:11 <dprince> trown: oh, right. I did file it :)
14:09:39 <dprince> anyways, so yeah, Nova Mitaka since mid-december requires this fix
14:09:44 <trown> dprince: I think there are other issues blocking delorean trunk now too :(. I have not had time to look into what else is failing, since RDO liberty is a bit of a mess because of our liberal backport policy
14:10:02 <dprince> trown: perhaps an etherpad to list them all again?
14:10:19 <dprince> trown: with this puppet-nova fix I had a working undercloud yesterday at least
14:10:33 <trown> dprince: ya, I think that is a good idea if we can start finding what else is broken
14:10:57 <trown> dprince: https://review.openstack.org/258140 is why I say it is broken even with the nova patch
14:11:20 <trown> s/nova/puppet-nova/
14:12:15 <trown> this bug is pretty critical for RDO https://launchpad.net/bugs/1530883
14:12:16 <openstack> Launchpad bug 1530883 in tripleo "[tripleoclient] bash deploy ramdisk 'deprecation' changed behavior" [Critical,In progress] - Assigned to John Trowbridge (trown)
14:12:18 <dprince> trown: gotcha, yeah I was using an older overcloud image I think
14:12:41 <shardy> trown: I'm aware of the deploy ramdisk thing, but if there are other issues related to the backport policy, it'd be good to enumerate them, perhaps in a ML thread, so we can refine the process if it's not working
14:13:11 <trown> I am ok with Dmitry's compromise rather than a pure revert, but really that is a pretty gross change in behavior for a "stable" branch
14:13:13 <dprince> trown: yeah, so just for stable... that is fine I think
14:13:17 <shardy> trown: the intention of the backport policy was to make life easier, not harder for downstream consumers of the branches, so if that's not working, let's figure out why :)
14:13:33 <dprince> trown: we probably shouldn't have backported the other one
14:13:33 <shardy> trown: I agree, that one was probably handled incorrectly
14:14:10 <slagle> can we just land the revert?
14:14:16 <slagle> https://review.openstack.org/#/c/263351/
14:14:21 <trown> shardy: ya, that patch broke the OPNFV Apex project, which is based on RDO, and they have a release this week... so bad timing
14:14:45 <trown> slagle: I am +1 to land the revert, Dmitry was opposed
14:14:45 <slagle> oh that's not the revert..
14:14:57 <shardy> slagle: that's not the revert, IIRC dtantsur says a pure revert will break other things
14:15:00 <slagle> we should land the revert and argue about the proper fix later
14:15:07 <trown> I am also fine with the compromise, but I have not had a chance to test it yet
14:15:22 <dprince> slagle: revert would be fine as well, I have no opinion really. If Dmitry prefers the compromise I'd defer to him I think
14:15:57 <dprince> trown: thanks for pointing this out
14:16:06 <dprince> any other bugs things?
14:16:08 <slagle> the revert is https://review.openstack.org/#/c/263289/
14:16:28 <slagle> dmitry doesn't point out anything there, other than bugs that were already existing, so we'd just go back to those
14:18:04 <dprince> slagle: I think dmitry's point is that the old deploy ramdisk was already deprecated, so we shouldn't have been using it for Liberty either
14:18:48 <slagle> ok
14:19:11 <dprince> anyways, the revert needs to pass CI. If it passes I'd be fine with either of these landing
14:19:13 <shardy> And that "grub-install does not work" is a critical bug for some folks which cannot be fixed without moving to the new ramdisk, IIUC
14:20:10 <shardy> +1 I'm fine with the consensus, but it'd be good to figure out how to agree a better process for next time
14:20:13 <dprince> okay, any other bugs?
14:20:17 <trown> shardy: it is even weirder, as IPA causes a grub-install issue for others...
14:20:43 <trown> so really the compromise is probably the best option
14:21:33 <shardy> trown: k, well that's passing CI so we can land that immediately if folks are OK with it
14:21:43 <bnemec> +1 from me.
14:22:03 <trown> shardy: I do not think anyone has tested it... but I can do that in the next hour
14:23:24 <dprince> trown: cool, thanks
14:23:43 <dprince> #topic Projects releases or stable backports
14:24:22 <dprince> shardy: any updates here this time?
14:24:47 <dprince> the bug we just spoke of was sort of related to fixing stable I guess for RDO...
14:25:21 <shardy> dprince: Not really other than to request folks keep up the reviews for stable branches
14:25:35 <marios> yeah many thanks for all the reviews the last couple days especially
14:25:46 <gfidente> and the porting :)
14:25:50 <dprince> shardy: cool, thanks
14:25:55 <marios> i added the original voters to the stable/liberty cherrypicks i made yesterday, so sorry for the review spam
14:25:59 <marios> but thanks for responding!
14:26:18 <gfidente> marios do you have a list of changes which produced conflicts?
14:26:18 <dprince> I have a question, do we have a date when we expect to stop backporting things?
14:26:32 <dprince> when do we draw the line...
14:26:39 <shardy> dprince: the way it works with other projects is you keep backporting things until a release goes EOL
14:26:43 <dprince> and just do, say security fixes
14:26:54 <shardy> but after a while it switches to critical security fixes only
14:27:00 <marios> dprince: asap. hopefully by next week. the trouble is for this first one we are trying to get downstream and upstream as reconciled as possible
14:27:03 <dprince> shardy: right, but most projects don't backport major features
14:27:30 <shardy> dprince: Ah, yeah, defining that would be good
14:27:41 <marios> dprince: yeah sorry i misread/interpreted what you wrote
14:27:45 <dprince> shardy: reason I ask is composable roles is going to make backports painful
14:27:49 <marios> dprince: (I thought you were asking when this was to be done by)
14:27:49 <shardy> I was kinda assuming anything other than the n-1'th release would be bugfix only
14:27:58 <shardy> but we've not got to that stage yet
14:28:00 <dprince> shardy: I'd like to be mindful of this... but also be able to move forwards with composable roles too
14:29:09 <dprince> essentially, I'm wondering when is the best "window" to proceed with internal architecture changes to t-h-t that wouldn't impact backports
14:29:36 <dprince> these changes wouldn't break upgrades mind you... (they are mostly internal)
14:29:46 <dprince> just make backports more painful
14:29:50 <slagle> i know that ipv6 is something that will likely need to be backported
14:30:01 <slagle> but that's the last major thing that hasn't yet landed that i can think of
14:30:08 <dprince> slagle: okay, thanks
14:30:15 <slagle> so maybe after that?
14:30:52 <dprince> okay, something to keep in mind. We don't have to have a hard answer for it this week I think
14:31:49 <dprince> #topic CI
14:32:18 <dprince> things seem to have been running mostly fine the past few weeks, I think
14:32:20 <dprince> http://tripleo.org/cistatus.html
14:32:49 <dprince> I've got no updates here since last meeting...
14:32:56 <dprince> anyone else?
14:33:25 <shardy> When I get the HA job to pass it'd be good to land https://review.openstack.org/#/c/250498/
14:33:35 <shardy> it adds CI coverage for deleting an overcloud
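The delete-coverage step boils down to roughly the following (a sketch using that era's heat CLI; the actual review may do it differently):

```bash
#!/bin/bash
# delete the overcloud stack, then poll until it is gone (illustrative sketch)
heat stack-delete overcloud
for i in $(seq 1 30); do
    heat stack-show overcloud > /dev/null 2>&1 || exit 0  # stack gone: success
    sleep 10
done
echo "overcloud delete timed out" >&2
exit 1
```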
14:34:01 <marios> dprince: one update is i'd like to land the pingtest and then add it to ci like https://review.openstack.org/#/c/262028/
14:34:34 <marios> dprince: so comments appreciated at https://review.openstack.org/#/c/241167/ (esp. if we want 2 vms there), can bring up later in open discussion if time/interest
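The pingtest amounts to deploying a tiny Heat template against the overcloud and pinging the resulting server. A minimal single-VM sketch, assuming a cirros image and a tenant network already exist (not the template under review):

```yaml
heat_template_version: 2013-05-23
description: minimal overcloud pingtest sketch (illustrative)
resources:
  server1:
    type: OS::Nova::Server
    properties:
      image: cirros             # assumes a cirros image was uploaded
      flavor: m1.tiny
      networks:
        - network: default-net  # hypothetical tenant network name
outputs:
  server1_ip:
    description: address for the CI script to ping
    value: { get_attr: [server1, first_address] }
```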
14:36:17 <dprince> shardy: how much wall time does deleting add? A few minutes perhaps?
14:36:43 <shardy> dprince: in my local environment it's not even a minute
14:37:02 <dprince> shardy: okay, great then. I say lets do it
14:37:09 <dprince> marios: ++ on the pingtest
14:37:14 <trown> +1 to some sort of validation... especially for the stable branch
14:37:41 <dprince> marios: probably 1 vm is fine for starters I think
14:37:53 <marios> dprince: sure, at this point it is a simple tweak of the template
14:37:56 <marios> so whatever
14:38:00 <shardy> dprince: we should perhaps annotate toci_instack to time each step we test, but I don't think it's a significant increase
14:38:36 <dprince> shardy: yeah, it would be nice if we had a mechanism to track metrics for each CI/test step
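A minimal sketch of the kind of per-step timing being discussed, assuming a plain log file rather than any particular metrics backend (function and file names are hypothetical):

```bash
# wrap each CI step to record its wall time and exit code
step_time() {
    local name=$1; shift
    local start=$(date +%s)
    "$@"
    local rc=$?
    echo "${name} $(( $(date +%s) - start ))s rc=${rc}" >> step-times.log
    return $rc
}

step_time overcloud-delete heat stack-delete overcloud
```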
14:40:15 <dprince> #topic Specs
14:42:05 <dprince> perhaps not a lot of spec updates over the break
14:42:27 <dprince> I need to make changes to a few of mine anyways per comments
14:42:50 <dprince> might we go ahead and land: https://review.openstack.org/#/c/233634/
14:43:23 <slagle> yes, i think so, based on giulio's response
14:43:44 <dprince> also, I think this is close as well: https://review.openstack.org/#/c/245309/
14:45:16 <akrivoka> the GUI spec is waiting for a final +2/A https://review.openstack.org/239056
14:46:23 <bnemec> Is the GUI being developed upstream now then?
14:46:35 <bnemec> Because I'm -2 on merging a spec that won't be implemented here.
14:47:49 <akrivoka> we're waiting for the spec to be approved, then we'll move it upstream
14:48:06 <akrivoka> for now it's here https://github.com/rdo-management/rdo-director-ui
14:49:26 <apetrich> bnemec, and CIed on the midstream gates
14:49:48 <dprince> akrivoka: cool, thanks for bringing this up
14:51:15 <dprince> #topic open discussion
14:51:31 <dprince> a few minutes left, anyone have anything to bring up?
14:51:32 * bnemec would dearly love to get undercloud ssl merged
14:51:36 <bnemec> #link https://review.openstack.org/#/c/221885/
14:51:58 <slagle> i had one thing...what does anyone think about moving the orc scripts from t-i-e (tripleo-image-elements) into the actual codebases they're for?
14:52:07 <dprince> bnemec: one question on that
14:52:36 <slagle> for example, in regards to https://review.openstack.org/#/c/213748/, having 20-os-net-config be part of os-net-config and installed via the os-net-config package
14:52:49 <dprince> bnemec: as some of the services move to apache wsgi... long term would we perhaps be better off using Apache for SSL and avoid HA proxy
14:52:56 <slagle> there have been a handful of instances of having to update the orc scripts on deployed images lately, and there's no way to do that right now via packaging
14:53:15 <slagle> that's what i'm trying to solve anyway
14:53:34 <slagle> it could also allow us to fully deprecate t-i-e once we get everything moved
14:53:57 <shardy> slagle: personally I'd like to move away from the orc scripts altogether long term, e.g. have os-net-config invoked directly via a heat hook or script
14:54:06 <dprince> slagle: wrt 20-os-net-config, I would like that to move into a t-h-t script deployment
14:54:13 <dprince> slagle: on the list of things to do....
14:54:19 <shardy> stuff like the signalling for error path when using orc is still broken, so we could fix all that by moving away from it
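What's being described would look roughly like this in t-h-t, as a sketch under assumptions (resource names are hypothetical, not the actual change):

```yaml
# run os-net-config from a Heat software deployment instead of an o-r-c script
NetworkConfig:
  type: OS::Heat::SoftwareConfig
  properties:
    group: script
    config: |
      #!/bin/bash
      set -eux
      os-net-config -c /etc/os-net-config/config.json

NetworkDeployment:
  type: OS::Heat::SoftwareDeployment
  properties:
    config: {get_resource: NetworkConfig}
    server: {get_resource: Controller}  # hypothetical server resource
```

A side effect is the error signalling shardy mentions: a non-zero exit status from the script fails the deployment resource, so Heat surfaces the error instead of it being lost inside o-r-c.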
14:54:20 <slagle> ok, well, that's just one example
14:54:53 <bnemec> dprince: That's fine long term, but we need this now.  Also, the overcloud ssl support is using haproxy already so we'll have to migrate at some point anyway if we decide to use Apache instead.
14:54:55 <slagle> but maybe heat hook scripts is the answer for all of them?
14:55:08 <shardy> the main thing we're missing is a way to use a StructuredDeployment to lay down some json files, which could be a new heat hook
14:55:13 <slagle> 55-heat-config is another example
14:55:16 <dprince> bnemec: I was thinking undercloud only here, just to keep it lighter
14:55:21 <slagle> i don't know how we'd get that updated via heat itself
14:55:38 <bnemec> dprince: Then we have two separate implementations of ssl.
14:55:49 <dprince> bnemec: not trying to hold up your efforts here or anything, just saying that HA proxy in the undercloud seems weird I think
14:56:04 <dprince> bnemec: since it is just a single node...
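For context on the tradeoff: the undercloud ssl review terminates TLS in haproxy in front of the native services, roughly like this (a hedged sketch with illustrative addresses and paths, not the actual review):

```
frontend keystone-public-ssl
    bind 192.0.2.1:13000 ssl crt /etc/pki/tls/private/undercloud.pem
    default_backend keystone-public

backend keystone-public
    server undercloud 127.0.0.1:5000 check
```

dprince's alternative would drop the extra daemon by terminating SSL in the Apache vhost once the services run under mod_wsgi.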
14:56:27 <shardy> slagle: If moving them allows update via packaging then +1, everything should be owned by a package one way or another
14:56:50 <shardy> can't we install 55-heat-config e.g via the heat-templates package, and just link it from the o-r-c directory?
14:56:55 <shardy> or something similar
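i.e. something along these lines, with the package owning the file and the o-r-c hook directory just holding a link (paths are assumptions, not a tested layout):

```bash
# ship 55-heat-config via the heat-templates package, then link it into the
# os-refresh-config hook directory so updates arrive through packaging
ln -s /usr/share/openstack-heat-templates/software-config/elements/heat-config/os-refresh-config/configure.d/55-heat-config \
      /usr/libexec/os-refresh-config/configure.d/55-heat-config
```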
14:57:40 <slagle> probably, there are different ways to do it
14:57:43 <bnemec> dprince: I could swear you suggested using haproxy in the first place. :-P
14:58:36 <dprince> bnemec: perhaps I did, and what you have done is fine. I've just been looking at our undercloud memory footprint grow and would like to combine things where possible
14:58:52 <dprince> bnemec: run 'ps_mem' in your undercloud sometime ;)
14:58:56 <bnemec> dprince: The thing is that we're not running any of the services behind apache right now, so if we go that way it's a huge amount of work for something that people are requesting right now.
14:59:13 <dprince> bnemec: we will for keystone soon, perhaps nova-api next...
14:59:22 <bnemec> dprince: Oh, I know.  I never run less than an 8 gig vm for my undercloud. :-)
14:59:45 <bnemec> dprince: Sure, but what's the timeframe on that?  We can't wait six months for this.
15:00:00 <dprince> bnemec: agree, this was just an idea. What if?
15:00:21 <dprince> okay, we are out of time
15:00:23 <bnemec> dprince: I'm totally fine with revisiting the implementation when everything is in apache.
15:00:24 <dprince> thanks everyone
15:00:40 <dprince> #endmeeting