14:01:25 <dprince> #startmeeting tripleo
14:01:25 <trown> o/
14:01:25 <openstack> Meeting started Tue Jan 12 14:01:25 2016 UTC and is due to finish in 60 minutes.  The chair is dprince. Information about MeetBot at http://wiki.debian.org/MeetBot.
14:01:27 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
14:01:30 <openstack> The meeting name has been set to 'tripleo'
14:01:30 <adarazs> o/
14:01:30 <jdob> o/
14:01:37 <eliqiao> o/
14:01:40 <shardy> o/
14:01:48 <akrivoka> hello
14:01:50 <dprince> hi everyone
14:02:07 <tzumainn> hiya!
14:02:08 <derekh> hi
14:03:02 <dprince> #topic agenda
14:03:02 <dprince> * bugs
14:03:02 <dprince> * Projects releases or stable backports
14:03:02 <dprince> * CI
14:03:02 <dprince> * Specs
14:03:05 <dprince> * one off agenda items
14:03:07 <dprince> * open discussion
14:03:23 <dprince> any new one-off items to discuss this week?
14:03:44 <dprince> say them now... or just wait until open discussion :)
14:05:45 <dprince> #topic bugs
14:06:18 <dprince> I saw Ben filed this one this week: https://bugs.launchpad.net/tripleo/+bug/1532352
14:06:20 <openstack> Launchpad bug 1532352 in tripleo "Glance on the overcloud with Swift backend is broken" [Critical,Fix released] - Assigned to Ben Nemec (bnemec)
14:06:28 <dprince> nice to have that fixed again
14:07:30 <trown> is swift backend the default?
14:07:38 <dprince> trown: for the overcloud yes
14:07:57 <dprince> trown: and perhaps that explains why this was failing too https://review.openstack.org/#/c/248492/
14:08:06 <trown> dprince: cool, so again points to needing post install validation
14:08:15 <dprince> trown: Having swift as a default in the undercloud would make sense too I think
14:08:43 <trown> indeed
14:08:52 <dprince> trown: especially w/ IPA which allows you to go directly from Swift => Deployment Ramdisk
14:09:29 <dprince> trown: post install validation is critical
14:09:41 <dprince> trown: we've been without it since we switched to instack :(
14:10:42 <shardy> https://review.openstack.org/#/c/241167/ is the patch marios is working on aiming to reinstate it
14:10:59 <dprince> trown: any other bugs, RDO blockers to make note of?
14:11:25 <d0ugal> Hey all
14:11:38 <trown> dprince: not right now... I was not able to make time to suss out what is needed to move up the mitaka pin
14:11:48 <trown> dprince: planning on working on that today
14:11:54 <dprince> trown: okay, thanks
14:12:16 <dprince> one more bug I'd mention, sort of related to TripleO w/ heatclient
14:12:21 <trown> liberty is good... minus issues with IPA that I need to track down
14:12:23 <dprince> https://bugs.launchpad.net/python-heatclient/+bug/1532326
14:12:24 <openstack> Launchpad bug 1532326 in python-heatclient "--template-object doesn't support nested stacks" [High,Fix released] - Assigned to Dan Prince (dan-prince)
14:12:48 <dprince> I was tinkering w/ heatclient to get it to create a stack directly from a Swift container
14:13:34 <dprince> related to some of the workflow stuff we've got in tripleo-common... would like feedback on whether we should take this approach rather than downloading files in our TripleO libraries first
14:14:30 <dprince> anyways, that is probably not a bugs discussion so let's move on
14:14:52 <dprince> #topic Projects releases or stable backports
14:15:17 <dprince> shardy: should we change this topic to be just "stable branch status updates" or something?
14:15:44 <shardy> dprince: I guess we still have projects which need releases, so seems OK to me?
14:15:57 <shardy> but whatever is fine with me :)
14:16:12 <dprince> shardy: okay, we can leave as is I guess
14:16:20 <shardy> nothing to report re stable except the CI outage caused by a nova regression, which slagle fixed via a pin/revert
14:16:27 <dprince> still lots of stable stuff getting backported
14:16:40 <dprince> I was actually surprised to see the Mistral undercloud stuff in the backports queue
14:16:45 <dprince> is that really needed?
14:17:25 <shardy> bnemec: ^^
14:18:05 <dprince> shardy: thanks
14:18:07 <slagle> shardy: the removal of the pin is up for review and has passed ci
14:18:15 <slagle> the nova revert landed over the weekend
14:18:22 <dprince> I can follow up w/ bnemec on this later perhaps
14:18:23 <shardy> slagle: ack, thanks for the update
14:18:53 <shardy> dprince: one thing you mentioned last week was the idea of a feature-freeze for stable, moving more to a bugfix mode, so we can make larger changes to master
14:19:02 <shardy> is that worth a ML thread to gain consensus?
14:19:46 <dprince> shardy: yes, I'm just surprised to see almost halfway through Mitaka we are backporting almost everything it seems
14:19:50 <trown> slagle: shardy, ya I can confirm that RDO liberty delorean is back passing
14:20:24 <dprince> shardy: and features like Mistral are really meant for the next release I think... they aren't required and default to disabled for now
14:20:41 <trown> do we even have mistral packaged for liberty?
14:20:45 <trown> if not it is too late
14:20:48 <shardy> dprince: yeah, hopefully in future releases we'll be able to switch out of backport-features mode much faster, and adopt a model closer to other projects
14:20:49 <dprince> shardy: would you like me to start the thread? or would you?
14:20:57 <shardy> dprince: I can do it
14:21:05 <slagle> i already chatted with ben about the mistral backport a bit, we dont think it's needed
14:21:12 <slagle> it just got picked up automatically
14:21:18 <dprince> shardy: cool
14:21:20 <slagle> i think he was going to have a go at removing it from that series
14:21:53 <dprince> slagle: ack, automation would explain it
14:22:15 <dprince> okay, any other stable things to discuss?
14:23:33 <dprince> #topic CI
14:24:02 <shardy> I wanted to ask about network isolation - do we need any changes to the CI rack to enable that on one job?
14:24:09 <dprince> There was some discussion on #tripleo this week about increasing the CPU count for the testenv instances I think
14:24:13 <dprince> slagle: ^^
14:24:19 <slagle> oh, yes
14:24:20 <shardy> I was hoping we could just enable it via an environment file for one of them, so we get some coverage
14:24:25 <dprince> This is perhaps causing instability in the HA jobs?
14:24:37 <dprince> derekh: thoughts?
14:24:50 <slagle> the theory is that the HA job fails so much b/c the seed vm in the testenvs has only 1 vcpu and 4gb ram
14:25:31 <dprince> shardy: I sort of lean towards just rebuilding the testenvs I think... which would essentially give you a new environment file anyways
14:25:33 <slagle> sbaker has recommended to always use at least 2 vcpu's for heat-engine
14:25:59 <derekh> dprince: slagle it's technically possible, we would need to rebuild the test env hosts,
14:26:05 <trown> slagle: I have had way better success in RDOCI since switching to a beefy undercloud, I am using 16G and 4 cpus, but maybe that is overkill
14:26:05 <shardy> slagle: there is also https://review.openstack.org/#/c/259172/ which ensures we always have more than 1 worker, even with 1 CPU
14:26:25 <derekh> any changes in the test env would be for all jobs not just one type
14:26:43 <shardy> >=2 CPU's is probably good, but AIUI the main issue was throwing lots of stacks at a single engine process
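For context, the heat-engine knob behind this exchange is num_engine_workers in heat.conf on the undercloud. A minimal sketch of forcing it to at least two workers, assuming the usual /etc/heat/heat.conf path; this is illustrative only, not the contents of the review shardy links above.

    # Illustrative sketch: ensure heat-engine runs at least two workers on
    # the undercloud. Path and minimum are assumptions.
    import configparser

    HEAT_CONF = '/etc/heat/heat.conf'
    MIN_WORKERS = 2

    conf = configparser.ConfigParser(interpolation=None)
    conf.read(HEAT_CONF)

    current = conf['DEFAULT'].getint('num_engine_workers', fallback=1)
    if current < MIN_WORKERS:
        conf['DEFAULT']['num_engine_workers'] = str(MIN_WORKERS)
        # Note: rewriting the file drops comments; fine for a throwaway tweak.
        with open(HEAT_CONF, 'w') as f:
            conf.write(f)
    # heat-engine then needs a restart (e.g. via systemctl) to pick this up.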
14:26:58 <slagle> derekh: I didn't know if we could just script a bunch of virsh edits or some other manual hack for now
14:27:03 <dprince> derekh: yep, I think 2 CPU's won't hurt the other jobs anyways
14:27:09 <derekh> we also have to be careful not to overcommit our host CPUs
14:27:35 <dprince> derekh: we could get fancy, if we are close to overcommitting
14:27:55 <slagle> shardy: ok cool, that might help as well.
14:27:56 <dprince> derekh: and have each environment contain say 3 "multi-CPU" instances
14:28:09 <derekh> slagle: not easily, but I'm sure it would be possible with a bit of trickery
14:28:10 <dprince> derekh: and a few single-CPU instances for the computes
14:28:36 <slagle> we only need the undercloud to have >1 vcpus
14:28:38 <dprince> derekh: and use either manual tagging, or introspection to select the right ones via flavors
14:28:51 <derekh> I'm not sure if we are close to overcommitting, just something we gotta be aware of
14:28:52 <slagle> it's the undercloud's heat that is the issue
14:29:26 <slagle> so we could just bump the seed_* vm's
14:29:29 <dprince> yeah, if it is just that instance I feel like we can just do it to the seed
14:29:29 <derekh> tbh, I would kind of be tempted to prioritize OVB over this, then we can have any flavors of VM instances we want
14:30:18 <dprince> derekh: yeah, we are definitely due for a rebuild
14:30:42 <dprince> okay, perhaps we ponder these things a bit and sync up on #tripleo later?
14:30:45 <derekh> just upping the CPU for the seed would be doable fairly easily, maybe a few hours' work
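A rough sketch of the sort of one-off virsh scripting slagle and derekh are describing for bumping the seed VMs' vCPUs on a testenv host; the domain naming, target count, and restart handling are assumptions rather than the actual CI layout.

    # Hypothetical helper: raise the vCPU count of every libvirt domain whose
    # name starts with "seed" to 2, persisting the change in the domain XML.
    # Run on each testenv host; domains need a stop/start to pick it up.
    import subprocess

    TARGET_VCPUS = '2'

    def virsh(*args):
        return subprocess.run(('virsh',) + args, check=True,
                              capture_output=True, text=True)

    domains = virsh('list', '--all', '--name').stdout.split()
    for dom in domains:
        if not dom.startswith('seed'):
            continue
        # Raise the maximum first, then the active count, in the persistent config.
        virsh('setvcpus', dom, TARGET_VCPUS, '--maximum', '--config')
        virsh('setvcpus', dom, TARGET_VCPUS, '--config')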
14:30:56 <derekh> ok
14:31:19 <dprince> It would be nice to see the HA jobs stabilize if it doesn't cost us too much though
14:31:46 <dprince> any other CI things?
14:32:21 <dprince> #topic Specs
14:33:09 <d0ugal> Is this spec ready to be merged? https://review.openstack.org/239056
14:33:40 <dprince> d0ugal: sure, I will land it if nobody has objections I think
14:33:57 <slagle> it is as far as I'm concerned :)
14:34:01 <dprince> d0ugal: the only thing unclear about it would be which API it talks to: TripleO API vs. say Mistral
14:34:07 <slagle> there are no objections on the spec, i say land it :)
14:34:13 <dprince> d0ugal: but I don't think that blocks this spec...
14:34:34 <d0ugal> dprince: agreed
14:34:47 <d0ugal> akrivoka: ^
14:35:54 <dprince> d0ugal: +A
14:35:56 <akrivoka> great, thanks d0ugal for bringing that up
14:36:10 <akrivoka> thanks dprince :)
14:37:36 <dprince> related to the TripleO API spec I started a new mailing list thread about Mistral vs. TripleO API
14:37:40 <dprince> #link  http://lists.openstack.org/pipermail/openstack-dev/2016-January/083757.html
14:38:15 <d0ugal> oh, I missed that. Will need to read and reply - thanks
14:38:29 <dprince> d0ugal: yes, please do
14:40:58 <dprince> #topic open discussion
14:41:19 <dprince> plenty of time to talk about whatever this week I think
14:41:54 <dprince> I would put a plug in to review the Sahara patches
14:41:55 <d0ugal> I was going to talk about the swift/workflow/API stuff, but I'll just catch up with the new thread and reply to it
14:41:56 <dprince> https://review.openstack.org/#/c/220863/
14:42:27 <dprince> Ethan has had sahara and trove patches posted for some time... so it would be nice to get to these
14:42:45 <dprince> d0ugal: we can talk about swift/workflow/API now if you want too, your call
14:45:11 <dprince> d0ugal, jtomasek: I'm working on prototyping a few Mistral actions as a demo of how a swift container -> heat deploy could work
14:45:31 <dprince> seems there are some questions about how that might work... so I'm tinkering
14:45:58 <d0ugal> dprince: The main new thought is: do we need to store the templates at all? Users could use the defaults in /usr/share/... or they could give us a git repo that we can deploy
14:46:25 <d0ugal> dprince: how they give us templates could be pluggable, supporting various sources.
14:46:40 <dprince> d0ugal: Perhaps. I think storing them makes sense in that it would allow end users to upload custom versions of templates
14:46:55 <d0ugal> dprince: Basically this would offload the versioning etc. to whatever the user is using
14:47:07 <dprince> d0ugal: and creating a Mistral action that uploads from /usr/share (on the undercloud) if no templates have been provided would be fairly easy I think
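Not dprince's actual prototype, just a sketch of the shape such a Mistral action could take, assuming the Mitaka-era mistral.actions.base.Action interface; the container, stack, template path, and auth placeholders are hypothetical.

    # Hypothetical sketch: upload the stock templates from /usr/share if the
    # container is empty, then create a Heat stack from the container contents.
    import os

    from heatclient import client as heat_client
    from mistral.actions import base
    from swiftclient import client as swift_client

    DEFAULT_TEMPLATES = '/usr/share/openstack-tripleo-heat-templates'


    class DeployFromContainer(base.Action):
        def __init__(self, container='overcloud-templates', stack='overcloud'):
            self.container = container
            self.stack = stack

        def run(self):
            # A real action would build these clients from the action context;
            # the auth details are elided here.
            swift = swift_client.Connection(preauthurl='<swift-url>',
                                            preauthtoken='<token>')
            heat = heat_client.Client('1', endpoint='<heat-url>',
                                      token='<token>')

            swift.put_container(self.container)  # idempotent
            _, objects = swift.get_container(self.container)
            if not objects:
                # Fall back to the packaged templates on the undercloud.
                for root, _, names in os.walk(DEFAULT_TEMPLATES):
                    for name in names:
                        path = os.path.join(root, name)
                        rel = os.path.relpath(path, DEFAULT_TEMPLATES)
                        with open(path) as f:
                            swift.put_object(self.container, rel, f.read())
                _, objects = swift.get_container(self.container)

            # Build Heat's "files" map, treating overcloud.yaml as the root template.
            files, template = {}, None
            for obj in objects:
                _, contents = swift.get_object(self.container, obj['name'])
                if obj['name'] == 'overcloud.yaml':
                    template = contents
                else:
                    files[obj['name']] = contents

            return heat.stacks.create(stack_name=self.stack,
                                      template=template, files=files)

        def test(self):
            # Dry-run hook required by some Mistral versions.
            return None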
14:47:11 <d0ugal> dprince: but wouldn't most users advanced enough to have custom templates store them in git or similar anyway?
14:47:49 <dprince> d0ugal: the git solution still seems a bit fancy to me, in that heatclient doesn't support deploying directly from Git
14:47:53 <d0ugal> and I still don't understand how some of the swift issues are going to be resolved, but I guess I need to read your thread.
14:48:00 <jdob> the biggest reason for storing them is going to be for upgrades
14:48:01 <dprince> d0ugal: but it does (almost) support deploying directly from Swift
14:48:12 <jdob> you're going to need more than one version of the templates available at a time
14:48:26 <d0ugal> jdob: ah, I didn't know about that.
14:48:31 <d0ugal> jtomasek: ^
14:48:36 <dprince> d0ugal: swifts issues?
14:49:06 <d0ugal> dprince: Mostly what was covered in the previous thread, so I'll not repeat them here, but I'll add them to your thread if relevant
14:49:50 <dprince> d0ugal: cool. I've taken to trying to get heatclient to deploy directly from Swift
14:50:02 <dprince> d0ugal: this works for me for example: http://paste.openstack.org/show/483595/
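A hypothetical reconstruction of the kind of call that paste demonstrates (the paste itself is not reproduced here): creating a stack from a template object stored in Swift via heatclient's --template-object option. The endpoint, container, and object names are made up.

    import subprocess

    # Hypothetical Swift object URL; heatclient fetches it with the caller's
    # token and, per bug 1532326 above, now supports nested templates as well.
    template_object = ('http://192.0.2.1:8080/v1/AUTH_tenant/'
                       'overcloud-templates/overcloud.yaml')

    subprocess.run(['heat', 'stack-create', 'overcloud',
                    '--template-object', template_object],
                   check=True)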
14:50:05 <tzumainn> I think the real question here is whether it makes sense for us to make template storage independent of the deployment process, so that we don't force the user to store the templates in a certain way
14:50:36 <jdob> just make sure it's developer friendly :)
14:50:53 <jdob> having to go through a shit ton of steps to get a one line change to the templates in a usable place really pisses people off :)
14:51:00 <d0ugal> dprince: Right, but that still doesn't guarantee a consistent view of all the files does it?
14:51:01 <jtomasek> o/
14:51:20 <tzumainn> jdob, right, so I think we're trying to get to a place where we say, "store the templates however you want, we won't deal with it"
14:51:42 <jdob> IMO, what you more want is to say "Here are some options for storing them"
14:51:51 <jdob> and provide adapters to different things
14:51:51 <tzumainn> and behind the scenes, we allow them to specify, say, the file system or git, with potential expansions
14:52:02 <tzumainn> tomato, tomato
14:52:19 <tzumainn> my wording makes it sound like we're being more accommodating : )
14:52:26 <d0ugal> jdob: We could store templates only for historical purposes
14:52:28 <dprince> tzumainn: we can add to our storage options long term
14:52:29 <jdob> ya, ok; for some reason, I read yours as "we're not doing anything"
14:52:33 <jdob> but we're saying the same thing
14:52:40 <dprince> tzumainn: but I'd like heatclient to be the thing that supports the various backends
14:52:43 <d0ugal> jdob: so, we deploy from git (for example) and always keep a copy of what was just deployed
14:52:43 <dprince> tzumainn: or perhaps Heat
14:52:52 <d0ugal> jdob: The only way to update that copy is to deploy again
14:52:56 <dprince> tzumainn: not some hand-rolled TripleO mechanism we create I think
14:52:59 <jdob> Heat, not heat client IMO
14:53:06 <jdob> we don't want that functionality tied to a python client
14:53:28 <jdob> d0ugal: i'd be curious to see dev feedback on a git option; it seems to work for the OpenShift cases
14:53:36 <dprince> jdob: right, well by wrapping a Client w/ a Mistral action we essentially give ourselves an API
14:53:39 <jdob> people not hating the idea of committing every small change to see what happens
14:53:49 <dprince> jdob: so we can use existing clients... and get an API for free
14:53:53 <d0ugal> jdob: Yeah, I was saying to jtomasek that we need user feedback for this :)
14:54:37 <dprince> at the end of the day, the new workflow will be no different than using --templates option we have today
14:54:45 <dprince> totally transparent in fact
14:54:50 <jdob> dprince: apples and oranges; I'm not saying mistral or not, i'm saying we shouldn't add new functionality like that that's not available to non-heat client users, much less tripleo users
14:55:09 <jtomasek> d0ugal, jdob: I have the questions ready for john browning when he gets back from POC
14:55:13 <dprince> jdob: right, I'm thinking on the same lines
14:55:21 <jdob> cool
14:55:25 <jdob> jtomasek: i'm not sure who that is :)
14:55:33 <dprince> jdob: just saying that adding support to heatclient for deploying from Swift is really easy
14:55:43 <jdob> and i'm saying that's the wrong place to add it
14:55:47 <dprince> jdob: we already have the --template-object option for quite a while
14:55:49 <jtomasek> jdob: SA who helps us with testing the GUI
14:55:55 <jdob> jtomasek: ah
14:55:59 <dprince> jdob: I'm saying we use it there now, and it doesn't matter to us
14:56:09 <dprince> jdob: and if Heat gets the capability long term then we can switch to it
14:56:18 <dprince> jdob: we can't wait for Heat to add this to its API now
14:56:28 <dprince> jdob: it forces us into a really bad place I think
14:57:40 <dprince> jdob: I would like it in the API I think though, just not willing to wait for that especially when I think it doesn't matter for us right now
14:57:51 <dprince> jdob: see my "alternatives" comments here: https://review.openstack.org/#/c/265478/2/specs/mitaka/heatclient-environment-object.rst
14:58:30 <jdob> i might be missing something, isn't this just for your "tinkering"? or is this something super time sensitive?
14:58:45 <jdob> actually, we're pretty grossly off topic from the meeting, we can shelve that if you have other stuff to discuss
14:58:53 <dprince> jdob: to me it is related to the TripleO API conversation
14:59:12 <dprince> jdob: if we can't create a deployment workflow without TripleO API and or heat API changes we are in trouble
14:59:32 <dprince> jdob: nothing else to discuss
14:59:39 <dprince> and we are out of time
14:59:40 <tzumainn> why are client changes better than api changes... ?
14:59:41 <jdob> isn't that what caused merge.py in the first place?
14:59:51 <dprince> tzumainn: not better, faster
15:00:08 <dprince> jdob: totally different than merge.py
15:00:11 <tzumainn> dprince, I confess that I'm a bit leery of the 'faster' solution
15:00:21 <dprince> time's up here, more in #tripleo
15:00:26 <dprince> #endmeeting