14:01:03 #startmeeting tripleo
14:01:09 o/
14:01:10 Meeting started Tue Nov 24 14:01:03 2015 UTC and is due to finish in 60 minutes. The chair is dprince. Information about MeetBot at http://wiki.debian.org/MeetBot.
14:01:10 o/
14:01:11 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
14:01:13 The meeting name has been set to 'tripleo'
14:01:53 hello o/
14:02:11 hi all
14:02:22 \o
14:02:24 o/
14:02:40 o|
14:02:53 o/
14:03:08 hiya \o
14:03:23 hello everyone
14:03:32 #topic agenda
14:03:33 * bugs
14:03:33 * Projects releases or stable backports
14:03:33 * CI
14:03:33 * Specs
14:03:35 * Review Priorities: https://etherpad.openstack.org/p/tripleo-review-priorities
14:03:38 * one-off agenda items
14:03:40 * open discussion
14:04:01 anything else to add to the agenda this week?
14:04:14 * dprince will try to make sure we get to open discussion sooner this week
14:04:21 dprince: I added a one-off item to the wiki
14:04:29 re stable branch approval process
14:04:57 * Stable branch approval policy (shardy)
14:05:09 shardy: ack, sorry missed it. (wiki is confusing sometimes)
14:05:17 dprince: np :)
14:05:23 okay, let's go
14:05:29 #topic bugs
14:06:54 looks like we had a breakage or two but things were fixed quickly last week
14:07:00 anything to bring up of note?
14:07:42 o/
14:07:59 please don't use 'packagename.openstack.common' outside of
14:08:02 packagename
14:08:06 :)
14:08:20 dtantsur: ack, we fixed that in tripleoclient this week
14:08:34 yeah, just bringing visibility for people
14:09:06 cool. let's move on... people can chime in later if needed on things
14:09:34 #topic Stable branch approval policy
14:09:47 shardy: you want to drive this one?
14:09:49 #link https://wiki.openstack.org/wiki/StableBranch#Gerrit
14:10:28 So, I wanted to suggest we (within common sense) adopt the normal stable branch process, where for simple backports one reviewer is enough, if the proposer is also a core on the project
14:10:53 e.g if it's just a simple clean backport
14:10:53 shardy: makes sense to me, ++
14:10:59 seems sane
14:11:09 o/
14:11:27 the CI still isn't quite there, but when it is, I'd like to make the backport process as low overhead as possible ;)
14:12:19 sounds good to me
14:12:23 +1
14:13:16 Ok then, that is all, thanks!
14:13:27 assuming the master patch is already merged
14:13:41 slagle: yup
14:13:47 ok :)
14:13:59 and anything with conflicts should probably have more than one reviewer ideally
14:14:18 anything else this week related to stable branches, etc?
14:14:20 e.g leave the Conflicts: line in the commit message
14:14:29 #topic Projects releases or stable backports
14:14:43 very similar to what shardy was just speaking about...
14:14:44 dprince: no, other than sorry I still don't have the CI working, it's nearly there...
14:15:06 i released os-cloud-config yesterday to address the blocking issue
14:15:48 the issue being the ironicclient import
14:15:49 slagle: this was the openstack.common exception issue?
14:15:52 yea
14:15:56 cool
14:16:24 okay, I think we are good here then
14:16:27 #topic CI
14:17:16 CI question: do you folks plan on enabling tripleo-ci for inspector and IPA?
14:17:44 dtantsur: you mean to actually run on those projects?
14:17:48 yep
14:17:58 * derekh has nothing much on CI this week, spending time elsewhere lately
14:18:04 dtantsur: we could I guess. probably not a bad idea
14:18:45 dtantsur: My view would be we should if they agree to treat it as voting CI
14:19:16 derekh, voting = as it is now?
because it doesn't give V-1 now anywhere
14:19:58 also do we have something on ironic as well?
14:20:04 dtantsur: in tripleo we don't (unless there is a good reason) approve unless we got passing CI
14:20:08 dtantsur: same thing
14:20:13 with ironic
14:20:26 could be a good thing to bring to the next meeting
14:20:39 I remember tripleo-ci was pretty useful on ironic some time ago..
14:20:48 if we add our job to ironic then they must not merge without a tripleo pass
14:20:57 * bnemec still wants tripleo-ci to actually vote
14:21:29 I guess we could use the standard first non-voting then voting procedure
14:21:38 back when they had it (and fair enough it was unreliable), I got the sense people didn't actually look at the results much
14:21:52 it was really unreliable ;)
14:22:06 if we see it usually passing, we're going to pay attention
14:22:21 dtantsur: yes but most of the time it was unreliable BECAUSE people were merging breaking changes
14:22:37 I would like to have some opinion about the khaleesi based gate for tripleo projects that I'm working on.
14:22:41 yeah, we're also to blame, I admit
14:23:05 dtantsur: then we had to have a broken job for 3 days while we waited to get something reverted
14:23:06 1) should it build packages on the fly with delorean or just copy the code in place? is there a preference?
14:23:06 2) what kind of validation would you like to see? tempest? or just something more quick and simple?
14:23:20 dtantsur: anyways it wasn't just ironic, there are plenty of examples to go around
14:23:42 derekh, fast-forwarding reverts is something we can do for ironic and IPA, and I can guarantee it for inspector ;)
14:23:45 adarazs: would you mind sending an email to introduce this concept and ask these questions?
14:23:56 dprince: all right.
14:24:00 dtantsur: so to answer your first question I'm all for adding it back, as long as people don't ignore it
14:24:00 * dtantsur has not heard about the khaleesi based gate
14:24:14 derekh, mind bringing it to the next ironic meeting?
14:24:24 i think it's worth trying as long as the intent is for the job to eventually become voting
14:24:26 dtantsur: they don't exist (upstream) yet. but we rely on them for RDO.
14:24:36 adarazs: cool. good questions, but they could potentially consume all of our meeting time.
14:24:42 dtantsur: sure will do,
14:24:47 thanks!
14:25:01 adarazs: I think less duplication of effort between upstream/downstream CI is a great idea, a mail with details of your plan would be excellent
14:25:09 e.g to openstack-dev
14:25:32 shardy: yes, thanks for clarifying that
14:25:39 shardy, dprince: ack, I will explain myself better in email. :)
14:26:01 okay, sounds like we might be getting back into some Ironic CI jobs...
14:26:27 * derekh will talk to the heat folks also
14:26:57 cool.
14:27:01 #topic Specs
14:27:44 I would ask for feedback on the composable roles stuff again: https://review.openstack.org/#/c/245804/
14:28:30 derekh: +1 for heat tripleo CI, we can bring it up at the heat meeting tho
14:28:49 Crud, I haven't reviewed any specs lately. :-(
14:28:52 shardy: ack
14:28:54 I'm keen to start knocking this stuff out soon... because we already have scaling issues
14:30:41 any other specs stuff?
14:30:54 I would really like to create a "split stack" spec too
14:31:02 or perhaps co-author one w/ someone
14:31:41 the idea of splitting the stack is appealing for several reasons: one of them being potentially huge for CI in that we could run overcloud provisioning jobs on normal infra hardware
14:31:41 dprince, "split stack" is about using external resources?
14:32:20 i.e. running just a "heat overcloud-configuration" stack on normal cloud based servers... skipping the Ironic bit
14:32:21 one stack for hw, 2nd stack for config?
14:32:26 jistr: yes
14:32:39 gfidente: using external resources is part of it, yes
14:33:28 to me... the clean split for a potentially faster running CI job, that could run on normal infra hardware, is the most appealing part of this
14:33:35 dprince: are the servers known to nova? or not known to nova at all?
14:33:57 same question for heat
14:34:00 slagle: we could try both, although probably not known
14:34:14 slagle: heat would use a mocked server I think
14:34:24 ok
14:34:35 slagle: I think it has to be the dummy server approach atm, because the external_resource stuff hasn't yet landed in heat
14:34:37 anyways, just wanted to highlight this idea which we spoke of at the summit.
14:34:40 the "not known" use case is what i was going for with the deployed server template patch i put up
14:34:52 slagle: I can see if we can get that moving again tho
14:35:04 slagle: sure, that may be the first step if we get that working
14:35:18 shardy: honestly, the case of the servers already being known to nova isn't as interesting to me personally
14:35:19 I think asalkeld may have moved on to other things, so I can take a look at it if we need it
14:35:37 i want the option of not having to "nova boot" anything
14:35:40 slagle: ack, we can just pass the IDs and dummy them then
14:35:42 that's the bit i like
14:35:53 slagle: yes, that is what we all want I think
14:36:07 dprince: definitely agree splitting the stack would make all of this easier
14:36:36 slagle: I think all we need is a unique identifier per server, for the SoftwareDeployment we don't need the servers to be known to nova
14:36:47 (as you've already proved IIRC?)
14:36:59 shardy: yea, my patch actually worked
14:37:34 slagle: Cool, that's the proof-of-concept then :)
14:38:09 okay, perhaps let's move on for now. Maybe a few of us can continue to explore the split stack idea soon then
14:38:30 #topic review priorities
14:38:37 my ultimate goal here was/is to use Heat to configure the undercloud, w/o having to nova boot the undercloud
14:39:19 slagle: sure, what I'm talking about is quite different with the overcloud, but the first step is the same I think
14:39:20 slagle, oh the chicken/egg problem which initially forced tripleo to the seed
14:39:42 yea :), anyway, we can move on :)
14:39:45 anyways, any reviews to highlight this week?
14:40:14 * dprince thinks we might just skip this section and let people bring things up in open discussion again
14:41:14 * bnemec would love to get undercloud ssl merged
14:41:22 #link https://review.openstack.org/#/c/221885/
14:41:45 We've been using essentially this downstream for quite a while now.
14:42:21 i'll have another look
14:42:36 * gfidente starred
14:42:42 once we get it all merged (with the overcloud too), are we going to switch at least 1 ci job over to use ssl everywhere?
14:42:52 Yeah, that's the plan.
14:42:56 cool
14:43:32 I think Juan and/or Mark were planning to do that once all the pieces are in place.
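(A sketch, not part of the meeting log: the "split stack" idea discussed above amounts to creating a configuration-only Heat stack against servers that Heat/Nova never provisioned. The Python below, using python-heatclient and keystoneauth, shows one hedged way that could look; the template path, stack name, parameter names, and addresses are all invented for illustration and are not real tripleo-heat-templates interfaces.)

    # Hypothetical sketch of a "split stack" configuration-only deployment:
    # the servers already exist, so nothing is "nova boot"ed here.
    from keystoneauth1 import loading, session
    from heatclient import client as heat_client

    loader = loading.get_plugin_loader('password')
    auth = loader.load_from_options(
        auth_url='http://192.0.2.1:5000/v3',       # assumption: undercloud keystone
        username='admin', password='secret',
        project_name='admin', user_domain_name='Default',
        project_domain_name='Default')
    sess = session.Session(auth=auth)
    heat = heat_client.Client('1', session=sess)

    with open('overcloud-config-only.yaml') as f:  # hypothetical config-only template
        template = f.read()

    # Pre-deployed servers are identified only by address/ID passed as parameters.
    heat.stacks.create(
        stack_name='overcloud-config',
        template=template,
        parameters={
            'ControllerHosts': '192.0.2.10,192.0.2.11',  # invented parameter names
            'ComputeHosts': '192.0.2.20',
        })

(The point, per slagle and shardy above, is that SoftwareDeployments only need a unique identifier per server, so such a stack could be exercised on ordinary cloud instances in CI.)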
14:43:39 oh I wonder how we are doing with the basic nova/ping test in CI? marios?
14:43:48 is there a review for that?
14:44:06 gfidente: no, it is something i have wanted to revisit but keep being pulled into other things
14:44:13 * bnemec points at https://github.com/openstack/instack-undercloud/blob/master/scripts/instack-test-overcloud again
14:44:20 gfidente: there is a review, but it was for the original plan to add to tripleo.sh
14:45:19 gfidente: like https://review.openstack.org/#/c/241167/ i started rewriting to a heat template. anyway, still to revisit as soon as possible
14:46:31 okay, thanks for the updates. moving on
14:46:33 o/
14:46:52 #topic open discussion
14:47:08 any other topics that need mentioning this week?
14:47:29 we didn't really reach consensus on the parameters/parameter_defaults thing on the ML yet, please respond if you have an opinion
14:47:43 shardy: ack
14:47:53 gfidente: something like this http://paste.fedoraproject.org/287295/46736696/
14:48:18 I posted an etherpad which organizes some patches to get us Mistral support:
14:48:21 https://etherpad.openstack.org/p/tripleo-undercloud-workflow
14:48:56 mostly puppet-mistral stuff... but it also gets us keystone v3 in the undercloud (which was required for mistral)
14:49:13 * dprince is interested in exploring mistral workflows for several things in TripleO
14:49:44 Yeah, I think we had talked about implementing some of the new API stuff in Mistral at some point.
14:50:50 dprince: +1 on mistral, but I'd like to see its usage made pluggable in tripleo-common if possible
14:51:31 e.g folks keep talking about various tools in the workflow space, so it'd be nice to have a simple abstraction to enable supporting more than one
14:51:44 shardy: that is one way to go about it I guess
14:52:43 Once we have an actual API we should be able to do whatever we want with the implementation behind it.
14:53:40 shardy: I was actually wondering if simply using a mistral workflow (which already has an OpenStack API) might supplant parts of our tripleo-common stuff altogether
14:54:12 ack, I'm just worried about alienating operators who e.g prefer ansible or whatever, but mistral seems like a good move regardless and more aligned with the TripleO mission
14:54:51 shardy: As for using tool-X I think we do need better integration with things like Ansible and the like
14:55:24 shardy: but really.. I think that is perhaps just generating a hosts file with the relevant roles.
14:55:48 shardy: one way to look at it perhaps
14:55:49 dprince: yeah, maybe that use case ends up satisfied via the split stack thing
14:56:00 anyway, something to think about
14:56:12 shardy: yep, step at a time
14:58:20 anything else this week?
14:59:37 okay, thanks everyone
14:59:59 #endmeeting
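(A closing sketch, again not part of the log, for the Mistral idea raised in open discussion: since Mistral already exposes an OpenStack API, tripleo-common-style logic could simply trigger a workflow execution rather than carrying its own orchestration. It assumes python-mistralclient and the keystoneauth session `sess` built as in the earlier split-stack sketch; the Mistral endpoint, workflow name, and inputs are invented.)

    # Hypothetical sketch only: kicking off a deployment-style Mistral workflow
    # through its API. Nothing here is an existing TripleO interface.
    from mistralclient.api import client as mistral_client

    mistral = mistral_client.client(
        mistral_url='http://192.0.2.1:8989/v2',    # assumption: Mistral API endpoint
        auth_token=sess.get_token())               # sess: keystoneauth session from above

    # Start an (invented) workflow and report its initial state.
    execution = mistral.executions.create(
        'tripleo.deploy_overcloud',                # invented workflow name
        workflow_input={'plan_name': 'overcloud'})
    print(execution.id, execution.state)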