16:00:02 #startmeeting OpenStack Ansible Meeting
16:00:03 Meeting started Thu Jan 22 16:00:02 2015 UTC and is due to finish in 60 minutes. The chair is b3rnard0. Information about MeetBot at http://wiki.debian.org/MeetBot.
16:00:05 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
16:00:07 The meeting name has been set to 'openstack_ansible_meeting'
16:00:52 so hey everybody
16:01:00 hi
16:01:01 #topic RollCall
16:01:12 \o
16:01:19 here
16:01:19 Hello
16:01:19 #link https://wiki.openstack.org/wiki/Meetings/openstack-ansible
16:01:21 o/
16:01:25 present
16:01:25 \o
16:01:25 \o
16:01:29 present
16:01:32 pre sent
16:01:38 present
16:01:41 here
16:01:54 present
16:02:01 git-harry: present
16:02:11 #topic The state of "10.1.2" and "9.0.6", logging, bugs, etc. - (cloudnull)
16:02:40 present
16:02:52 looks like I need "apell".
16:03:01 ^see
16:03:29 so odyssey4me andymccr, how are we looking towards getting logging all happy?
16:03:55 i think the main work that was required has gone in already - courtesy of odyssey4me
16:04:00 so we should be fine.
16:04:06 #link https://bugs.launchpad.net/openstack-ansible/+bug/1399371
16:04:26 cloudnull: one patch left before it's all hunky dory - https://review.openstack.org/#/c/148623/
16:05:13 essentially the only bit left is that for some reason the swift logs aren't coming through properly - the rsyslog container doesn't send them... I'm still trying to figure out the problem there
16:05:40 odyssey4me: is that something that maybe d34dh0r53 can help out with?
16:06:05 odyssey4me: i'll look into that because i know i removed the rsyslog containers from being built on the swift storage hosts
16:06:18 sure - in truth anyone who has a working build can help to figure out what the cause is - once I know that it'll be easy to roll in the patch
16:06:51 it may be an aio specific issue, in which case we can ignore it - but I could do with assistance to triage, thanks andymccr
16:07:07 i've got a build almost completed now
16:07:17 so will be no problem
16:07:28 #action andymccr to investigate why the rsyslog container doesn't send swift logs
16:07:33 ok - we can sort that out before COB tomorrow then
16:07:42 e-z
16:07:45 let me know if I can assist as well
16:07:55 sweet.
16:08:07 #action d34dh0r53 To help andymccr on rsyslog issue
16:08:21 i could use a coffee - milk, 1 sugar :D thanks d34dh0r53
16:08:27 haha
16:08:39 so for our open bugs within 9.0.6.
16:08:44 :) on its way, may be a little cold when it gets there
16:09:08 #topic 9.0.6
16:09:09 between mattt odyssey4me and mancdaz everything looks "in progress"
16:09:45 odyssey4me: logging should be good in 9.0.6
16:09:54 because no swift?
16:09:54 cloudnull yeah I backported the stuff that was tagged as a potential, after assessing if it was needed
16:09:57 #link https://launchpad.net/openstack-ansible/+milestone/9.0.6
16:09:59 some of those are outstanding
16:10:29 cloudnull: yep, once I get a working aio on 9.x I'll test all the logging patches and submit them
16:11:17 next week will be backport week, pretty much
16:11:45 odyssey4me: https://bugs.launchpad.net/openstack-ansible/+bug/1399387/comments/10 is that being broken out into multiple commits?
16:12:13 or will it stay as is?
16:12:38 cloudnull: yes - that'll be reverted and submitted one at a time once openstack infra approves https://review.openstack.org/#/c/149305/
16:12:46 kk.
16:13:06 that patch will allow me to patch both the icehouse and juno branches to a point where we have a working aio check on both those branches
16:13:29 sounds good.
16:13:45 then we'll re-enable voting on those branches for the aio check
16:14:30 is there anything that is presently in 9.0.6 that needs to have the state changed?
16:15:17 like i said, that's a lot of "in progress" for only one week till release?
16:15:41 ok
16:15:45 on to 10.1.2
16:16:03 #topic 10.1.2
16:16:09 #link https://launchpad.net/openstack-ansible/+milestone/10.1.2
16:16:24 lots of things
16:16:29 what's pressing there? does anyone need help on working these issues?
16:16:53 I need help in getting the cherry-pick backports approved
16:17:19 w/ 9.0.6, some of those in progress are actually fix committed no?
16:17:43 mancdaz I've got your back there.
16:17:49 mattt, i assume yes. but i guess we'll have to go through and figure that out
16:18:03 cloudnull: i looked at one and it's been merged into icehouse, so that list may not be accurate
16:18:16 it's the way launchpad tracks the status
16:18:30 mattt possibly - we can verify next week I guess
16:18:37 it's why I wanted to talk about how we do that in launchpad
16:18:49 mancdaz we'll get to that.
16:18:54 * mancdaz nods
16:19:29 so, with 10.1.2 we need to do lots of backporting. but beyond that everything else looks to be good, right?
16:19:55 I can go through that list and see if anything targeted for a series has actually merged, even if the bug shows in progress
16:20:22 kk thanks mancdaz. we'll need to do that for 9.0.6 and 10.1.2
16:20:27 i can help out with that too.
16:20:37 #action mancdaz To go through that list and see if anything targeted for a series has actually merged, even if the bug shows in progress (9.0.6 & 10.1.2)
16:20:38 cloudnull: should be fine - essentially we just need people to actually test the build over and over again
16:20:44 there's actually not that many juno outstanding with the backport tag
16:20:52 I did most of the backports last week
16:21:02 just need them to be approved/reviewed
16:21:10 once the aio check is fixed then it'll be automatically tested in principle per commit, which will help
16:21:15 #action cloudnull To help mancdaz with merging help
16:21:28 we need help with that help
16:21:35 haha
16:22:02 #topic Prioritizing the "next" milestone for 10.x and 9.x. - (cloudnull)
16:22:52 we've got quite a few items in the "next" milestone.
16:22:59 link https://launchpad.net/openstack-ansible/+milestone/next
16:23:12 #link https://launchpad.net/openstack-ansible/+milestone/next
16:23:28 i'd like to see if we can prioritize them for what will be 10.1.3 and 9.0.7
16:24:18 starting with what will be a hot topic item, "F5 Pool Monitoring in Kilo"
16:24:40 presently the f5 monitoring script uses xml for token parsing,
16:24:48 this is gone come kilo.
16:24:49 #link https://bugs.launchpad.net/bugs/1399382
16:25:11 i'd like to see if we can do something better/different.
16:25:34 Apsu: you worked on that, any insight?
16:25:51 surely this belongs in the set of things which need to be extracted from the generalised open build, as it's specific to the rax deployments
16:26:04 cloudnull: Life is hard trying to process data on an F5, given the CPU limitations and poor forking performance
16:26:18 So making use of JSON will be interesting. Also the python version is quite old.
16:26:18 some service that runs on the host and checks status locally? It's how galera recommends it's done
16:26:27 the f5 then only needs to do a standard http keyword check
16:26:28 Might look into jshon, if we're still going to do API level checks
16:26:56 odyssey4me, it is true that rax does use f5 but that is not specific to rax, anyone that uses an f5 will not be able to in short order.
16:27:30 At least as far as pool monitoring based on API availability/modest authenticated requests
16:28:07 Apsu can we spike within the next milestone to see how we make that better?
16:28:18 Sure.
16:28:41 i know that it's a wishlist item, but it will be something that we/the project is going to have to deal with sooner than later.
16:28:42 I'll see about statically-compiled jshon in case it's a quick win and we can parse JSON from the bash script directly
16:28:59 A thing I'll toss out - does that check have to be done on the F5?
16:29:00 Should incur the least penalty and not require a custom python version or such
16:29:13 palendae yeah that's what I was saying
16:29:21 palendae: i was typing the same question
16:29:25 We should definitely look into not
16:29:29 Given the limitations, would it be easier to move that monitoring action?
16:29:32 #action Apsu Spike on F5 monitoring
16:29:38 voluntold
16:29:39 The F5 needs to know, but it doesn't have to be the place that does token procurement.
16:29:40 the f5 can do a dumb poll, the results of which are generated by a more complex monitor on the host
16:29:47 Cool
16:29:49 I don't really like the idea of the F5 doing anything other than port level monitoring either
16:29:52 ^
16:29:55 d34dh0r53: ding
16:30:02 Doesn't sound like anyone does
16:30:03 Well that depends on what level of LB intelligence we want.
16:30:22 Port monitoring is great unless the API is spitting out 500s
16:30:24 Can probably dig deeper later, but something to think about
16:30:25 Apsu: true that
16:30:28 Port's open! but broken
16:30:36 Anyhow, I'll look into it
16:30:46 as for the rest of the wishlist items targeted for "next", do we want to bring any of them in for the proposed 10.1.3 / 9.0.7?
16:30:48 I actually don't care where the intelligence lives, except that clearly in this case we can't do what we want on the f5, so do it somewhere else
16:30:57 Apsu: in its current state the script is closing a lot of ports that are functioning correctly
16:31:24 d34dh0r53: That's unrelated to the concept of availability monitoring, just the particular way the script skeleton was filled out so far :P
16:31:29 Will have that fixed up too, lol
16:31:41 Apsu: cool
16:31:53 cloudnull as for other wishlist items, I doubt they'd make it into icehouse
16:32:06 since we are treating that as a maintenance branch/version now, right?
16:32:30 cloudnull: i'll probably work https://bugs.launchpad.net/openstack-ansible/+bug/1412762 into 10.1.3
16:32:31 mancdaz we are.
16:32:47 we expose a whole bunch of non-function heat resource types which is a bit janky
16:32:55 *non-functional
16:33:08 kk. sounds good
16:33:19 if nobody has anything specifically that they want in, i'll go through them and file them away accordingly.
16:33:30 #action matt To backport https://bugs.launchpad.net/openstack-ansible/+bug/1412762 into 10.1.3
16:34:36 ok moving on.
16:34:36 #action cloudnull To target greater than wishlist next bugs into future milestones
16:34:51 #topic Triaging bugs, targeting at correct series, targeting series fixes at milestones - getting consensus on how we do it. mancdaz
16:35:08 mancdaz you have the floor.
16:35:27 ok so the way we file/triage/target bugs etc
16:35:39 when a bug is first filed, it is auto targeted to trunk
16:35:58 but we often need to backport it to icehouse and juno, or just juno
16:36:14 so what I tend to do is add the juno/trunk/icehouse series
16:36:25 and then for the milestones, target the relevant series at that
16:36:38 trunk would never have a milestone, as we don't actually release anything from master
16:37:03 I also see bugs being filed, and then a milestone being added to the default (trunk) series
16:37:15 just wanted to get consensus on how we do this
16:37:25 and also at what point we actually target a bug
16:37:49 ie can I decide right now if something is going in to 10.1.2, or do we do the backport now, and later target a handful at a milestone
16:38:03 I don't know the 'correct' way
16:38:12 but just as long as we're all doing it the same way
16:38:21 am I making sense?
16:39:07 example of what I do: https://bugs.launchpad.net/openstack-ansible/+bug/1408608
16:39:34 fyi - if you follow the methodology outlined by mancdaz, then the openstack-infra will also auto-update the bug status when your patch merges
16:39:45 only for trunk odyssey4me
16:40:10 imo once it's fixed in trunk it should be backported to its appropriate series.
16:40:12 we have to manually change the status of the series
16:40:28 cloudnull yeah I'm not arguing about whether we should backport or not
16:40:36 just how we track it in launchpad
16:41:24 i would think that if the item was fixed outside of the already set milestone, as long as it's not a new feature, it should be added to the upcoming milestone.
16:42:41 and that would be tracked as such within launchpad.
16:43:01 cloudnull: the advantage of adding the series is that you can target the series to the particular milestone
16:43:20 rather than the trunk fix to a milestone, which means you can't target two milestones
16:43:40 ie 9.0.x and 10.0.x
16:43:47 ok so look at my example versus this one https://bugs.launchpad.net/openstack-ansible/+bug/1402028
16:43:58 this was not targeted to a series
16:44:10 but the fix was against trunk
16:44:15 ah, i see now what you're talking about.
16:44:27 i like the mancdaz approach.
16:44:31 it was targeted at a milestone for the trunk
16:45:10 in the way I've been doing it, you'd never target the trunk fix at a milestone, only the series
16:45:17 as that's where we release
16:45:18 so, as we triage issues, add series target to milestone.
16:45:26 i like it
16:45:31 do we have agreement on this issue?
16:45:56 got that #agreed ready b3rnard0?
16:46:01 (also please add appropriate backport-potential tags)
16:46:05 * odyssey4me likes it - milestones targeted at series makes the backporting tracking easier
16:46:11 then release manager can come along and work through the backports
16:46:35 this is what we get for letting b3rnard0 chair a meeting...
16:46:49 all opposed?
16:46:54 cloudnull: It's because he has that standing desk. If he had an actual chair, we'd be golden
16:47:00 #startvote
16:47:01 Unable to parse vote topic and options.
16:47:08 the motion passes...
16:47:10 moving on.
16:47:15 The ayes have it
16:47:21 aye
16:47:24 #agreed triage issues, add series target to milestone
16:47:54 #topic Airing of grievances
16:47:57 just make sure you also add trunk, as otherwise it looks weird
16:48:26 so. do we have anything that anyone wants to talk about regarding the project?
16:48:39 we didn't have too many agenda items
16:49:02 so i thought it would be nice to open it up to folks if they wanted to discuss something.
16:49:20 and had neglected to add an agenda item.
16:49:27 Can we skip to the Feats of Strength?
16:49:38 cloudnull: I've got a lot of problems with you people.
16:49:54 lol
16:49:56 The de-raxification is saved for 'next' correct?
16:50:13 Which is the first step in galaxification
16:50:22 palendae: kilo, i think
16:50:34 moving on to the feats of strength
16:50:47 Log Toss competitors, form a line on the left
16:50:48 however irc wrestling may be tough.
16:51:01 ok. if nothing then moving on.
16:51:32 #topic More on genericising deployment roles and code. - (cloudnull)
16:51:32 let's move to genericising things for the last 10 minutes
16:51:46 which opens up to palendae
16:52:53 so far the plan has been that we will not target 10 or 9 for the removal of the rax bits, moving to kilo will be the first release that has rax bits stripped
16:52:55 I was more asking about what we decided last time - we had deferred to next
16:53:44 which will also include making the roles galaxy compatible, though all within the same structure.
16:54:34 as we progress through making our stack more generic, we will hopefully get to the point where the roles become separate repos.
16:54:35 cloudnull: and using external galaxy roles for infrastructure (non-openstack) where applicable and possible, I guess?
16:55:33 odyssey4me: i'd like to say yes, though from what I've seen so far that may be difficult.
16:55:49 granted i've not looked everywhere.
16:55:56 May be room to improve the existing ones, then.
16:56:08 definitely.
16:56:09 Merge in our work to improve both user communities
16:57:12 the rack_roles have been labeled as "abandonware" so we could try to pick up where they left off or move to other role maintainers.
16:57:39 but i like the idea of being able to consume external roles.
16:57:44 cloudnull: So the os_cloud repo's going to remain as a sketch?
16:58:10 And I'd agree that long term upstream galaxy roles would be great, but I think we'd have to participate in contribs/maintenance then, too
16:58:36 as soon as we extract juno from master into the branches i'll put through a collapsed pr as WIP.
16:58:55 cloudnull: Of all the roles getting split out?
16:59:03 It sounded like there was some resistance to that
16:59:09 Since it's one big chunk
17:00:06 presently it would be a big chunk.
17:00:17 let's move this convo into the channel because we're out of time.
17:00:21 it may be impractical to have a stackforge repo for every role, at least immediately - that was my resistance. But I like the idea of using galaxy roles
17:00:22 ok
17:00:35 good meeting y'all
17:00:41 don't forget to end the meeting :)
17:00:43 #endmeeting
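The host-local monitor idea discussed under the F5 topic (the F5 does a dumb HTTP keyword poll, while a richer check on the host parses Keystone's JSON, since the XML API goes away in Kilo) could be sketched roughly as below. This is a hedged illustration only, not the project's monitoring script: the endpoint URL, port 9200, and the UP/DOWN keywords are assumptions.

```python
# Hypothetical sketch of the "dumb F5 poll + smart local monitor" idea from
# the meeting. The checked URL, listen port, and UP/DOWN keywords are
# illustrative placeholders, not values from the repo.
import json
import urllib.error
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

def token_id(body):
    """Pull the token id out of a Keystone v2.0 JSON token response."""
    return json.loads(body)["access"]["token"]["id"]

def healthy(status_code):
    """An open port is not enough: a 5xx response still counts as down."""
    return status_code < 500

def check_api(url, token="", timeout=2):
    """Return True if the API answers with a non-5xx status, else False."""
    req = urllib.request.Request(url, headers={"X-Auth-Token": token})
    try:
        with urllib.request.urlopen(req, timeout=timeout) as resp:
            return healthy(resp.status)
    except urllib.error.HTTPError as err:
        return healthy(err.code)   # port open, but the API may be broken
    except OSError:
        return False               # refused / timed out / unreachable

class HealthHandler(BaseHTTPRequestHandler):
    """Tiny endpoint for the F5's standard HTTP keyword check."""
    def do_GET(self):
        up = check_api("http://127.0.0.1:8774/")  # e.g. a local API (assumed)
        body, code = (b"UP", 200) if up else (b"DOWN", 503)
        self.send_response(code)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

# On the host this would run as a daemon (not started here):
# HTTPServer(("0.0.0.0", 9200), HealthHandler).serve_forever()
```

The F5 pool monitor would then just GET this endpoint and look for the keyword "UP", keeping token procurement and JSON parsing off the F5 entirely, as suggested in the discussion.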