16:00:07 #startmeeting openstack_ansible_meeting
16:00:09 Meeting started Tue Oct 31 16:00:07 2017 UTC and is due to finish in 60 minutes. The chair is evrardjp. Information about MeetBot at http://wiki.debian.org/MeetBot.
16:00:10 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
16:00:12 The meeting name has been set to 'openstack_ansible_meeting'
16:00:30 #topic rollcall
16:00:56 as usual, the agenda is here: https://wiki.openstack.org/wiki/Meetings/openstack-ansible
16:01:00 waiting for ppl to show up
16:01:06 i am here
16:01:19 :)
16:01:24 * mhayden wanders in
16:01:39 yeah!
16:01:53 logan- spotz cloudnull are you there?
16:02:00 odyssey4me ?
16:02:09 in my sorta way :(
16:02:11 :)
16:02:13 cloudnull and prometheanfire are conversing
16:02:25 o/
16:02:45 cloudnull is almost last on the agenda...
16:02:51 let's start then!
16:03:05 #topic agenda
16:03:29 everyone agrees with the agenda, I guess?
16:03:36 Oh yeah, it's not a bug meeting but a regular meeting
16:03:39 yo!
16:03:46 yep
16:03:58 yeah, we are now applying
16:04:02 the change of meeting schedule
16:04:15 this is the last week of the month, so... community meeting.
16:04:24 #topic review action items
16:04:43 I am not sure what the last action items were, it's been so long
16:05:34 hwoarang: http://logs.openstack.org/31/516331/3/check/openstack-ansible-functional-opensuse-423/7d43077/job-output.txt.gz -- recent failure
16:05:44 * evrardjp ask RDO/SUSE/Ubuntu packager contacts to reply "OK" on
16:05:52 so yes, I did that
16:06:01 can setup-everything be run on a prod infrastructure? :D
16:06:04 so we are good
16:06:13 there is no other action item for this meeting
16:06:14 let's move on
16:06:22 #topic Liaisons feedback
16:06:24 gunix: yes
16:06:38 logan-: anything to report for the stable branches?
16:06:48 are you fine with what's going on for now?
16:07:16 mhayden: anything to report for vulnerability management?
16:07:19 qq - is there any interest in getting the lxc storage options back into stable/pike (maybe stable/ocata)?
16:07:21 nothing to report currently, don't know of any major patches in flight for stable
16:07:26 evrardjp: nothing here, sir
16:07:49 oh, cloudnull, you are here! cool! check this out: https://bpaste.net/raw/9f4e5cd582b3
16:07:54 ok, so we are good for now. I don't think there is any vulnerability around or any major patch in pike or below
16:08:01 let's move to other liaisons then
16:08:20 #topic infra liaison - zuulv3
16:08:22 odyssey4me: ?
16:08:32 cloudnull: (for opensuse failures i have submitted a patch which may fix stuff)
16:08:38 could you talk a little more about the Zuul v3 migration: status report, how the jobs are implemented
16:08:39 ?
16:08:51 I can do it for you if you're busy
16:09:32 so...
16:09:44 gunix: as soon as I can sacrifice the requisite number of goats to make the gate gods happy, https://review.openstack.org/#/c/516331/ should merge and that issue will be fixed.
16:09:49 evrardjp: have you tested the monasca-agent plugin on the lxc containers anytime
16:10:01 Neptu_: cloudnull could we talk about all of that after the meeting?
16:10:08 yes.
16:10:14 That would be easier for the logs :)
16:10:15 * cloudnull goes sits back down
16:10:19 cloudnull: ok, i will just ignore that
16:10:47 so for zuul v3, odyssey4me, logan-, and many other ppl here have done tremendous work
16:11:00 in fact, i am commenting out lines 99 to 107 in /etc/ansible/roles/os_swift/tasks/main.yml
16:11:35 so the master branch should have almost everything in place, for both the IRR and the integrated jobs
16:11:50 for lower branches, we are still actively working on it
16:12:07 apologies - only just saw the pings now
16:12:11 * odyssey4me turns sound back on
16:12:23 (forgot to mention hwoarang and mhayden, who did a lot of work too)
16:12:38 but I meant it was a very good community effort, and I thank you all.
16:12:54 so the status is that there is now a zuul.d folder in each repo and branch
16:13:35 for each role, we basically have a "project.yaml" explaining which jobs run, and the jobs are defined in the tests repo (and optionally extra jobs are defined in the repo directly)
16:14:03 for the integrated repo, it's the same spirit: a zuul.d folder containing the project and jobs.
16:14:21 odyssey4me: if you want to add anything on the current status, that would be cool
16:14:36 (I just thought it was good to have a summary down somewhere)
16:14:46 thanks for picking that up - and apologies for being late
16:14:48 for ppl that didn't get the chance to look at those jobs
16:15:03 no worries odyssey4me! that's why we are a team!
16:15:05 one more thing coming in to try and finalise the implementation for cross-repo testing: https://review.openstack.org/#/c/516723/
16:15:22 that's blocking cloudnull's patch, so eyes on it please
16:15:41 * hwoarang noted
16:15:45 oh, also https://review.openstack.org/516689 which is a pike backport for the last fix
16:16:21 anything else to discuss about infra?
16:16:36 I have a suspicion that we're going to have a little fun with the upgrade tests, but I'll look into that once the main tests are good... the upgrade tests will still work and be green, but they might not actually be *upgrading*
16:17:04 nothing else at this time - thanks logan- for working in parallel to figure out the integrated build tests and jobs - they're subtly different
16:17:07 mmm. That sounds bad. Thanks for having a look
16:17:51 ok, let's move to the next liaison then
16:17:53 ++ likewise for IRR
16:18:14 #topic docs liaison feedback and news
16:18:47 We finally got https://docs.openstack.org/pike/deploy/ fixed :)
16:18:53 spotz: anything worth noting in the docs world? What were the "retention policy changes"?
16:19:08 spotz: yeah I fixed the docs in a series of different phases
16:19:45 ccha: ^
16:19:56 From what I've been seeing, we're no longer going to remove the older docs. I hope to get more information on the watermarking dhellmann has been posting about next week in Sydney
16:19:57 on that topic, the deploy landing page for each branch is now pointing to the project-deploy-guide
16:20:22 spotz: ok great
16:21:08 if anyone has any question about how the docs are organized, feel free to contact spotz or me
16:21:12 let's move on then
16:21:34 #topic releases liaison feedback and news
16:21:49 for releases, we should have released last week.
16:22:13 "should" is the very important word here
16:22:19 we didn't.
16:22:41 because the previous release, from two weeks ago, only got released... last week on Thursday.
16:23:08 so I considered the train missed, and we'll continue the release train as usual.
16:23:46 that is all due to the Zuul v3 changes and their impact on all the actors of a release.
16:23:49 choo choo
16:23:53 * mhayden chugga chuggas
16:23:53 choo choo
16:24:04 ah, fun times
16:24:09 I am very sorry for the skip, and no choo choo can make me smile enough, but that's life :)
16:24:18 they're two they're four they're six they're eight
16:24:25 haulin trucks and pushing freight
16:24:27 * mhayden is done
16:24:33 Can't help that infra changes affected things
16:24:34 :D
16:24:39 anyway
16:24:59 the next release is still mid-month, as usual, so it will land on the 10th.
16:25:13 From the summit!
16:25:30 the last of the month will be on the 24th.
16:25:49 does anyone have something to ask about releases?
16:25:54 any urgent bump needed?
16:26:01 if not, let's move on
16:26:28 ok, let's move on
16:26:31 #topic Feedback on this month's bug triages
16:26:41 anyone up to do this section?
16:27:00 ok, I will hit it up then :)
16:27:39 we are currently okayish for bugs. Critical bugs were fixed just between zuulv3 fires. We may be lagging behind on some bugs
16:28:08 please don't hesitate to fix bugs outside bug smashes :D
16:28:31 (I mean old bugs, because I know you're already fixing bugs!)
16:28:54 I think mhayden would have put a nice gif here.
16:29:10 ok, next topic
16:29:13 #topic Forum @ Sydney Summit
16:29:39 so for the summit, we'll have a few things, all listed there:
16:29:41 #link https://etherpad.openstack.org/p/osa-sydney-summit-planning
16:30:14 if you are coming, or if you want us to eavesdrop on some conversations that are OSA related, tell us!
16:30:30 (I don't know what else to say)
16:30:42 yup!
16:30:52 hunt us down, let's go grab a beer
16:31:07 :D
16:31:30 Or something tastier... beer, blech!
16:31:42 mmmm
16:31:44 spotz ++
16:31:53 next topic?
16:31:59 * cloudnull not against cloud scotch either
16:31:59 #topic Blueprints
16:32:10 anything here?
16:32:29 mhayden: https://review.openstack.org/#/c/479415/
16:32:36 zuul has unfortunately dominated my time so far, never mind my day work... so I've unfortunately got no updates for mine
16:32:39 cloudnull: maybe you want to speak about the work on hyper-converged?
16:32:41 cloudnull: there's a meeting going on
16:32:49 oh
16:32:51 nvm
16:32:53 :)
16:32:59 :D
16:33:04 that was priceless :D
16:33:05 also, given the time, i think it'd be great to see this get worked on
16:33:07 https://review.openstack.org/#/c/458595/
16:33:19 Merged openstack/openstack-ansible-os_barbican stable/pike: Initial OSA zuul v3 role jobs https://review.openstack.org/516587
16:33:38 cloudnull: I think that's for logan-, jmccrory, and I
16:33:47 evrardjp: gotta depart a little early unfortunately
16:33:47 at least that's what we discussed during the PTG
16:34:10 mhayden: oh, you'll miss the fun part with the open discussion? :'(
16:34:32 mhayden: no worries :p thanks for being there!
16:34:49 cloudnull: I haven't had the chance to work on that yet
16:34:51 yea, my goal with that was to get an haproxy role replacement piped in. i have a lot of work done on the role but have not had time to work on the osa side of integrating it
16:35:07 I think zuul v3 has been quite time consuming
16:35:17 ++
16:35:39 cloudnull: if you need anything for the hyper-converged work, tell us
16:35:54 will do
16:36:01 I have to write the upgrade playbook
16:36:27 I also need insight on https://review.openstack.org/#/c/454450/
16:36:28 the upgrade jobs should be back up, so we can use that for results
16:36:45 thanks! will review
16:37:09 if anyone else has cycles for it, that would be great!
16:37:29 next topic then
16:37:32 #topic Role maturity handling
16:38:02 according to our documentation, the following roles should change maturity:
16:38:04 monasca, octavia, searchlight, freezer
16:38:27 octavia got some love from xgerman_
16:38:45 so let's deal with the monasca, searchlight, and freezer repositories
16:40:08 let's vote on moving them to unmaintained
16:40:17 ++
16:40:24 ++
16:40:36 ++
16:40:54 ++
16:40:56 I guess there is no need for a formal vote?
16:41:26 ++
16:41:29 nobody has shown up in this conversation to maintain them, nobody contacted me or answered the email. this is done then.
16:41:40 make it so
16:41:51 #action evrardjp mark the roles for monasca, searchlight and freezer as unmaintained
16:42:05 regarding monasca, I will talk with the community tomorrow at the meeting and get back to you; I can only go that far
16:42:15 before moving to another topic
16:42:27 Neptu_: that's nice of you
16:42:45 sweet!
16:42:47 a role can be moved back from unmaintained to maintained state.
16:42:53 let's see how it goes.
16:43:02 I did not know you had a meeting, so you got me a bit on the loose
16:43:46 not sure what you meant, but everything is logged, if that can help you?
16:44:12 anyway, before moving to another topic, let's discuss the telemetry case
16:44:41 currently ceilometer/gnocchi/aodh passed master testing, so we couldn't deprecate it.
16:45:13 But that doesn't mean our telemetry stack is in the right place
16:45:36 ocata/pike is definitely broken, and I don't know if the functional testing on master is really working.
16:45:48 so if there is someone interested in this, please contact us
16:46:12 ok, let's move to a different topic
16:46:25 #topic Helping infra reduce their inodes outage: Log discussion
16:46:30 cloudnull: ?
16:46:48 this topic appeared on the agenda after a series of events that brought infra down
16:46:56 yes, what do we want to collect?
16:47:07 we disabled our general etc collection
16:47:22 it makes sense to me to collect the /var/log/ in each container
16:47:31 and for me the /etc/
16:47:39 however, I think it's useful for us to enable it again, but with more focus this time
16:47:45 maybe we can also do the package management configuration?
16:47:45 i agree
16:47:58 cloudnull: ++
16:48:03 maybe we simply create a list object in tests and folks can add to it as we deem needed?
16:48:08 yeah, we already collect enabled repos, but maybe the conf files could be useful too
16:48:13 cloudnull: that sounds good
16:48:35 do we have a general list of things we want to have now to get us started?
16:49:00 maybe it's also a good idea to work it out directly in the post job too, and use ansible directly?
16:49:11 ++
16:49:25 I'll start off with /etc/ and /var/log/
16:49:26 cloudnull: I do not have that, except what was in the shell script
16:49:34 and we can go from there.
16:49:37 yeah, I think that sounds reasonable
16:49:44 maybe /etc/apt/ and /etc/yum on top
16:49:50 zypper :D
16:50:07 yes :)
16:50:08 *zypp
16:50:15 and dnf or whatever
16:50:26 archiving them maybe?
16:50:34 ++
16:50:35 ok. I'll get to it.
16:50:48 oh, that would be lovely
16:51:16 does anyone have anything to add to this list?
16:51:42 fine by me
16:52:03 ok
16:52:07 let's move to the last topic
16:52:12 #topic open discussion!
16:52:19 anything goes!
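[Editor's sketch] The collection plan agreed in the log-discussion topic above (grab /etc/ and /var/log/ from each container, include package-manager configuration, and archive rather than copy to keep inode counts down) could look roughly like this. The destination path and archive naming here are illustrative assumptions, not the actual OSA gate implementation:

```shell
#!/bin/sh
# Hedged sketch of a CI log-collection step, based only on the
# directories discussed in the meeting. LOG_DEST and the archive
# naming scheme are assumptions for illustration.
set -eu

LOG_DEST="${LOG_DEST:-/tmp/ci-logs}"
mkdir -p "$LOG_DEST"

# /etc covers the package-manager configuration mentioned in the
# meeting (/etc/apt, /etc/yum, /etc/zypp, dnf, ...) as well.
for dir in /etc /var/log; do
    [ -d "$dir" ] || continue
    name="$(echo "$dir" | tr '/' '_')"
    # Archiving instead of copying file-by-file keeps the inode count
    # on the log servers low, which was the point of the discussion.
    tar -czf "$LOG_DEST/${name}.tar.gz" "$dir" 2>/dev/null || true
done

ls -1 "$LOG_DEST"
```

In a real post job this would more likely be an Ansible task per host/container, as suggested at 16:49:00, with the list of paths kept as a variable that folks can extend.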
16:52:23 on the topic of gating, I'd like to propose modifying the integrated testing - https://review.openstack.org/#/c/516002
16:52:28 cat photos are allowed.
16:52:33 videos too.
16:52:58 i have nothing to add
16:53:17 I think we keep the AIO job for periodic tests, but we test more scenarios across the various OSes as a standard
16:53:24 that's a lot of things to trigger, cloudnull; do you want them on each patch, or on periodics?
16:54:20 it's more things to trigger for sure
16:54:20 cloudnull: I like when they have a good name
16:54:39 but it's a lot less than most other projects
16:54:44 I am worried that we are repeating the IRR there
16:55:09 for example, compute could just get designate on top
16:55:15 and we don't repeat ourselves
16:55:25 and we make designate a good compute citizen
16:55:32 there would be some overlap, but for example - I found https://review.openstack.org/#/c/516331/ by testing a standalone swift deployment from osa
16:56:24 also, I think we should have tests for things like cinder w/ rbd w/ compute etc
16:56:26 cloudnull: I like the idea of having standalone things, but I think it's in the role tests that it should be defined then
16:56:37 cloudnull: I agree there
16:56:55 which should be something deployed from within osa
16:57:18 so maybe the current scenario breakdown can be tuned up
16:57:33 cloudnull: I like the idea
16:57:40 but I think there are gaps in the integrated testing that we should address
16:57:57 cloudnull: Yeah, I saw your patch go by yesterday for removing the aio and replacing it?
16:57:58 I like the idea of having these multiple scenarios
16:58:06 things that come to mind for me are swift + glance, glance + nfs, glance + rbd, etc
16:58:09 and we use these scenarios as "user stories" for the documentation
16:58:20 hey, where does the openstack-ansible wrapper put the logs for the playbook executions?
16:58:31 cloudnull: but that sounds like a thing for swift testing
16:58:41 glance testing*
16:58:51 but those things all tie back to compute
16:58:56 oh, ok.
16:59:20 so nova + glance + swift // nova + glance + rbd //
16:59:26 so maybe we need to address these things in the roles themselves, and maybe we can do that better now with cross-repo testing?
16:59:39 cloudnull: maybe we can have a scenario like live migration with nova, glance, and cinder all on rbd
16:59:58 then a scenario named "all objects" or something like that, that has compute relying on swift
17:00:05 or an nfs scenario ...
17:00:08 yeah
17:00:22 cloudnull: I think we can indeed
17:00:28 but I like where you're headed.
17:00:40 I am just worried about the impact on infra right now.
17:00:59 it is more jobs, but the jobs should be a lot shorter.
17:01:10 I like this idea.
17:01:11 * cloudnull gotta run
17:01:14 have another meeting
17:01:16 sorry
17:01:19 I wanted to have a metal job too.
17:01:21 ok
17:01:24 but more on this soon.
17:01:26 take care all
17:01:27 thanks for your time!
17:01:32 take care everyone
17:01:39 any last words for the meeting?
17:01:46 else I'll wrap this up
17:01:57 I am already two minutes late, sorry.
17:01:58 im good
17:02:05 thanks everyone then!
17:02:06 #endmeeting