15:01:30 #startmeeting puppet-openstack
15:01:30 Meeting started Tue Nov 22 15:01:30 2016 UTC and is due to finish in 60 minutes. The chair is mwhahaha. Information about MeetBot at http://wiki.debian.org/MeetBot.
15:01:31 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
15:01:34 The meeting name has been set to 'puppet_openstack'
15:01:36 o/
15:01:37 o/
15:01:43 #link https://etherpad.openstack.org/p/puppet-openstack-weekly-meeting-20161122
15:01:47 Hi
15:01:47 ahoy peoples
15:01:56 why does google calendar say it's in one hour for me
15:02:05 daylight savings
15:02:07 or lack thereof
15:02:12 we switched 2 weeks ago
15:02:13 awesome
15:02:16 you know, when you were on PTO :D
15:02:25 o/
15:02:29 I was probably sleeping
15:02:44 which is terrible because the tripleo meeting is at 7am now for me
15:02:47 anyway
15:02:56 #topic past action items
15:03:05 EmilienM to collect openstack deprecations and file launchpad bugs: needs to be postponed
15:03:41 yeah this one is on my list
15:03:47 mwhahaha to propose a virtual midcycle for next year - anyone have any thoughts on when they'd like to do something? Feb/March? There weren't any sprints listed on the page last time I checked for Ocata
15:03:48 but honestly I won't do it soon
15:04:02 or Jan might be better
15:04:10 mwhahaha: before the PTG maybe?
15:04:22 ok, maybe I'll just pick a date
15:04:27 or after, dunno
15:04:39 EmilienM sync with RDO about puppet upgrade for packstack/tripleo: blocker in packaging, facter3 (can't find the BZ)
15:04:53 it was some boost package, I saw the BZ yesterday
15:05:10 ok
15:05:15 so we'll just have to keep pushing on that
15:05:41 I'm on it
15:05:42 #topic Moving version bumps to the beginning of the cycle
15:05:49 Unfortunately, due to various CI issues, attempting to land just a simple version bump took over 4 days. Since the milestones are date based (and assume versioning after tag), I propose that we pre-land all the metadata.json changes either immediately after the tag or several weeks in advance. Thoughts?
15:06:01 +1
15:06:14 not sure which is better, but waiting until the last minute was terrible this last release
15:06:15 we could even automate it with the bot
15:06:28 like a job that does it in the release pipeline
15:06:42 dhellmann wanted to chat about stopping the manual update, so I need to sync with him about that
15:06:42 so every time we push a tag, right after we send the metadata update
15:06:51 but it's hard to predict a new tag
15:06:58 +1
15:07:10 well, I've got scripts to do the minor/major version bump so maybe we can just hand those off
15:07:44 #action mwhahaha to sync with openstack release team about metadata.json updates
15:07:58 for now I'll probably propose them earlier than the final week
15:08:03 yeah but how do you predict the tag
15:08:15 it sounds like a good first iteration
15:08:30 yeah, I think it'll require some more work on the release side
15:08:33 we'll see
15:08:47 we can use a file that sets the tags/dates
15:08:49 and query it, etc
15:09:34 well, I guess technically the release job could propose the version bumps after we tag the current stuff
15:09:56 there would be a slight period of desync between the tarballs and the repos
15:10:00 right, but at some point we need to bump the major tag
15:10:28 why is that?
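
A minimal Ruby sketch of the kind of metadata.json bump being discussed here; the actual bump scripts mentioned above are not shown in this log, so the file path and the "bump major, reset minor/patch" policy are assumptions for illustration only:

    #!/usr/bin/env ruby
    # Hypothetical post-release metadata.json version bump (not the real scripts
    # referenced in the meeting). Assumes it runs from the module root.
    require 'json'

    metadata_path = 'metadata.json'
    metadata = JSON.parse(File.read(metadata_path))

    # e.g. "9.4.0" -> "10.0.0" for the next development cycle
    major = metadata['version'].split('.').first.to_i
    metadata['version'] = "#{major + 1}.0.0"

    File.write(metadata_path, JSON.pretty_generate(metadata) + "\n")
    puts "Bumped #{metadata['name']} to #{metadata['version']}"
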
15:10:42 tarballs are named -master anyway
15:10:49 it ignores the metadata.json
15:10:50 if you have the release job do the metadata.json update, you have to propose where you want the tag to be
15:11:03 they aren't -master, we use version numbers I thought
15:11:03 let me find the script I wrote
15:11:10 yes, only for tags
15:11:13 a sec
15:11:26 right, I'm not talking about the intermediate tarballs, just the release ones
15:11:32 https://github.com/openstack-infra/project-config/blob/master/jenkins/scripts/run-tarball.sh#L28
15:12:17 just to be clear in case I misunderstood: what I mean is that regardless of when you update metadata.json, the master tarball will be created at each new commit merged into master
15:12:27 and the tagged tarball will only be created at release time
15:12:39 because of ZUUL_REFNAME in the tag job
15:13:10 yeah, I'm not sure of the exact ordering; I'm trying to figure out when we do the metadata.json update vs what's being released and what impact that has on the end user
15:13:27 it shouldn't change anything
15:13:31 maybe we just need to get it so after a release we always update to the next number
15:13:35 so we're always ahead
15:13:38 if you update metadata.json now versus later
15:13:40 I think it can if people are using librarian
15:13:54 instead of r10k
15:14:08 anyway, I'll work on this a bit over the next couple of weeks
15:14:26 but yeah, I would love a job that updates this file for us
15:14:32 moving on
15:14:33 (and also the releasenotes conf file)
15:14:38 yes
15:14:42 #topic CI scenarios
15:14:53 so we're hitting the 1h timeout on some of the scenarios
15:15:11 it seemed better yesterday, but we're getting up there on normal runs, like 45-55 mins
15:15:27 it's not directly related, but stackviz could help us see which tempest tests take more time than before
15:15:32 do we move some functionality out of the existing ones or up the timeouts?
15:15:40 and stackviz is broken on centos7 because npm can't be found when building the image in nodepool
15:15:47 it's on my list of things to do (npm/centos)
15:16:19 yeah, that'll be helpful
15:16:31 before moving features out, I would like to spend time comparing a CI job from now with one from 3 weeks ago
15:16:39 and see what is taking more time
15:16:48 it could be a networking thing with the RDO mirror
15:16:54 or a tempest test that takes longer
15:16:57 etc
15:17:03 it's also on my list
15:17:12 yeah, I tried to look into it a bit and didn't notice anything that really stuck out
15:17:17 sometimes the puppet run would take longer
15:17:23 sometimes the tests would take longer
15:17:26 wasn't really consistent
15:17:41 also we need to keep in mind our scenarios are really busy
15:17:48 compared to regular devstack jobs
15:18:17 we run a lot of things and we enable ssl everywhere, etc... so we might have increased the load slowly over the last few months
15:18:30 and are now reaching the limit randomly because we didn't see it growing
15:18:45 maybe we could move some stuff to scenario004?
15:18:51 iiuc only 001 and 002 are timing out?
15:18:52 +1
15:19:05 yeah, so far I think I've only seen it on 001/002
15:19:16 also new things should go to 004 I think
15:19:19 and primarily centos
15:19:23 iurygregory: yes
15:19:47 mwhahaha: right, so 2 areas to investigate: is it a mirror issue on centos? or just the fact we run more services on centos7 nodes
15:19:59 yup
15:20:00 1) is easy to find out
15:20:06 2) is more tricky
15:20:08 I'll take some actions
15:20:30 #action EmilienM to investigate scenario001/002 timeouts (mirror issue? too many services?)
15:20:35 cool thanks
15:20:42 moving on
15:20:43 I don't think it's a networking issue
15:20:49 otherwise scenario003 would hit it too
15:20:54 yeah
15:20:55 and afaik nothing changed in networking
15:21:01 or dmsimard would have told us
15:21:09 yeah let's move on
15:21:11 #topic Liberty EOL
15:21:14 #link http://lists.openstack.org/pipermail/openstack-dev/2016-November/107717.html
15:21:25 so we've got some modules not on the EOL list (aodh/barbican)?
15:21:31 FYI we're EOLing TripleO Liberty
15:21:36 iberezovskiy: what about Fuel? ^
15:21:48 mwhahaha: yeah, I saw... why is that??
15:21:48 #link https://gist.github.com/tbreeds/93cd346c37aa46269456f56649f0a4ac#file-liberty_eol_data-txt-L308-L310
15:21:59 no idea, I don't know where that list is generated from
15:22:00 mwhahaha: barbican? lol
15:22:07 mwhahaha: ask tonyb
15:22:25 but puppet-aodh is valid, we need to EOL it too
15:22:39 they might not have had a release for liberty
15:22:44 which may be why it's on the list
15:22:55 if we have a branch we should EOL
15:22:58 ah, just a branch?
15:23:03 so I'm going to check into it a bit more, I added it a few mins before the meeting
15:23:06 EmilienM, we didn't switch liberty Fuel to liberty puppets, so it's ok
15:23:15 https://github.com/openstack/puppet-aodh/releases/tag/7.0.0
15:23:18 it has a release ^
15:23:26 iberezovskiy: ack
15:23:42 ok, so I'll check those out and get them EOL'ed as needed
15:23:53 just wanted to make sure that there weren't any outstanding liberty issues
15:24:03 looks like there might be two liberty reviews out there
15:24:09 so I'll take a look at those as well
15:24:20 mwhahaha: feel free to share them
15:24:24 #link https://gist.github.com/tbreeds/93cd346c37aa46269456f56649f0a4ac#file-liberty_eol_data-txt-L118
15:24:27 looks like puppet-neutron
15:24:53 https://review.openstack.org/#/q/branch:stable/liberty+project:%22%255Eopenstack/puppet-.*%2524%22+status:open
15:24:56 #link https://review.openstack.org/#/q/branch:stable/liberty+project:%22%255Eopenstack/puppet-.*%2524%22+status:open
15:25:10 I'll ping Lukas
15:25:22 ok cool
15:25:26 done on #puppet-openstack
15:25:32 anyway, that's all I have on that, just an FYI
15:25:34 the other one can be dropped
15:25:48 #topic Open Discussion, Bug and Review triage
15:26:00 anyone have any general things they would like to chat about?
15:26:07 it's snowing here
15:26:11 ditto
15:26:17 guys, I wanted to ask for help with this strange error http://logs.openstack.org/01/397701/1/check/gate-puppet-ceph-puppet-unit-4.5-centos-7/1b52838/console.html.gz#_2016-11-15_11_52_36_640441
15:26:35 mkarpin: bad variable
15:26:54 something is using Package[$variable] and it's unset
15:27:00 it's after the switch to rspec-puppet-facts...
15:27:17 we might not be setting the package name in params for fedora22
15:27:24 mkarpin: nice work (the switch)
15:27:39 why fedora22?
15:28:03 all supported OSes
15:28:05 https://github.com/openstack/puppet-ceph/blob/master/manifests/osd.pp#L138
15:28:15 bet it's ceph::params::pkg_policycoreutils
15:28:22 I'd drop the fedora support
15:28:44 or it might be https://github.com/openstack/puppet-ceph/blob/master/manifests/osd.pp#L142
15:29:06 it looks like it's set https://github.com/openstack/puppet-ceph/blob/master/manifests/params.pp#L67
15:29:19 for the whole RedHat family
15:30:07 hmm, we can look further, but my thought would be it's not being set for some reason
15:30:36 ok, will try to dig into this deeper
15:31:38 ok cool, anything else?
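
For context on the puppet-ceph failure discussed above: rspec-puppet-facts compiles the catalog once for every operating system listed in the module's metadata.json, so a package name that ceph::params leaves unset on one of those platforms only shows up as an empty Package[] title after the switch. A minimal Ruby sketch of that spec pattern (the class under test and the single expectation are illustrative, not the actual puppet-ceph spec):

    # Illustrative rspec-puppet-facts spec, not the real puppet-ceph test.
    require 'spec_helper'

    describe 'ceph' do
      # Iterates over every OS declared in metadata.json; a parameter that
      # ceph::params leaves undef on one platform fails compilation only there.
      on_supported_os.each do |os, os_facts|
        context "on #{os}" do
          let(:facts) { os_facts }

          it { is_expected.to compile.with_all_deps }
        end
      end
    end
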
15:32:03 np
15:32:12 I'm digging into https://review.openstack.org/#/c/400760/
15:32:36 we haven't had a packaging promotion in... a long time
15:32:42 oh, and I pinged Canonical this morning
15:32:49 they'll provide ocata packages by next week or so
15:32:55 nice
15:32:55 =D
15:33:00 I'm done
15:33:13 cool, thanks everyone
15:33:20 #endmeeting