15:00:16 #startmeeting puppet-openstack
15:00:17 Meeting started Tue Jun 23 15:00:16 2015 UTC and is due to finish in 60 minutes. The chair is EmilienM. Information about MeetBot at http://wiki.debian.org/MeetBot.
15:00:18 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
15:00:20 The meeting name has been set to 'puppet_openstack'
15:00:27 who is here today?
15:00:28 o/
15:00:32 <_ody> o/
15:00:36 o/
15:00:44 o/
15:00:49 o/
15:00:59 #link agenda: https://etherpad.openstack.org/p/puppet-openstack-weekly-meeting-20150623
15:00:59 #link last meeting: http://eavesdrop.openstack.org/meetings/puppet_openstack/2015/puppet_openstack.2015-06-16-15.00.html
15:01:04 o/
15:01:14 #topic Actions from last meeting
15:01:28 spredzy,clayton create a POC and send an email to the ML about parameter default policy. Not Done, will be postponed
15:01:34 cody to finish neutron patch to support puppet4 - DONE
15:01:39 thx _ody btw ^
15:01:47 spredzy Send a patch as an implementation reference (dbsync exec). Not Done, will be postponed
15:01:47 spredzy Send an email on the ML to let community know the final word on that thread (dbsync exec). Not Done, will be postponed
15:01:54 Emilien to run a thread on ML about some stable branches deprecations (grizzly+havana) DONE
15:02:04 here
15:02:05 no negative feedback on this one ^
15:02:10 I guess we can go for it
15:02:14 and
15:02:16 mdorman to work more on https://review.openstack.org/192009, DONE, merged
15:02:22 #join #openstack-meeting
15:02:25 sorry
15:02:35 #topic Announcements
15:02:35 <_ody> thx for the review. social and I will finish up the other one that wasn't caught by tests...and I'll get myself a better acceptance environment today
15:02:56 #info 5.1.0 released!
15:03:05 thanks everyone for the help on this one
15:03:08 \o/
15:03:24 #info beaker jobs are now voting
15:03:36 #info puppet4 jobs are now voting
15:03:36 <_ody> Yeah. Great work on the release.
15:04:00 #topic CI status
15:04:26 beaker: working now on 'experimental' upgrade job
15:04:37 #link experimental puppet upgrade job: https://review.openstack.org/194293
15:04:46 any feedback is welcome in the patch ^
15:05:18 _ody, sbadia: do we have blockers / work to do for Puppet4?
15:05:23 <_ody> Oh. A question about CI status that I keep meaning to ask the group. Any way for us to get a snapshot-in-time view of everything in master that is passing?
15:05:36 <_ody> EmilienM: Besides it being hard to do that ^
15:05:43 #link CI status https://docs.google.com/spreadsheets/d/1i2z5QyvukHCWU_JjkWrTpn-PexPBnt3bWPlQfiDGsj8/edit#gid=0
15:05:53 _ody: I have this file now ^
15:06:06 _ody: but we might need an auto-generated webpage
15:06:21 <_ody> Ok. That will do...was hoping gerrit and zuul could tell me.
15:06:38 _ody: we can follow up on that later, it's a good point
15:06:54 <_ody> Only other puppet4 thing is to get everything on rspec-puppet 2.2.0
15:06:59 yep
15:07:03 #action _ody and EmilienM to figure out a CI status page
15:07:12 sbadia, _ody: cool. Thanks!
15:07:15 and a waiting patch from social (on puppet-neutron)
15:07:28 a last question
15:07:36 should we use Puppet4 for beaker jobs?
15:07:45 <_ody> Would be nice.
15:07:56 (2.2.0 is related to msync)
15:07:56 * _ody only because I am a puppet4 fanboy
15:08:23 well, I would like to use what distros ship
15:08:25 EmilienM: hum, it's too bleeding edge :)
15:08:36 question about Puppet4: do both Ubuntu and RedHat have Puppet4 available? Are they the default? I ask because Solaris is still on 3.6.2 and if needed I can start the legal process to bump to Puppet 4.latest
15:08:43 yes, it's better to use the distro version
15:09:02 it's 3.7 on debian/ubuntu
15:09:04 I think for now we should stay on 3.x for beaker jobs and wait for the packages to land in the distros
15:09:12 is anyone against that?
15:09:33 well, let's go ahead then
15:09:45 is there a timetable for moving to puppet 4 in debian/ubuntu?
15:09:49 it will be a long time before the distros ship puppet 4
15:10:09 crinkle: good point
15:10:12 we should start testing beaker with puppet 4 when beaker is stable with puppet 4
15:10:13 ok. just wanted to see if I needed to start that process on my end.
15:10:18 thanks
15:10:41 dfisher: most people on rhel/ubuntu install puppet from the puppet labs repos, not the distro
15:10:48 i don't know what the story is for solaris
15:10:55 we have a native package for it
15:11:00 which includes 25 solaris resource types
15:11:10 which i desperately need to push back to puppetlabs
15:11:18 alas … no tests :(
15:11:19 +1 to have at least a separate job for puppet4
15:11:30 ok, so we will think about testing puppet4 when beaker is stable with puppet4
15:11:32 and what's the impact on the CI?
15:11:38 do we have some metrics?
15:11:48 paramite: could be an option, and maybe set the job experimental?
15:12:01 EmilienM, yup, exactly
15:12:01 sbadia: what kind of metrics?
15:12:29 resources of the CI are not infinite, no? :)
15:12:54 what's the impact of duplicate beaker jobs (3.x + 4.x) on each patch
15:13:03 sbadia: I can ask infra
15:13:09 thanks!
15:13:31 #action EmilienM to investigate a new experimental job to test beaker against puppet4
15:13:43 crinkle: where can I see if beaker is stable with puppet4?
15:13:50 or the other way around
15:14:18 EmilienM: i don't know how well it supports installing puppet 4 and the all-in-one package yet
15:14:32 so we'd have to consult the documentation or ask the devs
15:14:33 <_ody> I can check with PL QE.
15:14:40 thanks _ody
15:14:54 #action _ody consults PL QE to know more about beaker/puppet4
15:14:57 cool
15:15:00 anything else?
15:15:16 ok let's go ahead then
15:15:24 #topic Kilo release
15:15:37 a lot of people are asking :-)
15:15:48 #link status of kilo release: https://etherpad.openstack.org/p/puppet-kilo-release
15:16:02 we're still missing some patches, and also Keystone v3
15:16:22 I +2+A'd some patches, I'll update the pad
15:16:45 richm, crinkle: when the keystone patches are merged, can we do the release, or should we also wait until all puppet openstack modules support the new auth method?
15:16:57 we should go ahead when keystone is done
15:17:04 ok
15:17:36 there is not much to say about the kilo release, I would just ask reviewers to look at the etherpad and make sure patches are reviewed
15:18:12 #topic Zaqar module
15:18:16 Richard is not here
15:18:25 but I think things have moved, I approved the repo in governance
15:18:52 #link puppet-zaqar in governance: https://review.openstack.org/#/c/191946/
15:19:09 #link puppet-zaqar in os-infra: https://review.openstack.org/#/c/191942/
15:19:48 #topic Keystone v3 resource naming
15:19:51 richm: hello
15:19:54 hello
15:20:03 #link keystone v3 resource naming thread: http://lists.openstack.org/pipermail/openstack-dev/2015-June/067650.html
15:20:35 After going around and around with this - I think the right thing to do is to leave it up to the composition layer code to specify the full name of the resource
15:20:54 there is feedback on this thread
15:21:07 That is, puppet-keystone will _not_ try to "normalize" user/project names in keystone_user_role
15:21:08 (sorry, late)
15:21:30 richm: if the composition layer is backward compatible, I'm fine with that.
15:21:55 If the composition layer specifies username => 'glance', user_domain_name => 'services', then there had better be one, and only one, user named 'glance' in all domains
15:22:06 richm: people should not have to change their composition layer because of this feature, otherwise that means we don't do backward compatibility
15:22:11 right
15:22:32 Backwards compatibility is a high priority
15:22:42 awesome
15:22:56 richm: do you have any blockers you want to discuss now? (about keystone v3)
15:23:04 I do _not_ want to be harassed by an angry mob for breaking their setups
15:23:24 * mfisch puts down his pitchfork
15:23:50 EmilienM: I'm still trying to figure out how to make self.instances and self.prefetch work in all cases
15:24:17 richm: ok, can you help us by giving some patch URLs?
15:24:22 I think I'm close - I should have another round of reviews later today
15:24:35 let me test what I have first
15:24:41 richm: good work. I'll catch up on beaker tests when you're done
15:24:41 in case you want to try something out
15:24:56 anything else on keystone v3?
15:25:28 seems like not
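[Editor's note: a minimal sketch of the keystone_user_role behavior discussed above. The domain-qualified title syntax shown in the second resource is an assumption based on the ML thread, not the final merged interface.]

```puppet
# Old-style title: stays valid for backward compatibility, as long as
# 'glance' is unique across all domains, since puppet-keystone will
# not try to normalize the name.
keystone_user_role { 'glance@services':
  ensure => present,
  roles  => ['admin'],
}

# The composition layer spells out the full, domain-qualified name
# itself (hypothetical 'name::domain' syntax from the thread).
keystone_user_role { 'glance::service_domain@services::service_domain':
  ensure => present,
  roles  => ['admin'],
}
```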
15:25:34 #topic Move usage questions to openstack-operators mailing list
15:25:34 no
15:25:45 So Richard ran this thread
15:26:00 mfisch: what do you think ^ ?
15:26:09 and all other operators here
15:26:11 I'm +1 on it
15:26:21 I think encouraging the tag use would be nice
15:26:29 also +1 because it's consistent with how OpenStack does it today
15:26:40 does the ops list officially support tags?
15:26:41 <_ody> +1, seems logical
15:26:49 if anyone has an objection, please raise your hand
15:26:52 mfisch: good question
15:27:07 maybe we need to ask TomF?
15:27:39 mfisch: do you want to take that point?
15:28:07 sure
15:28:11 #action mfisch to ask TomF about usage of tags on the Operators ML
15:28:27 #topic Open Discussion, Bug and Review triage
15:28:40 #link https://bugs.launchpad.net/puppet-nova/+bug/1467667
15:28:41 Launchpad bug 1467667 in puppet-nova "openstack-puppet modules should support configuring RabbitMQ heartbeat" [Undecided,In progress] - Assigned to Mike Dorman (mdorman-m)
15:28:42 mdorman: o/
15:29:18 so the main question on this is what mfisch brought up on https://review.openstack.org/#/c/194399/
15:29:32 we are facing the situation we were talking about at the summit
15:29:33 with oslo
15:29:34 :)
15:29:41 the default value for rabbit_heartbeat_timeout_threshold changed between oslo.messaging 1.10.0 and 1.11.0
15:29:46 this is an excellent question
15:29:48 old default was 60, new default is 0
15:30:04 my opinion is we make it default to 0 in puppet, since that's what it is going forward
15:30:11 does 0 mean off?
15:30:12 mdorman: if you patch nova, you have to watch which version of oslo it uses, and use its default value
15:30:28 mdorman: at least that's how we proposed to solve this issue in VC
15:30:47 mfisch: 0 does mean off. but operators already using the feature (by default) are eventually going to have to do some reconfiguration anyway, b/c once you go to oslo.messaging 1.11.0, the default is off
15:31:20 EmilienM: i don't think i was there for that convo. not opposed to tracking the version of oslo.messaging, but unsure what the best way to do that is
15:31:26 mdorman: I think we need to use openstack defaults
15:31:38 and let operators change them
15:31:47 EmilienM: right, agreed, but _which_ default is the question
15:31:47 that's my vote
15:31:52 because if we start to change them, operators will hate us one day
15:31:55 from 1.10.0 and earlier, or 1.11.0 and later?
15:32:33 mdorman: the version of the project you're patching
15:33:02 we will probably have inconsistent default params across modules but well, if openstack is not consistent in oslo usage, we can't fix it.
15:33:17 if the default changed in openstack, there might be a reason in the code
15:33:22 well kilo keystone has oslo.messaging>=1.8.0, so for kilo you could conceivably have either version (new or old default)
15:33:27 and puppet does not deal with that
15:33:35 oh I see
15:33:54 mdorman: well, in that case I'll believe your OPS experience
15:33:58 this feature did not exist before kilo, so we are not talking about having to backport it or anything like that.
15:34:06 so where do the defaults come from?
15:34:06 no we won't backport that
15:34:15 oslo.messaging, let me find the link real quick
15:34:19 the config is in keystone.conf (for example), but the default is really in the library
15:34:25 we should also see what packaging provides :-)
15:34:35 yeah that's a good point.
15:34:35 in ubuntu/fedora at least
15:34:41 https://github.com/openstack/oslo.messaging/blob/1.10.0/oslo_messaging/_drivers/impl_rabbit.py#L125-L126 1.10.0 version
15:34:51 mdorman: can you take the action?
15:34:59 https://github.com/openstack/oslo.messaging/blob/1.11.0/oslo_messaging/_drivers/impl_rabbit.py#L126-L127 1.11.0 version
15:35:27 EmilienM: yeah i can figure out what packages have. i suspect once all the inter-project deps are worked out, we'll end up with a newer version of oslo.messaging anyway
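[Editor's note: a minimal sketch of the option being debated, using a hypothetical wrapper class; the real patch (194399) wires the parameters through each module's existing classes.]

```puppet
# A default of 0 (heartbeat off) matches oslo.messaging >= 1.11.0;
# operators on 1.10.0 who relied on the old default of 60 would need
# to set the threshold explicitly.
class nova::rabbit_heartbeat (
  $heartbeat_timeout_threshold = 0,
  $heartbeat_rate              = 2,
) {
  nova_config {
    'oslo_messaging_rabbit/heartbeat_timeout_threshold':
      value => $heartbeat_timeout_threshold;
    'oslo_messaging_rabbit/heartbeat_rate':
      value => $heartbeat_rate;
  }
}
```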
15:35:32 #action mdorman follows up https://review.openstack.org/#/c/194399/ by investigating the oslo.messaging version shipped by packaging and setting the right default values
15:35:35 mdorman: thanks
15:35:47 let's keep up to date on this via the ML
15:35:50 this is a very important patch
15:36:03 i'll research and then post to the ML
15:36:06 mfisch: +1
15:36:12 cool, we have a plan
15:36:16 #link keystone.py patch https://review.openstack.org/#/c/193679/
15:36:19 mfisch: o/
15:36:32 yeah
15:36:38 I think the problem is pretty clear
15:36:44 this is not a complicated patch but I wanted to let everyone know that we still need to sync this file
15:36:47 at least for this cycle
15:36:52 hopefully it will start getting packaged
15:36:59 1/ we have to continue to ship it until Ubuntu fixes it (ship only on ubuntu systems)
15:37:06 Also wanted to point out that keystone.py is also broken for icehouse - have a review for it here: https://review.openstack.org/#/c/191212/
15:37:12 2/ make sure with jamespage it's WIP on his side
15:37:33 dmsimard: good point, it would be great to get it merged
15:37:52 I spoke to zul about it, I can ping james
15:38:14 is that an action? Probably talking to chuck was enough
15:38:39 mfisch: cool, your patch is a good link to keep
15:38:54 #action mfisch follows up the keystone.py issue on https://review.openstack.org/#/c/193679/
15:39:14 #link nova/rbd: https://review.openstack.org/#/c/119093/
15:39:15 dmsimard: o/
15:39:34 dmsimard: did you just sync that in from the icehouse branch?
15:39:43 mfisch: yes
15:39:52 ok merged
15:40:07 Would love to have the RBD ephemeral toggle merged is all, been outstanding since 2014 :)
15:40:25 Otherwise nova::compute::rbd assumes Ceph is used for both Cinder volumes AND ephemeral storage
15:40:48 I've participated in the patch, I'm not sure I can +2
15:40:50 So if you use nova::compute::rbd with actual local storage, local-storage VMs will fail miserably at spawning
15:40:57 dmsimard: +1
15:41:26 I don't see any blocker on this one
15:41:33 I'm using that patchset in our deployments, works well for us.
15:41:55 cool, good to know
15:42:13 dmsimard: do you want to talk about "Lack of a proper Cinder provider for volume types, extra specs, qos settings"?
15:42:45 EmilienM: Sure, just wanted to have a general discussion around the lack of providers for volume types, extra specs, qos and the like in Cinder
15:42:49 dmsimard: do you mean to replace our wonderful execs by puppet providers? :-)
15:43:11 EmilienM: Ah, yes, indeed, https://github.com/openstack/puppet-cinder/blob/master/manifests/type_set.pp is not super cool
15:43:16 yes +1
15:43:32 I think nobody will reject this kind of patch
15:43:48 which will make our module more stable & Puppetish than using Exec for that.
15:43:49 yes, no one likes it. I wish I had the time to fix it =/
15:43:50 I tried to look at doing something with Cinderclient but it's too much work, it's all prettytables. Openstackclient provides csv output and could provide a means to do something
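[Editor's note: a sketch of what a native replacement for the Exec-based cinder::type_set could look like from the manifest side. 'cinder_type' and its 'properties' attribute are a hypothetical interface here; the backing provider would parse openstackclient output (e.g. `openstack volume type list -f csv`) via the openstacklib base provider.]

```puppet
# Hypothetical native type: idempotent, no Exec resources needed to
# create a volume type or set its extra specs.
cinder_type { 'premium-ssd':
  ensure     => present,
  properties => ['volume_backend_name=ssd-backend'],
}
```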
15:44:18 I just wanted to have your opinion on what module has the "best" implementation of a provider using Openstackclient, I'll try and take example from it
15:44:21 dmsimard: osclient is the way to go, consistent with what we do in keystone already
15:44:30 dmsimard: and you have the util code in openstacklib
15:44:55 xarses: No hate, we've had to do something similar in our composition layer :(
15:44:59 dmsimard: check out how the keystone providers are done as well as https://review.openstack.org/#/c/172580/ for examples
15:44:59 dmsimard: https://github.com/openstack/puppet-openstacklib/blob/master/lib/puppet/provider/openstack.rb
15:45:36 EmilienM: I have one more thing wrt keystone when this topic is done
15:45:46 mfisch: ack
15:45:56 crinkle, EmilienM: Thanks, I'll check it out. This is all in my free time so no ETA :)
15:46:00 cool
15:46:04 mfisch: go ahead
15:46:25 I just want to close out the topic, so I filed an Ubuntu bug: https://bugs.launchpad.net/ubuntu/+source/keystone/+bug/1467985
15:46:27 Launchpad bug 1467985 in keystone (Ubuntu) "ubuntu package needs to ship httpd/keystone.py" [Undecided,New]
15:46:31 that will track the status
15:46:43 If I don't get traction I'll file a support ticket
15:46:55 nice
15:47:08 #link bug to track keystone.py missing in ubuntu packaging https://bugs.launchpad.net/ubuntu/+source/keystone/+bug/1467985
15:47:46 crinkle: I'm curious, can you give us a status on the work you're doing with the integration module?
15:47:58 EmilienM: waiting on the repo to be created
15:47:59 and if you have any blocker
15:48:15 cool, after that I think we will be able to move fast
15:48:21 i think so
15:48:28 crinkle: thanks for this work
15:48:32 yup
15:48:52 can we discuss puppet-monasca?
15:48:54 we're done with the agenda; does anyone have anything else to add? we have 10 min left
15:49:05 mfisch: go ahead
15:49:08 yeah, I want crinkle to use her infra juju to get this merged: https://review.openstack.org/#/c/194263/
15:49:16 that adds puppet-manager-core to monasca
15:49:25 i have no special powers
15:49:30 mfedosin: just ping infra folks
15:49:34 they are quite responsive
15:49:49 sure
15:50:00 I'll give it a couple more days before i bug people
15:50:19 they usually review every day so I'm confident it'll be merged this week.
15:50:29 anything else folks?
15:50:34 nope
15:50:39 good for me
15:50:49 That's All Folks!
15:50:54 have a great day/evening
15:50:56 #endmeeting