17:02:03 <tinwood> #startmeeting charms
17:02:03 <openstack> Meeting started Mon Aug  7 17:02:03 2017 UTC and is due to finish in 60 minutes.  The chair is tinwood. Information about MeetBot at http://wiki.debian.org/MeetBot.
17:02:04 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
17:02:06 <openstack> The meeting name has been set to 'charms'
17:02:22 <tinwood> hi everyone, welcome to the biweekly OpenStack charms meeting.
17:02:28 <tinwood> okay, let's kick off.
17:02:41 <tinwood> #topic Review ACTION points from previous meeting
17:03:04 <tinwood> there's nothing in the agenda on this topic: https://etherpad.openstack.org/p/openstack-charms-weekly-meeting-20170807
17:03:25 <tinwood> Does anyone have any outstanding actions they want to bring up?
17:03:55 <tinwood> okay, in that case ...
17:04:05 <tinwood> #topic State of Development for next Charm Release
17:04:24 <tinwood> So, it's getting close to the next release:
17:04:33 <tinwood> Aug 24 - OpenStack Charms Feature Freeze
17:04:33 <tinwood> Sept 1 - Upstream OpenStack Pike Release
17:04:33 <tinwood> Sept 7 - OpenStack Charms Release
17:04:59 <tinwood> beisner, as you're here, any thoughts on this? :)
17:05:11 * beisner likes
17:05:56 <beisner> We need to make sure high/crit bugs have traction now in order to make freeze.
17:06:21 <gnuoy> Is gate running pike tests atm?
17:06:29 <jamespage> not yet
17:06:31 <tinwood> Isn't the freeze a feature freeze, i.e. we can continue on any high-priority bugs?
17:06:34 <gnuoy> ack
17:06:36 <beisner> It is not yet.
17:06:39 <jamespage> pike b3 is in -staging
17:06:43 <beisner> yes tinwood we can still do crit bugfixes in freeze
17:06:56 <beisner> we just have to understand the impact to the test/revalidation matrix each time something lands during freeze.
17:07:42 <tinwood> jamespage, another subtopic here is the gnocchi charm.  Any updates for the meeting?
17:08:04 <jamespage> aiming to get all existing charm touches done before wednesday
17:08:19 <jamespage> most things are ready - need to retro my ceilometer changes but I think they are nearly there
17:08:31 <tinwood> excellent!
17:08:35 <jamespage> only thing I've not managed to touch is the radosgw support
17:08:55 <jamespage> but that was kinda blocked on the service-discovery spec which remains un-implemented
17:09:13 <jamespage> so I'll suggest we defer that for this cycle
17:09:27 <jamespage> project-config and governance changes are up for gnocchi
17:09:27 <gnuoy> un-implemented in charms or un-implemented upstream in radosgw ?
17:09:32 <tinwood> so we'll leave it as an early-release charm?
17:09:37 <jamespage> gnuoy: un-implemented in the charms
17:09:45 <gnuoy> ack
17:10:08 <jamespage> for reference - http://specs.openstack.org/openstack/charm-specs/specs/pike/approved/service-discovery.html
17:10:22 <tinwood> thanks jamespage
17:10:40 <tinwood> The other sub-topic is snap integration.
17:11:03 <thedac> snap integration has been put on hold for this release cycle
17:11:06 <tinwood> Any updates there w.r.t. the charm release?
17:11:10 <thedac> One more for state of development:
17:11:12 <thedac> Dual-stack IPv4 and IPv6 support is making progress. Working out the SSL bits this week.
17:11:40 <thedac> I am reviewing a jamespage charmhelpers change today which will be related
17:11:56 <jamespage> thedac: ta
17:12:05 <tinwood> thanks for the update, thedac
17:12:12 <jamespage> haproxy bits are as well - backend/frontend mapping needs some work
17:12:12 <coreycb> i'm in the midst of dropping the cinder v1 api for pike.  still testing b3 of pike with the charms so there may be other updates needed.
17:12:29 <jamespage> coreycb: that ceph fix should be through soon
17:13:05 <coreycb> jamespage: great. i'm backporting ceph now.  seems like that may finish building by the end of august. :)
17:13:26 <tinwood> Also, from me, the worker multiplier and ipv4 memcache are now in the reactive charms.
17:13:26 <jamespage> yeah it's slow - I have to limit parallel execution otherwise lp builders run out of ram
17:13:35 <jamespage> tinwood: ooo good
17:13:40 <jamespage> options.workers  right?
17:13:44 <tinwood> yup
17:14:10 <tinwood> and a more extensive options.<something> (I'd have to look it up) for the apache worker config.
17:14:17 <jamespage> tinwood: thanks for picking that one up
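[Editor's note: for context, a minimal, hypothetical sketch of how a worker-multiplier config option can surface to templates as options.workers in a charms.openstack-style adapter. The property name follows the convention mentioned above, but the implementation here is illustrative, not the actual charm code.]

```python
# Hypothetical sketch, not the actual charms.openstack implementation:
# derive a worker count from the 'worker-multiplier' charm config option
# so templates can render it as {{ options.workers }}.
import multiprocessing

from charmhelpers.core import hookenv


class WorkerOptions:
    """Illustrative adapter exposing a computed worker count."""

    @property
    def workers(self):
        # Fall back to a conservative multiplier if the option is unset.
        multiplier = hookenv.config('worker-multiplier') or 0.25
        return max(int(multiprocessing.cpu_count() * multiplier), 1)
```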
17:15:00 <tinwood> And, hopefully, landing this week, some refinements in designate in how much 'work' it does during an update-status hook.
17:15:24 <tinwood> Any more for this topic?
17:15:29 <jamespage> a write up of that would be good for those that follow
17:15:36 <jamespage> so we don't re-make the same errors...
17:15:56 <tinwood> Blog post or doc page?
17:16:15 <jamespage> blog would be nice
17:16:22 <tinwood> okay :)
17:16:22 <beisner> doc page with a blog post pointing to it would be good imo
17:16:49 <tinwood> #action tinwood write up how to 'quieten' down a reactive charm during update-status, et al.
17:16:59 <beisner> that way we have some doc thing under rev control, and a blog post where ppl can get the extra fluff/context.
17:17:04 <beisner> :-)
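[Editor's note: ahead of that write-up, a hedged sketch of the general technique. Reactive handlers can run on every hook, including the frequent update-status, so expensive work can be short-circuited by checking the hook name. The handler, flag names, and do_expensive_sync below are illustrative, not designate's actual code.]

```python
# Hypothetical sketch: keep a reactive charm quiet during update-status by
# bailing out of expensive handlers when that hook is the one running.
from charmhelpers.core import hookenv
from charms.reactive import set_flag, when, when_not


def do_expensive_sync():
    """Placeholder for real work, e.g. syncing records against an API."""


@when('config.changed')
@when_not('charm.synced')
def sync_backend():
    # update-status fires every few minutes; skip the heavy lifting then
    # and let a 'real' hook pick it up later.
    if hookenv.hook_name() == 'update-status':
        return
    do_expensive_sync()
    set_flag('charm.synced')
```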
17:17:21 <gnuoy> Is there any science on how widely deployed the designate charm is ooi?
17:17:26 <tinwood> Okay, I'll follow that lead.
17:17:56 <tinwood> gnuoy, it's being tested by CPE at one of the customer's sites at the moment.
17:18:22 <gnuoy> ack, people are using it, cool
17:18:36 <tinwood> gnuoy, indeed, and finding bugs :)
17:18:46 <gnuoy> there are no bugs in that charm
17:18:50 <gnuoy> :)
17:18:54 <thedac> :)
17:18:57 <tinwood> That's right; I'm not fixing them.
17:19:00 <tinwood> :)
17:19:23 <tinwood> okay ... any more, as otherwise we're on to bugs ...
17:19:36 <tinwood> #topic High Priority Bugs
17:19:48 <tinwood> For review: https://tinyurl.com/osc-high-priority
17:20:01 <tinwood> and https://tinyurl.com/osc-critical-bugs
17:20:49 <tinwood> TWO new criticals on ceph/ceph-mon?
17:21:03 <tinwood> Plus, a very old hacluster bug?
17:21:19 <jamespage> tinwood: yeah I raised those today - needed for pike UCA
17:21:26 <jamespage> otherwise you don't get stats of any sort
17:21:34 <jamespage> tinwood: fairly trivial changes to the charms tbh
17:21:39 <jamespage> I'll prob get to those
17:21:53 <tinwood> Okay, thanks for the clarification! :)
17:21:54 <gnuoy> hmm, that hacluster bug seems strangely familiar
17:22:15 <gnuoy> I'll ping the guy who raised it
17:22:18 <jamespage> I disagree on that being critical so I've dropped it to high
17:22:49 <gnuoy> +1
17:22:54 <thedac> wolsen discussed tackling that bug a while back.
17:23:15 <jamespage> yeah
17:23:20 <jamespage> it's annoying
17:23:24 <tinwood> jamespage, oddly, is it still on the old hacluster charms project? https://bugs.launchpad.net/charm-hacluster/+bug/1478980
17:23:24 <openstack> Launchpad bug 1478980 in OpenStack hacluster charm "If the principle updates a resource parameter of an already configured resource hacluster ignores it" [High,Triaged] - Assigned to Billy Olsen (billy-olsen)
17:23:33 <jamespage> nope
17:23:39 <jamespage> charm-hacluster is right
17:23:53 <tinwood> Oh, yeah, it says 'invalid'
17:24:01 <jamespage> anyway, it's good to switch focus to bugs as features complete, folks
17:24:13 <tinwood> Absolutely.
17:24:36 <tinwood> we're running short on time, so can we move on?
17:24:57 <tinwood> #topic Openstack Events
17:25:11 <tinwood> PTG in September: any updates?
17:25:31 <jamespage> on my list for this week to get the etherpad circulated for ideas for the room
17:25:43 <jamespage> probably fewer design conversations, more sprinting on features early
17:25:47 <tinwood> Would you like an action?
17:25:51 <jamespage> I'd like to nail the service-discovery one
17:25:58 <jamespage> tinwood: nah I'll do it anyway
17:26:03 <tinwood> 'kk
17:26:29 <tinwood> So, finally:
17:26:37 <tinwood> #topic Open Discussion
17:26:41 <tinwood> the floor is open:
17:26:57 <tinwood> (also good to see gnuoy) :)
17:27:14 <gnuoy> thanks :)
17:27:46 <tinwood> Any comments for Open Discussion?
17:28:47 <wolsen> yeah
17:29:06 <wolsen> the barbican charm - it's not been promulgated into the general space, still in the openstack-charmers space - so it's beta?
17:29:42 <tinwood> wolsen, I'll take that ..
17:30:11 <tinwood> So the barbican charm is still marked 'experimental' as it doesn't really have a back end.
17:30:31 <wolsen> ah
17:30:33 <tinwood> Although it's looking like there might be a hashicorp vault one coming in Q
17:30:37 <wolsen> ok
17:30:39 <tinwood> (for production)
17:30:49 <beisner> agree - i think we decided to wait to promulgate until there was an actual hardware story to test with.
17:30:53 <tinwood> However, we definitely want to know if people do want to use it.
17:30:56 <jamespage> have to drop - thanks for hosting tinwood
17:31:03 <tinwood> np jamespage
17:31:07 <beisner> indeed, thanks tinwood
17:31:08 <wolsen> I think that makes sense - I've seen a request to use encrypted cinder volumes today
17:31:15 <wolsen> which would require the barbican charm
17:31:30 <tinwood> It either needs an HVM or vault (queens)
17:32:19 <gnuoy> s/HVM/HSM/ ?
17:32:20 <admcleod_> hsm
17:32:23 <tinwood> wolsen, it's also relatively easy to work up a config backend subordinate for barbican -- just need the usecase
17:32:29 * tinwood yes, HSM
17:32:39 <wolsen> tinwood: makes sense
17:33:09 <wolsen> I was just gathering the current state of it in order to understand the request as it relates to the use case presented by the user (encrypted volumes)
17:33:48 <tinwood> wolsen, okay; it'd probably need a little work on the HA side too - but most of that's already supported in charms.openstack.
17:34:04 <wolsen> yep
17:34:06 <wolsen> makes sense
17:34:36 <tinwood> Excellent; would love to get that charm out into the wild :)
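[Editor's note: for readers unfamiliar with the HA support mentioned above, a hypothetical sketch of how a charms.openstack charm class typically opts into it. The class and attribute names follow charms.openstack conventions, but the values are illustrative rather than the real barbican charm.]

```python
# Hypothetical sketch: charms.openstack's HAOpenStackCharm base class
# carries most of the hacluster/VIP plumbing; a charm largely declares
# what it wants managed. Values below are illustrative.
import charms_openstack.charm


class BarbicanCharm(charms_openstack.charm.HAOpenStackCharm):
    service_name = name = 'barbican'
    packages = ['barbican-api', 'barbican-worker']
    services = ['barbican-worker', 'apache2']
    default_service = 'barbican-worker'
    # Resources the hacluster relation should manage for HA deployments.
    ha_resources = ['vips', 'haproxy']
```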
17:35:03 <tinwood> I think we're slightly over time, but any more?
17:35:12 <wolsen> nope, all good here - thank you
17:35:26 <tinwood> In that case: thanks everybody for coming and for your contributions.  And see you at the next one: details are at https://etherpad.openstack.org/p/openstack-charms-weekly-meeting
17:35:33 <tinwood> #endmeeting