16:01:26 <mnaser> #startmeeting openstack_ansible_meeting
16:01:27 <openstack> Meeting started Tue Sep 25 16:01:26 2018 UTC and is due to finish in 60 minutes.  The chair is mnaser. Information about MeetBot at http://wiki.debian.org/MeetBot.
16:01:28 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
16:01:29 <mnaser> round 2
16:01:30 <openstack> The meeting name has been set to 'openstack_ansible_meeting'
16:01:32 <mnaser> #topic rollcall
16:02:13 <mnaser> o/ ? :)
16:02:18 <hwoarang> o/
16:02:21 <guilhermesp> o/
16:02:40 <odyssey4me> o/
16:02:49 <openstackgerrit> weizj proposed openstack/openstack-ansible-os_tacker master: Remove the unnecessary vao  https://review.openstack.org/605117
16:03:03 <mnaser> ok cool lets get rolling
16:03:11 <mnaser> #topic Last week highlights
16:03:19 <mnaser> i see some stale items there, i think the only relevant one is evrardjp's
16:03:24 <mnaser> evrardjp: Making Ocata "extended maintenance" (This would add ocata-em tag, not closing the branch). See also: https://governance.openstack.org/tc/resolutions/20180301-stable-branch-eol.html . No real value for us, what do we point to for upstream repos (IMO, the ocata-em upstream tags when available, else the stable branch it is attached to).
16:03:39 <openstackgerrit> weizj proposed openstack/openstack-ansible-os_tacker master: Remove the unnecessary verbose defined  https://review.openstack.org/605117
16:03:54 <mnaser> evrardjp: would you like to expand on that?
16:04:47 <evrardjp> yes
16:05:13 <evrardjp> well I guess we discussed this today with odyssey4me and it was discussed during PTG (sorry to have missed that conversation)
16:05:36 <_d34dh0r53_> o/
16:05:38 <evrardjp> There are discussions on the ML about moving projects to EM for ocata
16:05:39 <mnaser> it was a very brief discussion
16:06:01 <evrardjp> I suggest we move said projects to EM as soon as possible, instead of continuing to track stable/ocata
16:06:31 <spotz> For the ops side of the house are we going to be a tag vs branch?
16:06:39 <mnaser> it looks like most maintainers/users dont seem to care
16:06:46 <evrardjp> indeed
16:07:13 <evrardjp> so the advantage of doing so is that we have a reference that's stable and meaningful - never bumped until someone definitely wants to
16:07:26 <mnaser> so do we just tag ocata-em and call it a day?
16:07:40 <evrardjp> and it won't break anytime should EOL appear and the branch be deleted
16:07:42 <odyssey4me> For me, I'd just like to reach a stage where we update the docs to indicate the status (and provide reference to what it means) - and stop releasing it.
16:08:10 <evrardjp> mnaser: oh you mean just mark OSA as EM?
16:08:15 <spotz> Yeah my concern is folks looking for the wrong location so definitely want to make sure our docs are good
16:08:17 <evrardjp> and not care about the branches?
16:08:37 <mnaser> no, tag ocata-em and run jobs against ocata-em only
16:08:42 <mnaser> it will break, eventually, inevitably
16:08:54 <mnaser> my question is
16:08:59 <mnaser> (i should know this answer)
16:09:03 <odyssey4me> It's nice to have the branch there for those of us who'll still need to do patch updates. We have a ton of newton forks thanks to the EOL, and had to do a ton of tooling changes to switch where things pointed.
16:09:05 <evrardjp> you mean for upstream projects or OSA?
16:09:15 <mnaser> if we don't follow stable policy
16:09:21 <mnaser> it's kinda up to us to do what we want
16:09:30 <mnaser> i dont think stable policy works well or makes sense for a deployment project
16:09:40 <odyssey4me> Agreed, which is why we don't follow it.
16:09:41 <mnaser> i really hope no one is doing ocata greenfield deployments :X
16:09:41 <evrardjp> agreed mnaser
16:09:59 <openstackgerrit> Merged openstack/openstack-ansible-ops master: Change VM definition in deploy-vm mnaio playbook  https://review.openstack.org/602832
16:09:59 <evrardjp> mnaser: there are ppl doing it though, but we can't do anything for them :p
16:10:00 <openstackgerrit> Merged openstack/openstack-ansible-ops master: Use group vars to reduce redundancy in host vars  https://review.openstack.org/602834
16:10:01 <openstackgerrit> Merged openstack/openstack-ansible-ops master: Enable setting mnaio disk size by pxe server group  https://review.openstack.org/602835
16:10:01 <openstackgerrit> Merged openstack/openstack-ansible-ops master: Make root partition size configurable  https://review.openstack.org/602859
16:10:02 <evrardjp> ahahah
16:10:42 <odyssey4me> The integrated build is pinned, so we don't need to do anything there. We can leave the roles as-is and if they break, we can fix them as needed.
16:11:02 <odyssey4me> I'd suggest removing the periodic tests though.
16:11:05 <evrardjp> ok I will follow odyssey4me /mnaser 's proposition: Next release will be marked as EM instead of usual tag, and the rest stays the same. If someone wants to fix, (s)he can
16:11:26 <evrardjp> I will clean things up: removing periodic testing -- changing the version name
16:11:35 <evrardjp> marking jobs as non voting
16:11:40 <spotz> Ok and give me a heads up and I'll try to get the docs patch up at the same time
16:11:46 <evrardjp> we can discuss things in the review
16:12:04 <evrardjp> spotz: you can push a patch already without the date I'd say.
16:12:20 <odyssey4me> well, as for non-voting jobs - we know that no-one bothers to check them, so why not just remove them all and make patches just do doc builds and nothing more?
16:12:32 <mnaser> i am in favour of dropping nv jobs.
16:12:36 <mnaser> lets not waste resources
16:12:37 <mnaser> maybe just linters
16:12:45 <odyssey4me> Anyone using an EM project should be doing their own testing.
16:12:50 <mnaser> ++
16:12:58 <mnaser> well actually
16:13:01 <mnaser> things should still be backported so
16:13:02 <evrardjp> that should be written in a release note then
16:13:04 <mnaser> patches should be good
16:13:16 <evrardjp> maybe we keep the testing up until we can't?
16:13:22 <evrardjp> the branch would still be up so it's fine
16:13:26 <hwoarang> well 'em' gives the impression that someone maintains it. if you drop the jobs, then things start failing so the 'maintenance' part isn't really accurate anymore
16:13:37 <mnaser> drop the testing till we cant
16:13:37 <odyssey4me> yeah, the EM should definitely include a release note saying that it's EM and therefore no longer actively tested or developed
16:13:48 <mnaser> means someone has to go out and write a patch to drop the jobs
16:13:50 <mnaser> when they break
16:14:13 <odyssey4me> yeah, I'm fine with leaving them as-is and removing them piecemeal as they break
16:14:14 <mnaser> i dunno
16:14:25 <hwoarang> there is no good reason why things start failing when the openstack components are pinned
16:14:25 <odyssey4me> I've already done some :p
16:14:26 <evrardjp> hwoarang: that's kinda the idea of EM though: until we can't merge things anymore, we keep branch open, else we close the branch
16:14:34 <mnaser> hwoarang: we've seen plenty of things happen
16:14:42 <mnaser> external dependencies changing url (ms + github stuff)
16:14:46 <mnaser> mariadb changing urls
16:14:49 <mnaser> aaaaaall sort of things happen
16:14:51 <hwoarang> yeah but if the branch is open you need some jobs to get an idea of how terrible the situation is :)
16:15:04 <spotz> evrardjp: We've got docs where it mentions the location which is why if the branch goes away and we become a tag I'd rather wait
16:15:04 <evrardjp> hwoarang: :)
16:15:13 <mnaser> well see the thing is
16:15:17 <mnaser> we dont follow stable policy
16:15:22 <evrardjp> that's true
16:15:27 <mnaser> so we have zero obligation of doing this
16:15:32 <evrardjp> indeed
16:15:37 <mnaser> my question is: is there someone here willing to step up to handle this
16:15:44 <evrardjp> we can close the branch right now if you want : p
16:15:45 <mnaser> because it provides value for their organization
16:16:07 <mnaser> if someone steps up to take care of the jobs, tagging em, etc, i'm all for it
16:16:22 <mnaser> but i dont think a deployment project being in EM makes a whole lot of sense, it does for the actual openstack services
16:16:23 <evrardjp> I would prefer not to work on this anymore :p
16:16:37 <hwoarang> then lets not do it :)
16:16:41 <mnaser> we could draft up a small email to ML
16:16:46 <mnaser> and ask if someone wants to step up
16:17:05 <evrardjp> I am not in agreement there -- deployers probably want their branch to live FOREVER
16:17:20 <evrardjp> but they will never get an incentive to upgrade -- which is a bad message :p
16:17:30 <mnaser> against sending an email to ML?
16:17:37 <evrardjp> killing the branch is the ultimate incentive
16:17:41 <mnaser> if someone wants it to live forever, they can pick up the work :-)
16:17:41 <evrardjp> oh no I am for it : )
16:17:51 <evrardjp> mnaser: well that was the idea of EM :D
16:17:55 <odyssey4me> mnaser: we (rax) need it, and will handle it
16:17:55 <evrardjp> anyway
16:18:04 <mnaser> well there, no need of an ML post
16:18:12 <evrardjp> we spent much time for this, sorry for taking the time
16:18:20 <mnaser> nah, it's fine.
16:18:33 <spotz> Good convos are worth the time
16:18:34 <odyssey4me> we only need a small subset of the roles
16:18:52 <mnaser> in that case, we can go ahead with EM and given odyssey4me (or someone at rax) will be leading it then i'll leave it up to him/them to decide how they'd like to maintain it
16:18:56 <logan-> i think we could leave gating in place for now and remove nv and periodics
16:19:01 <mnaser> (as long as we all agree on it)
16:19:02 <evrardjp> at least it's discussed and we know the interest of people is there -- we'll clarify the situation outside this meeting, summarize in the ML and take actions.
16:19:02 <odyssey4me> but we would prefer to keep it upstream rather than have to manage a downstream forks, right d34dh0r53 prometheanfire cjloader ?
16:19:25 <cjloader> yes, upstream is better
16:19:26 <evrardjp> mnaser: agreed
16:19:33 <d34dh0r53> absolutely
16:19:41 <prometheanfire> always
16:19:54 <mnaser> awesome
16:19:58 <evrardjp> understood -- so no EOL -- a path towards EM is better :)
16:20:09 <evrardjp> or no EM but keeping things okayish
16:20:15 <mnaser> so in that case, it's your call to see how best we deal with EM (and whether you want to maintain jobs or not)
16:20:23 <mnaser> as long as we have some form of ownership, i'm happy to have it
16:20:54 <mnaser> (shared responsibility is still a thing, we all help each other land stuff, but the bulk of removing any jobs or whatnot falls to the owners)
16:21:55 <mnaser> cool
16:21:59 <mnaser> on to other fun stuff
16:22:09 <mnaser> any other comments about this?
16:22:56 <evrardjp> none
16:22:56 <mnaser> i guess not!
16:23:06 <mnaser> #topic bug triage
16:23:18 <mnaser> #link https://bugs.launchpad.net/openstack-ansible/+bug/1793781
16:23:19 <openstack> Launchpad bug 1793781 in openstack-ansible "Queens -> Rocky Upgrade breaks cinder/glance" [Undecided,New]
16:24:02 <hwoarang> based on last comment seems like a user config issue?
16:24:20 <mnaser> indeed
16:24:25 <spotz> Yeah so DNF
16:24:28 <spotz> or invalid
16:24:52 <mnaser> quorum on invalid?
16:25:02 <hwoarang> ok
16:25:22 <mnaser> #link https://bugs.launchpad.net/openstack-ansible/+bug/1793389
16:25:22 <openstack> Launchpad bug 1793389 in openstack-ansible "Upgrade to Ocata: Keystone Intermittent Missing 'options' Key" [Undecided,New] - Assigned to Alex Redinger (rexredinger)
16:26:01 <mnaser> hmm
16:26:04 <mnaser> looks like they pushed a fix
16:26:07 <mnaser> but nothing on master
16:26:22 <mnaser> is alex on irc?
16:26:30 <evrardjp> EM on ocata, it's time:p
16:26:32 <evrardjp> hahah
16:36:44 <mnaser> its a patch that went straight to stable branches
16:26:55 <mnaser> which means it will regress
16:27:22 <hwoarang> true. we should ask for proper backports
16:27:23 <mnaser> d34dh0r53: https://review.openstack.org/#/c/604846/ mind taking off the +2 here because there is no master patch?
16:27:39 <evrardjp> it started in queens apparently: https://review.openstack.org/#/c/604804/1
16:27:54 <mnaser> and that looks like a keystone bug too
16:27:54 <spotz> Well unless there's no issue with master and it is only a backport?
16:27:59 <d34dh0r53> my bad, should have looked at that closer
16:28:10 <mnaser> no its an upgrade issue with keystone
16:28:18 <evrardjp> well master is different now
16:28:29 <evrardjp> but it doesn't prevent a master implementation
16:29:37 <mnaser> confirmed/medium and ask for an impl. in master?
16:29:40 <evrardjp> also it's very weird: why only the service tenant, and not the other keystone commands?
16:29:50 <mnaser> it looks like a keystone bug to me honestly
16:30:17 <mnaser> im tempted to put this down as invalid
16:30:19 <evrardjp> or a conf issue
16:30:21 <mnaser> OH
16:30:24 <mnaser> you know what this might be?!
16:30:29 <mnaser> memcache not being flushed
16:30:30 <mnaser> in the upgrade
16:30:44 <mnaser> so the dict it pulls is invalid
16:30:45 <evrardjp> mmm
16:30:51 <evrardjp> we have a restart in it
16:31:04 <mnaser> not during the keystone play
16:31:16 <evrardjp> oh wait
16:31:31 <evrardjp> https://github.com/openstack/openstack-ansible/blob/stable/ocata/scripts/run-upgrade.sh#L185
16:31:46 <evrardjp> it is before keystone play
16:32:18 <evrardjp> mnaser: I suggest we ask if memcached was properly flushed in the bug, see how it goes?
16:32:20 <odyssey4me> but is the bug reporter using the same upgrade tooling?
16:32:32 <evrardjp> yeah my point odyssey4me
16:32:42 <evrardjp> mark it incomplete until we know how the upgrade was triggered
16:32:49 <d34dh0r53> I just asked him to join
16:32:56 <openstackgerrit> Merged openstack/openstack-ansible-ops master: MNAIO: Correct README regarding DATA_DISK_DEVICE  https://review.openstack.org/604064
16:33:23 <mnaser> hmm
16:33:29 <cjloader> he's on his way
16:33:37 <mnaser> we dont have many bugs
16:34:04 <d34dh0r53> o/ Alex_Red1nger
16:34:10 <d34dh0r53> do you need scrollback?
16:34:11 <spotz> o/
16:34:13 <Alex_Red1nger> \o
16:34:25 <Alex_Red1nger> Yes please
16:34:29 <mnaser> Alex_Red1nger: we were talking about https://bugs.launchpad.net/openstack-ansible/+bug/1793389
16:34:29 <openstack> Launchpad bug 1793389 in openstack-ansible "Upgrade to Ocata: Keystone Intermittent Missing 'options' Key" [Undecided,New] - Assigned to Alex Redinger (rexredinger)
16:34:33 <evrardjp> hey Alex_Red1nger , it's fun with bugs time currently : )
16:34:35 <d34dh0r53> Alex_Red1nger: pasted it in slack
16:34:46 <spotz> d34dh0r53: boo slack!
16:34:53 <d34dh0r53> yep, boo slack
16:35:00 <d34dh0r53> but didn't want to spam this channel
16:35:04 <mnaser> cut him some.......slack
16:35:09 <d34dh0r53> ba dum cha
16:35:17 <Alex_Red1nger> Haha
16:35:18 <cjloader> d34dh0r53: spamalot much?
16:35:32 <d34dh0r53> that's sir spamalot to you
16:35:34 <Alex_Red1nger> Anyway, this is definitely a keystone bug.
16:35:54 <spotz> hehehe
16:35:55 <Alex_Red1nger> My patch is meant as a work-around to expedite upgrades.
16:36:01 <skiedude> Might I ask what it means when my bug ticket is moved from open to Invalid with no comment as to why?
16:36:14 * spotz notes no nano in our containers is a major bug:)
16:36:33 <cjloader> spotz: +1
16:36:42 <mnaser> skiedude: got a link to one?
16:36:53 <skiedude> https://bugs.launchpad.net/openstack-ansible/+bug/1793781
16:36:53 <openstack> Launchpad bug 1793781 in openstack-ansible "Queens -> Rocky Upgrade breaks cinder/glance" [Undecided,Invalid]
16:36:54 <mnaser> Alex_Red1nger: you were using the upgrade tooling?
16:37:16 <openstackgerrit> Merged openstack/openstack-ansible-ops master: MNAIO: Ensure consistent defaults  https://review.openstack.org/604059
16:37:17 <openstackgerrit> Merged openstack/openstack-ansible-ops master: MNAIO: Disable metering services by default  https://review.openstack.org/604034
16:37:18 <mnaser> skiedude: sorry, i should have added a comment, it looks like you mentioned that it was something with your config so we assumed that it was just something to figure out in your config.
16:37:57 <skiedude> ah ok, well I'm just assuming it is my config, I'm using the basic config options in the test production example, so nothing special
16:38:06 <Alex_Red1nger> mnaser: Yeah, I ran into the bug during execution of run-upgrade.sh
16:38:09 <odyssey4me> Alex_Red1nger: was part of the upgrade process to flush memcache, as done in every stable branch upgrade script? https://github.com/openstack/openstack-ansible/blob/stable/pike/scripts/run-upgrade.sh#L192
16:38:09 <skiedude> guess its not technically a bug, just wasn't sure where to put for support
16:38:12 <antonym> mnaser: yeah, the error pops up when going from ocata to pike and it happens randomly, sometimes it passes successfully, other times, it just needs another run of the keystone playbook and it goes away
16:38:24 <antonym> (using the run-upgrade.sh)
16:38:26 <mnaser> im assuming the "it needs another run" really is
16:38:32 <odyssey4me> aha, ok thanks for clarifying
16:38:33 <mnaser> stuff getting cleared out of memcache
16:38:57 <odyssey4me> maybe there needs to be some sort of check until it's cleared task?
16:39:01 <antonym> yeah, it was kind of random
16:39:14 <Alex_Red1nger> caching does seem like a likely culprit, since nothing in the code seems susceptible to a race.
16:39:26 <spotz> skiedude: Best thing it so ask here with the understanding someone might not be available or the ML
16:39:30 <mnaser> yeah im 99% sure its the memcache
16:39:49 <skiedude> understood, wheres a link to the ML?
16:40:04 <spotz> skiedude: I'll PM you so as not to spam:)
16:40:10 <skiedude> thanks!
16:41:04 <odyssey4me> maybe the memcache flush needs to happen within the keystone role rather, because that flush happens while the old system is running
16:41:23 <mnaser> exactly, i'd classify that as a bug in our deployment tooling
16:41:29 <mnaser> more retries is just hoping that memcache entries expire by luck
16:41:32 <odyssey4me> could just have a flag to set, and use that flag in the upgrade script
16:41:44 <mnaser> maybe as a handler
16:41:59 <mnaser> with a delegate_to
16:42:07 <odyssey4me> yep, probably as a handler because that'll fire when the old one is shut down and the new one starts
16:42:15 <mnaser> anytime we restart keystone
16:42:18 <mnaser> we do that
16:42:33 <odyssey4me> that sounds like it'd be too much - and break rolling upgrades
16:42:48 <mnaser> i dunno how much it'd break rolling upgrades
16:42:52 <mnaser> memcache is just stateless
16:42:53 <mnaser> it'll repopulate
16:43:06 <odyssey4me> ok, but doesn't it hold all the tokens and such?
16:43:45 <mnaser> nope, with fernet tokens
16:43:49 <odyssey4me> we've only ever needed this done on a major upgrade as far as I know, so it seems prudent to me to just have a flag and use it during the major upgrade
16:43:50 <mnaser> they dont get stored anywhere actually
16:44:05 <mnaser> with fernet tokens it validates it using the credential files
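[editor's note] The statelessness mnaser describes can be illustrated loosely with a stdlib sketch. This uses hmac as a stand-in for keystone's actual fernet implementation; the key material and function names are illustrative, not keystone code:

```python
import hashlib
import hmac

# Stand-in for the fernet key repository (/etc/keystone/fernet-keys/ in
# a real deployment): validation needs only the signing key, not any
# record of previously issued tokens.
signing_key = b"0" * 32

def issue_token(payload: bytes) -> bytes:
    # Token = payload + MAC; nothing is written to a database or cache.
    mac = hmac.new(signing_key, payload, hashlib.sha256).hexdigest().encode()
    return payload + b"." + mac

def validate_token(token: bytes) -> bool:
    # Recompute the MAC from the key alone -- no lookup anywhere.
    payload, _, mac = token.rpartition(b".")
    expected = hmac.new(signing_key, payload, hashlib.sha256).hexdigest().encode()
    return hmac.compare_digest(mac, expected)

token = issue_token(b"user:admin")
print(validate_token(token))  # True: only the key material is needed
```

This is why flushing memcached cannot invalidate fernet tokens themselves; the cache only holds derived data (catalog, revocation checks, user dicts), which safely repopulates.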
16:44:05 <odyssey4me> anyway, those are details which can get sorted out in review
16:44:12 <mnaser> yeah.
16:44:18 <mnaser> im not opposed to adding more retries if you think it helps
16:44:19 <mnaser> it wont hurt
16:44:23 <mnaser> Alex_Red1nger: just need a patch for master too
16:44:29 <mnaser> if you can do that then we'll be able to push the rest
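[editor's note] The retry mitigation being discussed would amount to something like the following on the affected task (a sketch only; the module name and parameters are illustrative assumptions, and the real change is whatever lands in the os_keystone review):

```yaml
# Hypothetical: retry the lookup until a consistent (post-flush) answer
# comes back from keystone, rather than failing on one stale read.
- name: Ensure the service project exists
  os_project:
    name: service
    domain_id: default
  register: _service_project
  until: _service_project is success
  retries: 5
  delay: 10
```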
16:45:14 <odyssey4me> given the cache is for performance, it seems prudent to me not to flush the cache every time you restart keystone - but rather only when needed, otherwise we'd just be slowing the environment down
16:45:31 <mnaser> not opposed to that either, or if we know its a major upgrade, we can figure out the details
16:45:43 <mnaser> Alex_Red1nger: ill mark it as confirmed and we'll take a patch for master before we can land the backported ones
16:46:03 <openstackgerrit> Arx Cruz proposed openstack/openstack-ansible-os_tempest master: WIP - Enable stackviz support  https://review.openstack.org/603100
16:46:09 <mnaser> skiedude: mailing list or irc (outside meeting hours) will be the best place to gather help :)
16:46:16 <mnaser> aka in less than 15 minutes
16:46:36 <mnaser> #link https://bugs.launchpad.net/openstack-ansible/+bug/1791085
16:46:36 <openstack> Launchpad bug 1791085 in openstack-ansible "OVN Metadata Service Broken" [Undecided,New] - Assigned to James Denton (james-denton)
16:46:40 <Alex_Red1nger> mnaser: OK, I'll move things over.
16:46:59 <mnaser> jamesdenton: do we need anything about that or just there to track progress?
16:47:13 <jamesdenton> eventually, yes.
16:47:26 <jamesdenton> it's #22 of #100
16:47:50 <spotz> heehe
16:47:55 <mnaser> so do we mark as wishlist? how do we track wip items usually?
16:48:12 <jamesdenton> good question. certainly not urgent
16:48:32 <mnaser> i guess confirmed
16:48:35 <odyssey4me> confirm/in-progress I guess - priority probably low/medium
16:48:37 <mnaser> because its confirmed broken.
16:48:48 <mnaser> confirmed/low it is
16:48:53 <mnaser> #link https://bugs.launchpad.net/openstack-ansible/+bug/1782388
16:48:53 <openstack> Launchpad bug 1782388 in openstack-ansible "Installing Multipath But Not Enabling In Nova Causes Volume Attachment Failures" [Undecided,New]
16:48:54 <jamesdenton> thanks
16:48:56 <mnaser> our trail
16:49:08 <mnaser> i dont see byron around
16:50:47 <mnaser> i guess i can leave it as is
16:50:49 <mnaser> #topic open discussion
16:50:58 <mnaser> 10 fun minutes if anyone has anything fun
16:51:13 <cjloader> Can I possibly get some eyes on https://review.openstack.org/#/c/556586/?
16:51:23 <cjloader> The designate spec
16:51:26 <odyssey4me> mnaser evrardjp did we want to chat about rocky releasing?
16:51:36 <evrardjp> we can
16:51:38 <mnaser> oh sure
16:51:46 <mnaser> i'll look at that spec soon cjloader
16:52:01 <mnaser> lets see the state of rocky
16:52:20 <mnaser> https://review.openstack.org/#/q/project:%255Eopenstack/openstack-ansible.*+is:open+branch:stable/rocky
16:52:22 <mnaser> we have some open things
16:52:25 <odyssey4me> We chatted a bit earlier. I suggested that we do the final RC tag tomorrow (Wednesday) and ask for the release based on it. JP's on holiday friday.
16:52:37 <evrardjp> mnaser: when I proposed rocky was successfully building in periodics.
16:52:52 <mnaser> im just trying to see if there is something really important to merge
16:53:08 <evrardjp> mnaser: makes sense
16:53:11 <mnaser> but for the most part
16:53:15 <mnaser> they all seem mostly like cleanups
16:53:24 <odyssey4me> All the galera_client deps are gone from rocky, so I'm happy to go ahead
16:53:34 <mnaser> nothing too fundamental and important
16:53:37 <spotz> After the meeting I need to know if we have apache conf overrides:)
16:53:41 <odyssey4me> there are always bugs, and we release often, so I think it's time we released it into the wild
16:53:51 <mnaser> i agree, it's in a good state, nothing pressing
16:53:51 <evrardjp> odyssey4me: I agree there
16:54:01 <evrardjp> just for confirmation
16:54:08 <odyssey4me> So we can check periodics in the morning, and if it's good - we go for it
16:54:15 <evrardjp> another final rc, right? not reusing what's up
16:54:16 <spotz> +1
16:54:17 <mnaser> ++ i agree
16:54:26 <evrardjp> ok sounds good
16:54:31 <mnaser> yes, whats up is pretty far behind
16:54:34 <mnaser> doesnt have all the galera_client stuff
16:54:47 <mnaser> i think the galera_client stuff has helped drive deploy times so much and made it more stable
16:54:48 <mnaser> very happy about that one
16:54:51 <spotz> mnaser: But it's working well in SJC so far?
16:55:03 <mnaser> wonderfully so
16:55:08 <evrardjp> ok
16:55:08 <spotz> Sweet:)
16:55:13 <mnaser> all my rocky-deploy patches were merged too
16:55:31 <mnaser> any other subjects? :)
16:55:35 <d34dh0r53> pike
16:55:42 <mnaser> pike release?
16:55:57 <d34dh0r53> need to get https://review.openstack.org/#/c/604623/ going so that we can get pike updated
16:56:18 <openstackgerrit> Alex Redinger proposed openstack/openstack-ansible-os_keystone master: Increase retries to mitigate intermittent failure  https://review.openstack.org/605146
16:56:37 <mnaser> i think release is on halt right now
16:56:41 <mnaser> because of some issues with ssl?
16:56:45 <mnaser> doug posted something about it a few days ago on ML
16:56:56 <d34dh0r53> ohh, must have missed that, I'll go look
16:56:59 <mnaser> or maybe not, im not sure
16:57:00 <mnaser> https://review.openstack.org/#/q/project:openstack/releases
16:57:03 <mnaser> i see things merging
16:57:16 <mnaser> d34dh0r53: you can ask in #openstack-release
16:57:32 <spotz> I thought the comment there was specific to us and I was gonna blame mhayden
16:57:32 <d34dh0r53> mnaser: yeah
16:57:46 <odyssey4me> I asked earlier - no response yet
16:58:07 <evrardjp> I think it's fixed now
16:58:28 <evrardjp> the releases issue I mean
16:58:44 <evrardjp> sorry I haven't got the chance to continue on that -- a lot of fire still
16:58:46 <spotz> I can poke Sean
16:58:48 <mnaser> cool
16:58:56 <openstackgerrit> Alex Redinger proposed openstack/openstack-ansible-os_keystone stable/rocky: Increase retries to mitigate intermittent failure  https://review.openstack.org/605148
16:59:46 <mnaser> thats all folks?
17:00:04 <odyssey4me> that's all from me
17:00:11 <d34dh0r53> yep, I'm good
17:00:28 <mnaser> thanks everyone
17:00:31 <mnaser> #endmeeting