16:00:20 <mnaser> #startmeeting openstack_ansible_meeting
16:00:21 <openstack> Meeting started Tue Jul  3 16:00:20 2018 UTC and is due to finish in 60 minutes.  The chair is mnaser. Information about MeetBot at http://wiki.debian.org/MeetBot.
16:00:22 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
16:00:24 <openstack> The meeting name has been set to 'openstack_ansible_meeting'
16:00:27 <mnaser> #topic roll call
16:00:30 <mnaser> o/
16:00:45 <spotz> o/
16:01:16 <mnaser> quiet day today eh
16:01:35 <evrardjp> o/
16:02:00 <mnaser> alright, we can get started now.  hopefully people hop on shortly
16:02:14 <evrardjp> yup
16:02:18 <evrardjp> thanks for leading this mnaser
16:02:21 <mnaser> :)
16:02:26 <mnaser> so xenial has no rocky packages
16:02:43 <mnaser> bionic has rocky packages
16:03:10 <evrardjp> bionic should be queens indeed
16:03:12 <mnaser> i'm wondering what is the best way to go about this
16:03:26 <mnaser> will xenial have rocky packages
16:03:28 <evrardjp> I feel like we should stay close to canonical's way
16:03:36 <evrardjp> mnaser: no
16:03:39 <evrardjp> well
16:03:42 <evrardjp> not that I am aware
16:03:43 <mnaser> even on release?
16:04:15 <mnaser> https://wiki.ubuntu.com/OpenStack/CloudArchive
16:04:25 <mnaser> "Starting with the Ubuntu Server 16.10 release, newer releases of OpenStack are available via Cloud archive for the Ubuntu Server 16.04 LTS release. Newton and Pike will be supported for 18 months each, and Ocata for 36 months. Queens, 18.04's OpenStack version, is supported in the Cloud Archive for 3 years, i.e. until the end of the Ubuntu 16.04 LTS lifecycle."
16:04:26 <evrardjp> that's what we heard in the past, but we can maybe nicely ask
16:04:46 <mnaser> so it looks like queens seems to be the latest release
16:04:54 <evrardjp> yeah that's what I said above
16:05:02 <evrardjp> bionic should be queens
16:05:12 <evrardjp> and rocky
16:05:19 <mnaser> so we need to add bionic support for queens
16:05:21 <mnaser> and then
16:05:23 <evrardjp> xenial should stop at queens
16:05:25 <mnaser> drop xenial in rocky
16:05:31 <odyssey4me> no, xenial will not have rocky packages
16:05:32 <spotz> Have we even started testing 18.04?
16:05:37 <mnaser> spotz: kinda.
16:05:40 <odyssey4me> we have asked, and it's been answered
16:05:49 <mnaser> ok, so what do we think about
16:05:56 <evrardjp> I have started the work, but I got 0 cycles for it.
16:06:01 <mnaser> pushing work to bring 18.04 in rocky right now
16:06:10 <mnaser> and then backporting the 18.04 patches to stable/queens
16:06:21 <evrardjp> some others took over, so thanks everyone joining the effort : )
16:06:41 <mnaser> it does seem like a big undertaking though
16:06:59 <mnaser> cores: do we feel comfortable backporting 18.04 support to stable/queens ?
16:07:00 <evrardjp> I am not sure it's hard, but it takes time and manpower
16:07:14 <odyssey4me> if we agree that this is important, then we can ask everyone to stop feature work for a week and focus just on bionic enablement for master
16:07:27 <mnaser> i think it's mostly going to be backports of vars/ubuntu-18.04.yml and adding jobs
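    As a rough, hypothetical illustration of what such a backport touches: a per-distro vars file normally only overrides distro-specific package and service names, while the role's tasks stay untouched. The variable and package names below are illustrative, not taken from any real role.

        # vars/ubuntu-18.04.yml (hypothetical sketch)
        nova_distro_packages:
          - libvirt-daemon-system   # packages renamed or split on bionic are the usual delta
          - python3-libvirt
        nova_service_names:
          - nova-compute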
16:07:33 <odyssey4me> I did see d34dh0r53 also volunteer himself to get onto doing bionic work
16:07:48 <evrardjp> d34dh0r53: what's the status there then?
16:07:54 <evrardjp> maybe mattt can help?
16:08:05 <odyssey4me> mnaser: yep, if we can get master working for roles, porting back will be reasonably trivial
16:08:13 <d34dh0r53> evrardjp: just started looking at it, networking is going to be a challenge
16:08:21 <mnaser> okay, so we're okay with backporting 18.04 support to queens
16:08:23 <mnaser> oh no, why's that?
16:08:49 <spotz> I think that might be the best plan of action
16:08:57 <odyssey4me> oh dear, would we have to also port back the systemd-network patches too?
16:09:23 <d34dh0r53> odyssey4me: yes
16:09:25 <mnaser> i dont see why that would be necessary?  is it because systemd_networkd is required for bionic?
16:09:44 <odyssey4me> bionic no longer uses the old network config system IIRC
16:09:48 <mnaser> ugh
16:09:56 <mnaser> that starts becoming a very heavy backport
16:10:11 <odyssey4me> it's one of the reasons we standardised on networkd instead - one method across all distributions
16:10:24 <odyssey4me> and yes that makes it super painful in terms of backporting
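    For context, the change is from ifupdown-style /etc/network/interfaces files to systemd-networkd units. Below is a minimal sketch of driving networkd from an Ansible task; the real ansible-role-systemd_networkd interface differs, so the task structure and addresses here are illustrative only.

        # Hypothetical sketch only, not the systemd_networkd role's actual tasks.
        - name: Write a systemd-networkd unit for the management bridge
          copy:
            dest: /etc/systemd/network/br-mgmt.network
            content: |
              [Match]
              Name=br-mgmt

              [Network]
              Address=172.29.236.10/22

        - name: Apply the new network configuration
          systemd:
            name: systemd-networkd
            state: restarted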
16:10:31 <mnaser> as a non-ubuntu user, i dont want to impede on progress but that's a pretty fundamental change across releases i guess
16:10:44 <evrardjp> git-deps should help on the backport
16:10:50 <odyssey4me> we'd probably have to issue a .1 release to signify the change
16:10:56 <evrardjp> no doubt.
16:11:04 <cloudnull> o/ sorry late
16:11:04 <odyssey4me> but it also means that we need to prep them all and do it in a single release
16:11:06 * cloudnull reading back
16:11:19 <mnaser> it also means that we have to pay more attention to those roles
16:11:23 <mnaser> do they even have a stable/queens branch?
16:11:30 <odyssey4me> we can't afford it to take more time than a single release
16:11:36 <mnaser> ++
16:11:39 <openstackgerrit> Andy Smith proposed openstack/openstack-ansible-os_ceilometer master: Update to use oslo.messaging service for RPC and Notify  https://review.openstack.org/579909
16:11:54 <mnaser> it sucks that ubuntu put us in this situation
16:11:58 <odyssey4me> mnaser: those roles won't need a stable/queens branch I don't think - although we could make one if we need it
16:12:09 <mnaser> no amount of planning would have made us avoid this
16:12:18 <mnaser> cause 18.04 didn't exist in the queens cycle i think
16:12:26 <odyssey4me> it did, at the end
16:12:38 <evrardjp> I am not sure we _need_ to backport if we take a gutsy decision to not backport
16:12:39 <mnaser> ah
16:12:41 <odyssey4me> but yes it does suck a bit
16:13:01 <evrardjp> at the very end yes.
16:13:09 <mnaser> if we don't backport.. could we ask that users upgrade to bionic before upgrading to rocky?
16:13:11 <evrardjp> well no
16:13:15 <odyssey4me> ok, if we decide to carry rocky on xenial ourselves (much like we did for trusty for newton), then we do that at our own risk
16:13:18 <evrardjp> I am wrong
16:13:25 <evrardjp> bionic arrived after queens
16:13:27 <odyssey4me> but then we can't do distro installs for xenial
16:13:46 <logan-> o/ late
16:13:47 <mnaser> distro installs in xenial is probably 'not supported' and shouldn't be anyways
16:13:50 <odyssey4me> we do then need bionic for rocky asap so that the distro install work can be done for it
16:13:54 <logan-> but we have another option than backporting bionic.....
16:13:55 <mnaser> distro installs should be for the supported os
16:14:01 <mnaser> logan-: oh?
16:14:05 <logan-> we could think about doing what we did in newton:
16:14:12 <logan-> https://github.com/openstack/openstack-ansible-pip_install/blob/4e708955be6675af8195c98d1cf543bb40f7757e/vars/ubuntu-14.04.yml#L17
16:14:12 <logan-> https://github.com/openstack/openstack-ansible-pip_install/blob/4e708955be6675af8195c98d1cf543bb40f7757e/vars/ubuntu-16.04.yml#L17
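    The newton-era approach referenced above amounts to keeping the older distro pointed at the previous UCA pocket while the newer distro gets the current one. A hedged sketch of the idea, with an illustrative variable name (the linked files show how it was actually done for newton):

        # vars/ubuntu-16.04.yml (sketch, not the real file)
        # On xenial, keep consuming the queens UCA pocket even while deploying rocky from source.
        uca_openstack_release: queens

        # vars/ubuntu-18.04.yml (sketch)
        uca_openstack_release: rocky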
16:14:13 <spotz> Ok so can we have the cutoff for Xenial be Queens even, then say upgrade to Bionic before upgrading to Rocky?
16:14:19 <cloudnull> +1 logan-
16:14:28 <evrardjp> yeah that's what I just said above :)
16:14:29 <mnaser> i think that was the idea that odyssey4me had suggested too?
16:14:33 <mnaser> and evrardjp
16:14:38 <logan-> oh sorry evrardjp behind on backscroll :(
16:14:39 <odyssey4me> logan- yep, so we take the risk of being the only project doing rocky on xenial
16:14:44 <mnaser> so rocky bionic packages in xenial
16:14:55 <odyssey4me> not too bad, but then we still need to get bionic going so that the distro install work can be tested and working on it
16:14:57 <logan-> odyssey4me: yeah for newton trusty was really just a jumping off point into xenial it seemed like
16:14:57 <cloudnull> just getting caught  up, but backporting bionic to queens would be a massive undertaking.
16:15:00 <evrardjp> it's a decision we need to take
16:15:00 <mnaser> and cross our fingers for no package conflicts i guess
16:15:12 <mnaser> ++ odyssey4me yes
16:15:20 <logan-> for newton I was only running trusty for about 2-3 days
16:15:21 <mnaser> distro install or source install, bionic needs to be done
16:15:26 <cloudnull> mnaser on source based installs it's less of an issue.
16:15:28 <evrardjp> for me, I'd say the easiest (as we discussed at a PTG) is to bring bionic on rocky, and make rocky the platform for jumping
16:15:43 <evrardjp> we'll have quirks
16:15:53 <mnaser> question for operators
16:16:01 <mnaser> how hard is it to upgrade to bionic
16:16:06 <odyssey4me> ok, so then we're saying that we're going to do source-installs of rocky on xenial & bionic, and distro installs for ubuntu will be bionic only
16:16:06 <mnaser> then upgrade queens to rocky
16:16:20 <mnaser> spotz: had a point but im not an ubuntu user
16:16:28 <spotz> I always do fresh installs of operating system and OpenStack
16:16:29 <mnaser> so i dont want to decide/speak on behalf of those operators
16:16:36 <spotz> I'm a bad use case:)
16:16:44 <openstackgerrit> Merged openstack/openstack-ansible stable/queens: Fix loop variable name for nested loop  https://review.openstack.org/579489
16:16:51 <logan-> mnaser: the release upgrade script for trusty->xenial was horribly broken. the early recommendation from the osa community was reimage your systems
16:16:52 <mnaser> so something that literally blocks OSA from running if containers are not bionic
16:16:52 <cloudnull> odyssey4me I think this will almost always be the case with distro installs.
16:17:25 <mnaser> logan-: ah i see... i'm just wondering if maybe we're trying to solve an issue that can be delegated down to the operator
16:17:34 <cloudnull> we're kinda stuck on that front given the packagers will dictate what OS a given release can be deployed on
16:17:44 <mnaser> or with a simple OS upgrade
16:17:54 <odyssey4me> at the PTG we should discuss with ubuntu how we prevent getting into this situation again - we'd need access to bionic earlier so that we can do the same transition releases
16:18:02 <mnaser> simple is a simplification but generally ubuntu seems painless to upgrade
16:18:17 <mnaser> odyssey4me: maybe we should test using 17.10 or intermediary releases
16:18:28 <mnaser> dunno if those had testing packages at the time
16:18:34 <cloudnull> the main pain point with trusty xenial was the conversion from upstart to system
16:18:34 <evrardjp> I suggest for source installs Q (xenial) - R (xenial & bionic) // distro packages R (bionic)
16:18:38 <cloudnull> **systemd
16:19:00 <cloudnull> I have not done a X > B upgrade but I suspect it'll be a lot better experience
16:19:01 <odyssey4me> evrardjp: yep, that's my conclusion from this discussion
16:19:18 <mnaser> so does everyone seem to be in sorts of agreement that
16:19:21 <odyssey4me> we're too deep right now, so we just move forward - but the urgency for having bionic is raised
16:19:25 <spotz> +1 evrardjp
16:19:25 <logan-> ++
16:19:26 <evrardjp> odyssey4me: I am sorry, but that's why jamespage talked to us during the PTG. He helped on the discussion.
16:19:42 <cloudnull> ++
16:19:50 <mnaser> source installations in queens will be xenial only, rocky will use bionic packages in xenial + bionic support
16:19:58 <mnaser> and distro deploys won't support xenial because there are no packages
16:20:02 <odyssey4me> evrardjp: yes, I know - and that was helpful, but it was also too late for us to get anything in for bionic/queens for us
16:20:17 <mnaser> does that summarize a conclusion we mostly all agree to? ^
16:20:17 <evrardjp> he raised the changes to us, and we had to deal with that -- we all are running with limited resources.
16:20:27 <logan-> mnaser: *queens usa packages in xenial i think is what you meant there
16:20:31 <logan-> uca*
16:20:42 <spotz> Out of curiosity, do we get a heads up for SUSE and RedHat, or is that fedora and opensuse?
16:20:46 <odyssey4me> mnaser: nope
16:20:56 <mnaser> logan-: oh okay i see, so we're pinning those deps back rather than leaping forwards
16:20:59 <odyssey4me> mnaser: rocky distro installs will not be done at all
16:21:00 <spotz> yes I switched order:)
16:21:29 <odyssey4me> mnaser: sorry, that was wrong :/
16:21:50 <mnaser> so rocky: distro install - only bionic, source install - xenial using queens packages, bionic using rocky packages
16:21:54 <odyssey4me> mnaser: we're going to do source-installs of rocky on xenial & bionic, and distro installs for ubuntu will be bionic only
16:21:55 <mnaser> is that correct for rocky?
16:22:00 <logan-> yup
16:22:12 <odyssey4me> mnaser: yep
16:22:14 <cloudnull> ++
16:22:21 <odyssey4me> that's how we did newton
16:22:35 <mnaser> #agreed rocky: distro install - only bionic, source install - xenial using queens packages, bionic using rocky packages
16:22:43 <evrardjp> we'll have quirks, but that's fine for me. Else I'd not have proposed it 15 lines above :p
16:22:46 <mnaser> took a while but this was important
16:23:01 <mnaser> i can have some efforts on my side to push basic support here for this
16:23:01 <evrardjp> agreed mnaser
16:23:13 <mnaser> but let's try and focus on this, especially if this is important for your use case
16:23:24 <evrardjp> could rax bring more ppl to this? As this is very important to them.
16:23:31 <mnaser> (and help each other out too)
16:23:52 <mnaser> i think we can discuss later in terms of resources and follow up next week on status, im hoping we all see the importance of this
16:23:58 <evrardjp> yeah
16:23:59 <mnaser> is it ok to move onto 2nd item?
16:24:06 <evrardjp> fine for me
16:24:13 <openstackgerrit> Andy Smith proposed openstack/openstack-ansible-os_magnum master: Update to use oslo.messaging service for RPC and Notify  https://review.openstack.org/579645
16:24:18 <mnaser> evrardjp: the TC has assigned two liaisons for OSA: smcginnis and mnaser. If we have any issue we should raise to them; please tell evrardjp as he is currently prepping a mail for it.
16:24:27 * mnaser puts tc hat on
16:24:49 <cloudnull> ORLY?
16:24:52 <mnaser> if there are any project issues, anything that you need advice with, or any info with the tc, i'm here.  and smcginnis is my pair so you can contact him too
16:25:00 <evrardjp> mnaser: as you can see, you'll have an email :p
16:25:09 <mnaser> so if there's anything, please feel free to reach out.  and yes, i look forward for that
16:25:19 <mnaser> the tc is trying to reach out more to hear what's going on in projects :)
16:25:43 <mnaser> cloudnull: YARLY
16:25:55 <evrardjp> I don't want to be the bottleneck of conversations. If ppl want to reach the TC directly, that is fine for me. Just keep me informed at least.
16:26:02 <evrardjp> Hope the TC is fine with that too.
16:26:04 <mnaser> so, that was just an update, you can contact me, evrardjp or smcginnis
16:26:06 <evrardjp> :p
16:26:08 <mnaser> yeppers, anytime anyone
16:26:15 <evrardjp> cool
16:26:15 <mnaser> now at the cost of time, i'll move on to next
16:26:17 <mnaser> hwoarang: working on aio_distro_basekit scenario to test the combined result of distro installation. Little progress on Leap 15 enablement. Awaiting mariadb upstream to act on https://jira.mariadb.org/browse/MDEV-16444
16:26:18 <mnaser> (yay!)
16:26:31 <mnaser> i believe this is the check https://review.openstack.org/#/q/project:%5Eopenstack/openstack-ansible.*+AND+is:open,25
16:26:31 <evrardjp> thanks hwoarang
16:26:33 <mnaser> err oops
16:26:35 <mnaser> https://review.openstack.org/#/c/579770/
16:26:58 <mnaser> so hopefully more work can be done to get through with this
16:27:20 <mnaser> but we are so close, i am excited that this work can be done this cycle, ill review that change on the centos side, looks like some nginx related failures
16:27:30 <mnaser> but ill be looking into that to hopefully be able to land this in time :)
16:28:04 <evrardjp> I'd love to see it land in time.
16:28:07 <mnaser> and also on a side cool note
16:28:12 <mnaser> aio_metal centos-7 seems to complete
16:28:18 <mnaser> it looks like it failed on some volume backup stuff in tempest
16:28:22 <mnaser> but we at least get a full deployment!!!
16:28:26 <evrardjp> woot
16:28:31 <mnaser> so we are so so so close to getting an actual green again
16:28:41 <mnaser> next-up
16:28:43 <mnaser> evrardjp: working on independent inventories
16:28:45 <mnaser> evrardjp: wanna talk about that^ ?
16:29:12 <evrardjp> nothing new, but it could become important for my employer.
16:29:29 <mnaser> can i know what independent inventories is
16:29:31 <mnaser> i'm not sure what it is
16:29:40 <evrardjp> if patches need to happen, I'd love to get them included in time.
16:29:57 <evrardjp> mnaser: not using our dynamic inventory for integrated builds
16:30:05 <d34dh0r53> sorry, internet outage
16:30:24 <mnaser> evrardjp: i see, so that will help 'scenario' builds right
16:30:24 <evrardjp> using anything as source of inventory, so basically be explicit about the contract of what needs to be defined.
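    As a hedged illustration of what that contract could look like in practice: a plain static inventory that declares the expected groups directly, instead of being generated by the dynamic inventory. Group names, hosts, and addresses below are illustrative only.

        # inventory/hosts.yml (illustrative sketch, not an agreed-upon format)
        all:
          children:
            galera_all:
              hosts:
                infra01:
                  ansible_host: 172.29.236.11
            nova_compute:
              hosts:
                compute01:
                  ansible_host: 172.29.236.21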
16:30:35 <evrardjp> it will help many things
16:30:53 <mnaser> wonderful
16:30:59 <mnaser> i think we'd all gladly do those reviews :)
16:31:03 <evrardjp> scenarios indeed, but I also hope for increased adoption of osa with an easier inventory.
16:31:19 <mnaser> indeed
16:31:25 <evrardjp> it's the first step of a spec that was up
16:31:54 <evrardjp> that's all I had to say
16:31:57 <mnaser> cool, i'm happy to see that progress
16:32:01 <mnaser> looking forward to see more of it coming :D
16:32:05 <mnaser> now last thing
16:32:07 <mnaser> evrardjp: no bump on master until https://review.openstack.org/#/c/574006/ is solved (on its way).
16:32:27 <mnaser> so i see the designate change merged 24 minutes ago
16:32:31 <mnaser> does that mean we're good for a recheck?
16:32:44 <evrardjp> mnaser: I need to rebounce the shas in the patch
16:32:44 <cloudnull> I think the SHAs all need to be updated.
16:32:46 <openstackgerrit> Merged openstack/openstack-ansible-os_magnum master: Add systemd tags to include role  https://review.openstack.org/578604
16:32:58 <mnaser> cool, so evrardjp will hopefully get that done and we'll be back in action :)
16:33:06 <evrardjp> mnaser: yes.
16:33:14 <mnaser> cores please look out for that change so we can land that bump in no time
16:33:25 <mnaser> i'll dive into bug triage next-up
16:33:28 <mnaser> #topic bug triage
16:33:32 <odyssey4me> evrardjp: still waiting to see any of those inventory patches ;)
16:33:40 <mnaser> #link https://bugs.launchpad.net/openstack-ansible/+bug/1779707
16:33:40 <openstack> Launchpad bug 1779707 in openstack-ansible "After manual upgrade Pike to Queens cinder does not work" [Undecided,New]
16:34:10 <evrardjp> haha
16:34:10 <logan-> at first i was thinking maybe he needs to cinder-manage update the host of the volumes
16:34:20 <logan-> but creates fail.. so i'm not sure
16:34:33 <odyssey4me> I did ask this guy to provide more info
16:34:35 <mnaser> but it also looks like
16:34:40 <evrardjp> I don't know for this bug. It sounds legit, odyssey4me you worked on this right?
16:34:42 <mnaser> the create actually happens in cinder-volume too
16:34:56 <odyssey4me> were his new services up or not, and where did the transactions stop
16:35:22 <odyssey4me> all he's done is register the same thing he said in channel, with no new info
16:35:28 <odyssey4me> lemme find the eavesdrop link
16:35:46 <logan-> yea but assuming your new cinder-volumes came online it should schedule the create to one of them and the offline ones should only affect existing volumes right? so idk.. need more info like a cinder service list so we can see the states and stuff
16:36:04 <tosky> found the issue for sahara, it's a (legitimate) change in openstack-ansible-os_nova but sahara does not support that yet
16:36:09 <mnaser> logan-: the note says the logs showed volumes actually being created
16:36:16 <logan-> oh
16:36:34 <mnaser> oh wait
16:36:37 <mnaser> that might be cinder-api
16:36:39 <mnaser> dispatching
16:36:43 <evrardjp> tosky: (we are currently in a meeting, triaging bugs, but that sounds lovely, I am glad you found it!)
16:36:44 <odyssey4me> or scheduler
16:36:45 <mnaser> yes, sorry
16:37:11 <mnaser> honestly this is not enough information to figure it out, we upgraded cinder to queens and i didnt really see issues :X
16:37:16 <odyssey4me> I mean - the fact is that we have no information about what agents were showing as running and have no information about whether the services were communicating or had errors.
16:37:25 <tosky> uh, sorry
16:37:40 <mnaser> i guess now that the state is fixed we cant really 'do much'
16:37:43 <evrardjp> let's move on, marking incomplete
16:37:54 <evrardjp> it will reopen if answers are provided
16:38:01 <odyssey4me> yep, unless someone wants to test it out to try and replicate the issue
16:38:20 <mnaser> commented
16:38:21 <mnaser> incomplete
16:38:27 <mnaser> #link https://bugs.launchpad.net/openstack-ansible/+bug/1779633
16:38:27 <openstack> Launchpad bug 1779633 in openstack-ansible "basic lxc host setup playbook fails on remove requiretty for sudo on centos" [Undecided,New]
16:38:29 <evrardjp> sadly I don't have my long running queens anymore
16:39:13 <evrardjp> we should probably, for that bug, check if sudo is installed first?
16:39:26 <evrardjp> or install it by default.
16:39:40 <mnaser> check the comment
16:39:48 <mnaser> we try to install it
16:39:58 <mnaser> but it fails because the user seems to have some centos pinning stuff going on apparently
16:39:58 <evrardjp> yeah
16:40:00 <evrardjp> sorry
16:40:21 <openstackgerrit> Merged openstack/ansible-role-systemd_mount master: Add release note link in README  https://review.openstack.org/579072
16:40:38 <mnaser> i dunno how the user is creating centos 7.4 containers
16:40:47 <openstackgerrit> Kevin Carter (cloudnull) proposed openstack/openstack-ansible-os_cinder master: Revert "Revert "Convert role to use a common systemd service role""  https://review.openstack.org/574817
16:41:00 <mnaser> i would like to put incomplete and ask the user to provide their user_variables
16:41:09 <mnaser> because they are clearly doing something weird around using older versions
16:41:13 <mnaser> and that is the root cause of this issue
16:41:50 <mnaser> thoughts?
16:42:13 <evrardjp> mmm
16:42:26 * mnaser takes silence as approval :D
16:42:27 <evrardjp> two things
16:42:44 <cloudnull> mnaser ++ incomplete.
16:42:45 <evrardjp> let's say the deployer has done something bad, the process still succeeded.
16:42:55 <mnaser> hmm
16:42:57 <mnaser> that is a bug
16:43:01 <evrardjp> only the "Remove requiretty for sudo on centos" task failed
16:43:08 <evrardjp> that is the bug
16:43:12 <mnaser> should we add
16:43:21 <evrardjp> a set
16:43:26 <mnaser> set -e
16:43:28 <evrardjp> errorfail or something like that
16:43:29 <mnaser> to the prep cache
16:43:31 <evrardjp> yeah
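    For context on why errexit matters here: a multi-command shell script only returns the status of its last command, so an earlier failed install is masked as long as the final command succeeds. A hedged illustration in Ansible terms (the task and commands are made up, not the real lxc_hosts prep script):

        # Illustration only; not the actual lxc_hosts cache-prep task.
        - name: Prep the container image cache
          shell: |
            set -e                               # abort on the first failing command...
            yum install -y sudo                  # ...so a failure here is no longer hidden
            sed -i '/requiretty/d' /etc/sudoers  # ...by later commands that happen to succeed
          args:
            executable: /bin/bash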
16:43:57 <openstackgerrit> Andy Smith proposed openstack/openstack-ansible-os_trove master: Update to use oslo.messaging service for RPC and Notify  https://review.openstack.org/574789
16:44:00 <mnaser> ok
16:44:09 <evrardjp> mnaser: https://github.com/openstack/openstack-ansible-lxc_hosts/blob/master/templates/prep-scripts/centos_7_prep.sh.j2#L2
16:44:37 <mnaser> wait
16:44:37 <mnaser> what
16:44:56 <mnaser> ooo
16:45:00 <mnaser> https://github.com/openstack/openstack-ansible-lxc_hosts/commit/0d8fa41d32667ea309947f6ef4643192570fd0b1
16:45:14 <mnaser> this was fixed in queens
16:45:19 <mnaser> but very difficult to backport
16:45:44 <evrardjp> thanks for the hidden feature in cloudnull 's patch! :p
16:45:53 <evrardjp> fixing da bug!
16:46:02 <mnaser> ok
16:46:03 <cloudnull> ?
16:46:09 <evrardjp> yeah worth backporting to me
16:46:09 <mnaser> fix resolved?
16:46:15 <mnaser> do we want to backport that whole thing?
16:46:17 <evrardjp> which branch was it?
16:46:20 <mnaser> it would have to be a branch only change
16:46:29 <evrardjp> well it's a bug
16:46:39 <evrardjp> we can reimplement in queens though
16:46:39 <cloudnull> did i make a booboo ?
16:46:43 <evrardjp> that would be simpler
16:46:44 <mnaser> cloudnull: no you fixed one
16:46:49 <evrardjp> cloudnull: no you did great
16:46:51 <cloudnull> ha! that's a first
16:46:52 <mnaser> that's what i mentioned
16:47:01 <mnaser> in the bug
16:47:10 <mnaser> so confirmed low?
16:47:13 <evrardjp> cloudnull: well. You hide a feature in a patch, so the commit message isn't great. But the fix is there! :p
16:47:25 <evrardjp> mnaser: confirmed low
16:47:46 <cloudnull> winning?
16:47:47 <evrardjp> I meant I agree, not merely repeating
16:47:48 <mnaser> #link https://bugs.launchpad.net/openstack-ansible/+bug/1779534
16:47:48 <openstack> Launchpad bug 1779534 in openstack-ansible "pip install offline fails with new version of get-pip.py" [Undecided,New]
16:47:51 <evrardjp> cloudnull: indeed
16:48:06 <evrardjp> sadly I expected this.
16:48:18 <evrardjp> get-pip is not meant for production and we don't pin it.
16:48:25 <evrardjp> Everybody said "it's fine"
16:48:50 <evrardjp> I haven't confirmed this though.
16:48:56 <evrardjp> but this could well happen.
16:49:34 <mnaser> https://pip.pypa.io/en/stable/user_guide/#installing-from-local-packages
16:49:44 <mnaser> it does look like its still a thing?
16:50:33 <odyssey4me> for rocky onwards this will be much less of a thing, because we use get-pip a lot less
16:50:33 <evrardjp> maybe get-pip isn't tested with that and broke it in its packaging? I don't know. I haven't tested it myself
16:50:38 <mnaser> weird
16:50:47 <evrardjp> odyssey4me: yeah, distro packages.
16:50:56 <mnaser> curl -O https://bootstrap.pypa.io/get-pip.py && python get-pip.py -d => "no such option: -d"
16:51:07 <logan-> wow, weird that they removed that arg :/
16:51:08 <odyssey4me> if this really is an issue, it's easy enough for us to pin to an older version as suggested for the stable branches
16:51:10 <evrardjp> mnaser: so confirmed
16:51:26 <evrardjp> I'd prefer if we pin
16:51:36 <evrardjp> I'd say it's confirmed and medium
16:51:39 <mnaser> i think a pin makes our life easier
16:51:39 <evrardjp> or high
16:51:50 <evrardjp> because it prevents deploying
16:51:55 <evrardjp> sets bad expectations
16:52:07 <evrardjp> pinning or vendoring it in
16:52:09 <mnaser> anyone wanna push up the quick 1 minute patch for the pin?
16:52:13 <mnaser> looks like it's already provided too
16:52:30 <evrardjp> vendoring it in is even easier for the offline installs, as ppl don't have to mirror the file.
16:52:38 <mnaser> but not as backportable though
16:52:46 <odyssey4me> I'd rather we didn't vendor
16:52:55 <mnaser> yeah
16:53:00 <mnaser> that's a whole another mess (imho)
16:53:01 <evrardjp> we already vendor ssl certs
16:53:29 <odyssey4me> we're moving away from it already - so let's continue to do that and do the simplest workaround for older releases which is to pin
16:53:36 <odyssey4me> I'll do those patches
16:53:38 <evrardjp> I am fine with a pin.
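    For reference, the pin amounts to fetching a fixed, known-good copy of get-pip.py rather than whatever bootstrap.pypa.io currently serves. A hedged sketch of the idea, with an illustrative variable name and URL rather than the actual pip_install patch:

        # user_variables.yml (sketch; variable name and URL are illustrative)
        # Point the role at a pinned copy of get-pip.py that still accepts the
        # options the offline-install path relies on.
        pip_upstream_url: "http://mirror.example.com/pinned/get-pip.py"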
16:53:42 <mnaser> odyssey4me: thank you.  may i assign to you
16:53:47 <mnaser> ?
16:53:48 <odyssey4me> yep
16:53:54 <mnaser> done
16:53:54 <mnaser> thanks
16:53:55 <evrardjp> as long as it gets fixed, we are already better :)
16:54:00 <mnaser> ++
16:54:00 <evrardjp> thanks odyssey4me !
16:54:02 <mnaser> #link https://bugs.launchpad.net/openstack-ansible/+bug/1778914
16:54:02 <openstack> Launchpad bug 1778914 in openstack-ansible "galera server binds to 0.0.0.0 address" [Undecided,New]
16:54:21 <mnaser> i ran into this a while back
16:54:34 <mnaser> i started doing some patches but i got tired cause everything listens to 0.0.0.0, not just galera.
16:54:57 <mnaser> and this stops potential 3-node aio metal deploys
16:54:59 <evrardjp> yeah I ran into it too when introducing the on metal testing
16:55:17 <mnaser> honestly, i think we should listen on specific interfaces
16:55:22 <evrardjp> I agree
16:55:32 <mnaser> we already know the ip addresses of br-mgmt or whatever the ip we'll access things on
16:55:33 <evrardjp> I don't think the proposed solution is a bad idea there.
16:55:36 <mnaser> and it's just better practice
16:55:53 <evrardjp> but this won't fix this issue
16:56:08 <mnaser> why not, isnt galera_wsrep_address the address that the server runs on
16:56:10 <mnaser> so it will be
16:56:14 <mnaser> bind-address = 1.1.1.1
16:56:16 <evrardjp> yeah that's fine
16:56:24 <evrardjp> but if you run haproxy too on the same node?
16:56:36 <mnaser> tbh haproxy should listen on the vip only
16:56:37 <evrardjp> what will be the internal vip address ip?
16:56:53 <evrardjp> ok
16:57:10 <mnaser> if vip == galera_wsrep_address then galera_wsrep_address == 127.0.0.1 and haproxy listen on vip
16:57:10 <evrardjp> so on your 3 node metal that would be a fix
16:57:22 <mnaser> if vip != galera_wsrep_address then galera_wsrep_address = galera_wsrep_address and haproxy listen on vip
16:57:23 <evrardjp> because the vip would be set to a different address
16:57:55 <evrardjp> mnaser: yeah I'd say that's kinda what I wrote
16:58:00 <evrardjp> in the comment
16:58:08 <mnaser> okay, confirmed low and we can suggest the fix?
16:58:12 * mnaser has no time to actually implement it
16:58:13 <evrardjp> mmm
16:58:31 <evrardjp> you are now bringing haproxy vars into a different role
16:58:33 <evrardjp> that's ugly
16:58:39 <evrardjp> but that's fine we've done that in the past
16:58:44 <mnaser> this would happen in integrated repo
16:58:51 <mnaser> the galera_wsrep_address stuff
16:58:54 <evrardjp> that would be fine as group var indeed
16:58:54 <mnaser> and haproxy will listen on vip regardless
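    A hedged sketch of the suggested group-var conditional; galera_wsrep_address and internal_lb_vip_address are the names used in the discussion, while the bind-address variable name is illustrative and may not match the galera_server role:

        # group_vars sketch for the integrated repo (variable names illustrative)
        # Bind MariaDB to the node's own management address, falling back to
        # loopback when that address is also the internal VIP haproxy binds to.
        galera_server_bind_address: >-
          {{ '127.0.0.1'
             if galera_wsrep_address == internal_lb_vip_address
             else galera_wsrep_address }}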
16:59:24 <mnaser> ok
16:59:31 <mnaser> confirmed low, posted a comment with a recommended solution
16:59:42 <evrardjp> ok great
16:59:44 <evrardjp> I have to go
16:59:50 <evrardjp> thanks everyone
16:59:54 <mnaser> #link https://bugs.launchpad.net/openstack-ansible/+bug/1778586
16:59:54 <openstack> Launchpad bug 1778586 in openstack-ansible "aio_lxc fails on openSUSE Leap 42.3: package conflict between gettext and gettext-runtime" [Undecided,New] - Assigned to Jean-Philippe Evrard (jean-philippe-evrard)
17:00:19 <mnaser> looks like that one is already assigned
17:00:27 <mnaser> and it is being discussed with hwoarang and evrardjp :)
17:00:42 <mnaser> we're at time but if anyone has any super pertinent stuff we can quickly talk over?
17:01:36 <mnaser> ETIMEOUT
17:01:42 <mnaser> thanks everyone
17:01:44 <mnaser> #endmeeting