16:00:20 #startmeeting openstack_ansible_meeting
16:00:21 Meeting started Tue Jul 3 16:00:20 2018 UTC and is due to finish in 60 minutes. The chair is mnaser. Information about MeetBot at http://wiki.debian.org/MeetBot.
16:00:22 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
16:00:24 The meeting name has been set to 'openstack_ansible_meeting'
16:00:27 #topic roll call
16:00:30 o/
16:00:45 o/
16:01:16 quiet day today eh
16:01:35 o/
16:02:00 alright, we can get started now. hopefully people hop on shortly
16:02:14 yup
16:02:18 thanks for leading this mnaser
16:02:21 :)
16:02:26 so xenial has no rocky packages
16:02:43 bionic has rocky packages
16:03:10 bionic should be queens indeed
16:03:12 i'm wondering what is the best way to go about this
16:03:26 will xenial have rocky packages
16:03:28 I feel like we should stay close to canonical's way
16:03:36 mnaser: no
16:03:39 well
16:03:42 not that I am aware
16:03:43 even on release?
16:04:15 https://wiki.ubuntu.com/OpenStack/CloudArchive
16:04:25 "Starting with the Ubuntu Server 16.10 release, newer releases of OpenStack are available via Cloud archive for the Ubuntu Server 16.04 LTS release. Newton and Pike will be supported for 18 months each, and Ocata for 36 months. Queens, 18.04's OpenStack version, is supported in the Cloud Archive for 3 years, i.e. until the end of the Ubuntu 16.04 LTS lifecycle."
16:04:26 that's what we heard in the past, but we can maybe nicely ask
16:04:46 so it looks like queens seems to be the latest release
16:04:54 yeah that's what I said above
16:05:02 bionic should be queens
16:05:12 and rocky
16:05:19 so we need to add bionic support for queens
16:05:21 and then
16:05:23 xenial should stop at queens
16:05:25 drop xenial in rocky
16:05:31 no, xenial will not have rocky packages
16:05:32 Have we even started testing 18.04?
16:05:37 spotz: kinda.
16:05:40 we have asked, and it's been answered
16:05:49 ok, so what do we think about
16:05:56 I have started the work, but I got 0 cycles for it.
16:06:01 pushing work to bring 18.04 in rocky right now
16:06:10 and then backporting the 18.04 patches to stable/queens
16:06:21 some others took over, so thanks everyone joining the effort : )
16:06:41 it does seem like a big undertaking though
16:06:59 cores: do we feel comfortable backporting 18.04 support to stable/queens ?
16:07:00 I am not sure it's hard, but it takes time and manpower
16:07:14 if we agree that this is important, then we can ask everyone to stop feature work for a week and focus just on bionic enablement for master
16:07:27 i think it's mostly going to be backports of vars/ubuntu-18.04.yml and adding jobs
16:07:33 I did see d34dh0r53 also volunteer himself to get onto doing bionic work
16:07:48 d34dh0r53: what's the status there then?
16:07:54 maybe mattt can help?
16:08:05 mnaser: yep, if we can get master working for roles, porting back will be reasonably trivial
16:08:13 evrardjp: just started looking at it, networking is going to be a challenge
16:08:21 okay, so we're okay with backporting 18.04 support to queens
16:08:23 oh no, why's that?
16:08:49 I think that might be the best plan of action
16:08:57 oh dear, would we have to also port back the systemd-network patches too?
16:09:23 odyssey4me: yes
16:09:25 i dont see why that would be necessary? is it because systemd_networkd is required for bionic?
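The vars/ubuntu-18.04.yml files mentioned above are the per-distribution variable files each role loads, which is why the backport is expected to be mostly mechanical. A minimal sketch of such a file follows; the variable and package names are illustrative assumptions, not the contents of any actual OSA role.

```yaml
---
# vars/ubuntu-18.04.yml -- illustrative sketch only; the real files
# differ per role. Variable and package names here are assumptions.
example_role_distro_packages:
  - bridge-utils   # often the same package set as xenial...
  - python3-dev    # ...but bionic defaults to python3
example_role_service_name: example-service
```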
16:09:44 bionic no longer uses the old network config system IIRC
16:09:48 ugh
16:09:56 that starts becoming a very heavy backport
16:10:11 it's one of the reasons we standardised on networkd instead - one method across all distributions
16:10:24 and yes that makes it super painful in terms of backporting
16:10:31 as a non-ubuntu user, i dont want to impede on progress but that's a pretty fundamental change across releases i guess
16:10:44 git-deps should help on the backport
16:10:50 we'd probably have to issue a .1 release to signify the change
16:10:56 no doubt.
16:11:04 o/ sorry late
16:11:04 but it also means that we need to prep them all and do it in a single release
16:11:06 * cloudnull reading back
16:11:19 it also means that we have to pay more attention to those roles
16:11:23 do they even have a stable/queens branch?
16:11:30 we can't afford it to take more time than a single release
16:11:36 ++
16:11:39 Andy Smith proposed openstack/openstack-ansible-os_ceilometer master: Update to use oslo.messaging service for RPC and Notify https://review.openstack.org/579909
16:11:54 it sucks that ubuntu put us in this situation
16:11:58 mnaser: those roles won't need a stable/queens branch I don't think - although we could make one if we need it
16:12:09 no amount of planning would have made us avoid this
16:12:18 cause 18.04 didn't exist in the queens cycle i think
16:12:26 it did, at the end
16:12:38 I am not sure we _need_ to backport if we take a gutsy decision to not backport
16:12:39 ah
16:12:41 but yes it does suck a bit
16:13:01 at the very end yes.
16:13:09 if we don't backport.. could we ask that users upgrade to bionic before upgrading to rocky?
16:13:11 well no
16:13:15 ok, if we decide to carry rocky on xenial ourselves (much like we did for trusty for newton), then we do that at our own risk
16:13:18 I am wrong
16:13:25 bionic arrived after queens
16:13:27 but then we can't do distro installs for xenial
16:13:46 o/ late
16:13:47 distro installs in xenial is probably 'not supported' and shouldn't be anyways
16:13:50 we do then need bionic for rocky asap so that the distro install work can be done for it
16:13:54 but we have another option than backporting bionic.....
16:13:55 distro installs should be for the supported os
16:14:01 logan-: oh?
16:14:05 we could think about doing what we did in newton:
16:14:12 https://github.com/openstack/openstack-ansible-pip_install/blob/4e708955be6675af8195c98d1cf543bb40f7757e/vars/ubuntu-14.04.yml#L17
16:14:12 https://github.com/openstack/openstack-ansible-pip_install/blob/4e708955be6675af8195c98d1cf543bb40f7757e/vars/ubuntu-16.04.yml#L17
16:14:13 Ok so can we have the cutoff for Xenial be Queens even, then say upgrade to Bionic before upgrade to Rocky?
16:14:19 +1 logan-
16:14:28 yeah that's what I just said above :)
16:14:29 i think that was the idea that odyssey4me had suggested too?
16:14:33 and evrardjp
16:14:38 oh sorry evrardjp behind on backscroll :(
16:14:39 logan- yep, so we take the risk of being the only project doing rocky on xenial
16:14:44 so rocky bionic packages in xenial
16:14:55 not too bad, but then we still need to get bionic going so that the distro install work can be tested and working on it
16:14:57 odyssey4me: yeah for newton trusty was really just a jumping off point into xenial it seemed like
16:14:57 just getting caught up, but backporting bionic to queens would be a massive undertaking.
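The two pip_install vars files logan- links above show the newton-era precedent: each Ubuntu release pins its UCA pocket to the newest OpenStack release actually packaged for it. A minimal sketch of the same idea applied to the rocky cycle; the variable name uca_openstack_release is an assumption modeled on that precedent, not a confirmed setting.

```yaml
---
# Illustrative sketch only -- the variable name is an assumption.
# vars/ubuntu-16.04.yml: xenial stays on the last UCA pocket that
# exists for it, even while rocky itself is installed from source.
uca_openstack_release: queens

# vars/ubuntu-18.04.yml would carry the current cycle instead:
# uca_openstack_release: rocky
```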
16:15:00 it's a decision we need to take
16:15:00 and cross our fingers for no package conflicts i guess
16:15:12 ++ odyssey4me yes
16:15:20 for newton I was only running trusty for about 2-3 days
16:15:21 distro install or source install, bionic needs to be done
16:15:26 mnaser on source based installs its less of an issue.
16:15:28 for me, I'd say, the easiest (as we discussed this at a ptg), is to bring bionic on rocky, and make rocky the platform for jumping
16:15:43 we'll have quirks
16:15:53 question for operators
16:16:01 how hard is it to upgrade to bionic
16:16:06 ok, so then we're saying that we're going to do source-installs of rocky on xenial & bionic, and distro installs for ubuntu will be bionic only
16:16:06 then upgrade queens to rocky
16:16:20 spotz: had a point but im not an ubuntu user
16:16:28 I always do fresh installs of operating system and OpenStack
16:16:29 so i dont want to decide/speak on behalf of those operators
16:16:36 I'm a bad use case:)
16:16:44 Merged openstack/openstack-ansible stable/queens: Fix loop variable name for nested loop https://review.openstack.org/579489
16:16:51 mnaser: the release upgrade script for trusty->xenial was horribly broken. the early recommendation from the osa community was reimage your systems
16:16:52 so something that literally blocks OSA from running if containers are not bionic
16:16:52 odyssey4me I think this will almost always be the case with distro installs.
16:17:25 logan-: ah i see... i'm just wondering if maybe we're trying to solve an issue that can be delegated down to the operator
16:17:34 we're kinda stuck on that front given the packagers will dictate what OS a given release can be deployed on
16:17:44 or with a simple OS upgrade
16:17:54 at the PTG we should discuss with ubuntu how we prevent getting into this situation again - we'd need access to bionic earlier so that we can do the same transition releases
16:18:02 simple is a simplification but generally ubuntu seems painless to upgrade
16:18:17 odyssey4me: maybe we should test using 17.10 or intermediary releases
16:18:28 dunno if those had testing packages at the time
16:18:34 the main pain point with trusty->xenial was the conversion from upstart to system
16:18:34 I suggest for source installs Q (xenial) - R (xenial & bionic) // distro packages R (bionic)
16:18:38 **systemd
16:19:00 I have not done a X > B upgrade but I suspect it'll be a lot better experience
16:19:01 evrardjp: yep, that's my conclusion from this discussion
16:19:18 so does everyone seem to be in sorts of agreement that
16:19:21 we're too deep right now, so we just move forward - but the urgency for having bionic is raised
16:19:25 +1 evrardjp
16:19:25 ++
16:19:26 odyssey4me: I am sorry, but that's why jamespage talked to us during the PTG. He helped on the discussion.
16:19:42 ++
16:19:50 source installations in queens will be xenial only, rocky will use bionic packages in xenial + bionic support
16:19:58 and distro deploys won't support xenial because there are no packages
16:20:02 evrardjp: yes, I know - and that was helpful, but it was also too late for us to get anything in for bionic/queens for us
16:20:17 does that summarize a conclusion we mostly all agree to? ^
16:20:17 he raised the changes to us, and we had to deal with that -- we all are running with limited resources.
16:20:27 mnaser: *queens usa packages in xenial i think is what you meant there
16:20:31 uca*
16:20:42 Out of curiosity, do we get a heads up for SUSE and RedHat, or is that fedora and opensuse?
16:20:46 mnaser: nope
16:20:56 logan-: oh okay i see, so we're pinning those deps back rather than leaping forwards
16:20:59 mnaser: rocky distro installs will not be done at all
16:21:00 yes I switched order:)
16:21:29 mnaser: sorry, that was wrong :/
16:21:50 so rocky: distro install - only bionic, source install - xenial using queens packages, bionic using rocky packages
16:21:54 mnaser: we're going to do source-installs of rocky on xenial & bionic, and distro installs for ubuntu will be bionic only
16:21:55 is that correct for rocky?
16:22:00 yup
16:22:12 mnaser: yep
16:22:14 ++
16:22:21 that's how we did newton
16:22:35 #agreed rocky: distro install - only bionic, source install - xenial using queens packages, bionic using rocky packages
16:22:43 we'll have quirks, but that's fine for me. Else I'd not have proposed it 15 lines above :p
16:22:46 took a while but this was important
16:23:01 i can have some efforts on my side to push basic support here for this
16:23:01 agreed mnaser
16:23:13 but let's try and focus on this, especially if this is important for your use case
16:23:24 could rax bring more ppl to this? As this is very important to them.
16:23:31 (and help each other out too)
16:23:52 i think we can discuss later in terms of resources and follow up next week on status, im hoping we all see the importance of this
16:23:58 yeah
16:23:59 is it ok to move onto 2nd item?
16:24:06 fine for me
16:24:13 Andy Smith proposed openstack/openstack-ansible-os_magnum master: Update to use oslo.messaging service for RPC and Notify https://review.openstack.org/579645
16:24:18 evrardjp: TC has assigned two liaison ppl for OSA: smcginnis and mnaser. If we have any issue we should raise it to them; please tell evrardjp as he is currently prepping a mail for it.
16:24:27 * mnaser puts tc hat on
16:24:49 ORLY?
16:24:52 if there are any project issues, anything that you need advice with, or any info with the tc, i'm here. and smcginnis is my pair so you can contact him too
16:25:00 mnaser: as you can see, you'll have an email :p
16:25:09 so if there's anything, please feel free to reach out. and yes, i look forward to that
16:25:19 the tc is trying to reach out more to hear what's going on in projects :)
16:25:43 cloudnull: YARLY
16:25:55 I don't want to be the bottleneck of conversations. If ppl want to reach the TC directly, that is fine for me. Just keep me informed at least.
16:26:02 Hope the TC is fine with that too.
16:26:04 so, that was just an update, you can contact me, evrardjp or smcginnis
16:26:06 :p
16:26:08 yeppers, anytime anyone
16:26:15 cool
16:26:15 now at the cost of time, i'll move on to next
16:26:17 hwoarang: working on aio_distro_basekit scenario to test the combined result of distro installation. Little progress on Leap 15 enablement. Awaiting mariadb upstream to act on https://jira.mariadb.org/browse/MDEV-16444
16:26:18 (yay!)
16:26:31 i believe this is the check https://review.openstack.org/#/q/project:%5Eopenstack/openstack-ansible.*+AND+is:open,25
16:26:31 thanks hwoarang
16:26:33 err oops
16:26:35 https://review.openstack.org/#/c/579770/
16:26:58 so hopefully more work can be done to get through with this
16:27:20 but we are so close, i am excited that this work can be done this cycle, ill review that change on the centos side, looks like some nginx related failures
16:27:30 but ill be looking into that to hopefully be able to land this in time :)
16:28:04 I'd love to see it land in time.
16:28:07 and also on a side cool note
16:28:12 aio_metal centos-7 seems to complete
16:28:18 it looks like it failed on some volume backup stuff in tempest
16:28:22 but we at least get a full deployment!!!
16:28:26 woot
16:28:31 so we are so so so close to getting an actual green again
16:28:41 next-up
16:28:43 evrardjp: working on independent inventories
16:28:45 evrardjp: wanna talk about that^ ?
16:29:12 nothing new, but it could become important for my employer.
16:29:29 can i know what independent inventories is
16:29:31 i'm not sure what it is
16:29:40 if patches need to happen, I'd love to get them included in time.
16:29:57 mnaser: not using our dynamic inventory for integrated builds
16:30:05 sorry, internet outage
16:30:24 evrardjp: i see, so that will help 'scenario' builds right
16:30:24 using anything as source of inventory, so basically be explicit about the contract of what needs to be defined.
16:30:35 it will help many things
16:30:53 wonderful
16:30:59 i think we'd all gladly do those reviews :)
16:31:03 scenarios indeed, but I also hope for increased adoption of osa with an easier inventory.
16:31:19 indeed
16:31:25 it's the first step of a spec that was up
16:31:54 that's all I had to say
16:31:57 cool, i'm happy to see that progress
16:32:01 looking forward to seeing more of it coming :D
16:32:05 now last thing
16:32:07 evrardjp: no bump on master until https://review.openstack.org/#/c/574006/ is solved (on its way).
16:32:27 so i see the designate change merged 24 minutes ago
16:32:31 does that mean we're good for a recheck?
16:32:44 mnaser: I need to rebounce the shas in the patch
16:32:44 I think the SHAs all need to be updated.
16:32:46 Merged openstack/openstack-ansible-os_magnum master: Add systemd tags to include role https://review.openstack.org/578604
16:32:58 cool, so evrardjp will hopefully get that done and we'll be back in action :)
16:33:06 mnaser: yes.
16:33:14 cores please look out for that change so we can land that bump in no time
16:33:25 i'll dive into bug triage next-up
16:33:28 #topic bug triage
16:33:32 evrardjp: still waiting to see any of those inventory patches ;)
16:33:40 #link https://bugs.launchpad.net/openstack-ansible/+bug/1779707
16:33:40 Launchpad bug 1779707 in openstack-ansible "After manual upgrade Pike to Queens cinder does not work" [Undecided,New]
16:34:10 haha
16:34:10 at first i was thinking maybe he needs to cinder-manage update the host of the volumes
16:34:20 but creates fail.. so i'm not sure
16:34:33 I did ask this guy to provide more info
16:34:35 but it also looks like
16:34:40 I don't know for this bug. It sounds legit, odyssey4me you worked on this right?
16:34:42 the create actually happens in cinder-volume too
16:34:56 were his new services up or not, and where did the transactions stop
16:35:22 all he's done is register the same thing he said in channel, with no new info
16:35:28 lemme find the eavesdrop link
16:35:46 yea but assuming your new cinder-volumes came online it should schedule the create to one of them and the offline ones should only affect existing volumes right? so idk.. need more info like a cinder service list so we can see the states and stuff
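For the independent-inventory work discussed earlier in this topic, the "contract" would let a deployer hand OSA a plain static inventory instead of the dynamic one. A minimal sketch of what that could look like; the group names mirror common OSA conventions but are assumptions, since the spec was still in progress.

```yaml
# inventory.yml -- illustrative static-inventory sketch. Group names
# mirror common OSA conventions (e.g. galera_all) but are assumptions;
# the actual contract was still being defined in the spec.
all:
  children:
    galera_all:
      hosts:
        infra1:
          ansible_host: 172.29.236.11
    utility_all:
      hosts:
        infra1: {}
```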
16:36:04 found the issue for sahara, it's a (legitimate) change in openstack-ansible-os_nova but sahara does not support that yet
16:36:09 logan-: the note says the logs showed volumes actually being created
16:36:16 oh
16:36:34 oh wait
16:36:37 that might be cinder-api
16:36:39 dispatching
16:36:43 tosky: (we are currently in meeting, triaging bugs, but that sounds lovely, I am glad you found it out!)
16:36:44 or scheduler
16:36:45 yes, sorry
16:37:11 honestly this is not enough information to figure it out, we upgraded cinder to queens and i didnt really see issues :X
16:37:16 I mean - the fact is that we have no information about what agents were showing as running and have no information about whether the services were communicating or had errors.
16:37:25 uh, sorry
16:37:40 i guess now that the state is fixed we cant really 'do much'
16:37:43 let's move on, marking incomplete
16:37:54 it will reopen if answers are provided
16:38:01 yep, unless someone wants to test it out to try and replicate the issue
16:38:20 commented
16:38:21 incomplete
16:38:27 #link https://bugs.launchpad.net/openstack-ansible/+bug/1779633
16:38:27 Launchpad bug 1779633 in openstack-ansible "basic lxc host setup playbook fails on remove requiretty for sudo on centos" [Undecided,New]
16:38:29 sadly I don't have my long running queens anymore
16:39:13 we should probably, for that bug, check if sudo is installed first?
16:39:26 or install it by default.
16:39:40 check the comment
16:39:48 we try to install it
16:39:58 but it fails because the user seems to have some centos pinning stuff going on apparently
16:39:58 yeah
16:40:00 sorry
16:40:21 Merged openstack/ansible-role-systemd_mount master: Add release note link in README https://review.openstack.org/579072
16:40:38 i dunno how the user is creating centos 7.4 containers
16:40:47 Kevin Carter (cloudnull) proposed openstack/openstack-ansible-os_cinder master: Revert "Revert "Convert role to use a common systemd service role"" https://review.openstack.org/574817
16:41:00 i would like to put incomplete and ask the user to provide their user_variables
16:41:09 because they are clearly doing something weird around using older versions
16:41:13 and that is the root cause of this issue
16:41:50 thoughts?
16:42:13 mmm
16:42:26 me takes silence as approval :D
16:42:27 two things
16:42:44 mnaser ++ incomplete.
16:42:45 let's say the deployer has done something bad, the process still succeeded.
16:42:55 hmm
16:42:57 that is a bug
16:43:01 only "Remove requiretty for sudo on centos" failed
16:43:08 that is the bug
16:43:12 should we add
16:43:21 a set
16:43:26 set -e
16:43:28 errorfail or something like that
16:43:29 to the prep cache
16:43:31 yeah
16:43:57 Andy Smith proposed openstack/openstack-ansible-os_trove master: Update to use oslo.messaging service for RPC and Notify https://review.openstack.org/574789
16:44:00 ok
16:44:09 mnaser: https://github.com/openstack/openstack-ansible-lxc_hosts/blob/master/templates/prep-scripts/centos_7_prep.sh.j2#L2
16:44:37 wait
16:44:37 what
16:44:56 ooo
16:45:00 https://github.com/openstack/openstack-ansible-lxc_hosts/commit/0d8fa41d32667ea309947f6ef4643192570fd0b1
16:45:14 this was fixed in queens
16:45:19 but very difficult to backport
16:45:44 thanks for the hidden feature in cloudnull 's patch! :p
16:45:53 fixing da bug!
16:46:02 ok
16:46:03 ?
16:46:09 yeah worth backporting to me
16:46:09 fix resolved?
16:46:15 do we want to backport that whole thing?
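The set -e idea above makes the container prep script abort on the first failing command, so a broken sudo install fails the play immediately instead of surfacing later at the requiretty task. A minimal sketch of that behaviour as an Ansible task; the task name and script body are illustrative, and the real change lives in the centos_7_prep.sh.j2 template linked above.

```yaml
# Illustrative sketch only -- not the lxc_hosts role's actual task.
- name: Prepare the container image cache
  shell: |
    set -e                               # abort on the first failing command
    yum -y install sudo                  # a repo/pinning problem now fails here, loudly
    sed -i '/requiretty/d' /etc/sudoers  # instead of this later step failing mysteriously
  args:
    executable: /bin/bash
```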
16:46:17 which branch was it?
16:46:20 it would have to be a branch only change
16:46:29 well it's a bug
16:46:39 we can reimplement in queens though
16:46:39 did i make a booboo ?
16:46:43 that would be simpler
16:46:44 cloudnull: no you fixed one
16:46:49 cloudnull: no you did great
16:46:51 ha! that's a first
16:46:52 that's what i mentioned
16:47:01 in the bug
16:47:10 so confirmed low?
16:47:13 cloudnull: well. You hid a feature in a patch, so the commit message isn't great. But the fix is there! :p
16:47:25 mnaser: confirmed low
16:47:46 winning?
16:47:47 I meant I agree, not merely repeating
16:47:48 #link https://bugs.launchpad.net/openstack-ansible/+bug/1779534
16:47:48 Launchpad bug 1779534 in openstack-ansible "pip install offline fails with new version of get-pip.py" [Undecided,New]
16:47:51 cloudnull: indeed
16:48:06 sadly I expected this.
16:48:18 get-pip is not meant for production and we don't pin it.
16:48:25 Everybody said "it's fine"
16:48:50 I haven't confirmed this though.
16:48:56 but this could well happen.
16:49:34 https://pip.pypa.io/en/stable/user_guide/#installing-from-local-packages
16:49:44 it does look like it's still a thing?
16:50:33 for rocky onwards this will be much less of a thing, because we use get-pip a lot less
16:50:33 maybe get-pip isn't tested with that and broke it in its packaging? I don't know. I haven't tested it myself
16:50:38 weird
16:50:47 odyssey4me: yeah, distro packages.
16:50:56 curl -O https://bootstrap.pypa.io/get-pip.py && python get-pip.py -d => "no such option: -d"
16:51:07 wow, weird that they removed that arg :/
16:51:08 if this really is an issue, it's easy enough for us to pin to an older version as suggested for the stable branches
16:51:10 mnaser: so confirmed
16:51:26 I'd prefer if we pin
16:51:36 I'd say it's confirmed and medium
16:51:39 i think a pin makes our life easier
16:51:39 or high
16:51:50 because it prevents deploying
16:51:55 sets bad expectations
16:52:07 pinning or vendoring it in
16:52:09 anyone wanna push up the quick 1 minute patch for the pin?
16:52:13 looks like it's already provided too
16:52:30 the vendor-in is even easier for the offline installs, as ppl don't have to mirror the file.
16:52:38 but not as backportable though
16:52:46 I'd rather we didn't vendor
16:52:55 yeah
16:53:00 that's a whole other mess (imho)
16:53:01 we already vendor ssl certs
16:53:29 we're moving away from it already - so let's continue to do that and do the simplest workaround for older releases which is to pin
16:53:36 I'll do those patches
16:53:38 I am fine with a pin.
16:53:42 odyssey4me: thank you. may i assign to you
16:53:47 ?
16:53:48 yep
16:53:54 done
16:53:54 thanks
16:53:55 as long as it gets fixed, we are already better :)
16:54:00 ++
16:54:00 thanks odyssey4me !
16:54:02 #link https://bugs.launchpad.net/openstack-ansible/+bug/1778914
16:54:02 Launchpad bug 1778914 in openstack-ansible "galera server binds to 0.0.0.0 address" [Undecided,New]
16:54:21 i ran into this a while back
16:54:34 i started doing some patches but i got tired cause everything listens to 0.0.0.0, not just galera.
16:54:57 and this stops potential 3-node aio metal deploys
16:54:59 yeah I ran into it too when introducing the on metal testing
16:55:17 honestly, i think we should listen on specific interfaces
16:55:22 I agree
16:55:32 we already know the ip addresses of br-mgmt or whatever the ip we'll access things on
16:55:33 I don't think the proposed solution is a bad idea there.
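A minimal sketch of the get-pip pin odyssey4me volunteered to patch above; the variable name pip_upstream_url and the pinned ref are assumptions for illustration, not the actual stable-branch change.

```yaml
# user_variables.yml -- illustrative sketch only; the variable name
# and pinned ref are assumptions, not the actual stable-branch patch.
# Pinning to a known-good revision of get-pip.py shields offline
# deploys from upstream releases that drop options (like -d above).
pip_upstream_url: "https://raw.githubusercontent.com/pypa/get-pip/<known-good-ref>/get-pip.py"
```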
16:55:36 and it's just better practice
16:55:53 but this won't fix this issue
16:56:08 why not, isnt galera_wsrep_address the address that the server runs on
16:56:10 so it will be
16:56:14 bind-address = 1.1.1.1
16:56:16 yeah that's fine
16:56:24 but if you run haproxy too on the same node?
16:56:36 tbh haproxy should listen on the vip only
16:56:37 what will be the internal vip address ip?
16:56:53 ok
16:57:10 if vip == galera_wsrep_address then galera_wsrep_address == 127.0.0.1 and haproxy listen on vip
16:57:10 so on your 3 node metal that would be a fix
16:57:22 if vip != galera_wsrep_address then galera_wsrep_address = galera_wsrep_address and haproxy listen on vip
16:57:23 because the vip would be set to a different address
16:57:55 mnaser: yeah I'd say that's kinda what I wrote
16:58:00 in the comment
16:58:08 okay, confirmed low and we can suggest the fix?
16:58:12 * mnaser has no time to actually implement it
16:58:13 mmm
16:58:31 you are now bringing haproxy vars into a different role
16:58:33 that's ugly
16:58:39 but that's fine we've done that in the past
16:58:44 this would happen in integrated repo
16:58:51 the galera_wsrep_address stuff
16:58:54 that would be fine as group var indeed
16:58:54 and haproxy will listen on vip regardless
16:59:24 ok
16:59:31 confirmed low, posted a comment with a recommended solution
16:59:42 ok great
16:59:44 I have to go
16:59:50 thanks everyone
16:59:54 #link https://bugs.launchpad.net/openstack-ansible/+bug/1778586
16:59:54 Launchpad bug 1778586 in openstack-ansible "aio_lxc fails on openSUSE Leap 42.3: package conflict between gettext and gettext-runtime" [Undecided,New] - Assigned to Jean-Philippe Evrard (jean-philippe-evrard)
17:00:19 looks like that one is already assigned
17:00:27 and it is being discussed with hwoarang and evrardjp :)
17:00:42 we're at time but if anyone has any super pertinent stuff we can quickly talk over?
17:01:36 ETIMEOUT
17:01:42 thanks everyone
17:01:44 #endmeeting
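A minimal sketch of the conditional bind-address fix proposed at 16:57 above, expressed as an integrated-repo group var; galera_wsrep_address comes straight from the discussion, but internal_lb_vip_address and galera_bind_address are assumed names for illustration.

```yaml
# group_vars sketch -- illustrative only; names other than
# galera_wsrep_address are assumptions about the integrated repo.
# When the node also carries the internal VIP (the 3-node metal
# case), bind MariaDB to localhost and leave the VIP to haproxy;
# otherwise bind to the node's own management address.
galera_bind_address: >-
  {{ '127.0.0.1'
     if galera_wsrep_address == internal_lb_vip_address
     else galera_wsrep_address }}
```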