16:00:05 #startmeeting openstack_ansible_meeting
16:00:08 Meeting started Tue Jun 26 16:00:05 2018 UTC and is due to finish in 60 minutes. The chair is mnaser. Information about MeetBot at http://wiki.debian.org/MeetBot.
16:00:09 #topic rollcall
16:00:09 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
16:00:11 The meeting name has been set to 'openstack_ansible_meeting'
16:00:15 o/
16:00:18 Logan V proposed openstack/openstack-ansible master: Bootstrap lxc_net mtu for gate https://review.openstack.org/557484
16:00:19 prometheanfire apologies, but I'm way out of context to be able to grok what you're talking about
16:00:21 o/
16:00:29 o/ for 5 minutes
16:00:35 o/
16:00:43 Tahvok: anything in particular you want us to start with if you're around for 5? :-)
16:00:46 o/
16:00:48 o/
16:01:02 mnaser: no, you've already answered my question
16:01:06 wonderful
16:01:07 o/
16:01:10 Thanks a lot btw..
16:01:13 o/ for 10-15 min
16:01:16 thank YOU :-)
16:01:18 #topic last week highlights
16:01:22 mnaser: sorry for the delay, my venvs are busted.
16:01:28 evrardjp: np, we have some time
16:01:34 so, updates on my side
16:01:41 o/
16:01:44 the big breakages removing epel have mostly been resolved
16:01:55 transition to rdo is pretty much complete, we use it in our jobs now entirely (yay)
16:02:01 w00t!
16:02:03 rdo pushed up uwsgi so distro jobs are working
16:02:14 which should help hwoarang's efforts in adding distro support
16:02:45 ci jobs cleanup is still in progress, heba from my side has patches up for all projects to update/add the openstack-ansible-role-jobs project-template to make managing those jobs easier and more centralized
16:02:51 o/ for 10 minutes
16:03:01 you can follow it here https://review.openstack.org/#/q/owner:%22Heba+Naser+%253Cheba%2540vexxhost.com%253E%22+is:open (we have been looking at unbreaking roles that are breaking out of that)
16:03:33 we merged a whole bunch of mirrors so our jobs should be more reliable (percona being a bad one). mariadb is wip with infra (mirrors are almost up) and that will be a big unreliable one which will be fixed :)
16:03:46 i broke the world here - https://review.openstack.org/#/c/578086/ -- so thanks to odyssey4me for fixing my mistakes :)
16:04:02 cloudnull's been fixing nspawn stuff, which is cool because we can start landing fixes on that side (another yay)
16:04:21 and finally, packethost was having issues regarding mtu/checksumming/etc. logan- seems to be digging into that (i totally dragged you into that one, sorry)
16:04:34 packethost has been disabled, so timeouts should be gone, but i think logan- was talking about reviving it
16:04:53 o/
16:04:55 now that's all for my monologue, anyone else have anything exciting they're working on from the past week?
16:05:00 * cloudnull is late but here :)
16:05:07 thanks muchly both mnaser and logan- for climbing into that one
16:05:15 what is packethost?
16:05:18 mostly logan-, i just did the complaining
16:05:22 the cloud provider or something else?
16:05:31 hwoarang: packethost is a bare-metal provider that we have a cloud deployed on, which was recently added to infra
16:05:37 aha ok
16:05:41 https://www.packet.net/
16:05:50 thanks!
16:06:54 yep we should be able to get it working with https://review.openstack.org/#/c/557484/
16:07:04 o/
16:07:20 the uplink interface has the correct mtu from dhcp, but the bridges we build always assume 1500, so the tl;dr is just to set the lxc_net bridge to whatever the uplink is
16:07:40 Albert Mikaelyan proposed openstack/openstack-ansible-os_nova master: Add qemu-kvm to package list for ubuntu-16.04 https://review.openstack.org/578140
16:07:44 yeah i figured that was the reason why it occurred
16:07:46 mnaser: done ^
16:08:06 Tahvok: thank you
16:08:12 I'm out guys. Will try to move my tennis session, so I can be here for the full hour with you next time
16:08:19 yep, seems sensible to me - it'll also help people with AIOs in other environments where the MTU is slightly lower than usual (private openstack clouds, for example)
16:09:11 https://review.openstack.org/#/c/576884/3
16:09:28 can i have eyes on that, it's the patch below setting upgrade jobs to nv
16:09:35 so that will help push it into the gate so we can unblock gates
16:09:51 (while we wait for the bugs list to go up :p)
16:10:04 https://review.openstack.org/#/c/577885/3 would also be cool to prepare things for the mariadb mirrors work
16:10:17 mnaser: the bug list is up
16:10:34 sweet, thanks evrardjp. i'll do my homework on time next time.
16:10:57 i'll move to bug triage but please any cores have a look at the two changes above so we can unblock our gates
16:11:02 #topic triage
16:11:07 #link https://bugs.launchpad.net/openstack-ansible/+bug/1778663
16:11:08 Launchpad bug 1778663 in openstack-ansible "Pike→Queens upgrade fails on final step in run_upgrade.sh (running haproxy-install.yml)" [Undecided,New]
16:12:19 mnaser: it's my fault, my code isn't UTF8 ready and ^ bug causes issues in python3.
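[editor's note] The bridge-MTU fix discussed above (making the lxc_net bridge match the uplink instead of assuming 1500) boils down to a one-line override. A minimal sketch, assuming the lxc_hosts role exposes an `lxc_net_mtu` variable - check the role's defaults for the actual name before using this:

```yaml
# /etc/openstack_deploy/user_variables.yml (sketch; variable name assumed)
# Match the lxc_net bridge MTU to whatever the uplink interface learned
# from DHCP, rather than letting the bridge default to 1500.
lxc_net_mtu: "{{ ansible_default_ipv4.mtu | default(1500) }}"
```

`ansible_default_ipv4.mtu` is a standard Ansible fact; the `default(1500)` fallback just preserves today's behavior on hosts where the fact is unavailable.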
16:12:28 https://github.com/openstack/openstack-ansible/blob/stable/queens/scripts/run-upgrade.sh mentioned file
16:12:46 I think there's a patch for that already - hang on a sec
16:12:55 oh it's missing ${UPGRADE_PLAYBOOKS}
16:12:59 https://github.com/openstack/openstack-ansible/blob/stable/queens/scripts/run-upgrade.sh#L203
16:13:18 https://review.openstack.org/569857
16:13:29 wait no, that's not it
16:13:32 (at my comment)
16:13:34 oh, unless I'm making a broad assumption without reading :o
16:13:41 ahhhh
16:13:44 no i think that might be it
16:14:05 I am not sure we should use tags there
16:14:23 evrardjp: the user seems to report that everything worked well when they did
16:14:40 tags aren't essential for that, but yeah - that patch fixes it
16:14:58 I can modify the commit to add the bug ref if that'd be good for everyone?
16:14:59 https://github.com/openstack/openstack-ansible/commit/21739027606df272f8caff0a4b36f5d2bd82681b
16:15:08 odyssey4me: yeah
16:15:13 odyssey4me: do that quickly and we'll +A
16:15:34 Jesse Pretorius (odyssey4me) proposed openstack/openstack-ansible stable/queens: Upgrade task missing equals on tag https://review.openstack.org/569857
16:15:46 I think that's when the upgrade started to fail. And I can definitely say it's during my "experiment month"
16:16:07 ok, done - although as I did it there were very many votes added :p
16:16:15 can i get another core to +A that ^
16:16:22 * mnaser pokes spotz again
16:16:30 or that :p
16:16:38 ok cool
16:16:44 done already
16:16:46 oh sure....
16:16:47 #link https://bugs.launchpad.net/openstack-ansible/+bug/1778586
16:16:47 Launchpad bug 1778586 in openstack-ansible "aio_lxc fails on Leap 42.3" [Undecided,New]
16:16:53 reviewed.
16:17:07 hmm
16:17:35 hwoarang: any idea about the above? ^
16:17:41 I vote for hwoarang :)
16:17:43 evrardjp: do we bug you about suse stuff or not yet
16:17:44 :P
16:17:57 haha. good question.
16:18:06 I have the env still up and can provide more details if necessary
16:18:12 ofc you can : )
16:18:37 I can also try to debug this myself, but I need someone to hold my hand ;)
16:18:56 hmm, looks like a package conflict unless I'm reading it wrong?
16:19:03 odyssey4me: yeah that's what i see too
16:19:07 but our gates are ok for opensuse right now
16:19:08 Yes that's how I am reading it odyssey4me
16:19:15 so im not sure how the state changed
16:19:25 master hasn't moved that much since that patch
16:19:27 are you using any special mirrors or something
16:19:31 No
16:19:43 no corporate reverse or transparent proxy
16:19:46 I created a kvm vm and ran the normal AIO stuff
16:19:58 No, no procies
16:20:01 proxies
16:20:01 ok
16:20:18 i don't really know enough about zypper and friends :(
16:20:19 nicolasbock: can you put some repo info on the bug?
16:20:20 I will run a new leap42.3 with vbox when my host is set up
16:20:22 I will take it
16:20:25 Yes hwoarang
16:21:02 wonder why sysstat is required though
16:21:03 it's likely best for nicolasbock, evrardjp and hwoarang to partner up to figure it out, confirm and triage... perhaps we leave the current bug state and move on
16:21:04 cool, shall i assign it to you evrardjp? and it's nice nicolasbock is here too so we can do some more interactive debugging after the meeting (or whenever)
16:21:04 Matthew Thode proposed openstack/openstack-ansible-os_octavia stable/queens: Adds the issuer to the CAs https://review.openstack.org/578118
16:21:05 ?
16:21:12 sure
16:21:16 yup
16:21:18 odyssey4me: i agree to that, cool
16:21:21 Yes, sounds good
16:21:30 the dream team in action
16:21:30 evrardjp: did you see what I did there? ;)
16:21:50 #link https://bugs.launchpad.net/openstack-ansible/+bug/1778537
16:21:50 Launchpad bug 1778537 in openstack-ansible "LXC bridge issue with networkd on CentOS 7" [Undecided,New]
16:21:50 I attached the repo info hwoarang
16:22:09 odyssey4me: assign more work to ppl, great!
16:22:10 cloudnull: has been doing some networkd fixes
16:22:11 evrardjp, :+1:
16:22:23 cloudnull: do you think the above might already be fixed?
16:22:35 * cloudnull reading
16:22:48 well, in this case the person did the prep themselves - they're asking for docs of known issues, which I think is fair
16:22:49 that was a new issue tux was running into
16:23:01 I think tux is the reporter
16:23:54 maybe we should have this networkd behavior optional, and distro based
16:23:56 I'm not sure what the issue was in the environment. from what I can tell the networkd setup was done by hand, largely following mhayden's blog post.
16:23:57 can we say this is confirmed and assign to cloudnull, or was this resolved with the recent set of patches?
16:23:57 would be interesting to see iptables-save output from the host, ip addr / networkctl status output from the container, and see if lxc-net dnsmasq is running on the host
16:24:13 or maybe incomplete, and add what logan- is asking for?
16:24:14 im thinking either dnsmasq isnt running or the nat rule is missing
16:24:35 that gate seems to run fine with networkd as it stands today.
16:24:39 or started at a wrong point
16:24:54 perhaps we need a user story which discusses prepping the hosts with networkd config and shares the things that need knowing?
16:24:57 cloudnull: well we don't test what all deployers do -- rebooting things
16:25:00 so im not certain there's anything specifically wrong with networkd and centos
16:25:20 that's a good point odyssey4me
16:25:21 cloudnull: that's a gap in our testing we could cover in the future
16:25:35 in tux's case it wasn't working at all, never mind a reboot.
16:25:41 odyssey4me: I am fine with the user story
16:25:54 i dont think any other deployment tool tests rebooting things so we have other fun things to catch up on before that :P
16:25:59 because that's not part of the "osa" deploy itself
16:26:01 it was something off with lxcbr0 and NAT
16:26:17 cloudnull: would you be up for writing at least the skeleton of a user story given the networkd experience - I'm happy to make it pretty and have spotz make it prettier
16:26:19 yeah ive seen issues with dnsmasq not starting/restarting properly on eni->networkd conversions in lxc_hosts, but never issues on greenfield, so it seems like this will need some more diag
16:26:29 odyssey4me sure.
16:26:34 logan-: agreed with you
16:26:38 hehe
16:26:39 it's not the first one.
16:26:52 I have some tools running in the lab with an all-networkd setup
16:27:00 ok, so assign to cloudnull for now - I'd say low but confirmed
16:27:01 i can doc that
16:27:06 ++
16:27:07 yay so given the resolution for this is 'soft notes', can i assign to cloudnull ?
16:27:09 cool
16:27:17 yeah sounds nice
16:27:25 done!
16:27:32 #link https://bugs.launchpad.net/openstack-ansible/+bug/1778463
16:27:32 Launchpad bug 1778463 in openstack-ansible "[magnum][pike] Image upload in playbook does not work" [Undecided,New]
16:27:40 ahhhhhhhhh
16:27:43 i remember this
16:27:49 i pushed up a patch
16:27:59 ok great
16:28:03 that sounds like an easy fix
16:28:04 let me find it
16:28:20 https://review.openstack.org/#/c/543256/
16:28:51 Mohammed Naser proposed openstack/openstack-ansible stable/queens: Run openstack_openrc before Magnum installation https://review.openstack.org/578147
16:28:57 backport to stable/queens
16:29:49 alrighty, so note the link to the change and assign to you?
16:29:52 Mohammed Naser proposed openstack/openstack-ansible stable/queens: Run openstack_openrc before Magnum installation https://review.openstack.org/578147
16:29:58 odyssey4me: if you don't mind i added a closes-bug
16:30:05 ill take the +2 back and yep, assigned to me
16:30:22 for the last part: https://github.com/openstack/openstack-ansible/blob/feef46a4b4af249fd3d2a48d1fad9f248d9b82e8/playbooks/os-keystone-install.yml#L22-L29
16:30:25 juggling workflows lol
16:30:50 odyssey4me: did you finish https://github.com/openstack/openstack-ansible/commit/d8fcd1ae5378ab623b4ca02d10037514dba97e03 ?
16:30:56 applying to magnum too
16:30:58 sure, although I don't think jeepyb will do things to the ticket given that the review is targeted to stable/queens
16:31:20 it would do things to the bug if there was a series assignment, but OSA hasn't used those since newton IIRC
16:31:21 odyssey4me: Ok that made no sense to me:)
16:31:33 the one before newton about the ticket
16:31:51 hmm i see
16:31:51 odyssey4me: nevermind, it's defaulted
16:32:10 are we good on this one?
16:32:18 with backports, yes.
16:32:25 evrardjp: not just yet - need to figure out how to get https://review.openstack.org/568142 fixed up - then will do other roles
16:32:26 cools
16:32:42 #link https://bugs.launchpad.net/openstack-ansible/+bug/1778412
16:32:42 Launchpad bug 1778412 in openstack-ansible "Create OpenStack-Ansible requirement wheels" [Undecided,New]
16:32:50 spotz: will explain after
16:33:03 odyssey4me: thanks
16:33:12 user says
16:33:15 "Please close this one, i think i don't have permission to close."
16:33:22 will check if that's a real dupe
16:33:25 yay, free bug close points
16:33:44 looks like that was related to mismatching libvirt
16:33:49 which is something we've already resolved moving forward
16:33:56 yay
16:34:00 what do we put as cancelled
16:34:05 invalid?
16:34:07 we've been looking at updating the requirement for libvirt-python on stable releases
16:34:22 mnaser: it's a real duplicate
16:34:28 mnaser: so click on the top right
16:34:34 oh mark as dup
16:34:35 then 1730314
16:34:36 distros seem to update libvirt unlike other packages (which they pin more or less)
16:34:36 hmm, I *think* we got this figured out
16:34:38 then save
16:34:43 my memory is rusty
16:35:09 odyssey4me: ?
16:35:14 IIRC this was a CentOS issue, and we resolved it by switching to symlinking the host libvirt-python into the venv, right?
16:35:31 prometheanfire: yeah afaik centos just ignores what the constraints say..
16:35:35 we are marking it as duplicate
16:35:54 their comment: "most of our team builds libvirt, we know what we're doing" *shrugs*
16:35:58 this was the whole issue around centos using a different libvirt to everyone else
16:36:07 Bug 1730314 is already a duplicate of bug 1636567. You can only mark a bug report as duplicate of one that isn't a duplicate itself.
16:36:07 bug 1636567 in openstack-ansible "duplicate for #1730314 devstack mitaka installation fails with error "Running setup.py bdist_wheel for libvirt-python: finished with status 'error'" in Ubuntu 16.10" [High,Confirmed] https://launchpad.net/bugs/1636567
16:36:08 bug 1636567 in openstack-ansible "devstack mitaka installation fails with error "Running setup.py bdist_wheel for libvirt-python: finished with status 'error'" in Ubuntu 16.10" [High,Confirmed] https://launchpad.net/bugs/1636567
16:36:22 haha
16:36:23 mnaser: yeah you might have to track : )
16:36:29 dependency solving yay
16:36:36 #link https://bugs.launchpad.net/openstack-ansible/+bug/1778098
16:36:37 Launchpad bug 1778098 in openstack-ansible "os_horizon role fails if horizon_custom_themes is specified" [Undecided,New]
16:36:42 mnaser: yep, it was a cent issue
16:37:21 it looks like we somehow already have a fix for this
16:37:55 hmm this is weird
16:38:04 so if you configure a theme, it expects it to be uploaded
16:38:07 that is documented
16:38:11 o
16:38:15 just a sec
16:38:18 cool
16:39:10 yeah the docs are bad
16:39:20 https://docs.openstack.org/openstack-ansible-os_horizon/latest/
16:39:25 in defaults
16:39:33 it should be a more explicit thing
16:39:42 spotz: could you deal with this?
16:39:57 evrardjp: yeah
16:40:02 would updating the constraint in requirements help?
16:40:03 basically writing a story about how to use a custom theme
16:40:31 it should be based on https://docs.openstack.org/openstack-ansible-os_horizon/latest/ defaults' explanation, mixed with the user's input from the bug above
16:40:37 explaining another variable is required
16:40:55 it can be done in two steps: changing the defaults/main.yml to be more explicit, but also adding a complete user story
16:41:15 cool thanks
16:42:02 spotz: are you cool with taking this on? :p
16:42:15 apologies - took a little while to track down the patches which solved https://launchpad.net/bugs/1636567
16:42:15 Launchpad bug 1636567 in openstack-ansible "devstack mitaka installation fails with error "Running setup.py bdist_wheel for libvirt-python: finished with status 'error'" in Ubuntu 16.10" [High,Fix released] - Assigned to Jesse Pretorius (jesse-pretorius)
16:42:17 mnaser: Yeah, you know me I'll ask questions:)
16:42:47 ++
16:43:01 spotz: marking as confirmed, low and assigning to you :)
16:43:12 mnaser: okie:)
16:43:25 last one
16:43:27 #link https://bugs.launchpad.net/openstack-ansible/+bug/1778085
16:43:27 Launchpad bug 1778085 in openstack-ansible "Horizon health check fails when developer panels enabled" [Undecided,New]
16:44:14 ooou a regression perhaps
16:44:34 hahah I KNEW it.
16:45:08 I suggest we assign that to cloudnull who did the first patch with the renames :p
16:45:13 the workaround/fix seems relatively reasonable
16:45:18 lol
16:45:20 is that an ok fix?
16:45:22 mnaser: yes
16:45:31 if it is, it's a matter of just pushing the patch
16:45:34 man, those were done in mitaka - I'm surprised it's taken this long to surface
16:45:41 https://review.openstack.org/#/c/573318/
16:45:47 maybe that helps?
16:46:24 damn, you really want to install the elasticsearch pip package everywhere
16:46:44 cloudnull: that exposes the bug
16:47:02 i wonder why that wasn't caught in ci
16:47:13 mnaser: different config : )
16:47:36 I must say, installing the elasticsearch package everywhere by default makes me uncomfortable - no matter how useful it is, especially because the version override's also being done
16:48:10 is it intalled by default?
16:48:15 *installed
16:48:19 that way it will.
16:48:27 I'd really prefer that was more of an opt-in, personally... but the operators should really be chiming in.
16:48:31 if it doesn't hurt, i can't imagine it being too much of an issue. the long term resolution would be adding some sort of opt-in thing
16:48:43 mnaser: it can hurt for packagers
16:48:47 but i dont want to burden contributions that don't hurt things, personally
16:49:03 sure, that's why it can be opt-in and supported in certain types of deployment
16:49:13 https://review.openstack.org/#/q/topic:osprofiler+(status:open+OR+status:merged) looks to me like it went with adding by default, not opt-in
16:49:15 mnaser: I agree there.
16:49:24 odyssey4me: I asked for opt-in.
16:49:31 evrardjp https://review.openstack.org/#/c/573318/10/defaults/main.yml - that package is required
16:49:32 I didn't -2.
16:49:37 we don't have a huge team and we shouldn't start being super strict about these things that do "good" things at the end of the day
16:49:45 sure
16:49:50 the driver we added is for elk,
16:50:00 cloudnull: ok I will -2 it then.
16:50:02 without it it wont do anything
16:50:11 is it time that we have a generic 'I want these extra packages in every venv' variable that's used in all roles?
16:50:12 I am sad we arrived at that place
16:50:16 opt-in was the deal
16:50:26 that's no opt-in
16:50:28 it has to be there for horizon
16:50:31 err https://review.openstack.org/#/c/573526/3/defaults/main.yml what am i missing?
16:50:52 ok
16:50:57 i'll ask that we take a step back first
16:50:58 i thought those osprofiler patches all got changed to remove the elasticsearch pip install
16:51:11 can someone push the `horizon_server_name` change first
16:51:27 which is for https://bugs.launchpad.net/openstack-ansible/+bug/1778085
16:51:28 Launchpad bug 1778085 in openstack-ansible "Horizon health check fails when developer panels enabled" [Undecided,New]
16:51:30 jrosser: same. Which was okay for me
16:51:45 mnaser: well, the reporter asked if that was a suitable fix - we can simply respond affirmatively
16:51:51 it did. all other roles are not adding the elasticsearch plugin.
16:51:57 odyssey4me: ill push it up quickly :)
16:52:01 perhaps the reporter will submit the patch, and I think we should provide that opportunity
16:52:12 however the driver we built for horizon is elasticsearch specific
16:52:27 I reported the bug, I can submit a patch
16:52:28 cloudnull: I'd prefer if we write a user story about how this gets done. Because the implications are bigger than what you see.
16:52:29 let's leave it as a new bug (so it's discussed again next week), but respond to the reporter
16:52:46 ah, and there we have stuartgr1 :)
16:52:47 evrardjp I'd love to know more about what those implications are?
16:53:02 Mohammed Naser proposed openstack/openstack-ansible-os_horizon master: Switch to using ansible_fqdn in horizon_server_name https://review.openstack.org/578151
16:53:03 odyssey4me: oops
16:53:05 sorry
16:53:10 sorry stuartgr1 :<
16:53:21 mnaser abandon :) let stuartgr1 go ahead
16:53:30 odyssey4me: that's a good idea
16:53:32 cloudnull: ofc
16:53:41 stuartgr1: it's all yours :)
16:53:49 ok, thanks
16:53:50 please let me know if you need help submitting the patch
16:54:02 stuartgr1 thanks for asking in the bug, but very often just pushing up a review with the bug reference will stimulate a faster response :)
16:54:10 #topic open discussion
16:54:17 odyssey4me: ++
16:54:35 do we want to discuss the issue regarding elasticsearch in os_horizon
16:54:48 it looks like cloudnull agreed to drop it in all venvs except for horizon, where it seems to be a requirement
16:54:48 the ironic role needs to be overhauled
16:55:03 evrardjp cloudnull if we're ok with going with elasticsearch as the 'opinionated' way of doing profiling, then no opt-in is fine... adding opt-in sometimes adds more code for little benefit
16:55:14 i think cloudnull made a fairly ok compromise by dropping it, and having it in one role isn't that huge of a deal. :x
16:55:22 prometheanfire: what type of work does it need right now?
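[editor's note] The `horizon_server_name` change proposed above (review 578151) amounts to switching one role default to an Ansible fact. A sketch of what the defaults change could look like; the fallback value here is an assumption, so check the review itself for the real diff:

```yaml
# os_horizon defaults/main.yml (sketch) - use the host's FQDN fact so the
# health check's Host header matches the Apache ServerName, instead of a
# hard-coded value. The default() fallback is illustrative only.
horizon_server_name: "{{ ansible_fqdn | default('horizon') }}"
```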
16:55:27 well, fine IMO :)
16:55:40 it's mainly in the hardware type stuff https://docs.openstack.org/ironic/latest/admin/upgrade-to-hardware-types.html
16:55:51 my first pass was at https://review.openstack.org/#/c/561277/
16:56:05 wow that's a big document
16:56:06 lol
16:56:54 my idea was to turn each classic driver into a key, with a list containing the rest of the values (hardware type, boot, deploy, inspect, management, power) as the value
16:56:55 odyssey4me: I disagree with you on the elasticsearch part, even if opinionated.
16:56:59 much of the overhaul is described in detail in that review and has nothing to do with the hardware/ironic changes - it's just role code style and simplification to make maint easier
16:57:02 +1 prometheanfire the ironic role does need an overhaul
16:57:40 odyssey4me: do we feel like that's an extra thing that prometheanfire has to deal with
16:57:45 as in: does it block his work? :x
16:57:50 the ironic role as it stands today is a mitaka implementation which was overly complex and has had no maint except that which was required
16:57:51 odyssey4me: sure, I was mainly seeking clarification about my plan to use that key/value(list) method for setting the keys
16:58:01 mnaser: I'm more or less the 'ironic guy' here :P
16:58:16 i'm all for improving things but it would be a bummer if prometheanfire gets stuck with having to clean it all up just to land their patch
16:58:29 (i know we'll be over time, but it seems like we got some stuff to clear out)
16:58:31 prometheanfire: yep, I think using a key:value mechanism was the suggested approach - following what's been done in nova/neutron/etc
16:58:32 mnaser: there is that, but I'm willing to do the work
16:58:43 oh cool, i'm happy to hear that
16:58:45 odyssey4me: cool, that's the main guidance I was looking for
16:58:48 i'm happy to help with reviews.
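[editor's note] prometheanfire's classic-driver-to-key plan could be laid out roughly as below. The variable name and the interface values are illustrative guesses, not the role's actual contents - see review 561277 and the upgrade-to-hardware-types admin guide for the real mapping:

```yaml
# Hypothetical defaults layout: each classic driver becomes a key whose
# value carries the replacement hardware type plus the per-interface
# choices (boot, deploy, inspect, management, power) discussed above.
ironic_driver_types:
  agent_ipmitool:
    hardware_type: ipmi
    boot: pxe
    deploy: direct
    inspect: inspector
    management: ipmitool
    power: ipmitool
  pxe_drac:
    hardware_type: idrac
    boot: pxe
    deploy: iscsi
    inspect: inspector
    management: drac
    power: drac
```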
16:59:12 mnaser: unfortunately the role has reached a state where it can't pass tests, so it needs the overhaul to even get a pass
16:59:26 odyssey4me: ugh, that's a bit of a bummer
16:59:42 i can have a look and hack on a thing or two, but that would be on personal time (probably weekends)
16:59:45 for the elasticsearch profiling, I think it's a very good user story, and we should explain how it's done anyway. We just give an extra override for horizon. Until it's made opt-in in the role and needs a simple boolean flip
16:59:49 overhauling the structure first, then implementing the deprecation changes would be a good approach - so that the overhauls can be backported
17:00:02 that would be nice
17:00:04 but hard to do
17:00:16 backporting overhauls is something we need to accept as a community too
17:00:28 so hear me out here
17:00:28 prometheanfire: I'd even suggest disabling voting for the functional testing for the overhaul parts, then building up a chain of patches which does the overhaul in small bits, with the updating last, which includes turning voting back on.
17:00:46 do we really need to backport the overhauls
17:00:52 no
17:00:59 if it wasn't working in old releases, do we expect users to show up now and use it
17:01:04 that's what I meant by "accept as a community too"
17:01:27 maybe that can reduce the workload on prometheanfire (at the expense of a new user who tries to deploy newton ironic and that not working)
17:01:28 mnaser in order to enable backporting other work to make ironic work in queens/pike/etc, yes - right now the role is unmaintainable
17:01:30 I think we don't have to actively backport it.
17:01:30 odyssey4me: it's deprecated in queens, not rocky
17:01:39 so the changeover has to be backported
17:01:44 oh i see
17:01:58 at least that's what I've seen, I probably need to reconfirm that
17:02:03 this is a tough one. i think we just have to do the hard/tough job of working on fixing that
17:02:14 mnaser: yep
17:02:21 yup it seems so
17:02:25 mnaser: specifically newton/pike/queens onwards matters to RAX... newton is fine as far as I know, pike+ is broken
17:02:30 the good part about it being unmaintained is that it should be easier to backport :P
17:02:42 pike was fine
17:02:45 odyssey4me: prometheanfire mnaser I think that's the important part
17:02:50 queens works with cjloader's patches
17:03:01 if it's important for RAX, as a member of the community, it can then bring the patches
17:03:04 I can help prometheanfire with the overhaul
17:03:14 we'll vote for it, as for any member of the community
17:03:17 prometheanfire: ok, that's good news
17:03:29 i know this might be unrelated but
17:03:46 i hate asking this but prometheanfire are you currently with rax?
17:03:49 that's great to see ppl at rax contributing to ironic : )
17:03:55 I'm done with the topic unless others have questions
17:03:58 mnaser: yarp
17:03:59 or at least os_ironic : )
17:04:01 ok cool!
17:04:04 cjloader: great
17:04:09 so the backports are in your best interest too
17:04:18 didnt want to have a load on you for something that you don't necessarily need :P
17:04:22 now
17:04:24 one more topic
17:04:41 good progress on the qdrouterd role, looking to land the base job if someone can take a look, https://review.openstack.org/#/c/575505/
17:04:56 i see a happy xenial job, yay
17:04:57 Merged openstack/openstack-ansible-tests master: Add repo_build pip cache https://review.openstack.org/576884
17:05:00 prometheanfire thanks for taking it on, and apologies for the burden, but trying to review any patches to ironic is a minefield and the overhaul will resolve that
17:05:01 i'll have a look shortly.
17:05:10 mnaser: thanks!
17:05:17 elasticsearch and horizon. i think cloudnull made an appropriate compromise in dropping it from all roles
17:05:22 mnaser: odyssey4me we don't need a happy xenial job for this
17:05:24 except horizon, where it does not seem to be possible
17:05:27 ansmith: *
17:05:38 odyssey4me: si
17:05:54 i don't think it's something that warrants a -2 at this point
17:06:17 mnaser: I see it as an obstacle for some deployers. We don't want to have obstacles :)
17:06:20 we're not in a state where we have full time contributors that we can ask for overhauls and adding opt-in systems for a contribution that is largely more of a part-time thing
17:06:24 prometheanfire feel free to ping me if you're stuck, I'm happy to partner up to get it to a better place
17:06:31 i don't see how it is an obstacle, an extra library in a venv
17:06:44 (with my deployer hat on, i don't even KNOW what libraries go in the venv)
17:06:46 if it works, it works
17:06:56 prometheanfire: same, I'd be happy to help
17:07:05 evrardjp what obstacle?
17:07:11 mnaser: it could be licensing.
17:07:18 its already part of global requirements
17:07:24 there is no licensing
17:07:25 cloudnull: I have noted my comments on the bug
17:07:47 odyssey4me: sure :D
17:07:52 if its part of global requirements then it cant have license problems. it's also apache-2.0 licensed so it's okay
17:08:00 ^
17:08:02 Merged openstack/openstack-ansible-tests master: Switch upgrade jobs to be non-voting https://review.openstack.org/577884
17:08:22 a -2 from a core is kind of an effective veto of "i will refuse to merge this"
17:08:27 not "i disagree with it"
17:08:46 if this is the stance we're going to take then we need to do a significant audit of the packages we install in the venvs.
17:09:02 cloudnull: I think we should do it indeed.
17:09:08 my concern is more around us overriding the version - putting us out of sync with g-r/u-c... but if the project's contributors are happy to take the risks that come with that, it's fine to me
17:09:16 we should also probably remove things like NFS from glance, and non-reference drivers from neutron.
17:09:21 cloudnull: this cleanup work was already started this cycle
17:09:35 well, doing that in theory sure sounds like adding more pain for operators..
17:09:42 we said we're going for new features next cycle, and cleaning this cycle. This doesn't seem aligned either
17:09:57 things happen. resources disappear. time disappears.
17:10:09 we've discussed this, we aren't all full time anymore so we have to understand that we need to operate in a different way
17:10:21 this could be handled differently if we all worked full time on OSA - very easy to dedicate the time to clear it all out
17:10:32 cloudnull: I will work on removing the non-reference drivers. That's sadly what I will have to do I think.
17:10:34 but given a lot of us are doing this 'on the side', we need to be a bit more flexible to help the project continue to progress
17:10:49 mnaser: I think you're right on the flexibility
17:10:51 removing the non-reference drivers makes me sad because we have users like OPNFV which use ODL to test things
17:10:55 well, perhaps we need to be more opinionated
17:11:11 and i was going to start pushing up some work to support opencontrail as an option
17:11:12 mnaser: but here, there is nothing that can't be done with a user story
17:11:15 we carry a very flexible framework, but more options means more work to maintain it
17:11:16 because of customer demand
17:11:34 evrardjp I think removing the non-reference drivers would be a waste of time, on top of the fact it'd be detrimental for adoption.
17:11:41 odyssey4me: thing is, we don't have to test and support them all. we can still test the base scenario, but let's not take away the out-of-the-box experience
17:11:54 OPNFV folks *love* OSA and use it for all their deployments, we'd hurt them if we took away this stuff.
17:11:55 we ideally need to try and implement facilities instead of specific things, as that means you get the basics and can plug other things in 17:12:04 what mnaser said, 17:12:09 cloudnull: it has a double effect: For me it shows the ability to have 3rd party extensions that can be written by everyone 17:12:30 mnaser: don't get me wrong, I don't want these to be removed 17:12:33 i just want us to scope back and look at the specific problem here which is: installing elasticsearch in the horizon venv. 17:12:53 most if not all non-reference drivers are installed optionally so it's not an apples to apples comparison 17:12:56 I want to make sure it's part of a way to behave with 3rd party ways if necessary 17:13:13 I mean for this specific case, wouldn't it be better to be able to set a flag to enable profiling, and have a var which sets which packages you want added to the venv? Instead of a hard-coded set of packages which you then have to remove if you don't want that thing? 17:13:14 agreed with logan- too : p 17:13:27 odyssey4me: agreed with you too. 17:13:33 odyssey4me: ++, that's what I was thinking 17:13:50 odyssey4me: i think evrardjp is ok with this, i'm just trying to not add extra workload on someone who's adding a productive useful change for 'optimizing' 17:13:55 I think we should have a profiling variable that can be flipped in group vars if necessary 17:14:07 kinda like "Hey i showed up to help with this thing but instead ended up with a giant load of extra optimizations and requests" 17:14:20 which again, totally cool if we're full time, but we're not, and so by doing that, that feature might just never land. 17:14:58 well, do you prefer to land a hurtful feature for some ppl, helping some others, OR not merging? 17:15:10 blast - I typed a bunch and pushed a bad key and lost it all 17:15:13 I'd prefer the patch to be okay for everyone 17:15:14 evrardjp what is this hurting ? 
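[Editor's note: the opt-in pattern odyssey4me suggests above (a flag to enable profiling plus a variable for the packages added to the venv, instead of hard-coding them) could look roughly like the sketch below in a role's defaults. All variable names here are hypothetical illustrations, not actual OSA variables.]

```yaml
# defaults/main.yml -- hypothetical variable names, sketching the opt-in idea.
# Profiling support is off unless the deployer flips this flag,
# e.g. in group vars or /etc/openstack_deploy/user_variables.yml.
horizon_osprofiler_enabled: false

# Packages to bake into the venv at build time when profiling is enabled;
# deployers can override this list without touching the base package list.
horizon_osprofiler_pip_packages:
  - osprofiler
  - elasticsearch

# Effective install list: the base packages plus the opt-in extras.
horizon_pip_packages_effective: >-
  {{ horizon_pip_packages
     + (horizon_osprofiler_pip_packages
        if horizon_osprofiler_enabled | bool else []) }}
```

With something like this, nobody who leaves the flag at its default gets elasticsearch in their venv, and anyone who enables profiling gets the library at venv build time, which matters given cloudnull's point later that horizon breaks cryptically when osprofiler is enabled but the lib is missing.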
17:15:16 don't see how it's hurtful 17:15:33 odyssey4me i hate it when i do that :) 17:15:37 it's even part of openstack requirements so packagers won't have too bad of a time 17:15:45 odyssey4me: so frustrating it makes you give up on rewriting things sometimes lol 17:16:06 basically I suggest a more generic facility for each role - something like neutron_extra_optional_packages 17:16:27 that can then be re-used for *anything* a deployer wants without having to override the default package list as they have to today 17:16:33 that's exactly what I suggested odyssey4me 17:16:36 odyssey4me: i agree, but who's gonna do that work... 17:17:09 cloudnull: do you mind posting this on the ML so that we can have a discussion where we can kinda think about stuff and formulate answers 17:17:20 it doesn't have to be implemented everywhere at once. just implement it in horizon for now so the pattern is established and can be implemented elsewhere as needed 17:17:27 just to kinda say "hey operators, what do you think about having elasticsearch in here" 17:17:36 it'd be massively easier to do after moving to using the common venv build role, and I can implement something like that as a follow-on - but there're no guarantees that'll make the Rocky release 17:17:55 and "hey cores, what are your thoughts about that?" 17:18:07 odyssey4me: I can help implement it 17:18:40 it basically already exists here: https://github.com/openstack/ansible-hardening/blob/1fd694a40c4d4013a2d407f043df99f3fc0e9e47/vars/redhat.yml#L63-L69 17:18:40 odyssey4me I added a way to inject any such package into the venv create and repo build role - https://review.openstack.org/#/c/574544/ https://review.openstack.org/#/c/574546/ 17:18:46 for now it'd be a bit of a slog, but an easy common pattern 17:18:51 except for distro packages. 
but that pattern could be easily adopted for extra pip packages 17:18:52 I'm about to knock two things off my plate this week so I'll have some cycles 17:19:20 oh that's awesome 17:20:06 cloudnull: hmm, lemme look at those - that could be a good interim solution 17:20:35 I'm not fond of taking intelligence out of the roles again, but if we can implement something that keeps it in the roles and uses another facility then that's cool 17:20:43 or at least is easy to transition later 17:20:50 ok so conclusion to this: cloudnull maybe post to ML to discuss this, d34dh0r53 to help add 'opt-in' pattern? 17:20:51 however with horizon, because the config exists in python, if the lib does not exist and osprofiler is enabled, horizon breaks in terrible, cryptic ways. 17:21:31 so I guess we could cause it to fail if the lib is not found in the venv and elasticsearch is present in the connection string? 17:21:33 cloudnull: nothing a doc can't fix, as we need a doc anyway 17:21:34 cloudnull: hey you back!!! how do i do a dry run of ansible on openstack ? 17:22:08 i want to make changes and see if it's going to the right place using a dry run 17:22:14 I'm happy to look at that horizon review more closely when I have a fresh head. Right now it's the end of the day. 17:22:20 tux_: we're in the middle of a meeting, should be able to talk soon about that :) 17:22:29 :) enjoy! sorry 17:22:31 I suspect we can find a happy medium 17:22:43 ++ 17:22:49 what's the review again? 17:22:56 https://review.openstack.org/#/c/573318/ 17:23:03 primarily https://review.openstack.org/#/c/573318/10/defaults/main.yml 17:23:05 evrardjp a doc can't fix this issue. 17:23:19 the lib has to be present at venv build time. 17:23:35 fwiw all other projects got 'osprofiler'; this one is the one 'odd' case which i'm okay with making an exception for 17:23:39 we don't run pip install when the venv is available on the repo server. 
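[Editor's note: the generic facility odyssey4me proposes (`neutron_extra_optional_packages`), reusing the conditional-variable pattern from the linked ansible-hardening vars file, could be sketched as below. The variable names other than `neutron_extra_optional_packages` are illustrative, not the actual OSA implementation.]

```yaml
# defaults/main.yml -- empty by default, so role behavior is unchanged
# unless a deployer explicitly opts in.
neutron_extra_optional_packages: []

# /etc/openstack_deploy/user_variables.yml -- a deployer adds extras
# without copying and overriding the entire default package list:
#
#   neutron_extra_optional_packages:
#     - networking-odl
```

The role's venv build task would then install the union of the base list and the extras (e.g. `{{ neutron_pip_packages | union(neutron_extra_optional_packages) }}`), which keeps the intelligence in the role, matches the "except for distro packages / extra pip packages" distinction above, and ensures the extras are present at venv build time rather than pip-installed after the fact.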
17:24:04 cloudnull at a glance, could we have something like an 'optional' set of packages (much like neutron does) which puts those packages there at runtime if that option is enabled? 17:24:14 cloudnull: that can be in /etc/openstack_deploy like all the rest of the user_variables. 17:24:33 evrardjp so the user has to override the entire package list? 17:24:46 can we please move this to a mailing list discussion, as i think it's more productive there? 17:24:51 omg that could be fixed by using a temp variable in vars/ 17:24:57 that's nothing 17:25:00 i'd like to wrap up the meeting 17:25:07 convenience has an end 17:25:08 odyssey4me yes. though that's a lot of extra tasks to accomplish something extremely simple, with no known impact. 17:25:08 I'd like to eat 17:25:24 please? :) 17:25:27 yeah 17:25:31 thanks for chairing 17:25:33 if not, i'll post a message 17:25:38 quoting the meeting 17:25:39 mnaser sure. 17:25:41 and people can read 17:25:42 yay 17:25:45 thanks 17:25:47 that's much better 17:25:50 have a good day. 17:25:51 THANK YOU EVERYONE :) 17:26:00 #endmeeting
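[Editor's note: tux_'s dry-run question was deferred until after the meeting; for reference, Ansible's check mode is the standard answer. Playbook names and the limit group below are illustrative. Check mode reports what would change without changing it, though modules that don't support check mode may be skipped, so the result is an approximation of a real run.]

```shell
# Report what would change without changing anything; --diff additionally
# shows the content differences for files and templates.
ansible-playbook os-horizon-install.yml --check --diff

# The openstack-ansible wrapper passes its arguments through to
# ansible-playbook, so the same flags work there:
openstack-ansible setup-hosts.yml --check --diff --limit horizon_all
```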