16:01:15 <evrardjp> #startmeeting openstack_ansible_meeting
16:01:16 <openstack> Meeting started Tue Sep 19 16:01:15 2017 UTC and is due to finish in 60 minutes.  The chair is evrardjp. Information about MeetBot at http://wiki.debian.org/MeetBot.
16:01:17 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
16:01:19 <openstack> The meeting name has been set to 'openstack_ansible_meeting'
16:01:34 <spotz> \o/
16:01:35 <evrardjp> #topic new bugs
16:01:44 <evrardjp> #link https://bugs.launchpad.net/openstack-ansible/+bug/1718188
16:01:45 <openstack> Launchpad bug 1718188 in openstack-ansible "Example layer 3 routed environment configuration limits VIP to POD1" [Undecided,New]
16:01:45 <prometheanfire> o/
16:02:08 <jrosser_> ahha thats mine
16:03:16 <jrosser_> so either ive misunderstood how the vip works, or it cant escape from the subnet infra1 is on
16:03:49 <evrardjp> the configuration doesn't match the description
16:04:13 <evrardjp> pod1_container: 172.29.236.0/24,   pod2_container: 172.29.237.0/24
16:04:35 <evrardjp> but https://docs.openstack.org/project-deploy-guide/openstack-ansible/pike/app-config-pod.html#network-configuration
16:04:40 <evrardjp> isn't consistent
16:04:45 <openstackgerrit> Merged openstack/openstack-ansible-os_ironic stable/newton: Fix typo in timeout  https://review.openstack.org/504969
16:04:53 <openstackgerrit> Merged openstack/openstack-ansible-os_nova master: Add always tag where it's needed  https://review.openstack.org/494386
16:04:57 <evrardjp> lb vip address could be updated to use a /22
16:05:12 <evrardjp> and description adapted
16:05:22 <jrosser_> not in a l3 example like that
16:05:34 <evrardjp> why not?
16:05:38 <logan-> yeah it'll need to be in the same L3 segment
16:06:06 <jrosser_> because the network l3 config only allows certain subnets to live in each pod
16:06:06 <evrardjp> not sure what you both mean here
16:06:12 <jrosser_> and they dont span pods
16:06:14 <logan-> the vip is limited to one pod
16:06:19 <jrosser_> so the vip cant float
16:06:29 <jrosser_> yes what logan- says
16:06:55 <evrardjp> ok let's take a step back first
16:07:10 <evrardjp> do you agree there is a mismatch between the code and the description ?
16:08:31 <logan-> i'm not sure which description. im just looking at the o_u_c config dump in that doc and the config is invalid
16:08:34 <jrosser_> im on my mobile on the train so its not ideal to look in great detail
16:08:42 <logan-> haproxy_hosts can only contain hosts in one pod
16:09:02 <jrosser_> the floating ip is a layer2 construct i think
16:09:13 <jrosser_> but the infra nodes have no l2 adjacency
16:09:16 <jrosser_> that is the issue
16:09:28 <logan-> we use vrrp (l2 failover) to fail the vip between hosts which will not work to fail the vip between l3 segments
16:09:39 <jrosser_> agreed
16:09:52 <jrosser_> i foresee similar problems for neutron routers in that example too
16:09:55 <evrardjp> I agree on that, but let's focus on one bug at a time?
16:10:04 <evrardjp> cidr_networks:
16:10:05 <evrardjp> pod1_container: 172.29.236.0/24
16:10:05 <evrardjp> pod2_container: 172.29.237.0/24
16:10:06 <evrardjp> pod3_container: 172.29.238.0/24
16:10:09 <evrardjp> pod4_container: 172.29.239.0/24
16:10:16 <mgariepy> asettle, replied to your comment.
16:10:20 <evrardjp> is obviously a mismatch with the rest
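(As a quick sanity check of the /22 idea from earlier, using the pod ranges quoted above; the one-liner below is only an illustration:)

    # list the /24s contained in a 172.29.236.0/22 supernet
    python3 -c "import ipaddress; print([str(n) for n in ipaddress.ip_network('172.29.236.0/22').subnets(new_prefix=24)])"
    # ['172.29.236.0/24', '172.29.237.0/24', '172.29.238.0/24', '172.29.239.0/24']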
16:10:26 <mgariepy> in : https://review.openstack.org/#/c/505289
16:10:31 <asettle> mgariepy: thanks! (I didn't want to message you here because of triage)
16:11:02 <asettle> (hence the -1 to grab attention)
16:11:04 <mgariepy> (sorry)
16:11:09 <evrardjp> then
16:11:12 <asettle> (+2a for you, sir)
16:11:23 <logan-> yeah evrardjp I see what you're saying re: the cidr_networks
16:11:27 <jrosser_> i intended this bug to be about the l2 vs l3 / vrrp issue
16:11:33 <evrardjp> ok
16:11:39 <jrosser_> i hadnt spotted anything else wrong with the config
16:12:00 <jrosser_> but i didnt look in great detail
16:12:22 <jmccrory> i'll take this and updated docs. you'll either still need some l2 subnet spanning across haproxy hosts, or to use an external lb
16:12:25 <evrardjp> jmccrory: are you there?
16:12:26 <jmccrory> update*
16:12:36 <jmccrory> evrardjp yes, just got on train and laptop
16:12:36 <evrardjp> yeah
16:12:55 <evrardjp> so here we plan for external adjacency on l2
16:13:12 <evrardjp> or alternatively we should do a completely different section
16:13:20 <evrardjp> but then we start talking about bgp
16:13:29 <evrardjp> it sounds more complex than just giving a disclaimer
16:13:31 <jrosser_> my 2p is that the infra nodes dont fit the pod model
16:13:42 <jrosser_> that should be limited to compute/storage
16:13:54 <evrardjp> why?
16:13:56 <jrosser_> and other things done for l2 infra
16:13:58 <admin0> isn’t it acceptable to have an l2 connectivity between different pods if you want the vips to run on any pod ?
16:14:07 <evrardjp> as long as haproxy is doing alright?
16:14:11 <jrosser_> no, because it's a l3 example
16:14:49 <jrosser_> anyway, could debate at length :) it doesnt work as is
16:14:56 <admin0> with multi pods, you can easily add 2-3 more machines as your haproxy host and then it can see all internal pods/networks
16:15:31 <jrosser_> yes
16:15:33 <admin0> decoupling haproxy from controllers (that live on a certain pod) might be a workaround / way to go
16:15:39 <evrardjp> as long as haproxy nodes would be the integration to the network, it should work, but I guess it all depends on the networks you'll use
16:15:59 <evrardjp> jrosser_: I suggest you propose a patch then :p
16:16:03 <jmccrory> just an example, no networks are going to be completely identical. i'll update the docs and we can probably move on
16:16:22 <admin0> haproxy nodes are the gateways via which the api/horizon are reached .. so if there are multiple pods, you need to ensure your haproxy hosts see them
16:16:38 <evrardjp> agreed. Modifying the docs telling the requirements for this, and update for consistency sounds great jmccrory
16:16:41 <admin0> and it can be on l3 ..   just the haproxies need to be in l2 for the vip/vrrp
16:16:53 <evrardjp> jrosser_ admin0 you can review the patch jmccrory will post :p
16:16:56 <jrosser_> l3 agents?
16:16:58 <jrosser_> ok
16:17:08 <admin0> ok
16:17:22 <evrardjp> marking it as confirmed and medium because the pod deployment is not the most often read item in the docs
16:17:25 <evrardjp> ok everyone?
16:17:34 <jmccrory> ok
16:17:48 <jrosser_> sure, thanks everyone
16:18:04 <evrardjp> #link https://bugs.launchpad.net/openstack-ansible/+bug/1718187
16:18:05 <openstack> Launchpad bug 1718187 in openstack-ansible "Implement neutron-fwaas-dashboard" [Undecided,New]
16:18:49 <evrardjp> andymccr: are you there?
16:18:58 <andymccr> hello!
16:19:02 <evrardjp> wasn't that work done already for translations?
16:19:09 <andymccr> no we dont deploy fwaas i dont think
16:19:14 <evrardjp> ok
16:19:33 <andymccr> but i know jafeha_ is looking at adding a patch for this
16:19:37 <evrardjp> Marking it as confirmed, thanks to the docs, and setting it to high?
16:19:47 <andymccr> well i guess you cant work around it so yeah maybe high
16:19:49 <evrardjp> jafeha_: that would be great
16:20:00 <andymccr> i just said that if he runs into issues we can help guide through the process
16:20:18 <evrardjp> yeah broken expectations..., maybe we should add testing into this scenario
16:20:21 <admin0> isn’t network >> firewall what Fwaas is ?
16:20:25 <evrardjp> into a *
16:20:31 <admin0> which is there when i do it using ansible
16:20:49 <openstackgerrit> Merged openstack/openstack-ansible master: Fix variable def.  https://review.openstack.org/505289
16:20:53 <openstackgerrit> Major Hayden proposed openstack/openstack-ansible-pip_install master: Optimize pip_install for CentOS  https://review.openstack.org/504509
16:20:53 <evrardjp> admin0: the bug is still valid
16:20:58 <andymccr> i think adding fwaas dashboard to the translations site would be cool but im not sure what else would be needed
16:21:23 <evrardjp> fair enough
16:21:40 <evrardjp> in the meantime let's see what jafeha_ says
16:21:42 <evrardjp> next
16:21:44 <evrardjp> #link https://bugs.launchpad.net/openstack-ansible/+bug/1717881
16:21:46 <openstack> Launchpad bug 1717881 in openstack-ansible "public api does not work if it's behind proxy with ssl termination" [Undecided,New]
16:22:24 <openstackgerrit> Merged openstack/openstack-ansible-os_neutron master: Update links in some docs  https://review.openstack.org/504306
16:22:38 <openstackgerrit> Marc Gariépy (mgariepy) proposed openstack/openstack-ansible stable/pike: Fix variable def.  https://review.openstack.org/505288
16:23:19 <evrardjp> would that be linked https://bugs.launchpad.net/openstack-ansible/+bug/1713663 ?
16:23:20 <openstack> Launchpad bug 1713663 in openstack-ansible "Set enable_proxy_headers_parsing = True when HAProxy is used" [Medium,Confirmed]
16:23:30 <evrardjp> Adri2000: are you there?
16:24:21 <admin0> looks similar
16:24:25 <evrardjp> yeah.
16:24:43 <evrardjp> Ok will link the bug to the other bug.
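(For reference, the linked bug concerns the oslo.middleware option enable_proxy_headers_parsing under [oslo_middleware]; a quick way to check whether a deployed service already has it set, using nova purely as an example path:)

    # run on a node or container running the service; nova is an arbitrary example
    grep -n 'enable_proxy_headers_parsing' /etc/nova/nova.conf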
16:26:01 <evrardjp> next is maybe a docs bug?
16:26:03 <evrardjp> #link https://bugs.launchpad.net/openstack-ansible/+bug/1717506
16:26:04 <openstack> Launchpad bug 1717506 in openstack-ansible "Quick Start in openstack-ansible" [Undecided,New]
16:26:23 <asettle> evrardjp: just left a comment in that one. Unless someone else is totally aware of all the things
16:26:39 <evrardjp> yes I see
16:26:45 <evrardjp> Yeah I have an idea
16:26:48 <asettle> To me it's fairly incomplete, but if someone is able to fill in the gaps
16:27:00 <evrardjp> if you cd /opt/openstack-ansible/playbooks
16:27:07 <prometheanfire> :D
16:27:11 <evrardjp> cp etc/openstack_deploy/conf.d/{aodh,gnocchi,ceilometer}.yml.aio  ... won't work
16:27:22 <evrardjp> because etc/ is only in /opt/openstack-ansible
16:27:30 <evrardjp> not playbooks subfolder
16:27:32 <asettle> Ohhhh yeah, makes sense.
16:27:46 <asettle> How have we had that for so long? I've never had an issue with the quick start
16:28:07 <admin0> we just know what to copy ( without blindly copy/paste )
16:28:32 <asettle> S'pose that's probably true
16:28:38 <evrardjp> yeah and the for f... sounds bad too
16:28:56 <admin0> maybe better do multi lines copy/paste
16:29:00 <admin0> instead of for loop
16:29:01 <evrardjp> asettle: I guess admin0 is right
16:29:11 <admin0> else there will be hey i am using zsh and this does not work
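(A rough sketch of what the corrected quick-start snippet could look like, run from /opt/openstack-ansible rather than its playbooks/ subdirectory, with explicit copies instead of a for loop; the /etc/openstack_deploy destination is assumed here:)

    cd /opt/openstack-ansible
    cp etc/openstack_deploy/conf.d/aodh.yml.aio /etc/openstack_deploy/conf.d/aodh.yml
    cp etc/openstack_deploy/conf.d/gnocchi.yml.aio /etc/openstack_deploy/conf.d/gnocchi.yml
    cp etc/openstack_deploy/conf.d/ceilometer.yml.aio /etc/openstack_deploy/conf.d/ceilometer.yml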
16:29:13 <evrardjp> I'll mark it as confirmed
16:29:45 <evrardjp> spotz, admin0 or asettle could you have some time to refactor that?
16:29:52 <admin0> ok
16:29:57 <admin0> i will
16:29:58 <evrardjp> sorry, oxford comma.
16:30:02 <asettle> Thanks admin0 :)
16:30:03 <spotz> :)
16:30:20 <evrardjp> well you all understood me, so let's continue. Thanks admin0!
16:31:34 <evrardjp> next
16:31:36 <evrardjp> #link https://bugs.launchpad.net/openstack-ansible/+bug/1717321
16:31:37 <openstack> Launchpad bug 1717321 in openstack-ansible "Content-Security-Policy for services" [Undecided,New]
16:31:44 <prometheanfire> yo
16:32:57 <evrardjp> that's barely readable :p
16:33:09 <prometheanfire> welcome to my thought process :P
16:33:29 <evrardjp> Haha.
16:33:38 <evrardjp> that's not helpful for triaging you know :p
16:33:48 <prometheanfire> basically it is a question of if and how we want to lock down access
16:34:03 <evrardjp> I don't even know if it's an issue or a feature request right now.
16:34:12 <admin0> this is a good to have
16:34:20 <asettle> Feature request?
16:34:21 <prometheanfire> security request?
16:34:29 <admin0> as an operator, with apis etc facing the public and people scanning and trying to ddos or break in, this is a nice addon
16:34:29 <asettle> Security feature request :D
16:34:39 <prometheanfire> wfm
16:34:53 <prometheanfire> it's a 'good practice'
16:35:15 <prometheanfire> if it helps I've been running all those settings since ocata
16:35:26 <prometheanfire> and newton actually
16:35:27 <evrardjp> so the good practice is for all the webservers and load balancer to handle Content-Security-Policy , right?
16:35:58 <openstackgerrit> Major Hayden proposed openstack/openstack-ansible-openstack_hosts master: Optimize openstack_hosts for CentOS  https://review.openstack.org/504437
16:36:12 <prometheanfire> yes
16:36:15 <evrardjp> the next question is, could you document that? Because it only applies on valid certificates I guess
16:36:27 <prometheanfire> CSP doesn't need ssl
16:36:59 <prometheanfire> only the section below 'for ssl something like the following' (and the first comment) are for ssl
16:37:20 <prometheanfire> X-XSS-Protection, X-Frame-Options, X-Content-Type-Options, Content-Security-Policy do not need ssl
16:37:43 <evrardjp> so maybe we should set something that sets the defaults for non ssl, and a boolean to flip when using proper certificates to add this extra security
16:37:50 <evrardjp> for the ssl parts.
16:38:22 <evrardjp> so I consider this doesn't break anything, but it would definitely make everyone's life simpler if we define this
16:38:25 <prometheanfire> sgtm, how it's implemented will differ based on how we do the load balancing though
16:38:32 <evrardjp> let's mark this as confirmed and wishlist then
16:38:34 <admin0> “maybe we should set something that sets the defaults for non ssl” — defaults should always be SSL :D .. even with self-signed certs
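(Whatever layer ends up injecting the headers, a simple way to verify them from outside; the VIP address below is a placeholder:)

    # -k only because self-signed certs are common in test deployments
    curl -skI https://<external_lb_vip>/ \
      | grep -iE 'x-xss-protection|x-frame-options|x-content-type-options|content-security-policy'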
16:39:02 <prometheanfire> did we decide on not using nginx at all or what? (what layer should I be adding the headers to?)
16:39:09 <evrardjp> admin0: that's a different architecture case, I will not go that far for this bug. Let's focus on what's achievable.
16:39:15 <admin0> right
16:39:41 <evrardjp> prometheanfire: we have decided to take an exploration phase with uwsgi fast router vs nginx
16:39:48 <evrardjp> on each physical node.
16:39:58 <prometheanfire> what does that mean?
16:40:13 <evrardjp> summarized version:
16:40:17 <admin0> it means write for uwsgi now and later we go for nginx :D
16:40:21 <admin0> :D
16:40:40 <evrardjp> whatever we are currently supporting, won't change. So if you have horizon behind apache behind haproxy, let's keep that
16:41:05 <evrardjp> same for uwsgi apps behind nginx.
16:41:25 <evrardjp> we should still take care of that for the current state, whatever the decision in the future would be
16:41:57 <prometheanfire> ok, so it's probably going to differ per role
16:41:57 <evrardjp> prometheanfire: do you have cycles to fix that for default nginx services for example, like placement?
16:42:23 <prometheanfire> ya, it SHOULD be an easy oneliner (for each header)
16:42:35 <jmccrory> wishlist, add docs for best practices in upstream projects, patches for osa config whenever convenient
16:42:56 <andymccr> NB we clean up nginx bits for placement and other services that had it: https://github.com/openstack/openstack-ansible-os_nova/blob/master/tasks/nova_uwsgi.yml#L35-L41
16:42:59 <prometheanfire> jmccrory: ya, I had to figure out the CSP for horizon by trial and error, not fun
16:43:04 <evrardjp> jmccrory: agreed.
16:43:23 <andymccr> it just removes the site conf though and not nginx itself so that it wont break existing nginx hosts that are doing more than just placement for example
16:43:27 <admin0> prometheanfire: how many days/weeks did it take?
16:43:45 <prometheanfire> a couple days looking at a web debug console
16:43:51 <prometheanfire> by days I mean an hour a day or so
16:43:52 <prometheanfire> just annoying
16:44:27 <evrardjp> ok let's continue.
16:44:32 <prometheanfire> one last thing
16:44:33 <jmccrory> prometheanfire heh yeah bet it wasn't fun. would be good to have any project specific weirdness in their docs though
16:44:39 <evrardjp> next
16:44:41 <evrardjp> #link https://bugs.launchpad.net/openstack-ansible/+bug/1716927
16:44:42 <openstack> Launchpad bug 1716927 in openstack-ansible " "Failed to update apt cache."}" [Undecided,New]
16:48:04 <evrardjp> mmm
16:48:31 <jmccrory> needs more info? maybe apt log? could be connection issue with keyserver or repo
16:49:00 <hwoarang> yeah looks random
16:49:31 <openstackgerrit> Alexandra Settle proposed openstack/openstack-ansible master: [deploy-guide] Updates git clone link  https://review.openstack.org/505333
16:49:44 <evrardjp> yes we need more apt log data
16:49:45 <asettle> admin0: that should be your bug fix to 1715178 btw
16:50:48 <admin0> noted
16:51:02 <evrardjp> Marked as incomplete.
16:51:05 <evrardjp> next
16:51:06 <evrardjp> #link https://bugs.launchpad.net/openstack-ansible/+bug/1716925
16:51:07 <openstack> Launchpad bug 1716925 in openstack-ansible "No package matching 'libmariadbclient-dev' is available"}" [Undecided,New]
16:51:48 <evrardjp> that is maybe the follow up
16:52:13 <hwoarang> yep
16:53:55 <evrardjp> incomplete too then
16:54:10 <evrardjp> next
16:54:13 <evrardjp> #link https://bugs.launchpad.net/openstack-ansible/+bug/1716908
16:54:14 <openstack> Launchpad bug 1716908 in openstack-ansible "Unable to create new instances after upgrade." [Undecided,New]
16:57:15 <evrardjp> qemu-system-{{ ansible_WHATEVER }} is not an explicit dependency
16:57:22 <evrardjp> https://github.com/openstack/openstack-ansible-os_nova/blob/master/vars/ubuntu-16.04.yml#L53
16:57:45 <evrardjp> but I don't know what apt-get install qemu installs
16:57:49 <evrardjp> so maybe I am wrong there
16:58:58 <evrardjp> anyone?
16:58:59 <logan-> qemu -> qemu-system -> qemu-system-* is the apt dependency chain
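(That chain can be confirmed on an Ubuntu 16.04 host with something like:)

    apt-cache depends qemu | grep -i depends
    apt-cache depends qemu-system | grep -i depends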
16:59:20 <evrardjp> so latest qemu should be alright
17:00:19 <evrardjp> but weirdly in his case qemu-system was 2.8 and qemu-block-extra was 2.8
17:00:32 <evrardjp> but qemu-system-x86 was 2.5
17:01:05 <evrardjp> we should probably add it to the explicit list then?
17:01:11 <logan-> yeah so that's interesting...
17:01:14 <evrardjp> I guess it's all aptitude behavior
17:01:15 <logan-> qemu-system upgraded to UCA
17:01:23 <logan-> the rest of the qemu-system-* deps didn't go to UCA
17:02:05 <evrardjp> maybe we should give uca a higher prio
17:02:15 <evrardjp> what happens if installing manually?
17:02:49 <logan-> im testing right now, installing 'qemu' on a base system, adding uca, and then ill see what happens upgrading qemu after uca is installed
17:03:39 <logan-> gate logs should help confirm this too
17:03:46 <logan-> since we pull in the apt history
17:03:55 <logan-> and ara stores all of the apt stdout
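(Until the gate logs are pulled, a deployed host can show the same information directly; package names taken from the discussion above:)

    # shows installed/candidate versions and which origin (xenial vs UCA) each resolves to
    apt-cache policy qemu-system qemu-system-x86 qemu-block-extra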
17:04:26 <evrardjp> yeah
17:04:37 <evrardjp> let's stop the bug triage for today though
17:04:51 <evrardjp> we can triage this as confirmed or not offline of the meeting
17:04:57 <evrardjp> thanks everyone!
17:05:02 <prometheanfire> yarp
17:05:05 <evrardjp> plenty of bugs for next week then!
17:05:15 <evrardjp> any last words?
17:05:16 <evrardjp> :p
17:05:21 <prometheanfire> evrardjp: mind taking a look at https://bugs.launchpad.net/openstack-ansible/+bug/1717321/comments/3 for my plan of action?
17:05:22 <openstack> Launchpad bug 1717321 in openstack-ansible "Content-Security-Policy for services" [Wishlist,Confirmed]
17:05:22 <evrardjp> 5
17:05:23 <evrardjp> 4
17:05:24 <evrardjp> 3
17:05:30 <evrardjp> yeah I will have a look
17:05:33 <prometheanfire> after meeting, thanks
17:05:34 <evrardjp> 2
17:05:38 <evrardjp> 1
17:05:40 <evrardjp> #endmeeting