*** thuydang has quit IRC | 00:00 | |
*** thuydang has joined #openstack-ansible | 00:00 | |
*** cshen has joined #openstack-ansible | 00:52 | |
*** cshen has quit IRC | 00:57 | |
*** gyee has quit IRC | 00:58 | |
*** jcosmao has quit IRC | 01:04 | |
*** rodolof has quit IRC | 01:40 | |
*** macza has quit IRC | 01:44 | |
*** vnogin has joined #openstack-ansible | 02:03 | |
*** vnogin has quit IRC | 02:03 | |
*** dave-mccowan has joined #openstack-ansible | 02:38 | |
*** maharg101 has quit IRC | 02:48 | |
*** admin0 has quit IRC | 03:41 | |
*** lbragstad has joined #openstack-ansible | 03:50 | |
*** lbragstad has quit IRC | 03:51 | |
*** dave-mccowan has quit IRC | 04:03 | |
*** udesale has joined #openstack-ansible | 04:10 | |
*** lbragstad has joined #openstack-ansible | 04:13 | |
*** asettle has joined #openstack-ansible | 04:23 | |
*** markvoelker has joined #openstack-ansible | 05:04 | |
*** asettle has quit IRC | 05:06 | |
*** cshen has joined #openstack-ansible | 05:49 | |
*** cshen has quit IRC | 05:54 | |
*** hwoarang has quit IRC | 06:20 | |
*** hwoarang has joined #openstack-ansible | 06:21 | |
*** ThiagoCMC has quit IRC | 06:37 | |
*** mkuf has quit IRC | 06:40 | |
*** mkuf has joined #openstack-ansible | 06:41 | |
*** mkuf has quit IRC | 06:51 | |
*** mkuf has joined #openstack-ansible | 06:52 | |
*** pcaruana has joined #openstack-ansible | 07:12 | |
*** vnogin has joined #openstack-ansible | 07:13 | |
*** DanyC has joined #openstack-ansible | 07:14 | |
*** vnogin has quit IRC | 07:17 | |
*** DanyC has quit IRC | 07:18 | |
*** nurdie has joined #openstack-ansible | 07:27 | |
*** fnpanic_ has joined #openstack-ansible | 07:28 | |
fnpanic_ | hi | 07:28 |
fnpanic_ | good morning | 07:29 |
*** cshen has joined #openstack-ansible | 07:29 | |
*** nurdie has quit IRC | 07:31 | |
*** markvoelker has quit IRC | 07:45 | |
sum12 | #openstack-nova | 07:49 |
*** DanyC has joined #openstack-ansible | 07:49 | |
fnpanic_ | i am still at the AIO with ubuntu 18.04 | 07:51 |
fnpanic_ | now everything works flawlessly until setup-hosts | 07:52 |
fnpanic_ | fails at task: "Ensure that the LXC cache has been prepared" | 07:52 |
fnpanic_ | http://paste.openstack.org/show/737282/ is the log from /var/log/lxc-cache-prep-commands.log | 07:57 |
fnpanic_ | maybe it is me again, but this was a fresh ubuntu 18.04 | 08:03 |
fnpanic_ | locale also correct | 08:03 |
*** maharg101 has joined #openstack-ansible | 08:05 | |
*** DanyC has quit IRC | 08:08 | |
*** aludwar has quit IRC | 08:09 | |
fnpanic_ | i need to double-check the sources.list | 08:09 |
fnpanic_ | maybe something is wrong here | 08:10 |
*** aludwar has joined #openstack-ansible | 08:10 | |
*** markvoelker has joined #openstack-ansible | 08:16 | |
*** gillesMo has joined #openstack-ansible | 08:22 | |
*** priteau has joined #openstack-ansible | 08:38 | |
*** kopecmartin|off is now known as kopecmartin | 08:45 | |
*** tosky has joined #openstack-ansible | 08:53 | |
*** shardy has joined #openstack-ansible | 09:01 | |
fnpanic_ | i re-installed the host because there was a problem with sources.list being messed up | 09:02 |
fnpanic_ | now it looks fairly good :-) | 09:02 |
*** Emine has joined #openstack-ansible | 09:05 | |
*** cshen has quit IRC | 09:21 | |
*** rodolof has joined #openstack-ansible | 09:48 | |
*** dcdamien has joined #openstack-ansible | 09:50 | |
jrosser | fnpanic_: you might consider using a VM of some sort for building AIOs | 09:51 |
jrosser | because it is so quick to destroy/recreate if something goes wrong. i use vagrant/virtualbox but almost any approach you like will be fine | 09:51 |
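jrosser's disposable-VM workflow can be sketched roughly as follows; this is a hedged sketch, and the box name and the idea of running the bootstrap over ssh are assumptions, not taken from the log:

```shell
# Disposable AIO build VM via vagrant (box name assumed; matches the
# Ubuntu 18.04 AIO discussed above).
vagrant init ubuntu/bionic64
vagrant up                              # boot the VM
vagrant ssh                             # log in, then run the OSA AIO bootstrap inside

# If the AIO build goes wrong, throw the VM away and start clean:
vagrant destroy -f && vagrant up
```

The point is that a failed AIO leaves a lot of state behind (containers, bridges, loop devices), so recreating a VM is faster and safer than trying to clean a bare-metal host.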
*** cshen has joined #openstack-ansible | 09:53 | |
*** rodolof has quit IRC | 09:56 | |
*** rodolof has joined #openstack-ansible | 09:56 | |
*** cshen has quit IRC | 09:58 | |
*** DanyC has joined #openstack-ansible | 10:03 | |
*** gisak has joined #openstack-ansible | 10:21 | |
odyssey4me | redkrieg hmm, which series/tag/release is that for? unfortunately federation is not heavily used, and it's not tested, so it's a best effort thing and sometimes stuff breaks - is that rocky? | 10:21 |
odyssey4me | redkrieg if you could report a bug for the two issues, we can get them fixed up | 10:22 |
odyssey4me | jrosser I think we need to go ahead with https://review.openstack.org/#/c/625070/ to prevent any more merges which are actually broken | 10:23 |
*** cshen has joined #openstack-ansible | 10:24 | |
*** DanyC has quit IRC | 10:24 | |
*** DanyC has joined #openstack-ansible | 10:25 | |
openstackgerrit | Jesse Pretorius (odyssey4me) proposed openstack/openstack-ansible-rabbitmq_server stable/rocky: upgrade: start service before applying policies https://review.openstack.org/625200 | 10:26 |
jrosser | odyssey4me: do you know why it always returns 0? | 10:26 |
odyssey4me | jrosser last I heard mnaser was following up with mtrenish, but I guess with the season and all it's going to be hard to get answers right now | 10:27 |
odyssey4me | I've got a star on it to follow it up again in the new year once arxcruz|next_yr is back. | 10:28 |
*** hamzaachi has joined #openstack-ansible | 10:28 | |
jrosser | so we need that patch to merge? i don't really understand what's going on myself though | 10:28 |
odyssey4me | jrosser so, right now our tempest test runs always succeed - even if the tests run by tempest fail | 10:29 |
odyssey4me | for some reason there's always a 0 return code from tempest | 10:29 |
*** cshen has quit IRC | 10:29 | |
jrosser | yeah i saw the discussion yesterday | 10:29 |
odyssey4me | so yeah, given that patch reverts the patch which was the most recent change and is likely the culprit - better to revert and regroup | 10:30 |
jrosser | ok. there is a lot broken generally with the tempest changes | 10:31 |
jrosser | there are a handful of roles that are totally broken | 10:32 |
odyssey4me | jrosser yeah, I know about barbican and such - but those are the distro jobs broken because of the changes relating to that | 10:34 |
odyssey4me | I'll revisit those - they either need a distro plugin package added, or need to be forced to use a source build | 10:34 |
arxcruz|next_yr | odyssey4me: hey, what u need from me ? :) | 10:35 |
odyssey4me | arxcruz|next_yr nothing that can't wait - have a good holiday! | 10:35 |
arxcruz|next_yr | odyssey4me: it's freaking cold outside lol :) | 10:35 |
odyssey4me | arxcruz|next_yr lol, totally agreed : | 10:36 |
*** markvoelker has quit IRC | 10:36 | |
*** markvoelker has joined #openstack-ansible | 10:37 | |
openstackgerrit | Jesse Pretorius (odyssey4me) proposed openstack/openstack-ansible-galera_server stable/rocky: Increase Galera self-signed SSL CA expiration https://review.openstack.org/625201 | 10:38 |
openstackgerrit | Jesse Pretorius (odyssey4me) proposed openstack/openstack-ansible-galera_server stable/queens: Increase Galera self-signed SSL CA expiration https://review.openstack.org/625202 | 10:38 |
*** markvoelker has quit IRC | 10:41 | |
*** hamzaachi has quit IRC | 10:44 | |
*** hamzaachi has joined #openstack-ansible | 10:44 | |
odyssey4me | jrosser I suspect that somehow with https://review.openstack.org/#/c/607814/5 - https://github.com/openstack/openstack-ansible-os_cinder/commit/2c3aea81b55e4057fb610ec274222af2ddad5adf is no longer in effect, causing cinder-volume to fail | 10:46 |
odyssey4me | I can't see why - I'll fire up a VM now to test with. | 10:47 |
jrosser | is this something that slipped through a bad test that returned good? | 10:47 |
odyssey4me | I think so, although it's a bit curious that the test passed earlier the same day - which is 2 days after the os_tempest merge. | 10:48 |
*** electrofelix has joined #openstack-ansible | 10:49 | |
*** cshen has joined #openstack-ansible | 10:50 | |
jrosser | odyssey4me: is it jinja variable scoping? | 10:57 |
jrosser | https://review.openstack.org/#/c/607814/5/tasks/cinder_install.yml line 61 vs 69 for example | 10:57 |
jrosser | modifying service inside the for loop then using it outside | 10:58 |
openstackgerrit | Dmitriy Rabotjagov (noonedeadpunk) proposed openstack/openstack-ansible-os_ceilometer stable/rocky: gnocchi_resources override fixed https://review.openstack.org/625213 | 10:58 |
jrosser | oh no, it's all inside the loop | 11:00 |
jrosser | but i think thats where i'd start | 11:00 |
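The scoping issue jrosser initially suspected (though it turned out everything was inside the loop) is a classic Jinja2 behavior worth noting: a `{% set %}` inside a `{% for %}` loop is loop-local and does not survive past the loop. A minimal illustration with hypothetical variable names, not the actual cinder template:

```jinja
{# The pitfall: a set inside the loop does not leak out #}
{% set last = none %}
{% for s in services %}
  {% set last = s %}
{% endfor %}
{{ last }}   {# still none: the inner set was loop-local #}

{# Jinja >= 2.10 fix: mutate a namespace object instead #}
{% set ns = namespace(last=none) %}
{% for s in services %}
  {% set ns.last = s %}
{% endfor %}
{{ ns.last }}   {# now holds the last item of services #}
```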
openstackgerrit | Dmitriy Rabotjagov (noonedeadpunk) proposed openstack/openstack-ansible-os_ceilometer stable/queens: gnocchi_resources override fixed https://review.openstack.org/625215 | 11:07 |
*** markvoelker has joined #openstack-ansible | 11:16 | |
odyssey4me | I've got to go afk for a few hours - bbiab. | 11:17 |
*** cshen has quit IRC | 11:27 | |
openstackgerrit | Dmitriy Rabotjagov (noonedeadpunk) proposed openstack/openstack-ansible-os_ceilometer stable/rocky: gnocchi_resources override fixed https://review.openstack.org/625213 | 11:28 |
*** admin0 has joined #openstack-ansible | 11:30 | |
admin0 | \o | 11:30 |
openstackgerrit | Dmitriy Rabotjagov (noonedeadpunk) proposed openstack/openstack-ansible-os_ceilometer stable/rocky: gnocchi_resources override fixed https://review.openstack.org/625213 | 11:31 |
*** CeeMac has joined #openstack-ansible | 11:34 | |
*** gary_perkins has quit IRC | 11:37 | |
gisak | hi guys | 11:45 |
gisak | never met this before: http://paste.openstack.org/show/737295/ | 11:46 |
gisak | it's asking for /etc/ansible/hosts, but i never had such a file and wasn't asked for it before, now ansible wants it | 11:47 |
*** gary_perkins has joined #openstack-ansible | 11:50 | |
admin0 | gisak, what command are you using ? | 11:54 |
*** cshen has joined #openstack-ansible | 11:58 | |
gisak | openstack-ansible setup-hosts.yml | 12:02 |
*** cshen has quit IRC | 12:02 | |
admin0 | could be a (logical) error in the openstack_user_config file as well .. like indentation etc | 12:07 |
*** udesale has quit IRC | 12:11 | |
gisak | yeah, you're right, thank you very much ) | 12:13 |
*** cshen has joined #openstack-ansible | 12:19 | |
*** pcaruana has quit IRC | 12:21 | |
*** pcaruana has joined #openstack-ansible | 12:22 | |
*** pcaruana is now known as pcaruana|intw| | 12:25 | |
*** hamzaachi has quit IRC | 12:33 | |
*** rodolof has quit IRC | 12:40 | |
*** rodolof has joined #openstack-ansible | 12:41 | |
jamesdenton | mornin | 12:59 |
*** ansmith has joined #openstack-ansible | 13:00 | |
*** fnpanic_ has quit IRC | 13:07 | |
*** dcdamien has quit IRC | 13:18 | |
*** cshen has quit IRC | 13:32 | |
*** pcaruana|intw| has quit IRC | 14:05 | |
*** Emine has quit IRC | 14:16 | |
openstackgerrit | kourosh vivan proposed openstack/openstack-ansible-os_tempest master: Add user and password for secure image download (optional) https://review.openstack.org/625266 | 14:17 |
*** udesale has joined #openstack-ansible | 14:27 | |
mgariepy | ioni, are you around ? | 14:28 |
ioni | mgariepy, sure, what's up | 14:28 |
mgariepy | so, i had to disable transparent_hugepage on 4.15 kernel | 14:28 |
mgariepy | when using pci passthrough the memory is allocated and pinned for the guest memory. | 14:29 |
mgariepy | https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1808412 | 14:30 |
openstack | Launchpad bug 1808412 in linux (Ubuntu) "4.15.0 memory allocation issue" [Undecided,Confirmed] | 14:30 |
ioni | cool | 14:31 |
*** dave-mccowan has joined #openstack-ansible | 14:31 | |
*** pcaruana has joined #openstack-ansible | 14:31 | |
mgariepy | was fun haha :D | 14:31 |
mgariepy | so unless you are doing stuff that require qemu to alloc and pin all guest mem, you probably won't see the issue. | 14:32 |
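Disabling transparent hugepages, as mgariepy describes for the 4.15 kernel bug above, is typically done via sysfs at runtime or the kernel command line for persistence. A hedged sketch; the paths are the standard kernel interfaces, not taken from the log:

```shell
# Runtime (lost on reboot): stop THP from backing new allocations
echo never > /sys/kernel/mm/transparent_hugepage/enabled
echo never > /sys/kernel/mm/transparent_hugepage/defrag

# Persistent on Ubuntu: add transparent_hugepage=never to the kernel cmdline
sed -i 's/GRUB_CMDLINE_LINUX="/&transparent_hugepage=never /' /etc/default/grub
update-grub
```

This matters here because PCI passthrough pins all guest memory up front, which is exactly the allocation pattern that trips the bug.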
openstackgerrit | Jesse Pretorius (odyssey4me) proposed openstack/openstack-ansible-os_cinder master: Combine, rather than replace init overrides https://review.openstack.org/625267 | 14:33 |
odyssey4me | mnaser jrosser That fixes the cinder-volume service on centos again. :) https://review.openstack.org/625267 | 14:33 |
odyssey4me | cores, please review asap to unblock the master integrated build | 14:33 |
mgariepy | can i have some review for this simple doc patch :) https://review.openstack.org/#/c/622978/ | 14:36 |
odyssey4me | mgariepy done | 14:37 |
mgariepy | wonderful :D | 14:38 |
*** dave-mccowan has quit IRC | 14:43 | |
jrosser | odyssey4me: did you see the same patch in nova is slightly different, nothing is combined.... is that an oversight? | 14:46 |
gisak | hey guys, what about nova_oslomsg_notify_host_group is undefined error during setup-openstack.yml ? | 14:47 |
gisak | I have updated the /etc/ansible/roles/os_ceilometer/ folder from https://git.openstack.org/cgit/openstack/openstack-ansible-os_ceilometer/commit/?id=ec29ffad366ae899a93c8d6cab01ac64ffa69059 | 14:48 |
*** ostackz has joined #openstack-ansible | 14:48 | |
gisak | but still get the same error when running setup-openstack.yml playbook | 14:48 |
*** thuydang has quit IRC | 14:48 | |
openstackgerrit | Merged openstack/openstack-ansible-os_neutron master: Add app-ovn.rst to index in documentation https://review.openstack.org/622978 | 14:48 |
fnpanic | hi | 14:50 |
fnpanic | so aio works till the setup-infrastructure playbooks | 14:51 |
fnpanic | -x | 14:51 |
*** Adri2000 has quit IRC | 14:51 | |
fnpanic | sitting behind a proxy i set the http_proxy and https_proxy | 14:52 |
fnpanic | i also set pip_validate_certs: false and galera_package_download_validate_certs: false | 14:52 |
fnpanic | it is failing at getting the keys for galera from the keyserver | 14:53 |
fnpanic | TASK [galera_client : Add keys (primary keyserver)] | 14:53 |
fnpanic | and the alternate task | 14:53 |
fnpanic | what do i need to do to fix this? | 14:54 |
fnpanic | despite getting rid of the proxy ;-) | 14:54 |
odyssey4me | fnpanic fnpanic this is ubuntu, right? | 14:54 |
fnpanic | yeah | 14:55 |
fnpanic | 18.04 | 14:55 |
*** Adri2000 has joined #openstack-ansible | 14:55 | |
odyssey4me | that's these tasks: https://github.com/openstack/openstack-ansible-galera_client/blob/b13dba202174a098cfaf3a86e5ea5713acab70df/tasks/galera_client_install_apt.yml#L29-L60 | 14:55 |
odyssey4me | I guess something is required to make those work with a proxy - maybe jrosser can advise? | 14:55 |
odyssey4me | fnpanic that said - did you have the proxy env vars set when you did bootstrap-aio ? | 14:56 |
fnpanic | exactly this tasks | 14:56 |
fnpanic | yes! | 14:56 |
odyssey4me | ok, so you should have a file /etc/openstack_deploy/user_variables_proxy.yml | 14:56 |
fnpanic | the http_proxy= and https_proxy= are set on boot | 14:57 |
odyssey4me | fnpanic so is https://github.com/openstack/openstack-ansible/blob/master/tests/roles/bootstrap-host/files/user_variables_proxy.yml present as /etc/openstack_deploy/user_variables_proxy.yml ? | 14:57 |
fnpanic | yes | 14:57 |
fnpanic | is there | 14:58 |
*** vnogin has joined #openstack-ansible | 14:58 | |
odyssey4me | ok, and if you look at your hosts/containers - can you see that content in /etc/environment ? | 14:58 |
jrosser | I can help | 14:58 |
fnpanic | aio host yes | 14:58 |
jrosser | But later/next week sadly | 14:58 |
fnpanic | ? | 14:59 |
fnpanic | so the proxy settings are correct | 14:59 |
odyssey4me | fnpanic can you see /etc/environment has the proxy vars set in the galera_server container? | 14:59 |
*** weezS has joined #openstack-ansible | 14:59 | |
jrosser | fnpanic: if you get properly stuck give me a shout and I’ll try to replicate it | 14:59 |
fnpanic | one moment | 15:00 |
fnpanic | the environment file in the container is there and looks correct | 15:01 |
fnpanic | proxy is reachable from container and host | 15:01 |
fnpanic | now it is in the retry loop of this task: TASK [galera_client : Add keys (primary keyserver)] 3 retries left | 15:02 |
odyssey4me | looks like this may be a known bug: https://github.com/ansible/ansible/issues/31691 | 15:04 |
jrosser | odyssey4me: I mirror so wouldn’t see that | 15:04 |
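The linked Ansible issue is about the apt_key module not honoring proxy environment variables when talking to a keyserver. A common manual workaround, sketched here with placeholders (KEYSERVER, KEY_ID, and the proxy address are not the actual values from the log), is to fetch the key over HTTPS, which does respect http_proxy/https_proxy, and pipe it to apt-key:

```shell
# apt-key adv --recv-keys speaks HKP directly and may bypass the proxy;
# curl over HTTPS respects https_proxy, so fetch the key that way instead.
export https_proxy=http://proxy.example.com:3128   # assumed proxy address
curl -fsSL "https://KEYSERVER/pks/lookup?op=get&search=0xKEY_ID" | apt-key add -
```

The in-repo GPG key patches discussed later in this log solve the same problem more cleanly by removing the keyserver round-trip entirely.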
*** ivve has joined #openstack-ansible | 15:05 | |
ostackz | Hi, struggling to understand why my VMs do not receive IP. Seems that my vxlan does not work. | 15:05 |
ostackz | Docs say: br-vxlan should contain veth-pair ends from required LXC containers and a physical interface or tagged-subinterface. My output is https://pastebin.com/raw/d7BxBHEP | 15:05 |
ostackz | "brctl show br-vxlan" - I see only physical interface in bridge, does anyone see in working infra node that neutron container is also bridged on br-vxlan? | 15:05 |
*** markvoelker has quit IRC | 15:06 | |
fnpanic | damn | 15:06 |
odyssey4me | fnpanic that's ok - you can apply an override to work around it - something like this | 15:07 |
fnpanic | tell me and i will give it a try :-) | 15:09 |
odyssey4me | yep, just putting it together | 15:09 |
fnpanic | what is also strange is that the retries seem to take forever to time out | 15:09 |
fnpanic | delay says 2 but this is way longer than two | 15:10 |
fnpanic | odyssey4me: Thanks! | 15:10 |
fnpanic | need coffee brb | 15:10 |
ostackz | can anyone share "brctl show br-vxlan" from working infra node? Trying to understand what bridge members should be. At least how many bridge members there are? >1? | 15:13 |
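Per the docs quoted above, on a working infra node br-vxlan should contain both the physical (or tagged) interface and the veth ends belonging to the neutron agents container. A sketch of how to check; the interface names in the comments are illustrative:

```shell
brctl show br-vxlan          # expect the host interface plus veth-pair ends,
                             # e.g. bond1.30 plus something like <container>_eth10
ip link show type veth       # list the veth ends present on the host
lxc-ls -1 | grep neutron     # find the neutron agents container to cross-check
```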
gisak | guys any hints regarding "nova_oslomsg_notify_host_group is undefined" error ? | 15:14 |
fnpanic | ostackz: one moment | 15:14 |
odyssey4me | fnpanic I think I'm just going to push up a patch to vendor that key in - this is a common issue... hold a few minutes while I get that prepped | 15:14 |
fnpanic | http://paste.openstack.org/show/737311/ | 15:15 |
fnpanic | thanks! | 15:16 |
fnpanic | ostackz: look at the paste | 15:17 |
odyssey4me | gisak I dunno if noonedeadpunk is around, but he's probably the guy to help you. | 15:17 |
ostackz | fnpanic: thanks, now I see that I am lacking the neutron interface in the bridge. And in fact that container does not even have eth10 at all. | 15:17 |
redkrieg | odyssey4me: it's a stable/rocky checkout from a couple weeks back. I'll submit a bug | 15:17 |
noonedeadpunk | gisak: It's in ceilometer? | 15:17 |
gisak | yes | 15:18 |
fnpanic | maybe you have a mistake in openstack_user_config.yml | 15:18 |
fnpanic | i guess provider_networks: section ;-) | 15:18 |
fnpanic | ostackz: have you looked at the production examples? | 15:19 |
noonedeadpunk | gisak: I think, that in 18.1.0 it should be already fixed. | 15:19 |
noonedeadpunk | What version of OSA are you running? | 15:19 |
ostackz | fnpanic I have redeployed my openstack from same config files as before, but now vxlan does not work. It did before | 15:20 |
gisak | 2.5.10 | 15:20 |
fnpanic | ostackz: ok this is strange | 15:21 |
noonedeadpunk | gisak: just make sure, that you have the following in /etc/ansible/roles/os_ceilometer/defaults/main.yml : https://github.com/openstack/openstack-ansible-os_ceilometer/blob/stable/rocky/defaults/main.yml#L90-L97 | 15:21 |
fnpanic | nothing changed in the infra? Why have you redeployed it? | 15:21 |
ostackz | it was upgraded several times, then I tried Rocky before time and afterwards went back to queens | 15:22 |
fnpanic | ah, have you reinstalled the base os? | 15:23 |
ostackz | pike-queens upgrade did not remove unneeded containers as it was supposed to, but that is an old story :) | 15:23 |
fnpanic | have you reinstalled the deployment host? | 15:23 |
ostackz | I did reinstall OS | 15:23 |
gisak | thanks, indeed #Nova notification was missing | 15:23 |
ostackz | ok, thanks for bridge member sharing, need to go now, will dig into vxlan later. | 15:24 |
*** dcapone2004_ has joined #openstack-ansible | 15:25 | |
*** dcapone2004_ has quit IRC | 15:26 | |
*** nurdie has joined #openstack-ansible | 15:28 | |
fnpanic | kk | 15:30 |
*** hamzaachi has joined #openstack-ansible | 15:30 | |
redkrieg | odyssey4me: here's my bug report, please let me know if you need any additional info: https://bugs.launchpad.net/openstack-ansible/+bug/1808543 | 15:31 |
openstack | Launchpad bug 1808543 in openstack-ansible "Keystone Federation cannot complete SP node setup on stable/rocky" [Undecided,New] | 15:31 |
noonedeadpunk | gisak: probably not only nova is missing, so you should check it before running the role again | 15:31 |
*** hamzaachi has quit IRC | 15:32 | |
*** hamzaachi has joined #openstack-ansible | 15:32 | |
*** kopecmartin is now known as kopecmartin|off | 15:32 | |
*** ivve has quit IRC | 15:34 | |
*** hamzaachi has quit IRC | 15:48 | |
openstackgerrit | Jesse Pretorius (odyssey4me) proposed openstack/openstack-ansible-galera_client master: Use in-repo GPG keys https://review.openstack.org/625291 | 15:48 |
*** hamzaachi has joined #openstack-ansible | 15:48 | |
odyssey4me | fnpanic I dunno if that will pick cleanly to rocky - but try that | 15:48 |
openstackgerrit | Jesse Pretorius (odyssey4me) proposed openstack/openstack-ansible-galera_client master: Use in-repo GPG keys https://review.openstack.org/625291 | 15:50 |
fnpanic | odyssey4me: so looking at this i guess you just pre-downloaded the GPG keys for galera in this patch, right? | 15:58 |
fnpanic | i will patch it in manually into rocky and give it a try | 15:58 |
odyssey4me | fnpanic you should be able to cherry-pick it into /etc/ansible/roles/galera_client | 15:58 |
odyssey4me | from that url, use the 'download' drop-down on the top right, choose the 'copy to clipboard' icon next to the cherry-pick option, then in your VM change into /etc/ansible/roles/galera_client and paste that command | 15:59 |
openstackgerrit | Andy Smith proposed openstack/openstack-ansible master: Add qdrouterd role for rpc messaging backend deployment https://review.openstack.org/624184 | 16:00 |
odyssey4me | if that doesn't work you can grab the patch file as an archive/zip and extract it there and apply it | 16:00 |
odyssey4me | effectively that removes the use of the proxy at all, because the files are in the git tree, copied over and imported | 16:03 |
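The cherry-pick workflow odyssey4me describes (Gerrit's download drop-down) boils down to a git fetch plus cherry-pick. A hedged sketch; the patchset number N in the ref is a placeholder, since the exact refs/changes path comes from the review page:

```shell
cd /etc/ansible/roles/galera_client
# The exact ref (refs/changes/91/625291/N) is shown in Gerrit's download
# drop-down for https://review.openstack.org/625291; N below is a placeholder.
git fetch https://review.openstack.org/openstack/openstack-ansible-galera_client \
    refs/changes/91/625291/N
git cherry-pick FETCH_HEAD
```

Gerrit's refs/changes convention uses the last two digits of the change number (91 for change 625291) as the first path component.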
fnpanic | odyssey4me: cherry-picked and i will try it now and report back | 16:07 |
odyssey4me | fnpanic excellent, thanks | 16:09 |
fnpanic | odyssey4me: nothing to thank here, i need to thank you :-) | 16:10 |
odyssey4me | fnpanic feedback on work done is just as important as the work being put together | 16:10 |
fnpanic | but proxies suck anyway | 16:10 |
fnpanic | if you get anything from me it is feedback ;-) | 16:11 |
odyssey4me | they make life a little more complicated sometimes :) | 16:11 |
fnpanic | so far it looks fairly good :-) | 16:11 |
fnpanic | but lets wait till the playbook finishes | 16:11 |
odyssey4me | fnpanic that role is only used in two places - the utility container, and the galera_server container - so if you're past those, then it's all good | 16:12 |
fnpanic | it is already installing galera in the containers | 16:13 |
odyssey4me | if that patch is working for you, please submit your +1 on the patch with a comment that you tested it and it works for you :) | 16:13 |
fnpanic | ok | 16:13 |
fnpanic | will it make it into rocky or not worth backporting? | 16:13 |
odyssey4me | it helps to have that for anyone else who looks at the patch to vote on whether it should merge or not | 16:14 |
odyssey4me | oh yes, I'll port it back once it merges to master | 16:14 |
odyssey4me | it's a bit odd that this was done for galera_server and not galera_client, but I'm guessing mnaser forgot about that one :p | 16:14 |
fnpanic | now it is here.... | 16:16 |
fnpanic | TASK [rabbitmq_server : Add rabbitmq apt-keys] | 16:16 |
fnpanic | i guess this will have the same issue | 16:16 |
fnpanic | no retry yet but i guess this will happen shortly | 16:16 |
fnpanic | i will comment the galera client fix | 16:17 |
*** gisak has quit IRC | 16:18 | |
odyssey4me | ok, but we have a precedent for that now - so I can implement the same thing for the rabbitmq_server role to get that sorted out easily :) | 16:19 |
fnpanic | great | 16:19 |
jrosser | odyssey4me: check this out https://review.openstack.org/#/c/625269/ | 16:19 |
*** pcaruana has quit IRC | 16:20 | |
fnpanic | i am happy to test | 16:20 |
odyssey4me | jrosser interesting, although I think we use systemd mounts now and don't bother with losetup | 16:20 |
odyssey4me | I wonder if something similar is possible there. | 16:20 |
odyssey4me | fnpanic ok, let me put a patch together for that too :) | 16:21 |
FrankZhang | odyssey4me: hey man, I was working on os_barbican recently and found out the policy file is pretty old and purely statically templated. There's no substitution in it. Is it still worth keeping? We tried working without the policy, and nothing changed. The default policy is enough so far. https://github.com/openstack/openstack-ansible-os_barbican/blob/master/templates/policy.json.j2 | 16:21 |
openstackgerrit | Merged openstack/openstack-ansible-os_cinder master: Combine, rather than replace init overrides https://review.openstack.org/625267 | 16:21 |
odyssey4me | fnpanic apologies for this, thanks for your patience | 16:22 |
*** spatel has joined #openstack-ansible | 16:22 | |
odyssey4me | FrankZhang I guess that if policy-in-code is done for barbican, then that policy should be removed and something like the implementation in os_keystone should be done | 16:22 |
*** ivve has joined #openstack-ansible | 16:24 | |
spatel | jamesdenton: ^^ | 16:24 |
jamesdenton | ? | 16:25 |
spatel | I am seeing these error mesg frequently | 16:25 |
spatel | ostack-compute-sriov-01 nova-compute:2018-12-14 11:22:22.161 40288 WARNING nova.pci.utils [req-0d87b5e4-6ece-4beb-880c-51c7c5835a66 - - - - -] No net device was found for VF 0000:03:09.0: PciDeviceNotFoundById: PCI device 0000:03:09.0 not found | 16:25 |
spatel | everything working fine so far 2 SR-IOV instance also running on this compute node.. | 16:25 |
spatel | any idea what is this WARNING for? | 16:25 |
jamesdenton | i think i've seen this before, and it was just cosmetic. But i don't recall the details | 16:26 |
FrankZhang | odyssey4me: we tested queens and rocky os_barbican; all of them failed releasing the secret to other services. The scenario we worked on is an Octavia TLS-terminated loadbalancer which asked barbican for a secret. This was verified on devstack and other flavors of openstack, but not OSA. I'm guessing os_barbican has some problem in its wrap or configs. Does anyone have good knowledge of os_barbican? | 16:26 |
spatel | jamesdenton: i wonder if its related to PciPassthroughFilter | 16:27 |
odyssey4me | FrankZhang I don't have the first clue about how it works. It'd be nice if it could get some attention from people who do. | 16:27 |
spatel | i am seeing this error popping up every minute | 16:27 |
fnpanic | odyssey4me: i will be afk for some time and will look at another computer, can you send the review to me? name: panic! | 16:28 |
fnpanic | thanks | 16:28 |
odyssey4me | oh bother, https://github.com/openstack/neutron/commit/7bb0b841511ead6fc58bdfe2a378801576c68f85 merged - so now our neutron role needs fixing | 16:28 |
odyssey4me | I guess it's time for policy-in-code changes to be implemented across the board. | 16:29 |
*** electrofelix has quit IRC | 16:29 | |
*** gyee has joined #openstack-ansible | 16:37 | |
*** rodolof has quit IRC | 16:38 | |
*** rodolof has joined #openstack-ansible | 16:39 | |
*** hamzaachi has quit IRC | 16:41 | |
*** hamzaachi has joined #openstack-ansible | 16:41 | |
*** rodolof has quit IRC | 16:45 | |
*** rodolof has joined #openstack-ansible | 16:45 | |
openstackgerrit | Michael Johnson proposed openstack/openstack-ansible stable/rocky: Update Octavia to latest stable/rocky SHA https://review.openstack.org/625306 | 16:47 |
openstackgerrit | Michael Johnson proposed openstack/openstack-ansible stable/queens: Update Octavia to latest stable/queens SHA https://review.openstack.org/625307 | 16:50 |
*** vnogin has quit IRC | 16:52 | |
openstackgerrit | Michael Johnson proposed openstack/openstack-ansible stable/pike: Update Octavia to latest stable/pike SHA https://review.openstack.org/625309 | 16:52 |
openstackgerrit | Jesse Pretorius (odyssey4me) proposed openstack/openstack-ansible-rabbitmq_server master: Use in-repo GPG keys https://review.openstack.org/625312 | 16:53 |
*** tosky has quit IRC | 16:56 | |
*** shardy is now known as shardy_mtg | 16:58 | |
openstackgerrit | Jesse Pretorius (odyssey4me) proposed openstack/openstack-ansible-rabbitmq_server master: Use in-repo GPG keys https://review.openstack.org/625312 | 16:58 |
odyssey4me | if any cores are around, it'd be good to get https://review.openstack.org/625291 in so that the client and server mechanisms match and it works online/offline and through a proxy | 17:01 |
*** udesale has quit IRC | 17:07 | |
*** markvoelker has joined #openstack-ansible | 17:31 | |
jrosser | odyssey4me: left you a comment there | 17:32 |
*** markvoelker has quit IRC | 17:35 | |
*** Emine has joined #openstack-ansible | 17:36 | |
*** macza has joined #openstack-ansible | 17:45 | |
*** macza has quit IRC | 17:45 | |
odyssey4me | jrosser I would think so, except the galera_server patch did not have a reno or care about the previous implementation... but yeah, I guess I can fix them and reno them both | 17:56 |
spatel | folks, if i delete a flavor does that delete or impact any running instances using that flavor? | 17:57 |
jrosser | if it respected the apt_key fields that were used previously it wouldnt need a reno | 17:57 |
odyssey4me | it's probably actually simpler then to do what logan- suggested - just have a dict and pass it in, leaving the greatest flexibility | 17:57 |
spatel | i don't think so, but just want to confirm | 17:57 |
jrosser | and existing overrides would just carry on working like before | 17:57 |
jrosser | i think logan- and I are meaning the same thing | 17:57 |
odyssey4me | well, sort of | 17:58 |
odyssey4me | but yeah, let me just make it backwards compatible - and fix galera_server | 17:59 |
*** gillesMo has quit IRC | 18:00 | |
odyssey4me | actually, I don't think logan's mechanism will work because we have extra options in there - and I think that will flunk out | 18:05 |
*** shardy_mtg has quit IRC | 18:08 | |
openstackgerrit | Jesse Pretorius (odyssey4me) proposed openstack/openstack-ansible-galera_client master: Use in-repo GPG keys https://review.openstack.org/625291 | 18:20 |
*** rodolof has quit IRC | 18:27 | |
openstackgerrit | Jesse Pretorius (odyssey4me) proposed openstack/openstack-ansible-galera_client master: Use in-repo GPG keys https://review.openstack.org/625291 | 18:30 |
odyssey4me | logan- jrosser I think that https://review.openstack.org/625291 is the best way forward, assuming that it works. :) | 18:30 |
*** aludwar has quit IRC | 18:32 | |
openstackgerrit | Jesse Pretorius (odyssey4me) proposed openstack/openstack-ansible-galera_client master: Use in-repo GPG keys https://review.openstack.org/625291 | 18:32 |
openstackgerrit | Andy Smith proposed openstack/openstack-ansible master: Add qdrouterd role for rpc messaging backend deployment https://review.openstack.org/624184 | 18:32 |
openstackgerrit | Jesse Pretorius (odyssey4me) proposed openstack/openstack-ansible-galera_client master: Use in-repo GPG keys https://review.openstack.org/625291 | 18:34 |
*** aludwar has joined #openstack-ansible | 18:36 | |
*** chandan_kumar has quit IRC | 18:46 | |
*** spatel has quit IRC | 18:53 | |
openstackgerrit | Jesse Pretorius (odyssey4me) proposed openstack/openstack-ansible stable/queens: Add automated migration of neutron agents to bare metal https://review.openstack.org/625331 | 18:59 |
*** fnpanic_ has joined #openstack-ansible | 19:03 | |
fnpanic_ | hi | 19:05 |
fnpanic_ | i have seen the changes on the apt_key for galera_client | 19:06 |
fnpanic_ | anything i can test for rabbitmq yet? | 19:06 |
odyssey4me | fnpanic_ I've pushed up a patch which should be usable for testing, although it may change like the galera_client one: https://review.openstack.org/625312 | 19:07 |
nurdie | odyssey4me: Update on my issue with trying to upgrade an OSA CentOS controller cluster with YUM. We couldn't get the OSA scripts for 16.0.23 to work. So many dependency errors and pip failures, etc. I ended up figuring out that the repo container had the 16.0.23 services tarballs, and I created bash scripts to go through all of the containers, pull their respective tarballs, unzip, and edit the systemd unit files to use the new venvs | 19:22 |
nurdie | We have a working OS cluster again! | 19:22 |
odyssey4me | nurdie ah, awesome - now the road to upgrades :) | 19:23 |
fnpanic_ | odyssey4me: thanks! can you send me the link | 19:23 |
nurdie | odyssey4me: Yes, that's Pike. So I went with your recommendation and continued with the upgrade from Ocata to Pike with those tarballs. Thanks for all of your input the other night. It helped a lot | 19:24 |
odyssey4me | fnpanic_ I did. ;) | 19:25 |
odyssey4me | nurdie excellent - time to get up to queens, then rocky :) | 19:25 |
fnpanic_ | yeah | 19:25 |
fnpanic_ | sorry ;-) | 19:25 |
nurdie | odyssey4me: Yep! We are already preparing for that. Considering moving to an all-in-one setup with controllers on metal so that we have to rely less on OSA | 19:26 |
*** Emine has quit IRC | 19:26 | |
odyssey4me | nurdie if your env is small and simple, and you like editing files by hand - sure :) | 19:26 |
nurdie | odyssey4me: It is pretty small. Only 3 controllers and a few compute nodes, and I ended up editing a bunch of configs by hand anyways because we couldn't figure out some of the OSA pip/python venvs errors | 19:28 |
*** nurdie has quit IRC | 19:30 | |
openstackgerrit | Jesse Pretorius (odyssey4me) proposed openstack/openstack-ansible-galera_client master: Use in-repo GPG keys https://review.openstack.org/625291 | 19:36 |
openstackgerrit | Jesse Pretorius (odyssey4me) proposed openstack/openstack-ansible-galera_client master: Use in-repo GPG keys https://review.openstack.org/625291 | 19:36 |
*** nurdie has joined #openstack-ansible | 19:37 | |
fnpanic_ | i have cherry-picked the patch and will give it a try now | 19:38 |
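The cherry-pick workflow for testing an unmerged Gerrit review locally looks roughly like this. The change number and patchset (625291, patchset 8) are taken from this log; the commented git commands are a sketch against the galera_client repo and are not verbatim from the conversation.

```shell
# Gerrit exposes every patchset under refs/changes/<last two digits>/<change>/<patchset>.
change=625291
patchset=8
ref="refs/changes/$(printf %s "$change" | tail -c 2)/${change}/${patchset}"
echo "$ref"
# Then, inside a clone of the role repository:
# git fetch https://review.openstack.org/openstack/openstack-ansible-galera_client "$ref"
# git cherry-pick FETCH_HEAD
```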
*** nurdie has quit IRC | 19:41 | |
fnpanic_ | odyssey4me: looks very good so far :-) | 19:47 |
fnpanic_ | odyssey4me: in your patchset the files are in files/gpg | 19:49 |
fnpanic_ | the problem is that the path looks only in gpg | 19:49 |
fnpanic_ | i changed this and it works :-) | 19:49 |
odyssey4me | fnpanic_ hmm, that's odd - it should prefix it with 'files/' automatically | 19:50 |
odyssey4me | can you add that as a comment in-line in the review please? | 19:50 |
fnpanic_ | will do | 19:53 |
fnpanic_ | so setup-infra now works flawlessly behind a proxy | 19:53 |
fnpanic_ | let's see what setup-openstack does | 19:54 |
openstackgerrit | Jesse Pretorius (odyssey4me) proposed openstack/openstack-ansible stable/queens: Add automated migration of neutron agents to bare metal https://review.openstack.org/625331 | 19:54 |
openstackgerrit | Jesse Pretorius (odyssey4me) proposed openstack/openstack-ansible stable/queens: Add automated migration of neutron agents to bare metal https://review.openstack.org/625331 | 19:55 |
fnpanic_ | mhhh TASK [os_keystone : Ensure newest key is used for credential in Keystone | 19:58 |
fnpanic_ | fails | 19:58 |
fnpanic_ | http://paste.openstack.org/show/737335/ | 19:58 |
fnpanic_ | the error message says nothing.... | 19:58 |
odyssey4me | I'm out for the night - time to go offline. I may pop on tomorrow again. | 19:59 |
jamesdenton | see ya odyssey4me | 19:59 |
fnpanic_ | see you | 20:00 |
fnpanic_ | any idea or is keystone broken in aio/rocky? | 20:01 |
fnpanic_ | this does not look like a proxy problem right? | 20:01 |
*** macza has joined #openstack-ansible | 20:01 | |
jrosser | it is possibly more like a no_proxy problem | 20:03 |
fnpanic_ | i was looking into this - you are reading my mind :-) | 20:04 |
fnpanic_ | proxies absolutely suck | 20:04 |
fnpanic_ | sorry for being offensive | 20:04 |
jrosser | does no_proxy look sensible? | 20:04 |
fnpanic_ | yes it does | 20:04 |
fnpanic_ | checking the container now | 20:05 |
fnpanic_ | host is as it should | 20:05 |
fnpanic_ | no_proxy="localhost,127.0.0.1,172.29.236.100,10.0.243.190,172.29.238.62,172.29.237.51,172.29.236.253,172.29.238.29,172.29.239.67,172.29.239.182,172.29.236.238,172.29.239.228,172.29.236.143,172.29.239.15,172.29.238.110,172.29.238.167" | 20:05 |
fnpanic_ | same in the container | 20:05 |
fnpanic_ | :-( | 20:06 |
logan- | there should be a log file in /var/log/keystone/keystone-manage.log that has more detail iirc | 20:07 |
fnpanic_ | that is the lan ip 10.0.243.190 | 20:07 |
fnpanic_ | ok | 20:07 |
fnpanic_ | mhhh | 20:08 |
fnpanic_ | no such logfile - not in the keystone container, not in the rsyslog container, not on the host | 20:09 |
jrosser | does your no_proxy env var actually have those quotes? | 20:10 |
jrosser | env | grep no_proxy <- does that show it like you showed above | 20:10 |
fnpanic_ | it came from export | 20:10 |
jrosser | ah ok | 20:10 |
fnpanic_ | :-) | 20:10 |
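no_proxy is a plain comma-separated list and matching rules vary by client (many do simple suffix/substring matching), so stray quotes or a missing entry silently break it. A quick membership check for one address, using a shortened version of the list pasted above:

```shell
# Check whether a given address appears as an exact entry in no_proxy.
no_proxy="localhost,127.0.0.1,172.29.236.100,10.0.243.190"
addr=172.29.236.100
case ",$no_proxy," in
  *",$addr,"*) result=bypassed ;;
  *)           result=proxied  ;;
esac
echo "$result"
```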
jrosser | so, what i'd do next is run the same command that the playbook did by hand | 20:11 |
fnpanic_ | makes totally sense | 20:11 |
jrosser | but strace <command> and then you'll see in vast detail what happened | 20:11 |
jrosser | and buried in there will be the url it tried | 20:11 |
jrosser | or you can perhaps try something simpler first | 20:12 |
jrosser | which would be to use curl/wget to test the keystone endpoint on your loadbalancer | 20:12 |
jrosser | just see that it returns anything from the POV of the container | 20:12 |
jrosser | fnpanic: actually that test of the LB is really important - please try that first | 20:15 |
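The LB check jrosser suggests can look like the following. The addresses and ports are assumptions, not from the log: 172.29.236.100 is the usual AIO internal VIP, 5000 is keystone's usual port, and 8181 is the repo server port mentioned later.

```
# Run from inside the keystone container:
curl -si http://172.29.236.100:5000/v3/ | head -n1
wget -qO- http://172.29.236.100:8181/ >/dev/null && echo "repo via LB ok"
```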
*** noonedeadpunk has quit IRC | 20:15 | |
fnpanic_ | ok | 20:16 |
fnpanic_ | hatop says that keystone is down for service and admin | 20:19 |
fnpanic_ | infra services are fine like galera | 20:19 |
fnpanic_ | galera, rabbit and repo are healthy | 20:20 |
admin0 | quick question .. is there a proxy or something that will allow me to cache the bootstrap files so that it's faster on my network | 20:20 |
fnpanic_ | all openstack services are not yet ready because not setup :-) | 20:20 |
fnpanic_ | admin0: we use squid for all traffic and it caches the files | 20:21 |
fnpanic_ | but it introduces other problems :-( | 20:22 |
jrosser | i mirror all the requried apt repos | 20:22 |
jrosser | with debmirror - there are lots of choices here | 20:22 |
admin0 | i think squid way is more transparent :) | 20:22 |
jrosser | and it's all behind squid too, just for added fun | 20:22 |
admin0 | :) | 20:22 |
fnpanic_ | ;-) | 20:23 |
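A debmirror invocation for the apt-mirroring approach jrosser describes looks roughly like this; the host, dists, sections, and target path are placeholders to adjust for your environment.

```
debmirror --host=archive.ubuntu.com --root=ubuntu --method=http \
    --dist=bionic,bionic-updates,bionic-security \
    --section=main,restricted,universe,multiverse \
    --arch=amd64 --nosource --progress /srv/mirror/ubuntu
```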
fnpanic_ | jrosser: so what to check next? | 20:23 |
jrosser | perhaps try the keystone-manage command by hand from the container, with the verbose flag | 20:27 |
fnpanic_ | how can i find out which command was executed easily? | 20:28 |
jrosser | https://github.com/openstack/openstack-ansible-os_keystone/blob/master/tasks/keystone_credential_create.yml#L85-L89 | 20:29 |
jrosser | you can re-run just the keystone playbook with -vvv to see more debug, that would probably show you | 20:29 |
fnpanic_ | ok | 20:30 |
jrosser | take a quick look in playbooks/setup-openstack.yml to see how it's all organised | 20:30 |
fnpanic_ | ok | 20:31 |
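The rerun jrosser describes, assuming a standard OSA install path on the deploy host; `-vvv` makes Ansible print the exact command each task executed.

```
cd /opt/openstack-ansible/playbooks
openstack-ansible os-keystone-install.yml -vvv
```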
*** priteau has quit IRC | 20:37 | |
fnpanic_ | # /openstack/venvs/keystone-18.1.1/bin/keystone-manage -d credential_migrate --keystone-user "keystone" --keystone-group "keystone" | 20:37 |
fnpanic_ | this one does no output but exit code is 1 | 20:38 |
jrosser | add --logfile /tmp/log.txt | 20:38 |
jrosser | also on your keystone container can you try 'wget <internal_vip>:8181' and see if that works | 20:41 |
fnpanic_ | i added it and got: keystone-manage: error: unrecognized arguments: --logfile /tmp/log.txt | 20:42 |
fnpanic_ | the docs of keystone-manage say this is correct.... | 20:42 |
fnpanic_ | strange | 20:42 |
jrosser | it probably wants to be the first option | 20:43 |
fnpanic_ | wget works flawless | 20:44 |
fnpanic_ | you are right | 20:45 |
fnpanic_ | first option works.... | 20:45 |
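So the working invocation is along these lines: keystone-manage takes its global (oslo-style) options before the subcommand. The venv path is from the command pasted earlier; the log path is arbitrary.

```
/openstack/venvs/keystone-18.1.1/bin/keystone-manage \
    --logfile /tmp/log.txt -d \
    credential_migrate --keystone-user keystone --keystone-group keystone
```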
jrosser | i'm really hoping there is something useful in the log, otherwise i'm running out of ideas | 20:46 |
fnpanic_ | http://paste.openstack.org/show/737338/ | 20:47 |
fnpanic_ | maybe you have an idea | 20:47 |
fnpanic_ | access denied.... | 20:47 |
fnpanic_ | looks like it cannot connect to the db | 20:48 |
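One way to confirm the DB theory from inside the keystone container: read the connection URL keystone-manage actually uses, then try it by hand. The VIP shown is the usual AIO internal VIP, an assumption; take the real host and credentials from keystone.conf.

```
grep '^connection' /etc/keystone/keystone.conf
mysql -h 172.29.236.100 -u keystone -p keystone -e 'SELECT 1;'
```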
admin0 | is there any option in squid to make it cache downloads from https:// as well ? | 20:55 |
admin0 | or care to share some config | 20:55 |
admin0 | found it :) | 20:56 |
admin0 | ignore me :) | 20:56 |
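For the record, caching HTTPS in squid means TLS interception (ssl_bump): squid re-signs server certificates with a local CA that every client must trust, which has obvious security implications. A squid 4.x sketch, with paths as examples:

```
# squid.conf fragment - clients must trust /etc/squid/ca.pem
http_port 3128 ssl-bump \
    generate-host-certificates=on \
    dynamic_cert_mem_cache_size=4MB \
    cert=/etc/squid/ca.pem
sslcrtd_program /usr/lib/squid/security_file_certgen -s /var/lib/squid/ssl_db -M 4MB
acl step1 at_step SslBump1
ssl_bump peek step1
ssl_bump bump all
```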
fnpanic_ | ;-) | 20:56 |
*** weezS has quit IRC | 20:58 | |
openstackgerrit | Andy Smith proposed openstack/openstack-ansible master: Add qdrouterd role for rpc messaging backend deployment https://review.openstack.org/624184 | 20:59 |
fnpanic_ | i re-ran the setup-infra and in hatop the db is up | 21:01 |
jrosser | maybe something to do with the earlier errors with apt keys | 21:02 |
fnpanic_ | mhhh but everything went as expected | 21:03 |
fnpanic_ | so i think it looks fairly good from what i can see, db and rabbit are online | 21:04 |
*** chandan_kumar has joined #openstack-ansible | 21:13 | |
fnpanic_ | so i guess no one ever installed OSA with a proxy :-) | 21:13 |
jrosser | theres a few of us | 21:16 |
fnpanic_ | that gives me hope | 21:17 |
fnpanic_ | so what am i doing wrong :-) | 21:17 |
jrosser | i have all sorts, squid proxy, deb mirrors, pip mirror, ssh bastion between deploy host and cloud and so on | 21:17 |
jrosser | but that doesnt all happen on day 1 with no effort | 21:18 |
fnpanic_ | we have "just" a squid here | 21:18 |
fnpanic_ | which does http and https | 21:18 |
fnpanic_ | that's it | 21:18 |
jrosser | however, i put the patch in to add user_variables_proxy.yml and it *should* work | 21:18 |
jrosser | if it doesnt, something needs fixing | 21:19 |
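For reference, the proxy support in OSA's docs is driven by deployment-wide variables roughly like the following (variable names as described in the Rocky-era "limited connectivity" documentation; the Jinja loop builds no_proxy from every container address so internal traffic bypasses the proxy):

```yaml
# /etc/openstack_deploy/user_variables.yml
proxy_env_url: "http://proxy.example.com:3128/"
no_proxy_env: "localhost,127.0.0.1,{{ internal_lb_vip_address }},{{ external_lb_vip_address }},{% for host in groups['all_containers'] %}{{ hostvars[host]['container_address'] }}{% if not loop.last %},{% endif %}{% endfor %}"
```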
fnpanic_ | which patch? | 21:19 |
fnpanic_ | can i cherry-pick it? | 21:19 |
jrosser | no - the code that picked up your env vars for proxies and auto-configured that in the AIO setup | 21:20 |
fnpanic_ | ah ok | 21:20 |
fnpanic_ | but this looks ok for me | 21:20 |
jrosser | from what you are saying things are working now? | 21:21 |
fnpanic_ | no same error on keystone install | 21:21 |
fnpanic_ | http://paste.openstack.org/show/737338/ | 21:21 |
*** DanyC has quit IRC | 21:22 | |
fnpanic_ | btw does it make sense to split user_variables.yml into user_variables_nova.yml for example or does this pose problems? | 21:29 |
jrosser | that works | 21:30 |
fnpanic_ | :-) | 21:30 |
jrosser | i'd think about what point you start from scratch again | 21:32 |
fnpanic_ | ? | 21:33 |
jrosser | it's really useful to be able to treat AIO as disposable so you can be sure that theres no left over cruft from stuff that went wrong early on | 21:33 |
jrosser | so working in a VM is a good approach | 21:33 |
fnpanic_ | it is a kvm vm on synnefo/ganeti | 21:33 |
admin0 | fnpanic_, care to share your squid conf ? | 21:34 |
admin0 | i am trying to go down this route as my friday night project | 21:34 |
fnpanic_ | need to look at this - it is a squid cluster, haproxy and 4 squids at the office | 21:35 |
fnpanic_ | i guess the conf is in our puppet git | 21:35 |
admin0 | are you making the servers ignore ssl certs ? | 21:35 |
fnpanic_ | no | 21:35 |
fnpanic_ | i will try a vagrant vm with a local proxy.... | 21:39 |
fnpanic_ | will the apt_key changes be in master or also rocky? | 21:40 |
jrosser | i would expect that to get backported quite a long way - ansible seems to have had trouble with this for a long time | 21:42 |
jrosser | so certainly master & rocky | 21:42 |
fnpanic_ | that sounds great | 21:43 |
fnpanic_ | how can i know when it is in rocky? | 21:43 |
jrosser | if you could verify the latest version that went up that would be great | 21:44 |
fnpanic_ | ok i will do | 21:44 |
jrosser | once it is reviewed and merges into master a new review is created by cherry picking the patch onto the stable/rocky branch | 21:44 |
fnpanic_ | got it | 21:45 |
fnpanic_ | you are talking about 625312 | 21:45 |
fnpanic_ | right? | 21:45 |
jrosser | yes | 21:45 |
fnpanic_ | k | 21:46 |
jrosser | oh hold on - there are two arent there | 21:46 |
jrosser | there is a new version of https://review.openstack.org/#/c/625291/8 | 21:47 |
fnpanic_ | on 625312 i added a comment | 21:47 |
fnpanic_ | the files prefix is missing in the vars/ubuntu.yml | 21:48 |
fnpanic_ | the other one i need to test | 21:48 |
jrosser | if you think it doesnt work as it stands please put a -1 | 21:49 |
fnpanic_ | ok | 21:50 |
jrosser | right i'm done for today - i will try an AIO behind proxy on monday | 21:51 |
*** dcapone2004_ has joined #openstack-ansible | 21:51 | |
dcapone2004_ | is anyone around to answer a couple newbie questions regarding openstack ansible? | 21:52 |
fnpanic_ | ok thanks | 21:52 |
fnpanic_ | have a great weekend | 21:52 |
fnpanic_ | dcapone2004_: | 21:53 |
fnpanic_ | hi | 21:53 |
fnpanic_ | maybe i can | 21:53 |
fnpanic_ | maybe | 21:53 |
dcapone2004_ | cool | 21:53 |
openstackgerrit | Nicolas Bock proposed openstack/openstack-ansible master: Increase CentOS test coverage https://review.openstack.org/610311 | 21:53 |
*** rodolof has joined #openstack-ansible | 21:54 | |
dcapone2004_ | basically I am trying to deploy a small test environment and I am having difficulty determining what interfaces to map to what bridges, as I cannot determine from the documentation which interface is supposed to supply external IP addresses / floating IP addresses | 21:54 |
fnpanic_ | on a usual setup you have a few bridges | 21:55 |
dcapone2004_ | it is also possible that my network cabling method might not work with openstack-ansible, but essentially I have 4 physical hosts: a deployment host, an infrastructure host, a compute host, and a storage host | 21:55 |
fnpanic_ | br-vlan will be the provider network | 21:55 |
fnpanic_ | and there you will have floating ips | 21:56 |
fnpanic_ | this sounds ok | 21:56 |
dcapone2004_ | all hosts have a GigE connection to external network switch and are assigned public IPs, I have bridged this to br-mgmt | 21:56 |
fnpanic_ | have you looked at the production config example and walked through the deployment guide? | 21:56 |
fnpanic_ | ok | 21:56 |
dcapone2004_ | compute host has a direct 10G connection to the storage host (no switch) | 21:56 |
fnpanic_ | this sounds special | 21:57 |
dcapone2004_ | compute host also has a 1GB direct connection to Infra host as I thought this link was needed for Neutron to function, but I'm thinking this is completely unnecessary | 21:57 |
fnpanic_ | not sure if this works | 21:57 |
fnpanic_ | cinder needs to play also nice with in here | 21:57 |
fnpanic_ | first of all take a look here | 21:59 |
fnpanic_ | https://docs.openstack.org/project-deploy-guide/openstack-ansible/rocky/targethosts.html | 21:59 |
fnpanic_ | configure the network section till the end of the page | 21:59 |
fnpanic_ | this makes networking more clear | 21:59 |
fnpanic_ | and | 21:59 |
fnpanic_ | this one | 22:00 |
fnpanic_ | https://docs.openstack.org/openstack-ansible/rocky/user/prod/example.html | 22:00 |
dcapone2004_ | yeah i read that a few times, I think what confuses me is where the external IPs come in and how I can assign the same physical interface to both br-mgmt and br-vlan (because I understood, and you seem to have confirmed, that the floating IPs come from the flat/vlan networks) | 22:01 |
fnpanic_ | there you can see under Host network configuration and Environment layout which nodes need to talk to each other | 22:01 |
fnpanic_ | yes | 22:02 |
dcapone2004_ | yeah that is what I used as a reference for my cabling, compute needs to talk to everything (hence the external switch connection, a direct connection to infra, and a direct connection to storage) | 22:02 |
fnpanic_ | br-mgmt is internal communication and managment | 22:02 |
dcapone2004_ | storage only needs to talk to compute and mgmt, hence the direct connect to compute and the external switch connection to get to mgmt | 22:03 |
fnpanic_ | computes can also have provider bridges, so if you put an instance on a provider network without a floating IP it needs this one | 22:03 |
fnpanic_ | if you use a router it needs to connect to the network node | 22:03 |
dcapone2004_ | yeah which in this small test deployment is the same as the infra node from what i can tell in the documentation | 22:04 |
*** priteau has joined #openstack-ansible | 22:05 | |
*** priteau has quit IRC | 22:05 | |
dcapone2004_ | so what it seems like is I am an interface/subnet short. basically, unlike an aio deploy where the mgmt and floating networks are the same (or at least can be, since I have done it), with this type of deployment I should add a separate vlan, use a small subnet for management, and then take what I have currently assigned to mgmt and assign it to br-vlan | 22:07 |
dcapone2004_ | that is probably the quickest way to get me to a deployment, but I feel like if an aio can share the same subnet for mgmt and floating ips, there must be a way to configure this deployment the same way | 22:08 |
jrosser | dcapone2004_: on the examples https://docs.openstack.org/openstack-ansible/rocky/user/prod/example.html the external network (for floating ip etc) is assumed to be a "flat" network on br-vlan. this means it is the untagged/native traffic on that bridge | 22:13 |
fnpanic_ | i guess so | 22:13 |
fnpanic_ | i am not aware how | 22:13 |
dcapone2004_ | jrosser, yes that is the plan, but for the moment, this environment is deployed offsite, so my mgmt network is public IPs, and presently the mgmt network i am using is essentially the same "flat" network | 22:14 |
dcapone2004_ | that is where I am running into the issue where i think my addressing is not supported | 22:15 |
admin0 | dcapone2004_ .. how do you plan to use the floating ips/network ? is it on vlan or flat ? | 22:15 |
admin0 | i see .. vlan .. | 22:16 |
admin0 | so its easy .. | 22:16 |
jrosser | having the mgmt network on public IP with no firewall or anything is not a great plan | 22:16 |
jrosser | it's assumed that it is a private network | 22:17 |
fnpanic_ | :-) | 22:18 |
fnpanic_ | btw in the production example it can be a flat or tagged vlan right? | 22:18 |
dcapone2004_ | I am aware of that issue long-term and for production | 22:18 |
fnpanic_ | that depends on what you create in neutron | 22:18 |
*** rodolof has quit IRC | 22:18 | |
dcapone2004_ | for now I have just blocked the management IPs used on the hosts to not allow any traffic except from our office | 22:19 |
dcapone2004_ | so I can essentially remotely manage it | 22:19 |
admin0 | dcapone2004_, you already have enough infra to not do an aio but do a proper install | 22:19 |
admin0 | dcapone2004_, maybe this will help.. https://www.openstackfaq.com/openstack-dev-server-setup-ubuntu/ -- | 22:20 |
admin0 | so basically you can have even a single interface or multiple, it all works via vlans | 22:20 |
dcapone2004_ | yep, but I am essentially trying to learn openstack-ansible for deployment, I have used packstack in the past and I am trying to graduate to a better deployment tool | 22:20 |
dcapone2004_ | our production plan is for a ceph cluster for storage, 3 infra nodes, and 2 compute nodes, but need to take some baby steps to understand openstack-ansible much better first | 22:21 |
fnpanic_ | then go for the production deployment guide | 22:21 |
jrosser | have you got control of the switch & creation of vlans etc? | 22:22 |
dcapone2004_ | I also get the vlan thing, I think I just need to use a different subnet/vlan for the mgmt and call it a day because that solves my problem, it just wasn't necessarily my plan | 22:23 |
dcapone2004_ | yes | 22:23 |
jrosser | make the public ip just for ssh into your boxes | 22:23 |
jrosser | put mgmt net on another vlan and it's all then just like the prod example | 22:23 |
dcapone2004_ | essentially, I was looking to minimize subnet usage, so I was looking to assign a simple /24 for mgmt and "vlan", use the first 5-6 IPs of the subnet for mgmt, block those IPs at the firewall level to all traffic except our office IPs for administration | 22:24 |
jrosser | you need dozens of IP on the mgmt net | 22:24 |
jrosser | because each container on each host needs one | 22:24 |
admin0 | dcapone2004_, what you can do is this. add 1 ip to the router .. and then NAT it to your external VIP .. that way, you can access it via 1 IP .. then the real IPs goes only into your floating ip range | 22:25 |
admin0 | rest = all private | 22:25 |
dcapone2004_ | here is where I am having the issue....how do I remotely reach the physical hosts for management if the mgmt network is private IPs? Is the design goal/expectation for it to be managed via VPN? | 22:27 |
fnpanic_ | quick question, is it sufficent when the deployment host has internet access only? | 22:27 |
admin0 | dcapone2004_, you need 1 server on public | 22:27 |
admin0 | then use that as a jumphost | 22:27 |
dcapone2004_ | yep, so that can be the deployment server which makes the most sense | 22:27 |
admin0 | yes | 22:28 |
fnpanic_ | or do all hosts and the containers need internet access? do they not use the repo host for downloading packages and so on? | 22:28 |
admin0 | dcapone2004_, that is normally how its done as well | 22:29 |
*** ivve has quit IRC | 22:29 | |
dcapone2004_ | and I am guessing the suggestion would be that the public IP used for that external access be on a different subnet than the intended floating ip range? | 22:30 |
admin0 | not necessary .. :) | 22:30 |
admin0 | the floating IP you get is based on vlans and the dhcp range you specify | 22:30 |
dcapone2004_ | fnpanic, I think they all need internet, but the intent would be that your mgmt network would have NAT/PAT going on for the servers to access the internet, but not allow access in | 22:31 |
admin0 | so its possible that you have the same range IP in jumphost, and you also use parts of it as floating, on the same vlan | 22:31 |
admin0 | thus having one subnet only, but effectively restricting those IPs based on how you add the range | 22:31 |
admin0 | you can say i have /24, and .1 as router .. so you reserve say 32 ips in the front for future use .. and then in openstack , add the same network and gateway, but in dhcp, give only from .33- 250 range | 22:32 |
dcapone2004_ | ok, that makes sense and I knew that actually, I just forgot I would now have 2 VLANs, so I wouldn't have an issue mapping the 2 different subnets on the deployment host | 22:33 |
admin0 | dcapone2004_, how many network cards are there in each server ? and how many VLANs do you have, and what is the vlanID of the public range ? | 22:33 |
fnpanic_ | dcapone2004_: yeah this is how i set it up | 22:33 |
fnpanic_ | then i need to get the proxy to work | 22:33 |
admin0 | my squid is not saving files :( | 22:34 |
dcapone2004_ | my brain cramp was 2 interfaces/bridges using the same subnet because I was "merging" the mgmt and floating ip subnets, not realizing the IP demands that the mgmt network had | 22:34 |
admin0 | dcapone2004_, how many interfaces do you have ? | 22:34 |
admin0 | you will have 4 bridges .. but how many physical interface ? | 22:34 |
admin0 | 2 or 1 | 22:34 |
admin0 | in each of the server | 22:34 |
dcapone2004_ | I have plenty of NICs, so I can tehcnically have up to 6 if I ever needed the bandwidth, but right now, I have it connected like this: | 22:35 |
dcapone2004_ | all hosts have a GigE connection to external network switch and are assigned public IPs, I have bridged this to br-mgmt | 22:35 |
dcapone2004_ | compute host has a direct 10G connection to the storage host (no switch) | 22:35 |
dcapone2004_ | compute host also has a 1GB direct connection to Infra host | 22:36 |
admin0 | so eth0 of all = public/management ip .. | 22:37 |
dcapone2004_ | basically, I need to trunk the port going to the external switch on the deployment host, 1 vlan for mgmt, 1 for external/floating range for remote access to the environment | 22:37 |
admin0 | if you alrady have public IP, why do you need remote access range ? | 22:37 |
dcapone2004_ | well because as mentioned in my comment, that is where my issue/confusion has come in, because that public IP is what I used on the mgmt bridges | 22:38 |
dcapone2004_ | basically I put all 4 physical hosts on that public subnet, and bridged to the mgmt interface | 22:38 |
dcapone2004_ | which is where I was stuck not understand how to bridge it to br-vlan as well for floating IPs | 22:39 |
dcapone2004_ | I basically need to remove those public IPs from the mgmt bridge, use a private subnet there, trunk/Vlan the deployment server so that I can have a br-mgmt on the private subnet I use and a second vlan with a public IP attached to it | 22:40 |
dcapone2004_ | to manage the environment remotely | 22:41 |
dcapone2004_ | and trunk/vlan the infra physical host in the same way to provide external connectivity to the openstack VMs, which should route all traffic through that system via neutron | 22:42 |
admin0 | eth0 = say private range (can be dhcp or static) (in ALL) .. in deploy make eth0.100 and add your public Ip say under vlan 100 for remote management and ssh | 22:42 |
admin0 | eth1 = connect this of all servers to the switch, make trunk and allow vlan tag say 10, 20 and 30 | 22:42 |
admin0 | eth1.10 = add this to br-mgmt and give an ip of 172.29.236.x in ALL | 22:42 |
admin0 | eth1.20 = add this to br-vxlan and give an ip of 172.29.240.x in compute and infra | 22:42 |
admin0 | eth2 = add this to br-storage and give an ip of 172.29.244.x on compute and storage | 22:42 |
admin0 | eth3 = add this to br-vlan | 22:43 |
admin0 | now assuming your floating ip range is say vlan 200, on eth3, allow trunk and add vlan 200 | 22:43 |
admin0 | when u add ip later, neutron will add eth3.200 and send it tagged | 22:43 |
admin0 | and when u configure, make external VIP as 172.29.236.2 (for example ) then you can also add eth0.100:10 another public IP and SNAT/DNAT/DMZ to .2 so that whenever that public IP is hit, it opens horizon and public access to your openstack | 22:44 |
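admin0's interface recipe above, in /etc/network/interfaces (ifupdown) form; VLAN IDs, bridge names, and addresses follow the example and OSA conventions and are not prescriptive.

```
# eth1.10 carries mgmt traffic into br-mgmt
auto eth1.10
iface eth1.10 inet manual
    vlan-raw-device eth1

auto br-mgmt
iface br-mgmt inet static
    bridge_ports eth1.10
    bridge_stp off
    address 172.29.236.11
    netmask 255.255.252.0

# br-vlan gets the raw interface: neutron creates the tagged
# subinterfaces (e.g. eth3.200) itself later.
auto br-vlan
iface br-vlan inet manual
    bridge_ports eth3
    bridge_stp off
```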
dcapone2004_ | ok, to try to morph your example to what I have currently cabled (to try to avoid a trip to recable for the moment and to also test my understanding of everything), could I do the following: | 22:46 |
dcapone2004_ | eth0.100 on deploy, private IP subnet - add to br-mgmt, eth0.200, public ip for remote management and ssh, no other connection into deploy host | 22:48 |
admin0 | all you need on deploy is the br-mgmt ip range and public | 22:48 |
*** hamzaachi has quit IRC | 22:49 | |
dcapone2004_ | yup, that is what i was trying to convey, 1 physical connection, 2 vlans, the br-mgmt vlan and the public IP vlan | 22:49 |
admin0 | yep | 22:49 |
admin0 | for simplicly ,, br-public and br-mgmt :) | 22:49 |
*** hamzaachi has joined #openstack-ansible | 22:49 | |
admin0 | one has your public IP for remote connection and one has 172.29.236.2 | 22:49 |
dcapone2004_ | on infra physical host, eth0.100 private ip assigned in mgmt subnet, add to br-mgmt, eth0.200 - add to br-vlan for floating IP range, eth1 (no vlaning required as it is a direct connect to compute) add to br-vxlan with 172.29.240.x | 22:51 |
dcapone2004_ | on compute host, eth0.100 private ip assigned in mgmt subnet, add to br-mgmt, eth1 (no vlaning required as it is a direct connect to infra) add to br-vxlan with 172.29.240.x, eth2 (also no VLAN with direct connect to storage) add to br-storage, assign 172.29.244.x | 22:51 |
dcapone2004_ | on storage host, eth0.100 private ip assigned in mgmt subnet, add to br-mgmt, eth2 (or eth1, the correct port directly connected to compute) (also no VLAN with direct connect to compute) add to br-storage, assign 172.29.244.x | 22:52 |
admin0 | right | 22:53 |
admin0 | but there is a catch | 22:54 |
admin0 | you cannot add a vlan directly like that on br-vlan | 22:54 |
admin0 | as neutron creates the vlan tag later on | 22:54 |
admin0 | what you can do is add eth0 to br-vlan .. .. and then add eth0.100 to br-mgmt | 22:54 |
admin0 | so that this eth0.200 is created later automatically | 22:54 |
dcapone2004_ | got it, that makes sense | 22:55 |
admin0 | dcapone2004_, its a old article that i am redoing for rocky, but go to this page: https://www.openstackfaq.com/openstack-liberty-private-cloud-howto/ | 22:55 |
admin0 | search for click here to see single network card | 22:55 |
admin0 | and there you will see this exactly | 22:56 |
*** macza has quit IRC | 22:56 | |
admin0 | +click here to see single network card network configuration of c11 .. c25 nodes | 22:56 |
admin0 | so there, eth0 has a public ip, and is also part of br-vlan .. because neutron adds and tags the eth0.200 later, you can even have an ip on it and use it | 22:57 |
dcapone2004_ | got it | 22:57 |
*** cshen has joined #openstack-ansible | 22:58 | |
admin0 | so if you have a dhcp, you can use it and give direct ip to eth0 for management .. and not tag .100 specifically | 22:58 |
admin0 | because in that case, 172.29.236.x is on vlan1 | 22:58 |
admin0 | or have a diff ip there, and have mgmt under a new vlan tag as in that example | 22:59 |
admin0 | upto you | 22:59 |
dcapone2004_ | yep, I got that, was using the specific vlan numbers to make it easier to illustrate | 22:59 |
admin0 | will this cloud needs to be accessible from outside ? | 22:59 |
dcapone2004_ | I didn't read that page in depth, but it brought up a quick question that might be answered if I read the whole page: is the VXLAN network used by os-ansible a different network than the tenant network(s) inside openstack? | 23:00 |
dcapone2004_ | yes, it would, glad you brought that up, because I am missing that public ip configuration somewhere | 23:00 |
admin0 | br-vlan is the outermost network which runs on a vlan | 23:00 |
admin0 | where all vxlan runs | 23:01 |
admin0 | br-vxlan is the trunk for your internal networks | 23:01 |
dcapone2004_ | internal networks "inside" openstack or "internal" to openstack ansible deployment (that was what my question was targeting) | 23:01 |
admin0 | tenant network :) | 23:02 |
admin0 | you did the eth0.100 -- that takes care of openstack deployment/manaagement/api internal traffic | 23:02 |
dcapone2004_ | ok that is what I understood it to be | 23:02 |
admin0 | that is on br-mgmt | 23:02 |
admin0 | br-vxlan = network on top of which all east-west traffic flows | 23:02 |
admin0 | does your cloud need to be accessible via public ? | 23:03 |
dcapone2004_ | yep, at least horizon would need to be | 23:03 |
admin0 | if yes, then you need to do this ( before setup ) | 23:03 |
dcapone2004_ | and the API endpoints | 23:03 |
admin0 | so what you will do is assuming your 4 nodes have .11, .12 , .13 and .14 ip on 172.29.236.x range, | 23:04 |
admin0 | what you do is now add eth0.200:10 on public and add a 2nd public IP a.b.c.d -- this is for your stack/cloud | 23:04 |
admin0 | and in your openstack_user_config, on external_lb_vip_address: cloud.yourdomain.com .. and in your internal DNS, point cloud.yourdomain.com to 172.29.236.10 ( virtual IP for example) and in user_variables, do haproxy_keepalived_external_vip_cidr: "172.29.236.9/22" haproxy_keepalived_external_interface: "eth0.100" | 23:06 |
dcapone2004_ | that should only be need on the infra host where horizon/keystone are installed correct? | 23:06 |
admin0 | that way, your haproxy ( if you decide it to be on infra) will have 2 ips .. | 23:06 |
admin0 | when haproxy is up .it will also have .9 .. which is NAT/DMZ from the public Ip we added .. and then your endpoints and horizon etc will be on cloud.example.com | 23:07 |
dcapone2004_ | and I don't think it would need to be added as a subinterface right? just eth0.200 because eth0.100 is bridged to br-mgmt and there is no assignment for eth0.200 at all in the "stack" | 23:07 |
*** lbragstad has quit IRC | 23:08 | |
admin0 | if external, i normally make it br-public and if internal/management/ssh i do it as br-ssh | 23:08 |
admin0 | keeps it sane | 23:08 |
admin0 | but you get the idea | 23:08 |
admin0 | the public name/api endpoint is set via external_lb_vip_address which is mapped via haproxy_keepalived_external_vip_cidr and haproxy_keepalived_external_interface | 23:09 |
admin0 | so if you have say 10.11.12.x via eth4 .. you can have that IP given as well | 23:09 |
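The variables admin0 names map to the deploy config roughly like this; the hostname, VIP addresses, and interface names are the example values from the conversation above, and the internal_* variables are shown alongside for symmetry.

```yaml
# /etc/openstack_deploy/openstack_user_config.yml
external_lb_vip_address: cloud.yourdomain.com

# /etc/openstack_deploy/user_variables.yml
haproxy_keepalived_external_vip_cidr: "172.29.236.9/22"
haproxy_keepalived_external_interface: br-public
haproxy_keepalived_internal_vip_cidr: "172.29.236.10/22"
haproxy_keepalived_internal_interface: br-mgmt
```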
dcapone2004_ | ok, I guess the question is, does this second public IP, need to be in the same range as the br-mgmt and NATTED or can it be a separate public IP from the same range as used on the deploy host for public access? | 23:10 |
admin0 | anything | 23:10 |
admin0 | its how you want your cloud to be accessed | 23:10 |
admin0 | it can be on a completely new set of ips on new interface as well | 23:10 |
admin0 | just that if infra is the haproxy host, it will get added to haproxy | 23:10 |
dcapone2004_ | got it, I thought so, but you have been so very helpful, figured I'd pick your brain while I have the chance | 23:10 |
admin0 | its midnight for me .. so tomorrow :) | 23:11 |
admin0 | success :) | 23:11 |
dcapone2004_ | I meant with that question....I'm done with you for a while I hope....thanks a lot | 23:12 |
fnpanic_ | good night | 23:12 |
admin0 | no problem .. if you have questions, just ask | 23:12 |
*** lbragstad has joined #openstack-ansible | 23:16 | |
*** lbragstad has quit IRC | 23:22 |
Generated by irclog2html.py 2.15.3 by Marius Gedminas - find it at mg.pov.lt!