*** markvoelker has quit IRC | 00:02 | |
*** ThiagoCMC has joined #openstack-ansible | 00:13 | |
*** cshen has joined #openstack-ansible | 00:14 | |
ThiagoCMC | cloudnull, hey man, I'm running the setup-hosts playbook now but, I thought that it would be using systemd-nspawn everywhere! However, I'm still seeing that "lxc-ls -f" works in my servers! Is this expected? | 00:14 |
ThiagoCMC | I thought that it would be no more "*lxc*" process running... | 00:15 |
cloudnull | correct, you should not see any lxc processes when running nspawn | 00:15 |
ThiagoCMC | Oh | 00:16 |
ThiagoCMC | Something is wrong... | 00:16 |
cloudnull | :'( | 00:16 |
ThiagoCMC | :-( | 00:16 |
ThiagoCMC | I have the line "container_tech: nspawn" under "global_overrides:" | 00:16 |
ThiagoCMC | Maybe I should declare it before "global_overrides:" (i.e., side by side with it and "used_ips")...? | 00:17 |
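(For context, the placement being debated would look roughly like this in /etc/openstack_deploy/openstack_user_config.yml; this is a sketch, and the used_ips value is purely illustrative:)

```yaml
# /etc/openstack_deploy/openstack_user_config.yml (sketch)
used_ips:
  - "172.29.236.1,172.29.236.50"   # illustrative range

global_overrides:
  # the placement under discussion: nested under global_overrides
  container_tech: nspawn
```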
cloudnull | you can destroy the containers, `openstack-ansible lxc-containers-destroy.yml`, and purge all things lxc everywhere: `ansible -m shell -a 'apt-get remove --purge lxc* lxd* snap*' hosts` | 00:17 |
ThiagoCMC | Nice! No need to re-deploy everything lol | 00:18 |
cloudnull | the global_overrides should work, but maybe we need to use a group var instead. | 00:18 |
*** cshen has quit IRC | 00:18 | |
ThiagoCMC | It isn't working. | 00:19 |
cloudnull | once you purge all the things I'd also recommend running something like `ansible -m shell -a 'ip link del lxcbr0' hosts` | 00:19 |
cloudnull | and `ansible -m shell -a 'systemctl stop lxc-dnsmasq' hosts` | 00:20 |
cloudnull | just to be sure ;) | 00:20 |
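Taken together, the cleanup steps above amount to something like the following sketch. It only makes sense on an OSA deployment host; the `hosts` pattern and playbook name are from the chat, and the `-y` flag is an addition so apt does not prompt:

```shell
cd /opt/openstack-ansible/playbooks

# drop the existing LXC containers
openstack-ansible lxc-containers-destroy.yml

# purge LXC packaging everywhere, then remove the leftover bridge
# and stop the lxc dnsmasq service on every host
ansible -m shell -a 'apt-get remove --purge -y lxc* lxd* snap*' hosts
ansible -m shell -a 'ip link del lxcbr0' hosts
ansible -m shell -a 'systemctl stop lxc-dnsmasq' hosts
```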
cloudnull | bummer on the global_overrides, that should work; I'll have to dig into why it's not. | 00:20 |
cloudnull | that said you could add the option to a file /etc/openstack_deploy/group_vars/all.yml | 00:21 |
cloudnull | which should do much of the same thing | 00:21 |
cloudnull | I'd be curious if that goes. | 00:21 |
cloudnull | once you have the variable in place you can test it with `openstack-ansible containers-nspawn-deploy.yml --list-hosts` | 00:21 |
*** rgogunskiy has joined #openstack-ansible | 00:22 | |
cloudnull | if you see hosts and containers in the list it's working | 00:22 |
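The group-var fallback suggested above is a one-line file; a sketch, using the path from the chat:

```yaml
# /etc/openstack_deploy/group_vars/all.yml
container_tech: nspawn
```

With that in place, `openstack-ansible containers-nspawn-deploy.yml --list-hosts` listing hosts and containers indicates the variable is being picked up.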
ThiagoCMC | That's cool! I'll try again in a few minutes. | 00:22 |
ThiagoCMC | I'm also using Ubuntu MaaS, so, if I can't clean it up using your suggestions, I'll re-deploy. | 00:23 |
ThiagoCMC | BTW, the "openstack-ansible containers-nspawn-deploy.yml --list-hosts" should work right after "setup-hosts" playbook, right? | 00:23 |
cloudnull | yes | 00:23 |
cloudnull | the list will work no matter what | 00:23 |
ThiagoCMC | ok | 00:23 |
cloudnull | that just returns a list of known host items | 00:24 |
cloudnull | if it's empty the variable isn't working | 00:24 |
ThiagoCMC | Ok, I'll wait for the setup-hosts to finish, then, I'll try it | 00:24 |
cloudnull | the output looks like so http://paste.openstack.org/show/732453/ | 00:26 |
ThiagoCMC | Thank you! | 00:27 |
cloudnull | in that environment i have 1 host with nspawn enabled "utility1" | 00:27 |
ThiagoCMC | Awesome | 00:28 |
cloudnull | the "all_nspawn_containers" list is empty because it's generated when the playbook is executed. the important part of that output is the "nspawn_host" section | 00:29 |
cloudnull | if that has host entries then nspawn is in fact enabled. | 00:29 |
ThiagoCMC | Ok | 00:30 |
*** mmercer has quit IRC | 00:47 | |
*** rgogunskiy has quit IRC | 00:56 | |
*** markvoelker has joined #openstack-ansible | 00:58 | |
*** ThiagoCMC has quit IRC | 01:18 | |
openstackgerrit | Kevin Carter (cloudnull) proposed openstack/openstack-ansible-ops master: clean up readme https://review.openstack.org/611668 | 01:24 |
openstackgerrit | Kevin Carter (cloudnull) proposed openstack/openstack-ansible-ops master: Additional playbook cleanup and use stable release https://review.openstack.org/611661 | 01:25 |
*** markvoelker has quit IRC | 01:32 | |
openstackgerrit | Merged openstack/openstack-ansible-ops master: Cleanup the osquery role https://review.openstack.org/611641 | 01:39 |
*** jmccrory has joined #openstack-ansible | 02:02 | |
*** jonher has quit IRC | 02:05 | |
*** jonher has joined #openstack-ansible | 02:05 | |
openstackgerrit | Kevin Carter (cloudnull) proposed openstack/openstack-ansible-ops master: Additional cleanup and simplification https://review.openstack.org/611741 | 02:17 |
*** ThiagoCMC has joined #openstack-ansible | 02:18 | |
openstackgerrit | Merged openstack/openstack-ansible-ops master: clean up readme https://review.openstack.org/611668 | 02:24 |
openstackgerrit | Kevin Carter (cloudnull) proposed openstack/openstack-ansible-ops master: Additional playbook cleanup and use stable release https://review.openstack.org/611661 | 02:29 |
openstackgerrit | Kevin Carter (cloudnull) proposed openstack/openstack-ansible-ops master: Additional cleanup and simplification https://review.openstack.org/611741 | 02:29 |
openstackgerrit | Kevin Carter (cloudnull) proposed openstack/openstack-ansible-ops master: Additional cleanup and simplification https://review.openstack.org/611741 | 02:30 |
*** rgogunskiy has joined #openstack-ansible | 02:54 | |
cloudnull | greatgatsby yes. that should work. however I know there have been issues with cloud-init and networking specifically, so a lot of folks use config-drive and glean (https://docs.openstack.org/infra/glean/) to do a lot of that. | 03:08 |
* cloudnull is no expert of glean or cloud-init but odyssey4me and/or prometheanfire might be able to speak more to the advantages/disadvantages of either solution. | 03:09 | |
openstackgerrit | Merged openstack/openstack-ansible-os_gnocchi stable/rocky: use include_tasks instead of include https://review.openstack.org/609297 | 03:10 |
openstackgerrit | jacky06 proposed openstack/openstack-ansible-os_gnocchi stable/rocky: Replace Chinese punctuation with English punctuation https://review.openstack.org/611754 | 03:10 |
openstackgerrit | jacky06 proposed openstack/openstack-ansible-galera_client stable/rocky: Replace Chinese punctuation with English punctuation https://review.openstack.org/611755 | 03:10 |
openstackgerrit | jacky06 proposed openstack/openstack-ansible-os_horizon stable/rocky: Replace Chinese punctuation with English punctuation https://review.openstack.org/611756 | 03:10 |
openstackgerrit | jacky06 proposed openstack/openstack-ansible-os_panko stable/rocky: Replace Chinese punctuation with English punctuation https://review.openstack.org/611757 | 03:10 |
openstackgerrit | jacky06 proposed openstack/openstack-ansible-ops stable/rocky: Replace Chinese punctuation with English punctuation https://review.openstack.org/611758 | 03:10 |
openstackgerrit | jacky06 proposed openstack/openstack-ansible-plugins stable/rocky: Replace Chinese punctuation with English punctuation https://review.openstack.org/611759 | 03:10 |
openstackgerrit | jacky06 proposed openstack/openstack-ansible-os_tempest stable/rocky: Replace Chinese punctuation with English punctuation https://review.openstack.org/611760 | 03:11 |
*** ThiagoCMC has quit IRC | 03:19 | |
*** rgogunskiy has quit IRC | 03:26 | |
openstackgerrit | Merged openstack/openstack-ansible-ops master: Additional playbook cleanup and use stable release https://review.openstack.org/611661 | 03:37 |
openstackgerrit | Kevin Carter (cloudnull) proposed openstack/openstack-ansible-ops master: Additional cleanup and simplification https://review.openstack.org/611741 | 03:53 |
openstackgerrit | Jimmy McCrory proposed openstack/openstack-ansible-plugins master: Fix connection plugin for Ansible 2.6 https://review.openstack.org/611765 | 03:58 |
openstackgerrit | Jimmy McCrory proposed openstack/openstack-ansible master: Update ansible to latest stable 2.6.x https://review.openstack.org/581166 | 04:04 |
*** faizy98 has quit IRC | 04:06 | |
openstackgerrit | Jimmy McCrory proposed openstack/openstack-ansible master: Use loop_var name in when clause https://review.openstack.org/611766 | 04:09 |
openstackgerrit | Jimmy McCrory proposed openstack/openstack-ansible master: Pin argparse for idempotent pip installs https://review.openstack.org/611767 | 04:11 |
openstackgerrit | Jimmy McCrory proposed openstack/openstack-ansible master: Idempotent system_crontab_coordination https://review.openstack.org/611768 | 04:12 |
openstackgerrit | Jimmy McCrory proposed openstack/openstack-ansible-os_keystone master: Remove keystone service user https://review.openstack.org/611769 | 04:13 |
*** cshen has joined #openstack-ansible | 04:15 | |
openstackgerrit | Jimmy McCrory proposed openstack/ansible-role-python_venv_build master: Mark build task changed only when wheels are built https://review.openstack.org/611770 | 04:16 |
*** cshen has quit IRC | 04:19 | |
*** faizy98 has joined #openstack-ansible | 04:23 | |
openstackgerrit | Jimmy McCrory proposed openstack/openstack-ansible master: Update ansible to latest stable 2.6.x https://review.openstack.org/581166 | 04:47 |
openstackgerrit | Merged openstack/openstack-ansible-ops master: Additional cleanup and simplification https://review.openstack.org/611741 | 04:47 |
*** cshen has joined #openstack-ansible | 05:01 | |
*** cshen has quit IRC | 05:05 | |
*** cshen has joined #openstack-ansible | 05:09 | |
openstackgerrit | jacky06 proposed openstack/openstack-ansible-os_horizon master: Add watcher dashboard into horizon https://review.openstack.org/603156 | 05:12 |
*** pcaruana has joined #openstack-ansible | 06:14 | |
*** fghaas has joined #openstack-ansible | 07:04 | |
*** pcaruana has quit IRC | 07:09 | |
*** shardy has joined #openstack-ansible | 07:10 | |
*** pcaruana has joined #openstack-ansible | 07:28 | |
*** pcaruana is now known as pcaruana|elisa| | 07:30 | |
openstackgerrit | Markos Chandras (hwoarang) proposed openstack/openstack-ansible-os_neutron master: tasks: neutron_install: Fix group for neutron hosts https://review.openstack.org/611613 | 07:36 |
*** tosky has joined #openstack-ansible | 07:36 | |
*** jbadiapa has quit IRC | 07:53 | |
*** electrofelix has joined #openstack-ansible | 07:56 | |
*** suggestable has joined #openstack-ansible | 07:58 | |
*** FuzzyFerric has joined #openstack-ansible | 07:58 | |
*** thuydang has joined #openstack-ansible | 08:00 | |
*** DanyC has joined #openstack-ansible | 08:00 | |
*** thuydang has quit IRC | 08:04 | |
*** DanyC has quit IRC | 08:05 | |
*** jbadiapa has joined #openstack-ansible | 08:08 | |
*** jbadiapa has quit IRC | 08:18 | |
odyssey4me | I do love it when jmccrory suddenly jumps in and does a bunch of work - and the work to unblock ansible 2.6 is especially useful! | 08:30 |
*** cshen has quit IRC | 08:31 | |
odyssey4me | I wonder if logan- and evrardjp could look at https://review.openstack.org/#/c/611765 | 08:31 |
odyssey4me | and hwoarang given you've done some plugin work too | 08:32 |
hwoarang | fun stuff | 08:34 |
*** jbadiapa has joined #openstack-ansible | 08:36 | |
evrardjp | looks cleaner but upgrades will be challenging | 08:42 |
suggestable | Hey folks! | 08:43 |
suggestable | I asked yesterday afternoon, but there appear to be more people in the channel now, so... | 08:43 |
suggestable | Deploying stable/rocky from distro packages on Bionic. setup_hosts and setup_infrastructure go through just fine, but healthcheck_infrastructure fails on the second repo check (which I believe it should skip). Is this a bug? | 08:45 |
odyssey4me | suggestable: that work's not complete yet, and the healthcheck needs modification to work with it - if that's within your interest to use, perhaps you could help get it working? | 08:54 |
*** cshen has joined #openstack-ansible | 08:58 | |
*** DanyC has joined #openstack-ansible | 09:01 | |
suggestable | odyssey4me: I would love to, but I'm currently not permitted to contribute code to outside projects (yay for politics!). | 09:02 |
*** cshen has quit IRC | 09:03 | |
noonedeadpunk | morning everyone. | 09:03 |
noonedeadpunk | folks, what do you think about https://review.openstack.org/#/c/611585/ ? It's probably not a very important patch for everyone, but it fixes some cases like mine | 09:05 |
*** DanyC has quit IRC | 09:06 | |
*** strobelight has joined #openstack-ansible | 09:07 | |
odyssey4me | suggestable: hmm, well, would it be ok if you submitted bug reports - and provided the solution in the bug report? | 09:11 |
benkohl | Hi! Because 18.0.0 released, I tried to deploy openstack again. Now I get this error with setup-openstack.yml: https://snag.gy/RUILQx.jpg Any idea? | 09:12 |
*** strobelight has quit IRC | 09:12 | |
odyssey4me | benkohl: that appears to be a stale fact cache, or an unsupported distro | 09:13 |
*** strobelight has joined #openstack-ansible | 09:13 | |
suggestable | odyssey4me: I like your style ;-) | 09:14 |
odyssey4me | suggestable: also, if you're able - there are many bugs already registered that could do with triaging - and it's a great way to get to know OSA more | 09:14 |
suggestable | odyssey4me: Also, running setup-openstack I get "'keystone_oslomsg_rpc_password' is undefined". The pw_token_gen script was then run (with --regen) against an empty file to see if any more passwords were missing, and couldn't find any reference to the missing password. Perhaps the pw-token-gen script is bugged? | 09:15 |
benkohl | odyssey4me: yeah I really get many problems with fact cache and linux screen. I always delete the fact cache folder content before running a playbook. Is it possible to fill the fact cache before running the other playbooks? | 09:16 |
odyssey4me | suggestable: it's definitely there: https://github.com/openstack/openstack-ansible/blob/stable/rocky/etc/openstack_deploy/user_secrets.yml#L36 | 09:17 |
odyssey4me | suggestable: did you perhaps bootstrap on something older? | 09:17 |
odyssey4me | benkohl: you can run: ansible -m setup all | 09:17 |
benkohl | odyssey4me: Thank you :) | 09:18 |
suggestable | odyssey4me: I bootstrapped after grabbing the rocky files, over the top of the queens files/bootstrap. | 09:19 |
odyssey4me | benkohl: unfortunately the 'gather_facts' in the playbook works a little differently to using the setup module in a task - the second does a proper update, the first tries to be smart and often doesn't work all that nicely | 09:19 |
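The fact refresh being recommended is a single ad-hoc run; a sketch, run from the playbooks directory as usual:

```shell
# Force a full fact refresh for every host; unlike play-level gather_facts,
# the ad-hoc setup module always re-gathers and updates the fact cache.
ansible -m setup all
```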
suggestable | Is there a way to force it to re-bootstrap from scratch? | 09:19 |
odyssey4me | suggestable: use the lxc-containers-destroy play to delete the containers and data, then wipe out your /etc/openstack_deploy directory (or move it to another path), then rebootstrap from scratch | 09:21 |
odyssey4me | although personally I prefer using disposable VM's | 09:21 |
*** hamzaachi has joined #openstack-ansible | 09:30 | |
suggestable | odyssey4me: When I grabbed the rocky base from GitHub, I moved /opt/openstack-ansible to a different path, then re-grabbed it, followed by running the Rocky bootstrap. Was this not sufficient? | 09:31 |
odyssey4me | suggestable: nope, all your user-space config is still in place - and the issue is that the user-space config is missing that var | 09:32 |
odyssey4me | specifically the /etc/openstack_deploy/user_secrets.yml file | 09:32 |
odyssey4me | there is one of the upgrade plays that will add anything that's missing | 09:32 |
*** greatgatsby has quit IRC | 09:34 | |
*** greatgatsby has joined #openstack-ansible | 09:34 | |
suggestable | odyssey4me: OK, I'm trying to deploy Rocky onto Bionic, on wiped-and-reloaded machines, using my Queens configs (with the line added to user_variables to set it to install from distro repos). The only machine that wasn't wiped and reloaded was the deployment host. | 09:35 |
suggestable | odyssey4me: How do I trigger the config upgrade script(s)? | 09:35 |
odyssey4me | suggestable: so if you had a running queens install, then you can just use the scripts/run-upgrade.sh script to do the upgrade unattended | 09:44 |
odyssey4me | or you can use the manual steps if you prefer https://docs.openstack.org/openstack-ansible/rocky/admin/upgrades/major-upgrades.html | 09:44 |
suggestable | odyssey4me: I *did* have a working test Queens deployment, but we blew it away for the upgrade, as we felt it was best to do a clean install on Bionic for longer-term support of the platform. | 09:52 |
*** faizy98 has quit IRC | 09:59 | |
*** faizy98 has joined #openstack-ansible | 10:00 | |
noonedeadpunk | odyssey4me: Am I right that lxc containers are created with the same OS that is installed on the node? So in case of an OS upgrade from xenial to bionic, I may destroy and create containers one by one? And that's the way to upgrade? | 10:01 |
odyssey4me | suggestable: the upgrade doesn't actually need any running environment - just the old config and inventory will do I think | 10:02 |
*** DanyC has joined #openstack-ansible | 10:02 | |
odyssey4me | noonedeadpunk: yep, we have some notes here - pleasse add to them based on your tests and experience: https://etherpad.openstack.org/p/osa-rocky-bionic-upgrade | 10:02 |
suggestable | odyssey4me: Thanks. | 10:02 |
odyssey4me | look through the whole etherpad before doing anything, because there are two sets of testing and results there | 10:03 |
noonedeadpunk | odyssey4me: thanks, that will be useful. I'll update it if I find something interesting. Once I get R working after the upgrade from Q, I'll start the OS upgrade. | 10:05 |
noonedeadpunk | As something strange happened with nova-api containers | 10:06 |
*** DanyC has quit IRC | 10:06 | |
*** faizy_ has joined #openstack-ansible | 10:13 | |
*** faizy98 has quit IRC | 10:17 | |
*** cshen has joined #openstack-ansible | 10:25 | |
*** hamzaachi has quit IRC | 10:29 | |
*** DanyC has joined #openstack-ansible | 10:39 | |
*** faizy_ has quit IRC | 10:40 | |
*** faizy_ has joined #openstack-ansible | 10:40 | |
*** DanyC has quit IRC | 10:43 | |
openstackgerrit | Markos Chandras (hwoarang) proposed openstack/openstack-ansible master: zuul: SUSE: Fix scenario for ceph jobs https://review.openstack.org/611843 | 10:52 |
*** dave-mccowan has joined #openstack-ansible | 10:55 | |
benkohl | odyssey4me: your hint worked... and now this :( https://snag.gy/xOqMXR.jpg sorry, I'm definitely annoying. | 11:01 |
benkohl | and that: https://snag.gy/RzDMtr.jpg | 11:03 |
odyssey4me | benkohl: was there perhaps a failure earlier in the tasks - because that handler failure is due to a missing service | 11:04 |
*** DanyC has joined #openstack-ansible | 11:06 | |
benkohl | odyssey4me: that makes sense... The buffer of gnu screen seems to be too short. So I must run the task again with a larger buffer. | 11:06 |
odyssey4me | benkohl: yep, I use tmux with a buffer of 9999 - but note that there is a log file too in https://github.com/openstack/openstack-ansible/blob/master/scripts/openstack-ansible.rc#L19 | 11:08 |
noonedeadpunk | the only question about the upgrade is how to complete the config conversion from lxc2 to lxc3. Am I right that it should be done manually, per these release notes? https://discuss.linuxcontainers.org/t/lxc-2-1-has-been-released/487 | 11:11 |
noonedeadpunk | Without this conversion the containers won't come up after the upgrade. Or is that only the first upgrade variant, which may be skipped if I use the second one? I.e. upgrade 2 nodes, work from only one of them, and not worry about the non-working containers at all | 11:15 |
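As an aside, LXC 2.1 itself ships a conversion helper for legacy configs (described in the release notes linked above); a hedged sketch, assuming the default LXC container path:

```shell
# Rewrite pre-2.1 config keys (lxc.network.*, etc.) in place.
# lxc-update-config ships with LXC >= 2.1.
lxc-update-config -c /var/lib/lxc/CONTAINER_NAME/config
```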
*** DanyC_ has joined #openstack-ansible | 11:20 | |
*** DanyC has quit IRC | 11:23 | |
*** jonher has quit IRC | 11:26 | |
*** jonher has joined #openstack-ansible | 11:27 | |
openstackgerrit | Jesse Pretorius (odyssey4me) proposed openstack/openstack-ansible master: Make system_crontab_coordination idempotent https://review.openstack.org/611768 | 11:28 |
odyssey4me | noonedeadpunk: the conversion should be automatic, we've put translations into the lxc-container-create role | 11:35 |
*** vollman has quit IRC | 11:37 | |
*** cshen has quit IRC | 11:41 | |
*** cshen has joined #openstack-ansible | 11:47 | |
canori01 | hey guys, I've been having trouble getting designate-sink to work with openstack-ansible. I've configured the notification topic and the neutron handler. However, designate-sink just sits there doing nothing. Has anyone run into this? | 11:47 |
hwoarang | cores + mbuil any luck to help me get https://review.openstack.org/#/c/611613/ in to fix a neutron race condition? | 12:06 |
*** rgogunskiy has joined #openstack-ansible | 12:07 | |
*** rgogunskiy has quit IRC | 12:11 | |
noonedeadpunk | Folks, I have problems after the Q->R upgrade. It started with the VNC console in the nova-api container. Then I decided to re-create these containers, after which nova-api just refused to start because of missing .so libraries. I decided to re-create the repo container as well, but while building wheels I'm receiving the following error: http://paste.openstack.org/show/732488/ | 12:14 |
noonedeadpunk | And in /var/log/repo/wheel_build.log on this host I see following problem http://paste.openstack.org/show/732489/ | 12:15 |
odyssey4me | noonedeadpunk: did you look at https://gist.github.com/cloudnull/cb87440c8221104ed2b857e67289905f | 12:16 |
odyssey4me | that note helps resolve the missing so files in the venvs | 12:16 |
noonedeadpunk | but it is still on xenial | 12:16 |
odyssey4me | oh really? that's interesting | 12:16 |
noonedeadpunk | and during repo_build role run | 12:16 |
odyssey4me | oh, it's from gnocchi - of course :/ | 12:17 |
*** cshen has quit IRC | 12:18 | |
noonedeadpunk | yeah, and it's using ceph as a backend | 12:18 |
odyssey4me | ah, interesting - ok it seems that cython is not pinned anywhere - nice to find that gap | 12:18 |
odyssey4me | I'll push up a patch for that | 12:19 |
suggestable | Hey again... | 12:20 |
suggestable | I've managed to get it a lot further this time around, but hitting a dependency issue with Neutron-Linuxbridge-Agent. | 12:20 |
suggestable | Depends: neutron-linuxbridge-agent (= 2:13.0.0~b2-0ubuntu1~cloud0) but 2:13.0.1-0ubuntu1~cloud0 is to be installed | 12:20 |
suggestable | (stable/rocky on Bionic from distro) | 12:21 |
*** cshen has joined #openstack-ansible | 12:21 | |
noonedeadpunk | odyssey4me: cython is present to be honest, but it's too new for cradox | 12:21 |
odyssey4me | noonedeadpunk: yep, I'll help you out for an override now - just preparing a patch first | 12:21 |
*** faizy98 has joined #openstack-ansible | 12:22 | |
noonedeadpunk | yep, sure. | 12:22 |
odyssey4me | noonedeadpunk: can you try adding 'Cython<0.28' into https://github.com/openstack/openstack-ansible-os_gnocchi/blob/master/defaults/main.yml#L177-L184 and see if doing a repo rebuild works? | 12:24 |
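The edit being suggested is just an extra entry on the role's package list; a sketch of the change (the existing entries are elided, not reproduced here):

```yaml
# openstack-ansible-os_gnocchi defaults/main.yml (sketch of the edit)
gnocchi_pip_packages:
  # ... existing entries unchanged ...
  - "Cython<0.28"   # upper pin so the cradox extension still builds
```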
*** faizy_ has quit IRC | 12:24 | |
noonedeadpunk | sure, give me several minutes to check this out | 12:25 |
*** vollman has joined #openstack-ansible | 12:26 | |
*** priteau has joined #openstack-ansible | 12:34 | |
*** gkadam has joined #openstack-ansible | 12:52 | |
*** strattao has joined #openstack-ansible | 12:52 | |
mbuil | hwoarang: One question regarding your patch, if I understand correctly, you detected that apparmor is sometimes not applied in a host because neutron_apparmor_hosts points to a neutron container which is not part of the inventory. To fix that, you want neutron_apparmor_hosts to point to all possible neutron containers, right? Then I wonder about the change because doesn't 'neutron_role_project_group' group less containers than 'all'? | 12:56 |
mbuil | hwoarang: in fact, I thought 'neutron_role_project_group' is a subgroup of 'all' | 12:56 |
odyssey4me | neutron_role_project_group is only applied by the repo build - it has absolutely nothing to do with anything else | 13:00 |
hwoarang | mbuil: the all subgroup contains containers which have nothing to do with neutron | 13:02 |
hwoarang | so it was filtering the wrong thing | 13:02 |
hwoarang | i want to only filter the neutron containers and find the physical hosts from them | 13:03 |
hwoarang | anyway, what we want is to iterate through the entire list of neutron conteiners, find all the physical hosts for them, and then fix apparmor | 13:03 |
noonedeadpunk | odyssey4me: I've tried to remove Cython from the repo container, but 0.29 was installed again by "Install pip packages (from repo)". So probably I should re-create the container? | 13:04 |
*** faizy98 has quit IRC | 13:05 | |
*** faizy98 has joined #openstack-ansible | 13:06 | |
odyssey4me | noonedeadpunk: removing cython won't work - did the edit of the gnocchi role work? If not - I have a patch prepared which will work - I was just hoping for a less intrusive one. | 13:07 |
noonedeadpunk | odyssey4me: I've appended to gnocchi_pip_packages "Cython<0.28" (and "Cython<0.29"), but 0.29 was still installed at repo container | 13:09 |
noonedeadpunk | and the second thought is that the ansible pip module accepts the version as a separate argument, and I've never tried to set the package version inside the package name... | 13:10 |
openstackgerrit | Jesse Pretorius (odyssey4me) proposed openstack/openstack-ansible master: Set an upper pin for Cython to please gnocchi https://review.openstack.org/611864 | 13:10 |
odyssey4me | noonedeadpunk: try that patch | 13:10 |
mbuil | hwoarang: ok, understood, thanks | 13:12 |
odyssey4me | noonedeadpunk: you may need to force a wheel & venv rebuild using the flags | 13:13 |
odyssey4me | noonedeadpunk: actually, perhaps best to just remove /var/www/repo/os-releases/18.0.0 and all its contents | 13:14 |
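The forced rebuild described above can be sketched as follows (the path and release number are from the chat; the repo-build playbook invocation is an assumption about the deployer's next step):

```shell
# On the repo container: remove the built artifacts for this release so the
# wheels and venvs are rebuilt from scratch on the next repo-build run.
rm -rf /var/www/repo/os-releases/18.0.0

# then, from the deployment host:
openstack-ansible repo-build.yml
```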
noonedeadpunk | odyssey4me: thanks, will try it now | 13:14 |
*** lbragstad is now known as elbragstad | 13:23 | |
openstackgerrit | Mohammed Naser proposed openstack/openstack-ansible-os_neutron master: ovs: force to secure fail mode by default https://review.openstack.org/611874 | 13:37 |
noonedeadpunk | odyssey4me: that worked for me | 13:43 |
*** faizy_ has joined #openstack-ansible | 13:45 | |
*** faizy98 has quit IRC | 13:49 | |
mgariepy | hoping this is the last time i ask for this one: can i get a few votes : https://review.openstack.org/#/c/605789/ | 13:51 |
odyssey4me | cores - https://review.openstack.org/611864 needs to merge fairly urgently, and needs porting back to rocky when it is | 13:53 |
*** munimeha1 has joined #openstack-ansible | 13:53 | |
*** rpittau has quit IRC | 13:56 | |
*** rgogunskiy has joined #openstack-ansible | 13:57 | |
noonedeadpunk | Can you please also check this patch https://review.openstack.org/#/c/611585/ ? | 13:57 |
noonedeadpunk | Ok, I still have problem with nova-api after container re-creation. http://paste.openstack.org/show/732498/ | 14:01 |
noonedeadpunk | I thought that the problem was in the downloaded venv, but repo container re-creation didn't do the trick | 14:03 |
*** rgogunskiy has quit IRC | 14:04 | |
spotz | odyssey4me: done | 14:05 |
mgariepy | odyssey4me, done. | 14:06 |
*** rgogunskiy has joined #openstack-ansible | 14:10 | |
suggestable | Hi guys... Still struggling with this. Deploying stable/rocky on Bionic using distro packages. During setup-infrastructure, I hit this: | 14:13 |
suggestable | http://paste.openstack.org/show/RfYMFGF5mHjiK6gBgza4/ | 14:13 |
suggestable | TASK [os_neutron : Install neutron role packages] | 14:13 |
*** rgogunskiy has quit IRC | 14:15 | |
*** ThiagoCMC has joined #openstack-ansible | 14:15 | |
ThiagoCMC | Hey guys, the following playbook is failing right at the beginning: "openstack-ansible setup-openstack.yml", with error: FAILED! => {"msg": "'keystone_oslomsg_rpc_password' is undefined"} | 14:18 |
ThiagoCMC | What is happening ? | 14:18 |
*** rgogunskiy has joined #openstack-ansible | 14:18 | |
suggestable | ThiagoCMC: I had this same issue earlier today. You need to upgrade your configs from Queens to Rocky. | 14:19 |
*** thuydang has joined #openstack-ansible | 14:19 | |
ThiagoCMC | How? | 14:19 |
suggestable | The upgrade scripts will add in all the required changes. | 14:19 |
suggestable | /opt/openstack-ansible/scripts/run-upgrade.sh | 14:19 |
*** thuydang has quit IRC | 14:19 | |
ThiagoCMC | Hmm... Let me try it now! | 14:20 |
noonedeadpunk | odyssey4me: do you have any ideas what might be the root of this http://paste.openstack.org/show/732498/ ? I'd appreciate some start point for investigation | 14:22 |
ThiagoCMC | Since I have the old /etc/openstack_deploy subdirectories, but on a totally new and fresh env, I never thought it was necessary to run an "upgrade", since there was no queens here. But, okay, trying it now... | 14:22 |
suggestable | ThiagoCMC: I had exactly the same situation earlier today. Glad I'm not the only one! :-) | 14:23 |
*** rgogunskiy has quit IRC | 14:23 | |
ThiagoCMC | Cool! Small world! :-D | 14:23 |
suggestable | ThiagoCMC: In case you're looking to deploy Rocky via distro packages, be aware that this appears to be currently broken on Ubuntu Bionic (as that's the situation I'm stuck on). | 14:24 |
ThiagoCMC | Oh, that's also exactly what I'm trying to do | 14:24 |
ThiagoCMC | After run-upgrade.sh, do I need to run setup-hosts.yml and setup-infrastructure.yml again? | 14:25 |
suggestable | Indeed, as the upgrade script will fail (after it's made the updates to your configs). | 14:25 |
ThiagoCMC | Damn | 14:26 |
ThiagoCMC | You gave up on installing via distro? | 14:26 |
suggestable | If your Queens configs worked, then the upgrade script will patch them to work with Rocky, and you should be good to go with installing via the normal playbooks. | 14:27 |
suggestable | I haven't yet, but leaning towards it... | 14:27 |
ThiagoCMC | I see, thanks for pointing this out! | 14:27 |
suggestable | No worries. | 14:27 |
suggestable | Glad I could help someone else! | 14:27 |
odyssey4me | ThiagoCMC suggestable - the distro install is only confirmed to be working for opensuse and centos at this point see https://review.openstack.org/#/c/608930/ as an example of the last merge for the tests to confirm they're working | 14:29 |
odyssey4me | notice that there's no 'distro' job for bionic yet | 14:29 |
odyssey4me | oh, and centos doesn't have a distro job yet either | 14:29 |
odyssey4me | the distro install support is brand new, and still needs work - if you're up for helping, that'd be great | 14:30 |
odyssey4me | on top of that, a source-based build cannot be changed to a distro-based build | 14:30 |
suggestable | odyssey4me: Can this please be made crystal clear on the main docs pages? This situation is a little chaotic... | 14:30 |
odyssey4me | suggestable: please register a bug for that | 14:31 |
ThiagoCMC | That sucks, I thought it was stable enough, since it is released and documented, but okay, I'll re-deploy everything from scratch, using source. | 14:36 |
suggestable | ThiagoCMC: Same decision my boss and I just came to. #disappointing | 14:37 |
cloudnull | mornings | 14:37 |
ThiagoCMC | Morning! | 14:37 |
*** rgogunskiy has joined #openstack-ansible | 14:37 | |
cloudnull | ThiagoCMC how goes your nspawn'ing ? | 14:38 |
cloudnull | any cores around want to give this a little love tap https://review.openstack.org/#/c/581166/ :) | 14:38 |
ThiagoCMC | Not good, it was failing to start the systemd containers here and there, so I returned to LXC and the playbook (setup-hosts.yml) worked as before. | 14:38 |
cloudnull | fair enough | 14:39 |
cloudnull | was that with rocky ? | 14:39 |
*** weezS has joined #openstack-ansible | 14:40 | |
ThiagoCMC | yep | 14:40 |
odyssey4me | cloudnull: several fixes have gone into master and not rocky - please try to trace back what's missing and do the appropriate backports | 14:41 |
odyssey4me | specifically for the nspawn_hosts and nspawn_container_create I've seen some missing, and not sure where else | 14:41 |
odyssey4me | I tried a few, but because they weren't ported back in time, there are just so many merge conflicts and I didn't have enough context to understand how to resolve them right. | 14:42 |
ThiagoCMC | suggestable, about the keystone_oslomsg_rpc_password fail, don't you think that the migrate_openstack_vars.py would do the trick, and not the run-upgrade.sh ? | 14:42 |
*** rgogunskiy has quit IRC | 14:42 | |
suggestable | ThiagoCMC: There's a secrets script that's run as part of run-upgrade.sh that adds the missing credentials. | 14:43 |
noonedeadpunk | folks, I really need help with nova-api service in regular LXC on rocky (with xenial), as it's not willing to start even after container re-creation | 14:43 |
ThiagoCMC | I see, okay! | 14:43 |
noonedeadpunk | probably I need to upgrade to bionic then? | 14:43 |
*** FuzzyFerric has quit IRC | 14:43 | |
odyssey4me | noonedeadpunk: nope, get it right on xenial first | 14:43 |
openstackgerrit | Kevin Carter (cloudnull) proposed openstack/openstack-ansible-os_swift master: Add variable for the ssh service and ensure its enabled https://review.openstack.org/607808 | 14:44 |
*** FuzzyFerric has joined #openstack-ansible | 14:44 | |
*** rgogunskiy has joined #openstack-ansible | 14:44 | |
*** rgogunskiy has quit IRC | 14:44 | |
noonedeadpunk | okay. try to debug python then and fix it manually | 14:44 |
cloudnull | odyssey4me good lookin' out. ill go see what the delta is | 14:45 |
ThiagoCMC | Looks like the two cool things I was looking forward to trying on Rocky, nspawn and distro packages, don't work yet... :-( | 14:47 |
ThiagoCMC | But, okay! :-P | 14:47 |
*** spatel has joined #openstack-ansible | 14:48 | |
noonedeadpunk | hm, I thought that nspawn is experimental on rocky, isn't it? | 14:48 |
odyssey4me | noonedeadpunk: it is | 14:48 |
spatel | Today when i reboot one of instance i got this error anyone know what is heck is that? http://paste.openstack.org/show/732501/ | 14:48 |
odyssey4me | as is the distro package installs | 14:48 |
*** cshen has quit IRC | 14:50 | |
ThiagoCMC | Got it! lol | 14:50 |
noonedeadpunk | hm, os-nova-install fails on "Run nova-status upgrade check to validate a healthy configuration" as well :( | 14:50 |
ThiagoCMC | I'll try those two again in 6 months... =) | 14:50 |
ThiagoCMC | BTW, does Rocky support networking-ovn? | 14:51 |
*** jonher has quit IRC | 14:53 | |
*** jonher has joined #openstack-ansible | 14:53 | |
*** pcaruana|elisa| has quit IRC | 14:56 | |
*** sawblade6 has joined #openstack-ansible | 14:57 | |
jrosser | just bear in mind with distro installs you'll struggle to patch the actual openstack code | 14:57 |
jrosser | that may or may not matter to you | 14:57 |
*** pcaruana|elisa| has joined #openstack-ansible | 14:57 | |
*** DanyC has joined #openstack-ansible | 14:58 | |
openstackgerrit | Merged openstack/openstack-ansible-os_swift stable/rocky: releasenotes: oslo-messaging-separate-backends add project name https://review.openstack.org/611623 | 15:00 |
*** DanyC_ has quit IRC | 15:01 | |
*** pcaruana|elisa| has quit IRC | 15:08 | |
*** pcaruana has joined #openstack-ansible | 15:08 | |
openstackgerrit | Merged openstack/openstack-ansible-os_neutron master: tasks: neutron_install: Fix group for neutron hosts https://review.openstack.org/611613 | 15:09 |
noonedeadpunk | odyssey4me: I do not know how, but libpython2.7 was not installed inside the container. It had only libpython2.7-minimal and libpython2.7-stdlib, which wasn't enough for uwsgi to start. | 15:09 |
noonedeadpunk | I don't even know whether I should patch something or not regarding this, as it looks pretty strange. | 15:12 |
openstackgerrit | Kevin Carter (cloudnull) proposed openstack/openstack-ansible-nspawn_hosts stable/rocky: Combined backport from master to ensure nspawn functionality in rocky https://review.openstack.org/611905 | 15:18 |
*** gkadam has quit IRC | 15:18 | |
*** cshen has joined #openstack-ansible | 15:19 | |
cloudnull | ThiagoCMC maybe -cc jamesdenton - I know there was some work that happened w/ OVN im just not sure if its all in rocky | 15:24 |
jamesdenton | it's not all there yet | 15:25 |
jamesdenton | and the HA story isn't completely fleshed out | 15:25 |
ThiagoCMC | Ok, no worries! | 15:25 |
*** electrofelix has quit IRC | 15:25 | |
*** pcaruana has quit IRC | 15:31 | |
*** spatel has quit IRC | 15:33 | |
openstackgerrit | Jimmy McCrory proposed openstack/openstack-ansible-plugins master: Fix issues with and enable Python 3 job https://review.openstack.org/611909 | 15:35 |
openstackgerrit | Jimmy McCrory proposed openstack/openstack-ansible-plugins master: Fix issues with and enable Python 3 job https://review.openstack.org/611909 | 15:36 |
*** rgogunskiy has joined #openstack-ansible | 15:49 | |
*** vnogin has joined #openstack-ansible | 15:50 | |
*** vnogin has quit IRC | 15:50 | |
*** rgogunskiy has quit IRC | 15:54 | |
openstackgerrit | Chandan Kumar proposed openstack/openstack-ansible-os_tempest master: Added support for installing tempest from distro https://review.openstack.org/591424 | 16:05 |
*** mgariepy has quit IRC | 16:10 | |
*** DanyC_ has joined #openstack-ansible | 16:13 | |
ThiagoCMC | cloudnull, are you using nspawn on top of ZFS (root)? | 16:14 |
*** mgariepy has joined #openstack-ansible | 16:15 | |
*** DanyC has quit IRC | 16:16 | |
*** FuzzyFerric has quit IRC | 16:23 | |
*** suggestable has quit IRC | 16:23 | |
*** openstackgerrit has quit IRC | 16:24 | |
*** fghaas has quit IRC | 16:27 | |
*** shardy has quit IRC | 16:30 | |
cloudnull | ThiagoCMC no. i use nspawn w/ btrfs | 16:42 |
cloudnull | I've done several setups, my general go to is LVM w/ a logical volume mounted at /var/lib/machines formatted BTRFS | 16:43 |
noonedeadpunk | folks, is there any switcher between python3 and python2 for venvs and deployment? | 16:43 |
noonedeadpunk | I mean some variable) | 16:44 |
ThiagoCMC | cloudnull, you're really brave! :-O | 16:44 |
cloudnull | I've also done BTRFS as root (that's running on my laptop now) its been great | 16:44 |
ThiagoCMC | Now I wanna do that too! Lol | 16:44 |
cloudnull | nah BTRFS w/ modern kernels seems to be pretty rock solid | 16:44 |
ThiagoCMC | cloudnull, is btrfs good to host qcow2 ? | 16:44 |
ThiagoCMC | I tried in the past, didn't work | 16:45 |
ThiagoCMC | Under load | 16:45 |
cloudnull | generally I'd say no | 16:45 |
ThiagoCMC | Oh, ok | 16:45 |
cloudnull | I had a whole sub thread on that | 16:45 |
cloudnull | https://twitter.com/cloudnull/status/1050943827111026688 | 16:45 |
cloudnull | via sataII and sataIII w/ XFS and BTRFS there's no difference | 16:46 |
*** nsmeds__ has joined #openstack-ansible | 16:46 | |
cloudnull | in terms of IOPS | 16:46 |
cloudnull | via SAS XFS wins by a small margin | 16:46 |
cloudnull | those screenshots were fio results from within a VM | 16:47 |
noonedeadpunk | cloudnull: really interesting research | 16:48 |
cloudnull | you can make BTRFS host qcow's by using the no-data-cow mount option, but even with that XFS still wins | 16:49 |
cloudnull | in terms of IOPS | 16:49 |
cloudnull | BTRFS has snapshots, subvolumes, import / export tools, and other advanced management capabilities which other file systems may not have (except ZFS). | 16:50 |
cloudnull | so its largely a matter of what you want, in terms of commodity cloud with local disks, I'm probably going to stick with XFS for qcow for now. | 16:50 |
cloudnull | I also have my local NAS running BTRFS w/ ~14T of storage using the built in BTRFS RAID (using RAID10). My wife does a lot of over the network video editing on that and it's been great. | 16:52 |
cloudnull | so needless to say, I have no trust issues with BTRFS :) | 16:52 |
noonedeadpunk | do we support python3 venvs at all? | 16:53 |
cloudnull | noonedeadpunk I think so? we did for a time. | 16:54 |
*** openstackgerrit has joined #openstack-ansible | 16:54 | |
openstackgerrit | Merged openstack/openstack-ansible-os_masakari stable/rocky: use include_tasks instead of include https://review.openstack.org/610084 | 16:54 |
cloudnull | I know there were issues with py3 in the rocky cycle so a lot of that work was disabled/reverted | 16:54 |
cloudnull | odyssey4me ? | 16:54 |
ThiagoCMC | cloudnull, wow! That sounds fun! lol | 16:55 |
ThiagoCMC | What about hosting RAW images on btrfs, instead of qcow2? | 16:55 |
noonedeadpunk | cloudnull: I mean that I'm missing libpython2.7 in the nova-api container, so I'm thinking about a patch, but I really don't know how to check that we're using python2 (as in the case of python3 we don't need it) | 16:56 |
cloudnull | that should be ok, though I've not done it myself . | 16:56 |
cloudnull | noonedeadpunk "libpython2.7" should be part of the base image , | 16:56 |
cloudnull | maybe we need to be more explicit about it? | 16:57 |
cloudnull | yea I would assume that package would be installed with https://github.com/openstack/openstack-ansible-lxc_hosts/blob/master/vars/ubuntu-18.04.yml#L48 | 16:57 |
cloudnull | however if it's not then I'd say yes, we should patch that array to have it | 16:57 |
noonedeadpunk | surprisingly for me, it's not installed in the nova-api container, which makes all uwsgi processes inside it fail on launch | 16:57 |
cloudnull | in that case I'd say yes. lets get a patch in to make sure that doesn't happen to anyone else. | 16:58 |
cloudnull | you should be able to correct the issue across the environment with `ansible -m package -a 'name=libpython2.7' all` | 16:59 |
cloudnull | if you're still seeing the problem | 16:59 |
noonedeadpunk | cloudnull: I've re-created container several times, with the same result - libpython2.7 wasn't there... | 16:59 |
cloudnull | noonedeadpunk - probably need to rebuild the base image once we have that package in the list | 17:01 |
noonedeadpunk | I'm not sure I know how to do it. I've also re-created the distro container, but it seems it's not about that | 17:02 |
noonedeadpunk | moreover, python2.7 doesn't depend on libpython2.7 | 17:03 |
cloudnull | interesting. | 17:03 |
noonedeadpunk | http://paste.openstack.org/show/732513/ | 17:04 |
*** hamzaachi has joined #openstack-ansible | 17:04 | |
noonedeadpunk | Unfortunately I don't have the ability to test the same with bionic right now, as I'm only preparing the upgrade | 17:05 |
cloudnull | ok, so... add that package to the list within the role. then `machinectl remove ubuntu-bionic-amd64` (or whatever the base image name is). then purge the lxc cache `rm -rf /var/cache/lxc/downloads/*`. delete the container you want to rebuild `openstack-ansible lxc-containers-destroy.yml --limit nova_all`. then rerun `openstack-ansible containers-deploy.yml --limit lxc_hosts:nova_all` | 17:06 |
cloudnull | that would nuke the old cache and your broken container, then rebuild the cache and recreate the container. | 17:07 |
noonedeadpunk | ok, I'll try now | 17:07 |
noonedeadpunk | thanks! | 17:07 |
cloudnull | do you want to add that package to https://github.com/openstack/openstack-ansible-lxc_hosts/blob/master/vars/ubuntu-18.04.yml#L33 for both bionic and xenial ? | 17:08 |
odyssey4me | cloudnull: yeah, no py2 at all for rocky/master right now - we can explore py3 for stein | 17:10 |
noonedeadpunk | cloudnull: actually I was thinking of adding it somewhere here https://github.com/openstack/openstack-ansible-os_nova/blob/master/vars/ubuntu.yml#L23 but I think adding it explicitly is a good idea | 17:10 |
*** gyee has joined #openstack-ansible | 17:12 | |
cloudnull | noonedeadpunk that would work , my only concern would be that we might need that package in all of the containers that use uwsgi | 17:14 |
jungleboyj | spotz: Is there a way to configure the hostname used by AIO at deployment time? | 17:15 |
spotz | I think I hacked it once jungleboyj. Let me go peek unless cloudnull knows off hand | 17:16 |
*** strattao has quit IRC | 17:17 | |
*** tosky has quit IRC | 17:18 | |
jungleboyj | spotz: Cool. Thanks. Going to try again on my new isolated node. | 17:19 |
openstackgerrit | Merged openstack/openstack-ansible-os_neutron master: ovs: force to secure fail mode by default https://review.openstack.org/611874 | 17:21 |
*** mmercer has joined #openstack-ansible | 17:21 | |
*** strattao has joined #openstack-ansible | 17:21 | |
noonedeadpunk | cloudnull: it seemed enough just to purge the lxc cache and re-create container. | 17:31 |
cloudnull | ah cool! | 17:31 |
noonedeadpunk | thanks for the tip! | 17:31 |
cloudnull | could've been something that went off the rails in cache creation. | 17:31 |
cloudnull | glad you got it going though :) | 17:32 |
noonedeadpunk | yeah, now I may go further and try upgrade to bionic:) | 17:32 |
cloudnull | I did that on my lab environment . | 17:35 |
cloudnull | it went remarkably well :) | 17:35 |
spotz | jungleboyj: roles/bootstrap-host/tasks/prepare_hostname.yml | 17:36 |
cloudnull | granted I was using nspawn so i didnt need to deal with the lxc container config conversion | 17:36 |
spotz | under the tests directory | 17:36 |
cloudnull | odyssey4me https://review.openstack.org/#/c/611905/ - if you get a chance | 17:36 |
cloudnull | a combined backport from master to rocky for nspawn hosts. | 17:36 |
jungleboyj | spotz: Ah, Thank you! | 17:37 |
jungleboyj | spotz: Stupid question, but why are there tasks under tests that impact the outcome of deployment? | 17:39 |
spotz | jungleboyj: I'm thinking it's not tests in the sense you're thinking, but tests of its own work, from reading through the roles | 17:40 |
jungleboyj | Ok. Just totally different from Cinder as nothing in 'tests' would impact how Cinder works when deployed. | 17:41 |
jungleboyj | So, I had seen other notes on looking at things in 'tests' and was very confused by that. | 17:41 |
jungleboyj | Just a different environment I guess. | 17:41 |
ThiagoCMC | I'm curious about OSA Rocky, I'm deploying this using Ubuntu 18.04, including MaaS. Point 1: MaaS isn't the gateway. So, OSA is creating the following file inside of each container: "/etc/apt/apt.conf.d/90curtin-aptproxy" with: "Acquire::http::Proxy "http://192.168.4.10:8000/";" <- this is MaaS PXE IP! Thing is, from the container, it can't reach that, as a workaround, I'm running via /etc/rc.local: "iptables -t nat -A POSTROUTING -o bond0 -j MASQUERADE". Then, playbook worked! | 17:44 |
openstackgerrit | Kevin Carter (cloudnull) proposed openstack/openstack-ansible-nspawn_container_create stable/rocky: Combined backport from master to ensure nspawn functionality in rocky https://review.openstack.org/611926 | 17:45 |
cloudnull | ThiagoCMC you might be able to use a static route in your containers to get to that address. | 17:47 |
cloudnull | however if a local IPtables rule made it go, i'd probably just go with that | 17:47 |
ThiagoCMC | Yeah, 1 change per blade, not bad... I was just curious about why this happened. | 17:47 |
cloudnull | I assume the containers are on a different subnet ? | 17:48 |
ThiagoCMC | Is there a var to disable /etc/apt/apt.conf.d/90curtin-aptproxy from appearing in first place? | 17:48 |
cloudnull | and there's no route from the eth0 interface in the container to that network | 17:48 |
ThiagoCMC | Yes, MaaS PXE is subnet 1, OSA has its own br-mgmt, br-vxlan etc, on different subnets | 17:48 |
cloudnull | that apt config is likely something maas is doing? | 17:48 |
ThiagoCMC | It can't be MaaS! | 17:49 |
ThiagoCMC | Since this file appears inside of OSA containers | 17:49 |
ThiagoCMC | After MaaS long done | 17:49 |
cloudnull | is that file on the host? | 17:49 |
ThiagoCMC | No, inside of each container! | 17:50 |
ThiagoCMC | That OSA creates | 17:50 |
cloudnull | the containers inherit config from the host maybe that's something we're just pulling in ? | 17:50 |
ThiagoCMC | During setup-hosts.yml | 17:50 |
ThiagoCMC | Oh, I didn't know that... | 17:50 |
ThiagoCMC | Maybe! lol | 17:50 |
cloudnull | if /etc/apt/apt.conf.d/90curtin-aptproxy is on the host machine that might explain it | 17:50 |
cloudnull | osa does create a proxy file for apt which points to the repo servers | 17:51 |
ThiagoCMC | Yep, it is also there. | 17:51 |
ThiagoCMC | I didn't know that each container would inherit config like this... Interesting! | 17:51 |
ThiagoCMC | Good to know. | 17:51 |
cloudnull | https://github.com/openstack/openstack-ansible-lxc_hosts/blob/master/vars/ubuntu-18.04.yml#L24 | 17:52 |
cloudnull | so you can change that by defining that list in your user_variables.yml file | 17:52 |
cloudnull | http://paste.openstack.org/show/732518/ | 17:53 |
cloudnull | would work | 17:53 |
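The paste cloudnull linked has since expired; as a hedged sketch, an override in /etc/openstack_deploy/user_variables.yml along the following lines would redefine the list of host files copied into the container base image so the curtin apt-proxy file is left out. The `lxc_cache_map` / `copy_from_host` names are taken from the linked lxc_hosts vars file, and the file list here is illustrative only, so verify both against the role vars for your branch before relying on this.

```yaml
# /etc/openstack_deploy/user_variables.yml
# Illustrative override (verify variable names against the lxc_hosts
# role vars for your branch): copy everything the role normally copies
# from the host EXCEPT the apt proxy config, so the MaaS/curtin file
# /etc/apt/apt.conf.d/90curtin-aptproxy never lands in the cache image.
lxc_cache_map:
  distro: ubuntu
  arch: amd64
  release: bionic
  copy_from_host:
    - /etc/environment
    - /etc/localtime
    - /etc/protocols
    - /etc/apt/sources.list
```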
*** strattao has quit IRC | 17:53 | |
ThiagoCMC | That's awesome! Thank you!!! | 17:54 |
ThiagoCMC | :-D | 17:54 |
cloudnull | ++ | 17:54 |
ThiagoCMC | Now I can see why I didn't see this problem with nspawn! lol | 17:55 |
cloudnull | nspawn has the same mechanism, though it's used less. | 17:56 |
*** fghaas has joined #openstack-ansible | 17:56 | |
ThiagoCMC | The problem that I had with nspawn in OSA was that a few containers were (randomly) failing to start. I could see the container's subdir under /var/lib/machines but "machinectl start $container" failed. No idea why. | 17:57 |
*** strattao has joined #openstack-ansible | 17:57 | |
ThiagoCMC | It was also missing from `machinectl list` output, maybe because it was down? | 17:57 |
ThiagoCMC | It's my first time with nspawn / machinectl =P | 17:57 |
nsmeds__ | hmm ... is there a way to interact with `dynamic_inventory.py` and get IP addresses of containers? for example, running `ansible -i /opt/openstack-ansible/inventory/dynamic_inventory.py --list-hosts all` outputs all hostnames | 18:06 |
nsmeds__ | but I'd like to configure Prometheus' targets using the dynamic inventory, and therefore will need IP addresses | 18:07 |
cloudnull | ThiagoCMC I think the issue you were seeing was likely due to the missing nspawn patches in rocky | 18:08 |
nsmeds__ | (installing mysqld/rabbitmq-exporters inside the containers, but need Prometheus to target the exporters). | 18:08 |
cloudnull | the start / stop thing was likely due to a problem we had with systemd-escape | 18:09 |
ThiagoCMC | I'm glad that I'm not alone! lol | 18:09 |
ThiagoCMC | how can I try the new version? | 18:09 |
cloudnull | the backport patches are now up. however you can checkout the master version of the roles to get all the same things | 18:10 |
nsmeds__ | I understand that setting up a Consul cluster would likely resolve this issue (can configure the containers to register services with consul and Prometheus to get lists of targets that way), but do not have Consul setup yet | 18:10 |
*** cshen has quit IRC | 18:10 | |
cloudnull | nsmeds__ lxc-ls -f will show you all of the container IPs on a local machine | 18:10 |
openstackgerrit | Michael Vollman proposed openstack/openstack-ansible-os_manila master: Iniital commit for some scaffolding https://review.openstack.org/611929 | 18:11 |
openstackgerrit | Michael Vollman proposed openstack/openstack-ansible-os_manila master: Converting os_cinder role to os_manila role https://review.openstack.org/611930 | 18:11 |
cloudnull | otherwise, if you just want to interact with the inventory directly, you'd need to parse the json file. | 18:11 |
ThiagoCMC | cloudnull, ok, thanks! | 18:11 |
ThiagoCMC | nsmeds__, what about etcd? :-P | 18:11 |
openstackgerrit | Merged openstack/openstack-ansible master: Set an upper pin for Cython to please gnocchi https://review.openstack.org/611864 | 18:11 |
openstackgerrit | Merged openstack/openstack-ansible master: Use loop_var name in when clause https://review.openstack.org/611766 | 18:12 |
cloudnull | that said, I do like the consul idea, its something on my list of ops tools I aim to get implemented soon-ish | 18:12 |
cloudnull | if you have something to work from I'd love to collaborate on it | 18:12 |
*** DanyC_ has quit IRC | 18:12 | |
nsmeds__ | Yeah `lxc-ls -f` won't work because that means manually running around, collecting IPs, and then nothing gets updated when respinning containers/etc | 18:12 |
nsmeds__ | getting the JSON comes from ` sudo ./scripts/inventory-manage.py -e` yes ? | 18:13 |
cloudnull | yes. | 18:13 |
cloudnull | the inventory json file is located at /etc/openstack_deploy/openstack_inventory.json | 18:14 |
cloudnull | that python script simply parses that json file. | 18:14 |
*** cshen has joined #openstack-ansible | 18:14 | |
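A minimal sketch of reading that JSON directly, assuming the standard Ansible dynamic-inventory layout (host variables under `_meta.hostvars`, group entries alongside) and an `ansible_host` key per host; OSA releases may carry other keys such as `container_address`, so adjust the field name to what your inventory actually contains.

```python
import json

def container_addresses(inventory_path, group=None):
    """Return {hostname: ip} from an OSA openstack_inventory.json dump.

    Assumes the Ansible dynamic-inventory layout: host variables live
    under _meta.hostvars and each entry carries an 'ansible_host' key
    (field names can vary by release, e.g. 'container_address').
    """
    with open(inventory_path) as f:
        inventory = json.load(f)

    hostvars = inventory["_meta"]["hostvars"]
    if group is None:
        hosts = hostvars.keys()
    else:
        # Group entries may be {"hosts": [...]} dicts or plain lists.
        entry = inventory.get(group, {})
        hosts = entry.get("hosts", []) if isinstance(entry, dict) else entry

    return {h: hostvars[h].get("ansible_host") for h in hosts}
```

For example, `container_addresses("/etc/openstack_deploy/openstack_inventory.json", "galera")` would give the management IPs a Prometheus target list needs.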
nsmeds__ | I think I'm causing a few headaches for myself because we have our own playbooks for setting up Prometheus+Grafana which we don't use `openstack-ansible` wrapper to run | 18:14 |
*** sum12 has quit IRC | 18:14 | |
nsmeds__ | but now I'm basically integrating the two, because Prometheus needs container IPs to target | 18:14 |
cloudnull | if you want it to interact with the json file you could use something like https://github.com/openstack/openstack-ansible-ops/tree/master/bootstrap-embedded-ansible | 18:15 |
nsmeds__ | ok I'll check that out, much appreciated | 18:15 |
cloudnull | just plain-jane ansible that can talk to an osa inventory | 18:15 |
cloudnull | we're using that method to deploy elk and osquery | 18:15 |
openstackgerrit | Merged openstack/openstack-ansible-os_keystone master: Remove keystone service user https://review.openstack.org/611769 | 18:17 |
*** sum12 has joined #openstack-ansible | 18:17 | |
*** noonedeadpunk has quit IRC | 18:21 | |
openstackgerrit | Taseer Ahmed proposed openstack/openstack-ansible master: Integrate Blazar with OpenStack Ansible https://review.openstack.org/549956 | 18:27 |
openstackgerrit | Merged openstack/openstack-ansible master: Make system_crontab_coordination idempotent https://review.openstack.org/611768 | 18:27 |
*** sum12 has quit IRC | 18:27 | |
*** sum12 has joined #openstack-ansible | 18:29 | |
jungleboyj | spotz: Latest complaint ... the bootstrap option for data disk doesn't recognize 'vda' as a disk. :-) | 18:33 |
spotz | jungleboyj: Oh just don't do the export, I never do:) | 18:34 |
jungleboyj | spotz: I need to as I am using a smaller VM that I can keep clones of so I don't have to start from scratch. :-p | 18:34 |
jungleboyj | I got it working. I needed to not try to pass through a disk from the host. Needed to create an image and then share it like a SCSI device, not virtio. | 18:35 |
*** nsmeds__ has quit IRC | 18:38 | |
*** DanyC has joined #openstack-ansible | 18:41 | |
*** nsmeds__ has joined #openstack-ansible | 18:42 | |
*** DanyC_ has joined #openstack-ansible | 18:43 | |
openstackgerrit | Merged openstack/openstack-ansible-os_nova master: Update the pci config for nova. https://review.openstack.org/605789 | 18:46 |
*** DanyC has quit IRC | 18:46 | |
jungleboyj | spotz: I think I have gotten dnsmasq issue resolved in the VM I am using. | 18:47 |
spotz | cool | 18:47 |
*** sum12 has quit IRC | 18:49 | |
*** sum12 has joined #openstack-ansible | 18:51 | |
jungleboyj | spotz: I moved to Rocky by the way. | 19:00 |
spotz | k | 19:00 |
nsmeds__ | cloudnull, just thought I'd share - got it working (grabbing container IP without using openstack-ansible wrapper) - didn't require the `bootstrap-embedded-ansible` either. To test, I edited the Prometheus' targets list as follows | 19:21 |
nsmeds__ | ``` | 19:21 |
nsmeds__ | node-exporter: | 19:21 |
nsmeds__ | - targets: | 19:21 |
nsmeds__ | - "{{ groups['all'] | map('regex_replace', '$', ':9100') | list | sort }}" | 19:21 |
nsmeds__ | + "{{ groups['galera'] | map('extract', hostvars, ['ansible_host']) | map('regex_replace', '$', ':9100') | list | sort }}" | 19:21 |
cloudnull | cool! | 19:22 |
nsmeds__ | and then run it with (sorry for spam, should have used a paste) | 19:22 |
nsmeds__ | ansible-playbook -i pilot -i /opt/openstack-ansible/inventory/dynamic_inventory.py monitoring.yml | 19:22 |
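The Jinja chain nsmeds__ pasted extracts each group member's `ansible_host` from hostvars, appends the node-exporter port, and sorts the result. A plain-Python equivalent (with hypothetical inventory data, for illustration only) shows what the rendered targets list looks like:

```python
def node_exporter_targets(group_hosts, hostvars, port=9100):
    """Plain-Python equivalent of the Jinja chain:
    map('extract', hostvars, ['ansible_host'])
      | map('regex_replace', '$', ':9100') | list | sort
    """
    return sorted("%s:%d" % (hostvars[h]["ansible_host"], port) for h in group_hosts)

# Hypothetical inventory data, for illustration only:
hostvars = {
    "galera1": {"ansible_host": "172.29.239.11"},
    "galera2": {"ansible_host": "172.29.236.2"},
}
print(node_exporter_targets(["galera1", "galera2"], hostvars))
# → ['172.29.236.2:9100', '172.29.239.11:9100']
```

Note the sort is lexicographic on strings, exactly as Jinja's `sort` filter behaves on a list of strings.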
*** openstacking_123 has joined #openstack-ansible | 19:22 | |
nsmeds__ | thanks for help | 19:22 |
cloudnull | anytime ! | 19:22 |
cloudnull | glad you made it go :) | 19:22 |
openstacking_123 | Hope everyone is well. Anyone with experience using magnum? I seem to have an issue after Queens update where Heat registers the stack deployment as complete but Magnum never updates and stays in state CREATE_IN_PROGRESS. | 19:24 |
*** fghaas has quit IRC | 19:25 | |
*** fghaas has joined #openstack-ansible | 19:36 | |
*** DanyC_ has quit IRC | 19:39 | |
ThiagoCMC | Hey guys, the task "systemd_mount : Set the state of the mount" is failing inside of a glance_container. Ansible error output: http://paste.openstack.org/show/732523/ - "systemctl status var-lib-glance-images.mount" shows a problem but, "systemctl restart var-lib-glance-images.mount" works! Any idea? I can even mount the NFS manually... :-/ | 19:43 |
ThiagoCMC | This: `systemctl reload-or-restart $(systemd-escape -p --suffix="mount" "/var/lib/glance/images")` also fails | 19:44 |
ThiagoCMC | File: "/etc/systemd/system/var-lib-glance-images.mount" looks correct! | 19:47 |
ThiagoCMC | systemctl reload-or-restart var-lib-glance-images.mount <- fails | 19:47 |
ThiagoCMC | systemctl stop var-lib-glance-images.mount <- works | 19:47 |
ThiagoCMC | systemctl start var-lib-glance-images.mount <- works! | 19:48 |
ThiagoCMC | No idea why... =/ | 19:48 |
ThiagoCMC | Systemd craziness? Error on glance_container: "mount.nfs: access denied by server while mounting 172.29.244.40:/glance" BUT "/bin/mount 172.29.244.40:/glance /var/lib/glance/images -t nfs -o _netdev" just works on same container! lol | 19:51 |
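The `systemd-escape -p --suffix=mount` call above is how a mount point maps to its systemd unit name. A minimal Python re-implementation, valid for plain ASCII paths only (the real tool additionally escapes bytes outside [a-zA-Z0-9:_.] as \xNN and treats '-' and a leading '.' specially), shows the mapping:

```python
def mount_unit_name(path):
    """Approximate `systemd-escape -p --suffix=mount PATH` for plain
    ASCII paths. The real tool also escapes special bytes as \\xNN;
    this sketch skips that.
    """
    parts = [p for p in path.split("/") if p]  # -p normalizes the path first
    if not parts:
        return "-.mount"  # unit name for the root filesystem
    return "-".join(parts) + ".mount"

print(mount_unit_name("/var/lib/glance/images"))  # → var-lib-glance-images.mount
```

This is why the unit file ThiagoCMC is poking at is named /etc/systemd/system/var-lib-glance-images.mount.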
*** DanyC has joined #openstack-ansible | 19:53 | |
*** dave-mccowan has quit IRC | 19:57 | |
*** DanyC has quit IRC | 19:58 | |
ThiagoCMC | That's very weird. The "reload-or-restart" ONLY works after manually mounting the NFS share and unmounting it! Otherwise, it just fails over and over again... | 19:59 |
ThiagoCMC | So, I left it in a state where it would work on the next run, and openstack-ansible is proceeding! I tricked it! lol | 20:02 |
openstackgerrit | Merged openstack/openstack-ansible-os_octavia stable/rocky: Only run OpenStack tasks once https://review.openstack.org/608757 | 20:08 |
openstackgerrit | Merged openstack/openstack-ansible-os_octavia stable/rocky: releasenotes: oslo-messaging-separate-backends add project name https://review.openstack.org/611625 | 20:08 |
*** strattao has quit IRC | 20:15 | |
*** fghaas has left #openstack-ansible | 20:31 | |
*** cshen has quit IRC | 20:35 | |
openstackgerrit | Merged openstack/openstack-ansible-os_nova master: SUSE: Add support for openSUSE Leap 15 https://review.openstack.org/604080 | 20:36 |
*** cshen has joined #openstack-ansible | 20:40 | |
*** munimeha1 has quit IRC | 20:51 | |
*** vnogin has joined #openstack-ansible | 20:52 | |
*** ThiagoCMC has quit IRC | 20:53 | |
openstacking_123 | running /openstack/venvs/magnum-17.1.2/bin/python /openstack/venvs/magnum-17.1.2/bin/magnum-conductor in any magnum container sets the magnum status to complete | 20:55 |
*** vnogin has quit IRC | 20:56 | |
*** ThiagoCMC has joined #openstack-ansible | 20:57 | |
ThiagoCMC | Guys, any idea about this error: "FAILED! => {"msg": "'nova_oslomsg_notify_host_group' is undefined"}" ? | 20:58 |
ThiagoCMC | Maybe I need this: migrate_openstack_vars.py ? | 20:58 |
ThiagoCMC | TASK: os_ceilometer : Copy ceilometer configuration files | 21:00 |
ThiagoCMC | I can't see "nova_oslomsg_notify_host_group" under /etc/ansible/roles/os_ceilometer! This var only exists under roles/os_nova. | 21:03 |
*** dave-mccowan has joined #openstack-ansible | 21:05 | |
*** dave-mccowan has quit IRC | 21:13 | |
*** goldenfri has quit IRC | 21:22 | |
openstacking_123 | I fixed the queens magnum issue in openstack ansible by installing this version of eventlet referenced here https://github.com/eventlet/eventlet/issues/172#issuecomment-379421165 | 21:24 |
openstacking_123 | specifically pip install https://github.com/eventlet/eventlet/archive/1d6d8924a9da6a0cb839b81e785f99b6ac219a0e.zip | 21:24 |
openstacking_123 | I will try to submit a bug fix | 21:25 |
openstacking_123 | Actually I don't see a place in github to submit issues | 21:27 |
logan- | openstacking_123: https://github.com/openstack/requirements/blob/stable/queens/upper-constraints.txt#L142 is why your queens install didn't have the eventlet 0.23 release. queens upper-constraint for eventlet is 0.20 | 21:29 |
logan- | rocky is still pinned at 0.20 too | 21:31 |
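Upper-constraints files pin packages with `===`, optionally followed by an environment marker after `;`. A small parser sketch makes it easy to check a pin locally; the sample text below is illustrative only, not the actual contents of the linked file:

```python
def constraint_version(constraints_text, package):
    """Find the pinned version of a package in a pip constraints file body.

    OpenStack upper-constraints pin with '===' and may carry environment
    markers after ';' (e.g. eventlet===0.20.0;python_version=='2.7').
    """
    for raw in constraints_text.splitlines():
        line = raw.split("#", 1)[0].split(";", 1)[0].strip()
        if "===" in line:
            name, version = line.split("===", 1)
            if name.strip().lower() == package.lower():
                return version.strip()
    return None

# Illustrative pins only -- not the real file:
sample = "eventlet===0.20.0\nCython===0.28.5\n"
print(constraint_version(sample, "eventlet"))  # → 0.20.0
```

Installing a newer eventlet by hand, as openstacking_123 did, steps outside this constraint, which is why the fix likely belongs in magnum or in the requirements repo rather than in a deployment.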
openstacking_123 | Good find | 21:32 |
openstacking_123 | That was a tough issue to find. None of the processes died and I could not quite figure out where the acknowledgement was supposed to happen. The docs for magnum were very sparse. | 21:34 |
openstacking_123 | logan- you able to get that info to the correct people? | 21:34 |
logan- | probably going to have to file a bug with magnum about this | 21:35 |
logan- | i'm trying to figure out where their bug tracker is | 21:36 |
openstacking_123 | Oh its a magnum issue? Nice | 21:36 |
openstacking_123 | Their IRC is the worst | 21:36 |
openstacking_123 | They don't reply to anything | 21:36 |
logan- | prometheanfire: seems like the eventlet in u-c for queens is breaking magnum ^ not sure if thats going to end up being a fix in requirements or magnum | 21:36 |
logan- | and yeah openstacking_123, I have no idea where their bugs go.. the readme in their repo says to file them here https://bugs.launchpad.net/magnum/ but it seems to be blank | 21:38 |
openstacking_123 | I will ask in their non-responsive IRC as well | 21:38 |
logan- | ¯\_(ツ)_/¯ | 21:38 |
openstacking_123 | Thanks for pointing me in the right direction | 21:39 |
logan- | no prob | 21:39 |
logan- | openstacking_123: looks like they might have migrated magnums bug tracker to storyboard: https://storyboard.openstack.org/#!/project/openstack/magnum | 21:42 |
openstacking_123 | Aww good find | 21:42 |
prometheanfire | logan-: maybe reqs, depends on the exact error | 21:43 |
openstacking_123 | I better do a full deploy and confirm it works | 21:43 |
prometheanfire | I doubt reqs though, since it's been that req for a LONG time | 21:44 |
openstacking_123 | deletes work correctly now. will just confirm deploys don't run into issues | 21:45 |
*** weezS has quit IRC | 21:51 | |
*** DanyC has joined #openstack-ansible | 21:54 | |
openstacking_123 | yup magnum updates correctly now | 22:02 |
openstacking_123 | Made this story https://storyboard.openstack.org/#!/story/2004130 No idea if that is the correct way to do it. Either way thanks for everyones help. | 22:05 |
*** openstacking_123 has quit IRC | 22:08 | |
*** priteau has quit IRC | 22:12 | |
*** ThiagoCMC has quit IRC | 22:19 | |
*** hamzaachi has quit IRC | 22:46 | |
*** cshen has quit IRC | 22:47 | |
greatgatsby | cloudnull, thanks for the glean info, looks very promising | 22:50 |
*** DanyC has quit IRC | 23:31 | |
*** sawblade6 has quit IRC | 23:33 | |
openstackgerrit | Merged openstack/openstack-ansible-ops stable/rocky: Replace Chinese punctuation with English punctuation https://review.openstack.org/611758 | 23:38 |
*** gyee has quit IRC | 23:50 |
Generated by irclog2html.py 2.15.3 by Marius Gedminas - find it at mg.pov.lt!