*** oneswig has joined #openstack-ansible | 00:19 | |
*** sdake_ has quit IRC | 00:34 | |
*** sdake has joined #openstack-ansible | 00:39 | |
*** eil397 has quit IRC | 00:39 | |
*** sdake has quit IRC | 00:41 | |
*** baker has joined #openstack-ansible | 00:42 | |
*** oneswig has quit IRC | 00:53 | |
*** baker has quit IRC | 00:53 | |
*** markvoelker has joined #openstack-ansible | 00:56 | |
*** karimb has quit IRC | 00:57 | |
*** markvoelker has quit IRC | 01:01 | |
*** oneswig has joined #openstack-ansible | 01:03 | |
*** openstack has joined #openstack-ansible | 01:09 | |
*** elo has quit IRC | 01:23 | |
*** oneswig has joined #openstack-ansible | 01:30 | |
*** markvoelker has joined #openstack-ansible | 01:37 | |
*** cemmason1 has quit IRC | 01:47 | |
*** oneswig has quit IRC | 02:03 | |
*** baker has joined #openstack-ansible | 02:04 | |
*** galstrom_zzz is now known as galstrom | 02:07 | |
*** eil397 has joined #openstack-ansible | 02:08 | |
*** sdake has joined #openstack-ansible | 02:10 | |
*** oneswig has joined #openstack-ansible | 02:27 | |
*** oneswig has quit IRC | 02:31 | |
*** baker has quit IRC | 02:31 | |
*** galstrom is now known as galstrom_zzz | 02:39 | |
*** oneswig has joined #openstack-ansible | 02:41 | |
*** apuimedo has quit IRC | 02:43 | |
*** apuimedo has joined #openstack-ansible | 02:45 | |
*** apuimedo has quit IRC | 02:49 | |
*** apuimedo has joined #openstack-ansible | 02:50 | |
*** baker has joined #openstack-ansible | 02:57 | |
*** fawadkhaliq has joined #openstack-ansible | 02:59 | |
*** apuimedo has quit IRC | 02:59 | |
*** apuimedo has joined #openstack-ansible | 02:59 | |
*** apuimedo has quit IRC | 03:04 | |
*** apuimedo has joined #openstack-ansible | 03:05 | |
*** sdake has quit IRC | 03:06 | |
*** sdake has joined #openstack-ansible | 03:07 | |
*** apuimedo has quit IRC | 03:10 | |
*** apuimedo has joined #openstack-ansible | 03:10 | |
*** oneswig has quit IRC | 03:15 | |
*** apuimedo has quit IRC | 03:15 | |
*** apuimedo has joined #openstack-ansible | 03:16 | |
*** tlian has quit IRC | 03:20 | |
*** apuimedo has quit IRC | 03:26 | |
*** apuimedo has joined #openstack-ansible | 03:28 | |
*** daledude has quit IRC | 03:29 | |
*** galstrom_zzz is now known as galstrom | 03:33 | |
*** sdake_ has joined #openstack-ansible | 03:34 | |
*** sdake has quit IRC | 03:35 | |
*** apuimedo has quit IRC | 03:37 | |
*** apuimedo has joined #openstack-ansible | 03:38 | |
*** oneswig has joined #openstack-ansible | 03:38 | |
*** oneswig has quit IRC | 03:42 | |
*** apuimedo has quit IRC | 03:42 | |
*** apuimedo has joined #openstack-ansible | 03:43 | |
*** apuimedo has quit IRC | 03:48 | |
*** apuimedo has joined #openstack-ansible | 03:49 | |
*** oneswig has joined #openstack-ansible | 03:52 | |
*** markvoelker has quit IRC | 03:52 | |
*** apuimedo has quit IRC | 03:59 | |
*** apuimedo has joined #openstack-ansible | 03:59 | |
*** galstrom is now known as galstrom_zzz | 03:59 | |
*** apuimedo has quit IRC | 04:04 | |
*** apuimedo has joined #openstack-ansible | 04:05 | |
*** baker has quit IRC | 04:07 | |
*** eil397 has quit IRC | 04:12 | |
*** apuimedo has quit IRC | 04:20 | |
*** apuimedo has joined #openstack-ansible | 04:22 | |
*** galstrom_zzz is now known as galstrom | 04:23 | |
*** openstackstatus has quit IRC | 04:24 | |
*** openstack has joined #openstack-ansible | 04:27 | |
*** woodard has joined #openstack-ansible | 04:32 | |
*** apuimedo has quit IRC | 04:33 | |
*** apuimedo has joined #openstack-ansible | 04:34 | |
*** apuimedo has quit IRC | 04:39 | |
*** apuimedo has joined #openstack-ansible | 04:39 | |
*** galstrom is now known as galstrom_zzz | 04:40 | |
*** galstrom_zzz is now known as galstrom | 04:44 | |
*** oneswig has joined #openstack-ansible | 04:48 | |
*** apuimedo has quit IRC | 04:48 | |
*** apuimedo has joined #openstack-ansible | 04:49 | |
*** oneswig has quit IRC | 04:53 | |
*** markvoelker has joined #openstack-ansible | 04:53 | |
*** apuimedo has quit IRC | 04:54 | |
*** apuimedo has joined #openstack-ansible | 04:54 | |
*** galstrom is now known as galstrom_zzz | 04:57 | |
*** markvoelker has quit IRC | 04:57 | |
*** apuimedo has quit IRC | 04:59 | |
*** apuimedo has joined #openstack-ansible | 05:00 | |
*** oneswig has joined #openstack-ansible | 05:03 | |
*** apuimedo has quit IRC | 05:05 | |
*** apuimedo has joined #openstack-ansible | 05:06 | |
*** apuimedo has quit IRC | 05:15 | |
*** apuimedo has joined #openstack-ansible | 05:17 | |
*** apuimedo has quit IRC | 05:21 | |
*** apuimedo has joined #openstack-ansible | 05:22 | |
*** apuimedo has quit IRC | 05:26 | |
*** apuimedo has joined #openstack-ansible | 05:27 | |
*** oneswig has quit IRC | 05:36 | |
*** apuimedo has quit IRC | 05:37 | |
*** apuimedo has joined #openstack-ansible | 05:37 | |
*** sdake_ has quit IRC | 05:43 | |
*** apuimedo has quit IRC | 05:46 | |
*** apuimedo has joined #openstack-ansible | 05:47 | |
*** javeriak has joined #openstack-ansible | 05:50 | |
*** elo has joined #openstack-ansible | 05:51 | |
*** sdake has joined #openstack-ansible | 05:51 | |
*** apuimedo has quit IRC | 05:57 | |
*** apuimedo has joined #openstack-ansible | 05:57 | |
*** apuimedo has quit IRC | 06:02 | |
*** apuimedo has joined #openstack-ansible | 06:02 | |
*** apuimedo has quit IRC | 06:07 | |
*** apuimedo has joined #openstack-ansible | 06:08 | |
*** oneswig has joined #openstack-ansible | 06:16 | |
*** apuimedo has quit IRC | 06:17 | |
*** apuimedo has joined #openstack-ansible | 06:18 | |
*** apuimedo has quit IRC | 06:25 | |
*** apuimedo has joined #openstack-ansible | 06:25 | |
*** elo_ has joined #openstack-ansible | 06:32 | |
*** apuimedo has quit IRC | 06:32 | |
*** apuimedo has joined #openstack-ansible | 06:33 | |
*** shausy has joined #openstack-ansible | 06:35 | |
*** apuimedo has quit IRC | 06:37 | |
*** apuimedo has joined #openstack-ansible | 06:38 | |
*** elo has quit IRC | 06:39 | |
*** elo_ has quit IRC | 06:40 | |
*** phiche has joined #openstack-ansible | 06:42 | |
*** elo has joined #openstack-ansible | 06:42 | |
*** apuimedo has quit IRC | 06:42 | |
*** elo is now known as Guest11805 | 06:43 | |
*** apuimedo has joined #openstack-ansible | 06:43 | |
*** oneswig has quit IRC | 06:46 | |
*** Guest11805 has quit IRC | 06:47 | |
*** apuimedo has quit IRC | 06:52 | |
*** apuimedo has joined #openstack-ansible | 06:53 | |
*** eric_lopez has joined #openstack-ansible | 06:54 | |
*** markvoelker has joined #openstack-ansible | 06:54 | |
*** eric_lopez has quit IRC | 06:56 | |
*** eric_lopez has joined #openstack-ansible | 06:56 | |
*** apuimedo has quit IRC | 06:57 | |
*** apuimedo has joined #openstack-ansible | 06:58 | |
*** markvoelker has quit IRC | 06:58 | |
*** phiche has quit IRC | 07:07 | |
*** apuimedo has quit IRC | 07:07 | |
*** apuimedo has joined #openstack-ansible | 07:08 | |
*** oneswig has joined #openstack-ansible | 07:10 | |
*** apuimedo has quit IRC | 07:13 | |
*** apuimedo has joined #openstack-ansible | 07:13 | |
*** oneswig has quit IRC | 07:14 | |
*** phiche has joined #openstack-ansible | 07:16 | |
*** apuimedo has quit IRC | 07:18 | |
*** apuimedo has joined #openstack-ansible | 07:19 | |
*** oneswig has joined #openstack-ansible | 07:25 | |
*** apuimedo has quit IRC | 07:28 | |
*** apuimedo has joined #openstack-ansible | 07:30 | |
*** woodard has quit IRC | 07:32 | |
*** javeriak_ has joined #openstack-ansible | 07:39 | |
*** javeriak has quit IRC | 07:39 | |
*** apuimedo has quit IRC | 07:39 | |
*** apuimedo has joined #openstack-ansible | 07:40 | |
*** apuimedo has quit IRC | 07:45 | |
*** apuimedo has joined #openstack-ansible | 07:45 | |
*** javeriak_ has quit IRC | 07:46 | |
*** javeriak has joined #openstack-ansible | 07:47 | |
*** apuimedo has quit IRC | 07:50 | |
*** apuimedo has joined #openstack-ansible | 07:52 | |
*** apuimedo has quit IRC | 07:57 | |
*** oneswig has quit IRC | 07:57 | |
*** apuimedo has joined #openstack-ansible | 07:58 | |
*** adac has joined #openstack-ansible | 08:00 | |
adac | Morning folks | 08:00 |
*** cemmason1 has joined #openstack-ansible | 08:03 | |
*** apuimedo has quit IRC | 08:05 | |
*** oneswig has joined #openstack-ansible | 08:06 | |
*** apuimedo has joined #openstack-ansible | 08:07 | |
*** egonzalez has joined #openstack-ansible | 08:13 | |
openstackgerrit | Jesse Pretorius proposed openstack/openstack-ansible: Add neutron_ceilometer_enabled default https://review.openstack.org/257268 | 08:14 |
*** apuimedo has quit IRC | 08:16 | |
*** agireud has joined #openstack-ansible | 08:17 | |
*** apuimedo has joined #openstack-ansible | 08:18 | |
*** cemmason1 has quit IRC | 08:22 | |
*** cemmason1 has joined #openstack-ansible | 08:23 | |
*** apuimedo has quit IRC | 08:23 | |
*** agireud has quit IRC | 08:24 | |
*** apuimedo has joined #openstack-ansible | 08:24 | |
openstackgerrit | Jesse Pretorius proposed openstack/openstack-ansible-security: Merge bashate/pep8 lint jobs in common job https://review.openstack.org/257728 | 08:25 |
*** sdake has quit IRC | 08:27 | |
*** admin0 has joined #openstack-ansible | 08:29 | |
*** agireud has joined #openstack-ansible | 08:31 | |
*** oneswig has quit IRC | 08:32 | |
*** woodard has joined #openstack-ansible | 08:33 | |
*** cemmason1 has quit IRC | 08:42 | |
*** oneswig has joined #openstack-ansible | 08:44 | |
*** woodard has quit IRC | 08:45 | |
*** karimb has joined #openstack-ansible | 08:47 | |
odyssey4me | o/ adac | 08:48 |
*** oneswig has quit IRC | 08:48 | |
*** markvoelker has joined #openstack-ansible | 08:55 | |
adac | odyssey4me, I just rebooted my machine (installed AIO on a physical machine now) but I'm having some trouble accessing the keystone web interface with the correct credentials. It says: "An error occurred authenticating. Please try again later." Which log would provide more information in this case? The keystone logs are not very verbose about this incident, it seems | 08:55 |
*** cemmason1 has joined #openstack-ansible | 08:57 | |
odyssey4me | adac you can enable debug logging across the stack by adding 'debug: True' to /etc/openstack_deploy/user_variables.yml | 08:57 |
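For reference, the debug toggle odyssey4me mentions is a single override in the deployer's user variables file; a minimal sketch (path and variable name as given in the line above):

```yaml
# /etc/openstack_deploy/user_variables.yml
# Enable debug-level logging across the services OSA deploys; re-running the
# relevant playbooks (e.g. setup-openstack.yml, mentioned below) applies it.
debug: True
```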
odyssey4me | adac are you sure that your database is running? | 08:57 |
adac | odyssey4me, not really. The processes are running; however, ansible galera_container -m shell -a "cat /var/lib/mysql/grastate.dat" always just shows me status -1 on all nodes | 08:58 |
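For context, grastate.dat is the small file in which Galera records a node's last known cluster position; a typical one looks roughly like the sketch below (values illustrative), and a seqno of -1 generally just means the node is either currently running or was not shut down cleanly, so on its own it is not proof of corruption:

```
# /var/lib/mysql/grastate.dat (illustrative values)
# GALERA saved state
version: 2.1
uuid:    1f2a3b4c-5d6e-7f80-91a2-b3c4d5e6f708
seqno:   -1
```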
adac | ok thanks | 08:59 |
*** markvoelker has quit IRC | 08:59 | |
adac | I guess modifying that requires a restart, right? | 08:59 |
odyssey4me | adac yeah, apparently that's a non issue | 09:01 |
odyssey4me | adac running the applicable playbooks to effect the debug change will restart services, yes | 09:02 |
odyssey4me | adac you can just run setup-openstack.yml to do it for you | 09:02 |
adac | odyssey4me, ok, trying that. Do I need to restart the db manually again afterwards? | 09:03 |
odyssey4me | adac no, you never need to restart the database for changes unless they are changes to the DB configuration | 09:03 |
adac | odyssey4me, but if the machine was rebooted, I have to, right? | 09:03 |
openstackgerrit | Jesse Pretorius proposed openstack/openstack-ansible-rsyslog_client: Merge bashate/pep8 lint jobs in common job https://review.openstack.org/257536 | 09:05 |
odyssey4me | adac yes, if the machine was rebooted then you need to bootstrap the galera cluster | 09:06 |
adac | odyssey4me, yes I did that before, maybe an error occurred on that | 09:06 |
*** shausy has quit IRC | 09:08 | |
*** admin0 has quit IRC | 09:09 | |
*** shausy has joined #openstack-ansible | 09:09 | |
*** oneswig has joined #openstack-ansible | 09:11 | |
openstackgerrit | Jesse Pretorius proposed openstack/openstack-ansible-openstack_hosts: Merge bashate/pep8 lint jobs in common job https://review.openstack.org/257752 | 09:12 |
*** cemmason1 has quit IRC | 09:12 | |
*** cemmason1 has joined #openstack-ansible | 09:15 | |
*** karimb_ has joined #openstack-ansible | 09:16 | |
*** karimb has quit IRC | 09:16 | |
*** oneswig has quit IRC | 09:17 | |
openstackgerrit | Jesse Pretorius proposed openstack/openstack-ansible-galera_server: Merge bashate/pep8 lint jobs in common job https://review.openstack.org/257755 | 09:17 |
*** oneswig has joined #openstack-ansible | 09:18 | |
openstackgerrit | Jesse Pretorius proposed openstack/openstack-ansible-galera_server: Merge bashate/pep8 lint jobs in common job https://review.openstack.org/257755 | 09:18 |
*** admin0 has joined #openstack-ansible | 09:20 | |
*** oneswig has quit IRC | 09:23 | |
openstackgerrit | Jesse Pretorius proposed openstack/openstack-ansible-lxc_container_create: Merge bashate/pep8 lint jobs in common job https://review.openstack.org/257758 | 09:24 |
openstackgerrit | Jesse Pretorius proposed openstack/openstack-ansible-galera_server: Merge bashate/pep8 lint jobs in common job https://review.openstack.org/257755 | 09:25 |
openstackgerrit | Jesse Pretorius proposed openstack/openstack-ansible-openstack_hosts: Merge bashate/pep8 lint jobs in common job https://review.openstack.org/257752 | 09:26 |
*** tricksters has joined #openstack-ansible | 09:27 | |
*** eric_lopez has quit IRC | 09:29 | |
openstackgerrit | Jesse Pretorius proposed openstack/openstack-ansible-rsyslog_server: Merge bashate/pep8 lint jobs in common job https://review.openstack.org/257761 | 09:32 |
openstackgerrit | Jesse Pretorius proposed openstack/openstack-ansible-rsyslog_server: Merge bashate/pep8 lint jobs in common job https://review.openstack.org/257761 | 09:33 |
*** preeti_ has quit IRC | 09:33 | |
openstackgerrit | Jesse Pretorius proposed openstack/openstack-ansible-rsyslog_client: Merge bashate/pep8 lint jobs in common job https://review.openstack.org/257536 | 09:33 |
openstackgerrit | Jesse Pretorius proposed openstack/openstack-ansible-openstack_hosts: Merge bashate/pep8 lint jobs in common job https://review.openstack.org/257752 | 09:33 |
openstackgerrit | Jesse Pretorius proposed openstack/openstack-ansible-galera_server: Merge bashate/pep8 lint jobs in common job https://review.openstack.org/257755 | 09:34 |
openstackgerrit | Jesse Pretorius proposed openstack/openstack-ansible-lxc_container_create: Merge bashate/pep8 lint jobs in common job https://review.openstack.org/257758 | 09:34 |
openstackgerrit | Jesse Pretorius proposed openstack/openstack-ansible-apt_package_pinning: Merge bashate/pep8 lint jobs in common job https://review.openstack.org/257748 | 09:34 |
*** Prithiv has joined #openstack-ansible | 09:36 | |
*** shausy has quit IRC | 09:37 | |
adac | I restarted all containers with openstack-ansible setup-hosts.yml but no luck, I still could not log in. I tried to reboot the machine again afterwards and restarted the cluster, but I still cannot log in. Checking the logs now for verbose output | 09:38 |
openstackgerrit | Jesse Pretorius proposed openstack/openstack-ansible-memcached_server: Merge bashate/pep8 lint jobs in common job https://review.openstack.org/257767 | 09:38 |
*** shausy has joined #openstack-ansible | 09:38 | |
adac | Seems I'm getting only this 'relevant' output https://gist.github.com/anonymous/ae0a682adeabdd973ca6 | 09:42 |
adac | maybe the db cluster is corrupted. Need to find out how to check that | 09:45 |
openstackgerrit | Jesse Pretorius proposed openstack/openstack-ansible-repo_server: Merge bashate/pep8 lint jobs in common job https://review.openstack.org/257773 | 09:46 |
*** openstackgerrit has quit IRC | 09:47 | |
*** openstackgerrit has joined #openstack-ansible | 09:48 | |
*** agireud has quit IRC | 09:49 | |
odyssey4me | adac setup-hosts doesn't restart the containers, nor do you need to restart them | 09:50 |
odyssey4me | adac from the utility container, if you execute: | 09:51 |
odyssey4me | source /root/openrc | 09:51 |
odyssey4me | then: openstack user list | 09:51 |
odyssey4me | does it work? | 09:51 |
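Put together, the check odyssey4me is describing runs from inside the utility container on the infra/AIO host; a minimal sketch (the container name is an example — the real one carries a generated suffix, so look it up with lxc-ls first):

```bash
# On the host: find and attach to the utility container.
lxc-ls -f | grep utility
lxc-attach -n aio1_utility_container-xxxxxxxx   # example name

# Inside the container: load admin credentials and query keystone.
source /root/openrc
openstack user list
```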
*** agireud has joined #openstack-ansible | 09:52 | |
adac | root@aio1_utility_container-06420a58:/# openstack user list | 09:53 |
adac | An unexpected error prevented the server from fulfilling your request. (HTTP 500) (Request-ID: req-bb2de048-9ec6-4361-9dda-0a9bee2e37d3) | 09:53 |
openstackgerrit | Merged openstack/openstack-ansible: FIX: provider_networks module for multiple vlans https://review.openstack.org/252658 | 09:53 |
adac | odyssey4me, Ok I see now. I misunderstood your statement, sorry | 09:54 |
openstackgerrit | Merged openstack/openstack-ansible: Updating AIO docs for Ansible playbook https://review.openstack.org/244720 | 09:54 |
openstackgerrit | Jesse Pretorius proposed openstack/openstack-ansible-galera_client: Merge bashate/pep8 lint jobs in common job https://review.openstack.org/257782 | 09:54 |
openstackgerrit | Jesse Pretorius proposed openstack/openstack-ansible-rabbitmq_server: Merge bashate/pep8 lint jobs in common job https://review.openstack.org/257788 | 09:57 |
openstackgerrit | Jesse Pretorius proposed openstack/openstack-ansible-lxc_hosts: Merge bashate/pep8 lint jobs in common job https://review.openstack.org/257791 | 09:59 |
*** agireud has quit IRC | 09:59 | |
odyssey4me | adac ok, you should check in the keystone container - the logs there will tell you the issue | 09:59 |
odyssey4me | it is likely a problem with galera - but check the keystone logs to confirm | 10:00 |
odyssey4me | as keystone does the auth, it's often good to check whether it's working before moving on | 10:00 |
*** sdake has joined #openstack-ansible | 10:06 | |
*** Prithiv has quit IRC | 10:06 | |
*** agireud has joined #openstack-ansible | 10:09 | |
openstackgerrit | Jesse Pretorius proposed openstack/openstack-ansible: FIX: provider_networks module for multiple vlans https://review.openstack.org/257797 | 10:09 |
openstackgerrit | Jesse Pretorius proposed openstack/openstack-ansible: FIX: provider_networks module for multiple vlans https://review.openstack.org/257798 | 10:10 |
openstackgerrit | Jesse Pretorius proposed openstack/openstack-ansible: Updating AIO docs for Ansible playbook https://review.openstack.org/257799 | 10:12 |
*** Prithiv has joined #openstack-ansible | 10:12 | |
*** Bofu2U has joined #openstack-ansible | 10:17 | |
openstackgerrit | Jesse Pretorius proposed openstack/openstack-ansible: Updating AIO docs for Ansible playbook https://review.openstack.org/257805 | 10:17 |
*** manous has joined #openstack-ansible | 10:18 | |
*** miguelgrinberg has quit IRC | 10:20 | |
*** agireud has quit IRC | 10:22 | |
adac | odyssey4me, yes you are right. An error appears and it seems to be db/galera related: https://gist.github.com/anonymous/3a52bb29e7a0d52037ed | 10:25 |
odyssey4me | adac if you attach to one of the galera nodes, can you access mysql? | 10:27 |
odyssey4me | adac you can try: ansible galera_container -m shell -a "mysql -h localhost -e 'show status like \"%wsrep_cluster_%\";'" | 10:28 |
odyssey4me | (from http://docs.openstack.org/developer/openstack-ansible/install-guide/ops-galera-recoverymulti.html ) | 10:28 |
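For comparison, on a healthy cluster that ad-hoc check would normally come back with something along these lines on each node (illustrative output; the key signs are a Primary cluster status and a cluster size matching the number of galera containers):

```
wsrep_cluster_conf_id     17
wsrep_cluster_size        3
wsrep_cluster_state_uuid  338b06b0-2948-11e4-9d06-bef42f6c52f1
wsrep_cluster_status      Primary
```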
*** agireud has joined #openstack-ansible | 10:28 | |
adac | odyssey4me, it shows the following: https://gist.github.com/anonymous/3922b9ff4a9dc2360551 | 10:30 |
adac | ok, reading through the page you sent me, thanks! | 10:31 |
*** openstackgerrit has quit IRC | 10:32 | |
adac | be back in about 30 mins | 10:32 |
*** openstackgerrit has joined #openstack-ansible | 10:33 | |
odyssey4me | adac ah, so it seems that your cluster is not properly bootstrapped | 10:34 |
*** agireud has quit IRC | 10:34 | |
openstackgerrit | Jesse Pretorius proposed openstack/openstack-ansible-lxc_hosts: Merge bashate/pep8 lint jobs in common job https://review.openstack.org/257791 | 10:35 |
openstackgerrit | Jesse Pretorius proposed openstack/openstack-ansible-repo_server: Merge bashate/pep8 lint jobs in common job https://review.openstack.org/257773 | 10:37 |
openstackgerrit | Jesse Pretorius proposed openstack/openstack-ansible-rsyslog_server: Merge bashate/pep8 lint jobs in common job https://review.openstack.org/257761 | 10:38 |
Bofu2U | during haproxy-install.yml running -- "[ALERT] 348/104050 (22733) : parsing [/etc/haproxy/conf.d/nova_console_novnc:19] : Unknown host in 'None:6080'" -- looks like it's on every single service not just nova. :-/ What'd I miss? | 10:41 |
Bofu2U | (brb) | 10:41 |
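For context, the 'None' in that alert sits where a backend address should have been templated in; a complete conf.d fragment normally ends up with server lines of roughly this shape (names and addresses below are illustrative, not Bofu2U's actual config):

```
# /etc/haproxy/conf.d/nova_console_novnc (illustrative, trimmed)
frontend nova_console_novnc-front
    bind 203.0.113.10:6080
    mode tcp
    default_backend nova_console_novnc-back

backend nova_console_novnc-back
    mode tcp
    server infra01_nova_console_container-01234567 172.29.236.100:6080 check
```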
openstackgerrit | Jesse Pretorius proposed openstack/openstack-ansible-py_from_git: Merge bashate/pep8 lint jobs in common job https://review.openstack.org/257817 | 10:42 |
openstackgerrit | Jesse Pretorius proposed openstack/openstack-ansible-pip_install: Merge bashate/pep8 lint jobs in common job https://review.openstack.org/257818 | 10:44 |
openstackgerrit | Jesse Pretorius proposed openstack/openstack-ansible-pip_lock_down: Merge bashate/pep8 lint jobs in common job https://review.openstack.org/257822 | 10:48 |
*** agireud has joined #openstack-ansible | 10:48 | |
*** markvoelker has joined #openstack-ansible | 10:55 | |
*** markvoelker has quit IRC | 11:00 | |
odyssey4me | Bofu2U you mentioned earlier that you have a user_group_vars file in place... what else is in there? | 11:08 |
odyssey4me | fyi we typically encourage all user vars to go into user_variables.yml | 11:09 |
*** cemmason1 has quit IRC | 11:10 | |
*** admin0 has quit IRC | 11:11 | |
*** admin0 has joined #openstack-ansible | 11:12 | |
Bofu2U | odyssey4me: user_variables.yml and openstack_user_config.yml | 11:16 |
*** ig0r_ has quit IRC | 11:20 | |
odyssey4me | Bofu2U yep - optionally conf.d files can also be used to augment openstack_user_config stuff | 11:25 |
Bofu2U | rgr | 11:27 |
Bofu2U | any idea what could cause the IP to not be set for that run? | 11:27 |
odyssey4me | Bofu2U if you check the haproxy host conf.d entries do they look complete? | 11:28 |
odyssey4me | is the ansible play failing, or are those just syslog alerts of some sort? | 11:28 |
Bofu2U | ansible play is - 1 sec | 11:30 |
odyssey4me | Bofu2U so the group members are added to the configs, and if there are no group members (ie no containers in that group) then it'll leave a conf line out: https://github.com/openstack/openstack-ansible/blob/12.0.2/playbooks/vars/configs/haproxy_config.yml#L30 | 11:30 |
odyssey4me | but it seems that you somehow have something else going on | 11:30 |
odyssey4me | Bofu2U what version of ansible are you using? | 11:30 |
Bofu2U | 1.9.4, chances are I just forgot to set something somewhere | 11:31 |
Bofu2U | best way to handle assigning would be via the group_binds, yes? | 11:32 |
*** adac has quit IRC | 11:32 | |
odyssey4me | hmm, hang on a minute | 11:33 |
Bofu2U | I have haproxy_hosts set in my openstack_user_config.yml | 11:34 |
Bofu2U | but not sure if I have the right association to galera_all via the others, etc. | 11:34 |
odyssey4me | andymccr ping? | 11:38 |
odyssey4me | Bofu2U yeah, although to my casual eye your config of infra_hosts should work, as that contains shared-infra_hosts, os-infra_hosts, etc... perhaps that's the issue | 11:39 |
odyssey4me | in the AIO we break it out a lot more https://github.com/openstack/openstack-ansible/blob/12.0.2/etc/openstack_deploy/openstack_user_config.yml.aio | 11:39 |
Bofu2U | I don't mind putting more in there | 11:39 |
Bofu2U | heh | 11:39 |
Bofu2U | ex: I don't have the group_binds on my br-storage either | 11:39 |
Bofu2U | like in that one | 11:39 |
odyssey4me | to be honest, this is the part of the config which confuses me... maybe as a test, add the 'shared-infra_hosts' group | 11:40 |
Bofu2U | let's give it a try | 11:41 |
Bofu2U | 1 sec | 11:41 |
odyssey4me | yeah, that'll hit you later - anything that needs to be able to talk over the storage network should have a group bind there afaik | 11:41 |
odyssey4me | for this particular issue, which is on your container/management network, your group binds look fine to me | 11:42 |
odyssey4me | Bofu2U what you can also do is query your inventory with scripts/inventory-manage.py (use the -g and -G options to see host/group membership) | 11:43 |
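A minimal sketch of that inventory check, assuming the usual /opt/openstack-ansible checkout (see the script's --help for the exact semantics of each flag on your version):

```bash
cd /opt/openstack-ansible
# Show group membership from both directions: hosts per group and groups per host.
python scripts/inventory-manage.py -G
python scripts/inventory-manage.py -g
```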
Bofu2U | heh | 11:43 |
Bofu2U | so you'll like this | 11:44 |
Bofu2U | on -G, there's galera | 11:44 |
Bofu2U | and galera_container | 11:44 |
Bofu2U | no galera_all | 11:44 |
odyssey4me | ah, so I suspect that we need some tweaks to the env.d files to fix that up | 11:45 |
odyssey4me | but I wouldn't do that in your environment right now - an AIO would be a better place to experiment there | 11:45 |
Bofu2U | well these are all clean slate servers | 11:46 |
Bofu2U | so w/e | 11:46 |
Bofu2U | I don't mind getting this to work, then wiping them and doing a start to finish with a working workflow | 11:46 |
odyssey4me | essentially I mean, rather adjust the openstack_user_config for now to get it right | 11:47 |
odyssey4me | that said, if you could register that as a bug it'd be grand | 11:47 |
Bofu2U | I'll make a list as I chug along :P | 11:47 |
odyssey4me | then you, or anyone else, can follow up on it later | 11:47 |
*** adac has joined #openstack-ansible | 11:47 | |
odyssey4me | usually best to create the bugs as you go - they may be finished by the time you're done ;) | 11:47 |
Bofu2U | touche | 11:47 |
Bofu2U | best place to report the bug? github or ...? | 11:48 |
Bofu2U | nvm got the launchpad | 11:48 |
odyssey4me | yup :) | 11:48 |
* odyssey4me points at IRC channel topic | 11:48 | |
Bofu2U | Probably would be good to put that in the github readme too ;) | 11:49 |
*** thegmanagain has joined #openstack-ansible | 11:50 | |
odyssey4me | hmm, I thought I had - it is indirectly linked: dev docs link -> contributor guidelines | 11:52 |
Bofu2U | yeah | 11:52 |
Bofu2U | I'm doing a quick PR for it to be put in the readme | 11:52 |
Bofu2U | just trying to make it easier heh | 11:52 |
odyssey4me | patches are always welcome :) | 11:53 |
Bofu2U | done | 11:54 |
*** mattoliverau has quit IRC | 11:55 | |
*** matt6434 has joined #openstack-ansible | 11:56 | |
odyssey4me | Bofu2U hmm, did you submit a review via gerrit - or a PR via github? | 11:58 |
Bofu2U | PR via github for the readme, finishing the gerrit now | 11:58 |
thegmanagain | Hi folks. I'm trying to create my first instance on openstack via an instance running ansible 2.1 and can't get it to work | 11:59 |
odyssey4me | Bofu2U http://docs.openstack.org/infra/manual/developers.html takes you through how to contribute to an openstack project | 11:59 |
thegmanagain | Mirantis OpenStack 5.1.1 | 11:59 |
odyssey4me | unfortunately github will just auto-reject your PR | 11:59 |
Bofu2U | oh good lord | 11:59 |
Bofu2U | that's a lot of work for 1 line in the readme heh | 11:59 |
odyssey4me | Bofu2U it is, but once you're in you can contribute more :) | 12:00 |
thegmanagain | I have a cloud.yaml file and get a correct response when I run "openstack --os-cloud my_cloud server list" | 12:00 |
odyssey4me | thegmanagain I'm confused - ansible 2.1 has not been released | 12:00 |
thegmanagain | ansible --version ansible 2.1.0 config file = configured module search path = Default w/o overrides | 12:01 |
odyssey4me | thegmanagain also, this is a channel for developers and users of https://github.com/openstack/openstack-ansible - while we could try to assist you, you may be better off asking in #openstack or in #ansible | 12:01 |
Bofu2U | odyssey4me: https://bugs.launchpad.net/openstack-ansible/+bug/1526292 done | 12:01 |
openstack | Launchpad bug 1526292 in openstack-ansible "infra_hosts definition doesn't set galera_all, fails on haproxy_install.yml" [Undecided,New] | 12:01 |
thegmanagain | git clone git://github.com/ansible/ansible.git | 12:01 |
thegmanagain | Ok thanks | 12:02 |
odyssey4me | afk for a bit | 12:02 |
odyssey4me | Bofu2U thanks :) | 12:02 |
Bofu2U | ofc | 12:02 |
*** thegmanagain has left #openstack-ansible | 12:07 | |
evrardjp | cloudnull, pong | 12:12 |
evrardjp | and hello everyone | 12:13 |
odyssey4me | o/ evrardjp | 12:13 |
*** fawadkhaliq has quit IRC | 12:13 | |
*** cemmason1 has joined #openstack-ansible | 12:14 | |
evrardjp | how is it? | 12:14 |
evrardjp | there was someone interested in designate, should I help? | 12:15 |
evrardjp | also, could this merge? https://review.openstack.org/#/c/249227/ | 12:16 |
evrardjp | this way I can focus on implementation | 12:16 |
evrardjp | thanks odyssey4me for having commented/validated already :) | 12:17 |
odyssey4me | evrardjp yeah, it looks like swati is offline | 12:19 |
odyssey4me | someone else was interested in helping too, perhaps you could try to make contact with them to collaborate on the work? | 12:19 |
evrardjp | I'm on holiday, so it's not mandatory... just wanted to make sure everything was alright | 12:20 |
*** fawadkhaliq has joined #openstack-ansible | 12:21 | |
*** fawadkhaliq has quit IRC | 12:23 | |
*** fawadkhaliq has joined #openstack-ansible | 12:23 | |
Bofu2U | ha, got to setup-infra and it fails on the first run. :) | 12:27 |
Bofu2U | File "/usr/local/lib/python2.7/dist-packages/ansible/runner/connection_plugins/ssh.py", line 44, in __init__ | 12:28 |
Bofu2U | self.ipv6 = ':' in self.host | 12:28 |
Bofu2U | TypeError: argument of type 'NoneType' is not iterable | 12:28 |
odyssey4me | Bofu2U oh dear, I'm a bit confused about why it's trying to select the ipv6 address | 12:31 |
Bofu2U | heh | 12:31 |
Bofu2U | looks like it's a dupe of https://bugs.launchpad.net/openstack-ansible/+bug/1477175 | 12:31 |
openstack | Launchpad bug 1477175 in openstack-ansible "Setup-infrastructure error with "NoneType" during Memcached install" [Undecided,Expired] | 12:31 |
Bofu2U | Confirmed what was said there, though - in the inventory the ansible_ssh_host is null | 12:33 |
*** admin0 has quit IRC | 12:34 | |
odyssey4me | hmm, but your ip_from_q is right - unless your inventory has evolved from a bad config before? | 12:35 |
Bofu2U | no it should be fine | 12:35 |
Bofu2U | hm. | 12:36 |
Bofu2U | running ifconfig from within one of the containers shows 10.0.3.183 -- shouldn't it be on one of the CIDR's I set? | 12:36 |
odyssey4me | Bofu2U so there will be a default eth0 address which lxc assigns | 12:37 |
odyssey4me | the addresses from the networks you define should be extra | 12:37 |
Bofu2U | ah ok | 12:37 |
odyssey4me | if the values aren't in your inventory, then they wouldn't be set though | 12:37 |
odyssey4me | so use the inventory-manage script to check things, or open the inventory json file and verify | 12:38 |
Bofu2U | http://pastebin.com/zjZfyFVR | 12:38 |
Bofu2U | that's what I'm worried about - the ansible_ssh_host | 12:38 |
Bofu2U | container_address is also null | 12:39 |
odyssey4me | yeah, if that's busted then the deployment of those containers will be broken | 12:39 |
Bofu2U | hm. What segment of this should I be trying to troubleshoot to figure out why those aren't being set? | 12:39 |
odyssey4me | I can't see why it's busted from your config though | 12:40 |
odyssey4me | this is directly from the inventory | 12:40 |
Bofu2U | yeah | 12:40 |
odyssey4me | so the best in this case is probably to blow away the containers, then remove them from your inventory, then regenerate the inventory | 12:40 |
Bofu2U | k | 12:40 |
Bofu2U | any chance there's a playbook that does that? ;) | 12:40 |
odyssey4me | openstack-ansible lxc-containers-destroy.yml | 12:41 |
odyssey4me | that'll blow away the containers | 12:41 |
*** markvoelker has joined #openstack-ansible | 12:41 | |
odyssey4me | I would guess that nothing else is sacred, so once that's done you can probably blow away the inventory.json and fact cache | 12:42 |
Bofu2U | rgr | 12:42 |
Bofu2U | on it now | 12:42 |
Bofu2U | then run setup-hosts again I'm assuming | 12:42 |
odyssey4me | you can then just execute: python playbooks/inventory/dynamic_inventory.py | 12:43 |
odyssey4me | that'll output the inventory so you can inspect it before going back down the rabbit hole | 12:43 |
odyssey4me | but yes, your playbooks would need to be executed from the start again | 12:43 |
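A condensed sketch of that reset sequence, assuming the stock paths (the inventory and fact-cache locations below are the usual defaults under /etc/openstack_deploy; adjust if yours differ):

```bash
cd /opt/openstack-ansible/playbooks
# Blow away the existing containers.
openstack-ansible lxc-containers-destroy.yml
# Remove the generated inventory and cached facts so they are rebuilt cleanly.
rm -f /etc/openstack_deploy/openstack_inventory.json
rm -rf /etc/openstack_deploy/ansible_facts
# Regenerate and inspect the inventory before re-running setup-hosts.yml.
python inventory/dynamic_inventory.py | less
```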
*** apuimedo has quit IRC | 12:45 | |
*** markvoelker has quit IRC | 12:46 | |
*** oneswig has joined #openstack-ansible | 12:46 | |
*** apuimedo has joined #openstack-ansible | 12:46 | |
Bofu2U | hm | 12:47 |
Bofu2U | even the python dynamic inventory output is showing a lot of null ansible_ssh_host values | 12:48 |
odyssey4me | in that case it's definitely a config issue | 12:48 |
*** admin0 has joined #openstack-ansible | 12:52 | |
*** cemmason1 has quit IRC | 12:54 | |
*** oneswig has quit IRC | 12:55 | |
*** woodard has joined #openstack-ansible | 12:56 | |
*** woodard has quit IRC | 12:56 | |
*** woodard has joined #openstack-ansible | 12:57 | |
Bofu2U | what about the is_metal flag | 13:03 |
Bofu2U | https://github.com/openstack/openstack-ansible/blob/e51ceaa127c2639d39a798c6dc9ee41fa3635d24/playbooks/inventory/dynamic_inventory.py#L157 | 13:03 |
*** fawadkhaliq has quit IRC | 13:03 | |
evrardjp | hello Bofu2U I just saw your previous paste | 13:04 |
*** markvoelker has joined #openstack-ansible | 13:04 | |
evrardjp | are you sure you've correctly setup the networks? | 13:05 |
Bofu2U | As far as I know :P | 13:05 |
evrardjp | could you drop me your config somewhere? (just obfuscate the ips if some are public...) | 13:05 |
Bofu2U | all private, no worries - 1 sec | 13:05 |
Bofu2U | http://pastebin.com/nzmZJsae | 13:06 |
odyssey4me | Bofu2U the is_metal flag defaults to false - it's there to allow you to have stuff deploy on the hardware instead of a container | 13:07 |
odyssey4me | Bofu2U is_metal is set to true for cinder-volume: https://github.com/openstack/openstack-ansible/blob/12.0.2/etc/openstack_deploy/env.d/cinder.yml#L62 | 13:08 |
odyssey4me | also nova-compute: https://github.com/openstack/openstack-ansible/blob/12.0.2/etc/openstack_deploy/env.d/nova.yml#L75 | 13:08 |
Bofu2U | ahhhh got ya | 13:09 |
Bofu2U | makes sense | 13:09 |
Bofu2U | evrardjp let me know if you need anything else and I can grab it | 13:09 |
odyssey4me | and swift {object,container,storage} https://github.com/openstack/openstack-ansible/blob/12.0.2/etc/openstack_deploy/env.d/swift.yml#L55 | 13:09 |
odyssey4me | but that flag in those files allows you to change things up if you want to | 13:09 |
odyssey4me | for example, if you're only using ceph/nfs for cinder then you may as well have cinder-volume in a container on the infra hosts | 13:10 |
Bofu2U | got ya got ya | 13:11 |
Bofu2U | yeah that makes sense | 13:11 |
evrardjp | Bofu2U, provider_networks are part of the global_overrides | 13:17 |
Bofu2U | what | 13:17 |
Bofu2U | son of a | 13:17 |
evrardjp | also, you can enclose the IPs in used_ips in double quotes | 13:17 |
*** prometheanfire has quit IRC | 13:18 | |
evrardjp | like - "10.102.0.11,10.102.0.13" | 13:18 |
evrardjp | so the p of your provider_networks should be at the same level as the t of tunnel_bridge | 13:18 |
evrardjp | and everything beneath it should follow | 13:19 |
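In other words the fix is purely one of indentation in openstack_user_config.yml; a trimmed sketch of the intended nesting (bridge names, queue names and addresses here are illustrative, only the structure matters):

```yaml
used_ips:
  - "10.102.0.11,10.102.0.13"

global_overrides:
  management_bridge: br-mgmt
  tunnel_bridge: br-vxlan
  provider_networks:            # same indentation level as tunnel_bridge
    - network:
        container_bridge: br-mgmt
        container_type: veth
        container_interface: eth1
        ip_from_q: container
        type: raw
        group_binds:
          - all_containers
          - hosts
        is_container_address: true
        is_ssh_address: true
```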
*** prometheanfire has joined #openstack-ansible | 13:19 | |
evrardjp | also I wouldn't use infra_hosts as you already have the rest, but that's another topic | 13:20 |
evrardjp | (you have shared-infra, repo-infra os-infra ...) | 13:20 |
Bofu2U | yeah, I just did that because of the crap I ran into with galera earlier | 13:20 |
evrardjp | it's normally not needed | 13:21 |
evrardjp | let's check if your inventory is fine now :) | 13:21 |
Bofu2U | the dynamic inventory shows what I'm assuming will be the right IP's | 13:21 |
Bofu2U | I"m running setup-hosts again and will see how that ends up. | 13:22 |
Bofu2U | :) | 13:22 |
evrardjp | could you just redrop the config somewhere, to be sure? | 13:26 |
Bofu2U | yeah 1min - it's literally that exact one but with the provider & children indented to match | 13:30 |
Bofu2U | and running the dynamic_inventory file did show container ips | 13:30 |
evrardjp | you should check with the inventory-manage script (-l) | 13:32 |
evrardjp | there should be no blanks and all the components you're looking for | 13:32 |
evrardjp | (it's easier on the eyes than pure json) | 13:33 |
odyssey4me | Bofu2U it's plausible that our splitting of the groups earlier was due to some bustedness in the config and wasn't necessary | 13:38 |
Bofu2U | possibly | 13:38 |
Bofu2U | looks like the containers failed on "wait for ssh to be available" | 13:39 |
Bofu2U | I guess now that the network config is technically working it means it's setup wrong ;) | 13:42 |
*** shausy has quit IRC | 13:47 | |
*** tlian has joined #openstack-ansible | 13:48 | |
Bofu2U | isn't the lxc-net-bridge supposed to have br-mgmt as a bridge_port? | 13:50 |
evrardjp | nope | 13:52 |
evrardjp | lxc bridge is lxcbr0 | 13:52 |
evrardjp | it's natted by default IIRC | 13:53 |
Bofu2U | got ya | 13:53 |
Bofu2U | yeah upon running info on a container it has the standard 10.0.3.X IP and then an IP on the correct range after | 13:53 |
evrardjp | it's not necessarily used, depends on your config | 13:53 |
Bofu2U | local master host can see those IP's of all containers on it | 13:53 |
Bofu2U | but others can't | 13:53 |
Bofu2U | so something with the routing | 13:54 |
evrardjp | without config/inventory.json I'll have issues to help you :p | 13:54 |
Bofu2U | 1 sec :P | 13:54 |
mhayden | i know it's a little OT, but does anyone have an opinion on zabbix? | 13:54 |
evrardjp | we have it mhayden | 13:54 |
Bofu2U | mhayden: compared to something or just in general? | 13:54 |
evrardjp | in production | 13:55 |
mhayden | Bofu2U: in general | 13:55 |
evrardjp | in general I don't like it | 13:55 |
mhayden | evrardjp: ah, okay -- the community version or paid? | 13:55 |
mhayden | haha | 13:55 |
* mhayden has not yet found a monitoring product he loves | 13:55 | |
evrardjp | but I'm not really the ops guy, so I didn't get to decide | 13:55 |
evrardjp | monitoring or event management/statistics aggregation/...? | 13:55 |
evrardjp | we are using the community, because we don't really see a point of the commercial right now | 13:56 |
Bofu2U | after I grab this file for evrardjp ill rant for a min about my experiences with it :P | 13:56 |
cloudnull | Morning | 13:56 |
evrardjp | but my colleague was at the latest zabbix conference if you want to talk with him | 13:56 |
Bofu2U | evrardjp: http://pastebin.com/QiQq3Cf7 | 13:56 |
evrardjp | he could give you insights of the future of zabbix | 13:57 |
evrardjp | o/ cloudnull | 13:57 |
Bofu2U | mhayden: When I set it up it's nice for certain things and others just seemed like a pain to deal with | 13:57 |
Bofu2U | example - I pretty much just use it for alert monitoring on CPU load, temperature | 13:57 |
evrardjp | mhayden, we didn't find something that wasn't workaroundable | 13:57 |
evrardjp | (with zabbix) | 13:57 |
Bofu2U | ^ is pretty much my experience with it | 13:57 |
Bofu2U | It may not work the first time, but with a few tweaks and possibly some research you'll have it running | 13:58 |
Bofu2U | and only specific subsets. The community has good jumping off points to get you started | 13:58 |
Bofu2U | ex: it monitored my juniper SRX just fine, but the EX for some reason had some problems | 13:58 |
evrardjp | yeah, but it could do far more complex scenarios for data gathering/aggregation/reporting | 13:58 |
Bofu2U | changed a few ID's and it was completely fine after that | 13:58 |
cloudnull | o/ evrardjp I wanted to ping you about haproxy. I'm doing the irr work and didn't want to touch our haproxy role if we have a better one we can move to in the nearish future. | 13:59 |
Bofu2U | and yeah, the rolling averages are nice. | 13:59 |
Bofu2U | I use a combo of observium and zabbix tbh mhayden | 13:59 |
evrardjp | cloudnull, I didn't get the chance to work on it. Like I said in the past, the haproxy role I have is working | 14:00 |
*** fawadkhaliq has joined #openstack-ansible | 14:00 | |
evrardjp | we just need to add the convenience tool that is linked to the inventory (the paste you sent me) | 14:00 |
cloudnull | OK. So we should be able to move to that role without much fuss. | 14:01 |
evrardjp | mmm, not that easily, because it's not really backwards compatible :p | 14:01 |
evrardjp | some wiring should be done | 14:01 |
evrardjp | and docs | 14:01 |
cloudnull | Ok. | 14:01 |
evrardjp | I can work on this | 14:01 |
cloudnull | Well I'll leave your haproxy role in tree for now and circle back on it a bit later. | 14:02 |
cloudnull | I don't want to pull it into its own repo if we can get to the better role. | 14:02 |
cloudnull | But no worries or pressure. It's not a blocker. | 14:03 |
cloudnull | I just wanted to ping you. | 14:03 |
cloudnull | Because I knew you had some bits in-flight. | 14:03 |
mhayden | Bofu2U: thanks | 14:03 |
evrardjp | cloudnull, yeah sure, no problem :p | 14:03 |
Bofu2U | mhayden no problem | 14:03 |
evrardjp | I'd rather that way :p | 14:03 |
Bofu2U | if your intention is to just have something to check overall health and watch it | 14:03 |
Bofu2U | observium is nice | 14:03 |
Bofu2U | if you need triggers, alerts, thresholds and all of that | 14:04 |
Bofu2U | zabbix, etc. | 14:04 |
odyssey4me | cloudnull I'm doing a tweak to the ldap/domains config for the keystone role, but I'm a bit stuck on something | 14:04 |
evrardjp | mhayden, new relic is kinda nice :p | 14:04 |
cloudnull | Odyssey4me What's going on? | 14:04 |
cloudnull | +1 for newrelic ;) | 14:04 |
evrardjp | mhayden, otherwise, at home I'm using collectd for collection, influxdb for graphing, and I'll check out the new thingy the influxdb team has released for monitoring, but I don't really care about that :p | 14:05 |
odyssey4me | cloudnull given http://pastebin.com/5SdDGdmF I'd like to use with_items to iterate over each list item, but I need the value of the item: eg list_item1 | 14:05 |
odyssey4me | perhaps I should structure it slightly differently, I'm open to options | 14:05 |
evrardjp | with_list? | 14:05 |
odyssey4me | of course I could also use key: value pairs all the way down | 14:05 |
odyssey4me | but I'm trying to keep it less verbose | 14:05 |
evrardjp | if you could reorganise, it would be far more elegant | 14:06 |
cloudnull | Adding multi domain support ? | 14:06 |
mhayden | evrardjp: ah, influx is on my list of "things to look at sometime later when i get that free time" | 14:06 |
odyssey4me | evrardjp I need both the 'key' (ie list_item1) and the 'value' (ie the full dict of dict1) | 14:06 |
odyssey4me | cloudnull yep - busy working on the ldap gate, and need this to make things sane | 14:06 |
odyssey4me | using LDAP for the default domain is dumb | 14:07 |
evrardjp | odyssey4me, no issue | 14:07 |
*** sdake_ has joined #openstack-ansible | 14:07 | |
cloudnull | +1 | 14:07 |
*** sdake has quit IRC | 14:07 | |
evrardjp | +1 too :p | 14:07 |
cloudnull | I think with_dict is going to be the best way | 14:07 |
evrardjp | odyssey4me, I'd move dict1 dict2 UNDER list_items | 14:07 |
cloudnull | And using a kvs is how to best achieve it. | 14:07 |
odyssey4me | we currently only provide the ability to implement it for the default domain, which sucks bad | 14:07 |
cloudnull | I think we keep the mechanism to drop domain specific config but name the specific domains according to some value in the main dict. | 14:09 |
odyssey4me | evrardjp cloudnull ie http://pastebin.com/92kb11VJ ? | 14:09 |
evrardjp | http://docs.ansible.com/ansible/playbooks_loops.html#looping-over-subelements | 14:09 |
odyssey4me | cloudnull yep, that's what I'm doing | 14:09 |
Bofu2U | evrardjp: the IPs in the inventory for ansible_ssh_host and container_address are supposed to be one and the same, correct? | 14:10 |
cloudnull | The trick will be to keep it backwards compatible, or create some integration script for upgrading the data structure change. | 14:10 |
odyssey4me | ok, let me show you a real config | 14:10 |
odyssey4me | http://pastebin.com/jP1BvBda | 14:10 |
evrardjp | odyssey4me, do you need the item in the list? or could it be just a dict? | 14:10 |
odyssey4me | 'Users' is the name of the domain | 14:10 |
odyssey4me | there will be zero or more domains | 14:10 |
odyssey4me | each domain must have one dict (and only one) under it | 14:11 |
evrardjp | I'd remove "- " before Users | 14:11 |
cloudnull | ^ | 14:11 |
odyssey4me | ok, then how do I loop over it? | 14:11 |
odyssey4me | with_dict? | 14:11 |
cloudnull | And add a key for name or similar. | 14:11 |
evrardjp | odyssey4me, check the link with subelements | 14:12 |
evrardjp | it will help you | 14:12 |
evrardjp | Bofu2U, it depends | 14:12 |
odyssey4me | evrardjp I know that doc entry, every time I read it my mind turns to mush | 14:12 |
cloudnull | Maybe even remove ldap and simply call it options, which should encompass all options available in the keystone config. | 14:12 |
odyssey4me | cloudnull 'ldap' is important - it's used for the section | 14:13 |
odyssey4me | for another domain it could be 'sql' | 14:13 |
cloudnull | Right, but that could simply be a key | 14:13 |
odyssey4me | yeah, I was trying to get away from making it all key: value pairs... but perhaps it's better not to | 14:14 |
cloudnull | options: {ldap:..., driver:..} | 14:14 |
cloudnull | Idk what's best tbh | 14:14 |
evrardjp | odyssey4me, mixing items and dicts with ansible is kinda a pain sometimes, but it's doable | 14:15 |
cloudnull | Just pondering | 14:15 |
evrardjp | just think what's best for you :) | 14:15 |
odyssey4me | ok, lemme try with_dict... it seems to be giving me what I want | 14:16 |
odyssey4me | where I had it wrong was that I was listing the dicts | 14:16 |
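A minimal sketch of the pattern the three of them converge on (the variable name, file names and LDAP options below are invented for illustration): with a plain mapping keyed by domain name, with_dict hands each iteration the domain name as item.key and its settings as item.value:

```yaml
# Hypothetical variable: one entry per domain, no wrapping list.
keystone_domains:
  Users:
    driver: ldap
    ldap:
      url: "ldap://ldap.example.com"
      user_tree_dn: "ou=Users,dc=example,dc=com"

# Hypothetical task consuming it.
- name: Write per-domain keystone configuration
  template:
    src: keystone.domain.conf.j2
    dest: "/etc/keystone/domains/keystone.{{ item.key }}.conf"
  with_dict: keystone_domains
```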
evrardjp | Bofu2U IIRC, the container addresses can hold a storage address, a management address, etc. | 14:16 |
odyssey4me | thanks :) I'll give you feedback shortly | 14:16 |
Bofu2U | yeah | 14:16 |
evrardjp | Bofu2U, one of it should be the ssh address | 14:16 |
evrardjp | which is, in general the mgmt one | 14:16 |
Bofu2U | correct | 14:16 |
Bofu2U | yeah | 14:16 |
Bofu2U | and that's all coming back as correct | 14:17 |
evrardjp | ok | 14:17 |
evrardjp | could you do a ansible -m ping all ? | 14:17 |
evrardjp | just to make sure you can connect to all your containers from the deploy node | 14:17 |
Bofu2U | I can't, that's the problem | 14:17 |
evrardjp | you need to be in the appropriate folder | 14:17 |
evrardjp | then you have ssh issues | 14:18 |
Bofu2U | the containers can't be connected to outside of the machine it's on | 14:18 |
Bofu2U | it's also the same network that the machines are currently on, connected to, and talking over | 14:18 |
evrardjp | is the management network reachable from the deploy nodes? | 14:18 |
Bofu2U | so my guess is something that has to do with the route | 14:18 |
Bofu2U | yep | 14:18 |
Bofu2U | that's what I'm deploying with through ansible | 14:19 |
evrardjp | oh you could do fancy stuff with ansible :) | 14:19 |
Bofu2U | hehe, not me at this point ;) | 14:19 |
Bofu2U | the IP's on 10.104.0.X are the bare metal nodes | 14:19 |
Bofu2U | management network, where the containers are *supposed to be* binding to as well | 14:19 |
evrardjp | wait | 14:20 |
evrardjp | I'm interested about the wiring you've done | 14:20 |
evrardjp | on your nodes | 14:20 |
Bofu2U | bonded NICs, 4 VLANs | 14:20 |
*** targon has joined #openstack-ansible | 14:20 | |
odyssey4me | cloudnull evrardjp heh, that totally worked :) patch incoming | 14:21 |
Bofu2U | bond0.101-bond0.104 | 14:21 |
evrardjp | ok | 14:21 |
evrardjp | ouch | 14:21 |
Bofu2U | I can do whatever I want tbh | 14:21 |
Bofu2U | That's how I had it setup for Fuel | 14:21 |
evrardjp | forget Fuel :p | 14:21 |
evrardjp | I dropped it myself :p | 14:21 |
Bofu2U | That's what I'm trying to do ;) | 14:21 |
Bofu2U | Hence I'm here lol | 14:22 |
evrardjp | :) | 14:22 |
Bofu2U | this is all physical hardware 10 feet from me | 14:22 |
Bofu2U | including the switches, routers | 14:22 |
Bofu2U | so I can change literally anything | 14:22 |
evrardjp | question | 14:22 |
evrardjp | Bofu2U, I'm concerned about the host using the same network as inside the cloud | 14:23 |
Bofu2U | Not a problem, I can change it | 14:24 |
evrardjp | you're bridging the vlan interfaces, right? | 14:24 |
evrardjp | or you're bridging the NICs? | 14:24 |
Bofu2U | yes | 14:24 |
Bofu2U | the NICs | 14:24 |
Bofu2U | eth0/eth1 into bond0 | 14:24 |
Bofu2U | then bond0.102 is bridged into br-mgmt | 14:24 |
evrardjp | could you drop your /etc/network/interfaces somewhere? | 14:25 |
Bofu2U | yeah 1 sec | 14:26 |
Bofu2U | I'll grab it from controller1 | 14:26 |
odyssey4me | afk for a bit | 14:26 |
Bofu2U | http://pastebin.com/3EHytSwz | 14:27 |
Bofu2U | just noticed the duplication at the bottom as well :| sigh | 14:27 |
*** apuimedo has quit IRC | 14:29 | |
evrardjp | FYI at some point you'll really want a larger mtu | 14:30 |
Bofu2U | more than 9k? | 14:30 |
evrardjp | your bond has 1500 | 14:30 |
evrardjp | the NICs have 1500 | 14:30 |
Bofu2U | er yeah | 14:30 |
Bofu2U | vlans have 9k | 14:31 |
evrardjp | why not setting the links into 9k too then? | 14:31 |
Bofu2U | I can, must have been reset by the provisioner | 14:31 |
Bofu2U | :( | 14:31 |
*** apuimedo has joined #openstack-ansible | 14:31 | |
evrardjp | it's just to avoid you weird issues afterwards :) | 14:32 |
Bofu2U | of course | 14:32 |
evrardjp | your provisioner is doing weird stuff | 14:32 |
Bofu2U | ... couldn't I do that with ansible to run on all of the nodes? lol | 14:32 |
evrardjp | I did, but it's tricky and not part of openstack-ansible :) | 14:32 |
Bofu2U | touche | 14:32 |
evrardjp | tricky because you can really remove the branch you're standing on :p | 14:32 |
Bofu2U | yeah | 14:33 |
Bofu2U | all good | 14:33 |
evrardjp | anyway, I wouldn't mix configuration of bonding modes too | 14:33 |
evrardjp | I wouldn't set the hwaddress for the bond, especially in balance-xor mode! | 14:33 |
evrardjp | I'll reply to your paste :) | 14:34 |
evrardjp | it's easier | 14:34 |
Bofu2U | appreciated :) | 14:34 |
openstackgerrit | Major Hayden proposed openstack/openstack-ansible: [WIP] Testing parallel playbooks https://review.openstack.org/253706 | 14:34 |
evrardjp | Bofu2U, all 4 nics in one link aggregation? not 2? | 14:38 |
Bofu2U | correct | 14:38 |
Bofu2U | my compute nodes only have 2 NICs | 14:38 |
Bofu2U | wanted to unify with just bond0 | 14:38 |
*** javeriak has quit IRC | 14:38 | |
evrardjp | ok | 14:38 |
Bofu2U | sidenote did you want me to gist that instead | 14:39 |
Bofu2U | so you can literally reply to it? | 14:39 |
*** fawadkhaliq has quit IRC | 14:39 | |
evrardjp | I'll do something generic with master-backup and you can adapt it afterwards when you feel more confident | 14:39 |
Bofu2U | yeah that's fine | 14:39 |
Bofu2U | I know I'll saturate 2-4Gbps so I wanted to make sure it had as much available as possible | 14:40 |
Bofu2U | heh | 14:40 |
*** dslevin_ has quit IRC | 14:42 | |
*** dslevin has quit IRC | 14:42 | |
evrardjp | Bofu2U, could you tell me which vlan is for what? | 14:44 |
*** adac has quit IRC | 14:45 | |
evrardjp | it's in your sourced file I guess | 14:45 |
Bofu2U | 101 tunnel | 14:45 |
Bofu2U | 102 container | 14:45 |
Bofu2U | 103 storage | 14:45 |
Bofu2U | 104 public | 14:45 |
evrardjp | k | 14:45 |
*** apuimedo has quit IRC | 14:45 | |
evrardjp | Bofu2U, you need vlan tenant isolation for your customers or just vxlan? | 14:47 |
Bofu2U | either/or | 14:47 |
*** apuimedo has joined #openstack-ansible | 14:47 | |
Bofu2U | vxlan is fine | 14:47 |
Sam-I-Am | you cant do vlan here | 14:48 |
Sam-I-Am | because vlan tags are already used on the sub-ints | 14:48 |
evrardjp | that was my concern Sam-I-Am :) | 14:49 |
*** adac has joined #openstack-ansible | 14:50 | |
evrardjp | he could, but it needs to be done carefully | 14:50 |
Bofu2U | I'll do whatever way makes my and your life easier :P | 14:50 |
Sam-I-Am | there's no q-in-q support | 14:50 |
evrardjp | Bofu2U, management network is untagged on the host? | 14:51 |
*** KLevenstein has joined #openstack-ansible | 14:51 | |
Bofu2U | correct | 14:52 |
*** apuimedo has quit IRC | 14:52 | |
evrardjp | "do you have something untagged on the host" would be more correct | 14:52 |
Bofu2U | 10.20.0.x | 14:52 |
evrardjp | I'll call that "host network" here | 14:52 |
*** apuimedo has joined #openstack-ansible | 14:52 | |
cloudnull | if any cores are around can we please bang these through to help out our CI brethren https://review.openstack.org/#/q/status:open+branch:master+topic:lint-jobs,n,z | 14:55 |
cloudnull | and https://review.openstack.org/#/c/256016/ | 14:56 |
mattt | cloudnull: looking at the CI-related ones | 14:56 |
mattt | (if anyone wants to peep the last review) | 14:57 |
evrardjp | Bofu2U, ok here is what I drafted for you | 14:59 |
evrardjp | http://pastebin.com/8wwyR2g1 | 14:59 |
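For readers who can't reach the pastebin: the following is not evrardjp's actual draft, just a generic sketch of the kind of stanza under discussion, covering only the container/management path from the VLAN mapping above (addresses illustrative, MTU raised to match the links as recommended):

```
# /etc/network/interfaces fragment (illustrative)
auto bond0.102
iface bond0.102 inet manual
    vlan-raw-device bond0
    mtu 9000

auto br-mgmt
iface br-mgmt inet static
    bridge_ports bond0.102
    bridge_stp off
    bridge_fd 0
    mtu 9000
    address 10.102.0.11      # this host's container/management address
    netmask 255.255.255.0
```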
cloudnull | yes the last one is the initial gate for the galera_server role https://review.openstack.org/#/c/256016/ which relates to this change in the CI systems https://review.openstack.org/#/c/257755 | 14:59 |
Bofu2U | evrardjp that looks perfect | 15:01 |
evrardjp | Bofu2U, ok, the mtu was missing in some places, but this should get you basic networking that should work | 15:01 |
cloudnull | tyvm btw mattt | 15:01 |
evrardjp | copy that to all your hosts, edit appropriately, pray and ifup/down | 15:01 |
Bofu2U | trying the first one now | 15:01 |
Bofu2U | fingers crossed | 15:01 |
evrardjp | ifdown/ifup/reboot because ifdown/ifup will fail | 15:01 |
evrardjp | as usual :p | 15:02 |
cloudnull | hahaha | 15:02 |
evrardjp | then connect to the node using your untagged interface (10.20.0.x) | 15:02 |
* Bofu2U fingers crossed | 15:05 | |
*** egonzalez has quit IRC | 15:05 | |
*** dslevin has joined #openstack-ansible | 15:06 | |
*** apuimedo has quit IRC | 15:06 | |
*** Mudpuppy has joined #openstack-ansible | 15:07 | |
*** apuimedo has joined #openstack-ansible | 15:07 | |
Bofu2U | pinging but no SSH, going to step away for a bit before I go insane :P | 15:10 |
evrardjp | ok | 15:11 |
cloudnull | Bofu2U: was that the vm instance ping'ing but not ssh ? | 15:11 |
Bofu2U | bare metal | 15:11 |
Bofu2U | it's back up now, just took a bit | 15:12 |
cloudnull | sorry lost some scroll back | 15:12 |
Bofu2U | no worries | 15:12 |
cloudnull | kk | 15:12 |
Bofu2U | it's back up and working fully though | 15:12 |
mattt | odyssey4me: you there ? | 15:12 |
Bofu2U | so that's always good lol | 15:12 |
evrardjp | yeah, give it a few minutes to start everything | 15:12 |
cloudnull | mattt: i think he's afk a bit | 15:12 |
mattt | cloudnull: ah ok | 15:12 |
evrardjp | the containers will definitely take a while to boot Bofu2U | 15:12 |
mattt | cloudnull: hey, regarding these reviews, https://review.openstack.org/#/c/257773/2/tox.ini for example | 15:12 |
evrardjp | you can check with lxc command line | 15:13 |
cloudnull | mattt: yes ? | 15:13 |
mattt | cloudnull: i'm not familiar w/ this tox testing stuff, does this assume you're not running these tests on your local workstation ? | 15:13 |
*** sigmavirus24_awa is now known as sigmavirus24 | 15:14 | |
mattt | cloudnull: i'm just wondering what the workflow is for a developer who wants to maybe do some linting locally | 15:14 |
cloudnull | no. they work on a local workstation, however if you ran the functional part it would pollute some things | 15:14 |
evrardjp | tox is basically a pip-aware job runner, you can run these on your workstation | 15:14 |
*** apuimedo has quit IRC | 15:14 | |
mattt | cloudnull: yeah, so i was wondering if the ansible-functional should be removed from envlist ? | 15:14 |
mattt | because that is quite dangerous, no? | 15:14 |
*** oneswig has joined #openstack-ansible | 15:15 | |
*** apuimedo has joined #openstack-ansible | 15:15 | |
cloudnull | i guess it could be. However, i think infra wanted a single combined job for all ansible tests | 15:15 |
evrardjp | yup it could | 15:15 |
evrardjp | sorry for commenting :p | 15:15 |
* cloudnull goes to read the infra convo from yesterday | 15:16 | |
cloudnull | evrardjp: never be sorry | 15:16 |
mattt | evrardjp: sorry not sorry? :) | 15:16 |
evrardjp | dammit! | 15:16 |
evrardjp | it proves there is room for improvement :) | 15:17 |
mattt | cloudnull: based on the commit message in the review it sounds like pep8/bashate are to be merged | 15:17 |
mattt | not merge every job into a single run? | 15:18 |
mattt | s/job/test/ ? | 15:18 |
cloudnull | possibly. it seems odyssey4me based all of the commits on https://review.openstack.org/#/c/257536/4 which came from AJaeger | 15:18 |
mattt | i mentioned to odyssey4me a few weeks back, i guess i wanted to just express some concern that we don't run the functional test out of the box | 15:19 |
cloudnull | based on http://eavesdrop.openstack.org/irclogs/%23openstack-infra/%23openstack-infra.2015-12-14.log.html#t2015-12-14T18:37:29 | 15:19 |
mattt | ie. i don't know if it's standard practice to just blindly run 'tox' while dev'ing on a repo | 15:19 |
*** oneswig has quit IRC | 15:19 | |
cloudnull | I'm actually good with saying that we leave the functional testing off of the default tox test | 15:20 |
mattt | cloudnull: i'll check back w/ odyssey4me once he's online, but otherwise i'm assuming these changes are all fine :) | 15:20 |
cloudnull | it makes sense to me that you wouldn't want ansible running things on your local workstation without instructing it to do so | 15:20 |
cloudnull | but idk tbh . | 15:21 |
mattt | cloudnull: cool, well thanks anyway :) | 15:21 |
mattt | (but i do generally agree w/ your sentiment) | 15:21 |
cloudnull | sorry . im useless. | 15:21 |
evrardjp | who runs tox on their host for openstack stuff, even in a venv? | 15:21 |
mattt | cloudnull: bah! | 15:21 |
mattt | what is it with everyone all apologetic today :) | 15:21 |
cloudnull | its tuesday | 15:21 |
Bofu2U | im sorry I don't have anything to apologize for yet mattt | 15:21 |
Bofu2U | but I'll work on it. | 15:21 |
cloudnull | hahahahaa | 15:21 |
mattt | :P | 15:22 |
evrardjp | :) | 15:22 |
*** mattronix_ has joined #openstack-ansible | 15:23 | |
bgmccollum | Bofu2U: i see what you did there | 15:24 |
bgmccollum | i mean...im sorry to say...i see what you did there... | 15:24 |
*** baker has joined #openstack-ansible | 15:24 | |
Bofu2U | I'm sorry I didn't make it more obvious for you. It's another thing I'll work on, I promise. | 15:25 |
Bofu2U | <3 | 15:25 |
cloudnull | look at all the helpers. it brings a tear to my eye... | 15:25 |
*** mattronix has quit IRC | 15:25 | |
Bofu2U | you know what brings a tear to my eye? the amount of red coming from my ansible log over the last 5 hours | 15:26 |
Bofu2U | :| | 15:26 |
bgmccollum | Bofu2U: turn off colors...problem solved | 15:26 |
Bofu2U | the servers hate me ;( | 15:26 |
cloudnull | new OpenStack bitterness level unlocked | 15:26 |
Bofu2U | HEY! you with your solutions! | 15:26 |
Bofu2U | quiet! | 15:26 |
cloudnull | hahahaha | 15:26 |
Bofu2U | going to start redirecting to /dev/null | 15:26 |
Bofu2U | schrodingers ansible | 15:27 |
bgmccollum | nothing to see here | 15:27 |
cloudnull | I seriously LOLd for a moment there. | 15:27 |
Bofu2U | now I have to apologize for pulling you away from your standard emotional state. | 15:27 |
Bofu2U | I apologize. | 15:27 |
Bofu2U | ok im done now. | 15:27 |
*** mattronix_ has quit IRC | 15:28 | |
cloudnull | Bofu2U: im totally late to the party but what is making you see red ? | 15:28 |
Bofu2U | ...sorry | 15:28 |
*** mattronix has joined #openstack-ansible | 15:28 | |
Bofu2U | uh - literally almost anything you can think of so far | 15:28 |
evrardjp | can't ping his hosts :p | 15:28 |
Bofu2U | been working my way through the guide/tutorial | 15:28 |
cloudnull | not that you want to say all the things again . | 15:28 |
* cloudnull goes to see if the logs are online already | 15:29 | |
Bofu2U | and since I'm not using AIO there's a lot of subtle changes throughout the process that make me want to place said head on said wall rapidly. | 15:29 |
evrardjp | Bofu2U, oh you using AIO? | 15:29 |
evrardjp | didn't know that! | 15:29 |
Bofu2U | no, not using AIO | 15:29 |
evrardjp | my bad | 15:29 |
Bofu2U | hehe | 15:29 |
Bofu2U | using my 11 physical machines sitting in the server closet | 15:30 |
evrardjp | yeah :) | 15:30 |
Bofu2U | screaming as I provision them over, and over, and over | 15:30 |
evrardjp | why do you reprovision them? | 15:30 |
Bofu2U | because at some point I give up trying to fix and just want to start fresh | 15:30 |
evrardjp | seems a right approach, it should be fast that way | 15:30 |
Bofu2U | yeah. MAAS makes it a little less hectic. | 15:31 |
Bofu2U | (except for the interfaces file that you just helped me replace) | 15:31 |
evrardjp | never used ubuntu's MAAS | 15:32 |
evrardjp | is it nice for partitioning disks? | 15:32 |
Bofu2U | when it does it, yes | 15:32 |
Bofu2U | lol | 15:32 |
evrardjp | :p | 15:32 |
Bofu2U | I've had a problem with sticky MBRs | 15:32 |
adac | Maybe here http://docs.openstack.org/developer/openstack-ansible/install-guide/ops-galera-start.html, in step 1 under "This command results in a cluster containing a single node. The wsrep_cluster_size value shows the number of nodes in the cluster.", it should be mentioned which command produces the example output in the box, which would be: ansible galera_container -m shell -a "mysql \ | 15:32 |
adac | -h localhost -e 'show status like \"%wsrep_cluster_%\";'" | 15:32 |
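Joined onto one line, the command adac is pasting looks like this (run from the deployment host where the openstack-ansible inventory is available):

    ansible galera_container -m shell -a "mysql -h localhost -e 'show status like \"%wsrep_cluster_%\";'"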
Bofu2U | Going to take a break and get back to it in a bit. Clear my mind and all of that. | 15:33 |
Bofu2U | I sincerely appreciate all of the help I've received thus far, thank you all. | 15:33 |
Bofu2U | ... and sorry. For everything. | 15:33 |
Bofu2U | ^ there you go mattt actual apology. | 15:34 |
evrardjp | mattt, is it now the time for a "yw"? | 15:35 |
evrardjp | (english 101, someone?) | 15:35 |
mattt | you guyz | 15:37 |
cloudnull | haha | 15:39 |
cloudnull | adac: that'd be a good add | 15:39 |
adac | cloudnull, :-) | 15:40 |
cloudnull | adac: also https://review.openstack.org/#/c/256016/ | 15:40 |
cloudnull | we added some tests | 15:40 |
cloudnull | https://github.com/openstack/openstack-ansible-galera_server/blob/master/tests/test.yml#L130-L144 | 15:41 |
cloudnull | useful commands | 15:41 |
evrardjp | I'm not familiar with galera, what does wsrep_incoming_addresses mean? | 15:42 |
cloudnull | which if run through ansible, would give you a big picture that all nodes had the same data | 15:42 |
cloudnull | evrardjp: thats a list of all nodes in the cluster. | 15:42 |
cloudnull | <IP>:<PORT> | 15:42 |
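A hedged example of what that looks like; the addresses below are made up, but the <IP>:<PORT> shape is the point:

    mysql -h localhost -e "SHOW STATUS LIKE 'wsrep_incoming_addresses';"
    # wsrep_incoming_addresses | 172.29.236.11:3306,172.29.236.12:3306,172.29.236.13:3306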
evrardjp | ok | 15:42 |
adac | cloudnull, I currently have the AIO installed, how can I update this to the newest openstack-ansible version? | 15:43 |
cloudnull | was it off of master? | 15:43 |
cloudnull | or another tag? | 15:43 |
adac | cloudnull, I was installing it via this curl one liner | 15:43 |
cloudnull | ah. | 15:44 |
cloudnull | so go to /opt/openstack-ansible | 15:44 |
adac | yepp there I am :) | 15:44 |
cloudnull | if there are any changes you may need to stash them | 15:44 |
cloudnull | then git pull origin master | 15:44 |
cloudnull | then cd playbooks | 15:45 |
cloudnull | openstack-ansible setup-everything.yml | 15:45 |
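Collected into one place, the update sequence cloudnull just walked through (the git stash step only applies if there are local changes to keep aside):

    cd /opt/openstack-ansible
    git stash                      # only if you have local changes
    git pull origin master
    cd playbooks
    openstack-ansible setup-everything.yml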
adac | kk thanks a lot! | 15:45 |
adac | would this update shut down everything at once or one thing after another so there is no real interruption? | 15:45 |
cloudnull | itll iterate through the stack | 15:46 |
cloudnull | you'll see api service interruptions as it does it, however if you have running vms they should all remain online . | 15:46 |
adac | cloudnull, So basically it would not shut down my virtual machines or something like that (I'm not in production with that so it would be fine if it would) | 15:47 |
cloudnull | now this will be a deployment which was kicked off using master, i.e. Mitaka, so guaranteeing uptime may be a hard thing to say | 15:47 |
cloudnull | it will not shut down the vms | 15:47 |
cloudnull | or take away networks | 15:47 |
adac | cloudnull, so cool :) | 15:47 |
*** alextricity has quit IRC | 15:47 | |
adac | cloudnull, when I try to start the setup-everything.yml I get: https://gist.github.com/anonymous/964c5062173189009a01 | 15:50 |
cloudnull | ah so your deployment was a bit ago. | 15:51 |
cloudnull | cd ../ | 15:51 |
cloudnull | ./scripts/bootstrap-ansible.sh | 15:51 |
cloudnull | then cd playbooks; openstack-ansible ... | 15:51 |
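So the full recovery path for an older checkout looks roughly like this; it just adds the bootstrap step in front of the sequence above:

    cd /opt/openstack-ansible
    ./scripts/bootstrap-ansible.sh          # re-fetches the roles that now live in their own repos
    cd playbooks
    openstack-ansible setup-everything.yml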
evrardjp | I see what happened there :D | 15:51 |
cloudnull | yup | 15:52 |
cloudnull | adac: the issue is that we've moved a lot of the roles into their own repos | 15:52 |
cloudnull | and more will happen again in the nearish future | 15:52 |
*** admin0 has quit IRC | 15:53 | |
cloudnull | that task is safe to run all the time to get the roles that you may be missing. | 15:53 |
* cloudnull fingers need more coffee | 15:53 | |
evrardjp | cloudnull, quick question about tox tests, shouldn't we increase test coverage? I'm thinking of adding tests like an idempotency test | 15:54 |
evrardjp | and maybe multiple config tests | 15:54 |
cloudnull | that'd be nice. | 15:54 |
*** Bjoern_ has joined #openstack-ansible | 15:54 | |
cloudnull | idempotency tests may be hard. we have places where we use shell commands and things, however it's definitely not impossible. | 15:55 |
evrardjp | my haproxy role has idempotency tests for now, but it's completely not running in the same way | 15:55 |
cloudnull | multi-config tests would be excellent | 15:55 |
evrardjp | I'm running shell script for checking idempotency right now | 15:55 |
evrardjp | but I need to move all that to somewhere more visible | 15:55 |
evrardjp | and with tox :/ | 15:56 |
*** Bjoern_ is now known as Bjoern_\T | 15:56 | |
*** Bjoern_\T is now known as BjoernT | 15:56 | |
cloudnull | i've been doing tests for cluster systems and such by spinning up things using our container roles. we can do more of that to really smoke test multiple configs quickly. | 15:56 |
evrardjp | nice! | 15:56 |
evrardjp | it would just be dependencies | 15:56 |
cloudnull | I've been doing this so far | 15:57 |
cloudnull | https://github.com/openstack/openstack-ansible-rabbitmq_server/blob/master/tests/test.yml | 15:57 |
evrardjp | however this would just increase the build time | 15:57 |
cloudnull | it does increase the build time, however in the case of the rabbit role we're testing a cluster and then asserting that it works like a cluster. all of that is completed in <10 min | 15:58 |
evrardjp | I'm reading the file | 15:58 |
evrardjp | it makes sense | 15:58 |
adac | cloudnull, thanks again! | 15:58 |
cloudnull | evrardjp: example https://review.openstack.org/#/c/257788/ | 15:58 |
cloudnull | functional test took <5min | 15:58 |
evrardjp | I don't see what you'll do with the shell that does ps |grep rabbit but that's another story :p | 15:59 |
evrardjp | cool only 5 mins | 15:59 |
cloudnull | we can do that same thing with other tests and do multi-config scenarios per role as needed | 16:00 |
evrardjp | yeah that was my goal, I'll just shamelessly copy yours | 16:00 |
evrardjp | mine was for testing keepalived | 16:00 |
cloudnull | please do , and if you find a better way we'll shamelessly copy yours | 16:00 |
*** javeriak has joined #openstack-ansible | 16:00 | |
cloudnull | :) | 16:00 |
evrardjp | so I may have issues :) | 16:00 |
cloudnull | that'd be a cool test to work out. because i think we could do the same thing within neutron role later down the road too. | 16:01 |
evrardjp | I'm using semaphoreci right now, so I was planning to simply use the multiple "thread" system which builds on many hosts | 16:02 |
*** javeriak has quit IRC | 16:02 | |
evrardjp | that would be enough to improve coverage, but it can't test clustering | 16:02 |
*** javeriak has joined #openstack-ansible | 16:02 | |
evrardjp | I'll move my role to a more "openstack" approach: rst files, tox testing... | 16:03 |
*** targon has quit IRC | 16:03 | |
evrardjp | I'll need help on the setup of the repo etc. | 16:03 |
*** javeriak_ has joined #openstack-ansible | 16:05 | |
*** dslevin has quit IRC | 16:06 | |
cloudnull | anytime you let me know what you need | 16:06 |
*** javeriak has quit IRC | 16:07 | |
openstackgerrit | Kevin Carter proposed openstack/openstack-ansible-rabbitmq_server: changed the rabbitmq command test https://review.openstack.org/257979 | 16:07 |
cloudnull | evrardjp: ^ fixed earlier stupidity | 16:08 |
evrardjp | cloudnull, shell module can fail tasks now? | 16:09 |
evrardjp | I thought we still had to check the rc | 16:10 |
evrardjp | by registering a variable | 16:10 |
cloudnull | yes. if rc != 0 | 16:10 |
cloudnull | i think... | 16:10 |
cloudnull | http://cdn.pasteraw.com/d4u0ouan9eb3w8f6rufe0pghqdgt8l5 | 16:11 |
cloudnull | yes | 16:11 |
evrardjp | :) | 16:12 |
evrardjp | cool | 16:12 |
openstackgerrit | Merged openstack/openstack-ansible: Use fastest Linux mirrors for gate jobs https://review.openstack.org/256301 | 16:12 |
evrardjp | note that you could use command instead of shell | 16:12 |
evrardjp | :p | 16:12 |
openstackgerrit | Merged openstack/openstack-ansible: Update for PLUMgrid config - appending identity version to auth uri https://review.openstack.org/257148 | 16:12 |
evrardjp | (now that you don't have a | anymore) | 16:12 |
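A sketch of the two patterns being discussed; the task names and the exact rabbitmqctl invocation are assumptions for illustration, not the content of the review:

    # command (and shell) fail the task when the return code is non-zero
    - name: Check cluster status
      command: rabbitmqctl cluster_status

    # shell is only needed when you actually want a pipe, and you can still
    # register the result and inspect rc yourself
    - name: Check that the rabbit processes are present
      shell: ps aux | grep [r]abbitmq
      register: rabbit_ps
      failed_when: rabbit_ps.rc != 0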
openstackgerrit | Merged openstack/openstack-ansible: Skip Keystone task when not using swift w keystone https://review.openstack.org/254286 | 16:12 |
openstackgerrit | Kevin Carter proposed openstack/openstack-ansible-rabbitmq_server: changed the rabbitmq command test https://review.openstack.org/257979 | 16:13 |
cloudnull | done | 16:13 |
evrardjp | :) | 16:13 |
evrardjp | ok I'll give a +1 for your hard efforts :p | 16:13 |
* cloudnull taking the rest of the day off | 16:14 | |
cloudnull | :0 | 16:14 |
cloudnull | :) | 16:14 |
cloudnull | bah.. . ima bad person... | 16:14 |
palendae | Sure are | 16:14 |
cloudnull | bug triage time | 16:14 |
cloudnull | cloudnull, mattt, andymccr, d34dh0r53, hughsaunders, b3rnard0, palendae, Sam-I-Am, odyssey4me, serverascode, rromans, erikmwilson, mancdaz, _shaps_, BjoernT, claco, echiu, dstanek, jwagner, ayoung, prometheanfire, evrardjp, arbrandes, mhayden, scarlisle, luckyinva, ntt, javeriak | 16:14 |
palendae | Oh, wait, is that a time where I'm not supposed to agree? | 16:14 |
cloudnull | no, the agreement is on point. | 16:15 |
cloudnull | :p | 16:15 |
d34dh0r53 | o/ | 16:15 |
palendae | (present) | 16:16 |
mattt | o/ | 16:16 |
evrardjp | o/ | 16:16 |
Sam-I-Am | yo | 16:17 |
cloudnull | lets jump right in | 16:17 |
cloudnull | first up https://bugs.launchpad.net/openstack-ansible/+bug/1524770 | 16:17 |
openstack | Launchpad bug 1524770 in openstack-ansible juno "Cinder LVs are monitored by disk util MaaS " [Medium,In progress] - Assigned to Andy McCrae (andrew-mccrae) | 16:17 |
cloudnull | need people from rax to chime in here. | 16:17 |
cloudnull | seems like andymccr is already working on this | 16:17 |
cloudnull | however whats the heat level ? | 16:17 |
cloudnull | and should it target 10.1.19 ? | 16:18 |
cloudnull | related review https://review.openstack.org/#/c/255833/ | 16:19 |
andymccr | cloudnull: i think the PR is already in for backport | 16:19 |
cloudnull | palendae mattt d34dh0r53 Sam-I-Am andymccr ? | 16:20 |
andymccr | so basically, when that merges its done | 16:20 |
andymccr | id like it targeted at whatever the next 10 release is if we can get it in, but nobody will die if we dont - so its not massively critical i imagine :) | 16:21 |
cloudnull | done. | 16:21 |
cloudnull | next https://bugs.launchpad.net/openstack-ansible/+bug/1525900 | 16:21 |
openstack | Launchpad bug 1525900 in openstack-ansible " Adding multipath-tools package for nova hosts" [Undecided,New] | 16:21 |
palendae | Honestly not sure on heat level myself - since 10 branches are now more or less EOL, I'd assume it gets rolled up once some critical mass has happened | 16:21 |
evrardjp | hadn't we decided to include docs inside each commit and avoid DocImpact? | 16:23 |
evrardjp | shouldn't we target someone from the dev team to coordinate with doc team and go on? | 16:23 |
cloudnull | we did, however this was noting the override capability now. idk that we need to doc that specifically. | 16:23 |
cloudnull | Sam-I-Am: thoughts? | 16:23 |
Sam-I-Am | that would be nice | 16:24 |
Sam-I-Am | including docs means no docimpact | 16:24 |
cloudnull | i think DocImpact is a misnomer in this case. | 16:24 |
cloudnull | the change added a package, the override capability has always been there. | 16:24 |
evrardjp | so maybe this bug should be assigned to someone from the doc team, but with a comment from the committer to explain why the docimpact was mentioned | 16:24 |
Sam-I-Am | looking at bug - in 5983453 meetings at once, and no coffee for 3 hours now | 16:25 |
cloudnull | im happy to close this as "not a bug" | 16:25 |
evrardjp | yeah that's why it went through the merging :p | 16:25 |
evrardjp | Sam-I-Am, :D | 16:25 |
cloudnull | invalid , moving on | 16:25 |
Sam-I-Am | if we're documenting this override anywhere, thats the docs thing | 16:26 |
Sam-I-Am | if its self-documenting, then no | 16:26 |
Sam-I-Am | cloudnull: did you notice the change in how docimpact works? | 16:26 |
evrardjp | Sam-I-Am, there is a doc to explain the override already | 16:26 |
Sam-I-Am | evrardjp: ok, then its invalid | 16:26 |
cloudnull | Sam-I-Am: no, what was the change? | 16:26 |
evrardjp | when the deployer uses the overrides, there are issues that can happen. IMO a deployer using overrides should know what they are doing | 16:26 |
Sam-I-Am | cloudnull: it used to be that docimpact opened a bug in openstack-manuals, but that just allowed devs to throw docs over the fence | 16:27 |
cloudnull | evrardjp: ++ | 16:27 |
Sam-I-Am | cloudnull: so now using docimpact opens a separate bug in the original repo for tracking documentation | 16:27 |
cloudnull | nice! | 16:27 |
cloudnull | thats actually a good change. | 16:27 |
Sam-I-Am | in some cases it may be necessary to tag openstack-manuals with it, but not all of them (think devref) | 16:27 |
evrardjp | indeed | 16:27 |
cloudnull | next: https://bugs.launchpad.net/openstack-ansible/+bug/1526292 | 16:28 |
openstack | Launchpad bug 1526292 in openstack-ansible "infra_hosts definition doesn't set galera_all, fails on haproxy_install.yml" [Undecided,New] | 16:28 |
Sam-I-Am | if your patch includes all of the docs, you don't need docimpact... then you dont get another bug to handle. | 16:28 |
cloudnull | Robert Adler you around ? | 16:28 |
*** mattronix has quit IRC | 16:30 | |
cloudnull | so idk if we fix this, because this is a dynamic inventory/environmental bug and only an issue when using the infra_hosts group. | 16:30 |
cloudnull | which was deprecated in favor of shared-infra | 16:30 |
cloudnull | thoughts ? | 16:30 |
evrardjp | invalid? | 16:30 |
evrardjp | it's been shared-infra for ages, and infra_hosts is definitely not used in the branch mentioned | 16:31 |
openstackgerrit | Bjoern Teipel proposed openstack/openstack-ansible: Allow for multiple store backends for Glance https://review.openstack.org/255589 | 16:31 |
cloudnull | im good with that. | 16:31 |
cloudnull | ill add a note to the issue | 16:31 |
cloudnull | thats all we have | 16:32 |
cloudnull | anything else we should bring up ? | 16:32 |
*** galstrom_zzz is now known as galstrom | 16:32 | |
evrardjp | maybe we should check if there is something in the docs that would make these error come | 16:32 |
evrardjp | it's maybe a doc bug | 16:32 |
cloudnull | thats fair | 16:32 |
evrardjp | but we don't have info, so... | 16:32 |
mhayden | could someone double-check me on https://bugs.launchpad.net/openstack-ansible/+bug/1477273 ? that one looks like it might be done already | 16:33 |
openstack | Launchpad bug 1477273 in openstack-ansible " Fix Horizon SSL certificate management and distribution" [Low,In progress] - Assigned to Major Hayden (rackerhacker) | 16:33 |
cloudnull | evrardjp: so that is a doc issue | 16:34 |
evrardjp | yup I was grepping | 16:34 |
cloudnull | http://docs.openstack.org/developer/openstack-ansible/install-guide/configure-hostlist.html | 16:35 |
evrardjp | yup I'm on this one too.. | 16:35 |
cloudnull | ok so we'll need to update the docs on that one | 16:35 |
evrardjp | what's the new syntax? | 16:35 |
evrardjp | I didn't remember there was a change | 16:35 |
evrardjp | shared-infra? | 16:35 |
cloudnull | shared-infra_hosts | 16:35 |
evrardjp | do you know when was it changed? like before liberty? | 16:36 |
evrardjp | (to know where to stop backporting) | 16:36 |
cloudnull | kilo | 16:36 |
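For the doc update, the group as it reads today in openstack_user_config.yml looks roughly like this (host names and addresses are made up):

    shared-infra_hosts:
      infra1:
        ip: 172.29.236.11
      infra2:
        ip: 172.29.236.12
      infra3:
        ip: 172.29.236.13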
evrardjp | ok | 16:37 |
evrardjp | thanks | 16:37 |
evrardjp | :) | 16:37 |
cloudnull | mhayden: looks like its fixed released | 16:37 |
mhayden | cloudnull: should i flip status? normally i defer to ol' odyssey4me | 16:37 |
cloudnull | i'd say yes however it may need to be backported to liberty/kilo as a doc update | 16:38 |
cloudnull | mhayden: yes the vars that power the doc change are in kilo | 16:39 |
cloudnull | so it should be brought all the way back | 16:39 |
mhayden | cloudnull: can do | 16:40 |
odyssey4me | mattt cloudnull mhayden back | 16:40 |
*** cemmason1 has joined #openstack-ansible | 16:41 | |
odyssey4me | cloudnull mhayden FYI OpenStack-CI now flips all launchpad bugs from in-progress straight to fix-released | 16:42 |
cloudnull | ohai | 16:42 |
BjoernT | is there any gating for https://review.openstack.org/#/c/257104/ ? | 16:42 |
BjoernT | dont see anything triggering | 16:42 |
cloudnull | ha, maybe infra was down. ill retest | 16:42 |
odyssey4me | mattt cloudnull in my view anyone running tox should know what they're doing - if anything make run_tests.sh default to not running the functional test, but leave ansible-functional in the list so that run_tests can work through it | 16:43 |
evrardjp | cloudnull, I see there is still a infra_hosts in the env.d, is that correct? | 16:43 |
*** sdake_ is now known as sdake | 16:44 | |
cloudnull | there is | 16:45 |
evrardjp | should it? | 16:45 |
cloudnull | it's there for posterity, but should no longer be used. | 16:45 |
cloudnull | i'd say keep it for now. im sure removing it completely would break some folks. | 16:45 |
cloudnull | BjoernT: its testing now | 16:46 |
evrardjp | I agree | 16:46 |
evrardjp | I'll document shared-infra but also os-infra, which isn't in the doc | 16:46 |
BjoernT | thanks | 16:46 |
cloudnull | seems zuul missed it, was busted at that time, not enough goats were sacrificed, etc ... | 16:47 |
cloudnull | tyvm evrardjp | 16:47 |
cloudnull | in future times, it might be good to cover the use of the bits in env.d so that folks can build or add to things as needed. | 16:48 |
cloudnull | what we have is not absolutely required; it could be customized per deployment etc. | 16:48 |
cloudnull | but that may be more than we want to take on right now | 16:49 |
cloudnull | odyssey4me: im good with the changes as is | 16:49 |
cloudnull | we just need another core to agree. | 16:49 |
cloudnull | then in the infra gate we can consolidate those tests. | 16:50 |
evrardjp | cloudnull, I agree... I'll check when I have time to document a little more about the env.d, but don't expect anything before january :p | 16:51 |
cloudnull | ++ | 16:52 |
*** alextricity has joined #openstack-ansible | 16:52 | |
palendae | evrardjp: I think a lot of things are like that right now - on hold or very slow til Jan | 16:53 |
evrardjp | palendae, I'm on holidays :) | 16:54 |
palendae | evrardjp: Right, a significant portion of our team are or will be, too :) | 16:54 |
evrardjp | which makes it difficult to work, right? ;) | 16:54 |
palendae | It should, yes | 16:54 |
palendae | Doesn't stop some people | 16:54 |
*** mattronix has joined #openstack-ansible | 16:54 | |
evrardjp | I'm connected :p | 16:54 |
evrardjp | sometimes | 16:55 |
evrardjp | but yeah, I understand :) | 16:55 |
openstackgerrit | Jean-Philippe Evrard proposed openstack/openstack-ansible: Old references of infra_hosts in the documentation https://review.openstack.org/258012 | 17:00 |
openstackgerrit | Merged openstack/openstack-ansible-repo_server: Merge bashate/pep8 lint jobs in common job https://review.openstack.org/257773 | 17:00 |
*** adac has quit IRC | 17:01 | |
evrardjp | I'm off for... at least today! | 17:01 |
evrardjp | enjoy your evening | 17:02 |
cloudnull | have a good one evrardjp | 17:05 |
*** phiche has quit IRC | 17:06 | |
openstackgerrit | Jesse Pretorius proposed openstack/openstack-ansible: Implement multi-domain configuration for Keystone https://review.openstack.org/258015 | 17:09 |
odyssey4me | cloudnull ^ lemme know what you think | 17:12 |
odyssey4me | it's not portable back to Kilo, but will be fine for Liberty | 17:12 |
cloudnull | ^ reviewed | 17:15 |
cloudnull | i agree, kilo should be left alone for this specific case. | 17:16 |
cloudnull | my nit https://review.openstack.org/#/c/258015/1/playbooks/roles/os_keystone/templates/keystone.conf.j2 is that if domain specific config is activated we should do it all the way down and get rid of the global driver | 17:17 |
*** Prithiv has quit IRC | 17:17 | |
odyssey4me | cloudnull yep, I was just thinking that | 17:18 |
cloudnull | then its explicit what domain does what and where. | 17:18 |
odyssey4me | yup agreed | 17:19 |
cloudnull | it'll also force the deployer to consider domains when activating them, not just create things on the fly and forget about them later. | 17:19 |
cloudnull | but otherwise it looks great | 17:19 |
odyssey4me | I was thinking that perhaps another task should go in to also create the domains listed... otherwise they won't work. | 17:20 |
odyssey4me | that said, perhaps we should rather let deployers be the experts and create the domain afterwards? | 17:20 |
*** phiche has joined #openstack-ansible | 17:21 | |
cloudnull | that makes sense we do the same thing in cinder when multiple backends are specified. | 17:21 |
cloudnull | because they wont work otherwise . | 17:21 |
cloudnull | so i think thats a healthy pattern to follow | 17:21 |
cloudnull | but also concede that deployers should be the experts in how they want the domains created when using multiple domains. so either way is fine, and regardless of the direction we choose a note should be added to the docs to tell the deployer what they need to know to make it all go. | 17:23 |
odyssey4me | cloudnull yep, I'll do the doc thing - good catch :) | 17:25 |
*** adac has joined #openstack-ansible | 17:29 | |
*** galstrom is now known as galstrom_zzz | 17:29 | |
cloudnull | odyssey4me: i think you're also going to have to update https://github.com/openstack/openstack-ansible/blob/master/playbooks/roles/os_keystone/tasks/keystone_service_setup.yml | 17:29 |
cloudnull | to define a default domain | 17:30 |
cloudnull | which will have to match something in the multi-domain config | 17:30 |
cloudnull | maybe a keystone_service_domain | 17:30 |
cloudnull | and then other services will have to have *_service_domain | 17:31 |
cloudnull | because if i did multi-domain config i'd be able to move services into sql, called ServiceDomainX, but that would break the rest of the plays. | 17:32 |
odyssey4me | so for the moment OpenStack has a basic assumption of using the Default domain for services | 17:33 |
odyssey4me | some of the services are still dependent on Keystone's v2 API, so other domains are not an option | 17:33 |
odyssey4me | We still have that issue in Aodh, although a patch merged in master recently to fix it. | 17:34 |
odyssey4me | but yeah, it'll require some more changes in various places | 17:34 |
odyssey4me | more than I have energy for right now :) | 17:34 |
cloudnull | yes. but if you define a users domain w/ ldap called "Users" and didn't define the "Default" domain keystone would be effectively broken | 17:35 |
openstackgerrit | Miguel Alex Cantu proposed openstack/openstack-ansible: Added notification options for keystone https://review.openstack.org/257547 | 17:35 |
cloudnull | go rest up, this can wait for another day. | 17:35 |
*** japplewhite has joined #openstack-ansible | 17:35 | |
cloudnull | :) | 17:35 |
odyssey4me | cloudnull yeah, so that's kinda where I got stuck thinking... dammit - more complication | 17:35 |
odyssey4me | one method could be to set one dict with the defaults for the Default domain... but that seems wasteful | 17:36 |
cloudnull | maybe the answer is to simply define it by default and users could add to it in their specific config as they deem fit | 17:37 |
odyssey4me | how do you mean? a static template with a config override? | 17:37 |
cloudnull | that way all deployments are all using the multi-domain backend regardless of ldap, or other domain | 17:37 |
*** sigmavirus24 is now known as sigmavirus24_awa | 17:38 | |
cloudnull | simply make the example an actual var | 17:38 |
cloudnull | https://review.openstack.org/#/c/258015/1/playbooks/roles/os_keystone/defaults/main.yml | 17:38 |
*** baker has quit IRC | 17:38 | |
odyssey4me | well, that's what I was thinking - all deployments use domain_specific_drivers_enabled, including the Default domain | 17:38 |
cloudnull | with the default domain defined within it | 17:38 |
odyssey4me | sure, but then someone wanting to add another var will end up unintentionally overriding it | 17:38 |
cloudnull | then we're converging on a single use case and ldap is simply an extension of whats already in-place | 17:39 |
cloudnull | if the user wants to add ldap then they redefine the keystone_domains var as needed | 17:40 |
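Illustrative only -- the real variable layout depends on the review under discussion, and the domain names and driver settings below are assumptions:

    keystone_domains:
      Default:
        driver: sql
      Users:
        driver: ldap
        # ldap url, suffix, bind credentials, etc. would go here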
odyssey4me | what I mean is, someone will add just a Users domain, and forget to add the Default domain bits | 17:40 |
*** markvoelker_ has joined #openstack-ansible | 17:40 | |
cloudnull | we should doc that it always needs to be defined for now, but if they do that it will break. they'll log a bug, and we'll point at the docs. | 17:41 |
*** markvoelker has quit IRC | 17:41 | |
*** karimb_ has quit IRC | 17:42 | |
odyssey4me | I'm thinking of a slightly different method, although that would also be fine. | 17:42 |
cloudnull | either way. im just spit-balling. | 17:42 |
odyssey4me | the alternative method is to somehow check whether the dict has the Default domain in the list, and if not then implement the sql config - otherwise trust what's in the dict | 17:43 |
*** galstrom_zzz is now known as galstrom | 17:43 | |
*** baker has joined #openstack-ansible | 17:43 | |
cloudnull | we could also use the assert module to check for it and fail if not | 17:43 |
odyssey4me | pre-requisite checks are something we could do a lot more of all over the place | 17:44 |
odyssey4me | but much, much earlier | 17:45 |
odyssey4me | perhaps even rolled into the openstack-ansible command | 17:45 |
cloudnull | yea, a sanity check play could go a long way to helping a lot of people | 17:45 |
cloudnull | they setup all the config, run openstack-ansible sanity-check.yml and its does a quick check on all the things we can think of to make sure the deployment is successful . | 17:46 |
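A minimal example of the assert idea from a few lines up; the variable name follows the keystone_domains sketch earlier and is an assumption:

    - name: Fail early if the Default domain is not defined
      assert:
        that:
          - "'Default' in keystone_domains"
        msg: "keystone_domains must always include the Default domain"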
*** markvoelker_ has quit IRC | 17:47 | |
openstackgerrit | Merged openstack/openstack-ansible-galera_server: Updated repo for new org https://review.openstack.org/256016 | 17:48 |
*** markvoelker has joined #openstack-ansible | 17:49 | |
*** baker has quit IRC | 17:59 | |
*** daneyon_ has joined #openstack-ansible | 17:59 | |
*** baker has joined #openstack-ansible | 17:59 | |
odyssey4me | I'm out for the evening. My 'flu is messing with my ability to think. | 18:01 |
odyssey4me | Night all. | 18:02 |
Sam-I-Am | s/think/drink | 18:02 |
Sam-I-Am | feel better | 18:02 |
*** daneyon has quit IRC | 18:03 | |
cloudnull | take care | 18:05 |
*** cemmason1 has quit IRC | 18:12 | |
*** phiche1 has joined #openstack-ansible | 18:17 | |
*** galstrom is now known as galstrom_zzz | 18:18 | |
*** phiche has quit IRC | 18:18 | |
*** japplewhite has quit IRC | 18:25 | |
*** sigmavirus24_awa is now known as sigmavirus24 | 18:31 | |
*** tricksters has quit IRC | 18:32 | |
*** elo has joined #openstack-ansible | 18:32 | |
*** daneyon has joined #openstack-ansible | 18:33 | |
*** daneyon_ has quit IRC | 18:35 | |
*** Guest73233 is now known as mgagne | 18:42 | |
mhayden | if a user doesn't specify an affinity in their openstack_user_config.yml file, how do we know the quantity of containers to make of each type? | 18:42 |
*** mgagne is now known as Guest76434 | 18:42 | |
mhayden | i'm trawling through dynamic_inventory.py now | 18:43 |
cloudnull | 1 | 18:43 |
cloudnull | per relevant host | 18:43 |
mhayden | so if a user says they have one host in "shared-infra_hosts", does that mean they'll have one galera container and one rabbitmq container? | 18:44 |
cloudnull | yes | 18:44 |
mhayden | ah, so the .yml.aio makes much more sense now | 18:44 |
*** adac has quit IRC | 18:44 | |
mhayden | we have to push galera_container affinity to 3 to make them stack up three on one host, eh? | 18:45 |
bgmccollum | exactly | 18:46 |
mhayden | okay, that makes sense now | 18:46 |
mhayden | thanks, folks | 18:46 |
bgmccollum | or set to 0 in the case of rabbit and a stand alone swift configuration | 18:46 |
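Putting those two answers together, a hedged openstack_user_config.yml sketch (host name and address are made up; check env.d for the exact container key names):

    shared-infra_hosts:
      infra1:
        affinity:
          galera_container: 3        # stack three galera containers on this host
          rabbit_mq_container: 0     # skip rabbitmq here, e.g. for standalone swift
        ip: 172.29.236.11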
mhayden | yeah, i just picked up the bug to document that :P | 18:46 |
bgmccollum | orly | 18:46 |
bgmccollum | :) | 18:46 |
mhayden | and it forced me to learn something new! :P | 18:46 |
mhayden | which is fun | 18:46 |
bgmccollum | stash away in brain...immediately forget. | 18:47 |
*** harlowja_ has quit IRC | 18:49 | |
*** harlowja has joined #openstack-ansible | 18:50 | |
cloudnull | bgmccollum: got the right idea | 18:53 |
cloudnull | :) | 18:53 |
*** galstrom_zzz is now known as galstrom | 18:54 | |
cloudnull | as a reminder if any of our cores are around we'd like to make these go https://review.openstack.org/#/q/status:open+branch:master+topic:lint-jobs,n,z which will help reduce load on infra | 18:56 |
cloudnull | also this would be useful https://review.openstack.org/#/c/257979/ | 18:59 |
*** richoid has quit IRC | 19:00 | |
stevelle | working on them | 19:02 |
stevelle | while other tasks run | 19:02 |
mhayden | cloudnull / bgmccollum: am i on the right track? https://gist.github.com/major/9272b37f66169336a621 | 19:02 |
mhayden | well i have messed up indentions for affinity there :P | 19:03 |
cloudnull | thanks stevelle ! | 19:03 |
cloudnull | mhayden: yes | 19:03 |
cloudnull | that would result in 0 rabbitmq containers on those hosts | 19:03 |
mhayden | cloudnull: WOOT | 19:03 |
mhayden | thanks sir | 19:03 |
cloudnull | mhayden: for prez! | 19:03 |
mhayden | cloudnull: you're still my most favorite former openstack-ansible PTL named kevin | 19:04 |
cloudnull | that makes one | 19:04 |
mhayden | haha | 19:04 |
*** richoid has joined #openstack-ansible | 19:05 | |
*** lkoranda has quit IRC | 19:07 | |
*** eil397 has joined #openstack-ansible | 19:09 | |
openstackgerrit | Merged openstack/openstack-ansible-openstack_hosts: Merge bashate/pep8 lint jobs in common job https://review.openstack.org/257752 | 19:09 |
openstackgerrit | Major Hayden proposed openstack/openstack-ansible: Adding docs for affinity https://review.openstack.org/258116 | 19:10 |
*** galstrom is now known as galstrom_zzz | 19:11 | |
*** oneswig has joined #openstack-ansible | 19:13 | |
*** richoid has quit IRC | 19:13 | |
*** lkoranda has joined #openstack-ansible | 19:14 | |
*** oneswig has quit IRC | 19:17 | |
*** richoid has joined #openstack-ansible | 19:19 | |
*** elo has quit IRC | 19:28 | |
*** KLevenstein_ has joined #openstack-ansible | 19:32 | |
*** openstackgerrit has quit IRC | 19:32 | |
*** KLevenstein has quit IRC | 19:32 | |
*** KLevenstein_ is now known as KLevenstein | 19:32 | |
*** openstackgerrit has joined #openstack-ansible | 19:33 | |
* mhayden hugs bgmccollum | 19:46 | |
bgmccollum | hugs not bugs | 19:47 |
mhayden | hah, yes | 19:47 |
*** b3rnard0 is now known as b3rnard0_away | 19:52 | |
openstackgerrit | Merged openstack/openstack-ansible-apt_package_pinning: Merge bashate/pep8 lint jobs in common job https://review.openstack.org/257748 | 19:59 |
openstackgerrit | Merged openstack/openstack-ansible-security: Merge bashate/pep8 lint jobs in common job https://review.openstack.org/257728 | 20:00 |
*** javeriak_ has quit IRC | 20:07 | |
*** Bofu2U2 has joined #openstack-ansible | 20:15 | |
Bofu2U2 | alrighty, no longer want to bang head against wall. | 20:16 |
bgmccollum | Bofu2U2: good to hear | 20:16 |
Sam-I-Am | Bofu2U2: we can fix that | 20:16 |
cloudnull | bgmccollum: fixed? | 20:16 |
Bofu2U2 | Sam-I-Am - I can too, just need to run openstack-ansible setup-hosts.yml | 20:16 |
Bofu2U2 | :P | 20:16 |
cloudnull | or did you take the nuclear option. | 20:16 |
bgmccollum | cloudnull: you mean Bofu2U2 ? | 20:17 |
bgmccollum | tabcomplete fail | 20:17 |
Bofu2U2 | cloudnull nuclear. | 20:17 |
Bofu2U2 | started over but with the interface configs. | 20:17 |
cloudnull | yup tabcomplete failure | 20:19 |
cloudnull | Bofu2U2: and all is right with the world | 20:19 |
Bofu2U2 | well, not really | 20:19 |
Bofu2U2 | but so far so good with the setup-hosts - just orange and yellow so far. | 20:19 |
openstackgerrit | Merged openstack/openstack-ansible-rsyslog_client: Merge bashate/pep8 lint jobs in common job https://review.openstack.org/257536 | 20:20 |
*** elo has joined #openstack-ansible | 20:22 | |
Bofu2U2 | alrighty, it's at lxc_container_create - awaiting a sea of red. | 20:23 |
alextricity | Where are we keeping the rabbitmq_server role these days? | 20:24 |
*** galstrom_zzz is now known as galstrom | 20:25 | |
alextricity | found it | 20:26 |
cloudnull | alextricity: https://github.com/openstack/openstack-ansible-rabbitmq_server | 20:26 |
alextricity | By any chance, would anybody know why 'rabbitmqctl list_qeueus' wouldn't return anything? | 20:27 |
alextricity | Isn't there supposed to be a bunch of queues for each service? | 20:28 |
cloudnull | this is a thing we should have an opinion on https://review.openstack.org/#/c/257530/ if anyone has some spare cycles. | 20:29 |
cloudnull | alextricity: each service in each vhost | 20:29 |
alextricity | cloudnull: Ah... that's the keyword i'm missing :) | 20:29 |
* alextricity needs to practice his rabbitmq-fu | 20:30 | |
*** karimb has joined #openstack-ansible | 20:30 | |
cloudnull | rabbitmqctl list_queues -p /nova | 20:31 |
cloudnull | would get you what you're looking for | 20:31 |
cloudnull | and list_vhosts for a complete list of all the vhosts we have | 20:31 |
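As a quick sketch of those two commands together (the -q flag just suppresses the informational banner):

    rabbitmqctl -q list_vhosts                  # one vhost per service in this deployment
    rabbitmqctl -q list_queues -p /nova         # queues for the nova vhost
    # or walk every vhost in one go:
    for v in $(rabbitmqctl -q list_vhosts); do rabbitmqctl -q list_queues -p "$v"; done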
alextricity | cloudnull: Thanks :) | 20:32 |
cloudnull | anytime | 20:32 |
cloudnull | off for a bit, bbl | 20:33 |
Bofu2U2 | glhflo3 cloudnull | 20:34 |
BjoernT | who can finish https://review.openstack.org/#/c/257104/ | 20:35 |
BjoernT | so we can get the kilo/liberty changes reviewed and merged | 20:36 |
Bofu2U2 | Looks like the only failure was the neutron agents | 20:38 |
sigmavirus24 | cloudnull: odyssey4me btw, https://bugs.launchpad.net/nova/+bug/1526413 might bite us at the gate if our version of requests isn't pinned | 20:40 |
openstack | Launchpad bug 1526413 in OpenStack Compute (nova) liberty "test_app_using_ipv6_and_ssl fails with requests 2.9.0" [High,Confirmed] | 20:40 |
Bofu2U2 | Probably because I still had the vlan neutron network in the vars. | 20:40 |
*** dslevin has joined #openstack-ansible | 20:47 | |
*** matt6434 is now known as mattoliverau | 20:48 | |
*** b3rnard0_away is now known as b3rnard0 | 20:48 | |
*** Prithiv has joined #openstack-ansible | 20:51 | |
*** galstrom is now known as galstrom_zzz | 20:51 | |
*** phalmos has joined #openstack-ansible | 20:53 | |
Bofu2U2 | Looks like still timing out hard on the "wait for ssh to be available" | 20:55 |
Bofu2U2 | The controllers can ping the lxc containers that are running on themselves, but 1 can't ping the ones on 2, etc. | 20:57 |
*** dslev has joined #openstack-ansible | 20:57 | |
Bofu2U2 | Actually ... I take that back. they can ping the instances within themselves, but deploy server can't | 20:58 |
Bofu2U2 | So all 3 controllers can ping all containers within the group of 3 (cont1 can ping instances on cont2, etc) but deploy can't access any of them. Hm. | 20:58 |
*** phiche has joined #openstack-ansible | 21:08 | |
bgmccollum | who / what updates the upstream repo for juno? | 21:10 |
*** phiche1 has quit IRC | 21:10 | |
bgmccollum | the rackspace_monitoring_cli update requires a newer version of rackspace_monitoring... | 21:12 |
*** sigmavirus24 is now known as sigmavirus24_awa | 21:17 | |
*** sigmavirus24_awa is now known as sigmavirus24 | 21:17 | |
openstackgerrit | Merged openstack/openstack-ansible-lxc_container_create: Merge bashate/pep8 lint jobs in common job https://review.openstack.org/257758 | 21:20 |
*** admin0 has joined #openstack-ansible | 21:22 | |
openstackgerrit | Merged openstack/openstack-ansible-lxc_hosts: Merge bashate/pep8 lint jobs in common job https://review.openstack.org/257791 | 21:24 |
bgmccollum | is it as simple as updating the appropriate repo_vars file for rackspace_monitoring? | 21:25 |
cloudnull | Bofu2U2: so you can ping containers everywhere except the deploy host ? | 21:27 |
Bofu2U2 | yeah I think I messed up the routing | 21:27 |
Bofu2U2 | re-doing the interfaces on it and rebooting | 21:27 |
cloudnull | ah thatll do it. | 21:27 |
* cloudnull hands Bofu2U2 a beer | 21:27 | |
Bofu2U2 | had all of the 10.10X interfaces running through 10.20 | 21:27 |
Bofu2U2 | instead of the .1 gateway on each vlan | 21:28 |
openstackgerrit | Merged openstack/openstack-ansible-pip_install: Merge bashate/pep8 lint jobs in common job https://review.openstack.org/257818 | 21:28 |
cloudnull | BjoernT: we're waiting on another core for https://review.openstack.org/#/c/257104/ | 21:28 |
openstackgerrit | Merged openstack/openstack-ansible-pip_lock_down: Merge bashate/pep8 lint jobs in common job https://review.openstack.org/257822 | 21:29 |
*** KLevenstein has quit IRC | 21:30 | |
openstackgerrit | Merged openstack/openstack-ansible-rabbitmq_server: Merge bashate/pep8 lint jobs in common job https://review.openstack.org/257788 | 21:30 |
*** KLevenstein has joined #openstack-ansible | 21:31 | |
*** Guest76434 is now known as mgagne | 21:33 | |
*** mgagne is now known as Guest160 | 21:34 | |
*** Guest160 has quit IRC | 21:34 | |
*** Guest160 has joined #openstack-ansible | 21:34 | |
*** Guest160 is now known as mgagne | 21:35 | |
openstackgerrit | Merged openstack/openstack-ansible-py_from_git: Merge bashate/pep8 lint jobs in common job https://review.openstack.org/257817 | 21:37 |
openstackgerrit | Merged openstack/openstack-ansible-rsyslog_server: Merge bashate/pep8 lint jobs in common job https://review.openstack.org/257761 | 21:38 |
BjoernT | cloudnull: yes i know | 21:42 |
BjoernT | that's why i asked to get all this stuff into the other branches | 21:42 |
cloudnull | BjoernT: its PR'd right ? | 21:42 |
BjoernT | https://review.openstack.org/#/c/257104/ | 21:43 |
BjoernT | ? | 21:43 |
cloudnull | yes | 21:43 |
BjoernT | that's the one for https://review.openstack.org/256082 | 21:43 |
openstackgerrit | Merged openstack/openstack-ansible-memcached_server: Merge bashate/pep8 lint jobs in common job https://review.openstack.org/257767 | 21:44 |
cloudnull | just checking that we have https://review.openstack.org/257434 and https://review.openstack.org/257425 | 21:45 |
cloudnull | off to eye dr. bbl | 21:46 |
BjoernT | correct, those are for the sub branches | 21:46 |
Bofu2U2 | Alright, got it fixed with the IP's and now it's pulling the "Unknown host in 'None:8776'" etc on the haproxy config. Sigh. At least the containers are up now -- progress. | 22:06 |
openstackgerrit | Byron McCollum proposed openstack/openstack-ansible: Upgrade rackspace-monitoring package https://review.openstack.org/258162 | 22:06 |
openstackgerrit | Byron McCollum proposed openstack/openstack-ansible: Upgrade rackspace-monitoring package https://review.openstack.org/258162 | 22:11 |
*** admin0 has quit IRC | 22:28 | |
*** dslev has quit IRC | 22:39 | |
*** phiche has quit IRC | 22:45 | |
*** phiche has joined #openstack-ansible | 22:47 | |
*** phiche has quit IRC | 22:47 | |
*** dslev_ has joined #openstack-ansible | 22:47 | |
*** phalmos has quit IRC | 22:51 | |
*** dstanek has quit IRC | 22:56 | |
*** dstanek has joined #openstack-ansible | 22:56 | |
*** sdake has quit IRC | 23:01 | |
*** dstanek has quit IRC | 23:06 | |
*** prometheanfire has quit IRC | 23:06 | |
*** prometheanfire has joined #openstack-ansible | 23:07 | |
*** dstanek has joined #openstack-ansible | 23:07 | |
*** Bofu2U2 has quit IRC | 23:13 | |
*** sigmavirus24 is now known as sigmavirus24_awa | 23:15 | |
*** Guest75 has joined #openstack-ansible | 23:17 | |
*** manous has quit IRC | 23:22 | |
*** baker has quit IRC | 23:26 | |
*** elo has quit IRC | 23:26 | |
*** errr has quit IRC | 23:30 | |
*** errr has joined #openstack-ansible | 23:31 | |
Sam-I-Am | cloudnull: when/where is the o-a midcycle? | 23:31 |
Sam-I-Am | seems there was some discussion but i cant find the final decision | 23:32 |
openstackgerrit | Merged openstack/openstack-ansible: Updating AIO docs for Ansible playbook https://review.openstack.org/257805 | 23:33 |
openstackgerrit | Merged openstack/openstack-ansible: Fix typos in doc/source/developer-docs https://review.openstack.org/257257 | 23:33 |
openstackgerrit | Merged openstack/openstack-ansible: Fix typos in doc/source/developer-docs https://review.openstack.org/257258 | 23:39 |
openstackgerrit | Merged openstack/openstack-ansible-galera_client: Merge bashate/pep8 lint jobs in common job https://review.openstack.org/257782 | 23:39 |
stevelle | fwiw Sam-I-Am I haven't heard a final yet either | 23:39 |
openstackgerrit | Merged openstack/openstack-ansible-openstack_hosts: Increasing max AIO kernel limit https://review.openstack.org/257104 | 23:39 |
*** KLevenstein has quit IRC | 23:46 | |
openstackgerrit | Michael Carden proposed openstack/openstack-ansible: Add missing file extension https://review.openstack.org/258196 | 23:47 |
*** errr has quit IRC | 23:49 | |
*** karimb has quit IRC | 23:56 |