*** dave-mccowan has joined #openstack-ansible | 00:18 | |
*** tosky has quit IRC | 00:51 | |
*** dave-mccowan has quit IRC | 01:32 | |
*** gyee has quit IRC | 01:36 | |
*** jawad_axd has joined #openstack-ansible | 02:22 | |
*** jawad_axd has quit IRC | 02:27 | |
*** jawad_axd has joined #openstack-ansible | 02:43 | |
*** jawad_axd has quit IRC | 02:47 | |
openstackgerrit | Merged openstack/openstack-ansible-ops master: Allow user override of beat install locations https://review.opendev.org/706528 | 03:42 |
*** Neurognostic_ has joined #openstack-ansible | 04:15 | |
*** Neurognostic has quit IRC | 04:17 | |
*** Neurognostic_ has quit IRC | 04:31 | |
*** udesale has joined #openstack-ansible | 05:04 | |
*** nicolasbock has quit IRC | 05:09 | |
*** DanyC has joined #openstack-ansible | 05:21 | |
*** DanyC has quit IRC | 05:23 | |
*** evrardjp has quit IRC | 05:34 | |
*** evrardjp has joined #openstack-ansible | 05:34 | |
*** raukadah is now known as chandankumar | 05:36 | |
*** CeeMac has quit IRC | 05:50 | |
*** goldyfruit has quit IRC | 06:00 | |
*** goldyfruit has joined #openstack-ansible | 06:00 | |
*** elenalindq has joined #openstack-ansible | 06:17 | |
*** jhesketh has quit IRC | 06:48 | |
*** jhesketh has joined #openstack-ansible | 06:50 | |
*** mcarden has quit IRC | 06:51 | |
*** rgogunskiy has joined #openstack-ansible | 07:00 | |
*** jhesketh has quit IRC | 07:15 | |
*** jawad_axd has joined #openstack-ansible | 07:22 | |
*** jhesketh has joined #openstack-ansible | 07:25 | |
*** sluna has joined #openstack-ansible | 07:27 | |
*** miloa has joined #openstack-ansible | 07:30 | |
*** shyamb has joined #openstack-ansible | 07:41 | |
miloa | Morning | 07:41 |
*** cshen has joined #openstack-ansible | 07:45 | |
*** shyamb has quit IRC | 07:47 | |
*** ivve has joined #openstack-ansible | 07:56 | |
*** blue_asni has joined #openstack-ansible | 08:03 | |
blue_asni | hello folks, nice to join you. i have a problem deploying a "train" AIO using openstack-ansible on opensuse leap: it fails with "failed, retrying: ensure image has been pre-staged" | 08:08 |
blue_asni | anyone help appreciated | 08:08 |
*** sluna has quit IRC | 08:08 | |
*** sluna has joined #openstack-ansible | 08:09 | |
blue_asni | openstack-ansible setup-hosts.yml | 08:22 |
blue_asni | failed | 08:22 |
cshen | blue_asni: put your error logs in http://paste.openstack.org/ | 08:30 |
cshen | will help people investigate your issue. | 08:30 |
jrosser | notably, as far as i can see we don't test containerised deployments on opensuse at all | 08:33 |
*** tosky has joined #openstack-ansible | 08:36 | |
*** shyamb has joined #openstack-ansible | 08:51 | |
*** xakaitetoia has joined #openstack-ansible | 08:59 | |
*** masterpe has quit IRC | 09:00 | |
*** blue_asni has quit IRC | 09:26 | |
*** fridtjof[m] has joined #openstack-ansible | 09:31 | |
*** shyamb has quit IRC | 09:33 | |
*** shyamb has joined #openstack-ansible | 09:40 | |
openstackgerrit | OpenStack Proposal Bot proposed openstack/openstack-ansible master: Imported Translations from Zanata https://review.opendev.org/707794 | 09:42 |
admin0 | good morning .. what playbook adds the VIPs to the controllers ? | 09:44 |
admin0 | i am trying for the first time with a single controller only .. the vip is not coming up .. on multi controller env, i recall it used to be the haproxy playbook | 09:44 |
admin0 | or am i mistaken .. it was ages (rocky) ago | 09:45 |
admin0 | i am trying 20.0.1 now | 09:45 |
*** ysastri has joined #openstack-ansible | 09:45 | |
*** shyamb has quit IRC | 09:52 | |
jrosser | admin0: on a single controller there is no VIP | 09:54 |
admin0 | so i need to add the vip addresses manually myself ? | 09:54 |
jrosser | just the IP for the external interface that haproxy is listening on | 09:54 |
jrosser | it is very much like how an AIO is setup | 09:55 |
jrosser | the haproxy role only deploys keepalived when there are >1 controllers | 09:56 |
admin0 | ok .. | 09:56 |
admin0 | thanks | 09:56 |
admin0 | i have either done aio or multi controllers .. but never a single-controller multinode setup .. | 09:56 |
admin0 | thanks.. i will add the ips myself then | 09:56 |
jrosser | afaik it should look kind of like one controller from your existing multinode deploys, minus keepalived | 09:57 |
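For the single-controller case described above, a minimal sketch of the relevant config (the addresses are placeholders, not taken from this discussion):

    # openstack_user_config.yml (sketch; 203.0.113.10 / 172.29.236.10 are hypothetical)
    global_overrides:
      external_lb_vip_address: 203.0.113.10   # an IP already bound on the controller's external interface
      internal_lb_vip_address: 172.29.236.10  # an IP already bound on br-mgmt
      # with one controller the haproxy role skips keepalived, so no floating
      # VIP is created; haproxy simply binds these existing addresses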
*** masterpe has joined #openstack-ansible | 10:00 | |
*** andrewbonney has joined #openstack-ansible | 10:06 | |
*** L_SR has joined #openstack-ansible | 10:33 | |
L_SR | Hello guys, some people in #openstack-infra get error 500 from opendev.org/openstack; do you? (it failed in the middle of an OSA deployment I was doing) | 10:34 |
*** shyamb has joined #openstack-ansible | 10:36 | |
*** ysastri has quit IRC | 11:05 | |
*** L_SR has quit IRC | 11:14 | |
openstackgerrit | Merged openstack/openstack-ansible master: Drop virtualenv pip package for CI https://review.opendev.org/707212 | 11:27 |
*** udesale_ has joined #openstack-ansible | 11:56 | |
*** shyamb has quit IRC | 11:59 | |
*** udesale has quit IRC | 11:59 | |
*** shyamb has joined #openstack-ansible | 12:01 | |
admin0 | stuck on a strange error: "Create openstack client bash_completion script" fails with /bin/bash: openstack: command not found .. on the utility container in the setup-infra playbook | 12:07 |
admin0 | rerunning hosts and infra with limit to this container fixed it :) | 12:11 |
cshen | weird :-) | 12:11 |
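The rerun admin0 describes is just the standard playbooks with a --limit; a sketch (the container name is hypothetical):

    # re-run host and infra setup against one container (name is made up)
    cd /opt/openstack-ansible/playbooks
    openstack-ansible setup-hosts.yml --limit infra1_utility_container-1a2b3c4d
    openstack-ansible setup-infrastructure.yml --limit infra1_utility_container-1a2b3c4d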
*** nicolasbock has joined #openstack-ansible | 12:13 | |
admin0 | hi .. i am getting this error during setup-infra .. http://paste.openstack.org/show/789569/ .. set_fact _current_monitor_address\n ^ here\n" .. what might i be missing .. in the config, ceph-mon_hosts: *infrastructure_hosts is set | 12:18 |
noonedeadpunk | admin0: I'm sure I know what it is | 12:25 |
admin0 | current_monitor_address does not look like a var | 12:25 |
admin0 | \o/ | 12:25 |
*** shyamb has quit IRC | 12:25 | |
admin0 | my savior | 12:25 |
noonedeadpunk | let me find patch fixing this | 12:25 |
noonedeadpunk | admin0: https://review.opendev.org/#/c/702853/ | 12:27 |
openstackgerrit | Dmitriy Rabotyagov (noonedeadpunk) proposed openstack/openstack-ansible stable/train: Allow ceph metal CI deployments https://review.opendev.org/707829 | 12:27 |
openstackgerrit | Dmitriy Rabotyagov (noonedeadpunk) proposed openstack/openstack-ansible stable/stein: Allow ceph metal CI deployments https://review.opendev.org/707830 | 12:27 |
openstackgerrit | Dmitriy Rabotyagov (noonedeadpunk) proposed openstack/openstack-ansible stable/rocky: Allow ceph metal CI deployments https://review.opendev.org/707831 | 12:28 |
noonedeadpunk | admin0: are you doing an aio deployment? | 12:28 |
admin0 | no .. | 12:28 |
admin0 | its a multi node .. but with a single controller | 12:28 |
noonedeadpunk | hm.. with lxc? | 12:29 |
admin0 | kvm | 12:29 |
noonedeadpunk | ok, anyway I think you need to set a valid monitor_interface | 12:30 |
admin0 | but how .. i did not find a variable for this | 12:30 |
noonedeadpunk | just "monitor_interface: bond0" or whatever in user_variables.yml | 12:31 |
*** cshen has quit IRC | 12:31 | |
noonedeadpunk | I was hitting exactly the same error in aio builds when monitor_interface was set to eth1, which didn't exist on the node | 12:31 |
noonedeadpunk | It's used to retrieve monitor address by interface name | 12:32 |
admin0 | i put monitor_interface: br-mgmt in variables, .. still got the same error | 12:32 |
noonedeadpunk | what interface is used on ceph monitor hosts as ceph public network? | 12:33 |
admin0 | br-storage | 12:34 |
admin0 | public_network: 172.29.244.0/22 | 12:34 |
admin0 | monitor_address_block: 172.29.244.0/22 | 12:34 |
noonedeadpunk | does br-storage exist on ceph-mon hosts? | 12:35 |
admin0 | yep | 12:35 |
admin0 | all my hosts have the 4 bridges setup | 12:35 |
noonedeadpunk | ok, so then set monitor_interface: br-storage | 12:35 |
admin0 | they are blades, so they come with identical interfaces | 12:35 |
admin0 | same error -- error output and relevant lines from user_variables.yml http://paste.openstack.org/show/xvYLFCrAdLyv89QVPjdg/ | 12:38 |
admin0 | tag 20.0.1 | 12:38 |
noonedeadpunk | ah, ok, so you set the monitor address not by interface, but by address... | 12:38 |
noonedeadpunk | So everything I said before is unrelated :( | 12:39 |
admin0 | i just copied over what i was doing in a rocky install | 12:39 |
admin0 | i should remove this ? monitor_address_block: 172.29.244.0/22 | 12:41 |
noonedeadpunk | you may try... but not sure what's better option. | 12:42 |
admin0 | the user_variables.yml.prod-ceph.example also uses monitor_address_block: "{{ cidr_networks.container }}" | 12:42 |
admin0 | there is no mention of monitor_address anywhere | 12:42 |
noonedeadpunk | we use it in CI actually | 12:43 |
admin0 | the default is set to 0.0.0.0 | 12:43 |
admin0 | from the /etc/ansible/roles/ceph-ansible/roles/ceph-defaults/defaults/main.yml | 12:44 |
noonedeadpunk | admin0: do you have IP assigned on br-storage? | 12:44 |
admin0 | i do .. 172.29.244.15 | 12:44 |
noonedeadpunk | hum... what if you try running a simple task that just debugs the var hostvars[inventory_hostname]['ansible_all_ipv4_addresses'] against ceph-mon? | 12:46 |
noonedeadpunk | The thing is that while setting the fact https://github.com/ceph/ceph-ansible/blob/master/roles/ceph-facts/tasks/set_monitor_address.yml#L4 you have an undefined addr key | 12:46 |
noonedeadpunk | that means that most likely one of the vars used there is not defined | 12:47 |
admin0 | how do I debug that hostvar ? | 12:47 |
admin0 | i am not so great in debugging ansible | 12:47 |
noonedeadpunk | so it's either hostvars[item]['ansible_all_ipv4_addresses'] or hostvars[item]['monitor_address_block']. Since I think that monitor_address_block is defined, I have doubts regarding the first one | 12:47 |
admin0 | big output .. finding the right key | 12:49 |
noonedeadpunk | you can try just running that playbook http://paste.openstack.org/show/789573/ | 12:49 |
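That paste has since expired; a playbook along the lines noonedeadpunk suggests might look like this (a sketch):

    # debug-mon-ips.yml (reconstruction of the kind of playbook linked above)
    - hosts: ceph-mon
      gather_facts: true
      tasks:
        - name: Show every IPv4 address Ansible discovered on the mon
          debug:
            var: hostvars[inventory_hostname]['ansible_all_ipv4_addresses']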
admin0 | 172.29.238.104 -- it gave me the IP of the container ceph-mon | 12:50 |
admin0 | http://paste.openstack.org/show/789574/ | 12:51 |
noonedeadpunk | hum | 12:51 |
noonedeadpunk | this looks like your monitor does not have the appropriate storage interface it should have | 12:52 |
admin0 | ceph-mon_hosts: *infrastructure_hosts | 12:52 |
admin0 | right .. it only has mgmt as extra and not storage | 12:52 |
noonedeadpunk | ok, so probably double check your provider_networks | 12:53 |
jrosser | noonedeadpunk: I think we may have discussed previously how the osa/ceph networking could be better | 12:53 |
jrosser | the default example is exactly like you are finding | 12:53 |
jrosser | with many things on mgmt which should be on storage | 12:53 |
noonedeadpunk | hm, I think we use mgmt in CI for simplicity, but I was pretty sure that in the examples we have storage network mappings for containers.... | 12:55 |
fridtjof[m] | Hi! I'm trying to migrate my openstack (rocky) environment from Ubuntu 16.04 to 18.04. First thing I attempted to do was set up a second infra host, to make migrating the db etc possible. I based this infra node on 18.04, but the last step (setup-everything.yml limited to the new host) is failing, because the repo (on the existing infra host) only has packages etc for 16.04. | 12:55 |
fridtjof[m] | Is there an easy way to extend the repo for 18.04? | 12:55 |
admin0 | noonedeadpunk, jrosser my config and user_variable http://paste.openstack.org/show/789576/ | 12:55 |
noonedeadpunk | fridtjof[m]: I think that once you have at least one ubuntu 18.04 host running, repo-build should generate stuff for 18.04 | 12:56 |
admin0 | based on that, jrosser, if i change my monitor_interface to br-mgmt and monitor_address_block to .236.0/22, it might work with the current playbooks ? | 12:57 |
noonedeadpunk | admin0: I think you've missed ceph-mon for br-storage | 12:57 |
noonedeadpunk | in terms of group_binds | 12:57 |
noonedeadpunk | actually it's probably worth binding ceph_all (or whatever group we have for all of ceph?) | 12:58 |
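What this fix amounts to is adding the ceph groups to the storage network's group_binds in openstack_user_config.yml; a sketch following the stock example layout:

    # openstack_user_config.yml provider_networks entry (sketch; interface names per the standard example)
    - network:
        container_bridge: "br-storage"
        container_type: "veth"
        container_interface: "eth2"
        ip_from_q: "storage"
        type: "raw"
        group_binds:
          - glance_api
          - cinder_api
          - cinder_volume
          - nova_compute
          - ceph-mon   # the binding that was missing here
          - ceph-osd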
admin0 | its not in this file: openstack_user_config.yml.prod-ceph.example | 12:58 |
admin0 | i see only ceph-osd listed | 12:58 |
admin0 | i thought this file passed QA, so i did not read it line by line or compare it with my rocky one | 12:58 |
noonedeadpunk | we might occasionally have errors in the examples :p | 12:58 |
admin0 | now i realize | 12:59 |
noonedeadpunk | and you can propose a change if you find an error :p | 12:59 |
admin0 | i checked the working one done using rocky .. it does have ceph-mon in the group binds | 12:59 |
jrosser | there are two Ceph networks, and I think we confuse the cluster and public networks | 12:59 |
admin0 | i will submit a patch | 12:59 |
noonedeadpunk | jrosser: oh... not sure about that actually - never deployed ceph with a cluster net on my own... | 13:00 |
noonedeadpunk | but what I do know is that we don't make use of the storage network at all in CI | 13:00 |
noonedeadpunk | it's totally broken there | 13:01 |
fridtjof[m] | noonedeadpunk: that seems kind of circular to me. That 18.04 infra host is my first 18.04 host, and repo-build does not seem to have generated anything new | 13:01 |
fridtjof[m] | ls /var/www/repo/pools/ on the repo container only gives me one dir, ubuntu-16.04-x86_64 | 13:01 |
*** dave-mccowan has joined #openstack-ansible | 13:03 | |
noonedeadpunk | fridtjof[m]: have you run setup-hosts against this new ubuntu host? | 13:03 |
noonedeadpunk | iirc repo-build relies on cached host facts | 13:04 |
admin0 | now failing at ceph-mon : waiting for the monitor(s) to form the quorum... | 13:06 |
admin0 | 'monitor_name' is undefined | 13:07 |
noonedeadpunk | hm, it should be set by the ceph-ansible ceph-facts role here https://github.com/ceph/ceph-ansible/blob/master/roles/ceph-facts/tasks/facts.yml#L27-L29 | 13:08 |
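Roughly what the linked ceph-facts task does (a paraphrase; conditions simplified, so treat the upstream file as authoritative):

    # sketch of the monitor_name fact from ceph-ansible's ceph-facts role
    - name: set_fact monitor_name ansible_hostname
      set_fact:
        monitor_name: "{{ ansible_hostname }}"
      when: not mon_use_fqdn | bool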
*** ysastri has joined #openstack-ansible | 13:09 | |
admin0 | -vvvv gave me more info : http://paste.openstack.org/show/ARDgEi51sv9RPVPgD4iP/ | 13:10 |
admin0 | line 29 has the error | 13:11 |
admin0 | this ip .. 244.51 is the ip of the ceph-mon container | 13:11 |
*** rh-jelabarre has joined #openstack-ansible | 13:12 | |
admin0 | i think i have to drop the idea of a single controller and just go to either multinode or aio | 13:24 |
*** ivve has quit IRC | 13:28 | |
admin0 | is everything that has .aio tested in CI and working ? | 13:33 |
admin0 | is there a scenario/command that will activate/install all of the things that aio can do | 13:33 |
admin0 | is this valid: export SCENARIO='aio_swift_barbican_cinder_designate_heat_magnum_masakari_murano_octavia_manila_trove' ? | 13:43 |
*** sshnaidm|afk has quit IRC | 13:50 | |
*** sshnaidm has joined #openstack-ansible | 13:50 | |
*** rgogunskiy has quit IRC | 13:52 | |
*** rgogunskiy has joined #openstack-ansible | 13:53 | |
noonedeadpunk | admin0: if you want to deploy all of these components, then yes. Actually for an aio you can run the following: ./scripts/gate-check-commit.sh aio_swift_barbican_designate_magnum_masakari_murano_octavia_manila_trove - but manila may only work on master | 13:55 |
noonedeadpunk | *"./scripts/gate-check-commit.sh aio_swift_barbican_designate_magnum_masakari_murano_octavia_manila_trove deploy source" | 13:56 |
noonedeadpunk | and you probably need to add ceph to the scenario as well | 13:58 |
*** cshen has joined #openstack-ansible | 14:05 | |
*** rgogunskiy has quit IRC | 14:05 | |
admin0 | noonedeadpunk, what does this gate-check command do ? | 14:05 |
*** sshnaidm is now known as sshnaidm|off | 14:05 | |
admin0 | its downloading and doing a bunch of stuff :D | 14:05 |
noonedeadpunk | it does the same as the CI does | 14:06 |
noonedeadpunk | actually deploys aio with provided scenario | 14:06 |
admin0 | hmm.. so no need to run the playbooks etc individually ? | 14:07 |
noonedeadpunk | no - it does everything needed from a blank host with only openstack-ansible cloned | 14:07 |
*** ysastri has quit IRC | 14:08 | |
noonedeadpunk | so making an aio takes 3 steps - 1. create a vm, 2. clone openstack-ansible, 3. run gate-check-commit.sh $SCENARIO deploy|upgrade source|distro | 14:09 |
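Those three steps as shell, roughly (a sketch; the scenario string is illustrative, not prescriptive):

    # spin up an AIO the way noonedeadpunk describes
    git clone https://opendev.org/openstack/openstack-ansible /opt/openstack-ansible
    cd /opt/openstack-ansible
    export SCENARIO='aio_ceph_swift_barbican_designate_magnum_octavia'
    ./scripts/gate-check-commit.sh "$SCENARIO" deploy source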
admin0 | i had already run setup-hosts .. does it conflict ? | 14:09 |
noonedeadpunk | hm | 14:10 |
admin0 | i will try the regular way first . if it fails .. will nuke, recreate and try this way | 14:10 |
admin0 | noonedeadpunk, using your way :) | 14:28 |
admin0 | waiting for this "magic" to finish | 14:28 |
admin0 | so question .. say this gate-check works, and i want to make little changes .. like change the haproxy ip etc , can i use the regular playbooks for that ? | 14:29 |
*** jawad_axd has quit IRC | 14:38 | |
*** jawad_axd has joined #openstack-ansible | 14:39 | |
openstackgerrit | Duncan Martin Walker proposed openstack/openstack-ansible-ops master: [WIP] Optional subdivision of elastic-logstash group https://review.opendev.org/707849 | 14:40 |
*** jawad_axd has quit IRC | 14:41 | |
admin0 | i see .. it just calls the playbooks afterwards :) | 15:02 |
*** cshen has quit IRC | 15:39 | |
openstackgerrit | Merged openstack/openstack-ansible-ops master: Allow specified http ports to be ignored by packetbeat https://review.opendev.org/707124 | 15:52 |
tacco | hey guys.. does any of you also have problems creating a stack with the os_stack ansible module? If i try to do a stack create it always fails, but in the output you see stack state "create in progress" even with wait: True | 15:57 |
tacco | but if i run it again with state present and it does an update to the stack, everything works as expected. | 15:57 |
tacco | this is just confusing.. and maybe a bug in the os_stack module right? | 15:58 |
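For context, the kind of task tacco describes might look like this (a sketch; the stack name and template path are hypothetical):

    # os_stack with wait enabled, as discussed above
    - name: Create heat stack
      os_stack:
        name: demo-stack
        state: present
        template: files/demo-stack.yml
        wait: true
        timeout: 600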
*** udesale_ has quit IRC | 16:17 | |
openstackgerrit | Dmitriy Rabotyagov (noonedeadpunk) proposed openstack/openstack-ansible stable/train: Drop virtualenv pip package for CI https://review.opendev.org/707871 | 16:18 |
openstackgerrit | Dmitriy Rabotyagov (noonedeadpunk) proposed openstack/openstack-ansible stable/train: Drop virtualenv pip package for CI https://review.opendev.org/707871 | 16:20 |
admin0 | do the containers copy the apt sources from the aio machine ? | 16:20 |
admin0 | hmm.. the aio containers cannot ping outside | 16:22 |
admin0 | somewhere, some nat rule is missing | 16:22 |
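The sort of thing to check in that situation (a sketch; 10.0.3.0/24 is the lxcbr0 default and may differ per deploy):

    # verify the host masquerades container traffic
    iptables -t nat -S POSTROUTING | grep -i masquerade
    # if the rule is missing, something along these lines restores egress:
    iptables -t nat -A POSTROUTING -s 10.0.3.0/24 ! -d 10.0.3.0/24 -j MASQUERADE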
noonedeadpunk | tacco: I'd write either to #openstack-ansible-sig or to #openstack-sdks (I think you may find there ppl who look after ansible modules as well) | 16:25 |
bjoernt | Hello, is OSA considering full multi-cell support at some point, with localized cell controllers etc? | 16:35 |
bjoernt | Right now we do it with custom env.d groups which is near impossible to manage | 16:36 |
bjoernt | So the inventory manager would have to be rewritten to support ironic or other types to allow for more flexible role-based management | 16:37 |
bjoernt | not to mention changes in roles themselves to limit services to roles | 16:37 |
fridtjof[m] | noonedeadpunk: sorry for coming back on this so late, but I have not done that yet (the docs didn't say so). I'll try that now. | 16:39 |
fridtjof[m] | okay, I ran setup-hosts against the new infrastructure host. The repo is still not updated, but i see that repo-build is not called from setup-hosts anyway. Running setup-everything now. | 16:46 |
fridtjof[m] | oh. weird, setup-everything should actually have run setup-hosts already | 16:47 |
noonedeadpunk | bjoernt: that is a pretty nice feature request but I'm not sure we have the capacity to implement it... So I'd say if someone offered a solution that would be great :p | 16:47 |
fridtjof[m] | i'll report back | 16:47 |
fridtjof[m] | oh. | 16:52 |
fridtjof[m] | https://imgur.com/a/7aHUxKW | 16:54 |
fridtjof[m] | noonedeadpunk: i think this is the root cause | 16:54 |
fridtjof[m] | my limit group only contains localhost (the deployment host) and the new infra host | 16:54 |
fridtjof[m] | should I run repo-build with all hosts? | 16:54 |
fridtjof[m] | (or maybe just all of setup-hosts?) | 16:55 |
noonedeadpunk | oh, sure you shouldn't use any limits with repo-build | 16:55 |
noonedeadpunk | actually for setup-infrastructure you also probably shouldn't limit to only the new controller | 16:56 |
noonedeadpunk | (as rabbit, galera, etc may not join cluster correctly) | 16:56 |
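Concretely, that advice comes down to something like this (a sketch; playbook names per the stock playbooks directory):

    # rebuild the repo and converge infrastructure without a narrow limit
    cd /opt/openstack-ansible/playbooks
    openstack-ansible repo-install.yml          # no --limit, so 18.04 wheels get built too
    openstack-ansible setup-infrastructure.yml  # galera/rabbit clustering tasks see all members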
fridtjof[m] | yeaaah, i just scrolled back and it completely skipped even installing memcached, galera, and rabbitmq | 16:57 |
fridtjof[m] | is it safe to also run setup-hosts without a limiter? | 17:03 |
fridtjof[m] | I think the containers on my new hosts did not get set up properly | 17:03 |
*** feichh has quit IRC | 17:04 | |
admin0 | fridtjof[m], yes | 17:11 |
admin0 | you can safely run the setup-hosts, infra and openstack anytime | 17:11 |
*** xakaitetoia has quit IRC | 17:11 | |
fridtjof[m] | great, thanks :) | 17:11 |
*** pcaruana has quit IRC | 17:19 | |
*** evrardjp has quit IRC | 17:34 | |
*** evrardjp has joined #openstack-ansible | 17:34 | |
*** errr has quit IRC | 17:55 | |
*** errr has joined #openstack-ansible | 17:56 | |
*** miloa has quit IRC | 18:13 | |
*** macz_ has joined #openstack-ansible | 18:29 | |
*** macz_ has quit IRC | 18:30 | |
fridtjof[m] | ahhhhhhhhh | 18:34 |
fridtjof[m] | galera broke | 18:34 |
fridtjof[m] | it somehow skipped over setting up galera on the new infra host, which made replication fail because the old galera couldn't reach the new one | 18:35 |
fridtjof[m] | okay, it seems that i fixed it | 18:48 |
*** andrewbonney has quit IRC | 18:55 | |
*** tosky has quit IRC | 19:36 | |
admin0 | on aio, manila fails with 'manila_default_store' is undefined .... when i do lvs in the aio, i do see that it created the manila-shares vg .. | 19:51 |
admin0 | i added manila_default_store: lvm in variables and am checking if that fixes it | 19:53 |
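For the record, the workaround as stated (a sketch; the value follows admin0's message above, not the role docs):

    # user_variables.yml
    manila_default_store: lvm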
*** hwoarang has quit IRC | 20:43 | |
*** spatel has joined #openstack-ansible | 20:51 | |
*** hwoarang has joined #openstack-ansible | 20:51 | |
*** spatel has quit IRC | 21:08 | |
*** dave-mccowan has quit IRC | 21:23 | |
*** Neurognostic has joined #openstack-ansible | 21:28 | |
*** nicolasbock has quit IRC | 21:54 | |
*** macz_ has joined #openstack-ansible | 22:09 | |
openstackgerrit | Merged openstack/openstack-ansible stable/train: Drop virtualenv pip package for CI https://review.opendev.org/707871 | 22:31 |
*** rh-jelabarre has quit IRC | 22:31 | |
*** elenalindq has quit IRC | 22:51 | |
*** mcarden has joined #openstack-ansible | 22:54 | |
*** spatel has joined #openstack-ansible | 23:58 |