*** djhankb has quit IRC | 00:01 | |
*** djhankb has joined #openstack-ansible | 00:02 | |
*** spatel has joined #openstack-ansible | 00:23 | |
*** spatel has quit IRC | 00:28 | |
openstackgerrit | Merged openstack/openstack-ansible-tests stable/train: Pin virtualenv<20 for python2 functional tests https://review.opendev.org/756883 | 00:32 |
*** maharg101 has joined #openstack-ansible | 00:33 | |
*** maharg101 has quit IRC | 00:38 | |
*** nurdie has joined #openstack-ansible | 00:41 | |
*** nurdie has quit IRC | 00:46 | |
openstackgerrit | Jimmy McCrory proposed openstack/ansible-role-python_venv_build master: Install git package on hosts building venvs https://review.opendev.org/756939 | 00:57 |
openstackgerrit | Merged openstack/openstack-ansible-os_neutron master: Remove support for lxc2 config keys https://review.opendev.org/756251 | 01:22 |
*** cshen has joined #openstack-ansible | 02:26 | |
*** cshen has quit IRC | 02:31 | |
*** nurdie has joined #openstack-ansible | 02:42 | |
*** nurdie has quit IRC | 02:47 | |
*** maharg101 has joined #openstack-ansible | 03:06 | |
*** maharg101 has quit IRC | 03:11 | |
*** macz_ has joined #openstack-ansible | 03:22 | |
*** macz_ has quit IRC | 03:26 | |
*** pmannidi has quit IRC | 04:16 | |
*** mpsairam has joined #openstack-ansible | 04:16 | |
*** cshen has joined #openstack-ansible | 04:27 | |
*** cshen has quit IRC | 04:31 | |
*** evrardjp has quit IRC | 04:33 | |
*** evrardjp has joined #openstack-ansible | 04:33 | |
*** nurdie has joined #openstack-ansible | 04:43 | |
*** nurdie has quit IRC | 04:48 | |
*** maharg101 has joined #openstack-ansible | 05:07 | |
*** maharg101 has quit IRC | 05:12 | |
*** miloa has joined #openstack-ansible | 05:40 | |
*** rpittau|afk is now known as rpittau | 05:43 | |
*** sshnaidm is now known as sshnaidm|off | 06:06 | |
*** nurdie has joined #openstack-ansible | 06:44 | |
*** nurdie has quit IRC | 06:49 | |
noonedeadpunk | finally | 06:58 |
noonedeadpunk | mornings | 06:59 |
noonedeadpunk | uh, I thought that lxc merged.... | 06:59 |
recyclehero | morning | 07:00 |
recyclehero | I have commented out the belongs_to of heat.yml in /opt/openstack-ansible/env.d | 07:01 |
recyclehero | but it still gets included when I run dynamic_inventory.py? | 07:01 |
recyclehero | why is that, and is it a good way to exclude services? | 07:02 |
noonedeadpunk | recyclehero: yeah, we don't automatically clean up the inventory. | 07:02 |
noonedeadpunk | so you need to run scripts/inventory-manage.py | 07:02 |
recyclehero | aha thank you, then I will go run destroy_lxc_container and then setup-hosts, infra and openstack | 07:03 |
recyclehero | seems good for a major reconfiguration? | 07:03 |
noonedeadpunk | um... | 07:04 |
noonedeadpunk | if you need to drop the heat containers, then you need to run destroy_lxc_container with `--limit heat_all` | 07:04 |
recyclehero | great, thanks | 07:05 |
noonedeadpunk | and after that just remove hosts from inventory with scripts/inventory-manage.py -r container_name | 07:05 |
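A minimal sketch of that sequence (run on the deploy host; lxc-containers-destroy.yml is the playbook shipped with openstack-ansible, and the container name below is a placeholder):

    cd /opt/openstack-ansible/playbooks
    # remove only the heat containers
    openstack-ansible lxc-containers-destroy.yml --limit heat_all
    # then drop the now-deleted hosts from the inventory
    cd /opt/openstack-ansible
    ./scripts/inventory-manage.py -r <heat_container_name>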
recyclehero | so what is dynamic_inventory.py | 07:06 |
recyclehero | I got it | 07:07 |
noonedeadpunk | dynamic_inventory.py generates the inventory and is passed to ansible-playbook as the inventory source | 07:07 |
*** maharg101 has joined #openstack-ansible | 07:08 | |
noonedeadpunk | but it can only add hosts, not drop them | 07:11 |
noonedeadpunk | inventory-manage.py is for viewing the inventory in a table view | 07:11 |
*** cshen has joined #openstack-ansible | 07:11 | |
*** maharg101 has quit IRC | 07:12 | |
recyclehero | noonedeadpunk: what's their relationship with openstack_inventory.json in /etc/openstack_deploy? | 07:12 |
noonedeadpunk | that's a good question :) | 07:13 |
noonedeadpunk | consider openstack_inventory.json a cache file for dynamic_inventory.py | 07:13 |
recyclehero | if it isn't there it will make one | 07:13 |
noonedeadpunk | yep | 07:14 |
noonedeadpunk | but it will use different container names | 07:14 |
noonedeadpunk | which will be really bad if you have containers already running | 07:15 |
noonedeadpunk | there's also /etc/openstack_deploy/openstack_hostnames_ips.yml which stores the hostname-to-IP mapping | 07:15 |
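A rough sketch of how those pieces relate on the deploy host (assuming the usual /opt/openstack-ansible checkout; the script's exact path can differ between branches):

    cd /opt/openstack-ansible
    # running the script by hand prints the same JSON that ansible-playbook consumes;
    # it reads and updates /etc/openstack_deploy/openstack_inventory.json as its cache
    ./inventory/dynamic_inventory.py > /tmp/current_inventory.json
    python3 -m json.tool /tmp/current_inventory.json | head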
jrosser | recyclehero: if you are trying to not deploy heat then the best thing to do is not define orchestration_hosts in openstack_user_config | 07:16 |
jrosser | adjusting env.d should not be needed for that | 07:16 |
noonedeadpunk | (it may be in conf.d as alternative to openstack_user_config) | 07:17 |
recyclehero | jrosser: I used os-infra_hosts | 07:17 |
jrosser | env.d says what is going where | 07:17 |
jrosser | openstack_user_config and conf.d say what you have / have not got | 07:18 |
jrosser | so layout vs. presence | 07:18 |
noonedeadpunk | jrosser: heat is part of os-infra_hosts I think because of that https://opendev.org/openstack/openstack-ansible/src/branch/master/inventory/env.d/heat.yml#L32 | 07:19 |
recyclehero | noonedeadpunk: exactly, that's what I have commented out | 07:19 |
noonedeadpunk | so I'd say it's worth replacing os-infra_hosts with specific groups | 07:20 |
recyclehero | jrosser: I dont know how to use conf.d yet | 07:20 |
jrosser | isn't it this (for AIO) which makes those exist though https://opendev.org/openstack/openstack-ansible/src/branch/master/etc/openstack_deploy/conf.d/heat.yml.aio | 07:21 |
noonedeadpunk | recyclehero: so, instead of editing env.d, I'd probably just define openstack_user_config that way https://opendev.org/openstack/openstack-ansible/src/branch/master/etc/openstack_deploy/openstack_user_config.yml.pod.example#L327 | 07:22 |
noonedeadpunk | instead of setting os-infra_hosts, which would include the set needed for passing refstack (I guess that was the idea behind including heat) | 07:23 |
*** MickyMan77 has quit IRC | 07:24 | |
jrosser | yeah i see we don't make this totally clear | 07:24 |
jrosser | os-infra_hosts is a group including a bunch of stuff | 07:25 |
noonedeadpunk | yep | 07:25 |
jrosser | but nothing stops you individually defining the hosts for each service | 07:25 |
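A hedged sketch of that, assuming the usual /etc/openstack_deploy layout; the host name, IP and the particular groups below are placeholders, and only a couple of groups are shown:

    cat >> /etc/openstack_deploy/openstack_user_config.yml <<'EOF'
    # list each service group you actually want instead of the catch-all
    # os-infra_hosts, and simply omit orchestration_hosts (heat)
    identity_hosts:
      infra1:
        ip: 172.29.236.11
    image_hosts:
      infra1:
        ip: 172.29.236.11
    EOF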
*** maharg101 has joined #openstack-ansible | 07:33 | |
recyclehero | I destroyed all lxc containers last night and let setup hosts/infra/openstack run. now I am getting connection refused all over from rabbitmq | 07:41 |
recyclehero | why could this be? | 07:41 |
recyclehero | i wanted to do some tuning and also remove heat like I said | 07:44 |
recyclehero | I expected not to see the neutron-metering agent, but I am seeing it in between other connection refused errors | 07:45 |
recyclehero | http://paste.openstack.org/show/798892/ | 07:45 |
*** tosky has joined #openstack-ansible | 07:46 | |
gillesMo | Hello, I already asked without success, but has anyone used ceph-rgw-install.yml in Stein? Ansible refuses the always_run option | 07:52 |
*** gillesMo has left #openstack-ansible | 07:54 | |
*** gillesMo has joined #openstack-ansible | 07:54 | |
noonedeadpunk | gillesMo: I think we had it in CI... | 08:31 |
noonedeadpunk | gillesMo: but I can't find where always_run is used there | 08:35 |
noonedeadpunk | we don't have `always_run` in the playbook itself, and the ceph-ansible version we use doesn't have it either | 08:36 |
noonedeadpunk | (on stable-3.2 branch which is used for stein) | 08:36 |
ebbex | I'm getting this from cinder-volume; Error starting thread.: cinder.exception.ClusterNotFound: Cluster {'name': 'ceph@RBD'} could not be found. | 08:43 |
*** nurdie has joined #openstack-ansible | 08:45 | |
ebbex | is this a rabbitmq queue thing gone missing? | 08:46 |
recyclehero | I am doing an evaluation so I don't f... up in production. my openstack_inventory.json and openstack_hostnames_ips.yml are completely out of sync with what's there. what can I do to delete the containers? | 08:48 |
*** nurdie has quit IRC | 08:50 | |
gillesMo | noonedeadpunk: Have the roles been changed? Perhaps I have old ones in /etc/ansible/roles, I'll check | 08:50 |
noonedeadpunk | gillesMo: you specifically need the ceph-ansible repo to be at origin/stable-3.2 | 08:51 |
noonedeadpunk | recyclehero: well, if you've dropped openstack_inventory.json before dropping the containers, you can now only remove them manually | 08:52 |
noonedeadpunk | as ansible does not know anything about current containers | 08:52 |
recyclehero | that's what I thought. what about the IPs? what will happen to them? | 08:53 |
noonedeadpunk | ebbex: I can recall facing this but can't recall how I've fixed it... | 08:53 |
ebbex | noonedeadpunk: https://bugs.launchpad.net/openstack-ansible/+bug/1877421 | 08:53 |
openstack | Launchpad bug 1877421 in openstack-ansible "Cinder-volume is not able to recognize a ceph cluster on OpenStack Train." [Undecided,New] | 08:53 |
ebbex | I can't remember having deployed stable/train directly before, always upgraded from a previous release. | 08:54 |
noonedeadpunk | ebbex: well I deployed | 08:55 |
noonedeadpunk | Let me check what I have in vars | 08:56 |
ebbex | so never encountered this before, the others on this bug-report claim their config worked on stein. | 08:56 |
noonedeadpunk | well, I think the point here is that backend name != 'ceph' | 08:57 |
noonedeadpunk | as 'ceph' is kind of reserved name for cluster | 08:57 |
noonedeadpunk | but according to bug it worked as well | 08:58 |
noonedeadpunk | ebbex: can you share your cinder config? | 09:00 |
gillesMo | noonedeadpunk: For info, I'm not deploying ceph, just using it. I only have the client config (mon IPs, rgw config, a cinder override to use EC pools with specific user defaults) | 09:01 |
ebbex | noonedeadpunk: http://paste.openstack.org/show/0bxC0x8n5GcoiajxFnW4/ | 09:02 |
gillesMo | Do I need to remove all the ceph roles I have in /etc/ansible/roles and define the ceph-ansible path (where)? | 09:03 |
gillesMo | I haven't seen anything about that in the (Stein) release notes | 09:04 |
noonedeadpunk | gillesMo: you should go to /etc/ansible/roles/ceph-ansible and try to use git pull or git reset origin/stable-3.2 --hard | 09:04 |
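The same thing as a short command sequence (paths as discussed above):

    cd /etc/ansible/roles/ceph-ansible
    git fetch origin
    git reset --hard origin/stable-3.2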
gillesMo | noonedeadpunk: I also have several roles like ceph-defaults, ceph-osd, ceph-mons, which are certainly older versions of what is now under ceph-ansible/roles? | 09:07 |
noonedeadpunk | ebbex: hm, I have a really very similar one and it is working.... http://paste.openstack.org/show/798900/ | 09:09 |
noonedeadpunk | oh, well | 09:09 |
noonedeadpunk | ebbex: try adding `backend_host = rbd` to each backend | 09:09 |
noonedeadpunk | or did the pasted one work for you? | 09:20 |
openstackgerrit | Dmitriy Rabotyagov (noonedeadpunk) proposed openstack/openstack-ansible master: [doc] Define backend_host for Ceph backends https://review.opendev.org/757043 | 09:32 |
ebbex | noonedeadpunk: adding `backend_host = rbd` seems to have worked. | 09:39 |
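A sketch of the workaround that was tried; it assumes the backends are defined through cinder_backends (whether in user_variables.yml or in the storage_hosts container_vars), the backend name and pool are placeholders, and the extra key should render as backend_host = rbd in cinder.conf. Note the later caveat in this log that backend_host interacts badly with the cluster option:

    cat >> /etc/openstack_deploy/user_variables.yml <<'EOF'
    cinder_backends:
      RBD:
        volume_driver: cinder.volume.drivers.rbd.RBDDriver
        rbd_pool: volumes
        rbd_ceph_conf: /etc/ceph/ceph.conf
        volume_backend_name: RBD
        # workaround for the ClusterNotFound error discussed above
        backend_host: rbd
    EOF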
openstackgerrit | Dmitriy Rabotyagov (noonedeadpunk) proposed openstack/openstack-ansible-os_barbican master: Cleanup stop handler https://review.opendev.org/756689 | 09:39 |
ebbex | Which happens to be the last change committed to os_cinder on train, https://review.opendev.org/#/c/672078/ | 09:42 |
recyclehero | Any changes to the containers must also be reflected in the deployment’s load balancer. | 09:43 |
recyclehero | https://docs.openstack.org/openstack-ansible/latest/reference/inventory/manage-inventory.html | 09:43 |
recyclehero | but how? | 09:43 |
noonedeadpunk | well, you will need to run haproxy-install.yml playbook | 09:43 |
recyclehero | or setup-openstack.yml | 09:44 |
recyclehero | I will now try to delete the containers manually and also delete the inventory json and ips yaml, then try to redeploy | 09:45 |
noonedeadpunk | ebbex: hm, I can recall such discussion... | 09:45 |
ebbex | noonedeadpunk: I wonder how that discussion went ;) | 09:49 |
noonedeadpunk | ebbex: here you are http://eavesdrop.openstack.org/irclogs/%23openstack-cinder/%23openstack-cinder.2019-07-22.log.html#t2019-07-22T14:03:11 | 09:51 |
ebbex | Adding backend_host=rbd to the config on just 1 node fixed the issue with cinder-volume on two other hosts... | 09:53 |
noonedeadpunk | hm.... | 09:53 |
noonedeadpunk | https://docs.openstack.org/cinder/latest/contributor/high_availability.html#cinder-volume | 09:54 |
noonedeadpunk | `The name of the cluster must be unique and cannot match any of the host or backend_host values. Non unique values will generate duplicated names for message queues.` | 09:55 |
noonedeadpunk | well, at least I recalled that the cluster name and backend name can't be the same | 09:56 |
jrosser | i seem to remember someone else having a very similar thing possibly with the name 'rbd' | 09:58 |
jrosser | and it all started working when changing the name | 09:58 |
jrosser | this may be something we could add an assert for in the cinder role | 09:59 |
noonedeadpunk | ebbex had `RBD`, and in the bug report ppl claimed that changing to `RBD` worked for them | 10:01 |
noonedeadpunk | so could the fix just be to rename the backend lol? :) | 10:02 |
jrosser | we'd have to look back in eavesdrop but we had an OSA user having terrible difficulty with this | 10:02 |
noonedeadpunk | well, I read it through, and patch seems valid | 10:06 |
noonedeadpunk | would be interesting to reproduce that | 10:08 |
gillesMo | [OSA pb with ceph playbook] I removed all ceph roles in /etc/ansible/roles and relaunched the bootstrap-ansible script. There are now only the ceph_client and ceph-ansible roles. That was the problem. The bootstrap script or the release notes should say that we must remove the old ceph roles | 10:16 |
recyclehero | When I deleted the containers, obviously haproxy started complaining there are no backends and broadcasted it. | 10:17 |
recyclehero | i found it annoying and stopped it | 10:17 |
recyclehero | running setup-infrastructure doesn't start the service? | 10:17 |
ebbex | noonedeadpunk: thanks for the read :) | 10:24 |
noonedeadpunk | well, it seems that setting backend_host was a bad call after all | 10:25 |
recyclehero | (item.service.state is defined and item.service.state == 'absent') | 10:27 |
recyclehero | in haproxy service config | 10:27 |
ebbex | so, the weird thing is i've since removed the line, restarted the services, and everything seems fine. | 10:27 |
noonedeadpunk | um | 10:28 |
recyclehero | what's item.service.state? | 10:28 |
noonedeadpunk | ebbex: any chance you can drop volume containers along with cinder db and re-deploy them?:) | 10:28 |
noonedeadpunk | it would just really be great to understand how this should be fixed for real... | 10:29 |
noonedeadpunk | but whatever, I think I can spawn aio with multiple cinder containers on it | 10:29 |
ebbex | so something might be created during the active/passive `backend_host` thing that's not getting created with the active/active `cluster` option. | 10:29 |
noonedeadpunk | it looks like this yes | 10:30 |
ebbex | yeah, i can try a redeploy in an hour or two. | 10:30 |
noonedeadpunk | maybe it's really better to spawn aio and not to play in your deployment | 10:45 |
*** nurdie has joined #openstack-ansible | 10:47 | |
*** nurdie has quit IRC | 10:51 | |
*** mensis has joined #openstack-ansible | 10:58 | |
*** ioni has quit IRC | 11:02 | |
*** fridtjof[m] has quit IRC | 11:02 | |
*** csmart has quit IRC | 11:02 | |
*** masterpe has quit IRC | 11:02 | |
*** mensis21 has joined #openstack-ansible | 11:09 | |
*** mensis21 has quit IRC | 11:10 | |
*** mensis2 has joined #openstack-ansible | 11:10 | |
ebbex | noonedeadpunk: my deployment is for fun/testing features, it gets redeployed once it's up and running. Which it finally is :) | 11:11 |
*** ioni has joined #openstack-ansible | 11:11 | |
ebbex | Just gonna add some stuff, and try upgrade to ussuri. | 11:11 |
*** mensis has quit IRC | 11:12 | |
*** lkoranda has joined #openstack-ansible | 11:14 | |
*** watersj has joined #openstack-ansible | 11:21 | |
*** jamesdenton has quit IRC | 11:34 | |
*** fridtjof[m] has joined #openstack-ansible | 11:37 | |
*** masterpe has joined #openstack-ansible | 11:37 | |
*** csmart has joined #openstack-ansible | 11:37 | |
*** jamesdenton has joined #openstack-ansible | 11:40 | |
*** jbadiapa has joined #openstack-ansible | 11:45 | |
*** sum12 has quit IRC | 11:57 | |
*** lkoranda has quit IRC | 11:58 | |
*** lkoranda has joined #openstack-ansible | 12:01 | |
*** cshen has quit IRC | 12:13 | |
*** itsjg has quit IRC | 12:18 | |
*** nurdie has joined #openstack-ansible | 12:48 | |
*** nurdie has quit IRC | 12:52 | |
openstackgerrit | Merged openstack/openstack-ansible master: Convert lxc2 config keys to lxc3 format https://review.opendev.org/756244 | 12:55 |
*** nurdie has joined #openstack-ansible | 13:00 | |
*** rpittau is now known as rpittau|afk | 13:03 | |
*** nurdie has quit IRC | 13:15 | |
*** mmethot_ is now known as mmethot | 13:15 | |
*** cshen has joined #openstack-ansible | 13:21 | |
noonedeadpunk | ebbex: if you're somewhere around, it would be great to have a vote on this https://review.opendev.org/#/c/755497 | 13:26 |
openstackgerrit | Dmitriy Rabotyagov (noonedeadpunk) proposed openstack/openstack-ansible master: Remove glance_registry from inventory https://review.opendev.org/756318 | 13:32 |
openstackgerrit | Merged openstack/openstack-ansible-lxc_hosts stable/train: Updated from OpenStack Ansible Tests https://review.opendev.org/690342 | 13:48 |
ebbex | This letsencrypt part of haproxy: how bad is it (with regard to the limits letsencrypt imposes) when you're ordering a certificate for the same domain on each haproxy node? | 13:56 |
noonedeadpunk | well, we don't ask for each) | 13:56 |
noonedeadpunk | we ask for the first one and then distribute among others iirc | 13:56 |
ebbex | That's what I would expect as well, but my deployment creates an account and fires up certbot on each node with a request/response. Have I missed a patch? | 13:59 |
ebbex | Error creating new order :: too many certificates already issued for exact set of domains. | 13:59 |
noonedeadpunk | well, it's probably for renewal? | 14:00 |
ebbex | https://github.com/openstack/openstack-ansible-haproxy_server/blob/master/tasks/main.yml#L46-L50 | 14:00 |
ebbex | That doesn't look like a run-once-then-distribute-to-the-others setup... | 14:01 |
ebbex | Nor does anything in https://github.com/openstack/openstack-ansible-haproxy_server/blob/master/tasks/haproxy_ssl_letsencrypt.yml | 14:03 |
noonedeadpunk | hm yes | 14:03 |
jrosser | if you're just messing then you should use --staging | 14:03 |
ebbex | Should I perhaps take a stab at implementing it? | 14:03 |
jrosser | then there is no rate limit | 14:03 |
jrosser | effectively | 14:03 |
jrosser | the idea was to not do any distribution because it is hard | 14:04 |
noonedeadpunk | `There is no certificate distribution implementation at this time, so this will only work for a single haproxy-server environment. ` | 14:04 |
jrosser | hold on | 14:04 |
jrosser | previously there was a patch for LE which worked only non-HA | 14:04 |
jrosser | what i did was make it work HA with an independent certbot on each haproxy | 14:05 |
jrosser | it would be possible to improve it so that there is only one account used across all of them | 14:05 |
jrosser | but i don't think that changes the issuance limits at all | 14:05 |
openstackgerrit | Dmitriy Rabotyagov (noonedeadpunk) proposed openstack/openstack-ansible master: Simplify path for letsencrypt usage https://review.opendev.org/751327 | 14:06 |
ebbex | Well, I actually like the security-aspect of the keys not leaving the server. So having multiple made sense. | 14:06 |
ebbex | Distribution could be something a little more complicated, but along the lines of this; https://github.com/eb4x/openstack-playbooks/blob/master/openstack-ansible/deployment-key.yml | 14:06 |
jrosser | for account keys? | 14:07 |
noonedeadpunk | ebbex: I mixed that up with self-signed yeah https://opendev.org/openstack/openstack-ansible-haproxy_server/src/branch/master/tasks/haproxy_ssl_self_signed.yml | 14:07 |
ebbex | jrosser: Yeah, account keys, and the key used in the csr. | 14:08 |
jrosser | we'd need a way of initialising it, which i guess could be done on the first haproxy | 14:09 |
jrosser | then the relevant keys recovered to the deploy host and moved across to the remaining ones | 14:09 |
jrosser | the only benefit from a consistent account seemed to be if you were a really heavy user you could ask for the issuance limit to be increased for a specific account | 14:10 |
ebbex | letsencrypt seems to have a limit of 5 requests for a cert for the same domain, and it went fine the first time around: 3 accounts generated, 3 separate challenge/responses issued, and haproxy registers when a backend goes up and forwards correctly. | 14:10 |
ebbex | but if it's not this way by design, I'll be willing to look into the distribution of accounts/keys/certs. | 14:12 |
jrosser | i guess you are running into LE limiting to 5 duplicate certs per week for the same exact domain | 14:13 |
jrosser | which for a production deployment will be fine | 14:14 |
jrosser | for any testing i'd really be using this https://letsencrypt.org/docs/staging-environment/ | 14:14 |
jrosser | as a workaround for now i think you may be able to add an additional -d <another-domain> to the certbot command and you get another 5 issuances | 14:15 |
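For example (a sketch with placeholder domains; --staging issues test certificates that do not count against the production rate limits):

    certbot certonly --standalone --staging \
      -d openstack.example.com -d haproxy1.example.com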
ebbex | jrosser: hehe, noice :) | 14:15 |
ebbex | Yeah, cause I kinda like it the way it is, just dumb of me to request new ones on a fresh deploy. | 14:17 |
ebbex | I'll look into using staging. | 14:17 |
jrosser | ebbex: i wonder if we had -d <openstack.example.com> and -d <haproxy<N>.example.com> as a SAN, then you'd be requesting unique certs per haproxy | 14:17 |
jrosser | that does maybe assume DNS records and an IP per haproxy on the public side though | 14:18 |
ebbex | it probably just wants the dns entries to be there; pointing all <haproxy<N>.example.com> at the same external_lb_vip might work fine as well. | 14:21 |
jrosser | the other thing to be mindful of is every cert you issue with certbot gets recorded in the certificate transparency logs | 14:21 |
jrosser | that may be something you do or don't care about being public | 14:21 |
*** nurdie has joined #openstack-ansible | 14:22 | |
jrosser | the LE staging endpoint keeps your dev/test environment activity out of those logs | 14:23 |
openstackgerrit | Merged openstack/openstack-ansible master: Use nodepool epel mirror in CI for systemd-networkd package https://review.opendev.org/754706 | 14:32 |
*** d34dh0r53 has quit IRC | 15:07 | |
*** d34dh0r53 has joined #openstack-ansible | 15:20 | |
*** macz_ has joined #openstack-ansible | 15:41 | |
*** d34dh0r53 has quit IRC | 15:42 | |
*** maharg101 has quit IRC | 15:50 | |
*** d34dh0r53 has joined #openstack-ansible | 16:01 | |
*** d34dh0r53 has quit IRC | 16:07 | |
*** d34dh0r53 has joined #openstack-ansible | 16:07 | |
*** lkoranda has quit IRC | 16:24 | |
*** gyee has joined #openstack-ansible | 16:32 | |
*** tosky has quit IRC | 16:35 | |
*** maharg101 has joined #openstack-ansible | 16:38 | |
recyclehero | i am getting this log at a crazy rate | 16:42 |
recyclehero | [ValueError("unsupported format character '{' (0x7b) at index 1")] _periodic_resync_helper /openstack/venvs/neutron-21.0.1/lib/python3.7/site-packages/neutron/agent/dhcp/agent.py:321 | 16:42 |
recyclehero | any ideas | 16:43 |
*** maharg101 has quit IRC | 16:45 | |
*** recyclehero has quit IRC | 16:53 | |
*** recyclehero has joined #openstack-ansible | 17:10 | |
jrosser | recyclehero: that would be here https://github.com/openstack/neutron/blob/stable/ussuri/neutron/agent/dhcp/agent.py#L320-L321 | 17:25 |
jrosser | and from the error it suggests that the network name has a '{' character in it, which suspiciously points to something being templated with a '{' in it, maybe in a neutron config file | 17:25 |
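A minimal reproduction of that exception, assuming some %-style format string ends up containing a literal '%{' sequence (for example via a templated value in a config file):

    python3 -c "print('%{name}s' % {'name': 'net1'})"
    # ValueError: unsupported format character '{' (0x7b) at index 1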
*** jbadiapa has quit IRC | 17:56 | |
recyclehero | thank you jrosser | 18:00 |
recyclehero | jrosser: I have added this to user_variables and this error came up. do you see anything wrong here? I need an extra pair of eyes | 18:05 |
recyclehero | http://paste.openstack.org/show/798892/ | 18:05 |
recyclehero | also I deleted all the containers manually and then ran setup-hosts/infra/openstack. after that, when I sign in as admin I only have the admin project and not the service project anymore | 18:08 |
*** spatel has joined #openstack-ansible | 18:19 | |
spatel | jrosser: hey | 18:20 |
spatel | Do you have br-host and br-mgmt interface both in your cloud ? | 18:20 |
spatel | or just br-mgmt (routed?) | 18:21 |
jrosser | just br-mgmt, routed | 18:21 |
jrosser | but also each host has a 1G oob interface | 18:21 |
jrosser | for ssh and whatever | 18:21 |
spatel | currently i have both br-host and br-mgmt, and br-host is routed | 18:22 |
spatel | problem is i can't ssh or connect directly to br-mgmt | 18:22 |
jrosser | do you need to? i can't do that either btw | 18:22 |
spatel | i have to ssh on br-host to get access to the br-mgmt network | 18:22 |
spatel | I am building a new cloud so I'm thinking of removing br-host and just staying on br-mgmt (routed) | 18:23 |
jrosser | i do have web reverse proxy from mgmt net to 'internal' network for visiting ceph dashboard and things | 18:23 |
spatel | i have 10G+10G bonded interface so enough speed | 18:23 |
spatel | i like the br-mgmt-only idea (fewer interfaces, so less complexity) | 18:24 |
jrosser | so long as you can pxeboot sensibly - i have some hosts with only 10G+10G bond and that was tricky | 18:25 |
spatel | why tricky? | 18:25 |
jrosser | need careful config on the switch to make it fall back out of lacp mode correctly when the bond isn't up (i.e. during boot) | 18:25 |
spatel | i do native VLAN for pxe traffic | 18:26 |
spatel | I do pxe boot and then i have a custom bash script which takes the compute number (e.g. setup-compute.sh 123) | 18:27 |
jrosser | see 'no lacp suspend-individual' | 18:27 |
spatel | that script sets the hostname to compute-123 and uses the same compute number to set an IP address like 10.71.0.123 | 18:27 |
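A hypothetical sketch of such a script; the hostname pattern and 10.71.0.x numbering come from the description above, while the interface name, netmask and lack of persistence are assumptions for illustration:

    #!/bin/bash
    # usage: setup-compute.sh <number>   e.g. setup-compute.sh 123
    set -e
    num="$1"
    hostnamectl set-hostname "compute-${num}"
    # assumes the management network is 10.71.0.0/24 on br-mgmt;
    # a real script would also write persistent network config
    ip addr add "10.71.0.${num}/24" dev br-mgmt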
spatel | hmm | 18:29 |
spatel | we have an HP7000 blade center and those switches don't support vPC, so i am running active-standby on my bond0 | 18:30 |
spatel | HP7000 blade switches connected to my cisco nexus vPC in spine-leaf | 18:30 |
*** miloa has quit IRC | 18:31 | |
spatel | jrosser: quick question: if you are using only the br-mgmt interface, then what do you set as the haproxy external and internal IPs? | 18:31 |
*** tosky has joined #openstack-ansible | 18:32 | |
*** maharg101 has joined #openstack-ansible | 18:43 | |
*** maharg101 has quit IRC | 18:47 | |
recyclehero | Unable to sync network state on deleted network 1291c921-d97e-4444-bf11-c277d6243ec9 | 18:49 |
recyclehero | I deleted every container and redeployed. I don't know how it found a deleted network. I would say rabbitmq, but its container was deleted too | 18:49 |
recyclehero | I want to start clean without reinstalling the host OS. | 18:50 |
recyclehero | neutron is on metal, aha | 18:51 |
recyclehero | does it store state anywhere else besides galera, which I have also deleted? | 18:51 |
ebbex | noonedeadpunk: So I couldn't reproduce the cinder-volume issue on a new deployment :/ | 18:53 |
*** cshen has quit IRC | 19:01 | |
*** cshen has joined #openstack-ansible | 19:04 | |
*** SecOpsNinja has joined #openstack-ansible | 19:45 | |
openstackgerrit | Jonathan Rosser proposed openstack/openstack-ansible-os_keystone master: Do not manage /etc/ssl or /etc/pki directories or symlinks https://review.opendev.org/754092 | 19:59 |
spatel | jrosser: Do you guys have multiple openstacks or a single cloud? | 20:01 |
jrosser | hello :) | 20:02 |
jrosser | i have two production clouds right now | 20:02 |
spatel | I am looking for a multi-site openstack solution (not sure how to sync keystone) | 20:02 |
spatel | jrosser: how do you sync keystone, or are you running keystone in federation? | 20:02 |
jrosser | i went with 'shared nothing' | 20:02 |
spatel | oh! | 20:03 |
jrosser | and federation to external identity provider, SSO | 20:03 |
spatel | i am looking for a unified openstack cloud where we can just pick a region to deploy vms | 20:04 |
spatel | currently i have isolated clouds so i have to create the foo user everywhere | 20:04 |
jrosser | my use case is for engineering / r&d users | 20:05 |
jrosser | company SSO provides auth to keystone and keystone mappings auto-create a project for each user when they first log in | 20:06 |
jrosser | then I have a git repo and ansible for shared projects and groups/quotas | 20:06 |
jrosser | but there are many many ways to do this | 20:07 |
spatel | hmm | 20:07 |
jrosser | in some of them you can share the keystone if you want, depends what you need to achieve really | 20:07 |
spatel | I want to sync keystone between two geo datacenters so my projects/roles/users all get replicated | 20:08 |
spatel | I may need to look into that side.. i will run some test in lab to see how it goes | 20:09 |
jrosser | maybe start here? https://www.slideshare.net/Colleen_Murphy/bridging-clouds-with-keystone-to-keystone-federation-143258350 | 20:10 |
jrosser | like i say lots and lots of choices here | 20:10 |
spatel | looking promising, i will try that in lab | 20:12 |
spatel | thank you jrosser | 20:15 |
jrosser | no worries - AIO is good for testing some of this federation stuff, you don't need a big deploy for that | 20:16 |
SecOpsNinja | hi. how can we enable automatic backups of mariadb? | 20:18 |
SecOpsNinja | when i tried to add more infra nodes to my galera cluster (which only had 1 node) it didn't start normally because it couldn't connect to the other nodes. now i was able to start it using the recover option and mariadbd-safe --wsrep_cluster_address=gcomm:// but now i would like to make a backup before trying to add the next host with the ansible playbooks | 20:19 |
SecOpsNinja | but i can't find the correct information regarding backing up the galera db | 20:20 |
*** SmearedBeard has joined #openstack-ansible | 20:26 | |
spatel | SecOpsNinja: it should be the standard method, mysqldump -- | 20:28 |
spatel | i don't think galera has any special method to take a backup, just dump the full database including users/passwords/permissions etc.. | 20:29 |
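A minimal sketch of that, assuming it is run on one of the galera/mariadb nodes with root socket auth in place; the output path is illustrative (mariabackup is an alternative for physical backups):

    mysqldump --all-databases --single-transaction \
              --routines --events > /root/galera-backup-$(date +%F).sql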
SecOpsNinja | spatel, ok i will check. the problem is that i don't understand what i did wrong, because i was following this information https://docs.openstack.org/openstack-ansible/latest/admin/scale-environment.html to add new infra nodes, and now my first node has the information regarding the other 3 nodes, but in the other galera containers mariadb isn't installed .... | 20:30 |
SecOpsNinja | i'm trying to see what is the best way to recover the mariadb data if i need to recreate all the galera containers | 20:31 |
spatel | did you see this - https://docs.openstack.org/openstack-ansible/newton/developer-docs/ops-galera-recovery.html | 20:33 |
spatel | i would say take a full backup first and also restore it somewhere just to verify it's working (if this is the first installation then there's nothing to worry about) | 20:34 |
SecOpsNinja | i was checking this ... https://docs.openstack.org/openstack-ansible/latest/admin/maintenance-tasks.html#galera-cluster-recovery | 20:34 |
SecOpsNinja | but it only talks about recovery and doesn't say how... i will check that link | 20:35 |
spatel | you are saying mysql isn't getting installed on the containers, right? | 20:37 |
SecOpsNinja | on the second and third, no | 20:37 |
spatel | you didn't get any error during playbook run? | 20:38 |
SecOpsNinja | because openstack-ansible crashed when it tried to start mariadb on the 1st node; it failed because it couldn't connect to any other nodes and, from what i understand, it didn't have quorum to start | 20:38 |
SecOpsNinja | so i used this "mariadbd --wsrep-recover --skip-networking" and "mariadbd-safe --wsrep_cluster_address=gcomm://" to be able to start it in the first node | 20:38 |
spatel | you need to bootstrap galera to get everything running | 20:38 |
spatel | the playbook takes care of everything, you don't need to do anything manually | 20:39 |
SecOpsNinja | yep, but my problem is whether, with mariadbd-safe running, the playbook will continue to run and configure the other nodes correctly | 20:40 |
spatel | sometimes i have seen an issue with /var/lib/mysql/grastate.dat (i mostly move this file out) | 20:40 |
spatel | first you need to find out why the playbook is not installing mysql on nodes 2 and 3 | 20:41 |
spatel | i would start from there before troubleshooting node-1 | 20:41 |
SecOpsNinja | yep, but this https://docs.openstack.org/openstack-ansible/latest/admin/backup-restore.html doesn't say much about recovering the whole galera cluster if i need to recreate all the nodes | 20:42 |
SecOpsNinja | so i will try to back up mariadb and see if i can save some data | 20:42 |
jrosser | SecOpsNinja: 1 infra node to >1 infra node you have to get keepalived handling the vip | 20:43 |
jrosser | make sure that's all working properly first | 20:43 |
SecOpsNinja | yep, i already have that working | 20:43 |
*** maharg101 has joined #openstack-ansible | 20:43 | |
spatel | Have a good weekend guys! see you on Monday | 20:44 |
recyclehero | is there a way to uninstall neutron? | 20:44 |
SecOpsNinja | spatel, thanks for the help :) | 20:44 |
*** spatel has quit IRC | 20:45 | |
*** maharg101 has quit IRC | 20:48 | |
*** mensis2 has quit IRC | 21:04 | |
*** SmearedBeard has quit IRC | 21:06 | |
*** nurdie has quit IRC | 21:31 | |
*** SecOpsNinja has left #openstack-ansible | 21:43 | |
*** nurdie has joined #openstack-ansible | 22:44 | |
*** nurdie has quit IRC | 22:50 | |
*** macz_ has quit IRC | 23:06 | |
*** tosky has quit IRC | 23:26 |